AWS for Industries

How Angel One Built a Post-Trade Reporting Platform on AWS

Modernizing legacy financial systems for regulatory compliance requires careful architectural planning and proven cloud solutions. This post shows you how Angel One, trusted by over 30 million clients and among India’s leading retail full-service brokers, transformed a fragile, stored-procedure-driven back office into a cloud-native, auditable system on AWS. You’ll learn the architecture, challenges, and results of building a post-trade reporting platform that processes 30 million records daily while meeting the Securities and Exchange Board of India (SEBI) mandate for enhanced regulatory reporting, daily monitoring, and client-level collateral segregation. This transformation demonstrates how AWS analytics services can modernize critical financial infrastructure while ensuring regulatory compliance.

Key challenges

Legacy database challenges – Angel One faced operational challenges with their legacy post-trade reporting system. The existing infrastructure was built on an MSSQL database that had evolved over years, resulting in a complex maze of hundreds of T-SQL stored procedures whose original business logic had become increasingly difficult to decipher. This opacity was compounded by a fragile scheduling system that relied on SQL Agent timers, where a slight delay in one job could trigger a cascade of failures in subsequent processes. The platform’s brittleness was most evident during critical overnight processing windows, where a 2:10 AM job running overtime could disrupt the entire chain of scheduled tasks, leading to frequent weekend war rooms and manual interventions to keep operations running.

Compliance-related challenges – When SEBI introduced new regulations requiring detailed client-level collateral segregation and daily reporting, the legacy system’s limitations were exposed. The team struggled with limited replay capabilities for handling late data feeds, which forced manual edits and extensive weekend troubleshooting sessions. The lack of clear data lineage made audit processes difficult to maintain, with teams often spending more time explaining how they arrived at results than producing the reports. The system’s reliance on CSV files stored on shared drives created implicit dependencies that required constant monitoring, making it increasingly difficult to maintain the accuracy and timeliness demanded by regulators. These challenges were further exacerbated by the system’s inability to scale with growing transaction volumes and the increasing complexity of regulatory reporting requirements, pushing the organization to recognize the urgent need for a comprehensive modernization effort.

Requirements for the re-imagined back office

Daily client-level collateral reporting – SEBI’s regulatory mandate fundamentally changed collateral management in India’s securities market. The regulation stemmed from a critical recognition: brokers commingling collateral from different clients had created a transparency gap and increased systemic risk in the market. Prior to this mandate, the lack of clear segregation meant that if a broker faced financial difficulties, individual client assets could be difficult to identify and protect. SEBI addressed this vulnerability by requiring brokers to implement precise daily client-level reporting of collateral, thereby enhancing investor protection and market transparency.

Segregation of assets – The regulatory requirements demanded granular, client-level data. For each trading day, brokers needed to maintain detailed records of how each client’s collateral – whether cash or securities – was being utilized. This meant tracking multiple forms of collateral, including cash deposits, shares, mutual funds, and other securities, along with their specific allocations to various trading activities. The solution needed to demonstrate exactly how each rupee of cash or each unit of security was attributed to individual clients, maintaining clear separation between different clients’ assets and the broker’s own funds.

The complexity of the requirement is best illustrated through practical examples. Consider a scenario where a client maintains ₹30,000 in cash and ₹70,000 in securities as collateral in their trading account. If this client initiates a derivatives trade requiring ₹90,000 in margin, the solution must determine the appropriate mix of collateral based on clearing corporation rules. These rules often specify that a certain portion must be maintained in cash. The solution needs to calculate and report how the client’s ₹30,000 cash is utilized, how much of their securities (say ₹30,000 worth) is allocated, and in cases where the client’s cash-to-securities ratio doesn’t meet requirements, track how the broker’s own funds (in this case, ₹30,000) are temporarily used to bridge the gap.
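
To make the arithmetic concrete, here is a minimal Python sketch of this allocation logic, assuming one hypothetical clearing-corporation rule: non-cash collateral is counted only up to the value of the client’s own cash collateral, and the broker’s funds bridge any remaining gap. The function and its parameters are illustrative, not Angel One’s actual implementation; real clearing-corporation rules and ratios vary.

```python
def allocate_margin(margin, client_cash, client_securities):
    """Illustrative collateral allocation for a single trade.

    Hypothetical rule: non-cash collateral is accepted only up to the
    value of the client's own cash collateral, and the broker's own
    funds bridge any remaining shortfall. Real clearing-corporation
    rules and ratios differ.
    """
    cash_used = min(client_cash, margin)
    securities_used = min(client_securities, cash_used, margin - cash_used)
    broker_funds = margin - cash_used - securities_used
    return {
        "client_cash": cash_used,
        "client_securities": securities_used,
        "broker_funds": broker_funds,
    }

# The example from the text: ₹30,000 cash and ₹70,000 securities
# against a ₹90,000 margin requirement.
print(allocate_margin(90_000, 30_000, 70_000))
# {'client_cash': 30000, 'client_securities': 30000, 'broker_funds': 30000}
```

Under these assumptions, the client’s ₹30,000 cash and ₹30,000 of securities are consumed, and the broker temporarily bridges the remaining ₹30,000, matching the scenario above.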

Comprehensive, timely auditability – Beyond tracking these allocations, the requirements demanded complete auditability and reproducibility of calculations. Each reported figure needed to be traceable back to its source, with clear documentation of any transformations or adjustments applied along the way. The solution had to maintain this accuracy and transparency while processing millions of records daily, with strict timelines for report submission to regulatory authorities. Additionally, the platform needed to be flexible enough to accommodate future regulatory changes while maintaining historical accuracy for audit purposes. This combination of granular tracking, real-time processing, and complete auditability created a complex set of requirements that pushed the boundaries of traditional reporting systems.

Solution approach

Cloud-native serverless components – Angel One’s modernization approach centered on moving from a rigid, monolithic on-premises system to a cloud-native architecture built on AWS serverless components. The fundamental shift in thinking was to break away from the traditional batch-processing mindset, where operations were tied to specific time windows, and instead embrace an event-driven, modular architecture that could process data continuously and recover gracefully from failures. This architectural transformation was guided by several key principles: ensuring recoverability with 15-minute checkpoint restoration capabilities, achieving end-to-end processing within a one-hour window, maintaining clear audit trails, and enabling small team operations through simplified maintenance.

The team adopted a two-pronged strategy to manage the transition. First, they leveraged AWS’s managed services to handle infrastructure complexities, allowing the development team to focus on business logic and regulatory compliance. AWS Step Functions became the orchestration backbone, providing visual monitoring and state management capabilities that were sorely missing in the legacy system. AWS Glue was chosen as the primary data processing engine, offering the scalability needed to handle tens of millions of records while maintaining processing speed and accuracy. This combination of services enabled them to break down the monolithic stored procedures into discrete, manageable components that could be tested, monitored, and modified independently.

Security and compliance considerations were woven into the architecture from the ground up, rather than being added as an afterthought. The team implemented comprehensive audit logging and monitoring through Amazon CloudWatch, ensuring that each data transformation and business decision could be traced and verified. They also built data quality checks into each processing stage, with automated alerts for anomalies. This proactive approach to data quality and compliance reduced the manual oversight required and virtually eliminated the need for weekend troubleshooting sessions that were common with the legacy system.
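
As one illustration of this pattern, a processing stage can count records that fail validation and publish the count as a custom Amazon CloudWatch metric, which an alarm can then watch for anomalies. The metric namespace, names, and example values below are hypothetical; only the boto3 put_metric_data call reflects the actual CloudWatch API.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

def emit_quality_metric(stage_name: str, failed_records: int) -> None:
    """Publish a custom data-quality metric for one processing stage.

    The namespace and dimension names are hypothetical examples; a
    CloudWatch alarm on this metric can then drive SNS notifications.
    """
    cloudwatch.put_metric_data(
        Namespace="PostTrade/DataQuality",  # hypothetical namespace
        MetricData=[{
            "MetricName": "FailedValidationRecords",
            "Dimensions": [{"Name": "Stage", "Value": stage_name}],
            "Value": float(failed_records),
            "Unit": "Count",
        }],
    )

# Example: report 42 invalid rows detected during ingestion.
emit_quality_metric("CollateralIngestion", 42)
```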

Scalability and extensibility – The modernization approach also accounted for future scalability and extensibility. While the immediate goal was to meet SEBI’s regulatory requirements, the team designed the platform to support additional use cases such as exposure analytics, reconciliations, and client-facing transparency initiatives. They implemented a flexible data model that could accommodate evolving regulatory requirements without requiring significant architectural changes. The use of infrastructure as code through Terraform, combined with automated deployment pipelines through GitHub Actions, ensured that the AWS Glue-based platform could be consistently updated and scaled while maintaining operational reliability.

Usage of AWS analytics services

AWS Glue – At the core of Angel One’s modernized platform, AWS Glue serves as the primary data processing engine, handling the complex transformation of trading and collateral data. The implementation utilizes 20 G.2X workers, providing approximately 160 vCPUs of processing power to manage the massive scale of daily operations. This configuration enables the system to process over 30 million records daily, reducing the end-to-end processing time from 4 hours to just 30 minutes. The team leveraged Glue’s native integration with Apache Spark to implement data transformations, crucial for handling complex allocation calculations and regulatory reporting requirements.
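
For readers unfamiliar with AWS Glue jobs, the skeleton below shows the standard PySpark boilerplate such a job starts from; the S3 paths and the toy transformation are placeholders, not Angel One’s actual pipeline code.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard AWS Glue PySpark job boilerplate.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Hypothetical paths and transformation: read the day's trades,
# aggregate margin per client, and write partitioned results to S3.
trades = spark.read.parquet("s3://example-bucket/trades/date=2024-01-15/")
per_client = trades.groupBy("client_id").sum("margin_required")
per_client.write.mode("overwrite").parquet(
    "s3://example-bucket/allocations/date=2024-01-15/")

job.commit()
```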

AWS Step Functions – AWS Step Functions emerged as a critical component in solving the orchestration challenges that had plagued the legacy system. The service manages complex workflows while providing essential visual monitoring capabilities that were previously unavailable. Step Functions maintains precise workflow state information and offers clear visualization across different processing stages, eliminating the opacity that characterized the old stored-procedure-based system. This visibility has proven invaluable for both operational monitoring and audit purposes, allowing teams to quickly identify and resolve processing issues without the extensive manual investigation previously required.
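
To illustrate the orchestration style (not Angel One’s actual workflow), the sketch below registers a small three-stage state machine with boto3. The state names, Glue job names, and ARNs are placeholders; note how each stage routes failures to an explicit notification state instead of letting delays cascade silently, which was the failure mode of the legacy SQL Agent chains.

```python
import json

import boto3

sfn = boto3.client("stepfunctions")

# Hypothetical three-stage pipeline; job names and ARNs are placeholders.
definition = {
    "StartAt": "IngestCollateral",
    "States": {
        "IngestCollateral": {
            "Type": "Task",
            "Resource": "arn:aws:states:::glue:startJobRun.sync",
            "Parameters": {"JobName": "ingest-collateral"},
            "Catch": [{"ErrorEquals": ["States.ALL"], "Next": "NotifyFailure"}],
            "Next": "ComputeAllocations",
        },
        "ComputeAllocations": {
            "Type": "Task",
            "Resource": "arn:aws:states:::glue:startJobRun.sync",
            "Parameters": {"JobName": "compute-allocations"},
            "Catch": [{"ErrorEquals": ["States.ALL"], "Next": "NotifyFailure"}],
            "Next": "GenerateReports",
        },
        "GenerateReports": {
            "Type": "Task",
            "Resource": "arn:aws:states:::glue:startJobRun.sync",
            "Parameters": {"JobName": "generate-reports"},
            "End": True,
        },
        "NotifyFailure": {
            "Type": "Task",
            "Resource": "arn:aws:states:::sns:publish",
            "Parameters": {
                "TopicArn": "arn:aws:sns:ap-south-1:123456789012:alerts",
                "Message": "Post-trade pipeline stage failed",
            },
            "End": True,
        },
    },
}

sfn.create_state_machine(
    name="post-trade-pipeline",          # hypothetical name
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/sfn-role",  # placeholder
)
```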

Amazon S3 – The solution’s storage architecture leverages Amazon S3 as its foundation, efficiently managing tens of terabytes of data with date-based partitioning for optimal query performance. This approach solved several critical challenges in data management and access. The team implemented data organization strategies in S3, creating logical partitions that improved query performance and reduced costs. Pre-signed URLs are generated for secure report distribution to operations teams, solving the security challenges associated with the previous file-sharing approach. Furthermore, the S3-based storage architecture facilitates uploads to regulatory entities like NSE and MCX, meeting strict compliance requirements while maintaining data security.
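
Generating a time-limited pre-signed URL for a finished report is a single boto3 call; the bucket, key, and expiry below are illustrative.

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket and key for a finished report. The link expires
# after one hour, giving operations teams time-limited, auditable access.
url = s3.generate_presigned_url(
    "get_object",
    Params={
        "Bucket": "example-reports-bucket",
        "Key": "reports/date=2024-01-15/collateral_report.csv",
    },
    ExpiresIn=3600,
)
print(url)
```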

Amazon Athena – Amazon Athena plays a crucial role in enabling ad-hoc analysis and investigation of the stored data. This serverless query service allows teams to perform complex data analysis without maintaining additional infrastructure. The integration of Athena solved the challenge of data accessibility, allowing business analysts and compliance teams to directly query the data lake using standard SQL and eliminating the need for the complex data exports and manual analysis that were common in the legacy system.
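
A typical ad-hoc query against the date-partitioned lake might look like the following; the database, table, and result location are hypothetical.

```python
import boto3

athena = boto3.client("athena")

# Hypothetical database and table. Filtering on the partition column
# (report_date) keeps scanned data, and therefore cost, low.
response = athena.start_query_execution(
    QueryString="""
        SELECT client_id, SUM(collateral_value) AS total_collateral
        FROM post_trade.allocations
        WHERE report_date = DATE '2024-01-15'
        GROUP BY client_id
    """,
    QueryExecutionContext={"Database": "post_trade"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print(response["QueryExecutionId"])
```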

The implementation of real-time processing capabilities represented another technical achievement. The team developed a stream processing pipeline that consumes live trade streams and updates allocations based on order execution and market changes. This near real-time processing capability was crucial for maintaining accurate position and risk calculations throughout the trading day. Incremental updates are pushed to exchanges in near real-time, maintaining low latency for critical updates while ensuring data consistency across the platforms.
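
The post doesn’t name the underlying streaming technology, but the consumption pattern can be sketched generically. The loop below assumes an Amazon Kinesis-style trade stream with hypothetical stream and field names, purely for illustration.

```python
import json
import time

import boto3

kinesis = boto3.client("kinesis")

# Assumed Kinesis-style trade stream; the stream name, shard ID, and
# record fields are placeholders for illustration.
shard_iterator = kinesis.get_shard_iterator(
    StreamName="trade-events",
    ShardId="shardId-000000000000",
    ShardIteratorType="LATEST",
)["ShardIterator"]

while True:
    batch = kinesis.get_records(ShardIterator=shard_iterator, Limit=500)
    for record in batch["Records"]:
        trade = json.loads(record["Data"])
        # Placeholder: the real pipeline would re-run the allocation
        # rules for this client and push the delta to the exchange.
        print(trade["client_id"], trade["margin_delta"])
    shard_iterator = batch["NextShardIterator"]
    time.sleep(1)  # simple polling cadence for the sketch
```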

Amazon CloudWatch – The platform’s DevOps and security infrastructure leverages multiple AWS services to ensure robust operation and compliance. CloudWatch provides comprehensive monitoring and alerting capabilities, with detailed logging of system operations. The team implemented alerting mechanisms through Amazon SNS topics, triggering email and PagerDuty alerts for immediate incident response. This monitoring framework has reduced both mean time to detect (MTTD) and mean time to resolve (MTTR) for system issues, improving overall reliability and operational efficiency.

Solution flow

The end-to-end solution flow is detailed below –

Figure 1: Angel One post-trade solution architecture

Step 1 – Data Ingestion

The workflow begins with AWS Glue initiating the critical data ingestion phase. During this step, AWS Glue pulls essential data from Angel One’s Trading Application, including comprehensive customer cash balances and real-time trade information. This initial stage performs preliminary data transformation and validation checks to ensure data quality and consistency before proceeding to subsequent processing stages.
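
As a sketch of this ingestion step, a Glue Spark job can pull balances from the trading database over JDBC and apply basic validation before staging the data in S3. The connection details, table, and column names here are assumed placeholders.

```python
# Runs inside a Glue Spark job (see the skeleton earlier); `spark` is
# the job's SparkSession. All connection details are placeholders; in
# practice credentials would come from a Glue connection or AWS
# Secrets Manager, never hardcoded.
balances = (
    spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://trading-db.example.internal:1433;databaseName=trading")
    .option("dbtable", "dbo.client_cash_balances")
    .option("user", "reporting_reader")
    .option("password", "<resolved-at-runtime>")
    .load()
)

# Basic validation: drop rows with missing client IDs or negative balances.
valid = balances.filter("client_id IS NOT NULL AND cash_balance >= 0")
invalid_count = balances.count() - valid.count()  # feeds the quality metrics

valid.write.mode("overwrite").parquet(
    "s3://example-bucket/staging/cash_balances/date=2024-01-15/")
```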

Step 2 – Allocation Calculations

Once the initial data is ingested, AWS Glue executes complex allocation calculations based on multiple data points including customer financial data, current cash balances, active trading positions, and collateral requirements. This process implements business rules for collateral segregation, creating a foundational allocation seed dataset that serves as the basis for subsequent reporting and analysis operations.

Step 3 – Reporting Data Generation

In this crucial phase, AWS Glue transforms the processed allocation data into various regulatory-compliant report formats. The AWS Glue pipeline generates comprehensive customer collateral reports, detailed allocation reports, and customer margins documentation. Each report passes through strict regulatory formatting and validation checks to ensure compliance with SEBI requirements.
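
Illustratively, the report-generation stage can apply a validation gate and then write each format as a partitioned S3 output. The schema, file layout, and sample row below are assumed for the sketch; `spark` is the Glue job’s SparkSession.

```python
# Sketch of the report-generation stage inside the Glue job; the schema
# and S3 layout are assumed, and the single row is sample data.
allocations = spark.createDataFrame(
    [("C001", 30000.0, 30000.0, 30000.0)],
    ["client_id", "cash_allocated", "securities_allocated", "broker_funds"],
)

# Validation gate: no report leaves the pipeline with negative allocations.
if allocations.filter("cash_allocated < 0").count() > 0:
    raise ValueError("negative allocation detected; aborting report generation")

(allocations.coalesce(1)
    .write.mode("overwrite")
    .option("header", True)
    .csv("s3://example-bucket/reports/collateral/date=2024-01-15/"))
```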

Step 4 – Data Egress Processing

The data egress process, managed by AWS Glue, prepares processed data for multiple downstream systems. This step is crucial as it formats data appropriately for the near real-time allocation system while ensuring that downstream consumers receive data in their required formats, maintaining data consistency across the platform.

Step 5 – Real-time Updates

This step involves processing data through the runtime allocation system, where customer positions and margins are updated in real time. The real-time processing component continuously processes incoming trade data and market changes, ensuring that back-office systems have the most current information for risk management and compliance purposes.

Step 6 – AWS Lambda Triggers

AWS Lambda functions serve as the event-driven processing engine throughout the workflow. These functions manage critical workflow transitions, execute complex business logic, and perform validation rules. They act as the connective tissue between components, ensuring smooth operation of the entire system.
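
A representative handler (the event shape, names, and ARN are hypothetical) might react to a report file landing in S3 by starting the next Step Functions execution:

```python
import json
import urllib.parse

import boto3

sfn = boto3.client("stepfunctions")

def handler(event, context):
    """Hypothetical Lambda trigger: an S3 object arrival event advances
    the workflow by starting a Step Functions execution."""
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

    # Placeholder state machine ARN; the object location is passed as
    # input so downstream stages know which file to process.
    sfn.start_execution(
        stateMachineArn=("arn:aws:states:ap-south-1:123456789012:"
                         "stateMachine:post-trade-pipeline"),
        input=json.dumps({"bucket": bucket, "key": key}),
    )
    return {"status": "started", "key": key}
```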

Step 7 – DevOps & Security Layer

The DevOps and security infrastructure is managed through multiple AWS services. Terraform handles infrastructure configuration, while AWS IAM provides robust access control. AWS KMS ensures data encryption, and Amazon CloudWatch maintains comprehensive system monitoring. This layer ensures the entire platform operates securely and efficiently.

Step 8 – Operational Controls

GitHub Actions manages the deployment pipeline, implementing crucial security and compliance controls while maintaining infrastructure as code. This step ensures consistent deployment practices and maintains the integrity of the production environment through automated processes.

Step 9 – Workflow Management

AWS Step Functions orchestrates the entire process flow, managing state transitions and error handling while providing clear visual monitoring of workflow progress. This central orchestration ensures reliable execution of workflow components and maintains process integrity.

Step 10 – Back-office Integration

The back-office integration phase ensures data flow between the allocation system and back-office systems. This step manages final data reconciliation and ensures consistency across systems, critical for maintaining accurate financial records.

Step 11 – Runtime Processing

The runtime processing system handles real-time allocation updates, continuously processing live trading data and updating customer positions dynamically. This ensures that stakeholders have access to the most current information for decision-making.

Step 12 – Report Distribution

The final step involves generating secure S3 pre-signed URLs for report access, distributing reports to operations teams, and submitting required documentation to regulatory bodies. This step ensures that stakeholders receive their required information in a secure and timely manner, meeting both operational and regulatory requirements.

This modernized workflow processes over 30 million records daily, completing end-to-end processing within 30 minutes, an improvement from the legacy system’s 4-hour processing window. The solution maintains high accuracy with zero exchange rejection rates while providing complete audit trails and operational visibility.

Performance improvements and business outcomes

Timeliness: The modernized system achieved a first-run success rate of 100%, an improvement from the previous 90%. This enhancement eliminated the need for manual interventions and reruns. The new platform reduced average processing time from 4 hours to just 30 minutes for the complete cycle, from on-premises collateral collection to AWS report dispatch. This improvement enables the organization to meet regulatory deadlines consistently and provides more time for validation and review before submission.

Accuracy & Data Quality: The AWS-based post-trade reporting platform maintains a flawless exchange rejection rate of 0%, continuing the high standards of the previous system but with less manual oversight. This perfect acceptance rate is achieved through enhanced data validation, automated quality checks, and robust error handling mechanisms implemented throughout the processing pipeline. The improved data quality controls and automated reconciliation processes ensure that reports meet regulatory requirements consistently.

Operational Efficiency: Processing throughput has seen a remarkable improvement, scaling from handling 8 million records per day to processing 30 million records daily. This nearly 4x increase in throughput was achieved without adding operational complexity or requiring additional staff. The serverless architecture automatically scales to handle volume spikes, and the improved processing efficiency has reduced infrastructure costs despite the increased workload.

Observability: The AWS Glue and AWS Step Functions solution provides comprehensive real-time visibility across critical processing stages – Collateral Data Sync, Staging Data Computation, and Final Report Generation. Each stage is monitored through Amazon SNS notifications that provide event-driven alerts to subscribed endpoints. Integration with Slack via AWS Lambda functions gives operations teams immediate situational awareness. This enhanced observability has reduced incident response times and improved system reliability.
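
One common wiring for such Slack alerts, and a plausible reading of the integration described above, subscribes a small Lambda function to the SNS topic and forwards each message to a Slack incoming webhook; the webhook URL and message format are assumptions.

```python
import json
import os
import urllib.request

# Hypothetical Slack incoming-webhook URL, supplied via environment.
SLACK_WEBHOOK_URL = os.environ["SLACK_WEBHOOK_URL"]

def handler(event, context):
    """Hypothetical Lambda subscribed to the alerting SNS topic:
    forwards each notification into a Slack channel."""
    message = event["Records"][0]["Sns"]["Message"]
    payload = json.dumps({"text": f"Post-trade alert: {message}"}).encode()
    request = urllib.request.Request(
        SLACK_WEBHOOK_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request)
```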

CloudWatch-Based Logging, Monitoring & Alarms: The implementation of comprehensive monitoring through Amazon CloudWatch has transformed system oversight. Complete execution histories, including state transitions and error traces, are retained according to audit policies. Amazon CloudWatch alarms on key metrics (such as execution failures, state timeouts, and error thresholds) trigger Amazon SNS topics that generate email and PagerDuty alerts for immediate escalation when needed. This robust monitoring framework has reduced MTTD and MTTR for system issues, improving overall system reliability.
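
An alarm of this kind takes only a few lines of boto3; the alarm name, threshold, and topic ARN below are placeholders, while AWS/States and ExecutionsFailed are the real CloudWatch namespace and metric for Step Functions failures.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Hypothetical alarm: escalate if any Step Functions execution fails
# within a 5-minute window; the SNS topic fans out to email/PagerDuty.
cloudwatch.put_metric_alarm(
    AlarmName="post-trade-pipeline-failures",   # placeholder name
    Namespace="AWS/States",
    MetricName="ExecutionsFailed",
    Dimensions=[{
        "Name": "StateMachineArn",
        "Value": ("arn:aws:states:ap-south-1:123456789012:"
                  "stateMachine:post-trade-pipeline"),
    }],
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1.0,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:alerts"],  # placeholder
)
```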

The solution now meets SEBI requirements with a 100% first-run success rate and zero exchange rejections. The platform’s success has led to its expansion into other post-trade use cases, including exposure analytics, reconciliations, and client-facing transparency initiatives, demonstrating the scalability and flexibility of the AWS-based solution. The modernization has effectively transformed what was initially a compliance project into a strategic platform for future growth and innovation.

Summary

By migrating from stored procedures to AWS serverless analytics services, you can achieve similar regulatory reporting transformations. This architecture demonstrates how AWS Glue, Step Functions, Lambda, and S3 can process millions of financial records daily while maintaining perfect compliance rates. The key success factors include designing for recoverability with 15-minute checkpoint restoration, implementing end-to-end processing within one-hour windows, and building comprehensive audit trails from the start. For organizations facing similar regulatory pressures, this serverless approach reduces operational overhead while providing the scalability and reliability needed for critical financial reporting.

Next steps

To implement a similar solution, start by evaluating your current processing volumes and regulatory requirements. Consider these AWS services for your financial data processing: AWS Glue for data transformation, Step Functions for workflow orchestration, and Amazon CloudWatch for monitoring and compliance tracking. For detailed implementation guidance, explore the AWS Financial Services documentation and AWS Glue best practices. To learn more about building compliant data processing pipelines, see AWS compliance resources.

Nishant Chandra

Nishant Chandra is SVP of Engineering at Angel One, where he leads Post-Trade systems. His teams are building a next-generation Post-Trade platform focused on scale, resilience, and auditability. With decades of experience across fintech and e-commerce, he has led large-scale platform and data engineering initiatives. Outside work, he writes practical essays on FinTech systems and developing engineering leaders.

Abhin Hattikudru

Abhin Hattikudru is a Senior Principal Architect at Angel One, where he leads the design of high-scale backend architectures. With two decades of experience, he has wide expertise in big data stacks and microservice architecture.

Fuzail Ahmad

Fuzail Ahmad is Senior Staff Engineer at Angel One, working in the Post-Trade business unit where he designs and scales mission-critical financial systems. Passionate about simplifying complexity, he shares practical insights from his engineering journey to empower teams and the broader tech community.

Karan Mandal

Karan Mandal is a Sr. Software Engineer at Angel One and works on the Post-Trade reporting platform.

Shailesh Shivakumar

Shailesh Shivakumar is an FSI Sr. Solutions Architect with AWS India. He works with financial enterprises such as banks, NBFCs, and trading enterprises to help them design secure cloud platforms and engage with them to accelerate their cloud journey. He builds demos and proof-of-concepts to demonstrate the art of the possible on the AWS Cloud. He leads other initiatives such as customer enablement workshops, AWS demos, cost optimization, and solution assessments to ensure that AWS customers succeed in their cloud journey. Shailesh is part of Machine Learning TFC at AWS, handling the generative AI and machine learning-focused customer scenarios. Security, serverless, containers, and machine learning in the cloud are his key areas of interest.

Vikas Rajoria

Vikas Rajoria is a Sr. Director of Engineering at Angel One, where he leads the back-office modernization charter, building mission-critical systems such as Settlement and Trade Server Seeding. He is passionate about building systems that scale seamlessly and are transparent through strong observability.