
Understanding AngelOne’s Tech Stack


AngelOne is one of India’s largest online stock brokerage firms. Behind its fast, reliable, and high-performance trading platform is a meticulously engineered technology stack that has evolved over the years. In this blog post, we’ll dive into the key components and design decisions of AngelOne’s tech stack—from evolving a monolith into a microservices architecture, to the choice of programming languages and infrastructure, and most importantly, how the platform is designed to handle intense concurrency and complex order flows.


Evolving from Monolith to Microservices

The Journey of Transformation

AngelOne began its journey with a monolithic application. As user traffic and data volumes soared, the need for independent scaling and rapid deployment cycles became critical. This led to a phased migration toward a microservices architecture, enabling teams to build, test, and deploy individual services independently.

Architectural Patterns for Scalability

A standout pattern is Command Query Responsibility Segregation (CQRS). By separating the write (command) and read (query) paths, AngelOne optimizes performance:

  • Write Path (Order Placement): Ultra-low latency is essential.
  • Read Path (Portfolio & Market Data): Managed with a relaxed consistency model for improved scalability.
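
Here is a minimal Go sketch of what a CQRS-style split can look like in practice; the `PlaceOrderCommand`, `PortfolioView`, and handler interfaces are illustrative names, not AngelOne’s actual types:

```go
package cqrs

import (
	"context"
	"time"
)

// PlaceOrderCommand travels down the latency-critical write path.
type PlaceOrderCommand struct {
	OrderID string // client-generated, reused on retries for idempotency
	Symbol  string
	Qty     int
	Price   float64
}

// CommandHandler owns the write path: validate, persist, forward to the exchange.
type CommandHandler interface {
	PlaceOrder(ctx context.Context, cmd PlaceOrderCommand) error
}

// PortfolioView is a denormalized read model, refreshed asynchronously
// from order and fill events, so reads never block the write path.
type PortfolioView struct {
	Holdings  map[string]int
	UpdatedAt time.Time // some staleness is acceptable on the query side
}

// QueryHandler owns the read path and can be scaled independently of writes.
type QueryHandler interface {
	Portfolio(ctx context.Context, clientID string) (PortfolioView, error)
}
```

Keeping the two interfaces separate is what lets the read side scale out behind caches and replicas while the write side stays lean and latency-focused.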

Hybrid Infrastructure: Balancing Control and Agility

Private Data Centers & AWS

AngelOne’s infrastructure strategy is hybrid:

  • Private Data Centers:
    Mission-critical trading operations—where every millisecond counts—are handled in-house. These environments are finely tuned for ultra-low latency, ensuring that order placements and real-time data feeds remain uninterrupted even during peak trading hours.
  • AWS Managed Services:
    The majority of AngelOne’s customer-facing applications and non-critical data processing run on Amazon Web Services. AWS provides agility, scalability, and pre-built managed services (like load balancing, managed databases, and monitoring tools), allowing rapid development without sacrificing performance.

This approach delivers the best of both worlds—tight control over latency-sensitive operations while leveraging cloud-native services for the rest of the platform.


Backend Power: Concurrency with Go

Why Go Was Chosen

At the heart of AngelOne’s backend is Go (Golang), selected for its ability to handle extreme concurrency with efficiency:

  • Lightweight Goroutines:
    Go’s goroutines have a small initial stack (as low as 2KB), which can dynamically grow. This lightweight concurrency model allows each server to spawn hundreds of thousands, even millions, of concurrent routines—essential for processing real-time trading data and managing millions of simultaneous client connections.
  • Efficient Scheduling:
    The Go runtime efficiently multiplexes goroutines onto a limited number of OS threads, ensuring that CPU resources are optimally utilized without incurring the overhead associated with OS-level threading.
  • Simplicity and Uniformity:
    With a concise language specification, Go enforces a consistent coding style that reduces cognitive overhead during code reviews and long-term maintenance.
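
As a rough illustration of how cheap goroutines are, the toy program below spawns one goroutine per simulated connection; the pattern, not the numbers, is what matters:

```go
package main

import (
	"fmt"
	"sync"
)

// handleConn simulates per-connection work (parsing a tick, writing a response).
func handleConn(id int, results chan<- int) {
	results <- id * 2 // placeholder for real work
}

func main() {
	const connections = 100_000 // each goroutine starts with a ~2KB stack

	results := make(chan int, connections)
	var wg sync.WaitGroup

	for i := 0; i < connections; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			handleConn(id, results)
		}(i)
	}

	wg.Wait()
	close(results)

	sum := 0
	for r := range results {
		sum += r
	}
	fmt.Println("processed", connections, "connections, checksum:", sum)
}
```

The Go scheduler multiplexes all of these goroutines onto a handful of OS threads, which is why the same pattern scales to the connection counts discussed above.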

Concurrency in Action: Order Flow & Real-Time Processing

AngelOne’s trading platform is a high-concurrency system where every millisecond matters. Here’s how concurrency is built into its design:

1. Order Flow Pipeline

  • Initiation at the Client:
    When a user places an order through the mobile or web app, the order request is sent with a unique identifier. This ensures idempotency—if a request is retried, the same order ID is used to avoid duplicates.
  • Asynchronous Processing:
    The order enters a highly concurrent processing pipeline:
    • Validation: Funds and risk management checks are performed concurrently.
    • Routing: The validated order is then dispatched to the appropriate microservice, often segmented into multiple shards. Each shard handles a subset of orders in parallel, ensuring that high volumes of order requests don’t bottleneck the system.
    • Exchange Communication: Orders are transmitted via low-latency channels (often over private data centers) to the exchange, with responses (e.g., order confirmations, partial fills) fed back asynchronously.
  • Event-Driven Architecture:
    Using messaging platforms like Kafka, every event in the order flow—order placement, updates, and confirmations—is published and consumed in real time. This allows independent services to process and react to order events concurrently, minimizing delays and ensuring data consistency across the system.

2. Real-Time Market Data & Websockets

  • UDP Market Feeds:
    Market data from exchanges arrives via UDP—designed for speed over guaranteed delivery. AngelOne’s system concurrently processes these feeds to update live prices, order books, and other critical data.
  • Websocket Connections:
    Real-time updates are pushed to client apps via websockets. Go’s concurrency model allows a single server to manage millions of active websocket connections, ensuring that users receive up-to-date market data with minimal latency.
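
A stripped-down, standard-library-only sketch of the feed side is shown below: a UDP reader that fans each tick out to subscriber channels. The websocket delivery layer is omitted, and the buffer sizes are arbitrary:

```go
package feed

import (
	"log"
	"net"
	"sync"
)

// Hub fans incoming ticks out to many subscribers without blocking the reader.
type Hub struct {
	mu   sync.RWMutex
	subs map[chan []byte]struct{}
}

func NewHub() *Hub { return &Hub{subs: map[chan []byte]struct{}{}} }

// Subscribe registers a new consumer (e.g. a websocket writer goroutine).
func (h *Hub) Subscribe() chan []byte {
	ch := make(chan []byte, 256)
	h.mu.Lock()
	h.subs[ch] = struct{}{}
	h.mu.Unlock()
	return ch
}

func (h *Hub) publish(tick []byte) {
	h.mu.RLock()
	defer h.mu.RUnlock()
	for ch := range h.subs {
		select {
		case ch <- tick:
		default: // drop for slow consumers rather than stall the UDP reader
		}
	}
}

// Listen reads market-data datagrams and publishes each one to the hub.
func (h *Hub) Listen(addr string) error {
	udpAddr, err := net.ResolveUDPAddr("udp", addr)
	if err != nil {
		return err
	}
	conn, err := net.ListenUDP("udp", udpAddr)
	if err != nil {
		return err
	}
	defer conn.Close()

	buf := make([]byte, 1500) // typical MTU-sized datagram
	for {
		n, _, err := conn.ReadFromUDP(buf)
		if err != nil {
			log.Println("feed read error:", err)
			continue
		}
		tick := make([]byte, n)
		copy(tick, buf[:n])
		h.publish(tick)
	}
}
```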

3. Order Matching and Sharding

  • Concurrent Order Matching:
    Once orders reach the trading engine, they are matched in real time. The microservices architecture, combined with Go’s concurrency primitives, allows order matching algorithms to run concurrently across multiple shards. This parallel processing ensures that even in high-volume scenarios, orders are matched swiftly and efficiently.
  • Shard Isolation:
    Each shard operates independently, minimizing contention. If one shard experiences an issue, it doesn’t affect the entire system—thereby enhancing overall reliability and throughput.
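
A minimal sketch of symbol-based sharding: each shard owns a channel and a single worker goroutine, so orders for the same instrument are handled in sequence while different shards run in parallel. The shard count, hash function, and types are illustrative:

```go
package shard

import "hash/fnv"

type Order struct {
	ID     string
	Symbol string
	Qty    int
}

// Engine partitions orders by symbol so contention stays inside one shard.
type Engine struct {
	shards []chan Order
}

func NewEngine(n int, process func(Order)) *Engine {
	e := &Engine{shards: make([]chan Order, n)}
	for i := range e.shards {
		ch := make(chan Order, 1024)
		e.shards[i] = ch
		go func(ch <-chan Order) { // one worker per shard, no locks needed inside it
			for o := range ch {
				process(o)
			}
		}(ch)
	}
	return e
}

// Submit routes an order to its shard; the same symbol always lands on the same shard.
func (e *Engine) Submit(o Order) {
	h := fnv.New32a()
	h.Write([]byte(o.Symbol))
	e.shards[int(h.Sum32())%len(e.shards)] <- o
}
```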

Frontend Innovations: Performance-First Design

Native Mobile Applications

For mobile trading, performance is critical. AngelOne develops native apps for both iOS and Android, leveraging platform-specific optimizations to deliver:

  • Ultra-Responsive Interfaces: Direct access to device hardware and optimized networking libraries ensures minimal latency.
  • Real-Time Updates: Native code can handle rapid updates, ensuring that price movements and order statuses are reflected instantaneously.

Web and Cross-Platform UI

When speed-to-market is essential, AngelOne adopts cross-platform strategies:

  • WebViews for Consistency:
    WebViews allow a single code base to power multiple platforms while maintaining reasonable performance. For critical components, these are augmented with native modules.
  • Reactive Frameworks:
    On the web, reactive libraries such as Solid enable dynamic rendering of live data (e.g., updating price tickers and order statuses). Solid’s minimal overhead ensures that even with rapid data changes, the user interface remains smooth and responsive.

Data Management and Messaging

ACID-Compliant Database: PostgreSQL

For core financial transactions where data integrity is non-negotiable, AngelOne relies on PostgreSQL. This robust, ACID-compliant relational database ensures:

  • Strong Consistency:
    Financial data and transactions remain reliable and consistent, even under heavy load.
  • Scalability:
    PostgreSQL supports complex queries and high volumes of transactional data, making it ideal for fintech applications.
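
The kind of transactional write this implies can be sketched with Go’s standard database/sql package; the `ledger` and `orders` tables and the function below are hypothetical, not AngelOne’s schema:

```go
package store

import (
	"context"
	"database/sql"
	// A Postgres driver such as github.com/lib/pq or pgx's stdlib adapter
	// would be imported for its side effects in a real program.
)

// RecordFill debits the client's ledger and marks the order filled in one
// transaction, so either both rows are written or neither is.
func RecordFill(ctx context.Context, db *sql.DB, clientID, orderID string, amount float64) error {
	tx, err := db.BeginTx(ctx, &sql.TxOptions{Isolation: sql.LevelSerializable})
	if err != nil {
		return err
	}
	defer tx.Rollback() // no-op if Commit succeeds

	if _, err := tx.ExecContext(ctx,
		`UPDATE ledger SET balance = balance - $1 WHERE client_id = $2`,
		amount, clientID); err != nil {
		return err
	}
	if _, err := tx.ExecContext(ctx,
		`UPDATE orders SET status = 'FILLED' WHERE order_id = $1`,
		orderID); err != nil {
		return err
	}
	return tx.Commit()
}
```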

Real-Time Data Streaming with Kafka

To manage the flow of real-time data—from market feeds to order events—AngelOne uses Kafka. This distributed streaming platform facilitates:

  • High-Throughput Processing:
    Kafka’s design enables scalable and fault-tolerant processing of event streams.
  • Loose Coupling:
    Microservices can subscribe to relevant topics and process events concurrently, ensuring that updates propagate through the system with minimal delay.
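
A sketch of what a consuming service might look like with the segmentio/kafka-go client (one of several Go Kafka clients); the broker address, topic, and consumer group names are placeholders:

```go
package events

import (
	"context"
	"log"

	"github.com/segmentio/kafka-go"
)

// ConsumeOrderEvents subscribes a service to the order-event stream.
// Each consumer group keeps its own offset, so services stay loosely coupled.
func ConsumeOrderEvents(ctx context.Context, handle func(key, value []byte) error) error {
	r := kafka.NewReader(kafka.ReaderConfig{
		Brokers: []string{"localhost:9092"}, // placeholder broker address
		Topic:   "order-events",             // hypothetical topic name
		GroupID: "portfolio-service",        // hypothetical consumer group
	})
	defer r.Close()

	for {
		msg, err := r.ReadMessage(ctx) // blocks until a message arrives or ctx is cancelled
		if err != nil {
			return err
		}
		if err := handle(msg.Key, msg.Value); err != nil {
			log.Printf("event %s failed: %v", msg.Key, err)
		}
	}
}
```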

Technologies that AngelOne Phased Out

AngelOne has gradually phased out several legacy technologies and vendor-dependent solutions as part of its evolution toward a high-performance, in-house engineered platform. Some key examples include:

  • Vendor-Driven Code: Early in the journey, many critical components (such as mobile app development and backend services) were outsourced or heavily reliant on vendor solutions. Over time, these were replaced by building an in-house team and bespoke solutions tailored to AngelOne’s needs.
  • Stored Procedures for Business Logic: Initially, stored procedures were used extensively—for example, to calculate billing components like GST. However, as the codebase grew, these became unwieldy and difficult to maintain. The team phased out heavy reliance on stored procedures in favor of more modular, maintainable code.
  • Legacy Java Components: Some of the early backend systems were built in Java, but these systems began to show scalability challenges—especially regarding thread management and overall performance. AngelOne transitioned to using Go, which provides a simpler concurrency model with lightweight goroutines and better efficiency for their high-throughput requirements.

By moving away from these legacy and vendor-dependent technologies, AngelOne has been able to simplify its tech stack, improve scalability, and maintain a lean, agile, and high-performing engineering environment.


Operational Excellence: Monitoring, Testing, and Reliability

Rigorous Pre-Deployment Testing

Before code reaches production, it undergoes extensive performance and load testing in a pre-production environment. This testing verifies that:

  • Systems Can Handle Peak Loads:
    Simulated high-traffic scenarios ensure that all components operate reliably.
  • Bottlenecks Are Identified Early:
    Any potential issues are detected and resolved before deployment.
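
The idea behind such load tests can be illustrated with a toy Go load generator that fires a configurable number of concurrent requests and averages the latencies; the endpoint and numbers below are placeholders, not AngelOne’s tooling:

```go
package main

import (
	"fmt"
	"net/http"
	"sync"
	"time"
)

func main() {
	const (
		workers  = 200                                     // concurrent virtual users
		requests = 50                                      // requests per worker
		target   = "https://pre-prod.example.com/healthz"  // placeholder endpoint
	)

	latencies := make(chan time.Duration, workers*requests)
	var wg sync.WaitGroup

	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := 0; i < requests; i++ {
				start := time.Now()
				resp, err := http.Get(target)
				if err != nil {
					continue // count only successful requests
				}
				resp.Body.Close()
				latencies <- time.Since(start)
			}
		}()
	}

	wg.Wait()
	close(latencies)

	var total time.Duration
	n := 0
	for d := range latencies {
		total += d
		n++
	}
	if n > 0 {
		fmt.Printf("completed %d requests, avg latency %v\n", n, total/time.Duration(n))
	}
}
```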

Detailed Runbooks and Incident Management

Comprehensive runbooks and automated alerting (using tools like Grafana) enable rapid incident detection and resolution. This operational rigor is critical in a system where even minor delays can have significant financial implications.
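
Grafana dashboards and alerts are usually driven by a metrics backend; one common Go-side pattern is to expose Prometheus metrics that Grafana can then chart and alert on. The sketch below assumes the prometheus/client_golang library and a made-up counter name, and is not a description of AngelOne’s actual setup:

```go
package main

import (
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// ordersPlaced is a hypothetical counter a dashboard or alert rule could watch.
var ordersPlaced = prometheus.NewCounter(prometheus.CounterOpts{
	Name: "orders_placed_total",
	Help: "Total number of orders accepted by this service.",
})

func main() {
	prometheus.MustRegister(ordersPlaced)

	http.HandleFunc("/order", func(w http.ResponseWriter, r *http.Request) {
		ordersPlaced.Inc() // increment on every accepted order
		w.WriteHeader(http.StatusAccepted)
	})

	// The metrics backend scrapes this endpoint for the numbers Grafana plots.
	http.Handle("/metrics", promhttp.Handler())
	http.ListenAndServe(":8080", nil)
}
```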


Key Takeaways

  1. Simplicity and Scalability:
    A clear separation of concerns—via microservices and patterns like CQRS—ensures that AngelOne can scale effectively while maintaining simplicity.
  2. Concurrency is King:
    The choice of Go, with its lightweight goroutines and efficient scheduling, allows AngelOne to handle millions of concurrent processes. This design is fundamental for real-time order processing and market data updates.
  3. Robust Order Flow:
    AngelOne’s order flow pipeline is designed to process orders asynchronously and concurrently—from client initiation, through validation and sharding, to exchange communication—ensuring minimal latency and high throughput.
  4. Hybrid Infrastructure:
    Combining private data centers for critical trading functions with AWS-managed services for other workloads offers both control and agility.
  5. Front-End Responsiveness:
    Native mobile apps and reactive web UIs ensure that users receive instantaneous updates, a crucial factor in the fast-paced world of trading.
  6. Operational Excellence:
    Rigorous testing, detailed monitoring, and robust incident management protocols ensure that the platform remains reliable even under peak loads.

Conclusion

AngelOne’s tech stack is a masterclass in building a high-performance, scalable fintech platform. By leveraging a microservices architecture, hybrid infrastructure, and Go’s powerful concurrency model, AngelOne efficiently handles intense order flows and real-time market data. The carefully designed order processing pipeline and robust concurrency mechanisms ensure that every trade is executed swiftly and reliably—even during the busiest trading periods.

As the fintech landscape evolves, the design philosophies and technical innovations at AngelOne offer invaluable insights for any organization aiming to build resilient, high-throughput systems in a competitive market.

Stay tuned for more deep dives into the world of fintech engineering and how cutting-edge technologies are shaping the future of financial services!


Rajandran R Creator of OpenAlgo - OpenSource Algo Trading framework for Indian Traders. Building GenAI Applications. Telecom Engineer turned Full-time Derivative Trader. Mostly Trading Nifty, Banknifty, Highly Liquid Stock Derivatives. Trading the Markets Since 2006 onwards. Using Market Profile and Orderflow for more than a decade. Designed and published 100+ open source trading systems on various trading tools. Strongly believe that market understanding and robust trading frameworks are the key to trading success. Building Algo Platforms, Writing about Markets, Trading System Design, Market Sentiment, Trading Software & Trading Nuances since 2007 onwards. Author of Marketcalls.in
