AWS Web3 Blog
Implementing an Event-Driven DeFi Portfolio Tracker on AWS
Decentralized Finance (DeFi) has revolutionized how users interact with financial services, introducing new ways to lend, borrow, and earn interest through smart contracts. However, unlike traditional finance (TradFi) or Web2 systems where tracking user positions is straightforward through account numbers, DeFi presents unique challenges. Monitoring a user’s position value over time requires gathering data from multiple contracts, listening to events, and chaining together requests based on numerous pieces of data – all while dealing with raw blockchain data in hex format that requires specialized knowledge to process into human-readable information.
The fragmented nature of DeFi, with its numerous protocols and open-source projects, further complicates the task of tracking user positions over time. To address this complexity, this solution focuses on the vfat.io platform – a DeFi position management platform that aggregates multiple protocols into a single interface. By capturing and processing events from vfat.io’s sickle proxy wallet contracts, we can build a streamlined backend for user position tracking that demonstrates key patterns for DeFi data collection.
For this implementation, we’ll specifically target Uniswap V3-style concentrated liquidity pool gauges, deliberately excluding standard AMMs (SAMMs) and standard concentrated liquidity AMM pools (CLAMMs) to maintain simplicity. SAMMs are simpler liquidity vehicles than CLAMMs in that they distribute contributed liquidity across the pool’s entire price range, and as a result may require special handling of events. Similarly, standard CLAMM pools (i.e., those not paired with a gauge) behave differently and may also require special event handling. Our solution leverages Amazon Elastic Compute Cloud (Amazon EC2) for event listening, connected to an RPC server provider via WebSocket, with events flowing through Amazon SQS for buffering and AWS Lambda for transformation into human-readable format before final storage in Amazon DynamoDB. This architecture provides a robust foundation that can be extended to support additional protocols while maintaining efficient, real-time position tracking capabilities. The RPC provider can be either a self-hosted solution (e.g., AWS Node Runners) or a third-party provider such as ChainStack or Alchemy.
Solution overview
Our solution requires several assumptions and design choices to maintain a focused and manageable implementation. The system will process and store position data only for Uniswap V3 style concentrated liquidity pools, automatically skipping or throwing errors for standard pools or other DEX types. While this might seem limiting, it provides a clear foundation that can be extended to support additional protocols once the core functionality is established.
While the architecture could easily be extended to support these additional protocols, focusing on a single protocol type allows us to demonstrate the core concepts more clearly. The code for this solution is located in the AWS GitHub repo.
Design Considerations
For our event listener, we’ve chosen Amazon EC2 rather than a serverless option such as AWS Fargate. This decision is driven by the nature of WebSocket connections in blockchain applications: a single WebSocket connection is highly efficient and can handle hundreds of transactions per second (TPS), so one small instance maintaining a persistent connection to the RPC node ensures reliable event capture with minimal infrastructure complexity. While this blog details a connection to the Ethereum network, the solution works on any EVM-based network, such as Base, provided that you update the contract addresses accordingly. The contracts are periodically redeployed with updates and may change addresses, so always verify that the contract addresses are correct and current.
Architectural Flow
The following diagram illustrates the solution architecture. This architecture is intended as a proof of concept for DeFi event processing and is not production-ready. Additional enhancements are required to meet production standards for security, reliability, compliance, and auditability.

The solution follows an event-driven architecture:
- An EC2 instance hosts our event listener application, which establishes a WebSocket connection to an ETH RPC node. This listener’s sole responsibility is to capture relevant events and forward them to an SQS queue, maintaining a clean separation of concerns.
- The event listener runs with a minimal IAM instance role, adhering to the principle of least privilege, with permissions only to push messages to SQS. Similarly, the Lambda function operates under a restricted execution role with permissions limited to reading from SQS and writing to DynamoDB.
- AWS Lambda processes the raw event data, transforming hex-encoded blockchain data into human-readable format. This transformation step is crucial for making the data accessible and useful for downstream applications.
- Finally, the processed data is stored in Amazon DynamoDB, providing fast, consistent access to position data.
This architecture provides a robust foundation for real-time position tracking while maintaining operational simplicity and cost-effectiveness. The use of managed services like SQS, Lambda, and DynamoDB allows us to focus on application logic rather than infrastructure management, while the EC2-based listener provides the stability needed for consistent event capture.
Technical Implementation
Setting Up the Infrastructure
Amazon SQS Configuration
We’ll start by creating a standard SQS queue with a 60-second visibility timeout to buffer our events. While FIFO queues offer strict ordering guarantees, a standard queue is sufficient for our use case and provides better throughput.

Note that while standard queues support nearly unlimited throughput, DeFi applications can generate significant event volume. Monitor your queue metrics and adjust batch sizes accordingly. The remaining settings can be left at the defaults.
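As a sketch, the queue can be created with the AWS CLI; the queue name below is an example, not the name used in the repository:

```
aws sqs create-queue \
  --queue-name defi-position-events \
  --attributes '{"VisibilityTimeout": "60"}'
```

The 60-second visibility timeout gives the Lambda function time to finish processing a batch before a message becomes visible again.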
DynamoDB Table Setup
Our DynamoDB table uses a composite key structure optimized for querying user position history:

The partition key (user_address) enables efficient queries for all positions belonging to a specific user, while the sort key (nft_id_type) allows for chronological ordering and filtering of specific position types. The sort key is a concatenation of the nft_id and the event type that is created in the event listener. The remaining settings can be left to the defaults.
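A minimal sketch of the table creation via the AWS CLI, assuming on-demand capacity and an example table name:

```
aws dynamodb create-table \
  --table-name defi-positions \
  --attribute-definitions \
      AttributeName=user_address,AttributeType=S \
      AttributeName=nft_id_type,AttributeType=S \
  --key-schema \
      AttributeName=user_address,KeyType=HASH \
      AttributeName=nft_id_type,KeyType=RANGE \
  --billing-mode PAY_PER_REQUEST
```

On-demand billing suits this workload because event volume varies with on-chain activity.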
Implementing the Event Listener
The event listener is a Node.js application that listens for blockchain events over a WebSocket connection to our RPC node. Upon receiving an event, we verify it was emitted from the correct function, call additional contracts to collect position and token metadata, and then push the event onto an SQS queue for later processing by Lambda.

To begin, deploy an EC2 instance in a private subnet. Note that if you’re connecting to a third-party node provider such as ChainStack, ensure there’s proper routing to access the node. For this demonstration we’ll be using a t4g.nano instance as the requirements for the application are very low. Ensure your IAM Instance Profile role is minimally allowed to push items into SQS.
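As an illustration, the instance profile’s policy can be scoped to a single action on the queue; the ARN below is a placeholder for your account, region, and queue name:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["sqs:SendMessage"],
      "Resource": "arn:aws:sqs:us-east-1:123456789012:defi-position-events"
    }
  ]
}
```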
Dependencies and Configuration
The solution has minimal package dependencies, requiring only ethers.js and the AWS SQS client SDK. The event listener requires several key configurations and contract interfaces to function properly:
External APIs and Endpoints
- Gecko Terminal API for token metadata (decimals, symbols, prices)
- Network-specific WebSocket RPC URL for blockchain connection
- AWS SQS queue URL for event publishing
Smart Contract Addresses
- vfat Farm Strategy Contract: Handles concentrated liquidity pool farms
- Sickle Factory Contract: Manages proxy wallets
Function Signatures
The system tracks specific function signatures to identify relevant transactions.
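As a sketch of this check, the first four bytes of a transaction’s input data (the function selector) can be compared against a known value. The selector constant below is a hypothetical placeholder, not the real signature from the strategy contract:

```javascript
// Hypothetical selector -- look up the real value for the deployed
// strategy contract before using this check.
const DEPOSIT_SELECTOR = '0xabcdef12';

// A function selector is the first 4 bytes of the calldata: "0x" plus
// 8 hex characters, so we compare the first 10 characters of tx.data.
function isTrackedCall(txInputData, selector) {
  return txInputData.slice(0, 10).toLowerCase() === selector.toLowerCase();
}
```

This lets the listener distinguish a genuine deposit call from other calls that happen to emit the same event.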
Contract Interfaces (ABIs)
The system requires several contract interfaces to interact with the blockchain:
- Sickle Factory Interface
- Allows fetching wallet owner information
- Concentrated Liquidity Pool (CLP) Interface
- Shared between Gauge and Pool contracts
- slot0(): Returns current pool state including price and tick
- pool(): Returns pool address
- Event Interface
- Tracks two main events:
- SickleDepositedNft: Triggered when positions are opened
- SickleExitedNft: Triggered when positions are closed
- Each event includes details about the sickle wallet, NFT position, and staking contract
- NFT Manager Interface
- Provides position details including:
- Token addresses (token0, token1)
- Position boundaries (tickLower, tickUpper)
- Liquidity amount
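A sketch of how these interfaces might look as human-readable ethers.js ABI fragments. The parameter names and types shown are illustrative assumptions, and slot0’s full return tuple is trimmed; use the ABIs from the repository for real deployments:

```javascript
// Illustrative ABI fragments (ethers.js human-readable style).
// Names and types are assumptions, not the canonical contract ABIs.
const FACTORY_ABI = [
  'function owner(address sickle) view returns (address)', // hypothetical
];

const CLP_ABI = [
  'function slot0() view returns (uint160 sqrtPriceX96, int24 tick)', // trimmed
  'function pool() view returns (address)',
];

const EVENT_ABI = [
  'event SickleDepositedNft(address indexed sickle, uint256 indexed tokenId, address stakingContract)',
  'event SickleExitedNft(address indexed sickle, uint256 indexed tokenId, address stakingContract)',
];
```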
These interfaces enable the system to:
- Track position entries and exits
- Calculate position values
- Monitor liquidity changes
- Fetch token prices and metadata
- Associate positions with their owners
As we’re targeting EVM compatible networks, the configuration is designed to be network-agnostic, requiring only network-specific addresses and RPC endpoints to be updated for cross-chain deployment.
WebSocket Connection and Event Listening
The system establishes three core connections: a blockchain WebSocket provider, interfaces to the vfat Strategy and Sickle Factory contracts, and an SQS client for message queuing. It monitors two key events: SickleDepositedNft for position entries and SickleExitedNft for exits.
Event Processing
When a deposit occurs, the system validates the transaction signature to confirm it’s a genuine deposit rather than another call that can emit the same event, such as a rebalance. It then gathers position data including NFT details, pool information, and current market state. Token metadata and prices are pulled from the Gecko Terminal API.
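As a sketch, the token lookup against Gecko Terminal can be built as a simple URL. The `v2` path below reflects Gecko Terminal’s public API at the time of writing; verify it against their current documentation before relying on it:

```javascript
// Builds a Gecko Terminal token lookup URL for metadata and prices.
// The path structure is an assumption -- confirm against the API docs.
function tokenMetadataUrl(network, tokenAddress) {
  return `https://api.geckoterminal.com/api/v2/networks/${network}/tokens/${tokenAddress.toLowerCase()}`;
}
```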
The system combines this data into a structured record with identifiers, position parameters, and timestamps. This complete record is then queued through SQS for Lambda processing.
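A minimal sketch of such a record. The field names and the `#` separator in the sort key are assumptions for illustration, not the exact schema from the repository:

```javascript
// Builds the structured record pushed to SQS. The sort key concatenates
// the NFT id and event type, matching the table's composite key design.
function buildPositionRecord({ owner, nftId, eventType, pool, tickLower, tickUpper, liquidity }) {
  return {
    user_address: owner.toLowerCase(), // partition key
    nft_id_type: `${nftId}#${eventType}`, // sort key (separator assumed)
    pool_address: pool,
    tick_lower: tickLower,
    tick_upper: tickUpper,
    liquidity: liquidity.toString(), // stored as string to avoid precision loss
    event_type: eventType,
    timestamp: Date.now(),
  };
}
```

Serializing this object to JSON gives the SQS message body that Lambda later parses.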
Exit processing differs mainly because the NFT is burned during exit. The system fetches position data from the block before the exit, then collects current market conditions and token prices. It creates an exit record with final position state and routes it through SQS.
Lambda Processing
Dependencies
Similar to the event listener, the processor requires minimal package dependencies: ethers.js and two AWS DynamoDB client libraries.
Helper Functions
Uniswap V3’s concentrated liquidity model requires specific mathematical calculations to convert between ticks and prices, and to determine the actual token amounts in a position. These calculations are fundamental to understanding a position’s value: ticks represent the price range in which liquidity is provided, but they must be converted using the formula p(i) = 1.0001^i to get actual prices, while the liquidity amount must be transformed through square-root-of-price calculations to determine the actual quantities of tokens in a position at any given price point. Both calculations are detailed in the Uniswap V3 whitepaper. We implement these methods and also adjust for token decimals to ensure human-readable, intuitive pool prices.
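A sketch of the tick-to-price helper, assuming prices are quoted as token1 per token0:

```javascript
// Converts a Uniswap V3 tick to a human-readable price.
// Raw price p(i) = 1.0001^i is token1-per-token0 in smallest units,
// so we scale by the difference in token decimals.
function tickToPrice(tick, decimals0, decimals1) {
  const rawPrice = Math.pow(1.0001, tick);
  return rawPrice * Math.pow(10, decimals0 - decimals1);
}
```

Floating-point math is adequate for display purposes here; exact on-chain accounting would use fixed-point arithmetic instead.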
Event Processing
The Lambda function processes position events from SQS, handling both entry and exit events. It uses the DynamoDB Document Client for database operations, providing a more intuitive interface for data manipulation.
For position entries, the function transforms raw blockchain data into human-readable format through several steps:
The system calculates actual prices from Uniswap V3 ticks using their mathematical formula. It then determines token amounts based on the position’s liquidity and price range. These calculations convert the technical blockchain data into meaningful financial information before storing in DynamoDB.
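The token-amount calculation can be sketched as follows, using the whitepaper’s square-root-price formulas. Inputs are raw prices (token1 per token0) and the position’s liquidity value; outputs are raw token amounts:

```javascript
// Token amounts held by a concentrated liquidity position, per the
// Uniswap V3 whitepaper. L is the position's liquidity; price, priceLower,
// and priceUpper are raw prices.
function getTokenAmounts(L, price, priceLower, priceUpper) {
  const sp = Math.sqrt(price);
  const sa = Math.sqrt(priceLower);
  const sb = Math.sqrt(priceUpper);
  if (price <= priceLower) {
    // Below the range: position holds only token0
    return { amount0: (L * (sb - sa)) / (sa * sb), amount1: 0 };
  }
  if (price >= priceUpper) {
    // Above the range: position holds only token1
    return { amount0: 0, amount1: L * (sb - sa) };
  }
  // In range: a mix of both tokens
  return {
    amount0: (L * (sb - sp)) / (sp * sb),
    amount1: L * (sp - sa),
  };
}
```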
Exit events require more complex handling since the NFT is burned during the exit process. The function first retrieves the original entry event from DynamoDB to reconstruct the complete position context. It then copies essential data like token addresses, decimals, and symbols to maintain position continuity.
Using this reconstructed data, the function calculates final position values including current prices and token amounts. The processed exit event is then stored in DynamoDB, completing the position’s lifecycle record.
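The handler’s control flow can be sketched as below. The storage call is injected so the example stays self-contained; the real function would call the DynamoDB Document Client’s `put`:

```javascript
// Sketch of the Lambda handler's control flow for SQS batches.
// putItem is injected (in production: DynamoDB Document Client put).
function makeHandler(putItem) {
  return async function handler(event) {
    for (const record of event.Records) {
      const item = JSON.parse(record.body);
      // The composite key (user_address + nft_id_type) makes this write
      // idempotent: reprocessing the same event overwrites the same item.
      await putItem(item);
    }
    return { processed: event.Records.length };
  };
}
```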
Lambda Deployment
The Lambda function requires specific permissions to interact with SQS and DynamoDB. Ensure the Lambda execution role can, at minimum, receive and delete messages from SQS and get and put items in DynamoDB. Deploy the function either via SAM or through the console/CLI. Configure an SQS trigger with an initial batch size of 10, monitoring queue metrics to adjust as needed. This implementation ensures idempotent processing through our DynamoDB composite key design, allowing events to be safely reprocessed without creating duplicates.
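As a sketch, the SQS trigger can be wired up with the AWS CLI; the function name and queue ARN below are placeholders:

```
aws lambda create-event-source-mapping \
  --function-name defi-position-processor \
  --event-source-arn arn:aws:sqs:us-east-1:123456789012:defi-position-events \
  --batch-size 10
```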
Before using this solution in production:
- Review and test these scripts in a secure environment
- Add input validation where necessary
- Implement proper error handling and logging
- Run the scripts with minimal required privileges
Conclusion
Solution Summary
In this blog post, we’ve implemented a simple, event-driven system for tracking DeFi positions using AWS managed services. The solution captures concentrated liquidity position events through a WebSocket connection, processes them using serverless functions, and stores the data in a format optimized for position tracking and analysis. By leveraging Amazon EC2 for reliable event listening, SQS for event buffering, Lambda for serverless processing, and DynamoDB for efficient storage, we’ve created a scalable architecture that can handle the complex requirements of DeFi position tracking while maintaining operational simplicity.
Future Improvements
Frontend Integration
To complete the position tracking system, consider implementing a frontend as a Single Page Application (SPA) built with Svelte, hosted in Amazon Simple Storage Service (Amazon S3), served through Amazon CloudFront, and interfacing with Amazon API Gateway:

This serverless architecture would provide users with real-time position data while maintaining low operational overhead.
Multi-Chain Support
The architecture can be extended to support multiple blockchain networks by either deploying private nodes on EC2 instances or using third-party node providers like ChainStack. This would create a more comprehensive DeFi position tracking system while maintaining the core benefits of our event-driven architecture. The modular nature of the solution makes it straightforward to implement these improvements incrementally as needed.
Check out the complete implementation on GitHub and start building your own real-time DeFi portfolio tracker.