With Dexible, our goal was to build the industry's leading institutional-grade DeFi DEX aggregator: to bring DeFi trading to the forefront and make it accessible to institutions that seriously wanted to push volume. The main challenge was that there was no real way to mitigate risk when trading billions of dollars on chain. Hedge funds wanted to pursue opportunities off major exchanges. It wasn't merely about trading options, futures, and perps on centralized exchanges like Deribit, Bybit, and Binance; it was about deploying funds into DeFi, where they could run simultaneous yield, lending/borrowing, staking/farming, and multiple trading strategies (see Nansen, 2023 and Antier) to complement their centralized portfolios. This world, however, came with substantial risks and extra hurdles.
The only risk mitigation strategy available was to hire a dedicated team of backend engineers, quantitative developers, analysts, and traders, at significant upfront cost, to build and maintain something like an automated and protected trading portfolio. This was resource-intensive and costly. My task as cofounder and product leader was to spearhead a solution that could offer these institutions institutional-grade security and execution. But the architecture required to achieve this was daunting.
Hedge funds were eager to deploy complex strategies—yield farming, staking, lending, and trading—all while maintaining liquidity off CEXs. Yet, DeFi posed significant hurdles: price slippage, price impact, liquidity fragmentation, the lack of robust execution, the lack of multi-conditional trades, and the lack of sophisticated PnL post-trade analysis.
Building this institutional-grade DeFi trading platform was not only a security and execution challenge; it was an architectural one. We needed a comprehensive product roadmap that integrated diverse systems across frontend, backend, DevOps, infrastructure, and smart contract development. Each layer had distinct demands, yet all had to function as a cohesive unit, interconnected at multiple choke points where a single misstep could mean product delays, security vulnerabilities, or execution failures.
After securing our pre-seed funding in Q2 2021, our immediate focus was designing a unified roadmap that could align our frontend, backend, DevOps, infrastructure, and Web 3.0 teams, with successive versioning planned ahead of each technical upgrade to the platform. Dexible was pursuing an extraordinarily ambitious vision: conditional on-chain execution for DeFi trading, which, at the time, hadn't been attempted in any meaningful way. Until then, the market was divided between decentralized exchanges (DEXs) and simple DEX aggregators.
The major DEXs, such as SushiSwap, Uniswap, and Trader Joe, let users participate in multi-sided markets: traders could deposit one or more tokens into liquidity pools to supply liquidity and earn fees from trading volume. The economics of constant product market makers (CPMMs) and liquidity provisioning (aka "LPing" aka "staking") are quite sophisticated (see Mohan, 2022).
The DEX aggregators ran quick queries across these DEXs to determine the optimal routing for a swap. Two crucial challenges arose: price impact and slippage. Price impact refers to the effect a trade has on the asset's price within a liquidity pool on a DEX, while slippage is the difference between the expected and actual execution price (Bitget, 2023). These factors are more pronounced on DEXs because they rely on automated market makers (AMMs) and liquidity pools rather than the order book model used by CEXs (Cointelegraph, 2023). DEX aggregators mitigate these issues by splitting orders across multiple DEXs and routing trades through the most efficient paths. This optimization is crucial because DEXs often suffer from lower liquidity and higher volatility than CEXs, making large trades particularly susceptible to unfavorable price movements. By leveraging multiple liquidity sources and employing sophisticated routing algorithms, DEX aggregators help traders reduce negative price impact, minimize slippage, and ultimately achieve better execution prices and lower transaction costs.
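To make the price-impact problem concrete, here is a minimal sketch of quoting a swap against a constant-product pool (x · y = k). The 0.3% fee and the pool sizes are hypothetical illustration values, not any particular DEX's parameters.

```typescript
// Sketch: quoting a swap against a constant-product pool (x * y = k),
// showing why large orders suffer disproportionate price impact.

interface Pool {
  reserveIn: number;  // reserve of the token being sold
  reserveOut: number; // reserve of the token being bought
  fee: number;        // e.g. 0.003 for a 0.3% swap fee (assumed)
}

// Amount received for `amountIn`, per the constant-product invariant.
function getAmountOut(pool: Pool, amountIn: number): number {
  const inWithFee = amountIn * (1 - pool.fee);
  return (pool.reserveOut * inWithFee) / (pool.reserveIn + inWithFee);
}

// Price impact: how far the realized price falls below the spot price.
function priceImpact(pool: Pool, amountIn: number): number {
  const spot = pool.reserveOut / pool.reserveIn;
  const realized = getAmountOut(pool, amountIn) / amountIn;
  return 1 - realized / spot;
}

const pool: Pool = { reserveIn: 1_000_000, reserveOut: 1_000_000, fee: 0.003 };
console.log(priceImpact(pool, 1_000));   // small trade: ~0.4% impact
console.log(priceImpact(pool, 100_000)); // large trade: ~9.3% impact
```

Splitting the large trade across several pools, or across time, is exactly the lever an aggregator pulls to keep the realized price close to spot.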
Up to that point, the two main aggregators were 1inch and 0x's Matcha. Routing complexity is unique to DEXs because of their decentralized nature and the need to navigate multiple, fragmented liquidity pools across various platforms.
Adding to the complexity, a multi-chain world was clearly inevitable, but we were only in its earliest stages. The cross-chain bridges of the time were painfully slow, error-prone, and repeatedly exploited. Bridges would later ease capital flows and liquidity sourcing for more sophisticated cross-chain operations, but Dexible's design had to be isolated to each chain's own available liquidity, which varied dramatically.
As we brainstormed the future of DEX design, my co-founders and I realized the future lay in automation—traders would need more control over their strategies, from setting specific conditions for trade execution to dynamically adjusting those conditions based on market changes. Our focus was to empower users with advanced features like automated trade execution, visualizing potential price impact, and segmenting the total swap to break down large trades into smaller, more manageable ones. This served as a form of dollar-cost averaging into liquidity positions, ensuring users retained control of their assets until execution, without the need for Dexible to take custody.
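The segmentation idea above can be sketched in a few lines. This is an illustrative example, not Dexible's actual implementation; the rounding policy (remainder into the last tranche) is an assumption.

```typescript
// Sketch: splitting a large order into smaller tranches to reduce
// per-trade price impact, in the spirit of the segmentation feature
// described above. Amounts are in the token's smallest unit (bigint).

// Split `total` into `parts` tranches, folding any division remainder
// into the last tranche so the tranches sum exactly back to `total`.
function segmentOrder(total: bigint, parts: number): bigint[] {
  if (parts < 1) throw new Error("parts must be >= 1");
  const base = total / BigInt(parts);
  const remainder = total % BigInt(parts);
  const tranches = Array<bigint>(parts).fill(base);
  tranches[parts - 1] += remainder; // keep the sum exact
  return tranches;
}

// 1 token with 18 decimals, split into 7 rounds of execution.
const tranches = segmentOrder(1_000_000_000_000_000_000n, 7);
console.log(tranches.reduce((a, b) => a + b, 0n)); // sums back to the total
```

Each tranche then executes only when its conditions are met, which is what turns a single large swap into a dollar-cost-averaged entry.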
What made this particularly challenging was our insistence on maintaining non-custodial control. Most DeFi platforms required traders to grant full custody of their assets to execute trades, but we engineered a solution that allowed traders to maintain control until the exact moment of execution. This was groundbreaking: traders granted our platform only limited spending authority, which Dexible used to trigger automated actions without locking up user funds beforehand.
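The non-custodial idea reduces to a simple invariant: the platform holds only an ERC-20-style spending allowance, so a trade can execute only while the trader has left both the allowance and the funds in place. The sketch below illustrates that gate; the names and shapes are hypothetical, not Dexible's actual contract interface.

```typescript
// Sketch: allowance-gated execution. The trader can revoke the allowance
// or move their balance at any time, and pending rounds simply stop
// executing; the platform never takes custody.

interface ExecutionContext {
  allowance: bigint; // remaining amount the contract may spend
  balance: bigint;   // trader's current token balance
}

// A tranche is executable only if both allowance and balance cover it.
function canExecute(ctx: ExecutionContext, amount: bigint): boolean {
  return ctx.allowance >= amount && ctx.balance >= amount;
}

// Executing a tranche consumes allowance and balance together.
function consume(ctx: ExecutionContext, amount: bigint): ExecutionContext {
  if (!canExecute(ctx, amount)) throw new Error("allowance or balance exhausted");
  return { allowance: ctx.allowance - amount, balance: ctx.balance - amount };
}
```

The design choice here is that revocation is free for the trader: killing the allowance on chain is enough to halt all future automated actions.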
As the technical product manager, I was responsible for overseeing competing priorities across all teams. I broke down our year-long roadmap into quarterly milestones and aligned the team’s efforts to ensure we hit strategic KPIs. To maximize efficiency, we employed a hybrid model of agile development, focusing on 2-week sprints where each unit's progress reinforced the overall system’s momentum.
Each sprint was treated as a tactical move in a larger optimization problem—balancing resource constraints, technical complexity, and deadlines. By dissecting the development cycle and focusing on the most critical bottlenecks (whether related to data, infrastructure, or on-chain execution challenges), we were able to resolve high-priority issues early, accelerating our overall delivery timeline.
I was constantly reviewing input from my co-founders and team leads, prioritizing efforts that would yield 80% of the impact from 20% of the work—my version of applying Pareto principles to development. It wasn’t just about shipping code; it was about building the foundations that would enable us to scale the platform, hit customer satisfaction goals, and continuously improve backend performance.
To deliver a fast, scalable, and reliable product, I worked closely with our VP of Engineering and CTO to ensure our infrastructure supported both our rapid iteration cycles and the demands of institutional-grade users.
First, we focused on building out our CI/CD pipeline. Together with our VP of Engineering, I decided that Jenkins would serve as the backbone for our continuous integration and deployment processes. Jenkins was critical to automating testing and deployments across our system. We integrated Postman for API testing and used Selenium for full frontend testing, while backend unit tests were built with Mocha and Chai for our Node.js services. My involvement in prioritizing these tests was essential—I worked closely with the team to focus on high-risk features first, ensuring any bugs or inconsistencies were identified early before they became production issues.
We containerized our testing environments using Docker, replicating production conditions to ensure accurate testing results. This was a key part of ensuring our sprints could move forward without regressions—each environment mirrored production with precision. We deployed Kubernetes for container orchestration, ensuring the platform could scale dynamically with user demand.
In addition, I collaborated with our VP of Engineering to integrate Elasticsearch for real-time log monitoring, and Grafana for system health visualization. Grafana became our central hub for tracking KPIs tied to the roadmap, and I helped design the dashboards to monitor key metrics like gas overages, performance bottlenecks, and infrastructure stability. This allowed me to adjust sprint priorities based on live infrastructure feedback and ensure our teams were always addressing the most pressing challenges.
One essential use case we discovered through Grafana was highlighting our gas overages, which would prove a crucial design hurdle for automated conditional execution: setting a transaction cost budget against future, undetermined network states.
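One way to frame that hurdle: since gas prices at execution time cannot be known when a conditional order is created, the order can carry a cost cap and the executor can skip rounds that would exceed it. The sketch below shows that guard under assumed field names; it is illustrative, not Dexible's actual policy engine.

```typescript
// Sketch: guarding a conditional order against gas overage. The order
// carries caps set at creation time; at execution time the executor
// compares them against a live gas quote and a gas-unit estimate.

interface GasPolicy {
  maxFeePerGasWei: bigint; // cap on the unit gas price
  maxTotalCostWei: bigint; // cap on the whole transaction's cost
}

// Decide whether to submit this round given current network conditions.
function withinGasPolicy(
  policy: GasPolicy,
  currentFeePerGasWei: bigint,
  estimatedGasUnits: bigint,
): boolean {
  if (currentFeePerGasWei > policy.maxFeePerGasWei) return false;
  return currentFeePerGasWei * estimatedGasUnits <= policy.maxTotalCostWei;
}
```

A skipped round is simply retried on a later evaluation cycle, so a gas spike delays execution rather than silently eroding the trader's PnL.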
Our CTO was leading the charge on cloud infrastructure, deploying our services using Amazon EKS (Elastic Kubernetes Service). I worked alongside him to ensure that our infrastructure could scale elastically while maintaining high availability. He used CloudFormation to manage infrastructure as code, allowing us to replicate environments effortlessly across testing, staging, and production.
We also implemented Amazon RDS with multi-AZ support, ensuring database redundancy and high availability. I coordinated with the CTO on optimizing our database queries to handle the complexity of syncing on-chain and off-chain data. The result was a backend architecture capable of real-time synchronization with on-chain events, all while supporting institutional-level traffic.
One of the biggest challenges I faced was managing on-chain state while syncing it seamlessly with off-chain services. Our Solidity smart contracts handled core financial operations such as asset swaps, liquidity pooling, and fee distribution. I worked closely with our Web 3.0 and backend teams to establish a bidirectional flow of data between web3.js and our Node.js backend, which allowed us to listen for on-chain events and trigger corresponding updates in real time.
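The core of that bidirectional flow is a handler that translates a decoded on-chain event into the off-chain record the backend persists. The event shape and record fields below are hypothetical stand-ins for illustration, not Dexible's actual schema.

```typescript
// Sketch: mapping a decoded swap event (the kind a web3.js subscription
// delivers to a Node.js service) into the record the backend stores.

interface SwapExecutedEvent {
  txHash: string;
  orderId: string;
  amountIn: bigint;
  amountOut: bigint;
  blockNumber: number;
}

interface OrderFillRecord {
  orderId: string;
  txHash: string;
  filledIn: string;  // bigints serialized as strings for storage
  filledOut: string;
  confirmedAtBlock: number;
}

function toFillRecord(ev: SwapExecutedEvent): OrderFillRecord {
  return {
    orderId: ev.orderId,
    txHash: ev.txHash,
    filledIn: ev.amountIn.toString(),
    filledOut: ev.amountOut.toString(),
    confirmedAtBlock: ev.blockNumber,
  };
}
```

Keeping this mapping pure made it easy to unit-test the sync logic separately from the event subscription plumbing.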
We chose Amazon DynamoDB for off-chain storage of unstructured data—like transaction metadata and user sessions—while Amazon Aurora (RDS) handled more structured, relational data like transaction histories and financial metrics. I worked directly with our engineers to optimize queries, reducing latency and ensuring that on-chain state changes were reflected instantly on the backend.
On the frontend, we built with Next.js and TypeScript to ensure type safety and scalability. The frontend had to interact seamlessly with both on-chain data and our backend APIs. I was responsible for overseeing the integration of websockets and GraphQL to ensure real-time data updates across the system. Through these systems, users could view balance changes, transaction histories, and liquidity positions as they happened.
Initially, we used Redux for state management, but as our system complexity grew—particularly in handling real-time data streams—we transitioned to React’s Context API. This shift reduced our reliance on boilerplate-heavy Redux and simplified our approach to state management. Using React hooks, we streamlined asynchronous data fetching and side-effect handling, which dramatically improved both developer velocity and frontend performance.
By ensuring smooth interaction between Next.js and our Node.js backend, I made certain our RESTful APIs and TypeScript-powered frontend operated in harmony. I worked closely with both teams to refine our API endpoints so that frontend requests and on-chain data pulls were processed with minimal latency, improving the overall user experience.
Tracking user behavior was essential to ensure Dexible’s continuous improvement. We implemented PostHog for event-based tracking on the frontend and Google Analytics to gather broader platform usage metrics. PostHog integrated seamlessly with our Next.js frontend, capturing key user interactions—like trade submissions and order adjustments—which gave us granular insights into user engagement and friction points.
On the backend, Google Analytics allowed us to track traffic patterns and identify trends across the platform, such as frequent order abandonments or popular trading strategies. By combining these tools, we could track both quantitative metrics from Google Analytics and qualitative event-level data from PostHog, giving us a comprehensive view of trader behavior.
We used these insights to optimize the platform. If data revealed high order abandonment rates at the confirmation screen, we would dive into the flow, making changes to simplify the process or provide clearer information about slippage or gas fees. A/B testing validated whether our adjustments improved engagement, letting us iterate efficiently.
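The kind of quick check behind those A/B decisions is a two-proportion comparison of completion rates at the confirmation step. This is a generic statistical sketch, not Dexible's actual analytics code; the numbers are made up for illustration.

```typescript
// Sketch: two-proportion z-score comparing completion rates between an
// A/B test's control (a) and treatment (b) at the confirmation screen.

interface Variant {
  completed: number; // users who confirmed the order
  started: number;   // users who reached the confirmation screen
}

function abZScore(a: Variant, b: Variant): number {
  const pA = a.completed / a.started;
  const pB = b.completed / b.started;
  // Pooled proportion under the null hypothesis of no difference.
  const pooled = (a.completed + b.completed) / (a.started + b.started);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / a.started + 1 / b.started));
  return (pB - pA) / se;
}

// Hypothetical result: 10% vs 13% completion over 1,000 sessions each.
console.log(abZScore({ completed: 100, started: 1000 }, { completed: 130, started: 1000 }));
```

A z-score above roughly 1.96 is the conventional threshold for calling the difference significant at the 5% level, which is how a change would graduate from experiment to default.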
For more on how we reduced order abandonment and enhanced UX, I invite you to read my piece on frontend design strategies.
The success of this roadmap lay in the balance between rigidity and flexibility, a strategic paradox. The year-long plan provided clear direction, yet the 2-week sprints allowed continuous adjustment based on real-time data and market shifts. This mirrors the concept of adaptive planning often seen in decision theory, where long-term success depends on continuously refining short-term tactics.
Each sprint generated new insights, creating a feedback loop that continuously refined both the product and the process. The accelerated iteration wasn't just about speed; it was about the velocity of learning. This approach can be applied across industries: build a system where feedback is continuous, actions are decisive, and outcomes are measurable.
In retrospect, executing this roadmap was akin to orchestrating a finely tuned complex adaptive system. Each part of the system, frontend, backend, and DevOps, functioned autonomously yet interdependently, driven by a shared goal. The result was a faster, more agile development process, with high-impact results that resonated in both the short and long term.