Product Updates

From 0 to 45 Bots in 90 Days: Building the Platform

QuantForge Team · April 3, 2026 · 10 min read

In January 2026, we had an idea: build a quantitative crypto trading platform that could systematically discover, validate, and deploy trading strategies. No cloud dependency, no third-party API key exposure, no black-box strategies. Everything self-hosted, everything transparent, everything testable.

Ninety days later, we have 45 paper trading bots running on $45K of simulated capital across 9 strategies and 13 symbols. Forty strategies have been built and tested. A distributed backtesting infrastructure processes hundreds of parameter combinations across two worker nodes. A five-layer risk hierarchy monitors every trade. AI enrichment adjusts signal confidence in real time. And a React dashboard provides full visibility into every aspect of the system.

Here is how we got there.

Month 1: Foundation

The first month was infrastructure. The choices made here would constrain or enable everything that followed, so we optimized for iteration speed and correctness rather than performance.

The core decision was Python 3.11 with a single-process architecture. Everything (the bot engine, the API server, the scheduler, and the data layer) runs in one Python process. Communication happens through an asyncio event bus with in-process queues. This sounds limiting, but for a single-operator system managing 45 bots, it is more than sufficient and dramatically simpler than a distributed architecture.
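To make the pattern concrete, here is a minimal sketch of an in-process asyncio event bus of the kind described above. The class, method, and topic names are illustrative assumptions, not the platform's actual API.

```python
import asyncio
from collections import defaultdict
from typing import Any

class EventBus:
    """Toy in-process pub/sub bus; illustrative only."""

    def __init__(self) -> None:
        self._subscribers: dict[str, list[asyncio.Queue]] = defaultdict(list)

    def subscribe(self, topic: str) -> asyncio.Queue:
        # Each subscriber gets its own queue so slow consumers don't block others.
        queue: asyncio.Queue = asyncio.Queue()
        self._subscribers[topic].append(queue)
        return queue

    async def publish(self, topic: str, payload: Any) -> None:
        for queue in self._subscribers[topic]:
            await queue.put(payload)

async def demo() -> str:
    bus = EventBus()
    inbox = bus.subscribe("signal.generated")
    await bus.publish("signal.generated", {"symbol": "BTCUSDT", "side": "buy"})
    event = await inbox.get()
    return event["symbol"]

print(asyncio.run(demo()))  # BTCUSDT
```

Because every component lives in the same process, publishing is just an `await queue.put(...)`: no serialization, no broker, no network failure modes.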

SQLite with WAL mode handles all persistence. A single database file stores 20.8 million candles across 11 timeframes, along with trades, positions, equity snapshots, risk events, and backtest results. WAL mode provides concurrent reads with a single writer, which matches our access pattern perfectly.
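Enabling WAL mode is a one-line pragma on the connection. The sketch below shows the idea; the file path and table schema are illustrative, not the platform's actual schema.

```python
import os
import sqlite3
import tempfile

# Illustrative database path; WAL requires a file-backed database.
db_path = os.path.join(tempfile.mkdtemp(), "quantforge.db")
conn = sqlite3.connect(db_path)

# WAL mode: readers don't block the single writer, and vice versa.
journal_mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]

conn.execute("""
    CREATE TABLE IF NOT EXISTS candles (
        symbol     TEXT NOT NULL,
        timeframe  TEXT NOT NULL,
        open_time  INTEGER NOT NULL,
        open REAL, high REAL, low REAL, close REAL, volume REAL,
        PRIMARY KEY (symbol, timeframe, open_time)
    )
""")
conn.execute(
    "INSERT INTO candles VALUES ('BTCUSDT', '1h', 0, 100.0, 105.0, 96.0, 104.0, 10.0)"
)
conn.commit()
print(journal_mode)  # wal
```

The composite primary key on (symbol, timeframe, open_time) is a natural fit for candle data: inserts are idempotent per bar and range scans by symbol and timeframe stay fast.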

FastAPI serves the REST API and SSE endpoint. The API binds to localhost by default for security, with the option to bind to all interfaces when running distributed workers. Bearer token authentication protects all endpoints.

The React frontend uses Vite, TypeScript, and Tailwind CSS with Zustand for state management. TradingView Lightweight Charts render candlestick data. The UI was built to be functional first and polished later.

Month 2: Strategies and Backtesting

With the foundation in place, month two focused on building and testing strategies. The strategy framework uses a simple ABC: every strategy must implement an analyze method that takes candles and context and returns a signal or nothing. This contract enforced discipline from the start. Strategies produce signals. They never manage positions, execute orders, or check risk. Separation of concerns, enforced by the interface.

We built strategies in waves. The first wave was classical technical analysis: Bollinger Band mean reversion, RSI/MACD momentum, Donchian breakout, EMA crossover, VWAP reversion, and several others. Nineteen classical TA strategies in total.

The backtesting engine runs historical replay with no lookahead. Slippage and fees are modeled. Stop losses are checked against candle highs and lows. Walk-forward validation tests strategies across five market regime periods. Monte Carlo simulation shuffles trade sequences to generate confidence intervals.

The five-stage pipeline emerged during this month: tournament screening at default parameters, parameter sweep for calibration, Phase 2 refinement around winners, Phase 3 symbol-specific optimization, and regime validation. Each stage answers a different question, and each stage eliminates strategies that would otherwise waste capital.

The distributed worker infrastructure was built out of necessity. Running hundreds of parameter sweeps on a MacBook was too slow. We set up two worker VMs (one with 14 parallel processes, one with 6) that poll the coordinator API, claim tasks, and return results. This cut sweep times from days to hours.

Month 3: Advanced Strategies and Deployment

The third month expanded the strategy catalog beyond classical TA. We built four derivatives strategies using open interest, funding rates, long/short ratios, and spot-futures basis data. Five statistical strategies using wavelet decomposition, Ornstein-Uhlenbeck processes, Kalman filters, Hurst exponents, and Hidden Markov Models. Four cross-asset macro strategies using BTC dominance, fear and greed index, altcoin season indicators, and stablecoin market cap. Four on-chain analytics strategies using NUPL, SOPR, exchange flows, and stablecoin supply momentum.

The data pipeline expanded alongside the strategy development. We integrated Coinglass for derivatives and macro data (24 data types across 24 tables, auto-synced every 4 hours). Binance perpetual futures data for funding rates, open interest, long/short ratios, and premium index. WebSocket listeners for real-time liquidation data.

Of the 40 strategies tested, the validation pipeline identified 6 as deploy-ready. Mean reversion with Bollinger Bands on 13 altcoins (Sharpe 9-19). RSI/MACD momentum on 5 altcoins (Sharpe 3.5-7.8). 4-hour momentum on BTC, ETH, SOL, ADA, SHIB, AVAX (Sharpe 1.7-3.9). Leverage composite on ARB, OP, WIF (Sharpe 1.89-3.02). Correlation regime on 6 altcoins (Sharpe 0.11-0.62). NUPL cycle filter on 7 symbols (positive across 4-5 of 5 regime periods). Stablecoin supply momentum on 5 symbols.

The deployment process is automated. A setup script creates bots via the API with sweep-winner parameters, assigns them to the appropriate strategy and symbol, sets capital allocation to $1K per bot, and optionally starts them immediately. The bots run inside the API server process, scheduled by APScheduler to tick at their configured timeframe interval.
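The payload such a setup script might POST per bot looks roughly like this. The field names, strategy name, and parameter values are assumptions for illustration, not the platform's actual schema.

```python
def make_bot_config(strategy: str, symbol: str, params: dict,
                    capital_usd: float = 1_000.0,
                    autostart: bool = False) -> dict:
    """Build one bot-creation payload for the setup script to POST."""
    return {
        "strategy": strategy,
        "symbol": symbol,
        "parameters": params,        # sweep-winner parameters from the pipeline
        "capital_usd": capital_usd,  # $1K per bot, per the allocation above
        "autostart": autostart,      # optionally start the bot immediately
    }

cfg = make_bot_config("bollinger_mean_reversion", "SOLUSDT",
                      {"period": 20, "num_std": 2.0}, autostart=True)
print(cfg["capital_usd"], cfg["autostart"])  # 1000.0 True
```

Driving deployment through the same API the UI uses means the setup script exercises the exact code path a manual bot creation would, so there is no second, divergent deployment mechanism to maintain.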

Architecture Decisions We Would Repeat

The single-process architecture was the right call. It eliminated an entire category of bugs (network partitions, message ordering, distributed state) and made debugging trivial. When something goes wrong, there is one process and one log file to examine.

SQLite was the right call. For a single-operator system with fewer than 100 concurrent bots, a full database server adds operational complexity with zero benefit. WAL mode handles our read/write pattern. The entire database backs up as a single file.

The event bus pattern was the right call. Publishing events at every state transition (signal generated, order placed, order filled, position opened, position closed, risk rejected) creates a complete audit trail and enables the SSE endpoint to push real-time updates to the UI without polling.

The strict strategy interface was the right call. By forcing strategies to be pure signal generators with no side effects, we made them trivially testable and composable. The same strategy code runs in backtests, paper trading, and live execution without modification.

What We Would Change

The validation pipeline should have been built earlier. We spent time optimizing strategies based on sweep results that later collapsed in validation. If validation had been available from day one, we would have caught overfitted strategies earlier and focused development time on the strategies that actually work.

The distributed worker setup was more complex than necessary. We could have used a simpler job queue (Redis + RQ) instead of building a custom claim-based system. The custom system works, but it required solving race conditions, stale claim detection, and heartbeat monitoring that an off-the-shelf queue would have handled.

The UI should have been built after the trading system was stable, not in parallel. Changes to the API during strategy development required constant UI updates. Building the UI last would have produced a cleaner interface with less rework.

Current State: April 2026

The platform today runs 45 bots across 9 strategies, 13 symbols, and 4 timeframes. Total simulated capital is $45K with $1K per bot. The bots run against live mainnet Binance prices in paper mode (real prices, simulated fills).

The backtesting infrastructure has processed over 15,000 individual backtests across the strategy catalog. The worker fleet handles sweeps and validations routinely, with job tracking and resume capability for long-running tasks.

The risk system monitors all 45 bots in real time. Portfolio exposure, drawdown, correlation, and per-bot metrics are computed at every tick. Risk events are persisted to an audit trail. The UI provides three risk views: live risk monitoring, strategy risk profiles, and a risk event log.

AI enrichment runs on every signal. Claude adjusts confidence based on sentiment analysis. Ollama runs local anomaly detection. The AI contribution is bounded and advisory, never autonomous.

The strategy catalog contains 40 implementations across six categories. Six are deployed, several more are queued for further testing, and the remainder are archived with their validation results preserved for reference.

From zero to 45 bots in 90 days. The platform is not done. It will likely never be done. But it is operational, systematic, and producing data that will inform the decision to deploy real capital when the paper trading results provide sufficient confidence.