Xavier Puig / Product Management · User Experience · AI Systems

What optimising a trading bot taught me about product decisions

A few lessons that apply well beyond trading.

What this experiment is about

I recently built a momentum rotation bot for crypto. I've been around crypto for years, but never managed to trade it profitably. At some point it became obvious that without a system, I'd always be reacting. AI coding tools made it easy enough to actually try. So I built one. And I thought the hard part would be the build. It wasn't.

You're often testing your bias, not the system

I started with a mental model that felt solid. Crypto is momentum-driven. Coins trend. Buy strength, sell weakness, manage risk. I used familiar tools: moving averages, stops, position sizing. I picked a watchlist based on projects I believed in. Backtested over two years. Results looked good. I deployed.
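In rough terms, the starting setup can be sketched like this. It's a minimal illustration, not the bot's actual code: the function names, windows, and ranking rule are all stand-ins for the general pattern of "trend gate plus momentum ranking".

```python
# Sketch of a momentum rotation signal (illustrative only).
# Gate each coin on a fast/slow moving-average uptrend, then
# hold the coin with the strongest recent momentum.

def sma(prices, window):
    """Simple moving average of the last `window` prices."""
    return sum(prices[-window:]) / window

def in_uptrend(prices, fast=10, slow=30):
    """Trend gate: fast SMA above slow SMA."""
    return sma(prices, fast) > sma(prices, slow)

def momentum(prices, lookback=14):
    """Return over the lookback window."""
    return prices[-1] / prices[-lookback] - 1.0

def pick_strongest(watchlist, lookback=14):
    """Among coins in an uptrend, pick the one with the highest momentum."""
    candidates = {coin: momentum(p, lookback)
                  for coin, p in watchlist.items() if in_uptrend(p)}
    return max(candidates, key=candidates.get) if candidates else None
```

Every choice in there, the windows, the lookback, even which coins are in the watchlist, is a parameter I set by feel at the start.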

Only later did I realise what I'd actually done. I hadn't tested a strategy. I'd built a system that confirmed what I already believed. The parameters felt right because they were familiar, but I wasn't validating a hypothesis. I was measuring my bias.

The biggest gains came from what I wasn't looking for

The real learning came from optimisation. I ran a grid search across dozens of parameter combinations. I expected my original setup to hold up. It didn't. It ranked near the bottom.
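The mechanics of a grid search are simple enough to sketch. Here the parameter names and the backtest function are stand-ins; the point is just the exhaustive sweep and the ranking:

```python
# Sketch of a parameter grid search: run a backtest for every
# combination in the grid and rank the results, best first.
from itertools import product

def run_grid(backtest, grid):
    """Evaluate `backtest(**params)` for every combination in `grid`.

    `grid` maps parameter names to lists of candidate values.
    Returns (score, params) pairs sorted by score, descending.
    """
    keys = list(grid)
    results = []
    for values in product(*(grid[k] for k in keys)):
        params = dict(zip(keys, values))
        results.append((backtest(**params), params))
    return sorted(results, key=lambda r: r[0], reverse=True)
```

A grid like `{"fast": [5, 10, 20], "slow": [30, 50], "stop_pct": [0.05, 0.10]}` already gives twelve combinations; dozens add up quickly, but each run is mechanical, which is exactly why it's honest.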

But the most important finding wasn't a parameter tweak. It was a dimension I hadn't even considered. Allowing the bot to rotate out of a position mid-trade if a stronger signal appeared elsewhere improved performance more than any fine-tuning. Not because I optimised better. But because I expanded the search space.
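The rotation rule itself is a small piece of logic. Sketched with hypothetical names, and with a minimum-edge threshold as one plausible way to avoid churning on noise:

```python
# Sketch of a mid-trade rotation check (hypothetical names and threshold).
# Only swap into another coin if its signal beats the current holding's
# by a minimum edge, so small score wobbles don't trigger churn.

def should_rotate(current, candidate, scores, edge=0.05):
    """True if `candidate`'s momentum score beats `current`'s by at least `edge`."""
    return scores[candidate] > scores[current] + edge
```

A handful of lines, yet adding this dimension moved the results more than any amount of tuning the existing knobs.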

That's the thing about unknown unknowns. You don't find them by refining what you already know. You find them by exploring.

The hardest part: removing logic that feels right

The most difficult decision came from something that looked like a clear flaw. The bot gets stopped out mid-trend. The daily trend is still intact. Price is already bouncing. It feels wrong to stay out. So I built re-entry logic. A lot of it.

Re-enter on recovery. Re-enter on lower timeframe confirmation. Re-enter conditionally based on alternative signals. Over ten variations tested across two years of data.
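To make the first variant concrete, here's roughly what "re-enter on recovery" looks like. The names and thresholds are hypothetical; the tested rules were more involved:

```python
# Sketch of a "re-enter on recovery" rule (hypothetical thresholds).
# After a stop-out, re-enter once price has recovered a margin above
# the old stop level, provided the daily trend is still up.

def should_reenter(price, stop_level, daily_trend_up, recovery_pct=0.02):
    """True if price has cleared the old stop by `recovery_pct` and the daily trend holds."""
    return daily_trend_up and price > stop_level * (1 + recovery_pct)
```

The rule looks sensible on paper, which is precisely what made the backtest results so uncomfortable.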

Every single one underperformed doing nothing. Consistently. By the time any re-entry signal confirmed continuation, the move was already exhausted. The system kept buying the tail, not the beginning.

What made this hard wasn't the data. The data was clear. What made it hard was that the intuition is directionally correct. The multi-timeframe mismatch is real. The logic made sense. But in practice, it didn't work. And the more you invest in something, the harder it is to remove it.

Removing the re-entry logic entirely produced the best results.

What I'd do differently

Run the grid search before going live, not after. I spent days making decisions that felt reasonable. The grid took hours and invalidated most of them.

What this taught me about product

Optimisation rarely finds magic. Most reasonable starting points are already close.

What it actually gives you is understanding. The more you test, the more you see how the system works. Where it breaks. What actually drives outcomes. What doesn't matter at all. And that changes the questions you ask. You stop trying to optimise the knobs you already know and start looking for dimensions you hadn't considered. That's where the real gains come from.

And it only comes from invalidating biases. Not from refining your initial idea, but from escaping it.


This bot is a side project I run for my own account. Nothing here is financial advice.
