My first backtest showed 400% annual returns.
I was going to be rich. Obviously.
Then I ran it live. Lost 30% in the first month.
What happened? I fooled myself. And I'm going to teach you how not to make the same mistake.
Backtesting is simple: test your strategy on historical data to see how it would have performed.
If it worked in the past, maybe it'll work in the future. Right?
Sort of. But there are traps everywhere.
Curve fitting is the #1 backtest killer.
Curve fitting means optimizing your strategy to fit historical data so perfectly that it can't work on new data.
Here's how it happens: you spot a losing stretch in the backtest, add a condition to avoid it, and repeat. Each tweak makes the backtest look better. But you're not improving the strategy. You're fitting it to past data.
The more you optimize, the worse it performs on new data.
Never test on the same data you used to develop the strategy.
Split your data:
- In-sample (for example, the first 70%): develop and tune the strategy here.
- Out-of-sample (the remaining 30%): test once, and don't touch it during development.
If the strategy works on out-of-sample data, it might be real. If it only works on in-sample data, it's curve-fitted.
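In code, a chronological split is only a few lines. A minimal sketch in Python, assuming your data is an ordered list of price bars (the 70/30 ratio and the stand-in bar list are just examples):

```python
def split_data(bars, in_sample_frac=0.7):
    """Split price bars chronologically -- never randomly, because
    shuffling would leak future information into the development set."""
    cut = int(len(bars) * in_sample_frac)
    return bars[:cut], bars[cut:]

bars = list(range(1000))                   # stand-in for 1000 price bars
in_sample, out_of_sample = split_data(bars)
print(len(in_sample), len(out_of_sample))  # 700 300
```

The key point is the chronological cut: the out-of-sample slice always comes strictly after the in-sample slice in time.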
Walk-forward testing is even better than simple out-of-sample testing.
This simulates how you'd actually use the strategy: develop, test, adjust, repeat.
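A walk-forward loop can be sketched like this, assuming you supply your own `fit` (optimize) and `evaluate` functions; the window sizes and the toy demo below are placeholders, not recommendations:

```python
def walk_forward(bars, train_size, test_size, fit, evaluate):
    """Optimize on a rolling window, then score on the slice that
    immediately follows it -- develop, test, roll forward, repeat."""
    results = []
    start = 0
    while start + train_size + test_size <= len(bars):
        train = bars[start:start + train_size]
        test = bars[start + train_size:start + train_size + test_size]
        params = fit(train)                      # optimize ONLY on past data
        results.append(evaluate(test, params))   # score on unseen data
        start += test_size                       # roll the window forward
    return results

# Toy demo: "fit" returns the window mean, "evaluate" measures drift.
bars = list(range(100))
scores = walk_forward(bars, train_size=50, test_size=10,
                      fit=lambda train: sum(train) / len(train),
                      evaluate=lambda test, p: sum(test) / len(test) - p)
print(len(scores))  # 5 rolling windows
```

If the scores are wildly inconsistent from window to window, that is the same red flag as a curve-fitted backtest.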
The more parameters, the more curve-fitting risk.
A strategy with 3 parameters is more robust than one with 15.
If you need 15 conditions for a strategy to work, it's probably not a real edge.
Your backtest should include realistic execution costs:
- Trading fees
- Slippage
- Bid-ask spread
A strategy showing 30% returns with perfect execution might show 10% with realistic assumptions. Or negative.
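A rough way to model this is to subtract a fixed cost per side from every trade. The 0.1% fee and 0.05% slippage figures below are illustrative assumptions, not recommendations:

```python
FEE = 0.001        # 0.1% per side (assumed, check your venue)
SLIPPAGE = 0.0005  # 0.05% per side (assumed, depends on liquidity)

def net_return(gross_return):
    """Entry and exit each pay fee + slippage: one round trip costs
    2 * (FEE + SLIPPAGE) of the position."""
    return gross_return - 2 * (FEE + SLIPPAGE)

gross_trades = [0.02, -0.01, 0.015, 0.005]
net_trades = [net_return(r) for r in gross_trades]
print(sum(gross_trades), sum(net_trades))  # costs eat a chunk of the edge
```

Notice that a 0.5% winner becomes a 0.2% winner under these assumptions; strategies built on many small wins are hit hardest.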
Statistical significance requires sample size.
A strategy with 10 trades isn't validated. It could be luck.
I want at least 100 trades in my backtest. Preferably 200+.
Win rate: the percentage of trades that are winners.
50-60% is good for most strategies. Higher is suspicious (might be curve-fitted).
Profit factor: total profits divided by total losses.
Above 1.5 is good. Above 2.0 is excellent. Above 3.0 is suspicious.
Maximum drawdown: the largest peak-to-trough decline.
This tells you the worst-case scenario. Can you stomach a 30% drawdown? 50%?
Sharpe ratio: risk-adjusted returns. Higher is better.
Above 1.0 is acceptable. Above 2.0 is good. Above 3.0 is suspicious.
Consistency: are returns consistent across different periods, or did one lucky month drive all the profits?
I look at monthly returns. I want to see consistent positive months, not one huge month and many losers.
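The core metrics above are easy to compute from a list of per-trade returns. A sketch, with a made-up sample trade list:

```python
def win_rate(trades):
    """Fraction of trades with a positive return."""
    return sum(1 for t in trades if t > 0) / len(trades)

def profit_factor(trades):
    """Total gains divided by total losses (as a positive number)."""
    gains = sum(t for t in trades if t > 0)
    losses = -sum(t for t in trades if t < 0)
    return gains / losses if losses else float("inf")

def max_drawdown(trades):
    """Largest peak-to-trough decline of the cumulative return curve."""
    equity, peak, worst = 0.0, 0.0, 0.0
    for t in trades:
        equity += t
        peak = max(peak, equity)
        worst = max(worst, peak - equity)
    return worst

trades = [0.02, -0.01, 0.015, -0.02, 0.01, 0.03, -0.005]
# Win rate ~57%, profit factor ~2.1, max drawdown ~2% for this sample.
print(win_rate(trades), profit_factor(trades), max_drawdown(trades))
```

Sharpe is computed from period returns rather than per-trade returns, so it needs your equity curve sampled at a fixed interval (daily or monthly).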
Watch for these red flags. 500% annual returns with 5% drawdown? Nope.
Real strategies have realistic returns and real drawdowns. If it looks too good, it's curve-fitted.
One month +50%, next month -30%, then +40%, then -25%.
This is noise, not edge. Real edges are more consistent.
Change one parameter slightly and results collapse?
That's a fragile strategy. It won't survive real markets.
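A simple robustness probe: nudge one parameter and see whether the backtest score stays in the same ballpark. This assumes you wrap your backtest in a function mapping parameters to a score; `run_backtest` and the `ma` parameter below are placeholders:

```python
def sensitivity(run_backtest, base_params, key, deltas):
    """Re-run the backtest with one parameter nudged by each delta.
    Returns the base score and a {delta: score} map."""
    base = run_backtest(base_params)
    nearby = {}
    for d in deltas:
        tweaked = dict(base_params)
        tweaked[key] = base_params[key] + d
        nearby[d] = run_backtest(tweaked)
    return base, nearby

# Toy demo: a score function peaked at ma=20 (purely illustrative).
base, nearby = sensitivity(lambda p: -(p["ma"] - 20) ** 2,
                           {"ma": 20}, "ma", deltas=[-2, -1, 1, 2])
print(base, nearby)  # 0 {-2: -4, -1: -1, 1: -1, 2: -4}
```

A robust strategy degrades gracefully as the parameter moves; a fragile one falls off a cliff.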
Amazing in 2021 bull market. Terrible in 2022 bear market.
The strategy is regime-dependent. It'll fail when conditions change.
20 trades over 3 years? Not enough data.
You need statistical significance. That requires volume.
Here's my actual workflow:
Before any testing, I have a hypothesis.
"Price tends to bounce at support levels, especially on the second touch."
This is based on market logic, not data mining.
I translate the hypothesis into simple, specific rules.
Maximum 3-5 conditions. No more.
Test on in-sample data. Does it make sense? Is there an edge?
I'm not looking for amazing returns. I'm looking for a positive expectancy.
Test on data the strategy has never seen.
If it still works, proceed. If not, back to the drawing board.
Add fees, slippage, spread. Does it still work?
Many strategies die at this step. That's okay. Better to know now.
Run the strategy in real-time with fake money.
This catches issues backtesting misses: execution problems, data issues, edge cases.
Real money, tiny size. Verify everything works.
Only after this do I scale up.
What do I use?
For simple strategies:
For complex strategies:
For execution:
The tool matters less than the process. A simple spreadsheet backtest done correctly beats a sophisticated platform used incorrectly.
Here's the mindset shift that changed everything:
Backtesting is not about finding profitable strategies. It's about eliminating unprofitable ones.
Most strategies don't work. Backtesting helps you discover this before losing real money.
When a backtest fails, that's a success. You learned something without losing money.
When a backtest succeeds, be skeptical. It might be curve-fitted. Validate thoroughly.
The journey from backtest to live trading:
1. Backtest with realistic costs
2. Validate on out-of-sample data
3. Paper trade in real time
4. Go live with small size
5. Scale up gradually
Each step filters out bad strategies. By the time you're trading real size, you have confidence.
dashpull helps with steps 3-5. Set up your conditions. Paper trade them. Go live with small size. Scale up.
Backtesting is essential. But it's also dangerous.
Done wrong, it gives you false confidence in strategies that will fail.
Done right, it filters out bad strategies and validates good ones.
The keys: form a hypothesis before testing, keep parameters few, validate out-of-sample, include realistic costs, demand a large trade sample, and stay skeptical of results that look too good.
dashpull helps me take validated strategies live. Conditions defined. Execution automated. Confidence earned through proper testing.
Don't fool yourself. Test properly. Then trade.
Ready to take your tested strategy live? Try dashpull →