
You Don't Need to Code to Backtest a Strategy

backtesting · no-code · strategy · beginner

I avoided backtesting for two years.

Not because I didn't understand the concept. The concept is simple: take a trading rule, run it against historical data, see if it would have made money. Any serious trader will tell you it's essential. The books say so. The forums say so. The traders who actually make money say so.

I avoided it because every backtesting tool I found wanted me to write code.

Pine Script. MQL4. Python with pandas and a backtesting library. One platform had its own proprietary language with documentation that read like it was translated from German by someone who spoke neither German nor English. I'd open the editor, stare at a blinking cursor, type something wrong, get an error I didn't understand, and close the tab.

This went on for two years. During that time I traded based on setups I'd never validated against data. I don't know how much that cost me. I don't want to know.

The gap between "backtest your strategy" and actually doing it

There's a specific kind of advice that's technically correct but practically useless. "Backtest your strategy" is one of them. It's like telling someone to "just learn to cook." True, helpful in theory, missing every detail that matters.

The details that matter for backtesting:

1. You need a tool that has historical data.
2. You need to express your trading rules in whatever format that tool accepts.
3. You need the tool to execute those rules against the data and give you statistics.
4. You need to understand what those statistics mean.

For most retail traders, step two is where things break down. Expressing a rule like "buy when RSI crosses below 30 and the previous candle closed above the 20 EMA" in plain English takes one sentence. Expressing it in code takes knowledge of syntax, data structures, time series indexing, and whatever quirks the platform's API has decided to impose.
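To make the contrast concrete, here's roughly what that one sentence becomes once you translate it into Python with pandas. This is a hypothetical sketch on synthetic prices, not any platform's actual implementation; the RSI smoothing choice and variable names are mine:

```python
# Hypothetical sketch: the one-sentence rule "buy when RSI crosses below 30
# and the previous candle closed above the 20 EMA", expressed in pandas.
import numpy as np
import pandas as pd

# Synthetic close prices standing in for real historical data
rng = np.random.default_rng(42)
close = pd.Series(1.10 + rng.normal(0, 0.002, 500).cumsum(), name="close")

# RSI(14) with Wilder-style smoothing of average gains and losses
delta = close.diff()
avg_gain = delta.clip(lower=0).ewm(alpha=1 / 14, adjust=False).mean()
avg_loss = (-delta.clip(upper=0)).ewm(alpha=1 / 14, adjust=False).mean()
rsi = 100 - 100 / (1 + avg_gain / avg_loss)

ema20 = close.ewm(span=20, adjust=False).mean()

# "Crosses below 30": at or above 30 on the previous bar, below 30 now
crossed_below = (rsi.shift(1) >= 30) & (rsi < 30)
# "Previous candle closed above the 20 EMA"
prev_above_ema = close.shift(1) > ema20.shift(1)

entry = crossed_below & prev_above_ema
print(f"entry signals: {int(entry.sum())}")
```

One sentence of English, twenty-odd lines of code, and several places to get the indexing subtly wrong. That's the gap.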

The gap between having a trading idea and being able to test it shouldn't require learning a programming language. But for years, it did.

What a no-code backtester actually looks like

I'll describe what I use now, because "no-code backtesting" can sound like marketing fluff until you see how it works in practice.

Formiq's backtester is a form. Not a code editor, not a flowchart. A form with dropdowns.

Entry conditions: you pick an indicator (RSI, MACD, Stochastic, EMA crossover, etc.), pick a condition (crosses above, crosses below, is greater than, is less than), and pick a threshold value. If you want a second condition, you click "Add" and set another one. They combine with AND logic, meaning both must be true for an entry to trigger.

Exit conditions work the same way. You can also set a fixed take-profit and stop-loss in pips.

Then you choose a currency pair, a timeframe, and click Run.

That's the entire interface. No syntax, no compilation step, no runtime errors. If you can fill out a web form, you can backtest a strategy.
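Under the hood, that form is doing something any programmer would recognize, even though you never see it. This is a hypothetical sketch of my own, not Formiq's actual code; the operator names just mirror the dropdown labels:

```python
# Hypothetical sketch of what an AND-combined condition form reduces to.
# Each dropdown row becomes a (indicator, operator, threshold) check, and
# an entry triggers only when every row is true on the same candle.
import operator

OPS = {"is greater than": operator.gt, "is less than": operator.lt}

def entry_triggered(indicator_values, conditions):
    """indicator_values: current readings, e.g. {"RSI": 27.4}"""
    return all(OPS[op](indicator_values[name], threshold)
               for name, op, threshold in conditions)

conditions = [("RSI", "is less than", 30),
              ("Stochastic", "is less than", 20)]

print(entry_triggered({"RSI": 27.4, "Stochastic": 18.0}, conditions))  # True
print(entry_triggered({"RSI": 35.0, "Stochastic": 18.0}, conditions))  # False
```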

Run 1: RSI only (EUR/USD · H4): 57% win rate, 1.64 profit factor, 12.1% max DD, 142 trades
Run 2: RSI + EMA filter (EUR/USD · H4): 68% win rate, 2.10 profit factor, 7.4% max DD, 89 trades
Run 1 tested the base idea. Run 2 added a trend filter. Fewer trades, better edge.

What the results tell you (and what they don't)

The output is a panel with five or six numbers: win rate, profit factor, max drawdown, Sharpe ratio, total trades, net P&L. Below that, an equity curve showing how your hypothetical account balance moved over the test period.

These numbers are useful, but they're easy to misread if you don't know what to look for.

Win rate is the one beginners fixate on. It's also the least informative on its own. A strategy that wins 80% of the time but loses five times as much on each loser as it gains on each winner is a losing strategy. I've run backtests with 45% win rates that were solidly profitable because the average winner was 2.5x the average loser.
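The arithmetic behind this is one line: expectancy per trade is the win rate times the average winner, minus the loss rate times the average loser. A quick sketch with illustrative numbers:

```python
# Expectancy per trade: win rate times average winner, minus loss rate
# times average loser. Positive means the strategy makes money over time.
def expectancy(win_rate, avg_win, avg_loss):
    return win_rate * avg_win - (1 - win_rate) * avg_loss

# High win rate, oversized losers: negative expectancy (a losing strategy)
print(round(expectancy(0.80, avg_win=1.0, avg_loss=5.0), 2))
# Sub-50% win rate, winners 2.5x losers: comfortably positive
print(round(expectancy(0.45, avg_win=2.5, avg_loss=1.0), 2))
```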

Profit factor is more honest. It's total gross profit divided by total gross loss. Above 1.0 means the strategy made money. Above 1.5 and you have something worth paying attention to. Above 2.0 and you should be suspicious, because either the strategy is genuinely good, or you've overfit it to the test data.
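As a sketch, profit factor is just two sums over your trade list. The pip values here are made up for illustration:

```python
# Profit factor: total gross profit divided by total gross loss.
def profit_factor(pnls):
    gross_profit = sum(p for p in pnls if p > 0)
    gross_loss = -sum(p for p in pnls if p < 0)
    return gross_profit / gross_loss if gross_loss else float("inf")

trades = [40, -25, 60, -30, 15, -20, 55, -10]   # hypothetical pip results
print(profit_factor(trades))  # 170 won / 85 lost = 2.0
```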

Max drawdown tells you how much pain you'd have endured. A strategy with a great profit factor but a 40% max drawdown means your account would have lost nearly half its value at some point during the test period. Most people can't sit through that, even if the strategy eventually recovers.
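Concretely, max drawdown is the worst peak-to-trough fall in the equity curve, measured as a fraction of the peak. A minimal sketch with a made-up equity series:

```python
# Max drawdown: the largest peak-to-trough decline in the equity curve,
# expressed as a fraction of the peak that preceded it.
def max_drawdown(equity):
    peak = equity[0]
    worst = 0.0
    for value in equity:
        peak = max(peak, value)
        worst = max(worst, (peak - value) / peak)
    return worst

equity = [10_000, 10_400, 9_900, 10_800, 9_200, 11_500]  # hypothetical balances
print(f"{max_drawdown(equity):.1%}")  # worst fall: 10,800 -> 9,200, i.e. 14.8%
```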

Total trades is the credibility check. A 90% win rate over 8 trades is noise. A 58% win rate over 300 trades is a signal. I don't trust any backtest with fewer than 80 trades.

The numbers don't tell you why trades won or lost. They don't tell you whether the strategy worked because of the conditions you set or because of a macro trend that happened to run in your favor during the test period. That's where the next step comes in.

Watching your backtest trades play out

This is the feature that turned backtesting from a curiosity into something I actually use.

After a backtest completes, Formiq lets you click on any individual trade in the results list and replay it on the chart. The chart shows the entry point, the exit point, the price action between them, and the indicator values that triggered each signal.

I've spent more time doing this than running the backtests themselves.

Aggregate statistics hide the individual trades. A strategy with a 60% win rate and a 1.8 profit factor sounds solid. But when you click through the trades, you might find that half the winners were barely profitable, with price touching your take-profit by a single pip before reversing. Or that most of the losses happened during news events no indicator could have predicted. Or that the entries were consistently early by two or three candles.

You can't see any of this in a summary table. You can only see it by watching the trades happen.

[Trade replay view: Trade #47 of 142 · +38 pips · EUR/USD · H4 · RSI oversold entry, with the RSI < 30 trigger, entry, and exit marked on the chart]
Click any trade in the backtest results to watch it play out candle by candle.

This is also where you catch overfitting. If you've added three entry conditions and a day-of-week filter and a time-range filter, and the backtest shows amazing results, replaying the individual trades often reveals that you've just carved out a very specific historical period where those exact conditions happened to work. The trades cluster suspiciously: same month, same market condition, same macro move.

The strategies I've actually tested

I'll share a few examples to make this concrete. These aren't recommendations. They're illustrations of what the testing process looks like.

RSI mean reversion on EUR/USD H4. Entry: RSI(14) crosses below 30. Exit: RSI(14) crosses above 55. Stop loss: 60 pips. Result over two years: 142 trades, 57% win rate, 1.64 profit factor. Decent, but when I replayed the losers, most happened during strong trend moves where RSI stayed oversold for days. Adding a 50 EMA trend filter (only take trades when price is above the EMA) cut the trade count to 89 but pushed profit factor to 2.1.

MACD crossover on GBP/USD D1. Entry: MACD line crosses above signal line. Exit: MACD line crosses below signal line. No additional filters. Result: 67 trades, 49% win rate, 1.38 profit factor. Marginal. The equity curve had long flat periods and sharp spikes. Replaying the winners showed they were almost all trend-following trades in strong directional markets. During ranging periods, the strategy gave back everything. I shelved it.

Stochastic + session filter on USD/JPY H1. My original idea was Stochastic(8,3,3) crossing above 20 while price sat within 15 pips of a round number, but the backtester can't detect round numbers, so I used a simpler proxy: Stochastic oversold during the London session only. Result: 203 trades, 55% win rate, 1.52 profit factor. The session filter alone added about 0.3 to the profit factor compared to running it 24 hours.

None of these are strategies I'd trade with real money tomorrow. But each one taught me something specific about how my ideas hold up against data, and each iteration took about ninety seconds to set up and run.

Three runs a day is enough

Formiq's free plan limits you to three backtest runs per day. My initial reaction was frustration. Three felt restrictive.

In practice, three is fine.

Backtesting isn't useful when you spray random conditions at the wall and see what sticks. It's useful when you have a specific hypothesis ("I think RSI works better during trending sessions than ranging ones") and you design a test to check it. Formulating that hypothesis, choosing the right conditions, interpreting the results, and replaying the trades takes real time if you're doing it properly.

On a typical evening, I'll run one test, study the results, replay a handful of the trades, adjust one variable, run a second test, compare, and maybe run a third with a different pair to see if the pattern generalizes. That's a full session. I'm usually mentally done before I've used all three runs.

The constraint also prevents a bad habit I've seen in myself: running dozens of configurations until one produces good numbers, then convincing myself I "found" a strategy. That's not strategy development. That's data mining. Three runs per day makes you think before you click, which is exactly the right incentive.

What no-code backtesting can't do

I want to be straightforward about limitations.

You can't test strategies that depend on complex multi-step logic. "Buy when A happens, but only if B happened within the last 5 candles, and C hasn't happened since the last D." That kind of nested conditional logic requires code, period.
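To show why, here's a hypothetical sketch of that exact nested rule in Python. The signal names and the 5-candle window come from the sentence above; the stateful bookkeeping is precisely what a dropdown form has no way to express:

```python
# Hypothetical sketch of stateful, multi-step entry logic that a
# dropdown form can't express: "buy on A, but only if B fired within
# the last 5 candles and C hasn't fired since the last D."
def nested_entries(signals, window=5):
    """signals: one dict per candle, e.g. {"A": True, "B": False, ...}"""
    last_b = last_c = last_d = None
    entries = []
    for i, s in enumerate(signals):
        if s.get("B"):
            last_b = i
        if s.get("C"):
            last_c = i
        if s.get("D"):
            last_d = i
        b_recent = last_b is not None and i - last_b <= window
        c_since_last_d = last_c is not None and (last_d is None or last_c > last_d)
        entries.append(bool(s.get("A")) and b_recent and not c_since_last_d)
    return entries

candles = [{"B": True}, {}, {"D": True}, {"A": True}, {"C": True}, {"A": True}]
print(nested_entries(candles))  # entry fires only on candle index 3
```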

You can't test strategies that depend on pattern recognition: chart patterns, specific candlestick formations, order flow dynamics. The backtester works with indicator math, not visual patterns.

You can't model realistic execution. Slippage, variable spreads, partial fills, requotes: none of these exist in a backtest. Your live results will always be somewhat worse than your backtest results. How much worse depends on the pair, the timeframe, and your broker. This is true of all backtesting, not just no-code, but it's worth remembering.
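One cheap way to account for this is a flat haircut: assume each round trip costs a couple of pips in spread and slippage, subtract it from every backtested trade, and check whether the edge survives. The 2-pip figure and the trade results below are assumptions for illustration, not measured numbers:

```python
# Rough haircut sketch: subtract an assumed per-trade round-trip cost
# (spread plus slippage, in pips) from backtested trade results.
def haircut(pnls_pips, cost_pips):
    return [p - cost_pips for p in pnls_pips]

trades = [40, -25, 60, -30, 15, -20, 55, -10]   # hypothetical pip results
adjusted = haircut(trades, cost_pips=2.0)       # assume ~2 pips per round trip

print(sum(trades), sum(adjusted))  # 85 before costs, 69.0 after
```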

And you can't test on tick data. The smallest granularity is per-candle. For strategies that depend on intra-candle price action (scalping, for instance), this is a real limitation.

If your strategy needs any of the above, you'll eventually need to learn a scripting language. But most strategies that beginners and intermediate traders are working with are expressible as indicator conditions with thresholds. For those, dropdown-based testing does the job.

What I wish I'd known two years ago

Backtesting doesn't tell you whether a strategy will work in the future. It tells you whether it would have worked in the past. That's the best proxy we have, but it's not a guarantee.

What backtesting does reliably is kill bad ideas fast. If a strategy you've been trading by feel turns out to have a 44% win rate and a 0.9 profit factor over 200 historical trades, you know something concrete: your instinct is wrong, or at least incomplete. That information alone can save you months of real-money losses.

I think about the two years I spent avoiding backtesting because I couldn't code. Every strategy I traded during that period was untested. Some of them worked. Some of them didn't. I had no way to know which was which except by losing money and hoping I'd notice the pattern eventually.

A three-minute backtest would have told me what six months of live trading couldn't. That's not an argument for any specific tool. It's an argument for testing your ideas against data before you test them against your account balance.

The barrier used to be a programming language. It isn't anymore.


Try Formiq's backtester free — set conditions with dropdowns, run against historical data, no code required.