ab-testing · salla · cro · ecommerce

How to Run an A/B Test on Salla in Under 30 Minutes

A practical walkthrough for running your first A/B test on a Salla store: what to test, how to pick a goal, and when to stop. Set it up today.

April 22, 2026

Most Salla store owners we talk to have never run a single A/B test. They redesign the homepage, swap product photos, change the checkout button color — all on gut feel. Then they wonder why revenue jumps one month and tanks the next.

If you want to run an A/B test on Salla without a developer, two browser tabs and 30 minutes of focused work are enough. The trick is picking a small, high-impact test instead of trying to redesign your whole funnel on day one.

Pick one page that already gets traffic

The single biggest mistake is testing on a page nobody visits. If your category page gets 50 sessions a week, no test will ever finish. Open Salla's analytics and find the top three pages by sessions. Pick the one closest to the money — usually the product detail page or the cart page.

Rule of thumb: the page should get at least 1,000 sessions per week to give you a chance of seeing a result inside a month. Less than that, and you should focus on traffic first, optimization second.
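
If you want to sanity-check that rule against your own numbers, the standard sample-size formula for comparing two conversion rates is enough for a back-of-envelope estimate. Here's a sketch in TypeScript; the 8% add-to-cart rate and 30% lift below are assumptions, not Salla benchmarks, so plug in your own figures:

```ts
// Sessions needed per variant for a two-proportion z-test
// at 95% confidence and 80% power.
function sessionsPerVariant(baseRate: number, relativeLift: number): number {
  const zAlpha = 1.96; // two-sided 95% confidence
  const zBeta = 0.84;  // 80% power
  const p1 = baseRate;
  const p2 = baseRate * (1 + relativeLift);
  const varianceSum = p1 * (1 - p1) + p2 * (1 - p2);
  return Math.ceil(((zAlpha + zBeta) ** 2 * varianceSum) / (p2 - p1) ** 2);
}

// Example (assumed numbers): 8% add-to-cart rate, detecting a 30% relative lift.
const n = sessionsPerVariant(0.08, 0.3); // ≈ 2,270 sessions per variant
console.log(`~${2 * n} sessions total across both variants`); // ≈ 4,540
```

At 1,000 sessions a week split 50/50, that's roughly four and a half weeks to detect a fairly large lift. Halve the traffic, or halve the lift you're hoping for, and the test stops being feasible inside a month.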

Choose a goal before you choose a variant

This sounds backwards but it's the part most people skip. Decide what you're measuring before you write a single line of variant copy. Pick one of these:

  • Add-to-cart rate — best for product-page tests
  • Checkout starts — best for cart and category tests
  • Completed orders — the only metric that pays the bills, but needs more traffic

If you're under 5,000 sessions per week, optimize for add-to-cart or checkout starts. You'll get a readable result faster, and those metrics correlate strongly with revenue on most Salla stores.

Pick one goal and write it down before opening any test setup screen. The discipline of committing to a single metric in advance protects you from a different failure: running the test, getting a flat result on the goal you set, then mining the data for any metric that did move and declaring that the win. That's noise hunting, not testing.
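
What "defining the goal" looks like in the browser depends on your tool, but it's usually just a listener on the element you care about. Here's a minimal sketch for the add-to-cart goal; the `.add-to-cart-button` selector and `/collect` endpoint are placeholders, not Salla APIs, so substitute whatever your theme and your testing tool actually expose:

```ts
// Capture add-to-cart clicks as the goal event for one variant.
// Selector and endpoint are placeholders for your theme/tool's real ones.
function trackGoal(variant: "A" | "B"): void {
  const button = document.querySelector<HTMLButtonElement>(".add-to-cart-button");
  if (!button) return; // no button on this page, nothing to track

  button.addEventListener("click", () => {
    const payload = JSON.stringify({
      event: "add_to_cart",
      variant,
      page: location.pathname,
      ts: Date.now(),
    });
    // sendBeacon keeps working even if the click navigates away
    navigator.sendBeacon("/collect", payload);
  });
}
```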

What to test first

Don't test colors. Don't test fonts. Don't move the logo. Those changes rarely move the needle by more than 1–2%, which is too small to detect on most stores.

Test the things that change how a shopper makes the decision:

  1. The price display — single price vs price with strikethrough, or showing the per-unit price for bundles
  2. The shipping line — "Free shipping over 200 SAR" vs "Free shipping today" vs nothing at all
  3. The trust block — Mada/Apple Pay logos near the buy button vs at the bottom of the page
  4. The CTA copy — "Add to cart" vs "Buy now" vs "Reserve yours" (especially for limited-stock items)

Test things that change the buying decision, not things that change the page's vibe. Vibe rarely converts.

Setting up the test on Salla

Salla doesn't have native A/B testing inside the dashboard, so you have two paths. The first is editing the theme directly and using a tracking script to split traffic — fine if you have a developer on call, painful if you don't. The second is using a tool that injects variants without touching theme code.

Either way, the setup follows the same shape, sketched in code below:

  • Define the URL or page type the test runs on (e.g., all product pages, or one specific category)
  • Create variant B by editing the element you want to change — usually the headline, button, or shipping line
  • Set the traffic split to 50/50
  • Define the goal event — add-to-cart click, checkout page view, or order confirmation
  • Hit start, then leave it alone

That last step matters. Most failed tests fail because the owner peeked on day three, saw variant B was "winning," declared victory, and shipped it. Three days isn't a result. It's noise.
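
If you take the script path, the shape above condenses to a few lines: a sticky 50/50 assignment, a DOM change applied only to bucket B, and the goal capture from the earlier sketch. This is a sketch under assumptions, with a placeholder URL pattern and selector you'd swap for your store's real ones:

```ts
// Sticky 50/50 assignment: a visitor keeps the same variant across visits.
function getVariant(): "A" | "B" {
  const key = "ab_variant";
  let variant = localStorage.getItem(key) as "A" | "B" | null;
  if (!variant) {
    variant = Math.random() < 0.5 ? "A" : "B";
    localStorage.setItem(key, variant);
  }
  return variant;
}

// Run only on the pages under test ("/product/" is an assumed URL pattern).
if (location.pathname.includes("/product/")) {
  const variant = getVariant();
  if (variant === "B") {
    // Variant B: swap the shipping line. '.shipping-line' is a placeholder;
    // inspect your theme to find the real selector.
    const line = document.querySelector(".shipping-line");
    if (line) line.textContent = "Free shipping today";
  }
  trackGoal(variant); // goal capture from the earlier sketch
}
```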

One thing to do before launching

Take screenshots of both variants on three devices — an iPhone, an Android phone, and a desktop browser. Save them. When the test ends and you're trying to remember exactly what you tested (and you will forget), those screenshots are the only reliable record. We've seen teams ship a winner only to realize months later they couldn't reproduce the variant because nobody documented it.

How long to run it, and when to stop

The honest answer: until you hit your pre-calculated sample size or two full weeks, whichever is longer. Two weeks captures both weekends and smooths out the random spike from a TikTok mention; if you can, time the window to include the 27th, salary day in Saudi Arabia, since spending shifts around it.
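
That stopping rule is mechanical enough to compute at launch, which is exactly when you should compute it. A small sketch, reusing the total-sessions figure from the earlier calculator:

```ts
// "Sample size or two weeks, whichever is longer" as a hard end date.
function testEndDate(
  start: Date,
  weeklySessions: number,      // sessions per week on the tested page
  sessionsNeededTotal: number  // both variants combined, from the calculator
): Date {
  const daysForSample = Math.ceil((sessionsNeededTotal / weeklySessions) * 7);
  const end = new Date(start);
  end.setDate(end.getDate() + Math.max(14, daysForSample));
  return end;
}

// 1,000 sessions/week and ~4,540 sessions needed → about 32 days, not 14.
console.log(testEndDate(new Date(), 1000, 4540));
```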

If you stop early because the result "looks obvious," you'll ship a losing variant and not know it. This is the single most common failure mode in A/B testing, and it deserves its own discussion; we cover it in detail in "Why most A/B tests don't show a winner."

The other version of this mistake is the opposite: running for too long because you're not getting the result you wanted, hoping the trend reverses. It rarely does. Set a hard end date when you launch, write it on a calendar, and stop on that date regardless of where the numbers sit.

Next steps

Pick one page, one goal, and one change. Run it for two weeks. Don't peek. Whatever you learn — even "no difference" — is more than you knew before. After your first test wraps, the second one takes ten minutes to set up. That's how a testing habit compounds.

If you're stuck choosing, here's a default first test that works on most Salla stores: on your top product page, test "Free shipping over X SAR" displayed near the price versus the same page with no shipping line. Goal: add-to-cart rate. Two weeks. That single test usually settles a debate the team has been having for months.
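
If it helps to make that concrete, here's the same default test written out as a plain config object. The field names are illustrative, not any tool's real schema:

```ts
// The default first test, spelled out. Field names are illustrative only.
const firstTest = {
  page: "top product page by sessions",
  variantA: "'Free shipping over X SAR' shown near the price",
  variantB: "same page, no shipping line",
  split: [50, 50],
  goal: "add_to_cart_rate",
  minDurationDays: 14,
  note: "set a hard end date at launch and stop on it",
};
```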