How to A/B Test Your Website

A step-by-step guide to running your first split test. No experience needed. No credit card required. Just results.

What is A/B Testing?

A/B testing (or split testing) is showing different versions of something to different visitors, then measuring which version performs better.

Say your signup button says “Create Account.” You wonder if “Start Free” would get more clicks. Instead of guessing, you show half your visitors the original and half the new version. After enough visits, you'll know which one actually converts better.

That's it. No PhD required. Just a simple question: which version wins?

What You'll Need

🌐 A Website

Any site where you can add a bit of JavaScript. Works with React, Next.js, vanilla HTML, WordPress — anything.

🎯 Something to Test

A headline, button text, call-to-action — any text element you want to optimize.

📊 A Goal

What counts as success? A click, signup, purchase, or any action you can track with code.

⏱️ Some Traffic

The more visitors, the faster you'll get results. A few hundred per week is enough to start learning.

Step-by-Step: Your First A/B Test

We'll use abee.pro for this guide — it's free for manual testing and takes about 10 minutes to set up.

1. Create Your Free Account

Head to abee.pro/login and sign in with your email. No credit card needed — manual mode is completely free.

Screenshot: Login page
Show the magic link login screen with email input.

2. Create a Site

Click “Add Site” and give it a name. This represents the website you're testing. You can add the domain too, but it's optional.

Screenshot: New site modal
Show site name: “My Landing Page”
Domain: “example.com” (optional)

3. Create a Goal

Before you can test, you need to define what success looks like. Go to your site's Goals page and create one.

Give it a name (like “Signup Button Click”) and a key (like signup_click). The key is what you'll use in your tracking code.
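That key ends up in the URL of the conversion-tracking call you'll add in Step 6. As a quick preview, using the same YOUR_SITE_ID placeholder as the later snippets:

// Preview: the goal key "signup_click" becomes part of the tracking endpoint
fetch('https://app.abee.pro/api/public/YOUR_SITE_ID/goal/signup_click', {
  method: 'POST',
  credentials: 'include'  // identifies the visitor via cookie (see Step 6)
});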

Screenshot: Create goal modal
Goal name: “Signup Button Click”
Goal key: “signup_click”

4. Create an Experiment

Now the fun part. Go to Experiments and click “New Experiment.”

Give it a name (like “Hero Button Test”) and select your goal. Leave “Auto-Optimize” off — that's the AI mode. We're doing this manually.

Screenshot: Experiment form - Step 1
Name: “Hero Button Test”
Key: “hero_button_test” (auto-generated)
Goal: “Signup Button Click”
Auto-Optimize: OFF (toggle disabled or unchecked)

5. Define Your Variations

This is where you set up what you're testing. You need at least two variations:

  • Control: Your current/original version
  • Variant: The new version you want to test

Set the traffic split — 50/50 is standard. This means half your visitors see the control, half see the variant.

Screenshot: Experiment form - Step 2 (Variations)
Control: “Create Account” — 50%
Variant A: “Start Free” — 50%
Show the allocation sliders or inputs.
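You don't have to implement the split yourself, but it helps to see the idea. A common technique (a sketch of the general approach, not how abee.pro necessarily does it internally) is to hash a stable visitor ID into a bucket from 0 to 99 and map buckets to variations by their allocation:

// Conceptual sketch only: the general bucketing idea behind a traffic split
const crypto = require('crypto');

// variations: e.g. [{ value: 'Create Account', percent: 50 },
//                   { value: 'Start Free',     percent: 50 }]
function assignVariation(visitorId, variations) {
  const hash = crypto.createHash('sha256').update(visitorId).digest();
  const bucket = hash.readUInt32BE(0) % 100;  // deterministic value from 0 to 99

  let cumulative = 0;
  for (const variation of variations) {
    cumulative += variation.percent;
    if (bucket < cumulative) return variation;  // same visitor, same bucket, same variation
  }
  return variations[variations.length - 1];
}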

6. Add the Code to Your Site

After creating the experiment, click the “Integrate” button to see the code snippets. There are two ways to integrate, depending on your setup:

Important: Sticky Assignment
Once a visitor is assigned a variation, they'll see that same variation every time they return. This ensures a consistent experience and accurate data. The assignment is “sticky” — it doesn't change until the experiment ends.

Option A: Frontend (Browser)

Best for client-side apps (React, Vue, vanilla JS). The system uses a cookie to remember which variation each visitor was assigned. As long as the cookie exists, they'll see the same variation.

// Fetch which variation this visitor should see
// The cookie handles sticky assignment automatically
const response = await fetch(
  'https://app.abee.pro/api/public/YOUR_SITE_ID/experiments/hero_button_test',
  { credentials: 'include' }  // Important: sends/receives cookies
);
const { variation } = await response.json();

// Use variation.value to render your content
document.getElementById('hero-button').innerText = variation.value;
// Track conversion when the goal is completed
document.getElementById('hero-button').addEventListener('click', () => {
  fetch('https://app.abee.pro/api/public/YOUR_SITE_ID/goal/signup_click', {
    method: 'POST',
    credentials: 'include'  // Cookie identifies which variation they saw
  });
});
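If your front end is React, the same two calls fit naturally into a small hook. Here's a sketch assuming the endpoints above; the hook and component names are ours, not part of abee.pro:

import { useEffect, useState } from 'react';

const BASE = 'https://app.abee.pro/api/public/YOUR_SITE_ID';

// Fetch the assigned variation once, fall back to the control text on error
function useExperiment(experimentKey, fallback) {
  const [value, setValue] = useState(fallback);

  useEffect(() => {
    fetch(`${BASE}/experiments/${experimentKey}`, { credentials: 'include' })
      .then((res) => res.json())
      .then(({ variation }) => setValue(variation.value))
      .catch(() => setValue(fallback));
  }, [experimentKey, fallback]);

  return value;
}

function HeroButton() {
  const label = useExperiment('hero_button_test', 'Create Account');

  // Track the conversion when the goal is completed
  const trackClick = () =>
    fetch(`${BASE}/goal/signup_click`, { method: 'POST', credentials: 'include' });

  return <button onClick={trackClick}>{label}</button>;
}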

Option B: Backend (Server-Side)

Best for SSR frameworks (Next.js, Remix, etc.) or when you need more control. You pass a visitor ID in the URL — this is how the system knows who's who. Use any stable identifier (user ID, session ID, or generate a UUID and store it).

// Server-side: pass visitor ID explicitly
// Same visitor ID = same variation every time
const visitorId = getOrCreateVisitorId(request); // Your logic

const response = await fetch(
  `https://app.abee.pro/api/public/YOUR_SITE_ID/experiments/hero_button_test?visitor=${visitorId}`
);
const { variation } = await response.json();

// Render the page with this variation
return renderPage({ buttonText: variation.value });

// Later, in the handler for the goal action (e.g. your signup route),
// track the conversion and pass the same visitor ID
await fetch(
  `https://app.abee.pro/api/public/YOUR_SITE_ID/goal/signup_click?visitor=${visitorId}`,
  { method: 'POST' }
);
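The getOrCreateVisitorId helper above is your own code; abee.pro only needs the ID you pass to stay stable. One possible sketch, assuming an Express-style server with the cookie-parser middleware (note it takes the response too, so it can set the cookie; the cookie name and settings are our choices):

const { randomUUID } = require('crypto');

// Reuse the visitor's existing ID, or mint one and store it in a cookie
function getOrCreateVisitorId(req, res) {
  const existing = req.cookies && req.cookies.visitor_id;
  if (existing) return existing;

  const visitorId = randomUUID();
  res.cookie('visitor_id', visitorId, {
    maxAge: 365 * 24 * 60 * 60 * 1000,  // keep the assignment stable for a year
    httpOnly: true,
    sameSite: 'lax',
  });
  return visitorId;
}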
Which should I use?
Frontend: Simpler setup, cookies handle everything. Works great for SPAs and static sites.
Backend: More control, works with SSR, no cookie dependency. Better for logged-in users where you have a user ID.

Pro Tip: Use Variations as Keys

Variation values don't have to be content. They can be keys that your app uses to decide what to do. This makes manual mode incredibly powerful — you can test anything, not just text.

Examples:
blue_theme vs red_theme — Test completely different page designs
algorithm_v1 vs algorithm_v2 — Test search or recommendation algorithms
checkout_short vs checkout_long — Test different checkout flows
pricing_a vs pricing_b — Test different pricing displays
// Use variation.value as a key to control behavior
const { variation } = await response.json();

let results;
if (variation.value === 'algorithm_v2') {
  results = await searchWithNewAlgorithm(query);
} else {
  results = await searchWithCurrentAlgorithm(query);
}

// Or use it to pick a component
const CheckoutFlow = variation.value === 'checkout_short'
  ? ShortCheckout
  : LongCheckout;

This turns A/B testing into a general-purpose experimentation tool. Test layouts, features, algorithms, flows — anything where you want to compare “this vs that” with real user data.

Screenshot: Integration modal
Show the modal with tabs for “Client-Side” and “Server-Side”.
Show copy buttons for each code snippet.

7. Start the Experiment

Once your code is deployed, go back to the experiment page and click the Play button to start it. The status will change from “Draft” to “Running.”

Screenshot: Experiment header with Play button
Show experiment name, status badge (Running), and the play/pause controls.

8. Watch the Results Come In

As visitors hit your page, you'll see the numbers update in real time:

  • Views: How many visitors saw each variation
  • Conversions: How many clicked the button
  • Conversion Rate: Conversions ÷ Views
  • Confidence: How sure we are there's a real difference
Screenshot: Results dashboard with data
Show the variations table with:
Control: 1,234 views, 89 conversions, 7.2% CR
Variant A: 1,198 views, 112 conversions, 9.3% CR, +29% uplift
Show confidence indicator (e.g., 94%)
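With the example numbers above, the arithmetic works out like this:

// Conversion rate = conversions ÷ views
const controlRate = 89 / 1234;   // ≈ 0.072 → 7.2%
const variantRate = 112 / 1198;  // ≈ 0.093 → 9.3%

// Uplift = relative improvement of the variant over the control
const uplift = (variantRate - controlRate) / controlRate;  // ≈ 0.29 → +29%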

Keep Improving: Running New Iterations

Found a winner? Great — but don't stop there. The best results come from continuous testing.

Once you have a winner, click “New Iteration” to start another round. In the modal, you can:

  • Promote the winner to Control: Your best performer becomes the new baseline
  • Remove the loser: No need to keep testing what doesn't work
  • Add new challengers: Try fresh ideas against your new control
Screenshot: New Iteration modal
Show the modal with:
- Previous winner “Start Free” promoted to Control
- New Variant A: “Get Started Free”
- New Variant B: “Try It Free”
Traffic allocation: Control 34%, Variant A 33%, Variant B 33%

Example journey:
Round 1: “Create Account” vs “Start Free” → “Start Free” wins (+29%)
Round 2: “Start Free” vs “Get Started Free” vs “Try It Free” → “Try It Free” wins (+12%)
Round 3: “Try It Free” vs “Start Your Free Trial” → No significant difference
Result: You've improved conversions by roughly 44% from where you started (1.29 × 1.12 ≈ 1.44).

When to Call a Winner

Don't just pick whichever number looks bigger. Wait for statistical significance.

Wait for 95% Confidence

The dashboard shows a confidence percentage. When it hits 95%, you can be reasonably sure the difference is real, not random noise.

The math (briefly): We use a statistical test called a z-test for two proportions. It compares the conversion rates of your variations and calculates the probability that any difference is due to random chance. This gives us a p-value — if it's below 0.05 (5%), we call the result “statistically significant” at 95% confidence.
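If you're curious what that looks like in code, here's an illustrative sketch of the standard two-proportion z-test (not necessarily the exact implementation abee.pro uses):

// Standard normal CDF via the Abramowitz–Stegun approximation
function normalCdf(x) {
  const t = 1 / (1 + 0.2316419 * Math.abs(x));
  const d = 0.3989423 * Math.exp(-x * x / 2);
  const p = d * t * (0.3193815 + t * (-0.3565638 +
    t * (1.781478 + t * (-1.821256 + t * 1.330274))));
  return x > 0 ? 1 - p : p;
}

// Two-proportion z-test: is the difference in conversion rates real?
function twoProportionZTest(convA, viewsA, convB, viewsB) {
  const pA = convA / viewsA;
  const pB = convB / viewsB;
  const pooled = (convA + convB) / (viewsA + viewsB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / viewsA + 1 / viewsB));
  const z = (pB - pA) / se;
  const pValue = 2 * (1 - normalCdf(Math.abs(z)));  // two-tailed
  return { z, pValue, significant: pValue < 0.05 };
}

// With the example numbers from the results screenshot above:
// twoProportionZTest(89, 1234, 112, 1198) → p ≈ 0.056, about 94% confidence,
// so it's close but not yet a significant winner at the 95% bar.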

The good news: You don't need to do any of this yourself. The system runs the calculations automatically and shows you a clear confidence percentage. When it's high enough, you'll see a “Significant Winner” badge — that's your signal that the result is trustworthy.

Do: Wait for the “Significant Winner” badge to appear
Don't: Call a winner after 50 visitors because one “looks better”

Get Enough Data

As a rule of thumb, you want at least a few hundred conversions total before trusting the results. More traffic = more reliable results.

Choose Your Goal Strategically

Statistical significance requires enough conversions. If you're testing whether people register, you need lots of registrations to call a winner. That could take weeks.

A faster approach: test whether people reach the registration page. More events = faster results. Once you've optimized the path to registration, then test the registration form itself.

Faster: “Clicked signup button” (many events, quick results)
Slower: “Completed registration” (fewer events, takes longer)

One Experiment Per Funnel Step

If a visitor sees your homepage test, then your pricing page test, then converts — which test caused the conversion? You can't know for sure.

This is the attribution problem. Even with separate goals, multiple concurrent experiments on the same user journey muddy your data. The safest approach: run one experiment at a time per funnel, or test pages that don't share users.

Clear: Test homepage CTA this week, pricing page next week
Muddy: Test homepage + pricing + checkout simultaneously

Tips for Better Tests

🎯 Test One Thing at a Time

If you change the headline AND the button AND the color, you won't know which change made the difference. Keep it focused.

📍 One Test Per Page

Running multiple experiments on the same page muddies your data. Test one element, learn from it, then move to the next.

🎲 Don't Peek Too Often

Checking results every hour leads to false conclusions. Set it up, let it run for a week, then analyze. Patience pays off.

📝 Document Your Learnings

Even losing tests teach you something. Keep notes on what you tried and what you learned. Patterns will emerge.

Ready to Run Your First Test?

Manual mode is completely free. No credit card, no trial limits. Just sign up and start testing.

Start Free (takes about 10 minutes to set up)