What is A/B Testing?

A/B testing (also called split testing) is a controlled experiment where users are randomly assigned to different variants — typically a control (A) and a treatment (B) — and their behavior is compared against a predefined metric.

The goal is to make data-driven decisions: instead of debating whether a green or blue button converts better, you test both with real users and let the data decide. Proper A/B tests require statistical significance to ensure results aren't due to random chance.

Feature flags are the most common mechanism for implementing A/B tests in product development. The flag assigns each user to a variant, and your analytics tool measures the impact on your target metric (conversion rate, engagement, revenue, etc.).
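The assignment step can be sketched with a deterministic hash: bucketing on a hash of the user ID plus the experiment name gives each user a stable variant across sessions, and independent bucketing across experiments. The function and experiment names below are hypothetical, not a specific flag product's API.

```python
import hashlib

def assign_variant(user_id: str, experiment: str, variants=("A", "B")) -> str:
    """Deterministically bucket a user into an experiment variant.

    Hashing user_id together with the experiment name means the same
    user always sees the same variant, while different experiments
    bucket users independently of each other.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# The same user gets the same variant on every call.
print(assign_variant("user-42", "pricing-page-test"))
```

Because assignment is a pure function of the user and experiment, no variant table needs to be stored; any service that knows the user ID can compute the same answer.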

Example

You A/B test two pricing page layouts. Variant A (the current design) shows a 4.2% signup rate. Variant B (with testimonials added above the fold) shows 5.1%. After reaching statistical significance with 10,000 visitors per variant, you ship Variant B.
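The significance check behind numbers like these is commonly a two-proportion z-test. A minimal sketch, assuming 420 and 510 signups out of 10,000 visitors per variant (the 4.2% and 5.1% rates above):

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for a difference between two conversion rates."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (conv_b / n_b - conv_a / n_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Numbers from the example: 4.2% vs 5.1% with 10,000 visitors each.
z, p = two_proportion_z_test(420, 10_000, 510, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05, so the lift is significant
```

With these inputs the p-value is well under 0.05, which is why it is safe to ship Variant B rather than attribute the lift to chance.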
