Determine whether your A/B test results are statistically significant. Enter visitors and conversions for both variants, and instantly see conversion rates, relative uplift, p-values, and confidence intervals. Includes a sample size calculator for planning future experiments.
Build Prompt
Copy this prompt into Claude Code
Build a single-page A/B testing significance calculator as a static HTML/CSS/JS site. Dark background (#0a0a0f) with a centered container (max-width 800px).
At the top, a title "A/B Test Significance Calculator" in white (24px, bold) and a subtitle "Enter your experiment data to determine statistical significance" in #8888a0.
Below that, two side-by-side input cards (each 48% width) on a #12121a background with 1px #2a2a3a border and 12px border-radius:
CONTROL (Variant A) card — header with "Control (A)" label in #94a3b8:
- "Visitors" number input (large, 20px font, full-width, placeholder "e.g. 10,000")
- "Conversions" number input (same styling, placeholder "e.g. 350")
- Below inputs, auto-calculated "Conversion Rate: X.XX%" in #22c55e once both fields have values
VARIATION (Variant B) card — header with "Variation (B)" label in #60a5fa:
- Same two inputs with placeholders "e.g. 10,000" and "e.g. 420"
- Auto-calculated conversion rate displayed the same way
Below the input cards, a "Calculate" button (full width, #3b82f6 background, white text, 14px font, 48px height, rounded 10px). The button should also trigger on Enter key press from any input field.
RESULTS SECTION (appears after calculation, animated slide-down):
First row — four metric cards in a 2x2 grid:
1. "Relative Uplift" — show the relative difference between B and A, (B − A) / A (e.g. "+20.0%"), green if positive, red if negative, large 32px number
2. "P-Value" — show the two-tailed p-value to 4 decimal places (e.g. "0.0023"), with a color indicator: green if p < 0.05, yellow if 0.05 <= p < 0.10, red if p >= 0.10
3. "Confidence Level" — show the highest standard level the result clears ("99%" if p < 0.01, "95%" if p < 0.05, "90%" if p < 0.10, otherwise "< 90%"), with the percentage in large text
4. "Statistical Power" — calculate and display the observed (post-hoc) power of the test (1 − β) at the observed effect size
Implement the statistics using a two-proportion z-test: calculate the pooled proportion, standard error, z-score, and derive the two-tailed p-value using a JavaScript implementation of the normal CDF (use a rational approximation of the error function, such as the Abramowitz–Stegun formula — do NOT use any external library). Also calculate the 95% confidence interval for the difference in proportions.
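The statistics paragraph above can be sketched in plain JavaScript. This is a sketch, not the implementation Claude will produce: it assumes the Abramowitz–Stegun rational approximation of erf (formula 7.1.26, maximum absolute error about 1.5e-7), and the names `normalCdf`, `abTest`, and `observedPower` are illustrative:

```javascript
// Standard normal CDF via the Abramowitz-Stegun rational
// approximation of erf (formula 7.1.26, max abs. error ~1.5e-7).
function normalCdf(z) {
  const x = Math.abs(z) / Math.SQRT2;
  const t = 1 / (1 + 0.3275911 * x);
  const poly = t * (0.254829592 + t * (-0.284496736 +
    t * (1.421413741 + t * (-1.453152027 + t * 1.061405429))));
  const erf = 1 - poly * Math.exp(-x * x);
  return z >= 0 ? 0.5 * (1 + erf) : 0.5 * (1 - erf);
}

// Two-proportion z-test: pooled proportion, standard error,
// z-score, two-tailed p-value, relative uplift, and the 95%
// confidence interval for the difference in proportions.
function abTest(visitorsA, convA, visitorsB, convB) {
  const pA = convA / visitorsA;
  const pB = convB / visitorsB;
  const pooled = (convA + convB) / (visitorsA + visitorsB);
  const sePooled = Math.sqrt(pooled * (1 - pooled) * (1 / visitorsA + 1 / visitorsB));
  const z = (pB - pA) / sePooled;
  const pValue = 2 * (1 - normalCdf(Math.abs(z)));
  // Unpooled SE is conventional for the CI on the difference.
  const seDiff = Math.sqrt(pA * (1 - pA) / visitorsA + pB * (1 - pB) / visitorsB);
  const ci = [pB - pA - 1.96 * seDiff, pB - pA + 1.96 * seDiff];
  const uplift = ((pB - pA) / pA) * 100;
  return { pA, pB, z, pValue, uplift, ci };
}

// Post-hoc (observed) power at the observed effect, alpha = 0.05:
// probability that |Z| clears 1.96 if the true z equals the observed one.
function observedPower(z) {
  return 1 - normalCdf(1.96 - Math.abs(z)) + normalCdf(-1.96 - Math.abs(z));
}
```

With the placeholder inputs from the cards (10,000/350 vs 10,000/420) this yields roughly a +20% uplift and a p-value near 0.01.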
Below the metric cards, a large verdict banner:
- If p < 0.05: green background (#22c55e20), green border, text "Statistically Significant" with a checkmark icon, and explanation "You can be 95% confident that Variant B performs differently from Control A."
- If 0.05 <= p < 0.10: yellow background (#f59e0b20), text "Marginally Significant" with explanation "Consider collecting more data before making a decision."
- If p >= 0.10: red background (#ef444420), text "Not Significant" with explanation "There is not enough evidence to conclude the variants perform differently."
Below the verdict, a horizontal bar chart visualization using pure CSS/HTML (no chart library). Two horizontal bars — Control A (gray #64748b) and Variation B (blue #3b82f6) — with widths proportional to their conversion rates. Show the conversion rate percentage at the end of each bar. Add thin vertical lines showing the 95% confidence interval bounds on each bar.
SAMPLE SIZE CALCULATOR SECTION (separate collapsible area below, toggled by a "Plan Your Next Test" header button):
- Baseline conversion rate input (%, default from Control A if available)
- Minimum detectable effect input (%, default 10%)
- Significance level dropdown (90%, 95%, 99% — default 95%)
- Power dropdown (80%, 90%, 95% — default 80%)
- "Calculate Sample Size" button
- Result: "You need N visitors per variation (2N total)" in a highlighted box
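The math behind this section is the standard two-proportion sample size formula, n = (z_α/2 + z_β)² · (p₁(1 − p₁) + p₂(1 − p₂)) / (p₂ − p₁)² per variation. A sketch, with z lookup tables matching the dropdown options (the function name and signature are illustrative):

```javascript
// z-quantiles for the dropdown choices:
// two-sided significance level and one-sided power.
const Z_ALPHA = { 90: 1.645, 95: 1.960, 99: 2.576 };
const Z_POWER = { 80: 0.842, 90: 1.282, 95: 1.645 };

// Required visitors per variation for a two-proportion test.
// baselinePct and mdePct are percentages: (3.5, 10) means
// detecting a relative lift from 3.5% to 3.85%.
function sampleSize(baselinePct, mdePct, confidence = 95, power = 80) {
  const p1 = baselinePct / 100;
  const p2 = p1 * (1 + mdePct / 100);
  const z = Z_ALPHA[confidence] + Z_POWER[power];
  const variance = p1 * (1 - p1) + p2 * (1 - p2);
  return Math.ceil((z * z * variance) / ((p2 - p1) ** 2));
}
```

Note how quickly the requirement grows as the baseline rate or the detectable effect shrinks: a 3.5% baseline with a 10% MDE needs roughly 45,000 visitors per variation, while a 10% baseline with a 20% MDE needs under 4,000.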
Add a "Share Results" button that encodes the inputs into URL query parameters so the page can be bookmarked/shared with pre-filled data. On page load, check for query parameters and auto-fill if present.
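The share/restore logic can be sketched with the built-in `URLSearchParams` API; the query keys (`va`, `ca`, `vb`, `cb`) and function names are an assumption, not part of the spec:

```javascript
// Encode the four inputs into a query string for the Share button.
function encodeState(state) {
  const params = new URLSearchParams({
    va: state.visitorsA, ca: state.convA,
    vb: state.visitorsB, cb: state.convB,
  });
  return "?" + params.toString();
}

// Parse a query string back into inputs on page load; returns null
// if any field is missing or not a positive number, so the page can
// fall back to empty inputs.
function decodeState(search) {
  const p = new URLSearchParams(search);
  const values = ["va", "ca", "vb", "cb"].map(k => Number(p.get(k)));
  if (values.some(v => !Number.isFinite(v) || v <= 0)) return null;
  const [visitorsA, convA, visitorsB, convB] = values;
  return { visitorsA, convA, visitorsB, convB };
}
```

In the page itself the round trip would run through `history.replaceState` and `window.location.search`.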
Save the last calculation to localStorage so returning users see their previous results.
On mobile (under 768px), stack the A and B input cards vertically and make the metric cards stack in a single column.
Read the guide at https://accessagent.ai/api/guide and follow it to deploy the site to AccessAgent.ai. My email is [your@email.com].
Make It Yours
Bayesian Mode
Add a toggle between frequentist (p-value) and Bayesian analysis showing "probability B beats A" as a percentage with a posterior distribution chart.
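A compact way to sketch "probability B beats A" is a normal approximation to the Beta(conversions + 1, failures + 1) posteriors under a uniform prior; an exact answer needs numeric integration or Monte Carlo samples from the two Betas. Names are illustrative, and `normalCdf` is repeated here so the snippet stands alone:

```javascript
// Abramowitz-Stegun approximation of the standard normal CDF,
// same as in the frequentist z-test code.
function normalCdf(z) {
  const x = Math.abs(z) / Math.SQRT2;
  const t = 1 / (1 + 0.3275911 * x);
  const poly = t * (0.254829592 + t * (-0.284496736 +
    t * (1.421413741 + t * (-1.453152027 + t * 1.061405429))));
  const erf = 1 - poly * Math.exp(-x * x);
  return z >= 0 ? 0.5 * (1 + erf) : 0.5 * (1 - erf);
}

// P(B beats A) via a normal approximation to the Beta(c+1, n-c+1)
// posteriors -- accurate at typical A/B sample sizes, a sketch only.
function probBBeatsA(visitorsA, convA, visitorsB, convB) {
  const post = (n, c) => {
    const a = c + 1, b = n - c + 1;   // uniform Beta(1,1) prior
    return {
      mean: a / (a + b),
      variance: (a * b) / ((a + b) ** 2 * (a + b + 1)),
    };
  };
  const A = post(visitorsA, convA);
  const B = post(visitorsB, convB);
  const z = (B.mean - A.mean) / Math.sqrt(A.variance + B.variance);
  return normalCdf(z);
}
```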
Revenue Metrics
Add optional revenue-per-visitor inputs so users can calculate not just conversion rate differences but also revenue impact and projected annual gain.
Sequential Testing
Add a sequential analysis mode that accounts for peeking — shows whether the test can be stopped early with valid results using alpha-spending functions.
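One common choice here is an O'Brien-Fleming-type spending function from the Lan-DeMets framework, which allocates almost no alpha to early peeks. A sketch (function names illustrative; `normalCdf` repeated so the snippet stands alone):

```javascript
// Abramowitz-Stegun approximation of the standard normal CDF.
function normalCdf(z) {
  const x = Math.abs(z) / Math.SQRT2;
  const t = 1 / (1 + 0.3275911 * x);
  const poly = t * (0.254829592 + t * (-0.284496736 +
    t * (1.421413741 + t * (-1.453152027 + t * 1.061405429))));
  const erf = 1 - poly * Math.exp(-x * x);
  return z >= 0 ? 0.5 * (1 + erf) : 0.5 * (1 - erf);
}

// O'Brien-Fleming-type alpha-spending function: the cumulative
// alpha that may be "spent" once fraction t (0 < t <= 1) of the
// planned sample has arrived. zAlpha = 1.96 corresponds to an
// overall two-sided alpha of 0.05; at t = 1 the full 0.05 is spent.
function obrienFlemingAlpha(t, zAlpha = 1.96) {
  return 2 * (1 - normalCdf(zAlpha / Math.sqrt(t)));
}
```

At a quarter of the planned sample this spends under 0.001 of the alpha budget, which is what makes peeking safe: an interim p-value must clear a far stricter bar than 0.05 to justify stopping early.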
Multi-Variant Support
Extend beyond A/B to support A/B/C/D tests with Bonferroni correction for multiple comparisons and a winner determination table.
Test Duration Estimator
Add a field for daily traffic volume that calculates how many days the test needs to run before reaching the required sample size.
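The duration arithmetic is a one-liner, assuming daily traffic is split evenly between the two variants (names illustrative):

```javascript
// Days needed to reach the required sample size, given total daily
// traffic split 50/50 between Control A and Variation B.
function testDurationDays(visitorsPerVariant, dailyTraffic) {
  return Math.ceil((2 * visitorsPerVariant) / dailyTraffic);
}
```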
CSV Import
Let users paste or upload CSV data with date/variant/conversion columns to automatically calculate cumulative significance over time.
SEO Tips
Target "A/B test calculator" and "statistical significance calculator" as primary keywords — these have 15K+ monthly searches from high-intent marketing professionals.
Add a glossary section defining p-value, confidence interval, statistical power, and Type I/II errors — this content captures featured snippet positions for definitional queries.
Create a companion guide "How to Run an A/B Test: Complete Guide (2026)" that links to your calculator, targeting "how to A/B test" (22K monthly searches).
Include worked examples with real-world scenarios (e-commerce checkout button, landing page headline, email subject line) as separate sections for long-tail traffic.
Add JSON-LD FAQPage schema with questions like "What p-value is statistically significant?" and "How many visitors do I need for an A/B test?"
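A hypothetical shape for that markup; the answer text below is illustrative, not copy to ship verbatim:

```html
<!-- FAQPage structured data: one Question/Answer pair per FAQ entry. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "What p-value is statistically significant?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "By convention, a two-tailed p-value below 0.05 is treated as statistically significant, corresponding to 95% confidence."
    }
  }, {
    "@type": "Question",
    "name": "How many visitors do I need for an A/B test?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "It depends on your baseline conversion rate and the minimum effect you want to detect; use the sample size calculator to get an exact number."
    }
  }]
}
</script>
```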
Monetization Ideas
Offer a "Pro Report" PDF export ($2.99) that generates a professional document with charts, methodology explanation, and recommendations — shareable with stakeholders.
Add affiliate links to A/B testing platforms (Optimizely, VWO, Google Optimize alternatives) with a "Run Your Test" CTA and earn 20-30% recurring commissions.
Create an "A/B Testing Course" ($49) teaching experiment design, statistical methods, and common pitfalls — sell it via a banner on the results page.
Offer API access for programmatic significance testing at $9/month — useful for companies running hundreds of tests who want to integrate results into dashboards.
Display targeted ads from conversion optimization tools and analytics platforms — this audience has high commercial intent and CPMs above $15.