A/B Testing with a Headless Site

Last updated: August 8, 2024

A common question we get from new merchants is:

Can I run A/B tests on my headless storefront? 

The short answer is, absolutely!

A/B testing is beyond the scope of Nacelle's product offering, but we hope this document helps inform the conversation.

 

A/B Testing & Progressive Roll-Out Best Practices

Background

In its simplest form, an A/B test consists of splitting traffic to a website into two groups (control vs. variant), with each group being presented a different visual interface. Data is then gathered over the test period to determine the statistical significance of the variant’s impact on conversion rate vs. the control.
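To make "statistical significance" concrete, here is a minimal sketch (not part of Nacelle's offering) of the two-proportion z-test commonly used to compare conversion rates between a control and a variant. The function name `zScore` and its signature are illustrative, not from any particular library.

```javascript
// Two-proportion z-test: is the variant's conversion rate lift
// distinguishable from random noise?
function zScore(convControl, visitsControl, convVariant, visitsVariant) {
  const pControl = convControl / visitsControl;
  const pVariant = convVariant / visitsVariant;
  // Pooled conversion rate under the null hypothesis (no difference).
  const pPool = (convControl + convVariant) / (visitsControl + visitsVariant);
  // Standard error of the difference between the two proportions.
  const se = Math.sqrt(pPool * (1 - pPool) * (1 / visitsControl + 1 / visitsVariant));
  return (pVariant - pControl) / se;
}

// |z| > 1.96 corresponds to p < 0.05 (95% confidence, two-tailed).
```

Real-world test evaluation also involves choosing a sample size up front and avoiding peeking; a testing product or an analyst should own that process.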

It’s critical that data-driven companies measure the impact of their migration to a headless architecture. By choosing the correct metrics and measuring them in the form of an A/B test, executives can rally their organizations around the most impactful company initiatives.

Traditional Approach

Traditional client-side A/B testing products from companies like Optimizely, VWO, and Google Optimize offer convenient WYSIWYG editing interfaces; however, all of these products rely on client-side DOM manipulation to deliver the experiment treatment to users. There are two significant drawbacks to this approach:

  1. Performance Degradation - To prevent a "flicker of original content" as the client-side script manipulates the DOM, these providers often recommend loading their scripts synchronously in the head of the HTML document. That blocking round-trip request meaningfully degrades page speed and has downstream effects on SEO.

  2. Conflict with Modern Web Technologies - Both Vue and React (and their respective frameworks, Nuxt and Next) have given web developers a tremendous amount of flexibility in how they deliver applications to users. Patterns like server-side rendering and static site generation have enabled companies to deliver sites faster and more securely. Modifying the DOM at run-time is in direct conflict with this development approach.

Traffic Splitting at the Edge

Many Content Delivery Networks offer "compute at the edge" features such as AWS Lambda@Edge, Netlify Edge Handlers, and Cloudflare Workers. These features run on servers that share the same points of presence as the rest of the nodes in the CDN's network, so the latency introduced to the request lifecycle is reduced to a few milliseconds. A sample Cloudflare traffic-splitting script is shown below.


Splitting traffic at the edge can follow this general approach:

  1. Listen for new inbound requests.

  2. Determine if the requester is already in the test by searching for the presence of a cookie.

  3. If the requester has the cookie, serve them the experience that matches their existing group.

  4. If the requester does not have the cookie, assign them to a group, then pass the group value in the Set-Cookie response header.

Here is sample code for a Cloudflare Worker that follows the above approach. Note that products like VWO and Optimizely Full Stack are compatible with this approach. Additionally, there are many open-source alternatives published on npm that can help your organization execute tests effectively.

// Attach handler to Cloudflare's emitted 'fetch' event
addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request))
})

const oldStoreFront = new URL("https://nacelletestlab.com")
const newStoreFront = new URL("https://headless.nacelletestlab.com")

async function handleRequest(request) {
  const NAME = "first-experiment"

  // Determine which group this requester is in.
  const cookie = request.headers.get("cookie")
  if (cookie && cookie.includes(`${NAME}=control`)) {
    return fetch(new Request(oldStoreFront.toString(), request))
  }
  if (cookie && cookie.includes(`${NAME}=test`)) {
    return fetch(new Request(newStoreFront.toString(), request))
  }

  // No cookie: this is a new client. Choose a group, fetch only the
  // matching origin, and persist the assignment via Set-Cookie.
  const group = Math.random() < 0.5 ? "test" : "control" // 50/50 split
  const origin = group === "control" ? oldStoreFront : newStoreFront
  let response = await fetch(new Request(origin.toString(), request))

  // Responses from fetch are immutable; clone before appending headers.
  response = new Response(response.body, response)
  response.headers.append("Set-Cookie", `${NAME}=${group}; path=/`)
  return response
}
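One caveat with the Math.random() split above: if a visitor's cookie is ever lost, they may flip groups on their next visit. A common alternative is deterministic bucketing, where a stable visitor ID is hashed and compared to the test weight. The `assignGroup` helper below is a hypothetical sketch using a simple FNV-1a hash; it is not part of any vendor's API.

```javascript
// Deterministic, weighted group assignment: the same visitor ID
// always lands in the same group, even if the cookie is lost.
function assignGroup(visitorId, testWeight = 0.5) {
  // FNV-1a hash over the visitor ID (any stable hash works).
  let hash = 0x811c9dc5;
  for (let i = 0; i < visitorId.length; i++) {
    hash ^= visitorId.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193);
  }
  // Map the 32-bit hash to [0, 1) and compare to the test weight.
  const bucket = (hash >>> 0) / 0x100000000;
  return bucket < testWeight ? "test" : "control";
}
```

This also makes uneven splits trivial: `assignGroup(id, 0.1)` sends roughly 10% of traffic to the test group, which is useful for a cautious progressive roll-out.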

 

For further reading, please consider Optimizely’s documentation on the test design pattern.

 

SEO, Caveats & Other Considerations

When split testing between two separate URLs, keep the following considerations in mind:

  1. 301 (Moved Permanently) redirects are considered more SEO-friendly than other redirect methods.

  2. Using rel=canonical tags on your variant pages, pointing back to the original (control) URLs, helps Google's search engine understand which version of the duplicate/similar content should be indexed.

  3. To prevent attribution problems during the test, it may be worthwhile to exclude paid search traffic by bypassing the experiment for requesters whose URLs include a utm_campaign parameter.

  4. You should consult an SEO expert before starting this kind of experiment to ensure your specific scenario is well understood.

  5. Keep in mind that browser features like Safari's Intelligent Tracking Prevention place limits on how long a cookie set from client-side JavaScript can persist. Setting the cookie with a Set-Cookie header on the response to the original request is the most durable way to extend the cookie's lifetime.
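Items 3 and 5 above can be sketched as small helpers inside the edge worker. Both function names here (`shouldExcludeFromTest`, `buildGroupCookie`) are hypothetical illustrations, not part of any vendor API.

```javascript
// Item 3: bypass the experiment for paid traffic carrying a
// utm_campaign query parameter.
function shouldExcludeFromTest(requestUrl) {
  return new URL(requestUrl).searchParams.has("utm_campaign");
}

// Item 5: build a long-lived Set-Cookie value server-side, which is
// more durable than a cookie written from client-side JavaScript.
function buildGroupCookie(name, group, maxAgeDays = 90) {
  const maxAge = maxAgeDays * 24 * 60 * 60; // seconds
  return `${name}=${group}; Path=/; Max-Age=${maxAge}; Secure; SameSite=Lax`;
}
```

In the worker, a request for which `shouldExcludeFromTest` returns true would simply be proxied to the control origin without setting an experiment cookie.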

 

A/B Testing - React on Vercel

For those using React and Vercel, Vercel has a great article on how to build experiments at scale.

 

Client-Side vs. Server-Side Testing

The good people of medium.com wrote a worthwhile piece on client-side vs. server-side testing. It is more philosophical than directly applicable, but it is helpful if you are debating between the two options.