
Before you start: if you’re unfamiliar with the principles of statistical SEO split-testing and how SplitSignal works, we suggest you start here or request a demo of SplitSignal.


First, we asked our Twitter followers to vote:

[Image: Twitter poll results]

Here’s what other SEO professionals have to share about this test:

Vladimir Gertner, Senior Project Manager at Soft Road Apps:

I would be surprised if negative review var increases ctr. Removing it should be a good thing.

Bostjan Tanko, SEO Specialist at APOLLO Insurance:

CTR for sure! But not SEO per se aka lower rankings because of it!

Find out if our followers were right by reading the complete analysis of this test.

The star rating rich result is one of the most sought-after SERP features for many SEOs. The main reason is that a rich result stands out from the other search results, so searchers notice you more. This can lead to a higher click-through rate (CTR). By giving users additional information about the entity (for example, a product), rich results also let users make better judgments directly from the search results. This in turn means your rich result needs to be up to par; otherwise, it can backfire. But more on that later.

Most rich results are generated using structured data. While there are other important use cases for structured data, rich results are one of the biggest drivers for adopting structured data. The review snippet is a short snippet of a rating, for example of a product. When Google finds valid reviews or rating markup, it may show a rich result that includes stars.

[Image: example of a review snippet with stars in the search results]

The most common ratings you see in the SERP are nested aggregate ratings using the aggregateRating property. 
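As a concrete illustration, here is what such markup can look like. This is a minimal sketch built in Python; the property names follow schema.org, but the product name and rating figures are hypothetical examples.

```python
import json

# Minimal schema.org Product markup with a nested AggregateRating.
# The product name and the rating figures are hypothetical.
product_markup = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Wireless Headphones",
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": 4.4,   # average of all collected ratings
        "reviewCount": 89,    # number of ratings behind the average
    },
}

# This JSON-LD would be embedded in the page inside a
# <script type="application/ld+json"> tag.
print(json.dumps(product_markup, indent=2))
```

When Google parses valid markup like this, it may render the stars directly in the snippet.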

As mentioned, if your reviews are not up to par, users may not click through to your website from the search results. Our OrangeValley partner Koen Leemans wanted to put this to the test for one of the largest e-commerce companies in the Netherlands.

The Hypothesis

The website in question marked up all of its product pages with structured data for products. If a product had reviews, they were included in the markup, regardless of whether the review score was positive or negative.

The website sells over 500,000 products, so products with a low review score are unavoidable. However, this meant that some search results weren’t very attractive to click through:

[Image: search result snippet showing a low star rating]

We hypothesized that low ratings would have a negative impact on CTR and therefore organic traffic to the website. So we wanted to validate what would happen if we didn’t include the aggregateRating property (and its nested objects and values) for products with a rating value lower than 3 (out of 5) in the markup.

By doing this, we wanted to increase the likelihood that users would visit the website to learn all about the products without relying solely on the product’s review score. In addition, if a user is already on the website, they may be visiting other (related) product pages on the website instead of continuing to navigate on Google.
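The change itself amounts to a simple gate: emit the aggregateRating object only when the average rating clears the threshold. Here is a minimal sketch in Python; the helper name is ours, but the 3-out-of-5 cutoff is the one used in this test.

```python
RATING_THRESHOLD = 3.0  # out of 5; the cutoff used in this test

def build_product_markup(name, rating_value=None, review_count=0):
    """Build schema.org Product markup, omitting aggregateRating
    (and its nested objects and values) for low-rated products."""
    markup = {
        "@context": "https://schema.org",
        "@type": "Product",
        "name": name,
    }
    # Only include the rating when it exists and meets the threshold.
    if rating_value is not None and rating_value >= RATING_THRESHOLD:
        markup["aggregateRating"] = {
            "@type": "AggregateRating",
            "ratingValue": rating_value,
            "reviewCount": review_count,
        }
    return markup

# A well-reviewed product keeps its stars; a poorly reviewed one does not.
good = build_product_markup("Popular Gadget", rating_value=4.6, review_count=120)
bad = build_product_markup("Unlucky Gadget", rating_value=2.1, review_count=7)
```

The rest of the Product markup stays intact either way, so well-rated products remain eligible for the star rich result.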

The Test

We used SplitSignal to set up and analyze the test. All product pages with a rating value below 3 were selected as either variant or control pages. We kicked off the test and ran it for 21 days. We were able to determine that Googlebot visited 98% of the tested pages.

The Result

[Image: test results chart showing the cumulative difference in clicks]

Removing the aggregateRating property (and its nested objects and values) for products with a rating value lower than 3 (out of 5) resulted in a 21% increase in clicks!

After only six days, we were able to determine that the increase was statistically significant. When the blue shaded area falls entirely below or above the y=0 axis, the test is statistically significant at the 95% level. This means we can be confident that the increase we are seeing is due to the change we made and not to other (external) factors.

Note that we are not comparing the actual control group pages to our variant pages directly. Instead, we compare the variant pages’ actual data against a forecast based on their historical data. The control pages give the model context for trends and external influences. If something else changes during the test (e.g., seasonality), the model detects it and takes it into account. By filtering out these external factors, we gain insight into the real impact of an SEO change.
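In spirit, the reported lift comes from comparing actual clicks on the tested pages with the counterfactual forecast. A toy sketch of that calculation (the daily numbers are invented, and this plain ratio omits the trend modeling and confidence intervals a real analysis uses):

```python
# Hypothetical daily clicks on the variant pages during the test window.
variant_actual   = [120, 130, 125, 140, 150, 145, 155]
# The model's counterfactual forecast: what clicks would have been
# without the change, given historical data and control-page trends.
variant_forecast = [100, 105, 102, 110, 115, 112, 118]

# Relative lift: actual clicks versus the forecast baseline.
lift = sum(variant_actual) / sum(variant_forecast) - 1
print(f"Estimated lift: {lift:.1%}")
```

A real tool would also report uncertainty around the forecast; the lift is only trusted once that uncertainty band clears zero.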

The Analysis of the Result (Why?)

This test shows that rich results by themselves are not a formula for success: displaying low rating scores in search results can draw the wrong kind of attention. We can also assume that the opinions of other users weigh heavily when users decide whether to click through to a website from the search results. So, as an SEO, you have to think carefully about what information you give users directly in the search results.

As assumed, CTR increased dramatically for the tested product pages. But although the result of this test was very positive, the total number of product pages with a low rating score is relatively small. This means the absolute impact on traffic may not be as great as that of other possible SEO changes to pages or templates that attract more traffic. Split testing not only helps our clients discover ways to improve their organic traffic; it also helps prove the impact of SEO changes so they can be prioritized in the development queue.

Gaining additional organic traffic through split testing helps SEOs build strong business cases to push through their SEO changes.

The increase in traffic we saw for this website was very valuable as it also gave them new insights into what their target audience cares about. Keep in mind that something that works for one website may not work for another. The only way to know for sure is to test what works for you!

Have your next SEO split-test analyzed by OrangeValley Agency.
