SEO Articles

The Long-Term Link Acquisition Value of Content Marketing


Posted by KristinTynski

Recently, new internal analysis of our work here at Fractl has yielded a fascinating finding:

Content marketing that generates mainstream press is likely 2X as effective as originally thought. Additionally, the long-term ROI is potentially many times higher than previously reported.

I’ll caveat that by saying this applies only to content that can generate mainstream press attention. At Fractl, this is our primary focus as a content marketing agency. Our team, our process, and our research are all structured around figuring out ways to maximize the newsworthiness and promotional success of the content we create on behalf of our clients.

Though data-driven content marketing paired with digital PR is on the rise, there is still a general lack of understanding around the long-term value of any individual content execution. In this exploration, we sought to answer the question: What link value does a successful campaign drive over the long term? What we found was surprising and strongly reinforced our conviction that this style of data-driven content and digital PR yields some of the highest possible ROI for link building and SEO.

To better understand this full value, we wanted to look at the long-term accumulation of the two types of links on which we report:

Direct links from publishers to our client’s content on their domain
Secondary links that link to the story the publisher wrote about our client’s content

While direct links are most important, secondary links often provide significant additional pass-through authority and can frequently be reclaimed through additional outreach and converted into direct do-follow links (something we have a team dedicated to doing at Fractl).

Below is a visualization of the way our content promotion process works:

So how exactly do direct links and secondary links accumulate over time?

To understand this, we did a full audit of four successful campaigns from 2015 and 2016 through today. Having a few years of aggregation gave us an initial benchmark for how links accumulate over time for general interest content that is relatively evergreen.

We profiled four campaigns:

Perceptions of Perfection Across Borders
America’s Most P.C. and Prejudiced Places
Reverse-Photoshopping Video Game Characters
Water Bottle Germs Revealed

The first view we looked at was direct links, or links pointing directly to the client blog posts hosting the content we’ve created on their behalf.

There is a good deal of variability between campaigns, but we see a few interesting general trends that show up in all of the examples in the rest of this article:

Both direct and secondary links will accumulate in a few predictable ways:
A large initial spike with a smooth decline
A buildup to a large spike with a smooth decline
Multiple spikes of varying size

Roughly 50% of the total volume of links that will be built will accumulate in the first 30 days. The other 50% will accumulate over the following two years and beyond.
A small subset of direct links will generate their own large spikes of secondary links.

We’ll now take a look at some specific results. Let’s start by looking at direct links (pickups that link directly back to our client’s site or landing page).

The typical result: A large initial spike with consistent accumulation over time

This campaign, featuring artistic imaginings of what bodies in video games might look like with normal BMI/body sizes, shows the most typical pattern we witnessed, with a very large initial spike and a relatively smooth decline in link acquisition over the first month.

After the first month, long-term new direct link acquisition continued for more than two years (and is still going today!).

The less common result: Slow draw up to a major spike

In this example, you can see that sometimes it takes a few days or even weeks to see the initial pickup spike and subsequent primary syndication. In the case of this campaign, we saw a slow buildup to the pinnacle at about a week from the first pickup (exclusive), with a gradual decline over the following two weeks.


Zooming out to a month-over-month view, we can see resurgences in pickups happening at unpredictable intervals every few months or so. These spikes continued up until today with relative consistency. This happened as some of the stories written during the initial spike began to rank well in Google. These initial stories were then used as fodder or inspiration for stories written months later by other publications. For evergreen topics such as body image (as was the case in this campaign), you will also see writers and editors cycle in and out of writing about these topics as they trend in the public zeitgeist, leading to these unpredictable yet very welcome resurgences in new links.

Least common result: Multiple spikes in the first few weeks

The third pattern we observed was seen on a campaign we executed examining hate speech on Twitter. In this case, we saw multiple spikes during this early period, corresponding to syndications on other mainstream publications that then sparked their own downstream syndications and individual virality.

Zooming out, we saw a similar result as in the other examples, with multiple smaller spikes within the first year and fewer in the following two years. Each of these bumps is associated with the story resurfacing organically on new publications (usually a writer stumbling on coverage of the content during the initial phase of popularity).

Long-term resurgences

Finally, in our fourth example that looked at germs on water bottles, we saw a fascinating phenomenon happen beyond the first month where there was a very significant secondary spike.

This spike represents syndication across all or most of the iHeartRadio network. As this example demonstrates, it isn’t wholly unusual to see large-scale networks pick up content even a year or more later, generating results that rival or even exceed the initial month’s.

Aggregate trends

When we looked at direct links back to all four campaigns together, we saw the common progression of link acquisition over time. The chart below shows the distribution of new links acquired over two years. We saw a pretty classic long tail distribution here, where 50% of the total links acquired happened in the first month, and the other 50% were acquired in the following two to three years.

“If direct links are the cake, secondary links are the icing, and both accumulate substantially over time.”

Links generated directly to the blog posts/landing pages of the content we’ve created on our clients’ behalf are only really a part of the story. When a campaign garners mainstream press attention, the press stories can often go mildly viral, generating large numbers of syndications and links to these stories themselves. We track these secondary links and reach out to the writers of these stories to try and get link attributions to the primary source (our clients’ blog posts or landing pages where the story/study/content lives).

These types of links also follow a similar pattern over time to direct links. Below are the publishing dates of these secondary links as they were found over time. Their over-time distribution follows the same pattern, with 50% of results being realized within the first month and the following 50% of the value coming over the next two to three years.

The value in the long tail

By looking at multi-year direct and secondary links built to successful content marketing campaigns, it becomes apparent that the total number of links acquired during the first month is really only about half the story.

For campaigns that garner initial mainstream pickups, there is often a multi-year long tail of links that are built organically without any additional or future promotions work beyond the first month. While this long-term value is not something we report on or charge our clients for explicitly, it is extremely important to understand as a part of a larger calculus when trying to decide if doing content marketing with the goal of press acquisition is right for your needs.

Cost-per-link (a typical way to measure ROI of such campaigns) will halve if links built are measured over these longer periods — moving a project you perhaps considered a marginal success at one month to a major success at one year.
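The arithmetic is worth spelling out. A hypothetical illustration (the figures below are invented for the example, not taken from the Fractl data):

```python
# Hypothetical illustration: if half the eventual links arrive in month one
# and the other half accrue over the long tail, cost-per-link halves when
# measured over the full window. All figures here are invented.
campaign_cost = 10_000      # total spend on the campaign
links_at_1_month = 50       # roughly half the eventual links
links_at_2_years = 100      # the long tail doubles the total

cpl_short = campaign_cost / links_at_1_month   # 200.0 per link
cpl_long = campaign_cost / links_at_2_years    # 100.0 per link

print(cpl_short, cpl_long)  # 200.0 100.0
```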

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


A Quarter-Million Reasons to Use Moz’s Link Intersect Tool


Posted by rjonesx.

Let me tell you a story.

It begins with me in a hotel room halfway across the country, trying to figure out how I’m going to land a contract from a fantastic new lead worth $250,000 annually. We weren’t in over our heads by any measure, but the potential client was definitely looking at what most would call “enterprise” solutions, and we weren’t exactly “enterprise.”

Could we meet their needs? Hell yes we could — better than our enterprise competitors — but there’s a saying that “no one ever got fired for hiring IBM”; in other words, it’s always safe to go with the big guys. We weren’t an IBM, so I knew that by reputation alone we were in trouble. The RFP was dense, but like most SEO gigs, there wasn’t much in the way of opportunity to really differentiate ourselves from our competitors. It would be another “anything they can do, we can do better” meeting where we’d grasp for reasons why we were better. In an industry where so many of our best clients require NDAs that prevent us from producing really good case studies, how could I prove we were up to the task?

In less than 12 hours we would be meeting with the potential client and I needed to prove to them that we could do something that our competitors couldn’t. In the world of SEO, link building is street cred. Nothing gets the attention of a client faster than a great link. I knew what I needed to do. I needed to land a killer backlink, completely white-hat, with no new content strategy, no budget, and no time. I needed to walk in the door with more than just a proposal — I needed to walk in the door with proof.

I’ve been around the block a few times when it comes to link building, so I wasn’t at a loss when it came to ideas or strategies we could pitch, but what strategy might actually land a link in the next few hours? I started running prospecting software left and right — all the tools of the trade I had at my disposal — but imagine my surprise when the perfect opportunity popped up right in little old Moz’s Open Site Explorer Link Intersect tool. To be honest, I hadn’t used the tool in ages. We had built our own prospecting software on APIs, but the perfect link just popped up after adding in a few of their competitors on the off chance that there might be an opportunity or two.

There it was:

3,800 root linking domains to the page itself
The page was soliciting submissions
Took pull requests for submissions on GitHub!

I immediately submitted a request and began the refresh game, hoping the repo was being actively monitored. By the next morning, we had ourselves a link! Not just any link, but despite the client having over 50,000 root linking domains, this was now the 15th best link to their site. You can imagine me anxiously awaiting the part of the meeting where we discussed the various reasons why our services were superior to that of our competitors, and then proceeded to demonstrate that superiority with an amazing white-hat backlink acquired just hours before.

The quarter-million-dollar contract was ours.

Link Intersect: An undervalued link building technique

Backlink intersect is one of the oldest link building techniques in our industry. The methodology is simple. Take a list of your competitors and identify the backlinks pointing to their sites. Compare those lists to find pages that overlap. Pages which link to two or more of your competitors are potentially resource pages that would be interested in linking to your site as well. You then examine these sites and do outreach to determine which ones are worth contacting to try and get a backlink.
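The core of the technique is a simple set operation. A minimal sketch with hypothetical data (the domains and pages below are invented; a real run would pull backlink lists from a tool like Link Intersect):

```python
# Link intersect sketch: pages that link to two or more competitors are
# flagged as prospects. All domains/pages here are hypothetical examples.
from collections import Counter

backlinks = {
    "competitor-a.com": {"uni.edu/gardening-links", "nc-wildflowers.org/resources", "blog.example/post-1"},
    "competitor-b.com": {"uni.edu/gardening-links", "nc-wildflowers.org/resources", "news.example/story"},
    "competitor-c.com": {"nc-wildflowers.org/resources", "forum.example/thread"},
}

# Count how many competitors each page links to
counts = Counter(page for pages in backlinks.values() for page in pages)

# Keep pages linking to at least two competitors
prospects = sorted(page for page, n in counts.items() if n >= 2)
print(prospects)
```

Pages that intersect more competitors rank higher as prospects, which is exactly the prioritization the tool applies.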

Let’s walk through a simple example using Moz’s Link Intersect tool.

Getting started

We start on the Link Intersect page of Moz’s new Link Explorer. While we had Link Intersect in the old Open Site Explorer, you’re going to want to use our new Link Intersect, which is built from our giant index of 30 trillion links and is far more powerful.

For our example here, I’ve chosen a random gardening company in Durham, North Carolina called Garden Environments. The website has a Domain Authority of 17 with 38 root linking domains.

We can go ahead and copy-paste the domain into “Discover Link Opportunities for this URL” at the top of the Link Intersect page. If you notice, we have the choice of “Root Domain, Subdomain, or Exact Page”:

I almost always choose “root domain” because I tend to be promoting a site as a whole, and I’m not interested in prospects that already link to some other page on the site. That is to say, by choosing “root domain,” any site that links to any page on your site will be excluded from the prospecting list. Of course, this might not be right for your situation. If you have a hosted blog on a subdomain or a hosted page on a site, you will want to choose subdomain or exact page to make sure you rule out the right backlinks.

You also have the ability to choose whether the tool reports back pages or root linking domains. This is really important, and I’ll explain why.

Depending on your link building campaign, you’ll want to vary your choice here. Let’s say you’re looking for resource pages that you can list your website on. If that’s the case, you will want to choose “pages.” The Link Intersect tool will then prioritize pages that have links to multiple competitors on them, which are likely to be resource pages you can target for your campaign. Now, let’s say you would rather find publishers that talk about your competitors and are less concerned about them linking from the same page. You want to find sites that have linked to multiple competitors, not pages. In that case, you would choose “domains.” The system will then return the domains that have links to multiple competitors and give you example pages, but you won’t be limited only to pages with multiple competitors on them.

In this example, I’m looking for resource pages, so I chose “pages” rather than “domains.”

Choosing your competitor sites

A common mistake made at this point is to choose exact competitors. Link builders will often copy and paste a list of their biggest competitors and cross their fingers for decent results. What you really want are the best link pages and domains in your industry — not necessarily your competitors.

In this example I chose the gardening page on a local university, a few North Carolina gardening and wildflower associations, and a popular page that lists nurseries. Notice that you can choose subdomain, domain, or exact page as well for each of these competitor URLs. I recommend choosing the broadest category (domain being broadest, exact page being narrowest) that is relevant to your industry. If the whole site is relevant, go ahead and choose “domain.”

Analyzing your results

The results returned will prioritize pages that link to multiple competitors and have a high Domain Authority. Unlike some of our competitors’ tools, if you put in a competitor that doesn’t have many backlinks, it won’t cause the whole report to fail. We list all the intersections of links, starting with the most and narrowing down to the fewest. Even though the nurseries website doesn’t provide any intersections, we still get back great results!

Now we have some really great opportunities, but at this point you have two choices. If you really prefer, you can just export the opportunities to CSV like any other tool on the market, but I prefer to go ahead and move everything over into a Link Tracking List.

By moving everything into a link list, we’re going to be able to track link acquisition over time (once we begin reaching out to these sites for backlinks) and we can also sort by other metrics, leave notes, and easily remove opportunities that don’t look fruitful.

What did we find?

Remember, we started off with a site that has barely any links, but we turned up dozens of easy opportunities for link acquisition. We turned up a simple resources page on forest resources, a potential backlink which could easily be earned via a piece of content on forest stewardship.

We turned up a great resource page on how to maintain healthy soil and yards on a town government website. A simple guide covering the same topics here could easily earn a link from this resource page on an important website.

These were just two examples of easy link targets. From community gardening pages, websites dedicated to local creek, pond, and stream restoration, and general enthusiast sites, the Link Intersect tool turned up simple backlink gold. What is most interesting to me, though, was that these resource pages never included the words “resources” or “links” in the URLs. Common prospecting techniques would have just missed these opportunities altogether.

While it wasn’t the focus of this particular campaign, I did try the alternative of showing domains rather than pages that link to the competitors. We found similarly useful results using this methodology.

For example, we found one publication that had linked to multiple of the competitor sites and, as it turns out, would be a perfect one to pitch for a story as part of a PR campaign promoting the gardening site.


The new Link Intersect tool in Moz’s Link Explorer combines the power of our new incredible link index with the complete features of a link prospecting tool. Competitor link intersect remains one of the most straightforward methods for finding link opportunities and landing great backlinks, and Moz’s new tool coupled with Link Lists makes it easier than ever. Go ahead and give it a run yourself — you might just find the exact link you need right when you need it.

Find link opportunities now!


Faceted Navigation and SEO: A Deeper Look


The complex web of factors that determine page counts for a site with faceted navigation. It’s about the SEO, folks.

tl;dr: Skip to each “Takeaways” section if you want a few ideas for handling faceted navigation and SEO. But do so at your own risk. The “why” is as important as the “what.”

If you have ever shopped for anything online, you’ve seen faceted navigation. This is the list of clickable options, usually in the left panel, that can be used to filter results by brand, price, color, etc. Faceted navigation makes it possible to mix & match options in any combination the user wishes. It’s popular on large online stores because it allows the user to precisely drill down to only the things they are interested in.

An example of faceted navigation

But this can cause huge problems for search engines because it generates billions of useless near-duplicate pages. This wastes crawl budget, lowers the chances that all of the real content will get indexed, and it gives the search engines the message that the site is mostly low-quality junk pages (because, at this point, it is).

Many articles talk about faceted navigation and how to mitigate the SEO problems that it causes. Those are reactive strategies: How to prevent the search engines from crawling and indexing the billions of pages your faceted navigation created.

This is not one of those how-to articles.

Instead, it’s about the decisions that create massive duplication and how to avoid them from the start. It’s about the seemingly innocuous UX choices and their unintended consequences. My goal is to give you a deeper understanding of how each decision affects crawlability and final page counts. I’m hoping this will give you knowledge you can use, both to avoid problems before they start and to mitigate problems that can’t be avoided.

Match Types and Grouping

Faceted navigation is typically divided into groups, with a list of clickable options in each group. There might be one group for brand names, another for sizes, another for colors, etc. The options in a group can be combined in any of a few different ways:

“AND” matching — With this match type, the store only shows an item if it matches all of the selected options. “AND” matching is most often used for product features where it is assumed the shopper is looking for a specific combination of features, and is only interested in a product if it has all of them. (e.g., headphones that are both wireless and noise-canceling)
“OR” matching — With this match type, the store shows items that match any of the selected options. This can be used for lists of brand names, sizes, colors, price ranges, and many other things. The assumption here is that the user is interested in a few different things, and wants to see a combined list that includes all of them. (e.g., all ski hats available in red, pink or yellow).
“Radio button” matching — With this match type, only one option may be selected at a time. Selecting one option deselects all others. The assumption here is that the options are 100% mutually exclusive, and nobody would be interested in seeing more than one of them at a time. Radio buttons are often used to set sort order. They are also sometimes used to choose between mutually exclusive categories. (e.g., specifying the smartphone brand/model when shopping for phone cases) Some radio button implementations require at least one selected option (e.g., for sort order), and others don’t (e.g., for categories).

The options within a given group can be combined using any one of these match types, but the groups themselves are almost always combined with each other using “AND” matching. For example, if you select red and green from the “colors” group, and you select XL and XXL from the “sizes” group, then you will get a list of every item that is both one of those two colors and one of those two sizes.

A typical real-world website will have several groups using different match types, with many options between them. The total number of combinations can get quite large:

The above example has just over 17 billion possible combinations. Note that the total number of actual pages will be much larger than this because the results from some combinations will be split across many pages.

For faceted navigation, page counts are ultimately determined by three main things:

The total number of possible combinations of options — In the simplest case (with only “AND” & “OR” matching, and no blocking), the number of combinations will be 2^n, where n is the number of options. For example, if you have 12 options, then there will be 2^12, or 4,096 possible combinations. This gets a bit more complicated when some of the groups are radio buttons, and it gets a lot more complicated when you start blocking things.
The number of matching items found for a given combination — The number of matching items is determined by many factors, including match type, the total number of products, the fraction of products matched by each filter option, and the amount of overlap between options.
The maximum number of items to be displayed per page — This is an arbitrary choice set by the site designer. You can set this to any number you want. A bigger number means fewer pages but more clutter on each of them.


Test: How Does Match Type Affect Page Counts?

The choice of match type affects the page count by influencing both the number of combinations of options and also the number of matching items per combination.

How were these results calculated?
All of the numeric results in this article were generated by a simulation script written for this purpose. This script works by modeling the site as a multi-dimensional histogram, which is then repeatedly scaled and re-combined with itself each time a new faceted nav option is added to the simulated site. The script simulates gigantic sites with many groups of different option types relatively quickly. (For previous articles, I have always generated crawl data using an actual crawler, running on a test website made up of real HTML pages. That works fine when there are a few tens of thousands of pages, but some of the tests for this article have trillions of pages. That would take my crawler longer than all of recorded human history to crawl. Civilizations rise and fall over centuries. I decided not to wait that long.)

Test #1 — Simple “AND” Matching

Suppose we have a site with the following properties:

The faceted nav consists of one big group, with 32 filtering options that can be selected in any combination.
There are 10,000 products.
On average, each filtering option matches 20% of the products.
The site displays (up to) 10 products per page.
Options are combined using “AND” matching.

The above assumptions give you a site with:

4,294,967,296 different combinations of options
4,295,064,687 pages.
4,294,724,471 empty results.

The obvious: The number of pages is enormous, and the vast majority of them are empty results. For every 12,625 pages on this site, one shows actual products. The rest show the aggravating “Zero items found” message. This is a terrible user experience and a colossal waste of crawl budget. But it’s also an opportunity.
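As a sanity check, these figures can be reproduced with a short script, assuming the 32 filter options are statistically independent (an assumption of this sketch; the article’s own script uses a histogram model, but the simple case agrees):

```python
# Reproduce Test #1 analytically: one 32-option "AND" group, 10,000
# products, each option matching 20% of products, 10 products per page.
# Independence between options is assumed.
from fractions import Fraction
from math import comb

n_options = 32
n_products = 10_000
match = Fraction(1, 5)   # 20% match rate, kept exact to avoid float error
per_page = 10

combos = pages = empty = 0
for k in range(n_options + 1):
    ways = comb(n_options, k)             # ways to select k of the 32 options
    items = int(n_products * match ** k)  # expected matches under "AND"
    combos += ways
    if items == 0:
        empty += ways
        pages += ways                     # an empty result still renders one page
    else:
        pages += ways * -(-items // per_page)   # ceiling division

print(f"{combos:,} combinations, {pages:,} pages, {empty:,} empty")
# 4,294,967,296 combinations, 4,295,064,687 pages, 4,294,724,471 empty
```

Only combinations of five or fewer options return any products at all; everything beyond that is an empty result.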

So what can we do about all those empty results? If you are in control of the server side code, you can remove them. Any option that would lead to a page that says “Zero items found” should either be grayed out (and no longer coded as a link) or, better yet, removed entirely. This needs to be evaluated on the server side each time a new page is requested. If this is done correctly, then each time the user clicks on another option, all of the remaining options that would have led to an empty result will disappear. This reduces the number of pages, and it also dramatically improves the user experience. The user no longer has to stumble through a maze of mostly dead ends to find the rare combinations that show products.

So let’s try this.

Test #2 — “AND” Matching, With Empty Results Removed

This test is identical to Test #1, except now all links that lead to empty results are silently removed.

This time, we get:

1,149,017 (reachable) combinations of options.
1,246,408 pages.
0 empty results. (obviously, because we’ve removed them)

This may still seem like a lot, but it’s a significant improvement over the previous test. The page count has gone from billions down to just over one million. This is also a much better experience for the users, as they will no longer see any useless options that return zero results. Any site that has faceted nav should be doing this by default.

Test #3 — “OR” Matching

This test uses the same parameters as Test #1, except it uses “OR” matching:

The faceted nav still has 32 filtering options
There are still 10,000 products.
Each filtering option still matches 20% of the products.
The site still displays 10 products per page.
Options are now combined using “OR” matching instead of “AND” matching.

This gives us:

4,294,967,296 different combinations of options.
4,148,637,734,396 pages (!)
0 empty results.

The number of combinations is precisely the same, but the number of pages is much higher now (966 times higher), and there are no longer any empty results. Why is the page count so high? Because, with “OR” matching, every time you click on a new option the number of matching items increases. This is the opposite of “AND” matching, where the number decreases. In this test, most combinations now include almost all of the products on the site. In Test #1, most combinations produced empty results.

There are no empty results at all in this new site. The only way there could be an empty result would be if you chose to include a filtering option that never matches anything (which would be kind of pointless). The strategy of blocking empty results does not affect this match type.
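The divergence between the two match types is easy to see in expected item counts, again assuming independent options with a 20% match rate (a simplification of the article’s model):

```python
# Expected matching items for k selected options, 10,000 products,
# 20% match rate per option, independence assumed.
from fractions import Fraction

n_products = 10_000
p = Fraction(1, 5)

def and_items(k):
    # every selected option must match, so the pool shrinks geometrically
    return n_products * p ** k

def or_items(k):
    # any selected option may match, so the pool grows toward all products
    return n_products * (1 - (1 - p) ** k)

for k in (1, 2, 5, 10):
    print(k, float(and_items(k)), float(or_items(k)))
# 1: 2000 vs 2000;  2: 400 vs 3600;  5: 3.2 vs 6723.2;  10: ~0.001 vs ~8926
```

With “AND,” five clicks leave about three products; with “OR,” the same five clicks match two-thirds of the catalog, so nearly every combination fills hundreds of pages.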

Test #4 — Radio Buttons

This test uses radio button matching.

If we repeat Test #1, but with radio button matching, we get:

33 different combinations of options.
7,400 pages.
0 empty results.

This is outrageously more efficient than any of the others. The downside of radio button matching is that it’s much more restrictive in terms of user choice.
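The radio button arithmetic is simple enough to check by hand, again assuming each option matches 20% of products:

```python
# Radio button matching: at most one option active at a time.
n_options = 32
n_products = 10_000
per_page = 10
matches_per_option = n_products * 20 // 100   # 20% of products = 2,000 items

# 32 single-option states plus the "nothing selected" state:
combos = n_options + 1

pages = n_products // per_page                         # unfiltered view: 1,000 pages
pages += n_options * (matches_per_option // per_page)  # 32 options x 200 pages each

print(combos, pages)   # 33 combinations, 7,400 pages
```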

The takeaway: Always at least consider using radio button matching when you can get away with it (any time the options are mutually exclusive). It will have a dramatic effect on page counts.

Recap of Tests #1–4:

Page counts:

“AND” matching (without blocking empty results): 4,295,064,687 pages
“AND” matching, with empty results blocked: 1,246,408 pages
“OR” matching: 4,148,637,734,396 pages
Radio buttons: 7,400 pages


The choice of match type is important and profoundly impacts page counts.
“OR” matching can lead to extremely high page counts.
“AND” matching isn’t as bad, provided you are blocking empty results.
You should always block empty results.
Blocking empty results helps with “AND” matching, but doesn’t affect “OR” matching.
Always use radio buttons when the options are mutually exclusive.

How Grouping Affects Page Count

So far, we have looked at page counts for sites that have one big group of options with the same match type. That’s unrealistic. On a real website, there will usually be many groups with different match types. The exact way the options are separated into groups is another factor that can affect page counts.

Test #5 — “OR” Matching, Split Into Multiple Groups

Let’s take the original parameters from Test #3:

The faceted nav has a total of 32 filtering options.
There are 10,000 products.
On average, each filtering option matches 20% of the products.
The site displays up to 10 products per page.
Options are combined using “OR” matching.

But this time, we’ll redo the test several times, and each time, we’ll split the 32 options into a different number of groups.

This gives us a page count and an empty-result count for each of these groupings (page counts drop as the split gets finer, and the two finest splits produce some empty results):

1 group with 32 options
2 groups with 16 options per group
4 groups with 8 options per group
8 groups with 4 options per group
16 groups with 2 options per group

The interesting thing here is that the last two tests have some empty results. Yes, all groups used “OR” matching, and yes, I told you “OR” matching does not produce empty results. So what’s going on here? Remember, no matter which match types are used within each group, the groups are combined with each other using “AND” matching. So, if you break an “OR” group into many smaller “OR” groups, you get behavior closer to an “AND” group.

Another way to put it: Suppose there are eight groups with four options each, and the user has selected exactly one option from each group. For any item to show up in those results, the item would have to match all eight of those selected options. This is functionally identical to what you would get if those eight selected options were part of an “AND” group.
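Under the same independence assumption used earlier (20% match rate, 10,000 products), the expected result of one selection per group is easy to compute:

```python
# One option selected in each of eight "OR" groups. Groups combine with
# "AND", so an item must match all eight selections. Independence assumed.
from fractions import Fraction

n_products = 10_000
p = Fraction(1, 5)   # 20% match rate per option

g = 8
expected_items = n_products * p ** g

print(float(expected_items))   # 0.0256 -- effectively an empty result
```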

If you are blocking empty results (which you should be doing anyway), then the actual page counts for the last two tests will be much smaller than is shown in this table. Before you get all excited, note that you have to have quite a few groups before this starts happening. It’s possible some site might be in a market where it makes sense to have eight groups with four options each, but it isn’t something that will happen often.

The boring but more practical observation is that even breaking the group into two parts reduces the page count noticeably. The difference isn’t huge, but it’s enough to be of some value. If a group of options that uses “OR” matching can be logically separated into two or more smaller groups, then it may be worth doing.

Test #6 — “AND” Matching, Split Into Multiple Groups

(I’m including this test because, if I don’t, people will tell me I forgot to do this one.)

This test is the same as Test #5, but with “AND” matching instead of “OR” matching (and empty results are now being blocked).


1 group with 32 options
2 groups with 16 options per group
4 groups with 8 options per group
8 groups with 4 options per group
16 groups with 2 options per group

Yep. They all have the same number of pages. How can this be? The options within each group use “AND” matching, and groups are combined with each other using “AND” matching, so it doesn’t matter if you have one group or several. They are functionally identical.


If you want to split up an “AND” group because you think it will make sense to the user or will look nicer on the page, then go for it, but it will not affect page counts.

Other Things that Affect Page Counts
Test #7 — Changing “Items per Page”

This test uses the following parameters:

The faceted nav consists of five groups, with varying option counts, and a mix of different match types.
There are 10,000 products.
On average, each filtering option matches 20% of the products.
Links to empty results are blocked.

The test was repeated with different values for “Items per Page.”

This gives us a page count for each of the following settings:

10 items per page
32 items per page
100 items per page
320 items per page
1,000 items per page

This makes a difference when the values are small, but the effect tapers off as the values get larger.
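The tapering is just page-count arithmetic at work. A minimal sketch (the 2,000-result figure is illustrative, chosen to match the 20% average match rate over 10,000 products):

```python
import math

def page_count(num_results, per_page):
    # Every result set produces at least one page, even an empty one.
    return max(1, math.ceil(num_results / per_page))

# 2,000 results = 20% of 10,000 products, the average match rate above.
for per_page in (10, 32, 100, 320, 1000):
    print(per_page, page_count(2000, per_page))
# -> 10 200, 32 63, 100 20, 320 7, 1000 2
```

Going from 10 to 100 items per page cuts the count by a factor of ten, but past a few hundred items per page most result sets already fit on a single page, and the count can't drop below one.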

Test #8 — Adding a Pagination Limit

Some sites, especially some very large online stores, try to reduce database load by setting a “pagination limit.” This is an arbitrary upper limit to the number of pages that can be returned for a given set of results.

For example, if a given filter combination matches 512,000 products, and the site is set up to show 10 products per page, this particular combination would normally create 51,200 pages. Some sites set an arbitrary limit of, say, 100. If the user clicks all the way to page 100, there is no link to continue further.

These sites do this because, compared to delivering pages at the start of a pagination structure, delivering pages deeper in the structure creates a massive load on the database (for technical reasons beyond the scope of this article). The larger the site, the greater the load, so the largest sites have to set an arbitrary limit.
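In code terms the limit is just a cap on the computed page count. A minimal sketch, using the numbers from the example above (the helper name is my own, not from the article):

```python
import math

def capped_page_count(num_results, per_page, limit=None):
    # Normal page count, with at least one page per result set.
    pages = max(1, math.ceil(num_results / per_page))
    # The pagination limit simply truncates the deepest pages.
    return pages if limit is None else min(pages, limit)

print(capped_page_count(512_000, 10))       # -> 51200 (no limit)
print(capped_page_count(512_000, 10, 100))  # -> 100 (limit kicks in)
```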

This test uses the following parameters:

The faceted nav consists of five groups, with varying option counts, and a mix of different match types.
There are 500,000 products.
On average, each filtering option matches 20% of the products.
Links to empty results are blocked.

The test was repeated with different values for the pagination limit.

This gives us a table of total page counts for a range of pagination limits, from small limits all the way up to no limit at all.

That’s definitely an improvement, but it’s underwhelming. If you cut the pagination limit in half, you don’t wind up with half as many pages. It’s more in the neighborhood of 90% as many. But this improvement is free because this type of limit is usually added for reasons other than SEO.

Pagination Takeaways

Test 7:

For lower values, increasing “Items per Page” reduces page counts by a noticeable amount.
When the values get higher, the effect tapers off. This happens because most of the results now fit on one page (and the page count can’t get lower than one).

Test 8:

If you have a huge site implementing a pagination limit primarily for database performance reasons, you may see a minor SEO benefit as a free bonus.
If you’re not also doing this to reduce database load, it’s not worth it.

Selectively Blocking Crawlers

All of the tests so far let the crawler see all of the human-accessible pages. Now let’s look at strategies that work by blocking pages via robots meta, robots.txt, etc.

Before we do that, we need to be clear about what “page count” really means. There are actually three different “page counts” that matter here:

Human-readable page count — Pages that can be viewed by a human being with a browser.
Crawlable page count — Pages that a search engine crawler is allowed to request.
Indexable page count — The number of pages that the search engine is allowed to index, and to potentially show in search results.

The crawlable page count is important because it determines how much crawl budget is wasted. This will affect how thoroughly and how frequently the real content on the site gets crawled. The indexable page count is important because it effectively determines how many thin, near-duplicate pages the search engines will try to index. This is likely to affect the rankings of the real pages on the site.

Test #9 — Selection Limit via Robots Meta with “noindex, nofollow”

In this test, if the number of selected options on the page gets above a pre-specified limit, then <meta name="robots" content="noindex,nofollow"> will be inserted into the HTML. This tells the search engines not to index the page or follow any links from it.

This test uses the following parameters:

The faceted nav consists of five groups, with varying option counts, and a mix of different match types.
There are 10,000 products.
On average, each filtering option matches 20% of the products.
Links to empty results are blocked.

For this test, the “selection limit” is varied from 0 to 5. Any page where the number of selected options is larger than this selection limit will be blocked, via robots meta tag with noindex, nofollow.
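A sketch of how the server side might implement this test's scheme (the helper name and the example limit are hypothetical, not from the article):

```python
def robots_meta_for(num_selected, selection_limit):
    # Pages above the selection limit get a blocking robots meta tag;
    # everything else stays indexable and followable.
    if num_selected > selection_limit:
        return '<meta name="robots" content="noindex,nofollow">'
    return ""

print(robots_meta_for(3, 2))  # blocked: three options selected, limit is 2
print(robots_meta_for(2, 2))  # empty string: page stays indexable
```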

The table compared the crawlable and indexable page counts for each selection limit from 0 to 5, and with no limit at all.

In these results, the indexable page count is reduced dramatically, but the crawlable page count is reduced by much less. Why? Because a robots meta tag is part of the HTML code of the page it is blocking. That means the crawler has to load the page in order to find out it has been blocked. A robots meta tag can block indexing, but it can’t block crawling, so it still wastes crawl budget.

You might well ask: If robots meta can’t directly block a page from being crawled, then why is the crawlable page count reduced at all? Because crawlers can no longer reach the deepest pages: The pages that link to those pages are no longer followed or indexed. Robots meta can’t directly block crawling of a particular page, but it can block the page indirectly, by setting “nofollow” for all of the pages that link to it.

Test #10 — Repeat of Test #9, But With “noindex, follow”

This is a repeat of test #9, except now the pages are blocked by a robots meta tag with “noindex, follow” instead of “noindex, nofollow.” This tells the crawler that it still shouldn’t index the page, but that it is OK to follow the links from it.

(I’m only including this one because, if I don’t, someone is bound to tell me I forgot to include it.)

As before, the table compared the crawlable and indexable page counts for each selection limit from 0 to 5, and with no limit at all.

This scheme reduces the number of indexable pages, but it does nothing whatsoever to prevent wasted crawl budget. Wasted crawl budget is the main problem that needs to be solved here, which makes this scheme useless. There are some use cases (unrelated to faceted nav) where “noindex, follow” is a good choice, but this isn’t one of them.

Can the selection limit be implemented with robots.txt?

As shown in test #9, using robots meta tags to implement a selection limit is not ideal, because robots meta tags are part of the HTML of the page. The crawler has to load each page before it can find out if the page is blocked. This wastes crawl budget.

So what about using robots.txt instead? Robots.txt seems like a better choice here, because it blocks pages from being crawled, unlike robots meta, which only blocks pages from being indexed and/or followed. But can robots.txt be used to selectively block pages based on how many options they have selected? The answer depends on the URL structure. In some cases it’s simple; in others it’s difficult or impossible.

For example, if the URL structure uses some completely impenetrable format like base-64-encoded JSON:

Then you are out of luck. You cannot use robots.txt to filter this, because there’s no way for robots.txt to tell how many selected options there are. You’ll have to use robots meta or X-Robots-Tag (both of which can be generated by the server-side code, which has access to the decoded version of the query data).
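A sketch of that server-side approach, assuming a hypothetical payload format of base-64-encoded JSON that maps group names to lists of selected option IDs (the format and the limit of 2 are made up for illustration):

```python
import base64
import json

def selection_count(encoded):
    # robots.txt can't see inside this opaque payload, but the server
    # can decode it and count the selected options.
    filters = json.loads(base64.urlsafe_b64decode(encoded))
    return sum(len(options) for options in filters.values())

# Hypothetical payload: {"color": [3, 7], "brand": [12]} -> 3 options.
payload = base64.urlsafe_b64encode(
    json.dumps({"color": [3, 7], "brand": [12]}).encode()
).decode()

if selection_count(payload) > 2:
    # Send this as an HTTP response header instead of a meta tag:
    print("X-Robots-Tag: noindex, nofollow")
```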

On the other hand, if all filter options are specified as a single underscore-separated list of ID numbers in the query string, like this:

Then you can easily block all pages that have more than (for example) two options selected, by doing this:

User-agent: *
Disallow: /products?*filters=*_*_
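To see why this pattern works: with an underscore-separated list, a URL with n selected options contains n−1 underscores, so requiring two underscores after `filters=` blocks every URL with three or more options. Here's a quick sanity check using a simplified re-implementation of robots.txt wildcard matching (real crawlers implement this themselves; the helper below is only for illustration and ignores the `$` end-anchor):

```python
import re

def rule_matches(pattern, url_path):
    # In robots.txt, "*" matches any character sequence, and a rule
    # is anchored at the start of the URL path.
    regex = "^" + ".*".join(re.escape(part) for part in pattern.split("*"))
    return re.search(regex, url_path) is not None

rule = "/products?*filters=*_*_"
print(rule_matches(rule, "/products?filters=12_34_56"))  # True: 3 options, blocked
print(rule_matches(rule, "/products?filters=12_34"))     # False: 2 options, allowed
```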

So let’s try this.

Test #11 — Selection Limit, via Robots.txt

This is a repeat of test #9, except now the pages are blocked using robots.txt instead of robots meta.

The table compared the crawlable and indexable page counts for each selection limit from 0 to 5, and with no limit at all.

The takeaways:

Blocking pages based on a selection limit is a very effective way to reduce page counts.
Implementing this with robots.txt is best.
But you can only use robots.txt if the URL structure allows it.
Implementing this with robots meta is less effective than robots.txt, but still useful.


Faceted navigation is one of the thorniest SEO challenges large sites face. Don’t wait to address issues until after you’ve built your site. Plan ahead. Use robots.txt, consider selection limits, and “think” like a search engine.

A little planning can improve use of crawl budget, boost SEO, and improve the user experience.

The post Faceted Navigation and SEO: A Deeper Look appeared first on Portent.


How to scale content production to capture the long-tail opportunity

Here’s something we all know so well that nobody needs to say it anymore: content is king.

We know it because we’ve been hit over the head with the phrase more times than you can shake a page view at. There’s no getting away from it: producing high-quality, engaging content and unique copy is vital for SEO, brand awareness, and affinity.

There will be few digital marketers out there who are not painfully aware of the challenge. When resources, time, and money are (more likely than not) limiting factors, how do you produce large amounts of content to a high enough standard to be effective?

This can be especially true if you or your client is a business with many different product lines, or in multiple locations around the world. The potential topics are infinite, red tape acts as a bottleneck, and copywriters can be overworked and expensive.

The good news is that with the rising popularity of remote working and digital nomads, combined with a solid strategy and process, you don’t have to make the impossible choice between quality and quantity.

Use a network of freelancers

Perhaps you have a short-term project in the pipeline, or your client suddenly wants to dramatically increase the amount of content in production. What do you do? Hiring a team of copywriters is expensive.

The freelance market, however, is competitive, and these days you don’t have to compromise on quality for the sake of cost. Digital nomads are highly skilled, perhaps even multilingual, and are likely to be based in countries where the cost of living is low.

Of course, this might not work for you if you need writers based in your market, in which case you could use your international freelancers for other means. Have you got a killer strategist on your books, or someone who speaks four languages who could translate and localize your copy using their knowledge of your markets? Make use of their skills.

It goes without saying that good communication is central to making it work with freelancers. Make yourself as available as possible to your writers and remind them again and again that there is no such thing as a silly question. Building a personal rapport is vital—video calls are great for this, and often far quicker than trying to painfully explain something over email. Apps such as Google Hangouts will become your best friend, for when a simple question requires a quick answer.

With freelancers you have the opportunity to not only become more cost-effective, but to make time zones work for you. This is the key: whilst you’re sleeping, some of your freelancers will be working. Manage this effectively and the amount you produce will rapidly increase, without compromising on quality.

Establish a process

It sounds absurdly simple, but if you don’t set up a clear, defined process, then you’re at very real risk of not achieving the core goals of the project. Common pitfalls include repeating work (or producing the wrong content due to poor briefs), missing deadlines, and inefficiently handling budgets.

It may take some time to set up, but it will undoubtedly pay off once it’s up, running, and ticking along by itself whilst you dedicate yourself to other tasks.

Firstly, one of the most useful things you can do is to spend some time getting your briefs watertight. Provide key details about the client, background information for the task such as the target audience, and clearly explain how this work fits into the wider strategy. Outline the deliverables clearly, and provide a step-by-step guide and examples if necessary.

Brief templates can help with this, especially if you’re producing different types of hygiene content for the same client. It will be worth it when you receive the work back exactly as needed, with minimal questions in the process, and future you will thank you.

Secondly, I strongly advise setting up trackers, because let’s face it: the benefit of a good Excel document cannot be overstated. Create them so you know at a glance what stage your project is at, and include pricing information and details of your freelancers. These trackers should essentially be a one-stop shop for everything you need to know about the project. This will be invaluable not only for measuring where you are in the process but also for reporting.

Project tracking and management services such as Trello can be a godsend. Make use of them. Here at Croud we have our own proprietary technology, Croud Control, which allows us to manage huge content projects flexibly, with full visibility and control over every aspect of each project.

If this all sounds a little exhausting, why not use a trusted freelancer to manage this process for you? That way you only need to brief one person (although admittedly you will probably need to do a deep-dive), and providing you have regular check-ins along the way, you will only need to get involved at the final stage.

QA, QA, and QA again

Speaking of the final stage: check everything. Then check again.

It is unavoidable that your copywriters will make mistakes, as they are human beings. It’s also possible that your proofreaders will miss the odd spelling mistake here or there. This is the reason why I operate on a two-stage QA process at a minimum.

If your client is a multinational company, you may be required to translate or localize your copy into several different languages. It goes without saying that native speakers should perform the QA on this type of work, especially if the copywriter was a non-native speaker.

Providing your freelancers with feedback is crucial to the success of content projects, aside from just being a decent thing to do. After all, everyone wants to do a good job and more likely than not, wants to know how they could do it better.

Tight budgets mean you might have to get creative with how you manage it. This QA process allows me to do just that. If a new, potentially inexperienced copywriter with good writing skills and a low hourly rate does the bulk of the work, the more skilled (and almost certainly more expensive) writers can be lined up to proofread, check tone, and generally make sure it is up to scratch, in half the time it took to write. Just make sure they don’t end up re-writing the work. Empower them to provide constructive feedback directly to your copywriters, and effectively train them up.

If your QAs pick up on the same mistakes being made repeatedly, allow your copywriters the opportunity to review their edits. If they can actually see the corrections being made, they are more likely to bear them in mind when they write for you again. If fewer edits are required, then congratulations, you have made the process even more efficient and cost-effective.


Creating high-quality, unique copy and content on a large scale is never going to be easy, but it doesn’t have to be painful. With a bit of legwork at the beginning to establish a well-built process, and by making the most of a network of freelancers, it has the potential to be a breeze.

Not only that, but you and your clients will undoubtedly reap the commercial rewards of your hard work. Using exactly this process, together with our global network of 1,700+ freelancers known as ‘Croudies’, we were able to produce city-specific landing page copy for a client with hundreds of locations. This work led to a 113% increase in organic traffic, coupled with a 124% uplift in domain visibility.

And the key to success? Engage your writers at every available opportunity, so they don’t feel like a cog in a machine. Provide them with valuable feedback and help them whenever you can. This will likely not only improve your enjoyment of the project, but you’ll probably also find that they are more willing to help with future work. And when the whole project goes off without a hitch and you receive fantastic reviews (because why wouldn’t you?), tell them the good news and allow them to share in your success.


How I Boosted My Rankings Without Creating Content or Building Links

I know what you are thinking: this sounds impossible.

Because the more content you have and the more links you have, the higher your rankings will be.

Although that is true, it doesn’t mean that content marketing and link building are the only ways to increase your rankings.

No matter what update Google rolls out, I’ve found that there are a few hacks that consistently work to boost your rankings without creating more content or building more links.

So, are you ready to find out what they are?

What does Google want to rank at the top?

Before I get into the exact “hacks” and tactics that can boost your rankings, I want to first help you change the way you think about SEO.

Do you think Google really cares about on-page SEO and link building?

Sure, it matters to some extent, but that’s not what Google cares about the most.

Google wants to rank websites that people love. If they ranked websites that you hated, then you would slowly stop using Google.

And if people stopped using Google, then there would be fewer people to click on their ads, which means they would make less money.

That’s why Google cares about what you think and they ideally want to rank the websites that you love.

Now let’s dive into some hacks that will make people love your site, which will boost your rankings.

And don’t worry… I am not going to give you some fluffy tactics, I have data to back up everything. 😉

Hack #1: Optimize your click-through-rate

Let me ask you this:

If 10,000 people performed a Google search for the term “SEO” and clicked on the number 2 listing instead of the number 1 listing, what would that tell Google?

It would tell them that the number 2 listing is more relevant and that Google should move that listing to the number 1 spot.

Rand Fishkin ran an experiment where he told all of his Twitter followers to perform a Google search for the term “best grilled steak” and to click on the first listing, hit the back button, and then click on the 4th listing.

Within 70 minutes the 4th listing jumped into the top spot.

And that page even started to rank at the top of page 1 for the term “grilled steak”.

The ranking eventually slipped back down because people didn’t really feel that the listing was that great compared to some of the other listings.

Instead, it only climbed because Rand has a loyal following and everyone helped trick Google to believe that it was more relevant (at least in the short term).

But this should give you a sense that Google cares what you think. So much so that they will adjust rankings in real time because they don’t want to show you pages that you feel are irrelevant (no matter how many backlinks the page has or how well its on-page code is optimized).

And Rand wasn’t the only person who tested this theory. It’s been done countless times, and each time it produced similar results.

You want people to click on your listing more than the other ones out there. It’s that simple.

If you can generate more clicks (in a legitimate way) than the listings above you, eventually you’ll notice your rankings climb without having to write more content or build more links.

So, how do you get more clicks?

Well, you have to adjust your title tag and meta description tag to be more appealing.

Anytime you perform a Google search, you see a list of results. And each result has a title, URL, and description:

The link part is the title (also known as the title tag), then there is the URL (which is green in color), and lastly, there is the description (black text… that is also known as the meta description).

If you are running a WordPress blog, you can easily modify your title tag and meta description using the Yoast SEO plugin.

There are a few ways you can generate more clicks on your listing over the competition:

Include keywords – people tend to click on listings that include the keyword or phrase they just searched for. Make sure you are using the right keywords within your title and description (I will get to this in a bit). This may sound basic, but when your web pages rank for thousands of terms, which one do you include in your 60-character title tag?
Evoke curiosity – titles that are super appealing tend to generate clicks. For example, if the keyword you were going after is “green tea,” a good title would be “11 Proven Benefits of Green Tea (#6 Will Shock You)”. I know it may seem a bit long, but it works because a lot of people will wonder what number 6 will be.
Copy magazines – anytime you see a magazine, you’ll notice that they have appealing titles and headlines on the cover. A lot of their titles contain “how to” or are list oriented. Look at magazines for inspiration.

Improving your search listings isn’t rocket science. Where most people mess up is that they pick the wrong keywords or they are terrible at writing copy. Remember, humans are reading your title tag and meta description tag, so they need to be appealing.

If you are struggling to write appealing copy, read my ultimate guide to copywriting.

Now let’s go over the exact steps you need to take to get more clicks.

The first step is to use Google Search Console.

Log into Google Search Console, then click on “Search Traffic” and then click on “Search Analytics”:

You’ll see a page that looks something like this:

Scroll back up to the top and click on the “pages” radio button and “CTR” checkbox:

You’ll see a list of results sorted by your most popular URLs and their respective click-through-rate (also known as CTR):

Look for pages that have high traffic but a CTR of less than 5%.

Click on one of the listings with a CTR of less than 5% and then click on the “queries” radio button:

You’ll then want to look for the keywords with the highest number of clicks and the lowest CTR.

Those are the keywords you want to focus on in your title tag and meta description.

Remember, your title tag is limited to roughly 60 characters, which means you won’t be able to fit more than 2 or 3 keywords.

So, you want to pick the keywords that typically have the most clicks. They should also have a low CTR, because you selected pages with a CTR lower than 5%.
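The selection logic can be sketched in a few lines. The rows below are made-up stand-ins for a Search Console export, just to show the filtering:

```python
# Made-up Search Console rows: (query, clicks, impressions)
rows = [
    ("digital marketing", 420, 15_000),
    ("marketing digital", 130, 9_000),
    ("seo tips", 75, 1_200),
]

def ctr(clicks, impressions):
    return clicks / impressions

# Target keywords: plenty of clicks, but CTR still under 5%.
candidates = sorted(
    (r for r in rows if ctr(r[1], r[2]) < 0.05),
    key=lambda r: r[1],
    reverse=True,
)
for query, clicks, impressions in candidates:
    print(f"{query}: {clicks} clicks, CTR {ctr(clicks, impressions):.2%}")
# -> digital marketing: 420 clicks, CTR 2.80%
# -> marketing digital: 130 clicks, CTR 1.44%
```

“seo tips” drops out because its CTR is already above 5%; the remaining keywords, ordered by clicks, are the ones worth working into the title tag and meta description.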

By adjusting your title tag and meta description to include the right keywords and by evoking curiosity, you’ll be able to increase your clicks. This will get you more search traffic in the short run and boost your rankings over time.

Here are 3 tests that worked well for me when I adjusted my title tag:

I noticed I was getting a lot of traffic for the term “marketing digital” from countries outside of North America on one of my posts.

So, I adjusted my title tag from saying “digital marketing” to “marketing digital” which took my CTR from 3.36% to 4.45%. It also increased my search traffic by 1,289 visitors a month.

With the key phrase “social media marketing,” I adjusted my title tag based on an idea I got from a magazine. My CTR went from 2.38% to 2.84%. In total, that increased my traffic by 932 visitors a month.

With my social media marketing title tag, I added the phrase “step-by-step guide.”

This lets people know it is a how-to related post and it is action oriented. I also added the word “social media” a few times within the meta description.

And with the query “Google AdWords,” I noticed that Google had announced it was renaming its ad platform from Google AdWords to Google Ads, so I did the opposite and focused more on the term “Google AdWords,” because very few people knew about the name change.

This helped drive an extra 1,355 visitors per month.

I’ve also had instances where the changes I’ve made have hurt my Google traffic.

So, whenever you adjust your title tag and meta description, mark that date down and look at the data within Google Search Console after 30 or so days to see if it hurt or helped.

If it hurt, revert the change and wait another 30 days. Continuous testing can hurt your rankings, so when you have a losing variation, wait the full 30 days no matter what; this will let your rankings stabilize.

If the change helped boost your CTR and rankings, then you are off to a good start.

Now that you’ve optimized your click-through-rate, it’s time for you to optimize your user experience.

Hack #2: Show people what they want when they want it

If you go back to the experiment Rand Fishkin ran above, you’ll notice he told people to click the “back” button.

You don’t want people going to your site and clicking the back button… it will hurt your rankings.

People tend to click the back button because they don’t like what they see. If you can optimize your website for the optimal user experience, people will be less likely to click the back button.

I do this through 2 simple steps.

The first is to use Qualaroo and survey people. By asking people (right when they are on your website) a simple question of “how can I improve this page,” you’ll get tons of ideas.

You can even use Qualaroo to find out why people are visiting your website, which again will help you understand the type of people visiting your site. This will allow you to tailor your experience to them.

I ran a Qualaroo survey on my main blog page. The biggest feedback I got from you was that it was hard to find the exact content you were looking for.

And I know why too. It’s because I have marketing related content on everything. From ecommerce to SEO to content marketing…

I decided to try something out where when you land on the blog page, you can select the type of content that piques your interest and then all of the content gets tailored to your needs.

I also ran a Crazy Egg test to ensure that you like the change I made. Based on the Crazy Egg heatmap below, you can see that it was successful.

The bounce rate on my blog page dropped by 21% as well. 🙂

I then looked at the Crazy Egg scrollmap to see which elements/areas of the page have the most attention. This helped me determine where I should place the content filtering option.

The Crazy Egg scrollmap of my blog page shows that the content filtering option generates 70% of the page’s attention.

Placing the filtering in a place where there is a lot of attention ensures that I am giving you what you need in a place that is easy to find.

After you optimize your user experience, you want to focus on building a brand.

I recommend that you look at the pages on your site with high bounce rates and consider running this process in order to improve the user experience. When selecting the pages, make sure you are also picking pages that have decent traffic.

Hack #3: Build a brand

If you build a brand like Facebook or Amazon or any of the popular sites, you’ll rank higher.

Eric Schmidt, the ex-CEO of Google, once said:

Brands are the solution, not the problem. Brands are how you sort out the cesspool.

I ran an experiment, which helped build up my brand and my search traffic skyrocketed (unintentionally).

My traffic went from 240,839 unique visitors per month in June 2016:

To 454,382 unique visitors per month by August 2016:

Once I realized the power of branding, I started a podcast called Marketing School, and I started to publish videos on YouTube, Facebook, and LinkedIn multiple times per week.

This has led me to generate 40,412 brand queries per month:

I’m even getting 3,806 brand queries per month on YouTube alone:

But as you know, producing good content doesn’t guarantee that your brand will grow.

Even if you build tools and release them for free (like I did with Ubersuggest), it still won’t guarantee success.

But the one thing I have learned that works is the rule of 7.

When someone hears or sees your message 7 times, it is more likely to resonate; they build a connection and continually come back.

So how do you get people to come back to your site?

The simplest solution that I’ve found to work is a free tool called Subscribers.

It leverages browser notifications to get people to “subscribe” to your website. It’s better than email because it is browser-based, which means people don’t have to give you their name or email address.

And then every time you want to get people to come back to your website, you simply send them a notification.

Look at how I’ve gotten over 42,316 people back to my site 174,281 times. That’s roughly 4 times per person.

Based on the rule of 7, I only have 3 more times to go. 😉

The way I use Subscribers is that I send out a notification blast every time I release a blog post.

The push looks something like this:

And instantly I’m able to get people back to my site:

When you start using Subscribers you won’t see results right away. It takes time to build up your subscriber base, but it happens pretty fast.

Typically, you’ll generate a browser notification subscriber three times faster than an email subscriber.


If you only focus on things like on-page SEO, link building, or even blogging, you won’t dominate Google.

Why? Because that is what everyone else focuses on. You have to do more if you want to beat the competition.

By doing what’s best for the user, you’ll have a better chance of beating everyone else.

Just look at me: I do what every other SEO does, plus more. Sometimes this causes my traffic to dip in the short run, but in the long run, it generally climbs.

From creating compelling copy so people want to click on your listing, to optimizing your user experience, to building a brand… you have to go beyond the SEO basics.

SEO has become extremely competitive. 5 years ago, it was much easier to rank at the top of Google.

If you use the 3 hacks above, here’s how long it will typically take to notice results.

Optimizing title tags – assuming you run successful tests, you can see small results in 30 to 60 days. Over time the results get even better.
Improving user experience – making your user experience better will instantly improve your metrics such as bounce rate, pageviews per visitor, time on site, and conversion rate. As for search rankings, it does help, but not instantly. Typically, it takes about 4 to 6 months to see results from this.
Brand building – sadly, it takes years. Tools like Subscribers will grow your traffic right away, but that traffic won't impact your search rankings immediately. Even so, you have no choice but to build a brand.
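For the title-tag item above, a "successful test" simply means the rewritten title earns a higher click-through rate than the original. A toy calculation shows the arithmetic (all numbers are made up for illustration):

```javascript
// Click-through rate: clicks divided by search impressions.
function ctr(clicks, impressions) {
  return impressions === 0 ? 0 : clicks / impressions;
}

const controlCtr = ctr(500, 20000);  // original title: 2.5% CTR
const variantCtr = ctr(650, 20000);  // rewritten title: 3.25% CTR

// Relative lift of the variant over the control (≈ 0.30, i.e. a 30% lift).
const relativeLift = (variantCtr - controlCtr) / controlCtr;
```

In practice you'd pull the clicks and impressions for each title variant from Google Search Console and keep the winner.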

So which one of these hacks are you going to test out first?

The post How I Boosted My Rankings Without Creating Content or Building Links appeared first on Neil Patel.


Yoast SEO 8.1: Gutenberg part 2, introducing the snippet preview


Two weeks ago, we launched Yoast SEO 8.0. In it, we shipped the first part of our integration with Gutenberg: the sidebar. That release was the foundation on which we are building the next parts of our integration with the new WordPress editor. In Yoast SEO 8.1, we introduce part 2: a Gutenberg-proof snippet preview. Also, a much better experience in the content analysis thanks to webworkers!


Gutenberg, meet the Yoast SEO snippet preview

Yoast SEO 8.0, unfortunately, had to make do without a snippet preview inside Gutenberg. There were still some kinks to iron out before we could add that snippet preview to our WordPress plugin. The code for that new modal — the pop-up screen — had to be written from the ground up, exclusively for Gutenberg. That code has now been added to Gutenberg’s core so every WordPress developer can make use of the modal inside the new editor. How awesome is that!

Here’s what the snippet preview pop-up inside Gutenberg looks like:

You’ll see that it looks just like the regular Yoast SEO snippet preview. It has all the features you know and love: a true-to-life rendering of your snippet on both mobile and desktop screens, an SEO title editor with snippet variables, a slug editor, and a meta description editor, also with snippet variables. To open the snippet preview, simply click the Snippet Preview button in the Yoast SEO Gutenberg sidebar.

Another cool thing now available in Gutenberg is the Primary Category picker, which has been a staple of Yoast SEO for many years. It lets you set the primary category for a post; a primary category is selected automatically whenever you create a new post. We will port more features over to Gutenberg shortly.

What’s next

We, of course, have big plans for Gutenberg. There’s still a lot to be done, and not everything we’re dreaming up is possible right now. Step by step, we’re turning Yoast SEO and Gutenberg into a dream combination. We’re not just porting existing features over to the new editor, but actively exploring what we can do and what we need to get there. In some cases, that means developing the support inside Gutenberg’s core ourselves, so that loads of other developers can benefit from the results as well.

Speeding up the content analysis with webworkers

Speed = user experience. To keep Yoast SEO performing great, we added a dedicated web worker to our content analysis. Web workers let you run a script in the background without affecting the performance of the page. Because a worker runs independently of the user interface, it can focus on one task and do it brilliantly. Web workers are very powerful and help us keep Yoast SEO stable, responsive, and fast, even when analyzing pages with thousands of words of content. Try it!
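The pattern Yoast describes — handing heavy analysis to a background thread so typing stays responsive — looks roughly like this. The file name, message shape, and `wordCount` analysis are illustrative placeholders, not Yoast’s actual code:

```javascript
// analysis-worker.js — heavy lifting runs off the main UI thread.

// A stand-in for one analysis step; the real plugin scores readability,
// keyword usage, and more.
function wordCount(text) {
  return text.trim().split(/\s+/).filter(Boolean).length;
}

// Inside a Web Worker, `self` is the worker's global scope: receive the
// editor text, analyze it, and post the result back to the page.
if (typeof self !== 'undefined' && typeof self.postMessage === 'function') {
  self.onmessage = (e) => self.postMessage({ words: wordCount(e.data.text) });
}

// On the main thread (conceptually), the page stays responsive while waiting:
//   const worker = new Worker('analysis-worker.js');
//   worker.postMessage({ text: editorContent });
//   worker.onmessage = (e) => updateSeoScore(e.data);
```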

The update is available now

Yoast SEO 8.1 includes a lot of behind-the-scenes changes that should drastically improve how the plugin functions. We are dedicated to giving you the best possible user experience, while also improving our current features and laying the groundwork for new ones. And not to forget that new WordPress editor, right? Update and let us know what you think!

Read more: Why you should buy Yoast SEO Premium »

The post Yoast SEO 8.1: Gutenberg part 2, introducing the snippet preview appeared first on Yoast.
