Blog

The Secret Behind My 1,866,913 Monthly Search Visitors (It’s Not What You Think)

Posted on Sep 21, 2018 in SEO Articles

How many visitors do you think NeilPatel.com generates each month?

Maybe a million… maybe 2 million?

I bet you’re going to guess 1,866,913.

If that’s what you guessed, you are wrong. This blog actually generated 2,530,346 visitors. 1,866,913 is the number that came from search engines.

So, what’s the secret to my ever-growing Google traffic?

Sure, I have optimized my on-page SEO, I've built links, and I've written tons of blog posts… I've done all of the stuff that most of my competition has done. But doing the same stuff as your competition isn't enough.

My secret sauce is that I optimize for user signals.

Last week, I broke down some of the user signals Google looks at and provided benchmarks to aim for if you don't want to be penalized by Google.

If you aren’t familiar with user signals, check the article I linked to above.

So, how do you optimize for user signals?

Well, I know everyone has different types of websites, so I thought I would share the process I use to optimize NeilPatel.com.

Are you showing people what they want?

Google Analytics is an amazing tool. I’m so addicted to it that I log in at least 3 or 4 times a day. Heck, I even log in on weekends.

But here’s the thing, it only tells you half the story. It gives you numbers, but it doesn’t help you visualize what people are doing and what they aren’t.

For example, here is what my main blog page looked like according to Crazy Egg:

What’s wrong with the image?

Everyone is going to the blog to learn more about marketing. Above the fold, I have a box that showcases an SEO Analyzer. But there is one big issue: it’s barely clicked compared to the drop-down that lets you filter the blog content.

The SEO Analyzer had 128 clicks versus 359 clicks to the content filtering option.

Because you didn’t care for it as much, I removed it from the main blog page. And now when you head to the blog page you can see the filtering options above the fold.

I am looking to see what you click on and what you don’t. Simple as that.

If I keep showing you something you aren't clicking on, I am wasting the opportunity to present you with something you do want to see, which means I either need to adjust it or delete it.

Now, let me show you my current homepage:

What’s wrong?

Go ahead, take a guess…

Well, looking at the image, you'll notice there are tons of hot spots in the footer. That's where the navigation is. With so many clicks landing on the footer navigation, I should consider adding a navigation menu bar to the header.

Are you getting the hang of how to make your website more user-friendly? Well, let’s try another one.

Here’s an element in the sidebar of my blog posts:

That element only has 1 click. That’s terrible considering that the blog post generated 10,016 visits. And to top it off, that click came from a repeat visitor.

My goal is to convert more first-time visitors into leads; they make up the majority of my visitors but the lowest percentage of my leads.

So, what did I do? I deleted that element and you no longer see it in my sidebar.

Are you optimizing for mobile?

Let’s face it, more people are visiting your site using mobile devices than laptops or traditional computers.

If that’s not the case, it is just a matter of time.

So, have you optimized your site for mobile? And no, I’m not just talking about having a responsive design because everyone is doing that these days.

If you look at the image above, you’ll notice that I removed the image of myself and a few other elements. This helps make the loading experience faster and it helps focus people’s attention on the most important elements.

Similar to the desktop version, my mobile homepage has a 24% conversion rate. When my mobile version included a picture of me above the fold, my conversion rate dropped to 17%… hence there is no picture of me. 😉

Now, I want you to look at the mobile version of my main blog page and compare it to my homepage.

Do you see an issue?

The blog page generates a lot of clicks on the 3 bars at the top… that’s my navigation menu.

My developer accidentally removed that from the mobile homepage. That’s why the contact button in the footer of the homepage gets too many clicks.

Hopefully, that gets fixed in the next day or two as that could be negatively impacting my mobile rankings.

On top of optimizing the mobile experience, you need to ensure your website loads fast. It doesn't matter if people are using LTE or 4G; sometimes people have terrible reception, and when they do, your website will load slowly.

By optimizing it for speed, you’ll reduce the number of people who just bounce away from your site.

If you want a faster load time, follow this.

And don’t just optimize your site for speed once and forget about it. As you make changes to your site, your pagespeed score will drop, which means you’ll have to continually do it.

For example, you'll notice I have been making a lot of changes to NeilPatel.com (at least that is what the heatmaps above show). As I am making those changes, sometimes it affects my pagespeed score negatively. That means I have to go back and optimize my load time again.

On average, a one-second delay in load time will cost you 6.8% of your revenue.

Are you focusing on helping all of your users?

Not every person who visits your website is the same.

For example, a small percentage of the people who visit NeilPatel.com work at large corporations that are publicly traded and are worth billions of dollars.

And a much larger percentage of my visitors own small and medium-sized businesses. These people are trying to figure out how to grow their traffic and revenue without spending an arm and a leg.

And the largest percentage of my visitors don’t have a website and they are trying to figure out how to get started for free.

In a nutshell, I have three groups of people who visit my website. The first group tends to turn into consulting leads for my agency, but they make up the smallest portion of my traffic.

One could say that I should only focus on helping them and ignore everyone else. But I can’t do that for a few reasons…

I started off with practically no money, and people helped me out when I couldn't afford to pay them. I love paying it forward and helping people who can't afford my services because I have been there, and I know what it's like.
If I only focused on the large companies, who would link to my website and promote my content? You can bet that Microsoft isn’t going to link to me on a regular basis. If you want to generate social shares and backlinks you have to focus on the masses.
Little is the new big… if you can please the masses, they will make noise and the big players will eventually hear about you. So, don’t just treat people with deep pockets kindly, treat everyone the same and truly care about your visitors.

Once you figure out the types of people coming to your website (and if you are unsure just survey them), go above and beyond to help them out. Create different experiences for each group.

On NeilPatel.com, I’ve learned that people who work at large corporations are busy and they want to listen to marketing advice on the run. For that reason, I have the Marketing School podcast.

And a lot of beginners wanted me to break down my steps over video, so they can more easily replicate my tactics. For that reason, I create new videos 3 times per week giving marketing and business advice.

Many of you want to attend the conferences that I speak at, but can’t afford to buy a ticket. For those people, I create weekly webinars that are similar to the speeches I give at conferences.

And best of all, I know the majority of you find it hard to follow along with all of these tips as it can be overwhelming. So, I created Ubersuggest to help you out.

In other words, I try to go above and beyond for all of my visitors.

Yes, it is a lot of work, but if you want to dominate an industry it won’t happen overnight. Expect to put in a lot of time and energy.

Are you taking feedback from people?

You are going to get feedback. Whether it is in the form of email or comments, people will give you feedback.

It’s up to you if you want to listen… but if a lot of people are telling you the same thing you should consider it.

For example, I get a ton of comments on YouTube from people asking me to create videos in Hindi.

And…

Now, I am not only working on adding Hindi subtitles to my videos, but I am also working on translating my blog content to Hindi.

I'm not doing this to make more money… I'm not doing this to become popular… I'm just doing this to help out more people.

It's the same reason why I have Spanish, Portuguese, and German versions of this website. I had enough requests that I pulled the trigger, even though I am not focusing on generating income in those areas.

But here is the thing that most people don’t tell you about business. If you just focus on helping people and solving their problems, you’ll notice that your income will go up over time.

Businesses make money not because their goal is to make money… they make money because they are solving a problem and helping people out.

Another piece of feedback I have been getting recently is that my blog is too hard to read on mobile devices.

For that reason, I’ve assigned a task to one of my developers to fix this.

Conclusion

Traffic generation is a business. It’s not a hobby. It’s competitive, and it’s difficult to see short-term gains.

If you want to rank at the top of Google, you can’t treat your website as a hobby. You have to treat it like a business.

And similar to any business, you won’t succeed unless you pay attention to the needs of your customers. That means you have to listen to them. Figure out what they want and provide it.

That’s what Google is trying to do. They are trying to rank sites that people love at the top of their search engine. If you want to be one of those sites, then start paying attention to your visitors.

Show them what they want and go above and beyond so that they will fall in love with your website instead of your competition.

If you aren't sure whether you are making the right changes, monitor your brand queries. Growth in the number of people searching for your brand terms on Google is a big leading indicator that people are happy with your website.

Just look at NeilPatel.com: I get over 40,000 visitors a month from people Googling variations of my name:

And I generate over 70,000 visits a month just from people searching for my free tool, Ubersuggest.

That’s how I’m continually able to make my traffic grow.

Yes, I do pay attention to what Google loves, but more importantly, I pay attention to your needs and wants.

Are you going to start optimizing your website for user signals?

Local Business Transparency & Empathy for the Holidays: Tips + Downloadable Checklist

Posted on Sep 14, 2018 in SEO Articles

Posted by MiriamEllis

Your local business will invest its all in stocking shelves and menus with the right goods and services in advance of the 2018 holiday season, but does your inventory include the on-and-offline experiences consumers say they want most?

Right now, a potential patron near you is having an experience that will inform their decision of whether to do business with you at year’s end, and their takeaway is largely hinging on two things: your brand’s transparency and empathy.

An excellent SproutSocial survey of 1,000 consumers found that people define transparency as being:

Open (59%)
Clear (53%)
Honest (49%)

Meanwhile, after a trying year of fake news, bad news, and privacy breaches, Americans could certainly use some empathy from brands that respect their rights, needs, aspirations, and time.

Today, let’s explore how your local brand can gift customers with both transparency and empathy before and during the holiday season, and let’s make it easy for your team with a shareable, downloadable checklist, complete with 20 tips for in-store excellence and holiday Google My Business best practices:

Grab the Holiday Checklist now!

For consumers, even the little things mean a lot

Your brother eats at that restaurant because its owner fed 10,000 meals to displaced residents during a wildfire. My sister won’t buy merchandise from that shop because their hiring practices are discriminatory. A friend was so amazed when the big brand CEO responded personally to her complaint that she’s telling all her social followers about it now.

Maybe it’s always been a national pastime for Americans to benefit one another with wisdom gained from their purchasing experiences. I own one of the first cookbooks ever published in this country and ‘tis full of wyse warnings about how to avoid “doctored” meats and grains in the marketplace. Social media has certainly amplified our voices, but it has done something else that truly does feel fresh and new. Consider SproutSocial’s findings that:

86% of Americans say transparency from businesses is more important than ever before.
40% of people who say brand transparency is more important than ever before attribute it to social media.
63% of people say CEOs who have their own social profiles are better representatives for their companies than CEOs who do not.

What were customers’ chances of seeking redress and publicity just 20 years ago if a big brand treated them poorly? Today, they can document with video, write a review, tweet to the multitudes, even get picked up by national news. They can use a search engine to dig up the truth about a company’s past and present practices. And… they can find the social profiles of a growing number of brand representatives and speak to them directly about their experiences, putting the ball in the company’s court to respond for all to see.

In other words, people increasingly assume brands should be directly accessible. That’s new!

Should this increased expectation of interactive transparency terrify businesses?

Absolutely not, if their intentions and policies are open, clear, and honest. It’s a little thing to treat a customer with fairness and regard, but its impacts in the age of social media are not small. In fact, SproutSocial found that transparent practices are golden as far as consumer loyalty is concerned:

85% of people say a business’ history of being transparent makes them more likely to give it a second chance after a bad experience.
89% of people say a business can regain their trust if it admits to a mistake and is transparent about the steps it will take to resolve the issue.

I highly recommend reading the entire SproutSocial study, and while it focuses mainly on general brands and general social media, my read of it correlated again and again to the specific scenario of local businesses. Let’s talk about this!

How transparency & empathy relate to local brands

“73.8% of customers were either likely or extremely likely to continue to do business with a merchant once the complaint had been resolved.”
– GetFiveStars

On the local business scene, we’re also witnessing the rising trend of consumers who expect accountability and accessibility, and who speak up when they don’t encounter it. Local businesses need to commit to openness in terms of their business practices, just as digital businesses do, but there are some special nuances at play here, too.

I can’t count the number of negative reviews I’ve read that cited inconvenience caused by local business listings containing wrong addresses and incorrect hours. These reviewers have experienced a sense of ill-usage stemming from a perceived lack of respect for their busy schedules and a lack of brand concern for their well-being. Neglected online local business information leads to neglected-feeling customers who sometimes even believe that a company is hiding the truth from them!

These are avoidable outcomes. As the above quote from a GetFiveStars survey demonstrates, local brands that fully participate in anticipating, hearing, and responding to consumer needs are rewarded with loyalty. Given this, as we begin the countdown to holiday shopping, be sure you’re fostering basic transparency and empathy with simple steps like:

Checking your core citations for accurate names, addresses, phone numbers, and other info and making necessary corrections
Updating your local business listing hours to reflect extended holiday hours and closures
Updating your website and all local landing pages to reflect this information

Next, bolster more advanced transparency by:

Using Google Posts to clearly highlight your major sale dates so people don’t feel tricked or left out
Answering all consumer questions via Google Questions & Answers in your Google Knowledge Panels
Responding swiftly to both positive and negative reviews on core platforms
Monitoring and participating on all social discussion of your brand when concerns or complaints arise, letting customers know you are accessible
Posting in-store signage directing customers to complaint phone/text hotlines

And, finally, create an empathetic rapport with customers via efforts like:

Developing and publishing a consumer-centric service policy both on your website and in signage or print materials in all of your locations
Using Google My Business attributes to let patrons know about features like wheelchair accessibility, available parking, pet-friendliness, etc.
Publishing your company giving strategies so that customers can feel spending with you supports good things — for example, X% of sales going to a local homeless shelter, children’s hospital, or other worthy cause
Creating a true welcome for all patrons, regardless of gender, identity, race, creed, or culture — for example, gender neutral bathrooms, feeding stations for mothers, fragrance-free environments for the chemically sensitive, or even a few comfortable chairs for tired shoppers to rest in

A company commitment to standards like TAGFEE coupled with a basic regard for the rights, well-being, and aspirations of customers year-round can stand a local brand in very good stead at the holidays. Sometimes it’s the intangible goods a brand stocks — like goodwill towards one’s local community — that yield a brand of loyalty nothing else can buy.

Why not organize for it? Organize for the mutual benefits of business and society with a detailed, step-by-step checklist you can take to your next team meeting:

Download the 2018 Holiday Local SEO Checklist

What We Learned in August 2018: The Digital Marketing Month in a Minute

Posted on Sep 6, 2018 in SEO Articles

The average Briton spends over 2 hours online per day

Several fascinating findings were revealed in Ofcom’s annual review of the UK communications sector. Key takeaways include:

On average, people in the UK spend the equivalent of a day online per week

Facebook’s reach among 18-24 year-olds is in decline (by 4% YoY)

The smartphone video advertising market is worth over 1 billion pounds

Read the full story (LinkedIn)

Google risks class action due to “surreptitious” tracking of user location

A lawsuit was filed against Google in California after it was found that Google still tracks a smartphone's location even when the "Location History" setting is turned off. The case could force Google to pay a substantial fine and to delete some of its location tracking data. A few days after the lawsuit, the Location History support page on Google's website was changed from "with Location History off, the places you go are no longer stored" to "some location data may be saved as part of your activity on other services, like Search and Maps."

Read the full story (Marketing Land)

Google improves accuracy of Index Coverage report in Search Console

Google has updated the Index Coverage report within Search Console for the first time since its launch. This feature was originally introduced in 2017 in order to provide information on the pages of your website that have/have not been indexed (with instructions to fix issues). According to Google, the new update will heavily impact the accuracy of the report starting from August. The only drawback of this refresh is that the index coverage data for the period July 14-August 1 was not fully recorded, so it was calculated as an estimate based on the values recorded on the 1st of August.

Read the full story (Search Engine Journal)

Google confirms core algorithm update

On August 1, Google confirmed industry rumours about notable ranking fluctuations by announcing the roll-out of a core algorithm update. Nicknamed the "medic update" in the SEO industry due to the large number of health and medical sites affected, its reach extended beyond that bracket to the broader category of YMYL (Your Money or Your Life) sites.

Read the full story (Moz)

Study reveals truth behind shopping via voice search

Despite being one of the most hyped subjects of 2018, a recent study from The Information revealed that voice search does not yet seem to be driving sales. Only 2% of the customers who use Amazon’s Alexa intelligent assistant appear to have made a purchase via voice search, and only 10% of those have made a second purchase. This is probably due to an inefficient consumer journey, device limitations or simply that people are not generally aware of the capabilities yet. How soon might this change?

Read the full story (TechCrunch)

Schema markup for datasets now supported in the SERP

Google has confirmed that dataset markups will be supported in the SERP. By doing so, Google is trying to improve the way users visualise data in the search result page, rewarding organisations that mark up datasets such as tables, CSV files, images containing data and more.

As Google stated, this new markup aims “to improve discovery of datasets from fields such as life sciences, social sciences, machine learning, civic and government data, and more”.

Read our blog to learn how to understand and implement structured data.

Read the full story (Search Engine Land)

FB launches mobile-first video creation for better ads

Facebook has rolled out a new set of tools aimed at advertisers that produce assets with mobile-first in mind, since their research has proven that “mobile-first creative has a 27 percent higher likelihood of driving brand lift compared to ads that are not optimized for mobile.”

It is now possible to add motion to existing images/videos or create videos from assets such as pictures and logos.

Read the full story (Marketing Land)

Ad spend on Instagram on the rise

Despite Facebook having experienced the largest one-day fall in American stock market history on the 26th of July, the stock is still trading at May 2018 levels. In its latest earnings report, the company disclosed strong growth in ad spend on Instagram in Q2, up 177% compared to the previous year.

Instagram's user base has surpassed a billion active users and, according to the social media listening company Socialbakers, brands' Instagram profiles see much higher user engagement than their Facebook equivalents. While there are challenges around the advertising opportunities in the "stories" functionality, Facebook as a whole is continuing to see its playbook work supremely well.

Read the full story (Marketing Land)

Europe to fine sites for not taking down illegal content within one hour

The EU is planning to take a stronger position on illegal or controversial material posted online, especially on social media platforms such as Facebook, Twitter and YouTube. Julian King, EU Commissioner for Security, has put forward legislation that would fine tech companies that do not remove illegal content within one hour. This follows wide reporting of a recent study that found a correlation between social media usage and hate crimes (the study was based on violence against refugees in Germany in 2018), though the study has received some criticism, in particular for its use of global "likes" for Nutella as a proxy for German Facebook usage.

Read the full story (Business Insider)

Twitter’s new tests: threaded replies and online status indicators

Twitter’s director of product management, Sara Haider, has posted a few screenshots that display some new features Twitter is working on to improve threaded conversations and add online status indicators. The reason behind these changes is to make Twitter “more conversational”. Neither feature appears to be groundbreaking if compared to other social media platforms: threaded replies will potentially look very similar to Facebook’s comments, while online status indicators have been used by Facebook Messenger and Instagram’s direct messages. The tech giant is currently collecting feedback before rolling out the changes.

Read the full story (Search Engine Journal)

Distilled News

We kick off the Distilled news this month with a post from Craig Bradford explaining what SEO split testing is. The post focuses on the simple principle of split testing, how it differs from CRO and UX testing, and covers a few examples to outline Distilled's methodology. As VP in charge of Distilled's ODN platform, Craig has a front-row seat to this hot area of SEO.

Analyst Paola Didone took her recent experiences of handling large data sets and wrote up a blog post about how to use Regex formulae in Google Sheets.

From the US, SEO Analyst Shannon Dunn suggests an easy approach to optimizing website internal linking structures.

Outside the SEO world, our People Operations Executive Kayla Walker shares her thoughts on how to give better positive feedback.

Distilled’s CEO, Will Critchlow, tried to address the industry’s confusion on the differences between URL structures and Information Architecture.

Last but not least, SearchLove London is approaching! Get your tickets here and do not miss out on the newest and hottest topics in digital marketing. At the time of writing, only 1 VIP ticket is left, and you have just two weeks to take advantage of our early bird pricing (£150 off!), which is available until the 19th of September.

Need further convincing? Will has written up 8 reasons why SearchLove London is worth attending.

Do 404s Hurt SEO and Rankings?

Posted on Sep 3, 2018 in SEO Articles

Status code 404 is probably the most common HTTP error that people encounter when they’re browsing the web. If you’ve been using the internet for over a year, chances that you haven’t encountered one yet are pretty low. They’re very common.

 

Normally, people don’t pay too much attention to them. As a user, you will get frustrated at most and hit the back button or close the tab. As a webmaster, however, more things might be at stake. Many website owners ask themselves if 404 pages hurt their SEO and rankings in any way.

 

 

 

What Is Error Code 404?

How to Add/Customize a 404 Page
How to Find 404 Errors

Do 404 Pages Hurt SEO?

What Does Google Say About 404s?
Incoming 404s
How to Fix Incoming 404s
Outgoing 404s (Includes Internal 404s)
How to Fix Outgoing 404s

Building Backlinks with the Broken Links Method

 

Keep reading, as in this article we’ll go over how 404 pages affect your website, SEO and rankings and what you can do to fix things.

 
What Is Error Code 404?

 

Error 404 is a standard HTTP status code (also called response code). When you try to access a URL or a server, the server returns a status code to indicate how the operation went. Assuming that most of the web works fine, the most common status code is 200. If you’re reading this article now, it means that your browser was able to access our server and the server found the requested resource, so it returned a 200 response code.

 

When the client can establish a connection with the server but the server can't find the requested resource, it returns a 404 error status code. It basically means that the page or whatever resource was requested cannot be found at that particular address.

 

To check the response code of a page, you can right click anywhere on the page in your browser, hit Inspect and then go to the Network section. If you can't see the status codes, press the F5 key or refresh the page while the inspector is still open.

 

Chrome Inspector

 

You will usually see a bunch of status codes there. That’s because a page will load multiple resources. For example, the requested page HTML/PHP file might be found, but some image resources have been misspelled or deleted. In this case, the page document type will return a 200 response code, while the missing image resources will return 404s.

 

A 404 status code in your browser will look something like this:

 

CognitiveSEO’s 404 Page

 

As you can see, we have a document type 404 error code, which means the page doesn’t exist or wasn’t found at that address, followed by two 200 status codes that represent a couple of images that have been found.

 

Another option would be to use a tool like https://httpstatus.io/. You can insert multiple URLs and it will return their HTTP status codes. This only pulls out the main status code of the document, excluding any other resources. You can, however, add a resource URL directly if you want to check it.

 

Response code tool

 

There are other response codes there that you might have heard of. 500, 501 and 503, for example, usually indicate a server error, while 301 and 302 stand for redirects. These, along with 200 and 404, make up the most common status codes on the web.

 

The 301s you see above in the tool and browser inspector are there because I’ve entered the HTTP version instead of the HTTPS version, so a 301 is performed by our server to redirect users to the secure version of our website. I’ve decided to leave them in the screenshots, because they’re a good example of how browsers and status code tools work.
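If you'd rather check status codes in bulk from a script than from the browser or an online tool, here's a rough PHP sketch using the standard cURL extension. Treat it as an illustration only; the example.com URLs are placeholders for your own pages.

<?php
// Check which HTTP status code each URL returns, similar to what
// httpstatus.io or the browser inspector shows.
$urls = [
    "https://example.com/",
    "https://example.com/this-page-does-not-exist",
];

foreach ($urls as $url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_NOBODY, true);          // HEAD request: we only need the headers
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);  // don't print the response body
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, false); // report 301/302 instead of silently following them
    curl_exec($ch);
    $status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);

    echo $url . " returned " . $status . PHP_EOL;    // expect 200 for the first URL, 404 for the second
}
?>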

 

It is really important for a page/resource that doesn’t exist to return a 404 status code. If it returns a 200 code, Google might index it.

 

However, to combat this, Google created a "Soft 404" label. Basically, if the page states that the content isn't found, but the HTTP status code is 200, we have a soft 404. You can find these types of errors in Google's Search Console (formerly Webmaster Tools), under Crawl Errors. If you're already on the new version of Search Console, the easiest way is to temporarily switch to the old one.

 

 

Soft 404s aren't real error codes. They're just a label added by Google to flag this issue of a missing page returning a 200 code.

 
How to Add/Customize a 404 Page

 

Normally, your web server should already handle 404s properly. This means that if you try to access a URL that doesn’t exist, the server will already pull out a 404.

However, sometimes the platform might not return a 404, but a blank 200 page. Also, as you can see above, the design isn’t very pleasant and the only option given is to refresh the page… which doesn’t exist. That will keep pulling up a 404 code.

 

It’s a very good idea to have a custom web design for your 404 page. Why? Well, because it can create a better experience for your users. I mean, the experience of not finding what you’re looking for is already bad. But you can add some humor to it, at least.

 

The most important part on your 404 page is to include a CTA (call to action).

 

Without a call to action, users will most probably leave when they see a regular 404. By inserting some links to some relevant pages, you can hopefully harvest some more traffic to your main pages.

 

Take a look at our example of a 404 page. Big difference, isn't it? It might actually convince you not to be upset with us. Also, we have a pretty clear CTA that tells you to click on it. It links to the homepage. Our homepage is our hub, from which you can access the most important and relevant parts of our website.

 

cognitiveSEO’s 404 Page Design

 

However, you don’t have to limit yourself to this. You can add links to relevant category pages or other sections of your site. A search bar would also be a great idea.

 

Be creative with your 404’s web design. If it puts a smile on the users’ faces, it might even be better than if they landed on the right page. You can take a look at a few examples in this article, to get your gears spinning.

 

If you have a cool 404 design, share your website with us in the comments section and let’s have a look at it!

 

Most popular CMSs (Content Management Systems), like WordPress or Joomla, already have some sort of 404 design implemented. You can easily add a custom design using a plugin. Here's a plugin for WordPress.

 

If you have a custom-built website, then you'll have to create a 404 template. Log into your Apache web server and create a 404.php file. If you already have one, just edit that. Sometimes, it might have the .html extension. If it doesn't return a 404 status code, change it to .php, because we'll need to force the HTTP response header to the proper 404 error code using some PHP.

 

<?php
header("HTTP/1.0 404 Not Found");
?>

 

Then, find your .htaccess file and add the following line to it:

 

ErrorDocument 404 /404.php

 

This will tell the server which page should be shown when a 404 error code is detected. If the line is already there, just modify that. That’s it. Make sure you check everything again with your browser’s inspector or with the tool mentioned above. If it returns a 404 code, you’re good to go!
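Putting the pieces together, a bare-bones 404.php might look something like the sketch below: it sends the real 404 status code first, then shows a friendly message with a clear CTA. The wording and links are only placeholders, so adapt them to your own site.

<?php
// Send the real 404 status code before any output, so search engines
// don't treat the missing page as a soft 404 and index it.
header("HTTP/1.0 404 Not Found");
?>
<!DOCTYPE html>
<html>
<head>
    <title>Page not found</title>
</head>
<body>
    <h1>Oops, that page doesn't exist.</h1>
    <p>The link you followed may be broken, or the page may have been removed.</p>
    <!-- The CTA: point visitors to something useful instead of leaving them stranded -->
    <p><a href="/">Go back to the homepage</a> or use the search bar to find what you need.</p>
</body>
</html>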

 
How to Find 404 Errors

 

An easy way to find 404 errors is to log into Google's Search Console (formerly Webmaster Tools). Those are the 404s that Google sees, so they're definitely the most important ones.

 

If you see Soft 404 errors, as mentioned above in the article, you have to make sure your 404 page actually returns a 404 error code. If it doesn't, it's a good idea to fix that.

 

There are other ways to find 404 errors. If you’re looking for broken pages on your website, which other people have linked to, you can use the cognitiveSEO Site Explorer and check the Broken Pages section.

 

Screenshot from the CognitiveSEO Tool. More details about it below, in the article.

 

If you’re looking to find broken links within your own site, or links to other websites from your website, you can use Screaming Frog. A free alternative would be Xenu Link Sleuth.

 

I’ll show you how to use these SEO tools in detail below.

 
Do 404 Pages Hurt SEO?

 

There are a lot of experts out there stating that 404s will ruin your rankings and that you should fix them as soon as possible. But, the truth is that 404s are a normal part of the web and they are actually useful.

 

Think about it. If a specific place didn't exist, wouldn't you rather know that than constantly be directed to other random places? It's the same on the web. While it's a good idea to redirect an old page that's been deleted to a new, relevant page, it's not such a good idea to redirect every 404 to your homepage, for example. However, I've seen some sites redirect their users after a countdown timer, which I thought was a good idea.

 

In theory, 404s have an impact on rankings. But not the rankings of a whole site. If a page returns a 404 error code, it means it doesn’t exist, so Google and other search engines will not index it. Pretty simple, right? What can I say… make sure your pages exist if you want them to rank (ba dum ts).

 

So what’s all the hype about 404s? Well, obviously, having thousands and thousands of 404 pages can impact your website overall.

 

However, it’s not so much the actual 404 pages that hurt SEO, but the links that contain URLs pointing to the 404s.

 

You see, these links create a bad experience. They're called broken links. If there were no broken links, there wouldn't even be any 404 errors. In fact, you could say that there is an infinite number of 404s, right? Just add a slash after your domain, type something random and hit enter. 404. But if search engines can't find any links pointing to 404s, the 404s are… double non-existent. Because they already don't exist… And then they don't exist again. I hope you get the point.

 

I’ll explain everything in more detail soon, so keep reading.

 
What Does Google Say About 404s?

 

Google has always pointed out that 404s are normal. They also seem to be pretty forgiving with them. I mean, that’s natural, considering that they have 404s of their own:

 

 

In fact they’ve pointed these things out in an article from 2011 and also in this more recently posted video:

 

 

There's also this source that treats the issue:

 

 

If you want to read more on this, visit this link, then scroll to the bottom and open the Common URL errors dropdown.

 

However, let's explain everything in more detail. People often forget that there are two types of 404 pages: the ones on your site and the ones on other people's websites. They can both affect your site, but the ones that affect you most are the ones on other people's websites.

 

"What? Other websites' 404s can impact my website?"

 

Yes, that’s right. If your website links to other websites that return a 404, it can negatively impact its rankings. Remember, it’s not so much the 404s that cause the trouble, but the links to the 404s. No links to 404s, no 404s. So you’d better not create links to 404s.

 
Incoming 404s

 

Incoming 404s are links on other websites that point to URLs on your website which return a 404. They are not always easy to fix, because you can't change the URLs on websites you don't own. There are workarounds, such as 301 redirects, but those should be kept as a last option, in case you cannot get the link fixed.

 

These don’t really affect you negatively. I mean, why should you be punished? Maybe someone misspelled it, or maybe you deleted the page because it’s no longer useful. Should you be punished for that? Common sense kind of says that you shouldn’t and Google agrees.

 

However, this does affect your traffic, as when someone links to you, it sends you visitors. This might lead to bad user experience on your side as well. You can’t always change the actions of others, but you can adapt to them and you can definitely control yours.

 

Most webmasters will be glad to fix a 404, because they know it hurts their website. By sending their users to a location that doesn’t exist, they’re creating a bad experience.

 

If you’ve deleted a page with backlinks pointing to it (although it’s not a good idea to delete such a page) you must make sure you have a 301 redirect set up. If not, all the link equity from the backlinks will be lost.

 

If you don't redirect the broken pages on your website that have backlinks to relevant locations, you won't be penalized or anything, but you will miss out on the link equity.

 

A 301 is mandatory, because often you won’t be able to change all the backlinks. Let’s take social media, for example. On a social media platform like Facebook, one post with a broken link could be shared thousands of times. Good luck fixing all of them!

 

You could also end up linking to a 404 on your own website, from your own website. Broken internal linking is common on big websites with thousands of pages or shops with dynamic URLs and filters. Maybe you've removed a product, but someone linked to it in a comment on your blog. Maybe you had a static menu somewhere with some dynamic filters that don't exist anymore. The possibilities are endless.

 
How to Fix Incoming 404s

 

Fixing incoming 404 URLs isn’t always very easy. That’s because you’re not in full control. If someone misspells a link pointing to your website, you’ll have to convince them to fix it. A good alternative to this is to redirect that broken link to the right resource. However, some equity can be lost in the process, so it’s great if you can get them to change the link. Nevertheless, the 301 is mandatory, just to make sure.

 

If you've deleted a page, you can let the webmasters who link to it know. Keep in mind that they might not like this and decide to link to another resource. That's why you have to make sure that your new resource is their best option.

 

To find incoming broken links, you can use cognitiveSEO’s Site Explorer. Type in your website, hit enter, then go to the Broken Pages tab.

 

 

If you click the blue line, you can see what links are pointing to your 404 URL. The green line represents the number of total domains pointing to it. Some domains might link to your broken page multiple times. For example, the second row shows 33 links coming from 12 domains. The green bar is bigger because the ratio is represented vertically (the third green bar is 4 times smaller than the second green bar).

 

Then, unfortunately, the best method is to contact the owners of the domains and politely point out that there has been a mistake. Show them the correct/new resource and let them know about the possibility of creating a bad experience for their users when linking to a broken page. Most of them should be happy to comply.

 

Whether you get them to link to the right page or not, it’s a good idea to redirect the broken page to a relevant location. I repeat, a relevant location. Don’t randomly redirect pages or bulk redirect them to your homepage.
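If your CMS doesn't handle redirects for you, here's a rough idea of what such a redirect map could look like in PHP, for example inside your 404 handler or front controller. The paths are made up for illustration; the point is that every broken URL gets sent to a genuinely related page instead of being dumped on the homepage.

<?php
// Map known broken URLs to their most relevant live replacements.
$redirects = [
    "/old-deleted-post/" => "/new-relevant-post/",
    "/misspelled-urll/"  => "/correct-url/",
];

$path = parse_url($_SERVER["REQUEST_URI"], PHP_URL_PATH);

if (isset($redirects[$path])) {
    // A 301 (permanent) redirect passes along most of the link equity.
    header("Location: " . $redirects[$path], true, 301);
    exit;
}

// Anything not in the list keeps returning a normal 404, as it should.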

 

It’s also a good idea to do a background check on the domains before redirecting your URLs. Some of them might be spam and you might want to add them to the disavow list.

 

Remember, 404s should generally stay 404. We only redirect them when they get traffic or have backlinks pointing to them. If you change a URL or delete a page and nobody links to it or it gets absolutely no traffic (check with Google Analytics), it’s perfectly fine for it to return a 404.

 
Outgoing 404s (Includes Internal 404s)

 

Outgoing 404s are a lot easier to fix because you have complete control over them. That’s because they’re found on your own website. You’re the one linking to them. Sure, someone might have screwed you over by deleting a page or changing its URL, but you’re still responsible for the quality of your own website.

 

The only type of 404 links that really hurt your website are the ones that are on it. When you add a link from your website to another website, you have to make sure that URL actually exists or that you don’t misspell it. You might also have internal links that are broken. Similar to shooting yourself in the foot.

 

Broken links create bad user experience and we all know that Google (and probably other search engines as well) cares about user experience.

 

Google crawls the web by following links from one site to another, so if you tell Google "Hey man, check out this link!" only for it to find a dead end, I'm pretty sure who Google's going to be mad at.

 

That’s why, from time to time, it’s a good idea to check if you’re not linking out to 404s. You never know when one shows up. The best way to do it is to use some software that crawls your website. 

 
How to Fix Outgoing 404s

 

Fixing outgoing 404s is easier because you have full control over them. They’re on your site, so you can change them.

 

To find them, you can use either Screaming Frog or Xenu Link Sleuth. I know Xenu looks shady, but it’s safe, it works and it’s free.

 

If you have a Screaming Frog subscription, go ahead and crawl your website. The free version supports 500 URLs, but a new website with under 500 URLs rarely has broken links. After the crawl is finished (it might take hours or even days for big sites), go check the Response Code tab and then filter it by searching for 404. At the bottom, go to the Inlinks section to find the location of the broken URL on your website.

 

 

 

Another way to do it is to go to the External tab, but there you won't find the internal broken links. To find their locations, go to Inlinks again.

 

 

If you want to use a free alternative, go for Xenu. However, things are a little more complicated there. Xenu doesn't show much more than URLs and their status codes. It also doesn't always follow 301s to crawl your entire site, so you'll have to specify the correct version of your site, be it HTTP or HTTPS, www or non-www.

 

To begin the crawl, go to File -> Check URL. Then enter your website’s correct main address and hit OK. Make sure that the Check External Links box is checked.

 

 

After the crawl is done, you can sort the list by status codes. However, a better way is to go to View and select Show Broken Links Only. After that, to view the location of the broken link on your site, you’ll have to right click and hit URL properties. You’ll find all the pages that link to it.

 

Unfortunately, I haven’t found a proper way of exporting the link locations, so you’re stuck with right clicking each link manually.

 

After you’ve located the links with either Xenu or Screaming Frog, edit them in your admin section to point them to a working URL. You can also just 301 them, but some link equity will be lost so the best thing to do is to fix the links themselves. Just remember that the 301 redirect is mandatory.

 
Building Links with the Broken Links Method

 

These 404s are always a struggle, aren't they? That's true, but there's also a very cool thing about 404s: you can exploit them to build new links.

 

Sounds good, right? Let me explain.

 

Wouldn't you like someone to point out a broken link on your site? I'd certainly like that. What if they then went even further and gave you a new resource to link to, one even better than the one you were linking to before? Would you consider linking to it?

 

Well, if you find some relevant sites that link to broken pages, you might as well do them a favor and let them know. And how can you do that, exactly? Well, you can use the Broken Pages section of CognitiveSEO’s Site Explorer, of course.

 

 

However, you’ll also need some great content to pitch them if you want this to work. If you don’t have that, they won’t bother linking to you. They’ll just remove the broken link and thank you for pointing it out. So, if you aren’t already working on a great content creation strategy, you should get started.

 

The secret to broken link building, however, is to have awesome content that they can link to.

 

Once you find a website linking to a broken page, all you have to do is email them something like this:

 

Hey there, I was checking your site and followed a link but it leads to a page that doesn’t exist. You might want to fix that, as it creates a bad experience for your users. Also, if you find it appropriate, I have a pretty good resource on that topic you could link to. Let me know if you like it.

 

I'd go one step further and actually search the linked-to site for the missing resource. If it's there at a new location, point that out before pitching your article. You'll have a better chance of them trusting you this way. Your article will be an alternative, and if the old resource is worse, they'll be able to compare the two and see the difference.

 

The broken link method is one of the best SEO strategies for link building. If you want to learn more about this method and how to apply it effectively, you can read this awesome article about broken pages link building technique.

 

Conclusion

 

So, if you were wondering if 404 errors hurt SEO, now you know the answer. Anyway, let me summarize it:

 

404 error pages don't really hurt your SEO, but there's definitely a lot you can miss out on if you don't fix them. If you have backlinks pointing to pages on your website that return a 404, try to get those backlinks fixed and 301 redirect your broken URLs to relevant locations. If you have links on your site that point to broken pages, fix those as soon as possible to maximize link equity flow and improve UX.

 

What are your experiences with 404 pages? Do you constantly check your website for 404s? Have you ever used the broken pages link building method mentioned above? Let us know in the comments section!

4 warning signs AdSense is ruining your contextual advertising strategy

Posted on Aug 30, 2018 in SEO Articles

In the dark ages of the SEO era, when bloggers and webmasters were still familiarizing themselves with the process and its functionality, certain tactics and strategies had become industry standards.

The era I'm talking about is the one where Google AdSense was heavily built into the foundation of a blogger's strategy. The "legacy" tactics associated with this approach can still be found in the way modern publishers think about SEO and branding strategy. However, AdSense's limited customizability can hold back publishers. This needs to be addressed and rooted out.

Before assuming AdSense is the best monetization partner for you, consider these four warning signs. If you’re guilty of practicing any of these points, it’s time you re-evaluated your monetization partner and strategy.

1. You haven’t considered other platforms

It’s no secret that AdSense as a standalone monetization stream isn’t enough to earn substantial revenue. Most solopreneurs that still operate in the “blogosphere” have understood for years that it is important to branch out and diversify revenue streams. So there’s nothing revolutionary about this concept.

Most of the focus on diversification has been on developing products to sell, with eBooks being a gold standard. This is great advice, even if it can become a bit boilerplate at times. But we’re not talking about selling products today. We’re talking about contextual advertising, which means placing relevant ads on your site that fit in with the content of your page. When it comes to contextual advertising, too many people still aren’t considering their other options.

Media.net, the second largest contextual advertising business worldwide by revenue, is a good place to start experimenting. The platform uses machine-learning algorithms to predict user intent, based on the content of your pages, and serves ads based on the associated keywords. With Media.net you get exclusive access to Yahoo! Bing’s $6 billion worth of search demand. This allows you to leverage quality advertisers even if you are in a smaller niche.

Performance is obviously different for every site, but Perrin Carrell of AuthorityHacker claims Media.net ads earn them 5x as much as AdSense ads, and Jon Dykstra of FatStacksBlog reported that some Media.net ad placements were earning more revenue than all other ad networks.

One of the biggest advantages of Media.net is that its ads are heavily customizable. Sizes and designs can be tailored to match your site so that they are native to your content and in line with your branding, resulting in higher engagement and revenue. Native ads are a great way to offer your readers an uninterrupted experience since these ads look like a natural extension of your website. In fact, these ads are also mobile responsive, which means more revenue for you.

Media.net Native Ad Unit

 

Media.net Contextual Ad Unit

From there, you can also consider ad servers like the Google Ad Manager (formerly DoubleClick For Publishers) and OpenX. Ad server platforms like these give publishers great control over ads, including the ability to set preferred deals with a CPM floor, and the option to interact directly with the ad marketplace.

In short, if AdSense is the only ad platform you’ve experimented with, you are missing out on great revenue-generating opportunities.

2. You are picking topics based on AdWords keyword bids

The SEO industry grew up on the Google AdWords Keyword Tool, and its successor, the Keyword Planner. One trend, born in the age of “Made For AdSense” (MFA) blogs and microsites, was to use the Keyword Planner to discover topics to write about based on AdWords bid prices.

This approach was never a good long-term strategy. A blog based on topics chosen to optimize revenue according to this approach often leads to a poorly branded content site that doesn’t naturally adapt to the needs of its audience. The obviously commercial intent of the topics chosen puts a hard ceiling on the size of your recurring audience.

Search engines like sites that build recurring audiences. They earn higher click through rates from the search engines, which Googlers have admitted are used to judge SERP quality.

Modern content creators need to select topics based on what will most successfully help them cultivate an audience. This means identifying keywords that address specific problems you can help users solve. 

You do not find these topics by typing generic industry keywords into the Keyword Planner. You find them by identifying your audience and the platforms they frequent, the kind of questions they ask one another, or even asking them directly what they are most frustrated with, and looking for satisfaction gaps in the answers to those questions. Only then should you turn to the Keyword Planner to start looking for the best keywords to represent your solutions.

The goal isn’t to target valuable keywords, but to target valuable audiences. This is a crucial difference that should guide your strategy at a more foundational level.

3. Your ad placement is based on MFA “best practices” instead of testing

“Best practices” rooted in old school MFA thinking prevent you from building your own monetization strategy from the ground up. They can also hurt your rankings in the search results.

Damaged Rankings

Old school, "gray hat" MFA tactics, like placing ads where they will be confused for navigation rather than placing them according to your layout and content, were never good branding strategies, and they simply don't work anymore.

Google’s internal quality rater guidelines explicitly state that sites should never disguise advertisements as the main content or navigation of the site, and if they do they will receive the “lowest” quality rating. Likewise for ads that make the main content unreadable, as well as ads that are distracting because they are too shocking.

Bad Strategy

Even advice that seems innocuous and doesn’t violate search guidelines can be harmful.

Recommendations like “place your ad in the sidebar,” “place it within your content just above the fold,” or “use the 300×250 ad size” are often unhelpful and counterproductive. Advice this specific shouldn’t be given without context, because ads should be placed in a way that fits your site design.

Suggestions like these are always hypotheses that you should test, not rules written in stone. Run your own A/B tests to find out what works for you.

We recommend Google Analytics Experiments for your testing because their Bayesian statistical methods make it easier to interpret results, because they are free, and because the data is as fully incorporated into Google Analytics as possible.

4. You are not partnering with sponsors

This is one of the biggest opportunities you miss out on if you operate on an AdSense-focused monetization strategy. When you work with sponsors, you can work advertisements entirely into the main content of your blog post, or host articles that are sponsored content created by sponsors themselves. You can negotiate deals that will guarantee a certain level of revenue, which is not always possible using programmatic advertising.

You can collaborate with sponsors on innovative campaigns that will earn the sponsor far more attention than traditional ads, which naturally means they will be willing to spend more. Innovative approaches can also result in more exposure not just for your sponsor, but even for your own brand.

It also lets you monetize channels where AdSense won’t, such as your social media platforms.

If you aren’t reaching out to potential sponsors to discuss possibilities like these, you are missing out on substantial revenue.

Conclusion

AdSense should not be thought of as central to your contextual advertising strategy, or worse, the foundation of how you approach brand building. Diversify your advertising platforms, migrate your market research outside of AdSense’s native tools, and rely on your own testing strategies. Let your brand drive your monetization strategy, not the other way around.

Manish Dudharejia is the president and founder of E2M Solutions Inc, a San Diego based digital agency that specializes in website design & development and ecommerce SEO. Follow him on Twitter.

Structured Data Can = MehSEO

Posted by on Aug 30, 2018 in SEO Articles | Comments Off on Structured Data Can = MehSEO

Structured Data Can = MehSEO

In 2011, Google, Bing & Yahoo announced Schema.org, which got SEOs all excited to start marking up website content to turn it into “structured data.” The benefit would be that search engines would be more certain that a text string of numbers was in fact a phone number, or at least more certain that you wanted them to think it was a phone number. The search engines could then turn the structured data into eye-catching fripperies designed to seduce searchers into surrendering their clicks and revenue to your fabulously marked-up site (aka “Rich Results”).

It also could help your fridge talk to your Tesla.
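
As a concrete (and entirely hypothetical) illustration of the kind of markup in question, here is a short sketch that emits schema.org Organization data as JSON-LD, labeling a phone number and a logo explicitly rather than leaving them as bare strings in the HTML:

```python
import json

# Hypothetical business details, expressed with schema.org Organization properties.
organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Local Business",
    "url": "https://www.example.com/",
    "logo": "https://www.example.com/logo.png",   # the https://schema.org/logo property
    "telephone": "+1-555-010-0000",               # unambiguously a phone number
}

# Emit the JSON-LD block you would place in the page's <head>.
print('<script type="application/ld+json">')
print(json.dumps(organization, indent=2))
print("</script>")
```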

So pretty much every SEO marked up their audits and conference presentations with recommendations to mark up all the things. LSG was no exception. And we have seen it work some nice SEO miracles.

There was the ecommerce site that lost all its product review stars until we reconfigured the markup. There was the yellow pages site that got a spammy structured data manual action for merging a partner’s review feed into its own. There is the software vendor and its clients that (still!) violate Google’s structured data guidelines and get away with it. There have been countless Knowledge Panels that have needed the tweaking one can only get from a perfectly implemented https://schema.org/logo.

But structured data is not a killer SEO strategy for all situations, and it’s important that SEOs and clients understand that often it’s more of a future-proofing game than an actual near-term traffic or money-generator. For example, let’s take this UGC site that generated about 22 million clicks from Google over the past three months and see how many clicks are reported as coming from “Rich Results” in Google Search Console:

So less than one-half of one-half of 1% of clicks came from a “Rich Result.” Not particularly impressive.

The good news is that Google is in fact using the structured markup. We can see evidence of it in the SERPs. But it’s likely the content of this site doesn’t lend itself to eye-popping featured snippets. For example, many of the Rich Results appear to just be bolded words that appear in the URL snippets in the SERPs, kind of like this:

It also may just take time before Google trusts your markup.

So before you drop everything and prioritize structured markup, you may want to consult Google’s Structured Data Gallery to get an idea of which types of content Google is encouraging site owners to mark up. You also should check the SERPs to see what your competitors are doing in this area and how their marked-up content is being displayed. This should give you a good idea of what the potential is for your site.

And remember, “you can mark up anything, but you can’t mark up everything…” – Tony Robbins?

The post Structured Data Can = MehSEO appeared first on Local SEO Guide.

Getting personal with SEO: how to use search behavior to transform your campaign

Posted by on Aug 29, 2018 in SEO Articles | Comments Off on Getting personal with SEO: how to use search behavior to transform your campaign

In order to meet the needs of today’s consumers and a more intelligent digital market, creating value in optimization campaigns requires innovative thinking and a personalized approach. Adverts, landing pages, and on-site messages that feel tailor-made are becoming the norm for many brands, contributing to higher response rates, visibility, and value.

Arguably, in today’s post-truth era, creating a personal message that can tap into the emotions and needs of a consumer is exactly the direction in which we will continue to progress. It’s also likely that in the near future, this will become the only way that optimization campaigns can be successful.

Anyone can enhance and deliver stronger campaigns by drawing insights from search behavior and using them to address their digital customers directly. But how can you maximize the effectiveness of doing this? Using Delete’s European Search Award-winning campaign for Leeds Beckett University as a case study, this article will take an in-depth look into profiling and understanding your browsers to attract and convert new customers.

Why utilizing user search behavior is necessary in campaigns

From Google’s personalized search algorithm that was launched in 2005, to 2015’s RankBrain, search results have consistently shifted towards searcher satisfaction rather than the needs of a webmaster or business. As users began to demand more intelligent, considered content (keyword stuffing is now a definitive no-go), we’ve had to adapt by creating engaging content that is authoritative in terms of knowledge and information.

There are clear signs that behavior signals are on Google’s radar. Google now elevates the results that it considers to be more relevant to a searcher based on profile information that it gathers about them. So, when it comes to creating your own outreach campaigns, it is only logical to harness and use this profile information to influence post-click user experience.

Harness search behavior to create customer profiles and develop positive relationships

Using search behavior information and user profiles is important because of the phenomenal results you can achieve, particularly at a time when advertising is becoming more challenging by the day.

Splitting users into customer profiles enables the creation of targeted, tailor-made advertising and content that is more likely to result in conversions. User behavior can be tracked and profiled in a variety of ways, ranging from in-depth, specific methods to quicker, cheaper options that suit a brand looking to boost a current campaign or change how its advertising is handled in-house. Not only will customer profiles ensure that only relevant content is delivered to users, but they can also contribute to the development of customer trust and loyalty.

Delete’s Leeds Beckett campaign delivered tailor-made landing pages and adverts to international students, aiming to encourage verbal contact with the university as early in the cycle as possible and to make the application process easier and less daunting. By using geographical data, we were able to create customer profiles for international students, which meant we could serve carefully selected imagery to visitors from China, India, and Europe, as well as clear and relevant calls to action.

Splitting apart potential customers by geography, interests, and type of content consumption on the site is the most efficient way to create customer profiles. It can be done through both organic searches and paid searches, with both outlets leading to different customer bases across a variety of platforms. Leveraging existing data is also a practical and simple solution that will help develop stronger relationships with a current customer base. You can then lead users to dynamic pages and imagery that are reflective of organic searches, geolocation, and paid advertising clicks.

The value in creating customer profiles from paid or organic searches

Advertisers now have to look for ways to outsmart the competition. Unfortunately, managing a campaign well is no longer anything special, but a default expectation. Try going beyond the boundaries of just “best practice” SEO or PPC and show real innovation and creativity; it will really pay off.

Using data from users’ organic searches enables a valuable customer profile of people who are already invested or interested in a brand. Applied to SEO, it creates the opportunity to tap into a receptive audience who will benefit from additional information and who might have abandoned the conversion if they hadn’t been given access to the information they were looking for.

Delete’s campaign with Leeds Beckett University experienced phenomenal results. On a typical budget for a campaign of its caliber, we were able to generate approximately £6.9 million in revenue in one year and an ROI of 10,403%. The use of customer profiles undoubtedly played a large part in this.

Use geographical data to deliver direct and relevant information

Aiming to target potential customers and increase conversions, Delete developed an innovative live map that plotted the addresses of past enrollments, prospects gathered at educational fairs, and open day registrations. This completely changed their geographical targeting across all marketing campaigns, resulting in a 691.67% increase in traffic to the clearing section.

Creating customer profiles based on geography gives you the opportunity to attract and cater to people who may have less initial interest, as well as to reduce abandoned conversions caused by unrelated content. It can also encourage behaviors that are natural and reflective of the user, with a lower cost per click and a higher volume of leads.

Revolutionize the way you use paid and organic search behavior for remarkable results

To maximize results in a marketing campaign, create dynamic landing pages and website experiences based on recorded search behaviors and the profiles that can subsequently be created from this information. When it comes to paid ads, you can pass targeting and settings to a website and use this information to personalize the website.

With organic listings, you can glean user interests from the pages they enter on from organic search and from what they do once they are on a page. If your landing pages are built to target the desired keywords well, you can also make reasonable assumptions about people who land on them from organic search and then interact with them however you want, even targeting specific interests.

For example, in our campaign with Leeds Beckett, if a user indicated an interest in a Civil Engineering degree (by clicking on a PPC ad from Civil Engineering for Undergraduates ad group), the landing page or the whole website would start surfacing an image of a work placement student standing on a building site, wearing a hard hat and high visibility jacket. This brings the individual student’s interests to the surface, highlighting the best relevant features that the university has on offer. Ultimately the aim here is to shorten the user journey and increase the chance of a conversion.
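
As a minimal sketch of that idea (the parameter name, ad-group labels, and image paths here are hypothetical), the landing page can read the ad-group hint passed in the URL of a paid click and swap the hero image to match:

```python
# Hypothetical mapping from the ad group passed in the landing-page URL to a hero image.
HERO_IMAGES = {
    "civil-engineering-ug": "/img/hero-civil-engineering-placement.jpg",
    "business-ug": "/img/hero-business-school.jpg",
}
DEFAULT_HERO = "/img/hero-campus.jpg"

def choose_hero(query_params: dict) -> str:
    """Pick the hero image that matches the interest signaled by the paid click."""
    ad_group = query_params.get("adgroup", "").lower()
    return HERO_IMAGES.get(ad_group, DEFAULT_HERO)

# A click from the "Civil Engineering for Undergraduates" ad group:
print(choose_hero({"adgroup": "civil-engineering-ug"}))  # placement-student hero image
# Any other visitor falls back to the generic campus image:
print(choose_hero({}))
```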

This can be applied to almost any marketing area or industry, and it will transform the way that your users are able to engage with your content.

The Long-Term Link Acquisition Value of Content Marketing

Posted by on Aug 29, 2018 in SEO Articles | Comments Off on The Long-Term Link Acquisition Value of Content Marketing

The Long-Term Link Acquisition Value of Content Marketing

Posted by KristinTynski

Recently, new internal analysis of our work here at Fractl has yielded a fascinating finding:

Content marketing that generates mainstream press is likely 2X as effective as originally thought. Additionally, the long-term ROI is potentially many times higher than previously reported.

I’ll caveat that by saying this applies only to content that can generate mainstream press attention. At Fractl, this is our primary focus as a content marketing agency. Our team, our process, and our research are all structured around figuring out ways to maximize the newsworthiness and promotional success of the content we create on behalf of our clients.

Though data-driven content marketing paired with digital PR is on the rise, there is still a general lack of understanding around the long-term value of any individual content execution. In this exploration, we sought to answer the question: What link value does a successful campaign drive over the long term? What we found was surprising and strongly reiterated our conviction that this style of data-driven content and digital PR yields some of the highest possible ROI for link building and SEO.

To better understand this full value, we wanted to look at the long-term accumulation of the two types of links on which we report:

Direct links from publishers to our client’s content on their domain
Secondary links that link to the story the publisher wrote about our client’s content

While direct links are most important, secondary links often provide significant additional pass-through authority and can often be reclaimed through additional outreach and converted into direct do-follow links (something we have a team dedicated to doing at Fractl).

Below is a visualization of the way our content promotion process works:

So how exactly do direct links and secondary links accumulate over time?

To understand this, we did a full audit of four successful campaigns from 2015 and 2016 through today. Having a few years of aggregation gave us an initial benchmark for how links accumulate over time for general interest content that is relatively evergreen.

We profiled four campaigns:

Perceptions of Perfection Across Borders
America’s Most P.C. and Prejudiced Places
Reverse-Photoshopping Video Game Characters
Water Bottle Germs Revealed

The first view we looked at was direct links, or links pointing directly to the client blog posts hosting the content we’ve created on their behalf.

There is a good deal of variability between campaigns, but we see a few interesting general trends that show up in all of the examples in the rest of this article:

Both direct and secondary links will accumulate in a few predictable ways:
A large initial spike with a smooth decline
A buildup to a large spike with a smooth decline
Multiple spikes of varying size

Roughly 50% of the total volume of links that will be built will accumulate in the first 30 days. The other 50% will accumulate over the following two years and beyond.
A small subset of direct links will generate their own large spikes of secondary links.

We’ll now take a look at some specific results. Let’s start by looking at direct links (pickups that link directly back to our client’s site or landing page).

The typical result: A large initial spike with consistent accumulation over time

This campaign, featuring artistic imaginings of what bodies in video games might look like with normal BMI/body sizes, shows the most typical pattern we witnessed, with a very large initial spike and a relatively smooth decline in link acquisition over the first month.

After the first month, long-term new direct link acquisition continued for more than two years (and is still going today!).

The less common result: Slow draw up to a major spike

In this example, you can see that sometimes it takes a few days or even weeks to see the initial pickup spike and subsequent primary syndication. In the case of this campaign, we saw a slow buildup to the pinnacle at about a week from the first pickup (exclusive), with a gradual decline over the following two weeks.


Zooming out to a month-over-month view, we can see resurgences in pickups happening at unpredictable intervals every few months or so. These spikes continued up until today with relative consistency. This happened as some of the stories written during the initial spike began to rank well in Google. These initial stories were then used as fodder or inspiration for stories written months later by other publications. For evergreen topics such as body image (as was the case in this campaign), you will also see writers and editors cycle in and out of writing about these topics as they trend in the public zeitgeist, leading to these unpredictable yet very welcomed resurgences in new links.

Least common result: Multiple spikes in the first few weeks

The third pattern we observed was seen on a campaign we executed examining hate speech on Twitter. In this case, we saw multiple spikes during this early period, corresponding to syndications on other mainstream publications that then sparked their own downstream syndications and individual virality.

Zooming out, we saw a similar result as the other examples, with multiple smaller spikes more within the first year and less frequently in the following two years. Each of these bumps is associated with the story resurfacing organically on new publications (usually a writer stumbling on coverage of the content during the initial phase of popularity).

Long-term resurgences

Finally, in our fourth example that looked at germs on water bottles, we saw a fascinating phenomenon happen beyond the first month where there was a very significant secondary spike.

This spike represents syndication across all or most of the iHeartRadio network. As this example demonstrates, it isn’t wholly unusual to see large-scale networks pick up content a year or more later, with results that rival or even exceed the initial month’s.

Aggregate trends

When we looked at direct links back to all four campaigns together, we saw the common progression of link acquisition over time. The chart below shows the distribution of new links acquired over two years. We saw a pretty classic long tail distribution here, where 50% of the total links acquired happened in the first month, and the other 50% were acquired in the following two to three years.

“If direct links are the cake, secondary links are the icing, and both accumulate substantially over time.”

Links generated directly to the blog posts/landing pages of the content we’ve created on our clients’ behalf are only really a part of the story. When a campaign garners mainstream press attention, the press stories can often go mildly viral, generating large numbers of syndications and links to these stories themselves. We track these secondary links and reach out to the writers of these stories to try and get link attributions to the primary source (our clients’ blog posts or landing pages where the story/study/content lives).

These types of links also follow a similar pattern over time to direct links. Below are the publishing dates of these secondary links as they were found over time. Their over-time distribution follows the same pattern, with 50% of results being realized within the first month and the following 50% of the value coming over the next two to three years.

The value in the long tail

By looking at multi-year direct and secondary links built to successful content marketing campaigns, it becomes apparent that the total number of links acquired during the first month is really only about half the story.

For campaigns that garner initial mainstream pickups, there is often a multi-year long tail of links that are built organically without any additional or future promotions work beyond the first month. While this long-term value is not something we report on or charge our clients for explicitly, it is extremely important to understand as a part of a larger calculus when trying to decide if doing content marketing with the goal of press acquisition is right for your needs.

Cost-per-link (a typical way to measure ROI of such campaigns) will halve if links built are measured over these longer periods — moving a project you perhaps considered a marginal success at one month to a major success at one year.


A Quarter-Million Reasons to Use Moz’s Link Intersect Tool

Posted by on Aug 29, 2018 in SEO Articles | Comments Off on A Quarter-Million Reasons to Use Moz’s Link Intersect Tool

A Quarter-Million Reasons to Use Moz’s Link Intersect Tool

Posted by rjonesx.

Let me tell you a story.

It begins with me in a hotel room halfway across the country, trying to figure out how I’m going to land a contract from a fantastic new lead, worth $250,000 annually. We weren’t in over our heads by any measure, but the potential client was definitely looking at what most would call “enterprise” solutions and we weren’t exactly “enterprise.”

Could we meet their needs? Hell yes we could — better than our enterprise competitors — but there’s a saying that “no one ever got fired for hiring IBM”; in other words, it’s always safe to go with the big guys. We weren’t an IBM, so I knew that by reputation alone we were in trouble. The RFP was dense, but like most SEO gigs, there wasn’t much in the way of opportunity to really differentiate ourselves from our competitors. It would be another “anything they can do, we can do better” meeting where we grasp for reasons why we were better. In an industry where so many of our best clients require NDAs that prevent us from producing really good case studies, how could I prove we were up to the task?

In less than 12 hours we would be meeting with the potential client and I needed to prove to them that we could do something that our competitors couldn’t. In the world of SEO, link building is street cred. Nothing gets the attention of a client faster than a great link. I knew what I needed to do. I needed to land a killer backlink, completely white-hat, with no new content strategy, no budget, and no time. I needed to walk in the door with more than just a proposal — I needed to walk in the door with proof.

I’ve been around the block a few times when it comes to link building, so I wasn’t at a loss when it came to ideas or strategies we could pitch, but what strategy might actually land a link in the next few hours? I started running prospecting software left and right — all the tools of the trade I had at my disposal — but imagine my surprise when the perfect opportunity popped up right in little old Moz’s Open Site Explorer Link Intersect tool. To be honest, I hadn’t used the tool in ages. We had built our own prospecting software on APIs, but the perfect link just popped up after adding in a few of their competitors on the off chance that there might be an opportunity or two.

There it was:

3,800 root linking domains to the page itself
The page was soliciting submissions
Took pull requests for submissions on GitHub!

I immediately submitted a request and began the refresh game, hoping the repo was being actively monitored. By the next morning, we had ourselves a link! Not just any link, but despite the client having over 50,000 root linking domains, this was now the 15th best link to their site. You can imagine me anxiously awaiting the part of the meeting where we discussed the various reasons why our services were superior to those of our competitors, and then proceeded to demonstrate that superiority with an amazing white-hat backlink acquired just hours before.

The quarter-million-dollar contract was ours.

Link Intersect: An undervalued link building technique

Backlink intersect is one of the oldest link building techniques in our industry. The methodology is simple. Take a list of your competitors and identify the backlinks pointing to their sites. Compare those lists to find pages that overlap. Pages which link to two or more of your competitors are potentially resource pages that would be interested in linking to your site as well. You then examine these sites and do outreach to determine which ones are worth contacting to try and get a backlink.
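
Conceptually, the overlap step is just a set intersection. Here is a minimal sketch, using made-up backlink exports, that surfaces pages linking to two or more of the competitors you entered:

```python
from collections import Counter

# Hypothetical backlink exports: competitor -> pages that link to it.
backlinks = {
    "competitor-a.com": {"https://example.edu/gardening-links", "https://example.org/resources"},
    "competitor-b.com": {"https://example.edu/gardening-links", "https://example.gov/healthy-soil"},
    "competitor-c.com": {"https://example.org/resources", "https://example.edu/gardening-links"},
}

# Count how many competitors each linking page points to.
page_counts = Counter(page for pages in backlinks.values() for page in pages)

# Pages linking to two or more competitors are candidate resource pages for outreach.
for page, n in page_counts.most_common():
    if n >= 2:
        print(f"{n} competitors linked from {page}")
```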

Let’s walk through a simple example using Moz’s Link Intersect tool.

Getting started

We start on the Link Intersect page of Moz’s new Link Explorer. While we had Link Intersect in the old Open Site Explorer, you’re going to want to use our new Link Intersect, which is built from our giant index of 30 trillion links and is far more powerful.

For our example here, I’ve chosen a random gardening company in Durham, North Carolina called Garden Environments. The website has a Domain Authority of 17 with 38 root linking domains.

We can go ahead and copy-paste the domain into “Discover Link Opportunities for this URL” at the top of the Link Intersect page. If you notice, we have the choice of “Root Domain, Subdomain, or Exact Page”:

I almost always choose “root domain” because I tend to be promoting a site as a whole and am not interested in acquiring links to pages on the site from other sites that already link somewhere else on the site. That is to say, by choosing “root domain,” any site that links to any page on your site will be excluded from the prospecting list. Of course, this might not be right for your situation. If you have a hosted blog on a subdomain or a hosted page on a site, you will want to choose subdomain or exact page to make sure you rule out the right backlinks.

You also have the ability to choose whether we report back to you root linking domains or Backlinks. This is really important and I’ll explain why.

Depending on your link building campaign, you’ll want to vary your choice here. Let’s say you’re looking for resource pages that you can list your website on. If that’s the case, you will want to choose “pages.” The Link Intersect tool will then prioritize pages that have links to multiple competitors on them, which are likely to be resource pages you can target for your campaign. Now, let’s say you would rather find publishers that talk about your competitors and are less concerned about them linking from the same page. You want to find sites that have linked to multiple competitors, not pages. In that case, you would choose “domains.” The system will then return the domains that have links to multiple competitors and give you example pages, but you won’t be limited only to pages with multiple competitors on them.

In this example, I’m looking for resource pages, so I chose “pages” rather than domains.

Choosing your competitor sites

A common mistake made at this point is to choose exact competitors. Link builders will often copy and paste a list of their biggest competitors and cross their fingers for decent results. What you really want are the best link pages and domains in your industry — not necessarily your competitors.

In this example I chose the gardening page on a local university, a few North Carolina gardening and wildflower associations, and a popular page that lists nurseries. Notice that you can choose subdomain, domain, or exact page as well for each of these competitor URLs. I recommend choosing the broadest category (domain being broadest, exact page being narrowest) that is relevant to your industry. If the whole site is relevant, go ahead and choose “domain.”

Analyzing your results

The results returned will prioritize pages that link to multiple competitors and have a high Domain Authority. Unlike some of our competitors’ tools, if you put in a competitor that doesn’t have many backlinks, it won’t cause the whole report to fail. We list all the intersections of links, starting with the most and narrowing down to the fewest. Even though the nurseries website doesn’t provide any intersections, we still get back great results!

Now we have some really great opportunities, but at this point you have two choices. If you really prefer, you can just export the opportunities to CSV like any other tool on the market, but I prefer to go ahead and move everything over into a Link Tracking List.

By moving everything into a link list, we’re going to be able to track link acquisition over time (once we begin reaching out to these sites for backlinks) and we can also sort by other metrics, leave notes, and easily remove opportunities that don’t look fruitful.

What did we find?

Remember, we started off with a site that has barely any links, but we turned up dozens of easy opportunities for link acquisition. We turned up a simple resources page on forest resources, a potential backlink which could easily be earned via a piece of content on forest stewardship.

We turned up a great resource page on how to maintain healthy soil and yards on a town government website. A simple guide covering the same topics here could easily earn a link from this resource page on an important website.

These were just two examples of easy link targets. From community gardening pages, websites dedicated to local creek, pond, and stream restoration, and general enthusiast sites, the Link Intersect tool turned up simple backlink gold. What is most interesting to me, though, was that these resource pages never included the words “resources” or “links” in the URLs. Common prospecting techniques would have just missed these opportunities altogether.

While it wasn’t the focus of this particular campaign, I did choose the alternate of “show domains” rather than “pages” that link to the competitors. We found similarly useful results using this methodology.

For example, we found CarolinaCountry.com had linked to multiple of the competitor sites and, as it turns out, would be a perfect publication to pitch for a story as part of a PR campaign for promoting the gardening site.

Takeaways

The new Link Intersect tool in Moz’s Link Explorer combines the power of our new incredible link index with the complete features of a link prospecting tool. Competitor link intersect remains one of the most straightforward methods for finding link opportunities and landing great backlinks, and Moz’s new tool coupled with Link Lists makes it easier than ever. Go ahead and give it a run yourself — you might just find the exact link you need right when you need it.



Faceted Navigation and SEO: A Deeper Look

Posted by on Aug 29, 2018 in SEO Articles | Comments Off on Faceted Navigation and SEO: A Deeper Look

Faceted Navigation and SEO: A Deeper Look

The complex web of factors that determine page counts for a site with faceted navigation. It’s about the SEO, folks

tl;dr: Skip to each “Takeaways” section if you want a few ideas for handling faceted navigation and SEO. But do so at your own risk. The “why” is as important as the “what.”

If you have ever shopped for anything online, you’ve seen faceted navigation. This is the list of clickable options, usually in the left panel, that can be used to filter results by brand, price, color, etc. Faceted navigation makes it possible to mix & match options in any combination the user wishes. It’s popular on large online stores because it allows the user to precisely drill down to only the things they are interested in.

An example of faceted navigation

But this can cause huge problems for search engines because it generates billions of useless near-duplicate pages. This wastes crawl budget, lowers the chances that all of the real content will get indexed, and it gives the search engines the message that the site is mostly low-quality junk pages (because, at this point, it is).

Many articles talk about faceted navigation and how to mitigate the SEO problems that it causes. Those are reactive strategies: How to prevent the search engines from crawling and indexing the billions of pages your faceted navigation created.

This is not one of those how-to articles.

Instead, it’s about the decisions that create massive duplication and how to avoid them from the start. It’s about the seemingly innocuous UX choices and their unintended consequences. My goal is to give you a deeper understanding of how each decision affects crawlability and final page counts. I’m hoping this will give you knowledge you can use, both to avoid problems before they start and to mitigate problems that can’t be avoided.

Match Types and Grouping

Faceted navigation is typically divided into groups, with a list of clickable options in each group. There might be one group for brand names, another for sizes, another for colors, etc. The options in a group can be combined in any of a few different ways:

“AND” matching — With this match type, the store only shows an item if it matches all of the selected options. “AND” matching is most often used for product features where it is assumed the shopper is looking for a specific combination of features, and is only interested in a product if it has all of them. (e.g., headphones that are both wireless and noise-canceling)
“OR” matching — With this match type, the store shows items that match any of the selected options. This can be used for lists of brand names, sizes, colors, price ranges, and many other things. The assumption here is that the user is interested in a few different things, and wants to see a combined list that includes all of them. (e.g., all ski hats available in red, pink or yellow).
“Radio button” matching — With this match type, only one option may be selected at a time. Selecting one option deselects all others. The assumption here is that the options are 100% mutually exclusive, and nobody would be interested in seeing more than one of them at a time. Radio buttons are often used to set sort order. They are also sometimes used to choose between mutually exclusive categories. (e.g., specifying the smartphone brand/model when shopping for phone cases) Some radio button implementations require at least one selected option (e.g., for sort order), and others don’t (e.g., for categories).

The options within a given group can be combined using any one of these match types, but the groups themselves are almost always combined with each other using “AND” matching. For example, if you select red and green from the “colors” group, and you select XL and XXL from the “sizes” group, then you will get a list of every item that is both one of those two colors and one of those two sizes.

A typical real-world website will have several groups using different match types, with many options between them. The total number of combinations can get quite large:

The above example has just over 17 billion possible combinations. Note that the total number of actual pages will be much larger than this because the results from some combinations will be split across many pages.

For faceted navigation, page counts are ultimately determined by three main things:

The total number of possible combinations of options — In the simplest case (with only “AND” & “OR” matching, and no blocking) the number of combinations will be 2^n, where n is the number of options. For example, if you have 12 options, then there will be 2^12, or 4,096 possible combinations. This gets a bit more complicated when some of the groups are radio buttons, and it gets a lot more complicated when you start blocking things.
The number of matching items found for a given combination — The number of matching items is determined by many factors, including match type, the total number of products, the fraction of products matched by each filter option, and the amount of overlap between options.
The maximum number of items to be displayed per page — This is an arbitrary choice set by the site designer. You can set this to any number you want. A bigger number means fewer pages but more clutter on each of them.

 

Test: How Does Match Type Affect Page Counts?

The choice of match type affects the page count by influencing both the number of combinations of options and also the number of matching items per combination.

How were these results calculated?
All of the numeric results in this article were generated by a simulation script written for this purpose. This script works by modeling the site as a multi-dimensional histogram, which is then repeatedly scaled and re-combined with itself each time a new faceted nav option is added to the simulated site. The script simulates gigantic sites with many groups of different option types relatively quickly. (For previous articles, I have always generated crawl data using an actual crawler, running on a test website made up of real HTML pages. That works fine when there are a few tens of thousands of pages, but some of the tests for this article have trillions of pages. That would take my crawler longer than all of recorded human history to crawl. Civilizations rise and fall over centuries. I decided not to wait that long.)

Test #1 — Simple “AND” Matching

Suppose we have a site with the following properties:

The faceted nav consists of one big group, with 32 filtering options that can be selected in any combination.
There are 10,000 products.
On average, each filtering option matches 20% of the products.
The site displays (up to) 10 products per page.
Options are combined using “AND” matching.

The above assumptions give you a site with:

4,294,967,296 different combinations of options
4,295,064,687 pages.
4,294,724,471 empty results.

The obvious: The number of pages is enormous, and the vast majority of them are empty results. For every 12,625 pages on this site, one shows actual products. The rest show the aggravating “Zero items found” message. This is a terrible user experience and a colossal waste of crawl budget. But it’s also an opportunity.
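
The figures above come from the histogram simulation described earlier, but a back-of-the-envelope model (assuming every option matches exactly 20% of the catalog and options combine independently) lands on the same totals for this test:

```python
from fractions import Fraction
from math import ceil, comb

OPTIONS = 32                    # one group of "AND"-matched filter options
PRODUCTS = 10_000
MATCH_RATE = Fraction(1, 5)     # each option matches exactly 20% of products
PER_PAGE = 10

pages = empty = 0
for k in range(OPTIONS + 1):                 # k = number of selected options
    combos = comb(OPTIONS, k)
    items = PRODUCTS * MATCH_RATE ** k       # matches when k options are ANDed together
    if items < 1:
        empty += combos                      # a "Zero items found" page still renders
        pages += combos
    else:
        pages += combos * ceil(items / PER_PAGE)

print(f"combinations: {2 ** OPTIONS:,}")     # 4,294,967,296
print(f"pages:        {pages:,}")            # 4,295,064,687
print(f"empty pages:  {empty:,}")            # 4,294,724,471
```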

So what can we do about all those empty results? If you are in control of the server side code, you can remove them. Any option that would lead to a page that says “Zero items found” should either be grayed out (and no longer coded as a link) or, better yet, removed entirely. This needs to be evaluated on the server side each time a new page is requested. If this is done correctly, then each time the user clicks on another option, all of the remaining options that would have led to an empty result will disappear. This reduces the number of pages, and it also dramatically improves the user experience. The user no longer has to stumble through a maze of mostly dead ends to find the rare combinations that show products.
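
Here is a minimal sketch of that server-side check against a toy in-memory catalog (the products and option names are hypothetical); on a real store the same test would run against your product database:

```python
# Hypothetical catalog: product id -> the filter options it matches.
products = {
    1: {"wireless", "noise-cancelling"},
    2: {"wireless"},
    3: {"noise-cancelling", "foldable"},
    4: {"wired", "foldable"},
}
ALL_OPTIONS = {"wireless", "wired", "noise-cancelling", "foldable"}

def matching(selected: set) -> set:
    """Product ids that match every selected option ("AND" matching)."""
    return {pid for pid, opts in products.items() if selected <= opts}

def options_to_render(selected: set) -> set:
    """Only keep options that would still return at least one product."""
    current = matching(selected)
    return {
        option
        for option in ALL_OPTIONS - selected
        if any(option in products[pid] for pid in current)
    }

# With "wireless" selected, "wired" and "foldable" would be dead ends, so they disappear:
print(options_to_render({"wireless"}))   # {'noise-cancelling'}
```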

So let’s try this.

Test #2 — “AND” Matching, With Empty Results Removed

This test is identical to Test #1, except now all links that lead to empty results are silently removed.

This time, we get:

1,149,017 (reachable) combinations of options.
1,246,408 pages.
0 empty results. (obviously, because we’ve removed them)

This may still seem like a lot, but it’s a significant improvement over the previous test. The page count has gone from billions down to just over one million. This is also a much better experience for the users, as they will no longer see any useless options that return zero results. Any site that has faceted nav should be doing this by default.

Test #3 — “OR” Matching

This test uses the same parameters as Test #1, except it uses “OR” matching:

The faceted nav still has 32 filtering options
There are still 10,000 products.
Each filtering option still matches 20% of the products.
The site still displays 10 products per page.
Options are now combined using “OR” matching instead of “AND” matching.

This gives us:

4,294,967,296 different combinations of options.
4,148,637,734,396 pages (!)
0 empty results.

The number of combinations is precisely the same, but the number of pages is much higher now (966 times higher), and there are no longer any empty results. Why is the page count so high? Because, with “OR” matching, every time you click on a new option the number of matching items increases. This is the opposite of “AND” matching, where the number decreases. In this test, most combinations now include almost all of the products on the site. In Test #1, most combinations produced empty results.

There are no empty results at all in this new site. The only way there could be an empty result would be if you chose to include a filtering option that never matches anything (which would be kind of pointless). The strategy of blocking empty results does not affect this match type.

Test #4 — Radio Buttons

This test uses radio button matching.

If we repeat Test #1, but with radio button matching, we get:

33 different combinations of options.
7,400 pages.
0 empty results.

This is outrageously more efficient than any of the others. The downside of radio button matching is that it’s much more restrictive in terms of user choice.

The takeaway: Always at least consider using radio button matching when you can get away with it (any time the options are mutually exclusive). It will have a dramatic effect on page counts.

Recap of Tests #1–4:

Test 1 – “AND” matching (without blocking empty results): 4,295,064,687 pages
Test 2 – “AND” matching, with empty results blocked: 1,246,408 pages
Test 3 – “OR” matching: 4,148,637,734,396 pages
Test 4 – Radio buttons: 7,400 pages

Takeaways

The choice of match type is important and profoundly impacts page counts.
“OR” matching can lead to extremely high page counts.
“AND” matching isn’t as bad, provided you are blocking empty results.
You should always block empty results.
Blocking empty results helps with “AND” matching, but doesn’t affect “OR” matching.
Always use radio buttons when the options are mutually exclusive.

How Grouping Affects Page Count

So far, we have looked at page counts for sites that have one big group of options with the same match type. That’s unrealistic. On a real website, there will usually be many groups with different match types. The exact way the options are separated into groups is another factor that can affect page counts.

Test #5 — “OR” Matching, Split Into Multiple Groups

Let’s take the original parameters from Test #3:

The faceted nav has a total of 32 filtering options.
There are 10,000 products.
On average, each filtering option matches 20% of the products.
The site displays up to 10 products per page.
Options are combined using “OR” matching.

But this time, we’ll redo the test several times, and each time, we’ll split the 32 options into a different number of groups.

This gives us:

1 group with 32 options: 4,148,637,734,396 pages, 0 empty results
2 groups with 16 options per group: 2,852,936,777,269 pages, 0 empty results
4 groups with 8 options per group: 466,469,159,950 pages, 0 empty results
8 groups with 4 options per group: 5,969,194,867 pages, 290,250,752 empty results
16 groups with 2 options per group: 4,296,247,759 pages, 4,275,284,621 empty results

The interesting thing here is that the last two tests have some empty results. Yes, all groups used “OR” matching, and yes, I told you “OR” matching does not produce empty results. So what’s going on here? Remember, no matter which match types are used within each group, the groups are combined with each other using “AND” matching. So, if you break an “OR” group into many smaller “OR” groups, you get behavior closer to an “AND” group.

Another way to put it: Suppose there are eight groups with four options each, and the user has selected exactly one option from each group. For any item to show up in those results, the item would have to match all eight of those selected options. This is functionally identical to what you would get if those eight selected options were part of an “AND” group.

If you are blocking empty results (which you should be doing anyway), then the actual page counts for the last two tests will be much smaller than is shown in this table. Before you get all excited, note that you have to have quite a few groups before this starts happening. It’s possible some site might be in a market where it makes sense to have eight groups with four options each, but it isn’t something that will happen often.

The boring but more practical observation is that even breaking the group into two parts reduces the page count noticeably. The difference isn’t huge, but it’s enough to be of some value. If a group of options that uses “OR” matching can be logically separated into two or more smaller groups, then it may be worth doing.

Test #6 — “AND” Matching, Split Into Multiple Groups

(I’m including this test because, if I don’t, people will tell me I forgot to do this one)

This test is the same as Test #5, but with “AND” matching instead of “OR” matching (and empty results are now being blocked).

1 group with 32 options: 1,246,408 pages
2 groups with 16 options per group: 1,246,408 pages
4 groups with 8 options per group: 1,246,408 pages
8 groups with 4 options per group: 1,246,408 pages
16 groups with 2 options per group: 1,246,408 pages

Yep. They all have the same number of pages. How can this be? The options within each group use “AND” matching, and groups are combined with each other using “AND” matching, so it doesn’t matter if you have one group or several. They are functionally identical.

Takeaway

If you want to split up an “AND” group because you think it will make sense to the user or will look nicer on the page, then go for it, but it will not affect page counts.

Other Things that Affect Page Counts
Test #7 — Changing “Items per Page”

This test uses the following parameters:

The faceted nav consists of five groups, with varying option counts, and a mix of different match types.
There are 10,000 products.
On average, each filtering option matches 20% of the products.
Links to empty results are blocked.

The test was repeated with different values for “Items per Page.”

This gives us:

10 items per page: 18,690,151,025 pages
32 items per page: 10,808,363,135 pages
100 items per page: 8,800,911,375 pages
320 items per page: 8,309,933,890 pages
1,000 items per page: 8,211,780,310 pages

This makes a difference when the values are small, but the effect tapers off as the values get larger.

Test #8 — Adding a Pagination Limit

Some sites, especially some very large online stores, try to reduce database load by setting a “pagination limit.” This is an arbitrary upper limit to the number of pages that can be returned for a given set of results.

For example, if a given filter combination matches 512,000 products, and the site is set up to show 10 products per page, this particular combination would normally create 51,200 pages. Some sites set an arbitrary limit of, say, 100. If the user clicks all the way to page 100, there is no link to continue further.

These sites do this because, compared to delivering pages at the start of a pagination structure, delivering pages deeper in the structure creates a massive load on the database (for technical reasons beyond the scope of this article). The larger the site, the greater the load, so the largest sites have to set an arbitrary limit.

This test uses the following parameters:

The faceted nav consists of five groups, with varying option counts, and a mix of different match types.
There are 500,000 products.
On average, each filtering option matches 20% of the products.
Links to empty results are blocked.

The test was repeated with different values for the pagination limit.

This gives us:

Pagination limit 5: 12,079,937,370 pages
Pagination limit 10: 13,883,272,770 pages
Pagination limit 20: 15,312,606,795 pages
Pagination limit 40: 16,723,058,170 pages
Pagination limit 80: 17,680,426,670 pages
Pagination limit 160: 18,252,882,040 pages
No limit: 18,690,151,025 pages

That’s definitely an improvement, but it’s underwhelming. If you cut the pagination limit in half, you don’t wind up with half as many pages. It’s more in the neighborhood of 90% as many. But this improvement is free because this type of limit is usually added for reasons other than SEO.

Pagination Takeaways

Test 7:

For lower values, changing “Items per Page” improves page counts by a noticeable amount.
When the values get higher, the effect tapers off. This is happening because most of the results now fit on one page. (and the page count can’t get lower than one)

Test 8:

If you have a huge site implementing a pagination limit primarily for database performance reasons, you may see a minor SEO benefit as a free bonus.
If you’re not also doing this to reduce database load, it’s not worth it.

Selectively Blocking Crawlers

All of the tests so far let the crawler see all of the human-accessible pages. Now let’s look at strategies that work by blocking pages via robots meta, robots.txt, etc.

Before we do that, we need to be clear about what “page count” really means. There are actually three different “page counts” that matter here:

Human-readable page count — Pages that can be viewed by a human being with a browser.
Crawlable page count — Pages that a search engine crawler is allowed to request.
Indexable page count — The number of pages that the search engine is allowed to index, and to potentially show in search results.

The crawlable page count is important because it determines how much crawl budget is wasted. This will affect how thoroughly and how frequently the real content on the site gets crawled. The indexable page count is important because it effectively determines how many thin, near-duplicate pages the search engines will try to index. This is likely to affect the rankings of the real pages on the site.

Test #9 — Selection Limit via Robots Meta with “noindex, nofollow”

In this test, if the number of selected options on the page gets above a pre-specified limit, then <meta name="robots" content="noindex,nofollow"> will be inserted into the HTML. This tells the search engines not to index the page or follow any links from it.
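
As a sketch, the server-side logic can be as simple as counting the selected options (the limit of 2 below is just an example value):

```python
SELECTION_LIMIT = 2   # example limit; tune to your own site

def robots_meta(selected_options: list) -> str:
    """Emit a blocking robots meta tag once too many filter options are selected."""
    if len(selected_options) > SELECTION_LIMIT:
        return '<meta name="robots" content="noindex,nofollow">'
    return ""   # page stays indexable, and its links stay followable

print(robots_meta(["red", "xl"]))             # within the limit: no tag
print(robots_meta(["red", "xl", "cotton"]))   # over the limit: noindex,nofollow
```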

This test uses the following parameters:

The faceted nav consists of five groups, with varying option counts, and a mix of different match types.
There are 10,000 products.
On average, each filtering option matches 20% of the products.
Links to empty results are blocked.

For this test, the “selection limit” is varied from 0 to 5. Any page where the number of selected options is larger than this selection limit will be blocked, via robots meta tag with noindex, nofollow.

Selection limit 0: 11,400 crawlable pages, 1,000 indexable pages
Selection limit 1: 79,640 crawlable pages, 11,400 indexable pages
Selection limit 2: 470,760 crawlable pages, 79,640 indexable pages
Selection limit 3: 2,282,155 crawlable pages, 470,760 indexable pages
Selection limit 4: 9,269,631 crawlable pages, 2,282,155 indexable pages
Selection limit 5: 32,304,462 crawlable pages, 9,269,631 indexable pages
No limit: 18,690,151,025 crawlable pages, 18,690,151,025 indexable pages

In these results, both indexable and crawlable page counts are reduced dramatically, but the number of crawlable pages is reduced by much less. Why? Because a robots meta tag is part of the HTML code of the page it is blocking. That means the crawler has to load the page in order to find out it has been blocked. A robots meta tag can block indexing, but it can’t block crawling. It still wastes crawl budget.

You might well ask: If robots meta can’t directly block a page from being crawled, then why is the crawlable page count reduced at all? Because crawlers can no longer reach the deepest pages: The pages that link to those pages are no longer followed or indexed. Robots meta can’t directly block crawling of a particular page, but it can block the page indirectly, by setting “nofollow” for all of the pages that link to it.

Test #10 — Repeat of Test #9, But With “noindex, follow”

This a repeat of test #9, except now the pages are blocked by a robots meta tag with “noindex, follow” instead of “noindex, nofollow.” This tells the crawler that it still shouldn’t index the page, but it is OK to follow the links from it.

(I’m only including this one because, if I don’t, someone is bound to tell me I forgot to include it.)

Selection limit 0: 18,690,151,025 crawlable pages, 1,000 indexable pages
Selection limit 1: 18,690,151,025 crawlable pages, 11,400 indexable pages
Selection limit 2: 18,690,151,025 crawlable pages, 79,640 indexable pages
Selection limit 3: 18,690,151,025 crawlable pages, 470,760 indexable pages
Selection limit 4: 18,690,151,025 crawlable pages, 2,282,155 indexable pages
Selection limit 5: 18,690,151,025 crawlable pages, 9,269,631 indexable pages
No limit: 18,690,151,025 crawlable pages, 18,690,151,025 indexable pages

This scheme reduces the number of indexable pages, but it does nothing whatsoever to prevent wasted crawl budget. Wasted crawl budget is the main problem that needs to be solved here, which makes this scheme useless for faceted navigation. There are some use cases (unrelated to faceted nav) where “noindex, follow” is a good choice, but this isn’t one of them.

Can the selection limit be implemented with robots.txt?

As shown in test #9, using robots meta tags to implement a selection limit is not ideal, because robots meta tags are part of the HTML of the page. The crawler has to load each page before it can find out if the page is blocked. This wastes crawl budget.

So what about using robots.txt instead? Robots.txt seems like a better choice for this, because it blocks pages from being crawled, unlike robots meta, which blocks pages from being indexed and/or followed. But can robots.txt be used to selectively block pages based on how many options they have selected? The answer is: it depends.

This depends on the URL structure. In some cases it’s simple, in others it’s difficult or impossible.

For example, if the URL structure uses some completely impenetrable format like base-64-encoded JSON:

https://example.com/products?p=WzczLCA5NCwgMTkxLCAxOThd

Then you are out of luck. You cannot use robots.txt to filter this, because there’s no way for robots.txt to tell how many selected options there are. You’ll have to use robots meta or X-Robots (both of which can be generated by the server-side code, which has access to the decoded version of the query data).

On the other hand, if all filter options are specified as a single underscore-separated list of ID numbers in the query string, like this:

https://example.com/products?filters=73_94_191_198

Then you can easily block all pages that have more than (for example) two options selected, by doing this:

User-agent: *
Disallow: /products?*filters=*_*_
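
Before deploying a wildcard rule like this, it can help to sanity-check which URLs it actually catches. The sketch below translates the pattern into a regex using the usual robots.txt convention that “*” matches any run of characters (the test URLs are hypothetical):

```python
import re

RULE = "/products?*filters=*_*_"   # the Disallow pattern above

def rule_to_regex(pattern: str) -> re.Pattern:
    """Convert a robots.txt path pattern ('*' wildcard, '$' end anchor) to a regex."""
    return re.compile(re.escape(pattern).replace(r"\*", ".*").replace(r"\$", "$"))

blocked = rule_to_regex(RULE)

print(bool(blocked.match("/products?filters=73_94")))       # False: two options, still crawlable
print(bool(blocked.match("/products?filters=73_94_191")))   # True: three options, blocked
```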

So let’s try this.

Test #11 — Selection Limit, via Robots.txt

This is a repeat of test #9, except now the pages are blocked using robots.txt instead of robots meta.

Selection limit 0: 1,000 crawlable pages, 1,000 indexable pages
Selection limit 1: 11,400 crawlable pages, 11,400 indexable pages
Selection limit 2: 79,640 crawlable pages, 79,640 indexable pages
Selection limit 3: 470,760 crawlable pages, 470,760 indexable pages
Selection limit 4: 2,282,155 crawlable pages, 2,282,155 indexable pages
Selection limit 5: 9,269,631 crawlable pages, 9,269,631 indexable pages
No limit: 18,690,151,025 crawlable pages, 18,690,151,025 indexable pages

Takeaways

Blocking pages based on a selection limit is a very effective way to reduce page counts.
Implementing this with robots.txt is best.
But you can only use robots.txt if the URL structure allows it.
Implementing this with robots meta is less effective than robots.txt, but still useful.

Summary

Faceted navigation is one of the thorniest SEO challenges large sites face. Don’t wait to address issues after you’ve built your site. Plan ahead. Use robots.txt, look at selection options, and “think” like a search engine.

A little planning can improve use of crawl budget, boost SEO, and improve the user experience.

The post Faceted Navigation and SEO: A Deeper Look appeared first on Portent.