SEO Articles

Use Google's Search Quality Evaluator Guidelines To Assess Site Quality

What are Google’s Search Quality Evaluator Guidelines?  

To check that its search engine is returning the right answers, Google regularly employs human contractors to evaluate search results and confirm that the latest algorithm is returning the kind of results it wants. Google’s Search Quality Evaluator Guidelines are the instructions that Google gives these evaluators.

Contents

What are Google’s Search Quality Evaluator Guidelines?
How can Google’s Search Quality Evaluator Guidelines be used to help websites?
Methodology
Page type and sitewide questions
How you would use insight gained from the Search Quality Evaluator checklist
The 10 questionnaire topics and what they mean
How to use the Search Quality Evaluator checklist
Who should use the Search Quality Evaluator checklist

How can Google’s Search Quality Evaluator Guidelines be used to help websites?  

The guidelines that Google gives Quality Evaluators give us insight into what Google is aiming for. Even if Quality Evaluators never visit a site, meeting these criteria will help organisations build what Google believes is a good quality website. Jennifer Slegg’s piece does a good job of detailing the 2019 update to the Search Quality Evaluator Guidelines.

We’ve created a checklist that distills Google’s Quality Evaluator Guidelines and aims to give us more information to succeed in search.

This post and accompanying checklist will allow you to:

  1. Gain insight into what a website and organisation is doing well, and could be doing better, in terms of facilitating a good user experience and creating quality content.
  2. Provide a platform for a website to perform well in search, by aligning with Google’s perceived quality attributes.
  3. Get internal buy-in. The checklist is consistent, independent, quantifiable and based on Google’s guidelines. It could be filled out by internal teams or used as part of user testing.
  4. Make sure content is optimised for launch. The checklist can be used by internal stakeholders before publishing new content, website features or most other things that could impact the UX of a website and content quality.

Methodology

Creating the checklist

We took Google’s 164-page Search Quality Evaluator documentation, Distilled’s previous Panda survey and Google’s guide to building high quality websites, and condensed it all down to the 10 most important topics.

We then took the questions that Google is trying to answer in each topic and turned them into Pass/Fail items in our survey.

Perhaps not surprisingly, Google’s view of what constitutes a high quality website hasn’t changed much over the years, although the qualifying criteria have certainly become more robust.

Weighting of questions

We decided how important each question should be based on the wording in Google’s documentation.

For example, Google rates pages “lowest quality” if there is intent to deceive users or search engines. 

Under section 7.6 Pages that Potentially Deceive Users, Google writes:

“We will consider a page to be “deceptive” if it may deceive users or trick search engines. All deceptive pages should be rated Lowest”

Google then goes on to describe misleading titles as a form of deception.

Section 7.6.1 Deceptive Page Purpose:

“A webpage with a misleading title or a title that has nothing to do with the content on the page. Users who come to the page expecting content related to the title will feel tricked or deceived”.

As a result of this wording in Google’s documentation, where they deem deception to be indicative of “lowest quality” pages, we have assigned strong relative importance to questions that pertain to deception. 

If your site fails the question below, it will fail that entire portion of the checklist, because of the importance that Google has placed on accurate titles.

Does the title in Google Search accurately describe the topic of the page?
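
As a purely hypothetical illustration (the page topic and both titles below are invented), a title tag that reflects the page’s actual content passes this check, while a bait title fails it:

<!-- Passes: the title accurately describes the topic of the page -->
<title>How to Repot a Monstera: A Step-by-Step Guide</title>

<!-- Fails: the title promises something the page does not deliver -->
<title>You Won’t Believe This One Gardening Trick</title>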

Page type and sitewide questions 

Page type questions

When you are filling out the checklist, there are some questions which should be asked about multiple different types of pages. The page types include:

  • Homepage
  • Product pages
  • Blog pages 

Applying topics to different page types will give a broader view of the overall UX and content quality and help pinpoint strengths and weaknesses.  

Of course, some page types may not be relevant to an organisation. For instance, publications don’t have product pages, so asking questions about a product page is not applicable.

Sitewide questions

Over the last year or so, Google has really focussed on the profile of organisations and content authors in its search algorithms, with the August 2018 “medic” update being of particular note.

The E-A-T (Expertise, Authoritativeness and Trustworthiness) of organisations and content creators is seen by Google as indicative of quality and value. Importantly, Google places more weight on external evidence of E-A-T than on an organisation’s or content creator’s self-endorsement.

We’ve created specific questions that focus on identifying external reputation, with the idea that external reputation represents the potential E-A-T of a website.

We also created questions that focus on the more basic hygiene aspects of a website, specifically the adequacy of information about a business and its content creators, and how the design of a website affects the overall UX.

How you would use insight gained from the checklist

Many actions can be taken from the checklist. Some insights we’ve provided for clients when running it include:

Customer concerns/complaints on third party websites

A brand’s external reputation information is important to both Google and users.

By identifying negative sentiment towards a brand and/or product on an influential industry website, we convinced our client not only to review the concerns on that website, but also to commission an off-site review investigation covering many other related websites.

Content creators lacking perceived expertise

Google says that the E-A-T of content creators is indicative of content quality. Our checklist highlighted that content creators, despite being experienced within the vertical they work in, didn’t have an online presence. 

The outcome of this insight was to plan guest writing opportunities on industry publications to enhance employees’ perceived expertise.

Lack of information on content creators 

Users want to learn more about and validate the legitimacy of content creators. Our checklist highlighted that individual content creators didn’t have bio pages. 

The outcome of this insight was to develop individual bio pages, with information on each creator’s experience and expertise.

Product page insights

Some common insights include:

  1. Not fully understanding the purpose and/or benefits of a product: often a common issue on B2B websites, this can be the result of jargon-heavy content, long-winded descriptions that don’t make the benefits clear, or a lack of FAQs.
  2. Page design: content is sometimes difficult to digest. This can be the result of bunched-up blocks of text, blurry or broken images, or key content placed near the bottom of the page.

Blog page insights

If executed properly, an organisation’s blog is key in driving top of the funnel traffic. This checklist highlights areas such as:

  1. Inaccurate headlines: users need an accurate understanding and expectation of the benefits/knowledge they could gain by reading a blog post. We certainly don’t want to mislead readers in any form.
  2. Fact validation: to enhance the authority of your content, you should link to reputable third party sites to validate facts when possible and appropriate.
  3. Unique content: create more in depth and potentially data driven pieces of content to encourage bookmarking/sharing.

The 10 questionnaire topics and what they mean

Does [PAGE TYPE] have a beneficial purpose?

The goal of page quality rating is to evaluate how well a page achieves its purpose and objectives. The way a page is measured on “beneficial purpose” is dependent on the page type.

Under section 2.2, What is the Purpose of a Webpage? Google writes:

“As long as the page is created to help users, we will not consider any particular page purpose or type to be higher quality than another.”

For example, the purpose of a product page is to inform the user about the product, such as the features and benefits of the product. 

If a product page fails to achieve this objective then it lacks a beneficial purpose, would be considered a low quality page and, in theory, would not perform well in search results.

Is the main content on the [PAGE TYPE] created with substantial effort, time and talent/skill?  

Google needs to understand whether content provides enough unique, informative material to be deemed high quality.

Under section 5.1, Very High Quality MC, Google writes:

“We will consider the MC of the page to be very high or highest quality when it is created with a high degree of time and effort, and in particular, expertise, talent, and skill—this may provide evidence for the E-A-T of the page”

Content that is vague and lacking in detail is unlikely to perform well, especially if other sites produce more comprehensive and useful pieces of content.

We’ve excluded questions that specifically tackle the E-A-T of the content creators from this section, with the aim of focusing on the usefulness of the existing content.

Not every business will have industry experts producing content, but that shouldn’t detract from assessing the quality of the content that is actually being produced.

Google states that smaller businesses are likely to have a smaller web presence and a lack of external evidence of E-A-T is not an indication of low page quality. 

Under section 2.6.5, What to Do When You Find No Reputation Information, Google writes:

“Frequently, you will find little or no information about the reputation of a website for a small organization. This is not indicative of positive or negative reputation. Many small, local businesses or community organizations have a small “web presence” and rely on word of mouth, not online reviews. For these smaller businesses and organizations, lack of reputation should not be considered an indication of low page quality.”

However, for “Your Money or Your Life” (YMYL) pages, such as medical, or financial information websites, the reputation of the website and/or the individual content creator is just as important as the potential usefulness of content.

Evaluating the E-A-T of a website and/or content creator will be addressed in the topics “Would you trust information from this website’s authors?” and “Would you trust information from this website?”.

Does the [PAGE TYPE] on this site have obvious errors?

This topic is about identifying errors on a website, with a focus on recognising unmaintained and/or defaced pages.

Under section 7.2.9, Unmaintained Websites, and Hacked, Defaced, or Spammed Pages, Google writes:

“Unmaintained websites should be rated Lowest if they fail to achieve their purpose due to the lack of maintenance.

Unmaintained websites may also become hacked, defaced, or spammed with a large amount of distracting and unhelpful content. These pages should also be rated Lowest because they fail to accomplish their original purpose.”

Although not explicitly mentioned in Google’s Search Quality Evaluator Guidelines, other signs of unmaintained websites are broken links and images as well as obvious missing blocks of main content.

If a website has a large number of glaring errors, it could compromise the user experience and suggest that the website is insecure. Failing this topic indicates a website will likely struggle to perform well in search results.

Does the [PAGE TYPE] contain excessive adverts, or pop-ups?

Over the last couple of years, Google has been increasingly targeting websites that contain excessive adverts and/or block users’ access to content with pop-ups.

Under section 7.2.7, Obstructed or Inaccessible Main Content, Google writes:

“MC cannot be used if it is obstructed or inaccessible due to Ads, SC, or interstitial pages. If you are not able to access the MC, please use the Lowest rating”.

Google also suggests that ads don’t need to fully obstruct a page to warrant a “low” rating; being distracting is enough. Under section 6.4, Distracting Ads/SC, Google writes:

“…some Ads, SC, or interstitial pages (i.e., pages displayed before or after the content you are expecting) make it difficult to use the MC. Pages with Ads, SC, or other features that distract from or interrupt the use of the MC should be given a Low rating”.

Whilst Google does state that ads can contribute to a good user experience, they should be used in moderation and should not negatively impact the experience of consuming the main content.

Sites that contain excessive adverts and obstructive pop-ups run the risk of penalisation, which will impact online visibility. 

This topic is particularly pertinent for mobile devices where screen space is limited.

The questions in our checklist for this topic also reflect Google’s firm stance on excessive advertisements. Any “Failed” question in our checklist will fail this entire topic.

Is the [PAGE TYPE] deceptive?

Earlier, we mentioned how misleading page titles can be a means of deception, but Google cites other forms of deception. 

For example, under section 7.6.2, Deceptive Page Design, Google states the following are forms of deception:

  • Pages that disguise Ads as MC. Actual MC may be minimal or created to encourage users to click on the Ads.
  • Pages that disguise Ads as website navigation links.
  • Pages where the MC is not usable or visible.
  • Any page designed to trick users into clicking on links, which may be Ads or other links intended to serve the needs of the website rather than to the benefit of the user.

Pages that are misleading or attempt to cause harm in some way are deemed the “lowest” quality of page and will negatively impact the perception of the organisation and hurt online visibility.

Is there adequate information about the business and/or content creators?

Google wants websites to provide adequate information about a business and/or authors.

Under section 2.5.3, Finding About Us, Contact Information, and Customer Service Information, Google writes:

“There are many reasons that users might have for contacting a website, from reporting problems such as broken pages, to asking for content removal. Many websites offer multiple ways for users to contact the website: email addresses, phone numbers, physical addresses, web contact forms”

Google adds that contact information is particularly important for websites that handle money.

“Contact information and customer service information are extremely important for websites that handle money, such as stores, banks, credit card companies, etc”

When it comes to individual content creators, Google doesn’t specifically mention that authors should have biography pages on a website, and it states that the E-A-T of authors should be judged primarily on external evidence; however, there is sound logic in providing biography pages on a website that demonstrate the E-A-T of authors.

Under section 6.1, Lacking Expertise, Authoritativeness, or Trustworthiness (E-A-T), Google writes:   

“Low quality pages often lack an appropriate level of E-A-T for the purpose of the page. Here are some examples:

  • The creator of the MC does not have adequate expertise in the topic of the MC, e.g. a tax form instruction video made by someone with no clear expertise in tax preparation.
  • The website is not an authoritative source for the topic of the page, e.g. tax information on a cooking website.”

Ultimately, users want to be able to contact and learn more about an organisation/content creator. A website that provides adequate and reliable information that demonstrates E-A-T will foster trust and credibility.

Would you trust information from this website?

Google is emphasising the importance of websites and organisations having a credible external reputation.

Google has stated they will trust external sources of reputation information over internal information.

Under section 2.6, Reputation of the Website or Creator of the Main Content, Google writes:

“…for Page Quality rating, you must also look for outside, independent reputation information about the website. When the website says one thing about itself, but reputable external sources disagree with what the website says, trust the external sources.”

Google wants to evaluate the external reputation information of an organisation using credible third party sources like external news articles, blog posts, or even Wikipedia pages. 

Under section 2.6.2, Sources of Reputation Information, Google writes:

“Look for information written by a person, not statistics or other machine-compiled information. News articles, Wikipedia articles, blog posts, magazine articles, forum discussions, and ratings from independent organizations can all be sources of reputation information. Look for independent, credible sources of information.” 

By identifying various sources of external reputation information, Google can better evaluate E-A-T. 

Content that is written by websites/authors who demonstrate a high level of E-A-T is deemed to be of the “highest quality” and in theory should perform better in search results. This is particularly important for YMYL websites/topics, such as health and safety, finance, and news and current events.

Interestingly, for the September 2019 Search Quality Evaluator Guidelines update, Google adds “Shopping” as a type of YMYL. Under section 2.3, Your Money or Your Life (YMYL) Pages, Google defines “Shopping” as:

“information about or services related to research or purchase of goods/services, particularly webpages that allow people to make purchases online”   

Under section 5.2, Very Positive Reputation, Google writes:

“For shopping pages, experts could include people who have used the store’s website to make purchases; whereas for medical advice pages, experts should be people or organizations with appropriate medical expertise or accreditation”.

This suggests customer reviews for an organisation’s products/services can be used as a valid source of external reputation information for ecommerce and B2B websites. 

As part of our checklist, we’ve included questions to evaluate online reviews for a brand, which could also encompass specific review information on a brand’s products/services. 

Would you trust information from this website’s authors?

Like evaluating the E-A-T of a website/organisation, Google also wants to evaluate external reputation information for individual content creators.

Under section 9.2, Reputation and E-A-T: Website or the Creators of the Main Content?, Google writes: 

“You must consider the reputation and E-A-T of both the website and the creators of the MC in order to assign a Page Quality rating.”

“The reputation and E-A-T of the creators of the MC is extremely important when a website has different authors or content creators on different pages.”

Sometimes a website/organisation will be creating content, and other times it will be individual content creators. Either way, evaluating the E-A-T of both types of content creator can be equally important.

Does the design of the website make it difficult to use?

This topic focuses on the functionality of the website and in turn, user experience. 

Under section 7.2.3, Lowest Quality Main Content, Google writes: 

“The Lowest rating should also apply to pages where users cannot benefit from the MC, for example:

  • Informational pages with demonstrably inaccurate MC.
  • The MC is so difficult to read, watch, or use, that it takes great effort to understand and use the page.
  • Broken functionality of the page due to lack of skill in construction, poor design, or lack of maintenance.”

The design of a website plays a pivotal role in functionality, and since Google grades broken functionality as “Lowest” quality, the design of a website deserves its own topic.

A website with good design and functionality will contribute to high quality pages and, in theory, convert better and potentially perform better in search results.

Would you trust this website with personal details?

This topic is the culmination of all the previous questions. If the overarching results are positive then users could be more likely to hand over personal information, such as credit card details and email addresses, which is an objective for most businesses. 

How to use the checklist

Access the Search Quality Evaluator checklist for free.

[Screenshot: checklist questions with their grading options]

There will be an option to select “TRUE”, “FALSE”, or “OKAY” for each question in the checklist. 

Necessary

The importance (necessity) of each question will determine whether a section will “Pass” or “Fail”. In the screenshot above, the question “Is it clear and easy to understand what the website offers?” is a necessity to pass, and because the answer is “FALSE”, the section has “FAILED”.

You can view the necessity of each question by unhiding columns C-E, but these additional columns can be confusing for people who don’t understand what the sheet is doing, so I would keep them hidden most of the time.

There are also two additional columns, “Ideal answer” and “Achieves ideal answer”.

Ideal answer

“Ideal answer” determines what the “ideal” statement should be for each question. 

In some cases you want the statement to be “TRUE”. In other cases, you want the statement to be “FALSE”.

For instance, we want the statement to be “TRUE” for the question “Is it clear and easy to understand what the website offers?”.

However, the “ideal” answer for the question “Is the content too short, insubstantial or unhelpfully vague?” would be “FALSE”. “FALSE” states that the content is in fact not short, insubstantial, or unhelpfully vague.

Achieves ideal answer

“Achieves ideal answer” is determined by whether the “Grade” column matches the “Ideal answer” column.

Using the above screenshot, we know the “Ideal answer” for the question “Is it clear and easy to understand what the website offers?” is “TRUE”.

However, because the answer given is “FALSE”, the ideal answer has not been achieved; therefore, “FALSE” is the statement given for “Achieves ideal answer”.
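
As a rough sketch of how this could be implemented in the sheet (the cell references below are hypothetical and depend on the sheet’s actual layout), the “Achieves ideal answer” column is just a comparison of two columns:

=IF(D2=E2, "TRUE", "FALSE")

where D2 holds the “Grade” and E2 the “Ideal answer” for that question’s row.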

This allows us to see all “Passed” and “Failed” sections of the checklist located in the “Passed and Failed section” tab.

We’ve also added a description/instructions and references column in the “Checklist questions” tab to help explain questions, provide instructions and add context. Feel free to add to, or iterate on, the content found in these columns.

 

You’ll also find references of “[ADD BRAND]” in the “Checklist questions” tab. You’ll need to update these references to reflect the organisation the checklist is being run against.   

The tab “Website content” concatenates example URLs and example titles to help make this checklist a little more efficient to use.

The easiest way to see how all of these fields work is to fill them out with information from your brand first, and then notice how the questions change as you go through the checklist.

Who should use the checklist?

As we’ve mentioned, our Search Quality Evaluator Checklist can be used by a range of people.

  1. SEO professionals: filling out this checklist is a great way to evaluate the UX and website quality of a client. It will allow you to pinpoint strengths and weaknesses and help you deliver priority recommendations.
  2. Internal stakeholders: the checklist can be used by internal stakeholders before publishing new content, website features or most other things that could impact the UX and content quality. The checklist could also be filled out by internal stakeholders as a way of getting them on board with SEO priorities.
  3. The public: if you want to get a true understanding of how users perceive your website and brand, then why not ask your potential customers? 

Members of the public NEED TO fill out all sitewide questions, but you should split page type questions to avoid a horribly long survey that they’ll give lazy answers to. For instance, get a respondent to only answer questions for the homepage, blog pages, or product pages. Distilled’s original Panda Survey post contains some advice and guidelines on running a survey, including collecting data from respondents. 

Our other checklists

We like using checklists here at Distilled and if you do too, get your hands on our technical SEO audit checklist, Google Analytics audit checklist and our PPC audit checklist.

Read More

How The Internet Happened: From Netscape to the iPhone

Brian McCullough, who runs the Internet History Podcast, also wrote a book named How The Internet Happened: From Netscape to the iPhone, which does a fantastic job of capturing the ethos of the early web and telling the backstory of so many of the people & projects behind its evolution.

I think the quote which best captures the magic of the early web is:

Jim Clark came from the world of machines and hardware, where development schedules were measured in years—even decades—and where “doing a startup” meant factories, manufacturing, inventory, shipping schedules and the like. But the Mosaic team had stumbled upon something simpler. They had discovered that you could dream up a product, code it, release it to the ether and change the world overnight. Thanks to the Internet, users could download your product, give you feedback on it, and you could release an update, all in the same day. In the web world, development schedules could be measured in weeks.

The part I bolded in the above quote from the book really captures the magic of the Internet & what pulled so many people toward the early web.

The current web – dominated by never-ending feeds & a variety of closed silos – is a big shift from the early days of web comics & other underground cool stuff people created & shared because they thought it was neat.

Many established players missed the actual direction of the web by trying to create something more akin to the web of today before the infrastructure could support it. Many of the “big things” driving web adoption relied heavily on luck, combined with a lot of hard work & a willingness to be responsive to feedback & data.

  • Even when Marc Andreessen moved to the valley he thought he was late and he had “missed the whole thing,” but he saw the relentless growth of the web & decided making another web browser was the play that made sense at the time.
  • Tim Berners-Lee was dismayed when Andreessen’s web browser enabled embedded image support in web documents.
  • Early Amazon review features were originally for editorial content from Amazon itself. Bezos originally wanted to launch a broad-based Amazon like it is today, but realized it would be too capital intensive & focused on books at the start so he could sell a known commodity with a long tail. Amazon was initially built off leveraging 2 book distributors (Ingram and Baker & Taylor) & R. R. Bowker’s Books In Print catalog. They also did clever hacks to meet minimum order requirements, like padding orders with out-of-stock books, so they effectively only ordered what customers had purchased.
  • eBay began as an /aw/ subfolder on the eBay domain name, which was hosted on a residential internet connection. Pierre Omidyar coded the auction service over Labor Day weekend in 1995. The domain had other sections focused on topics like ebola. It was switched from AuctionWeb to a standalone site only after the ISP started charging for a business line. It had no formal PayPal integration or anything like that; rather, when listings started to charge a commission, merchants would mail in physical checks to pay for the platform’s share of their sales. Beanie Babies also helped skyrocket platform usage.
  • The reason AOL carpet bombed the United States with CDs – at their peak, half of all CDs produced were AOL CDs – was that their initial response rate was around 10%, a crazy number for untargeted direct mail.
  • Priceline was lucky to have survived the bubble as their idea was to spread broadly across other categories beyond travel & they were losing about $30 per airline ticket sold.
  • The broader web bubble left behind valuable infrastructure like unused fiber to fuel continued growth long after the bubble popped. The dot com bubble was possible in part because there was a secular bull market in bonds stemming back to the early 1980s & falling debt service payments increased financial leverage and company valuations.
  • TED members hissed at Bill Gross when he unveiled GoTo.com, which ranked “search” results based on advertiser bids.
  • Excite turned down buying the Google founders’ PageRank technology for $1.6 million, in part because Larry Page insisted to Excite CEO George Bell, “If we come to work for Excite, you need to rip out all the Excite technology and replace it with [our] search.” And that, in Bell’s recollection, is ultimately where the deal fell apart.
  • Steve Jobs initially disliked the multi-touch technology that mobile would rely on, one of the early iPhone prototypes had the iPod clickwheel, and Apple was against offering an app store in any form. Steve Jobs so loathed his interactions with the record labels that he did not want to build a phone & first licensed iTunes to Motorola, where they made the horrible ROKR phone. He only ended up building a phone after Cingular / AT&T begged him to.
  • Wikipedia was originally launched as a backup feeder site for Nupedia.
  • Even after Facebook had strong traction, Mark Zuckerberg kept working on other projects, like a file sharing service. Facebook’s news feed was publicly hated, judging by the complaints, but it almost instantly led to a doubling of usage of the site, so they never dumped it. After spreading from college to college, Facebook struggled to expand to other audiences, & opening registration up to all was a hail mary move to see if it would rekindle growth instead of selling to Yahoo! for a billion dollars.

The book offers a lot of color to many important web related companies.

And many companies which were only briefly mentioned also ran into the same sort of lucky breaks the above companies did. PayPal was heavily reliant on eBay for initial distribution, but even that was something they initially tried to block, until it became so obvious they stopped fighting it:

“At some point I sort of quit trying to stop the EBay users and mostly focused on figuring out how to not lose money,” Levchin recalls. … In the late 2000s, almost a decade after it first went public, PayPal was drifting toward obsolescence and consistently alienating the small businesses that paid it to handle their online checkout. Much of the company’s code was being written offshore to cut costs, and the best programmers and designers had fled the company. … PayPal’s conversion rate is lights-out: Eighty-nine percent of the time a customer gets to its checkout page, he makes the purchase. For other online credit and debit card transactions, that number sits at about 50 percent.

Here is a podcast interview of Brian McCullough by Chris Dixon.

How The Internet Happened: From Netscape to the iPhone is a great book well worth a read for anyone interested in the web.


Read More

Ubersuggest 5.0: Generate 1 Million Keyword Suggestions in 7 Seconds (Seriously)

Ubersuggest started out as a tool that provided suggestions through Google Suggest.

And although that’s great, everyone these days is using Google Suggest to come up with keyword ideas.

There have to be more keyword ideas out there that get more search volume and aren’t competitive, right?

Well, there are, and now with the new Ubersuggest, you’ll get access to them.

Here are the 2 big changes I am making with this release…

Introducing a new keyword database

Because we have results in our database for more than a billion different keywords, I thought it would be fun to tap into it and provide you with even more keyword suggestions.

Now when you perform a search on Ubersuggest for any keyword, you’ll see a “related” tab with even more suggestions.

And each tab shows you how many keywords are in that group.

As you can see for the term “digital marketing”, just in the United States there are over 30,000 keyword recommendations.

And for terms like “dog”, there are over 1 million keyword recommendations.


Don’t worry though, results from Google Suggest are still there under the “suggestions” tab but you can now see even more search terms if you click on “related”.

What’s cool is that you can even export all of the keywords via CSV.

And if you want to leverage the filters to fine-tune the results, you can easily do so.


For example, I used the filters setting to find keywords with a minimum search volume of 400 searches a month and a maximum SEO difficulty of 50. Ubersuggest then fine-tuned the results to 489 keywords related to “digital marketing” that I should consider targeting, instead of me having to manually go through 30,000-plus recommendations.

And the CSV report adjusts as the filter settings change. So you can export the keywords that you want and ignore the keywords you aren’t planning to target.

Ubersuggest now has local search

Another big change that we made to Ubersuggest is that we introduced local keyword research.

You can now pull up keyword stats and ideas on any city, county, region, or country.

For example, if I want to know the search volume and keyword recommendations for West Hollywood, California, I can now do so.

From there, Ubersuggest shows keyword search volume, keyword recommendations, and even content ideas for a blog post.

On top of that, when you head over to the “keyword ideas” report, you’ll also notice that the SERP results, which show all of the sites that rank for that term, are now adjusted to show the ranking sites within that region.

So, what’s next for Ubersuggest?

Well, speaking of keyword research, within a month you’ll start seeing keyword recommendations based on questions, comparisons, and prepositions, like Answer The Public provides.

And, of course, as I promised earlier, next Tuesday I am releasing the rank tracking and dashboard features on Ubersuggest.

If you haven’t already, head on over to Ubersuggest to give the new keyword database a try.

And if you are trying to use the local SEO features, you may find that they only work once you are in the app.

I haven’t been able to make the changes to the main Ubersuggest landing page yet, but once you type in a keyword and test it out, you can then switch your location to any city.

So, what do you think of the changes?

PS: Make sure you test Ubersuggest out.


Read More

"Impressions" is an Undervalued SEO KPI

This post is about a hill I’m willing to die on, but might not need to – I actually have no idea whether the title of this post is a controversial statement, or not. I’m keen to find out! However, from what I see in talks, posts, pitches, and business practices, our industry has definitely not taken to heart the value of an impression.

I’m going to lay out five reasons why I think impressions are not just a valid indicator of SEO success, but actually an unusually good one.

Before we go any further, it’s probably worth clarifying what I mean by an impression – I’m talking about the number of times someone has seen a site in search results. The most common place this is measured is Google Search Console, who write about their metrics in more detail here.

Reason 1: Clickless results are still worth having

If organic search was ever just a performance channel, to be measured by directly attributed conversions, it isn’t any more. Organic search is also, increasingly, a brand channel – something we all implicitly recognise when we invest in top of funnel content for our sites and clients. 

Furthermore, even if you were working on top of funnel content entirely for the remarketing list, you probably also work for a business that invests in branded advertising in other channels, whether it be billboards, sponsorships, display ads, radio, TV… the list goes on. All of these are channels where the primary objective is to get the brand in front of a potential customer, often at great cost.

What this means for organic search is that as much as it’s annoying that Google is interpreting more and more search terms as informational, or delivering more and more clickless searches, we probably need to stop complaining and start playing the game. This is value that we can provide to our clients and businesses, that they’re probably already paying a great deal of money for in other channels.

Indeed, if you just aim for search results that deliver a click, or a converting click, you’re leaving open space to your competitors.

Most SEO metrics, however, don’t capture this value. The one that comes closest is rankings, which brings me to my next reason…

Reason 2: Why not rankings?

There definitely is value to rank tracking – it’s easily communicated and understood, it’s precise and controlled and easy to break down, and it provides a whole load of analytic depth you don’t get in other places. However, some of those strengths are also weaknesses – rank tracking, even if it does account for the numerous location-based, personalised, device-based, result-type, or synonym differences, only tracks the keywords you ask it to. It’s limited, therefore, by your budget and your imagination.

Impressions, as reported by Google Search Console, include any keyword you might happen to rank for – even if you never thought of it. Even “search visibility” trackers like SEMRush, Ahrefs, Sistrix, and Searchmetrics have a limited corpus of keywords, often biased heavily towards high volume terms.

Impressions from keywords you care less about might be less valuable than any you garner for your target keywords, but they still have some value.

Reason 3: PRs have been fighting this battle for years, with worse data

Public relations (PR) agencies are paid a great deal of money to get their clients mentioned, preferably in a positive light. The general consensus, certainly among clients I’ve worked with, seems to be that this is valuable and useful, and it’s obviously an industry that’s been going for scores (hundreds?) of years, and doesn’t seem to be going anywhere.

To demonstrate and measure the value of their work, PRs have historically reported on metrics like the circulation of publications they have achieved placements in, the number of column inches placed, or “advertising value equivalency” (AVE). These metrics are, compared to what we have online, terrible. There’s no real way of knowing how many people saw a print article, and yet businesses found the notion so important that they were willing to tolerate that, and invest billions.

As SEOs, and digital marketers in general, we sit on far, far better data. Impressions are a pertinent example – we know exactly how many people saw a page mentioning our content, and can describe the light in which it was seen. But, we too often don’t bother communicating that.

Reason 4: Impressions are a ranking factor (well, not quite, but…)

This is not literally true – I do not think that Google looks at the number of impressions that a site gets and then factors it into that site’s rankings. That would be a bizarre and circular system. It did get your attention though, didn’t it? And if the principal benefit of impressions as separate from clicks is brand awareness, there’s lots of reasons to suspect that that would indirectly impact rankings.

Brands which are well known and trusted will find it easier to garner links and clicks, which will in turn make it easier to rank.

(I’ve written a bit more about this ranking factor nuance in this recent post.)

Reason 5: Straight from the horse’s mouth, absolutely free of charge

Lots of the metrics we have access to in SEO are critically flawed. Because Google is so cagey, we end up paying rank tracking providers a small fortune to operate an enormous batch of IP addresses with which to scrape search results. Even then, getting a real picture of what results are like in different parts of a country, and weighting your metrics by the different volumes in those parts, is an absolute nightmare. Or you can use an on-site analytics platform, which seems to get shakier with each passing day.

Impression data, on the other hand, is free from Google Search Console. There’s an API, which has a free Data Studio integration. The data is minimally sampled, robust to tracking protection, and not biased towards any location or device type. Sure, there are occasional updates to Google Search Console, which can be a bit of a spanner in the works of your year-on-year comparisons, but that’s true of most other search data platforms, too.

Caveat

I’m hoping to read on Twitter and in the comments below the various ways in which I’m stupendously wrong. However, one flaw I’d particularly like to draw your attention to is that sometimes fewer impressions are better. This is particularly true if the drop comes as a result of better focusing your site’s targeting, such that it actually gets better rankings and clicks for the keywords it’s relevant for.

To give you an extreme example – I had a client several years ago that ranked 8th for the keyword “Facebook”. This resulted in a huge amount of impressions, and when they dropped out of that SERP, the impact on their business was at worst neutral. That said, many other tools (looking at you SEMRush etc.) were similarly thrown, and it was a bit of an edge case.

What this means is that impressions need to be used alongside other KPIs, hopefully including clicks. That’s true of any good metric, though.

Read More

Ubersuggest 6.0: Track and Improve Your Rankings Without Learning SEO

I’ve been an SEO for roughly 17 years now.

And one thing has remained constant: no matter how much you know about SEO, there is just too much to do.

So much so, that most SEOs don’t even optimize their own websites anymore. And if they do, you’ll find that their site doesn’t rank for many competitive terms.

Why?

Because it is a lot of work!

That’s why I’m excited to announce Ubersuggest 6.0.

It now tracks and improves your rankings, even if you don’t have an SEO bone in your body.

So, what’s new?

Dashboard and login

First off, you can now keep track of all of your websites. You’ll have to register to use this feature, but don’t worry, it’s free.

Once you register, you’ll be dropped into a dashboard.

Now, for me, I’m already tracking a few websites, which is why my dashboard is already populated.

The dashboard will keep track of your link growth (or decline), your monthly search traffic, your overall search rankings, and any SEO errors that you need to fix.

Best of all, it crawls your website for you each and every week so you don’t have to worry about keeping up with Google’s latest algorithm changes.

And with the search rankings feature, you can automatically track how your rankings are changing on a daily basis.

Rank tracking

Within each site you add to the dashboard, you’ll be able to automatically track your rankings for any specific keyword.

Not only are you able to track your rankings on desktop devices, but Ubersuggest also shows how you rank on mobile devices.

If you want to track specific keywords, all you have to do is click Add Keywords and it will pull a list of suggestions from your Google Search Console. Of course, you can also track any other keyword, even if it doesn’t show up in your Search Console.

What’s also cool is that you have the ability to track your rankings in any country, city, or region. That means if you do local SEO or international SEO, you can see your rankings anywhere.

There’s also a date picker, so once you’ve been using Ubersuggest for a while, you’ll be able to see a nice chart of how your rankings are improving over time.

Conclusion

What’s great about these changes is you can now directly see how Ubersuggest is helping you grow your search traffic.

It will automatically keep track of all of your changes and notify you when it finds any new SEO issues to fix.

And over the next few months, you’ll see a few more features added that will make your life even easier.

One example is that I’ll introduce email alerts, so you don’t have to log into Ubersuggest anymore; it will email you when there is an issue that needs your attention.

I’ll also be adding in competitive analysis features. You’ll be able to track your competitors and be notified when they make an SEO or marketing change that you should look at.

And my long-term goal is to make it so you don’t even have to code or make any changes manually. Ubersuggest will eventually be able to go into your website and make these fixes for you. However, this feature won’t happen until next year sometime.

So, what do you think of the new Ubersuggest? Give it a try… make sure you create your free account.

PS: If you missed it, I released some cool features like local keyword research and a billion-plus keyword database last week. Click here to get the update on those new Ubersuggest features.


Read More

What rel=”noreferrer noopener” Means and How it Affects SEO


“Noreferrer” and “noopener” are HTML attributes that can be added to outgoing links. What do these attributes do, and how can they impact your SEO efforts?

In this post, I will explain the difference between the noreferrer and noopener attributes, how they differ from the nofollow tag, and the impact on SEO when each one is used.

Let’s start with some definitions.

What is rel=”noreferrer”?

The rel=”noreferrer” tag is a special HTML attribute that can be added to a link tag (<a>). It prevents the browser from passing referrer information to the target website by stripping the Referer header from the HTTP request.

This means that in Google Analytics, traffic coming from links that have the rel=”noreferrer” attribute will show as Direct Traffic instead of Referral.

This is how the noreferrer attribute looks in HTML View:

<a href="https://www.example.com" rel="noreferrer">Link to Example.com</a>

Here is an example to understand this better:

Let’s say that you link from website A to Website B without the “noreferrer” tag.

When the owner of Website B views the ‘ACQUISITION’ report in Google Analytics, they can see traffic coming from Website A under the ‘REFERRALS’ section.

Traffic coming from links without the rel=”noreferrer” attribute shows as Referral traffic

When you link from Website A to Website B using the “noreferrer” tag, any traffic going from Website A to Website B will show as DIRECT traffic in Google Analytics (and not referral).

Traffic coming from links with the rel=”noreferrer” attribute shows as Direct traffic

When to use rel=”noreferrer”?

Use the rel=”noreferrer” attribute on outgoing links when you don’t want other sites to know that you are linking to them. I can’t think of many valid reasons why you would want to do this, but the option is there.

Definitely do not use the rel=”noreferrer” attribute on internal links; it can mess up your Google Analytics reports.

rel=”noreferrer” and SEO

Adding the noreferrer tag to your links does not directly impact SEO. You can safely use it without worrying about anything.

But it does have an indirect effect on your link building and promotion efforts, for the following reason:

One of the ways to get the attention of other webmasters is to link to their sites. Most webmasters check their Google Analytics regularly, and especially the ‘Referral traffic’ report.

When they see traffic from a website, they will most probably check it out and share the page on social media, follow the author, or even decide to return the favor by linking back.

This is good for SEO and in fact, it is something that Google recommends as a valid way to get links from other websites (see below the relevant quote from a Google document).

Google Advice on how to attract new links

When you have the noreferrer tag attached to your links, none of the above will happen, because traffic from your website will not show as ‘Referral’ in Google Analytics, and so the other webmasters will not know that you have linked to them.

You might be thinking, ‘Why even talk about this? I will not add it to my links and that’s the end of the story’.

The reason that this issue has become popular is that WordPress adds the ‘noreferrer’ tag by default to all outgoing links that are set to open in a ‘new tab.’

Noreferrer and WordPress

So, if you are on WordPress you should know that when you add an external link to your content and set it to open in a ‘new tab’ (target=”_blank”), WordPress will automatically add rel=”noopener noreferrer” to the link.

They did this to improve the security of the WordPress rich editor (TinyMCE) and to prevent tabnabbing and other phishing attacks.

Here is an example:

<a href="https://www.externalsite.com/" target="_blank" rel="noopener noreferrer">my external link</a>

As explained above, this will prevent any information from being passed to the new tab, and the end result is that any traffic that goes from your website to the linked website (by clicking the link) will not show in Google Analytics.

How to remove rel=”noreferrer” from WordPress links

The easiest way to prevent WordPress from automatically adding the attribute to external links is NOT to open the links in a new tab. In other words, to have the links open in the same window.

This is the simplest way to deal with this problem, but the drawback is that users clicking the external link will leave your website, and this might increase your bounce rate, decrease time on site, etc.

Nevertheless, since the majority of traffic now comes from mobile devices, you shouldn’t worry too much about users exiting your website, because the behavior of the ‘new tab’ on mobile makes it difficult for users to come back to the previous window anyway.

There are plugins that prevent WordPress from adding the rel=”noreferrer” to external links, but they only work when using TinyMCE and NOT the new editor (Gutenberg).

My recommendation is not to mess with this, just avoid opening external links in a new tab and you are good to go.

Noreferrer has no impact on affiliate links. The reason is that the majority of affiliate programs do not rely on ‘referral traffic’ to award a conversion but on the affiliate ID which is included in the link. For example:

<a href="//www.semrush.com/sem/?ref=15096612" rel="noreferrer noopener" target="_blank">

So, you have nothing to worry about.

The Difference Between Nofollow and Noreferrer

When you add rel=”nofollow” to an external link, you basically instruct search engines not to pass any PageRank from one page to the other. In other words, you tell them to ignore that link for SEO purposes.

The difference between nofollow and noreferrer is that noreferrer stops the browser from passing referrer information to the target website, but the link is still followed. With nofollow, referrer information is passed as normal, but the link is not followed for ranking purposes.

So, they are not the same things. Use nofollow on links that you don’t trust and use noreferrer if you don’t want the other site to know that you have linked to them.
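
To make the distinction concrete, here is a minimal sketch (example.com is a placeholder): the first link withholds PageRank but still sends the Referer header, the second sends no Referer header but can still pass PageRank, and the two attributes can be combined.

<!-- Ignored for ranking purposes; referrer information is still sent -->
<a href="https://www.example.com" rel="nofollow">Link A</a>

<!-- Referrer information is stripped; the link is still followed -->
<a href="https://www.example.com" rel="noreferrer">Link B</a>

<!-- Both behaviours at once -->
<a href="https://www.example.com" rel="nofollow noreferrer">Link C</a>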

What is rel=”noopener”?

rel=”noopener” is an HTML attribute that can be added to external links. It prevents the newly opened page from gaining any kind of access to the original page (through the window.opener object).

Here is an example of a link with the rel=”noopener” tag:

<a href="https://www.example.com" rel="noopener">Link to Example.com</a>

This is added automatically by WordPress on all external links that open in a new tab for security reasons and it is recommended that you keep it.

If you are not on WordPress, it is recommended to add the rel=”noopener” to all your external links that open in a new tab.
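
For example, a hypothetical external link that opens in a new tab would look like this; without rel=”noopener”, the page in the new tab could access your page through the window.opener object:

<a href="https://www.example.com" target="_blank" rel="noopener">Link that safely opens in a new tab</a>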

Rel=”noopener” and SEO

Noopener has zero impact on your SEO so you can safely use it to enhance the security of your website.

Key Learnings

Dealing with HTML tags and attributes is confusing for many people but that shouldn’t be the case with noreferrer and noopener.

Neither of them can negatively impact your SEO, so use them without fear.

If you are on WordPress, these tags are added automatically on external links that open in a new tab.

The noopener is needed to enhance the security of your website and prevent other websites from gaining access to your page (through the browser session).

The noreferrer attribute is used to stop referrer information from being passed to the target website, which also hides the referral traffic in Google Analytics.

If you want other websites to see traffic from your website as ‘Referral traffic’, then simply do not open external links in a new tab. This will prevent WordPress from automatically adding the attributes to the links and everything is good.

Nofollow is not the same as noreferrer. When rel=”nofollow” is added to a link, it instructs search engines not to use that link for SEO purposes. Noreferrer, by contrast, still passes link juice from one website to the other.

If you are still confused about the role of rel=”noreferrer noopener”, let me know in the comments.

