Blog

Multilingual Plugins for WordPress

Posted by on May 16, 2019 in Greg's SEO Articles | Comments Off on Multilingual Plugins for WordPress

By sticking to just one language on your website, you’re limiting your sales from around the world. As people find new ways to communicate instantly, you’re more and more likely to encounter people whose first language isn’t English. You’ll be able to reach far more of them by translating your text into their language. Here are 4 plugins we recommend:

POLYLANG

With this plugin you can add as many languages as you need to every page or post that you’re working on. Although it doesn’t do the translation for you, it does make it very easy to scale your content for multilingual purposes – you don’t have to have multiple websites with different languages, because a user logging onto your site merely has to select a language of their choice.

LOCO TRANSLATE

This plugin does more than just translate posts – it builds on the wide range of language files WordPress already makes available and handles the translation from there. It costs $5.95 monthly, but you can translate 2,000 words for free.

WORDPRESS MULTILINGUAL PLUGIN (WPML)

This plugin lets you translate your content automatically or manually. More than 40 languages are available for auto-translation. It costs $29 for the “Standard” plan and $79 for “Advanced”, but it has a solid reputation.

GOOGLE LANGUAGE TRANSLATOR

Last but not least is the Google Language Translator. Not only is it free, but Google Translate has become substantially more accurate, and it can support over 100 languages!

What are SEO Keywords and How to Find Them

Posted by on May 15, 2019 in SEO Articles | Comments Off on What are SEO Keywords and How to Find Them

SEO Keywords

You probably already know that keywords are important for SEO, but what do we actually mean when we talk about SEO keywords? How can you find the right keywords for your website and, more importantly, how can you use those keywords in your content to maximize your SEO?

These are some of the questions I will answer in this post.

What are SEO Keywords?

Let’s start with a definition. SEO Keywords are words or phrases (search terms) that people use when searching for information through a search engine.

By adding those keywords in your content, you increase your chances of appearing in the search results for those terms.

SEO Keywords make it easier for search engines to understand what your content is all about and help users find the information they need.

Why are keywords important?

As you might have already guessed, keywords are very important for SEO.

Without using the right SEO keywords in your content, search engines have a hard time understanding the meaning of your content. And this diminishes your chances of getting organic traffic to your website.

The way search engines work is by matching the user search queries with pages available in their index.

During the crawling and indexing phase, search engine crawlers visit a webpage. They extract the information they need and add it to their index. They use this information later during the ‘matching’ process (also referred to as the ranking process).

Part of the information they extract is the keywords a web page is associated with.

If during this process your website is associated with the wrong keywords, then you have no chance of appearing high in the results for the keywords that matter for your website.

Let me give you an example to understand this better.

Let’s say you want to rank for [electric bicycles]. You create a page and showcase your products (product name, images of the products, etc.).

Unless you mention on the page that they are electric, Google will most probably associate your page with bicycles, and this is not what you want.

What you should do instead is find out which keywords people use as search terms when they look for electric bicycles and make sure that these keywords are part of your content.

Different types of SEO keywords

Keywords are classified into two categories: head keywords and long-tail keywords.

Head keywords (also known as seed keywords) usually consist of one or two words and have a high search volume.

Long-tail keywords consist of more words and have lower search volume than head keywords, but they make up 70% of all searches.

Long Tail Keywords – Search Curve

What is important to understand at this point is that head keywords may have more search volume but they are also highly competitive.

This means that thousands of websites are competing for one of the top positions in the SERPS and this makes it almost impossible for new websites (or small businesses) to rank for these terms.

The solution is to focus on long-tail keywords.

They have less search volume but with the right SEO plan, it is possible to rank in the top positions of Google and get targeted organic traffic to your website.

How to choose keywords for SEO

Let’s see how to choose the “right” SEO keywords to use in your content.

This process is known in SEO as keyword research.

Step 1: Decide which ‘search terms’ you want to be known for

Your first step is to spend some time thinking about the following:

  • Which search terms do you want to be known for online?
  • What words or search phrases might people use in a search engine to find you?
  • Which words best describe your products or services?
  • Which words best describe your niche?

The outcome of the above exercise should be a list of phrases that we’ll turn into a keyword list in the next step.

Step 2: Create a Keyword list

To create a keyword list, you need to take the list created above and associate it with the actual keywords people type in the search box, i.e. SEO keywords.

To do that, we need the help of keyword research tools. There are a lot of tools you can use, but my recommended tools are the Google Keyword Planner (free) and SEMRUSH (paid).

Google Keyword Planner

The Google Keyword Planner is part of Google Ads and is offered by Google for free to Google Ads customers.

Nevertheless, it’s a great keyword research tool and you can use it to find your SEO keywords.

Go to Google Ads and create an account. Follow the steps to create a draft campaign so that the system will allow you to access the Keyword Planner.

Once you are done with the draft campaign, select TOOLS > KEYWORD PLANNER.

Access the Google Keyword Planner

Click on FIND NEW KEYWORDS

For the sake of this example, let’s say that you are a dog trainer selling an online course teaching people how to train their dogs.

If you enter the relevant keywords in the Keyword Planner, you will be presented with a list of keyword ideas matching your selected topics.

How to Find SEO Keywords using the Keyword Planner

Notice that besides the keywords, the tool has a few more columns.

The ‘Average Monthly Searches’ shows you how many searches are performed on Google per month for that term and the ‘Competition’ gives you an idea of how competitive a keyword is.

By ‘competitive’ we mean how many people are bidding to advertise for that keyword on Google Ads.

Remember that the keyword planner is a tool for Google PPC Ads and not for SEO search results.

Select some of the keywords that are highly relevant to your website and products, both ‘head’ and ‘long-tail keywords’, add them to a spreadsheet and move on to the next step.

Step 3: Find Semantic SEO Keywords

The next step is to find keywords related to your target keywords. These keywords are known as semantic keywords or LSI keywords.

A semantic keyword is a keyword that is strongly related to another keyword.

The reason you want to do this is that Google no longer ranks pages based on individual keywords alone; its ranking is more oriented around topics.

So, by finding LSI keywords for your main keywords and including them in your content, you help Google get a better understanding of your content, and this translates to higher rankings.

The best way to find LSI keywords is to use the LSI Keyword Generator and Google search.

LSI Keyword Generator

Go to LSI Graph, enter your head keywords, and click search. Take note of the keywords and add them to a second column in your spreadsheet.

How to Find Semantic SEO Keywords

Google Search

When you search for a keyword in Google, there are 2 ways to find out what Google considers to be the related keywords for a given search term.

The first one is to look at the “People also ask” section.

People Also Ask – Keyword Ideas

And the second one is the “Searches related to…” section.

Google Related Searches

SEMRUSH

SEMRUSH is my favorite keyword research tool.

Among many other useful features, it has two very powerful tools for keyword research.

The first is the “Keyword Magic Tool” and the second one is the “Topic Research”.

With SEMRUSH, you don’t have to go through the process of creating an account with the Google Keyword Planner or going to LSIGraph and Google search to find related keywords.

Everything is done within SEMRUSH.

Keyword Magic Tool

The first step is to create an account (there is a 7-day free trial).

Then select KEYWORD MAGIC TOOL under KEYWORD ANALYTICS.

Type in your head keyword and click SEARCH.

Find SEO keywords with the Keyword Magic Tool

SEMRUSH will group related keywords together.

Sort the keywords by volume and KD (Keyword Difficulty). Unlike the Competition column in the Google Keyword Planner, KD refers to how difficult it is to RANK for a particular keyword in Google.

This metric is very useful, since what you want is to pick the SEO keywords that have a higher search volume and a lower KD score.

Topic Research

Remember what I mentioned above, that Google now ranks websites based on topics and not just keywords?

The Topic Research tool will help you find long-tail keywords related to a topic.

Click TOPIC RESEARCH from the left menu and type in your main head keyword.

Topic Research with SEMRUSH

What you see on the right (under Interesting Questions) are questions related to your niche. It’s similar to “People also ask” but more comprehensive, since it includes questions from various sources.

How can you take advantage of this?

These questions can help you build TOPIC RELEVANCY, which is what you want if your goal is to rank higher on Google.

How can you do this in practice? Optimize your homepage for your head keywords and then create content (through a blog) targeting each of the questions (which are, in essence, long-tail keywords).

Make sure that within your blog posts, you link to your main pages.

How to use SEO Keywords in your Content

Doing good keyword research and having a keyword list is not enough.

In order to benefit from this process, you need to know how to use those keywords in your content.

This is known as SEO Content, which is a subset of On Page SEO.

Here are some tips to follow:

Optimize your Homepage for your main keyword

Homepage SEO is very important. Search engines start the crawling process from the homepage and follow any links from there.

When it comes to keyword optimization, you should optimize your homepage for your main head keywords (even if their keyword difficulty is very high).

The reason is that you want to make it clear to both crawlers and users what your website is all about.

Create a separate page for each of your main keywords

Let’s say that your company sells services (like mine does). You need to create a page for each of your services, with each page optimized for a main keyword.

Look at how my services page is organized.

I have a summary page for all my services and individual pages for each of the services we offer. Each page is optimized for a specific keyword.

Create pieces of content to target long-tail keywords

Once you are done with the main keywords, it’s time to utilize the power of blogging and start creating content targeting long-tail keywords.

You can use the results of topic research to decide which keywords to target in your blog posts.

Optimize your content with SEO keywords

When writing the content for both your pages and blog posts, you need to make sure that:

Use keywords in the URL – you include your target keyword in the page URL.

Use keywords in the page title – you include your target keyword in the page title.

Use keywords in the H1 tag – you include your target keyword (or close variations) in the H1 tag.

Use long-tail keywords as subheadings (H2, H3) – you include related keywords in your subheadings.

Use LSI keywords in your content – you include LSI keywords within your copy.

SEO Keywords: Final Advice

Picking the right keywords for your website is important.

When choosing keywords, try to think outside the box and consider all possible search phrases people might use in search engines to find your products.

Add those keywords to a spreadsheet and take advantage of the data provided by the different keyword tools to expand your keyword list as much as possible.

Group your keywords into two categories: first, the keywords to use on your homepage and main website pages; second, the keywords to use in your blog.

Follow the on-page SEO tips outlined above to intelligently include keywords in your copy but always make sure that you pay special attention to the quality of the content.

Publish content related to your target topics to create content relevancy and watch your rankings and traffic increase.

The post What are SEO Keywords and How to Find Them appeared first on reliablesoft.net.

A Website Crawler for Search and Information Architecture

Posted by on May 14, 2019 in SEO Articles | Comments Off on A Website Crawler for Search and Information Architecture

Distilled is all about effective and accountable search marketing. Part of being effective is being able to gather the data we need to diagnose an issue. For a while, we’ve been using a custom crawler at Distilled to solve technical problems with our clients. Today, we’re making that crawler available to you.

This crawler solves three long-standing pain points for our team:

  1. Unhelpful stock reports. Other crawlers limit us to predefined reports. Sometimes these reports don’t answer our questions. This crawler exports to BigQuery, which lets us stay flexible.
  2. Limited crawl scope. When crawling on your own computer, your crawl is limited by how much RAM you’ve got. Our crawler is so efficient that you’re more likely to run out of time than memory.
  3. Inflexible schema. Other crawlers generally export flattened data into a table. This can make it hard to analyze many-to-many relationships, like hreflang tags. This crawler outputs complete, non-flattened information for each page. With this data, the queries our team runs are limited only by their imaginations.

Our team still uses both local and hosted crawlers every day. We break out this custom crawler when we have a specific question about a large site. If that’s the case, this has proven to be the best solution.

To use the crawler, you’ll need to be familiar with running your computer from the command line. You’ll also need to be comfortable with BigQuery. This blog post will cover only high-level information. The rest is up to you!

This is not an official Distilled product. We are unable to provide support. The software is open-source and governed by an MIT-style license. You may use it for commercial purposes without attribution.

What it is

We’ve imaginatively named the tool crawl. crawl is an efficient and concurrent command-line tool for crawling and understanding websites. It outputs data in a newline-delimited JSON format suitable for use with BigQuery.

By waiting until after the crawl to analyze the data, you make analysis more cost-effective, and because the crawler doesn’t try to analyze data at all as it’s collected, crawling is much more efficient. crawl keeps track of the least information necessary to complete the crawl. In practice, a crawl of a 10,000-page site might use ~30 MB of RAM; crawling 1,000,000 pages might use less than a gigabyte.

Cloud computing promises that you can pay for the computing power you need, when you need it. BigQuery is a magical example of this in action. For many crawl-related tasks, it is almost free. Anyone can upload data and analyze it in seconds.

The structure of that data is essential. With most crawlers that allow data exports, the result is tabular. You get, for instance, one row per page in a CSV. This structure isn’t great for many-to-many relationships of cross-linking within a website. crawl outputs a single row per page, and that row contains nested data about every link, hreflang tag, header field, and more. Here are some example fields to help you visualize this:

Some fields, like Address, have nested data. Address.Full is the full URL of the page. Other fields, like StatusCode, are simply numbers or strings. Finally, there are repeated fields, like Links. These fields can have any number of data points. Links records all links that appear on a page being crawled.
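To make that nested structure concrete, here is a rough Go sketch of how a single output row might decode. Only Address.Full, StatusCode, and Links come from the description above; the individual link attributes used below (Href, Nofollow) are hypothetical placeholders rather than the tool’s actual field names.

package main

import (
    "encoding/json"
    "fmt"
)

// Illustrative shapes only: Address, StatusCode, and Links are named in the
// text above; the link attributes below are hypothetical placeholders.
type Address struct {
    Full string // the full URL of the page
}

type Link struct {
    Href     string // hypothetical: where the link points
    Nofollow bool   // hypothetical: whether the link carried rel="nofollow"
}

type Page struct {
    Address    Address // nested field
    StatusCode int     // simple numeric field
    Links      []Link  // repeated field: one entry per link found on the page
}

func main() {
    // One newline-delimited JSON object per crawled page.
    row := `{"Address":{"Full":"https://www.example.com/"},"StatusCode":200,"Links":[{"Href":"https://www.example.com/about","Nofollow":false}]}`
    var p Page
    if err := json.Unmarshal([]byte(row), &p); err != nil {
        panic(err)
    }
    fmt.Println(p.Address.Full, p.StatusCode, len(p.Links))
}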

So using BigQuery for analysis solves the flexibility problem, and helps solve the resource problem too.

Install with Go

Currently, you must build crawl using Go. This will require Go version >1.10. If you’re not familiar with Go, it’ll be best to lean on someone you know who is willing to help you.

go get -u github.com/benjaminestes/crawl/...

In a well-configured Go installation, this will fetch and build the tool. The binary will be put in your $GOBIN directory. Adding $GOBIN to your $PATH will allow you to call crawl without specifying its location.

Valid commands

USAGE: crawl <command> [-flags] [args]

help

Print this message.

list

Crawl a list of URLs provided on stdin.
The -format={(text)|xml} flag determines the expected type.

Example:
crawl list config.json <url_list.txt >out.txt
crawl list -format=xml config.json <sitemap.xml >out.txt

schema

Print a BigQuery-compatible JSON schema to stdout.

Example:
crawl schema >schema.json

sitemap

Recursively requests a sitemap or sitemap index from a URL provided as argument.

Example:
crawl sitemap http://www.example.com/sitemap.xml >out.txt

spider

Crawl from the URLs specified in the configuration file.

Example:
crawl spider config.json >out.txt

Configuring your crawl

The repository includes an example config.json file. This lists the available options with reasonable default values.

{
    "From": [
        "https://www.example.com/"
    ],
    "Include": [
        "^(https?://)?www\\.example\\.com/.*"
    ],
    "Exclude": [],

    "MaxDepth": 3,

    "WaitTime": "100ms",
    "Connections": 20,

    "UserAgent": "Crawler/1.0",
    "RobotsUserAgent": "Crawler",
    "RespectNofollow": true,

    "Header": [
        {"K": "X-ample", "V": "alue"}
    ]
}

Here’s the essential information for these fields:

  • From. An array of fully-qualified URLs from which you want to start crawling. If you are crawling from the home page of a site, this list will have one item in it. Unlike other crawlers you may have used, this choice does not affect the scope of the crawl.
  • Include. An array of regular expressions that a URL must match in order to be crawled. If there is no valid Include expression, all discovered URLs are within scope. Note that meta-characters must be double-escaped. Only meaningful in spider mode.
  • Exclude. An array of regular expressions that filter the URLs to be crawled. Meta-characters must be double-escaped. Only meaningful in spider mode.
  • MaxDepth. Only URLs fewer than MaxDepth links away from the From list will be crawled.
  • WaitTime. The pause between spawning requests; this approximates the crawl rate. For instance, to crawl about 5 URLs per second, set this to “200ms”. Values use Go’s time parsing rules (see the sketch after this list).
  • Connections. The maximum number of concurrent connections. If the configured value is < 1, it will be set to 1 upon starting the crawl.
  • UserAgent. The user-agent to send with HTTP requests.
  • RobotsUserAgent. The user-agent to test robots.txt rules against.
  • RespectNofollow. If this is true, links with a rel=”nofollow” attribute will not be included in the crawl.
  • Header. An array of objects with properties “K” and “V”, signifying key/value pairs to be added to all requests.

The MaxDepth, Include, and Exclude options only apply to spider mode.
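One small aside on the WaitTime format: since the value follows Go’s duration syntax (as the bullet above notes), here is a quick sketch of how a pause of “200ms” between spawned requests works out to roughly 5 URLs per second. This is only an illustration of the arithmetic, not code from the crawler itself.

package main

import (
    "fmt"
    "time"
)

func main() {
    // WaitTime values use Go's duration syntax: "100ms", "200ms", "1s", and so on.
    wait, err := time.ParseDuration("200ms")
    if err != nil {
        panic(err)
    }
    // Approximate crawl rate implied by pausing this long between requests.
    perSecond := float64(time.Second) / float64(wait)
    fmt.Printf("WaitTime of %v is roughly %.0f URLs per second\n", wait, perSecond)
}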

How the scope of a crawl is determined

Given your specified Include and Exclude lists, defined above, here is how the crawler decides whether a URL is in scope:

  1. If the URL matches a rule in the Exclude list, it will not be crawled.
  2. If the URL matches a rule in the Include list, it will be crawled.
  3. If the URL matches neither the Exclude nor Include list, then if the Include list is empty, it will be crawled, but if the Include list is not empty, it will not be crawled.

Note that only one of these cases will apply (as in Go’s switch statement, by way of analogy).

Finally, no URLs will be in scope if they are further than MaxDepth links from the From set of URLs.
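To make the decision order easier to follow, here is a minimal Go sketch that mirrors the three cases above (and the switch-statement analogy). It is an illustration of the documented rules only, not the crawler’s actual implementation; the MaxDepth limit is left as a comment.

package main

import (
    "fmt"
    "regexp"
)

// matchesAny reports whether url matches any of the given patterns.
func matchesAny(url string, patterns []*regexp.Regexp) bool {
    for _, p := range patterns {
        if p.MatchString(url) {
            return true
        }
    }
    return false
}

// inScope mirrors the three rules above: Exclude wins, then Include,
// then the default depends on whether an Include list exists at all.
// (The MaxDepth limit is applied separately, during the crawl itself.)
func inScope(url string, include, exclude []*regexp.Regexp) bool {
    switch {
    case matchesAny(url, exclude):
        return false
    case matchesAny(url, include):
        return true
    default:
        return len(include) == 0
    }
}

func main() {
    include := []*regexp.Regexp{regexp.MustCompile(`^(https?://)?www\.example\.com/.*`)}
    exclude := []*regexp.Regexp{regexp.MustCompile(`/private/`)}

    fmt.Println(inScope("https://www.example.com/page", include, exclude))      // true: included
    fmt.Println(inScope("https://www.example.com/private/a", include, exclude)) // false: excluded
    fmt.Println(inScope("https://other.example.org/", include, exclude))        // false: Include list is non-empty
}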

Use with BigQuery

Run crawl schema >schema.json to get a BigQuery-compatible schema definition file. The file is automatically generated (via go generate) from the structure of the result object generated by the crawler, so it should always be up-to-date.

If you find an incompatibility between the output schema file and the data produced from a crawl, please flag it as a bug on GitHub.

In general, you’ll save crawl data to a local file and then upload to BigQuery. That involves two commands:

$ crawl spider config.json >output.txt 

$ bq load --source_format=NEWLINE_DELIMITED_JSON dataset.table output.txt schema.json

Crawl files can be large, and it is convenient to upload them directly to Google Cloud Storage without storing them locally. This can be done by piping the output of crawl to gsutil:

$ crawl spider config.json | gsutil cp - gs://my-bucket/crawl-data.txt

$ bq load --source_format=NEWLINE_DELIMITED_JSON dataset.table gs://my-bucket/crawl-data.txt schema.json

Analyzing your data

Once you’ve got your data into BigQuery, you can take any approach to analysis you want. You can see how to do interactive analysis in the example notebook.

In particular, take a look at how the nested and repeated data fields are used. With them, it’s possible to generate reports on internal linking, canonicalization, and hreflang reciprocation.

Bugs, errors, contributions

All reports, requests, and contributions are welcome. Please handle them through the GitHub repository. Thank you!

This is not a Distilled product. We are unable to provide support. The software is open-source and governed by an MIT-style license. You can use it for commercial purposes without attribution.

A summary of Google Data Studio: Updates from April 2019

Posted by on May 14, 2019 in SEO Articles | Comments Off on A summary of Google Data Studio: Updates from April 2019

April was a big month for Google Data Studio (GDS), with Google introducing some significant product updates to this already robust reporting tool.

For those not familiar with GDS, it is a free dashboard-style reporting tool that Google rolled out in June 2016. With Data Studio, users can connect to various data sources to visualize and share data from a variety of web-based platforms.

GDS supports native integrations with most Google products including Analytics, Google Ads, Search Ads 360 (formerly Doubleclick Search), Google Sheets, YouTube Analytics, and Google BigQuery.

GDS also supports connectors that users can purchase to import data from over one hundred third-party sources, such as Bing Ads, Amazon Ads, and many others.

Sample Google Data studio dashboard

Source: Google

1. Google introduces BigQuery BI Engine for integration with GDS

BigQuery is Google’s massive enterprise data warehouse. It enables extremely fast SQL queries by using the same technology that powers Google Search. Per Google,

“Every day, customers upload petabytes of new data into BigQuery, our exabyte-scale, serverless data warehouse, and the volume of data analyzed has grown by over 300 percent in just the last year.”

BigQuery BI Engine stores, analyzes, and finds insights on your data

Source: Google

2. Enhanced data drill-down capabilities

You can now reveal additional levels of detail in a single chart using GDS’s enhanced data drill down (or drill up) capabilities.

You’ll need to enable this feature in each specific GDS chart and, once enabled, you can drill down from a higher level of detail to a lower one (for example, country to city). You can also drill up from a lower level of detail to a higher one (for example, city to country). You must be in “View” mode to drill up or drill down (as opposed to “Edit” mode).

Here’s an example of drilling up in a chart that uses Google’s sample data in GDS.

GDS chart showing clicks by month

Source: Google

To drill up by year, right-click on the chart in “View” mode and select “Drill up” as shown below.

GDS chart showing the option to “Drill up” the monthly data to yearly data

Visit the Data Studio Help website for detailed instructions on how to leverage this feature.

3. Improved formatting of tables

GDS now allows for more user-friendly and intuitive table formatting. This includes the ability to distribute columns evenly with just one click (by right-clicking the table), resize a single column by dragging the column’s divider, and change the justification of table contents to left, right, or center via the “Style” properties panel in “Edit” mode.

Example of editing, table properties tab in GDS

Source: Google

Detailed instructions on how to access this feature are located here.

4. The ability to hide pages in “View” mode

GDS users can now hide pages in “View” mode by right-clicking on the specific page (accessed via the top submenu), clicking on the three vertical dots to the right of the page name, and selecting “Hide page in view mode”. This feature comes in handy when you’ve got pages you don’t want your client (or anyone else) to see when presenting the GDS report.

The new “Hide page” feature in GDS

Source: Google

5. Page canvas size enhancements

Users can now customize each page’s size with a new feature that was rolled out on March 21st (we’re sneaking this into the April update because it’s a really neat feature).

Canvas size settings can be accessed from the page menu at the top of the GDS interface. Select Page > Current Page Settings, and then select “Style” from the settings area at the right of the screen. You can then choose your page size from a list of pre-configured sizes or set a custom size of your own.

GDS Page Settings Wizard

Source: Google

6. New Data Studio help community

As GDS adds more features and becomes more complex, it seems only fitting that Google would launch a community help forum for this tool. So, while this isn’t exactly a new feature to GDS itself, it is a new resource for GDS users that will hopefully make navigating GDS easier.

Users can access the GDS Help Community via Google’s support website, or by selecting “Help Options” from the top menu bar in GDS (indicated by a question mark icon) and then clicking the “Visit Help Forum” link.

The Help menu within GDS

Source: Google

Conclusion

We hope that summarizing the latest GDS enhancements has made it a little easier to digest the many new changes that Google rolled out in April (and March). Remember, you can always get a list of updates, both new and old, by visiting Google’s Support website here.

Jacqueline Dooley is the Director of Digital Strategy at CommonMind.

The post A summary of Google Data Studio: Updates from April 2019 appeared first on Search Engine Watch.

Reading Between the Lines – Three Deeper Takeaways from John Mueller at BrightonSEO

Posted by on May 14, 2019 in SEO Articles | Comments Off on Reading Between the Lines – Three Deeper Takeaways from John Mueller at BrightonSEO

Reading Between the Lines – Three Deeper Takeaways from John Mueller at BrightonSEO

Last Friday I had the pleasure of watching John Mueller of Google being interviewed on the BrightonSEO main stage by (Distilled alumna!) Hannah Smith. I found it hugely interesting how different it was from the previous similarly formatted sessions with John I’ve seen – by Aleyda at BrightonSEO previously, and more recently by my colleague Will Critchlow at SearchLove. In this post, I want to get into some of the interesting implications in what John did and, crucially, did not say.

I’m not going to attempt here to cover everything John said exhaustively – if that’s what you’re looking for, I recommend this post by Deepcrawl’s Sam Marsden, or this transcript via Glen Allsopp (from which I’ve extracted below). This will also not be a tactical post – I was listening to this Q&A from the perspective of wanting to learn more about Google, not necessarily what to change in my SEO campaigns on Monday morning.

Looking too closely?

I’m aware of the dangers of reading too much into the minutiae of what John Mueller, Gary Illyes, and crew come out with – especially when he’s talking live and unscripted on stage. Ultimately, as John said himself, it’s his job to establish a flow of information between webmasters and search engineers at Google. There are famously few people, or arguably no people at all, who know the ins and outs of the search algorithm itself, and it is not John’s job to get into it in this depth.

That said, he has been trained, and briefed, and socialised, to say certain things, to not say certain things, to focus on certain areas, and so on. This is where our takeaways can get a little more interesting than the typical, clichéd “Google says X” or “we think Google is lying about Y”. I’d recommend this presentation and deck from Will if you want to read more about that approach, and some past examples.

So, into the meat of it.

1. “We definitely use links to recognize new content”

Hannah: Like I said, this is top tier sites…  Links are still a ranking factor though, right? You still use links as a ranking factor?

John: We still use links. I mean it’s not the only ranking factor, so like just focusing on links, I don’t think that makes sense at all… But we definitely use links to recognize new content.

Hannah: So if you then got effectively a hole, a very authoritative hole in your link graph… How is that going to affect how links are used as a ranking factor or will it?

John: I dunno, we’ll see. I mean it’s one of those things also where I see a lot of times the sites that big news sites write about are sites that already have links anyway. So it’s rare that we wouldn’t be able to find any of that new content. So I don’t think everything will fall apart. If that happens or when that happens, but it does make it a little bit harder for us. So it’s kind of tricky, but we also have lots of other signals that we look at. So trying to figure out how relevant a page is, is not just based on the links too.

The context here is that Hannah was interested in how much of a challenge it is for Google when large numbers of major editorial sites start adding the “nofollow” attribute to all their external links – which has been a trend of late in the UK, and I suspect elsewhere. If authoritative links are still an important trust factor, does this not weaken that data?

The interesting thing for me here was very much in what John did not say. Hannah asks him fairly directly whether links are a ranking factor, and he evades three times, by discussing the use of links for crawling & discovering content, rather than for establishing a link graph and therefore a trust signal:

“We still use links”
“We definitely use links to recognize new content”
“It’s rare we wouldn’t be able to find any of that new content”

There’s also a fourth example, earlier in the discussion – before the excerpt above – where he does the same:

“…being able to find useful content on the web, links kind of play a role in that.”

This is particularly odd as in general, Google is pretty comfortable still discussing links as a ranking factor. Evidently, though, something about this context caused this slightly evasive response. The “it’s not the only ranking factor” response feels like a bit of an evasion too, given that Google essentially refuses to discuss other ranking factors that might establish trust/authority, as opposed to just relevance and baseline quality – see my points below on user signals!

Personally, I also thought this comment was very interesting and somewhat vindicating of my critique of a lot of ranking factor studies:

“…a lot of the times the sites that big news sites write about are sites that already have links anyway”

Yeah, of course – links are correlated with just about any other metric you can imagine, whether it be branded search volume, social shares, click-through rate, whatever.

2. Limited spots on page 1 for transactional sites

Hannah: But thinking about like a more transactional query, for example. Let’s just say that you want to buy some contact lenses, how do you know if the results you’ve ranked first is the right one? If you’ve done a good job of ranking those results?

John: A lot of times we don’t know, because for a lot of these queries there is no objective, right or wrong. They’re essentially multiple answers that we could say this could make sense to show as the first result. And I think in particular for cases like that, it’s useful for us to have those 10 blue links or even 10 results in the search page, where it’s really something like we don’t completely know what you’re looking for. Are you looking for information on these contact lenses? Do you want to buy them? Do you want to compare them? Do you want to buy a specific brand maybe from this-

This is one of those things where I think I could have figured this out from the information I already had, but it clicked into place for me listening to this explanation from John. If John is saying there’s a need to show multiple intents on the first page for even a fairly commercial query, there is an implication that only so many transactional pages can appear.

Given that, in many verticals, there are far more than 10 viable transactional sites, this means that if you drop from being the 3rd best to the 4th best among those, you could drop from, for example, position 5 to position 11. This is particularly important to keep in mind when we’re analysing search results statistically – whether it be in ranking factor studies or forecasting the results of our SEO campaigns, the relationship between the levers we pull and the outputs we see can be highly non-linear. A small change might move you 6 ranking positions, past sites which have a different intent and totally different metrics when it comes to links, on-page optimisation, or whatever else.

3. User signals as a ranking factor

Hannah: Surely at that point, John, you would start using signals from users, right? You would start looking at which results are clicked through most frequently, would you start looking at stuff like that at that point?

John: I don’t think we would use that for direct ranking like that. We use signals like that to analyze the algorithms in general, because across a million different search queries we can figure out like which one tends to be more correct or not, depending on where people click. But for one specific query for like a handful of pages, it can go in so many different directions. It’s really-

So, the suggestion here is that user signals – presumably CTR (click-through rates), dwell time, etc. – are used to appraise the algorithm, but not as part of the algorithm. This has been the line from Google for a while, but I found this response far more explicit and clear than John M’s skirting round the subject in the past.

It’s difficult to square this with some past experiments from the likes of Rand Fishkin manipulating rankings with hundreds of people in a conference hall clicking results for specific queries, or real world results I’ve discussed here. In the latter case, we could maybe say that this is similar to Panda – Google has machine learned what on-site attributes go with users finding a site trustworthy, rather than measuring trust & quality directly. That doesn’t explain Rand’s results, though.

Here are a few explanations I think are possible:

  1. Google just does not want to admit to this, because it’d look spammable (whether or not it actually is).
  2. In fact, they use something like “site recent popularity” as part of the algorithm, so, on a technicality, don’t need to call it CTR or user signals.
  3. The algorithm is constantly appraising itself, and adjusts in response to a lot of clicks on a result that isn’t p1 – but the ranking factor that gets adjusted is some arbitrary attribute of that site, not the user signal itself.

Just to explain what I mean by the third one a little further – imagine if there are three sites ranking for a query, which are sites A, B, & C. At the start, they rank in that order – A, B, C. It just so happens, by coincidence, that site C has the highest word count.

Lots of people suddenly search the query and click on result C. The algorithm is appraising itself based on user signals, for example, cases where people prefer the 3rd place result, so needs to adjust to make this site rank higher. Like any unsupervised machine learning, it finds a way, any way, to fit the desired outcome to the inputs for this query, which in this case is weighting word count more highly as a ranking factor. As such, result C ranks first, and we all claim CTR is the ranking factor. Google can correctly say CTR is not a ranking factor, but in practice, it might as well be.

For me, the third option is the most contrived, but it also fits most easily with my real-world experience; that said, I think either of the other explanations, or even all three, could be true.

Discussion

I hope you’ve enjoyed my rampant speculation. It’s only fair that you get to join in too: tweet me at @THCapper, or get involved in the comments below.

How Often Does Google Update Its Algorithm?

Posted by on May 14, 2019 in SEO Articles | Comments Off on How Often Does Google Update Its Algorithm?

How Often Does Google Update Its Algorithm?

Posted by Dr-Pete

In 2018, Google reported an incredible 3,234 improvements to search. That’s more than 8 times the number of updates they reported in 2009 — less than a decade ago — and an average of almost 9 per day. How have algorithm updates evolved over the past decade, and how can we possibly keep tabs on all of them? Should we even try?

To kick this off, here’s a list of every confirmed count we have (sources at end of post):

2018 – 3,234 “improvements”
2017 – 2,453 “changes”
2016 – 1,653 “improvements”
2013 – 890 “improvements”
2012 – 665 “launches”
2011 – 538 “launches”
2010 – 516 “changes”
2009 – 350–400 “changes”

Unfortunately, we don’t have confirmed data for 2014-2015 (if you know differently, please let me know in the comments).

A brief history of update counts

Our first peek into this data came in spring of 2010, when Google’s Matt Cutts revealed that “on average, [Google] tends to roll out 350–400 things per year.” It wasn’t an exact number, but given that SEOs at the time (and to this day) were tracking at most dozens of algorithm changes, the idea of roughly one change per day was eye-opening.

In fall of 2011, Eric Schmidt was called to testify before Congress, and revealed our first precise update count and an even more shocking scope of testing and changes:

“To give you a sense of the scale of the changes that Google considers, in 2010 we conducted 13,311 precision evaluations to see whether proposed algorithm changes improved the quality of its search results, 8,157 side-by-side experiments where it presented two sets of search results to a panel of human testers and had the evaluators rank which set of results was better, and 2,800 click evaluations to see how a small sample of real-life Google users responded to the change. Ultimately, the process resulted in 516 changes that were determined to be useful to users based on the data and, therefore, were made to Google’s algorithm.”

Later, Google would reveal similar data in an online feature called “How Search Works.” Unfortunately, some of the earlier years are only available via the Internet Archive, but here’s a screenshot from 2012:

Note that Google uses “launches” and “improvements” somewhat interchangeably. This diagram provided a fascinating peek into Google’s process, and also revealed a startling jump from 13,311 precision evaluations (changes that were shown to human evaluators) to 118,812 in just two years.

Is the Google algorithm heating up?

Since MozCast has kept the same keyword set since almost the beginning of data collection, we’re able to make some long-term comparisons. The graph below represents five years of temperatures. Note that the system was originally tuned (in early 2012) to an average temperature of 70°F. The redder the bar, the hotter the temperature …

Click to open a high-resolution version in a new tab

You’ll notice that the temperature ranges aren’t fixed — instead, I’ve split the label into eight roughly equal buckets (i.e. they represent the same number of days). This gives us a little more sensitivity in the more common ranges.

The trend is pretty clear. The latter half of this 5-year timeframe has clearly been hotter than the first half. While the warming trend is evident, though, it’s not a steady increase over time like Google’s update counts might suggest. Instead, we see a stark shift in the fall of 2016 and a very hot summer of 2017. More recently, we’ve actually seen signs of cooling. Below are the means and medians for each year (note that 2014 and 2019 are partial years):

2019 – 83.7° / 82.0°
2018 – 89.9° / 88.0°
2017 – 94.0° / 93.7°
2016 – 75.1° / 73.7°
2015 – 62.9° / 60.3°
2014 – 65.8° / 65.9°

Note that search engine rankings are naturally noisy, and our error measurements tend to be large (making day-to-day changes hard to interpret). The difference from 2015 to 2017, however, is clearly significant.

Are there really 9 updates per day?

No, there are only 8.86 – feel better? Ok, that’s probably not what you meant. Even back in 2009, Matt Cutts said something pretty interesting that seems to have been lost in the mists of time…

“We might batch [algorithm changes] up and go to a meeting once a week where we talk about 8 or 10 or 12 or 6 different things that we would want to launch, but then after those get approved … those will roll out as we can get them into production.”

In 2016, I did a study of algorithm flux that demonstrated a weekly pattern evident during clearer episodes of ranking changes. From a software engineering standpoint, this just makes sense — updates have to be approved and tend to be rolled out in batches. So, while measuring a daily average may help illustrate the rate of change, it probably has very little basis in the reality of how Google handles algorithm updates.

Do all of these algo updates matter?

Some changes are small. Many improvements are likely not even things we in the SEO industry would consider “algorithm updates” — they could be new features, for example, or UI changes.

As SERP verticals and features evolve, and new elements are added, there are also more moving parts subject to being fixed and improved. Local SEO, for example, has clearly seen an accelerated rate of change over the past 2-3 years. So, we’d naturally expect the overall rate of change to increase.

A lot of this is also in the eye of the beholder. Let’s say Google makes an update to how they handle misspelled words in Korean. For most of us in the United States, that change isn’t going to be actionable. If you’re a Korean brand trying to rank for a commonly misspelled, high-volume term, this change could be huge. Some changes also are vertical-specific, representing radical change for one industry and little or no impact outside that niche.

On the other hand, you’ll hear comments in the industry along the lines of “There are 3,000 changes per year; stop worrying about it!” To me that’s like saying “The weather changes every day; stop worrying about it!” Yes, not every weather report is interesting, but I still want to know when it’s going to snow or if there’s a tornado coming my way. Recognizing that most updates won’t affect you is fine, but it’s a fallacy to stretch that into saying that no updates matter or that SEOs shouldn’t care about algorithm changes.

Ultimately, I believe it helps to know when major changes happen, if only to understand whether rankings shifted due something we did or something Google did. It’s also clear that the rate of change has accelerated, no matter how you measure it, and there’s no evidence to suggest that Google is slowing down.

Appendix A: Update count sources

2009 – Google’s Matt Cutts, video (Search Engine Land)
2010 – Google’s Eric Schmidt, testifying before Congress (Search Engine Land)
2012 – Google’s “How Search Works” page (Internet Archive)
2013 – Google’s Amit Singhal, Google+ (Search Engine Land)
2016 – Google’s “How Search Works” page (Internet Archive)
2017 – Unnamed Google employees (CNBC)
2018 – Google’s “How Search Works” page (Google.com)

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

The New Moz Local Is on Its Way!

Posted by on May 14, 2019 in SEO Articles | Comments Off on The New Moz Local Is on Its Way!

Posted by MiriamEllis

Exciting secrets can be so hard to keep. Finally, all of us at Moz have the green light to share with all of you a first glimpse of something we’ve been working on for months behind the scenes. Big inhale, big exhale…

Announcing: the new and improved Moz Local, to be rolled out beginning June 12!

Why is Moz updating the Moz Local platform?

Local search has evolved from caterpillar to butterfly in the seven years since we launched Moz Local. I think we’ve spent the time well, intensively studying both Google’s trajectory and the feedback of enterprise, marketing agency, and SMB customers.

Your generosity in telling us what you need as marketers has inspired us to action. Over the coming months, you’ll be seeing what Moz has learned reflected in a series of rollouts. Stage by stage, you’ll see that we’re planning to give our software the wings it needs to help you fully navigate the dynamic local search landscape and, in turn, grow your business.

We hope you’ll keep gathering together with us to watch Moz Local take full flight — changes will only become more robust as we move forward.

What can I expect from this upgrade?

Beginning June 12th, Moz Local customers will experience a fresh look and feel in the Moz Local interface, plus these added capabilities:

  • New distribution partners to ensure your data is shared on the platforms that matter most in the evolving local search ecosystem
  • Listing status and real-time updates to know the precise status of your location data
  • Automated detection and permanent duplicate closure, taking the manual work out of the process and saving you significant time
  • Integrations with Google and Facebook to gain deeper insights, reporting, and management for your location’s profiles
  • An even better data clean-up process to ensure valid data is formatted properly for distribution
  • A new activity feed to alert you to any changes to your location’s listings
  • A suggestion engine to provide recommendations to increase accuracy, completeness, and consistency of your location data

Additional features available include:

  • Managing reviews of your locations to keep your finger on the pulse of what customers are saying
  • Social posting to engage with consumers and alert them to news, offers, and other updates
  • Store locator and landing pages to share location data easily with both customers and search engines (available for Moz Local customers with 100 or more locations)

Remember, this is just the beginning. There’s more to come in 2019, and you can expect ongoing communications from us as further new feature sets emerge!

When is it happening?

We’ll be rolling out all the new changes beginning on June 12th. As with some large changes, this update will take a few days to complete, so some people will see the changes immediately while for others it may take up to a week. By June 21st, everyone should be able to explore the new Moz Local experience!

Don’t worry — we’ll have several more communications between now and then to help you prepare. Keep an eye out for our webinar and training materials to help ensure a smooth transition to the new Moz Local.

Are any metrics/scores changing?

Some of our reporting metrics will look different in the new Moz Local. We’ll be sharing more information on these metrics and how to use them soon, but for now, here’s a quick overview of changes you can expect:

  • Profile Completeness: Listing Score will be replaced by the improved Profile Completeness metric. This new feature will give you a better measurement of how complete your data is, what’s missing from it, and clear prompts to fill in any lacking information.
  • Improved listing status reporting: Partner Accuracy Score will be replaced by improved reporting on listing status with all of our partners, including continuous information about the data they’ve received from us. You’ll be able to access an overview of your distribution network, so that you can see which sites your business is listed on. Plus, you’ll be able to go straight to the live listing with a single click.
  • Visibility Index: Though they have similar names, Visibility Score is being replaced by something slightly different with the new and improved Visibility Index, which notates how the data you’ve provided us about a location matches or mismatches your information on your live listings.
  • New ways to measure and act on listing reach: Reach Score will be leaving us in favor of even more relevant measurement via the Visibility Index and Profile Completeness metrics. The new Moz Local will include more actionable information to ensure your listings are accurate and complete.

Other FAQs

You’ll likely have questions if you’re a current Moz Local customer or are considering becoming one. Please check out our resource center for further details, and feel free to leave us a question down in the comments — we’ll be on point to respond to any wonderings or concerns you might have!

Head to the FAQs

Where is Moz heading with this?

As a veteran local SEO, I’m finding the developments taking place with our software particularly exciting because, like you, I see how local search and local search marketing have matured over the past decade.

I’ve closely watched the best minds in our industry moving toward a holistic vision of how authenticity, customer engagement, data, analysis, and other factors underpin local business success. And we’ve all witnessed Google’s increasingly sophisticated presentation of local business information evolve and grow. It’s been quite a ride!

At every level of local commerce, owners and marketers deserve tools that bring order out of what can seem like chaos. We believe you deserve software that yields strategy. As our CEO, Sarah Bird, recently said of Moz,

“We are big believers in the power of local SEO.”

So the secret is finally out, and you can see where Moz is heading with the local side of our product lineup. It’s our serious plan to devote everything we’ve got into putting the power of local SEO into your hands.

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

I/O announcements have some applauding and others shaking their fists

Posted by on May 13, 2019 in SEO Articles | Comments Off on I/O announcements have some applauding and others shaking their fists

Now that this year’s I/O conference is in the books, digital marketers have had a chance to digest Google’s big announcements. Chief among them was Googlebot getting pushed to the latest version of Chromium, Assistant delivering results up to 10 times faster and, perhaps the most contentious, Search supporting FAQ and How-to structured data. As you can imagine, reactions weren’t limited to applause from the live audience.

Googlebot’s long-awaited update has engineers and developers nodding favorably.

Super cool. No more testing in Chrome webmaster tools to verify if your site is crawlable.

— Samar Panda (@samarpanda) May 8, 2019

Biggest unsung news 📰 of #io19…Googlebot now indexes the web using the latest Chromium rather than super old Chrome 42. Use modern features with confidence, without SEO issues. Huge! 🙌 pic.twitter.com/VJWjw71MyP

— Eric Bidelman (@ebidel) May 7, 2019

Still, some are keen to point out that this should have come sooner, especially because the update benefits businesses, consumers and Google itself.

it only took half a decade ++ !!

— jameschurchman (@jameschurchman) May 7, 2019

A number of Google Assistant-related announcements were made, but the speed demonstration is what might get users to take advantage of it more often and, by extension, businesses to prioritize integrating with it. Naturally, people drew comparisons with the competition.

Here’s an incredible demo from Google I/O. What they’re doing with the new Google Assistant is light-years ahead of Siri, which is a shame given the couple year head start Apple had. pic.twitter.com/eJiXv4SI7m

— Mike (@ekimgary) May 8, 2019

Siri: We now have better Maps integration
Bixby: We can now recognize multiple voices

Google Assistant: We’ll read your email, find the last time you booked a certain car, then book the same one in the same color for your next trip, which we also put in your calendar……..

— Marques Brownlee (@MKBHD) May 7, 2019

The announcement of support for How-to markup in search results received strong reactions. Some were excited to give it a test drive…

Actually pretty excited about this. Will definitely test out ASAP.

— Alexander Juul (@AlexanderJuul) May 8, 2019

…while others were anxious about what it could mean for the industry.

Hurray for more ways to get less traffic to your website and generate free content for @Google to run ads against !

— PaulsSEOstuff (@PaulsSEOstuff) May 8, 2019

Google is becoming a parasite. Not the mutually beneficial kind either, just a leech. You produce nothing, steal content, to make $$ & now even steal the click.

This can’t and won’t go on forever.

— Kristine Schachinger (@schachin) May 9, 2019

“Google started adding ‘features’ to the SERPS. Features whose content is not created by Google, but which operates off the scraped content of the sites in their index,” Schachinger, the digital strategist and SEO consultant quoted above, elaborated in a follow-up with Search Engine Land.

“These features ‘steal the click’ meant for the site because they are meant to keep people on Google’s page, so they will click on Google Ads. Despite a recent study showing users still, by majority, prefer the ten blue links, the how-to feature shows these features are just becoming more (and not less) prevalent. The ten blue links now appear, on average, 1000px down the page, where previously they appeared between 300-400px.”

“In ‘stealing the click,’ Google is only benefiting its bottom line. And for those whose content they are using to do this, it fundamentally alters the previously beneficial relationship between Google and site owners,” she points out. “What happens to their business when site owners start putting their money and efforts elsewhere? And this is not just supposition, I can tell you I know of some enterprise level C Suites that are testing just this, right now, because of the perception that Google is becoming less and less beneficial.”

Adding to the assortment of reactions, some see structured data (such as How-to markup) as an opportunity to gain more visibility by leapfrogging the top organic search results. Others, like Greg Finn, digital marketer and partner at Cypress North, acknowledge that the change does make things more convenient for users.

“On one hand, users should benefit in the immediate future by having Google surface every bit of helpful content on a site and showing it directly in the search results. Better yet, webmasters that participate may see a boost as they put themselves into the position of offering better content for Google.”

“The other hand is the scarier one,” Finn admits. “One way to look at it is that they are cutting out the middleman, with the middleman being the website itself. Many of the examples shown simply won’t drive traffic. Take a look at the FAQs and the ‘How To Tie a Tie’ example specifically. There is a monumental downside to Google Search changes that bypass your site & your work, so be careful. Make sure you know who is benefiting on your markup. When websites lose visitors & income, the overall content and output inevitably become worse. That’s my fear here.”
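For site owners who do want to give the feature that test drive, How-to rich results are driven by schema.org HowTo structured data embedded in the page. The snippet below is only a minimal sketch: the title, step text, and URLs are placeholder assumptions, and the required and recommended properties should be verified against Google’s structured data documentation. It uses Python simply to assemble and print the JSON-LD block you would place in the page’s HTML.

import json

# Minimal schema.org HowTo structured data (placeholder values only).
# Verify required and recommended properties against Google's How-to
# structured data documentation before deploying.
how_to = {
    "@context": "https://schema.org",
    "@type": "HowTo",
    "name": "How to tie a tie",  # placeholder title
    "totalTime": "PT5M",         # ISO 8601 duration
    "step": [
        {
            "@type": "HowToStep",
            "name": "Drape the tie",
            "text": "Drape the tie around your collar with the wide end on the right.",
            "url": "https://example.com/tie-a-tie#step1",  # placeholder URL
        },
        {
            "@type": "HowToStep",
            "name": "Cross and loop",
            "text": "Cross the wide end over the narrow end, then loop it underneath.",
            "url": "https://example.com/tie-a-tie#step2",
        },
    ],
}

# Print the script tag to paste into the page's HTML.
print('<script type="application/ld+json">')
print(json.dumps(how_to, indent=2))
print("</script>")

Whether to deploy it is exactly the judgment call Finn describes: the same markup that can earn a rich result may also let Google answer the query without sending the click.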

Why we should care. Google has been introducing numerous products and features that insert themselves between businesses and users under the guise of getting users the info they want faster. The problem is that businesses aren’t necessarily seeing the benefits, but Google still stands to gain.

Clicks are becoming scarcer, and that’s an indicator that potential customers are having less contact with our brands. By investing resources and embracing these new features and markups, are we facilitating search engines at our own expense? If that’s the case, at some point brands are bound to get fed up and seek alternative routes to their audiences, or the search engines will have to offer us more for our efforts and ad budgets.

The post I/O announcements have some applauding and others shaking their fists appeared first on Search Engine Land.

The Future of Display Advertising

Posted by on May 13, 2019 in SEO Articles | Comments Off on The Future of Display Advertising


Display is a key tool in the digital marketing playbook. But the landscape is rapidly changing, as emerging adtech formats – including in-banner video, dynamic creative and mobile optimization – help marketers achieve greater efficiencies and improved display results.

Are you ready to leverage these new opportunities?

Join our display advertising experts as they discuss new display best practices that can lift both brand awareness and bottom-line conversions. You’ll hear how you can effectively adopt emerging technologies to create more personalized, relevant display ad campaigns.

Register today for “The Future of Display Advertising: New marketing strategies to boost results,” produced by Digital Marketing Depot and sponsored by Bannerflow.

The post The Future of Display Advertising appeared first on Search Engine Land.

Local SEO for enterprises: Optimizing for the Local 3-Pack

Posted by on May 13, 2019 in SEO Articles | Comments Off on Local SEO for enterprises: Optimizing for the Local 3-Pack


Fifteen years ago, if a customer needed a hammer, they’d probably get out a phone book, look up “Hardware Store,” choose the hardware store closest to their house, drive there, go inside, and ask the clerk “Do you sell hammers?” If they happened to be out of hammers, the clerk might draw the customer a map to the next closest hardware store and the process would start all over again.

Now that most of us are walking around with tiny computers in our pockets, much preliminary research is taken care of in a matter of seconds via mobile search. If a customer needs a hammer, they simply google “Hardware Store,” and three nearby results pop up instantly.

Chances are, that customer will then be done searching. Any stores that don’t pop up will not get their business. Securing one of those top three spots in a Google search is an essential part of nailing local SEO.

This is especially significant for enterprise brands that need to compete at the local level.

With hundreds or thousands of locations, it can be overwhelming to ensure data accuracy across the board. Partnering with a local search solution to maintain and monitor listings across all locations is a great way to boost online presence and drive foot traffic.
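To make “data accuracy across the board” concrete: whether or not you partner with a vendor, a useful first step is an audit that compares the name, address, and phone number (NAP) each location publishes against a single source of truth. The sketch below is a self-contained illustration with made-up records and a deliberately crude normalization step; in practice the published data would come from your listings platform or the listings themselves.

from dataclasses import dataclass

@dataclass
class Listing:
    store_code: str
    name: str
    address: str
    phone: str

def normalize(value: str) -> str:
    # Lowercase and strip punctuation/whitespace so purely cosmetic differences don't flag.
    return "".join(ch for ch in value.lower() if ch.isalnum())

def audit(master_records, published_listings):
    # Return human-readable mismatches between published listings and the master records.
    issues = []
    for listing in published_listings:
        master = master_records.get(listing.store_code)
        if master is None:
            issues.append(f"{listing.store_code}: no master record (orphan listing)")
            continue
        for field in ("name", "address", "phone"):
            if normalize(getattr(listing, field)) != normalize(getattr(master, field)):
                issues.append(
                    f"{listing.store_code}: {field} mismatch "
                    f"(listing: {getattr(listing, field)!r}, master: {getattr(master, field)!r})"
                )
    return issues

# Made-up example data.
master_records = {
    "STORE-001": Listing("STORE-001", "Acme Hardware", "12 Main St, Springfield", "+1 555-010-0001"),
}
published_listings = [
    Listing("STORE-001", "Acme Hardware", "12 Main Street, Springfield", "+1 555-010-0001"),
]

for issue in audit(master_records, published_listings):
    print(issue)

Even a simple report like this makes it clear which locations need attention before inconsistent data starts costing you local rankings.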

Content produced in collaboration with Rio SEO.

If you’re just looking at website analytics, you could be missing out

Most consumers are researching businesses on mobile before they make decisions about which locations to visit in person. In fact, according to RetailDive, two-thirds of consumers conduct research online before even setting foot in a store.

And while most businesses know that they should pay close attention to their website analytics, many are forgetting that preliminary online research also includes local listings. Research shows that while 75% of consumers use a business’s website as part of their decision-making process, an even greater number, 87%, also consider local listings.

Going beyond website analytics to see how your ranking in local search results affects in-person visits to your businesses is key to understanding how to use local SEO to drive real-life traffic.

A study by Sparktoro found that in 62% of local mobile searches, the customer doesn’t click search results to visit a business’s webpage. Further, Rio SEO found in recent data from enterprise clients that just 1 in 60 Map Pack views resulted in a click-through to a website.

Rather, they get the information they need from the local listings that come up at the top of their search results. For many businesses, this means that if you’re not at the top, you might as well be invisible.

Optimizing for the Local 3-Pack

Mobile users are most likely using Google to search for local businesses, and the results for those searches are generally dominated by what’s called the “Local 3-Pack.”

In Google’s search engine results, the Local 3-Pack is a colorful, prominent map listing that presents to consumers the three businesses Google considers most relevant to the query and the searcher’s location.

Landing one of those first three spots is critical for making sure local searchers can find your business.

How can your business break the top three?

The key to breaking into that coveted Local 3-Pack is making sure your corporate and local sites’ SEO is in order. And the best way to get your SEO in order is to optimize your Google My Business (GMB) page to give Google’s algorithm everything it needs to find your company in local searches.

Here are a few tips for optimizing your GMB (an example of automating the basics for multi-location brands follows the list):

Provide critical business information, such as business name and category, location, and/or service area, hours of operation (with special hours or holidays), phone number, website URL, business description, and more
Give advanced information, like store code, labels, or Google Ads location extension phone
Encourage customers to leave reviews, which you can respond to within the GMB dashboard
Upload photos, which appear in both the listing and Google Images
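For enterprise brands with hundreds of listings, much of the basic information above can also be scripted against the Google My Business API instead of edited location by location in the dashboard. The sketch below is a rough illustration, not a drop-in implementation: the access token, account and location IDs, field names, and update mask are placeholder assumptions modeled on the v4 API and should be verified against Google’s current documentation.

import requests

# Placeholder credentials and IDs; use a real OAuth 2.0 access token with the
# business.manage scope and your own account and location IDs.
ACCESS_TOKEN = "ya29.example-token"
ACCOUNT_ID = "1234567890"
LOCATION_ID = "9876543210"

# Critical business information to push to the listing. Field names follow the
# v4 Google My Business API as an assumption; check them before running.
location_update = {
    "primaryPhone": "+1 555-010-0000",
    "websiteUrl": "https://example.com/stores/123",
    "regularHours": {
        "periods": [
            {"openDay": "MONDAY", "openTime": "09:00",
             "closeDay": "MONDAY", "closeTime": "17:00"},
        ]
    },
}

response = requests.patch(
    f"https://mybusiness.googleapis.com/v4/accounts/{ACCOUNT_ID}/locations/{LOCATION_ID}",
    params={"updateMask": "primaryPhone,websiteUrl,regularHours"},
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=location_update,
)
response.raise_for_status()
print("Updated listing:", response.json().get("locationName"))

Looping the same call over a list of store codes is what makes the “hundreds or thousands of locations” problem mentioned earlier manageable.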

The right tools can boost your online presence

If you’re worried that your business isn’t coming up at the top of those critical mobile local searches, changing your SEO strategy to adopt the right tools could be your best bet for getting seen by mobile users. Join SEW, ClickZ, and Rio SEO in our webinar to learn more about how to choose the right SEO toolkits for boosting your local business into those crucial top three search results, and keeping it there.

Want to know more about mastering local SEO for enterprises?

The brands killing it in local SEO right now are freeing their corporate teams and local managers from complicated workarounds and messy, muddled local data.

In this webinar, you’ll explore the benefits of taking a toolkit approach to enterprise local search and discover the key tools that must be a part of your local marketing arsenal. Join us and learn how to:

leverage location-based martech effectively to optimize your brand’s online presence,
improve customer experience in decision-making moments,
track and measure location metrics that matter and stop wasting time on the wrong data,
gain and retain search engine trust in your brand and each of its locations to improve local rankings and visibility,
empower local managers to support the brand’s marketing efforts without losing control

It’s time to stop throwing disparate, disconnected solutions that only accomplish one or two things into your stack. Isn’t it time your brand’s local marketing efforts worked together to achieve the results your local stores and customers crave?

Join us for our webinar, “Scrap Your Stack: High-Performance Local SEO for Enterprise Brands, Simplified” to learn how.    

The post Local SEO for enterprises: Optimizing for the Local 3-Pack appeared first on Search Engine Watch.
