Blog

Redirects: One Way to Make or Break Your Site Migration – Whiteboard Friday

Posted by KameronJenkins

Correctly redirecting your URLs is one of the most important things you can do to make a site migration go smoothly, but there are clear processes to follow if you want to get it right. In this week’s Whiteboard Friday, Kameron Jenkins breaks down the rules of redirection for site migrations to make sure your URLs are set up for success.


Video Transcription

Hey, guys. Welcome to this week’s edition of Whiteboard Friday. My name is Kameron Jenkins, and I work here at Moz. What we’re going to be talking about today is redirects and how they’re one way that you can make or break your site migration. Site migration can mean a lot of different things depending on your context.

Migrations?

I wanted to go over quickly what I mean before we dive into some tips for avoiding redirection errors. When I talk about migration, I’m coming from the experience of these primary activities.

CMS moving/URL format

One example of a migration I might be referring to is maybe we're taking on a client and they previously used a CMS that had a default kind of URL formatting, and it included the date.

So it was like /2018/May/ and then the post. Then we’re changing the CMS. We have more flexibility with how our pages, our URLs are structured, so we’re going to move it to just /post or something like that. In that way a lot of URLs are going to be moving around because we’re changing the way that those URLs are structured.

“Keywordy” naming conventions

Another instance is that sometimes we’ll get clients that come to us with kind of dated or keywordy URLs, and we want to change this to be a lot cleaner, shorten them where possible, just make them more human-readable.

An example of that would be maybe the client used URLs like /best-plumber-dallas, and we want to change it to something a little bit cleaner, more natural, and not as keywordy, to just /plumbers or something like that. So that can be another example of lots of URLs moving around if we’re taking over a whole site and we’re kind of wanting to do away with those.

Content overhaul

Another example is if we’re doing a complete content overhaul. Maybe the client comes to us and they say, “Hey, we’ve been writing content and blogging for a really long time, and we’re just not seeing the traffic and the rankings that we want. Can you do a thorough audit of all of our content?” Usually what we notice is that you have maybe even thousands of pages, but four of them are ranking.

So there are a lot of just redundant pages, pages that are thin and would be stronger together, some pages that just don’t really serve a purpose and we want to just let die. So that’s another example where we would be merging URLs, moving pages around, just letting some drop completely. That’s another example of migrating things around that I’m referring to.

Don’t we know this stuff? Yes, but…

That’s what I’m referring to when it comes to migrations. But before we dive in, I kind of wanted to address the fact that like don’t we know this stuff already? I mean I’m talking to SEOs, and we all know or should know the importance of redirection. If there’s not a redirect, there’s no path to follow to tell Google where you’ve moved your page to.

It’s frustrating for users if they click on a link that no longer works, that doesn’t take them to the proper destination. We know it’s important, and we know what it does. It passes link equity. It makes sure people aren’t frustrated. It helps to get the correct page indexed, all of those things. So we know this stuff. But if you’re like me, you’ve also been in those situations where you have to spend entire days fixing 404s to correct traffic loss or whatever after a migration, or you’re fixing 301s that were maybe done but they were sent to all kinds of weird, funky places.

Mistakes still happen even though we know the importance of redirects. So I want to talk about why really quickly.

Unclear ownership

Unclear ownership is something that can happen, especially if you're on a scrappier, smaller team and maybe you don't handle these things often enough to have a defined process for them. I've been in situations where I assumed the tech was going to do it, and the tech assumed that the project assistant was going to do it.

We’re all kind of pointing fingers at each other with no clear ownership, and then the ball gets dropped because no one really knows whose responsibility it is. So just make sure that you designate someone to do it and that they know and you know that that person is going to be handling it.

Deadlines

Another thing is deadlines. Internal and external deadlines can affect this. So one example that I encountered pretty often is the client would say, “Hey, we really need this project done by next Monday because we’re launching another initiative. We’re doing a TV commercial, and our domain is going to be listed on the TV commercial. So I’d really like this stuff wrapped up when those commercials go live.”

So those kind of external deadlines can affect how quickly we have to work. A lot of times it just gets left by the wayside because it is not a very visible thing. If you don’t know the importance of redirects, you might handle things like content and making sure the buttons all work and the template looks nice and things like that, the visible things. Where people assume that redirects, oh, that’s just a backend thing. We can take care of it later. Unfortunately, redirects usually fall into that category if the person doing it doesn’t really know the importance of it.

Another thing is internal deadlines. Sometimes you might have a deadline for a quarterly goal or a monthly goal: we have to have all of our projects done by this date. The same thing happens there. The redirects, unfortunately, are usually something that tends to miss the cutoff for those types of things.

Non-SEOs handling the redirection

Then another situation that can cause site migration errors and 404s after moving things around is non-SEOs handling this. Now you don't usually have to be a really experienced SEO to handle these types of things. It depends on your CMS and how complicated your redirect implementation is. But sometimes, if your CMS makes redirection easy, it can be treated as a data entry-type of job and delegated to someone who maybe doesn't know the importance of doing all of them, formatting them properly, or directing them to the places they're supposed to go.

The rules of redirection for site migrations

Those are all situations that I’ve encountered issues with. So now that we kind of know what I’m talking about with migrations and why they kind of sometimes still happen, I’m going to launch into some rules that will hopefully help prevent site migration errors because of failed redirects.

1. Create one-to-one redirects

Number one, always create one-to-one redirects. This is super important. What I've seen sometimes is the thought of, oh man, it could save me tons of time if I just use a wildcard and redirect all of these pages to the homepage or to the blog homepage or something like that. But what that tells Google is that Page A has moved to Page B, whereas that's not the case. You're not moving all of these pages to the homepage. They haven't actually moved there. So it's an irrelevant redirect, and Google has even said, I think, that they treat those essentially as a soft 404. They don't even count. So make sure you don't do that. Make sure you're always mapping each URL to its new location, one-to-one, every single time for every URL that's moving.
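If you keep that mapping in a spreadsheet, a small script can turn it into one rule per URL and make it obvious that nothing is being lumped together. Here's a minimal Python sketch; the redirect-map.csv file name and the Apache-style Redirect 301 output are assumptions, so adapt the output to whatever your server or CMS actually uses.

import csv

# Hypothetical mapping file with one row per moved URL (no header), e.g.
#   /2018/05/best-plumber-dallas,/plumbers
MAPPING_FILE = "redirect-map.csv"

def build_rules(mapping_file):
    """Emit one Apache-style 301 rule per old URL -> new URL pair."""
    rules = []
    with open(mapping_file, newline="") as f:
        for row in csv.reader(f):
            old_path, new_path = row[0].strip(), row[1].strip()
            if old_path and new_path and old_path != new_path:
                # One-to-one: every old URL gets its own destination.
                rules.append(f"Redirect 301 {old_path} {new_path}")
    return rules

if __name__ == "__main__":
    for rule in build_rules(MAPPING_FILE):
        print(rule)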

2. Watch out for redirect chains

Two, watch out for chains. I think Google says something oddly specific, like keep redirect chains to three hops, no more than five. Just try to limit them as much as possible. By chains, I mean you have URL A, and then you redirect it to B, and then later you decide to move it to a third location. Instead of doing this and going through a middleman, A to B to C, shorten it if you can. Go straight from the source to the destination, A to C.

3. Watch out for loops

Three, watch out for loops. Similarly, what can happen is you redirect URL A to URL B to another version C and then back to A. What happens is it's chasing its tail. It will never resolve, so you're redirecting it in a loop. So watch out for things like that. One way to check for those things is a nifty tool: Screaming Frog has a redirect chains report, so you can see if you're encountering any of those issues after you've implemented your redirects.
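Besides Screaming Frog, you can also spot chains and loops by following each hop yourself. A rough Python sketch, assuming the requests library and a hypothetical starting URL:

import requests
from urllib.parse import urljoin

def trace_redirects(url, max_hops=10):
    """Follow a URL hop by hop, reporting the chain and flagging loops."""
    hops, seen = [], set()
    current = url
    while len(hops) < max_hops:
        if current in seen:
            return hops, "LOOP"                 # back at a URL we already visited
        seen.add(current)
        resp = requests.get(current, allow_redirects=False, timeout=10)
        if 300 <= resp.status_code < 400 and "Location" in resp.headers:
            next_url = urljoin(current, resp.headers["Location"])
            hops.append((current, resp.status_code, next_url))
            current = next_url
        else:
            return hops, resp.status_code       # resolved: 200, 404, etc.
    return hops, "TOO MANY HOPS"

hops, result = trace_redirects("https://www.example.com/old-page")  # hypothetical URL
for source, code, destination in hops:
    print(f"{source} --{code}--> {destination}")
print("Final result:", result)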

4. 404 strategically

Number four, 404 strategically. The presence of 404s on your site alone is not going to hurt your site's rankings. What is going to cause issues is letting pages die that were ranking and bringing your site traffic. Obviously, if a page is 404ing, eventually Google is going to take it out of the index if you don't redirect it to its new location. If that page was ranking really well, if it was bringing your site traffic, you're going to lose the benefits of it. If it had links pointing to it, you're going to lose the benefits of those backlinks if it dies.

So if you’re going to 404, just do it strategically. You can let pages die. Like in these situations, maybe you’re just outright deleting a page and it has no new location, nothing relevant to redirect it to. That’s okay. Just know that you’re going to lose any of the benefits that URL was bringing your site.

5. Prioritize “SEO valuable” URLs

Number five, prioritize “SEO valuable” URLs, and I do that because I prefer to obviously redirect everything that you’re moving, everything that’s legitimately moving.

But because of situations like deadlines and things like that, when we’re down to the wire, I think it’s really important to at least have started out with your most important URLs. So those are URLs that are ranking really well, giving you a lot of good traffic, URLs that you’ve earned links to. So those really SEO valuable URLs, if you have a deadline and you don’t get to finish all of your redirects before this project goes live, at least you have those most critical, most important URLs handled first.

Again, obviously, it’s not ideal, I don’t think in my mind, to save any until after the launch. Obviously, I think it’s best to have them all set up by the time it goes live. But if that’s not the case and you’re getting rushed and you have to launch, at least you will have handled the most important URLs for SEO value.

6. Test!

Number six, just to end it off, test. I think it's super important just to monitor these things, because you could think that you have set these all up right, but maybe there were some formatting errors, or maybe you mistakenly redirected something to the wrong place. It is super important just to test. So what you can do is run a site:domain.com search and just start clicking on all the results that come up to see if any are redirecting to the wrong place or maybe 404ing.

Just checking all of those indexed URLs to make sure that they’re going to a proper new destination. I think Moz’s Site Crawl is another huge benefit here for testing purposes. What it does, if you have a domain set up or a URL set up in a campaign in Moz Pro, it checks this every week, and you can force another run if you want it to.

But it will scan your site for errors like this, 404s namely. So if there are any issues like that, 500 or 400 type errors, Site Crawl will catch it and notify you. If you’re not managing the domain that you’re working on in a campaign in Moz Pro, there’s on-demand crawl too. So you can run that on any domain that you’re working on to test for things like that.

There are plenty of other ways you can test and find errors. But the most important thing to remember is just to do it, just to test and make sure that even once you’ve implemented these things, that you’re checking and making sure that there are no issues after a launch. I would check right after a launch and then a couple of days later, and then just kind of taper off until you’re absolutely positive that everything has gone smoothly.
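If you keep your old-to-new URL mapping in a file, re-checking it after launch can also be scripted. A minimal sketch, assuming a hypothetical priority-redirects.csv of old URL / expected destination pairs and the requests library:

import csv
import requests

# Hypothetical file of "old URL,expected destination" pairs for your priority pages.
CHECK_FILE = "priority-redirects.csv"

with open(CHECK_FILE, newline="") as f:
    for old_url, expected in csv.reader(f):
        resp = requests.get(old_url.strip(), allow_redirects=True, timeout=10)
        if resp.status_code != 200:
            print(f"ERROR {old_url} ends in a {resp.status_code}")
        elif resp.url.rstrip("/") != expected.strip().rstrip("/"):
            print(f"WRONG {old_url} resolves to {resp.url}, expected {expected}")
        else:
            print(f"OK    {old_url} -> {resp.url} ({len(resp.history)} hop(s))")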

So those are my tips, those are my rules for how to implement redirects properly, why you need to, when you need to, and the risks that can happen with that. If you have any tips of your own that you’d like to share, pop them in the comments and share it with all of us in the SEO community. That’s it for this week’s Whiteboard Friday.

Come back again next week for another one. Thanks, everybody.

Video transcription by Speechpad.com


Uncovering SEO Opportunities via Log Files

Posted by RobinRozhon

I use web crawlers on a daily basis. While they are very useful, they only imitate search engine crawlers’ behavior, which means you aren’t always getting the full picture.

The only tools that can give you a real overview of how search engines crawl your site are log files. Despite this, many people are still obsessed with crawl budget — the number of URLs Googlebot can and wants to crawl.

Log file analysis may reveal URLs on your site that you had no idea about but that search engines are crawling anyway — a major waste of server resources, as the Google Webmaster Blog puts it:

“Wasting server resources on pages like these will drain crawl activity from pages that do actually have value, which may cause a significant delay in discovering great content on a site.”

While it’s a fascinating topic, the fact is that most sites don’t need to worry that much about crawl budget — an observation shared by John Mueller (Webmaster Trends Analyst at Google) quite a few times already.

There’s still huge value in analyzing the logs produced by those crawls, though. They will show which pages Google is crawling and whether anything needs to be fixed.

When you know exactly what your log files are telling you, you’ll gain valuable insights about how Google crawls and views your site, which means you can optimize for this data to increase traffic. And the bigger the site, the greater the impact fixing these issues will have.

What are server logs?

A log file is a recording of everything that goes in and out of a server. Think of it as a ledger of requests made by crawlers and real users. You can see exactly what resources Google is crawling on your site.

You can also see what errors need your attention. For instance, one of the issues we uncovered with our analysis was that our CMS created two URLs for each page and Google discovered both. This led to duplicate content issues because two URLs with the same content were competing against each other.

Analyzing logs is not rocket science — the logic is the same as when working with tables in Excel or Google Sheets. The hardest part is getting access to them — exporting and filtering that data.

Looking at a log file for the first time may also feel somewhat daunting, because when you open one you see thousands of lines that look like the sample below.

Calm down and take a closer look at a single line:

66.249.65.107 - - [08/Dec/2017:04:54:20 -0400] "GET /contact/ HTTP/1.1" 200 11179 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"

You’ll quickly recognize that:

66.249.65.107 is the IP address (who)
[08/Dec/2017:04:54:20 -0400] is the Timestamp (when)
GET is the Method
/contact/ is the Requested URL (what)
200 is the Status Code (result)
11179 is the Bytes Transferred (size)
"-" is the Referrer URL (source) — it's empty because this request was made by a crawler
Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html) is the User Agent (signature) — this is the user agent of Googlebot (Desktop)

Once you know what each line is composed of, it’s not so scary. It’s just a lot of information. But that’s where the next step comes in handy.
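For example, a few lines of Python are enough to split a line in the combined log format shown above into named fields. This is only a sketch; your server's log format may differ slightly:

import re

# Fields of the combined log format, matching the sample line above.
LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<timestamp>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<url>\S+) [^"]*" '
    r'(?P<status>\d{3}) (?P<bytes>\S+) "(?P<referrer>[^"]*)" "(?P<user_agent>[^"]*)"'
)

line = ('66.249.65.107 - - [08/Dec/2017:04:54:20 -0400] "GET /contact/ HTTP/1.1" '
        '200 11179 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; '
        '+http://www.google.com/bot.html)"')

hit = LOG_PATTERN.match(line).groupdict()
print(hit["ip"], hit["url"], hit["status"], hit["user_agent"])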

Tools you can use

There are many tools you can choose from that will help you analyze your log files. I won’t give you a full run-down of available ones, but it’s important to know the difference between static and real-time tools.

Static — This only analyzes a static file. You can’t extend the time frame. Want to analyze another period? You need to request a new log file. My favourite tool for analyzing static log files is Power BI.
Real-time — Gives you direct access to logs. I really like the open source ELK Stack (Elasticsearch, Logstash, and Kibana). It takes moderate effort to implement, but once the stack is ready, it lets me change the time frame based on my needs without needing to contact our developers.

Start analyzing

Don’t just dive into logs with a hope to find something — start asking questions. If you don’t formulate your questions at the beginning, you will end up in a rabbit hole with no direction and no real insights.

Here are a few samples of questions I use at the start of my analysis:

Which search engines crawl my website?
Which URLs are crawled most often?
Which content types are crawled most often?
Which status codes are returned?

If you see that Google is crawling non-existent pages (404s), you can start asking which of those requested URLs return a 404 status code.

Order the list by the number of requests, evaluate the ones with the highest numbers to find the highest-priority pages (the more requests, the higher the priority), and decide whether to redirect each URL or take some other action.
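As a sketch of that workflow, assuming the entries have already been parsed into dicts with url and status keys (as in the parsing example above), counting 404s by request volume takes only a few lines:

from collections import Counter

# Parsed log entries (dicts with "url" and "status"), e.g. from the sketch above,
# ideally filtered down to Googlebot requests first.
hits = [
    {"url": "/contact/", "status": "200"},
    {"url": "/old-page/", "status": "404"},
    {"url": "/old-page/", "status": "404"},
    {"url": "/deleted.css", "status": "404"},
]

def top_404s(entries, limit=20):
    """Requested URLs that returned a 404, ordered by number of requests."""
    counts = Counter(e["url"] for e in entries if e["status"] == "404")
    return counts.most_common(limit)

for url, requests_seen in top_404s(hits):
    print(f"{requests_seen:5d}  {url}")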

If you use a CDN or cache server, you need to get that data as well to get the full picture.

Segment your data

Grouping data into segments provides aggregate numbers that give you the big picture. This makes it easier to spot trends you might have missed by looking only at individual URLs. You can locate problematic sections and drill down if needed.

There are various ways to group URLs:

Group by content type (single product pages vs. category pages)
Group by language (English pages vs. French pages)
Group by storefront (Canadian store vs. US store)
Group by file format (JS vs. images vs. CSS)

Don’t forget to slice your data by user-agent. Looking at Google Desktop, Google Smartphone, and Bing all together won’t surface any useful insights.
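Here's a rough illustration of that kind of segmentation using pandas; the path prefixes and bot names are hypothetical, so adjust the rules to your own site and user agents:

import pandas as pd

# Parsed log entries loaded into a DataFrame with "url" and "user_agent" columns.
df = pd.DataFrame([
    {"url": "/product/blue-shirt", "user_agent": "Googlebot/2.1"},
    {"url": "/category/shirts", "user_agent": "Googlebot/2.1"},
    {"url": "/static/app.js", "user_agent": "bingbot/2.0"},
])

def section(url):
    """Map a URL to a segment; the path prefixes here are hypothetical."""
    if url.endswith((".js", ".css")) or url.startswith("/static/"):
        return "assets"
    if url.startswith("/product/"):
        return "product pages"
    if url.startswith("/category/"):
        return "category pages"
    return "other"

df["section"] = df["url"].map(section)
df["bot"] = df["user_agent"].str.extract(r"(Googlebot|bingbot)", expand=False).fillna("other")

# Requests per bot and per segment, biggest groups first.
print(df.groupby(["bot", "section"]).size().sort_values(ascending=False))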

Monitor behavior changes over time

Your site changes over time, which means so will crawlers’ behavior. Googlebot often decreases or increases the crawl rate based on factors such as a page’s speed, internal link structure, and the existence of crawl traps.

It’s a good idea to check in with your log files throughout the year or when executing website changes. I look at logs almost on a weekly basis when releasing significant changes for large websites.

By analyzing server logs at least twice a year, you'll surface changes in crawlers' behavior.

Watch for spoofing

Spambots and scrapers don’t like being blocked, so they may fake their identity — they leverage Googlebot’s user agent to avoid spam filters.

To verify that a web crawler accessing your server really is Googlebot, you can run a reverse DNS lookup and then a forward DNS lookup. More on this topic can be found in the Google Webmaster Help Center.
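As a sketch, both lookups can be done with Python's standard socket module; the googlebot.com / google.com suffix check follows Google's documented verification steps:

import socket

def is_real_googlebot(ip):
    """Reverse DNS lookup, check the domain, then confirm it resolves back to the IP."""
    try:
        host = socket.gethostbyaddr(ip)[0]   # e.g. crawl-66-249-65-107.googlebot.com
    except socket.herror:
        return False
    if not host.endswith((".googlebot.com", ".google.com")):
        return False
    try:
        return ip in socket.gethostbyname_ex(host)[2]   # forward lookup must include the IP
    except socket.gaierror:
        return False

print(is_real_googlebot("66.249.65.107"))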

Merge logs with other data sources

While it’s not necessary to connect other data sources, doing so will unlock another level of insight and context that regular log analysis might not be able to give you. The ability to easily connect multiple datasets and extract insights from them is the main reason why Power BI is my tool of choice, but you can use any tool that you’re familiar with (e.g. Tableau).

Blend server logs with multiple other sources, such as Google Analytics data, keyword rankings, sitemaps, and crawl data, and start asking questions like:

What pages are not included in the sitemap.xml but are crawled extensively?
What pages are included in the sitemap.xml file but are not crawled?
Are revenue-driving pages crawled often?
Is the majority of crawled pages indexable?

You may be surprised by the insights you’ll uncover that can help strengthen your SEO strategy. For instance, discovering that almost 70 percent of Googlebot requests are for pages that are not indexable is an insight you can act on.
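The first two questions, for instance, reduce to a set comparison once you have both lists. A minimal sketch, assuming a local copy of sitemap.xml and a set of absolute URLs extracted from your parsed logs:

import xml.etree.ElementTree as ET

# Assumes a local copy of your sitemap and a set of absolute URLs that Googlebot
# requested, built from the parsed log entries.
SITEMAP_FILE = "sitemap.xml"
crawled_urls = {
    "https://www.example.com/contact/",
    "https://www.example.com/old-page/",
}

ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
sitemap_urls = {
    loc.text.strip()
    for loc in ET.parse(SITEMAP_FILE).findall(".//sm:loc", ns)
}

print("Crawled but not in the sitemap:", crawled_urls - sitemap_urls)
print("In the sitemap but never crawled:", sitemap_urls - crawled_urls)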

You can see more examples of blending log files with other data sources in my post about advanced log analysis.

Use logs to debug Google Analytics

Don’t think of server logs as just another SEO tool. Logs are also an invaluable source of information that can help pinpoint technical errors before they become a larger problem.

Last year, Google Analytics reported a drop in organic traffic for our branded search queries. But our keyword tracking tool, STAT Search Analytics, and other tools showed no movement that would have warranted the drop. So, what was going on?

Server logs helped us understand the situation: There was no real drop in traffic. It was our newly deployed WAF (Web Application Firewall) that was overriding the referrer, which caused some organic traffic to be incorrectly classified as direct traffic in Google Analytics.

Using log files in conjunction with keyword tracking in STAT helped us uncover the whole story and diagnose this issue quickly.

Putting it all together

Log analysis is a must-do, especially once you start working with large websites.

My advice is to start with segmenting data and monitoring changes over time. Once you feel ready, explore the possibilities of blending logs with your crawl data or Google Analytics. That’s where great insights are hidden.

Want more?

Ready to learn how to get cracking and tracking some more? Reach out and request a demo to get your very own tailored walkthrough of STAT.


A Day with 412FoodRescue: Tips for Managing Nonprofit Digital Marketing

Nonprofit organizations have tough jobs. They’re busy saving the world one creative idea at a time and often have few resources to devote to developing or managing a digital strategy.

Through all the work we do – consulting, training, and blogging – we want to help demystify digital marketing and empower teams to make efficient strategy decisions so they can spend their time promoting good in the world. Our community outreach efforts range from give-back days and fundraising to, perhaps our favorite, sharing information with local organizations and nonprofits about our specialty areas. Learn how a few quick-hitting digital marketing wins could set up your organization for similar success.

Four Hours of Learning, Questions and Discussion

A few weeks ago, we invited 412FoodRescue to our offices for a half-day digital training session. We shared information about SEO, Paid Search and Google Analytics. Our goal was to equip the nonprofit team with best practices and provide quick wins to support their ever-expanding mission.

Having an in-person training allowed for a day that was about collaboration and team brainstorming instead of a one-sided lecture. As Becca from 412FoodRescue noted, “It was great to have an open learning environment where we could jump in and ask any question we needed.”

How 412FoodRescue Helps Pittsburghers

412FoodRescue is a three-year-old start-up that began – as the name suggests – in the “412”: Pittsburgh, PA.

Their business model is simple, yet so effective.

They partner with local businesses like grocery stores and restaurants to “rescue” surplus food that’s about to go to waste. When they identify an over-abundance of food, they leverage their network of 1,300+ local volunteers to complete same-day deliveries. This is coordinated through their app where volunteers receive real-time notifications when there’s a local delivery in their area.

Think of it as Uber for browning bananas!

Since 2015, 412FoodRescue’s volunteers have rescued 3.3 million pounds of food, generating 2.8 million meals. Their app is gaining national attention and they’re looking to roll out their business model to other cities. Some of our employees are food rescue-ers, so it became a cause we wanted to contribute to digitally. If you’re in the Pittsburgh area, you can sign up for their volunteer opportunities, too.

Optimizing Your Site to Find Volunteers

We began the day with SEO, sharing tips for writing unique, tightly focused title tags, meta descriptions and H1s. The goal of these fields is to reflect a page’s theme. With a little keyword research, we work-shopped writing title tags and meta descriptions for a few of their key pages.

Their site scored a 70/100 on our 13-question “Mini SEO Checklist”, which is a great score, so we spent time improving their on-page tags instead of delving into advanced technical items.

After using Google’s Keyword Planner, the “Volunteer” page title changed to “Volunteer Opportunities” and the meta description was refreshed to include a stronger call to action. When we took a look at Google Search Console, we saw their CSA program – lovingly dubbed “Ugly CSA” – actually had search demand. We added “in Pittsburgh” to help the title be more locally relevant.

It’s these small changes that can make a huge difference to a nonprofit site. Especially when there’s a critical need to be focused locally.

As a next step, the team plans to update metadata for their top 15 pages and think of future content opportunities as they conduct keyword research.

Does your business have a few key pages that could benefit from a little keyword research and some data analysis? Start small by taking cues from Google Search Console and optimizing pages by incorporating the keywords that users are already typing in to get to your pages.

Reaching a Larger Audience with Google Grants

Next up was paid search.

The team was curious to learn about creating better campaigns. Their agency had set up a Google AdWords Express account a few months ago, but we encouraged them to change their account to a “typical” Google Ads account and to sign up for a Google Grants account for additional bidding options, better management tools, and more detailed insights.

With a monthly Grants budget of $10,000, we found they could amplify their coverage by expanding their campaigns, defining important goals, choosing audience targeting and experimenting with ad formats. Having this capability will significantly change the way 412FoodRescue communicates with Pittsburghers via advertising.

Together, we outlined a search campaign designed to encourage people in the Pittsburgh area to volunteer. We chose keywords, wrote ad copy and decided on the best targeting to get their ads in front of the right people.

The team was also excited to learn about remarketing opportunities that would allow them to communicate a different message to people who had already been to their site and had shown interest in their app. This option will be a quick win, especially in the volunteering space.

To complete the paid search portion of the training, we provided a template report in Data Studio to help 412 Food Rescue quickly analyze their Google Ads data to facilitate future marketing and advertising decisions.

Take a minute to think about what relevant ad copy might look like for your users. Are you trying to rally an army of volunteers or drum up additional donations? Crafting ad copy to speak to each group of users and applying the targeting options in Google Ads can ensure you’re delivering appropriate advertising.

Seeing the Big Picture with Google Analytics

We closed the day with an overview of Google Analytics. We saved the most complicated, most detailed project for last.

The goal here was to really empower the team on where to begin with their overall analytics strategy and what resources are available to learn more about GA. We provided them with a customized list of “homework” items to help plan out their overall analytics solution and links to great resources to learn how to accomplish those tasks.

We started with a discussion of 412FoodRescue’s business objectives and how to define them. Their goals revolve around engagement, user interactions, and downloading the app to become a Food Rescuer. To plan out your own strategy, check out Sam’s amazing post A Simple Start to a Powerful Analytics Strategy.

From there we went into the importance of a solid foundation and taking advantage of everything there is to offer out of the box within GA. We provided some recommendations of things to enable and update before starting to implement more customized features, such as filtering out extra query parameters and setting up site search.

The true power of analytics comes from customization: your needs are different from the needs of the person next door, and it's impossible for Google to make an all-encompassing solution. Events and custom dimensions are the easiest tools for unlocking more in-depth insights for your site or app. We walked through the site and talked about important events to start tracking now, such as clicks on the app download buttons.

Finally, we ran through the Google Analytics interface together – answering reporting questions, showing the team our go-to reports, and making quick, easy updates to their settings. Quick changes we made included setting up a test view and updating filters.

Applying These Tactics to Your Nonprofit

Digital marketing can be overwhelming, but it can be approachable! The first step, like learning any new skill, is making it a priority.

As 412FoodRescue’s CEO and co-founder Leah Lizarondo noted, “We learned so much from our session and also learned that there is so much more to sink our teeth into.”


Setting aside time for keyword research and analyzing your existing site data can help you focus your marketing efforts and make sure you’re maximizing your limited resources.

Even a few hours can make a big impact. This is especially true for smaller organizations like nonprofits where time is valuable and resources are scarce. Our advice is to start small and scale what works.

Below are a few quick wins that you can apply to up your digital marketing:

Conduct keyword research for each page to determine a priority term. Write an optimized title tag and meta description centered around that term. Ensure you're using your full character limit and include a descriptive, compelling call-to-action. Think about how your different audiences might search, from volunteers to donors to those researching your mission. Learn how to write effective title tags.
Write compelling ad copy for target audiences. Use strong call-to-action verbs to activate your audience. Terms like “Volunteer,” “Donate,” “Learn,” or “Help” can speak to the specific audiences you’re targeting. For ideas, view our call-to-action cheat sheet.
Define and implement event tracking. The standard metrics that come with an analytics install are great, but to get in-depth knowledge of your users implement customizations like events. Check out our event naming post to understand best practices for event tracking.
Apply for a Google Ad Grant. If you are an eligible nonprofit, a Google Grant can go a long way in helping create awareness for your cause.

Our team here at LunaMetrics – Kristina, Megan and Jayna – so enjoyed our morning with 412FoodRescue. Our one regret is that the day ended too soon! We could have brainstormed for another couple of hours.

Leah, Sara and Becca are a team of tenacious learners with a passion to end food insecurity in Pittsburgh and we’re happy that we could be a small part of 412FoodRescue’s digital journey. We can’t wait to see what they’ll dream up next!

Local to Pittsburgh? Learn more about 412FoodRescue and join their mission to end food insecurity in our communities and neighborhoods.

The post A Day with 412FoodRescue: Tips for Managing Nonprofit Digital Marketing appeared first on LunaMetrics.

Image SEO: alt tag and title tag optimization

Adding images to your articles encourages people to read them, and well-chosen images can also back up your message and get you a good ranking in image search results. But you should always remember to give your images good alt attributes: alt text strengthens the message of your articles with search engine spiders and improves the accessibility of your website. This article explains all about alt tags and title tags and why you should optimize them.

Note: the term “alt tag” is a commonly used abbreviation of what’s actually an alt attribute on an img tag. The alt tag of any image on your site should describe what’s on it. Screen readers for the blind and visually impaired will read out this text and therefore make your image accessible.

What are alt tags and title tags?

This is a complete HTML image tag:

<img src="image.jpg" alt="image description" title="image tooltip">

The alt and title attributes of an image are commonly referred to as alt tag or alt text and title tag – even though they’re not technically tags. The alt text describes what’s on the image and the function of the image on the page. So if you are using an image as a button to buy product X, the alt text should say: “button to buy product X.”

The alt tag is used by screen readers, which are browsers used by blind and visually impaired people, to tell them what is on the image. The title attribute is shown as a tooltip when you hover over the element, so in the case of an image button, the image title could contain an extra call-to-action, like “Buy product X now for $19!”, although this is not a best practice.

Each image should have an alt text, not just for SEO purposes but also because blind and visually impaired people won't otherwise know what the image is about. A title attribute, however, is not required, and most of the time it doesn't make sense to add one: title tooltips are only available to mouse (or other pointing device) users, and the only case where the title attribute is required for accessibility is on <iframe> and <frame> tags.

If the information conveyed by the title attribute is relevant, consider making it available somewhere else in plain text, and if it's not relevant, consider removing the title attribute entirely.

But what if an image doesn’t have a purpose?

If you have images in your design that are purely there for design reasons, you’re doing it wrong, as those images should be in your CSS and not in your HTML. If you really can’t change these images, give them an empty alt attribute, like so:

<img src="image.png" alt="">

The empty alt attribute makes sure that screen readers skip over the image.

alt text and SEO

Google’s article about images has a heading “Use descriptive alt text”. This is no coincidence because Google places a relatively high value on alt text to determine not only what is on the image but also how it relates to the surrounding text. This is why, in our Yoast SEO content analysis, we have a feature that specifically checks that you have at least one image with an alt tag that contains your focus keyphrase.

Yoast SEO checks for images and their alt text in your posts. We're definitely not saying you should spam your focus keyphrase into every alt tag. You need good, high-quality, related images for your posts, where it makes sense to have the focus keyword in the alt text. Here's Google's advice on choosing a good alt text:

When choosing alt text, focus on creating useful, information-rich content that uses keywords appropriately and is in context of the content of the page. Avoid filling alt attributes with keywords (keyword stuffing) as it results in a negative user experience and may cause your site to be seen as spam.

If your image is of a specific product, include both the full product name and the product ID in the alt tag so that it can be more easily found. In general: if a keyphrase could be useful for finding something that is on the image, include it in the alt tag if you can. Also, don’t forget to change the image file name to be something actually describing what’s on it.
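If you want to find the images that still need attention, a small audit script can flag every img tag with a missing or empty alt attribute. This is just a sketch using Python's standard html.parser, and keep in mind that an intentionally empty alt is correct for decorative images:

from html.parser import HTMLParser

class AltAudit(HTMLParser):
    """Collect images whose alt attribute is missing or empty."""

    def __init__(self):
        super().__init__()
        self.flagged = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if not (attrs.get("alt") or "").strip():
                self.flagged.append(attrs.get("src", "(no src)"))

page_html = '<p><img src="shirt.jpg" alt="Pink shirt, front view"> <img src="logo.png"></p>'
audit = AltAudit()
audit.feed(page_html)

# Note: an intentionally empty alt="" is correct for purely decorative images.
print("Images without alt text:", audit.flagged)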

alt and title attributes in WordPress

When you upload an image to WordPress, you can set a title and an alt attribute. By default, it uses the image filename in the title attribute, and if you don't enter an alt attribute, it copies that to the alt attribute. While this is better than writing nothing, it's pretty poor practice. You really need to take the time to craft a proper alt text for every image you add to a post — users and search engines will thank you for it. The interface makes it easy: click an image, hit the edit button, and edit the alt text right there. There's no excuse for not doing this right, other than laziness. Your (image) SEO will truly benefit if you get these tiny details right. Visually challenged users will also like you all the more for it.

Read more about image SEO?

We have a very popular (and longer) article about Image SEO. That post goes into a ton of different ways to optimize images but is relatively lacking in detail when it comes to alt and title tags — think of this as an add-on to that article. I recommend reading it when you’re done here.

Read more: Optimizing images for SEO »

The post Image SEO: alt tag and title tag optimization appeared first on Yoast.

 

It’s Not Too Late To Localize Your Black Friday SEO Strategy

For most retailers, Black Friday and Cyber Monday are among the biggest revenue generators of the year, but it's surprising (except perhaps to most SEOs) how many of them neglect the basics. If you fall into that category, you still have a few days to try to turn that lemon into lemonade with some very simple updates to your site.

If you search for “black friday sale near me” you will likely see a Local Pack like this:

Notice how Google calls out that these sites mention black friday sales, deals, etc.

While most retailers likely already have a Black Friday Sale page and mention it on their home page, two out of the three sites above, Macy’s and Walmart, also mention Black Friday on their store location pages. For example:

Macy’s Black Friday:
https://l.macys.com/stoneridge-shopping-center-in-pleasanton-ca

Walmart Black Friday:
https://www.walmart.com/store/2161/pleasanton-ca/details 

While Kohl’s shows that you don’t need the location pages to be optimized for Black Friday to rank for these queries, updating your location pages to target Black Friday & Cyber Monday queries in both the title tag and the body copy will likely improve your chances of appearing in localized Black Friday SERPs.

Even if your site is in code freeze, you (hopefully) should be able to make these updates and maybe next week you’ll find yourself with more than just some leftover turkey…

The post It’s Not Too Late To Localize Your Black Friday SEO Strategy appeared first on Local SEO Guide.

 

10 Local SEO Predictions for 2019: Job Security For Local SEOs

SMBs As a Group Will Continue To Not Get SEO
A little over a year ago, my dentist moved his office but never thought about updating his listing in Google Maps, Apple Maps or his website. I showed him how to fix the issue, but this morning on the way to my end of the year teeth cleaning, not only was his old location back on Apple Maps, but he had also decided to change his business name from Joseph A. Grasso DDS to San Ramon Valley Cosmetic & Family Dentistry (perhaps for SEO reasons?), but had not bothered to update either his GMB or Apple Maps listings, let alone his Facebook page or any other citations. I mentioned this to his receptionist. Her response was “Wow, I didn’t know anything about that stuff.” I envisioned my kids’ future tuition bills and sighed with relief.
Voice Search Will Continue To Be YUGE, But So What?
We keep seeing reports of how everyone is increasing their use of voice search to find information and buy stuff. Outside of being the default app for a specific type of query on the various assistants, the end result is still often position #1 or #0 in a Google SERP. For local businesses this means you'll want to be #1 or #0 for relevant local queries, and if there's an app (e.g. Apple Maps, Yelp, etc.) that shows up in that position, then you'll want to be #1 in those apps. Kind of like the way local search has been working for years…
Some Of Your Clients May Actually Ask For Bing SEO Help
If people are asking Alexa a lot more questions, per the previous prediction, Microsoft's recently announced Cortana integration with Alexa may lead to more Bing results surfacing via Alexa. So those clients who have a data issue on Bing, and whose CEO happens to hear their kids looking for the business using Cortana on Alexa, might send you that urgent message for “Bing SEO ASAP!” OK, we know – we just needed an extra prediction to get the right number for an Instant Answer result…
Google My Business Posts Will Be Where The Action Is
Since the roll out of GMB Posts, we have been calling them “the biggest gift to SEO agencies in years.” The ability to add minimal content to appear on a business’ GMB/Knowledge Panel that can attract clicks, most of which are from brand queries, and show clients how these impact performance will be hard to resist for most agencies that are currently blogging for their clients and praying someone cares about their 250-500 words of cheaply written brilliance. Expect GMB posts to be standard in most Local SEO packages, until of course Google deprecates them later this year.
Bonus Prediction!: And while we are on the subject of GMB, I expect to see a lot more functionality, and promotion thereof, poured into this service. I wouldn’t be surprised if we saw a Super Bowl ad this year that shows how a business uses all Google services (websites, GMB messages, Q&A, Local Service Ads, GMB Post Videos, Reviews, etc.) to run its business and get customers.
Retailers Will Invest More In Local SEO
Google will continue to cannibalize SERPs with ads and “owned and operated” content such as GMB (which is basically training wheels for ads) making it easy for brands to increase their ad spend to eye-popping levels. Sooner or later multi-location brands who are tired of having to re-buy their customers every week will realize that for about 1% of their Google Ads budget, they can make a serious dent in their organic traffic and revenue. Topics like rebuilding their store locators, rewriting location pages, local linkbuilding and GMB optimization (including feeding real-time inventory to GMB) will no longer cause the CMO’s eyes to glaze over.
Links Will Still Be The Biggest Line Item In Your Local SEO Budget
There are only so many ways you can publish the best content about how to hire a personal injury attorney, before and after bunion surgery photos, or local SEO predictions. I am sure there are some cases where E-A-T trumps links, but sooner or later, in 2019 we will all need a link or two, or twenty…
We Will See More Consolidation In Local Listings & Review Management
While the business has become somewhat commodified, there is just too much value to owning the customer relationship attached to thousands of locations. Yext appears to be continuing its focus on high-value verticals (healthcare & financial services), international expansion to serve global brands and adding related functionality like Yext Brain. Over the past year, Uberall gobbled up NavAds and Reputation.com grabbed SIMPartners. Any big digital agency serving global multi-location brands sooner or later will want to own this functionality. Look for Asia to be a big growth area for these services.
(7.5) And I Wouldn’t Be Surprised If One Or More Of The Review Management Startups Gets Acquired
While online review management feels like something of a commodity, kind of like listings management, it’s also a great gateway drug for multi-location brands & SMBs to eventually buy more of your services. I recall Ted Paff, founder of CustomerLobby, once telling me “the value of review management is trending towards $0.” Of course, that was right before he sold CL to EverCommerce and took off to Nepal to find his Chi. Fast-growing services with review management and related services that are not trending towards $0 are prime targets. Keep an eye on Broadly, BirdEye, Podium, GatherUp, NearbyNow and others.
Google Search Console Will Specify Local Pack Rankings In The Performance Report
Yeah, right. But maybe, just maybe, we’ll get regex filtering?
Apple Maps Will Continue To Be The Biggest Local Search Platform Everyone Ignores
Apple made a big deal in 2018 about its new map platform and while it is exciting to have more vegetation detail, Apple still shows little sign of giving a shit about its business data. In the four years since Maps Connect launched, the functionality for businesses to control their Apple Maps profiles has barely changed. While I find Apple Maps generally fine to use (except for that time it led me straight into a dumpster in San Francisco), I still see plenty of people criticizing it. At some point perhaps Apple will realize that businesses and their agencies can help make Apple Maps much better. It would be great if we could get actual analytics, ability to enhance profiles, true bulk account management, etc., but I am skeptical that will happen in 2019.
Amazon Will Not Buy Yelp!
But if the stock price goes below $25, it seems like there’s a private equity play here. cc: David Mihm.

So in 2019, Local SEO will pretty much look like this:

The post 10 Local SEO Predictions for 2019: Job Security For Local SEOs appeared first on Local SEO Guide.

How to get the most out of PageRank and boost rankings

There are hundreds of signals that help Google understand and rank content, and one signal in particular, PageRank, is often not fully taken advantage of on large-scale websites.

Harnessing PageRank can provide the boost you need to gain traction in search results once you’ve covered some of the SEO basics, such as on-page optimisation.

But first, let’s define what PageRank is and how it works.

What is PageRank?

PageRank is a metric used by Google to determine the authority of a page, based on inbound links.

Unlike with other ranking factors, Google used to explicitly give you the PageRank score of a webpage; however, Google decided to retire the public PageRank metric.

Although the public can no longer see PageRank scores, PageRank itself is a signal still used by Google, but many websites don’t efficiently harness its potential to boost rankings.

DYK that after 18 years we’re still using PageRank (and 100s of other signals) in ranking?

Wanna know how it works? https://t.co/CfOlxGauGF pic.twitter.com/3YJeNbXLml

— Gary “鯨理” Illyes (@methode) February 9, 2017

How is PageRank calculated?

The methodology to calculate PageRank has evolved since the first introduction of Larry Page’s (a co-founder of Google) PageRank patent.

“Even when I joined the company in 2000, Google was doing more sophisticated link computation than you would observe from the classic PageRank papers”

Matt Cutts

The original PageRank calculation would equally divide the amount of PageRank a page held by the number of outbound links found on that page. For example, if page A has a PageRank of 1 and two outbound links, to page B and page C, both page B and page C receive 0.5 PageRank.

However, we need to add one more aspect to our basic model. The original PageRank patent also cites what is known as the damping factor, which deducts approximately 15% of PageRank every time a link points to another page. The damping factor prevents artificial concentration of rank importance within loops of the web and is still used today in PageRank computation.
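To make the mechanics concrete, here is a toy version of that calculation in Python for a hypothetical three-page site, using the commonly cited damping factor of 0.85 (so roughly 15% is lost at each hop). The computation Google actually runs is, as noted below, far more sophisticated:

# Toy PageRank iteration with the classic damping factor d = 0.85,
# i.e. roughly 15% is "lost" at every hop.
links = {          # hypothetical three-page site: page -> pages it links to
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
}
d = 0.85
pagerank = {page: 1.0 / len(links) for page in links}

for _ in range(50):   # iterate until the scores settle
    new_rank = {page: (1 - d) / len(links) for page in links}
    for page, outlinks in links.items():
        share = d * pagerank[page] / len(outlinks)   # split equally across outbound links
        for target in outlinks:
            new_rank[target] += share
    pagerank = new_rank

print({page: round(score, 3) for page, score in pagerank.items()})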

PageRank and the reasonable surfer model

The way PageRank is currently worked out is likely far more sophisticated than the original calculation, a notable example of this would be the reasonable surfer model, which may adjust the amount of PageRank that gets allocated to a link based on the probability it will be clicked. For instance, a prominent link placed above the fold is more likely to be clicked on than a link found at the bottom of a page and therefore may receive more PageRank.

PageRank simplified

An easy way to understand how PageRank works is to think that every page has a value and the value is split between all the pages it links to.

So, in theory, a page that has attained quality inbound links and is well linked to internally, has a much better chance of outranking a page that has very little inbound or internal links pointing to it.

How to harness PageRank?
If you don’t want to waste PageRank, don’t link to unimportant pages!

Following on from the previous explanation of PageRank, the first solution to harness PageRank is to simply not link to pages you don’t want to rank, or at the very least reduce the number of internal links that point to unimportant pages. For example, you’ll often see sites that stuff their main navigation with pages that don’t benefit their SEO, or their users.

However, some sites are set up in such a way that makes it challenging to harness PageRank, and below are some implementations and tips that can help you get the most out of PageRank in these kinds of situations.

# fragments
What is a # fragment?

The # fragment is often added at the end of a URL to send users to a specific part of a page (called an anchor) and to control indexing and distribution of PageRank.

How to use # fragments?

When the goal is to prevent a large number of pages from being indexed, direct and preserve PageRank, # fragments should be added after the most important folder in your URL structure, as illustrated in example A.

We have two pages:

Example A

www.example.com/clothing/shirts#colours=pink,black

URL with a # fragment

Example B

www.example.com/clothing/shirts,colours=pink,black

URL without a # fragment

There is unlikely to be much, if any, specific search demand for a combination of pink and black shirts that warrants a standalone page. Indexing these types of pages will dilute your PageRank and potentially cause indexing bloat, where similar variations of a page compete against each other in search results and reduce the overall quality of your site. So you’ll be better off consolidating and directing PageRank to the main /shirts page.

Google will consider anything that’s placed after a # fragment in a URL to be part of the same document, so www.example.com/clothing/shirts#colours=pink,black should return www.example.com/clothing/shirts in search results. It’s a form of canonicalisation.

if page.php#a loads different content than page.php#b , then we generally won’t be able to index that separately. Use “?” or “/”

— John (@JohnMu) February 22, 2017

We fold #a & #b together, for indexing too, so all signals come together.

— John (@JohnMu) April 5, 2017

Pros:

# fragment URLs should consolidate PageRank to the desired page and prevent pages you don’t want to rank from appearing in search results.

Crawl resource should be focused on pages you want to rank.

Cons:

Adding # fragments can be challenging for most frameworks.

Using # fragments can be a great way to concentrate PageRank to pages you want to rank and prevent pages from being indexed, meaning # fragment implementation is particularly advantageous for faceted navigation.

Canonicalisation
What is canonicalisation?

rel="canonical" ‘suggests’ a preferred version of a page and can be added as an HTML tag or as an HTTP header. rel="canonical" is often used to consolidate PageRank and prevent low-quality pages from being indexed.

How to use canonicalisation?

Going back to our shirt example…

We have two pages:

Example A

www.example.com/clothing/shirts/

Category shirt page

Example B

www.example.com/clothing/shirts,colours=pink,black

Category shirt page with selected colours

Page B-type pages can often come about as a result of faceted navigation, so by making the rel="canonical" URL on page B mirror the rel="canonical" URL on page A, you are signalling to search engines that page A is the preferred version and that any ranking signals, including PageRank, should be transferred to page A.

However, there are disadvantages with a canonicalisation approach as discussed below.

Pros:

Can transfer PageRank to pages you want to rank.

Can prevent duplicate/low-quality pages from being indexed.

Cons:

A canonical tag is a suggestive signal to search engines, not a directive, so they can choose to ignore your canonicalisation hints. You can read the following webmaster blog to help Google respect your canonicalisation hints.

Google has suggested that canonicals are treated like 301 redirects and in combination with the original PageRank patent, this implies that not all PageRank will pass to the specified canonical URL.

Even though canonicalised pages are crawled less frequently than indexed pages, they still get crawled. In certain situations, such as large-scale faceted navigation, the sheer amount of overly dynamic URLs can eat into your website's crawl budget, which can have an indirect impact on your site's visibility.

Overall, if choosing a canonicalisation approach, be confident that Google will respect your canonicalisation suggestions and that you've considered the potential impact on your site's crawl budget if you have a large number of pages you want to canonicalise.

Make sure internal links return a 200 response code

Arguably one of the quickest wins in preserving PageRank is to update all internal links on a website so that they return a 200 response code.

We know from the original PageRank patent that each link has a damping factor of approx. 15%. So in cases where sites have a large number of internal links that return response codes other than 200, such as 3xx, updating them will reclaim PageRank.

Consider a chain of 301 redirects: each 301 redirect results in a PageRank loss of roughly 15%. Now imagine the amplified loss in PageRank if there were hundreds, or thousands, of these redirects across a site.

This is an extremely common issue for, though not exclusive to, sites that have undergone a migration. The exception to the rule of losing 15% PageRank through a 301 redirect is when a site migrates from HTTP to HTTPS. Google has been strongly encouraging sites to migrate to HTTPS for a while now, and as an extra incentive to encourage more HTTPS migrations, 3xx redirects from HTTP to HTTPS URLs will not cause PageRank to be lost.

Download Screaming Frog for free and check out their guide for identifying internal links that return a non-200 response code.
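If you'd rather script a quick spot check, the sketch below fetches a single page, extracts its internal links, and flags any that don't return a 200. It assumes the requests library and a hypothetical start URL; a crawler like Screaming Frog will do this at scale:

import requests
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

START_PAGE = "https://www.example.com/"   # hypothetical page to spot-check

class LinkCollector(HTMLParser):
    """Collect the href of every <a> tag on the page."""

    def __init__(self):
        super().__init__()
        self.links = set()

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.add(urljoin(START_PAGE, href))

collector = LinkCollector()
collector.feed(requests.get(START_PAGE, timeout=10).text)

domain = urlparse(START_PAGE).netloc
for link in sorted(collector.links):
    if urlparse(link).netloc != domain:
        continue   # only audit internal links
    status = requests.head(link, allow_redirects=False, timeout=10).status_code
    if status != 200:
        print(f"{status}  {link}")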

Reclaiming PageRank from 404 pages

Gaining inbound links is the foundation for increasing the amount of PageRank that can be dispersed across your site, so it really hurts when pages that have inbound links pointing to them return a 404 error. Pages that return a 404 error no longer exist and therefore can’t pass on PageRank.

Use tools such as Moz’s Link Explorer to identify 404 pages that have accumulated inbound links and 301 redirect them to an equivalent page to reclaim some of the PageRank.

However, avoid the temptation to redirect 404 pages to your homepage when there is no appropriate equivalent page to redirect them to. Redirecting pages to your homepage will likely result in little, if any, PageRank being reclaimed, due to the differences between the original content and your homepage.
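As a starting point, here is a minimal Python sketch of that workflow, assuming you have exported a CSV of URLs with inbound links (for example from Link Explorer) containing a hypothetical “URL” column, and that requests is installed:

```python
import csv
import requests

# Hypothetical export of linked-to URLs from a backlink tool, with a "URL" column
with open("linked_urls.csv", newline="") as f:
    urls = [row["URL"] for row in csv.DictReader(f)]

for url in urls:
    status = requests.head(url, allow_redirects=True, timeout=10).status_code
    if status == 404:
        print("404 with inbound links – candidate for a 301 to an equivalent page:", url)
```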

Things to avoid
rel=”nofollow”

The use of rel=”nofollow” is synonymous with an old-school SEO tactic whereby SEOs tried to ‘sculpt’ the flow of PageRank by adding rel=”nofollow” to internal links that were deemed unimportant. The goal was to strategically manage how PageRank gets distributed throughout a website.

The rel=”nofollow” attribute was originally introduced by Google to fight comment link spam, where people would try to boost their backlink profile by inserting links into comment sections on blogs, articles or forum posts.

This tactic has been redundant for many years now, as Google changed how rel=”nofollow” worked. Now, the PageRank sent out with every link is divided by the total number of links on a page, rather than the number of followed links.

However, adding rel=”nofollow” to a link means that the share of PageRank assigned to it will not benefit the destination page, resulting in PageRank attrition. Additionally, this attrition of PageRank also applies to URLs you’ve disallowed in your robots.txt file, or pages that have had a noindex tag in place for a while.

Embedded JavaScript links

Content that’s reliant on JavaScript has been shown to negatively impact organic search performance. Essentially, it’s an inefficient process for Google to render, understand and then evaluate client-side rendered content, and some in the SEO industry believe it’s possible to sculpt PageRank by embedding unimportant internal links in JavaScript.

I decided to ask John Mueller whether JavaScript embedded links receive less PageRank than links found in the HTML and he responded unequivocally.

yes

— John (@JohnMu) October 18, 2018

JavaScript SEO is a complex and constantly evolving topic, and Google is getting better at understanding and processing JavaScript.

However, if you’re going to use a JavaScript framework, make sure that Google is able to fully render your content.

Conclusion

PageRank is still an influential ranking signal, and preserving, directing and ultimately harnessing this signal should be a part of any plan to boost your organic search visibility. Every website’s situation is unique, and a one-size-fits-all approach will not always apply, but hopefully this blog highlights some potential quick wins and tactics to avoid. Let me know in the comments if I’ve missed anything out.

Ecommerce SEO Guide: SEO Best Practices for Ecommerce Websites

Posted by on Dec 20, 2018 in SEO Articles | Comments Off on Ecommerce SEO Guide: SEO Best Practices for Ecommerce Websites

Ecommerce SEO Guide: SEO Best Practices for Ecommerce Websites

If you want to get more traffic and sales to your ecommerce website, then on-page SEO is a critical first step. There’s a multitude of how-to articles and tutorials on the web offering general SEO advice, but far fewer that specifically address the needs of ecommerce entrepreneurs. Today, we’d like to give you a basic understanding of on-site search engine optimization for ecommerce. It will be enough to get you started, make sure you’re sending all the right signals to Google, and set you up for SEO success. Let’s dive in. What is Ecommerce SEO? Definition Ecommerce SEO is the…

The post Ecommerce SEO Guide: SEO Best Practices for Ecommerce Websites appeared first on The Daily Egg.

Why Local Businesses Will Need Websites More than Ever in 2019

Posted by on Dec 13, 2018 in SEO Articles | Comments Off on Why Local Businesses Will Need Websites More than Ever in 2019

Why Local Businesses Will Need Websites More than Ever in 2019

Posted by MiriamEllis

64% of 1,411 surveyed local business marketers agree that Google is becoming the new “homepage” for local businesses. Via Moz State of Local SEO Industry Report

…but please don’t come away with the wrong storyline from this statistic.

As local brands and their marketers watch Google play Trojan horse, shifting from top benefactor to top competitor by replacing former “free” publicity with paid packs, Local Service Ads, zero-click SERPs, and related structures, it’s no surprise to see forum members asking, “Do I even need a website anymore?”

Our answer to this question is, “Yes, you’ve never needed a website more than you will in 2019.” In this post, we’ll examine:

Why it looks like local businesses don’t need websites
Statistical proofs of why local businesses need websites now more than ever
The current status of local business websites and most-needed improvements

How Google stopped bearing so many gifts

Within recent memory, a Google query with local intent brought up a big pack of ten nearby businesses, with each entry taking the user directly to these brands’ websites for all of their next steps. A modest amount of marketing effort was rewarded with a shower of Google gifts in the form of rankings, traffic, and conversions.

Then these generous SERPs shrank to seven spots, and then three, with the mobile sea change thrown into the bargain and consisting of layers and layers of Google-owned interfaces instead of direct-to-website links. In 2018, when we rustle through the wrapping paper, the presents we find from Google look cheaper, smaller, and less magnificent.

Consider these five key developments:

1) Zero-click mobile SERPs

This slide from a recent presentation by Rand Fishkin encapsulates his findings regarding the growth of no-click SERPs between 2016–2018. Mobile users have experienced a 20% increase in delivery of search engine results that don’t require them to go any deeper than Google’s own interface.

2) The encroachment of paid ads into local packs

When Dr. Peter J. Meyers surveyed 11,000 SERPs in 2018, he found that 35% of competitive local packs feature ads.

3) Google becoming a lead gen agency

At last count, Google’s Local Service Ads program, via which they interpose themselves as the paid lead gen agent between businesses and consumers, has taken over 23 business categories in 77 US cities.

4) Even your branded SERPs don’t belong to you

When a user specifically searches for your brand and your Google Knowledge Panel pops up, you can likely cope with the long-standing “People Also Search For” set of competitors at the bottom of it. But that’s not the same as Google allowing Groupon to advertise at the top of your KP, or putting lead gen from Doordash and GrubHub front and center to nickel and dime you on your own customers’ orders.

5) Google is being called the new “homepage” for local businesses

As highlighted at the beginning of this post, 64% of marketers agree that Google is becoming the new “homepage” for local businesses. This concept, coined by Mike Blumenthal, signifies that a user looking at a Google Knowledge Panel can get basic business info, make a phone call, get directions, book something, ask a question, take a virtual tour, read microblog posts, see hours of operation, thumb through photos, see busy times, read and leave reviews. Without ever having to click through to a brand’s domain, the user may be fully satisfied.

“Nothing is enough for the man to whom enough is too little.”
– Epicurus

There are many more examples we could gather, but they can all be summed up in one way: None of Google’s most recent local initiatives are about driving customers to brands’ own websites. Local SERPs have shrunk and have been re-engineered to keep users within Google’s platforms to generate maximum revenue for Google and their partners.

You may be as philosophical as Epicurus about this and say that Google has every right to be as profitable as they can with their own product, even if they don’t really need to siphon more revenue off local businesses. But if Google’s recent trajectory causes your brand or agency to conclude that websites have become obsolete in this heavily controlled environment, please keep reading.

Your website is your bedrock

“65% of 1,411 surveyed marketers observe strong correlation between organic and local rank.” – Via Moz State of Local SEO Industry Report

What this means is that businesses which rank highly organically are very likely to have high associated local pack rankings. In the following screenshot, if you take away the directory-type platforms, you will see how the brand websites ranking on page 1 for “deli athens ga” are also the two businesses that have made it into Google’s local pack:

How often do the top 3 Google local pack results also have a 1st-page organic ranking?

In a small study, we looked at 15 head keywords across 7 US cities and towns. This yielded 315 possible entries in Google’s local pack. Of that 315, 235 of the businesses ranking in the local packs also had page 1 organic rankings. That’s a 75% correlation between organic website rankings and local pack presence.

*It’s worth noting that where local and organic results did not correlate, it was sometimes due to the presence of spam GMB listings, or to mystery SERPs that did not make sense at first glance — perhaps as a result of Google testing, in some cases.

Additionally, many local businesses are not making it to the first page of Google anymore in some categories because the organic SERPs are inundated with best-of lists and directories. Often, local business websites were pushed down to the second page of the organic results. In other words, if spam, “best-ofs,” and mysteries were removed, the local-organic correlation would likely be much higher than 75%.

Further, one recent study found that even when Google’s Local Service Ads are present, 43.9% of clicks went to the organic SERPs. Obviously, if you can make it to the top of the organic SERPs, this puts you in very good CTR shape from a purely organic standpoint.

Your takeaway from this

The local businesses you market may not be able to stave off the onslaught of Google’s zero-click SERPs, paid SERPs, and lead gen features, but where “free” local 3-packs still exist, your very best bet for being included in them is to have the strongest possible website. Moreover, organic SERPs remain a substantial source of clicks.

Far from it being the case that websites have become obsolete, they are the firmest bedrock for maintaining free local SERP visibility amidst an increasing scarcity of opportunities.

This calls for an industry-wide doubling down on organic metrics that matter most.

Bridging the local-organic gap

“We are what we repeatedly do. Excellence, then, is not an act, but a habit.”
– Aristotle

A 2017 CNBC survey found that 45% of small businesses have no website, and, while most large enterprises have websites, many local businesses qualify as “small.”

Moreover, a recent audit of 9,392 Google My Business listings found that 27% have no website link.

When 1,411 marketers were asked which one task they want clients to devote more resources to, it’s no coincidence that 66% listed a website-oriented asset. This includes local content development, on-site optimization, local link building, technical analysis of rankings/traffic/conversions, and website design, as shown in the following Moz survey graphic:

In an environment in which websites are table stakes for competitive local pack rankings, virtually all local businesses not only need one, but they need it to be as strong as possible so that it achieves maximum organic rankings.

What makes a website strong?

The Moz Beginner’s Guide to SEO offers incredibly detailed guidelines for creating the best possible website. While we recommend that everyone marketing a local business read through this in-depth guide, we can sum up its contents here by stating that strong websites combine:

Technical basics
Excellent usability
On-site optimization
Relevant content publication
Publicity

For our present purpose, let’s take a special look at those last three elements.

On-site optimization and relevant content publication

There was a time when on-site SEO and content development were treated almost independently of one another. And while local businesses will need to make a little extra effort to put their basic contact information in prominent places on their websites (such as the footer and Contact Us page), publication and optimization should be viewed as a single topic. A modern strategy takes all of the following into account:

Keyword and real-world research tell a local business what consumers want
These consumer desires are then reflected in what the business publishes on its website, including its homepage, location landing pages, about page, blog and other components
Full reflection of consumer desires includes ensuring that human language (discovered via keyword and real-world research) is implemented in all elements of each page, including its tags, headings, descriptions, text, and in some cases, markup

What we’re describing here isn’t a set of disconnected efforts. It’s a single effort that’s integral to researching, writing, and publishing the website. Far from stuffing keywords into a tag or a page’s content, focus has shifted to building topical authority in the eyes of search engines like Google by building an authoritative resource for a particular consumer demographic. The more closely a business is able to reflect customers’ needs (including the language of their needs), in every possible component of its website, the more relevant it becomes.

A hypothetical example of this would be a large medical clinic in Dallas. Last year, their phone staff was inundated with basic questions about flu shots, like where and when to get them, what they cost, would they cause side effects, what about side effects on people with pre-existing health conditions, etc. This year, the medical center’s marketing team took a look at Moz Keyword Explorer and saw that there’s an enormous volume of questions surrounding flu shots:

This tiny segment of the findings of the free keyword research tool, Answer the Public, further illustrates how many questions people have about flu shots:

The medical clinic need not compete nationally for these topics, but at a local level, a page on the website can answer nearly every question a nearby patient could have about this subject. The page, created properly, will reflect human language in its tags, headings, descriptions, text, and markup. It will tell all patients where to come and when to come for this procedure. It has the potential to cut down on time-consuming phone calls.

And, finally, it will build topical authority in the eyes of Google to strengthen the clinic’s chances of ranking well organically… which can then translate to improved local rankings.

It’s important to note that keyword research tools typically do not reflect location very accurately, so research is typically done at a national level, and then adjusted to reflect regional or local language differences and geographic terms, after the fact. In other words, a keyword tool may not accurately reflect exactly how many local consumers in Dallas are asking “Where do I get a flu shot?”, but keyword and real-world research signals that this type of question is definitely being asked. The local business website can reflect this question while also adding in the necessary geographic terms.

Local link building must be brought to the fore of publicity efforts

Moz’s industry survey found that more than one-third of respondents had no local link building strategy in place. Meanwhile, link building was listed as one of the top three tasks to which marketers want their clients to devote more resources. There’s clearly a disconnect going on here. Given the fundamental role links play in building Domain Authority, organic rankings, and subsequent local rankings, building strong websites means bridging this gap.

First, it might help to examine old prejudices that could cause local business marketers and their clients to feel dubious about link building. These most likely stem from link spam which has gotten so out of hand in the general world of SEO that Google has had to penalize it and filter it to the best of their ability.

Not long ago, many digital-only businesses were having a heyday with paid links, link farms, reciprocal links, abusive link anchor text and the like. An online company might accrue thousands of links from completely irrelevant sources, all in hopes of escalating rank. Clearly, these practices aren’t ones an ethical business can feel good about investing in, but they do serve as an interesting object lesson, especially when a local marketer can point out to a client that the best local links typically result from real-world relationship-building.

Local businesses are truly special because they serve a distinct, physical community made up of their own neighbors. The more involved a local business is in its own community, the more naturally link opportunities arise from things like local:

Sponsorships
Event participation and hosting
Online news
Blogs
Business associations
B2B cross-promotions

There are so many ways a local business can build genuine topical and domain authority in a given community by dint of the relationships it develops with neighbors.

An excellent way to get started on this effort is to look at high-ranking local businesses in the same or similar business categories to discover what work they’ve put in to achieve a supportive backlink profile. Moz Link Intersect is an extremely actionable resource for this, enabling a business to input its top competitors to find who is linking to them.

In the following example, a small B&B in Albuquerque looks up two luxurious Tribal resorts in its city:

Link Intersect then lists out a blueprint of opportunities, showing which links one or both competitors have earned. Drilling down, the B&B finds that Marriott.com is linking to both Tribal resorts on an Albuquerque things-to-do page:

The small B&B can then try to earn a spot on that same page, because it hosts lavish tea parties as a thing-to-do. Outreach could depend on the B&B owner knowing someone who works at the local Marriott personally. It could include meeting with them in person, or on the phone, or even via email. If this outreach succeeds, an excellent, relevant link will have been earned to boost organic rank, underpinning local rank.

Then, repeat the process. Aristotle might well have been speaking of link building when he said we are what we repeatedly do and that excellence is a habit. Good marketers can teach customers to have excellent habits in recognizing a good link opportunity when they see it.

Taken altogether

Without a website, a local business lacks the brand-controlled publishing and link-earning platform that so strongly influences organic rankings. In the absence of this, the chances of ranking well in competitive local packs will be significantly less. Taken altogether, the case is clear for local businesses investing substantially in their websites.

Acting now is actually a strategy for the future

“There is nothing permanent except change.”
– Heraclitus

You’ve now determined that strong websites are fundamental to local rankings in competitive markets. You’ve absorbed numerous reasons to encourage local businesses you market to prioritize care of their domains. But there’s one more thing you’ll need to be able to convey, and that’s a sense of urgency.

Right now, every single customer you can still earn from a free local pack listing is immensely valuable for the future.

This isn’t a customer you’ve had to pay Google for, as you very well might six months, a year, or five years from now. Yes, you’ve had to invest plenty in developing the strong website that contributed to the high local ranking, but you haven’t paid a penny directly to Google for this particular lead. Soon, you may be having to fork over commissions to Google for a large portion of your new customers, so acting now is like insurance against future spend.

For this to work out properly, local businesses must take the leads Google is sending them right now for free, and convert them into long-term, loyal customers, with an ultimate value of multiple future transactions without Google as the middleman. And if these freely won customers can be inspired to act as word-of-mouth advocates for your brand, you will have done something substantial to develop a stream of non-Google-dependent revenue.

This offer may well expire as time goes by. When it comes to the capricious local SERPs, marketers resemble the Greek philosophers who knew that change is the only constant. The Trojan horse has rolled into every US city, and it’s a gift with a questionable shelf life. We can’t predict if or when free packs might become obsolete, but we share your concerns about the way the wind is blowing.

What we can see clearly right now is that websites will be anything but obsolete in 2019. Rather, they are the building blocks of local rankings, precious free leads, and loyal revenue, regardless of how SERPs may alter in future.

For more insights into where local businesses should focus in 2019, be sure to explore the Moz State of Local SEO industry report:

Read the State of Local SEO industry report

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

An introduction to HTTP/2 for SEOs

Posted by on Dec 8, 2018 in SEO Articles | Comments Off on An introduction to HTTP/2 for SEOs

An introduction to HTTP/2 for SEOs

In the mid-90s there was a famous incident where an email administrator at a US university fielded a phone call from a professor who was complaining that his department could only send emails 500 miles. The professor explained that whenever they tried to email anyone farther away their emails failed — it sounded like nonsense, but it turned out to actually be happening. To understand why, you need to realise that the speed of light actually has more impact on how the internet works than you may think. In the email case, the timeout for connections was set to about 6 milliseconds – if you do the maths, that is roughly the time it takes light to make a 500-mile round trip.

We’ll be talking about trucks a lot in this blog post!

The time that it takes for a network connection to open across a distance is called latency, and it turns out that latency has a lot to answer for. Latency is one of the main issues that affects the speed of the web, and was one of the primary drivers for why Google started inventing HTTP/2 (it was originally called SPDY when they were working on it, before it became a web standard).

HTTP/2 is now an established standard and is seeing a lot of use across the web, but it is still not as widespread as it could be across most sites. It is an easy opportunity to improve the speed of your website, but it can be fairly intimidating to try to understand it.

In this post I hope to provide an accessible top-level introduction to HTTP/2, specifically targeted towards SEOs. I do brush over some of the technical details and don’t cover all the features of HTTP/2; my aim here isn’t to give you an exhaustive understanding, but instead to help you understand the important parts in the most accessible way possible.

HTTP 1.1 – The Current Norm

Currently, when you request a web page or other resource (such as images, scripts, CSS files etc.), your browser speaks HTTP to a server in order to communicate. The current version is HTTP/1.1, which has been the standard for the last 20 years, with no changes.

Anatomy of a Request

We are not going to drown in the deep technical details of HTTP too much in this post, but we are going to quickly touch on what a request looks like. There are a few bits to a request:

The top line here is saying what sort of request this is (GET is the normal sort of request, POST is the other main one people know of), and what URL the request is for (in this case /anchorman/) and finally which version of HTTP we are using.

The second line is the mandatory ‘host’ header, which is a part of all HTTP 1.1 requests and covers the situation where a single webserver may be hosting multiple websites and needs to know which one you are looking for.

Finally, there will be a variety of other headers, which we are not going to get into. In this case I’ve shown the User Agent header, which indicates which sort of device and software (browser) you are using to connect to the website.
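If you want to see this anatomy for yourself, here is a minimal Python sketch that sends a raw HTTP/1.1 request over a socket (the host and the /anchorman/ path are stand-ins for the example in the screenshot):

```python
import socket

# The three parts described above: request line, mandatory Host header, other headers
request = (
    "GET /anchorman/ HTTP/1.1\r\n"                      # method, URL, HTTP version
    "Host: www.example.com\r\n"                         # mandatory in HTTP/1.1
    "User-Agent: Mozilla/5.0 (X11; Linux x86_64)\r\n"   # one of many optional headers
    "Connection: close\r\n"
    "\r\n"
)

with socket.create_connection(("www.example.com", 80)) as sock:
    sock.sendall(request.encode("ascii"))
    response = sock.recv(4096)

# First line of the response, e.g. "HTTP/1.1 404 Not Found" for this made-up path
print(response.decode("latin-1").split("\r\n")[0])
```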

HTTP = Trucks!

In order to help explain and understand HTTP and some of the issues, I’m going to draw an analogy between HTTP and … trucks! We are going to imagine that an HTTP request being sent from your browser is a truck that has to drive from your browser over to the server:

A truck represents an HTTP request/response to a server

In this analogy, we can imagine that the road itself is the network connection (TCP/IP, if you want) from your computer to the server:

The road is a network connection – the transport layer for our HTTP Trucks

Then a request is represented by a truck, that is carrying a request in it:

HTTP Trucks carry a request from the browser to the server

The response is the truck coming back with a response, which in this case is our HTML:

HTTP Trucks carry a response back from the server to the browser

“So what is the problem?! This all sounds great, Tom!” – I can hear you all saying. The problem is that in this model, anyone can stare down into the truck trailers and see what they are hauling. Should an HTTP request contain credit card details, personal emails, or anything else sensitive, anybody can see your information.

HTTP Trucks aren’t secure – people can peek at them and see what they are carrying

HTTPS

HTTPS was designed to combat the issue of people being able to peek into our trucks and see what they are carrying.

Importantly, HTTPS is essentially identical to HTTP – the trucks and the requests/responses they transport are the same as they were. The response codes and headers are all the same.

The difference all happens at the transport (network) layer; we can imagine it as a tunnel over our road:

In HTTPS, requests & responses are the same as HTTP. The road is secured.

In the rest of the article, I’ll imagine we have a tunnel over our road, but won’t show it – it would be boring if we couldn’t see our trucks!

Impact of Latency

So the main problem with this model is related to the top speed of our trucks. In the 500-mile email introductory story we saw that the speed of light can have a very real impact on the workings of the internet.

HTTP Trucks cannot go faster than the speed of light.

HTTP requests and many HTTP responses tend to be quite small. However, our trucks can only travel at the speed of light, and so even these small requests can take time to go back and forth from the user to the website. It is tempting to think this won’t have a noticeable impact on website performance, but it is actually a real problem…

HTTP Trucks travel at a constant speed, so longer roads mean slower responses.

The farther the distance of the network connection between a user’s browser and the web server (the length of our ‘road’), the farther the request and response have to travel, which means they take longer.
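As a rough feel for the numbers, here is a short sketch using approximate figures (light in fibre travels at roughly two-thirds of its speed in a vacuum, and the distances are illustrative):

```python
SPEED_OF_LIGHT_MILES_S = 186_000
FIBRE_FACTOR = 2 / 3  # light in fibre is roughly two-thirds as fast as in a vacuum

def round_trip_ms(miles: float) -> float:
    one_way_s = miles / (SPEED_OF_LIGHT_MILES_S * FIBRE_FACTOR)
    return one_way_s * 2 * 1000

for label, miles in [("same city", 50), ("across the US", 2_500), ("across the Atlantic", 3_500)]:
    print(f"{label}: best-case round trip ≈ {round_trip_ms(miles):.0f} ms")

# same city: best-case round trip ≈ 1 ms
# across the US: best-case round trip ≈ 40 ms
# across the Atlantic: best-case round trip ≈ 56 ms
```

And that is the best case for a single round trip, before the server has done any work at all.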

Now consider that a typical website is not a single request and response, but is instead a sequence of many requests and responses. Often a response will mean more requests are required – for example, an HTML file probably references images, CSS files and JavaScript files:

Some of these files then may have further dependencies, and so on. Typically a webpage may require 50-100 separate requests:

Web pages nowadays often require 50-100 separate HTTP requests.

Let’s look at how that may look for our trucks…

Send a request for a web page:

We send a request to the web server for a page.

Request travels to server:

The truck (request) may take 50ms to drive to the server.

Response travels back to browser:

And then 50ms to drive back with the response (ignoring time to compile the response!).

The browser parses the HTML response and realises there are a number of other files that are needed from the server:

After parsing the HTML, the browser identifies more assets to fetch. More requests to send!

Limit of HTTP/1.1

The problem we now encounter is that there are several more files we need to fetch, but with an HTTP/1.1 connection each road can only handle a single truck at a time. Every HTTP request needs its own TCP (networking) connection, and each truck can only carry one request at a time.

Each truck (request) needs its own road (network connection).

Furthermore, building a new road, or opening a new networking connection, also requires a round trip. In our world of trucks we can liken this to needing a steamroller to first lay the road and then add our road markings. This is another whole round trip, which adds more latency:

New roads (network connections) require work to open them.

This means another whole round trip to open new connections.

Typically, browsers open around 6 simultaneous connections:

Browsers usually open 6 roads (network connections).

However, if we are looking at 50-100 files needed for a webpage we still end up in the situation where trucks (requests) have to wait their turn. This is called ‘head of line blocking’:

Often trucks (requests) have to wait for a free road (network connection).

If we look at the waterfall diagram for a simple page (in this example, this HTTP/2 site) that has a CSS file and a lot of images, you can see this in action:

Waterfall diagrams highlight the impact of round trips and latency.

In the diagram above, the orange and purple segments can be thought of as our steamrollers, where new connections are made. You can see initially there is just one connection open (line 1), and another connection being opened. Line 2 then re-uses the first connection, and line 3 is the first request over the second connection. When those complete, lines 4 & 5 are the next two images.

At this point the browser realises it will need more connections, so four more are opened, and then we can see requests going in batches of 6 at a time, corresponding with the 6 roads (network connections) that are open.

Latency vs Bandwidth

In the waterfall diagram above, each of these images may be small, but each requires a truck to come and fetch it. This means lots of round trips, and given we can only run 6 at a time, there is a lot of time spent with requests waiting.

It is sometimes difficult to understand the difference between bandwidth and latency. Bandwidth can be thought of as the load capacity of our trucks: each truck could carry more. This often doesn’t help with webpage load times though, given each request and response cannot share a truck with another. This is why increasing bandwidth has been shown to have a limited impact on the load time of pages, as demonstrated in research conducted by Mike Belshe at Google, which is discussed in this article from Googler Ilya Grigorik:

The reality was clear that in order to improve the performance of the web, the issue of latency would need to be addressed. The research above was what led to Google developing the SPDY protocol which later turned into HTTP/2.

Improving the impact of latency

In order to improve the impact that latency has on website load times, there are various strategies that have been employed. One of these is ‘sprite maps’ which take lots of small images and jam them together into single files:

Sprite maps are a trick used to reduce the number of trucks (requests) needed.

The advantage of sprite maps is that they can all be put into one truck (request/response) as they are just a single file. Then clever use of CSS can display just the portion of the image that corresponds to the desired image. One file means only a single request and response are required to fetch them, which reduces the number of round trips required.

Another thing that helps to reduce latency is using a CDN platform, such as CloudFlare or Fastly, to host your static assets (images, CSS files etc. – things that are not dynamic and the same for every visitor) on servers all around the world. This means that the round trips for users can be along a much shorter road (network connection) because there will be a nearby server that can provide them with what they need.

CDNs have servers all around the world, which can make the required roads (network connections) shorter.

CDNs also provide a variety of other benefits, but latency reduction is a headline feature.

HTTP/2 – The New World

So hopefully you have now realised that HTTP/2 can help reduce latency and dramatically improve the performance of pages. How does it go about it?

Introducing Multiplexing – More trucks to the rescue!

With HTTP/2 we are allowed multiplexing, which essentially means we are allowed to have more than one truck on each road:

With HTTP/2 a road (network connection) can handle many trucks (requests/responses).

We can immediately see the change in behaviour on a waterfall diagram – compare this with the one above (note the change in the scale too – this is a lot faster):

We now only need one road (connection) then all our trucks (requests) can share it!

The exact speed benefits you may get depend on a lot of other factors, but by removing the problem of head of line blocking (trucks having to wait) we can immediately get a lot of benefits, for almost no cost to us.
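To see multiplexing from code, here is a minimal Python sketch using the httpx library (assuming it is installed with HTTP/2 support). The asset URLs are hypothetical, but all of the requests can be sent concurrently over a single connection:

```python
import asyncio
import httpx

async def main():
    # One client = one road; with http2=True, many trucks (requests) can share it
    async with httpx.AsyncClient(http2=True) as client:
        urls = [f"https://www.example.com/images/tile-{i}.png" for i in range(1, 7)]
        responses = await asyncio.gather(*(client.get(url) for url in urls))
        for resp in responses:
            print(resp.http_version, resp.status_code, resp.url)

asyncio.run(main())
```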

Same old trucks

With HTTP/2 our trucks and their contents stay essentially the same as they always were; we can just imagine we have a new traffic management system.

Requests look as they did before:

The same response codes exist and mean the same things:

Because the content of the trucks doesn’t change, this is great news for implementing HTTP/2 – your web platform or CMS does not need to be changed and your developers don’t need to write any code! We’ll discuss this below.

Server Push

A much anticipated feature of HTTP/2 is ‘Server Push’, which allows a server to respond to a single request with multiple responses. Imagine a browser requests an HTML file, and the server knows that the browser will also need a specific CSS file and a specific JS file. The server can then just send those straight back, without waiting for them to be requested:

Server Push: A single truck (request) is sent…

Server Push: … but multiple trucks (responses) are sent back.

The benefit is obvious – it removes another whole round trip for each resource that the server can ‘anticipate’ the client will need.

The downside is that, at the moment, this is often implemented badly, and it can mean the server sends trucks that the client doesn’t need (because it has already cached the response), which means you can make things worse.

For now, unless you are very sure you know what you are doing you should avoid server push.
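For reference only, the mechanism usually looks something like the following minimal Flask sketch (a hypothetical app): the response carries a preload Link header, which some HTTP/2 servers and CDNs sitting in front of the app interpret as a push hint. Treat it as an illustration of how push gets wired up, not a recommendation to enable it.

```python
from flask import Flask, make_response

app = Flask(__name__)

@app.route("/")
def index():
    resp = make_response("<html><head>...</head><body>...</body></html>")
    # Some HTTP/2 front-ends read this preload hint and push the asset alongside the HTML
    resp.headers["Link"] = "</static/style.css>; rel=preload; as=style"
    return resp
```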

Implementing HTTP/2

Ok – this sounds great, right? Now you should be wondering how you can turn it on!

The most important thing is to understand that because the requests and responses are the same as they always were, you do not need to update the code on your site at all. You need to update your server to speak HTTP/2 – and then it will do the new ‘traffic management’ for you.

If that seems hard (or if you already have one) you can instead use a CDN to help you deploy HTTP/2 to your users. Something like CloudFlare, or Fastly (my favourite CDN – it requires more advanced knowledge to set up but is super flexible), would sit in front of your webserver and speak HTTP/2 to your users:

A CDN can speak HTTP/2 for you whilst your server speaks HTTP/1.1.

Because the CDN will cache your static assets, like images, CSS files, Javascript files and fonts, you still get the benefits of HTTP/2 even though your server is still in a single truck world.

HTTP/2 is not another migration! 

It is important to realise that to get HTTP/2 you will need to already have HTTPS, as all the major browsers will only speak HTTP/2 when using a secure connection:

HTTP/2 requires HTTPS

However, setting up HTTP/2 does not require a migration in the same way as HTTPS did. With HTTPS your URLs were changing from http://example.com to https://example.com and you required 301 redirects, and a new Google Search Console account and a week long meditation retreat to recover from the stress.

With HTTP/2 your URLs will not change, and you will not require redirects or anything like that. Browsers and devices that can speak HTTP/2 will do so (it is actually the guy in the steamroller who communicates that part – but that is a-whole-nother story..!), and other devices will fall back to speaking HTTP/1.1, which is just fine.

We also know that Googlebot does not speak HTTP/2 and will still use HTTP/1.1:

https://moz.com/blog/challenging-googlebot-experiment

However, don’t despair – Google will still notice that you have made things better for users, as we know they are now using usage data from Chrome users to measure site speed in a distributed way:

https://moz.com/blog/google-chrome-usage-data-measure-site-speed

This means that Google will notice the benefit you have provided to users with HTTP/2, and that information will make it back into Google’s evaluation of your site.

Detecting HTTP/2

If you are interested in whether a specific site is using HTTP/2 there are a few ways you can go about it.

My preferred approach is to turn on the ‘Protocol’ column in the Chrome developer tools. Open up the dev tools, go to the ‘Network’ tab and if you don’t see the column then right click to add it from the dropdown:

Alternatively, you can install this little Chrome Extension, which will indicate if a site is using it (but won’t give you the per-connection breakdown you get from doing the above):

https://dis.tl/showhttp2
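You can also check from a script. Here is a minimal Python sketch using the httpx library (again assuming it is installed with HTTP/2 support), with a placeholder URL to swap for the site you want to test:

```python
import httpx

with httpx.Client(http2=True) as client:
    resp = client.get("https://www.example.com/")  # swap in the site you want to test
    # Reports the protocol actually negotiated: "HTTP/2" or "HTTP/1.1"
    print(resp.http_version)
```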

Slide Deck

If you would prefer to consume this as a slide deck, then you can find it on Slideshare. Feel free to re-use the deck in part or its entirety, provided you provide attribution (@TomAnthonySEO):

An introduction to HTTP/2 & Service Workers for SEOs from Tom Anthony
Wrap Up

Hopefully, you found this useful. I’ve found the truck analogy makes something that can seem hard to understand somewhat more accessible. I haven’t covered a lot of the intricate details of HTTP/2 or some of the other functionality, but this should help you understand things a little bit better.

I have, in discussions, extended the analogy in various ways, and would love to hear if you do too! Please jump into the comments below for that, or to ask a question, or just hit me up on Twitter.