Artificial intelligence for marketers

Posted by on Jul 17, 2018 in SEO Articles | Comments Off on Artificial intelligence for marketers

Artificial Intelligence (AI) has blown up in the past few years and is quickly starting to take over the world of business marketing. From digital assistants like Alexa and Google Home to Siri and search algorithms, AI allows consumers to access the information they want quickly and efficiently.

It’s predicted that the world of AI is only going to continue to grow until it’s incorporated into most aspects of our business and personal lives. Biz Journals anticipates that “62% of enterprises will use AI technologies by 2018”, an increase from the 38% of businesses that were using it in 2017.

Freeing up marketers’ time

Some are afraid that AI is going to take jobs away from marketers by performing tasks usually carried out by humans, but this isn’t the case.

Rather, AI can quickly carry out many of the time-consuming, tedious tasks that previously fell to small business owners, freeing up their time to focus on more in-depth work that requires a human level of personalization. Let AI handle tasks like recommendations and customer service so that marketers can focus on being creative and developing imaginative, engaging campaigns, something AI is definitely not capable of doing.

Customer service is one area in which AI – specifically chatbots – should absolutely be used. In 2018 consumers expect to have their questions and concerns answered immediately, any hour of the day and day of the week, and this is a demand that humans can’t possibly meet, but robots can.

In addition to freeing up time for marketers, AI is accelerating marketing and sales. As one commentator explains, by “giving robots access to [your] brand you’re giving consumers the same access”. Embrace this thought and give AI opportunities to expand on your marketing efforts. Consumers are using AI in their searches to find what they’re looking for faster than ever before. Marketers need to make sure their content is optimized to meet these demands.

AI will be able to increase brand sophistication by analyzing copious amounts of consumer information. It uses machine learning to anticipate what a customer wants and needs faster than any human is able, and in turn this increases brand engagement and sophistication, and promotes customer loyalty.

The fact of the matter is that humans just don’t have the capability to access and analyze the huge amounts of customer data that AI can go through in a matter of seconds.

Data analysis

In order to truly reap the benefits of AI, it must be used correctly. AI should be used to deliver highly personalized and relevant messages. Consumers don’t want to feel like they’re being marketed to by a robot, even if that is the reality of the situation.

Generation Y and Z consumers want “a truly personalized experience [on websites] and within messages”. In the past, businesses could develop big marketing campaigns that appealed to huge amounts of people and they were wildly effective; but those days are gone.

Chatbots and virtual assistants are ‘the face’ of AI marketing and should be used accordingly. Although AI in marketing is more than just digital assistants, products like Alexa and Siri get most of the attention from businesses and consumers. One likely reason is that these assistants act like humans: even though consumers love computers and all their capabilities, they still want to feel like they’re interacting with a human. Remember this when crafting your AI marketing efforts and make sure your campaigns have that human element to them.

Also keep in mind that search queries made through digital assistants are only expected to increase in the coming years, so make sure your mobile site is optimized to meet this demand.

Social media marketing

AI can also be used in social media marketing. It’s already been used for targeting advertising, but AI can do so much more in the field of social media to help followers connect and engage with brands.

AI can quickly scan through social media content, data, and user history to help marketers create more relevant content. Facebook uses AI extensively, for everything from automatic face tagging in photos to determining which stories show up in a user’s newsfeed.

Despite how much better AI can make the user experience on social media platforms, many companies are still hesitant to incorporate it into their own social media marketing efforts. Hopefully this article has shown that instead of taking away jobs from marketers, AI will be able to free up some time so that they can focus on tasks that need a human element to them instead of drowning in data and mundane chores.


Have you incorporated AI into your business marketing plan?  How do you anticipate AI impacting your company’s marketing goals?  Comment in the section below!  Also, for more information on how AI is projected to impact marketing in the future, check out this article by Search Engine Watch.


Amanda DiSilvestro is a writer for No Risk SEO, an all-in-one reporting platform for agencies. You can connect with Amanda on LinkedIn, or check out her content services at

An Evolution of SEO Techniques – What’s in and what’s out

Posted by on Jul 16, 2018 in SEO Articles | Comments Off on An Evolution of SEO Techniques – What’s in and what’s out

Just as the world wide web has been growing rapidly since its creation, so has SEO.

What worked 5 years ago might have lost half of its credibility today. With Google, the one and almost only search engine giant, pumping out new lines of its algorithm every day, you can imagine that all SEOs are kept on their toes all the time.

Here I have compiled a list of SEO techniques that you may or may not have heard of before. Let’s look at them together and you’ll see whether they are in or out.

A long, long time ago, when Google wasn’t yet the total dominator of the internet world, nor its algorithm as complex and fine-tuned as it is now, the keyword was literally THE key. It was the only SEO technique. Searches were answered with a list of websites that literally matched the words used in the query.

If you looked for “red sneakers”, the search engine would return the pages most heavily loaded with “red sneakers”. Noticing this, smart online marketers started stuffing as many keywords as they could into a web page so it would rank.

The extremes they would go to included making sentences like this: Red sneakers dot com is the best place to shop for a variety of red sneakers including red sneakers for boys and girls you can absolutely get your desired red sneakers on red sneakers dot com.

Well, isn’t that an eyesore. What’s better (or worse, really) is that they even styled the keywords to blend into the background of the web page, invisible to visitors but still visible to the crawlers, so the page could be ranked for the keyword.

That way a page would rank for whichever keywords they wanted, as long as they stuffed in as many as they could.

Google, of course, noticed this unethical practice and put an end to it by updating its algorithm and handing out penalties accordingly.

But, here’s the point: keywords still weigh heavily in ranking. How else can users get what they want if they don’t use keywords? A query is, in its essence, made by typing in a word or a phrase. So how can a page rank for a keyword without keyword stuffing?


Get on that keyword research action!

Keywords are the core of SEO, and they play a big part in Google ranking. Now that keyword stuffing is out of the way, how can you rank?

Also known as keyword optimization, this SEO technique is the way to get ranked for a keyword without angering Google. It is the leveled-up way to utilize keywords.

Keyword research is done by conducting research (surprise!) and analysis, then selecting only the best keywords to target. Every niche has a specific set of keywords that gets more search volume, drives more traffic, or converts the most. But how can we know which keywords those are? There’s no shortcut: you need to do some in-depth research.

Keywords are the essence of SEO, and choosing a relevant keyword can make or break a business. Too general? You’ll get buried among a thousand other generic websites. Too specific? You might not get enough visitors to sustain yourself.

The most straightforward way is to search for the niche you’re targeting and see what the highest-ranked pages are. Then start analyzing them, stripping them down to see which keywords they are using.

You can also use tools like Google Keyword Planner or Google Trends to get an idea of how a certain keyword will perform or how relevant it is.
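Once you have volume and competition estimates from a tool, the trade-off described above (too general vs. too specific) can be made concrete. Here is a minimal, hypothetical sketch: the numbers are made up for illustration, and the scoring formula is just one simple heuristic, not what any keyword tool actually uses.

```python
# Hypothetical sketch: ranking candidate keywords by a simple score.
# Volume/competition numbers are invented; in practice they would come
# from a tool like Google Keyword Planner.
def score_keyword(monthly_volume, competition):
    """Favor high-volume, low-competition keywords.

    competition is a 0.0-1.0 estimate (1.0 = hardest to rank for).
    """
    return monthly_volume * (1.0 - competition)

candidates = {
    "sneakers": (250_000, 0.99),           # too general: huge competition
    "red sneakers": (12_000, 0.60),
    "red sneakers for boys": (900, 0.25),  # long-tail: easy but tiny
}

ranked = sorted(candidates, key=lambda k: score_keyword(*candidates[k]), reverse=True)
print(ranked[0])  # red sneakers
```

Note how the generic head term loses despite its huge volume, and the very narrow long-tail term loses despite its low competition; the middle ground wins.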


Remember keyword stuffing? An even uglier version of keyword stuffing is using irrelevant keywords. This is the kind of SEO technique that you should definitely avoid.

Web pages want to rank, and they want to rank for as many keywords as possible. Back in the good old days, when SEO was more straightforward and keyword stuffing still worked like a charm, marketers figured out they could also utilize irrelevant keywords.

Combined with the technique of making keywords invisible to visitors, they started stuffing different keywords on a single page.

Now red sneakers dot com would not only rank for red sneakers; it would also rank for best camping tent, walkman for sale, and sushi restaurant in LA. This list of keywords doesn’t make sense, right? Nor did it to the innocent users. Imagine searching for a sushi restaurant in LA only to be directed to red sneakers dot com. Not even JAPANESE sneakers, but RED sneakers.

You can imagine how unhappy the users were. Google wasn’t impressed either, so this kind of practice is heavily criticized and frowned upon. Those who dare try it now will find themselves on the receiving end of Google’s merciless wrath.

IN: LSI Keyword

Instead of trying to rank for multiple keywords that may or may not be related to your niche, targeting a few selected keywords while adding LSI keywords to the pile will work much better.

What is LSI Keyword? LSI stands for Latent Semantic Indexing.

Along the way, Google figured out that instead of focusing solely on keywords, it should also start actually understanding the content of a website. So instead of looking at a single keyword, it started associating a topic keyword with a group of related keywords. This gave birth to a brand new SEO technique.

Let’s look at an example. The LSI keywords for “dog” could be “dog facts”, “dog pictures”, “dog flu”, “dog for sale”, etc.

Related searches for the word “dog” show which keywords Google considers related to it.

So the short answer to “what is an LSI keyword” is: whatever other keywords Google thinks are related to your main keyword. From Google’s point of view, if you’re talking about “dog”, it’s only natural that you might start talking about “dog for sale”.

By analyzing whether your web page is laden with LSI keywords, instead of counting how many times a keyword is used, Google can figure out (to the best of its ability, anyway) how relevant your web page is to a certain topic, and that in turn influences your Google rank.
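The idea can be sketched in a few lines. This is purely illustrative and not Google’s actual algorithm: it just measures what fraction of a set of related terms a page mentions, rather than how often the main keyword is repeated.

```python
# Illustrative sketch (not Google's real ranking logic): estimate topical
# relevance by counting how many related (LSI-style) terms appear on a page.
def lsi_coverage(page_text, related_terms):
    text = page_text.lower()
    found = [t for t in related_terms if t in text]
    return len(found) / len(related_terms)

related = ["dog facts", "dog pictures", "dog flu", "dog for sale"]
page = "We cover dog facts, dog flu symptoms, and where to find a dog for sale."
print(lsi_coverage(page, related))  # 0.75 -> 3 of the 4 related terms appear
```

A keyword-stuffed page repeating only “dog” would score low here, while a page that naturally covers the topic scores high, which is exactly the shift the section describes.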


No dear, it doesn’t work that way.

Once the search engine giant noticed that keywords were becoming a less reliable signal for providing users with relevant search results, it came up with another aspect: links.

The reasoning behind links is this: a website would not voluntarily link to another website it didn’t find relevant, informative, or trustworthy. Based on this reasoning, Google started using link volume as a ranking criterion.

However, as always, we can never have nice things, because people started abusing it. Again. How? They came up with link farms, link selling, link trading, whatever way they could think of to abuse links for a higher rank. This is the kind of SEO technique that you should stay away from.

According to the almighty Google, buying links means: “exchanging money for links, or posts that contain links; exchanging goods or services for links; or sending someone a “free” product in exchange for them writing about it and including a link.”

If you think you wouldn’t mind trading a couple hundred dollars for a link, and a little push up the SERP, think again.

A link is much more complicated than it seems because of PageRank. Under this algorithm, each link has a different value based on a couple of criteria, from page authority and page content relevancy to how many outward links the page has.

So you can imagine that buying links isn’t as simple as handing over the money and walking out with a rank-1 web page. You might also never get off Google’s blacklist.


So now it is certain that buying links will not only not work; you also risk getting penalized for being caught doing something that Google clearly frowns upon.

So how can you get that link juice that is clearly important for ranking? Short answer: effort and sweat and blood. (not real blood but you get the idea)

As far as Google’s reasoning goes, a link is hard to fake because people will only willingly link to you if they find your content worth sharing, informative, and authoritative. So try to be all of those.

Invest time and effort into producing quality content that is relevant to your market and consumers. It might take some time, but people will start to notice and slowly you will get a link here and there.

Hey, Rome wasn’t built in a day, and neither is a rank-1 page. Instead of spending money on buying links, it’s better to spend it on creating good content. At least honest work won’t get you penalized by Google and wiped out of the index.


SEO goes hand in hand with content. White hat SEO goes with good content; black hat SEO goes with lazy content. How do the lazy ones do it? By duplicating others’ hard work.

Duplicate content is content that appears at multiple URLs on the World Wide Web.

Duplicate content is tricky for search engines because they don’t know:

which one is the original,
which one to rank,
whether the copies should be treated as different versions or not.

Google is smart, but things like this can still confuse it. So marketers will try to plagiarize content and hope to get ranked for the piece. Hey, isn’t that stealing? You might ask, and yes, it is. Google rolled out the Panda update to counter unethical SEO techniques like this. And if you come across someone plagiarizing your work, you can file a complaint to have it taken down.
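A crude way to see how duplicates get flagged at all: normalize the text and compare fingerprints. This is a minimal sketch under simplifying assumptions; real search engines use far more robust near-duplicate techniques (shingling, SimHash, and the like) that catch lightly reworded copies too.

```python
# Minimal duplicate-content sketch: normalize case and whitespace,
# then compare cryptographic fingerprints of the text.
import hashlib
import re

def fingerprint(text):
    normalized = re.sub(r"\s+", " ", text.strip().lower())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

original = "Red sneakers are back in style this summer."
scraped  = "  Red sneakers are BACK in style   this summer. "
print(fingerprint(original) == fingerprint(scraped))  # True -> duplicates
```

Exact-hash matching only catches verbatim copies; that limitation is precisely why scraped-and-lightly-edited content needed smarter detection like the Panda update mentioned above.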

Other than duplicated pages, pages with little value that are laden with keywords, and scrapers who scrape others’ work and publish it as their own, are also considered thin content.

These tactics might have worked in the past, but the era is now long gone.


Not just any content… but quality content!

For SEO and inbound marketing, quality content is king. Consistently pumping out quality, relevant, and creative content is one of the best SEO tactics you can employ.

You might have noticed that a lot of e-commerce websites come with a blog page. Other than being a platform for publishing informative and insightful articles, a blog also acts as a magnet for incoming traffic and search ranking.

Now, if you have a website selling baby products, it’s only natural for you to publish articles related to them, be it the best time to start feeding a baby solid food or how to make sure your house is childproof, because those are the topics that interest your potential customers.

With each article published, you are building up trust and a sense of authority with your visitors. When they want to make a purchase, they are more likely to buy from you, because in their eyes you are an authority in your niche and they trust your expertise.

Remember keyword research and LSI keywords? Every time you publish an article, you are adding keywords and LSI keywords to your website, making it more and more relevant to your niche. Paired with keyword research, you can consistently put out quality blog posts that are relevant to and interest your potential customers, thereby turning them into actual customers.

To put it shortly, quality content can attract visitors, boost SERP ranking, build authority and trust, and convert.

Tempted to start blogging on your e-commerce site yet? Like Nike says, just do it.


How cloaking works.

Want to know one of the worst SEO techniques, one that will definitely get you on Google’s bad side? The answer is cloaking.

What is cloaking? As the word itself suggests, cloaking means covering something up to hide what it actually is.

This is a technique where a site shows the search engine one version of a page but shows visitors another. The site is covering itself up and hiding its true form from the search engine crawlers.

Why would someone do that? Mostly to deceive the search engine into granting a higher rank; with a higher rank, the website can attract a bigger audience. But you can imagine that the user experience will be terrible, since users are deceived into visiting a website that does not host the content it claims to, which totally defeats Google’s purpose of catering to what users want.

Google by no means tolerates any form of cloaking; it is considered a violation of Google’s Webmaster Guidelines. You might get a boosted ranking for a few weeks, but is it worth it when your website gets penalized and wiped off the index? Nope, with a capital N.
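The mechanics above boil down to branching on the User-Agent header, which is also how cloaking can be spotted: fetch the same URL as a regular browser and as Googlebot and compare what comes back. The sketch below uses a stand-in function instead of a real HTTP server so it stays self-contained; the HTML strings are invented examples.

```python
# Hedged sketch of cloaking and one way to detect it. The "server" here is
# a stand-in function, not a real website.
def cloaked_server(user_agent):
    # A cloaking site decides what to serve based on the User-Agent header.
    if "Googlebot" in user_agent:
        return "<h1>In-depth sushi restaurant guide for LA</h1>"
    return "<h1>Buy red sneakers now!!!</h1>"

def looks_cloaked(fetch):
    as_bot  = fetch("Mozilla/5.0 (compatible; Googlebot/2.1)")
    as_user = fetch("Mozilla/5.0 (Windows NT 10.0) Chrome/67.0")
    return as_bot != as_user

print(looks_cloaked(cloaked_server))  # True -> crawler and visitor see different pages
```

An honest site returns the same page regardless of who is asking, so the same check comes back False.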


Schema markup, unlike cloaking, works in totally the opposite way: instead of hiding from the search engine, you are telling the search engines exactly what the page is, down to exactly what each line on the page is. Schema.org is the brainchild of Google, Bing, Yandex, and Yahoo!, and seeing all these big names in the same sentence, you can tell that Schema is THE thing.

These search engines came together to help you correctly tag your web pages, so that they can display your pages accurately and more specifically on the SERP. The more specific your markup is, the easier it is to match user search intent, providing the better user experience that search engines strive to deliver.

Because of the complexity of a web page, it is difficult for crawlers to correctly understand each phrase according to its intent. By correctly utilizing schema, you are basically telling the search engine what is what.

Whether your page is an article or a news story, whether it is about tourism or fashion, who the author is, and so on: there are a lot of things that can be marked up to tell the search engine exactly what they are, giving visitors a much more accurate and relevant portrayal.

Schema markup can easily be added to the HTML, so you have no excuse not to utilize it. Google even has a tool to help you do just that.
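To make this concrete, here is what a blog-post markup might look like as JSON-LD, built as a plain dictionary. The headline, author name, and date are hypothetical placeholders; in a real page the resulting JSON would sit inside a <script type="application/ld+json"> tag in the HTML.

```python
# Sketch of schema.org markup for a blog post, expressed as JSON-LD.
# All field values below are hypothetical examples.
import json

blog_post = {
    "@context": "https://schema.org",
    "@type": "BlogPosting",
    "headline": "An Evolution of SEO Techniques",
    "author": {"@type": "Person", "name": "Jane Doe"},  # placeholder author
    "datePublished": "2018-07-16",
    "about": "SEO",
}

# This string is what would be embedded in the page's <head>.
print(json.dumps(blog_post, indent=2))
```

Each property ("headline", "author", "datePublished") tells the crawler explicitly what that piece of the page is, which is exactly the "what is what" labeling described above.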

Note that the whole schema vocabulary is much more comprehensive and complex; the complete list of properties can be viewed on the Schema.org website.

A part of the list on blog properties and their schema markup tags.

Now that you have a general idea of these 10 SEO techniques, I trust you to have good judgment about which ones to employ. At the end of the day, remember that sticking to Google’s Webmaster Guidelines will forever be the correct answer.



Why I Spent $500,000 Buying a Blog That Generates No Revenue

Posted by on Jul 16, 2018 in SEO Articles | Comments Off on Why I Spent $500,000 Buying a Blog That Generates No Revenue

(If you are wondering, the image of me above was taken when I used to work at KISSmetrics with Hiten Shah… I used to have hair)

In early January 2017, I purchased the KISSmetrics website for $500,000.

If you go to the site, you’ll notice that it forwards here (which I will get into later in the post).

The $500,000 didn’t get me the company, KISSmetrics, or any of the revenue streams. The parent company, Space Pencil, is continually improving and developing the product.

And on top of that, there are restrictions. I can’t just pop up a competing company or any company on the KISSmetrics site.

So why did they sell me the domain? And why would I pay $500,000 for it?

I can’t fully answer why they sold it, but I do know a lot of their customers came from word of mouth, conferences, paid ads, and other forms of marketing that didn’t include SEO or content marketing.

For that reason, the domain probably wasn’t as valuable to them as it was to me. And of course, who wouldn’t want extra cash?

I’m assuming they are very calculated, because they are an analytics company, so they probably ran the numbers on how much revenue the inbound traffic was generating for them and concluded that the $500,000 price tag was worth it.

Now, before I get into why I spent $500,000 on the domain, let me first break down my thought process as I am buying out a lot of properties in the marketing space (more to be announced in the future).

Why am I buying sites that aren’t generating revenue?

This wasn’t the first or the last site that I’ll buy in the space.

I recently blogged about how I bought Ubersuggest. And it wasn’t generating a single dollar in revenue.

Well technically, there were ads on the site, but I quickly killed those off.

And eventually, I ported it over to

When I am looking at sites to buy, I am only looking for 1 thing… traffic. And of course, the quality (and relevancy) of that traffic.

See, I already have a revenue stream, which is my ad agency, Neil Patel Digital.

So, my goal is to find as many sites that have a similar traffic profile to and leverage them to drive my agency more leads.

How do you know you won’t lose money?

I don’t!

This approach doesn’t guarantee I’ll make more money.

I look at the business as tons of tiny experiments. You don’t build a huge business through one simple marketing strategy or tactic.

You have to combine a lot of little things to get your desired outcome.

And sometimes you’ll make mistakes along the way that will cost you money, which is fine. You have to keep one thing in mind… without testing, you won’t be big.

With my ad agency, we tend to mainly have U.S. clients. Yes, we serve other regions as well… for example, we have an ad agency in Brazil.

But I myself mainly focus on driving traffic to the U.S. ad agency, and the other teams just replicate as I don’t speak Portuguese, German, or any of the required languages for the other regions we are in.

So, when I buy companies, I look for traffic that is ideally in the U.S.

Sure, the ad agency can work with companies in Australia, Canada, and even the United Kingdom, but it’s tough.

There’s a huge difference in currency between Australia and the U.S. and the same goes for Canada.

And with the U.K. there is a 5 to 8-hour time zone difference, which makes it a bit more difficult to communicate with clients.

That’s why when I buy a site, I’m ideally looking for U.S. traffic.

When I bought Ubersuggest it had very little U.S. traffic. Indonesia and India were the two most popular regions.

But I bought it because I knew I could build a much better tool and over time grow the U.S. traffic by doing a few email blasts, getting on Product Hunt, and by creating some press.

And I have…

As you can see from the screenshot above, U.S. is the most popular region followed by India and Brazil.

Over time it shouldn’t be too difficult to 3 or even 4x that number as long as I release more features.

Now, my costs on Ubersuggest have gotten into the 6 figures per month, and I am not generating any income from it.

There is no guarantee that it will generate any revenue, but I have a pretty effective sales funnel, which I will share later in the post. Because of that sales funnel my risk with Ubersuggest is pretty low.

As long as I can grow the traffic enough, I should be able to monetize.

What about KISSmetrics?

As for KISSmetrics, I mainly bought the domain for the blog traffic.

During its peak it was generating 1,260,681 unique visitors per month:

By the time I bought the blog, traffic had dropped to 805,042 unique visitors per month:

That’s a 36% drop in traffic. Ouch!

And then to make matters worse, I decided that I wanted to cut the traffic even more.

There were so many articles on KISSmetrics that were outdated and irrelevant, so I had no choice but to cut them.

For example, there were articles about Vine (which Twitter purchased and killed), Google Website Optimizer (no longer exists), Mob Wars (a Facebook game that no longer exists)… and the list goes on and on.

In addition to that, I knew that I could never monetize irrelevant traffic. Yes, more traffic is good, but only as long as it is relevant.

I instantly cut the KISSmetrics blog in half by “deleting” over 1,024 blog posts. Now, I didn’t just delete them; I made sure I added 301 redirects to the most relevant pages here on
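The redirect step can be sketched as a simple lookup: every deleted URL is permanently (status 301) pointed at the most relevant surviving page, so its link equity isn’t thrown away. The paths below are hypothetical examples, not the actual redirect table.

```python
# Sketch of a 301-redirect table for pruned blog posts. The old and new
# paths are invented examples for illustration.
REDIRECTS = {
    "/marketing-with-vine/": "/blog/video-marketing/",
    "/google-website-optimizer-tips/": "/blog/ab-testing/",
}

def handle_request(path):
    """Return (HTTP status, target path) for an incoming request."""
    if path in REDIRECTS:
        return 301, REDIRECTS[path]   # permanent redirect to the new home
    return 200, path                  # page still exists, serve it

print(handle_request("/marketing-with-vine/"))  # (301, '/blog/video-marketing/')
```

In practice the same mapping would live in the web server config (nginx rewrites, .htaccess rules, or a CMS redirect plugin) rather than application code, but the logic is the same.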

Once I did that, my traffic dropped again. I was now sitting at 585,783 unique visitors a month.

It sucks, but it had to be done. The last thing I wanted to do was spend time and money maintaining old blog posts that would never drive a dollar in revenue.

I knew that if someone was going to come to my blog to research Vine, there was little to no chance that the person would convert into a 6-figure consulting contract.

After I pruned and cropped the KISSmetrics blog, I naturally followed the same path of Ubersuggest and merged it in to

The merge

The KISSmetrics merge was a bit more complicated than Ubersuggest.

With Ubersuggest, I didn’t have a keyword research tool on, so all I had to do was slap on a new design, add a feature or two, and port it over.

With KISSmetrics, a lot of the content was similar to For the ones that were similar, I kept the version considering this blog generates more traffic than the KISSmetrics one.

As for all of the content that was unique and different, I ended up moving it over and applying 301 redirects.

If I had skipped the pruning and cropping stage that I described above, the KISSmetrics blog would have had more traffic, and when I merged it in, the numbers would have looked even better.

But in marketing, you can’t focus on vanity metrics like how many more unique visitors you are getting per month. You need to keep your eye on the prize.

And for me, that’s leads.

The more leads I generate for my ad agency, the more likely I’ll increase my revenue.

Here’s my lead count for the weeks prior to the KISSmetrics merge:

When looking at the table above, keep in mind it shows leads from the U.S. only.

The KISSmetrics blog was merged on the 25th. When you add up all of the numbers from the previous week, there were 469 leads in total, of which 61 were marketing qualified leads.

That means there were 61 leads that the sales reps were able to contact as the vast majority of leads are companies that are too small for us to service.

When you look at the week of the 25th, there were a total of 621 leads, 92 of which were marketing qualified leads.

Just from that one acquisition, I was able to grow my marketing qualified leads by 50.8%. 🙂

I know what you’re thinking, though. The week after the 25th (7/2), the leads tanked again. Well, you have to keep in mind that the table only shows leads from the U.S., and during that week there was a national holiday, the 4th of July. So leads were expected to be low.

But still, even with the holiday, we generated 496 leads, 68 of which were marketing qualified. That’s still more marketing qualified leads than we generated before we had the KISSmetrics traffic.

The early results show that this is going to work out (or so I hope). If you ever want to consider buying up sites that aren’t generating revenue, you need to know your numbers like the back of your hand.

My sales funnel

Some of you are probably wondering how I promote my agency from this site. As I mentioned earlier, I will share my funnel and stats with you.

The way I monetize the traffic is by collecting leads (and my sales reps turn those leads into customers).

On the homepage, you will see a URL box.

Once you enter a URL, we do a quick analysis (it’s not 100% accurate all of the time).

And then we show you how many technical SEO errors you have and collect your information (this is how you become a lead).

And assuming we think you are a good fit, you see a screen that allows you to schedule a call (less than 18% of the leads see this).

From there, someone on my team will do a discovery call with you.

Assuming things go well, a few of us internally review everything to double-check that we can really help; we then create projections and a presentation before pitching you for your money (in exchange for services, of course).

That’s the funnel in a nutshell… It’s pretty fine-tuned as well.

For example, when someone books a call we send them text reminders using Twilio to show up to the call as we know this increases the odds of you getting on the phone.

We even do subtle things like asking for your “work email” on the lead form. We know that 9 out of 10 leads that give us a Gmail, Hotmail, AOL, or any other non-work email are typically not qualified.

And it doesn’t stop there… there are lead forms all over the site for this same funnel.

If you are reading a blog post like this, you’ll see a bar at the top that looks something like:

Or if you are about to exit, you will see an exit popup that looks like:

You’ll even see a thank you page that promotes my ad agency once you opt-in:

And if I don’t convince you to reach out to us for marketing help right then and there, you’ll also receive an email or two from me about my ad agency.

As you can see, I’ve fine-tuned my site for conversions.

So much so, that every 1,000 unique visitors from the U.S. turns into 4.4 leads. And although that may not seem high, keep in mind that my goal isn’t to get as many leads as possible. I’m optimizing for quality over quantity as I don’t want to waste the time of my sales team.

For example, I had 2 reps that had a closing ratio of 50% last month. That means for every 2 deals they pitched, 1 would sign up for a 6-figure contract, which is an extremely high closing ratio.

Hence, I am trying to focus on quality so everyone in sales can get to 50% as it makes the business more efficient and profitable.

The last thing you want to do is pay a sales rep tons of money to talk to 50 people to only find 1 qualified lead. That hurts both you and your sales reps.
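The funnel economics above can be sanity-checked with simple arithmetic. This sketch uses the figures quoted in the post (4.4 leads per 1,000 U.S. visitors, under 18% of leads reaching the scheduling screen, a 50% close rate) plus an assumed $100,000 contract value as the lower bound of a "6-figure" deal:

```python
# Back-of-the-envelope funnel math using the figures mentioned in the post.
# The $100,000 contract value is an assumption (the "6-figure" lower bound).
visitors = 1_000_000           # unique U.S. visitors
leads = visitors / 1000 * 4.4  # 4.4 leads per 1,000 unique visitors
pitched = leads * 0.18         # under 18% of leads see the scheduling screen
closed = pitched * 0.50        # the 50% closing ratio of the best reps
revenue = closed * 100_000     # assumed contract value

print(int(leads))    # 4400
print(int(pitched))  # 792
print(int(closed))   # 396
print(f"${revenue:,.0f}")
```

Even with the deliberately conservative lead volume, the assumed numbers show why optimizing for lead quality rather than quantity can still pencil out.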


The strategy I am using to buy websites may seem risky, but I know my numbers like the back of my hand. From an outsider’s perspective it may seem crazy, but to me, it is super logical.

And the reason I buy sites for their traffic is that I already have a working business model.

So, buying sites based on their traffic is much cheaper than buying sites for their revenue. In addition to that, my return on investment is much larger.

For example, if I wanted to buy KISSmetrics (the whole business), I would have to spend millions and millions of dollars.

I’m looking for deals; that’s how you grow faster without having to raise venture capital.

When you use this strategy, there is no guarantee you will make a return on your investment, but if you spend time understanding the numbers you can reduce your risk.

I knew going into this KISSmetrics deal that I would generate at least an extra $500,000 in profit from this one acquisition.

Realistically it should be much more than that as the additional leads seem to be of the same quality, and the numbers are penciling out for it to add well into the millions in revenue per year.

But before you pull the trigger and buy up a few sites in your space, there are a few things you need to keep in mind:

Don’t buy sites that rely on 1 traffic source – you don’t want to buy sites that only have Facebook traffic. Or even Google traffic. Ideally, any site you buy should have multiple traffic sources (other than paid ads) as it will reduce your risk in case they lose their traffic from a specific channel.
Buy old sites – sites that are less than 3 years old are risky. Their numbers fluctuate more than older sites.
Spend time understanding the audience – run surveys, dive deep into Google Analytics… do whatever you can to ensure that the site you are buying has an audience that is similar to your current business.
Be patient and look for deals – I hit up hundreds of sites every month. Some people hate my emails and won’t give me the time of day. That’s ok. I’m a big believer in continually pushing forward until I find the right deal. I won’t spend money just because I am getting antsy.
Get creative – a lot of people think their site is worth more than it really is. Try to explain to them what it is really worth using data. I also structure deals in unique ways; for example, I gave KISSmetrics up to 6 months before they had to transition to a new domain (and to some extent they are still allowed to use the existing domain for their client login area). You can even work out payment plans, seller-based financing, or equity deals… you just have to think outside the box.
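The first rule above (avoid single-source traffic) is easy to check mechanically from an analytics export. A minimal sketch; the 70% risk threshold is my assumption, not a figure from the post:

```python
# Sketch of the "don't rely on one traffic source" check described above.
# The 70% threshold is an assumption; pick one that matches your risk appetite.
def dominant_share(traffic_by_channel: dict) -> float:
    """Return the largest channel's share of total traffic (0.0-1.0)."""
    total = sum(traffic_by_channel.values())
    return max(traffic_by_channel.values()) / total if total else 0.0

site = {"google": 90_000, "facebook": 5_000, "direct": 3_000, "email": 2_000}
share = dominant_share(site)
print(round(share, 2))                          # 0.9
print("risky" if share > 0.7 else "diversified")  # risky
```

A site where one channel drives 90% of visits is exactly the kind of acquisition the checklist warns against, even if the total traffic looks attractive.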

So, what do you think about my acquisition strategy? Are you going to try it out?

The post Why I Spent $500,000 Buying a Blog That Generates No Revenue appeared first on Neil Patel.

What is AMP Project: A Breakdown

Posted by on Jul 16, 2018 in SEO Articles | Comments Off on What is AMP Project: A Breakdown



AMP stands for Accelerated Mobile Pages. It is a website-building framework that aims to optimize web pages for mobile viewing by creating lightweight pages and ensuring a fast load time on the user’s end.

The open-source AMP Project was announced on October 7, 2015. It was first unveiled to mobile internet users in February 2016 via the Top Stories section in Google Search, and by September of that year users were being served actual AMP pages. Backed by Google and many other tech giants, AMP is taking center stage as a standard for web page optimization.

Since its birth, AMP pages have been heavily featured in mobile Google searches, with the Top Stories carousel highlighting solely AMP pages. This heavy endorsement shows that the search giant prefers the AMP format as the standard mobile format.

The top stories carousel featuring solely AMP pages that are marked with a lightning bolt.

Google has been actively advocating load time as a critical aspect of user experience. The AMP framework addresses what it deems bloats up web pages and sets up restrictions accordingly. The end result is a web page that is six times lighter than its HTML counterpart. There’s no author-written JavaScript, and CSS is tightly restricted. If you want those things, you need to play it the AMP way.

Now for the user’s end: all AMP pages are labeled with a lightning bolt icon in mobile searches. A Top Stories carousel consisting only of AMP pages is presented at the top of the SERP. Users need only scroll from left to right to access a list of articles related to their query. AMP pages listed for the query are also preloaded, creating an experience of instant load time when users click on an AMP page. All in all, AMP pages make for a pretty sweet user experience.


The main selling point of a web page built on AMP is how fast and light it is. Google, Twitter, and even eBay advocate AMP because of this. Speed makes for a better user experience, and a better user experience translates to a better conversion rate. To be honest, every business answers to money and revenue, and that’s Google’s way of ensuring publishers get it.

Boasting a median load time of 0.5 seconds from Google Search, AMP does seem to be holding up its end of the bargain of being lightning fast. As a rule of thumb, any web page should load completely within 8 seconds, while mobile pages should load in under a second; 53% of mobile users abandon a web page if it takes more than 3 seconds to load.

Users want their pages fast, and AMP is Google’s answer. There are a number of case studies published on the AMP website. Giving them a look, all of them state that after adopting AMP their load time was at least 2 times faster; some websites even sped up by 8 times. This shows how powerful AMP can be in terms of speed.

Pleased with the high load speed, users spend twice the time on an AMP page. Here comes the most important part: e-commerce sites experience a 20 percent increase in sales conversion, which proves that a good user experience does translate to a better conversion rate.

How do they achieve this forced diet for bloated web pages? They strip them down.

“One thing we realized early on is that many performance issues are caused by the integration of multiple JavaScript libraries, tools, embeds, etc. into a page,” said Malte Ubl, AMP Project Team Lead.


Not everyone is happy with the AMP Project and sings its praises like it’s an angel descending, though. To be clear, the harshest backlash is aimed at the way the project team handles AMP pages rather than at the framework itself.

People argue that AMP is Google’s way of reaching an internet monopoly. In January, “A Letter About Google AMP” was published, publicly addressing the concerns of internet activists, engineers, corporations, and users alike.

A Letter About Google AMP gained a little shy of 700 signatories.

Let’s have a look at the two points of arguments.

1. AMP participants are granted preferential search promotion.

2. Unbeknownst to users, they are staying in the Google ecosystem when they visit a piece of AMP content, rather than visiting the publisher’s site.

There is no denying that AMP pages are, in fact, given preferential treatment in mobile searches, made obvious by the Top Stories carousel of AMP pages that appears at the top of the SERP. Major publishers like CNN that adopted the AMP format even have their own carousel of AMP articles in their search result rich snippet.

One thing to take into consideration: the top results displayed in the SERP remain a mix of AMP and non-AMP pages.

SERP with a mix of AMP and HTML results.

This means the actual search ranking is neither affected nor dominated by the emergence of AMP pages. What you need to know is that a disruption appears in the traditional top results of the SERP, with the AMP Top Stories carousel being highlighted.

My take on it? The SERP has not been a list of 10 blue links on a white backdrop for a long, long time now. The SERP nowadays is much more dynamic and personally tailored. Google, as a company, endorsing and pushing its own product sounds only fitting. However, considering that Google has already dominated search for a decade, it can feel like overkill.

Google argues the AMP Project is not its own; rather, it is a collaborative brainchild with multiple tech giants. For me, the point is that one cannot deny the AMP Project has carried the Google prefix more often than not. The AMP Project and Google are closely tied; that’s a fact. To what extent? That is the question Google needs to answer.

Next, the second concern stems from the fact that AMP pages are cached on the Google CDN and the cached copy is distributed to users. When a user clicks on a preloaded AMP search result, they are viewing it in the AMP viewer, on the search page itself. Are users aware of this? Are you aware of this? That is the question.

Restlessness arises as people think Google is effectively stripping content away from its creators and publishers. They feel a lack of control over the user experience of their AMP pages, since users are given the Google copy.

What’s more, if you pay attention to the URL of an AMP page, instead of the origin publisher’s URL you will see a Google URL. This further enrages content creators: from their point of view, users never exited Google to view their creations. Instead, users are shown their web page inside Google, masked by a Google URL.
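The Google-hosted URL pattern described here can be spotted mechanically. This is a rough heuristic sketch; the hostname patterns below are the common 2018-era Google AMP cache and viewer forms, and are not an exhaustive list:

```python
# Rough heuristic for spotting a Google-served AMP URL (sketch only).
# Covers the common AMP cache (*.cdn.ampproject.org) and the
# google.com/amp/ viewer forms; other caches exist.
from urllib.parse import urlparse

def looks_like_google_amp_url(url: str) -> bool:
    parsed = urlparse(url)
    host = parsed.netloc.lower()
    return host.endswith(".cdn.ampproject.org") or (
        host.endswith("google.com") and parsed.path.startswith("/amp/")
    )

print(looks_like_google_amp_url(
    "https://www-example-com.cdn.ampproject.org/c/s/www.example.com/article"))  # True
print(looks_like_google_amp_url("https://www.example.com/article"))             # False
```

This is exactly what upsets publishers: the address bar shows a Google-controlled hostname, not their own.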

The AMP Project team’s answer to the second concern is the new update published in May. Publishers can now choose to serve their content as a signed HTTP exchange, which the browser can then attribute to them. That way, users will be shown the actual publisher’s URL. A new set of tools has also been made available for programmers to implement the exchange.

In AMP’s defense, even though they break the URL system, the actual distribution link is not broken. All the views and revenue of an AMP page go to the publisher instead of Google, unlike what they are accused of. But breaking the URL system itself is a big no-no. Everyone is aware of it; they are aware of it.

Will the signed exchange resolve the issue and tension? We’ll have to see.


Advertising is an integral part of the modern internet ecosystem. Name one website without ads. If you do find one, it is because the whole website itself is an advertisement.

People get very creative with ads, because ads get them revenue. Revenue means money, and people go absolutely all out if they can get money.

The problem with a framework as streamlined as AMP is the restrictions. The AMP Project sets up a lot of restrictions on what can be written, thus restricting how creative the ads can be. No more pop-ups, double pop-ups, or banners on the top, bottom, left, and right. No more waiting 5 seconds to click “ignore this ad” to reach the page I want.

Everything is streamlined; every web page is a block of scrollable text and pictures, no tricks. To keep AMP pages as light as possible, third-party JavaScript and complex HTML are not allowed. This means more advanced functions, including tracking and analytics, are severely limited.

An advertisement featured in a Wired AMP article

Instead, ads are displayed in blocks just like any other content. Note that ads load later than the content itself. To address the issue of ads loading noticeably later, the AMP Project came up with AMPHTML ads, which are supported by a handful of ad publishers.

The verdict: advertisements can absolutely be run on AMP pages. However, there are fewer placement options. By abiding by the AMP rules of HTML subsets, ads can load just as lightning fast, overall giving a better user experience without sacrificing the money maker.


Ever heard of Snapchat stories? What about Facebook stories? Now the newest stories are served up by AMP.

The AMP Project announced the AMP stories feature in February. They are basically Snapchat-Explore-esque stacked cards. Users can easily browse these featured AMP stories in search. Think of it as a slideshow, but remastered: lightweight and instant. More importantly, it is endorsed by Google and its major publisher partners.

Although viewable on both desktop and mobile, AMP stories only show up in mobile search. What’s more, upon further inspection of the publishers’ websites, I can’t find any links to the AMP stories.

AMP stories can be about a wide range of topics, from “The Prince Harry You Don’t Know” to “Walking Through Raqqa’s Rubble”. The sky’s the limit for writers and creators alike to create a compelling story: words and pictures strung together in a stack of cards.

The AMP story on Prince Harry by CNN

A spread about the Raqqa Campaign in Syria.

On mobile, while searching for The Washington Post (one of the collaborators on AMP stories), I am greeted with a “top visual stories from” carousel. Each of these eight visual stories features a unique story. While browsing through, I came across one card in the stack featuring (surprise!) an ad.

Since the AMP stories feature was only introduced three months ago, it is pretty much still in the experimental stage. How ads will function is not settled, and we shall anticipate further updates from the AMP Project team.

Given how little time and effort people are willing to spend on gaining casual information, we can only imagine the format will be a good fit.


As a Firefox user on Android, I have not noticed an influx of AMP articles. Switching to the Chrome browser, though, I am greeted with a dynamic, AMP-highlighted search page. AMP pages also show up in iOS searches using Safari.

There are no AMP pages on the SERP of my Firefox Android

The AMP Project’s official site states support for all major browsers, including Chrome, Firefox, Edge, Safari, etc. Therefore I am baffled by the total lack of AMP pages in the search results of my Firefox browser. Or are they just not labeled as such?

More often than not, users are not aware of AMP. Either they do not notice the lightning bolt symbol and the difference in load speed, or they are aware and simply disregard it.

However, the main purpose of the AMP Project is to enhance user experience. The project aims to provide fast, straight-to-the-point content to users. What’s the point if the one being served doesn’t even know they are being served?


Being an AMP page is not a ranking factor; responsive web design and fast load time are. You can see a search result filled with AMP pages, HTML pages, or a mix of both, depending on what you’re looking for. The point is, AMP has not totally dominated the SERP.

However, being the golden child of Google does mean something for SEO. We’ll see why.

Google has been talking about responsive web design and load speed since the “mobilegeddon” in 2015, and not without reason. By 2017, in North America alone, 42.41% of total internet traffic came from mobile users, and the numbers are only increasing. Developers and tech companies need to adapt to this rapid change in user preference.

AMP targets the mobile side of responsiveness while ensuring a fast load time. Therefore, having an AMP page basically boosts your chance of ranking higher in mobile searches.

However, you have to keep in mind that, as the name suggests, AMP is only for mobile. What’s more, it does not yet have full support in all web browsers.

Here is where Google comes into the picture. Chrome, the mobile web browser with more than half of the market share, is another golden child of Google, which means Chrome fully supports the AMP format.

Chrome is entirely compatible with AMP because they are both products of Google.

With the SERP becoming more and more dynamic, a lot of new elements are being introduced to the search page. From Featured Snippets to People Also Ask, these features often overtake the traditional blue-link results. SEO for web pages should aim for these featured spots too.

Here’s the thing: being featured in the Top Stories carousel might very well be another strategic placement, like being the featured snippet. AMP pages give SEOs another spot to target beyond the first page of the SERP.

SEOs can take advantage of Google’s effort in advocating AMP. They have more ways to get spotlighted on the SERP compared to traditional web pages.


Unlike enforcing a responsive design, with AMP you are creating a whole other page: an alternate version of your web page served only on mobile and in supported browsers.

The working model for the project’s publishing partners is to maintain two pages: one in run-of-the-mill HTML and another in AMP. Take CNN, for example: each of its articles can be browsed in AMP, while on desktop you will be served the normal HTML version.

A CNN article as viewed on desktop.

The same article in AMP format.

What this means is that programmers must constantly maintain two copies of a page.

Implementation can be time-consuming and expensive, especially for existing websites that are complex or huge. Despite the readily available learning and templates provided, creating an AMP copy of an existing site is not instant.

You have to remember, the AMP Project is not an out-of-the-box solution. If your website lacks responsiveness or speed, adopting AMP can solve that problem, but adopting the format itself will be the other problem.

The AMP framework needs customization for all the elements of a traditional website to work. John Mueller from Google has openly expressed that an AMP page should be as equivalent to the main page as possible. Considering the restrictions on JavaScript and CSS, this means a lot of modification work.

That’s one of the reasons AMP has been viewed as a format that works best for articles. News articles, blog posts, and basic blocks of text work better in AMP because of their usual lack of interactive elements.


If you’re thinking about AMP-lifying your web pages, here’s one last piece of info that will pique your interest. Google has announced that it is looking to bring the AMP benefits to all web pages. That means, regardless of your web page’s format, you might get featured in the Top Stories carousel, among others.

Keep in mind, though, that standardizing a feature requires a lot of testing and a lot of time. I will not say, “hey, just give up on AMP, it doesn’t matter anyway.” No, AMP pages still matter in that they are given advantages in mobile searches.

Since I’m not the one deciding how your website should be, I shall leave you with a few questions instead. Hopefully you can make a decision after this.

1. How much time, effort, and money are you willing to put into AMP-lifying your website?

2. How much do you think AMP will benefit your website relative to that investment?

3. Will it be easier to just revamp your code to make it responsive?


Noindex a post in WordPress, the easy way!

Posted by on Jul 16, 2018 in SEO Articles | Comments Off on Noindex a post in WordPress, the easy way!


Some posts and pages should not show up in search results. To make sure they don’t show up, you should tell search engines to exclude them. You do this with a meta robots noindex tag. Setting a page to noindex makes sure search engines never show it in their results. Here, we’ll explain how easy this is in WordPress, if you use Yoast SEO. 


Why keep a post out of the search results?

Why would you NOT want a page to show up in the search results? Well, most sites have pages that shouldn’t show up in the search results. For example; you might not want people to land on the ‘thank you’ page you redirect people to when they’ve contacted you. Or your ‘checkout success’ page. Finding those pages in Google is of no use to anyone.

Not sure if you should noindex or nofollow a post? Read Michiel’s post: Which pages should I noindex or nofollow?

How to set a page to noindex with Yoast SEO

Setting a post or page to noindex is simple when you’re running Yoast SEO. Below your post, in the Yoast SEO meta box, just click on the advanced tab:

On the advanced tab, you’ll see some questions. The first is: “Allow search engines to show this post in search results?” If you select ‘Yes’, your post can show up in Google. If you select ‘No’, you’ll set the post to noindex. This means it won’t show up in the search results.

The default setting of the post – in this case Yes – is the setting you’ve selected for this post type in the Search Appearance tab of Yoast SEO. If you want to prevent complete sections of your site from showing up in Google, you can set that there. This is further explained in Edwin’s post: Show x in search results?.

Please note that if the post you’re setting to noindex is already in the search results, it might take some time for the page to disappear. The search engines will first have to re-index the page to find the noindex tag. And do not noindex posts frivolously: if they were getting traffic before, you’re losing that traffic.

Were you considering using the robots.txt file to keep something out of the search results? Read why you shouldn’t use the robots.txt file for that.

Do links on noindexed pages have value?

When you set a post to noindex, Yoast SEO automatically assumes you want to set it to noindex, follow. This means that search engines will still follow the links on those pages. If you do not want the search engines to follow the links, your answer to the following question should be No:

This will set the meta robots to nofollow, which will change the search engines’ behavior: they’ll ignore all the links on the page. Use this with caution though! In doubt about whether you need it? Just check Michiel’s post right here.
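Under the hood, these two settings boil down to a single tag in the page head, e.g. `<meta name="robots" content="noindex,follow">`. A minimal sketch for checking which directives a page carries (a regex is used for illustration only; real HTML deserves a real parser):

```python
# Minimal sketch: extract the directives from a page's meta robots tag.
# Regex for illustration only; use an HTML parser for production code.
import re

def robots_directives(html: str) -> set:
    """Return the set of directives in the first meta robots tag, if any."""
    m = re.search(
        r'''<meta[^>]+name=["']robots["'][^>]+content=["']([^"']+)["']''',
        html, re.IGNORECASE)
    return {d.strip().lower() for d in m.group(1).split(",")} if m else set()

page = '<head><meta name="robots" content="noindex,follow"></head>'
print("noindex" in robots_directives(page))  # True
```

A quick check like this is handy for verifying, after a crawl, that the pages you set to noindex in Yoast SEO really carry the tag.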

Read more: The ultimate guide to the meta robots tag »

The post Noindex a post in WordPress, the easy way! appeared first on Yoast.

Audience expansion and discovery: how to get ahead

Posted by on Jul 16, 2018 in SEO Articles | Comments Off on Audience expansion and discovery: how to get ahead

Audience expansion and discovery: how to get ahead

One of the reasons we love paid search is because it performs, but its intent-driven nature means it’s not the channel to build scale. The way to do that is get in front of relevant audiences and generate demand for your product/service. This is where channels such as paid social come into play, and one of the best channels to really hone in on targeting various audiences is Facebook.

The most obvious ways to get in front of relevant audiences on Facebook are:

Lookalikes – leveraging CRM lists to create audiences that look similar to your customers. Get more advanced by segmenting your customer list into groups of identifiable characteristics (e.g. high lifetime value, high average order value) and target lookalikes of those groups
Use demographic data and interests of your prime customer base, and target people based on what you already know.
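The lookalike segmentation described in the first bullet can be sketched as a simple split of a CRM export into seed lists. The $500 average-order-value cutoff and the field names are illustrative assumptions, not anything Facebook requires:

```python
# Sketch of segmenting a CRM export into lookalike seed lists by average
# order value (AOV). The $500 cutoff and field names are assumptions.
customers = [
    {"email": "a@example.com", "aov": 820.0},
    {"email": "b@example.com", "aov": 45.0},
    {"email": "c@example.com", "aov": 510.0},
]

HIGH_AOV_CUTOFF = 500.0
high_aov_seed = [c["email"] for c in customers if c["aov"] >= HIGH_AOV_CUTOFF]
rest = [c["email"] for c in customers if c["aov"] < HIGH_AOV_CUTOFF]

print(high_aov_seed)  # ['a@example.com', 'c@example.com']
print(rest)           # ['b@example.com']
```

Each seed list would then be uploaded separately so the platform builds a lookalike of, say, your high-value customers rather than of everyone at once.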

If you’re a semi-sophisticated marketer, you’ve already targeted the most obvious audiences. So what’s next? How do you continue to scale and find more audiences? In this article, we discuss some of the ways you can move forward with finding additional, relevant audiences to test to help push performance and scale.

Poach from competitors

You should absolutely be testing and targeting audiences that like your competitors. They are highly relevant, and as a bonus, you may be able to steal market share from your rivals. For good insights, go into interest targeting on Facebook, input your competitor names, and dig in.

Use Audience Insights tools from Facebook and Google

Advertisers can always use more personas, so it’s helpful to figure out characteristics of relevant audiences that may help you recognize new folks to target.

In Facebook’s Audience Insights tool, input your top competitors/brands and take a look at the audience make-up. For example, if you’re a cosmetics store/brand, you could put in audiences that have interest in Sephora and understand various traits such as demographic info and likes/interests. This can help expand on different personas to build and test in Facebook.


Google has a similar insights tool through which you can leverage Google’s data on your converting audiences to understand any additional traits and behaviors you may not have already known. Here is an example:

You can develop personas using the above information and craft additional audiences in Facebook to test. In the above scenario, for example, you may decide to create the audience “Female, age 25-34, Interests: Fashionista, fashion, etc.,” and target this exactly in Facebook (see below).

With the information presented to you from Google Insights on your existing customers/converters, you should be able to develop a variety of different personas, then create audiences based on those personas and test them in Facebook. For example, let’s say you’re selling machines that make single servings of popcorn. Your audience is probably full of young, single people who are huge Netflix fans or sports fans, for example. Popcorn is also gluten-free, so that gives you a huge segment to target if you haven’t already thought of it.

Get creative

It’s important to think about ways you can find new audiences without pulling the obvious levers. For example, if you know that your customers have a high household income, it’s likely you’re already targeting those incomes in Facebook and Google. But what are other ways to reach these people?

Target those who like and purchase more expensive brands. This will open doors to larger audiences (Facebook may not know their household income, but since they purchase high-end products, chances are you are getting in front of relevant eyes). Another example: if you know your customers are ‘fashionistas’, then you can target those who like specific fashion bloggers (e.g. interests: Chiara Ferragni, Olivia Palermo).

You should also look at the top-converting placements in your Google Display Network (GDN) campaigns. If you’re spending a significant budget within GDN, this information can be very telling. For example, when running ads on a luxury home furniture site, we discovered that a large chunk of their converters were on celebrity gossip sites. You can take that information and craft an audience to target within Facebook.

Of course, if you have a bigger budget, you can (and should) invest in analytics software and support that pulls third-party information, and information from people visiting your site. But you can get a lot of insight for free – and should be taking advantage of that no matter how refined your paid analytics are.

10 Answers To Why do You have a high bounce rate

Posted by on Jul 16, 2018 in SEO Articles | Comments Off on 10 Answers To Why do You have a high bounce rate


10 Questions to Ask Yourself to Get a Lower Bounce Rate

I’m gonna start off by asking you a question: do you know how high is considered high?

What is a good bounce rate? Here’s the benchmark average bounce rate by web page type, according to Quicksprout.

The reason we’re looking at this is that a high bounce rate doesn’t necessarily mean something bad. Google won’t push your page to the depths of the SERP if you’ve got a 90% bounce rate.

Here’s the thing: Google doesn’t even take the bounce rate as shown in Google Analytics as a ranking factor. In fact, it doesn’t use any of the numbers you can pull from Analytics for ranking.

However, it doesn’t mean that you shouldn’t worry about a high bounce rate.

Remember this, bounce rate is personal and individual. Take our site as an example, our blog posts usually have a bounce rate around 80% while our landing pages have an average of around 60%.

Our blog has an average bounce rate of around 80%.

Why? Because they have different purposes. Look at your own browsing pattern. Would you stay longer than necessary at a website once you’ve got what you want? Nah, neither would I. That’s what blog posts give: answers and suggestions that people need. When they get them, they’ll leave. So a higher bounce rate does not necessarily mean a page has not achieved its purpose.

Therefore, the correct way to look at bounce rate is by aligning it with the purpose of your web page.

A landing page has no value if visitors don’t engage further. The same could be said for a homepage, whose purpose is to guide visitors to explore the website. If those pages have a bounce rate of more than 65%, things are definitely not going well.

Before we launch into more details on why high bounce rate happens, are you certain that you understand what a bounce rate is?

What does bounce rate mean?

No, no, Mr. Bean. Not that kind of bounce.

Bounce rate is one of the main metrics you can pull from Google Analytics. A bounce happens when there is a single-interaction visit to a website, while the bounce rate is the percentage calculated by dividing single-page sessions by all sessions.

Google dictionary defines bounce rate as “the percentage of visitors to a particular website who navigate away from the site after viewing only one page.”

Let’s break it down a little.

Percentage: single-page sessions divided by all sessions, times 100%
Navigate away: exits the site
Viewing only one page: a single engagement HIT (engagement includes: pageviews, events, e-commerce items, e-commerce transactions, social interactions, and other user-defined actions)
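The definition above translates directly into code; a one-function sketch:

```python
# Bounce rate as defined above: single-page sessions / all sessions, as a %.
def bounce_rate(single_page_sessions: int, all_sessions: int) -> float:
    """Return bounce rate as a percentage (0.0 if there are no sessions)."""
    return single_page_sessions / all_sessions * 100 if all_sessions else 0.0

# e.g. 800 one-page visits out of 1,000 total sessions
print(bounce_rate(800, 1000))  # 80.0
```

This matches the blog-post figure quoted earlier: 800 single-page sessions out of 1,000 gives the roughly 80% bounce rate typical of blog content.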

If you are interested in the detailed technicalities behind bounce rate, here is a good read on demystifying the Google Analytics bounce rate.

A visitor can click on your website, spend 10 minutes reading your blog post, then exit, and that would count as a bounce. How could you say that having the article you spent 30 hours crafting read from the first line to the last is NOT engagement?

The point is, bounce rate should be treated as a gauge in accordance with the purpose of a page. If your web pages have an alarmingly high bounce rate which doesn’t align with the purposes, then something has to be done. Be it a UI redesign, or changing your content marketing strategies.

Here are 10 questions to ask yourself if you want to know how to lower bounce rate.
1. Have you set up a clear Call-To-Action?

An example of a clear call-to-action button.

Another example of a clear call-to-action button.

If you end a transaction with “Thank you.”, well, that probably means goodbye. However, if you end it with “We’ll be having a flash sale next Wednesday, hope to see you then.”, I’d say you’ll see the customer again.

The same thing applies to your website. If you want visitors to go further into your website, you need to guide them to do so.

Look at your page: is there a clear and urgent CTA? Or are there so many CTAs that they risk confusing visitors? Every web page should have a maximum of two CTAs.

Here are some examples: put a “Start your free trial” button on your product description page, a “Subscribe for future posts” prompt at the end of your blog posts, or a “Sign up” link in the upper right corner of every single page.

2. Does your site have an always visible navigation bar?

An example of breadcrumbs navigation.

Ever scrolled all the way down to the end of a page, then had to scroll all the way back up to reach the navigation bar? I’d just give up halfway and click exit.

If you don’t make it easy for visitors to navigate around your website, you’re really just making things harder for yourself.

Be it breadcrumbs, a navigation bar, or a quick return-to-top button, implement a navigation system that makes it easy for visitors to move around, encouraging them to explore different pages instead of leaving.

A handy back-to-top button can improve the user experience tenfold.

3. Are you talking to your targeted persona?

I’ve talked about creating a persona for your targeted audience before. Keep that persona in mind at all times, whether you’re writing blog posts, putting up new products, or creating a CTA.

In line with your page’s purpose, should your copy sound convincing, exciting, or helpful? Are you addressing their pain points? More importantly, do you sound respectful?

Stephanie, who has a 2-year-old child, might be more attracted if you talk to her kindly and help by keeping instructions clear, while Peter will want every detail about the camera you’re trying to sell and will feel more at ease with a tone that is just as excited as he is about the newest gadgets.

Stephanie would appreciate it if you keep your instructions short and clear.

Sympathize with your targeted personas and put yourself in their shoes. What will they need? What will make them stay? Answering those questions can help lower your bounce rate.

4. Is your website mobile friendly?

Utilize Google’s mobile-friendly tool to check your website’s rating.

A smartphone is the primary internet browsing device for one-third of people; take me, for example, I don’t even turn on my laptop after work. Mobile search is becoming increasingly prominent, and I can never stress enough the importance of being mobile-friendly. You can opt for responsive website design or the AMP framework. A responsive website adapts to desktop, tablet, and mobile, while AMP is a mobile-targeted format.

Check out the number of mobile visitors in your Google Analytics; you might be surprised. Take our site, for example: even though 80% of our visitors are on desktop, we have a slowly rising number of mobile users. That’s why we make sure our site is responsive and shows up correctly regardless of the visitor’s device of choice.

Our website has a slowly rising number of mobile users.

You should not ignore the potential of mobile users. If your website does not load and display properly, visitors will not hesitate to abandon it, which will hike up not only your bounce rate but also your exit rate.

All in all, being mobile-friendly will encourage more visitors and lower your bounce rate.

5. Are the ads intrusive?

Blocking your own site’s content by showing me an advertisement is not gonna make me feel like staying any longer.

If the first thing that loads the moment I enter a web page is a pop-up whose X button I can’t even find, I’d just give up on that page altogether. No matter how good your content is, if you insist on putting it behind a multitude of pop-ups, then I’m sorry, because you’re only making things difficult for yourself.

If you really want a pop-up, an exit pop that only appears when the visitor is done viewing your content and ready to leave the page is a passable option.

Oops, we ourselves are guilty of the exit pop technique.

The point is, intrusive ads make for a crappy user experience. If intrusive ads like this repeat on every single page of your website, it is only natural that visitors won’t stick around for long.

6. Have you disabled autoplay video and audio?

Autoplay video advertisements eat away bandwidth and are annoying to boot.

For the sake of a better user experience, I shall declare a ban on autoplay video and audio. Well, of course, I have zero power to decide on things like this. But how annoying is it to enter a web page only to be blasted with moving pixels that I couldn’t care less about?

I’m sure the majority of internet users would agree with me. Users are appalled by autoplay media, and you can be sure that they will not hesitate to exit your page the second they are bombarded by these bandwidth suckers.

If you have to include video and audio, make them play only when prompted, for a better user experience and a better bounce rate.

As much as I love the game, my bandwidth doesn’t really love it when you autoplay the trailer for me.

7. How long does it take to load your page?

Utilize PageSpeed Insights by Google to see how many seconds it takes to load your website on both mobile and desktop.

I’ve said it before and I will say it again: the rule of thumb is that websites should load completely in under 10 seconds, and half of mobile visitors will abandon a website if it takes more than 3 seconds to load.

Images should be optimized, both in terms of size and usage. Those banners at the sides of the page showcasing third-party ads are also slowing down your web page. Widgets, JavaScript, plugins: do you really need them?

Are they benefiting your visitors? Are they benefiting your bounce rate? Use them only when necessary.

8. Does your title correlate with the content?

If you’re expecting a pastor to stop a fight between Trump and Clinton like a referee, you’ll be disappointed.

Visitors are attracted to click on a page based on its title, and maybe two lines of description and a picture. Imagine clicking on a page titled “How to change a tire” only to find it talks about how to change engine oil. I would nope out of it in seconds.

You need to give them what you advertise. When the title says bounce rate, I talk about bounce rate; if I talked about exit rate instead, would you have stayed? Would you trust any of my other blog posts?

A more eye-catching or keyword-matching title might benefit you, but not if it doesn’t match the content itself. False advertising will not get your bounce rate anywhere.

9. Are you updating your content?

A 2019 marketing trends post would be nice.

If you have a post talking about the 2014 median size of web pages, why not update it to a 2018 version? Especially if you’re working in the tech industry, information that was correct a year ago may not be anymore.

Keeping your content updated means keeping it relevant. Visitors are more likely to stick around knowing a website is updated and well maintained.

Conversely, if the information on your website is outdated and irrelevant, it’s easy to see why visitors would not be interested in staying.

10. Do you have a story to tell about your brand?

A brand that is willing to share its story and values deserves at least some of my time.

Building trust, relationships, and a budding community is the best thing a website or a brand can achieve. You don’t do that by pelting visitors with sales pitches. They know sincerity when they see it.

Share the story of your brand, and be sincere and genuine. Stories provoke emotions, and emotions are what build loyalty.

If you want visitors to engage with you, you should be frank with them and have their best interests in mind. Why would I stick around listening to a salesman trying to sell me stuff that I don’t want and don’t have the money to buy? It’s the same thing.

Tell them a story; tell them the values that you hold as a brand. Google might be getting rid of its “don’t be evil” motto, but that doesn’t mean you have to hide yours.

A behind-the-scenes story really makes a company feel much more personal.

Yesoptimist is one of those websites whose blog posts I absolutely enjoy reading, because you can tell that they are honest and their backstory is engaging. Reading one blog post made me curious about their other posts, which means they won’t be getting a 100% bounce rate from me.



The EU is Wrong, but Google is Still in Trouble

Posted by on Jul 16, 2018 in SEO Articles | Comments Off on The EU is Wrong, but Google is Still in Trouble


I’ve found it tough to get my head around all the arguments in the recent EU judgement against Google. I find that writing helps me get my thoughts together and work out what I really think, so here goes. Let’s find out whether I agree with the ruling or not…

First – the background – you only really need to read two things to get the gist of the complaint:

The EU’s press release announcing the record €2.4 billion fine is surprisingly accessible and readable, and links out to a bunch of useful resources

Google’s response is relatively short and seemingly light on content, but actually frames the key points of the counter-argument well when you know what you’re looking for – see below!


In a case like this that relies on complex technical subjects and also areas of the law (in this case competition law) with which most of us are not familiar, I think it’s useful to make sure we are all talking about the same things. In fact, this is probably my biggest criticism of the EU’s press release (not the judgement – just the communication): it would have benefited greatly from doing exactly that. Here are my best explanations of the key elements you need to understand:

Search (sometimes “general search”) – at its simplest, this is the process of typing a query, and receiving links to pages that satisfy your need. In practice, it has extended to other inputs (e.g. voice) and other outputs (e.g. answers, rich results etc.). We’ve written extensively about changes in the search market

Organic search – the results of searches that appear in an order determined by the search engine based on their quality and relevance as well as their likelihood of satisfying the searcher’s intent. Organic (or “natural”) search results are not advertising results, no money changes hands, and there is no way to pay for inclusion or for a better ranking

Paid search (also “Pay Per Click” or PPC) – adverts sold by Google and other search engines allowing advertisers to pay to appear next to search results based on the search word or phrase (and other variables)

Comparison shopping engines (CSEs) – websites where you can search for a product type, category or brand and then compare different products and/or different retailers – typically by sorting, filtering, and applying facets. The business model is typically either for retailers/brands to pay directly for inclusion, to pay for the clicks they receive, or for the engine to receive affiliate payouts when a searcher buys from the destination site

Google Shopping – Google’s own comparison shopping engine – where you can sort and filter products, apply facets, and click out to retailers’ sites – typically accessed via the “shopping” link at the top of a search results page

Product Listing Ads (PLAs) – a form of paid search whereby adverts for individual products (complete with rich information in the form of photos and prices) appear above, or to the side of the search results. Individual PLAs are presented in the same form as product links in Google Shopping, and the data comes from the same sources, but PLAs in the search results do not constitute a comparison shopping engine to my mind – there is no filtering and there are no facets

Want more advice like this in your inbox? Join the monthly newsletter.


A challenging definition – what markets are we talking about?

In Google’s response, they highlight Amazon and eBay as examples of sites that have grown during the time in question (despite Google’s alleged anti-competitive behaviour), and which offer much of the user benefit of comparison shopping engines. The user experience is clearly similar, while the key difference is that you can actually check out and buy on both sites. In some cases on Amazon, you are buying directly from Amazon – i.e. they are also the retailer – and in many cases on both eBay and Amazon, they are functioning as a marketplace so you enter your payment details on their site, but you are buying from a third party. This raises two difficult and related questions for me – questions which the EU has not answered to my satisfaction:

Must comparison shopping engines send users off to another site in order to purchase? If not, and Amazon and eBay are examples of comparison shopping engines, then I think the case is much harder to make – certainly some CSEs have fared poorly during the time period in question, but some (most notably Amazon) have thrived

If comparison shopping engines are narrowly defined, is it really a separate “market”? The EU’s case relies on a finding that Google is using its dominance in one market (general search) to crush competition in another market. I’m not even convinced that “product search” is actually a separate market to “general search” (I’m inclined to think users see everything they do from the main Google search box as one thing), but if you define comparison shopping so narrowly that it’s a different market to both “search” and “searchable marketplaces” (or whatever you label Amazon and eBay as) then the sub-divisions stop making sense to consumers in my opinion

Going even further down this rabbit-hole, I would love to have an expert (a competition lawyer?) explain to me how the markets are delineated and where these different features/businesses fall – all of which satisfy some of the same user intent:

General search

Product search

Product comparison engines (facets, filters, etc)

Visual search (e.g. Pinterest)

Product recommendation sites (e.g. The Wirecutter)

The easy points of agreement

I don’t think it is hard to make the case that Google meets the criteria to be considered a monopoly in “search” (almost any way you define it) in Europe. Google themselves make essentially no effort to rebut this, so let’s allow this strut of the EU’s argument.

Google has a comparison shopping engine – in the form of Google Shopping (previously Google Product Search and originally Froogle). You can see this in action by going to the “Shopping” tab and searching for a product (the example in Google’s own post is [puma shoes]). You then get the opportunity to compare products by a range of metrics and facets, with the links going out to places to buy the individual products (retailers, manufacturers, brands).

The problems with the EU’s case

In addition to the definitional problem I highlighted above (and that Google presses on heavily in their response), I think there is another problem with the EU’s case in the specific market of shopping (note that the ultimate EU case is much broader and covers many other verticals – more on that below).

The EU’s ultimate finding is:

Google abused its market dominance as a search engine by promoting its own comparison shopping service in its search results, and demoting those of competitors — Commissioner Margrethe Vestager

I have issues with both parts of that:

Where does Google promote “its own comparison shopping service in its search results”?

You will not find Google Shopping pages ranking in Google’s organic search results. Nor will you find links to Google Shopping in paid search links. There is only one link to Google’s own CSE on any of their search results page – on shopping queries it is here:

And on other searches it is in the “more” menu:

Now, it’s possible that even this level of cross-promotion (simply linking to the Google Shopping product from their menu) is too much, and maybe they should remove it (see below) but this is too small to warrant the huge fine in my opinion. (There is actually one more way of getting there – if you click the right-arrow by the PLAs multiple times to scroll through all the products on offer, you eventually get to a link that takes you to the Google Shopping results page. I’m willing to bet that the percentage of people who actually do this is minuscule and Google could remove it with essentially no impact on Google Shopping).

I believe the EU has a problem with the individual products listed at the top of the search results in the first image above – but that is very clearly not comparison shopping functionality (it is very similar to the links to individual products in the organic results below) – it’s simply what paid search looks like on commercial product queries. It is also very clearly not links to Google’s own CSE – those links go out to retailers’ sites – not to Google Shopping. The data comes from the same source – that is all.

“…abused its dominance…by demoting…competitors”

As I pointed out above, Google Shopping does not appear anywhere in the organic search results. To the extent that they treat Google Shopping differently to a third party CSE, they treat it worse – (one of the complainants does appear somewhere in the organic results, while Google Shopping appears nowhere). [In reality, they actually treat it identically – if Ciao were to block Google’s crawlers the way Google Shopping does, they would also rank nowhere in the organic results].

Given that none of the organic results are Google properties for any of these product queries, any “demoting” of a specific competitor involves the promoting of another. For every Ciao that loses rankings, there must be a retailer, marketplace, or other CSE that gains rankings.

This argument is problematic in other verticals – where there are links to Google properties in the search results, and those links do push down links to competitors – but it holds strong in comparison shopping as far as I can see.

The competitors are objectively poor in comparison shopping

If you rule marketplaces and retailers like eBay and Amazon out of the “comparison shopping engine” market, then the general quality of these sites is low. The complainants in particular are both very much worse user experiences than either regular Google search or Google Shopping. This SearchEngineLand article breaks Foundem down nicely, while Ciao is still online and you can go and see for yourself:

Slow loading, intrusive irrelevant banner advertising, broken links, missing images, and irrelevant reviews:

“Advantages: Great price, good quality, useful pockets” — review on the Puma shoes results page (emphasis mine).

I know that EU competition law focuses on the impact on competitors rather than the impact on consumers as US competition law does, but we should also step back a moment and look at the fact that these sites complaining about unfair treatment are objectively worse than Google’s offering.

[Note that this is not the case in other verticals – travel and financial services, in particular, are sticky areas for Google – see below.]

A ballsy response from Google

I hesitate to say that this would be my recommended course of action if I were advising Google, but a direction I would love to see them take is as follows:

Leave PLAs as they are – if comparison shopping is a separate market to “general search” in which Google has a monopoly, then PLAs definitely fall in the general search part rather than the comparison shopping part. They are integrated into the results a searcher receives when they perform a search that starts at the Google homepage, and there is no comparison functionality – it simply links to products

Remove the shopping link in the top menu – this is the one area I can see where they have favoured their comparison shopping engine (Google Shopping) over others (e.g. one of the complainants) who cannot get their homepage linked from the top menu

Open up Google Shopping pages to their own search index – i.e. enable pages like the result you find when you search [Puma shoes] on the Google Shopping tab to be indexed and appear in the regular organic search results (to be clear, this does not happen at the moment – Google keeps these pages explicitly out of the main search index). Doing this will increase competition in the general search results for the complainants, but it clarifies that Google is treating their comparison shopping engine (Google Shopping) on exactly the same level playing field as competitors, and paves the way for them to treat (all) comparison shopping engines as harshly as they like in regular search [I’m not the first to think of this – see Danny Sullivan’s excellent article from the beginning of this case]

The reality: this is really bad for Google

The EU has started with what I think is the weakest of the verticals, and I think there are strong arguments that Google has not abused their market power in the specific ways this case claims.

But the EU has ruled against Google. In the weakest case against them there is.

The EU has shown here that they are prepared to take action to defend businesses offering worse user experiences against integrated changes Google makes to their core search engine. Shopping is an arguable case, but there are many more verticals where this precedent opens the way for future large fines:

Travel (especially flight search)

Maps / local


Financial products





If the EU insists on treating each of these verticals the same way they have treated shopping, then Google is facing many huge fines – quite aside from the AdSense and Android cases (each of which could be a big deal in its own right). If the EU is prepared to take a lower-quality competitor in each case and say competitiveness has been harmed by Google’s inclusion of a great user experience directly in the general search results, then each of these is at least as egregious as the comparison shopping case.

Even worse for Google, some of these other competitors are not lower-quality. In travel and financial products, in particular, there are some spectacularly good sites offering great UX. The defence in those areas will not be as easy for Google.

What does real regulation of Google look like?

The case for some increased regulation of Google (and other tech giants) is growing. I recently wrote another article breaking down my objections to a New York Times article calling for a break-up.

I certainly don’t have all the answers, but I did get drawn into a bit of speculation about what I thought effective regulation could look like in the comments on that post. To be clear, I don’t expect to see an unbundling, but I was interested to see the same thought experiment applied to Amazon the other day.

For more on the subject, I recommend this week’s public Stratechery article and the other articles linked from it. Stratechery remains my top paywalled recommendation – easily worth the $100 / year in my opinion.

Why You Need to be Building for Intelligent Personal Assistants

Posted by on Jul 16, 2018 in SEO Articles | Comments Off on Why You Need to be Building for Intelligent Personal Assistants


What makes personal assistants interesting is that Alphabet, Amazon, Microsoft, and Apple are vulnerable and need you to get on board.

Apparently seven years ago Jeff Bezos, inspired by a love of Star Trek, decided that Amazon should build something that you can talk to, and which would turn your commands into actions. Now Amazon Alexa is competing with a growing number of intelligent personal assistants[1] from almost every corporate behemoth around. Alphabet (the company that owns Google) has Google Home, Microsoft has Cortana, and Apple, of course, has Siri, which was perhaps the best known early on. Viv and Facebook M are also interesting contenders but the state of play for each of those is different enough that we’ll cover them at the end. If you ask any of these companies about investment in this endeavour they’ll say that digital assistants are the next big thing.

While each of the big four has their physical product, the battle isn’t between Google Home and Echo Dot – what they need to succeed is their operating system. Microsoft is in this race by merit of owning the most popular way we interact with computers by mouse. Owning the most popular assistant OS could help any of these companies define the next decade.

Regardless of the millions of pounds and work hours that each of these organisations has put into their respective offerings, one thing they have in common is this: it’s not enough. Not yet, not for the lofty goal of having a program that can understand anything you say and do just what you want it to.

That’s not to say that these programs aren’t incredibly impressive technological steps forward. However, reviews like this Business Insider comparison and this one from CNN make it pretty clear that asking too much of these programs will quickly reveal how much they cannot do.

The problem is that we’ve come to expect a lot from computers. I would be personally outraged if my mobile phone refused to update my social media, download video, send and receive emails from multiple accounts, display every photo I took over the past few years (regardless of what device I took it on) and tell me the top speed of a grizzly bear[2]. This functionality has become synonymous with the device but mobile phone manufacturers are responsible for a relatively small proportion of that. We’re used to platforms that use years of established protocols to support all kinds of software. Now, every company trying to build the world’s AI is coming up against two core problems:

They are building a brand new breed of platform. Intelligent assistants are different enough from existing operating systems that they need to put in a lot more work rebuilding existing connections

Previously there was a certain amount of leeway for programs to be pedantic. Up to a certain extent, we accept that it’s our fault for not pushing the right buttons. We don’t have the same patience when speaking so these programs have to be able to respond to pretty much anything a person might say.

Apple once described the iPhone 6’s multi-pressure touch as “trying to read minds” but really, technology has been about reading trends and teaching minds – a series of incremental tweaks with the onus on us as consumers to adapt. The challenge here is to recreate decades of program integrations and, as a small side project, codify the entire spoken human language.

It can’t be done. Certainly not by one team and, let’s face it, if you were racing Apple to build Weird Science would you bet on yourself to do it alone?

What makes personal assistants interesting is that Alphabet, Amazon, Microsoft, and Apple are vulnerable and need you to get on board. They have bet a lot on this, and none of them wants to be Betamax. Or Zune.

Get the very best content from Distilled in your inbox every month


Your chance

There’s almost no graphic design involved and competition is far lower than you’ll find in any of their respective app stores.

This is where you come in. In order for any one of these companies to win this race, they need individuals and companies to develop a lot of the programs, or at least the program-specific integrations, for them.

Your choice now is whether you invest the time to get a stake in the ground, knowing that most will welcome it but that you’re also betting on their success.

Compared to designing a standard app, the time and training investment for many simple functions is hugely reduced.  There’s no design involved and competition is far lower than you’ll find in any of the respective app stores. As a proof of concept, with no coding knowledge pre-February, I’m building an interactive program that could integrate with a bunch of messaging platforms as well as Google Home and Alexa (more on that later).

Amazon are offering free bootcamps to learn more about building Alexa skills. Image source: Guillermo Fernandes via Flickr

The companies at play are also far more open here than in other arenas. Amazon is running free half-day bootcamps to teach the principles of building Alexa skills, and is giving out a plethora of prizes and incentives for successful attempts. Alphabet is offering to suggest your program if a user asks for something that it could fulfil – the kind of relevant, single-result search ownership that companies would kill for in a browser. Companies taking advantage of these platforms are already reaping the rewards: for instance, the JustEat skill has been preinstalled on Amazon Echo from the first shipment thanks to their chatbot strategy – a huge advantage over competitor programs which users will have to download manually. What’s more, a lot of these new ecosystems use engagement metrics as a way of ranking programs, so by starting now and building up those numbers before competitors cotton on, companies can vastly improve their chances when things get far more crowded.

How to build a chatbot

Unsurprisingly, the biggest change you need to make to capitalise on AI is replacing button clicks with phrases. Each of the big four has started advocating platforms that take the burden of recognising a sentence (spoken or written), breaking it up, and sending you the important information in digestible chunks. You just have to tell them what is important and when (I’ve included a list of these platforms at the bottom of this post).

By and large, the following intentionally broad instructions will serve you in creating a conversational application on any of these platforms, as they all have a few things in common. This will give you an idea of the way you need to think about interacting with them. In the coming months, I’ll be writing a more in-depth post about how I created my bot using API.AI, the platform which Alphabet acquired last year.

Plan your interactions

This will be easier once you’ve got a feel for the platform but you almost need a flowchart for the conversation with markers for times when your program is doing things behind the scenes.  David Low, Product Evangelist for Amazon, says that the lowest rated apps often have too many options. He recommends starting very small and adding options later.

Always plan your interactions to get an idea how conversations will play out.

Decide what you want people to call your program

This is the part of the process that is the most ‘SEO’, and it applies most specifically to spoken interactions. Essentially this is what people need to say to wake up your program. Think “OK Google, I want to talk to Superdry Online” or “Alexa, ask Domino’s to order me a 12-inch pizza”. It’s a bit clumsier than might be ideal, but it means you know what you’re getting, rather than accidentally posting your Spotify password on Facebook.

Usually, once you publish your program it’s too late to change your invocation so you need to think in advance about something short, memorable, and descriptive. It helps if your brand name already ticks those boxes but you’re likely to run into problems if you have a web2.0 name like ‘Pinkr’ or ‘seetbk’. The platforms are prone to confusing homophones and you may need to get in touch with the companies directly to overcome that confusion. The fact that they are willing to work with individual brands to manage proper brand recognition is one sign of the opportunity at this point.

Create the phrases you want your program to respond to and highlight the variable information

On all of these platforms you create phrases with parts that won’t change, and you can also add parts that will change. For instance, the phrase “My name is Slim Shady” is of the format “My name is {name}”. This means the platform handles the heavy lifting of variations in speech, taking a load of the burden off any external code.

Deal with the JSON it sends you

First things first – there are scenarios where you won’t need to code at all; it just limits what your bot can do. I created the simple back and forth you can see in this gif in about ten minutes using no external code. If you have coding experience or are comfortable with learning, you can integrate pretty much any of these services if you can securely receive and respond to a JSON POST request within about 5-8 seconds.
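If you do go the external-code route, the core job is smaller than it sounds. Here’s a minimal sketch in Python of the kind of webhook handler involved; the payload shape follows the API.AI sample at the end of this post, but the `handle_webhook` name and the greeting logic are my own illustration, not any platform’s required API.

```python
import json

def handle_webhook(payload):
    """Turn an incoming API.AI-style request into a fulfilment response.

    The platform has already matched the phrase template and filled the
    slots, so we only read the structured "parameters" it sends us.
    """
    params = payload.get("result", {}).get("parameters", {})
    name = params.get("name", "stranger")
    reply = "Hi there, {}!".format(name)
    # "speech" is the spoken reply, "displayText" the written one.
    return {"speech": reply, "displayText": reply}

if __name__ == "__main__":
    sample = {"result": {"resolvedQuery": "My name is Slim Shady",
                         "parameters": {"name": "Slim Shady"}}}
    print(json.dumps(handle_webhook(sample)))
```

In a real deployment this function would sit behind an HTTPS endpoint that accepts the JSON POST request and returns the response within the 5-8 second window.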

Test and go live

Most of the services offer some kind of easy integration out of the box. They’ll often walk you through it and, if all you need is a relatively standard setup, this will probably take you all of twenty minutes.

You’ll then usually need to go through a slightly separate process to actually publish, mainly medium-specific quality checks.

Fortunately, platforms like API.AI allow integration with multiple mediums at a time. So, having built for Google Home, you can roll out to Facebook, Slack, Telegram etc. with relatively little overhead.

The next five years

If you can only build for one platform and you’re trying to prioritise, you can’t go terribly far wrong with Microsoft. Microsoft’s linguistic processing platform, LUIS, is integrated with the popular Microsoft Bot Framework, which has almost tripled developer usage in the last six months and stretches far further than Cortana. This is the framework that JustEat and Three are using to build across multiple mediums, including website integrations. It’s worth noting that consumer usage figures for Cortana may be heavily inflated depending on whether Microsoft is including any use of the Windows 10 search bar; however, they are also using those search bar inputs to perfect their back-end machine learning platform, which should help improve accuracy across all applications.

Alphabet’s recommended platform, API.AI, is easy to pick up and can launch on a number of mainstream chat mediums with just a few clicks. Alphabet can also rely heavily on its Google search engine to make its assistant more fully featured and attractive to users from the off. Unlike with Alexa, users don’t need to manually select your bot to be installed on their device; this helps users access your service, but it means that individual requests become more like web searches rather than uses of a specifically chosen app. Instead of competing once for the install, you’re competing every time a user says “Hey Google”, and getting in early to be the program that Google Assistant suggests will be a huge win.

Apple seems to be the furthest behind, with their developer kit, SiriKit, pretty much limited to things that Siri can already do. That being said, Apple’s dominance in smartphone hardware and OS gives it a strong foothold. Apple’s laser focus on its own ecosystem could hamper long-term plans to be everyone’s HAL 9000, but in the short term, people who have committed to Apple’s vision are the closest casual consumers have come to an omniscient machine that follows you from room to room.

Apple’s focus on its own ecosystem could cause it to lose out in the personal assistants arms race. Image source: Kārlis Dambrāns via Flickr.

Facebook M, Facebook’s intelligent personal assistant, is an interesting departure from the norm. Rather than trying to create a program that can do everything, Facebook’s offering is more like partial automation. Facebook M is designed to deal with as many queries as possible, like the other IPAs, but when it gets stuck it sends the request on to human customer service reps who go as far as calling the DMV. The idea is that everything these reps do is recorded, so that Facebook M can eventually do it alone. While this is currently only available in limited geographies, and could run into some serious scalability issues, Facebook M has the potential to deliver the customer experience they’re all striving for within far shorter timelines.

Viv is another IPA worthy of mention at this point. Viv was created by the team which originally built Siri. In a launch video, co-founder Dag Kittlaus explains that Viv receives a request, checks all the integrations it has at its disposal, and then writes the code it needs to fulfil the request itself. While their developer centre isn’t yet open to wider use, you can email them about a partnership, and this different setup should mean the platform is far easier to build services for.

For my money, Amazon is making the most interesting strategic decisions. They are actively courting programmers and brands, and are expressly separating Alexa, the program, from the Echo devices that run it. Amazon’s laissez-faire attitude to Alexa use meant that CES 2017 included Alexa on devices from cars and washing machines to direct Echo competitors.  They’ve even managed to sneak Alexa onto iPhones by adding it as a feature to the Amazon app, which many users already have installed. This can’t compete with the ease of summoning Siri at just the hold of a button, but it’s a shot across the bow for Apple’s own assistant. It’s particularly interesting that Amazon has said they think digital assistants should be able to use each other – a nice ideal and a fantastic way to break out of platform silos if one service is to become dominant.

Chances are that all of these players are too big to be stamped out of the race entirely, but if one of them can reach a critical mass of developers and users to become the de facto disembodied voice, things are going to become very interesting indeed. And particularly valuable for those businesses that have the foresight or agility to keep up.

Platform specific resources

Microsoft is pushing LUIS in conjunction with the Microsoft Bot Framework, Google has invested in API.AI, and Amazon recommends building Alexa skills using the purpose-built section of its developer console. Amazon is also offering free (up to a point) hosting for your external code on AWS Lambda – the downside is that the Amazon platform is a bit more dependent on code, but they make linking into your code easier. Apple gives information about SiriKit, their SDK built specifically for Siri, in their developer documentation.

Sample JSON from API.AI

This isn’t identical to the messages that all of the platforms will send, but it’s the kind of thing you can expect:

{
  "id": "9962fb04-3808-472e-9fe0-f34de1f029b7",
  "timestamp": "2017-06-26T17:27:48.156Z",
  "lang": "en",
  "result": {
    "source": "agent",
    "resolvedQuery": "My name is Slim Shady",
    "action": "",
    "actionIncomplete": false,
    "parameters": {"name": "Slim Shady"},
    "contexts": [],
    "metadata": {
      "intentId": "2c7ba931-5ea7-4693-b384-eea23a661c68",
      "webhookUsed": "false",
      "webhookForSlotFillingUsed": "false",
      "intentName": "My name is name"
    },
    "fulfillment": {
      "speech": "",
      "messages": [{"type": 0, "speech": ""}]
    },
    "score": 1
  },
  "status": {"code": 200, "errorType": "success"},
  "sessionId": "1b0e0d9a-0efb-4d48-9dfc-9a1d5ebf1364"
}
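To make the structure concrete, here’s a short Python sketch (my own illustration, using a trimmed version of the payload above) that pulls out the fields your code would typically act on:

```python
import json

# A trimmed version of the API.AI v1-style payload shown above.
raw = '''{
  "result": {
    "resolvedQuery": "My name is Slim Shady",
    "parameters": {"name": "Slim Shady"},
    "metadata": {"intentName": "My name is name"},
    "score": 1
  },
  "status": {"code": 200, "errorType": "success"}
}'''

data = json.loads(raw)
result = data["result"]
# The platform has already matched the phrase template and filled the slot,
# so your code only ever sees the structured pieces:
print(result["metadata"]["intentName"])   # which phrase template matched
print(result["parameters"]["name"])       # the variable part of the phrase
```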

[1] Or interactive personal assistants, or Digital assistants, or AI, or bots, or any of the other host of names that have sprung up.

[2] In case you’re interested, apparently it is almost 35mph according to although I have not one clue what the “feels like” column means.


SEO for JavaScript-powered websites (Google IO 18 summary)

Posted by on Jul 16, 2018 in SEO Articles | Comments Off on SEO for JavaScript-powered websites (Google IO 18 summary)


You have probably heard that in the recent Google I/O 18, Google shed some light on SEO.

Tom Greenaway and John Mueller of Google presented a session about making your modern JavaScript-powered websites search friendly.

They actually listed some recommended best practices, useful tools, and a Google policy change.

Here’s the thing:

In a pretty un-Google-like way, the duo also shed some light on how the actual crawl and index process for JavaScript websites works.

Check out the video here:

But if you don’t want to spend 40 minutes watching the recording, hang around: here’s a quick summary of the important key points of the session.

A brief background introduction on the presenters…

Tom Greenaway is a senior developer advocate from Australia, while John Mueller (aka johnmu, ring a bell?) is Google’s webmaster trends analyst from Zurich, Switzerland.

How do crawl, render and index work for JavaScript-powered websites?

Tom started the talk by sharing a little background of search engines.

Here’s the deal,

The purpose of search engines is to provide a relevant list of results to answer users’ queries. Answers are pulled from a compiled library of web pages.

That library is the index.

Building an index starts with a crawlable URL.

Now, the crawler is designed to find content to crawl.

But in order to do this, the content must be retrievable via a URL. When the crawler gets to a URL, it looks through the HTML to index the page as well as to find new links to crawl.

Here’s a diagram on how search works for Google.

So how do you make sure that your content is reachable for the Googlebot?

Here’s what you need to know, Tom shared the six steps to ensure your web page will be indexed.

1. Make sure that your URL is crawlable
– Set up robots.txt at the top level domain of your site. Robots.txt is useful to let Googlebot know which URLs to crawl and which to ignore.

2. Utilize canonical tags
– In cases of content syndication, where content is distributed on different sites to maximize exposure, the source document should be tagged as the canonical document.

3. Make sure the URL is clean and unique
– Don’t list session information on the URL.

4. Provide a sitemap to Googlebot
– That way the crawler has a list of URLs to crawl and you can sleep better at night knowing your website is properly crawled.

5. Use the History API
– It replaces the hashbang (#!) URL fragment, which, if used, will no longer be indexed.

6. Make sure your links have anchor tags with HREF attributes
– Googlebot only recognizes links with BOTH anchor tags and HREF attributes; otherwise, they won’t be crawled and therefore never indexed.
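As an illustration of step 4, here’s a hedged Python sketch that builds a minimal sitemap.xml from a list of crawlable URLs using only the standard library; `build_sitemap` is a hypothetical helper name, not a Google tool.

```python
from xml.etree import ElementTree as ET

def build_sitemap(urls):
    """Build a minimal sitemap.xml string from a list of crawlable URLs."""
    urlset = ET.Element(
        "urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
    )
    for page in urls:
        # Each URL gets a <url><loc>…</loc></url> entry.
        url_el = ET.SubElement(urlset, "url")
        ET.SubElement(url_el, "loc").text = page
    return ET.tostring(urlset, encoding="unicode")

print(build_sitemap(["https://example.com/", "https://example.com/about"]))
```

Submitting the generated file via Search Console gives Googlebot the explicit list of URLs the step describes.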

What’s more important is,

Tom said Google has been encountering a list of problems trying to crawl and index websites that are built using Javascript.

Here’s the list of the most commonly faced problems with JavaScript website indexing.

Make sure to have a good look at it; you don’t want to repeat these same mistakes.

1. HTML delivered from the server is devoid of any content
– This leads Googlebot to assume that there’s nothing to index.

2. Lazy-loaded images are only sometimes indexable
– To make sure they are properly indexed, use a noscript tag or structured data.
– Take caution: images only referenced through CSS are not indexed.

3. Any content that is triggered via an interaction won’t be indexed
– Googlebot is not an interactive bot, which means it won’t go around clicking tabs on your website. Make sure the bot can get to all your content by either preloading it or toggling visibility on and off with CSS.
– Better still, just use separate URLs to navigate users and Googlebot to those pages individually.

4. Rendering timeouts
– Make sure your page is efficient and performant by limiting the number of embedded resources and avoiding artificial delays such as timed interstitials.

5. APIs that store local information are not supported
– Googlebot crawls and renders your pages in a stateless way instead.

Now, due to the increasingly widespread use of JavaScript, there is another step added between crawling and indexing. That is rendering.

Rendering is the construction of the HTML itself.

As mentioned before, the crawler needs to sift through your HTML in order to index your page. JavaScript-powered websites need to be rendered before they can be indexed.

According to Tom and John, Googlebot is already rendering your JavaScript websites.

What we can make out of the rendering process and indexing process for a JavaScript website is as below.

1. Googlebot uses the Chrome 41 browser for rendering
– Chrome 41 is from 2015, and any APIs added after Chrome 41 are not supported.

2. Rendering of JavaScript websites in Search is deferred
– Rendering web pages is a resource-heavy process, so rendering might be delayed for a few days until Google has free resources.

3. Two-phase indexing
– The first indexing happens before the rendering process is complete. After the final render arrives, there is a second indexing.
– The second indexing doesn’t check for the canonical tag, so the initially rendered version needs to include the canonical link, or else Googlebot will miss it altogether.
– Due to the nature of two-phase indexing, the indexability, metadata, canonical tags and HTTP codes of your web pages could be affected.

John Mueller takes the baton and shares with us some basic information on rendering.

Most importantly, he shared with the crowd which rendering method Google prefers.

There are four: client side, server side, hybrid and dynamic rendering.

1. Client side rendering
– This is the traditional setup, where rendering happens in the user’s browser or on a search engine.

2. Server side rendering
– Your server deals with the rendering and serves users and search engines alike static HTML.

3. Hybrid rendering (the long-term recommendation)
– Pre-rendered HTML is sent to users and search engines. Then, the server adds JavaScript on top of that for users. Search engines will simply pick up the pre-rendered HTML content.

4. Dynamic rendering (the policy change & Google’s preferred way)
– This method sends client-side rendered content to users, while search engines get server-side rendered content.
– It works by having your site dynamically detect whether a request comes from a search engine crawler.
– Device-focused content needs to be served accordingly (the desktop version for the desktop crawler and the mobile version for the mobile crawler).
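As a rough sketch of the dispatch logic dynamic rendering implies, here’s a hypothetical Python function (the name and return values are my own illustration) choosing a variant from the user-agent string:

```python
def choose_variant(user_agent):
    """Pick what to serve under dynamic rendering (method 4 above).

    Crawler requests get the server-side rendered static HTML, with the
    device-appropriate variant; everyone else gets the normal
    client-side rendered JavaScript app.
    """
    if "Googlebot" in user_agent:
        if "Mobile" in user_agent:
            return "prerendered-mobile-html"
        return "prerendered-desktop-html"
    return "client-side-app"
```

Matching the user-agent string alone is spoofable, so a production setup should also verify that the request really does come from a search engine crawler.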

How hybrid rendering works.

Now that it is out in the open that Google prefers the (new) dynamic rendering method to help the crawling, rendering and indexing of your site, John also gives a few suggestions on how to implement it.

Ways to implement dynamic rendering

1. Puppeteer
– A Node.js library that uses a headless version of Google Chrome, allowing you to render pages on your own server.

2. Rendertron
– Can be run as software or as a service that renders and caches your content on your side.

Both of these are open-source projects that allow extensive customization.

John also advises that rendering is resource intensive, so do it out of band from your normal web server and implement caching where needed.

The most important key point of dynamic rendering is this,

it can tell a search engine request apart from a normal user request.

But how could you recognize a Googlebot request?

The first way is to find Googlebot in the user-agent string.
The second way is to do a reverse DNS lookup.
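Combining the two checks, a hedged Python sketch might look like the following; the function name and the host allow-list are my own assumptions about how you would wire this up.

```python
import socket

def is_verified_googlebot(ip, user_agent):
    """Apply both checks above: user-agent string, then reverse DNS."""
    if "Googlebot" not in user_agent:
        return False
    try:
        host, _, _ = socket.gethostbyaddr(ip)
    except OSError:
        # No reverse DNS record: treat the request as unverified.
        return False
    # Genuine Googlebot IPs resolve to these domains; a stricter check
    # would also forward-resolve the host and confirm it maps back to ip.
    return host.endswith(".googlebot.com") or host.endswith(".google.com")
```

Because DNS lookups are slow, most setups cache the verification result per IP rather than resolving on every request.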

John stresses during the session that implementing the suggested rendering methods is not a requirement for indexing.

What it does is make the crawling and indexing process easier for Googlebot.

Considering the resources needed to run server-side rendering, you might want to weigh the toll before implementing it.

So when do you need to have dynamic rendering?

Here’s what,

You need it when you have a large and constantly updated website, like a news portal, because you want to be indexed quickly and correctly.

Or when you’re relying on a lot of modern JavaScript functionality that is not supported by Chrome 41, which means Googlebot won’t be able to render it correctly.

And finally, if your site relies on social media or chat applications that require access to your page’s content.

Now let’s look at when you don’t need to use dynamic rendering.

The answer is simple,

if Googlebot can index your pages correctly, you don’t need to implement anything.

So how can you know whether Googlebot is doing their job correctly?

You can employ progressive checking.

Keep in mind that you don’t need to run tests on every single web page. Test perhaps two pages from each template, just to make sure they are working fine.

So here’s how to check whether your pages are indexed

1. Use Fetch as Google in Google Search Console after verifying ownership; this will show you the HTTP response received by Googlebot, before any rendering.

2. Run a Google Mobile Friendly Test.


This matters because of the mobile-first indexing being rolled out by Google, where mobile pages become the primary focus of indexing. If the pages render well in the test, it means Googlebot can render your page for Search.

3. Keep an eye out for the new function in the mobile friendly test. It shows you the Googlebot-rendered version and full information on loading issues in case a page doesn’t render properly.

4. You can always check the developer console when your page fails in a browser. In the developer console, you can access the console log from when Googlebot tries to render something, which allows you to check for a bunch of issues.

5. All the diagnostics can also be run in the rich results test for desktop version sites.

At the end of the session, John also mentions some changes that will happen.

The first happy news,

Google will be moving rendering closer to crawling and indexing.

We can safely assume this means the second indexing will happen much quicker than before.

The second happy news,

Google will make Googlebot use a more modern version of Chrome, which means wider support for APIs.

They do make it clear that these changes will not happen until at least the end of the year.

To make things easier, here are the four steps to make sure your JavaScript-powered website is search friendly.

With that, the session is concluded. Do check out our slide show for a quick refresh.

All in all, Google is taking the mic and telling you exactly what they want.

Better take some notes.

Delivering search-friendly JavaScript-powered websites (Google I/O 18 summary) from Jia Thong Lo

