Wednesday, February 21, 2018

How to Deal with Fake Negative Reviews on Google

Posted by JoyHawkins

Fake reviews are a growing problem for those of us who own small businesses. In the online world, it's extremely easy to create a new account and leave either a positive or negative review for any business — regardless of whether you’ve ever tried to hire them.

Google has tons of policies for users who leave reviews. But in my experience, they're terrible at automatically catching violations of these policies. At my agency, my team spends time each month carefully monitoring reviews for our clients and their competitors. The good news is that if you’re diligent about tracking them and can make a good enough case for why the reviews are against the guidelines, you can get them removed by contacting Google on Twitter or Facebook, or by reporting via the forum.

Recently, my company got hit with three negative reviews, all left in the span of five minutes.

Two of the three reviews were ratings without reviews. These are the hardest to get rid of because Google will normally tell you that they don’t violate the guidelines — since there's no text on them. I instantly knew they weren’t customers because I'm really selective about who I work with and keep my client base small intentionally. I would know if someone that was paying me was unhappy.

The challenge with negative reviews on Google

The challenge is that Google doesn’t know who your customers are, and they won’t accept “this wasn't a customer” as a reason to remove a review, since they allow people to use anonymous usernames. In most cases, it’s extremely difficult to prove the identity of someone online.

The other challenge is that a person doesn’t have to be a customer to be eligible to leave a review. They have to have a “customer experience,” which could be anything from trying to call you and getting your voicemail to dropping by your office and just browsing around.

How to respond

When you work hard to build a good, ethical business, it's always infuriating when a random person has the power to destroy what took you years to build. I’d be lying if I said I wasn’t the least bit upset when these reviews came in. Thankfully, I was able to follow the advice I’ve given many people in the last decade: calm down and think about what your future prospects will see when they come across the review and the way you respond to it.

Solution: Share your dilemma

I decided to post on Twitter and Facebook about my lovely three negative reviews, and the response I got was overwhelming. People had really great and amusing things to say about my dilemma.

Whoever was behind these three reviews was seeking to harm my business. The irony is that they actually helped me, because I ended up getting three new positive reviews as a result of sharing my experience with people that I knew would rally behind me.

For most businesses, your evangelists might not be on Twitter, but you could post about it on your personal Facebook profile. Any friends that have used your service or patronized your business would likely respond in the same manner. It’s important to note that I never asked anyone to review me when posting this — it was simply the natural response from people that were a fan of my company and what we stand for. If you’re a great company, you’ll have these types of customers and they should be the people you want to share this experience with!

But what about getting the negative reviews removed?

In this case, I was able to get the three reviews removed. However, there have also been several cases where I’ve seen Google refuse to remove them for others. My plan B was to post a response to the reviews offering these “customers” a 100% refund. After all, 100% of zero is still zero — I had nothing to lose. This would also ensure that future prospects see that I’m willing to address people that have a negative experience, since even the best businesses in the world aren’t perfect. As much as I love my 5-star rating average, studies have shown that 4.2–4.5 is actually the ideal average star rating for purchase probability.

Have you had an experience with fake negative reviews on Google? If so, I’d love to hear about it, so please leave a comment.


Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!

Tuesday, February 20, 2018

The Google Ranking Factor You Can Influence in an Afternoon [Case Study]

Posted by sanfran

What does Google consider “quality content"? And how do you capitalize on a seemingly subjective characteristic to improve your standing in search?

We’ve been trying to figure this out since the Hummingbird algorithm was dropped in our laps in 2013, prioritizing “context” over “keyword usage/frequency.” This meant that Google’s algorithm intended to understand the meaning behind the words on the page, rather than the page’s keywords and metadata alone.

This sea change meant the algorithm was going to read between the lines in order to deliver content that matched the true intent of someone searching for a keyword.

Write longer content? Not so fast!

Watching us SEOs respond to Google updates is hilarious. We’re like a floor full of day traders getting news on the latest cryptocurrency.

One of the most prominent theories that made the rounds was that longer content was the key to organic ranking. I’m sure you’ve read plenty of articles on this. We at Brafton, a content marketing agency, latched onto that one for a while as well. We even experienced some mixed success.

However, what we didn’t realize was that when we experienced success, it was because we accidentally stumbled on the true ranking factor.

Longer content alone was not the intent behind Hummingbird.

Content depth

Let’s take a hypothetical scenario.

If you were to search the keyword “search optimization techniques,” you would see a SERP that looks similar to the following:

Nothing too surprising about these results.

However, if you were to go through each of these 10 results and take note of the major topics they discussed, theoretically you would have a list of all the topics being discussed by all of the top ranking sites.

Example:

Position 1 topics discussed: A, C, D, E, F

Position 2 topics discussed: A, B, F

Position 3 topics discussed: C, D, F

Position 4 topics discussed: A, E, F

Once you finished this exercise, you would have a comprehensive list of every topic discussed (A–F), and you would start to see patterns of priority emerge.

In the example above, note “topic F” is discussed in all four pieces of content. One would consider this a cornerstone topic that should be prioritized.

If you were then to write a piece of content that covered each of the topics discussed by every competitor on page one, and emphasized the cornerstone topics appropriately, in theory, you would have the most comprehensive piece of content on that particular topic.

By producing the most comprehensive piece of content available, you would have the highest quality result that will best satisfy the searcher’s intent. More than that, you would have essentially created the ultimate resource center for everything a person would want to know about that topic.
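As a sketch of the exercise above — using the hypothetical topics A–F from the example, not real SERP data — the topic tally is easy to automate:

```python
from collections import Counter

# Topics observed in each top-ranking result (hypothetical data
# matching the A-F example above).
competitor_topics = {
    1: {"A", "C", "D", "E", "F"},
    2: {"A", "B", "F"},
    3: {"C", "D", "F"},
    4: {"A", "E", "F"},
}

# Count how many competitors cover each topic.
coverage = Counter(t for topics in competitor_topics.values() for t in topics)

# "Cornerstone" topics appear in every top result; the full outline is
# every topic any competitor discusses.
cornerstone = {t for t, n in coverage.items() if n == len(competitor_topics)}
all_topics = set(coverage)

print(sorted(cornerstone))  # ['F']
print(sorted(all_topics))   # ['A', 'B', 'C', 'D', 'E', 'F']
```

In practice you'd feed in the real topic lists you pulled from page one, and the frequency counts would tell you which topics to emphasize.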

How to identify topics to discuss in a piece of content

At this point, we’re only theoretical. The theory makes logical sense, but does it actually work? And how do we go about scientifically gathering information on topics to discuss in a piece of content?

Finding topics to cover:

  • Manually: As discussed previously, you can do it manually. This process is tedious and labor-intensive, but it can be done on a small scale.
  • Using SEMrush: SEMrush features an SEO content template that will provide guidance on topic selection for a given keyword.
  • Using MarketMuse: MarketMuse was originally built for the very purpose of content depth, with an algorithm that mimics Hummingbird. MM takes a largely unscientific process and makes it scientific. For the purpose of this case study, we used MarketMuse.

The process

Watch the process in action

1. Identify content worth optimizing

We went through a massive list of keywords our blog ranked for. We filtered that list down to keywords that were not ranking number one in SERPs but had strong intent. You can also do this with core landing pages.

Here’s an example: We were ranking in the third position for the keyword “financial content marketing.” While this is a low-volume keyword, we were enthusiastic to own it due to the high commercial intent it comes with.

2. Evaluate your existing piece

Take a subjective look at your piece of content that is ranking for the keyword. Does it SEEM like a comprehensive piece? Could it benefit from updated examples? Could it benefit from better/updated inline embedded media? With a cursory look at our existing content, it was clear that the examples we used were old, as was the branding.

3. Identify topics

As mentioned earlier, you can do this in a few different ways. We used MarketMuse to identify the topics we were doing a good job of covering as well as our topic gaps: topics that competitors were discussing but we were not. The results were as follows:

Topics we did a good job of covering:

  • Content marketing impact on branding
  • Impact of using case studies
  • Importance of infographics
  • Business implications of a content marketing program
  • Creating articles for your audience

Topics we did a poor job of covering:

  • Marketing to millennials
  • How to market to existing clients
  • Crafting a content marketing strategy
  • Identifying and tracking goals

4. Rewrite the piece

Considering how out-of-date our examples were, and the number of topics we had neglected to discuss, we determined a full rewrite of the piece was warranted. Our writer, Mike O’Neill, was given the topic guidance, ensuring he had a firm understanding of everything that needed to be discussed in order to create a comprehensive article.

5. Update the content

To maintain our link equity, we kept the same URL and simply updated the old content with the new. Then we updated the publish date. The new article looks like this, with updated content depth, modern branding, and inline visuals.

6. Fetch as Google

Rather than wait for Google to recrawl and reindex the content on its own schedule, use Fetch as Google in Search Console to request indexing immediately (and it is indeed immediate).

7. Check your results

Open an incognito window and see your updated position.

Promising results:

We have run more than a dozen experiments and have seen positive results across the board. As demonstrated in the video, these results are usually realized within 60 seconds of reindexing the updated content.

Keyword target                    Old ranking   New ranking
“Financial content marketing”     3             1
“What is a subdomain”             16            6
“Best company newsletters”        32            4
“Staffing marketing”              7             3
“Content marketing agency”        16            1
“Google local business cards”     16            5
“Company blog”                    7             4
“SEO marketing tools”             9             3

Of those tests, here’s another example of this process in action for the keyword “best company newsletters.”

Before:

After:

Assumptions:

From these results, we can assume that content depth and breadth of topic coverage matters — a lot. Google’s algorithm seems to have an understanding of the competitive topic landscape for a keyword. In our hypothetical example from before, it would appear the algorithm knows that topics A–F exist for a given keyword and uses that collection of topics as a benchmark for content depth across competitors.

We can also assume Google’s algorithm either a.) responds immediately to updated information, or b.) has a cached snapshot of the competitive content depth landscape for any given keyword. Either of these scenarios is very likely because of the speed at which updated content is re-ranked.


In conclusion, don’t arbitrarily write long content and call it “high quality.” Choose a keyword you want to rank for and create a comprehensive piece of content that fully supports that keyword. There is no guarantee you’ll be granted a top position — domain strength factors play a huge role in rankings — but you’ll certainly improve your odds, as we have seen.



Monday, February 19, 2018

The Biggest Mistake Digital Marketers Ever Made: Claiming to Measure Everything

Posted by willcritchlow

Digital marketing is measurable.

It’s probably the single most common claim everyone hears about digital, and I can’t count the number of times I’ve seen conference speakers talk about it (heck, I’ve even done it myself).

I mean, look at those offline dinosaurs, the argument goes. They all know that half their spend is wasted — they just don’t know which half.

Maybe the joke’s on us digital marketers though, who garnered only 41% of global ad spend even in 2017 after years of strong growth.

Unfortunately, while we were geeking out about attribution models and cross-device tracking, we were accidentally triggering a common human cognitive bias that kept us anchored on small amounts, leaving buckets of money on the table and fundamentally reducing our impact and access to the C-suite.

And what’s worse is that we have convinced ourselves that it’s a critical part of what makes digital marketing great. The simplest way to see this is to realize that, for most of us, if you removed all our measurement ability, we wouldn't reduce our digital marketing investment to nothing.

In truth, of course, we’re nowhere close to measuring all the benefits of most of the things we do. We certainly track the last clicks, and we’re not bad at tracking any clicks on the path to conversion on the same device, but we generally suck at capturing:

  • Anything that happens on a different device
  • Brand awareness impacts that lead to much later improvements in conversion rate, average order value, or lifetime value
  • Benefits of visibility or impressions that aren’t clicked
  • Brand affinity generally

The cognitive bias that leads us astray

All of this means that the returns we report on tend to be just the most direct returns. This should be fine — it’s just a floor on the true value (“this activity has generated at least this much value for the brand”) — but the “anchoring” cognitive bias means that it messes with our minds and our clients’ minds. Anchoring is the process whereby we fixate on the first number we hear and subsequently estimate unknowns closer to the anchoring number than we should. Famous experiments have shown that even showing people a totally random number can drag their subsequent estimates up or down.

So even if the true value of our activity was 10x the measured value, we’d be stuck on estimating the true value as very close to the single concrete, exact number we heard along the way.

This tends to result in the measured value being seen as a ceiling on the true value. Other biases like the availability heuristic (which results in us overstating the likelihood of things that are easy to remember) mean that we tend to factor in obvious ways that the direct value measurement could be overstating things, while leaving to one side all the unmeasured extra value.

The mistake became a really big one because fortunately/unfortunately, the measured return in digital has often been enough to justify at least a reasonable level of the activity. If it hadn’t been (think the vanishingly small number of people who see a billboard and immediately buy a car within the next week when they weren’t otherwise going to do so) we’d have been forced to talk more about the other benefits. But we weren’t. So we lazily talked about the measured value, and about the measurability as a benefit and a differentiator.

The threats of relying on exact measurement

Not only do we leave a whole load of credit (read: cash) on the table, but it also leads to threats to measurability being seen as existential threats to digital marketing activity as a whole. We know that there are growing threats to measuring accurately, including regulatory, technological, and user-behavior shifts:

Now, imagine that the combination of these trends meant that you lost 100% of your analytics and data. Would it mean that your leads stopped? Would you immediately turn your website off? Stop marketing?

I suggest that the answer to all of that is “no.” There's a ton of value to digital marketing beyond the ability to track specific interactions.

We’re obviously not going to see our measurable insights disappear to zero, but for all the reasons I outlined above, it’s worth thinking about all the ways that our activities add value, how that value manifests, and some ways of proving it exists even if you can’t measure it.

How should we talk about value?

There are two pieces to the brand value puzzle:

  1. Figuring out the value of increasing brand awareness or affinity
  2. Understanding how our digital activities are changing said awareness or affinity

There's obviously a lot of research into brand valuations generally, and while it’s outside the scope of this piece to think about total brand value, it’s worth noting that some methodologies place as much as 75% of the enterprise value of even some large companies in the value of their brands.


My colleague Tom Capper has written about a variety of ways to measure changes in brand awareness, which attacks a good chunk of the second challenge. But challenge #1 remains: how do we figure out what it’s worth to carry out some marketing activity that changes brand awareness or affinity?

In a recent post, I discussed different ways of building marketing models, and one of the methodologies I described might be useful for this: so-called “top-down” modelling, which I defined as being about percentages and trends (as opposed to raw numbers and units of production).

The top-down approach

I’ve come up with two possible ways of modelling brand value in a transactional sense:

1. The Sherlock approach

“When you have eliminated the impossible, whatever remains, however improbable, must be the truth.”
- Sherlock Holmes

The outline would be to take the total new revenue acquired in a period. Subtract from this any elements that can be attributed to specific acquisition channels; whatever remains must be brand. If this is in any way stable or predictable over multiple periods, you can use it as a baseline value from which to apply the methodologies outlined above for measuring changes in brand awareness and affinity.
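As a back-of-the-envelope sketch, with entirely hypothetical figures, the Sherlock approach is just subtraction:

```python
# Hypothetical monthly figures: total new revenue minus everything
# attributable to a specific acquisition channel; whatever remains
# is treated as the brand baseline.
total_new_revenue = 500_000
attributed = {
    "paid_search": 120_000,
    "unbranded_organic": 150_000,
    "paid_social": 40_000,
    "referral": 30_000,
}

brand_baseline = total_new_revenue - sum(attributed.values())
print(brand_baseline)  # 160000
```

If that residual holds roughly steady across several periods, it becomes your baseline brand value.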

2. Aggressive attribution

If you run normal first-touch attribution reports, the limitations of measurement (clearing cookies, multiple devices, etc.) mean that you will show first-touch revenue that seems somewhat implausible (e.g. email, which surely can’t be a first-touch source — how did they get on your email list in the first place?):


In this screenshot we see that although first-touch dramatically reduces the influence of direct, for instance, it still accounts for more than 15% of new revenue.

The aggressive attribution model takes total revenue and splits it between the acquisition channels (unbranded search, paid social, referral). A first pass on this would simply split it in the relative proportion to the size of each of those channels, effectively normalizing them, though you could build more sophisticated models.

Note that there is no way of perfectly identifying branded vs. unbranded organic search, since Google withholds keyword data (“(not provided)”), so you’ll have to use a proxy like homepage search vs. non-homepage search.

But fundamentally, the argument here would be that any revenue coming from a “first touch” of:

  • Branded search
  • Direct
  • Organic social
  • Email

...was actually acquired previously via one of the acquisition channels and so we attempt to attribute it to those channels.
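A minimal sketch of that reallocation, with made-up channel figures, might look like this:

```python
# Aggressive attribution sketch (hypothetical numbers): revenue whose
# "first touch" was a brand-driven channel (branded search, direct,
# organic social, email) is reassigned to the true acquisition
# channels in proportion to their relative size.
acquisition = {
    "unbranded_search": 200_000,
    "paid_social": 50_000,
    "referral": 50_000,
}
brand_first_touch = 90_000  # branded search + direct + organic social + email

total_acq = sum(acquisition.values())
reattributed = {
    channel: revenue + brand_first_touch * revenue / total_acq
    for channel, revenue in acquisition.items()
}
print(reattributed["unbranded_search"])  # 260000.0
```

This is the simple first pass described above — a straight pro-rata split — though you could build more sophisticated weighting on top of it.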

Even this under-represents brand value

Both of those methodologies are pretty aggressive — but they might still under-represent brand value. Here are two additional mechanics where brand drives organic search volume in ways I haven’t figured out how to measure yet:

Trusting Amazon to rank

I like reading on the Kindle. If I hear of a book I’d like to read, I’ll often Google the name of the book on its own and trust that Amazon will rank first or second so I can get to the Kindle page to buy it. This is effectively a branded search for Amazon (and if it doesn’t rank, I’ll likely follow up with a [book name amazon] search or head on over to Amazon to search there directly).

But because all I’ve appeared to do is search [book name] on Google and then click through to Amazon, there is nothing to differentiate this from an unbranded search.

Spotting brands you trust in the SERPs

I imagine we all have anecdotal experience of doing this: you do a search and you spot a website you know and trust (or where you have an account) ranking somewhere other than #1 and click on it regardless of position.

One time that I can specifically recall noticing this tendency growing in myself was when I started doing tons more baby-related searches after my first child was born. Up until that point, I had effectively zero brand affinity with anyone in the space, but I quickly grew to rate the content put out by babycentre (babycenter in the US) and I found myself often clicking on their result in position 3 or 4 even when I hadn’t set out to look for them, e.g. in results like this one:

It was fascinating to me to observe this behavior in myself because I had no real interaction with babycentre outside of search, and yet, by consistently ranking well across tons of long-tail queries and providing consistently good content and user experience I came to know and trust them and click on them even when they were outranked. I find this to be a great example because it is entirely self-contained within organic search. They built a brand effect through organic search and reaped the reward in increased organic search.

I have essentially no ideas on how to measure either of these effects. If you have any bright ideas, do let me know in the comments.

Budgets will come under pressure

My belief is that total digital budgets will continue to grow (especially as TV continues to fragment), but I also believe that individual budgets are going to come under scrutiny and pressure making this kind of thinking increasingly important.

We know that there is going to be pressure on referral traffic from Facebook following the recent news feed announcements, but there is also pressure on trust in Google:

While I believe that the opportunity is large and still growing (see, for example, this slide showing Google growing as a referrer of traffic even as CTR has declined in some areas), it’s clear that the narrative is going to lead to more challenging conversations and budgets under increased scrutiny.

Can you justify your SEO investment?

What do you say when your CMO asks what you’re getting for your SEO investment?

What do you say when she asks whether the organic search opportunity is tapped out?

I’ll probably explore the answers to both these questions more in another post, but suffice it to say that I do a lot of thinking about these kinds of questions.

The first is why we have built our split-testing platform to make organic SEO investments measurable, quantifiable and accountable.

The second is why I think it’s super important to remember the big picture while the media is running around with hair on fire. Media companies saw Facebook overtake Google as a traffic channel (and then are likely seeing that reverse right now), but most of the web has Google as the largest and growing source of traffic and value.

The reality (from clickstream data) is that it's really easy to forget how long the long-tail is and how sparse search features and ads are on the extreme long-tail:

  1. Only 3–4% of all searches result in a click on an ad, for example. Google's incredible (and still growing) business is based on a small subset of commercial searches
  2. Google's share of all outbound referral traffic across the web is growing (and Facebook's is shrinking as they increasingly wall off their garden)

The opportunity is for smart brands to capitalize on a growing opportunity while their competitors sink time and money into a social space that is increasingly all about Facebook, and increasingly pay-to-play.

What do you think? Are you having these hard conversations with leadership? How are you measuring your digital brand’s value?



Friday, February 16, 2018

Using the Cross Domain Rel=Canonical to Maximize the SEO Value of Cross-Posted Content - Whiteboard Friday

Posted by randfish

Same content, different domains? There's a tag for that. Using rel=canonical to tell Google that similar or identical content exists on multiple domains has a number of clever applications. You can cross-post content across several domains that you own, you can benefit from others republishing your own content, rent or purchase content on other sites, and safely use third-party distribution networks like Medium to spread the word. Rand covers all the canonical bases in this not-to-be-missed edition of Whiteboard Friday.

Using the Cross Domain Rel=Canonical to Maximize the SEO Value of X-Posted Content


Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we're going to chat about the cross-domain rel=canonical tag. So we've talked about rel=canonical a little bit and how it can be used to take care of duplicate content issues, point Google to the right pages from potentially other pages that share similar or exactly the same content. But cross-domain rel=canonical is a unique and uniquely powerful tool that is designed to basically say, "You know what, Google? There is the same content on multiple different domains."

So in this simplistic example, MyFriendSite.com/green-turtles contains this content that I said, "Sure, it's totally fine for you, my friend, to republish, but I know I don't want SEO issues. I know I don't want duplicate content. I know I don't want a problem where my friend's site ends up outranking me, because maybe they have better links or other ranking signals, and I know that I would like any ranking credit, any link or authority signals that they accrue, to actually come to my website."

There's a way that you can do this. Google introduced it back in 2009. It is the cross-domain rel=canonical. So essentially, in the header tag of the page, I can add this link, rel=canonical href — it's a link tag, so there's an href — to the place where I want the link or the canonical, in this case, to point to and then close the tag. Google will transfer over, this is an estimate, but roughly in the SEO world, we think it's pretty similar to what you get in a 301 redirect. So something above 90% of the link authority and ranking signals will transfer from FriendSite.com to MySite.com.
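For reference, the tag Rand describes (using the hypothetical domains from this example) would sit in the head of the republished page on the friend's site:

```html
<!-- On MyFriendSite.com/green-turtles, pointing the canonical
     back to the original article (hypothetical URLs) -->
<head>
  <link rel="canonical" href="https://mysite.com/green-turtles" />
</head>
```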

So my green turtles page is going to be the one that Google will be more likely to rank. As this one accrues any links or other ranking signals, that authority, those links should transfer over to my page. That's an ideal situation for a bunch of different things. I'll talk about those in a sec.

Multiple domains and pages can point to any URL

Multiple domains and pages are totally cool to point to any URL. I can do this for FriendSite.com. I can also do this for TurtleDudes.com and LeatherbackFriends.net and SeaTees.com and NatureIsLit.com. All of them can contain this cross-domain rel=canonical pointing back to the site or the page that I want it to go to. This is a great way to potentially license content out there, give people republishing permissions without losing any of the SEO value.

A few things need to match:

I. The page content really does need to match

That includes things like text, images, if you've embedded videos, whatever you've got on there.

II. The headline

Ideally, it should match. It's a little less crucial than the page content, but you probably want that headline to match.

III. Links (in content)

Those should also match. Check all three (page content, headline, and in-content links) to be sure that Google will count that rel=canonical correctly.

Things that don't need to match:

I. The URL

No, it's fine if the URLs are different. In this case, I've got NatureIsLit.com/turtles/p?id=679. That's okay. It doesn't need to be green-turtles. I can have a different URL structure on my site than they've got on theirs. Google is just fine with that.

II. The title of the piece

Many times the cross-domain rel=canonical is used with different page titles. So if, for example, CTs.com wants to publish the piece with a different title, that's okay. I still generally recommend that the headlines stay the same, but okay to have different titles.

III. The navigation

IV. Site branding

So all the things around the content. If I've got my page here and I have like nav elements over here, nav elements down here, maybe a footer down here, a nice little logo up in the top left, that's fine if those are totally different from the ones that are on these other pages cross-domain canonically. That stuff does not need to match. We're really talking about the content inside the page that Google looks for.

Ways to use this protocol

Some great ways to use the cross-domain rel=canonical.

1. If you run multiple domains and want to cross-post content, choose which one should get the SEO benefits and rankings.

If you run multiple domains, for whatever reason, let's say you've got a set of domains and you would like the benefit of being able to publish a single piece of content, for whatever reason, across multiples of these domains that you own, but you know you don't want to deal with a duplicate content issue and you know you'd prefer for one of these domains to be the one receiving the ranking signals, cross-domain rel=canonical is your friend. You can tell Google that Site A and Site C should not get credit for this content, but Site B should get all the credit.

The issue here is don't try and do this across multiple domains. So don't say, "Oh, Site A, why don't you rel=canonical to B, and Site C, why don't you rel=canonical to D, and I'll try and get two things ranked in the top." Don't do that. Make sure all of them point to one. That is the best way to make sure that Google respects the cross-domain rel=canonical properly.

2. If a publication wants to re-post your content on their domain, ask for it instead of (or in addition to) a link back.

Second, let's say a publication reaches out to you. They're like, "Wow. Hey, we really like this piece." My wife, Geraldine, wrote a piece about Mario Batali's sexual harassment apology letter and the cinnamon rolls recipe that he strangely included in this apology. She baked those and then wrote about it. It went quite viral, got a lot of shares from a ton of powerful and well-networked people and then a bunch of publications. The Guardian reached out. An Australian newspaper reached out, and they said, "Hey, we would like to republish your piece." Geraldine talked to her agent, and they set up a price or whatever.

One of the ways that you can benefit from this, beyond just getting a link from The Guardian or some other newspaper, is to say, "Hey, I will be happy to be included here. You don't even have to give me, necessarily, if you don't want to, author credit or link credit, but I do want that sweet, sweet rel=canonical." This is a great way to maximize the SEO benefit of being posted on someone else's site, because you're not just receiving a single link. You're receiving credit from all the links that that piece might generate.

Oops, I did that backwards. You want it to come from their site to your site. This is how you know Whiteboard Friday is done in one take.

3. Purchase/rent content from other sites without forcing them to remove the content from their domain.

Next, let's say I am in the opposite situation. I'm the publisher. I see a piece of content that I love and I want to get that piece. So I might say, "Wow, that piece of content is terrific. It didn't do as well as I thought it would do. I bet if we put it on our site and broadcast it with our audience, it would do incredibly well. Let's reach out to the author of the piece and see if we can purchase or rent for a time period, say two years, for the next two years we want to put the cross-domain rel=canonical on your site and point it back to us and we want to host that content. After two years, you can have it back. You can own it again."

The key is doing this without forcing them to remove the content from their site. You're saying, "You, the publisher, you, the author, can keep it on your site. We don't mind. We'd just like this tag applied, and we'd like to be able to have republishing permissions on our website." Now you can get the SEO benefits of that piece of content, and they can, in exchange, get some money. So your site is sending them some dollars, and their site is sending you the rel=canonical and the ranking authority and the link equity and all those beautiful things.

4. Use Medium as a content distribution network without the drawback of duplicate content.

Number four, Medium. Medium is a great place to publish content. It has a wide network, people who really care about consuming content. Medium is a great distribution network with one challenge. If you post on Medium, people worry that they can't post the same thing on their own site because you'll be competing with Medium.com. It's a very powerful domain. It tends to rank really well. So duplicate content is an issue, and potentially losing the rankings and the traffic that you would get from search and losing that to Medium is no fun.

But Medium has a beautiful thing. The cross-domain rel=canonical is built into their import tool. So if you go to Medium.com/p/import and you are logged in to your Medium account, you can enter in their URL field the content that you've published on your own site. Medium will republish it on your account, and they will include the cross-domain rel=canonical back to you. Now you can start thinking of Medium as essentially a distribution network without the problems of duplicate content. Really, really awesome tool. Really awesome that Medium is offering this. I hope it sticks around.

All right, everyone. I think you're going to have some excellent additional ideas for the cross-domain rel=canonical and how you have used it. We would love you to share those in the comments below, and we'll see you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com


Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don't have time to hunt down but want to read!

Thursday, February 15, 2018

Reading Between the Lines: A 3-Step Guide to Reviewing Web Page Content

Posted by Jackie.Francis

In SEO, reviewing content is an unavoidable yet extremely important task. As the driving factor that brings people to a page, best practice dictates that we do what we can to ensure that the work we've invested hours and resources into creating remains impactful and relevant over time. This requires occasionally going back and re-evaluating our content to identify areas that can be improved.

That being said, if you've ever done a content review, you know how surprisingly challenging this is. A wide variety of formats and topics, alongside the challenge of defining “good” content, makes it hard to pick out the core elements that matter. Without these universal focus areas, you may end up neglecting an element (e.g. tone of voice) in one instance but paying special attention to that same element in another.

Luckily there are certain characteristics — like good spelling, appealing layouts, and relevant keywords — that are universally associated with what we would consider “good” content. In this three-step guide, I'll show you how to use these characteristics (or elements, as I like to call them) to define your target audience, measure the performance of your content using a scorecard, and assess your changes for quality assurance as part of a review process that can be applied to nearly all types of content across any industry.


Step 1: Know your audience

Arguably the most important step in this post, knowing your target reader reveals the details that should form the foundation of your content. This includes insight into the reader’s intent, the ideal look and feel of the page, and the goals your content’s message should be trying to achieve.

To get to this point, however, you first need to answer these two questions:

  1. What does my target audience look like?
  2. Why are they reading my content?

What does my target audience look like?

The first question relies on general demographic information such as age, gender, education, and job title. This gives a face to the ideal audience member(s) and the kind of information that would best suit them. For example, if targeting stay-at-home mothers between the ages of 35 and 40 with two or more kids under the age of 5, we can guess that she has a busy daily schedule, travels frequently for errands, and constantly needs to stay vigilant over her younger children. So, a piece that is personable, quick, easy to read on-the-go, and includes inline imagery to reduce eye fatigue would be better received than something that is lengthy and requires a high level of focus.

Why are they reading my content?

Once you have a face to your reader, the second question must be answered to understand what that reader wants from your content and if your current product is effectively meeting those needs. For example, senior-level executives of mid- to large-sized companies may be reading to become better informed before making an important decision, to become more knowledgeable in their field, or to use the information they learn to teach others. Other questions you may want to consider asking:

  • Are they reading for leisure or work?
  • Would they want to share this with their friends on social media?
  • Where will they most likely be reading this? On the train? At home? Waiting in line at the store?
  • Are they comfortable with long blocks of text, or would inline images be best?
  • Do they prefer bite-sized information or are they comfortable with lengthy reports?

You can find the answers to these questions and collect valuable demographic and psychographic information by using a combination of internal resources, like sales scripts and surveys, and third-party audience insight tools such as Google Analytics and Facebook Audience Insights. With these results you should now have a comprehensive picture of your audience and can start identifying the parts of your content that can be improved.


Step 2: Tear apart your existing content

Now that you understand who your audience is, it’s time to get to the real work: assessing your existing content. This stage requires breaking everything apart to identify the components you should keep, change, or discard. However, this task can be extremely challenging because the performance of most components — such as tone of voice, design, and continuity — can’t simply be bucketed into binary categories like “good” or “bad.” Rather, they fall into a spectrum where the most reasonable level of improvement falls somewhere in the middle. You'll see what I mean by this statement later on, but one of the most effective ways to evaluate and measure the degree of optimization needed for these components is to use a scorecard. Created by my colleague, Ben Estes, this straightforward, reusable, and easy to apply tool can help you objectively review the performance of your content.

Make a copy of the Content Review Grading Rubric

Note: The card sampled here, and the one I personally use for similar projects, is a slightly altered version of the original.

As you can see, the card is divided into two categories: Writing and Design. Listed under each category are the elements universally needed to create good content, each of which should be examined. Every element is assigned a grading scale ranging from 1–5, with 1 being the worst score and 5 being the best.

To use, start by choosing a part of your page to look at first. Order doesn’t matter, so whether you choose to first check “spelling and grammar” or “continuity” is up to you. Next, assign it a score on a separate Excel sheet (or mark it directly on the rubric) based on its current performance. For example, if the copy has no spelling errors but some minor grammar issues, you would rank “spelling and grammar” as a four (4).

Finally, repeat this process until all elements are graded. Remember to stay impartial to give an honest assessment.

Once you’re done, look at each grade and see where it falls on the scale. Ideally, each element should have a score of 4 or greater, although a grade of 5 should only be given out sparingly. Tying back to my spectrum comment from earlier, a 5 is exclusively reserved for top-level work: something to strive for, but typically requiring more effort to achieve than it's worth. A grade of 4 is the highest reasonable goal to aim for in most instances.

A grade of 3 or below indicates an opportunity for improvement and that significant changes need to be made.

If working with multiple pieces of content at once, the grading system can also be used to help prioritize your workload. Just collect the average writing or design score and sort them in ascending/descending order. Pages with a lower average indicate poorer performance and should be prioritized over pages whose averages are higher.
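As a sketch of that prioritization step (the page names and grades here are invented for illustration), averaging each page's element scores and sorting ascending surfaces the weakest pages first:

```python
# Hypothetical scorecard grades (1-5) for a few pages; the element
# names loosely mirror the rubric's Writing/Design criteria.
pages = {
    "/pricing":  {"spelling_grammar": 4, "tone": 3, "layout": 2, "continuity": 3},
    "/about":    {"spelling_grammar": 5, "tone": 4, "layout": 4, "continuity": 4},
    "/blog/faq": {"spelling_grammar": 3, "tone": 2, "layout": 3, "continuity": 2},
}

def average_score(grades):
    """Mean of a page's element grades."""
    return sum(grades.values()) / len(grades)

# Lowest average = poorest performance = highest priority.
priority = sorted(pages, key=lambda p: average_score(pages[p]))
print(priority)  # → ['/blog/faq', '/pricing', '/about']
```

The same one-liner works whether you average the whole card or keep separate Writing and Design averages, as the post suggests.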

Whether you choose to use this scorecard or make your own, what you review, the span of the grading scale, and the criteria for each grade should be adjusted to fit your specific needs and result in a tool that will help you honestly assess your content across multiple applications.

Don’t forget the keywords

With most areas of your content covered by the scorecard, the last element to check before moving to the editing stage is your keywords.

Before I catch flak for this, I’m aware that the general rule of creating content is to do your keyword research first. But I’ve found that when it comes to reviews, evaluating keywords last feels more natural and makes the process a lot smoother. When first running through a page, you’re much more likely to notice spelling and design flaws before you pick up on whether a keyword is used correctly, so why not make note of those details first?

Depending on the outcomes stemming from the re-evaluation of your target audience and content performance review, you will notice one of two things about your currently targeted keywords:

  1. They have not been impacted by the outcomes of the prior analyses and do not need to be altered
  2. They no longer align with the goals of the page or needs of the audience and should be changed

In the first example, the keywords you originally targeted are still best suited to your content’s message and no additional research is needed. So your only remaining task is to determine whether or not your keywords are used effectively throughout the page. This means assessing things like the title tag, image alt attributes, URL, and copy.
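A rough sketch of that check (the page data and keyword below are hypothetical) simply looks for the target keyword in each on-page element:

```python
# Invented page data for illustration; in practice you'd pull these
# fields from a crawl or from the page's HTML.
page = {
    "title": "Content Review Checklist for SEO Teams",
    "url": "/blog/content-review-checklist",
    "image_alts": ["content review checklist scorecard"],
    "copy": "This content review checklist walks through each element...",
}

def keyword_usage(page, keyword):
    """Report, element by element, whether the keyword appears on-page."""
    kw = keyword.lower()
    return {
        "title": kw in page["title"].lower(),
        "url": kw.replace(" ", "-") in page["url"].lower(),
        "image_alts": any(kw in alt.lower() for alt in page["image_alts"]),
        "copy": kw in page["copy"].lower(),
    }

usage = keyword_usage(page, "content review checklist")
print(usage)  # every element covers the keyword in this example
```

Any element that comes back False is a spot to work the keyword in (naturally, not stuffed) during the editing stage.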

In an attempt to stay on track, I won’t go into further detail on how to optimize keywords but if you want a little more insight, this post by Ken Lyons is a great resource.

If, however, your target keywords are no longer relevant to the goals of your content, before moving to the editing stage you’ll need to re-do your keyword research to identify the terms you should rank for. For insight into keyword research this chapter in Moz’s Beginner's Guide to SEO is another invaluable resource.


Step 3: Evaluate your evaluation

At this point your initial review is complete and you should be ready to edit.

That’s right. Your initial review.

The interesting thing about assessing content is that it never really ends. As you make edits, you’ll tend to deviate more and more from your initial strategy. While that’s not always a bad thing, you must continuously monitor these changes to ensure that you’re on the right track to create a highly valued piece of content.

The best approach would be to reassess all your material when:

  • 50% of the edits are complete
  • 85% of the edits are complete
  • You have finished editing

At the 50% and 85% marks, keep the assessment quick and simple. Look through your revisions and ask the following questions:

  • Am I still addressing the needs of my target audience?
  • Are my target keywords properly integrated?
  • Am I using the right language and tone of voice?
  • Does it look like the information is structured correctly (hierarchically)?

If your answer is “Yes” to all four questions, then you've effectively made your changes and should proceed. For any question you answer “No,” go back and make the necessary corrections. The areas targeted here become more difficult to fix the closer you are to completion and ensuring they're correct throughout this stage will save a lot of time and stress in the long run.

When you've finished and think you're ready to publish, run one last comprehensive review to check the performance status of all related components. This means confirming you've properly addressed the needs of your audience, optimized your keywords, and improved the elements highlighted in the scorecard.


Moving forward

No two pieces of content are the same, but that does not mean there aren’t some important commonalities either. Being able to identify these similarities and understand the role they play across all formats and topics will lead the way to creating your own review process for evaluating subjective material.

So, when you find yourself gearing up for your next project, give these steps a try and always keep the following in mind:

  1. Your audience is what makes or breaks you, so keep them happy
  2. Consistent quality is key! Ensure all components of your content are performing at their best
  3. Keep your keywords optimized and be prepared to do additional research if necessary
  4. Unplanned changes will happen. Just remember to stay observant to keep yourself on track


Wednesday, February 14, 2018

The 2018 Local SEO Forecast: 9 Predictions According to Mozzers

Posted by MiriamEllis

It's February, and we've all dipped our toes into the shallow end of the 2018 pool. Today, let's dive into the deeper waters of the year ahead, with local search marketing predictions from Moz's Local SEO Subject Matter Expert, our Marketing Scientist, and our SEO & Content Architect. Miriam Ellis, Dr. Peter J. Meyers, and Britney Muller weigh in on what your brand should prepare for in the coming months in local.


WOMM, core SEO knowledge, and advice for brands both large and small

Miriam Ellis, Moz Associate & Local SEO SME

LSAs will highlight the value of Google-independence

Word-of-mouth marketing (WOMM) and loyalty initiatives will become increasingly critical to service area businesses (SABs) whose results are disrupted by Google’s Local Service Ads. SABs aren’t going to love having to “rent back” their customers from Google, so Google-independent lead channels will have enhanced value. That being said, the first small case study I’ve seen indicates that LSAs may be a winner over traditional AdWords in terms of cost and conversions.

Content will be the omni-channel answer

Content will grow in value, as it is the answer to everything coming our way: voice search, Google Posts, Google Questions & Answers, owner responses, and every stage of the sales funnel. Because of this, agencies which have formerly thought of themselves as strictly local SEO consultants will need to master the fundamentals of organic keyword research and link building, as well as structured data, to offer expert-level advice in the omni-channel environment. Increasingly, clients will need to become “the answer” to queries… and that answer will predominantly reside in content dev.

Retail may downsize but must remain physical

Retail is being turned on its head, with Amazon becoming the “everything store” and the triumphant return of old-school home delivery. Large brands failing to see profits in this new environment will increasingly downsize to the showroom scenario, significantly cutting costs, while also possibly growing sales as personally assisted consumers are dissuaded from store-and-cart abandonment, and upsold on tie-ins. Whether this will be an ultimate solution for shaky brands, I can’t say, but it matters to the local SEO industry because showrooms are, at least, physical locations and therefore eligible for all of the goodies of our traditional campaigns.

SMBs will hold the quality high card

For smaller local brands, emphasis on quality will be the most critical factor. Go for the customers who care about specific attributes (e.g. being truly local, made in the USA, handcrafted, luxury, green, superior value, etc.). Evaluating and perfecting every point of contact with the customer (from how phone calls are assisted, to how online local business data is managed, to who asks for and responds to reviews) matters tremendously. This past year, I’ve watched a taxi driver launch a delivery business on the side, grow to the point where he quit driving a cab, hire additional drivers, and rack up a profusion of 5-star, unbelievably positive reviews, all because his style of customer service is memorably awesome. Small local brands will have the nimbleness and hometown know-how to succeed when quality is what is being sold.


In-pack ads, in-SERP features, and direct-to-website traffic

Dr. Peter J. Meyers, Marketing Scientist at Moz

In-pack ads to increase

Google will get more aggressive about direct local advertising, and in-pack ads will expand. In 2018, I expect local pack ads will not only appear on more queries but will make the leap to desktop SERPs and possibly Google Home.

In-SERP features to grow

Targeted, local SERP features will also expand. Local Service Ads rolled out to more services and cities in 2017, and Google isn’t going to stop there. They’ve shown a clear willingness to create specialized content for both organic and local. For example, 2017 saw Google launch a custom travel portal and jobs portal on the “organic” side, and this trend is accelerating.

Direct-to-website traffic to decline

The push to keep local search traffic in Google properties (i.e. Maps) will continue. Over the past couple of years, we’ve seen local packs go from results that link directly to websites, to having a separate “Website” link, to local sites being buried 1–2 layers deep. In some cases, local sites are being almost completely supplanted by local Knowledge Panels, some of which (hotels being a good example) have incredibly rich feature sets. Google wants to deliver local data directly on Google, and direct traffic to local sites from search will continue to decline.


Real-world data and the importance of Google

Britney Muller, SEO & Content Architect at Moz

Relevance drawn from the real world

Real-world data! Google will leverage device and credit card data to get more accurate information on things like foot traffic, current gas prices, repeat customers, length of visits, gender-neutral bathrooms, type of customers, etc. As the most accurate source of business information to date, why wouldn’t they?

Google as one-stop shop

SERPs and Maps (assisted by local business listings) will continue to grow as a one-stop shop for local business information. Small business websites will still be important, but they're more likely to serve as a data source for Google, feeding both basic business information and more in-depth data like the real-world signals above, rather than as the only place customers go for that information.


Google as friend or foe? Looking at these expert predictions, that's a question local businesses of all sizes will need to continue to ask in 2018. Perhaps the best answer is "neither." Google represents opportunity for brands that know how to play the game well. Companies that put the consumer first are likely to stand strong, no matter how the nuances of digital marketing shift, and education will remain the key to mastery in the year ahead.

What do you think? Any hunches about the year ahead? Let us know in the comments.



Tuesday, February 13, 2018

New Research: 35% of Competitive Local Keywords Have Local Pack Ads

Posted by Dr-Pete

Over the past year, you may have spotted a new kind of Google ad on a local search. It looks something like this one (on a search for "oil change" from my Pixel phone in the Chicago suburbs):

These ads seem to appear primarily on mobile results, with some limited testing on desktop results. We've heard rumors about local pack ads as far back as 2016, but very few details. How prevalent are these ads, and how seriously should you be taking them?

11,000 SERPs: Quick summary

For this study, we decided to look at 110 keywords (in 11 categories) across 100 major US cities. We purposely focused on competitive keywords in large cities, assuming, based on our observations as searchers, that the prevalence rate for these ads was still pretty low. The 11 categories were as follows:

  • Apparel
  • Automotive
  • Consumer Goods
  • Finance
  • Fitness
  • Hospitality
  • Insurance
  • Legal
  • Medical
  • Services (Home)
  • Services (Other)

We purposely selected terms that were likely to have local pack results and looked for the presence of local packs and local pack ads. We collected these searches as a mobile user with a Samsung Galaxy 7 (a middle-ground choice between iOS and a "pure" Google phone).

Why 11 categories? Confession time – it was originally 10, and then I had the good sense to ask Darren Shaw about the list and realized I had completely left out insurance keywords. Thanks, Darren.

Finding #1: I was very wrong

I'll be honest – I expected, from casual observations and the lack of chatter in the search community, that we'd see fewer than 5% of local packs with ads, and maybe even numbers in the 1% range.

Across our data set, roughly 35% of SERPs with local packs had ads.

Across industry categories, the prevalence of pack ads ranged wildly, from 10% to 64%:

For the 110 individual keyword phrases in our study, the presence of local ads ranged from 0% to 96%. Here are the keywords with >=90% local pack ad prevalence:

  • "car insurance" (90%)
  • "auto glass shop" (91%)
  • "bankruptcy lawyer" (91%)
  • "storage" (92%)
  • "oil change" (95%)
  • "mattress sale" (95%)
  • "personal injury attorney" (96%)

There was no discernible correlation between the presence of pack ads and city size. Since our study was limited to the top 100 US cities by population, though, this may simply be due to a restricted data range.

Finding #2: One is the magic number

Every local pack with ads in our study had one and only one ad. This ad appeared in addition to regular pack listings. In our data set, 99.7% of local packs had three regular/organic listings, and the rest had two listings (which can happen with or without ads).

Finding #3: Pack ads land on Google

Despite their appearance, local packs ads are more like regular local pack results than AdWords ads, in that they're linked directly to a local panel (a rich Google result). On my Pixel phone, the Jiffy Lube ad at the beginning of this post links to this result:

This is not an anomaly: 100% of the 3,768 local pack ads in our study linked back to Google. This follows a long trend of local pack results linking back to Google entities, including the gradual disappearance of the "Website" link in the local pack.

Conclusion: It's time to get serious

If you're in a competitive local vertical, it's time to take local pack ads seriously. Your visitors are probably seeing them more often than you realize. Currently, local pack ads are an extension of AdWords, and require you to set up location extensions.

It's also more important than ever to get your Google My Business listing in order and make sure that all of your information is up to date. It may be frustrating to lose the direct click to your website, but a strong local business panel can drive phone calls, foot traffic, and provide valuable information to potential customers.

Like every Google change, we ultimately have to put aside whether we like or dislike it and make the tough choices. With more than one-third of local packs across the competitive keywords in our data set showing ads, it's time to get your head out of the sand and get serious.

