
Throwback: Old Online Trends That Have Gone Stale

If you work in any technology-based company or sector, you know things are always changing. What was standard practice, trendy and new one week becomes outdated and ineffective the next. As SEO trends constantly come and go, you simply can’t afford not to change. It is often a company’s flexibility and willingness to accept and support constant change that determine whether it succeeds.

If you’re interested in entering the technology field, or you’re returning to the industry after a long break, here are some of the old online trends to avoid. You’ll notice these trends were once accepted practice but have since grown stale and fallen by the wayside:

Music-Playing Websites

As if hold music and elevator music weren’t bad enough, there was a time when companies welcomed visitors to their websites with this awful, outdated music. This old online trend quickly became a no-no, as website visitors left these sites almost as soon as they arrived.

“Click Here!” Linked Buttons

This old online trend is still heavily used even though it isn’t as effective as other types of linked text. Web users are in a hurry and want context about what kind of page a text link will take them to. A generic “click here” doesn’t tell the busy web user what the linked page is about or whether they would benefit from clicking through to it.
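For illustration only (this snippet is not from the original post, and the URL is a made-up placeholder), compare a generic link with a descriptive one:

<!-- Generic anchor text: tells readers and search engines nothing about the destination -->
<a href="https://example.com/seo-audit-checklist">Click here</a> to learn more.

<!-- Descriptive anchor text: says exactly what the linked page offers -->
Download our <a href="https://example.com/seo-audit-checklist">SEO audit checklist</a>.

The descriptive version gives the busy reader a reason to click before they commit.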

Long Sales Letters

There are multiple reasons the long sales letter disappeared. The two biggest are that print is now largely out of date, having been replaced by digital media, and that busy consumers have short attention spans and little patience for long blocks of text.

Buying Links

While a good link building campaign still has value today, the link building process has changed. Buying links was once common practice, but that technique is now a serious no-no that violates SEO ethics and best practices. In the early days, anyone could link to whatever external pages, and however many of them, they liked. This caught the attention of Google, the largest search engine, which put a stop to this form of link spamming with its Penguin algorithm. Since then it has become important to consider where you’re linking and the authority of that site. Linking to quality sites earns credibility for your own site, as well as the trust of Google and web users. Instead of buying links, it is acceptable to “trade” for links via guest posting and blogger outreach.

Yellow Page Advertising

Back in the day, the “Yellow Pages,” or phone book, was delivered to every home. It contained a large directory of local businesses. If you needed a company’s phone number, hours or location, or wanted a quick idea of what it specialized in, you looked it up in the “Yellow Pages.” In addition to basic business information, some businesses grabbed readers’ attention by sprinkling ads throughout the phone book. This was a good idea then. Now, however, with the widespread accessibility of computers, the Internet and smartphones, people have immediate access to the same vital business information on the go.

Keyword Stuffing

In the past, SEO specialists and programmers were overzealous in their use of keywords. Stuffing keywords into a piece of content made it hard to read and understand. This caught the eye of Google, which implemented the Panda algorithm to put a stop to the practice. Keywords are still important in online content today, but a keyword density above roughly 3% is now looked down upon; in a 1,000-word article, that works out to about 30 keyword uses at most.

Artsy and Hard-to-Read Fonts

With all the websites out there, people used to use a variety of ridiculous fonts, such as Comic Sans and Papyrus, to show the unique style and personality of a brand. However, these fonts were also distracting and hard to read. Now, it’s more professional and credible to use an easy-to-read font such as Times New Roman or Calibri and avoid the old online trend of overly artsy fonts.

Phony Stock Photos

While numerous organizations and companies still use stock photos on their websites and printed collateral, the stock photos of today are vastly better than their predecessors. Staged, grainy, obviously fake stock photos are unpopular because this old online trend lowers a brand’s perceived credibility.

Distracting Websites

Back in the day, an overly busy website was seen as a way to impress visitors and make one’s website memorable in the sea of millions of other websites. Little did developers know how distracting and confusing this old online trend made their websites. Today, developers are more focused on the user’s experience, which has resulted in easy-to-navigate, simple, yet creative-looking websites.

Print Media

Print media has seen a decline similar to the demise of the “Yellow Pages” and phone book advertising. While print media is still widely used in advertising and journalism, many consumers now access information online via their smartphones. With the popularity of online media, many advertisers and news outlets have developed digital versions of their content.

Whether you’re an “old school” SEOer who has resisted changing with the times or you’re a new SEO specialist, avoid these common old online trends. They will keep you living in the “dark ages” and prevent you from seeing results in your company’s SEO strategies and campaigns.

If you’re a business owner and aren’t familiar with SEO and its current best practices, contact the specialists at SEO.com. We are a full-service internet marketing firm helping businesses both large and small with all aspects of their online presence. Our staff have been in the industry for many years and know the current SEO trends. Contact us today to learn how we can help update your SEO strategy to align with current online trends.

The post Throwback: Old Online Trends That Have Gone Stale appeared first on SEO.com.

More Articles

WordPress robots.txt: Best-practice example for SEO

Your robots.txt file is a powerful tool when you’re working on a website’s SEO – but it should be handled with care. It allows you to deny search engines access to different files and folders, but often that’s not the best way to optimize your site. Here, we’ll explain how we think webmasters should use their robots.txt file, and propose a ‘best practice’ approach suitable for most websites.

You’ll find a robots.txt example that works for the vast majority of WordPress websites further down this page. If you want to know more about how your robots.txt file works, you can read our ultimate guide to robots.txt.

What does “best practice” look like?

Search engines continually improve the way in which they crawl the web and index content. That means what used to be best practice a few years ago doesn’t work anymore, or may even harm your site.

Today, best practice means relying on your robots.txt file as little as possible. In fact, it’s only really necessary to block URLs in your robots.txt file when you have complex technical challenges (e.g., a large eCommerce website with faceted navigation), or when there’s no other option.

Blocking URLs via robots.txt is a ‘brute force’ approach, and can cause more problems than it solves.

For most WordPress sites, the following example is best practice:

# This space intentionally left blank
# If you want to learn about why our robots.txt looks like this, read this post: https://yoa.st/robots-txt
User-agent: *

We even use this approach in our own robots.txt file.

What does this code do?

  • The User-agent: * instruction states that any following instructions apply to all crawlers.
  • Because we don’t provide any further instructions, we’re saying “all crawlers can freely crawl this site without restriction”.
  • We also provide some information for humans looking at the file (linking to this very page), so that they understand why the file is ’empty’.

If you have to disallow URLs

If you want to prevent search engines from crawling or indexing certain parts of your WordPress site, it’s almost always better to do so by adding meta robots tags or robots HTTP headers.

Our ultimate guide to meta robots tags explains how you can manage crawling and indexing ‘the right way’, and our Yoast SEO plugin provides the tools to help you implement those tags on your pages.
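As a rough sketch (this snippet is not from the guide itself, and “noindex, follow” is just one common combination of values), a page-level robots instruction goes in the page’s <head> like this:

<meta name="robots" content="noindex, follow">

The same instruction can be sent as an HTTP response header instead, which is handy for non-HTML files such as PDFs:

X-Robots-Tag: noindex, follow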

If your site has crawling or indexing challenges that can’t be fixed via meta robots tags or HTTP headers, or if you need to prevent crawler access for other reasons, you should read our ultimate guide to robots.txt.

Note that WordPress and Yoast SEO already automatically prevent indexing of some sensitive files and URLs, like your WordPress admin area (via an x-robots HTTP header).

Why is this ‘minimalism’ best practice?

Robots.txt creates dead ends

Before you can compete for visibility in the search results, search engines need to discover, crawl and index your pages. If you’ve blocked certain URLs via robots.txt, search engines can no longer crawl through those pages to discover others. That might mean that key pages don’t get discovered.

Robots.txt denies links their value

One of the basic rules of SEO is that links from other pages can influence your performance. If a URL is blocked, not only won’t search engines crawl it, but they also might not distribute any ‘link value’ pointing at that URL, either to that URL itself or through it to other pages on the site.

Google fully renders your site

People used to block access to CSS and JavaScript files in order to keep search engines focused on those all-important content pages.

Nowadays, Google fetches all of your styling and JavaScript and renders your pages completely. Understanding your page’s layout and presentation is a key part of how it evaluates quality. So Google doesn’t like it at all when you deny it access to your CSS or JavaScript files.

Previous best practice of blocking access to your wp-includes directory and your plugins directory via robots.txt is no longer valid, which is why we worked with WordPress to remove the default disallow rule for wp-includes in version 4.0.

Many WordPress themes also use asynchronous JavaScript requests – so-called AJAX – to add content to web pages. WordPress used to block Google from this by default, but we fixed this in WordPress 4.4.

You (usually) don’t need to link to your sitemap

The robots.txt standard supports adding a link to your XML sitemap(s) to the file. This helps search engines to discover the location and contents of your site.

We’ve always felt that this was redundant; you should already be adding your sitemap to your Google Search Console and Bing Webmaster Tools accounts in order to access analytics and performance data. If you’ve done that, then you don’t need the reference in your robots.txt file.
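If you do want to include the reference anyway, it is a single directive that can sit anywhere in the file; the URL below is only a placeholder for your own sitemap location:

Sitemap: https://www.example.com/sitemap_index.xml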

Read more: Preventing your site from being indexed: the right way »

The post WordPress robots.txt: Best-practice example for SEO appeared first on Yoast.


How to Make Your Website Less Scary and Intrusive to Gain More Business

A creepy-looking guy in a red hoodie squats in the dark. His eyes are light graffiti crosses, his mouth a light line. There seems to be a car in the background - we see the headlights.

When it comes to online business: are you a creepy stalker invading people’s privacy for profit?

  • Do you have dozens of trackers installed on your site?
  • Do you want visitors to allow you to send notifications up front on the first visit?
  • Do you let your ads follow people around the Web in a creepy way?

It’s not just tech-savvy users who hate and block intrusive marketing tactics. These tactics are also unethical and harm the perception of your website.

You effectively scare people away! They won’t be doing business with you whether you are

  1. just a publisher
  2. running an online store
  3. or selling services on the Web.

It’s not hard to fathom. Being creepy is actually a bad habit in private but is downright self-sabotage when you ask people to spend time, effort or money on your site.

Are You Stalking Your Visitors You Creep?

Yeah, I know. Many marketers will advise you to “retarget” your visitors and the like to increase profits, but in doing so you actually stalk your users or let other stalkers do it.

Most business sites also use Google Analytics because it’s free and everybody loves Google. Does everybody really?

Privacy Badger blocks Facebook and Google on a website.

There is a growing unease about Google and Facebook tracking every single step of ours online. Protecting your online privacy seems to be common sense by now.

Yet most website owners still treat privacy of their visitors lightly. They trade user data for some free tools or scripts. Is it worth it?

Protect Users or Wreck Yourself

By now there are many tools to protect your privacy online that you can use directly in your browser – ideally Firefox, as Google, which builds Chrome, makes money off your “big data”.

Personally I use Privacy Badger by the EFF (Electronic Frontier Foundation) – an American non-profit fighting for our online rights – and Findx Privacy Control.

There is also a similar extension by the DuckDuckGo team which is probably a good start for beginners. I like the way Privacy Badger works though.

I have been using Privacy Badger for over a year. At first many sites literally broke when I visited them with privacy protection.

Many sites literally use hundreds of potential trackers on their unsuspecting users! Consider Wired.com – the website of the popular technology magazine.

239 tracking items blocked on Wired.com

I can’t link Wired.com here for security reasons! The Wired website is spyware! The FindX Privacy Control extension blocked 239 suspicious items! And guess what? The site still worked.

What does that mean? Wired used up to 239 redundant tracking scripts that you didn’t even need to load to make the website work or view the content!

Of course it’s mostly third party scripts and images. Google ads and analytics by themselves were responsible for dozens of trackers. Every image Google shows reports back on you!

Wired can get away with it, so why can’t I? Well, Wired has existed for almost 20 years and has a faithful audience. Even I visit it despite all the surveillance.

Imagine, though, a site you don’t even know that x-rays you on entry. That’s a no-no. That’s like a stranger looking under your skirt or opening your zipper.

Yet many sites add dozens or even hundreds of cookies to your local machine. Due to the European privacy law they have to ask for consent. It sometimes looks like this:

A website asks for permission to use numerous cookies for all kinds of purposes. It's a dialog provided by Cookiebot.

This site is even one of the better examples – it adds just 60 cookies, and only 34 of them “for marketing purposes”. I have seen worse ones. I’d rather decline in such cases.

Most of the cookies are used for tracking and are “unnecessary”.

By itself the idea and implementation by Cookiebot is a good one – it is also compliant with the EU privacy law. You just need to make sure to limit the number of cookies!

Lack of Privacy May Cost You Money

Some people by now think I’m one of those paranoid “tinfoil hat” nerds. I’m not advocating hiding in the woods with a bunch of survivalists though.

We need to use technology in the information age or we’ll get left behind. I still want to be able to use the Web, but I don’t want to be exploited by “big data” while at it.

Even in case you don’t care for privacy you will surely admit that some of the ramifications are unsettling. You surely care about money, don’t you?

Based on your Internet activity or the data you share, you may see different pricing online. In simple terms: lack of privacy will cost you more money!

Based on your profile some products won’t even get shown to you while others may be overtly promoted.

For example inner city Afro-American youths are much more likely to see ads for alcohol while they won’t get shown real estate ads.

Who are you on the Web? Are you yourself with your

  • ethnicity and skin color
  • religion or lack thereof
  • sexual orientation and gender
  • age and birth date
  • political bias and affiliations

or a carefully crafted persona made to be as likeable as possible?

Each of those very common “data points” may have some negative impact on your online and real life. For example I can’t see a lot of online content because I’m in Germany.

A lot of video content and music is limited to the United States or at least blocked in Germany as Google’s YouTube fails to pay German copyright holders.

  • To see videos or listen to music that is “not available in your country” you have to use a so called VPN or Virtual Private Network like Proton VPN that hides your actual whereabouts.
  • Muslims are not only subject to discrimination on the street but also on the Web. Are you sure you want to disclose that you believe in Allah?
  • Studies show that women are much more likely to be harassed online than men. Homosexuals or transgender people are exposed to even more hate speech.
  • Some online stores show different pricing depending on your background and browsing history. You may pay more than others without realizing it.
  • Age is clearly often used to decide whether you can access some online content. It’s not only about adult topics though.

Ageism also becomes apparent when you have to pay more because you’re older. Just think of insurance policies.

When you follow the news online you might have noticed over recent years that more and more people seem to agree with you. That’s the so-called filter bubble.

Algorithms notice what you like and only show you items based on your political preferences. In the US this has led to a completely unexpected presidency by Donald Trump.

Many websites collect data like age or gender routinely. Just to sign up somewhere or to buy something you need to give away vital information on yourself.

Yet an increasing number of people – potential customers – are not fond of such indiscriminate data collection, even if (or despite the fact that) you have a huge privacy policy.

Thus it’s not only the Internet users who end up on your site who may lose money. You – the website owner – may lose money too when you neglect actual data protection.

I’m not even referring to the actual threat of getting sued when you don’t comply with local privacy laws. I mean losing customers because of a lack of, or downright disregard for, privacy.

You Have a Privacy Policy? Awesome! Can I Read it?

Excerpt from the Ecosia privacy policy explaining clearly that they use no third-party trackers like Google Analytics

A privacy policy – some people already regard it as a “profiling policy” – may actually backfire. Most such policies are written in undecipherable legalese only lawyers can understand after many hours of study.

Even skimming such a wall of text written in alien language scares many people off. You only share the data with your partners, advertisers and everybody else? Back off!

There are some examples of actually human-readable privacy policies out there that you can not only understand but that don’t scare you with their message. Sadly they are few and far between.

It seems the more complicated a privacy policy is, the more dubious data-sharing activity it is hiding. Just think Facebook.

While Facebook may get away with an egregious tracking record because it is too big to fail and indispensable for most people, your website may not.

Now with the new European privacy law every website serving visitors from the EU needs a plain language privacy policy. I have created one for myself. Yes, you can read it!

Know When to Ask for Private Information and Permission or if at All

One creepy yet wide-spread practice many business websites adhere to is asking strangers for private information or access to their mailboxes.

This is the infamous “I fcuk on the first date” mentality. You enter a site and have to close a pop up asking for your mail address, a notification permission dialog and a consent notice for cookies.

Some sites also ask for permission to have access to location data – that is where you are or where you live. No, thank you!

When I asked my colleagues on Twitter, most of them mentioned these issues. Even as Web professionals with technical know-how, they are put off by such sites.

Thank you Dean Cruddace, Zack Neary-Hayes, Andrew Akesson for feedback and additional insights. Click their names for their feedback!

The privacy-oriented Firefox browser already allows you to block most of these requests altogether out of the box:

Firefox browser permissions that allow notification requests to be blocked by default, along with pop-ups.

It’s a shame! These features can be very useful when used responsibly. They are not just tools for stalkers and creepy marketers!

Personally I’m trying my best to reconcile website optimization and privacy needs. You need analytics to know how your website works and whether people really view it.

What you don’t really need, except for selfish reasons, is tracking across domains and similar privacy breaches. Learn from the Facebook debacle!

Stop treating website visitors like easy prey. The goal of a website is not to make money off unsuspecting visitors by tricking them into giving away their data.

You need to convince people that you offer value. That’s marketing. Attract privacy oriented visitors to gain more business!

Stalking and selling private data is a crime, even if the laws aren’t applicable everywhere yet. Just because some sites can still get away with it does not mean it’s OK.

The post How to Make Your Website Less Scary and Intrusive to Gain More Business appeared first on SEO 2.0.


Brands vs Ads

About 7 years ago I wrote about how the search relevancy algorithms were placing heavy weighting on brand-related signals after Vince & Panda on the (half correct!) presumption that this would lead to excessive industry consolidation which in turn would force Google to turn the dials in the other direction.

My thesis was Google would need to increasingly promote some smaller niche sites to make general web search differentiated from other web channels & minimize the market power of vertical leading providers.

The reason my thesis was only half correct (and ultimately led to the absolutely wrong conclusion) is Google has the ability to provide the illusion of diversity while using sort of eye candy displacement efforts to shift an increasing share of searches from organic to paid results.

As long as any market has at least 2 competitors in it Google can create a “me too” offering that they hard code front & center and force the other 2 players (along with other players along the value chain) to bid for marketshare. If competitors are likely to complain about the thinness of the me too offering & it being built upon scraping other websites, Google can buy out a brand like Zagat or a data supplier like ITA Software to undermine criticism until the artificially promoted vertical service has enough usage that it is nearly on par with other players in the ecosystem.

Google need not win every market. They only need to ensure there are at least 2 competing bids left in the marketplace while dialing back SEO exposure. They can then run other services to redirect user flow and force the ad buy. They can insert their own bid as a sort of shill floor bid in their auction. If you bid below that amount they’ll collect the profit through serving the customer directly, if you bid above that they’ll let you buy the customer vs doing a direct booking.

Where this gets more than a bit tricky is if you are a supplier of third party goods & services where you buy in bulk to get preferential pricing for resale. If you buy 100 rooms a night from a particular hotel based on the presumption of prior market performance & certain channels effectively disappear you have to bid above market to sell some portion of the rooms because getting anything for them is better than leaving them unsold.

Dipping a bit back into history here, but after Groupon said no to Google’s acquisition offer, Google promptly partnered with players 2 through n to ensure Groupon did not have a lasting competitive advantage. In the fullness of time most of those companies died, LivingSocial was acquired by Groupon for nothing & Groupon is today worth less than the amount they raised in VC & IPO funding.

Most large markets will ultimately consolidate down to a couple players (e.g. Booking vs Expedia) while smaller players lack the scale needed to have the economic leverage to pay Google’s increasing rents.

This sort of consolidation was happening even when the search results were mostly organic & relevancy was driven primarily by links. As Google has folded in usage data & increased ad load on the search results it becomes harder for a generically descriptive domain name to build brand-related signals.

It is not only generically descriptive sorts of sites that have faded though. Many brand investments turned out to be money losers after the search result set was displaced by more ads (& many brand-related search result pages also carry ads above the organic results).

The ill informed might write something like this:

Since the Motorola debacle, it was Google’s largest acquisition after the $676 million purchase of ITA Software, which became Google Flights. (Uh, remember that? Does anyone use that instead of Travelocity or one of the many others? Neither do I.)

The reality is brands lose value as the organic result set is displaced. To make the margins work they might desperately outsource just about everything but marketing to a competitor / partner, which will then later acquire them for a song.

Travelocity had roughly 3,000 people on the payroll globally as recently as a couple of years ago, but the Travelocity workforce has been whittled to around 50 employees in North America with many based in the Dallas area.

The best relevancy algorithm in the world is trumped by preferential placement of inferior results which bypasses the algorithm. If inferior results are hard coded in placements which violate net neutrality for an extended period of time, they can starve other players in the market from the vital user data & revenues needed to reinvest into growth and differentiation.

Value plays see their stocks crash as growth slows or goes in reverse. With the exception of startups funded by Softbank, growth plays are locked out of receiving further investment rounds as their growth rate slides.

Startups like Hipmunk disappear. Even an Orbitz or a Travelocity becomes a bolt-on acquisition.

The viability of TripAdvisor as a stand alone business becomes questioned, leading them to partner with Ctrip.

TripAdvisor has one of the best link profiles of any commercially oriented website outside of perhaps Amazon.com. But ranking #1 doesn’t count for much if that #1 ranking is below the fold.

TripAdvisor shifted their business model to allow direct booking to better monetize mobile web users, but as Google has eaten screen real estate and grown Google Travel into a $100 billion business, other players have seen their stocks sag.

Google sits at the top of the funnel & all other parts of the value chain are complements to be commoditized.

  • Buy premium domain names? Google’s SERPs test replacing domain names with words & make the domain name gray.
  • Improve conversion rates? Your competitor almost certainly did as well, now you both can bid more & hand over an increasing economic rent to Google.
  • Invest in brand awareness? Google shows ads for competitors on your brand terms, forcing you to buy to protect the brand equity you paid to build.

Search Metrics mentioned Hotels.com was one of the biggest losers during the recent algorithm updates: “I’m going to keep on this same theme there, and I’m not going to say overall numbers, the biggest loser, but for my loser I’m going to pick Hotels.com, because they were literally like neck and neck, like one and two with Booking, as far as how close together they were, and the last four weeks, they’ve really increased that separation.”

As Google ate the travel category the value of hotel-related domain names has fallen through the floor.

Most of the top selling hotel-related domain names were sold about a decade ago:

On August 8th HongKongHotels.com sold for $4,038. And the buyer may have overpaid for it!

Google consistently grows their ad revenues 20% a year in a global economy growing at under 4%.

There are only about 6 ways they can do that:

  • growth of web usage (though many of those who are getting online today have a far lower disposable income than those who got on a decade or two ago did)
  • gain marketshare (very hard in search given that they effectively are the market in most markets outside of China & Russia)
  • create new inventory (new ad types on Google Maps & YouTube)
  • charge more for clicks
  • improve at targeting by better surveillance of web users (getting harder after GDPR & similar efforts from some states in the next year or two)
  • shift click streams away from organic toward paid channels (through larger ads, more interactive ad units, less appealing organic result formatting, etc.)

Wednesday both Expedia and TripAdvisor reported earnings after hours & both fell off a cliff: “Both Okerstrom and Kaufer complained that their organic, or free, links are ending up further down the page in Google search results as Google prioritizes its own travel businesses.”

Losing 20% to 25% of your market cap in a single day is an extreme move for a company worth billions of dollars.

Thursday Google hit fresh all time highs.

“Google’s old motto was ‘Don’t Be Evil’, but you can’t be this big and profitable and not be evil. Evil and all-time highs pretty much go hand in hand.” – Howard Lindzon

Booking held up much better than TripAdvisor & Expedia as they have a bigger footprint in Europe (where antitrust is a thing) and they have a higher reliance on paid search versus organic.

The broader SEO industry is to some degree frozen by fear. Roughly half of SEOs claim to have not bought *ANY* links in a half-decade.

Long after most of the industry has stopped buying links, some people still run the “paid links are a potential FTC violation” line as though it is insightful and/or useful.

Ask the people carrying Google’s water what they think of the official FTC guidance on poor ad labeling in search results and you will hear the beautiful sound of crickets chirping.

Where is the ad labeling in this unit?

Does small gray text in the upper right corner stating “about these results” count as legitimate ad labeling?

And then when you scroll over that gray text and click on it you get “Some of these hotel search results may be personalized based on your browsing activity and recent searches on Google, as well as travel confirmations sent to your Gmail. Hotel prices come from Google’s partners.”

Zooming out a bit further on the above ad unit to look at the entire search result page, we can now see the following:

  • 4 text ad units above the map
  • huge map which segments demand by price tier, current sales, luxury, average review, geographic location
  • organic results below the above wall of ads, and the number of organic search results has been reduced from 10 to 7

How many scrolls does one need to do to get past the above wall of ads?

If one clicks on one of the hotel prices the follow up page is … more ads.

Check out how the ad label is visually overwhelmed by a bright blue pop over.

Worth noting Google Chrome has a built-in ad blocking feature which allows them to strip all ads from displaying on third party websites if they follow Google’s best practices layout used in the search results.

You won’t see ads on websites that have poor ad experiences, like:

  • Too many ads
  • Annoying ads with flashing graphics or autoplaying audio
  • Ad walls before you can see content

When these ads are blocked, you’ll see an “Intrusive ads blocked” message. Intrusive ads will be removed from the page.

The following 4 are all true:

And, as a bonus, to some people paid links are a crime, but Google can sponsor academic conferences for market regulators while requesting the payments not be disclosed.

Hotels have been at the forefront of SEO for many years. They drive massive revenues & were perhaps the only vertical ever referenced in the Google rater guidelines which stated all affiliate sites should be labeled as spam even if they are helpful to users.

Google has won most of the profits in the travel market & so they’ll need to eat other markets to continue their 20% annual growth.

Some people who market themselves as SEO experts not only recognize this trend but even encourage this sort of behavior:

Zoopla, Rightmove and On The Market are all dominant players in the industry, and many of their house and apartment listings are duplicated across the different property portals. This represents a very real reason for Google to step in and create a more streamlined service that will help users make a more informed decision. … The launch of Google Jobs should not have come as a surprise to anyone, and neither should its potential foray into real estate. Google will want to diversify its revenue channels as much as possible, and any market that allows it to do so will be in its sights. It is no longer a matter of if they succeed, but when.

The dominance Google has in core profitable vertical markets also exists in the news & general publishing categories. Some publishers get more traffic from Google Discover than from Google search. Inclusion in Google Discover requires using Google’s proprietary AMP format.

Publishers which try to turn off Google’s programmatic ads find their display ad revenues fall off a cliff:

“Nexstar Media Group Inc., the largest local news company in the U.S., recently tested what would happen if it stopped using Google’s technology to place ads on its websites. Over several days, the company’s video ad sales plummeted. “That’s a huge revenue hit,” said Tony Katsur, senior vice president at Nexstar. After its brief test, Nexstar switched back to Google.” … “Regulators who approved that $3.1 billion deal warned they would step in if the company tied together its offerings in anticompetitive ways. In interviews, dozens of publishing and advertising executives said Google is doing just that with an array of interwoven products.”

News is operating like many other (broken) markets. The Salt Lake Tribune converted to a nonprofit organization.

Many local markets have been consolidated down to ownership by a couple of private equity roll-ups looking to further consolidate the market. Gatehouse Media is acquiring Gannett.

The Washington Post – owned by Amazon’s Jeff Bezos – is creating an ad tech stack which serves other publishers & brands, though they also believe a reliance on advertiser & subscription revenue is unsustainable: “We are too beholden to just advertiser and subscriber revenue, and we’re completely out of our minds if we think that’s what’s going to be what carries us through the next generation of publishing. That’s very clear.”

We are nearing many inflection points in many markets where markets that seemed somewhat disconnected from search will still end up being dominated by Google. Gmail, Android, Web Analytics, Play Store, YouTube, Maps, Waze … are all additional points of leverage beyond the core search & ads products.

Google is investing heavily in quantum computing. Google Fiber was a nothingburger to force competing ISPs into accelerating expensive network upgrades, but beaming in internet services from satellites will allow Google to bypass local politics, local regulations & heavy network infrastructure construction costs. A startup named Kepler recently provided high-bandwidth connectivity to the Arctic. When Google launches a free ISP there will be many knock-on effects, causing partners to long for the day when Google was only as predatory as it is today.
