SEO In A Bottle

SEO Demystified – How Search Engine Optimization Works

Many of the readers who come to our site have varying questions about Reputation Management and how to solve the unique search engine ranking problems they face. However, some requests have come in for a detailed rundown of SEO (Search Engine Optimization): exactly how it came into existence and how it has evolved over the years since the birth of Internet search engines. This article aims to give the interested reader a historical account of SEO and to show how it goes hand in hand with Reputation Management. I’ve been meaning to write this article for the better part of 15 years but never got around to it. Hopefully now, in late 2013, it will serve as a reliable explanation of SEO and of how optimization has evolved since the 1990s.

Pre 1996 Internet – The Early Years

Ah, the “good old days” of the World Wide Web! Although the Internet has officially been around since the mid-1980s (see this Wikipedia article on The History of The Internet), I personally didn’t begin “surfing” the Net until 1993. Back then, the WWW was an unindexed jumble of websites – the digital Wild West. Imagine walking into a large library in which books aren’t sorted by author, with no categorization and no chronological order. No card catalog exists, and you’re left to your own devices to find literature that interests you. That was the Internet in the early 90s. “Surfers” pretty much had to know the exact domain name (or be directed to it by a browser toolbar or advertisement) in order to get where they were going.

This inefficient system inevitably led to the invention of search engines – the first of which appeared in the early 1990s. Perhaps the most fondly remembered early search portal was Yahoo!. Search engine software brought indexing to the Internet and made it infinitely easier to find content. All the casual user had to do was type in a few keywords related to whatever he or she wanted to find – such as Video Games – and the search engine would quickly search its archive of “crawled” (indexed) pages and list a number of webpages – with links – that it judged most likely to take the reader where he or she wanted to go. Simple enough, but it led website owners to study how to rank as highly as possible in search results and capture the most page views for certain keywords. After all, website traffic equals revenue, and users were quickly catching on that search engines were the easiest way to find relevant content on the Net. The more visitors a website had, the better the odds that individuals would click on ads and affiliate links (this still holds true today, though it also depends on a number of marketing techniques to “convert” the visitor).

So how did those ancient search engines decide which webpages ranked highest for certain keywords? It didn’t take long for webmasters to figure out that the developers behind mid-1990s portals such as Yahoo! were depending almost exclusively on keyword density to decide which pages ranked highest for terms such as start your own business, best hotels, online dating, buy office supplies, favorite beer, etc. Website owners and webmasters (the people responsible for developing, designing, administering, maintaining, and publishing content on a given website) quickly realized it was possible to game the system. By simply inserting commonly searched-for keywords into a webpage’s content, they could improve the odds of that page ranking highly when someone entered the term into a search engine. The more keywords, the better the ranking. This is why so many primitive search engines directed users to spam websites filled with ads and affiliate links instead of useful content. Here’s an example…

Office Supplies Spam Website Example

Remember occasionally being directed to websites that had “articles” that looked similar to the screenshot I’ve pasted above? This was because the webmasters of that website were simply using keyword meta tags when writing content and filling the text portion of their pages with those keywords; not at all concerned about publishing reliable information, adhering to subject/verb agreement, or even making an effort to assist the interested reader in any way. The only thing they wanted was as much traffic as possible – and that’s exactly what they achieved!
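To make the mechanics concrete, here is a minimal sketch of the kind of crude keyword-density signal mid-90s engines leaned on. This is my own illustration, not any search engine’s actual code, and the sample text is made up:

```python
# Hypothetical keyword-density check: how often does a term appear
# relative to the total word count of the page? Early engines relied
# on signals roughly this crude, which is exactly what spammers abused.

def keyword_density(text, keyword):
    words = text.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for word in words if word.strip(".,!?") == keyword.lower())
    return hits / len(words)

spam = "office supplies cheap office supplies buy office supplies today"
print(f"{keyword_density(spam, 'office'):.0%}")  # 33% - a stuffed page
```

Stuff a page with the target term and the score climbs – no quality signal required.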

Search Engine Evolution

After a few years it became embarrassingly obvious that relying exclusively on keyword density simply would not work. Back in those days, when the vast majority of casual Internet users had to deal with extremely slow service, paralyzing page load times, and that annoying dial-up tone, it didn’t take long for many to give up their foray into the online realm and go back to whatever they were doing before they tried searching for Internet content (an offline game of Solitaire, perhaps). Search engine developers could see that users were shying away from their services due to this inefficiency. Then, in 1996, Stanford University graduate students Larry Page and Sergey Brin began building what would become the Google search engine (see the History of Google Wikipedia article for more information). This was a game changer.

Rather than using only keyword density to determine how high a particular page ranked, Google introduced the PageRank algorithm. It didn’t catch on right away, as Google was not a popular search engine in the years immediately following its inception – yet the writing was on the wall for keyword-dense spam websites from the moment the algorithm was created. Google correctly assumed that keyword-density abuse would persist and even increase as more webmasters figured out how to “game” search engines. The question was what measures to take to combat this. That’s where PageRank came in.

PageRank, in its infancy, relied on one major component – crawling webpages and counting how many other webpages linked to them, with links from important pages counting for more. Although this was a significant improvement over relying solely on keyword density, it didn’t exactly require a genius to conclude that the next logical step in optimizing websites for search engines (yes, that means SEO) was getting as many websites as possible to link to a page you wanted to rank highly for certain keywords.
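For the curious, here is a toy version of the core PageRank calculation – a sketch of the published idea, not Google’s production code. Each page’s score is split among the pages it links to, and the process repeats until the scores settle:

```python
# Minimal PageRank sketch: a page's score is spread evenly across its
# outbound links, with a damping factor modeling a surfer who sometimes
# jumps to a random page instead of following a link.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {page: 1.0 / n for page in pages}
    for _ in range(iterations):
        new_rank = {page: (1.0 - damping) / n for page in pages}
        for page, outbound in links.items():
            if not outbound:  # dangling page: spread its score evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                for target in outbound:
                    new_rank[target] += damping * rank[page] / len(outbound)
        rank = new_rank
    return rank

# Everyone links to "home", so it ends up with the highest score.
print(pagerank({"home": ["about"], "about": ["home"], "blog": ["home"]}))
```

Notice there is nothing in there about content quality – which is precisely why link farming worked.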

Link Farming Example

Within months, a multitude of webmasters had caught on to Google’s new PageRank algorithm and had begun the practice of Link Farming. That is, the science of getting as many websites to link to your page as humanly possible – and this worked! Expand the screenshot above by a few hundred websites and you get the gist of how PageRank was getting hammered by link farms.

Link farming could be further enhanced by having those websites link to you with anchor text (the exact text displayed for a hyperlink) containing the keyword(s) you wanted to rank highly for. So all you had to do was get a bunch of websites to link to your spam site using the term office supplies in the anchor text, and you would begin appearing near the very top of the results when that term was searched for. It didn’t matter that many of these websites were simply linking to each other, had no useful content, were full of advertisements and affiliate links, were poorly designed, and were gaming the system. Nope. Webpages were now being “weighted” by keyword density as well as page ranking… yet the spam saturation problem persisted. Casual users would search for a term and be directed to a useless spam site.
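Anchor text is simply the visible text inside a link tag, which crawlers extract and associate with the target page. Here is a small illustration of how a crawler might pull it out, using Python’s standard html.parser (the sample page and URL are made up):

```python
# Extract anchor text from a page - the text between <a> and </a> is
# what link farms stuffed with target keywords like "office supplies".

from html.parser import HTMLParser

class AnchorExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_link = False
        self.anchors = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.in_link = True
            self.anchors.append("")

    def handle_endtag(self, tag):
        if tag == "a":
            self.in_link = False

    def handle_data(self, data):
        if self.in_link:
            self.anchors[-1] += data

parser = AnchorExtractor()
parser.feed('<p>Visit <a href="http://example.com">office supplies</a> now!</p>')
print(parser.anchors)  # ['office supplies']
```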

Before we get into the next step of search engine evolution, I think it is important to let our readers know that there was indeed some fallout from the first years of Google PageRank. For one, many website owners had invested heavily in optimizing their pages for keyword density. Keep in mind that back in the 1990s, website designers and webmasters charged anywhere from $20 to well over $100 per hour to “guarantee” that a page would rank highly for specific terms. SEO is not a particularly difficult task even in today’s environment, but it was mind-blowingly rudimentary in the late 1990s. Webmasters (or web architects, web designers, SEO specialists, etc.) could charge an unsuspecting website owner exorbitant amounts for tasks that a reasonably knowledgeable 10-year-old could perform, without the proprietor ever being the wiser. I suppose not many tears were shed for the true spam sites that partook in this primitive method of SEO, but there were plenty of “legitimate” websites that used more appropriately placed keywords to rank highly for commonly searched terms while working diligently to provide quality content. When PageRank came along, many of those owners found themselves unable to budget thousands of dollars to pay another “Internet expert” (or the same one they had contracted before) to take up the elementary yet highly lucrative practice of link farming.

This evolution did not take place immediately. After all, Google was not a household name in 1996. That didn’t occur until 2000, when Yahoo! partnered with Google and allowed it to power Yahoo!’s search results. Until then, not many Internet users had heard of the Google search engine, but that all changed once Yahoo! allowed its brand to be compromised by its most algorithm-savvy competitor. In no time, “Google” became the verb people used for searching the Internet. Now that Google possessed the most useful search engine on the Web and had gained popularity through the Yahoo! agreement, it had to work quickly to improve its algorithm and modify how pages ranked. The quicker it could reduce the number of highly ranked spam websites (a huge turn-off to Internet users), the better.

Google Acquires Hilltop Algorithm

Following the burst of the dot-com bubble in 2000, Internet companies were forced to halt many projects until additional investment could be secured. Google was quickly earning its reputation as the most reliable search engine in the early 2000s, but it wasn’t until 2003, when it acquired the Hilltop algorithm, that webmasters began coming to grips with the fact that search engine rankings could no longer be fully manipulated. The subsequent introduction of TrustRank drove this reality home, so to speak, as Google (and other search engines) set their sights on putting the guesswork back into SEO.

TrustRank prioritized the links pointing to any given website and weighted them according to how “trusted” the linking sites were by the Google algorithm. If your site was being linked to by these trusted entities, you were much more likely to rank highly for a search term than if you were mindlessly farming links. Domains ending in .edu and .gov were treated as automatically trustworthy, as were established, high-traffic websites such as The New York Times and CNN – along with signals like active blog comment sections and low bounce rates.

* A “bounce” is when an Internet user goes to your website and then leaves without visiting any other page on that site – theoretically signaling that the user did not find what he or she was looking for or did not find the website as a whole useful enough to stick around.
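To get a feel for how trust propagation works, here is a rough sketch of the TrustRank idea. The published algorithm is more involved, and the seed list, graph, and decay value below are invented purely for illustration: trust starts at a hand-picked set of reputable sites and decays as it flows outward along links, so pages far from any trusted seed accumulate very little.

```python
# Toy trust propagation: seeds start fully trusted, and each link passes
# along a decayed share of the linking page's trust.

def trustrank(links, seeds, decay=0.85, iterations=20):
    """links maps each page to the pages it links to; seeds are trusted."""
    trust = {page: (1.0 if page in seeds else 0.0) for page in links}
    for _ in range(iterations):
        new_trust = {page: (1.0 if page in seeds else 0.0) for page in links}
        for page, outbound in links.items():
            for target in outbound:
                new_trust[target] += decay * trust[page] / len(outbound)
        trust = new_trust
    return trust

graph = {
    "nytimes.com": ["helpful-blog.com"],
    "helpful-blog.com": ["spam-farm.com", "nytimes.com"],
    "spam-farm.com": [],  # two hops from the seed, links to nothing
}
print(trustrank(graph, seeds={"nytimes.com"}))
# The blog one hop from the seed scores well; the spam site scores lowest.
```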

For example, if you owned a content-driven website that published detailed information on how to design a website (still a popular search term today), you could be fast-tracked to higher rankings for that term, since Google considered you an authority on the topic. Correspondingly, if the owner of that site had a high authority ranking, he or she could improve the ranking of lesser sites simply by linking to them! This led to the inevitable practice of quality website owners and bloggers linking to less prominent sites in exchange for a fee. After all, if a low-quality website received a link from an “authority” on the topic, it was presumed useful to the person searching for that specific term – at least according to Google’s algorithm at the time.

* A webpage is one unique page that is contained within a website. Webpage and website authority are calculated differently by search engines.

What’s more, trust ranking increases on a logarithmic scale. Essentially, a website that has an authority ranking of ‘2’ is worth ten times as much as a ranking of ‘1’. A ranking of ‘4’ is ten times higher than ‘3’ and 100 times higher than ‘2’ for example. So if a spam site could get linked to by highly authoritative or trusted sites (regardless of whether the link was “natural” or purchased), it could in turn improve its own authority ranking on a search term from only one link. This is because, in logarithmic terms, one link from a ‘7’ authority is worth 100,000 links from a website that has an authority ranking of ‘2’.
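The arithmetic is easy to check. (The ten-times-per-point scale is the commonly cited approximation, not a figure Google has ever published.)

```python
# Each authority point is worth roughly ten times the point below it.

def relative_link_worth(authority_a, authority_b):
    """How many links from authority_b equal one link from authority_a."""
    return 10 ** (authority_a - authority_b)

print(relative_link_worth(7, 2))  # 100000 - matches the example above
print(relative_link_worth(4, 3))  # 10
```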

To say that some successful bloggers with a large organic following and high authority ranking made a lot of money from selling links (before 2011) would be an understatement. Selling links became an entirely new way to monetize a successful website, but the practice is quickly being made obsolete by search engine algorithm adjustments that scrutinize how links are generated more closely than ever.

SEO Today

Many, many changes have taken place in the last few years regarding Google’s search engine algorithm – most notably the introduction of Google Panda, which blocks or “penalizes” sites for excessive keyword density, untrusted links, excessive ads, too many affiliate links, and a number of other factors that aren’t disclosed to the general public. In all honesty, there is no way for an SEO expert to guarantee that your page will rank at the top of Google’s results for a particular term. That’s not happening anymore. Sure, plenty of services claim they can pull this off, but it’s nonsense. Google’s search engine algorithm is a black box. Feel free to try to get your website to rank #1 for a commonly used term on your own (or hire an SEO company to try), and I’ll do my best to be sympathetic when it doesn’t work out. SEO today is all about analyzing the available data and doing the best one can to optimize a website for search rankings while adapting to an ever-changing landscape (Google’s algorithm is now updated hundreds of times per year with little or no specific information given).

Social media has also entered the mix – particularly the Google+ service – with social sharing, followers, comments, and friends (which give higher priority to “authority figures”) forming an entirely new strategic avenue for influencing search rankings. Although Microsoft and Yahoo! signed a ten-year partnership in 2009 to rival Google’s stronghold on the search engine market, Google is still the dominant search engine as of late 2013, with no signs of that changing in the short term.

This article has dealt almost exclusively with Search Engine Optimization rather than Reputation Management – which in modern terms can be defined as a public relations effort pertaining exclusively to search engine rankings. I look forward to providing our readers with more information and guides related to how Reputation Management works as well as assisting those in need. If you have anything to add to this article or would like to make a comment, feel free to leave a reply below.

-David H.


David Harold

Reputation Consultant at Reputable.com
David Harold began programming as a kid (in BASIC) and never lost his passion for computers and technology. Since the 1990s, he has worked extensively with online directories, search engine optimization, and, most recently, Reputation Management. He enjoys discussing social media strategy and assisting individuals as well as companies with their Reputation Management needs. Feel free to contact him anytime if you are in need of assistance with search engine results or have any questions.
