If you own a website, you have probably heard of the Google algorithm updates Penguin and Panda, which some say are meant to make your life much more difficult. The goal of these updates is to penalize sites that use spammy black-hat methods to boost their search engine rankings. So what comes after Panda and Penguin?
What these updates actually do is about as mysterious as why they are named after cute, fuzzy creatures. That is to say, we know about half the answer to both questions. You see, the Panda update was named after Navneet Panda, a Google engineer who worked on the project. We can only guess that Penguin followed suit by being an animal that starts with "p," is black and white, and is pretty non-threatening.
As for what the algorithms actually do, what we know is their effect. The goal is obviously to eliminate spam. The means by which this objective is reached is a little hazy. The truth is that Google has been updating its algorithms to fight spam for as long as its search bots have existed. These updates simply represent a more concentrated effort to eliminate certain tactics.
The main point of the Penguin update was to attack those who acquired irrelevant links that would never be of real use to anyone. There are hundreds of data points that Google's website crawlers look for on a site, and those exact points are what we don't know. We know some of the general objectives, but not the precise signals behind them.
What is clear is that sites that exist only for links are not faring well. Pages that are littered with unrelated links, undisclosed sponsored links, and links to disreputable sites are being pushed to the bottom of the search results. Google's stance is that it exists for the user. It does not want searchers to find unhelpful or low-quality information.
To reach this end, Google had to go beyond links. After all, web pages don't just have links on them. Links are a good signal, but the meat of a website is the content it possesses. Google had to find a way to measure content quality. We don't know how they did it, and it's still being refined, but that's exactly what Panda aims to do.
One of the things that Panda seeks out is duplicate content. If a million websites say the same thing, they're not helpful to users who are seeking comprehensive information. It's not just a question of plagiarism; even attributed duplicate content takes a hit. This encourages site managers to produce unique sites that don't just restate the same things that everyone else is saying or that they've said before.
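Google has never published how it detects duplicate content, so the following is only an illustrative sketch of one classic approach: "shingling," where two pages are compared by the overlapping word sequences they share. All the names and sample texts here are invented for the example.

```python
# Illustrative sketch only: Google's actual duplicate-detection method is not
# public. "Shingling" compares the sets of overlapping n-word sequences
# (shingles) two documents share, scored as Jaccard similarity.

def shingles(text, n=3):
    """Return the set of overlapping n-word sequences in the text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(doc_a, doc_b, n=3):
    """Jaccard similarity of the two documents' shingle sets (0.0 to 1.0)."""
    a, b = shingles(doc_a, n), shingles(doc_b, n)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

original = "the quick brown fox jumps over the lazy dog near the river bank"
copied = "the quick brown fox jumps over the lazy dog near the old mill"
fresh = "mountain ranges form where tectonic plates collide and fold"

print(similarity(original, copied))  # high score: mostly duplicated text
print(similarity(original, fresh))   # zero: no shared shingles
```

A real system would hash the shingles and compare fingerprints rather than raw word tuples, but the intuition is the same: heavily copied pages share most of their shingles even when a few words are changed.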
Besides originality, Google also wants the sites it recommends to be useful. This means a site needs to be well-written, display a good grasp of its intended language (usually English), and not be a jumble of nonsense.
So what’s next? Well, obviously Google is forever refining their process, finding new ways to measure quality. Computers can only do so much, so there will always be a process for site owners to question and petition why their site is placed where it is. Since links are a great measure of whether a site is recommended by others as an authority, they aren’t going anywhere. While there will be updates to the algorithm concerning links, the current guidelines are pretty comprehensive as far as where desirable links should be placed and found.
This means that Google will place most of its innovative focus on measuring content. The next step is to measure how relevant websites are to searchers' queries. Site writers already have keywords that they aim to rank for. Google has already started battling "keyword stuffing." It doesn't want to see pages with the words "mountain range" five billion times; it wants useful information about mountain ranges. This means that it will find ways to measure the relevance of content to particular subjects.
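Google's real stuffing thresholds are unknown, but the kind of signal involved is easy to imagine: how much of a page's text is taken up by repetitions of one target phrase. Here is a minimal, hypothetical sketch; the function name and sample strings are invented for illustration.

```python
# Hypothetical sketch: the actual signals and thresholds Google uses are not
# public. This just computes what fraction of a page's words are accounted
# for by repetitions of a single target phrase.
import re

def keyword_density(text, phrase):
    """Fraction of the text's words taken up by repetitions of the phrase."""
    words = re.findall(r"[a-z']+", text.lower())
    phrase_words = phrase.lower().split()
    count = sum(
        words[i:i + len(phrase_words)] == phrase_words
        for i in range(len(words) - len(phrase_words) + 1)
    )
    return count * len(phrase_words) / len(words) if words else 0.0

stuffed = "mountain range mountain range best mountain range deals mountain range"
natural = "the Andes are a mountain range formed by the subduction of oceanic crust"

print(keyword_density(stuffed, "mountain range"))  # most of the page is the phrase
print(keyword_density(natural, "mountain range"))  # phrase appears once, naturally
```

A page where one phrase makes up most of the word count looks stuffed; a page that mentions it once amid real information does not.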
It is unclear how relevancy could be measured, since it can be pretty subjective. Perhaps there will be pools of keywords that relate to each other, with greater weight given to more closely related words. This could open a whole new branch of manipulation, though, so the system, whatever it is, will have to be pretty sophisticated.
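Since the paragraph above is pure speculation, so is this sketch: suppose relevance were scored against a pool of related terms, with closely related words carrying more weight. The pool, weights, and sample sentences below are all invented for the example.

```python
# Speculative illustration of the "keyword pool" idea from the text.
# The pool and its weights are made up; nothing here reflects Google's
# actual implementation.

RELATED = {  # hypothetical pool for the topic "mountain range"
    "mountain": 1.0, "range": 1.0,    # the keyword itself
    "peak": 0.8, "summit": 0.8,       # closely related terms
    "tectonic": 0.6, "erosion": 0.6,  # related geology terms
    "hiking": 0.3,                    # loosely related
}

def relevance(text, pool):
    """Average per-word relevance weight of a page's text for one topic pool."""
    words = text.lower().split()
    return sum(pool.get(w, 0.0) for w in words) / len(words) if words else 0.0

on_topic = "tectonic uplift raised every peak and summit in the mountain range"
off_topic = "our shop sells discount shoes and handbags every single day"

print(relevance(on_topic, RELATED))   # nonzero: overlaps the topic pool
print(relevance(off_topic, RELATED))  # zero: no related words at all
```

The manipulation risk the paragraph mentions is visible even here: anyone who learned the pool could sprinkle in high-weight words, which is why a real system would need to be far more sophisticated.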
It will obviously take major work for Google to implement these changes. This would be a prime opportunity for them to have another animal-themed algorithm name. But what will they choose? Will they stick with their black-and-white theme and go with Dalmatian? Or will alliteration lead to the inception of "Python" or "Piglet"? Perhaps they will go less for cute and more for confusing and settle on the ever-intriguing "Platypus." Though we can speculate as to the effects of the changes, only time will tell us the implementation and the name.