Leaving the “Trivial-But-Fixed” Phase
And Entering “It’s Fixed… Really”
In Part 2 of this series, we closed with Google leaving the impression of having fixed the Googlebomb with an algorithm. An algorithm should, of course, detect and defuse new bombs as they appear. That is not what happened (nor is it happening today)…
Bombs continued to appear, including the Scientology “dangerous cult” bomb in January, 2008, and a host of political bombs that appeared pretty much “at will” as campaign season rolled into gear through April, May and June of 2008.
When publicity heats up, can a “cool it down” announcement by Google be far behind? Sure enough…
3 Years Ago
July 23, 2008
Google Bomb is categorically pronounced “over” (article by Garance Franke-Ruta, The Washington Post)
Rick Klau, a member of the Google strategic partner development content acquisition team at the time, announced that “Google bombs don’t work anymore.”
Klau claimed, according to the Post, that Google is “far more perceptive when it comes to these link swarms that show up in a matter of hours or days.”
This is the first-ever mention of the term “link swarms,” and the very first time that Google named a concrete, supposedly recognizable feature of an actual anti-bomb algorithm.
The term touches upon part of The 5-Minute Googlebomb Algorithm described in Part 1 of this 3-part series.
Remember the term “link swarm.” I’ll show you the ultimate, unmissable high-volume “link swarm” that succeeded brilliantly as a link bomb in March 2009, 9 months after this categorical statement by Google.
It puts the lie to the existence of any sort of serious attempt to detect and defuse Googlebombs.
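To make the term concrete: a “link swarm” is a burst of identical-anchor-text links pointing at one page in a short time window. Here is a sketch of my own of what a naive detector might look like. The function name, data shape, and thresholds are illustrative assumptions, not anything Google has disclosed.

```python
from collections import defaultdict
from datetime import datetime, timedelta

def find_link_swarms(inbound_links, window_days=7, min_links=50):
    """Flag (target_url, anchor_text) pairs that gained an unusual
    number of identical-anchor links within a short time window.

    inbound_links: iterable of (target_url, anchor_text, first_seen)
    tuples, where first_seen is a datetime. The thresholds are made up
    for illustration only.
    """
    # Group link timestamps by (page, normalized anchor text).
    buckets = defaultdict(list)
    for target, anchor, first_seen in inbound_links:
        buckets[(target, anchor.lower())].append(first_seen)

    swarms = []
    window = timedelta(days=window_days)
    for (target, anchor), times in buckets.items():
        times.sort()
        # Sliding window: did any window_days-long span
        # contain at least min_links new links?
        start = 0
        for end in range(len(times)):
            while times[end] - times[start] > window:
                start += 1
            if end - start + 1 >= min_links:
                swarms.append((target, anchor))
                break
    return swarms
```

Even this toy version shows how well-defined the signal is: a swarm of identical anchors arriving in days stands out sharply against organic link growth, which is exactly why the repeated failures to catch public bombs are so hard to explain.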
With religion and politics jumping on board, and with more commercial bombs appearing, Google has finally dropped its attempt to trivialize. Note how every phase of Google’s “evolving opaque transparency” meets, grudgingly, the circumstances of the moment. At each stage, Google says just enough to appease/mislead without actually flat-out lying.
That is an art form.
So here we are, moving into the fall of 2008…
The Googlebomb is no longer “trivial.” It’s now “fixed – this time – really. No really.” So says Google.
Well, that must mean we won’t see any more bombs, right?
You know the answer to that by now.
What follows is rather complicated, so I’ll summarize and leave enough additional documentation for you to dig further, if you like…
2.5 Years Ago
Jan 22, 2009
President Obama link bomb (for “failure” and “cheerful achievement”; story breaks at SearchEngineLand)
SearchEngineLand breaks the “Obama is failure” Googlebomb. Yahoo! has the same problems, but note that Yahoo! is not pretending to fix link bombs.
You would expect engines (like Yahoo!) without such algorithms to be impacted. Google claims to have fixed the bomb, but here we are, with yet another presidential Googlebomb.
The mass media pick up on it…
Google-Bombing Moves From Bush to Obama (Jan 23, 2009, Wall Street Journal)
Bush administration hands off ‘failure’ Google bombs to Obama (Jan 23, 2009, LA Times)
Within hours of the mass-media coverage (Google’s #1 motivator), the bomb is gone. Danny Sullivan questions just how “automated” the so-called algorithm is (scroll down to Postscript 5 Jan. 23, 6:50pm). Why? Because suddenly, and here I’ll quote Sullivan…
“Obama no longer ranks for ‘failure’ on Google.
Now, the White House hasn’t changed anything. And the link data that Google has been using to rank this hasn’t changed.
So the Googlebomb fix for this that has not worked since earlier that month just happens to kick in a few hours after I post this article? That is one mighty big coincidence.
That’s going to kick off another round of questioning over how ‘automated’ that fix really is.”
The paragraph breaks are mine, added to clarify Mr. Sullivan’s step-by-step logic. The emphasis on his final sentence is also mine, to highlight the inescapable conclusion.
That is Mr. Sullivan’s polite way of questioning Google’s truthfulness. There is a very big coincidence at play… way too big.
Google’s in a corner. If they could lay claim to an algorithm that they run “infrequently,” that would make a pretty darn good excuse right about now, wouldn’t it?
Here we go, in Google’s Public Policy Blog, coincidentally (???) just two days later (January 24, 2009)…
Read the whole post. It opens with “the old online prank called Googlebombing” and how it “returned for a brief while recently, when Google searches for the words [failure] and [cheerful achievement] returned President Obama’s biography as the top result.”
And then, here we go… we are now entering the era of “we run it infrequently”…
Google tries a new explanation…
“Rather than edit these prank results by hand, we developed an algorithm a few years ago to detect Googlebombs. We tend not to run it all the time, because it takes some computing power to process our entire web index and because true Googlebombs are quite rare.”
Google’s Only Way Out
This was inevitable, their only way to avoid being caught in a lie, the only way to explain real-time facts that did not add up. It’s the magic glove, created to fit the facts.
It’s the “we run it infrequently” gambit.
Why not mention this modus operandi in January, 2007 when Google declared they had “begun minimizing” the Googlebomb? Wouldn’t it be natural to explain how it works at that time?
Why not mention it in April, 2007, when Google got caught again? Matters went awry for Google during Stephen Colbert’s “Greatest Living American” Googlebomb that was publicly engineered and reported.
It was a magnificent real-time demonstration of how well-defined a Googlebomb is. If The 5-Minute Googlebomb Algorithm existed, Colbert’s bomb would not have worked. An engineer would have pressed that “infrequent” button daily to prevent its success. After all, this attempt was no secret.
Colbert’s really was a “prank,” so Google let the embarrassment blow over.
The next time, though, the “Obama-bomb” was too big and too many people were catching on to Google, including “the Crown Prince of SEO,” Danny Sullivan, who is on to the Google scam…
Mr. Sullivan figured it out, and his conclusion is damning. He has concluded that the sudden fix is too coincidental to be “automated.” Google had no choice. “Evolving opaque transparency” moved into the era of “We run it infrequently.” They continue…
“After we became aware of this latest Googlebomb, we re-ran our algorithm and it detected the Googlebomb for [cheerful achievement] as well as for [failure]. As a result, those search queries now return discussion about the Googlebombs rather than the original pages that were returned.”
Imagine that? This conveniently explains it all away. An algorithm exists, it’s just that they don’t run it very often (only, it seems, at times when Google’s credibility is loudly called into question).
“We re-ran our algorithm and it detected the Googlebomb.”
What a coincidence! How wonderful to have such a magical algorithm.
I have a question, though. About this…
“We tend not to run it all the time because it takes some computing power.”
Um, this is Google? The Google? The same one whose mission is to organize all the world’s information.
Just a few months earlier, Google notes (in its official blog July 25, 2008)…
“Google downloads the web continuously, collecting updated page information and re-processing the entire web-link graph several times per day. This graph of one trillion URLs is similar to a map made up of one trillion intersections. So multiple times every day, we do the computational equivalent of fully exploring every intersection of every road in the United States. Except it’d be a map about 50,000 times as big as the U.S., with 50,000 times as many roads and intersections.”
With that type of computational power, they have a “bomb detector” that has to be run separately… at special moments… like when a Googlebomb next embarrasses Google? But not, of course, for lesser-known or commercial Googlebombs that have no PR cost for Google.
That would, um, admit that this infrequent algorithm is just a manual change.
What a strange, feeble attempt to answer Mr. Sullivan’s questioning of how “automated” that fix really is, which is, in fact, questioning the algorithm itself. Here’s a far more likely explanation…
Google has a blacklist. Google adds a site or URL to that list. Google then runs that “algorithm.”
Problem is fixed. But that is a manual fix, not an algorithmic one.
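The blacklist scenario above can be sketched in a few lines, which is exactly the point. In this hypothetical version (the list entries and names are my own invention, not Google’s), all the “detection” happens offline, by hand, when a human adds an entry; the “algorithm” merely applies the list:

```python
# A hand-maintained blacklist of known (query, url) Googlebomb pairs.
# These entries are invented examples; the key point is that a human
# edits this set each time a bomb makes the news.
BOMB_BLACKLIST = {
    ("failure", "http://whitehouse.example/president-bio"),
    ("miserable failure", "http://whitehouse.example/president-bio"),
}

def defuse(query, ranked_urls):
    """'Run the algorithm': demote any blacklisted URL for this query.

    No detection happens here. The work was done manually when the
    (query, url) pair was added to BOMB_BLACKLIST.
    """
    q = query.lower()
    kept = [u for u in ranked_urls if (q, u) not in BOMB_BLACKLIST]
    demoted = [u for u in ranked_urls if (q, u) in BOMB_BLACKLIST]
    return kept + demoted  # push blacklisted results to the bottom
```

Calling `defuse()` “running our algorithm” is technically true, but since the list itself is curated by hand, the net effect is indistinguishable from a manual edit of the results.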
Whatever it is, is it really an algorithm if you only run it to get out of public embarrassments?
Ph.D… Piled Higher and Deeper
Working on the premise that if you keep saying it, someone may believe it…
Matt Cutts, too, takes an entire post trying to control the damage! It’s depressing and funny at the same time.
Suddenly we have a convoluted explanation that “fits the (new) facts” and that Google would love us to believe. It’s like the little boy caught in a lie. Once caught, he adds to it, then again, and again, increasingly more elaborate to cover what he hopes is going to be the last inconsistency (and, naturally, it’s increasingly unbelievable). So…
Guess what? There are now two systems! It seems they ran the “special bomb detector algorithm” only 5-6 times during 2008. Well, was it 5… or 6? Why fudge the number? Is there a range of possible times to have run it? Whatever, it’s just enough to “sound right.”
This version of “that’s our story and we’re sticking with it” explains it all…
“We re-ran our algorithm last week and it detected both the [failure] and the [cheerful achievement] Googlebombs, so our system now minimizes the impact of those Googlebombs.”
The “Infrequent Algorithm” era feels like a keeper. Every future public “bomb-barrassment” can be washed away by infrequently running the (non-existent) algorithm, when all they really do is add the bomb to the “manual fix” list.
There are, however, as always when you are not telling the truth, loose ends. In this case, if the bomb detector is run 5-6 times per year, why is it that only the publicly known bombs get fixed? Should not most (or all) of them be detected and defused?
The answer, as we’ll see, is a resounding “no.” We’ll come back to this point later. For now, suffice it to say…
Google uses a “blacklist” to handle damage to them from Googlebombs, but not to dismantle bombs that do damage to others nor to improve the all-important user experience (“important” according to Google’s public statements).
That is a solution that does not, in Google’s word, “scale.” This is just for Google.
The Real Story?
Deconstructing this, the more likely scenario is that Google has no genuine algorithm, not in the 5-minute sense that I outlined in an earlier post. They never did. (And they still don’t.)
The re-running of some kind of special algorithm (perhaps just a blacklist of URLs) is a technical way, perhaps, of being “sort-of-truthful” when Google says they have “an algorithm.”
But it’s basically the equivalent of making a manual change.
If it’s truly algorithmic, why is it not running at regular intervals and picking up new and existing bombs? If they run it 5 or 6 times a year, how does it miss yet more major bombs such as…
Racist Obama image shines light on Web searching (Dec 2, 2009, CNN)? The course that this H-bomb runs is so predictable… initial Google resistance to making a manual change for Michelle Obama’s “ape-face” is quickly overcome by political expediency.
Sort through the story and you’ll come to the same conclusion as I do.
They made a manual change because a real Googlebomb algorithm does not exist.
At this Obama point in our Googlebomb history, it’s been 9 years since we started with George Bush. The fact that they can’t detect a bomb of this importance means that they have no real detector.
What about the fix that they claim this time? It can’t be programmed with that type of speed.
Why not just admit that the page was manually removed with a blacklist?
Because then we’re back into a solution that “does not scale” (costs money). If a simple blacklist were known to handle Googlebombs, the pressure to help out others damaged by Googlebombs would be intense. It’s better (financially) to insist that they “fix these things algorithmically.”
In fact, they fix these things for Google. Period. Let’s jump ahead almost another year, to the New York Times…
A Bully Finds a Pulpit on the Web (Nov 26, 2010 — New York Times). This was a “Googlebomb with a twist,” whereby a nasty businessperson (Vitaly Borker) allegedly built a Googlebomb through brilliantly nefarious online and offline threatening behavior!
As the story explains…
“Web advocacy sites like Get Satisfaction are vast and score high on Google’s augustness scale. The [Google] spokesman surfed the Web as he spoke and said he could see scads of links between RipoffReport.com and DecorMyEyes [the business].”
Commercial Googlebombs (the dangerous present and future of Googlebombing) can be either self-promoting or competitor-damning. This one is self-serving.
The NYT article elaborates…
“In short, a Google side stage—Google Shopping—is now hosting a marathon reading of DecorMyEyes horror stories. But those tales aren’t even hinted at [Google]‘s premier arena, its main search page.”
Within 5 days, Google posted…
Google worked magic for one little case…
“Even though our initial analysis pointed to this being an edge case and not a widespread problem in our search results, we immediately convened a team that looked carefully at the issue. That team developed an initial algorithmic solution, implemented it, and the solution is already live.”
“… in the last few days we developed an algorithmic solution which detects the merchant from the Times article along with hundreds of other merchants that, in our opinion, provide an extremely poor user experience. The algorithm we incorporated into our search rankings represents an initial solution to this issue, and Google users are now getting a better experience as a result.”
Now Let’s Get This Straight…
In 10 years, they cannot develop a simple Googlebomb algorithm, but they can develop a much more “touchy-feely” algorithm to detect merchants with bad attitudes… in 5 DAYS?
At least they didn’t say that they’ll only be running it infrequently.
Let’s face it. Google may be playing with semantics “in-house” to justify (to themselves) that they are not lying. (We all need to feel that we are on the right side of the truth.) But once you cut through their long-winded non-explanation of a technical miracle that was developed for an “edge case”…
Why not just admit that this was some sort of a manual change? A list of bad guy(s) was created. The “list” (it would have added some credibility to leak a few other names) is run and it penalizes the sites of “bad owners.”
Wouldn’t you say that was a far more likely solution, knowing Google’s obsession with “scaling?” After all, this was an “edge case.”
One way or another, this was a manual change. Google leaves itself the usual “tell,” what I now call their “loophole clause”…
“We can’t say for sure that no one will ever find a loophole in our ranking algorithms in the future.”
Just like Google’s “we can’t say it’s 100% sure” disclaimer in January 2007, there is no need to say this. No one expects 100% accuracy. The point is that a company only feels compelled to make these statements when they feel they need to give themselves “outs” in the future.
Mass-media negative publicity compels Google to do the right thing. It’s self-preservation by PR. It is not about Borker being a bad man.
The “regular” person believes that “Google is great.” Should that confidence be shaken, Google’s profits/existence are threatened.
Public problems cause red alerts to fly around the Googleplex…
“Fix this fast!”
And fix it they do.
Followed by spin.
Rinse and repeat, as necessary.
Google’s admirable policy of transparency (God how I loved the way they used to do business) has become one of “evolving opaque transparency.” The message is not consistent and the transparency is, in fact, worse than the silence of earlier engines.
It’s manipulatively misleading. I mean, really…
Is it likely that these algorithms keep popping out of the woodwork whenever there is intense public heat? When they are cornered, it turns out that they run (or develop in days) special algorithms, right at the moment of negative PR?
Fast-Forward to 3 Months Ago
A Stunning Admission…
Google admits to using both blacklists and whitelists. See my blog post…
Both admissions come on the heels of being caught in the act.
Whitelists and blacklists are manual changes. If a Googlebomber is added to a blacklist and then an algorithm is run, Google may call that “running an algorithm.” But any reasonable person knows it’s a manual change.
Ditto for whitelists. Websites that were unjustly penalized by Panda (and there were thousands) could be whitelisted. They aren’t. The only site to be whitelisted is the one that received publicity from WIRED.com. (Google denies it, but the evidence is overwhelming.)
The Big Bottom Line
Google has all the tools needed to “fix things” manually. If there is a Googlebomb algorithm, it is merely a useless list of “known/public Googlebombs.”
The list is updated by hand. Then the “algorithm” is run manually (“infrequently”) to penalize any new bombs that were added to the list, in order to extinguish negative PR.
Google only does this when they themselves face damage to their business.
Google’s users (who follow the bad recommendations of a Googlebombed search result) and the commercial victims of Google’s mistakes suffer privately. Those bombs are not detected and defused (even when the “algorithm” is run “infrequently”) because no such algorithm (along the lines of The 5-Minute Googlebomb Algorithm) exists.
“Privately” is the operative word.
How do you get a Googlebomb that impacts your business fixed?
Make it to prime-time media. Google suddenly “does the right thing.” It defends its guidelines. It restores what good search results should be. It reassures everyone in its belief in motherhood and apple pie… while the world is looking.
The motive, though, is to extinguish the flames.
Let me repeat…
There is no Googlebomb algorithm, nothing on the simple-to-achieve level of “The 5-Minute Googlebomb Algorithm.” Period.
We’ll give the final word to Google, who said the following back in January, 2009…
“We joke around the Googleplex that more articles have been written about Googlebombs than there are actual examples of Googlebombs.”
Perhaps that is because Google does not make public all the submissions made by victims of Googlebombs, the ones that they ask for as feedback.
What is rare is mass-media public exposure of the Googlebomb.
When that happens, it is fixed… fast. They, um, just “run the algorithm.”
Somehow, the mass media still falls for this shameless act of self-preservation.
Since Google jokes about how rare Googlebombs are, let’s ask the question…
What About the Less-Known and Private Googlebombs?
For them, it’s no joke.
Those are the bombs against which Google does not defend its own guidelines. Those search results mislead Google’s users as badly as the ones that embarrass Google publicly. They hurt their commercial victims as much as the public ones hurt Google.
Those are far from rare.
And Google does nothing about them.
No matter how perfectly any company proves its case, Google does nothing.
Even if the bomb-perpetrators were to confess publicly, Google would do nothing (unless the confession was the lead story on CNN).
We know of at least one such flagrant, private bomb that Google’s “infrequent” algorithm misses over and over again. It is likely the most clear-cut Googlebomb in the world.
The obvious question to Google is this…
Given that you have not solved the Googlebomb algorithmically in 10 years…
Given that you are responsible for the damage it causes to your users (and to its victims)…
Given that you now admit to using blacklists (and whitelists, for that matter)…
Given that you have “infrequently” run the algorithm to protect your own business…
Given your near-monopolistic control of Web Search…
Why don’t you do the right thing?
Do your users not count? They follow bad search suggestions that carry the stamp of Google credibility.
Do you feel no responsibility for the undeserved damage caused to the victims of your algorithm?
Are only Google-damaging bombs fixed?
You’ve had 10 years to make this right. Instead, we’ve gone through 6 phases of “evolving opaque transparency”…
- 10+ years ago (Jan 2001) = “fake confusion” (“dumb m_f_r”)
- 7.5 years ago (December, 2003) = “dismissal — it’s OK” (“miserable failure”)
- 5.75 years ago (September, 2005) = “admit — trivialize — not OK — no manual change”
- 4.5 years ago (January, 2007) = “trivial-but-fixed” (announces “fixed”)
- 3 years ago (July 23, 2008) = “It’s Fixed… Really”
- 2.5 years ago (Jan, 2009) = “The Infrequent Algorithm”
How about returning to the original Google, the one we used to love and trust?
Do the right thing.
Fix it for real.
Make it right for more than just yourself.
The Next Post in This Blog Is Our Final Post… Ever
I had thought this was going to be a 3-part series. And it was. The 10-year history is over. Well, the recounting is over. The story continues.
In SiteSell Blog’s final post (until the Googlebomb is definitively fixed), I’ll reveal the detailed anatomy and construction of a real-world, private Googlebomb. We’ll track it from its moment of inception to the detailed communications between the victimized company and Google.
Google’s answers and actions will stun you and confirm every word of this 3-part series.
Rather than let this series scroll away into obscurity, as blogs tend to do to their content, we’ll end it right here.
Hopefully, others will catch on to “the real Google.”
Hopefully, Google will finally do the right thing for their users, for victims of mistakes that Google knowingly makes and for its own Google Guidelines… the supposed core of Google Search.
All the best,