Back in 2008 I declared that the information economy was giving way to what I called the search economy. The Internet was making it more important to know how to find information than to actually possess that information, because data — and therefore the fully explored truth of any matter — might be constantly in flux. Even more to the point today, we need the same knowledge on many devices, so it is usually better to find the link than to maintain multiple copies of aging data. This might explain in an ass-backwards way why Google just changed the name of its largest tech division from search to knowledge. A more accurate explanation for this name change is just that Microsoft’s Bing has better marketing than Google. But in the longer run the distinction between search and knowledge probably does mean everything.
Bing is a major influence on Google. Though Google is vastly more successful in terms of search volume, Bing has influenced Google’s look and feel, especially in the way that results are displayed. An even bigger success, though, has been Bing’s marketing campaign calling itself not a search engine but a Decision Engine — looking beyond the search to the answer.
Microsoft did more than just try to out-search Google. They gave some serious thought to how to make the quest for information on the Internet more productive and useful. Bing struck a chord with users and competitors alike and one result is that Google, too, is becoming more results-centric. That’s what is largely behind this perceptual shift from search to knowledge. It was behind Google’s Instant Search results, too — a technically non-trivial effort that lies at the heart of what this particular column is all about. For the moment, Google trading search for knowledge is just posturing, but in the longer run it has really significant meaning. It’s a game-changer.
“Not all smart people work at Sun (Microsystems),” Bill Joy used to say, and not all smart people work at Google, either. You can put together the smartest team and still not produce the best product for any number of reasons, which Google is beginning to realize, especially as the company is having trouble retaining technical talent. It’s a small enough world that the current Google brain drain probably is worth the nine-figure bonuses the company has been handing out here and there to keep important would-be defectors in the fold. But in the long run some of these big brains will leave anyway, so Google has to find a way to compete despite their loss. That way is through knowledge, the company has decided.
But actually moving from search to knowledge — from searching to finding — turns out to have a heavy systems component. As they show with Instant Search, Google’s sustainable advantage probably lies not in software but in hardware optimization. What they do may be a little better or a little worse, but if they can do it faster and (here’s the important bit) do more of it, Google will retain its lead and dominant market share.
This is a hardware war and Google so far is winning, primarily because most of their competitors don’t even realize that’s what is going on.
Time for Cringely’s seventh law of information technology: all things evolve to abstraction over time, becoming uninteresting commodities. That’s what is happening right now in searching, which is why the major players are trying to jump to the next level, to finding. This is a case of evolve or die.
Microsoft and Google and their competitors, if any, may know a crapload of computer science, but for the most part that is becoming irrelevant. The fastest way to sort numbers was discovered years ago and mathematically proved. That’s not going to change. It is completely uninteresting trying to invent a faster method of sorting.
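Presumably the “mathematically proved” result meant here is the classic decision-tree lower bound for comparison sorting (the column doesn’t name it, so that is an assumption on my part): any algorithm that sorts by comparing pairs of keys needs on the order of n log n comparisons in the worst case.

```latex
% Sketch of the decision-tree lower bound for comparison sorting
% (presumably the "mathematically proved" result alluded to above).
% A comparison sort on $n$ keys must distinguish all $n!$ possible orderings,
% so its decision tree has at least $n!$ leaves and therefore height
\[
  h \;\ge\; \log_2(n!) \;\ge\; \log_2\!\left(\tfrac{n}{2}\right)^{n/2}
    \;=\; \tfrac{n}{2}\log_2\tfrac{n}{2} \;=\; \Omega(n \log n).
\]
```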
“What’s next after X?” on the Internet can usually be answered by turning the question into “what if X were utterly ubiquitous and free — what could we do then?”
That’s the question Google is now asking itself with prompting from Bing.
Next — the future of search…
I recall reading somewhere that the invention of the printing press doomed the renaissance man – the idea that one mind could know everything known. After the printing press, knowledge exploded, was recorded on paper instead of in brains, and was kept in ever larger libraries.
Eventually library work became a science, and in the 20th century, if you really wanted to find something, you could just go up and talk to a librarian and they’d find it for you promptly, and with gusto. More people were drawn to library science than were demanded, so they came relatively fast, cheap, and friendly, and, having way more capability than was normally demanded of them, were usually glad to take on a challenge, much like a beagle takes to hunting rabbits or huskies take to pulling sleds.
More often than not, I craft poorly conceived searches. I also suffer from internet amnesia: I rack up a list of things I’m dying to look up when I’m away from a computer, only to draw a blank the next time I sit down in front of one. The reverse situation is to fall into a wiki-hole; that is, I sit down for five minutes to look up the date of the Pearl Harbor attack and end up spending seven hours reading up on World War II and only getting halfway through it, and since I did this on Wikipedia, half of it is garbage, but I don’t know how much. I call that kind of knowledge wikifacts, where I’m not certain if it’s a fact or not.
Seems to me some kind of library science will emerge on the internet. Beyond entertainment, there’s little value in wikifacts. It was funny at the time John Belushi said it, but the Germans did not bomb Pearl Harbor; then again, he might have read that on Wikipedia at some point.
That’s certainly an aspect of this evolution, Tim. Metadata can be used to assign to sources a reliability score, for example (there are many other techniques, too). Librarians had the advantage of operating on a much smaller database with mainly professional sources, though that doesn’t always make them more correct. Then there’s the information backscatter that afflicts you. That, too, has to be controlled in some fashion, probably through better targeted search results.
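As a purely illustrative sketch of that idea (the attributes, weights, and domains below are invented, not anything Google or Bing is known to use), per-source metadata could be folded into a reliability score like this:

```python
# Hypothetical sketch: folding source metadata into a reliability score.
# The attributes and weights are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Source:
    domain: str
    citation_count: int      # how often other trusted sources cite it
    editorial_review: bool   # does a human editor vet the content?
    correction_rate: float   # fraction of pages later corrected/retracted

def reliability(src: Source) -> float:
    """Score a source between 0 and 1 using simple, made-up heuristics."""
    score = 0.3 if src.editorial_review else 0.0
    score += min(src.citation_count, 1000) / 1000 * 0.5   # cap citation influence
    score -= src.correction_rate * 0.3                    # penalize frequent corrections
    return max(0.0, min(1.0, score))

sources = [
    Source("nejm.org", citation_count=900, editorial_review=True, correction_rate=0.01),
    Source("randomwiki.example", citation_count=40, editorial_review=False, correction_rate=0.2),
]
for s in sorted(sources, key=reliability, reverse=True):
    print(f"{s.domain}: {reliability(s):.2f}")
```

Results from low-scoring sources could then be demoted or flagged rather than removed outright, which fits the “better targeted search results” point above.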
Not that IBM could, or might even want to be, a player in this search->find evolution, but might the technology that IBM’s Watson demonstrated on Jeopardy be a model/prototype for the kind of “knowledge finding” that you’re talking about?
(Notice how that was phrased (poorly) in the form of a question?)
I work in medical research and the data deluge is drowning everyone. When a single human genome sequence cost $1 billion to complete, you could afford to throw thousands of minds at it. Today it costs $10,000 and everyone is neck-high in billions of nucleotide fragments. Searching through the morass has become a major problem. Google and Bing search data, most of which is crap, and the results usually don’t yield answers per se, but clues/opinions about answers. Those clues/opinions are governed by how good the initial question was. There’s the weakest link. If you know what to ask, the results will usually inform you of the answer. The other weaknesses in search (that confound the ability to productively “find”) are relevancy and garbage. These can be circumvented to a large degree by the quality of the question. But is it in Google’s or Microsoft’s interest to filter out garbage? Could this be an opportunity for a company that builds search questions and applies filters? It would use Google’s hardware at the back end but help frame better questions and filter the results. Didn’t Microsoft try that (use Google to improve their Bing results)?
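A hedged sketch of that “question-building” layer might look like the following; `search_backend`, the synonym table, and the domain whitelist are all hypothetical stand-ins, not a real API:

```python
# Hypothetical sketch of a query-refinement layer: expand a loose question
# with domain vocabulary, hand it to an existing search backend, then filter
# the raw hits. `search_backend` is a stand-in, not a real service.
from typing import Callable

SYNONYMS = {  # toy domain vocabulary; a real layer would use curated ontologies
    "gene": ["gene", "locus", "coding region"],
    "variant": ["variant", "SNP", "mutation"],
}

TRUSTED = {"ncbi.nlm.nih.gov", "ensembl.org"}  # example whitelist

def build_query(question: str) -> str:
    """Expand terms in the question with domain synonyms."""
    parts = []
    for word in question.split():
        alts = SYNONYMS.get(word.lower())
        parts.append("(" + " OR ".join(alts) + ")" if alts else word)
    return " ".join(parts)

def refined_search(question: str, search_backend: Callable[[str], list[dict]]) -> list[dict]:
    hits = search_backend(build_query(question))
    return [h for h in hits if h.get("domain") in TRUSTED]  # drop garbage domains

# Example with a fake backend standing in for a real search API:
fake_backend = lambda q: [
    {"domain": "ncbi.nlm.nih.gov", "title": "BRCA1 variant database"},
    {"domain": "spamfarm.example", "title": "BRCA1 miracle cure!!!"},
]
print(refined_search("BRCA1 variant pathogenicity", fake_backend))
```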
This is my biggest gripe about Google’s results. It seems as if there is deliberate obfuscation on the part of site operators to get that ad-view income. Many high-ranked results purporting to have an answer to a technical question actually don’t answer anything. Google seems to be completely clueless about how they are being manipulated.
They’re making a lot of ad money from those manipulators, so I’m hesitant to assume they’re clueless. Let’s delicately call it a carefully cultivated plausible unawareness…
Time to trot out the bit of hardware/software that makes this transition possible: the SSD/RDBMS. Google, et al, have been exploding the available dataset into a gazillion flatfiles, which they then have to search individually. Such a waste.
The relational model tells you things are related *even though you didn’t know it at the time you stored the data*. It’s the RM that allows one to reason through data. That long-lost language, Prolog (invented at just the same time), does exactly that, using native datastores that look an awful lot like the RM. That’s not a coincidence; both were creations of mathematics.
Culling out the massive redundancy into normal form datastores is the largest problem. But with all those smart people and hardware they can do that.
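As a toy illustration of the relational-model point above (the schema is invented), a single join can derive a relationship that no stored row states explicitly:

```python
# Toy illustration of "the RM tells you things are related even though you
# didn't know it when you stored the data": no row says A relates to C,
# but a self-join over the links table derives it. Schema is invented.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE links (src TEXT, dst TEXT);
    INSERT INTO links VALUES ('A', 'B'), ('B', 'C'), ('B', 'D');
""")

# Find pages reachable from 'A' in exactly two hops -- a fact never stored
# anywhere, only derived from the relations at query time.
rows = db.execute("""
    SELECT l1.src, l2.dst
    FROM links AS l1
    JOIN links AS l2 ON l1.dst = l2.src
    WHERE l1.src = 'A'
""").fetchall()
print(rows)   # [('A', 'C'), ('A', 'D')]
```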
You may well be correct, though making this happen would certainly be non-trivial even for Google. Look to my follow-up column for an idea how they might broadly approach it.
I agree with your seventh law; where can we read the rest of your laws? I believe there is power not only in knowing key concepts, but being able to express them in a concise and compelling way.
Good question. Are there other laws, or was seven chosen because it sounds marketable, like the IBM 1620?
Every few years I publish a list of Cringely’s Laws. I’ll try to cobble them together later this week. You are right, though, that calling it the seventh law was mainly a guess. It might have been the sixth or the eighth. I need to keep better records.
> nine-figure bonuses
Yay for the cents.
Even with cents that’s a pretty big number 🙂
Wolfram wrote a smallish maths book on von Neumann’s automata theory from the perspective of knowledge, or the answer to life meaning everything. He would build Deep Thought to run the Universe experiment in 0s and 1s. Well, his problem was parameters (too many), and his conclusion was that, although possible, the Deep Thought project would take just a week less than the life of the universe itself! Just enough time to sit down and have a cuppa before the end.
Fragmentation, which Google started with Android, is the end of all Google’s endeavors. The big picture is only possible in miniature: know everything about you, only you, and only today.
In the days of paper, the CIA trashed 200 tons of paper that had never been analyzed.
9/11 was known in essence and preventable, yet manpower was not put to its prevention!
So much for knowledge, so much for power, so much for the prevention of 3,000 dead and the destruction of an icon!
Will Microsoft or Google fare better?
Only if they’re corrupt and forge the results! Which Google does now for a price!
Entropy kills Google!
Entropy is dimming the universe. Google is the least of our problems; well in the next few billion years, anyway.
The next thing after X we could do would be to structure all the knowledge we have in the form of a tree (!) and use it to transform feature-based search requests into “normal” search terms formed from words.
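A minimal sketch of that idea, with an invented two-level tree and made-up keyword mappings, might look like this:

```python
# Hypothetical sketch of the commenter's idea: a small knowledge tree whose
# nodes carry ordinary search words, so a feature-based request such as
# ("vehicle", "two wheels", "electric") becomes a plain keyword query.
TREE = {
    "vehicle": {
        "terms": ["vehicle"],
        "children": {
            "two wheels": {"terms": ["bicycle", "motorcycle", "scooter"], "children": {}},
            "electric":  {"terms": ["electric", "battery", "EV"], "children": {}},
        },
    },
}

def features_to_query(features: list[str]) -> str:
    """Collect the keyword lists attached to each requested feature node."""
    words: list[str] = []
    def walk(node: dict, wanted: set[str], label: str) -> None:
        if label in wanted:
            words.extend(node["terms"])
        for child_label, child in node["children"].items():
            walk(child, wanted, child_label)
    for root_label, root in TREE.items():
        walk(root, set(features), root_label)
    return " ".join(words)

print(features_to_query(["vehicle", "two wheels", "electric"]))
# -> "vehicle bicycle motorcycle scooter electric battery EV"
```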
Speaking of searching vs. finding reminds me of the old adage “seek and ye shall find.” Of course we want to find, but the key is looking in the right place, not getting something in 0.1 seconds. I turn off Instant Search since, when using it, you are limited to a page of 10 results, so you have to keep changing pages to see all the results. And getting a list of the top 10 most-visited web sites using your search terms is not always what you want, especially since there are so many sites that try to game Google by specializing in repeating questions.
Personally, I would be thrilled if Google would just get back to displaying relevant search results. I do not need their search engine or developers trying to guess what I may want. It has gotten to the point that I have started exploring other search engine options. If I find something that is good enough, Google will not be getting my search traffic.
You don’t really go into detail here on what exactly Bing is doing to produce search results that are better — or at least different — than Google’s. If you have done any research on this, I think that it would be an interesting topic for a future article. According to comScore, Bing now provides as much as 30% of Internet search results (between Bing.com and Yahoo.com).
https://www.comscore.com/Press_Events/Press_Releases/2011/5/comScore_Releases_April_2011_U.S._Search_Engine_Rankings
If that’s true, Google ought to be more than a little bit afraid, although other sources (including my own site analytics) report figures that differ greatly from comScore’s. In some areas, Google is starting to look like the follower rather than the leader.
1. Bringing social knowledge to search: Bing now lets Facebook users know if their friends have “Liked” any of the websites that appear on search result pages. More importantly, websites with “Like” votes from the searcher’s friends get ranking boosts. This really changes the game of search engine optimization, because people are pretty darn good at detecting garbage. You might be able to fool search engine algorithms, but you can’t fool the wisdom of the crowd — and Bing was there first. Sure, Google introduced the +1 Button before Bing integrated with Facebook, but Bing is leveraging billions of existing “Like” votes while Google’s +1 Button is still in the initial testing stages. (A minimal sketch of how such a friend-based boost might work appears after this list.)
2. Weeding out the poor quality article submission websites: affiliate marketers used these websites to publish hundreds of spun versions of the same article and get their websites ranked for competitive keyword phrases. The content was garbage, and everyone knew it. Judging, at least, from my personal experience, Bing weeded these websites out of the top spots on its result pages long before Google released the “Panda” update.
3. Link color: I can’t explain this one. Bing’s links on search result pages are light blue, while Google’s are more of a royal blue. Recently, Google started to experiment with a new result page format that used the same link color as Bing’s result pages.
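To spell out the mechanics of item 1 above, here is a minimal, entirely hypothetical re-ranker in which results “Liked” by the searcher’s friends get a boost; the weighting is invented and is certainly not Bing’s actual formula:

```python
# Hypothetical social re-rank: results Liked by the searcher's friends get a
# boost proportional to the number of friend Likes. Weighting is invented.
def social_rerank(results, likes_by_url, friends, boost=0.2):
    """results: list of (url, base_score); likes_by_url: url -> set of user ids."""
    def score(item):
        url, base = item
        friend_likes = len(likes_by_url.get(url, set()) & friends)
        return base * (1 + boost * friend_likes)
    return sorted(results, key=score, reverse=True)

results = [("blog.example/a", 0.91), ("spam.example/b", 0.95)]
likes = {"blog.example/a": {"ann", "bo", "cy"}, "spam.example/b": set()}
print(social_rerank(results, likes, friends={"ann", "cy"}))
# The genuinely Liked page overtakes the higher "raw" score.
```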
Bob,
you might be interested in my short thought paper on getting from search to semantics through a combination of structured interface protocols (in essence, doing what the Palm Pilot did, not the Apple Newton), having curated, trustworthy data sets (Wolfram Alpha, for instance), and using search terms as information-passing mechanisms (i.e., Google “Bob Cringely google knowledge” rather than “go to https://www.cringely.com/2011/05/google-decides-knowledge-is-power/”).
See http://ubiquity.acm.org/article.cfm?id=1891342 (or google “Espen Andersen semantic web”)
Imagine you wanted to sort a bunch of straws. You grab them up in a bundle, bang one end of the bundle down on a table, and you’re done. More straws? Bigger arm. Doesn’t the “fastest way to sort numbers” in itself have a set of preconditions? And what if those preconditions don’t hold?
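The straw-bundle analogy is apt: the proved bound applies only to sorts that learn the order through pairwise comparisons. If the preconditions change (say, the keys are small non-negative integers), counting sort sidesteps the bound entirely; here is a short sketch:

```python
# Counting sort: when keys are small non-negative integers, you can sort in
# O(n + k) time without any pairwise comparisons, sidestepping the
# Omega(n log n) comparison-sort bound (roughly the "bang the straws on the
# table" trick).
def counting_sort(nums: list[int], max_key: int) -> list[int]:
    counts = [0] * (max_key + 1)
    for n in nums:
        counts[n] += 1
    out = []
    for value, count in enumerate(counts):
        out.extend([value] * count)
    return out

print(counting_sort([3, 1, 4, 1, 5, 9, 2, 6], max_key=9))
# [1, 1, 2, 3, 4, 5, 6, 9]
```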
Actually, sounds like Google has decided power is knowledge . . . . 🙂
What’s interesting is the notion of folding search into cloud computing (the ever-abused term, sorry…), and specifically I mean the ability to flex “infinitely” (yeah, you know what I mean) across CPUs/hardware. Consider what can become of search as algorithm (which essentially we have, though that is sans database, a component that is non-critical in principle but at this moment a show-stopper to anyone not already in the search business with a major database) wedded to “Internet as cloud.” Now throw in “the semantic web,” which in essence eliminates the need for a database, since the “mere” existence of understood identities and the ability to rapidly catalogue/compile those identities on demand do the job, and imagine the next steps (or struggle to imagine; I’m not sure we can, not properly anyway).
The point is that at some point the value of CPU as an independent variable, callable as a commodity, opens a floodgate as search techniques become truly commoditized and the semantic web becomes a real thing. Naturally, an opposite reaction will be (and we see it already with Facebook and such) resistance to allowing a semantic web, in order to hoard and create higher value for those who manage and execute properties, so I think we’ll see more and more “walled garden” moves in the mid-term. But it seems inexorable that other forces, including the basic need of larger businesses to inter-operate and open markets, will push toward a more-open-than-not web enabling the leveraging of semantically understood resources. (Note this does not mean the actual use or leverage of content will be open; I think it will be an ongoing game, necessarily, of “see what I am but don’t use me until you pay [in some fashion]” for many resources.)

So then does Google shift more and more toward a CPU farm of sorts, with the obvious benefit of being a clear top-tier player? How long do physical space and resource consumption seriously constrain who can be a CPU player? And given the nature of semantics, does the CPU provider necessarily start to intrude by “observing” what is being analyzed/gathered, and therefore start to price based on the suspected value of the semantics, CPU aside, especially as CPU becomes more and more a commodity while the value of (some) content remains, more or less, distinctive and rarefied?
What Google provides is data, pure and simple (e.g., “price of gas in Denver average May 2011” gives 18.9 million results). It’s up to the user to figure out how to get information from that data (“the average price of gas in Denver is $3.75/gallon”). Bing seems to be marketing the ability to provide information; good luck to them (but the same search on Bing gives 13.6 million results). My company uses smart people to refine data that we collect into information, because our clients want the answer, not a batch of data. Providing knowledge from this information (“the price is going up gradually, following a trend due to the turmoil in the middle east”) takes yet more work and intelligence. Whether our clients have the wisdom to use their knowledge properly is yet another question. (And I’m aware I may not have used the best example.)
Computer-generated knowledge is still a long way ahead of us. Oh, and to contradict Sir Francis: knowledge + resources = power.
Actually, Ask.com influenced Bing’s look and feel, doing the three pane thing before Bing. Search for “Meet The New Google Look & Its Colorful, Useful “Search Options” Column” if you want my article documenting all the fun and finger-pointing in the “who copied whom?” department.
The marketing campaign is a big success in that, by spending well over $100 million, it seems, Bing grew its search share by about 50%, from around 8% of the US market to about 12%. That’s an expensive process, and you wonder just how long Bing will keep spending like that, whether the growth will continue to be about 2% per year, and what happens if they stop spending.
And is the whole “decision engine” thing better marketing than Google’s? That’s hard to say, since Google really hasn’t done much marketing of search at all, its Super Bowl ad being one of the few major exceptions.
No one has really said why people are going to Bing: whether those ads work because people want a better decision, or whether they’re just checking out something new. And all the product placements probably don’t help. Nor does having your MSN portal feature a “search” of the day.
Meanwhile, each time I look, Bing’s gains haven’t really come at the expense of Google so much as Yahoo. That’s not surprising, since Yahoo dumped its own search technology. Doing a deal with Yahoo, to gain that 30% overall share of the US market, was one of the smartest things Bing did. That was helped, of course, by the US Department of Justice declaring Yahoo couldn’t do a deal with Google, thus handing Yahoo over at a fire-sale price.
As for the actual function of helping to make decisions, I’m sorry. Bing simply doesn’t have any magic wand that somehow makes it operate that differently from Google. It’s still pretty much: put words in, get links back.
Don’t get me wrong. Bing has indeed given serious thought to how to improve search. They’ve made huge strides in putting out a solid rival to Google, one that can beat Google on some queries or in some features.
But the fundamental shift that you’re writing so glowingly about — that’s not actually happening, not in the marketing hype you buy into. It’s still pretty much “10 blue links” because, you know what? 10 blue links works.
The far bigger issue isn’t the interface but rather the ranking signals. The links that are used to rank web pages have become more and more untrustworthy over time. That’s why both Google and Bing are desperately looking to social signals in order to do a ranking signal transplant for links.
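One way to picture that “signal transplant” (purely illustrative; neither engine publishes its blend) is a weighted mix in which trust gradually shifts from link-based scores toward social ones:

```python
# Purely illustrative blend of ranking signals: as link spam erodes trust in
# the link graph, weight shifts toward social signals. Weights are invented.
def blended_score(link_score: float, social_score: float, link_trust: float) -> float:
    """link_trust in [0, 1]: how much the engine still trusts link signals."""
    return link_trust * link_score + (1.0 - link_trust) * social_score

page = {"link_score": 0.9, "social_score": 0.3}   # heavily linked, rarely shared
for trust in (0.9, 0.6, 0.3):                     # trust in links decaying over time
    print(trust, round(blended_score(page["link_score"], page["social_score"], trust), 2))
# 0.9 -> 0.84, 0.6 -> 0.66, 0.3 -> 0.48: the link-farmed page sinks.
```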
Power supplies are usually optimized for high loads (i.e. 70%-90% maximum power output). So I wonder if they use power supplies that can even use the CPU at full-power