In part one we learned about data and how it can be used to find knowledge or meaning. Part two explained the term Big Data and showed how it became an industry mainly in response to economic forces. This is part three, where it all has to fit together and make sense — rueful, sometimes ironic, and occasionally frightening sense. You see, our technological, business, and even social futures are being redefined right now by Big Data in ways we are only now coming to understand and may no longer be able to control.
Whether the analysis is done by a supercomputer or using a hand-written table compiled in 1665 from the Bills of Mortality, some aspects of Big Data have been with us far longer than we realize.
The dark side of Big Data. Historically the role of Big Data hasn’t always been so squeaky clean. The idea of crunching numbers to come up with a quantitative rationalization for something we wanted to do anyway has been with us almost as long as we’ve had money to lose.
Remember the corporate raiders of the 1980s and their new-fangled Big Data weapon — the PC spreadsheet? The spreadsheet, a rudimentary database, made it possible for a 27-year-old MBA with a PC and three pieces of questionable data to talk his bosses into looting the company pension plan and doing a leveraged buy-out. Without personal computers and spreadsheets there would have been no Michael Milkens or Ivan Boeskys. That was all just a 1980s-era version of Big Data. Pedants will say that isn’t Big Data, but it served the same cultural role as what we call Big Data today. It was Big Data for that time.
Remember Reaganomics? Economist Arthur Laffer claimed to prove that you could raise government revenue by lowering taxes on the rich. Some people still believe this, but they are wrong.
Program trading — buying and selling stocks automatically following the orders of a computer algorithm — caused Wall Street to crash in 1987 because firms embraced it but didn’t really know how to use it. Each computer failed to realize that other computers were acting at the same time, possibly in response to the same rules, turning what was supposed to be an orderly retreat into a selling panic.
In the 1990s Long Term Capital Management took derivative securities to a place they had never been before, only to fail spectacularly because nobody really understood derivatives. If the government hadn’t quickly intervened, Wall Street would have crashed again.
Enron in the early 2000s used giant computers to game energy markets, or so they thought until the company imploded. Enron’s story, remember, was that their big computers made them smarter, while the truth was their big computers were used to mask cheating and market manipulation. “Pay no attention to that man behind the curtain!”
The global banking crisis of 2007 was caused in part by using big computers to create supposedly perfect financial products that the market ultimately could not absorb. But wasn’t that all caused by deregulation? Deregulation can encourage financiers to be reckless, but more importantly Moore’s Law had brought down the cost of computing to the point where it seemed plausible to mount a technical assault on regulation. Technology created temptation to try for the big score.
These were all just variations on the darker side of Big Data. Big Data schemes become so big so fast and then they crash. These data-driven frenzies tend not to be based on reality but on the fantasy that you can somehow trump reality.
And at their heart these failures tend to be based on lies or become undermined by lies. How do you make a AAA-rated Collateralized Mortgage Obligation entirely from junk-rated mortgages? You lie.
It’s a huge problem of basing conclusions on either the wrong method or the wrong data. Some experts would argue this can be cured simply through the use of Bigger Data. And maybe it can be, but our track record of doing so hasn’t been a good one.
There is an irony here: when we believe the wrong Big Data it usually concerns money or some other manifestation of power, yet we have an equal tendency not to believe the right Big Data when it involves politics or religion. So Big Scientific Data often struggles to gain acceptance against unbelievers — those who deny climate change, for example, or endorse the teaching of creationism.
Big Data and insurance. Big Data has already changed our world in many ways. Take health insurance, for example. There was a time when actuaries at insurance companies studied morbidity and mortality statistics in order to set insurance rates. This involved metadata — data about data — because for the most part the actuaries weren’t able to drill down far enough to reach past broad groups of policyholders to individuals. In that system, insurance company profitability increased linearly with scale so health insurance companies wanted as many policyholders as possible, making a small profit on most of them.
Then in the 1990s something happened: the cost of computing came down to the point where it was cost-effective to calculate likely health outcomes on an individual basis. This moved the health insurance business from being based on setting rates to denying coverage. In the U.S. the health insurance business model switched from covering as many people as possible to covering as few people as possible — selling insurance only to healthy people who didn’t need the healthcare system.
Insurance company profits soared but we also had millions of uninsured families as a result.
Given that the broad goal of society is to keep people healthy, this business of selling insurance only to the healthy probably couldn’t last. It was just a new kind of economic bubble waiting to burst — hence Obamacare. There’s a whole book in this subject alone, but just understand that something had to happen to change the insurance system if the societal goal was to be achieved. Trusting the data crunchers to somehow find an intrinsic way to cover more citizens while continuing to maximize profit margins, that’s lunacy.
Big scary Google. The shell games that too often underlie Big Data extend throughout our economy to the very people we’ve come to think of as embodying — even having invented — Big Data. Google, for example, wants us to believe they know what they are doing. I’m not saying what Google has done isn’t amazing and important, but at the base of it is an impenetrable algorithm they won’t explain or defend any more than Bernie Madoff would his investment techniques. Big Scary Google, they have it all figured out.
Maybe, but who can even know?
But they are making all the money!
The truth about advertising. Google bills advertisers, but what makes Google all that money is that the advertisers pay those bills. While this may sound simplistic, advertisers often don’t really want to know how well their campaigns are going. If it were ever clear what a ridiculously low yield there is on most advertising, ad agencies would go out of business. And since agencies not only make the ads but also typically place them for customers, from the ad industry’s point of view it is sometimes better not to know.
Among the Mad Men the agency power structure is turned on its head as a result. Senior agency people who earn most of the money are on the creative side, making the ads. Junior people who make almost no money and have virtually zero influence are the ones who place print, broadcast, and even Internet ads. The advertising industry values ad creation above any financial return to clients. That’s crazy, and to make it work ignorance has to be king.
As a result the Internet is corrupt. The Huffington Post tells its writers to use Search Engine Optimization terms in their stories because that is supposed to increase readership. Does it really? That’s not clear, though what research there is says SEO numbers go up, too, if you insert gibberish into your posts, which makes no sense at all.
Ultimately what happens is we give in and accept a lower standard of performance. Will Match.com or eHarmony really help you find a better mate? No. But it’s fun to think they can, so what the hell…
What’s odd here is we have data cynicism when it concerns real science (climate change, creationism, etc.) but very little cynicism when it comes to business data.
Now here is something really interesting to consider. Google’s data — the raw data, not the analysis — is for the most part accessible to other companies, so why doesn’t Google have an effective competitor in search? Microsoft’s Bing certainly has the same data as Google but has a sixth as many users. It comes down to market perception, and Bing is not perceived as a viable alternative to Google even though it clearly is.
This is the spin game of Big Data.
Here’s another side of this story that hasn’t been covered. Apple has a $1 billion data center in North Carolina, built before Steve Jobs died. I spent an afternoon parked by the front gate of that facility and counted exactly one vehicle entering or leaving. I went on to calculate the server capacity required if Apple were to store in the facility multiple copies of all known data on Earth and it came to eight percent of the available floor space. This building has the capacity to hold two million servers. Later I met the salesman who sold to Apple every server in that building — all 20,000 of them, he told me.
Twenty thousand servers is plenty for iTunes but only one percent of the building’s capacity. What’s going on there? It’s Big Data BS: by spending $1 billion for a building, Apple looks to Wall Street (and to Apple competitors) like a player in Google’s game.
This is not to say that Big Data isn’t real when it is real. At Amazon.com and any other huge retailer including Walmart, Big Data is very real because those firms need real data to be successful on thin profit margins. Walmart’s success has always been built on Information Technology. In e-commerce, where real things are bought and sold, the customer is the customer.
At Google and Facebook the customer is the product. Google and Facebook sell us.
All the while Moore’s Law marches on, making computing ever cheaper and more powerful. As we said in part one, every decade computing power at the same price goes up by a factor of 100 thanks to Moore’s Law alone. So the cost of the computer transaction required to sell an airline ticket on the SABRE System in 1955 has since dropped by a factor of a billion. What was a reasonable $10 ticketing expense in 1955 is today such a tiny fraction of a penny that it isn’t worth calculating. In SABRE terms, computing is now effectively free. That completely changes the kinds of things we might do with computers.
Your personal intelligence agency. Computing has become so inexpensive and personal data has become so pervasive that there are now cloud-based apps that make your smartphone the equivalent of J. Edgar Hoover’s FBI or today’s NSA — a data mining machine. One of these tools was called Refresh and is pictured here. Refresh has now been absorbed into LinkedIn, which is being absorbed into Microsoft, but the example is still valid. Punch someone’s name into your phone and hundreds of computers — literally hundreds of computers — fan out across social media and the World Wide Web, composing an on-the-fly dossier on the person you are about to meet in business or would like to meet at the end of the bar. It tells you not only about this person (their life, work, family, education) but even maps any intersections between their life and yours, suggesting questions you might ask or topics of conversation you might want to pursue. All of this in under one second. And for free.
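Here, for the technically curious, is a toy sketch of that fan-out pattern. It is my illustration only, not Refresh’s actual code; the source names, the lookup function, and the timing are all invented.

```python
# A toy fan-out: query several (hypothetical) sources concurrently and merge
# the results into a quick dossier. Real services, authentication, ranking,
# and rate limiting are all omitted.
import asyncio

SOURCES = ["news", "social", "public_records"]    # invented source names

async def lookup(source: str, name: str) -> dict:
    await asyncio.sleep(0.1)                       # stand-in for a real API call
    return {"source": source, "facts": [f"{name} mentioned in {source}"]}

async def dossier(name: str) -> list:
    # All lookups run in parallel, so the total time is roughly that of the
    # slowest source rather than the sum of all of them.
    return await asyncio.gather(*(lookup(s, name) for s in SOURCES))

if __name__ == "__main__":
    print(asyncio.run(dossier("Jane Example")))
```

The point of the sketch is the shape of the work: many small queries running in parallel and merged at the end, which is how a sub-second dossier becomes plausible.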
Okay, an electronic crib sheet is hardly the apex of human computational development, but it shows how far we have come and suggests how much farther still we can go as computing gets even cheaper. And computing is about to get a lot cheaper, since Moore’s Law isn’t slowing down; it’s actually getting faster.
The failure of Artificial Intelligence. Back in the 1980s there was a popular field called Artificial Intelligence, the major idea of which was to figure out how experts do what they do, reduce those tasks to a set of rules, then program computers with those rules, effectively replacing the experts. The goal was to teach computers to diagnose disease, translate languages, and even figure out what we wanted but didn’t know ourselves.
It didn’t work.
Artificial Intelligence, or AI as it was called, absorbed hundreds of millions of Silicon Valley VC dollars before being declared a failure. Though it wasn’t clear at the time, the problem with AI was we just didn’t have enough computer processing power at the right price to accomplish those ambitious goals. But thanks to MapReduce and the cloud we have more than enough computing power to do AI today.
The human speed bump. It’s ironic that a key idea behind AI was to give language to computers, yet much of Google’s success has come from effectively taking language away from computers — human language, that is. The XML and SQL data standards that underlie almost all web content are not used at Google, where they realized that human-readable data structures made no sense when it was computers — and not humans — that would be doing the communicating. It’s through the elimination of human readability, then, that much progress has been made in machine learning. THIS IS REALLY IMPORTANT, PLEASE READ IT AGAIN.
You see in today’s version of Artificial Intelligence we don’t need to teach our computers to perform human tasks: they teach themselves.
Google Translate, for example, can be used online for free by anyone to translate text back and forth between more than 70 languages. This statistical translator uses billions of word sequences mapped in two or more languages. This in English means that in French. There are no parts of speech, no subjects or verbs, no grammar at all. The system just figures it out. And that means there’s no need for theory. It clearly works, but we can’t say exactly how it works because the whole process is data driven. Over time Google Translate will get better and better, translating based on what are called correlative algorithms — rules that never leave the machine and are too complex for humans to even understand.
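Here is a toy sketch of that statistical idea. This is my illustration only, with invented phrase pairs; the real system maps billions of word sequences, but the principle of counting correspondences rather than parsing grammar is the same.

```python
# Toy "correlative" translation: pick whichever target phrase was observed
# most often alongside the source phrase. No grammar, no parts of speech.
from collections import Counter

observed_pairs = [                       # stand-in for billions of aligned sentences
    ("the cat", "le chat"),
    ("the cat", "le chat"),
    ("the cat", "la chatte"),
    ("big data", "données massives"),
]

table = {}                               # source phrase -> counts of target phrases
for en, fr in observed_pairs:
    table.setdefault(en, Counter())[fr] += 1

def translate(phrase: str) -> str:
    # The most frequently observed correspondence wins; there is no theory here
    return table[phrase].most_common(1)[0][0]

print(translate("the cat"))              # -> "le chat"
```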
Google Brain. At Google they have something called Google Vision that not long ago had 16,000 microprocessors equivalent to about a tenth of our brain’s visual cortex. It specializes in computer vision and was trained the same way as Google Translate, through massive numbers of examples — in this case still images (BILLIONS of still images) taken from YouTube videos. Google Vision looked at images for 72 straight hours and essentially taught itself to see twice as well as any other computer on Earth. Give it an image and it will find another one like it. Tell it that the image is a cat and it will be able to recognize cats. Remember this took three days. How long does it take a newborn baby to recognize cats?
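As a crude illustration of “give it an image and it will find another one like it,” here is a nearest-neighbor sketch over feature vectors. The vectors and filenames are made up; the real trick, which this sketch skips entirely, is that the system learns the features themselves from billions of examples.

```python
# Toy similarity search: represent each image as a feature vector and return
# the library image whose vector is closest (by cosine similarity) to the query.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

library = {                               # hypothetical learned feature vectors
    "cat_001.jpg": np.array([0.9, 0.1, 0.2]),
    "dog_042.jpg": np.array([0.1, 0.8, 0.3]),
}

def most_similar(query: np.ndarray) -> str:
    return max(library, key=lambda k: cosine(query, library[k]))

print(most_similar(np.array([0.85, 0.15, 0.25])))   # -> "cat_001.jpg"
```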
This is exactly how IBM’s Watson computer came to win at Jeopardy, just by crunching old episode questions: there was no underlying theory.
Let’s take this another step or two. There have been data-driven studies of Magnetic Resonance Images (MRIs) taken of the active brains of convicted felons. This is not in any way different from the Google Vision example except we’re solving for something different — recidivism, the likelihood that a criminal will break the law again and return to prison after release. Again without any underlying theory Google Vision seems to be able to differentiate between the brain MRIs of felons likely to repeat and those unlikely to repeat. Google’s success rate for predicting recidivism purely on the basis of one brain scan is 90+ percent. Should MRIs become a tool, then, for deciding which prisoners to release on parole? This sounds a bit like that Tom Cruise movie Minority Report. It has a huge imputed cost savings to society, but also carries the scary aspect of there being no underlying theory: it works because it works.
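To make “it works because it works” concrete, here is a hedged sketch of the kind of pattern-matching involved, on entirely synthetic data. This is not the actual study or Google’s method; the features and labels are invented. Everything such a model learns is correlation, which is exactly why the absence of an underlying theory should give us pause.

```python
# Illustrative only: fit a classifier to synthetic "scan-derived" feature
# vectors. Whatever it finds is correlation in the data, not a causal account
# of recidivism.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_scans, n_features = 200, 50                 # assumed: 200 scans, 50 features each
X = rng.normal(size=(n_scans, n_features))    # stand-in for brain-region activations
y = rng.integers(0, 2, size=n_scans)          # 1 = reoffended, 0 = did not (synthetic)

model = LogisticRegression(max_iter=1000).fit(X, y)
print("accuracy on the data it was trained on:", model.score(X, y))
```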
Google scientists then looked at brain MRIs of people while they were viewing those billions of YouTube frames. Crunch a big enough data set of images and their resultant MRIs and the computer could eventually predict from the MRI what the subject was looking at.
That’s reading minds and again we don’t know how.
Advance science by eliminating the scientists. What do scientists do? They theorize. Big Data in certain cases makes theory either unnecessary or simply impossible. The 2013 Nobel Prize in Chemistry, for example, was awarded to a trio of biologists who did all their research on deriving algorithms to explain the chemistry of enzymes using computers. No enzymes were killed in the winning of this prize.
Algorithms are currently improving at twice the rate of Moore’s Law.
What’s changing is the emergence of a new Information Technology workflow that goes from the traditional:
1) New hardware enables new software
2) New software is written to do new jobs enabled by the new hardware
3) Moore’s Law brings hardware costs down over time and new software is consumerized.
4) Rinse, repeat
To the next generation:
1) Massive parallelism allows new algorithms to be organically derived
2) New algorithms are deployed on consumer hardware
3) Moore’s Law is effectively accelerated though at some peril (we don’t understand our algorithms)
4) Rinse, repeat
What’s key here are the new derive-deploy steps and moving beyond what has always been required for a significant technology leap — a new computing platform. What’s after mobile, people ask? This is after mobile. What will it look like? Nobody knows and it may not matter.
In 10 years Moore’s Law will increase processor power by 128X. By throwing more processor cores at problems and leveraging the rapid pace of algorithm development we ought to increase that by another 128X for a total of 16,384X. Remember Google Vision is currently the equivalent of 0.1 visual cortex. Now multiply that by 16,384 to get 1,638 visual cortex equivalents. That’s where this is heading.
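Spelled out, the back-of-envelope arithmetic behind those numbers (using the article’s own rough guesses about the future) looks like this:

```python
moores_law_gain = 128            # projected hardware gain over 10 years
cores_and_algorithms_gain = 128  # projected gain from more cores plus better algorithms
total_gain = moores_law_gain * cores_and_algorithms_gain   # 16,384
visual_cortex_equivalents = 0.1 * total_gain               # starting from 0.1 today
print(total_gain, visual_cortex_equivalents)               # 16384 1638.4
```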
A decade from now computer vision will be seeing things we can’t even understand, the way dogs can sniff out cancer today.
We’ve simultaneously hit a wall in our ability to generate appropriate theories while finding in Big Data a hack to keep improving results anyway. The only problem is we no longer understand why things work. How long from there to when we completely lose control?
That’s coming around 2029, according to Ray Kurzweil, when we’ll reach the technological singularity.
That’s the year the noted futurist (and Googler) says $1000 will be able to buy enough computing power to match 10,000 human brains. For the price of a PC, says Ray, we’ll be able to harness more computational power than we can understand or even describe. A virtual supercomputer in every garage.
Matched with equally fast networks this could mean your computer — or whatever the device is called — could search in its entirety in real time every word ever written to answer literally any answerable question. No stone left unturned.
Nowhere to hide. Apply this in a world where every electric device is a sensor feeding the network and we’ll have not only incredibly effective fire alarms, we’re also likely to have lost all personal privacy.
Those who predict the future tend to overestimate change in the short term and underestimate change in the long term. Desk Set, from 1957 with Katharine Hepburn and Spencer Tracy, envisioned mainframe-based automation eliminating human staffers in a TV network research department. That happened to some extent, though it took another 50 years and people remain part of the process. But the greater technological threat wasn’t to the research department but to the TV network itself. Will there even be television networks in 2029? Will there even be television?
Nobody knows.
If you have been reading this series and happen to work at Google, your reaction might be to feel attacked since much of what I am describing seems to threaten our current ways of living and the name “Google” is attached to a lot of it. But that’s not the way it is. More properly, it’s not the only way it is. Google is a convenient target but much the same work is happening right now at companies like Amazon, Facebook and Microsoft as well as at a hundred or more startup companies. It’s not just Google. And regulating Google (as the Europeans are trying to do) or simply putting Google out of business probably won’t change a thing. The future is happening, no matter what. Five of those hundred startups will wildly succeed and those alone will be enough to change the world forever.
And that brings us to the self-driving car. Companies like Google and its direct competitors thrive by making computing ever faster and cheaper because it makes them the likely providers of future data-driven products and services. It’s the future of industry.
Today if you look at the parts cost of a modern automobile the wiring harness that links together all the electrical bits and controls the whole mechanism costs more than the engine and transmission! This shows our priority lies in command and communication, not propulsion. But those costs are dropping dramatically just as their capabilities are increasing. The $10,000 added cost to Google for making a car self-driving will drop to nothing a decade from now when all new cars will be capable of driving themselves.
Make all new cars self-driving and the nature of our automotive culture changes completely. Cars will go everywhere at the speed limit with just one meter separating each vehicle from the next, increasing the traffic capacity of highways by a factor of 10.
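A rough plausibility check on that factor of 10, using my own assumed figures for car length and human following distance (only the one-meter gap comes from the paragraph above):

```python
# Highway capacity scales with how little road each vehicle occupies.
car_length = 5.0    # meters, assumed
human_gap = 55.0    # meters, roughly a two-second gap at 100 km/h, assumed
robot_gap = 1.0     # meters, from the article
capacity_ratio = (car_length + human_gap) / (car_length + robot_gap)
print(round(capacity_ratio))   # -> 10, consistent with "a factor of 10"
```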
The same effect can be applied to air travel. Self-flying aircraft may mean lots of small planes that swarm like birds flying direct to their destinations.
Or maybe we won’t travel at all. Increased computational power and faster networks are already bringing telepresence — life-size video conferencing — into the consumer space.
It could be that the only times we’ll physically meet anyone outside our village will be to touch them.
All these things and more are possible. Bio-informatics — the application of massive computing power to medicine — combined with correlative algorithms and machine learning will be finding answers to questions we don’t yet know and may never know.
Maybe we’ll defeat both diseases and aging, which means we’ll all die from homicide, suicide, or horrible accidents.
Big Data companies are rushing headlong, grabbing prime positions as suppliers of the future. Moore’s Law is well past the inflection point that made this inevitable. There is no going back now.
So the world is changing completely and we can only guess how that will look and who… or what… will be in control.
666
Lazy and non-original science fiction article with no factual backing. All this computing power and data — but all it’s used for is to spam us with more useless ads.
–
It’s true the advertising customers don’t really want to know their success rate. If they did, they’d find out most of us are using ad blockers, and detest their crappy products for the annoyance they cause us.
–
Truth is, we are in a sort of dark ages of tech. Moore’s law is simply giving us a game/music/porn device in our pockets, more precise weapons, spy tech, and everything is a billboard. A massive FAIL.
In the 80s, AI proponents believed someday soon computers would model the human brain, then surpass it. Using the big data method is an admission that this approach has failed. Computers are not brains, and nobody understands how brains work. If it takes a whole data center sifting through petabytes of data to sort out cat pictures, an easier and cheaper way is to give a 3 year old a cookie.
–
I do not fear the future because all this investment and research, so far, is producing very little.
I had to bail at “climate change”.
That’s as much religion as anything else. I’ve been an engineer / scientist for a long time. I was 4 years of age when I told my mother I did not want to go to church any more because I did not believe in god.
It took me even less time to figure out “climate change” is a scam and a hoax.
You disappoint Bob. You know better. You’re just going along because they’ll beat you up if you don’t.
At least they stopped calling it “global warming”. Climate change is hard to disagree with, since I know today’s climate is different than yesterday’s, which is why we care about weather forecasts. The debate is really about the extent to which human activity affects it. As far as “creationism” is concerned, it can’t be taught, other than to say some people believe God participates in each creation either directly or indirectly through evolution. Evolution is a science; creationism, a belief. Both should be discussed as part of a well-rounded education, keeping the science vs. belief distinction in mind.
This series has been outstanding and mind blowing. One of your best ever – and it needs a larger audience.
You need to shop this, Bob. PBS needs to make this into a Nova episode.
“Hitchhiker’s Guide to the Galaxy.” The computer provides the answer, but no one understands it because it turns out they didn’t know what the question actually was.
> “Bing is not perceived as a viable alternative to Google even though it clearly is.”
Written by someone who has clearly not really tried to use Bing. Bing results are often head-scratchingly bad.
Unfortunately, I have to agree – Bing is my default search engine in Firefox, but I often have to switch a query over to Google. And this includes looking things up on Microsoft’s own MSDN.
In the previous column I posted why Google’s arrogance caused me to switch to Bing. I have had no problems with Bing. In addition, to what I wrote about 4 days ago, Google again irritated me yesterday. Without any notice or authorization from me, it placed my birthday in my Calendar.
…
I am very careful to, as much as possible, keep my personal information off of the web for security reasons. (For example, someone tried to hack a bank account last year) Google is so arrogant and overreaching that it decided it would place my security at risk and put my birthday in my calendar. I use gmail because it is superior to all of the other email services that I have used. However, wherever I can, I avoid Google because of its arrogant and aggressive behaviors. I was happy to stop searching with Google and reduce its bottom line in some amount however very small that is.
Should have said that the gmail spam filter was a very good product. The changes in the compose feature implemented in 2013 were horrible. See user reaction here: https://productforums.google.com/forum/#!topic/gmail/10vlXAH5jKo
With the addon “old compose”, gmail was serviceable. Sometime in the last 5 or 6 months Google seems to have removed the most awful “new compose” features although the “new compose” is still worse than the old product.
A possibly simple solution for the Gmail composer problem (agree that it is horrible from several perspectives) is to use a client you like that allows interfacing with Gmail’s SMTP and POP or IMAP access. I use SeaMonkey (mostly on Linux) as it retains its Netscape UI roots quite nicely, such that my email from 17 years ago originally processed by the OS/2 Netscape mail client is still part of my email archive on my own PC’s.
However, Google does seem to be making it less inclusive such that any mail I view with their web client (and maybe the IMAP client on a phone such as the Outlook client provided with Windows Phone 8.1) does not seem to be available after that activity for download by the POP client – needs more investigation. My response to this apparent degradation of my control over my email is to use Gmail less, but a lot of my correspondents, especially business/government, are not easy to change to using a new address. Still, I can use SM’s composer to send email to such recipients.
RO Thunderbird works well with POP & gmail, and I can still see gmail mails after downloading gmail to Thunderbird.
I’ve used Bing. My point here is that we’ve reached a point where spider technology is mature. There is no technical reason why Bing should be worse than Google from the standpoint of raw data. What you are talking about, however, are search results, which evidently have a different style in Bing — a style you don’t find useful. Bing may or may not use Google’s PageRank (actually Stanford University’s PageRank to which Google has had a NON-EXCLUSIVE license since 2011). So maybe Google IS better, but why? In any case, the PageRank patent expires next year.
You don’t define “better”. I find search results are generally lousy unless I’m trying to buy something (surprise, surprise), or it is something undeniably factual — like how many cups in a gallon.
—
For anything more complex, I often go through many pages of search results without finding anything relevant. And as another reader pointed out, just finding search results doesn’t mean the answer is accurate.
Quote: “It’s a huge problem of basing conclusions on either the wrong method or the wrong data.”
Plus examples from Insurance, Advertising, Google Brain.
Actually, rather than “wrong method” or “wrong data”….it might be a very old problem with the application of logic. Namely the complete inability to separate a correlation from a cause. The Google Brain example might be a good example of such a mistake. “Google’s success rate for predicting recidivism purely on the basis of one brain scan is 90+ percent.” Yes the correlation is 90+ percent. But is the CAUSE some inbuilt tendency to recidivism, or is the cause something else which causes recidivism. This distinction is particularly important for someone trying to treat recidivism.
Quote: “…..search in its entirety in real time every word ever written to answer literally any question”
Really? How about “Where will I be at the instant of my death?” How about “Where (exactly) is Ambrose Bierce buried?” Or “What is the meaning of the second and third Beale cipher documents?” ……I could go on…….
I’ll change it to “any question to which there is an answer.” Satisfied?
Not exactly. What year will I die has a correct answer, and is thus a decidable question.
We just don’t know which is the correct answer right now.
Bob said “any question for which there is an answer”, not “…will be an answer”. It sounds like you are assuming statements about the future, have a known, provable, correct answer today, but I suspect that ignores the inherent randomness of nature.
No, I just remember a test where most of the class missed one question:
You had to identify which questions were decidable, that is, whether there exists a function that produces the right output.
The number of students who will pass the class. I don’t remember if it was just one output, or a separate output for each student.
Most of us said it was not decidable, but the correct answer is that it is.
That function exists, we just don’t know which it is yet.
Bob: The problem with technology which claims to “search in its entirety in real time every word ever written to answer literally any question” is threefold:
1. The answers provided may in fact be wrong (as compared with an answer like “Sorry, Dave, I don’t know”)
2. People might believe the wrong answer
3. ……and those people may take action based on the wrong answer
So it’s not just the answer which matters, but also the provision of provenance for the answer.
We need to learn the difference between “wisdom” and “knowledge.” We can find all the information we want but do we understand it?
but-but-but Vger knows…
And the problem with AI the last time was optimism and magical thinking.
So this time it’s different because..?
Thirty years ago AI was a joke, yet it attracted huge amounts of venture capital anyway. It was a joke for two reasons: 1) the only way to make it work was to ship an actual expert with the product, and; 2) my stated reason that working AI was far more computationally intensive than could be managed for a reasonable price at the time. This latter part HAS changed. We have cheap access to a lot more computing power today and — as a result — AI is starting to produce usable results. As computing gets cheaper still, results will improve more. I’m not convinced, however, that we’ve outgrown that need to ship an expert in the box (or cloud, as the case may be) and that may be where you are heading in your comment, I’m not sure. Even experts have biases. My worry about AI going forward is we may find instances where it looks like a duck, walks like a duck, quacks like a duck, but in fact ISN’T actually a duck. There are instances where such concerns are testable. You can, for example, use Google Translate to convert your text into Kazakh, then use it again to translate the Kazakh text back into English. If what comes back looks very close to what went out, then you are probably pretty safe even without knowing a single word of the target language. But how do you do such a test when it comes to recidivism? I don’t know.
Great series, thanks!
My former company (Wipro) has a data center right around the corner from where you parked outside of Apple’s DC, and Facebook and a couple of other companies have DC’s in the same area around King’s Mountain.
While the $1 Billion data center may only have 20,000 servers, those servers have multiple processors with 12 cores, which provides a huge amount of computing power, more than sufficient for just about anything they would need to do, and the actual server operating systems are all virtualized, so you may be looking at 200,000 server VMs. As CPUs become increasingly dense, they will simply replace the servers they have, and maybe only need 10,000. The rest of the space is being consumed by STORAGE, which is the current bottleneck. They simply can’t get shelves of spinning rust into racks fast enough, although with the latest 4TB SSDs and of course higher densities in the future, the spinning rust will be phased out quickly to speed up access and further lower energy consumption.
When I originally wrote about the Apple data center I consulted old friends who were designers of large data centers. It’s not rocket science. There are standards for rack capacities, energy consumption per square foot, etc., and I applied these to come up with the numbers I cited — that all digitized data available at that time could be STORED in 20 percent of the available floor space. Time has passed since then and the amount of data in the world has grown but so has storage density as you mention. My guess is they are neck-and-neck so there is still plenty of empty space and your point is probably wrong. At the time the explanations I was given for such a large build-out pretty much came down to indoor soccer fields, basketball and volleyball courts. I am not making this up. Hardly anybody works there but if they want to shoot some hoops they can do so in air conditioned comfort.
Since you visited the Apple DC in NC, more transformers have been added to their substation. A whole new utility pad has been built for what appears to be a battery farm. They just started construction on another big building to the SE of the data center. Apple is definitely investing in its facility.
For our future, please see: The Machine Stops by E.M. Forster
Thanks, always appreciate a good sci-fi short story. About an hour’s read.
Written in 1909 and prescient today. http://archive.ncsa.illinois.edu/prajlich/forster.html
Glad you enjoyed it. I was at the library to check out a Hornblower novel and the collection of E.M. Forster short stories was on the shelf next to it.
And speaking of books – it seems that Calvin’s dad was not so far off..
https://www.gocomics.com/calvinandhobbes/1993/12/07
I have read all the series. It’s wonderful work you did with the first two. Your vision in the third is too dramatic. Big Data will make us smarter. It will enable AI and will bring huge changes to our lives and work. Yes, there are many threats, but this is inevitable. Refer to Asimov’s The End of Eternity to understand why we have to embrace threat in order to evolve.
Yes, our lives and business will change. But this is inevitable. AI will be part of our lives in the very near future and it’s up to us to find our way of living with it.
What you note is the fundamental difference as a writer between looking backward and looking forward in time. The past is the past and should remain pretty static if the writer knows what he is doing and works hard. Sometimes new information comes along that changes the way we see the past but it’s rare. But the future is wide open. We’re sitting at the origin point of a vector that could change its course at any time and for any reason. So we hedge a little more, we are of course less detailed and — in contrast to the history — honest writers have to admit they really don’t know. I am an honest writer. So if it comes across as too dramatic, that’s the drama of the unknown, the yet to be. I’m not saying to somehow avoid the future: it is inevitable so we must embrace it or die. In fact I am saying exactly what you did: there will be threats. There have always been threats. I just think it’s smarter to be on the lookout for them.
For parts 1 and 2 there is enough data to be very analytical. In part 3, analysis of the data can only take you so far; then you need to, as Brian Christian would say, play the middle game (chess), where we play the least like machines. Perhaps if we embrace what is coming with our humanity we will discover something more of what it means to be human. BTW: while I enjoy the facts and analysis in your posts in general, I also very much enjoy the humanity that comes across.
I wonder what Google/Facebook/Microsoft et al will do when the ocean of data (aka IoT) will not enter their pipes.
Google will not be able to buy it easily, as they did with patent libraries and scripts.
Dispensing free Google Home products? Maybe.
I predict a “Battle for the data” sooner rather than later.
“Creationism” is “real science”?! You just reduced the quality of this article. (I like the rest of it though.)
I think you misread that sentence.
Though personally, I don’t see what the problem is. If I had been educated a couple millennia ago, and miraculously observed a fast-forward version of the scientific creation of the earth, I’d likely write something similar to what the ancients did.
The servers are shoved in a corner while the rest of the space is devoted to a track where they test out the Apple Car™.
I would like to see more information on Google’s Deep Learning techniques. Most of what passes for AI today is just neural networks. Here’s a bunch of data and here’s the result we are looking for; repeat enough times and it gets good at matching your pattern. This training requires some sort of human setup, with decisions about what data to provide and a value judgement about a “good” match. Incomplete data, incomplete value judgements – you get a nice party game to recognize pictures of cats. Maybe big data combined with big processing gets us further faster, but real progress and real learning needs to eliminate the human judgement bottleneck.
So, in summary, big data is a huge problem when people get their greedy, scheming hands on it. Big data works when intelligent algorithms work on it.
BTW, I read the other day that those studies on active brains via fMRI have all been rendered moot. Apparently one of the algorithms has a huge flaw in its statistical routines. A lot of our big data, including climate data, are based on human-designed algorithms and calculations that are impossible to validate. What happens when our fatally flawed data drive the future artificial intelligence?
The situation is both better and worse than you describe. Machine learning has made remarkable advances in recognizing patterns in big data archives, but when it comes to doing more general things, there are still areas where there are huge gaps. Robotics expert Rodney Brooks has noted that we’re pretty clueless about how to get a machine to pick an individual part out of a bin into which a bunch of parts have been dumped. The ancestors of the squirrels stealing sunflower seeds from the bird feeders in my backyard figured this out millions of years ago.
The AlphaGo system became a world-class player by starting with a database of 30 million go moves, and then built on this by playing against itself more games than have ever been played by every human since the game was invented. Human players like European Champion Fan Hui are vastly superior learners, needing nowhere near that amount of data to achieve the same skill level — and we have only sketchy clues about how they do it.
For every exponential advance in computing power provided by Moore’s Law, there is an “NP-hard” problem that gets exponentially more difficult when you try to scale it up. Google is spending tens of millions of dollars on exotic hardware that still can’t beat that kind of combinatorial explosion in complexity, even in theory, and the exotic hardware doesn’t seem to be that much better than a regular computer.
Philosophers got all bent out of shape when railroads began to go faster than horses in the 1820s, and predicted “the end of the world as we know it”. And they were right! If you’re in a depressive mood, you can remember that railroads played a key part in winning the US Civil War, and in every war for the next 120 years.
To me, the most interesting part of this “revolution” is that anyone can play – if you have the right problem and a modicum of talent you don’t need to have the resources of a billion-dollar “unicorn” to do awesome things. Alex Krizhevsky was a student at the University of Toronto when he helped launch the deep learning revolution in 2012 using a PC in his bedroom to break open the ImageNet challenge with results that were nearly 50% better than the previous record. OK, he had a super gaming rig with *two* GPUs working in tandem, but that’s not an unreachable investment. And anyone can subscribe to an upcoming course from Geoff Hinton who invented many of the most important deep learning techniques, via Coursera.
And if you anticipate getting more data than your PC can handle (a few 4TB drives go a long way), your credit card will get a free starter account on Amazon Web Services or Microsoft’s or Google’s clouds, that you can expand in minutes if the need arises. (GPU-equipped servers are not included in the free accounts, sadly.) Those companies are in a long-lasting price war that will keep their costs below any build-it-yourself and run-it-yourself installation. They’re the reason that IBM is doomed even if it solves its management problems.
Good comment.
–
BTW, the birds in my yard also pick out the sunflower seeds.
Great series, Bob, some of your best work.
My worry about self-driving cars (and other situations where computers hold primary responsibility for safety) is hackers.
Doctor: Watson, I have a patient who is complaining that the medicine you prescribed is making him sleepwalk.
Watson: I recommend amputation of his legs to solve his sleepwalking problem.
Fear Not Malevolent AI Robots https://fee.org/articles/fear-not-malevolent-ai-robots/
The articles, with some of the readers’ comments, make for compelling insight into where we are headed next; definitely a nice e-book and basis for a video series, available online and on cable and broadcast networks. I’m sure it can be sold to Mark Cuban, just pitch it as like Cosmos except it’s about computing and data, you can hire a personable host who is quirky, likable, a little showbizzy and has a ring of authenticity or being an actual scientist (Bill Nye?). Hell, you could probably “hire” the Pixar characters of Bob “Mr. Incredible” Parr and Gilbert Huph to help explain how Big Data changed the insurance business in one episode.
Re: “you can hire a personable host who is quirky, likable, a little showbizzy and has a ring of authenticity or being an actual scientist ” You’re describing Bob, himself.
Second that.
I think you’ve missed one aspect of this – the effect that this has on the daily news, although you allude to it with The Huffington Post reference. Once upon a time newspapers and news organizations researched events and wrote stories to inform and entertain their readership. This has changed – these days stories are written to generate eyeball hits on the accompanying advertisements as measured by Big Data. The actual content of the story, truth, etc. are more or less irrelevant; the function of “News” is to generate page views and their associated advertising revenue. As a result the reported news is far more vitriolic and I believe that this feeds into politicians’ current inability to run the country. News reporting generates much more advertising revenue when the daily reports discuss scandal rather than ability.
Science and technology create moral questions that only the philosophy behind “creationism” answers.
Big Data has moved into education over the last 10 years, using the data to track student activity and responses to stimuli inserted into questions and online activity. An amazing amount of response data on humans, with the opportunity to fine-tune the stimuli to better predict the response. You should look into that aspect of Big Data.
Great article.
Great series of articles Bob, really enjoyed them. Thanks.
Are the Caves of Steel and The Naked Sun where we are going? So sad to picture
Bob at his best. It’s been too long since one of these epic articles. No doubt that the IBM articles represent some very good in-depth reporting, but they are a bit dry.
These are great articles.
Not that it detracts from the article, but I take exception to a couple of points. The first:
“Remember Reaganomics? Economist Arthur Laffer claimed to prove that you could raise government revenue by lowering taxes on the rich. Some people still believe this, but they are wrong.”
I must respectfully disagree — there are many historical cases (with the above being the most recent example), where lowering the tax rate causes overall tax income to the government to go Up. My explanation is that when taxes are excessively high, the “rich” people spend a lot of their money on experts who work vigorously to find and create loopholes to help them pay as little tax as possible. Overall, revenues go down.
On the other hand, when the general tax rates are low enough, the “rich” decide that it is a reasonable cost of doing business, and will simply pay that rate (proportionally) — but that frees up a bunch of money which they don’t spend on tax experts, they spend it on making more money (which is taxed proportionally). Likewise, they grow their businesses, hire more people (who now have money to spend and pay taxes on themselves), and that wealth does trickle down, and the net result is that you get more tax revenue with lower rates.
Those of you who doubt that this happens will take exception…
Great Scott,
I was also going to chide Cringely for his erroneous swipe at “Reaganomics”. The Laffer Curve is legitimate economics. There are levels of taxation where more tax revenue can be realized by lowering tax rates. An analogy would be there are engine / prop speeds at which more aircraft power can be realized by reducing throttle. A constant increase of engine throttle does not translate to ever increasing aircraft power.
What is true is that Republicans later bastardized Laffer Curve theory by falsely claiming that lowering tax rates always increases total tax revenues. That is a myth.
Re: “claiming that lowering tax rates always increases total tax revenues”. No one ever claimed that, since a near-zero rate will produce near-zero tax revenue, obviously.
It depends on who and what you lower taxes on. Lower taxes on corporations, even approaching zero, still retains the income from citizens and promotes employment.
That’s where people get confused. They confuse “the rich” and corporations.
Re: “It depends on who and what you lower taxes on” Of course that’s true and also obvious. The issue is not lowering or raising taxes, but maintaining an incentive for people to take the risks necessary to improve everyone’s standard of living. The problem with big government and socialism in general is that the goal is often equality over a higher average standard of living.
Cringely loves his swipes. I wonder if Obamacare is still a great deal for him?
Even if Cringely is right and Art Laffer and the supply siders are wrong, he is still way off in claiming that comes from big data. That graph he puts up looks like an 8th grade homework problem.
Very thought-provoking article Bob. The end of it reminds me of Asimov’s Caves of Steel Solarian society. Pretty much as you describe in terms of socialization (remotely) and communications. The later Susan Calvin stories also address the issues of the Machines, the supercomputers that ran Earth. Calvin admitted humans had to trust the machines because they were built to be ethical. And there lies the difference between speculative fiction and reality. Ethics are not mentioned in any Big Data talk I have heard as intrinsic to the technical operation. Now that I think about it, not mentioned at all.
Perhaps that’s not a bad thing. I cannot think of a single moral panic that has not done more harm than good.
Regarding ethics, at the first IEEE Rock Stars of Big Data conference, the keynote was “Grady Booch: The Human and Ethical Aspects of Big Data”.
Or you could just read it. It’s amazing to me that a text article, available on the IEEE Computer Society’s web site, is most accessible as a podcast, published as a video on YouTube. Amazing, and a bit horrifying, because of the time commitment and the data bloat on each step.
.
The problem of ethics in AI is that ethics involve meanings, and as Cringely explains, we are now skipping that step. In Asimov’s stories, there are conversations and drama about what exactly is an ethical course of action, but his robots are born with complete awareness of their surroundings, total command of language, and instant and unerring logic. As Microsoft recently demonstrated with Tay, real life AI… not so much. So, Booch discusses ethics, but it’s still just human ethics, what the programmer chooses to do. The computer has no ethics.
.
As you interact with more people, the less you can make everybody happy. Many countries want to control what information their citizens receive so they remain on the moral path, or at least keep incumbent rulers and designated heirs in power. The US claims to be against censorship, but with so many asterisks that I question the claim. There is no single system of ethics that can be universally applied.
.
It certainly doesn’t help that the cloud companies are controlled by creeps like Eric Schmidt. He says if you don’t want anybody to know what you’re doing, maybe you shouldn’t be doing it. Actually, I don’t want to be pigeonholed into some category, so maybe Google shouldn’t be tracking what I do. Booch doesn’t propose this approach to big data ethics.
Grady Booch: The Human and Ethical Aspects of Big Data
http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=6750430
@GS. You failed to note that most limiting word, “historical”. New jobs don’t happen anymore. Machines are cheaper than humans. Aside from that, enough studies have shown that in the last 30 years reducing taxes on the wealthy does nothing to help anyone else. Cutting taxes on tea 2 centuries ago may have increased revenue, but nothing of need is in that shortage of supply anymore.
And with all due respect, please realize that “People who fail to learn from history tend to repeat it”. I’m reasonably confident that if the majority of voters learn from history and repudiate the last 7.5 years, that there may be time to reverse the trends and bring things more to my vision. If on the other hand they keep them going another 4 years, that the history we’ll be repeating will be that of the former soviet union. Time will tell.
All these techniques have the same strength and the same flaw: they classify facts but that doesn’t lead to understanding. For example, every one of the elements of the periodic table emits a characteristic spectrum. Make a gas out of the atoms and heat it up or run an electric current through it and it will emit light of very specific colors. The most famous are neon red, used in advertising signs, and sodium yellow, used in street lights. Every atom has literally an infinite number of spectral lines. It’s no trick to stuff all the known lines into a database and run one of the AI algorithms over them. If you now heat a gas of an unknown atom and measure the spectrum, without a doubt the algorithm would correctly identify it. It would probably do a good job with a mixture of atoms in the gas too.
Real understanding of the spectra requires quantum mechanics and the Schrodinger and Dirac equations. No AI algorithm is going to come up with those because the knowledge is encapsulated in a way that has no obvious connection to the phenomena it explains. The AI algorithm can be explained to a smart layman, but the equations might as well be in Martian, which they are to the vast majority of people.
Hello Bob, thank you for the terrific article. I only wish you had addressed John Searle’s Chinese room argument, because you seemed to be touching that boundary. https://en.wikipedia.org/wiki/Chinese_room
I guess being wrong about Art Laffer is acceptable, since so many Republicans misquote it, but you put the chart right there, which turns it into an example of the extreme value theorem.
Just for the record : I do think eHarmony can help you find a better mate. I met my wife through eHarmony and dated three others for 6 months each. From my experience, I honestly think they have some ability. It’s not pie in the sky to think that some smallish number of data points (variably applicable due to subjectivity in answering the questions) can indicate personality trends or traits in broad strokes. What is unknown is how you match one with another. Maybe it’s just random probability and I got lucky several times (no pun intended) but.. it worked for me I’m happy to say.
Actually, the real scary part is #4,
Greetings. The previous 3 posts were written by Bob’s AI BlogBot.
I got the feeling that Big Data might eventually lead to some gigantic disaster. Using AI when no one understands how the results are developed certainly seems to be leading to a possible disaster. There is the possibility that the head AI computer would discover that humans cause most of the problems in the world, so it might order the robot police officers to kill all the humans in the world. Success, no more problems from those pesky humans.
.
As far as science solving problems, there are many problems that science can’t solve. Scientists can devise likely theories about many things. But that’s not science. Science is based on conducting tests or experiments to validate those theories. Theories have not been validated simply by becoming popular with scientists. These theories are no more valid than tossing coins to “prove” something. Theories, such as climate change being caused by certain actions of people, cannot be tested. They are based on computer models that include any number of unproven assumptions. Conducting realistic testing would require thousands or millions of years under any number of different conditions. Of course, you might argue that merely the possibility of humans causing climate change would justify changing our actions. But that is not based on science. It is merely a possible, but untested, solution.
There are many results in life that occur over millions of years, require complex interactions of any number of different agencies, or take effect at levels far below subatomic levels. The theories concerning those results often cannot be tested or should not be tested.
Some scientists worried, for a plausible reason, that exploding the first atomic bomb might set fire to the atmosphere. But the only way to know for sure was to carry out a dangerous test. Thank heavens, the air did not burn up.
“I got the feeling that Big Data might eventually lead to some gigantic disaster.” It is certainly plausible. Some subset of the data out there is wrong, and since we don't understand the algorithms by which the data is processed into decisions, it's possible some of them will concentrate on the errors inside a feedback cycle. The results of ever-more-concentrated errors could certainly be disastrous.
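To make that feedback-cycle worry concrete, here is a minimal toy simulation, a sketch only: the true value, the 1% starting error, the 80% feedback share, and the amplification factor are all made-up numbers chosen for illustration, not anyone's actual system.

# A toy illustration of how a small data error can grow when a model's
# outputs are fed back in as new "data." All constants are assumptions.

TRUE_VALUE = 100.0
FEEDBACK_FRACTION = 0.8   # share of each new dataset that echoes the model
INITIAL_ERROR = 0.01      # 1% error baked into the first dataset
AMPLIFICATION = 0.5       # how hard the model leans into what the data says

estimate = TRUE_VALUE * (1 + INITIAL_ERROR)

for generation in range(1, 11):
    # New data is a blend of fresh, correct measurements and the model's
    # own prior (slightly wrong) output; the model then overshoots a little
    # in the direction the blended data already points.
    blended = (1 - FEEDBACK_FRACTION) * TRUE_VALUE + FEEDBACK_FRACTION * estimate
    estimate = blended + (blended - TRUE_VALUE) * AMPLIFICATION
    error_pct = 100 * (estimate - TRUE_VALUE) / TRUE_VALUE
    print(f"generation {generation:2d}: estimate = {estimate:7.2f} ({error_pct:+.2f}% error)")

Run it and the error climbs every generation (about 20% per pass with these numbers); with more feedback or more amplification it climbs faster. That is the disaster scenario in miniature: nobody lied, the loop just kept concentrating an error nobody was checking for.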
This overlaps with a video I saw a year or two ago, called "Humans Need Not Apply" by CGP Grey.
https://www.youtube.com/watch?v=7Pq-S557XQU
His version of Moore's Law is that in 10 years you'll get 10x the computing power for 1/10 of the price you pay today. Stuff that's bleeding-edge today will be so cheap that it will be ubiquitous … and when it comes to earning an income, there really won't be much for humans to do.
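A quick back-of-the-envelope check of what that figure implies (the 10x-power and one-tenth-price numbers are the commenter's paraphrase, not measurements):

import math

# "10x the computing power for 1/10 the price" over 10 years, taken at face value.
YEARS = 10
power_gain = 10.0       # 10x the computing power
price_ratio = 0.1       # at one tenth of today's price

price_performance_gain = power_gain / price_ratio       # 100x overall
annual_factor = price_performance_gain ** (1 / YEARS)   # per-year multiplier
doubling_years = math.log(2) / math.log(annual_factor)  # implied doubling time

print(f"price/performance gain over {YEARS} years: {price_performance_gain:.0f}x")
print(f"implied annual improvement: {annual_factor:.2f}x per year")
print(f"implied doubling time: {doubling_years:.1f} years")

That works out to roughly 1.58x per year, or a doubling about every 18 months, which is why the video's formulation sounds so much like the classic Moore's Law pitch.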
As he mentioned, the horse population peaked early in the 20th century, and now … well, horses just aren't very *necessary*, so the world has far fewer horses.
How can we be convinced humans will see a different fate?
“We’ve simultaneously hit a wall in our ability to generate appropriate theories while finding in Big Data a hack to keep improving results anyway. The only problem is we no longer understand why things work. ”
Hasn't this been human history for most of our existence? For thousands of years implementation outpaced theory and understanding. Then the Enlightenment happened and we started to explain what we had already implemented… then we invented lasers, arguably the first device theorized before it was implemented. Since then we've grown accustomed to theory before implementation, but it's going to shift back; we can't keep up with implementation.
But can’t we use big data to explain the theory to ourselves…?
Great article. Mr. Cringely has stepped up his game to as good as he has ever been (the leader). Mainly, I'm really glad to have articles at least weekly. Also, wow, I get the feeling of learning what is really going on.
The views on artificial intelligence are very consistent with what is really going on in the heads of those working on it. They are wrong, of course. "Consciousness is appreciated by information occurring at multiple hierarchical levels simultaneously." Frege, Russell, and Gödel all noted hierarchical paradoxes and said, "Hmm, isn't it interesting that we can discuss them… brains just must work funny." The crowd-sourced methodology is a far second to "cyborg" systems such as https://id.lichess.org/team/cyborg, and to pretty much all of science and technology for decades, and to somewhat every other field (except Google). Google is looking at a big collapse; if anyone wants, I can engineer it pretty easily.
A splendid series, Mr. Cringely; your best writing since ‘Accidental Empires’.
Thanks for posting.
Great article! But the prediction for the future is off a bit, because Moore's Law is basically dead, at least as everyone has come to accept it. Moore's Law, simply stated, was that the number of transistors that could be placed on a die would increase exponentially with time (the scaling factor has wiggled around a bit, but that's not important). What actually gave you more compute power for that density increase was Dennard scaling, and that ended a decade ago. It's been material and structure changes that have given us the compute power boost since then: high-k metal gates, strained silicon, and now FinFETs. But there are no more physics tricks "ready to go" in the pipeline.

And Moore's Law itself is breaking down as the cost to shrink the dies goes up exponentially instead (due to the utter failure of lithography and having to move to multiple patterned layers). Look at it this way: if your costs go up 2x for a double-patterned layer but the shrink associated with it only takes the size down 30-40%, then the functionality of the chip is more expensive. You see the evidence in the slowdown of process release cycles (Intel just moved to three years from two-ish), in the artificial naming of processes that have no bearing on size (20nm to 16nm in foundry land was just a transistor swap for FinFETs; no real dimensional change was possible), and in the fact that the latest crop of gadgets gets less of a jump in features or performance than previous crops did. There's less and less reason to upgrade all the time. That's the result of the breakdown of all the things we've lumped into Moore's Law.

I've been designing semiconductors for 25 years and we've always predicted the end was near, but this time we apparently really meant it, about four years ago. And it's not a brick wall, it's a Jell-O wall, and we've already pushed into it for real now, for a couple of process generations. You don't yet see it as consumers because of all the clever innovations Bob has mentioned, as well as the incorporation of more dedicated hardware (streaming instructions, GPU acceleration, and now FPGA acceleration) alongside what transistors we can make. But that's going to run out long before the singularity comes. I'm sorry, but Kurzweil is wrong: the singularity isn't coming and he's not going to live forever. And I'm not happy about it, because I was hoping he would so I would, too. =) The semiconductor industry has given it its best shot (exponential advancement longer than anything else in history), but we can't do it anymore. If we're lucky we will stay in business with maybe some linear growth, but there's no real idea how to do that, either. Don't get me started on EUV, either. Anyway, if we do manage to keep increasing our processing goodness exponentially, it's not going to be semiconductor based, and nothing else seems to be ready. At best there's going to be a gap, but the exponential gain is coming off the table for now.
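Here is a rough sketch of that cost argument with made-up numbers: assume the double-patterned step doubles the wafer cost, and read the 30-40% shrink as packing roughly 30 to 40 percent more transistors onto the wafer (one reading of the commenter's figure). In reality only some layers get more expensive, so this overstates the hit, but the direction is the point.

def cost_per_transistor(wafer_cost, transistors_per_wafer):
    return wafer_cost / transistors_per_wafer

old_cost = cost_per_transistor(wafer_cost=1.0, transistors_per_wafer=1.0)  # normalized baseline

for density_gain in (1.3, 1.4):   # "30-40%" more transistors per wafer after the shrink
    new_cost = cost_per_transistor(wafer_cost=2.0,                  # double-patterned layer: 2x cost
                                   transistors_per_wafer=density_gain)
    change = 100 * (new_cost / old_cost - 1)
    print(f"density +{(density_gain - 1) * 100:.0f}%: cost per transistor changes by {change:+.0f}%")

With those assumptions the cost per transistor rises by roughly 40 to 55 percent, which is exactly the commenter's point: a shrink that makes each transistor more expensive isn't Moore's Law anymore.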
Let’s face it, as long as the prices of cell phones, computers, and gadgets in general go down in real terms, dollars corrected for inflation, Bob is going to call it Moore’s Law.
Which is absolutely fair. And for the last 10 years that has been true enough, even as the actual scaling wasn't providing all the benefits we know and love. That's fine; we've all just gone along with calling it Moore's Law. But what I'm saying is that the ENTIRE gravy train is basically over. Things aren't going to get cheaper and more powerful at the hectic rate they've been getting cheaper and more powerful. This article explains it pretty well: http://spectrum.ieee.org/tech-talk/computing/hardware/transistors-will-stop-shrinking-in-2021-moores-law-roadmap-predicts
The "Moore's Law" roadmap is done, finished, kaput. The very last EVER ITRS roadmap was just released with a flatline for transistors in 2021. And then they threw in the towel. There will be no more "Moore's Law" roadmaps from the ITRS.
And sure, every manufacturer puts on a big happy face and says we don't need the ITRS anymore and the shrink will come from 3D or new technology or whatever, and it probably will at some point, but not at an exponential rate; it's just too expensive, and the potential tricks are still in the lab with no way to move them to commercial volumes at an acceptable cost. We've really hit a cost wall. At this point improvements are still possible, sure. But they cost MORE than going without the improvement. Once the system efficiencies have been wrung out (and we all go back to programming in assembly like in the early days 😉 ), then it's done.

It's like stopping a big ship. The propulsion is shut off, but it's going to coast for quite a while longer. You know it's never speeding up again, though; it's going to slow down until it stops and just drifts. How long will that take? It's hard to predict, of course, but I'd guess about 10 years. And it will slow down continuously during that time, so it's not 10 more years of doubling. Lots of people will argue that I'm wrong and that more technologies will come to fill the gap, but I'm not saying they won't do that. I'm saying they won't fill the gap exponentially! That's a tough order to fill; nothing has been able to provide exponential advancement the way lithography has, and lithography has been well and truly done since 20nm. So, linear growth at best, and that's really slow.
Think of it this way. If you put $100 in a savings account and it grows exponentially at 10% a year, after 50 years it's nearly $12k. But if it grows linearly at $10 a year (the same starting rate), after 50 years it's $600. Linear growth is REALLY slow compared to exponential. Continued improvement in semiconductors isn't enough in my mind to say Moore's Law hasn't ended; exponential growth is what Moore's Law was all about. If you can't get exponential improvement, then it's done as we knew it.
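A quick check of that arithmetic, using the same numbers as in the comment:

# $100 compounding at 10% a year versus $100 growing by a flat $10 a year.
principal = 100.0
rate = 0.10
years = 50

exponential = principal * (1 + rate) ** years    # compound (exponential) growth
linear = principal + (principal * rate) * years  # same starting rate, flat growth

print(f"exponential after {years} years: ${exponential:,.0f}")  # about $11,739
print(f"linear after {years} years:      ${linear:,.0f}")       # $600

The compound balance comes out near $11,700, so "nearly $12k" versus $600 checks out, and that gap is the whole argument about what is lost when exponential scaling ends.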
There was a notable episode in an otherwise average sci-fi series (SeaQuest) that had a future that somewhat matches this article. It showed a world where the central computer system became so powerful and efficient at getting people what they needed and wanted that people stopped ever leaving their homes. That lack of physical contact eventually led to a decline in the human race itself, down to a really small number. While the rest of the episode's plot was a little too metaphysical (two humans left, kill the supercomputer to reset, a second "Adam and Eve" type ending), the concept of isolation by extreme technology leading to human decline sounds like a real possibility.
I’m surprised no one has mentioned Nick Bostrom’s book, “Superintelligence: Paths, Dangers, Strategies” as useful supplementary reading regarding the issues that Bob raises.
The idea of the machines taking over has been around since long before his 2014 book. Has this Oxford philosopher said something new since the movie "2001: A Space Odyssey" was released almost a half century ago?
This was an interesting read. I knew much of it but there were a few new things and some new interpretations. The problem with the media, the tech media, is always the focus on who made lots of money. Big data is really about statistics and that predates the internet etc. Indeed, you could buy speech recognition software for your PC in the 1990s. Many people did and that is big data.
Future big data applications are not based on Amazon's early e-commerce. They are based on early speech recognition. Even Amazon agrees, which explains why they put so much into Alexa, etc. Speech recognition is based on statistics, some of it going back more than a hundred years. It relies heavily on digital signal processing, which builds on Shannon's sampling theorem and his information theory.
If AI learns to recognize cats, it does it by figuring out the signature for cats (many signatures, really), made up of numbers in the same way DSP turns audio into digital numbers. This has little to do with anything else you mentioned, other than that these companies are putting a lot of money and computing power into research on this issue. Actually, the Chinese or someone else might make a bigger advance.
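To make the "signature" idea concrete, here is a deliberately tiny sketch: boil an input down to a handful of numbers and label it by whichever stored signature it sits closest to. The 4x4 "images," the quadrant-average signature, and the reference values are all invented for illustration; real classifiers learn far richer signatures, and this reflects no particular company's system.

def signature(image):
    # Reduce a 4x4 grayscale "image" (16 values, 0-255, row-major) to a
    # 4-number signature: the average brightness of each quadrant.
    quadrants = [(0, 1, 4, 5), (2, 3, 6, 7), (8, 9, 12, 13), (10, 11, 14, 15)]
    return [sum(image[i] for i in q) / 4 for q in quadrants]

def distance(sig_a, sig_b):
    return sum((a - b) ** 2 for a, b in zip(sig_a, sig_b)) ** 0.5

# Made-up reference signatures, as if learned from labeled examples.
references = {
    "cat":     signature([200, 190, 60, 50, 180, 185, 55, 45,
                          90, 95, 30, 25, 85, 80, 20, 15]),
    "not cat": signature([30, 35, 160, 170, 25, 40, 150, 165,
                          140, 150, 200, 210, 145, 155, 205, 215]),
}

def classify(image):
    sig = signature(image)
    return min(references, key=lambda label: distance(sig, references[label]))

# A new image whose brightness pattern roughly matches the "cat" reference.
unknown = [195, 185, 58, 52, 175, 180, 50, 48,
           92, 90, 28, 22, 88, 78, 18, 12]
print(classify(unknown))   # -> "cat": its signature is closest to the cat reference

The point of the toy is only that the "intelligence" lives in how the signature is computed and compared, which is also why it is hard to read anything meaningful out of the stored numbers themselves.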
The data is indeed big and impossible for a human to read, but hardly anyone has ever been able to read machine code, either. The math behind it must still make sense. I'm not aware of computers coming up with their own math, which seems to be what is implied in articles of this type. The problem is that journalists typically don't have much math skill, or they assume readers don't. But that's all it is.