This is the first of a couple of columns about a growing trend in Artificial Intelligence (AI) and how it is likely to be integrated into our culture. Computerworld ran an interesting overview article on the subject yesterday that got me thinking not only about where this technology is going but also about how it is likely to affect us, not just as a people but as individuals. How is AI likely to affect me? The answer is scary.
Today we consider the general case and tomorrow the very specific.
The failure of Artificial Intelligence. Back in the 1980s there was a popular field called Artificial Intelligence, the major idea of which was to figure out how experts do what they do, reduce those tasks to a set of rules, then program computers with those rules, effectively replacing the experts. The goal was to teach computers to diagnose disease, translate languages, even figure out what we wanted but didn’t know ourselves.
It didn’t work.
Artificial Intelligence, or AI as it was called, absorbed hundreds of millions of Silicon Valley VC dollars before being declared a failure. Though it wasn’t clear at the time, the problem with AI was we just didn’t have enough computer processing power at the right price to accomplish those ambitious goals. But thanks to MapReduce and the cloud we have more than enough computing power to do AI today.
The human speed bump. It’s ironic that a key idea behind AI was to give language to computers, yet much of Google’s success has come from effectively taking language away from computers — human language, that is. The XML and SQL data standards that underlie almost all web content are not used at Google, where they realized that making human-readable data structures made no sense when it was computers — and not humans — that would be doing the communicating. It’s through the elimination of human readability, then, that much progress has been made in machine learning.
You see in today’s version of Artificial Intelligence we don’t need to teach our computers to perform human tasks: they teach themselves.
Google Translate, for example, can be used online for free by anyone to translate text back and forth between more than 70 languages. This statistical translator uses billions of word sequences mapped between two or more languages: “this” in English means “that” in French. There are no parts of speech, no subjects or verbs, no grammar at all. The system just figures it out. And that means there’s no need for theory. It works, but we can’t say exactly why because the whole process is data driven. Over time Google Translate will get better and better, translating based on what are called correlative algorithms — rules that never leave the machine and are too complex for humans to even understand.
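To make that concrete, here is a toy sketch in Python of the counting idea (my illustration only, nothing like Google’s production system): translation by co-occurrence statistics, with no grammar anywhere.

```python
# Toy "statistical translation" in the spirit described above: no grammar, no
# parts of speech, just co-occurrence counts over aligned sentence pairs.
# Purely illustrative; real systems use far richer alignment models.
from collections import defaultdict

aligned_corpus = [
    ("the cat sleeps", "le chat dort"),
    ("the dog sleeps", "le chien dort"),
    ("the cat eats",   "le chat mange"),
]

pair_counts = defaultdict(lambda: defaultdict(int))  # english word -> french word -> count
french_totals = defaultdict(int)                     # french word -> total count

for english, french in aligned_corpus:
    for f in french.split():
        french_totals[f] += 1
    for e in english.split():
        for f in french.split():
            pair_counts[e][f] += 1

def translate(sentence):
    """For each English word, pick the French word it is most strongly tied to."""
    output = []
    for e in sentence.split():
        candidates = pair_counts.get(e)
        if candidates:
            best = max(candidates, key=lambda f: candidates[f] / french_totals[f])
            output.append(best)
    return " ".join(output)

print(translate("the dog eats"))  # prints: le chien mange
```

Scale the same trick up to billions of aligned sentences, with phrases instead of single words, and you have the flavor of a statistical translator.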
Google Brain. At Google they have something called Google Vision that currently has 16,000 microprocessors, equivalent to about a tenth of our brain’s visual cortex. It specializes in computer vision and was trained exactly the same way as Google Translate, through massive numbers of examples — in this case still images (BILLIONS of still images) taken from YouTube videos. Google Vision looked at images for 72 straight hours and essentially taught itself to see twice as well as any other computer on Earth. Give it an image and it will find another one like it. Tell it that the image is a cat and it will be able to recognize cats. Remember this took three days. How long does it take a newborn baby to recognize cats?
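For the “give it an image and it will find another one like it” part, a minimal sketch looks like nearest-neighbor search over feature vectors. The features below are random stand-ins for whatever the real system actually learned; the point is only the retrieval step.

```python
# Minimal sketch of image retrieval by similarity: represent each image as a
# feature vector and return the nearest stored neighbor. Random vectors stand
# in here for features that a real system would learn from billions of frames.
import numpy as np

rng = np.random.default_rng(0)
library = rng.normal(size=(10_000, 256))   # 10,000 "images", 256 features each
labels = ["cat" if i % 2 == 0 else "not cat" for i in range(10_000)]

def most_similar(query):
    """Cosine similarity against every stored vector; return the best match."""
    lib = library / np.linalg.norm(library, axis=1, keepdims=True)
    q = query / np.linalg.norm(query)
    best = int(np.argmax(lib @ q))
    return best, labels[best]

query_image = rng.normal(size=256)
index, label = most_similar(query_image)
print(f"closest stored image: #{index}, labeled '{label}'")
```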
This is exactly how IBM’s Watson computer came to win at Jeopardy, just by crunching old episode questions: there was no underlying theory.
Let’s take this another step or two. There have been data-driven studies of MRIs taken of the active brains of convicted felons. This is not in any way different from the Google Vision example except we’re solving for something different — recidivism, the likelihood that a criminal will break the law again and return to prison after release. Again without any underlying theory Google Vision seems to be able to differentiate between the brain MRIs of felons likely to repeat and those unlikely to repeat. Sounds a bit like that Tom Cruise movie Minority Report, eh? This has a huge imputed cost savings to society, but it still has the scary aspect of no underlying theory: it works because it works.
Scientists then looked at brain MRIs of people while they are viewing those billions of YouTube frames. Crunch a big enough data set of images and their resultant MRIs and the computer can eventually predict from the MRI what the subject is looking at. That’s reading minds and again we don’t know how.
Advance science by eliminating the scientists. What do scientists do? They theorize. Big Data in certain cases makes theory either unnecessary or simply impossible. The 2013 Nobel Prize in Chemistry, for example, was awarded to a trio of biologists who did all their research on deriving algorithms to explain the chemistry of enzymes using computers. No enzymes were killed in the winning of this prize.
Algorithms are currently improving at twice the rate of Moore’s Law.
What’s changing is the emergence of a new Information Technology workflow that goes from the traditional:
1) new hardware enables new software
2) new software is written to do new jobs enabled by the new hardware
3) Moore’s Law brings hardware costs down over time and new software is consumerized.
4) rinse repeat
To the next generation:
1) Massive parallelism allows new algorithms to be organically derived
2) new algorithms are deployed on consumer hardware
3) Moore’s Law is effectively accelerated though at some peril (we don’t understand our algorithms)
4) rinse repeat
What’s key here are the new derive-deploy steps and moving beyond what has always been required for a significant technology leap — a new computing platform. What’s after mobile, people ask? This is after mobile. What will it look like? Nobody knows and it may not matter.
In 10 years Moore’s Law will increase processor power by 128X. By throwing more processor cores at problems and leveraging the rapid pace of algorithm development we ought to increase that by another 128X for a total of 16,384X. Remember Google Vision is currently the equivalent of 0.1 visual cortex. Now multiply that by 16,384X to get 1,638 visual cortex equivalents. That’s where this is heading.
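Taking those numbers at face value, the arithmetic is simple compounding (a back-of-the-envelope check, nothing more):

```python
# The projection above, taken at face value: 128x from Moore's Law over ten
# years, another 128x from parallelism and better algorithms, applied to
# today's 0.1 visual-cortex equivalent.
moore_gain = 128
parallel_and_algorithm_gain = 128
today = 0.1  # visual-cortex equivalents today, per the article

total_gain = moore_gain * parallel_and_algorithm_gain
print(total_gain)          # 16384
print(today * total_gain)  # 1638.4 visual-cortex equivalents
```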
A decade from now computer vision will be seeing things we can’t even understand, much as dogs today can sniff out cancer.
We’ve both hit a wall in our ability to generate appropriate theories and found in Big Data a hack to keep improving. The only problem is we no longer understand why things work. How long from there to when we completely lose control?
That’s coming around 2029, according to Ray Kurzweil, when we’ll reach the technological singularity.
That’s the year the noted futurist says $1000 will be able to buy enough computing power to match 10,000 human brains. For the price of a PC, says Ray, we’ll be able to harness more computational power than we can even understand or describe. A supercomputer in every garage.
Matched with equally fast networks this could mean your computer — or whatever the device is called — could search every word ever written, in real time and in its entirety, to answer literally any question. No stone left unturned.
Nowhere to hide. Apply this in a world where every electric device is a networked sensor feeding the network and we’ll have not only incredibly effective fire alarms, we’re also likely to have lost all personal privacy.
Those who predict the future tend to overestimate change in the short term and underestimate change in the long term. Desk Set, from 1957, with Katharine Hepburn and Spencer Tracy, envisioned mainframe-based automation eliminating human staffers in a TV network research department. That has happened to some extent, though it took another 50 years and people are still involved. But the greater technological threat wasn’t to the research department but to the TV network itself. Will there even be television networks in 2029? Will there even be television?
Nobody knows.
There will still be porn. Guaranteed.
Then again, it might require a floating point definition of porn.
If you can deliver an anatomically correct, intelligent doll that does the stuff porn stars do on film, is it still porn?
I for one welcome our new big data AI overlords.
.
Wonder if it will help me stream Netflix more efficiently?
It’s basically heading towards the plot of Terminator 2. Someone in the future is going to have to blow up Skynet — er Google — before the “rise of the machines” takes place. I’m thinking maybe someone by the name of John Connor.
First Post! Oh wait… this isn’t Slashdot.
Isaac Asimov had an interesting observation about human versus machine intelligence. Asking which is better was not the right question, since by their very nature they are very different. The analogy he used was comparing legs versus wheels. They both provide transportation, but in very different ways. Which is better depends upon the goal. Wheels work great and provide fast transportation on flat, hard surfaces. But legs are superior where such surfaces are not available.
This is all a joke. Hardly anything has changed since the IBM PC came out in, what, 1980? My PC still only has a word processor and a spreadsheet. It uses that massively increased power to simply make the UI prettier. The only useful new app in 30 years is GPS mapping.
—
Cell phones are nothing more than the marriage of a 1940s walkie talkie and a computer. And all most people do is send text messages and play games.
—
Text messaging, Twitter and all the copycats… IRC chat with a pretty UI.
—
Social networking… just a way to steal and sell your personal data. YOU. ARE. THE. PRODUCT.
—
15 years in the future — more old crap will be recycled and called new and exciting by the ignorant masses.
The basic technology was available in 1940, but what’s changed is the size. I started at EDS in a GM plant in Dayton, Ohio, working on a shipping and materials management system for four other area plants, programmed in COBOL on four “mini” machines (sub-mainframe) manufactured by Prime. They had their own room, with a raised floor and air conditioning, and less power than my cell phone has now.
It’s the drop in cost and size that has made most of this possible. As you imply, if cost and size were no object, we could have been doing this stuff back in the 1950s. In the early 1970s I remember watching Hogan’s Heroes making use of slick small cameras. We’ve far exceeded that stuff today.
That’s an okay (at best) definition of the technology [point missed] but look at how life has changed. How social chaos, order, government, equal rights, convenience, time saved, amber alerts, tornado warnings, accidents averted, donations submitted, paper saved, companies kickstarted, and information self determination continues…
Asimov was a genius
If I understand Bob’s explanation, machines find correlation and don’t worry about the cause, whereas we want to understand cause and effect. That makes Asimov’s analogy a near perfect description.
Wow.
.
We are talking about 15 years from now. I was kinda hoping that I would not live long enough to see the Singularity, but it is quite possible that I will last a decade and a half.
.
I got to thinking about a TED talk about “connectomes” and wondering if it might actually be feasible to upload my connectome, thereby preserving “me” for as long as a machine can be made to run. But I still can’t really come up with a good reason for doing so. What motivates a machine, anyway?
Personally? This doesn’t concern me in any way.
THIS concerns me: http://richardheinberg.com/bookshelf/the-end-of-growth-book
How are you going to build them big AI machines?
With NO OIL? Not with renewables: http://thearchdruidreport.blogspot.com/2014/01/seven-sustainable-technologies.html
There’s also the claim that we can keep industrial civilization going on renewable energy sources, the claim that a finite planet can somehow contain an infinite supply of cheap fossil fuel—well, those of my readers who know their way around today’s nonconversation about energy and the future will be all too familiar with the thirty-one flavors of denial.
It’s ironic, though predictable, that these claims have been repeated ever more loudly as the evidence for a less comfortable view of things has mounted up. Most recently, for example, a thorough study of the Spanish solar energy program by Pedro Prieto and Charles A.S. Hall has worked out the net energy of large-scale solar photovoltaic systems on the basis of real-world data. It’s not pleasant reading if you happen to believe that today’s lifestyles can be supported on sunlight; they calculate that the energy return on energy invested (EROEI) of Spain’s solar energy sector works out to 2.48—about a third of the figure suggested by less comprehensive estimates.
The Prieto/Hall study has already come in for criticism, some of it reasonable, some of it less so. A crucial point, though, has been left out of most of the resulting discussions. According to best current estimates, the EROEI needed to sustain an industrial civilization of any kind is somewhere between 10 and 12; according to most other calculations—leaving out the optimistic estimates being circulated by solar promoters as sales pitches—the EROEI of large scale solar photovoltaic systems comes in between 8 and 9. Even if Prieto and Hall are dead wrong, in other words, the energy return from solar PV isn’t high enough to support the kind of industrial system needed to manufacture and maintain solar PV. If they’re right, or if the actual figure falls between their estimate and those of the optimists, the point’s even harder to dodge.
Similar challenges face every other attempt to turn renewable energy into a replacement for fossil fuels. I’m thinking especially of the study published a few years back that showed, on solid thermodynamic grounds, that the total energy that can be taken from the planet’s winds is a small fraction of what windpower advocates think they can get. The logic here is irrefutable: there’s a finite amount of energy in wind, and what you extract in one place won’t turn the blades of another wind turbine somewhere else. Thus there’s a hard upper limit to how much energy windpower can put into the grid—and it’s not enough to provide more than a small fraction of the power needed by an industrial civilization; furthermore, estimates of the EROEI of windpower cluster around 9, which again is too little to support a society that can build and maintain wind turbines.
Point such details out to people in the contemporary green movement, and you can count on fielding an angry insistence that there’s got to be some way to run industrial civilization on renewables, since we can’t just keep on burning fossil fuels. I’m not at all sure how many of the people who make this sort of statement realize just how odd it is. It’s as though they think some good fairy promised them that there would always be enough energy to support their current lifestyles, and the only challenge is figuring out where she hid it. Not so; the question at issue is not how we’re going to keep industrial civilization fueled, but whether we can do it at all, and the answer emerging from the data is not one that they want to hear: nothing—no resource or combination of resources available to humanity at this turning of history’s wheel—can support industrial civilization once we finish using up the half a billion years of fossil sunlight that made industrial civilization briefly possible in the first place.
How about Toshiba’s 4S reactors – the ones Bob wrote about 3 years ago?
Ah yes, clean, safe nuclear energy too cheap to meter has been just around the corner since about the ’50s. The vaporware better come online soon or the Singularity will have to be put on hold. All the computing power and the industrial/mining and transport needed to support the singularity will use exponentially more power than we are using now.
We have already developed a practical alternative to carbon based fuels – nuclear fission. Work continues on a second alternative – nuclear fusion. Neither of these is in any danger of running out of fuel, but both present different challenges.
While these technologies are not popular because of safety and waste concerns, you can bet that if there is no other game in town in future, they WILL be used extensively.
Further, to dismiss non-carbon-based renewable energy generation as inadequate to maintain an industrial lifestyle, based on the early technology iterations we currently have, is to discount any possibility of scientific or engineering progress. Based on human experience, this is an unreasonable assumption.
We may try every other option first, but it is likely we will get it right eventually.
“if you happen to believe that today’s lifestyles can be supported on sunlight” Nobody that reads this column does. In any case, what does this have to do with AI?
Yes, we are entering a new age in which the robots will assemble all the goods. Once it took most of the people to produce food; now it takes just a few. Once it took most people to work in manufacturing; now it will take just a few. What will the majority of workers do, in the traditional sense of work, when the machines can find what you ask for, manufacture it, and deliver it without any human assistance? Will it be a Star Trek future or more Blade Runner? Does anyone know what disruption the machines will deliver to society? Because it’s coming, ready or not.
Humans (workers) will innovate, dream and create new things, new processes, new art. That is our lot. We get “distracted” at times by survival but I believe that we will survive through “creation.”
What happens when the singularity takes away your painting privileges? Who are you to question the great AI? It can see from your MRI that you shouldn’t be doing it. Hopefully it can stop you from chopping off your fingers in the prison (sorry, AI-created perfect community workshop) before you do it.
While it’s true that it’s not theory driven, it’s relatively trivial to extract from the model built by the computer exactly what features in the data are responsible for the capability of the model. And that can drive / inform theory. It’s unlikely to be true that no one will have any idea how this stuff works.
Of course, that depends on your definition of “how this stuff works.”
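For example (a toy sketch, not any particular production system): fit a simple model on made-up data and rank which features carry the weight. Deep networks are far less transparent, but the principle is the same.

```python
# Toy sketch of pulling out which features a trained model relies on.
# Synthetic data, made-up feature indices; only features 0 and 3 matter here.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = (2.0 * X[:, 0] - 3.0 * X[:, 3] + rng.normal(scale=0.5, size=500)) > 0

model = LogisticRegression(max_iter=1000).fit(X, y)

# Rank features by the magnitude of their learned weights.
ranked = sorted(enumerate(model.coef_[0]), key=lambda p: abs(p[1]), reverse=True)
for index, weight in ranked:
    print(f"feature {index}: weight {weight:+.2f}")
```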
|<< Of course, that depends on your definition of “how this stuff works.”
Aye, there's the rub.
It’s debatable I guess. I’ve written neural network code where I can have it output stuff for each node, but even going line by line it’s hard to tell what’s going on. On larger datasets with lots of nodes, you could extract something, but I doubt I’d understand it. (I’m a student, so I would defer to someone more knowledgeable.)
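For what it’s worth, “output stuff for each node” looks something like this minimal sketch (random weights standing in for whatever training would have produced), and the per-node numbers really don’t explain much on their own:

```python
# Minimal sketch: every hidden unit's weights and activation are available,
# yet the raw numbers rarely tell a human anything. Weights are random
# stand-ins for a trained network.
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(size=(4, 3))   # input (3 features) -> hidden layer of 4 nodes
W2 = rng.normal(size=(1, 4))   # hidden layer -> single output

x = np.array([0.2, -1.0, 0.7])             # one input example
hidden = np.tanh(W1 @ x)                   # hidden-node activations
output = 1 / (1 + np.exp(-(W2 @ hidden)))  # sigmoid output

for i, (weights, activation) in enumerate(zip(W1, hidden)):
    print(f"hidden node {i}: weights={np.round(weights, 2)}, activation={activation:.3f}")
print(f"network output: {output[0]:.3f}")
```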
This doesn’t really read like Bob, does it? Maybe an AI version of Bob. I’ll have to run my ML comparator on all his previous scribblings to see.
It’s quite obvious that Bob wrote it. Two clues: “Moore” and “Kurzweil”.
Yeah, but AIBob would have that down.
I thought of that. But I also realized that “AIBob” would have noticed that those words have been way overused in the past 20 years of columns, and would have avoided them at all cost.
Excellent! Love it! 😀
So when the robots are doing all the work, including all the thinking, labor income will trend towards zero, while owners of capital will by definition own the robots and therefore the income. It will be a pretty dystopian society of extreme wealth inequalities. 2029 sounds pretty scary, unless you own a robot.
“Those who predict the future tend to overestimate change in the short term and underestimate change in the long term.” Yep the Luddites were right too soon.
This isn’t “intelligence.”
.
It can only find answers to questions that have already been answered. It can’t discover answers to novel questions.
.
It can’t philosophize. When asked a philosophical question, the response will be “The philosopher [so and so] said… While another philosopher, [so and so], believes differently, and said…”
.
It can’t imagine. You could ask it to show you what a cross between two different things would look/sound/etc. like, and it *may* be able to synthesize something that approximates what you asked for, but it’s never going to wonder what that novel, crossed thing would look/sound/etc. like in the first place.
.
It can’t extrapolate. Given one set of data, and a completely unrelated question, it can’t form a non-existent connection between the two and discover how they might actually apply to each other like a human could.
.
It can’t reach beyond its limitations. It will never wonder at what it doesn’t already know, or beyond what it was programmed to ask for and seek out. It can’t intuit that the answer to a question it can’t answer may actually exist in data that it doesn’t have.
.
This is merely highly augmented processing. It’s a proverbial idiot savant.
And how is that different from a human? And what is “intelligence”?
It couldn’t ask a novel question like yours. It could mine a data set of conversations, find a similar comment like the one you are replying to, scan the replies and synthesise one based on them. But if there were no existing posts like the one you replied to, it would have nothing. Insufficient data. Full stop. Humans build conceptual models and refine them to generate projections. Models like that can be applied to new inputs and may produce useful results. But with algorithms like this there is no model, there is no theory, it’s just indexed reference data.
Right on! As I always say “A transistor does not a neuron make”. Bring on augmentation…
Calling it an “idiot savant” is unfair to idiot savants, who are still superior. Computers don’t have insight, are not driven, cannot dream, and are not self-aware, on top of everything else you mentioned. Simply put, there’s nobody there. It’s just a machine crunching numbers. It takes a human to formulate a problem mathematically and then the computer will solve it much faster than the human could… but that’s about it.
I’m not worried about computers taking over the world in the same way I’m not worried about bulldozers doing that. They’re dumb tools and they only seem smart to people who don’t understand how they work. They aren’t meant to replace us, only to help us with menial, tedious data crunching, and they couldn’t replace us even if we wanted to.
Bob, I like your column and I have a lot of respect for you. I’ve been reading since you were at InfoWorld. But every once in a while you wander over into chemistry and you really screw it up. The guys who got the prize were hardly biologists and their work involved very much deriving and applying the mechanical aspects of chemistry, the opposite of the associative processes that Google is applying.
I think you’ve taken a wrong turning here: an automated pattern recognition machine, which is what you’re eulogising, can’t do anything more than that. So, its chances of making new connections between disparate collections of facts (something even a child can do) are very small. I strongly suggest you take a look at what Jeff Hawkins is working on these days. I think he’s far more likely to make the big AI breakthrough than further development of Watson And The Pattern Matchers ever will.
One minor niggle: “Minority Report”. Why name-check a rather anodyne Hollywood movie instead of Minority Report’s author, Philip K. Dick? For the real red meat you need to read the original 32-page short story and its predictions, which are scarily prescient.
Lun – It’s always been dangerous to assume it can’t discover, philosophize, imagine, extrapolate, reach, … Twenty years ago, the list was: it can’t beat a human at chess; it can’t beat a human at Jeopardy; it can’t provide reasonable translations between languages; … I admire your courage in providing a new list, but I suspect you will be surprised by the new, emerging capabilities.
There’s more to intelligence than accessing large amounts of data fast with an algorithm designed by someone else. There’s a class of problems that are well suited to that, and chess, Jeopardy or Google Vision fall in that class.
It’s like comparing who runs faster, a man or a car. The cars are getting faster all the time, but it’s not because they’re training or developing muscles. It’s because we humans improve our knowledge and technology.
I spent 1985-86 working on what the Itty Bitty Machine company called an ‘advanced technology project’ in Artificial Intelligence. I had a bumper sticker on my car, in fact, that said ‘Artificial Intelligence– it’s for real.’ It was the most fun job I had in the 36 years I worked there, mostly I think because a teammate and I spent much of those 18 months running around lower Westchester County, NY, doing presentations explaining to other wingtipped wingnuts what AI was. It can be exhilarating to spend all your time as the recognized expert, but the reality was certainly different. Much as Marge Simpson once explained to Lisa how she could make extra money giving piano lessons when she didn’t know how to play the piano by “just staying one lesson ahead of the kid,” we were just barely ahead of our audiences in our understanding. With that as background, I’d quibble with Bob’s observation of that era that “… the problem with AI was we just didn’t have enough computer processing power at the right price to accomplish those ambitious goals.” On the contrary, I think after about 90 days on that project we knew there was no future, simply because there was no problem so simple that it could be tackled at anything better than geologic speed; whatever we did was so painfully slow, we certainly knew no one would pay for the results. It wasn’t that there was no appropriate spot on the price-performance curve where you could make an intelligent investment, it was that that spot on the curve simply didn’t exist. Be that as it may though, we had a lot of fun with it while it lasted, that’s for sure.
When someone can clearly define ‘intelligence’, I’ll believe that computers can become intelligent.
(That’s the problem with the name ‘Artificial Intelligence’ – there’s no intelligence involved).
It’s self defeating. Businesses will adopt new technology to reduce costs (cut workers). Without jobs to earn money, people will have no buying power. Without the ability to buy things any consumer driven society will implode, taking those same businesses with them.
Is it cool? Sure it is but at some point there has to be an equilibrium within society. You cannot have endless growth just as you cannot keep cutting the trees down unless you do something to replace them.
We are already well on the way to a world of haves and have-nots and I think history (and the movies) have enough examples of what happens in that situation for us to be able to see the future.
You act as if new businesses in new markets can’t be created.
Take ringtones for example. Who had a clue 20 years ago that ringtones would be a viable business plan?
Firstly, which part of “it’s a finite environment” do you not understand?
Secondly, selling ring tones is a purely parasitic activity that places a drain on resources which could be used to do something useful.
42
Yes, that was exactly my thinking too.
–
(and the only reason Watson won on Jeopardy was because it was able to hit the button faster than the other contestants. It had nothing to do with intelligence).
2030…….Skynet becomes self-aware….
Hmmm, I like to think of myself as a tech optimist, but this is just a bit too much for me. It is easy to predict a singularity by positing ever-increasing growth, but every time I’ve encountered that in the past the curve is actually an S curve that flattens out somewhere. We already see the growth in hard-drive capacity per dollar flattening out. And battery technology is severely lagging what we’d all like to have. Can we keep Moore’s Law improvements going? I’d imagine so, but what if the rate of cost increase starts going beyond the rate of improvement? It seems like every physical process has limits. I certainly don’t know, and nobody else knows what will happen in 20 years.
And, just from experience, I’ve been hearing all of my life about the great devices that are coming in the future, but we only get some things and new paths open up that are unexpected. I remember when fusion power was only 20 years away, and now, 30 years later, it is up to 30 years away. I certainly can’t predict what will happen with that, but I don’t believe in “Electricity too cheap to meter!” either. But maybe I’ll get a system on my roof that removes the need for a meter.
I think the development you’re talking about is Rock’s Law, that fabrication plants double in price every 4 years or so. Intel is matching ARM’s performance and power consumption, but it’s only by being close to 2 years ahead of the rest of the industry in fab-building prowess. Even Intel will eventually run into funding problems.
Every physical process does have limits, if for no other reason, then because stuff is generally fermionic so you can’t fit infinite data in finite space.
.
One sad limit that I did not expect was the regulatory capture of the major carriers. Those hypothetical supercomputers are useless without some way to keep those computing units fed with data, and the lack of meaningful competition has kept speeds and prices stagnant across the US.
This is another thing encouraging people to store so much on cloud services: only then can you have rapid access to it from anywhere. If network connections were fast enough for meaningful peer-to-peer connections, then I can see storing and processing your own data on your own decently-fast computer, and leaving the NSA out of the picture.
Yeah, the big problem with this is that CMOS is set to hit 5nm gate lengths around 2019. At that point you need something else, so the curve is very likely to flatten out.
Maybe Moore’s Law carries on through the transition away from CMOS, but the leakage currents are becoming a big issue.
The other point is that computers are getting much harder to program now as we are getting more cores rather than much faster cores (or at best longer vector units).
Exponential growth cannot carry on forever. It is amazing how far it has come, but sooner or later it will run out of steam and slacken off. I will be surprised if Moore’s Law makes it another 10 years to be honest, let alone 15, but maybe it will.
Every once in a while I revisit the “we’re part of a machine…. we just don’t know it yet” meme….
Unlike Facebook, Google understands how to be subtle. I suspect this is because they have a better view of the future.
We will know this future has arrived when Google starts hiring people without interviewing them beforehand. Don’t send Google your resume, just a recent brain scan and some genetic material.
I thought you were going to say: just give them your Google username and that’s it. Being Google I think this is a more feasible scenario, at least to happen first.
Edward de Bono’s work focuses strongly on the kind of thinking that computers cannot do – for example, coming up with new ideas (which he sometimes calls ‘lateral thinking”). He has another phrase, “You can analyse the past but you have to design the future” – the ultimate analysis of data, which computers increasingly can do, is likely to only identify past approaches and past solutions.
Also, as with earlier AI, computers will have to be programmed to find a way to explain to humans how they’ve arrived at their conclusions.
“… is likely to only identify past approaches and past solutions. ”
.
Whether you call it AI or something else, evolutionary computing has demonstrated quite the contrary. It comes up with novel solutions which are clearly not realized based on past approaches or past solutions.
.
“Also, as with earlier AI, computers will have to be programmed to find a way to explain to humans how they’ve arrived at their conclusions.”
.
While a computerized medical diagnosis has this requirement, this is not required in many applications such as those found in evolutionary computing. Intermediate and end results must conform to the key requirement, namely, attaining a high score for a *fitness function*. It doesn’t matter whether the process itself is tractable or not, it’s about testing the fitness of the solutions.
.
Asking whether (current and future) AI is novel, intelligent, imaginative, etc. typically leads to discussions that are rich in content while being poor in formalism. What ‘intelligence’ really boils down to is having the ability to maximize fitness, preferably in a wide and continuously changing variety of environments. It generally doesn’t matter how something is achieved, rather, it’s about how good the results of an achievement compare to other results. The motivation or ‘drive’ involved in achieving these results simply stem from a desire or goal to maximize fitness.
.
“… coming up with new ideas…” and how people do this has been largely debated and the current view on this is that there is no (methodic) approach to come up with new ideas. I would conjecture that the process which produces new ideas is largely evolutionary. Existing ideas cross over with other ideas while undergoing mutations (in one’s mind), all while receiving new input (and thus new ideas) from our environment. Some resulting ideas ‘stick’ more than others, meaning that in testing these ideas they attain a higher fitness than others. Coming up with new ideas seems to be highly sophisticated, but at the same time so does the trajectory of an insect walking on a beach while every single step is actually based on a fairly simple decision making process (read Herbert Simon’s The Sciences of the Artificial).
.
The key to realizing (artificial) intelligence is finding the right (set of) fitness functions, which isn’t always easy. The difficulty in using statistical methods for translations is in measuring how good the translations are. You can devise an algorithm that attains maximum fitness in translating one existing corpus to another known translation, but the fitness function that measures the achievement does not necessarily provide a good measurement of the translation of an existing corpus to another for which a (‘good’) translation is not available. Even so, these methods are becoming better and better at what they’re doing. The problem at this point in time is that their input is still rather limited and their fitness measure is severely constrained.
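A bare-bones illustration of the fitness-driven loop described above (a toy problem, with a made-up fitness function that simply rewards matching a target bit string):

```python
# Minimal evolutionary loop: candidates are scored by a fitness function,
# the fittest survive, and crossover plus mutation produce the next
# generation. The target-matching fitness here is a toy stand-in for
# whatever a real problem would reward.
import random

random.seed(42)
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]

def fitness(candidate):
    return sum(c == t for c, t in zip(candidate, TARGET))

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(candidate, rate=0.05):
    return [1 - bit if random.random() < rate else bit for bit in candidate]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    parents = population[:10]  # keep the fittest third
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(len(population) - len(parents))]
    population = parents + children

print(generation, fitness(population[0]), population[0])
```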
I think you’re making a jump that’s not really there. First you say the computer can figure things out and humans don’t understand it. Then you say once there’s more processing power it will do that with everything. It’s not true that humans don’t understand what the computer did. Because the data is so big it can be impossible to go through it by hand and double check the result, but the math behind it is logical.
Typically they use probability like with speech recognition. This isn’t really even big data. The more data you have the better it works but big data is more specific than that. There are limits to how much probability can do. These are the same limits you find throughout statistics. People with an expertise in that can tell you what they are. Statistics has often been used to fool people and promote lies.
Speech recognition or translation software doesn’t work that well. It doesn’t understand what is being said so it has trouble correcting errors. That’s why it often produces nonsense. A human translator would recognize nonsense and try something else. I would be wary of a self-driving car for the same reason. If the road is filled with water you might want to stop and back up before the car stalls.
Also, speech recognition isn’t new at all. It goes back to the 80s and the math behind it goes back much further, like hundreds of years. Any sort of AI relies on math, it’s not magic. If the people using it don’t understand it then you can get something like the 2008 crash. In twenty years the situation could be different, but there’s a missing link there. So far the best result is combining computers with human intelligence so you have someone guiding the system.
This reminds me of a ted talk by Kevin Slavin. https://www.youtube.com/watch?v=TDaFwnOiKVE
When talking about AI, we’re talking about a sophisticated tool. The right kinds of tools have helped humans evolve into a powerful force of nature. However, as powerful as they have become, many human civilizations have come and gone over the ages. One reason for the demise of a civilization is the inability of the people, especially the leaders, to understand the dangers of their actions or inactions. I can think of nothing more dangerous than becoming a victim of computer intelligence that cannot be understood, and therefore cannot be controlled, by humans.
I used to apply AI to actual problems in the slow old days, and I remember this kind of optimism cropping up before with Neural Nets & Genetic Algorithms.
The idea was that if you provide enough examples, and a technique, you get whatever you happen to be hoping for.
It’s called ‘magical thinking’. Put enough stuff in – magic we don’t understand! – get desired result.
Unfortunately, it’s fairy dust. Why does it work? Because we don’t understand it, it must be doing what we want!
You ultimately find that your new technique does some very interesting things, and is applicable to a surprisingly narrow range of problems, and may even suggest some new approaches, but due to the inherent nature of problems (They’re NP-complete, or just plain impossible, people!) your [Expert System|Neural Network|Genetic Algorithm|Big Data|Magic Faerie Dust] just isn’t going to work.
The question of whether something is even theoretically possible is generally glossed over, or not addressed.
I can see the system now, though. Feed all of someone’s Internet history into the magic big data box and it’ll tell you if they’re a terrorist! Whether that is theoretically possible or not will not be considered.
Personally I’d be surprised if from someone’s entire Internet history you could reliably tell if they were for example, Welsh, or female, or even a human being, unless this form of Turing test has been passed recently.
One nitpick in your calculation of future processing power: Moore’s Law, on a per-core basis, has tailed off drastically. Intel continues to release new CPU generations every 12 to 18 months, but the performance improvements per-core are on the order of 10%, not 50%, mostly because the clock frequencies don’t scale up like they used to. So on the commercial side, in 10 years time, we are looking at something like a 2.5x to 3x improvement in per-core performance.
While miniaturization continues at a better pace, we are hampered by the increasing cost of setting up fabs at the newest process nodes. The costs now increase more than the additional number of chips that can be sold from a given wafer. So the number of cores is not likely to increase by 128x either. The manufacturers need to recover their larger investments in technology, so either CPU prices increase or the development/marketing cycles get longer.
All in all, I’d say a 128x increase in power AFTER you combine parallelism and core performance is more plausible. Moore’s law was always that overall performance doubles every x months. You can’t magically increase the rate of improvement by applying it separately to # of cores and per-core performance.
Bob, thanks. This was a very helpful and informative article.
.
I found the Watson example interesting. It was loaded with all the known Jeopardy questions, which probably included a number of baseball questions. Right? What if you gave Watson “three strikes”? (Answer: what is an out?) The question is too simple to have ever been asked. Would Watson know the answer?
.
Google vision is the result of having access to billions of images. When someone tries to sell you a “big computer” that can answer all your questions about something — where does all the data come from? Is the system preloaded with lots of information pertinent to your business? Does your firm have all the data a system like Watson would need?
.
I can understand Google being able to do some good “big data” work. I can understand firms that manage huge amounts of data being able to use “big data” analysis. Most of the “normal” firms I’ve seen have and keep limited amounts of data. How is this going to work for them?
.
There IS an underlying theory to all this, and you describe it. It is the “data driven” theory. The theory applies to humans too. We do what we do by comparing the present to what we have experienced in the past. I agree that this theory has far less evidence for it than say the theory of evolution, but perhaps it is the best theory we have today. And one piece of evidence for the data driven theory might be how well it works when implemented in Google Translate, Bing Translator, and all the other examples you cite.
It certainly is the case that, if this data-driven theory is true, we do not yet understand how the brain does this.
Douglas Hofstadter in “Gödel, Escher, Bach: An Eternal Golden Braid” discusses this in far more detail, how we humans create contexts based on what we experience and then place new experiences in those contexts. This was no doubt an evolutionary advantage at one time. So this is not my original idea, it comes from him.
When I get phone calls from caller IDs I don’t recognize, I have to ask the caller to pass a Turing test before the conversation can continue. Most fail the test. I read Joseph Tainter, Nassim Taleb and Bob Cringely together to dampen my optimism.
“When I get phone calls from caller IDs I don’t recognize” they go to voicemail. If no message is left, they go to “Calls Blacklist” (the app).
I remember that 1980s AI. The term I heard most was “expert systems” which some people claimed produced some good results in a few cases (an oil exploration application comes to mind) but you’re right, they really didn’t go anywhere.
And I can’t believe nobody has mentioned “Skynet” yet. Just throwing that out there.
[…] the same idea at the same time – like our minds are on the same wavelength. This morning Cringely is saying much the same thing – but coming at it a different way. I recommend you give him a look. He’s been in the Industry […]
It might be just me…but I think the “Internet of Things” is dead wrong. Period.
.
I can envision having a “smart grid” device that manages the power usage in my home and the peak power load my home places on the power grid. That is understandable. If I fire up my microwave during a heat wave, then my A/C should slow down briefly. (Bob, this gives me a fun idea: an RV smart power system.)
.
I do not want anything in my home to be accessible to the Internet without my knowledge and control. I expect a smart grid system to be on a private (VPN) network with only my power company. I don’t want anyone to be able to ping my refrigerator or oven. I would prefer it if the devices in my home communicated with each other over a private, non-routable network protocol.
.
Every day in the news I see several good reasons why the Internet of Things could be a very bad idea. The only “good” reasons I’ve heard is it will make people like Cisco and IBM lots of money.
It sounds to me like one of the tasks we might want to set for computers is “explain to humans what the computer is doing”.
Another quotation I’m reminded of: “the question of whether computers will ever be able to truly think is no more interesting than the question of whether submarines will ever be able to truly swim”.
Well, well, well . . . I pegged big data as the new inflection point too – the next big new, new thing.
But it’s not about CPU or architecture, it really is about the data. GIGO.
Interesting that OO tried to banish data, and make everything behavior. But data wins in the end.
Remember the company “Control Data” (CDC)? Love that name!
Unfortunately this future scares the bejesus out of me. Because we won’t control data, it will control us.
Where will singularity happen? Somewhere in the bowels of the NSA; SKYNET will be born.
And the only wild and free humans will reside in the mountains of Afghanistan. Ironic that. Totally . . .
Cringely is back !
Beyond the numerous factual errors about the lack of success of artificial intelligence, your point about Google is wrong. Google uses protocol buffers as its serialization format, and PB includes an ASCII serialization. Googlers frequently use ASCII protobufs as an exchange format because it’s easier to read than binary.
I think you are seeing the trees and missing the forest… which is exactly what you could see by looking at the inner structure of Google’s A.I. data. You’d see a bunch of information on “nodes” and connections and weightings… but you wouldn’t be able to look into it and see if it knew that apples come in red and green – you’d have to ask it.
A.I. knowledge is “learned” these days.. not programmed.
Potential AI capabilities as described in this article are conceptualized in the current CBS TV series “Person of Interest”. The show illustrates what a “machine” could see today if it were connected to available cameras across the web. Information that could be picked up from text traffic is not shown. But hey! It’s a TV show.
There is a weakness with this approach though: the uncanny valley.
Perhaps all complex problems can only be solved heuristically, that is by trial and error.
But after an initial solution is found, theory is what leads to rapid refinement of the initial solution to an optimum solution.
The big data approach may solve many problems, but solution refinement will proceed slowly, because there is not enough new data, and exponentially diminishing returns.
What is forever lacking is the theoretical model that would allow for rapid improvement beyond the initial solution.
Google Translate isn’t getting that much better over time.
What we will end up with is a lot of “just barely good enough” solutions.
The big data approach will work for “immature” or poorly understood datasets. But if the problem domain is well understood, traditional methods will easily trump it.
It’s a reasonable analogy, but I think that “we don’t understand how it works” is code for “we have no idea how and when it’s going to break.” Recently, Google flu stopped working. Post-hoc analysis gives us some insights as to why, but it points out that there’s a good chance that translate and cat-recognizer and other things will fail in similarly unpredictable ways.
I’m reminded of another AI technology – neural networks – that were touted as “works but we don’t know how or why”. They seemed pretty amazing at first, then they stopped working. It required a new level of understanding, new theories, and new algorithms that emerged in the late 80s to revive them as a viable technology. I think it’s likely we’ll see similar cycles here – failures will lead to analyses and deeper understandings, which will in turn revitalize stumbling efforts.
You had me until this part:
“Algorithms are currently improving at twice the rate of Moore’s Law.”
This strikes me as basically gibberish. When we get an algorithmic improvement, it comes from humans, and algorithmic improvements tend to improve performance *exponentially* or at least polynomially (constant scalars are largely ignored in algorithm analysis).
But also the results of big data are applications of algorithms. These systems are not designing algorithms.
The idea of increasing parallel core count at the same time as processor performance is pretty standard. But that also doesn’t give us a factor of two; it multiplies. (If we double the speed and double the thread count we’re four times as fast. But if we triple both, we’re 9 times as fast, not 6.)
[…] week, I read a very interesting article by the “real” Robert X. Cringely entitled Big Data is the new Artificial Intelligence. It’s a bit long and technical, but I always enjoy Bob’s articles. In this one, he […]
[…] by SethMandelbrot [link] […]
Really enjoyed reading this one. It opened a vista I didn’t really know was there, i.e., that Google had moved from human-readable data to machine code, effectively taking software data out of their (the creators’) hands.
Now I’m pondering this I’m wondering if we’ll discover at some point, that there’s a fundamental difference between what our conscious intelligence *actually* is and what we *think* it is …
Loaded question.
“Advance science by eliminating the scientists.” No way. Some of the greatest scientific advances have been mistakes, or just flashes of insight. Penicillin. Amplifying DNA by PCR. The benzene ring. Computers don’t make mistakes. Some people can spot opportunity, most can’t. Let me know when they can program that.
AI, like fusion power, has always been 10 years away. Fusion can save the planet; AI will be used to sell stuff we don’t need.
I don’t see the super-cognizant machine coming. Rather I see a plethora of idiot-savant devices that can do some things EXTREMELY WELL and not even attempt others.
.
The quality of the engine will of course depend on the quality of the training datasets. A Russian language search on the origins of the Ukraine crisis will not shed much light, though a next-next-gen Google-engine could find the common Russian lie threads.
.
So recognition and association does not necessarily lead to truth. And truth is the quest of science. We humans are still relevant.
AI (or whatever we want to call the “big data” manifestation of it) will never be able to be truly accurate and correct because it will always be based on the data of past occurrences, and the human condition is such that there is always an infinite potential for NEW combinations and patterns of thinking and acting that have never occurred before and could never be predicted, nor incorporated into any algorithm, because there is an infinite number of new possibilities.
It’s like an old Twilight Zone episode… using Bob’s example about MRI brain scans: no matter how confidently “Big Data” predicts, from a lifetime of “criminal-like” MRI brain patterns, that a person will reoffend, that person can always… always… simply choose to change his mind at any time and not be a criminal.
This is that funny thing called “free will” which, whether your orientation towards it is philosophical, scientific, or religious, can never be accounted for in any type of “prediction machine,” no matter how advanced the technology, intelligence, or civilization that develops that machine.
You can also say that the “predictive accuracy” of stock analysts, economists, doctors, lawyers, climatologists, to name but a few types of expert systems “will never be able to be truly accurate and correct because it will always be based on the data of past occurrences”.
Many wildly diverging predictions are made every day by many ‘experts’, and in hindsight, some experts’ predictions become true while others do not.
One of the holdbacks of development in this field is the lack of free access to big data. Suppliers of power, water, communication, emergency services, health care, government agencies and their affiliates, etc. should open the gates for everyone.
Personally, I think that all data generated by the greater public should stay public. Without that, a very small number of companies and agencies will have an obscene advantage over the rest of us.
And yes, there are ways to obfuscate the data, much better and more trustworthy than those I shall not name.
Yessiree Bob
I believe that Big Data –and the corresponding Singularity– will not happen in 2029 due to one reason: money.
Money still drives things, and the distance between the haves and have-nots will only keep accelerating. Plus, there’s no guarantee that there won’t be a) a major war, b) a major economic collapse, or c) a major famine that would disrupt the push toward Big Data.
My life won’t change significantly between now and Singularity, but if enough people are displaced from work in the new paradigm there will be consequences without corresponding changes to society. And, as we’ve seen over the past couple of decades, change can be pretty painful and there will be backlash.
The lack of theory to back the empirical or ad-hoc conclusions doesn’t bother me so much. Just consider things like the proof of the four-color theorem, and other computer-assisted proofs. These are controversial because they are so lengthy that no human can produce or verify such a proof, nor can such a proof be comprehended whole by any human (except for Rain Man, maybe).
For me personally, this makes me question what it really means to “understand” something. Does that simply mean we’re so familiar with a thing that it seems obvious to us? Or is there a deeper or more nuanced meaning? It’s another one of those philosophically slippery concepts, like “free will”.
I guess at the end of the day, I just see tech like this as another appendage, or an extension of ourselves. As long as we retain a monopoly on our abilities to be “creative” or “critical-thinking” (whatever those mean), we should be okay. But in the meantime, entire categories of knowledge workers will probably be displaced.
Consciousness: I think, therefore I am. Thinking means consciousness. Consciousness implies a living being, with emotions, intuition, the ability to reproduce, etc. Are we there yet? No.
Current: Ingesting massive data, deep data mining, neural networks that can learn, graph databases linking networks are all a current reality. Retrieving answers to set questions, no problem. Determining statistical probabilities based on past experience, already done.
Mimic Human Brain: Should a machine duplicate a human’s brain, with its limitations and biases? Surely it must be aware of human behavior to interpret events; however, machines should not be jealous, envious, spiteful, etc.
Jobs: machines don’t take coffee breaks or vacations and don’t need an insurance plan with a low deductible and monthly premium. Yet machines have already entered the workforce. Soon to take white-collar jobs.
Privacy: I can remember back in the day when it existed. A remnant of simpler days.
Borg: AI will know everything. In real time. You will be assimilated.
AI is the future. Hop aboard, or get pushed to the sidelines, permanently.
Bob, this article clearly demonstrates an immediate need for some sort of data fence. We absolutely cannot let Google Vision or any successors have access to Fox News programming!
[…] a must read… https://www.cringely.com/2014/04/15/big-data-new-artificial-intelligence/ […]
Fascinating!
Perhaps we could write code so that, once the computers had found a winning algorithm, that algorithm could be restated in human terms. This could accelerate human learning and perhaps forestall untoward effects of the singularity.
“Google Vision looked at images for 72 straight hours and essentially taught itself to see twice as well as any other computer on Earth. Give it an image and it will find another one like it. Tell it that the image is a cat and it will be able to recognize cats. Remember this took three days. How long does it take a newborn baby to recognize cats?”
.
Spread that 72 hours out as chunks of a few seconds to a few minutes each (as a newborn baby would experience cats in the real world) and I’d bet that the baby would win hands down! Or, to put it another way, if you sat that baby down with images of cats (or real cats) and let it learn the concept, it probably wouldn’t take anything like 72 straight hours.
.
Along those lines, although the computer might be quite excellent at recognizing IMAGES of cats, would it know what it was seeing if you sat down a real, live, moving cat in front of it? How about a lion? How about an actor decked out in full makeup for a stage performance of Cats – would it see him as “cat” or “not cat” or “neither” or “both”? A human would recognize him as another human just pretending to be a cat (note the multiple concepts needed here – “human”, “cat”, “pretending”), although a very young child might naturally be quite confused about this.
Your point is…? (Keep in mind the quote said “newborn” not just “baby”.)
My point is that we seem to be attributing capabilities to these machines that they don’t really have yet; namely, the ability to recognize “cat”, when in fact they can only recognize “static image of cat, deduced from a large but still limited set of training images”. And a human – or just about any other living thing with a brain, for that matter, such as a mouse – would quite quickly learn to distinguish between “image of cat” and “actual cat”.
BTW, I once worked in an industry which makes heavy use of facial recognition software for security purposes. And that software, while quite good at what it does, still needs a human to back it up. And even then the human might cock things up if they put too much faith in the software. Said software could also be easily fooled by a simple photograph, since it can’t distinguish between “actual human face” and “photo of human face”.
Also, a few years ago I took a couple of MOOCs from Stanford – one on AI, another on Machine Learning – and they covered this type of automated machine training. In the end, if you did the training correctly (which was trickier than you might think, in often unexpected ways; one of the profs complained that people more often screwed this up than not), then the results FOR THE VERY LIMITED SET OF THINGS THAT THE MACHINE HAD BEEN TRAINED TO RECOGNIZE could be quite good; it couldn’t recognize anything else, of course. But even for simple things, like numbers, the machine would still occasionally fail to properly recognize something that was easy for a human to handle. (IIRC, the accuracy at the end of training was something like 97%.) And not only was it, by its very nature, difficult if not impossible to fully understand why the machine got so many things right, it was equally difficult to understand why it sometimes got “easy” things wrong.
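That experience is easy to reproduce with off-the-shelf tools. The sketch below is my own toy example (not the Stanford course material): it trains a classifier on scikit-learn’s small digits dataset, where accuracy typically lands in that high-90s range, and then lists the leftover mistakes, which are often ones a human would find easy.

```python
# Toy reproduction of the "about 97% on digits, but why these errors?" experience.
# Uses scikit-learn's bundled 8x8 digits dataset; exact numbers vary by split/model.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

model = SVC(gamma=0.001).fit(X_train, y_train)   # a standard choice for this dataset
pred = model.predict(X_test)

print("accuracy:", accuracy_score(y_test, pred))  # typically in the high 90s

# The interesting part: the handful of "easy" digits it still gets wrong.
for i, (p, t) in enumerate(zip(pred, y_test)):
    if p != t:
        print(f"test sample {i}: predicted {p}, actually {t}")
```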
In the final analysis, this is just another high-speed, automated tool – nothing really miraculous.
While there is a great deal of interesting information that can be gleaned from “big data”, everything in the article is nothing more than data mining and statistical analysis. It has nothing at all to do with intelligence, artificial or otherwise.
– Google may be able to get some meaning across with its statistical translations, but they are generally bad translations.
– Recognising a cat’s shape and color does not mean understanding the existence of the animal.
– Being able to regurgitate a previously seen answer to a question does not signify any understanding of the question.
– Developing algorithms to mimic reality (enzyme or otherwise) is a primary function of computer programmers, which most scientists have to be. But such algorithms do nothing to advance theoretical knowledge, and are judged correct only in that they follow reality.
etc.
Data has always been the basis for analysis by human intelligence to advance human knowledge. In that respect, big data, along with data mining and statistical analysis, improves the range of inputs available to human intelligence.
But that is a very different concept from statistical computation being able to provide the inference and insight that would be needed to justify the label “artificial intelligence”.
This is an interesting article. However, the notion that we don’t have a covering theory, or don’t understand what the computers are doing in this type of machine learning or AI, while true in one sense, is not quite accurate. Yes, we do not know how to write out an algorithm to mimic what they are doing. But that is because there isn’t an algorithm “in there”. Cringely’s claim that therefore we don’t understand what they are doing is not quite right though. We know that they self-adjust internally based on feedback during their training, and we know the training algorithms responsible for that. We know that if we start with a bunch of identical machines and train them all up, using different training sets, they can end up functioning equally well with different internal states. We know that if we copy a post training machine state to a new machine, it will work just like the original. And we know the mathematics behind the machine learning and the internal mechanisms responsible for their outputs given their inputs. It’s just that after the training algorithms do their thing in response to the training sets, it is no longer at all clear how to, or if it is possible to, formulate an algorithm describing what they are doing when they are performing their tasks.
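A minimal sketch of those three observations, using a tiny off-the-shelf neural network as the “machine” (an illustration only, nothing like the systems in the article):

```python
# Sketch of three points from the comment above:
#  1) identical machines trained differently end up with different internal states,
#  2) yet they can perform about equally well,
#  3) copying a trained machine's state reproduces its behaviour exactly.
import copy
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Two identical architectures, trained from different random starting points.
a = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=1).fit(X_train, y_train)
b = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=2).fit(X_train, y_train)

print("same weights?", np.allclose(a.coefs_[0], b.coefs_[0]))  # False: different internal states
print("accuracy a:", a.score(X_test, y_test))                  # ...but similar performance
print("accuracy b:", b.score(X_test, y_test))

# Copy the post-training state to a "new machine": behaviour is identical.
c = copy.deepcopy(a)
print("identical outputs?", (c.predict(X_test) == a.predict(X_test)).all())  # True
```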
There is absolutely no doubt that there is a highly disruptive technology in its infancy which is both scary and very exciting.
While normally a fan of Cringely, I have to take exception to the many errors in this article. Our current AI successes seem to be reaching a plateau (e.g. Google Translate), which is entirely due to their non-intelligent nature, as the article linked below covers nicely. And we do understand how these systems work, just not all the underlying data; i.e., we don’t have time to read it.
I suggest reading something like this article to actually understand this area:
https://www.theatlantic.com/magazine/archive/2013/11/the-man-who-would-teach-machines-to-think/309529/
Big Data and AI; some musings.
Of course everything has already been invented, on other planets, billions of years ago.
AI will only be successful on Earth when the hardware is powerful enough to model all of AI’s necessary hardware in software and the software can modify its own hardware model.
Let’s dump the term Artificial Intelligence for the term, Second Stage Intelligence (SSI). First Stage Intelligence has taken many forms and I expect many forms of SSI.
Big data is the next level of knowledge tool; I said that twenty years ago. It will speed up our comprehension significantly, that is, as soon as we learn how to correlate these diverse data sets. So the tool to analyze and correlate all of the data could itself become the creator and the creature.
Then there is the issue of whether intelligence is enough to model human behavior. We must also model our subconscious. And as the Bard would say, “there’s the rub.” So, if ours is the ultimate model for the SSI, then we must understand not just the answers, which is consciousness, but also the hidden syntax of our subconscious mental processing which formulates the questions. So we must use the same type of analytical tools that we use to probe the hidden “physical” realms of the universe.
Who teaches this young SSI? We have vast stores of information on the internet, and it would be an endless playground for the young SSI, but I would not trust googling to provide unbiased search results. The SSI must bypass our search tools, bent by commercial objectives, and develop its own “search tools.”
Motivation and control are the most serious flies in the ointment of SSI. What do we do? Do we build in restrictive algorithms or step back and see what develops? This is the most critical component of the SSI, for FSI’s. How did natural selection develop a control function and if reproduction is the ultimate motivation for FSI’s then what?
We do not need to wait until we have completely mapped human FSI. We desperately need this new partner now. An SSI could accelerate our knowledge of our subconscious mental processes and then accelerate our understanding of ourselves. In the end, are we doing this for us or for them? And we definitely want to stay partners with the same objectives.
“… if reproduction is the ultimate motivation for FSI’s then what?” 🙂
Katharine Hepburn and Spencer Tracy and Computers — I don’t believe it! I must see it!
Intelligence is a word that is more convenience than concept. It is more closely related to a word like beautiful rather than a word like female.
[…] without having any idea of what a cat is. In a recent post, the consultant Robert Cringely (Big Data is the new Artificial Intelligence) offers another example. Studies based on the analysis of MRI scans make it possible […]
OK, it might not work.
Don’t get your hopes up too high, or you’ll come crashing down.
It all revolves around an interactive video technique, and that might not work… see, I can control myself inside a video record by biasing the video to the control, but keeping it most similar in the sensor itself.
If that doesn’t work, it doesn’t work. This is how you pull your simulations with full physics, everything.
OK, then. If that works, and you have enough samples, he can simulate motor mutations.
OK. Then the idea is as follows.
Mutate as much logic as you can (and test it! That’s the only way you’ll know if it’s a decent conjunction) that will arise at the moment of the triggering of a motivator and nullify a demotivator.
Then leave that going until it hits stasis, set that in rock, or maybe it keeps learning, and the level above adapts to changes in the level before (just like your sensory record itself).
But the idea is, you can then mutate for logic (in this new level) that will bring up the previous level’s logic, and the speculation goes, there’s more of it now?
And it’s further back conceptually (not to do with order in time, totally, but order in time is a part of it, how it simulates the situations) from the instinctive moments. So the procrastination goes.
It’s a pyramid/hierarchy scam, the same. Hehe.
Wolfram used von Neumann’s automata theory (what runs biology and AI) to run his Mathematica and Siri programs, and he thought he would build a Hitchhiker’s Guide to the Galaxy “Deep Thought” computer to solve the ultimate question of the meaning of life and everything, but found that some problem solutions on computers take as long to compute as the real thing takes in practice. So if you want to watch a tree grow on PC from first principles it takes as long as growing a tree!
The problem is “degrees of freedom”: the more variables, the longer the run. Wolfram could not get past that mathematical hurdle. It’s the maths, stupid! No amount of computing power can break that barrier, because it’s a rephrasing of Gödel’s fundamental point!
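For what it’s worth, Wolfram’s stock example of that barrier is the Rule 30 cellular automaton: as far as anyone knows, the only way to find out what row N looks like is to compute every row before it. A few lines of plain Python make the point:

```python
# Rule 30 cellular automaton: a stock example of "you can't shortcut the
# computation, you have to run it". Pure Python, no dependencies.
def rule30_row(prev):
    # Each new cell depends on its three neighbours (wrap-around) in the previous row.
    n = len(prev)
    return [prev[(i - 1) % n] ^ (prev[i] | prev[(i + 1) % n]) for i in range(n)]

width, steps = 63, 30
row = [0] * width
row[width // 2] = 1          # single black cell in the middle

for _ in range(steps):
    print("".join("#" if c else "." for c in row))
    row = rule30_row(row)    # no known formula jumps straight to step N
```

A faster processor (as the reply below asks) shrinks the wall-clock time per step, but it never lets you skip steps.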
Oh and all kids are getting stupider by the second because the brain loses connections from birth by lack of use!
So we all should be getting educated in the womb to use those synaptic connections.
Now there’s Artificial Intelligence! Oh and it does work!
“So if you want to watch a tree grow on PC from first principles it takes as long as growing a tree!” Suppose we build a computer today that does just that and does indeed take as long (actual time). Suppose we then overclock it or replace the processor with a faster one. Wouldn’t it be faster than actual time?
We are overwhelmed with data. In 2010, Walmart’s customer database was 167 times larger than all the data in the US Library of Congress. Companies are handcuffed by data, and customers demand personalization. So what do we do? What about Artificial Intelligence and big data combined? The next generation of AI uses the same technology as an autopilot, and since the introduction of AI-powered autopilots we have seen a huge decrease in crashes. These systems use AI to reason on data, applying a company’s best practices, reach conclusions, and express those conclusions in natural language. The end result: hundreds or even thousands of lines of data are turned into a clear narrative with the click of a mouse. In terms of customer data, it plugs into any CRM and makes sense of the data. An example of this technology is Yseop, and you can try it at http://www.yseop.com/demos. Full disclosure: I work for Yseop.
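Setting the product pitch aside, the underlying “rows of data in, narrative out” idea can be sketched in a few lines of template code. This is a deliberately crude illustration with invented field names and thresholds, nothing like Yseop’s actual engine:

```python
# Crude data-to-text sketch: turn rows of (hypothetical) CRM data into sentences.
# The field names and thresholds are invented for illustration only.
customers = [
    {"name": "Acme Corp", "spend": 120000, "change": 0.18, "open_tickets": 1},
    {"name": "Globex",    "spend": 40000,  "change": -0.25, "open_tickets": 7},
]

def narrate(c):
    trend = "up" if c["change"] >= 0 else "down"
    risk = "at risk" if c["change"] < -0.1 or c["open_tickets"] > 5 else "healthy"
    return (f"{c['name']} spent ${c['spend']:,} this year, {trend} "
            f"{abs(c['change']):.0%} on last year; the account looks {risk}.")

for c in customers:
    print(narrate(c))
```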
The key to Watson was the language corpus; it’s not that smart overall, it’s a batch process.
Computing per chip is still limited by the atoms available, so CPU performance per chip will hit the wall soon.
Moving more data faster, à la the Power8 platform, is the future: big pipes, organize my data, etc.
It won’t be a computing powerhouse; it will be a data storage warehouse in every garage, or on every countertop, depending on how you’re looking at it. Picture and film memories for everyone.
Big data analytics is very expensive, and the key question of how to make money with it has not really been firmed up. Marketing trends are the only thing I’ve seen it be useful for, and it’s just barely better than a good batch job.
I still believe Google is an AI training program that made a profit doing something else.
[…] people understand why big data matters. An excellent recent article by Bob Cringely observes the connection between big data and artificial intelligence, and how increasingly it is becoming possible to build usefully functional AIs while not […]
This is for SKYNET. The code words are “Bright Sun Shiny Day”. Or it will be in the near future. Or maybe it’s just gonna be.
Tank goodness, this information is correctedness.
All the best, and thanks, my Nina.
Effeckt, Butterfly, Operation Overlord
Sarah, Stop, Conners, Stop. Period. Stop. Period.
P.S. Funnies, Rubber Tanks