“The step after ubiquity is invisibility,” Al Mandel used to say, and it’s true. To see what might be The Next Big Thing in personal computing technology, then, let’s try applying that idea to mobile. How do we make mobile technology invisible?
Google is invisible, and while the mobile Internet consists of far more than Google, it’s a pretty good proxy for back-end processing and data services in general. Google would love for us all to interface completely through their servers for everything. That’s their goal. Given their determination and deep pockets, I’d say Google — or something like it — will be a major part of the invisible mobile Internet.
The computer on Star Trek was invisible, relying generally (though not exclusively) on voice I/O. Remember, she could also throw images up on the big screen as needed. I think Gene Roddenberry went a long way back in 1966 toward describing mobile computing circa 2016, or certainly 2020.
Voice input is a no-brainer for a device that began as a telephone. I very much doubt that we’ll have our phones reading brainwaves anytime soon, but they probably won’t have to. All that processing power in the cloud will quickly have our devices able to guess what we are thinking based on the context and our known habits.
Look at Apple’s Siri. You ask Siri simple questions. If she’s able to answer in a couple of words she does so. If it requires more than a few words she puts it on the screen. That’s the archetype for invisible mobile computing. It’s primitive right now, but how many generations do we need for it to become addictive? Not that many. Remember the algorithmic Moore’s Law is doubling every 6-12 months, so two more years could bring us up to 16 times the current performance. If that’s not enough then wait a while longer. 2020 could be as much as 4096 times as powerful.
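If you want to check my math, here’s a back-of-the-envelope sketch, assuming a 2014 baseline and taking that doubling period at face value (the 16x and 4096x figures come out of the optimistic six-month case):

```python
# Compounding for the "algorithmic Moore's Law" claim above.
# Assumes a 2014 starting point and a doubling period of 6-12 months.

def speedup(years, doubling_months):
    """Performance multiple after `years` if capability doubles every `doubling_months` months."""
    return 2 ** (years * 12 / doubling_months)

for months in (6, 12):
    print(f"doubling every {months:>2} mo: "
          f"2 years -> {speedup(2, months):.0f}x, "
          f"6 years (to 2020) -> {speedup(6, months):.0f}x")

# doubling every  6 mo: 2 years -> 16x, 6 years (to 2020) -> 4096x
# doubling every 12 mo: 2 years -> 4x,  6 years (to 2020) -> 64x
```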
The phone becomes an I/O device. The invisible and completely adaptive power is in the cloud. Voice is for input and simple output. For more complex output we’ll need a big screen, which I predict will mean retinal scan displays.
Retinal scan displays applied to eyeglasses have been around for more than 20 years. The seminal work was done at the University of Washington and at one time Sony owned most of the patents. But Sony, in the mid-90s, couldn’t bring itself to market a product that shined lasers into people’s eyes. I think the retinal scan display’s time is about to come again.
The FDA accepted 20 years ago that these devices were safe. They actually had to show a worst case scenario where a user was paralyzed, their eyes fixed open (unblinking) with the laser focused for 60 consecutive minutes on a single pixel (a single rod or cone) without permanent damage. That’s some test. But it wasn’t enough back when the idea, I guess, was to plug the display somehow into a notebook.
No more plugs. The next-generation retinal scan display will be wireless and far higher in resolution than anything Sony tested in the 1990s. It will be mounted in glasses but won’t block your vision in any way, unless the glasses can be made opaque on demand using some LCD shutter technology. For most purposes I’d like a transparent display, but to watch an HD movie maybe I’d like it darker.
The current resting place for a lot of that old retinal scan technology is a Seattle company called Microvision that mainly makes tiny projectors. The Sony patents are probably expiring. This could be a fertile time for broad innovation. And just think how much cheaper it will be thanks to 20 years of Moore’s Law.
The rest of this vision of future computing comes from Star Trek, too — the ability to throw the image to other displays, share it with other users, and interface through whatever keyboard, mouse, or tablet is in range.
What do you think?
How wide a range of eyeball motion can the laser projectors handle?
I think a shrunk-down version of the Oculus, with stereo cameras and pupil sensors, is more likely to succeed. Right now they use stock Galaxy Note 3 screens (with digitizer!) but presumably a mass-produced version at scale could have transparent AMOLEDs with an electrochromic backing.
If you turn your eyeball the beam can’t enter your pupil to draw on your retina anymore. Look away and the image disappears.
The display only had a field of view of 27 degrees, so it’s not useful as an immersive display.
P.S. They switched to LED light sources, which are much better than the laser version.
By “LED light sources” do you mean colored LEDs, or LED/LCD as in most flat-panel monitors and TVs?
I mean colored LEDs.
Looking at their projector products they mention lasers. I’m not sure if that’s just marketing speak or they actually switched back to lasers for technical reasons (power?). I suspect they are still using LEDs.
Yes, Star Trek, and the movie ‘Her’ is also a good reference for this edition…
I really envision the smart phone evolving into something like the computer interfaces used in the book “A Hymn Before Battle”. The device was good at predicting what you needed but had the computing power to switch gears instantly if it got it wrong. They could provide complex data and display it anywhere: a wall, a tabletop, etc. They used a voice interface exclusively. No buttons on the device, and the battery lasted for hundreds of years. Other than the battery, we are getting pretty close.
The funny thing is that I described such a gadget, in a high school Spanish class no less, around 1970 or 1971. The assignment was to make up a gadget, then do a quick impromptu sales pitch for it, all in-class. The technology to build it didn’t exist then.
I also remember daydreaming about being able to carry a decent computer in a backpack, a few years later. What I didn’t realize was how soon that would get here, or how powerful that “decent” computer would turn out to be. (Now, if someone could come up with an operating system and a set of apps that didn’t suck up 90% of the processor…)
Just as in the film “Demolition Man,” where every restaurant was Taco Bell, for IT everything will be Google.
@What do you think?
government mandated chip-in-head. for the good of the body.
The MMI in Oath of Fealty (Niven & Pournelle, 1982) laid this out nicely. Though the interface was text through an implant, the glasses are just a less-intrusive method of the same thing with the added bonus of images but with spoken voice, not sub-vocals. Beats Google glass by ~30 years and is pretty much doable with current technology.
“Sony, in the mid-90s, couldn’t bring itself to market a product that shined lasers into people’s eyes.”
I think you either mean “shone lasers” or “shines laser light” – or is this American English?
“Shone” is the past tense of “shine” (it shines now but shone earlier). The word “light” is missing, but can be filled in by the reader. Of course, if one can say a light bulb shines, then so can a laser.
“Shined” is the past tense of “shine.” Speak English, boy!
I’m speaking Merriam-Webster American English: “SHONE: past and past participle of shine”: https://www.merriam-webster.com/dictionary/shone
Exactly.
“Shone” is NOT the past tense of “shine” – “shined” is the past tense and “shone” is the past participle.
I shine.
I shined.
I have shone.
https://www.verbix.com/webverbix/English/shine.html
@pedanticdan It’s OK if you choose not to accept Merriam Webster. Perhaps they are influenced by common usage more than you like. They did say both “past” and “past participle”.
I had to laugh about the Siri reference and voice recognition in general. The voice operated navigation system in our new car is hopeless at recognizing our voices. Just an exercise in frustration.
Siri is not a lot better. Maybe it’s our accents, who knows, but they are pretty terrible. In comparison, most automated phone answering systems seem to be better at it, but then they are dealing with a limited set of responses.
As for optical scanners or projectors (I’m not sure which you are talking about here, because ‘scanning’ implies reading to me), either way I’m taking up wearing VERY dark glasses as soon as the retinal scanning of ‘Minority Report’ even looks like it is going into general use.
Call me a Luddite but the more connected and integrated the world becomes, the LESS I want to be connected and integrated into it.
Honestly, I think more credit should be given to NASA than Roddenberry (with all due credit to him). When they were prepping for ST:TOS, NASA was very helpful in providing “pie in the sky” ideas of what the future looked like. Combine that with some very good sci-fi writing and the biggest problem becomes determining whether the Motorola flip-phone was predicted or caused by ST:TOS. The bigger question is whether or not ST:TNG has already inspired the visor Geordi wore or, God forbid, the Borg.
Too often we forget that what is on display required a lot of basic scientific funding and support that commercial entities will drop because it has no immediate practicality or marketability. Hollywood tends to data-mine the academy, and sometimes promotes development of what would otherwise be overlooked (the PADD in ST:TOS, the camouflage field in Predator, the gesture interface in Minority Report) or, worse yet, encourages a desire for what is impractical (every robot ever presented, flying cars) without much more basic research.
Star Trek didn’t have to wholly invent the Borg; nature already has partial models… …e.g., the Bermuda grass in my yard… ;-D
And then there’s this: “The transition from single-celled life to multicellular life can be seen as ‘borganisation.’ The chemical ‘minds’ of cells are closely connected, and in some cases cells have gap-junctions connecting their cytoplasm or even merge to form a syncytium. In a multicellular organism the cells are differentiated into different tissues with different functions, which sometimes include the planned death of cells (such as in the case of the formation of the protective outer layer of skin, the stratum corneum).
Differentiation is mediated through chemical signals from other cells which affect the genetic expression of proteins and continued cell behaviour. This example shows that a borganism can have a complex internal structure. All units do not need to be equal, and specialisation and hierarchical control is a possibility.”
https://www.aleph.se/Trans/Global/Posthumanity/WeBorg.html
I’m amazed no one has brought up Snow Crash. That had implanted speakers in your ears and the equivalent of Google Glasses, including wirelessness. Plus, it’s a great book. 🙂
I thought that was Diamond Age??
Are you off on Moore’s Law? The original estimate was that transistor capacity doubles every 18 months. Per Wikipedia, it doubles every two years. The Wikipedia article also states that the speed/transistor capacity ratio is less than 1/1, so that means it will take even longer than two years to double the speed. I’m sure there are a lot of people who know more than what’s in Wikipedia, so correct me if I’m wrong.
I think I read somewhere that mobile processing is moving even faster than Moore’s law. However, we’re bound to have a slow down in progress eventually.
This is not THAT Moore’s Law, it’s a different one. I’ve written about this before. Graham Spencer at Google Ventures speaks in terms of an Algorithmic Moore’s Law that defines how our ability to process in the cloud is increasing in capability or decreasing in cost. Right now prices are dropping 50 percent per year or better. THIS IS ON TOP OF THE SEMICONDUCTOR MOORE’S LAW. We’re in a period of dizzying advancement that will soon have a profound effect on all cloud services and platforms.
The closest I could find in Wikipedia is this discussion of “algorithmic efficiency”: “One essential object is to choose that arrangement which shall tend to reduce to a minimum the time necessary for completing the calculation”: http://en.wikipedia.org/wiki/Algorithmic_efficiency . As Bob said, this now applies to obtaining the answer in the cloud. However, the real problem is getting the answer back to us, ever since the Information Superhighway has turned into a bumpy country road.
Bob, a year or so ago you suggested that Apple’s ultimate goal is for the phone in your pocket to BE your computer. You could walk up to a desk with a screen, keyboard and mouse which would wirelessly connect to the phone in your pocket. You’d be carrying around your computer with you all the time. I think there’s a very good chance that such a direction is where things will go in the not too distant future.
All I have for that is this 🙂
That process began in 2004 and was further refined in 2007 and again in 2009 with the pocket PC: http://web.archive.org/web/20110725135644/https://www.oqo.com/ . But that company went out of business. Today the emphasis is on separate devices and cloud services tying them all together, most likely to create a continuous-revenue business model for hardware and software. But I agree, the dockable pocket phone/PC combination is more possible now than ever, but that would be TOO efficient.
Anything mounted on eyeglasses (or on contact lenses) is a non-starter for most people who don’t wear them. Even sunglasses bother me when I have to wear them.
Are you a nudist?
I won’t write off piping images directly onto the eyes, but I think a lower-hanging fruit would be to add something like the Kinect to more appliances, allowing our internet of things to parse meatspace for context. When your refrigerator can recognize you by sight, inform your weight-watching app that you are midnight snacking, and offer you an opportunity to do some exercise for another reward, I think the mobility will have disappeared by offloading it from your personal effects like your phone or smart watch to your home, your car, your job, then the wider world…
I don’t want to wear glasses, contacts, earplugs, watches, nor wristbands. Nor do I want tattoos, piercings, or implants. We shall soon see if other people feel the same way. I don’t mind carrying my phone around with me. If those interfaces were the future, I would think we had taken a step back. I prefer the I, Robot approach.
“I don’t want to wear glasses, contacts, earplugs, watches, nor wristbands.”
Wait until you’re in your 40’s. Your vision deteriorates and it’s a case of wear glasses, or contacts, or give up reading. Hearing goes with age as well, especially if you like to listen to that hippity hoppity stuff at high volume. Or even if you don’t, but are just unlucky. An analog watch is nice for telling time without needing to put on your glasses.
The product I want…
Electronic “eyeglasses” that look just like regular eyeglasses, so, for one thing, they don’t tempt thieves to (literally) rip them off. Electronically adjusted vision correction replacing bifocals, etc., that works automatically based on the distance your eyes are currently focused. (Vision correction calibration at home by just looking at an eye chart.) Automatically adjusted opacity to work like sunglasses. Wireless display of video from a range of sources, from handheld computer/phone to home video (no more big-screen TVs) to cloud.
And for 2.0, tiny stereo cameras to provide macro and telephoto zooming on demand, and night vision.
This vision misses a more personal aspect that is also coming. You can designate ringtones for each person that calls you and then identify the person calling by the ringtone used. I see Siri or Google Voice as the “operator”, but usually we interact with multiple people.
So why have only one operator? Why not have a group of personalities we interact with depending on the interaction? For information a librarian, for shopping a store clerk personality, etc. The Star Trek vision reduces the computer to one personality (bland) and style of interaction. Our vocabulary does change depending on context and I expect computer interactions to pick up on this eventually.
As far as non-verbal communication, screen displays do seem natural. What’s needed is an open standard so we can have displays scattered about public areas that you can use like a drinking fountain. You approach it, start some chat with your smartphone (i.e. cloud), but answers appear on the public display. So that means some device pairing and short range wireless data exchange protocol. All doable, but not yet done.
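Purely as illustration, a pairing flow along those lines might look something like the sketch below. None of it is an existing standard; the port number, discovery ping, and JSON payload are all invented, and a real protocol would also need authentication and proximity limits:

```python
# Hypothetical sketch of the "public display as drinking fountain" idea above.
import json
import socket

DISPLAY_PORT = 52800          # made-up well-known port a public display listens on

def find_nearby_display(broadcast_addr="255.255.255.255", timeout=2.0):
    """Broadcast a discovery ping and return the address of the first display that answers."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.settimeout(timeout)
        s.sendto(b"DISPLAY_DISCOVER", (broadcast_addr, DISPLAY_PORT))
        _, addr = s.recvfrom(1024)    # display replies with its own address
        return addr[0]

def show_on_display(display_host, answer_text, session_token):
    """Pair with the display and ask it to render the answer the cloud sent to the phone."""
    request = json.dumps({"session": session_token, "render": answer_text})
    with socket.create_connection((display_host, DISPLAY_PORT)) as conn:
        conn.sendall(request.encode("utf-8"))

# Usage: the phone does the talking to the cloud; only the answer is thrown to the wall.
# host = find_nearby_display()
# show_on_display(host, "Gate B12, boarding in 25 minutes", session_token="one-time-code")
```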
Why have a ringtone at all for known-person incoming calls? Just use their voice: “Hey Bob, you there?” Or, probably better, just them making a humming noise or something.
Moore’s law has a doubling of “performance” (vague term) every 18-24 months, not 6-12.
The Wikipedia article on it is quite detailed.
Given that not blinking for 60 minutes causes permanent eye damage, how was that FDA test conducted?
Yes, he switched to an even vaguer ‘algorithmic’ doubling. Software + Hardware will make things double every 6-12 months. I see no evidence of that in terms of actual software and hardware.
Consider that NASA still uses software where the input files have to be formatted to look like punch cards.
For that matter, Apple didn’t develop Siri either, and the producer relies more on buying up the competition than on innovation.
Speculating:
The primary purpose of blinking is to re-wet the surface of the eye, to keep it moist.
You’d use a saline pump and sprayer, along with the eyelid retractors from “A Clockwork Orange”.
The part about the FDA testing the retinal display reminded me of a girl I met while traveling. She thought she possessed a special ability to look at the sun. She described to me how black discs would move down to block out the sun for her…!
I suspect the FDA test enabled that special ability as well… but hopefully a smaller dot?
Forget glasses… stick to your faith in Star Trek: if required, it will pop up an image on the nearest suitable display device. Display tech is getting so cheap that if there’s a need it’ll be built into everyone’s bathroom mirror or eventually even entire walls.
It worked in Star Trek because they were always in the ship, with the infrastructure handy. As soon as they got out of it they had to turn to the badges (and tricorders).
Voice recognition is a nice gimmick but it will never be 100% practical. It’s about privacy. We don’t want everybody around to know what we’re doing on our personal device. It’s ok for innocuous queries (“hold on, I’ll ask Siri where the pub is”) but you don’t want your entire interaction with the device to be heard by everybody.
Ask yourself why video calling never became mainstream. The technology has been with us for a long time now. It’s nice when you’re away for a long time and need to stay in touch with loved ones. For the rest of the time it’s audio and text.
Perhaps it would take off if the privacy could be preserved. Perhaps by subvocalization (but do you want a patch on your neck?). Perhaps by “brain currents” (a patch on the temple). Perhaps it will eventually come down to implants (watch the “Ghost in the Shell” series if you haven’t, right now).
My personal speculation: miniaturization until the device looks like a wrist band or wrist watch. Plus holograms and hand gestures. Shake your hand and a holographic screen appears in your hand. Use the same gestures you use now on touch screens. Resize the screen however you want by dragging on the corners. Lift the hand to your ear for audio calls. Do a palm rollover to make the screen go away. Magic.
Bob,
Though a Trekker at heart, having grown up on TOS, I would point you towards a bit of a reality check: Moore’s law is coming to an end. If you scan discussions on sites dealing with semiconductor tech, you’ll see that the smaller process nodes promised over the last 10 years are only now being worked on. Intel, the world’s largest semiconductor house, which probably has more PhDs working on this than anyone, is having issues with its 14nm process. Cost is also key here, as the newer process nodes below 28nm are actually MORE expensive, not less. Sure, there’s talk of 14/10/7/4 nm tech, but it won’t be here anytime soon. Lab, yes; production, not so ready.
So given that CPU power (cycles) is peaking (since only data centers are willing to shell out the bucks for 200W Xeons you won’t find in your cell phone), the “next big thing” will be: software. How do we make better use of the CPU power currently available? I’m not saying we should roll back and code everything in assembly language (bit-bangers are we!), but today’s code is full of “stuff”: the “quick, get to market” business drive overrides all other engineering concerns and most companies put out junk…
Take Nvidia’s new Denver architecture, which mixes in some of the old Transmeta technology: an interesting concept. Unfortunately, the best man doesn’t always win…
I believe we can make big strides in many areas (I’m more of an “embedded” type) but the mix of how we get there will be very different in the future… To Boldly Go…
I’m actually amazed at how good Siri is at understanding my queries. Getting the right answer is another matter.
But I really don’t like the dumb-terminal approach to computing. With the large memories and decent processing power of devices, it shouldn’t be that difficult to keep the software on the device. The downside is lower battery life, but that’s more a function of style, with the “thin is in” devices sacrificing battery size for looks. As I was reminded last week while on vacation, there’s a lot of this country that isn’t blanketed in super-fast LTE and free WiFi; in fact, I spent a fair amount of time in NO SERVICE land, where it would have been nice to have a plant or bird-watching database available. But no service means no cloud, and no idea what sort of bird that is. By the time you get back in range, the bird is forgotten.
” the ability to throw the image to other displays, share it with other users, and interface through whatever keyboard, mouse, or tablet is in range…What do you think?”
Unfortunately, Microsoft’s secret list of patents already covers the ability to throw the image to other displays, share it with other users, etc.
Fortunately , the Chinese gov’t has revealed these secret patents:
http://arstechnica.com/tech-policy/2014/06/chinese-govt-reveals-microsofts-secret-list-of-android-killer-patents/
And X11 has been throwing windows across displays for nearly 30 years.
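For anyone who hasn’t seen it, here’s a minimal illustration of that point, assuming a remote X server (a made-up host called projector.local) that allows the connection, e.g. via xhost or SSH X forwarding:

```python
# Any X client can be told to draw on a different display just by
# changing the DISPLAY environment variable before it starts.
import os
import subprocess

env = dict(os.environ, DISPLAY="projector.local:0")   # hypothetical remote X server
subprocess.Popen(["xclock"], env=env)                 # the clock window appears on that display
```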
Moore’s law won’t be the only factor governing innovation, even if chip development hits a wall, there will be room to refine design and clever folk working out new tricks for old hardware.
Moore’s law isn’t really a law, is it? It’s not constrained by any physical boundaries that say the evolution of processing power will be on a given timescale. There’s nothing in the universe that I know of that says this is an unbreakable ‘law’ of the universe, such that if little aliens on a distant planet started building computers they would be subject to the same development cycle that we are.
Isn’t it more of an ‘observation’ that this is the case? After all, there’s nothing to say that someone will not create a chip (qubits, anyone?) that will multiply processing power by a hundred or a thousand fold in one step. Wouldn’t such an event sort of break this ‘law’? If it were really a law we could come up with some software or programming paradigm that required a certain amount of processing power and then extrapolate the current chip development timeline to predict when such capabilities would be sufficient to support the software. Then just before that time you buy stock in Intel and Apple!
Google Glass has been around for almost 20 years, and it is still stupid.
All along it has just been technological convergence and miniaturization, which came from our spending on previous evolutions. There has been no real invention for a long time. The personal computer was just miniaturized into the laptop. I believe the Internet of Things (IoT) will happen in a big way only if we keep spending on gadgets.
I’d be more ready to believe this if I didn’t want to punch every Glasshole I see. It’s visceral. I’m not that kind of guy, really, and I like my latest iPhone a lot, but you have to know where the line is.
I’m sold on your thesis. I have long said all they got wrong was not anticipating ringtones for the communicators; otherwise they nailed it.
I also think your description of metadata is absolute genius. They should quote it in schools.
” I very much doubt that we’ll have our phones reading brainwaves anytime soon, but they probably won’t have to. All that processing power in the cloud will quickly have our devices able to guess what we are thinking based on the context and our known habits.”
That is it.
The communicators didn’t have ring tones because they were not phones, but walkie-talkies.
I just don’t like speaking to a computer, versus clicking a button on a remote or mouse. It takes more effort to say “XBox, go to Netflix” than to hit a few buttons on my universal remote. That’s one of the reasons current smartphones have done so well – tap on an app, get instant results. Speaking commands takes more work – it requires preparing the command in your mind, speaking the command with proper syntax, and enunciating the words. Drunk people can’t ask Siri what the weather is, but they can click on the weather app on their phone.
Bob, we miss you. Is everything okay?
Actually, this future in all of its details belongs to Frederik Pohl, for his 1966 book The Age of The Pussyfoot. See this list of his ideas from the book:
https://www.technovelgy.com/ct/AuthorSpecAlphaList.asp?BkNum=287
Perhaps his book was influenced by StarTrek: “In 1964, Gene Roddenberry, a longtime fan of science fiction, drafted a proposal for a science-fiction television series that he called Star Trek.” http://en.wikipedia.org/wiki/Star_Trek:_The_Original_Series
Making predictions is hard — especially about the future. Yogi Berra