CHAPTER THREE
WHY THEY DON’T CALL IT
COMPUTER VALLEY
Reminders of just how long I’ve been around this youth-driven business keep hitting me in the face. Not long ago I was poking around a store called the Weird Stuff Warehouse, a sort of Silicon Valley thrift shop where you can buy used computers and other neat junk. It’s right across the street from Fry’s Electronics, the legendary computer store that fulfills every need of its techie customers by offering rows of junk food, soft drinks, girlie magazines, and Maalox, in addition to an enormous selection of new computers and software. You can’t miss Fry’s; the building is painted to look like a block-long computer chip. The front doors are labeled Enter and Escape, just like keys on a computer keyboard.
Weird Stuff, on the other side of the street, isn’t painted to look like anything in particular. It’s just a big storefront filled with tables and bins holding the technological history of Silicon Valley. Men poke through the ever-changing inventory of junk while women wait near the door, rolling their eyes and telling each other stories about what stupid chunk of hardware was dragged home the week before.
Next to me, a gray-haired member of the short-sleeved sport shirt and Hush Puppies school of 1960s computer engineering was struggling to drag an old printer out from under a table so he could show his 8-year-old grandson the connector he’d designed a lifetime ago. Imagine having as your contribution to history the fact that pin 11 is connected to a red wire, pin 18 to a blue wire, and pin 24 to a black wire.
On my own search for connectedness with the universe, I came across a shelf of Apple III computers for sale for $100 each. Back in 1979, when the Apple III was still six months away from being introduced as a $3,000 office computer, I remember sitting in a movie theater in Palo Alto with one of the Apple III designers, pumping him for information about it.
There were only 90,000 Apple III computers ever made, which sounds like a lot but isn’t. The Apple III had many problems, including the fact that the automated machinery that inserted dozens of computer chips on the main circuit board didn’t push them into their sockets firmly enough. Apple’s answer was to tell 90,000 customers to pick up their Apple III carefully, hold it twelve to eighteen inches above a level surface, and then drop it, hoping that the resulting crash would reseat all the chips.
Back at the movies, long before the Apple III’s problems, or even its potential, were known publicly, I was just trying to get my friend to give me a basic description of the computer and its software. The film was Barbarella, and all I can remember now about the movie or what was said about the computer is this image of Jane Fonda floating across the screen in simulated weightlessness, wearing a costume with a clear plastic midriff. But then the rest of the world doesn’t remember the Apple III at all.
It’s this relentless throwing away of old technology, like the nearly forgotten Apple III, that characterizes the personal computer business and differentiates it from the business of building big computers, called mainframes, and minicomputers. Mainframe technology lasts typically twenty years; PC technology dies and is reborn every eighteen months.
There were computers in the world long before we called any of them “personal.” In fact, the computers that touched our lives before the mid-1970s were as impersonal as hell. They sat in big air-conditioned rooms at insurance companies, phone companies, and the IRS, and their main function was to screw up our lives by getting us confused with some other guy named Cringely, who was a deadbeat, had a criminal record, and didn’t much like to pay parking tickets. Computers were instruments of government and big business, and except for the punched cards that came in the mail with the gas bill, which we were supposed to return obediently with the money but without any folds, spindling, or mutilation, they had no physical presence in our lives.
How did we get from big computers that lived in the basement of office buildings to the little computers that live on our desks today? We didn’t. Personal computers have almost nothing to do with big computers. They never have.
A personal computer is an electronic gizmo that is built in a factory and then sold by a dealer to an individual or a business. If everything goes as planned, the customer will be happy with the purchase, and the company that makes the personal computer, say Apple or Compaq, won’t hear from that customer again until he or she buys another computer. Contrast that with the mainframe computer business, where big computers are built in a factory, sold directly to a business or government, installed by the computer maker, serviced by the computer maker (for a monthly fee), financed by the computer maker, and often running software written by the computer maker (and licensed, not sold, for another monthly fee). The big computer company makes as much money from servicing, financing, and programming the computer as it does from selling it. It not only wants to continue to know the customer, it wants to be in the customer’s dreams.
The only common element in these two scenarios is the factory. Everything else is different. The model for selling personal computers is based on the idea that there are millions of little customers out there; the model for selling big computers has always been based on the idea that there are only a few large customers.
When IBM engineers designed the System 650 mainframe in the early 1950s, their expectation was to build fifty in all, and the cost structure that was built in from the start allowed the company to make a profit on only fifty machines. Of course, when computers became an important part of corporate life, IBM found itself selling far more than fifty—1,500, in fact—with distinct advantages of scale that brought gross profit margins up to the 60 to 70 percent range, a range that computer companies eventually came to expect. So why bother with personal computers?
Big computers and little computers are completely different beasts created by radically different groups of people. It’s logical, I know, to assume that the personal computer came from shrinking a mainframe, but that’s not the way it happened. The PC business actually grew up from the semiconductor industry. Instead of being a little mainframe, the PC is, in fact, more like an incredibly big chip. Remember, they don’t call it Computer Valley. They call it Silicon Valley, and it’s a place that was invented one afternoon in 1957 when Bob Noyce and seven other engineers quit en masse from Shockley Semiconductor.
William Shockley was a local boy and amateur magician who had gone on to invent the transistor at Bell Labs in the late 1940s and by the mid-1950s was on his own building transistors in what had been apricot drying sheds in Mountain View, California.
Shockley was a good scientist but a bad manager. He posted a list of salaries on the bulletin board, pissing off those who were being paid less for the same work. When the work wasn’t going well, he blamed sabotage and demanded lie detector tests. That did it. Just weeks after they’d toasted Shockley’s winning the Nobel Prize in physics by drinking champagne over breakfast at Dinah’s Shack, a red clapboard restaurant on El Camino Real, the “Traitorous Eight,” as Dr. S. came to call them, hit the road.
For Shockley, it was pretty much downhill from there; today he’s remembered more for his theories of racial superiority and for starting a sperm bank for geniuses in the 1970s than for the breakthrough semiconductor research he conducted in the 1940s and 1950s. (Of course, with several fluid ounces of Shockley semen still sitting on ice, we may not have heard the last of the doctor yet.)
Noyce and the others started Fairchild Semiconductor, the archetype for every Silicon Valley start-up that has followed. They got the money to start Fairchild from a young investment banker named Arthur Rock, who found venture capital for the firm. This is the pattern that has been followed ever since as groups of technical types split from their old companies, pick up venture capital to support their new idea, and move on to the next start-up. More than fifty new semiconductor companies eventually split off in this way from Fairchild alone.
At the heart of every start-up is an argument. A splinter group inside a successful company wants to abandon the current product line and bet the company on some radical new technology. The boss, usually the guy who invented the current technology, thinks this idea is crazy and says so, wishing the splinter group well on their new adventure. If he’s smart, the old boss even helps his employees to leave by making a minority investment in their new company, just in case they are among the 5 percent of start-ups that are successful.
The appeal of the start-up has always been that it’s a small operation, usually led by the smartest guy in the room but with the assistance of all players. The goals of the company are those of its people, who are all very technically oriented. The character of the company matches that of its founders, who were inevitably engineers—regular guys. Noyce was just a preacher’s kid from Iowa, and his social sensibilities reflected that background.
There was no social hierarchy at Fairchild—no reserved parking spaces or executive dining rooms—and that remained true even later when the company employed thousands of workers and Noyce was long gone. There was no dress code. There were hardly any doors; Noyce had an office cubicle, built from shoulder-high partitions, just like everybody else. Thirty years later, he still had only a cubicle, along with limitless wealth.
They use cubicles, too, at Hewlett-Packard, which at one point in the late 1970s had more than 50,000 employees, but only three private offices. One office belonged to Bill Hewlett, one to David Packard, and the third to a guy named Paul Ely, who annoyed so many coworkers with his bellowing on the telephone that the company finally extended his cubicle walls clear to the ceiling. It looked like a freestanding elevator shaft in the middle of a vast open office.
The Valley is filled with stories of Bob Noyce as an Everyman with deep pockets. There was the time he stood in a long line at his branch bank and then asked the teller for a cashier’s check for $1.3 million from his personal savings, confiding gleefully that he was going to buy a Learjet that afternoon. Then, after his divorce and remarriage, Noyce tried to join the snobbish Los Altos Country Club, only to be rejected because the club did not approve of his new wife, so he wrote another check and simply duplicated the country club facilities on his own property, within sight of the Los Altos clubhouse. “To hell with them,” he said.
As a leader, Noyce was half high school science teacher and half athletic team captain. Young engineers were encouraged to speak their minds, and they were given authority to buy whatever they needed to pursue their research. No idea was too crazy to be at least considered, because Noyce realized that great discoveries lay in crazy ideas and that rejecting out of hand the ideas of young engineers would just hasten that inevitable day when they would take off for their own start-up.
While Noyce’s ideas about technical management sound all too enlightened to be part of anything called big business, they worked well at Fairchild and then at Noyce’s next creation, Intel. Intel was started, in fact, because Noyce couldn’t get Fairchild’s eastern owners to accept the idea that stock options should be a part of compensation for all employees, not just for management. He wanted to tie everyone, from janitors to bosses, into the overall success of the company, and spreading the wealth around seemed the way to go.
This management style still sets the standard for every computer, software, and semiconductor company in the Valley today, where office doors are a rarity and secretaries hold shares in their company’s stock. Some companies follow the model well, and some do it poorly, but every CEO still wants to think that the place is being run the way Bob Noyce would have run it.
The semiconductor business is different from the business of building big computers. It costs a lot to develop a new semiconductor part but not very much to manufacture it once the design is proved. This makes semiconductors a volume business, where the most profitable product lines are those manufactured in the greatest volume rather than those that can be sold in smaller quantities with higher profit margins. Volume is everything.
To build volume, Noyce cut all Fairchild components to a uniform price of one dollar, which was in some cases not much more than the cost of manufacturing them. Some of Noyce’s partners thought he was crazy, but volume grew quickly, followed by profits, as Fairchild expanded production again and again to meet demand, continually cutting its cost of goods at the same time. The concept of continually dropping electronic component prices was born at Fairchild. The cost per transistor dropped by a factor of 10,000 over the next thirty years.
To avoid building a factory that was 10,000 times as big, Noyce came up with a way to give customers more for their money while keeping the product price point at about the same level as before. While the cost of semiconductors was ever falling, the cost of electronic subassemblies continued to increase with the inevitably rising price of labor. Noyce figured that even this trend could be defeated if several components could be built together on a single piece of silicon, eliminating much of the labor from electronic assembly. It was 1959, and Noyce called his idea an integrated circuit. “I was lazy,” he said. “It just didn’t make sense to have people soldering together these individual components when they could be built as a single part.”
Jack Kilby at Texas Instruments had already built several discrete components on the same slice of germanium, including the first germanium resistors and capacitors, but Kilby’s parts were connected together on the chip by tiny gold wires that had to be installed by hand. TI’s integrated circuit could not be manufactured in volume.
The twist that Noyce added was to deposit a layer of insulating silicon oxide on the top surface of the chip—this was called the “planar process” that had been invented earlier at Fairchild—and then use a photographic process to print thin metal lines on top of the oxide, connecting the components together on the chip. These metal traces carried current in the same way that Jack Kilby’s gold wires did, but they could be printed on in a single step rather than being installed one at a time by hand.
Using their new photolithography method, Noyce and his boys put first two or three components on a single chip, then ten, then a hundred, then thousands. Today the same area of silicon that once held a single transistor can be populated with more than a million components, all too small to be seen.
Tracking the trend toward ever more complex circuits, Gordon Moore, who cofounded Intel with Noyce, came up with Moore’s Law: the number of transistors that can be built on the same size piece of silicon will double every eighteen months. Moore’s Law still holds true. Intel’s memory chips from 1968 held 1,024 bits of data; the most common memory chips today hold a thousand times as much—1,024,000 bits—and cost about the same.
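As a back-of-the-envelope check on that arithmetic (a sketch using only the 18-month doubling rule and the 1,024-bit starting figure quoted above, not Intel’s actual product history), ten doublings take about fifteen years and multiply capacity by 2^10, or roughly a thousand:

```c
#include <math.h>
#include <stdio.h>

/* Moore's Law as plain arithmetic: start from a 1,024-bit memory chip
   and double capacity every 18 months. Both figures come from the
   paragraph above; this is an illustration, not a product history. */
int main(void) {
    const double start_bits = 1024.0;
    const double months_per_doubling = 18.0;

    for (int years = 0; years <= 20; years += 5) {
        double doublings = years * 12.0 / months_per_doubling;
        double bits = start_bits * pow(2.0, doublings);
        printf("after %2d years: %4.1f doublings, about %.0f bits\n",
               years, doublings, bits);
    }
    return 0;
}
```

Fifteen years at that pace turns a kilobit chip into a megabit chip, which is the thousand-fold jump described above.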
The integrated circuit—the IC—also led to a trend in the other direction—toward higher price points, made possible by ever more complex semiconductors that came to do the work of many discrete components. In 1971, Ted Hoff at Intel took this trend to its ultimate conclusion, inventing the microprocessor, a single chip that contained most of the logic elements used to make a computer. Here, for the first time, was a programmable device to which a clever engineer could add a few memory chips and a support chip or two and turn it into a real computer you could hold in your hands. There was no software for this new computer, of course—nothing that could actually be done with it—but the computer could be held in your hands or even sold over the counter, and that fact alone was enough to force a paradigm shift on Silicon Valley.
It was with the invention of the microprocessor that the rest of the world finally disappointed Silicon Valley. Until that point, the kids at Fairchild, Intel, and the hundred other chipmakers that now occupied the southern end of the San Francisco peninsula had been farmers, growing chips that were like wheat from which the military electronics contractors and the computer companies could bake their rolls, bagels, and loaves of bread—their computers and weapon control systems. But with their invention of the microprocessor, the Valley’s growers were suddenly harvesting something that looked almost edible by itself. It was as though they had been supplying these expensive bakeries for years, only to undercut them all by inventing the Twinkie.
But the computer makers didn’t want Intel’s Twinkies. Microprocessors were the most expensive semiconductor devices ever made, but they were still too cheap to be used by the IBMs, the Digital Equipment Corporations, and the Control Data Corporations. These companies had made fortunes by convincing their customers that computers were complex, incredibly expensive devices built out of discrete components; building computers around microprocessors would destroy this carefully crafted concept. Microprocessor-based computers would be too cheap to build and would have to sell for too little money. Worse, their lower part counts would increase reliability, hurting the service income that was an important part of every computer company’s bottom line in those days.
And the big computer companies just didn’t have the vision needed to invent the personal computer. Here’s a scene that happened in the early 1960s at IBM headquarters in Armonk, New York. IBM chairman Tom Watson, Jr., and president Al Williams were being briefed on the concept of computing with video display terminals and time-sharing, rather than with batches of punch cards. They didn’t understand the idea. These were intelligent men, but they had a firmly fixed concept of what computing was supposed to be, and it didn’t include video display terminals. The briefing started over a second time, and finally a light bulb went off in Al Williams’s head. “So what you are talking about is data processing but not in the same room!” he exclaimed.
IBM played for a short time with a concept it called teleprocessing, which put a simple computer terminal on an executive’s desk, connected by telephone line to a mainframe computer to look into the bowels of the company and know instantly how many widgets were being produced in the Muncie plant. That was the idea, but what IBM discovered from this mid-1960s exercise was that American business executives didn’t know how to type and didn’t want to learn. They had secretaries to type for them. No data were gathered on what middle managers would do with such a terminal because it wasn’t aimed at them. Nobody even guessed that there would be millions of M.B.A.s hitting the streets over the following twenty years, armed with the ability to type and with the quantitative skills to use such a computing tool and to do some real damage with it. But that was yet to come, so exit teleprocessing, because IBM marketers chose to believe that this test indicated that American business executives would never be interested.
In order to invent a particular type of computer, you have to want first to use it, and the leaders of America’s computer companies did not want a computer on their desks. Watson and Williams sold computers but they didn’t use them. Williams’s specialty was finance; it was through his efforts that IBM had turned computer leasing into a goldmine. Watson was the son of God—Tom Watson Sr.—and had been bred to lead the blue-suited men of IBM, not to design or use computers. Watson and Williams didn’t have computer terminals at their desks. They didn’t even work for a company that believed in terminals. Their concept was of data processing, which at IBM meant piles of paper cards punched with hundreds of rectangular, not round, holes. Round holes belonged to Univac.
The computer companies for the most part rejected the microprocessor, calling it too simple to perform their complex mainframe voodoo. It was an error on their part, and not lost on the next group of semiconductor engineers who were getting ready to explode from their current companies into a whole new generation of start-ups. This time they built more than just chips and ICs; they built entire computers, still following the rules for success in the semiconductor business: continual product development; a new family of products every year or two; ever increasing functionality; ever decreasing price for the same level of function; standardization; and volume, volume, volume.
**********
It takes society thirty years, more or less, to absorb a new information technology into daily life. It took about that long to turn movable type into books in the fifteenth century. Telephones were invented in the 1870s but did not change our lives until the 1900s. Motion pictures were born in the 1890s but became an important industry in the 1920s. Television, invented in the mid-1920s, took until the mid-1950s to bind us to our sofas.
We can date the birth of the personal computer somewhere between the invention of the microprocessor in 1971 and the introduction of the Altair hobbyist computer in 1975. Either date puts us today about halfway down the road to personal computers’ being a part of most people’s everyday lives, which should be consoling to those who can’t understand what all the hullabaloo is about PCs. Don’t worry; you’ll understand it in a few years, by which time they’ll no longer be called PCs.
By the time that understanding is reached, and personal computers have wormed into all our lives to an extent far greater than they are today, the whole concept of personal computing will probably have changed. That’s the way it is with information technologies. It takes us quite a while to decide what to do with them.
Radio was invented with the original idea that it would replace telephones and give us wireless communication. That implies two-way communication, yet how many of us own radio transmitters? In fact, the popularization of radio came as a broadcast medium, with powerful transmitters sending the same message—entertainment—to thousands or millions of inexpensive radio receivers. Television was the same way, envisioned at first as a two-way visual communication medium. Early phonographs could record as well as play and were supposed to make recordings that would be sent through the mail, replacing written letters. The magnetic tape cassette was invented by Philips for dictation machines, but we use it to hear music on Sony Walkmans. Telephones went the other direction, since Alexander Graham Bell first envisioned his invention being used to pipe music to remote groups of people.
The point is that all these technologies found their greatest success being used in ways other than were originally expected. That’s what will happen with personal computers too. Fifteen years from now, we won’t be able to function without some sort of machine with a microprocessor and memory inside. Though we probably won’t call it a personal computer, that’s what it will be.
It takes new ideas a long time to catch on—time that is mainly devoted to evolving the idea into something useful. This fact alone dumps most of the responsibility for early technical innovation in the laps of amateurs, who can afford to take the time. Only those who aren’t trying to make money can afford to advance a technology that doesn’t pay.
This explains why the personal computer was invented by hobbyists and supported by semiconductor companies, eager to find markets for their microprocessors, by disaffected mainframe programmers, who longed to leave their corporate/mainframe world and get closer to the machine they loved, and by a new class of counterculture entrepreneurs, who were looking for a way to enter the business world after years of fighting against it.
The microcomputer pioneers were driven primarily to create machines and programs for their own use or so they could demonstrate them to their friends. Since there wasn’t a personal computer business as such, they had little expectation that their programming and design efforts would lead to making a lot of money. With a single strategic exception—Bill Gates of Microsoft—the idea of making money became popular only later.
These folks were pursuing adventure, not business. They were the computer equivalents of the barnstorming pilots who flew around America during the 1920s, putting on air shows and selling rides. Like the barnstormers before them, the microcomputer pioneers had finally discovered a way to live as they liked. Both the barnstormers and microcomputer enthusiasts were competitive and were always looking for something against which they could match themselves. They wanted independence and total control, and through the mastery of their respective machines, they found it.
Barnstorming was made possible by a supply of cheap surplus aircraft after World War I. Microcomputers were made possible by the invention of solid state memory and the microprocessor. Neither barnstorming nor microcomputing would have happened without previous art. The barnstormers needed a war to train them and to leave behind a supply of aircraft, while microcomputers would not have appeared without mainframe computers to create a class of computer professionals and programming languages.
Like early pilots and motorists, the first personal computer drivers actually enjoyed the hazards of their primitive computing environments. Just getting from one place to another in an early automobile was a challenge, and so was getting a program to run on the first microcomputers. Breakdowns were frequent, even welcome, since they gave the enthusiast something to brag about to friends. The idea of doing real work with a microcomputer wasn’t even considered.
Planes that were easy to fly, cars that were easy to drive, computers that were easy to program and use weren’t nearly as interesting as those that were cantankerous. The test of the pioneer was how well he did despite his technology. In the computing arena, this meant that the best people were those who could most completely adapt to the idiosyncrasies of their computers. This explains the rise of arcane computer jargon and the disdain with which “real programmers” still often view computers and software that are easy to use. They interpret “ease of use” as “lack of challenge.” The truth is that easy-to-use computers and programs take much more skill to produce than did the hairy-chested, primitive products of the mid-1970s.
Since there really wasn’t much that could be done with microcomputers back then, the great challenge was found in overcoming the adversity involved in doing anything. Those who were able to get their computers and programs running at all went on to become the first developers of applications.
With few exceptions, early microcomputer software came from the need of some user to have software that did not yet exist. He needed it, so he invented it. And son of a gun, bragging about the program at his local computing club often dragged from the membership others who needed that software too and wanted to buy it, and an industry was born.
Your prognostications have held up well.
Absolutely – “By the time that understanding is reached, and personal computers have wormed into all our lives to an extent far greater than they are today, the whole concept of personal computing will probably have changed. That’s the way it is with information technologies. It takes us quite a while to decide what to do with them.”
Or as William Gibson puts it, “the street finds a use for it”.
Television, invented in the mid-19205, took until the mid-1950s to bind us to our sofas.
I don’t think that was the TV you are talking about. I am pretty sure it was the time machine.
>invented in the mid-19205
The last 5 should be an s.
Doesn’t matter if it was an “s” or a “5” – his comment makes no sense either way.
BOB … may I suggest we put all OCR and related issues in one message at the top of each post, that way we don’t get a dozen comments about the same issue?
… and here’s my real contribution. In paragraph 7, sentence 1, it reads:
long before the Apple Ill’s problems
… when it probably should read:
long before the Apple III’s problems
… the first uses an uppercase I and two lowercase L’s, while the second is three uppercase I’s.
My guess is that Bob is scanning his book and his software got confused when converting the image to text.
The TV comment makes perfect sense because it is true (except for a minor typo), although even in the 50’s they were rather rare (where I lived).
Bob’s statement seems correct “… if television is defined as the live transmission of moving images with continuous tonal variation, Baird first achieved this privately on October 2, 1925.” http://en.wikipedia.org/wiki/History_of_television . But wasn’t until the 50s that the quality improved enough that it became mainstream, affordable, able to support content with advertising, and downright popular.
It does. If you’re unfamiliar, Philo Farnsworth invented electronic television between 1921 and 1927, 1927 being when he had an actual working prototype.
I think he was a farmer and got the idea of scan-lines from ploughing fields.
Well, I believe the original idea has come true (cell phones).
I have a ham radio, which is what he really means methinks, though cell phones are a good realization of the wireless communication dream. 🙂
“… with square, not round holes. Round holes belonged to Univac.”
That’s a brilliant and concise summary of how technology “ownership” worked, and also how slow the advancement of technology had been up to that point.
Round holes vs rectangles? That’s nothing, consider the simple idea that when your cursor gets to the end of a line on a CRT or when you hit Enter on your keyboard, the cursor would not only go to the left side of the screen but would actually drop down to the next line. I heard back then that IBM patented that and was getting paid by PC manufacturers who might possibly violate that and other idiotic patents.
Is that true about that IBM patent? I tried googling it but didn’t find anything…
Related show on PBS just this week: American Experience: Robert Noyce Goes to Silicon Valley
I read the whole chapter.
Seems the 30 years are over now, so what’s next?
We build it.
My guess would be different programming paradigms, in particular a clear enforced separation between application layer and systems level programming – with greater focus on provably secure systems and tools; end users will be able to automate simple things safely. To make this work, I think you will see a renewed emphasis on simplicity (KISS) in any given system – but particularly at the interfaces between user space and the operating system. Language differences will become largely irrelevant – as translators/cross compilers targeting a universal virtual machine will become the norm in application space; anyone with sufficient skill in a given language will find that their skills become more useful over time, rather than less. [I’m already starting to see these things happen – though they have a long time to go to shake out (less than 30 years?) – e.g. Apple’s iOS and Google Android environments aren’t quite ‘fully baked’ in the way I envision – yet]
Your contrast between PC design and mainframe design ignores the fact that PC-based technology (NMOS/CMOS) was improving much faster than mainframe technology (TTL/ECL). The former was improving at 30% a year whereas the latter was at 15% a year. It was this fact that killed the mainframe, minicomputer and super-minicomputer markets, because the PC became a lot cheaper faster, although at lower performance. Mainframe and minicomputer engineers knew how to design with discrete circuits, but didn’t have the skills to integrate the circuits on a chip efficiently. (They would have to use gate arrays or field-programmable devices, which are not as high performing.) Nowadays, it seems we have the same phenomenon, with only a few companies left with the expertise to design advanced processor circuits (Intel, AMD, etc.); the rest just know how to take those components and plug them together.
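To put the commenter’s 30 percent and 15 percent figures in perspective, here is a quick compounding sketch (the rates are the commenter’s own, used purely for illustration):

```c
#include <math.h>
#include <stdio.h>

/* Compound the improvement rates from the comment above: roughly 30% a
   year for PC-class parts (NMOS/CMOS) versus roughly 15% a year for
   mainframe-class parts (TTL/ECL). Illustrative only. */
int main(void) {
    for (int years = 5; years <= 20; years += 5) {
        double pc = pow(1.30, years);
        double mainframe = pow(1.15, years);
        printf("%2d years: PC improves %6.1fx, mainframe %5.1fx, gap %5.1fx\n",
               years, pc, mainframe, pc / mainframe);
    }
    return 0;
}
```

After two decades the faster-improving technology is ahead by roughly a factor of ten, which is the gap the comment credits with killing the mainframe and minicomputer markets.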
This makes it sound as if mainframes are dead. But of course plenty of industries still use mainframes, and for many purposes, a mainframe is superior to the alternatives. I realize Bob (and the other readers) knows this, but just wanted to point that out.
The new pundits will tell us that desktops and laptops will soon be gone because all we need is our tablet and our cell phone. Not everything we do is Facebook and Twitter. Try doing a schematic, a CAD drawing, or writing software on a tablet or a cell phone. My finger is a little large to point accurately.
I agree that the desktop isn’t dead or even dying, but I think we are perhaps seeing another recursion of Bob’s theme of “Big Iron is superseded for primacy by the PC” in the way that the universe of ARM products is expanding (see smartphones, tablets, et al), while the x86 ecosystem of desktop, notebooks and laptops remains static.
Wrong. The desktop is dying. Just because it will have a niche role remaining doesn’t mean that it won’t be gone from the mainstream. It’s already considered a grandpa box compared to laptops, which are already getting the boot for tablets. It’s just a matter of time.
Well, that was about the same time that the Smart Phone and Tablets came about, making the computer truly “personal”. Just think, no more 8″ floppies …
They didn’t so much make the computer personal as the computer chip made personal devices possible.
I can’t resist, even a day later. “Just think, no more 8″ floppies …” That’s right. Now we have 8″ non-floppies (tablets).
“Radio was invented with the original idea that it would replace telephones and give us wireless communication. That implies two-way communication, yet how many of us own radio transmitters?”
Don’t we all? I think they’re called cell phones.
Spectacles with in-built sat-nav. Through a mash-up with Google Maps. (Goggle Maps?)
The phrase “augmented reality” and its acronym “AR” have seen a fair bit of adoption as of late. I wonder if we’ll still use the phrase a few years down the road, but I don’t wonder if we’ll use the technology (that’s a certainty).
I’ve seen a few ideas (on TV) using AR with mobile phones, to do virtual guides around museums, and a treasure hunt in and around London.
Not entirely practical looking down at a screen whilst running through traffic.
William Gibson’s 2007 Novel “Spook Country” illustrates how an artist and curator might use AR in interesting ways, and how the government might pervert the uses as well.
It’s true, practical use cases are not prevalent yet … but consider the heads-up-display (HUD) seen in fighter jets. It’s a piece of glass on which is displayed information about what you’re looking at (altitude, direction, weapon-aiming, etc).
From what I’ve read, HUD’s are standard on luxury cars from Mercedes, BMW, and others showing speed, engine temp, warnings, etc so that the driver need not look down from the windshield. As cars get “smarter”, more and more information can be displayed in such a manner (incoming call to the car’s built-in phone, for example).
The “AR Drone”, a remote-control quad-copter sending camera feeds back to the iPhone (!) controlling it, is pretty popular these days, too. But pricey (for now).
… and then there’s the glasses one commuter noticed on Sergey Brin while on an NYC subway a few weeks back.
Google’s glasses – last item of clip
https://www.bbc.co.uk/programmes/p015fmy0
I believe it was Philips, with one L, that developed the compact cassette.
Correct, only one L.
nitpick: reminders keep hitting you in the face do they?
if I had a blue pencil
Siri: kick me in the shin at 3 today
then 5 paragraphs down you’re “back at the movies”. what did I miss?
“Fifteen years from now, we won’t be able to function without some sort of machine with a microprocessor and memory inside. Though we probably won’t call it a personal computer, that’s what it will be.”
Bob accurately predicts the smart phone!
Just today I instructed a friend to fix his “cell phone” by removing the battery to reset it.
Of course.. if it was a windows phone he probably would’ve thought of that himself…
I think the manual for my LG or Galaxy cell phone has that in the instructions as part of troubleshooting. I also have a Gobi modem card in my pc that requires the same thing, so I actually have to remove the pc battery. Ever notice that if you have a usb connected broadband modem, the instructions for that also say to remove it and reinsert it. All of these examples involve removing power from the cellular device to reset it. Sometimes just “disconnecting” from the carrier is not enough. I’m inclined to blame the radio manufacturer or the wireless carrier for not including (or allowing) a thorough software reset.
Contrast to the late ’90’s, when the PC makers started offering more and more support to the masses. And now, when the PC makers really don’t make (that’s handled by Foxconn, Asustek, et al) but instead design (well, a lot of that’s handled in India) and market and provide lots of support. Indeed, Apple with its entire ecosystem and Google (if Chromebook puts them in the same universe) with its data are very much up in the consumer’s business. Both of these companies may know more about us than the NSA, no slouches themselves when it comes to knowing all about us.
These days consumers have a very intimate relationship with PC makers (I’m including tablets, smart phones), sending them a constant stream (nay, torrent) of data to mine. Probably more intimate than they realize: one way or another, Apple/Google/etc always knows where you are and how fast you drive.
“but we use it to hear music on Sony Walkmans.” should probably be used. After all, when was the last time you used a cassette-encumbered portable music device?
You skip an entire generation — the minicomputer. It had more in common with the PC than the mainframe, in terms of how it was purchased, financed, used, etc. A department owned it. It didn’t have to run 24 hours per day to pay for itself. Programmers touched it.
Yes.
My first real job was around the time that Digital (DEC) PDP-minis morphed into desktop versions. There were even PCI cards with PDP-11 CPU’s for the PC.
The PCI cards were around the early 1990s.
…by which time they’ll no longer be called PCs.
By the time that understanding is reached, and personal computers have wormed into all our lives to an extent far greater than they are today, the whole concept of personal computing will probably have changed.
I wonder if the Smart Phone would be an example of the PC that is no longer called a PC, but that has wormed its way into our lives and is used for purposes not originally envisioned by the inventors of the PC. Smart Phones are nearly ubiquitous, are a part of many people’s lives (certainly mine) 24×7, and comprise a microprocessor, memory, applications, display, and user interface.
And now … everybody does, right? Mobile phones are walkie-talkies on steroids, aren’t they? And it’s worth considering how many other things are near constantly communicating both ways over radio waves (cars, computers, peripherals, etc).
The whole proprietary bullshit of the computer industry was a P.I.T.A.
Thankfully when Steve Jobs returned to Apple and produced the iMac, he’d obviously seen the error of having proprietary connections and cables. ADB & Video to name the obvious ones.
Though we still see so called mini-USB connectors with two incompatible shapes for mobile phones. How the heck did that come about?
Definitely not contradicting Bob, but it seems the computer industry was absorbed into society in significantly fewer than 30 years. By 1990, 15 years after the intro of the hobbyist Altair, many people had computers in their homes. You wonder if this more-rapid-than-usual timeframe, which did not allow for the normal development of laws and regulations, contributed to the “wild west” days of the PC industry.
Perhaps Bob means computers in general, not PCs in particular. Computers existed during the 40s, they just didn’t rely on “chips”, but relays instead.
I think Bob is referring to 30 years for non-geeks to start using computers.
After all it wasn’t until the widespread adoption of the world wide web, that entire families saw Macs and PCs as more ‘friendly’. That was a long standing goal of Steve Jobs with Macintosh.
I certainly felt a change in attitudes after 1993/4.
There is a reasonable argument that personal computers beat the 30 year trend. Still when the iPhone, the first truly modern smartphone, was released in 2007 that was exactly 30 years after the introduction of the Apple ][ in 1977.
When you are in the middle of a revolution it’s hard to see the overall trends and direction things are going in. I think in 20 years time the personal computers most people use will look a lot more like an iOS or android phone or tablet than a desktop PC, or even a laptop. Those people, looking back, won’t recognise the Apple ][ as being an obvious ancestor of the computers they use, but they will recognise an iPhone in that way.
I’m beginning to question the “x-year” trend concept. It seems to me the more time that passes the more high-tech products advance. The abacus dates back to 2500 BC, so it took over 4500 years for computers to evolve to their present state.
Surely an abacus is just a state register for the human pushing the beads. Just like a steering wheel can’t drive.
Computers were people up to the end of WWII.
“this explains why the personal computer was invented by hobbyists and supported by semiconductor companies, eager to find markets for their microprocessors, by disaffected mainframe programmers, who longed to leave their corporate/mainframe world and get closer to the machine they loved, . . . ”
We, the disaffected programmers, rationalized spending more on our passion than on cars or any other hobby to:
– reconcile our checking accounts,
– do our annual taxes,
– keep recipes,
– make Christmas card lists,
and automate our homes.
But the real reason we wanted our own computers was so we could try those things we couldn’t justify at work, learn the new languages we couldn’t write projects in and simply play. They were the hotrods we never got.
I’ve always had better personal computing tools at home than my work provides – without all the nanny code and sms cruft that slows a fast machine to a crawl.
On the other hand, I would love to have some of the new server gear geared for virtualization – with RAM beyond my wildest imagination, and 32 or more CPUs – and fiber attached storage…I can’t afford the $10,000 price tag though.
‘course this brings up the question, “what will we hack when the PCs are all gone?”
“what will we hack when the PCs are all gone?” The cloud. 🙂
you mean to say that Weird Stuff Warehouse still exists?
ah, this brings up what is to me a bit of a flaw in this book. It is still written as if it was 1990 or whenever. Of course we understand that it was, but for a new edition I suggest that maybe the time expressions should be changed to something more like past tense, like a currently written history perhaps. I think this would help newer and younger readers more.
Another thing we understand even better now: you might stick in a few more acid comments about MBAs and spreadsheets (vs thinking?). I am proud to say that I never hired an engineer who had gotten an MBA ‘on the side’ and still expected to be an engineer.
“It was with the invention of the microprocessor that the rest of the world finally disappointed Silicon Valley.”
I suspect we “discovered” it
I think Bob means that at the time it was invented, the mainframe world’s serious business users thought it was just a “toy”. At the time they were correct. Even in the 70s I could achieve much more accurate results with a 10-digit calculator than with early minicomputers. That made them totally useless for financial calculations that required accuracy to the penny even when the total amount was over $1,000,000.
I dunno; you’re probably correct in the real sense but I’d say that Silicon Valley is definitely disappointed with the rest of the world…
What were people doing with the ICs before the microprocessor? What was driving all that purchasing? Was it just transistor radios and military gear? How did the Valley get scale in those early days? Making parts for $1 and getting rich implies there were a LOT of $1 parts being sold. To whom? For what?
Before the microprocessor? Well, you had the 74 series of logic gates (in regular, CMOS etc), and you could do quite a lot with them. Throw in a 555 timer IC and you had the ability to make all sorts of timing logic circuits.
It was around this time (before the dawn of the microprocessor) that one Seymour Cray decided that individual transistors were no longer fast enough to build supercomputers with, and started using IC’s – and went on to build the CRAY-1 (see The Supermen: The Story of Seymour Cray and the Technical Wizards behind the Supercomputer for more details).
I was working in the early ’80s for National Semiconductor, a company just reaching $1 billion/yr in sales. I can assure you we were selling a TON of TTL chips (74xx logic, Bipolar PROMs, Schottky memory) to the minicomputer companies (DEC, DG, Prime, Wang, etc.) on the East Coast.
UK comment.
‘Maalox’ is a culture-specific reference (I’ve no idea what it is. Sounds like the villain in a bad fantasy novel).
By contrast ‘Barnstormer’ is noted as a specifically American term, although widely known, and Twinkies were in Ghostbusters so must be recognised throughout the known world.
Maalox is an ‘antacid’ – for those with gastric problems. Relieves heartburn, presumably acquired from the aforementioned ‘junk food’, or stress from being in the fast paced Silicon Valley industrial milieu.
With Linux and OS X both being descendants of Unix, one can argue that current PCs have considerably more roots in the prior computing era than your chapter says. Increasingly, I am impressed how much we owe to Thompson, Ritchie et al.
there were many time sharing systems in existence before Unix was invented, which itself was a bastardization of Multics.
Excessive complexity is the enemy of good.
Unix, the POSIX standard based upon it, and its spinoffs largely run the world today – on the backend, as well as on mobile devices.
Multics is…where?
Bastard or not, *nix is still kicking.
The power of the PC was that you were free of the rules, constraints, and control of the DP department.
With a PC you could do what you needed/wanted to do and the data centers lost control.
This is in part what fueled the growth of the PC.
Today networking, and more importantly cloud computing, is the return to central control.
You forget to mention the other part of that uncontrolled environment equation:
Zero days.
As long as end users can compromise their own systems by poorly contrived systems level programs – we will continue to suffer data compromises and worse. Professional programmers can’t even get it right – and they should if anyone can – but they don’t. The problem is compounded.
That being said, I don’t advocate taking away the ability of hobbyists to hack systems – but I would like to see two things:
1. Clear demarcation between application level programming and systems level programming. This way, if someone wants to configure their system to be able to do systems programming – it is clearly different than a normal ‘applications’ programming and runtime environment geared towards end users – with a clear pathway to getting systems code added to the system (e.g. think of the process used today to get changes into the linux kernel by kernel devs).
2. Creation of tools and systems that make that possible. Current Apple iOS and Google Android aren’t quite ‘there’ yet (barriers to entry to hobbyists, plus limited end-user application tools and/or opportunities to make systems level improvements).
Rather than central control – I would call it ‘secure computing’. You can hack – just not on a key piece of infrastructure – without repercussions.
“It takes society thirty years, more or less, to absorb a new information technology into daily life.”
What this means is that it takes Congress thirty years to become aware of and to regulate the life out of new technology, thus stalling further progress.
Good one!! 🙂
Bob, in this chapter you wrote about entrepreneurs who founded companies because they were fed up with or didn’t want to work at their old companies anymore. That seems like the class of entrepreneur who started the valley. There is another breed too, especially nowadays, of people who start companies even if they have never worked at another company. They are the Stanford undergrads, the Zuckerbergs, the Steve Jobs. Now, because the silicon valley entrepreneur model has been validated, they go to college never even intending to get a corporate job. It would be interesting to examine the modern motivations of present day Bob Noyces.
One other thing, most of the comments on here are kind of nit picking the technological aspects of the book. E.g., radio actually did turn into two way communication – cell phones.
In my eyes, the value of Bob’s book doesn’t actually have much to do with the details of the technology, but is in his description of the cultural and personal aspects of this world-changing revolution. It would be cool to see more discussion of those aspects on this board. Maybe I’m wrong.
Radio didn’t “turn into” two way communication with cellphones. People managed to make the last kilometer of the same old phone network wireless. Sorry, but the whole radio / cell phone comparison irritates me somehow. No offence meant.
Website comment: I’d like to vote down the annoying flock of flying objects on the right that keep blocking the text as they drift by. They are way more annoying than ads that are at least separated from the content.
Are we currently in the middle of another revolution/evolution of this same technology?
The BASIC Stamp launched a hobbyist revolution 20 years ago where microcontroller gadgets could be created at home. Today Arduino has kids making robots and internet connected devices that interface with phone and PCs. Many start-ups have launched to support this hobbyist market. MakerFaires have sprouted up all over the world for people to show off their creations.
How long before this embeds itself in our daily life the same way you can purchase wood, nails and glue and put together something you need/want? With software easier to write (BASIC and Simplified Arduino C) everybody can program. With pre-built plug-in modules (shields), electronic interfacing to the real world is plug and play. Self-made drones. Smartphone-controlled products. And all can be invested in via Kickstarter.
Are we witnessing a revolution/evolution right before our eyes and many still don’t see it?
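For a sense of how little code the “Simplified Arduino C” mentioned in the comment above actually requires, here is the classic blink sketch in the Arduino dialect of C/C++ (offered only as an illustration; it assumes a standard board with a built-in LED):

```c
// The classic Arduino "blink" example, in the simplified C/C++ dialect
// the Arduino environment provides. LED_BUILTIN maps to the on-board
// LED on standard Arduino boards; no extra hardware is needed.

void setup() {
  pinMode(LED_BUILTIN, OUTPUT);    // configure the LED pin as an output
}

void loop() {
  digitalWrite(LED_BUILTIN, HIGH); // LED on
  delay(1000);                     // wait one second
  digitalWrite(LED_BUILTIN, LOW);  // LED off
  delay(1000);                     // wait one second
}
```

That is the entire program; the toolchain and the board handle everything else.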
It’s always been there since the late 80s. Things just got a whole lot cheaper and more accessible in recent years.
We used to have robot wars on TV here in the UK.
My candidate for the line that most needs to be removed:
“Radio was invented with the original idea that it would replace telephones and give us wireless communication. That implies two-way communication, yet how many of us own radio transmitters?”
Hi Bob, you used “Information Technology” twice. Was it in use then or was it your creation?
g+ takes me into a search of cringely on g+. I was expecting a +1 or a post.
Yes, it’s the “visit up” pop-up on every floatie. I think the annoying floaties may be Cringely’s new form of paid advertising. I sure hope they are paying him enough to annoy his readers.
One thing I miss about the old days is the lack of the social media hover-above and the WordPress admin bar and the dynamic to-the-top button that obscure the text that I’m trying to read. The old displays just didn’t have space and processing power to waste on this sort of triviality.
The new small displays don’t have space to waste either; especially when you’re using a full-featured browser that allows increasing the text size without forcing horizontal scrolling. Anything that superimposes itself over the text gets in the way.