CHAPTER FIVE
ROLE MODELS
This being the 1990s, when the economy is shot to hell and we’ve got nothing much better to do, the personal computer industry is caught up in an issue called look and feel, which means that your computer software can’t look too much like my computer software or I’ll take you to court. Look and feel is a matter of not only how many angels can dance on the head of a pin but what dance it is they are doing and who owns the copyright.
Here’s an example of look and feel. It’s 1913, and we’re at the Notre Dame versus Army football game (this is all taken straight from the film Knute Rockne, All-American, in which young Ronald Reagan appeared as the ill-fated Gipper—George Gipp). Changing football forever, the Notre Dame quarterback throws the first-ever forward pass, winning the game. A week later, Notre Dame is facing another team, say Purdue. By this time, word of the forward pass has gotten around, the Boilermakers have thrown a few in practice, and they like the effect. So early in the first quarter, the Purdue quarterback throws a forward pass. The Notre Dame coach calls a time-out and sends young Knute Rockne jogging over to the Purdue bench.
“Coach says that’ll be five dollars,” mumbles an embarrassed Knute, kicking at the dirt with his toe.
“Say what, son?”
“Coach says the forward pass is Notre Dame property, and if you’re going to throw one, you’ll have to pay us five dollars. I can take a check.”
That’s how it works. Be the first one on your block to invent a particular way of doing something, and you can charge the world for copying your idea or even prohibit the world from copying it at all. It doesn’t even matter if the underlying mechanism is different; if the two techniques look similar, one is probably violating the look and feel of the other.
Just think of the money that could have been earned by the first person to put four legs on a chair or to line up the clutch, brake, and gas pedals of a car in that particular order. My secret suspicion is that this sort of easy money was the real reason Alexander Graham Bell tried to get people to say “ahoy” when they answered the telephone rather than “hello.” It wasn’t enough that he was getting a nickel for the phone call; Bell wanted another nickel for his user interface.
There’s that term, user interface. User interface is at the heart of the look-and-feel debate because it’s the user interface that we’re always looking at and feeling. Say the Navajo nation wants to get back to its computing roots by developing a computer system that uses smoke signals to transfer data into and out of the computer. Whatever system of smoke puffs they settle on will be that computer’s user interface, and therefore protectable under law (U.S., not tribal).
What’s particularly ludicrous about this look-and-feel business is that it relies on all of us believing that there is something uniquely valuable about, for example, Apple Computer’s use of overlapping on-screen windows or pull-down menus. We are supposed to pretend that some particular interface concepts sprang fully grown and fully clothed from the head of a specific programmer, totally without reference to prior art.
Bullshit.
Nearly everything in computing, both inside and outside the box, is derived from earlier work. In the days of mainframes and minicomputers and early personal computers like the Apple II and the Tandy TRS-80, user interfaces were based on the mainframe model of typing out commands to the computer one 80-character line at a time—the same line length used by punch cards (IBM should have gone for the bucks with that one but didn’t). But the commands were so simple and obvious that it seemed stupid at the time to view them as proprietary. Gary Kildall stole his command set for CP/M from DEC’s TOPS-10 minicomputer operating system, and DEC never thought to ask for a dime. Even IBM’s VM command set for mainframe computers was copied by a PC operating system called Oasis (now Theos), but IBM probably never even noticed.
This was during the first era of microcomputing, which lasted from the introduction of the MITS Altair 8800 in early 1975 to the arrival of the IBM Personal Computer toward the end of 1981. Just like the golden age of television, it was a time when technology was primitive and restrictions on personal creativity were minimal, so everyone stole from everyone else. This was the age of 8-bit computing, when Apple, Commodore, Radio Shack, and a hundred-odd CP/M vendors dominated a small but growing market with their computers that processed data eight bits at a time. The flow of data through these little computers was like an eight-lane highway, while the minicomputers and mainframes had their traffic flowing on thirty-two lanes and more. But eight lanes were plenty, considering what the Apples and others were trying to accomplish, which was to put the computing environment of a mainframe computer on a desk for around $3,000.
Mainframes weren’t that impressive. There were no fancy, high-resolution color graphics in the mainframe world—nothing that looked even as good as a television set. Right from the beginning, it was possible to draw pictures on an Apple II that were impossible to do on an IBM mainframe.
Today, for example, several million people use their personal computers to communicate over worldwide data networks, just for fun. I remember when a woman on the CompuServe network ran a nude photo of herself through an electronic scanner and sent the digitized image across the network to all the men with whom she’d been flirting for months on-line. In grand and glorious high-resolution color, what was purported to be her yummy flesh scrolled across the screens of dozens of salivating computer nerds, who quickly forwarded the image to hundreds and then thousands of their closest friends. You couldn’t send such an image from one terminal to another on a mainframe computer; the technology doesn’t exist, or all those wacky secretaries who have hopped on Xerox machines to photocopy their backsides would have had those backsides in electronic distribution years ago.
My point is that the early pioneers of microcomputing stole freely from the mainframe and minicomputer worlds, but there wasn’t really much worth stealing, so nobody was bothered. But with the introduction of 16-bit microprocessors in 1981 and 1982, the mainframe role model was scrapped altogether. This second era of microcomputing required a new role model and new ideas to copy. And this time around, the ideas were much more powerful—so powerful that they were worth protecting, which has led us to this look-and-feel fiasco. Most of these new ideas came from the Xerox Palo Alto Research Center (PARC). They still do.
To understand the personal computer industry, we have to understand Xerox PARC, because that’s where most of the computer technology that we’ll use for the rest of the century was invented.
There are two kinds of research: research and development and basic research. The purpose of research and development is to invent a product for sale. Edison invented the first commercially successful light bulb, but he did not invent the underlying science that made light bulbs possible. Edison at least understood the science, though, which was the primary difference between inventing the light bulb and inventing fire.
The research part of R&D develops new technologies to be used in a specific product, based on existing scientific knowledge. The development part of R&D designs and builds a product using those technologies. It’s possible to do development without research, but that requires licensing, borrowing, or stealing research from somewhere else. If research and development is successful, it results in a product that hits the market fairly soon—usually within eighteen to twenty-four months in the personal computer business.
Basic research is something else—ostensibly the search for knowledge for its own sake. Basic research provides the scientific knowledge upon which R&D is later based. Sending telescopes into orbit or building superconducting supercolliders is basic research. There is no way, for example, that the $1.5 billion Hubble space telescope is going to lead directly to a new car or computer or method of solid waste disposal. That’s not what it’s for.
If a product ever results from basic research, it usually does so fifteen to twenty years down the road, following a later period of research and development.
What basic research is really for depends on who is doing the research and how they are funded. Basic research takes place in government, academic, and industrial laboratories, each for a different purpose. Basic research in government labs is used primarily to come up with new ideas for blowing up the world before someone else in some unfriendly country comes up with those same ideas. While the space telescope and the supercollider are civilian projects intended to explain the nature and structure of the universe, understanding that nature and structure is very important to anyone planning the next generation of earth-shaking weapons. Two thirds of U.S. government basic research is typically conducted for the military, with health research taking most of the remaining funds.
Basic research at universities comes in two varieties: research that requires big bucks and research that requires small bucks. Big bucks research is much like government research and in fact usually is government research but done for the government under contract. Like other government research, big bucks academic research is done to understand the nature and structure of the universe or to understand life, which really means that it is either for blowing up the world or extending life, whichever comes first. Again, that’s the government’s motivation. The universities’ motivation for conducting big bucks research is to bring in money to support professors and graduate students and to wax the floors of ivy-covered buildings. While we think they are busy teaching and learning, these folks are mainly doing big bucks basic research for a living, all the while priding themselves on their terrific summer vacations and lack of a dress code.
Small bucks basic research is the sort that requires paper and pencil, and maybe a blackboard, and is aimed primarily at increasing knowledge in areas of study that don’t usually attract big bucks—that is, areas that don’t extend life or end it, or both. History, political science, and romance languages are typical small bucks areas of basic research. The real purpose of small bucks research to the universities is to provide a means of deciding, by the quality of their small bucks research, which professors in these areas should get tenure.
Nearly all companies do research and development, but only a few do basic research. The companies that can afford to do basic research (and can’t afford not to) are ones that dominate their markets. Most basic research in industry is done by companies that have at least a 50 percent market share. They have both the greatest resources to spare for this type of activity and the most to lose if, by choosing not to do basic research, they eventually lose their technical advantage over competitors. Such companies typically devote about 1 percent of sales each year to research intended not to develop specific products but to ensure that the company remains a dominant player in its industry twenty years from now. It’s cheap insurance, since failing to do basic research guarantees that the next major advance will be owned by someone else.
The problem with industrial basic research, and what differentiates it from government basic research, is the fact that its true product is insurance, not knowledge. If a researcher at the government-sponsored Lawrence Livermore Lab comes up with some particularly clever new way to kill millions of people, there is no doubt that his work will be exploited and that weapons using the technology will eventually be built. The simple rule about weapons is that if they can be built, they will be built. But basic researchers in industry find their work is at the mercy of the marketplace and their captains-of-industry bosses. If a researcher at General Motors comes up with a technology that will allow cars to be built for $100 each, GM executives will quickly move to bury the technology, no matter how good it is, because it threatens their current business, which is based on cars that cost thousands of dollars each to build. Consumers would revolt if it became known that GM was still charging high prices for cars that cost $100 each to build, so the better part of business valor is to stick with the old technology since it results in more profit dollars per car produced.
In the business world, just because something can be built does not at all guarantee that it will be built, which explains why RCA took a look at the work of George Heilmeier, a young researcher working at the company’s research center in New Jersey, and quickly decided to stop work on Heilmeier’s invention, the liquid crystal display. RCA made this mid-1960s decision because LCDs might have threatened its then-profitable business of building cathode ray picture tubes. Twenty-five years later, of course, RCA is no longer a factor in the television market, and LCD displays—nearly all made in Japan—are everywhere.
Most of the basic research in computer science has been done at universities under government contract, at AT&T Bell Labs in New Jersey and in Illinois, at IBM labs in the United States, Europe, and Japan, and at Xerox PARC in California. It’s PARC that we are interested in because of its bearing on the culture of the personal computer.
Xerox PARC was started in 1970 when leaders of the world’s dominant maker of copying machines had a sinking feeling that paper was on its way out. If people started reading computer screens instead of paper, Xerox was in trouble, unless the company could devise a plan that would lead it to a dominant position in the paperless office envisioned for 1990. That plan was supposed to come from Xerox PARC, a group of very smart people working in buildings on Coyote Hill Road in the Stanford Industrial Park near Stanford University.
The Xerox researchers were drawn together over the course of a few months from other corporations and from universities and then plunked down in the golden hills of California, far from any other Xerox facility. They had nothing at all to do with copiers, yet they worked for a copier company. If they came to have a feeling of solidarity, then, it was much more with each other than with the rest of Xerox. The researchers at PARC soon came to look down on the marketers at Xerox HQ, especially when they were asked questions like, “Why don’t you do all your programming in BASIC—it’s so much easier to learn,” which was like suggesting that Yehudi Menuhin switch to rhythm sticks.
The researchers at PARC were iconoclastic, independent, and not even particularly secretive, since most of their ideas would not turn into products for decades. They became the celebrities of computer science and were even profiled in Rolling Stone.
PARC was supposed to plot Xerox a course into the electronic office of the 1990s, and the heart of that office would be, as it always had been, the office worker. Like designers of typewriters and adding machines, the deep thinkers at Xerox PARC had to develop systems that would be useful to lightly trained people working in an office. This is what made Xerox different from every other computer company at that time.
Some of what developed as the PARC view of future computing was based on earlier work by Doug Engelbart, who worked at the Stanford Research Institute in nearby Menlo Park. Engelbart was the first computer scientist to pay close attention to user interface—how users interact with a computer system. If computers could be made easier to use, Engelbart thought, they would be used by more people and with better results.
Punch cards entered data into computers one card at a time. Each card carried a line of data up to 80 characters wide. The first terminals simply replaced the punch card reader with a new input device; users still submitted data, one 80-column line at a time, through a computer terminal. While the terminal screen might display as many as 25 lines at a time, only the bottom line was truly active and available for changes. Once the carriage return key was punched, those data were in the computer: no going back to change them later, at least not without telling the computer that you wanted to reedit line 32, please.
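For anyone who never used one, here is a toy sketch of that line-at-a-time model (illustrative Python, not any real mainframe editor): text goes in one line at a time, and fixing anything already entered means asking for the offending line by number.

    # A toy line-at-a-time editor. The only operations are "append the next
    # line" and "re-edit line N by number" -- there is no cursor, no screen,
    # and no way to simply point at the text you want to change.

    class LineEditor:
        def __init__(self):
            self.lines = []                    # the document, one line at a time

        def append(self, text):
            self.lines.append(text[:80])       # anything past column 80 is lost

        def reedit(self, number, text):
            # Lines are numbered from 1, the way mainframe editors reported them.
            self.lines[number - 1] = text[:80]

        def listing(self):
            return "\n".join(f"{n:>4}  {line}"
                             for n, line in enumerate(self.lines, start=1))

    editor = LineEditor()
    editor.append("Dear Sir,")
    editor.append("Thank you for your letter of the 12th.")
    editor.append("Yours sincerly,")           # oops
    editor.reedit(3, "Yours sincerely,")       # fixing it means naming line 3
    print(editor.listing())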
I once wrote an entire book using a line editor on an IBM mainframe, and I can tell you it was a pain.
Engelbart figured that real people in real offices didn’t write letters or complete forms one line at a time, with no going back. They thought in terms of pages, rather than lines, and their pens and typewriters could be made to scroll back and forth and move vertically on the page, allowing access to any point. Engelbart wanted to bring that page metaphor to the computer by inventing a terminal that would allow users to edit anywhere on the screen. This type of terminal required some local intelligence, keeping the entire screen image in the terminal’s memory. This intelligence was also necessary to manage a screen that was much more flexible than its line-by-line predecessor; it was made up of thousands of points that could be turned on or off.
The new point-by-point screen technology, called bit mapping, also required a means for roaming around the screen. Engelbart used what he called a mouse, which was a device the size of a pack of cigarettes on wheels that could be rolled around on the table next to the terminal and was connected to the terminal by a wire. Moving the mouse caused the cursor on-screen to move too.
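Underneath, a bit-mapped screen is nothing more than a big array of points that are individually on or off, plus a cursor position that the mouse nudges around by reporting how far it has rolled. The sketch below is purely illustrative, with made-up dimensions far smaller than any real display:

    # A toy bit-mapped screen: the whole image lives in memory as a grid of
    # on/off points, and the mouse moves a cursor by reporting relative motion.

    WIDTH, HEIGHT = 64, 24                          # made-up dimensions for the example

    screen = [[0] * WIDTH for _ in range(HEIGHT)]   # 0 = off, 1 = on
    cursor_x, cursor_y = 0, 0

    def set_point(x, y, on=1):
        """Turn a single point on or off -- the primitive every image is built from."""
        screen[y][x] = on

    def move_mouse(dx, dy):
        """The mouse reports how far it rolled; the cursor is kept on the screen."""
        global cursor_x, cursor_y
        cursor_x = max(0, min(WIDTH - 1, cursor_x + dx))
        cursor_y = max(0, min(HEIGHT - 1, cursor_y + dy))

    # Draw a short horizontal line, then roll the mouse over to it and mark the spot.
    for x in range(10, 20):
        set_point(x, 5)
    move_mouse(12, 5)
    set_point(cursor_x, cursor_y)
    print(f"cursor is at ({cursor_x}, {cursor_y})")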
With Engelbart’s work as a start, the folks at PARC moved toward prototyping more advanced systems of networked computers that used mice, page editors, and bit-mapped screens to make computing easier and more powerful.
During the 1970s, the Computer Science Laboratory (CSL) at Xerox PARC was the best place in the world for doing computer research. Researchers at PARC invented the first high-speed computer networks and the first laser printers, and they devised the first computers that could be called easy to use, with intuitive graphical displays. The Xerox Alto, which had built-in networking, a black-on-white bit-mapped screen, a mouse, and hard disk data storage and sat under the desk looking like R2D2, was the most sophisticated computer workstation of its time, because it was the only workstation of its time. Like the other PARC advances, the Alto was a wonder, but it wasn’t a product. Products would have taken longer to develop, with all their attendant questions about reliability, manufacturability, marketability, and profitability—questions that never once crossed a brilliant mind at PARC. Nobody was expected to buy computers built by Xerox PARC.
There is a very good book about Xerox PARC called Fumbling the Future, which says that PARC researchers Butler Lampson and Chuck Thacker were inventing the first personal computer when they designed and built the Alto in 1972 and 1973 and that by choosing not to commercialize the Alto, Xerox gave up its chance to become the dominant player in the coming personal computer revolution. The book is good, but this conclusion is wrong. Just the parts to build an Alto in 1973 cost $10,000, which suggests that a retail Alto would have had to sell for at least $25,000 (1973 dollars, too) for Xerox to make money on it. When personal computers finally did come along a couple of years later, the price point that worked was around $3,000, so the Alto was way too expensive. It wasn’t a personal computer.
And there was no compelling application on the Alto—no VisiCalc, no single function—that could drive a potential user out of the office, down the street, and into a Xerox showroom just to buy it. The idea of a spreadsheet never came to Xerox. Peter Deutsch wrote about what he called spiders—values (like 1989 revenues) that appeared in multiple documents, all linked together. Change a value in one place and the spider made sure that value was changed in all linked places. Spiders were like spreadsheets without the grid of columns and rows and without the clearly understood idea that the linked values were used to solve quantitative problems. Spiders weren’t VisiCalc.
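To make the distinction concrete, here is a rough reconstruction of the spider idea (my guess at the mechanics, not Deutsch's actual design): one shared value referenced from several documents, so a change made anywhere shows up everywhere, but with no grid, no rows and columns, and no formulas.

    # A rough sketch of a "spider": one named value shared by several documents.
    # Change the value and every document that links to it sees the change.

    class Spider:
        def __init__(self, name, value):
            self.name = name
            self.value = value

    class Document:
        def __init__(self, title, template):
            self.title = title
            self.template = template           # text with {placeholders} for spider names

        def render(self, spiders):
            return self.template.format(**{s.name: s.value for s in spiders})

    revenues = Spider("revenues_1989", "$1.2 billion")

    annual_report = Document("Annual Report",
                             "Revenues for 1989 were {revenues_1989}.")
    press_release = Document("Press Release",
                             "The company announced 1989 revenues of {revenues_1989}.")

    revenues.value = "$1.3 billion"            # one change, reflected everywhere
    for doc in (annual_report, press_release):
        print(doc.render([revenues]))

What the sketch lacks is exactly what made VisiCalc matter: the grid of cells and formulas that turned linked numbers into a tool for answering "what if" questions.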
If Xerox made a mistake in its handling of the Alto, it was in almost choosing to sell it. The techies at PARC knew that the Alto was the best workstation around, but they didn’t think about the pricing and application issues. When Xerox toyed with the idea of selling the Alto, that consideration instantly erased any doubts in the minds of its developers that theirs was a commercial system. David Kearns, the president of Xerox, kept coming around, nodding his head, and being supportive but somehow never wrote the all-important check.
Xerox’s on-again, off-again handling of the Alto alienated the technical staff at PARC, who never really understood why their system was not marketed. To them, it seemed as if Kearns and Xerox, like the owners of Sutter’s Mill, had found gold in the stream but decided to build condos on the spot instead of mining because it was never meant to be a gold mine.
There was a true sense of the academic—the amateur—in Ethernet too. PARC’s technology for networking all its computers together was developed in 1973 by a team led by Bob Metcalfe. Metcalfe’s group was looking for a way to speed up the link between computers and laser printers, both of which had become so fast that the major factor slowing down printing was, in fact, the wire between the two machines rather than anything having to do with either the computer or the printer. The image of the page was created in the memory of the computer and then had to be transmitted bit by bit to the printer. At 600 dots-per-inch resolution, this meant sending more than 33 million bits across the wire for each page. The computer could resolve the page in memory in 1 second and the printer could print the page in 2 seconds, but sending the data over what was then considered to be a high-speed serial link took just under 15 minutes. If laser printers were going to be successful in the office, a faster connection would have to be invented.
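The arithmetic is easy to check. In the sketch below, the letter-size page and the 38,400 bits-per-second serial link are my assumptions; the book gives only the result:

    # Back-of-the-envelope check on the printing bottleneck. Assumed: a full
    # 8.5 x 11 inch page at 600 dots per inch, one bit per dot, and a serial
    # link running at 38,400 bits per second.

    bits_per_page = int(8.5 * 600) * int(11 * 600)   # 5,100 x 6,600 = 33,660,000 bits

    serial_bps = 38_400                              # assumed speed of the old link
    minutes = bits_per_page / serial_bps / 60

    print(f"bits per page: {bits_per_page:,}")       # a bit over 33 million
    print(f"time over the serial link: {minutes:.1f} minutes")   # just under 15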
PARC’s printers were computers in their own right that talked back and forth with the computers they were attached to, and this two-way conversation meant that data could collide if both systems tried to talk at once. Place a dozen or more computers and printers on the same wire, and the risk of collisions was even greater. In the absence of a truly great solution to the collision problem, Metcalfe came up with one that was at least truly good and time honored: he copied the telephone party line. Good neighbors listen on their party line first, before placing a call, and that’s what Ethernet devices do too—listen, and if another transmission is heard, they wait a random time interval before trying again. Able to transmit data at 2.67 million bits per second across a coaxial cable, Ethernet was a technical triumph, cutting the time to transmit that 600 dpi page from 15 minutes down to 12 seconds.
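The party-line rule itself is simple enough to sketch. What follows illustrates only the listen-first, back-off-randomly idea, not Ethernet's actual algorithm, which also detects collisions in progress and stretches its random backoff after each one:

    # Illustration of the party-line rule: listen before talking, and if the
    # wire is busy, wait a random interval before listening again.

    import random
    import time

    def send_on_party_line(frame, line_is_busy, transmit,
                           max_wait_seconds=0.01, attempts=16):
        """line_is_busy() reports whether someone else is talking;
        transmit(frame) actually puts the frame on the wire."""
        for _ in range(attempts):
            if not line_is_busy():
                transmit(frame)                # the line sounded clear, so talk
                return True
            # Someone else is talking: back off for a random interval so that
            # two polite stations don't keep retrying in lockstep.
            time.sleep(random.uniform(0, max_wait_seconds))
        return False                           # gave up after too many busy signals

    # Example with a line that happens to be free:
    sent = send_on_party_line(b"hello",
                              line_is_busy=lambda: False,
                              transmit=lambda f: print("sent", f))
    print("delivered" if sent else "dropped")

The random wait is the whole trick: without it, two stations that found the wire busy at the same instant would retry in lockstep and collide again and again.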
At 2.67 megabits per second (mbps), Ethernet was a hell of a product, for both connecting computers to printers and, as it turned out, connecting computers to other computers. Every Alto came with Ethernet capability, which meant that each computer had an individual address or name on the network. Each user named his own Alto. John Ellenby, who was in charge of building the Altos, named his machine Gzunda “because it gzunda the desk.”
The 2.67 mbps Ethernet technology was robust and relatively simple. But since PARC wasn’t supposed to be interested in doing products at all but was devoted instead to expanding the technical envelope, the decision was made to scale Ethernet up to 10 mbps over the same wire with the idea that this would allow networked computers to split tasks and compute in parallel.
Metcalfe had done some calculations that suggested the marketplace would need only 1 mbps through 1990 and 10 mbps through the year 2000, so it was decided to aim straight for the millennium and ignore the fact that 2.67 mbps Ethernet would, by these calculations, have a useful product life span of approximately twenty years. Unfortunately, 10 mbps Ethernet was a much more complex technology—so much more complex that it literally turned what might have been a product back into a technology exercise. Saved from its brush with commercialism, it would be another six years before 10 mbps Ethernet became a viable product, and even then it wouldn’t be under the Xerox label.
Beyond the Alto, the laser printer, and Ethernet, what Xerox PARC contributed to the personal computer industry was a way of working—Bob Taylor’s way of working.
Taylor was a psychologist from Texas who in the early 1960s got interested in what people could and ought to do with computers. He wasn’t a computer scientist but a visionary who came to see his role as one of guiding the real computer scientists in their work. Taylor began this task at NASA and then shifted a couple of years later to working at the Department of Defense’s Advanced Research Projects Agency (ARPA). ARPA was a brainchild program of the Kennedy years, intended to plunk money into selected research areas without the formality associated with most other federal funding. The ARPA funders, including Taylor, were supposed to have some idea in what direction technology ought to be pushed to stay ahead of the Soviet Union, and they were expected to do that pushing with ARPA research dollars. By 1965, 33-year-old Bob Taylor was in control of the world’s largest governmental budget for advanced computer research.
At ARPA, Taylor funded fifteen to twenty projects at a time at companies and universities throughout the United States. He brought the principal researchers of these projects together in regular conferences where they could share information. He funded development of the ARPAnet, the first nationwide computer communications network, primarily so these same researchers could stay in constant touch with each other. Taylor made it his job to do whatever it took to find the best people doing the best work and help them to do more.
When Xerox came calling in 1970, Taylor was already out of the government following an ugly experience reworking U.S. military computer systems in Saigon during the Vietnam War. For the first time, Taylor had been sent to solve a real-world computing problem, and reality didn’t sit well with him. Better to get back to the world of ideas, where all that was corrupted were the data, and there was no such thing as a body count.
Taylor held a position at the University of Utah when Xerox asked him to work as a consultant, using his contacts to help staff what was about to become the Computer Science Laboratory (CSL) at PARC. Since he wasn’t a researcher himself, Taylor wasn’t considered qualified to run the lab, though he eventually weaseled into that job too.
Alan Kay, jazz musician, computer visionary, and Taylor’s first hire at PARC, liked to say that of the top one hundred computer researchers in the world, fifty-eight of them worked at PARC. And sometimes he said that seventy-six of the top one hundred worked at PARC. The truth was that Taylor’s lab never had more than fifty researchers, so both numbers were inflated, but it was also true that for a time under Taylor, CSL certainly worked as though there were many more than fifty researchers. In less than three years from its founding in 1970, CSL researchers built their own time-sharing computer, built the Alto, and invented both the laser printer and Ethernet.
To accomplish so much so fast, Taylor created a flat organizational structure; everyone who worked at CSL, from scientists to secretaries, reported directly to Bob Taylor. There were no middle managers. Taylor knew his limits, though, and those limits said that he had the personal capacity to manage forty to fifty researchers and twenty to thirty support staff. Changing the world with that few people required that they all be the best at what they did, so Taylor became an elitist, hiring only the best people he could find and subjecting potential new hires to rigorous examination by their peers, designed to “test the quality of their nervous systems.” Every new hire was interviewed by everyone else at CSL. Would-be researchers had to appear in a forum where they were asked to explain and defend their previous work. There were no junior research people. Nobody was wooed to work at CSL; they were challenged. The meek did not survive.
Newly hired researchers typically worked on a couple of projects with different groups within CSL. Nobody worked alone. Taylor was always cross-fertilizing, shifting people from group to group to get the best mix and make the most progress. Like his earlier ARPA conferences, Taylor chaired meetings within CSL where researchers would present and defend their work. These sessions came to be called Dealer Meetings, because they took place in a special room lined with blackboards, where the presenter stood like a blackjack dealer in the center of a ring of bean-bag chairs, each occupied by a CSL genius taking potshots at this week’s topic. And there was Bob Taylor, too, looking like a high school science teacher and keeping overall control of the process, though without seeming to do so.
Let’s not underestimate Bob Taylor’s accomplishment in just getting these people to communicate on a regular basis. Computer people love to talk about their work—but only their work. A Dealer Meeting not under the influence of Bob Taylor would be something like this:
Nerd A (the dealer): “I’m working on this pattern recognition problem, which I see as an important precursor to teaching computers how to read printed text.”
Nerd B (in the beanbag chair): “That’s okay, I guess, but I’m working on algorithms for compressing data. Just last night I figured out how to … ”
See? Without Taylor it would have been chaos. In the Dealer Meetings, as in the overall intellectual work of CSL, Bob Taylor’s function was as a central switching station, monitoring the flow of ideas and work and keeping both going as smoothly as possible. And although he wasn’t a computer scientist and couldn’t actually do the work himself, Taylor’s intermediary role made him so indispensable that it was always clear who worked for whom. Taylor was the boss. They called it “Taylor’s lab.”
While Bob Taylor set the general direction of research at CSL, the ideas all came from his technical staff. Coming up with ideas and then turning them into technologies was all these people had to do. They had no other responsibilities. While they were following their computer dreams, Taylor took care of everything else: handling budgets, dealing with Xerox headquarters, and generally keeping the whole enterprise on track. And his charges didn’t always make Taylor’s job easy.
Right from the start, for example, they needed a DEC PDP-10 time-sharing system, because that was what Engelbart had at SRI, and PDP-10s were also required to run the ARPAnet software. But Xerox had its own struggling minicomputer operation, Scientific Data Systems, which was run by Max Palevsky down in El Segundo. Rather than buy a DEC computer, why not buy one of Max’s Sigma computers, which competed directly with the PDP-10? Because software is vastly more complex than hardware, that’s why. You could build your own copy of a PDP-10 in less time than it would take to modify the software to run on Xerox’s machine! And so they did. CSL’s first job on their way toward the office of the future was to clone the PDP-10. They built the Multi-Access Xerox Computer (MAXC). The C was silent, just to make sure that Max Palevsky knew the computer was named in his honor.
The way to create knowledge is to start with a strong vision and then ruthlessly abandon parts of that vision to uncover some greater truth. Time sharing was part of the original vision at CSL because it had been part of Engelbart’s vision, but having gone to the trouble of building its own time-sharing system, the researchers at PARC soon realized that time sharing itself was part of the problem. MAXC was thrown aside for networks of smaller computers that communicated with each other—the Alto.
Taylor perfected the ideal environment for basic computer research, a setting so near to perfect that it enabled four dozen people to invent much of the computer technology we have today, led not by another computer scientist but by an exceptional administrator with vision.
I’m writing this in 1991, when Bill Gates of Microsoft is traveling the world preaching a new religion he calls Information At Your Fingertips. The idea is that PC users will be able to ask their machines for information, and, if it isn’t available locally, the PC will figure out how and where to find it. No need for Joe User to know where or how the information makes its way to his screen. That stuff can be left up to the PC and to the many other systems with which it talks over a network. Gates is making a big deal of this technology, which he presents pretty much as his idea. But Information At Your Fingertips was invented at Xerox PARC in 1973. Like so many PARC inventions, though, it’s only now that we have the technology to implement it at a price normal mortals can afford.
In its total dedication to the pursuit of knowledge, CSL was like a university, except that the pay and research budgets were higher than those usually found in universities and there was no teaching requirement. There was total dedication to doing the best work with the best people—a purism that bordered on arrogance, though Taylor preferred to see it more as a relentless search for excellence.
What sounded to the rest of the world like PARC arrogance was really the fallout of the lab’s intense and introverted intellectual environment. Taylor’s geniuses, used to dealing with each other and not particularly sensitive to the needs of mere mortals, thought that the quality of their ideas was self-evident. They didn’t see the need to explain—to translate the idea into the world of the other person. Beyond pissing off Miss Manners, the fatal flaw in this PARC attitude was their failure to understand that there were other attributes to be considered as well when examining every idea. While idea A may be, in fact, better than idea B, A is not always cheaper, or more timely, or even possible—factors that had little relevance in the think tank but terrific relevance in the marketplace.
In time the dream at CSL and Xerox PARC began to fade, not because Taylor’s geniuses had not done good work but because Xerox chose not to do much with the work they had done. Remember, this is industrial basic research—that is, insurance. Sure, PARC invented the laser printer and the computer network and perfected the graphical user interface and something that came to be known as what-you-see-is-what-you-get computing on a large computer screen, but the captains of industry at Xerox headquarters in Stamford, Connecticut, were making too much money the old way—by making copiers—to remake Xerox into a computer company. They took a couple of halfhearted stabs, introducing systems like the Xerox Star, but generally did little to promote PARC technology. From a business standpoint, Xerox probably did the right thing, but in the long term, failing to develop PARC technology alienated the PARC geniuses.

In his 1921 book The Engineers and the Price System, economist Thorstein Veblen pointed out that in high-tech businesses, the true value of a company is found not in its physical assets but in the minds of its scientists and engineers. No factory could continue to operate if the knowledge of how to design its products and fix its tools of production was lost. Veblen suggested that the engineers simply organize and refuse to work until they were given control of industry. By the 1970s, though, the value of computer companies was so highly concentrated in the programmers and engineers that there was not much to demand control of. It was easier for disgruntled engineers just to walk, taking with them in their minds 70 or 80 percent of what they needed to start a new company. Just add money.
From inside their ivory tower, Taylor’s geniuses saw less able engineers and scientists starting companies of their own and getting rich. As it became clear that Xerox was going to do little or nothing with their technology, some of the bolder CSL veterans began to hit the road as entrepreneurs in their own right, founding several of the most important personal computer hardware and software companies of the 1980s. They took with them Xerox technology—its look and feel too. And they took Bob Taylor’s model for running a successful high-tech enterprise—a model that turned out not to be so perfect after all.
I am really digging your republication of this book. It is fascinating to see the perspective from 1991. I was exposed to computers in the mainframe lab, then when the first Apple came out. Then the PC with floppies, etc. However, being a young doctor in the mid 80’s and having “moved past” my hobbyist interests, and having too much other stuff to learn, I became a more standard computer user, albeit a fairly early adopter of things. I remember Bill Gates’ information at your fingertips, and that’s what I always yearned for. Likewise, I remember searching for clinical information with my medical students on Alta Vista when it became available – when? – mid 90’s, and then people thinking I was crazy for searching Google when it came along, instead of PubMed directly. I’m digressing. You were a lot smarter about these things in 1991 than I was, and even now your insights from the time are fascinating. Sad to say, I never paid you for reading the book when you actually sold it; I never heard about it, until I started reading your columns in the early 2000’s. I would have, though. And I’d have loved it then. But reading it now, it provides not only the history, but allows me to examine my own thoughts and feelings about computing over the years, and specifically around 1991, and to compare them with where we’ve come and how I think differently now. Thanks so much for the opportunity! Sent, by the way, from my armchair, via iPad…
My story is different, but you perfectly expressed my feelings about reading this now.
“It wasn’t enough that he was getting a nickle for the phone call; Bell wanted another nickle
for his user interface.”
nickle should be nickel in both instances.
Can I make a call for a dime????
Right now it is about 20 dollars a day.
How did you get $20/day?
Suggestions:
– talking of patents, it was noted that much is built upon prior work, the same is very often true in patents, it is possible to patent an improvement, it is done all the time
– talking of patents and look and feel, there is a lot of prior to technology trade dress patents, one popularly cited one is coke bottle
– talking of ibm and 80 columns, they did not patent that but they reportedly did patent the idea of the cursor moving to the beginning of the next line when leaving the 80th column on a display, this was often reported anecdotally when they reportedly went after PC makers for patent portfolio licenses
– i liked the expanded section on PARC
mlk
(I don’t know how nit-picky you really want us to get here…)
—
“…the introduction of the MITS Altair 8800 in early 1075 to the arrival…”
OCR error (10 is superscripted) – change to 1975.
—
“To understand the personal computer industry, we have to understand Xerox PARC, because that’s where the most of the computer technology that we’ll use for the rest of the century was invented.”
change “…that’s where the most of the…” to “…that’s where most of the…”
—
Re Engelbart: check out https://www.theregister.co.uk/2008/12/11/engelbart_celebration/. There is a link to the famous 1968 demo there, and there is an Engelbart-Taylor connection which may be worth mentioning in passing.
—
“Unfortunately, 10 mbps Ethernet was a much more complex technology—so much more complex that it literally turned what might have been a product back into a techr nology exercise.”
OCR error – “techr nology”
—
“…PDP-io…”
There are a number of these in the chapter. OCR error – change to “PDP-10”
—
“…Information At Your Fingertips was invented at Xerox PARC in 1973′- …”
OCR error in punctuation at end of sentence
—
“I’m writing this in 1991, when Bill Gates of Microsoft is traveling the world preaching a new religion he calls Information At Your Fingertips. The idea is that PC users will be able to ask their machines for information, and, if it isn’t available locally, the PC will figure out how and where to find it. ” – something else that came true, as in ask SIRI for information and if it isn’t on your iPhone she’ll search the web….
Bob,
I don’t understand how you want the readers to help improve your chapters. In the first place, all the OCR errors should have been fixed (by a spell checker program or a human editor) before you loaded the text into these semi-weekly sessions. Then the readers could concentrate on the important matters — improving the concepts, insights, and general usefulness of these chapters. Or am I barking up the wrong tree? Arff, arff.
Good question. Perhaps one way Bob could answer the question of what we are supposed to comment on, is to let us know if this quote from the old book will appear unchanged in the update: “I’m writing this in 1991,”. At least that would let us know such things as whether “nerd” should be replaced by “geek”.
I am also enjoying this “look into the future and the past, from the past” and am also wondering if it might not be more helpful for you to give your readers some structure in adding to this updated version. I also am overtaken by OCR issues and ramblings of not-so-helpful replies. I don’t really have any solution but am just hoping that by tweaking your process somehow that you could get more value out of this exercise. When reading about Bob Taylor I ended up on Wikipedia http://en.wikipedia.org/wiki/Robert_Taylor_(computer_scientist)#ARPA and thought it might be interesting to add to wikipedia or showcase how they have it relative to your version.
Take care.
I overstated my case. Please delete the last “arff.”
i remember using one of those first mice: a block of wood with 2 wheels at 90 degree angles. you had to move it in the plane of the wheels, and not diagonal. some wag had glued a piece of mouse skin on the end of it as a tail.
we also used a chord keyboard (like http://research.microsoft.com/en-us/um/people/bibuxton/buxtoncollection/detail.aspx?id=7).
i used these in a project associated with SRI and the Air Force, doing editing of manuals (via a TIP) from VA to CA over the Arpanet. this was in the 1973-74ish timeframe.
RE Mice: Somewhere in my collection of debris I have a IT history book stating a cursor moving device, now known as a mouse was developed by an IBM engineer in 1957. Made of wood. Predates PARC somewhat if true.
Or did I not absorb Bobs text somewhere ?
If IBM had invented the mouse, there would have been a patent filed. Surely they weren’t that short-sighted even back then.
Bill English built the first prototype for Engelbart. 1963
“gzunda”?
Bazinga!
The hiring process and organizational structure described of Xerox PARC’s CSL sound a lot like the early days of Google.
So it’s terribly funny and interesting that, 10 years after this book, Google came along to do what Bill Gates was talking about. Microsoft is still around, but isn’t the dominant player .. or at least, isn’t the only one.
That’s the problem with Bill. He’s fantastic at re-imagining and developing other people’s products – BASIC, MS-DOS, word processors, spreadsheets, the GUI, etc. All improved as products, but essentially clones of existing stuff.
When it comes to original ideas or products, where there isn’t a successful existing one to iterate on, he floundered. IAYF, mobile computing, WinFS, etc.
innovators usually trump inventors in business success.
Here are a few comments, some directly related to Chapter 5 and others more general.
My first micro computer was a KIM (6502 single board) from which I learned low-level software. Next was an early Apple II. I was ready to buy a Commodore PET but saw a side-by-side comparison with Apple II at a Houston retailer. Short story, I wanted the Apple II and was willing to wait (save more money) until I could afford it. Just like today, more $$$ but worth it in my opinion. That is the story of Apple’s success — a superior product at a premium, but not unreasonable, price.
Back to chapter 5. In about 1978 I saw a demo of the XEROX system at a national computer conference (NCC). To this day, I remember seeing the mouse and window environment in action. My jaw dropped — I will never forget it. And, I will never forget when Apple introduced the original Mac (of course, I was aware of LISA — beyond my budget). I waited until rev 2 (the fat Mac, 512 KB of RAM) to buy. Used Windows at work, but only bought Apple products for personal use (photography, software development and general household use).
Conclusion. Jobs saw the value in XEROX technology and did the development after XEROX did the research.
I think every computer visionary influenced by Vannevar Bush (“As We May Think”, 1945 https://www.theatlantic.com/magazine/archive/1945/07/as-we-may-think/303881/) has wanted something like Information At Your Fingertips. We should say, every computer visionary has wanted the memex, which was Bush’s term for the thing. And it remains elusive.
Bill Gates has failed utterly at this. The last attempt at his vision seems to be WinFS, and then he quit and Microsoft’s not trying in public anymore. Instead, we have Google. It indexes the Internet, but it doesn’t index your hard drive, and it also quit trying some time ago. Siri doesn’t draw from apps not produced by Apple. Ubuntu Dash is another attempt, and it’s proving to be somewhat controversial.
We’ll continue approaching the memex, but I don’t think we’ll actually achieve it.
In my opinion, having ready access to vast stores of knowledge isn’t nearly as important as being able to make the most effective use of whatever information is available. To make effective use of information requires users to discard those limiting beliefs that prevent them from taking the constructive actions needed to achieve meaningful goals and/or improve their lives.
It’s amusing how bandwidth use keeps going up. Back in 1991, 10 Mbps Ethernet was common, though a little expensive. By 2000, 100 Mbps was common even for cheap PCs. It was hard to imagine uses for so much bandwidth back in the 1970s, but increases in bandwidth enable new ideas. Or revive old ideas that were impractical when bandwidth was too scarce. Now many PCs have gigabit Ethernet, high-end servers have 10G Ethernet, and data centers are moving to 40G and 100G Ethernet.
Because increased bandwidth enables new ideas, it makes me somewhat upset how the American ISPs insist on providing the minimum of bandwidth. The American taxpayer has paid them to upgrade our Internet, but they implement bandwidth caps and pocket the change. We need to bypass or regulate the telecom companies into improving our bandwidth, just to enhance our global competitiveness.
While it’s true that some regulation is needed in the case of natural monopolies it must be used very carefully. Back in the 50s the USSR controlled the price of all essential goods like bread and toilet paper; what happened was a severe shortage of bread and toilet paper and the eventual downfall of communism. People don’t do what they do just because they have a knack for it but because it makes their lives better overall. Take away the incentive and you take away the product.
Which is why water and electricity are so scarce in American cities. They’re so tightly regulated that nobody has an incentive to make them work well. This is sarcasm.
The truth is that very little about corporations is natural. Even corporate personhood is a legal fiction enforced by governments. Bread and toilet paper are subject to regulations about truth in advertising and purity of the product. The question is not whether we should have regulation, but how best to regulate. Practically anybody can start making and selling bread, which is what the planned economies suppressed, but we do so with a lot of government intervention.
One major intervention is transport. Bread and toilet paper are carried on public infrastructure practicing network neutrality. Shipping lanes are kept safe via government intervention against piracy, airlines work with many safety regulations, and roads and rails are frequently bought and paid for with federal funds. An Internet that gives freedom should be similarly trustworthy.
I can imagine many situations where regulations can be harmful, and several times when harmful regulations have been passed, but regulations are often necessary.
Agree with most of what you say except possibly: “This is sarcasm”. Not entirely, as there is some element of truth, especially since “regulation” has turned into ads that tell customers to use less of their product and demands that infrastructure be maintained but not expanded, since expansion would justify price increases, which must be kept to an absolute minimum.
“Bread and toilet paper are carried on public infrastructure practicing network neutrality.”
They probably don’t intervene after use, it just goes down the pan with minimal paperwork.
But it’s still transportation.
This chapter uses PARC and CSL as synonymous, but the Computer Science Laboratory was just one division of PARC while the System Science Laboratory (SSL) where Alan Kay had his Learning Research Group was another.
About the number of researchers, you shouldn’t forget to count visitors like Jef Raskin and Niklaus Wirth.
One important effort to commercialize the Alto was the NoteTaker machine. Douglas Fairbairn wanted to do an “Alto on a chip” taking advantage of the company’s advanced IC capabilities, but was forbidden to do so by management. So the machine was built around three 8086 processors instead. It would have cost under $3000 and their aim was to have a third version for $500 by 1980. But this was also killed, so Fairbairn left to found VLSI Technology Inc. which later helped tiny Acorn build its own ARM processor. Revenge of the nerds indeed 🙂
Reading Dilbert, I often marvel at the poor choices of managers who lack training in the technical field. Reading this chapter made me think it does not have to be so. It reminded me of the Manhattan Project and Leslie Groves.
“Nearly everything in computing…” not just everything in computing, but nearly everything in everything, including in Nobel prizes, comes from some predecessor.
Bob,
You mention the Party Line. I know what that is (though only because my grandparents lived so far out in the boonies that they didn’t even have Cable), but the chances of the younger generation getting that reference is just about zero.
Heck, ask your boys. Then try to get them to believe you.
lots of these somewhat date references can be dealt with by adding footnotes/wikipedia links. ala a bill simmons article. make this bad boy hypertext!
“somewhat dated references”
you had a post a bit back where you identified the key differentiator of PARC was that they were told to solve problems using technology such as it would be available 10 years in the future. More RAM in your computer, more network speed, bigger monitors, etc. this chapter should really incorporate that philosophy to give insight into PARCs basic premise, that they were today’s role models by pretending to live in tomorrow.