And so our two-month retrospective comes to an end with this 17th and final chapter, circa 1996. I hope you have enjoyed it. Tomorrow I'll be back to talk about the eBook version of this work as well as what I've been up to for the last eight weeks. It's more than I ever expected… and less.
ACCIDENTAL EMPIRES
CHAPTER SEVENTEEN
DO THE WAVE
We’re floating now on surfboards 300 yards north of the big public pier in Santa Cruz, California. As our feet slowly become numb in the cold Pacific water, it’s good to ponder the fact that this section of coastline, only fifteen miles from Silicon Valley, is the home of the great white shark and has the highest incidence of shark attacks in the world. Knowing that we’re paddling not far from creatures that can bite man and surfboard cleanly in half gives me, at least, a heightened awareness of my surroundings.
We’re waiting for a wave, and can see line after line of them approaching from the general direction of Hawaii. The whole ritual of competitive surfing is choosing the wave, riding it while the judges watch, then paddling out to catch another wave. Choose the wrong wave—one that’s too big or too small—and you’ll either wipe out (fall off the board) or not be able to do enough tricks on that wave to impress the judges. Once you’ve chosen the wave, there is also the decision of how long to ride it, before heading back out to catch another. Success in competitive surfing comes from riding the biggest waves you can handle, working those waves to the max, but not riding the wave too far—because you get more total points riding a lot of short waves than by riding a few waves all the way to the beach.
Surfing is the perfect metaphor for high-technology business. If you can succeed as a competitive surfer, you can succeed in semiconductors, computers, or biotechnology. People who are astute and technically aware can see waves of technology leaving basic research labs at least a decade before they become commercially viable. There are always lots of technology waves to choose from, though it is not always clear right away which waves are going to be the big ones. Great ideas usually appear years—sometimes decades—before they can become commercial products. It takes that long to bring the cost of a high-tech product down to where it's affordable by the masses, and it can take even longer before those masses finally perceive a personal or business need for the product. Fortunately for those of us who plan to be the next Steve Jobs or Bill Gates, this means that coming up with the technical soul of our eventual empire is mainly a matter of looking down the food chain of basic research to see what's likely to be the next overnight technosensation a few years from now. The technology is already there; we just have to find it.
Having chosen his or her wave, the high-tech surfer has to ride long enough to see if the wave is really a big one. This generally takes about three years. If it isn’t a big wave, if your company is three years old, your product has been on the market for a year, and sales aren’t growing like crazy, then you chose either the wrong wave or (by starting to ride the wave too early) the wrong time.
Software has been available on CD-ROM optical disks since the mid-1980s, for example, but that business has become profitable only recently as the number of installed CD-ROM drives has grown into the millions. So getting into the CD-ROM business in 1985 would have been getting on the wave too early.
Steve Jobs has been pouring money into NeXT Inc.—his follow-on to Apple Computer—since 1985 and still hasn’t turned a net profit. Steve finally turned NeXT from a hardware business into a software business in 1993 (following the advice in this book), selling NeXTStep, his object-oriented version of the Unix operating system. Unfortunately, Steve didn’t give up the hardware business until Canon, his bedazzled Japanese backer, had invested and lost $350 million in the venture. The only reason NeXT survives at all is that Canon is too embarrassed to write off its investment. So, while NeXT survives, Steve Jobs has clearly been riding this particular wave at least five years too long.
All this may need some further explanation if, as I suspect, Jobs attempts an initial public stock offering (IPO) for NeXT in 1996. Riding on the incredible success of its computer-animated feature film Toy Story, another Jobs company—Pixar Animation Studios—used an IPO to once again turn Steve into a billionaire. Ironically, Pixar is the company in which Jobs has had the least direct involvement. Pixar’s success and a 1996 feeding frenzy for IPOs in general suggest that Jobs will attempt to clean up NeXT’s balance sheet before taking the software company public. IPOs are emotional, rather than intellectual, investments, so they play well under the influence of Steve’s reality distortion field.
Now back to big business. There are some companies that intentionally stay on the same wave as long as they can because doing so is very profitable. IBM did this in mainframes, DEC did it in minicomputers, Wang did it in dedicated word processors. But look at what has happened to those companies. It is better to get off a wave too early (provided that you have another wave already in sight) than to ride it too long. If you make a mistake and ride a technology wave too far, then the best thing to do is to sell out to some bigger, slower, dumber company dazzled by your cash flow but unaware that you lack prospects for the future.
Surfing is best done on the front of the wave. That’s where the competition is least, the profit margins are highest, and the wave itself provides most of the energy propelling you and your company toward the beach. There are some companies, though, that have been very successful by waiting and watching to see if a wave is going to be big enough to be worth riding; then, by paddling furiously and spending billions of dollars, they somehow overtake the wave and ride it the rest of the way to the beach. This is how IBM entered the minicomputer, PC, and workstation markets. This is how the big Japanese electronics companies have entered nearly every market. These behemoths, believing that they can’t afford to take a risk on a small wave, prefer to buy their way in later. But this technique works only if the ride is a long one; in order to get their large investments back, these companies rely on long product cycles. Three years is about the shortest product cycle for making big bucks with this technique (which is why IBM has had trouble making a long-term success of its PC operation, where product cycles are less than eighteen months and getting shorter by the day).
Knowing when to move to the next big wave is by far the most important skill for long-term success in high technology; indeed, it’s even more important than choosing the right wave to ride in the first place.
Microsoft has been trying to invent a new style of surfing, characterized by moving on to the next wave but somehow taking the previous wave along with it. Other companies have tried and failed before at this sport (for example, IBM with its OfficeVision debacle, where PCs were rechristened "programmable terminals"). But Microsoft is not just another company. Bill Gates knows that his success is based on the de facto standard of MS-DOS. Microsoft Windows is an adjunct to DOS—it requires that DOS be present for it to work—so Bill used his DOS franchise to popularize a graphical user interface. He jumped to the Windows wave but took the DOS wave along with him. Now he is doing the same thing with network computing, multimedia computing, and even voice recognition, making all these parts adjuncts to Windows, which is itself an adjunct to DOS. Even Microsoft's next-generation 32-bit operating system, Windows NT, carries along the Windows code and emulates MS-DOS.
Microsoft’s surfing strategy has a lot of advantages. By building on an installed base, each new version of the operating system automatically sells millions of upgrade copies to existing users and is profitable from its first days on the market. Microsoft’s applications also have market advantages stemming from this strategy, since they automatically work with all the new operating system features. This is Bill Gates’s genius, what has made him the richest person in America. But eventually even Bill will fail, when the load of carrying so many old waves of innovation along to the next one becomes just too much. Then another company will take over, offering newer technology.
But what wave are we riding right now? Not the wave you might expect. For the world of corporate computing, the transition from the personal computing wave to the next wave, called client-server computing, has already begun. It had to.
**********
The life cycles of companies often follow the life cycles of their customers, which is no surprise to makers of hair color or disposable diapers, but has only lately occurred to the ever grayer heads running companies like IBM. Most of IBM's customers, the corporate computer folks who took delivery of all those mainframes and minicomputers over the past thirty years, are nearing the end of their own careers. And having spent a full generation learning the hard way how to make cantankerous, highly complex, corporate computer systems run smoothly, this crew-cut and pocket-protectored gang is taking that precious knowledge off to the tennis court with them, where it will soon be lost forever.
Marshall McLuhan said that we convert our obsolete technologies into art forms, but, trust me, nobody is going to make a hobby of collecting $1 million mainframe computers.
This loss of corporate computing wisdom, accelerated by early retirement programs that have become so popular in the leaner, meaner, downsized corporate world of the 1990s, is among the factors forcing on these same companies a complete changeover in the technology of computing. Since the surviving computer professionals are mainly from the 1980s—usually PC people who not only know very little about mainframes, but were typically banned from the mainframe computer room—there is often nobody in the company to accept the keys to that room who really knows what he or she is doing. So times and technologies are being forced to change, and even IBM has seen the light. That light is called client-server computing.
Like every other important computing technology, client-server has taken about twenty years to become an overnight sensation. Client-server is the link between old-fashioned centralized computing in organizations and the newfangled importance of desktop computers. In client-server computing, centralized computers called “servers” hold the data, while desktop computers called “clients” use that data to do real work. Both types of computers are connected by a network. And getting all these dissimilar types of equipment to work together looks to many informed investors like the next big business opportunity in high technology.
In the old days before client-server, the computing paradigm du jour was called “master-slave.” The computer—a mainframe or minicomputer—was the master, controlling dozens or even hundreds of slave terminals. The essence of this relationship was that the slaves could do nothing without the permission of the master; turn off that big computer and its many terminals were useless. Then came the PC revolution of the 1980s, in which computers got cheap enough to be bought from petty cash and so everybody got one.
In 1980, virtually all corporate computing power resided in the central computer room. By 1987, 95 percent of corporate computing power lay in desktop PCs, which were increasingly being linked together into local area networks (LANs) to share printers, data, and electronic mail. There was no going back following that kind of redistribution of power, but for all the newfound importance of the PC, the corporate data still lived in mainframes. PC users were generally operating on ad hoc data, copied from last quarter's financial report or that morning's Wall Street Journal.
Client-server accepts that the real work of computing will be done in little desktop machines and, for the first time, attempts to get those machines together with real corporate data. Think of it as corporate farming of data. The move to PCs swung the pendulum (in this case, the percentage of available computing power) away from the mainframe and toward the desktop. LANs have helped the pendulum swing back toward the server; they make that fluidity possible. Sure, we are again dependent on centralized data, but that is not a disadvantage. (A century ago we grew our own food too, but do we feel oppressed or liberated by modern farming?) And unlike the old master-slave relationship, PC clients can do plenty of work without even being connected to the mainframe.
Although client-server computing is not without its problems (data security, for one thing, is much harder to maintain than for either mainframes or PCs), it allows users to do things they were never able to do before. The server (or servers—there can be dozens of them on a network, and clients can be connected to more than one server at a time) holds a single set of data that is accessible by the whole company. For the first time ever, everyone is using the same data, so they can all have the same information and believe the same lies. It's this turning of data into information (pictures, charts, graphs), of course, that is often the whole point of using computers in business. And the amount of data we are talking about is enormous: far greater than any PC could hold, and vastly more than a PC application could sort in reasonable time (that's why the client asks the server—a much more powerful computer—to sort the data and return only the requested information). American Express, for example, has 12 terabytes of data—that's 12,000,000,000,000 bytes—on mainframes, mainframes that it must get rid of for financial reasons. So if a company, university, or government agency wants to keep all its data accessible in one place, it needs a big server, which is still more often than not a mainframe computer. In the client-server computing business, these old but still useful mainframes are called "legacy systems," because the client-server folks are more or less stuck with them, at least for now.
But mainframes, while good storehouses for centralized data, are not so good for displaying information. People are used to PCs and workstations with graphical user interfaces (GUIs), but mainframes don't have the performance to run GUIs. It's much more cost-effective to use microprocessors, putting part of the application on the desktop so the interface is quick. That means client-server.
Client-server has been around for the last ten years, but right now users are changing over in phenomenal numbers. Once you’ve crossed the threshold, it’s a stampede. First it was financial services, then CAD (computer-aided design), now ordinary companies. It’s all a matter of managing intellectual inventory more effectively. The nonhierarchical management style—the flat organizational model—that is so popular in the 1990s needs more communication between parts of the company. It requires a corporate information base. You can’t do this with mainframes or minicomputers.
The appeal of client-server goes beyond simply massaging corporate data with PC applications. What really appeals to corporate computing honchos is the advent of new types of applications that could never be imagined before. These applications, called groupware, make collaborative work possible.
The lingua franca of client-server computing is called Structured Query Language (SQL), an early-1970s invention from IBM. Client applications running on PCs and workstations talk SQL over a network to back-end databases running on mainframes or on specialized servers. The back-end databases come from companies like IBM, Informix, Oracle, and Sybase, while the front-end applications running on PCs can be everything from programs custom-written for a single corporate customer to general productivity applications like Microsoft Excel or Lotus 1-2-3, spreadsheets to which have been added the ability to access SQL databases.
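To make the division of labor concrete, here is a minimal sketch of such a front-end query, written in modern Java with the JDBC API rather than the tools of 1996; the server address, table, and column names are hypothetical.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ClientQuery {
    public static void main(String[] args) throws Exception {
        // The client connects across the network to a back-end database server.
        // The URL, user name, and password below are placeholders.
        Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@dbserver.example.com:1521:SALES", "user", "password");
        Statement stmt = conn.createStatement();
        // The server does the heavy lifting, scanning and summarizing millions of rows...
        ResultSet rs = stmt.executeQuery(
                "SELECT region, SUM(amount) AS total FROM orders GROUP BY region");
        // ...and ships back only the small result set for the client to display.
        while (rs.next()) {
            System.out.println(rs.getString("region") + ": " + rs.getDouble("total"));
        }
        conn.close();
    }
}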
Middleware is yet another type of software that sits between the client applications and the back-end database. Typically the client program is asking for data using SQL calls that the back-end database doesn't understand. Sometimes middleware makes the underlying mainframe database think it is talking not to a PC or workstation, but to a dumb computer terminal; the middleware simply acts as a "screen scraper," copying data off the emulated terminal screen and into the client application. Whatever it takes, it's middleware's job to translate between the two systems, preserving the illusion of open computing.
Open computing, also called standards-based computing, is at the heart of client-server. The idea is simple: customers ought to be able to buy computers, networking equipment, and software from whoever offers the best price or the best performance, and all those products from all those companies ought to work together right out of the box. When a standard is somewhat open, other people want to participate in the fruits of it. There is a chance for synergy and economies of scale. That's what's happening now in client-server.
What has made America so successful is the development of infrastructure—efficient distribution systems for goods, money, and information. Client-server replicates that infrastructure. Companies that will do well are those that provide components, applications, and services, just as the gas stations, motels, and fast food operations did so well as the interstate highway system was built.
For all its promise, client-server is hard to do. Open systems often aren't as open as their developers would like to think, and the people being asked to write specialized front-end applications inside corporate computing departments often have a long learning curve to climb. Typically, these are programmers who come from a mainframe background, and they have terrible trouble designing good, high-performance graphical user interfaces. It can take a year or two to develop the skill set needed for these kinds of applications, but once the knowledge is there, they can bang out application after application in short order.
Companies that will do less well because of the migration toward client-server computing are traditional mid-range computer companies, traditional mainframe software companies, publishers of PC personal productivity applications, and, of course, IBM.
**********
If client-server is the present wave, the next wave is probably extending those same services out of the corporation to a larger audience. For this we need a big network, an Internet. There’s that word.
The so-called information superhighway has already gone from being an instrument of oedipal revenge to becoming the high-tech equivalent of the pet rock: everybody’s got to have it. Young Senator (later vice-president) Al Gore’s information superhighway finally overshadows Old Senator Gore’s (Al’s father’s) interstate highway system of the 1950s; and an idea that not long ago appealed only to Democratic policy wonks now forms the basis of an enormous nonpartisan movement. Republicans and Democrats, movie producers and educators, ad executives and former military contractors, everyone wants their own on-ramp to this digital highway that’s best exemplified today by the global Internet. Internet fever is sweeping the world and nobody seems to notice or care that the Internet we’re touting has some serious flaws. It’s like crossing a telephone company with a twelve-step group. Welcome to the future.
The bricks and mortar of the Internet aren’t bricks and mortar at all, but ideas. This is probably the first bit of public infrastructure anywhere that has no value, has no function at all, not even a true existence, unless everyone involved is in precise agreement on what they are talking about. Shut the Internet down and there isn’t rolling stock, rights-of-way, or railway stations to be disposed of, just electrons. That’s because the Internet is really a virtual network constructed out of borrowed pieces of telephone company.
Institutions like corporations and universities buy digital data services from various telephone companies, link those services together, and declare it to be an Internet. A packet of data (the basic Internet unit of 1200 to 2500 bits that includes source address, destination address, and the actual data) can travel clear around the world in less than a second on this network, but only if all the many network segments are in agreement about what constitutes a packet and what they are obligated to do with one when it appears on their segment.
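As a rough sketch of what "a packet" means here, the structure below mirrors the description in the text (source address, destination address, and the data itself); a real IP header carries more fields than this, so treat it as an illustration rather than the wire format.

// A simplified, hypothetical view of an Internet packet; not the actual IP layout.
public class Packet {
    byte[] sourceAddress;       // who sent it (four bytes for an IPv4 address)
    byte[] destinationAddress;  // where it is going
    byte[] data;                // the payload being carried

    int totalBits() {
        return (sourceAddress.length + destinationAddress.length + data.length) * 8;
    }
}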
The Internet is not hierarchical and it is not centrally managed. There is no Internet czar and hardly even an Internet janitor, just a quasi-official body called the Internet Engineering Task Force (IETF), which through consensus somehow comes up with the evolving technical definition of what "Internet" means. Anyone who wants to can attend an IETF meeting, and you can even participate over the Internet itself by digital video. The evolving IETF standards documents, called Requests for Comments (RFCs), are readable on big computers everywhere. There are also a few folks in Virginia charged with handing out the dwindling supply of Internet addresses to whoever asks for them. These address-givers aren't even allowed to say no.
This lack of structure extends to the actual map of the Internet, showing how its several million host computers are connected to each other: such a map simply doesn't exist and nobody even tries to draw one. Rather than being envisioned as a tree or a grid or a ring, the Internet topology is most often described as a cloud. Packets of data enter the cloud in one place and leave it in another, and no money at all is exchanged for whatever voodoo takes place within the cloud to get them from here to there. This is no way to run a business.
Exactly. The Internet isn't a business and was never intended to be one. Rather, it's an academic experiment from the 1960s to which we are trying, so far without much success, to apply a business model. But that wasn't enough to stop Netscape Communications, Inc., publisher of software for using data over the Internet's World Wide Web, from having the most successful stock offering in Wall Street history. Thus, less than two years after Netscape opened for business, company founder Jim Clark became Silicon Valley's most recent billionaire.
Today's Internet evolved from the ARPAnet, which was yet another brainchild of Bob Taylor. As we already know, Taylor was responsible in the 1970s for much of the work at Xerox PARC. In his pre-PARC days, while working for the U.S. Department of Defense, he funded development of the ARPAnet so his researchers could stay in constant touch with one another. And in that Department of Defense spirit of $900 hammers and $2000 toilet seats, it was Taylor who declared that there would be no charges on the ARPAnet for bandwidth or anything else. This was absolutely the correct decision to have made for the sake of computer research, but it's now a problem in the effort to turn the Internet into a business.
Bandwidth flowed like water and it still does. There is no incentive on the Internet to conserve bandwidth (the amount of network resources required to send data over the Internet). In fact, there is an actual disincentive to conserve, based on “pipe envy”: every nerd wants the biggest possible pipe connecting him or her to the Internet. (We call this “network boner syndrome”— you know, mine is bigger than yours.) And since the cost per megabit drops dramatically when you upgrade from a 56K leased line (56,000 bits-per-second) to a T-1 (1.544 megabits per second) to a T-3 (45 megabits per second), some organizations deliberately add services they don’t really need simply to ratchet up the boner scale and justify getting a bigger pipe.
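To see why the cost per megabit falls as the pipe gets fatter, here is a back-of-the-envelope calculation; only the line speeds come from the text, and the monthly lease prices are invented placeholders, not real tariffs.

public class CostPerMegabit {
    public static void main(String[] args) {
        String[] names = { "56K leased line", "T-1", "T-3" };
        double[] megabitsPerSecond = { 0.056, 1.544, 45.0 };   // line speeds from the text
        double[] monthlyPrice = { 300, 1500, 10000 };          // hypothetical prices, for illustration only
        for (int i = 0; i < names.length; i++) {
            System.out.printf("%-15s $%6.0f/month = $%5.0f per Mbps%n",
                    names[i], monthlyPrice[i], monthlyPrice[i] / megabitsPerSecond[i]);
        }
    }
}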
The newest justification for that great big Internet pipe is called Mbone, the Multicast Backbone protocol that makes it possible to send digital video and CD-quality audio signals over the Internet. A protocol is a set of rules approved by the IETF defining what various types of data look like on the Internet and how those data are to be handled between network segments. If you participate in an IETF meeting by video, you are using Mbone. So far Mbone is an experimental protocol available on certain network segments, but all that is about to change.
Some people think Mbone is the very future of the Internet, claiming that this new technology can turn the Internet into a competitor for telephone and cable TV services. This may be true in the long run, five or ten years hence, but for most Internet users today Mbone is such a bandwidth hog that it's more nightmare than dream. That's because the Internet and its fundamental protocol, TCP/IP (Transmission Control Protocol/Internet Protocol), operate like a telephone party line, and Mbone doesn't practice good phone etiquette.
Just like on an old telephone party line, the Internet has us all talking over the same wire and Mbone, which has to send both sound and video down the line, trying to keep voices and lips in sync all the while, is like that long-winded neighbor who hogs the line. Digital video and audio means a lot of data—enough so that a 1.544-megabit-per-second T-1 line can generally handle no more than four simultaneous Mbone sessions (audio-only Mbone sessions like Internet Talk Radio require a little less bandwidth). Think of it this way: T-1 lines are what generally connect major universities with thousands of users each to the Internet, but each Mbone session can take away 25 percent of the bandwidth available for the entire campus. Even the Internet backbone T-3 lines, which carry the data traffic for millions of computers, can handle just over 100 simultaneous real-time Mbone video sessions: hardly a replacement for the phone company.
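The arithmetic behind those session counts works out roughly as follows, assuming a single Mbone video session consumes a bit under 400 kilobits per second (an assumed figure, chosen to be consistent with the four-sessions-per-T-1 number above):

public class MboneMath {
    public static void main(String[] args) {
        double sessionKbps = 380.0;   // assumed bandwidth of one Mbone video session
        double t1Kbps = 1544.0;       // a T-1 line
        double t3Kbps = 45000.0;      // a T-3 backbone line
        System.out.println("Sessions per T-1: " + (int) (t1Kbps / sessionKbps));  // about 4
        System.out.println("Sessions per T-3: " + (int) (t3Kbps / sessionKbps));  // just over 100
        System.out.printf("Share of a campus T-1 taken by one session: %.0f%%%n",
                100.0 * sessionKbps / t1Kbps);                                    // roughly 25 percent
    }
}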
Mbone video is so bandwidth-intensive that it won’t even fit in the capillaries of the Internet, the smaller modem links that connect millions of remote users to the digital world. And while these users can’t experience the benefits of Mbone, they share in the net cost, because Mbone is what’s known as a dense-mode Internet protocol. Dense mode means that the Internet assumes almost every node on the net is interested in receiving Mbone data and wants that data as quickly as possible. This is assured, according to RFC 1112, by spreading control information (as opposed to actual broadcast content) all over the net, even to nodes that don’t want an Mbone broadcast—or don’t even have the bandwidth to receive one. This is the equivalent of every user on the Internet receiving a call several times a minute asking if they’d like to receive Mbone data.
These bandwidth concerns will go away as the Internet grows and evolves, but in the short term, it’s going to be a problem. People say that the Internet is carrying multimedia today, but then dogs can walk on their hind legs.
There is another way in which the Internet is like a telephone party line: anyone can listen in. Despite the fact that the ARPAnet was developed originally to carry data between defense contractors, there was never any provision made for data security. There simply is no security built into the Internet. Data security, if it can be got at all, has to be added on top of the Internet by users and network administrators. Your mileage may vary.
The Internet is vulnerable in two primary ways. Internet host computers are vulnerable to invasion by unauthorized users or unwanted programs like viruses, and Internet data transmissions are vulnerable to eavesdropping.
Who can read that e-mail containing your company’s deepest secrets? Lots of people can. The current Internet addressing scheme for electronic mail specifies a user and a domain server, such as my address (bob@cringely.com). “Bob” is my user name and “cringely.com” is the name of the domain server that accepts messages on my behalf. (Having your own domain [like cringely.com] is considered very cool in the Internet world. At last, I’m a member of an elite!) A few years ago, before the Internet was capable of doing its own message routing, the addressing scheme required the listing of all network links between the user and the Internet backbone. I recall being amazed to see back then that an Internet e-mail message from Microsoft to Sun Microsystems at one point passed through a router at Apple Computer, where it could be easily read.
These sorts of connections, though now hidden, still exist, and every Internet message or file that passes through an interim router is readable and recordable at that router. It's very easy to write a program called a "sniffer" that records data from or to specific addresses or simply records user addresses and passwords as they go through the system. The writer of the sniffer program doesn't even have to be anywhere near the router. A 1994 break-in to New York's Panix Internet access system, for example, where hundreds of passwords were grabbed, was pulled off by a young hacker connected by phone from out of state.
The way to keep people from reading your Internet mail is to encrypt it, just like a spy would do. This means, of course, that you must also find a way for those who receive your messages to decode them, further complicating network life for everyone. The way to keep those wily hackers from invading Internet domain servers is by building what are called "firewalls"—programs that filter incoming packets, trying to reject those that seem to have evil intent. Either technique can be very effective, but neither is built into the Internet. We're on our own.
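A firewall's basic job can be sketched in a few lines. This toy packet filter, with a made-up policy, simply drops traffic to any port the administrator has not explicitly allowed, which is the "reject packets that seem to have evil intent" idea in miniature.

import java.util.Set;

public class ToyFirewall {
    // Hypothetical policy: only mail (port 25) and web (port 80) traffic may come in.
    private static final Set<Integer> ALLOWED_PORTS = Set.of(25, 80);

    static boolean accept(String sourceAddress, int destinationPort) {
        // Real firewalls also look at source networks, protocols, and connection state.
        return ALLOWED_PORTS.contains(destinationPort);
    }

    public static void main(String[] args) {
        System.out.println(accept("192.0.2.10", 80));  // true: web traffic passes
        System.out.println(accept("192.0.2.10", 23));  // false: telnet is rejected
    }
}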
Still, there is much to be excited about with the Internet. Many experts are excited about a new Internet programming language from Sun Microsystems called Java. What became the Java language was the invention of a Sun engineer named James Gosling. When Gosling came up with the idea, the language was called Oak, not Java, and it was aimed not at the Internet and the World Wide Web (WWW didn't even exist in 1991) but at the consumer electronics market. Oak was at the heart of *7, a kind of universal intelligent remote control Sun invented but never manufactured. After that it was the heart of an operating system for digital television decoders for Time Warner. This project also never reached manufacturing. By mid-1994, the World Wide Web was big news and Oak became Java. The name change was driven solely by Sun's inability to trademark the name Oak.
Java is what Sun calls an “architecture neutral” language. Think of it this way: If you found a television from thirty years ago and turned it on, you could use it today to watch TV. The picture might be in black and white instead of color, but you’d still be entertained. Television is backward-compatible and therefore architecture neutral. However, if you tried to run Windows 95 on a computer built thirty years ago, it simply wouldn’t work. Windows 95 is architecture specific.
Java language applications can execute on many different processors and operating system architectures without the need to rewrite the applications for those systems. Java also has a very sophisticated security model, which is a good thing for any Internet application to have. And it uses multiple program threads, which means you can run more than one task at a time, even on what would normally be single-tasking operating systems.
Most people see Java as simply a way to bring animation to the web, but it is much more than that. Java applets are little applications that are downloaded from a WWW server and run on the client workstation. At this point an applet usually means some simple animation like a clock with moving hands or an interactive map or diagram, but a Java applet can be much more sophisticated than that. Virtually any traditional PC application—like a word processor, spreadsheet, or database—can be written as one or more Java applets. This possibility alone threatens Microsoft’s dominance of the software market.
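For flavor, here is roughly what a minimal applet of that era looked like: a small class compiled to bytecode, placed on a web server, downloaded by the browser, and run on the client machine. The class name and message are, of course, arbitrary.

import java.applet.Applet;
import java.awt.Graphics;

// Referenced from a web page with an <applet> tag; the browser fetches the
// compiled .class file from the server and executes it on the user's desktop.
public class HelloApplet extends Applet {
    public void paint(Graphics g) {
        g.drawString("Downloaded from the server, running on your machine", 20, 20);
    }
}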
Sun Microsystems’ slogan is “the network is the computer,” and this is fully reflected in Java, which is very workstation-centric. By this I mean that most of the power lies in the client workstation, not in the server. Java applications can be run from any server that runs the World Wide Web’s protocol. This means that a PC or Macintosh is as effective a Java server as any powerful Unix box. And because whole screen images aren’t shipped across the network, applets can operate using low-bandwidth dial-up links.
The Internet community is very excited about Java, because it appears to add greater utility to an already existing resource, the World Wide Web. Since the bulk of available computing power lies out on the network, residing in workstations, that’s where Java applets run. Java servers can run on almost any platform, which is an amazing thing considering Sun is strictly in the business of building Unix boxes. Any other company might have tried to make Java run only on its own servers—that’s certainly what IBM or Apple would have done.
In a year or two, when there are lots of Java browsers and applets in circulation, we’ll see a transformation of how the Internet and the World Wide Web are used. Instead of just displaying text and graphical information for us, our web browsers will work with the applets to actually do something with that data. Companies will build mission-critical applications on the web that will have complete data security and will be completely portable and scalable. And most important of all, this is an area of computing that Microsoft does not dominate. As far as I can see, they don’t even really understand it yet, leaving room for plenty of other companies to innovate and be successful.
**********
The wave after the Internet might be the building of a real data network that extends multimedia services right into our homes. Since this is an area that is going to require very powerful processor chips, you'd think that Intel would be in the forefront, but it's not. Has someone already missed the wave?
Intel rules the microprocessor business just as Microsoft rules the software business: by being very, very aggressive. Since the American courts have lately ruled in favor of the companies that clone Intel processors, the current strategy of Intel president Andy Grove is to outspend its opponents. Grove wants to speed up product development so that each new family of Intel processors appears before competitors have had a chance to clone the previous family. This is supposed to result in Intel’s building the very profitable leading-edge chips, leaving its competitors to slug it out in the market for commodity processors. Let AMD and Cyrix make all the 486 chips they want, as long as Intel is building all the Pentiums.
Andy Grove is using Intel's large cash reserves (the company has more than $2.5 billion in cash and no debt) to violate Moore's Law. Remember that Moore's Law was divined back in 1965 by Gordon Moore, now the chairman of Intel and Andy Grove's boss. Moore's Law states that the number of transistors that can be etched on a given piece of silicon will double every eighteen months. This means that microprocessor computing power will naturally double every eighteen months, too. Alternatively, Moore's Law can mean that the cost of buying the same level of computing power will be cut in half every eighteen months. This is why personal computer prices are continually being lowered.
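Stated as arithmetic, a doubling every eighteen months means capability grows by a factor of two raised to the power of (years elapsed divided by 1.5), and the cost of a fixed level of capability falls by the same factor. A quick sketch of the compounding:

public class MooresLaw {
    public static void main(String[] args) {
        double yearsPerDoubling = 1.5;  // one doubling every eighteen months
        for (int years = 0; years <= 9; years += 3) {
            double factor = Math.pow(2, years / yearsPerDoubling);
            System.out.printf("After %d years: %.0fx the transistors, or the same power at 1/%.0f the price%n",
                    years, factor, factor);
        }
    }
}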
Although technical development and natural competition have always fit nicely with the eighteen-month pace of Moore’s Law, Andy Grove knows that the only way to keep Intel ahead of the other companies is to force the pace. That’s why Intel’s P-6 processor appeared in mid-1995, more than a year before tradition dictated it ought to. And the P-7 will appear just two years after that, in 1997.
This accelerated pace is accomplished by running several development groups in parallel, which is incredibly expensive. But the same idea of spending, spending, spending on product development is what President Reagan used to force the end of communism, simply by spending on new weapons faster than his enemies could. Andy Grove figures that what worked for Reagan will also work for Intel.
But what if the computing market takes a sudden change of direction? If that happens, wouldn’t Intel still be ahead of its competitors, but ahead of them at running in the wrong direction? That’s exactly what I think is happening right now.
Intel’s strategy is based on the underlying idea that the computing we’ll do in the future is a lot like the computing we’ve done in the past. Intel sees computers as individual devices on desktops, with local data storage and generally used for stand-alone operation. The P-6 and P-7 computers will just be more powerful versions of the P-5 (Pentium) computers of today. This is not a bad bet on Intel’s part, but sometimes technology does jump in a different direction; and that’s just what is happening right now, with all this talk of a digital convergence of computing and communication.
The communications world is experiencing an explosion of bandwidth. Fiber-optic digital networks and new technologies like asynchronous transfer mode (ATM) networking are leading us toward a future where we'll mix voice, data, video, and music, all on the same lines that today deliver either analog voice or video signals. The world's telephone companies are getting ready to offer us any type of digital signal we want in our homes and businesses. They want to deliver to us everything from movies to real-time stock market information, all through a box that in America is being called a "set-top device."
What’s inside this set-top device? Well, there is a microprocessor to decode and decompress the digital data-stream, some memory, a graphics chip to drive a high-resolution color display, and system software. Sounds a lot like a personal computer, doesn’t it? It sure sounds that way to Microsoft, which is spending millions to make sure that these set-top devices run Windows software.
With telephone and cable television companies planning to roll out these new services over the next two to three years, Intel has not persuaded a single maker of set-top devices to use Intel processors. Instead, Apple, General Instrument, IBM, Scientific Atlanta, Sony, and many other manufacturers have settled on Motorola's PowerPC processor family. And for good reason, because a PowerPC 602 processor costs only $20 in large quantities, compared with more than $80 for a Pentium.
Intel has been concentrating so hard on the performance side of Moore’s Law that the company has lost sight of the cost side. The new market for set-top devices—at least 1 billion set-top devices in the next decade—demands good performance and low cost.
But what does that have to do with personal computing? Plenty. That PowerPC 602 yields a set-top device with graphics performance equivalent to that of a Silicon Graphics Indigo workstation, yet it will cost users only $200. Will people want to sit at their computer when they can find more computing power (and more network services) available on their TV?
The only way that a new software or hardware architecture can take over the desktop is when that desktop is undergoing rapid expansion. It happened that way after 1981, when 500,000 CP/M computers were replaced over a couple of years by more than 5 million MS-DOS computers. The market grew by an order of magnitude and all those new machines used new technology. But these days we’re mainly replacing old machines, rather than expanding our user base, and most of the time we’re using our new Pentium hardware to emulate 8086s at faster and faster speeds. That’s not a prescription for revolution.
It’s going to take another market expansion to drive a new software architecture, and since nearly every desk that can support a PC already has one sitting there, the expansion is going to have to happen where PCs aren’t. The market expansion that I think is going to take place will be outside the office, to the millions of workers who don’t have desks, and in the home, where computing has never really found a comfortable place. We’re talking about something that’s a cross between a television and a PC—the set-top box.
Still, a set-top device is not a computer, because it has no local storage and no software applications, right? Wrong. The set-top device will be connected to a network, and at the other end of that network will be companies that will be just as happy to download spreadsheet code as they are to send you a copy of Gone with the Wind. With faster network performance, a local hard disk is unnecessary. There goes the PC business. Also unnecessary is owning your own software, since it is probably cheaper to rent software that is already on the server. There goes the software business, too. But by converting, say, half the television watchers in the world into computer users, there will be 1 billion new users demanding software that runs on the new boxes. Here come whole new computer hardware and software industries—at least if Larry Ellison gets his way.
Ellison is the "other" PC billionaire, the founder of Oracle Systems, a maker of database software. The legendary Silicon Valley rake, who once told me he wanted to be married "up to five days per week," has other conquests in mind. He wants to defeat Bill Gates.
“I think personal computers are ridiculous,” said Ellison. “It’s crazy for me to have this box on my desk into which I pour bits that I’ve brought home from the store in a cardboard box. I have to install the software, make it work, and back up my data if I want to save it from the inevitable hard disk crash. All this is stupid: it should be done for me.
“Why should I have to go to a store to buy software?” Ellison continued. “In a cardboard box is a stupid way to buy software. It’s a box of bits and not only that, they are old bits. The software you buy at a store is hardly ever the latest release.
“Here’s what I want,” said Ellison. “I want a $500 device that sits on my desk. It has a display and memory but no hard or floppy disk drives. On the back it has just two ports—one for power and the other to connect to the network. When that network connection is made, the latest version of the operating system is automatically downloaded. My files are stored on a server somewhere and they are backed up every night by people paid to do just that. The data I get from the network is the latest, too, and I pay for it all through my phone bill because that’s what the computer really is—an extension of my telephone. I can use it for computing, communicating, and entertainment. That’s the personal computer I want and I want it now!”
Larry Ellison has a point. Personal computers probably are a transitional technology that will be replaced soon by servers and networks. Here we are wiring the world for Internet connections and yet we somehow expect to keep using our hard disk drives. Moving to the next standard of networking is what it will take to extend computing to the majority of citizens. Using a personal computer has to be made a lot easier if my mom is ever going to use one. The big question is when it all happens. How soon is soon? Well, the personal computer is already twenty years old, but my guess is it will look very different in another ten years.
Oracle, Ellison’s company, wants to provide the software that links all those diskless PCs into the global network. He thinks Microsoft is so concentrated on the traditional stand-alone PC that Oracle can snatch ownership of the desktop software standard when this changeover takes place. It just might succeed.
**********
If there’s a good guy in the history of the personal computer, it must be Steve Wozniak, inventor of the Apple I and Apple II. Prankster and dial-a-jokester, Woz was also the inventor of a pirated version of the VisiCalc spreadsheet called VisiCrook that not only defeated VisiCalc’s copy protection scheme but ran five times faster than the original because of some bugs that he fixed along the way. Steve Wozniak is unique, and his vision of the future of personal computing is unique, too, and important. Wozniak is no longer in the computer industry. His work is now teaching computer skills to fifth- and sixth-grade students in the public schools of Los Gatos, California, where he lives. Woz teaches the classes and he funds the classes he teaches. Each student gets an Apple PowerBook 540C notebook computer, a printer, and an account on America Online, all paid for by the Apple cofounder. Right now, Woz estimates he has 100 students using his computers.
"If I can get them using the computer and show them there is more that they can do than just play games, that's all I want," said Wozniak. "Each year I find one or two kids who get it instantly and want to learn more and more about computers. Those are the kids like me and if I can help one of them to change the world, all my effort will have been worthwhile."
As a man who is now more a teacher than an engineer, Woz’s view of the future takes the peculiar perspective of the computer educator trying to function in the modern world. Woz’s concern is with Moore’s Law, the very engine of the PC industry that has driven prices continually down and sales continually up. Woz, for one, can’t wait for Moore’s Law to be repealed.
Huh?
Moore's Law has held for the last thirty years and will probably continue to apply for another decade. But while the rest of the computing world waits worriedly for that moment when the lines etched on silicon wafers get so thin that they are equal to the wavelength of the light that traces them—the technical dead end for photolithography—Steve Wozniak looks forward to it. "I can't wait," he said, "because that's when software tools can finally start to mature."
While the rest of us fear that the end of Moore’s Law means the end of progress in computer design, Wozniak thinks it means the true coming of age for personal computers—a time to be celebrated. “In American schools today a textbook lasts ten years and a desk lasts twenty years, but a personal computer is obsolete when it is three years old,” he said. “That’s why schools can’t afford computers for every child. And every time the computer changes, the software changes, too. That’s crazy.
“If each personal computer could be used for twenty years, then the schools could have one PC for each kid. But that won’t happen until Moore’s Law is played out. Then the hardware architectures can stabilize and the software can as well. That’s when personal computers will become really useful, because they will have to be tougher. They’ll become appliances, which is what they should always have been.”
To Woz, the personal computer of twenty years from now will be like haiku: there won’t be any need to change the form, yet artists will still find plenty of room for expression within that form.
**********
We overestimate change in the short term by supposing that dominant software architectures are going to change practically overnight, without an accompanying change in the installed hardware base. But we also underestimate change by not anticipating new uses for computers that will probably drive us overnight into a new type of hardware. It’s the texture of the change that we can’t anticipate. So when we finally get a PC in every home, it’s more likely to be as a cellular phone with sophisticated computing ability thrown in almost as an afterthought, or it will be an ancillary function to a 64-bit Nintendo machine, because people need to communicate and be entertained, but they don’t really need to compute.
Computing is a transitional technology. We don’t compute to compute, we compute to design airplane wings, simulate oil fields, and calculate our taxes. We compute to plan businesses and then to understand why they failed. All these things, while parading as computing tasks, are really experiences. We can have enough power, but we can never have enough experience, which is why computing is beginning a transition from being a method of data processing to being a method of communication.
People care about people. We watch version after version of the same seven stories on television simply for that reason. More than 80 percent of our brains are devoted to processing visual information, because that’s how we most directly perceive the world around us. In time, all this will be mirrored in new computing technologies. We’re heading on a journey that will result, by the middle of the next decade, in there being no more phones or televisions or computers. Instead, there will be billions of devices that perform all three functions, and by doing so, will tie us all together and into the whole body of human knowledge. It’s the next big wave, a veritable tsunami. Surf’s up!
Amazon and Google were quite unanticipated developments.
Amazon’s power was in using scale to drive margins through the floor, starving everybody else of the ability to compete. And somehow getting the investors to tolerate all those years that they weren’t profitable. Some people still think of them as a book store, but they are really the most significant company in cloud computing.
Google’s big innovation was building with the assumption that any piece could fail at any time. At first, it was because they were using whatever they could scrounge from Stanford, but they discovered that this was a compelling way to build a truly huge computing resource. A mainframe, or even an Oracle system, is built around the assumption that the computer is reliable. This means the administrators must study religiously to acquire the certificates and maintain the assumptions that keep the mainframe running, and scaling is a difficult and painful exercise.
Overall, mixed feelings. I’m surprised I enjoyed the quality of the writing more than the trip down memory lane, even though I started out in Silicon Valley about the same time Robert X. did.
I commented a few weeks ago about how clunky that 1980s technology was. I feel like my first "real computer" was an early Sun workstation. Then I went out on my own and had to backslide into the Messy-DOS world until I could afford my second "real computer," a Windows 95 system.
The TRS-80s and Apples and XTs and ATs I worked on during the time covered by Accidental Empires now seem closer to Steampunk than the cutting edge tech we have in 2013. The Triumph of Moore’s Law, I suppose.
I remember when Java came out. My initial experiences soured me on it immediately. It’s ironic that in 1996 Java was to be the big client enabler, but now it’s used mainly for middleware, and US-CERT is recommending that clients uninstall it if possible.
The big problem was that Sun tried to keep an iron grip on Java. It might have been justified in combating Microsoft and their incompatible MSJVM, but it kept Java applets from becoming an integral part of the web. Their treatment of Apache Harmony, regarding the Technology Compatibility Kit, was especially despicable. The only major implementation of Java from outside Sun seems to be the IBM JDK, and that is again for servers and middleware.
So, my experience with Java in the early web was this: Any site with Java applets would incur an immediate performance hit as the JVM spun up. While you waited, you could see where the Java applets were supposed to go, because they were the gray rectangles. Not native, and definitely not fast. Java was also used for desktop applications: slow to start, laggy compared with native apps, and ugly. Also, I didn't like how new apps required new versions of the JVM, which might not be available on older systems.
My experience with Java now is this: Any site with Java applets starts with a mess of security warnings. Unfortunately, I still need Java for one web site, so on that site I click the approvals and wait for the JVM to spin up. I also sometimes run Java applications; .NET and Objective C have gotten people used to laggy programs with slow startups, but nothing can beat Java on Ugly, at least without trying.
In the meanwhile, JavaScript, which was actually a completely different language code-named Mocha then LiveScript, turned into the client-side language of the Web. Being so open was horrible for its compatibility, and its rushed early development made it a language full of the strangest issues. But because it can be implemented by anybody, it has been implemented many times, and it now has adequate performance. Gmail takes less time to load in Chrome than the JVM takes to load and run a trivial program, and I don’t think Google has ported Gmail to NaCl.
Sun's iron grip meant they were unable to keep up with client innovations. Sun NeWS might have been nice for the '80s, especially compared to X11, but the Java UI was ugly in the '90s. It's still ugly in the 21st century. Java ME works, but it is also ugly. Probably the greatest innovations in Java user experiences are coming from the Google Android project, and they do it by disregarding all collaboration and doing things their own way. Which is, again, horrible for compatibility.
Oh, yeah, the Java language was exciting because it brought object-oriented programming and garbage collection into the mainstream. A huge number of problems in C and C++ come from mistakes in memory management, but Java moved all of that into the JVM. There are still memory-management problems in the JVM, as it's just code running on essentially insecure computers, but they are many orders of magnitude fewer than if everybody had to program in, say, C++.
On the other hand, Java is a seriously un-fun language. As Alan Kay put it, “Java is the most distressing thing to hit computing since MS-DOS.” More from Alan Kay about computer languages (as of 2004). Alan Kay is not very practical, but he’s insightful.
Personally, I don't like Java's incredible wordiness and inconsistency. Objects are safe, memory-managed and reflected and everything, but pointers to Objects are unsafe. That's because pointers are "primitives," and primitives are not objects in Java. So you have to litter your code with checks that your Object is actually there. No RAII pattern (as in C++) here. Java was also designed for a traditional open-run-close cycle with a separate compiler, so you have to go through the tedious write-save-compile-run cycle to develop for it. I was using Scheme on my handheld computer, so I really resented this limitation. And then, somehow, some rather unfriendly program construction techniques became the standard way of doing Java: Rants by Steve Yegge and Joel Spolsky.
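Concretely, the kind of null-check litter I mean looks like this (a trivial, made-up example):

import java.util.HashMap;
import java.util.Map;

public class NullChecks {
    public static void main(String[] args) {
        Map<String, String> addresses = new HashMap<>();
        addresses.put("Smith", "123 Main St");

        // Every reference lookup can hand back null, so the guards pile up.
        String address = addresses.get("Jones");
        if (address != null && !address.isEmpty()) {
            System.out.println(address.toUpperCase());
        } else {
            System.out.println("no address on file");
        }
    }
}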
So, from a language standpoint, it’s not surprising that JavaScript has some fans.
Wow. I need a good editor to put all those comments back together.
One parting shot. C# from Microsoft is really what the Java language should have been, but it’s severely hindered by being tied to Microsoft. Also, the CLR hasn’t had the extensive optimization effort that the JVM has had, so its performance sucks. I can guess why Microsoft insisted on using the CLR for Windows Phone 7, but the sucky apps that resulted have not helped advance the Windows Phone platform.
There are some retrospectives out there, from Vint Cerf and others, saying that the real reason for the Internet was not collaboration. I mean, they were scientists; they were going to collaborate anyway. The Internet was built so the researchers could share computers, and then the Department of Defense would not have to buy a separate high-end computer for every institution it was funding, because even the Department of Defense couldn’t afford to buy that many general-purpose computers. We all know how that turned out.
The Internet we still have is experimental and not meant for production use. That’s why we were already taking steps to keep from running out of addresses in the 90’s, and now we’re out. Why are you not on IPv6, Cringely?
“So when we finally get a PC in every home, it’s more likely to be as a cellular phone with sophisticated computing ability thrown in almost as an afterthought, or it will be an ancillary function to a 64-bit Nintendo machine”
– so while not precisely iPads and other tablets, I think you got smart phones and gaming consoles down…
Many commenters in the past couple of months have asked about Cringely’s plans to extend or not extend this book up to the present day. There is certainly some interesting territory to explore…
How “accidental” were the rises of eBay, Amazon, Google, Facebook? What are the parallels between these companies and the pioneers he covered in AE … and what are the differences? What are the parallels or differences among the losers these first- and second- wave Godzillas squeezed out of their respective markets?
Have things stayed the same the more they’ve changed? Is human nature repeating “the same seven [business] stories” over and over as the bits continue to move faster and faster?
A number of things:
1/ Thank you. Reading this book again brought back a number of memories from my first year in college, when I referenced some of the examples in the book while getting my first job in a PR agency – the book was a useful primer in the ways and methods of the tech sector.
2/ I read about Microsoft’s mastery of surfing waves right after reading that Gartner claims Microsoft will be on just 17 per cent of computing devices by 2017, down from a high of 97%
3/ The precarious state of Intel at that time mirrored the challenges that Intel faces today with the rise of mobile computing devices
4/ The old adage of Bill Gates being more concerned about kids in a garage than about existing competitors is brought home by some of the comments made here about Amazon, Google and Facebook.
“…Microsoft will be on just 17 per cent of computing devices…” That statistic can be altered merely by changing the definition of a “computing device”. It’s lowered by including tablets and smartphones (although it’s hard to think of a smartphone as a computing device due to the small screen and touch input method required). It could be lowered further by including mp3 players, calculators, paper and pencil, rulers and chalk boards, as well as our fingers which help with addition and subtraction.
“The market expansion that I think is going to take place will be outside the office, to the millions of workers who don’t have desks, and in the home, where computing has never really found a comfortable place…the set-top box”
Your premise was right, but your conclusion was off. Certainly mobile was the next wave because technology not only got faster but smaller. The set-top box does exist today in the form of Roku, AppleTV, etc., not to mention the Internet software built into nearly every new TV and everything that plugs into a TV, but they still only deliver Gone with the Wind (and much less), not spreadsheets.
Your section about Mbone reminds me of the time I was a graduate student and managed to effectively take down the internet by running a molecular modeling simulation that shared data between NCSA and SDSC… we couldn’t figure out why the program kept seizing up until we started getting frantic calls from the NSFNet operations center in Boulder. Apparently, you only need to take up 10-20% of the pipe to shut everything else out. Streaming data is especially good at doing that when you skip using TCP/IP and roll your own data link protocols.
Needless to say, we didn’t try that again and bandwidth has improved somewhat since my days as a graduate student.
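The mechanism behind that kind of starvation is straightforward to sketch. Here is a minimal, hypothetical Java example (the destination 203.0.113.10 is a documentation-only address and port 9999 is a made-up placeholder): unlike TCP, which backs off when packets are dropped, a raw UDP sender like this just keeps transmitting as fast as the local machine allows, and will happily clog the narrowest shared link in its path.

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class UdpBlaster {
    public static void main(String[] args) throws Exception {
        byte[] payload = new byte[1400]; // near-MTU-sized datagrams
        InetAddress dest = InetAddress.getByName("203.0.113.10");
        try (DatagramSocket socket = new DatagramSocket()) {
            while (true) {
                // No acknowledgements, no congestion control, no backoff:
                // the send rate is limited only by this host and whichever
                // hop it saturates first, squeezing out everyone else's TCP traffic.
                socket.send(new DatagramPacket(payload, payload.length, dest, 9999));
            }
        }
    }
}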
we’re not out of Internet IPv4 addresses, we have 10-networks of 10-networks worth.
locally.
not globally.
the Wacky Wacky Webbiepoo is going to be bifurcated for a long time, because there is a boner factor in having an A or B class address (or being HP, which has the A addresses from HP, DEC, Compaq, 3Com, and almost certainly others.) them’s braggin’ rights at the bar, buddy. if you happen to be China, or the Noplacenobody Independent Republic of the East Wind, you get IPv6 and you better like it.
and as long as they interoperate in practice, nobody arms the bombs.
in the case of certain nation-states that would like to turn the clock back to 666 AD yet want to control the entire WWW so none of their decision makers gets to see a joke on them from the US or Britain, well, the basis of The Connected Internet is that data rules, and whiners = fools. they are welcome to pull the plug and sink into cavemanship.
those little computers and their network have been as liberating as language, which is the real fear. get with it, or get out of the way.
Why are people still fixated by the pre-ICANN Class A networks? (Answer: Because they don’t really understand the growth of the Internet.)
IP addresses were intended to be free. (…free-ish. Getting an ASN and applying for addresses cost a lot of pennies, and you can’t get a router that does BGP at Best Buy. Well, I can, but it’s probably beyond your ability.) So, if you want IP addresses and have a use for them, then you should be able to get them. On that basis, IPv4 is just impossible to maintain.
The theoretical maximum number of addresses on IPv4 is about 4 billion. That’s if you fill the network to a ludicrous capacity. For a variety of reasons, you can’t actually use so many addresses. The human population is currently over 7 billion. In the Western world, each person on the Internet can use several IP addresses, due to computers and various iThingies. Cloud providers also have (trade secret) numbers of computers requiring IP addresses. Mathematically, there’s just no way to provide IPv4 addresses for every desired use.
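The arithmetic is easy to check with a small Java sketch (the population figure is a rough assumption, and the totals deliberately ignore the reserved, multicast, and private ranges that shrink the usable IPv4 pool even further):

import java.math.BigInteger;

public class AddressMath {
    public static void main(String[] args) {
        long ipv4Total  = 1L << 32;           // 2^32 = 4,294,967,296 addresses
        long population = 7_000_000_000L;     // rough world population
        long classA     = 1L << 24;           // one legacy /8 ("class A") = 16,777,216

        System.out.printf("IPv4 address space:          %,d%n", ipv4Total);
        System.out.printf("Shortfall at one per person: %,d%n", population - ipv4Total);
        System.out.printf("One legacy class A:          %,d%n", classA);

        // IPv6 is 2^128 addresses, far too large for a long:
        System.out.println("IPv6 address space:          " + BigInteger.ONE.shiftLeft(128));
    }
}

Even an entire legacy class A, at about 16.8 million addresses, barely dents that shortfall.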
In fact, HP has just 2 class A networks: The one from DEC, and their own from 1994. At the rate that the addresses were being allocated, giving those addresses back to IANA would give us only a few months before exhaustion strikes again. If we could somehow convince all 20-something organizations to give up their class A networks (about 30), we’d get maybe a couple years of extension of IPv4. That’s if we can get them to invest money and countless man-hours to change all the addresses that they’re using, and reconfigure their firewalls, just to give a reprieve to an outdated protocol when a superior alternative is available: IPv6. Realistically, almost nobody will go through that effort.
Even in North America, we don’t really have enough addresses for everybody. My small organization and my home get only one IPv4 address from our ISPs, and we are obligated to share that one address with dozens of devices via NAT. That restricts uses of the Internet to fit the tight restrictions of the NAT. A lot of people are happy with NAT, but it has always bothered me, because it prevents the Internet from fulfilling its full potential: The disintermediation of communication.
One quick example of the Internet’s potential is as a secure replacement for the phone. Skype and Vonage and stuff do exist, but they all involve central registries vulnerable to interception. If I could give each phone its own IP address, then I could install something like PGPfone, a free program that was technically possible with the bandwidth and processing power of the mid-1990s, and every phone would be a secure line. This is impossible with NAT.
IPv6 and IPv4, unfortunately, do not interoperate. Halfway solutions like NAT64 break DNSSEC. We need to be well on our way to replacing IPv4 with IPv6, but I’m still encountering business and technical leaders who say we don’t personally need IPv6. People suck.
Far East people are forced to use IPv6? I envy them. Get with the program. Bring on the IPv6.
By far the biggest mistake in the history of computing was Java applets. Second worst internet scourge would be Flash player plugins.
Even to this day the #1 security weaknesses are those damned-awful browser plugins that we must have to access content of big streaming-media providers.
Java as a programming language is awful to code with, as it suffers from syntax-diarrhoea that really ought to have died out shortly after the language Ada (syntax-diarrhoea exhibit A) failed to make an impact in the mainstream software channels.
Social networking has probably been the most recent unforeseen phenomenon missed by the technology pundits.
Cringely, your blog is broken. Please fix it.
I wrote a longish post and submitted it several times, but it never showed up. But this does show up.
Actually, it wasn’t that long compared with a couple of the comments that did make it through. There’s something about the content that’s blocking the comments from posting. I guess I’ll do the bisect and test technique for finding the problem. Yay for lots of small comments.
It looks like this blogging platform didn’t like several of the links that I put into the post using <a> tags. I don’t see any pattern to this list.
A collection of quotes:
http://web.onetel.net.uk/~hibou/Why%20Java%20is%20Not%20My%20Favourite%20Programming%20Language.html
An explanation of RAII:
https://www.hackcraft.net/raii/
The version of Scheme on my handheld computer:
https://www.mazama.net/scheme/pscheme.htm
I approved your comments. I set Spam prevention to block out any comment with more than a few links. Bob gets a LOT of Spam comments, so I had to be somewhat stringent when cleaning them up.
I doubt the problem we are all seeing has anything to do with spam. Check out this column: https://www.cringely.com/2009/12/15/fedex-kinkos-wont-print-our-christmas-card/ . 800 comments in a few weeks and each one was posted and displayed immediately. No delays, no errors, no disagreement between the comment count on the index page and the article page, and very little spam if any. All the problems that started this year could be eliminated by going back to the old website.
Oh yeah, the old website never produced this “Duplicate comment detected; it looks as though you’ve already said that!” nonsense, since posts appeared immediately, nor error messages saying “the website is not responding”.
Hey Cringely,
I started reading this series from the first chapter. A few days after that, I actually bought the book “Accidental Empires” WITHOUT KNOWING that you are publishing the ENTIRE BOOK on this web site. When I received the book and went through it, I realized that I had already read 4 chapters on this website. Alas! I wasted money (Indian Rupees 972, around USD 17) buying that book, instead of reading all the chapters on this site!
Anyways, this is a great series of chapters that provides an interesting perspective on the origins and early stages of the PC industry.
Thanks for the series!
Ah, what a time capsule. Things that happened just after this was written:
1. Apple bought NeXT, bringing Steve Jobs back into Apple and getting NeXTStep as the core of their new Unix-compatible OS. Who ever expected to see Steve Jobs back in charge of Apple?
2. Microsoft’s new 32-bit OS, Windows-95, turned out to be an enormously successful application of their Windows-NT technology. And yes, you could STILL ‘shell-out’ to DOS.
3. The rise of the Web-Site. The WWW technology and Web-Browser were built on top of the Internet infrastructure, but provided a very convenient and easy-to-use way to access other people’s web-sites.
Each of these steps dramatically improved the image of the future for Microsoft, Apple, Steve Jobs, and the rest of us.
Ugh. You don’t understand Windows.
Windows 95 is not NT. It has libraries and APIs from NT, but it’s enormously crippled by running them on DOS. Windows 98 and Windows ME are also this way.
Windows XP and later are variants of Windows NT. They run the same ugly API as Windows 95, but they run it on a modern kernel with memory protection, security systems, and other features. There is no “shell-out to DOS” in modern versions of Windows, because there is no DOS running underneath.
Modern versions of Windows do include a Command Prompt, and if you have special access to Microsoft, you may encounter the MinWin project. It is superficially similar to DOS, but running programs in the command prompt is different than running programs in DOS.
In DOS, programs run directly on the hardware, using the BIOS to abstract away some of the differences between different machines. In a modern Command Prompt, programs run in virtual environments with all the security protections of a normal program. The NT kernel underneath the Command Prompt forbids access to the BIOS, unless you elevate the privileges of the program. Some DOS programs run in Windows, because Windows includes a DOS emulator, again preventing access to the hardware.
Modern versions of Windows are superficially similar to DOS-based Windows, but they’re actually more similar to Unix or OpenVMS than DOS.
Agreed. Windows 9x is fast on slow hardware, while NT/XP/Vista/7/8 manage to be slow even on fast hardware!
Hi Bob,
Still a great read after all these years, and as crystal ball readers go, you have a pretty good track record, especially considering how far off “experts” like Gartner were.
One current trend I find incomprehensible, and look forward to seeing play out, is the way people mindlessly install just about anything on their mobile devices, without a second’s thought for security and privacy, while at the same time being incredibly reluctant to do the same on laptops and desktops. While app stores provide some safety from viruses, under the hood mobile platforms are no more secure than traditional desktops. Many of these apps request access to content (email, contacts, etc.) that you wouldn’t dream of trusting to anyone in the offline world.
Since the most downloaded mobile apps are simply richer interfaces for existing web sites, will they go away when (if?) mobile development improves? In the past many of these would have been built with Flash or HTML5, until Moore’s Law segued into “twice the cores! half the price! … but your app isn’t any quicker!”. Will app stores and stalled performance gains result in the same custom app installing trend taking hold in the new homogenized mobile/desktop market?
Interesting times.
Bob,
First, let me add to the thank-yous. As someone who’d never read Accidental Empires before, I appreciate the chance to do so without plunking down $16.99 (ok, I see Amazon has it for $11.60). Yes, I know you’ve got three boys to put through college, but I have three girls of my own who are going to need tuition money.
Second, as I recall, one of your purposes in this exercise was to collect experiences and reminiscences from us readers in an effort to add more perspective (or hindsight?). At times, it seems what you got was reports of typographical errors, and a number of us, in one way or another, feeling that the book is dated by the fact that it stops in the mid-’90s, when clearly so much more has happened since then.
In a small effort to counter that, I’ll try to close with a small insight.
I’ve been in the IT industry for exactly 30 years, mostly doing operations or systems integration work. Unlike those profiled in your book, I’ve never been a visionary, just someone who could figure out systems and make them work for whichever company has been kind enough to employ me. However, there was one moment where I saw a technology in its infancy and knew instinctively that it was going to change the world. This came one day, after ten years or so of seeing users struggle with mainframes, minis, and most of the stuff you’ve described in the book. A colleague of mine showed me an application it had taken days to download via FTP and configure. It was running on a Windows 3.1 workstation connected to our Netware LAN, tunneling through the campus backbone and making a tentative connection out a Cisco router through the new campus Internet connection. It was a very early version of Netscape, and it was as slow as molasses. But we both just knew that, once hardware and networks became fast and reliable enough to support it, this was going to take computing out of the hands of the few of us who worked with it every day (“the kids in the computer lab,” as one of our VPs had put it) and make it truly usable by pretty much everyone. I suppose this was the same way some IT professionals of the early ’80s felt when they realized those toys with the funny names like Kaypro, Macintosh, Compaq, and such were getting a foothold. That said, Accidental Empires ends at just the right point – sort of the end of an era. On the other hand, don’t let that stop you from writing another volume on what’s gone on since 1995. If you do, I’ll even buy a copy, tuition bills be damned.
“…I saw a technology in its infancy and knew instinctively that it was going to change the world…” I had the same feeling when I first discovered Napster.
Yes, let’s leave behind us Triumph of the Nerds and Accidental Empires and segue into Triumph of the Giants and Ruthless Empires. What do you say, Bob?
It would also be interesting to write an “Alternate Reality” version guessing how our current software patents, trolls and all would’ve affected some of the companies and technologies you covered in Accidental Empires.
Could even go so far as how the current laws, immigration policies, and stricter or even zealous enforcement have changed things. Yes, I’m thinking of Aaron Swartz and the clever comment comparing him to Jobs and Woz phreaking and Gates’ virus (https://www.technewsdaily.com/16431-are-you-looking-at-this-website-you-might-be-breaking-the-law.html). Did Gates really do such a thing? I’ve heard (probably from you) about Jobs and Woz selling early devices to get free phone calls?
I had no problem with how dated the material is. That stuff actually happened. “Those who cannot remember the past…” and all that.
What I found a bit sad was the West Coast provincialism of the book. The only major East Coast character was doomed IBM. Nothing about the research going on at Bell Labs in New Jersey (transistors, Unix, information theory). Nor about the hacker culture of Boston, which gave us the Free Software movement.
I grant that PCs of the time sucked badly, so things like Unix were not useful until several years after Linux started. Even then, it took a long time before Linux really took off as a kernel for smartphones. It’s something that became obviously important long after it happened, unlike the accidental empires of Silicon Valley.
First comment after finishing the reading. Probably should have done this earlier, when ’80s computers were the topic.
I “grew up” at Radio Shack. I was in the store and helped the manager open the first TRS-80 boxes. I assembled the computer myself. I was twelve at the time. Spent the next several years learning to “program” in BASIC. I knew the ins and outs of TRS-DOS pretty well and did my share of “peeking” and “poking” in the memory.
I wrote a database program in BASIC that I used as a “little black book”, because I was a geek. I transitioned that code into a program that would do the daily reports for the store manager. After three months of sending in his printed (versus hand written) reports he got a call from Fort Worth asking why he was taking so much time to type out his reports. He told them to look closely and they’d see it was dot-matrix. I was asked if I could modify the program to account for fifteen sales staff (each page had room for seven) to accommodate their largest store during xmas. I did and turned the code over – without a copyright or royalties. I was fifteen years old at the time – an idiot looking back. I did all of that on a Model II – Level II with Level II BASIC. The program was about 700 lines.
When the Model 4 came out it also came with some new software called DeskMate (sold separately). This was the first “office suite” I can remember. It had a text editor, a spreadsheet (99 x 99), dialer software (for all those BBS’s I connected to), file manager, a calendar and mail.
http://users.abo.fi/adeheer/students/itdeskmt.gif
I used the crap out of that package but have never seen it given any accolades for being the first to combine useful tools in a single, shrink-wrap case.
Yes, it was DOS-based and yes, it was very limited in what it could do. But it was available in 1982, a full decade before any other “suite”. I miss my DeskMate. 🙁
This has been great. Thanks Bob!
Being only slightly younger than Bob, I lived through this period, but it’s nice to have an insider’s perspective.
Thanks for the book, the tv show and this serial re-telling of the story.
“Marshall McLuhan said that we convert our obsolete technologies into art forms, but, trust me, nobody is going to make a hobby of collecting $1 million mainframe computers.”
I just attended a mixer at the new Living Computer Museum. livingcomputermuseum.org
Again an exceptional read X!
But a warning from the past, the distant past.
Alexandria in Egypt, 1800-odd years ago: the data network was the library, and by law all books had to be there. Unlike libraries today, you bought the book you wanted and waited a week to have it written out for you. If you had a new book, you were compelled to give it to the library to have it copied.
Well, fundamentalists burned the books, and 90% of past books were lost. Caesarean births are named for Julius Caesar – but nobody can tell me what happened to the mother after the birth in 90 BCE!! Yes, 2100 years ago!!
China under Mao did the same in the cultural revolution in the 60’s. Saudi Arabia did the same in the 80’s with buildings in Mecca that Mohamed passed or lived in.
The only way to preserve the past is to have millions of artifacts slowly decaying, so that one or two will preserve the knowledge in them for future use.
Cloud repositories like Alexandria’s library are easy to destroy, and then humans return to the dark ages.*
There are fundamentalists everywhere; Bush43 was one – he believed banks were perfect and deserved not to pay their share of tax to the USA and did not need to abide by the laws governing you and me. Fundamentalists believe crap, without examination, and cause irreparable damage to societies. The Japanese ruling class after Fukushima (Fu(c)k us hima) are idiot fundamentalists. Google believes that Apple’s patents are free and theirs deserve the full force of the law to extract millions from Apple.
It is pockets of preserved knowledge that keeps progress moving forward.
And I fear the future that fundamentalists will impose on us.
* USA in the cold war destroyed military technology plans so that the USSR could not access it!! USA has lost that knowledge!!