I don’t think posting pieces of chapters is working for any of us, so I’m changing the plan.
We have 16 chapters to go in the book so I’ll be posting in their entirety two chapters per week for the next eight weeks. The chapters will be posted late on Sunday and Wednesday nights so you will have several days to read and comment.
Down at the Specks Howard School of Blogging Technique they teach that this is blogging suicide because these chapters are up to 7000 words long! Blog readers are supposed to have short attention spans so I’ll supposedly lose readers by doing it this way. But I think Specks is wrong and smart readers want more to read, not less — if the material is good. You decide.
See you in the comments, then, and again late Wednesday night.
ACCIDENTAL EMPIRES — CHAPTER TWO
THE TYRANNY OF THE NORMAL DISTRIBUTION
This chapter is about smart people. My own, highly personal definition of what it means to be smart has changed over the years. When I was in the second grade, smart meant being able to read a word like Mississippi and then correctly announce how many syllables it had (four, right?). During my college days, smart people were the ones who wrote the most complex and amazing computer programs. Today, at college plus twenty years or so, my definition of smart means being able to deal honestly with people yet somehow avoid the twin perils of either pissing them off or of committing myself to a lifetime of indentured servitude by trying too hard to be nice. In all three cases, being smart means accomplishing something beyond my current level of ability, which is probably the way most other folks define it. Even you.
But what if nothing is beyond your ability? What if you’ve got so much brain power that little things like getting through school and doing brain surgery (or getting through school while doing brain surgery) are no big sweat? Against what, then, do you measure yourself?
Back in the 1960s at MIT, there was a guy named Harvey Allen, a child of privilege for whom everything was just that easy, or at least that’s the way it looked to his fraternity brothers. Every Sunday morning, Harvey would wander down to the frat house dining room and do the New York Times crossword puzzle before breakfast—the whole puzzle, even to the point of knowing off the top of his head that Nunivak is the seven-letter name for an island in the Bering Sea off the southwestern coast of Alaska.
One of Harvey Allen’s frat brothers was Bob Metcalfe, who noticed this trick of doing crossword puzzles in the time it took the bacon to fry and was in awe. Metcalfe, no slouch himself, eventually received a Ph.D., invented the most popular way of linking computers together, started his own company, became a multimillionaire, put his money and name on two MIT professorships, moved into a 10,000-square-foot Bernard Maybeck mansion in California, and still can’t finish the New York Times crossword, which continues to be his definition of pure intelligence.
Not surprisingly, Harvey Allen hasn’t done nearly as much with his professional life as Bob Metcalfe has because Harvey Allen had less to prove. After all, he’d already done the crossword puzzle.
Now we’re sitting with Matt Ocko, a clever young programmer who is working on the problem of seamless communication between programs running on all different types of computers, which is something along the lines of getting vegetables to talk with each other even when they don’t want to. It’s a big job, but Matt says he’s just the man to do it.
Back in North Carolina, Matt started DaVinci Systems to produce electronic mail software. Then he spent a year working as a programmer at Microsoft. Returning to DaVinci, he wrote an electronic mail program now used by more than 500,000 people, giving Matt a net worth of $1.5 million. Eventually he joined a new company, UserLand Software, to work on the problem of teaching vegetables to talk. And somewhere in there, Matt Ocko went to Yale. He is 22 years old.
Sitting in a restaurant, Matt drops every industry name he can think of and claims at least tangential involvement with every major computer advance since before he was born. Synapses snapping, neurons straining near the breaking point—for some reason he’s putting a terrific effort into making me believe what I always knew to be true: Matt Ocko is a smart kid. Like Bill Gates, he’s got something to prove. I ask him if he ever does the New York Times crossword.
Personal computer hardware and software companies, at least the ones that are doing new and interesting work, are all built around technical people of extraordinary ability. They are a mixture of Harvey Allens and Bob Metcalfes—people who find creativity so effortless that invention becomes like breathing or who have something to prove to the world. There are more Bob Metcalfes in this business than Harvey Allens but still not enough of either type.
Both types are exceptional. They are the people who are left unchallenged by the simple routine of making a living and surviving in the world and are capable, instead, of first imagining and then making a living from whole new worlds they’ve created in the computer. When balancing your checking account isn’t, by itself, enough, why not create an alternate universe where checks don’t exist, nobody really dies, and monsters can be killed by jumping on their heads? That’s what computer game designers do. They define what it means to be a sky and a wall and a man, and to have color, and what should happen when man and monster collide, while the rest of us just try to figure out whether interest rates have changed enough to justify refinancing our mortgages.
Who are these ultrasmart people? We call them engineers, programmers, hackers, and techies, but mainly we call them nerds.
Here’s your father’s image of the computer nerd: male, a sloppy dresser, often overweight, hairy, and with poor interpersonal communication skills. Once again, Dad’s wrong. Those who work with nerds but who aren’t themselves programmers or engineers imagine that nerds are withdrawn—that is, until they have some information the nerd needs or find themselves losing an argument with him. Then they learn just how expressive a nerd can be. Nerds are expressive and precise in the extreme but only when they feel like it. They look the way they do as a deliberate statement about personal priorities, not because they’re lazy. Their mode of communication is so precise that they can seem almost unable to communicate. Call a nerd Mike when he calls himself Michael and he likely won’t answer, since you couldn’t possibly be referring to him.
Out on the grass beside the Department of Computer Science at Stanford University, a group of computer types has been meeting every lunchtime for years and years just to juggle together. Groups of two, four, and six techies stand barefoot in the grass, surrounded by Rodin sculptures, madly flipping Indian clubs through the air, apparently aiming at each other’s heads. As a spectator, the big thrill is to stand in the middle of one of these unstable geometric forms, with the clubs zipping past your head, experiencing what it must be like to be the nucleus of an especially busy atom. Standing with your head in their hands is a good time, too, to remember that these folks are not the way they look. They are precise, careful, and . . .
POW!!
“Oh, SHIT!!!!!!”
“Sorry, man. You okay?”
One day in the mid-1980s, Time, Newsweek, and the Wall Street Journal simultaneously discovered the computer culture, which they branded instantly and forever as a homogenized group they called nerds, who were supposed to be uniformly dressed in T-shirts and reeking of Snickers bars and Jolt cola.
Or just reeking. Nat Goldhaber, who founded a software company called TOPS, used to man his company’s booth at computer trade shows. Whenever a particularly foul-smelling man would come in the booth, Goldhaber would say, “You’re a programmer, aren’t you?” “Why, yes,” he’d reply, beaming at being recognized as a stinking god among men.
The truth is that there are big differences in techie types. The hardware people are radically different from the software people, and on the software side alone, there are at least three subspecies of programmers, two of which we are interested in here.
Forget about the first subspecies, the lumpenprogrammers, who typically spend their careers maintaining mainframe computer code at insurance companies. Lumpenprogrammers don’t even like to program but have discovered that by the simple technique of leaving out the comments—the clues, labels, and directions written in English that they are supposed to sprinkle in among their lines of computer code—their programs are rendered undecipherable by others, guaranteeing them a lifetime of dull employment.
The two programmer subspecies that are worthy of note are the hippies and the nerds. Nearly all great programmers are one type or the other. Hippy programmers have long hair and deliberately, even pridefully, ignore the seasons in their choice of clothing. They wear shorts and sandals in the winter and T-shirts all the time. Nerds are neat little anal-retentive men with penchants for short-sleeved shirts and pocket protectors. Nerds carry calculators; hippies borrow calculators. Nerds use decongestant nasal sprays; hippies snort cocaine. Nerds typically know forty-six different ways to make love but don’t know any women.
Hippies know women.
In the actual doing of that voodoo that they do so well, there’s a major difference, too, in the way that hippies and nerds write computer programs. Hippies tend to do the right things poorly; nerds tend to do the wrong things well. Hippie programmers are very good at getting a sense of the correct shape of a problem and how to solve it, but when it comes to the actual code writing, they can get sloppy and make major errors through pure boredom. For hippie programmers, the problem is solved when they’ve figured out how to solve it rather than later, when the work is finished and the problem no longer exists. Hippies live in a world of ideas. In contrast, the nerds are so tightly focused on the niggly details of making a program feature work efficiently that they can completely fail to notice major flaws in the overall concept of the project.
Conventional wisdom says that asking hippies and nerds to work together might lead to doing the wrong things poorly, but that’s not so. With the hippies dreaming and the nerds coding, a good combination of the two can help keep a software development project both on course and on schedule. The real problem is finding such superprogrammers in the first place. Often they hide.
**********
Back in the 1950s, a Harvard psychologist named George A. Miller wrote “The Magical Number Seven, Plus or Minus Two,” a landmark journal article. Miller studied short-term memory, especially the quick memorization of random sequences of numbers. He wanted to know, going into the study, how many numbers people could be reliably expected to remember a few minutes after having been told those numbers only once.
The answer—the magical number—was about seven. Grab some people off the street, tell them to remember the numbers 2-4-3-5-1-8-3 in that order, and most of them could, at least for a while. There was variation in ability among Miller’s subjects, with some people able to remember eight or nine numbers and an equal number of people able to remember only five or six numbers, so he figured that seven (plus or minus two) numbers accurately represented the ability range of nearly the entire population.
Miller’s concept went beyond numbers, though, to other organizations of data. For example, most of us can remember about seven recently learned pieces of similarly classified data, like names, numbers, or clues in a parlor game.
You’re exposed to Miller’s work every time you dial a telephone, because it was a factor in AT&T’s decision to standardize on seven-digit local telephone numbers. Using longer numbers would have eliminated the need for area codes, but then no one would ever be able to remember a number without first writing it down.
Even area codes follow another bit of Miller’s work. He found that people could remember more short-term information if they first subdivided the information into pieces—what Miller called “chunks.” If I tell you that my telephone number is (707) 525-9519 (it is; call any time), you probably remember the area code as a separate chunk of information, a single data point that doesn’t significantly affect your ability to remember the seven-digit number that follows. The area code is stored in memory as a single three-digit number—707—related to your knowledge of geography and the telephone system, rather than as the random sequence of one-digit numbers—7-0-7—that relates to nothing in particular.
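To make chunking concrete, here is a minimal Python sketch (my illustration, not Miller’s experimental protocol), using the phone number above:

import math  # not needed here, just the standard library

def chunk_phone_number(digits):
    # Split a 10-digit number into familiar chunks: area code, prefix, line.
    assert len(digits) == 10 and digits.isdigit()
    return [digits[:3], digits[3:6], digits[6:]]

raw = "7075259519"
chunks = chunk_phone_number(raw)
print(len(raw), "digits unchunked")    # 10 separate items to hold in memory
print(len(chunks), "chunks:", chunks)  # 3 items: ['707', '525', '9519']

Ten unrelated digits sit at the ragged edge of seven plus or minus two; three familiar chunks are no strain at all.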
We store and recall memories based on their content, which explains why jokes are remembered by their punch lines, eliminating the possibility of mistaking “Why did the chicken cross the road?” for “How do you get to Carnegie Hall?” It’s also why remembering your way home doesn’t interfere with remembering your way to the bathroom: the sets of information are maintained as different chunks in memory.
Some very good chess players use a form of chunking to keep track of the progress of a game by taking it to a higher level of abstraction in their minds. Instead of remembering the changing positions of each piece on the board, they see the game in terms of flowing trends, rather like the intuitive grammar rules that most of us apply without having to know their underlying definitions. But the very best chess players don’t play this way at all: they effortlessly remember the positions of all the pieces.
As in most other statistical studies, Miller used a random sample of a few hundred subjects intended to represent the total population of the world. It was cheaper than canvassing the whole planet, and not significantly less accurate. The study relied on Miller’s assurance that the population of the sample studied and that of the world it represented were both “normal”—a statistical term that allows us to generalize accurately from a small, random sample to a much larger population from which that sample has been drawn.
Avoiding a lengthy explanation of bell-shaped curves and standard deviations, please trust George Miller and me when we tell you that this means 99.7 percent of all people can remember seven (plus or minus two) numbers. Of course, that leaves 0.3 percent, or 3 out of every 1,000 people, who can remember either fewer than five numbers or more than nine. As true believers in the normal distribution, we know it’s symmetrical, which means that just about as many people can remember more than nine numbers as can remember fewer than five.
In fact, there are learning-impaired people who can’t remember even one number, so it should be no surprise that 0.15 percent, or 3 out of every 2,000 people, can remember fewer than five numbers, given Miller’s test. Believe me, those three people are not likely to be working as computer programmers. It is the 0.15 percent on the other side of the bell curve that we’re interested in—the 3 out of every 2,000 people who can remember more than nine numbers. There are approximately 375,000 such people living in the United States, and most of them would make terrific computer programmers, if only we could find them.
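For readers who want to check that arithmetic, here is a quick Python sketch. It assumes, as the text does, that the seven-plus-or-minus-two range corresponds to three standard deviations of a normal distribution, and it uses a rough circa-1992 U.S. population figure:

import math

def upper_tail(z):
    # Fraction of a normal population more than z standard deviations above the mean.
    return 0.5 * math.erfc(z / math.sqrt(2))

exact = upper_tail(3)              # ~0.00135, i.e. about 1.35 in 1,000
rounded = 3 / 2000                 # the text's 0.15 percent: 3 in every 2,000
us_population = 250_000_000        # rough U.S. population, circa 1992
print(f"exact upper tail: {exact:.5f}")
print(f"potential super-programmers: {rounded * us_population:,.0f}")  # 375,000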
So here’s my plan for leading the United States back to dominance of the technical world. We’ll run a short-term memory contest. I like the idea of doing it like those correspondence art schools that advertise on matchbook covers and run ads in women’s magazines and Popular Mechanics—you know, the ones that want you to “draw Skippy.”
“Win Big Bucks Just by Remembering 12 Numbers!” our matchbooks would say.
Wait, I have a better idea! We could have the contest live on national TV, and the viewers would call in on a 900 number that would cost them a couple of bucks each to play. We’d find thousands of potential top programmers who all this time were masquerading as truck drivers and cotton gin operators and beauticians in Cheyenne, Wyoming—people you’d never in a million years know were born to write software. The program would be self-supporting, too, since we know that less than 1 percent of the players would be winners. And the best part of all about this plan is that it’s my idea. I’ll be rich!
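For the curious, the screening step of that contest is only a few lines of code. A minimal sketch (an illustration, not a validated digit-span test):

import random
import time

def digit_span_test(length=12, exposure_seconds=5):
    # Show a random digit sequence briefly, then ask for it back.
    sequence = "".join(random.choice("0123456789") for _ in range(length))
    print("Memorize:", sequence)
    time.sleep(exposure_seconds)
    print("\n" * 40)  # crude screen clear so the sequence scrolls away
    answer = input("Type the digits back: ").strip()
    return answer == sequence

if digit_span_test():
    print("Win Big Bucks! You may be a born programmer.")
else:
    print("Keep the day job.")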
Behind my dreams of glory lies the fact that nearly all of the best computer programmers and hardware designers are people who would fall off the right side of George Miller’s bell curve of short-term memory ability. This doesn’t mean that being able to remember more than nine numbers at a time is a prerequisite for writing a computer program, just that being able to remember more than nine numbers at a time is probably a prerequisite for writing a really good computer program.
Writing software or designing computer hardware requires keeping track of the complex flow of data through a program or a machine, so being able to keep more data in memory at a time can be very useful. In this case, the memory we are talking about is the programmer’s, not the computer’s.
The best programmers find it easy to remember complex things. Charles Simonyi, one of the world’s truly great programmers, once lamented the effect age was having on his ability to remember. “I have to really concentrate, and I might even get a headache just trying to imagine something clearly and distinctly with twenty or thirty components,” Simonyi said. “When I was young, I could easily imagine a castle with twenty rooms with each room having ten different objects in it. I can’t do that anymore.”
Stop for a moment and look back at that last paragraph. George Miller showed us that only 3 in 2,000 people can remember more than nine simultaneous pieces of short-term data, yet Simonyi looked wistfully back at a time when he could remember 200 pieces of data, and still claimed to be able to think simultaneously of 30 distinct data points. Even in his doddering middle age (Simonyi is still in his forties), that puts the Hungarian so far over on the right side of Miller’s memory distribution that he is barely on the same planet with the rest of us. And there are better programmers than Charles Simonyi.
Here is a fact that will shock people who are unaware of the way computers and software are designed: at the extreme edges of the normal distribution, there are programmers who are 100 times more productive than the average programmer simply on the basis of the number of lines of computer code they can write in a given period of time. Going a bit further, since some programmers are so accomplished that their programming feats are beyond the ability of most of their peers, we might say that they are infinitely more productive for really creative, leading-edge projects.
The trick to developing a new computer or program, then, is not to hire a lot of smart people but to hire a few very smart people. This rule lies at the heart of most successful ventures in the personal computer industry.
Programs are written in a code that’s referred to as a computer language, and that’s just what it is—a language, complete with subjects and verbs and all the other parts of speech we used to be able to name back in junior high school. Programmers learn to speak the language, and good programmers learn to speak it fluently. The very best programmers go beyond fluency to the level of art, where, like Shakespeare, they create works that have value beyond that even recognized or intended by the writer. Who will say that Shakespeare isn’t worth a dozen lesser writers, or a hundred, or a thousand? And who can train a Shakespeare? Nobody; they have to be born.
But in the computer world, there can be such a thing as having too much gray matter. Most of us, for example, would decide that Bob Metcalfe was more successful in his career than Harvey Allen, but that’s because Metcalfe had things to prove to himself and the world, while Harvey Allen, already supreme, did not.
Metcalfe chose being smart as his method of gaining revenge against those kids who didn’t pick him for their athletic teams back in school on Long Island, and he used being smart as a weapon against the girls who broke his heart or even in retaliation for the easy grace of Harvey Allen. Revenge is a common motivation for nerds who have something to prove.
The Harvey Allens of the world can apply their big brains to self-delusion, too, with great success. Donald Knuth is a Stanford computer science professor generally acknowledged as having the biggest brain of all—so big that it is capable on occasion of seeing things that aren’t really there. Knuth, a nice guy whose first-ever publication was “The Potrzebie System of Weights and Measures” (“one-millionth of a potrzebie is a farshimmelt potrzebie”), in the June 1957 issue of Mad magazine, is better known for his multivolume work The Art of Computer Programming, the seminal scholarly work in his field.
The first volume of Knuth’s series (dedicated to the IBM 650 computer, “in remembrance of many pleasant evenings”) was printed in the late 1960s using old-fashioned but beautiful hot-type printing technology, complete with Linotype machines and the sharp smell of molten lead. Volume 2, which appeared a few years later, used photo-offset printing to save money for the publisher (the publisher of this book, in fact). Knuth didn’t like the change from hot type to cold, from Lino to photo, and so he took a few months off from his other work, rolled up his sleeves, and set to work computerizing the business of setting type and designing type fonts. Nine years later, he was done.
Knuth’s idea was that, through the use of computers, photo-offset printing (and especially the printing of numbers and mathematical formulas) could be made as beautiful as hot type. This was like Prometheus giving fire to humans, and as ambitious, though well within the capability of Knuth’s largest of all brains.
He invented a text formatting language called TeX, which could drive a phototypesetter or laser printer to place type images on the page as well as or better than the old Linotype, and he invented another language, Metafont, for designing whole families of fonts. Describe the shapes of a typeface’s letters once in Metafont, and it could generate whole matching families of variations (bolder, narrower, wider, slanted) from the same description.
When he was finished, Don Knuth saw that what he had done was good, and said as much in volume 3 of The Art of Computer Programming, which was typeset using the new technology.
It was a major advance, and in the introduction he proudly claimed that the printing once again looked just as good as the hot type of volume 1.
Except it didn’t.
Reading his introduction to volume 3, I had the feeling that Knuth was wearing the emperor’s new clothes. Squinting closely at the type in volume 3, I saw the letters had that telltale look of a low-resolution laser printer—not the beautiful, smooth curves of real type or even of a photo typesetter. There were “jaggies”—little bumps that make all the difference between good type and bad. Yet here was Knuth, writing the same letters that I was reading, and claiming that they were beautiful.
“Donnie,” I wanted to say. “What are you talking about? Can’t you see the jaggies?”
But he couldn’t. Donald Knuth’s gray matter, far more powerful than mine, was making him look beyond the actual letters and words to the mathematical concepts that underlay them. Had a good enough laser printer been available, the printing would have been beautiful, so that’s what Knuth saw and I didn’t. This effect of mind over what matters is both a strength and a weakness for those, like Knuth, who would break radical new ground with computers.
Unfortunately for printers, most of the rest of the world sees like me. The tyranny of the normal distribution is that we run the world as though it was populated entirely by Bob Cringelys, completely ignoring the Don Knuths among us. Americans tend to look at research like George Miller’s and use it to custom-design cultural institutions that work at our most common level of mediocrity—in this case, the number seven. We cry about Japanese or Korean students having higher average math scores in high school than American students. “Oh, no!” the editorials scream. “Johnny will never learn FORTRAN!” In fact, average high school math scores have little bearing on the state of basic research or of product research and development in Japan, Korea, or the United States. What really matters is what we do with the edges of the distribution rather than the middle. Whether Johnny learns FORTRAN is relevant only to Johnny, not to America. Whether Johnny learns to read matters to America.
This mistaken trend of attributing average levels of competence or commitment to the whole population extends far beyond human memory and computer technology to areas like medicine. Medical doctors, for example, say that spot weight reduction is not possible. “You can reduce body fat overall through dieting and exercise, but you can’t take fat just off your butt,” they lecture. Bodybuilders, who don’t know what the doctors know, have been doing spot weight reduction for years. What the doctors don’t say out loud when they make their pronouncements on spot reduction is that their definition of exercise is 20 minutes, three times a week. The bodybuilder’s definition of exercise is more like 5 to 7 hours, five times a week—up to thirty-five times as much.
Doctors might protest that average people are unlikely to spend 35 hours per week exercising, but that is exactly the point: Most of us wouldn’t work 36 straight hours on a computer program either, but there are programmers and engineers who thrive on working that way.
Average populations will always achieve only average results, but what we are talking about are exceptional populations seeking extraordinary results. In order to make spectacular progress, to achieve profound results in nearly any field, what is required is a combination of unusual ability and profound dedication—very unaverage qualities for a population that typically spends 35 hours per week watching television and less than 1 hour exercising.
Brilliant programmers and champion bodybuilders already have these levels of ability and motivation in their chosen fields. And given that we live in a society that can’t seem to come up with coherent education or exercise policies, it’s good that the hackers and iron-pumpers are self-motivated. Hackers will seek out and find computing problems that challenge them. Bodybuilders will find gyms or found them. We don’t have to change national policy to encourage bodybuilders or super-programmers.
All we have to do is stay out of their way.
Minor spelling mistake: Harvey Alien rather than Harvey Allen in a few places (unless it’s a joke, maybe it is. Not sure now, it’s not consistent anyway)
Not a joke, a minor OCR issue.
Also: I’ll be richl
I think that l should be a !
I do LOTS of things at once and was intrigued by the concept that people can understand a word as long as all the letters are in the word AND the first and last letters are correct. So … I never caught the “Alien” . Fast typers do not have a high accuracy rate I’ve found – if you want to get a lot of words in.
Yes, on the Bodybuilder comment, Agree.
I guess you would end up with lots of monkeys if you used your number-memorizing-contest to select programmers. See here: https://www.youtube.com/watch?v=ravykEih1rE
POW1I is another OCR error. Should be POW!!
You say: Draw a letter “A,” and Metafont could generate a matching set of the other twenty-five letters of the alphabet.
I have used TeX, LaTeX, and Metafont. Your implication that given A, it can generate matching letters, does not match my experience. You have to design each letter. Or there is a magic option that I never noticed 🙂
It should probably be reworded. I don’t have a better wording to suggest.
I guess that’s why Bill G gave job applicants the manhole cover test. He wanted to hire only the best brains. Too bad many of the best brains he hired were shot down by his overbearing management style. Then again, Bill G used his Attila the Hun marketing style to drive his company to dominance. So is brain power really that important?
Could you post some stats about user behaviour with this little test of long blog posts?
Thanks
OK. There is a 100% chance that comments will be made.
I guess I learned a different mythology in school. In mine, Prometheus gave fire to humans. And, paid a terrible price.
Oh, well.
I guess that’s why I write operating systems, device drivers, firmware and cryptographic systems.
Another OCR bug – the middle “e” of TeX has been mangled into a subscript.
That’s not a bug — that’s actually how TeX is written. (Except here, where the comment formatting doesn’t support subscripts.)
Not that it detracts from your point but I do not think the Sunday New York Times crossword is the most difficult. I think it is set up to get more difficult Monday to Saturday with Sunday being on the “sort of difficult” scale. (I manage to complete most Monday to Thursday puzzles but can’t get anywhere with Friday or Saturday. Usually, I can get some of a Sunday puzzle.)
Another minor point is that I am guessing you put your current phone number in place of the one you had in the 90’s but later in the paragraph you refer to area code 415 instead of the current “707”.
Again minor, but I think the earliest versions of TeX were designed to drive a phototypesetter. Laser printer technology at the time was about 300 dots per inch (dpi) and phototypesetters were more like 1200 dpi. The typeset output looked pretty good but you would do draft copies on a cheaper laser printer.
I like the chapter at a time format. It really doesn’t take that long to read, and has the benefit of better continuity of thought.
Being a programmer I can see how being able to juggle more things in my memory would help with solving some problems. Since my brain can’t handle any more simultaneous items than the average person I rely on various crutches like writing lists or creating diagrams. It is kind of like swapping data from RAM to hard disk; it works, but is kind of slow.
I think with the trend towards higher level languages the need for explicitly remembering things isn’t as vital as it was. In Java or .NET you rarely need the ability to visualize abstract things like pointers to pointers, but that sort of thing was done frequently in C. On the other hand the newer tools have more code supporting more features spread across more files, so organization is quite a bit harder. I guess the biggest change is that tools can help you stay organized easier than they can help you keep your indirect memory references straight.
Yeah, code completion really helps to avoid those typos and remembering class/method names.
In addition to the commonly-used programming languages of today needing less brainpower to use because they are not primary or secondary, but rather tertiary to the hardware, I think another item is in play.
In the 1970’s and 1980’s, we had specific problems we knew that software could help us solve. We needed to write letters and books and research papers, so we needed to create a word processor. We needed to create plans to attack successful businesses to take them private and destroy large economic values, so we needed to create spreadsheets.
Today, the obvious things that make people more productive (word processors, spreadsheets) have been invented. Today’s software seems to be more about distracting people and entertaining them with infotainment. For this class of activity, you need an idea man who can think up a novelty to distract the consumer, and someone to run the sweatshop where the coding can be done in some tertiary language on the cheap.
The need for the Simonyis of the world that existed in Sunnyvale and Mountain View twenty years ago seems to have gone away.
It all depends on who you ask. If you ask me, I see problems on a daily basis that are the result of this mindset, the idea that programmers are all fungible – and the more bodies you throw at a problem the faster you can solve the problem. The reality is we are wasting resources and time creating bloated code that is faulty. In my own case, my team ends up having to rewrite code to make up for the deficiencies – deficiencies that run the gamut from simple annoyances for customers, to security breaches and loss of data. Businesses and the customers they serve suffer because of it.
Therefore we still need the high performing developers – if we expect to compete effectively.
IDE tools may have improved but the imperative programming languages used today are still no better than 20 years ago. So it’s improved the K-loc per programmer, so what? The code bloat from OO languages such as C++ can’t be good for overall stability (or security), driven by the K-loc productivity mindset.
C++ has its good points but it’s also a minefield of things to trip you up if you’re temporarily absent-minded about obscure things lurking in the language. You only have to pick up Scott Meyers’ Effective C++ books to realise how many there are.
The main problem is we’re still programming from a machine-oriented perspective. Just like in the 1950s. OK, so the language of choice isn’t assembly anymore, but the problems we want to solve are obscured in ‘machine-thinking’ idiosyncrasies, and these have to be explicitly stated in the code we write. Word size, signed or unsigned ints, floats, strings, characters, structs, unions. All about ‘the machinery’ and nothing to do with the problem description. This all gets in the way of stating problems clearly.
Functional languages are very good at describing problems in the problem domain, but just seem a bit long-winded compared to our imperative tools. This makes change difficult, but proving software correctness is near impossible with imperative languages. Mutable state is the big problem, and it makes writing *safe* threaded code a crap shoot (see the sketch below).
Sorry. I felt the urge to rant a bit.
But this article in wired describes most of the issues I’ve raised.
https://www.wired.com/opinion/2013/01/wiretap-backdoors/
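Here is a minimal Python sketch of the mutable-state hazard described in the comment above (the counter, thread count, and iteration totals are invented for illustration):

import threading

# Imperative style: two threads mutate one shared counter.
# "count += 1" is a read-modify-write, so updates can be lost
# without a lock.
count = 0

def bump(times):
    global count
    for _ in range(times):
        count += 1

threads = [threading.Thread(target=bump, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(count)    # may print less than 200000

# Functional style: no shared mutable state, nothing to race on.
total = sum(1 for _ in range(200_000))
print(total)    # always 200000

Whether the lost update actually shows up depends on the interpreter’s thread scheduling, which is rather the point: with shared mutable state, correctness becomes timing-dependent.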
“…imperative programs define sequences of commands for the computer to perform…: http://en.wikipedia.org/wiki/Imperative_programming . So what’s the alternative? (By the way, the English language hasn’t improved in the past 20 years either, but it still does the job for those who understand it.)
No real effort to shift away from C-like languages.
F# is a functional language with imperative bits thrown in to help ease the transition. Similar to Ocaml but afaik only compiles to .NET
Erlang, which is also a functional language and very small syntax. (just the ticket)
Some kind of evolution needs to take place. But I guess a few dinosaurs will have to perish before that happens.
Actually, we’ve been shifting BACK to C. A lot of the supposed advances since then haven’t been… and on the other side you have languages like lisp (and derivatives like Haskell), which are even older than C growing in popularity.
“Prometheus” not “Perseus”
This was like Perseus giving fire to humans…
It was Prometheus.
Ditto on the chapter-at-a-time approach; I do prefer it. Partly because it is easier to keep track of what I’ve read (memory issue?)
As for the chapter, it needs some serious updating to be relevant to the topic and representative of current understanding of memory and expertise. I’ve been out of the field too long to suggest specific, current research. However, even basic understanding of Miller’s conclusions have been revised. I’m not at all sure that trace memory would correlate very highly with “genius” level programming. Finally, programming is still a critical element but hardly seems like the driving skill set behind current technology invention.
“POW1I” ? Do you mean “POW!”
Any working musician will confirm Bob’s basic idea here, which is that real innovation comes from a very small number of exceptionally gifted people. The rest of us just copy. The good news, however, is that great ideas are so great, people can’t get enough of ’em. You can almost endlessly give renditions and variations, and people will still line up to hear you. You can talk about them, discuss them, even create entire University courses about them, and people will pay for that, too.
So, for every Taylor Swift earning $57 million per year, there are billions more earned by the rest of us working out all the iterations inherent in one really good song.
Why isn’t this book available as a KIndle or Google Books e-book?
Yes, Bob, why no Kindle edition?
I like the chapter-every-couple-days format. I was worried about falling behind if I had to read it every day. I like (well, maybe not “like”) the term “lumpenprogrammer.” That, I hate to say it, probably applies to me; 30 years of COBOL programming maintaining a water billing system.
Solving a rubik’s cube is more impressive than crosswords. The problem with crosswords is, if you aren’t on the same wavelength as the crossword setter you’ll feel pretty lousy.
I can do neither, but I could remember 15 digit numbers or variable names in my head. At 47 I can still manage about 11.
IMHO, I think smart people do more with less. (less code not more)
Two chapters a week, great!!
Solving a rubik’s cube comes down to memorizing a few transformations and understanding how to apply them to different situations. You can get away with memorizing fewer transformations if you’re clever about how you apply them (I hate memorizing things, so I traded off — several of my friends opted to memorize a larger number of transformations because they didn’t understand how to combine simple transformations to achieve identical results).
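To illustrate, here is a toy Python sketch of that trade-off: moves modeled as permutations can be composed, so transformations you never memorized come free. The four-position “cube” is a stand-in, not real cube notation.

# Model a move as a permutation: position i receives the piece that
# was at perm[i]. Composing two known moves derives a third one,
# which is the memorize-less/combine-more trade-off described above.

def compose(p, q):
    # Apply permutation p first, then q.
    return tuple(p[i] for i in q)

swap_front = (1, 0, 2, 3)    # swaps positions 0 and 1
swap_back = (0, 1, 3, 2)     # swaps positions 2 and 3

derived = compose(swap_front, swap_back)
print(derived)               # (1, 0, 3, 2): a move nobody had to memorize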
Speaking as someone who never really got into cryptic crosswords (the London Times is generally considered far more challenging than the NY Times) from my observation of those more dedicated to them than I it’s a case of understanding certain transformation rules (e.g. words like “mixed” or “muddled” in a clue refer to anagrams) as well as having a huge vocabulary (and ideally literature, etc.). This seems to me to be a strictly more difficult problem than rubik’s cube which has a very well-defined dataset.
English:
Therein lies the problem for me. Although I live in England, I have a hard time with the language, and the way some words have a different meaning, depending on the context.
Lord knows how the non-natives cope with English that sounds the same when spoken, but is spelt differently and, more crucially, means something entirely different.
I’m very bad at remembering jokes, and remembering names.
(socially disabled would be an appropriate term)
Chapter 2, great stuff!
Just a few comments and one correction
selecting good programmers, a few more things to remember
- generally bad, or lazy, about counting (leave out numbers and repeat numbers – hmm?)
- much better remembering numbers than names
- great in visualizing n-dimensional spaces of numbers (pointers to pointers to pointers . . . )
- great at seeing patterns, hence writing reusable routines – also fits with the lazy bit
- take pride in elegance over “brute force” coding (C vs Cobol)
- don’t do more than six hour days unless they are in a challenge
- frequently get inspiration outside the office and bring in a whole chunk of code
- divide themselves into:
-- designers
-- coders
-- maintenance programmers
- stay out of the way – amen! You have to lead them by challenge.
The correction is on page 7, “Reading his introduction to volume 3,1 had the feeling that Knuth was wearing the emperor’s new clothes. Squinting closely at the type in volume 3,1 saw the letters had that telltale . . . “ — the number “1” should be an “I”.
One more vote for your “Specks is wrong” belief, I would definitely rather read an entire chapter at a time….
Yeah, I think I like the new format better. Easier to get the point.
What is it that allows smart people to thrive at an early age?
Were they shielded from the daily turmoil of an average family just trying to get by?
Bill Gates was challenged at an early age, and his family could afford to spend money on expanding his knowledge.
cannot find this or any books on search for ‘cringely’ on kindle.
just wondered if this is deliberate?
It’s a delight to re-read your terrific book, Bob. I’m sure you just love to hear from us about typos and such…but it was Prometheus, not Perseus, who in legend gave fire to humans.
I’m confused about what we are reading. Is this the original “Accidental Empires” book or a draft of the updated book? Or perhaps this is an OCR of the original, which would explain the obvious typos, and just a starting point of the updated book.
Bob,
I don’t agree with the concept that a great programmer should be able to remember the names and functions of x number of variables or routines. Instead, I believe that a great programmer should be able to visualize how to build a project that is elegant, easy to maintain and modify, and will have few if any bugs. The actual coding should be performed by normal programmers who do not need to understand the grand concept and how all the variables and routines relate to each other. They only need to know how to code a small piece of the project by following the guidelines laid down by the master architect. This implies the use of information hiding and other insightful breakthroughs. If the master architect moves on, the project still can be maintained and modified by mere mortals.
Charles, if we lived in a world where management managed by thinking thru the whole project, maybe. In this universe, if one is lucky or very selective, a programmer can be left alone long enough to do real work before some process droid disrupts their thinking by demanding dead tree carcase as proof the programmer is doing something.
In my own bleeding edge systems scripting, once I had an idea of how to do the code, I got so involved I remembered more variables and code process states than I usually do. If I was interrupted while coding I was physically sick and dis-oriented for a while. Once code was running, my “working memory” seemed to return to its average state.
I think there has been a change in coding culture. Programmers are less likely to go write new tools now. No need. Hardware capability has increased so much the number of computationally attackable problems has risen, so still problems to solve for the younger coders.
Bob, I like the new approach.
The bit about some programmers being 100 times more productive than average programmers has become a millstone around our necks.
From what I read around the web these days, job recruiters have never come across this species, and are still searching.
Programmers are generally lazy. (and self proclaimed)
Thus more likely to spend their time writing tools to make their lives easier.
Job recruiters never come across this species because the best programmers never have to search for jobs. Thus, they do not need a recruiter’s help. This is well known.
I had wondered if that was the case.
“Reading his introduction to volume 3,1 had the feeling that” – the “1” should probably be an “I”
And I too vastly prefer having the whole chapter all at once to having snippets to piece together.
Yep. The 1’s after the 3 are I.
my reference is 1992 copy of AE.
You mention the 36 hour coding marathons but I notice that you don’t expand on why many programmers work long hours in a way that borders on obsession. I suspect that it is tied into the “keeping all the balls in the air” memory that you equate to sequences of digits. In many cases it can take hours to get all the parts of a complex problem into the mental model (“balls in the air” or “in the zone”) before productive coding can start. If you quit and go home after 8 hours you have to start all over again the next day. Once you have the model complete and code is starting to flow there is a temptation to keep going for fear of losing it all and having to start over again.
You could probably reinforce your position on just how few uber programmers there are and just how much more productive they can be with stats on how many core people there are on the really large programming projects vs total on the team. MS Word, Excel, Visual Studio or a large CAD system would probably all be good examples. I am guessing a fairly small handful are responsible for most of the key parts.
In August, The Daily Beast put together a list of 16 authors who they claim are Malcolm Gladwell “clones and wannabes who specialize in writing counterintuitive books that explain the world.” What The Daily Beast failed to recognize is that your counterintuitive book that explains the world predated Gladwell’s first tome by 8 years. Clearly Gladwell should be on a list of Cringely clones.
Or are Cringely clones required to actually write under the Robert X. Cringely pen name. Now I’m confused.
Nonetheless, I have watched three of your documentaries over the years (Revenge, Web and airplane) and I’ve been following I, Cringely for at least a decade, but this is my first read of Accidental Empires. I’m loving it so far. Thanks.
See: JONAH LEHRER, DAVID BROOKS & MORE MALCOLM GLADWELL WANNABES
I make a full time living doing 30min – 1hr YouTube videos, WAY above all the recommended attention span limits, like an order of magnitude above!
Full chapters are best, you are right Bob, forget the advice, it’s generic and wrong.
If they had their way you’d be dumbed down to a single info-graphic.
I like the change (two chapters / week), and wonder if our enigmatic host did this on purpose: start off one way, then change to a better way, in order to encourage us by demonstrating that he cares about us.
Hmm … I should’ve rephrased that: two longer posts per week. Two chapters per week was pretty much his plan already.
The full-chapter posting is preferable to me as well. I wouldn’t worry about it…stands to reason that your work would attract people with decent attention spans.
Also, “lumpenprogrammers”: Man, that is classic. My technologically swift & agile company full of nerds & hippies was acquired by a company full of lumpenprogrammers (and the corporate mentality to match). Guess which company’s approach was allowed to prevail?
Alas, your definition of “lumpenprogrammers” is sadly perfect: In the case of my company, it means a bunch of late-middle-age dudes with no technological ambition, and no conception of a user interface that is not text-based.
I remember reading somewhere that America spends around $1 million a year on gifted kids (top of the bell curve), but spends $1 billion a year on remedial programs…regression to the mean may be hard to overcome…
Are politicians included in the remedial budget? Thought not.
It is fascinating to read about how we arrived at the lives we live with computers! Your two chapters a week format is great! My wife suggested I take a memory test at the doctors office. They gave me a list of five things, then told me a short story and asked me to repeat the five things. I remembered two! Obviously I am not a programmer!
I have a better ideal ==> Wait I have a better idea!
oops. I should have said:
I have a better ideal ==> I have a better idea!
and in fact the same error occurs at the end of that same paragraph, where an exclamation has been transmogrified into a lowercase ell. there may be others I’ve overlooked.
John Karlin died two weeks ago:
https://www.bostonglobe.com/metro/obituaries/2013/02/10/john-karlin-industrial-psychologist-for-bell-labs/jfucVP0VRjWuDmV1zmp7pJ/story.html
Whole chapters = bigger picture. Much better, and I love this chapter. I first got into controls engineering when it was done with Op Amps and discrete logic elements. A good memory helped there too, although for me remembering graphical images has always been easier than remembering lines of text code. A gas turbine control might have about 40 pages of inter-linked schematics, but you remembered them by functions and linkages. If you couldn’t keep track of a dozen or more conditions and their logical result, you spent way too much time flipping through the pages. Not much different from function blocks and subroutines.
Hi Bob,
I am enjoying reading your book. To hell with those with short attention spans. They don’t deserve to get to read it anyway.
Wow, that hippy description describes me a little too well. Lives in San Francisco. Wears T-shirt, shorts and sandals in winter. (Hey, it almost never snows in San Francisco. This week, we’re supposed to have lows in the mid-40s Fahrenheit and highs in the mid-60s.) Not really interested in implementing a solution once it has been solved in my head. Satisfied with my sex life. I don’t do drugs, though.
I think that remembering lots of data at the same time is a big part of being a great programmer, but from my experience it also has a lot to do with the “10,000 hours to be an expert” rule, as many of the very best programmers I know have simply put more hours into computers, and have programmed more things than others they work with. It surely helps to be smart from the beginning, but you need to put in the hours to fully learn the language/OS/etc. to be able to reach 100x productivity.
There was a Dilbert cartoon related to that in the past week or so. Where Dogbert declared that anyone willing to spend 10,000 hours working on something had an obvious mental issue.
I’m not convinced about the hippy and the nerd dichotomy. I’ve known some great programmers and possibly I am one myself. I have a hard time placing us in either of these boxes. It isn’t that I think those boxes don’t exist, just that they don’t cover everyone. Lots of us are great at design and great at coding the details. We don’t have long hair or do drugs, but neither do we wear pocket protectors. And yes, we do have girlfriends or wives. Today, perhaps geek is chic – but we’ve always been there, if perhaps unnoticed.
I think you have to bear in mind when this was originally written. I met some of the guys porting Linux to SPARC back in the 90s and they matched the hippie description exactly. I also got to know a guy who did freelance electronic hardware debugging and optimisation in the valley later in the 90s during a holiday out there and he was stamped out of the nerd cookie cutter.
Sure, real people are points on a spectrum, but it seems to me that the techie culture has become a lot more variegated and diverse as it has expanded over the last few decades.
That reminds me, if Bob really wants to “update” his history book he should substitute “nerd” with “geek” and forget about hair length and remove the word “hippie”. Or at least recognize that those few people who can’t recognize the value of a geek may be the only ones choosing to use a derogatory word like “nerd”.
A geek is someone who bites off the head of a chicken in a side show.
Don’t know the origin of nerd, but I guess geek is more insulting.
I guess it depends on where and when you live: “The definition of geek has changed considerably over time, and there is no longer a definitive meaning. The term nerd has a similar, practically synonymous meaning as geek, but many choose to identify different connotations among these two terms, although the differences are disputed.”: http://en.wikipedia.org/wiki/Geek . My point was that in the US “geek” is now better than “nerd”; think of what it means to be called an “uber-geek” vs. an “uber-nerd”.
Not directly related, but every time I hear a story like the “Harvey Allen” one, I have to wonder how true it is. In this example, did Harvey actually solve the crossword in his room before coming down, so he could be perceived as incredibly brilliant at how quick he was when he duplicated it from memory? I wonder which type of programmer that makes me…
Very interesting comment on the Bell Curve and distribution of talent.
We all see that exceptional people produce exceptional results.
These talents in a free society mean exceptional pay.
Except now this talent is vilified, punished, and taxed.
The 0.15% will just shrug and not try.
First class Freshman Psych memory… prof asked us to remember 12 objects.
It was easy for me.
The enraptured way the prof said, “That’s Amazing! You were supposed to put them in 3 groups of similar objects, not in sequence!”, made me feel sad to tell him I use mnemonics.
So… 7… heck 15 is a snap for anyone with about 1-2 hrs training…..
Gee… if there were a program for that… oh. wait. I built it 20 yrs ago.
Website Comment: Recently a WordPress login bar appeared. It covers up part of the text at the top of every window. Can it be removed since it serves no purpose?
This is all based on one measure of intelligence. If it were true then companies like Microsoft would never have lost their stranglehold on the computer industry, but somehow they did despite these hiring practices. I saw a documentary last year examining the possible types of intelligence by taking people from different fields (such as art, music, physics etc…) and measuring their performance in solving a wide range of problems. The results suggested that everyone has a specific and unique strength in intelligence. My belief is that the whole is greater than the sum of the parts for a software development team and if this is not the case then you’ve put the wrong team together.
I have worked with engineers that others considered brilliant but that didn’t stop them from churning out crap from time to time and missing some key insights.
Hope someone else hasn’t suggested this. When you post these chapters, why don’t you post them on facebook. At least the link.
If you are going to talk about TeX you might want to go on to talk about PostScript and its adoption by Apple. There is an interesting story there and an early example of Steve Jobs pushing a new technology to widespread adoption.
I wonder if your categories of nerd and hippie still apply to young programmers today. It seems like there is a new generation, where being into programming is pretty normal and almost mainstream. Granted, most of us commenters are in Silicon Valley, but maybe young nerds today don’t have as much to prove because they are more generally accepted by society.
I guess “the Zuck” still fits into your “ridiculed nerd with something to prove” paradigm though.
“I guess ‘the Zuck’ still fits into your ‘ridiculed nerd with something to prove’ paradigm though.” I think Bob uses the phrase “something to prove” as an “artistic” way to say “self motivated”. As far as I can tell Zuck, Gates, Jobs, and numerous other geeks are all way smarter and more motivated than you or I but not quite up there with Einstein.
Short term memory is such a gross over simplification that it does not deserve the attention given. There are many people with exceptional memories who can’t solve simple problems. There are many people who can solve very complex problems and later forget how they did it. Instead the focus should be on problem solving, the practice of logic using abstract symbols and abstract reasoning. Much harder to measure. Your chances at identifying and quantifying the intellect that eludes you are less than 0.00015%! Just ask a woman.
I think I’m a pretty good programmer, and I know I have an above average memory. But I have been coding for the past couple of days since I first read this chapter, and have been trying to introspect as I go along. Conclusion: while memory plays a part, the ability to visualize relationships and control flow seem to be more important. Imagination trumps memory, in my experience.
Bob, I don’t know if you are planning to surround a few minor corrections in your new edition with a new intro and summary, or if you plan to revisit all your original research and assumptions. If the latter, EA2 could end up being harder to write than the original!
So, was Simonyi a nerd or a hippie? In his terms that’s programmer or metaprogrammer, which sounds like a hippie concept in nerd clothing.
Bob,
A few random comments:
Whoever has phone number 525-9519 in Napa is going to LOVE you.
Enjoyed the references to Miller, TeX, the Bell curve, and the mythical man-month. Few people could integrate them all so well.
I don’t think you’ve established a direct relationship from the magical number 7 (plus or minus 2) to the conclusion that uberprogrammers are at the far end of the bell curve. I do see the merits of the argument that programmers have to juggle a lot of things in their head, especially as the application gets more complex, but I’m not so sure that the best ones have better than average short-term memories. Some do, but some probably don’t.
As for motivation to put in all those long hours, maybe they just enjoy programming. It beats balancing the check book.
I concur with many others that two posts a week are better than the original daily format. I’ll give Specks Howard a half-hearted defense by saying that, while you’re using a blogging tool, what you’re doing here is not really a blog. It’s more like academic peer review.
remembering lots of things was a necessity back when we coded on home computers with 40 x 24 text resolution. Half the code was off-screen.
I was waiting for entire chapters to be posted and it’s a good move. Also, as we move along into the Steve, MSFT, and Lotus chapters, I believe you’ll get considerable comments and feedback for your experiment to be successful.
In this case, the distribution is unlikely to be symmetric, as on the left no one will be able to remember fewer than zero items, whereas on the right some people will remember more than 14. This is hair splitting, but Bob, your text claims symmetry.
Symmetric curve probable for population above or below median regardless of min or max score of outliers.
Mean could be skewed to right if outliers with astronomically high max digit recall.
Size of effect depends on sample size, number of outliers (those > 2SD from the mean) and how much they exceed the mean.
“When I was young, I could easily imagine a castle with twenty rooms with each room having ten different objects in it.”
Yet he had to devise a methodology to remember the data types of those objects?
Provocative column- thanks.
Not sure what prompted your glowing reverence for body builders.
I get the point of encouraging “10 & uppers” to pursue careers in computer programming; however, society does not need more bodybuilders! The analogy that body builders are so far to the right of the Bell Curve (may not even be on it) is accurate.
Readers emboldened by visions of tight buttocks, washboard abs, rippling pecs and bulging biceps might attempt this at home. Temporary disabilities may include the inability to bend, squat, adequately wipe between one’s buttocks or get up off the toilet seat without assistance.
Yes, spot fat reduction is achievable in relative terms.
Must exercise the desired area a lot and diet!
Most dramatic spot fat reduction in competitive body builders is due to combination of dieting involving total calorie/ severe carbohydrate restriction with increased protein intake, isolated muscle group exercises and anabolic steroids. Cannot overemphasize the impact of the ‘anabolics’.
Competitive body building as we know it (amateur too) would not exist without generous cycles of “juice”. By the way, bodybuilders are at their weakest (“sucked out”) when they look their best on the day of the competition.
Bob’s point is made in the last sentence: “We don’t have to change national policy to encourage bodybuilders or super-programmers.” The idea is that the super-programmers don’t need external encouragement. They, like the bodybuilders, are what they are: extremely self-motivated.
My point remains the same:
It may be in society’s interest to encourage other self-motivated persons with 9-digit or higher short-term numerical memory retention to consider programming, as they have the prerequisite grey matter to do well. You need to attract the people who may not know they have the potential to be a super programmer. Society needs more super programmers but not body builders.
Tremendous mental endurance and focus that allow for 36-hour ‘productivity binges’ are generally reflective of a restless mind that can’t relax or sleep but do not necessarily indicate discipline; a compulsion is a better description.
Compulsive pursuit of a singular interest is present in those pursuing less productive activities such as porn surfing, gambling and social networking.
May indicate focus but certainly is not disciplined.
Body builders (successful) are highly motivated and disciplined.
They do not train for 36 hour stretches.
Most that I know hit the sack early and get up early.
It is true that body builders and uber programmers are unique breeds falling way off the Bell Curve of “normal”.