No law is more powerful or important in Silicon Valley than Moore’s Law — the simple idea that transistor density is continually increasing which means computing power goes up just as costs and energy consumption go down. It’s a clever idea we rightly attribute to Gordon Moore. The power lies in the Law’s predictability. There’s no other trillion dollar business where you can look down the road and have a pretty clear idea what you’ll get. Moore’s Law lets us take chances on the future and generally get away with them. But what happens when you break Moore’s Law? That’s what I have been thinking about lately. That’s when destinies change.
There may have been many times that Moore’s Law has been broken. I’m sure readers will tell us. But I only know of two times — once when it was quite deliberate and in the open and another time when it was more like breaking and entering.
The first time was at the Computer Science Lab at Xerox PARC. I guess it was Bob Taylor’s idea but Bob Metcalfe explains it so well. The idea back in the early 1970s was to invent the future by living in the future. “We built computers and a network and printers the way we thought they’d be in 10 years,” Metcalfe told me long ago. “It was far enough out to be almost impossible yet just close enough to be barely affordable for a huge corporation. And by living in the future we figured out what worked and what didn’t.”
Moore’s Law is usually expressed as silicon doubling in computational power every 18 months, but I recently did a little arithmetic and realized that’s pretty close to a 100X performance improvement every decade. One hundred is a much more approachable number than 2, and a decade is more meaningful than 18 months if you are out to create the future. Cringely’s Nth Law, then, says that Gordon Moore didn’t think big enough, because 100X is something you can not only count on but strive for. One hundred is a number worth taking a chance on.
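Here’s the back-of-envelope version of that arithmetic, as a quick sanity check:

```python
# Back-of-envelope: how much improvement does an 18-month doubling give over a decade?
doubling_period_months = 18
months_per_decade = 120

doublings_per_decade = months_per_decade / doubling_period_months   # ~6.67
improvement_per_decade = 2 ** doublings_per_decade                  # ~101.6

print(f"{doublings_per_decade:.2f} doublings -> roughly {improvement_per_decade:.0f}x per decade")
# About 102x with an 18-month doubling; a 24-month doubling gives only about 32x,
# so the round "100X" depends on which version of the Law you start from.
```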
The second time we broke Moore’s Law was in the mid-to-late 1990s but that time we pretended to be law abiding. More properly, I guess, we pretended that the world was more advanced than it really was, and the results — good and bad — were astounding.
I’m talking about the dot-com era, a glorious yet tragic part of our technological history that we pretend didn’t even happen. We certainly don’t talk about it much, and I’ve wondered why. It’s not just that the dot-com meltdown of 2001 was such a bummer, I think, but that it was overshadowed by the events of 9/11. We already had a technology recession going when those airliners hit, but we quickly transferred the blame in our minds to terrorists when the recession suddenly got much worse.
So for those who have forgotten it or didn’t live it, here’s my theory of the euphoria and zaniness of the Internet as an industry in the late 1990s, during what came to be called the dot-com bubble. It was clear to everyone from Bill Gates down that the Internet was the future of personal computing and possibly the future of business. So venture capitalists invested billions of dollars in Internet startup companies with little regard to how those companies would actually make money.
The Internet was seen as a huge land grab where it was important to make companies as big as they could be as fast as they could be to grab and maintain market share whether the companies were profitable or not. For the first time companies were going public without having made a dime of profit in their entire histories. But that was seen as okay — profits would eventually come.
The result of all this irrational exuberance was a renaissance of ideas, most of which couldn’t possibly work at the time. While we tend to think of Silicon Valley being built on Moore’s Law making computers continually cheaper and more powerful, the dot-com bubble era only pretended to be built on Moore’s Law. It was built mainly on hype.
In order for many of those 1990s Internet schemes to succeed, the cost of server computing had to fall even further than Moore’s Law could push it at the time. This was because the default business model of most dot-com startups was to make their money from advertising, and there was a strict limit on how much advertisers were willing to pay.
For a while it didn’t matter, because venture capitalists and then Wall Street investors were willing to make up the difference, but it eventually became obvious that an AltaVista with its huge data centers couldn’t make a profit from Internet search alone.
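To make that squeeze concrete, here’s a rough, purely illustrative calculation; the ad rate and hardware figures below are guesses chosen for the sake of the example, not numbers from anyone’s books:

```python
# Purely illustrative: the late-1990s squeeze between ad revenue and server cost.
# Every figure below is hypothetical, chosen only to show the shape of the problem.
assumed_cpm_dollars = 5.00                    # hypothetical ad revenue per 1,000 pageviews
revenue_per_pageview = assumed_cpm_dollars / 1000            # $0.005

assumed_server_cost_per_month = 3000.0        # hypothetical hosted server, late 1990s
assumed_pageviews_per_server_month = 300_000  # hypothetical capacity of that server
cost_per_pageview = assumed_server_cost_per_month / assumed_pageviews_per_server_month  # $0.01

print(f"revenue per pageview: ${revenue_per_pageview:.4f}")
print(f"cost per pageview:    ${cost_per_pageview:.4f}")
if cost_per_pageview > revenue_per_pageview:
    print("every pageview loses money; more traffic only digs the hole deeper")
# The fix had to come from cheaper serving (Moore's Law, eventually) or better
# monetization, which is roughly what the survivors figured out.
```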
The dot-com meltdown of 2001 happened because the startups ran out of investors to fund their Super Bowl commercials. When the last dollar of the last yokel had been spent on the last Herman Miller office chair, the VCs had, for the most part, already sold their holdings and were gone. Thousands of companies folded, some of them overnight. And the ones that did survive — including Amazon and Google and a few others — did so because they’d figured out how to actually make money on the Internet.
Yet the 1990s built the platform for today’s Internet successes. More fiber-optic cable was pulled than we will ever require, and that was a blessing. Web 2.0, then mobile and cloud computing, each learned to operate within their means. And all the while the real Moore’s Law was churning away, and here we are, 12 years later, enjoying 200 times the performance we knew in 2001. Everything we wanted to do then we can do easily today for a fraction of the cost.
We had been trying to live 10 years in the future and just didn’t know it.
First – really? OK, I grew up with Moore’s Law and recently stopped believing in it – then realised it applies all the way down the line. Look at Tesla or Zero Motorcycles. I ride one every day and mine is already out of date…
Yet the graph still looks linear – because it is retrospective. Looking forward, what does Moore’s Law tell me is to come?
Moore’s law is exponential. That graph uses a logarithmic vertical scale so that our tiny brains can even begin to comprehend it. See the Wikipedia entry: https://en.wikipedia.org/wiki/Moore's_law
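If you want to see why the chart looks straight, here’s a minimal matplotlib sketch; the transistor counts are rounded, representative figures, not an authoritative dataset:

```python
# Minimal sketch: an exponential looks like a straight line on a log scale.
# Transistor counts are rounded, representative figures only.
import matplotlib.pyplot as plt

years = [1971, 1980, 1990, 2000, 2012]
transistors = [2.3e3, 3.0e4, 1.2e6, 4.2e7, 1.4e9]   # roughly Moore's-Law growth

fig, (linear_ax, log_ax) = plt.subplots(1, 2, figsize=(8, 3))
linear_ax.plot(years, transistors, marker="o")
linear_ax.set_title("Linear scale: hockey stick")
log_ax.semilogy(years, transistors, marker="o")
log_ax.set_title("Log scale: straight line")
plt.tight_layout()
plt.show()
```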
Frankly the next 10 years are going to result in some insane changes in computing power and ubiquity (assuming Fukushima doesn’t put us all back into the dark ages!)
I doubt Fukushima will pull us back into the Dark Ages, but our destruction is all around us. We’re reliant upon energy and electricity, yet our grid is aging. We rely upon just-in-time shipping to move goods (real physical goods, like food), yet our infrastructure is crumbling. If our grid has a few disasters, it could easily pull us out of the information age.
Unless, of course, solar panels suddenly become so cheap we can all install them on our rooftops. (But you can’t eat a solar panel, or a JPG of a piece of pie.)
Tesla loses $10-$15k per car according to their financial statements (but are overall profitable by building and selling “carbon credits”).
It seems they have adopted the 90’s dotcom business model.
Tesla is losing $10k to $15k per car?!?!?!
On $100k + cars ?!?!?!
This seems ridiculous on the face of it. Anyone with $100k free to purchase a car, and the jones for an electric car/status symbol, will gladly spend $120k for the same vehicle.
Losing money on making cars is really easy. When BMW bought the Rover Group in the 90s they looked at the books for the Mini. At that point, the various versions of the company had been building the car for 35 years, selling over 5,000,000 of them. And they lost money on every single one. Even factoring in depreciation etc they were still a money loser. They could never get people to pay a price for them that covered the cost and this was one of the world’s best loved cars.
So I can believe Tesla lose money, and keep going.
I’m not sure about the dot-com bubble. Yes, many of the business models were based on ads, but even 10 years later most companies based on advertising will fail (yes, the exceptions of today will be Twitter and maybe a couple more). So for these companies not even the cheapest hardware will compensate for the fact that the number of ads you can show in any market is limited. (And don’t get me started on the growing concern for privacy.) Yes, we tried to live in the future, but with business models taken from Willy Wonka and the Chocolate Factory.
Bob, your article seems to presuppose that the only way to make money on the internet is by selling advertising when clearly there are people making a living by selling goods and services.
In my recollection, the dotcom burst happened because, although everybody could see the beauty of selling stuff online, Joe Public (here in the UK at least) was using dialup. At the time, I went to a demonstration by a small, forward-thinking company who had developed software to create rotating images of garments for online fashion retailers. I thought then, ‘great idea, but however long is it going to take for these images to appear on the page?’ They failed. It didn’t however, stop the mania for investing in potential online retailers.
I could never understand how online retailing would ever work over dialup. Fifteen years later, now that everyone has broadband, half the High Streets in Britain are boarded up.
Yes, plenty of startups sold things and some of them still do. Amazon.com is a great example of how to make dollars, not hundredths of a cent, per transaction, which is what they all needed to do to overcome the Internet cost barriers of 1998. So we have some terrific success stories from e-commerce. But we have some terrible e-commerce failures, too. Webvan was doomed because the cost structure never made sense. Pets.com imploded. Even eLoan, which delivered its goods primarily in the form of electrons, couldn’t make it work. I think part of the problem was a lack of attention to the cost side. There was a used-car-buying startup in San Francisco I remember that lost hundreds of dollars per car simply because they paid too much for their inventory. So there was plenty of stupidity to go around. But most of it can be traced back to an over-reliance on Moore’s Law and a sense that they’d make it up in volume, if only they could survive that long.
You’re right David. Advertising is still the “holy grail” for many online businesses. But as many have said, that ship has sailed as rates have plummeted. The challenge is to move goods and services (commerce) in more effective ways.
I believe the dot com burst was partly a Y2K effect. Rightly or wrongly, Y2K was hyped to beat the band. In the process, millions of PCs were purchased earlier than they would have normally been. Companies didn’t want to take a chance on failure. Purchases pulled forward in time meant that there was going to be a significant drop in sales post Y2K. Chicken Little was right… The sky did fall, just not the way expected.
I think the Y2K factor fed in another way, again mostly psychologically, as well. To anybody in the 1930s, “the future” was 2000. To anybody in the 1950s, “the future” was 2000 (Disney was the odd one out looking at his first Tomorrowland as being 1985 instead).
To anybody in the 1980s, “the future” was 2000. So suddenly it is 1999 and “the future” is, literally, tomorrow. So anybody who wanted to do something in “the future” had, well, a year left to do it. So they did.
Thousands of people did economically questionable things, one of the biggest being borrowing against their 401Ks and other future investments to do that something they wanted in “the future,” like buying a house (yes, it is a legal way to get access to your retirement funds penalty-free). So millions were being sucked out of Wall Street; the traders suddenly had less money to spend, so they stopped spending and stock prices started dropping. The bottom, literally, was yanked out of the market.
I’m not sure how much further the internet companies of today can take it. Google, Twitter, FB all rely on advertising for basically 99% of their revenue as do a lot of the new venture backed tech companies. How much more can these companies rely on advertising for growth? I’m impressed by their operations but am not overly impressed by their revenue streams. If we will have 100 times the computing power in 10 years can we make energy more efficiently or at least make a hover board?
And yet we continue to bore in fiber, even though we have up to 64 lambdas on each pair. The present revolution is local, not long haul. Governments are pulling their own glass, something I told HB Fuller to look into in 1993. Telcos are putting in gigabit to homes and 40 to 120 megabit from the curb, because there is an insatiable appetite for gigabit to cell towers for 3G and now 4G. Meanwhile, ocean cabling has never been busier. Infrastructure is king, and Alcatel-Lucent has finally found gold in them there routers — oops, I mean high-layer data switches. Still fun times if you’re doing on-ramps to our old Connected Internet. Even my old 90s dream of billing data by the SNAP tag is alive in our network, but it’s billing streaming video on PRISM.
The “renaissance of ideas”… I think this was the essence of all the bits exchanged at that time. Will we see another?
Great post Bob.
There’s a good reason why Moore’s Law describes how long it takes for computing power to increase by a factor of two, instead of by a hundredfold every decade as Bob prefers to measure it. It has to do with the chip makers (mainly Intel) persuading as many people as possible to upgrade their computers as often as possible. A typical customer is not going to get a new computer less than a couple of years after the previous one and they’re not going to get one that isn’t at least twice as powerful.
Intel, with its huge resources, could bring out faster CPUs even more often if it wanted to, but there wouldn’t be much point in that if it can only persuade very few people to upgrade that quickly. So Moore’s Law is really more a law of marketing PCs than a law of technology.
Great comment… !
“It has to do with the chip makers (mainly Intel) persuading as many people as possible to upgrade their computers as often as possible.”
If it was just a conspiracy along those lines, someone would leapfrog Intel to take away their huge business. AMD would have tried it, and many others. But it ain’t that easy.
If you’re a marathon runner do you sprint to set a record? How fast do you run? What’s the optimal speed?
Brendan’s comment was eye-opening for me… I don’t know if it was correct or not, but it certainly has merit and led me to consider a broader set of variables.
Wikipedia mentions “self-fulfilling prophecy” (s.f.p.), which certainly plays into it… but the economics, the rate and timing at which customers are willing to “fund” further technological R&D, also play a part and, unlike s.f.p., were there from the beginning.
Moore’s Law isn’t a conspiracy, it just describes a stable rate of increase in computing power that is economically worthwhile to chipmakers.
Extra competition might have some temporary effect on that rate of increase but as soon as AMD come close to Intel in chip performance, they face the same customer resistance to the extra cost and inconvenience of upgrading. Even if they could leapfrog them, they could only do it once.
“More fiberoptic cable was pulled than we will ever require”
I’d like to see the scrawled-on envelope behind that statement. It strikes me as one of your less prescient thoughts.
Yes, we’re already running short on fiber optic backbone.
Actually this is true. One of the reasons for the telecommunications collapse was over-capacity. Many business plans were based on the assumption that everyone and their dog would want infinite bandwidth, so they pulled 1,000 fibre strands down a pipe and got their infrastructure (switches, etc.) set up to use a good chunk of those, when in fact only one or two strands were ever lit. What they didn’t take into account is just how much data you can stuff down a single pipe, and the fact that people’s capacity to absorb information is both very limited and actually predictable.
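Some rough arithmetic on the scale of that mismatch, using round numbers chosen only for illustration:

```python
# Rough, illustrative arithmetic on why most of that fiber stayed dark.
# All figures are round numbers chosen for illustration.
strands_in_conduit = 1000        # strands pulled per conduit, as in the comment above
wavelengths_per_strand = 64      # DWDM lambdas, as mentioned earlier in the thread
gbps_per_wavelength = 10         # a common line rate for the era

potential_gbps = strands_in_conduit * wavelengths_per_strand * gbps_per_wavelength
lit_strands = 2                  # what was actually lit, per the comment
lit_gbps = lit_strands * wavelengths_per_strand * gbps_per_wavelength

print(f"potential capacity: {potential_gbps:,} Gbps per conduit")
print(f"lit capacity:       {lit_gbps:,} Gbps ({lit_gbps / potential_gbps:.2%} of potential)")
```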
There have been several “Moore’s Laws”: transistor size, clock speed, transistors per chip, etc. Usually an exponential improvement in one of these implies an exponential improvement in the others, but not always. Transistors per chip can be improved by improving yield and/or decreasing the cost per square mm of the chip. Scaling of clock speed slowed in the early 2000s. The big problem is gate leakage, and it might be the thing that stops the whole train. Even if a solution to the gate leakage problem is found, transistors can’t be scaled down past the granularity imposed by atoms. As a result, Moore’s law for size has only about 10 years left. We are closer to the end than the beginning.
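A crude way to sanity-check that figure, with my own assumptions about the starting node and the practical floor:

```python
import math

# Crude sanity check on "about 10 years left" for feature-size scaling.
# Assumptions (mine, not the commenter's): a ~22 nm node in 2013, a practical
# floor of ~2 nm before there are too few atoms per gate, and transistor
# density doubling every two years, i.e. linear dimensions shrinking by
# sqrt(2) every two years.
start_node_nm = 22.0
floor_nm = 2.0
years_per_density_doubling = 2.0

shrink_steps = math.log(start_node_nm / floor_nm) / math.log(math.sqrt(2))
years_left = shrink_steps * years_per_density_doubling
print(f"~{years_left:.0f} years of scaling left under these assumptions")
# Comes out around 14 years, the same order of magnitude as the estimate above,
# and very sensitive to where you put the floor.
```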
Moore’s law holding up questionable business practices… hmm
Along those lines, I’m wondering how much of the internet economy is held up by an underlying economy of selling personal information. How much of that is unethical/illegal/unconstitutional behaviour?
Are we too far along to fix things without breaking everything?
It feels like we’re running out of Big Ideas. All the Big Ideas that you could put on a whiteboard in 1989 are either done, being done, or were done and found wanting. The only thing really left on the whiteboard is strong AI.
Pocket computers with the capabilities of desktops? Check.
Wireless unlimited access to global communication and information services? Check.
Realtime photorealistic 3D multiuser virtual worlds(*)? Check.
Servers delivering all human knowledge on demand for free? Check.
Access to every movie and TV show ever made in any format for low/no cost? Check.
Ability to order any good or service from anywhere at any time at the lowest possible price and have it delivered to my door within a reasonably short period of time? Check.
The ability to print in 2D or 3D on demand anywhere I want a printer at a reasonable cost? Check.
My home turned into a web of connected devices including heating, cooling, lighting, and security, that I can monitor and adjust anywhere at any time? Check.
What keeps pushing the market to abandon and adopt new hardware to fund Moore’s Law if we don’t seem to need more horsepower?
When it cost tens of thousands of dollars to run a web site, a hundredfold reduction was a Big Deal. But when it costs $10 to run a website, reducing that to $1 isn’t going to have any meaningful impact on the sites that are created.
When the most pixels you could get on an LCD for a portable device gave you VGA resolution, it was a big deal when the hundredfold improvement gave you retina displays with more colors than your eye can see. But “retina display” is a human-factor term which belies the problem. My eye can’t see any higher pixel density, so continuing to make smaller pixels gives me no value.
My $500 desktop computer runs so fast that it does all the tasks I ask of it in realtime. The slowest part of my system is the data flowing down the internet. There’s literally nothing I can do to my local hardware to make my desktop experience meaningfully better (except when I’m working with gigantic video files and editing in realtime, and even then a 2x improvement would eliminate all delays and I’ll get that the next time I upgrade my desktop). Turning my $500 desktop into a $5 desktop wouldn’t induce me to have more than one of them per desk.
If the speed doesn’t matter, the cost doesn’t matter, and the quality doesn’t matter, why upgrade? What is the “killer app” of the future?
I guess it’s gotta be A.I. – I just don’t see anything else left on the whiteboard.
(*) this is my actual business and I know we have some room to grow here. We still can’t have an unlimited number of highest-resolution avatars acting in the same shared game space due to N^2 network problems and video card limitations. But the latter will be solved in 2 or 3 Moore iterations, and the former can’t be solved by hardware speedup but could be solved by finding a way to make clients secure. Both problems will be fixed by the time the 10th anniversary of this column is reached.
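For anyone curious what that N^2 problem looks like in numbers, here’s a toy calculation; the update rate and message size are assumptions for illustration only:

```python
# Toy illustration of the N^2 problem in a shared virtual world: if every
# client's state must reach every other client, traffic grows with N * (N - 1).
# Update rate and message size below are assumptions for illustration.
updates_per_second = 20          # assumed position/animation updates per avatar
bytes_per_update = 100           # assumed size of one state update

for n_avatars in (10, 100, 1000):
    messages_per_second = n_avatars * (n_avatars - 1) * updates_per_second
    total_mbps = messages_per_second * bytes_per_update * 8 / 1e6
    per_client_mbps = total_mbps / n_avatars
    print(f"{n_avatars:>5} avatars: {total_mbps:>10.1f} Mbit/s total, "
          f"{per_client_mbps:>7.2f} Mbit/s per client")
# Faster hardware helps the rendering side, but this growth is a protocol problem
# (interest management, server-side filtering), not a Moore's Law problem.
```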
As for things to do, digital holography comes to mind. When resolution gets fine enough, we’ll be able to put holograms on a display. No, not the 3-D glasses variety, but the real thing. The horsepower to do that will need to be impressive, and I’m not convinced we’re quite there yet.
Another potential application is human interface convergence. I doubt a lot of people will sign up for that sort of thing, but augmenting your existing biology with cybernetic implants is on the way.
I agree with the AI future. Siri is a good example of an idea ready to explode. People are pushing Siri to the limit of its abilities with questions outside its expertise. “Can you give me directions to Omaha from St. Louis?” is easier for AI to answer than “Does my blue shirt clash with my green tie?” But it looks like AI is about as far off as nuclear fusion. Always 10 years away, for the past 50 years. I can’t imagine market forces not wanting AI assistants in the future. Viz.:
Human: “Siri-like-thing, what can I do about this recent drop in my portfolio?”
Siri-like-thing: “I have access to a very helpful analysis program, and you can purchase it from the corporation. I see you have sufficient funds available. Shall I download it?”
Better-than-Siri thing: “Get a job.”
There are many areas in life that could benefit from the “killer app” of the future. But most of them are being held back by the failure to see human activity in a new way. We don’t need better transistors so much as we need a new vision for organizing and harmonizing human activity. Just look at all the problems that have yet to be solved. There are failures in the education system, failures in the political system, failures in the financial system, the massive corruption in high places, and above it all are the failures in human nature.
There is just no end of the various problems that could be solved if a new visionary were to somehow arise from the population.
The first Internet was still largely a dial-up Internet. Moore’s law had nothing to do with the growth of it, even when looking at servers. Remember, at the time web pages were mostly static, with much of the processing done on the server because web browsers and HTML weren’t as advanced, but that was OK because you couldn’t push much data down the pipe anyway.
Even the early cable and DSL services were fairly slow by today’s standard. My 1st cable connection was rated “between 1.5 and 5 Mbps.” I just ran a speed test on my phone because I saw the LTE icon on the status display and saw 10Mbps down and 7Mbps up. ON A CELL PHONE, sitting in a restaurant. My cablemodem now can run a sustained 25Mbps downstream all day long.
I happen to work for a large cable ISP. The 1st connection @home had to our headend (which served about 10K cablemodems) was 7 T1 lines, which was quickly maxed out (most of the blame for early speed issues with cable modems was because @home refused to supply enough bandwidth to the headends), then upgraded to a DS-3 (45Mbps), later an OC-3 (155Mbps) and then additional OC-3s until they went out of business. These days the minimum connection to even a fairly small headend is a 10 Gigabit Ethernet ring. Larger headends and hubs in large metro areas are fed with 40GigE and/or 100GigE rings.
Also don’t forget that Microsoft took a long time to get a decent TCP/IP stack on Windows 95/98. In fact, 98SE was the first 16 bit OS that really pushed packets through the Ethernet card. Macs and Windows NT machines did much better.
Once the bottlenecks were cleared up it made HTML 4, javascript and Flash actually work, so it made the web browser much more useful. If you really want to see how far we’ve come, just tether to a 2.5G cell phone and see how long it takes even a simple page like Google.com to load.
Moore’s Law is dead; Cringely is ignorant. If you look at the performance of a Pentium III and Haswell, you will not notice 200 times (13 years of 100-times-per-decade growth) performance growth. 10-20 times, not more. So Cringely’s assumption of 100 times performance growth before 2023 is simply wishful thinking. Transistor-based ICs are near the end of their useful life for computation, like vacuum tubes and discrete elements before them. The future is non-transistor ICs, neural-net-like ICs, but when will they be created and put into wide-scale manufacturing? Until then, information processing will stagnate and all significant achievements will be useless web applications for ego-boosting like FB and various e-trade platforms.
Moore’s Law, as it describes the increasing number of transistors, is still holding up very well, but you’re right that it hasn’t given the same increase in performance as before. Moore’s Law itself hasn’t hit a limit but the chip speed has hit a limit of about 4 GHz, simply because silicon melts if you try to switch it faster. So Moore’s Law is now giving us lower power consumption and more CPU cores, but only slightly faster computational speed per core.
The only way to increase clock speed is to use a different material for the transistor, like diamond, which can switch more than a hundred times faster than silicon before melting. Diamond transistors still seem to be at the laboratory stage but we might see them in products in the next few years.
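Strictly speaking it’s a power-density wall rather than a melting one; here’s a minimal sketch of the standard dynamic-power relation, with placeholder values rather than real chip data:

```python
# Minimal sketch of why clock speed hit a wall: dynamic CMOS power scales as
# P ~ C * V^2 * f. Once supply voltage stopped shrinking with each process
# generation, raising f just raised power (and heat) roughly linearly.
# Capacitance and voltage values here are placeholders, not measured chip data.
def dynamic_power_watts(capacitance_farads, voltage_volts, frequency_hz):
    return capacitance_farads * voltage_volts ** 2 * frequency_hz

switched_capacitance = 2.5e-8    # placeholder effective switched capacitance (25 nF)
supply_voltage = 1.0             # volts; stuck near 1 V since the mid-2000s

for ghz in (2, 4, 8):
    watts = dynamic_power_watts(switched_capacitance, supply_voltage, ghz * 1e9)
    print(f"{ghz} GHz -> ~{watts:.0f} W of dynamic power (same V, same C)")
# Doubling the clock doubles the heat you have to remove, which is why the extra
# transistors went into more cores instead of more gigahertz.
```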
OK. How about this one?
Economics could kill Moore’s Law – http://fudzilla.com/home/item/32369
Moore’s law (and processor speed) become irrelevant when the main bottleneck is actually a laggy network connection.
I spend more time waiting for my computer to do things now than I did ten years ago, it’s just now I’m waiting for the network not the processor.
Hello, Cloud!
Bob, I think you missed the death of Dennard scaling, or the end of the MHz wars, in 2003. The reason mobile has become so powerful is that it was not limited by silicon’s inability to handle higher frequencies, because it was tuned for power efficiency rather than clock speed. The real war now is for eyeballs and data. Moore’s Law has held true in density, but not for business planning purposes. Both Intel and Microsoft would be in much stronger economic positions if we were still being forced into updating home computers every 2-3 years. There is no compelling reason to do so because we have only seen 10% improvements in productivity for computers since 2004. Intel and Microsoft are trying to switch industries to mobile and they are not succeeding because they are having a really hard time dropping their current income support.
I like your columns, but I think this is one of your weaker efforts.
Moore’s Law is CRIPPLED. As Craig, Dima and Brendan pointed out in various ways above, while silicon transistor size may still be following Moore’s Law (and it is getting horrendously expensive to do so), the speed of commercial circuits has topped out under 4 GHz.
That speed improvement was basically half the charm of Moore’s in regard to computing power, and until parallel processing becomes much better tamed, it’s not coming back.
This isn’t to say that some breakthrough may bypass the silicon limitation. A new process technology (diamond, carbon nanotubes and/or other nanowires, …) or a new computing paradigm (quantum, neural, …) may leapfrog the current state of the art and perhaps even leave Moore’s in its dust.
But nobody should blindly wave the Moore’s Law flag anymore. It’s not what it used to be.
I suppose I should acknowledge that at the cloud level, parallelism has made great strides. Google, for example, uses a massively parallel map-reduce algorithm to give us our 20 million hits in a fraction of a second.
For apps I can download that make use of all eight cores in the Intel chip under my desk … not so much.
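For what it’s worth, spreading even a trivial job across those eight cores is straightforward in principle; the sketch below is a generic map-reduce toy in Python, not anything Google-specific:

```python
# Toy map-reduce across local CPU cores: build a word-length histogram in parallel.
# A generic illustration of the pattern, not Google's MapReduce.
from multiprocessing import Pool
from collections import Counter

def map_chunk(lines):
    """Map step: word-length histogram for one chunk of lines."""
    counts = Counter()
    for line in lines:
        for word in line.split():
            counts[len(word)] += 1
    return counts

if __name__ == "__main__":
    text = ["the quick brown fox jumps over the lazy dog"] * 100_000
    chunks = [text[i::8] for i in range(8)]          # one chunk per core
    with Pool(processes=8) as pool:
        partials = pool.map(map_chunk, chunks)       # map runs in parallel
    totals = sum(partials, Counter())                # reduce step
    print(totals.most_common(3))
```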
Moore’s Law is no longer relevant for desktop computers but it is driving the development of smartphones where it is more important for power consumption and battery life than computing power.
Even if it were possible to give a huge boost to chip speed on PCs, there wouldn’t be much demand for that anyway because very few applications would make use of it.
“Even if it were possible to give a huge boost to chip speed on PCs, there wouldn’t be much demand for that anyway because very few applications would make use of it.” So far that has not been the case. As chip speeds go up all apps become more complex, even if it’s only due to poorer coding and unneeded features. The browsers alone would take on more and more work. The only thing that can replace raw power on the desktop is raw power in the cloud. And that won’t really happen until we have gigabit internet connections. But it’s true that the demand appears to be going down as simpler mobile devices replace PCs for the tasks performed by most people. Look at it this way: before cars were invented and made affordable there was no demand for them.
Nowadays a lot of people are perfectly contented with their five year old PCs because they don’t see much need for extra speed, even to make up for extra bloat in new versions of software. Having said that, it’s possible that somebody will bring out a killer app that requires as much performance as possible. That would bring back the demand that there used to be for more speed before the faster CPUs were even released.
That’s because processor speeds haven’t improved that much in the past 5 years. I bought one of these HTPCs in 2008 (Core 2 Duo T9300 2.5 GHz – 22″ TFT) but it sometimes has trouble with live TV and with the equally old installation of Vista that it came with. That’s why for the first time Microsoft had to back off with Windows 7 and 8 rather than continue with increased complexity in new OSs.
What about Wright’s law? http://spectrum.ieee.org/tech-talk/at-work/test-and-measurement/wrights-law-edges-out-moores-law-in-predicting-technology-development
What about Ray Kurzweil’s Law of Accelerating Returns?
Many here are criticising dial-up speeds for online shopping. No, it wasn’t quick, but there are two factors that you have to consider. Firstly, websites were smaller, with less data to transfer. Secondly, people were more patient than they are now!
I’ve never liked the term Moore’s Law. I’d much rather see it as Moore’s Observation. And one of the reasons for that is to look at how we get processing speed today. Processor clock speeds have effectively topped out at around 3GHz and have been stuck there for what seems like a decade. Instead, the design of processors has become more efficient and, more importantly, there are now multiple cores.
That is, your computer isn’t one computer but many, multiplying up to follow “Moore’s Law”. Taken core by core, Moore’s Law broke down a long time ago.
We used to buy new cars every 4-5 years, too. At some point cars and PCs became “good enough” for a lot longer time span. I will drive my 2004 Passat for another 3 years. And I will keep my Core2Duo another year… probably just upgrade RAM, just like a 100,000 mile service.
I am surprised that no one corrected Moore’s Law as actually being 24 months and not 18. Apparently Gordon Moore claims he never said 18, and that figure may be taken from a similar prediction by Alan Turing, who wanted to guess future computing power based on similar metrics but came up with a rule of thumb of about 18 months.
This “law” has been written many ways over many years. I have the advantage of having actually asked Gordon Moore about it. He told me it’s 18 months but they fudged it in certain publications just for the sake of not being seen as outrageous. In fact the time period varies a bit, but 18 months is a good mean.
That’s interesting. One thing everyone forgets is Google was against advertising for several years. They didn’t start charging until 2002. Then they had to pay a company called GoTo owned by idealab, because they had a patent on ad words. They were bought by Yahoo. How Google made money prior to that is an interesting question. My memory is a little hazy but I didn’t really hear much about them until 2001 or so. I worked at a dot.com owned by idealab. They were trying to do something like YouTube but it was too early. I don’t know if they misjudged Moore’s law, or there was so much money available they didn’t want to pass it up.
And it certainly hasn’t been worth the economic instabilities, acceptance of (even eagerness for) crap software, and ill-considered architectures (including lack of security), from my POV.
Progress has a way of causing economic instabilities…like lowering prices.
Wow, so much misunderstanding, so little space. Digital technology has indeed brought a long lasting geometric progression in capability, but as my favorite mathematician once explained to me, no geometric progression goes unpunished. You may recall chip geometries were the subject of the statement Gordon Moore made, which was later formulated as “Moore’s Law”. He observed that the scale of Intel’s chips was halving each 18 to 24 months. That’s it. No magic, just an observation. Yes, in fifty years we grew accustomed to having compute power roughly double each 18 months, but the real equation describing that progress is more complex [pun intended]. As many of you recall Intel giveth while Micro$oft taketh away. Moreover significantly more compute power is consumed today by embedded processors than by general purpose machines. I would suggest Moore’s Law died when Intel stopped advertising clock rates, and all the silicon foundries were worrying about the “thermal barrier”.
The progress will continue, change happens, and I would never suggest one sell human ingenuity short, but the chip-geometry contribution to the equation is finished (almost: there is still a little lift in going 3D), therefore the “law” Mr. Moore observed no longer applies. MOORE’S LAW is dead. Long live Moore’s Law. Because when photonic switches are manufacturable and all-light CPUs/GPUs are being designed, bar the door: we will be in for another half century to a century of very impressive graphs… 😉