Two weeks ago IBM told the IT world it was taking on Intel in the battle for server chips with new Power8 processors incorporating advanced interconnection and GPU technology from NVIDIA. This followed an announcement earlier in the year that Google was using Power8 processors in some of its homemade servers. All this bodes well for IBM’s chip unit, right?
Not so fast.
Some product announcements are more real than others. While it’s true that IBM announced the imminent availability of its first servers equipped with optional Graphics Processing Units (GPUs), most of the other products announced are up to two years in the future. The real sizzle here is the NVLink and CAPI stuff that won’t really ship until 2016. So they are marketing a 2016 line of Power8 vaporware against a 2014 Intel spec. No wonder it looks so good!
This new technology won’t contribute to earnings until 2016, if then.
Worse still, the Power8 chips rely on a manufacturing process that’s behind Intel’s, from a division IBM was trying, only a few weeks before, to pay someone to take off its hands (without success). And that Google announcement of Power8 servers didn’t actually say the search giant would be building the boxes in large numbers, just that it was testing them. Google could be doing that just to get better pricing from Intel.
This is IBM marketing-speak making the best of a bad situation. It’s clever leveraging of technology that is otherwise rapidly becoming obsolete. If IBM wants Power to remain competitive they’ll have to either spend billions to upgrade their East Fishkill, NY chip foundry or take the chips to someplace like TSMC, which will require major re-engineering because TSMC can’t support IBM’s unique (and orphaned) copper and silicon-on-insulator (SOI) technology.
IBM is right that GPUs will play an important part in cloud computing as cloud vendors add that capability to their server farms. First to do so was Amazon Web Services with its so-called Graphical Cloud. Other vendors have followed, with the eventual goal that we’ll be able to play advanced video games and use graphics-intensive applications like Adobe Photoshop purely from the cloud on any device we like. It’s perfectly logical for IBM to want to play a role in this transition, but so far those existing graphical clouds all use Intel and Intel-compatible servers. IBM is out of the Intel server business, having just sold that division to Lenovo of China for $2.1 billion.
Key to understanding how little there is to this announcement is IBM’s claim that the new servers it is shipping at the end of this month have a price-performance rating up to 20 percent higher than competing Intel-based servers. While 20 percent is not to be sneezed at, it probably isn’t enough to justify switching vendors or platforms for any big customers. If the price-performance edge were, say, double, that might be significant, but 20 percent (especially 20 percent that isn’t really explained, since IBM hasn’t revealed prices yet) is just noise.
Those who adopt these new IBM servers will have to switch to Ubuntu Linux, for example: industry stalwart Red Hat doesn’t yet support the platform, nor does IBM’s own AIX. Switching vendors and operating systems to maybe gain 20 percent won’t cut it with most IT customers.
And we’ve been here before.
From Wikipedia: “The PowerPC Reference Platform (PReP) was a standard system architecture for PowerPC based computer systems developed at the same time as the PowerPC processor architecture. Published by IBM in 1994, it allowed hardware vendors to build a machine that could run various operating systems, including Windows NT, OS/2, Solaris, Taligent and AIX.”
PReP and its associated OSF/1 operating system were an earlier IBM attempt to compete with and pressure Intel. Intel kept improving its products and made some pricing adjustments, destroying PReP. Linux basically killed OSF/1 and every other form of Unix, too. OSF/1 came out in 1992. PReP came out in 1994. By 1998 both were forgotten and are now footnotes in Wikipedia.
IBM’s OpenPower is likely to suffer the same fate.
I get confused about cloud processing. I can understand companies will want online services that process their data quickly and efficiently but graphics data seems to me a different game entirely.
Those who want intense graphics are usually photographers, movie-makers or gamers. Of the three, movie-makers really only want data conversion, not graphics processing directly. Photographers too, probably. So that leaves gamers. Yes, they want fast screen updates, but surely that is governed more by local graphics cards and monitors. Stick the Internet into the process and you have a very tight bottleneck, especially for most domestic users who don’t live in a premium-speed data area (like me). Until recently we had a 2 Mb/s line on the local loop. It’s nearer 8 Mb/s now, not accounting for ISP throttling. What’s more, all that graphics data, if processed online, is going to be wrapped in transmission protocols, making it actually larger (although I concede more compressible).
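To put rough numbers on that bottleneck (my own arithmetic, assuming uncompressed video): a 1080p frame is 1920 × 1080 pixels × 3 bytes ≈ 6 MB, so 60 frames per second works out to roughly 3 Gb/s before compression. Even the heavily compressed game streams of the day asked for something like 5 Mb/s and up, which doesn’t leave much headroom on an 8 Mb/s line.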
As I say, I’m confused. What is the advantage of doing graphics processing online?
Nigel, I am reading this on a Chromebook. This little device has become my primary computer at home. In recent weeks I have been doing vector illustrations for a proposal in Inkscape, entirely in a Chrome browser. Given the poor quality of my DSL connection here at home, this may remain a frustrating process for some time to come. Any latency on the “cloud” end of this process is multiplied by my iffy network connection. When I do a vector illustration on a local computer, I am the only user at the moment and can assume that only my screen is being redrawn. In the “cloud” there may be thousands of users, each expecting performance similar to that delivered by a dedicated CPU. Preparing multi-layer vector illustrations may be even more of a corner case than gaming, but all such graphical applications must be able to scale to demand. I think this is the leading case for GPUs in “cloud” services.
Hi Tokind. Thanks for the response. Yes, I can see vector graphics as being basically maths, in that each entity is described as a set of coordinates and attributes. This type of calculation can be whipped along and fed back via DSL quite quickly. The modest graphics capabilities of your Chromebook (I have an Acer C720 too) will be glad of the cloud computations, as it can then just concentrate on redrawing the screen, etc. But I didn’t think of that as an online GPU.
I could be making a mistake here in thinking that cloud graphics are actually any different from standard online computing. Maybe it’s just a way of more massively parallelising the processing using fewer physical chips?
While the ‘G’ in GPU stands for graphics, the ‘PU’ (processing unit) is the more interesting part. GPUs are designed to work on a lot of data quickly and in parallel, specialising in arithmetic. For example, they can perform operations like matrix multiplication way quicker than a general-purpose CPU can. GPUs excel at data-heavy, versus code-heavy, applications. That is why people care about these. For more details see https://en.wikipedia.org/wiki/General-purpose_computing_on_graphics_processing_units
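To make that concrete, here is a minimal CUDA sketch of the kind of data-parallel arithmetic described above (illustrative only; the kernel, names and sizes are mine, not anything IBM or NVIDIA ships). Each of the N × N output elements gets its own GPU thread:

```cuda
// Naive matrix multiply: one GPU thread per output element.
// Real code would call a tuned library such as cuBLAS instead.
#include <cstdio>
#include <cuda_runtime.h>

#define N 512  // square N x N matrices

__global__ void matmul(const float *a, const float *b, float *c) {
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < N && col < N) {
        float sum = 0.0f;
        for (int k = 0; k < N; ++k)
            sum += a[row * N + k] * b[k * N + col];
        c[row * N + col] = sum;  // N*N of these dot products run in parallel
    }
}

int main() {
    size_t bytes = (size_t)N * N * sizeof(float);
    float *a, *b, *c;
    // Unified memory keeps the sketch short: host and device share pointers.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < N * N; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    dim3 threads(16, 16);                       // 256 threads per block
    dim3 blocks((N + 15) / 16, (N + 15) / 16);  // cover the whole matrix
    matmul<<<blocks, threads>>>(a, b, c);
    cudaDeviceSynchronize();                    // wait for the GPU to finish

    printf("c[0] = %.0f (expect %.0f)\n", c[0], 2.0f * N);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

On a CPU this would be a serial triple loop; on the GPU the outer two loops dissolve into the thread grid, which is the whole trick behind general-purpose GPU computing.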
Hi Roger. Thanks too for your response. Yes, I can understand that GPUs are more dedicated parallel-processing units and that they usually use faster RAM too. Some graphics-card drivers have been written to allow the main CPU to off-load other non-graphics tasks to them when the requirement for screen calculations is low.
I suppose I have always imagined GPUs to specialise in geometric calculations – those used in 3D image processing for example – as I think that’s what they were originally designed to do.
Way back when, Intel offered Maths Co-processors to complement their main CPUs (at extra cost) to help with maths-centric calculations; I had the sockets but never filled them. Subsequently, as the years rolled on, they were folded onto the same main die as an upgrade incentive. Those are the type of thing I can understand being added to servers to speed up any calculations. Perhaps that’s all these online GPUs really are?
Is the use of other companies’ technology in Power8 also a symptom of the decline of IBM? Once, a new generation of Power chip meant a significant increase in performance and sometimes abilities. Given the steady commoditisation of IT across the board, Intel, after its scare about x64 a few years back, has played its cards well with steadily improving and capable chips. I don’t think IBM’s attempts to keep its proprietary lines competitive (outside mainframe) have succeeded, so I agree with your assessment. Will AIX see out the next decade? How much longer will ailing HP keep HP-UX alive, and Oracle its Solaris? Are there any other significant old Unix vendors left?
I think it is telling that the new large x86_64 systems that HP is building are being touted as having improved Linux and Windows scalability assisted by HP systems and OS engineers, but HP-UX is not being ported to the systems. That says to me that HP-UX does not have enough user base to generate a return on the investment required to port to x86 (probably the easy part) and fully test, debug, certify, and market the x86 variant.

I don’t have any idea of the financial status of HP-UX specifically, but in the 3rd quarter financial report from HP, the “Industry Standard Servers” segment generated $3 billion in revenue with 9% growth, while the “Business Critical Systems” segment (which includes not only the HP-UX systems but also NonStop) generated $233 million in revenue while shrinking 18%. It is obvious why the traditional BCS systems have to utilize as many commodity components as possible.

I don’t think there is a good answer to how long HP will keep HP-UX alive, but the best guess would be based on how many customers have contracts which require it and the length of those contracts. It doesn’t seem like there is any financial justification at this point.
It is pretty telling that IBM is marketing these new hardware platforms but failing to bring AIX along with them. Either IBM is really dropping the ball when the hardware doesn’t come with an operating system, or else AIX is doomed.
Guys, NVLink will take a while to be implemented… but CAPI is already there. Now it is a matter of using the technology, which I’m sure several companies are already doing. Soon we will see FPGA-accelerated computations. Also, the availability of GPUs and the little-endian mode of POWER8 will make that platform a real competitor against existing Intel x86. Actually, according to the SPEC benchmarks, POWER8 is around 2x the performance per core relative to the latest v3 Xeons. The only problems IBM had were application portability and the lack of accelerators. Now we already have that: easy coupling with FPGAs and GPUs, plus little-endian mode. I think POWER8 is going to take a big bite out of Intel’s high-end market share. Let’s see what happens… but with the Tyan white box at ~$2,700 and more white boxes coming next year… go go IBM! We need some competition; otherwise we are in the hands of the Intel guys…
Fast new processors are USELESS without software that takes advantage of that. Remember Itanium? Hardware companies are stuck because software companies… well, suck.
x86 has been a dead end for more than a decade. We need some geniuses to come up with a way to get around software monopolies. Any new processor is a dead investment unless Windows/Apple/etc. can be dealt with.
Even Microsoft and Apple made new OSs to take advantage of ARM processors, but those couldn’t beat the more power-hungry Intel processors, which are already entrenched in the minds of developers, who seek to develop for hardware that already exists and does the job. These types of problems are more like chicken-and-egg situations. ARM won out on simple mobile apps, where battery life is most important. So to beat Intel in the high-power server world, there would have to be valuable apps that would overwhelm Intel chips yet run well on the IBM chips. Otherwise developers will (correctly) assume that the market is too small to bother with. So while it’s true that Microsoft and Apple have monopolies, it’s not that they are unwilling to change. They will change their software for a new processor, as long as there is sufficient need.
1994 PowerPC Reference Platform – IBM – Wikipedia article.
1995 Common Hardware Reference Platform – IBM & Apple – Wikipedia article.
Later Power Architecture Platform Reference – Power.org – Wikipedia article.
I suppose persisting is worth something. 🙂
I watched my son playing an online video game on his new i7, which he paid for himself from his minimum wage job as a dishwasher and bread maker . . . but I digress. The graphics were stunning and the game responsive. It looked like a game console experience. I’m just not able to keep up with this stuff. Becoming an old man, an old man with old engineering knowledge, who nevertheless still loves his profession.
Power8 technology 😀
If the Power8 processors have 20% more performance and use a lot less power (no pun intended) than the Intel processors, then Google may indeed have plenty of motivation to switch. Porting their home-grown server stuff is likely a no-brainer.
Some years ago I read that Google prefers to use cheap processors with low speeds, like 300 MHz, and just plugs in new CPUs when the old ones go down.
By the time we are editing in the cloud, local systems will be vastly superior at manipulating 4K and real-time 3D ray tracing. Some say they already are.
The client server power shift continues. Thin client reboots are cyclical and often ignore the evolving sweet spot of power that fits in the palm of one’s hand (or wrist watch).
All the technological specs aside, and with the looping debate over how fast we need to go to be “performance effective”: where is one good opinion on IBM technology and practices from you, Cringely?
IBM speak: These new Power8 servers dramatically reduce the power requirements per rack of servers
Translation: These servers do in 4U of rack space what others do in 1.