First a look at my predictions from one year ago and how they appear in the light of today:
Prediction #1 — Everyone gets the crap scared out of them by data security problems. Go to the original column (link just above) to read the details of this and all the other 2015 predictions but the gist of it was that 2015 would be terrible for data security and the bad guys would find at least a couple new ways to make money from their hobby. I say I got this one right — one for one.
Prediction #2 — Google starts stealing lunch money. The title is 100 percent smart-ass but my point (again, read the details) was that reality was finally intruding on Google and they’d have to find more and better ways to make money. And they did, through a variety of Internet taxes as well as cutting whole businesses, reorganizing the company, and laying off thousands of people. I got this one right, too — two for two.
Prediction #3 — Google buys Twitter. It didn’t happen, though it still might. I got this one wrong — two for three.
Prediction #4 — Amazon finally faces reality but that has no effect on the cloud price war. This one is kind of subtle, the point being that Amazon would have to cut some costs to appease Wall Street but that the cloud price war (and Amazon’s dominance in that area) would continue. I think I got this one right — three for four.
Prediction #5 — Immigration reform will finally make it through Congress and the White House and tech workers will be screwed. This certainly hasn’t happened yet and the situation looks marginally less horrible than it was a year ago. A true resolution will depend on who is the next President. I got this one wrong — three for five.
Prediction #6 — Yahoo is decimated by activist investors. The fat lady has yet to finish singing but I’ll declare victory on this one anyway — four for six.
Prediction #7 — Wearables go terminal. Wearables were big in 2015 but I was way too optimistic about the technical roadmap. Maybe 2016 or — better still — 2017. I got this one wrong — four for seven.
Prediction #8 — IBM’s further decline. I got this one dead-on — five for eight.
Prediction #9 — Where IBM leads, IBM competitors are following. Just look at HP or almost any direct competitor to IBM, especially in services — six for nine.
Prediction #10 — Still no HDTV from Apple, but it won’t matter. I was right — seven for ten.
Seventy percent right is my historic average, which I appear to have maintained. Hopefully I’ll do better in 2016.
Now to my first prediction for 2016 — the beginning of the end for engineering workstations. These high-end desktop computers used for computer-aided design, gene sequencing, desktop publishing, video editing and similar processor-intensive operations have been one of the few bright spots in a generally declining desktop computer market. HP is number one in the segment, followed by Dell and Lenovo, and while the segment only represents $25-30 billion in annual sales, for HP and Dell especially it represents some very reliable profits that are, alas, about to start going away, killed by the cloud.
A year ago the cloud (pick a cloud, any cloud) was all CPUs and no GPUs. And since engineering workstations have come to be highly dependent on GPUs, that meant the cloud was no threat. But that’s all changed. Amazon already claims to be able to support three million GPU workstation seats in its cloud and I suspect that next week at CES we’ll see AWS competitors like Microsoft and others announce significant cloud GPU investments for which they’ll want to find customers.
This change is going to happen because it helps the business interests of nearly all parties involved — workstation operators, software vendors, and cloud service providers. Even the workstation makers can find a way to squint and justify it since they all sell cloud hardware, too.
Say you run a business that uses engineering workstations. These expensive assets are generally used about 40 hours out of a 168-hour week. They depreciate, require maintenance, and generally need to be replaced completely every 2-3 years just to keep up with Moore’s Law. If you do the numbers for acquisition cost, utilization rates, upkeep, software, software upgrades, and ultimate replacement, you’ll find that’s a pretty significant cost of ownership.
Now imagine applying that same cost number to effectively renting a similar workstation in the cloud. You still use the resource 40 hours per week but when you aren’t using it someone else can, which can only push prices lower. Installing a new workstation takes at most minutes and the cost of starting service is very low. Upgrading an existing workstation takes only seconds. Dynamically increasing and decreasing workstation performance as needed becomes possible. Hardware maintenance and a physical hardware inventory pretty much go away. Most data security becomes the responsibility of the software vendor, who is now selling their code as a service. Only intelligent displays will remain — a new growth area for the very vendors who will no longer be selling us boxes.
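To put that bean-counter arithmetic in concrete form, here is a minimal back-of-the-envelope sketch in Python; every dollar figure in it is a hypothetical assumption for illustration, not a quote from any vendor:

```python
# Rough total-cost-of-ownership comparison: owned workstation vs. cloud rental.
# All dollar figures below are hypothetical assumptions for illustration only.

HOURS_USED_PER_WEEK = 40
WEEKS_PER_YEAR = 50
LIFETIME_YEARS = 3           # replaced every 2-3 years to keep up with Moore's Law

# Owned workstation (assumed figures)
purchase_price = 8000.0      # acquisition cost
annual_upkeep = 1200.0       # maintenance, power, IT overhead
annual_software = 2500.0     # licenses and upgrades

owned_total = purchase_price + LIFETIME_YEARS * (annual_upkeep + annual_software)
owned_per_hour = owned_total / (LIFETIME_YEARS * WEEKS_PER_YEAR * HOURS_USED_PER_WEEK)

# Cloud workstation (assumed rate): pay only for hours actually used,
# with software bundled as a service.
cloud_rate_per_hour = 1.50
cloud_total = cloud_rate_per_hour * LIFETIME_YEARS * WEEKS_PER_YEAR * HOURS_USED_PER_WEEK

print(f"Owned: ${owned_total:,.0f} over {LIFETIME_YEARS} years (${owned_per_hour:.2f}/hour)")
print(f"Cloud: ${cloud_total:,.0f} over {LIFETIME_YEARS} years (${cloud_rate_per_hour:.2f}/hour)")
```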
This will happen because corporate bean-counters will demand it.
From the software vendor’s perspective, moving to software as a service (SaaS) has many benefits. It cuts out middlemen, reduces theft and piracy, allows true rolling upgrades, lowers support costs, and raises revenue overall.
Now add in for all parties the cross-platform capability of running your full workstation applications occasionally from a notebook, tablet, or even a mobile phone and you can see how compelling this can be. And remember that when that big compile or rendering job has to be done it’s a simple matter to turn the dial up to 11 and make your workstation into a supercomputer. Yes, it costs more to do that, but then you are paying by the minute.
Let’s extend this concept a bit further to the only other really robust PC hardware sector — gaming. There are 15 million engineering workstations but at least 10 times that many gaming PCs and the gamers face precisely the same needs and concerns as corporate workstation owners.
My son Cole just built a 90 percent Windows gaming PC. I refer to it as a 90 percent computer because it probably represents 90 percent of the gaming power of a top-end machine. Cole’s new PC cost about $1800 to build including display while a true top-of-the-line gamer would cost maybe $6,000. If Cole’s PC were a piece of business equipment, what would it cost to lease — $20-30 per month? If he could get the same performance for, say, $40 per month from a cloud gaming PC, wouldn’t he do it? I asked him and he said “Heck yes!” The benefits are the same — low startup costs, no maintenance, better data security, easy upgradability, and cross-platform capability.
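The raw break-even arithmetic on a machine like Cole’s is easy to sketch (the $40 cloud rate is just the guess from the paragraph above):

```python
# Break-even: $1800 gaming PC vs. a hypothetical $40/month cloud gaming rig.
build_cost = 1800.0        # Cole's actual build, including display
cloud_per_month = 40.0     # guessed cloud subscription price

months_to_break_even = build_cost / cloud_per_month
print(f"{months_to_break_even:.0f} months")  # 45 months, nearly four years
```

By this measure owning still wins over a typical 2-3 year replacement cycle, which suggests the real draw is the zero-maintenance, always-current, play-anywhere benefits rather than the monthly bill alone.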
This makes every gaming PC vendor vulnerable.
If you think this won’t happen or won’t happen soon you are wrong. AWS can right now support 4K displays. Moving the game into the cloud is a leveler, too, as gamers vie not over their puny Internet connections but rather over the memory bus of the server that hosts them. Just as Wall Street robo-traders move closer to the market servers to gain advantage, so too will gamers. And the networks will only get better.
This is going to be big.
I fear you are right.
The cost of ownership would be wonderful, and CPU and storage scaling up and down with the need of the moment would be awesome.
It would however leave us completely at the mercy of our network access monopolists, and that is NOT a position I want to get any deeper into.
I also don’t relish subscription software. Being able to occasionally fire up 15- and 20-year-old programs to get at an old project exactly as it was is satisfying (blindingly fast!) and sometimes very important.
What, no redundant network connection? I live on the side of a mountain with neither cable nor DSL available, yet still have two ISPs.
Curious how you connect to get that speed…
This all sounds grand and glorious, but it assumes one thing – reliable, fast, reasonably priced internet access. You, a resourceful, technically capable person, may have found a way around the issue but your situation is unique. I can assure you, outside of metropolitan areas (at least here in the southeast and Midwest US), these options simply do not exist. Many in the hinterlands are *still* living with dreadfully slow (think 3 to 6 mbit) wired connections or 3G connections. This is why you are starting to see the “deals” with “free” video streaming at the wireless companies because these customers simply have no other options. With the speed of wired connections or the cost of 3G/4G connections, it will be a *long* time before this spreads to gaming for these folks. Until someone is willing to step in and break up the monopolies, all hail our new internet tollgate overlords!
” Many in the hinterlands are *still* living with dreadfully slow (think 3 to 6 mbit) wired connections”
I live 12 miles from AT&T headquarters. They can only provide me a connection speed of 2-3 Mbit.
Not only is it slow, but it’s really expensive. Your $40/month workstation sounds good in the abstract, but it has to ride on the DSL or cable bill, which is never as low as you’d expect. We’re caught in this uncomfortable place between utilities and free markets where everyone pretty much has to have it, but relying on the invisible hand to control costs fails because the high cost of entry and near monopoly in most places dissuades anyone new from competing and driving costs down. All that’s left is corporate instincts to cut costs, increase profits and never, ever build out expensive infrastructure in places they can’t get their money back. Worst of both worlds.
I’m in a city that just got wired for 1 Gb right down my alley, and there are ads everywhere telling me how awesome and affordable it is. But you can’t buy it a la carte at the advertised prices ($29 with purchase of fancy phone service or DirecTV!) so you end up paying $90 or $110 or whatever. And you can’t get the old copper at any price in my area: CenturyLink no longer recognizes it as a product. I understand they have fixed costs to connect a customer at any speed, and they want to maximize revenues, and people get happier the faster they go and all that. But honestly most folks couldn’t spot the difference between 20 Mb and 50 Mb and 100 Mb within their Netflix, email and Google world. There’s no requirement to offer a basic connectivity package at a low cost, and companies won’t offer it because that’s all most folks would ever buy.
This is a reply not just to Cris, but also to the commenters preceding him in this thread. I think you are mistaken about the bandwidth required. The key to making this work is painting the workstation screen with H.264 or H.265 video. Hardware encoders are built into most modern processors so generating this video does not slow anything down. And compared with watching a movie on Netflix, workstation or gaming PC screens actually use LESS bandwidth because there’s not much really changing on-screen. So if you assume that running virtual Photoshop will tax your two-megabit Internet service, it won’t. Just as an example, the network bandwidth required for a typical 1920-by-1080 Minecraft session (admittedly on the lower end of gaming) is 40 kilobits per second. I’ve measured it.
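As a rough sanity check on that claim, here is a sketch comparing a slow link against a few streaming workloads; the 40 kbps Minecraft figure is the measurement cited above, while the other bitrates are illustrative assumptions:

```python
# Back-of-the-envelope check of the bandwidth claim above.
# Only the 40 kbps Minecraft figure comes from the measurement quoted
# in the comment; the other bitrates are illustrative assumptions.

link_kbps = 2000  # a "dreadfully slow" 2 Mbit rural connection

workloads_kbps = {
    "Minecraft session, 1920x1080 (measured above)": 40,
    "mostly static CAD/Photoshop screen (assumed)": 300,
    "Netflix HD stream (typical published figure)": 5000,
}

for name, kbps in workloads_kbps.items():
    verdict = "fits" if kbps <= link_kbps else "does NOT fit"
    print(f"{name}: {kbps} kbps -> {verdict} in a {link_kbps} kbps link")
```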
How many are already working via the internet? High-priced labor that uses these workstations often works from home, logging in remotely to their workstations. Sure, this won’t work for everyone, and in many cases the company will host their own workstation cloud, but it’s a real thing. It isn’t for consumers in rural areas, it’s for knowledge workers that already have this capability, really, imo.
I’ll say, however, that it might take a year or two longer; it takes time to build out data centers with expensive GPUs. NVIDIA is looking better every day, though.
Hah! We had two ISPs with two separate fibre cables entering the building. All was well until a construction contractor put his backhoe’s bucket right through both.
Are you still using your yagi setup?
Must be two dial-ups: MSN and AOL. 🙂
And I was sure it was Prodigy.
meet the new mainframe operator, same as the old mainframe operator. only now it’s not card decks, but technical workstations.
the big question is how much of the action is defense or export-restricted tech being designed, and how much of it is game graphics? you can’t put restricted-use stuff IMPHO in anybody’s cloud. one crack and the whole empire is lost.
This prediction is spot on. The end of the engineering workstation is already happening, but at different rates in different sectors. One aspect not to overlook here is the impact of new software tools coming out of the Social Media/Cloud sector. These tools tend to be Open Source with APIs aimed at mainstream software developers. A very interesting example of the technology change is the Google Earth Engine (free to researchers): users can perform complex geo-spatial and image analysis on remotely sensed images by leveraging Google’s back end via… JavaScript (who would have ever imagined?). Plus JavaScript recipes are available within the Earth Engine.
You do not mention it, but I suspect you could include graphic design workstations with Photoshop and all the other Adobe stuff. Leasing something to do that could be quite attractive compared to buying Mac Pros and subscriptions to Adobe Creative Cloud.
Technically, the cloud sounds like the answer to all of today’s computing costs. Yet the Achilles’ heel of cloud computing is the reliability and speed of the internet. In my experience, the internet is still not reliable enough nor consistently fast enough to risk tying your productivity to your ISP’s ability to provide a good connection. There are still too many unexplained outages — servers down or connections that won’t complete — with no real help from the ISP (it’s always someone else’s fault).
I already work at a company that utilizes the cloud for most server operations, with the IT department almost fully outsourced, and we constantly have outages that last hours, sometimes days. Not the whole network mind you, just a piece here or there. But the toll is that productivity suffers and the worker drones are constantly required to “catch up” once the server becomes available again. No doubt the cost looks great… but it’s at the expense of lost productivity or long hours of making up for lost time.
There’s no way the internet can support reliable day-to-day operations 24/7, much less mission-critical operations, yet. I can’t guess at when it might be able to, but 2016 surely isn’t it.
I’m not sure about this. I work in the entertainment industry on a cloud based solution and quite a few of our clients are actually choosing to roll out our services on their own private cloud since their engineering workstations (used to run software like Photoshop, Maya, Nuke, Houdini) are all offline for security reasons. If there is one thing these companies are afraid of it is data theft.
Edit: By the way, congrats on properly supporting the ç on your site. I can’t tell you how many sites get the ç wrong once it goes into their DB. 🙂
Network problems seem to come down to IT mandates that don’t take into account the real world. IT managers LOVE to make rules and tell people what they can and cannot do and sometimes even forget that their company exists to do actual work. For exactly this reason I (as my own IT department) have required redundant Internet connections and channel bonding since 2004. If I can do it in a location that has no DSL or cable Internet then almost anyone can do it. Network disruptions on the user’s end ought to be unacceptable.
You didn’t answer the security issue.
Companies need to get over data theft. Seriously – if your machine can get online or is networked in any way, it is vulnerable to attack. The fact that you have your rendering tower doesn’t make it any more secure than being in Amazon’s server farm. The janitor cleaning your cubicle could easily snatch your machine and have a whole evening to clone the HDD and do what they want.
People need to stop freaking out over this. Just assume everything stored digitally nowadays can be tracked/snooped on at any time, by anyone. Whether it’s one foot away or in a big server rack across the country makes no difference.
I think that’s the point of BitLocker or any type of whole-drive encryption. Stealing the hard drive, or the whole PC for that matter, is pointless without a proper logon.
Reply to “Ronc”
Considering BitLocker is MS’s baby, I guarantee there are backdoors built into it, since this is a gigantic US company. Social engineering at the company could nab the user’s BitLocker key (often stored in AD) which could then be used to unlock the cloned HDD contents elsewhere.
And considering SO much data is sent via the internet now, most people expose sensitive work files to the net on a regular basis, which could be intercepted at multiple points, regardless of what the company physically uses in its facilities.
I’m just saying, it’s fine to be mindful of security, but nothing is 100% secure, so don’t expect that. At a certain point, convenience and price trumps most other factors.
Prediction #10
I have been investigating this technology for about 3 years now and you hit the nail right on the head. Vendors like Onshape and Autodesk Fusion 360 are already there (or nearly so) with cloud-based CAD.
We had a proposal where I work for a locally hosted ‘cloud’ solution for our workstations (NVIDIA GRID based) that was no more expensive than buying individual workstations for the first install and (among other things) would let us access our REAL desktops and apps from (nearly) anywhere and on any device. Our management decided not to purchase this tech on this go-around (probably due to concerns about network reliability and security) but in 2-3 years when our workstations need to be replaced, this will be a strong contender and I would purchase this solution in a heartbeat.
One of the biggest issues for CAD/CAM/CAE users is that vendors like to lock you into a specific software configuration that is not ‘floatable’ to other users without buying licenses that sit mostly unused. With the GRID solution you could configure a virtual workstation to have just the licenses and options you need for a specific task. Data translation, photo rendering, kinematic simulation, etc… would be much easier to manage without the fickleness of license managers and their need to be flushed when they lock licenses from being used or released.
Browser-based CAD apps like Onshape also allow for real-time collaboration and support 3D point devices in the browser, like 3Dconnexion mice, by default.
One of the other big changes to come is the need for CAM/CAE vendors to support GPU acceleration and remote cluster computing ‘out of the box’ and for no extra cost. I have CAM/CAE solutions now that can only run on one CPU core (no parallel option) or are licensed by the number of ‘CPUs’ assigned to the task. If someone can create translation/abstraction software that will take MPI (parallel) code and get it to run in a Tesla environment, this would be a big thing, as this is the excuse I get for not having parallel apps ported over to Tesla hardware.
Well done, Bob, but could you please clarify the statement in Prediction #2 that Google collects “Internet taxes”? I thought the FCC prohibited paid prioritization with its Net Neutrality/Title II action…? Or is this something else? Thanks.
#2 – The 2015 predictions were explained more fully back when they were made – see the link given in the first sentence of this article.
The workstation prediction is definitely mixed. AWS GPU EC2 instances are good for running things like Finite Element Analysis solvers. Video games would be a bonus feature you get for free. A good connection to Amazon’s cloud could be ALMOST as fast as an ethernet connection inside your own home, and Valve has made streaming games from one PC to another a reality. So, yeah, it’s possible, but I’ve tried it, and the lag is noticeable. For serious gamers — say, someone who would spend $1800 on his rig — it’s something to avoid. (My Dell 730x cost $8,000 new, so maybe I’m too sensitive.) Basically, it’s technology that’s useful for when the kids want to play a AAA title on their hand-me-down computers while you’re not using yours. There’s a subset of games that this would be acceptable for, but certainly not for “twitchy” games like Counter-Strike or League of Legends. It might be great for Civilization V, though, once you get used to it.
High-end games won’t move to cloud due to latency. The speed of light is far too slow.
Even current ‘network’ games actually play on your local PC and then make subtle behind-the-scenes corrections when the server gets around to updating the global picture.
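The speed-of-light arithmetic behind this objection is easy to run; in this sketch the data-center distances and the encode/decode overhead are assumptions, not measurements:

```python
# Why twitch gamers worry about cloud latency: even at the speed of light,
# distance adds delay. The distances and overheads below are assumptions.

SPEED_OF_LIGHT_KM_S = 300_000   # in vacuum; light in fiber travels ~2/3 as fast
FIBER_FACTOR = 0.66

def round_trip_ms(distance_km: float) -> float:
    """Round-trip propagation delay through fiber, in milliseconds."""
    return 2 * distance_km / (SPEED_OF_LIGHT_KM_S * FIBER_FACTOR) * 1000

encode_decode_ms = 15   # assumed H.264 encode + decode + display buffering
for km in (50, 500, 1500):
    total = round_trip_ms(km) + encode_decode_ms
    print(f"{km:>5} km data center: ~{total:.0f} ms added before any game logic runs")
```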
Yup. I agree. I find it hard to understand the rationale behind updating HD frames @ 30fps on the cloud and streaming them to the gaming device. Isn’t that such a terrible waste of bandwidth? Not to mention the effect of rolling technology back to the dumb terminal days of the 80s. Isn’t it more efficient to maintain the state machine of the game on the cloud, and do all the graphics crunching at the device end?
I played many games using OnLive years ago (like in 2010/2011) and they were totally playable… on a paltry 3-5 Mbps connection. I even beat Arkham City. That was with ~480p-ish visuals @ 30fps. Lag wasn’t really an issue.
Thing is, for Photoshop, video editing and CAD work, you don’t need constant 30fps speeds – you just need it to be responsive enough that the lag isn’t noticeable.
I think we’re there. In fact, Adobe has been beta testing Photoshop Streaming since 2014 among educators and certain students.
On prediction 10, you forgot that this will be the first year of consumer VR. Your son won’t be able to use a cloud workstation to play that and neither will all of these companies developing apps and games for it; latency would be too big an issue. Also, while I agree some things like CAD will move more into the cloud, new sales of powerful VR computers will more than offset any loss of workstation sales.
Thought of this as soon as I posted my comment. I think you’re spot on.
We’ll just have to agree to disagree then because we have a VR application running at my house right now that is entirely Internet-based and my kids aren’t complaining about performance. And remember that Moore’s Law is only going to make this easier.
“Consumer” VR won’t happen as long as the headsets are $400-600 and the hardware to run them ranges from $350 (PS4) to $1500+ (Oculus PC setup).
Stop drinking the VR Kool-Aid.
We have CAD vendors pushing us towards the cloud all the time. My biggest resistance is one of latency. If the screen is even a quarter of a second behind in catching up with what you’re trying to do (spinning around a 3D model or selecting objects, for example), the psychological effect is absolutely AGONIZING. It feels like you’re walking through molasses all day. These cloud dreams are dreamt up in Silicon Valley or major metropolitan areas where everyone has a fast, fat pipe, but come out here to rural America for a day and it all goes out the window…
Not to be a fanboi about this, but when using a remote app like Splashtop to remotely display my desktop on another device, I do get issues with display stutter and lag. This isn’t a problem when checking in on the progress of a simulation that takes hours to solve, but it doesn’t work well for CAD.
So far, when testing the GRID solution that was hosted remotely to our main office, the display was very snappy and worked well for CAD/CAM apps. As others have said, the key is fast, cheap internet access along with managing ‘buffer bloat’ in the routers as RXC has commented on before.
Hey, Bob,
I’d like to ask for one more prediction: Do you predict that you’ll do any more columns for InfoWorld (your last one was at the beginning of August)?
Thanks,
Tenaya
Tenaya – Bob’s last article for InfoWorld was in 1995. See his “About” link at the top of this blog, or check his Wikipedia page for more details.
CringelysMyHero – Notes from the Field By Robert X. Cringely – “Good bot, bad cop, and the road to enlightenment”
https://www.infoworld.com/article/2955991/robotics/hitchbot-bad-cop-road-to-enlightenment.html
Bob- I only ask because I enjoy your writing.
Tenaya
One of my kids took some engineering CAD classes in high school, then went on to study engineering in college. The software applications he used in his classes required super-high-end workstations. In both cases his school had set up a virtual desktop environment. From any low-end PC they ran a desktop session on a large virtualized system in a data room somewhere else. Typical users would get 1 or 2 virtual CPUs. The CAD users would get 8 (or more) virtual CPUs and lots more memory too.
Yes, there are now more cost-effective alternatives to big dedicated engineering workstations. When you throw in some Frame2 technology, it gets even better!
Re rolling software updates: I can tell you that they stink for Photoshop. The real problem is that the testing of each update seems to be significantly reduced. In the decades before Adobe’s software lease model, PS had at most one dot release to fix bugs each “generation” and sometimes zero. And for my use there was never a bug released that impacted me.
Now, there are rolling bug shipments, including really bad flaws that get shipped.
I am not against the cloud model of sales that Adobe is using, and it is helping Adobe, which makes me happy. I just don’t like what it has done to the mentality of “testing”. I’m sure that they don’t think “no matter, we can fix it tomorrow”, but the result seems to be the same, and that is even with the same great engineers that they have had for years. Must be a management “focus” 🙂
While I agree with you on workstations, I disagree on gaming PCs. Gaming requires latency to be as low as possible between mouse and screen. That isn’t possible when it’s going through the network. Even if the processing is faster, there is too much delay for it to work on fast twitch games, as several game streaming companies have found out.
“but then you are paying by the minute”. This isn’t true for Amazon. From https://aws.amazon.com/ec2/pricing/:
“Pricing is per instance-hour consumed for each instance, from the time an instance is launched until it is terminated or stopped. Each partial instance-hour consumed will be billed as a full hour.”
So, you are paying by the hour.
Google is closer to what you originally said. From “https://cloud.google.com/compute/pricing”:
“All machine types are charged a minimum of 10 minutes. For example, if you run your virtual machine for 2 minutes, you will be billed for 10 minutes of usage.
After 10 minutes, instances are charged in 1 minute increments, rounded up to the nearest minute. An instance that lives for 11.25 minutes will be charged for 12 minutes of usage.”
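A quick sketch of what those two quoted billing rules mean in practice; the session lengths here are hypothetical:

```python
import math

# Billing-granularity comparison using the rules quoted above:
# AWS (at the time): each partial instance-hour billed as a full hour.
# Google: 10-minute minimum, then per-minute, rounded up.

def aws_billed_minutes(run_minutes: float) -> int:
    return math.ceil(run_minutes / 60) * 60

def gce_billed_minutes(run_minutes: float) -> int:
    return max(10, math.ceil(run_minutes))

for session in (2, 11.25, 75, 130):   # hypothetical session lengths in minutes
    print(f"run {session:>6} min -> AWS bills {aws_billed_minutes(session):>3} min, "
          f"Google bills {gce_billed_minutes(session):>3} min")
```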
The GPU cloud is new and that means business models will be in flux for a while. Yes, AWS charges for full hours NOW but you are unlikely to be buying direct from AWS. A third party like a software vendor will commit to buy from AWS millions of seat-hours per month and how they use those (and how they charge you to use them) will not directly involve AWS policies.
The cloud will be throttled by ISPs. ISPs lost on net neutrality so they are imposing bandwidth caps. Customers of cloud services will have anxiety as they reach their bandwidth caps. That uncertainty might dissuade customers from using cloud services. That could even throttle internet growth. Millions of people are going to be cut off from the internet. It was fun while it lasted, but the party might be over soon with bandwidth caps.
“Expensive Asset”? What, a $5K or $10K high-end desktop computer used for 2 or 3 years? Used 40 hours per week (up to 60 hours/week) by a $100K/year or more engineer? With depreciation credits at tax time?
That’s not expensive. What would be expensive is having down-time on the Cloud, having your intellectual property stolen from the Cloud, having ‘crunch time’ blocked by limited Cloud resources, having all productive work channeled through the fast yet still limited pipe of the Internet.
That part of your prediction is penny-wise, pound foolish. Otherwise, you provide a fascinating perspective on the industry. Keep up the good work.
….and then there’s the fact that all your workstation content is being managed (and maybe browsed or stolen) by who-knows-who bad actors. So AWS is cheap….but what about all the other costs….including loss of your own content.
What the cloud naysayers are forgetting is economics. If you can get 80% of the capability for 50% of the hardware costs then who cares about the meat bags? They will get it done somehow. We all know that poor decisions are made all the time because of the “numbers”. Cloud will not be any different.
Others have said it and I’ll repeat it: cloud depends on fast (latency AND bandwidth), reliable Internet. Preferably uncapped. The further you get from Silicon Valley, the less likely it exists.
I’ve only recently upgraded my home internet to 4G. No, that’s not a typo. DSL and cable modems, with < 20 msec latency and 10+ megabit bandwidth, simply do NOT EXIST here. At any price. The latency on satellite Internet is improving but nowhere near anything you'd want to do RDP or similar over (not to mention the caps). 3G worked as long as only one of us was using it and we weren't pushing too many pixels through it. I could SSH all day and do some VNC and it worked.
My wife spends a lot of time using RDP to hit a corporate, private cloud over a VPN. Push enough pixels through that channel and it bogs down. Bad. When I get online and do the same, it's noticeable. She can tell when I'm online and when I'm not.
Companies may go with a private cloud and VDI but that only works well if the employees have corporate office, gigabit connections or DSL/cable modems. Anywhere other than the office or a major metro area, forget it. I'm better off doing my development work and testing in a VM, running locally, and occasionally pushing code updates through the VPN. I can only imagine what it's like trying to use CAD or some kind of graphic or video editing through that.
I can recall a prior employer where everyone was using thin clients and we had a cluster of app servers. Even with commercial connectivity, < 20 msec ping and ample bandwidth, pushing lots of pixels really bogged things down, for the person trying to use PhotoShop and for everyone else.
Until such time as the client is "fatter" and capable of offloading more of the work, so you aren't needing to push so many pixels, or such time as uncapped, low-latency Internet access is more widely available, there are certain tasks where "cloud" will be pure, useless hype instead of useful reality.
Web browsers, capable of running JavaScript client-side, work reasonably well hitting services in the cloud. Load a pile of code and data, up front, and much of the work happens locally. Tweak a CSS class or show/hide elements in the HTML and the pixels change, locally. Very little bandwidth or latency needed, especially if you can pull at least some of that from a local cache. For such things, the cloud makes sense, even if the connectivity is not so fast. Show me a desktop VDI tech that does something similar and I'll change my tune. For now, every time I see someone bloviating about running apps "in the cloud," it's a glorified RDP/VNC session. Which suffers badly over widely-available Internet access.
Actually, it is even worse. Cloud depends on competence and lack of greed (like limiting, and on and on).
Fat chance—some years ago the intracompany servers couldn’t even be kept online, let alone some cloud barf.
Today, for example, some T-Mobile account services are not available, and they require almost NOTHING in resources. Want to change a PIN? Can’t. The problem is apparently both down resources AND lack of competence in website design. This is with a company that gives much better service than the “big guys” like AT&T and Verizon.
I’d like to see a more advanced scoring system. Maybe there’s a way to get the readers to rate the predictions based on originality and whether they think it will happen or not. Bob gets more points for originality and for getting it right when everyone thinks it’s wrong.
Sorry Bob, not going to happen in 2016.
Two reasons:
1) Bandwidth. Not everyone has it, not even close.
I make YouTube videos for a living, and it’s simply impossible to use the cloud due to the file sizes and transfer times required. Just *copying* the files from the camera to a fast local drive takes an annoying amount of time, let alone transferring them anywhere else. And I’m not even shooting 4K.
And then you have otherwise first-world countries like Australia where practically no one has decent internet good enough for this use (1TB fibre required), *especially* businesses. I’m in a major business park in Sydney and pay $440 per month for 20M/20M fibre, and that’s *cheap* here. *And* we have data caps.
Even our planned and slowly rolling out National Broadband Network (NBN) that is costing us $35BN+ won’t help here.
A single video has 10GB of raw video files; it’s easy to do the math.
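That math, sketched out using the 20M/20M fibre figure from the comment (the 10 percent protocol-overhead allowance is an assumption):

```python
# "It's easy to do the math": uploading one video's 10 GB of raw files
# over the 20 Mbit/s fibre mentioned above.

raw_gb = 10
uplink_mbps = 20            # the $440/month 20M/20M fibre from the comment
overhead = 0.9              # assumed ~10% protocol overhead

bits = raw_gb * 8 * 1000**3
seconds = bits / (uplink_mbps * 1e6 * overhead)
print(f"{seconds / 3600:.1f} hours per video upload")   # roughly 1.2 hours
```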
2)
I’m also a design engineer, and engineers are for the most part control freaks. Using a cloud machine does not cut it; we only trust local working files.
A major PCB design company (Altium) last year tried to release a *free* high-end PCB design package that was cloud only. The blow-back from the industry was incredible. Even with a top-end package for free, which was unheard of in the industry, engineers still didn’t want to use it because it was cloud only.
Gee I thought engineering workstations were already dead. Isn’t Sun already gone (eaten up by Oracle) and weren’t they interested in selling servers anyway? Everyone I knew moved to x86-based Linux workstations a lonnnngggggg time ago.
I don’t see those Linux boxes going away, but I do see them becoming pure clients. On projects everywhere businesses are using Jenkins-based build boxes, pulling from some kind of git repository. That Jenkins box could easily be in the cloud as it serves up continuous builds to the engineering team.
So if engineering systems are already just relegated to purely graphical work (not my area), that makes me think of long-gone Silicon Graphics, which was also destroyed by cheap Linux systems. No reason cloud computing can’t do it; I’m hearing all the same arguments that were made against configuration management in the cloud, yet GitHub rules supreme.
Re: Intelligent displays
I got rid of my large workstation last year and replaced it with an intelligent display, specifically a $1k Mac Mini and a really nice 2560×1440 display. All the heavy computation/testing/etc. occurs in the cloud at a fraction of the price. If I needed better graphics or a sub-$1k system I’d go with a cheap Linux or Windows PC, but I also wanted something super small/quiet, which the Mac Mini is.
Re: ”Google starts stealing lunch money”. That’s fine as long as they continue to give it to me. Thanks to Google Fi my cell phone cost is down to $23/mo and the 1GB of data that cost me $10 will probably last a year thanks to Wi-Fi. I can also discontinue my $50/mo data card charge for my PC, which I rarely needed anyway, since Fi includes free tethering. I can also discontinue my GoPhone service at about $30/mo and don’t need the AT&T cell-phone-to-internet device at home, since Fi uses Wi-Fi directly when the cell reception is poor.
People who buy into using the cloud for engineering development are crazy. This is like putting all of your intellectual property in the cloud so everyone can have a copy.
This might help the VR world too – my Oculus Rift requires a high-powered graphics card that can render two 1920×1280 displays concurrently (one for each eye) at a decent frame rate. That baby cost me a few hundred six months ago. But the minor cost increase of a slightly fatter internet connection would appeal to a much wider audience.
I totally disagree with the assumption that AWS is where engineering workstations will go to rest in the near future. Maybe someday, but our development group has already tested this idea and it still remains too cost-prohibitive. I don’t understand the cost assumption of $20-30 a month for a true workstation environment. We currently have 6 MS 2012 servers running on AWS at about $300 an instance. These currently only have 2 CPUs with about 6 gigs of system memory. As soon as you start scaling up CPU cores and system memory the cost can get crazy. Even our AWS representative stated that we were better off just using local workstations to code and compile on. Our development test environment was costing almost $400 a month for 8 cores and 64 gigs of system memory — and this does not even include a workstation-level GPU. We also recognized we would need to upgrade our internet fiber circuit to 1 gig of throughput in order to decrease latency and transfer the large files needed to compile our code. We currently custom-build our developer workstations for about $3200 and use them for about 3 years. Dell and HP charge a ton for their workstation configurations, which can be twice as much for similar setups. When AWS comes down in price for CPU cores and system memory configurations then maybe we can switch, but today AWS would cost way too much for our needed development horsepower.
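Amortizing the commenter’s own numbers makes the comparison plain; this sketch uses exactly the figures given above:

```python
# Amortizing the commenter's numbers: a $3200 custom workstation over
# 3 years vs. the ~$400/month AWS dev/test environment described above.

build_cost = 3200.0
lifetime_months = 36
aws_per_month = 400.0       # 8 cores, 64 GB RAM, no workstation-class GPU

owned_per_month = build_cost / lifetime_months
print(f"Owned: ${owned_per_month:.0f}/month")   # ~$89/month
print(f"AWS:   ${aws_per_month:.0f}/month")     # ~4.5x more, before bandwidth costs
```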
Bob, like good hypotheses, good predictions should be ‘testable’. You have made some spot-on predictions in the past, and I enjoyed reading this one too…but somehow prediction #1 (at least the way you have phrased it) doesn’t seem very testable! :=) What say?
I dissent.
Engineering workstations still need software, and not all software is going to be usable or even transferable to a cloud-based workstation. I don’t see most government agencies or defense contractors moving to a public cloud (a private cloud may be an option, but then what’s the gain?).
Graphic design and gaming workstations, sure, that’s where things are headed already. Boeing? No.
The other main impediment, as many have already pointed out, is bandwidth. While getting cheaper all the time, try to justify bumping up your OC-3 to an OC-12 to handle the bandwidth of delivering 4K video to dozens or hundreds of workstations simultaneously. Again, no.
OC3 to OC12? so yesterday. get fractional gigabit on MOE or GPON and save a shipload over that “so aughts” technology.
in other news CWA has given up the IBM unionization effort. too many supporters have been canned, it seems, to make a dent. so Alliance@IBM is gone as a source. another, less kind, way to say it is “not enough of IBM left to have workers that are being harassed.”
I expect a Cringeable take on IBM will be in the predictions.
Heeeey bud… So… You have time to update your blog but not your Kickstarter page?… Please head on over and fulfill your obligations.
BTK
The last computer for which Bill Gates actually wrote code was the Tandy 102 portable, which was built by Kyocera. When the Kyocera executives arrived from Japan for a big meeting at Microsoft, Bill Gates dragged in late, unbathed and dressed in clothes he may have been wearing for days. His excuse was simple: “I’ve been up for the last three days finishing your code.”
I have many obligations and my one to Kickstarter has kept me working until 3-4AM every night for the past month. You may not feel that’s a proper use of my time but I do.
I’m a huge fan… But it’s been 2 weeks since an update has been given… Can you just plop down “We are still working, please be patient.” into the text field? Something to make your fans realize they aren’t being gypped?
Totally agree with this prediction. NVIDIA GRID has changed what is possible and now there are many companies already doing this. You don’t have to use the cloud, though; you can virtualize in your own datacenter and give your engineers access from anywhere with like-for-like performance. So on or off premises, all is possible. One thing is for sure though: data is getting bigger, users are becoming more and more mobile, and workstations just won’t be able to keep up with user demand.
I think you should give yourself a win on immigration because of the changes passed by executive order by Obama as well as what was passed by Congress in the omnibus.
Our physical desktop workstations will not disappear, but they will ‘shrink’ (i.e. getting smaller, lightweight, cheaper, etc.) while our virtual workstations in the cloud are getting more powerful, easy to use, pay per use. The main driver of this is new software container technology, https://www.hpcwire.com/2016/01/07/towards-ubiquitous-hpc/ which makes engineering software and tools packageable, portable, easy to access and use, and dead easy to maintain and support.
I’m a bit skeptical, especially since not even Microsoft has been able to come up with a cloud based email program that is as feature complete and responsive as any of their former desktop clients, from Outlook Express to Windows Live Mail.
I’m a little late to this article, but Google didn’t lay off thousands of people. Also, Google was already advertising their own insurance and mortgages before 2015. Can you explain at least what the thousands-of-people reference was?
Maybe they were temps:
https://www.pcworld.com/article/154450/google_layoffs.html
“Google classifies about 10,000 of its workers as “temporary operational expenses,” which means their positions are not official and could be eliminated without public notification.”