As promised, here are my technology predictions for 2011. These columns usually begin with a review of my predictions from the previous year because it annoys me that writers make predictions without consequences. If we are going to claim expertise then our feet should be held to the fire. But last January I didn’t write a predictions column, thinking we were past all that (silly me) so there is nothing with which to embarrass myself here. More sobering still, after last year’s holiday firestorm over our naked card Mrs. Cringely won’t let me post this year’s card. We have become so dull.
We also seem to have become verbose, because my first prediction (below) took 1400 words to write. So tell you what: this year I’ll do 10 predictions, but as separate posts. Here’s #1, which of course you knew would be… bufferbloat!
Prediction one: Bufferbloat will become a huge problem this year.
What, you’ve never heard of bufferbloat?
Sixteen years ago Ethernet inventor Bob Metcalfe made some calculations on a napkin at Sushi Sam’s in San Mateo and concluded that the Internet was set to shortly implode. He saw client traffic growing faster than backbone capacity and felt that unchecked this would lead to eventual gridlock on the World Wide Web. Obviously it didn’t happen, but that doesn’t mean it didn’t come close. Props to my old friend Bob for bringing the problem to everyone’s attention in enough time for it to be avoided by dramatically expanding backbone capacity, a trend that continues to this day.
The fact that backbone capacity increases have continued at a faster rate than Moore’s Law grows our ability to use that bandwidth means that the very gridlock feared by Metcalfe won’t (indeed can’t) happen anytime soon. It’s lost in our dust. But now we face a similar though far more insidious problem in bufferbloat, which is a big enough deal that it will be involved in three of my predictions for the coming year.
Bufferbloat is a term coined by Jim Gettys at Bell Labs for a systemic problem that’s appearing on the Internet involving the downloading of large files or data streams. In simple terms, our efforts to improve client performance for watching TV over the Internet are interfering with the Internet’s own ability to modulate the flow of data traffic to our devices.
Think this isn’t happening? How fast is your Internet connection today compared to, say, three years ago? Most users have 2-3 times the bandwidth today that they had three years ago. Does your connection feel 2-3 times faster? Of course it doesn’t, because it isn’t. And it isn’t because of bufferbloat.
In terms of latency, the Internet was faster 20 years ago than it is today — in many cases vastly faster. And it is getting slower every day. Unchecked, bufferbloat will eventually make the Internet unusable for some data-intensive activities.
Here’s how bufferbloat works. Your cable or DSL modem is connected to a data pipe that’s spewing bits. Internet traffic is bursty, so imagine that pipe flowing with what’s sometimes a trickle and a second later a flood of data that may then drop back to a trickle again. To provide the best possible user experience, the more intelligent your modem the more it will use techniques like pre-fetching and buffering. Memory is cheap, after all, so why not grab some extra data and hold it in a memory buffer in the modem just in case it is needed, at which point it can be delivered — BAM! — to the user? Why not indeed?
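To see what a buffer is actually for, here’s a toy simulation (my own sketch, with invented numbers, not anything from a real modem): bursty arrivals get queued and drained at the link’s steady rate, so nothing is lost and the flood gets smoothed into a constant flow.

```python
from collections import deque

LINK_RATE = 5  # packets the modem can deliver per tick (illustrative)
bursts = [20, 0, 0, 1, 0, 15, 0, 0, 2, 0]  # bursty arrivals per tick

buffer = deque()
delivered = 0
for arriving in bursts:
    buffer.extend(range(arriving))           # queue the incoming burst
    for _ in range(min(LINK_RATE, len(buffer))):
        buffer.popleft()                     # drain at the steady link rate
        delivered += 1
    print(f"queued: {len(buffer):2d}  delivered so far: {delivered}")
```

All 38 packets eventually get through; the buffer just absorbs the bursts. The trouble Bob describes starts when that queue is allowed to grow huge instead of staying a few packets deep.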
Few homes today have a modem connected directly to a PC. Multiple computing devices are the norm. At our House of Boys, for example, we have three home computers, three iPod Touches, two smart phones, a Roku box, a Wii, and an Xbox 360, which means we have a network and a router, with the router adding a second buffering stage. The iPod Touches and smart phones are connected to the network solely by WiFi, adding yet another buffering stage. Then there are the devices themselves, which tend to buffer data in the OS for running apps like browsers and media players. And don’t forget the apps with their little buffer indicators.
So most of us have cascading buffers, which sound good in principle but actually screw up the flow control algorithms in TCP/IP — the networking technology that runs the Internet.
What these buffers are intended to minimize are dropped data packets that require expensive (in terms of time) retransmissions under TCP (Transmission Control Protocol) or cause a playback glitch in media apps that use the simpler UDP (Universal Datagram Protocol). UDP doesn’t ask for retransmissions and is used in most video and audio streaming services since it is faster.
So we’re trying to improve the user experience by minimizing dropped packets, yet TCP requires dropped packets to implement its flow control, which was developed in response to a series of Internet congestion collapses back in 1986-87. Dropped packets, which the server detects through missing or duplicate acknowledgements, are the way, under TCP, that the server knows it is sending data just fast enough. If there are no dropped packets at all, the server sends data as fast as it can, which isn’t always good.
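The fix that came out of those collapses was TCP’s additive-increase, multiplicative-decrease behavior: the sending window grows by roughly one segment per round trip until a drop signals congestion, then it gets cut in half. Here’s a toy sketch of that sawtooth (not real TCP, just the shape of the idea, with the loss times picked arbitrarily):

```python
def aimd(rtts, loss_at, start=1):
    """Toy AIMD: grow the window each RTT, halve it when a drop occurs."""
    cwnd, trace = start, []
    for t in range(rtts):
        if t in loss_at:
            cwnd = max(1, cwnd // 2)   # multiplicative decrease on a drop
        else:
            cwnd += 1                  # additive increase otherwise
        trace.append(cwnd)
    return trace

trace = aimd(rtts=12, loss_at={6, 10})
print(trace)  # → [2, 3, 4, 5, 6, 7, 3, 4, 5, 6, 3, 4]
```

The point is that the halving step only ever happens if drops are visible to the sender. Hide the drops behind giant buffers and the window just keeps climbing.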
Here is the key point: if the amount of data held in buffers is more than the client can process in the expected Round-Trip Time (RTT) from the server and back, then TCP’s flow control simply stops working. The server runs like a bat out of Hell. This might be okay on a very lightly-loaded network or on a network with only two nodes (server and client) but our average Internet connection today is about 15 hops, meaning there are 13 other points of possible congestion between the server and the client. TCP flow control normally operates on all of those interim nodes, but not if bufferbloat circumvents flow control. The result is network congestion that happens at some interim node and, because TCP flow control isn’t working, we have no way of knowing which node is having the problem. So network latency climbs from milliseconds to seconds (see the chart above), the connection eventually fails, the buffers are all drained, then your Netflix or Hulu client begs for patience while it tries to reestablish a connection that shouldn’t have failed in the first place.
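The arithmetic here is worth spelling out: a full buffer sitting in front of a link adds delay equal to the buffer size divided by the link speed, for every packet queued behind it. A quick calculation (the buffer and link numbers are illustrative, not measurements of any particular modem):

```python
def queue_delay_ms(buffer_bytes, link_bits_per_sec):
    """Worst-case extra latency a full buffer adds in front of a link."""
    return buffer_bytes * 8 / link_bits_per_sec * 1000

# A 256 KB modem buffer in front of a 1 Mbps uplink:
print(queue_delay_ms(256 * 1024, 1_000_000))   # ~2097 ms of added delay
# The same link with a modest 16 KB buffer:
print(queue_delay_ms(16 * 1024, 1_000_000))    # ~131 ms
```

Two full seconds of queuing delay is far beyond what TCP expects an RTT to be, which is exactly how the feedback loop breaks.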
That’s exactly what happened this Christmas when our new Xbox 360 replaced the entry-level Roku box on our big-screen TV. Either device can be used to watch Netflix, so I disconnected the Roku, moving it to another TV in the house. But when Mrs. Cringely and I began watching our Lie to Me marathon one night on Netflix after the boys were asleep, it was clear that the old Roku did a much better job streaming video than the far more powerful Xbox 360. The Xbox kept stopping every 3-5 minutes to change the video quality as the network got slower or faster. It was bufferbloat. The el cheapo Roku with its tiny memory was a better Netflix streaming device than was the state-of-the-art gaming system.
I know, I know, here I am 1100 words into this mess and still on my first prediction. They’ll get a lot shorter from here on. But first I have to explain why bufferbloat is going to be a big problem in 2011. Some of it has to do with the rise of streaming video services, but most of it has to do with the retirement of PCs running Windows XP.
XP is a pretty archaic design compared to Linux or OS X or Windows 7. TCP/IP under Windows XP can only keep 64 kilobytes of data in flight at a time, which at typical Internet round-trip times implies a data rate of about six megabits-per-second. So you can’t saturate most cable or DSL connections with a computer running Windows XP, which is good. If we all ran XP and nothing but XP, there probably wouldn’t be a bufferbloat problem at all. But you can’t wear bellbottoms forever, so we’ve moved on and in the process created this huge mess for ourselves that we’ll all be talking about by the end of the year.
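For the curious, that six-megabit figure falls out of TCP’s window arithmetic: throughput can never exceed the window size divided by the round-trip time. Assuming a typical Internet RTT of about 85 milliseconds (my assumption; your mileage will vary), XP’s 64 KB window caps out near six megabits per second, while the same window over a one-millisecond LAN supports hundreds of megabits, which is why local file copies run so much faster:

```python
def max_throughput_mbps(window_bytes, rtt_seconds):
    # TCP can have at most one window of unacknowledged data in flight per RTT
    return window_bytes * 8 / rtt_seconds / 1_000_000

print(max_throughput_mbps(65_535, 0.085))   # ~6.2 Mbps across the Internet
print(max_throughput_mbps(65_535, 0.001))   # ~524 Mbps on a 1 ms LAN
```

That RTT dependence also explains why the cap bites hard on long-haul downloads but is invisible on a home network.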
The coping strategy for bufferbloat, by the way, is to minimize the buffers on your home network. Where you can adjust buffer sizes, make them as small as possible (but don’t turn them off — a little buffering is good). Routers with editable firmware like OpenWRT are nice for this. Or consider getting a DSL or cable modem aimed at gamers, because they are already optimized for lower latency. And some vendor should really offer a DSL/cable modem-router with dual-band 802.11N WiFi so there’s only one device between the ISP and you. I’d buy one.
Reduced buffer sizes or, as you say, smarter devices that, for example, buffer part of the stream but self-throttle based on downstream demand. Secondly, how does this affect your predicted P2P video services, moderated centrally but dished out from one set-top box to another within neighbourhoods?
Look for me to cover precisely this in another prediction later this evening.
Motorola makes a Cable Modem Gateway with a router and Wireless N built in, $150 at Amazon.
SBG6580
NOT dual-band. I’m picky.
Bob,
Try this. I’ve been running it now for a couple of months and works great. Dual-band wifi. About $125
Cisco DPC3825
I think he means simultaneous dual band, as in other 2.4 and over 5 at the same time. A feature that as of now I believe is only available in standalone wireless routers.
Exactly! 802.11N runs even better if you can use the 5.8 GHz band for backhaul and 2.4 GHz for serving portable devices.
Very nice residential gateway.
$119 delivered from Amuras,
https://www.amuras.com/product.asp?pf_id=1016999841
And to be unencumbered by Comcast box fees pick up a Ceton InfiniTV4.
I’m a little confused by this one. The Cisco spec says it IS a cable box, so why do I need the cablecard, too? Also, I think it’s strange for Cisco to offer a 340 megabit WAN device that has Ethernet ports that run at only 100 mbps. Maybe that 340 isn’t of our world? And it isn’t dual-band, either.
Bob,
It’s strictly a cable modem. I didn’t read anywhere in their technical specs that it serves as a cable box as well. It doesn’t run simultaneous dual-band which I believe you are looking for but it does offer b/g as well. I guess it’s loose marketing words being thrown around by Cisco.
The router also runs gigabit Ethernet, 1000BASE-T (look it up in the tech spec), although you need a separate card to run at that speed.
The Ceton quad tuner allows configuration of an HTPC which can replace the cable company set-top boxes and TiVo if you are so inclined. If all you want to do is “watch” Hulu or Netflix then the tuner is not necessary.
And announced at CES last week Ceton USB and 6-tuner:
https://www.engadget.com/2011/01/08/infinitv-4-usb-cablecard-tuner-hands-on/
http://thedigitalmediazone.com/2011/01/06/ceton-announces-4-tuner-usb-and-6-tuner-pci-e-cablecard-tuners/
Also of interest for those with HBO premium via cable subscription or satellite is HBO GO,
https://www.hbogo.com/
If you have DSL, you can (and should) set your modem to behave as *just* a DSL modem (not a NAT-gateway-thing, which is what most of them do these days) and run PPPoE on your router. That way the modem is not yet another hop at the IP level. I started doing this when I started having BitTorrent trouble a couple years ago and it helped quite a bit.
Where would I go to start looking for such a setting? I know it will be different on different modems (mine is FiOS, not DSL), but what might it be called?
Hmmm… I’m going from (somewhat rusty) memory here, but as I recall I had to hook my computer up to the modem directly – I couldn’t access its admin interface from behind the router. Once I did that, I could go to 192.168.1.1, log in with some fairly obvious admin password (like ‘admin’), and poke around until I found a setting like “be a gateway” and turn it off. Note this is a circa 2007 Siemens DSL modem so YMMV.
I suspect the only point where your FiOS box will look much like my DSL modem is in that you probably have to be hooked up to it direct – it probably doesn’t give your router an IP that’ll be accessible from behind the router. Good luck!
There should be a setting that forces the DSL/FiOS/Cable router to act like a bridge. The setting looks like ‘pass on the IP addy of the router to the computer.’ YMMV
In any case, I don’t trust the FW in the router anyway and run DHCP/NAT from a homebrew FW which is the computer plugged into the router. Which runs PPPoE for me and updates my Dynamic DNS account when the IP changes. From the green interface on my FW I connect my switch and locked down wireless.
You could try the D-Link DSL-2640B. Not sure the wifi specs on it and it has issues with some DSL providers such as Speakeasy, which I couldn’t get it to work with.
Hmmm, Comcast is jacking up the rent on their cable modems. Might be time to go shopping. Thanks Bob!
So the prediction is, “we’ll all be talking about [bufferbloat] by the end of the year.” Right?
Would you care to quantify that a little more? I’d bet against you on this one, but not without a little more clarity in terms of what constitutes a successful outcome.
Also, maybe you’ve pitched this at too general an audience for technical analysis to be possible, but I have a relevant PhD and I’m not sure I understand the problem based on your description.
I’m with you. Bob says buffering (actually prefetching, I think) on the intermediate hops will cause intermittent congestion on the backbone, which confuses streaming apps like NetFlix. But then he says that it’s dependent on the size of the buffer on the client, which shouldn’t have any effect on the intermediate hops. I don’t get it.
I’m rusty on my TCP, but I think the general idea is that a full buffer on the client will result in lost packets, which will cause the server’s end to slow down transmission, thus lightening the load for all the intermediate hops the data has to go through. I think “overbuffering” may be a better term for it than bufferbloat, but it doesn’t have quite the same ring.
I write about what I write about: if you’ll read ALL the words it is pretty clear. TCP relies on detecting dropped packets for flow control. Buffering is used in part to avoid dropped packets. A little buffering is good but a lot is bad. A little buffering smooths over the bursty nature of Internet traffic. But excessive buffering — buffering that precludes ANY dropped packets — encourages the server to go as fast as it can. This inevitably leads to congestion but that congestion is not at the client but rather at some intermediate hop. With minimal buffering it would become quickly clear not only that flow control was required but that it was happening because of problems at some interim hop. Circumventing flow control entirely means the system goes flat-out until it suddenly doesn’t go at all. The process crashes and needs to be rebuilt from scratch, which is WAY slower than just using flow control in the first place. All of this is covered in the prediction, which is why it is so long.
I don’t recall having offered to bet with anyone about anything.
802.11a/g WiFi these days can support up to 25 Mbps, while 802.11n up to around 100 Mbps in theory.
The DSL between home and ISP however, is at the level of 10Mbps.
I would believe that bufferbloat happens when one UPLOADs something, rather than download.
It can get a little faster than that with N, depending on how you use it. Remember on a lot of these home networks not everything is going over the Internet. My kids are watching a 1080p movie right now that uses the Internet for nothing at all but uses the WiFi network to reach their TV upstairs.
Uploads would be just as problematic if the same excessive buffering is used.
Are you saying that Van Jacobson’s slow start isn’t working, or that it is based on the wrong data?
For those willing to sink their teeth in it: here is a call for a paper on that https://www.linkedin.com/groupItem?view=&gid=125056&type=member&item=39287152
That’s exactly what I am saying. Slow-start is being circumvented.
I don’t understand how XP is limited to 6 megabit/s connections. We can download the full available 11 megabit/s through our DSL connection and local file transfers over tcp/ip run at around 20-30 megaBYTE/s (limited to hard disk write speeds).
Agreed. I’ve done gigabytes over a few minutes at 30+ MegaBytes per second numerous times (use to be my job).
Given that TCP and the various routing protocols do not guarantee that any two packets for the same destination even from the same source will traverse a network the same way, Bob’s scenario requires that there be at least one point of congestion between you and the transmission source. This is most likely your own home network connection to the Internet.
Now, in Bob’s cited case – of the Xbox 360 vs Roku – I’d much more suspect that the issue is the Xbox 360 itself given it is a Windows-based system, and likely has a number of issues there anyway – its primary function is playing game discs loaded locally, not running things over the Internet. The Roku was probably more dedicated to low latency of data coming in from an outside source, thus better for the job.
But even so, this likely isn’t near as much an issue as Bob makes it out to be simply because the routing paths will auto-adjust for congested devices where possible – it happens all the time. Now, having a good router and modem on your own home is of course important – and the routers they provide are not necessarily the best either.
Now, Windows also is a little behind the TCP stack too – as Linux has several different TCP algorithms to choose from, some of which are far better than the normal one (additive increase, multiplicative decrease). But that depends more on the server side (which most is much more likely Linux than Windows to start with).
That said, my home is connected through a single (Linux-based) computer to the modem serving the Internet connection.
I wouldn’t be surprised if the xbox is suffering from either:
1. Window Scaling issues.
2. MTU/MSS issues (broken pathmtu discovery?)
XP doesn’t ask for window scaling for connections it initiates but does seem to honour window scaling when the remote end requests it. I don’t know what the xbox does.
MTU/MSS issues are more prevalent when tunneling. Maybe test your path mtu using the following command from a linux host behind the router:
ping -s -M do
Start with 1500 for size and decrease the value until packets start going through. 1472 works for me.
The ping command should read:
ping -s [size] -M do [remoteIP]
The TCP algorithms are pretty much the entire point.
They were specifically engineered to decide how fast to send packets, to be “good neighbors” and give users a consistent amount of latency. Dropped packets let TCP know that there’s congestion, so it knows to slow things down. Buffers are designed to deal with a little congestion, so a little buffering is a good thing.
Huge buffers take a long time (say, an entire second) to fill up. So they undermine one of the basic foundational assumptions that TCP is based on.
In the Net Neutrality debates, ISPs have been talking about how they have to throttle back the evil bandwidth-hogging file sharers, so they don’t destroy the user experience for everyone else.
Gettys’ claim is that the underlying problem there is really bufferbloat, and that he’s one of the first to get even close to the right ballpark to diagnose it. And it’s just going to keep getting worse, as more and more people expect to do things like stream video.
He was only able to do this because he has the knowledge, the connections, and the battle scars to see and understand the big picture. If he’s saying it’s a big deal, then it almost definitely is.
Any idea how this affects the AppleTV device?
From what I hear from readers the case is even worse for Apple TV because they apparently have some wacko DNS problem related to abandoning Akamai. They’ll solve that soon, I’m sure, but yes, bufferbloat is as bad for AppleTV as it is for any other device that uses big buffers.
Interesting….
Personally, I don’t see it causing issues though. Streams are not buffered, unless locally or by the ISP to reduce bandwidth costs. The stream will run at the max speed it can into the buffer. As long as the backbones continue to expand to fill the demand, and ISP’s continue to tune their networks to cope with the expanding traffic, then it will be fine.
I’ll make a different prediction though – internet connection prices will have to start going up soon. The current downward trend just doesn’t scale!
You can’t have more users downloading more data each month, and not increase the size of the network infrastructure, and that costs. So sooner or later ISP’s are going to have to increase costs to stay in business….
It’s buffering at a different level.
I think the best analogy I’ve seen so far is trying to drive a car with nothing but an emergency brake, that doesn’t engage until a second after you activate it.
Now put a bunch of those cars on a highway together.
It is ALREADY causing issues as Gettys so clearly documents. It may be causing no terrible issues for you based on your net behavior, but that says nothing for the rest of us. And it’s not just video streaming but also BitTorrent, which is used for lots more than just video (and not at all for streaming).
Netgear DGND3700 – a built-in DSL modem, supports concurrent dual-band (2.4GHz and 5GHz) and Gigabit ethernet
Why would I want to give up my 36 mbps cable service for slower DSL?
Bob, you’re totally wrong about TCP requiring retransmission of packets for flow control. That would totally kill throughput.
TCP uses a “window” where the sender can transmit up to a predetermined number of bytes before it must receive an ack. The ack tells the sender how many bytes have been received. The sender then slides the window to allow more bytes to be transmitted. Ideally acks will flow back at a rate which allows the sender to transmit continuously.
TCP flow control requires dropped packets. Dropped packets are dropped — unrecoverable — and must be retransmitted.
“Few homes today have a modem connected directly to a PC.”
I live on a planet where the vast majority of homes have a modem connected directly to a PC, or no PC at all.
Why would someone have a modem without a PC?
Really? I think the suggestion is that most homes have some sort of router between the modem and the PC. Dial-up modems were connected directly to PCs. Cable and DSL lines are often shared through a router of some sort.
Sounds like someone wants to help coin the next big buzzword.
While my technical expertise almost instantly discredits this concept as a real problem, two things you mentioned really made me doubt your credibility in general:
1. It’s User Datagram Protocol, not Universal
2. You unplug your Rokubox, plug in your Xbox 360, proceed to experience poor streaming video performance and then suddenly declare “Bufferbloat!”, as if there are not about a million other (reasonable) factors that could have caused the problem.
Waste of time.
Daniel, you don’t know what you’re talking about. Your “technical expertise” is clearly lacking. Read some of these posts by Jim Gettys (who created X and HTTP/1.1) and maybe you will learn something. Don’t be one of those ignorant naysayers who only makes the problem harder to solve.
http://gettys.wordpress.com/2010/12/06/whose-house-is-of-glasse-must-not-throw-stones-at-another/
It is evident that most of your readers don’t understand latency, and your post here is a little off….
UDP packets are used for many things besides the types of streaming audio you mention. In particular, UDP is used for DNS, DHCP, and gaming applications. We are at a point now – which I wasn’t aware of until Jim Gettys pointed it out, except from the pain I was feeling in using the modern internet – where TCP traffic is drowning out these other critical packets.
On my own network, before I took steps to mitigate the problems that he outlines, DNS packets were taking 500ms to get to the internet while I was doing a big download or upload. This meant that the typical web page would load much slower than it should, because a typical webpage downloads from dozens of different sites, and each of those lookups was being delayed by half a second waiting for DNS resolution – drowned out and delayed by the TCP packets emitted by one device.
Similarly, in my case, on my wireless network, DHCP was failing to work reliably – there was so much unbalanced TCP/ip traffic in just a single stream that other machines couldn’t even get on the net!
So multi-user use of the home network is impaired by bufferbloat severely, even for basic web, email, and chat, where it is important that the latency for critical non-TCP packets be low.
The reduced sharing because of bufferbloat is even worse on corporate networks, and made worse still by unclassifiable traffic such as Skype.
Lastly, I note you can mitigate (not solve) the problem by buying a cheap, old router that is well supported by dd-wrt and reducing its TX DMA buffers to the bare minimum, as well as the txqueuelen.
This lets higher-priority UDP packets get through the noise of the saturated TCP connection on a more reliable basis, but does not solve everything.
Rate limiting your incoming and outgoing traffic via a traffic shaper is also helpful (the wondershaper is not good enough, sadly): due to the asymmetric nature of most internet connections today, uploads interfere even worse with downloads, and that requires special treatment.
I am glad you are raising awareness of this problem to the general public. It’s serious, and has to be fixed at multiple levels, not just the home gateway, if you want to stream radio, surf the web and/or watch a video at the same time, or have more than one member of your household or business.
I would love to see the industry’s focus on “download speed” be replaced by one on “low latency” – for if that happens – the promise of the internet to revolutionize telephony and other interactive applications will be fulfilled.
I’m not holding my breath.
Thanks for this clarification. I, too, am doubtful about any quick fixes, but little to nothing will happen if we don’t publicize the problem.
One thought that comes to me is I have two layers of backup Internet service. My primary connection is a cable modem but I also have DSL and even 3G wireless. I wonder if one of those could be used for out-of-band UDP signaling?
What you are calling “bufferbloat” falls within the domain of the IETF Working Group (now inactive) known as PILC (Performance Implications of Link Characteristics).
It was noticed early on that TCP over satellite links, for instance, with its extremely high latencies, would break TCP’s sliding window scheme (or rather reduce it to a crawl).
Over ADSL or cable modem links, the problem was slightly different: the asymmetrical link was frequently unable to return the TCP ACKs fast enough for downloads, because TCP’s ACK scheme is relatively bulky.
Now, of course, multiple layers of router, introduces latencies in a different way, but the principle is the same.
You can read about the final report of this group’s work at the following link:
https://www.ietf.org/wg/concluded/pilc.html
Solutions to this included things like
Faking ACKs — for a satellite link they would sometimes fake ACKs to the sender, so that it kept sending.
Using a protocol like Z-modem which keeps sending until there’s a NAK, without waiting for an ACK (over UDP, not TCP). If there’s an error, Z-modem backs up to the error point and tries again.
In general, avoiding TCP in favour of UDP. Unfortunately, the web is a TCP-based system and so that won’t work for most of the internet.
But of course you made predictions last year: https://www.cringely.com/2010/01/mobile-2010-predictions-apple-google-rim-oh-my/
Now explain yourself!
Bob-
Tell Mrs. Cringely I promise not to leak this year’s holiday greeting card. Send it along. Last year’s was too cute!
Thanks for the great columns. I don’t understand two thirds of what you write but I enjoy every word.
[…] Cringelygets many of the details wrong. To name a few: he posits that modems and routers pre-fetch and buffer data in case it’s needed later. Those simple devices—including the big routers in the core of the Internet—simply aren’t smart enough to do any of that. They just buffer data that flows through them for a fraction of a second to reduce the burstiness of network traffic and then immediately forget about it. Having more devices, each with their own buffers, doesn’t make the problem worse: there will be one network link that’s the bottleneck and fills up, and packets will be buffered there. The other links will run below capacity so the packets drain from those buffers faster than they arrive. […]
>: he posits that modems and routers pre-fetch and buffer data in case it’s
> needed later. Those simple devices—including the big routers in the core
> of the Internet—simply aren’t smart enough to do any of that. They just
> buffer data that flows through them for a fraction of a second to reduce
> the burstiness of network traffic and then immediately forget about ….
In so far as the writer is saying routing occurs at layer 3 WITHOUT participation in TCP, he’s correct. But the purpose is not to reduce “burstiness” but rather to manage congestion. A good core router will switch traffic onto an uncongested link with microseconds of added latency. Congest the same link, and the router will queue it for transmission in a … buffer. If the network is uncongested, there’s virtually no buffering.
My primary goal here is to get people talking, NOT claiming to be the smartest guy in the room. I get details wrong from time to time and am happy to be corrected. What I take pride in is the quality of my readers and that I can show them from time to time issues they hadn’t thought about but perhaps should.
My cable provider uses a cable modem/wifi router combo device (made by SMC), but it gives me the willies from a security perspective: the box supports multiple login ID’s.
While I can manage the password for the “customer” login, in theory there are other login ID’s that are not documented (at least not for the customer). The risk was laid bare when Google quickly found me a cabelco “admin” login ID+password that apparently works on ALL such cableco devices (it sure worked on mine).
The admin login even has, unsurprisingly, more features and greater control over how the box behaves. I see how this is attractive to the cableco, but Yikes! what a security risk. My network has a backdoor!
I’d be happy to provide the evidence, as I think it constitutes negligence on the part of the cableco. Meantime, I’m keeping another device between me and them.
I know what you mean. Cable companies especially tend to think that because this is residential service it doesn’t really matter that much. I sat on a plane once next to the head of the National Reconnaissance Office and he was shocked to learn about his cable modem (we shared an ISP).
According to “Nanotechnology for Dummies” crystal routers will soon replace the chip based routers of today that route traffic over the Internet and the speed across fiber will increase 10 fold.
Personally my DSL is 10 times faster than it used to be, my Roku box does periodically stop to buffer and reading a book instead (even on a Kindle) solves this problem quite nicely
For that matter my TV shows stream FAR faster than I can write them…
Not sure that I agree with this analysis. Packet networks are statistical. If a sender is not throttled by flow control, it will send faster. However, as a consequence the transmission will not take as long. The net amount of data transferred in a given time will not change. So the total load on the network is not changed.
Say one had an infinite reception buffer. The sender would send at full rate, but only till the file (say, a video) was finished. Then the receiver would presumably not want to receive any more for a while as they watched the video.
In fact the re-transmission of dropped packets would cause more congestion as it adds load to the network over and above what is actually needed to successfully transmit the data.
It is the total load on the network that matters, not the speed of an individual session.
Oldhand
Is “Bufferbloat” a serious problem?…
See Jim Gettys’s Ramblings (http://gettys.wordpress.com/2010/12/06/whose-house-is-of-glasse-must-not-throw-stones-at-another/), Tim Bray’s recent blog post on ‘Fat Pipes’ for more information (https://www.tbray.org/ongoing/When/201x/2011/01/09/Fat-Pi…
A recording of Jim Gettys’s latest bufferbloat talk and slides are up at:
http://mirrors.bufferbloat.net
Perhaps this will clarify the issues.
[…] posts provide a lot of insight into the world of technology and he’s not afraid to make very specific predictions of what he thinks is going to happen in the future. His latest project is a tv series about […]
[…] Cringely (especially #2, #9 and #10). […]
[…] eight months ago in my annual predictions column I made a big deal about Bufferbloat, which was the name Bell Labs researcher Jim Gettys had given […]
[…] wrote about how bufferbloat was going to be a big problem in 2011. Bufferbloat, you may recall, is the name for our propensity […]
[…] as you’ll recall from my 2011 predictions column, is the result of our misguided attempt to protect streaming applications (now 80 percent of […]