Almost eight months ago in my annual predictions column I made a big deal about Bufferbloat, which was the name Bell Labs researcher Jim Gettys had given to the insidious corruption of Internet service by too many intelligent network devices. Well I’ve been testing one of the first products designed to treat bufferbloat and am here to report that it might work. But like many other public health problems, if we don’t all pay attention and do the right thing, ultimately we’ll all be screwed.
At the risk of pissing-off the pickier network mavens who read this column, Bufferbloat is a conflict between the Internet’s Transmission Control Protocol (TCP) and various buffering schemes designed into network devices like routers and modems as well as into applications on PCs and mobile devices.
TCP was designed in the Bob Kahn and Vint Cerf era to support devices with very limited memory connected over links that maxed-out at around 50 kilobits-per-second. Network reliability was paramount, trumping speed, the idea being that if there was an ABORT LAUNCH signal sent by the White House it had darned well better be delivered. So flow control was very robust.
But then memory got cheaper, networks got faster, and we started sending new data types like Laverne & Shirley video streams that were intended to be delivered without interruption — something that had never been imagined for a best-effort network like the Internet. The method that eventually emerged for watching TV and other bandwidth-intensive Internet activities was the clever use of datagram protocols and pre-buffering. Connect to Hulu for an episode of Glee (or to Netflix to watch Cliff Robertson in 633 Squadron as I did last night) and you can watch that buffer fill, making us wait half a minute or so in exchange for an implied guarantee that there will be no further stopping and starting once the video starts to play.
Yet pre-buffering often isn’t enough and the show stops for more buffering or even a change of network connections or speeds — that’s Bufferbloat.
Bufferbloat becomes a problem when you have a buffer in your application (say a video player), a buffer in your PC’s network connection, another in the router, and another still in your cable or DSL modem. Add to this at least one and sometimes two levels of Network Address Translation (NAT) and things start to get ugly. All that filling and emptying of buffers defeats TCP’s own flow control algorithms, leading to the wholesale destruction of network connections that are so confused by cascading buffer effects they don’t even know they’ve been destroyed, and so take even longer to be reestablished — up to seven seconds in cases I’ve measured at home.
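For a back-of-the-envelope feel for the damage, the latency a standing queue adds is just the buffer’s size divided by the rate at which it drains. A quick sketch (the 256 KB buffer and 2 Mbps uplink figures here are illustrative, not measurements from my network):

```python
def queue_delay_ms(buffer_bytes: int, link_bits_per_sec: float) -> float:
    """Milliseconds for a full buffer to drain at the link rate."""
    return buffer_bytes * 8 / link_bits_per_sec * 1000

# A 256 KB modem buffer draining over a 2 Mbps uplink:
print(round(queue_delay_ms(256 * 1024, 2_000_000)))  # 1049 -- over a second
```

Every packet arriving behind that full buffer eats the whole drain time, which is how a "fast" connection ends up with second-long ping times.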
The solution is to keep buffers as small as possible and to be looking constantly for network performance issues. That’s what happens in Cerowrt, experimental Open Source firmware for certain Netgear N600 routers that I’ve been testing. The N600 is one of the few routers on the market with Open Source software, so it is an easy platform for experimentation. And having experimented now for several days I can report that it might even work.
I think my network is running better but there are no good benchmarks yet for this stuff. And to really show obvious effects we should all be running Cerowrt or its closed source counterparts.
One of the issues here is network speed. We need a new marketing term. Broadband connections that clock in excess of 20 megabits-per-second (mbps) yet carry with them latencies in excess of a second aren’t fast at all. In practice, latency can easily be reduced by turning on bandwidth shaping, but that comes at a cost in bandwidth. Your connection may be faster at 20 mbps but much snappier throttled back to 1-2 mbps. “Speed” has been stolen from us as a useful term.
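To see why the throttled connection can feel snappier, compare a crude page-fetch model on both setups (the 500 KB page, four round trips, and both latency figures are illustrative assumptions, not measurements):

```python
def page_load_ms(latency_ms: float, size_kb: float, rate_mbps: float,
                 round_trips: int = 4) -> float:
    """Rough page-fetch time: a few round trips plus raw transfer time."""
    transfer_ms = size_kb * 8 / (rate_mbps * 1000) * 1000
    return round_trips * latency_ms + transfer_ms

fat_pipe = page_load_ms(1000, 500, 20)  # 20 mbps, but 1 s of bufferbloat
slim_pipe = page_load_ms(50, 500, 2)    # shaped to 2 mbps, 50 ms latency
print(round(fat_pipe), round(slim_pipe))  # 4200 2200 -- the slow pipe wins
```

For interactive traffic the round trips dominate, so the bloated 20 mbps connection loses to the shaped 2 mbps one by a factor of two.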
In fact our fixation with Internet speed tests may well be hurting us all in this regard.
I know this problem is both real and within our capability to fix. Hopefully as modem and router manufacturers and network application vendors become more aware of these systemic issues they’ll do the right thing to minimize Bufferbloat in their new products. The cable industry has already updated its DOCSIS spec to allow a CMTS (Cable Modem Termination System of course) to centrally control buffering in downstream cable modems. Modems that support the change will probably be in stores later this year, though not broadly deployed for two years or more. This will reduce the bufferbloat problem (though not solve it; you need Active Quality Management for that).
In my example, this DOCSIS change (and a new modem for me) would take latency from 1.2 seconds down to around 100ms. One hundred milliseconds is still too much, but it’s a Hell of a lot better than where we are today.
And if we do nothing? My fear is that sections of the Internet will succumb to a sort of TCP standing wave, large files will become effectively untransportable and Cliff Robertson will never grace my Roku box again.
So, if I understand this, it’s hare and tortoise? More haste less speed etc. Slow and steady wins the race? I’ve always been suspicious of ISPs touting ever-faster connection speeds – especially as my “local loop” has it’s own limit anyway. Buffers are still “header-tanks” and need feeding.
I like the idea that improved firmware can help because I’ve always felt the supplied software in my Router is pretty poor.
I’m guessing movies might have less of a need for meticulous error-trapping in the data than The Big Red Button scenario. A few square black blocks are a small price to pay for reduced through-flow but still-watchable media. Perhaps software could vary the quality control depending on use?
I tried watching HD video streamed from the Internet, through my Xperia phone and into my TV. A first in all three respects and it worked well (I’m green, and these were the only cables I had!) All went well until I received a loud “Buzzurrp!” from somewhere and the playback stuttered and stuck.
Ah well, my local-loop isn’t up to streaming HD video quite yet, but updating my O/S through Software Update is quite snappy and certainly quicker than it was.
That’ll do until the playing field is level. 🙂
Nigel: Bob knows that the possessive case of ‘it’ is ‘its’. Why don’t you?
He wasn’t using possessive; he meant “it is” so he used it correctly.
“..especially as my “local loop” has it’s own limit anyway” = wrong!
(Couldn’t let you have the last word on this 🙂
Cliff Robertson in 633 Squadron rocks!
I watched it myself recently. I think the first time I saw it I was 8 years old. Cliff in his prime.
Those were the days: Cliff kills her brother by bombing the building where he’s being interrogated by sadistic Nazis yet somehow Cliff STILL gets the girl.
It’s a fantastic film with a superb score.
Do have a look at the Battle of Britain too. Slightly fictional, The Great Escape is worth a watch. And although complete fiction, who can’t like Where Eagles Dare. Oh, and The Eagle Has Landed!
Bufferbloat – I solve this by pausing a bit longer before playback. My broadband is really slow… Scotland need broadband “FREEDOM!”
“You can’t kill a squadron…”
I’d like to see all those old WWII movies with updated special effects a la the re-released TOS Star Trek. It would be great to see “Kelly’s Heroes” and “Battle of the Bulge” with realistic Tiger VI’s.
That’s one of the charms of those old films; the more realistic special effects! Where they had to use models, or in some cases the real thing. Yes, they’re a bit duff by today’s CGI, but a lot of the CGI effects are ruining today’s films. It’s not about plot or story, but which alien robot can smash the other louder.
It’s because these old films tell a story that we’re still talking about them today.
The old movies are better because we’ve quit watching the old movies that were pathetic. The pathetic new movies are still with us for a few more years.
Bob,
Your headline has a typo in it, I think, I assume you want Bufferbloat there, as well, and not **Bufferlbloat**.
Thanks as always for your great writing and ideas.
Neil M
Fixed, thanks.
And “pissed-off” should be “pissed off”.
Did Bob change something? I remember reading the post 8 months ago, and when I read it he called the phenomenon “flufferboat”. I remember specifically thinking that “flufferboat” was a very strange word to pick, and figuring Bob / his friend must have picked it on purpose with that kind of uniqueness and memorability in mind. But now I see his old post also says “bufferbloat”.
Neil, and your reply has a typo, too.
Fascinating!
Keep it up, Cringe!
These Mosquitos were great, weren’t they?
Bob,
Do DOCSIS v3.0 modems have this built in?
Minor correction: AQM stands for Active Queue Management, not Active Quality Management.
Well, I guess that we can hope.
You talk about the history of TCP/IP and yet we’re not using it as envisioned.
I spent the best part of a decade writing real-time UDP and TCP transport stacks for low-latency video and audio streaming. It’s my experience that NAT and the various filtering and firewall technologies have killed off many of the innovations in this area.
Many of the techniques required to skirt these things _without_ the end-user having some kind of technical understanding result in a less than perfect solution.
It’s almost impossible to get a product off the ground if it requires tinkering with the end-user’s infrastructure – in a case like bittorrent, the promise of endless “free” content can motivate that.
Usual “foolproof” solutions revolve around using something dumb to transport video like HTTP.
Less efficient -> more overhead
More overhead -> more data
More data -> more buffering, more latency, more congestion etc etc.
Instead of fixing the right problems, we allowed the band-aid solution to propagate and this is the result.
The testing side of things is also complicated – it’s difficult to get good test data across a public connection when the internet traffic can quite literally change with the weather (and time of day, holidays, etc etc).
Fancy statistical control algorithms often fall in a heap when faced with the reality of badly implemented network stacks, broken application code, unsuitable codecs etc.
Having said all of that, the N600 and devices like it that support IPv6 may help.
I don’t think IPv6 alone is a solution (…) but the reworking of the shared infrastructure required to transition to it may eliminate a lot of the accumulated crap like cheap ass NAT boxes.
“It’s my experience that NAT and the various filtering and firewall technologies have killed off many of the innovations in this area.”
Halleluya! I’m not the only one seeing this! 🙂
Bob,
You mention that the Netgear router supports “open-source” and is running Cerowrt, but when I went to look at the website, they say to stay away from the Netgear routers? https://www.bufferbloat.net/projects/uberwrt/wiki/Hardware_evaluation
Confused ???
I would assume the recommendation was based on the stock firmware that is running on that router, and would not apply to something designed for better shaping.
In going with the N600 router as our test platform we had evaluated more routers than I care to remember, including netgear’s entire current line of products.
The N600 (wndr3700v2) was the best of the bunch by a large margin.
In fact, we have been working closely with the authors of the wireless and wired drivers for that device to make them less bloated, for several months.
I have clarified the wiki page you mention to describe our evaluation process better. I didn’t pull any punches in describing some of the products we dealt with, from all vendors, and could perhaps have been more tactful…
I note that the default firmware in the wndr3700v2 was pretty good too, aside from the fact it ran a truly ancient version of Linux (2.6.16, as best as I recall). I would say cerowrt is better in most respects now, except that we’re currently coping with an ethernet bug that only showed up 2 weeks ago while we were debloating that driver, and it’s proving very difficult to find and fix.
So while we DO want people using cerowrt ASAP, especially for network research on all sorts of topics besides bufferbloat, until this bug is closed:
https://www.bufferbloat.net/issues/210
Cerowrt is only for the few, the proud, the brave, equipped with a jtag debugger.
One of the things that I find frustrating is that often you can stream a video to your PC as many times as you wish (and consuming bandwidth each time) but can’t download it once and play it whenever you wish (consuming no bandwidth) !
Why would the ISPs let you do that, when they have the power to prevent you??? More money for them. Messes up the net for you and every other user, but the ISPs couldn’t care less.
Didn’t your mother ever say: “what would the world be like if everybody acted as stupid and self-absorbed as you?” Even once?? Well, all of these problems are the result of actions which, seen through the lens of a 1776 English dreamer (look it up), are perfect. They aren’t. There are externalities to actions. The Tea Baggers are just the latest group to claim interest in The Common Man, all the while playing pawn to corporate intentions.
Firefox has lots of addons to do that. I use DownloadHelper, works great! Check it out.
I still don’t understand why people are accepting poor quality movies and music from the internet.
I can sort of get the idea of trading off music quality for quantity on a portable device but I don’t get it for the home. But when it comes to movies I really don’t get it. People have spent thousands of dollars on a home entertainment system, so why would you download a movie that is less than blu-ray quality? I did it once on Fios, I paid the extra dollar for the HD version of the film, and it was a letdown; it’s nowhere near as good as putting in a disc. Obviously most people settle for ‘it’s good enough’ but then why spend the extra money on the big TV?
I realize most people are watching movies this way, but it’s not for me. When it comes to movies and music I’m keeping my discs and I don’t have to worry about the bandwidth issues.
I have to agree, I didn’t spend a great deal of money on my HDTV, but I’ve tried iTunes HD, Vudu, Cinema Now, and Netflix.
None of them compare to Blu Ray.
Most don’t even compare to DVD.
I weep the day a good movie is released “Streaming Only” and there is no disk to be bought.
We won the battle against Divx (remember that?) But I fear we’ve lost the war…
in my case: because a) I don’t have to waste time sitting through stupid commercials that we’ve already seen a hundred times for products I’m not interested in (an hour long show on cable is 40-45 min downloaded, that’s 25-30% overhead!), b) I don’t have to subscribe to and pay for and flick through channels I don’t want, c) I can watch when I want, not only when it’s broadcast (without buying or renting a tv-recording box).
I don’t own expensive high quality players like bluray and am not likely to anytime soon. We did get a 42″ plasma screen this year, but only ’cause it was offered as a deal with a bed a family member was buying anyway and allowed us to go direct from laptop to TV instead of futzing with computer>usb stick>dvd player>tv shenanigans.
For shows that we’ll watch more than once or twice or are simply just too darn beautiful not to watch in full detailed glory we’ll shell out for a dvd. Other than that the so-so quality of internet vids is just fine with me. It’s the story and quality of performance that matters to me, and that is usually not impacted (much) by a few blurry pixels here and there.
…sorry, belatedly noticed you wrote “movies” and not “tv”.
Our cable company/ISP/telephone provider has hit on a solution. They just compress the fool out of everything. We now enjoy “crystal clear HD” TV with so much blocking in the images that it looks like a 20-year old VHS rental. And who cares if the telephone S/N ratio is crackling and wheezing around 30dB, it’s still better than a cell phone, right? Download speeds advertised as “blazing fast” are in fact fast — at 3 AM. When the Netflix crowd kicks in at 5 PM, the speed is throttled significantly.
Bob!
Ahhh. 633! The lovely sight (if only a frame grab) of a three ship flight of the all wood DH.98. F Mk II’s I believe? Genuflected to one on static display at Duxford just a few years back. Glyn Powell of NZ is restoring one to flight status as we speak….!
I have to confess that I didn’t understand the first article posted here on bufferbloat, and this latest article hasn’t helped make it any clearer.
From the point of view of the network, why should it care whether the customer demand for data is buffered or not buffered? Either way, the total demand on the network is the same. What’s different is the experience on the user side: you either absorb all the network delays up front before the movie starts (buffered), or you play the movie right away and tolerate skips (unbuffered).
As far as buffering that occurs inside the network, well, once you get to the switch fabric itself, you have to address the problem of an oversubscribed switch fabric port, and the only way to do that that I’ve ever heard of (at least from a hardware perspective) is to put a queue (a buffer) usually at the output side of the switch fabric port. How does one avoid this? Bigger switch fabrics? Am I missing something?
Rupe
@Rupe, the fix of course is dedicated lines, which will cost more than your house payment.
we have a bit of a collision in the marketplace… I happen to be a fanboi of network neutrality, with the caveat that if MonsterSuperMegaCo wants a 10 gig pipe into their web center, not a problem, we bill you monthly and you pay net 10. that 10 gig, however, would be required to fight it out at The Connected Internet level with everybody else. no preferred routing, no special backbone.
this rather collides with the lack of prioritization in the UBR/NBR world that is the Internet backbone. so data streams get interleaved, and if you are streaming HD1080 realtime with limited buffering, and 500 little old ladies click on the email attachment of dancing kittens at the same time, you get big black blockiness as the little old ladies get interleaved in the traffic through your connection.
but to veer the network the other way, protecting the streams, the little old ladies with 1.5 Mb will never see another attachment from the grandkids again. unless they log on at 4:15 am.
there is a technology to resolve both issues, commonly known as Redbox.
the alternative right now to make everybody happy is an infinite bandwidth backbone and 10 gig to the subscriber. except the telcos which provide that bandwidth. they would all be broke and foundering in the fiber monster trying to add new 3-rack switches.
some other alternatives might be DVRs with no silly nanny-flags in all PCs, stream-within-10 (minutes) feeds you take on a multipoint feed, and other “hopeless lame old-school” methods of trying to organize a mob’s worth of unpredictable immediate gratification.
I don’t see any votes for any of it, hmmmmmmm.,…
Rupe has an excellent question which I’m not sure has been answered by talking about network details, little old ladies, or the fact that we don’t all have 10 Gb/sec to our homes. He points out that the end point buffer just needs to be large enough to hold the entire file. An entire Blu-ray disk will fit on a thumb drive and can be downloaded in 10 minutes at 10 Mb/sec. The real problem is that we insist on sending out 1000 streams for 1000 customers when one will do. The solution may be to have all customers decide in advance what movies or TV shows they want and subscribe to them via a set top box or built in TV hard drive. The source files would be streamed once and all the bits would eventually arrive at the subscribers’ boxes even if every show has to be watched a day later than the transmission. In fact that’s how I watch prime time TV now using a VCR. My buffer consists of a few 2-hour tapes that allow me to watch 2 hours of prime time TV every day of the week.
Did I miss something with your math here ?
10 minutes at 10 Mb/sec == ~750 MB, assuming of course that you’re using 8 bits to the byte… my background is SAN based stuff so 10 bits to the byte is commonplace, but I digress.
A Blu Ray is something in the region of 25 GB, with 50 GB available on dual layer. I haven’t (yet) seen any thumb drives that can stomach that much data, and if we’re compressing that lovely BR feature down to 750 MB I daresay we haven’t tackled any of the issues with video compression that have been raised already.
An uncompressed DVD movie is usually around 5 GB and even that would need an appalling level of compression to knock it down to that limit… But I’m no expert here… care to clarify ?
You are absolutely right. I confused the blu-ray with the standard DVD which introduced a factor of 5 error and on top of that I confused bits with bytes which contributed another factor of 8 error so I must have been off by a factor of 40. So instead of 10 mins for blu-ray it should be 80 mins per std DVD. So with a little compression or a little more speed than 10 mbps it takes about an hour to download a std DVD. That’s still about twice as fast as I am doing currently with an analog VCR which takes 2 hours for a 2-hour movie. However, my point is still valid that transmitting a program once and downloading it multiple times by all subscribers for future viewing is way more efficient and enjoyable than putting up with live streams limited in quality by compression, bufferbloat, and speed when using the internet.
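The corrected arithmetic checks out; here it is as a sketch (decimal gigabytes assumed, protocol overhead ignored):

```python
def transfer_minutes(size_gb: float, rate_mbps: float) -> float:
    """Minutes to move size_gb gigabytes over a rate_mbps link."""
    return size_gb * 8 * 1000 / rate_mbps / 60

print(round(transfer_minutes(5, 10)))   # 67 -- a 5 GB DVD at 10 Mbps
print(round(transfer_minutes(25, 10)))  # 333 -- a 25 GB Blu-ray at 10 Mbps
```

So a standard DVD really does come in around the hour mark at 10 Mbps, while a full Blu-ray takes the better part of six hours.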
@swschrad: Dude. Not only did you not explain anything, but you totally missed the point. More bandwidth is not the answer. Cringely and Jim Gettys are trying to make the point that “It’s the latency, stupid, not the bandwidth”.
I too did not get it at first 8 months ago. But I read Jim’s and other stuff online, and noodled it through and came up with an understanding. I’m not a network expert, but I have a thought experiment to illustrate the point.
Just to make it clear that it’s not about bandwidth, let’s say you have gigabit fiber into your house and your network is also all using gigabit ethernet. Let’s also say that you have a 20 ms latency to cringely.com.
Also assume that the designers of your router decided to put a 10 megabit buffer in it. This seems reasonable — at a gigabit that buffer will take 10 ms to empty. It increases your latency by 50% but you won’t notice it — it’s only a hundredth of a second. It takes this long for light to travel 2000 miles.
Now let’s also say that someone in your house, call her Alice, is watching 633 Squadron at DVD quality, which is about a megabyte per second or 8 megabits per second. Now this amount of bandwidth out of a gigabit (1000 megabits per second) is downright luxurious. It’s like being the only car on an 8-lane freeway. Alice’s viewing experience is just fine — no hangs, blockiness, or dropouts. The router’s buffer is full, but it does not matter to Alice because her movie player is getting all the data it needs when it needs it. Who cares if data is entering the buffer much earlier than it is being pulled out.
Now, let’s say someone else, call him Bob (not Cringely, but the second person mentioned in these things is always Bob) wants to view this web page on bufferbloat using the same network that Alice is on. It’s just a bunch of text and a tiny picture of some airplanes, and remember we have 20 ms latency from cringely.com to the router, so it oughta load right quick. But because the router’s buffer is full, and it is being emptied at 8 megabits per second, each packet of the page’s data takes 1250 ms to get through it. That’s a second and a quarter of latency. Bob is going to notice that, especially if he is used to things being much faster.
You may have noticed that I made the assumption that the movie player gets data at a constant 8 megabits per second. This is unreasonable — it will probably be “bursty”, taking data at the full gigabit per second for a short time, and then getting no data for a relatively long while after that. This will allow the web page to get through during the gaps at the full rate. Even if the buffer is full when Bob clicks on the link, the buffer empties at the full gigabit per second and so the 10 ms applies.
So this situation is simplified and may never happen just this way, but it illustrates the point that with a bottleneck and a big enough buffer you’ll get bufferbloat. And you always will have a bottleneck. There will never be a time when everyone has as much bandwidth as the backbone. And there are buffers all over the internet. Bufferbloat lives.
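Walt’s two numbers drop out of the same one-line calculation; the only thing that changes is how fast the buffer drains:

```python
BUFFER_BITS = 10_000_000      # the router's 10 megabit buffer
GIGABIT = 1_000_000_000       # line rate
STREAM_BPS = 8_000_000        # Alice's 8 Mbps video stream

drain_fast_ms = BUFFER_BITS / GIGABIT * 1000     # buffer drains at line rate
drain_slow_ms = BUFFER_BITS / STREAM_BPS * 1000  # buffer drains at stream rate

print(drain_fast_ms)  # 10.0 -- harmless
print(drain_slow_ms)  # 1250.0 -- Bob's second and a quarter of latency
```

Same buffer, same network; only the drain rate at the bottleneck differs, and that alone is the difference between an unnoticeable 10 ms and a painful 1250 ms.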
Thanks. I’ll have to think about that with a pencil and paper at hand. But now, can you explain Bob’s (Cringely) final comment: “Modems that support the change will probably be in stores later this year, though not broadly deployed for two years or more. This will reduce the bufferbloat problem (though not solve it; you need Active Quality Management for that).” So what is AQM and why does it SOLVE the problem?
Hi Ronc,
A couple good links regarding Random Early Discard & Active Queue Management:
http://codingrelic.geekhold.com/2011/03/random-early-mea-culpa.html
http://gettys.wordpress.com/2010/12/17/red-in-a-different-light/
@ Walt S
Thanks for the very cogent explanation. The key point in your illustration is that the buffer is shared between users having very disparate bandwidth and latency requirements.
At an endpoint like the sort of residential subscriber you describe, the problem seems simple enough to fix– every user gets their own buffer. Is this not the case already, or does one very often find a single buffer at the Internet access point of a typical consumer? Certainly the endpoint interface devices (i.e. PCs, Xboxs, or whatever it may be) already have their own buffers.
As far as the network core is concerned, you can either design it so that it can pass peak traffic without buffering (very expensive since data traffic tends to have a very high peak-to-average ratio in terms of bit rate), discard packets (which would require retransmission at the source and probably entail much greater delays), or you put buffers in the network which only get used when a particular hop or switch point is oversubscribed, which is pretty much the way it’s done today, and seems like a reasonable compromise. Actually, from what I know of core switch/router designs, you will typically find more than one buffer in parallel at each switch fabric port, and a control plane processor deciding which buffer to send a packet to based on considerations of traffic type, network congestion, and so on– i.e. “traffic management”.
So I guess though I’ll concede that the phenomenon exists, it’s nothing that core switch designers aren’t already accounting for as best they can within all of the applicable engineering limitations.
Rupe
I watch MLB on both the appleTv and roku, appleTv has far less buffer problems.
The solution to bufferbloat is as simple as the problem:
head drop (instead of tail drop) when the buffer fills up.
This is the only solution that allows buffering to go on, but doesn’t break TCP.
The reason this solution doesn’t get implemented is because it *feels* unfair to drop the packets that have been waiting the longest in line.
That one little change will fix it, though.
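A toy sketch of the head-drop idea (illustrative only, not any shipping router’s implementation):

```python
from collections import deque

class HeadDropQueue:
    """Bounded FIFO that evicts the OLDEST packet when full. The stale
    packet's loss is noticed quickly by the sender's TCP, which backs off,
    instead of fresh packets dying at the tail behind a long stale line."""

    def __init__(self, capacity: int):
        self.q = deque()
        self.capacity = capacity
        self.dropped = []

    def enqueue(self, pkt):
        if len(self.q) >= self.capacity:
            self.dropped.append(self.q.popleft())  # head drop, not tail drop
        self.q.append(pkt)

q = HeadDropQueue(capacity=3)
for pkt in range(5):
    q.enqueue(pkt)
print(list(q.q), q.dropped)  # [2, 3, 4] [0, 1]
```

The queue never grows past its bound, and the packets that survive are the freshest ones, which is exactly what latency-sensitive traffic wants.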
The original digital ATSC standard, and its slightly more bandwidth-efficient QAM on cable, didn’t have any of these problems. It seems to me the problems are all caused by attempting to transmit via the Internet. It’s like using the wrong technology to solve a non-existing problem and creating new problems in the process. (My name is not Dvorak, but you can call me a cranky geek if you like.) Just because everyone wants to be on TV does not mean it’s in the public interest to provide each and every citizen with the wherewithal to do that for little cost and make the rest of us suffer as a result.
What about IPv6 multicast.
It’s built in, and could reduce the number of ‘streams’ dramatically if rolled out correctly.
Give that man a cigar ! 😉
For broadcast events (ie: anything that replaces a normal tv channel), multicast and client side buffering (eg: PVR) radically reduces bandwidth consumption.
It’s sometimes implemented where the content provider and the network provider have a sweetheart deal, but it’s one of those things that’s incredibly difficult _politically_ and _economically_ to achieve while being trivial _technically_.
(even my cheap 10yo NAT box supports IGMP)
You could multicast lots of stuff like Twitter – that’s a broadcast type service also.
Bob —>
https://www.youtube.com/watch?v=4OZq-tlJTrU
Am I the only person who doesn’t get this?
One thing I’d like to point out is that the buffer is never used on a network with no congestion. Unless that device is misconfigured.
Your buffer getting filled is a sign that your downlink or uplink needs to be upgraded with additional bandwidth.
This is done easily enough with non-flawed technologies such as T-1/T-3 or DSL.
The problem is cable internet – it’s an old technology designed in the early 90’s when shared bandwidth networking was considered normal. Congestion is a fact of life on any normally-operating cable internet network because that’s how it was designed.
In other words: you get what you pay for. Stop paying for overpriced technically inferior cable internet and go with FIOS, DSL, T-1/T-3, etc.
And when picking your ISP go with one that regularly and proactively spends money on upgrading their uplinks/downlinks instead of traffic-shaping the hell out of it with a couple commands on their routers.
For most of us cable is the only practical high-speed option. FIOS is non-existent for most of the US population and Verizon has no plans to expand it. For DSL you have to live in the CO’s back yard…since I’m a few miles away, I can get only 0.15 mbps via DSL. I went to T1 and thanks to the use of a repeater system I can get 1.5 mbps for over $300/mo!!! They were kind enough to offer me 3 mbps for $600/mo!!!!!! And that’s with a 3rd party provider offering discounted prices compared to the local telco. So let’s not pretend there are options other than moving your home.
I read Cringely’s two articles on bufferbloat, I read some of Gettys’ articles, and I read Wikipedia. What’s missing is a REALLY SIMPLE but CONCRETE example of how bufferbloat works.
It’s really frustrating that I can’t find a simplest-case example anywhere. I simply cannot understand how the existence or non-existence of a buffer makes any difference to the latency without having a line-by-line exhaustive example of it.
Let me start off what I’m looking for. Somebody, PLEASE, fill in the blanks below to turn this into the simplest-case example of how a buffer can be bad. (I repeat: I don’t care if the example is strained, I don’t care if it doesn’t show all the ways that bufferbloat can happen, I don’t care if it doesn’t show all the symptoms, I don’t care that small buffers are usually good. I just want to see ANY example of how a buffer can be bad for latency compared to unbuffered transmission. OK? 🙂
So here’s my beginning at a simplest-case scenario:
1. Let’s say server A wants to transmit 3 packets to client B using TCP/IP. We will number these packets 1, 2, and 3, in order of transmission.
2. Without any buffering, we have established that a packet travels from server A to client B in 20 ms.
3. Now, let’s put a buffer somewhere between A and B that can hold at most 2 packets.
4. Server A transmits packet #1 to B, …
5. … [please fill in the blanks]
6. …
7. …
8. …
9. …
10. As you can see in the points above, the packet took 1000 ms to get from A to B. Therefore, the existence of the buffer made the latency worse.
@Patricia
If you think of a buffer as being analogous to a queue – let’s say of people waiting for some kind of service or to make some kind of purchase – one example goes like this:
You are at a bank with two service windows that offer two different service types. One window has a service that takes 10 minutes, the other takes 30 seconds. However, no matter which window you want to go to, you must stand in the same single line. If your business only takes 30 seconds to transact, there might be a dozen people in front of you who all have business at the 10-minute window. You will have to wait 120 minutes even though your 30-second service window will be free and awaiting the next customer.
The point being made is not that buffering is bad per se (without them, customers don’t even get to wait for an open window– they go home if the window isn’t open, which is the equivalent of a dropped packet). The problem is that the single buffer/queue is servicing vastly different customer needs.
Rupe
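Rupe’s numbers in a few lines of code (the service times are his; the per-window comparison is the fair-queueing fix his point implies):

```python
# One shared single-file line: twelve 10-minute customers ahead of you.
customers_ahead = [10] * 12          # service times in minutes
shared_wait = sum(customers_ahead)   # you wait for everyone ahead of you

# A line per window: your 30-second errand goes straight to the fast window.
separate_wait = 0

print(shared_wait, separate_wait)  # 120 0
```

One shared queue for wildly different service needs is the whole pathology; splitting the queue by traffic class makes it vanish.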
Another way of looking at this is to look at road traffic.
How many times have you come to a choke point in the road during peak times?
We have a bridge near where I live. Traffic entering the bridge goes round a tight corner and traffic slows and tends to bunch here. OK, that curve is the router’s buffer.
From the outside, it seems insane that there should be a queue at all. You look at the bridge and there is a small amount of free-flowing traffic on it. The road leading up to the bridge is free flowing also and the queue for the bridge is not getting any longer. What’s going on here? Well, it’s called a stationary wave caused by momentary extreme congestion.
Now if the car drivers just drove more sensibly and paced their approach to the bridge, the slow queue would never develop in the first place. That pacing is the job of proper Internet protocol pacing controls.
It’s the best way that I can visualise the problem.
I found this very interesting, and informative.
Do any of the other open source firmwares (ddwrt, Tomato, OpenWRT etc) implement this feature as well?