How Network Non-Neutrality Affects Real Businesses

3/24/08

Network neutrality leaped back into the headlines last month, when FCC commissioners held a public hearing at Harvard University to examine whether the commission should institute rules to regulate the way Internet service providers (ISPs) manage traffic on their networks. The panel heard from executives representing the two largest ISPs in the Northeast, Comcast and Verizon, along with Internet pundits, politicians and academics.

The hearing coincided with an increasing public awareness that Comcast and dozens of other ISPs (most of them cable TV companies) commonly use methods to throttle some forms of traffic on their networks. They do this to prevent their networks from becoming congested. These methods typically target peer-to-peer traffic from BitTorrent, a popular file-sharing protocol widely used to distribute music and video, which the ISPs say generates a third or more of their traffic.

Accordingly, BitTorrent has become the debate’s poster child, pushing much of the net neutrality debate into endless arguments over free speech, copyright law and what—if anything—should constitute “legal use” of the Net.

But there’s another side to this debate, one that gets far too little attention. In their attempt to limit BitTorrent and other peer-to-peer file sharing traffic, some ISPs have unwittingly caused collateral damage to other, unrelated businesses and their users. For example, some Web conferencing providers have seen their services slow to a crawl in some regions of the world because of poorly executed traffic management policies. Since ISPs often deny they use such practices, it can be exceedingly difficult to identify the nature of the problem in an attempt to restore normal service.

My company, Glance Networks, has firsthand experience. Glance provides a simple desktop screen sharing service that thousands of businesses use to show online presentations and web demos to people and businesses worldwide. When a Glance customer hosts a session, bursts of high-speed data are sent each time the person’s screen content changes. The Glance service forwards these data streams to all guests in the session, so they can see what the host sees. The streams need to flow quickly, so everyone’s view stays in sync.
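The fan-out described above can be pictured with a minimal sketch: a session relay that forwards each burst of changed screen data to every connected guest. The class and method names here are hypothetical illustrations, not Glance’s actual implementation.

```python
# Minimal sketch of a screen-sharing relay. A host's screen-update
# bursts are fanned out to every guest in the session, so all views
# stay in sync. Names are illustrative, not Glance's real design.

class Session:
    def __init__(self):
        self.guests = []  # one pending-update queue per guest

    def join(self):
        """A guest joins the session and receives its own update queue."""
        queue = []
        self.guests.append(queue)
        return queue

    def host_update(self, burst: bytes):
        """Forward one burst of changed screen data to every guest."""
        for queue in self.guests:
            queue.append(burst)

session = Session()
alice = session.join()
bob = session.join()
session.host_update(b"region(0,0,640,480)")  # host's screen changed
```

Because every guest must receive every burst promptly, any middlebox that delays or drops these streams (as the traffic shapers described below did) degrades the whole session, not just one connection.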

One day a few years ago, our support line got a spate of calls from customers complaining that our service had suddenly slowed to a crawl. We soon realized the problem was localized to Canada, where nearly everyone gets their Internet service through one of just two ISPs. Sure enough, posts on blogs indicated that both of these ISPs had secretly deployed “traffic shaping” methods to beat back the flow of BitTorrent traffic. But the criteria their methods used to identify the streams were particularly blunt instruments that not only slowed BitTorrent but also many other high-speed data streams sent by their customers’ computers.

This experience illustrates why additional rules need to be imposed on ISPs. While we were working the problem, customers were understandably stuck wondering who was telling them the truth. Their ISP was saying “all is well” and that “nothing has changed”, both of which turned out to be wrong. But how were they to know? Their other Web traffic flowed normally. From their perspective, only our service had slowed.

Luckily, we quickly discovered that by changing a few parameters in our service, we were able to restore normal performance to our Canadian customers. But the Canadian ISPs were of no help. For over a year, they denied even using traffic shaping, let alone disclosed the criteria they used to single out “bad” traffic. We were forced to find our own workaround by trial and error.

And there’s the rub.

Imagine for a moment that regional phone companies were allowed to “manage their congestion” by implementing arbitrary methods that block a subset of phone calls on their network. People whose calls got blocked would be at a loss to know why some calls failed to connect, while others continued to go through normally. Such behavior would never be tolerated in our telephony market. Yet we allow ISPs to “manage their congestion” this way today.

In a truly open marketplace, we could expect market forces to drive bad ISPs out of the market. But most ISPs are monopolies, for good reason. Their infrastructure costs are enormous. The right to have a monopoly, however, must always be balanced by regulations that prevent abuse of that right.

Businesses and markets cannot thrive when ISPs secretly delay or discard a subset of their traffic. Networks need to be free of secret, arbitrary traffic management policies. An ISP whose network suffers chronic congestion cannot be allowed to respond by selectively blocking arbitrary classes of traffic.

Some ISPs argue that the solution is to let them offer “tiered” services that guarantee some classes of traffic flow unimpeded, while other classes may be delayed or discarded altogether. I disagree. The net works just fine when traffic flows smoothly through the pipes.

The real problem is chronic congestion that occurs when ISPs sell more capacity than they can deliver. BitTorrent has become the ISPs’ sore thumb today (as other high-traffic services will tomorrow) because these ISPs sold their customers flat-rate plans limited by speed, not volume. Amending those plans to include a volume-related surcharge would quickly squelch traffic or motivate customers to switch to ISPs with more capacity. Either way, congestion is relieved.
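The arithmetic of such a plan is simple. Here is a sketch of a flat-rate-plus-volume bill; the dollar figures and allowance are illustrative assumptions, not any real ISP’s rates.

```python
# Hypothetical flat-rate-plus-volume pricing: a flat monthly fee covers
# a usage allowance, and traffic beyond it incurs a per-gigabyte
# surcharge. All figures are illustrative assumptions.

def monthly_bill(gb_used: float, flat_fee: float = 40.0,
                 included_gb: float = 50.0, per_gb: float = 0.50) -> float:
    """Flat fee plus surcharge for usage above the included allowance."""
    overage = max(0.0, gb_used - included_gb)
    return flat_fee + overage * per_gb

print(monthly_bill(30))   # light user pays only the flat fee: 40.0
print(monthly_bill(250))  # heavy user pays for 200 GB of overage: 140.0
```

Under a plan like this, light users see no change, while the heaviest users either pay for the capacity they consume or curb their usage, without the ISP ever inspecting or throttling particular applications.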

Some readers may object that this practice itself violates the principle of net neutrality. But the net neutrality debate should not be hijacked by people advocating some illusory “right” to flat-rate pricing plans. Businesses on the Web have paid graduated rates according to their consumption for years, metered by the gigabyte or by peak burst rates.

Meanwhile, FCC commissioners need to understand that arbitrary and secret traffic management policies have already impacted businesses unrelated to the peer-to-peer file sharing applications targeted by those policies. These are not hypothetical scenarios. The ongoing threat to legitimate Web services that businesses and consumers depend upon daily is real.

The FCC must impose rules that prevent ISPs from implementing such policies. ISPs that oversold capacity must respond with improved pricing plans, not traffic blocking policies. Letting the status quo continue imperils legitimate users of the global information infrastructure that so many of us depend upon daily.

Rich Baker is founder and CEO of Glance Networks.

  • Kevin Holtsbery

    I really enjoyed reading your opinion piece about net neutrality and how it’s affected you personally. I greatly appreciate how middle of the road your opinion is even after those effects. The problem I have with your conclusion that the internet should be a per-byte subscription service is that 1) people won’t swallow that pill easily, 2) how can people possibly keep track of how many bytes they have used, and furthermore, 3) how can a consumer confirm that the telecom moguls aren’t abusing that figure? It’s not like how it was in the beginning when you paid per minute. Too many sites have way too much data on them already that I personally don’t even pay attention to. With this proposed method I would now need to pay for all that extra nonsense! And it hasn’t been a huge issue lately, but all those pop-up ads certainly add up. Currently you pay for throughput, not total usage, and you have tiers to that degree. Per-byte billing will hurt not only the little guy, it will prevent people from expanding their exposure to new sites for fear of going over their limit. It’s the same reason I don’t txt message, because it hurts my pocket way more than it’s worth.

  • Wayne

    The ISPs think traffic shaping is the best option. For them it’s simple, easy, and, as long as they were able to hide it, non-controversial. Now that it’s become a bit of a hot button issue, they may be having second thoughts. I’ve seen a couple of polls (including the one at the Globe and Mail) which have been heavily against traffic shaping. Those ISPs who use it rather than upgrading their lines to carry the traffic that they promised to carry when they offered “Unlimited” accounts may have severe problems with customer churn if their competition decides not to use traffic shaping. Of course there really isn’t all that much competition in the ISP market in Canada.

  • Ryan Friedrich

    For those of you who don’t know, shaped internet already exists. Ask anyone who uses the internet in Australia about it. The companies gouge them by giving them excessively fast internet and limiting them to a certain number of gigabytes per month of transfer. Typically the limit in AUS is about 4-6 GB a month, with plans for “unlimited” in the AU$200-per-month range. After you have exceeded your 6 GB for the month, your speed drops from broadband to a meager 64kbps, barely faster than dialup.
