
October 06, 2009

Comments

Cool. Chevron deference is a much better topic than stupid crab pictures with nonsensical captions.

Publius, check your email. I'd like to get your post to the "top" of the page (i.e., above my recent post).

Publius, leaving aside the merits of the dispute, your brief is very well written.

many thanks von -- kind of you to say.

I haven't had the chance to read the whole thing, but at first glance it strikes me as a pretty persuasive brief, even from my libertarian-ish perspective.

It seems like you're in part indirectly analogizing Comcast to a common carrier, no?

Regardless, nice use of Schumpeter to make an argument in favor of regulation. I'm impressed.

So now that we know who's in the fight, where do you put the odds, chief?

Publius, leaving aside the merits of the dispute, your brief is very well written.

I second that. It's a good brief.

Mark -- I think that's an interesting aspect of this debate. Most open network types are fierce libertarians at the APPLICATION LAYER. The issue here is whether Comcast can interfere with that layer by messing with the lower physical layer.

Comcast appears to be evil incarnate.

There's this.

And Comcast wants to buy NBC and kill free Hulu downloads.

And Comcast is in a contract dispute with DirecTV over increased carriage fees for Versus (Comcast owns Versus -- doesn't that look like the behavior one would expect from a vertical monopoly?).

And Comcast has packaged Bay Area (California) sports programming into a new, separate cable channel many carriers don't offer/will only offer as part of a premium sports package.

Comcast owns Versus -- doesn't that look like the behavior one would expect from a vertical monopoly?

Vertical monopoly? Antitrust in the Bork era doesn't care about that.

I agree with von, though I do have a bias on this topic as I used to work for a pro-net-neutrality nonprofit. A few constructive comments as I'm reading:

First, the non-standard-protocols-argument is creative, but I don't see why you opened with it, because it's not really a legal argument so much as a pragmatic one. It requires you to prove that innovation and simple-as-possible interconnection are the paramount goals for internet regulation, which is really the entire debate.

The analogy to electrical voltage is OK, but the supporters of Comcast's filtering (I'm sure they exist) could legitimately ask why the market couldn't be trusted to sort this out -- a company that tried to provide or connect to the grid at 79.3V wouldn't last long.

The transparency variant of the argument (p. 13) is much better because it directly invokes Carterfone (although you didn't cite it) -- interfering with traffic in ways that exceed reasonable network-protection is per se prohibited. Even if Comcast had a legitimate network-security justification, the lack of transparency moots any potential consumer-choice justification.

Fundamentally, this argument is true, but it's not the strongest way to open the brief. The non-discrimination section that follows is much better.

However, you need a cite after the claim that "the principle of non-discrimination... has governed the Internet since its inception." The FCC has said as much (e.g. in the 2005 broadband policy) and it's at least implicit in FTA96, so your argument would be stronger if you at least portrayed this as official policy rather than just asserting it as a general tendency.

The transaction-costs and venture-funding arguments are novel and excellent. These are the kind of real-world-cost arguments that the DC Circuit and SC love, especially in the context of high-tech- and administrative-regulation-related cases. Tying these practical issues into "socially beneficial innovations" and §706(a) of FTA96 is masterful. (p.15) That's exactly what you need to do in an admin-law case.

The application-neutrality point that follows (p.16) is true and important, but not really supported by any direct, substantive authority -- again, this was a good but missed opportunity to make the conceptual leap-off from ostensibly-pro-neutrality authorities like the 2005 broadband policy. The shoot-the-moon goal of this brief should be an attempt to enshrine net neutrality as policy; there is enough authority to support that position, but the court needs to declare it directly.

"Anticompetitive effects" (p.16) are irrelevant per Trinko and now LinkLine. This section really shouldn't be in there, for that reason -- it opens you up to collateral attack. Likewise, the "economic theory" argument (p.18) is straying into the domain of antitrust, which is really not where you want to be.

"Limiting user choice" (p.17) is yet another example of something that is true but not necessarily relevant to the state of the law. I don't really see what any of this gets you if you don't win that competition is a mandated goal of the FCC -- that's really the whole game (and an easy argument to boot -- you make it on p.19 when you cite 47 U.S.C. §230(b), then in Section B, etc., it should just come earlier as it's the foundation of the argument, not its conclusion).

You really shouldn't end a paragraph with "(as history illustrates)" (p.18) -- either cite it or don't make the argument. It sounds like a reach. Same with "unlikely, to say the least" (p.20).

The argument that the DOJ/FTC are pro-competition doesn't help the "anticompetitive effects" argument above -- the FCC is a different agency with a different mandate, and it clearly has substantial authority to create limited monopolies. This is an antitrust argument, not a telecom policy argument -- a judge with a passing familiarity with Trinko would immediately pick up on this distinction and become hostile to your argument. The question in this case isn't whether the FCC should regulate, but whether it has the authority to regulate in this fashion, which it pretty clearly does.

Section C should be Section A, because it responds directly to the "the market will fix it" argument I mentioned earlier. The cite to Madison River is right on. This whole section is golden, though it still leans a bit too heavily on the "anticompetitive effects" claim that isn't really central to the argument.

The "expanding broadband access" argument is way undercooked -- it's directly relevant to the FCC's mandate (and therefore a legitimate subject of regulation) and therefore provides the Chevron hook. Should be way, way more prominent.

You should have worked harder to convince Larry, Jack, and Yochai to leave out Section II. Seriously.

To clarify, I realize that the other brief is the one intended to address most of the issues I raised on-point -- the problem I see is that the issues raised in Section I of your brief are the substantive ones that determine what the FCC should regulate and what its goals should be.

The Chevron argument in this case is kind of a gimme -- again, the real shoot-the-moon goal, as I see it, would be an explicit judicial affirmation of the FCC's right (or even obligation) to pursue net-neutrality as official policy rather than reversing course every few years and consequently making our 'net regulations conceptually incoherent.

What FTA96 is supposed to "mean" is a question that is long overdue for some authoritative standards.

Adam - I should have sent this to you pre-filing rather than post. :)

I'll take that as a compliment :)

It really is a good brief, though.

Again, I take no position on the merits. On the lawyering, however:

Adam makes some good comments, but you can find faults like this with nearly every brief. Especially with amicus briefs; amici are desperately trying not to repeat the arguments of the parties. A good amicus brief cannot be an argument according to Hoyle.

On that note, I'll disagree with Adam on the following:

First, the non-standard-protocols-argument is creative, but I don't see why you opened with it, because it's not really a legal argument so much as a pragmatic one. It requires you to prove that innovation and simple-as-possible interconnection are the paramount goals for internet regulation, which is really the entire debate.

No, that's exactly where Amici should have started. The parties would have covered the traditional, standard arguments. For an Amicus brief to be useful -- aka, read -- it has to have a different (albeit plausible) take. I thought that this argument was extremely well done, and the right place to open. Will the Court adopt it? Probably not. But I see it shifting the way the panel thinks about the debate, which is the best an amicus brief can do most times. (I know it's technically an amici brief, but, for some reason, that looks wrong to me. I want to write amici briefs.)

publius, I'm still trying to figure out where your bottom line is on application neutrality. I'm assuming that you have no problem with:

1) Spam filtering.

2) Filtering for prevention of DOS attacks.

3) Router policies like RED (random early drop) and WRED, which enhance the effectiveness of TCP congestion control and prevent global tail drop synchronization.

These are, of course, different only in degree from what Comcast did with BitTorrent, not in kind. How would you craft an app neutrality regulation that didn't forbid these kinds of behaviors?
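For readers who haven't run into RED: a minimal sketch of the idea behind item 3 above (Python, with made-up thresholds; real implementations track more state than this). The router watches an average queue depth and starts dropping a few packets probabilistically before the queue fills, which keeps TCP senders from all backing off and surging back in lockstep.

```python
import random

# Illustrative RED (Random Early Detection) parameters -- values are arbitrary.
MIN_TH = 5      # below this average queue depth, never drop
MAX_TH = 15     # at or above this, drop everything (like tail drop)
MAX_P  = 0.1    # drop probability as the average approaches MAX_TH
WEIGHT = 0.2    # EWMA weight for the average queue depth

avg_queue = 0.0

def red_enqueue(current_queue_len):
    """Return True if the arriving packet should be dropped under this toy RED."""
    global avg_queue
    # Exponentially weighted moving average of the instantaneous queue depth.
    avg_queue = (1 - WEIGHT) * avg_queue + WEIGHT * current_queue_len
    if avg_queue < MIN_TH:
        return False
    if avg_queue >= MAX_TH:
        return True
    # In between: drop with probability that grows linearly toward MAX_P.
    p = MAX_P * (avg_queue - MIN_TH) / (MAX_TH - MIN_TH)
    return random.random() < p

# Example: as the queue builds, drops begin probabilistically before it is full.
for depth in range(20):
    print(depth, "drop" if red_enqueue(depth) else "accept")
```

Note that the decision looks only at queue depth, not at which application the packet belongs to -- which is the sense in which it gets called "neutral" later in this thread.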

A related question has to do with traffic engineering in general: What sorts of regulatory restriction are you willing to place on ISPs as they attempt to provide the best service for the most users at the lowest cost? This is obviously dramatically more difficult in the face of an application like BitTorrent, where a relatively small number of users consume the majority of upload bandwidth. Would you require MSOs to offer symmetric upload/download bandwidth, even though the costs of doing so will have to be passed on to all users?

Next, I'm still unclear on your position wrt QoS and premium classes of service. Do you think app neutrality will allow the provision of separate classes of service? What if an ISP allows premium classes of service to crowd out basic service? Would you allow the same application to work better with different classes of service?

As you know from your previous threads on this topic, I'm fine with location neutrality but continue to maintain that app neutrality is a serious threat, and that imposition of app neutrality regulations will actually cause exactly the outcome you wish to avoid, i.e., they will stifle innovation. In particular, I dispute this:

For background, the reason that innovation is easy and inexpensive on the Internet is because everyone on the Internet has traditionally followed shared, non-secret practices and protocols. The creators of Twitter, for instance, didn’t have to call network providers like Comcast for permission to introduce their application. They simply conformed their application to transparent, standard protocols—and then trusted that their new application would work on the network. This ease of innovation allows an extremely diverse set of people (from college students to Fortune 500 companies) to experiment, which paves the way for new markets and economic growth.

This statement is only true for low-bandwidth, well-behaved, TCP-based applications. Your Twitter example is just fine, because Twitter consumes barely any bandwidth. Your statement is not true for BitTorrent (which conflicts with best practices for download/upload bandwidth segmentation), nor is it true for high-bandwidth RTP applications, nor would it be true for any number of potential applications that promiscuously used high-bandwidth UDP, or multicast, or even multi-unicast TCP.

For the record: What Comcast did was crummy, and I fully support requirements that all traffic management policies be openly disclosed and uniformly guaranteed for a particular SLA. But there is simply no way you can produce a one-size-fits-all set of regulatory mandates for traffic engineering, which is the inevitable end point of any process that adheres to application neutrality principles.

Perhaps you have thought of a compromise on this that I haven't. I'd love to hear it.

These are, of course, different only in degree from what Comcast did with BitTorrent, not in kind. How would you craft an app neutrality regulation that didn't forbid these kinds of behaviors?

No, these are very different in kind from what Comcast did. Spam filtering applies to the application level and only to Comcast's servers. Comcast does not interfere with its customers' ability to send mail to other mail providers or to receive mail from other mail providers. DOS filtering is generally done at the recipient's end and at their request; failure to filter DOS attacks effectively means terminating service for the customer. In any event, such filtering is neutral. RED and other router optimizations are neutral in that they don't care what application is getting its packets dropped or who sent or received the packet.

Would you require MSOs to offer symmetric upload/download bandwidth, even though the costs of doing so will have to be passed on to all users?

Why would anyone require any such regime? Have you seen any net neutrality advocate anywhere call for such a regime?

Your statement is not true for BitTorrent (which conflicts with best practices for download/upload bandwidth segmentation)

Best practices? The only reason for down/up bandwidth segmentation is that some provider networks require it because of their technological limitations. Well designed networks at universities or businesses generally do not feature segmentation.

...nor is it true for high-bandwidth RTP applications, nor would it be true for any number of potential applications that promiscuously used high-bandwidth UDP, or multicast, or even multi-unicast TCP.

"Promiscuously use"? You mean when users make use of the service that they paid for? Do I promiscuously use my water main when I brush my teeth? Or wash my car?

There is no reason that high bandwidth UDP applications pose a problem for any well designed network. Now, Comcast, unlike many smarter ISPs, has not designed their network properly, so they might have trouble, but that's their problem.

For the record: What Comcast did was crummy, and I fully support requirements that all traffic management policies be openly disclosed and uniformly guaranteed for a particular SLA.

I don't think Comcast offers an SLA to its residential customers. Since they don't, doesn't your proposal here amount to absolutely nothing for the vast majority of Comcast's customers?

If Comcast owns the network they should be able to do what they want. Bittorrent just wastes bandwidth and slows things down for everyone. Those people download HUGE files. And it's always stolen music or porn. So all publius is really trying to do here is support copyright infringement. Apparently you think it's so important for people to steal music that you're willing to deprive Comcast of the use of its property without due process of law and without paying just compensation as required by the Constitution.

Turb--

Spam filtering applies to the application level and only to Comcast's servers.

There are lots of firewall-based SMTP filters. I don't know if any of them are used at the NAT/PAT boundary of an ISP's network or not.

DOS filtering is generally done at the recipient's end and at their request; failure to filter DOS attacks effectively means terminating service for the customer. In any event, such filtering is neutral.

There are plenty of L3 DOS attacks, which get filtered inside the ISP. I'm also fairly sure that many ISPs filter things like SYN floods.

Finally, filtering of DOS attacks is the essence of non-neutral treatment: You're looking for particular (malignant) application patterns and you're throwing those apps off of your network. Who's going to determine what constitutes a "malignant" app? The FCC? What if the FCC decides that BitTorrent is a malignant app?
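Purely for illustration (hypothetical threshold, nothing like a real ISP's filter), here's the sort of per-source SYN-rate check being alluded to. The point is that the filter has to classify traffic by protocol-level pattern before deciding what counts as an attack, which is the sense in which DOS filtering isn't blind to what the traffic is doing:

```python
from collections import defaultdict

# Hypothetical threshold -- real ISP filters are far more sophisticated.
SYN_LIMIT_PER_WINDOW = 1000   # SYNs allowed per source per time window

syn_counts = defaultdict(int)

def on_packet(src_ip, is_tcp_syn):
    """Return 'drop' if this source has exceeded its SYN budget, else 'forward'."""
    if is_tcp_syn:
        syn_counts[src_ip] += 1
        if syn_counts[src_ip] > SYN_LIMIT_PER_WINDOW:
            return "drop"    # source looks like a SYN flooder in this window
    return "forward"

def end_of_window():
    """Reset counters periodically (e.g., every second)."""
    syn_counts.clear()

# Example: a normal client vs. a flooder.
print(on_packet("10.0.0.5", True))        # forward
for _ in range(2000):
    on_packet("198.51.100.7", True)
print(on_packet("198.51.100.7", True))    # drop
```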

Why would anyone require any such [symmetric upload/download] regime?

You're de facto forcing this if you require ISPs to allow high-bandwidth symmetric traffic onto their networks, or you'll degrade the ordinary web performance so much that your customers will start screaming.

The only reason for down/up bandwidth segmentation is that some provider networks require it because of their technological limitations. Well designed networks at universities or businesses generally do not feature segmentation.

That's because they don't use DOCSIS. The MSOs do. DOCSIS requires that you set some reasonable allocation of up/down traffic. The "technological limitation" that you're referring to is the result of CSMA/CD working really poorly in a neighborhood-area network.

"Promiscuously use"? You mean when users make use of the service that they paid for? Do I promiscuously use my water main when I brush my teeth? Or wash my car?

And what happens to you when you decide to water your lawn 24/7 during a drought? Doesn't the water company come and put a flow restricter on your main?

Right now, the only poorly-behaved app like this is BitTorrent. One of the reasons for that is that when somebody does something really stupid, the ISP usually sits them down and gives them a good talking to. But, suppose I decided that I wanted to multicast out hundreds of channels of HD video 24/7. Don't you think that the tier 1 and 2 ISPs that had to transit my traffic would be entitled to drop large quantities of it on its head?

I don't think Comcast offers an SLA to its residential customers.
There's certainly an implicit SLA. Customers have a right to expect that their internet service has good availability and performs reasonably. Now, as you know from our previous, uh, discussion, I suspect that there will be all kinds of different SLAs that are needed for real time apps in the near future. At that point, you may find that you have multiple classes of service even for consumer-grade traffic. Would app neutrality allow that to occur?

> If Comcast owns the network
> they should be able to do what
> they want.

Except puppy killing. They should not be able to euthanize puppies no matter how much network they own. That's just common sense.

There are lots of firewall-based SMTP filters. I don't know if any of them are used at the NAT/PAT boundary of an ISP's network or not.

Can you clarify what exactly you're talking about? Spam filtering is a large and complex enterprise and I have no idea what aspect you're referring to.

There are plenty of L3 DOS attacks, which get filtered inside the ISP. I'm also fairly sure that many ISPs filter things like SYN floods.

A major impetus for net neutrality is allowing users to make choices. I am not aware of any users who specifically requested from their network providers that a network destroying DOS attack be directed at them. Are you?

Finally, filtering of DOS attacks is the essence of non-neutral treatment: You're looking for particular (malignant) application patterns and you're throwing those apps off of your network. Who's going to determine what constitutes a "malignant" app? The FCC? What if the FCC decides that BitTorrent is a malignant app?

Providers will make that determination and defend it in court if need be. Do you really think there are users out there who would file suit over their ISP's blocking of a DDOS attack directed against them?

You're de facto forcing this if you require ISPs to allow high-bandwidth symmetric traffic onto their networks, or you'll degrade the ordinary web performance so much that your customers will start screaming.

On well managed networks, users can run high-bandwidth, symmetric, traffic-heavy apps without degrading anyone's performance but their own.

I really don't understand what you mean by requiring ISPs to allow such traffic on their networks. ISPs all allow such traffic right now. Users decide whether their traffic will be high bandwidth and symmetric, but that decision does not require that ISPs structure their networks to conform with user demands.

You seem to think that network neutrality prohibits fair queuing. Is that true?

And what happens to you when you decide to water your lawn 24/7 during a drought? Doesn't the water company come and put a flow restricter on your main?

The water company doesn't care about what I use the water for, just the amount. I've got no problem with explicit bandwidth caps. In fact, there's no reason Comcast can't change their TOS right now and say "all that stuff about unlimited data plans? that was a lie! you're capped to 200 GB/month". Why don't they do that? It is perfectly legal. I mean, your whole point here is that only a tiny minority of customers would hit the cap anyway and users in general are already comfortable with capped services (like most cell phone plans).

Right now, the only poorly-behaved app like this is BitTorrent.

If you're concerned about supporting low-latency services like real time teleconferencing, then BT is not the only problem. Anything which maxes out either downlink or uplink bandwidth is a problem. iTunes is a problem. Netflix viewing is a problem. Remote backup is a problem. Hulu is a problem.

One of the reasons for that is that when somebody does something really stupid, the ISP usually sits them down and gives them a good talking to. But, suppose I decided that I wanted to multicast out hundreds of channels of HD video 24/7. Don't you think that the tier 1 and 2 ISPs that had to transit my traffic would be entitled to drop large quantities of it on its head?

IP multicast is dead as a doornail in the internet core.

Beyond that, I think any ISP would be delighted to sell you bandwidth if you wanted to serve lots of HD video to the world. You're not going to be able to do that with a standard residential package, but so what? You'll pay for the bandwidth that you use and the ISP won't care about what you're doing with those bits at all.

There's certainly an implicit SLA.

Um, I'm having trouble squaring away your call for traffic management policies being disclosed in SLAs with the notion of an "implicit SLA". How do you disclose information in an implicit agreement?

At that point, you may find that you have multiple classes of service even for consumer-grade traffic. Would app neutrality allow that to occur?

I don't see why not.

I hope. I guess maybe I should actually spell out my reasoning. I think that puppies have a native right to life, in part because they are living creatures and in part because they are really cute. I think that when we assign property rights to a network like we've done to Comcast we need to exclude rights that are not actually ours to give, like the right to use the underlying physical network cable to strangle puppies in various brutal fashions. That might technically fall under the right of ownership, the force majeure of the rights assignation as it were, but if we tracked it back we would find that the puppies had not properly and knowingly leased out the right to their adorable little lives to the agencies from which Comcast's ownership stems. I recognize that it might put an unconscionable burden on network owners to identify *which* adorable small animals they have actually obtained (as part of their network ownership acquisition) the right to kill, but I think, at the same time, that a sensible blacklisting policy like:

* cannot kill puppies
* cannot streak naked through the NY streets shouting "I'm Comcast, mofos!"
* cannot use cyanide in promotional materials
* cannot eat original paintings by the great masters without a separate purchase process
* cannot return dead puppies to life with necromantic arts

would resolve the questions that their ownership of the network raises while letting Irrumator's basic points stand.

Does that make sense? I guess you all probably already agreed with me about the puppies :( so I hope I wasn't just wasting time :(

I am not aware of any users who specifically requested from their network providers that a network destroying DOS attack be directed at them. Are you?

How about the rights of the attacker? I'm only half joking here. Suppose I've got a SPIM application (Spam for IM). I'm busily trying to establish IM connections with all of Comcast's users. Not only am I annoying the ones who don't know how to set their preferences right but I'm gobbling up sockets on the IM provider's server. How is that substantively different from me and a hundred of my closest friends bringing the network to a crawl because I'm doing P2P?

On well managed networks, users can run high-bandwidth, symmetric, traffic-heavy apps without degrading anyone's performance but their own.

Eh? Are you arguing that if a bunch of people run BitTorrent that the non-BitTorrent users don't suffer performance degradation? You seem like a bright fellow, so I must be misunderstanding you. If this were the case, why would Comcast have bothered to throttle P2P?

[Bandwidth caps are] perfectly legal. I mean, your whole point here is that only a tiny minority of customers would hit the cap anyway and users in general are already comfortable with capped services (like most cell phone plans).

But you're not talking about a bandwidth cap, you're talking about a data cap. Per neutrality, if I want to blast out UDP packets at the inter-frame gap for four hours until I hit my data cap, I can make life really miserable for my neighbors--at least for those four hours. Should I be allowed to do that, or should somebody take my modem out of service?
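To put rough numbers on that "blast until I hit my cap" scenario, using the 200 GB/month figure floated above and a few hypothetical sustained rates:

```python
# Rough arithmetic for the data-cap scenario (hypothetical link speeds).
CAP_GB = 200  # the example monthly cap mentioned earlier in the thread

def hours_to_hit_cap(link_mbps):
    """Hours of fully saturated transmission needed to reach the cap."""
    gb_per_hour = link_mbps / 8 * 3600 / 1000   # Mbps -> MB/s -> MB/hour -> GB/hour
    return CAP_GB / gb_per_hour

for mbps in (5, 25, 100):
    print(f"{mbps:>3} Mbps sustained -> cap reached in {hours_to_hit_cap(mbps):.1f} hours")
```

In other words, a pure data cap only bites within hours at fairly high sustained rates; it does little about the short bursts of congestion the hypothetical describes.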

Plus, think about what you're advocating here. Instead of giving the ISP the ability to implement reasonable traffic engineering, you're saying that we ought to throttle all users to some sort of bizarro fair access policy. Do you really think that this is the way to promote innovation on the internet?

IP multicast is dead as a doornail in the internet core.

Excellent! And yet there are a whole raft of standards-track RFCs pertaining to it. So, I'm an application provider, and I run across some ISP that refuses to provide multicast access for me. He's being app non-neutral, isn't he? Quick, call the FCC!

Um, I'm having trouble squaring away your call for traffic management policies being disclosed in SLAs with the notion of an "implicit SLA". How do you disclose information in an implicit agreement?

Fair point. But there are certainly terms of service. If you're telling me that that's not an SLA--OK. Uncle. It could still spell out what the traffic engineering policies were for your class of service.

I don't see why not.

If that's the case, then cool. But I'm having a hard time squaring multiple classes of service with the idea of app neutrality. If you require any app to be able to run anywhere, then the whole notion of different CoS seems like it's on pretty shaky ground.

I sometimes struggle to follow these conversations, but if Jenna Moran keeps posting like her 4:11, I will definitely make the effort. It was Thullenesque, which, at least for me, is the highest praise around these parts.

How about the rights of the attacker?

There are two cases: either the attacker is also a customer of my ISP or they are not. If they are, then my ISP will terminate their service immediately since they're engaging in a felony. If they're not, then why should my ISP care about their rights? The attacker isn't giving them any money and I am.
Suppose I've got a SPIM application (Spam for IM).

In general, such systems route data to the IM provider and only from there to the end user. Therefore, the IM service provider can kill malicious users. There is no need for the network provider to get involved, and, practically speaking, in most cases it would be physically impossible for them to get involved. So I don't really understand your point: do you have any examples that work without assuming a radically different world than we live in?

How is that substantively different from me and a hundred of my closest friends bringing the network to a crawl because I'm doing P2P?

Explain to me how that is substantively different from me and my friends bringing the network to a crawl because we're watching Hulu? Or because we're using online backup services?

Eh? Are you arguing that if a bunch of people run BitTorrent that the non-BitTorrent users don't suffer performance degradation? You seem like a bright fellow, so I must be misunderstanding you. If this were the case, why would Comcast have bothered to throttle P2P?

I've been on several networks where BT users did not impose performance degradation on non-BT users. All it takes is a network structured so that shared resources (in Comcast's case, last mile bandwidth) are allocated using fair queuing. Comcast doesn't structure their network this way because they're not a particularly well run company. But other companies do. Most DSL providers allow users to saturate their connections 24x7 without impacting other DSL customers in the same area at all.
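A minimal sketch of the fair-queuing idea being invoked here (deficit round robin over per-subscriber queues; the names and numbers are illustrative, not a claim about how any particular ISP builds this). Each subscriber gets its own queue and the scheduler services them in rounds, so a heavy uploader mostly delays its own packets:

```python
from collections import deque

QUANTUM = 1500  # bytes of credit added per round per subscriber (illustrative)

class DRRScheduler:
    """Toy deficit-round-robin scheduler over per-subscriber queues."""
    def __init__(self, subscribers):
        self.queues = {s: deque() for s in subscribers}
        self.deficit = {s: 0 for s in subscribers}

    def enqueue(self, subscriber, packet_bytes):
        self.queues[subscriber].append(packet_bytes)

    def next_round(self):
        """Dequeue up to one quantum's worth of bytes per subscriber per round."""
        sent = []
        for s, q in self.queues.items():
            if not q:
                self.deficit[s] = 0          # idle queues don't bank credit
                continue
            self.deficit[s] += QUANTUM
            while q and q[0] <= self.deficit[s]:
                pkt = q.popleft()
                self.deficit[s] -= pkt
                sent.append((s, pkt))
        return sent

# Example: a BitTorrent-heavy subscriber can't crowd out a light one.
sched = DRRScheduler(["heavy_user", "light_user"])
for _ in range(50):
    sched.enqueue("heavy_user", 1500)       # saturating uploader
sched.enqueue("light_user", 500)            # one small packet
print(sched.next_round())                   # both get served in the same round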

if I want to blast out UDP packets at the inter-frame gap for four hours until I hit my data cap, I can make life really miserable for my neighbors--at least for those four hours. Should I be allowed to do that, or should somebody take my modem out of service?

This can only occur on poorly designed networks. I think providers with really poorly designed networks should be forced to explain to customers the limitations they impose and then let the market take care of the rest.

you're saying that we ought to throttle all users to some sort of bizarro fair access policy. Do you really think that this is the way to promote innovation on the internet?

Um, no.

Excellent! And yet there are a whole raft of standards-track RFCs pertaining to it. So, I'm an application provider, and I run across some ISP that refuses to provide multicast access for me. He's being app non-neutral, isn't he? Quick, call the FCC!

I don't think you understand how the internet works. Anyone can write an RFC. And sometimes the IETF acts really really dumb (for example see IMAP) and pushes bad RFCs to a standards track. But even standards track RFCs are not necessarily intended to be used in the internet core. Multicast IP is widely used in large networks but not in the internet core. The reason is that multicast requires that core routers allow any user on the planet to allocate system resources. This obviously does not work outside of individual enterprises.

Failing to implement a service that is not in widespread use is not a neutrality violation. There might be a neutrality violation IF the provider offered multicast IP to its internal VOIP service but refused to offer the same service to customers at the same rate, but that's a radically different situation than what you described.

Adam makes some good comments, but you can find faults like this with nearly every brief. Especially with amicus briefs; amici are desperately trying not to repeat the arguments of the parties. A good amicus brief cannot be an argument according to Hoyle.

I don't disagree with you on this point, von -- let me restate my point. I agree that it is appropriate for amici to articulate the practical outcomes of policy. But in an admin law case especially, what separates a good amici brief from a great one is a brief that articulates why x policy should have been the correct policy all along, not just why a policy is good or bad. Those weighing factors are just frosting for judges.

But when you pull a Zechariah Chafee and write a brief that ties together the FCC's mandate with net neutrality (and that connection is there), you do a much more profound thing than simply saying, "well, this policy would be good." Publius' brief comes very close to doing this but doesn't quite make it, in my opinion. It's still a very good and creative brief, though.

However, I do partially disagree with von on this point:

For an Amicus brief to be useful -- aka, read -- it has to have a different (albeit plausible) take.
The antitrust arguments that I pointed out seem to me to be an example of where this can be taken too far. Antitrust law is the bane of sane telecommunications policy and allowing the court to sideswipe the argument with "hey, companies can do whatever they want!" would be a disaster. (Look what happened in Trinko and LinkLine.)

My personal feeling, as I indicated, is that the free-speech argument is problematic for different reasons; basically, it's just too much of a reach -- it makes you look like starry-eyed tech-evangelists, which hurts your credibility on the efficiency/competition front, which is where the debate always is in these cases.

It's not that you shouldn't make these arguments, it's that you shouldn't let Lawrence Lessig write them -- that section should be about how free speech is the pounding thumping throbbing beating heart of innovation in the great laboratory that is America. And then you sacrifice little effigies of Edison, Tesla, and Oppenheimer.

Turb, I need to give you a brief primer on DOCSIS, which is the cable modem standard used pretty much by everybody. A cable modem (CM) is attached to a shared medium: There's only one conductive path between a bunch of CMs and the cable modem termination system (CMTS). The DOCSIS standard describes how the CMs contend for bandwidth on that medium. In DOCSIS 2.0, about 200 CMs share upstream bandwidth over a single channel, which can effectively transfer 27 Mbps. In DOCSIS 3.0, which is being rolled out now, each CM can transmit on multiple upstream channels, each capable of transferring 27 Mbps, so eventually you will be correct that upstream bandwidth becomes a pure traffic engineering problem, although there will always be a tradeoff between how many channels get devoted to data and how many are devoted to other services (video on demand, pay-per-view, regular programming, etc.).
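Using the figures in that primer (the usage split below is hypothetical), the sharing arithmetic looks roughly like this:

```python
# Rough DOCSIS 2.0 upstream arithmetic, using the figures from the comment above.
CHANNEL_MBPS = 27      # effective upstream capacity of one DOCSIS 2.0 channel
HOMES = 200            # cable modems sharing that channel

# Even split: what each home gets if everyone transmits at once.
print(f"Even split: {CHANNEL_MBPS / HOMES * 1000:.0f} kbps per home")

# Hypothetical: a handful of sustained uploaders (e.g. seeding BitTorrent)
# each pushing 2 Mbps leaves this much for the other homes combined.
SEEDERS, SEED_MBPS = 10, 2
remaining = CHANNEL_MBPS - SEEDERS * SEED_MBPS
print(f"{SEEDERS} seeders at {SEED_MBPS} Mbps leave {remaining} Mbps "
      f"for the other {HOMES - SEEDERS} homes")
```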

However, with both DOCSIS 1.1 and DOCSIS 2.0, upstream bandwidth is clearly a big deal. If one or more of the 200 homes on the same CMTS and channel as you is requesting lots of upstream bandwidth, there's less for everybody else. Can the amount of upstream bandwidth per CM be limited? Yes, but that limitation will affect people who have the need for bursts of upstream bandwidth (sending email, doing incremental backups, etc.). So, once again, we're faced with the problem that the kind of application and the application mix will affect the optimal traffic engineering for your cable system, and this is why you need to apply non-neutral policies to CMs that are doing bad things (e.g. spamming out UDP or, in some cases, acting as a giant BitTorrent provider node). As for bringing the network to a crawl watching Hulu, that's all downstream traffic, and can be managed just the same as any other bag of HTTP traffic. BTW, note that none of this has anything to do with router fair queuing; it's all MAC-level stuff.

I'm sure that you have been on networks where P2P didn't noticeably degrade network performance, because there are lots of symmetric networks out there. Cable does not happen to be one of them, however. Since we've mostly been discussing Comcast, I'd say that that was a relevant factor, wouldn't you?

Now, as for your multicast comments: It is no longer true that anybody can write an RFC. For many years now, you submit internet drafts (which anybody can indeed write) to specific IETF working groups, which have to approve a draft before it can be submitted to the IETF steering committee, which has the power to make an RFC. Furthermore, there are informational RFCs, which aren't very important, but there are also standards-track RFCs, which are considered normative (i.e., if you want to conform to a particular protocol, you have to do what the RFC says). Lots of the IP multicast RFCs are standards-track.

Do the core network providers hate IP multicast? Yes, they do, for some of the reasons you have stated (although protocols like PIM have made things a bit better). Do they have zillions of tricks to limit its impact? Yes, they do. Are they incredibly grateful that none of the multicast MBone experiments got traction in consumer applications? You betcha.

But the important meta-point to be made here is that if you're going to have application neutrality, you have to describe, in enforceable regulatory language, what that means. Does it mean that any IP packet that a consumer transmits, or that a consumer causes to be received off of a transit network, has to be treated neutrally? I certainly hope not, for all of the reasons I've stated on this and other threads. Does it mean that all TCP packets have to be treated neutrally? Or does it mean that packets conforming to any normative protocol have to be treated neutrally? Pick a definition, and we can discuss the set of things that can go wrong with an app-neutral policy based on that definition.

The bottom line, however, is that networks actually have to be managed. Network management requires a model of the application mix to be transported over the network. Optimal management also requires that that model be enforced so that applications that seriously jeopardize the assumptions built into the model can have their behavior modified.

You certainly can have the government mandate the model by which all consumer networks will be managed, but there will be a model, irrespective of who generates the model, and the network will be managed to that model. When the application mix changes, the government will have to change the model and allow the ISPs to manage their traffic differently. Do you really want the government as the arbiter of the proper application mix? Do you really think that that’s how we’ll get new, innovative applications?

Thank you for doing this publius. I know you haven't been paid for it, and that makes me appreciate it even more.

As for bringing the network to a crawl watching Hulu, that's all downstream traffic, and can be managed just the same as any other bag of HTTP traffic.

Let me ask you something. Imagine every Comcast subscriber in an area is watching Hulu. As you point out, that means that the downlink will be saturated but the uplink won't be. Now, suppose one of those subscribers wants to uplink a lot of data. They can only upload data as fast as they can receive TCP ACKs. Those ACKs have to travel on the downstream pipe....you know, the pipe that's completely saturated with Hulu download data. Do you think the upload will perform well?
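To put a scale on that question, some back-of-the-envelope arithmetic (assuming standard 1460-byte segments and one roughly 64-byte ACK per two segments via delayed ACK; whether those ACKs actually make it through a saturated downstream queue is exactly what the replies below argue about):

```python
# Back-of-the-envelope: downstream bandwidth consumed by ACKs for an upload.
# Assumes 1460-byte TCP segments, one ~64-byte ACK per two segments (delayed ACK).
SEGMENT_BYTES = 1460
ACK_WIRE_BYTES = 64       # rough on-the-wire size of an ACK (headers + framing)
SEGMENTS_PER_ACK = 2

def ack_downstream_kbps(upload_mbps):
    segments_per_sec = upload_mbps * 1_000_000 / 8 / SEGMENT_BYTES
    acks_per_sec = segments_per_sec / SEGMENTS_PER_ACK
    return acks_per_sec * ACK_WIRE_BYTES * 8 / 1000

for up in (1, 2, 5):
    print(f"{up} Mbps upload needs roughly {ack_downstream_kbps(up):.0f} kbps "
          f"of downstream just for ACKs")
```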

BTW, note that none of this has anything to do with router fair queuing; it's all MAC-level stuff.

Queuing theory actually applies to all sorts of domains beyond IP packet routing. Math is cool that way.

I'm sure that you have been on networks where P2P didn't noticeably degrade network performance, because there are lots of symmetric networks out there.

You are very confused. Whether the network is symmetric is not the relevant issue. Most DSL is asymmetric and yet this problem does not occur with most DSL providers. The relevant issue is whether the network provider ensures fair access to shared resources. Most network providers do by ensuring that all shared pipes have enough capacity to cover all subscribers connected to that pipe. Some providers do that by placing intelligent routers in front of their shared resources that are capable of enforcing a fair queuing discipline. And some providers, like Comcast, do neither. But symmetry has nothing to do with it.

Turb, we've gotten way far away from the main point, which is simply that you have to engineer for your app mix, and therefore that application neutrality is somewhere between a really bad idea and impossible. Having said that, here are a few more comments, and then I'll give up:

Now, suppose one of those subscribers wants to uplink a lot of data. They can only upload data as fast as they can receive TCP ACKs. Those ACKs have to travel on the downstream pipe....you know, the pipe that's completely saturated with Hulu download data. Do you think the upload will perform well?

Fairly well, but the downloads won't--they'll drop into slow start, which will clear the congestion condition and allow most of the downstream ACKs to get through.
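For anyone following along without a TCP background, a toy illustration of the back-off being described (Tahoe-style reaction to loss, illustrative numbers only):

```python
# Toy illustration of TCP's reaction to loss (Tahoe-style: loss -> slow start).
cwnd, ssthresh = 1, 64          # in segments; illustrative values

def on_round_of_acks():
    global cwnd
    if cwnd < ssthresh:
        cwnd *= 2               # slow start: roughly doubles per round trip
    else:
        cwnd += 1               # congestion avoidance: linear growth

def on_loss():
    global cwnd, ssthresh
    ssthresh = max(cwnd // 2, 2)
    cwnd = 1                    # drop back into slow start

for rtt in range(10):
    on_round_of_acks()
print("cwnd before loss:", cwnd)
on_loss()
print("cwnd after loss:", cwnd, "ssthresh:", ssthresh)
```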

Of course, when you completely saturate the network in either direction, performance is going to suffer. This isn't the point, though. The point is that when you engineer your network for mostly asymmetric HTTP traffic and a few people come along and generate high-volume symmetric traffic, everybody's network performance is going to suffer.

Queuing theory actually applies to all sorts of domains beyond IP packet routing. Math is cool that way.

Yes, queuing theory is very nice, but fair queuing is a specific algorithm that only applies on switches or routers that are capable of buffering stuff. It can be just lovely once you get packets onto the CMTS, but it has nothing to do with the contention mechanism that's built into the upstream MAC of DOCSIS. That contention algorithm starts generating collisions and access delays at high offered load.

We've gotten to the "is not!" / "is so!" portion of the debate here. Maybe this will help: Do you really think Comcast would have throttled BT just to get their jollies? If it behaved the way you think it does, then it's a self-correcting problem, because all the non-BT users aren't pissed off and the BT users are either satisfied or they stop using it. Either way, Comcast doesn't have to take any action that might cause anybody to get the vapors.

Most DSL is asymmetric and yet this problem does not occur with most DSL providers.

That's because the DSL MAC doesn't have contention problems. It's like switched ethernet, where the only time you're not clear to send is when the switch is congested--which is pretty much never.

Some providers do that by placing intelligent routers in front of their shared resources that are capable of enforcing a fair queuing discipline. And some providers, like Comcast, do neither.

Please go take a look at some HFC architectures.
