Cox has stepped into the glare of the net neutrality limelight with its new policy, announced yesterday afternoon, on bandwidth throttling.
Succinctly: Cox has decided that it knows best which of your network activities are “time sensitive” and which are not. And it intends to force its ideas on your use of the bandwidth for which you’ve paid.
Unsurprisingly, reaction has been negative. Om Malik of GigaOm sez:
Who is Cox to decide that a certain FTP transfer is not time sensitive, or that some software update is not time sensitive? More importantly, why should consumers trust cable companies, whose record of giving customers the short end of the stick is pretty well known?…
Unfortunately, as long as we have this comfortable duopoly in the broadband market, we the broadband consumers are going to have suffer from these kind of practices as we don’t have much of a choice. Hopefully a post-Kevin Martin FCC will be more citizen-friendly, and will act promptly against Cox and other traffic shapers.
The Free Press, epicenter of the net neutrality battle in Congress, similarly remarks:
“The lesson we learned from the Comcast case is that we must be skeptical of any practice that comes between users and the Internet.
“As a general rule, we’re concerned about any cable or phone company picking winners and losers online. These kinds of practices cut against the fundamental neutrality of the open Internet. We urge the FCC to subject this practice to close scrutiny and call on Cox to provide its customers with more technical details about exactly what it’s doing.”
This will clearly be the first contentious issue to come before the new FCC, and it is surely no accident that the strategy was announced just as the agency is being reorganized and will find it difficult to react quickly.
Hovering in the background of this story is a series of failures on the part of Comcast, the nation’s largest, and hence most visible, cable company, to extend the industry’s privileged position in regard to regulation. (Cable companies have historically been much more lightly regulated than their competitors, the telephone companies.) What Comcast failed to secure was the “right” to inspect your bits and to discriminate against bits it didn’t like—especially P2P protocols. In doing so it ran up against the long-established ideals of common carriage. A common carrier is not allowed to discriminate in what it carries: it may not charge some loads of coal more than others, nor give some customers privileged service by delivering their coal first. Comcast was asserting the right to treat some bits differently based on the protocols that governed them.
The FCC came down on Comcast, and in the ensuing back and forth Comcast, and the cable industry, got a huge black eye in public opinion, as is evidenced by the quotes above.
Cox steps into it
So Cox is deliberately taking up the network neutrality fight by declaring a new policy and is hoping to do a better job of it for the industry than the #1 guy. What’s not so well known, but was cited in the story-breaking AP account, is that Cox has also been doing exactly what Comcast has been castigated for attempting:
Comcast is fighting the FCC’s ruling in court, but has abandoned its congestion management system in favor of one that doesn’t discriminate between different types of traffic. It has also abandoned secrecy and revealed details on how the new system works.
Tests conducted by the Max Planck Institute for Software Systems in Germany indicated last year that Cox was using the same discriminatory network management system that Comcast employed then. Cox never revealed the details of its system but said it used “protocol filtering,” a principle also used by Comcast.
Further testing by the Max Planck Institute indicated that Cox cut back sharply on its use of the old congestion system in August, and that it was shut down by January.
Cox, apparently, is not willing to follow Comcast in shifting away from discrimination based on protocol to limits based on the particular customers who actually use the most bandwidth. Instead it is trying to recast the issue in terms of “time sensitive” and “time insensitive” categories of protocols. (Cox, by all accounts, uses the same equipment, from a company called Sandvine, that Comcast has: a technology that engages in deep packet inspection to try and discern the protocols used to transfer bits, among other traits.) But whereas Comcast has been remarkably open about what it is trying to do since being spanked by the FCC and public opinion, Cox appears to be firmly set on a path of continued obfuscation and misdirection. In its FAQ on the topic it says:
Our past practices were based on traffic prioritization and protocol filtering. This new technique is based on the time-sensitive nature of the Internet traffic itself, and we believe it will lead to a smoother Internet experience with fewer delays.
This is nonsense. Honestly. Past practices are present practices. The FAQ directs your attention to the categories of protocols that Cox has created (time sensitive and insensitive) and away from the raw fact that all Cox has done is group some protocols into one pile and some into another, discriminating against more than one protocol at a time. Protocols in the disfavored pile include P2P (the one that got Comcast in trouble), usenet, and FTP. All of this requires deep packet inspection—Cox examining your data to determine what’s in it—then deciding what is and isn’t important and slowing down the traffic it thinks isn’t important enough to get speedy service.
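To make concrete what a protocol-bucketing scheme like this amounts to, here is a minimal sketch in Python. It assumes the hard part (inferring the protocol via deep packet inspection) has already been done; the bucket names and protocol lists are my own illustrative guesses, not Cox’s actual configuration.

```python
# Hypothetical sketch of "time sensitive" protocol bucketing (names are mine,
# not Cox's). A real system like Sandvine's infers the protocol via deep
# packet inspection; here we simply take the inferred protocol label as given.

TIME_SENSITIVE = {"voip", "gaming", "web", "streaming"}
TIME_INSENSITIVE = {"p2p", "usenet", "ftp", "software-update"}

def priority_for(protocol: str) -> int:
    """Return a queue priority: 0 = favored, 1 = throttled under congestion."""
    if protocol in TIME_SENSITIVE:
        return 0
    if protocol in TIME_INSENSITIVE:
        return 1
    return 0  # unknown traffic gets the benefit of the doubt

def schedule(flows):
    """Order flows so favored protocols are serviced first when congested."""
    return sorted(flows, key=lambda f: priority_for(f["protocol"]))

flows = [
    {"id": 1, "protocol": "p2p"},
    {"id": 2, "protocol": "voip"},
    {"id": 3, "protocol": "ftp"},
    {"id": 4, "protocol": "web"},
]
print([f["id"] for f in schedule(flows)])  # favored flows come first
```

The sketch makes the point in the paragraph above: the “new technique” is just the old one with a lookup table in front of it. Whatever lands in the disfavored set gets serviced last whenever the network is congested.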
The confusing discussion that will ensue…
I expect that all this will be recast by commentators, as soon as they get it together today, as a fork in the road between “caps” a la Comcast and “management” a la Cox. No. The real issue is congestion—the slowdown that comes when too many users are clogging the internet, usually in the local “last mile.” Comcast is essentially telling some of its biggest users that they are using an unfair amount and “capping” their use at 250 gigs a month in the hope that this will keep congestion bearable. Cox, in contrast, is saying that some protocols deserve better service and is slowing others in an attempt to make its congestion less visible by passing the slowdown off to less important (in its view) uses.
What the first wave of commentators will be ignoring is that Cox also caps usage. It is much less transparent about how much incurs its wrath—the cap apparently differs by location and even then is not applied consistently. (In Las Vegas, for instance, the cap is 60 gigs on its 12 Mbps tier…much more restrictive than Comcast’s more highly publicized and decried version.) So while Cox can be the newest villain in the protocol arena of the network neutrality fight, it cannot be cast as a hero by those who are disturbed about the implications of bandwidth caps. You can rest assured that if Cox succeeds in its current strategy Comcast will follow it into using both caps and “management” to restrict its users. The industry is not offering an either/or…this is an “in addition to” effort.
But the confusing discussion around caps and management will serve the incumbents as a whole by presenting the policy community with a false choice between two objectionable “solutions” to the problem of congestion. There is a third choice, a better choice, that doesn’t involve picking either of the industry’s favorite children.
A Third Way
Commentators (and policy-makers) would be better served by focusing on the actual problem: The real issue is congestion. The real solution is to directly address the undersupply of bandwidth that is the root cause of congestion. A congested network is, almost by definition, one which is under-engineered and so cannot handle the traffic demands that its users put on it. The real solution to the real problem is to fix that…to put in place a network which can handle the traffic and one which can easily, quickly, and cheaply be upgraded to handle downstream increases in demand.
It is no accident that I find it easy to reject the choice offered by Cox and Comcast. It is largely a product of where I happen to live. Lafayette’s citizens are in a good position to see that there is a solution which doesn’t involve choosing between false alternatives presented by incumbents that seek concessions from the public in order to advance their interests instead of taking on the costly and admittedly risky business of fixing what is broken. Lafayette’s new network, built explicitly to provide the capacity the community believed was necessary for its future, is about to take on its first customers.
There are ways for the country as a whole to address the bandwidth/congestion issue directly without simply building a new network as Lafayette and other impatient communities have done. Regulators can do something as simple as setting standards on the advertising that currently allows companies to grossly overstate the amount of bandwidth they can reliably provide—buying a 12 meg tier seldom means that you can reliably get 12 megs. Simply require a truth-in-advertising standard of some sort: say that you have to actually be able to provide the advertised speed 98% of the time and that you must monitor and report your performance on a node-by-node basis to the FCC. If you fail to provide such speeds then you must rebate to your customers a percentage of their bill for the months in which the undersupply occurred. Performance standards like this used to be de rigueur for telephones back in the days before deregulation. They motivated the phone company to build the world’s best phone system.
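The proposed standard is simple enough to sketch in a few lines. This is an illustrative calculation only: the 98% threshold comes from the suggestion above, while the rebate fraction, sample counts, and dollar figures are hypothetical.

```python
# Hypothetical illustration of the truth-in-advertising standard suggested
# above: deliver the advertised speed 98% of the time, measured per node,
# or rebate part of the bill. All specific figures here are made up.

REQUIRED_COMPLIANCE = 0.98   # fraction of samples meeting advertised speed
REBATE_FRACTION = 0.25       # fraction of the monthly bill refunded on failure

def monthly_rebate(measurements_mbps, advertised_mbps, monthly_bill):
    """Return the rebate owed for one node-month of speed measurements."""
    met = sum(1 for m in measurements_mbps if m >= advertised_mbps)
    compliance = met / len(measurements_mbps)
    if compliance >= REQUIRED_COMPLIANCE:
        return 0.0
    return monthly_bill * REBATE_FRACTION

# A node that delivered the advertised 12 Mbps in only 90 of 100 samples
# falls below the 98% bar and owes a rebate on a $50 bill:
samples = [12.0] * 90 + [7.0] * 10
print(monthly_rebate(samples, 12.0, 50.0))  # 12.5
```

The mechanism matters more than the particular numbers: once the provider must measure, report, and pay when it falls short, it has a direct financial incentive to build enough capacity rather than ration what it has.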
The almost inevitable consequence of this and other regulatory methods of demanding better service (why not make symmetrical up and down speeds a service standard?) would clearly and cleanly set a framework for rational behavior on the part of the incumbents. [An aside: for a good, current, take on why competition is never enough in some situations see Harold Feld’s latest.]
What you could expect would be rational economic behavior that would drive
- telephone companies to follow Verizon’s lead in building out a new FTTH network capable of dealing with today’s demands. Verizon is unquestionably the smartest actor in the field. It is now well understood that Verizon has succeeded in what was initially judged a risky venture by the capital markets.
- the cable companies to push fiber much closer to the home and to restructure their use of bandwidth to supply fewer channels and more switched digital and raw bandwidth to customers.
- areas which have poor competitors in these categories—or which don’t want to rely on national policy to secure their futures—to build their own FTTH networks. As Lafayette has done.
Cox is, frankly, a bad actor on the national stage as it has been here in Lafayette. The country would be wise to reject the false choices currently being offered and to find a concrete, direct way to insist on more, and more reliable, bandwidth.
13 thoughts on “Cox Steps Into It: Network Neutrality Returns”
The solution you suggest is not viable. The cost of building an upgraded, new network to increase bandwidth would result in fees the customer would not be willing to pay. As for a better, easily and cheaply upgradeable network? Put down Elle Macpherson in my living room naked when I get home too. Nothing that is better will come “cheaply”. Let’s say a new network with additional bandwidth was built: how long would it take technological applications to use up that new capacity? Not very long, I’d bet. Network management does need to be exercised responsibly. Just as with any other finite resource, whether that be oil, natural gas or copper, it must be used wisely and efficiently. Ideally the FCC will ensure that network policy management is exercised justly to ensure that all users have adequate access to the internet.
I wonder if LUS is going to do this. Still curious if they plan on having a bandwidth cap. Cox has a bandwidth cap, but I don’t think they enforce it.
Re the "it's too costly" complaint: we're proving right here and now that even in the most extreme case it is not. Lafayette will overbuild FTTH and I am confident it will succeed against both Cox and AT&T–no weak sisters. If we can do it down here, on the storm-ravaged verge of one of the country's poorest states, anyone can.
But I've not suggested such a brave course for most of the country…I've only suggested that a few real truth-in-advertising conditions be met and a spare regulation or two that forces the delivery of perfectly doable service standards be imposed. And I frankly think that doing so will set up a dynamic that will do almost all of the work. Cable companies can, if they will, find real bandwidth. They just choose a safer route. But even without gentle regulation Verizon (as I pointed out) is building out a FTTH network already. Their stock is rising. It's expensive but all now believe the return is there…
It is NOT too expensive.
What's missing is will. And a smidgen of courage. That's all it boils down to. Lafayette has it. Bristol, VA. Chattanooga, TN. Verizon too…Some companies without it will fail. So be it. The sad part is that they will drag their communities down with them.
Not at all sure what LUS will do. I am sure that they don’t have any of the constraints that Cox does in the last mile. And those pixelated HD streams, audio dropouts, and slow-in-the-early evening internet speeds are due to last mile issues.
There are other different constraints on LUS but I think they are more of the fearful, “maybe this bad thing might possibly, someday, happen” variety than a real immediate problem.
So far LUS has been pretty smart about policy. I hope that they will continue to be so and not give up the huge competitive advantage that Cox is about to hand them by echoing Cox’s mistakes as Cox becomes the target for a national attack that paints them as the bad guy for making another stab at protocol discrimination.
Smart policy, in terms of first-year marketing at a minimum, would be to explicitly swear off both caps and protocol blocking/slowing in the beginning, holding the possibility in reserve if the privilege is abused. Let Cox make this mistake alone, without cover, and use the contrast as one additional reason to go with the good, responsible, squeaky-clean local alternative.
There have been hints that LUS will take a “conservative” route and set standards of service that might include caps and then later take them off if they appeared to not be needed. They probably mean that, and really would.
But it is terrible policy in the first place (as acting out of fear always is) and it is horrible marketing in the face of Cox handing them a hammer at the very moment of launch.
John, I think you kinda over-sensationalized this so you could say “examining your data” to imply some breach of privacy.
Sure, deep packet inspection is needed to dig into a packet at the transport layer to determine the port and thus classify the traffic based on port. But that doesn’t mean one is examining the actual data. For the avg. user, when you say something like that they are going to think along the lines that someone is reading their emails or something.
Also, according to Cox they are only testing the method at this time, and they are only doing the trials in two markets, Arkansas and Kansas. But you didn’t seem to mention that?
I am curious about something. What is your take on an ISP prioritizing VoIP or video (video conferencing) traffic? Both are very sensitive to delay, and ideally (even in a non-congested network, like a corporate LAN) you always prioritize this traffic over others to improve the user experience.
Also, I think a lot of your argument is basically about ISPs oversubscribing their networks. You do realize that LUS Fiber is oversubscribing as well, right?
Also I don’t think I agree with this comment, “And those pixelated HD streams, audio dropouts, and slow-in-the-early evening internet speeds are due to last mile issues.”
In a cable plant the last mile would be the coax side of things, right? Node to the home? And from reading through your post you seem to indicate it’s congestion at the last mile, right?
I am no expert on this stuff by any means, but isn’t that all RF, each channel riding a certain frequency on the copper to the home? If you’re sending a signal at a certain frequency, and a frequency contains a channel, how do you congest a frequency? Do too many radios tuned into a radio frequency cause congestion?
Or do you just mean that the issues with the last mile are related to other issues like RF interference and stuff? I would agree that there is a lot more that can go wrong and affect the last mile of copper compared to fiber. But I don’t think you can really congest an RF signal.
Of course you could have a finite amount of bandwidth to, say, a node for data, and you could oversubscribe that and thus incur congestion. Is that what you’re referring to with the limitation in the last mile of a copper plant?
Granted, the same holds true in a fiber plant too, right? There is a finite amount of bandwidth to a PON and depending on subscriber usage you could conceivably have congestion there as well, right?
Nice seeing you at the Tweetup the other night, wish we could have chatted.
Hi, it's comforting to see you operating under your own name; thanks, I appreciate it…You put forward three arguments or so: 1) what Cox is doing is not really so bad; and 2) the other guys are doing it too…
That's the sort of argument that my kids made when they were wrong, knew it, and were trying to soften the consequences. I didn't buy it then and I'm not buying it now.
What Cox is doing _is_ wrong, IMHO. Deep packet inspection is reaching into my data in ways that I don't like. That feeling is widely shared and you'll find, I think, if you look fairly at the web reaction, that the general tone agrees with my sensibilities. But this line of discussion evades the real question: should Cox be deciding which of my bitstreams are important enough to get favored treatment by grubbing through my data for fingerprints that will allow it to make that decision for me? The next, inevitable, step, as we both full well know, will be varying forms of deception and encryption. If this is supposed to be useful Cox will have to dig deeper and deeper into everyone's packets to make sure they are not carrying "offensive" bits. There is, in the end, no logical stopping point. Cox would be wiser, much wiser, not to get started down a losing road. The locus of control, not some technical issue, is the central question.
A subsidiary element of the "not really so bad" argument you offer is the implication that it is necessary and hence inevitable…that's simply not true, and you only need to look at the fact that even Cox and Comcast only started playing this game recently to see how false that is. And Comcast, whose network is similarly limited, is trying a different path to manage its bandwidth issues. The most reliable solution is easy, obvious, and the one that has almost always been used to date: simply supply more bandwidth. I expect Cox is in a fair position to do that, but hasn't chosen to. It should change its mind.
As to "the other guys are doing it too" bit of misdirection: When I bring up bandwidth issues I am simply trying to explain why Cox feels some sort of response is necessary: Yes, Cox's level of oversubscription is probably the main reason that they've got the service problems I referred to in the post. They have problems supplying the bandwidth they've sold people. Yes, I am aware of RF issues, and I'm aware of compression issues. I am NOT talking about either: I am talking about the consequences of enormously oversubscribed bandwidth. The contrast between LUS' level of oversubscription and Cox's is night and day.
And you know that.
Anyone who works for Cox — as you do— as a "network engineer" understands all this much better than you let on in public and really shouldn't be running around on various forums raising it as a question you don't understand. And, to make matters more questionable, Huval patiently explained it to you, in public, on the Advertiser forum where you repeated the question under the pseudonym pktloss. (*ref below*) There Huval noted that even if every one of the 32 people on a node were trying to use the full 100 megs of local Lafayette connectivity they'd still all be able to get 80 megs simultaneously. You and I both know that that will NEVER, EVER, happen. For regular internet use there is simply no mathematical way for users to come even close to maxing out the amount they've paid for since LUS is not selling more than a 50 meg package at retail. (The intranet speeds are a generous form of lagniappe…like "bursts." ) LUS is NOT oversubscribing the speed that folks have paid for; they can provide full throttle 80 megs to everyone all the time and are selling 50…plenty of overhead there.
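The arithmetic here is easy to check. Using the figures Huval gave (32 users per node, all able to draw 80 megs at once) and the top 50 meg retail tier, a quick sketch shows LUS selling less than the node can actually carry. The helper function is mine, for illustration; the node capacity is inferred from the "all 32 can get 80 megs simultaneously" statement, not a published spec.

```python
# Checking the oversubscription arithmetic with the reported LUS figures.
# Node capacity is inferred from "all 32 users can get 80 Mbps at once";
# none of this is a measured Cox number.

def oversubscription_ratio(users, sold_mbps_each, node_capacity_mbps):
    """Ratio of bandwidth sold to bandwidth available at the node.
    > 1 means oversubscribed; <= 1 means every user can max out at once."""
    return (users * sold_mbps_each) / node_capacity_mbps

lus_capacity = 32 * 80  # 2560 Mbps: 32 users at 80 Mbps simultaneously
print(oversubscription_ratio(32, 50, lus_capacity))  # 0.625 -- not oversubscribed
```

A ratio of 0.625 means LUS has sold only five-eighths of what the node can deliver, which is the sense in which there is "plenty of overhead there."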
Cox, demonstrably, CANNOT offer the full amount I bought to me all the time. All I have to do to prove this is watch my download speeds.
I'd love to be able to demonstrate just _how_ different the situation is with Cox…but while LUS will answer the question in a way that lets us all figure out what is actually going on, Cox will not.
Since you think it fair to ask of LUS, I think it fair to ask of you: what is Cox's backbone capacity in Lafayette; more specifically: what capacity do they supply to each node? And how large is the node; more specifically: what is the range of node sizes and what is the average size of each node in Lafayette?
(As to why this is of interest to the public: Here is the URL of a story about cable and Verizon that would be even more dramatic if rewritten about Cox and LUS:
I'd love to be able to just produce that first chart!)
As much as I would like to discuss things with you I can't, due to the way you personally attack people whose comments might not be in line with your ideals on all of this.
If you ever want to have a real discussion offline let me know. I feel those will go further face to face. I would hope you won't seem as bitter and angry towards me in person as you do online.
Just to be clear on your questions regarding node size, I assume your concerns are all about the data side of things, right?
Often you and others report node sizes based on homes passed. For instance, if a node serves an area of 500 homes, that is what typically gets reported. However, the actual number of subscribers within those 500 homes is a lot lower. Especially when it comes to data.
I don’t deal with this sort of stuff personally, but it’s something I have always wanted to know as well.
I have seen industry reports and things at times that seem to point to me that on avg. 1 in every 5 homes on a node might have a cable modem. Might be a little higher these days, and might also depend on the area served by the node and so on. I can’t really say because I don’t know, yet anyway. Have to ask.
Anyway, just thought you might want to make note of that so you can ask the right question next time, since I think data is all you are worried about. The TV side of things I don’t believe matters or has any effect on the data side; I think that is all fixed, and congestion of, say, data would have no impact on video. That’s my basic understanding from my own research on trying to understand the technology. Subject to being completely wrong.
That’s it from me… sorry to have bothered you and to think we could have a meaningful discussion. I was wrong to comment here, apparently. Let me know if you ever want to discuss elsewhere.
Oh and regarding the backbone, I think that knowledge is publicly available, I think Cox publishes the backbone map showing the OC-192 to the Greater Louisiana market.
You know, I think ‘attack’ is the wrong word to use. My apologies in advance.
My problem is that it seems that because of who I work for I cannot comment or ask any questions without it being assumed I am doing so on behalf of my employer, or to defend my employer, or something along those lines.
I cannot seem to take part in any discussions related to ‘LUS fiber’ without it coming up, with NO regard for the fact that I am just trying to participate as another citizen of Lafayette. Very few seem to allow me to participate in anything without throwing my employer into the equation.
Offline, people seem to behave much differently, and some of the recent discussions I have had with people in that setting have been great.
With this being John’s blog, with him being unwavering in his conviction regarding LUS fiber, and this being a blog about LUS fiber, I shouldn’t expect anything more than those types of responses when someone like myself comes here and posts.
I will simply have to stop trying to participate, as it’s always taken the wrong way.
Dane, I’m not ignoring this, just drowning in civic and family obligations. I will get back. Thanks for your patience.
John, with the exception of FTP and possibly network storage, I don’t see anything wrong with Cox’s policy.
Networks MUST be managed. There is not a network in the world that is designed for 100% subscriber use, and that includes the phone companies. Just try to get tickets to a hot concert, and you’ll hear nothing but fast busy signals (all circuits are busy).
VOIP should take priority (and it’s low bandwidth too). P2P should get lower priority. That’s not to say that Comcast’s method of dropping the connection was acceptable. It was not, but that doesn’t make the idea of network management wrong.
LUS will manage the network. They may not do it at first, but eventually they will have to. People will suck up whatever is available.
Think of the network like roads in a town. If you have an intersection with light traffic, a stop sign works. At some point, they may decide the road without a stop sign is causing the other road to back up too much, so they make it a four-way stop.
That works, for a while, but then traffic gets even worse, because one of the roads has far more traffic than the other, so they put up a stop light.
Maybe the solution is an overpass or an underpass. But MAYBE, the stop light will accomplish the goal with minimal impact on drivers and at a fraction of the cost.
I was impacted by Comcast’s network management, and it sucked. I’m sure they’re still managing traffic, but it’s no longer intrusive. Before, a lightly used torrent upload was killed every few seconds. Now it looks like they just turn down up/down stream bandwidth at times. Reordering packets just doesn’t bug me. I think a VOIP call IS more important than a torrent or an FTP session.
I’m not excusing the practice of over-promising bandwidth. In fact, I think that all providers should tell the customer what they should expect during peak times, but I’m also for network management. Building a network for a condition that it won’t hit 90% of the time doesn’t make much sense. But if you don’t like that, just use a VPN service and your problem is solved. There’s no privacy on the internet unless you’re encrypting your packets. It’s been like that for as long as I’ve been on the net, which is 15 or 16 years.
All that said, I’m 100% behind LUS Fiber, even though I don’t benefit from it (since I’m now hundreds of miles away from LFT). The prices are great, the bandwidth is excellent, and I suspect the picture quality will exceed everything except for those that use an antenna to watch OTA TV.
For phone, they may run into a problem with people finding much cheaper VOIP alternatives. Time will tell on that front.
Hi Kevin (do I know you?),
I mostly agree with you: management is necessary (though I seem to have a broader idea of what constitutes management), Voip is low bandwidth, Comcast’s old management is a bad idea, their new management is less obtrusive, that a solution would be to just be honest about what you actually can provide, that LUS is worth supporting, is likely to have better quality service, and might well have trouble selling a phone product once the customer base realizes the potential of over-the-top VOIP services on a really good and reliable connection.
But…where we seem to disagree: the best form of “management” is to simply oversupply bandwidth. That’s what’s been done traditionally and it is perfectly feasible with a full fiber network to continue that traditional practice. I am not objecting to that form of management at all. 🙂 I don’t like, but could understand, very high caps (like the new Comcast) if they are accompanied by some very clear standards that the provider is obligated to meet in regard to its advertised bandwidth. Caps are directed at the real problem and require some honesty in saying that they just can’t supply that much. I don’t care for Comcast’s previous practice (which they have decided they don’t have to engage in) or Cox’s new practice of slowing down protocols they think are less important. NOT their decision to make. But more, it won’t work: it is an open invitation to “cheaters” to manipulate their bitstream to hide (as you indirectly suggest) the nature of their connections. Cox will then be faced with the inevitable choice between letting the cheaters become a larger and larger part of their user base at the expense of those who play fair, or dipping further and further into my bits, everyone’s bits, in order to figure out who is cheating. They’re not going to win that sort of guerrilla war. It’s bad policy for anything but a short-term band-aid because it will NOT work. IMHO.
Thanks for backing off the personal attack accusation. I appreciate it.
To get right to what you focus on: I don’t regard your employment by Cox as any sort of reason for you not to speak—or be heard. I don’t think it automatically means that your judgments are caused by your employment, and I have not said so. I do think it matters, and will continue to think it matters. –Just as it matters that folks understand that I am a partisan in favor of Lafayette’s building a fiber network. (There are plenty that dismiss what I say simply because of that. I suspect you know a few. :-)) Our history and commitments do matter in forming our take on things, and I think that ought to be accessible to people as they evaluate what we say. You will have noticed that I make my identity visible here and in most places where I participate. I do so both to keep myself honest and because I have this funny idea that it should be possible to hold me accountable for what I say and do. Blame the Calvinist in my background.
I do think you are still overreacting; just less. 🙂 It should be possible to challenge people to be straightforward about who they are and why they take the positions they take. This is true in our everyday lives. That it is usually easy to get away with hiding the “real” you on the net is a failing of that medium. A failing that folks who hold themselves to high standards won’t take advantage of. You’ll find, perhaps to your surprise, that I say all these things in person and say them with a smile. (Ask around; I’m sure that is fairly well known. In fact, consider that said with a smile too.)
Here’s my take: I wrote a post about network management policies suggesting that Cox shouldn’t be engaging in a practice that I, lots of others, and apparently the FCC find offensive: protocol discrimination. I note that this is a strategy deployed to deal with bandwidth insufficiency and that a better and more traditional solution would be to simply increase the bandwidth available. I still think that’s good advice. (And suspect that my commitments and history will make some unwilling to hear it.)
Your response, IMHO, didn’t really address the issue, threw up a conflation of data congestion in the actual post with what I carefully labeled “last mile issues” in a fragment of a reply to a comment by another poster, and then went on to raise a red herring issue you’ve raised repeatedly across several forums: oversubscription. I was irked. It remains very difficult for me to believe that you don’t understand oversubscription better than you have let on. I won’t belabor the point further other than to say that I think Huval was overly conscientious when he appeared to concede that the 100 meg intranet speed involved any oversubscription in the usual sense. The 100 meg intranet is carefully offered as a best-effort bit of “lagniappe,” like Cox’s “burst” — nice, but not the product that is being sold; a good-faith extra. LUS can supply every bit of what you pay for even if everyone opts to purchase 50 megs and use it all at the same time (something that won’t happen). I don’t think that LUS is engaging in oversubscription of its download speeds at all, as it can supply everyone with about 80 megs. With my Cox subscription I do see undeniable signs of my bandwidth simply topping out below my purchased amount very consistently.
I’d like to be clear: I don’t think oversubscription is a bad thing per se — applied with restraint it trades a small amount of constraint for the elimination of a lot of waste. I do think that a more honest way of advertising and reporting would help everyone…Nobody is asking for five 9’s on a residential line. Or anything close. But hey, some standard like 90% of the time?
Huval did give you the network details you repeatedly asked for. I think it fair to ask the same of you. The article I pointed to earlier was written for a cable-centric trade mag. It pointed to the fact that Verizon–using a setup similar to LUS’s, but with half the capacity–was not oversubscribing its bandwidth at 15 Mbps, but that cable typically does…at a rate of better than 28:1 (!). That is, cable is selling 28 times what it has to sell and is counting on most of us not using it at the same time. And reading closely, I think you’ll see that that is a very generous-to-cable analysis. On the other hand, LUS has twice the bandwidth to the node as the setup described in relation to Verizon. I expect the contrast here to be even more one-sided. Buyers ought to be able to see for themselves how much overselling of bandwidth (if any) their suppliers are engaged in.