“Peering into the Future” (with a Louisiana angle)

Sundays are a good time for kicking back and spending a little time in reflective thought. My neighbors are all out mowing the lawn for the first time this season, knocking back the spring weed surge and hoping to get ahead of it this year. Me, I know yardwork is eternal, weeds simply are, and I’m enjoying the sun.

And while I am enjoying the sunshine I’m also happily reflecting on this nifty article on peering from Cringely, which points out that the future of video requires very, very, very efficient networks, and that this, in turn, requires that network owners get over their fear of peer-to-peer file sharing and embrace the efficiencies a peer-to-peer distribution structure makes available.

Cringely is one of those guys who asks the right questions. It’s one thing to be smart–that’s nice but not all that reliably productive–and quite another thing to have developed a talent for asking the right questions. Cringely’s got a real talent for homing in on the important, intractable questions. While that’s his biggest asset, he also knows a hell of a lot about a hell of a lot of things and is able to synthesize a sensible, broadly-based (and hence surprising) guess about what really, honestly answering that right question might look like.
Cringely dogs his problems; a little bit of an obsessive personality turns out to be good for his readers (if not, probably, for his wife). The current question grows out of his continued worrying at the issue of how to build a network capable of doing all we want it to do. And what we want it to do is everything: HD video, voice, video conferencing, data streams, off-site storage, and on and on. There’s some huge bandwidth involved. Cringely points out in an earlier essay that, to realize our dreams of a net that takes over and interconnects all functions, it will have to grow:

If the prime directive here is simply to grow the Net as big and as fast as possible, then the best way to do that is through the balancing of data loads as much as possible across the Net. This is contrary to the client-server model that has dominated the Internet for most of its existence. Put differently, the major impediment to eventual Internet hegemony is the problem of scaling client-server applications. How big a data center do you need before you realize that no data center is big enough for some applications? Only a server-server or peer-to-peer architecture makes sense in the long run.
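The scaling problem Cringely is pointing at is easy to see with a little back-of-the-envelope arithmetic. This sketch uses purely illustrative numbers (file size, audience, seed count are my assumptions, not anything from his essay), but the shape of the comparison holds:

```python
# Back-of-the-envelope: origin upload burden, client-server vs. peer-to-peer.
# All numbers here are illustrative assumptions, not measurements.

FILE_GB = 4          # one HD movie, say
VIEWERS = 1_000_000  # a hypothetical audience

# Client-server: the data center must upload a full copy to every viewer,
# so the origin's burden grows linearly with the audience.
client_server_gb = FILE_GB * VIEWERS

# Peer-to-peer: the origin seeds a handful of copies and the viewers
# redistribute among themselves, so the origin's burden stays roughly flat.
SEED_COPIES = 10
p2p_origin_gb = FILE_GB * SEED_COPIES

print(client_server_gb)  # the data center carries millions of gigabytes
print(p2p_origin_gb)     # the origin seeds a few dozen; peers carry the rest
```

Double the audience and the client-server origin doubles its upload bill; the peer-to-peer origin barely notices. That is the sense in which "no data center is big enough for some applications."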

See? The right question, and a conclusion that’s very hard to evade. How do you grow the net fast enough? Install a more efficient distribution system, one that mimics the architecture of the net instead of trying to run everything from central servers. Several hundred pages of miscellaneous whitepaper BS, numerous panicked reports, and droning congressional hearings are hereby avoided.

This week’s essay is devoted entirely to the problem. I recommend you read it and spend some of your sunshine time placidly reflecting on the local implications.

I’m gonna try and absorb the idea that a peer-to-peer distribution architecture is the answer to network congestion, not the cause of it. Folks providing internet connections all over the country are running in panic from peer-to-peer downloadable video because it raises bandwidth usage. On the broadest level that is a good thing, at least for any company that has bandwidth to spare: you should be happy people want to use your product and happy that they’ll gravitate to you if you have the most of it to offer. But while internal bandwidth is easy to sell, the rub is that connecting through other people’s networks costs you money. So on the one hand local ISPs want folks to spend their money on more and more bandwidth, but on the other hand they dread the costs associated with that increase.

It might be that a clever ISP could have its cake and eat it too: facilitate lucrative bandwidth purchases and keep traffic manageably local. Suppose an ISP provided a very aggressive local caching setup that redirected downloads bound for destinations outside the network to a local server, and ran a peer-to-peer client in the customer premise equipment/modem or on its customers’ computers. It could build a network that handles lots of video and other high-bandwidth, interactive applications, one that provides fast, reliable connections while keeping as much traffic as possible off the costly larger internet.
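The cache-first idea above can be sketched in a few lines. This is a toy model, not any real ISP’s system: `LocalCache` stands in for the ISP-run cache plus on-network peers, and `fetch_from_transit` stands in for a real fetch across paid transit links. The names and structure are all my invention for illustration.

```python
# A minimal sketch of "keep traffic local": check the local network
# (cache / on-net peers) before going out over costly transit links.

class LocalCache:
    """Stands in for an ISP-run cache plus peer-to-peer clients on-network."""
    def __init__(self):
        self._store = {}

    def get(self, url):
        return self._store.get(url)

    def put(self, url, data):
        self._store[url] = data


def fetch_from_transit(url):
    # Placeholder for a real download across another network's pipes.
    return f"<contents of {url}>"


def fetch(url, cache, stats):
    """Serve from inside the ISP's network when possible; else use transit."""
    data = cache.get(url)
    if data is not None:
        stats["local"] += 1        # cheap: traffic never left the ISP
        return data
    data = fetch_from_transit(url) # costly: crossed a peering/transit link
    stats["transit"] += 1
    cache.put(url, data)           # every later request for this stays local
    return data
```

The first customer to grab a popular video pays the transit cost once; everyone after them is served from inside the network, which is exactly the have-your-cake-and-eat-it-too outcome: bandwidth sold locally, transit bills held down.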

Here’s a little sweetener: there’s a Louisiana company involved that Cringely promotes as having a chunk of the big solution in hand. It’s connected to a university and involves a species of grid computing that might enable real-time streaming video at radically lower cost….

Enjoy your Sunday afternoon.

8 thoughts on ““Peering into the Future” (with a Louisiana angle)”

  1. Peer-to-peer might not be the only answer.

    There’s a company called eXludus that has some sort of network broadcast technology that is supposedly much more efficient than traditional methods.

    I don’t know too much about them, but it’s interesting stuff.

  2. Greetings Dan,

    I hadn’t heard of eXludus, so thanks for the pointer. From what I gather after a very quick once-over, it looks like most of what they are trying to do is eliminate the “call” for data…so that processing takes place when the data is generated and the result waits for a request from a user. (I could easily have misunderstood.) If that is all that is going on, it might be very good for well-defined problems that are solved repetitiously but not so good for more unique issues.

    It’s not clear to me that it would solve the same sorts of problems that need to be solved with video downloads–avoiding net congestion with large, time-sensitive transfers in one “direction.”

    I’ll scratch my head a bit longer. Thanks.
