It's OK, the internet will scale fine
I've been seeing a lot of press lately worrying that the internet won't be able to handle the coming video revolution, that as more and more people try to get their TV via the internet, it will soon reach a traffic volume we don't have capacity to handle. (Some of this came from a Google TV exec's European talk, though Google has backtracked a bit on that.)
I don't actually believe that, even if you accept the premise behind the worry, namely that video will keep coming as traditional centralized downloads from sites like YouTube or MovieLink. I think we have the dark fiber and other technology already in place, with terabits over fiber in the lab, to make this happen.
However, the real thing the worriers are missing is that we don't need that much capacity. I'm on the board of Bittorrent Inc., which was created to commercialize the P2P file transfer technology developed by its founder, and on Monday we're launching a video store based on that technology. But in spite of the commercial interest I may have in this question, my answer remains the same.
The internet was meant to be a P2P network. Today, however, most people download more than they upload, and have a connection which reflects this. But even with the reduced upload capacity of home broadband, there is still plenty of otherwise unused upstream sitting there, ready to go. That's what Bittorrent and some other P2P technologies do -- they take this upstream bandwidth, which was not being used before, and use it to feed a file to other people who want to download it. It's a trade: you send pieces of the file to others and they send pieces to you. It allows a user with an ordinary connection to publish a giant file where this would otherwise be impossible.
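To put rough numbers on the idea (these figures are illustrative assumptions, not Bittorrent's actual statistics), here's a back-of-the-envelope sketch in Python. The point is that when each downloader re-uploads roughly what it takes, the publisher only has to push about one copy of the file into the swarm, instead of one copy per viewer:

    # Toy back-of-the-envelope sketch with made-up numbers.
    file_size_gb = 4.0      # an HD movie (assumed size)
    viewers = 50_000        # people fetching the same file
    reupload_ratio = 1.0    # each peer uploads about as much as it downloads

    # Naive central-download model: the site serves every copy itself.
    central_server_tb = viewers * file_size_gb / 1000

    # Swarm model: the publisher seeds one copy, plus whatever the peers
    # fail to re-upload (zero here, since the ratio is 1.0).
    shortfall = max(0.0, 1.0 - reupload_ratio)
    swarm_seed_tb = file_size_gb * (1 + viewers * shortfall) / 1000

    print(f"Central site must serve about {central_server_tb:,.0f} TB")
    print(f"A well-seeded swarm asks the publisher for about {swarm_seed_tb:.3f} TB")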
Yes, as the best technology for publishing large files on the cheap, it does get used by people wanting to infringe copyrights, but that's because it's the best, not because it inherently infringes. It also has a long history of working well for legitimate purposes: it is one of the primary means of publishing new Linux distros today, and it will be delivering major Hollywood studio movies starting Feb 26.
Right now the clients connect with whoever they can, but they favour other clients that send them lots of data. That creates a bias towards clients with which there is a good connection. While I don't set the tech roadmap for the company, I expect that over time the protocol will become aware of network topology, so that it does an even better job of mostly peering with network neighbours -- customers of the same ISP, or students at the same school, for example. There is tons of bandwidth available on the internal networks of ISPs, and it's cheap to provide there -- more than enough for everybody to have a few megabits for a few hours a day to get their HDTV. In the future, an ideal network cloud would send each file just once over any external backbone link, or at most once every few days, becoming almost as efficient as multicasting.
(Indeed, we could also make great strides if we were to finally get multicasting deployed, as it does a great job of distributing the popular material that still makes up most of the traffic.)
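To make the topology-aware peering idea concrete, here is a minimal sketch of what locality-aware peer ranking could look like. It is hypothetical -- this is not the actual Bittorrent protocol or the company's roadmap, and the fields and weights are invented for illustration:

    # Hypothetical sketch of locality-aware peer selection; not the real
    # Bittorrent algorithm, which is based on tit-for-tat unchoking.
    from dataclasses import dataclass

    @dataclass
    class Peer:
        addr: str
        observed_mbps: float    # how fast this peer has been sending to us
        same_isp: bool          # e.g. shares our ISP's address block
        same_dslam: bool        # e.g. behind the same DSLAM or local POP

    def peer_score(p: Peer) -> float:
        score = p.observed_mbps     # favour peers that send us lots of data
        if p.same_isp:
            score *= 2.0            # prefer traffic that stays inside the ISP
        if p.same_dslam:
            score *= 2.0            # better still, the same neighbourhood
        return score

    def pick_peers(candidates: list, n: int = 4) -> list:
        """Trade with the n best-scoring peers; the rest wait their turn."""
        return sorted(candidates, key=peer_score, reverse=True)[:n]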
So no, we're not going to run out. Yes, a central site trying to broadcast the Academy Awards to 50 million homes won't work, and for cases like that, radio broadcasting and cable (or multicasting) continue to make the most sense. But if we turn up the upstream, there is more than enough bandwidth to go around within every local ISP network. Right now most people buy ADSL, but it's not out of the question that we might see devices in this area become soft-switchable as to how much bandwidth they do up and how much down, so that if upstream is needed, it can be had on demand. It doesn't really matter to the ISP -- in fact, since most users normally send very little upstream, the ISP has wasted capacity out to the network unless it also sells hosting to make up for it.
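As an illustration of the soft-switchable idea (the numbers are made up, and no real DSL standard divides its capacity this simply), think of the line as a fixed pool whose up/down split can be changed on demand:

    # Made-up illustration of a soft-switchable up/down split on one line.
    total_mbps = 8.0    # assumed total capacity of the loop

    def split(upstream_fraction):
        up = total_mbps * upstream_fraction
        down = total_mbps - up
        return down, up

    print("typical asymmetric split:", split(0.10))   # ~7.2 down / 0.8 up
    print("seeding-friendly split:  ", split(0.40))   # ~4.8 down / 3.2 up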
There are some exceptions to this. In wireless ISP networks there is no separate up and downstream -- the medium is shared -- and that's also true on some ethernets. For wireless users, it's better to have a central cache just send the data, or to use multicasting. But for wired users it's all 2-way, and if the upstream isn't used, it just sits there when it could be sending data to another customer on the same DSLAM.
So let's not get too scared. And check out the early version of Bittorrent's new entertainment store and do a rental download (sadly only with Windows XP-based DRM, sigh -- I hope for the day we can convince the studios not to insist on this) of multiple Oscar winner "Little Miss Sunshine" and many others.
Comments
Alex Goldman
Mon, 2007-02-26 12:03
But costs will rise for ISPs
While the little companies that I cover will continue to charge market rates, the monopolies that advertise unlimited service for $14.99 will need to further inflate their fees, and will need to kick some video-watching customers off the network.
I continue to argue that we need usage-based pricing, which goes hand in hand with enforcing advertising rules.
The monopolies get away with lowball prices that are partly cross-subsidy and partly hidden fees. The smaller ISPs therefore charge much higher rates (i.e. they actually set prices based on costs), so you see comparisons like $14.99 vs. $59.99.
The telcos want to get money from Google etc., which is ridiculous. Instead, they'd be fine if they earned more money when people consumed more.
brad
Mon, 2007-02-26 12:53
Usage pricing
Well, I've blogged a number of times before about why I think usage-based pricing would be a mistake and an innovation killer. The reality is that bandwidth technology is doing very well right now, even better than Moore's law: you can stuff a terabit down a fiber, a growing number of megabits down twisted pairs of copper, and gigabit free-space optics keep getting cheaper, etc.
So actually, I predict costs will fall for ISPs as they provide more and more, just as they have for computer vendors. Usage-based pricing only makes sense if the resource is getting more and more scarce.
As P2P matures, a lot of its bandwidth will be one customer feeding the data to another customer on the same DSLAM. Are you out of internal bandwidth on your DSLAMs? (Yes, today you may not route within them, but this is certainly not out of the question.)
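As a crude illustration (the percentages and volumes are invented, not measurements), the transit traffic an ISP pays for shrinks in direct proportion to how much of the P2P traffic can be served by nearby peers:

    # Invented numbers: how much P2P traffic leaves the ISP's network as
    # more of it is served by peers on the same DSLAM or internal network.
    users = 100_000
    monthly_p2p_gb_per_user = 30.0

    for local_fraction in (0.0, 0.5, 0.9):
        transit_tb = users * monthly_p2p_gb_per_user * (1 - local_fraction) / 1000
        print(f"{local_fraction:.0%} served locally -> {transit_tb:,.0f} TB/month over transit")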
Alex Goldman
Tue, 2007-02-27 11:04
It's expensive
It's true that events such as the semiconductor fair can demonstrate real-time interaction with a supercomputer, but deploying high-bandwidth technologies remains expensive. The most aggressive cost cutter I know of (and the backbone is not what I know best) is Cogent, and although they could send many, many colors down their fiber, they stopped at 2 or 4 because of the exponentially increasing cost of adding bandwidth.
As to copper technologies, these are not cheap either. In practice, the large ISPs are trying to sell slower tiers (such as 384 Kbps http://www.dslreports.com/shownews/81908) rather than trying to sell 50 Mbps (except for FiOS).
The WISPs and other little guys are doing good things, but the big companies change equipment as slowly as possible. The little companies then become test beds for untried equipment (the wireless broadband industry is a particularly good example of this).
I think that if false advertising and cross-subsidies were eliminated, the smaller ISPs would be doing much, much better, since they already compete on free-market principles.