
An Internet 100 times as fast

In today’s Internet, data traveling through optical fibers as beams of light have to be converted to electrical signals for processing. By dispensing with that conversion, a new network design could increase Internet speeds 100-fold.

A new network design that avoids the need to convert optical signals into electrical ones could boost capacity while reducing power consumption.

The heart of the Internet is a network of high-capacity optical fibers that spans continents. But while optical signals transmit information much more efficiently than electrical signals, they're harder to control. The routers that direct traffic on the Internet typically convert optical signals to electrical ones for processing, then convert them back for transmission, a process that consumes time and energy.

In recent years, however, a group of MIT researchers led by Vincent Chan, the Joan and Irwin Jacobs Professor of Electrical Engineering and Computer Science, has demonstrated a new way of organizing optical networks that, in most cases, would eliminate this inefficient conversion process. As a result, it could make the Internet 100 or even 1,000 times faster while actually reducing the amount of energy it consumes.

One of the reasons that optical data transmission is so efficient is that different wavelengths of light loaded with different information can travel over the same fiber. But problems arise when optical signals coming from different directions reach a router at the same time. Converting them to electrical signals allows the router to store them in memory until it can get to them. The wait may be a matter of milliseconds, but there's no cost-effective way to hold an optical signal still for even that short a time.

Chan's approach, called "flow switching," solves this problem in a different way. Between locations that exchange large volumes of data — say, Los Angeles and New York City — flow switching would establish a dedicated path across the network. For certain wavelengths of light, routers along that path would accept signals coming in from only one direction and send them off in only one direction. Since there's no possibility of signals arriving from multiple directions, there's never a need to store them in memory.

Reaction time

To some extent, something like this already happens in today's Internet. A large Web company like Facebook or Google, for instance, might maintain huge banks of Web servers at a few different locations in the United States. The servers might exchange so much data that the company will simply lease a particular wavelength of light from one of the telecommunications companies that maintains the country's fiber-optic networks. Across a designated pathway, no other Internet traffic can use that wavelength.

In this case, however, the allotment of bandwidth between the two endpoints is fixed. If for some reason the company's servers aren't exchanging much data, the bandwidth of the dedicated wavelength is being wasted. If the servers are exchanging a lot of data, they might exceed the capacity of the link.

In a flow-switching network, the allotment of bandwidth would change constantly. As traffic between New York and Los Angeles increased, new, dedicated wavelengths would be recruited to handle it; as the traffic tailed off, the wavelengths would be relinquished. Chan and his colleagues have developed network management protocols that can perform these reallocations in a matter of seconds.
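The reallocation the article describes can be sketched as a simple threshold controller: recruit wavelengths when measured demand approaches the capacity of the dedicated path, relinquish them as traffic tails off. The capacities, headroom factor, and names below are illustrative assumptions, not the MIT protocol itself.

```python
import math

# Toy sketch of flow-switching bandwidth reallocation (illustrative only;
# the per-wavelength rate and headroom are assumptions, not the MIT design).
WAVELENGTH_CAPACITY_GBPS = 10  # assumed line rate per wavelength

def wavelengths_needed(demand_gbps, headroom=0.2):
    """Wavelengths to keep lit for a given demand, with spare headroom
    so short bursts do not exceed the dedicated path's capacity."""
    target = demand_gbps * (1 + headroom)
    return max(0, math.ceil(target / WAVELENGTH_CAPACITY_GBPS))

class FlowSwitchedLink:
    """Dedicated lightpath between two endpoints (e.g. New York <-> LA)."""
    def __init__(self):
        self.lit = 0  # wavelengths currently dedicated to this flow

    def reallocate(self, demand_gbps):
        """Recruit or relinquish wavelengths as measured traffic changes.

        Returns the change in lit wavelengths (>0 recruited, <0 released).
        """
        needed = wavelengths_needed(demand_gbps)
        delta = needed - self.lit
        self.lit = needed
        return delta

link = FlowSwitchedLink()
print(link.reallocate(25))   # traffic ramps up: wavelengths recruited
print(link.reallocate(95))   # peak traffic: more wavelengths recruited
print(link.reallocate(8))    # traffic tails off: wavelengths released
```

In a real network the controller would also have to signal the routers along the path to reconfigure, which is where the seconds-scale protocol work comes in.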

In a series of papers published over a span of 20 years — the latest of which will be presented at the OptoElectronics and Communications Conference in Japan next month — they've also performed mathematical analyses of flow-switched networks' capacity and reported the results of extensive computer simulations. They've even tried out their ideas on a small experimental optical network that runs along the Eastern Seaboard.

Their conclusion is that flow switching can easily increase the data rates of optical networks 100-fold, and possibly 1,000-fold with further improvements to the network management scheme. Their recent work has focused on the power savings that flow switching offers: In most applications of information technology, power can be traded for speed and vice versa, and the researchers are trying to quantify that relationship. Among other things, they've shown that even with a 100-fold increase in data rates, flow switching could still reduce the Internet's power consumption.

Growing appetite

Ori Gerstel, a principal engineer at Cisco Systems, the largest manufacturer of network routing equipment, says that several other techniques for increasing the data rate of optical networks, with names like burst switching and optical packet switching, have been proposed, but that flow switching is "much more practical." The chief obstacle to its adoption, he says, isn't technical but economic. Implementing Chan's scheme would mean replacing existing Internet routers with new ones that don't have to convert optical signals to electrical signals. But, Gerstel says, it's not clear that there's currently enough demand for a faster Internet to warrant that expense. "Flow switching works fairly well for fairly large demand — if you have users who need a lot of bandwidth and want low delay through the network," Gerstel says. "But most customers are not in that niche today."

But Chan points to the explosion of the popularity of both Internet video and high-definition television in recent years. If those two trends converge — if people begin hungering for high-definition video feeds directly to their computers — flow switching may make financial sense. Chan points at the 30-inch computer monitor atop his desk in MIT's Research Lab of Electronics. "High resolution at 120 frames per second," he says: "That's a lot of data."

Topics: Computer science and technology, Electrical engineering and electronics, Internet, Research Laboratory of Electronics, Optical flow switching, Optical networks


I am sure this research is quite interesting. Perhaps it does, potentially, solve some problems in the core internet. However, it's not all that new of a technique and it's got a lot more competition than stated.

Not new: In today's circuit-switched world, you can find ROADMs that can be used to nail up wavelengths between active nodes. Control? 20 years ago, two colleagues and I, at Bell Labs, led a team that prototyped control of a network with a hierarchy of connections... from SONET OC-N to channels within a DS1 to IP addresses for VoIP. Even patch panels -- interconnection of real wires -- were viewed consistently (just as really slow switches). There are even patents on this that are so old they've expired.

Competition: While the techniques here are interesting, they must be compared against alternatives. Here are a few:

1) Dedicated fibers, as discussed in the article.

2) "Nailed up" wavelengths... a slow version of what's in the article. (Is reconfiguring in seconds *really* that much more valuable than reconfiguring in hours or by time of day?)

3) Caching. This is already the technique used for popular web pages. Using caches means the data need traverse the network to the cache node only once, even if there are many users. This is what Akamai -- an MIT spin-out -- delivers.
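The caching point can be seen in a few lines: with a cache at the edge, the origin link across the core is traversed once per object, no matter how many users request it. This is a generic sketch, not Akamai's actual system.

```python
# Minimal edge-cache sketch (generic illustration, not any vendor's design).
origin_fetches = 0

def fetch_from_origin(url):
    """Models one trip across the core network to the origin server."""
    global origin_fetches
    origin_fetches += 1
    return f"<content of {url}>"

cache = {}

def get(url):
    if url not in cache:          # only the first request crosses the core
        cache[url] = fetch_from_origin(url)
    return cache[url]             # every later request is served locally

for _ in range(1000):             # a thousand users ask for the same page
    get("http://example.com/popular-video")

print(origin_fetches)             # the core network carried it only once
```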

4) Multicast. This is the technique already used for broadcast video. Instead of all streams emanating from the source, intermediate nodes make copies. Thus the vast majority of copies traverse only a few nodes.

5) Deep inspection. The trend has been for routers to get more intelligent... to interpret higher and higher protocol layers to optimize performance. The optical technique here is precisely the opposite.

6) Optical switching. Still unproven, but optical-only processing of packets may come before we know it.

Again, the technique here may have merit, but it needs to be evaluated in the context of a lot of potential substitutes, both in performance and in the cost of deployment.

I omitted one technique from my previous post:

7) Compression. Dr. Chan's high-def monitor at 120 fps can be served quite well by just 10-20 Mbit/sec. (Such a system, based on existing standards and about to be a released product, was demonstrated two years ago.) Continuing improvements in compression obviate the impact of video on the network.
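The back-of-the-envelope arithmetic behind that claim: assuming a typical 30-inch panel at 2560x1600 and 24 bits per pixel (the resolution and color depth are my assumptions; the article gives only the frame rate), an uncompressed 120 fps stream is about 11.8 Gbit/s, so serving it at 15 Mbit/s implies compression of roughly 800:1.

```python
# Back-of-the-envelope: raw vs. compressed bitrate for a 30-inch monitor.
# Resolution and color depth are assumptions; the article states only 120 fps.
width, height = 2560, 1600       # assumed 30-inch panel resolution
bits_per_pixel = 24              # uncompressed RGB
fps = 120

raw_bps = width * height * bits_per_pixel * fps
print(f"raw stream: {raw_bps / 1e9:.1f} Gbit/s")            # ~11.8 Gbit/s

compressed_bps = 15e6            # mid-point of the 10-20 Mbit/s claim
print(f"compression needed: {raw_bps / compressed_bps:.0f}:1")
```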

This is a really nice idea; data transmission will become much faster.

As MITDGreenb notes, there are several ways to boost current fiber optic network capacity by two orders of magnitude. This was the favorite subject of tech discussion exactly 10 years ago. At the tail end of the "Dot-Com" era in 2000-2002, all of these techniques were hot issues for Silicon Valley venture capitalists. I worked for one of those short-lived high tech firms trying to be the first out of the gate -- "BlueLeaf Networks" actually created the "Optical Switching" hardware to which MITDGreenb alludes. But we were still in stealth mode, testing and improving the optical devices that would accelerate the Internet by 2 or 3 orders of magnitude, when the bottom fell out of the Dot Com market and "BlueLeaf Networks" cratered.

Everyone was laid off and eventually found work in Green Energy, iPhone apps, Climate Change science, data search utilities, etc.

The intellectual property, the patents, and hundreds of millions (possibly billions) of dollars of successful stealth-mode research are still out there sitting in the files of hundreds of Silicon Valley VC firms. When the demand for 100x faster Internet arrives, there will be not one but several competing 10-year-old, well ripened technologies fighting to own the market.
