Some Remarks: Essays and Other Writing Page 23


  When last we saw our hypothetical cable-ship captain, sitting off of Songkhla with 2,525 kilometers of very expensive cable, we had put him in a difficult spot by asking the question of how he could ensure that his 25 kilometers of slack ended up in exactly the right place. Essentially the same question was raised a few years ago when FLAG approached Cable & Wireless Marine and said, in effect: “We are going to buy 28,000 kilometers of fancy cable from AT&T and KDD, and we would like to have it go from England to Spain to Italy to Egypt to Dubai to India to Thailand to Hong Kong to China to Korea to Japan. We would like to pay for as little slack as possible, because the cable is expensive. What little slack we do buy needs to go in exactly the right place, please. What should we do next?”

  So it was that Captain Stuart Evans’s telephone rang. At the time (September 1992), he was working for a company called Worldwide Ocean Surveying, but by the time we met him, that company had been bought out by Cable & Wireless Marine, of which he is now general manager—survey. Evans is a thoroughly pleasant middle-aged fellow, a former merchant marine captain, who seemed just a bit taken aback that anyone would care about the minute details of what he and his staff do for a living. A large part of being a hacker tourist is convincing people that you are really interested in the nitty-gritty and not just looking for a quick, painless sound bite or two; once this is accomplished, they always warm to the task, and Captain Evans was no exception.

  Evans’s mission was to help FLAG select the most economical and secure route. The initial stages of the process are straightforward: choose the landing sites and then search existing data concerning the routes joining those sites. This is referred to as a desk search, with mild but unmistakable condescension. Evans and his staff came up with a proposed route, did the desk search, and sent it to FLAG for approval. When FLAG signed off on this, it was time to go out and perform the real survey. This process ran from January to September 1994.

  Each country uses the same landing sites over and over again for each new cable, so you might think that the routes from, say, Porthcurno to Spain would be well known by now. In fact, every new cable passes over some virgin territory, so a survey is always necessary. Furthermore, the territory does not remain static. There are always new wrecks, mobile sand waves, changes in anchorage patterns, and other late-breaking news.

  To lay a cable competently you must have a detailed survey of a corridor surrounding the intended route. In shallow water, you have relatively precise control over where the cable ends up, but the bottom can be very irregular, and the cable is likely to be buried in the seabed. So you want a narrow (1 kilometer wide) corridor with high resolution. In deeper water, you have less lateral control over the descending cable, but at the same time the phenomena you’re looking at are bigger, so you want a survey corridor whose width is 2 to 3 times the ocean depth but with a coarser resolution. A resolution of 0.5 percent of the depth might be considered a minimum standard, though the FLAG survey has it down to 0.25 percent in most places. So, for example, in water 5,000 meters deep, which would be a somewhat typical value away from the continental shelf, the survey corridor would be 10 to 15 kilometers in width, and a good vertical resolution would be 12 meters.

  The survey process is almost entirely digital. The data is collected by a survey ship carrying a sonar rig that fires 81 beams spreading down and out from the hull in a fan pattern. At a depth of 5,000 meters, the result, approximately speaking, is to divide the 10-kilometer-wide corridor into grid squares 120 meters wide and 175 meters long and get the depth of each one to a precision of some 12 meters.
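  The figures in the last two paragraphs reduce to a few lines of arithmetic. Here is a rough sketch; the function name and the choice of the narrow (2×) end of the corridor for the per-beam footprint are my own, not anything from the survey itself:

```python
def survey_geometry(depth_m, beams=81, resolution_frac=0.0025):
    """Back-of-the-envelope survey arithmetic from the figures in the text:
    a corridor 2-3x the water depth wide, vertical resolution of 0.25% of
    depth, and the corridor's crosswise extent divided among 81 sonar beams."""
    corridor_min = 2 * depth_m             # narrow end of the corridor
    corridor_max = 3 * depth_m             # wide end of the corridor
    vertical_res = depth_m * resolution_frac
    beam_footprint = corridor_min / beams  # crosswise width per beam
    return corridor_min, corridor_max, vertical_res, beam_footprint

# At 5,000 meters of depth: a 10-to-15-kilometer corridor, roughly 12-meter
# vertical resolution, and a bit over 120 meters of corridor per beam.
print(survey_geometry(5000))
```

Dividing 10 kilometers among 81 beams gives about 123 meters per beam, which squares with the 120-meter grid squares quoted above.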

  The raw data goes to an onboard SPARC station that performs data assessment in real time as a sort of quality assurance check, then streams the numbers onto DAT cassettes. The survey team is keeping an eye on the results, watching for any formations through which cable cannot be run. These are found more frequently in the Indian than in the Atlantic Ocean, mostly because the Atlantic has been charted more thoroughly.

  Steep slopes are out. A cable that traverses a steep slope will always want to slide down it sideways, secretly rendering every nautical chart in the world obsolete while imposing unknown stresses on the cable. This and other constraints may throw an impassable barrier across the proposed route of the cable. When this happens, the survey ship has to backtrack, move sideways, and survey other corridors parallel and adjacent to the first one, gradually building a map of a broader area, until a way around the obstruction is found. The proposed route is redrafted, and the survey ship proceeds.

  The result is a shitload of DAT tapes and a good deal of other data as well. For example, in water less than 1,200 meters deep, they also use sidescan sonar to generate analog pictures of the bottom—these look something like black-and-white photographs taken with a point light source, with the exception that shadows are white instead of black. It is possible to scan the same area from several different directions and then digitally combine the images to make something that looks just like a photo. This may provide crucial information that would never show up on the survey—for example, a dense pattern of anchor scars indicates that this is not a good place to lay a cable. The survey ship can also drop a flowmeter that will provide information about currents in the ocean.

  The result of all this, in the case of the FLAG survey, was about a billion data points for the bathymetric survey alone, plus a mass of sidescan sonar plots and other documentation. The tapes and the plots filled a room about 5 meters square all the way to the ceiling. The quantity of data involved was so vast that to manage it on paper, while it might have been theoretically possible given unlimited resources, was practically impossible given that FLAG is run by mortals and actually has to make money. FLAG is truly an undertaking of the digital age in that it simply couldn’t have been accomplished without the use of computers to manage the data. Evans’s mission was to present FLAG with a final survey report. If he had done it the old-fashioned way, the report would have occupied some 52 linear feet of shelf space, plus several hefty cabinets full of charts, and the inefficiency of dealing with so much paper would have made it nearly impossible for FLAG’s decision makers to grasp everything.

  Instead, Evans bought FLAG a PC and a plotter. During the summer of 1994, while the survey data was still being gathered, he had some developers write browsing software. Keeping in mind that FLAG’s investors were mostly high-finance types with little technical or nautical background, they gave the browser a familiar, easy-to-use graphical user interface. The billion data points and the sidescan sonar imagery were boiled down into a form that would fit onto 5 CD-ROMs, and in that form the final report was presented to FLAG at the end of 1994. When FLAG’s decision makers wanted to check out a particular part of the route, they could zoom in on it by clicking on a map, picking a small square of ocean, and blowing it up to reveal several different kinds of plots: a topographic map of the seafloor, information abstracted from the sidescan sonar images, a depth profile along the route, and another profile showing the consistency of the bottom—whether muck, gravel, sand, or hard rock. All of these could be plotted out on meter-wide sheets of paper that provided a much higher-resolution view than is afforded by the computer screen.

  This represents a noteworthy virtuous circle—a self-amplifying trend. The development of graphical user interfaces has led to rapid growth in personal computer use over the last decade, and the coupling of that technology with the Internet has caused explosive growth in the use of the World Wide Web, generating enormous demand for bandwidth. That (in combination, of course, with other demands) creates a demand for submarine cables much longer and more ambitious than ever before, which gets investors excited—but the resulting project is so complex that the only way they can wrap their minds around it and make intelligent decisions is by using a computer with a graphical user interface.

  HACKING WIRES

  As you may have figured out by this point, submarine cables are an incredible pain in the ass to build, install, and operate. Hooking stuff up to the ends of them is easy by comparison. So it has always been the case that cables get laid first and then people begin trying to think of new ways to use them. Once a cable is in place, it tends to be treated not as a technological artifact but almost as if it were some naturally occurring mineral formation that might be exploited in any number of different ways.

  This was true from the beginning. The telegraphy equipment of 1857 didn’t work when it was hooked up to the first transatlantic cable. Kelvin had to invent the mirror galvanometer, and later the siphon recorder, to make use of it. Needless to say, there were many other Victorian hackers trying to patent inventions that would enable more money to be extracted from cables. One of these was a Scottish-Canadian-American elocutionist named Alexander Graham Bell, who worked out of a laboratory in Boston.

  Bell was one of a few researchers pursuing a hack based on the phenomenon of resonance. If you open the lid of a grand piano, step on the sustain pedal, and sing a note into it, such as a middle C, the strings for the piano’s C keys will vibrate sympathetically, while the D strings will remain still. If you sing a D, the D strings vibrate and the C strings don’t. Each string resonates only at the frequency to which it has been tuned and is deaf to other frequencies.

  If you were to hum out a Morse code pattern of dots and dashes, all at middle C, a deaf observer watching the strings would notice a corresponding pattern of vibrations. If, at the same time, a second person was standing next to you humming an entirely different sequence of dots and dashes, but all on the musical tone of D, then a second deaf observer, watching the D strings, would be able to read that message, and so on for all the other tones on the scale. There would be no interference between the messages; each would come through as clearly as if it were the only message being sent. But anyone who wasn’t deaf would hear a cacophony of noise as all the message senders sang in different rhythms, on different notes. If you took this to an extreme, built a special piano with strings tuned as close to each other as possible, and trained the message senders to hum Morse code as fast as possible, the sound would merge into an insane roar of white noise.

  Electrical oscillations in a wire follow the same rules as acoustical ones in the air, so a wire can carry exactly the same kind of cacophony, with the same results. Instead of using piano strings, Bell and others were using a set of metal reeds like the ones in a harmonica, each tuned to vibrate at a different frequency. They electrified the reeds in such a way that they generated not only acoustical vibrations but corresponding electrical ones. They sought to combine the electrical vibrations of all these reeds into one complicated waveform and feed it into one end of a cable. At the far end of the cable, they would feed the signal into an identical set of reeds. Each reed would vibrate in sympathy only with its counterpart on the other end of the wire, and by recording the pattern of vibrations exhibited by that reed, one could extract a Morse code message independent of the other messages being transmitted on the other reeds. For the price of one wire, you could send many simultaneous coded messages and have them all sort themselves out on the other end.

  To make a long story short, it didn’t work. But it did raise an interesting question. If you could take vibrations at one frequency and combine them with vibrations at another frequency, and another, and another, to make a complicated waveform, and if that waveform could be transmitted to the other end of a submarine cable intact, then there was no reason in principle why the complex waveform known as the human voice couldn’t be transmitted in the same way. The only difference would be that the waves in this case were merely literal representations of sound waves, rather than Morse code sequences transmitted at different frequencies. It was, in other words, an analog hack on a digital technology.

  We have all been raised to think of the telephone as a vast improvement on the telegraph, as the steamship was to the sailing ship or the electric lightbulb to the candle, but from a hacker tourist’s point of view, it begins to seem like a lamentable wrong turn. Until Bell, all telegraphy was digital. The multiplexing system he worked on was purely digital in concept even if it did make use of some analog properties of matter (as indeed all digital equipment does). But when his multiplexing scheme went sour, he suddenly went analog on us.

  Fortunately, the story has a happy ending, though it took a century to come about. Because analog telephony did not require expertise in Morse code, anyone could take advantage of it. It became enormously popular and generated staggering quantities of revenue that underwrote the creation of a fantastically immense communications web reaching into every nook and cranny of every developed country.

  Then modems came along and turned the tables. Modems are a digital hack on an analog technology, of course; they take the digits from your computer and convert them into a complicated analog waveform that can be transmitted down existing wires. The roar of white noise that you hear when you listen in on a modem transmission is exactly what Bell was originally aiming for with his reeds. Modems, and everything that has ensued from them, like the World Wide Web, are just the latest example of a pattern that was established by Kelvin 140 years ago, namely, hacking existing wires by inventing new stuff to put on the ends of them.

  It is natural, then, to ask what effect FLAG is going to have on the latest and greatest cable hack: the Internet. Or perhaps it’s better to ask whether the Internet affected FLAG. The explosion of the Web happened after FLAG was planned. Taketo Furuhata, president and CEO of IDC, which runs the Miura station, says: “I don’t know whether Nynex management foresaw the burst of demand related to the Internet a few years ago—I don’t think so. Nobody—not even AT&T people—foresaw this. But the demand for Internet transmission is so huge that FLAG will certainly become a very important pipe to transmit such requirements.”

  John Mercogliano, vice president—Europe, Nynex Network Systems (Bermuda) Ltd., says that during the early 1990s when FLAG was getting organized, Nynex executives felt in their guts that something big was going to happen involving broadband multimedia transmission over cables. They had a media lab that was giving demos of medical imaging and other such applications. “We knew the Internet was coming—we just didn’t know it was going to be called the Internet,” he says.

  FLAG may, in fact, be the last big cable system that was planned in the days when people didn’t know about the Internet. Those days were a lot calmer in the global telecom industry. Everything was controlled by monopolies, and cable construction was based on sober, scientific forecasts, analogous, in some ways, to the actuarial tables on which insurance companies predicate their policies.

  When you talk on the phone, your words are converted into bits that are sent down a wire. When you surf the Web, your computer sends out bits that ask for yet more bits to be sent back. When you go to the store and buy a Japanese VCR or an article of clothing with a Made in Thailand label, you’re touching off a cascade of information flows that eventually leads to transpacific faxes, phone calls, and money transfers.

  If you get a fast busy signal when you dial your phone, or if your Web browser stalls, or if the electronics store is always low on inventory because the distribution system is balled up somewhere, then it means that someone, somewhere, is suffering pain. Eventually this pain gets taken out on a fairly small number of meek, mild-mannered statisticians—telecom traffic forecasters—who are supposed to see these problems coming.

  Like many other telephony-related technologies, traffic forecasting was developed to a fine art a long time ago and rarely screwed up. Usually the telcos knew when the capacity of their systems was going to be stretched past acceptable limits. Then they went shopping for bandwidth. Cables got built.

  That is all past history. “The telecoms aren’t forecasting now,” Mercogliano says. “They’re reacting.”

  This is a big problem for a few different reasons. One is that cables take a few years to build, and, once built, last for a quarter of a century. It’s not a nimble industry in that way. A PTT thinking about investing in a club cable is making a 25-year commitment to a piece of equipment that will almost certainly be obsolete long before it reaches the end of its working life. Not only are they risking lots of money, but they are putting it into an exceptionally long-term investment. Long-term investments are great if you have reliable long-term forecasts, but when your entire forecasting system gets blown out of the water by something like the Internet, the situation gets awfully complicated.

  The Internet poses another problem for telcos by being asymmetrical. Imagine you are running an international telecom company in Japan. Everything you’ve ever done, since TPC–1 came into Ninomiya in ’64, has been predicated on circuits. Circuits are the basic unit you buy and sell—they are to you what cars are to a Cadillac dealership. A circuit, by definition, is symmetrical. It consists of an equal amount of bandwidth in each direction—since most phone conversations, on average, entail both parties talking about the same amount. A circuit between Japan and the United States is something that enables data to be sent from Japan to the U.S., and from the U.S. to Japan, at the same rate—the same bandwidth. In order to get your hands on a circuit, you cut a deal with a company in the States. This deal is called a correspondent agreement.

  One day, you see an ad in a magazine for a newfangled thing called a modem. You hook one end up to a computer and the other end to a phone line, and it enables the computer to grab a circuit and exchange data with some other computer with a modem. So far, so good. As a cable-savvy type, you know that people have been hacking cables in this fashion since Kelvin. As long as the thing works on the basis of circuits, you don’t care—any more than a car salesman would care if someone bought Cadillacs, tore out the seats, and used them to haul gravel.

  A few years later, you hear about some modem-related nonsense called the World Wide Web. And a year after that, everyone seems to be talking about it. About the same time, all of your traffic forecasts go down the toilet. Nothing’s working the way it used to. Everything is screwed up.