
Super Wi-Fi is better than just "super"

[This is still at 'first draft' quality, but I thought I'd post it sooner rather than later. Please ignore the typos for now.]

I recently tweeted my delight that the FCC's approval of "super Wi-Fi" is going to be regarded as a historic moment five years from now. I mean it.

Here's why:

In geek terms: this gives us permission to treat the airwaves like Ethernet networking, not like telco networking. More modern and more flexible.

In non-geek terms, this decision by the FCC makes it easier to innovate. It makes it safe and easy to try new things. With the possibility of experimentation come new applications and ideas. It will be a game-changer.

Let me explain...

Let's first look at how spectrum is allocated today: in blocks. If you want to do something "on the air", you request a license, go through tons of approvals, put together a consortium of like-minded folks, wait months or years, and finally get a block of spectrum from frequency X to frequency Y in a particular geographic area. The process is so long that by now I've forgotten what the original idea was. Sigh.

Of course, when the FCC was created in the 1930s this made sense. We didn't know any other way, and we didn't have the technology to do it any other way. Electronics were imprecise and stupid (and analog), so the best thing to do was to allocate big blocks and waste some space by putting gaps between those blocks to account for "drift". It was centrally controlled and graceless, but it worked. To manage a precious, rare resource, it made sense. Did I mention this was the 1930s?

This is comparable to how telecoms traditionally have handled bandwidth. You may recall that a T1 line has 24 channels (DS0s) of 64 kbit/s each. The total bandwidth of a T1 is 1.544 Mbit/s, but it is divided into 24 timeslots. If you are DS0 number 13, you know that your bits are transmitted 13 time units after each clock sync. If you don't have anything to transmit, zeros are transmitted for you. It is wasteful and graceless, but that's the best you could do in 1961 (which meant most of the design was done in the 1950s). That's nearly a decade before we landed a man on the moon and 20 years before the Commodore 64 first shipped.
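
To make the waste concrete, here's a minimal sketch (Python, with made-up channel data) of T1-style time-division multiplexing: every DS0 gets its byte in every frame whether it has data or not, so idle channels still burn their slice of the wire.

```python
# Sketch of T1-style time-division multiplexing (illustrative only).
# Each frame carries one byte per DS0 channel, in a fixed order.
# Idle channels still occupy their slot -- it is filled with zeros.

NUM_CHANNELS = 24  # DS0s in a T1

def build_frame(pending):
    """pending maps channel number -> bytes waiting to be sent."""
    frame = bytearray()
    for ch in range(NUM_CHANNELS):
        queue = pending.get(ch, b"")
        if queue:
            frame.append(queue[0])
            pending[ch] = queue[1:]
        else:
            frame.append(0)  # nothing to send: the slot is wasted anyway
    return bytes(frame)

# Only two of the 24 channels have anything to say...
pending = {3: b"hello", 13: b"hi"}
frame = build_frame(pending)
print(frame.hex())
print(sum(b == 0 for b in frame), "of", NUM_CHANNELS, "slots wasted")
```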

Compare that kind of resource allocation scheme to Ethernet. On Ethernet, every device that wants to transmit "listens" to see if anyone else is talking, and as soon as there is silence it just starts sending. If two devices send at the same time, it is considered a "collision" and both parties back off and retry a random amount of time later. There is no central authority deciding who should talk when. Everyone just has to agree to use the same rules for how to back off when there is a problem. No central authority, just "benevolent self-interest" that requires everyone to follow the same rules. You follow the rules out of your own self-interest because if everyone does, everyone wins: talk when nobody else is, and politely back off if you find you are interrupting someone.
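
Here's a rough sketch of that listen-send-back-off loop. The carrier_sense() and transmit() functions are hypothetical stand-ins for real hardware, and the backoff numbers only loosely mirror classic Ethernet:

```python
import random
import time

# Sketch of CSMA/CD-style behavior (simplified).

SLOT_TIME = 0.000051  # rough Ethernet slot time, in seconds

def send_frame(frame, carrier_sense, transmit, max_attempts=16):
    for attempt in range(max_attempts):
        while carrier_sense():          # wait for silence on the wire
            pass
        if transmit(frame):             # True means no collision detected
            return True
        # Collision: back off a random number of slot times.
        # The range doubles on each attempt (capped), as in Ethernet.
        slots = random.randint(0, 2 ** min(attempt + 1, 10) - 1)
        time.sleep(slots * SLOT_TIME)
    return False                        # give up after too many collisions
```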

It works because data protocols are implemented by computers, which can think, look around, and retry when there is a problem. Analog electronics couldn't do that.

Also compare this to how TCP/IP allocates bandwidth on the internet. Send all the data you want, ramping up to the maximum rate you can send. If there is congestion, the network drops your packets and you respond by slowing down. There is no attempt to allocate the perfect amount of bandwidth for you so that you never have to deal with congestion. Your protocol follows specific rules on how to back off, and it works because everyone is following the same rules. Your protocol can try to cheat, but it is in your own self-interest to follow the rules, and the rules are simple: talk, and politely back off when there is an indication the system is overloaded.
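
A toy sketch of that ramp-up-and-back-off behavior, loosely in the spirit of TCP's additive-increase/multiplicative-decrease; the loss signal here is simulated, and real TCP is far more elaborate:

```python
import random

# Toy additive-increase / multiplicative-decrease loop, loosely in the
# spirit of TCP congestion control.  Loss is simulated, not measured.

cwnd = 1.0          # congestion window, in segments
for rtt in range(40):
    lost = random.random() < 0.1   # pretend 10% of rounds see a drop
    if lost:
        cwnd = max(1.0, cwnd / 2)  # multiplicative decrease on congestion
    else:
        cwnd += 1.0                # additive increase while things are fine
    print(f"rtt {rtt:2d}: cwnd = {cwnd:4.1f}  {'LOSS' if lost else ''}")
```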

Imagine if the telecom world or other old-school thinkers had tried to invent a data-networking system based on their antiquated values. Every time you wanted to SSH to a host, your protocol would first contact some great authority in the sky, beg for an allocation of bandwidth, promise not to go outside that allocation, and receive that allocation. With that allocation you would then start transmitting, always careful not to send more than you had promised. Once you were done, you would notify the great authority in the sky and the bandwidth would be freed for use by others. But what if there wasn't bandwidth available? During the allocation request you would get a busy signal (or be sent to voicemail?) so you knew to try again later. At the end of the month you'd get a "phone bill" listing every connection you'd made, with a dollar amount to be paid.

Now, obviously that's a silly way to run a network. Nobody would create a network like that... oh wait... someone did!

Do you think the telecom industry learned from experienced data-network inventors? Heck no. In fact, the telecom industry's response to the internet and TCP/IP was ATM (the confusingly named "Asynchronous Transfer Mode"), which was based on sending data in timeslots or "cells" of 53 bytes each. (That's not a typo... yes, the packet size is a prime number!) The first stage of each session was an allocation process. You (your software) would talk to your nearest router and explain how much bandwidth you needed and for how long. It would negotiate on your behalf with every router between you and the destination to allocate your specified amount of bandwidth. That bandwidth would be allocated just to you. (More on that later.) You then had a "virtual circuit" to transmit your data on until you were done, and then the routers de-allocated that bandwidth allotment.
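
To see why 53-byte cells feel alien to data networking, here's a small sketch that chops a variable-length packet into ATM-style cells (5-byte header plus 48-byte payload; the header contents below are a fake placeholder, not a real ATM header). Short packets get padded and long ones pay a 5-in-53 header tax on every cell:

```python
# Chop a variable-length packet into ATM-style cells:
# each cell is 53 bytes = 5-byte header + 48-byte payload.
# The header below is a fake placeholder, not a real ATM header.

CELL_SIZE = 53
HEADER_SIZE = 5
PAYLOAD_SIZE = CELL_SIZE - HEADER_SIZE   # 48 bytes

def to_cells(packet: bytes, vci: int) -> list[bytes]:
    header = vci.to_bytes(2, "big") + b"\x00\x00\x00"   # placeholder header
    cells = []
    for i in range(0, len(packet), PAYLOAD_SIZE):
        chunk = packet[i:i + PAYLOAD_SIZE]
        chunk = chunk.ljust(PAYLOAD_SIZE, b"\x00")       # pad the last cell
        cells.append(header + chunk)
    return cells

packet = b"x" * 1500                     # a typical Ethernet-sized packet
cells = to_cells(packet, vci=42)
print(len(cells), "cells,", len(cells) * CELL_SIZE, "bytes on the wire for",
      len(packet), "bytes of data")      # 32 cells, 1696 bytes for 1500
```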

Laughable? Yes. But from 1988 to 1995-ish the telecom industry tried to "take over" the internet and replace it with ATM. Imagine every SSH session taking 0.5 to 2 seconds to start up while bandwidth was allocated to you. Of course, that didn't work well, so the ATM Forum proposed that you allocate yourself a big chunk and use it for all your TCP/IP needs, doing your own suballocations from that larger block. In other words, they were going to give you a T1. In fact, "T1 Emulation" was a big feature of ATM equipment.

And... oh dear. You would get a phone bill at the end of the month, listing all the connections you made and a sum total for you to pay. Insane. In fact, one of the jokes about ATM was that the acronym stood for "A Tariffing Mechanism".

ATM did have one concession to the way real-world data networks operate. The channels didn't have to be fixed sizes; they could be "variable rate". Also, if you weren't using your entire allocation, the network could use the "spare" bandwidth for "best effort" protocols. In fact, a large part of the research around ATM was how to oversubscribe allocations and still ensure that all bandwidth guarantees would be satisfied.

Here's the part I think is the funniest. The most complex parts of an ATM system are the mechanisms that assure the endpoints see a perfect network: perfect bandwidth allocation, perfect reliability, perfect fidelity. However, at the top of the protocol stack, you (your protocol) still have to do end-to-end error checking. Even if you are promised the network cannot possibly drop or corrupt a packet, the top-level protocols (the applications) still check for problems, because the error may have been somewhere else: the cable between the computer and the perfect network, for example. Thus ATM went through all this hand-wringing on behalf of upper-level protocols that didn't need it and found little utility in it. Here is a list of things that data protocols can handle on their own: missing packets, dropped packets, corrupted packets, data being sent too fast, data not being sent fast enough. Did I say "can"? They have to. Thus, ATM's generous offer to handle all of that for you is a waste of effort on ATM's part. It does, however, justify the ATM provider's ability to send you a bill. What a great business model.
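
That end-to-end argument is easy to sketch: no matter what the network promises, the application checks the data itself. A hypothetical example using a checksum over the whole transfer:

```python
import hashlib

# End-to-end check: the application verifies the data itself,
# no matter what guarantees the network claims to provide.

def send(payload: bytes):
    digest = hashlib.sha256(payload).hexdigest()
    return payload, digest          # both travel to the receiver

def receive(payload: bytes, digest: str) -> bytes:
    if hashlib.sha256(payload).hexdigest() != digest:
        raise IOError("corrupted in transit -- ask the sender to retransmit")
    return payload

data, check = send(b"important bits")
assert receive(data, check) == b"important bits"
```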

The fact that ATM didn't replace the internet was no accident. It was a huge effort to "push back" against some very big heavyweights. If you recall, these were the same years that small ISPs were being bought up by telcos. Eventually all the major ISPs were entirely owned by telcos. The equipment companies were entering the telecom space and didn't want to piss off their new telco customers. Thus, the people who needed to fight back against ATM were now all owned by megacorps that wanted ATM to win. Wired Magazine wrote the definitive history of this battle, and I encourage everyone interested in internet history and governance to read it. The people in that story are heroes.

This brings us back to the recent FCC decision.

The airwaves are allocated in blocks. It is wasteful, graceless, and ham-fisted, but it works. Most of all, it worked given the 1930s technology that created it.

The new regulations permit radio transmitters to share spectrum. As long as everyone plays by the same rules it all "just works". As long as everyone has an incentive to play by the rules, it will continue to work. The rules are both "the carrot and the stick". "The stick" is FCC penalties. "The carrot" is that if everyone plays by the rules, everyone will continue to be able to play.

So here's how it works. If you want to broadcast on frequency X, you listen to see if anyone else is broadcasting. If nobody is, you start broadcasting, until you detect that someone else is broadcasting, at which point you have to stop. It's a lot more technical than that, but that's the premise. It is like Ethernet and TCP/IP: talk when nobody else is, and politely back off if you find you are interrupting someone.
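
A minimal sketch of that listen-before-talk loop; channel_busy(), send_chunk(), and have_data() are hypothetical stand-ins for real spectrum sensing and radio hardware, and the actual rules (sensing thresholds, vacate times) are far more detailed:

```python
import time

# Listen-before-talk, heavily simplified.  The callbacks are
# hypothetical stand-ins for real radio and sensing hardware.

def transmit_politely(channel_busy, send_chunk, have_data,
                      check_interval=0.01):
    while have_data():
        if channel_busy():
            time.sleep(check_interval)   # someone else is talking: stay quiet
            continue
        send_chunk()                     # channel is clear: send a little
        # Re-check the channel between chunks so we vacate quickly
        # if another (possibly licensed) user shows up.
```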

Of course, you probably are going to listen to many frequencies: scanning up and down for free frequencies so you always have enough available to send the data you have. One of the FCC concessions is that there will be a database of frequencies that are allocated "old school style" and devices will have to stay away from those. Devices will download updates from that database periodically. The database is geographic. The entries are not "don't use channel 9" but "In New York, Channel 9 is in use".
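
Here's a sketch of what consulting that database might look like. The data structure, city-based lookup, and channel numbers are made up for illustration; real whitespace databases are queried over the network by geographic coordinates, but the idea is the same: check before you transmit.

```python
# Sketch of checking a protected-channel database before transmitting.
# The contents and city-based lookup are made up for illustration.

protected = {
    "New York": {9, 11, 13},     # channels licensed "old school style"
    "Chicago":  {2, 7},
}

def usable_channels(city: str, candidates: range) -> list[int]:
    in_use = protected.get(city, set())
    return [ch for ch in candidates if ch not in in_use]

print(usable_channels("New York", range(2, 14)))
# -> [2, 3, 4, 5, 6, 7, 8, 10, 12]
```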

The frequencies that are now available include the "whitespace" airwaves (channels that are unused in the TV frequency range) as well as the gaps between channels that used to be needed due to "analog drift". Now that transmitters are digital, they are more precise (they can stay within a narrower frequency band) and self-correcting (no drift). Being able to use those gaps alone is a big deal.

At last! Instead of going through tons of work to use any airwaves at all, we can simply build devices that know how to "talk when nobody else is", scan frequencies for available bandwidth, and sync up to a central database.

These are things that modern computers do very well.

None of this was possible until recently. In the 1970s a transistor radio might cost $10 and be so simple that it came as a kit. Imagine if it had also needed to scan frequencies and so on. With 1970s semiconductor technology it would have been a million-dollar product, not something anyone could afford. Oh, and your hand-held radio would only fit in your hand if your hands were as big as the Statue of Liberty's.

Moore's law predicts the "march of progress" in semiconductors. It made it easy to predict when such compute power would become affordable, and therefore when such devices would become economically possible.
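
As a back-of-the-envelope illustration (the starting cost, target price, and doubling period below are made-up assumptions, not historical data), the prediction is just repeated halving:

```python
# Back-of-the-envelope Moore's-law projection.  The starting cost,
# target price, and doubling period are assumptions for illustration.

cost = 1_000_000.0      # assumed cost today of the needed compute, in dollars
target = 50.0           # price point at which a consumer device is viable
doubling_years = 1.5    # assumed cost-halving period

years = 0
while cost > target:
    cost /= 2
    years += doubling_years
print(f"affordable in roughly {years:.1f} years")   # ~22.5 years here
```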

While Moore's law may be hitting the limits of physics, we are still benefitting from it. Ironically, there are entire industries that have tried to deny its existence. The economic justification for creating ATM was based on the notion that silicon chips would never be sophisticated or powerful enough to process variable-size, large packets; network speed would hit a limit if we didn't change everything to 53-byte packets. This is entirely true to anyone who is ignorant of, or denies, Moore's law. The people who lobbied against "super Wi-Fi" and the use of whitespace were also ignorant of, or in denial about, Moore's law. Of course electronics could do this; it was only a matter of time. The music industry was told, based on Moore's law, which year MP3 decoders would be inexpensive enough to put music on a PC, which year the music could fit on a portable player, and which year downloading an MP3 would be economically feasible; their surprise when these things happened was due either to ignorance of, or denial about, Moore's law. "Feigning surprise" is one way to describe how someone acts when predictions they've ignored all come true.

But I digress...

This new FCC regulation is a major step forward. It is a modernization of how we allocate wireless frequencies. It is an acknowledgement of Moore's law and the improvements digital electronics bring to the field. It is the gateway for new experimentation which will lead to new wireless applications and services.

Mark my words! Five years from now we'll look back at all the progress that has happened and point to this day as the historic moment that started it all, even though the announcement was mostly ignored at the time.

Well, ignored by everyone except you, dear reader.

Tom Limoncelli

(See you Dec 22, 2016!)

P.S. The only coverage of this FCC decision that I've been able to find has been in the foreign press. What's up with that? It's as if the U.S. incumbents are in cahoots to make sure it will be easy to feign surprise about this some day.
