
Recently in Sysadmin Industry News Category

Technology website The Register called it. With the search called off, we must presume that Evi Nemeth is no longer with us. Their obit, "Godmother of Unix admins Evi Nemeth presumed lost at sea", gives an excellent overview of her life and influence.

In the coming months there will be many memorials and articles written about Evi, most by people that knew Evi better than I. That said, I'd like to share something that most people don't know.

Evi saved "sudo".

Sudo has joined popular culture (or at least popular geek culture) thanks to the famous XKCD cartoon: "sudo make me a sandwich". This has led to other sudo references, even a company cafeteria (see picture).

I think most people understand sudo as "a Unix/Linux command that forces something to happen, or forces a computer to do something", which is pretty close. But what it really does, and the historical impact it had, deserve to be better known.

The Unix operating system (and Linux, which is its clone) permits many users to log in to a computer at the same time. Each user is prevented from mucking with other people's stuff. However, there is one user called "root", the "super user", which can meddle with all files, unrestricted. This is the "janitorial" account used by system administrators to fix things, install and uninstall software, reboot, and so on. Think of it as a cardkey at a hotel that opens all the doors.

The "root" account has a password just like all accounts. Before sudo (1985) was popular (1995?), the system administrators would memorize their own password and also the "root" account password. When they needed to do maintenance they would log into the root account. There is even a command called "su" (super user) which makes it easy to temporarily switch to the "root" account for this reason. "su" requires you to enter the password for the account you will become. Therefore to become root you had to know the root account password.

"su" works just fine when you have one big Unix machine for the entire company, department or campus. It is easy to share the password among just the people that should have such heightened privilege.

That's fine for the 1980s, when there might be 1 or 2 big computers for a department shared by hundreds of users. The users themselves do not have "root access", just as the customers of a hotel do not get the "master passkey". However, with the workstation and PC revolution it was common to have hundreds or thousands of Unix computers in an organization. Typically they would have the same password for "root" on all of them. Again, this was fine because there may be 3-4 system administrators trusted with the password. But what if someone needed to do something as root on their own machine? With computers now owned by individuals, not departments, this caused a problem. You couldn't tell the owner of a machine the root password because then they'd know the root password for every machine in the network!

A number of terrible solutions were created. One was to set a different "root" password on each workstation so the owner could know the password, but then the sysadmins would have to memorize hundreds of passwords, store them somewhere (which is insecure), or just give up control of all the workstations that they were hired to control.

Another solution, popular at Rutgers, was called "slide". If your account had access to the "slide" command, you could "slide into root" without being prompted for a password. This was bad for many reasons but I recall two that were most important: first, since it didn't ask for a password it basically made your account as powerful as "root", which defeats the purpose of having users "compartmentalized" from each other. It also meant that if you walked away from your computer, someone could easily slide into root without permission. (nobody locked their screen back then)

Then there was this brilliant command: "sudo". It permitted fine-grained delegation of power with centralized control. It was like "su" but instead of asking for the root's password it asked for your password. It then decided if you were allowed to "become root" based on an authorization matrix created by the system administrators. No need to tell additional people the root password or create additional root passwords. It was "fine grained" meaning that the sysadmins that configured sudo could specify if a particular user could run a specific command as root (like, maybe just give people permission to eject CD-ROMs... something that only root was allowed to do for reasons that are too long to explain here) or full access to everything. It had "centralized control" in that the system administrators could configure sudo in a way that was maintainable and wouldn't get sloppy.
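
To make the "authorization matrix" idea concrete, here is a minimal sketch of what such a policy looks like in a modern sudoers file. The usernames, hostnames, and the eject example are invented for illustration; the original post doesn't show any configuration:

    # /etc/sudoers (illustrative excerpt)
    # alice may run only the eject command, as root, on the lab workstations.
    Host_Alias LABWS = ws01, ws02, ws03
    alice    LABWS = (root) /usr/bin/eject

    # Members of the admins group get full root access on every machine.
    %admins  ALL = (ALL) ALL

One line per rule, maintained by the sysadmins in one place: that is the fine-grained, centralized control described above.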

In the move from big central servers to a world of workstations, sudo was a miracle.

It was so radical, in fact, that I didn't quite get why anyone would use it. I was in an environment that was treating 120 workstations the way we used to treat big centralized computers: three sysadmins all knew the root password and none of our users did, nor could they do "root things" on their own machines. [You may be wondering... After being asked to eject a CD-ROM from someone's computer for the millionth time, we made a special command that enabled self-service ejects. If I had had sudo, I could have changed the authorization matrix instead of spending a week writing code for that command.]

So how does Evi play into this?

Sudo was first conceived and implemented around 1980 at SUNY/Buffalo. It wasn't released publicly until about 1985. Between 1985 and 1991 the University of Colorado at Boulder and others kept improving it. There were many versions floating around, each with slightly different features, compatibility (or lack thereof) with different Unix variants, and security problems. There was an "official" version but it was, for lack of a better term, abandoned and went years without a new release.

Eventually the University of Colorado at Boulder variant became the official version, thanks to its maintainers taking a leadership role, setting up a website, and so on. This was the work of Todd C. Miller, who had been contributing to the project since 1992.

Todd recently told me in email that Evi "really had a major impact" on his life. Evi encouraged him to modify sudo "and was a major factor in its acceptance due to the inclusion of sudo in her System Administrator's Handbook."

Evi also advocated the use of sudo at the Usenix LISA tutorials. Every year at Usenix LISA she would teach a tutorial called "Hot Topics in System Administration". Before the internet, this was the best way to find out what the new tools in Unix system administration were. People would come to LISA just to see what new stuff was going to be revealed. This is where I first heard about amd (the automount replacement), sudo, RRDtool, ISC DHCP, and many other technologies that were new then, but soon became "standard" for any Unix/Linux system.

In that way, Evi saved sudo. She had the foresight to see that the future of Unix/Linux was distributed workstations and that sudo was an important step to making that vision a reality. She advocated that people use sudo via her books, articles, and training, thereby giving it the momentum required to break through the cacophony of lesser solutions.

sudo is now part of the "base install" of nearly every Unix/Linux system available today. It is the standard way to run things as root. As a result, managing systems is much easier and more secure.

We can thank Evi for that.

On a more personal note: when I think of Evi the picture in my mind is from the first time I met her. I was a wide-eyed young sysadmin in a class she taught at LISA. She was standing at the front of the packed classroom. A grey haired, diminutive woman passionately educating the first large generation of Unix/Linux system administrators about how to create the future by staying on top of the latest tools and techniques. Thank you, Evi!

Evi Nemeth's son is still optimistic and so am I.

Here's what I glean from this report on 3NewsNZ:

  • The last txt message from Evi wasn't the last txt message. Another txt was sent but not received.
  • The phone company was able to reveal the last txt and its geolocation.
  • The last txt was from Danielle and said "Sails Shredded last night, now bare poles, going 4 knot 310 degrees will update course info at 6pm."
  • Given that info, it should be possible to locate them.
  • However, no update at 6pm tells me we should be prepared for the worst.

Read the full article here from 3NewsNZ

Of course, our thoughts and well-wishes go out to the families and all involved.

Update: This may be the source article the others were written from.

[This is still at 'first draft' quality, but I thought I'd post it sooner rather than later. Please ignore the typos for now.]

I recently twittered my delight that the FCC approval of "super Wi-Fi" is going to be regarded as a historic moment five years from now. I mean it.

Here's why:

In geek terms: This gives permission to treat the airwaves like Ethernet networking, not like Telco networking. More modern and more flexible.

In non-geek terms, this decision by the FCC makes it easier to innovate. It makes it safe and easy to try new things. With the possibility of experimentation come new applications and ideas. It will be a game-changer.

Let me explain...

Let's first look at how spectrum is allocated today: in blocks. You want to do something "on the air"? Request a license, go through tons of approvals, put together a consortium of like-minded folks, wait months or years, and get a block of spectrum from frequency X to frequency Y in a particular geographic area. The process is so long that by now I've forgotten what the original idea was. Sigh.

Of course, when the FCC was created in the 1930s this made sense. We didn't know any other way and we didn't have the technology to do it any other way. Electronics were imprecise and stupid (and analog), so the best thing to do was to allocate big blocks and waste some space by putting gaps between those blocks to account for "drift". It was centrally controlled and graceless, but it worked. To manage a precious, rare resource, it made sense. Did I mention this was the 1930s?

This is comparable to how telecoms traditionally have handled bandwidth. You may recall that a T1 line has 24 channels (DS0s) that are 64 kbit/s each. The total bandwidth of a T1 is about 1.5 Mbit/s, but it is divided into 24 timeslots. If you are DS0 number 13, you know that your bits are transmitted 13 time units after each clock sync. If you don't have anything to transmit, zeros will be transmitted for you. It is wasteful and graceless, but that's the best you could do in 1961 (which meant most of the design was done in the 1950s). That's nearly a decade before we landed a man on the moon and 20 years before the Commodore 64 first shipped.
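
As a toy illustration of that timeslot discipline (a sketch only, ignoring real T1 framing details), here is the idea in Python:

    # Toy sketch of T1-style time-division multiplexing: 24 fixed slots,
    # one byte per channel per frame. A channel with nothing to say still
    # owns its slot, so zeros are transmitted on its behalf.
    def build_frame(channel_data):
        """channel_data maps channel number (0-23) to one byte of data."""
        frame = bytearray()
        for slot in range(24):
            frame.append(channel_data.get(slot, 0))  # idle slots waste their bandwidth
        return bytes(frame)

    # Only channel 13 has something to send; the other 23 slots carry padding.
    print(build_frame({13: 0x42}).hex())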

Compare that kind of resource allocation scheme to Ethernet. On Ethernet, every device that wants to transmit "listens" to see if anyone else is talking and, as soon as there is silence, just starts sending. If two devices send at the same time it is considered a "collision" and both parties back off and retry a random amount of time later. There is no central authority deciding who should talk when. Everyone just has to agree to use the same rules for how to back off when there is a problem. No central authority, just "benevolent self-interest": you follow the rules because if everyone does, everyone can win. Talk when nobody else is and politely back off if you find you are interrupting someone.
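
Here is a rough sketch of those rules in Python. The FakeMedium class and its probabilities are invented stand-ins for the shared wire; the point is the listen-then-send loop and the binary exponential backoff:

    # Sketch of the Ethernet rules: listen before talking, and back off a
    # random, growing amount after a collision (binary exponential backoff).
    import random
    import time

    class FakeMedium:
        def is_busy(self):
            return random.random() < 0.3     # ~30% chance someone else is talking
        def transmit(self, frame):
            return random.random() > 0.1     # ~10% chance of a collision

    def send_with_backoff(medium, frame, max_attempts=16):
        for attempt in range(max_attempts):
            while medium.is_busy():          # talk only when nobody else is
                time.sleep(0.0001)
            if medium.transmit(frame):       # True means no collision detected
                return True
            # Collision: wait a random number of slot times, doubling the
            # range each attempt, capped at 2^10 slots.
            slots = random.randint(0, 2 ** min(attempt + 1, 10) - 1)
            time.sleep(slots * 51.2e-6)      # 51.2 us slot time (classic 10 Mb/s Ethernet)
        return False                         # too many collisions: give up

    print(send_with_backoff(FakeMedium(), b"hello"))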

It works because data protocols are done by computers that can think, look around, and retry when there is a problem. Analog electronics couldn't do that.

Also compare this to how TCP/IP allocates bandwidth on the internet. Send all the data you want, ramping up to the maximum rate you can send. If there is congestion, the network drops your packets and you respond by slowing down. There is no attempt to allocate the perfect amount of bandwidth for you so that you never have to deal with congestion. Your protocol follows specific rules on how to back off, and it works because everyone is following the same rules. Your protocol could try to cheat, but it is in your own self-interest to follow the rules, and the rules are simple: talk, and politely back off when there is an indication the system is overloaded.
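
A sketch of that "politely back off" behavior, in the spirit of TCP's additive-increase/multiplicative-decrease (the idea only, not a faithful TCP implementation):

    # Additive increase, multiplicative decrease (AIMD) of a congestion window.
    def adjust_window(cwnd, packet_was_dropped):
        if packet_was_dropped:
            return max(1.0, cwnd / 2)   # congestion signal: cut the sending rate in half
        return cwnd + 1.0               # all went well: probe for a little more bandwidth

    cwnd = 1.0
    for drop in [False, False, False, True, False, False]:
        cwnd = adjust_window(cwnd, drop)
        print(cwnd)                     # 2.0, 3.0, 4.0, 2.0, 3.0, 4.0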

Imagine if the telecom world or other old-school thinkers had tried to invent a data-networking system based on their old antiquated values? Every time you wanted to SSH to a host, first your protocol would contact some great authority in the sky, beg for an allocation of bandwidth, promise not to go outside that allocation, receive that allocation. With that allocation you would then start transmitting, always careful not to send more than you had promised. Once you were done you would notify the great authority in the sky and the bandwidth would be freed for use by others. But what if there wasn't bandwidth available? During the allocation request you would be given a busy signal (or sent to voicemail?) so you knew to try again later. At the end of the month you'd get a "phone bill" that would list every connection you've made with a dollar amount to be paid.

Now, obviously that's a silly way to run a network. Nobody would create a network like that... oh wait... someone did!

Do you think the telecom industry learned from experienced data network inventors? Heck no. In fact, the telecom industry's response to the internet and TCP/IP was ATM (the confusingly named "Asynchronous Transfer Mode") which was based on sending data in timeslots or "cells" that are 53 bytes each. (That's not a typo... yes, the packet size is a prime number!). The first stage of each session was an allocation process. You (your software) would talk to your nearest router and explain how much bandwidth you needed and for how long. It would negotiate on your behalf with every router between you and the destination to allocate your specified amount of bandwidth. That bandwidth would be allocated just to you. (more on that later) You then have this "virtual circuit" that you transmit your data on until you are done and then the routers de-allocate that bandwidth allotment.
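
The odd 53-byte figure, by the way, is just 5 bytes of header plus a fixed 48-byte payload, so a variable-size packet has to be chopped into cells and the last one padded. A toy sketch of that segmentation (the header is reduced to a made-up channel id):

    # Why 53 bytes: each ATM cell is a 5-byte header plus a fixed 48-byte payload.
    CELL_PAYLOAD = 48
    HEADER_SIZE = 5

    def segment(packet, channel_id):
        cells = []
        for i in range(0, len(packet), CELL_PAYLOAD):
            payload = packet[i:i + CELL_PAYLOAD].ljust(CELL_PAYLOAD, b"\x00")
            header = channel_id.to_bytes(2, "big") + b"\x00" * (HEADER_SIZE - 2)
            cells.append(header + payload)          # every cell is exactly 53 bytes
        return cells

    print([len(cell) for cell in segment(b"x" * 100, channel_id=42)])   # [53, 53, 53]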

Laughable? Yes. But from 1988 to 1995-ish the telecom industry tried to "take over" the internet and force it to be replaced with ATM. Imagine every SSH session taking 0.5 to 2 seconds to start up as bandwidth was allocated to you. Of course, that didn't work well, so the ATM Forum proposed that you allocate yourself a big chunk and use it for all your TCP/IP needs, doing your own suballocations from that larger block. In other words, they were going to give you a T1. In fact, "T1 emulation" was a big feature of ATM equipment.

And... oh dear. You would get a phone bill at the end of the month, listing all the connections you made and a sum total for you to pay. Insane. In fact, one of the jokes about ATM was that the acronym stood for "A Tariffing Mechanism".

ATM did have one concession to the way real-world data networks operate. The channels didn't have to be fixed sizes. They could be "variable rate". Also, if you weren't using your entire allocation the network could use the "spare" bandwidth for "best effort" protocols. In fact, a large part of the research around ATM was how to oversubscribe allocations and still assure that all bandwidth guarantees would be satisfied.

Here's the part I think is the funniest. The most complex parts of an ATM system are the mechanisms that assure the endpoints see a perfect network with perfect bandwidth allocation, perfect reliability, and perfect fidelity. However, at the top of the protocol stack you (your protocol) still have to do end-to-end error checking. Even if you are promised the network cannot possibly drop or corrupt a packet, the top-level protocols (the applications) still check for problems, because the error may have been somewhere else: the cable between the computer and the perfect network, for example. Thus ATM went through all this hand-wringing on behalf of upper-level protocols that didn't need it, or find much utility in it. Here is a list of things that data protocols can handle on their own: missing packets, dropped packets, corrupted packets, data being sent too fast, data not being sent fast enough. Did I say "can"? They have to. Thus, ATM's generous offer to handle all of that for you is a waste of effort on ATM's part. It does, however, justify the ability for the ATM provider to send you a bill. What a great business model.

The fact that ATM didn't replace the internet was no accident. It took a huge effort to push back against some very big heavyweights. If you recall, these were the same years that small ISPs were being bought up by telcos. Eventually all the major ISPs were entirely owned by telcos. The equipment companies were entering the telecom space and didn't want to piss off their new telco customers. Thus, the people that needed to fight back against ATM were now all owned by megacorps that wanted ATM to win. Wired Magazine wrote the definitive history of this battle and I encourage everyone interested in internet history and governance to read it. The people in this story are heroes.

This brings us back to the recent FCC decision.

The airwaves are allocated in blocks. It is wasteful, graceless and ham-fisted but it works. And most of all, it worked given the technology of the 1930s that created it.

The new regulations permit radio transmitters to share spectrum. As long as everyone plays by the same rules it all "just works". As long as everyone has an incentive to play by the rules, it will continue to work. The rules are both "the carrot and the stick". "The stick" is FCC penalties. "The carrot" is that if everyone plays by the rules, everyone will continue to be able to play.

So here's how it works. If you want to broadcast on frequency X, you listen to see if anyone else is broadcasting. If nobody is, you start broadcasting until you detect that someone else is broadcasting, at which point you have to stop. It's a lot more technical than that, but that's the premise. It is like Ethernet and TCP/IP: talk when nobody else is and politely back off if you find you are interrupting someone.

Of course, you probably are going to listen to many frequencies: scanning up and down for free frequencies so you always have enough available to send the data you have. One of the FCC concessions is that there will be a database of frequencies that are allocated "old school style" and devices will have to stay away from those. Devices will download updates from that database periodically. The database is geographic. The entries are not "don't use channel 9" but "In New York, Channel 9 is in use".
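
Putting the two rules together, the device-side logic amounts to something like the sketch below. The database contents and function name are invented for illustration; the real behavior is spelled out in the FCC rules:

    # Sketch: consult the geographic database of old-school allocations first,
    # then listen before talking.
    RESERVED = {("New York", 9), ("New York", 11)}   # (location, channel) pairs licensed the old way

    def may_transmit(location, channel, channel_is_quiet):
        if (location, channel) in RESERVED:
            return False                  # licensed incumbent: stay away entirely
        return channel_is_quiet           # otherwise, talk only when nobody else is

    print(may_transmit("New York", 9, channel_is_quiet=True))    # False: channel 9 is in use in New York
    print(may_transmit("New York", 14, channel_is_quiet=True))   # True: free to transmit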

The frequencies that are now available include the "whitespace" airwaves (channels that are unused in the TV frequency range) as well as the gaps between channels that used to be needed due to "analog drift". Now that transmitters are digital they are more precise (they can stay within a more narrow frequency band) and self-correcting (no drift). Being able to use those gaps alone is a big innovation.

At last! Instead of going through tons of work to use any airwaves at all, we can simply build devices that know how to "talk when nobody else is", scan frequencies for available bandwidth, and sync up to a central database.

These are things that modern computers do very well.

None of this was possible until recently. In the 1970s a transistor radio might cost $10 and be so simple it might have come in a kit. Imagine if it had to scan frequencies and so on. With 1970s semiconductor technology it would be a million dollar product. Not something anyone could afford. Oh, and your hand-held radio would only fit in your hand if your hands were as big as the Statue of Liberty's.

Moore's law predicts the "march of progress" in semiconductors. It made it easy to predict when such compute power would be affordable and, therefore, when such devices would become economically possible.
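
As a back-of-the-envelope example of that kind of prediction (the numbers here are invented for illustration):

    # If compute per dollar doubles roughly every two years, a device that needs
    # 100x today's affordable compute becomes affordable in about 2 * log2(100) years.
    import math

    doubling_period_years = 2
    factor_needed = 100
    years_until_affordable = doubling_period_years * math.log2(factor_needed)
    print(round(years_until_affordable, 1))   # ~13.3 years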

While Moore's law may be hitting the limits of physics, we are still benefitting from it. Ironically, there are entire industries that have tried to deny its existence. The economic justification for creating ATM was based on the notion that silicon chips would never be sophisticated or powerful enough to process variable-size, large packets; network speed would hit a limit if we didn't change everything to 53-byte packets. This is entirely true to anyone that is ignorant of, or denies, Moore's law. People that lobbied against "super Wi-Fi" and the use of whitespace were also ignorant of, or in denial about, Moore's law. Of course electronics could do this; it was only a matter of time. The music industry was told, based on Moore's law, which year MP3 decoders would be inexpensive enough to put music on a PC, which year it could fit on a portable player, and which year downloading an MP3 would be economically feasible; their surprise when these things happened was due either to ignorance of, or denial about, Moore's law. The term "feigning surprise" is one way to describe how someone acts when predictions they've ignored all come true.

But I digress...

This new FCC regulation is a major step forward. It is a modernization of how we allocate wireless frequencies. It is an acknowledgement of Moore's law and the improvements digital electronics bring to the field. It is the gateway for new experimentation which will lead to new wireless applications and services.

Mark my words! Five years from now we'll look back at all the progress that has happened and point to this day as the historic moment that started it all, even though the announcement was mostly ignored at the time.

Well, ignored by everyone except you, dear reader.

Tom Limoncelli

(See you Dec 22, 2016!)

P.S. The only coverage of this FCC decision that I've been able to find has been in the foreign press. What's up with that? It's as if the U.S. incumbents are in cahoots to make sure it will be easy to feign surprise about this some day.

AT&T

AT&T's De la Vega is getting in trouble for saying that they want to find ways to discourage people from using their data plans. It turns out that AT&T's data network is overloaded and rather than fix the problem, they think punishing their users will help.

As an AT&T customer, it makes me sick.

As an ex-AT&T employee, it just reminds me of why I was so happy to leave.

This is what you get for having salespeople run the company instead of engineers. Engineers would have budgeted for appropriate growth to match customer growth.

AT&T's mindset is that bandwidth is scarce. Every bit is so impossibly costly that it must be measured, counted, monitored, and charged for. On my first day as an employee I had to watch a 30-minute video that did nothing but explain that I can't make a single personal phone call from the office; it looked like it had been made when phone calls were still $3/minute. Don't waste their precious, precious bandwidth.

Bandwidth was expensive for the first 100 years of their history, but it certainly isn't true now. What made the internet great was thinking in terms of plenty, not scarcity.

I remember when "the web" (HTTP) was new. A friend at a different division of AT&T told me their engineers were fearful of HTTP and didn't want it to catch on because their network could never handle such a graphic-rich system (this was 1992 or 1993). I couldn't figure out why they weren't thinking, "Yeah! An opportunity to sell more bandwidth!" If you sell apples, don't you want to freely distribute apple pie recipes? If you sell paint, don't you want to encourage everyone to repaint their house? Ugh. If AT&T were selling bacon they'd be encouraging everyone to become a vegan.

At the time UUNET (the first commercial ISP) was giving away free Usenet feeds (at this time this was a HUGE amount of bandwidth) and paying people to develop open source Usenet software: all to make it easier for people to need more bandwidth. I thought UUNET's way was much smarter.

It also annoyed me, as an employee, that AT&T kept acting as if Moore's Law didn't exist. This is odd because Moore revealed this observation during a presentation at AT&T's Bell Labs. Maybe they also need to remember that Nielsen's Law makes similar claims about bandwidth: pushed along by cheaper electronics, bandwidth gets cheaper too.

The biggest innovations in computing have come from brashly using more resources, usually slightly ahead of the supply curve. Textual user interfaces were a "waste of CPU" when first seen by batch computing people. Graphical user interfaces were a "waste of CPU" at first, but now it is what enables billions of people to use computers. RAID was a "waste of disk" but now I would never build a server without it.

The other attitude that I saw at AT&T was sheer shock and surprise that anything changes. "What? We built this thing for our customer base and... there are more customers a year later? They want new features? How could anyone have expected that?" Combine that with an intentional ignorance of Moore's Law and you have a disaster.

A disaster called AT&T.

Yes, AT&T, you have the best selling phone. People use it for data more than voice. The data apps are what make it such a success. Why do I get the feeling that when you negotiated with Apple you thought, "Sure, we'll throw in flat-rate data plans... it isn't like anyone is going to use that stuff!"

Are you still thinking that the internet is a "fad" like CEO Robert Allen?

My AT&T/iPhone contract is over in a few months. Maybe when it ends I should help De la Vega's bandwidth problem by not using his network at all.

P.S. I have a lot of pent-up anger about my AT&T service because twice a day, as I take the train from Bloomfield, NJ to New York City and back, I am faced with dead spots at key locations such as the Secaucus transfer station, Watsessing Ave, and other locations along the way. It is frustrating to be on the train and see other passengers using Verizon and T-Mobile able to talk on their phones (and, I presume, surf the web) at all the points where I can't. It is my twice-a-day reminder that I could be doing better with a different vendor.

I've long been a fan of Alan Turing, even writing a big paper about his mistreatment during my freshman year of college (talking about gay stuff was much more radical in 1987; I nearly cried while giving the oral report portion of the project). For those of you that don't know, Alan Turing not only invented what we now call computer science but also broke the German code, which directly led to the Allies winning World War II. One man can really change the world.

During the war, Turing's code-cracking skills were so invaluable that his homosexuality was ignored or tolerated by the British government. Sadly, after the war, when his code-cracking skills were no longer as needed, the government began persecuting him. This persecution led to his apparent suicide.

Recently John Graham-Cumming began a petition campaign to ask the UK government for a formal apology for their treatment of Alan Turing.

Last week Gordon Brown issued a formal apology.  Read Gordon Brown's statement in its entirety.

Amazingly enough, one of Alan Turing's most brilliant and enduring works was the creation of an unbiased test for artificial intelligence (now called "The Turing Test"). The breakthrough of this test is that it creates an environment where we can only see the intelligence of a person (or computer). As a side effect, we do not see their race, color, gender, or sexual orientation. Could it be that his inspiration for this test was driven by his desire for a world where people were not persecuted for such things? Read John Graham-Cumming's beautiful article about this ironic twist.

For an "insider view" of the process of getting the apology read John's article about when Gordon Brown called him to say that the apology was about to happen. John deserves a lot of credit for this work.  He didn't have a huge staff of people. He didn't have a PR agency. He only had the internet and a blog. One man can really change the world.

Thank you John!


Read all about it! Spread the word so all your friends and co-workers know to nominate that great person that runs their systems! :-)


Follow the event on Twitter as @SysAdRockstar09, in the Facebook group, and in the LinkedIn group. For full information on the contest visit www.bigfix.com/rockstar.

Randy Bush makes an interesting financial point that might help you explain IPv6 to the finance people: Pay a little now or pay a lot in the future. Plus a very good point: Do a single service like making your DNS dual-stacked. You'll be more focused and you'll find where the problems are going to be.

Netflix has announced their streaming service is now accessible over IPv6.  This means that their CDN provider, Limelight, is now the first CDN to provide IPv6 service.  Netflix says it took two months of engineering (from initial idea to completion) and Limelight says they only had to allocate two engineers to the project.  IPv6 is easy.  Forget all your old misconceptions.

At my house we have Comcast for our internet access.  Now I just need them to provide it and I'm ready!  If Comcast needs a beta tester, please reach me!  tal at everything sysadmin dot com, folks!

The term "Warehouse-Scale Machines" has been coined. The term describes the specific design that sites like Google use. The data centers that Google runs aren't like other data centers, where each rack holds a mish-mosh of machines that accumulates as various people request and fill rack space. It's more like a single huge machine running many processes. A machine has memory, CPUs, storage, and buses that connect them all. A warehouse-scale machine has thousands of machines, all with a few specific configurations. You treat the machines as CPUs and/or storage; the network is the bus that connects them all.

There is a new on-line book (108 pages!) by the people at Google who are in charge of Google's data center operations. (Disclaimer: Urs is my boss's boss's boss's boss's boss.)

The Datacenter as a Computer: An Introduction to the Design of Warehouse-Scale Machines
by Luiz André Barroso and Urs Hölzle, Google Inc.

Abstract

As computation continues to move into the cloud, the computing platform of interest no longer resembles a pizza box or a refrigerator, but a warehouse full of computers. These new large datacenters are quite different from traditional hosting facilities of earlier times and cannot be viewed simply as a collection of co-located servers. Large portions of the hardware and software resources in these facilities must work in concert to efficiently deliver good levels of Internet service performance, something that can only be achieved by a holistic approach to their design and deployment. In other words, we must treat the datacenter itself as one massive warehouse-scale computer (WSC). We describe the architecture of WSCs, the main factors influencing their design, operation, and cost structure, and the characteristics of their software base. We hope it will be useful to architects and programmers of today's WSCs, as well as those of future many-core platforms which may one day implement the equivalent of today's WSCs on a single board.


http://www.morganclaypool.com/toc/cac/4/1


According to Merrill R. (Rick) Chapman's book In Search of Stupidity, there is an oft-repeated pattern in the computer industry where a company suddenly finds itself with two products in the same market space, and ends up not being able to sell either. They spend all their time trying to explain to customers why they should buy one or the other, when really the truth is that the two are too similar to differentiate. Meanwhile a competitor (usually Microsoft) comes in with one product and a clear message ("it's the best!") and puts the other company out of business. If the other company had sold off or canceled one of its two similar products, the disaster would have been avoided.

I consider that book the best book on how the major players in the software industry got to where they are today. When it came out it got hardly any press. Hardly anyone has heard of it. I think that's sad. It is a "best kept secret" book. It is written by a person that was "there when it happened" and he tells the stories in excellent detail. Each chapter teaches you something important. Oh, and most of his case studies involve companies that were beaten by Microsoft. If you don't want history to repeat itself, read this book.

If I were Oracle, I'd sell off MySQL and PostgreSQL right away.

Google has enabled IPv6 for most services but ISPs have to contact them and verify that their IPv6 is working properly before their users can take advantage of this.

I'm writing about this to spread the word.  Many readers of this blog work at ISPs and hopefully many of them have IPv6 rolled out, or are in the process of doing so.

Technically, here's what happens: Currently, DNS lookups of www.google.com return A records (IPv4) and no AAAA records (IPv6). If you run an ISP that has rolled out IPv6, Google will add you (your DNS servers, actually) to a white-list used to control Google's DNS servers. After that, DNS queries of www.google.com will return both A and AAAA records.
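
From the client side, the effect is visible in an ordinary name lookup: you either get AAAA answers back or you don't. A small sketch using Python's standard resolver interface (the whitelisting itself happens on Google's side, not in this code):

    # Ask the resolver for both IPv4 (A) and IPv6 (AAAA) addresses of a name.
    import socket

    def lookup(name):
        results = socket.getaddrinfo(name, 80, proto=socket.IPPROTO_TCP)
        v4 = sorted({addr[0] for family, _, _, _, addr in results if family == socket.AF_INET})
        v6 = sorted({addr[0] for family, _, _, _, addr in results if family == socket.AF_INET6})
        return v4, v6

    # With a whitelisted (IPv6-capable) resolver the second list is non-empty;
    # otherwise only A records come back.
    print(lookup("www.google.com"))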

What's the catch?  The catch is that they are enabling it on a per-ISP basis. So, you need to badger your ISP about this.

Why not just enable it for all ISPs?  There are some OSs that have default configurations that get confused if they see an AAAA record yet don't have full IPv6 connectivity.  In particular, if you have IPv6 enabled at your house, but your ISP doesn't support IPv6, there is a good chance that your computer isn't smart enough to know that having local IPv6 isn't the same as IPv6 connectivity all the way across the internet.  Thus, it will send out requests over IPv6 which will stall as the packets get dropped by the first non-IPv6 router (your ISP).

Thus, it is safer to just send AAAA records if you are on an ISP that really supports IPv6.  Eventually this kind of thing won't be needed, but for now it is a "better safe than sorry" measure.  Hopefully if a few big sites do this then the internet will become "safe" for IPv6 and everyone else won't need to take such measures.

If none of this makes sense to you, don't worry. It is really more important that your ISP understands.  Though, as a system administrator it is a good idea to get up to speed on the issues.  I can recommend 2 great books:
The Google announcement and FAQ is here: Google announces "Google over IPv6". Slashdot has an article too.

Google (my employer) has announced a "Google Gadget" competition for students in Tanzania, Uganda, Kenya, Rwanda, Burundi and Ethiopia. The designer of the best gadget will receive a $600 USD stipend; five runners-up will receive a $350 USD stipend. Prize categories include Best Gadget UI, Best Local Content Gadget (Most Locally Useful Gadget), Best Education Specific Gadget, Best Procrastination Gadget, Most Technically Sophisticated Gadget, Gadget Most Likely to Get International Traffic, and Best Social Gadget.

Complete details are available on the East Africa Google Gadget Competition website. A PDF suitable for your university bulletin board is available here.


A co-worker of mine, Fernanda Weiden, was interviewed on the FLOSS Weekly podcast.

Fernanda Weiden of Google in Zurich gives her perspectives on women and Latin Americans in the open source community, the Brazilian Women in Free Software group, Debian Women, and the Free Software Foundation of Latin America.

Listen or download.

True story about Fernanda: She taught herself English by reading Linux "man" pages.

I appreciate you!

Today is the 8th Annual System Administrator Appreciation Day. I know this sounds kind of funny, but I really appreciate all the system administrators out there. I meet a lot of system administrators. I visit a lot of sites. I hear stories about heroics, and I hear stories of people who persist even though they are working with terrible management, unappreciative users, and CEOs that treat IT as a "cost center" instead of an investment in future corporate growth.

Last week the 2nd edition of The Practice of System and Network Administration started shipping. The new edition includes a lot of new anecdotes, many from the fan mail we've received over the years. Some of the fan mail is fun, like the reader who told us that something we suggested helped him recover from an outage a few hours faster, which saved his company $100,000. Often we are pleased to receive email from someone who's received a promotion and wanted to thank us for writing a book that was instrumental to their career. But most of all I want to say that I am humbled by the messages we've received from the lonely system administrators: the under-appreciated person struggling to fix a big mess they inherited, with all the responsibility when it fails but none of the authority to fix the larger problems. We received email from one person who, when reading the book, burst into sobs after realizing she wasn't "the only one".

This will be the second year that I'm volunteering to judge SysAdmin Of The Year. Nominations are open, so email the URL (http://www.sysadminoftheyear.com/) to all your friends. The first 2500 nominated sysadmins get a free t-shirt, which is pretty cool in itself.

Tom

P.S. If you are in the Philly/NJ/DE/NY area (or aren't, but like last-minute travel), don't forget that I'll be doing my time-management training classes during the tutorial part of LOPSA's SysadminDays local conference, August 6-7, 2007, in Cherry Hill, NJ (just outside Philadelphia).

I just came across Douglas Chick's book, "What All Network Administrators Know". I immediately rushed to add it to our web page of recommended titles (scroll to the bottom).

One of the problems with TPOSANA is that it really focuses on big sites. This book is perfect for sysadmins that are just getting started or are at a small site. It is down to earth, very practical, and contains tons of excellent advice. (If you want proof, preview it on Amazon by clicking on the "random page" button.)

Don't forget (or don't forget to remind your boss) that Friday, July 28th is System Administrator Appreciation Day. www.sysadminday.com

However the new hotness is the 2006 Sysadmin Of The Year contest. Sponsored by Splunk, LOPSA, and many other organizations. One Grand Prize winner will receive a $2,500 Splunk Professional license and an all-expense paid trip to Washington, D.C. to attend the Large Installation System Administration (LISA) Conference December 3-7, 2006. More than 2,500 other prizes will be awarded. Nominate someone today!
