Howdy. My name’s Brian, and I’m a tired SysAdmin…
So, six days of tutorials and talks at the USENIX LISA ’13 conference are done. And it was good. My behind is, however, glad to be shut of those hotel conference chairs.
Sunday, 3 November
Sunday’s full-day tutorial was called Securing Linux Servers, and was taught by Rik Farrow, a talented bloke who does security for a living, and is Editor of the USENIX ;login: magazine on the side. We covered the goals of running systems (access to properly executing services) and the attacks that accessibility (physical, network) enables. As always, the more you know, the more frightening running systems connected to networks becomes. We explicitly deconstructed several public exploits of high-value targets, and discussed mitigations that might have made them less likely. User account minimization and root account lockdowns through effective use of the `sudo` command were prominently featured. Proactive patching is highly recommended, too! Passwords, password security, hashing algorithms, and helping users select strong passwords that they can actually remember were also a prime topic. Two things Rik wished were better documented online: PAM (Pluggable Authentication Modules), and simple, accessible starter documentation for SELinux.
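To give a flavor of the root-lockdown approach: with root’s own password locked (`passwd -l root`), all privileged access flows through `sudo`, where it can be restricted and logged. A minimal sketch of such a policy follows (the group and user names here are made up for illustration, and policy files should always be edited via `visudo`, never directly):

```
# /etc/sudoers.d/admins -- illustrative only; install with 'visudo -f'

# Members of the wheel group may run any command as any user (they still
# authenticate with their own passwords, and everything is logged):
%wheel  ALL=(ALL) ALL

# A hypothetical 'deploy' account may restart one service, and nothing else:
deploy  ALL=(root) NOPASSWD: /usr/bin/systemctl restart myapp
```

The second line is the account-minimization idea in miniature: grant the narrowest command that gets the job done.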
Monday, 4 November
Hands-on Security for Systems Administrators was the full-day tutorial I attended on Monday. It was taught by Branson Matheson, a consultant and computer security wonk. Branson is an extremely energetic and engaging trainer who held my attention the whole day. We looked at security from the perspective of (informally, in the class) auditing our physical, social, and network vulnerabilities. In the context of the latter, we used a customized virtual build of Kali Linux, a Debian-based pen-testing distro. I learned a lot of stuff, and for those things that I “knew”, the refresher was welcome and timely.
Tuesday, 5 November
Tuesday, I took two half-day tutorials.
The first was presented by Ted Ts’o, of Linux kernel and filesystem fame. Our tutorial topic was “Recovering from Linux Hard Drive Disasters.” We spent a couple of hours covering disk drive fundamentals and Linux file systems. The final hour was given over to the stated topic of recovering from assorted disk-based catastrophes. My take-away from this tutorial was two-fold. I think the presentation would be better named “Disks, Linux Filesystems, and Disk Disaster Recovery,” which would be more reflective of the distribution of the material. Additionally, it’s worth stating that any single disk disaster is generally mitigated by multi-disk configurations (mirroring, RAID), and accidental data loss is often best covered by frequently taken and tested backups.
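On that last point: the cheapest insurance is a backup you have actually restored. A minimal sketch of the take-and-test cycle (the `/tmp` paths are just for illustration):

```shell
# Illustrative only: back up a directory, then prove the backup restores.
mkdir -p /tmp/demo-src /tmp/demo-restore
echo "important data" > /tmp/demo-src/file.txt

# 1. Take the backup.
tar -czf /tmp/demo-backup.tar.gz -C /tmp/demo-src .

# 2. Test it: restore into a scratch directory and compare against the source.
tar -xzf /tmp/demo-backup.tar.gz -C /tmp/demo-restore
diff -r /tmp/demo-src /tmp/demo-restore && echo "backup verified"
```

An untested backup is a hope, not a plan; step 2 is the part most shops skip.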
The second tutorial I attended, on Tuesday afternoon, was on the topic of “Disaster Recovery Plans: Design, Implementation and Maintenance Using the ITIL Framework.” Seems a bit dry, eh? A bit … boring? Not at all! Jeanne Schock brought the subject material to life, walking us through setting goals and running a project to effectively plan for Disaster Recovery. IMO, it’s documentation, planning, and process that turns the craft of System Administration into a true profession, and these sorts of activities are crucial. Jeanne’s presentation style and methods of engaging the audience are superb. This was my personal favorite of all the tutorials I attended. But … Thanks, Jeanne, for making more work for me!
Wednesday, 6 November
Whew. I was starting to reach brain-full state as the fourth day of tutorials began. I got to spend a full day with Ted Ts’o this time, and it was an excellent full day of training on Linux Performance Tuning. Some stuff I knew, since I’ve been doing this for a while. But the methods that Ted discussed for triaging system and software behaviour, then using the resulting data to prioritize diagnostic activities, were very useful. This is a recurring topic at LISA ’13 – go for the low-hanging fruit and obvious stuff: check for CPU, disk, and network bottlenecks with quick commands before delving more deeply into any one path. The seemingly obvious culprit may be a red herring. I plan on using the slide deck to construct a performance triage TWiki page at work.
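In that spirit, here’s the sort of quick first pass I have in mind for that triage page – not Ted’s material, just the cheap, broad checks a sysadmin can run before committing to one diagnostic path:

```shell
# First-pass triage: broad, cheap checks before going deep on any subsystem.
cat /proc/loadavg        # load averages: rough CPU pressure at a glance
df -h                    # any filesystem at or near 100%?
# Deeper looks, where the usual tools are installed (procps, sysstat, iproute2):
if command -v free   >/dev/null; then free -m; fi        # memory and swap headroom
if command -v vmstat >/dev/null; then vmstat 1 3; fi     # run queue, swapping, I/O wait
if command -v iostat >/dev/null; then iostat -x 1 3; fi  # per-disk utilization
if command -v ss     >/dev/null; then ss -s; fi          # socket/connection summary
```

A minute of this up front often rules out two of the three usual suspects (CPU, disk, network) before you burn an hour profiling the wrong one.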
I was in this tutorial when Bruce Schneier spoke (via Skype!) on “Surveillance, the NSA, and Everything.” Bummer.
This was also my last day of Tutorials. In the evening I attended the annual LOPSA meeting. Lots of interesting stuff there; follow the link to learn more about this useful and supportive organization. Yep, I’m a member.
Thursday, 7 November
Yay, today started with track problems on Metro, and an extra 45 minutes standing cheek-to-jowl with a bunch of random folks on a Red Line train.
This was a Technical Sessions and Invited Talks day for me. In the morning, Brendan Gregg presented Blazing Performance with Flame Graphs. Here’s a useful summary on Brendan’s blog. This was followed in the morning by Jon Masters of Red Hat talking about Hyperscale Computing with ARM Servers (which looks to be a cool and not unlikely path), and Ben Rockwood of Joyent discussing Lean Operations. Ben has strong opinions on the profession, and I always learn something from him.
In the afternoon, Brendan Gregg was in front of me again, pitching systems performance issues (and his new book of the same name). I continue to find Brendan’s presentation style a bit over the top, but his technical chops and writing skills are excellent. This was followed by Branson Matheson (who was training me earlier in the week) on the subject of “Hacking your Mind and Emotions” – much about social engineering. Sigh, too easy to do. But Branson is so enthusiastic and excited about his work that … well, that’s alright, then, eh?
The late afternoon pair of talks were on Enterprise Architecture Beyond the Perimeter (presented by a pair of talented Google Engineers), and Drifting into Fragility, by Matt Provost of Weta Digital. The former was all about authentication and authorization without the classical corporate perimeter – no firewall or VPN between clients and resources. Is it a legitimate client machine, properly secured and patched? With a properly authenticated user? Good, we’re cool. How much securing, authenticating, and patching is required depends on the resource to be accessed. This seems a bit like a Google-scale problem… The latter talk, on fragility, was a poignant reminder of unintended dependencies and consequences in complex systems and networks.
The conference reception was on Thursday evening, but I took a pass, headed home, and went to bed early. I was getting pretty tired by this time.
Friday, 8 November
My early morning session had George Wilson of Delphix talking about ZFS for Everyone, followed by Mark Cavage of Joyent discussing Manta Storage System Internals. I use ZFS, so the first talk held particular interest for me, especially the information about how the disparate ZFS implementations are working to prevent fragmentation by utilizing Feature Flags. OpenZFS.org was also discussed. I didn’t know much about Manta except that it exists, but I know a bit more now, and … it’s cool. I don’t have a use, today, but it’s definitely cool.
The late morning session I attended was a two-fer on the topic of Macs at Google. They have tens of thousands of Macs, and effective imaging, deployment, and patch management was the first topic, presented by Clay Caviness and Edward Eigerman. Some interesting tools and possibilities, but at a scale far beyond my needs. The second talk, by Greg Castle, on Hardening Macs, was pertinent and useful for me.
After lunch, the two talks I attended were on “Managing Access using SSH Keys” by the original author of SSH, Tatu Ylönen, and “Secure Linux Containers” by Dan Walsh of Red Hat (and SELinux fame). Tatu pretty much read text-dense slides aloud to us, and confirmed that managing SSH key proliferation and dependency paths is hard. Secure Linux Containers remind me strongly of sparse Solaris Zones, so that’s how I’m fitting them into my mental framework. Dan also talked to us about Docker … a container framework that Red Hat is “merging” (?) with Secure Linux Containers … and said we (sysadmins) wouldn’t like Docker at all. Mmmmmm.
The closing Plenary session, at about an hour and 45 minutes, was a caffeine-fueled odyssey by Todd Underwood, a Google Site Reliability Manager, on the topic of PostOps: A Non-Surgical Tale of Software, Fragility, and Reliability. Todd’s a fun, if hyper, speaker. He’s motivated and knows his stuff. But, like some others in the audience, I suspect that what happens at the scale of a GOOG-size organization may not apply so cleanly in the SMB space. The fact is that DevOps and NoOps may not work so well for us … though certainly the principles of coordinated work and automation strongly apply.
At any given time, for every room I sat in, for every speaker or trainer I listened to, there were three other things that I would have also learned much from. This was my path through LISA ’13. There are many like it, but this one is mine. This conference was a net win for me in many ways – I learned a lot, I ran across some old friends (Hi, Heather and Marc), made some new ones, and had a good time.
The folks I can recommend without reservation, whether you take a class from them or attend a talk they’re presenting: Jeanne Schock, Branson Matheson, Rik Farrow, and Ted Ts’o. These are the four people I learned the most from in the course of six days, and you’d learn from them, too!
My hat’s off to the fine staff at USENIX, who worked their asses off to make the conference work. Kudos!