3 Feb 2017

Another interesting week near the heart of power. Well, when I say “heart”, I mean corroded hunk of radioactive tin encased in an orange waste of skin. Ah, well. One does what one can while watching the wreck of trains, above and below.

In the meantime, I managed to get Kubuntu installed on my old Mac Air (2011). The install was fairly trivial, just a couple of trips to the search engines to get me over the occasional install hump. Everything but the Thunderbolt port works flawlessly, and here it sits next to its new big brother:

AirBuntu next to the new-ish MBP


The primary failing of the Air was battery life – it had a semi-useful two hours' worth, which sucked when I found myself stranded in Columbus without a power brick last fall. The other main issue is the screen. In the last 6 years, my eyes appear to have aged about 10, and with the amount of information I like to keep on screen, the larger, higher-resolution MBP is just better. Let's be clear: compared to the Air, the Retina screen on the MacBook Pro is glorious. Oh, and a much faster processor doesn't hurt at all, either. The Air will serve well as a conference laptop. The MBP is a superb work machine for me. All I have to do is get used to floating my palms off that bloody huge touchpad.

2015 Nov 29

LISA 15 Report

The LISA 2015 conference was held this year at the Washington Marriott Wardman Park, off Connecticut Avenue in northwest DC. It's 15 miles from home, but the best driving time I had was Wednesday (Veteran's Day) morning, which took half an hour, and the worst was a bit over 1.5 hours, coming home in weeknight traffic, in the rain. It's a nice venue, though I've never stayed there, only attended events.

Saturday, 11/7

Saturday night was badge pickup and opening reception. I attended that mostly to do a handoff of the give-away items for the LOPSA general business meeting. Because I’m local, I volunteered to be a drop ship site for stuff that arrived over the course of the month leading up to LISA. That evening, I made contact with LOPSA’s President, Chris Kacoroski (‘Ski’), and we grabbed a couple of other willing bodies and emptied out my trunk, which was chock-full of Lego kits, books, booth collateral, etc. An hour or two of chatting with early-arriving attendees, then I headed back home to get an early bedtime – I was facing a long week.

Sunday, 11/8

Sunday was the first of three consecutive days of tutorials. In the morning, I attended a half-day session presented by Chris McEniry on the topic of Go for Sysadmins. Go was developed at Google, and released under an open source license in 2009. To my eye, it combines some of the best features of C, Python, and Java (but the FAQ says that Pascal has a strong influence – it’s been a long, long time). With larger data sets to work with each passing year, a faster and better language seems to be a useful tool for the continuously learning system administrator, and Go provides that sort of tool. Chris was an excellent presenter, and his examples and supporting code were pertinent and useful. Effective? Yep, I want to learn more about Go … in my copious spare time.

Sunday afternoon was all about Software Testing for Sysadmin Programs, presented by someone I’ve known for a few years now, Adam Moskowitz. Adam is a pleasant bloke, and like everyone at LISA, smart as all get out. He makes the valid point that all of the tools that we encourage our programmers to use, from version control to testing and deployment automation, belong in our toolbox as well. And for UNIX-ish sysadmins, lots of stuff is written in shell. Adam developed a suite of tools based on Maven, Groovy, and Spock, and gave us a working configuration to test code with. Impressive and useful. Now all I have to do is do it!
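
Not Adam's Maven/Groovy/Spock rig – I don't have any of that installed yet – but the core idea boiled down to plain shell, as a hedged sketch (the helper function, filename format, and expected value here are all made up):

```
#!/bin/sh
# The same idea in plain shell, not Adam's actual tooling: every script
# gets at least one test you can run before you deploy it.

backup_name() {           # the "unit" under test (a made-up helper)
    printf 'backup-%s.tar.gz\n' "$(date -d "$1" +%Y%m%d)"
}

expected='backup-20151108.tar.gz'
actual=$(backup_name '2015-11-08')
if [ "$actual" = "$expected" ]; then
    echo "PASS backup_name"
else
    echo "FAIL backup_name: got $actual, wanted $expected" >&2
    exit 1
fi
```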

In the evening, I hung out for a bit for what’s called the “Hallway Track”, which is all of the non-programmed activities from games to BoF (Birds of a Feather) sessions, to conversations about employers, recruiting, tools, and users. Always fulfilling, the hallway track.

Monday 11/9

On Monday, I over-committed myself. Caskey L. Dickson was putting on a full-day tutorial on Operating System Internals for Administrators (a shortened version of the actual title). I attended the morning session of that, which was awesome. One would suspect that hardware is so fast that it just doesn't matter so much anymore. But it turns out that such things as memory affinity in multi-socket, multi-core systems can have significant performance impacts if the load isn't planned well. And while storage is getting faster, so are busses and networks. The bottlenecks keep moving around, and we can't count on knowing what to fix without proper metrics. Caskey presents an excellent tutorial; in some senses it's a prerequisite for the Linux Performance Tuning tutorial that Ted Ts'o does (I've attended that in years past). I would have stuck around for the second half-day of Internals, but…
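
(As an aside, the NUMA point is the one that stuck with me. A minimal sketch of the kind of quick check he was talking about, assuming the numactl package is installed – the workload name is a placeholder.)

```
# Show the socket/node layout and how much memory each node owns:
numactl --hardware

# Per-node allocation counters; a steadily growing numa_miss is a bad sign:
numastat

# Pin a memory-hungry process to one node so its allocations stay local:
numactl --cpunodebind=0 --membind=0 -- ./my_workload
```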

Instead, I attended a half-day tutorial called systemd, the Next-Generation Linux System Manager, presented by Alison Chaiken. I learned a lot about the latest generation of system manager software that's taken over from the System V init scripts model that ruled for the last few decades. While change is always a PITA, and there are definitely people who vehemently dislike systemd, I find that (A) I have to use it in my work, so I should learn more; and (B) there are features that I really quite like. Alison knows a lot about the software and the subject, and helped me understand where I needed to fill in the gaps in my systemd education.
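
For the record, the kind of thing I mean by "features I quite like": a tiny unit file plus the journal integration. This is a minimal, hypothetical sketch – "myapp" and its path are mine, not anything from the tutorial:

```
cat > /etc/systemd/system/myapp.service <<'EOF'
[Unit]
Description=My little app
After=network.target

[Service]
ExecStart=/usr/local/bin/myapp --foreground
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable myapp.service    # come up at boot
systemctl start myapp.service     # and right now
journalctl -u myapp.service -f    # per-unit logging is one of the bits I really like
```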

Tuesday 11/10

For me, Tuesday was all about Docker. Until not that long ago, I'd have been managing one service (or suite of services) on a given piece of hardware. Programs ran on the Operating System, which ran on the hardware, which sat in the rack in the data center, mostly idle but with bursts of activity. It was always burning electricity and needing cooling, and a growing workload meant adding new racks, more cooling, more electrical capacity. In the last decade, virtualization has taken the data center by storm. Where once a rack full of 2U servers (2U stands for the vertical space that the server takes up in the rack – most racks have 42 U {units} of space, and servers most commonly are 1, 2, or 4 U) sat mostly idling, we now have a single more powerful 2U or 4U server that runs software like VMware's ESXi hypervisor, Microsoft's Hyper-V, or Xen/KVM running on a Linux host. On “top” of those hypervisors, multiple Operating System installs are running, each providing its service(s) and at much higher density. Today's high-end 2U server can provision as much compute capacity as a couple of racks' worth of servers from 5-10 years ago. It's awesome.

But that's so … yesterday. Today, the new hotness is containers, and Docker is the big player in containers right now. The premise is that running a whole copy of the OS just to run a service seems silly. Why not have a “container” that holds just the software and configurations needed to provide the service, and run multiple containers on a single OS instance, physical or virtualized? The density of services provided can go up by a factor of 10 or more, using containers. It's the new awesome!
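
In shell terms, the premise looks something like this – a throwaway, hypothetical example (the base image, package, and container names are made up for illustration):

```
cat > Dockerfile <<'EOF'
FROM debian:8
RUN apt-get update && apt-get install -y --no-install-recommends nginx
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
EOF

docker build -t my-nginx .                      # just the service and its config, no whole OS to babysit
docker run -d --name web1 -p 8080:80 my-nginx
docker run -d --name web2 -p 8081:80 my-nginx   # many containers, one OS instance
```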

I don’t have to use Docker or containers in my current situation, but that day may come, and for once I’d like to be ahead of the curve. So in the morning, I attended Introduction to Docker and Containers, presented by Jerome Petazzoni, of Docker. Dude seriously knows his stuff. But I’ve never attended a half-day tutorial that had more than 250 slides before, and he got through more than 220 of them in the time at hand, while ALSO showing some quick demos. Amazingly, I wasn’t lost at the time. And I’ve got a copy so that I can go back through at my leisure. Containers launch quickly, just like Jerome’s tutorial. I think I learned a lot. But it’s still due for unpacking in my brain.

In the afternoon, Jerome continued with Advanced Docker Concepts and Container Orchestration. Tools now regarded as stable (such as Swarm, which reached the 1.0 milestone a couple of weeks before the presentation) (grin) and Docker Compose were discussed and demonstrated to show how to manage scaling up and out. Another immense info dump, but I’m grateful I attended these tutorials. I think I learned a lot.
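
To make that concrete for myself later: the shape of a Compose file as I understood it – a tiny, hypothetical sketch with made-up service names and images, not Jerome's material:

```
cat > docker-compose.yml <<'EOF'
web:
  image: my-nginx
  ports:
    - "80"
db:
  image: postgres:9.4
EOF

docker-compose up -d          # bring the whole stack up
docker-compose scale web=3    # scale the web tier out to three containers
docker-compose ps
```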

In the evening, I hit up the Storage BoF put on by Cambridge Computers, and dropped into the Red Hat vendor BoF on the topic of Open Storage. A long day.

Wednesday, 11/11

Veteran’s Day dawned bright and sunny. Like each day of this week, I left the house at 0630. I was surprised, rolling into the parking garage at 0700 … until I remembered the holiday, and that no Feds were working (and clogging my drive) as a result. Win!

The morning keynote was given by Mikey Dickerson, head of the US Digital Service (USDS). He spoke on the challenges of healthcare.gov (his first Federal engagement), and on being called back to head up the new USDS. Mikey is a neat, genuine guy who has assembled a team of technologists who are making a difference in government services. Excellent keynote, fun guy.

I took a hallway track break for the next hour and a half – catching up with folks I hadn’t seen in a couple of years.

After lunch, I first attended a talk by George Wilson on the current state of the art for OpenZFS. ZFS is an awesome filesystem that was built by Sun (Yay!), then closed by Oracle (Boo!). OpenZFS took off as a fork of the last OpenSolaris release, some years ago. Since then it's been at the core of IllumOS and other OpenSolaris-derived operating systems, as well as FreeBSD and other projects. I'm a huge fan of ZFS, and it's always good to learn more about successes, progress, and pitfalls.
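
Why the fandom? Roughly this – a generic sketch, with "tank", the dataset names, and the backup host as placeholders:

```
zpool status tank                          # end-to-end checksums, self-healing when there's redundancy
zfs snapshot tank/home@before-upgrade      # snapshots are instant and nearly free
zfs rollback tank/home@before-upgrade      # ... and so is backing out of a mistake
zfs send tank/home@before-upgrade | ssh backuphost zfs recv backup/home   # replication built right in
```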

Then I sat in on Nicole Forsgren’s talk: My First Year at Chef: Measuring All the Things. Nicole is a smart, smart person, and left a tenure-track position to join Chef last year. She brought her observational super-powers and statistics-fu to bear on all the previously unmeasured things at Chef, and learned lots. Chef let her tell us (most of) what she learned, which is also awesome. The key take-away: Learn how to measure things, set goals, and measure progress. Excellent!

After dinner up the street at Zoo Bar and Grill with Chas and Peter, I attended the annual LOPSA business meeting. I didn’t stay for the LOPSA BoF in the bar upstairs, since my steam was running out and I was driving, not staying at the hotel.

Thursday, 11/12

Christopher Soghoian provided the frankly depressing Thursday morning keynote: Sysadmins and Their Role in Cyberwar: Why Several Governments Want to Spy on and Hack You, Even If You Have Nothing to Hide. Seriously. Chris is the Chief Technologist for the ACLU, and his “war” stories are hair-raising. We're all targets, because we run systems that might let the (good|bad|huh?) guys get to other people. All admins are targets, not of opportunity, but of collateral access. Sigh. Good talk; I wish it weren't needed.

The morning talk I attended, presented by Gianluca Borello, was about Sysdig and using it to monitor cloud and container environments. I came away convinced that sysdig is a tool I really should learn more about.

In the afternoon, I spent some time in the Vendor Expo area, catching up with people and learning about the products that they think are important to my demographic. I was going to attend a mini-tutorial later in the afternoon called Git, Got, Gotten on using git for sysadmin version control … but by the time I got to the room it was SRO (standing room only). So I bailed out way early (skipping the in-hotel conference evening reception – I expected a disappointment following last year's wonderful event at the EMP Museum), unwound, and got a good night's sleep.

Friday, 11/13

I started the day with Jez Humble of Chef, who talked to the big room about Lean Configuration Management. An excellent talk on, among other things, what tools from the Dev side of the aisle we can use on the Ops side. Jez is an excellent speaker, and he makes a good point about how the data shows high-performing IT groups to be drivers of innovation AND profit.

My second morning session was Lightweight Change Control Using Git, by George Beech of Stack Overflow. A big hunk of time was given to what's wrong with the status quo, before progressing into how to organize and manage configs and processes with version control, specifically git. Good talk.
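
The lightweight version of the idea, as I took it away – a hedged sketch, not George's actual workflow:

```
cd /etc
git init
git add .
git commit -m "Baseline of /etc before any changes"

# ... edit sshd_config, reload the service, then record what changed and why:
git add ssh/sshd_config
git commit -m "Disable password auth for sshd"
git log --stat            # a real change history instead of sshd_config.bak.old.2
```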

After lunch, I spent a couple of hours on the hallway track, since there was nothing that really called out my name in the formal program. And for the closing keynote … well, I decided to beat the Friday traffic out of the district instead. But the presentation has been made available already – it’s here: It Was Never Going to Work, So Let’s Have Some Tea, by James Mickens of Harvard. You can watch it with me.

Thanksgiving and stuff

It was a good week, though I did work on Friday. Thanksgiving Day was a nice quiet day at home. Pancakes and espresso in the morning. Turkey, mashed potatoes, gravy, cranberry sauce, apple pie, … other stuff, I think … through the late afternoon and evening. Food coma #FTW, with lots of leftovers. We called and talked to family in lots of places, and that was fun, too. The weekend has been catching up on chores, putting up the Christmas crap, and roasting coffee.

Fallen Warriors

DoD reported no new casualties in the last week.

Certifiable

OOOooo … err. Certified. That's whut I am. The death-march week of revising on RHEL 7, followed by two certification exams on Friday, is over. And most interestingly, I passed both exams, and now have my RHCE. Coming out of the building after 5 on Friday afternoon, I was sure I'd passed EX200 (the RHCSA exam), but frankly wasn't feeling too warm and fuzzy about EX300 (the RHCE). So I was pleased as punch to learn that I had in fact passed both, and by comfortable margins.

Better yet, I learned a hell of a lot about the tools and technologies in this latest iteration of Red Hat Enterprise Linux, and I’ll be putting that knowledge to use in production systems within the next several months. So, that’s a good thing, too.

This weekend, I tried to stay awake, and to do some chores. I almost got enough done. What really needs doing is … everything. The house needs a deep cleaning, and the yard needs quite a lot of attention. All in good time. Oh, and while the garden isn’t doing well, it is still producing a bit:

Garden Goodies -2 Aug 2014


Some of that has turned into salsa, we’re having more in salads, and some goes to work to make people there happy, as well.

*      *      *

DoD has announced no new casualties in the last 6 days.

A billion, billion comment spam

Well, that might be an exaggeration. It was more like a few hundred comment spam. Fortunately they were all so marked, making it easy to click-delete.

*      *      *

Monday? Monday?!? So sorry to have missed y'all yesterday. I've been preparing for this week's RH300 course, and stayed pretty focused on that goal. We're covering 14 days of regular Red Hat coursework in four days of grueling review, followed by the RHCSA and RHCE exams on Friday. And the exams are … challenging. I'm really good with the bits I use. And I can puzzle out the bits I don't use often. But come exam time, there are 2 or 4 hours to do a WHOLE BUNCH of stuff, and it all has to work right, and it all has to survive a reboot.
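
The "survive a reboot" discipline in miniature, for my own reference – a generic RHEL 7-flavored sketch, emphatically not exam content (the service, device, and mount point are made up):

```
systemctl enable httpd                        # running right now is not enough; enable it
firewall-cmd --add-service=http               # the runtime rule...
firewall-cmd --permanent --add-service=http   # ...and the persistent copy
echo '/dev/vdb1 /data xfs defaults 0 0' >> /etc/fstab   # mounts go in fstab, not just mount(8)
mount -a                                      # prove fstab actually parses
systemctl reboot                              # then reboot and re-check everything
```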

*      *      *

Our condolences to the families, friends, and units of these fallen warriors:

  • Pfc. Donnell A. Hamilton, Jr., 20, of Kenosha, Wisconsin, died July 24, at Brooke Army Medical Center, Joint Base San Antonio, Texas, from an illness sustained in Ghazni Province, Afghanistan.
  • Staff Sgt. Benjamin G. Prange, 30, of Hickman, Nebraska, died July 24, in Mirugol Kalay, Kandahar Province, Afghanistan, of wounds suffered when the enemy attacked his vehicle with an improvised explosive device.
  • Pfc. Keith M. Williams, 19, of Visalia, California, died July 24, in Mirugol Kalay, Kandahar Province, Afghanistan, of wounds suffered when the enemy attacked his vehicle with an improvised explosive device.
  • Boatswain’s Mate Seaman Yeshabel Villotcarrasco, 23, of Parma, Ohio, died as a result of a non-hostile incident June 19 aboard USS James E. Williams (DDG-95) while the ship was underway in the Red Sea.

Cool July

We've had several days of unseasonably cool weather. I'm not complaining, mind you. But all the same, it's weird. Temps in the early mornings in the high 50s, and barely breaking into the low 80s. Who'd a thunk? But they let me take Lexi on a two-mile walk this afternoon without arriving back home as a sweatball holding a dead dog.

The garden, it fares poorly. I gave it virtually no attention in the days leading up to Marcia’s surgery, nor in the weeks that followed that event. Bugs have killed my zucchini plants, the tomato plants are small-ish with yellowing leaves and low production, and my herbs have all bolted. But I was paying attention to the important tasks in life, so that’s okay.

I’m otherwise tired. I had a couple of rounds of system work today: an hour early, and a couple of hours following the shopping run. In the coming week, I’ve got to spend a fair bit of time working with RHEL7, in advance of a Rapid Track training course the week following, with an RHCE certification exam at the end of that.

*      *      *

Another week, another span of time during which DoD announced no casualties. It’s not like there isn’t plenty of unpleasantness in the Middle East and in the Ukraine … but I sincerely hope we stay the hell out of those conflicts.

Six Days of LISA ’13

Howdy. My name’s Brian, and I’m a tired SysAdmin…

So, six days of tutorials and talks at the USENIX LISA ’13 conference are done. And it was good. My behind is, however, glad to be shut of those hotel conference chairs.

Sunday, 3 November

Sunday's full-day tutorial was called Securing Linux Servers, and was taught by Rik Farrow, a talented bloke who does security for a living and is Editor of the USENIX ;login: magazine on the side. We covered the goals of running systems (access to properly executing services) and the attacks that accessibility (physical, network) enables. As always, the more you know, the more frightening running systems connected to networks becomes. We explicitly deconstructed several public exploits of high-value targets, and discussed mitigations that might have made them less likely. User account minimization and root account lockdowns through effective use of the `sudo` command were prominently featured. Proactive patching is highly recommended, too! Passwords, password security, hashing algorithms, and helping users select strong passwords that can be remembered were also a prime topic. Two things Rik wished were better documented online: PAM (Pluggable Authentication Modules), and simple, accessible starter documentation for SELinux.
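
For my own notes, the account-lockdown theme boils down to something like this – a hedged sketch of the general approach, not Rik's material (the group name in the sudoers comment is invented):

```
passwd -l root                                # lock the root password: no console logins, no su
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
service sshd reload                           # pick up the change

# Grant narrow sudo rights instead of handing out the root password (edit via visudo):
#   %webteam ALL=(root) /usr/sbin/service httpd restart

yum -y update                                 # and patch proactively, early and often
```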

Monday, 4 November

Hands-on Security for Systems Administrators was the full-day tutorial I attended on Monday. It was taught by Branson Matheson, a consultant and computer security wonk. Branson is an extremely energetic and engaging trainer who held my attention the whole day. We looked at security from the perspective of (informally, in the class) auditing our physical, social, and network vulnerabilities. In the context of the latter, we used a customized virtual build of Kali Linux, a Debian-based pen testing distro. I learned a lot of stuff, and for those things that I “knew”, the refresher was welcome and timely.

Tuesday, 5 November

Tuesday, I took two half-day tutorials.

The first was presented by Ted Ts'o, of Linux kernel and filesystem fame. Our tutorial topic was “Recovering from Linux Hard Drive Disasters.” We spent a couple of hours covering disk drive fundamentals and Linux file systems. The final hour was given over to the stated topic of recovering from assorted disk-based catastrophes. My take-away from this tutorial was two-fold. I think the presentation would be better named “Disks, Linux Filesystems, and Disk Disaster Recovery,” which would be more reflective of the distribution of the material. Additionally, it's worth stating that any single disk disaster is generally mitigated by multi-disk configurations (mirroring, RAID), and accidental data loss is often best covered by frequently taken and tested backups.
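
When the single-disk disaster does happen anyway, the boring-but-safe sequence is roughly this – a hedged, generic sketch of the "image first, repair the copy" approach, not necessarily how Ted would do it (device, image path, and loop-device names are illustrative):

```
ddrescue /dev/sdb /srv/images/sdb.img /srv/images/sdb.map   # image the failing drive first; ddrescue retries bad sectors
losetup -fP --show /srv/images/sdb.img                      # expose the image's partitions (prints e.g. /dev/loop0)
fsck.ext4 -y /dev/loop0p1                                   # repair the copy, never the original
```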

The second tutorial I attended, on Tuesday afternoon, was on the topic of “Disaster Recovery Plans: Design, Implementation and Maintenance Using the ITIL Framework.” Seems a bit dry, eh? A bit … boring? Not at all! Jeanne Schock brought the subject material to life, walking us through setting goals and running a project to effectively plan for Disaster Recovery. IMO, it’s documentation, planning, and process that turns the craft of System Administration into a true profession, and these sorts of activities are crucial. Jeanne’s presentation style and methods of engaging the audience are superb. This was my personal favorite of all the tutorials I attended. But … Thanks, Jeanne, for making more work for me!

Wednesday, 6 November

Whew. I was starting to reach brain-full state as the fourth day of tutorials began. I got to spend a full day with Ted Ts'o this time, and it was an excellent full day of training on Linux Performance Tuning. Some stuff I knew, since I've been doing this for a while. But the methods that Ted discussed for triaging system and software behaviour, then using the resulting data to prioritize diagnostic activities, were very useful. This was a recurring theme at LISA '13 – go for the low-hanging fruit and the obvious stuff: check for CPU, disk, and network bottlenecks with quick commands before delving more deeply into one path. The seemingly obvious culprit may be a red herring. I plan on using the slide deck to construct a performance triage TWiki page at work.
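
My rough reconstruction of that "quick look before you dig" pass – a sketch from memory, not Ted's slide deck:

```
uptime                   # load averages: is the box even busy?
vmstat 1 5               # r vs. CPU count, si/so for swapping
iostat -x 1 5            # %util and await per disk: is storage the bottleneck?
sar -n DEV 1 5           # interface throughput: are we saturating a NIC?
top -b -n 1 | head -20   # who is actually eating the CPU and memory?
```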

I was in this tutorial when Bruce Schneier spoke (via Skype!) on “Surveillance, the NSA, and Everything.” Bummer.

This was also my last day of Tutorials. In the evening I attended the annual LOPSA meeting. Lots of interesting stuff there, follow the link to learn more about this useful and supportive organization. Yep, I’m a member.

Thursday, 7 November

Yay, today started with track problems on Metro, and an extra 45 minutes standing cheek-to-jowl with a bunch of random folks on a Red Line train.

This was a Technical Sessions and Invited Talks day for me. In the morning, Brendan Gregg presented Blazing Performance with Flame Graphs. Here’s a useful summary on Brendan’s blog. This was followed in the morning by Jon Masters of Red Hat talking about Hyperscale Computing with ARM Servers (which looks to be a cool and not unlikely path), and Ben Rockwood of Joyent discussing Lean Operations. Ben has strong opinions on the profession, and I always learn something from him.

In the afternoon, Brendan Gregg was in front of me again, pitching systems performance issues (and his new book of the same name). I continue to find Brendan's presentation style a bit over the top, but his technical chops and writing skills are excellent. This was followed by Branson Matheson (who was training me earlier in the week) on the subject of “Hacking your Mind and Emotions” – much about social engineering. Sigh, too easy to do. But Branson is so enthusiastic and excited about his work that … well, that's alright, then, eh?

The late afternoon pair of talks were on Enterprise Architecture Beyond the Perimeter (presented by a pair of talented Google Engineers), and Drifting into Fragility, by Matt Provost of Weta Digital. The former was all about authentication and authorization without the classical corporate perimeter – no firewall or VPN between clients and resources. Is it a legitimate client machine, properly secured and patched? With a properly authenticated user? Good, we're cool. How much security, authentication, and patching is required depends on the resource to be accessed. This seems a bit like a Google-scale problem… The latter talk, on fragility, was a poignant reminder of unintended dependencies and consequences in complex systems and networks.

The conference reception was on Thursday evening, but I took a pass, headed home, and went to bed early. I was getting pretty tired by this time.

Friday, 8 November

My early morning session had George Wilson of Delphix talking about ZFS for Everyone, followed by Mark Cavage of Joyent discussing Manta Storage System Internals. I use ZFS, so the first talk held particular interest for me, especially the information about how the disparate ZFS implementations are working to prevent fragmentation by utilizing Feature Flags. OpenZFS.org was also discussed. I didn’t know much about Manta except that it exists, but I know a bit more now, and … it’s cool. I don’t have a use, today, but it’s definitely cool.

The late morning session I attended was a two-fer on the topic of Macs at Google. They have tens of thousands of Macs, and effective image, deployment, and patch management was the subject of the first talk, presented by Clay Caviness and Edward Eigerman. Some interesting tools and possibilities, but at a scale far beyond my needs. The second talk, by Greg Castle, on Hardening Macs, was pertinent and useful for me.

After lunch, the two talks I attended were on “Managing Access using SSH Keys” by the original author of SSH, Tatu Ylönen, and “Secure Linux Containers” by Dan Walsh of Red Hat (and SELinux fame). Tatu pretty much read text-dense slides aloud to us, and confirmed that managing SSH key proliferation and dependency paths is hard. Secure Linux Containers remind me strongly of sparse Solaris Zones, so that’s how I’m fitting them into my mental framework. Dan also talked to us about Docker … a container framework that Red Hat is “merging” (?) with Secure Linux Containers … and said we (sysadmins) wouldn’t like Docker at all. Mmmmmm.
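
Tatu's talk did send me home with a to-do, though. A first, hedged pass at sizing the key-sprawl problem on a single box (the paths and search depth are guesses about a typical layout):

```
find /root /home -maxdepth 3 -name authorized_keys 2>/dev/null |
while read -r f; do
    printf '%s: %s keys\n' "$f" "$(grep -c -v '^[[:space:]]*\(#\|$\)' "$f")"
done
```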

The closing Plenary session, at about an hour and 45 minutes, was a caffeine-fueled odyssey by Todd Underwood, a Google Site Reliability Manager, on the topic of PostOps: A Non-Surgical Tale of Software, Fragility, and Reliability. Todd's a fun, if hyper, speaker. He's motivated and knows his stuff. But, like some others in the audience, I suspect that what happens at the scale of a GOOG-size organization may not apply so cleanly in the SMB space. The fact is that DevOps and NoOps may not work so well for us … though certainly the principles of coordinated work and automation strongly apply.

Brian’s Summary

At any given time, for every room I sat in, for every speaker or trainer I listened to, there were three other things that I would have also learned much from. This was my path through LISA ’13. There are many like it, but this one is mine. This conference was a net win for me in many ways – I learned a lot, I ran across some old friends (Hi, Heather and Marc), made some new ones, and had a good time.

The folks I can recommend without reservation – take a class from them, or attend a talk they're presenting: Jeanne Schock, Branson Matheson, Rik Farrow, and Ted Ts'o. These are the four people I learned the most from over the course of six days, and you'd learn from them, too!

My hat’s off to the fine staff at USENIX, who worked their asses off to make the conference work. Kudos!

Finishing a cabinet; Ch-ch-ch-changes a’coming.

Finishing the corner cabinet


I’m making progress, as you can see. This cabinet may be upstairs as early as Wednesday of the upcoming week. Depends if I can get enough coats of poly on the doors and shelves. Pictured above, I’m at the poly stage for the face and insides – the dark teal sides are already three coats and cured. After supper, I took those down, laid out the doors and shelves, and first-coated the backs. Tomorrow, a quick sanding and I’ll get the second coat on.

*      *      *

While I am not going to have the liberty to host sites that aren't mine, I'm migrating back to a personally administered system. $FIRM has graciously allowed me some bandwidth, 1RU of rack space, and an old R410. I've got Scientific Linux (the high-energy physics respin of RHEL) running on it. I'm doing this for reasons. REASONS, I tell you. Well, I'm not telling you, not now, anyway. There are likely to be format changes, too, though I'm going to maintain the blog format for convenience. But it may not be the front-line landing page anymore. What I do will be clear and documented, though.

This site is running from the new box, as are Daynotes.com and Daynotes.net. Speaking of the former, Daynotes.com is still “owned” by Tom Syroid. But since Tom appears to be staying offline, there's no way to transfer ownership. If anyone wants to pick up the ball this year and give Network Solutions some money to renew Daynotes.com before the site expires in mid-September, that'd be awesome. You don't need to have any formal access to renew (spend money) at NetSol, at least you didn't last time I did it myself. I've renewed it several times personally, but it'd be nice if someone who has found it useful steps up for a year or two. Let me know if you do, and you'll get public thanks, here and elsewhere.

Depending on the gardening potential tomorrow, I’m going to try to get Marcia’s sites migrated to the new box before the new week gets rolling. Now to walk the mutt in between rain bursts and then do a bit of remote system administration for work. Ciao!

Moving right along

First, for US visitors, Happy Thanksgiving. A weird holiday, to be sure, but it’s always good to be thankful for life, family, friends, and first world problems.

*      *      *

I'm posting from Linux again, for the first time in a long while. I'd been trying a variety of solutions for storage here, answers that didn't involve running a full-size system 24×7. I couldn't do it. You see, it isn't good enough to just back stuff up here at home. I'm not going to back up home data to a cloud somewhere on the Internet – our friendly government doesn't appear to respect the Fourth Amendment when it comes to online resources. So I don't keep email online. Well, I try not to, but I'll bet Google has it all anyway. But there are files and work I do here that I'm not willing to trust to another administrator and their devotion to security. So while I back up online stuff here, and I back up the home systems here, I need to get a copy of those backups offsite. Fire, theft, and other quirks of life are risks that need to be managed.

So, a weekly copy of the local backup, written to an encrypted disk, and driven to work … that’s a good answer. But when I stood down Slartibartfast, the old Linux server, and replaced him with a dLink NAS box … well, some things didn’t happen anymore. Automated backups of online properties – not happening. Trivially easy local and encrypted backups: neither trivial nor easy anymore. But I kept after it for a while, so that local systems could spin down, data could flow to the storage when it was available, and … I’d figure something out about the offsite.

That didn’t happen. Finally, I broke down a few months back and installed FreeNAS 8.mumble on one of the towers. Key needs: local AFP, SMB/CIFS, and NFS service. Scheduled tasks to pull backups from out in the world, so that problems there don’t kill our data forever. And encrypted backups to removable storage. Seems easy, right? And a dedicated local storage server STILL seemed like a better idea than toying with using a workstation ALSO as the storage server. Feh!

FreeNAS eventually solved everything but the removable storage problem … and the AFP service. The latter problem first: Apple presents a fast-moving target for their file services, and I want a networked Time Machine target. I could not get it working with the latest FreeNAS, so the dLink kept spinning. The former problem, and the more important one: while I could plug in a USB disk, write an encrypted ZFS file system to it, create the walkabout tertiary backup, and take the drive to the office … I could only do that once per boot. That is, to get FreeNAS to recognize a drive reinserted into the USB or eSATA connections, I had to reboot. Probably a failing of the non-enterprise support for hotplug … but a failing all the same.

This week, a “vacation” week for me, I’d had enough. I installed Scientific Linux 6.3, and got all of the above stuff working properly in less than a day. The ONLY thing I miss from FreeNAS (and this was a big driver for me) is ZFS. I *love* ZFS. Filesystem and volume management done properly, with superb snapshot capabilities – I LOVE ZFS. But I can’t have that, and everything else I want, so I’ve solved my problem.

Serenity boots and runs from a ~160GB SSD, and I have three 1TB drives in a software RAID5 serving as the data partition. It's all formatted EXT4. I have a SATA slide-in slot on the front of the system, so I can slot in a hard disk, give the crypto password, and have my offsite storage accessible for updating using rsync. Everything is working again. I can spin down that dLink, and decide what its fate is, one of these days. I also don't need Dortmunder, the Raspberry Pi, running as my SSH and IRSSI landing “box” anymore. That I will find another use for – I can play with it now. And I'll cautiously update and maintain this system. Frankly, I'm happier with it running Scientific Linux – the stability of a RHEL derivative is good.
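
For posterity, the weekly offsite run now looks roughly like this – a hedged sketch; the device, mapper name, and paths are placeholders for my actual setup:

```
cryptsetup luksOpen /dev/sdd1 offsite       # prompts for the passphrase
mount /dev/mapper/offsite /mnt/offsite

rsync -aH --delete /srv/backups/ /mnt/offsite/backups/

umount /mnt/offsite
cryptsetup luksClose offsite                # then the disk rides to the office
```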

Now to figure out why I can’t get my external SSH port open again… Thanks, Netgear, for giving me one more problem to solve on my “vacation.”

Oh, and finally: a good disk management GUI for Linux:

Gnome Disk Utility


Gnome Disk Utility – I don’t often prefer a GUI, but managing complex storage, which may involve hardware or software RAID, LVM, encryption, and more … well, the visibility of this utility makes me happy. Thanks to Red Hat for writing it.

Marcia’s back (again) & Linux reloaded

Today, I relaxed a bit. Shopping in the morning, a bit of Top Gear UK during the day, and I picked Marcia up at BWI around 1500 EST. Happy dog is happy, and so am I. The holiday bird is in the fridge, I’ve got a tray of mac-n-cheese ready, and … we’ll see how the table ends up.

Tonight, I blew out the FreeNAS installation, and installed Scientific Linux 6.3 x64 on the box still known as Serenity. I had a lot of trouble getting things working right, and there are issues with offsite backups that are much more easily solved with a Linux at the helm. Instead of returning to the Ubuntu way, I figured one of the RHEL retreads would be a good way to go – I’ve got to re-certify in the next few months, and more practice is good.

*      *      *

Our condolences to the families, friends, and units of these fallen warriors:

  • Capt. James D. Nehl, 37, of Gardiner, Oregon, died Nov. 9, in Ghazni Province, Afghanistan, from small arms fire while on patrol during combat operations.
  • Sgt. Matthew H. Stiltz, 26, of Spokane, Washington, died Nov. 12, at Zerok, Afghanistan, of wounds suffered when insurgents attacked his unit with indirect fire.
  • Staff Sgt. Rayvon Battle Jr., 25, of Rocky Mount, North Carolina, died Nov. 13, in Kandahar Province, Afghanistan.
  • Sgt. Channing B. Hicks, 24, of Greer, South Carolina, died Nov. 16, in Paktika province, Afghanistan, from injuries suffered when enemy forces attacked his unit with an improvised explosive device and small arms fire.
  • Spc. Joseph A. Richardson, 23, of Booneville, Arkansas, died Nov. 16, in Paktika province, Afghanistan, from injuries suffered when enemy forces attacked his unit with an improvised explosive device and small arms fire.

Linux remodel, OpenIndiana build 151a, Node.js, and the DTrace Book

Lots of computing updates going on. It all started last week …

*     *     *

It was the Thursday before Christmas, or Wednesday perhaps, the details blur just a bit. I've not been using the Linux box formerly known as Slartibartfast as a desktop machine for quite a while now. My old MacBook Pro got refurbished with a small-ish SSD drive, and that's the primary desktop system these days. It sits in a custom upright support that I created for the purpose a couple of years ago, and finally put to use.

Darlion, the sedentary MacBook Pro


Darlion — the OS X Lion-enabled former Darla — sits forlorn at home each day while the Air, known as Agog, travels with me now. But that's another story. Anyway, the Ubuntu Linux box needed a shedload of updates, so I let it update. Ahem. That was a mistake.

When I was done, the system no longer booted properly. I'd managed to snag not a set of updates for my system, but a distribution upgrade to the latest and greatest 'buntu. That's all well and good, but I had lots of system-level customizations, especially on the networking side, that simply didn't work anymore. Ethernet devices were renamed, the bloody network manager thing from Hell made a reappearance, and other stuff related to dbus and udev flatlined. That I was unhappy is an understatement, especially since it's still my fault. I managed to take that system from a functional desktop that operated most of the time as a fairly reliable home server to a flaky piece of crap that didn't boot. Me, I did this.

It’s ten o’clock at night on a working evening … I’m not getting this fixed today. Marcia’s nightly backups can skip a night, so can my nightly backups from the web (I back up our webs, MySQL databases, etc. every night into a rolling pattern that lets me restore at intervals back at least 60 days). So the backups just fail out overnight, and by Friday evening, I had time to do the work. Or so I thought.
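(As an aside, that nightly rolling backup is nothing fancy – roughly this shape, heavily simplified; the host, paths, and slot scheme here are placeholders, not my real setup.)

```
STAMP=$(date +%u)    # day-of-week slot; weekly and monthly slots extend the reach to 60+ days

ssh webhost 'mysqldump --all-databases | gzip' > /srv/backups/mysql-"$STAMP".sql.gz
rsync -az --delete webhost:/var/www/ /srv/backups/www-"$STAMP"/
```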

I tried to get an ISO for Ubuntu 10.04 LTS (the long-term support version) that would install. By around 2300 that night, I was ready to adjust the system with the aluminum LART [1] I keep in the house. I walked away, and re-approached the problem in the morning. Finally, on the fifth optical disc, and following two failed tries with USB media, I got Ubuntu Server 11.10 installed. That's good for three years' worth of security updates, and maybe I'll have migrated to something else before then. I thought hard about OpenIndiana … but that's the next chapter in the story.

*     *     *

Since I was rebuilding the system from scratch, I backed up the data I cared about separately from the normal weekly backups onto a pair of disks that weren’t part of the restructuring. I then dismantled both midsize towers, at least as far as storage was concerned.

For the purposes of conversation, let's refer to these machines by the names they assumed at the end of the process: Serenity, the Ubuntu Linux home server, and Hellboy, the OpenIndiana build 151a server and Gaming OS box. Both have quad-core processors (but Hellboy's is a bit faster, and has VT extensions, for later experimentation with Zones and KVM). Both have plenty of RAM, at 4G and 8G respectively.

I decommissioned the PCIe x1 3Ware RAID card out of Serenity, and pulled the two 750G drives out of that system. I also pulled three 1TB drives and a 500G drive out of Hellboy. All I left there was the 500G Windows 7 system disk. I put two of those 1TB drives into Serenity, and built them into a software RAID 1 mirror set, which is fine for my purposes, and removed the dependency on the “custom” 3Ware RAID card. The performance hit for the purposes of this machine is negligible.
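
Approximately how that mirror went together – a sketch, with illustrative device names rather than exactly what Serenity uses:

```
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
mkfs.ext4 /dev/md0
mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # so the array assembles at boot
cat /proc/mdstat                                 # watch the initial resync tick along
```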

The Ubuntu install on Serenity is fine, and everything works. Why didn't I go with Red Hat or a derivative? I've got current scripts with dependencies on packages that are trivial to acquire and install on Ubuntu, and I wanted this done before Christmas. Like I said, later. I configured the DNS, Samba, NTP, and SSH services that Serenity provides, transcribing configs and updating as necessary from my backups. Then I restored the 500G or so of userland data, and nearly everything was working again. I had to do some tuning on Marcia's box to make backups work again, and modify some of her mapped drives to be happy with the new system, but that took no time at all. Putting the newer, larger drives into Serenity was actually a power-draw win, too! That system is only pulling about 70 watts at idle, where it was nearly 90 watts with the older drives and RAID card in play.

*     *     *

Next I reinstalled OpenIndiana build 151a onto Hellboy. This time, Hellboy got the two 750G drives as a single ZFS rpool mirror set, and that’s the extent of that system. It’s running, I can experiment with Zones and DTrace and Node.js there, and it doesn’t need to be running 24/7.

Why OpenIndiana? It's one of the distributions of Illumos, the carrier of the OpenSolaris torch after Oracle abandoned that codebase in 2010. Do you want more Solaris history than that, leading up to what happened? Watch Bryan Cantrill's Fork, Yeah! presentation from LISA 2011. What an awesome talk! Still, why OpenIndiana? I really like Solaris, but I don't really want to spend the $2K/year which is the only way to legally license and keep updated Solaris on non-Sun/Oracle hardware. I want a Solaris playspace at home, and OpenIndiana provides that. And if the rumors are true – that internally at Oracle, Solaris is really just being treated as firmware for Oracle storage and database appliances – then the only general-purpose computing inheritor of the Solaris codebase will be something evolved from/through Illumos. DTrace is cool. ZFS is über-cool. Zones are super-cool. And I want to play there, in my “spare time.”
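
First toys queued up for Hellboy – a hedged sketch; the zone name and path are placeholders:

```
# Which programs are making the most system calls right now?
dtrace -n 'syscall:::entry { @[execname] = count(); }'

# A minimal throwaway zone, just to poke at:
zonecfg -z play <<'EOF'
create
set zonepath=/zones/play
commit
EOF
zoneadm -z play install
zoneadm -z play boot
```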

*     *     *

Node.js and the DTrace book. That’ll have to wait for a pending post, I want supper! Ciao!

[1] LART – Luser Attitude Realignment Tool, in this case an aluminum baseball bat.