15 October 2017

On the face of it, three-day work weeks are pretty nice. But I stay busy, and cramming a week’s worth of productivity into three days is less fun than it sounds. That said, I got done what needed doing, and the coming week is prepped and ready.

*      *      *

Last night, we went back for a second round of Annapolis Shakespeare’s production of Much Ado About Nothing. With 17 actors and a two-story set, there’s always more going on than one can take in at a single sitting. Since opening night, the actors have really settled into their roles, and we enjoyed it even more, if that were possible. They’ve been getting stellar reviews all over the place and I can only say this: If you’re in the area, there are nine more performances of this show: today’s matinee and four shows each of the next two weekends. Get tickets and go!!!

*      *      *

The daylight hours yesterday were full, too. Much of the day, I puttered with virtualization on my main home server, a FreeBSD 11.1 box that handles internal SMB, internal IMAP, backups, and virtual machine hosting. When I started with virtualization on the system, I was using Oracle’s VirtualBox product, because the price is right (free, as in beer), and it’s easy, easy to set up and use. But easy isn’t always my primary goal. So I’ve been experimenting with the native virtualization tool on FreeBSD: bhyve.

“bhyve, the ‘BSD hypervisor’, pronounced ‘beehive’, is a hypervisor/virtual machine manager developed on FreeBSD.”

I used the appropriate section of the FreeBSD Handbook for guidance. As such things go, it’s relatively simple to stand up FreeBSD virtual guests, and a bit trickier for Linux guests. I’ll document some of the fun I had with that here, because there are gotchas that aren’t covered in the Handbook.
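One prerequisite worth noting: grub-bhyve, which I use below to boot the Linux guest, isn’t part of the FreeBSD base system. Here’s a minimal sketch of pulling it in from packages (the port, if you’d rather build it, is sysutils/grub2-bhyve):

# grub-bhyve lives in the grub2-bhyve package/port
pkg install grub2-bhyve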

The Setup

I’m going to build an Ubuntu 17.04 virtual machine (VM), using a ZFS volume as a datastore. The use of ZFS is recommended for performance reasons. There are other advantages, too, like the ability to make quick clones of a VM. More on that later. So, my configuration is this:

root@serenity:// > ls /data/bhyve
images iso
root@serenity:~/ > zfs list zroot/data/vmimages
NAME                  USED  AVAIL  REFER  MOUNTPOINT
zroot/data/vmimages  52.9G  1.07T    96K  /data/vmimages

/data/bhyve/images is actually where I keep the runtime configuration and startup scripts for virtual machines.

/data/bhyve/iso is the repository for CD images for installation of virtual machines.

The ZFS path zroot/data/vmimages is the parent for all of my virtual machine disks.
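If you’re reproducing this layout, the setup amounts to a couple of directories plus a parent ZFS dataset; a minimal sketch, assuming a zroot pool arranged like mine (adjust pool and path names to taste):

# Directories for VM configs/scripts and for installer ISOs
mkdir -p /data/bhyve/images /data/bhyve/iso
# Parent dataset for all the VM disk volumes (assumes zroot/data already exists)
zfs create -o mountpoint=/data/vmimages zroot/data/vmimages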

I’ve also already done the initial networking setup with bridge and tap0 interfaces, per the Handbook sub-section, “Preparing the Host.”
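For completeness, that prep boils down to loading the vmm kernel module, creating a tap and a bridge, and making it all persistent. A minimal sketch along the lines of the Handbook; em0 here is a stand-in for whatever the host’s physical NIC really is:

kldload vmm                                # bhyve's kernel module
ifconfig tap0 create
sysctl net.link.tap.up_on_open=1           # bring a tap up when a VM opens it
ifconfig bridge0 create
ifconfig bridge0 addm em0 addm tap0 up     # bridge the tap to the physical NIC

# ...and persist it across reboots:
sysrc cloned_interfaces+="bridge0 tap0"
sysrc ifconfig_bridge0="addm em0 addm tap0 up"
echo 'net.link.tap.up_on_open=1' >> /etc/sysctl.conf
echo 'vmm_load="YES"' >> /boot/loader.conf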


Create and check the VM disk:

root@serenity:/data/bhyve/images/ > zfs create -V16G -o volmode=dev zroot/data/vmimages/ub1704new
root@serenity:/data/bhyve/images/ > ls -al /dev/zvol/zroot/data/vmimages/ub1704new
crw-r----- 1 root operator 0x9b Oct 15 13:59 /dev/zvol/zroot/data/vmimages/ub1704new

With the disk volume in place, I can create the device map file, which maps (hd0) to the path of the new disk volume I just created, and (cd0) to the path of the ISO file (vim is the text editor I use):

root@serenity:/data/bhyve/images/ > vim ub1704new-device.map
root@serenity:/data/bhyve/images/ > cat ub1704new-device.map
(hd0) /dev/zvol/zroot/data/vmimages/ub1704new
(cd0) /data/bhyve/iso/ubuntu-17.04-server-amd64.iso

Note that when a VM is running, or has been run, it has an entry in the device tree at /dev/vmm. Normally, one must “destroy” that entry before the VM can be started or restarted (seems clunky, but there it is). But because this VM has never been run, there should be no corresponding device file at /dev/vmm/ub1704new. I’ll check that, then create the VM using the grub-bhyve tool, which prepares the boot environment for the VM:

root@serenity:/data/bhyve/images/ > ls /dev/vmm/ub1704new
ls: /dev/vmm/ub1704new: No such file or directory

root@serenity:/data/bhyve/images/ > grub-bhyve -m ub1704new-device.map -r cd0 -M 1024M ub1704new
GNU GRUB version 2.00

|Install Ubuntu Server                                                     |
|OEM install (for manufacturers)                                           |
|Install MAAS Region Controller                                            |
|Install MAAS Rack Controller                                              |
|Check disc for defects                                                    |
|Rescue a broken system                                                    |
|                                                                          |
|                                                                          |

Use the ^ and v keys to select which entry is highlighted.
Press enter to boot the selected OS, `e' to edit the commands
before booting or `c' for a command-line.

root@serenity:/data/bhyve/images/ > ls /dev/vmm/ub1704new
/dev/vmm/ub1704new

The “Install Ubuntu Server” line was highlighted, so I simply pressed the Enter key to accept that option. Disconcertingly, one is then dropped right back onto the command line. This is expected, however. And as you can see, we now have a VM entry for the new guest under /dev/vmm.

The next gotcha is this: there has to be a free tapN interface for the VM to attach to. The documentation wasn’t really clear on that; I think I had assumed that multiple VMs could attach to a single tap interface. In reality, think of the bridge interface as the virtual switch, and each tap interface as a port on that switch. So, let’s check whether tap0 is in use:

root@serenity:/data/bhyve/images/ > ifconfig | egrep "^tap[0-9]+:"
tap0: flags=8902<BROADCAST,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
tap1: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
tap2: flags=8902<BROADCAST,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500

As you can see, I’ve run into this problem already, and have a couple of spare taps available. This output shows that of the three tap interfaces, tap0 and tap2 are available, while tap1 is in use (see the word UP in the flags). For the purposes of this exercise I’ll just use tap0. But it’s trivial to add more tap devices on the fly, and to add them to the /etc/rc.conf file so that they are present for future runs. In a super-happy world, my VM automation script will look for any available tap device, and use one if found, otherwise dynamically add yet another one and use it. But that’s another post.
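For reference, adding another “port” to the virtual switch by hand is just this; tap3 is a hypothetical next-free number, and bridge0 matches the Handbook setup:

ifconfig tap3 create
ifconfig bridge0 addm tap3       # plug the new port into the virtual switch
sysrc cloned_interfaces+="tap3"  # persist the tap across reboots
# ...and remember to add 'addm tap3' to the ifconfig_bridge0 line in /etc/rc.conf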

Install Time

So, it’s time to start the VM for the first time. Important note: set the amount of memory for the bhyve run to match the amount set with grub-bhyve, or errors ensue. Observe that the memory setting for grub-bhyve above uses the -M flag and a trailing M; the bhyve command uses a lowercase -m flag, and megabytes are assumed when no suffix is given.

I’m going to give the VM two processors (it can certainly take advantage of two, even during the installation):

root@serenity:/data/bhyve/images/ > bhyve -c 2 -m 1024 -H -P -A -s 0:0,hostbridge -s 1:0,lpc  \
 -s 2:0,virtio-net,tap0 -l com1,stdio -s 3,ahci-cd,/data/bhyve/iso/ubuntu-17.04-server-amd64.iso \
 -s 4,virtio-blk,/dev/zvol/zroot/data/vmimages/ub1704new ub1704new

  ┌───────────────────────┤ [!!] Select a language ├────────────────────────┐
  │                                                                         │
  │ Choose the language to be used for the installation process. The        │
  │ selected language will also be the default language for the installed   │
  │ system.                                                                 │
  │                                                                         │
  │ Language:                                                               │
  │                                                                         │
  │                               C                                         │
  │                               English                                   │
  │                                                                         │
  │  <Go Back>                                                              │
  │                                                                         │

 <Tab> moves;  <Space> selects;  <Enter> activates buttons

And so starts the text-mode Ubuntu installer. I’m going to assume you can find your way through it, or find useful directions on the interwebs. A couple of installation tips:

  • The installer configures networking using DHCP by default. It’s easy to change to a static IP later, if desired.
  • Hostname entry – I generally use the name of the virtual machine I created. It’s just easier to keep straight in my head that way.
  • Partitioning – I’ve gone with “Guided – use entire disk and set up LVM”, but there are repercussions down the line. Manual isn’t hard, but can be confusing if you’ve not done much manual partitioning. LVM is a good choice because you can later add more disk space to the volume(s) without even rebooting the system (see the sketch after this list).
  • Automatic updates – These can be a good idea, some of the time. But with servers, I tend to have process around patching, booting, and testing, so I selected No Automatic Updates.
  • Software Selection – The only important choice for me at system installation is OpenSSH server: I need this to remotely administer any system: local or remote, physical or virtual.
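On that LVM point (promised above): growing a guest’s disk later is a two-step dance, enlarging the zvol on the host, then growing the partition, PV, LV, and filesystem inside the guest. A rough sketch only; the 32G figure is arbitrary, the volume group names come from this install, and you should double-check device names before running any of it:

# On the FreeBSD host: grow the backing zvol
zfs set volsize=32G zroot/data/vmimages/ub1704new

# Inside the Ubuntu guest (the virtio disk shows up as /dev/vda):
# first grow the partition that holds the LVM PV (fdisk/parted), then:
pvresize /dev/vda1                               # tell LVM the PV got bigger
lvextend -l +100%FREE /dev/ub1704new-vg/root     # hand the space to the root LV
resize2fs /dev/mapper/ub1704new--vg-root         # grow ext4, online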

Once the installer is done, there’s at least one more trick up my sleeve…

But first, we have to “destroy” the remnants of the prior run, then re-run grub-bhyve to figure out what our root and boot devices are:

root@serenity:/data/bhyve/images/ > bhyvectl --destroy --vm=ub1704new
root@serenity:/data/bhyve/images/ > grub-bhyve -m ub1704new-device.map -r hd0 -M 1024M ub1704new
grub> ls
(hd0) (hd0,msdos1) (cd0) (cd0,apple2) (cd0,apple1) (cd0,msdos2) (host) 
(lvm/ub1704new--vg-swap_1) (lvm/ub1704new--vg-root)
grub> ls (hd0)/
error: unknown filesystem.
grub> ls (hd0,msdos1)/
error: unknown filesystem.
grub> ls (lvm/ub1704new--vg-root)/
lost+found/ etc/ media/ bin/ boot/ dev/ home/ lib/ lib64/ mnt/ opt/ proc/ root/ run/ 
sbin/ srv/ sys/ tmp/ usr/ var/ initrd.img vmlinuz snap/
grub> cat (lvm/ub1704new--vg-root)/etc/fstab
/dev/mapper/ub1704new--vg-root / ext4 errors=remount-ro 0 1
/dev/mapper/ub1704new--vg-swap_1 none swap sw 0 0

And there’s the information we need to configure a file to prime grub automatically, but first, let’s get this system running for the first time after installation:

grub> linux (lvm/ub1704new--vg-root)/vmlinuz root=/dev/mapper/ub1704new--vg-root
grub> initrd (lvm/ub1704new--vg-root)/initrd.img
grub> boot
root@serenity:/data/bhyve/images/ >

That’s our prep done; now to run the machine:

root@serenity:/data/bhyve/images/ > bhyve -c 2 -m 1024 -H -P -A -s 0:0,hostbridge -s 1:0,lpc \ 
> -s 2:0,virtio-net,tap0 -l com1,stdio -s 4,virtio-blk,/dev/zvol/zroot/data/vmimages/ub1704new ub1704new
Ubuntu 17.04 ub1704new ttyS0

ub1704new login: bilbrey
Welcome to Ubuntu 17.04 (GNU/Linux 4.10.0-19-generic x86_64)

The next step is to update the freshly built system with current packages and security updates, because the CD and DVD images are not respun every time a package changes:

bilbrey@ub1704new:~$ sudo su -
[sudo] password for bilbrey: 
root@ub1704new:~# apt update && apt upgrade -y
root@ub1704new:~# sync
root@ub1704new:~# sync
root@ub1704new:~# shutdown -h now

With that done, I’ll now create a couple of files to make startup much easier – a file to feed grub-bhyve what it needs, and a quick-and-dirty shell script to automate all the startup options and run the VM:

root@serenity:/data/bhyve/images/ > vim ub1704new-grub.in  # pull together our grub info from the first startup...
root@serenity:/data/bhyve/images/ > cat ub1704new-grub.in
set root=(lvm/ub1704new--vg-root)
linux /vmlinuz root=/dev/mapper/ub1704new--vg-root
initrd /initrd.img

root@serenity:/data/bhyve/images/ > vim start_ub1704new.sh  # shell script to config and run 
root@serenity:/data/bhyve/images/ > cat start_ub1704new.sh
#!/bin/sh
# Per-VM settings
imgname="ub1704new"
imgpath="/dev/zvol/zroot/data/vmimages/${imgname}"
cpus=2
mem=2048
tap="tap0"

stkargs="-H -P -A -s 0:0,hostbridge -s 1:0,lpc -s 2:0,virtio-net,${tap} -l com1,stdio"

cd /data/bhyve/images
bhyvectl --destroy --vm=${imgname}  # Clean up prior run
grub-bhyve -r hd0 -m ${imgname}-device.map -M ${mem}M ${imgname} < ${imgname}-grub.in  # prep grub boot
bhyve -c ${cpus} -m ${mem} ${stkargs} -s 4,virtio-blk,${imgpath} ${imgname}  # Run the VM

root@serenity:/data/bhyve/images/ > chmod 700 start_ub1704new.sh  # Make the script runnable (by root)

All done, now I can just start the VM:

root@serenity:/data/bhyve/images/ > ./start_ub1704new.sh
Ubuntu 17.04 ub1704new ttyS0

ub1704new login: bilbrey
bilbrey@ub1704new:~$ sudo su -
[sudo] password for bilbrey: 
root@ub1704new:~# sync
root@ub1704new:~# sync
root@ub1704new:~# shutdown -h now

Making Copies and Clones

Okay, a simple script run to start up the VM. That’s good. But we’ve put in a fair bit of work on this VM; what if I want more of exactly that? I can use ZFS utilities to clone the VM image, do a couple of edits in copies of the files we just created, and have one or more copies without all the installation effort and pain. Here goes:

root@serenity:/data/bhyve/images/ > zfs list -rt all zroot/data/vmimages/ub1704new
NAME                            USED  AVAIL  REFER  MOUNTPOINT
zroot/data/vmimages/ub1704new  16.5G  1.07T  3.32G  -

root@serenity:/data/bhyve/images/ > zfs snapshot zroot/data/vmimages/ub1704new@copy1

root@serenity:/data/bhyve/images/ > zfs clone zroot/data/vmimages/ub1704new@copy1 zroot/data/vmimages/ub1704copy1

root@serenity:/data/bhyve/images/ > zfs list -rt all zroot/data/vmimages
NAME                                  USED  AVAIL  REFER  MOUNTPOINT
zroot/data/vmimages                  72.9G  1.05T    96K  /data/vmimages
zroot/data/vmimages/ub1704copy1         8K  1.05T  3.32G  -
zroot/data/vmimages/ub1704new        19.8G  1.07T  3.32G  -
zroot/data/vmimages/ub1704new@copy1      0      -  3.32G  -

root@serenity:/data/bhyve/images/ > zfs get origin zroot/data/vmimages/ub1704copy1
NAME                             PROPERTY  VALUE                                SOURCE
zroot/data/vmimages/ub1704copy1  origin    zroot/data/vmimages/ub1704new@copy1  -

[* Editor’s note – Updated above to add the zfs snapshot command, which did not survive the original cut and paste]

This read/write clone, ub1704copy1, takes about as long as it takes to run the snapshot and clone commands – no time at all, really. But it will be dependent on the snapshot (see the output of the zfs get origin command), and not an independent copy of the VM. So for quick-and-dirty testing, this is a great tool. If, on the other hand, you want to make use of that snapshot to make a long-lived copy of the VM, use the ZFS send/receive functionality:

root@serenity:/data/bhyve/images/ > zfs send zroot/data/vmimages/ub1704new@copy1 \
 | zfs receive zroot/data/vmimages/ub1704copy2

root@serenity:/data/bhyve/images/ > zfs list -rt all zroot/data/vmimages
NAME                                         USED  AVAIL  REFER  MOUNTPOINT
zroot/data/vmimages                         76.2G  1.05T    96K  /data/vmimages
zroot/data/vmimages/ub1704copy1                8K  1.05T  3.32G  -
zroot/data/vmimages/ub1704copy2             3.32G  1.05T  3.32G  -
zroot/data/vmimages/ub1704copy2@copy1           0      -  3.32G  -
zroot/data/vmimages/ub1704new               19.8G  1.06T  3.32G  -
zroot/data/vmimages/ub1704new@copy1             0      -  3.32G  -

root@serenity:/data/bhyve/images/ > zfs get origin zroot/data/vmimages/ub1704copy2
NAME                             PROPERTY  VALUE   SOURCE
zroot/data/vmimages/ub1704copy2  origin    -       -

root@serenity:/data/bhyve/images/ > zfs destroy zroot/data/vmimages/ub1704copy2@copy1

Note that the send/receive ALSO copied the snapshot, so I disposed of the copied snapshot… The send/receive took a couple of minutes for this small VM. A much larger VM would take a correspondingly longer time. Let’s create the scripts to run ub1704copy2:

root@serenity:/data/bhyve/images/ > cp ub1704new-grub.in ub1704copy2-grub.in
root@serenity:/data/bhyve/images/ > cp ub1704new-device.map ub1704copy2-device.map
root@serenity:/data/bhyve/images/ > cp start_ub1704new.sh start_ub1704copy2.sh

root@serenity:/data/bhyve/images/ > vim *ub1704copy2*

root@serenity:/data/bhyve/images/ > diff start_ub1704new.sh start_ub1704copy2.sh
< imgname="ub1704new" 
> imgname="ub1704copy2"
< mem=2048
< tap="tap0" 
> mem=4096
> tap="tap2"

root@serenity:/data/bhyve/images/ > diff ub1704new-device.map ub1704copy2-device.map
< (hd0) /dev/zvol/zroot/data/vmimages/ub1704new 
> (hd0) /dev/zvol/zroot/data/vmimages/ub1704copy2

root@serenity:/data/bhyve/images/ > diff ub1704new-grub.in ub1704copy2-grub.in

So, no changes to the grub.in file, as all things are the same, including the name of the LVM filesystem that is root. Remember, even though the VM is now ub1704copy2, it’s a copy of ub1704new, and will be until we run it, change the hostname, and make it different.

The device.map file has to change to point to the new ZFS volume, but that’s all.

And for the start_ub1704copy2.sh file, I really only had to change the imgname variable to make everything work. But I also bumped the memory up to 4G, and changed the network device to tap2, so that new and copy2 could run simultaneously. Now let’s boot copy2, change the hostname, and boot it again:

root@serenity:/data/bhyve/images/ > ./start_ub1704copy2.sh
ub1704new login: bilbrey
bilbrey@ub1704new:~$ sudo su -
[sudo] password for bilbrey:
root@ub1704new:~# vim /etc/hostname
root@ub1704new:~# cat /etc/hostname
ub1704copy1
root@ub1704new:~# sync
root@ub1704new:~# sync
root@ub1704new:~# shutdown -h now

root@serenity:/data/bhyve/images/ > ./start_ub1704copy2.sh
Ubuntu 17.04 ub1704copy1 ttyS0

ub1704copy1 login: bilbrey
bilbrey@ub1704copy1:~$ ip addr show dev enp0s2
2: enp0s2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:a0:98:27:32:75 brd ff:ff:ff:ff:ff:ff
    inet brd scope global enp0s2
       valid_lft forever preferred_lft forever
    inet6 fe80::2a0:98ff:fe27:3275/64 scope link 
       valid_lft forever preferred_lft forever

Okay, we’re running in copy2, renamed the host, and we have an IP address. Let’s start up ub1704new, and ping the copy:

root@serenity:/data/bhyve/images/ > ./start_ub1704new.sh
Ubuntu 17.04 ub1704new ttyS0

ub1704new login: bilbrey
bilbrey@ub1704new:~$ ip addr show dev enp0s2
2: enp0s2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:a0:98:d4:48:eb brd ff:ff:ff:ff:ff:ff
    inet brd scope global enp0s2
       valid_lft forever preferred_lft forever
    inet6 fe80::2a0:98ff:fed4:48eb/64 scope link 
       valid_lft forever preferred_lft forever

bilbrey@ub1704new:~$ ping
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=64 time=0.889 ms
64 bytes from icmp_seq=2 ttl=64 time=0.652 ms
bilbrey@ub1704new:~$ ssh
The authenticity of host ' (' can't be established.
ECDSA key fingerprint is SHA256:yARJTbiR8K2S1pTrYZ8xdDZawGMVqtukB3th2cf1Zjw.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '' (ECDSA) to the list of known hosts.
[email protected]'s password: 
Last login: Sun Oct 15 20:26:57 2017

There we go. The clones and copies are super fast and easy. I’m told by the interwebs that there are tools called vm-bhyve and iohyve that might be useful, but those are for another day.

*      *      *

DoD announced no new casualties in the last week. Ciao!


3 Feb 2017

Another interesting week near the heart of power. Well, when I say “heart”, I mean corroded hunk of radioactive tin encased in an orange waste of skin. Ah, well. One does what one can while watching the wreck of trains, above and below.

In the meantime, I managed to get Kubuntu installed on my old Mac Air (2011). The install was fairly trivial, just a couple of trips to the search engines to get me over the occasional install hump. Everything but the Thunderbolt port works flawlessly, and here it sits next to its new big brother:

AirBuntu next to the new-ish MBP


The primary failing of the Air was battery life – it had a semi-useful 2 hours’ worth, which sucked when I found myself stranded in Columbus without a power brick last fall. The other main issue is the screen. In the last 6 years, my eyes appear to have aged about 10, and with the amount of information I like to keep on screen, the larger, higher-resolution MBP is just better. Let’s be clear: compared to the Air, the Retina screen on the MacBook Pro is glorious. Oh, and a much faster processor doesn’t hurt at all, either. The Air will serve well as a conference laptop. The MBP is a superb work machine for me. All I have to do is get used to floating my palms off that bloody huge touchpad.

2015 Nov 29

LISA 15 Report

The LISA 2015 conference was held this year at the Washington Marriott Wardman Park, off Connecticut Avenue in northwest DC. It’s 15 miles from home, but the best driving time I had was Wednesday (Veterans Day) morning, which took half an hour, and the worst was a bit over 1.5 hours, coming home in weeknight traffic, in the rain. It’s a nice venue, though I’ve never stayed there, only attended events.

Saturday, 11/7

Saturday night was badge pickup and opening reception. I attended that mostly to do a handoff of the give-away items for the LOPSA general business meeting. Because I’m local, I volunteered to be a drop ship site for stuff that arrived over the course of the month leading up to LISA. That evening, I made contact with LOPSA’s President, Chris Kacoroski (‘Ski’), and we grabbed a couple of other willing bodies and emptied out my trunk, which was chock-full of Lego kits, books, booth collateral, etc. An hour or two of chatting with early-arriving attendees, then I headed back home to get an early bedtime – I was facing a long week.

Sunday, 11/8

Sunday was the first of three consecutive days of tutorials. In the morning, I attended a half-day session presented by Chris McEniry on the topic of Go for Sysadmins. Go was developed at Google, and released under an open source license in 2009. To my eye, it combines some of the best features of C, Python, and Java (but the FAQ says that Pascal has a strong influence – it’s been a long, long time). With larger data sets to work with each passing year, a faster and better language seems to be a useful tool for the continuously learning system administrator, and Go provides that sort of tool. Chris was an excellent presenter, and his examples and supporting code were pertinent and useful. Effective? Yep, I want to learn more about Go … in my copious spare time.

Sunday afternoon was all about Software Testing for Sysadmin Programs, presented by someone I’ve known for a few years now, Adam Moskowitz. Adam is a pleasant bloke, and like everyone at LISA, smart as all get out. He makes the valid point that all of the tools that we encourage our programmers to use, from version control to testing and deployment automation, belong in our toolbox as well. And for UNIX-ish sysadmins, lots of stuff is written in shell. Adam developed a suite of tools based on Maven, Groovy, and Spock, and gave us a working configuration to test code with. Impressive and useful. Now all I have to do is do it!

In the evening, I hung out for a bit for what’s called the “Hallway Track”, which is all of the non-programmed activities from games to BoF (Birds of a Feather) sessions, to conversations about employers, recruiting, tools, and users. Always fulfilling, the hallway track.

Monday 11/9

On Monday, I over-committed myself. Caskey L. Dickson was putting on a full-day tutorial on Operating System Internals for Administrators (a shortened version of the actual title). I attended the morning session of that, which was awesome. One would suspect that hardware is so fast that it just doesn’t matter so much anymore. But it turns out that such things as memory affinity in multi-socket, multi-core systems can have significant performance impacts if the load isn’t planned well. And while storage is getting faster, so are busses and networks. The bottlenecks keep moving around and we can’t count on knowing what to fix without proper metrics. Caskey presents an excellent tutorial; it’s actually in some senses a prerequisite for the Linux Performance Tuning tutorial that Ted Ts’o does (I’ve attended that in years past). I would have stuck around for the second half-day of Internals, but…

Instead, I attended a half-day tutorial called systemd, the Next-Generation Linux System Manager, presented by Alison Chaiken. I learned a lot about the latest generation of system manager software that’s taken over from the System V init scripts model that ruled for the last few decades. While change is always a PITA, and there are definitely people who vehemently dislike systemd, I find that (A) I have to use it in my work, so I should learn more; and (B) there are features that I really quite like. Alison knows a lot about the software and the subject, and helped me understand where I needed to fill in the gaps in my systemd education.

Tuesday 11/10

For me, Tuesday was all about Docker. Until not that long ago, I’d have been managing one service (or suite of services) on a given piece of hardware. Programs ran on the Operating System, which ran on the hardware, which sat in the rack in the data center, mostly idle but with bursts of activity. Always burning electricity, and needing cooling, a growing workload meant adding new racks, more cooling, more electric capacity. In the last decade, virtualization has taken the data center by storm. Where once a rack full of 2U servers (2U stands for the vertical space that the server takes up in the rack – most racks have 42 U {units} of space, and servers most commonly are 1, 2 or 4 U) sat mostly idling, we now have a single more powerful 2U or 4U server that runs software like VMware’s ESXi hypervisor, Microsoft’s Hyper-V, or Xen/KVM running on a Linux host. On “top” of those hypervisors, multiple Operating System installs are running, each providing their service(s) and at much higher density. Today’s high-end 2U server can provision as much compute capacity as a couple of racks worth of servers from 5-10 years ago. It’s awesome.

But that’s so … yesterday. Today, the new hotness is containers, and Docker is the big player in containers right now. The premise is that running a whole copy of the OS just to run a service seems silly. Why not have a “container” that just has the software and configurations needed to provide the service, and have multiple containers running on a single OS instance, physical or virtualized? The density of services provided can go up by a factor of 10 or more, using containers. It’s the new awesome!

I don’t have to use Docker or containers in my current situation, but that day may come, and for once I’d like to be ahead of the curve. So in the morning, I attended Introduction to Docker and Containers, presented by Jerome Petazzoni, of Docker. Dude seriously knows his stuff. But I’ve never attended a half-day tutorial that had more than 250 slides before, and he got through more than 220 of them in the time at hand, while ALSO showing some quick demos. Amazingly, I wasn’t lost at the time. And I’ve got a copy so that I can go back through at my leisure. Containers launch quickly, just like Jerome’s tutorial. I think I learned a lot. But it’s still due for unpacking in my brain.

In the afternoon, Jerome continued with Advanced Docker Concepts and Container Orchestration. Tools now regarded as stable (such as Swarm, which reached the 1.0 milestone a couple of weeks before the presentation) (grin) and Docker Compose were discussed and demonstrated to show how to manage scaling up and out. Another immense info dump, but I’m grateful I attended these tutorials. I think I learned a lot.

In the evening, I hit up the Storage BoF put on by Cambridge Computers, and dropped into the Red Hat vendor BoF on the topic of Open Storage. A long day.

Wednesday, 11/11

Veterans Day dawned bright and sunny. Like each day of this week, I left the house at 0630. I was surprised, rolling into the parking garage at 0700 … until I remembered the holiday, and that no Feds were working (and clogging my drive) as a result. Win!

The morning keynote was given by Mikey Dickerson, head of the USDS. He spoke on the challenges of healthcare.gov (his first Federal engagement), and being called back to head up the new US Digital Service. Mikey is a neat, genuine guy who has assembled a team of technologists who are making a difference in government services. Excellent keynote, fun guy.

I took a hallway track break for the next hour and a half – catching up with folks I hadn’t seen in a couple of years.

After lunch, I attended first a talk by George Wilson on current state of the art for OpenZFS. ZFS is an awesome filesystem that was built by Sun (Yay!), then closed by Oracle (Boo!). OpenZFS took off as a fork of the last OpenSolaris release, some years ago. Since then it’s been at the core of IllumOS and other OpenSolaris-derived operating systems, as well as FreeBSD and other projects. I’m a huge fan of ZFS, and it’s always good to learn more about successes, progress, and pitfalls.

Then I sat in on Nicole Forsgren’s talk: My First Year at Chef: Measuring All the Things. Nicole is a smart, smart person, and left a tenure-track position to join Chef last year. She brought her observational super-powers and statistics-fu to bear on all the previously unmeasured things at Chef, and learned lots. Chef let her tell us (most of) what she learned, which is also awesome. The key take-away: Learn how to measure things, set goals, and measure progress. Excellent!

After dinner up the street at Zoo Bar and Grill with Chas and Peter, I attended the annual LOPSA business meeting. I didn’t stay for the LOPSA BoF in the bar upstairs, since my steam was running out and I was driving, not staying at the hotel.

Thursday, 11/12

Christopher Soghoian provided the frankly depressing Thursday morning keynote: Sysadmins and Their Role in Cyberwar: Why Several Governments Want to Spy on and Hack You, Even If You Have Nothing to Hide. Seriously. Chris is the Chief Technologist for the ACLU, and his “war” stories are hair-raising. We’re all targets, because we run systems that might let the (good|bad|huh?) guys get to other people. All admins are targets, not of opportunity, but of collateral access. Sigh. Sigh. Good talk, wish it wasn’t needed.

The morning talk I attended was about Sysdig, using it to monitor cloud and container environments. It was presented by Gianluca Borello, and I came away convinced that sysdig is a tool I really should learn more about.

In the afternoon, I spent some time in the Vendor Expo area, catching up with people and learning about the products that they think are important to my demographic. I was going to attend a mini-tutorial later in the afternoon called Git, Got, Gotten on using git for sysadmin version control … but by the time I got to the room it was SRO. So I bailed out way early (skipping the in-hotel conference evening reception – I expected a disappointment following last year’s wonderful event at the EMP Museum), unwound, and got a good night’s sleep.

Friday, 11/13

I started the day with Jez Humble of Chef, who talked to the big room about Lean Configuration Management. An excellent talk on, among other things, what tools from the Dev side of the aisle we can use on the Ops side. Jez is an excellent speaker, and he brings up a good point about how the data points to high-performing IT groups as being a driver of innovation AND profit.

My second morning session was Lightweight Change Control Using Git, by George Beech of Stack Overflow. A big hunk of time was given to what’s wrong, before progressing into the organization of managing configs and processes with version control, explicitly git. Good talk.

After lunch, I spent a couple of hours on the hallway track, since there was nothing that really called out my name in the formal program. And for the closing keynote … well, I decided to beat the Friday traffic out of the district instead. But the presentation has been made available already – it’s here: It Was Never Going to Work, So Let’s Have Some Tea, by James Mickens of Harvard. You can watch it with me.

Thanksgiving and stuff

It was a good week, though I did work on Friday. Thanksgiving Day was a nice quiet day at home. Pancakes and espresso in the morning. Turkey, mashed potatoes, gravy, cranberry sauce, apple pie, … other stuff, I think … through the late afternoon and evening. Food coma #FTW, with lots of leftovers. We called and talked to family in lots of places, and that was fun, too. The weekend has been catching up on chores, putting up the Christmas crap, and roasting coffee.

Fallen Warriors

DoD reported no new casualties in the last week.


OOOooo … err. Certified. That’s whut I am. The week of death-march revising on RHEL7, followed by two certification exams on Friday, is over. And most interestingly, I passed both exams, and now have my RHCE. Coming out of the building after 5 on Friday afternoon, I was sure I’d passed EX200 (the RHCSA exam), but frankly wasn’t feeling too warm and fuzzy about EX300 (the RHCE). So I was pleased as punch to learn that I had in fact passed both, and by comfortable margins.

Better yet, I learned a hell of a lot about the tools and technologies in this latest iteration of Red Hat Enterprise Linux, and I’ll be putting that knowledge to use in production systems within the next several months. So, that’s a good thing, too.

This weekend, I tried to stay awake, and to do some chores. I almost got enough done. What really needs doing is … everything. The house needs a deep cleaning, and the yard needs quite a lot of attention. All in good time. Oh, and while the garden isn’t doing well, it is still producing a bit:

Garden Goodies – 2 Aug 2014


Some of that has turned into salsa, we’re having more in salads, and some goes to work to make people there happy, as well.

*      *      *

DoD has announced no new casualties in the last 6 days.

A billion, billion comment spam

Well, that might be an exaggeration. It was more like a few hundred comment spam. Fortunately they were all so marked, making it easy to click–delete.

*      *      *

Monday? Monday?!? So sorry to have missed y’all, yesterday. I’ve been preparing for this week’s RH300 course, and stayed pretty focused on that goal. We’re covering 14 days of regular Red Hat coursework in four days of grueling review, followed by the RHCSA and RHCE exams on Friday. And the exams are … challenging. I’m really good with the bits I use. And I can puzzle out the bits I don’t use often. But come exam-time, there’s 2 or 4 hours to do a WHOLE BUNCH of stuff, and it all has to work right, and it all has to survive a reboot.

*      *      *

Our condolences to the families, friends, and units of these fallen warriors:

  • Pfc. Donnell A. Hamilton, Jr., 20, of Kenosha, Wisconsin, died July 24, at Brooke Army Medical Center, Joint Base San Antonio, Texas, from an illness sustained in Ghazni Province, Afghanistan.
  • Staff Sgt. Benjamin G. Prange, 30, of Hickman, Nebraska, died July 24, in Mirugol Kalay, Kandahar Province, Afghanistan, of wounds suffered when the enemy attacked his vehicle with an improvised explosive device.
  • Pfc. Keith M. Williams, 19, of Visalia, California, died July 24, in Mirugol Kalay, Kandahar Province, Afghanistan, of wounds suffered when the enemy attacked his vehicle with an improvised explosive device.
  • Boatswain’s Mate Seaman Yeshabel Villotcarrasco, 23, of Parma, Ohio, died as a result of a non-hostile incident June 19 aboard USS James E. Williams (DDG-95) while the ship was underway in the Red Sea.

Cool July

We’ve had several days of unseasonably cool weather. I’m not complaining, mind you. But all the same, it’s weird. Temps in the early mornings in the high 50’s, and barely breaking into the low 80’s. Who’d a thunk? But they let me take Lexi on a two mile walk this afternoon without arriving back home as a sweatball holding a dead dog.

The garden, it fares poorly. I gave it virtually no attention in the days leading up to Marcia’s surgery, nor in the weeks that followed that event. Bugs have killed my zucchini plants, the tomato plants are small-ish with yellowing leaves and low production, and my herbs have all bolted. But I was paying attention to the important tasks in life, so that’s okay.

I’m otherwise tired. I had a couple of rounds of system work today: an hour early, and a couple of hours following the shopping run. In the coming week, I’ve got to spend a fair bit of time working with RHEL7, in advance of a Rapid Track training course the week following, with an RHCE certification exam at the end of that.

*      *      *

Another week, another span of time during which DoD announced no casualties. It’s not like there isn’t plenty of unpleasantness in the Middle East and in the Ukraine … but I sincerely hope we stay the hell out of those conflicts.

Six Days of LISA ’13

Howdy. My name’s Brian, and I’m a tired SysAdmin…

So, six days of tutorials and talks at the USENIX LISA ’13 conference are done. And it was good. My behind is, however, glad to be shut of those hotel conference chairs.

Sunday, 3 November

Sunday’s full day tutorial was called Securing Linux Servers, and was taught by Rik Farrow, a talented bloke who does security for a living, and is Editor of the USENIX ;login: magazine on the side. We covered the goals of running systems (access to properly executing services) and the attacks that accessibility (physical, network) enable. As always, the more you know, the more frightening running systems connected to networks becomes. We explicitly deconstructed several public exploits of high-value targets, and discussed mitigations that might have made them less likely. User account minimization and root account lockdowns through effective use of the `sudo` command were prominently featured. Proactive patching is highly recommended, too! Passwords, password security, hashing algorithms, and helping users select strong passwords that can be remembered also were a prime topic. Things that Rik wished were better documented online are PAM (Pluggable Authentication Modules) and simple, accessible starter documentation for SELinux.

Monday, 4 November

Hands-on Security for Systems Administrators was the full-day tutorial I attended on Monday. It was taught by Branson Matheson, a consultant and computer security wonk. Branson is an extremely energetic and engaging trainer who held my attention the whole day. We looked at security from the perspective of (informally, in the class) auditing our physical, social, and network vulnerabilities. In the context of the latter, we used a customized virtual build of Kali Linux, a Debian-based pen-testing distro. I learned a lot of stuff, and for those things that I “knew”, the refresher was welcome and timely.

Tuesday, 5 November

Tuesday, I took two half-day tutorials.

The first was presented by Ted Ts’o, of Linux kernel and filesystem fame. Our tutorial topic was “Recovering from Linux Hard Drive Disasters.” We spent a couple of hours covering disk drive fundamentals and Linux file systems. The final hour was given over to the stated topic of recovering from assorted disk-based catastrophes. My take-away from this tutorial was two-fold. I think the presentation would be better named “Disks, Linux Filesystems, and Disk Disaster Recovery,” which would be more reflective of the distribution of the material. Additionally, it’s worth stating that any single disk disaster is generally mitigated by multi-disk configurations (mirroring, RAID), and accidental data loss is often best covered by frequently taken and tested backups.

The second tutorial I attended, on Tuesday afternoon, was on the topic of “Disaster Recovery Plans: Design, Implementation and Maintenance Using the ITIL Framework.” Seems a bit dry, eh? A bit … boring? Not at all! Jeanne Schock brought the subject material to life, walking us through setting goals and running a project to effectively plan for Disaster Recovery. IMO, it’s documentation, planning, and process that turns the craft of System Administration into a true profession, and these sorts of activities are crucial. Jeanne’s presentation style and methods of engaging the audience are superb. This was my personal favorite of all the tutorials I attended. But … Thanks, Jeanne, for making more work for me!

Wednesday, 6 November

Whew. I was starting to reach brain-full state as the fourth day of tutorials began. I got to spend a full day with Ted Ts’o this time, and it was an excellent full day of training on Linux Performance Tuning. Some stuff I knew, since I’ve been doing this for a while. But the methods that Ted discussed for triaging system and software behaviour, then using the resulting data to prioritize diagnostic activities, were very useful. This is a recurring topic at LISA ’13 – go for the low-hanging fruit and obvious stuff: check for CPU, disk, and network bottlenecks with quick commands before delving into one path more deeply. The seemingly obvious culprit may be a red herring. I plan on using the slide deck to construct a performance triage TWiki page at work.

I was in this tutorial when Bruce Schneier spoke (via Skype!) on “Surveillance, the NSA, and Everything.” Bummer.

This was also my last day of Tutorials. In the evening I attended the annual LOPSA meeting. Lots of interesting stuff there, follow the link to learn more about this useful and supportive organization. Yep, I’m a member.

Thursday, 7 November

Yay, today started with track problems on Metro, and an extra 45 minutes standing cheek-to-jowl with a bunch of random folks on a Red Line train.

This was a Technical Sessions and Invited Talks day for me. In the morning, Brendan Gregg presented Blazing Performance with Flame Graphs. Here’s a useful summary on Brendan’s blog. This was followed in the morning by Jon Masters of Red Hat talking about Hyperscale Computing with ARM Servers (which looks to be a cool and not unlikely path), and Ben Rockwood of Joyent discussing Lean Operations. Ben has strong opinions on the profession, and I always learn something from him.

In the afternoon, Brendan Gregg was in front of me again, pitching systems performance issues (and his new book of the same name). I continue to find Brendan’s presentation style a bit over the top, but his technical chops and writing skills are excellent. This was followed by Branson Matheson (who was training me earlier in the week) on the subject of “Hacking your Mind and Emotions” – much about social engineering. Sigh, too easy to do. But Branson is so enthusiastic and excited about his work that … well, that’s alright, then, eh?

The late afternoon pair of talks were on Enterprise Architecture Beyond the Perimeter (presented by a pair of talented Google Engineers), and Drifting into Fragility, by Matt Provost of Weta Digital. The former was all about authentication and authorization without the classical corporate perimeter – no firewall or VPN between clients and resources. Is it a legitimate client machine, properly secured and patched? With a properly authenticated user? Good, we’re cool. How much securing, authenticating, and patching is required depends on the resource to be accessed. This seems a bit like a Google-scale problem… The latter talk, on fragility, was a poignant reminder of unintended dependencies and consequences in complex systems and networks.

The conference reception was on Thursday evening, but I took a pass, headed home, and went to bed early. I was getting pretty tired by this time.

Friday, 8 November

My early morning session had George Wilson of Delphix talking about ZFS for Everyone, followed by Mark Cavage of Joyent discussing Manta Storage System Internals. I use ZFS, so the first talk held particular interest for me, especially the information about how the disparate ZFS implementations are working to prevent fragmentation by utilizing Feature Flags. OpenZFS.org was also discussed. I didn’t know much about Manta except that it exists, but I know a bit more now, and … it’s cool. I don’t have a use, today, but it’s definitely cool.

The late morning session I attended was a two-fer on the topic of Macs at Google. They have tens of thousands of Macs, and the effective image, deployment, and patching management was the first topic of the day, presented by Clay Caviness and Edward Eigerman. Some interesting tools and possibilities, but scale far beyond my needs. The second talk, by Greg Castle, on Hardening Macs, was pertinent and useful for me.

After lunch, the two talks I attended were on “Managing Access using SSH Keys” by the original author of SSH, Tatu Ylönen, and “Secure Linux Containers” by Dan Walsh of Red Hat (and SELinux fame). Tatu pretty much read text-dense slides aloud to us, and confirmed that managing SSH key proliferation and dependency paths is hard. Secure Linux Containers remind me strongly of sparse Solaris Zones, so that’s how I’m fitting them into my mental framework. Dan also talked to us about Docker … a container framework that Red Hat is “merging” (?) with Secure Linux Containers … and said we (sysadmins) wouldn’t like Docker at all. Mmmmmm.

The closing Plenary session, at about an hour and 45 minutes, was a caffeine-fueled odyssey by Todd Underwood, a Google Site Reliability Manager, on the topic of PostOps: A Non-Surgical Tale of Software, Fragility, and Reliability. Todd’s a fun, if hyper, speaker. He’s motivated and knows his stuff. But like some others in the audience, what happens at the scale of a GOOG-size organization may not apply so cleanly in the SMB space. The fact is that DevOps and NoOps may not work so well for us … though certainly the principles of coordinated work and automation strongly apply.

Brian’s Summary

At any given time, for every room I sat in, for every speaker or trainer I listened to, there were three other things that I would have also learned much from. This was my path through LISA ’13. There are many like it, but this one is mine. This conference was a net win for me in many ways – I learned a lot, I ran across some old friends (Hi, Heather and Marc), made some new ones, and had a good time.

The folks I can recommend without reservation that you take a class from, or attend a talk that they’re presenting: Jeanne Schock, Branson Matheson, Rik Farrow, and Ted Ts’o. These are the four people I learned the most from in the course of six days, and you’d learn from them, too!

My hat’s off to the fine staff at USENIX, who worked their asses off to make the conference work. Kudos!

Finishing a cabinet; Ch-ch-ch-changes a’coming.

Finishing the corner cabinet



I’m making progress, as you can see. This cabinet may be upstairs as early as Wednesday of the upcoming week, depending on whether I can get enough coats of poly on the doors and shelves. Pictured above, I’m at the poly stage for the face and insides – the dark teal sides already have three coats and are cured. After supper, I took those down, laid out the doors and shelves, and first-coated the backs. Tomorrow, a quick sanding and I’ll get the second coat on.

*      *      *

While I am not going to have the liberty to host sites that aren’t mine, I’m migrating back to a personally administered system. $FIRM has graciously allowed me some bandwidth, 1RU of rack space, and an old R410, and I’ve got Scientific Linux (the high-energy physics respin of RHEL) running on it. I’m doing this for reasons. REASONS, I tell you. Well, I’m not telling you, not now, anyway. There are likely to be format changes, too, though I’m going to maintain the blog format for convenience. But it may not be the front-line landing page anymore. What I do will be clear and documented, though.

This site is running from the new box, as are Daynotes.com and Daynotes.net. Speaking of the former, Daynotes.com is still “owned” by Tom Syroid. But since Tom appears to be staying offline, there’s no way to transfer ownership. If anyone wants to pick up the ball this year and give Network Solutions some money to renew Daynotes.com before the registration expires in mid-September, that’d be awesome. You don’t need to have any formal access to renew (spend money) at NetSol, at least you didn’t the last time I did it myself. I’ve renewed it several times personally, but it’d be nice if someone who has found it useful steps up for a year or two. Let me know if you do, and you’ll get public thanks, here and elsewhere.

Depending on the gardening potential tomorrow, I’m going to try to get Marcia’s sites migrated to the new box before the new week gets rolling. Now to walk the mutt in between rain bursts and then do a bit of remote system administration for work. Ciao!

Moving right along

First, for US visitors, Happy Thanksgiving. A weird holiday, to be sure, but it’s always good to be thankful for life, family, friends, and first world problems.

*      *      *

I’m posting from Linux again, for the first time in a long while. I’d been trying a variety of solutions for storage here, answers that didn’t involve running a full-size system 24×7. I couldn’t do it. You see, it isn’t good enough to just back stuff up here at home. I’m not going to backup home data on a cloud somewhere on the Internet – our friendly government doesn’t appear to respect the Fourth Amendment when it comes to online resources. So I don’t keep email online. Well, I try not to, but I’ll bet Google has it all anyway. But there are files and work I do here that I’m not willing to trust to another administrator and their devotion to security. So while I backup online stuff here, and I backup the home systems here, I need to get a copy of those backups offsite. Fire, theft, and other quirks of life are risks that need to be managed.

So, a weekly copy of the local backup, written to an encrypted disk, and driven to work … that’s a good answer. But when I stood down Slartibartfast, the old Linux server, and replaced him with a dLink NAS box … well, some things didn’t happen anymore. Automated backups of online properties – not happening. Trivially easy local and encrypted backups: neither trivial nor easy anymore. But I kept after it for a while, so that local systems could spin down, data could flow to the storage when it was available, and … I’d figure something out about the offsite.

That didn’t happen. Finally, I broke down a few months back and installed FreeNAS 8.mumble on one of the towers. Key needs: local AFP, SMB/CIFS, and NFS service. Scheduled tasks to pull backups from out in the world, so that problems there don’t kill our data forever. And encrypted backups to removable storage. Seems easy, right? And a dedicated local storage server STILL seemed like a better idea than toying with using a workstation ALSO as the storage server. Feh!

FreeNAS eventually solved everything but the removable storage problem … and the AFP service. The latter problem first: Apple presents a fast-moving target for their file services, and I want a networked Time Machine target. I could not get it working with the latest FreeNAS, so the dLink kept spinning. The former, and more important, problem: while I could plug in a USB disk, write an encrypted ZFS file system to it, create the walkabout tertiary backup, and take the drive to the office … I could only do that once per boot. That is, to get FreeNAS to recognize a drive reinserted into the USB or the eSATA connections, I had to reboot. Probably a failing of the non-enterprise support for hotplug … but a failing all the same.

This week, a “vacation” week for me, I’d had enough. I installed Scientific Linux 6.3, and got all of the above stuff working properly in less than a day. The ONLY thing I miss from FreeNAS (and this was a big driver for me) is ZFS. I *love* ZFS. Filesystem and volume management done properly, with superb snapshot capabilities – I LOVE ZFS. But I can’t have that, and everything else I want, so I’ve solved my problem.

Serenity boots and runs from a ~160GB SSD, and I have three 1TB drives in a software RAID5 serving as the data partition. It’s all formatted EXT4. I have a SATA slide-in slot on the front of the system: I can slot in a hard disk, give the crypto password, and have my offsite storage accessible for updating using rsync. Everything is working again. I can spin down that dLink, and decide what its fate is, one of these days. I also don’t need Dortmunder, the Raspberry Pi, running as my SSH and IRSSI landing “box” anymore. That I will find another use for – I can play with it now. And I’ll cautiously update and maintain this system. Frankly, I’m happier with it running Scientific Linux – the stability of a RHEL derivative is good.
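For the curious, the removable-disk routine is only a handful of commands; a minimal sketch, assuming a LUKS-encrypted partition that shows up as /dev/sdd1 and a backup tree under /data/backups (both names are illustrative, not my real ones):

cryptsetup luksOpen /dev/sdd1 offsite              # prompts for the passphrase
mount /dev/mapper/offsite /mnt/offsite
rsync -aH --delete /data/backups/ /mnt/offsite/    # refresh the walkabout copy
umount /mnt/offsite
cryptsetup luksClose offsite                       # now safe to pull the disk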

Now to figure out why I can’t get my external SSH port open again… Thanks, Netgear, for giving me one more problem to solve on my “vacation.”

Oh, and finally: a good disk management GUI for Linux:

Gnome Disk Utility


Gnome Disk Utility – I don’t often prefer a GUI, but managing complex storage, which may involve hardware or software RAID, LVM, encryption, and more … well, the visibility of this utility makes me happy. Thanks to Red Hat for writing it.

Marcia’s back (again) & Linux reloaded

Today, I relaxed a bit. Shopping in the morning, a bit of Top Gear UK during the day, and I picked Marcia up at BWI around 1500 EST. Happy dog is happy, and so am I. The holiday bird is in the fridge, I’ve got a tray of mac-n-cheese ready, and … we’ll see how the table ends up.

Tonight, I blew out the FreeNAS installation, and installed Scientific Linux 6.3 x64 on the box still known as Serenity. I had a lot of trouble getting things working right, and there are issues with offsite backups that are much more easily solved with a Linux at the helm. Instead of returning to the Ubuntu way, I figured one of the RHEL retreads would be a good way to go – I’ve got to re-certify in the next few months, and more practice is good.

*      *      *

Our condolences to the families, friends, and units of these fallen warriors:

  • Capt. James D. Nehl, 37, of Gardiner, Oregon, died Nov. 9, in Ghazni Province, Afghanistan, from small arms fire while on patrol during combat operations.
  • Sgt. Matthew H. Stiltz, 26, of Spokane, Washington, died Nov. 12, at Zerok, Afghanistan, of wounds suffered when insurgents attacked his unit with indirect fire.
  • Staff Sgt. Rayvon Battle Jr., 25, of Rocky Mount, North Carolina, died Nov. 13, in Kandahar Province, Afghanistan.
  • Sgt. Channing B. Hicks, 24, of Greer, South Carolina, died Nov. 16, in Paktika province, Afghanistan, from injuries suffered when enemy forces attacked his unit with an improvised explosive device and small arms fire.
  • Spc. Joseph A. Richardson, 23, of Booneville, Arkansas, died Nov. 16, in Paktika province, Afghanistan, from injuries suffered when enemy forces attacked his unit with an improvised explosive device and small arms fire.