Monday, November 9, 2015

Google, where is the confidentiality in Chrome OS?

Ask an information security expert what the three cornerstones of information security are, and they should tell you: Confidentiality, Integrity, and Availability. Google will tell you that Chromebooks are some of the most secure consumer computers available today. How well do they really do?


ChromeOS and the hardware specification that Google put out for Chromebooks (specifically around firmware changes) have done a decent job of improving the Integrity of the system. Kernel signing and OS signing (a cryptographic signature on the binary files) go a long way toward stopping malicious rootkits.

Of course it also goes a long way to preventing people from messing with the system too. But that would be flexibility, which isn't an underlying principle of information security. Besides, Google gave us developer mode to allow for flexibility. So, you can mess with the system if you want to.

But, what about the application software? Well, that is all managed through the Google Play app store. If Google is managing the ChromeOS app store only as well as they manage the Android app store, then the integrity of the system has fallen back to being on par with any other operating system as far as end-user application security goes. And since, by far, the majority of 21st century malicious software targets the user at the application layer, not the OS layer, Google has largely missed the boat.

Want to fix this? Take a look at how Apple does things with their app store. Every app is rigorously tested and questioned. Application functionality is tested by Apple's own quality assurance staff, and bad applications are rejected. The need for access to system resources (even non-traditional resources like your address book) is questioned and must be justified by the actual needs of the application before it gets approved.

Does Google do this? I don't actually know. But I do know that on my Google phone, applications are given access to far more things than they have any sensible need for.

Integrity score: marginally above average.


Google may argue that they have provided availability too, because your system is tied to your Google account and your files get written there, so they survive even if you lose access to your Chromebook for some reason. As long as the Chromebook can also work entirely offline, it does provide availability.

It also provides Google with a guaranteed source of valuable information about you that they can use to direct ads that target you specifically. In case you didn't know, advertisers pay a premium for targeted advertising. Of course, those ads are also what pays for all the Google services.

Mind you, the exact same thing is true for Windows users with a OneDrive account, Apple users with an iCloud account, and anyone who has bothered to set up and religiously use any of the plethora of cloud storage providers (Dropbox, SugarSync, and a host more).

Availability score: average.


But, those ads are not what pays for the Chromebook. We, the consumers, pay for our Chromebooks. And at least some of us expect to have a high degree of that last tenet of information security: confidentiality. This, Google throws completely out the window. The problem is, information is not secure unless there is a balance of all three.

Google has taken away all the confidentiality. Oh sure, they will say they keep your data private. Except they don't. Their deep content inspection tools dig through it and figure things out about you. Then they share what they learned with advertisers in the form of targeted ads. Except they wouldn't know those things about you if they maintained proper confidentiality. Just because it isn't a human looking through your private documents doesn't mean it isn't a loss of confidentiality.

And, while people in the USA are stuck with all sorts of laws that require companies to reveal data on request (sometimes with a court order, sometimes with a warrant, sometimes accompanied by a gag order), many of us don't live in the USA and shouldn't have our data exposed to their laws when we aren't in that country. We have our own mandatory disclosure laws. Yet everyone is stuck with their data being exposed to a foreign government's scrutiny whether they consider this important or not.

Confidentiality score: below average.


Sadly, Google could very easily fix this. And should have fixed it from the start. Google has had plenty of time to fix it and yet still hasn't. It only requires two modifications to ChromeOS, using existing and available software:

  1. Bring back local user authentication. Linux has it built in, and ChromeOS is Linux at the core.
  2. Implement client-side file encryption. There exists a wonderful tool for Linux (and Windows and OS X and a bunch of other operating systems) called encfs (encrypted filesystem).
Hopefully everyone knows about local user authentication since every modern desktop operating system except for ChromeOS has it built in and uses it by default. So, I won't go into detail about it.

On the other hand, some people may not be aware of EncFS, which stands for Encrypted Filesystem. It works in conjunction with another strange thing called FUSE (which stands for Filesystem in Userspace). Together, this pair of tools does something really useful for the confidentiality of data. They let you read and write files in one directory just like you normally would, but they store those files in an encrypted form in another directory.
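As a concrete sketch of how the pair works (assuming encfs and FUSE are already installed; the directory names here are just examples, not a required layout):

```shell
# Create an encrypted store inside a cloud-synced folder and a
# plaintext "view" of it. encfs prompts for a setup mode and a
# password on first run.
mkdir -p ~/GoogleDrive/.encrypted ~/Private
encfs ~/GoogleDrive/.encrypted ~/Private

# Anything written to ~/Private is stored encrypted in the synced folder.
echo "my secret notes" > ~/Private/notes.txt
ls ~/GoogleDrive/.encrypted   # shows only encrypted file names

# Unmount the plaintext view when done (on OSX: umount ~/Private).
fusermount -u ~/Private
```

Only the encrypted directory ever leaves the machine, so the sync client uploads nothing but ciphertext.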

Why is this so good? Well, it means that your files are not available for deep content inspection by Google. So Google will probably never implement this. It also means that if the files you store on your Google Drive slip out to the whole world (say you accidentally choose to make them public), nobody else can see them because they don't have the encryption key.

At first glance, this does look to be a problem because it means you can't access your files except from your Chromebook. Except, of course, as was said earlier, EncFS is available on pretty much all of the common operating systems in use today.

It wouldn't be all that difficult for a decent programmer to implement an EncFS plugin for Chrome (the web browser) that would do the file decryption client side and encrypt files before sending them back to Google for long-term storage.

Admittedly this last part takes a bit of effort but Google should probably make the use of client side encryption optional anyway. Encryption is one of those trade-offs between two of the principles of information security. You give up availability to get more confidentiality.


In my opinion, Google has violated their motto. Or at least the motto they used to have: "Don't be evil". Google, if you read this, realize that I consider your actions evil.

I have been looking at Chromebooks for a long time and I've been wanting one for years. I will not buy one so long as I must use a Google account in order to use it.

P.S. It would also make my Google account considerably less secure, since my password is a 32 character randomly generated password. It should be as difficult to guess as brute-force cracking AES256 encryption. I can't memorize it. So, to use a Chromebook I would need to considerably reduce the security of my Google account. And, no, two-factor authentication the Google way is not an option, because I regularly run the battery down to zero on my cell phone.

Chrome OS and Chromebooks may have made a marginal improvement in the integrity of the core operating system, but it was done at the near-complete expense of confidentiality and with a loss of flexibility.


If you really want to impress me, Google, take a note from the way Apple manages shared keychain access across devices and use per-device PKI (like PGP or S/MIME) to transfer the encryption key between ChromeOS and the Chrome browsers installed on all my other systems, so that I don't have to worry about losing the key as long as I remember the passphrase I use to lock it. You could even do a proper secure key escrow with that.

And, in the name of flexibility, give me a way to put my own validation key into the firmware of the Chromebook (alongside yours) so I can install my own, custom OS without having the scary developer mode warning show up. But please do it in a way that hobbyists and small open source developers can reasonably afford to make use of it.

Friday, November 6, 2015

Secure Cloud Storage

Google will tell you that Drive is secure. Dropbox will tell you their service is secure. Microsoft tells us that OneDrive is secure. Amazon tells their clients that S3 is secure. Everyone tells us that the cloud storage they give us is secure. But what does that really mean?

For the most part it means that the data might be encrypted at rest (many cloud storage providers tell us this), that you need to authenticate to get access to the data, and that the data is backed up. And they are right, up to a point.

What they aren't telling us, except for one provider (as far as I know), is that our data is not protected from the cloud storage provider itself. As in: the staff and applications of the cloud provider are able to read our files without us knowing (although in the case of staff it is probably very limited and requires significant effort).

Google basically admits this in their EULA, where they tell us that they will use the content of our files (and e-mails) to help choose ads that are more suited to us. Microsoft claims that they can't see our files because they are encrypted using our login password (or they used to), but I bet there is a way to recover our password if we forget it (which means they must be able to decrypt the files without our password being entered). The others vary between these two points.

Why does this matter?

For starters, you might want to use the cloud storage to back up sensitive information that you don't want to lose in case your computer hard drive crashes. Say, for example your financial records. Or, for many of us, that huge list of passwords for all the different websites, forums, and online services we all use now. Maybe you are a private type of person who feels uncomfortable knowing that other people might be able to look at your files. There could be files of a personal nature you don't want to get out, even accidentally. 

The point is, there are a lot of very legitimate reasons why a person would want cloud storage and not want the cloud provider to ever have access to those files. Even if it means you might lose access to those files yourself.

The big question is how do we protect our data when it is not on systems entirely under our control? The obvious answer is simple. We encrypt it on systems that are under our control before sending it away. To keep things convenient we really want to have this happen in a way that is transparent, or nearly transparent to us.

Just like cloud storage, we want to write a file to a directory and let some program that runs in the background encrypt it before it gets sent away for safe keeping.

What can we do?

Once upon a time, a long time ago, a smart person with a strong understanding of encryption created a tool called Truecrypt. He created it to solve a different problem. He wanted to be sure that if his laptop got stolen, people couldn't get access to his data. He wasn't worried as much about cloud storage. At least that is what he said. 

Much of the world saw value in Truecrypt and started using it. Many of us saw value in it beyond keeping our laptops safe should they get stolen. We saw it as a way to pass large files to other people securely. We saw it as a way to be sure our files could not be looked at even when they were placed in cloud storage. We liked this too. We trusted this tool.

Unfortunately, the author(s) of Truecrypt decided it was no longer needed and they abruptly shut down the project. The shutdown was so abrupt that many people don't believe the reasons given by the authors of the software. But, they gave a reason. The reason was simple, operating systems now have built-in disk encryption. So, they don't need to maintain Truecrypt and they stopped.

Truecrypt was never an ideal solution for cloud storage. It needed big files to be useful. You put your little files in the big files to keep them safe. This meant a lot of data had to go to the cloud storage provider and come back every time one little file changed (except in the case of Dropbox, which only syncs the changed parts of a file).

In any case, Truecrypt is no longer supported. There are some alternatives. VeraCrypt and CipherShed have both taken the last public release of the source code to Truecrypt and begun making their own changes and improvements. But, Truecrypt was never ideal for cloud storage because of that big file problem.

Encrypted File System (encfs)

There is this obscure tool that came out in the Linux world around the same time as Truecrypt. It was unstable, unproven, and only worked on Linux at the time. But it had one major advantage over Truecrypt: it works at the file level. That is, it encrypts each file separately. So, when the cloud sync happens, it only needs to send the files that actually changed.

EncFS has improved since then. It is now up to version 1.7 on Linux and we are starting to see some effort in getting it to common operating systems in a reliable and consistent manner. The user interface is still ugly. The tools for Windows and OSX are still pretty sparse and buggy but it looks like it is coming.

At the time of this writing there are a couple of Windows and OSX ports worth keeping an eye on:

  • Safe has both Windows and OSX support but seems unstable
  • EncFSmp supports both Windows and OSX. It appears to be a bit more stable than Safe, is still in beta, and has a few growing pains to work out with the UI. This is the one I'm currently using.
  • OSXFUSE is an OSX implementation of another Linux tool. FUSE is the tool that encfs was built to use (Safe and EncFSmp appear to have replaced the FUSE functionality with alternatives built into Windows and OSX). While there is an encfs that works with OSXFUSE, it requires that you build it yourself.
  • Encfs4Win is an experimental port of encfs on Windows that requires fuse4win and is Windows only.
One very important thing to keep in mind when using encfs is that the encryption key is derived from the password you set. This means that the encryption is only as good as the password is complicated. Explaining this can get a bit complicated, so here are some simple examples:

A one-word password taken from words you know can be guessed by a modern desktop in under 10 seconds.

A password that is 8 characters long and randomly generated using printable characters can be guessed by a modern desktop computer in under 5 minutes. This is about on par with an old encryption algorithm from the 1970s called DES, which is not supposed to be used anymore.

A password that is 16 characters long and randomly generated using printable characters is as complicated as the keys used for AES (the replacement for DES), created in the 1990s. This is still considered acceptable for use today by governments and banks. It is probably good enough for us.

A password that is 32 characters long and randomly generated is as good as AES 256 encryption keys. This is as strong as encfs gets when put in paranoid mode. Odds are it will take a determined government months, or years, to crack this type of encryption key.

The problem with the usable passwords (above) is that most people won't be able to memorize them, and who in their right mind wants to type 16 (or even 32) random characters at a password prompt?

If you aren't trying to prevent people who have access to your computer (or laptop) from seeing the encrypted files, you can keep the password in a text file and cut-and-paste it into the password prompt for encfs. Even better, EncFSmp conveniently saves the password so you don't have to keep entering it.

As for how you generate such an ugly, complicated password: that is up to you. I use a tool (that runs on both OSX and Windows) called KeePass. Conveniently, it also stores the passwords in a secure way.

Saturday, October 31, 2015

Building a Virtual Desktop Server

Every so often the business world cycles through the idea that end-users don't need computers on their desk and they can use thin clients instead. Inevitably, this fails because the user experience is inadequate. Powerful applications won't run very well or at all. This can include those big Excel spreadsheets that finance people need to use. Even when things do run, the screen updates horribly slowly. So, in the end, everyone has a desktop or laptop back.

However, the virtual desktop doesn't quite die, because it still has some uses. System administrators like them for remote management of servers. They can be adapted to function as a sort of remote connection for end-users who are outside the office but don't have a company-issued laptop. The technology that makes virtual desktops work is used by help desk staff for remote control of end-user computers. There are many edge cases that keep the technology alive.

Since everything that will be used in this process is outside the core operating system, these instructions should work with very little modification on almost any UNIX or UNIX-like operating system. The testing and quoted configurations in this journal entry are from FreeBSD 10.2.


It is common practice in the UNIX world to use VNC as a remote desktop client for UNIX systems running X11 where the client computer does not have an X11 server installed. It is even further common practice to tunnel the VNC traffic through ssh. This is so common that there is a modified version of the TightVNC client with the SSH capabilities built in (called ssvnc). This article identifies the specific packages and configuration needed to support a multi-user environment with SSH, VNC, and XDM.
  • VNC runs through inetd. This allows for up to 64 dynamically assigned virtual desktops without VNC screen sharing.
  • XDM runs as a daemon through the rc system. The server does not need to have a GUI enabled or an X11 server installed.
  • The VNC password must be shared, BUT users are still required to authenticate individually.
  • Two-factor authentication and network session security are provided by SSH.


With this configuration, a user is able to log in using SSH to the command-line interface and bypass the XDM login screen. However, the user still has to authenticate the ssh connection using certificates. If there is a passphrase on the certificate, then the user has still performed two-factor authentication. This may be considered flexibility provided to users who are comfortable with the UNIX command prompt, while not hindering users who are not.


  1. When the system first boots, it can take several seconds for inetd to fully start-up. This may result in the system appearing to be down or non-responsive when connecting immediately after startup. Wait a few seconds and try again.
  2. The OSX VNC client (called Screen Sharing) does not gracefully exit. This can result in the XDM daemon delaying reset for the affected desktop. When this happens the login screen will not be presented to the end-user for several seconds (until Xvnc attempts to re-try the query). The easiest solution is to advise users to re-try connecting if they fail to get a login prompt.
  3. The XDM and TightVNC binary packages for FreeBSD do not identify all pre-requisites needed for those packages to function fully. A list of the missing packages is identified in this document.
  4. The SSH match group section (at the bottom) has not been tested. It may not function as intended with this setup.


The following packages must be installed. All are available as pre-built binaries in the FreeBSD package repository.

Pre-requisites Packages

  • xrdb - a prerequisite of VNC that is not identified in the VNC server package.
  • xset - a prerequisite of VNC that is not identified in the VNC server package.
  • sessreg - a prerequisite of XDM that is not identified by the package.
  • xorg-fonts - a prerequisite of XDM that is not identified in the package.
  • xsetroot - optional: used to make the login desktop look nicer.
  • xterm - optional: a prerequisite of the VNC server; however, that functionality is not used in this setup.
  • twm - optional: a prerequisite of the VNC server; however, that functionality is not used in this setup.
  • xlsfonts - optional: helps with debugging font problems.
  • xfontsel - optional: helps with debugging font problems.
Other packages will get installed automatically as prerequisites of the main packages. The ones listed above should be, but aren't by default.

Packages to Install

  • xdm - the X11 Display Manager, used to present the user with a login screen.
  • tightvnc - the Virtual Network Computing server that provides a remote desktop display (since remote X11 is not always available).
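On FreeBSD, all of the above can be pulled in with the binary package manager in one step (package names as listed above; exact names can drift between FreeBSD releases, so verify with pkg search if one is not found):

```shell
# Main packages plus the undeclared prerequisites listed above
pkg install xdm tightvnc xrdb xset sessreg xorg-fonts

# Optional extras: login-screen cosmetics and font debugging tools
pkg install xsetroot xterm twm xlsfonts xfontsel
```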

Pre-requisite Services

A working SSH server must be installed and running on the server. It needs to be configured to allow port forwarding from clients. The default SSH server included with FreeBSD is configured this way and should be enabled in the rc subsystem already (if not, add the line sshd_enable="YES" to /etc/rc.conf).


VNC and Inetd

In order for OSX (Apple Mac) users to connect, the VNC server must present a password. We will therefore create a simple VNC password file to be used. This password will need to be shared among all users. It is not there for security purposes; it is used due to a technical limitation. DO NOT USE THE ROOT PASSWORD or your own. If in doubt, use the word "password". The following steps are used to create this password (vncpasswd writes it to ~/.vnc/passwd):
vncpasswd
cp ~/.vnc/passwd /etc/vncpasswd.nobody
chmod 0600 /etc/vncpasswd.nobody
chown nobody /etc/vncpasswd.nobody


By running the VNC server through inetd, we are able to dynamically assign X11 desktops to users as they connect. This creates a service very similar to a Citrix VDI solution. Add the following line to /etc/inetd.conf:
vnc stream tcp nowait  nobody /usr/local/bin/Xvnc Xvnc -inetd -query localhost -localhost -once -desktop VictorVM -geometry 1280x720  -depth 24 -rfbauth /etc/vncpasswd.nobody
All users will have a screen size of 1280x720 as defined by the -geometry parameter. This may be changed but it must be the same for all users. Note that the most common laptop screen resolution is currently 1366x768. Higher values may cause problems for some users.
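One detail worth checking: inetd resolves the service name in the first column ("vnc") through /etc/services. If your system does not already define it, a line such as the following maps the name to the conventional VNC port for display :0 (confirm the name matches the one used in inetd.conf):

```
vnc     5900/tcp   #Virtual Network Computing (display :0)
```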


The pre-packaged XDM is poorly set up and not intended to operate the way we are setting it up, so there are several steps that must be taken.


Normally xdm is run from /etc/ttys on virtual terminal 8 but this would require a local graphical console. Instead we will run it as a daemon process. Create a daemon control script for the service manager subsystem. The script should be placed in the file /usr/local/etc/rc.d/xdm, owned by root, and executable by root.


#!/bin/sh
#
# PROVIDE: xdm
# REQUIRE: DAEMON
# KEYWORD: shutdown

. /etc/rc.subr

name="xdm"
rcvar="xdm_enable"
command="/usr/local/bin/xdm"

load_rc_config $name
run_rc_command "$1"


Comment out the last line of the Xservers file to prevent xdm from trying to start a local X server on the console.
# Comment out the local line so that we are only providing XDMCP support
#:0 local /usr/local/bin/X :0 


Enable the XDMCP listener functionality of xdm by adding a line to the end of the Xaccess file. The bare asterisk allows any host to request a login window (restrict this if the server is reachable from untrusted networks):
*   #any host can get a login window

For some reason, the default Xresources file attempts to use fonts that are not available to XDM when it is run as a daemon. There is probably a way to add those fonts with additional configuration of Xvnc, but it is easier to modify this file to make use of fonts that are available. The following are the lines that need to be changed:
xlogin*greetFont: -sony-fixed-medium-r-normal--24-170-100-100-c-120-iso8859-1
xlogin*font:       -sony-fixed-medium-r-normal--16-120-100-100-c-80-iso8859-1
xlogin*promptFont: -sony-fixed-medium-r-normal--16-120-100-100-c-80-iso8859-1
xlogin*failFont: -misc-fixed-bold-r-normal--14-130-75-75-c-70-iso8859-1
xlogin*greetFace:       Fixed-24
xlogin*face:            Fixed-16
xlogin*promptFace:      Fixed-16
xlogin*failFace:        Fixed-14:bold
Note: in the default file, these lines appear twice, and the specific set selected depends on the screen resolution. It is safest to simply change both.


Comment out the line that disables XDMCP. It should be the last line in the file. It is the resource defined by DisplayManager.requestPort.
! Comment out this line if you want to manage X terminals with xdm
!DisplayManager.requestPort: 0


Once everything else is configured, the rc system can be configured to start the daemons on system startup. The following lines should be added to /etc/rc.conf:
inetd_enable="YES" # running VNC server through INETD
xdm_enable="YES" # try to start xdm for the VNC server(s)

End User Desktop Setup

The choice of end-user desktop environment will be very purpose dependent. There are a great many choices available. The setup presented here is intended for basic testing only. A more modern desktop environment should be chosen and configured.


Each user with X11 (GUI) access should have a .xsession file in his/her home directory. Here is a very basic setup intended for testing only.

xrdb $HOME/.Xresources
xsetroot -solid grey
xterm -geometry 80x24+10+10 -ls -title "$VNCDESKTOP Desktop" &
twm
When the window manager (twm) exits, the user is logged out and the XDM login screen should reappear.

Optional: Two-factor Authentication

If this service is to be provided over the Internet, it is advisable to enforce two-factor authentication. This can be achieved by enforcing the use of certificate-based authentication for SSH. Note: the two factors are the ssh certificate (something you have) and the passphrase on that certificate (something you know). The VNC password and the unix password for xdm are not needed.


Enforcing the use of certificate authentication with SSH requires some minor changes to the default configuration in /etc/ssh/sshd_config. The following identifies only the lines that need changing within that file:
# It is strongly advised to not allow direct remote-root login on all publicly facing servers
PermitRootLogin no

# enable RSA and DSA certificate authentication
RSAAuthentication yes
PubkeyAuthentication yes

# prevent ~/.rhosts authentication
IgnoreRhosts yes

# prevent username & password authentication
ChallengeResponseAuthentication no

Since the passphrase on the ssh certificate cannot be technically enforced, it may be desirable to restrict users so that they are only able to use the VNC tunnel. Additions are needed to the sshd_config file:
Match Group limited-user
   #AllowTcpForwarding yes
   #X11Forwarding no
   #PermitTunnel no
   #GatewayPorts no
   AllowAgentForwarding no
   PermitOpen localhost:5900
   ForceCommand echo 'This account can only be used for SSH+VNC access.'

A group called 'limited-user' will need to be created and all users that should not have ssh shell access added to that group. Note: This will not prevent the user from using xterm or other methods of gaining shell access through the GUI provided.
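On FreeBSD, creating that group and populating it might look like the following ('alice' is a placeholder username; the pw utility is FreeBSD-specific):

```shell
# Create the restricted group
pw groupadd limited-user

# Add an existing user to the group
pw groupmod limited-user -m alice

# Confirm the membership took effect
pw groupshow limited-user
```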

Configuring the Client(s)

OpenSSH and VNC are available for almost every current operating system in use today on desktops, tablets, and smart phones. Several of them have one or both of these included with the operating system.

Connecting from Windows

We will need to install two applications on our Windows desktop: Putty (for SSH) and TightVNC. Windows installers for both are available from their respective websites. Anyone who is attempting to complete the steps outlined in this article should be able to perform the software installation without difficulty.
  • Putty for Windows
  • TightVNC for Windows
There are alternatives to both Putty and TightVNC, some commercial and some free. These two are freely available and are what I choose to use on Windows.

PortableApps on Windows

PortableApps is a collection of freely available software for Windows that has been configured to run directly from a USB thumb drive or other portable media. This option is handy because you can load the appropriate applications onto your thumb drive and keep it in your pocket. Thus, any available Windows computer that you can run the applications from becomes useful as a client for your virtual desktop server.

Connecting from Apple OSX (iMac, Mac Mini, Macbook, and friends)

The SSH and VNC clients are part of the base operating system. No additional software needs to be installed.
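For example, from the OSX Terminal the tunnel and the built-in Screen Sharing client can be combined like this (hostname and username are placeholders):

```shell
# Forward local port 5901 to the server's VNC port, without opening a
# remote shell (-N) and in the background (-f).
ssh -f -N -L 5901:localhost:5900 alice@vncserver.example.com

# Point the built-in VNC client (Screen Sharing) at the local end
# of the tunnel.
open vnc://localhost:5901
```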

Connecting from Apple iOS (iPad and iPhone)

Guess what? There is an app for that. There are in fact many apps for it. This example will use the iSSH app. It may not be the best, but it was the first one I ran across that was free, worked, and didn't display ads.

Connecting from Chrome OS (Chromebook)

Although Google tries desperately to hide it from the user, the fact is that Chrome OS is built on Linux and you can simply use ssh X11 tunnelling exactly like it is described in the FreeBSD and Linux section (below). The only hard part is getting to the ssh client.

Connecting from FreeBSD and Linux

FreeBSD and Linux are both UNIX-like operating systems and as such will most likely be using the X-Window System for their graphical user interface. While one could install the TightVNC client and configure the ssh tunnel in the same manner as described in the Apple OSX section, it is probably far less troublesome to simply tunnel the X11 protocol (used by the X-Window System) through the ssh connection directly instead of having the VNC intermediary.

ssh -X remotehostname path/to/application 

That's all there is to it. The -X (that's a capital X) tells SSH to tunnel the X11 protocol. For consistency, the setup and use of the TightVNC client is described below. It would be used when connecting to an MS Windows or Apple OSX server remotely.
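When VNC is needed from a FreeBSD or Linux client, the tunnel-plus-viewer combination might look like this (hostname and username are placeholders):

```shell
# Tunnel local port 5901 to the VNC server on the remote host.
ssh -f -N -L 5901:localhost:5900 alice@remotehost.example.com

# Connect the TightVNC viewer through the tunnel; display :1 means
# TCP port 5901.
vncviewer localhost:1
```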

What about Android?

I haven't found a decent combination of SSH and VNC that work adequately on Android. The small screen size makes it worse. It is probably best to use the Android phone as a WiFi access point and connect using something with a bigger screen.


It is somewhat ironic that the X Window System can be run on all the client operating systems listed in this article (except perhaps Android). In fact, the software needed is freely available. However, VNC has the advantage that you can install the VNC server on Windows and it's already a part of OSX. VNC has the other advantage that it can be configured to mirror the actual desktop which helps with remote support.

Wednesday, October 28, 2015

Risk Assessments for System Administrators

Risk assessments are one of those things that system administrators of larger companies are often asked to get involved in. Most system administrators consider this a waste of time that could be better spend keeping the systems running. Even when the system administrators aren’t directly involved in the assessment, they invariably get more work as a result of someone else’s risk assessment.
In this article, I shall attempt to help the system administrators understand why these risk assessments happen and how being involved in a risk assessment can help the system administrator do a better job.
A risk assessment is a formal process, typically carried out by people who sit in an unusual position in the company called IT governance. These people have the odd job of trying to translate corporate management and business need into information technology terms and back again. The risk assessment process itself is fancy formal process meant to identify all the assets of the company and the risks against them. It then takes those lists and sets priorities.
In my opinion, one of the big failings of the process is that most of the formal processes talk about enumerating the technology in terms of systems. The real assets are not the systems but the data in those systems. This article isn’t meant to debate the merits of the risk assessment process, but to help people who get stuck at the end of it better understand the process.
So, step one is to enumerate all the systems (at least those in the target scope of the risk assessment). Step two is to enumerate all the sources of risk to those systems.
How does this help the system administrator? For starters, there are going to be systems on the edges of the “target scope” like unofficial admin systems and old network hardware. An admin that is involved from the start can either down-play or highlight these systems to get them into (or keep them out of) the risk assessment. As part of the risk assessment you have the opportunity to argue for more resources to upgrade/replace those systems or to keep management from noticing them. 
Be careful here. This can backfire. Get an unofficial admin system noticed and it might be taken away instead of replaced. Get it noticed and there might be a lot more work maintaining it once it’s official and needs to go through change control. Keep it hidden and if it becomes the source of a problem you might get in trouble for not identifying it. The trade-offs can be rough.
Next up is risks. Risks are another trouble spot. They can be anything from elite hackers to nearby train tracks. The key to including a particular risk in the risk assessment is twofold. First, there needs to be good third-party documentation about it: articles in professional magazines that the management types recognize, or white papers published by information security and audit companies. Second, there need to be details that support how likely this risk is to be a problem: industry reports of recent events or statistical papers of historical occurrences. Again, sources that management understands are important.
Good sources of material to management are not the same as good sources to technical people. In fact, they are often quite opposite. Wikipedia would not usually be considered good from a management perspective, but the sources that the Wikipedia article references might be. Likewise, 2600 magazine and Wired are not going to be good sources to management. On the other hand, Business Week, the Wall Street Journal, and white papers published by Deloitte, Trustwave, or most of your corporate vendors will be sources that management trusts.
These sources are not an exhaustive list and may not be accurate for all organizations. If you can find out where the management in your organization get their news, that’s a good start.
The key to all of this for the system administrator is to identify risks to specific systems and point them out to the people writing the risk assessment. More risks and risks with a higher likelihood are likely to get a system more attention.

Understanding what went into the risk assessment report helps to explain why management is putting increased focus on some things and less focus on others. If the key system that keeps everything running smoothly is ignored, there may not be sufficient resources to keep it that way. If a minor system gets too much focus, the administration staff may find they are spending too much time on something nobody really cares about.

Sunday, April 12, 2015

Configuring the FreeBSD Periodic Subsystem

As mentioned in the post about the daily periodic script, there are some scripts that run daily to clean up various legacy subsystems. Some system administrators may view these scripts as unnecessary and not wish to run them. FreeBSD provides an easy way to modify the behaviour of the periodic subsystem through a simple configuration file.

As noted in the earlier post, the system announcements and rwho sub-systems are somewhat legacy and probably not used (or even enabled) on modern installations, particularly on servers that are not intended for end-user login.

The syntax of /etc/periodic.conf is very simple and follows the same structure as /etc/rc.conf on FreeBSD. Lines that begin with a pound/hash symbol (#) are treated as comments. Blank lines are ignored. All other lines follow the variable=value syntax. The file should contain only overrides of the default values found in /etc/defaults/periodic.conf.


# Local configuration for periodic sub-system.
# This file overrides /etc/defaults/periodic.conf
# for more information: man periodic.conf

# disable archaic system messages cleanup since it is not in use
daily_clean_msgs_enable="NO"

# disable rwho database cleanup since the rwho daemon isn't running
daily_clean_rwho_enable="NO"

# enable daily cleanup of /tmp
daily_clean_tmps_enable="YES"

The above example of a customized periodic.conf file makes three changes to the defaults:

  • disables the section in the daily output entitled, "Cleaning out old system announcements:"
  • disables the section in the daily output entitled, "Removing stale files from /var/rwho:"
  • enables a section "Removing old temporary files:"
The first two were mentioned in the earlier post as being legacy and probably not in use. The server is not intended for end-user login and the administrator does not make use of the system announcements sub-system (part of the mail subsystem), so there will not be any system announcements to clean. The rwho daemon (rwhod) is not enabled, so there will not be any entries in /var/rwho. Thus, on this particular system, it should be safe to disable these two daily scripts.

The third change is to enable cleanup of /tmp. This is probably not needed on a system without end-user access because only applications, scripts, and the system administrator should ever be using the /tmp filesystem. It is possible that some application or script may not behave properly and could leave files in /tmp when they are no longer needed. The system administrator might similarly forget to clean-up. Thus the periodic script has been enabled to keep things tidy.

The daily tmp cleanup script will, by default, remove any files found in /tmp that are more than 3 days old. This period can be adjusted by setting the daily_clean_tmps_days variable in /etc/periodic.conf.
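For example, a minimal sketch of such an override in /etc/periodic.conf (the 7-day retention value is purely illustrative):

```shell
# Enable the daily /tmp cleanup, but keep files for a week
# instead of the 3-day default
daily_clean_tmps_enable="YES"
daily_clean_tmps_days="7"
```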


In the default state, FreeBSD's periodic sub-system is largely self-maintaining and does a reasonable job of keeping the system clean, backing up key system files, and providing the system administrator with daily reports. Although adjustments may not be needed in many circumstances, a system administrator will find value in understanding how to make changes to this sub-system when the need arises.

Sunday, April 5, 2015

FreeBSD Weekly and Monthly Maintenance Reports

This post will cover both the weekly and the monthly periodic reports because these are both short and they are primarily automated system maintenance activities.

Weekly Run Output

Rebuilding locate database:

Rebuilding whatis database:

You may notice there are only two entries in the weekly run and they are both blank. As with the other periodic report sections that are usually blank, if you see something in the output section, you need to figure out what broke and try to fix it.

The locate command is used to find publicly accessible files by their name. This could be used by a system administrator to figure out where a specific executable is stored. Different UNIX and UNIX-like operating systems may put the same command in different places. In the case of FreeBSD you may even find the same command in multiple locations depending on how it was installed.

The whatis command provides a short summary of the function or purpose of an executable on the system. It does this by extracting the short description from the man pages. You won't get any output from whatis without an appropriate man page.

Monthly Run Output

Doing login accounting:
total                               92.11
a_user                              89.75
root                                 2.36

The monthly periodic output is a report of the number of hours spent on the system per-user. Back in the days of shared systems with connection billing, this was one way that the bills were calculated. It may still be used today but probably very rarely.

It does, however, make a nice simple monthly check to see if someone may be using your system without your permission. If you see user accounts that shouldn't be active or unusually large numbers associated with a specific user it could be time to do a little investigating.


There is quite a bit of legacy reporting and maintenance activities going on in the periodic jobs that FreeBSD includes as part of the base installation. Much of this is probably not used very often and could be disabled. This decision should be left up to the system administrators and/or the corporate security and build policy if such exists in the organization.

It is worthwhile for system administrators to understand the periodic subsystem. Up to this point we have looked at the defaults. There are configuration options that can be made to disable some of these defaults and to enable additional actions that are not enabled in the base installation.

Sunday, March 29, 2015

FreeBSD Security Report

The Second of the FreeBSD periodic reports to be looked at is the security report. It runs daily but is sent in a separate e-mail from the daily report. This will hopefully help system administrators to realize that some attention really should be paid to this report.

There are fewer settings but it is important to pay attention to all of them and understand what they are trying to communicate.

Sample Security Run Output

Checking setuid files and devices:
Checking negative group permissions: 
Checking for uids of 0:
root 0
toor 0 
Checking for passwordless accounts: 
Checking login.conf permissions: 
Checking for ports with mismatched checksums: 
Hostname login failures:
Mar 25 17:51:41 Hostname sshd[33490]: Invalid user admin from
Mar 25 17:51:41 Hostname sshd[33490]: input_userauth_request: invalid user admin [preauth]
Mar 25 17:51:45 Hostname sshd[33496]: Invalid user db2fenc1 from
Mar 25 17:51:45 Hostname sshd[33496]: input_userauth_request: invalid user db2fenc1 [preauth]
Mar 25 17:51:47 Hostname sshd[33498]: Invalid user oracle from
Mar 25 17:51:47 Hostname sshd[33498]: input_userauth_request: invalid user oracle [preauth]
Mar 25 17:51:50 Hostname sshd[33502]: Invalid user git from
Mar 25 17:51:50 Hostname sshd[33502]: input_userauth_request: invalid user git [preauth]
Mar 25 17:52:04 Hostname sshd[33516]: Invalid user aaron from
Mar 25 17:52:04 Hostname sshd[33516]: input_userauth_request: invalid user aaron [preauth]
Mar 25 17:52:05 Hostname sshd[33519]: Invalid user gt05 from
Mar 25 17:54:42 Hostname sshd[33707]: input_userauth_request: invalid user oracle [preauth]


Mar 25 19:04:00 Hostname sshd[33953]: reverse mapping checking getaddrinfo for [] failed - POSSIBLE BREAK-IN ATTEMPT! [preauth]
Mar 25 19:04:00 Hostname sshd[33953]: reverse mapping checking getaddrinfo for [] failed - POSSIBLE BREAK-IN ATTEMPT!

Hostname refused connections: 
Checking for packages with security vulnerabilities:
Database fetched: Wed Mar 25 03:12:53 EDT 2015

Checking setuid files and devices

Most days this should be a blank section. The very first time this script runs, it will contain a list of all the files on the system that have the setuid bit enabled. Subsequent runs will only show changes between runs. Operating system upgrades and some 3rd party software may change (usually add) setuid files or devices. This means you will only have one notification of the change.

The setuid permission is a special UNIX permission which says that the program (file) should be run as the owning user, not the user that actually ran the application. This is how FreeBSD (and UNIX systems in general) provide for temporary escalated privileges to ordinary users.

Unexpected changes need to be investigated. Abuse of setuid permissions is still one of the most common methods for internal users to gain unauthorized access to the root account and compromise systems.

You will see a sub-section with the heading, Hostname setuid diffs, when there are changes in setuid permissions.
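If you want to regenerate the full list of setuid files yourself, for instance to compare against a known-good baseline, a simple find does the job. This is a sketch and not necessarily the exact command the periodic script uses:

```shell
# List all regular files with the setuid bit set, starting from /.
# Narrow the starting directory to speed things up or limit scope.
find / -type f -perm -4000 2>/dev/null
```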

Checking negative group permissions

There should never be negative group permissions on files or directories. This section should always be blank.

Negative group permissions mean that 'other' users (users not in the same group as the file/directory) have more permissions than 'group' users.

Naive system administrators have been known to use this tactic in an attempt to let one group of users read files that are created by a different group of users. The problem is that when file access permissions are checked, only the user's active group is checked. Any user in more than one group can change their active group and no longer be considered a 'group' user of the file. This makes a very fragile security model that is too easy to break. 

Filesystem ACLs were created to provide the sort of functionality that an administrator might want in situations where negative groups are considered. Use filesystem ACLs instead.

Hostname changes in mounted filesystems

When the mounted filesystems change between security runs, a section that identifies the differences will be included. If no changes are made, this section does not show up.

Administrators do have cause to make changes to mounted filesystems at times but malicious users may try to introduce new filesystems as a step to gaining unauthorized access to a server. Unexpected filesystem changes should be reviewed to determine the cause.

Checking for uids of 0

By default there are two accounts with UID 0 on FreeBSD. One is the well-known "root" account. The other is the "toor" account. Any other accounts that show up in this list need to be looked at very carefully to understand not only where they came from but why.

UID 0, regardless of the name associated with it, is the super-user account on UNIX and UNIX-like operating systems (including FreeBSD). The name is traditionally "root". The only thing that makes the account a super-user account is that it has UID 0.

It is perfectly fine to have more than one account with the same UID on FreeBSD although it is very rare to do so with anything other than UID 0.

The "toor" account is included with the base system to allow the system administrator to change the user shell for interactive sessions where full super-user authority is required. The root user's shell should not be changed because many automated tasks need to run as root and rely on the shell being the Bourne shell included with the base system.

Some environments choose to remove the "toor" account from FreeBSD because it violates their internal security policy.
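You can perform the same UID 0 check by hand against the password database. A one-liner sketch:

```shell
# Print the name and UID of every account whose UID (the third
# colon-separated field of the password database) is 0
awk -F: '$3 == 0 { print $1, $3 }' /etc/passwd
```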

Checking for passwordless accounts

This provides a list of all local accounts that have a blank password set. This is distinct from having no password at all, which would prevent password login from working. No account should ever be created with a blank password.

Checking login.conf permissions

This checks to ensure that the permissions on the /etc/login.conf file have not changed from the default. Anything in this section indicates corrective action needs to be taken.

The /etc/login.conf file defines system level permissions for users. Changes to this file will change system level access permission for one or more users. Attacks on login.conf are a common way to gain unauthorized administrative privileges or introduce vulnerabilities to a system. Only the root user should have read and write permission to this file (group and other should have read-only).
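A quick manual check and the corrective action might look like the following sketch (mode 0644 matching the "root read-write, everyone else read-only" default described above; root:wheel is the conventional FreeBSD ownership for files in /etc):

```shell
# Inspect the current owner and permissions
ls -l /etc/login.conf

# Restore the expected state: owned by root, writable only by root
chown root:wheel /etc/login.conf
chmod 644 /etc/login.conf
```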

Checking for ports with mismatched checksums

This provides a list of entries in the ports tree (starting at /usr/ports) that have invalid checksums when compared to the default checksums. Anything showing up here should be considered a problem that needs to be addressed.

The ports subsystem is another alternative to the package subsystem (discussed in a previous post). The main difference is that the ports subsystem provides source code and the system administrator must compile the source before installing the software. The ports subsystem includes all the functionality needed to easily compile and install on the local system.

Hostname kernel log messages

This is an extract of all kernel generated messages in /var/log/messages since the last run. It is usually blank.

When a system is booted (or rebooted) there will be some kernel messages generated. Kernel errors will also generate messages, including problems identified by low-level hardware drivers.

Any kernel messages that are not associated with a system boot/reboot should be reviewed and understood because they could be an early indication of hardware problems.

Some messages from the boot/reboot process could also indicate problems. It is worthwhile becoming familiar with any kernel messages that show up.

Hostname login failures

This is a list of all login failures on the system. On servers it would be empty unless an administrator made a typing error. On systems that support end-users there will usually be a few mistyped passwords every day (depending on the end-user population). On Internet facing servers with login support there are bound to be many entries every day as the result of (hopefully failed) unauthorized login attempts.

Every mistyped password would be included for most system login mechanisms (including console login, PAM, ssh, and x11). The included sample (above) shows a snippet of what is generated when an ssh server is exposed to the Internet. This happened less than an hour after the ssh server was made accessible.

The example was truncated to keep it reasonable. There were actually 58 separate attempts to connect over a period of 13 minutes. This would have been done by a simple script that just tries a bunch of different usernames and passwords.

The bottom section shows a different sort of attempt to break in. It was probably more sophisticated than the first since it gave up immediately.
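When the volume of failures gets large, summarizing them is more useful than reading every line. A sketch that counts the attempted usernames, assuming sshd logs to /var/log/auth.log (the log path varies with your syslog configuration):

```shell
# Count how often each invalid username was tried, most frequent first.
# $8 is the username field in the "Invalid user NAME from HOST" messages.
grep 'Invalid user' /var/log/auth.log \
  | awk '{ print $8 }' \
  | sort | uniq -c | sort -rn
```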

Hostname refused connections

Provides a list of TCP refused connections. Internally this should be empty. If not, something may be misconfigured in your network. For Internet facing servers, it is yet another indication of the common attacks (such as port scans). Little can be done about the Internet side. It is a good reminder of the importance of firewalls.

Checking for packages with security vulnerabilities

Most days this will only include the date and time that the package database was updated (which is done as part of this process). It may also contain a list of packages installed on your system that have known security vulnerabilities. That indicates it is time to upgrade the identified packages.

Simply put: it is time to upgrade anything that shows up in this section as soon as you can. Especially on servers that are exposed to the Internet.


The security report offers a lot of valuable information. Some sections will contain a lot of detail for which there is no action to be taken. Other sections give warning signs, but only once. It is far too easy to miss the important parts of the security report due to the overwhelming amount of detail provided. The security report is much better than no report at all (which is what you get with most operating systems), but it is not a good substitute for a proper security monitoring solution, even on small installations.

Wednesday, March 25, 2015

Reading the FreeBSD Periodic Reports

FreeBSD is a free open source operating system that predates Linux. It is used globally to this day; mostly as a server and in a few appliances. The FreeBSD project is very active and focuses on stability and reliability of the system. You can learn more about FreeBSD from the FreeBSD Foundation website.

One of the many features that FreeBSD includes in the default installation is a set of maintenance and monitoring scripts that run periodically. Unfortunately, many administrators and hobbyists don't fully understand these reports and some aren't even aware of their existence. The reports are mailed to the local root user of the system, and if the email subsystem on the server hasn't been configured, they simply sit in the local mailbox.

The reports include:

  • A daily report showing general system health.
  • A security report that runs daily and highlights potential security concerns.
  • A weekly report of system health activities.
  • A monthly report of system login accounting.

These reports offer the system administrator a quick and easy way to monitor their systems with a quick daily glance at a few e-mails (per system). Obviously if you manage hundreds of servers you will want a more robust solution. I will not cover such options in this post.

The Daily Report

This is an e-mail with the subject line: Hostname daily run output (Hostname is the actual short name for the server). Here is an example:
Removing stale files from /var/preserve:
Cleaning out old system announcements: 
Removing stale files from /var/rwho: 
Backup passwd and group files: 
Verifying group file syntax: 
/etc/group is fine
Backing up mail aliases:
Backing up package db directory: 
Disk status:
Filesystem     Size    Used   Avail Capacity  Mounted on
/dev/ada0p2     15G    3.3G     11G    23%    /
devfs          1.0k    1.0k      0B   100%    /dev
/dev/ada0p5    254G     79M    233G     0%    /data
fdescfs        1.0k    1.0k      0B   100%    /dev/fd
Network interface status:
Name    Mtu Network       Address              Ipkts Ierrs Idrop    Opkts Oerrs  Coll Drop
re0    1500 <Link#1>      00:01:2e:bc:bc:6e   239879     0     0    25803     0     0    0
re0    1500  caliban             190041     -     -    19392     -     -    -
re0    1500 fe80::201:2ef fe80::201:2eff:fe        0     -     -        2     -     -    -
ath0*  2290 <Link#2>      e0:b9:a5:66:0c:80        0     0     0        0     0     0    0
usbus     0 <Link#3>                               0     0     0        0     0     0    0
usbus     0 <Link#4>                               0     0     0        0     0     0    0
usbus     0 <Link#5>                               0     0     0        0     0     0    0
usbus     0 <Link#6>                               0     0     0        0     0     0    0
plip0  1500 <Link#7>                               0     0     0        0     0     0    0
lo0   16384 <Link#8>                           75644     0     0    75644     0     0    0
lo0   16384 localhost     ::1                  75580     -     -    75580     -     -    -
lo0   16384 fe80::1%lo0   fe80::1                  0     -     -        0     -     -    -
lo0   16384 your-net      localhost               64     -     -       64     -     -    -  
Local system status:
3:01AM  up 4 days, 16:58, 0 users, load averages: 0.34, 0.08, 0.03 
Mail in local queue:
mailq: Mail queue is empty 
Mail in submit queue:
mailq: Mail queue is empty 
Security check:
   (output mailed separately)
Checking for rejected mail hosts: 
Checking for denied zone transfers (AXFR and IXFR): 
Backing up pkgng database: 
-- End of daily output --

Removing stale files from /var/preserve

The first section should always be blank. If anything is found in this section, it indicates that something has gone wrong. Most likely a daemon did not properly start on the last reboot.

This section lists files that were found in /var/preserve and deleted as part of the job. The /var/preserve directory is intended as a place to save state data between reboots. Normally it would only be used by the operating system. A daemon will write a file to that directory just prior to shutdown and expects to read that same file when it starts up after a reboot. The daemon is supposed to erase the file once read.

Cleaning out old system announcements

This section should also always be blank. If you aren't sure what system announcements are, odds are pretty good you don't use them and there wouldn't be any issues. If you do see something here and you aren't expecting it, investigation will be warranted.

System announcements are an antiquated method of sending a global message to end-users so that they will see it the next time they log in to the system. It works through the mail sub-system. Announcements are displayed to the user when they log in, and each announcement is seen only once. Since we rarely have end-users logging in to an interactive text shell on UNIX systems anymore, this facility is rarely used nowadays.

Removing stale files from /var/rwho

As with the previous entries, this should be empty. If there are entries in this section, it indicates something has gone wrong with the rwho subsystem. Generally speaking nobody should be running the rwho subsystem anymore (there may be exceptions).

The rwho subsystem was used back in the days of interactive shells to query information about other users on other computers. You might think of it a little like Facebook from the very early days of the Internet.

Backup passwd and group files

This should be empty, too. If it isn't empty, it is worth investigating what happened.

The passwd and group files store the local user database and the local group (roles) database. If either one gets corrupted or changed unexpectedly, you can use the backup to restore the previous day's version of the files. So, if something goes wrong with the backup it is important to figure out what and fix it.

Verifying group file syntax

If this says anything other than "/etc/group is fine" it means that the group database is corrupt and needs fixing. The easiest fix is probably to restore the previous day's group file but odds are that a system administrator made changes to the file and messed up the syntax somehow. So, it may be worthwhile looking at the difference between the two and making appropriate corrections.
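To see exactly what changed, compare the live file against the backup kept by the daily job. The backup path shown here is the conventional /var/backups location; verify where the backups land on your system:

```shell
# Show line-by-line differences between the backup and today's group file
diff /var/backups/group.bak /etc/group
```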

Backing up mail aliases

Similar to the passwd and group backup, this section should be blank. If it is not, the e-mail subsystem will probably not be functioning correctly and e-mail may get lost or misdirected.

The mail aliases file (/etc/aliases) is used to alter where mail for local users is sent. If something goes wrong, the easiest fix is to copy the previous day's aliases file back into /etc.

Backing up package db directory

As with passwd, group, and aliases this should be blank. If it is not, something has gone wrong with the backup of the package subsystem.

The FreeBSD package subsystem is used for installing 3rd party software from binary distributions. The distribution repository is maintained by the FreeBSD project and updated regularly. The package system (man pkg) provides functionality for installation, upgrades, and removal of 3rd party software.

Disk status

This provides a summary of the mounted filesystems (the output of df -k). It is intended that the administrator glance at this to watch for unexpected changes such as missing filesystems or full filesystems.
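Rather than eyeballing the table every day, you can flag only the filesystems above a usage threshold. A sketch (the 90% threshold is an arbitrary choice):

```shell
# Print the mount point and capacity of any filesystem at or above 90% full.
# int($5) strips the trailing % from the Capacity column; NR > 1 skips the header.
df -k | awk 'NR > 1 && int($5) >= 90 { print $6, $5 }'
```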

Network interface status

This is the output of the netstat command. It is intended that the administrator glance at this to watch for unexpected changes in network status such as missing network interfaces or unexpected networks appearing.

Excessive (or in some cases any) changes to the numbers in Ierrs, Idrop, Oerrs, Coll, and Drop indicate network issues that may need to be addressed.

Local system status

This is the output of the uptime command. It shows you how long the system has been running since the last reboot.

Obviously if the system rebooted since the last report and the administrator wasn't expecting it, there may be a problem. The last 3 numbers are the load averages over the last 1, 5, and 15 minutes. Since the periodic script runs in the middle of the night, one would expect these to be pretty low. A value of 1 indicates the system is fully loaded in some way.

Mail in local queue

This should generally say "mailq: Mail queue is empty". If it does not, there is something preventing mail from being delivered locally and this should be examined.

It provides a count of the number of messages waiting in the sendmail subsystem queue for local mail delivery. That is mail that is being sent to a user on this system.

Mail in submit queue

If you are running a mail server, this will contain the number of e-mail messages that are queued up for delivery to other systems. It may not be empty in such cases. A large number would indicate a problem but that may or may not be a problem local to this system.

If you are not running a mail server, it may still contain the number of e-mail messages that are queued up for delivery to other systems (but presumably generated from this system). Odds are any value other than "mailq: Mail queue is empty" indicates some sort of problem with delivery.

Note that these two preceding sections (Mail in local queue and Mail in submit queue) are written assuming that the sendmail daemon is used for the server's e-mail subsystem. Many alternatives provide work-alike functionality and will result in this report being accurate (both Exim and Postfix do this). Some lightweight alternatives do not provide this functionality and it will be the administrator's responsibility to provide an alternate monitoring solution.

Security check

This only ever says "(output mailed separately)". The output will be found in the mail message with subject line "Hostname security run output" where "Hostname" is the name of the server that ran the script. That report will be covered in a later journal entry.

Checking for rejected mail hosts

If this is a mail server, there may be entries in this section. Excessive entries may indicate a problem with your setup that is causing other organizations' mail servers to reject mail from your server.

This will be a list of e-mail that failed to be delivered because the recipient's e-mail server rejected the message. On mail servers, it is not unusual for there to be some entries here.

Checking for denied zone transfers

If this section is not empty, there is a problem with your network's DNS setup and investigation is warranted. Or, someone is trying to take a copy of your DNS Server's entire database without your authorization.

This is an extract of log messages from the DNS subsystem (usually bind) which indicate attempted zone transfers failed. There are several reasons why this could happen.

A zone transfer request from an unauthorized (and unexpected) source may be an early indication of a focused attempt to crack your network but it can also be simple casual curiosity. While it is important to not over-react it would be a good idea to pay attention to other sources of information about unexpected network activity.

Backing up pkgng database

The pkgng subsystem is a replacement for the package subsystem (discussed above). All the same comments from the earlier section apply here as well. As of version 9.3 of FreeBSD (possibly earlier and later as well) either may exist on the server but both should not be used at the same time.


That covers the content of the daily report. Regularly viewing and understanding this report can help an administrator catch problems before they are noticed by end-users.

I have seen people in corporate environments use the output of these reports to satisfy IT audit requirements and as evidence to support the need for more staff or to justify a better annual review with the boss. They provide not only the information needed to be a pro-active system administrator but also the evidence that you are being pro-active and stopping problems before people notice.

Saturday, March 14, 2015

Dual Root Upgrades

Performing operating system upgrades is always a risky task. There are obvious issues like unrecognized hardware and incompatible application software, and less obvious risks like subtle bugs or incompatibilities with supporting systems that might not be noticed right away. In production environments these problems can lead to extended downtime and failure to stay within published maintenance windows. I developed the dual-root architecture to reduce these risks by providing a very simple recovery process.

The technique may be applied to most operating systems that run on general purpose computers. The remainder of this journal entry will summarize how I have chosen to implement it on FreeBSD.


  1. In addition to the normal filesystem partitioning scheme, add a second partition that is an identical size to the planned root partition. 
    • This must cover / and /boot and 
    • should cover /usr but 
    • may not need to cover /usr/local. 
    • Ideally it should not cover /usr/src or /usr/ports. 
    • It must not cover /home or /usr/home or any other user or application data directories.
  2. Tell the installer to use an alternate location for the root filesystem when upgrading.
    • for source this is done with the INSTALLROOT variable/option.
    • for bsdinstall do a custom install with the new root partition mounted on /mnt.
For reference when defining the size of the root partition(s):
  • FreeBSD 10.1-RELEASE base install from binary used 1.4GB including the ports skeleton
  • FreeBSD 9.3-RELEASE-p10 base, source, ports, and several packages used 3.5GB before compiling the source tree.
  • I currently use 16GB for each root partition. I would reduce this to 8GB each if I shared /usr/src and /usr/ports between the two.
  • In a production environment, I would probably share /usr/local as well to allow for separate upgrading of OS and applications.
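As a sketch of step 1, the second root partition can be added with gpart. This assumes a GPT-labelled disk ada0 whose boot partition is already in place; the sizes and labels here are examples, not a prescription:

```shell
# Two identical 16GB root partitions (these become ada0p2 and ada0p3)
gpart add -t freebsd-ufs -s 16G -l root0 ada0
gpart add -t freebsd-ufs -s 16G -l root1 ada0

# The rest of the disk for user and application data
gpart add -t freebsd-ufs -l data ada0

# Create filesystems on the new partitions
newfs /dev/ada0p2
newfs /dev/ada0p3
newfs /dev/ada0p4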

Selecting the Root Partition

The FreeBSD boot loader (boot0) scans the partitions in order looking for /etc/fstab and assumes that is the root partition by default. At least, that is what is (or was) in the source code comments. Fortunately for us, the gpart utility allows us to set some attributes that override that default behaviour.

If we assume that our two root partitions are located on /dev/ada0p2 and /dev/ada0p3, then the boot loader will use /dev/ada0p2 by default all the time. To tell the boot loader to use /dev/ada0p3 we need to set a special attribute with the gpart utility:

gpart set -i 3 -a bootme /dev/ada0

and to unset it:

gpart unset -i 3 -a bootme /dev/ada0

For clarity's sake, you may want to set the bootme attribute on the first partition as well. However, by default the boot loader will boot the first available partition that appears to be a root partition.
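To confirm which partition will be selected before rebooting, gpart can display the current partition table; in my experience, any attributes that have been set (such as bootme) appear in square brackets in the output:

```shell
gpart show ada0
```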

Tuesday, March 10, 2015

Shell Script Report Skeleton

Over the years I have written and re-written a skeleton of a shell script that provides a bunch of basic program-like functionality for Bourne shell (sh) scripts. Mostly I use it as a sort of wrapper for cron and at jobs. It provides:
  • Simple command line argument parsing.
  • Help (-h argument).
  • E-mail the standard out as a report.

Skeleton Shell Script

# Skeleton shell script
# As written, the output is mailed to the person defined by the MAILTO line.
# This is primarily intended as a skeleton for cron jobs.
# Copyright (C) 2015, Ean Kingston, All rights reserved.

# Configuration Variables - These should be customized
MAILTO='root'      # Report recipient, blank to print to stdout instead
WORKDIR='/tmp/'    # Directory (with trailing /) for the temporary report file

# Internal Variables - these should not need to be changed
# If PATH is not set, set it to something sane
PATH=${PATH:-/sbin:/bin:/usr/sbin:/usr/bin}
MYNAME=${0##*/}                        # Script name with the directory removed
MYRPTFILE=${WORKDIR}${MYNAME%.sh}.$$   # Temporary file for the report

# Support subroutines

# Display Usage information for the script. This will need to be edited.
printhelp() {
cat <<EOT
Usage: $0 [-h] [-m={email}]
This is a skeleton shell script. This text should be replaced with usage
information for the completed script.
   -m Send the mail to an alternate address, blank for stdout.
      default is ${MAILTO}
   -h Display this help text.
EOT
}

# Start of Main code

# Process command line arguments
for ARG ; do
   case $ARG in
      -[mM]=*) MAILTO=${ARG#-[mM]=} ;;
      -[hH]) printhelp ; exit ;;
      *) echo "Unexpected command line argument $ARG. Stopping." >&2 ; exit 1 ;;
   esac
done

# Take everything from stdout and put it in a report file
exec 3>&1            # save the original stdout
exec > "$MYRPTFILE"  # from here on, stdout goes to the report file

# The real work of the script goes here. Anything written to
# stdout becomes part of the report.

exec 1>&3 3>&-       # restore the original stdout

# Send (or print) the report
if [ -n "$MAILTO" ] ; then
   mail -s "${MYNAME} report from $(hostname)" "$MAILTO" < "$MYRPTFILE"
else
   cat "$MYRPTFILE"
fi

# Cleanup temporary file(s)
rm -f "$MYRPTFILE"
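Once a script built from this skeleton is saved and made executable, a single crontab entry is all it takes to schedule it. A hypothetical entry (the path and address are examples) that runs the report nightly at 02:00:

```shell
# Edit the owning user's crontab with: crontab -e
0 2 * * * /usr/local/sbin/myreport.sh -m=admin@example.org
```

Because the skeleton already handles mailing its own output, there is no need to rely on cron's default behaviour of mailing whatever a job prints.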