Thursday, April 7, 2016

Microsoft Introduces Early Beta of GNU/Windows

Microsoft recently released Insider Preview Build 14316 of Windows 10. This preview includes the option of adding a GNU/Linux user-land. So, really, just a GNU user-land and early support for a Windows-to-Linux kernel call conversion library; basically, the reverse of the Wine project.

This is hardly the first time Microsoft has done something similar. There used to be UNIX tools for Windows. Initially they were available as a free download; later they required the "pro" version of Windows, and ultimately they were scrapped. To my knowledge Microsoft never published a reason for scrapping the UNIX tools, but I have always assumed the code stopped being maintained and eventually the tools just didn't work properly with newer versions of Windows.

There has also been, for a very long time, the open source Cygwin project. This project has provided a GNU (and X11) user-land on Windows for nearly two decades now. The last time I tried Cygwin, it worked very well. Of course, Microsoft's solution is technically still in beta, and since this is the first release to the Insider program, we should expect improvements to Microsoft's solution.

For its part, Microsoft has chosen the Ubuntu user-land, complete with package management and development tools. While many people may consider this a success for the Linux community, I'm not convinced it is much more than a marketing ploy to try to attract Linux users to Windows. It might be an attempt to satisfy the corporate data-center folks who have to deal with Windows on the desktop while managing UNIX and Linux systems in their data centers, but I doubt it.

There are too many options these days to need a Linux user-land on a Windows desktop. With reliable Secure Shell (ssh) tools easily available, a few options for running X11 on Windows, a robust selection of capable VM solutions on the desktop, and the age-old solution of dual-booting, there is no longer much need for a UNIX user-land on Windows. That being said, if Microsoft is actually going to stay committed to this, I applaud the effort.

As I said earlier, I suspect that this is Microsoft's attempt to get some Linux users to return to Windows. They would only get the fence sitters, and at this point I think they have missed the reason why the number of fence sitters has been increasing. It's not the availability of tools; it's the invasion of privacy and the lack of control over one's own hardware that has had many people moving away from Windows since the introduction of Windows 10.

For those who don't know, Microsoft has made Windows 10 call home so much that many security-minded corporations are flat out refusing to let it into their organizations. Only the most expensive of the numerous Windows 10 license options actually lets you completely disable the call-home functionality of Windows 10. In my experience, corporations don't generally want to deploy the most expensive version of Windows to every single desktop (they prefer the least expensive pro version that supports AD integration).

You only need to perform a quick search to get an idea of just how badly Windows 10 behaves. Every time you click on the "start menu", Windows 10 calls home, even if you disable the "smart" features and active icons. My list of Internet sites to block in order to reduce the level of spying that Windows 10 does on my one Windows 10 system contains nearly 60 entries. Those entries do not include Microsoft's software update servers, the Bing search engine, outlook.com, OneDrive, or the live.com website.
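If you want to start your own blocklist, the usual trick is hosts-file overrides. The exact hostnames change between builds, so treat these as illustrative examples of the sort of entries commonly reported around the web:

# entries in C:\Windows\System32\drivers\etc\hosts
0.0.0.0 vortex.data.microsoft.com
0.0.0.0 telemetry.microsoft.com
0.0.0.0 settings-win.data.microsoft.com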

Even if I am wrong and most converts aren't concerned with the invasive nature of Windows 10, it is still unlikely that Microsoft will win over many long-time Linux users. They are comfortable with the operating system they have. Most prefer the freedom and flexibility. Many will expound on the efficiency and reliability of their preferred operating systems.

In the end, I'm not sure this move will get Microsoft what it wants. Developers are going to prefer native Linux for developing Linux applications, since they are less likely to run into subtle compatibility issues. Security-minded folks aren't going to move back to Windows until Microsoft dumps all the spyware. Data-center folks will enjoy the simplicity of not needing a UNIX system in addition to a corporate desktop (but only if corporate policy allows installation of the GNU user-land tools) while it lasts. And those of us who have been around long enough to remember the old UNIX tools package will doubt Microsoft's commitment for at least the next few releases of Windows.

Tuesday, March 1, 2016

Jumping on the Bandwagon with the SSLv2 "DROWN" Vulnerability

It has been a while since we've seen a truly newsworthy vulnerability in SSL, but we got one today in CVE-2016-0800, more affectionately known as DROWN. The tech news media is sure to be writing very exciting and largely redundant articles about this particular beast. So, I figure why not add to the fray with my own little post.

The last time we saw something this newsworthy was the renowned Heartbleed vulnerability way back in 2014. That one had IT shops scrambling to patch systems and kept executives up at night worrying that something was missed and the hackers would get into their businesses. Scary stuff for both groups.

But, enough background, what does this new vulnerability really mean?

If you have a server that allows the use of SSLv2, then an attacker can use it to figure out what is called a "session key". A session key is the encryption key used to secure the communications in transit between a client and a server. How bad this could be for your business depends on how sensitive the information flowing between your server and your clients is.

In the worst case, you are communicating highly sensitive information that falls under strong government regulations or industry compliance standards like personally identifiable information, health care data, or financial transaction data.

In the best case, it represents an embarrassment to the company and the need for customers to change their passwords.

The Good News

This new vulnerability is a variation on one from March of 2015 (CVE-2015-0293) and is much harder to exploit. As a result of that previous incident, organizations that were concerned about that (older) vulnerability should have already disabled SSLv2 in order to protect themselves and their customers. Unless they have re-enabled SSLv2 this should not be a major concern today.

In order to take advantage of this vulnerability, an attacker is going to have to make many thousands of connections to a vulnerable server. Any organization with a good intrusion detection system at its perimeter should be able to pick up on this and identify potential attackers fairly quickly.

The Bad News

One server that is vulnerable to DROWN can be used to obtain the session key for another server, provided that both servers are using the same server certificate, or more specifically, the same RSA private key behind that certificate. They don't even have to be the same type of server as long as they use the same certificate. A public-facing mail server could be used to obtain the session key for a secure web server in such a case.

Most bigger organizations choose to purchase wildcard certificates for their public servers. This lets the organization purchase one certificate and use it on multiple servers instead of having to purchase (and manage) separate certificates per server. While this simplifies server certificate management, it increases the risk of exposure in this case. A server that would be considered a lower priority could end up being used in an attack against a higher priority server.

Another potential problem arises with network appliances, particularly those whose vendors don't tend to provide patches promptly. These may go for years without a fix even being available. The consumer market will be a particular problem in this case.

Conclusion

When this hits the major media news, we will probably see some more sleepless nights for executives and IT staff. It shouldn't be nearly as severe as Heartbleed was, because anyone who has been paying attention to their information security should already have SSLv2 disabled on all their servers.

The best course of action I can think of at the moment is to do a thorough review of all public facing servers and network equipment and confirm that SSLv2 is disabled. This needs to include even the low-priority systems because one server can be used by an attacker to compromise a different server. With SSLv2 disabled on every server, organizations can take their time and patch systems properly.
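For a quick spot check, OpenSSL's s_client can attempt an SSLv2-only handshake. This is a sketch assuming a local OpenSSL build that still includes SSLv2 support (many newer builds have it compiled out), and the hostname is a placeholder. If the handshake succeeds, that server needs attention:

# attempt an SSLv2-only connection; a completed handshake means SSLv2 is enabled
openssl s_client -connect mail.example.com:443 -ssl2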

Monday, November 9, 2015

Google, where is the confidentiality in Chrome OS?

Ask an information security expert what the three cornerstones of information security are, and s/he should tell you: Confidentiality, Integrity, and Availability. Google will tell you that Chromebooks are some of the most secure consumer computers available today. How well do they really do?

Integrity

ChromeOS and the hardware specification that Google put out for Chromebooks (specifically around firmware changes) have done a decent job of improving the integrity of the system. Kernel signing and OS signing (a cryptographic signature on the binary files) go a long way to stopping malicious rootkits.

Of course, it also goes a long way toward preventing people from messing with the system. But that would be flexibility, which isn't an underlying principle of information security. Besides, Google gave us developer mode to allow for flexibility. So, you can mess with the system if you want to.

But, what about the application software? Well, that is all managed through Google's app store. If Google is managing the ChromeOS app store as well as they are the Android app store, then the integrity of the system falls back to being on par with any other operating system as far as end-user application security goes. And since, by far, the majority of 21st-century malicious software targets the user at the application layer, not the OS layer, Google has largely missed the boat.

Want to fix this? Take a look at how Apple does things with their app store. Every app is rigorously tested and questioned. Application functionality is tested by Apple's own quality assurance staff, and bad applications are rejected. The need for access to system resources (even non-traditional resources like your address book) is questioned and must be justified by the actual needs of the application before it gets approved.

Does Google do this? I don't actually know. But I do know that on my Google phone, applications are given access to far more things than they could sensibly need.

Integrity score: marginally above average.

Availability

Google may argue that they have provided availability too, because your system is tied to your Google account and your files get written there, so you still have them if you lose access to your Chromebook for some reason. As long as the Chromebook can also work entirely offline, it does provide availability.

It also provides Google with a guaranteed source of valuable information about you that they can use to direct ads that target you specifically. In case you didn't know, advertisers pay a premium for targeted advertising. Of course, those ads are also what pays for all the Google services.

Mind you, the exact same thing is true for Windows users that have a OneDrive account, Apple users that have an iCloud account, and anyone who has bothered to set up and religiously use any of the plethora of cloud storage providers (Dropbox, Box.net, SugarSync, and a host more).

Availability score: average.


Confidentiality

But, those ads are not what pays for the Chromebook. We, the consumers, pay for our Chromebooks. And at least some of us expect to have a high degree of that last tenet of information security: confidentiality. This, Google throws completely out the window. The problem is, information is not secure unless there is a balance of all three.

Google has taken away all the confidentiality. Oh sure, they will say they keep your data private. Except they don't. Their deep content inspection tools dig through it and figure things out about you. Then they share what they learned with advertisers in the form of targeted ads. Those are things they wouldn't know about you if they provided proper confidentiality. Just because it isn't a human looking through your private documents doesn't mean it isn't a loss of confidentiality.

And while people in the USA are stuck with all sorts of laws that require companies to reveal data on request (sometimes with a court order, sometimes with a warrant, sometimes accompanied by a gag order), many of us don't live in the USA and shouldn't have our data exposed to their laws when we aren't in that country. We have our own mandatory disclosure laws. Yet everyone is stuck with their data being exposed to a foreign government's scrutiny, whether they consider this important or not.

Confidentiality score: below average.

Solution

Sadly, Google could very easily fix this, and should have fixed it from the start. Google has had plenty of time to fix it and yet still hasn't. It only requires two modifications to ChromeOS using existing and available software:


  1. Bring back local user authentication. Linux has it built in and ChromeOS is Linux at the core.
  2. Implement client-side file encryption. There exists this wonderful tool, available for Linux (and Windows and OS X and a bunch of other operating systems), called encfs (encrypted filesystem).
Hopefully everyone knows about local user authentication, since every modern desktop operating system except for ChromeOS has it built in and uses it by default. So, I won't go into detail about it.

On the other hand, some people may not be aware of EncFS, which stands for Encrypted Filesystem. It works in conjunction with another strange thing called FUSE (which stands for Filesystem in Userspace). Together, this pair of tools does something really useful for the confidentiality of data: they let you read and write files in one directory just like you normally would, while storing those files in encrypted form in another directory.
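A minimal sketch of what that looks like on a Linux system (the directory names are purely illustrative):

encfs ~/GoogleDrive/.encrypted ~/Private   # first run prompts for a password and creates both directories
echo "secret notes" > ~/Private/notes.txt  # write files normally here...
ls ~/GoogleDrive/.encrypted                # ...and encrypted versions appear here
fusermount -u ~/Private                    # unmount when finished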

Why is this so good? Well, it means that your files are not available for deep content inspection by Google (which is why Google will probably never implement it). It also means that if the files you store on your Google Drive slip out to the whole world (say you accidentally make them public), nobody else can see them because they don't have the encryption key.

At first glance, this does look to be a problem because it means you can't access your files except from your Chromebook. Except, of course, as was said earlier, EncFS is available on pretty much all of the common operating systems in use today.

It wouldn't be all that difficult for a decent programmer to implement an EncFS plugin for Chrome (the web browser) that would do the file decryption client side and encrypt it before sending it back to Google for long term storage.

Admittedly this last part takes a bit of effort but Google should probably make the use of client side encryption optional anyway. Encryption is one of those trade-offs between two of the principles of information security. You give up availability to get more confidentiality.

Conclusion

In my opinion, Google has violated their motto. Or at least the motto they used to have: "Don't be evil". Google, if you read this, realize that I consider your actions evil.

I have been looking at Chromebooks for a long time and I've been wanting one for years. I will not buy one so long as I must use a Google account in order to use it.

P.S. It would also make my Google account considerably less secure, since my password is a 32-character randomly generated password. It should be about as difficult to guess as brute-force cracking an AES-256 key. I can't memorize it. So, to use a Chromebook I would need to considerably reduce the security of my Google account. And, no, two-factor authentication the Google way is not an option, because I regularly run the battery down to zero on my cell phone.

Chrome OS and Chromebooks may have made a marginal improvement in the integrity of the core operating system, but it was done at the near-complete expense of confidentiality and with a loss of flexibility.

Post-Script

If you really want to impress me, Google, take a note from the way Apple manages shared keychain access across devices and use per-device PKI (like PGP or S/MIME) to transfer the encryption key between ChromeOS and the Chrome browsers installed on all my other systems, so that I don't have to worry about losing the key as long as I remember the passphrase I use to lock it. You could even do a proper secure key escrow with that.

And, in the name of flexibility, give me a way to put my own validation key into the firmware of the Chromebook (alongside yours) so I can install my own custom OS without having the scary developer mode warning show up. But please do it in a way that hobbyists and small open source developers can reasonably afford to make use of.

Friday, November 6, 2015

Secure Cloud Storage

Google will tell you that Drive is secure. Dropbox will tell you their service is secure. Microsoft tells us that OneDrive is secure. Amazon tells their clients that S3 is secure. Everyone tells us that the cloud storage they give us is secure. But what does that really mean?

For the most part it means that the data might be encrypted at rest (many cloud storage providers tell us this), that you need to authenticate to get access to the data, and that the data is backed up. And they are right, up to a point.

What they aren't telling us, except for one provider (as far as I know), is that our data is not protected from the cloud storage provider itself. As in: the staff and applications of the cloud provider are able to read our files without us knowing (although in the case of staff it is probably very limited and requires significant effort).

Google basically admits this in their EULA, where they tell us that they will use the content of our files (and e-mails) to help choose ads that are more suited to us. Microsoft claims that they can't see our files because they are encrypted using our login password (or they used to), but I bet there is a way to recover our password if we forget it (which means they must be able to decrypt the files without our password being entered). The others fall somewhere between these two points.

Why does this matter?

For starters, you might want to use the cloud storage to back up sensitive information that you don't want to lose if your computer's hard drive crashes. Say, for example, your financial records. Or, for many of us, that huge list of passwords for all the different websites, forums, and online services we all use now. Maybe you are a private type of person who feels uncomfortable knowing that other people might be able to look at your files. There could be files of a personal nature you don't want to get out, even accidentally.

The point is, there are a lot of very legitimate reasons why a person would want cloud storage and not want the cloud provider to ever have access to those files. Even if it means you might lose access to those files yourself.

The big question is how do we protect our data when it is not on systems entirely under our control? The obvious answer is simple. We encrypt it on systems that are under our control before sending it away. To keep things convenient we really want to have this happen in a way that is transparent, or nearly transparent to us.

Just like cloud storage, we want to write a file to a directory and let some program that runs in the background encrypt it before it gets sent away for safe keeping.

What can we do?


Once upon a time, a long time ago, a smart person with a strong understanding of encryption created a tool called Truecrypt. He created it to solve a different problem. He wanted to be sure that if his laptop got stolen, people couldn't get access to his data. He wasn't worried as much about cloud storage. At least that is what he said. 

Much of the world saw value in Truecrypt and started using it. Many of us saw value in it beyond keeping our laptops safe should they get stolen. We saw it as a way to pass large files to other people securely. We saw it as a way to be sure our files could not be looked at even when they were placed in cloud storage. We liked this too. We trusted this tool.

Unfortunately, the author(s) of Truecrypt decided it was no longer needed and abruptly shut down the project. The shutdown was so abrupt that many people don't believe the reason given by the authors of the software. But they gave one, and it was simple: operating systems now have built-in disk encryption, so Truecrypt no longer needs to be maintained. And they stopped.

Truecrypt was never an ideal solution for cloud storage. It needed big files to be useful: you put your little files in the big files to keep them safe. This meant a lot of data had to go to the cloud storage provider and come back every time one little file changed (except in the case of Dropbox, which only syncs the changed parts of a file).

In any case, Truecrypt is no longer supported. There are some alternatives. VeraCrypt and CipherShed have both taken the last public release of the source code to Truecrypt and begun making their own changes and improvements. But, Truecrypt was never ideal for cloud storage because of that big file problem.

Encrypted File System (encfs)

There is this obscure tool that came out in the Linux world around the same time as Truecrypt. It was unstable, unproven, and at the time worked only on Linux. But it had one major advantage over Truecrypt: it worked at the file level. That is, it encrypts each file separately, so when the cloud sync happens, it only needs to send the files that actually changed.

EncFS has improved since then. It is now up to version 1.7 on Linux, and we are starting to see some effort to bring it to the common operating systems in a reliable and consistent manner. The user interface is still ugly, and the tools for Windows and OSX are still pretty sparse and buggy, but it looks like it is coming.

At the time of this writing there are a couple of Windows and OSX ports worth keeping an eye on:

  • Safe (www.getsafe.org) has both Windows and OSX support but seems unstable.
  • EncFSmp (http://sourceforge.net/projects/encfsmp/?source=directory) supports both Windows and OSX. It appears to be a bit more stable than Safe, is still in beta, and has a few growing pains to work out in the UI. This is the one I'm currently using.
  • OSXFUSE is an implementation of another Linux tool for OSX. FUSE is the tool that encfs was built to use (Safe and EncFSmp appear to have replaced the FUSE functionality with alternatives built into Windows and OSX). While there is an OSXFUSE-supported encfs, it requires that you build it yourself.
  • Encfs4Win is an experimental port of encfs to Windows that requires fuse4win and is Windows only.
One very important thing to keep in mind when using encfs is that the encryption key is derived from the password you set. This means that the encryption is only as good as the password is complicated. Explaining this can get a bit involved, so here are some simple examples:

A one-word password taken from the words you know can be guessed by a modern desktop in under 10 seconds.

A password that is 8 characters long and randomly generated using printable characters can be guessed by a modern desktop computer in short order. This is about on par with an old encryption algorithm from the 1970s called DES, which is not supposed to be used anymore.

A password that is 16 characters long and randomly generated using printable characters is roughly as complicated as the keys used for AES (the replacement for DES), created in the 1990s. This is still considered acceptable for use today by governments and banks. It is probably good enough for us.

A password that is 32 characters long and randomly generated is about as good as an AES-256 encryption key. This is as strong as encfs gets when put in paranoid mode. Odds are it would take a determined government months, or years, to crack this type of encryption key.
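For anyone who wants to check the arithmetic behind these comparisons, here is the rough back-of-the-envelope version, assuming about 95 printable ASCII characters:

log2(95) ≈ 6.6 bits of entropy per randomly chosen character
 8 characters ≈  53 bits (the DES keyspace is 56 bits)
16 characters ≈ 105 bits (approaching the 128-bit AES keyspace)
32 characters ≈ 210 bits (approaching the 256-bit AES keyspace)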

The problem with the usable passwords above is that most people won't be able to memorize them, and who in their right mind wants to type 16 (or even 32) random characters at a password prompt?


If you aren't trying to prevent people who have access to your computer (or laptop) from seeing the encrypted files, you can keep the password in a text file and cut-and-paste it into the password prompt for encfs. Even better, EncFSmp conveniently saves the password so you don't have to keep entering it.

As to how you generate such an ugly, complicated password: that is up to you. I use a tool (that runs on both OSX and Windows) called KeePass. Conveniently, it also stores the passwords in a secure way.
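If you would rather generate one from a command line, OpenSSL can do it too; this produces a 32-character base64 string backed by 192 bits of randomness:

# 24 random bytes encode to exactly 32 base64 characters
openssl rand -base64 24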

Saturday, October 31, 2015

Building a Virtual Desktop Server

Every so often the business world cycles through the idea that end-users don't need computers on their desks and can use thin clients instead. Inevitably, this fails because the user experience is inadequate. Powerful applications run poorly or not at all, including those big Excel spreadsheets that finance people need to use. Even when things do run, the screen updates horribly slowly. So, in the end, everyone gets a desktop or laptop back.

However, the virtual desktop doesn't quite die, because it still has some uses. System administrators like them for remote management of servers. They can be adapted to function as a sort of remote connection for end-users who are outside the office but don't have a company-issued laptop. The technology that makes virtual desktops work is used by help desk staff for remote control of end-user computers. There are many edge cases that keep the technology alive.


Since everything used in this process sits outside the core operating system, these instructions should work with very little modification on almost any UNIX or UNIX-like operating system. The testing and quoted configurations in this journal entry are from FreeBSD 10.2.

Overview

It is common practice in the UNIX world to use VNC as a remote desktop client for UNIX systems running X11 where the client computer does not have an X11 server installed. It is even more common practice to tunnel the VNC traffic through ssh. This is so common that there is a modified version of the TightVNC client with the SSH capabilities built in (called ssvnc). This article identifies the specific packages and configuration needed to support a multi-user environment with SSH, VNC, and XDM.
  • VNC runs through inetd, allowing up to 64 dynamically assigned virtual desktops without VNC screen sharing.
  • XDM runs as a daemon through the rc system. The server does not need to have a GUI enabled or an X11 server installed.
  • The VNC password must be shared, BUT users are still required to authenticate individually.
  • Two-factor authentication and network session security are provided by SSH.

Caveats

With this configuration, a user is able to log in using SSH to the command-line interface and bypass the XDM login screen. However, the user still had to authenticate the ssh connection using certificates. If there is a passphrase on the certificate, then the user still performed two-factor authentication. This may be considered flexibility provided to users who are comfortable with the UNIX command prompt, while not hindering users who are not.

Bugs

  1. When the system first boots, it can take several seconds for inetd to fully start up. This may result in the system appearing to be down or non-responsive when connecting immediately after startup. Wait a few seconds and try again.
  2. The OSX VNC client (called Screen Sharing) does not gracefully exit. This can result in the XDM daemon delaying reset for the affected desktop. When this happens the login screen will not be presented to the end-user for several seconds (until Xvnc attempts to re-try the query). The easiest solution is to advise users to re-try connecting if they fail to get a login prompt.
  3. The XDM and TightVNC binary packages for FreeBSD do not identify all pre-requisites needed for those packages to function fully. A list of the missing packages is identified in this document.
  4. The SSH match group section (at the bottom) has not been tested. It may not function as intended with this setup.

Setup

The following packages must be installed. All are available as pre-built binaries in the FreeBSD package repository.

Pre-requisites Packages

  • xrdb - a prerequisite of VNC that is not identified in the VNC server package.
  • xset - a prerequisite of VNC that is not identified in the VNC server package.
  • sessreg - a prerequisite to XDM that is not identified by the package.
  • xorg-fonts - pre-requisite of XDM that is not identified in the package.
  • xsetroot - optional: used to make the login desktop look nicer.
  • xterm - optional: a prerequisite of the VNC server; however, that functionality is not used in this setup.
  • twm - optional: a prerequisite of the VNC server; however, that functionality is not used in this setup.
  • xlsfonts - optional: helps with debugging font problems.
  • xfontsel - optional: helps with debugging font problems.
Other packages will get installed automatically as prerequisites of the main packages. The ones listed above should be, but aren't, by default.

Packages to Install

  • xdm - the X11 Display Manager used to present the user with a login screen.
  • tightvnc - the VNC (Virtual Network Computing) server that provides a remote desktop display (since remote X11 is not always available).
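Assuming the standard FreeBSD binary package repository is configured, everything above (including the optional items) can be pulled in with one command; trim the list to suit:

pkg install xrdb xset sessreg xorg-fonts xsetroot xterm twm xlsfonts xfontsel xdm tightvnc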

Pre-requisite Services

A working SSH server must be installed and running on the server. It needs to be configured to allow port forwarding from clients. The default SSH server included with FreeBSD is configured this way and should be enabled in the rc subsystem already (if not, add the line sshd_enable="YES" to /etc/rc.conf).

Configuration


VNC and Inetd


In order for OSX (Apple Mac) users to connect, the VNC server must present a password. We will therefore create a simple VNC password file. This password will need to be shared among all users; it is not there for security purposes, it is used due to a technical limitation. DO NOT USE THE ROOT PASSWORD or your own. If in doubt, use the word "password". The following steps are used to create this password:
vncpasswd
cp ~/.vnc/passwd /etc/vncpasswd.nobody
chmod 0600 /etc/vncpasswd.nobody
chown nobody /etc/vncpasswd.nobody


/etc/inetd.conf


By running the VNC server through inetd, we are able to dynamically assign X11 desktops to users as they connect. This creates a service very similar to a Citrix VDI solution.
vnc stream tcp nowait  nobody /usr/local/bin/Xvnc Xvnc -inetd -query localhost -localhost -once -desktop VictorVM -geometry 1280x720  -depth 24 -rfbauth /etc/vncpasswd.nobody
All users will have a screen size of 1280x720, as defined by the -geometry parameter. This may be changed, but it must be the same for all users. Note that the most common laptop screen resolution is currently 1366x768; higher values may cause problems for some users.


XDM


The pre-packaged XDM is poorly set up and not intended to operate the way we are using it, so there are several steps that must be taken.

/usr/local/etc/rc.d/xdm


Normally xdm is run from /etc/ttys on virtual terminal 8, but this would require a local graphical console. Instead we will run it as a daemon process. Create a daemon control script for the service manager subsystem. The script should be placed in the file /usr/local/etc/rc.d/xdm, owned by root, and executable by root.

#!/bin/sh

# PROVIDE: xdm
# REQUIRE: DAEMON
# KEYWORD: shutdown

. /etc/rc.subr

name=xdm
rcvar=xdm_enable
command="/usr/local/bin/xdm"
pidfile="/var/run/xdm.pid"

load_rc_config $name
run_rc_command "$1"


/usr/local/lib/X11/xdm/Xservers


Comment out the last line of the Xservers file to prevent xdm from trying to start a local X server on the console.
# Comment out the local line so that we are only providing XDMCP support
#:0 local /usr/local/bin/X :0 


/usr/local/lib/X11/xdm/Xaccess


Enable the XDMCP listener functionality of xdm by adding a line to the end of the file:
LISTEN 127.0.0.1


/usr/local/lib/X11/xdm/Xresources


For some reason the default Xresources file attempts to make use of fonts that are not available to XDM when it is run as a daemon. There is probably a way to add those fonts with additional configuration of Xvnc, but it is easier to modify this file to make use of fonts that are available. The following are the lines that need to be changed:
xlogin*greetFont: -sony-fixed-medium-r-normal--24-170-100-100-c-120-iso8859-1
xlogin*font:       -sony-fixed-medium-r-normal--16-120-100-100-c-80-iso8859-1
xlogin*promptFont: -sony-fixed-medium-r-normal--16-120-100-100-c-80-iso8859-1
xlogin*failFont: -misc-fixed-bold-r-normal--14-130-75-75-c-70-iso8859-1
xlogin*greetFace:       Fixed-24
xlogin*face:            Fixed-16
xlogin*promptFace:      Fixed-16
xlogin*failFace:        Fixed-14:bold
Note: in the default file, these lines appear twice, and the specific set selected depends on the screen resolution. It is safest to simply change both.


/usr/local/lib/X11/xdm/xdm-config


Comment out the line that disables XDMCP. It should be the last line in the file. It is the resource defined by DisplayManager.requestPort.
! Comment out this line if you want to manage X terminals with xdm
!DisplayManager.requestPort: 0


/etc/rc.conf


Once everything else is configured, the rc system can be configured to start the daemons on system startup. The following lines should be added to /etc/rc.conf:
inetd_enable="YES" # running VNC server through INETD
xdm_enable="YES" # try to start xdm for the VNC server(s)


End User Desktop Setup


The choice of end-user desktop environment will be very purpose dependent. There are a great many choices available. The setup presented here is intended for basic testing only. A more modern desktop environment should be chosen and configured.


~/.xsession


Each user with X11 (GUI) access should have a .xsession file in his/her home directory. Here is a very basic setup intended for testing only.
#!/bin/sh

xrdb $HOME/.Xresources
xsetroot -solid grey
xterm -geometry 80x24+10+10 -ls -title "$VNCDESKTOP Desktop" &
twm
When the window manager (twm) exits, the user is logged out and the XDM login screen should reappear.


Optional: Two-factor Authentication


If this service is to be provided over the Internet, it is advisable to enforce two-factor authentication. This can be achieved by enforcing the use of certificate-based authentication for SSH. Note: the two factors are the ssh certificate (something you have) and the passphrase on that certificate (something you know). The VNC password and the UNIX password for xdm do not count as authentication factors here.


/etc/ssh/sshd_config


To enforce the use of certificate authentication with SSH requires some minor changes to the default configuration. The following identifies only the lines that need changing within that file:
# It is strongly advised to not allow direct remote-root login on all publicly facing servers
PermitRootLogin no

# enable RSA and DSA certificate authentication
RSAAuthentication yes
PubkeyAuthentication yes

# prevent ~/.rhosts authentication
IgnoreRhosts yes

# prevent username & password authentication
PasswordAuthentication no
ChallengeResponseAuthentication no

Since the passphrase on the ssh certificate cannot be technically enforced, it may be desirable to restrict users so that they are only able to use the VNC tunnel. Additions are needed to the sshd_config file:
Match Group limited-user
   AllowTcpForwarding yes
   X11Forwarding no
   PermitTunnel no
   GatewayPorts no
   AllowAgentForwarding no
   PermitOpen localhost:5900
   ForceCommand echo 'This account can only be used for SSH+VNC access.'

A group called 'limited-user' will need to be created, with all users that should not have ssh shell access added to it, as shown below. Note: this will not prevent the user from using xterm or other methods of gaining shell access through the GUI provided.
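On FreeBSD, creating the group and adding a restricted user to it can be done with the pw utility (the username here is a placeholder):

pw groupadd limited-user
pw groupmod limited-user -m someuser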

Configuring the Client(s)


OpenSSH and VNC are available for almost every current operating system in use today on desktops, tablets, and smart phones. Several of them have one or both of these included with the operating system.

Connecting from Windows

We will need to install two applications on our Windows desktop: PuTTY (for SSH) and TightVNC. Windows installers for both are available from their respective websites. Anyone who is attempting to complete the steps outlined in this article should be able to perform the software installation without difficulty.
  • PuTTY for Windows
  • TightVNC for Windows
There are alternatives to both PuTTY and TightVNC, some commercial and some free. These two are freely available and are what I choose to use on Windows.
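For the command-line inclined, PuTTY's companion tool plink can establish the same tunnel that the GUI would (the hostname and account are placeholders); once it is up, point the TightVNC viewer at localhost:5900:

rem forward local port 5900 to the VNC service on the server
plink.exe -ssh -L 5900:localhost:5900 user@vncserver.example.com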

PortableApps on Windows

PortableApps is a collection of freely available software for Windows that has been configured to run directly from a USB thumb drive or other portable media. This option is handy because you can load the appropriate applications onto your thumb drive and keep it in your pocket. Thus any available Windows computer that you can run the applications from becomes useful as a client for your virtual desktop server.

Connecting from Apple OSX (iMac, Mac Mini, Macbook, and friends)

The SSH and VNC clients are part of the base operating system. No additional software needs to be installed.
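A minimal sketch (the hostname and account are placeholders): open Terminal, establish the tunnel, then launch the built-in Screen Sharing client against the local end of it. Local port 5901 is used here to avoid colliding with a Screen Sharing server that may already be listening on 5900:

ssh -L 5901:localhost:5900 user@vncserver.example.com
open vnc://localhost:5901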

Connecting from Apple iOS (iPad and iPhone)

Guess what? There is an app for that. There are in fact many apps for it. This example will use the iSSH app. It may not be the best. It was the first one I ran across that was free, worked, and didn't display ads.

Connecting from Chrome OS (Chromebook)

Although Google tries desperately to hide it from the user, the fact is that Chrome OS is built on Linux and you can simply use ssh X11 tunnelling exactly like it is described in the FreeBSD and Linux section (below). The only hard part is getting to the ssh client.

Connecting from FreeBSD and Linux


FreeBSD and Linux are both UNIX-like operating systems and as such will most likely be using the X-Window System for their graphical user interface. While one could install the TightVNC client and configure the ssh tunnel in the same manner as described in the Apple OSX section, it is probably far less troublesome to simply tunnel the X11 protocol (used by the X-Window System) through the ssh connection directly instead of having the VNC intermediary.

ssh -X remotehostname path/to/application 

That's all there is to it. The -X (that's a capital X) tells SSH to tunnel the X11 protocol. For consistency, the setup and use of the TightVNC client is described in the OSX section above. It would be used when connecting to an MS Windows or Apple OSX server remotely.

What about Android?


I haven't found a decent combination of SSH and VNC that work adequately on Android. The small screen size makes it worse. It is probably best to use the Android phone as a WiFi access point and connect using something with a bigger screen.

Conclusion

It is somewhat ironic that the X Window System can be run on all the client operating systems listed in this article (except perhaps Android). In fact, the software needed is freely available. However, VNC has the advantage that a VNC server can be installed on Windows, and one is already part of OSX. VNC has the further advantage that it can be configured to mirror the actual desktop, which helps with remote support.

Wednesday, October 28, 2015

Risk Assessments for System Administrators

Risk assessments are one of those things that system administrators at larger companies are often asked to get involved in. Most system administrators consider this a waste of time that could be better spent keeping the systems running. Even when the system administrators aren't directly involved in the assessment, they invariably get more work as a result of someone else's risk assessment.
In this article, I shall attempt to help the system administrators understand why these risk assessments happen and how being involved in a risk assessment can help the system administrator do a better job.
A risk assessment is a formal process, typically carried out by people who sit in an unusual position in the company called IT governance. These people have the odd job of trying to translate corporate management and business needs into information technology terms and back again. The risk assessment process itself is a fancy, formal process meant to identify all the assets of the company and the risks against them. It then takes those lists and sets priorities.
In my opinion, one of the big failings of the process is that most of the formal processes talk about enumerating the technology in terms of systems. The real assets are not the systems but the data in those systems. This article isn’t meant to debate the merits of the risk assessment process, but to help people who get stuck at the end of it better understand the process.
So, step one is to enumerate all the systems (at least those in the target scope of the risk assessment). Step two is to enumerate all the sources of risk to those systems.
How does this help the system administrator? For starters, there are going to be systems on the edges of the “target scope” like unofficial admin systems and old network hardware. An admin that is involved from the start can either down-play or highlight these systems to get them into (or keep them out of) the risk assessment. As part of the risk assessment you have the opportunity to argue for more resources to upgrade/replace those systems or to keep management from noticing them. 
Be careful here. This can backfire. Get an unofficial admin system noticed and it might be taken away instead of replaced. Get it noticed and there might be a lot more work maintaining it once it’s official and needs to go through change control. Keep it hidden and if it becomes the source of a problem you might get in trouble for not identifying it. The trade-offs can be rough.
Next up is risks. Risks are another trouble spot. They can be anything from elite hackers to nearby train tracks. The key to including a particular risk in the risk assessment is twofold. First, there needs to be good third-party documentation about it: articles in professional magazines that the management types recognize, or white papers published by information security and audit companies. Second, there need to be details that support how likely this risk is to be a problem: industry reports of recent events or statistical papers on historical occurrences. Again, sources that management understands are important.
Good sources of material to management are not the same as good sources to technical people. In fact, they are often quite the opposite. Wikipedia would not usually be considered good from a management perspective, but the sources that the Wikipedia article references might be. Likewise, 2600 magazine and Wired are not going to be good sources to management. On the other hand, Business Week, the Wall Street Journal, and white papers published by Deloitte, Trustwave, or most of your corporate vendors will be sources that management trusts.
These sources are not an exhaustive list and may not be accurate for all organizations. If you can find out where the management in your organization get their news, that’s a good start.
The key to all of this for the system administrator is to identify risks to specific systems and point them out to the people writing the risk assessment. More risks and risks with a higher likelihood are likely to get a system more attention.

Understanding what went into the risk assessment report helps you understand why management is putting increased focus on some things and less focus on others. If that key system that keeps everything running smoothly is ignored, there may not be sufficient resources to keep it that way. If a minor system gets too much focus, the administration staff may find they are spending too much time on something nobody really cares about.

Sunday, April 12, 2015

Configuring the FreeBSD Periodic Subsystem

As mentioned in the post about the daily periodic script, there are some scripts that run daily to clean up various legacy subsystems. Some system administrators may view these scripts as unnecessary and not wish to run them. FreeBSD provides an easy way to modify the behaviour of the periodic subsystem through a simple configuration file.

In the earlier post, it was noted that the system announcements and rwho sub-systems are somewhat legacy and probably not used (or even enabled) on modern installations, particularly on servers that are not intended for end-user login.

The syntax of the file is very simple and follows the same structure as /etc/rc.conf on FreeBSD. Lines that begin with a pound/hash symbol (#) are treated as comments. Blank lines are ignored. All other lines follow the variable=value syntax. The /etc/periodic.conf file should contain only overrides of the default values found in /etc/defaults/periodic.conf.

periodic.conf

# Local configuration for periodic sub-system.
# This file overrides /etc/defaults/periodic.conf
# for more information: man periodic.conf

# disable archaic system messages cleanup since it is not in-use
daily_clean_msgs_enable="NO"

# disable rwho database cleanup since the rwho daemon isn't running
daily_clean_rwho_enable="NO"

# enable daily cleanup of /tmp
daily_clean_tmps_enable="YES" 

The above example of a customized periodic.conf file makes three changes to the defaults:

  • disables the section in the daily output entitled, "Cleaning out old system announcements:"
  • disables the section in the daily output entitled, "Removing stale files from /var/rwho:"
  • enables a section "Removing old temporary files:"
The first two were mentioned in the earlier post as being legacy and probably not in use. The server is not intended for end-user login and the administrator does not make use of the system announcements sub-system (part of the mail subsystem), so there will not be any system announcements to clean. The rwho daemon (rwhod) is not enabled, so there will not be any entries in /var/rwho. Thus, on this particular system, it is safe to disable these two daily scripts.

The third change is to enable cleanup of /tmp. This is probably not needed on a system without end-user access, because only applications, scripts, and the system administrator should ever be using the /tmp filesystem. However, it is possible that some application or script may misbehave and leave files in /tmp when they are no longer needed, and the system administrator might similarly forget to clean up. Thus the periodic script has been enabled to keep things tidy.

The daily tmp cleanup script will, by default, remove any files found in /tmp that are more than 3 days old. This period can be adjusted by setting the daily_clean_tmps_days variable in /etc/periodic.conf, as shown below.
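For example, to keep temporary files for a week instead of the default three days, add the following override to /etc/periodic.conf:

# keep files in /tmp for 7 days before the daily script removes them
daily_clean_tmps_days="7"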

Conclusion

In its default state, FreeBSD's periodic sub-system is pretty well self-maintaining. It does a reasonable job of keeping the system clean, backing up key system files, and providing the system administrator with daily reports. Although adjustments may not be needed in many circumstances, a system administrator will find value in understanding how to make changes to this sub-system when the need arises.