Keeping work and personal computers separated

I try as much as possible to keep my personal stuff separate from the work stuff. Both California and Utah law are clear that what I develop on my own time and my own hardware is mine, as long as it is not related to my day job (that’s the big difference with French law, under which I own what I develop on my own time even if it is in my employer’s line of business or on its computers). Even so, that did not prevent one of my former employers from trying to claim ownership of what was rightfully mine. Because it is very expensive to get justice in the USA, keeping things as separate as possible from the beginning seems like a good idea.

The best way to do that is simply to not work during one’s free time on anything that could have potential business value – these days, I spend a lot of time learning about cryptography, control system engineering and concurrent systems validation. But keeping things separated still creates some issues, like having to carry two laptops when traveling. I did this twice for IETF meetings, and it is really no fun.

The solution I finally found was to run my personal computer as a virtual machine, stored on an encrypted external hard drive, on the company laptop. My employer provided me with a MacBook, which has nice hardware but whose OS is not very good. I had to put a reminder in my calendar to reboot it each week if I did not want to see it regularly crash or freeze. Mac OS X is a lot like Windows, except that you are not ashamed to show it to your friends. Anyway, here’s how to run your own personal computer on your employer’s laptop:

First you need a portable hard drive, preferably one that does not require a power supply. I use the My Passport Ultra 500GB with the AmazonBasics hard drive carrying case. The next step is to install and configure VirtualBox on your laptop. You will need to install the Oracle VM VirtualBox Extension Pack if, like me, you need to use in your personal computer a USB device that is connected to the employer laptop (in my case, a smart-card dongle that contains the TLS private key used to connect to my servers). The next step is to change the owner of your hard drive (unfortunately, you will have to do that each time you plug in the hard drive):

sudo chown <user> /dev/disk2

After this you can create a raw vmdk file that will reference the external hard drive:

cd "VirtualBox VMs"
VBoxManage internalcommands createrawvmdk -filename ExternalDisk.vmdk -rawdisk /dev/disk2

After this, you just have to create a VM in VirtualBox that uses this vmdk file. I installed Debian sid with full-disk encryption, which took the better part of a day, as the whole disk has to be encrypted sector by sector. I also installed gogo6 so I could have an IPv6 connection in places that still live in the dark ages. Debian contains the right packages (apt-get install virtualbox-guest-utils), so the X server in the personal computer automatically adapts its display size to the size of the laptop screen.

To restore the data from my desktop, I configured VirtualBox on it too, so I could also run the personal computer there. Then, thanks to the same Debian packages, I was able to mount my backups as a shared folder and restore all my data in far less time than an scp command would take.

And after all of this, I had a secure and convenient way to handle my personal email without having to carry two laptops.

Decrypting SSL sessions in Wireshark

Debugging secure communication programs is no fun. One thing I learned during all these years developing VoIP applications is to never, ever trust the logs to tell the truth, especially the ones I put in the code myself. The truth is on the wire, so capturing the packets and being able to analyze them, for example with Wireshark, is one of the most important tools a communication developer can use. But the need for secure communications makes this tool difficult to use.

As said before, the first solution would be to log the packets before they are encrypted or after they are decrypted, but as it is probably the same idiot who wrote the code being debugged who will write the logging code, there is little chance of getting something useful as a result.

Another solution is to have a switch that permits running the software under test in a debug mode that does not encrypt and decrypt the communications. A first problem is that this increases the probability of forgetting to turn the switch off after debugging, or of using the wrong setting in production. Another problem is that the secure and unsecure modes may very well use different code paths, and so may behave differently. And lastly, some communication protocols, especially the modern ones (for example RELOAD and RTCWEB), do not have an unsecure setting, and for very good reasons.

A slightly better solution, which reduces the difference between code paths and works with secure-only protocols, is to use a null cipher. But for security reasons both sides must agree beforehand to use a null cipher, which in fact probably increases the probability that someone forgets to switch the null cipher off after a test.

So the only remaining solution is to somehow decrypt the packets in Wireshark. The standard way of doing that is to install the private RSA key in Wireshark, which immediately creates even more problems:

  • This cannot be used to debug programs in production, because the sysadmin will never agree to hand over the private key.
  • This does not work if the private key is stored in a hardware token, like a smartcard, as, by design, it is impossible to read the private key from these devices.
  • Even with the private key, the SSL modes that can be used are limited. For example a Diffie–Hellman key exchange cannot be decrypted by Wireshark.

Fortunately, since version 1.6 Wireshark can use SSL session keys instead of the private key. Session keys are keys that are established with the help of the private key but are used only for a specific session, so disclosing them does not disclose the private key. This solves most of the problems listed above:

  • Sysadmins can disclose only the session key related to a specific session.
  • Session keys are available even if the private key is stored in a hardware token.
  • Session keys are the result of, e.g., a Diffie–Hellman key exchange, so there is no need to restrict the SSL modes for debugging.

Now we just need the program under test to store the session keys somewhere Wireshark can use them. For example, the next version of NSS (the security module used by Firefox, Thunderbird and Chrome) will have an environment variable that can be used to generate a file that can be used directly by Wireshark (see this link for more details).
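For reference, the session key file that Wireshark reads is plain text, one SSL/TLS session per line. The exact line types depend on the Wireshark and NSS versions; the widely used variant maps the client random of the handshake to the session’s master secret, both hex-encoded, along these lines:

```
# One session per line: label, client random (64 hex digits),
# master secret (96 hex digits), separated by spaces.
CLIENT_RANDOM <client_random_hex> <master_secret_hex>
```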

Adding support for this format in Java would require maintaining a modified build of Java, which can be inconvenient. A simpler solution is to process the output of the ssl,keygen debug option (-Djavax.net.debug=ssl,keygen). The just-uploaded Debian package named keygen2keylog contains a program that does this for you. After installation, start Wireshark in capture mode with the name of the SSL session key file that will be generated as a parameter, something like this:

$ wireshark -o ssl.keylog_file:mykeys.log -k -i any -f "host"

(Remember that you do not need to run Wireshark as root to capture packets if you run the following command after each update of the package: sudo setcap 'CAP_NET_RAW+eip CAP_NET_ADMIN+eip' /usr/bin/dumpcap)

Then you just need to pipe the debug output of your Java program to keygen2keylog to see the packets being decrypted in Wireshark, e.g.:

$ java -Djavax.net.debug=ssl,keygen -jar mycode.jar | keygen2keylog mykeys.log

And the beauty of this technique is that the packets are decrypted as they are captured.

RELOAD: Test server

It took two very frustrating weeks, but I finally managed to install a public RELOAD Configuration and Enrollment test server. The frustration was a consequence of my self-imposed list of requirements: full implementation of the RELOAD spec, IPv4 and IPv6 support, JMX management over TLS with client certificates, and the private key for the RELOAD CA stored in a PKCS#11 token. The code is still not perfect, but it at least fulfills all the requirements.

The libreload-java library contains everything needed to process the data returned by the server, but it is possible to use command-line tools to have a look at the configuration file and to generate RELOAD certificates. First step: find the IP address of the configuration server for the “” overlay using the DNS:

$ host -t SRV has SRV record 40 0 443
$ host has address has IPv6 address 2604:3400:dc1:41:216:3eff:fe5b:8240
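In an SRV record, the values after the record name are priority, weight, port and target (RFC 2782), so the “40 0 443” above means priority 40, weight 0, port 443. In zone-file form, such a record would look like the sketch below; the service label and the domain names are placeholders of mine, not the overlay’s real ones:

```
; Hypothetical SRV record: priority 40, weight 0, port 443
_reload-config._tcp.example.org. 3600 IN SRV 40 0 443 configuration.example.org.
```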

Next step, retrieving the current version of the configuration file:

$ curl --resolve

The configuration file contains the current URL of the enrollment server.

The next step is to generate an RSA key pair:

$ openssl genrsa -out cert.rsa
$ openssl pkcs8 -in cert.rsa -out cert.key -topk8 -nocrypt

Then we can generate a certificate request in DER form:

$ openssl req -new -key cert.key -outform der -out cert.req
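Put together, and made non-interactive with -subj, the key and request generation above can be scripted as follows; the subject name and the 2048-bit key size are my own choices here, not requirements of the enrollment server:

```shell
# Generate the RSA key pair (2048 bits is an assumption, not a requirement)
dir=$(mktemp -d)
openssl genrsa -out "$dir/cert.rsa" 2048 2>/dev/null
# Convert it to unencrypted PKCS#8
openssl pkcs8 -in "$dir/cert.rsa" -out "$dir/cert.key" -topk8 -nocrypt
# Create the certificate request in DER form, with a placeholder subject
openssl req -new -key "$dir/cert.key" -subj "/CN=example-user" \
  -outform der -out "$dir/cert.req"
# Sanity check: the request must verify against its own public key
openssl req -noout -verify -inform der -in "$dir/cert.req"
```

The last command reports that the request verifies (the exact wording varies with the OpenSSL version).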

The certificate request can be sent to the enrollment server, which will use it to generate a certificate:

$ wget "" --post-file=cert.req --header "Content-Type: application/pkcs10" --header "Accept: application/pkix-cert" -O cert.der

Note that the password is mandatory but any password can be used at this time, as there is no user management in the server yet. The certificate returned will contain one user name and one Node-ID, but a certificate with multiple Node-IDs can be requested with the nodeids= parameter, as specified in version -15 of the RELOAD I-D. The content of the certificate can be displayed with something like this:

$ openssl x509 -noout -text -inform DER -in cert.der

I also released a new version of the libreload-java package that contains some bug fixes and improvements related to version -15 of the I-D. For example the library can now generate a signed configuration file, even if the test server is not using it yet.

Update 6/2/2011: Use curl instead of wget so we can force the IP address and port to the result of the SRV query.

Update 8/13/2012: See this post for the updated version of the servers.

Client certificates for a distributed development team

It’s that time of the decade again: I am building an environment for the development of a new Internet service. The previous time was at 8×8, where the environment was designed by trial and error over a period of 6 years, between 2001 and 2006. Because I am now rebuilding an environment from scratch, I can spend a little bit of time fixing some of the issues that were not easy to fix in an existing environment.

One of the basic principles of the environment at 8×8, and of the new one, is that they are designed to permit people to work remotely. My very first team at 8×8 had some developers in France, some in Canada, and the rest on the West Coast of the USA, so there was no choice other than to build a distributed environment. I know that most startups try to keep developers as close as possible, but having worked in both types of environment, I am now convinced that this setting gives a false impression of being efficient. For one, this kind of setting increases the risk of micro-management, which I irrevocably dislike (that’s a story for another time, but escaping micro-management was one of the reasons I moved to the USA in the first place). But a distributed environment has one nice side effect which in the end makes all the difference between a good service and the usual crap that most people mistake for good engineering: documentation. Shouting explanations over the wall of a cubicle is probably appealing for lazy people, but it does not make a product or service better. Being forced to write things down (in an email, in a wiki, in an IRC channel…) is probably a little more work, but it represents an immediate gain for the product. A recent Slashdot question sought advice about seating arrangements for a team of developers. My advice would be simple: be sure that there is at least a distance of 5 miles between each desk.

Working in a distributed environment probably means a VPN for most people. It happens that I do not like VPNs very much, mostly for two reasons. First, VPNs are generally proprietary products that work only on Windows, and although my developers are free to use whatever OS and tools they prefer, Windows and MacOS are banned from my computers (or run only in a virtual machine, for testing purposes. VMWare is the condom of the Internet). The second reason is that having developers work inside a secure environment does not make them good at handling security issues. If a developer is not capable of making her own computers secure, how can she develop secure software? So the solution we used at 8×8 was SSH tunnels. That worked fine, but it is still a kind of VPN and it is not easy to deploy under Windows. The solution for this new environment is to use client certificates.

Client certificates have the advantage of working everywhere SSL/TLS works, and they are a great improvement over passwords. Passwords are generally too easy to guess, and when they are good, people reuse them on multiple websites, websites that can leak the good passwords. Client certificates do not have this issue, as the key is not stored on the website and is not guessable. The certificate itself must be protected by a password, but as this password does not have to leave the computer, it can be a good, unique password that never changes, so people do not have to write it down.

The first step in installing a client certificate infrastructure is to install a server certificate. The easiest way is to buy one from the registrar of the domain (I use a domain name for development that is different from the corporate domain name. Code names do not have to change, but product names and even company names can and will). The configuration in Apache is as follows:

SSLEngine on
SSLProtocol all -SSLv2
SSLCertificateFile /etc/ssl/private/server.crt
SSLCertificateKeyFile /etc/ssl/private/server.key

Using a real (i.e. not self-signed) server certificate is easier because we do not have to distribute the CA files to each developer.

The next step is to create a CA for the client certificates:

$ openssl genrsa -out ca.key 1024
$ openssl req -new -key ca.key -out ca.csr
$ openssl x509 -req -days 365 -in ca.csr -signkey ca.key -out ca.crt

and install it in Apache:

SSLCACertificateFile /etc/ssl/private/ca.crt
SSLVerifyClient require

After this, each developer can create a Certificate Signing Request and send it to the administrator of the server:

$ openssl genrsa -out client.key 1024
$ openssl req -new -key client.key -out client.csr

The administrator checks the request, then creates the client certificate and sends it back to the developer:

$ openssl req -noout -text -in client.csr
$ openssl x509 -req -days 365 -CA /etc/ssl/private/ca.crt -CAkey /etc/ssl/private/ca.key -CAcreateserial -in client.csr -out client.crt
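The whole exchange can be rehearsed locally in one non-interactive script; the subject names below are placeholders, and -subj replaces the interactive prompts of the commands above:

```shell
dir=$(mktemp -d)
# Administrator: CA key plus self-signed CA certificate in one step
openssl genrsa -out "$dir/ca.key" 2048 2>/dev/null
openssl req -new -x509 -days 365 -key "$dir/ca.key" \
  -subj "/CN=Dev CA" -out "$dir/ca.crt"
# Developer: key and certificate signing request
openssl genrsa -out "$dir/client.key" 2048 2>/dev/null
openssl req -new -key "$dir/client.key" -subj "/CN=developer" \
  -out "$dir/client.csr"
# Administrator: check the request, then sign it with the CA key
openssl req -noout -text -in "$dir/client.csr" >/dev/null
openssl x509 -req -days 365 -CA "$dir/ca.crt" -CAkey "$dir/ca.key" \
  -CAcreateserial -in "$dir/client.csr" -out "$dir/client.crt" 2>/dev/null
# The issued certificate must chain back to the CA
openssl verify -CAfile "$dir/ca.crt" "$dir/client.crt"
```

The last command should report the client certificate as OK, which confirms that the chain is set up correctly before distributing anything.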

The developer can then use the certificate directly (e.g. with wget or Debian repositories) or can convert it to a PKCS#12 file that can be imported in a web browser:

$ openssl pkcs12 -export -clcerts -in client.crt -inkey client.key -out client.p12
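A quick way to check the export is a local round trip: build a throwaway key and self-signed certificate, export them to PKCS#12, and read the file back. The password "changeit" is a placeholder for whatever the developer chooses:

```shell
dir=$(mktemp -d)
# Throwaway key and self-signed certificate, just for the round trip
openssl req -x509 -newkey rsa:2048 -nodes -days 365 -subj "/CN=developer" \
  -keyout "$dir/client.key" -out "$dir/client.crt" 2>/dev/null
# Export key and certificate to a browser-importable PKCS#12 file
openssl pkcs12 -export -clcerts -in "$dir/client.crt" \
  -inkey "$dir/client.key" -out "$dir/client.p12" -passout pass:changeit
# Read the file back; it should contain exactly one certificate
openssl pkcs12 -in "$dir/client.p12" -passin pass:changeit -nokeys 2>/dev/null \
  | grep -c "BEGIN CERTIFICATE"
```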

Running code from strangers

Why would anybody download and run a binary program from a perfect stranger? I was looking at the Apache logs to see if somebody had downloaded a jar file I uploaded yesterday when this question suddenly hit me. I did not release the source code, for lack of time to clean and package it, so potential users cannot check the source to verify that my program is not a Trojan horse. But source code or not, nobody will do a code review anyway. So what can a user do to check that a program really does what its author says?

The program is in Java, so the solution is to run it under a security manager. This is really easy – just pass "" to the JVM and any access to a security-sensitive resource will throw an exception. Because this verification is done by the JVM, there is no way for the program to bypass it. When the program runs without a security manager, I just display a warning message like this:

You should not run random Java programs downloaded from the Internet without security manager.
Use the --help-security option to know how to do this.

Note that running the program under a security manager only once or twice to check that it is not a Trojan horse is no better than doing nothing; if I wanted to do something bad in my program, I would be sure not to do it when a security manager is installed, so as not to be detected.

Then comes the tedious task of writing a policy file that authorizes the program to access the resources it is supposed to access, and no more. One way of doing this is to add "" to the JVM command line and to add (or not) a permission to the policy file each time an access denial is found. The problem is that one needs to exercise absolutely all the options of the program to be sure to trigger all the security accesses, which is not very practical. The solution I chose for my program is to add an option that generates the policy file needed to run the program, with the given parameters, under the security manager. Here’s an example:

$ java -jar id2kindle.jar --generate-policy draft-ietf-behave-turn-uri-01.txt
grant {
  permission "draft-ietf-behave-turn-uri-01.txt", "read";
  permission "", "write";
};

The standard output can be redirected to the policy file, which is then used to run the program:

$ java -jar id2kindle.jar --generate-policy draft-ietf-behave-turn-uri-01.txt >id2kindle.policy
$ cat id2kindle.policy
$ java -jar id2kindle.jar draft-ietf-behave-turn-uri-01.txt
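For reference, a complete policy file has the shape below. The permission class and the output file name here are my guesses, since the generated output above elides them, but java.io.FilePermission is the standard permission class for file access; such a policy is normally activated with the stock -Djava.security.manager and -Djava.security.policy=id2kindle.policy JVM options:

```
// Hypothetical reconstruction of a generated policy file;
// the real one comes from the --generate-policy option above.
grant {
  permission java.io.FilePermission "draft-ietf-behave-turn-uri-01.txt", "read";
  permission java.io.FilePermission "draft-ietf-behave-turn-uri-01.mobi", "write";
};
```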

As long as the content of the policy file is verified before running the program, nothing bad can happen. Well, except that most people will run it at least one time without a security manager, and one time is all that is needed to do bad things.