Things I learned last week (9)

Maildir conversion with Thunderbird

Because a full backup of my desktop takes more than 7 hours, I do only incremental backups during the week. This also makes it possible to keep more than two weeks of daily backups online, which is really helpful when a file is accidentally deleted or modified. I try to keep these incremental backups as small as possible by using various tricks. For example, I use a snapshot for each of my Virtualbox instances during the week and merge them the day before the full backup. Similarly, I run a garbage collection (which compresses the objects into a pack file) on all my git repositories just before a full backup, so that an automatic garbage collection is not triggered before an incremental backup, and so on. But even with these tricks, each incremental backup is at least 10 GB, mostly because of my mailer, which uses mbox as storage, i.e. one big file for each folder, so receiving one email triggers the backup of the whole folder.
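
As an illustration of the git part (a sketch only; the location of the repositories is an assumption):

# repack every repository under ~/src the day before the full backup,
# so no automatic gc fires between incremental backups
for repo in ~/src/*/.git; do
    git --git-dir="$repo" gc --quiet
done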

There is a better type of storage for this case, maildir, where each email is stored in a separate file (in fact an even better storage would be one file per new email, with all the current emails packed into one file when the “Compact Folders…” command is run). Changing an account’s storage to maildir is simple in Thunderbird: search for the storeContractId property of the account, and change it from “@mozilla.org/msgstore/berkeleystore;1” to “@mozilla.org/msgstore/maildirstore;1”. The problem is that there is currently no automatic process to also convert the existing folders to the new format. Here’s a small script that converts an existing directory of mbox files into a new directory of maildir files:

#!/bin/bash
# usage: <script> <mbox directory> <maildir directory>
# recreate the folder hierarchy in the destination directory
find "$1" -type d -exec bash -c 'mkdir -p "$0/${1#*/}"' "$2" {} \;
dir=$(pwd)
# convert each mbox (located via its .msf index file) with mb2md
find "$1" -type f -name '*.msf' -exec bash -c 'f=${2%.*}; ./mb2md -s "$0/$f" -d "$0/$1/${f#*/}"' "$dir" "$2" {} \;
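
For reference, a hedged example of how the script might be invoked, assuming it is saved as mbox2maildir.sh next to the patched mb2md and run from the directory above the mail folders (all names here are only illustrative):

# illustrative names: Mail/pop.example.org holds the mbox folders,
# Maildir-new is where the converted maildir folders will be created
./mbox2maildir.sh Mail/pop.example.org Maildir-new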

The mb2md program should be downloaded from http://www.ulduzsoft.com/2012/02/your-personal-gmail-like-mail-system-converting-emails/, but you need to apply the patch from http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=578084 (the mb2md program installed from the Debian mb2md package has some issues with malformed mbox files).

After converting the directory, you have to copy some of the files from the original directory to the new one (msgFilterRules.dat, popstate.dat, rules.dat). Modify the storeContractId property, restart Thunderbird or Icedove, and your emails should be available once the index files are rebuilt.
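
For reference, the property ends up as a line like this in the profile’s prefs.js (the server number below is hypothetical; it depends on the account being converted):

// hypothetical entry; "server1" varies per profile and account
user_pref("mail.server.server1.storeContractId", "@mozilla.org/msgstore/maildirstore;1");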

Things I learned last week (8)

Android devices

Murphy’s Law adapted for Android development: the udev rules of the computer you are currently using never contain the USB ID of the Android device under test. Probably because the list of Android devices is continuously growing, it is not possible to download a set of rules that stays up to date, but as Google provides a web page containing this list of USB IDs, it is easy to write an XSLT document to process the HTML table:

<?xml version="1.0" ?>
<!-- generate-rules.xslt -->

<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="text" />
  <xsl:template match="text() | comment()" />
  <xsl:template match="table">
# Generated by generate-rules.xslt
    <xsl:apply-templates />
  </xsl:template>
  <xsl:template match="tr[1]" />
  <xsl:template match="tr">
# <xsl:value-of select="td[1]" />
SUBSYSTEM=="usb", ATTR{idVendor}=="<xsl:value-of select="td[2]" />", MODE="0666", GROUP="plugdev"
  </xsl:template>
</xsl:stylesheet>

The rules file can then be automatically generated, for example in a cron job, with a command like this:

curl https://developer.android.com/tools/device.html | xsltproc --html generate-rules.xslt - >51-android.rules

You probably need to reload udev after this.
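
A minimal sketch of the follow-up steps, assuming the generated file is installed in the usual /etc/udev/rules.d/ location:

# install the generated rules and ask udev to reload them
sudo cp 51-android.rules /etc/udev/rules.d/
sudo udevadm control --reload-rules
sudo udevadm trigger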

Installing CPN Tools under Linux

I recently upgraded CPN Tools to version 3.9.3 and unfortunately this time the instructions for the installation under Linux were missing some steps:

  • Download the JRE for Windows from “https://www.java.com/en/download/manual.jsp” and install it under Wine with “wine jre-7u25-windows-i586.exe”. Install the Linux simulator.
  • After this, download the new version of CPN Tools and install it with “wine cpntools_3.9.3.exe”.
  • Go to the simulator directory (~/.wine/drive_c/Program Files/CPN Tools/cpnsim) and make the right files executable with “chmod 775 cpnmld.x86-linux run.x86-linux”.
  • In the CPN Tools directory, modify the default.xml file so that “enterss.ogpath” contains “statespacefiles/”.

There are two background jobs to start before the GUI; the first one is the simulator. To keep an eye on the log, I run the following script in a console:

#!/bin/bash

trap 'kill $pid; wait; exit' INT TERM
cd "$HOME/.wine/drive_c/Program Files/CPN Tools/cpnsim" || exit 1
./cpnmld.x86-linux -d 2098 ./run.x86-linux cpnmld.log &
pid=$!
tail -F cpnmld.log

The second background job is the simulator extensions server that can be started with the following command:

java -jar "$HOME/.wine/drive_c/Program Files/CPN Tools/extensions/SimulatorExtensions.jar" &

The CPN Tools GUI can finally be started with the following command:

wine "$HOME/.wine/drive_c/Program Files/CPN Tools/cpntools.exe"

One thing that does not work is the “Save state space report” command: for some reason the open syscall receives the Windows name of the file (“C:…”) instead of the equivalent Linux name. But the state space queries work fine, so that’s not really a big issue.

Things I learned last week (7)

Nexus 10, OTG and Pogo charger

My Mac mini was used for a long time as a MIDI controller, but I needed it for something else, so I decided to use my Nexus 10 tablet instead: with the help of an OTG cable, the tablet can power the MIDI adapter (an M-Audio Midisport 2×2). Because the MIDI adapter draws power from the tablet battery, I planned to use a Pogo cable to charge it at the same time. Seems easy enough, right?

Unfortunately the Nexus 10 stops charging from the Pogo cable as soon as an OTG cable is connected to the USB port, but strangely the Nexus 10 still displays a charging state even though it is not actually charging (I checked with an ammeter), which looks like a bug in the code. I would also classify the fact that it stops charging as a bug.

So now I’ll have to remember to unplug the OTG cable (not just the MIDI adapter) after each session so the tablet can charge, but unfortunately that’s not all. While doing those tests, I discovered that when the OTG cable is disconnected, the Pogo charger does not go back to full charge (~1.6 A) but to the nominal USB charge (i.e. 0.5 A). That makes a big difference in the time needed to charge the tablet. So not only do I need to remember to unplug the OTG cable, I also have to unplug and replug the Pogo cable so the full charge mode is used. Looks like another bug in the charging logic.

Things I learned last week (6)

Default route in IETF IPv6

Last week was the IETF meeting in Berlin, which provides a very good IPv6 network. Unfortunately for me there is an issue somewhere in my laptop which results in the IPv6 default route not being renewed automatically. After multiple IETF meetings spent periodically rebooting my laptop, I finally found a workaround. Here’s what the “ip -6 r” command displays:

2001:df8:0:64::/64 dev eth0 proto kernel metric 256 expires 2591894sec
fe80::/64 dev eth0 proto kernel metric 256
default via fe80::64:2 dev eth0 proto ra metric 1024 expires 1694sec
default via fe80::64:3 dev eth0 proto ra metric 1024 expires 1694sec
default via fe80::64:1 dev eth0 proto ra metric 1024 expires 1694sec

And here’s a set of commands that fix the problem; the default route added manually at the end has no lifetime, so it does not expire like the RA-learned ones:

# ip -6 r del default via fe80::64:1
# ip -6 r del default via fe80::64:2
# ip -6 r del default via fe80::64:3
# ip -6 r add default via fe80::64:1 dev eth0

Things I learned last week (5)

IPv6 tunnel

Some of last week was spent preparing for the IETF meeting in Berlin. I explained previously that I use a secure IPv6 connection to my mail server to send and retrieve emails, which creates its own set of issues when travelling. While the IETF provides a great IPv6 network on site, there is very little hope of finding something similar in the various places one has to stay on the way to and from this event: hotels and airports are generally not IPv6 enabled, so a tunnel solution is required. I had a very good experience with Hurricane Electric before Comcast went IPv6 native, but their technology does not help much when the laptop is behind a NAT that cannot be controlled. So in this case I use the service provided by gogo6 (aka freenet6). I use their gogoCPE behind my AT&T NAT and their software client on my laptop, at least since last week, when I finally found the solution to the problem I had configuring it. Probably because I was using a WiFi connection instead of the wired connection, the gogoc daemon got stuck until I ran the following command and answered the prompt:

sudo /usr/sbin/gogoc -n -s wlan0

Backup restoration

A full backup is obviously useful when something terribly wrong happens to your disk (in the 90’s I lost a disk full of code, and as I define stupidity as making the same mistake twice, I have since been very careful to always have multiple levels of protection for my code), but having one also helps with day-to-day tasks, for example when a code modification goes in the wrong direction and restoring the previous day’s backup saves multiple hours of work.

Another benefit I discovered some time ago is in preparing my laptop before a trip. I like to carry my whole email and development history with me, so that’s a lot of data to copy from one computer to another. Initially I created a giant tarball on a flash drive and then uncompressed it on the target computer, but that took forever. Now I just reuse my backup. On the day before I leave, I restore on my laptop the directories I need directly from the last full backup (i.e. from the last Sunday). The improvement I made last week is that I then change the configuration of my mailer so the emails are no longer deleted from my mail server. During the night, the incremental backup saves my new emails and the new configuration, and it then takes less than 5 minutes before leaving for the airport the next day to restore the incremental backup, with the guarantee that during my trip all my emails will stay on the server for when I am back in my office. That means less wasted time, and less stress.

Things I learned last week (4)

Passport disk

Two weeks ago I tried to understand why my backup disk did not stop spinning after the configured 30 minutes. I finally found the problem: smartd (the SMART daemon) is configured to poll all the disks at a 30-minute interval, and because CHECK POWER MODE is not implemented on this disk, smartd was preventing it from spinning down. Unfortunately smartd cannot be configured to use a different polling interval per disk, so my only solution was to completely disable polling for this disk – which is not possible directly, so I had to configure an explicit list of disks to poll. That brought another issue, which is that the drive names (/dev/sda, etc.) are not stable between reboots, so I had to find the mapping in /dev/disk/by-id/, which finally gave me the correct configuration in smartd.conf:

DEFAULT -a -m root -M exec /usr/share/smartmontools/smartd-runner
/dev/disk/by-id/ata-WDC_WD10EURS-630AB1_WD-WCAVXXXXXXX
/dev/disk/by-id/ata-WDC_WD10EFRX-68JCSN0_WD-WCC1XXXXXXXX
/dev/disk/by-id/ata-ST3750640AS_3QD0TXXX
/dev/disk/by-id/ata-ST3750640AS_3QD0RXXX
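
For reference, the mapping between the stable names and the current kernel devices can be read directly from the symlinks, for example:

# list the stable by-id names and the /dev/sdX device each one points to
ls -l /dev/disk/by-id/ | grep -v -- -part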

That does not solve my initial problem, which is to know whether the drive is spinning before opening the safe – I tried upgrading the disk firmware, but CHECK POWER MODE is still not implemented (and the upgrade cannot be done from a Windows guest) – but at least I lowered the probability of damaging it.

SMTP over IPv6

On July 16 Comcast started blocking port 25 on its IPv6 network. I was already using port 465 for the emails sent from my mailer, but I was using the IPv6 loophole for the messages sent by the system (SMART alerts, backup results, security audits, etc.). I suppose this is a sign that IPv6 is getting deployed, so that’s a good thing, but it means that I had to start using TLS for my system emails too.

Unfortunately the postfix SMTP client does not support the SSL-wrapped port 465, so I had to enable the submission port (587) in the /etc/postfix/master.cf file on the server side:

submission inet n - - - - smtpd
  -o smtpd_tls_security_level=encrypt
  -o smtpd_tls_req_ccert=yes

My client certificate is stored on a smartcard, but postfix does not seem to support smartcards, so I had to generate a new key and CSR on the client side just for this:

openssl req -new -nodes -keyout client.key -out client.csr -days 365

then sign the CSR on the server side:

openssl ca -config ca.conf -utf8 -out client.crt -infiles client.csr

After sending the certificate back to the client, I just added the key and certificate to the local /etc/postfix/main.cf:

relayhost = xxxxx.org:submission
smtp_use_tls = yes
smtp_tls_key_file = /etc/postfix/client.key
smtp_tls_cert_file = /etc/postfix/client.crt
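
As a rough sanity check (assuming a mail command such as bsd-mailx is installed; the log location varies per system), a test message should now leave through the relay over TLS:

# send a test message through the new relayhost and check the mail log
echo "relay test" | mail -s "postfix TLS relay test" root
tail -n 50 /var/log/mail.log | grep -i tls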

Things I learned last week (3)

USB hub

As a result of reorganizing my office during the installation of my new computer, my backup system (which runs inside a fireproof safe) is no longer close to my desktop computer, so I installed a USB hub between them. That was a very bad idea, as the full backup took 12 hours instead of the 9 hours it took with my previous computer and a direct connection.

So I replaced the USB hub with a one-meter extension cable (two cables in fact, as the backup disk is connected with a USB Y cable, and extending the Y cable this way lowers the potential voltage drop). Now the backup time is 6.5 hours, nearly half of what it was with the USB hub (the drop from 9 hours to 6.5 hours being the result of the faster computer).

I knew that USB hubs can create problems (I have at least one device, an HD DVR, that does not work if connected through a USB hub), but slowing things down that much was a surprise.

While doing this I also tried to verify whether the Y cable was really needed. A USB monitor confirmed that from time to time the disk draws 0.7 A, slightly more than the 0.5 A limit of a single port.

My Passport disk

While testing various configurations for my backup drive, I confirmed that this drive never spins down. I suspected this, and because CHECK POWER MODE is not supported on this disk, I have to unmount and disconnect the disk each time I want to open my safe (the disk being stored in the door of the safe). I have not managed to fix this so far, but I was at least able to fix one problem in the smartctl configuration with this modification in the /var/lib/smartmontools/drivedb.h file:

{ "USB: WD My Passport USB 3.0; ",
"0x1058:0x07[4a]8",
"",
"",
- ""
+ "-d sat,16"
},
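
With that entry in place, smartctl should select the right device type by itself; a quick check looks like this (the device name is just an example):

# the drive should now be identified without an explicit -d option
sudo smartctl -i /dev/sdX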

Things I learned last week (2)

Debian packages without repository

There is probably a better way to do this, but here is a command that displays all the installed packages that are not part of a configured repository:

apt-show-versions -a |grep "No available version in archive"

These can be packages that were installed directly using dpkg -i, but I was surprised to find that many of them were in fact packages from repositories that I had removed from my configuration. The problem is that without a repository there is no way to receive new versions, which can be a problem when a security issue is discovered. Most of these packages were not used, but it is a bad idea to keep obsolete stuff around – a malicious script can still try to call code from obsolete packages in the hope that an unpatched security bug can be exploited.

Android build still requires Sun Java

For the reasons explained above, I removed the sun-java6-jdk package from my system, but that made the Android build fail, which means that Android still requires the Sun JDK 1.6 (openjdk does not work) for its build, which is kind of crazy knowing that this package is no longer maintained. Even the Android web page is wrong, as Ubuntu also no longer carries this package. I first tried to rebuild the package using make-jpkg, but I stopped when I saw that registration is now required to download the JDK from Oracle. Finally I found that sun-java6-jdk is still available in the oldstable Debian repository, but for how long?

Here’s the line to add to /etc/apt/sources.list:

deb http://ftp.us.debian.org/debian oldstable main contrib non-free
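
Then the usual apt steps pull the package from oldstable:

# refresh the package lists and install the JDK from oldstable
apt-get update
apt-get install sun-java6-jdk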

Asus KCMA-D8 Sensors

One of the reasons I built a new computer is that the previous one was overheating even under moderate CPU load, so I wanted to be sure that the new computer would be able to sustain continuous load without crashing. Because my system uses a RAID array, I wanted to do the testing with Debian Live instead of the target OS. The problem was that the sensors were not displayed at all, and it took multiple hours of research (even using an I2C monitor to find the address of the sensor chip) to finally find the reason: the PIIX4 chip in fact has two SMBuses, but the second bus (which is the one connected to the chip managing the sensors) was not implemented in the Linux 3.2 kernel. After switching to a 3.9 kernel the sensors were finally accessible, which showed that the north bridge was overheating. I installed a fan on top of the heatsink, and now the north bridge temperature is under control and cpuburn tests show that the new system does not overheat or crash even after one hour with the 12 cores at 100%.
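
For reference, a minimal sketch of the check after booting the newer kernel, assuming the lm-sensors tools are installed:

# load the PIIX4 SMBus driver, scan for sensor chips, then read them
sudo modprobe i2c-piix4
sudo sensors-detect
sensors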

KVM and Virtualbox

Another reason for a new computer was to be able to use kvm to run the Android emulator at a decent speed. But it seems that it is not possible to run a kvm application and Virtualbox at the same time. This means that I will not be able to run an Android app and its server in a Virtualbox VM, so I’ll have to convert my servers to kvm.
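
A hedged sketch of that conversion step, assuming qemu-utils is installed and the guest disk is a plain VDI file (file names are illustrative):

# convert a VirtualBox disk image to qcow2 for use with kvm
qemu-img convert -f vdi -O qcow2 server.vdi server.qcow2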

Things I learned last week (1)

Android emulator

A lot of progress has been made on the Android emulator since the last time I used it for development: sensors can now be emulated, the display can use the GPU of the host and, most important, the emulator can use the virtualization support of the host to run an x86 image at close to native speed. GPU support and virtualization are now more or less mandatory because of the display size of modern Android devices, so it is worth the effort to configure Android for this. It requires a computer that can run KVM, and the x86 system images must be installed.
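
For reference, creating such an AVD from the command line looked something like this with the SDK tools of that era (the target id and AVD name are only examples):

# list the installed targets, then create an x86 AVD for one of them
android list targets
android create avd --name avd_name --target android-17 --abi x86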

The command line to start the emulation looks like this:

$ emulator -avd avd_name -gpu on -qemu -m 512 -enable-kvm

Unfortunately when the AVD is new, the window displayed stays black. The solution is to run it the first time with kvm disabled:

$ emulator -avd avd_name -gpu on -qemu -m 512 -disable-kvm

After this, the first command line above can be used.

CPU heatsink

My desktop computer is dying, so it is time to build a new one. I replaced the Asus K8N-DL with a KCMA-D8 motherboard and ordered the CPUs and memory to build it last week-end. Unfortunately I did not anticipate that the CPUs would arrive without heatsinks, and heatsinks for the C32 socket are not available in stores. I suppose it makes sense that the CPUs come without a heatsink, as this kind of motherboard can be used in a 1U chassis, which requires a very different type of heatsink than a tower. But now I have to wait until I receive the heatsinks to finish the build.

Rescue USB flash drive

I run Debian Sid on all my non-server computers, which means that from time to time there is something to repair after an apt-get upgrade – that’s not as insane as it seems, as upgrading a computer each day with the latest packages and fixing whatever broke is a great way to learn stuff. After all I am a computer professional, not a museum curator.
To repair the most broken installations I keep a Debian Live distribution on a flash drive. On the other hand my desktop computer also uses a flash drive to boot GRUB (this machine uses a RAID10 array, which cannot be used for booting), so for this new build I decided to put the Debian Live distribution on the same flash drive, so I do not have to search for the rescue flash drive the next time I break something. It took me a while, but here is the process:

Download a recent Debian Live ISO file, mount it on a loop device and copy the content of the live directory to the flash drive:

# mount -o loop Downloads/debian-live-7.0.0-amd64-rescue.iso /mnt
# mkdir /boot/live
# cp /mnt/live/* /boot/live/
# umount /mnt

Then add the following in /etc/grub.d/40_custom:

menuentry "Debian rescue" {
    echo 'Loading Debian rescue ...'
    linux /live/vmlinuz boot=live live-config live-media-path=/live
    echo 'Loading initial ramdisk ...'
    initrd /live/initrd.img
}

Then update grub.cfg with the following command:

# update-grub

Note that in this configuration the flash drive is mounted on /boot.