Things I learned last week (7)

Nexus 10, OTG and Pogo charger

For a long time my Mac mini served as a MIDI controller, but I needed it for something else, so I decided to use my Nexus 10 tablet instead: with the help of an OTG cable, the tablet can power the MIDI adapter (an M-Audio Midisport 2×2). Because the MIDI adapter draws power from the tablet battery, I planned to use a Pogo cable to charge the tablet at the same time. Seems easy enough, right?

Unfortunately the Nexus 10 stops charging from the Pogo cable as soon as an OTG cable is connected to the USB port, but strangely it still displays a charging state (I checked with an ammeter), which looks like a bug in the code. I would also classify the fact that it stops charging as a bug.

So now I’ll have to remember to unplug the OTG cable (not just the MIDI adapter) after each session so the tablet can charge, but unfortunately that’s not all. As I was doing those tests, I discovered that when the OTG cable is disconnected, the Pogo charger does not go back to full charge (~1.6A) but to the nominal USB charge (i.e. 0.5A). That makes a big difference in the time needed to charge the tablet. So not only do I need to remember to unplug the OTG cable, I also have to unplug and replug the Pogo cable so that full charge mode is used. Looks like another bug in the charging logic.

Things I learned last week (6)

Default route in IETF IPv6

Last week was the IETF meeting in Berlin, which provides a very good IPv6 network. Unfortunately for me there is an issue somewhere in my laptop that results in the IPv6 default route not being automatically renewed. After several IETF meetings spent periodically rebooting my laptop, I finally found a workaround. Here’s what the “ip -6 r” command displays:

2001:df8:0:64::/64 dev eth0 proto kernel metric 256 expires 2591894sec
fe80::/64 dev eth0 proto kernel metric 256
default via fe80::64:2 dev eth0 proto ra metric 1024 expires 1694sec
default via fe80::64:3 dev eth0 proto ra metric 1024 expires 1694sec
default via fe80::64:1 dev eth0 proto ra metric 1024 expires 1694sec

And here’s a set of commands to fix the problem:

# ip -6 r del default via fe80::64:1
# ip -6 r del default via fe80::64:2
# ip -6 r del default via fe80::64:3
# ip -6 r add default via fe80::64:1 dev eth0
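
To avoid typing these by hand at the next meeting, a small script along these lines should do the same thing (it assumes eth0 and the fe80::64:1 gateway shown above, so adapt the last line to your own routes):

#!/bin/sh
# delete all the stale RA-learned default routes, then re-add a working one
for gw in $(ip -6 route show default | awk '/via/ {print $3}'); do
    ip -6 route del default via "$gw" dev eth0
done
ip -6 route add default via fe80::64:1 dev eth0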

Things I learned last week (5)

IPv6 tunnel

Some of the time I spent last week went into preparing for the IETF meeting in Berlin. I explained previously that I use a secure IPv6 connection to my mail server to send and retrieve emails, which creates its own set of issues when travelling. While the IETF provides a great IPv6 network on site, there is very little hope of finding something similar in the various places one has to stay on the way to and from the event: hotels and airports are generally not IPv6 enabled, so a tunnel solution is required. I had a very good experience with Hurricane Electric before Comcast went IPv6 native, but their technology does not help much when the laptop is behind a NAT that cannot be controlled. So in this case I use the service provided by gogo6 (aka freenet6): their gogoCPE behind my AT&T NAT, and their software client on my laptop, at least since last week when I finally found the solution to the problem I had configuring it. Probably because I was using a WiFi connection instead of the wired connection, the gogoc daemon got stuck until I ran the following command and answered the prompt:

sudo /usr/sbin/gogoc -n -s wlan0

Backup restoration

A full backup is obviously useful when something terribly wrong happens to your disk (in the 90’s I lost a disk full of code, and since I define stupidity as making the same mistake twice, I have been very careful ever since to always have multiple levels of protection for my code), but it also helps with day-to-day tasks, for example when a code modification goes in the wrong direction and restoring the previous day’s backup saves several hours of work.

Another benefit I discovered some time ago is when preparing my laptop before a trip. I like to carry my whole email and development history with me, and that is a lot of data to copy from one computer to another. Initially I created a giant tarball on a flash drive and then uncompressed it on the target computer, but that took forever. Now I just reuse my backup. The day before I leave, I restore on my laptop the directories I need directly from the last full backup (i.e. from the previous Sunday). The improvement I made last week is that I then change the configuration of my mailer so emails are no longer deleted from my mail server. During the night the incremental backup saves my new emails and the new configuration, so it takes less than 5 minutes the next day, before leaving for the airport, to restore the incremental backup, with the guarantee that during my trip all my emails will stay on the server for when I am back in my office. That means less wasted time and less stress.
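
For illustration, the restore itself looks roughly like this (a minimal sketch assuming an rsync-style backup tree; the paths are hypothetical and the actual backup tool is not shown here):

# the day before leaving: restore the needed directories from the last full backup
rsync -aH /backup/full/latest/home/me/ /home/me/
# the next morning: apply the overnight incremental backup on top
rsync -aH /backup/incr/latest/home/me/ /home/me/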

Things I learned last week (4)

Passport disk

Two weeks ago I tried to understand why my backup disk did not stop spinning after the configured 30 minutes. I finally found the problem: smartd (the SMART daemon) is configured to poll all the disks at a 30 minute interval, and because CHECK POWER MODE is not implemented on this disk, smartd was preventing it from spinning down. Unfortunately smartd cannot be configured to use a different polling interval per disk, so my only solution was to completely disable polling for this disk – which is not possible directly, so I had to configure an explicit list of disks to poll. That brought another issue: the drive name (/dev/sda, etc…) is not stable between reboots, so I had to find the mapping in /dev/disk/by-id/, which finally gave me the correct configuration in smartd.conf:

DEFAULT -a -m root -M exec /usr/share/smartmontools/smartd-runner
/dev/disk/by-id/ata-WDC_WD10EURS-630AB1_WD-WCAVXXXXXXX
/dev/disk/by-id/ata-WDC_WD10EFRX-68JCSN0_WD-WCC1XXXXXXXX
/dev/disk/by-id/ata-ST3750640AS_3QD0TXXX
/dev/disk/by-id/ata-ST3750640AS_3QD0RXXX

That does not solve my initial problem, which is to know if the drive is spinning or not before opening the safe – I tried upgrading the disk firmware, but the CHECK POWER MODE is still not implemented (and the upgrade cannot be done from a Windows guest) – but at least I lowered the probability of damaging it.
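
For reference, the stable names come from listing the by-id directory, where each symlink points back to the kernel name currently assigned to the disk (the example mapping below is illustrative):

ls -l /dev/disk/by-id/ | grep ata-
# e.g. ata-WDC_WD10EFRX-68JCSN0_WD-WCC1XXXXXXXX -> ../../sdb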

SMTP over IPv6

On July 16 Comcast started blocking port 25 on its IPv6 network. I was already using port 465 for the emails sent from my mailer, but I was using the IPv6 loophole for the messages sent by the system (SMART alerts, backup results, security audits, etc…). I suppose this is a sign that IPv6 is getting deployed, which is a good thing, but it means I had to start using TLS for my system emails too.

Unfortunately postfix does not support port 465, so I had to configure port 587 in the /etc/postfix/master.cf file on the server side:

submission inet n - - - - smtpd
  -o smtpd_tls_security_level=encrypt
  -o smtpd_tls_req_ccert=yes

My client certificate is stored in a smartcard, but postfix does not seem to support smartcards, so I had to generate a new key and CSR on the client side just for this:

openssl req -new -nodes -keyout client.key -out client.csr -days 365

then sign the CSR on the server side:

openssl ca -config ca.conf -utf8 -out client.crt -infiles client.csr

After sending the certificate back to the client, I just added the key and certificate in the local /etc/postfix/main.cf:

relayhost = xxxxx.org:submission
smtp_use_tls = yes
smtp_tls_key_file = /etc/postfix/client.key
smtp_tls_cert_file = /etc/postfix/client.crt
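
A quick way to verify that the submission port and the client certificate work together is an openssl test connection (the host name below is a placeholder; the key and certificate paths match the main.cf above):

openssl s_client -starttls smtp -connect mail.example.org:587 \
    -cert /etc/postfix/client.crt -key /etc/postfix/client.key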

Things I learned last week (3)

USB hub

As a result of reorganizing my office during the installation of my new computer, my backup system (which runs inside a fireproof safe) is no longer close to my desktop computer, so I installed a USB hub between them. That was a very bad idea: the full backup took 12 hours, instead of the 9 hours it took with my previous computer and a direct connection.

So I replaced the USB hub with a one meter extension cable (two cables in fact, as the backup disk is connected with a USB Y cable, and extending the Y cable this way lowers the potential voltage drop). Now the backup time is 6.5 hours, nearly half of what it was with the USB hub (the drop from 9 hours to 6.5 hours being the result of the faster computer).

I knew that USB hubs can create problems (I have at least one device, an HD DVR, that does not work if connected through a USB hub), but slowing things down that much was a surprise.

While I was at it, I tried to verify whether the Y cable was really needed. A USB power monitor confirmed that from time to time the disk draws 0.7A, more than the 0.5A limit of a single USB 2.0 port.

My Passport disk

As I was testing various configurations for my backup drive, I confirmed that this drive never spins down. I suspected this, and because CHECK POWER MODE is not supported on this disk, I have to unmount and disconnect the disk each time I want to open my safe (the disk being stored in the door of the safe). I have not been able to fix this so far, but I was at least able to fix one problem in the smartctl configuration with this modification in the /var/lib/smartmontools/drivedb.h file:

{ "USB: WD My Passport USB 3.0; ",
"0x1058:0x07[4a]8",
"",
"",
- ""
+ "-d sat,16"
},
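
The same option can be tested directly on the command line before touching drivedb.h (the device name is just an example):

smartctl -d sat,16 -a /dev/sdX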

Things I learned last week (2)

Debian packages without repository

There is probably a better way to do this, but here is a command that displays all the installed packages that are not part of a configured repository:

apt-show-versions -a |grep "No available version in archive"

These can be packages that were installed directly using dpkg -i, but I was surprised to find that many of them were in fact packages from repositories that I had removed from my configuration. The problem is that without a repository there is no way to receive new versions, which can be a problem when a security issue is discovered. Most of these packages were not used, but it is a bad idea to keep obsolete stuff around – a malicious script can still try to call code from obsolete packages in the hope that an unpatched security bug can be exploited.
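
To turn that output into a list of candidates for removal, a pipeline along these lines should work (it assumes the package name is the first field of the apt-show-versions output, possibly with an architecture suffix):

apt-show-versions -a | grep "No available version in archive" \
    | awk '{print $1}' | cut -d: -f1 | sort -u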

Android build still requires Sun Java

For the reasons explained above, I removed the sun-java6-jdk package from my system, but that made the Android build fail, which means that Android still requires the Sun JDK 1.6 (openjdk does not work) to build, which is kind of crazy knowing that this package is no longer maintained. Even the Android web page is wrong, as Ubuntu also no longer carries this package. I first tried to rebuild the package using make-jpkg, but I stopped when I saw that registration is now required to download the JDK from Oracle. Finally I found that sun-java6-jdk is still available in the oldstable Debian repository, but for how long?

Here’s the line to add to /etc/apt/sources.list:

deb http://ftp.us.debian.org/debian oldstable main contrib non-free
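
After an apt-get update the package can then be installed the usual way (if more than one repository provides it, -t oldstable may be needed to select this one):

# apt-get update
# apt-get install sun-java6-jdk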

Asus KCMA-D8 Sensors

One of the reasons I built a new computer is that the previous one was overheating even under moderate CPU load, so I wanted to be sure that the new computer would be able to sustain continuous load without crashing. Because my system uses a RAID array, I wanted to do the testing with Debian Live instead of the target OS. The problem was that the sensors were not displayed at all, and it took multiple hours of research (even using an I2C monitor to find out the address of the sensors chip) to finally find the reason: the PIIX4 chip in fact has two SMBuses, but the second bus (which is the one connecting the chip managing the sensors) was not supported by Linux kernel version 3.2. After switching to a version 3.9 kernel the sensors were finally accessible, which showed that the north bridge was overheating. I installed a fan on top of the heatsink; now the north bridge temperature is under control and cpuburn tests show that the new system does not overheat or crash even after one hour with the 12 cores used at 100%.
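
A quick way to check what the kernel exposes, once the kernel is recent enough, is with the standard tools (i2cdetect comes from i2c-tools, sensors-detect and sensors from lm-sensors):

# i2cdetect -l        # list the SMBus/I2C adapters known to the kernel
# sensors-detect      # probe for the sensor chip and suggest modules to load
# sensors             # display temperatures, fan speeds and voltages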

KVM and VirtualBox

Another reason for a new computer was to be able to use KVM to run the Android emulator at a decent speed. But it seems that it is not possible to run a KVM application and VirtualBox at the same time. This means that I will not be able to run an Android app and its server in a VirtualBox VM, so I’ll have to convert my servers to KVM.

Things I learned last week (1)

Android emulator

There has been a lot of progress on the Android emulator since the last time I used it for development: sensors can now be emulated, the display can use the GPU of the host and, most important, the emulator can use the virtualization support of the host to run an x86 image at close to native speed. The GPU support and virtualization are now more or less mandatory because of the size of the display of modern Android devices, so it is worth the effort to configure Android for this. That requires a computer that can run KVM and installing the x86 system images.

The command line to start the emulation looks like this:

$ emulator -avd avd_name -gpu on -qemu -m 512 -enable-kvm

Unfortunately when the AVD is new, the emulator window stays black. The solution is to run it the first time with KVM disabled:

$ emulator -avd avd_name -gpu on -qemu -m 512 -disable-kvm

After this, the first command line above can be used.
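
Before all this, it is worth checking that KVM is actually usable on the host; something like the following should be enough (kvm-ok comes from the cpu-checker package on Debian/Ubuntu):

$ ls -l /dev/kvm
$ kvm-ok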

CPU heatsink

My desktop computer is dying, so it is time to build a new one. I replaced the Asus K8N-DL with a KCMA-D8 motherboard and ordered the CPUs and memory to build it last week-end. Unfortunately I did not anticipate that the CPUs would arrive without heatsinks, and heatsinks for the C32 socket are not available in stores. I suppose it makes sense that the CPUs come without a heatsink, as this kind of motherboard can be used in a 1U chassis, which requires a very different type of heatsink than a tower. But now I have to wait until I receive the heatsinks to finish the build.

Rescue USB flash drive

I run Debian Sid on all my non-server computers, which means that from time to time there is something to repair after an apt-get upgrade – that’s not as insane as it seems, as upgrading a computer each day with the latest packages and fixing whatever broke is a great way to learn stuff. After all, I am a computer professional, not a museum curator.

To repair the most broken installations I keep a Debian Live distribution on a flash drive. On the other hand, my desktop computer also uses a flash drive to boot GRUB (this machine uses a RAID10 array, which cannot be used for booting), so for this new build I decided to put the Debian Live distribution on the same flash drive, so I do not have to search for the rescue flash drive next time I break something. It took me a while, but here is the process:

Download a recent Debian Live ISO file, mount it on a loop device and copy the content of the live directory to the flash drive:

# mount -o loop Downloads/debian-live-7.0.0-amd64-rescue.iso /mnt
# mkdir /boot/live
# cp /mnt/live/* /boot/live/
# umount /mnt

Then add the following in /etc/grub.d/40_custom:

menuentry "Debian rescue" {
    echo 'Loading Debian rescue ...'
    linux /live/vmlinuz boot=live live-config live-media-path=/live
    echo 'Loading initial ramdisk ...'
    initrd /live/initrd.img
}

Then update grub.cfg with the following command:

# update-grub

Note that in this configuration the flash drive is mounted on /boot.

The merguez sandwich

Periodically, this long time French exile starts to crave some very specific food that cannot be found locally. It would be more accurate to say that the food of a specific name that can be found locally does not generally taste like the food of the same name in its country of origin. But please note that this post is not meant to criticize the taste foreign foods take on after they are adapted to the local population – I do understand the economics behind this necessity; I just wanted to show how one can solve this kind of problem for those foods that cannot be imported.

My all-time favorite sandwich is called the “jambon-beurre” and is the simplest sandwich one can imagine: fresh bread, butter, sliced ham. This is one example of a sandwich that cannot be replicated here in California simply by buying the ingredients, as the ham here has too much of a clove taste. For now this sandwich is out of my reach, but after some work I managed to make a perfect replica of my second favorite sandwich: the merguez sandwich.

The merguez sandwich, which you can buy from food trucks in Marseille, is made on a base of French bread by adding some harissa, lettuce, sliced tomatoes and grilled or boiled merguez, and topping it with French fries. Imported harissa can easily be found, California has excellent vegetables, French bread can be made at home and, as for French fries, nothing beats the recipe from Cook’s Illustrated. The main problem is the merguez itself which, because it is a fresh sausage, cannot be imported. It is possible to buy something called merguez in California, and I even keep a package of those in my freezer for when I want to explain to someone what a merguez does not taste like.

California has excellent produce, from vegetables to fruits to meat, so it is just a matter of rearranging the way those things are put together to make something that tastes truly like the French original. Here it all starts with a lamb leg (although there are probably less expensive cuts that can be used), which is deboned, then diced and put in the freezer for one hour or until it is nearly frozen.

From this point on, the meat needs to be kept as cold as possible for the whole operation, and not just to prevent contamination – the final taste and appearance of the sausage depend on it. So it is a good idea to also put in the freezer all the elements that will touch the meat, like the meat grinder and the sausage stuffer.

It is also a good time to put the lamb casings in warm water and to prepare the spice mix. Note that lamb casings really smell like shit, but that is how they are supposed to smell.

The spice mix is made of 51 gr sea salt, 9 gr ground black pepper, 9 gr cayenne pepper, 5 gr garlic powder, 15 gr ground cumin, 75 gr Spanish paprika, 9 gr ground coriander seed and 6 gr ground fennel seed.

The next step is to grind the lamb meat with a medium grinding plate (using too fine a plate really changes the taste of the final product, as I discovered with my initial batches).

Put the meat back in the freezer during the next operations, which start by putting the lamb casing on the 5/8 inch stuffer attachment.

Then measure 60 gr of the spice mix for each 1000 gr of meat and prepare a slurry with ice water.

Mix the slurry with the meat and fill the sausage stuffer with the mixture.

You can then push the mixture slowly into the casing. It is important to keep the lamb casing close to the exit of the stuffer attachment when making the sausages.

I make the links after the sausages have rested a little bit – a merguez should be 8 or 9 inches long. Use a sharp needle to remove the air pockets in the sausages, and store them for 24 hours in the fridge before grilling or boiling them. Freeze whatever you do not eat within the next 24 hours.

On the design of the STUN and TURN URI formats

The first goal of this post is to write down my reasoning behind the formats I am promoting for the future STUN and TURN URIs, mostly because I keep forgetting it and have to reconstruct it from scratch each time I have this discussion with other people (and, sadly, also with myself), but this post may also be of interest if you are confused about what TURN and STUN are and how they can be used.

Let’s start with STUN (RFC 5389). It is important to immediately separate the STUN protocol from the STUN usages. The STUN protocol covers how bits are organized on the wire and how STUN packets are sent, received and retransmitted – details that are not terribly important for this discussion, except for how they contribute to the confusion. The really interesting part is the list of STUN Usages, i.e. the list of different things that can be done with STUN. At the time this post is written there are 4 different STUN Usages, which always involve a STUN client and a STUN server:

  • NAT Discovery, specified in RFC 5389, which is used to find under which IP address and port a STUN client is visible to a STUN server. If the STUN client is inside a NAT and the STUN server is on the Internet, the NAT Discovery Usage permits finding the external IP address of the NAT.
  • NAT Behavior Discovery, specified in RFC 5780, which is used to find what type of NAT separates a STUN client from a STUN server. It is a bad idea to use this information for anything other than collecting debugging data, which is why this RFC is experimental and why we will not discuss it further.
  • Connectivity Check, specified in RFC 5245 (aka ICE), which is used to find out whether a STUN server can be reached by a STUN client.
  • Keep-alive, specified in RFC 5626, which is used to a) detect whether a STUN server can still be reached by a STUN client, b) detect whether the NAT/firewall IP address or port changed and c) keep the NAT/firewall open.

STUN is defined to be used over UDP, TCP or TLS. STUN cannot yet be used over DTLS (i.e. TLS over UDP), or over more recent transports like SCTP or DCCP. One fundamental point to understand for this discussion is that the choice of the transport used by STUN depends only on the application needing it. If, for instance, the NAT Discovery Usage is used for RTP, only STUN over UDP can be of use to this application; STUN over TCP cannot help at all. So the choice of the transport is not left to the user of the application or to the administrators of the STUN server – it is purely a consequence of what the application is trying to achieve.

TURN (RFC 5766) is an application layer tunneling protocol. Although TURN has absolutely nothing to do with any of the Usages described above, it shares the same protocol as STUN – same bits on the wire, same way the packets are sent, received and retransmitted. This is the first reason for the confusion between STUN and TURN, the second being that, to save a round-trip, the TURN Allocate transaction returns the exact same information that the STUN NAT Discovery Usage returns. In spite of these similarities with STUN, the job of the TURN protocol is completely different: it is to carry application data between the TURN client and the TURN peer, through the TURN server. This application data can be anything, e.g. RTP packets. It can even be STUN packets, in which case the TURN client can also be a STUN client and the TURN peer (not the TURN server) can also be a STUN server.

As with STUN, TURN is defined to be used over UDP, TCP or TLS between the TURN client and the TURN server. But this is the transport used for the tunnel itself, and the transport used inside the tunnel (i.e. for our RTP or STUN packets) can be different. RFC 5766 defines only UDP allocations (an allocation is what the inside transport is called in the specification), but RFC 6062 extends TURN by adding support for TCP allocations, although with the limitation that a TCP allocation cannot be used over a UDP transport (i.e. a UDP tunnel cannot carry TCP inside).

The very important point here is that the application does not care which transport is used for the TURN tunnel – it can be any tunnel transport that can carry the inside transport that the application needs to use with the peer. So if the application needs UDP to send STUN or RTP to the peer, it does not matter whether the tunnel transport is UDP, TCP or TLS.

On the other hand, which tunnel transports are available can matter to the provider of the TURN server. Unlike STUN servers, TURN servers use real resources (ports, bandwidth, CPU), so the administrators of these TURN servers may want to be able to balance the load, fail over between servers, etc… Another thing an administrator may want to manage is the priority between the different tunnel transports that a TURN client can use, and this is exactly what RFC 5928 provides.

But before going into RFC 5928, let’s have a look at the way the DNS interacts with STUN and TURN. A TURN server, or a STUN server for the first two STUN Usages listed above (NAT Discovery and NAT Behavior Discovery), is generally deployed on a fixed public Internet address, so it is useful to use the DNS to associate a name with it (in an A or AAAA record). Because more than one instance of these servers is generally required to run a service, SRV records can be used to distribute the load between servers, to manage fail-over and to assign a port to the servers. What RFC 5928 adds to this is the definition of a NAPTR record to select the transport.
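
As an illustration, the records published for a TURN deployment could look like the sketch below, loosely modeled on the example in RFC 5928 (all names and addresses are placeholders):

; hypothetical zone fragment for example.org
example.org.             IN NAPTR 100 10 ""  "RELAY:turn.udp" "" datagram.example.org.
datagram.example.org.    IN NAPTR 100 10 "s" "RELAY:turn.udp" "" _turn._udp.example.org.
_turn._udp.example.org.  IN SRV   0 0 3478 turnserver.example.org.
turnserver.example.org.  IN A     192.0.2.1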

Under RFC 5928, when an application wants to use a TURN server it has to provide two sets of information. The first set contains the list of tunnel transports that the application implements. The second set, which is probably stored in the configuration of the application, contains the domain name of the TURN server, an optional port, an optional transport and an optional secure flag. The algorithm in RFC 5928 takes these two sets of information and spits out an ordered list of (IP address, port, tunnel transport) combinations that the TURN client can try in order to establish the tunnel. As soon as the tunnel is established, the TURN client can request a TCP or a UDP allocation to send and receive packets, depending, as explained above, on the purpose of the application.

Because there is no point in having the STUN server administrators choose the transport, there is no need to define something equivalent to RFC 5928 for STUN.

The TURN URI as currently designed carries all the information in the second set passed to the RFC 5928 algorithm. The URI “turn:example.org” fills the host parameter with “example.org” and leaves the secure flag, the transport and the port undefined. The URI “turns:[2001:DB8::1]:2345;transport=TCP” sets the host to the IPv6 address 2001:DB8::1, the secure flag on, the port to 2345 and the transport to TCP.

Let’s now put the TURN URI back in the WebRTC context, which is the reason it is needed in the first place. The TURN URI is passed from the Web server to the browser in the Javascript code. In normal operations the TURN URI will probably be something like “turns:example.org”, meaning that the tunnel transport will be negotiated between the capabilities of the browser and what the administrators of the TURN servers in the example.org domain prefer. But the administrators of the Web server may want, for debugging reasons, to use a specific server and port, e.g. “turn:[2001:DB8::1]:1234”. They may also want to force a specific transport, knowing that the other transports have an unfixed bug, by using something like “turn:example.org;transport=UDP”. This flexibility is even more useful knowing that, even with the cooperation of the DNS administrators, it will take some time for new DNS records to propagate. So in this context it makes sense for the TURN URI to have a transport parameter.

On the other hand, a transport parameter on a STUN URI would make no sense, because the transport used by STUN is dictated by the application. If the UDP transport has a bug in the STUN servers, switching to a TCP transport cannot help an application that is trying to send RTP packets.

One of the alternative formats proposed for the TURN and STUN URIs was to drop the “s” suffix from the “turns” and “stuns” schemes and to consolidate it into a “;proto=” parameter. With this alternative format, “turns:[2001:DB8::1]:2345;transport=TCP” becomes “turn:[2001:DB8::1]:2345;proto=TLS”. But because, as demonstrated previously, the STUN URI does not need a transport parameter, there is no way to remove the “s” suffix and convert it into a “;proto=” parameter. One way would be to convert “stuns:example.org” to “stun:example.org;secure”, but one can ask how this is better than the original STUN URI.

For all these reasons, and because it would look strange for STUN to use the “s” suffix and not TURN, I think the right format is to allow the “turns” and “stuns” schemes, and to use the “;transport=” parameter only for TURN URIs.

Updated 09/12/2012: Added a bit more text about the interaction between STUN/TURN and the DNS.

A simple scheme for software version numbers

There are as many opinions on how software version numbers should be structured as there are developers. It is difficult to design a scheme that is simple and that will stay consistent for the whole lifetime of a product – one good example of a product that periodically changes the meaning of its version number is the Linux kernel: at one time odd version numbers meant development code and even numbers production code, and now the rule seems to be that the major component of the version is incremented whenever the project leader feels that the minor component is too large.

My own schemes were always plagued by one annoying inconsistency: I always start the numbering at 0.1, meaning that the software is still in its design phase, postponing the switch to 1.0 until the API (whatever that means) is stable enough to be guaranteed not to require corrections. The inconsistency becomes visible when trying to go through the next iteration of versions after 1.0, an iteration that will be concluded by a version 2.0. Some people use very high numbers (1.99, and so on) for this purpose, but that never looked right to me.

So I finally found and adopted a simple scheme that (I think) clearly indicates which phase of the development process a version belongs to. All version numbers follow the <major>.<minor>.<correction> scheme, starting at 0.1.0. A minor value of 0 always means that the API is stable (i.e. that this API will be maintained forever), and any other value means that this is a different API and that this API is still under development (i.e. that users of this API should be prepared to modify their code). It will be simpler to understand with some examples (a small shell sketch of the rule follows the list):

  • 0.1.0: The first version; the API is still under development.
  • 0.1.1: Same API as before, but bugs in the implementation were fixed.
  • 0.2.0: API modified, but still not stable.
  • 1.0.0: First version using a stable API.
  • 1.0.1: Bug fixes on the stable version.
  • 1.1.0: A new development cycle started, with a different API.
  • 1.1.1: Same API as before, but bugs in the implementation were fixed.
  • 1.0.2: New bugs fixed in the stable API.
  • 1.2.0: New development version with a different API.
  • 2.0.0: Second version with a stable API.
  • 2.0.1: Bug fixes on the second stable version.
  • 1.0.3: New bugs fixed in the first stable API.
  • 2.1.0: Beginning of a new cycle of development, and so on…
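
Here is the rule expressed as a tiny shell sketch (just an illustration, with made-up version strings):

# a minor number of 0 means the API is stable, anything else means it is still moving
is_stable() {
    minor=$(echo "$1" | cut -d. -f2)
    if [ "$minor" = "0" ]; then
        echo "$1: stable API"
    else
        echo "$1: API under development"
    fi
}
is_stable 1.0.2   # -> 1.0.2: stable API
is_stable 1.2.0   # -> 1.2.0: API under development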

One may ask why the numbering does not start with 0.0.0. In that case the minor part would be 0, which would mean that the API is stable, but the only reasonable design for a stable API this early in the process would be the absence of any API, which would require releasing a first Debian package that contains nothing. So it seems reasonable to skip this step and start directly with version 0.1.0. But note how this is reminiscent of the way unit testing is supposed to be done, i.e. that tests should be written before the actual code that permits them to succeed.