A configuration and enrollment service for RELOAD implementers

As people quickly discover, implementing RELOAD is not an easy task – the specification is complex, covers multiple network layers, and is extensible and flexible; the fact that security is mandatory creates even more challenges at debugging time. This is why new implementations generally focus on having the minimum set of features working between nodes using the same software.

Making two different RELOAD implementations interoperate requires a lot more work, mostly because connecting to a RELOAD overlay is not as simple as providing an IP address and port to connect to. Because of the extensibility of RELOAD, all the nodes in an overlay must use the same set of parameters, parameters that are collected and distributed in an XML document that needs to be cryptographically signed. In addition to this, all nodes must communicate over (D)TLS links, using both client and server certificates signed by a CA that is local to the overlay. The configuration file and certificates must be distributed to each node, and when two or more implementations want to participate in the same overlay, ad hoc methods of provisioning these elements are no longer adequate. The standard way to do that is through a configuration and enrollment server, but unfortunately that is probably the part of the RELOAD specification to which most implementers would assign the lowest priority, thus creating a higher barrier to interoperability testing than one would expect.
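
As an illustration of what all nodes must agree on, here is a hedged sketch of a minimal configuration document; the element and attribute names come from the specification, but all the values are placeholders:

<overlay xmlns="urn:ietf:params:xml:ns:p2p:config-base">
  <configuration instance-name="example-overlay.example.org"
                 sequence="1" expiration="2012-12-31T23:59:59Z">
    <!-- base64 value of the CA certificate local to the overlay -->
    <root-cert>...</root-cert>
    <enrollment-server>https://example-overlay.example.org/enrollment</enrollment-server>
    <!-- base64 value of the certificate used to sign this document -->
    <configuration-signer>...</configuration-signer>
  </configuration>
  <signature>...</signature>
</overlay>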

This is why during the last RELOAD interoperability testing event in Paris, I volunteered to provide configuration and enrollment servers as a service to RELOAD implementers, so they do not have to worry about this part. I already had my own configuration and enrollment servers, but I had to rewrite them from scratch because of two additional requirements: they had to work with any set of parameters, even some that my own implementation of RELOAD does not support yet, and it had to be possible to host servers for multiple overlays on the same physical server (virtual server). A first set of servers is now deployed and in use by the participants of the last RELOAD interoperability event, so it is now time to open the service to a larger set of participants.

First, what this service is not: it is not meant to host commercial services, and it is not meant to showcase implementations. The service is free for RELOAD implementers (up to 5 overlays per implementer) for the explicit purpose of letting other implementers connect to your RELOAD implementation, which means that you are expected to provision a username/password for any other implementer on request, on a reciprocity basis. Contact me directly if you are interested in a usage that does not fit this description.

The enrollment for the service is simple: send me an email containing the X.500 name that will be used to provision your servers. Here’s an example to provision a fictional overlay named “my-overlay-reload.implementers.org”:

C=US, ST=California, L=Saratoga, O=Impedance Mismatch, LLC, OU=R&D,
CN=my-overlay-reload.implementers.org

The C=, ST=, L=, O= and OU= components should describe your organization (not you). The CN= component contains the name requested for your overlay. Note that the “-reload.implementers.org” part is mandatory, but you can choose whatever name you want before this suffix, as long as it is not already taken, follows the DNS label rules, and does not contain a dot (wildcard certificates do not support sub-subdomains).

With this information I will provision the following:

  • The DNS RRs described in the RELOAD draft.
  • A configuration server.
  • An enrollment server, with its CA certificate.
  • A secure Operation, Administration and Management (OAM) server.

The DNS server permits retrieving the name and port that can be used to connect to the configuration server. If we reuse our example above, the following command retrieves the DNS name and port:

$ host -t SRV _reload-config._tcp.my-overlay-reload.implementers.org
_reload-config._tcp.my-overlay-reload.implementers.org has SRV record 40 0 443
my-overlay-reload.implementers.org.

Note that the example uses the new service and well-known URL name that were agreed on at the Vancouver meeting, but the current name (p2psip-enroll) will still be supported until the updated specification is published.

The DNS name can then be resolved (the IPv6 address is functional):

$ host my-overlay-reload.implementers.org
my-overlay-reload.implementers.org has address 173.246.102.69
my-overlay-reload.implementers.org has IPv6 address
2604:3400:dc1:41:216:3eff:fe5b:8240

Then the configuration file can be retrieved by following the rules listed in the specification:

$ curl --resolve my-overlay-reload.implementers.org:443:173.246.102.69 \
  https://my-overlay-reload.implementers.org/.well-known/reload-config

The returned configuration file will contain a root-cert element carrying the CA certificate that was created for this overlay, and will be signed by a configuration signer that is maintained by the configuration server. Basically, the configuration server automatically renews the configuration signer and re-signs the configuration file every 30 days, or sooner if you upload a new configuration file (more on this later). Note that to ensure that there is no lapse in the rollover of signer certificates, the configuration file must be retrieved periodically (the expiration attribute contains the expiration date of the signer certificate, so retrieving the configuration document one or two days before this date guarantees that any configuration file can be used to validate the next one in the sequence). This feature frees implementers from developing their own signing tools (a future version will permit implementers to maintain their own signer and to upload a signed configuration file).
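
For example, a cron job along these lines (the local path is hypothetical) keeps a fresh signed copy around and never misses a signer rollover:

# refresh the signed configuration document once a day
0 4 * * * curl -s -o /var/lib/reload/config.relo https://my-overlay-reload.implementers.org/.well-known/reload-config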

The configuration file also contains an enrollment-server element, pointing to the enrollment server itself, which can be used to create certificates as described in the specification. The enrollment server requires a valid username/password to create a certificate, and in any case the default configuration document returned is filled with only the minimum parameters required, so it is useless as-is to run a real overlay. Modifying the configuration document and managing the users that can request a certificate (and so join the overlay) are the responsibility of the OAM server.

Because the OAM server uses a client certificate for authentication, it uses a different domain name than the configuration and enrollment server. The domain name uses the “-oam-reload.implementers.org” suffix, and a separate CA is used to create the client certificate, so a user of the overlay cannot use its certificate to change the configuration (it would be a good idea to define a new X.509 extended key usage purpose for RELOAD to test for this).

The OAM server uses a RESTful API to manage the configuration and enrollment servers (well, as RESTful as possible, because the API is in fact auto-generated from a JMX API, and I did not find a better solution than to map a JMX operation to a POST – but more on this in a future blog entry). Here are the commands to add a new user, change a user password, list the users and remove a user:

$ curl --cert client.crt --key client.key --data "name=myname&password=mypassword" \
  https://my-overlay-oam-reload.implementers.org/type=Enrollment/addUser
$ curl --cert client.crt --key client.key --data "name=myname&password=mypassword" \
  https://my-overlay-oam-reload.implementers.org/type=Enrollment/modifyUser
$ curl --cert client.crt --key client.key https://my-overlay-oam-reload.implementers.org/type=Enrollment/Users
$ curl --cert client.crt --key client.key --data "name=myname" \
  https://my-overlay-oam-reload.implementers.org/type=Enrollment/removeUser

The password is stored as a bcrypt hash, so it is safe as long as you do not use weak passwords.
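
As an illustration of what this means on the server side, here is a sketch of storing and checking a password with the jBCrypt library (this is not necessarily the exact code the service runs):

import org.mindrot.jbcrypt.BCrypt;

class PasswordStore {
  // At provisioning time, only the bcrypt hash of the password is stored.
  static String hash(String password) {
    return BCrypt.hashpw(password, BCrypt.gensalt());
  }

  // At enrollment time, the candidate password is checked against the hash.
  static boolean check(String candidate, String storedHash) {
    return BCrypt.checkpw(candidate, storedHash);
  }
}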

The last step is to modify the configuration, probably to add a bootstrap element. Currently the OAM server manages what is called a naked configuration, which is a configuration document stripped of all signatures. The current naked configuration can be retrieved with the following command:

$ curl --cert client.crt --key client.key https://my-overlay-oam-reload.implementers.org/type=Configuration/NakedConfiguration > config.relo

The file can then be freely modified with the following constraints:

  • The file must be valid XML and must conform to the schema in the specification (including the use of namespaces).
  • The sequence attribute value must be increased by exactly one, modulo 65535 (see the sketch after this list).
  • The instance-name attribute must not be modified.
  • The expiration attribute must not be modified.
  • A node-id-length element must not be added.
  • The root-cert element must not be removed or modified, and a new one must not be added.
  • The enrollment-server element must not be removed or modified, and a new one must not be added.
  • The configuration-signer element must not be removed or modified, and a new one must not be added.
  • A shared-secret element must not be added.
  • A self-signed-permitted element with a value of “true” must not be added.
  • A kind-signer element must not be added.
  • A kind-signature or signature element must not be added.
  • Additional configuration elements must not be added.
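
For the sequence rule above, a helper like this sketch keeps the increment correct (my assumption is that valid values are 1 to 65535 and that 65535 wraps back to 1; check the specification if in doubt):

// Bump the sequence attribute by exactly one, modulo 65535.
static int nextSequence(int current) {
  return current % 65535 + 1;
}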

Then the file can be uploaded into the configuration server:

$ curl --cert client.crt --key client.key -T config.relo https://my-overlay-oam-reload.implementers.org/type=Configuration/NakedConfiguration

If there is a problem in the configuration, an HTTP 4xx error should be returned, hopefully with text explaining the problem (please send me an email if you think that the text is not helpful, or if a 5xx error is returned).

Bufferbloat, Cablemodem and SDR/SDN

Yesterday I attended a session at the IETF meeting in Vancouver that will probably be remembered as a key moment in the history of the Internet. In it Van Jacobson gave a fantastic talk on CoDel and on the bufferbloat problem. At the end of the talk, Van presented some deployment issues, the second one being that in a computer CoDel should be deployed closer to the device driver. I wondered if Van Jacobson's own netchannels could be a nice solution to this problem, but I did not muster the courage to go to the microphone and ask.

I did not think much of the first deployment issue at the time. Here the problem is that although CoDel is now implemented in the Linux 3.5 kernel, and so can easily be deployed in home NATs/routers, the right place to install CoDel would be inside the cablemodem (or equivalent). Unfortunately this is not a place that can be easily modified, as it is fully under the control of whoever built the cablemodem.

Then later in the day I attended the Technical Plenary, where the technical talk was about Software Defined Networking (SDN). I must admit that I had never heard of SDN before, but the name itself immediately made me think of SDR, Software Defined Radio – I worked on a project involving SDR a few years back, and I still have the two USRP1s that I used for prototyping. My intuition seems right on, although the first talk, by Cisco, left me confused (I will summarize it as “this is horribly complicated, buy our stuff”). The second talk, by a researcher, was better although a little bit creepy (SDN in my home network, with the controller outside?). With the third and last talk, by Google, I was now convinced that SDN was in fact SDR for networks.

All these talks somehow got processed during my sleep, and I woke up with an idea: why not use the same hardware that is used for SDR to implement SDN? For example one could design a USRP1 daughterboard that would permit connecting it to the RJ45 connector from my cable provider. Then it is simply a programming problem – i.e., implementing DOCSIS 3.0, but this time with CoDel inside. That would also open a lot of possibilities, like being able to run tcpdump on the cable side of the modem.

One can even dream of additional daughterboards for different wired connections – Ethernet, USB, HDMI, SATA, Powerline and so on. That would be an exciting project to work on.

Jarc: Generated service

An annotation processor is not permitted to modify a Java source file, so a processor that wants to add code to an existing class is left with only two solutions (if we exclude method instrumentation): generating a superclass or generating a subclass.

Generating a superclass has the advantage that the constructors of the annotated class can be used directly. Let's say that we have an annotation processor that is designed to help implement class composition, as described in Effective Java, item #16. Instead of writing the whole ForwardingSet class, an annotation processor could generate it automatically from this code fragment:

@Forwarding(ForwardingSet.class)
public class InstrumentedSet<E> extends ForwardingSet<E> {
  InstrumentedSet(Set<E> s) {
    super(s); // the generated superclass forwards to the wrapped set
  }
}

But generating a superclass is not always possible. For example, let's imagine an annotation processor that generates the JMX boilerplate necessary to export attributes. An existing class with such an annotation could look something like this:

public class MyData {
  @Attribute int counter;
}

In this case the processor for the @Attribute annotation will generate a JMX interface (let's call it MyDataMXBean) that declares the getcounter and setcounter methods, and a class extending MyData and implementing the JMX interface (let's call it MyDataImpl).

The generated code takes care of the boring stuff, like synchronization and so on, which is certainly an improvement over writing and maintaining it by hand. But the problem with subclasses is that we do not know the name of the generated class. Note that for a superclass we have no choice other than to know its name, because we have to inherit from it. For subclasses it is better to let the processor choose the name, but then we need a way to instantiate the generated class without knowing this name (in our example, to register it in the MBean server).

The obvious way of doing this is to use a ServiceLoader. We can add a factory method in the MyData class to instantiate the generated class, something like this:

static MyData newInstance() {
  return ServiceLoader.load(MyData.class).iterator().next();
}
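
With this factory method in place, registering the MXBean never needs the generated name; a sketch (the ObjectName is made up):

import java.lang.management.ManagementFactory;
import javax.management.ObjectName;

class Registration {
  static void register() throws Exception {
    // The concrete class is MyDataImpl, but only the service entry knows that.
    MyData data = MyData.newInstance();
    ManagementFactory.getPlatformMBeanServer()
        .registerMBean(data, new ObjectName("org.example:type=MyData"));
  }
}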

But for this technique to work, we need to describe the service in the jar file. Using the method explained in a previous post does not help in this case, because we still do not know the name of the generated class.

Version 0.2.30 of jarc provides a solution to this problem. This new version contains a new annotation, @Service, that can be used to annotate a generated class. A processor integrated in jarc reads this annotation at compile time and automatically generates the service entry in the built jar file, as if an X-Jarc-Service attribute had been added to the manifest file. This works because this processor is invoked after the @Attribute processor, and so knows the name of the class that was generated. Here is for example the code fragment that the code generator would have generated for MyDataImpl:

@Service(MyData.class)
@MXBean
class MyDataImpl extends MyData implements MyDataMXBean {

Note that classes used as services require an empty constructor, and that can be a problem if the class it extends does not have an empty constructor itself. The solution in this case is to define an additional factory class as the service.

First we define our factory as an abstract class:

abstract class Factory {
  abstract MyData newInstance(int init);
}

We adjust our factory method accordingly:

static MyData newInstance(int init) {
  return ServiceLoader.load(Factory.class).iterator().next().newInstance(init);
}

The @Attribute processor must generate an additional class that extends the factory class and this is the class which is declared as a service:

@MXBean
class MyDataImpl extends MyData implements MyDataMXBean {
  @Service(Factory.class)
  static class FactoryImpl extends Factory {
    MyDataImpl newInstance(int init) {
      return new MyDataImpl(init);
    }
  }

  MyDataImpl(int init) {
    super(init);
  }
}

Decrypting SSL sessions in Wireshark

Debugging secure communication programs is no fun. One thing I learned during all these years developing VoIP applications is to never, ever trust the logs to tell the truth, especially the ones I put in the code myself. The truth is on the wire, so capturing the packets and being able to analyze them, for example with Wireshark, is one of the most important tools a communication developer can use. But the need for secure communications makes this tool difficult to use.

As said before, the first solution would be to display the packets directly in the logs before they are encrypted or after they are decrypted, but as it is probably the same idiot who wrote the code being debugged who will write the code generating the logs, there is little chance of having something useful as a result.

Another solution is to have a switch that permits using the software under test in a debug mode that does not encrypt and decrypt the communications. A first problem is that this increases the probability of forgetting to turn the switch off after debugging, or of using the wrong setting in production. Another problem is that the secure and unsecure modes may very well use different code paths, and so may behave differently. And lastly, some communication protocols, especially the modern ones, do not have an unsecure setting (for example RELOAD and RTCWEB), and for very good reasons.

A slightly better solution, which can reduce the difference between code paths and works with secure-only protocols, is to use a null cipher, but for security reasons both sides must agree beforehand to use it. That in fact probably increases the probability that someone forgets to switch off the null cipher after a test.

So the only remaining solution is to somehow decrypt the packets in Wireshark. The standard way of doing that is to install the private RSA key in Wireshark, which immediately creates even more problems:

  • This cannot be used to debug programs in production, because the sysadmin will never accept to give away the private key.
  • This does not work if the private key is stored in a hardware token, like a smartcard, as, by design, it is impossible to read the private key from these devices.
  • Even with the private key, the SSL modes that can be used are limited. For example a Diffie–Hellman key exchange cannot be decrypted by Wireshark.

Fortunately, since version 1.6 Wireshark can use SSL session keys instead of the private key. Session keys are keys that are created for a specific session only, and disclosing them does not disclose the private key. This solves most of the problems listed above:

  • Sysadmins can disclose only the session keys related to a specific session.
  • Session keys are available even if the private key is stored in a hardware token.
  • Session keys are the result of, e.g., a Diffie–Hellman key exchange, so there is no need to restrict the SSL modes for debugging.

Now we just need the program under test to store the session keys somewhere so Wireshark can use them. For example the next version of NSS (the security module used by Firefox, Thunderbird and Chrome) will have an environment variable that can be used to generate a file directly usable by Wireshark (see this link for more details).
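
For example, assuming the NSS build reads the now usual SSLKEYLOGFILE environment variable, a Firefox debugging session would look like this:

$ export SSLKEYLOGFILE=/tmp/mykeys.log
$ firefox &
$ wireshark -o ssl.keylog_file:/tmp/mykeys.log -k -i any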

Adding support for this format in Java would require maintaining a modified build of Java, which can be inconvenient. A simpler solution is to process the output of the -Djavax.net.debug=ssl,keygen debug option. The just-uploaded Debian package named keygen2keylog contains a program that does this for you. After installation, start Wireshark in capture mode with the name of the SSL session key file that will be generated as a parameter, something like this:

$ wireshark -o ssl.keylog_file:mykeys.log -k -i any -f "host implementers.org"

(Remember that you do not need to run Wireshark as root to capture packets if you run the following command after each update of the package: sudo setcap 'CAP_NET_RAW+eip CAP_NET_ADMIN+eip' /usr/bin/dumpcap.)

Then you just need to pipe the debug output of your Java program to keygen2keylog to see the packets being decrypted in Wireshark, e.g.:

$ java -Djavax.net.debug=ssl,keygen -jar mycode.jar | keygen2keylog mykeys.log

And the beauty of this technique is that the packets are decrypted as they are captured.

NAT64 discovery

Last week I volunteered to review draft-ietf-behave-nat64-discovery-heuristic, an IETF draft that describes how an application can discover a NAT64 prefix that can be used to synthesize IPv6 addresses for embedded IPv4 addresses that cannot be automatically synthesized by a DNS64 server (look here for a quick overview of NAT64/DNS64).

I am not a DNS or IPv6 expert, so I had to do a little bit of research before starting to understand the draft, and it looked interesting enough that I decided to write an implementation, which is probably the best way to find problems in a draft (and seeing how often I find bugs in published RFCs, that should be a mandatory step, but that's another discussion). I installed a PC with the Linux Live CD of ecdysis, and configured it to use a /96 subnet of my /64 IPv6 subnet. After this I just had to add a route on my development computer (sketched below) to be able to use NAT64. I did not want to change my DNS configuration, so I forced the nameserver in the commands I used. With that configuration I was able to retrieve a synthesized IPv6 address for a server that does not have an IPv6 address, then ping6 it:

$ host -t AAAA server.implementers.org 192.168.2.133
server.implementers.org has IPv6 address 2001:470:1f05:616:1:0:4537:e15b

$ ping6 2001:470:1f05:616:1:0:4537:e15b
PING 2001:470:1f05:616:1:0:4537:e15b(2001:470:1f05:616:1:0:4537:e15b) 56 data bytes
64 bytes from 2001:470:1f05:616:1:0:4537:e15b: icmp_seq=1 ttl=49 time=49.4 ms
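
The route mentioned above was along these lines (the address of the ecdysis box is made up for this example):

$ sudo ip -6 route add 2001:470:1f05:616:1::/96 via 2001:470:1f05:616::2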

As said above, the goal of NAT64 discovery is to find the list of IPv6 prefixes. The package nat64disc, which can be found at the usual place in my Debian/Ubuntu repository, contains one command, nat64disc, that can be used to find the list of prefixes:

$ nat64disc -d ipv4only.implementers.org -n 192.168.2.133 -l
Prefix: 2001:470:1f05:616:1:0:0:0/96 (connectivity check: nat64.implementers.org.)

When the draft is published, the discovery mechanism will use the domain “ipv4only.arpa.” by default, but this zone is not populated yet, so I added the necessary record to ipv4only.implementers.org so the tool can be used immediately. This domain name must be passed with the -d option on the command line.

As explained above, I did not want to modify my DNS configuration, so I had to force the address of the nameserver (i.e. the DNS64 server) on the command line, with the -n option. Interestingly this triggered a bug in Java: when the nameserver is forced, the resolver sends an ANY request, which is not processed by DNS64. People interested in the workaround can look in the source code, as usual (note that there is another workaround in the code, also related to a resolver bug, that prevents using IPv6 addresses in /etc/resolv.conf).

I also provisioned a connectivity server for my prefix, as shown in the result. If the tool finds a connectivity server associated with a prefix, it will use it to check the connectivity and remove the prefix from the list of prefixes if the check fails.

The tool can also be used to synthesize an IPv6 address:

$ nat64disc -d ipv4only.implementers.org -n 192.168.2.133 69.55.225.91
69.55.225.91 ==> 2001:470:1f05:616:1:0:4537:e15b

and to verify that an IPv6 address is synthetic:

$ nat64disc -d ipv4only.implementers.org -n 192.168.2.133 2001:470:1f05:616:1:0:4537:e15b
2001:470:1f05:616:1:0:4537:e15b is synthetic
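
For a /96 prefix, the synthesis itself is simple: the four bytes of the IPv4 address are appended to the first 96 bits of the prefix. Here is a sketch in Java of this case only (other prefix lengths use the more complex mapping of RFC 6052):

import java.net.InetAddress;
import java.net.UnknownHostException;

class Nat64 {
  // Embed an IPv4 address in the last 32 bits of a 96-bit NAT64 prefix,
  // e.g. 69.55.225.91 (0x4537e15b) becomes ...:1:0:4537:e15b as above.
  static InetAddress synthesize(byte[] prefix, InetAddress ipv4)
      throws UnknownHostException {
    byte[] address = new byte[16];
    System.arraycopy(prefix, 0, address, 0, 12);            // 96-bit prefix
    System.arraycopy(ipv4.getAddress(), 0, address, 12, 4); // embedded IPv4
    return InetAddress.getByAddress(address);
  }
}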

The tool does not process DNSSEC records yet, and I will probably not spend time on this (unless, obviously, someone pays me to do it).

Jarc: Annotation processors

Version 0.2.27 of jarc now supports running annotation processors when a jar file is built. The syntax is simple: just add an X-Jarc-Processors attribute in the manifest file header with a list of jar files, and jarc will automatically run the processors found by querying the META-INF/services/javax.annotation.processing.Processor file inside these jar files:

Manifest-Version: 1.0
Main-Class: org.implementers.apps.turnuri.Main
X-Jarc-Target: 1.7
X-Jarc-Processors: /usr/share/lib/processor.jar

Name: org/implementers/apps/turnuri/Main.class

In turn, it is easy to build an annotation processor with jarc; here's the manifest file for a processor I am currently developing:

Manifest-Version: 1.0
Class-Path: /usr/share/java/jcip.jar
X-Jarc-Target: 1.6

Name: org/implementers/management/annotations/processing/Main.class
X-Jarc-Service: javax.annotation.processing.Processor

Note that the Java compiler does not run processors on dependent files, so you need to add a “Name:” attribute for all Java files that need to be processed.
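
For reference, a processor declared this way is an ordinary javax.annotation.processing processor; a minimal skeleton looks like this (the annotation type name is made up):

import java.util.Set;
import javax.annotation.processing.AbstractProcessor;
import javax.annotation.processing.RoundEnvironment;
import javax.annotation.processing.SupportedAnnotationTypes;
import javax.annotation.processing.SupportedSourceVersion;
import javax.lang.model.SourceVersion;
import javax.lang.model.element.TypeElement;

@SupportedAnnotationTypes("org.implementers.management.annotations.Attribute")
@SupportedSourceVersion(SourceVersion.RELEASE_6)
public class Main extends AbstractProcessor {
  @Override
  public boolean process(Set<? extends TypeElement> annotations,
      RoundEnvironment roundEnv) {
    // Use processingEnv.getFiler() here to generate new source files.
    return true;
  }
}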

Also, starting with this version, JDK 1.7 is required to run jarc – but jarc can still cross-compile for all versions of Java starting with 1.5.

Network application demos in coffeehouses

Preparing a demo in your office is stressful enough – the demo needs to be tested and rehearsed again and again until the time of the demo itself. A demo of a network application is even more challenging, because even in a controlled environment like an office you never know when a coworker will bring down the network by transferring huge files, or when a system admin will decide to do maintenance on a server at the same time (all real life examples, sadly). But there is nothing worse than having to demo a network application in a coffeehouse or a restaurant, which is, it seems, where 99% of these demos are done in the Bay Area. I did a lot of demos of a network application in such conditions in the last 18 months, without a single failure, so this blog entry explains the environment I used, in the hope that it will be useful to someone else.

The WiFi in these places is generally slow and flaky, so the worst thing to do would be to use it. That's OK: I do not trust so-called experts to provide me a decent network, so why would I expect people whose job is to prepare a good cup of coffee to do better? Instead I used a laptop on which I installed the same software that is running in our data center, and I configured the laptop to be a WiFi Access Point. It was not the network application that was tailored to run on a special network (which would have required building an application different from the real product); it was my laptop that was simulating a very small instance of the Internet.

The first step was to find a WiFi adapter that could be used as a WiFi Access Point (AP). I needed an adapter that is supported by the hostapd Debian package, so I chose one from SIIG. The configuration of hostapd looks like this (/etc/hostapd/hostapd.conf):

interface=wlan1
driver=nl80211
ssid=demo
country_code=US
hw_mode=g
channel=5
macaddr_acl=0
auth_algs=3
wpa=2
wpa_passphrase=Wrokwait3
wpa_key_mgmt=WPA-PSK WPA-EAP

The adapter needs an IP address, which is configured in /etc/network/interfaces:

allow-hotplug wlan1
iface wlan1 inet static
  address 10.254.251.1
  netmask 255.255.255.0

We need a DHCP server (package isc-dhcp-server) to allocate IP addresses to the devices that will connect to our AP (/etc/dhcp/dhcpd.conf):

subnet 10.254.251.0 netmask 255.255.255.0 {
  range 10.254.251.10 10.254.251.20;
  option routers 10.254.251.1;
  option domain-name-servers 10.254.251.1;
}

You can see here that the laptop will also be our DNS server. As I explained above, we cannot depend on a connection to the real Internet, so we will have to also serve DNS requests. My favorite DNS server is djbdns, so after installing the package I created a new tinydns instance (tinydns is the djbdns component that implements an authoritative DNS server):

$ tinydns-conf tinydns tinydns /etc/service/tinydns 10.254.251.1

The next step was to start tcpdump on the IP address of the AP to see what requests the network application sends. For each of them I needed to install the corresponding server and to add the DNS resource records in djbdns. For example the devices I used for the demo (Android Nexus One) need to synchronize with NTP, so I installed the ntp server and made it run on the AP (/etc/default/ntp):

NTPD_OPTS='-g -I wlan1'

Then I added the following lines in djbdns to redirect the devices to my NTP server (/etc/service/tinydns/root/data):

.ntp.org:10.254.251.1:ns1.ntp.org
=north-america.pool.ntp.org:10.254.251.1:3600

Finally, because my demo devices were Android based, I printed the QR Code of the WiFi demo network and taped it directly on the WiFi adapter, so the Android devices could be easily configured with the ZXing app.
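
For the network configured above, the payload of the QR Code uses the ZXing WiFi syntax:

WIFI:T:WPA;S:demo;P:Wrokwait3;;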

Jarc: Now running Java 1.8

I like to learn new Java features before they are officially released, and that requires using unstable builds. The difficulty is integrating the new compiler into a build – for JDK 1.7 I released jarc as an experimental package, but that was not a very good solution.

Since version 0.2.26, jarc can use an experimental compiler, like the one supporting lambdas. If you installed the new JDK at /home/petithug/jdk8, you only need to add the following lines to the /etc/jarc.conf file to be able to build jar files that use closures:

jdk-java_8-openjdk=/home/petithug/jdk8/bin/java
jdk-tools_8-openjdk=/home/petithug/jdk8/lib/tools.jar
canonical_8=1.8-openjdk
canonical_1.8=1.8-openjdk
canonical_8-openjdk=1.8-openjdk
canonical_1.8-openjdk=1.8-openjdk
jre-check_1.8-openjdk=/home/petithug/jdk8/jre/bin/java
jre-bootclasspath_1.8-openjdk=/home/petithug/jdk8/jre/lib/rt.jar:/home/petithug/jdk8/jre/lib/jce.jar
jre-source_1.8-openjdk=1.8
jre-exec_1.8-openjdk=/home/petithug/jdk8/jre/bin/java

Jarc always uses the most recent compiler by default, but you can override this with the -Jjdk=7 or -Jjdk=6 option.

The new version of jarc also supports passing parameters to the JVM – either at build time or at run time – by using the -J option.

Finally it is now possible to add an X-Jarc-Debug parameter at the manifest level. This option works just like the -g option in javac. I added this option to be able to build programs for aparapi – more about this in a future post.

RELOAD: The Wireshark dissector

I talked in a previous blog entry about the importance of assembling a set of good tools for development, and one of my favorite tools is Wireshark.

With my colleague Stèphane, we prepared a new version of the RELOAD dissector that now covers not only the full specification (draft-ietf-p2psip-base) but also the current standard extensions of RELOAD. The goal for this new version was not only to cover 100% of the specification, but also to do it in a way that primarily helps developers, because even if I dislike the idea of people developing protocol implementations by using packet dissection, the reality is that they are doing it, so we may as well be sure that the information displayed by the dissector is correct. So we tried as much as possible to dissect the protocol in a way that presents the information on the screen as closely as possible to the way it is described in the specification.

As for the RELOAD extensions, the following data structures are decoded by the new code:

  • SipRegistration in the “SIP Usage for RELOAD” (draft-ietf-p2psip-sip) specification.
  • RedirServiceProvider in the “Service Discovery Usage for RELOAD” (draft-ietf-p2psip-service-discovery) specification.
  • SelfTuningData in the “Self-tuning DHT for RELOAD” (draft-ietf-p2psip-self-tuning) specification.
  • DiagnosticsRequest, DiagnosticsResponse, PathTrackReq and PathTrackAns in the “P2PSIP Overlay Diagnostics” (draft-ietf-p2psip-diagnostics) specification.
  • ExtensiveRoutingModeOption in the “An extension to RELOAD to support Direct Response Routing” (draft-zong-p2psip-drr) specification.

We even prepared the work to decode RELOAD messages inside HIP, as described in the “HIP BONE Instance Specification for RELOAD” (draft-ietf-hip-reload-instance) specification.

The new code is not yet committed in the Wireshark tree, but it is available in the bug database (please vote for it if you can).

On request I can provide a compiled Debian/Ubuntu package for the i386 or amd64 architectures.

10/06/2011: The main patch is now committed in the Wireshark trunk. The fix for a bug in defragmentation still needs to be applied manually.

10/08/2011: All the patches are now committed in the Wireshark trunk. Enjoy!

RELOAD: implementers mailing-list

A mailing-list dedicated to implementers of RELOAD (draft-ietf-p2psip-base) has been created and will be announced very soon on the P2PSIP mailing-list. This is not an official IETF mailing-list, but a place to discuss implementation details of the RELOAD protocol and its extensions (similar to what the sip-implementors mailing-list does for SIP).

The registration page is at http://implementers.org/mailman/listinfo/reload, and the current description is this:

"This list is for discussing implementation issues for the current version of RELOAD, including questions on current protocol features and extensions. Protocol development issues are discussed on the p2psip@ietf.org list.

The first posting for a new member is always moderated, so it can take up to 24 hours for this post to appear.

Commercial advertisements of any form (for products, software, jobs) are inappropriate. Announcements related to RELOAD interoperability test events or FOSS software are welcome. Product names may be mentioned if necessary ("Software X, Y and Z does it that way."), while disparaging or general comments ("Software W sucks rocks.") are inappropriate. Because there is no way to correctly process them, emails containing legal boilerplate are inappropriate.

Never cross-post a message to both the RELOAD list (p2psip@ietf.org) and this list."