RELOAD: Extensions

IETF people do not live on the same planet as us – and I mean that literally. My theory is that they live on a planet with a very strange orbit that puts them within communication distance of Earth only 3 times per year, roughly around March, July and November, which could explain why it is mostly impossible to talk to them outside of these 3 months. During the rest of the time, the speed of light limits our ability to communicate with them, and it takes between one week and one month to receive a response to an email because of the huge distance (that, or their planet is so dense that time runs considerably more slowly than ours).

Anyway, that is how it looks to me and that is very frustrating, especially as I got burned a few years ago for the exact same reasons. Forking a specification is mostly prevented by the IETF license, so that was not an option; instead I am trying something else: I published a document containing all the modifications I would have liked to see in the RELOAD specification. Here is the list of these modifications, with some explanations on how to try them either with the source code or on the test server:

1. Client connected to bootstrap node

I am working on a major release of the libreload-java package that will contain the code for a RELOAD client, so I think it is simpler to wait for this release for the explanations on this extension.

2. Service Name and Path

The RELOAD document uses the “p2psip” name both in the SRV RRs and in the URL used to retrieve the configuration file. The problem is that RELOAD is no longer specific to P2PSIP, so this name should be “p2p” instead.

The DNS RR for the RELOAD test server can now be retrieved with the following command:

$ host -t SRV has SRV record 40 0 443

Similarly, the configuration file can be retrieved with the following command:

$ curl --resolve

Note that the standard names are still working.

3. Multiple Node-IDs for self-signed certificates

CA signed certificates can contain multiple Node-IDs, but self-signed certificates cannot. This extension fixes this problem.

Version 0.4.0 of the libreload-java package, which was released a few minutes ago, now contains a static method in the SignerIdentity class to generate a self-signed certificate with one Node-ID, and a second static method to generate a self-signed certificate with multiple Node-IDs.

4. Re-sign certificates

With the current specification, it is not possible to extend the validity of a certificate because the Node-ID must be generated by the enrollment server. That basically means that each time the certificate expires, the peer has to get a new Node-ID, which means leaving the overlay, joining with the new Node-ID and re-storing all the data that were stored under the previous Node-ID.

The extension permits renewing a certificate by sending the certificate itself to the enrollment server instead of a certificate signing request, with something like this:

$ wget "" \
  --post-file=cert.der --header "Content-Type: application/pkix-cert" \
  --header "Accept: application/pkix-cert" -O cert2.der

You can even increase or decrease the number of Node-IDs in the resulting certificate, which can be useful.

Note that this works only if you keep the same key pair. If you want to change it then you need to send a certificate signing request, which will result in a certificate containing new Node-IDs. This is consistent with the way self-signed certificates work.

RELOAD: Test server

It took two very frustrating weeks, but I finally managed to install a public RELOAD Configuration and Enrollment test server. The frustration is a consequence of my self-imposed list of requirements: full implementation of the RELOAD spec, IPv4 and IPv6 support, JMX management over TLS with client certificates, and the private key for the RELOAD CA stored in a PKCS#11 token. The code is still not perfect, but it at least fulfills all the requirements.

The libreload-java library contains everything needed to process the data returned by the server, but it is possible to use command-line tools to have a look at the configuration file and to generate RELOAD certificates. First step, finding the IP address of the configuration server for the “” overlay using the DNS:

$ host -t SRV has SRV record 40 0 443
$ host has address has IPv6 address 2604:3400:dc1:41:216:3eff:fe5b:8240

Next step, retrieving the current version of the configuration file:

$ curl --resolve

The configuration file contains the current URL of the enrollment server.

The next step is to generate an RSA key pair:

$ openssl genrsa -out cert.rsa
$ openssl pkcs8 -in cert.rsa -out cert.key -topk8 -nocrypt

Then we can generate a certificate request in DER form:

$ openssl req -new -key cert.key -outform der -out cert.req

The certificate request can be sent to the enrollment server, which will use it to generate a certificate:

$ wget "" --post-file=cert.req --header "Content-Type: application/pkcs10" --header "Accept: application/pkix-cert" -O cert.der

Note that a password is mandatory but any password can be used at this time, as there is no user management in the server yet. The certificate returned will contain one user name and one Node-ID, but a certificate with multiple Node-IDs can be requested with the nodeids= parameter, as specified in version -15 of the RELOAD I-D. The content of the certificate can be displayed with something like this:

$ openssl x509 -noout -text -inform DER -in cert.der

I also released a new version of the libreload-java package that contains some bug fixes and improvements related to version -15 of the I-D. For example the library can now generate a signed configuration file, even if the test server is not using it yet.

Update 6/2/2011: Use curl instead of wget so we can force the IP address and port to the result of the SRV query.

Update 8/13/2012: See this post for the updated version of the servers.

RELOAD: Configuration and Enrollment

The libreload-java package version 0.2.0 was released a few minutes ago. Because this version starts to contain bits and pieces related to RELOAD itself, the access control policy script tester that was explained in a previous post is now in a separate package named libreload-java-dev. There is nothing new in this package, as I am waiting for a new version of draft-knauf-p2psip-share to update it.

The libreload-java package itself now contains the APIs required to build a RELOAD PKI. The package does not contain the configuration and enrollment servers themselves, as many different implementations are possible, from command-line tools to deployable servlets. The best way to understand how this API can be used is to walk through the various components of a complete RELOAD PKI:

1. The enrollment server

The enrollment server provides X.509 certificates to RELOAD nodes (clients or peers). It acts as the Certificate Authority (CA) for a whole RELOAD overlay without requiring the services of a top-level CA. That means that anybody can deploy and manage their own RELOAD overlay (note that the enrollment server itself requires an X.509 certificate signed by a CA for the HTTPS connection, but that is independent from the PKI function itself). Creating the CA private key and certificate is simple, for example with openssl:

$ openssl genrsa -out ca.rsa 2048
$ openssl pkcs8 -in ca.rsa -out ca.key -topk8 -nocrypt
$ openssl req -new -key ca.key -out ca.csr
$ openssl x509 -req -days 365 -in ca.csr -signkey ca.key -out ca.crt

Obviously it is very important to keep the secret key, hmm, secret, so using a physical token like a TPM or a smartcard can be a good idea.

The second thing that the enrollment server must do is restrict the creation of certificates to people authorized to do so. The simplest way is to issue logins/passwords, but there are obviously other ways.

The enrollment server will receive a certificate request (formatted with PKCS#10), and will send back an X.509 certificate containing one or more URIs, each containing a Node-ID. The RELOAD API provides a static method to build a valid Node-ID that can be used like this:

NodeId nodeId = NodeId.newInstance(16, new SecureRandom());
new URI("reload", nodeId.toUserInfo(), "", -1, "/", null, null);

The resulting URI looks something like this:


That’s the only API required for the enrollment server. All the other stuff – certificate request parsing, certificate generation – can easily be done by using a standard library like Bouncycastle.
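For illustration only, here is a minimal sketch of what such a URI construction might look like if the userinfo form were a plain hex encoding of the Node-ID. The hex encoding, the class name and the example.org host are assumptions for this sketch, not the library's actual API:

```java
import java.net.URI;
import java.net.URISyntaxException;
import java.security.SecureRandom;

public class NodeIdSketch {
    // Hex-encode a Node-ID; the real library's toUserInfo() encoding
    // may differ -- hex is an assumption here.
    static String toHex(byte[] id) {
        StringBuilder sb = new StringBuilder(id.length * 2);
        for (byte b : id) {
            sb.append(String.format("%02x", b & 0xff));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws URISyntaxException {
        byte[] id = new byte[16];           // a 16-octet Node-ID
        new SecureRandom().nextBytes(id);
        // Same constructor shape as the library example above.
        URI uri = new URI("reload", toHex(id), "example.org", -1, "/", null, null);
        System.out.println(uri);
    }
}
```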

Note that an enrollment server is not required when self-signed certificates are used.

2. The configuration server

The configuration server provides configuration files, which are XML documents that contain all the information required to join a specific overlay. A configuration file is represented in the API as a Configuration object, which is immutable. A Configuration object can be created either by parsing an existing document (using the parse static method) or by building one from scratch using the Builder helper class. The Builder helper class currently supports only the subset needed for implementing a PKI, i.e. it permits setting the CA certificate, the URL of the enrollment server and the signer of the configuration document. The API can be used like this to generate an initial configuration document:

Configuration.Builder builder = new Configuration.Builder("");
builder.enrollmentServers().add(new URI(""));
builder.bootstrapNodes().add(new InetSocketAddress("", 6084));
Configuration configuration =, signerPrivateKey);
byte[] config = configuration.toByteArray();

The signerPrivateKey object is the private key that was created for the signer certificate, and the signer object is an instance of the SignerIdentity class that was created from the same certificate:

SignerIdentity signer = SignerIdentity.identities(certificate).get(0);

Subsequent versions of the configuration document for a specific overlay will need to be signed by the same signer.

It is easy to create a new version of a configuration document by using the API. For example to change the no-ice value and resign the document:

conf = new Configuration.Builder(conf).noIce(true).build(signer, signerPrivateKey);

The sequence number will automatically increase by one (modulo 65535) in the new document.
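A minimal sketch of one plausible reading of that wrap-around rule, with the value kept in the range 1..65535 (a hypothetical helper, not the library's code):

```java
public class SequenceSketch {
    // Next configuration sequence number, wrapping so the value
    // stays in 1..65535 -- one reading of "increase by one (modulo 65535)".
    static int next(int seq) {
        return seq % 65535 + 1;
    }

    public static void main(String[] args) {
        System.out.println(next(1));      // 2
        System.out.println(next(65535));  // wraps back to 1
    }
}
```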

Note that this API can change in future versions. It will be frozen only in version 1.0.0, which cannot happen until the RELOAD specification is approved by the IESG for publication.

Jarc: Now running Java 1.7

At last, installable packages for OpenJDK 7 are available in the experimental Debian repository. So I released a new version of jarc that now directly supports Java 7 code but, because the openjdk-7 packages are in an experimental repository, I uploaded jarc 0.2.21 to an experimental repository so people will not be hit by dependency issues. I updated my Ubuntu/Debian repository page to explain how to add my experimental repository to the repository configuration file.

The reason for releasing this experimental version of jarc is that I need two of the new features of the new JVM for my RELOAD implementation. The first one is support for TLS 1.2, aka RFC 5246, which is used by RELOAD to authenticate and encrypt all the TCP connections. The only thing missing to completely implement RELOAD will be an implementation of DTLS 1.0 (which is to datagrams what TLS is to streams). I guess I will have to bite the bullet and write this one myself.

The second feature that I needed is an implementation of the SCTP protocol. TCP is great for transferring files but it has one major flaw when used with multiplexed client/server transactions, called the head-of-line problem – a lost message will block all the subsequent messages, even if they belong to a different transaction, until the lost message is retransmitted and received. UDP does not have this issue but comes with its own set of problems. SCTP is somewhere between UDP and TCP, and this is why, in my opinion, it is a good transport protocol for something like RELOAD (well, between RELOAD nodes with a public IP address, as most NATs – one exception being the one whose development I supervised at 8×8 – do not NAT SCTP).
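The head-of-line effect is easy to reproduce in a toy model of a TCP-like in-order receiver (a sketch of the ordering rule, not real TCP):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;

public class HolBlocking {
    private final TreeMap<Integer, String> buffer = new TreeMap<>();
    private int nextSeq = 1;

    // Accept a segment; return every message that becomes deliverable,
    // in order. A gap (a lost segment) blocks all later arrivals, even
    // those that belong to unrelated transactions.
    List<String> receive(int seq, String msg) {
        buffer.put(seq, msg);
        List<String> delivered = new ArrayList<>();
        while (buffer.containsKey(nextSeq)) {
            delivered.add(buffer.remove(nextSeq));
            nextSeq++;
        }
        return delivered;
    }
}
```

If segment 2 is lost, receiving segments 3 and 4 delivers nothing; only when the retransmitted segment 2 finally arrives are 2, 3 and 4 all delivered at once.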

So the plan will be to add SCTP to RELOAD, using either TLS (RFC 3436) or DTLS (RFC 6083) depending on the DTLS implementation described above, and to write an I-D, perhaps in time for IETF 81 in Quebec City.

RELOAD: Access Control Policy distribution

This blog had been quiet for the last month because I was working on an implementation of RELOAD, the peer-to-peer protocol developed at the IETF as a component of peer-to-peer SIP (I do not plan to develop a peer-to-peer SIP implementation, but I needed a RELOAD implementation for another project). The plan is to release a complete implementation of RELOAD as a Java library under an Affero GPL 3 license and to let the company I co-founded, Stonyfish Inc, sell commercial licenses to people in need of a less restrictive license. The code will be released progressively between now and July, so keep an eye on this blog for progress.

Anyway, during the development I came to see some limitations in the RELOAD specification (which is still not an RFC). Most of them were solved by the P2PSIP Working Group, and there are still a few that are waiting for discussion and decision by the WG. Hopefully all will be solved before the final release. But there was one specific problem that required a somewhat different treatment, and this is the subject of this blog entry.

RELOAD is, among other things, a storage protocol. Any user of a network of RELOAD servers (an overlay) can store data in it, and these data are automatically replicated and available to any other user. Because the overlay is designed to be secure even in the presence of users with less than nice goals, only the user that stored a piece of data can modify it. The rules that are used to decide who can or cannot write or modify a piece of data in an overlay are grouped into what is called an Access Control Policy. There are four different Access Control Policies defined in the RELOAD specification, and the intent was that these four policies would cover most future needs. And even if there is a way to add new Access Control Policies, only a limited number would be defined.
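To make the idea concrete, here is a sketch of the kind of check such a policy performs, loosely modeled on the specification's user-name-matching style of policy: a write is allowed only when the target Resource-ID is derived from the signer's user name. The hash choice and the truncation below are assumptions for the sketch, not the normative algorithm:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Arrays;

public class UserMatchSketch {
    // The Resource-ID a user "owns": leading bytes of SHA-1(username).
    static byte[] resourceIdFor(String username, int idLength) {
        try {
            byte[] digest = MessageDigest.getInstance("SHA-1")
                    .digest(username.getBytes(StandardCharsets.UTF_8));
            return Arrays.copyOf(digest, idLength);
        } catch (NoSuchAlgorithmException e) {
            throw new AssertionError(e); // SHA-1 is always available
        }
    }

    // The write is permitted only when the target Resource-ID is the
    // one derived from the user name in the signer's certificate.
    static boolean writeAllowed(String certUsername, byte[] resourceId) {
        return Arrays.equals(resourceIdFor(certUsername, resourceId.length), resourceId);
    }
}
```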

Unfortunately, it turns out that there is a need for more than the four existing policies. After a survey of all the existing proposals for new types of data to be stored in an overlay (such a proposal is called a Usage), I discovered that more than 50% of the new Usages require a new Access Control Policy. In my opinion that creates a problem that could kill the usefulness of RELOAD before it even starts to be deployed.

Let’s say that I start a new overlay and that I distribute my software to hundreds of thousands of users, each of them using the overlay to store their data in the way that was defined in this first version. Everything works fine until I decide to introduce a new feature that requires a new Access Control Policy. The problem is that it is not only the users of this new feature that will have to upgrade their copy of the software. No, to be able to even start deploying this new feature, I will have to wait until ALL the users upgrade the software. If the story of IE6 teaches us anything, it is that this will never happen. And the problem is even worse if the software used comes from different vendors.

So the proposal I made to the P2PSIP Working Group is to automatically distribute the code of a new Access Control Policy, without having to upgrade the software. This way instead of waiting months or years to deploy a new feature, it will take only 24 hours or so to have all the users ready to store the new data.

Obviously this code has to be portable so it can be executed by any language used to write the RELOAD implementations in an overlay. So I chose JavaScript (or more precisely ECMAScript) to do that – not because I like the language (JavaScript is the rubber band that holds the Web together, and I do not mean that in a nice way) but because, thanks to the current Web browsers war, there are very good implementations available.

I am presenting this concept at IETF 80 in Prague on March 31st. You can read the Internet-Draft or the slides for the presentation if you cannot attend.

Application developers and DNS TTL

During the technical plenary of the 73rd IETF meeting in Minneapolis, MN, Dave Thaler made the interesting point that most DNS resolver APIs do not return the TTL of the resource resolved, e.g. an IP address. At the time he was proposing a modification of the existing APIs, and that has kept me thinking ever since.

The problem is that programmers generally think that resolving a domain name and then storing the resulting IP address(es) is a good idea, as this technique could yield better responsiveness for an application. But doing that without a mechanism to know when the result of the resolution becomes invalid creates an even bigger problem. During my time at 8×8, I worked on the problem of designing a scalable Internet service, and my design was focused on having the scalability driven by the client (in other words: the whole load balancers idea is evil). Such techniques are very cheap and effective, but only if the client strictly obeys the TTL value received in the DNS response (I still have two pending patent applications about these techniques). Another well-known problem is documented in RFC 4472 section 8.2, as keeping an IP address for too long prevents renumbering an IPv6 network, but there are plenty of other cases.

So the idea of passing the TTL together with the result of the DNS query seems like a good one, until you realize that what developers now have to do is implement a DNS cache in their application, and all the evidence shows that this is not a simple task. As can be seen from the number of security vulnerabilities found over the years, even people who do read the RFCs seem to have a hard time doing it right. The Internet could probably do without another wave of incorrectly implemented DNS caches.

So in my opinion, adding the TTL to the API is not the solution – it will just exchange one problem for another. The correct solution is to do the resolution each time the resource is needed and not store the result at all. If performance is impacted too much (after scientifically measuring it, we are among professionals here), then using an external DNS cache will fix the problem. The DNS cache can be in your network (for example having two DNS caches per data center), can be on each server (dnscache from the djbdns suite is easy to install and configure and has a good security track record), or even directly in your application (for example dnsjava contains such a cache).

Bad examples

Implementing Internet protocols is hard. It is hard for a variety of reasons – scalability, reliability and security are really important when anybody can write another implementation of the same protocol and use it to communicate on the Internet with your own implementation – but it is also hard because the specifications used to describe these protocols are not easy to read. It takes time to really understand a specification, which is why they generally contain one or more examples. Unfortunately some developers, instead of trying to understand how a protocol really works, try to implement it by imitating the examples they find. This creates two major problems. Firstly, an example – by definition – can only show a limited subset of all the possibilities of a protocol, and some subtle points will be missed if the normative text is not fully understood. The second problem is that it is not uncommon for examples to contain mistakes, mistakes that will then be seen in careless implementations. I did a search in the RFC errata database and found more than 70 errata on examples, so the problem is real.

In a perfect world, developers would understand that examples are not meant for them, or at least that they are not normative, but there is not much that can be done to improve this. Knowing that developers are lazy and will always take a shortcut if they can find one, the other solution is to improve the quality of the examples in the specifications.

This is what I started to implement in version 0.1.5 of the rfc2629-tools. Normally an example is just an unstructured block of text in an artwork element, something like this (example copied from RFC 5245):

   o=jdoe 2890844526 2890842807 IN IP4
   c=IN IP4
   t=0 0
   m=audio 45664 RTP/AVP 0
   a=rtpmap:0 PCMU/8000
   a=candidate:1 1 UDP 2130706431 8998 typ host
   a=candidate:2 1 UDP 1694498815 45664 typ srflx raddr rport 8998

The problem with this example is that we do not know if it is correct. For example the specification could have changed between the time the example was inserted in the Internet-Draft and the time it was published as an RFC, and nobody checked that the example was still correct.

The solution I used for my own RFC was to generate the examples from the reference code I wrote to check that my specification was implementable. I did this in three steps. First I wrote a reference implementation of the protocol. Then I wrote a set of unit tests to be sure that my implementation was correct. Then I added some code to automatically generate the examples. In RFC 5928, figures 1, 2 and 3, as well as table 2, are all generated by a program, so I am reasonably sure not only that they are correct, but also that they stayed correct during the long process of modifying the Internet-Draft.
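The pattern can be sketched as follows: a single routine both exercises the reference implementation and emits the artwork text, so an incorrect example fails the build instead of reaching the RFC. The encodeUri helper below is a toy stand-in, not the actual RFC 5928 code:

```java
public class ExampleGenerator {
    // Toy encoder standing in for the reference implementation.
    static String encodeUri(String host, int port) {
        return "turn:" + host + ":" + port;
    }

    // One routine both verifies behaviour and emits the figure text,
    // so the published example cannot drift from the implementation.
    static String generateFigure() {
        String example = encodeUri("example.net", 3478);
        if (!example.startsWith("turn:")) {
            throw new AssertionError("reference implementation changed");
        }
        return "   " + example + "\n"; // indented like RFC 2629 artwork
    }

    public static void main(String[] args) {
        System.out.print(generateFigure());
    }
}
```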

The code I wrote for RFC 5928 was an ad hoc program, but I wanted to integrate the possibility of easily generating examples into my rfc2629-tools, so version 0.1.5 contains an example generator for SDP (RFC 4566), as SDP is often used with VoIP protocols. To generate the SDP example shown above, the following XML fragment can be inserted in the XML document:

<figure xmlns:ex="">
    <ex:sdp username="jdoe" session-id="2890844526" session-version="2890842807" session-address="" address="">
      <ex:attribute name="ice-pwd">asd88fgpdd777uzjYhagZg</ex:attribute>
      <ex:attribute name="ice-ufrag">8hhY</ex:attribute>
      <ex:media type="audio" port="45664" protocol="RTP/AVP" format="0">
        <ex:bandwidth type="RS">0</ex:bandwidth>
        <ex:bandwidth type="RR">0</ex:bandwidth>
        <ex:attribute name="rtpmap">0 PCMU/8000</ex:attribute>
        <ex:attribute name="candidate">1 1 UDP 2130706431 8998 typ host</ex:attribute>
        <ex:attribute name="candidate">2 1 UDP 1694498815 45664 typ srflx raddr rport 8998</ex:attribute>
      </ex:media>
    </ex:sdp>
</figure>

Note that the xmlns declaration should be in the rfc element, so it can be removed automatically after the generation (xml2rfc does not like namespaces, so there is a special process to remove them).

Currently only SDP examples can be generated, and even for SDP there is still more work to do to guarantee that no invalid SDP can be generated (for example, the attributes are currently simple character strings; I will add parsing of known attributes, like candidates, in a future version).
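A sketch of what parsing a candidate attribute into typed fields might look like (field layout as in RFC 5245; the sample value in the test is invented for illustration):

```java
public class CandidateAttribute {
    final String foundation;
    final int componentId;
    final String transport;
    final long priority;
    final String address;
    final int port;
    final String type;

    // Parse the value of an "a=candidate:" attribute. Extension
    // fields after the candidate type (raddr, rport, ...) are ignored.
    CandidateAttribute(String value) {
        String[] f = value.trim().split("\\s+");
        if (f.length < 8 || !"typ".equals(f[6])) {
            throw new IllegalArgumentException("not a candidate: " + value);
        }
        foundation = f[0];
        componentId = Integer.parseInt(f[1]);
        transport = f[2];
        priority = Long.parseLong(f[3]);
        address = f[4];
        port = Integer.parseInt(f[5]);
        type = f[7];
    }
}
```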

rfc2629-tools 0.1.4

I just released a new version of my RFC 2629 processing tools. The txt2kindle and xml2mobi commands are unchanged, but the rfc2629 command was rewritten from scratch. I needed to add some extensions to the XSLT processing that were not supported by xsltproc, so I rewrote it in Java. Because of this I was able to add a nice feature for including the references in an RFC 2629 document.

In the previous version of the tool, a reference can be added by using something like this:

<references xmlns:xi="" title="Normative References">
  <xi:include href="" />
</references>

This is still supported, but now an IETF URN (RFC 2648) can be used instead:

<references xmlns:xi="" title="Normative References">
  <xi:include href="urn:ietf:rfc:2543" />
</references>

In addition to being less verbose, this notation has the advantage that I-D references can refer to the current version of an I-D instead of the version that is publicly available. That was already possible in the previous version, but it was cumbersome to use. With the current version, if you pass more than one RFC 2629 file to the rfc2629 command, it will use the version and date of the generated file in the cross-references, instead of the public one.

In the future I will extend this feature to add a local cache. One of the most annoying problems with references (either when using XInclude or <?rfc include ?>) is that you need to be connected to the Internet. It will be possible to store the references on the local disk and use them when there is no Internet connection available (or to always use the cache for RFC references, as they never change).

Note that only the rfc: and id: NSS are currently implemented.
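For illustration, resolving such a URN to the conventional xml2rfc citation file name could look like the sketch below. The file-name convention is the assumption here; the actual tool may resolve the URN differently:

```java
public class IetfUrnResolver {
    // Map an IETF URN (RFC 2648) to the conventional xml2rfc
    // citation-library file name. Only the rfc: and id: NSS are
    // handled, mirroring the tool's current support.
    static String referenceFileFor(String urn) {
        if (!urn.startsWith("urn:ietf:")) {
            throw new IllegalArgumentException("not an IETF URN: " + urn);
        }
        String rest = urn.substring("urn:ietf:".length());
        if (rest.startsWith("rfc:")) {
            return "reference.RFC." + rest.substring(4) + ".xml";
        }
        if (rest.startsWith("id:")) {
            return "reference.I-D." + rest.substring(3) + ".xml";
        }
        throw new IllegalArgumentException("unsupported NSS: " + urn);
    }
}
```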

RFID reader for IETF meeting

I attend the IETF meetings whenever I can and I follow the proceedings remotely when I am not able to travel. But following an IETF session remotely is not simple: you need one application to listen to the audio stream (I use XMMS), another application to connect to the Jabber room (Pidgin) and a PDF or PowerPoint viewer to look at the slides (Evince and On top of this complex setup, there are a lot of annoying problems: the audio stream is not very good, the audio servers crash from time to time, and people forget to speak into the microphone; very often the slides are available only a few minutes before the beginning of a session – it has even happened that the slides were not available at all. The Jabber room is in fact used to state the name of the person talking at the microphone, because people very often forget to do this. Another usage of the Jabber room is to state the current slide on the screen.

Obviously all of this does not make for a good experience. There were multiple attempts to improve this, and I did work at one time on one of these attempts. The project was canceled after I left my job at 8×8, but I still continued to work on it in my spare time. One of the first components I designed was to capture the name of an attendee speaking at a microphone. There were some similar experiments, one during IETF 74 by Columbia University, and another during IETF 76 by the host, but my goal was a little bit different: I wanted the system to protect privacy and to be as non-intrusive as possible, so people had only to stand in front of the microphone to have their RFID tag scanned. The following picture shows how the system works:

The RFID antenna needed to be big enough that the speaker could just stand naturally in front of the microphone. The RW310 has an 11″ range but unfortunately only has a TTL interface, which cannot be directly connected to most PCs, so I used a TTL to USB converter, with the whole circuit fitting inside the USB connector. The box itself was made from plexiglass – I used VariCAD to design the box and an Epilog laser cutter at TechShop to cut the plexiglass.

Finding a way to attach the RFID reader to the microphone stand was the most challenging part – I went through 3 prototypes until I found something that permits changing the distance between the reader and the stand and does not unbalance the whole thing. I finally found a microphone holder that was perfect for this, with a microphone surface-mount attached to the box so the RFID reader can be easily removed at the end of the day.

Privacy is very important for the IETF meeting attendees, so there are some concerns with the introduction of RFID tags – and in my opinion, rightly so. Having an RFID tag without a way to shield it is unwise, so I also designed a badge holder that permits shielding the RFID tag, also in a non-intrusive way, as shown in the following picture:

The idea is simple: either you show your name and give access to your RFID tag, or you hide both. I think that this idea is key here – if you accept that people can read the name on your badge, then there is no reason why you would not let an RFID reader do the same thing with your tag. The top half of the badge holder contains a metallic shield, but the tag itself is in the bottom half. When the holder is open, your name is readable and your tag is not shielded; when closed, both your name and your tag are invisible to the full electromagnetic spectrum.

Having the shield only on one side works because this RFID tag works at 13.56 MHz (with 125 kHz tags, two layers of aluminum need to enclose the tag, and this on both sides). Another reason for the 13.56 MHz tag is that it permits storing the user information directly in the tag, instead of simply retrieving a serial number and matching it against a central database which, in my book, is a no-no, as I want my personal data to be destroyed at the same time I destroy my badge.

Unfortunately the software is not ready at this time, but I’ll try to find some time to finish it if there is enough interest in this project.

RFC2629 tools

Now that I have a Java build that is redistributable, I can finally release the source code of the two tools I wrote to convert I-Ds and RFCs to a format that can be used on a Kindle. The Debian package containing these two tools is named “rfc2629-tools” and can be found at the usual place. In addition to these two tools, I also put in this package an additional tool that I use to process my RFC 2629 XML sources.

Some quick explanations for people new to the IETF process. All IETF documents, standards and others, are published as RFCs (which is no longer an acronym), which are immutable documents. Before being published as an RFC, documents are published as Internet-Drafts (I-Ds). The canonical form of RFCs and I-Ds is a line printer-ready format that is often called the text format. There are other possible formats, like PDF or HTML, but only the text format must be used as reference. Writing a well-formed text I-D or RFC is difficult, so authors generally write it in another format, then use a conversion tool to generate the text, PDF or HTML file. Some people use Winword, others use nroff, but the most interesting format is an XML application defined in RFC 2629. There are tools available to convert an RFC 2629 source to text, PDF, nroff and HTML, but nothing for the Kindle platform.

One of the tools in the rfc2629-tools package takes as input an RFC 2629 source and generates an intermediary file that can then be processed to create a .mobi file that can be loaded on a Kindle. Here are the command lines to do this:

$ xml2mobi draft-ietf-behave-turn-uri-09.xml
$ kindlegen draft-ietf-behave-turn-uri-09.htm

Note that the xml2mobi program is very much a work in progress, as it is incomplete and really needs more work to be able to process any RFC 2629 source. I released the source code because I do not have much time for this currently, so anybody interested can work on a fork. For example, the generated file is named from the content of the docName attribute of the rfc element in the input file.
The kindlegen program can be found on the Amazon Web site.

The IETF Secretariat archives the RFC 2629 source if the author provides it when uploading the text file, but few people do this, so another solution is needed to be able to read I-Ds and RFCs that are only available in text form. This is what the txt2kindle program does:

$ txt2kindle draft-ietf-behave-turn-uri-09.txt

This command creates a file named that can be installed directly in a directory named /pictures on the Kindle 2. On the Kindle 1, the file must be unzipped into the /pictures directory.
Note that the document is displayed as a series of pictures, so you cannot annotate them.

Even if the RFC 2629 source of an I-D is available on the IETF web site, that does not mean that you will be able to generate a mobi file for your Kindle that is identical in content to the canonical version. The first reason is that the full date is generally not set, which means that running a conversion tool on this source will generate a document with the current date instead of the date that is used in the canonical document. The second reason is that people generally use the <?rfc include> statement to insert references. The problem with this is that the text included will change when a new version of the referenced I-D is released, so again it is not possible to generate content identical to the canonical document. The solution to this problem is to generate an RFC 2629 document that always contains a complete date and where all references are already included. This is what the third tool in the package provides:

$ rfc2629 draft-ietf-behave-turn-uri.xml

This command will copy the content of the file passed as input with the following transformations:

  • The destination file is named from the content of the docName attribute (this helps when the document is stored in a source control system).
  • The date in the destination file is filled with the current date.
  • All the XInclude statements are replaced by their content.
  • All comments are removed.

The resulting XML file can be uploaded to the IETF site with a guarantee that the other formats generated from this file will be identical to the files generated by the author.
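Two of the transformations above (filling in the date and removing comments) can be sketched with the standard DOM APIs. This is illustrative code under my own assumptions, not the actual rfc2629 tool:

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

public class Rfc2629Freeze {
    // Recursively remove every XML comment under the given node.
    static void stripComments(Node node) {
        NodeList children = node.getChildNodes();
        for (int i = children.getLength() - 1; i >= 0; i--) {
            Node child = children.item(i);
            if (child.getNodeType() == Node.COMMENT_NODE) {
                node.removeChild(child);
            } else {
                stripComments(child);
            }
        }
    }

    // Pin the first <date> element to an explicit day/month/year.
    static void fillDate(Document doc, int year, String month, int day) {
        NodeList dates = doc.getElementsByTagName("date");
        if (dates.getLength() > 0) {
            Element date = (Element) dates.item(0);
            date.setAttribute("year", Integer.toString(year));
            date.setAttribute("month", month);
            date.setAttribute("day", Integer.toString(day));
        }
    }

    static String freeze(String xml, int year, String month, int day) {
        try {
            Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                    .parse(new InputSource(new StringReader(xml)));
            stripComments(doc);
            fillDate(doc, year, month, day);
            StringWriter out = new StringWriter();
            TransformerFactory.newInstance().newTransformer()
                    .transform(new DOMSource(doc), new StreamResult(out));
            return out.toString();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```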

The same script also implements a feature that permits renaming a reference. A reference to an I-D generally looks like this in the resulting text:

The TURN specification [I-D.ietf-behave-turn] defines a process for a

The map element can be used to change the name displayed:

<map anchor="TURN">
  <xi:include xi:href="" />
</map>

Which, after being processed by the script and xml2rfc, will generate this instead:

The TURN specification [TURN] defines a process for a TURN client to

The last feature of this script (for now) was added when I tried to release two I-Ds at the same time, with each I-D containing a reference to the other. Because the text that will be included is generated after the upload to the IETF, there was no way to automatically use the correct reference. What the script does is generate a reference file automatically in /tmp for all the files processed. It is then easy to include the correctly generated reference in the document. For example draft-ietf-behave-turn-uri.xml and draft-petithuguenin-behave-turn-uri-bis.xml have a reference to each other. So the first step is to change the reference inside draft-ietf-behave-turn-uri.xml like this:

<xi:include xi:href="/tmp/reference.I-D.petithuguenin-behave-turn-uri-bis.xml" />

and to change the reference inside draft-petithuguenin-behave-turn-uri-bis.xml like this:

<xi:include xi:href="/tmp/reference.I-D.ietf-behave-turn-uri.xml" />

Then running the script on the two XML files at the same time generates the correct XML file, ready to be uploaded to the IETF:

$ rfc2629 ../src/share/docs/draft-ietf-behave-turn-uri.xml ../src/share/docs/draft-petithuguenin-behave-turn-uri-bis.xml