Jarc: Annotation processors

Version 0.2.27 of jarc now supports running annotation processors when a jar file is built. The syntax is simple: just add an X-Jarc-Processors attribute to the manifest file header with a list of jar files, and jarc will automatically run the processors found by querying the META-INF/services/javax.annotation.processing.Processor file inside these jar files:

Manifest-Version: 1.0
Main-Class: org.implementers.apps.turnuri.Main
X-Jarc-Target: 1.7
X-Jarc-Processors: /usr/share/lib/processor.jar

Name: org/implementers/apps/turnuri/Main.class

In turn, it is easy to build an annotation processor with jarc; here’s the manifest file for a processor I am currently developing:

Manifest-Version: 1.0
Class-Path: /usr/share/java/jcip.jar
X-Jarc-Target: 1.6

Name: org/implementers/management/annotations/processing/Main.class
X-Jarc-Service: javax.annotation.processing.Processor

Note that the Java compiler does not run processors on dependent files, so you need to add a “Name:” attribute for all Java files that need to be processed.
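For reference, the class registered through the X-Jarc-Service attribute in the manifest above is an ordinary annotation processor. Here is a minimal sketch of one (the org.example.Todo annotation and the class name are made up for the example):

import java.util.Set;
import javax.annotation.processing.AbstractProcessor;
import javax.annotation.processing.RoundEnvironment;
import javax.annotation.processing.SupportedAnnotationTypes;
import javax.annotation.processing.SupportedSourceVersion;
import javax.lang.model.SourceVersion;
import javax.lang.model.element.Element;
import javax.lang.model.element.TypeElement;
import javax.tools.Diagnostic;

// Minimal processor; "org.example.Todo" is a hypothetical annotation.
@SupportedAnnotationTypes("org.example.Todo")
@SupportedSourceVersion(SourceVersion.RELEASE_6)
public class TodoProcessor extends AbstractProcessor {
  @Override
  public boolean process(Set<? extends TypeElement> annotations,
      RoundEnvironment roundEnv) {
    for (TypeElement annotation : annotations) {
      for (Element element : roundEnv.getElementsAnnotatedWith(annotation)) {
        // emit a compile-time note for each annotated element
        processingEnv.getMessager().printMessage(
            Diagnostic.Kind.NOTE, "found @Todo on " + element);
      }
    }
    return true; // claim the annotation
  }
}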

Also starting with this version, JDK 1.7 is required to run jarc – but jarc can still cross-compile for all versions of Java starting with 1.5.

Jarc: Now running Java 1.8

I like to learn new Java features before they are officially released, and that requires using unstable builds. The difficulty is integrating the new compiler into a build – for JDK 1.7 I released jarc as an experimental package, but that was not a very good solution.

Since version 0.2.26, jarc can use an experimental compiler, like the one supporting lambdas. If you installed the new JDK at /home/petithug/jdk8, you only need to add the following lines to the /etc/jarc.conf file to be able to build jar files that use closures:

jdk-java_8-openjdk=/home/petithug/jdk8/bin/java
jdk-tools_8-openjdk=/home/petithug/jdk8/lib/tools.jar
canonical_8=1.8-openjdk
canonical_1.8=1.8-openjdk
canonical_8-openjdk=1.8-openjdk
canonical_1.8-openjdk=1.8-openjdk
jre-check_1.8-openjdk=/home/petithug/jdk8/jre/bin/java
jre-bootclasspath_1.8-openjdk=/home/petithug/jdk8/jre/lib/rt.jar:/home/petithug/jdk8/jre/lib/jce.jar
jre-source_1.8-openjdk=1.8
jre-exec_1.8-openjdk=/home/petithug/jdk8/jre/bin/java

Jarc always uses the most recent compiler by default, but you can override this with the -Jjdk=7 or -Jjdk=6 option.

The new version of jarc also supports passing parameters to the JVM – either at build time or at run time – by using the -J option.
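For example (hypothetical invocations – I am assuming here that jarc takes the manifest file as its argument):

$ jarc -Jjdk=6 manifest.mf
$ jarc -J-Xmx512m manifest.mf

The first command selects the 1.6 compiler; the second passes -Xmx512m to the JVM.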

Finally, it is now possible to add an X-Jarc-Debug parameter at the manifest level. This option works just like the -g option in javac. I added this option to be able to build programs for aparapi – more about this in a future post.
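As an illustration, the attribute would sit in the manifest header like this (a sketch – the value syntax shown, mirroring javac’s -g:lines,vars,source groups, is my assumption):

Manifest-Version: 1.0
Main-Class: org.example.Main
X-Jarc-Debug: lines,vars,source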

RELOAD: The Wireshark dissector

I talked in a previous blog entry about the importance of assembling a set of good tools for development, and one of my favorite tools is Wireshark.

With my colleague Stèphane, I prepared a new version of the RELOAD dissector that now covers not only the full specification (draft-ietf-p2psip-base) but also the current standard extensions of RELOAD. The goal for this new version was not only to cover 100% of the specification, but also to do it in a way that primarily helps developers: even if I dislike the idea of people developing protocol implementations from packet dissection, the reality is that people do it, so we may as well make sure that the information displayed by the dissector is correct. So we tried as much as possible to dissect the protocol in a way that presents the information on the screen as closely as possible to the way it is described in the specification.

As for the RELOAD extensions, the following data structures are decoded by the new code:

  • SipRegistration in the “SIP Usage for RELOAD” (draft-ietf-p2psip-sip) specification.
  • RedirServiceProvider in the “Service Discovery Usage for RELOAD” (draft-ietf-p2psip-service-discovery) specification.
  • SelfTuningData in the “Self-tuning DHT for RELOAD” (draft-ietf-p2psip-self-tuning) specification.
  • DiagnosticsRequest, DiagnosticsResponse, PathTrackReq and PathTrackAns in the “P2PSIP Overlay Diagnostics” (draft-ietf-p2psip-diagnostics) specification.
  • ExtensiveRoutingModeOption in the “An extension to RELOAD to support Direct Response Routing” (draft-zong-p2psip-drr) specification.

We even prepared the groundwork to decode RELOAD messages inside HIP, as described in the “HIP BONE Instance Specification for RELOAD” (draft-ietf-hip-reload-instance) specification.

The new code is not yet committed in the Wireshark tree, but it is available in the bug database (please vote for it if you can).

On request I can provide a compiled Debian/Ubuntu package for the i386 or amd64 architectures.

10/06/2011: The main patch is now committed in the Wireshark trunk. The fix for a bug in defragmentation still needs to be applied manually.

10/08/2011: All the patches are now committed in the Wireshark trunk. Enjoy!

RELOAD: Client/server mode

In a previous post I quickly talked about an extension called “Client connected to bootstrap node”, which is now implemented in version 0.7.0 of the libreload-java package, released a few minutes ago.

Section 3.2.1 of the RELOAD specification talks about the two modes that a RELOAD client can use to interact with an overlay. In the first mode, the client attaches itself to the responsible peer and processes Update messages so it is always connected to the current responsible peer. In the second mode, a client attaches itself to any peer and stores a destination list somewhere in the overlay so other nodes can reach it (P2PSIP can use both of these modes).

I think that there is a third mode missing in the specification, so I added it to my list of extensions. In this mode, the client is connected to one of the bootstrap peers, so it is similar to the second mode described above, but without having to send an Attach. One interesting characteristic of this mode is that it is the initial mode for all nodes. Clients in either of the two modes described above have to start in this third mode. Peers have to start in this third mode, then switch to the first mode, then go to the peer mode. So describing it is, in my opinion, kind of a fundamental step.

Another interesting characteristic of this mode is that an overlay made only of bootstrap peers and clients using this mode is in fact our good old client/server architecture. Bootstrap peers replicate the data between each other and help establish direct connections between clients, all in a secure way.

The new version of the package contains only the code for the client side, and only implements the Store message at this time. Because this code is not that useful without a bootstrap node, I also installed one on the test server (see this post for instructions on how to retrieve the configuration file and request a certificate; the configuration file now contains the address of the bootstrap server to use with the client).

The Node class is the fundamental abstraction that application developers need to care about. A Node object is instantiated with three parameters: a Configuration object that contains the initial configuration of the overlay to join (and especially the list of bootstrap servers), a SignerIdentity object that contains the identity of the node to create, and the private key that was used to create the identity.
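In code, joining an overlay then looks something like this (a sketch – the exact constructor signature is assumed from the description above):

// parse the configuration file retrieved from the configuration server
Configuration configuration = Configuration.parse(configBytes);
// identity corresponding to the node's certificate
SignerIdentity signer = SignerIdentity.identities(certificate).get(0);
// create the node with its configuration, identity and private key
Node node = new Node(configuration, signer, privateKey);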

After creating the node the application can store or modify data in the bootstrap peers by calling the modify method. Here is an example of code that stores the certificate of the node in the bootstrap servers:


// the set of Kind data to store
Set<DataModel<? extends Value>> kinds = new HashSet<DataModel<? extends Value>>();
// wrap the node's own certificate as a storable value
Certificate certificate = new Certificate(signer.certificateChain().get(0));
CertificateByNode certificatesByNode = new CertificateByNode();
certificatesByNode.add(certificate);
kinds.add(certificatesByNode);
// store it under the node's Node-ID as Resource-Name
node.modify(signer.nodeId().toBytes(), kinds);

RELOAD: Interoperability

In a previous post I said that one of my reasons to develop a RELOAD implementation was to implement VIPR. Another reason is to be able to develop software that is resistant to developer bugs.

The Internet is global, which means that a service should be available 24/7; having a web page saying that a service is down for maintenance is just plain incompetence, and companies doing this should be publicly shamed for it. There are multiple ways to build systems that can still work during software updates, operating system bugs and other disasters, and I think that Google did a lot of good in this domain by showing that it could be done in a way that does not require spending millions of dollars in hardware – I advocated a Google-like architecture a long time before Google even existed, and it was a relief when Google started to publicly disclose some of the things they were doing, because then I could end my explanations with “and BTW, this is exactly what Google is doing!”. Herd mentality always works.

Now imagine a service that is already doing all the right things – Amdahl’s law is the Law of the Land, the sysadmin motto (“if it ain’t broke, don’t fix it”) is cause for immediate dismissal, programmers cannot commit without unit tests and peer reviews, and so on. The problem is that even when using the best practices (and by best I certainly do not mean the most expensive) a whole system can still go down because of a single programmer’s bug – and we certainly had some very public examples of that recently.

Bugs are not something programmers should be ashamed of – instead they should be thought of as a probability, and this probability integrated into the design of a service. One of the best proofs that bugs are not preventable mistakes but unavoidable limitations of software is the so-called “software bit rot”, i.e. the spontaneous degradation of software over time. Software does not really degrade over time; it’s just that the environment under which it runs changes over time – faster processors, larger memory and so on. Basically this means that even if we can prove that a piece of software is perfect now, unless it is possible to predict the future it is not possible to guarantee that a program will be without bugs in the future.

So, as always, if there is no solution to a problem then just change the problem. Bugs have a more or less constant probability of happening, i.e. X bugs for Y lines of code. I am not aware of any research in this domain (pointers welcome!) but intuitively I would say that two independent developers starting from the same set of specifications will come up with two different programs and, more important to my point, with two different sets of bugs. If the probability of a major bug in any implementation is X, the probability that the same environment triggers the same bug in two completely different implementations should be X^2 (assuming that developers are not biased towards certain classes of bugs) – for example, if X is 0.1%, the probability of both implementations failing together drops to 0.0001%. In other words, if we can run two or more implementations of the same specifications in parallel and choose the one that is working correctly at a specific time, then we increase the reliability of our service by multiple orders of magnitude (one of my old posts used the same reasoning applied to ISPs. Just replace “service down” by “someone will die because I cannot reach 911”).

Obviously running multiple implementations of the same specifications at the same time is really expensive and, as far as I know, is done only when a bug can threaten lives, like in a nuclear power plant. If we can solve the economics of this problem, then we should be able to offer far better services to end-users.

Now back to RELOAD. RELOAD is a standard API for peer-to-peer services. There is nothing really new in RELOAD – it uses technologies, like Chord, that are well-known and, for some people, probably close to obsolescence. The most important point, at least in my opinion, is that for the first time we have a standard API to use these services across multiple implementations that can interoperate with each other. Having a distributed database is great for reliability purposes, but having a distributed database that runs on different implementations, as RELOAD permits, is far better.

But RELOAD is not limited to a distributed database. With the help of ReDIR, we can have multiple implementations registering a common service, so if one implementation fails, we can still count on the other implementations of the same service.

As for the economic side of having multiple implementations: instead of having only mine developed in-house, I can swap it with similar code developed by other implementers. In the end, if I am able to do this with 3 or 4 other implementers, all of us will have an incredibly more resilient service without spending a lot more money (this is the reason why I am so against having THE ONE open or free software implementation of a standard. The more implementations are developed, the better the users will be served. Implementation anarchy is a good thing, because it fosters evolution).

OK, I admit that was a long introduction for version 0.6.0 of the libreload-java package and the associated new Internet draft about access control policy scripts, especially because access control policy scripts paradoxically do not improve the reliability of a service based on RELOAD, but reduce it. The reason is that access control policies are ordinarily implemented natively, so a bug stays local to an implementation. This draft permits distributing a new access control policy, but as the same code will be shared between all the different implementations, it creates a single point of failure. This is why access control policy scripts should be considered only a temporary solution, until the policy can be implemented natively. The script copyright should prevent developers from copying it, but we all know how that works, so to help developers do the right thing and not be tempted to look at the scripts when implementing the native version in their code, the source code in the XML configuration file is now obfuscated by compressing it and converting it to a base64 string.
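As an illustration, the obfuscation amounts to something like this sketch (I am assuming DEFLATE as the compression algorithm):

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.DeflaterOutputStream;
import javax.xml.bind.DatatypeConverter;

// compress the script source, then encode the result as base64
static String obfuscate(String script) throws IOException {
  ByteArrayOutputStream buffer = new ByteArrayOutputStream();
  DeflaterOutputStream deflater = new DeflaterOutputStream(buffer);
  deflater.write(script.getBytes("UTF-8"));
  deflater.close();
  return DatatypeConverter.printBase64Binary(buffer.toByteArray());
}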

The draft also introduces an idea that, in my opinion, could be worth developing: the signer of a configuration file should play an active role in verifying that all implementations in the overlay follow the rules set up by the configuration file. For example, the signer can send incorrectly signed configuration files to random peers, and add them to the blacklist in a future configuration file if they do not reject them.

RELOAD: VIPR Support

One of my reasons for developing an implementation of RELOAD is to be able to develop an implementation of VIPR, a mechanism currently developed by the IETF to automatically and securely share SIP routes between VoIP providers. VIPR uses RELOAD as a giant distributed database where VoIP providers store the phone numbers of their customers.

RELOAD in its latest incarnation has all the features needed to implement VIPR, with two exceptions: the access control policy and the quota mechanism.

The access control policy in VIPR is similar to the standard USER-NODE-MATCH policy, but there are enough differences to mandate the implementation of a new policy. The preferred solution is to implement this policy natively, but a temporary solution can be to use the extension I designed to add new policies in the RELOAD configuration file. A future version of my implementation will implement this policy natively, but meanwhile the following script can be used:


// helper: byte-by-byte comparison of two arrays
var equals = function(a, b) {
  if (a.length !== b.length) return false;
  for (var i = 0; i < a.length; i++) {
    if (a[i] !== b[i]) return false;
  }
  return true;
};
// the Resource-ID (the key) must match both the Node-ID stored in
// the value (after a 4-byte prefix) and the Node-ID of the signer
var length = configuration.node_id_length;
return equals(entry.key.slice(0, length),
  entry.value.slice(4, length + 4))
    && equals(entry.key.slice(0, length), signature.node_id);

The quota mechanism in VIPR is interesting. Basically it says that a VoIP provider must contribute a number of RELOAD servers to the distributed database that is proportional to the number of phone numbers it plans to register. Because this quota mechanism is useful for usages other than VIPR, it now has its own Internet-Draft, separate from the VIPR drafts, with the goal of publishing it as an IETF standard. New quota mechanisms are not needed very frequently (AFAIK, this is the first quota mechanism created outside the RELOAD document) so it does not make sense to develop another API to write quota scripts. This means that this quota mechanism will have to be coded by RELOAD implementers (the VIPR configuration document should contain a <mandatory-extension> element to be sure that only servers implementing this extension will join the overlay).
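To give an idea of the principle, the check a peer would perform amounts to something like this (the names and the proportionality factor are made up for the illustration – the real rule is defined in the Internet-Draft):

// Hypothetical check: a provider contributing N peers may register
// at most N * NUMBERS_PER_PEER phone numbers.
static final long NUMBERS_PER_PEER = 10000;

static boolean withinQuota(long peersContributed, long numbersRegistered) {
  return numbersRegistered <= peersContributed * NUMBERS_PER_PEER;
}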

Version 0.5.0 of the libreload-java package, released a few minutes ago, not only permits using the script listed above but also implements the new quota mechanism, making it suitable for implementing a VIPR server.

Update 07/05/2011: The VIPR access control policy is natively implemented in libreload-java 0.6.0.

RELOAD: Extensions

IETF people do not live on the same planet as us – and I mean that literally. My theory is that they live on a planet with a very strange orbit that puts them within communication distance of Earth only 3 times per year, roughly around March, July and November, which could explain why it is mostly impossible to talk to them outside of these 3 months. During the rest of the time, the speed of light limits our ability to communicate with them and it takes between one week and one month to receive a response to an email because of the huge distance (that, or their planet is so dense that time runs considerably more slowly than ours).

Anyway, that is how it looks to me, and it’s very frustrating, especially as I got burned a few years ago for the exact same reasons. Forking a specification is mostly prevented by the IETF license, so that was not an option; instead I am trying something else: I published a document containing all the modifications I would have liked to see in the RELOAD specification. Here is the list of these modifications, with some explanations on how to try them either with the source code or on the test server:

1. Client connected to bootstrap node

I am working on a major release of the libreload-java package that will contain the code for a RELOAD client, so I think it is simpler to wait for this release before explaining this extension.

2. Service Name and Path

The RELOAD document uses the “p2psip” name both in the SRV RRs and in the URL used to retrieve the configuration file. The problem is that RELOAD is no longer specific to P2PSIP, so this name should be “p2p” instead.

The DNS RR for the RELOAD test server can now be retrieved with the following command:

$ host -t SRV _p2p-enroll._tcp.implementers.org
_p2p-enroll._tcp.implementers.org has SRV record 40 0 443 implementers.org

Similarly, the configuration file can be retrieved with the following command:

$ curl --resolve \
  implementers.org:443:2604:3400:dc1:41:216:3eff:fe5b:8240 \
  https://implementers.org/.well-known/p2p-enroll

Note that the standard names are still working.

3. Multiple Node-IDs for self-signed certificates

CA-signed certificates can contain multiple Node-IDs, but self-signed certificates cannot. This extension fixes this problem.

Version 0.4.0 of the libreload-java package, released a few minutes ago, now contains a static method in the SignerIdentity class to generate a self-signed certificate with one Node-ID and a second static method to generate a self-signed certificate with multiple Node-IDs.
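Usage looks something like this (the method names below are placeholders, not the real signatures – check the package Javadoc):

// generate a key pair, then self-signed identities with one or several Node-IDs
KeyPair keyPair = KeyPairGenerator.getInstance("RSA").generateKeyPair();
SignerIdentity one = SignerIdentity.selfSignedIdentity(keyPair); // one Node-ID
SignerIdentity many = SignerIdentity.selfSignedIdentities(keyPair, 4); // four Node-IDs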

4. Re-sign certificates

With the current specification, it is not possible to extend the validity of a certificate because the Node-ID must be generated by the enrollment server. That basically means that each time the certificate expires, the peer has to get a new Node-ID, which means leaving the overlay, joining with the new Node-ID and re-storing all the data that was stored with the previous Node-ID.

The extension permits renewing a certificate by sending the certificate itself to the enrollment server, instead of a certificate signing request, with something like this:

$ wget "https://implementers.org/enrollment?username=test&password=test" 
--post-file=cert.der --header "Content-Type: application/pkix-cert"
--header "Accept: application/pkix-cert" -O cert2.der

You can even increase or decrease the number of Node-IDs in the resulting certificate, which can be useful.

Note that this works only if you keep the same key pair. If you want to change it, then you need to send a certificate signing request, which will result in a certificate containing new Node-IDs. This is consistent with the way self-signed certificates work.

RELOAD: Configuration and Enrollment

The libreload-java package version 0.2.0 was released a few minutes ago. Because this version starts to have bits and pieces related to RELOAD itself, the access control policy script tester that was explained in a previous post is now in a separate package named libreload-java-dev. There is nothing new in this package, as I am waiting for a new version of draft-knauf-p2psip-share to update it.

The libreload-java package itself now contains the APIs required to build a RELOAD PKI. The package does not contain the configuration and enrollment servers themselves, as there are many different implementations possible, from command line tools to deployable servlets. The best way to understand how this API can be used is to walk through the various components of a complete RELOAD PKI:

1. The enrollment server

The enrollment server provides X.509 certificates to RELOAD nodes (clients or peers). It acts as the Certificate Authority (CA) for a whole RELOAD overlay without requiring the services of a top-level CA. That means that anybody can deploy and manage their own RELOAD overlay (note that the enrollment server itself requires an X.509 certificate signed by a CA for the HTTPS connection, but this is independent from the PKI function itself). Creating the CA private key and certificate is simple, for example with openssl:

$ openssl genrsa -out ca.rsa 2048
$ openssl pkcs8 -in ca.rsa -out ca.key -topk8 -nocrypt
$ openssl req -new -key ca.key -out ca.csr
$ openssl x509 -req -days 365 -in ca.csr -signkey ca.key -out ca.crt

Obviously it is very important to keep the secret key, hmm, secret, so using a physical token like a TPM or a smartcard can be a good idea.

The second thing that the enrollment server must do is restrict the creation of certificates to people authorized to do so. The simplest way is to issue logins/passwords, but there are obviously other ways.

The enrollment server will receive a certificate request (formatted with PKCS#10), and will send back an X.509 certificate containing one or more URIs, each containing a Node-ID. The RELOAD API provides a static method to build a valid Node-ID that can be used like this:

// generate a random 16-byte Node-ID and embed it in a reload: URI
NodeId nodeId = NodeId.newInstance(16, new SecureRandom());
new URI("reload", nodeId.toUserInfo(), "example.org", -1, "/", null, null);

The resulting URI looks something like this:

reload://0110534f8a94ec317834a9bb0d66bd3ae708@example.org/

That’s the only API required for the enrollment server. All the other stuff – certificate request parsing, certificate generation – can easily be done using a standard library like Bouncycastle.

Note that an enrollment server is not required when self-signed certificates are used.

2. The configuration server

The configuration server provides configuration files, which are XML documents that contain all the information required to join a specific overlay. A configuration file is represented in the API as a Configuration object, which is immutable. A Configuration object can be created either by parsing an existing document (using the parse static method) or by creating one from scratch with the Builder helper class. The Builder helper class currently supports only the subset needed for implementing a PKI, i.e. it permits setting the CA certificate, the URL of the enrollment server and the signer of the configuration document. The API can be used like this to generate an initial configuration document:

Configuration.Builder builder = new Configuration.Builder("example.org");
builder.rootCertificates().add(caCert);
builder.enrollmentServers().add(new URI("https://example.org/enrollment"));
builder.kindSigners().add(nodeId);
builder.bootstrapNodes().add(new InetSocketAddress("10.1.1.0", 6084));
Configuration configuration = builder.build(signer, signerPrivateKey);
byte[] config = configuration.toByteArray();

The signerPrivateKey object is the private key that was created for the signer certificate, and the signer object is an instance of the SignerIdentity class that was created from the same certificate:

SignerIdentity signer = SignerIdentity.identities(certificate).get(0);

Subsequent versions of the configuration document for a specific overlay will need to be signed by the same signer.

It is easy to create a new version of a configuration document by using the API. For example, to change the no-ice value and re-sign the document:

conf = new Configuration.Builder(conf).noIce(true).build(signer, signerPrivateKey);

The sequence number will automatically increase by one (modulo 65535) in the new document.

Note that this API can change in future versions. It will be frozen only in version 1.0.0, which cannot happen until the RELOAD specification is approved by the IESG for publication.

Jarc: Javadoc generation

During the preparation of the libreload-java package, I discovered that the javadoc tool – the JDK tool used to generate HTML documentation for a Java API – does not generate the javadoc based on Java dependencies. It can either generate the javadoc for whole packages, or for a subset of Java files (e.g. File*.java), but we cannot ask it to generate the documentation for one specific class and all the classes it depends on.

I guess that my way of doing things is different: I do not have one source tree per project, but one source tree that contains the code for all of them (and much, much more code that is not released). A particular Debian package contains only a subset of the code available in my source tree – the code needed for this particular package. One way to see this is that a package is a view of my source repository. Ant or javadoc cannot easily extract only the source files needed for a particular view (unless I list all the files individually, which would be unmaintainable), so the jarc tool relies on Java dependencies.

When I tried to build the javadoc for the libreload-java package, I noticed that the unit test files (used by JUnit) were part of the javadoc, although they clearly are not part of the API. This is because the javadoc tool had to process the whole directory. I cannot move these two files to another directory, because they need to have access to constructors and methods that are not public.

So I integrated the javadoc tool inside jarc. Because jarc knows the list of dependent Java files during a jar file build, I just have to feed this list to the javadoc tool to get the exact API documentation corresponding to the jar file built. The new version of jarc (0.2.22) supports a new parameter (-javadoc) that is used to request the generation of the javadoc at the same time the package is built and unit tested.
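For example (a hypothetical invocation – I am assuming jarc takes the manifest file as its argument):

$ jarc -javadoc manifest.mf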

One can notice that the Debian source packages that are distributed do not contain the whole tree, but only the subset of the files needed to rebuild each package. To do this (I obviously did not want to release my whole source tree), I had to rely on a trick. My hard disk is mounted with the strictatime option, which means that each time a file is accessed, the date/time of this access is recorded. When I build a Debian package I start by building it from my whole source repository:

$ touch run-stamp
$ SRC=../src dpkg-buildpackage -D -b -us -uc

Then I run this command to copy only the files that were used for this build into a new directory:

$ find ../src -anewer run-stamp -type f -exec cp --parents -t src {} \;

I can then build the packages again, but this time with a source package that contains only these files (and that can be used to eventually rebuild the binary packages):

$ SRC=src dpkg-buildpackage -I

Note the difference in the $SRC variable content: ../src is my complete source repository and src is just the subset for this package.

RELOAD: Access control policy script tester

There was some interest at the last IETF meeting in my presentation on Access Control Policy scripts, so I spent some time improving this work. The first step was to release a new version of the I-D. To encourage adoption of this draft, I also released a library that permits developing and testing new access control scripts without having a complete implementation of RELOAD. This was released as a Debian package named libreload-java in my Debian/Ubuntu repository. This package is also a reference implementation for the I-D but, as it contains a subset of the RELOAD library I am currently developing, it can also be considered an early release of this library. As always, this is released under an Affero GPL 3 license, and the source code is available.

To explain how to develop a new access control script, I will now go through the example which is also in the libreload-java package:

import static org.implementers.net.reload.AccessControlPolicyTester.kind;
import static org.implementers.net.reload.DataModel.Standard.*;

These two lines simplify the code by importing the kind() static method and the available Data Models.

private static final long KIND_ID = 4026531841L;

Here we declare a constant defining a new Kind-Id in the private range.

AccessControlPolicyTester tester = new AccessControlPolicyTester("myusername",
  kind(KIND_ID, SINGLE,
    "String.prototype['bytes'] = function() {\n" +
    "  var bytes = [];\n" +
    "  for (var i = 0; i < this.length; i++) {\n" +
    "    bytes[i] = this.charCodeAt(i);\n" +
    "  }\n" +
    "  return bytes;\n" +
    "};\n" +
    "return resource.equalsHash(signature.user_name.bytes());\n"));

This code creates a new instance of the tester by passing a user name (that will be used to generate an X509 certificate) and a new kind. The new kind uses the Kind-ID declared previously, uses the SINGLE Data Model and will use an Access Control Policy as described in the ECMAScript code. Note that more calls to the kind() method can be added to the constructor of the tester.

Map<Long, DataModel<?>> kinds = new HashMap<Long, DataModel<?>>();

This code declares a variable that will contain the data items meant to be stored. The key of the map is a long that contains a Kind-ID. The value is an instance of the DataModel.Single class, the DataModel.Array class or the DataModel.Dictionary class if the Data Model is respectively SINGLE, ARRAY or DICTIONARY. In our case, we use the SINGLE Data Model, meaning that only one item can be stored at a specific Resource-ID for this specific Kind-ID.

kinds.put(KIND_ID, new DataModel.Single(KIND_ID, 0L,
  new Value.NotExists(System.currentTimeMillis(), 60,
    tester.signer(), tester.privateKey())));

This piece of code does multiple things, so let’s decompose:

First it retrieves the signer (an Identity instance that consists of an X509 certificate, a user name and a Node-ID) and the private key that was used for the certificate of the signer. These two objects are automatically created by the constructor of the tester object.

Then it creates a new Value instance with a store_time equal to the current time, a lifetime of one minute, an exists value of FALSE and the identity and private key just retrieved. This is the object that we want to store (it is more or less equivalent to StoredData).

Then a new instance of DataModel.Single is created with our new Kind-ID, a generation_counter of 0 and the value to store (it is more or less equivalent to StoreKindData).

Finally this object is stored in the kinds map, with the same Kind-ID as key.

tester.store(tester.signer().userName().getBytes(UTF_8), 0, kinds);

In this code, we finally try to store the items that we just created. The first parameter is the Resource-Name, which in our case is the user name (converted to a byte array). The second parameter is the replica number, 0 for the original replica. The third parameter is the set of items. The method will return false if one item fails the access control policy verification, and true if all the items pass the verification.

DataModel.Array and DataModel.Dictionary work in the same way, except that DataModel.Array is a sparse array implementing java.util.List and DataModel.Dictionary implements java.util.Map.

The program can be compiled and executed with the following commands:

$ javac -classpath /usr/lib/reload/reload.jar Main.java
$ java -classpath /usr/lib/reload/reload.jar:. Main

In addition to the jar file and the example, the package contains the Javadoc for all the classes and methods accessible to the tester, and a copy of the I-D.

Update 2011/05/14:

The command lines above do not work. I uploaded a new package (0.1.1) so the following command lines should work now:

$ javac -classpath /usr/lib/reload/reload.jar AcpTest.java
$ java -classpath /usr/lib/reload/reload.jar:. AcpTest