Back up Hudson job history in Subversion

The popular continuous build tool Hudson has a few options for those looking to back up their setups. Most people will suggest one of two plugins: either SCM Sync, which pushes config changes into Subversion with a log message, or the Backup plugin, which can be configured to wrap up your main config, build job configs, histories and artefacts. I wanted something that blended the two: a way of pushing all the config and history for my jobs into Subversion.

I began to look into the source with a view to contributing my own plugin, and came across a Team Lazer Beez blog post describing a Hudson job which runs commands in the shell: it adds newly created jobs to Subversion, removes deleted ones and checks in any other changes to your config files. There were a couple of issues I had with it as it stood, mainly that it couldn’t cope with job names containing spaces, so I edited it a bit.

# Add any new conf files, jobs, users, and content.
svn add --parents *.xml jobs/* users/* userContent/* fingerprints/*

# Add the names of plugins so that we know what plugins we have.
ls -1 plugins > plugins.list
svn add -q plugins.list

# Ignore things in the root we don't care about.
echo -e "war\nlog\n*.log\n*.tmp\n*.old\n*.bak\n*.jar\n*.json" > myignores
svn propset svn:ignore -F myignores . && rm myignores

# Ignore things in jobs/* we don't care about.
echo -e "builds\nlast*\nnext*\n*.txt\n*.log\nworkspace*\ncobertura\njavadoc\nhtmlreports\nncover\ndoclinks" > myignores
svn propset svn:ignore -F myignores jobs/* && rm myignores

# Remove anything from SVN that no longer exists in Hudson.
# (awk re-joins and quotes each path so job names containing spaces survive xargs;
#  sep is reset per line so later paths don't pick up a stray leading space.)
svn st | grep "^!" | awk '{sep=""; printf "\""; for (i=2;i<=NF;i++) {printf "%s%s", sep, $i; sep=" "}; printf "\"\n"}' | xargs -r svn rm

# And finally, check in of course, showing status before and after for logging.
svn st && svn ci --non-interactive -m "automated commit of Hudson configuration" --username user --password xxxxxxxxx && svn st

It is working great as a job set to run ‘@midnight’, or triggered manually after a particularly rigorous bout of config editing.

Hopefully this will prove useful to someone else.

Grails: populate a g:select from an enum

As easy as it is to hard-code a list into a selection drop-down box in Grails, it is a pretty big violation of the Don’t Repeat Yourself principle and is best avoided. I struggled to find a neat way of doing this, so here is how I recommend doing it:

Step 1: Define an enum

If you’re using the domain layer you can obviously put your enum in your domain classes and refer to it there in the later steps, but for this example I’ll assume you don’t have, or can’t change, your domain layer.

In src/groovy/myPackage we’ll create a new enum with some values and the pretty standard enum methods.

package com.adamwhittingham.grails.examples

public enum SearchableField {
    USERNAME("Username"), FIRSTNAME("First Name"), SURNAME("Display Name"), EMAIL("Email");

    final String value
    SearchableField(String value){ this.value = value }

    @Override
    String toString(){ value }
    String getKey() { name() }
}

Step 2: Import it and use it in the GSP

In our GSP, the first thing we need to do is import the enum:

<%@ page import="com.adamwhittingham.grails.examples.SearchableField" %>

With that done, we can now define the select using the enum:

<g:select name="searchBy" from="${SearchableField.values()}" value="${params.searchBy}" optionKey="key"/>

And that’s it: you can now use the enum in your controllers and views without having to edit hard-coded lists. Much better!
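
For completeness, here is a minimal sketch of reading the selection back in a controller. The SearchController and its search action are hypothetical names, not part of the example above; the one real detail is that with optionKey="key" the form submits the enum constant’s name, so valueOf() recovers it.

package com.adamwhittingham.grails.examples

class SearchController {

    def search = {
        // optionKey="key" means params.searchBy holds the enum constant's name()
        def field = SearchableField.valueOf(params.searchBy)

        // Use the enum to drive whatever query you need; rendering it here
        // just shows that the round trip works.
        render "Searching by ${field}"
    }
}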

Spa2010: TDD at the System Scale

Spa2010 is a conference for people from all over the software community, run by (and named after) the BCS special interest group for Software Practice Advancement. The conference is known for its rich mix of topics and heavy bias towards hands-on sessions, and annually takes two forms: the main three-day event in London and MiniSpa, a free one-day “Best of Spa” compilation where the highest-rated sessions are run again. This year I was lucky enough to attend the full conference thanks to some training budget I found, so I headed off to hear some very interesting people run sessions on everything from testing to behaviours in civil architecture which could be applied in software engineering.

The first talk I attended was by Nat Pryce and Steve Freeman, co-authors of “Growing Object-Oriented Software, Guided by Tests”, who ran an interesting session on how to write and test (or should that be “test and write”? Test first!) large-scale systems. The session started with a great overview of the important lessons learnt, then went on to practical exercises in which attendees hunted down testing issues. Here is a bit of what I learnt from it all:

Testing from the boundaries inwards, not centre-out

The biggest point made for me personally was that it is too easy to start by writing tests and implementations for our domain classes and settle for extensive unit test coverage without giving much thought to testing at larger scales.

Using only TDD at the unit scale means that we end up defining the “idealised” version of each unit: our domain classes define what we wish our domains looked like, our processing methods reflect how these perfect domains would be manipulated, and so on until we hit the outer boundary of our program… where we end up with a rift between the ideal model we’ve built and the reality we need to integrate with.

The solution is simply not to start with our low-level unit tests, but to scope broadly and write tests at the limits of our system first. This ensures we have some appreciation of the boundaries of our software and means we start working with the APIs we need to integrate with right from the start. Any restrictions and needs these APIs place on us are considered from the beginning of development instead of as a last step, when the pressure to deliver is on.

Use Anti-Corruption Layers or Simplicators

Another important point was that we can lessen the impact of imperfect APIs by abstracting them behind a “simplicator” or “anti-corruption” layer: basically a small, separate project which consumes the ugly or unwieldy APIs we need to integrate with and exposes a more ideal API for our system to consume.

This also has the effect of largely de-coupling the implementation of our system from the specific systems we build against. An advantage I saw in this is that bad design can be isolated and removed when projects are improved or replaced: the issues in project A need not affect project B when B integrates with A, and A’s issues aren’t forced into the design of its replacement just to preserve compatibility with B.
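
As a rough sketch of the idea (every name here is invented for illustration, not taken from the session), the simplicator exposes the interface we wish we had and keeps the awkward one we actually have tucked away behind it:

// The API we wish we had: this is all the rest of our system ever sees.
interface CustomerLookup {
    Customer findByEmail(String email)
}

class Customer {
    String name
    String email
}

// Stand-in for the ugly third-party API we are stuck integrating with.
interface LegacyCrmClient {
    List<Map> queryRecords(String filterExpression, int offset, int pageSize)
}

// The simplicator itself: translates between the two, absorbing the ugliness.
class LegacyCrmCustomerLookup implements CustomerLookup {
    private final LegacyCrmClient client

    LegacyCrmCustomerLookup(LegacyCrmClient client) { this.client = client }

    Customer findByEmail(String email) {
        // Filter syntax, pagination and odd field names all stay in here.
        def rows = client.queryRecords("EMAIL_ADDR='${email}'", 0, 1)
        rows ? new Customer(name: rows[0].FULL_NM, email: email) : null
    }
}

If the CRM is ever swapped out, only LegacyCrmCustomerLookup needs rewriting; everything else keeps talking to CustomerLookup.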

Testing Asynchronous Systems

The session went on to some excellent examples of tests which flickered (passed or failed somewhat randomly) and explored the false positives and false negatives an asynchronous system can cause. The lessons learnt from this, in summary, were:

If some state changes in your asynchronous system, make damn sure that your tests are synchronised with it in some way: polling the state just doesn’t cut it. Sometimes the test will miss the change by sampling the state before it has happened. Adding a wait or a thread sleep might work, but it makes your tests take longer to run than they should. And polling after the state has changed, then been changed back again, looks like a failure despite the correct behaviour having played out. Listening for the change instead, as in the sketch below, avoids all three problems.
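
Here is a minimal sketch of that kind of synchronisation: instead of polling, the test registers for the change notification and blocks on a latch until it arrives. The Auction class and its listener hook are invented for illustration; they are not from the session.

import java.util.concurrent.CountDownLatch
import java.util.concurrent.TimeUnit

// Invented stand-in for an asynchronous system under test.
class Auction {
    volatile int price = 0
    private Closure priceListener = {}

    void onPriceChanged(Closure listener) { priceListener = listener }

    void bid(int amount) {
        // The state change happens on another thread, after bid() has returned.
        Thread.start { price = amount; priceListener() }
    }
}

def latch = new CountDownLatch(1)
def auction = new Auction()

// Synchronise on the notification itself rather than polling auction.price.
auction.onPriceChanged { latch.countDown() }
auction.bid(100)

// Block until the change is reported; the timeout turns a hang into a clear failure.
assert latch.await(5, TimeUnit.SECONDS) : 'timed out waiting for the price to change'
assert auction.price == 100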

The session was crammed with useful tips and examples; I strongly recommend going along if you get the chance at another event. It certainly flags a lot of testing and design issues which I am certain I’d have wandered into over the next couple of years. Many thanks to Nat and Steve for all the time and effort that went into it!