Where does the Web go from here?

The web is a bit of an awkward teenager right now. Browsers full of plugins have been replaced with browsers running immense client-side JavaScript apps.

This is a big step in the right direction, but the heavy use of JavaScript comes with its own problems. Vast amounts of effort have to be spent on browser-dependent hacks for everything from audio to accelerometers. Lots of time and energy goes into solving similar problems over and over, with very little reliable parity between browsers to show for it.

There are at least two options for dealing with this.

The first is simply to carry on: keep hacking and shimming and polyfilling the gaps between the browsers we want and the ones we have, perpetuating the cat-and-mouse game of browser support for JS libraries and leaving support as an ‘n*m’ problem.

The second option, as Lyza Gardner put it at Over The Air 2013, is to “save the web by doing as little as possible”: we avoid the workarounds and wait for the standards and browsers to catch up. The web gets better as standards arrive, the complexity is pushed into the browsers, and we are left with an ‘n’-sized problem of dealing with the differences between browsers.

All we have to do is stop adding to the mess and resist the pressures and habits which led us here in the first place. That might be harder than it sounds, given the context of things right now.

The internet has gone through major changes in the last decade, transforming from a desktop-centric screen on the end of a paltry 56K connection into a vast array of laptops, smartphones and tablets devouring broadband and 4G. This has opened up so many possibilities, but it has also created hundreds of new combinations of browser, screen and connection speed to cater for. On top of that, devices give us all kinds of new sensors to play with, but each device offers a different combination of them, or a different interface. The ‘m’ in our ‘n*m’ problem just became a much bigger number.

All of this change is just beginning to settle down (at least until Google Glass-like devices or the Internet of Things become widely adopted and change the landscape again), and we are building the tools to provide excellent experiences on all these new forms of the internet; practices like responsive design are helping us close the gap between what is possible and what we’ve actually done. There are still countless ideas to explore but, perhaps for a brief time, we have a chance to catch our breath.

That means that now is the right time to start focusing on standards. We need to work on standardising APIs for real-time content, accelerometers, ambient light sensors and all the other neat inputs we suddenly have access to in a modern smartphone or tablet. Once we have the standards, the browsers can work on conforming to them, and we’re back to just the ‘n’-sized problem of supporting each browser.

The earlier we start, the sooner we will have all these capabilities easily and reliably accessible through browsers. We finally have a chance to help the web settle into a healthier, standardised form, ready for whatever comes next.

Beautiful Code (IPRUG Jan 2013)

On Tuesday January 13th I gave my talk “Beautiful Code” at IPRUG. Thanks to everyone who came and made it a really enjoyable night!

The slides are now available on Google Docs; I’ve retyped the code examples, so they might look a little different, but the content is the same.

A lot of the ideas and examples were taken from Avdi Grimm’s excellent talk “Confident Code”, and the following talks were very influential:

All of these show some really nice examples of refactoring and lay down principles to guide us towards clean, well-structured and ultimately more beautiful code.

I hope you find some of this useful and it helps you to make your code more beautiful!

Fixing Command-T for Vim in Ubuntu 12.04

The Command-T Vim plugin gives you a TextMate-style “fuzzy search” for files in the current directory tree. Basically, you hit a key and start typing the bits of the file name you remember, and Command-T shows you a set of matching results.

On my first attempt to build and use Command-T, it caused a segfault whenever I activated it. This is because the Vim in the Ubuntu apt repo was built against the ancient Ruby 1.8.7-p352, and Command-T only works when built with the same Ruby version that Vim itself was compiled against. If you install vim (or gvim/vim-gnome) from the package repos but get Ruby from somewhere else, you’ll need to rebuild Command-T against the matching Ruby.
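Before diving in, it can be worth confirming the mismatch. A rough check (the exact output format varies between builds):

# Check Vim's feature list for "+ruby" and the Ruby it links against.
vim --version | grep -i ruby

Inside Vim itself, `:ruby puts RUBY_VERSION` will print the exact embedded version. Once you’ve confirmed the versions disagree, do the following: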

  1. Get a copy of Command-T by following the readme. I use pathogen to load my vim plugins, so I cloned the Command-T git repo in my .vim/bundle folder.
  2. Navigate into the folder Command-T was extracted or cloned to.
  3. Run `ruby --version`
    1. If you get the answer “1.8.7-p352”, congrats, skip ahead to step 4 (argh, I used a goto!)
    2. If not, try setting your ruby to the system default.
      1. rvm users should run `rvm use system`,
      2. rbenv users can use `rbenv local system`.
    3. If you don’t see version 1.8.7-p352 now, we’ll install it: rvm users should run `rvm install 1.8.7-p352`, wait for the compile to finish, then run `rvm use 1.8.7-p352`.
  4. Now that’s sorted, let’s build Command-T. First, cd into “ruby/command-t/”
  5. Run `ruby extconf.rb && make` to actually do the build
  6. Congrats, it should be built and working. Hit “<leader>t” in Vim and you should get the search box. Type in the name of your file and see if it completes! (A condensed version of the whole sequence is sketched below.)
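For reference, the whole dance for an rvm user with pathogen might look something like this. This is a sketch based on my setup; the clone URL and paths are assumptions, so adjust them to match yours:

# Clone Command-T where pathogen will pick it up.
cd ~/.vim/bundle
git clone https://github.com/wincent/command-t.git

# Switch to the Ruby that the packaged Vim was built against.
rvm install 1.8.7-p352
rvm use 1.8.7-p352

# Build the C extension against that Ruby.
cd command-t/ruby/command-t
ruby extconf.rb && make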

I hope this helps; if not, you can try loading my Vim config and see if the Command-T built there works for you.

Composability is Crucial

In all technical work, I find there is an often overlooked principle which delivers a huge amount of value: Composability.

In OO programming, this is represented by the core ideas of “loose coupling” and “high cohesion”. These principles drive us towards creating objects which can be reused and transposed easily: basically, we aim to make our objects composable in different circumstances.

At the scripting and whole-program level, composability takes many forms. Adding lots of import/export options to your software enables users to involve it in their work more easily, but things really get interesting when composability is taken seriously.

One of the driving ideas behind Unix/Linux was that of “small, sharp tools”: programs which do one job, very well. The result is almost magical: once you have learned a handful of the hundreds of commands, you can start piping the output of one into another. Suddenly you have a sort of “data manipulation Lego”: once you find the piece you are looking for, you can snap it onto the others and quickly get some advanced behaviour.
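For example, finding the ten most common words in a file is just a matter of snapping a few of those pieces together (somefile.txt is a stand-in for whatever you’re analysing):

# Split into one word per line, lowercase, count duplicates, rank, take the top ten.
tr -cs 'a-zA-Z' '\n' < somefile.txt | tr 'A-Z' 'a-z' | sort | uniq -c | sort -rn | head -10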

Whatever I am building, I keep thinking about how my choices will affect its composability, and I have found this a very useful lens. Hopefully it’s an idea that might help others too!

Edit: Sandi Metz gave a great talk which highlights the use of composition/collaboration to avoid “omega problems” and keep your projects fun to work on.

Back up Hudson job history in Subversion

The popular continuous build tool Hudson has a few options for those looking to back up their setups. Most people will suggest one of two plugins: either SCM Sync, which pushes config changes into Subversion with a log message, or the Backup plugin, which can be configured to wrap up your main config, build job configs, histories and artefacts. I wanted something that blended these two: a way of pushing all the config and history for my jobs into Subversion.

I began to look into the source with an eye to contributing my own plugin, and came across a Team Lazer Beez blog post describing a Hudson job which runs shell commands: it adds newly created jobs to Subversion, removes deleted ones and checks in any other changes to your config files. There were a couple of issues with it as it stood, mainly that it couldn’t cope with job names containing spaces, so I edited it a bit.

# Add any new conf files, jobs, users, and content.
svn add --parents *.xml jobs/* users/* userContent/* fingerprints/*

# Add the names of plugins so that we know what plugins we have.
ls -1 plugins > plugins.list
svn add -q plugins.list

# Ignore things in the root we don't care about.
echo -e "war\nlog\n*.log\n*.tmp\n*.old\n*.bak\n*.jar\n*.json" > myignores
svn propset svn:ignore -F myignores . && rm myignores

# Ignore things in jobs/* we don't care about.
echo -e "builds\nlast*\nnext*\n*.txt\n*.log\nworkspace*\ncobertura\njavadoc\nhtmlreports\nncover\ndoclinks" > myignores
svn propset svn:ignore -F myignores jobs/* && rm myignores

# Remove anything from SVN that no longer exists in Hudson.
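# (The awk rebuilds each missing path and wraps it in quotes so that job
# names containing spaces survive the trip through xargs.)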
svn st | grep "\!" | awk '{sep=""; printf "\""; for (i=2;i<=NF;i++) {printf "%s%s", sep, $i; sep=" "}; printf "\"\n"}' | xargs -r svn rm

# And finally, check in of course, showing status before and after for logging.
svn st && svn ci --non-interactive -m "automated commit of Hudson configuration" --username user --password xxxxxxxxx && svn st

It works great as a job set to run ‘@midnight’, or triggered manually after a particularly rigorous bout of config editing.

Hopefully this will prove useful to someone else.