Monday, June 22, 2009

Staying on Trac

I admit it - I am growing tired of Trac wiki. It's probably our fault - we're using an old version - but I get the feeling it's a bigger problem than that. I work with a University-based technology team that tries to keep the following things together:

  • Organisational news and info

  • Project information

  • Source code

  • Job Tickets

  • Server details

  • Documentation

  • Publications

  • Procedures and Policy



We've got a range of skills on our team, but fluency in an obscure wiki markup format isn't necessarily a precondition of employment. What's more, we've ended up with several sites running Trac or ICE, which makes learning where to put stuff rather onerous.

My thought is to start bringing this stuff together, but the question is how. In a previous job I created a controlled vocabulary within Confluence wiki to bring together reports and project info, but source code didn't really come into the equation. XWiki might be a good open-source alternative, but there'd be some coding to do.

I've also been picking up on Maven and can see that it could provide a good basis for the coding side of things, but that doesn't help non-technical staff.

For presenting the content we could use The Fascinator to harvest from all of our sources and present (mash) it in a variety of combinations (public, developer or manager). That still leaves us with lots of entry points.

So I have some leads but nothing solid (yet). Ideas welcome.

The Fascinator 2

The team found itself with a little bit of breathing space this past week or so, and we focussed on developing The Fascinator Desktop. There was a fair bit of whiteboard time with Peter early on, and then the coding began. Call it agile or whatever - a team sharing design issues whilst developing components just seems to pull things together better than one working to a highly pre-spec'd system.

So, what did we achieve? Well:

  • Linda got Watcher up and running - even with a moving target.

  • Ron and Oliver worked on a storage API that will let us test against either Fedora or CouchDB (a rough sketch of the idea follows this list).

  • Bron and I created components to read the Watcher queue and extract metadata and full text via Aperture.

  • Linda created a transformation API to convert files into a variety of renditions.


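The storage API is still taking shape, but the rough idea is a thin interface that the rest of the tool chain codes against, with a Fedora or CouchDB backend slotted in behind it. A minimal sketch of the shape I have in mind - the names here are illustrative, not the actual Fascinator API:

// Illustrative only - not the real Fascinator storage API
// One interface for callers; Fedora and CouchDB each provide a backend
public interface Storage {

    // Store a payload (source file or rendition) against an object id
    void store(String objectId, String payloadName, java.io.InputStream data)
            throws StorageException;

    // Retrieve a previously stored payload
    java.io.InputStream retrieve(String objectId, String payloadName)
            throws StorageException;

    // Remove an object and all of its payloads
    void remove(String objectId) throws StorageException;
}

// A simple checked exception so each backend can report its own failures
class StorageException extends Exception {
    public StorageException(String message, Throwable cause) {
        super(message, cause);
    }
}

The point is that the extract and transform components never need to know which repository sits behind the interface, so we can swap Fedora and CouchDB in and out while we test.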

This gives us a tool chain (sketched in code below the list) where we:

  1. Watch your system for file changes

  2. Extract the metadata and full text from each changed file

  3. Transform the various file types into renditions such as HTML and PDF

  4. Store the data in a repository


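In code terms, the loop we're aiming for looks something like the sketch below - again with made-up names rather than the real implementation, and assuming the Watcher feeds a queue and Aperture sits behind the extractor:

// Rough sketch of the desktop tool chain - illustrative names only
import java.io.File;
import java.util.List;

public class ToolChainSketch {

    interface WatcherQueue { File nextChangedFile() throws InterruptedException; }
    interface Extractor    { Metadata extract(File f); }          // e.g. backed by Aperture
    interface Transformer  { List<Rendition> render(File f); }    // HTML, PDF, ...
    interface Repository   { void store(File f, Metadata m, List<Rendition> r); }

    static class Metadata  { /* title, mime type, full text, ... */ }
    static class Rendition { /* rendered bytes plus a format label */ }

    public static void run(WatcherQueue queue, Extractor extractor,
                           Transformer transformer, Repository repository)
            throws InterruptedException {
        while (true) {
            File changed = queue.nextChangedFile();                     // 1. watch for file changes
            Metadata meta = extractor.extract(changed);                 // 2. extract metadata + full text
            List<Rendition> renditions = transformer.render(changed);   // 3. transform to renditions
            repository.store(changed, meta, renditions);                // 4. store in the repository
        }
    }
}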

From this point, we can lay The Fascinator search engine over the top and give you a faceted search of your files. It's not all there yet - we need to finish off some of the storage work and get it all tied together - but here's hoping that the end of the week brings version 0.1 of The Fascinator Desktop!

My admission from the week: I must integrate unit tests into my development approach.
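
For what it's worth, what I mean is nothing fancy - something as small as a JUnit test against a throwaway in-memory stand-in for the storage idea above (illustrative only):

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class StorageSketchTest {

    // A throwaway in-memory stand-in so the test has something to exercise
    static class InMemoryStorage {
        private final java.util.Map<String, byte[]> payloads =
                new java.util.HashMap<String, byte[]>();

        void store(String id, byte[] data) { payloads.put(id, data); }
        byte[] retrieve(String id)         { return payloads.get(id); }
    }

    @Test
    public void storedPayloadCanBeReadBack() {
        InMemoryStorage storage = new InMemoryStorage();
        storage.store("oid-1", "hello".getBytes());
        assertEquals("hello", new String(storage.retrieve("oid-1")));
    }
}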

Monday, June 15, 2009

RDF and mod_rewrite

I was reading the Best Practice Recipes for Publishing RDF Vocabularies and looking for an easy way to provide both HTML and RDF on my site. At the moment I have (very) limited RDF, but I wanted something that would give me cool(ish) URIs automatically - basically, a system that works out whether to serve an HTML or an RDF file depending on what the client asks for.

So, http://duncan.dickinson.name/card should give you:

  • card.html if you're a web browser

  • card.rdf if you want semantic data



This stretched my mod_rewrite skills, but the following seems to work:


# Turn off MultiViews
Options -MultiViews -Indexes
DirectoryIndex card.html index.html index.htm index.php

# Directive to ensure *.rdf files served as appropriate content type,
# if not present in main apache config
AddType application/rdf+xml .rdf

# Rewrite engine setup
RewriteEngine On
RewriteBase /

#Check if an RDF page exists, and return it
RewriteCond %{HTTP_ACCEPT} application/rdf\+xml
RewriteCond %{REQUEST_FILENAME}.rdf -f
RewriteCond %{REQUEST_URI} !^/.*/$
RewriteRule (.*) $1.rdf [L,R=303]

#Provide a default RDF page
RewriteCond %{HTTP_ACCEPT} application/rdf\+xml
RewriteCond %{REQUEST_URI} ^/$
RewriteRule .* /card.rdf [L,R=303]

#Provide the HTML for the request
RewriteCond %{REQUEST_FILENAME}.html -f
RewriteCond %{REQUEST_URI} !^/.*/$
RewriteRule (.*) $1.html [L,R=303]

#provide the PHP page for the request
RewriteCond %{REQUEST_FILENAME}.php -f
RewriteCond %{REQUEST_URI} !^/.*/$
RewriteRule (.*) $1.php [L,R=303]


With some help, I got curl looking at my site's RDF:


curl -H "Accept: application/rdf+xml" http://duncan.dickinson.name/card


So now I get back a 303 redirect.
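
If you want to see exactly what comes back, curl can show you the response headers or follow the redirect:

# Show just the response headers (expect a 303 with a Location header)
curl -I -H "Accept: application/rdf+xml" http://duncan.dickinson.name/card

# Or follow the redirect and fetch the RDF in one hit
curl -L -H "Accept: application/rdf+xml" http://duncan.dickinson.name/card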