I recently announced the first release of a GRDDL Test Suite, whose goal is to make it possible to evaluate GRDDL implementations against the specification.
The test suite is composed of:
- a series of input documents and their expected output documents – for the time being, only XHTML input has been integrated into the test suite
- an RDF list of these test cases, binding each input document to its expected output and defining the purpose of the test case – much as described in the QA Wiki on test metadata; the RDF vocabulary I'm using to that end is the one developed for the RDF Core test suite, mainly because it already existed – I'm not sure yet whether this choice will prove beneficial in terms of tooling
- a small test harness in Python that runs the implementation under test – GRDDL processors in this case – on the input documents and compares the result to the expected output; since the comparison involves RDF graphs, I'm using SWAP's graph comparator; although the said code is very simple, I expect it could be reused in a variety of similar test suites, that is, test suites that feed input documents to a processor and compare the result to a well-defined output document; obviously, in most cases you would need to adapt the comparison mechanism to the output format
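To give an idea of what a test-case description looks like, here is a sketch of a manifest entry using the RDF Core test vocabulary mentioned above. The entry and its document URIs are illustrative, not taken from the actual suite, and I'm writing it in N3/Turtle for readability even though a manifest may just as well be serialized as RDF/XML:

```turtle
@prefix test: <http://www.w3.org/2000/10/rdf-tests/rdfcore/testSchema#> .

# Hypothetical test case; the document paths are made up for illustration.
<#xhtml-test-1>
    a test:PositiveParserTest ;
    test:status "APPROVED" ;
    test:description "GRDDL transform applied to an XHTML input document" ;
    test:inputDocument <xhtml/test-1.xhtml> ;
    test:outputDocument <xhtml/test-1.rdf> .
```

The harness only needs to walk entries of this shape, dereference the two documents, and hand them to the processor and comparator.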
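The harness logic can be sketched in a few lines of Python. This is a simplified stand-in, not the actual code: the processor command and file layout are assumptions, and the `normalize` function below only compares graphs as sets of N-Triples lines, which is not sufficient when blank nodes are involved – that harder isomorphism check is exactly what SWAP's graph comparator is there for:

```python
import subprocess

def normalize(ntriples: str) -> frozenset:
    """Crude canonical form of an N-Triples document: the set of its
    non-empty lines.  Good enough for ground graphs; graphs with blank
    nodes need a real isomorphism test instead (e.g. SWAP's comparator)."""
    return frozenset(line.strip() for line in ntriples.splitlines()
                     if line.strip())

def run_test(processor_cmd, input_doc, expected_ntriples):
    """Run the GRDDL processor under test on input_doc and compare its
    N-Triples output with the expected graph.  processor_cmd is the
    command line of the implementation, e.g. ['python', 'glean.py']."""
    result = subprocess.run(processor_cmd + [input_doc],
                            capture_output=True, text=True, check=True)
    return normalize(result.stdout) == normalize(expected_ntriples)
```

The point of keeping the harness this small is that only `normalize` is format-specific: swapping in a different comparison function is all it takes to reuse the loop for other input/output test suites.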
The first results of the test suite show that my XSLT-based implementation of GRDDL gives results similar to Dan Connolly's Python one, in cases where there are no dereferencing errors. I also tried to run it on the RAP implementation – which I eventually installed along with the PHP5 Debian packages – with only mixed success. Hopefully I'll have the time to take a closer look at that, and maybe send a patch.