Further clean up ddf generation.
author Diane Trout <diane@caltech.edu>
Wed, 29 Jun 2011 06:11:04 +0000 (23:11 -0700)
committer Diane Trout <diane@caltech.edu>
Wed, 29 Jun 2011 06:11:04 +0000 (23:11 -0700)
commit 24a0824ccb5924c8dfa8153b8b05efd233fa0b3f
tree cb7f679119abf3bb1068e4928da89f61f21ded61
parent e7a7c24e642d7b7ce59d99af3f13f93e6256e13e

This still needs work: I ended up hard-coding a SPARQL query
to support the submission I'm currently working on, which is
unfortunate since the whole point of the push to RDF was to reduce
hard coding.

However, it did simplify collecting the information needed by make_ddf.

Using the query would also mean that the term copying I was doing
earlier (moving library attributes into each specific submission view)
would be unnecessary, since I can now easily query the graphs.

After the submission, what I probably need to do is reduce the
term copying when importing a submission directory, and add some
way of tying a SPARQL query to a specific imported DAF.

Though I need to deal with the upcoming submission deadlines first.
extra/ucsc_encode_submission/encode_find.py
extra/ucsc_encode_submission/ucsc_gather.py
htsworkflow/submission/daf.py
htsworkflow/submission/test/test_daf.py
htsworkflow/util/hashfile.py [new file with mode: 0644]
htsworkflow/util/rdfhelp.py