Hi Joshua,

On Mon, Sep 14, 2015 at 10:59 PM, Joshua Dunham <joshua_dun...@vrtx.com>
wrote:
> > Honestly, it's normal that you're confused, because I have to admit that
> > such configuration is not used anymore in Marmotta, and it doesn't have
> > any effect.

FYI, MARMOTTA-616 is now fixed in the develop branch, so it is available in
the latest 3.4.0-SNAPSHOT builds pushed to the ASF repos.

> > 2: I'm trying to update some fields in Marmotta via the import folder
> > functionality. Is it possible to use this feature (solely) and do a
> > delete/insert? Or is the import folder insert only? Basically I'm going
> > to drop in nearly the same .ttl or .nt with some slight changes each
> > cycle and want to delete the old import first?
> >
> > No, import only adds triples; it doesn't remove triples that are
> > missing in the new file. RDF doesn't work that way.
> >
> > What I'd use for such a scenario is the import folder feature in
> > combination with the graph store protocol, so:
> >
> > * You import data to a dedicated context (named graph); i.e., copy
> >   files to /path/to/marmotta/import/<NAME>
> >
> > * If you want to "overwrite" the data, drop the context with a DELETE
> >   HTTP request (with curl, for instance) to
> >   http://host/marmotta/context/<NAME>
> >
> > Here is the documentation about those two features:
> >
> > http://wiki.apache.org/marmotta/ImportData#Import_data_via_the_local_directory
> > http://www.w3.org/TR/sparql11-http-rdf-update/#http-delete
> >
> > Hope that helps. If you need more documentation, either by mail or by
> > improving the current one, just ask here.
>
> I thought that would be the best way, but considering there is some extra
> import functionality (lock and config files) I figured I'd ask.

Unfortunately the SPARQL 1.1 Graph Store HTTP Protocol lacks the notion of
"override graph". That's something we do provide in the Redlink API, for
example:
http://dev.redlink.io/api/1.0#dataset-put
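To make the "drop the context" step concrete, here is a minimal Python sketch of the DELETE request against Marmotta's /context/<NAME> endpoint (the base URL and the context name "mydata" are made-up examples; `curl -X DELETE` against the same URL does the same thing):

```python
import urllib.request
from urllib.parse import quote

def drop_context_request(base_url, context_name):
    """Build an HTTP DELETE request that drops a named graph via
    Marmotta's graph store protocol endpoint (/context/<NAME>)."""
    url = f"{base_url}/context/{quote(context_name, safe='')}"
    return urllib.request.Request(url, method="DELETE")

# Hypothetical local instance and context name:
req = drop_context_request("http://localhost:8080/marmotta", "mydata")
print(req.get_method(), req.full_url)
# urllib.request.urlopen(req) would actually perform the delete
```

After the DELETE succeeds, copying the new file into the import folder re-creates the context with fresh data.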

Another option is to base your process on LDP instead. There you have more
fine-grained control over resources, including PATCH support.

> In the long term I'm hoping to write a module for Apache ManifoldCF so
> pushing current states into Marmotta is easier. Currently I'm using some
> crafted SQL and the file writer output to make Turtle files in the import
> folder, and it works 'ok' for update but obviously not for delete/update.
> In this case Manifold will only write out what has changed, so my approach
> is to write out the files with the subject and context so I can delete
> only that one subject by feeding the subject into the correct
> context-aware delete script (Python). Since I know the shape of the
> incoming data (the subject and predicates) it's not an issue. By using
> SPARQL delete messages I'll need to copy the files outside of the import
> folder, have that delete script remove all the subject-predicate
> statements, and then import the new data. Just annoying to have the extra
> process, but hopefully I can get a purposeful module into ManifoldCF at
> some point.

ManifoldCF integration (e.g., a Marmotta connector) would be cool for many
scenarios.

Unfortunately the RDF community has never been very interested in solving
problems such as updates in a fixed-schema scenario. There are some issues
registered (e.g., MARMOTTA-58 or MARMOTTA-435) for experimenting with new
paradigms/patterns/approaches, but I never managed to find the time.
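For the subject-scoped deletes you describe, the core is a single SPARQL Update scoped to one named graph. A small sketch (the graph and subject URIs are invented, and the /sparql/update endpoint path is my assumption about your deployment):

```python
import urllib.request

def delete_subject_update(graph_uri, subject_uri):
    """SPARQL Update that removes every triple with the given subject
    from one named graph, leaving the rest of the graph untouched."""
    return (f"DELETE WHERE {{ GRAPH <{graph_uri}> "
            f"{{ <{subject_uri}> ?p ?o }} }}")

def update_request(base_url, update):
    # POST the update to the SPARQL Update endpoint (path assumed).
    return urllib.request.Request(
        f"{base_url}/sparql/update",
        data=update.encode("utf-8"),
        headers={"Content-Type": "application/sparql-update"},
        method="POST",
    )

# Hypothetical URIs for illustration:
q = delete_subject_update("http://example.org/graph/import",
                          "http://example.org/item/42")
req = update_request("http://localhost:8080/marmotta", q)
# urllib.request.urlopen(req) would execute the update
```

Running that before each re-import would save you from copying files out of the import folder first.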

> A bit ago you volunteered to share some helper script files that I guess
> you had developed to do quick things with Marmotta; is that offer still
> available? :)

I know. Unfortunately I'm always busy with something else, so I never
really get around to making something generally usable out of those private
(and hacky) scripts I have spread across different projects. Basically
there is not that much magic there; they simply solve particular needs in
projects and are not general enough to include in the client library.

So, if you really have something that could be general purpose, just jump
into dev@marmotta and discuss it ;-)

Cheers,

--
Sergio Fernández
Partner Technology Manager
Redlink GmbH
m: +43 6602747925
e: sergio.fernan...@redlink.co
w: http://redlink.co
