Hi Joshua,

As the person responsible for the clustering and caching, let me add a bit
to Sergio's explanation:

2014-10-03 9:12 GMT+02:00 Sergio Fernández <wik...@apache.org>:

> Hi Joshua,
>
> On 01/10/14 16:00, Joshua Dunham wrote:
>
>>  It looks like there are quite a few options to configure the cluster.
>>
>
> Yes, you have the details at: http://marmotta.apache.org/platform/cloud
>
>  Can someone answer,
>> 1. First let me clarify, the clustering options in Marmotta > Core >
>> Settings > clustering.{address,backend,enabled,mode} need to be
>> configured when using Zookeeper?
>>
>
> Zookeeper complements the regular configuration for cloud-based
> installations, where several nodes can read the same configuration.
>
>
In Marmotta, there are two independent functionalities related to running a
cluster of installations:
- the ZooKeeper integration provides *central configuration management* for
Marmotta instances; of course, this makes most sense if you are running a
cluster, but in principle it can also be used to manage many individual,
independent instances
- the clustered caching (the configuration options starting with clustering.,
as you correctly observed; see the example settings below) is responsible for
making sure Marmotta runs properly in a load-balancing setup by keeping
caches in sync and providing appropriate cluster-wide locking to keep the
instances synchronized
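
To make this a bit more concrete, a typical configuration of the clustered
caching looks roughly like this (the keys are the ones you already found; the
values, in particular the backend name and the multicast address, are only
examples that you need to adapt to your environment):

  # enable clustered caching and choose backend and cache mode
  clustering.enabled = true
  clustering.backend = hazelcast
  clustering.mode = DISTRIBUTED
  # example multicast address the cluster members use to find each other
  clustering.address = 228.6.7.8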


>  2. Which is the preferable backend? I’m not familiar with the pros/cons
>> of the options, but looking around at some docs I think Hazelcast is a
>> ‘safe’ good bet?
>>
>
> We currently support Guava and Ehcache for local caches, Hazelcast,
>  and Infinispan for clusters. AFAIK currently Hazelcast is the most stable
> and tested one, and it's currently used in production.


Use Guava for single-instance setups, and Hazelcast otherwise. The other
backends are more experimental. Infinispan is powerful in large setups,
because it also supports dedicated cluster servers (HotRod Server), but it
has not been tested extensively, is significantly more complex, and
introduces more overhead. EHCache has somewhat more intelligent memory
management (it expires cached objects based on the memory they occupy, while
all other backends simply count objects, so with many large objects you might
run into out-of-memory situations), but otherwise introduces more overhead
than Guava.


>
>
>  3. There are three options for mode. Based on the description I would say
>> that distributed is what I want but there is a third option ‘Replicated’
>> which is not described. What exactly does this do?
>>
>
> Yes, it accepts those three values:
>
> * In LOCAL cache mode, the cache is not shared among the servers in a
> cluster. Each machine keeps a local cache. This allows quick startups and
> eliminates network traffic in the cluster, but subsequent requests to
> different cluster members cannot benefit from the cached data.
>

It is even worse: the synchronization features among cluster members will not
be available either. In short: don't use LOCAL when you are running a cluster.


>
> * In DISTRIBUTED cache mode, the cluster forms a big hash table used as a
> cache. This allows efficient use of the large amount of memory available.
>
> * In REPLICATED cache mode, all nodes of the cluster hold a complete cache
> that is automatically replicated. This makes operations that require a
> traversal of the whole graph, such as SPARQL querying, more efficient.
>
> I think the decision about the mode depends more on the concrete needs and
> backend used.
>
>>
>>  My datasets are too large to run on one instance I think and I would
>> like to become familiar with the clustering options Marmotta offers. If I
>> wanted to have N instances running, each holding a portion of the
>> total dataset, is this possible? Ideally there is some sort of master that I
>> query and it will collect the triples regardless of the server the data is
>> on. I’ve seen the walkthrough at the Marmotta site but wanted to see if
>> that will get me where I’d like to be. :)
>>
>
> That's exactly the idea. Just provide sufficient resources for the
> database.


Clustering in Marmotta generally won't help you with big datasets, but it
will help you with high concurrent loads. The clustering functionality
currently implemented essentially provides two features:
- a cluster-wide cache, so that database lookups for frequently used nodes
and triples can be reduced; this won't help you if you are always requesting
different data or running SPARQL queries, but it will help you if you are
repeatedly accessing the same nodes and triples
- a cluster-wide synchronization and locking mechanism to make sure the
cluster members all share the same data and no inconsistencies are created;
this will actually SLOW DOWN your single-process operations and is useful
only in highly concurrent setups

If you want to improve performance for single-dataset single-user
situations, don't use the clustering mechanism. Use and tune the PostgreSQL
database backend instead. Make sure you read
http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server
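
As a very rough illustration only (the numbers are placeholders for a
dedicated machine with around 16GB of RAM; the wiki page above explains how
to choose them for your hardware and workload), the most relevant
postgresql.conf settings are along these lines:

  shared_buffers = 4GB            # PostgreSQL's own page cache
  effective_cache_size = 12GB     # file-system cache the planner may assume
  work_mem = 64MB                 # per-sort/per-hash memory for complex queries
  maintenance_work_mem = 512MB    # speeds up index creation and VACUUM
  checkpoint_segments = 32        # fewer, larger checkpoints help bulk imports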


>
>
>  I also found the Apache Giraph project which claims to offer native
>> node/edge processing for graph databases. Has anyone used this? I would be
>> *very* interested to play around if it could connect to Marmotta.
>>
>
> We have an experimental backend that uses Titan DB. It'd be great if
> someone could evolve Marmotta in that direction!



Giraph serves a different purpose: it is a highly scalable graph-processing
framework, not a database. As such, it allows you to parallelize typical
graph operations (like shortest-path computations) and run them on a Hadoop
cluster. This is quite different from the kind of operations Marmotta needs
(e.g. to support SPARQL querying). If you would like a clustered database
backend, you could try the Titan backend with HBase or Cassandra, but I am
not completely convinced it will be faster than PostgreSQL.



>  Lastly, what are people using to manage their ontologies? I found Protege
>> a while back and installed WebProtege to manage ontologies. Is it possible
>> that it connects to Marmotta to keep the ontology synchronized? Are there
>> any cool things WebProtege (or any ontology manager) can do with Marmotta?
>>
>
I am using emacs for managing ontologies ;-)

I hope this clarifies things a bit more,

Sebastian
