Re: Five Questions for Cassandra Users

2019-04-02 Thread Alain RODRIGUEZ
Hello,

I'm no longer operating "my" own cluster, but am now doing consulting for TLP.
Here is what I would say based on my own experience:

> 1.   Do the same people where you work operate the cluster and write
> the code to develop the application?
>

Operating a cluster and using the driver, writing queries, and developing the
needed code require different sets of skills.
Some people have all of the required skills, yet the amount of work one person
can do in a day is limited anyway.

Some thoughts:
- The design/model should always be done with all the people involved in
this feature/project.
--- Operators will know about the best practices and will ultimately assume the
responsibility of keeping things working. They are the fire
extinguishers and should help build the house, because they know what
will burn and what is reliable.
--- Devs are the most qualified to build the code, interact with the
drivers and potentially write the code (and tests) needed to query
Cassandra in the 'right way', as defined together with operators.
--- Lawyers/legal teams can help answer questions around the TTL (Time To
Live) to use. Not much data is required to live forever, and setting a
TTL is a good way to keep the data size under control (see the example
commands after this list).
--- If the same people take care of both DEV and OPS (start-ups and small teams,
generally), it's good for this team to be at least 2 people. One person alone
cannot exchange ideas, nor be up 24/7...

- A team of operators that just knows the basics can do level-1 support if
procedures are well documented and the proper tooling is in place. There
is a fair amount of repetitive work, and many times the 'protocol' to react
to X or Y is the same. Ultimately, they can escalate to the people
who are responsible for the cluster.
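
To illustrate the TTL point above, something like this (a minimal sketch with a
hypothetical keyspace/table and a 90-day retention, to be adapted to your model
and requirements):

    # default TTL of 90 days (7776000 s) on a table; applies to newly written data
    cqlsh -e "ALTER TABLE my_keyspace.sensor_data WITH default_time_to_live = 7776000;"
    # or a TTL per write
    cqlsh -e "INSERT INTO my_keyspace.sensor_data (id, ts, value) VALUES (uuid(), toTimestamp(now()), 42) USING TTL 7776000;"

Keep in mind that expired data still becomes tombstones and is only purged by
compaction, after gc_grace_seconds.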


> 2.   Do you have a metrics stack that allows you to see graphs of
> various metrics with all the nodes displayed together?
>

I definitely recommend and advocate for this. It is the best way to get a
feeling for the health of your cluster at first sight, to understand the
patterns and the bottlenecks, to see the impact of optimisations, and to
diagnose issues efficiently.

We built Datadog default dashboards to help people using Datadog to monitor
their Cassandra clusters. The release post is here:
http://thelastpickle.com/blog/2017/12/05/datadog-tlp-dashboards.html
Also if you prefer videos, here is what I think about why, what and how to
monitor: https://www.youtube.com/watch?v=Q9AAR4UQzMk

If you're not using Datadog, there are Grafana dashboards available, and
Prometheus metric exporters as well:
- Searching for "grafana cassandra dashboards" on any search engine should
give you a few options
- https://github.com/instaclustr/cassandra-exporter
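
Even without a full metrics stack in place, a few nodetool commands already give
a rough first view of a node's health (not a replacement for dashboards, just a
quick sketch of what I tend to look at):

    nodetool tpstats            # pending/blocked/dropped tasks per thread pool
    nodetool tablestats         # per-table latencies, sstable counts, tombstones
    nodetool proxyhistograms    # coordinator-level read/write latency percentiles
    nodetool compactionstats    # pending compactions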

> 3.   Do you have a log stack that allows you to see the logs for all
> the nodes together?
>

I would say it's a 'should-have', as opposed to a good monitoring system,
for example, which is a 'must-have'. I never had one or really used one,
despite the fact that as a consultant I have worked on multiple clusters.

If you have one in place for other services, then maybe just plug the C*
nodes in as well. It will help you if a machine becomes completely unreachable,
and it makes it easy to aggregate logs and compute statistics for the whole
cluster. It can be really nice. Just be aware of the amount of logs that
Cassandra generates, pick the debug level you want to keep, and think about an
appropriate retention policy.

But it's definitely not the first thing I would care about, as tooling allows
you to query all nodes over ssh to get information about each of them,
or you can always jump on a faulty node directly.
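
For example, something as simple as the following already goes a long way (a
rough sketch, assuming SSH access to the nodes, a 'nodes.txt' host list and the
default log location):

    for host in $(cat nodes.txt); do
      echo "== $host =="
      ssh "$host" "grep -E 'WARN|ERROR' /var/log/cassandra/system.log | tail -20"
    done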


> 4.   Do you regularly repair your clusters - such as by using Reaper?
>

Most people do, I believe, one way or another: with cron, home-built
tools, Reaper, or OSS scripts that handle "range repairs".
It is not mandatory as long as you do not delete data, and it's maybe not
needed if you use strong consistency, but I always like to do it regularly.
I like to know that my nodes hold the same data, that entropy is as
low as possible. It has always worked well for me, making me more confident
when operating the cluster (moving token ranges, forcefully removing a
node, etc.), and I did not lose data in 6 years (apart from counters, but
they were already known to be not 'accurate', not to say 'broken', back then),
despite the fact that I started with C* 0.8 (and the fresh first counters
implementation, yay!).
I would keep routine repairs as a good practice when they are useful (deletes,
reads that are not strongly consistent), but also when in theory they are not
needed, to help keep the data where it belongs, even though Cassandra is now
pretty resilient.

Yet some people are doing perfectly fine... until they run a first repair!
Be sure to read about it beforehand. With the default number of vnodes and
default repair options in older versions of Cassandra, you could really harm
your cluster.
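
If you do not use Reaper, a relatively gentle starting point is to repair one
node at a time, primary ranges only, spread over your gc_grace_seconds window
(a sketch; exact options vary with the Cassandra version):

    nodetool repair -pr my_keyspace                                  # only this node's primary ranges
    nodetool repair -st <start_token> -et <end_token> my_keyspace    # or restrict to a subrange

Reaper essentially automates this kind of subrange scheduling for you.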


> 5.   Do you u

How to install an older minor release?

2019-04-02 Thread Oleksandr Shulgin
Hello,

We've just noticed that we cannot install older minor releases of Apache
Cassandra from Debian packages, as described on this page:
http://cassandra.apache.org/download/

Previously we were doing the following at the last step: apt-get install
cassandra==3.0.17

Today it fails with error:
E: Version '3.0.17' for 'cassandra' was not found

And `apt-get show cassandra` reports only one version available, the latest
released one: 3.0.18
The packages for the older versions are still in the pool:
http://dl.bintray.com/apache/cassandra/pool/main/c/cassandra/

Was it always the case that only the latest version is available to be
installed directly with apt, or did something change recently?
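
For reference, in case it helps anyone reproduce this (assuming the repository
is configured as described on the download page):

    apt-cache madison cassandra    # list the versions apt considers installable
    apt-cache policy cassandra     # show the candidate version and repository priorities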

Regards,
-- 
Alex


Re: How do u setup networking for Opening Solr Web Interface when on cloud?

2019-04-02 Thread Amanda Moran
Hi there Krish-

I want to +1 what Rahul said about reaching out to DataStax. Please submit
a support ticket, and DataStax support can help you with this.

Thanks!

Amanda

On Mon, Apr 1, 2019 at 3:47 PM Krish Donald  wrote:

> I have searched on the internet but did not find a link that worked for me.
>
> Even on
> https://s3.amazonaws.com/quickstart-reference/datastax/latest/doc/datastax-enterprise-on-the-aws-cloud.pdf
> it is mentioned to use SSH tunneling.
>
> "DSE nodes have no public IP addresses. Access to the web consoles for
> Solr or Spark can be established by using an SSH tunnel. For example, you
> can access the Solr console from http://NODE_IP:8983/solr/. You can bind
> to a local port with a command like the following (replacing the key and IP
> values for those of your cluster): ssh -v -i $KEY_FILE -L
> 8983:$NODE_IP:8983 ubuntu@$OPSC_PUBLIC_IP -N The Solr console is then
> accessible at http://127.0.0.1:8983/solr/. When you’re prompted to log
> in, enter the user name cassandra and the password you chose. "
>
> But I am not looking for the SSH tunneling option.
>
> I tried to follow the link below as well:
>
> https://forums.aws.amazon.com/thread.jspa?threadID=31406
>
> But DSE nodes have no public IP addresses, so this also did not work.
>
> Thanks
>
>
>
> On Mon, Apr 1, 2019 at 12:32 PM Rahul Singh 
> wrote:
>
>> This is probably not a question for this community... but rather for
>> DataStax support or the DataStax Academy Slack group. More specifically,
>> this is a "how to expose Solr securely" question, which is amply answered
>> on the interwebs if you look for it on Google.
>>
>>
>> rahul.xavier.si...@gmail.com
>>
>> http://cassandra.link
>>
>> I'm speaking at #DataStaxAccelerate, the world’s premiere
>> #ApacheCassandra conference, and I want to see you there! Use my code
>> Singh50 for 50% off your registration. www.datastax.com/accelerate
>>
>>
>> On Mon, Apr 1, 2019 at 12:19 PM Krish Donald 
>> wrote:
>>
>>> Hi,
>>>
>>> We have a DSE Cassandra cluster running on AWS.
>>> Now we have a requirement to enable Solr and Spark on the cluster.
>>> We have Cassandra on a private data subnet which has connectivity to the app
>>> layer.
>>> From Cassandra, we can't open the Solr web interface directly.
>>> We tried using SSH tunneling and it is working, but we can't give the SSH
>>> tunneling option to developers.
>>>
>>> We would like to create a load balancer and put the Cassandra nodes
>>> under that load balancer, but the question here is: what health check do I
>>> need to give the load balancer so that it can open the Solr Web UI?
>>>
>>> My solution might not be perfect; please suggest any other solution if
>>> you have one.
>>>
>>> Thanks
>>>
>>>


Re: Multi-DC replication and hinted handoff

2019-04-02 Thread sankalp kohli
Are you using OSS C*?

On Fri, Mar 29, 2019 at 1:49 AM Jens Fischer  wrote:

> Hi,
>
> I have a Cassandra setup with multiple data centres. The vast majority of
> writes are LOCAL_ONE writes to data center DC-A. One node (lets call this
> node A1) in DC-A has accumulated large amounts of hint files (~100 GB). In
> the logs of this node I see lots of messages like the following:
>
> INFO  [HintsDispatcher:26] 2019-03-28 01:49:25,217
> HintsDispatchExecutor.java:289 - Finished hinted handoff of file
> db485ac6-8acd-4241-9e21-7a2b540459de-1553419324363-1.hints to endpoint /
> 10.10.2.55: db485ac6-8acd-4241-9e21-7a2b540459de
>
> The node 10.10.2.55 is in DC-B, let's call this node B1. There is no
> indication whatsoever that B1 was down: Nothing in our monitoring, nothing
> in the logs of B1, nothing in the logs of A1. Are there any other
> situations where hints to B1 are stored at A1? Other than A1's failure
> detection detecting B1 as down, I mean. For example, could the reason for the
> hints be that B1 is overloaded and cannot handle the intake from A1?
> Or that the network connection between DC-A and DC-B is too slow?
>
> While researching this I also found the following information on Stack
> Overflow from Ben Slater regarding hints and multi-dc replication:
>
> Another factor here is the consistency level you are using - a LOCAL_*
> consistency level will only require writes to be written to the local DC
> for the operation to be considered a success (and hints will be stored for
> replication to the other DC).
> (…)
> The hints are the records of writes that have been made in one DC that are
> not yet replicated to the other DC (or even nodes within a DC). I think
> your options to avoid them are: (1) write with ALL or QUORUM (not LOCAL_*)
> consistency - this will slow down your writes but will ensure writes go
> into both DCs before the op completes (2) Don't replicate the data to the
> second DC (by setting the replication factor to 0 for the second DC in the
> keyspace definition) (3) Increase the capacity of the second DC so it can
> keep up with the writes (4) Slow down your writes so the second DC can keep
> up.
>
>
> Source: https://stackoverflow.com/a/37382726
>
> This reads like hints are used for “normal” (async) replication between
> data centres, i.e. hints could show up without any nodes being down
> whatsoever. This could explain what I am seeing. Does anyone know more about
> this? Does that mean I will see hints even if I disable hinted handoff?
>
> Any pointers or help are greatly appreciated!
>
> Thanks in advance
> Jens
>
> Geschäftsführer: Christoph Ostermann (CEO), Oliver Koch, Steffen
> Schneider, Hermann Schweizer.
> Amtsgericht Kempten/Allgäu, Registernummer: 10655, Steuernummer
> 127/137/50792, USt.-IdNr. DE272208908
>


Re: Best practices while designing backup storage system for big Cassandra cluster

2019-04-02 Thread Carl Mueller
Another approach to avoiding the full backup I/O hit would be to rotate a
node or small subset of nodes that do full backups routinely, so that over
the course of a month or two you get full backups. Of course this assumes
you have incremental ability for the other backup days/dates.
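
As a very rough sketch of the incremental idea from my previous mail below
(assuming the default data directory, a single hypothetical keyspace
'my_keyspace', and that previous_manifest.txt from the last successful backup
exists and is sorted; the actual upload and manifest storage are left out):

    TAG=backup_$(date +%F)
    nodetool snapshot -t "$TAG" my_keyspace                      # cheap: hard links only
    find /var/lib/cassandra/data/my_keyspace -path "*/snapshots/$TAG/*" -type f \
      | sed "s|.*/my_keyspace/||; s|snapshots/$TAG/||" | sort > current_manifest.txt
    comm -23 current_manifest.txt previous_manifest.txt > files_to_upload.txt
    # upload only files_to_upload.txt, then store current_manifest.txt with the backup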

On Mon, Apr 1, 2019 at 1:30 PM Carl Mueller 
wrote:

> At my current job I had to roll my own backup system. Hopefully I can get
> it OSS'd at some point. Here is a (now slightly outdated) presentation:
>
>
> https://docs.google.com/presentation/d/13Aps-IlQPYAa_V34ocR0E8Q4C8W2YZ6Jn5_BYGrjqFk/edit#slide=id.p
>
> If you are struggling with the disk I/O cost of the sstable
> backups/copies, note that since sstables are append-only, if you adopt an
> incremental approach to your backups, you only need to track a list of the
> current files and upload the files that are new compared to a previous
> successful backup. Your "manifest" of files for a node will need to have
> references to the previous backup, and you'll want to "reset" with a full
> backup each month.
>
> I stole that idea from https://github.com/tbarbugli/cassandra_snapshotter.
> I would have used that but we had more complex node access modes
> (kubernetes, ssh through jumphosts, etc) and lots of other features needed
> that weren't supported.
>
> In AWS I use aws profiles to throttle the transfers, and parallelize
> across nodes. The basic unit of a successful backup is a single node, but
> you'll obviously want to track overall node success.
>
> Note that in rack-based topologies you really only need one whole
> successful rack if your RF is > # racks, and one DC.
>
> Beware doing simultaneous flushes/snapshots across the cluster at once,
> that might be the equivalent of a DDoS. You might want to do a "jittered"
> randomized preflush of the cluster first before doing the snapshotting.
>
> Unfortunately, the nature of a distributed system is that snapshotting all
> the nodes at the precise same time is a hard problem.
>
> I also do not / have not used the built-in incremental backup feature of
> cassandra, which can enable more precise point-in-time backups (aside from
> the unflushed data in the commitlogs)
>
> A note on incrementals with occasional FULLs: note that FULL backups
> monthly might take more than a day or two, especially throttled. My
> incrementals were originally looking up previous manifests using only "most
> recent", but then the long-running FULL backups were excluded from the
> "chain" of incremental backups. So I now implement a fuzzy lookup for the
> incrementals that prioritizes any FULL in the last 5 days over any more
> recent incremental. Thus you can purge old backups you don't need more
> safely, using the monthly full backups as a reset point.
>
> On Mon, Apr 1, 2019 at 1:08 PM Alain RODRIGUEZ  wrote:
>
>> Hello Manish,
>>
>> I think any disk works, as long as it is big enough. It's also better if
>> it's a reliable system (some kind of redundant RAID, NAS, or storage like GCS
>> or S3...). During a backup we are not looking for speed so much as
>> resiliency and not harming the source cluster, I would say.
>> Then, how fast you write to the backup storage system will more often be
>> limited by what you can read from the source cluster.
>> The backups have to be taken from running nodes, thus it's easy to
>> overload the disk (reads), the network (exporting backup data to its final
>> destination), and even the CPU (as/if the machine handles the transfer).
>>
>>> What are the best practices while designing backup storage system for big
>>> Cassandra cluster?
>>
>>
>> What is nice to have (not to say mandatory) is a system of incremental
>> backups. You should not take all the data from the nodes every time, or
>> you'll either harm the cluster regularly OR spend days transferring the data
>> (once the amount of data grows big enough).
>> I'm not speaking about Cassandra incremental snapshots, but about using
>> something like AWS snapshots, or copying this behaviour programmatically to
>> take (copy, link?) old SSTables from previous backups when they exist. This
>> will greatly reduce the cluster's work and the resources needed, as soon
>> enough a substantial amount of the data should be coming from the backup data
>> store itself. The problem with incremental snapshots is that when restoring,
>> you have to restore multiple pieces, making it harder and involving a lot of
>> compaction work.
>> The "caching" technique mentioned above gives the best of the two worlds:
>> - You will always back up from the nodes only the sstables you don't have
>> already in your backup storage system,
>> - You will always restore easily, as each backup is a full backup.
>>
>> It's not really a "hands-on" write-up, but it should let you know about
>> the existing ways to do backups and their tradeoffs. I wrote this a year ago:
>> http://thelastpickle.com/blog/2018/04/03/cassandra-backup-and-restore-aws-ebs.html
>>
>> It's a complex topic, I hope some of this is helpful to you.

Re: Multi-DC replication and hinted handoff

2019-04-02 Thread Jens Fischer
Yes, Apache Cassandra 3.11.2 (no DSE).

On 2. Apr 2019, at 19:40, sankalp kohli  wrote:

Are you using OSS C*?

On Fri, Mar 29, 2019 at 1:49 AM Jens Fischer  wrote:
Hi,

I have a Cassandra setup with multiple data centres. The vast majority of 
writes are LOCAL_ONE writes to data center DC-A. One node (let's call this node 
A1) in DC-A has accumulated large amounts of hint files (~100 GB). In the logs 
of this node I see lots of messages like the following:

INFO  [HintsDispatcher:26] 2019-03-28 01:49:25,217 
HintsDispatchExecutor.java:289 - Finished hinted handoff of file 
db485ac6-8acd-4241-9e21-7a2b540459de-1553419324363-1.hints to endpoint 
/10.10.2.55: db485ac6-8acd-4241-9e21-7a2b540459de

The node 10.10.2.55 is in DC-B, let's call this node B1. There is no indication 
whatsoever that B1 was down: Nothing in our monitoring, nothing in the logs of 
B1, nothing in the logs of A1. Are there any other situations where hints to B1 
are stored at A1? Other than A1's failure detection detecting B1 as down, I 
mean. For example, could the reason for the hints be that B1 is overloaded and 
cannot handle the intake from A1? Or that the network connection between 
DC-A and DC-B is too slow?

While researching this I also found the following information on Stack Overflow 
from Ben Slater regarding hints and multi-dc replication:

Another factor here is the consistency level you are using - a LOCAL_* 
consistency level will only require writes to be written to the local DC for 
the operation to be considered a success (and hints will be stored for 
replication to the other DC).
(…)
The hints are the records of writes that have been made in one DC that are not 
yet replicated to the other DC (or even nodes within a DC). I think your 
options to avoid them are: (1) write with ALL or QUORUM (not LOCAL_*) 
consistency - this will slow down your writes but will ensure writes go into 
both DCs before the op completes (2) Don't replicate the data to the second DC 
(by setting the replication factor to 0 for the second DC in the keyspace 
definition) (3) Increase the capacity of the second DC so it can keep up with 
the writes (4) Slow down your writes so the second DC can keep up.

Source: https://stackoverflow.com/a/37382726

This reads like hints are used for “normal” (async) replication between data 
centres, i.e. hints could show up without any nodes being down whatsoever. This 
could explain what I am seeing. Does anyone know more about this? Does that mean 
I will see hints even if I disable hinted handoff?

Any pointers or help are greatly appreciated!

Thanks in advance
Jens



Geschäftsführer: Christoph Ostermann (CEO), Oliver Koch, Steffen Schneider, 
Hermann Schweizer.
Amtsgericht Kempten/Allgäu, Registernummer: 10655, Steuernummer 127/137/50792, 
USt.-IdNr. DE272208908



Geschäftsführer: Christoph Ostermann (CEO), Oliver Koch, Steffen Schneider, 
Hermann Schweizer.
Amtsgericht Kempten/Allgäu, Registernummer: 10655, Steuernummer 127/137/50792, 
USt.-IdNr. DE272208908


Cassandra STIG

2019-04-02 Thread Krish Donald
Hi,

Does anyone have a Cassandra STIG?

Thanks
Krish


Re: Cassandra STIG

2019-04-02 Thread Joseph Testa
There is a recently published CIS benchmark for Cassandra.

Joe


On Tue, Apr 2, 2019 at 4:19 PM Krish Donald  wrote:

> Hi,
>
> Does anyone have a Cassandra STIG?
>
> Thanks
> Krish
>


Re: Cassandra STIG

2019-04-02 Thread Krish Donald
Hi Joe,

Thanks for the reply. I am looking for a Cassandra STIG.
I found one link:
https://grokbase.com/p/cassandra/user/162g7mfvg2/security-assessment-of-cassandra

Does anyone have a complete Cassandra STIG?
The CIS benchmark is not the one I am looking for.

Thanks
Krish


On Tue, Apr 2, 2019 at 1:25 PM Joseph Testa  wrote:

> There is a recently published CIS benchmark for Cassandra.
>
> Joe
>
>
> On Tue, Apr 2, 2019 at 4:19 PM Krish Donald  wrote:
>
>> Hi,
>>
>> Does anyone have a Cassandra STIG?
>>
>> Thanks
>> Krish
>>
>


Procedures for moving part of a C* cluster to a different datacenter

2019-04-02 Thread Saleil Bhat (BLOOMBERG/ 731 LEX)
Hello all, 

I have a question about moving part of a multi-datacenter cluster to a new 
physical datacenter. 
For example, suppose I have a two-datacenter cluster with one DC in San Jose, 
California and one DC in Orlando, Florida, and I want to move all the nodes in 
Orlando to a new datacenter in Tampa.  


The standard procedure for doing this seems to be to add a 3rd datacenter to the 
cluster, stream data to the new datacenter via nodetool rebuild, then 
decommission the old datacenter. A more detailed review of this procedure can 
be found here: 
http://thelastpickle.com/blog/2019/02/26/data-center-switch.html
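
In command terms, the core of that procedure is roughly the following
(simplified; the linked post covers the details and caveats):

    nodetool rebuild -- Orlando    # on each node of the new DC, stream from the old DC
    nodetool decommission          # later, on each node of the old DC, once traffic has moved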



However, I see two problems with the above protocol. First, it requires 
changes at the application layer because of the datacenter name change; e.g. 
all applications referring to the datacenter ‘Orlando’ will now have to be 
changed to refer to ‘Tampa’.  Second, it requires that a full repair be run on 
every node in the old datacenter, ensuring that all writes which went to it are 
replicated to the new datacenter, before decommissioning it. This repair (for a 
large dataset) can be prohibitively expensive. 



As such, I was wondering what peoples’ thoughts were on the following 
alternative procedure: 

1) Kill one node in the old datacenter

2) Add a new node in the new datacenter, but indicate that it is to REPLACE the 
one just shut down; this node will bootstrap, and all the data which it is 
supposed to be responsible for will be streamed to it (see the sketch below)

3) Repeat steps one and two until all nodes have been replaced
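
For concreteness, step 2 would rely on the replace-address mechanism, something
along these lines on the new node before its first start (a sketch; the exact
flag and where you set it depend on your Cassandra version):

    # in cassandra-env.sh on the new Tampa node, pointing at the dead Orlando node
    JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address_first_boot=<old_node_ip>"

In this scheme the new node would keep the old logical datacenter name as far
as the snitch is concerned, which is exactly what the question below is about.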



In particular, I’m curious if anybody has any insight on what problems can 
arise if a “logical” datacenter in Cassandra actually spans two different 
physical datacenters, and whether these problems might be mitigated if the two 
physical datacenters in question are geographically close together (e.g. Tampa 
and Orlando). 

Thanks, 
-Saleil 

Re: Multi-DC replication and hinted handoff

2019-04-02 Thread Stefan Miklosovic
Hi Jens,

I am reading "Cassandra: The Definitive Guide"; chapter 9, Reading and Writing
Data, has a section called The Cassandra Write Path, with this
sentence in it:

If a replica does not respond within the timeout, it is presumed to be down
and a hint is stored for the write.

So your node might actually be fine, but it just cannot cope
with the load and replies too late, once the coordinator already has sufficient
replies from other replicas. The coordinator then stores a hint for that write
and for that node. I am not sure how this relates to turning off handoffs
completely. I can do some tests locally, if time allows, to investigate various
scenarios. There might be some subtle differences.
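
If you want to look at it from the operational side in the meantime, a few
things worth checking (a rough, non-exhaustive list):

    nodetool statushandoff              # whether hinted handoff is currently enabled
    nodetool tpstats                    # dropped MUTATIONs point at overload/timeouts
    ls -lh /var/lib/cassandra/hints/    # hint backlog on disk (assuming the default hints_directory)

and in cassandra.yaml: write_request_timeout_in_ms, hinted_handoff_enabled,
hinted_handoff_throttle_in_kb and max_hint_window_in_ms.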

On Wed, 3 Apr 2019 at 07:19, Jens Fischer  wrote:

> Yes, Apache Cassandra 3.11.2 (no DSE).
>
> On 2. Apr 2019, at 19:40, sankalp kohli  wrote:
>
> Are you using OSS C*?
>
> On Fri, Mar 29, 2019 at 1:49 AM Jens Fischer  wrote:
>
>> Hi,
>>
>> I have a Cassandra setup with multiple data centres. The vast majority of
>> writes are LOCAL_ONE writes to data center DC-A. One node (let's call this
>> node A1) in DC-A has accumulated large amounts of hint files (~100 GB). In
>> the logs of this node I see lots of messages like the following:
>>
>> INFO  [HintsDispatcher:26] 2019-03-28 01:49:25,217
>> HintsDispatchExecutor.java:289 - Finished hinted handoff of file
>> db485ac6-8acd-4241-9e21-7a2b540459de-1553419324363-1.hints to endpoint /
>> 10.10.2.55: db485ac6-8acd-4241-9e21-7a2b540459de
>>
>> The node 10.10.2.55 is in DC-B, let's call this node B1. There is no
>> indication whatsoever that B1 was down: Nothing in our monitoring, nothing
>> in the logs of B1, nothing in the logs of A1. Are there any other
>> situations where hints to B1 are stored at A1? Other than A1's failure
>> detection detecting B1 as down, I mean. For example, could the reason for the
>> hints be that B1 is overloaded and cannot handle the intake from A1?
>> Or that the network connection between DC-A and DC-B is too slow?
>>
>> While researching this I also found the following information on Stack
>> Overflow from Ben Slater regarding hints and multi-dc replication:
>>
>> Another factor here is the consistency level you are using - a LOCAL_*
>> consistency level will only require writes to be written to the local DC
>> for the operation to be considered a success (and hints will be stored for
>> replication to the other DC).
>> (…)
>> The hints are the records of writes that have been made in one DC that
>> are not yet replicated to the other DC (or even nodes within a DC). I think
>> your options to avoid them are: (1) write with ALL or QUORUM (not LOCAL_*)
>> consistency - this will slow down your writes but will ensure writes go
>> into both DCs before the op completes (2) Don't replicate the data to the
>> second DC (by setting the replication factor to 0 for the second DC in the
>> keyspace definition) (3) Increase the capacity of the second DC so it can
>> keep up with the writes (4) Slow down your writes so the second DC can keep
>> up.
>>
>>
>> Source: https://stackoverflow.com/a/37382726
>>
>> This reads like hints are used for “normal” (async) replication between
>> data centres, i.e. hints could show up without any nodes being down
>> whatsoever. This could explain what I am seeing. Does anyone know more about
>> this? Does that mean I will see hints even if I disable hinted handoff?
>>
>> Any pointers or help are greatly appreciated!
>>
>> Thanks in advance
>> Jens
>>
>> Geschäftsführer: Christoph Ostermann (CEO), Oliver Koch, Steffen
>> Schneider, Hermann Schweizer.
>> Amtsgericht Kempten/Allgäu, Registernummer: 10655, Steuernummer
>> 127/137/50792, USt.-IdNr. DE272208908
>>
>
> Geschäftsführer: Christoph Ostermann (CEO), Oliver Koch, Steffen
> Schneider, Hermann Schweizer.
> Amtsgericht Kempten/Allgäu, Registernummer: 10655, Steuernummer
> 127/137/50792, USt.-IdNr. DE272208908
>