JOB | Front End React Developer (Lead/Architect) in London, UK

2019-09-10 Thread James Tobin
Hello, I'm working with an employer that is looking to hire (at their
London office) a permanent front-end (React) developer who can fulfil
the role of lead/architect. I hoped that some members of this mailing
list might like to discuss further.  I can be contacted off-list using
"JamesBTobin (at) Gmail (dot) Com".  Kind regards, James

-
To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
For additional commands, e-mail: user-h...@cassandra.apache.org



Re: Cassandra JVM configuration

2019-09-10 Thread pat

On 2019-09-06 11:02, Oleksandr Shulgin wrote:

On Fri, Sep 6, 2019 at 11:00 AM  wrote:


- reads => as much as possible - huge stream of requests
- data => 186GB on each node
- the reads are unpredictable
- there are about 6 billion records in the cluster


I wonder though, if it makes sense to use Cassandra for a read-only
dataset?  Couldn't you just put it on something like Amazon S3 and be
done with it?

How many rows per partition do you have?  Do you always scan the full
partition, or do you need to restrict results by clustering key?

Regards,
--
Alex


Hi,

well, it might be updated, but updates can be done only after processing
of the data has finished. No, we access rows through the id. How can I get
the number of rows within a partition? We have never needed this.
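In case it helps, a per-partition count can be obtained in CQL by restricting on the full partition key (the keyspace, table, and column names below are placeholders, not taken from this thread):

```sql
-- Counts rows in a single partition only; restricting on the full
-- partition key keeps the scan local to that partition rather than
-- touching the whole table.
SELECT COUNT(*) FROM my_keyspace.my_table WHERE id = 12345;
```

For very wide partitions this still reads the whole partition server-side, so it is best run ad hoc rather than on a hot path.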


Thanks

Pat








Drastic increase of bloom filter size after upgrading from 2.2.14 to 3.11.4

2019-09-10 Thread Matthias Pfau
Hi there,
we just finished upgrading sstables on a single node after upgrading from 
2.2.14 to 3.11.4. Since then, we have noticed a drastic increase in off-heap 
memory consumption. This is due to increased bloom filter size.

According to cfstats output, "Bloom filter off heap memory used" increased by a 
factor of 7 to 8. That means that while bloom filters took 1 GB of off-heap 
storage with 2.2.14, they fill around 7.5 GB with 3.11.4.

Did anyone observe something similar? Have there been bigger changes to the 
bloom filter implementation between those versions?

Best,
Matthias




Re: Drastic increase of bloom filter size after upgrading from 2.2.14 to 3.11.4

2019-09-10 Thread Matthias Pfau
A few more details:

1. bloom_filter_fp_chance is set to 0.01

2. I reviewed CASSANDRA-8413 
(https://github.com/apache/cassandra/commit/23fd75f27c40462636f09920719b5dcbef5b8f36)
and this should not have led to much larger bloom filters.

3. Large sstables (a few above 1 TB) were split into much smaller ones 
(256 vnodes) during upgradesstables. Could this lead to the described 
problem of far too large bloom filters?
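As a rough sanity check, the textbook Bloom filter sizing formula (which Cassandra's implementation approximates; the exact on-disk layout differs between versions) relates bits per key to the target false-positive chance:

```python
import math

def bloom_bits_per_key(fp_chance: float) -> float:
    """Optimal bits per element for a target false-positive rate p:
    m/n = -ln(p) / (ln 2)^2."""
    return -math.log(fp_chance) / (math.log(2) ** 2)

# With bloom_filter_fp_chance = 0.01 this is ~9.6 bits (~1.2 bytes) per key,
# so total filter size should scale roughly linearly with the number of
# keys tracked across all sstables, independent of sstable count.
print(bloom_bits_per_key(0.01))
```

By that estimate, a 7-8x jump at the same fp_chance would point to many more keys being tracked (e.g. the same keys duplicated across many small sstables until compaction settles) rather than to the formula itself; this is an inference from the generic formula, not a statement about Cassandra internals.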

Best,
Matthias
10. Sep. 2019, 14:22 von matthias.p...@tutao.de.INVALID:

> Hi there,
> we just finished upgrading sstables on a single node after upgrading from 
> 2.2.14 to 3.11.4. Since then, we noted a drastic increase of off heap memory 
> consumption. This is due to increased bloom filter size.
>
> According to cfstats output "Bloom filter off heap memory used" increased by 
> a factor between 7 and 8. That means that while bloom filters took 1 GB of 
> off heap storage with 2.2.14, they fill around 7.5 GB with 3.11.4.
>
> Did anyone observe something similar? Have there been bigger changes to the 
> bloom filter implementation between those versions?
>
> Best,
> Matthias
>
> -
> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
> For additional commands, e-mail: user-h...@cassandra.apache.org
>





Re: Is this something to be concerned about? Compaction Error reported in the log every minute

2019-09-10 Thread Leena Ghatpande
Thank you for the response. We have an upgrade planned for the beginning of 
next year.

Just saw this error in the error logs. Not sure if this helps.

ERROR [CompactionExecutor:6] 2019-09-10 11:26:20,356 CassandraDaemon.java:217 - 
Exception in thread Thread[CompactionExecutor:6,1,main]
java.lang.NullPointerException: null
	at org.apache.cassandra.db.transform.UnfilteredRows.isEmpty(UnfilteredRows.java:38) ~[apache-cassandra-3.7.jar:3.7]
	at org.apache.cassandra.db.partitions.PurgeFunction.applyToPartition(PurgeFunction.java:64) ~[apache-cassandra-3.7.jar:3.7]
	at org.apache.cassandra.db.partitions.PurgeFunction.applyToPartition(PurgeFunction.java:24) ~[apache-cassandra-3.7.jar:3.7]
	at org.apache.cassandra.db.transform.BasePartitions.hasNext(BasePartitions.java:76) ~[apache-cassandra-3.7.jar:3.7]
	at org.apache.cassandra.db.compaction.CompactionIterator.hasNext(CompactionIterator.java:226) ~[apache-cassandra-3.7.jar:3.7]
	at org.apache.cassandra.db.compaction.CompactionTask.runMayThrow(CompactionTask.java:182) ~[apache-cassandra-3.7.jar:3.7]
	at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) ~[apache-cassandra-3.7.jar:3.7]
	at org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:82) ~[apache-cassandra-3.7.jar:3.7]
	at org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:60) ~[apache-cassandra-3.7.jar:3.7]
	at org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:264) ~[apache-cassandra-3.7.jar:3.7]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_162]
	at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_162]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[na:1.8.0_162]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_162]
	at java.lang.Thread.run(Thread.java:748) [na:1.8.0_162]


From: Jeff Jirsa 
Sent: Monday, September 9, 2019 3:38 PM
To: cassandra 
Subject: Re: Is this something to be concerned about? Compaction Error reported 
in the log every minute

It's unfortunate you don't have a better stack trace to know what's actually 
going on here. 3.7 is pretty old; I'd be inclined to upgrade to the latest 3.11 
branch in the hope that you either get a better stack trace or an outright fix, 
but that stack doesn't ring any bells for me.


On Mon, Sep 9, 2019 at 10:20 AM Leena Ghatpande  wrote:
We are on Cassandra 3.7 and have an 8-node cluster, 2 DCs, with 4 nodes in each 
DC. RF=3

The Compaction Error message below is being logged to the system.log exactly 
every minute.

ERROR [CompactionExecutor:5751] 2019-06-09 03:24:50,585 
CassandraDaemon.java:217 - Exception in thread 
Thread[CompactionExecutor:5751,1,main]
java.lang.NullPointerException: null

ERROR [CompactionExecutor:5751] 2019-06-09 03:25:50,592 
CassandraDaemon.java:217 - Exception in thread 
Thread[CompactionExecutor:5751,1,main]
java.lang.NullPointerException: null

ERROR [CompactionExecutor:5753] 2019-06-09 03:26:50,709 
CassandraDaemon.java:217 - Exception in thread 
Thread[CompactionExecutor:5753,1,main]
java.lang.NullPointerException: null

ERROR [CompactionExecutor:5753] 2019-06-09 03:27:50,734 
CassandraDaemon.java:217 - Exception in thread 
Thread[CompactionExecutor:5753,1,main]

We have been seeing this error for some time now, but it shows up on only 
5 out of the 8 nodes.

It does not provide any more details even with debug mode enabled, and I am 
not sure which table the error relates to either.
We run repairs with the -pr option every other day, and they have been 
completing successfully.

nodetool compactionstats shows 0 pending tasks.
pending tasks: 0

We are trying to figure out what this error indicates and whether it's a 
concern that we need to address.

Any other suggestions also will be greatly appreciated.




Re: How can I add blank values instead of null values in cassandra ?

2019-09-10 Thread Swen Moczarski
When using prepared statements, you could use "unset":
https://github.com/datastax/java-driver/blob/4.x/manual/core/statements/prepared/README.md#unset-values


That should solve the tombstone problem but might need code changes.
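As a sketch of the idea (the DataStax Python driver exposes a comparable sentinel as cassandra.query.UNSET_VALUE; the helper and sentinel below are hypothetical, driver-agnostic stand-ins):

```python
# Hypothetical helper: replace None bind values with an "unset" sentinel,
# so the driver skips those columns entirely instead of writing null
# values (which become tombstones) to the server.
UNSET = object()  # stand-in for the driver's real unset sentinel

def without_nulls(params: dict) -> dict:
    """Map None values to the unset sentinel before binding a prepared
    statement; unset columns keep their existing server-side value."""
    return {k: (UNSET if v is None else v) for k, v in params.items()}

row = {"id": 42, "name": "Buchi", "email": None}
bound = without_nulls(row)
# bound["email"] is now the sentinel, so no null (tombstone) is bound.
```

The same pattern applies in the Java driver by leaving the corresponding column unbound (or explicitly unset) on the BoundStatement instead of binding null.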

Regards,
Swen

Am Di., 10. Sept. 2019 um 04:50 Uhr schrieb Nitan Kainth <
nitankai...@gmail.com>:

> You can set default values in the driver, but that also requires a little code change
>
>
> Regards,
>
> Nitan
>
> Cell: 510 449 9629
>
> On Sep 9, 2019, at 8:15 PM, buchi adddagada  wrote:
>
> We are using DSE 5.1.0 & Spring boot Java.
>
> While inserting data into Cassandra, our Java code by default inserts null
> values into the tables, which is causing huge numbers of tombstones.
>
>
> Instead of changing the Java code to avoid inserting null values, can this
> be controlled anywhere at the driver level?
>
>
> Thanks,
>
> Buchi Babu
>
>