Running Cassandra 3.0.7, we have 3 out of 6 nodes that threw an OOM error
when a developer created a secondary index. I'm trying to repair the
cluster. I stopped all nodes, deleted all traces of the table and secondary
index from disk, removed commit logs and saved caches, and restarted the
instance
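A rough sketch of that cleanup on each affected node, assuming the default package install paths and placeholder keyspace/table names (adjust to your own data_file_directories):

    # stop the node
    sudo service cassandra stop
    # remove the on-disk data for the table; its secondary index files live under the same directory
    sudo rm -rf /var/lib/cassandra/data/<keyspace>/<table>-*
    # clear commit logs and saved caches so nothing about the dropped table gets replayed
    sudo rm -rf /var/lib/cassandra/commitlog/* /var/lib/cassandra/saved_caches/*
    # start the node again
    sudo service cassandra start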
Thank you, Alain.
There was no frequent GC nor compaction, so it has been a
mystery; however, once I stopped chef-client (we're managing the cluster
through a chef cookbook), the load eased for almost all of the
servers.
So we're now refactoring our cookbook; in the meantime, we also
decided to re
Thank you very much Paulo
On Aug 5, 2016 17:31, "Paulo Motta" wrote:
> https://issues.apache.org/jira/browse/CASSANDRA-11840
>
> increase streaming_socket_timeout to 8640 or upgrade to
> cassandra-2.1.15.
>
> 2016-08-05 12:28 GMT-03:00 Jean Carlo :
>
>>
>> Hello Paulo,
>>
>> Thx for your fas
https://issues.apache.org/jira/browse/CASSANDRA-11840
increase streaming_socket_timeout to 8640 or upgrade to
cassandra-2.1.15.
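For reference, that timeout is the streaming_socket_timeout_in_ms setting in cassandra.yaml; a minimal sketch, with an illustrative value only (check CASSANDRA-11840 for the exact recommendation):

    # cassandra.yaml
    # time out idle streaming connections instead of letting them hang;
    # should be longer than the longest single-file stream you expect
    streaming_socket_timeout_in_ms: 86400000    # 24 hours, illustrative value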
2016-08-05 12:28 GMT-03:00 Jean Carlo :
>
> Hello Paulo,
>
> Thx for your fast reply.
>
> You are right about that node, I did not see it that fast. In this node w
Hello Paulo,
Thx for your fast reply.
You are right about that node, I did not see it that fast. In this node we
have errors of .SocketTimeoutException: null
ERROR [STREAM-IN-/192.168.0.146] 2016-08-04 19:10:59,456
StreamSession.java:505 - [Stream #06c02460-5a5e-11e6-8e9a-a5bf51981ad8]
Stream
you need to check 192.168.0.36/10.234.86.36 for streaming ERRORS
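Something like the following on that node should surface the root cause, assuming the default log location:

    # streaming-related errors around the time of the failure
    grep -n 'ERROR \[STREAM' /var/log/cassandra/system.log
    # or more broadly, any error mentioning streaming classes
    grep -n 'ERROR' /var/log/cassandra/system.log | grep -i stream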
2016-08-05 12:08 GMT-03:00 Jean Carlo :
> Hi Paulo
>
> I found the lines, we got an exception "Outgoing stream handler has been
> closed" these are
>
> ERROR [STREAM-IN-/10.234.86.36] 2016-08-04 16:55:53,772
> StreamSession.java:621
Btw, I'm not trying to say what you're asking for is a bad idea, or
shouldn't / can't be done. If you're asking for a new feature, you should
file a JIRA with all the details you provided above. Just keep in mind
it'll be a while before it ends up in a stable version. The advice on this
ML will
I think Duy Hai was suggesting Spark Streaming, which gives you the tools
to build exactly what you asked for: a custom compression system that
packs batches of values for a partition into an optimized byte array.
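As a rough illustration of that idea (table and column names are made up), the packed batches could be written back to Cassandra as one blob per partition/bucket:

    -- hypothetical layout: one pre-packed byte array per metric and time bucket
    CREATE TABLE metrics_packed (
        metric  text,
        bucket  timestamp,   -- e.g. one row per hour of raw samples
        payload blob,        -- batch of values packed/compressed by the application
        PRIMARY KEY ((metric, bucket))
    );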
On Fri, Aug 5, 2016 at 7:46 AM Michael Burman wrote:
> Hi,
>
> For storing time
Hi Paulo
I found the lines, we got an exception "Outgoing stream handler has been
closed" these are
ERROR [STREAM-IN-/10.234.86.36] 2016-08-04 16:55:53,772
StreamSession.java:621 - [Stream #c4e79260-5a46-11e6-9993-e11d93fd5b40]
Remote peer 192.168.0.36 failed stream session.
INFO [STREAM-IN-/10.
Hi,
For storing time series data, disk usage is quite a significant factor:
time series applications generate a lot of data (and of course the newest data
is the most important). Given that even DateTiered compaction was designed
keeping in mind the specifics of time series data, wou
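For context, DateTiered compaction is enabled per table; a hedged example with made-up names and options:

    -- example only: a time series table using DateTiered compaction
    CREATE TABLE sensor_data (
        sensor_id text,
        ts        timestamp,
        value     double,
        PRIMARY KEY (sensor_id, ts)
    ) WITH compaction = { 'class': 'DateTieredCompactionStrategy' }
      AND default_time_to_live = 2592000;   -- 30 days, illustrative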
It seems you have a streaming error, look for ERROR statement in the
streaming classes before that which may give you a more specific root
cause. In any case, I'd suggest you to upgrade to 2.1.15 as there were a
couple of streaming fixes on this version that might help.
2016-08-05 11:15 GMT-03:00
Hadoop and Cassandra have very different use cases. If the ability to
write a custom compression system is the primary factor in how you choose
your database I suspect you may run into some trouble.
Jon
On Fri, Aug 5, 2016 at 6:14 AM Michael Burman wrote:
> Hi,
>
> As Spark is an example of so
Hi guys, while doing a repair I got this error for 2 token ranges.
ERROR [Thread-2499244] 2016-08-04 20:05:24,288 StorageService.java:3068 -
Repair session 41e4bab0-5a63-11e6-9993-e11d93fd5b40 for range
(487410372471205090,492009442088088379] failed with error
org.apache.cassandra.exceptions.RepairExcep
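Once the underlying streaming problem is fixed, the failed ranges can be retried individually with nodetool (keyspace and table names are placeholders):

    # re-run repair only for the failed token range
    nodetool repair -st 487410372471205090 -et 492009442088088379 <keyspace> <table>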
Hi,
Spark is an example of something I really don't want: it's resource heavy,
it involves copying data, and it involves managing yet another distributed
system. I would also need another distributed system to schedule the Spark
jobs.
Sounds like a nightmare to implement a compressio
For the record, we've found the issue; it is not related to SASI. The
inconsistencies are due to inconsistent data, and a good repair is needed to put
them back in sync.
Using QUORUM CL grants consistent results when querying.
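For anyone reproducing this, the consistency level can be set per session in cqlsh, for example (keyspace, table and key are placeholders):

    cqlsh> CONSISTENCY QUORUM;
    cqlsh> SELECT * FROM my_keyspace.my_table WHERE id = 42;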
On Fri, Aug 5, 2016 at 1:18 PM, George Webster wrote:
> Thanks DuyHai,
>
> I wo
Thanks DuyHai,
I would agree but we have not performed any delete operations in over a
month. To me this looks like a potential bug or misconfiguration (on my
end) with SASI.
I say this for a few reasons:
1) we have not performed a delete operation since the indexes were created
2) when I perform
Ok, the fact that you see some rows and after a while you see 0 rows means
that those rows are deleted.
Since SASI only indexes INSERT & UPDATE but not DELETE, the management of
tombstones is left to Cassandra to handle.
It means that if you do an INSERT, you'll have an entry in the SASI index
file bu
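A small illustration of that behaviour, with made-up schema names: the DELETE only writes a tombstone and leaves the SASI index file untouched, and the read path then filters the tombstoned row out, so the query returns 0 rows.

    -- hypothetical table with a SASI index
    CREATE TABLE users (id uuid PRIMARY KEY, name text);
    CREATE CUSTOM INDEX users_name_idx ON users (name)
        USING 'org.apache.cassandra.index.sasi.SASIIndex';

    INSERT INTO users (id, name) VALUES (11111111-1111-1111-1111-111111111111, 'alice');
    DELETE FROM users WHERE id = 11111111-1111-1111-1111-111111111111;   -- tombstone only, index untouched
    SELECT * FROM users WHERE name = 'alice';   -- 0 rows: tombstoned row filtered at read time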