Generally speaking, you don't need to. I almost never do. I've only set
it in situations where I've had a large number of tables and I want to
avoid a lot of flushing when commit log segments are removed.
Setting it to 128 milliseconds means it's flushing roughly 8 times per second,
which gives no benefit.
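For reference, memtable_flush_period_in_ms is a per-table option, so it can be changed with an ALTER TABLE; setting it to 0 disables periodic flushing and leaves flushes to be driven by memtable size and commit log pressure. A sketch (the keyspace and table names are placeholders):

```
ALTER TABLE my_keyspace.my_table
WITH memtable_flush_period_in_ms = 0;  -- 0 = flush only when needed
```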
No, it did solve the problem, as Faraz mentioned, but I am still not sure
about what the underlying cause is. Is 0ms really correct? How do we set up a
flush period?
- Affan
On Thu, Mar 15, 2018 at 10:00 PM, Jon Haddad wrote:
> TWCS does SizeTieredCompaction within the window, so it’s not likely to
> make a difference.
TWCS does SizeTieredCompaction within the window, so it’s not likely to make a
difference. I’m +1’ing what Jeff said: a 128ms memtable_flush_period_in_ms is
almost certainly your problem, unless you’ve changed other settings and haven’t
told us about them.
> On Mar 15, 2018, at 9:54 AM, Affan
Jeff,
I think, additionally, the reason might also be that the keyspace was using
TimeWindowCompactionStrategy with a 1 day bucket; however, the writes were
quite rapid and no automatic compaction was running.
I would think changing the strategy to SizeTiered would also solve this problem?
- Affan
The problem was likely more with the fact that it can’t flush in 128ms, so you
back up on flushes.
--
Jeff Jirsa
> On Mar 14, 2018, at 12:07 PM, Faraz Mateen wrote:
>
> I was able to overcome the timeout error by setting
> memtable_flush_period_in_ms to 0 for all my tables. Initially it was set to
> 128.
I was able to overcome the timeout error by setting
memtable_flush_period_in_ms to 0 for all my tables. Initially it was set to
128.
Now I am able to write ~4 records/min to cassandra and the script has been
running for around 12 hours now.
However, I am still curious why cassandra was unable to handle the write load
before.
Thanks for the response.
Here is the output of "DESCRIBE" on my table
https://gist.github.com/farazmateen/1c88f6ae4fb0b9f1619a2a1b28ae58c4
I am getting two errors from the python script that I mentioned above. The
first one does not show any error or exception in the server logs. Second error:
*"cassan
The following won't address any server performance issues, but will allow
your application to continue to run even if there are client or server
timeouts:
Your python code should wrap all Cassandra statement execution calls in a
try/except block to catch any errors and handle them appropriately.
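A minimal sketch of that wrapping, assuming the DataStax python driver's session.execute(); the function name, retry counts, and catch-all exception tuple here are illustrative, not the application's actual code:

```python
# Wrap statement execution in try/except and retry transient failures
# with exponential backoff before giving up.
import time

def execute_with_retry(session, statement, params=None,
                       retries=3, backoff_s=0.5,
                       retriable=(Exception,)):
    """Run session.execute(), retrying transient failures with backoff."""
    for attempt in range(retries + 1):
        try:
            return session.execute(statement, params)
        except retriable:
            if attempt == retries:
                raise  # out of retries; surface the error to the caller
            # exponential backoff before the next attempt
            time.sleep(backoff_s * (2 ** attempt))
```

With the real driver you would likely pass the driver's timeout exceptions (e.g. cassandra.OperationTimedOut, cassandra.WriteTimeout) as `retriable` rather than the catch-all shown here.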
Faraz,
Can you share a code snippet of how you are trying to save the entity
objects into cassandra?
Thanks and Regards,
Goutham Reddy Aenugu
On Tue, Mar 13, 2018 at 3:42 PM, Faraz Mateen wrote:
> Hi everyone,
>
> I seem to have hit a problem in which writing to cassandra through a python
> script fails and also occasionally causes a cassandra node to crash.
What does your schema look like? Are you seeing any warnings or errors in the
server log?
Dinesh
On Tuesday, March 13, 2018, 3:43:33 PM PDT, Faraz Mateen
wrote:
Hi everyone,
I seem to have hit a problem in which writing to cassandra through a python
script fails and also occasionally causes a cassandra node to crash. Here are
the details of my problem.
I have a python based streaming application that reads data from kafka at a
high rate and pushes it to cassandra.
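For a high-rate writer like this, one common client-side mitigation (independent of the server-side tuning discussed in this thread) is to bound the number of in-flight requests so the producer blocks instead of timing out. A minimal sketch; `execute_async()` and `add_callbacks()` mirror the DataStax driver's API shape, but the class and limits are hypothetical, not the actual application code:

```python
# Cap concurrent in-flight writes with a semaphore so a fast producer
# (e.g. a kafka consumer loop) can't overwhelm the cluster.
import threading

class BoundedWriter:
    def __init__(self, session, max_in_flight=64):
        self.session = session
        self.slots = threading.Semaphore(max_in_flight)

    def write(self, statement, params=None):
        self.slots.acquire()  # blocks when too many writes are pending
        future = self.session.execute_async(statement, params)
        # release the slot whether the write succeeds or fails
        future.add_callbacks(lambda _: self.slots.release(),
                             lambda _: self.slots.release())
        return future
```

The right cap depends on cluster size and write size; the point is that the kafka consumer slows down under load rather than piling up timed-out requests.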