Re: Is updating the compaction strategy from 'size tiered' to 'leveled' automatic, or does it need to be done manually?

2014-05-04 Thread Yatong Zhang
What do you mean by 'you need to write to this CF'? I've changed the schema by using CQL3 'alter table' statements. On Mon, May 5, 2014 at 2:28 PM, Viktor Jevdokimov < viktor.jevdoki...@adform.com> wrote: > To trigger LCS you need to write to this CF and wait until a new sstable > flushes. I can't find any
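
For reference, a minimal sketch of the statement being described, with hypothetical keyspace and table names (the thread does not show the actual schema):

    # switch an existing table to leveled compaction via CQL3
    cqlsh -e "ALTER TABLE mykeyspace.mytable
              WITH compaction = { 'class' : 'LeveledCompactionStrategy',
                                  'sstable_size_in_mb' : 160 };"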

RE: Is updating the compaction strategy from 'size tiered' to 'leveled' automatic, or does it need to be done manually?

2014-05-04 Thread Viktor Jevdokimov
To trigger LCS you need to write to this CF and wait until a new sstable flushes. I can't find any other way to start LCS. Best regards / Pagarbiai Viktor Jevdokimov Senior Developer Email: viktor.jevdoki...@adform.com Phone: +370 5 212 3063, Fax +370 5 261 0453
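
A hedged sketch of that trigger, reusing the hypothetical table from above; the write populates a memtable, and flushing forces it to disk so a new sstable appears under the new strategy:

    cqlsh -e "INSERT INTO mykeyspace.mytable (id, val) VALUES (1, 'x');"
    nodetool flush mykeyspace mytable   # flush the memtable to a new sstable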

Is there a way to stop cassandra compacting some large sstables?

2014-05-04 Thread Yatong Zhang
Hi, I changed the compaction strategy from 'size tiered' to 'leveled' but after running a few days C* still tries to compact some old large sstables, say:
1. I have 6 disks per node and 6 data directories per disk
2. There are some old huge sstables generated when using 'size tiered' compaction s
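
If the immediate goal is only to keep those large compactions from monopolizing the disks, nodetool offers two levers (the throughput value below is illustrative):

    nodetool stop COMPACTION            # cancel compactions currently in flight
    nodetool setcompactionthroughput 1  # throttle future compactions to ~1 MB/s
                                        # (0 means unthrottled, not disabled)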

Re: Cassandra 2.0.7 always fails due to 'too many open files' error

2014-05-04 Thread Yatong Zhang
I was running 'repair' when the error occurred. And just a few days before, I had changed the compaction strategy to 'leveled'. Don't know if this helps. On Mon, May 5, 2014 at 1:10 PM, Yatong Zhang wrote: > Cassandra is running as root > > [root@storage5 ~]# ps aux | grep java >> root 1893 42.0 24

Re: Cassandra 2.0.7 always fails due to 'too many open files' error

2014-05-04 Thread Yatong Zhang
Cassandra is running as root:

[root@storage5 ~]# ps aux | grep java
> root 1893 42.0 24.0 7630664 3904000 ? Sl 10:43 60:01 java -ea
> -javaagent:/mydb/cassandra/bin/../lib/jamm-0.2.5.jar
> -XX:+CMSClassUnloadingEnabled -XX:+UseThreadPriorities
> -XX:ThreadPriorityPolicy=42 -Xms3959M -Xm

Re: Cassandra 2.0.7 always fails due to 'too many open files' error

2014-05-04 Thread Philip Persad
Have you tried running "ulimit -a" as the Cassandra user instead of as root? It is possible that you configured a high file limit for root but not for the user running the Cassandra process. On Sun, May 4, 2014 at 6:07 PM, Yatong Zhang wrote: > [root@storage5 ~]# lsof -n | grep java | wc -l >>
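
A quick way to check the limits that actually apply to the running process rather than to the current shell; the pgrep pattern and the 'cassandra' user name are assumptions to adjust for the local setup:

    # limits of the live Cassandra process itself
    grep 'open files' /proc/$(pgrep -f CassandraDaemon)/limits
    # limits a fresh session for the service user would get
    su - cassandra -s /bin/bash -c 'ulimit -n'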

Re: Cassandra 2.0.7 always fails due to 'too many open files' error

2014-05-04 Thread Yatong Zhang
> [root@storage5 ~]# lsof -n | grep java | wc -l
> 5103
> [root@storage5 ~]# lsof | wc -l
> 6567

It's mentioned in the previous mail :) On Mon, May 5, 2014 at 9:03 AM, nash wrote: > The lsof command or /proc can tell you how many open files it has. How > many is it? > --nash

Re: Cassandra 2.0.7 always fails due to 'too many open files' error

2014-05-04 Thread nash
The lsof command or /proc can tell you how many open files it has. How many is it? --nash
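
For a per-process count, /proc is more precise than grepping lsof output, since 'grep java' matches any Java process and lsof rows include memory-mapped files, not just descriptors; the pgrep pattern is an assumption:

    ls /proc/$(pgrep -f CassandraDaemon)/fd | wc -l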

Cassandra 2.0.7 always fails due to 'too many open files' error

2014-05-04 Thread Yatong Zhang
Hi there, after I changed the compaction strategy to 'leveled', one of my nodes keeps reporting 'too many open files'. But I have done some configuration following http://www.datastax.com/docs/1.1/install/recommended_settings and http://www.datastax.com/docs/1.1/troubleshooting/index#toomany I am using
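
The settings those pages describe come down to raising the file-descriptor limits in /etc/security/limits.conf, roughly as below (values are illustrative rather than quoted from the docs; leveled compaction creates many small sstables, so the limit often needs to be far above the default 1024):

    # /etc/security/limits.conf -- illustrative values
    root soft nofile 100000
    root hard nofile 100000
    *    soft nofile 100000
    *    hard nofile 100000

The process must be restarted from a fresh login session before new limits take effect.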

Re: Cassandra vs Elasticsearch.

2014-05-04 Thread Tim Uckun
I have been doing some research on how ES persists to disk and it seems to be pretty robust. Basically, what happens when you write a document is that the document gets written to a log file on the local disk and also to multiple shards. This happens synchronously and is tunable.

Re: Cassandra vs Elasticsearch.

2014-05-04 Thread Jack Krupansky
That's a key advantage of DataStax Enterprise: Solr is fully integrated into the Cassandra cluster, so there is only a single infrastructure. -- Jack Krupansky From: Tim Uckun Sent: Sunday, May 4, 2014 6:36 AM To: user@cassandra.apache.org Subject: Re: Cassandra vs Elasticsearch. I am hesit

Re: Cassandra vs Elasticsearch.

2014-05-04 Thread Tim Uckun
I am hesitant about keeping both a Cassandra and an ES cluster because it effectively doubles my infrastructure costs. It may be much cheaper to keep the data in log files and have ES index them for searching. Thanks for the input everybody, there is much to think about here. On Sat, May 3, 20

Re: Cassandra 2.0.7 keeps reporting errors due to no space left on device

2014-05-04 Thread Yatong Zhang
I am using the latest 2.0.7. 'nodetool tpstats' shows:

[root@storage5 bin]# ./nodetool tpstats
> Pool Name                  Active  Pending  Completed  Blocked  All time blocked
> ReadStage                       0        0     628220         0                0
> RequestResponseSt

Re: Cassandra 2.0.7 keeps reporting errors due to no space left on device

2014-05-04 Thread DuyHai Doan
The symptoms look like pending compactions are stacking up, or compactions have failed, so temporary files (-tmp-Data.db) are not properly cleaned up. What is your Cassandra version? Can you do a "nodetool tpstats" and look into the Cassandra logs to see whether there are issues with compactions?
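
Two commands that cover exactly this check:

    nodetool compactionstats   # pending task count plus any compactions in flight
    nodetool tpstats           # per-pool active/pending/blocked counters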

Re: Cassandra 2.0.7 keeps reporting errors due to no space left on device

2014-05-04 Thread Yatong Zhang
Yes, after a while the disk fills up again. So I changed the compaction strategy from 'size tiered' to 'leveled' to reduce the disk usage during compaction, but the problem still occurs. This table gets lots of writes, relatively few reads, and no updates. Here is the schema of the table: CRE

Re: Cassandra 2.0.7 keeps reporting errors due to no space left on device

2014-05-04 Thread DuyHai Doan
And after a while the /data6 drive fills up again, right? One question: can you please give the CQL3 definition of your "mydb-images-tmp" table? What is the access pattern for this table? Lots of writes? Lots of updates? On Sun, May 4, 2014 at 10:00 AM, Yatong Zhang wrote: > after restar

Re: Cassandra 2.0.7 keeps reporting errors due to no space left on device

2014-05-04 Thread Yatong Zhang
After restarting or running 'cleanup' the big tmp file is gone and all looks fine:

> -rw-r--r-- 1 root root  19K Apr 30 13:58 mydb_oe-images-tmp-jb-96242-CompressionInfo.db
> -rw-r--r-- 1 root root 145M Apr 30 13:58 mydb_oe-images-tmp-jb-96242-Data.db
> -rw-r--r-- 1 root root  64K Apr 30 13:58 m
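
To spot leftover temporary sstables across the data directories without restarting, something like the following (the path is a placeholder for the node's configured data directories; -tmp- files belonging to a compaction that is still running must not be deleted while the node is up):

    find /mydb/data*/ -name '*-tmp-*' -exec ls -lh {} +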