Hi,
I found many similar lines in the log:
INFO [SlabPoolCleaner] 2015-02-24 12:28:19,557 ColumnFamilyStore.java:850
- Enqueuing flush of customer_events: 95299485 (5%) on-heap, 0 (0%) off-heap
INFO [MemtableFlushWriter:1465] 2015-02-24 12:28:19,569 Memtable.java:339
- Writing Memtable-customer_eve
Hi, Ron
I looked deeper into my Cassandra files, and the SSTables created during the
last day are less than 20MB.
Piotrek
P.S. Your tips are really useful; at least I am starting to find where
exactly the problem is.
On Thu, Feb 26, 2015 at 3:11 PM, Ja Sam wrote:
> We did this query, most our files
We ran this query; most of our files are less than 100MB.
Our heap settings are as follows (they are calculated using the script in
cassandra-env.sh):
MAX_HEAP_SIZE="8GB"
HEAP_NEWSIZE="2GB"
which is the maximum recommended by DataStax.
What values do you think we should try?
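For reference, the 8GB/2GB pair above matches what the cassandra-env.sh script computes for a 64GB box. The sketch below reproduces that sizing formula from memory (Cassandra 2.1 era); verify the exact expressions against your own cassandra-env.sh before relying on it:

```python
# Sketch of the heap-sizing formula from cassandra-env.sh (Cassandra 2.1),
# reproduced from memory -- check your actual cassandra-env.sh for the
# authoritative version.

def max_heap_mb(system_memory_mb):
    # max(min(1/2 RAM, 1024 MB), min(1/4 RAM, 8192 MB))
    half = min(system_memory_mb // 2, 1024)
    quarter = min(system_memory_mb // 4, 8192)
    return max(half, quarter)

def heap_newsize_mb(max_heap, cpu_cores):
    # min(100 MB per core, 1/4 of the max heap)
    return min(100 * cpu_cores, max_heap // 4)

if __name__ == "__main__":
    ram_mb = 64 * 1024           # the 64GB boxes from this thread
    cores = 32                   # two 16-core CPUs (HT threads not counted)
    heap = max_heap_mb(ram_mb)
    new = heap_newsize_mb(heap, cores)
    print(heap, new)             # 8192 2048 -> the 8GB / 2GB above
```

Since the formula caps out at 8GB regardless of how much RAM the box has, going higher than the computed values means overriding the script by hand.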
On Thu, Feb 26, 2015 at 10:06 AM, Rol
Hi,
One more thing: the Hinted Handoff count for the last week on all nodes was
less than 5.
For me every READ is a problem because it must open too many files (3
SSTables), which manifests as errors in reads, repairs, etc.
Regards
Piotrek
On Wed, Feb 25, 2015 at 8:32 PM, Ja Sam wrote:
> Hi,
> It
>
> On Wed, Feb 25, 2015 at 11:01 AM, Ja Sam wrote:
>
>> Hi Roni,
>> The repair results are as follows (we ran it on Friday): Cannot proceed on
>> repair because a
ere:
https://drive.google.com/file/d/0B4N_AbBPGGwLc25nU0lnY3Z5NDA/view
On Wed, Feb 25, 2015 at 7:50 PM, Roni Balthazar
wrote:
> Hi Piotr,
>
> Are your repairs finishing without errors?
>
> Regards,
>
> Roni Balthazar
>
> On 25 February 2015 at 15:43, Ja Sam wrote:
C nodes as well?
> You can check the pending compactions on each node.
>
> Also try to run "nodetool getcompactionthroughput" on all nodes and
> check if the compaction throughput is set to 999.
>
> Cheers,
>
> Roni Balthazar
>
> On 25 February 2015 at 14:47, Ja
SSTables and pending
compactions are decreasing to zero.
In AGRAF the minimum pending compactions is 2,500 and the maximum is 6,000
(the average on the OpsCenter screen is less than 5,000).
Regards
Piotrek.
P.S. I don't know why my mail client displays my name as Ja Sam instead of
Piotr Stapp, but this doesn
I do NOT have SSDs. I have normal HDDs grouped as JBOD.
My CFs use SizeTieredCompactionStrategy.
I am using local quorum for reads and writes. To be precise, I have a lot of
writes and almost 0 reads.
I changed "cold_reads_to_omit" to 0.0 as someone suggested. I set
compactionthroughput to 999.
So i
tency.
>
> Regards,
>
> Carlos Juzarte Rolo
> Cassandra Consultant
>
> Pythian - Love your data
>
> rolo@pythian | Twitter: cjrolo | Linkedin: *linkedin.com/in/carlosjuzarterolo
> <http://linkedin.com/in/carlosjuzarterolo>*
> Tel: 1649
> www.pythian.com
>
Hi,
I wrote some questions before about my problems with my C* cluster. My whole
environment is described here:
https://www.mail-archive.com/user@cassandra.apache.org/msg40982.html
To sum up: I have thousands of SSTables in one DC and far fewer in the
second. I write only to the first DC.
Anyway after reading
The repair results are as follows (we ran it on Friday): Cannot proceed on
repair because a neighbor (/192.168.61.201) is dead: session failed
But to be honest, the neighbor did not die. The repair seemed to trigger a
series of full GC events on the initiating node. The results from the logs are:
[2015-02-20 16:4
18, 2015 at 11:58 AM, Roni Balthazar
> wrote:
>
>> Try repair -pr on all nodes.
>>
>> If after that you still have issues, you can try to rebuild the SSTables
>> using nodetool upgradesstables or scrub.
>>
>> Regards,
>>
>> Roni Balthazar
>>
ow what errors are you getting when running repairs.
>
> Regards,
>
> Roni Balthazar
>
>
> On Wed, Feb 18, 2015 at 1:31 PM, Ja Sam wrote:
>
>> Can you explain me what is the correlation between growing SSTables and
>> repair?
>> I was sure, until your mail, that r
tions must decrease as well...
>
> Cheers,
>
> Roni Balthazar
>
>
>
>
> On Wed, Feb 18, 2015 at 12:39 PM, Ja Sam wrote:
> > 1) We tried to run repairs but they usually do not succeed. But we had
> > Leveled compaction before. Last week we ALTERed the tables to ST
>
> Cheers,
>
> Roni Balthazar
>
> On Wed, Feb 18, 2015 at 11:07 AM, Ja Sam wrote:
> > I don't have problems with DC_B (the replica); only in DC_A (my system
> > writes only to it) do I have read timeouts.
> >
> > I checked in OpsCenter SSTable count an
or dropped messages; maybe you will
> need to tune your system (e.g. driver timeouts, concurrent reads and
> so on)
>
> Regards,
>
> Roni Balthazar
>
> On Wed, Feb 18, 2015 at 9:51 AM, Ja Sam wrote:
> > Hi,
> > Thanks for your "tip"; it looks like somethi
f SSTables decreased from many thousands to a number below
> a hundred and the SSTables are now much bigger with several gigabytes
> (most of them).
>
> Cheers,
>
> Roni Balthazar
>
>
>
> On Tue, Feb 17, 2015 at 11:32 AM, Ja Sam wrote:
> > After some diagnos
ng but:
1) In DC_A the avg size of a Data.db file is ~13 MB. I have a few really big
ones, but most are really small (almost all files are less than 100 MB).
2) In DC_B the avg size of a Data.db file is much bigger, ~260 MB.
Do you think the above flag will help us?
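A quick way to reproduce the per-DC averages above is to scan the data directory for SSTable data files. This is a hypothetical helper (the directory path and naming pattern are assumptions; point it at your actual data directory and adjust the suffix if your SSTable naming differs):

```python
# Hypothetical helper: average size (in MB) of the *-Data.db SSTable files
# under a Cassandra data directory. Run it on each node to compare DCs.
import os

def avg_data_db_mb(data_dir):
    sizes = [
        os.path.getsize(os.path.join(root, name))
        for root, _dirs, names in os.walk(data_dir)
        for name in names
        if name.endswith("-Data.db")   # assumed SSTable data-file suffix
    ]
    if not sizes:
        return 0.0
    return sum(sizes) / len(sizes) / (1024 * 1024)
```

For example, `avg_data_db_mb("/var/lib/cassandra/data")` (path is an assumption) should land near the ~13 MB / ~260 MB figures quoted above.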
On Tue, Feb 17, 2015 at 9:
ol:
> 0 0 * * * root nodetool -h `hostname` setcompactionthroughput 999
> 0 6 * * * root nodetool -h `hostname` setcompactionthroughput 16
>
> Cheers,
>
> Roni Balthazar
>
> On Mon, Feb 16, 2015 at 7:47 PM, Ja Sam wrote:
> > One think I do not understand. In my case compaction i
One thing I do not understand: in my case compaction is running
permanently. Is there a way to check which compactions are pending? The only
information is about the total count.
On Monday, February 16, 2015, Ja Sam wrote:
> Of couse I made a mistake. I am using 2.1.2. Anyway night build
Of course I made a mistake. I am using 2.1.2. Anyway, a nightly build is
available from
http://cassci.datastax.com/job/cassandra-2.1/
I read about cold_reads_to_omit; it looks promising. Should I also set the
compaction throughput?
p.s. I am really sad that I didn't read this before:
https://engineering.eve
*Environment*
1) Actual Cassandra 2.1.3, it was upgraded from 2.1.0 (suggested by Al
Tobey from DataStax)
2) not using vnodes
3) Two data centres: 5 nodes in one DC (DC_A), 4 nodes in the second DC (DC_B)
4) each node is set up on a physical box with two 16-core HT Xeon
processors (E5-2660), 64GB RAM an
> concurrency and batch size of a single query against one node.
> Basically, what you/driver should do is to transform the query to series
> of "SELECT * FROM TABLE WHERE TOKEN IN (start, stop)".
>
> I will need to look up the actual code, but the idea should be clear :)
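The token-range idea quoted above boils down to splitting the partitioner's full token range into contiguous sub-ranges and scanning each in its own worker. A minimal sketch of the splitting step, assuming the Murmur3 partitioner (the query shape in the comment and the worker count are illustrative, not the poster's actual code):

```python
# Sketch of the token-range split for a parallel full-table scan, assuming
# Murmur3Partitioner. Each (start, end] pair would back a query like
#   SELECT * FROM t WHERE token(pk) > ? AND token(pk) <= ?
# run by its own worker (driver/cluster wiring omitted).

MIN_TOKEN = -2**63        # Murmur3Partitioner token range bounds
MAX_TOKEN = 2**63 - 1

def split_token_range(n):
    """Return n contiguous (start, end] sub-ranges covering the full range."""
    total = MAX_TOKEN - MIN_TOKEN
    step = total // n
    ranges = []
    start = MIN_TOKEN
    for i in range(n):
        # Force the last range to end exactly at MAX_TOKEN so integer
        # division rounding never drops the tail of the ring.
        end = MAX_TOKEN if i == n - 1 else start + step
        ranges.append((start, end))
        start = end
    return ranges
```

With vnodes disabled (as in this cluster), aligning the splits with the nodes' actual token ownership would additionally keep each sub-scan local to one replica.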
Is there a simple way (or even a complicated one) to speed up a SELECT
* FROM [table] query?
I need to get all rows from one table every day. I split the tables, creating
one for each day, but the query is still quite slow (200 million records).
I was thinking about running this query in parallel, b
ressure on your
> existing nodes. Either way you should get caught up on compaction before
> you can safely add new nodes again.
>
> If you grow unsafely, you are effectively electing to discard data. Some
> of it may be recoverable with a nodetool repair after you're caught up on
Ad 4) I definitely have a big problem, because of the pending tasks: 3094.
The question is: what should I change/monitor? I can present my whole
solution design if it helps.
On Mon, Jan 12, 2015 at 8:32 PM, Ja Sam wrote:
> To precise your remarks:
>
> 1) About 30 sec GC. I know that after time m
're not behind on
> compaction, your sstable_size_in_mb might be a bad value for your use case.
>
> On Mon, Jan 12, 2015 at 7:35 AM, Ja Sam wrote:
>
>> *Environment*
>>
>>
>>- Cassandra 2.1.0
>>- 5 nodes in one DC (DC_A), 4 nodes in second
*Environment*
- Cassandra 2.1.0
- 5 nodes in one DC (DC_A), 4 nodes in second DC (DC_B)
- 2,500 writes per second; I write only to DC_A with local_quorum
- minimal reads (usually none, sometimes a few)
*Problem*
After a few weeks of running, I cannot read any data from my cluster,
beca