Julie nextcentury.com> writes:
Please see my previous post, but is hinted handoff a factor if the CL is set to ALL?
Robert Coli digg.com> writes:
> Check the size of the Hinted Handoff CF? If your nodes are flapping
> under sustained writes, they could be storing a non-trivial number of
> hinted handoff rows? Probably not 5x usage, though.
>
> http://wiki.apache.org/cassandra/Operations
> "
> The reason why yo
Peter Schuller infidyne.com> writes:
> Without necessarily dumping all the information - approximately what
> do they contain? Do they contain anything about compactions,
> anti-compactions, streaming, etc?
>
> With an idle node after taking writes, I *think* the only expected
> disk I/O (once i
which is when I saw the uneven
data distribution (122GB on one of the nodes).
I actually have the log files from all 8 nodes if it helps to diagnose what
activity was going on behind the scenes. I really need to understand how this
happened.
Thanks again for your help,
Julie
a major compaction
> In other words, the problem isn't "temporary disk space occupied during
> the compact", it's permanent disk space occupied unless she compacts.
>
> Julie : when compaction occurs, it logs the number of bytes that it
> started with and the numbe
; seeing. If it is 6x across the whole cluster, it seems unlikely that the
> meta information is 5x the size of the actual information.
>
> Julie : when compaction occurs, it logs the number of bytes that it
> started with and the number it ended with, as well as the number of keys
> i
(haven't read this part of the source) that the min size
> > is being generated in minor compaction, which doesn't see the whole
> > row.
> >
> > -ryan
Thank you so much for this explanation! I understand now.
Julie
row sizes it is working
with?
When I add the timestamp column to each row, I am not deleting the other
(large) column in the row, but I am not rewriting the large column either.
Thanks for your help!
Julie
10.248.34.80 Up 5.59 GB 170141183460469231731687303715884105728
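The token in the ring output above is exactly 2**127, the top of the RandomPartitioner token space, which is what evenly spaced initial tokens produce for the last node. A minimal sketch of that spacing for an 8-node ring (the assignment formula is an assumption about how the ring was configured, not taken from the thread):

```python
# Sketch: evenly spaced initial tokens for an 8-node RandomPartitioner
# ring (token space 0..2**127). The node shown above sits at exactly
# 2**127, consistent with this kind of assignment.
NUM_NODES = 8
RING_SIZE = 2 ** 127

tokens = [(i + 1) * RING_SIZE // NUM_NODES for i in range(NUM_NODES)]
for i, token in enumerate(tokens):
    print(f"node {i + 1}: {token}")
```

With balanced tokens like these, each node owns an equal slice of the ring, which is the baseline against which the 122 GB outlier above looks anomalous.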
Nodetool cleanup works so beautifully that I am wondering if there is any harm
in running "nodetool cleanup" from a cron job on a live system that is actively
processing reads and writes to the database?
Julie
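If cleanup does end up in a cron job, one low-risk pattern is to stagger it so only one node is cleaning at a time. A hypothetical crontab sketch; the path, schedule, and nodetool flag syntax are assumptions (flag spelling varies across Cassandra versions), not from the thread:

```shell
# Hypothetical crontab entry: run cleanup weekly at 3am Sunday; offset
# the day or hour per node so the whole cluster is never cleaning at once.
0 3 * * 0 /opt/cassandra/bin/nodetool -h localhost cleanup >> /var/log/cassandra/cleanup.log 2>&1
```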
e. I use a write consistency
setting of ALL. I can't see how these would increase the amount of disk
space used but just mentioning it.
Any help would be greatly appreciated,
Julie
Peter Schuller infidyne.com> writes:
> > a) cleanup is a superset of compaction, so if you
not sure why cleanup is having this big an effect on my disk space usage?
If you can tell me how to automate this and why it's working, I would love it.
Thanks for your help!
Julie
could be happening?
cfstats reports that space used live is equal to space used total, so I think
the data is truly taking up 106 GB; I just can't explain why.
Space used (live): 113946099884
Space used (total): 113946099884
Thank you for any guidance!
Julie
Jonathan Ellis gmail.com> writes:
> "SSTables that are obsoleted by a compaction are deleted
> asynchronously when the JVM performs a GC. You can force a GC from
> jconsole if necessary, but Cassandra will force one itself if it
> detects that it is low on space. A compaction marker is also added
> I need to set my key cache size properly but am not sure how to set it if each
> key cached is stored in the key cache 2 or 3 times. I'd really appreciate any
> insight into how this works.
> Thanks!
> Julie
>
>
I actually still have this question b
Jonathan Ellis gmail.com> writes:
> On Wed, Jul 7, 2010 at 12:10 PM, Julie nextcentury.com>
wrote:
> >
> > This doesn't explain why 30 GB of data is taking up 106 GB of disk 24 hours
> > after all writes have completed. Compactions should be complete, no?
>
each row exactly once. I'm using batch_mutate() but in this
particular case my batch size is 1. I do not do any deletes and I am not
overwriting any rows since I write each key (1-1,000,000) only once. So I don't
think I should have any tombstones, unless there's something going on that I
don't know about.
Thanks for your help,
Julie
GB. My rows only have 1 column so there should only be one timestamp. My
column name is only 10 bytes long.
This doesn't explain why 30 GB of data is taking up 106 GB of disk 24 hours
after all writes have completed. Compactions should be complete, no?
Julie
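For scale: per-column metadata in an SSTable (a short name plus a timestamp and length fields) is tiny relative to a large value, so it cannot account for a 30 GB to 106 GB blow-up. A back-of-the-envelope sketch; the ~30 KB value size and the exact per-column framing are assumptions, not from the thread:

```python
# Back-of-the-envelope: does per-column metadata explain 30 GB -> 106 GB?
NUM_ROWS = 1_000_000
VALUE_BYTES = 30 * 1024      # assumed ~30 KB per row (30 GB / 1M rows)
NAME_BYTES = 10              # column name length from the thread
# Assumed framing: name length (2) + name + timestamp (8) + value length (4).
OVERHEAD_BYTES = 2 + NAME_BYTES + 8 + 4

raw_total = NUM_ROWS * VALUE_BYTES
framed_total = NUM_ROWS * (VALUE_BYTES + OVERHEAD_BYTES)
ratio = framed_total / raw_total
print(f"overhead ratio: {ratio:.4f}")  # well under 1.01, nowhere near 3.5x
```

Under any reasonable framing assumption the metadata ratio stays near 1.0, which points at obsolete SSTables awaiting deletion rather than per-column overhead.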
java (line 246) Compacting []
INFO [COMPACTION-POOL:1] 2010-07-07 04:35:16,383 CompactionManager.java (line 246) Compacting []
Thank you for your help!!
Julie
I need to set my key cache size properly but am not sure how to set it if each
key cached is stored in the key cache 2 or 3 times. I'd really appreciate any
insight into how this works.
Thanks!
Julie
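For reference, in the 0.6 series the per-column-family key cache is configured in storage-conf.xml via the KeysCached attribute (an absolute count or a percentage). A hypothetical fragment; the CF name and value here are placeholders, not from the thread:

```xml
<!-- storage-conf.xml fragment (hypothetical CF name and value) -->
<ColumnFamily Name="Standard1"
              CompareWith="BytesType"
              KeysCached="200000"/>
```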
Gary Dusbabek gmail.com> writes:
>
> *Hopefully* fixed. I was never able to duplicate the problem on my
> workstation, but I had a pretty good idea what was causing the
> problem. Julie, if you're in a position to apply and test the fix, it
> would help us make
e 556
sun.nio.ch.FileChannelImpl.transferFrom() – line 603
org.apache.cassandra.streaming.IncomingStreamReader.read() – line 62
org.apache.cassandra.net.IncomingTcpConnection.run() – line 66
This same 3-line loop in IncomingStreamReader was also getting stuck last
night with 0.6.1, so whatever it is, it is still happening in 0.6.2.
Thanks for your help,
Julie
f 10 servers are still at 40% CPU usage, although they
are doing 0 disk IO. I am not running anything else on these server
nodes except for cassandra. The compactions have been done for over an hour.
The last write took place 5 hours ago.
Thank you for any help,
Julie
Benjamin Black b3k.us> writes:
>
> You are likely exhausting your heap space (probably still at the very
> small 1G default?), and maximizing the amount of resource consumption
> by using CL.ALL. Why are you using ALL?
>
> On Tue, Jun 15, 2010 at 11:58 AM, Julie ne
, the CPU usage is staying around 40%.
Thank you for your help and advice,
Julie
li wei yahoo.com> writes:
>
> Thank you very much, Per!
>
> - Original Message
> From: Per Olesen trifork.com>
> To: "user cassandra.apache.org" cassandra.apache.org>
> Sent: Wed, June 9, 2010 4:02:52 PM
> Subject: Re: Quick help on Cassandra please: cluster access and performance
seem like a Cassandra bug, or is it well known that Cassandra always
needs more than 1GB of heap space?
Thanks again for your help,
Julie
li wei yahoo.com> writes:
>
> Thank you very much, Per!
>
> - Original Message
> From: Per Olesen trifork.com>
> To: "user cassandra.apache.org" cassandra.apache.org>
> Sent: Wed, June 9, 2010 4:02:52 PM
> Subject: Re: Quick help on Cassandra please: cluster access and performance
00 rows at once in each of 8 clients, until
each client has written 100,000 rows. All of my cassandra servers are started
up with 1GB of heap space: /usr/bin/java -ea -Xms128M -Xmx1G …
Thank you for your help!
Julie
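The flags shown above come from bin/cassandra.in.sh in the 0.6 layout, which is where the heap limit would be raised if 1 GB proves too small. A sketch; the 2G figure is an arbitrary example, not a recommendation from the thread, and the real JVM_OPTS line carries additional GC flags omitted here:

```shell
# bin/cassandra.in.sh (0.6 layout, abridged) -- raise max heap to 2G (example)
JVM_OPTS="-ea -Xms256M -Xmx2G"
```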