The disk full bug is fixed in the -final artifacts and the 1.0.0 svn branch.
On Wed, Oct 12, 2011 at 10:16 AM, Günter Ladwig wrote:
> Hi,
>
> I tried running cfstats on other nodes. It works on all except two nodes.
> Then I tried scrubbing the OSP CF on one of the nodes where it fails
> (actually the node where the first exception I reported happened), but got
> this exception in the log: [...]
Hi,
I tried running cfstats on other nodes. It works on all except two nodes. Then
I tried scrubbing the OSP CF on one of the nodes where it fails (actually the
node where the first exception I reported happened), but got this exception in
the log:
[...]
INFO 14:58:00,604 Scrub of SSTableReader [...]
Try scrubbing the CF ("nodetool scrub") and see if that fixes it.
If not, then at least we have a reproducible problem. :)
On Tue, Oct 11, 2011 at 4:43 PM, Günter Ladwig wrote:
> Hi all,
>
> I'm seeing the same problem on my 1.0.0-rc2 cluster. However, I do not have
> 5000, but just three (compressed) CFs.
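Jonathan's suggestion above is the nodetool scrub command, which rebuilds a
column family's SSTables and skips rows it cannot read. A minimal sketch of the
kind of invocation Günter describes, using the keyspace and CF names that appear
in this thread (host and port are placeholders, and exact arguments can vary
between nodetool versions):

  # scrub only the OSP column family in KeyspaceCumulus (names taken from this thread);
  # -h/-p point nodetool at the JMX interface of the affected node (placeholders here)
  nodetool -h localhost -p 7199 scrub KeyspaceCumulus OSP

  # with no keyspace/CF arguments, scrub runs over everything on the node
  nodetool -h localhost -p 7199 scrub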
Yes, all three use SnappyCompressor.
On 12.10.2011, at 02:58, Jonathan Ellis wrote:
> Are all 3 CFs using compression?
>
> On Tue, Oct 11, 2011 at 4:43 PM, Günter Ladwig wrote:
>> Hi all,
>>
>> I'm seeing the same problem on my 1.0.0-rc2 cluster. However, I do not have
>> 5000, but just three (compressed) CFs.
Are all 3 CFs using compression?
On Tue, Oct 11, 2011 at 4:43 PM, Günter Ladwig wrote:
> Hi all,
>
> I'm seeing the same problem on my 1.0.0-rc2 cluster. However, I do not have
> 5000, but just three (compressed) CFs.
>
> The exception does not happen for the Migrations CF, but for one of mine:
Hi all,
I'm seeing the same problem on my 1.0.0-rc2 cluster. However, I do not have
5000, but just three (compressed) CFs.
The exception does not happen for the Migrations CF, but for one of mine:
Keyspace: KeyspaceCumulus
Read Count: 816
Read Latency: 8.926029411764706 ms.
I don't have access to the test system anymore. We did move to a lower
number of CFs and don't see this problem anymore.
I remember when I noticed the size in system.log it was a little more
than UINT_MAX (4294967295). I was able to recreate it multiple times.
So I am wondering if there are any stats c
That row has a size of 819 petabytes, so something is odd there. The error is
a result of that value being so huge. When you ran the same script on 0.8.6,
what was the max size of the Migrations CF?
As Jonathan says, it's unlikely anyone would have tested creating 5000 CFs.
Most people only cr
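As an illustrative sanity check on the numbers in this thread (not something
anyone posted): the "Compacted row maximum size" of 9223372036854775807 quoted
in Ramesh's original report further down is exactly 2^63 - 1, i.e. Java's
Long.MAX_VALUE, and 4294967295 is 2^32 - 1, the UINT_MAX threshold Ramesh
mentions. In a bash shell:

  # 0x7FFFFFFFFFFFFFFF == 2^63 - 1 == Long.MAX_VALUE, the value cfstats reports
  echo $(( 0x7FFFFFFFFFFFFFFF ))   # prints 9223372036854775807
  # 0xFFFFFFFF == 2^32 - 1 == UINT_MAX, the size Ramesh saw exceeded in system.log
  echo $(( 0xFFFFFFFF ))           # prints 4294967295

A size statistic sitting exactly at Long.MAX_VALUE usually suggests an
uninitialised or overflowed counter rather than a genuinely exabyte-scale row,
which is consistent with the point above that something is odd with that value.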
My suspicion would be that it has more to do with "rare case when
running with 5000 CFs" than "1.0 regression."
On Mon, Oct 3, 2011 at 5:00 PM, Ramesh Natarajan wrote:
> We have about 5000 column families, and when we run nodetool cfstats it
> throws this exception... this is running 1.0.0-rc1
We recreated the schema using the same input file on both clusters, and they
are running an identical load.
Isn't the exception thrown in the system CF?
this line looks strange:
Compacted row maximum size: 9223372036854775807
thanks
Ramesh
On Mon, Oct 3, 2011 at 5:26 PM, Jonathan Ellis wrote:
>
It happens all the time on 1.0. It doesn't happen on 0.8.6. Is there anything
I can do to check?
thanks
Ramesh
On Mon, Oct 3, 2011 at 5:15 PM, Jonathan Ellis wrote:
> My suspicion would be that it has more to do with "rare case when
> running with 5000 CFs" than "1.0 regression."
>
> On Mon,
Looks like you have unexpectedly large rows in your 1.0 cluster but
not 0.8. I guess you could use sstable2json to manually check your
row sizes.
On Mon, Oct 3, 2011 at 5:20 PM, Ramesh Natarajan wrote:
> It happens all the time on 1.0. It doesn't happen on 0.8.6. Is there
> anything I can do to check?
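For the manual check Jonathan suggests, sstable2json dumps an SSTable's
contents as JSON, which makes it possible to eyeball row sizes directly. A
sketch with a purely hypothetical path (substitute an actual *-Data.db file
for the suspect CF under your Cassandra data directory):

  # the path is hypothetical -- point sstable2json at a real *-Data.db file
  sstable2json /var/lib/cassandra/data/MyKeyspace/MyCF-hb-1-Data.db > MyCF-1.json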
We have about 5000 column families, and when we run nodetool cfstats it throws
this exception... this is running 1.0.0-rc1
This seems to work on 0.8.6. Is this a bug in 1.0.0?
thanks
Ramesh
Keyspace: system
Read Count: 28
Read Latency: 5.8675 ms.
Write Count: 3
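For reference, the command behind this report is plain nodetool cfstats pointed
at a node; without arguments it dumps statistics for every column family in
every keyspace on that node, so with ~5000 CFs the output is very large. A
sketch (host is a placeholder):

  # host is a placeholder; cfstats prints per-keyspace and per-CF statistics for the whole node
  nodetool -h localhost cfstats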