*Hopefully* fixed. I was never able to duplicate the problem on my
workstation, but I had a pretty good idea what was causing the
problem. Julie, if you're in a position to apply and test the fix, it
would help us make sure we've got this one nailed down.
Gary.
That is consistent with the
https://issues.apache.org/jira/browse/CASSANDRA-1169 bug I mentioned,
which is fixed in the 0.6 svn branch.
On Wed, Jun 16, 2010 at 10:51 PM, Julie wrote:
> Jonathan Ellis writes:
>
>> > Another thing that is odd is that even when the server nodes are quiescent
Jonathan Ellis writes:
> > Another thing that is odd is that even when the server nodes are quiescent
> > because compacting is complete, I am still seeing cpu usage stay at
> > about 40%. Even after several hours, no reading or writing to the database
> > and all compactions complete
On Tue, Jun 15, 2010 at 4:58 PM, Jonathan Shook wrote:
> If there aren't enough resources on the server side to service the
> clients, the expectation should be that the servers have a graceful
> performance degradation, or in the worst case throw an error specific
> to resource exhaustion or expl
On Tue, Jun 15, 2010 at 4:44 PM, Charles Butterfield wrote:
> To clarify the history here -- initially we were writing with CL=0 and had
> great performance but ended up killing the server. It was pointed out that
> we were really asking the server to accept and acknowledge an unbounded
> number
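To make the backpressure point concrete, here is a minimal sketch in plain
Java (not from the thread; the ack callback is a hypothetical stand-in for
however the client learns a write was acknowledged) of bounding the
unacknowledged writes in flight. At CL=ZERO no acknowledgment ever arrives,
so no such bound is possible; at CL >= ONE the ack gives the client
something to block on.

import java.util.concurrent.Semaphore;

// Sketch only: caps the number of writes outstanding at the server.
public class BoundedInFlight {
    private final Semaphore permits;

    public BoundedInFlight(int maxInFlight) {
        permits = new Semaphore(maxInFlight);
    }

    // Call before issuing a write; blocks once maxInFlight writes
    // are awaiting acknowledgment.
    public void beforeWrite() throws InterruptedException {
        permits.acquire();
    }

    // Call when the server acknowledges the write (possible only at
    // CL >= ONE; at CL = ZERO this callback never fires).
    public void onAck() {
        permits.release();
    }
}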
On Tue, Jun 15, 2010 at 4:44 PM, Charles Butterfield wrote:
>
> I guess my point is that I have rarely run across database servers that die
> from either too many client connections, or too rapid client requests. They
> generally stop accepting incoming connections when there are too many
> connections
Actually, you shouldn't expect errors in the general case, unless you
are simply trying to use data that can't fit in available heap. There
are some practical limitations, as always.
If there aren't enough resources on the server side to service the
clients, the expectation should be that the servers have a graceful
performance degradation, or in the worst case throw an error specific
to resource exhaustion or expl
Benjamin Black writes:
>
> I am only saying something obvious: if you don't have sufficient
> resources to handle the demand, you should reduce demand, increase
> resources, or expect errors. Doing lots of writes without much heap
> space is such a situation (whether or not it is happening
On Tue, Jun 15, 2010 at 3:55 PM, Charles Butterfield wrote:
> Benjamin Black writes:
>
>>
>> Then write slower. There is no free lunch.
>>
>> b
>
> Are you implying that clients need to throttle their collective load on the
> server to avoid causing the server to fail? That seems undesirable.
Benjamin Black writes:
>
> Then write slower. There is no free lunch.
>
> b
Are you implying that clients need to throttle their collective load on the
server to avoid causing the server to fail? That seems undesirable. Is this a
side effect of a server bug, or is it part of the int
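For illustration, "write slower" can be as simple as a shared client-side
rate cap. A minimal sketch in plain Java (not from the thread; doWrite() is
a hypothetical placeholder for the real client call):

import java.util.concurrent.TimeUnit;

// Sketch only: a fixed-rate throttle shared by all writer threads.
public class WriteThrottle {
    private final long nanosPerWrite;
    private long nextFreeAt = System.nanoTime();

    public WriteThrottle(double writesPerSecond) {
        nanosPerWrite = (long) (TimeUnit.SECONDS.toNanos(1) / writesPerSecond);
    }

    // Blocks the caller until the next write slot opens up.
    public synchronized void acquire() throws InterruptedException {
        long now = System.nanoTime();
        if (nextFreeAt > now) {
            TimeUnit.NANOSECONDS.sleep(nextFreeAt - now);
        }
        nextFreeAt = Math.max(nextFreeAt, now) + nanosPerWrite;
    }
}

// Usage (doWrite() stands in for the real client write):
//     WriteThrottle throttle = new WriteThrottle(500.0); // ~500 writes/sec total
//     throttle.acquire();
//     doWrite(key, value);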
On Tue, Jun 15, 2010 at 5:15 PM, Julie wrote:
> I'm also baffled that after all compactions are done on every one of the 10
> servers, about 5 out of 10 servers are still at 40% CPU usage, although they
> are doing 0 disk IO. I am not running anything else on these server
> nodes except fo
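One generic way to see where that idle-time CPU is going is a per-thread
CPU-time dump (a diagnostic sketch, not something proposed in the thread;
jstack <pid> gives a similar stack-level view):

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

// Sketch only: prints cumulative CPU time per thread for the JVM it runs
// in, so it has to run inside (or be adapted to attach via JMX to) the
// node being diagnosed.
public class ThreadCpu {
    public static void main(String[] args) {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        if (!mx.isThreadCpuTimeSupported()) {
            System.out.println("thread CPU time not supported here");
            return;
        }
        for (long id : mx.getAllThreadIds()) {
            ThreadInfo info = mx.getThreadInfo(id);
            long cpuNanos = mx.getThreadCpuTime(id);
            if (info != null && cpuNanos > 0) {
                System.out.printf("%-45s %8d ms%n",
                        info.getThreadName(), cpuNanos / 1000000L);
            }
        }
    }
}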
nodes going down at once.
I am baffled by all the "Value too large" exceptions that are occurring on
every one of my 10 servers:
ERROR [MESSAGE-STREAMING-POOL:1] 2010-06-14 19:30:24,471
DebuggableThreadPoolExecutor.java (line 101) Error in ThreadPoolExecutor
java.lang.RuntimeException: java.io.IOException: Value too large for defined
data type
On Tue, Jun 15, 2010 at 1:58 PM, Julie wrote:
> Coinciding with my write timeouts, all 10 of my cassandra servers are getting
> the following exception written to system.log:
"Value too large for defined data type" looks like a bug found in
older JREs. Upgrade to u19 or later.
> Another thing that is odd is that even when the server nodes are quiescent
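Per the u19 advice above, a minimal sketch for checking which JRE a node is
actually running (java -version at the shell tells you the same thing):

// Sketch only: print the JRE version and flag anything that looks like
// Sun JDK 6 before update 19, per the advice above.
public class JreCheck {
    public static void main(String[] args) {
        String v = System.getProperty("java.version"); // e.g. "1.6.0_17"
        System.out.println("java.version = " + v);
        int u = v.indexOf('_');
        if (v.startsWith("1.6.0") && u >= 0) {
            // strip any trailing suffix such as "-b04" before parsing
            String update = v.substring(u + 1).replaceAll("[^0-9].*", "");
            if (update.length() > 0 && Integer.parseInt(update) < 19) {
                System.out.println("older than 6u19 -- candidate for the "
                        + "\"Value too large for defined data type\" bug");
            }
        }
    }
}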
On Tue, Jun 15, 2010 at 1:40 PM, Julie wrote:
> Thanks for your reply. Yes, my heap space is 1G. My VMs have only 1.7G of
> memory so I hesitate to use more.
Then write slower. There is no free lunch.
b
2010-06-15 13:13:54,411 Memtable.java (line 162)
Completed flushing /var/lib/cassandra/data/Keyspace1/Standard1-359-Data.db
ERROR [MESSAGE-STREAMING-POOL:1] 2010-06-15 13:13:59,145
DebuggableThreadPoolExecutor.java (line 101) Error in ThreadPoolExecutor
java.lang.RuntimeException: java.io.IOException: Value too large for defined
data type