Hello All,
I'm attempting to create what the DataStax 1.1 documentation calls a
Dynamic Column Family
(http://www.datastax.com/docs/1.1/ddl/column_family#dynamic-column-families)
via CQLSH.
This works in v2 of the shell:
"create table data ( key varchar PRIMARY KEY) WITH comparator=LongType;"
When
This may or may not be related, but I thought I'd recount a similar experience
we had in EC2 in hopes it helps someone else.
As background, we had been running several servers in a 0.6.8 ring with no
Cassandra issues (some EC2 issues, but none related to Cassandra) on
multiple EC2 XL instances in a sing
I forgot one critical point: we use zero swap on any of these hosts.
ng to escalate.
>
> For what it's worth, we were able to reproduce the lockup behavior that
> you're describing by running a tight loop that spawns threads. Here's a gist
> of the app I used: https://gist.github.com/a4123705e67e9446f1cc -- I'd be
> interested to know
> For us, the 7.5 version of libc was causing problems.
> Either way, I'm looking forward to hearing about anything you find.
>
> Mike
>
>
> On Thu, Jan 13, 2011 at 11:47 PM, Erik Onnen wrote:
>
>> Too similar to be a coincidence I'd say:
>>
>> Good node (o
One of the developers will have to confirm, but this looks like a bug
to me. MessagingService is a singleton, and there's a Multimap used for
targets that isn't accessed in a thread-safe manner.
The thread dump would seem to confirm this: when you hammer what is
ultimately a standard HashMap with mu
Filed as https://issues.apache.org/jira/browse/CASSANDRA-2037
I can't see how the code would be correct as written, but I'm usually
wrong about most things.
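To illustrate the hazard described above: a plain HashMap (or a Multimap backed by one) mutated from several threads without synchronization can silently corrupt its internal state or hang. A minimal sketch of the usual fix, assuming nothing about Cassandra's actual MessagingService code, is to build the map from concurrent collections so racing writers stay safe:

```java
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

public class SafeTargets {
    // A multimap-like structure built from thread-safe parts:
    // each key maps to a concurrent queue of values.
    private final Map<String, Queue<String>> targets = new ConcurrentHashMap<>();

    public void put(String key, String value) {
        // computeIfAbsent is atomic on ConcurrentHashMap, so two threads
        // racing on the same key still end up sharing one queue.
        targets.computeIfAbsent(key, k -> new ConcurrentLinkedQueue<>()).add(value);
    }

    public int count(String key) {
        Queue<String> q = targets.get(key);
        return q == null ? 0 : q.size();
    }

    public static void main(String[] args) throws InterruptedException {
        SafeTargets t = new SafeTargets();
        Thread[] workers = new Thread[8];
        for (int i = 0; i < workers.length; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) t.put("node-1", "msg");
            });
            workers[i].start();
        }
        for (Thread w : workers) w.join();
        // With a plain HashMap this workload can lose entries or loop
        // forever on a corrupted bucket chain; here every put is counted.
        System.out.println(t.count("node-1")); // prints 8000
    }
}
```

The class and method names here are hypothetical; the point is only that concurrent mutation of a shared map needs a concurrent map (or external locking), which is consistent with the thread dump described above.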
On Sun, Jan 23, 2011 at 12:14 PM, Erik Onnen wrote:
> One of the developers will have to confirm but this looks like
During a recent upgrade of our Cassandra ring from 0.6.8 to 0.7.3, and
prior to a drain on the 0.6.8 nodes, we lost a node for reasons
unrelated to Cassandra. We decided to push forward with the drain on
the remaining healthy nodes. The upgrade completed successfully for
the remaining nodes and the
lize the data it's streaming over is
> older-version data. Can you create a ticket?
>
> In the meantime nodetool scrub (on the existing nodes) will rewrite
> the data in the new format which should workaround the problem.
>
> On Mon, Mar 7, 2011 at 1:23 PM, Erik Onnen wrot
I'd recommend not storing commit logs or data files on EBS volumes if
your machines are under any decent amount of load. I say that for
three reasons.
First, both EBS volumes contend directly for network throughput with
what appears to be a peer QoS policy to standard packets. In other
words, if y
After an upgrade from 0.7.3 to 0.7.4, we're seeing the following on
several data files:
ERROR [main] 2011-03-23 18:58:33,137 ColumnFamilyStore.java (line 235)
Corrupt sstable
/mnt/services/cassandra/var/data/0.7.4/data/Helium/dp_idx-f-4844=[Index.db,
Statistics.db, Data.db, Filter.db]; skipped
jav
Thanks, so is it the "[Index.db, Statistics.db, Data.db, Filter.db];
skipped" that indicates it's in Statistics? Basically I need a way to
know if the same is true of all the other tables showing this issue.
-erik
It's been about 7 months now, but at the time G1 would regularly
segfault for me under load on Linux x64. I'd advise extra precaution
in testing, and make sure you test with a representative load.
I'll capture what we're seeing here for anyone else who may look
into this in more detail later.
Our standard heap growth is ~300K between collections, with regular
ParNew collections happening on average about every 4 seconds. All
very healthy.
The memtable flush (where we see almost all our
Sorry for the complex setup; it took a while to identify the behavior,
and I'm still not sure I'm reading the code correctly.
Scenario:
Six node ring w/ SimpleSnitch and RF3. For the sake of discussion
assume the token space looks like:
node-0 1-10
node-1 11-20
node-2 21-30
node-3 31-40
node-4 41-50
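To make the scenario above concrete: with SimpleSnitch and SimpleStrategy at RF=3, the replicas for a token are the node that owns its range plus the next two nodes walking clockwise around the ring. A hypothetical sketch follows; the node names and ranges come from the example above (the sixth node's range was cut off, so the 51-60 range for node-5 is an assumption), and the selection logic is the textbook SimpleStrategy walk, not Cassandra's actual code:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;

public class RingSketch {
    // Maps each node's end-of-range token to the node name,
    // matching the six-node layout in the scenario above.
    static final TreeMap<Integer, String> RING = new TreeMap<>();
    static {
        RING.put(10, "node-0");
        RING.put(20, "node-1");
        RING.put(30, "node-2");
        RING.put(40, "node-3");
        RING.put(50, "node-4");
        RING.put(60, "node-5"); // assumed: this range was truncated in the original
    }

    // SimpleStrategy with RF replicas: the owner of the token, then the
    // next nodes walking clockwise, wrapping past the end of the ring.
    static List<String> replicas(int token, int rf) {
        List<String> out = new ArrayList<>();
        Integer key = RING.ceilingKey(token);
        if (key == null) key = RING.firstKey(); // wrap around the ring
        while (out.size() < rf) {
            out.add(RING.get(key));
            key = RING.higherKey(key);
            if (key == null) key = RING.firstKey();
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(replicas(25, 3)); // prints [node-2, node-3, node-4]
        System.out.println(replicas(55, 3)); // prints [node-5, node-0, node-1]
    }
}
```

The wrap-around case is the one worth noting: a token in the last range is replicated back onto the first nodes of the ring, which is why losing any one node touches three ranges at RF=3.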
s what it wants, since by the time we timeout once then FD and/or
> dynamic snitch should route the request to another node for the retry
> without adding additional complexity to StorageProxy. (If that's not
> what you see in practice, then we probably have a dynamic snitch bug.)
>
On Thu, Jul 29, 2010 at 9:57 PM, Ryan Daum wrote:
>
> Barring this we (place where I work, Chango) will probably eventually fork
> Cassandra to have a RESTful interface and use the Jetty async HTTP client to
> connect to it. It's just ridiculous for us to have threads and associated
> resources t
Hello,
We're planning an upgrade from 0.6.7 (after it's released) to 0.7.0 (after
it's released) and I wanted to validate my assumptions about what can be
expected. Obviously I'll need to test my own assumptions but I was hoping
for some guidance to make sure my understanding is correct.
My core