I'm saying I will make my clients forward the C* requests to the first replica
instead of forwarding to a random node.
Will Oberman wrote:
On Jul 1, 2011, at 9:53 PM, AJ wrote:
> Is this possible?
All reads and writes for a given key will always go to the same node
from a client.
I don't think that's true. Given a key K, the client will write to N
nodes (N=replication factor). And at consistency level O
Is this possible?
All reads and writes for a given key will always go to the same node
from a client. It seems the only thing needed is to allow the clients
to compute which node is the closest replica for the given key, using the
same algorithm C* uses. When the first replica receives the wri
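The replica computation AJ describes can be sketched roughly like this, assuming a simple ordered token ring in the style of Cassandra's partitioners. The class, tokens, and node addresses below are invented for illustration, not the actual Cassandra code:

```java
import java.util.Map;
import java.util.TreeMap;

// Hypothetical sketch: pick the first replica for a key the way a
// token-aware client might, given an ordered token ring.
public class ReplicaPicker {
    // Map from ring token to node address, sorted by token.
    private final TreeMap<Long, String> ring = new TreeMap<>();

    public ReplicaPicker(Map<Long, String> ring) {
        this.ring.putAll(ring);
    }

    // The first replica is the node owning the smallest token >= the
    // key's token, wrapping around to the start of the ring if needed.
    public String firstReplica(long keyToken) {
        Map.Entry<Long, String> entry = ring.ceilingEntry(keyToken);
        if (entry == null) entry = ring.firstEntry();
        return entry.getValue();
    }

    public static void main(String[] args) {
        TreeMap<Long, String> ring = new TreeMap<>();
        ring.put(0L, "10.0.0.1");
        ring.put(100L, "10.0.0.2");
        ring.put(200L, "10.0.0.3");
        ReplicaPicker picker = new ReplicaPicker(ring);
        System.out.println(picker.firstReplica(42L));  // node owning token 100
        System.out.println(picker.firstReplica(250L)); // wraps to token 0
    }
}
```

A client that duplicated this calculation (with the cluster's real partitioner and token assignments) could send each request straight to the first replica instead of a random coordinator.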
Oops, forgot to mention that we're using Cassandra 0.7.2.
On 07/01/2011 05:46 PM, Jeremy Stribling wrote:
Hi all,
I'm running into a problem with Cassandra, where a new node coming up
seems to only get an incomplete set of schema mutations when
bootstrapping, and as a result hits an "IllegalStateException:
replication factor (3) exceeds number of endpoints (2)" error.
I will describe the sequenc
I can see from profiling that a lot of the time in both reading and writing
is spent on ByteBuffer compares on the column names (for long rows with many
columns).
I looked at ByteBufferUtil.unsignedCompareByteBuffer(); it's basically the
same structure as the standard JVM ByteBuffer.compare() loop.
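For readers following along, here is a minimal sketch of the kind of unsigned, byte-by-byte compare being profiled. It mirrors the general structure of such a comparator, not the exact Cassandra implementation:

```java
import java.nio.ByteBuffer;

// Sketch of an unsigned, lexicographic ByteBuffer compare, byte by byte.
// This is the hot loop being discussed: one subtraction per byte pair.
public class UnsignedCompare {
    public static int compareUnsigned(ByteBuffer a, ByteBuffer b) {
        int i = a.position(), j = b.position();
        while (i < a.limit() && j < b.limit()) {
            // Mask with 0xFF so bytes >= 0x80 compare as unsigned values.
            int cmp = (a.get(i) & 0xFF) - (b.get(j) & 0xFF);
            if (cmp != 0) return cmp;
            i++;
            j++;
        }
        // When one buffer is a prefix of the other, the shorter sorts first.
        return a.remaining() - b.remaining();
    }

    public static void main(String[] args) {
        ByteBuffer x = ByteBuffer.wrap(new byte[]{0x01, (byte) 0x80});
        ByteBuffer y = ByteBuffer.wrap(new byte[]{0x01, 0x7F});
        // 0x80 (128 unsigned) sorts after 0x7F (127), so the result is positive.
        System.out.println(compareUnsigned(x, y) > 0);
    }
}
```

The per-byte masking and subtraction is why this shows up in profiles for rows with many columns: the comparator runs once per byte of every column name examined.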
Quoting Jonathan Ellis:
> On Fri, Jul 1, 2011 at 7:12 AM, wrote:
> > I assume there's something wrong with the data (the column has
> > validation_class: UTF8Type, so is it because I'm inserting non-UTF8
> > bytes?) but the exception doesn't explain.
>
> That would do it, but it looks like you've
On Fri, Jul 1, 2011 at 7:12 AM, wrote:
> I assume there's something wrong with the data (the column has
> validation_class: UTF8Type, so is it because I'm inserting non-UTF8
> bytes?) but the exception doesn't explain.
That would do it, but it looks like you've cut off the rest of the
excepti
On 6/30/2011 1:57 PM, Jeremiah Jordan wrote:
For your consistency case, it is actually an ALL read that is needed,
not an ALL write. An ALL read, with whatever consistency level of write
you need (to support machines dying), is the only way to get
consistent results in the face of a failed
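The rule behind that advice is that the read and write replica sets must overlap: reads are consistent when read replicas plus write replicas exceed the replication factor. A tiny sketch of that check (method and variable names are just for illustration):

```java
// Sketch of the replica-overlap rule: a read sees the latest write only
// when readReplicas + writeReplicas > replicationFactor, because then
// at least one replica in the read set must have seen the write.
public class ConsistencyCheck {
    public static boolean isConsistent(int readReplicas, int writeReplicas, int replicationFactor) {
        return readReplicas + writeReplicas > replicationFactor;
    }

    public static void main(String[] args) {
        int rf = 3;
        // An ALL read (R = 3) is consistent with any write level, even W = 1.
        System.out.println(isConsistent(3, 1, rf)); // true
        // A ONE read with a ONE write can miss the latest value.
        System.out.println(isConsistent(1, 1, rf)); // false
    }
}
```

This is why the ALL needs to be on the read side here: with ALL reads, even a write that only reached one replica is guaranteed to be seen.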
All of the download URLs for 0.7.6-2 appear to be broken. The issue appears
to be a lack of "-2" in the path.
http://cassandra.apache.org/download/
Dan
Hello All,
I am trying to load huge amounts of data into Cassandra. I want to use bulk
loading with Hadoop.
I looked into the bulkloader utility in Java, but I am not sure how to provide
input to Hadoop and then load into Cassandra. Could someone please explain the
process?
Thank you.
Regards,
Pr
When attempting to insert a column, I get the following exception:
InvalidRequestException(why="[Keyspace][ColumnFamily][9cc58234708d] =
[6a53ac0452f67acd71b35463d475762b7f69cc0ea7f9e0cb0ca24f0e45170d48dafae04bf7b966fa75c7fb2bad0eace0ff23b265e8b0e35c7b0bbc2a516bb75b2007eb35ab1308b8c646428e0491840
https://issues.apache.org/jira/browse/CASSANDRA-2843
thanks
Yang
On Fri, Jul 1, 2011 at 12:09 AM, Sylvain Lebresne wrote:
> I think it's an interesting solution. And we can probably avoid the two
> getTopLevelColumns flavors with a bit of a refactor. Let's open a ticket,
> however,
> because this i
To make it clear what the problem is, this is not a repair problem. This is
a gossip problem. Gossip is reporting that the remote node is a 0.7 node
and repair is just saying "I cannot use that node because repair has changed
and the 0.7 node will not know how to answer me correctly", which is the
This is the same behavior I reported in 2768 as Aaron referenced ...
What was suggested for us was to do the following:
- Shut down the entire ring
- When you bring up each node, do a nodetool repair
That didn't immediately resolve the problems. In the end, I backed up
all the data, removed the
Héctor, when you say "I have upgraded all my cluster to 0.8.1", from
which version was that: 0.7.something or 0.8.0?
If it was 0.8.0, did you run a successful repair on 0.8.0 prior to
the upgrade?
--
Sylvain
On Fri, Jul 1, 2011 at 5:59 AM, Terje Marthinussen wrote:
> Unless it is a 0.8.1 R
I think it's an interesting solution. And we can probably avoid the two
getTopLevelColumns flavors with a bit of a refactor. Let's open a ticket,
however, because this is starting to be off-topic for the user mailing list.
--
Sylvain
On Fri, Jul 1, 2011 at 12:44 AM, Yang wrote:
> ok, I kind of foun
Nate,
that is not relevant. CQL is a text query that gets parsed. Without
parameters you have to build the query by string concatenation. If I give
you a string which contains a single quote, unless you have written your app
to escape that quote, I can force a corrupted query on you that does
some
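The concatenation risk being described can be illustrated with a toy example. The table and column names are invented, and this is plain string building, not any Cassandra client API:

```java
// Illustration of the injection risk: building a CQL string by
// concatenation lets a single quote in the input break out of the
// string literal and change the query's structure.
public class CqlConcat {
    static String buildQuery(String userInput) {
        // Unsafe: userInput is pasted straight into the statement text.
        return "SELECT * FROM users WHERE name = '" + userInput + "'";
    }

    public static void main(String[] args) {
        // Benign input produces the intended query.
        System.out.println(buildQuery("alice"));
        // Input containing a quote rewrites the WHERE clause entirely.
        System.out.println(buildQuery("x' AND extra = 'y"));
    }
}
```

With the second input, the text handed to the parser is no longer a single name comparison, which is exactly the corruption being described: the application must escape quotes itself, since the query is just a string.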