Hi,
Using Cassandra 1.2.10, I am trying to load sstable data into a cluster of
6 machines.
The machines are using vnodes, and are configured with
NetworkTopologyStrategy replication=3 and LeveledCompactionStrategy on the
tables being loaded.
The sstable data was generated using SSTableSimpleUnsort
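A minimal sketch of the load step being described, assuming the generated sstables sit in a <keyspace>/<table> directory layout as sstableloader expects; host addresses and paths are placeholders:

```shell
# Hedged sketch: stream pre-built sstables into the cluster with the
# sstableloader tool that ships with Cassandra 1.2. The -d hosts are
# initial contact points; the trailing path must end in <keyspace>/<table>.
bin/sstableloader -d 10.0.0.1,10.0.0.2 /tmp/generated/MyKeyspace/MyTable
```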
Hi Ross,
Did you try to use CQL2 tables?
Create the CF / table using "cqlsh -2".
We experienced the same but using CQL2 helped us.
Ferenc
From: Ross Black [mailto:ross.w.bl...@gmail.com]
Sent: Wednesday, November 27, 2013 10:12 AM
To: user@cassandra.apache.org
Subject: data dropped when using
Hi,
The trace was printed because I enabled "debug", but my custom classes are actually loaded successfully from that jar, so the exception confused me.
This is just FYI.
Thanks,
Ramesh
On Tue, Nov 26, 2013 at 4:55 PM, J Ramesh Kumar wrote:
> Hi,
>
> I wrote a trigger and it will call internally som
Hi,
I need your help extracting the column names and values in the trigger's augment
method.
*Table Def :*
create table dy_data (
id timeuuid,
data_key text,
time timestamp,
data text,
primary key((id,data_key),time)) with clustering order by (time desc);
public class ArchiveTrigger implements ITrigg
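For context, a hedged sketch of how a trigger class like the one above gets wired up in Cassandra 2.0 (triggers were experimental at the time); the jar name and trigger name are invented for the example, and the triggers directory path varies by install:

```shell
# Hedged sketch: deploy the compiled trigger jar and register it on the table.
# Assumes a package-style install where triggers live under /etc/cassandra/triggers.
cp archive-trigger.jar /etc/cassandra/triggers/
nodetool reloadtriggers   # pick up the new jar without a restart
echo "CREATE TRIGGER archive_trigger ON dy_data USING 'ArchiveTrigger';" | cqlsh
```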
A-yup. I got burned by this too some time ago. If you do accidentally try to
bootstrap a seed node, the solution is to run repair after adding the new node
but before removing the old one. However, during this time the node will
advertise itself as owning a range, but when queried, it'll retu
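The workaround described above can be sketched roughly as follows; host names are placeholders, and this assumes the new node has already joined the ring:

```shell
# Hedged sketch: repair the newly added node so it actually receives the data
# for the ranges it now owns, *before* retiring the old node.
nodetool -h new-node.example.com repair
# Once repair completes and reads look sane, remove the old node:
nodetool -h old-node.example.com decommission
```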
Thanks Mikhail.
I switched to 2.0.3 and the problem is still there; I will open an issue
with a test case on it. I have not tested 1.2.12, but I assume that will
have the same problem.
Shahryar
On Mon, Nov 25, 2013 at 5:57 PM, Shahryar Sedghi wrote:
> I did some test and apparently the prepar
On Wed, Nov 27, 2013 at 3:12 AM, Ross Black wrote:
> Using Cassandra 1.2.10, I am trying to load sstable data into a cluster of
> 6 machines.
This may be affecting you:
https://issues.apache.org/jira/browse/CASSANDRA-6272
Using 1.2.12 for the sstableloader process should work.
--
Tyler Hobb
Hi all,
I am installing Cassandra 1.2 on Ubuntu. I followed the Debian/Ubuntu
guidelines, but even after following the procedure to get rid of OpenJDK it
was always there. I read that the packages are built using OpenJDK, so I am
not sure how to really get rid of it.
Anyway, I finally decided to install it from
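A quick way to check which JVM the node actually runs, and to switch the system default away from OpenJDK on Debian/Ubuntu (standard commands, nothing Cassandra-specific):

```shell
# Hedged sketch: verify the active JVM and change the system default.
java -version                           # shows whether OpenJDK or Oracle is active
sudo update-alternatives --config java  # interactively pick the Oracle JDK entry
```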
So, I did a lot of dial turning and heap tuning (came across this nice
writeup about JVM tuning
http://blog.mikiobraun.de/2010/08/cassandra-gc-tuning.html) still no luck
with 1.2.9. I gave up and upgraded to 1.2.12 and since then things are much
much better. I don't run into the heap issue that I u
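For anyone retracing this, the knobs usually turned for this kind of heap trouble live in conf/cassandra-env.sh; the values below are purely illustrative, not a recommendation:

```shell
# Hedged sketch of a cassandra-env.sh fragment (illustrative values only).
MAX_HEAP_SIZE="8G"
HEAP_NEWSIZE="800M"
# GC logging helps confirm whether an upgrade or tuning change actually
# altered collection behaviour:
JVM_OPTS="$JVM_OPTS -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/var/log/cassandra/gc.log"
```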
On Wed, Nov 27, 2013 at 2:47 AM, Turi, Ferenc (GE Power & Water, Non-GE) <
ferenc.t...@ge.com> wrote:
> Did you try to use CQL2 tables?
>
>
>
> /create the CF / table using “cqlsh -2”.
>
>
>
> We experienced the same but using CQL2 helped us.
>
CQL2 is a historical footnote and is likely to be r
Hi all,
We made a decision to use compact storage for a couple of very large tables to
get maximum storage efficiency. We understood that this would limit us to a
single non-primary key column. We did not realize at the time that we would not
be able to add this column using the cql3 alter comma
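The limitation being described can be illustrated like this; the table and column names are invented for the example:

```shell
# Hedged illustration: a COMPACT STORAGE table is limited to a single
# non-primary-key column, and CQL3 rejects adding another one.
cqlsh <<'EOF'
CREATE TABLE big_table (k text PRIMARY KEY, v blob) WITH COMPACT STORAGE;
ALTER TABLE big_table ADD v2 blob;   -- rejected on a compact-storage table
EOF
```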
Hello,
I’m working on a distributed analytics service that originally uses Storm
as an RPC. I thought since Cassandra is already distributed, I may not have
to use another RPC system to write data to Cassandra.
There are plenty of great ideas in Triggers issue page (
https://issues.apache.org/jir
Hi Tyler,
Thanks (somehow I missed that ticket when I searched for sstableloader
bugs).
I will retry with 1.2.12 when we get a chance to upgrade. In the meantime
I have switched to loading data via the normal client API (slower but
reliable).
Ross
On 28 November 2013 03:45, Tyler Hobbs wrot
We have noticed that a cluster we upgraded to 1.1.6 (from 1.0.*) still has a
single large (~4GB) row in system.Migrations on each cluster node.
There is some code in there to drop that CF at startup, but I’m not sure of the
requirements for it to run. If the time stamps have not been updated in a
What’s the value of “max_hint_window_in_ms” in your cassandra.yaml?
-M
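One way to check that setting on each node (the path assumes a package-style install; the 1.2 default is 10800000 ms, i.e. 3 hours, after which hints stop being stored for a down node):

```shell
# Hedged sketch: inspect the hint window on a node.
grep max_hint_window_in_ms /etc/cassandra/cassandra.yaml
```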
"Xavier Fustero" wrote in message
news:cah7zuusuh7s_9vvjaxg75fo5xd2rpwy6rgpw_ubxmwbwd4_...@mail.gmail.com...
Hi all,
I am installing a cassandra 1.2 on Ubuntu. I followed the Debian/Ubuntu
guidelines but even following th
We have the following structure in a composite CF, comprising 2 parts
Key=123 -> A:1, A:2, A:3, B:1, B:2, B:3, B:4, C:1, C:2, C:3,
Our application provides the following inputs for querying on the
first-part of composite column
key=123, [(colName=A, range=2), (colName=B, range=3), (colName=C
Tom,
Here is the definition
List all the endpoints that this node has hints for, and count the number
of hints for each such endpoint.
Returns: map of endpoint -> hint count
I would suggest looking at the gossipinfo output to validate whether there are any
nodes which have that token value. If there is (
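The check suggested above can be sketched as follows; the address is a placeholder, and in 1.2 each endpoint's gossip state shows its status and tokens:

```shell
# Hedged sketch: dump gossip state and scan it for the token in question.
nodetool gossipinfo
# e.g. look for the token value among the STATUS/TOKENS lines per endpoint:
nodetool gossipinfo | grep -i -B3 'TOKENS'
```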