On 30 August 2013 18:42, Jeremiah D Jordan wrote:
> You need to introduce the new "vnode enabled" nodes in a new DC. Or you
> will have similar issues to
> https://issues.apache.org/jira/browse/CASSANDRA-5525
>
> Add vnode DC:
>
> http://www.datastax.com/documentation/cassandra/1.2/webhelp/index.ht
hi:
I tested this with the new Cassandra 1.2.9 release and the issue still persists.
:-(
Miguel Angel Martín Junquera
Analyst Engineer.
miguelangel.mar...@brainsins.com
2013/8/30 Miguel Angel Martin junquera
> I try this:
>
> rows = LOAD
> 'cql://keyspace1/test?page_size=1&split_size=4&where_
Good/nice job!
I'd been testing with a UDF only with a string schema type; this is better
and more elaborate work.
Regards
Miguel Angel Martín Junquera
Analyst Engineer.
miguelangel.mar...@brainsins.com
2013/8/31 Chad Johnston
> I threw together a quick UDF to work around this iss
There are two ways to support wide rows in CQL3: one is to use composite
keys and the other is to use collections like Map, List and Set. The
composite-keys method can have millions of columns (transposed to rows).
This solves some of our use cases.
However, if we use collections, I want to
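For illustration (hypothetical table and column names, not from the thread), the two forms look like this; in the composite-key form the clustering column is what lets one partition hold millions of transposed columns:

```sql
-- hypothetical wide-row table using a composite primary key
CREATE TABLE wide_rows (
    row_key  text,        -- partition key: the "wide row"
    col_name text,        -- clustering column: one entry per logical column
    value    text,
    PRIMARY KEY (row_key, col_name)
);

-- versus the collection form, where the whole map lives in one cell
CREATE TABLE collection_rows (
    row_key text PRIMARY KEY,
    cols    map<text, text>
);
```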
Hi,
I'm trying to get SSTable access distribution for Reads from Cassandra
Stress Tool. When I try to dump the cfhistograms I don't see entries in the
SSTable column; they all turn out to be zero.
Any idea what might be going wrong? Please suggest how to dump the
histogram with SSTable access di
hi all:
More info :
https://issues.apache.org/jira/browse/CASSANDRA-5941
I tried this (and generated Cassandra 1.2.9) but it does not work for me:
git clone http://git-wip-us.apache.org/repos/asf/cassandra.git
cd cassandra
git checkout cassandra-1.2
patch -p1 < 5867-bug-fix-filter-push-down-1.2-branch
I know that the size is limited to max short (~32k) because when deserializing
the response from the server, the first item returned is the number of items,
and it's a short. That being said, you could likely handle this by looking for
the overflow and allowing double max short.
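A hedged sketch of that overflow trick (hypothetical function, not from any driver): the count is decoded as a signed 16-bit short, and a negative value is reinterpreted as an unsigned overflow, doubling the usable range.

```python
import struct

def read_item_count(buf: bytes) -> int:
    """Decode the leading item count of a (hypothetical) response buffer.

    The wire format stores the count as a signed 16-bit big-endian short,
    capping it at 32767. Treating a negative value as unsigned overflow
    allows counts up to 65535, as suggested above.
    """
    (n,) = struct.unpack(">h", buf[:2])
    return n if n >= 0 else n + 0x10000

# A count of 40000 overflows a signed short but decodes correctly:
print(read_item_count(struct.pack(">H", 40000)))  # 40000
```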
Vikas Goyal wrot
Hi
I have a requirement of versioning to be done in Cassandra.
Following is my column family definition
create table file_details(id text primary key, fname text, version int,
mimetype text);
I have a secondary index created on fname column.
Whenever I do an insert for the same 'fname', the
Hi
1.-
Maybe this?
-- Register the UDF
REGISTER /path/to/cqlstorageudf-1.0-SNAPSHOT
-- FromCqlColumn will convert chararray, int, long, float, double
DEFINE FromCqlColumn com.megatome.pig.piggybank.tuple.FromCqlColumn();
-- Load data as normal
data_raw = LOAD 'cql://bookcrossing/books' USING CqlS
> 1.0.9 -> 1.0.12 -> 1.1.12 -> 1.2.x?
Because this fix in 1.0.11:
* fix 1.0.x node join to mixed version cluster, other nodes >= 1.1
(CASSANDRA-4195)
-Jeremiah
On Aug 30, 2013, at 2:00 PM, Mike Neir wrote:
> Is there anything that you can link that describes the pitfalls you mention?
> I'd l
I believe CQL has to fetch and transport the entire row, so if it contains
a collection you transmit the entire collection. C* is mostly about low
latency queries and as the row gets larger keeping low latency becomes
impossible.
Collections do not support a large number of columns; they were not
Hi Dawood,
On 02.09.2013, at 16:36, dawood abdullah wrote:
> Hi
> I have a requirement of versioning to be done in Cassandra.
>
> Following is my column family definition
>
> create table file_details(id text primary key, fname text, version int,
> mimetype text);
>
> I have a secondary inde
We had some problems when using secondary indexes because of three issues:
- The query is a Range Query, which means that it is slow.
- There is an open bug regarding the use of row cache for secondary indexes
(CASSANDRA-4973)
- The cardinality of our secondary key was very low (this was bad)
We
Requirement is like I have a column family say File
create table file(id text primary key, fname text, version int, mimetype
text, content text);
Say I have a few records inserted; when I modify an existing record (the content
is modified) a new version needs to be created. As I need to have provision
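Not from the thread, but one common way to model this in CQL3 (hypothetical table name): make the version a clustering column ordered descending, so every modification is a plain insert and the newest version is the first row of the partition.

```sql
-- hypothetical versioned layout: one partition per file id
CREATE TABLE file_versions (
    id       text,
    version  timeuuid,
    fname    text,
    mimetype text,
    content  text,
    PRIMARY KEY (id, version)
) WITH CLUSTERING ORDER BY (version DESC);

-- every modification is an INSERT with a fresh version:
-- INSERT INTO file_versions (id, version, fname, mimetype, content)
-- VALUES ('f1', now(), 'a.txt', 'text/plain', '...');

-- latest version of a file:
SELECT * FROM file_versions WHERE id = 'f1' LIMIT 1;
```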
On 02.09.2013, at 20:44, dawood abdullah wrote:
> Requirement is like I have a column family say File
>
> create table file(id text primary key, fname text, version int, mimetype
> text, content text);
>
> Say, I have few records inserted, when I modify an existing record (content
> is modif
> We performed some modifications and created another column family, which
> maps the secondary index to the key of the original column family. The
> improvements were very impressive in our case!
Sorry, I couldn't understand. What changes? Have you built a B-tree?
2013/9/2 Francisco Nogueira C
In my case the version can be a timestamp as well. What do you suggest the
version number to be? Do you see any problems if I keep the version as a
counter / timestamp?
On Tue, Sep 3, 2013 at 12:22 AM, Jan Algermissen wrote:
>
> On 02.09.2013, at 20:44, dawood abdullah
> wrote:
>
> > Requirement is like I ha
I'm running Cassandra 1.2.4, and when I enable the row cache the system
throws TimeoutException and garbage collection doesn't stop.
When I disable it, the query returns in 700 ms.
Configuration:
- row_cache_size_in_mb: 256
- row_cache_save_period: 0
- # row_cache_keys_to_save: 10
Your experience is not uncommon. There was a recent thread on this with a
variety of details on when to use row caching:
http://www.mail-archive.com/user@cassandra.apache.org/msg31693.html
tl;dr - it depends completely on use case. Small static rows work best.
On Mon, Sep 2, 2013 at 2:05 PM, Sáv
Is it related to https://issues.apache.org/jira/browse/CASSANDRA-4973? And
https://issues.apache.org/jira/browse/CASSANDRA-4785?
2013/9/2 Nate McCall
> You experience is not uncommon. There was a recent thread on this with a
> variety of details on when to use row caching:
> http://www.mail-arc
Hello,
We are experiencing an issue where nodes are temporarily slow due to I/O
contention, anywhere from 10 minutes to 2 hours. I don't believe this slowdown
is Cassandra related, but rather factors outside of Cassandra. We run Cassandra
1.1.9 on a 12-node cluster with a replication factor of 3
In general, with LOCAL_QUORUM you should not see such an issue when one node
is slow. However, it could be because clients are still sending requests
to that node. Depending on what client library you are using, you could
try to take that node out of your connection pool. Not knowing the exact issue
y
Sorry, I was not very clear.
We simply created another CF whose row keys were given by the secondary index
that we needed. The value of each row in this new CF was the key associated
with a row in the first CF (the original one).
Francisco
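A sketch of that scheme with hypothetical names (not the actual CF definitions from the thread): the second CF is keyed by the value that would otherwise be secondary-indexed, and stores the original CF's row key.

```sql
-- hypothetical manual "index" CF: fname -> key of the original CF
CREATE TABLE files_by_fname (
    fname   text,
    file_id text,        -- row key in the original CF
    PRIMARY KEY (fname, file_id)
);

-- two-step lookup: read the index CF first, then the original CF by its key
-- SELECT file_id FROM files_by_fname WHERE fname = 'report.pdf';
-- SELECT * FROM file_details WHERE id = ?;
```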
On Sep 2, 2013, at 4:13 PM, Sávio Teles wrote:
>
>
Hello,
Currently we have a Cassandra cluster in Amazon EC2, and we are planning to
upgrade our deployment configuration to achieve better
performance and stability. However, a lot of open questions arise when planning
this migration. I'll be very thankful if somebody could answer my
ques
If you launch the new servers, have them join the cluster, then decommission
the old ones, you'll be able to do it without downtime. It'll also have the
effect of randomizing the tokens, I believe.
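A hedged outline of that rolling replacement (illustrative commands; exact behavior depends on your version and configuration):

```
# on each NEW node, with auto_bootstrap left at its default (true):
#   start Cassandra and wait until `nodetool status` shows it as UN (Up/Normal)
# then, on each OLD node, one at a time:
nodetool decommission   # streams the node's ranges to the rest of the ring
nodetool netstats       # watch streaming progress until it completes
```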
On Sep 2, 2013, at 4:21 PM, Renat Gilfanov wrote:
> Hello,
>
> Currently we have a Cassandra
Thanks for the quick reply!
If I launch the new Cassandra node, should I first add its IP to
cassandra-topology.properties and the "seeds" parameter in cassandra.yaml on
all existing nodes and restart them?
>If you launch the new servers, have them join the cluster, then decommissi
Hello,
I'd like to ask what the best options are for separating the commit log and data
on an Amazon m1.xlarge instance, given 4x420 GB attached storage volumes and an
EBS volume. As far as I understand, EBS is not the right choice and it's
recommended to use the attached storage instead.
Is it better to combine 4 ep
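Not a recommendation from the thread, but a hypothetical cassandra.yaml layout for that setup: dedicate one attached (ephemeral) disk to the commit log and the rest to data, so sequential commit-log appends never compete with data reads.

```yaml
# hypothetical cassandra.yaml fragment for m1.xlarge ephemeral disks
commitlog_directory: /mnt/disk0/cassandra/commitlog
data_file_directories:
    - /mnt/disk1/cassandra/data
    - /mnt/disk2/cassandra/data
    - /mnt/disk3/cassandra/data
```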