FTR, it looks like "nodetool upgradesstables -a" addresses this issue.
It is still good to know that, before this command is run, any restart will
hang for a long time.
Hoping this will help someone, someday :).
C*heers !
2015-01-26 11:29 GMT+01:00 Alain RODRIGUEZ :
> Hi guys,
>
> We migrate
Hello,
The following session, recorded during the Cassandra Europe Summit 2014,
might also be of interest to you:
http://youtu.be/RQnw-tfVXb4
--
Alexandre Dutra
On Mon, Jan 26, 2015 at 11:07 PM, Jabbar Azam wrote:
> There is also a YouTube video http://youtu.be/rqEylNsw2Ns explaining the
> i
I believe Aegisthus is open sourced.
Mohammed
From: Jan [mailto:cne...@yahoo.com]
Sent: Monday, January 26, 2015 11:20 AM
To: user@cassandra.apache.org
Subject: Re: Controlling the MAX SIZE of sstables after compaction
Parth et al;
the folks at Netflix seem to have built a solution for your pr
It is open sourced but works only with C* 1.x as far as I know.
Mikhail
On Tuesday, January 27, 2015, Mohammed Guller
wrote:
> I believe Aegisthus is open sourced.
>
>
>
> Mohammed
>
>
>
> *From:* Jan [mailto:cne...@yahoo.com
> ]
> *Sent:* Monday, January 26, 2015 11:20 AM
> *To:* user@cassand
Hi -
Over the last few weeks, I have seen several emails on this mailing list from
people trying to extract all data from C*, so that they can import that data
into other analytical tools that provide much richer analytics functionality
than C*. Extracting all data from C* is a full-table scan,
Both the Java driver's "select * from table" and Spark's sc.cassandraTable() work well.
I use both of them frequently.
At 2015-01-28 04:06:20, "Mohammed Guller" wrote:
Hi –
Over the last few weeks, I have seen several emails on this mailing list from
people trying to extract all data from C*, so
How big is your table? How much data does it have?
Mohammed
From: Xu Zhongxing [mailto:xu_zhong_x...@163.com]
Sent: Tuesday, January 27, 2015 5:34 PM
To: user@cassandra.apache.org
Subject: Re: full-table scan - extracting all data from C*
Both Java driver "select * from table" and Spark sc.cassand
By default, each C* node is set with 256 tokens. On a local 1-node C*
server, my Hadoop job creates 256 connections to the server. Is there any
way to control this behavior? E.g., reduce the number of connections to a
pre-configured cap.
I debugged C* source code and found the client asks for part
Hi, Zhongxing,
I am also interested in your table size. I am trying to dump tens of millions
of records from C* using map-reduce-related APIs like CqlInputFormat.
You mentioned the Java driver. Could you point me to the API you used? Thanks.
On Tue, Jan 27, 2015 at 5:33 PM, Xu Zhongxing wrote:
> Both J
Recently I surveyed this topic and you may want to take a look at
https://github.com/fullcontact/hadoop-sstable
and
https://github.com/Netflix/aegisthus
On Tue, Jan 27, 2015 at 5:33 PM, Xu Zhongxing wrote:
> Both Java driver "select * from table" and Spark sc.cassandraTable() work
> well.
> I u
The table has several billion rows.
I think the table size is irrelevant here: the Cassandra driver handles paging well,
and Spark handles data partitioning well, too.
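The reason table size does not matter is the driver's transparent paging: it pulls one page of rows at a time instead of materializing the whole result set. A minimal stand-in in plain Python (no Cassandra involved; `fetch_page` and `scan` are made-up names for illustration):

```python
def fetch_page(table, offset, fetch_size):
    # Stand-in for one driver round trip: return the next page of rows.
    return table[offset:offset + fetch_size]

def scan(table, fetch_size=5000):
    # Iterate page by page, so memory use is bounded by fetch_size
    # no matter how many rows the table holds.
    offset = 0
    while True:
        page = fetch_page(table, offset, fetch_size)
        if not page:
            return
        yield from page
        offset += fetch_size

rows = list(scan(range(12_345), fetch_size=1000))
print(len(rows))  # 12345: every row is seen exactly once
```

This is why "for (Row r : rs)" over a several-billion-row table does not blow up the client: only one page is resident at a time.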
At 2015-01-28 10:45:17, "Mohammed Guller" wrote:
How big is your table? How much data does it have?
Mohammed
From: Xu Zhongxing [
For the Java driver, there is no special API actually, just:
ResultSet rs = session.execute("select * from ...");
for (Row r : rs) {
    ...
}
For Spark, the code skeleton is:
val rdd = sc.cassandraTable("ks", "table")
then call the standard Spark APIs to process the table in parallel.
I have
Cool. What about performance? E.g., how many records in how much time?
On Tue, Jan 27, 2015 at 10:16 PM, Xu Zhongxing
wrote:
> For Java driver, there is no special API actually, just
>
> ResultSet rs = session.execute("select * from ...");
> for (Row r : rs) {
>...
> }
>
> For Spark, the code skel
Hi Shenghua, as I understand it, each range is assigned to a mapper. Mappers
do not share connections, so at least 256 connections are needed to read
everything. But those 256 connections should not all be open at the same time
unless you have 256 mappers running at the same time.
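The bound described here, one connection per token range but only as many open at once as there are running mappers, can be sketched in plain Python (the function name is made up for illustration):

```python
def peak_connections(num_token_ranges, concurrent_mappers):
    # One input split (hence one connection) exists per token range, but a
    # connection is only open while its mapper is actually running, so the
    # peak simultaneous count is capped by the number of mapper slots.
    return min(num_token_ranges, concurrent_mappers)

# 256 ranges on a 1-node cluster, but only 4 concurrent mapper slots:
print(peak_connections(256, 4))  # 4
```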
On Tue, Jan 27, 2015 at 9:34 P
Hi, Huiliang,
Great to hear from you, again!
Imagine you have 3 nodes, replication factor = 1, and the default number of
tokens. You will have 3*256 mappers... In that case, you will soon run out
of mappers or reach the limit.
On Tue, Jan 27, 2015 at 10:59 PM, Huiliang Zhang wrote:
> Hi Shenghua,
Hi, All,
Does anyone know the answer?
Thanks a lot
Boying
From: Lu, Boying
Sent: January 6, 2015 11:21
To: user@cassandra.apache.org
Subject: How to use cqlsh to access Cassandra DB if the
client_encryption_options is enabled
Hi, All,
I turned on the dbclient_encryption_options like this:
client_
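For what it's worth, the commonly documented approach for cqlsh in the C* 2.x era is a cqlshrc file plus the --ssl flag; the paths below are placeholders, not values from this thread:

```ini
; ~/.cassandra/cqlshrc (paths are placeholders)
[connection]
factory = cqlshlib.ssl.ssl_transport_factory

[ssl]
certfile = /path/to/rootca.crt
validate = true
; needed only if require_client_auth is true on the server:
userkey = /path/to/client.key
usercert = /path/to/client.crt
```

Then connect with `cqlsh --ssl <host>`.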
This is hard to answer: performance depends heavily on context,
and you can tune various parameters.
At 2015-01-28 14:43:38, "Shenghua(Daniel) Wan" wrote:
Cool. What about performance? E.g., how many records in how much time?
On Tue, Jan 27, 2015 at 10:16 PM, Xu Zhongxing wrote:
For Java d
In that case, each node will have 256/3 connections at most. Still 256
mappers. Someone please correct me if I am wrong.
On Tue, Jan 27, 2015 at 11:04 PM, Shenghua(Daniel) Wan <
wansheng...@gmail.com> wrote:
> Hi, Huiliang,
> Great to hear from you, again!
> Image you have 3 nodes, replication fa
I mean that when the number of nodes grows, there are more virtual nodes in
total. For each vnode (or partition range), a connection will be created.
For 3 nodes, 256 tokens each, and replication factor = 1 for simplicity, there
will be 3*256 virtual nodes, and therefore that many connections. Let me
know if
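The arithmetic above can be sketched in plain Python (assuming, per this thread, one split and one connection per vnode range):

```python
def total_token_ranges(nodes, num_tokens=256):
    # With vnodes, the ring splits into nodes * num_tokens token ranges,
    # and the input format yields one split (one connection) per range.
    return nodes * num_tokens

print(total_token_ranges(3))  # 3 nodes x 256 tokens -> 768 splits
```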
For clarification, please check out the source code I got from C* v2.0.11,
in AbstractColumnFamilyInputFormat.getSplits(JobContext context),
lines 125 and 168:
// cannonical ranges and nodes holding replicas
List<TokenRange> masterRangeNodes = getRangeMap(conf);
for (TokenRange range : masterRa