val transformed = d.map { e =>
  EventV1(e.testId, e.ts, e.channel, e.groups, e.event)
}
transformed.saveToCassandra(keyspace, "test_v1")

Not sure whether this code could be running into any limits.

The total data in this table is roughly 2 GB on disk; the total data for each
node is around 290 GB.
On Fri, Jun 26, 2015 at 7:01 PM Nate McCall wrote:

> > We notice incredibly slow reads, 600 MB in an hour; we are using quorum
> > LOCAL_ONE reads.
> > The load_one of Cassandra increases from <1 to 60! There is no CPU wait,
> > only user & nice.
>
> Without seeing the code and query, it's hard to tell, but I noticed
> something similar wh...
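For context, here is a minimal sketch of what the whole read-transform-write
pipeline typically looks like with the DataStax Spark Cassandra connector. The
keyspace name ("test_ks"), the source table name ("events"), the EventV1 field
types and the contact point are assumptions; only the map and saveToCassandra
lines come from the message above.

import com.datastax.spark.connector._
import org.apache.spark.{SparkConf, SparkContext}

// Sketch only: the field types below are guesses at the real schema.
case class EventV1(testId: String, ts: Long, channel: String,
                   groups: String, event: String)

val sparkConf = new SparkConf()
  .setAppName("event-migration")
  .set("spark.cassandra.connection.host", "10.0.0.1")    // placeholder contact point

val sc = new SparkContext(sparkConf)
val keyspace = "test_ks"                                  // placeholder keyspace

// Full-table scan of the source table, mapped into the new shape and written
// back out; this scan is what generates the read load discussed in the thread.
val d = sc.cassandraTable[EventV1](keyspace, "events")    // placeholder source table
val transformed = d.map { e =>
  EventV1(e.testId, e.ts, e.channel, e.groups, e.event)
}
transformed.saveToCassandra(keyspace, "test_v1")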
We are using Spark to do a data migration in C*. We read the table into
Spark, do some transformations, and write them back to C*.
Before we execute, the load on Cassandra is very little.
We notice incredibly slow reads, 600 MB in an hour; we are using quorum
LOCAL_ONE reads.
The load_one of Cassandra increases from <1 to 60! There is no CPU wait,
only user & nice.
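If the cluster is simply being saturated by the scan, one thing worth checking
is the connector's own throttling knobs. Below is a sketch of the kind of
settings involved; the property names are from the DataStax Spark Cassandra
connector (verify them against the connector version in use) and the values
are placeholders, not recommendations.

import org.apache.spark.SparkConf

// Hedged sketch: slow the migration job down so the Cassandra nodes keep up.
val throttledConf = new SparkConf()
  .set("spark.cassandra.connection.host", "10.0.0.1")        // placeholder contact point
  .set("spark.cassandra.input.split.size_in_mb", "64")       // smaller Spark partitions per token range
  .set("spark.cassandra.input.fetch.size_in_rows", "500")    // rows fetched per page while reading
  .set("spark.cassandra.output.throughput_mb_per_sec", "5")  // cap write throughput (MB/s)
  .set("spark.cassandra.output.concurrent.writes", "2")      // fewer in-flight batches per task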
First I would try to simplify your architecture: get everything onto the same
OS. Then change the topology so that you have 1 job tracker and 4 nodes that
run both Cassandra and the Hadoop tasks, so that reading and mapping the data
happen on the same nodes. Reads from Cassandra happen as range scans.
Hello Cassandra users,

I am trying to read and process data in Cassandra using Hadoop. I have a
4-node Cassandra cluster and an 8-node Hadoop cluster:
- 1 Namenode/Jobtracker
- 7 Datanodes/Tasktrackers (4 of them also host Cassandra)

I am using Cassandra 1.2 beta, Hadoop 0.20.2, Java 1.6_u...
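To make the co-location advice above concrete, here is a rough sketch of how a
job is usually pointed at Cassandra through ColumnFamilyInputFormat so that the
input splits line up with the nodes that own the data. The class and helper
names are from Cassandra 1.2's Hadoop support (double-check them against your
exact versions); the host, keyspace and column family names are placeholders.

// Sketch only: wire a Hadoop job to read directly from Cassandra.
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.mapreduce.Job
import org.apache.cassandra.hadoop.{ColumnFamilyInputFormat, ConfigHelper}
import org.apache.cassandra.thrift.{SlicePredicate, SliceRange}
import org.apache.cassandra.utils.ByteBufferUtil

val hadoopConf = new Configuration()
val job = new Job(hadoopConf, "cassandra-scan")

// Any one Cassandra node works as the initial contact point; the input format
// then builds one split per token range and reports the replica endpoints as
// split locations, so the jobtracker can schedule map tasks data-locally on
// the 4 tasktrackers that also run Cassandra.
ConfigHelper.setInputInitialAddress(job.getConfiguration, "cassandra-node-1")    // placeholder host
ConfigHelper.setInputRpcPort(job.getConfiguration, "9160")
ConfigHelper.setInputPartitioner(job.getConfiguration, "Murmur3Partitioner")     // must match the cluster
ConfigHelper.setInputColumnFamily(job.getConfiguration, "my_keyspace", "my_cf")  // placeholder names

// Read every column of every row: an open-ended slice over each row.
val predicate = new SlicePredicate().setSlice_range(
  new SliceRange(ByteBufferUtil.EMPTY_BYTE_BUFFER, ByteBufferUtil.EMPTY_BYTE_BUFFER,
    false, Int.MaxValue))
ConfigHelper.setInputSlicePredicate(job.getConfiguration, predicate)

job.setInputFormatClass(classOf[ColumnFamilyInputFormat])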
> all the rows but with different data.
>
> So yes, it's getting data from all rows.
> Please suggest a better way to do this.
> Thank you.
>
> The output of my query will be (for example, if I do it for supercol1):
> rowkey1,T,C
> rowkey2,A,A
--
Indranath Ghosh
Phone: 408-813-9207
...super_column='23', read_consistency_level=None, buffer_size=None)

This is very slow compared to MySQL.
I am not sure what's going wrong here. Could someone let me know if there is
any problem with my model?

Any help in this regard is highly appreciated.

Thank you.

Regards,
Priyanka
Yes, this is the issue. Thanks.

Dave

On Tuesday, August 3, 2010, Jonathan Ellis wrote:
> Sounds like https://issues.apache.org/jira/browse/THRIFT-638, where
> Arya Goudarzi posted a patch.
>
> On Tue, Aug 3, 2010 at 5:18 AM, Dave Gardner wrote:

Hi all,

I'm working on a PHP/Cassandra application. Yesterday we experienced a
strange situation when testing random reads. The background to this test was
that we inserted 10,000 rows with simple row keys. The number of columns in
each row varies between about 5 and 40 columns (all random).