I think I could have pinpointed the problem: I have a table with a partition
key related to the timestamp, so for one hour a large amount of data is
inserted at one single node. This table creates very big partitions
(300MB-600MB) on whatever node the current partition of that table is
assigned to.
Maybe the disk I/O cannot keep up with the high mutation rate?
Check the number of pending compactions.
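A common fix for oversized time-series partitions like this is to add a coarser sub-bucket to the partition key, so one hour's data is split across several partitions instead of one 300-600MB partition on a single node. A minimal sketch of the idea in Python (the bucket count of 8 and the key names are hypothetical, not from the original schema):

```python
import hashlib

def partition_key(source_id: str, hour: str, buckets: int = 8) -> tuple:
    # Derive a stable sub-bucket from the source id, so rows for the
    # same hour spread over `buckets` partitions rather than one.
    digest = hashlib.md5(source_id.encode()).digest()
    bucket = digest[0] % buckets
    return (hour, bucket)

# Rows for the same hour now land in up to 8 distinct partitions.
keys = {partition_key(f"sensor-{i}", "2018-06-17T09") for i in range(100)}
```

On the Cassandra side this corresponds to making the bucket part of the partition key (e.g. `PRIMARY KEY ((hour, bucket), ts)`); reads for one hour then query all buckets.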
On Sun, Jun 17, 2018 at 9:24 AM, onmstester onmstester wrote:
Hi,
I was doing 500K inserts + 100K counter updates per second on my cluster of 12
nodes (20 cores/128GB RAM/4 * 600 10K HDD) using batch statements,
with no problem.
I saw a lot of warnings showing that most batches do not concern a single node,
so they should not be in a batch; on the other hand
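The warning above fires when an unlogged batch spans many partitions, which puts all the work on one coordinator. A sketch of grouping rows by partition key first, so each batch is single-partition (the row shape and `key_fn` are illustrative, not from the original code):

```python
from collections import defaultdict

def group_for_batches(rows, key_fn):
    # Group rows by partition key so each Cassandra batch touches a
    # single partition; multi-partition batches trigger the warning
    # and gain nothing over individual async writes.
    groups = defaultdict(list)
    for row in rows:
        groups[key_fn(row)].append(row)
    return groups

rows = [{"pk": i % 3, "v": i} for i in range(9)]
batches = group_for_batches(rows, lambda r: r["pk"])
```

Each value in `batches` can then be sent as one single-partition batch.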
Good answer Oleksandr,
But I think the data is inserted into the Memtable already in the right
order. At least the DataStax Academy videos say so.
But it shouldn't make any difference anyhow.
Kind regards,
Lucas Benevides
2017-12-08 5:41 GMT-02:00 Oleksandr Shulgin:
On Fri, Dec 8, 2017 at 3:05 AM, Eunsu Kim wrote:
> There is a table with a timestamp as a cluster key and sorted by ASC for
> the column.
>
> Is it better to insert by the time order when inserting data into this
> table for insertion performance? Or does it matter?
>
The writes hit the memtable first, which keeps rows sorted in memory, so insert order should not matter.
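The point above can be shown with a toy model (this is an illustration of the idea, not Cassandra's actual memtable implementation): rows are kept keyed in memory and emitted in clustering order on flush, so out-of-order inserts produce the same SSTable as in-order ones.

```python
class ToyMemtable:
    # Toy stand-in for a memtable: rows keyed by clustering timestamp,
    # sorted only when "flushed" to an SSTable.
    def __init__(self):
        self.rows = {}

    def insert(self, clustering_ts, value):
        self.rows[clustering_ts] = value  # order of arrival is irrelevant

    def flush(self):
        # SSTable contents are always emitted sorted by clustering key.
        return sorted(self.rows.items())

m1, m2 = ToyMemtable(), ToyMemtable()
for ts in [3, 1, 2]:
    m1.insert(ts, f"v{ts}")   # out of time order
for ts in [1, 2, 3]:
    m2.insert(ts, f"v{ts}")   # in time order

assert m1.flush() == m2.flush()  # same on-disk result either way
```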
There is a table with a timestamp as a clustering key, sorted ASC on that
column.
Is it better to insert in time order when inserting data into this table, for
insertion performance? Or does it matter?
Thank you.
30G java heap. The dataset is the usual Cassandra-test size
How do I tell if compaction has completed?
I will add more iterations/time to the test.
Thank you
Date: Friday, July 14, 2017 at 2:21 PM
To: Roger Warner
Subject: Re: Reversed read write performance.
Pls add info about caching
I'm confused about read vs. write performance. I was expecting to see higher
write than read perf; I'm seeing the opposite, by nearly 2X.
Please help. Am I doing/configuring something wrong, or do I have the wrong
expectations? I am very new to Cassandra. And this is not using Datastax
don't have the write schema. Also, replication is
different for schema R1 in the read and write DCs.
This is done so we can independently scale nodes based on read and write
performance requirements separately.
My question is:
Will this provide a concrete performance benefit in Cassandra?
Will making a sin
increase the value up to 128 MB/s (if I am not wrong).
Increasing it to 32-64 MB/s after assessing the load would definitely give you
good write performance.
But if your machines are already I/O-intensive and overloaded, then never
try to change the value.
Best Regards,
Kiran.M.K.
On Wed, Nov 25, 2015 at
Compaction is done to improve reads. The compaction process is very CPU-intensive
and can make writes perform slowly, since writes are also CPU-bound.
On Wed, Nov 25, 2015 at 11:12 AM, wrote:
Hi all,
Does compaction throughput impact write performance ?
Can increasing the value of compaction_throughput_mb_per_sec improve insert
performance? If yes, is it possible to explain the concept to me?
Thanks
Found the problem, it turns out that what Bharatendra suggested was
correct.
I had set memtable_flush_writers equal to the number of cores but
hadn't restarted the Cassandra process, so the new configuration hadn't
taken effect.
On Wed, Jul 29, 2015 at 12:59 PM, Robert Coli wrote:
On Tue, Jul 28, 2015 at 4:49 PM, Soerian Lieve wrote:
I continue to suggest that you file a JIRA ticket... I feel you have done
sufficient community-based due diligence to question whether this
I did already set that to the number of cores of the machines (24), but it
made no difference.
On Tue, Jul 28, 2015 at 4:44 PM, Bharatendra Boddu
wrote:
Increase memtable_flush_writers. In cassandra.yaml, it is recommended to
increase this setting when SSDs are used for storing data.
On Fri, Jul 24, 2015 at 1:55 PM, Soerian Lieve wrote:
I was on CFQ so I changed it to noop. The problem still persisted however.
Do you have any other ideas?
On Thu, Jul 23, 2015 at 5:00 PM, Jeff Ferland wrote:
Imbalanced disk use is ok in itself. It’s only saturated throughput that’s
harmful. RAID 0 does give more consistent throughput and balancing, but that’s
another story.
As for your situation with SSD drives, you can probably tweak this by making
sure the scheduler is set to noop, or read up on
htt
I set up RAID0 after experiencing highly imbalanced disk usage with a JBOD
setup so my transaction logs are indeed on the same media as the sstables.
Is there any alternative to setting up RAID0 that doesn't have this issue?
On Thu, Jul 23, 2015 at 4:03 PM, Jeff Ferland wrote:
My immediate guess: your transaction logs are on the same media as your
sstables and your OS prioritizes read requests.
-Jeff
> On Jul 23, 2015, at 2:51 PM, Soerian Lieve wrote:
Hi,
I am currently performing benchmarks on Cassandra. Independently from each
other I am seeing ~100k writes/sec and ~50k reads/sec. When I read and
write at the same time, writing drops down to ~1000 writes/sec and reading
stays roughly the same.
The heap used is the same as when only reading,
280 sec: 865658 operations; 2661.5 current ops/sec; [INSERT
AverageLatency(us)=3640.16]
290 sec: 865658 operations; 0 current ops/sec;
It may also indicate that C* is trying to finish active tasks and your write
requests have been in the queue for all 10 sec. Try to monitor what C* is
doing: $ watch nodetool
ParNew GC (used by default in Cassandra) uses a 'stop-the-world' algorithm,
which means your application has to be stopped to do GC.
You can run the jstat command to monitor GC activity and check whether your
write performance is related to GC, e.g.:
$ jstat -gc <pid> 1s
But it shouldn't drop through
Hi,
I am doing some performance benchmarks on a *single*-node Cassandra
1.2.4. BTW, the machine is dedicated to running one Cassandra instance.
The workload is 100% writes. The throughput varies dramatically and
sometimes even drops to 0. I have tried several things below and still
got the same observa
requests
> to different Cassandra nodes instead of only to one, leads to increased write
> performance in Cassandra.
>
In general yes, clients should distribute their writes.
> Is there any particular way in which write performance can be measured,
> preferably from the client?
in general, distribution of write requests to different Cassandra nodes
instead of only to one leads to increased write performance in Cassandra.
Is there any particular way in which write performance can be measured,
preferably from the client?
On Dec 18, 2013 8:30 AM, "Aaron Morton" wrote:
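One simple way to measure write throughput from the client, as asked above, is to time a fixed number of requests end-to-end. A sketch with a stubbed write function (`do_write` is a placeholder assumption, not a real driver API; swap in your actual insert call):

```python
import time

def measure_ops_per_sec(do_write, n=10_000):
    # Time n writes as seen from the client; this captures driver,
    # network, and server latency together.
    start = time.perf_counter()
    for i in range(n):
        do_write(i)
    elapsed = time.perf_counter() - start
    return n / elapsed

# Stub standing in for a real per-request driver call.
rate = measure_ops_per_sec(lambda i: None, n=1000)
```

For a distributed client, run this per thread/process and sum the rates.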
e 2 Cassandra nodes instead of sending the requests to a single
> node? Currently, I don't see any improvement even if I distribute the write
> requests to different hosts. How can I improve the write performance overall?
Normally we expect 3k to 4k non-counter writes per core per node, i
> With a single node I get 3K for cassandra 1.0.12 and 1.2.12. So I suspect
> there is some network chatter. I have started looking at the sources, hoping
> to find something.
1.2 is pretty stable, I doubt there is anything in there that makes it run
slower than 1.0. It’s probably something in y
the columns. Also, can write throughput be increased by
distributing the write requests between the 2 Cassandra nodes instead of
sending the requests to a single node? Currently, I don't see any
improvement even if I distribute the write requests to different hosts.
How can I improve the write
Quote from
http://www.datastax.com/dev/blog/performance-improvements-in-cassandra-1-2
*"Murmur3Partitioner is NOT compatible with RandomPartitioner, so if you’re
upgrading and using the new cassandra.yaml file, be sure to change the
partitioner back to RandomPartitioner"*
On Thu, Dec 12, 2013 at 11:15 AM, J. Ryan Earl wrote:
Why did you switch to RandomPartitioner away from Murmur3Partitioner? Have
you tried with Murmur3?
1. # partitioner: org.apache.cassandra.dht.Murmur3Partitioner
2. partitioner: org.apache.cassandra.dht.RandomPartitioner
On Fri, Dec 6, 2013 at 10:36 AM, srmore wrote:
On Wed, Dec 11, 2013 at 10:49 PM, Aaron Morton wrote:
> It is the write latency, read latency is ok. Interestingly the latency is low
> when there is one node. When I join other nodes the latency drops about 1/3.
> To be specific, when I start sending traffic to the other nodes the latency
> for all the nodes increases, if I stop traffic to other n
Thanks Aaron
On Wed, Dec 11, 2013 at 8:15 PM, Aaron Morton wrote:
> Changed memtable_total_space_in_mb to 1024 still no luck.
Reducing memtable_total_space_in_mb will increase the frequency of flushing to
disk, which will create more work for compaction and result in increased IO.
You should return it to the default.
> when I send traffic to one node its per
Changed memtable_total_space_in_mb to 1024 still no luck.
On Fri, Dec 6, 2013 at 11:05 AM, Vicky Kak wrote:
Not long: Uptime (seconds) : 6828
Token: 56713727820156410577229101238628035242
ID : c796609a-a050-48df-bf56-bb09091376d9
Gossip active: true
Thrift active: true
Native Transport active: false
Load : 49.71 GB
Generation No: 1386344053
Uptime (secon
How long had the server been up: hours, days, months?
On Fri, Dec 6, 2013 at 10:41 PM, srmore wrote:
Looks like I am spending some time in GC.
java.lang:type=GarbageCollector,name=ConcurrentMarkSweep
CollectionTime = 51707;
CollectionCount = 103;
java.lang:type=GarbageCollector,name=ParNew
CollectionTime = 466835;
CollectionCount = 21315;
On Fri, Dec 6, 2013 at 9:58 AM, Jason Wee wrote:
Can you set the memtable_total_space_in_mb value? It is defaulting to 1/3 of
the heap, which is 8/3 ≈ 2.6 GB in capacity.
http://www.datastax.com/dev/blog/whats-new-in-cassandra-1-0-improved-memory-and-disk-space-management
The flushing of 2.6 GB to disk might slow the performance if frequently
called, may
On Fri, Dec 6, 2013 at 9:59 AM, Vicky Kak wrote:
Apologies, was tuning JVM and that's what was in my mind.
Here are the cassandra settings http://pastebin.com/uN42GgYT
> The spikes ar
You have passed the JVM configurations and not the cassandra configurations
which is in cassandra.yaml.
The spikes are not that significant in our case and we are running the
cluster with 1.7 gb heap.
Are these spikes causing any issue at your end?
On Fri, Dec 6, 2013 at 9:10 PM, srmore wrote
Hi srmore,
Perhaps use jconsole and connect to the JVM using JMX. Then, under the
MBeans tab, start inspecting the GC metrics.
/Jason
On Fri, Dec 6, 2013 at 11:40 PM, srmore wrote:
On Fri, Dec 6, 2013 at 9:32 AM, Vicky Kak wrote:
> Hard to say much without knowing about the cassandra configurations.
>
The cassandra configuration is
-Xms8G
-Xmx8G
-Xmn800m
-XX:+UseParNewGC
-XX:+UseConcMarkSweepGC
-XX:+CMSParallelRemarkEnabled
-XX:SurvivorRatio=4
-XX:MaxTenuringThreshold=2
-X
Hard to say much without knowing the Cassandra configuration.
Yes, compactions/GCs could spike the CPU; I had similar behavior with my
setup.
-VK
On Fri, Dec 6, 2013 at 7:40 PM, srmore wrote:
We have a 3 node cluster running cassandra 1.2.12, they are pretty big
machines 64G ram with 16 cores, cassandra heap is 8G.
The interesting observation is that, when I send traffic to one node its
performance is 2x more than when I send traffic to all the nodes. We ran
1.0.11 on the same box and
Hi All,
I found a significant performance problem when using composite primary key,
"wide" row and BATCH.
Ideally, I would like to have following structure:
CREATE TABLE bar1 (
some_id bigint,
some_type text,
some_value int,
some_data text
event that
an instance goes down for whatever reason?
Ken
- Original Message -
From: "Alain RODRIGUEZ"
To: user@cassandra.apache.org
Sent: Thursday, February 14, 2013 8:34:06 AM
Subject: Re: Write performance expectations...
Hi Ken,
You really should take a l
Sent: Wednesday, February 13, 2013 11:06:30 AM
Subject: Re: Write performance expectations...
2500 inserts per second is about what a single python thread using pycassa can
do against a local node. Are you using multiple threads for the inserts?
Multiple processes?
On Wed, Feb 13, 2013 at 8:21 AM, Alain RODRIGU
e .
Ken
- Original Message -
From: "Alain RODRIGUEZ"
To: user@cassandra.apache.org
Sent: Wednesday, February 13, 2013 9:21:18 AM
Subject: Re: Write performance expectations...
Is there a particular reason for you to use EBS ? Instance Store are
recommended because t
I'm not using multi-threads/processes. I'll try multi-threading to see if I get
a boost.
Thanks.
Ken
- Original Message -
From: "Tyler Hobbs"
To: user@cassandra.apache.org
Sent: Wednesday, February 13, 2013 11:06:30 AM
Subject: Re: Write performance exp
2013/2/13
Hello,
New member here, and I have (yet another) question on write performance.
I'm using Apache Cassandra version 1.1, Python 2.7 and Pycassa 1.7.
I have a cluster of 2 datacenters, each with 3 nodes, on AWS EC2 using EBS and
the RandomPartitioner. I'm writing to a column family in
gmail.com]
Sent: Monday, January 21, 2013 17:28
To: user@cassandra.apache.org
Subject: Concurrent write performance
Folks,
I would like to write(insert or update) to a single row in a column family. I
have concurrent requests which will write to a single row. Do we see any
performance implicati
Folks,
I would like to write (insert or update) to a single row in a column family.
I have concurrent requests which will write to a single row. Do we see any
performance implications from concurrent writes to a single row, where the
comparator has to sort the columns at the same time?
Please s
in the
background. But how do secondary indexes affect write performance?
If the answer is "it doesn't", then how do brand new records get
located by a subsequent indexed query?
If someone has a link to a post with some of this info, that would be
awesome.
David
Morning,
Was reading up on secondary indexes and on the Datastax post about them, it
mentions the additional management overhead, and also that if you alter an
existing column family, that data will be updated in the background. But
how do secondary indexes affect write performance?
If the
You should be able to get more than that.
Run nodetool cfstats, look at the Write Latency (this is the recent latency,
i.e. is reset each time you run it). This will give you an idea of how long an
individual node is spending on a write.
Fire up JConsole, go to the StorageProxy MBean and look
Would there be any reason why I can't write more than 875 writes/sec to a
cluster of 2 cassandra boxes? They are quad core machines with 8gb of ram
running raid 10, so not huge servers….but certainly enough to handle a much
larger load than that.
We are feeding data into it through a Flume sin
Hi Aaron, thanks for the reply. I suspected it might be the
read-and-write that causes the slower updates.
Regards,
P.
On Tue, Apr 17, 2012 at 11:52, aaron morton wrote:
Secondary indexes require a read and a write (potentially two) for every
update. Regular mutations are no look writes and are much faster.
Just like in a RDBMS, it's more efficient to insert data and then create the
index than to insert data with the index present.
An alternative is to create
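The read-before-write cost described above can be stated as a toy op-count model (the numbers are illustrative of the idea, not Cassandra's actual code path):

```python
def update_cost(has_secondary_index: bool) -> dict:
    # A regular mutation is a "no-look" write: no read required.
    # With a secondary index, the old indexed value must be read so its
    # stale index entry can be removed, then both the row and the new
    # index entry are written.
    if not has_secondary_index:
        return {"reads": 0, "writes": 1}
    return {"reads": 1, "writes": 2}

plain = update_cost(False)
indexed = update_cost(True)
```

This is why inserting data first and building the index afterwards is cheaper than inserting with the index present.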
Hello.
We are using java async thrift client.
As of ruby, it seems you need to use something like
http://www.mikeperham.com/2010/02/09/cassandra-and-eventmachine/
(Not sure as I know nothing about ruby).
Best regards, Vitalii Tymchyshyn
2012/4/3 Jeff Williams
Where is your client running?
-Original Message-
From: Jeff Williams [mailto:je...@wherethebitsroam.com]
Sent: Tuesday, April 03, 2012 11:09 AM
To: user@cassandra.apache.org
Subject: Re: Write performance compared to Postgresql
Vitalii,
Yep, that sounds like a good idea. Do you have
Vitalii,
Yep, that sounds like a good idea. Do you have any more information about how
you're doing that? Which client?
Because even with 3 concurrent client nodes, my single PostgreSQL server is
still outperforming my 2-node Cassandra cluster, although the gap is narrowing.
Jeff
On Apr 3, 2
Note that having tons of TCP connections is not good. We are using async
client to issue multiple calls over single connection at same time. You
can do the same.
Best regards, Vitalii Tymchyshyn.
03.04.12 16:18, Jeff Williams wrote:
Ok, so you think the write speed is limited by the cli
Ok, so you think the write speed is limited by the client and protocol, rather
than the cassandra backend? This sounds reasonable, and fits with our use case,
as we will have several servers writing. However, a bit harder to test!
Jeff
On Apr 3, 2012, at 1:27 PM, Jake Luciani wrote:
> Hi Jeff,
Hi Jeff,
Writing serially over one connection will be slower. If you run many threads
hitting the server at once you will see throughput improve.
Jake
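Jake's point about many threads improving throughput can be sketched with a thread pool; `write_one` here is a stand-in that just sleeps to simulate network wait, not a real driver call:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def write_one(i):
    # Stand-in for one blocking insert: mostly waiting on the network,
    # which is why extra threads can overlap the latency.
    time.sleep(0.001)
    return i

def run(n, workers):
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(write_one, range(n)))
    return time.perf_counter() - start, results

serial, _ = run(200, workers=1)       # one connection, one request at a time
parallel, results = run(200, workers=20)  # 20 requests in flight
```

Since each write is latency-bound rather than CPU-bound, 20 in-flight requests finish far sooner than 200 sequential ones; the same effect is achieved with an async client multiplexing one connection.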
On Apr 3, 2012, at 7:08 AM, Jeff Williams wrote:
Hi,
I am looking at cassandra for a logging application. We currently log to a
Postgresql database.
I set up 2 cassandra servers for testing. I did a benchmark where I had 100
hashes representing logs entries, read from a json file. I then looped over
these to do 10,000 log inserts. I repeated
I was inserting the contents of Wikipedia, so the columns were multi-kilobyte
strings. It's a good data source to run tests with, as the records and
relationships are somewhat varied in size.
My main point was to say the best way to benchmark Cassandra is with multiple
server nodes, multipl
Since each row in my column family has 30 columns, wouldn't this translate
to ~8,000 rows per second... or am I misunderstanding something?
Talking in terms of columns, my load test would seem to perform as follows:
100,000 rows / 26 sec * 30 columns/row = 115K columns per second.
That's on a dua
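The arithmetic above checks out:

```python
rows = 100_000
seconds = 26
cols_per_row = 30

rows_per_sec = rows / seconds             # ~3,846 rows/sec
cols_per_sec = rows_per_sec * cols_per_row
print(round(cols_per_sec))
```

So ~115K columns/sec, and dividing the 250K columns/sec figure below by 30 columns/row gives the ~8,000 rows/sec mentioned.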
To give an idea, last March (2010) I ran a much older Cassandra on 10 HP
blades (dual socket, 4 core, 16GB, 2.5" laptop HDD) and was writing around 250K
columns per second, with 500 Python processes loading the data from Wikipedia
running on another 10 HP blades.
This was my first out of the
You don't give many details, but I would guess:
- your benchmark is not multithreaded
- mongodb is not configured for durable writes, so you're really only
measuring the time for it to buffer it in memory
- you haven't loaded enough data to hit "mongo's index doesn't fit in
memory anymore"
On Tue
Use more nodes to increase your write throughput. Testing on a single
machine is not really a viable benchmark for what you can achieve with
cassandra.
I am working for client that needs to persist 100K-200K records per second
for later querying. As a proof of concept, we are looking at several
options including nosql (Cassandra and MongoDB).
I have been running some tests on my laptop (MacBook Pro, 4GB RAM, 2.66 GHz,
Dual Core/4 logical cores)
there an
inherent reason why the high CPU bound on continuous bulk writes can't be
improved?
Thanks for all the help,
Rishi
From: Jonathan Ellis
To: user@cassandra.apache.org
Sent: Fri, June 11, 2010 12:22:39 PM
Subject: Re: Cassandra Write Performance, CPU
can cause them to stack up on top of the available CPU and memory
>> resources.
>>
>> In such a case (continuous bulk writes), you are causing all of these
>> costs to be taken in more of a synchronous (not delayed) fashion. You
>> are not allowing the background processing
path.
Thanks,
Rishi
From: Mike Malone
To: user@cassandra.apache.org
Sent: Fri, June 11, 2010 9:20:06 AM
Subject: Re: Cassandra Write Performance, CPU usage
Jonathan, while I agree with you re: this being an unusual load for the system,
it is interesting that
. You
> are not allowing the background processing that helps reduce client
> blocking (by deferring some processing) to do its magic.
>
>
>
> On Thu, Jun 10, 2010 at 7:42 PM, Rishi Bhardwaj
> wrote:
> > Hi
> > I am investigating Cassandra write performance and see ve
ntinuous
> bulk writes?
> Thanks for all the help,
> Rishi
>
> From: Jonathan Shook
> To: user@cassandra.apache.org
> Sent: Thu, June 10, 2010 7:39:24 PM
> Subject: Re: Cassandra Write Performance, CPU usage
>
> You are testing Cassandra in a way
To: user@cassandra.apache.org
Sent: Thu, June 10, 2010 7:39:24 PM
Subject: Re: Cassandra Write Performance, CPU usage
You are testing Cassandra in a way that it was not designed to be used.
Bandwidth to disk is not a meaningful example for nearly anything
except for filesystem benchmarking and things very nearl
deferring some processing) to do its magic.
On Thu, Jun 10, 2010 at 7:42 PM, Rishi Bhardwaj wrote:
> Hi
> I am investigating Cassandra write performance and see very heavy CPU usage
> from Cassandra. I have a single node Cassandra instance running on a dual
> core (2.66 Ghz Intel )
On Fri, Jun 11, 2010 at 6:12 AM, Rishi Bhardwaj wrote:
Hi
I am investigating Cassandra write performance and see very heavy CPU usage
from Cassandra. I have a single node Cassandra instance running on a dual core
(2.66 Ghz Intel ) Ubuntu 9.10 server. The writes to Cassandra are being
generated from the same server using BatchMutate(). The client
Hi Brandon,
I've recoded my client (using threads). Now I'm getting around 240
inserts per second (I think the bottleneck is now the virtualized hardware:
single CPU). The stress.py script gives about 50 inserts/sec.
I'll test Cassandra on real hardware to see if it performs better under a
On Thu, Mar 18, 2010 at 1:22 PM, Martin Probst (RobHost Support) <
supp...@robhost.de> wrote:
> Hi Tom,
>
> no we're not using a connection pool, only pure java on cmd.
>
> Cheers,
> Martin
>
>
The second graph here is relevant:
http://spyced.blogspot.com/2010/01/cassandra-05.html
Rather than cre
>
> On 18 mar 2010, at 19.03em, Martin Probst (RobHost Support) wrote:
>
> > Hi,
> >
> > we've tested the write performance on a single and dual node cluster and
> > the results are strangely poor. We've got about 30 inserts per second which
> > see
wrote:
>> Hi,
>>
>> we've tested the write performance on a single and dual node cluster and the
>> results are strangely poor. We've got about 30 inserts per second which
>> seems a little bit slow?! The strange about is, that the node's we've