We don't use default ports. Whoops! Now I've advertised mine. I did try
disabling internode compression for all in cassandra.yaml, but it still did
not work. I have to open the insecure storage port to public IPs.
On Tue, Apr 16, 2013 at 4:59 PM, Edward Capriolo wrote:
> So cassandra does inter node
Hi Ravi
Key1 --> 123/IMAGE
Key2 --> 123/DOCUMENTS
Key3 --> 123/MULTIMEDIA
Which one is your row key? Is it Key1, Key2, or Key3?
On Thu, Apr 18, 2013 at 3:56 PM, aaron morton wrote:
> All rows with the same key go on the same nodes. So if you use the same
> row key in different CF's they will
I have started working with the Cassandra database. I am planning to use the
DataStax API to upsert/read into/from Cassandra. I am totally new to this
DataStax API (which uses the new binary protocol) and I have not been able to
find much documentation with proper examples either.
I am not su
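For illustration, a minimal sketch of reading and writing through the DataStax
Java driver (the binary-protocol client). The keyspace, table and column names
below are made up for the example, not taken from the thread:

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.ResultSet;
    import com.datastax.driver.core.Row;
    import com.datastax.driver.core.Session;

    public class QuickStart {
        public static void main(String[] args) {
            // Connect to a local node and a hypothetical keyspace "my_ks"
            Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
            Session session = cluster.connect("my_ks");

            // Upsert: an INSERT overwrites any existing row with the same key
            session.execute("INSERT INTO users (id, name) VALUES (1, 'alice')");

            // Read it back (assuming id is a bigint column)
            ResultSet rs = session.execute("SELECT id, name FROM users WHERE id = 1");
            for (Row row : rs) {
                System.out.println(row.getLong("id") + " -> " + row.getString("name"));
            }
            cluster.shutdown();
        }
    }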
Hi Aaron,
1. A timeout of more than 10ms is the maximum value we could accept.
2. It is random key access, not a range scan.
3. We have only one column family for that keyspace; we select the columns.
Thanks.
Best wishes,
Stanley Xu
On Fri, Apr 19, 2013 at 2:22 AM, aaron morton wrote:
>
Write performance decreases.
Reads are basically blocked, too. Sometimes I have to wait 3-4 seconds to
get a count even though there are only a couple of thousand small entries in a
table.
On Thu, Apr 18, 2013 at 8:37 PM, aaron morton wrote:
> After about 1-2K inserts I get significant performance
Yes, the exceptions were only on the 1.1.9 nodes.
Unfortunately, we couldn't complete the upgrade because it was adversely
affecting the applications using the cluster during that time.
Best,
John
On Tue, Apr 16, 2013 at 2:03 PM, aaron morton wrote:
> Is this a known issue? Or rolling upgrade fo
Use the community edition and try it out. Compaction has nothing to do with the
CPU; it's all about raw disk speed. What kind of disks do you have? 7.2k, 10k,
15k RPM?
Are your keys unique or are you doing updates? If the writes are unique, I would
not worry about compaction too much and let it run fas
Hi Wei,
Thank you for your reply.
Yes, I observed that concurrent_compactors and multithreaded_compaction have
no effect on LCS. I also tried a larger SSTable size; it helped keep the
SSTable count low and so kept the pending compactions low. But even though I
have more CPU, I am not able t
Thanks Aaron,
Please find answers to your questions.
1. I started the test with default parameters and compaction was backing up,
so I went for various options.
2. The data is on RAID10.
3. Watching disk latency on DSE OpsCenter as well as in iostat, the await is
always 35 to 40 ms for longer period
> Parameters used:
> • SSTable size: 500MB (tried various sizes from 20MB to 1GB)
> • Compaction throughput mb per sec: 250MB (tried from 16MB to 640MB)
> • Concurrent write: 196 (tried from 32 to 296)
> • Concurrent compactors: 72 (tried from disabled up to 172)
>
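Two runtime knobs that are often adjusted while experimenting with compaction
settings like the ones quoted above (values here are illustrative, not
recommendations):

    nodetool setcompactionthroughput 0   # 0 disables throttling; any MB/s value works
    nodetool compactionstats             # shows pending compactions and progress

compaction_throughput_mb_per_sec and concurrent_compactors can also be set in
cassandra.yaml, but those changes require a restart to take effect.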
This is roughly the lift and shift process I use.
Note that disabling thrift and gossip does not stop an existing repair session.
So I often drain and then shut down, and copy the live data dir rather than a
snapshot dir.
Cheers
-
Aaron Morton
Freelance Cassandra Consultant
New Zealand
Hi Alexis, thank you so much for the inputs.
Let me try the suggested options. I also look forward to any further
suggestions on our compaction.
Thanks,
Jay
On Thu, Apr 18, 2013 at 1:03 PM, Alexis Rodríguez <
arodrig...@inconcertcc.com> wrote:
> Jay,
>
> await, according to ios
> After about 1-2K inserts I get significant performance decrease.
A decrease in performance doing what?
Cheers
-
Aaron Morton
Freelance Cassandra Consultant
New Zealand
@aaronmorton
http://www.thelastpickle.com
On 19/04/2013, at 4:43 AM, Oleksandr Petrov wrote:
> Hi,
>
> I
> Sorry to ask in this thread, but for some time I have been wondering how CFS
> can be installed on normal Cassandra?
CFS is part of the DataStax Enterprise product.
Cheers
-
Aaron Morton
Freelance Cassandra Consultant
New Zealand
@aaronmorton
http://www.thelastpickle.com
On 19/04
Was the schema created with CQL or the CLI ? (It's not a good idea to manage
one with the other)
Can you provide the schema after the update and the update cf statement?
Cheers
-
Aaron Morton
Freelance Cassandra Consultant
New Zealand
@aaronmorton
http://www.thelastpickle.com
We have tried very hard to speed up LCS on 1.1.6 with no luck. It seems to be
single-threaded, and there is not much parallelism you can achieve. 1.2 does
come with parallel LCS, which should help.
One more thing to try is to enlarge the SSTable size, which will reduce the
number of SSTables. It *might* hel
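For example, with CQL 3 on Cassandra 1.2 the SSTable size for an LCS table can
be raised like this (the table name is hypothetical and 256 MB is just an
illustrative value):

    ALTER TABLE my_table
      WITH compaction = {'class': 'LeveledCompactionStrategy',
                         'sstable_size_in_mb': 256};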
> Is it possible to make some configuration so that there will be something
> like a memtable queue in memory, e.g. 4 memtables in memory (mem1, mem2,
> mem3, mem4, ordered by time series), and Cassandra will flush mem1, and once
> mem5 is full, it will flush t
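Not an answer from the thread, but related knobs do exist in cassandra.yaml. A
sketch of the flush-queue settings (the values shown are the usual 1.1/1.2-era
defaults and may differ between versions):

    # number of full memtables allowed to queue up waiting for a flush writer
    memtable_flush_queue_size: 4
    # number of threads writing memtables to disk
    memtable_flush_writers: 1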
Looks like there are no repairs running. Is this just an issue with OpsCenter?
Try restarting the OpsCenter agent: sudo service opscenter-agent restart
Cheers
-
Aaron Morton
Freelance Cassandra Consultant
New Zealand
@aaronmorton
http://www.thelastpickle.com
On 18/04/2013,
Jay,
await, according to iostat's man page, is the time it takes a request to the
disk to get served. You may try changing the I/O scheduler; I've read that
noop is recommended for SSDs, you can check here: http://goo.gl/XMiIA
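A quick way to inspect and switch the scheduler on Linux (sda is a placeholder
device; the change lasts only until reboot unless you also set it via the
elevator= kernel boot parameter):

    cat /sys/block/sda/queue/scheduler           # e.g. noop deadline [cfq]
    echo noop | sudo tee /sys/block/sda/queue/scheduler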
Regarding compaction, a week ago we had serious problems with compaction i
By the way, the compaction and commit log disk latency are two separate
problems, as I see it.
The important one is the compaction problem. How can I speed that up?
Thanks,
Jay
On Thu, Apr 18, 2013 at 12:07 PM, Jay Svc wrote:
> Looks like the formatting is a bit messed up. Please let me know if you want
Looks like the formatting is a bit messed up. Please let me know if you want
the same in a clean format.
Thanks,
Jay
On Thu, Apr 18, 2013 at 12:05 PM, Jay Svc wrote:
> Hi Aaron, Alexis,
>
> Thanks for reply, Please find some more details below.
>
> *Core problems:* Compaction is taking a long time to
Hi Aaron, Alexis,
Thanks for the reply. Please find some more details below.
*Core problems:* Compaction is taking a long time to finish, so it will
affect my reads. I have spare CPU and memory and want to utilize them to speed
up the compaction process.
*Parameters used:*
1. SSTable size: 500MB (tri
Hi,
I'm trying to persist some event data. I've tried to identify the
bottleneck, and it seems to work like this:
If I create a table with a primary key based on (application, environment,
type and emitted_at):
CREATE TABLE events (application varchar, environment varchar, type
varchar, additional
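The statement above is cut off; a hedged sketch of what such a table might look
like in CQL 3, with any column names beyond those mentioned invented for the
example:

    CREATE TABLE events (
        application varchar,
        environment varchar,
        type varchar,
        emitted_at timestamp,
        additional_data varchar,
        PRIMARY KEY (application, environment, type, emitted_at)
    );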
Jay, do you have metrics of disk usage for the disks that contain your data
directories? Compaction operates over those files; maybe your problems are
with those disks and not with the disks that hold the commitlog.
On Thu, Apr 18, 2013 at 1:33 PM, Jay Svc wrote:
> Hi Alexis, Yes compaction ha
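A quick way to gather the per-device numbers Alexis is asking about, assuming a
Linux host with the sysstat package installed:

    iostat -dxm 5
    # watch the await and %util columns for the devices backing the data
    # directories, not just the commitlog device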
Hi Alexis, yes, compaction happens on the data files. My questions are:
1. Why is my disk latency high for the SSDs, which are only used for the commit log?
2. Why is my compaction not catching up with my write traffic in spite of
low CPU, low memory and low JVM usage?
I am adding more details to this thread.
Thanks,
Jayant
This should work.
Another option is to follow a process similar to what we did recently: we
successfully upgraded 12 instances from large to xlarge instances in AWS. I
chose not to replace nodes, as restoring data from the ring would have taken
significant time and put the clust
I would say add your 3 servers at the 3 tokens where you want them, let's
say:
{
"0": {
"0": 0,
"1": 56713727820156410577229101238628035242,
"2": 113427455640312821154458202477256070485
}
}
or these tokens -1 or +1 if those tokens are already in use. And then jus
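A sketch of how that might be done: set the chosen token in each new node's
cassandra.yaml before it bootstraps, or move an already-running node onto the
token with nodetool (the tokens are the ones from the example above; the host
is a placeholder):

    # cassandra.yaml on the first new node
    initial_token: 0

    # or, for a node that is already in the ring
    nodetool -h <host> move 56713727820156410577229101238628035242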
Hi,
The company I work for is having so much success that we are expanding
worldwide :). We have to deploy our Cassandra servers worldwide too, in
order to improve latency for our new customers abroad.
I am wondering about the process of growing from one data center to several.
First thing i
Hi,
What is the best practice for moving from a cluster of 7 nodes (m1.xlarge) to 3
nodes (hi1.4xlarge)?
Thanks,
You can get the topology info from Thrift's describe_ring, but you cannot
get host/gossip status through Thrift. It does make sense as something to
add. With the native protocol, and with the fat client (storage proxy), you can
hook into these events.
An example of this is here (fat client):
https://
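Separately, a minimal sketch of calling describe_ring over raw Thrift from Java
(host, port and keyspace name are placeholders):

    import org.apache.cassandra.thrift.Cassandra;
    import org.apache.cassandra.thrift.TokenRange;
    import org.apache.thrift.protocol.TBinaryProtocol;
    import org.apache.thrift.transport.TFramedTransport;
    import org.apache.thrift.transport.TSocket;
    import org.apache.thrift.transport.TTransport;

    public class RingInfo {
        public static void main(String[] args) throws Exception {
            TTransport transport = new TFramedTransport(new TSocket("127.0.0.1", 9160));
            transport.open();
            Cassandra.Client client = new Cassandra.Client(new TBinaryProtocol(transport));
            // one TokenRange per ring segment, with the replica endpoints for that range
            for (TokenRange range : client.describe_ring("my_keyspace")) {
                System.out.println(range.getStart_token() + " .. " + range.getEnd_token()
                        + " -> " + range.getEndpoints());
            }
            transport.close();
        }
    }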
Hi - We are planning to develop a custom client using the Thrift API for
Cassandra. Are the following available from JMX?
- Can Cassandra provide info about node status?
- DC failover detection (data center down vs. some nodes down)
- How do we get load info from each node?
Thanks,
Kanwar
I thought so.
Sorry to ask in this thread, but for some time I have been wondering how CFS
can be installed on normal Cassandra?
On Thu, Apr 18, 2013 at 3:23 PM, Michal Michalski wrote:
> Probably Robert meant CFS:
> http://www.datastax.com/wp-content/uploads/2012/09/WP-DataStax-HDFSvsCFS.pdf
Hi all,
we solved the problem...
First we upgraded Cassandra to version 1.2.3, after which another exception
was thrown, "No hosts to borrow from", and we discovered that the call
"ConnectionPoolConfigurationImpl(...).setConnectTimeout(-1)" was the cause...
and we put .setConnectTimeout
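For reference, a sketch of an Astyanax pool configuration with a positive
connect timeout; the 2000 ms value and the pool/seed settings are illustrative,
not the values actually used by the poster:

    ConnectionPoolConfigurationImpl pool =
        new ConnectionPoolConfigurationImpl("MyConnectionPool")
            .setPort(9160)
            .setMaxConnsPerHost(10)
            .setSeeds("127.0.0.1:9160")
            .setConnectTimeout(2000);   // milliseconds; -1 here caused "No hosts to borrow from"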
As stated in the topic, I'm unable to drop a secondary index using either the
cli or cqlsh. In both cases it looks like the command is processed
properly (some uuid shows up in the cli, no output in cqlsh), I can see in the
logs that the schema is going to be updated (index name and type are set to
null) and then.
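For reference, the cqlsh statement in question would normally look like this
(the index name is hypothetical):

    DROP INDEX users_email_idx;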
Probably Robert meant CFS:
http://www.datastax.com/wp-content/uploads/2012/09/WP-DataStax-HDFSvsCFS.pdf
:-)
On 18.04.2013 14:10, Nikolay Mihaylov wrote:
What's CDFS? I am sure you are not referring to ISO 9660, i.e. the CD-ROM
filesystem? :)
On Wed, Apr 17, 2013 at 10:42 PM, Robert Coli wrote:
Dear buddies,
We are using Cassandra to handle a scenario like the following:
1. A table using a Long as the key, with one and only one Integer column per
row, and 2 hours as the TTL.
2. The wps (writes per second) is 45,000; the qps (reads per second) would be
about 30-200.
3. There isn't a
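A minimal CQL 3 sketch of the kind of table described in point 1 (all names are
made up; 7200 seconds = 2 hours):

    CREATE TABLE data_by_key (
        key   bigint PRIMARY KEY,
        value int
    );

    INSERT INTO data_by_key (key, value) VALUES (12345, 1) USING TTL 7200;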
What's CDFS? I am sure you are not referring to ISO 9660, i.e. the CD-ROM
filesystem? :)
On Wed, Apr 17, 2013 at 10:42 PM, Robert Coli wrote:
> On Wed, Apr 17, 2013 at 11:19 AM, aaron morton wrote:
>
>> It's the same as the Apache version, but DSC comes with samples and the
>> free version of Ops Centr
Thank you Aaron.
From: aaron morton [mailto:aa...@thelastpickle.com]
Sent: Wednesday, 17 April 2013 20:20
To: user@cassandra.apache.org
Subject: Re: differences between DataStax Community Edition and Cassandra
package
It's the same as the Apache version, but DSC comes with samples and the free
ve
I use Hector.
On Thu, Apr 18, 2013 at 1:35 PM, aaron morton wrote:
> > ERROR 08:40:42,684 Error occurred during processing of message.
> > java.lang.StringIndexOutOfBoundsException: String index out of range:
> > -2147418111
> > at java.lang.String.checkBounds(String.java:397)
> >
> ERROR 08:40:42,684 Error occurred during processing of message.
> java.lang.StringIndexOutOfBoundsException: String index out of range:
> -2147418111
> at java.lang.String.checkBounds(String.java:397)
> at java.lang.String.<init>(String.java:442)
> at org.apache.thrift.pr
All rows with the same key go on the same nodes. So if you use the same row key
in different CFs they will be on the same nodes, e.g. have CFs called Image,
Documents, Meta and store rows in all of them under the 123 key.
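A small CQL sketch of that idea (table and column names are illustrative):

    CREATE TABLE image     (key text PRIMARY KEY, payload blob);
    CREATE TABLE documents (key text PRIMARY KEY, payload blob);
    CREATE TABLE meta      (key text PRIMARY KEY, payload blob);

    INSERT INTO image     (key, payload) VALUES ('123', 0x00);
    INSERT INTO documents (key, payload) VALUES ('123', 0x00);
    INSERT INTO meta      (key, payload) VALUES ('123', 0x00);
    -- all three '123' rows are placed on the same replica nodes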
Cheers
-
Aaron Morton
Freelance Cassandra Consultant
New Zealand
> Cassandra -f
>
> So I believe that's why it is getting started as a service.
That starts it in the foreground.
Sorry, I don't use Windows. Try moving the install and see what complains at
startup; check the system logs to see if there is an error.
Cheers
-
Aaron Morton
Freelance Cassandra Consultant
> I believe that compaction occurs on the data directories and not in the
> commitlog.
Yes, compaction only works on the data files.
> When I ran iostat, I see "await" of 26 ms to 30 ms for my commit log disk. My CPU
> is less than 18% used.
>
> How do I reduce the disk latency for my commit log dis