Re: Timeout for only one keyspace in cluster

2018-07-23 Thread learner dba
Thanks Jonathan. This is a device addition use case where we can't assign the same value to more than one device, so we need the isolation property. With a regular UUID, we may end up assigning the same value to two devices. I will talk to the dev team and see if this can be changed and handled at application leve

Re: Timeout for only one keyspace in cluster

2018-07-23 Thread Jonathan Haddad
You don’t get this guarantee with counters. Do not use them for unique values. Use a UUID instead. On Mon, Jul 23, 2018 at 9:11 AM learner dba wrote: > James, > > Yes, counter is implemented due to valid reasons. We need this value > column to have unique values being used at the time of regis
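    A minimal sketch of the client-side alternative: generating the unique value as a UUID instead of reading it from a counter column. This is only an illustration, not the poster's code; the driver-based timeuuid variant is noted in a comment.

    import java.util.UUID;

    public class DeviceIdExample {
        public static void main(String[] args) {
            // A version-4 random UUID needs no coordination between nodes,
            // unlike a counter column, so two devices will not receive the same value.
            UUID deviceId = UUID.randomUUID();
            System.out.println("Assigned device id: " + deviceId);
            // With the DataStax Java driver, com.datastax.driver.core.utils.UUIDs.timeBased()
            // produces a version-1 timeuuid if a time-sortable identifier is preferred.
        }
    }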

Re: Timeout for only one keyspace in cluster

2018-07-23 Thread learner dba
James, Yes, the counter is implemented for valid reasons. We need this value column to have unique values used at the time of registering new devices. On Monday, July 23, 2018, 10:07:54 AM CDT, James Shaw wrote: does your application really need a counter? Just an option. Thanks,

Re: Timeout for only one keyspace in cluster

2018-07-23 Thread James Shaw
Does your application really need a counter? Just an option. Thanks, James On Mon, Jul 23, 2018 at 10:57 AM, learner dba wrote: > Thanks a lot Ben. This makes sense but feel bad that we don't have a > solution yet. We can try consistency level one but that will be against > general rule for ha

Re: Timeout for only one keyspace in cluster

2018-07-23 Thread learner dba
Thanks a lot Ben. This makes sense, but I feel bad that we don't have a solution yet. We can try consistency level ONE, but that will be against the general rule of using LOCAL_QUORUM in production. Also, consistency ONE will not guarantee zero race conditions. Is there any better solution? On Satur

Re: Timeout for only one keyspace in cluster

2018-07-23 Thread learner dba
Goutham, How will it make any difference? Could you please elaborate? On Saturday, July 21, 2018, 8:20:31 PM CDT, Goutham reddy wrote: Hi, As it is a single partition key, try to update the key with only partition key instead of passing other columns. And try to set consistency level ON

Re: Timeout for only one keyspace in cluster

2018-07-21 Thread Ben Slater
Note that the write timeout exception can be C*'s way of telling you there is contention on an LWT (rather than an actual timeout). See https://issues.apache.org/jira/browse/CASSANDRA-9328 Cheers Ben On Sun, 22 Jul 2018 at 11:20 Goutham reddy wrote: > Hi, > As it is a single partition key,
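    For reference, a sketch (DataStax Java driver 3.x) of how the CASSANDRA-9328 contention case can be distinguished from a real timeout on the client side. The keyspace and table names are placeholders, not the poster's schema.

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Session;
    import com.datastax.driver.core.WriteType;
    import com.datastax.driver.core.exceptions.WriteTimeoutException;

    public class LwtContentionExample {
        public static void main(String[] args) {
            try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
                 Session session = cluster.connect("my_ks")) {
                try {
                    // Hypothetical LWT insert; "devices" is a placeholder table.
                    session.execute(
                            "INSERT INTO devices (id, value) VALUES (uuid(), 42) IF NOT EXISTS");
                } catch (WriteTimeoutException e) {
                    // Per CASSANDRA-9328, a write timeout with WriteType.CAS can mean
                    // contention on the lightweight transaction rather than a real timeout;
                    // the usual reaction is to re-read the row and retry the operation.
                    if (e.getWriteType() == WriteType.CAS) {
                        System.out.println("LWT contention detected, retry the operation");
                    }
                }
            }
        }
    }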

Re: Timeout for only one keyspace in cluster

2018-07-21 Thread Goutham reddy
Hi, As it is a single partition key, try to update the row using only the partition key instead of passing other columns. And try to set the consistency level to ONE. Cheers, Goutham. On Fri, Jul 20, 2018 at 6:57 AM learner dba wrote: > Anybody has any ideas about this? This is happening in production and

Re: Timeout for only one keyspace in cluster

2018-07-20 Thread learner dba
Does anybody have any ideas about this? This is happening in production and we really need to fix it. On Thursday, July 19, 2018, 10:41:59 AM CDT, learner dba wrote: Our foreignid is a unique identifier and we did check for wide partitions; cfhistograms shows all partitions are evenly sized:

Re: Timeout for only one keyspace in cluster

2018-07-19 Thread learner dba
Our foreignid is a unique identifier and we did check for wide partitions; cfhistograms shows all partitions are evenly sized: Percentile | SSTables | Write Latency (micros) | Read Latency (micros) | Partition Size (bytes) | Cell Count

Re: Timeout for only one keyspace in cluster

2018-07-18 Thread wxn...@zjqunshuo.com
Your partition key is foreignid. You may have a large partition. Why not use foreignid+timebucket as partition key? From: learner dba Date: 2018-07-19 01:48 To: User cassandra.apache.org Subject: Timeout for only one keyspace in cluster Hi, We have a cluster with multiple keyspaces. All queries
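    A hedged sketch of the suggested schema change: adding a time bucket to the partition key bounds the size of any single partition, at the cost of querying one bucket at a time. Table and column names here are illustrative only, not the poster's actual schema.

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Session;

    public class TimeBucketSchema {
        public static void main(String[] args) {
            try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
                 Session session = cluster.connect("my_ks")) {
                // Hypothetical table: day-sized buckets keep each partition bounded.
                session.execute("CREATE TABLE IF NOT EXISTS events_by_foreignid ("
                        + " foreignid text,"
                        + " timebucket text,"   // e.g. '2018-07-20'
                        + " ts timeuuid,"
                        + " payload text,"
                        + " PRIMARY KEY ((foreignid, timebucket), ts))");
            }
        }
    }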

Re: Timeout while setting keyspace

2017-07-26 Thread Peng Xiao
https://datastax-oss.atlassian.net/browse/JAVA-1002 This one says it's the driver issue; we will have a try. -- Original -- From: <2535...@qq.com> Date: Wed, Jul 26, 2017 04:12 PM To: "user" Subject: Timeout while setting keyspace Dear A

Re: Timeout while trying to acquire available connection

2017-06-26 Thread Jacob Shadix
How many client connections are hitting your cluster? Have you looked at tuning the connection pool? https://github.com/datastax/java-driver/tree/3.x/manual/pooling -- Jacob Shadix On Mon, Jun 26, 2017 at 2:50 PM, Ivan Iliev wrote: > Hello everyone! > > I am seeing recent behavior of apps being no
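    A small sketch of pool tuning with the Java driver 3.x PoolingOptions, per the manual linked above. The numbers are illustrative starting points to tune against observed load, not recommendations.

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.HostDistance;
    import com.datastax.driver.core.PoolingOptions;

    public class PoolingExample {
        public static void main(String[] args) {
            // Illustrative values: 2-8 connections per local host, up to 2048
            // in-flight requests per connection.
            PoolingOptions pooling = new PoolingOptions()
                    .setConnectionsPerHost(HostDistance.LOCAL, 2, 8)
                    .setMaxRequestsPerConnection(HostDistance.LOCAL, 2048);

            try (Cluster cluster = Cluster.builder()
                    .addContactPoint("127.0.0.1")
                    .withPoolingOptions(pooling)
                    .build()) {
                System.out.println("Max requests per local connection: "
                        + cluster.getConfiguration().getPoolingOptions()
                                 .getMaxRequestsPerConnection(HostDistance.LOCAL));
            }
        }
    }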

Re: Timeout with static column

2015-11-23 Thread Brice Figureau
On Fri, 2015-11-13 at 11:25 +0100, Brice Figureau wrote: > On Thu, 2015-11-12 at 11:13 -0600, Tyler Hobbs wrote: > > Can you try to isolate this to a reproducible test case or script and > > open a jira ticket at https://issues.apache.org/jira/browse/CASSANDRA? > > I just created: > https://issues

Re: Timeout with static column

2015-11-13 Thread Brice Figureau
On Thu, 2015-11-12 at 11:13 -0600, Tyler Hobbs wrote: > Can you try to isolate this to a reproducible test case or script and > open a jira ticket at https://issues.apache.org/jira/browse/CASSANDRA? I just created: https://issues.apache.org/jira/browse/CASSANDRA-10698 It's unfortunately not a tes

Re: Timeout with static column

2015-11-12 Thread Tyler Hobbs
Can you try to isolate this to a reproducible test case or script and open a jira ticket at https://issues.apache.org/jira/browse/CASSANDRA? On Wed, Nov 11, 2015 at 2:54 PM, Brice Figureau < brice+cassan...@daysofwonder.com> wrote: > Hi, > > Following my previous Read query timing out, I'm now ru

Re: Timeout/Crash when insert more than 500 bytes chunks

2015-07-31 Thread noah chanmala
I figured out the issue. Mainly, the setting is related to commitlog_segment_size_in_mb and the sync I currently have across three other remote sites. Thanks, Noah On Thu, Jul 30, 2015 at 1:05 PM, noah chanmala wrote: > All, > > Would you please point me to the location where I can adjust/reconf

RE: timeout creating table

2015-04-23 Thread Matthew Johnson
this is expected. Cheers, Matt *From:* y2k...@gmail.com [mailto:y2k...@gmail.com] *On Behalf Of *Jimmy Lin *Sent:* 23 April 2015 18:01 *To:* user@cassandra.apache.org *Subject:* Re: timeout creating table well i am pretty sure our CL is one. and the long pause seems happen somewhat

Re: timeout creating table

2015-04-23 Thread Jimmy Lin
Well, I am pretty sure our CL is ONE, and the long pause seems to happen somewhat randomly. But do keyspace or table creation statements get different treatment in terms of CL that might explain the long pause? Thanks On Thu, Apr 23, 2015 at 8:04 AM, Sebastian Estevez < sebastian.este...@datastax.com>

Re: timeout creating table

2015-04-23 Thread Sebastian Estevez
That is a problem; you should not have RF > N. Do an ALTER KEYSPACE to fix it. This will affect your reads and writes if you're doing anything above CL ONE --> timeouts. On Apr 23, 2015 4:35 AM, "Jimmy Lin" wrote: > Also I am not sure it matters, but I just realized the keyspace created > has replicatio
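    Since replication is a keyspace-level property, the fix looks roughly like this. The keyspace name is a placeholder; if the factor were being raised rather than lowered, a repair would also be needed afterwards.

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Session;

    public class FixReplicationFactor {
        public static void main(String[] args) {
            try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
                 Session session = cluster.connect()) {
                // Bring RF in line with the single-node cluster; "my_ks" is a placeholder.
                session.execute("ALTER KEYSPACE my_ks WITH replication = "
                        + "{'class': 'SimpleStrategy', 'replication_factor': 1}");
            }
        }
    }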

Re: timeout creating table

2015-04-23 Thread Jimmy Lin
Also I am not sure it matters, but I just realized the keyspace created has replication factor of 2 when my Cassandra is really just a single node. Is Cassandra smart enough to ignore the RF of 2 and work with only 1 single node? On Mon, Apr 20, 2015 at 8:23 PM, Jimmy Lin wrote: > hi, > there

Re: timeout creating table

2015-04-20 Thread Jimmy Lin
Hi, there were only a few (4 of them across 4 minutes, around 200ms each), so that shouldn't be the reason. The system log has tons of: INFO [MigrationStage:1] 2015-04-20 11:03:21,880 ColumnFamilyStore.java (line 633) Enqueuing flush of Memtable-schema_keyspaces@2079381036(138/1215 serialized/live bytes,

Re: timeout creating table

2015-04-20 Thread Sebastian Estevez
Can you grep for GCInspector in your system.log? Maybe you have long GC pauses. All the best, Sebastián Estévez Solutions Architect | 954 905 8615 | sebastian.este...@datastax.com

Re: timeout creating table

2015-04-20 Thread Jimmy Lin
Yes, sometimes it is a create table and sometimes it is a create index. It doesn't happen all the time, but it feels like if multiple tests try to do schema changes (create or drop), Cassandra has a long delay on the schema change statements. I also just read about "auto_snapshot", and I turned it off but s

Re: timeout creating table

2015-04-20 Thread Jim Witschey
Jimmy, What's the exact command that produced this trace? Are you saying that the 16-second wait in your trace is what times out in your CREATE TABLE statements? Jim Witschey Software Engineer in Test | jim.witsc...@datastax.com On Sun, Apr 19, 2015 at 7:13 PM, Jimmy Lin wrote: > hi, > we have so

Re: Timeout error in fetching million rows as results using clustering keys

2015-03-19 Thread Jack Krupansky
Content management (large blobs such as images and video) can be done with Cassandra, but it is tricky and great care is needed. As with any Cassandra app, you need to model your data based on how you intend to query and access the data. You can certainly access large amounts of data with Cassandra

Re: Timeout error in fetching million rows as results using clustering keys

2015-03-19 Thread Kai Wang
With your reading path and data model, it doesn't matter how many nodes you have. All data with the same image_caseid is physically located on one node (well, on RF nodes, but only one of those will try to serve your query). You are not taking advantage of Cassandra by creating hot spots on both re

Re: Timeout error in fetching million rows as results using clustering keys

2015-03-18 Thread Eric Stevens
> Very few, if any, non-memory databases are likely to be able to "handle" a "million" "rows" in a small number of seconds. +1 to that. Our data models shy away from really huge sequential reads, so I don't know what to suggest your practical lower bound on response time would be expected to be fo

Re: Timeout error in fetching million rows as results using clustering keys

2015-03-18 Thread Robert Coli
On Wed, Mar 18, 2015 at 12:22 AM, Mehak Mehta wrote: > I have tried with fetch size of 1 still its not giving any results. > My expectations were that Cassandra can handle a million rows easily. > Very few, if any, non-memory databases are likely to be able to "handle" a "million" "rows" in

Re: Timeout error in fetching million rows as results using clustering keys

2015-03-18 Thread Eric Stevens
From your description, it sounds like you have a single partition key with millions of clustered values in the same partition. That's a very wide partition. You may very likely be causing a lot of memory pressure in your Cassandra node (especially at 4G) while trying to execute the query. Althou

Re: Timeout error in fetching million rows as results using clustering keys

2015-03-18 Thread Jack Krupansky
Cassandra can certainly handle millions and even billions of rows, but... it is a very clear anti-pattern to design a single query to return more than a relatively small number of rows except through paging. How small? Low hundreds is probably a reasonable limit. It is also an anti-pattern to filte

Re: Timeout error in fetching million rows as results using clustering keys

2015-03-18 Thread Mehak Mehta
Yeah, I have a cluster of 10 nodes in total, but I am just testing with one node currently. Total data for all nodes will exceed 5 billion rows. But I may have memory on other nodes. On Wed, Mar 18, 2015 at 6:06 AM, Ali Akhtar wrote: > 4g also seems small for the kind of load you are trying to handle > (bil

Re: Timeout error in fetching million rows as results using clustering keys

2015-03-18 Thread Ali Akhtar
4g also seems small for the kind of load you are trying to handle (billions of rows) etc. I would also try adding more nodes to the cluster. On Wed, Mar 18, 2015 at 2:53 PM, Ali Akhtar wrote: > Yeah, it may be that the process is being limited by swap. This page: > > > https://gist.github.com/a

Re: Timeout error in fetching million rows as results using clustering keys

2015-03-18 Thread Ali Akhtar
Yeah, it may be that the process is being limited by swap. This page: https://gist.github.com/aliakhtar/3649e412787034156cbb#file-cassandra-install-sh-L42 Lines 42 - 48 list a few settings that you could try out for increasing / reducing the memory limits (assuming you're on linux). Also, are yo

Re: Timeout error in fetching million rows as results using clustering keys

2015-03-18 Thread Mehak Mehta
Currently the Cassandra Java process is taking 1% of CPU (8% total is being used) and 14.3% of memory (out of 4G total). As you can see, there is not much load from other processes. Should I try changing the default memory parameters in the Cassandra settings? On Wed, Mar 18, 2015 at 5:33 AM, Ali Akhta

Re: Timeout error in fetching million rows as results using clustering keys

2015-03-18 Thread Ali Akhtar
What's your memory / CPU usage at? And how much ram + cpu do you have on this server? On Wed, Mar 18, 2015 at 2:31 PM, Mehak Mehta wrote: > Currently there is only single node which I am calling directly with > around 15 rows. Full data will be in around billions per node. > The code is wo

Re: Timeout error in fetching million rows as results using clustering keys

2015-03-18 Thread Mehak Mehta
Currently there is only a single node, which I am calling directly, with around 15 rows. Full data will be around billions of rows per node. The code is working only for sizes of 100/200. Also, the consecutive fetching is taking around 5-10 secs. I have a parallel script which is inserting the data while I

Re: Timeout error in fetching million rows as results using clustering keys

2015-03-18 Thread Ali Akhtar
If even 500-1000 isn't working, then your Cassandra node might not be up. 1) Try running nodetool status from a shell on your Cassandra server and make sure the nodes are up. 2) Are you calling this on the same server where Cassandra is running? It's trying to connect to localhost. If you're running

Re: Timeout error in fetching million rows as results using clustering keys

2015-03-18 Thread Mehak Mehta
Data won't change much, but queries will be different. I am not working on the rendering tool myself, so I don't know many details about it. Also, as you suggested, I tried to fetch data with a size of 500 or 1000 with Java driver auto pagination. It fails when the number of records is high (around 10

Re: Timeout error in fetching million rows as results using clustering keys

2015-03-18 Thread Ali Akhtar
How often does the data change? I would still recommend a caching of some kind, but without knowing more details (how often the data is changing, what you're doing with the 1m rows after getting them, etc) I can't recommend a solution. I did see your other thread. I would also vote for elasticsea

Re: Timeout error in fetching million rows as results using clustering keys

2015-03-18 Thread Mehak Mehta
The rendering tool renders a portion of a very large image. It may fetch different data each time from billions of rows, so I don't think I can cache such large results, since the same results will rarely be fetched again. Also, do you know how I can do 2D range queries using Cassandra? Some other users sugg

Re: Timeout error in fetching million rows as results using clustering keys

2015-03-18 Thread Ali Akhtar
Sorry, meant to say "that way when you have to render, you can just display the latest cache." On Wed, Mar 18, 2015 at 1:30 PM, Ali Akhtar wrote: > I would probably do this in a background thread and cache the results, > that way when you have to render, you can just cache the latest results. >

Re: Timeout error in fetching million rows as results using clustering keys

2015-03-18 Thread Ali Akhtar
I would probably do this in a background thread and cache the results, that way when you have to render, you can just cache the latest results. I don't know why Cassandra can't seem to be able to fetch large batch sizes, I've also run into these timeouts but reducing the batch size to 2k seemed to

Re: Timeout error in fetching million rows as results using clustering keys

2015-03-18 Thread Mehak Mehta
We have a UI which needs this data for rendering, so the efficiency of pulling this data matters a lot. It should be fetched within a minute. Is there a way to achieve such efficiency? On Wed, Mar 18, 2015 at 4:06 AM, Ali Akhtar wrote: > Perhaps just fetch them in batches of 1000 or 2000? Fo

Re: Timeout error in fetching million rows as results using clustering keys

2015-03-18 Thread Ali Akhtar
Perhaps just fetch them in batches of 1000 or 2000? For 1m rows, it seems like the difference would only be a few minutes. Do you have to do this all the time, or only once in a while? On Wed, Mar 18, 2015 at 12:34 PM, Mehak Mehta wrote: > yes it works for 1000 but not more than that. > How can

Re: Timeout error in fetching million rows as results using clustering keys

2015-03-18 Thread Mehak Mehta
yes it works for 1000 but not more than that. How can I fetch all rows using this efficiently? On Wed, Mar 18, 2015 at 3:29 AM, Ali Akhtar wrote: > Have you tried a smaller fetch size, such as 5k - 2k ? > > On Wed, Mar 18, 2015 at 12:22 PM, Mehak Mehta > wrote: > >> Hi Jens, >> >> I have tried

Re: Timeout error in fetching million rows as results using clustering keys

2015-03-18 Thread Ali Akhtar
Have you tried a smaller fetch size, such as 5k - 2k ? On Wed, Mar 18, 2015 at 12:22 PM, Mehak Mehta wrote: > Hi Jens, > > I have tried with fetch size of 1 still its not giving any results. > My expectations were that Cassandra can handle a million rows easily. > > Is there any mistake in t

Re: Timeout error in fetching million rows as results using clustering keys

2015-03-18 Thread Mehak Mehta
Hi Jens, I have tried with a fetch size of 1; still it's not giving any results. My expectation was that Cassandra could handle a million rows easily. Is there any mistake in the way I am defining the keys or querying them? Thanks Mehak On Wed, Mar 18, 2015 at 3:02 AM, Jens Rantil wrote: > Hi

Re: Timeout error in fetching million rows as results using clustering keys

2015-03-18 Thread Jens Rantil
Hi, Try setting the fetch size before querying. Assuming you don't set it too high, and you don't have too many tombstones, that should do it. Cheers, Jens – Sent from Mailbox On Wed, Mar 18, 2015 at 2:58 AM, Mehak Mehta wrote: > Hi, > I have a requirement to fetch a million rows as the result of my
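    A minimal sketch of what the fetch-size suggestion looks like with the Java driver's automatic paging. The table, column, and key values are placeholders for the poster's schema, and 1000 is just an example page size.

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.ResultSet;
    import com.datastax.driver.core.Row;
    import com.datastax.driver.core.Session;
    import com.datastax.driver.core.SimpleStatement;
    import com.datastax.driver.core.Statement;

    public class FetchSizeExample {
        public static void main(String[] args) {
            try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
                 Session session = cluster.connect("my_ks")) {
                // Page through the partition 1000 rows at a time instead of one huge read.
                Statement stmt = new SimpleStatement(
                        "SELECT * FROM images WHERE image_caseid = ?", "case-123")
                        .setFetchSize(1000);
                ResultSet rs = session.execute(stmt);
                long count = 0;
                for (Row row : rs) {   // the driver fetches further pages transparently
                    count++;
                }
                System.out.println("Rows read: " + count);
            }
        }
    }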

Re: timeout when using secondary index

2015-03-10 Thread Patrick McFadin
Jimmy, The secondary index is getting scanned since you put the column in your query. The behavior you are looking for is a coming feature called Global Indexes slated for 3.0. https://issues.apache.org/jira/browse/CASSANDRA-6477 In the meantime, you could build your own lookup table even with th
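    A rough sketch of the hand-rolled lookup-table approach Patrick mentions, as opposed to a secondary index. All names are hypothetical; a logged batch is one way to keep the two denormalized writes in step.

    import com.datastax.driver.core.BatchStatement;
    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Session;
    import com.datastax.driver.core.SimpleStatement;
    import java.util.UUID;

    public class LookupTableExample {
        public static void main(String[] args) {
            try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
                 Session session = cluster.connect("my_ks")) {
                // Base table plus a query table keyed by the column we want to look up by.
                session.execute("CREATE TABLE IF NOT EXISTS users (id uuid PRIMARY KEY, email text)");
                session.execute("CREATE TABLE IF NOT EXISTS users_by_email (email text PRIMARY KEY, id uuid)");

                UUID id = UUID.randomUUID();
                String email = "someone@example.com";
                // Write both copies together so the lookup table stays consistent.
                BatchStatement batch = new BatchStatement();
                batch.add(new SimpleStatement("INSERT INTO users (id, email) VALUES (?, ?)", id, email));
                batch.add(new SimpleStatement("INSERT INTO users_by_email (email, id) VALUES (?, ?)", email, id));
                session.execute(batch);
            }
        }
    }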

Re: Timeout Exception with row_cache enabled

2013-09-02 Thread Sávio Teles
Is it related to https://issues.apache.org/jira/browse/CASSANDRA-4973? And https://issues.apache.org/jira/browse/CASSANDRA-4785? 2013/9/2 Nate McCall > You experience is not uncommon. There was a recent thread on this with a > variety of details on when to use row caching: > http://www.mail-arc

Re: Timeout Exception with row_cache enabled

2013-09-02 Thread Nate McCall
Your experience is not uncommon. There was a recent thread on this with a variety of details on when to use row caching: http://www.mail-archive.com/user@cassandra.apache.org/msg31693.html tl;dr - it depends completely on use case. Small static rows work best. On Mon, Sep 2, 2013 at 2:05 PM, Sáv

Re: Timeout reading row from CF with collections

2013-07-12 Thread Paul Ingalls
Yep, that was it. I built from the cassandra 1.2 branch and no more timeouts. Thanks for getting that fix into 1.2! Paul On Jul 12, 2013, at 1:20 AM, Sylvain Lebresne wrote: > My bet is that you're hitting > https://issues.apache.org/jira/browse/CASSANDRA-5677. > > -- > Sylvain > > >

Re: Timeout reading row from CF with collections

2013-07-12 Thread Sylvain Lebresne
My bet is that you're hitting https://issues.apache.org/jira/browse/CASSANDRA-5677. -- Sylvain On Fri, Jul 12, 2013 at 8:17 AM, Paul Ingalls wrote: > I'm running into a problem trying to read data from a column family that > includes a number of collections. > > Cluster details: > 4 nodes runn

Re: Timeout Exception in get_slice

2012-05-10 Thread Luís Ferreira
The multi get batches range from 100 to 200. The tests I'm running need to do get_slices and the multigets on those results. I can't turn either of them off. I was only setting 16 threads for reading, but I'll boost it up to 32 and see what happens. On May 9, 2012, at 11:03 AM, aaron morton wr

Re: Timeout Exception in get_slice

2012-05-09 Thread aaron morton
How big are the multi get batches ? How do the wide row get_slice calls behave when the multi gets are not running ? Cheers - Aaron Morton Freelance Developer @aaronmorton http://www.thelastpickle.com On 9/05/2012, at 1:47 AM, Luís Ferreira wrote: > Maybe one of the problems is

Re: Timeout Exception in get_slice

2012-05-08 Thread Luís Ferreira
Maybe one of the problems is that I am reading the columns in a row and the rows themselves in batches, using the count attribute in the SliceRange and by changing the start column or the corresponding for rows with the KeyRange. According to your blog post, using start key to read for millions

Re: Timeout Exception in get_slice

2012-05-08 Thread aaron morton
If I was rebuilding my power after spending the first thousand years of the Third Age as a shapeless evil I would cast my Eye of Fire in the direction of the filthy little multi_gets. A node can fail to respond to a query with rpc_timeout for two reasons: either the command did not run or the

Re: timeout while doing repair

2011-11-24 Thread Jahangir Mohammed
That will give you a snapshot of thread pools. You should look at ROW-READ-STAGE and see pending and active. If there are many pending, it means that the cluster is not able to keep up with the read requests coming along. Thanks, Jahangir Mohammed. On Thu, Nov 24, 2011 at 2:14 PM, Patrik Modesto

Re: timeout while doing repair

2011-11-24 Thread Patrik Modesto
We have our own servers: 16-core CPU, 32GB RAM, 8 x 1TB disks. I didn't check tpstats, just iotop, where Cassandra used all the IO capacity when compacting/repairing. I had to completely clean the test cluster, but I'll check tpstats in production. What should I look for? Regards, Patrik D

Re: timeout while doing repair

2011-11-24 Thread Jahangir Mohammed
What I know is timeout is because of increased load on node due to repair. Hardware? EC2? Did you check tpstats? On Thu, Nov 24, 2011 at 11:42 AM, Patrik Modesto wrote: > Thanks for the reply. I know I can configure longer timeout but in our use > case, reply longer than 1second is unacceptable

Re: timeout while doing repair

2011-11-24 Thread Patrik Modesto
Thanks for the reply. I know I can configure a longer timeout, but in our use case a reply longer than 1 second is unacceptable. What I don't understand is why I get timeouts while reading a different keyspace than the one the repair is working on. I get timeouts even during compaction. Besides usual access we d

Re: timeout while doing repair

2011-11-24 Thread Jahangir Mohammed
Do you use any client which gives you this timeout? If you don't specify any timeout from the client, look at rpc_timeout_in_ms. Increase it and see if you still suffer this. Repair is a costly process. Thanks, Jahangir Mohammed. On Thu, Nov 24, 2011 at 2:45 AM, Patrik Modesto wrote: > Hi, >

Re: Timeout during stress test

2011-04-12 Thread mcasandra
Here is what cfhistograms looks like. I don't really understand what this means; I will try to read up on it. I also see %util in iostat continuously at 90%. Not sure if this is caused by extra reads by Cassandra. It seems unusual. [root@dsdb4 ~]# nodetool -h `hostname` cfhistograms StressKeyspace StressStandard Stres

Re: Timeout during stress test

2011-04-12 Thread aaron morton
Couple of hits here, one from jonathan and some previous discussions on the user list http://www.google.co.nz/search?q=cassandra+iostat Same here for cfhistograms http://www.google.co.nz/search?q=cassandra+cfhistograms cfhistograms includes information on the number of sstables read during rece

Re: Timeout during stress test

2011-04-11 Thread mcasandra
aaron morton wrote: > > You'll need to provide more information, from the TP stats the read stage > could not keep up. If the node is not CPU bound then it is probably IO > bound. > > > What sort of read? > How many columns was it asking for ? > How many columns do the rows have ? > Was the t

Re: Timeout during stress test

2011-04-11 Thread Terje Marthinussen
I notice you have pending hinted handoffs? Look for errors related to that. We have seen occasional corruptions in the hinted handoff sstables. If you are stressing the system to its limits, you may also consider playing more with the number of read/write threads (concurrent_reads/writes)

Re: Timeout during stress test

2011-04-11 Thread aaron morton
You'll need to provide more information, from the TP stats the read stage could not keep up. If the node is not CPU bound then it is probably IO bound. What sort of read? How many columns was it asking for ? How many columns do the rows have ? Was the test asking for different rows ? How many

Re: Timeout during stress test

2011-04-11 Thread mcasandra
But I don't understand the reason for the overload. It was doing a simple read with 12 threads and reading 5 rows. Avg CPU was only 20%, and no GC issues that I can see. I would expect Cassandra to be able to process more with 6 nodes, 12 cores, 96 GB RAM and a 4 GB heap. -- View this message in context: http://cass

Re: Timeout during stress test

2011-04-11 Thread aaron morton
It means the cluster is currently overloaded and unable to complete requests in time at the CL specified. Aaron On 12 Apr 2011, at 11:18, mcasandra wrote: > It looks like hector did retry on all the nodes and failed. Does this then > mean cassandra is down for clients in this scenario? That wo

Re: Timeout during stress test

2011-04-11 Thread mcasandra
It looks like hector did retry on all the nodes and failed. Does this then mean cassandra is down for clients in this scenario? That would be bad. -- View this message in context: http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/Timeout-during-stress-test-tp6262430p6263270.html Se

Re: Timeout during stress test

2011-04-11 Thread aaron morton
TimedOutException means the cluster could not perform the request in rpc_timeout time. The client should retry as the problem may be transitory. In this case read performance may have slowed down due to the number of sstables (286). It's hard to tell without knowing what the workload is. Aaron On

Re: Timeout during stress test

2011-04-11 Thread mcasandra
I see this occurring often when all Cassandra nodes all of a sudden show a CPU spike. All reads fail for about 2 minutes. GC.log and system.log don't reveal much. The only thing I notice is that when I restart nodes there are tons of files that get deleted. cfstats from one of the nodes looks like this:

Re: Timeout

2011-02-20 Thread Aaron Morton
Some of the schema operations wait to check agreement between the nodes before proceeding. Are there any messages in your server side logs. Aaron On 19/02/2011, at 2:47 PM, mcasandra wrote: > > Forgot to mention replication factor is 1 and I am running Cassandra 0.7.0. > It's using SimpleStr

Re: Timeout

2011-02-18 Thread mcasandra
Forgot to mention replication factor is 1 and I am running Cassandra 0.7.0. It's using SimpleStrategy -- View this message in context: http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/Timeout-tp6042052p6042150.html Sent from the cassandra-u...@incubator.apache.org mailing list ar

Re: Timeout

2011-02-18 Thread mcasandra
This is a test cluster of 3 nodes. This is test code that does the following: 1) The first 4 lines physically drop and create the keyspace, then create the CF and column definition on the server. 2) Right after, from the 5th line onwards, it gets a reference to the keyspace and tries to insert a row and colu

Re: Timeout

2011-02-18 Thread Javier Canillas
Why don't you post some details about your Cassandra cluster: version, information about the keyspace you are creating (for example, what its replication factor is)? It might be of help. Besides, I don't fully understand your code. First you drop the KEYSPACE, then create it again with a column

Re: Timeout Errors while running Hadoop over Cassandra

2011-01-19 Thread Jairam Chandar
I was able to work around this problem by modifying the ColumnFamilyRecordReader class from the org.apache.cassandra.hadoop package. Since the errors were TimeoutExceptions, I added sleep and retry logic around rows = client.get_range_slices(keyspace, new ColumnParent(cfName), predicate,

Re: Timeout Errors while running Hadoop over Cassandra

2011-01-14 Thread Jairam Chandar
The cassandra logs strangely show no errors at the time of failure. Changing the RPCTimeoutInMillis seemed to help. Though it slowed down the job considerably, it seems to be finishing by changing the timeout value to 1 min. Unfortunately, I cannot be sure if it will continue to work if the data in

Re: Timeout Errors while running Hadoop over Cassandra

2011-01-13 Thread Jeremy Hanna
On Jan 12, 2011, at 12:40 PM, Jairam Chandar wrote: > Hi folks, > > We have a Cassandra 0.6.6 cluster running in production. We want to run > Hadoop (version 0.20.2) jobs over this cluster in order to generate reports. > I modified the word_count example in the contrib folder of the cassandra

Re: Timeout Errors while running Hadoop over Cassandra

2011-01-12 Thread mck
On Wed, 2011-01-12 at 23:04 +0100, mck wrote: > > Caused by: TimedOutException() > > What is the exception in the cassandra logs? Or tried increasing rpc_timeout_in_ms? ~mck -- "When there is no enemy within, the enemies outside can't hurt you." African proverb | www.semb.wever.org | www.sesa

Re: Timeout Errors while running Hadoop over Cassandra

2011-01-12 Thread mck
On Wed, 2011-01-12 at 18:40 +, Jairam Chandar wrote: > Caused by: TimedOutException() What is the exception in the cassandra logs? ~mck -- "Don't use Outlook. Outlook is really just a security hole with a small e-mail client attached to it." Brian Trosko | www.semb.wever.org | www.sesat.no

Re: Timeout Errors while running Hadoop over Cassandra

2011-01-12 Thread Aaron Morton
What's happening in the Cassandra server logs when you get these errors? Reading through the Hadoop 0.6.6 code, it looks like it creates a Thrift client with an infinite timeout. So it may be an internode timeout, which is set in storage-conf.xml. Aaron On 13 Jan, 2011, at 07:40 AM, Jairam Chandar wrot

Re: timeout when insert an indexed column

2010-09-08 Thread Jonathan Ellis
this is fixed in trunk for beta2 On Wed, Sep 8, 2010 at 9:52 PM, welcome wrote: > Hello! > >    I encounter the same problem with you,and I replace > client.insert("index".getBytes("UTF-8"), parent, > column,ConsistencyLevel.ONE);to: > client.insert("inde".getBytes("UTF-8"), parent, column,Consi

Re: timeout when insert an indexed column

2010-09-08 Thread welcome
Hello! I encountered the same problem as you, and I replaced client.insert("index".getBytes("UTF-8"), parent, column, ConsistencyLevel.ONE); with: client.insert("inde".getBytes("UTF-8"), parent, column, ConsistencyLevel.ONE); That works. But when I insert another one-letter key again, it's wrong as b

Re: timeout when insert an indexed column

2010-09-07 Thread Asif Jan
On Sep 7, 2010, at 11:05 AM, Ying Tang wrote: Sorry , i didn't put it clearly. The app throws out the TimeoutException , but the cassandra throws out the ArrayIndexOutOfBoundsException. And if i shortened this key's length,such as one letter , the indexed column insert is successful. But if

Re: timeout when insert an indexed column

2010-09-07 Thread Ying Tang
Sorry, I didn't put it clearly. The app throws the TimeoutException, but Cassandra throws the ArrayIndexOutOfBoundsException. And if I shorten this key's length, to a single letter for example, the indexed column insert is successful. But if I let the key be 'index0', this insert operation

Re: timeout when insert an indexed column

2010-09-07 Thread Carlin Wong
Hi Ivy, Are you sure about this? One is a TimedOutException, and the other is an ArrayIndexOutOfBoundsException. I can't see any connection. Please point it out, thank you. Calin4J 2010/9/7 Ying Tang > oh ,i've found this https://issues.apache.org/jira/browse/CASSANDRA-1402 > > > > On 9/7/10, Ying

Re: timeout when insert an indexed column

2010-09-07 Thread Ying Tang
Oh, I've found this: https://issues.apache.org/jira/browse/CASSANDRA-1402 On 9/7/10, Ying Tang wrote: > Before inserting, the Cassandra.client is assigned the keyspace. > ColumnParent parent = new ColumnParent(); >parent.setColumn_family("Standard1"); > > > On Tue, Sep 7, 2010 at

Re: timeout when insert an indexed column

2010-09-07 Thread Ying Tang
Before inserting, the Cassandra.client is assigned the keyspace. ColumnParent parent = new ColumnParent(); parent.setColumn_family("Standard1"); On Tue, Sep 7, 2010 at 4:19 PM, Viktor Jevdokimov < viktor.jevdoki...@adform.com> wrote: > I didn't get which keyspace and column family yo

RE: timeout when insert an indexed column

2010-09-07 Thread Viktor Jevdokimov
I didn't get which keyspace and column family you are trying to insert into. > parent.setColumn_family("Standard1"); -Original Message- From: Ying Tang [mailto:ivytang0...@gmail.com] Sent: Tuesday, September 07, 2010 11:10 AM To: user@cassandra.apache.org Subject: timeout when insert an indexe

Re: Timeout when cluster node fails/restarts

2010-06-24 Thread Jonathan Ellis
getting a TimedOutException for a few requests when a machine fails before Cassandra's Failure Detector notices is normal. On Wed, Jun 23, 2010 at 12:34 PM, Wouter de Bie wrote: > Hi, > > I've currently setup a cluster of 11 nodes. When running a small application > that uses Hector to read and

Re: timeout while running simple hadoop job

2010-05-12 Thread gabriele renzi
On Wed, May 12, 2010 at 5:46 PM, Johan Oskarsson wrote: > Looking over the code this is in fact an issue in 0.6. > It's fixed in trunk/0.7. Connections will be reused and closed properly, see > https://issues.apache.org/jira/browse/CASSANDRA-1017 for more details. > > We can either backport that

Re: timeout while running simple hadoop job

2010-05-12 Thread Johan Oskarsson
Looking over the code, this is in fact an issue in 0.6. It's fixed in trunk/0.7. Connections will be reused and closed properly; see https://issues.apache.org/jira/browse/CASSANDRA-1017 for more details. We can either backport that patch or at least make it close the connections properly in 0.6. Ca

Re: timeout while running simple hadoop job

2010-05-12 Thread Héctor Izquierdo
Have you checked your open file handle limit? You can do that by using "ulimit" in the shell. If it's too low, you will encounter the "too many open files" error. You can also see how many open handles an application has with "lsof". Héctor Izquierdo On 12/05/10 17:00, gabriele renzi wrote:

Re: timeout while running simple hadoop job

2010-05-12 Thread gabriele renzi
On Wed, May 12, 2010 at 4:43 PM, Jonathan Ellis wrote: > On Wed, May 12, 2010 at 5:11 AM, gabriele renzi wrote: >> - is it possible that such errors show up on the client side as >> timeoutErrors when they could be reported better? > > No, if the node the client is talking to doesn't get a reply

Re: timeout while running simple hadoop job

2010-05-12 Thread Jonathan Ellis
On Wed, May 12, 2010 at 5:11 AM, gabriele renzi wrote: > - is it possible that such errors show up on the client side as > timeoutErrors when they could be reported better? No, if the node the client is talking to doesn't get a reply from the data node, there is no way for it to magically find ou

Re: timeout while running simple hadoop job

2010-05-12 Thread gabriele renzi
A follow-up for anyone who may end up on this conversation again: I kept trying, and neither changing the number of concurrent map tasks nor the slice size helped. Finally, I found a screw-up in our logging system, which had prevented us from noticing a couple of recurring errors in the logs

Re: timeout while running simple hadoop job

2010-05-07 Thread Joost Ouwerkerk
The number of map tasks for a job is a function of the InputFormat, which in the case of ColumnInputFormat is a function of the global number of keys in Cassandra. The number of concurrent maps being executed at any given time per TaskTracker (per node) is set by mapred.tasktracker.reduce.tasks.ma

Re: timeout while running simple hadoop job

2010-05-07 Thread Joseph Stein
You can manage the number of map tasks per node with mapred.tasktracker.map.tasks.maximum=1 On Fri, May 7, 2010 at 9:53 AM, gabriele renzi wrote: > On Fri, May 7, 2010 at 2:44 PM, Jonathan Ellis wrote: >> Sounds like you need to configure Hadoop to not create a whole bunch >> of Map tasks at once >
