Hi Shalab,
Are you using anything in the WHERE clause of your query?
If not, you are doing a full scan of your data. In iteration 8 it will scan 1 500 000 entries, and the default timeout value is pretty low.
If you do select count(*) from traffic_by_day where segment_id = 1 and day = 1, it should return much faster, because only a single partition is read.
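To illustrate (a sketch in cqlsh, using the table from your mail):

  cqlsh> SELECT COUNT(*) FROM traffic_by_day WHERE segment_id = 1 AND day = 1;  -- single partition, fast
  cqlsh> SELECT COUNT(*) FROM traffic_by_day;  -- full scan, likely to hit the timeout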
...(Virtual Machines)
Do we need to increase the nofile limits to more than 32768?
On Thu, Nov 7, 2013 at 4:55 PM, Pieter Callewaert <pieter.callewa...@be-mobile.be> wrote:
Hi Murthy,
Did you do a package install (.deb), or did you download the tar?
If the latter, you have to raise the limit yourself; the package install sets it to 100 000 files (can be found in /etc/init.d/cassandra, FD_LIMIT).
However, with 2.0.x I had to raise it to 1 000 000, because 100 000 was too low.
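For reference, a minimal sketch of where the limit lives (Debian package paths; a tarball install sets it per user instead):

  # Package install: the init script applies FD_LIMIT at startup
  $ grep FD_LIMIT /etc/init.d/cassandra
  # Tarball install: raise nofile for the user running Cassandra,
  # e.g. in /etc/security/limits.conf (value from my 2.0.x experience):
  cassandra - nofile 1000000
  # Verify in a fresh shell as that user:
  $ ulimit -n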
Kind regards,
Pieter Callewaert
From: Murthy Chelankuri [mailto:kmurt...@gmail.com]
Sent: Thursday, 7 November 2013 12:15
To: user@cassandra.apache.org
...knowing how to reproduce.
I know the people at DataStax are now investigating this, but no fix yet...
Kind regards,
Pieter Callewaert
-Original Message-
From: Nigel LEACH [mailto:nigel.le...@uk.bnpparibas.com]
Sent: Tuesday, 29 October 2013 18:24
To: user@cassandra.apache.org
Subject: OpsCenter...
...happens.
- It's not socket related.
- Using Oracle Java(TM) SE Runtime Environment (build 1.7.0_40-b43)
- Using multiple data directories (maybe related?)
I'm stuck at the moment; I don't know if I should try DEBUG logging, because it might be too much information.
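One middle ground might be enabling DEBUG for a single package only (a sketch, assuming the stock log4j setup of 2.0.x; the package name is just a guess at where to look):

  # conf/log4j-server.properties: leave the root logger at INFO...
  log4j.rootLogger=INFO,stdout,R
  # ...and raise only the suspected area, e.g. the IO classes:
  log4j.logger.org.apache.cassandra.io=DEBUG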
Kind regards,
at java.lang.Thread.run(Thread.java:724)
Several minutes later I get 'Too many open files'.
Specs:
12-node cluster with Ubuntu 12.04 LTS, Cassandra 2.0.1 (DataStax packages), using JBOD with 2 disks.
JNA enabled.
Any suggestions?
Kind regards,
Pieter Callewaert
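If it helps while reproducing, a quick sketch to watch the descriptor count of the JVM (standard Linux /proc paths):

  # PID of the Cassandra daemon:
  $ pid=$(pgrep -f CassandraDaemon)
  # Open descriptors right now, and the limit the process actually got:
  $ ls /proc/$pid/fd | wc -l
  $ grep 'open files' /proc/$pid/limits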
Thanks, it works perfectly with ALTER TABLE. Stupid of me that I didn't think of this.
Maybe I overlooked it, but this should perhaps be added to the docs. Really a great feature!
Kind regards,
Pieter Callewaert
Web & IT engineer
Web: www.be-mobile.be
Am I doing something wrong? Or is this a bug?
Kind regards,
Pieter Callewaert
Web & IT engineer
Web: www.be-mobile.be
Email: pieter.callewa...@be-mobile.be
Tel: +32 9 330 51 80
To: user@cassandra.apache.org
Subject: Re: cryptic exception in Hadoop/Cassandra job
Cassandra 1.1.5, using BulkOutputFormat
Brian
On Jan 30, 2013, at 7:39 AM, Pieter Callewaert wrote:
> Hi Brian,
>
> Which version of Cassandra are you using? And are you using the BOF to write
> to Cassandra?
Hi Brian,
Which version of Cassandra are you using? And are you using the BOF (BulkOutputFormat) to write to Cassandra?
Kind regards,
Pieter
-Original Message-
From: Brian Jeltema [mailto:brian.jelt...@digitalenvoy.net]
Sent: Wednesday, 30 January 2013 13:20
To: user@cassandra.apache.org
Subject: cryptic exception in Hadoop/Cassandra job
We also have 4-disk nodes, and we use the following layout:
2 x OS + Commit in RAID 1
2 x Data disk in RAID 0
This gives us the advantage that we never have to reinstall the node when a drive crashes.
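For what it's worth, a sketch of how such a layout could be built with mdadm (device names are examples only):

  # OS + commitlog mirrored: a single drive failure doesn't take the node down
  $ mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
  # Data disks striped for throughput: Cassandra's replication covers a lost array
  $ mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sdc1 /dev/sdd1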
Kind regards,
Pieter
From: Ran User [mailto:ranuse...@gmail.com]
Sent: Tuesday, 30 October 2012 4:3
Hi,
Had the same problem this morning; it seems related to the leap second bug.
Rebooting the nodes fixed it for me, but there also seems to be a fix that doesn't require rebooting the server.
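The no-reboot workaround that circulated at the time simply resets the system clock, which clears the stuck timer state (a sketch; the ntp service name differs per distro):

  $ /etc/init.d/ntp stop      # stop ntp so it doesn't interfere
  $ date -s "$(date)"         # re-set the clock to itself
  $ /etc/init.d/ntp start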
Kind regards,
Pieter
From: feedly team [mailto:feedly...@gmail.com]
Sent: Monday, 2 July 2012 17:09
To: user@cassandra.apache.org
Hi,
While I was typing my mail I had the idea to try the new directory layout.
It seems you have to change the parameter settings from 1.0 to 1.1.
In 1.0:
Param 1:
Param 2:
In 1.1:
Param 1:
Param 2: /
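If I understand the layout change correctly, 1.1 adds a column-family level under the keyspace directory, so the loader has to be pointed one level deeper (directory names below are made up for illustration):

  # 1.0: sstables directly under a directory named after the keyspace
  /tmp/load/MyKeyspace/MyKeyspace-MyCF-hc-1-Data.db
  # 1.1: an extra per-column-family directory
  /tmp/load/MyKeyspace/MyCF/MyKeyspace-MyCF-hc-1-Data.db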
I don't know if this is a bug or a breaking change.
Kind regards,
Pieter Callewaert
still active.
Does this have something to do with the new directory structure in 1.1? Or have the parameters of the function changed?
Kind regards,
Pieter Callewaert
?
Kind regards,
Pieter Callewaert
From: Yuki Morishita [mailto:mor.y...@gmail.com]
Sent: Tuesday, 22 May 2012 16:21
To: user@cassandra.apache.org
Subject: Re: supercolumns with TTL columns not being compacted correctly
Data will not be deleted when those keys appear in other sstables outside of ...
gc_grace is 0, but still the data from the sstable is being written to the new one, while I am 100% sure all the data is invalid.
Kind regards,
Pieter Callewaert
From: samal [mailto:samalgo...@gmail.com]
Sent: Tuesday, 22 May 2012 14:33
To: user@cassandra.apache.org
Subject: Re: supercolumns with TTL columns not being compacted correctly
...cassandra cassandra 3.9G May 22 14:12 /data/MapData007/HOS-tmp-hc-196898-Data.db
The sstable is being copied 1-to-1 to a new one. What am I missing here?
TTL works perfectly, but is it causing a problem because the data is in a super column, and so is never deleted from disk?
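One way to check what compaction actually sees could be dumping a row from the suspect sstable (a sketch with the 1.1 tooling; the file name and hex-encoded key are placeholders):

  # Dump one row and look whether the columns carry TTL/expiry metadata:
  $ bin/sstable2json /data/MapData007/HOS-hc-196898-Data.db -k 6b6579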
Kind regards,
Pieter Callewaert
Hi,
In 1.1 the commitlog is pre-allocated as files of 128 MB
(https://issues.apache.org/jira/browse/CASSANDRA-3411). The total should, however, not exceed your commitlog size in cassandra.yaml:
commitlog_total_space_in_mb: 4096
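A quick way to sanity-check the pre-allocation against that limit (a sketch; default data path assumed):

  # Should level off around commitlog_total_space_in_mb (4096 MB here):
  $ du -sh /var/lib/cassandra/commitlog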
Kind regards,
Pieter Callewaert
From: Bryce Godfrey
Hi,
Sorry to say I didn't look further into this. I'm now using CentOS 6.2 for the loader without any problems.
Kind regards,
Pieter Callewaert
-Original Message-
From: sj.climber [mailto:sj.clim...@gmail.com]
Sent: Friday, 18 May 2012 3:56
To: cassandra-u...@incubator.apache.org
...the cassandra.yaml, or is it completely independent?
Kind regards,
-Original Message-
From: Pieter Callewaert [mailto:pieter.callewa...@be-mobile.be]
Sent: Wednesday, 9 May 2012 17:41
To: user@cassandra.apache.org
Subject: RE: sstableloader 1.1 won't stream
I don't see any entries...
...(CentOS release 5.8 (Final)) not running Cassandra, to a 3-node Cassandra cluster, all running 1.1.
My next step will be to try sstableloader on one of the nodes of the cluster, to see if that works...
If anyone has any other ideas, please share.
Kind regards,
Pieter Callewaert
-Original Message-
Subject: Re: sstableloader 1.1 won't stream
You may want to upgrade all your nodes to 1.1.
The streaming process connects to every living node of the cluster (you can explicitly disable some nodes), so all nodes need to speak 1.1.
2012/5/7 Pieter Callewaert :
> Hi,
>
> I’m trying to upgrade our
[/10.10.10.102 0/1 (0)] [/10.10.10.100 0/1 (0)] [/10.10.10.101 0/1 (0)] [total: 0 - 0MB/s (avg: 0MB/s)]
...
Does anyone have an idea what I'm doing wrong?
Kind regards,
Pieter Callewaert