eycache
> to a low enough value to get the server started, or increase the heap to hold
> all the objects. (If you are using VisualVM, try to see the object counts.)
>
> Regards,
>
> On Thu, Jun 28, 2012 at 3:40 PM, Gurpreet Singh
> wrote:
>
>> Vijay,
>
that the heap is full.
Thanks
/G
On Thu, Jun 28, 2012 at 12:22 PM, Vijay wrote:
> In 1.1 we don't calculate the key size accurately; hence we have the fix in
> https://issues.apache.org/jira/browse/CASSANDRA-4315
>
> Regards,
>
> On Thu, Jun 28, 2012 at 11:
Does anyone have an explanation for this?
This kinda screws up memory calculations.
/G
On Mon, Jun 25, 2012 at 5:50 PM, Gurpreet Singh wrote:
> Hi,
> I have a question about cassandra 1.1
>
> Just wanted to confirm if key_cache_size_in_mb is the maximum amount of
> memory that key ca
Hi,
I have a question about cassandra 1.1
Just wanted to confirm if key_cache_size_in_mb is the maximum amount of
memory that key cache will use in memory? If not, what is it?
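For reference, the cap being asked about is set in cassandra.yaml; a minimal sketch (the value is just the one mentioned in this thread, not a recommendation):

```yaml
# cassandra.yaml (1.1) -- upper bound on key cache memory.
# Leaving it empty means "auto" (min(5% of heap, 100 MB));
# setting it to 0 disables the key cache entirely.
key_cache_size_in_mb: 800
```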
My observations:
With key cache disabled, I started cassandra. I invoked Full GC through
jconsole a couple of times just
I found a fix for this one, or rather a workaround.
I changed the rpc_server_type in cassandra.yaml from hsha to sync, and the
error went away. I guess there is some issue with the thrift nonblocking
server.
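The workaround described above amounts to a one-line change in cassandra.yaml (sketch):

```yaml
# cassandra.yaml -- which Thrift RPC server implementation to run
# sync = blocking server, one thread per client connection
# hsha = half-sync/half-async non-blocking server
rpc_server_type: sync
```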
Thanks
Gurpreet
On Wed, May 16, 2012 at 7:04 PM, Gurpreet Singh wrote:
> Thanks Aa
13, 2012 at 11:30 AM, ruslan usifov
> wrote:
>
>> Hm, it's very strange. What is the amount of your data? Your Linux kernel
>> version? Java version?
>>
>> PS: I can suggest switching disk_access_mode to standard in your case.
>> PS PS: also upgrade your Linux and Java to the latest.
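The disk access suggestion above maps to another cassandra.yaml setting (sketch; "standard" avoids mmapped reads, whose mapped pages otherwise show up in the process RES):

```yaml
# cassandra.yaml -- how SSTable data and index files are read
# auto            = mmap on 64-bit JVMs
# mmap_index_only = mmap the index files only
# standard        = plain buffered I/O, no mmap
disk_access_mode: standard
```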
to avoid a cassandra restart every couple of days. Something to keep
the RES memory to hit such a high number. I have been constantly monitoring
the RES, was not seeing issues when RES was at 14 gigs.
/G
On Fri, Jun 8, 2012 at 10:02 PM, Gurpreet Singh wrote:
> Aaron, Ruslan,
> I changed th
> see if io is the problem
> http://spyced.blogspot.co.nz/2010/01/linux-performance-basics.html
>
> Cheers
>
> -
> Aaron Morton
> Freelance Developer
> @aaronmorton
> http://www.thelastpickle.com
>
> On 8/06/2012, at 9:00 PM, Gurpreet Singh wrote:
>
>
> 2012/6/8 Gurpreet Singh :
> > Hi,
> > I am testing cassandra 1.1 on a 1 node cluster.
> > 8 core, 16 gb ram, 6 data disks raid0, no swap configured
> >
> > cassandra 1.1.1
> > heap size: 8 gigs
> > key cache size in mb: 800 (used only 200mb till now)
>
Hi,
I am testing cassandra 1.1 on a 1 node cluster.
8 core, 16 gb ram, 6 data disks raid0, no swap configured
cassandra 1.1.1
heap size: 8 gigs
key cache size in mb: 800 (used only 200mb till now)
memtable_total_space_in_mb : 2048
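For readers following along, the numbers above correspond to these settings (the heap size is a JVM flag in cassandra-env.sh rather than a cassandra.yaml entry; this only restates the message, it is not a tuning recommendation):

```yaml
# cassandra.yaml excerpts matching the setup described above
key_cache_size_in_mb: 800          # only ~200 MB used so far
memtable_total_space_in_mb: 2048   # global memtable ceiling
```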
I am running a read workload: about 30 reads/second, no writes at
ation.
>
> Cheers
>
> -
> Aaron Morton
> Freelance Developer
> @aaronmorton
> http://www.thelastpickle.com
>
> On 26/05/2012, at 8:48 PM, Gurpreet Singh wrote:
>
> Hi Aaron,
> Here is the latest on this..
> i switched to a node with 6 disks and run
--
> Aaron Morton
> Freelance Developer
> @aaronmorton
> http://www.thelastpickle.com
>
> On 20/05/2012, at 4:10 PM, Radim Kolar wrote:
>
> Dne 19.5.2012 0:09, Gurpreet Singh napsal(a):
>
> Thanks Radim.
>
> Radim, actually 100 reads per second is achievab
Thanks Radim.
Radim, actually 100 reads per second is achievable even with 2 disks.
But achieving them with a really low avg latency per key is the issue.
I am wondering if anyone has played with index_interval, and how much of a
difference reducing index_interval would make to reads. I
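For context, index_interval is a cassandra.yaml setting (sketch; 128 was the default at the time -- lowering it samples more row keys into memory, trading heap for fewer index scans per read):

```yaml
# cassandra.yaml -- one row key per interval is sampled into memory
# Smaller values (e.g. 64) can reduce read latency at the cost of heap.
index_interval: 128
```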
>
> *From:* Gurpreet Singh [mailto:gurpreet.si...@gmail.com]
> *Sent:* Thursday, May 17, 2012 20:24
> *To:* user@cassandra.apache.org
> *Subject:* Re: cassand
<viktor.jevdoki...@adform.com> wrote:
> > Gurpreet Singh wrote:
> > Any ideas on what could help here bring down the read latency even more ?
>
> Avoid Cassandra forwarding requests to other nodes:
> - Use consistency level ONE;
> - Create data model to do single request wi
> Aaron Morton
> Freelance Developer
> @aaronmorton
> http://www.thelastpickle.com
>
> On 12/05/2012, at 5:44 AM, Gurpreet Singh wrote:
>
> This is hampering our testing of cassandra a lot, and our move to
> cassandra 1.0.9.
> Has anyone seen this before?
This is hampering our testing of cassandra a lot, and our move to cassandra
1.0.9.
Has anyone seen this before? Should I be trying a different version of
cassandra?
/G
On Thu, May 10, 2012 at 11:29 PM, Gurpreet Singh
wrote:
> Hi,
> i have created 1 node cluster of cassandra 1.0.9. I am s
Hi,
I have created a 1-node cluster of cassandra 1.0.9. I am setting this up for
testing reads/writes.
I am seeing the following error in the server system.log:
ERROR [Selector-Thread-7] 2012-05-10 22:44:02,607 TNonblockingServer.java
(line 467) Read an invalid frame size of 0. Are you using TFramed
Is this also true for RackAware with alternating nodes from 2 datacenters on
the ring?
On Wed, Sep 22, 2010 at 7:28 AM, Jonathan Ellis wrote:
> if you're using RackUnawareStrategy that should work.
>
> On Wed, Sep 22, 2010 at 5:27 AM, Daniel Doubleday
> wrote:
> > Hi all,
> >
> > just wanted to
(ring output excerpt)
       Up  194.27 GB  106767287274351479790232508363491106683
ip7    Up  87.3 GB    12804505279929330897897669231312075
/G
On Thu, Sep 16, 2010 at 11:56 PM, Gurpreet Singh
wrote:
> Thanks Benjamin. I realised that, i have reverted using cleanup, got it
> back to old state
Thanks Benjamin. I realised that; I have reverted using cleanup, got it back
to the old state, and am testing the scenario exactly the way you put it.
On Thu, Sep 16, 2010 at 10:56 PM, Benjamin Black wrote:
> On Thu, Sep 16, 2010 at 3:19 PM, Gurpreet Singh
> wrote:
> > 1. I was looking
Hi,
I have a few questions and was looking for an answer.
I have a cluster of 7 Cassandra 0.6.5 nodes in my test setup. RF=2. Original
data size is about 100 gigs; with RF=2, I see the total load on the cluster
is about 200 gigs, all good.
1. I was looking to increase the RF to 3. This process e
Thanks to driftx from cassandra IRC channel for helping out.
This was resolved by increasing the rpc timeout for the bootstrap process.
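For the 0.6.x series discussed here, that timeout lives in storage-conf.xml (RpcTimeoutInMillis); in later yaml-based versions the equivalent knob is sketched below. 20000 is an illustrative bump over the 10000 ms default, not the value actually used in this thread:

```yaml
# cassandra.yaml (0.7+) -- how long a node waits on a reply
# before timing out (the 0.6 equivalent is RpcTimeoutInMillis
# in storage-conf.xml)
rpc_timeout_in_ms: 20000
```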
On Wed, Sep 15, 2010 at 11:43 AM, Gurpreet Singh
wrote:
> This problem still stays unresolved despite numerous restarts to the
> cluster. I can't seem to
org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:59)
/G
On Tue, Sep 14, 2010 at 11:40 AM, Gurpreet Singh
wrote:
> Hi Vineet,
> I have tracked the nodetool streams to completion each time. Below are the
> logs on the source and destination node. There are 3 sstables being
> transfer
If you don't see anything happening, try
> switching off the firewall or iptables.
>
>
> Regards
> Vineet Daniel
> Cell : +918106217121
> Websites :
> Blog <http://vinetedaniel.blogspot.com> |
> Linkedin<http://in.linkedin.com/in/vineetdaniel>
> | Twitte
on.
/G
On Tue, Sep 14, 2010 at 10:05 AM, Gurpreet Singh
wrote:
> I am using cassandra 0.6.5.
>
>
> On Tue, Sep 14, 2010 at 9:16 AM, Gurpreet Singh
> wrote:
>
>> Hi,
>> I have a cassandra cluster of 4 machines, and I am trying to bootstrap 2
>> more machin
I am using cassandra 0.6.5.
On Tue, Sep 14, 2010 at 9:16 AM, Gurpreet Singh wrote:
> Hi,
> I have a cassandra cluster of 4 machines, and I am trying to bootstrap 2
> more machines, one at a time.
> For both these machines, the bootstrapping stays stuck after the streaming
> is don
Hi,
I have a cassandra cluster of 4 machines, and I am trying to bootstrap 2
more machines, one at a time.
For both these machines, the bootstrapping stays stuck after the streaming
is done.
When the nodes come up for bootstrapping, I see all the relevant messages
about getting a new token, assumi
PM, Gurpreet Singh
> wrote:
> > D was once a part of the cluster, but had gone down because of disk
> issues.
> > It's back up; it still has the old data. However, to bootstrap again, I
> > deleted the old Location db (is that a good practice?), and so I see it
> did
>
I am using Cassandra 0.6.5, playing with bootstrapping and decommissioning
nodes.
The latest one I hit is the exception below. I got this exception twice
on the source machine while streaming, in 2 different scenarios (both
involved streaming though)
1. Bootstrapping another machine from the
ay
that it's a seed.
Thanks for all the help,
Gurpreet
On Thu, Sep 9, 2010 at 7:25 AM, Jonathan Ellis wrote:
> On Thu, Sep 9, 2010 at 12:50 AM, Gurpreet Singh
> wrote:
> > 1. what is the purpose of this anticompacted file created during cleanup?
>
> That is all the data that
ata directory defined with enough room, Cassandra will
> use that one.
>
> On Wed, Sep 8, 2010 at 6:25 AM, Gurpreet Singh
> wrote:
> > Hi,
> > version: cassandra 0.6.5
> > I am trying to bootstrap a new node from an existing seed node.
> > The new node seems to
Hi,
version: cassandra 0.6.5
I am trying to bootstrap a new node from an existing seed node.
The new node seems to be stuck with the bootstrapping message, and did not
show any activity.
Only after I checked the logs of the seed node did I realise there had been an
error:
Caused by: java.lang.Unsupp