I'm running into a problem trying to read data from a column family that
includes a number of collections.
Cluster details:
4 nodes running 1.2.6 on VMs with 4 CPUs and 7 GB of RAM.
RAID 0 striped across 4 disks for the data and logs.
Each node has about 500 MB of data currently loaded.
Here is [...]
> --
> Sylvain
>
>
> On Fri, Jul 12, 2013 at 8:17 AM, Paul Ingalls wrote:
> I'm running into a problem trying to read data from a column family that
> includes a number of collections. [...]
I'm running into a problem where instances of my cluster are hitting over 450K
open files. Is this normal for a 4 node 1.2.6 cluster with replication factor
of 3 and about 50GB of data on each node? I can push the file descriptor limit
up, but I plan on having a much larger load, so I'm wondering if this is expected.
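For what it's worth, here is the kind of thing I run to see where the descriptors are going and to raise the limit. This is only a sketch: the process match, the cassandra user and the limits.conf path are assumptions for a typical package install.

# count descriptors held by the Cassandra JVM (assumes a single Cassandra process)
lsof -p "$(pgrep -f CassandraDaemon)" | wc -l

# how many of those are sstable data files, as opposed to sockets etc.
lsof -p "$(pgrep -f CassandraDaemon)" | awk '{print $NF}' | grep -c -- '-Data\.db$'

# raise the open-file limit for the (assumed) cassandra user, then restart the node
echo 'cassandra - nofile 100000' | sudo tee -a /etc/security/limits.conf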
Are you using leveled compaction, and if so what do you have the file size set at? If you're using the defaults, you'll have a ton of really small files.
> I believe Albert Tobey recommended using 256MB for the table
> sstable_size_in_mb to avoid this problem.
>
>
> On Sun, Jul 14, 2013 at 5:10 PM, Paul Ingalls wrote:
> I'm running into a problem where instances of my cluster are hitting over 450K
> open files. [...]
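For reference, this is roughly how that setting is applied from cqlsh; just a sketch, with the keyspace and table names (my_ks / my_table) made up for the example.

# write the ALTER statement to a file and feed it to cqlsh
cat > alter_lcs.cql <<'EOF'
ALTER TABLE my_ks.my_table
  WITH compaction = {'class': 'LeveledCompactionStrategy',
                     'sstable_size_in_mb': 256};
EOF
cqlsh -f alter_lcs.cql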
On Jul 15, 2013, at 12:00 AM, Paul Ingalls wrote:
> I have one table that is using leveled. It was set to 10MB, I will try
> changing it to 256MB. Is there a good way to merge the existing sstables?
>
> On Jul 14, 2013, at 5:32 PM, Jonathan Haddad wrote:
>
>> Are you using leveled compaction [...]
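As far as I know the existing sstables are not rewritten just because the setting changed. One approach that gets suggested is forcing a rewrite with nodetool; the keyspace and table names below are made up, and I'm not certain a single pass regroups everything at the new size.

# rewrite the on-disk sstables for the one column family using leveled compaction
nodetool upgradesstables my_ks my_table

# keep an eye on progress while it runs
nodetool compactionstats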
I'm seeing quite a few of these on pretty much all of the nodes of my 1.2.6
cluster. Is this something I should be worried about? If so, do I need to run
upgradesstables or run a scrub?
ERROR [CompactionExecutor:4] 2013-07-18 18:49:02,609 CassandraDaemon.java (line
192) Exception in thread Thread[CompactionExecutor:4,1,main]
[...]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
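If the sstables themselves turn out to be suspect, scrubbing just the affected column family is less drastic than scrubbing everything. A rough sketch, again with invented keyspace/table names; I believe scrub snapshots the old sstables first, so check free disk space before running it.

# rebuild only the column family that is throwing the exceptions (names assumed)
nodetool scrub my_ks my_table

# once things look healthy, the pre-scrub snapshots for the keyspace can be dropped
nodetool clearsnapshot my_ks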
On Jul 18, 2013, at 11:53 AM, Paul Ingalls wrote:
> I'm seeing quite a few of these on pretty much all of the nodes of my 1.2.6
> cluster. Is this something I should be worried about? [...]
I'm seeing a number of NullPointerExceptions in the log of my cluster. You can
see the log line below. I'm thinking this is probably bad. Any ideas?
ERROR [CompactionExecutor:38] 2013-07-19 17:01:34,494 CassandraDaemon.java
(line 192) Exception in thread Thread[CompactionExecutor:38,1,main]
java.lang.NullPointerException
Check the log output from that thread [CompactionExecutor:38] and see what sstables it was
> compacting. Try removing those. But I would give scrub another chance to get it
> sorted.
>
> Cheers
>
> -
> Aaron Morton
> Cassandra Consultant
> New Zealand
>
> @aaronmorton
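For anyone else chasing the same thing, this is roughly how I went looking for what that thread was compacting; the log path is the default packaged location and may differ on your install.

# pull every line logged by the failing thread, including the INFO line
# that lists the sstables it was compacting when it died
grep 'CompactionExecutor:38' /var/log/cassandra/system.log | less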
I'm getting constant exceptions during compaction of large rows. In fact, I
have not seen one work, even starting from an empty DB. As soon as I start
pushing in data, when a row hits the large threshold, it fails compaction with
this type of stack trace:
INFO [CompactionExecutor:6] 2013-07- [...]
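For context, the "large threshold" here is, as far as I can tell, in_memory_compaction_limit_in_mb in cassandra.yaml, the point at which a row is compacted incrementally on disk instead of in memory. A sketch of checking and bumping it; the path is the default package location and 128 is just an example value, not a recommendation.

# current large-row threshold
grep '^in_memory_compaction_limit_in_mb' /etc/cassandra/cassandra.yaml

# example only: raise it so more rows are compacted in memory, then restart the node
sudo sed -i 's/^in_memory_compaction_limit_in_mb:.*/in_memory_compaction_limit_in_mb: 128/' \
  /etc/cassandra/cassandra.yaml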
I want to check in. I'm sad, mad and afraid. I've been trying to get a 1.2
cluster up and working with my data set for three weeks with no success. I've
been running a 1.1 cluster for 8 months now with no hiccups, but for me at
least 1.2 has been a disaster. I had high hopes for leveraging the new features in 1.2.
From our experience, I think Cassandra is a dangerous choice for a
> young limited funding/experience start-up expecting to scale fast. We are a
> fairly mature start-up with funding. We’ve just spent 3-5 months moving from
> Mongo to Cassandra. It’s been expensive and painful getting Cassandra to read
> like Mongo, but we’ve made it :)
>
>
Hey Radim,
I knew that it would take a while to stabilize, which is why I waited 1/2 a
year before giving it a go. I guess I was just surprised that 6 months wasn't
long enough…
I'll have to look at the differences between 1.2 and 2.0. Is there a good
resource for checking that?
Your experience [...]
We use the leveled compaction strategy where you declare a
> sstable_size_in_mb, not min_compaction_threshold. Much better for our use case.
> http://www.datastax.com/dev/blog/when-to-use-leveled-compaction
> We are read-heavy, latency-sensitive people.
> Lots of TTL’ing.
> Few writes compared to reads.
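In case the concrete syntax is useful to anyone, here is a minimal sketch of a table declared with leveled compaction plus a TTL'd write; the keyspace, table and column names are invented, and the keyspace is assumed to exist already.

cat > lcs_example.cql <<'EOF'
CREATE TABLE my_ks.events (
  user_id    text,
  event_time timestamp,
  payload    text,
  PRIMARY KEY (user_id, event_time)
) WITH compaction = {'class': 'LeveledCompactionStrategy',
                     'sstable_size_in_mb': 256};

-- each write carries its own TTL (30 days here), so old data ages out on its own
INSERT INTO my_ks.events (user_id, event_time, payload)
VALUES ('u1', '2013-08-01 00:00:00', 'example payload')
USING TTL 2592000;
EOF
cqlsh -f lcs_example.cql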
> Could you delete that row and reinsert this row? By the way, how
> large is that one row?
>
> Jason
>
>
> On Wed, Jul 24, 2013 at 9:23 AM, Paul Ingalls wrote:
> I'm getting constant exceptions during compaction of large rows. In fact, I
> have not seen one work, even starting from an empty DB. [...]
It may be worth trying the most recent release to date. If
> you can reproduce with that version, we'd be very interested to know (you can
> even feel free to open a ticket on JIRA).
>
> --
> Sylvain
>
>
> On Wed, Jul 24, 2013 at 10:52 PM, Paul Ingalls wrote:
> Hey Chris,
>
> so I just tried dropping all my data [...]
(The easier it is to reproduce, the
> better the chance we'd be able to fix it quickly.)
>
> --
> Sylvain
>
>
> On Wed, Jul 24, 2013 at 10:54 PM, Paul Ingalls wrote:
> It is pretty much every row that hits the large threshold. I don't think I
> can delete every row that hits the large threshold. [...]
This is the same issue we have been seeing. Still no luck getting a simple
repro case for creating a JIRA issue. Do you have something simple enough to
drop in a JIRA report?
Paul
On Jul 26, 2013, at 8:06 AM, Pavel Kirienko
wrote:
> Hi list,
>
> We run Cassandra 1.2 on a three-node cluster.
I'm going to try 1.2.7, then will be back with results.
>
>
> On Sat, Jul 27, 2013 at 12:18 AM, Paul Ingalls wrote:
>> This is the same issue we have been seeing. Still no luck getting a simple
>> repro case for creating a JIRA issue. Do you have something simple enough to
>> drop in a JIRA report? [...]
Quick question about systems architecture.
Would it be better to run 5 nodes with 7GB RAM and 4 CPUs or 10 nodes with
3.5GB RAM and 2 CPUs?
I'm currently running the former, but am considering the latter. My goal would
be to improve overall performance by spreading the IO across more disks.
I'm trying to get a handle on how newer Cassandra versions handle memory. Most of what
I am seeing via Google, on the wiki, etc. appears old. For example, this wiki
article appears out of date relative to post-1.0:
http://wiki.apache.org/cassandra/MemtableThresholds
Specifically, this is the section I'm referring to. [...]
It seems like reducing the number of
tables would make a big impact, but I'm running 1.2.8 so I'm not sure if it is
still true.
Is there a new rule of thumb?
On Aug 12, 2013, at 10:42 AM, Robert Coli wrote:
> On Mon, Aug 12, 2013 at 10:22 AM, Paul Ingalls wrote:
> At the core, my question [...]
>> You could increase the heap if < 8GB and / or reduce sampling by changing
>> index_interval: 128 to a bigger value (256 - 512), and / or wait for 2.0.*,
>> which, off the top of my head, should move the sampling into native memory,
>> allowing heap size to be independent from the data size per node.
>>
>> This should help. [...]
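For anyone hunting for where those knobs actually live on a package install (paths and values below are assumptions, not recommendations): heap comes from cassandra-env.sh, the sampling interval from cassandra.yaml, and both need a node restart to take effect.

# heap: uncomment/set these two together in cassandra-env.sh,
# e.g. MAX_HEAP_SIZE="8G" and HEAP_NEWSIZE="800M"
grep -E '^#?(MAX_HEAP_SIZE|HEAP_NEWSIZE)=' /etc/cassandra/cassandra-env.sh

# index sampling: a larger index_interval uses less heap at some read-latency cost,
# e.g. change index_interval: 128 to index_interval: 256
grep '^index_interval:' /etc/cassandra/cassandra.yaml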