I forgot to mention we are running Cassandra 1.1.2.
Thanks,
-Mike
On Sep 24, 2012, at 5:00 PM, Michael Theroux wrote:
> Hello,
>
> We are running into an unusual situation that I'm wondering if anyone has any
> insight on. We've been running a Cassandra clust
ily has resolved the issue
for now, but is there something fundamental about row caching that I am
missing?
We are running Cassandra 1.1.2 with a 6 node cluster, with a replication
factor of 3.
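For context, the 1.1-era row cache is configured globally in cassandra.yaml and enabled per column family; a minimal sketch (values are placeholders, not our settings):
```
# cassandra.yaml (1.1.x)
row_cache_size_in_mb: 200
row_cache_provider: SerializingCacheProvider  # off-heap serializing cache
```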
Thanks,
-Mike
server's memory.
This isn't just a Cassandra thing; it simply happens to be very
evident with that system. Generally, to get an effective benefit from
a cache, the data should be contiguously sized and not too large, to
allow effective cache 'lining'.
Bill
On 02/12/12 21
uledTasks:1] 2012-12-04 09:00:37,879 StatusLogger.java (line 116) open.m 26720,51150067
It's appreciated,
Thanks,
-Mike
increased load on a
single column family?
Any insights would be appreciated,
-Mike
On 12/4/2012 3:33 PM, aaron morton wrote:
For background, a discussion on estimating working set
http://www.mail-archive.com/user@cassandra.apache.org/msg25762.html .
You can also just look at the size of te
; problem. Any history on that? I
couldn't find too much information on it.
Thanks,
-Mike
On 12/16/2012 8:41 PM, aaron morton wrote:
1) Am I reading things correctly?
Yes.
If you do a read/slice by name and more than min compaction level
nodes were read, the data is re-written so tha
operations.
Thanks!
-Mike
? I assume a nodetool scrub will clean up old tombstones only if
that row is not in another sstable?
Do tombstones take up bloomfilter space after gc_grace_period?
-Mike
On 1/2/2013 6:41 PM, aaron morton wrote:
1) As one can imagine, the index and bloom filter for this column family are
large
should be fairly
small (about 500,000 skinny rows per node, including replicas).
Any other thoughts on this?
-Mike
On 1/6/2013 3:49 PM, aaron morton wrote:
When these rows are deleted, tombstones will be created and stored in more
recent sstables. Upon compaction of sstables, and after gc_gr
?
This is more related to the current deletion activity, as opposed
to a major compaction (although the question applies to both). As
we delete rows, will our bloom filters grow?
-Mike
On 1/6/2013 3:49 PM, aaron morton wrote:
When these rows are deleted, tombstones will be created and
Has anyone had any gotchas recently that I should be aware of before
performing this upgrade?
In order to upgrade, are the JAR files the only thing that needs to
change? Can everything remain as-is?
Thanks,
-Mike
families (the former
makes sense, I'm just making sure).
-Mike
On 1/16/2013 11:08 AM, Jason Wee wrote:
always check NEWS.txt; for instance, for Cassandra 1.1.3 you need to
run nodetool upgradesstables if your CF has counters.
On Wed, Jan 16, 2013 at 11:58 PM, Mike wrote:
have started at the start of the second session.
Any hints?
-Mike
n a staggered upgrade of the sstables over a number of days.
2) Upgrade one node at a time, running the cluster in a mixed
1.1.2->1.1.9 configuration for a number of days.
I would prefer #1, as with #2, streaming will not work until all the
nodes are upgraded.
I appreciate your thoughts,
-Mike
O
t for several days?
Thanks,
-Mike
On 2/10/2013 3:27 PM, aaron morton wrote:
I would do #1.
You can play with nodetool setcompactionthroughput to speed things up,
but beware nothing comes for free.
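(A minimal illustration of that; 64 is an arbitrary value, and 16 MB/s is the usual default to restore afterwards:)
```
nodetool setcompactionthroughput 64   # raise the throttle while catching up
nodetool setcompactionthroughput 16   # back to the default when done
```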
Cheers
-
Aaron Morton
Freelance Cassandra Developer
New Zealand
nd, but I haven't
found too much on what else needs to be done after the schema change.
I did these tests with Cassandra 1.1.9.
Thanks,
-Mike
ing.
Given we use an RF of 3, and LOCAL_QUORUM consistency for everything,
and we are not seeing errors, something seems to be working correctly.
Any idea what is going on above? Should I be alarmed?
-Mike
[d558])
10.0.8.24    us-east    1c    Up    Normal    90.72 GB    0.04%    Token(bytes[eaa8])
Any help would be appreciated, as if something is going drastically
wrong we need to go back to backups and revert to 1.1.2.
Thanks,
-Mike
On 2/14/2013 8:32 AM
cause updates to be mistakenly dropped for being old.
Also, make sure you are running with a gc_grace period that is high
enough. The default is 10 days.
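(For illustration, gc_grace_seconds is a per-table setting; my_ks/my_cf are placeholders, and 864000 seconds is the 10-day default:)
```
ALTER TABLE my_ks.my_cf WITH gc_grace_seconds = 864000;  -- 10 days
```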
Hope this helps,
-Mike
On 2/15/2013 1:13 PM, Víctor Hugo Oliveira Molinar wrote:
hello everyone!
I have a column family filled with event
Another piece of information that would be useful is advice on how to
properly set the SSTable size for your usecase. I understand the
default is 5MB, a lot of examples show the use of 10MB, and I've seen
cases where people have set it as high as 200MB.
Any information is appreciated,
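For reference, under LCS the size is a compaction option; a sketch in CQL3 syntax, with placeholder names and the 10MB value from the examples above:
```
ALTER TABLE my_ks.my_cf
  WITH compaction = {'class': 'LeveledCompactionStrategy',
                     'sstable_size_in_mb': 10};
```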
Hello Wei,
First thanks for this response.
Out of curiosity, what SSTable size did you choose for your usecase, and
what made you decide on that number?
Thanks,
-Mike
On 2/14/2013 3:51 PM, Wei Zhu wrote:
I haven't tried to switch compaction strategy. We started with LCS.
For us,
h out-of-order within an sstable.)"
We recently upgraded from 1.1.2 to 1.1.9.
Does anyone know if an offline scrub is recommended to be performed when
switching from STCS->LCS after upgrading from 1.1.2?
Any insight would be appreciated,
Thanks,
-Mike
On 2/17/2013 8:57 PM, Wei Zhu wrote:
=Cache,scope=KeyCache,name=Capacity
The Type of Attribute (Value) is java.lang.Object
is it possible to implement the datatype of the gauge as a numeric type
instead of Object, or the other way around, for example using a metrics
reporter, etc.?
Thanks a lot for any suggestions!
Best Regards!
Mike
Thanks for the response Rob,
And yes, the relevel helped the bloom filter issue quite a bit, although it
took a couple of days for the relevel to complete on a single node (so if
anyone tries this, be prepared).
-Mike
Sent from my iPhone
On Sep 23, 2013, at 6:34 PM, Robert Coli wrote:
EBS volumes, but it seems too consistent to be actually caused by
that. The problem is consistent across multiple replacements, and multiple
EC2 regions.
I appreciate any suggestions!
- Mike
8 at 2:24 PM, Mike Torra wrote:
>
>> Hi There -
>>
>> I have noticed an issue where I consistently see high p999 read latency
>> on a node for a few hours after replacing the node. Before replacing the
>> node, the p999 read latency is ~30ms, but after it increa
the problem:
```
pooling: {
coreConnectionsPerHost: {
[distance.local]: 2,
[distance.remote]: 0
}
}
```
Any suggestions?
- Mike
have any other ideas? Why does the row show in `sstabledump`
but not when I query for it?
I appreciate any help or suggestions!
- Mike
etion_info" : {
"local_delete_time" : "2019-01-22T17:59:35Z" }
}
]
}
]
}
```
As expected, almost all of the data except this one suspicious partition
has a ttl and is already expired. But if a partition isn't expired and I
see it in t
be isolated the way it is (ie only one CF even
though I have a few others that share a very similar schema, and only some
nodes) seems like it will help me prevent it.
On Thu, May 2, 2019 at 1:00 PM Paul Chandler wrote:
> Hi Mike,
>
> It sounds like that record may have been deleted, i
Thx for the help Paul - there are definitely some details here I still
don't fully understand, but this helped me resolve the problem and know
what to look for in the future :)
On Fri, May 3, 2019 at 12:44 PM Paul Chandler wrote:
> Hi Mike,
>
> For TWCS the sstable can only be de
at
> effectively blocks all other expiring cells from being purged.
>
> --
> Jeff Jirsa
>
>
> On May 3, 2019, at 7:57 PM, Nick Hatfield
> wrote:
>
> Hi Mike,
>
>
>
> If you will, share your compaction settings. More than likely, your issue
> is from 1 of
properties, you’ll compact
> away most of the other data in those old sstables (but not the partition
> that’s been manually updated)
>
> Also table level TTLs help catch this type of manual manipulation -
> consider adding it if appropriate.
>
> --
> Jeff Jirsa
>
>
Hi all, I would like to know, is there any way to rebuild a particular
column family when all the SSTable files for this column family are
missing? Say we do not have any backup of it.
Thank you.
Regards,
Mike Yeap
cause I didn't use the -full option of the "nodetool rebuild".
Thanks!
Regards,
Mike Yeap
On Thu, May 19, 2016 at 4:03 PM, Ben Slater
wrote:
> Use nodetool listsnapshots to check if you have a snapshot - in default
> configuration, Cassandra takes snapshots for operations
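A sketch of the recovery path that advice points at, assuming a snapshot exists (my_ks/my_cf are placeholders):
```
nodetool listsnapshots
# copy the snapshot's sstables from .../my_ks/my_cf/snapshots/<tag>/
# back into .../my_ks/my_cf/, then load them without a restart:
nodetool refresh my_ks my_cf
```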
ch is just a Docker image where you need to manage that
all yourself (painfully)
--
--mike
Repair is the default
for Cassandra 2.2 and later.
Regards,
Mike Yeap
On Wed, May 25, 2016 at 8:01 AM, Bryan Cheng wrote:
> Hi Luke,
>
> I've never found nodetool status' load to be useful beyond a general
> indicator.
>
> You should expect some small skew, as this
Hi George, are you using NetworkTopologyStrategy as the replication
strategy for your keyspace? If yes, can you check the
cassandra-rackdc.properties of this new node?
https://issues.apache.org/jira/browse/CASSANDRA-8279
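(For context, that file is a small properties file read by GossipingPropertyFileSnitch; these values are only an example:)
```
# cassandra-rackdc.properties
dc=us-east
rack=rack1
```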
Regards,
Mike Yeap
On Wed, May 25, 2016 at 2:31 PM, George Sigletos
Hi Paolo,
a) was there any large insertion done?
b) are there a lot of files in the saved_caches directory?
c) would you consider increasing HEAP_NEWSIZE to, say, 1200M?
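(For (c), a sketch of where that lives; the 8G figure is purely illustrative:)
```
# cassandra-env.sh
MAX_HEAP_SIZE="8G"
HEAP_NEWSIZE="1200M"
```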
Regards,
Mike Yeap
On Fri, May 27, 2016 at 12:39 AM, Paolo Crosato <
paolo.cros...@targaubiest.com> wrote:
> H
memtable_offheap_space_in_mb
Regards,
Mike Yeap
On Sun, May 29, 2016 at 6:18 PM, Bhuvan Rawal wrote:
> Hi,
>
> We are running a 6 Node cluster in 2 DC on DSC 3.0.3, with 3 Node each.
> One of the node was showing UNREACHABLE on other nodes in nodetool
> describecluster and on that node
ake a look at the records in system.compaction_history:
select * from system.compaction_history;
Regards,
Mike Yeap
On Tue, May 31, 2016 at 5:21 PM, Paul Dunkler wrote:
> And - as an addition:
>
> Shouldn't it be documented that even snapshot files can change?
>
> I guess
hts on what to look for? Can we increase thread count/pool sizes
for the messaging service?
Thanks,
Mike
--
Mike Heffner
Librato, Inc.
One thing to add, if we do a rolling restart of the ring the timeouts
disappear entirely for several hours and performance returns to normal.
It's as if something is leaking over time, but we haven't seen any
noticeable change in heap.
On Thu, Jun 23, 2016 at 10:38 AM, Mike Heffner wr
Jens,
We haven't noticed any particularly large GC operations or even persistently
high GC times.
Mike
On Thu, Jun 30, 2016 at 3:20 AM, Jens Rantil wrote:
> Hi,
>
> Could it be garbage collection occurring on nodes that are more heavily
> loaded?
>
> Cheers,
> Jens
Jeff,
Thanks, yeah we updated to the 2.16.4 driver version from source. I don't
believe we've hit the bugs mentioned in earlier driver versions.
Mike
On Mon, Jul 4, 2016 at 11:16 PM, Jeff Jirsa
wrote:
> AWS ubuntu 14.04 AMI ships with buggy enhanced networking driver –
> d
urbed by the initial timeout
spike which leads to dropping all / high-percentage of all subsequent
traffic.
We are planning to continue production use with msg coalescing disabled for
now and may run tests in our staging environments to identify where the
coalescing is breaking this.
Mike
On Tue,
Garo,
No, we didn't notice any change in system load, just the expected spike in
packet counts.
Mike
On Wed, Jul 20, 2016 at 3:49 PM, Juho Mäkinen
wrote:
> Just to pick this up: Did you see any system load spikes? I'm tracing a
> problem on 2.2.7 where my cluster sees load sp
I'm still having issues
I'd appreciate any suggestions on what else I can try to track down the cause
of these OOM exceptions.
- Mike
Date: Wednesday, November 2, 2016 at 1:07 PM
To: user@cassandra.apache.org
Subject: Re: failing bootstraps with OOM
On Wed, Nov 2, 2016 at 3:
e, too. The rest of the jvm.* metrics have this extra '.'
character that causes them to not show up in graphite.
Am I missing something silly here? Appreciate any help or suggestions.
- Mike
Just bumping - has anyone seen this before?
http://stackoverflow.com/questions/41446352/cassandra-3-9-jvm-metrics-have-bad-name
From: Mike Torra
Reply-To: user@cassandra.apache.org
We currently use redis to store sorted sets that we increment many, many times
more than we read. For example, only about 5% of these sets are ever read. We
are getting to the point where redis is becoming difficult to scale (currently
at >20 nodes).
We've started using cassandra for other thin
g>"
mailto:user@cassandra.apache.org>>
Date: Saturday, January 14, 2017 at 1:25 PM
To: "user@cassandra.apache.org<mailto:user@cassandra.apache.org>"
mailto:user@cassandra.apache.org>>
Subject: Re: implementing a 'sorted set' on top of cassandra
Mik
the value of phi_convict_threshold to 12, as suggested
here:
https://docs.datastax.com/en/cassandra/3.0/cassandra/architecture/archDataDistributeFailDetect.html.
This does not seem to have changed anything on the nodes that I've changed
it on.
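(For anyone following along, the change itself is one line in cassandra.yaml; the default is 8:)
```
# cassandra.yaml
phi_convict_threshold: 12
```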
I appreciate any suggestions on what else to try in order to track down
these timeouts.
- Mike
I can't say that I have tried that while the issue is going on, but I have
done such rolling restarts for sure, and the timeouts still occur every
day. What would a rolling restart do to fix the issue?
In fact, as I write this, I am restarting each node one by one in the
eu-west-1 datacenter, and
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 0
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99.0PERCENTILE';
Thanks,
Mike
--
Mike Heffner
Librato, Inc.
we are not using DTCS, but it
matches since the upgrade appeared to only drop fully expired sstables.
Mike
On Sat, Jul 18, 2015 at 3:40 PM, Nate McCall wrote:
> Perhaps https://issues.apache.org/jira/browse/CASSANDRA-9592 got
> compactions moving forward for you? This would explain th
't see any msg that pointed to something obvious. Happy to provide any
more information that may help.
We are pretty much at the point of sprinkling debug around the code to
track down what could be blocking.
Thanks,
Mike
--
Mike Heffner
Librato, Inc.
Paulo,
Thanks for the suggestion, we ran some tests against CMS and saw the same
timeouts. On that note though, we are going to try doubling the instance
sizes and testing with double the heap (even though current usage is low).
Mike
On Wed, Feb 10, 2016 at 3:40 PM, Paulo Motta
wrote:
>
Jeff,
We have both commitlog and data on a 4TB EBS with 10k IOPS.
Mike
On Wed, Feb 10, 2016 at 5:28 PM, Jeff Jirsa
wrote:
> What disk size are you using?
>
>
>
> From: Mike Heffner
> Reply-To: "user@cassandra.apache.org"
> Date: Wednesday, February
Jaydeep,
No, we don't use any light weight transactions.
Mike
On Wed, Feb 17, 2016 at 6:44 PM, Jaydeep Chovatia <
chovatia.jayd...@gmail.com> wrote:
> Are you guys using light weight transactions in your write path?
>
> On Thu, Feb 11, 2016 at 12:36 AM, Fabrice Faco
st on that earlier.
Thanks,
Mike
On Wed, Feb 10, 2016 at 2:51 PM, Mike Heffner wrote:
> Hi all,
>
> We've recently embarked on a project to update our Cassandra
> infrastructure running on EC2. We are long time users of 2.0.x and are
> testing out a move to version 2.2.5 running o
ther reply that we've tracked it to something between
2.0.x and 2.1.x, so we are focusing on narrowing which point release it was
introduced in.
Cheers,
Mike
On Thu, Feb 18, 2016 at 3:33 AM, Alain RODRIGUEZ wrote:
> Hi Mike,
>
> What about the output of tpstats ? I imagine y
rites, batching (via Thrift
mostly) to 5 tables, between 6-1500 rows per batch.
Mike
On Thu, Feb 18, 2016 at 12:22 PM, Anuj Wadehra
wrote:
> What's the GC overhead? Can you share your GC collector and settings?
>
>
> Whats your query pattern? Do you use secondary indexes, batches
sting:
memtable_allocation_type: offheap_objects
memtable_flush_writers: 8
Cheers,
Mike
On Fri, Feb 19, 2016 at 1:46 PM, Nate McCall wrote:
> The biggest change which *might* explain your behavior has to do with the
> changes in memtable flushing between 2.0 and 2.1:
> https://issues.a
Emils,
I realize this may be a big downgrade, but are your timeouts reproducible
under Cassandra 2.1.4?
Mike
On Thu, Feb 25, 2016 at 10:34 AM, Emīls Šolmanis
wrote:
> Having had a read through the archives, I missed this at first, but this
> seems to be *exactly* like what we're e
Emils,
We believe we've tracked it down to the following issue:
https://issues.apache.org/jira/browse/CASSANDRA-11302, introduced in 2.1.5.
We are running a build of 2.2.5 with that patch and so far have not seen
any more timeouts.
Mike
On Fri, Mar 4, 2016 at 3:14 AM, Emīls Šolmanis
wards.
Any assistance would be appreciated.
Thanks!
Mike
--
Mike Heffner
Librato, Inc.
On Mon, Jul 23, 2012 at 1:25 PM, Mike Heffner wrote:
> Hi,
>
> We are migrating from a 0.8.8 ring to a 1.1.2 ring and we are noticing
> missing data post-migration. We use pre-built/configured AMIs so our
> preferred route is to leave our existing production 0.8.8 untouched a
ponent[1] = A
(A:A:C), (A:A:B), (B:A:C)
I could do some iteration and figure this out in more of a brute-force
manner; I'm just curious if there's anything built in that might be more
efficient.
Thanks!
Mike
en Pierce wrote:
> > I'm running 1.1.5; the bug says it's fixed in 1.0.9/1.1.0.
> >
> > How can I check to see why it keeps running HintedHandoff?
> if you have tombstones in system.HintsColumnFamily, use the list command in
> cassandra-cli to check
>
>
--
Mike Heffner
Librato, Inc.
I also agree that starting C* after an upgrade/install seems quite broken
if it was already stopped before the install. However annoying, I have
found this to be the default for most Ubuntu daemon packages.
Mike
On Thu, Nov 15, 2012 at 9:21 AM, Alain RODRIGUEZ wrote:
> We had an issue with co
after commitlog_total_space_in_mb is exceeded?).
With 1.1.5/6, all nanotime commitlogs are replayed on startup regardless of
whether they've been flushed. So in our case manually removing all the
commitlogs after a drain was the only way to prevent their replay.
Mike
On Tue, Nov 20, 2012 at 5:19 AM, Alain
On Tue, Nov 20, 2012 at 2:49 PM, Rob Coli wrote:
> On Mon, Nov 19, 2012 at 7:18 PM, Mike Heffner wrote:
> > We performed a 1.1.3 -> 1.1.6 upgrade and found that all the logs
> replayed
> > regardless of the drain.
>
> Your experience and desire for different (expected
I'm using 1.0.12 and I find that large sstables tend to get compacted
infrequently. I've got data that gets deleted or expired frequently. Is it
possible to use scrub to accelerate the cleanup of expired/deleted data?
--
Mike Smith
Director Development, MailChannels
mpacted. So if you have an old row that is spread out over many files it
> may not get purged.
>
> Hope that helps.
>
>
>
>-
> Aaron Morton
> Freelance Cassandra Developer
> New Zealand
>
> @aaronmorton
> http://www.thelastpickle.com
>
>
Does CQL3 support blob/BytesType literals for INSERT, UPDATE etc commands?
I looked at the CQL3 syntax (http://cassandra.apache.org/doc/cql3/CQL.html)
and at the DataStax 1.2 docs.
As for why I'd want such a thing, I just wanted to initialize some test
values for a blob column with cqlsh.
Thanks
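(For later readers: once CASSANDRA-4450 landed, CQL3 gained hex blob constants, so on newer versions something like this works; the table is hypothetical:)
```
CREATE TABLE test_blobs (id int PRIMARY KEY, data blob);
INSERT INTO test_blobs (id, data) VALUES (1, 0xcafebabe);
```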
55,247 |
> 127.0.0.1 | 3644
>
>
> It reads from the secondary index and discards keys that are outside of
> the token range.
>
> Cheers
>
>
> -
> Aaron Morton
> Freelance Cassandra Developer
> New Zealand
>
> @aaronmorton
a question mark in for timestamp failed as expected and I don't see
a method on the DataStax java driver BoundStatement for setting it.
Thanks in advance.
/Mike Sample
Thanks Sylvain. I should have scanned Jira first. Glad to see it's on the
todo list.
On Wed, Feb 6, 2013 at 12:24 AM, Sylvain Lebresne wrote:
> Not yet: https://issues.apache.org/jira/browse/CASSANDRA-4450
>
> --
> Sylvain
>
>
> On Wed, Feb 6, 2013 at 9:06 AM, M
It has been suggested to me that we could save a fair amount of time and
money by taking a snapshot of only 1 replica (so every third node for
most column families). Assuming that we are okay with not having the
absolute latest data, does this have any possibility of working? I feel
like it s
eloper
New Zealand
@aaronmorton
http://www.thelastpickle.com
On 28/02/2013, at 3:21 PM, Mike Koh wrote:
It has been suggested to me that we could save a fair amount of time and money
by taking a snapshot of only 1 replica (so every third node for most column
families). Assuming that we are oka
I'm trying to change compaction strategy one node at a time. I'm using
jmxterm like this:
`echo 'set -b
org.apache.cassandra.db:type=ColumnFamilies,keyspace=my_ks,columnfamily=my_cf
CompactionParametersJson
\{"class":"TimeWindowCompactionStrategy","compaction_window_unit":"HOURS","compaction_windo
a way to tell when/if the local node has successfully updated the
compaction strategy? Looking at the sstable files, it seems like they are
still based on STCS but I don't know how to be sure.
Appreciate any tips or suggestions!
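One way to check is to read the same attribute back with jmxterm (a sketch; the jar name, ks/cf names, and port are placeholders):
```
echo 'get -b org.apache.cassandra.db:type=ColumnFamilies,keyspace=my_ks,columnfamily=my_cf CompactionParametersJson' \
  | java -jar jmxterm-1.0.0-uber.jar -l localhost:7199 -n
```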
On Mon, Mar 13, 2017 at 5:30 PM, Mike Torra wrote:
>
I'm trying to use sstableloader to bulk load some data to my 4 DC cluster,
and I can't quite get it to work. Here is how I'm trying to run it:
sstableloader -d 127.0.0.1 -i {csv list of private ips of nodes in cluster}
myks/mttest
At first this seems to work, with a steady stream of logging like
overall data storage and create another
table and periodically transfer data from the main to the child table. But I
believe I'll get the same problem, because Cassandra simply doesn't sort like an
RDBMS. So there must be an idea behind the philosophy of Cassandra.
Can anyone help me out?
Best regards
Mike Wenzel
(1) https://www.datastax.com/dev/blog/we-shall-have-order
ng else I should be doing to
gracefully restart the cluster? It could be something to do with the nodejs
driver, but I can't find anything there to try.
I appreciate any suggestions or advice.
- Mike
g nodes easier (or rather, we need to make drain do
> the right thing), but in this case, your data model looks like the biggest
> culprit (unless it's an incomplete recreation).
>
> - Jeff
>
>
> On Tue, Feb 6, 2018 at 10:58 AM, Mike Torra wrote:
>
>> Hi -
>
No, I am not
On Wed, Feb 7, 2018 at 11:35 AM, Jeff Jirsa wrote:
> Are you using internode ssl?
>
>
> --
> Jeff Jirsa
>
>
> On Feb 7, 2018, at 8:24 AM, Mike Torra wrote:
>
> Thanks for the feedback guys. That example data model was indeed
> abbreviated - the re
Any other ideas? If I simply stop the node, there is no latency problem,
but once I start the node the problem appears. This happens consistently
for all nodes in the cluster
On Wed, Feb 7, 2018 at 11:36 AM, Mike Torra wrote:
> No, I am not
>
> On Wed, Feb 7, 2018 at 11:35 AM, Jeff Jir
s that I moved
`nodetool disablegossip` to after `nodetool drain`. This is pretty
anecdotal, but is there any explanation for why this might happen? I'll be
monitoring my cluster closely to see if this change does indeed fix the
problem.
On Mon, Feb 12, 2018 at 9:33 AM, Mike Torra wrote:
>
Then could it be that calling `nodetool drain` after calling `nodetool
disablegossip` is what causes the problem?
On Mon, Feb 12, 2018 at 6:12 PM, kurt greaves wrote:
>
> Actually, it's not really clear to me why disablebinary and thrift are
> necessary prior to drain, because they happen in th
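For the archive, the full ordering that appeared to avoid the problem in this thread, as a sketch (all standard nodetool commands; as noted above, the early steps may be redundant since drain also stops listeners):
```
nodetool disablebinary
nodetool disablethrift
nodetool drain          # drain first...
nodetool disablegossip  # ...then disablegossip, per the reordering above
```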
a node is full CPU (all other
cores are idling), so I assume I'm CPU bound on the node side. But why? What is the
node doing? Why does it take such a long time?
--
Mike Neir
Liquid Web, Inc.
Infrastructure Administrator
lthough I'm not sure what that means in practice. Will the counters be
99.99% accurate? How often will they be over- or under-counted?
Thanks, Mike.
I am investigating Java Out of memory heap errors. So I created an .hprof
file and loaded it into Eclipse Memory Analyzer Tool which gave some
"Problem Suspects".
First one looks like:
One instance of "org.apache.cassandra.db.ColumnFamilyStore" loaded by
"sun.misc.Launcher$AppClassLoader
Hi Boole,
Have you tried chef? There is this cookbook for deploying cassandra:
http://community.opscode.com/cookbooks/cassandra
MikeA
On 21 November 2013 01:33, Boole.Z.Guo (mis.cnsh04.Newegg) 41442 <
boole.z@newegg.com> wrote:
> Hi all,
>
> Is there any open source software for automati
om streaming it seems that simply
restarting will inevitably hit this problem again.
Cheers,
Mike
--
Mike Heffner
Librato, Inc.