Hello everyone,
I was trying to get some cluster wide statistics of the total insertions
performed in my 3 node Cassandra 0.8.6 cluster. So I wrote a nice little
program that gets the CompletedTasks attribute of
org.apache.cassandra.db:type=Commitlog from every node, sums up the values
and records
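A minimal sketch of that kind of poller might look like the following (the node
host names and the default JMX port 7199 are assumptions; the MBean and attribute
names are the ones mentioned above):

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class CommitlogInsertCounter {
    public static void main(String[] args) throws Exception {
        // Node addresses and the JMX port are placeholders; adjust for your cluster.
        String[] hosts = {"node1", "node2", "node3"};
        long clusterTotal = 0;
        for (String host : hosts) {
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://" + host + ":7199/jmxrmi");
            JMXConnector connector = JMXConnectorFactory.connect(url);
            try {
                MBeanServerConnection mbeans = connector.getMBeanServerConnection();
                ObjectName commitlog = new ObjectName("org.apache.cassandra.db:type=Commitlog");
                // CompletedTasks counts mutations appended to this node's commit log.
                long completed = (Long) mbeans.getAttribute(commitlog, "CompletedTasks");
                System.out.println(host + ": " + completed);
                clusterTotal += completed;
            } finally {
                connector.close();
            }
        }
        System.out.println("Cluster total: " + clusterTotal);
    }
}

Keep in mind that with RF > 1 every write reaches the commit log on each replica,
so summing this attribute across nodes counts a single client insert roughly RF times.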
>> Aaron Morton
>> Freelance Cassandra Developer
>> @aaronmorton
>> http://www.thelastpickle.com
>>
>> On 12/10/2011, at 3:44 AM, Alexandru Dan Sicoe wrote:
>>
>> Hello everyone,
>> I was trying to get some cluster wide stat
Thanks for the detailed answers, Dan, what you said makes sense. I think my
biggest worry right now is making correct predictions of my data storage
space based on the measurements with the current cluster. Other than that I
should be fairly comfortable with the rest of the HW specs.
Thanks for
Hi guys,
It's interesting to see this thread. I recently discovered a similar
problem on my 3 node Cassandra 0.8.5 cluster. It was working fine, then I
took a node down to see how it behaves. All of a sudden I couldn't write or
read because of this exception being thrown:
Exception in thread "mai
use-case.
>
> --
> / Peter Schuller (@scode, http://worldmodscode.wordpress.com)
>
--
Alexandru Dan Sicoe
MEng, CERN Marie Curie ACEOLE Fellow
>> but then did some more research and found out that it doesn't make sense to
>> use SSDs for sequential appends, because they don't offer a performance
>> advantage over rotational media. So I am going to use a rotational
>> disk for the commit log and a
edin.com/skills
> [2] http://www.linkedin.com/in/tjake
>
>
> --
> http://twitter.com/tjake
>
--
Alexandru Dan Sicoe
MEng, CERN Marie Curie ACEOLE Fellow
Hi,
I'm using the community version of OpsCenter to monitor my cluster. At
the moment I'm interested in storage space. In the performance metrics
page, if I choose to see the graph of the metric "CF: SSTable Size" for a
certain CF of interest, two things are plotted on the graph: Total disk
used
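If it helps to cross-check the numbers OpsCenter draws, the per-CF disk figures
can also be read directly over JMX. A sketch, assuming the standard per-column-family
MBean and its TotalDiskSpaceUsed / LiveDiskSpaceUsed attributes (host, keyspace and
CF names are placeholders):

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class CfDiskSpace {
    public static void main(String[] args) throws Exception {
        // Host, port, keyspace and column family are placeholders.
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://node1:7199/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbeans = connector.getMBeanServerConnection();
            ObjectName cf = new ObjectName(
                    "org.apache.cassandra.db:type=ColumnFamilies,keyspace=MyKeyspace,columnfamily=MyCF");
            long total = (Long) mbeans.getAttribute(cf, "TotalDiskSpaceUsed");
            long live = (Long) mbeans.getAttribute(cf, "LiveDiskSpaceUsed");
            // "Total" includes SSTables that are obsolete but not yet deleted;
            // "live" is the space taken by the SSTables currently in use.
            System.out.println("total bytes: " + total + ", live bytes: " + live);
        } finally {
            connector.close();
        }
    }
}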
Hello everyone,
4 node Cassandra 0.8.5 cluster with RF=2, replica placement strategy =
SimpleStrategy, write consistency level = ANY, memtable_flush_after_mins
= 1440; memtable_operations_in_millions = 0.1; memtable_throughput_in_mb = 40;
max_compaction_threshold = 32; min_compaction_threshold = 4;
I
>
> On 11/28/2011 11:11 AM, Alexandru Dan Sicoe wrote:
>
>> Hello everyone,
>>
>> 4 node Cassandra 0.8.5 cluster with RF=2, replica placement strategy =
>> SimpleStrategy, write consistency level = ANY, memtable_flush_after_mins
>> =1440; memtabl
Hello everyone,
4 node Cassandra 0.8.5 cluster with RF=2.
One node started throwing exceptions in its log:
ERROR 10:02:46,837 Fatal exception in thread Thread[FlushWriter:1317,5,main]
java.lang.RuntimeException: java.lang.RuntimeException: Insufficient disk
space to flush 17296 bytes
at
deleted at startup. You will then need to run
> repair on that node to get back any data that was missed while it was
> full. If your commit log was on a different device you may not even have
> lost much.
>
> -Jeremiah
>
>
> On 12/01/2011 04:16 AM, Alexandru Dan Sicoe wro
tems are
people using?
Cheers,
Alex
On Thu, Dec 1, 2011 at 10:08 PM, Jahangir Mohammed
wrote:
> Yes, mostly sounds like it. In our case failed repairs were causing
> accumulation of the tmp files.
>
> Thanks,
> Jahangir Mohammed.
>
> On Thu, Dec 1, 2011 at 2:43 PM, Alexandru
On Fri, Dec 2, 2011 at 8:35 AM, Alexandru Dan Sicoe <
> sicoe.alexan...@googlemail.com> wrote:
>
>> Ok, so my problem persisted. On the node that is filling up the hard disk,
>> I have a 230 GB disk. Right after I restart the node it deletes the tmp files
>> and reaches 55G
Hello everyone.
3 node Cassandra 0.8.5 cluster. I've left the system running in production
environment for long term testing. I've accumulated about 350GB of data
with RF=2. The machines I used for the tests are older and need to be
replaced. Because of this I need to export the data to a permanen
want to store your data on. Use the sstable loader to load the
> sstables from all of the current machines into the new machine. Run major
> compaction a couple times. You will have all of the data on one machine.
>
>
> On 12/07/2011 10:17 AM, Alexandru Dan Sicoe wrote:
>
Hi,
I am thinking of strategies to deploy my application that uses a 3 node
Cassandra cluster.
Quick recap: I have several client applications that feed in about 2
million different variables (each representing a different monitoring
value/channel) in Cassandra. The system receives updates for ea
the problem. Lower the
>> thresholds for those tables if you don't want the commit logs to go crazy.
>>
>> -Jeremiah
>>
>> On 11/28/2011 11:11 AM, Alexandru Dan Sicoe wrote:
>>
>>> Hello everyone,
>>>
>>> 4 node Cassandra 0.8.5 clu
Hello,
I'm currently doing my master's project. I need to store lots of time series
data of any type (String, int, booleans, arrays of the previous) with a high
write rate (20 MBytes/sec -> 170 TBytes/year - note: not running continuously)
but less strict read requirements. This is monitoring data fr
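(As a quick arithmetic check on those figures: 20 MBytes/sec sustained around the
clock would be roughly 20 MB x 86,400 s x 365, or about 630 TBytes/year, so 170
TBytes/year corresponds to writing at that rate only about a quarter to a third of
the time, which matches the note about not running continuously.)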