No. We built a pluggable cache provider for memcache.
On Sun, Oct 30, 2011 at 7:31 PM, Mohit Anchlia wrote:
> On Sun, Oct 30, 2011 at 6:53 PM, Chris Goffinet wrote:
> > On Sun, Oct 30, 2011 at 3:34 PM, Sorin Julean wrote:
> >> Hey Chris,
> >>
> >> Thanks for sharing all the info.
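For a concrete picture of what such a pluggable provider can look like, here is a minimal Java sketch. The CacheProvider interface is hypothetical and the spymemcached client is an assumed choice; Twitter's actual implementation is not shown in this thread.

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import net.spy.memcached.MemcachedClient;

    // Hypothetical provider interface -- Cassandra's real row-cache hook
    // (row_cache_provider in 1.0-era versions) differs in detail.
    interface CacheProvider {
        byte[] get(String key);
        void put(String key, byte[] row, int ttlSeconds);
    }

    // Memcache-backed implementation: one memcached per node, as in the thread.
    class MemcacheCacheProvider implements CacheProvider {
        private final MemcachedClient client;

        MemcacheCacheProvider(String host, int port) throws IOException {
            this.client = new MemcachedClient(new InetSocketAddress(host, port));
        }

        public byte[] get(String key) {
            return (byte[]) client.get(key);   // null on cache miss
        }

        public void put(String key, byte[] row, int ttlSeconds) {
            client.set(key, ttlSeconds, row);  // asynchronous store
        }
    }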
On Sun, Oct 30, 2011 at 3:34 PM, Sorin Julean wrote:
> Hey Chris,
>
> Thanks for sharing all the info.
> I have a few questions:
> 1. What are you doing with so much memory :)? How much of it do you
> allocate for heap?

Max heap is 12 GB. We use the rest for cache; we run memcache on each node.
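For reference, a heap split like the one described is normally set in conf/cassandra-env.sh. A minimal sketch; the 12G value comes from the thread, the young-generation size is an illustrative assumption:

    # conf/cassandra-env.sh
    MAX_HEAP_SIZE="12G"   # JVM heap for Cassandra
    HEAP_NEWSIZE="800M"   # young generation; tune per workload
    # Remaining RAM is left to the OS page cache and the per-node memcached.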
On 30.10.2011 23:34, Sorin Julean wrote:
> Hey Chris,
> Thanks for sharing all the info.
> I have a few questions:
> 1. What are you doing with so much memory :)?

Cassandra eats memory like there is no tomorrow on large databases. It
keeps some structures in memory whose size depends on database size ...
Hey Chris,

Thanks for sharing all the info.
I have a few questions:
1. What are you doing with so much memory :)? How much of it do you
allocate for heap?
2. What is your network speed? Do you use trunks? Do you have a dedicated
VLAN for gossip/storage traffic?

Cheers,
Sorin
Hi Chris,

Thanks for your post. I can see you guys handle extremely large amounts of
data compared to my system. Yes, I will own the racks and the machines, but
the problem is that I am limited by actual physical space in our data center
(believe it or not) and also by budget. It would be hard for me to ...
RE: RAID0 Recommendation

Cassandra supports multiple data file directories. Because we do
compactions, it's just much easier to deal with one data file directory
that is striped across all disks as a single volume (RAID0). There are
other ways to accomplish this, though. At Twitter we use software RAID ...
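The snippet cuts off here; on Linux the usual software-RAID tool is mdadm. A minimal sketch, assuming four data disks sdb through sde (device names, filesystem, and mount point are illustrative, not from the thread):

    # Stripe four disks into one RAID0 volume for the data directory
    mdadm --create /dev/md0 --level=0 --raid-devices=4 \
          /dev/sdb /dev/sdc /dev/sdd /dev/sde
    mkfs.ext4 /dev/md0
    mount /dev/md0 /var/lib/cassandra/data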
On Tue, Oct 25, 2011 at 5:23 AM, Alexandru Sicoe wrote:
> At the moment I am partitioning the data in Cassandra in 75 CFs

You might consider not using so many column families. I am not a Cassandra
expert, but from what I've seen floated around, there is currently a unique
memtable (with its own flush thresholds and overhead) per column family.
Each memtable buffers writes in memory before flushing, so 75 CFs means 75
memtables: at, say, 64 MB apiece, that is almost 5 GB of memory gone before
anything else.
If you need to have this data available outside the private network,
why not create the cluster outside it in the first place? It seems
inefficient to do bulk transfers. You might think of an alternate design
using queues, subscribers, or exposing Cassandra over HTTP, etc.
You could also look at ht...
Thanks for the detailed answers, Dan; what you said makes sense. I think my
biggest worry right now is making correct predictions of my data storage
space based on the measurements with the current cluster. Other than that I
should be fairly comfortable with the rest of the HW specs.

Thanks for ...
This may help in determining your data storage requirements:
http://btoddb-cass-storage.blogspot.com/
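As a back-of-envelope check while sizing (numbers illustrative, not from the thread), usable application data is roughly

    raw cluster disk / (replication factor x compaction headroom)

so, for example, 12 nodes x 1 TB with RF=3 and ~2x headroom for compaction leaves room for about 2 TB of application data.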
> 2. ... So I am going to use a rotational disk for the commit log and an SSD
> for data. Does this make sense?

Yes; just keep in mind, however, that the primary characteristic of SSDs is
lower seek times, which translate into faster random access. We have a
similar Cassandra use case (time series data ...
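For what it's worth, that split is expressed in cassandra.yaml; the keys are standard, the mount points below are illustrative assumptions:

    commitlog_directory: /mnt/hdd/cassandra/commitlog   # sequential appends; HDD is fine
    data_file_directories:
        - /mnt/ssd/cassandra/data                       # random reads; benefits from SSD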