>
> What I normally do is install plain CentOS (not any AMI build for
> Cassandra), and I don't use them for production! I run them for testing,
> fire drills, and some cassandra-stress benchmarks. I will check whether
> I've had more than 5h of Cassandra uptime. I can even put one up now, do
> the test, and get the results back to you.


Hey, thanks for letting me know that. And yep, same here! It's just a plain
CentOS 7 VM I've been using. None of this is for production. I also have an
AWS account that I use only for testing. I can try setting it up there too
and get back to you with my results.
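For reference while testing: the default heap sizing that cassandra-env.sh
computes can be sketched roughly as below. This is a simplified Python model
of the 2.x calculate_heap_sizes() formula, not the script itself, so treat
the numbers as approximate:

```python
def default_heap_mb(system_ram_mb, cpu_cores):
    """Rough model of calculate_heap_sizes() in the 2.x cassandra-env.sh:
    MAX_HEAP_SIZE = max(min(ram/2, 1024MB), min(ram/4, 8192MB))
    HEAP_NEWSIZE  = min(100MB * cores, MAX_HEAP_SIZE / 4)
    """
    max_heap = max(min(system_ram_mb // 2, 1024), min(system_ram_mb // 4, 8192))
    new_size = min(100 * cpu_cores, max_heap // 4)
    return max_heap, new_size

# A 2GB droplet with 2 cores would default to a 1GB heap:
print(default_heap_mb(2048, 2))  # (1024, 200)
```

So on a 2GB box the defaults already claim half of RAM for the heap; the
800M/200M override discussed below in this thread is under those defaults,
which suggests any remaining pressure is coming from off-heap allocations
and the OS rather than the Java heap itself.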

Thank you!
Tim

On Thu, Feb 19, 2015 at 12:52 PM, Carlos Rolo <r...@pythian.com> wrote:

> What I normally do is install plain CentOS (not any AMI build for
> Cassandra), and I don't use them for production! I run them for testing,
> fire drills, and some cassandra-stress benchmarks. I will check whether
> I've had more than 5h of Cassandra uptime. I can even put one up now, do
> the test, and get the results back to you.
>
> Regards,
>
> Carlos Juzarte Rolo
> Cassandra Consultant
>
> Pythian - Love your data
>
> rolo@pythian | Twitter: cjrolo | Linkedin: *linkedin.com/in/carlosjuzarterolo
> <http://linkedin.com/in/carlosjuzarterolo>*
> Tel: 1649
> www.pythian.com
>
> On Thu, Feb 19, 2015 at 6:41 PM, Tim Dunphy <bluethu...@gmail.com> wrote:
>
>>> I have Cassandra instances running on VMs with smaller RAM (1GB even)
>>> and I don't go OOM when testing them. Although I use them in AWS and
>>> with other providers, I've never tried Digital Ocean.
>>> Does Cassandra just fail after some time running, or is it failing on
>>> some specific read/write?
>>
>>
>> Hi  Carlos,
>>
>> Ok, that's really interesting. So I have to ask, did you have to do
>> anything special to get Cassandra to run on those 1GB AWS instances? I'd
>> love to do the same. I tried on AWS as well and it failed due to lack of
>> memory there too.
>>
>> And as far as I can tell, there's no specific reason for it to fail other
>> than lack of memory. It doesn't seem to matter what data I use either,
>> because even if I remove the data directory with rm -rf, the phenomenon is
>> the same: it'll run for a while, usually about 5 hours, and then just
>> crash with the word 'killed' as the last line of output.
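A 'killed' last line like the one described above is typically the Linux
OOM killer, not a Java OutOfMemoryError, and the kernel logs it in dmesg or
/var/log/messages on CentOS. A minimal sketch of checking for that message,
using a made-up log line in the format the kernel emits:

```python
import re

# Hypothetical dmesg / /var/log/messages line; the format is what the
# Linux OOM killer logs when it kills a process.
line = ("Feb 19 05:07:42 web2 kernel: Out of memory: Kill process 2864 "
        "(java) score 912 or sacrifice child")

m = re.search(r"Out of memory: Kill process (\d+) \((\S+)\)", line)
if m:
    print(f"OOM killer took pid {m.group(1)} ({m.group(2)})")
```

If a line like that shows up in the kernel log around each crash, the VM as
a whole ran out of memory and the kernel chose the JVM as its victim.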
>>
>> Thanks
>> Tim
>>
>>
>> On Thu, Feb 19, 2015 at 3:40 AM, Carlos Rolo <r...@pythian.com> wrote:
>>
>>> I have Cassandra instances running on VMs with smaller RAM (1GB even)
>>> and I don't go OOM when testing them. Although I use them in AWS and
>>> with other providers, I've never tried Digital Ocean.
>>>
>>> Does Cassandra just fail after some time running, or is it failing on
>>> some specific read/write?
>>>
>>> Regards,
>>>
>>> Carlos Juzarte Rolo
>>> Cassandra Consultant
>>>
>>> Pythian - Love your data
>>>
>>> rolo@pythian | Twitter: cjrolo | Linkedin: 
>>> *linkedin.com/in/carlosjuzarterolo
>>> <http://linkedin.com/in/carlosjuzarterolo>*
>>> Tel: 1649
>>> www.pythian.com
>>>
>>> On Thu, Feb 19, 2015 at 7:16 AM, Tim Dunphy <bluethu...@gmail.com>
>>> wrote:
>>>
>>>> Hey guys,
>>>>
>>>> After the upgrade to 2.1.3, and after running for almost exactly 5
>>>> hours, Cassandra did indeed crash again on the 2GB RAM VM.
>>>>
>>>> This is how the memory on the VM looked after the crash:
>>>>
>>>> [root@web2:~] # free -m
>>>>              total       used       free     shared    buffers     cached
>>>> Mem:          2002       1227        774          8         45        386
>>>> -/+ buffers/cache:        794       1207
>>>> Swap:            0          0          0
>>>>
>>>>
>>>> And that's with this set in the cassandra-env.sh file:
>>>>
>>>> MAX_HEAP_SIZE="800M"
>>>> HEAP_NEWSIZE="200M"
>>>>
>>>> So I'm thinking now, do I just have to abandon this idea I have of
>>>> running Cassandra on a 2GB instance? Or is this something we can all agree
>>>> can be done? And if so, how can we do that? :)
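One caveat about reading the free -m output above: Linux counts buffers and
page cache as reclaimable, so the -/+ buffers/cache line is the realistic
"available" figure. A small sketch of that arithmetic, with the values taken
from the output above:

```python
def really_free_mb(free, buffers, cached):
    # Buffers and page cache are reclaimable, so they count as available.
    return free + buffers + cached

# Values from the free -m output after the crash:
print(really_free_mb(774, 45, 386))  # 1205, close to the 1207 MB on the
                                     # -/+ buffers/cache line (rounding)
```

Since the snapshot was taken after the JVM had already been killed, over a
gigabyte being effectively free is expected and doesn't say much about
memory pressure at the moment of the crash.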
>>>>
>>>> Thanks
>>>> Tim
>>>>
>>>> On Wed, Feb 18, 2015 at 8:39 PM, Jason Kushmaul | WDA <
>>>> jason.kushm...@wda.com> wrote:
>>>>
>>>>> I asked this previously when a similar message came through, with a
>>>>> similar response.
>>>>>
>>>>>
>>>>>
>>>>> planetcassandra seems to have it “right”, in that stable=2.0,
>>>>> development=2.1, whereas the apache site says stable is 2.1.
>>>>>
>>>>> “Right” in that they assume the latest minor version is development.
>>>>> Why not have the apache site do the same?  That’s just my lowly
>>>>> non-contributing opinion though.
>>>>>
>>>>>
>>>>>
>>>>> *Jason  *
>>>>>
>>>>>
>>>>>
>>>>> *From:* Andrew [mailto:redmu...@gmail.com]
>>>>> *Sent:* Wednesday, February 18, 2015 8:26 PM
>>>>> *To:* Robert Coli; user@cassandra.apache.org
>>>>> *Subject:* Re: run cassandra on a small instance
>>>>>
>>>>>
>>>>>
>>>>> Robert,
>>>>>
>>>>>
>>>>>
>>>>> Let me know if I’m off base about this—but I feel like I see a lot of
>>>>> posts that are like this (i.e., use this arbitrary version, not this other
>>>>> arbitrary version).  Why are releases going out if they’re “broken”?  This
>>>>> seems like a very confusing way for new (and existing) users to approach
>>>>> versions...
>>>>>
>>>>>
>>>>>
>>>>> Andrew
>>>>>
>>>>>
>>>>>
>>>>> On February 18, 2015 at 5:16:27 PM, Robert Coli (rc...@eventbrite.com)
>>>>> wrote:
>>>>>
>>>>> On Wed, Feb 18, 2015 at 5:09 PM, Tim Dunphy <bluethu...@gmail.com>
>>>>> wrote:
>>>>>
>>>>> I'm attempting to run Cassandra 2.1.2 on a smallish 2GB RAM instance
>>>>> over at Digital Ocean. It's a CentOS 7 host.
>>>>>
>>>>>
>>>>>
>>>>> 2.1.2 is IMO broken and should not be used for any purpose.
>>>>>
>>>>>
>>>>>
>>>>> Use 2.1.1 or 2.1.3.
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> https://engineering.eventbrite.com/what-version-of-cassandra-should-i-run/
>>>>>
>>>>>
>>>>>
>>>>> =Rob
>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> GPG me!!
>>>>
>>>> gpg --keyserver pool.sks-keyservers.net --recv-keys F186197B
>>>>
>>>>
>>>
>>> --
>>
>>
>> --
>> GPG me!!
>>
>> gpg --keyserver pool.sks-keyservers.net --recv-keys F186197B
>>
>>
>
> --


-- 
GPG me!!

gpg --keyserver pool.sks-keyservers.net --recv-keys F186197B
