> What does your schema look like, your total data size and your read/write
> patterns? Maybe you are simply doing a heavier workload than a small
> instance can handle.
Hi Mark,

OK, well as mentioned this is all test data with almost literally no
workload. So I doubt it's the data and/or workload that's causing it to
crash on the 2GB instance after 5 hours. But when I describe the schema
with my test data, this is what I see:

cqlsh> use joke_fire1
   ... ;
cqlsh:joke_fire1> describe schema;

CREATE KEYSPACE joke_fire1 WITH replication = {'class': 'SimpleStrategy',
'replication_factor': '3'} AND durable_writes = true;

'module' object has no attribute 'UserTypesMeta'

If I take a look at the size of the total amount of data, this is what I see:

[root@beta-new:/etc/alternatives/cassandrahome/data] #du -hs data
17M     data

Which includes the system keyspace. But the test data that I created for my
use is only 15MB:

[root@beta-new:/etc/alternatives/cassandrahome/data/data] #du -hs joke_fire1/
15M     joke_fire1/

But just to see if it's my data that could be causing the problem, I tried
removing it all and setting the IP of the 2GB instance itself as the seed
node. I'll try running that for a while and see if it crashes.

I also tried installing a plain Cassandra 2.1.3 onto a plain CentOS 6.6
instance on the AWS free tier. It's a t2.micro instance. So far it's
running. I'll keep an eye on both.

At this point, I'm thinking that there might be something about my data
that could be causing it to fail after 5 or so hours. However, I might need
some help diagnosing the data, as I'm not familiar with how to do that in
Cassandra.

Thanks!
Tim

On Thu, Feb 19, 2015 at 3:51 AM, Mark Reddy <mark.l.re...@gmail.com> wrote:

> What does your schema look like, your total data size and your read/write
> patterns? Maybe you are simply doing a heavier workload than a small
> instance can handle.
>
> Regards,
> Mark
>
> On 19 February 2015 at 08:40, Carlos Rolo <r...@pythian.com> wrote:
>
>> I have Cassandra instances running on VMs with smaller RAM (1GB even) and
>> I don't go OOM when testing them.
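[For the data-diagnosis question above: the `'module' object has no attribute
'UserTypesMeta'` message appears to be a cqlsh-side AttributeError (a
client/driver issue), likely unrelated to the data on disk, and `nodetool
cfstats` is the usual way to get per-table statistics. A rough sketch of a
per-keyspace disk-usage check, assuming a data directory layout like the one
in the thread; the `DATA_DIR` path and keyspace names are taken from Tim's
output, so adjust them for your install:]

```shell
#!/bin/sh
# Hedged sketch: report on-disk size per keyspace under a Cassandra data
# directory, skipping the internal system keyspaces, so you can see how
# much space your own data actually takes. Not an official tool; just du.
data_usage() {
  data_dir="$1"                       # e.g. /etc/alternatives/cassandrahome/data/data
  for ks in "$data_dir"/*/; do
    ks_name=$(basename "$ks")
    case "$ks_name" in
      system|system_*) continue ;;    # skip Cassandra's internal keyspaces
    esac
    du -sh "$ks"                      # size followed by the keyspace path
  done
}

# Usage (path is an assumption; substitute your own data directory):
# data_usage /etc/alternatives/cassandrahome/data/data
```

[Pairing that with `nodetool cfstats <keyspace>` should show whether any
single table dominates the 15MB.]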
>> Although I use them in AWS and other providers, I never tried Digital
>> Ocean.
>>
>> Does Cassandra just fail after some time running, or is it failing on
>> some specific read/write?
>>
>> Regards,
>>
>> Carlos Juzarte Rolo
>> Cassandra Consultant
>>
>> Pythian - Love your data
>>
>> rolo@pythian | Twitter: cjrolo | Linkedin: linkedin.com/in/carlosjuzarterolo
>> Tel: 1649
>> www.pythian.com
>>
>> On Thu, Feb 19, 2015 at 7:16 AM, Tim Dunphy <bluethu...@gmail.com> wrote:
>>
>>> Hey guys,
>>>
>>> After the upgrade to 2.1.3, and after almost exactly 5 hours running,
>>> Cassandra did indeed crash again on the 2GB RAM VM.
>>>
>>> This is how the memory on the VM looked after the crash:
>>>
>>> [root@web2:~] #free -m
>>>              total       used       free     shared    buffers     cached
>>> Mem:          2002       1227        774          8         45        386
>>> -/+ buffers/cache:        794       1207
>>> Swap:            0          0          0
>>>
>>> And that's with this set in the cassandra-env.sh file:
>>>
>>> MAX_HEAP_SIZE="800M"
>>> HEAP_NEWSIZE="200M"
>>>
>>> So I'm thinking now, do I just have to abandon this idea I have of
>>> running Cassandra on a 2GB instance? Or is this something we can all
>>> agree can be done? And if so, how can we do that? :)
>>>
>>> Thanks
>>> Tim
>>>
>>> On Wed, Feb 18, 2015 at 8:39 PM, Jason Kushmaul | WDA <
>>> jason.kushm...@wda.com> wrote:
>>>
>>>> I asked this previously when a similar message came through, with a
>>>> similar response.
>>>>
>>>> planetcassandra seems to have it “right”, in that stable=2.0,
>>>> development=2.1, whereas the apache site says stable is 2.1.
>>>>
>>>> “Right” in that they assume the latest minor version is development.
>>>> Why not have the apache site do the same? That’s just my lowly
>>>> non-contributing opinion though.
>>>> *Jason*
>>>>
>>>> *From:* Andrew [mailto:redmu...@gmail.com]
>>>> *Sent:* Wednesday, February 18, 2015 8:26 PM
>>>> *To:* Robert Coli; user@cassandra.apache.org
>>>> *Subject:* Re: run cassandra on a small instance
>>>>
>>>> Robert,
>>>>
>>>> Let me know if I’m off base about this—but I feel like I see a lot of
>>>> posts that are like this (i.e., use this arbitrary version, not this
>>>> other arbitrary version). Why are releases going out if they’re
>>>> “broken”? This seems like a very confusing way for new (and existing)
>>>> users to approach versions...
>>>>
>>>> Andrew
>>>>
>>>> On February 18, 2015 at 5:16:27 PM, Robert Coli (rc...@eventbrite.com)
>>>> wrote:
>>>>
>>>> On Wed, Feb 18, 2015 at 5:09 PM, Tim Dunphy <bluethu...@gmail.com>
>>>> wrote:
>>>>
>>>> I'm attempting to run Cassandra 2.1.2 on a smallish 2GB RAM instance
>>>> over at Digital Ocean. It's a CentOS 7 host.
>>>>
>>>> 2.1.2 is IMO broken and should not be used for any purpose.
>>>>
>>>> Use 2.1.1 or 2.1.3.
>>>>
>>>> https://engineering.eventbrite.com/what-version-of-cassandra-should-i-run/
>>>>
>>>> =Rob
>>>
>>> --
>>> GPG me!!
>>>
>>> gpg --keyserver pool.sks-keyservers.net --recv-keys F186197B
>>
>> --
>>
>

--
GPG me!!

gpg --keyserver pool.sks-keyservers.net --recv-keys F186197B
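[For anyone else sizing a small box: the stock 2.1-era cassandra-env.sh
auto-sizes the heap as roughly max(min(RAM/2, 1024MB), min(RAM/4, 8192MB))
when MAX_HEAP_SIZE is unset. A small shell sketch of that calculation,
working in MB; this is a paraphrase of the script's logic, not the script
itself, so treat the numbers as approximate:]

```shell
#!/bin/sh
# Approximate the default heap auto-sizing from cassandra-env.sh (2.1-era):
#   max( min(RAM/2, 1024MB), min(RAM/4, 8192MB) )
# Paraphrased from the shipped script; check your own cassandra-env.sh.
calc_heap() {
  ram_mb="$1"
  half=$(( ram_mb / 2 ))
  quarter=$(( ram_mb / 4 ))
  [ "$half" -gt 1024 ] && half=1024        # cap RAM/2 at 1G
  [ "$quarter" -gt 8192 ] && quarter=8192  # cap RAM/4 at 8G
  if [ "$half" -gt "$quarter" ]; then
    echo "$half"
  else
    echo "$quarter"
  fi
}

# On Tim's 2002MB VM this gives max(min(1001,1024), min(500,8192)) = 1001MB,
# so his explicit MAX_HEAP_SIZE="800M" is already below the default.
```

[That suggests the crashes are less about the heap cap itself and more
about total process footprint (heap plus off-heap plus JVM overhead) on a
2GB, swapless VM.]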