Is the data still physically on node 1? During startup, does it log opening
the SSTables?
Another problem that sometimes occurs is the schema being out of sync; node 1 may not
have all the CFs and so will not have opened the SSTables.
Check the logs and check if the physical data is there.
Ch
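(For anyone who wants to script that check, here is a rough Java sketch. The data
directory and log path are assumptions for a default RPM install, and "Opening" matches
the lines Cassandra normally logs when it loads SSTables at startup; adjust both for
your setup.)

    import java.io.*;
    import java.nio.file.*;

    // Minimal check: are the SSTable data files on disk, and did startup log opening them?
    // The default paths below are assumptions; pass your own as arguments.
    public class NodeDataCheck {
        public static void main(String[] args) throws IOException {
            Path dataDir = Paths.get(args.length > 0 ? args[0] : "/var/lib/cassandra/data");
            Path systemLog = Paths.get(args.length > 1 ? args[1] : "/var/log/cassandra/system.log");

            // 1) List the physical SSTable data files under the data directory.
            Files.walk(dataDir)
                 .filter(p -> p.getFileName().toString().endsWith("-Data.db"))
                 .forEach(p -> System.out.println("SSTable on disk: " + p));

            // 2) Show the startup lines that mention opening SSTables.
            try (BufferedReader r = Files.newBufferedReader(systemLog)) {
                r.lines()
                 .filter(line -> line.contains("Opening"))
                 .forEach(line -> System.out.println("log: " + line));
            }
        }
    }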
Sorry, my bad. I messed up the yaml file. I will try it again. Thanks a lot!
On Tue, Sep 20, 2011 at 5:08 PM, aaron morton wrote:
> Is the data still physically on node 1 ? During start up does it log about
> opening the SSTables ?
>
> Another sometimes problem is if the schema is out of sync, and
The Cassandra team is pleased to announce the release of Apache Cassandra
version 0.8.6.
Cassandra is a highly scalable second-generation distributed database,
bringing together Dynamo's fully distributed design and Bigtable's
ColumnFamily-based data model. You can read more here:
http://cassand
Great! Just waiting for it.
On Tue, Sep 20, 2011 at 6:12 PM, Sylvain Lebresne wrote:
> The Cassandra team is pleased to announce the release of Apache Cassandra
> version 0.8.6.
>
> Cassandra is a highly scalable second-generation distributed database,
> bringing together Dynamo's fully distribu
Hi,
I am running Cassandra on Linux VMs; each VM has 2GB RAM and a 4-core CPU.
I am using the RPM distribution and have set -Xmx to 512M in cassandra-env.sh.
After a day of running I see that the Cassandra process is using over 80% of
memory, which is more than 3 times 512M.
As a result, after 2 days of running, Cassa
wiki.apache.org/cassandra/FAQ#mmap
On Tue, Sep 20, 2011 at 8:12 AM, Evgeniy Ryabitskiy
wrote:
> Hi,
> I am running Cassandra over Linux VMs, each VM is: 2GB RAM, 4 core CPU.
> Using RPM distribution. I have set -Xmx to 512M in cassandra-env.sh
>
> After day of running I see that Cassandra process
Greetings,
I'd like to announce Cassandra Conference in Tokyo on October 5th.
The conference mainly focuses on "real world" usage of Apache Cassandra.
The talks include:
- How Cassandra is used behind a location-sharing mobile application
- Amazon S3-like cloud storage backed by Cassandra
etc.
Hi,
I'd like to release version 0.8.6-1 of Mojo's Cassandra Maven Plugin
to sync up with the recent 0.8.6 release of Apache Cassandra.
We solved 2 issues:
http://jira.codehaus.org/secure/ReleaseNote.jspa?projectId=12121&version=17425
Staging Repository:
https://nexus.codehaus.org/content/repos
Thanks for the reply. Now it looks much clearer.
Top shows this:
  PID USER     PR NI VIRT  *RES*  SHR  S %CPU %MEM TIME+    COMMAND
   67 cassandr 18  0 6267m *1.6g* 805m S  0.3 79.0 24:35.80 java
Huge VIRT is OK since it's a 64-bit architecture,
but RES keeps growing.
And I still have questions:
1) If I
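The FAQ entry boils down to mmap: Cassandra mmaps its SSTables, and the kernel counts
those file-backed pages in RES even though it can reclaim them under memory pressure,
so RES growing well past -Xmx is expected. Below is a rough, Linux-only Java sketch for
seeing that split yourself; the /proc/<pid>/smaps parsing and the -Data.db/-Index.db
name matching are heuristics of this example, not an official tool.

    import java.io.IOException;
    import java.nio.file.*;
    import java.util.List;

    // Rough, Linux-only sketch: split a process's resident memory (RES) into
    // "mmapped SSTable files" vs "everything else" by reading /proc/<pid>/smaps.
    // Run as a user allowed to read that file (e.g. the cassandra user or root).
    public class ResBreakdown {
        public static void main(String[] args) throws IOException {
            String pid = args[0];
            List<String> lines = Files.readAllLines(Paths.get("/proc/" + pid + "/smaps"));

            long sstableKb = 0, otherKb = 0;
            boolean currentIsSSTable = false;

            for (String line : lines) {
                if (line.matches("^[0-9a-f]+-[0-9a-f]+ .*")) {
                    // Header line of a new mapping; a trailing path names the mapped file, if any.
                    currentIsSSTable = line.endsWith("-Data.db") || line.endsWith("-Index.db");
                } else if (line.startsWith("Rss:")) {
                    long kb = Long.parseLong(line.replaceAll("[^0-9]", ""));
                    if (currentIsSSTable) sstableKb += kb; else otherKb += kb;
                }
            }
            System.out.printf("mmapped SSTables: %d kB, other (heap, stacks, ...): %d kB%n",
                              sstableKb, otherKb);
        }
    }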
I'll be in Tokyo the 4th through the 7th -- let me know if you'd like
to meet outside the conference.
On Tue, Sep 20, 2011 at 9:01 AM, Yuki Morishita wrote:
> Greetings,
>
> I'd like to announce Cassandra Conference in Tokyo on October 5th.
> The conference mainly focuses on "real world" usage of
One question: I was told that Brisk was going to be made compatible with this
version of Cassandra; will we see a new Brisk release this week as well?
Anthony
On Tue, Sep 20, 2011 at 3:12 AM, Sylvain Lebresne wrote:
> The Cassandra team is pleased to announce the release of Apache Cassandra
> vers
Column Family: CF_1
SSTable count: 249
Space used (live): 15120496328
Space used (total): 15120496328
Number of Keys (estimate): 18860800
Memtable Columns Count: 0
Memtable Data Size: 0
I'm running into consistent problems when storing values larger than 15MB
into Cassandra, and I was hoping for some advice on tracking down what's
going wrong. From the FAQ it seems like what I'm trying to do is possible,
so I assume I'm messing something up with my configuration. I have a minimal
I believe that minor compactions work on SSTables of the same or similar size, so as
long as your SSTables do not fall within a small size range of each other,
Cassandra does not see an opportunity to run a minor compaction.
- Original Message -
From: "myreasoner"
To: cassandra-u...
Yes, in 0.8 the compaction thresholds apply to buckets of files.
Files are put into buckets where each file's size is within +/- half of the
bucket's average size. Files smaller than 50MB (from memory) all go into the
first bucket.
You can change the compaction thresholds via nodetool, and tri
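To make that bucketing concrete, here is a small illustrative sketch. It is not
Cassandra's actual SizeTieredCompaction code, just the rule described above (files
within +/- half of a bucket's average size group together, everything under 50MB
shares one bucket), with the default min compaction threshold of 4 as the trigger.

    import java.util.*;

    // Illustrative sketch of the bucketing described above (not Cassandra's actual code).
    public class CompactionBuckets {
        static final long SMALL_FILE_CUTOFF = 50L * 1024 * 1024;

        public static List<List<Long>> bucket(List<Long> sstableSizes) {
            List<List<Long>> buckets = new ArrayList<>();
            List<Long> smallFiles = new ArrayList<>();  // single bucket for files < 50MB
            buckets.add(smallFiles);

            for (long size : sstableSizes) {
                if (size < SMALL_FILE_CUTOFF) {
                    smallFiles.add(size);
                    continue;
                }
                boolean placed = false;
                for (List<Long> b : buckets) {
                    if (b == smallFiles || b.isEmpty()) continue;
                    double avg = b.stream().mapToLong(Long::longValue).average().getAsDouble();
                    // A file joins a bucket if its size is within +/- 50% of the bucket average.
                    if (size > avg * 0.5 && size < avg * 1.5) {
                        b.add(size);
                        placed = true;
                        break;
                    }
                }
                if (!placed) {
                    buckets.add(new ArrayList<>(Collections.singletonList(size)));
                }
            }
            return buckets;
        }

        public static void main(String[] args) {
            List<List<Long>> buckets = bucket(Arrays.asList(10L << 20, 60L << 20, 70L << 20,
                                                            65L << 20, 75L << 20, 400L << 20));
            for (List<Long> b : buckets) {
                // A minor compaction candidate needs at least min_compaction_threshold (default 4) files.
                System.out.println(b + (b.size() >= 4 ? "  <- compaction candidate" : ""));
            }
        }
    }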
From cassandra.yaml:
# Frame size for thrift (maximum field length).
# 0 disables TFramedTransport in favor of TSocket. This option
# is deprecated; we strongly recommend using Framed mode.
thrift_framed_transport_size_in_mb: 15
So you can either increase that limit, or split your write into mul
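If raising the frame size is not an option, the split can be done client-side: break
the blob into chunks smaller than the frame limit and store each chunk as its own
column under the same row key. A client-agnostic sketch of the splitting step is
below; the 10MB chunk size and the ":0001"-style column names are just choices made
for this example.

    import java.util.*;

    // Illustrative sketch of splitting a large blob into sub-frame-size chunks so each
    // write stays under thrift_framed_transport_size_in_mb.
    public class BlobChunker {
        static final int CHUNK_SIZE = 10 * 1024 * 1024; // comfortably below a 15MB frame limit

        public static Map<String, byte[]> split(String columnPrefix, byte[] blob) {
            Map<String, byte[]> columns = new LinkedHashMap<>();
            int chunkCount = (blob.length + CHUNK_SIZE - 1) / CHUNK_SIZE;
            for (int i = 0; i < chunkCount; i++) {
                int from = i * CHUNK_SIZE;
                int to = Math.min(from + CHUNK_SIZE, blob.length);
                columns.put(String.format("%s:%04d", columnPrefix, i),
                            Arrays.copyOfRange(blob, from, to));
            }
            return columns;
        }

        public static void main(String[] args) {
            byte[] bigValue = new byte[25 * 1024 * 1024]; // e.g. a 25MB payload
            Map<String, byte[]> cols = split("payload", bigValue);
            // Each entry would be written as a separate column in the same row;
            // the reader reassembles them in column-name order.
            cols.forEach((name, data) -> System.out.println(name + " -> " + data.length + " bytes"));
        }
    }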
Pete,
See this thread
http://groups.google.com/group/hector-users/browse_thread/thread/cb3e72c85dbdd398/82b18ffca0e3940a?#82b18ffca0e3940a
for a bit more info.
Jim
On Tue, Sep 20, 2011 at 9:02 PM, Tyler Hobbs wrote:
> From cassandra.yaml:
>
> # Frame size for thrift (maximum field length).
> #
CASSANDRA-526 provides this ability; I just want to make sure.
Let's say I have one node with token 0, and now I want to add 2 new
ones, with initial_token set at 1/3 and 2/3 of the full range.
Now I start nodes 2 and 3 with the -Dcassandra.join_ring=false option,
and later use JMX joinRing() t
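For reference, here is a minimal JMX client along those lines. The parameterless
joinRing() operation and the org.apache.cassandra.db:type=StorageService MBean name
match what nodetool talks to, but treat those names, and the default 7199 JMX port,
as assumptions to verify against your version.

    import javax.management.*;
    import javax.management.remote.*;

    // Sketch: connect to a node started with -Dcassandra.join_ring=false and ask it
    // to join the ring via JMX. Host and port are examples.
    public class JoinRingClient {
        public static void main(String[] args) throws Exception {
            String host = args.length > 0 ? args[0] : "127.0.0.1";
            String port = args.length > 1 ? args[1] : "7199"; // default Cassandra JMX port

            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://" + host + ":" + port + "/jmxrmi");
            JMXConnector connector = JMXConnectorFactory.connect(url);
            try {
                MBeanServerConnection mbs = connector.getMBeanServerConnection();
                ObjectName storageService =
                        new ObjectName("org.apache.cassandra.db:type=StorageService");

                // Invoke the parameterless joinRing() operation on the StorageService MBean.
                mbs.invoke(storageService, "joinRing", new Object[0], new String[0]);
                System.out.println("joinRing() invoked on " + host + ":" + port);
            } finally {
                connector.close();
            }
        }
    }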