After I put my Cassandra cluster under heavy load (1k/s writes + 1k/s reads) for 1 day,
I accumulated about 30GB of data in sstables. I think the caches have warmed up to their
stable state.
When I started this, I manually cat'd all the sstables to /dev/null, so that they are loaded into memory
(the
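A rough sketch of that warm-up step, assuming the default data directory layout (the keyspace name is just an example):

    # read every sstable data file once so the OS page cache is warm
    cat /var/lib/cassandra/data/MyKeyspace/*-Data.db > /dev/null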
Thanks a lot. The problem was that every terminal I open on Debian 6 lacks JAVA_HOME
and PATH; I have to export them every time I start the virtual
machine. By the way, I have Debian and Cassandra running inside VMware Workstation.
Thanks again. I'm following the README file.
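A minimal way to make those stick across terminals, assuming an OpenJDK 6 package install (the exact JVM path is a guess; adjust for your machine):

    # append to ~/.bashrc (or /etc/profile) so every new shell picks them up
    export JAVA_HOME=/usr/lib/jvm/java-6-openjdk
    export PATH=$JAVA_HOME/bin:$PATH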
On Mon, Sep 12, 2011 at 1
Should, although we've only tested 0.8-to-1.0 directly. That would be
a useful report to contribute!
On Thu, Sep 15, 2011 at 3:45 PM, Anand Somani wrote:
> So I should be able to do rolling upgrade from 0.7 to 1.0 (not there in the
> release notes, but I assume that is work in progress).
>
> Tha
You should be able to update it, which will leave existing sstables
untouched but new ones will be generated compressed. (You could issue
scrub to rewrite the existing ones compressed too, if you wanted to
force that.)
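A sketch of both steps, assuming the 1.0-era cassandra-cli and nodetool syntax (the keyspace and column family names are illustrative; double-check the attribute names against the CLI help):

    # in cassandra-cli, against the existing column family:
    update column family MyCF with
        compression_options = {sstable_compression: SnappyCompressor, chunk_length_kb: 64}
        and compaction_strategy = 'LeveledCompactionStrategy';

    # then, optionally, rewrite the existing sstables with the new settings:
    nodetool -h localhost scrub MyKeyspace MyCF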
On Thu, Sep 15, 2011 at 3:44 PM, Jeremiah Jordan
wrote:
> Is it possible to u
On Thu, Sep 15, 2011 at 3:05 PM, mcasandra wrote:
> With Leveldb, is it going to make reads slower?
No.
Qualified: compared to "major compaction" under the tiered strategy,
leveled reads will usually be a little slower for update-heavy loads.
(For insert-mostly workloads compaction doesn't really
Yes my bad.
http://wiki.apache.org/cassandra/Operations#Token_selection
Thanks
-
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com
On 16/09/2011, at 6:50 AM, Anand Somani wrote:
> You are right, good catch, thanks!
>
> On Thu, Sep 15, 2011 at
> What I’m missing is a clear picture of the behavior for CL.ONE. I’m unsure about which nodes
> are used by ONE and how the filtering of missing data/errors is done. I’ve
> landed in ReadCallback.java, but error handling is out of my reach for the
> moment.
Start with StorageProxy.fetch() to see which nodes are
Hi All,
Just a quick note to encourage those of you who are in the Greater Boston area
to join:
1) The Boston subgroup of the Cassandra LinkedIn Group:
http://www.linkedin.com/groups?home=&gid=3973913
2) The Boston Cassandra Meetup Group:
http://www.meetup.com/Boston-Cassandra-Users
The first
So I should be able to do rolling upgrade from 0.7 to 1.0 (not there in the
release notes, but I assume that is work in progress).
Thanks
On Thu, Sep 15, 2011 at 1:36 PM, amulya rattan wrote:
> Isn't this leveldb implementation based on Google's LevelDB?
> http://code.google.com/p/leveldb/
> From wha
Is it possible to update an existing column family with
{sstable_compression: SnappyCompressor,
compaction_strategy: LeveledCompactionStrategy}? Or will I have to make a
new column family and migrate my data to it?
-Jeremiah
On 09/15/2011 01:01 PM, Sylvain Lebresne wrote:
The Cassandra team is
Isn't this leveldb implementation based on Google's LevelDB?
http://code.google.com/p/leveldb/
From what I know, it's quite fast.
On Thu, Sep 15, 2011 at 4:04 PM, mcasandra wrote:
> This is great news! Is it possible to do a write-up of the main changes like
> "Leveldb" and explain it a little bit. I g
This is great news! Is it possible to do a write-up of the main changes, like
"Leveldb", and explain them a little bit? I get lost reading JIRA and sometimes
it is difficult to follow the thread. It looks like there are some major
changes in this release.
>
>
> Cool. We've deactivated all tasks against these nodes and will scrub them
> all in parallel, apply the encryption options you specified, and see where
> that gets us. Thanks for the assistance.
>
To follow up:
* We scrubbed all the nodes
* We applied the encryption options specified
* A re
You are right, good catch, thanks!
On Thu, Sep 15, 2011 at 8:28 AM, Konstantin Naryshkin
wrote:
> Wait, his nodes are going SC, SC, AT, AT. Shouldn't they go SC, AT, SC, AT?
> By which I mean that if he adds another node to the ring (or lowers the
> replication factor), he will have a node that i
Congrats! Thanks for all the hard work!
On Thu, Sep 15, 2011 at 11:01 AM, Sylvain Lebresne wrote:
> The Cassandra team is pleased to announce the release of the first beta for
> the future Apache Cassandra 1.0.
>
> Let me first stress that this is beta software and as such is *not* ready
> for
>
The Cassandra team is pleased to announce the release of the first beta for
the future Apache Cassandra 1.0.
Let me first stress that this is beta software and as such is *not* ready for
production use.
The goal of this release is to give a preview of what will be Cassandra 1.0
and more important
Wait, his nodes are going SC, SC, AT, AT. Shouldn't they go SC, AT, SC, AT? By
which I mean that if he adds another node to the ring (or lowers the
replication factor), he will have a node that is under-utilized. The rings in
his data centers have the tokens:
SC: 0, 1
AT: 85070591730234615865843
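For reference, a rough way to compute an alternating per-DC layout (assuming RandomPartitioner's 2**127 token space and Python 2 of that era; the +1 offset for the second DC is the usual convention from the Token_selection wiki page):

    # tokens for two nodes per DC; give the second DC a +1 offset so tokens never collide
    python -c "for i in range(2): print i * (2**127 / 2)"        # SC: 0 and 85070591730234615865843651857942052864
    python -c "for i in range(2): print i * (2**127 / 2) + 1"    # AT: 1 and 85070591730234615865843651857942052865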
On Thu, Sep 15, 2011 at 10:03 AM, Jonathan Ellis wrote:
> If you added the new node as a seed, it would ignore bootstrap mode.
> And bootstrap / repair *do* use streaming so you'll want to re-run
> repair post-scrub. (No need to re-bootstrap since you're repairing.)
>
Ah, of course. That's wha
If you added the new node as a seed, it would ignore bootstrap mode.
And bootstrap / repair *do* use streaming so you'll want to re-run
repair post-scrub. (No need to re-bootstrap since you're repairing.)
Scrub is a little less heavyweight than major compaction but same
ballpark. It runs sstable
Hinted handoff doesn't use streaming mode, so it doesn't care.
("Streaming" to Cassandra means sending raw sstable file ranges to
another node. HH just uses the normal column-based write path.)
On Thu, Sep 15, 2011 at 8:24 AM, Ethan Rowe wrote:
> Thanks, Jonathan. I'll try the workaround and s
On Thu, Sep 15, 2011 at 9:21 AM, Jonathan Ellis wrote:
> Where did the data loss come in?
>
The outcome of the analytical jobs run overnight while some of these repairs
were (not) running is consistent with what I would expect if perhaps 20-30%
of the source data was missing. Given the strong c
Thanks, Jonathan. I'll try the workaround and see if that gets the streams
flowing properly.
As I mentioned before, we did not run scrub yet. What is the consequence of
letting the streams from the hinted handoffs complete if scrub hasn't been
run on these nodes?
I'm currently running scrub on
Where did the data loss come in?
Scrub is safe to run in parallel.
On Thu, Sep 15, 2011 at 8:08 AM, Ethan Rowe wrote:
> After further review, I'm definitely going to scrub all the original nodes
> in the cluster.
> We've lost some data as a result of this situation. It can be restored, but
> th
That means we missed a place we needed to special-case for backwards
compatibility -- the workaround is, add an empty encryption_options section
to cassandra.yaml:
encryption_options:
    internode_encryption: none
    keystore: conf/.keystore
    keystore_password: cassandra
    truststore: conf/
After further review, I'm definitely going to scrub all the original nodes
in the cluster.
We've lost some data as a result of this situation. It can be restored, but
the question is what to do with the problematic new node first. I don't
particularly care about the data that's on it, since I'm
I just noticed the following from one of Jonathan Ellis' messages yesterday:
> Added to NEWS:
>
>- After upgrading, run nodetool scrub against each node before running
> repair, moving nodes, or adding new ones.
We did not do this, as it was not indicated as necessary in the news when w
Here's a typical log slice (not terribly informative, I fear):
> INFO [AntiEntropyStage:2] 2011-09-15 05:41:36,106 AntiEntropyService.java (line 884) Performing streaming repair of 1003 ranges with /10.34.90.8 for (29990798416657667504332586989223299634,542966817681532720374307732343496
On Thu, Sep 15, 2011 at 1:16 PM, Ethan Rowe wrote:
> Hi.
>
> We've been running a 7-node cluster with RF 3, QUORUM reads/writes in our
> production environment for a few months. It's been consistently stable
> during this period, particularly once we got our maintenance strategy fully
> worked ou
Hi.
We've been running a 7-node cluster with RF 3, QUORUM reads/writes in our
production environment for a few months. It's been consistently stable
during this period, particularly once we got our maintenance strategy fully
worked out (per node, one repair a week, one major compaction a week, th
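A rough cron sketch of that kind of weekly schedule, per node (days and times purely illustrative; in practice you would stagger them across nodes so repairs don't overlap):

    # weekly anti-entropy repair and weekly major compaction
    0 2 * * 0   nodetool -h localhost repair
    0 2 * * 3   nodetool -h localhost compact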
got it! thanks!
On Thu, Sep 15, 2011 at 4:10 PM, Peter Schuller wrote:
> > in one of my node, I found many hprof files in the cassandra installation
> > directory, they are using as much as 200GB disk space. other nodes
> didn't
> > have those files.
> > turns out that those files are used for
Now I get this; any help would be greatly appreciated.
./bin/word_count
11/09/15 12:28:28 INFO WordCount: output reducer type: cassandra
11/09/15 12:28:29 INFO jvm.JvmMetrics: Initializing JVM Metrics with
processName=JobTracker, sessionId=
11/09/15 12:28:30 INFO mapred.JobClient: Running job: jo
> in one of my nodes, I found many hprof files in the cassandra installation
> directory; they are using as much as 200GB of disk space. Other nodes didn't
> have those files.
> It turns out that those files are used for memory analysis; I'm not sure how
> they are generated?
You're probably getting OutOfM
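A hedged guess at the mechanism, assuming the stock cassandra-env.sh of that era (the flags are standard HotSpot options; the dump path is illustrative):

    # cassandra-env.sh enables heap dumps on OutOfMemoryError, which is where java_pidNNNN.hprof files come from
    JVM_OPTS="$JVM_OPTS -XX:+HeapDumpOnOutOfMemoryError"
    # to keep dumps out of the install directory, point them somewhere with more space:
    JVM_OPTS="$JVM_OPTS -XX:HeapDumpPath=/var/tmp"

The .hprof files themselves are only needed for post-mortem heap analysis and can be deleted once you're done with them.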
In one of my nodes, I found many hprof files in the cassandra installation
directory; they are using as much as 200GB of disk space. Other nodes didn't
have those files.
It turns out that those files are used for memory analysis; I'm not sure how
they are generated?
like these:
java_pid10626.hprof java
I do not agree here. I value "consistency" (it's more a data miss than a
consistency issue here) over performance in my case.
I'm okay with handling the popping up of the Spanish Inquisition in the current DC
by triggering a new read with a stronger CL somewhere else (for example in
other DCs).
If the data is no