Hi everyone,
I'm running into heap pressure issues and I seem to have traced the
problem to very large bloom filters. The bloom_filter_fp_chance is
set to the default value on all my column families but I'd like to try
changing it on some of them. Can I just change that value through the
cli and restart or are there any concerns I should have before trying
to tweak that parameter?
Thanks Peter.
On Thu, Sep 13, 2012 at 12:52 PM, Peter Schuller
wrote:
>> changing it on some of them. Can I just change that value through the
>> cli and restart or are there any concerns I should have before trying
>> to tweak that parameter?
>
> You can change it, you don't have to restart. It
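
For reference, a change like that is made with a cassandra-cli statement along these lines (the column family name and the 0.01 target false-positive rate here are just placeholders):

```
update column family MyCF with bloom_filter_fp_chance = 0.01;
```

Note that bloom filters are per-SSTable, so the new setting only takes effect for SSTables written after the change; existing filters get rebuilt as SSTables are compacted (or when you run scrub/upgradesstables).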
Hi, I have a quick question about migrating a cluster.
We have a cassandra cluster with 10 nodes that we'd like to move to a new DC
and what I was hoping to do is just copy the SSTables for each node to a
corresponding node in the new DC (the new cluster will also have 10 nodes).
Is there any reas
t new IP's, the
> important thing to them is the token.
>
> Cheers
>
> -
> Aaron Morton
> Freelance Cassandra Developer
> @aaronmorton
> http://www.thelastpickle.com
>
> On 6 Jun 2011, at 23:25, Eric Czech wrote:
>
> > Hi, I have a quick
Hi everybody,
I'm running cassandra 0.7.5 on about 20 RHEL 5 (24 GB RAM) machines and I'm
having issues with snapshots, json sstable conversions, and various nodetool
commands due to memory errors and the lack of the native access C libraries.
I tried putting jna.jar on the classpath but I'm stil
I got it here : https://nodeload.github.com/twall/jna/tarball/master
Is there some other version or distribution of jna that I should be using?
The version I have is 3.3.0.
On Thu, Sep 1, 2011 at 8:49 AM, Eric Evans wrote:
> On Wed, Aug 31, 2011 at 11:38 PM, Eric Czech
> wrote:
&
Hi,
I recently upgraded 10 nodes from 7.5 to 8.4 and 9 of them work now but on
one node I'm getting an exception on startup that I can't seem to fix. Has
anyone seen this before or have any suggestions as to how to correct the
issue here? Here's the exception I'm getting:
java.lang.RuntimeExcep
ode. Delete the schema and let it pull it from
> another node, as in wiki.apache.org/cassandra/FAQ#schema_disagreement.
>
> On Sun, Sep 4, 2011 at 12:03 AM, Eric Czech wrote:
> > Hi,
> > I recently upgraded 10 nodes from 7.5 to 8.4 and 9 of them work now but
> on
> > o
I overwrote all sstable and system files from a snapshot that I took right
before the upgrade and it worked this time.
Everything is back to normal and thanks again.
On Sun, Sep 4, 2011 at 12:05 AM, Eric Czech wrote:
> I shutdown the cassandra java process, deleted the Schema and Migrat
Thank you guys.
I installed jna using yum and then put jna.jar on the classpath and
everything seems fine.
On Thu, Sep 1, 2011 at 9:51 AM, Eric Evans wrote:
> On Thu, Sep 1, 2011 at 10:13 AM, Eric Czech wrote:
> > I got it here : https://nodeload.github.com/twall/jna/tarball/mast
We just migrated from .7.5 to .8.4 in our production environment and it was
definitely the least painful transition yet (coming all the way from the .4
release series). It's been about a week for us but so far so good.
On Thu, Sep 8, 2011 at 9:25 PM, Dominic Williams <
dwilli...@fightmymonster.co
I'm getting a lot of errors that look something like "java.io.IOError:
java.io.IOException: mmap segment underflow; remaining is 348268797
but 892417075 requested" on one node in a 10 node cluster. I'm
currently running version 0.8.4 but this is data that was carried over
from much earlier version
On 20/09/2011, at 6:55 AM, Jonathan Ellis wrote:
>
>> You should start with scrub.
>>
>> On Mon, Sep 19, 2011 at 1:04 PM, Eric Czech wrote:
>>> I'm getting a lot of errors that look something like "java.io.IOError:
>>> java.io.IOException: mmap seg
Scrub seems to have worked. Thanks again!
Will a major compaction delete the "tmp" sstables generated though? Scrub
seems to have generated a lot of them and they're taking up an unnerving
amount of disk space.
On Mon, Sep 19, 2011 at 5:34 PM, Eric Czech wrote:
> Ok then
We're exploring a data processing procedure where we snapshot our production
cluster data and move that data to a new cluster for analysis but I'm having
some strange issues where the analysis cluster is still somehow aware of the
production cluster (i.e. the production cluster ring is trying to in
node
6. start cassandra (or brisk really) on each analysis node to create
separate cluster
Any reason that procedure wouldn't work?
On Sun, Oct 2, 2011 at 3:14 PM, Edward Capriolo wrote:
>
>
> On Sun, Oct 2, 2011 at 4:25 PM, Eric Czech wrote:
>
>> We're exploring a d
uster?
Are you just mapping them one-to-one on the original cluster?
On Sun, Oct 2, 2011 at 3:49 PM, Shyamal Prasad wrote:
> >>>>> "Eric" == Eric Czech writes:
>
>Eric> We're exploring a data processing procedure where we snapshot
>Eric> ou
the tokens.
On Sun, Oct 2, 2011 at 3:14 PM, Edward Capriolo wrote:
>
>
> On Sun, Oct 2, 2011 at 4:25 PM, Eric Czech wrote:
>
>> We're exploring a data processing procedure where we snapshot our
>> production cluster data and move that data to a new cluster for analysis
, Oct 2, 2011 at 4:14 PM, Shyamal Prasad wrote:
> >>>>> "Eric" == Eric Czech writes:
>
> Eric> Hi Shyamal, I was using the same cluster name but since
>Eric> writing that first email, I've already had success bringing up
>Eric
restarts it will clear tmp files.
>
> So if you're unnerved at the prospect of unnatural temp files, exorcise them
> by casting a restart spell.
>
> Hope that helps.
>
> -
> Aaron Morton
> Freelance Cassandra Developer
> @aaronmorton
> http://www.thelastpi
have this up and
running.
On Sun, Oct 2, 2011 at 8:29 PM, Shyamal Prasad wrote:
> >>>>> "Eric" == Eric Czech writes:
>
> Eric> Yea that's not a mapping I'd like to maintain either -- as an
>Eric> experiment, I copied production ssta
Hi, we're trying to setup a cluster to run brisk/hadoop jobs on and part of
that setup is copying sstables from another cluster running 8.4. Could
there be any compatibility issues with the files there since the brisk beta2
package uses 8.1? So far, it seems to work fine but now I'm a little
nerv
Hi, I'm having what I think is a fairly uncommon schema issue --
My situation is that I had a cluster with 10 nodes and a consistent schema.
Then, in an experiment to setup a second cluster with the same information
(by copying the raw sstables), I left the LocationInfo* sstables in the
system ke
pler, less seemingly
risky way to do this so please, please let me know if that's true!
Thanks again.
- Eric
On Tue, Oct 11, 2011 at 11:55 AM, Eric Czech wrote:
> Hi, I'm having what I think is a fairly uncommon schema issue --
>
> My situation is that I had a cluster with 10 nod
g from the schema consistency requirements. Any
reason that wouldn't work?
And aside from a possible code patch, any recommendations as to how I can
best fix this given the current 8.4 release?
On Thu, Oct 13, 2011 at 12:14 AM, Jonathan Ellis wrote:
> Does nodetool removetoken no
the seed node in cass-analysis-2 and
> following the directions in
> http://wiki.apache.org/cassandra/FAQ#schema_disagreement might solve
> the problem. Somone please correct me.
>
> On Thu, Oct 13, 2011 at 12:05 AM, Eric Czech
> wrote:
> > I don't think that
're running into https://issues.apache.org/jira/browse/CASSANDRA-3259
>
> Try upgrading and doing a rolling restart.
>
> -Brandon
>
> On Thu, Oct 13, 2011 at 9:11 AM, Eric Czech wrote:
> > Nope, there was definitely no intersection of the seed nodes between the
> two
>
Thanks again. I have truncated certain cf's recently and the cli didn't
complain and listings of the cf rows return nothing after truncation. Is
that data not actually deleted?
On Fri, Oct 14, 2011 at 1:28 PM, Brandon Williams wrote:
> On Thu, Oct 13, 2011 at 11:33 PM, Eric C
2 PM, Brandon Williams wrote:
> On Fri, Oct 14, 2011 at 2:36 PM, Eric Czech wrote:
> > Thanks again. I have truncated certain cf's recently and the cli didn't
> > complain and listings of the cf rows return nothing after truncation. Is
> > that data not actua
Is there any way that you could do that lookup in reverse where you pull
the records from your SQL database, figure out which keys aren't necessary,
and then delete any unnecessary keys that may or may not exist in
cassandra?
If that's not a possibility, then what about creating the same Cassandra
I'd also add that one of the biggest complications to arise from having
multiple clusters is that read biased client applications would need to be
aware of all clusters and either aggregate result sets or involve logic to
choose the right cluster based on a particular query.
And from a more operat
QL DB might not be
> copied into the new keyspace. But maybe we could arrange to do that
> during low-demand-hours to minimize the amount of new inserts and
> additionally run the "copy" a second time with a select on newly inserted
> sql rows. So we'll probably go with t
Hi Brian,
We're trying to do the exact same thing and I find myself asking very
similar questions.
Our solution though has been to find what kind of queries we need to
satisfy on a preemptive basis and leverage cassandra's built-in indexing
features to build those result sets beforehand. The who
That's likely because there wasn't a validation class for the
"age" column before you added that index. In other words, the CLI doesn't
display values in a UTF8 format until you tell it to, so I think the value
"8" is correct and you could check that by running the CLI command "assume
Us
I can't believe I have to ask this but I have a CF with about 10 rows and
the keys are literally 1 through 9.
Why does this not work if I want the row where the key is ascii('5')?
cqlsh:Keyspace1> select first 1 * from CF where key = '5';
KEY
-
05
* I saw the Jira about the sort of phant
Gotcha, I probably should have guessed that much. Does CQL have any
functions to convert ascii to hex so that I don't have to do that
conversion elsewhere (I don't see one in the docs)?
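
As a stopgap, the conversion is easy enough to do client-side. A minimal Python sketch (the function name is mine, and it assumes the keys are plain ASCII byte strings):

```python
def ascii_to_hex(s: str) -> str:
    """Hex-encode an ASCII string so it can be used as a
    bytes-typed key literal in a CQL WHERE clause."""
    return s.encode("ascii").hex()

print(ascii_to_hex("5"))  # the ASCII character '5' is byte 0x35
```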
On Thu, May 3, 2012 at 2:09 PM, paul cannon wrote:
> On Thu, May 3, 2012 at 12:46 PM, Eric
if that's how you expect
> to work with it. Would it be an option to adjust that table?
>
> The ASSUME-changes-outgoing-cql ticket (CASSANDRA-3799) would also help,
> so maybe keep an eye on that.
>
> p
>
>
> On Thu, May 3, 2012 at 1:22 PM, Eric Czech wrote:
>
&g
Recently, cassandra has been crashing with no apparent error on one specific
node in my cluster. Has anyone else ever had this happen and is there a way
to possibly figure out what is going on other than looking at what is in the
stdout and system.log files?
Thanks!
linux-amd64)
# Problematic frame:
# V [libjvm.so+0x1d3b32]
#
# If you would like to submit a bug report, please visit:
Have you ever seen that before?
On Wed, Oct 13, 2010 at 7:52 PM, Jonathan Ellis wrote:
> is there a jvm crash log file?
>
> On Wed, Oct 13, 2010 at 8:43 PM, Eric Cze
And this is the java version:
java version "1.6.0_13"
Java(TM) SE Runtime Environment (build 1.6.0_13-b03)
Java HotSpot(TM) 64-Bit Server VM (build 11.3-b02, mixed mode)
and it's running on Ubuntu 9.04 (jaunty) linux
4 cores
4 GB RAM
On Wed, Oct 13, 2010 at 8:30 PM, Eric Czech
n Wed, Oct 13, 2010 at 9:35 PM, B. Todd Burruss wrote:
> you should upgrade to the latest version of the JVM, 1.6.0_21
>
> there was a bug around 1.6.0_18 (or there abouts) that affected cassandra
>
>
> On 10/13/2010 07:55 PM, Eric Czech wrote:
>
> And this is the java v
problem is inside the JVM, not
> with Cassandra
>
> sorry to say, your best bet is to upgrade
>
>
>
> On 10/13/2010 10:09 PM, Eric Czech wrote:
>
> Thank you Todd. It seems strange though that this is only happening on one
> node and has never occurred on any others th
Thanks again for the help. I upgraded my JVM to update 22 but I'm still
getting the same error just as before, and just as, if not more,
frequently. I'm thinking that the best course of action at this point is to
replace the hardware. I would try the test builds, but I can't imagine they
wouldn'
Prepend zeros to every number out to a fixed length determined by the
maximum possible value. As an example, 0055 < 0100 in a lexical ordering
where the maximum value is 9999.
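
That padding scheme can be sketched in Python (the function name and the four-digit maximum are illustrative):

```python
def pad_key(n: int, max_value: int) -> str:
    """Zero-pad n to the width of max_value so that lexical
    (string) ordering matches numeric ordering."""
    width = len(str(max_value))
    return str(n).zfill(width)

keys = sorted(pad_key(n, 9999) for n in (100, 55, 7))
print(keys)  # ['0007', '0055', '0100'] (lexical order now matches numeric)
```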
On Fri, Oct 22, 2010 at 5:05 AM, Christian Decker <
decker.christ...@gmail.com> wrote:
> Ever since I started implementi