Thanks Jonathan
Is there any documentation on "IN"?
On 2 May 2011 15:34, Jonathan Ellis wrote:
> OR will not be supported for a while yet, however IN support is in
> trunk and will be in 0.8.1 (but not 0.8.0).
>
> On Mon, May 2, 2011 at 5:10 AM, Miguel Auso
> wrote:
> > hi!,
> > It's possible
>
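Per Jonathan's note, IN lands in 0.8.1's CQL. As a rough sketch of what such a query could look like (the table name, the KEY column form, and the quoting are assumptions for illustration, not taken from the 0.8 docs):

```python
# Sketch: building a CQL query that uses IN on row keys.
# "users" and the key values are hypothetical; the exact 0.8.x
# syntax (e.g. KEY vs. a named key column) may differ.
keys = ["jsmith", "tjones"]
placeholders = ", ".join("'%s'" % k for k in keys)
query = "SELECT * FROM users WHERE KEY IN (%s)" % placeholders
print(query)  # SELECT * FROM users WHERE KEY IN ('jsmith', 'tjones')
```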
http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/Re-IP-address-resolution-in-MultiDC-setup-EC2-VIP-td6306635.html
On May 6, 2011 3:07 AM, "Sameer Farooqui" wrote:
> We're trying to set up a Cassandra 0.8.0beta1 cluster across Amazon East &
> West regions. It does not work out of th
Can someone point me to a document that explains how to interpret
CFHistograms output? I went through
http://narendrasharma.blogspot.com/2011/04/cassandra-07x-understanding-output-of.html
which is a good beginning, but was wondering if there was anything more
detailed. E.g. when I run CFHistogram
RPMs can be found here:
http://rpm.datastax.com/EL/5/x86_64/
If you have the DataStax/Riptano repository installed already, the
package is currently named apache-cassandra08 (to avoid any nasty
surprises for anyone on 0.7.x doing a 'yum upgrade').
On Thu, May 5, 2011 at 8:37 PM, Eric Evans wrote
I am pleased to announce the release of Apache Cassandra 0.8.0 beta2.
We're zeroing in fast on the final release (expect an RC within the
week), so this should be the last beta. Time is running out, so please
help test!
As always, be sure to have a look at the changelog[1] and release
notes[2].
Unfortunately no messages at ERROR level:
INFO [Thread-460] 2011-05-04 21:31:14,427 StreamInSession.java (line 121)
Streaming of file
/raiddrive/MDR/MeterRecords-f-2264-Data.db/(98339515276,197218618166)
progress=41536315392/98879102890 - 42% from
org.apache.cassandra.streaming.StreamI
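The percentage in that streaming log line is just bytes received over total bytes; a quick arithmetic check of the figures it reports:

```python
# The log line reports progress=41536315392/98879102890 - 42%;
# the percentage is simply received bytes over total bytes.
received, total = 41536315392, 98879102890
pct = received * 100 // total
print(pct)  # 42
```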
We're trying to set up a Cassandra 0.8.0beta1 cluster across the Amazon East &
West regions. It does not work out of the box with the binaries, and the
nodes in different regions end up setting up their own clusters.
The problem is with Cassandra's Listening Address as described by Rui:
"Using external IP
How many column families do you have?
On 5/4/11 12:50 PM, Hannes Schmidt wrote:
Hi,
We are using Cassandra 0.6.12 in a cluster of 9 nodes. Each node is
64-bit, has 4 cores and 4G of RAM and runs on Ubuntu Lucid with the
stock 2.6.32-31-generic kernel. We use the Sun/Oracle JDK.
Here's the prob
The difficulty is the different thrift clients between 0.6 and 0.7.
If you want to roll your own solution I would consider:
- write an app to talk to 0.6 and pull out the data using keys from the other
system (so you can check referential integrity while you are at it). Dump
the data to fla
There have been some recent discussions about different EC2 deployments. They
may not be exactly what you are looking for, but try starting here:
http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/Re-IP-address-resolution-in-MultiDC-setup-EC2-VIP-td6306635.html
-
Aaron Morton
Could you provide some of the log messages from when the receiver ran out of disk
space? It sounds like it should be at ERROR level.
Thanks
-
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com
On 6 May 2011, at 09:16, Sameer Farooqui wrote:
> Just wa
When adding nodes it is a *very* good idea to manually set the tokens, see
http://wiki.apache.org/cassandra/Operations#Load_balancing
bootstrap is a process that happens only once on a node; as well as
telling the other nodes it's around, it asks them to stream over the data it
will now be
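The Operations wiki page linked above describes the usual balanced-token recipe for RandomPartitioner: spread the tokens evenly around the 2**127 ring. A small sketch of that calculation:

```python
# Evenly spaced initial tokens for a RandomPartitioner ring:
# token i = i * 2**127 / N, the common load-balancing recipe.
def initial_tokens(node_count):
    return [i * (2 ** 127) // node_count for i in range(node_count)]

for token in initial_tokens(4):
    print(token)
```

Each new node then gets its token via `initial_token` in the config (or `nodetool move`), rather than letting bootstrap pick one.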
Use snapshots or borrow from simplegeo.com
https://github.com/simplegeo/tablesnap
If you grab the directory at an arbitrary time there is no guarantee the data
will be consistent.
Cheers
-
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com
On
Row cache is still disabled by default.
AFAIK, in general you should not move the position on any buffer in cassandra or
take ownership of it.
-
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com
On 6 May 2011, at 07:34, Paul Loy wrote:
> H
Release Candidates are not really supported; the 0.7 ones contain bugs, and you
should definitely not be using them.
What is the context for this error? In an IDE? In ant? Is the avro jar in
the path? Can you use 0.7.5?
Cheers
-
Aaron Morton
Freelance Cassandra Developer
@aaro
Hannes,
To get a baseline of behaviour, set disk_access to standard. You will
probably want to keep it like that if you want better control over the memory
on the box.
Also connect to the box with JConsole and look at the PermGen space
used; it is not included in the max heap sp
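On the 0.6 line that setting lives in storage-conf.xml; as a sketch (element name from memory, so verify against your own config file):

```xml
<!-- storage-conf.xml: use plain buffered reads instead of mmap'd I/O -->
<DiskAccessMode>standard</DiskAccessMode>
```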
Also, a nodetool cleanup would rebuild the SSTable to the most current
version.
On 5/5/11 1:42 PM, Jeremiah Jordan wrote:
Running repair and I am getting this error:
java.lang.RuntimeException: Cannot recover SSTable with version a
(current version f).
at
org.apache.cassandra.io.sstable.S
Hi Jeremiah,
Did you try following up by running scrub? Did it help?
Ben
On 5/5/11 1:42 PM, Jeremiah Jordan wrote:
Running repair and I am getting this error:
java.lang.RuntimeException: Cannot recover SSTable with version a
(current version f).
at
org.apache.cassandra.io.sstable.SSTable
Click on Submit Patch and it should get noticed as the committers go through
the patch list. And/or update the comments to get it back into the activity
stream.
If you need a hand with updating the 0.8 patch let me know.
Cheers
-
Aaron Morton
Freelance Cassandra Developer
@a
Here is an image that shows what the Amazon VPC we're thinking about using
looks like:
http://i.imgur.com/OUe1i.png
We would like to configure a 2 node Cassandra cluster in the private subnet
and a read/write web application service in the public subnet. However, we
also want to span the Cassand
Just wanted to update you guys that we turned on DEBUG level logging on the
decommissioned node and the node receiving the decommissioned node's range.
We did this by editing /conf/log4j-server.properties and
changing the log4j.rootLogger to DEBUG.
We ran decommission again and saw that the re
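For reference, the edit described above is a one-line change to the stock log4j config (sketch of the 0.7-era default; the appender names after the level may vary in your file):

```properties
# conf/log4j-server.properties: raise verbosity from INFO to DEBUG
log4j.rootLogger=DEBUG,stdout,R
```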
Here is what I did.
I booted up the first one. After that I started the second one with
bootstrap turned off.
Then I did a nodetool loadbalance on the second node.
After which I added the third node again with bootstrap turned off. Then did
the loadbalance again on the third node.
This seems to hav
I just rebuilt the cluster in the same manner as I did originally except after
I setup the first node I added a keyspace and column family before adding any
new nodes. This time the 3rd node auto bootstrapped successfully.
From: Len Bucchino [mailto:len.bucch...@veritix.com]
Sent: Thursday, May
Sylvain, thanks for the quick response, and you are correct: there is an
empty index file. I removed the file and tried to start it up. Now
Cassandra is reporting corrupt sstables. Does this mean I can't just simply
back up the var/lib directory (and should use snapshots instead)? We don't have much data
righ
Running repair and I am getting this error:
java.lang.RuntimeException: Cannot recover SSTable with version a
(current version f).
at
org.apache.cassandra.io.sstable.SSTableWriter.createBuilder(SSTableWrite
r.java:237)
at
org.apache.cassandra.db.CompactionManager.submitSSTableBuild(Compac
Hi all,
so I just updated Cassandra from 0.7.0 to 0.7.5. I embed Cassandra in my app
and use StorageProxy for querying.
In one of my unit tests I write a column to Cassandra and then read it out
again twice in quick succession. The second time I now get the same
ByteBuffer (i.e. same id - same 'p
On Thu, May 5, 2011 at 8:38 PM, Wenjun Che wrote:
> Hello,
>
> I have a one node cluster (fresh install from 0.7.4, upgraded to 0.7.5
> last week). The data is being backed up by a cron job that periodically
> tar/gzips the entire
> var/lib directory. When I tested the backup by restoring the l
Hello,
I have a one node cluster (fresh install from 0.7.4, upgraded to 0.7.5
last week). The data is being backed up by a cron job that periodically
tar/gzips the entire
var/lib directory. When I tested the backup by restoring the last tar file,
I am seeing the following exception:
DEBUG 13:4
Also, setting auto_bootstrap to false and setting token to the one that it said
it would use in the logs allows the new node to join the ring.
From: Len Bucchino [mailto:len.bucch...@veritix.com]
Sent: Thursday, May 05, 2011 1:25 PM
To: user@cassandra.apache.org
Subject: RE: New node not joining
Adding the fourth node to the cluster with an empty schema using auto_bootstrap
was not successful. A nodetool netstats on the new node shows "Mode: Joining:
getting bootstrap token" similar to what the third node did before it was
manually added. Also, there are no exceptions in the logs but
We can't do a straight upgrade from 0.6.13 to 0.7.5 because we have rows
stored that have unicode keys, and Cassandra 0.7.5 thinks those rows in the
sstables are corrupt, and it seems impossible to clean it up without losing
data.
However, we can still read all rows perfectly via thrift so we are
I can't seem to get the correct version of avro. Any help with this error would
be appreciated:
java.lang.NoSuchMethodError:
org.apache.avro.generic.GenericData$Array.(ILorg/apache/avro/Schema;)V
at org.apache.cassandra.io.SerDeUtils.createArray(SerDeUtils.java:129)
at org.apache.cassandra.confi
Thanks, but patching or losing keys is not an option for us. :-/
/Henrik
On Thu, May 5, 2011 at 15:00, Daniel Doubleday wrote:
> Don't know if that helps you but since we had the same SSTable corruption I
> have been looking into that very code the other day:
>
> If you could afford to drop the
I can't run sstable2json on the datafiles from 0.7, it throws the same "Keys
must be written in ascending order." error as compaction.
I can run sstable2json on the 0.6 datafiles, but when I tested that the
unicode characters in the keys got completely mangled since it outputs keys
in string format
Hi Len,
This looks like a decent workaround. I would be very interested to see how
the addition of the 4th node went. Please post it whenever you get a chance.
Thanks!
On Thu, May 5, 2011 at 6:47 AM, Len Bucchino wrote:
> I have the same problem on 0.7.5 auto bootstrapping a 3rd node onto an
> e
It's fixed in the cassandra-0.7 branch (no 0.7.x release, yet) and
0.8-beta2. You can also use ntp to sync the clocks in your cluster and the
problem won't happen again.
On Thu, May 5, 2011 at 3:47 AM, Dikang Gu wrote:
> Is this fixed in cassandra-0.7.5 or cassandra-0.8 ?
>
> On Thu, May 5, 201
Thanks for replying, let me disable my swap memory.
On 05/05/2011 09:01 PM, Jonathan Ellis wrote:
6s parnew is insane. you're probably swapping. Easiest fix is
disabling swap entirely.
P.S. 0.6.3 is ancient.
--
S.Ali Ahsan
Senior System Engineer
e-Business (Pvt) Ltd
49-C Jail Road, Lahore
6s parnew is insane. you're probably swapping. Easiest fix is
disabling swap entirely.
P.S. 0.6.3 is ancient.
On Thu, May 5, 2011 at 10:51 AM, Ali Ahsan wrote:
> Hi All
>
> I have two Cassandra nodes with RF=2. Cassandra started underperforming, I mean
> reads/writes are slow, and I see the following inf
Hi All
I have two Cassandra nodes with RF=2. Cassandra started underperforming, I
mean reads/writes are slow, and I see the following info in the Cassandra
log. I have CentOS 5.5 64-bit with 14 GB of memory assigned to Cassandra. I am
using LVM for Cassandra. When I reboot Cassandra, everything becomes normal.
On Thu, May 5, 2011 at 5:21 PM, David Boxenhorn wrote:
> What is the format of ?
With the warning of my previous mail, it's an unsigned short (2 bytes).
>
> On Thu, May 5, 2011 at 6:14 PM, Eric Evans wrote:
>>
>> On Thu, 2011-05-05 at 17:44 +0300, David Boxenhorn wrote:
>> > Is there a spec fo
I think for CQL there are two different things:
1) what the request will look like. I think that is what Eric is
referring to when he says "colon delimited". So a query will
look like "SELECT foo:42:bar "
2) and there is the actual byte format in which the column name will
What is the format of ?
On Thu, May 5, 2011 at 6:14 PM, Eric Evans wrote:
> On Thu, 2011-05-05 at 17:44 +0300, David Boxenhorn wrote:
> > Is there a spec for compound columns?
> >
> > I want to know the exact format of compound columns so I can adhere to
> > it. For example, what is the separat
Thank you, I will look into that and I will probably wait until there is an
"out of the box" comparator. But it's an excellent new feature!
Regards,
Victor K.
2011/5/5 Eric Evans
> On Thu, 2011-05-05 at 10:49 -0400, Victor Kabdebon wrote:
> > Hello Eric,
> >
> > Compound columns seem to be a v
On Thu, 2011-05-05 at 17:44 +0300, David Boxenhorn wrote:
> Is there a spec for compound columns?
>
> I want to know the exact format of compound columns so I can adhere to
> it. For example, what is the separator - or is some other format used
> (e.g. length:value or type:length:value)?
Tentati
On Thu, 2011-05-05 at 10:49 -0400, Victor Kabdebon wrote:
> Hello Eric,
>
> Compound columns seem to be a very interesting feature. Do you have any idea
> in which Cassandra version it is going to be introduced : 0.8.X or 0.9.X ?
You can use these today with a custom comparator[1]. There is an o
Thanks, yes, I was referring to the "compound columns" in this quote (from a
previous thread):
"No CQL will never support super columns, but later versions (not 1.0.0)
will support compound columns. Compound columns are better; instead of
a two-deep structure, you can have one of arbitrary depth.
I suppose it depends what you are referring to by "compound columns".
If you're talking
about the CompositeType of CASSANDRA-2231 (which is my only guess), then the
format is in the javadoc and is:
/*
* The encoding of a CompositeType column name should be:
*...
* where is:
* <'end-of-compon
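Pulling the thread's description together (2-byte unsigned-short length per component, then the value bytes, then an end-of-component byte), the encoding might be sketched like this. Treat it as illustrative of the layout discussed, not a wire-format reference; consult the CASSANDRA-2231 javadoc for the authoritative spec:

```python
import struct

# Sketch of the CompositeType component layout described above:
# <2-byte big-endian unsigned length><value bytes><end-of-component byte>.
# The end-of-component byte is 0 in the common case (an assumption here).
def encode_component(value, eoc=0):
    return struct.pack(">H", len(value)) + value + struct.pack("B", eoc)

# A two-component name like "foo":"bar" would then serialize as:
name = encode_component(b"foo") + encode_component(b"bar")
print(name.hex())  # 0003666f6f00000362617200
```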
Hello Eric,
Compound columns seem to be a very interesting feature. Do you have any idea
in which Cassandra version it is going to be introduced : 0.8.X or 0.9.X ?
Thanks,
Victor
2011/5/5 Eric Evans
> On Thu, 2011-05-05 at 18:19 +0800, Guofeng Zhang wrote:
> > I read the CQL v1.0 document. Th
Is there a spec for compound columns?
I want to know the exact format of compound columns so I can adhere to it.
For example, what is the separator - or is some other format used (e.g.
length:value or type:length:value)?
Reading this, I tried it again (this time on a freshly formatted node due to
hd failure).
Before crashing, my data dir was only 7.7M big. Using mmap_index_only
(mlockall was successful) on a 64-bit machine.
Anything else I could try to get this to work? Ps. All the other nodes (and
this node) run f
On Apr 27, 2011, at 16:59, Timo Nentwig wrote:
> On Apr 27, 2011, at 16:52, Edward Capriolo wrote:
>
>> The method being private is not a deal-breaker. While not good software
>> engineering practice, you can copy and paste the code and rename the
>> class SSTable2MyJson or whatever.
>
> Sure I
On Thu, 2011-05-05 at 18:19 +0800, Guofeng Zhang wrote:
> I read the CQL v1.0 document. There are operations about column
> families, but it does not describe how to operate on super column
> families. Why? Does this mean that super column families would not be
> supported by CQL in this version? W
I have the same problem on 0.7.5 auto bootstrapping a 3rd node onto an empty 2
node test cluster (the two nodes were manually added), and it currently has
an empty schema. My log entries look similar to yours. I took the new token
it says it's going to use from the log file and added it to the y
Don't know if that helps you but since we had the same SSTable corruption I
have been looking into that very code the other day:
If you could afford to drop these rows and are able to recognize them the
easiest way would be patching:
SSTableScanner:162
public IColumnIterator next()
{
That's UTF-8, not UTF-16.
On May 5, 2011, at 1:57 PM, aaron morton wrote:
> The hard core way to fix the data is export to json with sstable2json, hand
> edit, and then json2sstable it back.
>
> Also to confirm, this only happens when data is written in 0.6 and then tried
> to read back in 0.7?
On 2011-05-05 06:30, Hannes Schmidt wrote:
> This was my first thought, too. We switched to mmap_index_only and
> didn't see any change in behavior. Looking at the smaps file attached
> to my original post, one can see that the mmapped index files take up
> only a minuscule part of RSS.
I have not
The hard core way to fix the data is export to json with sstable2json, hand
edit, and then json2sstable it back.
Also to confirm, this only happens when data is written in 0.6 and then tried
to read back in 0.7?
And what partitioner are you using? Can you still see the keys?
Can you use
I'm looking at Magnolia at the moment (as in, this second). At first glance,
it looks like I should be able to use Cassandra as the database:
http://documentation.magnolia-cms.com/technical-guide/content-storage-and-structure.html#Persistent_storage
If it can use a filesystem as its database, it
Yeah, I've seen that one, and I'm guessing that it's the root cause of my
problems, something something encoding error, but that doesn't really help
me. :-)
However, I've done all my tests with 0.7.5, I'm gonna try them again with
0.7.4, just to see how that version reacts.
/Henrik
On Wed, May
Yes, the keys were written to 0.6, but when I looked through the thrift
client code for 0.6, it explicitly converts all string keys to UTF8 before
sending them over to the server so the encoding *should* be right, and after
the upgrade to 0.7.5, sstablekeys prints out the correct byte values for
th
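That UTF-8 conversion is easy to sanity-check against what sstablekeys prints, since sstablekeys shows the key's raw bytes in hex. A sketch with a made-up non-ASCII key:

```python
# A 0.6-era string key is sent to the server as UTF-8 bytes;
# sstablekeys prints the hex of those bytes, so the two should match.
# The key value here is purely illustrative.
key = "smörgås"
utf8 = key.encode("utf-8")
print(utf8.hex())  # 736dc3b67267c3a573
```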
Would you think of Django as a CMS ?
http://stackoverflow.com/questions/2369793/how-to-use-cassandra-in-django-framework
Cheers
-
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com
On 5 May 2011, at 22:54, Eric tamme wrote:
>> Does anyone know o
Yes that was what I was trying to say.
thanks
-
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com
On 5 May 2011, at 18:52, Tyler Hobbs wrote:
> On Thu, May 5, 2011 at 1:21 AM, Peter Schuller
> wrote:
> > It's no longer recommended to run node
> Does anyone know of a content management system that can be easily
> customized to use Cassandra as its database?
>
> (Even better, if it can use Cassandra without customization!)
>
I think your best bet will be to look for a CMS that uses an ORM for
the storage layer and write a specific ORM fo
I take it back, the problem started in 0.6 where keys were strings. Looking
into how 0.6 did its thing
-
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com
On 5 May 2011, at 22:36, aaron morton wrote:
> Interesting but as we are dealing with k
Interesting but as we are dealing with keys it should not matter as they are
treated as byte buffers.
-
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com
On 5 May 2011, at 04:53, Daniel Doubleday wrote:
> This is a bit of a wild guess but Wind
This was my first thought, too. We switched to mmap_index_only and
didn't see any change in behavior. Looking at the smaps file attached
to my original post, one can see that the mmapped index files take up
only a minuscule part of RSS.
On Wed, May 4, 2011 at 11:37 PM, Oleg Anastasyev wrote:
> Pr
I was inserting the contents of wikipedia, so the columns were multi-kilobyte
strings. It's a good data source to run tests with, as the records and
relationships are somewhat varied in size.
My main point was to say the best way to benchmark cassandra is with multiple
server nodes, multipl
Hi,
I read the CQL v1.0 document. There are operations about column families,
but it does not describe how to operate on super column families. Why? Does
this mean that super column families would not be supported by CQL in this
version? Will it be supported in the future?
Thanks.
Guofeng
Does anyone know of a content management system that can be easily
customized to use Cassandra as its database?
(Even better, if it can use Cassandra without customization!)
Is this fixed in cassandra-0.7.5 or cassandra-0.8 ?
On Thu, May 5, 2011 at 1:43 PM, Tyler Hobbs wrote:
> The issue is quite possibly this:
> https://issues.apache.org/jira/browse/CASSANDRA-2536
>
> A person on the ticket commented that decomissioning and rejoining the node
> with the disagreeing
Hey guys,
I'm running into what seems like a very basic problem.
I have a one node cassandra instance. Version 0.7.5. Freshly installed.
Contains no data.
The cassandra.yaml is the same as the default one that is supplied, except
for data/commitlog/saved_caches directories.
I also changed the addre