Sorry, my mistake: this is a bug
http://issues.apache.org/jira/browse/CASSANDRA-1700. I've committed
the fix to the 0.6 svn branch; it will be in 0.6.9.
On Mon, Nov 15, 2010 at 7:34 PM, Jonathan Ellis wrote:
> TimedOutException means the host that your client is talking to sent
> the request to an
I'm pretty sure that "reading an index" and "using pig" are not
compatible right now. The m/r support that pig builds on always does
sequential-scan range queries.
Can you see the missing rows if you do a normal get_slice query for it
without pig?
On Mon, Nov 15, 2010 at 7:03 AM, Christian Decke
On Tue, Sep 28, 2010 at 6:35 PM, Ryan King wrote:
> One thing you should try is to make thrift use
> BinaryProtocolAccelerated, rather than the pure-ruby implementation
> (we should change the default).
Dumb question time: how do you do this?
$ find . -name "*.rb" |xargs grep -i binaryprotocol
Here is the yaml:
# Cassandra YAML generated from previous config
# Configuration wiki: http://wiki.apache.org/cassandra/StorageConfiguration
authenticator: org.apache.cassandra.auth.AllowAllAuthenticator
auto_bootstrap: false
binary_memtable_throughput_in_mb: 256
cluster_name: Test Cluster
column
AFAIK the ArrayStoreException is similar to a type mismatch. Is it possible you have something mixed up in your class path or source code if you built from source? It looks like the column family info was deserialised into a o.a.c.config.RawColumnFamily but when that object was added to the RawColu
I got the following log, how do i fix this? How can i reset the whole
database? (i'm only testing b now)
--- LOG --
INFO 20:45:44,852 Heap size: 1069416448/1069416448
INFO 20:45:44,913 JNA not found. Native methods will be disabled.
INFO 20:45:44,963
This is embedded for testing Cassandra 0.7 beta2, using
EmbeddedCassandraService,
and manually adding the schema programmatically using:
for (KSMetaData table : DatabaseDescriptor.readTablesFromYaml()) {
for (CFMetaData cfm : table.cfMetaData().values()) {
CFMetaDa
To reset the whole DB delete everything in the data and commit log directories; they are specified in cassandra.yaml.

What does your yaml file look like ?

This looks like an error trying to add an index at startup, it would be good to know why. You may need to roll back to the initial yaml file. H
Ok, i deleted all folders inside
/var/lib/cassandra/data/
and all files inside
/var/lib/cassandra/saved_caches/
that did the trick :o)
How can i avoid this exception in the future?
2010/11/16 André Fiedler
> I got the following log, how do i fix this? How can i reset the whole
> database?
Thanks Aaron! I think i was a bit too fast. Maybe i added an index at
startup. Good to know, thx a lot! :o)
2010/11/16 Aaron Morton
> To reset the whole DB delete everything in the data and commit log
> directories, they are specified in cassandra.yaml .
>
> What does your yaml file look like ?
Loading yaml file like so:
FileInputStream yamlInputStream = new FileInputStream(
configTemplateFile);
Constructor constructor = new Constructor(Config.class);
Yaml yaml = new Yaml(new Loader(constructor));
Config conf = (Config) yaml.load(yamlInputSt
I've not used the embedded service. The code in o.a.c.service.EmbeddedCassandraService says it will read the yaml file. If the cluster does not have a schema stored I think it will load the one from yaml. Have you tried starting it up with an empty system data dir ? Does it pick up the schema from t
I try to perform the following action after a clean startup. And get the log
below. How to fix this?
- Action --
create column family Test with comparator=LexicalUUIDType and
column_metadata=[
{column_name:test1, validation_class:Lexical
This is a bug in beta3, if you checkout the cassandra-0.7 branch it should
work for you.
On Tue, Nov 16, 2010 at 3:38 PM, André Fiedler wrote:
> I try to perform the following action after a clean startup. And get the
> log below. How to fix this?
>
> - Action ---
Thx! :o) so i will do
2010/11/16 Jake Luciani
> This is a bug in beta3, if you checkout the cassandra-0.7 branch it should
> work for you.
>
>
> On Tue, Nov 16, 2010 at 3:38 PM, André Fiedler <
> fiedler.an...@googlemail.com> wrote:
>
>> I try to perform the following action after a clean startu
Latest branch doesn't start:
- Log --
Exception in thread "main" java.lang.NoClassDefFoundError:
org/apache/cassandra/thrift/CassandraDaemon
Caused by: java.lang.ClassNotFoundException:
org.apache.cassandra.thrift.CassandraDaemon
at java.net.URL
Oops, sry... checked out the trunk, not the branch... my fault :D
2010/11/16 André Fiedler
> Latest branch doesn't start:
>
> - Log --
>
> Exception in thread "main" java.lang.NoClassDefFoundError:
> org/apache/cassandra/thrift/CassandraDaemon
> Ca
did you do a clean and then build ? "ant clean" then "ant jar"

Aaron

On 17 Nov, 2010, at 10:10 AM, André Fiedler wrote:

Latest branch doesn't start:

- Log --

Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/cassandra/thrift/Cassand
Ah ok, didn't know that. Now it starts up, but throws a new exception:
- Log --
[defa...@test] create column family Test with comparator=LexicalUUIDType and
column_metadata=[{column_name:test1, validation_class:LexicalUUIDType,
index_type:0, ind
Your CF meta data says the column names are UUID's (comparator=LexicalUUIDType) but your column_metadata says the name of one of the columns is "test1" (and test2 etc). This is not a valid UUID. Either change the comparator or use UUID's for the column names in the column_metadata.

Aaron

On 17 Nov,
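A quick way to see the mismatch: with LexicalUUIDType the column *names* must parse as 16-byte UUIDs, and "test1" does not. A small illustrative check (plain Python, not Cassandra code):

```python
import uuid

def is_valid_uuid(name):
    """Return True if `name` parses as a UUID, as LexicalUUIDType would require."""
    try:
        uuid.UUID(name)
        return True
    except ValueError:
        return False

print(is_valid_uuid("test1"))                                 # False
print(is_valid_uuid("6ba7b810-9dad-11d1-80b4-00c04fd430c8"))  # True
```

So either the comparator changes to something like UTF8Type, or the names in column_metadata become UUID strings.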
0.7 beta 2 here
I've been reading about load balancing and some sites seem to imply that using
the random partitioner will keeps your nodes fairly well balanced. I am
using a 3 node cluster. 1 seed and two others with AutoBootstrap on.
Now i have read that autobootstrap can leave your nodes unbal
Ah ok, i thought the comparator is used to compare CF keys... now the
exception makes sense to me. Thx! :o)
2010/11/16 Aaron Morton
> Your CF meta data says the column names are UUID's
> (comparator=LexicalUUIDType) but your column_metadata says the name of one
> of the columns is "test1" (and t
Take a look at the sections on Load Balance and Token Selection here http://wiki.apache.org/cassandra/Operations

AFAIK the best approach is to list the initial tokens for your nodes in their cassandra.yaml. Nodes will choose random tokens with the Random Partitioner, which will not result in an even
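The usual recipe for RandomPartitioner is to space the initial tokens evenly around the 0..2**127 token ring; a sketch of computing one token per node:

```python
# Evenly spaced initial_token values for RandomPartitioner, whose token
# space is 0 .. 2**127. Each value goes into that node's cassandra.yaml
# as initial_token before first startup.
RING_SIZE = 2 ** 127

def initial_tokens(node_count):
    return [i * RING_SIZE // node_count for i in range(node_count)]

for node, token in enumerate(initial_tokens(3)):
    print(f"node {node}: initial_token: {token}")
```

With evenly spaced tokens each node owns an equal slice of the ring, which is what random token selection fails to guarantee.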
Looked at how DatabaseDescriptor is loading the yaml file. Using that
approach solves the problem with the column_families mapping exception.
The problems we are running into currently is regarding a known dataset not
being loaded into our test instance correctly.
Steps:
1. Create temp director
Looking at this closer. I noticed the following in the SSTableImport Class:
if (col.isDeleted) {
cfamily.addColumn(path, hexToBytes(col.value), new
TimestampClock(col.timestamp));
} else {
cfamily.addTombstone(path, hexToBytes(col.value), ne
On Tue, Nov 16, 2010 at 10:25 AM, Jonathan Ellis wrote:
> On Tue, Sep 28, 2010 at 6:35 PM, Ryan King wrote:
>> One thing you should try is to make thrift use
>> BinaryProtocolAccelerated, rather than the pure-ruby implementation
>> (we should change the default).
>
> Dumb question time: how do yo
I am going to have a supercolumn family where some rows can be quite large
(10-100 mb). I'd like to be able to pull a subset of this data without having
to pull the whole thing into memory and send it over the wire.
Each query will be for only one row. The supercolumn key and the child column
I am considering building a system as follows:
1. Data stored in Cassandra
2. Webservice cluster (stateless) will pull data from cassandra and do
business operations plus security enforcement
3. Clients will hit the webservice cluster
I'm trying to maintain a low read latency and am worried
It certainly looks suspect. I've had a look at the code around SSTableImport and SSTableExport and the isDeleted value for the col is based on IColumn.isMarkedForDelete read when the data was exported. I'll try to have a look tonight, or if someone is still up in the states they may help. The cur
Super columns have some limitations http://wiki.apache.org/cassandra/CassandraLimitations

It's not possible to bring back a range of super columns; as you say, you can specify one or zero super column names in your ColumnParent. You could use standard CF's, and do two reads. First one would be the Su
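The two-read pattern described above can be modeled with plain dicts (a sketch with made-up names, not a real Thrift client):

```python
# Sketch of the "two standard CFs" alternative to one large super column row.
# names_cf holds only the sorted part names for each key (cheap to slice);
# data_cf holds the actual columns, keyed by (row, part name).
names_cf = {"row1": ["part-001", "part-002", "part-003"]}
data_cf = {
    ("row1", "part-001"): {"colA": "1"},
    ("row1", "part-002"): {"colA": "2"},
    ("row1", "part-003"): {"colA": "3"},
}

def read_subset(row, start, count):
    # First read: slice just the names, without pulling any column data.
    names = names_cf[row][start:start + count]
    # Second read: fetch only the selected parts (a multiget in practice).
    return {n: data_cf[(row, n)] for n in names}

print(read_subset("row1", 1, 2))
```

Only the selected parts cross the wire, instead of the whole 10-100 MB row.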
No need to worry. I run REST requests through Varnish box > nginx / Tornado / Python box > Cassandra cluster and can get requests in and out of the stack in a couple of milliseconds. Using some old workstation HW and not paying much attention to tuning. Build it like a normal system and separate ou