According to the wiki, it is not included in the binary distribution.
http://wiki.apache.org/cassandra/SimpleAuthenticator
2012/3/14 Xaero S :
>
>
> Hi,
>
> When i start Cassandra 1.0.7 with authentication enabled. i get this
> following exception -
>
> org.apache.cassandra.config.ConfigurationEx
Agreed, if you are using SSDs you likely will not need as much RAM.
I said "You could always do better with more RAM" not "You should
definitely get more RAM" :)
On Tue, Mar 13, 2012 at 7:37 PM, Maxim Potekhin wrote:
> Thank you Edward.
>
> As can be expected, my data volume is a multiple of w
On Tue, Mar 13, 2012 at 11:32 PM, Thorsten von Eicken
wrote:
> On 3/13/2012 4:13 PM, Viktor Jevdokimov wrote:
>> What we did to speed up this process and return all exhausted nodes to a
>> normal state faster:
>> We created 6 temporary virtual single Cassandra nodes with 2
>> CPU cores and 8G
On 3/13/2012 4:13 PM, Viktor Jevdokimov wrote:
> What we did to speed up this process and return all exhausted nodes to a
> normal state faster:
> We created 6 temporary virtual single Cassandra nodes with 2
> CPU cores and 8GB RAM.
> Stopped compaction completely for the CF on a production node
5 node cluster running 1.0.2, doing about 1300 reads and 1300 writes/sec into 3
column families in the same keyspace. 2 client machines, doing about the same
amount of reads/writes, but one has an average response time in the 4-40ms
range and the other in the 200-800ms range. Both running iden
Hi,
I'm using Cassandra 1.0.8, on Windows 7. When I take a snapshot of the
database, I find that I am unable to delete the snapshot directory
(i.e., dir named "{datadir}\{keyspacename}\snapshots\{snapshottag}")
while Cassandra is running: "The action can't be completed because the
folder o
Hi. I followed this:
To set up simple authentication and authorization
1. Edit cassandra.yaml, setting
org.apache.cassandra.auth.SimpleAuthenticator as the
authenticator value. The default value of AllowAllAuthenticator is
equivalent to no authentication.
2. Edit access.properties, adding entries
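For what it's worth, here is a minimal sketch of the edits those steps describe, assuming the stock conf/ layout; the keyspace and user names are illustrative, and the access.properties syntax is recalled from the old sample file, so double-check it against the wiki page:

# conf/cassandra.yaml (step 1)
authenticator: org.apache.cassandra.auth.SimpleAuthenticator
# and, if you also want authorization:
authority: org.apache.cassandra.auth.SimpleAuthority

# conf/access.properties (step 2)
# format (from the sample file): KEYSPACE[.COLUMNFAMILY].PERMISSION=USERS
Keyspace1.<rw>=jsmith,dilbert

# conf/passwd.properties (referenced by the wiki page) - plain user=password entries
jsmith=havebadpass

Note that, as pointed out elsewhere in this digest, SimpleAuthenticator is not shipped in the binary distribution, so the class has to be on the classpath before the yaml change will take effect.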
Forwarding to the Cassandra mailing list as well, in case this is more of
an issue on how I'm using Cassandra.
Am I correct to assume that I can use range queries on composite row keys,
even when using a RandomPartitioner, if I make sure that the first part of
the composite key is fixed?
Any help
Thank you Edward.
As can be expected, my data volume is a multiple of whatever RAM I can
realistically buy, and in fact much bigger. In my very limited experience,
the money might be well spent on multicore CPUs because it makes routine
operations like compact/repair (which always include writes)
After losing one node we had to repair; the CFs were on leveled compaction.
For one CF each node had about 7GB of data.
Running a repair without the primary range switch left some nodes exhausted,
with about 60-100GB of 5MB sstables for that CF (a lot of files).
After switching back from leveled to tiered
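For anyone wanting to reproduce the strategy switch described above, it is done per column family from cassandra-cli in 1.0.x roughly like this (the CF name is illustrative):

update column family MyCF with compaction_strategy = 'SizeTieredCompactionStrategy';

and back to leveled later with:

update column family MyCF with compaction_strategy = 'LeveledCompactionStrategy';

The short class names are normally resolved against the org.apache.cassandra.db.compaction package; the fully qualified names also work.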
Sounds similar to
http://www.mail-archive.com/user@cassandra.apache.org/msg20926.html
Are you able to try adding the node again with logging set to DEBUG (in
/etc/cassandra/log4j-server.properties)? (Please make sure the system
directory is empty (/var/lib/cassandra/data/system) *NOTE* do not
I am on 1.0.7 and would suggest that. The memtable and JAMM stuff is very
stable. I would not set up 0.8.X because, with 1.1 coming soon, 0.8.X is
not likely to see too many more minor releases. You can always do
better with more RAM up to the size of your data; having more RAM than
data size will not help
Hey,
I have a set of composite keys with data and am trying to query them through
the CLI. However, the result set returned is always empty.
The schema is like this:
ColumnFamily: Routes
Key Validation Class:
org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.T
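One thing worth checking when a composite-keyed CF returns nothing is how the key bytes are built. Below is a hedged, self-contained Java sketch of the CompositeType wire format (per component: 2-byte big-endian length, the component bytes, then a single 0 end-of-component byte); the key values are made up for illustration and this is not taken from the original poster's code:

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class CompositeKeyExample {

    // Encode string components the way CompositeType expects:
    // for each component: 2-byte big-endian length, the raw bytes, then a 0 byte.
    static ByteBuffer compositeKey(String... parts) {
        int size = 0;
        for (String p : parts) {
            size += 2 + p.getBytes(StandardCharsets.UTF_8).length + 1;
        }
        ByteBuffer out = ByteBuffer.allocate(size);
        for (String p : parts) {
            byte[] b = p.getBytes(StandardCharsets.UTF_8);
            out.putShort((short) b.length);  // component length
            out.put(b);                      // component value
            out.put((byte) 0);               // end-of-component marker
        }
        out.flip();
        return out;
    }

    public static void main(String[] args) {
        // Hypothetical two-part key, e.g. ("route42", "2012-03-14")
        ByteBuffer key = compositeKey("route42", "2012-03-14");
        System.out.println("encoded key is " + key.remaining() + " bytes");
        // This ByteBuffer is what would be passed as the row key in a Thrift get/get_slice call.
    }
}

If the bytes being looked up don't follow this layout exactly, lookups silently return nothing, which would match the empty result described above.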
Dear All,
after all the testing and continuous operation of my first cluster,
I've been given an OK to build a second production Cassandra cluster in
Europe.
There were posts in recent weeks regarding the most stable and solid
Cassandra version.
I was wondering if anything better has appeared
I know batch operations are not atomic, but does the success of a write
imply that all writes preceding it in the batch were successful?
For example, using cql:
BEGIN BATCH USING CONSISTENCY QUORUM AND TTL 864
INSERT INTO users (KEY, password, name) VALUES ('user2',
'ch@ngem3b', 'second user')
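For context, a complete batch of that shape (CQL 2 syntax) would look roughly like the sketch below; the TTL value and the second statement are made up purely to show where APPLY BATCH goes, since the original message is cut off:

BEGIN BATCH USING CONSISTENCY QUORUM AND TTL 86400
  INSERT INTO users (KEY, password, name) VALUES ('user2', 'ch@ngem3b', 'second user')
  UPDATE users SET password = 'ps22dhds' WHERE KEY = 'user2'
APPLY BATCH;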
The tokens are hex encoded arrays of bytes.
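To make that concrete, here is a small hedged Java helper (not from the original thread) that converts the hex form shown by nodetool back into the underlying byte array:

// Turn a hex string such as "88401b216270ab8ebb690946b0b70eab" into raw bytes.
static byte[] fromHex(String hex) {
    byte[] out = new byte[hex.length() / 2];
    for (int i = 0; i < out.length; i++) {
        out[i] = (byte) Integer.parseInt(hex.substring(2 * i, 2 * i + 2), 16);
    }
    return out;
}

So the 32-character token below corresponds to a 16-byte array.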
On Tue, Mar 13, 2012 at 1:05 PM, work late wrote:
> The ring command on nodetool shows as
>
> Address          DC           Rack    Status  State   Load     Owns    Token
>
> Token(bytes[88401b216270ab8ebb690946b0b70eab])
> 10.1.1.1 datacenter
The ring command on nodetool shows as
Address          DC           Rack    Status  State   Load     Owns    Token
Token(bytes[88401b216270ab8ebb690946b0b70eab])
10.1.1.1 datacenter1 rack1 Up Normal 69.1 KB 50.00%
Token(bytes[4936c862b88db2bdd92d684583bf0280])
1
sorry, should have been: Given the hashtable nature of cassandra, finding a
row is probably 'relatively' constant no matter how many *rows* you have.
----- Original Message -----
From: "Dave Brosius" <dbros...@mebigfatguy.com>
Given the hashtable nature of Cassandra, finding a row is probably 'relatively'
constant no matter how many columns you have. The smaller the number of columns,
I suppose, the more likely it is that all the columns will be in one sstable. If
you've got a ton of columns per row, it is much more likely th
From my tests, I am seeing that a CF that has fewer than 100 columns
but millions of rows has a much lower latency to read a column in a
row than a CF that has only a few thousand rows but wide rows, each
having 20K columns.
Example:
cf1 has 6 Million rows and each row has about 100 column
On 3/12/2012 6:52 AM, Brandon Williams wrote:
> On Mon, Mar 12, 2012 at 4:44 AM, aaron morton wrote:
>> I don't understand why I
>> don't get multiple concurrent compactions running, that's what would
>> make the biggest performance difference.
>>
>> concurrent_compactors
>> Controls how many conc
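For reference, the knob being quoted lives in cassandra.yaml; a hedged sketch (the value 4 is just an example - by default the line is commented out and Cassandra picks a default based on the number of cores):

# cassandra.yaml
concurrent_compactors: 4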
Can you provide some context for the log files, please?
The original error had to do with bootstrapping a new node into a cluster. The
log looks like a node is starting with -Dcassandra.join_ring=false and then
nodetool join is run.
Is there an error when this runs?
Cheers
---
Did you find something in the files I sent you ?
On 3/12/12 10:47 AM, aaron morton wrote:
Modify this line in log4j-server.properties. It will normally be
located in /etc/cassandra:
https://github.com/apache/cassandra/blob/trunk/conf/log4j-server.properties#L21
Change INFO to DEBUG
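That is, the line looks roughly like this before and after (the exact appender names may differ slightly between packages):

# /etc/cassandra/log4j-server.properties
# before:
log4j.rootLogger=INFO,stdout,R
# after:
log4j.rootLogger=DEBUG,stdout,R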
Cheers
-
> How much smaller did the BF get to ?
After pending compactions completed today (I'm presuming fp_ratio is
now applied to all sstables in the keyspace), it has gone from 20G+ down
to 1G. This node is now running comfortably on Xmx4G (used heap ~1.5G).
~mck
--
"A Microsoft Certified System
Thanks.
Attribute    Type    Default  Required  Description
expressions  list    n/a      Y         The list of IndexExpression objects which must
                                        contain one EQ IndexOperator among the expressions
start_key    binary  n/a      Y         Start the index query at the specified key - can
                                        be set to '', i.e., an empty
Yes. Use get_indexed_slices (http://wiki.apache.org/cassandra/API).
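To illustrate the paging pattern (since the question is about fetching in chunks), here is a hedged raw-Thrift Java sketch. The column family, indexed column, and page size are made up, and it assumes an already-opened Cassandra.Client with the keyspace set. The idea is to reuse start_key: feed the last key of each page in as the start_key of the next call and skip that first row, since start_key is inclusive.

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.List;

import org.apache.cassandra.thrift.Cassandra;
import org.apache.cassandra.thrift.ColumnParent;
import org.apache.cassandra.thrift.ConsistencyLevel;
import org.apache.cassandra.thrift.IndexClause;
import org.apache.cassandra.thrift.IndexExpression;
import org.apache.cassandra.thrift.IndexOperator;
import org.apache.cassandra.thrift.KeySlice;
import org.apache.cassandra.thrift.SlicePredicate;
import org.apache.cassandra.thrift.SliceRange;

public class IndexedSlicePager {

    private static ByteBuffer bytes(String s) {
        return ByteBuffer.wrap(s.getBytes(StandardCharsets.UTF_8));
    }

    // Pages through all rows where users.state == 'TX' (hypothetical CF/column),
    // 100 keys at a time, using start_key to continue from the previous page.
    public static void pageThroughIndex(Cassandra.Client client) throws Exception {
        ColumnParent parent = new ColumnParent("users");

        SlicePredicate predicate = new SlicePredicate();
        predicate.setSlice_range(
                new SliceRange(bytes(""), bytes(""), false, 1000)); // all columns, up to 1000

        IndexExpression expr =
                new IndexExpression(bytes("state"), IndexOperator.EQ, bytes("TX"));

        ByteBuffer startKey = bytes(""); // empty start key = beginning of the index scan
        while (true) {
            IndexClause clause = new IndexClause(Arrays.asList(expr), startKey, 100);
            List<KeySlice> page =
                    client.get_indexed_slices(parent, clause, predicate, ConsistencyLevel.QUORUM);
            if (page.isEmpty()) {
                break;
            }
            for (KeySlice row : page) {
                // Skip the row we already processed as the previous page's last key.
                if (!row.key.equals(startKey)) {
                    // ... process row.getColumns() here ...
                }
            }
            if (page.size() < 100) {
                break; // a short page means we have reached the end
            }
            startKey = page.get(page.size() - 1).key; // continue from the last key seen
        }
    }
}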
On Tue, Mar 13, 2012 at 2:12 PM, Vivek Mishra wrote:
> Hi,
> Is it possible to iterate and fetch in chunks using thrift API by querying
> using "secondary indexes"?
>
> -Vivek
>
Hello,
I have been trying to add a node to a single-node cluster of Cassandra (1.0.8)
but I always get the following error:
INFO 17:50:35,555 JOINING: schema complete, ready to bootstrap
INFO 17:50:35,556 JOINING: getting bootstrap token
ERROR 17:50:35,557 Exception encountered during startup
java.lan
Hi,
Is it possible to iterate and fetch in chunks using thrift API by querying
using "secondary indexes"?
-Vivek
If you are on Ubuntu it may be this
http://wiki.apache.org/cassandra/FAQ#ubuntu_hangs
otherwise I would look for GC problems.
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 13/03/2012, at 7:53 PM, Tamar Fraenkel wrote:
> Done it. Now i
Thanks for the update.
How much smaller did the BF get to ?
A
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 13/03/2012, at 8:24 AM, Mick Semb Wever wrote:
>
> It's my understanding then for this use case that bloom filters are of
> l
>> 2. Move node 'D' initial token down from 150... to 130...
>> Here we ran into a problem. When "move" started, disk usage for node C
>> grew from 400 to 750GB; we saw compactions running on node 'D', but some
>> compactions failed with
Did you run out of space on C or D ?