Upgrading to 1.2.3 fixed the -pr Repair.. I'll just use that from now on
(which is what I prefer!)
Thanks,
Ryan
On Wed, Mar 27, 2013 at 9:11 AM, Ryan Lowe wrote:
> Marco,
>
> No there are no errors... the last line I see in my logs related to repair
> is :
>
> [repair #...] Sending completed merkle tree to /[node] for
> (keyspace1,columnfamily1)
Marco,
No there are no errors... the last line I see in my logs related to repair
is :
[repair #...] Sending completed merkle tree to /[node] for
(keyspace1,columnfamily1)
Ryan
On Wed, Mar 27, 2013 at 8:49 AM, Marco Matarazzo <marco.matara...@hexkeep.com> wrote:
> > If I run `nodetool -h localhost repair`, then it will repair only the
> > first Keyspace and then hang...
Has anyone else experienced this? After upgrading to VNodes, I am having
Repair issues.
If I run `nodetool -h localhost repair`, then it will repair only the first
Keyspace and then hang... I let it go for a week and nothing.
If I run `nodetool -h localhost repair -pr`, then it appears to only r
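One workaround people use while repair misbehaves is to drive it one keyspace at a time, so a hang in one keyspace doesn't stall the rest. A minimal Python sketch of building those per-keyspace commands (the helper and keyspace names are hypothetical; only the `nodetool -h <host> repair -pr` invocation itself comes from this thread):

```python
# Sketch: build one repair command per keyspace instead of a single
# cluster-wide repair, so each keyspace can be retried independently.
def build_repair_cmds(keyspaces, host="localhost", primary_range=True):
    cmds = []
    for ks in keyspaces:
        cmd = ["nodetool", "-h", host, "repair"]
        if primary_range:
            cmd.append("-pr")  # repair only this node's primary ranges
        cmd.append(ks)
        cmds.append(cmd)
    return cmds

# Each command list can then be run with subprocess.run(cmd, check=True).
```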
I have heard before that the recommended minimum cluster size is 4 (with
replication factor of 3). I am curious to know if vnodes would change that
or if that statement was valid to begin with!
The use case I am working on is one where we see tremendous amount of load
for just 2 days out of the week.
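For context on where the "minimum 4 nodes with RF=3" advice comes from: a QUORUM read or write needs a majority of the RF replicas, and you want at least one more node than RF so a node can be down (or bootstrapping) without losing that majority. The arithmetic itself is trivial and not vnode-specific; a quick sketch:

```python
# Quorum size for a given replication factor: a QUORUM operation needs
# a strict majority of the RF replicas to respond.
def quorum(rf):
    return rf // 2 + 1

# With RF=3, quorum is 2, so QUORUM reads/writes survive one replica
# being down. This holds with or without vnodes.
```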
What we have done to avoid creating multiple column families is to sort of
namespace the row key. So if we have a column family of Users and
accounts: "AccountA" and "AccountB", we do the following:
Column Family User:
"AccountA/ryan" : { first: Ryan, last: Lowe }
"AccountB/ryan" : { first: Ryan, last: Lowe }
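The namespacing scheme above can be sketched as a pair of tiny helpers (the function names are hypothetical; the `"Account/user"` key shape with a `/` separator is from the example):

```python
# Sketch of the row-key namespacing described above: the account id is
# prefixed onto the user id with "/" so one column family serves all accounts.
def make_row_key(account, user):
    return f"{account}/{user}"

def split_row_key(key):
    # Split only on the first "/" in case user ids contain one.
    account, user = key.split("/", 1)
    return account, user
```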
I meant to also add that we do not necessarily care if the Reads are
somewhat stale... if two people reading from the cluster at the same time
get different results (say within a 5 min window) then that is acceptable.
Performance is the key thing.
Ryan
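One way to exploit that staleness tolerance on the application side is a time-bounded read-through cache in front of the cluster, serving values up to the acceptable window old. A minimal sketch (the class and `load_fn` callback are hypothetical; the 5-minute window is from the message above):

```python
import time

# Sketch: a tiny read-through cache that serves values up to max_age_s
# old, matching the "stale within a 5 min window is fine" tolerance.
# load_fn stands in for the actual read from Cassandra.
class StaleOkCache:
    def __init__(self, load_fn, max_age_s=300):
        self.load_fn = load_fn
        self.max_age_s = max_age_s
        self._data = {}  # key -> (value, fetched_at)

    def get(self, key, now=None):
        now = time.time() if now is None else now
        hit = self._data.get(key)
        if hit is not None and now - hit[1] <= self.max_age_s:
            return hit[0]  # possibly stale, but inside the window
        value = self.load_fn(key)
        self._data[key] = (value, now)
        return value
```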
On Sun, Aug 28, 2011 at 7:24 PM, Ryan Lowe
join our hadoop cluster at night and then
> leave again in the morning :) Maybe you can have some fun with your
> cassandra gear in its idle time.
>
>
>
> On Sun, Aug 28, 2011 at 2:47 PM, Ryan Lowe wrote:
>
>> We are working on a system that has super heavy traffic during specific times...
We are working on a system that has super heavy traffic during specific
times... think of sporting events. Other times we will get almost 0
traffic. In order to handle the traffic during the events, we are planning
on scaling out cassandra into a very large cluster. The size of our data
is still
I've been doing multi-tenant with cassandra for a while, and from what I
have found, it is better to keep your keyspaces down in number. That said,
I have been using composite keys for my multi-tenancy now and it works
great:
Column Family: User
Key: [AccountId]/[UserId]
This makes it super handy
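One thing the `[AccountId]/[UserId]` key shape buys you is cheap per-tenant filtering by key prefix. A sketch, treating the composite key as a `/`-joined string as in the example above (the helper name and sample data are hypothetical):

```python
# Sketch: pick out one tenant's rows from a mapping of row key -> row,
# using the "[AccountId]/[UserId]" key convention described above.
def tenant_rows(rows, account_id):
    prefix = account_id + "/"
    return {k: v for k, v in rows.items() if k.startswith(prefix)}
```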
, 2011 at 12:20 PM, Ryan Lowe wrote:
> > yeah, sorry about that... pushed click before I added my comments.
> > I have a cluster of 5 nodes using 0.8.4 where I am using counters. On one
> > of my nodes, every time I do a list command I get different results. The
nodes.
Thanks!
Ryan
On Tue, Aug 16, 2011 at 1:18 PM, Ryan Lowe wrote:
> [default@Race] list CounterCF;
> Using default limit of 100
> ---
> RowKey: Stats
> => (counter=APP, value=7503)
> => (counter=FILEUPLOAD, value=155)
> => (counter=MQUPLOAD, value=4726775)
[default@Race] list CounterCF;
Using default limit of 100
---
RowKey: Stats
=> (counter=APP, value=7503)
=> (counter=FILEUPLOAD, value=155)
=> (counter=MQUPLOAD, value=4726775)
=> (counter=PAGES, value=131948)
=> (counter=REST, value=3)
=> (counter=SOAP, value=44)
=> (counter=WS, va
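When comparing counter values across nodes like this, it helps to turn the cassandra-cli `list` output into a dict so the per-node results can be diffed programmatically. A sketch (the parser is hypothetical; the `=> (counter=NAME, value=N)` line format is taken from the output above):

```python
import re

# Sketch: parse cassandra-cli `list` counter lines of the form
#   => (counter=APP, value=7503)
# into a {name: value} dict, so outputs from different nodes can be diffed.
COUNTER_RE = re.compile(r"=> \(counter=(\w+), value=(\d+)\)")

def parse_counters(output):
    return {m.group(1): int(m.group(2)) for m in COUNTER_RE.finditer(output)}
```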