Timo,
After one hour it spat out the data we needed. For the record:
curl YOURIP:8098/buckets/YOURBUCKET/index/\$bucket/_
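For anyone finding this in the archives, here is a minimal Python sketch of the two special 2i query URLs Timo pointed at below: the $bucket equality query (the one in the curl command above) and a $key range scan. The host and bucket names are the same placeholders as in the curl line, and the actual HTTP request is left out; this only builds the URLs against Riak's default HTTP port 8098.

```python
# Sketch of the two Riak HTTP 2i queries discussed in this thread.
# "YOURIP" and "YOURBUCKET" are placeholders, as in the curl command
# above; 8098 is Riak's default HTTP port.

def bucket_index_url(host, bucket, stream=False):
    # Equality query on the special $bucket index. Every key in the
    # bucket matches, so the match value ("_") is arbitrary.
    url = "http://{0}:8098/buckets/{1}/index/$bucket/_".format(host, bucket)
    return url + "?stream=true" if stream else url

def key_range_url(host, bucket, start, end):
    # Range scan on the special $key index, from `start` to `end`.
    return "http://{0}:8098/buckets/{1}/index/$key/{2}/{3}".format(
        host, bucket, start, end)

print(bucket_index_url("YOURIP", "YOURBUCKET", stream=True))
print(key_range_url("YOURIP", "YOURBUCKET", "0", "zzzz"))
```

With stream=true the node returns results in chunks instead of buffering the whole key list, which is gentler on a node this full.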

Cheers,
Antonio

2015-07-15 21:51 GMT+01:00 Timo Gatsonides <t...@me.com>:

> From: Antonio Teixeira <eagle.anto...@gmail.com>
> To: Matthew Von-Maszewski <matth...@basho.com>
> Cc: riak-users <riak-users@lists.basho.com>
> Subject: Re: Riak LevelDB Deletion Problem
> Message-ID:
> <CAF+3j-tCr+CDD7PiWLm-QOaO69jWDzGyGhP8X9jsZyH=wsb...@mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Hello Matthew,
>
> Space is reducing slowly. Now we are faced with another problem:
> a lot of applications store data on this node, and we don't have the
> bucket keys (they are UUID4), so:
>
> We are using listkeys (I know it's bad) on the Erlang client, and we also
> tried with curl using both the blocking and streaming methods.
> They all return {error, timeout}.
>
> We are 95% sure that not all data has been migrated, so: is there any way
> to get the keys of the bucket, even if we have to shut down the node?
> (Uptime/availability is not important for us.)
>
> We are currently looking at MapReduce.
>
>
> What are you using to list keys? Are you using secondary indexes? You can
> get all the keys for a bucket using the $bucket index, or by doing a range
> scan on $key. See http://docs.basho.com/riak/latest/dev/using/2i/ . If your
> cluster is healthy, that should return the keys without a timeout error.
>
> Kind regards,
> Timo
>
>
_______________________________________________
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
