I don't think so... unless you know a way which I'm not aware of...
> Date: Mon, 3 Oct 2011 23:25:52 -0500
> Subject: Re: Searching across multiple buckets
> From: lesmikes...@gmail.com
> To: roberto_cal...@hotmail.com
> CC: jeff.kirk...@gmail.com; riak-users@lists.basho.com
>
> On Mon, Oct 3, 2
On Mon, Oct 3, 2011 at 10:06 PM, Roberto Calero wrote:
> Yeah, I thought so... it would be nice though to be able to search across
> different buckets, e.g. lookup services for equities across different
> investment universes etc.
>
Can't the map step of a m/r expand a list of buckets for you - w
Evening, Morning, Afternoon to All -
For today's Recap: new code, talks, slides, jobs, and more.
Enjoy.
Mark
Community Manager
Basho Technologies
wiki.basho.com
twitter.com/pharkmillups
---
Riak Recap for September 28 - October 2
=======================================
SSDs are an option, sure. I have one in my laptop; we have a bunch
of X25s on the way already for the servers. Yes, they're good. But
IOPS is not the core issue since the whole thing can sit in RAM
which is faster yet. Disk-flush "later" isn't time critical. Getting the
data into the buckets i
Yeah, I thought so... it would be nice though to be able to search across
different buckets, e.g. lookup services for equities across different
investment universes etc.
Date: Mon, 3 Oct 2011 22:37:44 -0400
Subject: Re: Searching across multiple buckets
From: jeff.kirk...@gmail.com
To: roberto_
Roberto,
I do not believe this is possible through a single call because you need to
explicitly define the bucket to search in and it does not (at least that I
have seen) accept an array or list of buckets. That said, I guess you could
do something that makes several calls but that is not what you
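A rough sketch of the several-calls workaround described above: issue one query per bucket against Riak Search's Solr-style HTTP interface (`/solr/<bucket>/select`) and merge the results client-side. The host/port defaults and the merge policy here are assumptions, not anything a client library provides.

```python
# Hypothetical fan-out: one Riak Search query per bucket, merged client-side.
# URL shape follows the /solr/<bucket>/select HTTP interface; the rest is
# illustrative only.
from urllib.parse import urlencode


def search_urls(buckets, query, host="127.0.0.1", port=8098):
    """Build one /solr/<bucket>/select URL per bucket."""
    qs = urlencode({"q": query})
    return ["http://%s:%d/solr/%s/select?%s" % (host, port, b, qs)
            for b in buckets]


def merge_results(per_bucket_docs):
    """Flatten per-bucket doc lists, tagging each doc with its bucket."""
    merged = []
    for bucket, docs in per_bucket_docs.items():
        for doc in docs:
            merged.append(dict(doc, _bucket=bucket))
    return merged
```

You would fetch each URL (however your HTTP layer does it) and hand the decoded docs to merge_results; nothing here dedupes or re-ranks across buckets.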
Hello,
I'm currently using the Riak Erlang client and when I do a get_index I
only get the keys back. So, my question is, is it better to get the
keys, loop through them and run a get on them one by one, or is it
better to write my own MapRed job which queries the index and then
runs a map phase
Is it possible? How do we do it?
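One way to weigh the two options above: looping over the keys costs one round trip per object, while a single MapReduce job can take the index query as its input and run the map phase server-side. A hedged sketch of the body you would POST to /mapred for an exact-match 2i input — the field names are my reading of the 1.0 MapReduce docs, not verified against the Erlang client:

```python
# Sketch: build the JSON body for a POST to /mapred that feeds secondary
# index matches straight into a map phase, avoiding a separate get per key.
# The "inputs"/"query" shapes are assumptions based on the Riak 1.0 docs.
import json


def index_mapred_job(bucket, index, key):
    """Job spec: exact-match 2i input piped into a built-in JS map phase."""
    return json.dumps({
        "inputs": {"bucket": bucket, "index": index, "key": key},
        "query": [
            {"map": {"language": "javascript",
                     "name": "Riak.mapValuesJson",
                     "keep": True}}
        ],
    })
```

Whether this beats N pipelined gets depends on object size and cluster load, so it is worth benchmarking both.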
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
Your search/luwak tests are failing, presumably because those options
are not enabled in your Riak installation. You can disable them in the
test suite by doing:
$ SKIP_SEARCH=1 SKIP_LUWAK=1 python setup.py test
You also seem to be running into a problem with leftover keys in one
of the test buck
Woah! cool, thanks!
2011/10/3 Russell Brown
>
> On 3 Oct 2011, at 21:58, francisco treacy wrote:
>
> I was giving the bucket_inspector example because I happen to need it now.
> I know I can write it in another lang in one minute, but since I'll be using
> other Erlang contrib I wanted (and sti
While writing a detailed reply I realized that a single default
post-commit hook will do. The hook would get the bucket name from the
object and then decide what to do. Thanks for the nudge. :-)
Andy
On Mon, Oct 3, 2011 at 3:44 PM, Ryan Zezeski wrote:
> Andy,
> Not crazy at all. Another issue w
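The single-default-hook idea sketched in (hypothetical) Python rather than Erlang, just to show the dispatch shape: one registered hook inspects the bucket name on each written object and routes it. The prefix convention and handler names are made up for illustration.

```python
# Illustrative only: the real hook would be an Erlang function registered as
# a default bucket property. Handlers and the "prefix_rest" naming scheme
# are assumptions.
def handle_foo(obj):
    return "foo:" + obj["key"]


def handle_bar(obj):
    return "bar:" + obj["key"]


def route_commit(obj):
    """Single default hook: look at the bucket name, dispatch by prefix."""
    handlers = {"foo": handle_foo, "bar": handle_bar}
    prefix = obj["bucket"].split("_", 1)[0]
    return handlers.get(prefix, lambda o: None)(obj)
```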
On 3 Oct 2011, at 21:58, francisco treacy wrote:
> I was giving the bucket_inspector example because I happen to need it now. I
> know I can write it in another lang in one minute, but since I'll be using
> other Erlang contrib I wanted (and still want) to see how that would work;
> but well,
I was giving the bucket_inspector example because I happen to need it now. I
know I can write it in another lang in one minute, but since I'll be using
other Erlang contrib I wanted (and still want) to see how that would work;
but well, nevermind if it's not that straightforward.
Francisco
2011/
Andy,
Not crazy at all. Another issue with adding the hook for each foo* bucket
is that you increase the size of the ring which must be gossiped between
nodes. We recently added something called "bucket fixups" to 1.0 that sorta
do what you want but only for buckets with custom properties. ATM,
On 3 Oct 2011, at 20:43, francisco treacy wrote:
> Hi Greg
>
> Thanks, but what I'm really after is executing a script (non-interactive)
>
> This is what I've got:
>
> #!/usr/bin/env escript
> %% -*- erlang -*-
> %%! -name riakinspect -setcookie riak
>
> main([]) ->
> %{ok, Client} = riak:cl
Hi Greg
Thanks, but what I'm really after is executing a script (non-interactive)
This is what I've got:
#!/usr/bin/env escript
%% -*- erlang -*-
%%! -name riakinspect -setcookie riak
main([]) ->
%{ok, Client} = riak:client_connect('riaksearch@127.0.0.1'),
% i have the module compiled in /tmp
Awesome, the test suite is passing for me with these settings.
Thanks a lot
Honza Král
E-Mail: honza.k...@gmail.com
Phone: +420 606 678585
On Mon, Oct 3, 2011 at 8:00 PM, Reid Draper wrote:
> There have been some changes in Riak 1.0 with regard to how deletes work,
> and the python client ha
There have been some changes in Riak 1.0 with regard to how deletes work,
and the python client has not been updated appropriately yet. In the
meantime, you can add the option:
{delete_mode, immediate} to the riak_kv section of app.config of Riak. This
should allow all of the tests to pass. You can
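For reference, the option above slots into the existing riak_kv section of app.config like so (surrounding settings elided):

```erlang
{riak_kv, [
    %% ...existing riak_kv settings...
    {delete_mode, immediate}
]}
```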
On Mon, Oct 3, 2011 at 7:31 PM, Greg Stein wrote:
> Your search/luwak tests are failing, presumably because those options
> are not enabled in your Riak installation. You can disable them in the
> test suite by doing:
>
> $ SKIP_SEARCH=1 SKIP_LUWAK=1 python setup.py test
Thanks, I tried this righ
Good to know, thanks.
Upvote for being able to unset properties over HTTP. We don't deploy erlang in
production and rely on curl to set all our properties, so that'd be great for
us.
--
Greg
Clipboard
On Sunday, October 2, 2011 at 8:11 PM, Ryan Zezeski wrote:
> Greg,
>
> Yes, use the new
Hi everybody,
I cannot get the test suite in the python client to run, I have tried
on Arch Linux on my notebook and then on an Ubuntu Natty system on EC2
with riak 1.0.0:
wget http://downloads.basho.com/riak/riak-1.0.0/riak_1.0.0-1_amd64.deb
sudo dpkg -i riak_1.0.0-1_amd64.deb
sudo /etc/init.d/r
Hey Francisco,
If what you're looking to do is connect to Riak in Erlang without having to run
'riak attach', try this little bit of magic.
http://www.clipboard.com/clip/LR04fvr5rXWvT__G
The value for "cookie" will be "riak" unless you've changed it.
--
Greg
Clipboard
On Monday, October 3
Going along with the flushing option, would it be possible for you to write to
an in memory bucket in Riak and then periodically flush to disk? RAM is going
to be faster than an SSD ;)
---
Jeremiah Peschka - Founder, Brent Ozar PLF, LLC
Microsoft SQL Server MVP
On Oct 3, 2011, at 9:11 AM, Ryan
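If memory serves, the 1.0 multi-backend makes the in-memory-bucket idea expressible in config: route one hot bucket to the memory backend and leave everything else on bitcask. Treat the exact names below as assumptions to check against the backend docs:

```erlang
%% app.config sketch (names unverified): multi-backend with a memory option
{riak_kv, [
    {storage_backend, riak_kv_multi_backend},
    {multi_backend_default, <<"bitcask_be">>},
    {multi_backend, [
        {<<"bitcask_be">>, riak_kv_bitcask_backend, []},
        {<<"memory_be">>,  riak_kv_memory_backend,  []}
    ]}
]}
%% then set the "backend" bucket property to "memory_be" on the hot bucket
```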
Mike,
I'd say you're going to be pushing the limits of Riak pretty hard given the
fact that you're talking about 5k writes per second on a _single_ key. I
hope you listen to Artur Bergman and run SSDs in your data center, heh [1].
My first thought would be to batch those writes locally for a gi
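The local-batching idea, sketched as a (hypothetical) Python helper: coalesce the per-key write stream and push one combined write every N updates or T seconds. The thresholds and the flush callback are placeholders, and this trades durability of the buffered tail for write volume.

```python
# Illustrative batcher for a high-frequency single-key write stream.
# `flush` stands in for whatever actually writes the combined value to Riak.
import time


class WriteBatcher:
    def __init__(self, flush, interval=1.0, max_items=1000):
        self.flush = flush          # callable taking a list of buffered values
        self.interval = interval    # seconds between time-based flushes
        self.max_items = max_items  # size-based flush threshold
        self.buf = []
        self.last = time.monotonic()

    def add(self, value):
        """Buffer one update; flush when the batch is big or old enough."""
        self.buf.append(value)
        now = time.monotonic()
        if len(self.buf) >= self.max_items or now - self.last >= self.interval:
            self.flush(self.buf)
            self.buf = []
            self.last = now
```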
Sounds great. If you are performing a rolling upgrade make sure to
modify/add vnode_vclocks after all nodes have transitioned to 1.0 [1].
[1]: http://wiki.basho.com/Rolling-Upgrades.html
On Mon, Oct 3, 2011 at 11:22 AM, Gordon Tillman wrote:
> Morning Ryan,
>
> Hey thanks for the info. I'm te
Antonio,
The reason the lowercase search doesn't work is because the default analyzer
(whitespace analyzer) is case sensitive [1]. You could use the "standard"
analyzer which will lowercase all terms but beware it's based off the
standard Lucene analyzer which is really meant for full text search
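A toy model of why the lowercase query misses (these are not the actual Lucene analyzers, just the behavior): the whitespace analyzer keeps terms exactly as written, so an index-time "IBM" never matches a query-time "ibm", while a lowercasing analyzer normalizes both sides.

```python
# Toy analyzers demonstrating case sensitivity; the real ones do far more.
def whitespace_analyze(text):
    """Mimics the default whitespace analyzer: split only, case preserved."""
    return text.split()


def lowercase_analyze(text):
    """Mimics a lowercasing analyzer: terms match case-insensitively."""
    return [t.lower() for t in text.split()]
```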
Morning Ryan,
Hey thanks for the info. I'm testing with 1.0 right now (great job with that
by the way) and I hope to be able to switch our deployment package over to 1.0
soon.
As always I appreciate your time and trouble.
--gordon
On Oct 3, 2011, at 10:02 , Ryan Zezeski wrote:
> Gordon,
>
Hi Roland,
Riak deletes by first writing a tombstone and then when all replicas are in
sync removing the object from the underlying key/value store. We have made
some changes in 1.0.0 to increase the length of time the tombstones are
around when all nodes are up (in the Delete Changes section of
Gordon,
It's worth pointing out that in 1.0 this should be greatly improved because
now we have changed vclock behavior in relation to PUTs [1]. Essentially,
client id now comes from the vnode, not externally, which leads to
smaller/static vclock sizes even in the face of frequent updates.
I can
Please? at least a pointer!
2011/9/29 francisco treacy
> I'm wanting to use the `bucket_inspector` contrib function (but have zero
> Erlang experience).
>
> Following the "usage" page, I do the following:
>
> $ /opt/riak/erts-5.7.5/bin/erlc -o /tmp /tmp/bucket_inspector.erl
> $ riak attach
> (riak
Hi Basho,
I had written some simple tests using riakc client library.
The tests are attached to this mail.
In particular I made this call in the tests at the start of
of each test in order to get a clean DB.
riakc_pb_socket:delete(Pid, Bucket, Key),
In 0.14 that worked just fine. But ... i
Jim,
If you look at your bitcask directories, do you have a large number of
zero-byte files, perchance?
D.
On Sat, Oct 1, 2011 at 1:58 PM, Jim Adler wrote:
> After upgrading my single-node instance to 1.0, I'm still seeing the
> "timeout when storing" issue. Here are the changes I made based o