Hi,
I'm trying to work out the best way of storing temporal data in Riak.
I've been investigating several NoSQL solutions and originally started
out using CouchDB; however, I want to move to a DB that scales more
gradually (CouchDB scales, but you really have to set up the
architecture beforehand).
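Not an answer from the thread, but one common sketch for temporal data in a plain key/value store like Riak is to encode the time window into the bucket name and a sortable timestamp into the key, so that a "range" becomes a predictable set of key lookups. All names below (bucket scheme, `sensor42`) are illustrative assumptions, not anything from this mail:

```shell
# Hypothetical key scheme: one bucket per day, keys prefixed with an
# ISO-8601 timestamp so keys for the same source sort chronologically.
ts="2010-05-17T19:41:00Z"
day="${ts%%T*}"            # bucket suffix: 2010-05-17
key="${ts}_sensor42"       # sortable timestamp + source id
echo "PUT /riak/events-${day}/${key}"
```

Fetching "all events for a day" then means listing one small bucket instead of filtering one huge one.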
No, it's Intel, a Core 2 Duo. That makes the problem even weirder.
The other bad news is that I cannot find any core dumps from Erlang...
On Mon, May 17, 2010 at 7:41 PM, Ryan Tilder wrote:
> Which hardware platform are you running on? I'm guessing PowerPC. Bus
> errors pretty much don't happen on x86 platforms any more.
I am a moron!
Edit deps/erlang/rebar.config. Change "make" in the last two lines to
"gmake".
From:
{port_pre_script, {"make -C c_src", ""}}.
{port_cleanup_script, "make -C c_src clean"}.
to
{port_pre_script, {"gmake -C c_src", ""}}.
{port_cleanup_script, "gmake -C c_src clean"}.
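Not from the original mail, but the same two-line edit can be scripted; a sed one-liner under the assumption that `"make -C c_src` only occurs in those two port-script lines:

```shell
# Swap make -> gmake in the two port scripts of rebar.config.
# -i.bak keeps a backup copy; run from the riak source root.
sed -i.bak 's/"make -C c_src/"gmake -C c_src/g' deps/erlang/rebar.config
```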
Looking for
Morning, Afternoon, Evening -
Hope everyone had a great weekend.
Another action-packed recap for today! Some wisdom from #riak, a Riak
GUI tool written in Java, some Riak/PHP slides, a data model blog
post, some wiki additions, and a pointer to a new Basho Podcast.
Enjoy -
Mark
Community Manager
Hi Germain,
You can make a HEAD request to the bucket/key path. It will return 404 or
200 without the document body.
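A minimal sketch of that existence check with curl, assuming Riak's default HTTP port 8098 and the `/riak/<bucket>/<key>` URL layout of this era; `mybucket`/`mykey` are made-up names:

```shell
# -I sends a HEAD request; -w prints just the status code.
# 200 means the key exists, 404 means it does not; no body is
# transferred in either case.
curl -s -o /dev/null -w "%{http_code}\n" -I \
  http://127.0.0.1:8098/riak/mybucket/mykey
```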
On Mon, May 17, 2010 at 9:04 AM, Germain Maurice <
germain.maur...@linkfluence.net> wrote:
> Le 17/05/10 15:34, Paul R a écrit :
>
> What should the user do to come back to the
On 17/05/10 15:34, Paul R wrote:
What should the user do to come back to the previous level of
replication? A forced read repair, in other words a GET with R=2 on all
objects of all buckets?
Yes, I wonder too what the best thing to do after a node crash is.
Possibly I'll do read requests
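Such a forced read repair could be sketched over the 0.x HTTP interface as below. The URL layout, the `?keys=true` listing, the crude JSON parsing, and the choice of `r=3` (consult every replica rather than the R=2 mentioned above) are all assumptions, and key listing is expensive on large buckets:

```shell
# List every key in a bucket, then GET each one with an explicit r so
# all replicas are consulted and stale copies get read-repaired.
base="http://127.0.0.1:8098/riak"
bucket="mybucket"
curl -s "$base/$bucket?keys=true" \
  | sed 's/.*"keys":\[\([^]]*\)\].*/\1/; s/"//g' | tr ',' '\n' \
  | while read -r key; do
      [ -n "$key" ] && curl -s -o /dev/null "$base/$bucket/$key?r=3"
    done
```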
Which hardware platform are you running on? I'm guessing PowerPC. Bus
errors pretty much don't happen on x86 platforms any more. If you haven't
filed a bug with the full output of the error, please do so. We can't do
much for you without more detailed info.
--Ryan
On Sun, May 16, 2010 at 2:12
Hi !
Germain> Hum... I'm reading the "Replication" section on the wiki again, and
Germain> I found that the behaviour I described seems to be a "read repair".
It is indeed documented; still, it makes me wonder what the user is
supposed to do after the definitive loss of a physical node.
For instance
Francesca,
Thank you! Please send the patch to r...@basho.com when you have a chance.
Sean Cribbs
Developer Advocate
Basho Technologies, Inc.
http://basho.com/
On May 17, 2010, at 7:57 AM, Francesca Gangemi wrote:
> Hi,
>
> when calling the filter_keys/2 API with innostore configured as backend
Hum... I'm reading the "Replication" section on the wiki again, and I found
that the behaviour I described seems to be a "read repair".
Sorry for the noise.
On 17/05/10 13:49, Germain Maurice wrote:
Hi,
I have a 3-node cluster and I simulated a complete loss of one node
(node3, erase of the entire hard disk).
Hi,
when calling the filter_keys/2 API with innostore configured as the backend,
I get the following error on the riak node.
{badarg,[{innostore_riak,list_bucket,2},
         {riak_vnode,do_list_bucket,6},
         {riak_vnode,active,2},
         {gen_fsm,handle_msg,7},
         {proc_lib,i
Hi,
I have a 3-node cluster and I simulated a complete loss of one node
(node3, erase of the entire hard disk).
I installed it again, launched "riak console" and "riak-admin join
r...@node2", and waited a while to see the data recover on node3,
which was freshly repaired, but no data were recovered.