I would speculate that you are running into this issue:
https://github.com/basho/webmachine/issues/183.
If that is the case, then this is a known issue.
Kelly
On 09/29/2014 05:03 AM, riakuser01 wrote:
This bug was causing the memory leak seen in
http://riak-users.197444.n3.nabble.com/Memory-Usa
Have you changed the n_val property of the bucket in question? Lowering
the n_val can result in duplicate results.
Kelly
On 08/21/2014 02:29 PM, Chaim Peck wrote:
I am looking for some clues as to why there might be duplicate keys in a Riak
Secondary Index. I am using version 1.4.0.
Thanks,
memory. Is it using a b-tree?
Thanks,
Jason
- Original Message -
From: "Kelly McLaughlin"
To: "Jason Campbell" , "riak-users"
Sent: Wednesday, 20 August, 2014 1:26:36 AM
Subject: Re: Bitcask Key Listing
Jason,
There are two aspects to a key listing
Alex,
The value you had set for the fold_objects_for_list_keys setting is one
I was very interested to see and I highly recommend setting it to true
for your cluster.
The impact of setting this to true should be to make bucket listing
operations generally more efficient. There should be no det
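For reference, that setting lives in the riak_kv section of app.config on each
node. Roughly (a sketch, assuming a 1.4-era config layout):

    {riak_kv, [
        %% use the object fold path for key and bucket listings
        {fold_objects_for_list_keys, true}
    ]}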
Jason,
There are two aspects to a key listing operation that make it
expensive relative to normal gets or puts.
The first part is that, due to the way data is distributed in Riak, key
listing requires a covering set of vnodes to participate in
order to determine the list of keys for a buck
ex Millar, CTO
Office: 1-800-354-8010 ext. 704
Mobile: 519-729-2539
GoBonfire.com <http://GoBonfire.com>
From: Kelly McLaughlin <mailto:ke...@basho.com>
Reply: Kelly McLaughlin <mailto:ke...@basho.com>
Date: August 15, 2014 at 7:03:47 PM
To: Alex Millar
Hello Alex. Would you mind sharing what version of Riak and Riak CS you
are using? Also if you can post the contents of your Riak CS app.config file,
it might help give a better idea of what might be going on.
Generally listing the contents of a bucket is more expensive than a
normal downlo
Dave,
Can you tell me what versions of Riak and Riak CS you have installed? Do you
have AAE enabled or disabled? It’s tough to come up with an explanation without
more information, but I would try setting n_val_1_get_requests to false and see
if you continue to experience the problem. My guess
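If it helps, that setting goes in the riak_cs section of the Riak CS
app.config. A sketch (assuming a 1.4-era layout):

    {riak_cs, [
        %% disable the n_val=1 optimization for block GET requests
        {n_val_1_get_requests, false}
    ]}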
Mohan,
I might be a bit confused on what your intent is, but it sounds like your task
is to download a large group of files from S3 for processing and you are
considering Riak CS for that processing work. If that is the case I am not sure
Riak CS is the right fit for that job. Riak itself has a
Toby,
I checked on this and you are correct that it is a bug. The query parameters
for the usage API requests are omitted during the URL processing step. This
leads to the not_requested return you are seeing because when the usage request
is processed after the URL processing step the indicator
erl"},{line,48}]},
{webmachine_decision_core,decision,1,
[{file,"src/webmachine_decision_core.erl"},{line,486}]}]}}
I tried adding an acl item to the Options list, but that didn’t seem to help.
Do you have any idea what may be causing this?
Thanks,
Lee
the lines that make the calls
to Riak CS etc? That part evades me.
Thanks,
Lee
On 31 Mar 2014, at 16:09, Kelly McLaughlin wrote:
Lee,
We have a fork of erlcloud (https://github.com/basho/erlcloud) we use for
testing and it can be made to work with your Riak CS cluster with relatively
Riak CS does store large files directly in Riak in a manner somewhat similar to
what you describe and has the advantage that there are S3 libraries for most
languages. You might want to look at it a little closer because this does sound
like reinventing the wheel to me.
Kelly
On March 29,
Lee,
We have a fork of erlcloud (https://github.com/basho/erlcloud) we use for
testing and it can be made to work with your Riak CS cluster with relatively
little pain. Look in the riak_cs repo under client_tests/erlang/erlcloud_eqc.erl
for some example usage. You'll probably want to set the pro
Try using the devrel Makefile target. It builds a set of releases under dev/
that are able to run on the same machine.
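For example, from a source checkout (a sketch; node names assume the stock
devrel settings):

    make devrel
    dev/dev1/bin/riak start
    dev/dev2/bin/riak start
    # join the nodes into one cluster
    dev/dev2/bin/riak-admin cluster join dev1@127.0.0.1
    dev/dev2/bin/riak-admin cluster plan
    dev/dev2/bin/riak-admin cluster commit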
Kelly
On March 28, 2014 at 11:19:09 AM, Massimiliano Ciancio
(massimili...@ciancio.net) wrote:
Hello list,
I'm trying to start two different instances of riak on the same se
Andrew,
If you have not already done so, try increasing the pb_backlog setting in
the Riak app.config to 256 and see if that helps. You will need to restart
Riak for it to take effect. Details about setting that are here:
http://docs.basho.com/riak/latest/ops/advanced/configs/configuration-files/#ap
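For example (a sketch; on 1.4+ the setting lives under riak_api, on older
releases under riak_kv):

    {riak_api, [
        %% listen backlog for protocol buffers connections
        {pb_backlog, 256}
    ]}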
ering how I can tune against issues like this, but keep the same
performance that's currently in place. I'm not very familiar with performing
read-repairs. Look forward to any assistance with this.
Thanks,
Andrew
On Thu, Nov 14, 2013 at 10:37 AM, Kelly McLaughlin wrote:
Andrew,
Are
ut, cause
archive.
Thank you so much,
Andrew
On Wed, Nov 13, 2013 at 10:57 PM, Kelly McLaughlin wrote:
Andy,
To try to get a better idea of what might be going on it would be helpful to
see what your riak and riak cs app.config files look like. Also the output of
riak-admin ring-status a
Andy,
To try to get a better idea of what might be going on it would be helpful to
see what your riak and riak cs app.config files look like. Also the output of
riak-admin ring-status and riak-admin member-status could be useful. For the
upload issue I am curious if you have changed the port th
Hi Shannon. I think the problem you are having is that you are trying to use the POST method to upload a part when you should be using PUT. Riak CS interprets a POST request that includes the uploadId like you are sending to be an attempt to complete the multipart upload and that is why you are see
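For illustration, an upload-part request should look roughly like this (the
host, key, and uploadId here are hypothetical, and a real request also needs
the usual S3 authorization headers):

    PUT /mybucket/mykey?partNumber=1&uploadId=VXBsb2FkSWQ HTTP/1.1
    Host: s3.example.com

A POST to the same key carrying only the uploadId parameter is what signals
completion of the multipart upload.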
Idan,
Actually in the case you described of using a 3 node Riak cluster with n_val of 2 the behavior you see makes perfect sense. When using only three nodes Riak does not guarantee that all replicas of an object will be on distinct physical nodes. So if you have one node down you can hit a case whe
10 iterations or so. Hope that helps.
[1] : https://gist.github.com/kellymclaughlin/6041109
Kelly
On Wed, Jul 17, 2013 at 4:07 PM, Matthew Dawson wrote:
> On July 17, 2013 08:45:01 AM Kelly McLaughlin wrote:
> > Matthew,
> >
> > I find it really surprising that you don'
Dimitri,
It looks like you do not have Riak properly configured for use with RiakCS.
Specifically, it looks like the add_paths setting is missing or incorrect.
You can find more info on the specifics of configuring Riak for RiakCS
here:
http://docs.basho.com/riakcs/latest/cookbooks/configuration/Co
Matthew,
I find it really surprising that you don't see any difference in behavior
when you set delete_mode to keep. I think it would be helpful if you could
outline your specific setup and give the steps to reproduce what you're
seeing to be able to make a determination if this represents a bug o
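For reference, that is set in the riak_kv section of app.config (a sketch):

    {riak_kv, [
        {delete_mode, keep}
    ]}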
Hi Guido. The docs section you referenced is for RiakCS. We do recommend a
substantially higher value for zdbbl when using RiakCS because the object
data being stored is divided into 1MB chunks which is still a relatively
large object size for Riak to handle. Riak used by itself may not need the
va
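For reference, that knob is +zdbbl in vm.args (a sketch; 96000 KB is the
figure typically suggested for Riak CS, tune to your traffic):

    ## erlang distribution buffer size in kilobytes
    +zdbbl 96000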
Hi Dan. I do not know much about distcp, but if it is the case that it uses
a PUT (copy) operation to transfer data then distcp will not currently work
with RiakCS. Support for that operation is on our roadmap, but it is not
done yet unfortunately.
Kelly
On Wed, Jul 10, 2013 at 6:20 AM, Sajner,
Guy,
First, in your riak_cs config file you'll want to change the value of
cs_root_host to "s3.bksv.com". Then in the riak_cs_control config file
change the value of cs_hostname to also be "s3.bksv.com". That should be
all you need to do as long as *.s3.bksv.com resolves to your RiakCS machine.
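In config-file form, that is roughly (a sketch):

    %% riak-cs app.config
    {riak_cs, [
        {cs_root_host, "s3.bksv.com"}
    ]}

    %% riak-cs-control app.config
    {riak_cs_control, [
        {cs_hostname, "s3.bksv.com"}
    ]}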
K
Hi Quentin. Are you using a packaged version and if so could you tell me
what version? Also what sort of request are you attempting and what are you
using to generate the pre-signed URLs?
Kelly
On Mon, Jul 8, 2013 at 8:42 AM, Quentin ADAM wrote:
> Hi
>
> Does someone use riakCS with pre signed
The {tcp,econnrefused} errors mean that RiakCS is unable to connect to Riak.
Riak must be running when you start RiakCS. Also make sure your settings in
the RiakCS app.config for riak_ip and riak_pb_port match the settings that
your Riak instance is actually listening on. Hope that helps.
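Something like this in the RiakCS app.config, matched to Riak's own pb_ip and
pb_port (the values shown are just the defaults):

    {riak_cs, [
        {riak_ip, "127.0.0.1"},
        {riak_pb_port, 8087}
    ]}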
Kelly
Mike,
You are correct, there should only be one instance of Stanchion.
Stanchion's role is to serve as a serialization point for requests that
create or modify system entities that need to be globally unique [1], e.g.
buckets or user accounts. These are entities where there is not enough
context a
, Kelly McLaughlin wrote:
> Idan,
>
> I'll investigate this a bit and see if I can replicate similar behavior
> and hopefully I can get back to you with more information. Thanks for
> sharing the info.
>
> Kelly
>
>
> On Wed, May 22, 2013 at 3:23 AM, Idan Shinberg
> still, actual data-set size should be 48 x 32 MB, which is 1.5 GB.
> I also noticed each time I upload a file , 2x of it's size is
> automatically used , And I'm guessing that's related :-)
>
> The Single Riak node is running on CentOS 6.3 with 1.3.1 packaged
>
Idan,
Bitcask can sometimes be slow to reclaim space after deleting objects from
Riak CS. Are the settings you included the settings that have been in place
during all of your uploads and deletions? I am surprised that just a few
tens of uploads of 32 MB objects used up 15 GB of space. Can you be
Stefan,
I have been able to reproduce the problem you saw and I have opened a
github issue to address it here: https://github.com/basho/riak_cs/issues/532.
Thanks again for the report and we should have a fix for it pushed to
github very soon.
Kelly
On Wed, Apr 10, 2013 at 5:42 PM, Kelly
Stefan,
Thanks for the report. We will investigate this and let you know what we
find.
Kelly
On Wed, Apr 10, 2013 at 5:05 AM, Silasi Stefan wrote:
> Hello,
>
> I'm testing RIAK-CS as an S3 alternative and I get some strange crashing
> when reading data from it. The following is reproducible wit
Dave,
This is indeed a bug. I have a fix ready and I will be pushing it up to
github later today. In the meantime, I've opened an issue to track the
problem. Thanks for the report!
https://github.com/basho/riak_cs/issues/515
Kelly
On Wed, Mar 27, 2013 at 9:18 AM, Kelly McLaughlin
Hey Dave. I'll try to reproduce this issue and get back to you about what I
find. Thanks.
Kelly
On Wed, Mar 27, 2013 at 8:38 AM, Dave Finster wrote:
> Just adding a bit more info at the request of shino1.
>
> I've got a URL
>
>
> http://intranet-development.myriakcs.com/fdcddce0-78ca-0130-bc9c-
Jean-Baptiste,
It seems like something is not properly configured, but I am not really
able to spot anything obvious that is wrong from the gist you provided.
What operating system are you testing on and are you using packages or have
you built from source? Could you also gist your riak_cs app.con
Riak CS does not use chunk level deduplication. We made the decision to
avoid the complexity required for a robust deduplication scheme at the cost
of extra disk usage in part to ensure that there is a sane way to delete
items.
> Does CS perform any chunk-level de-dup like Luwak? And if so, doe
Here's the clue from the error output: {error,emfile}
EMFILE is an error code that indicates you have too many open files. You just
need to adjust your ulimit settings on the new machine to allow Riak to open
more files.
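For example (a sketch; 65536 is a common choice, and an entry in
/etc/security/limits.conf makes it persistent across logins):

    # check the current open-files limit
    ulimit -n
    # raise it for this shell, then start riak
    ulimit -n 65536
    riak start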
Kelly
On Dec 6, 2012, at 11:12 PM, kser wrote:
> I need to transfer th
Hi Olav. As you have observed, the key length of the generated keys is not
guaranteed. riak_core_util:unique_id_62 generates a 20-byte SHA hash, converts
that value to a base-62 representation, and that representation, as a string,
becomes the key. The length of the
generated k
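A hypothetical sketch of the idea (b62/2 is my own illustration, not the
riak_core code): hash values whose high-order base-62 digits are zero simply
encode to fewer characters, hence the variable key length.

    %% base-62 encode a non-negative integer (illustration only)
    b62(0, Acc) -> Acc;
    b62(N, Acc) ->
        D = N rem 62,
        C = if D < 10 -> $0 + D;
               D < 36 -> $a + (D - 10);
               true   -> $A + (D - 36)
            end,
        b62(N div 62, [C | Acc]).

    %% a 160-bit SHA value needs at most 27 base-62 digits,
    %% but smaller hash values yield shorter keys:
    <<I:160/integer>> = crypto:hash(sha, <<"example">>),
    Key = b62(I, []).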
Mikhail,
I am familiar with this error. I need to understand more of your situation
before I can make any recommendations. Can you describe more about what you are
doing when this happens? How often are you running secondary index queries? How
many objects are in the bucket you're querying agai
John and Shane,
I have been looking into some memory issues lately and I would be very
interested in more information about your particular problems. If either of
you are able to get some output from etop using the -sort memory option when
you are having elevated memory usage it would be very
Geoff,
You can just delete the contents of the ring directory and restart with the
modified vm.args and you should be fine. You can find it at
./rel/riak/data/ring/ if you built a release from source or /var/lib/riak/ring/
if you installed a package.
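Concretely, for a package install (a sketch; adjust the path for a source
build):

    riak stop
    rm -f /var/lib/riak/ring/*
    # edit vm.args, then
    riak start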
Kelly
On Sep 24, 2012, at 7:40 AM, Geoff
Hi Dave. It is correct that listing all the keys in a bucket can be expensive.
The extent of that expense depends on a few things like the size of your
cluster (both physical nodes and vnodes), the backend you use, and the amount
of data you have stored. The $key index can be useful if you are t
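For example, with an index-capable backend you can range-query the built-in
$key index over HTTP instead of listing everything (a sketch using the 1.x URL
scheme and a hypothetical bucket):

    curl 'http://127.0.0.1:8098/buckets/mybucket/index/$key/a/f'

which returns only the keys that sort between "a" and "f".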
On Aug 29, 2012, at 9:07 PM, Brad Heller wrote:
>
> So my question is: Why did this completely kill Riak? This makes me pretty
> nervous--a bug in our app has the potential to bring down the ring! Is there
> anything we can do to protect against this?
>
Riak 1.2 had a lot of changes to level
Michael,
If you delete the ring files from each release directory (rm -f data/ring/*)
and change the ports as Alexander mentioned then it should work for you. That
error message is not really very clear so that's something we should probably
work on improving, but since you already started the
Jeff,
Have you installed the Xcode command line tools and made sure all older Xcode
versions have been removed?
If so try this:
sudo ln -s /Applications/Xcode.app/Contents/Developer /Developer
Sent from my iPhone
On Aug 7, 2012, at 7:28 PM, Jeff Kirkell wrote:
> Is anyone having troub
Paul,
I just tried on OS X and Ubuntu 11.10 and got the expected results on both so
I'm not sure what could be going on. What version of Ubuntu were you trying?
Kelly
On Jul 20, 2012, at 6:12 PM, Paul Gross wrote:
> I'm seeing different results when performing a 2i query with spaces on
> di
Hi Izzy. Use riak start and then you can use riak attach if you need to attach
a console to the running instance.
Kelly
Sent from my iPhone
On May 26, 2012, at 8:38 AM, Izzy Alanis wrote:
> Has anyone gotten riak to run under launchd?
>
> Or, is there a way to run riak in console mode withou
Hi Steve. There is no caching of key lists in riak. What you are seeing is
likely due to the fact that listing keys or index queries can pick up deleted
keys, because riak keeps tombstone markers around for deleted objects for some
period. For a really good explanation of riak's delete b
Tim,
Another option along the lines of what Markus described would be to use a HEAD
request. This will still cause Riak to read the object, but it has the
advantage that the entire object is not returned in the response. You can do
this with the HTTP or the PB interface. Making an HTTP HEAD req
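For example, over HTTP (a sketch against the old /riak URL scheme on the
default port; curl -I issues a HEAD request):

    curl -I http://127.0.0.1:8098/riak/mybucket/mykey

You get back the status line and headers such as ETag and Last-Modified, but
no object body.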
Hi Claude. Most git servers use port 9418. If you are able to open that port on
your firewall it should work. Alternatively if you look for files in the riak
directory and its subdirectories called 'rebar.config' and edit those and
change the git urls to be "http://..." instead of "git://..." t
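One way to make that edit in bulk (a sketch; review the changes before
building):

    find . -name rebar.config -exec sed -i 's|git://|http://|g' {} +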
Francisco,
Is it the case that you have already tried to force read repair on the file
chunks and are still seeing these errors?
Kelly
On Jan 12, 2012, at 10:57 AM, francisco treacy wrote:
> Hi,
>
> One of my users is preparing an exam but can't load an image because
> Luwak is serving it bro
, it reduce the amount of open file operation.
>
>
> Thanks
> Fisher
>
> On Fri, Dec 9, 2011 at 2:13 PM, Kelly McLaughlin wrote:
>> Fisher,
>>
>> Currently you're using the asynchronous api so you won't be able to
>> determine if the send calls actuall
if the write is successful or not, so it may cause the
> data lost sometime, is that right?
>
>
> Thanks
> Fisher
>
> On Fri, Dec 9, 2011 at 12:51 AM, Kelly McLaughlin wrote:
>> Fisher,
>>
>> Yeah don't use flush. Make the calls to send to put the data a
Fisher,
Yeah don't use flush. Make the calls to send to put the data and then call
close and you should get the expected behavior.
Kelly
On Dec 7, 2011, at 11:13 PM, vuleetu wrote:
> On Thu, Dec 8, 2011 at 1:57 PM, Kelly McLaughlin wrote:
>> Fisher,
>>
Fisher,
You need to call luwak_put_stream:close(Ps) to force the flush. That should get
it for you. Cheers.
Kelly
On Dec 7, 2011, at 10:05 PM, vuleetu wrote:
> luwak_put_stream:send(Ps, <<"56789">>).
> luwak_put_stream:flush(Ps).
> {ok, RiakFile3} = luwak_file:get(Riak, <<"testabc22yeah3">>).
Fisher,
In the cases where you see file size discrepancies, are you able to retrieve
those files intact from riak? Having a node in the cluster with an unstable
network connection should not cause data loss because replicas should be saved
to other nodes in your cluster, but it may cause some i
Walter,
There are three settings you can adjust in the riak app.config file that
control the number of javascript vms: map_js_vm_count, reduce_js_vm_count, and
hook_js_vm_count. If you are doing a lot of javascript mapreduce and seeing the
error you mentioned, increase the map and reduce vm cou
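Those settings live in the riak_kv section of app.config (a sketch; the counts
shown are illustrative, not recommendations):

    {riak_kv, [
        {map_js_vm_count,    24},
        {reduce_js_vm_count, 18},
        {hook_js_vm_count,   2}
    ]}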
Francisco,
The problem you are experiencing is not due to search, but seems likely due to
the way in which the nodes of your cluster have been assigned the partitions of
the ring. If only one node has failed and you are getting that
no_candidate_nodes error it means that the preference list of
Ivan,
The erlang version is checked in the rebar.config file. You should see a line
like this: {require_otp_vsn, "R14B0[23]"}.
It's strange that you end up with 2 different erts folders in your rel
directory. I would suggest wiping out the rel/riak directory and rebuilding. If
you're still g
Ben,
The problem is you are using the legacy MapReduce system instead of the new
pipe system. If you edit the riak app.config and change the mapred_system
setting to pipe instead of legacy, it should resolve the issue. The app.config
file can be found in /etc/riak if you installed from one of t
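That is, in app.config (a sketch):

    {riak_kv, [
        %% use riak_pipe instead of the legacy MapReduce system
        {mapred_system, pipe}
    ]}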
Randy,
Using the DEB package is probably the best way to start out. It'll be quicker
to get started and you won't have to worry about installing all the tools to
build from source. Last I checked the erlang version of the package that Ubuntu
installs via apt is old and doesn't work with the lat
Hi Gordon,
I'm looking at the info you provided about the problem and I suspect that it is
related to your use of 90 as the ring creation size. We generally recommend the
value to be a power of 2, though we do not explicitly enforce that in the code.
If this is a development cluster the simples
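For example, in app.config (a sketch; 64 or 128 are the usual power-of-2
choices):

    {riak_core, [
        {ring_creation_size, 64}
    ]}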
John,
It appears you've run into a race condition with adding and leaving nodes
that's present in 1.0.1. The problem happens during handoff and can cause
bitcask directories to be unexpectedly deleted. We have identified the issue
and we are in the process of correcting it, testing, and generat
o, I'm using
> the key filter and search functionality to accomplish this (I tend to use the
> riak python client). But, to be honest, I'm having a helluva time getting
> these basic tasks accomplished before I ramp to hundreds of millions of keys.
>
> Thanks for any he
Jim,
Looks like you are possibly using both the legacy key listing option and the
legacy map reduce. Assuming all your nodes are on Riak 1.0, check your
app.config files on all nodes and make sure mapred_system is set to pipe and
legacy_keylisting is set to false. If that's not already the case
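In other words, each node's app.config should contain something like this (a
sketch):

    {riak_kv, [
        {mapred_system, pipe},
        {legacy_keylisting, false}
    ]}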
Jeremy,
Yes, the same procedure should be fine with the eleveldb data directories.
Kelly
On Oct 15, 2011 6:27 AM, "Jeremy Raymond" wrote:
> Hello,
>
> I recall with using the bitcask backend you could backup the cluster data
> simply by copying the data directories of each node. You could then
Brian,
I've opened a bugzilla ticket for the problem at the root of this. You can view
the details and status here: https://issues.basho.com/show_bug.cgi?id=1244.
Thanks again for the report!
Kelly
On Oct 12, 2011, at 9:49 AM, Brian Bickerton wrote:
> Riak version: 1.0.0
> Backend: InnoStore
Brian,
I've reproduced your problem. From the initial review, it looks like there is
an issue with basho_bench and the way it assembles URLs. There also appears to
be a small problem with Innostore and key creation. We'll be investigating
these more and opening some bugzilla issues for them soo
Martin,
The error you are seeing is caused by the death of some of the map phase worker
processes. It's hard to tell exactly why they would be dying from the limited
output, but I suspect since you mention that it happens only under heavy load
that it's hitting the map reduce timeout or perhaps (bu
Artem,
Are you using a riak package or have you built from source? Which version or
branch are you using? Also, do you know how many objects are in the bucket?
Thanks.
Kelly
On Sep 23, 2011, at 2:23 AM, Artem Kozarezov wrote:
> Riak becomes unavailable when fetching bucket keys.
> wget "http:
LIDc=
> < Vary: Accept-Encoding
> < Server: MochiWeb/1.1 WebMachine/1.9.0 (participate in the frantic)
> < Link: ; rel="up"
> < Last-Modified: Fri, 16 Sep 2011 15:15:48 GMT
> < ETag: "51h3q7RjTNaHWYpO4P0MJj"
> < Date: Fri, 16 Sep 2011 15:22:0
Antoine,
I don't have 1.0.0pre2 installed anywhere handy at the moment, but I do have
pre3 installed and I tested with eleveldb and it works fine for me. Could you
download pre3 and see if that resolves the issue for you? Thanks.
Kelly
On Sep 16, 2011, at 8:08 AM, t3h wrote:
> Hello all,
>
nd access it will disappear after 60 second.
> If I put a key at time 0, and access it at time 50, it will disappear at time
> 110
> If I put a key at time 0, and access it every 15 seconds, it will disappear
> after 300 seconds.
>
> I'm I right?
>
> Regards,
>
Tony,
riak_kv_cache_backend_ttl is the amount of time to extend an object's lease or
lifespan when accessed whereas riak_kv_cache_backend_max_ttl is the amount of
time after which no further extensions should be granted.
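For example, in app.config (a sketch; values in seconds, chosen to match the
scenario you describe):

    {riak_kv, [
        %% each access extends the object's life by 60s...
        {riak_kv_cache_backend_ttl, 60},
        %% ...but never beyond 300s in total
        {riak_kv_cache_backend_max_ttl, 300}
    ]}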
Kelly
On Aug 23, 2011, at 7:49 AM, Tony Bussieres wrote:
> Hi all,
>
Dimitry,
The protocol buffers client does not yet support secondary indexes. For the
moment you'll have to stick with the HTTP API.
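Over HTTP the query looks roughly like this (a sketch; hypothetical bucket and
index names):

    curl http://127.0.0.1:8098/buckets/mybucket/index/field1_bin/val1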
Kelly
On Aug 12, 2011, at 12:22 PM, Dimitry D wrote:
> I've pushed last version from github, set backend to index and tried this
> code:
>
>
> Obj = riakc_obj
Hi Dimitry. Looks like you are correct. Key listing works with eleveldb, but
bucket listing does not seem to be working. The master branch is under a lot of
development at the moment and there is already a fix for this under review. So
it should be resolved soon.
Kelly
On Aug 12, 2011, at 6:
Senthilkumar,
I just tried some secondary index queries via Erlang and it's working fine for
me. Getting the format just right can be a little tricky at first so here is an
example. Say you have a bucket called mybucket and an index called field1_bin. An
example query for that index in Erlang woul
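A sketch of what such a query can look like from an attached node console (my
own illustration using the internal client; the exact riak_client API may
differ by release):

    {ok, C} = riak:local_client(),
    {ok, Keys} = C:get_index(<<"mybucket">>,
                             {eq, <<"field1_bin">>, <<"val1">>}).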
Robert,
I would start by trying to run "bin/riak console" and see if you are then able
to see any errors displayed on the erlang console.
Kelly
On Aug 10, 2011, at 7:49 AM, Robert Leftwich wrote:
> I'm having problems running riak on a Sun box, system details are as
> follows:
>
> SunOS 5.11
Craig,
The default backend is Bitcask and if you want to use the indexes you'll need
to look in etc/app.config in the release directory and change the value for the
storage_backend setting to riak_kv_index_backend instead of
riak_kv_bitcask_backend. I suspect that's the problem. Cheers.
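That is, in etc/app.config (a sketch):

    {riak_kv, [
        %% was: riak_kv_bitcask_backend
        {storage_backend, riak_kv_index_backend}
    ]}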
Kelly
oblem is?
> I would appreciate any kind of help.
> Cheers,
> Maria
>
--
Kelly McLaughlin
Engineer
Basho Technologies, Inc.
ke...
ith returning keydata?
> Anybody else seen this before?
>
> --Kyle
>
--
Kelly McLaughlin
E
MH,
Riak may not seem as fast in a single node configuration compared to things
like mongodb or others, but keep in mind the strengths of Riak are its
performance at scale and the ease with which it does scale. Nobody runs a
single node configuration in an actual production environment so keep tha
Muhammad,
I think the problem you're running into is that the value must be specified
as a byte array.
Try this:
RiakObject o = new RiakObject("mybucket", "mykey", "myvalue".getBytes());
I think that should do it.
Kelly
On Wed, Mar 30, 2011 at 1:09 PM, Muhammad Yousaf
wrot