Hi Martin,
Thank you for sharing the detailed information.
On Wed, Sep 4, 2013 at 11:20 PM, Martin Alpers wrote:
> * downloading from the first node stopped at exactly 3MB, i.e. 3145728 bytes
> * downloading from the second node stopped at exactly 22MB
> [snip]
I also tried your URL and found sudde
Martin,
Thank you for more information.
I found a misconfiguration in riak's app.config. The riak backend for
Riak CS must be riak_cs_kv_multi_backend (note the _cs_), the custom
riak_kv backend for Riak CS. The "s3cmd ls" error was caused by this.
As for the MB-boundary stop, the issue
Martin,
I think you have already read this page:
http://docs.basho.com/riakcs/latest/cookbooks/configuration/Configuring-Riak/#Setting-up-the-Proper-Riak-Backend
As it says, you can use "add_paths" to set the load path.
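For reference, a minimal app.config sketch (the ebin path and version are
placeholders; adjust them to your riak-cs install):

  {riak_kv, [
      %% make Riak CS backend modules loadable by riak
      {add_paths, ["/usr/lib/riak-cs/lib/riak_cs-X.Y.Z/ebin"]},
      {storage_backend, riak_cs_kv_multi_backend}
      %% ... plus the multi backend definitions from the page above ...
  ]}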
--
Shunichi Shinohara
Basho Japan KK
On Wed, Sep 11, 2013 at 2:05
Hi Toby,
There is a rarely used option "disable_local_bucket_check" in Riak CS.
I don't know whether it solves your case, but let me mention it.
To use it, first set it in app.config of riak-cs (or
application:set_env/3 in shell),
{riak_cs, [
    ...
    {disable_local_bucket_check, true},
    ...
]}
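Or, using application:set_env/3 from an attached shell as mentioned above:

  application:set_env(riak_cs, disable_local_bucket_check, true).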
Hi Adhi,
Could you please specify the version of Riak CS?
riak-cs-gc batch just triggers GC and does not wait for its completion.
After GC finishes, there will be a log line in console.log that looks like:
Finished garbage collection: 0 seconds, 1 batch_count, 0 batch_skips, 1 manif_count, 1 block_count
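A sketch of triggering GC and watching for that line (log path assumed;
adjust to your install):

  riak-cs-gc batch
  # completion is reported in console.log, not on the command line:
  tail -f /var/log/riak-cs/console.log | grep "Finished garbage collection"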
It seems GC works well :)
> I try to delete those file and check the size on the server won't reduce
If you care about disk usage on the server that riak is running on,
there is another factor: you need backend compaction to reduce disk usage.
It will require putting/deleting many objects or larg
Yes. Riak CS 1.5.1 is tested with Riak 1.4.10.
Shino
On Tue, Oct 21, 2014 at 10:34 PM, Toby Corkindale wrote:
> My understanding was that Riak CS was still recommending that it is
> run against Riak 1.4.10, not Riak 2.0?
>
> On 21 October 2014 17:10, Seth Thomas wrote:
>> The repository builds
Hi Sellmy,
New versions of s3cmd use AWS v4 authentication [1], but Riak CS
does not support it yet [2].
As a workaround, please add the following line to your .s3cfg file:
signature_v2 = True
[1] https://github.com/s3tools/s3cmd/issues/402
[2] https://github.com/basho/riak_cs/issues/897
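For context, the option goes in the main section of ~/.s3cfg; a minimal
sketch (other keys elided):

  [default]
  access_key = YOUR-ACCESS-KEY
  secret_key = YOUR-SECRET-KEY
  signature_v2 = True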
Thanks,
Shino
Hi Niels,
Thank you for your interest in Riak CS.
Some questions about 400 - InvalidDigest:
- Can you confirm which MD5 was correct for the log:
2015-02-11 16:34:17.854 [debug]
<0.23568.18>@riak_cs_put_fsm:is_digest_valid:326
Calculated = <<"pIFX5fpeo7+sPPNjtSBWBg==">>,
Reported = "0B
ngrep does not show some bytes. tcpdump can dump network data in pcap format.
ex: sudo tcpdump -s 65535 -w /tmp/out.pcap -i eth0 'port 8080'
--
Shunichi Shinohara
Basho Japan KK
On Tue, Mar 10, 2015 at 7:30 PM, Niels O wrote:
> Hello Shino,
>
> I was uploading the attache
@0.4.2
- script https://gist.github.com/shino/36f02377a687f8312631
Maybe a version difference of node or aws-sdk (?)
Thanks,
Shino
On Wed, Mar 11, 2015 at 11:13 AM, Shunichi Shinohara wrote:
> ngrep does not show some bytes. tcpdump can dump network data in pcap format.
>
> ex: sudo t
Congrats :)
Just my two cents,
> tcpdump 'host 172.16.3.21' -s 65535 -i eth0 > /opt/dump.pcap
tcpdump's option "-w file.pcap" is helpful because the dump contains
not only header information but also raw packet data.
How about "403 - AccessDenied" case? Is it also solved by version
up or still an issue
> now the 400 issue (files from 1024-8191K) is
> solved .. the 403 issue indeed is not yet solved (files > 8192K)
>
> so indeed still an issue :-(
>
> here a pcap of the 403 issue (with -w option this time :-)
> http://we.tl/AFhslBBhGo
>
> On Wed, Mar 11, 2015 at 8:02
prefix_multi and
cs_version will be available in the future [1]. They will reduce configuration
complexity, but please wait for a while :)
[1] https://github.com/basho/riak_kv/pull/1082
--
Shunichi Shinohara
Basho Japan KK
On Tue, Jun 2, 2015 at 2:55 PM, Toby Corkindale wrote:
> Hi
> I'm in the
Hi Roman,
FWIW, the -noinput option of erl [1] makes beam not read input and disables
the interactive shell. The runner scripts that rebar generates pass the extra
args of the console command on to erl (actually erlexec), so one can add the
option as:
riak console -noinput
Note: some features cannot be used, e.g.
Management/#Creating-a-User-Account
--
Shunichi Shinohara
Basho Japan KK
On Fri, Aug 21, 2015 at 10:00 AM, changmao wang wrote:
> Kazuhiro,
>
> Maybe that's not the key point. I'm using riak 1.4.2 and follow below docs
> to configure "s3cfg" file.
> http://docs.b
The result of "s3cmd ls" (aka, GET Service API) indicates there
is no bucket with name "stock":
> root@cluster-s3-hd1:~# s3cmd ls
> 2013-12-01 06:45 s3://test
Have you created it?
--
Shunichi Shinohara
Basho Japan KK
On Mon, Aug 24, 2015 at 10:14 AM, changm
that you USE?
- What is your host_bucket in s3cfg?
Also, please attach the s3cmd debug output AND the Riak CS console log for
the same time interval.
--
Shunichi Shinohara
Basho Japan KK
On Mon, Aug 24, 2015 at 10:42 AM, changmao wang wrote:
> I'm not sure who created it. This's a legacy p
Then, back to my first questions:
Could you provide the results of the following commands with s3cfg1?
- s3cmd ls
- s3cmd info s3://stock
From the log file, gc index queries timed out again and again.
Not sure, but it may be a subtle situation...
--
Shunichi Shinohara
Basho Japan KK
On Mon, Aug 24, 2015 at
executing
application:set_env(riak_cs, fold_objects_for_list_keys, true).
by attaching to the riak-cs node.
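For example (a sketch; the prompt and node name will differ per your setup):

  $ riak-cs attach
  (riak-cs@127.0.0.1)1> application:set_env(riak_cs, fold_objects_for_list_keys, true).
  ok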
For more information about it, please refer to the original PR [1].
[1] https://github.com/basho/riak_cs/pull/600
--
Shunichi Shinohara
Basho Japan KK
On Wed, Aug 26, 2015 at 2:04 PM, Stanislav Vlasov
wrote:
56 [error]
> <0.27146.26>@riak_cs_wm_common:maybe_create_user:223 Retrieval of user
> record for s3 failed. Reason: no_user_key
> 2015-08-27 17:39:49.249 [error]
> <0.27147.26>@riak_cs_wm_common:maybe_create_user:223 Retrieval of user
> record for s3 failed. Reason: no_us
Hi Gautam,
Sorry for the late response.
The function riak_cs_user:create_user() should be executed on the riak-cs node.
If you want to call it from an escript, you can use rpc:call to execute it
there, as in the sketch below.
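A minimal escript sketch (node name, cookie, and the create_user arguments
are assumptions; check your vm.args and the riak_cs_user module):

  #!/usr/bin/env escript
  %%! -name creator@127.0.0.1 -setcookie riak
  main([Name, Email]) ->
      %% run create_user on the riak-cs node itself
      Result = rpc:call('riak-cs@127.0.0.1', riak_cs_user, create_user,
                        [Name, Email]),
      io:format("~p~n", [Result]).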
Shino
On Sat, Sep 19, 2015 at 7:43 AM, Gautam Pulla
wrote:
> Hello,
>
>
>
> I’d like to create a riak-cs user wi
Hi Kent,
riak_cs_s3_passthru_auth is for internal use and does not work as an
auth_module.
It would be possible to make it work as an auth_module, but that would take
some refactoring and (maybe) some additional implementation.
As a workaround, you can use a bucket policy to permit GET Object and
PUT Object for
Hi Outback,
Sorry for the very late response.
It seems that riak-cs doesn't/can't communicate with the stanchion node.
There are a couple of possible reasons (see the sketch below):
- admin.key and admin.secret should be set identically in both riak-cs and stanchion
- stanchion_host should match the stanchion node's listener config
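A sketch of the relevant settings (key names assumed from 2.x-style config
files; values are placeholders):

  ## riak-cs.conf
  admin.key = ADMIN-KEY-PLACEHOLDER
  admin.secret = ADMIN-SECRET-PLACEHOLDER
  stanchion_host = 127.0.0.1:8085

  ## stanchion.conf -- same admin key/secret, listening where stanchion_host points
  admin.key = ADMIN-KEY-PLACEHOLDER
  admin.secret = ADMIN-SECRET-PLACEHOLDER
  listener = 127.0.0.1:8085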
Hi,
Sorry, but I don't have much to say about Dragon Disk.
If you can access Riak CS via the REST API, then I think
Riak CS is (almost) working well.
Hmm... the random items to investigate first are:
- Does Dragon Disk actually attempt to connect to Riak CS?
(instead of AWS S3)
- I
I asked some questions, but the error stack did not answer any of them.
The s/Dragon Disk/s3cmd/ version of my questions:
- Does s3cmd actually attempt to connect to Riak CS?
(instead of AWS S3)
- If yes, how does TCP communication go on?
- If TCP is ok, how is HTTP request/response?
- If HTTP is ok, what i
Thanks for the detailed information.
You got a 403, so it is either an authentication failure or an authorization
failure. The first thing to check is the StringToSign on both sides. s3cmd
with the debug flag shows the StringToSign on its "DEBUG: SignHeaders" line.
For Riak CS, you can see it by setting the log level to debug, like:
[debug] STS:
["G
Hi Gautam,
Hmm... it seems like a bug. List Objects fails when the "delimiter" query
parameter is empty, like:
http://foo.s3.amazonaws.com/?delimiter=
May I ask some questions?
- What client (s3cmd, s3curl, java sdk, etc...) do you use?
- Can you control the query parameter and remove it when its value is empty?
Hi Michael,
Sorry for the very late response.
I tried mecking the module in the erl shell.
When passing a list of expected arguments, the Pid in the actual call must
be the same term you passed in the expect call; otherwise function_clause
is thrown.
> Pid = self().
<0.43.0>
> meck:expect(riakc_pb_socket, get, [Pid, {<<"
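A fuller sketch of the same idea (bucket, key, and return value are made up):

  > Pid = self().
  <0.43.0>
  > meck:new(riakc_pb_socket, [non_strict]).
  ok
  > meck:expect(riakc_pb_socket, get, [Pid, <<"b">>, <<"k">>], {ok, fake_obj}).
  ok
  > riakc_pb_socket:get(Pid, <<"b">>, <<"k">>).
  {ok,fake_obj}
  > %% calling with any other first argument raises function_clause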
Hi Alberto,
I didn't look into the boto implementation, but I suspect that the COPY
Object API does NOT work between different S3-like systems.
The actual interface definition of the API is [1], and the source bucket/key
is just a string in the x-amz-copy-source header. The request went into the
system that in
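For illustration, the copy request looks roughly like this (names made up);
the x-amz-copy-source value gives the receiving system no way to reach a
different installation:

  PUT /target-key HTTP/1.1
  Host: target-bucket.s3.example.com
  x-amz-copy-source: /source-bucket/source-key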
Hi Dattaraj,
I'm not sure how the AWS SDK for JS works in detail, but I wonder whether
it's a good idea to include the
S3/CS bucket name in the endpoint string. In one example in the doc [1], the
endpoint does not have a bucket name part.
[1] http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/Endpoint.html
Thanks,
Shino
2015-
> the bucket url as endpoint - also tried setting s3endpoint
> to true. Same problem.
>
> Surprisingly the command line tool works fine.
>
> Regards,
> Dattaraj
> http://in.linkedin.com/in/dattarajrao
>
> -Original Message-
> From: Shunichi Shinohara
> Date: Mo
>
>
> Thanks for the update. Then, please let me ask some questions:
>
> - What was the actual error message?
> - Could you confirm your code / SDK generate network communication
> to Riak CS?
>
> Shino
>
> 2015-12-01 17:18 GMT+09:00 Dattaraj J Rao :
>> Thanks Shino
Hi Michael,
Could you provide the full output of "riak config generate -l debug", as
well as riak.conf and advanced.config?
Thanks,
Shino
2016-01-07 9:05 GMT+09:00 Michael Walsh :
> All,
>
> I'm trying to set up a RIak S2 instance to integrate with KV and I'm getting
> the following cuttlefish err
Hi John,
I tested Multipart Upload with aws-sdk-js with the patch you mentioned,
against riak_cs (s2); uploads finished without errors up to a 1GB object.
The environment is all local on my laptop, so latency is small.
The script I used is in [1].
As Luke mentioned, HAProxy would be the point to be
Hi Diego,
Sorry for the late reply. It's difficult, or rather tricky, to modify the key
of an existing user.
If your requirement can be fulfilled by getting the user from the existing
cluster and putting it into the newly created cluster, that's easier.
In the Riak KV sense (NOT the Riak S2/CS sense), a CS user is a Riak KV object