Re: s3cmd error: access to bucket was denied

2015-08-24 Thread Shunichi Shinohara
Then, back to my first questions:
Could you provide the results of the following commands with s3cfg1?
- s3cmd ls
- s3cmd info s3://stock

From the log file, gc index queries timed out again and again.
Not sure, but it may be a subtle situation...

--
Shunichi Shinohara
Basho Japan KK


On Mon, Aug 24, 2015 at 11:03 AM, changmao wang  wrote:
> 1. root@cluster1-hd10:~# grep cs_root_host /etc/riak-cs/app.config
>   {cs_root_host, "api2.cloud-datayes.com"},
> root@cluster1-hd10:~# grep host_base .s3cfg
> host_base = api2.cloud-datayes.com
> root@cluster1-hd10:~# grep host_base s3cfg1
> host_base = api2.cloud-datayes.com
>
> 2. please check attached file for "s3cmd -d" output and
> '/etc/riak-cs/console.log'.
>
>
> On Mon, Aug 24, 2015 at 9:54 AM, Shunichi Shinohara  wrote:
>>
>> What is "api2.cloud-datayes.com"? Your s3cfg attached at the first one
>> in this email thread
>> does not include it. Please make sure you provide correct / consistent
>> information to
>> debug the issue.
>>
>> - What is your riak cs config "cs_root_host"?
>> - What is your host_base in s3cfg that you USE?
>> - What is your host_bucket in s3cfg?
>>
>> Also, please attach s3cmd debug output AND riak cs console log at the same
>> time
>> interval.
>> --
>> Shunichi Shinohara
>> Basho Japan KK
>>
>>
>> On Mon, Aug 24, 2015 at 10:42 AM, changmao wang 
>> wrote:
>> > I'm not sure who created it. This is a legacy production system.
>> >
>> > Just now, I used another "s3cfg" file to access it. Below is my output:
>> > root@cluster1-hd10:~# s3cmd -c s3cfg1 info
>> > s3://stock/XSHE/0/50/2008/XSHE-50-20080102
>> > s3://stock/XSHE/0/50/2008/XSHE-50-20080102 (object):
>> >File size: 397535
>> >Last mod:  Thu, 05 Dec 2013 03:19:00 GMT
>> >MIME type: binary/octet-stream
>> >MD5 sum:   feb2609ecfc9bb21549f2401a5c9477d
>> >ACL:   stockwrite: FULL_CONTROL
>> >ACL:   *anon*: READ
>> >URL:
>> > http://stock.s3.amazonaws.com/XSHE/0/50/2008/XSHE-50-20080102
>> > root@cluster1-hd10:~# s3cmd -c s3cfg1 ls
>> > s3://stock/XSHE/0/50/2008/XSHE-50-20080102 -d
>> > DEBUG: ConfigParser: Reading file 's3cfg1'
>> > DEBUG: ConfigParser: access_key->TE...17_chars...0
>> > DEBUG: ConfigParser: bucket_location->US
>> > DEBUG: ConfigParser: cloudfront_host->cloudfront.amazonaws.com
>> > DEBUG: ConfigParser: cloudfront_resource->/2010-07-15/distribution
>> > DEBUG: ConfigParser: default_mime_type->binary/octet-stream
>> > DEBUG: ConfigParser: delete_removed->False
>> > DEBUG: ConfigParser: dry_run->False
>> > DEBUG: ConfigParser: encoding->UTF-8
>> > DEBUG: ConfigParser: encrypt->False
>> > DEBUG: ConfigParser: follow_symlinks->False
>> > DEBUG: ConfigParser: force->False
>> > DEBUG: ConfigParser: get_continue->False
>> > DEBUG: ConfigParser: gpg_command->/usr/bin/gpg
>> > DEBUG: ConfigParser: gpg_decrypt->%(gpg_command)s -d --verbose
>> > --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o
>> > %(output_file)s %(input_file)s
>> > DEBUG: ConfigParser: gpg_encrypt->%(gpg_command)s -c --verbose
>> > --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o
>> > %(output_file)s %(input_file)s
>> > DEBUG: ConfigParser: gpg_passphrase->...-3_chars...
>> > DEBUG: ConfigParser: guess_mime_type->True
>> > DEBUG: ConfigParser: host_base->api2.cloud-datayes.com
>> > DEBUG: ConfigParser: host_bucket->%(bucket)s.api2.cloud-datayes.com
>> > DEBUG: ConfigParser: human_readable_sizes->False
>> > DEBUG: ConfigParser: list_md5->False
>> > DEBUG: ConfigParser: log_target_prefix->
>> > DEBUG: ConfigParser: preserve_attrs->True
>> > DEBUG: ConfigParser: progress_meter->True
>> > DEBUG: ConfigParser: proxy_host->10.21.136.81
>> > DEBUG: ConfigParser: proxy_port->8080
>> > DEBUG: ConfigParser: recursive->False
>> > DEBUG: ConfigParser: recv_chunk->4096
>> > DEBUG: ConfigParser: reduced_redundancy->False
>> > DEBUG: ConfigParser: secret_key->Hk...37_chars...=
>> > DEBUG: ConfigParser: send_chunk->4096
>> > DEBUG: ConfigParser: simpledb_host->sdb.amazonaws.com
>> > DEBUG: ConfigParser: skip_existing->False
>> > DEBUG: ConfigParser: socket_timeout->100
>> > DEBUG: ConfigParser: urlencoding_mode->normal
>> > DEBUG: ConfigParser: use_https->False
>> > DEBUG: ConfigParser: verbosity->WARNING
>> > DEBUG: Updating Config.Config encoding -> UTF-8
>> > DEBUG: Updating Config.Config follow_symlinks -> False
>> > DEBUG: Updating Config.Config verbosity -> 10
>> > DEBUG: Unicodising 'ls' using UTF-8
>> > DEBUG: Unicodising 's3://stock/XSHE/0/50/2008/XSHE-50-20080102'
>> > using UTF-8
>> > DEBUG: Command: ls
>> > DEBUG: Bucket 's3://stock':
>> > DEBUG: String 'XSHE/0/50/2008/XSHE-50-20080102' encoded to
>> > 'XSHE/0/50/2008/XSHE-50-20080102'
>> > DEBUG: SignHeaders: 'GET\n\n\n\nx-amz-date:Mon, 24 Aug 2015 01:36:01
>> > +\n/stock/'
>> > DEBUG: CreateRequest: resource[uri]=/
>> > DEBUG: SignHeaders: 'GET\n\n\n\nx-amz-date:Mon, 24 Aug 2015 01:36:01
>> > +\n/stock/'
>> > DEBUG: Processing reque
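The SignHeaders lines in the debug output above show the AWS signature-v2 string-to-sign that s3cmd builds; the request signature is then an HMAC-SHA1 over that string. A minimal sketch of that computation (the secret key and the `+0000` timezone below are made-up/assumed example values, not taken from the thread):

```python
import base64
import hmac
from hashlib import sha1

def sign_v2(secret_key, string_to_sign):
    # AWS signature v2 as used by s3cmd: base64(HMAC-SHA1(secret, string-to-sign))
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(), sha1).digest()
    return base64.b64encode(digest).decode()

# Shape of the string-to-sign seen in the SignHeaders debug lines above
# (method, content-MD5, content-type, date headers, canonical resource):
string_to_sign = ("GET\n\n\n\n"
                  "x-amz-date:Mon, 24 Aug 2015 01:36:01 +0000\n"
                  "/stock/")
sig = sign_v2("not-a-real-secret", string_to_sign)  # 28-char base64 digest
```

One common cause of access-denied style errors is the client and server canonicalizing different resources (e.g. because of a host_base / cs_root_host mismatch), so their signatures disagree.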

Re: Rolling upgrade from 1.4.2 to 2.0.5

2015-08-24 Thread Sujay Mansingh
Hi guys

I am looking at the instructions here:
http://docs.basho.com/riak/1.4.2/ops/running/nodes/renaming/

However, these are instructions for renaming an existing cluster ‘in-place’.

What I have is an existing 5 node cluster.
I have brought up a completely new (and separate) 5 node cluster.
I am copying over the bitcask data and /var/lib/riak/ring directories from
each existing node to the new cluster. (i.e. from existing-01 to new-01,
existing-02 to new-02, etc)

The instructions above mention joining the cluster, but I don’t wish to do
that (as it would join a new node to the existing cluster).

At the moment I have not formed the new cluster (all 5 riak nodes are
standalone).
What do I need to do in order to rename the ring on the nodes in the new
cluster?

Sujay

On Tue, Aug 11, 2015 at 4:47 PM, Dmitri Zagidulin 
wrote:

> Hi Sujay,
>
> Yes, riak.conf is a riak 2 thing. If you're running 1.4, you would change
> the -setcookie in vm.args, exactly.
>
> And no, the node name doesn't have to match the cookie. The two are
> independent.
>
> On Tue, Aug 11, 2015 at 3:22 PM, Sujay Mansingh  wrote:
>
>> Oh and also, does the first part of the riak node name have to match the
>> cookie?
>> I.e. If I change the cookie to riaktest, does the node name have to be
>> riaktest@{{ ip_addr }} ?
>>
>>
>> On Tuesday, August 11, 2015, Sujay Mansingh  wrote:
>>
>>> Thanks Dmitri
>>>
>>> When you say the cookie must be modified in /etc/riak/riak.conf, is
>>> that a riak 2 thing?
>>> I can see a -setcookie riak line in /etc/riak/vm.args, is that what you
>>> mean?
>>>
>>> Sujay
>>>
>>>
>>> On Thu, Aug 6, 2015 at 2:11 PM, Dmitri Zagidulin 
>>> wrote:
>>>
 Sujay,

 You're right - the best way to verify the backup is to bring up a
 separate 5 node cluster, and restore it from the backup files.
 The procedure is slightly more involved than untar-ing, though. The
 backed up ring directories from the original cluster will contain the node
 ids (which rely on their IP addresses, etc). Since the new re-hydrated
 cluster is likely to have different IPs from the original one, there's a
 few more steps you need to take.

 The procedure of standing up a new cluster from backups is outlined
 here:
 http://docs.basho.com/riak/latest/ops/running/nodes/renaming/#Clusters-from-Backups

 There is one other important step to remember. If, by any chance,
 you're bringing up the new cluster on the same network as the old cluster
 is running on, be sure to modify the Erlang cookie in the new cluster (so
 that as far as Erlang is concerned, they're existing on different 
 networks).
 The Erlang cookie must be modified in /etc/riak/riak.conf so the new
 cluster does not conflict with any existing cluster.

 Let us know if you have any further questions.

 Dmitri

 On Thu, Aug 6, 2015 at 8:40 AM, Sujay Mansingh  wrote:

> Thanks Magnus & John.
>
> Yes certainly I will test it on a separate cluster first! Which is
> related to another question I have.
>
> If I want to backup I can archive the directories on the nodes as
> described here:
> http://docs.basho.com/riak/latest/ops/running/backups/#OS-Specific-Directory-Locations
>
> But in order to verify the backup (or perform operations on the
> cluster in 'offline' mode), can I simply bring up a separate 5 node 
> cluster
> and untar the backup files?
> (Probably not the /etc/riak directory, but the data and ring
> directories.)
>
> I want to do that, and then try adding a riak 2.0.6 node to the test
> riak 1.4.2 cluster and see if things are ok.
>
> Thanks,
>
> Sujay
>
> On Thu, Aug 6, 2015 at 9:31 AM, Magnus Kessler 
> wrote:
>
>>
>>
>> On 5 August 2015 at 18:53, John Daily  wrote:
>>
>>> That’s correct: upgrades to either 2.0.x or 2.1.x are supported from
>>> the 1.4 series.
>>>
>>> Side note: I definitely recommend testing the upgrade process in a
>>> QA environment first.
>>>
>>> -John
>>>
>>
>> Hi Sujay,
>>
>> The latest release in the 2.0 series is 2.0.6 [0]. Please use this
>> version if you upgrade to 2.0.
>>
>> Please also review the documentation about the new 'riak.conf'
>> configuration file [1][2]. 2.x installations should use the new format, 
>> but
>> you can continue to use the 'app.config' format from Riak 1.x. To 
>> maintain
>> complete backwards compatibility when using 'app.config', please add
>>
>> [{default_bucket_props,
>>    [{allow_mult, false},    %% have Riak resolve conflicts and do not return siblings
>>     {dvv_enabled, false}]}, %% use vector clocks for conflict resolution
>>   %% other settings
>> ]
>>
>> to 'app.config'. This will ensure that your existing application
>> continues to work exactly as before. When usi

Re: Rolling upgrade from 1.4.2 to 2.0.5

2015-08-24 Thread Dmitri Zagidulin
Hi Sujay,

This is where we get into the fact that maintaining docs across many
versions is a hard problem :)

You'll want to follow the instructions laid out in
http://docs.basho.com/riak/latest/ops/running/nodes/renaming/#Clusters-from-Backups
(the Clusters from Backups section, specifically). It outlines how to rename
the ring on a new cluster restored from backup. (And keep in mind what I said
earlier about changing the Erlang cookie in vm.args.)
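On Riak 1.4 the cookie (and node name) live in vm.args rather than riak.conf; a sketch of the two relevant lines, with placeholder values (the cookie name riaktest comes from earlier in the thread; the node name is an example):

```
## /etc/riak/vm.args on each node of the new cluster
-name riak@new-01.example.com
-setcookie riaktest
```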

Since it's written for Riak version 2, you'll want to cross-reference it with
the slightly older version of the doc that you're looking at,
http://docs.basho.com/riak/1.4.2/ops/running/nodes/renaming/ . The procedure
should be largely the same; just the names of the config files are different.


On Monday, August 24, 2015, Sujay Mansingh  wrote:

> Hi guys
>
> I am looking at the instructions here:
> http://docs.basho.com/riak/1.4.2/ops/running/nodes/renaming/
>
> However, these are instructions for renaming an existing cluster
> ‘in-place’.
>
> What I have is an existing 5 node cluster.
> I have brought up a completely new (and separate) 5 node cluster.
> I am copying over the bitcask data and /var/lib/riak/ring directories
> from each existing node to the new cluster. (i.e. from existing-01 to
> new-01, existing-02 to new-02, etc)
>
> The instructions above mention to join the cluster, but I don’t wish to do
> that (as it would join a new node to the existing cluster).
>
> At the moment I have not formed the new cluster (all 5 riak nodes are
> standalone).
> What do I need to do in order to rename the ring on the nodes in the new
> cluster?
>
> Sujay
>
>
> On Tue, Aug 11, 2015 at 4:47 PM, Dmitri Zagidulin  > wrote:
>
>> Hi Sujay,
>>
>> Yes, riak.conf is a riak 2 thing. If you're running 1.4, you would change
>> the -setcookie in vm.args, exactly.
>>
>> And no, the node name doesn't have to match the cookie. The two are
>> independent.
>>
>> On Tue, Aug 11, 2015 at 3:22 PM, Sujay Mansingh > > wrote:
>>
>>> Oh and also, does the first part of the riak node name have to match the
>>> cookie?
>>> I.e. If I change the cookie to riaktest, does the node name have to be
>>> riaktest@{{ ip_addr }} ?
>>>
>>>
>>> On Tuesday, August 11, 2015, Sujay Mansingh >> > wrote:
>>>
 Thanks Dmitri

 When you say the cookie must be modified in /etc/riak/riak.conf, is
 that a riak 2 thing?
 I can see a -setcookie riak line in /etc/riak/vm.args, is that what
 you mean?

 Sujay

 On Thu, Aug 6, 2015 at 2:11 PM, Dmitri Zagidulin 
 wrote:

> Sujay,
>
> You're right - the best way to verify the backup is to bring up a
> separate 5 node cluster, and restore it from the backup files.
> The procedure is slightly more involved than untar-ing, though. The
> backed up ring directories from the original cluster will contain the node
> ids (which rely on their IP addresses, etc). Since the new re-hydrated
> cluster is likely to have different IPs from the original one, there's a
> few more steps you need to take.
>
> The procedure of standing up a new cluster from backups is outlined
> here:
> http://docs.basho.com/riak/latest/ops/running/nodes/renaming/#Clusters-from-Backups
>
> There is one other important step to remember. If, by any chance,
> you're bringing up the new cluster on the same network as the old cluster
> is running on, be sure to modify the Erlang cookie in the new cluster (so
> that as far as Erlang is concerned, they're existing on different 
> networks).
> The Erlang cookie must be modified in /etc/riak/riak.conf so the new
> cluster does not conflict with any existing cluster.
>
> Let us know if you have any further questions.
>
> Dmitri
>
> On Thu, Aug 6, 2015 at 8:40 AM, Sujay Mansingh 
> wrote:
>
>> Thanks Magnus & John.
>>
>> Yes certainly I will test it on a separate cluster first! Which is
>> related to another question I have.
>>
>> If I want to backup I can archive the directories on the nodes as
>> described here:
>> http://docs.basho.com/riak/latest/ops/running/backups/#OS-Specific-Directory-Locations
>>
>> But in order to verify the backup (or perform operations on the
>> cluster in 'offline' mode), can I simply bring up a separate 5 node 
>> cluster
>> and untar the backup files?
>> (Probably not the /etc/riak directory, but the data and ring
>> directories.)
>>
>> I want to do that, and then try adding a riak 2.0.6 node to the test
>> riak 1.4.2 cluster and see if things are ok.
>>
>> Thanks,
>>
>> Sujay
>>
>> On Thu, Aug 6, 2015 at 9:31 AM, Magnus Kessler 
>> wrote:
>>
>>>
>>>
>>> On 5 August 2015 at 18:53, John Daily  wrote:
>>>
 That’s correct: upgrades to either 2.0.x or 2.1.x are supported
 from the 1.4 series.

 Side 

Re: s3cmd error: access to bucket was denied

2015-08-24 Thread changmao wang
1. root@cluster1-hd10:~# grep cs_root_host /etc/riak-cs/app.config
  {cs_root_host, "api2.cloud-datayes.com"},
root@cluster1-hd10:~# grep host_base .s3cfg
host_base = api2.cloud-datayes.com
root@cluster1-hd10:~# grep host_base s3cfg1
host_base = api2.cloud-datayes.com

2. Please check the attached file for the "s3cmd -d" output and
'/etc/riak-cs/console.log'.
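The consistency check behind item 1 can be scripted; a minimal sketch with the hostnames quoted above hard-coded (the bucket name stock is taken from this thread):

```python
# Sketch: verify that s3cmd's endpoint settings agree with Riak CS's
# cs_root_host. The values below are the ones quoted in this thread.
cs_root_host = "api2.cloud-datayes.com"   # from /etc/riak-cs/app.config
s3cfg = {
    "host_base": "api2.cloud-datayes.com",
    "host_bucket": "%(bucket)s.api2.cloud-datayes.com",
}

def endpoints_consistent(cs_root_host, cfg):
    # host_base must equal cs_root_host, and host_bucket must expand to
    # <bucket>.<cs_root_host> for virtual-host style requests.
    bucket_host = cfg["host_bucket"] % {"bucket": "stock"}
    return (cfg["host_base"] == cs_root_host
            and bucket_host == "stock." + cs_root_host)

print(endpoints_consistent(cs_root_host, s3cfg))  # expect: True
```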


On Mon, Aug 24, 2015 at 9:54 AM, Shunichi Shinohara  wrote:

> What is "api2.cloud-datayes.com"? Your s3cfg attached at the first one
> in this email thread
> does not include it. Please make sure you provide correct / consistent
> information to
> debug the issue.
>
> - What is your riak cs config "cs_root_host"?
> - What is your host_base in s3cfg that you USE?
> - What is your host_bucket in s3cfg?
>
> Also, please attach s3cmd debug output AND riak cs console log at the same
> time
> interval.
> --
> Shunichi Shinohara
> Basho Japan KK
>
>
> On Mon, Aug 24, 2015 at 10:42 AM, changmao wang 
> wrote:
> > I'm not sure who created it. This is a legacy production system.
> >
> > Just now, I used another "s3cfg" file to access it. Below is my output:
> > root@cluster1-hd10:~# s3cmd -c s3cfg1 info
> > s3://stock/XSHE/0/50/2008/XSHE-50-20080102
> > s3://stock/XSHE/0/50/2008/XSHE-50-20080102 (object):
> >File size: 397535
> >Last mod:  Thu, 05 Dec 2013 03:19:00 GMT
> >MIME type: binary/octet-stream
> >MD5 sum:   feb2609ecfc9bb21549f2401a5c9477d
> >ACL:   stockwrite: FULL_CONTROL
> >ACL:   *anon*: READ
> >URL:
> > http://stock.s3.amazonaws.com/XSHE/0/50/2008/XSHE-50-20080102
> > root@cluster1-hd10:~# s3cmd -c s3cfg1 ls
> > s3://stock/XSHE/0/50/2008/XSHE-50-20080102 -d
> > DEBUG: ConfigParser: Reading file 's3cfg1'
> > DEBUG: ConfigParser: access_key->TE...17_chars...0
> > DEBUG: ConfigParser: bucket_location->US
> > DEBUG: ConfigParser: cloudfront_host->cloudfront.amazonaws.com
> > DEBUG: ConfigParser: cloudfront_resource->/2010-07-15/distribution
> > DEBUG: ConfigParser: default_mime_type->binary/octet-stream
> > DEBUG: ConfigParser: delete_removed->False
> > DEBUG: ConfigParser: dry_run->False
> > DEBUG: ConfigParser: encoding->UTF-8
> > DEBUG: ConfigParser: encrypt->False
> > DEBUG: ConfigParser: follow_symlinks->False
> > DEBUG: ConfigParser: force->False
> > DEBUG: ConfigParser: get_continue->False
> > DEBUG: ConfigParser: gpg_command->/usr/bin/gpg
> > DEBUG: ConfigParser: gpg_decrypt->%(gpg_command)s -d --verbose
> > --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o
> > %(output_file)s %(input_file)s
> > DEBUG: ConfigParser: gpg_encrypt->%(gpg_command)s -c --verbose
> > --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o
> > %(output_file)s %(input_file)s
> > DEBUG: ConfigParser: gpg_passphrase->...-3_chars...
> > DEBUG: ConfigParser: guess_mime_type->True
> > DEBUG: ConfigParser: host_base->api2.cloud-datayes.com
> > DEBUG: ConfigParser: host_bucket->%(bucket)s.api2.cloud-datayes.com
> > DEBUG: ConfigParser: human_readable_sizes->False
> > DEBUG: ConfigParser: list_md5->False
> > DEBUG: ConfigParser: log_target_prefix->
> > DEBUG: ConfigParser: preserve_attrs->True
> > DEBUG: ConfigParser: progress_meter->True
> > DEBUG: ConfigParser: proxy_host->10.21.136.81
> > DEBUG: ConfigParser: proxy_port->8080
> > DEBUG: ConfigParser: recursive->False
> > DEBUG: ConfigParser: recv_chunk->4096
> > DEBUG: ConfigParser: reduced_redundancy->False
> > DEBUG: ConfigParser: secret_key->Hk...37_chars...=
> > DEBUG: ConfigParser: send_chunk->4096
> > DEBUG: ConfigParser: simpledb_host->sdb.amazonaws.com
> > DEBUG: ConfigParser: skip_existing->False
> > DEBUG: ConfigParser: socket_timeout->100
> > DEBUG: ConfigParser: urlencoding_mode->normal
> > DEBUG: ConfigParser: use_https->False
> > DEBUG: ConfigParser: verbosity->WARNING
> > DEBUG: Updating Config.Config encoding -> UTF-8
> > DEBUG: Updating Config.Config follow_symlinks -> False
> > DEBUG: Updating Config.Config verbosity -> 10
> > DEBUG: Unicodising 'ls' using UTF-8
> > DEBUG: Unicodising 's3://stock/XSHE/0/50/2008/XSHE-50-20080102'
> > using UTF-8
> > DEBUG: Command: ls
> > DEBUG: Bucket 's3://stock':
> > DEBUG: String 'XSHE/0/50/2008/XSHE-50-20080102' encoded to
> > 'XSHE/0/50/2008/XSHE-50-20080102'
> > DEBUG: SignHeaders: 'GET\n\n\n\nx-amz-date:Mon, 24 Aug 2015 01:36:01
> > +\n/stock/'
> > DEBUG: CreateRequest: resource[uri]=/
> > DEBUG: SignHeaders: 'GET\n\n\n\nx-amz-date:Mon, 24 Aug 2015 01:36:01
> > +\n/stock/'
> > DEBUG: Processing request, please wait...
> > DEBUG: get_hostname(stock): stock.api2.cloud-datayes.com
> > DEBUG: format_uri():
> >
> http://stock.api2.cloud-datayes.com/?prefix=XSHE/0/50/2008/XSHE-50-20080102&delimiter=/
> > WARNING: Retrying failed request:
> > /?prefix=XSHE/0/50/2008/XSHE-50-20080102&delimiter=/ ('')
> > WARNING: Waiting 3 sec...
> > DEBUG: SignHeaders: 'GET\n\n\n\nx-amz-date:Mon, 24 Aug 2015 01:37:05
> > +\n/stock/'
> > DE
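The "Retrying failed request ... Waiting 3 sec" / "Waiting 6 sec" warnings in the quoted output are s3cmd's doubling retry delay; a minimal sketch of that backoff schedule (the total retry count here is an assumption, not from the thread):

```python
def backoff_delays(first_delay=3, retries=5):
    # Doubling retry delays, matching the 3 s and 6 s waits in the log above.
    delays, delay = [], first_delay
    for _ in range(retries):
        delays.append(delay)
        delay *= 2
    return delays

print(backoff_delays())  # [3, 6, 12, 24, 48]
```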

Re: s3cmd error: access to bucket was denied

2015-08-24 Thread changmao wang
I'm not sure who created it. This is a legacy production system.

Just now, I used another "s3cfg" file to access it. Below is my output:
root@cluster1-hd10:~# s3cmd -c s3cfg1 info
s3://stock/XSHE/0/50/2008/XSHE-50-20080102
s3://stock/XSHE/0/50/2008/XSHE-50-20080102 (object):
   File size: 397535
   Last mod:  Thu, 05 Dec 2013 03:19:00 GMT
   MIME type: binary/octet-stream
   MD5 sum:   feb2609ecfc9bb21549f2401a5c9477d
   ACL:   stockwrite: FULL_CONTROL
   ACL:   *anon*: READ
   URL:
http://stock.s3.amazonaws.com/XSHE/0/50/2008/XSHE-50-20080102
root@cluster1-hd10:~# s3cmd -c s3cfg1 ls
s3://stock/XSHE/0/50/2008/XSHE-50-20080102 -d
DEBUG: ConfigParser: Reading file 's3cfg1'
DEBUG: ConfigParser: access_key->TE...17_chars...0
DEBUG: ConfigParser: bucket_location->US
DEBUG: ConfigParser: cloudfront_host->cloudfront.amazonaws.com
DEBUG: ConfigParser: cloudfront_resource->/2010-07-15/distribution
DEBUG: ConfigParser: default_mime_type->binary/octet-stream
DEBUG: ConfigParser: delete_removed->False
DEBUG: ConfigParser: dry_run->False
DEBUG: ConfigParser: encoding->UTF-8
DEBUG: ConfigParser: encrypt->False
DEBUG: ConfigParser: follow_symlinks->False
DEBUG: ConfigParser: force->False
DEBUG: ConfigParser: get_continue->False
DEBUG: ConfigParser: gpg_command->/usr/bin/gpg
DEBUG: ConfigParser: gpg_decrypt->%(gpg_command)s -d --verbose
--no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o
%(output_file)s %(input_file)s
DEBUG: ConfigParser: gpg_encrypt->%(gpg_command)s -c --verbose
--no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o
%(output_file)s %(input_file)s
DEBUG: ConfigParser: gpg_passphrase->...-3_chars...
DEBUG: ConfigParser: guess_mime_type->True
DEBUG: ConfigParser: host_base->api2.cloud-datayes.com
DEBUG: ConfigParser: host_bucket->%(bucket)s.api2.cloud-datayes.com
DEBUG: ConfigParser: human_readable_sizes->False
DEBUG: ConfigParser: list_md5->False
DEBUG: ConfigParser: log_target_prefix->
DEBUG: ConfigParser: preserve_attrs->True
DEBUG: ConfigParser: progress_meter->True
DEBUG: ConfigParser: proxy_host->10.21.136.81
DEBUG: ConfigParser: proxy_port->8080
DEBUG: ConfigParser: recursive->False
DEBUG: ConfigParser: recv_chunk->4096
DEBUG: ConfigParser: reduced_redundancy->False
DEBUG: ConfigParser: secret_key->Hk...37_chars...=
DEBUG: ConfigParser: send_chunk->4096
DEBUG: ConfigParser: simpledb_host->sdb.amazonaws.com
DEBUG: ConfigParser: skip_existing->False
DEBUG: ConfigParser: socket_timeout->100
DEBUG: ConfigParser: urlencoding_mode->normal
DEBUG: ConfigParser: use_https->False
DEBUG: ConfigParser: verbosity->WARNING
DEBUG: Updating Config.Config encoding -> UTF-8
DEBUG: Updating Config.Config follow_symlinks -> False
DEBUG: Updating Config.Config verbosity -> 10
DEBUG: Unicodising 'ls' using UTF-8
DEBUG: Unicodising 's3://stock/XSHE/0/50/2008/XSHE-50-20080102'
using UTF-8
DEBUG: Command: ls
DEBUG: Bucket 's3://stock':
DEBUG: String 'XSHE/0/50/2008/XSHE-50-20080102' encoded to
'XSHE/0/50/2008/XSHE-50-20080102'
DEBUG: SignHeaders: 'GET\n\n\n\nx-amz-date:Mon, 24 Aug 2015 01:36:01
+\n/stock/'
DEBUG: CreateRequest: resource[uri]=/
DEBUG: SignHeaders: 'GET\n\n\n\nx-amz-date:Mon, 24 Aug 2015 01:36:01
+\n/stock/'
DEBUG: Processing request, please wait...
DEBUG: get_hostname(stock): stock.api2.cloud-datayes.com
DEBUG: format_uri():
http://stock.api2.cloud-datayes.com/?prefix=XSHE/0/50/2008/XSHE-50-20080102&delimiter=/
WARNING: Retrying failed request:
/?prefix=XSHE/0/50/2008/XSHE-50-20080102&delimiter=/ ('')
WARNING: Waiting 3 sec...
DEBUG: SignHeaders: 'GET\n\n\n\nx-amz-date:Mon, 24 Aug 2015 01:37:05
+\n/stock/'
DEBUG: Processing request, please wait...
DEBUG: get_hostname(stock): stock.api2.cloud-datayes.com
DEBUG: format_uri():
http://stock.api2.cloud-datayes.com/?prefix=XSHE/0/50/2008/XSHE-50-20080102&delimiter=/
WARNING: Retrying failed request:
/?prefix=XSHE/0/50/2008/XSHE-50-20080102&delimiter=/ ('')
WARNING: Waiting 6 sec...
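The get_hostname and format_uri debug lines above show how s3cmd expands the host_bucket template into a virtual-host-style URL; a rough reconstruction (not s3cmd's actual implementation):

```python
# Rough reconstruction of the URL building seen in the debug output above;
# host_bucket is the template from the s3cfg1 file quoted in this thread.
host_bucket = "%(bucket)s.api2.cloud-datayes.com"

def get_hostname(bucket):
    return host_bucket % {"bucket": bucket}

def format_uri(bucket, resource, params):
    query = "&".join("%s=%s" % kv for kv in params)
    return "http://%s%s?%s" % (get_hostname(bucket), resource, query)

url = format_uri("stock", "/",
                 [("prefix", "XSHE/0/50/2008/XSHE-50-20080102"),
                  ("delimiter", "/")])
print(url)
# http://stock.api2.cloud-datayes.com/?prefix=XSHE/0/50/2008/XSHE-50-20080102&delimiter=/
```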




On Mon, Aug 24, 2015 at 9:17 AM, Shunichi Shinohara  wrote:

> The result of "s3cmd ls" (aka, GET Service API) indicates there
> is no bucket with name "stock":
>
> > root@cluster-s3-hd1:~# s3cmd ls
> > 2013-12-01 06:45  s3://test
>
> Have you created it?
>
> --
> Shunichi Shinohara
> Basho Japan KK
>
>
> On Mon, Aug 24, 2015 at 10:14 AM, changmao wang 
> wrote:
> > Shunichi,
> >
> > Thanks for your reply. Below is my command result:
> > root@cluster-s3-hd1:~# s3cmd ls
> > 2013-12-01 06:45  s3://test
> > root@cluster-s3-hd1:~# s3cmd info s3://stock
> > ERROR: Access to bucket 'stock' was denied
> > root@cluster-s3-hd1:~# s3cmd info s3://stock -d
> > DEBUG: ConfigParser: Reading file '/root/.s3cfg'
> > DEBUG: ConfigParser: access_key->M2...17_chars...K
> > DEBUG: ConfigParser: bucket_location->US
> > DEBUG: ConfigParser: cloudfront_host->cloudfront.amazonaws.com
> > DEBUG: ConfigParser: cloudfront_resource->/2010-07-15/distribution
> > DEBUG: ConfigP

Re: s3cmd error: access to bucket was denied

2015-08-24 Thread changmao wang
Please check the attached file for details.

On Mon, Aug 24, 2015 at 4:48 PM, Shunichi Shinohara  wrote:

> Then, back to my first questions:
> Could you provide the results of the following commands with s3cfg1?
> - s3cmd ls
> - s3cmd info s3://stock
>
> From the log file, gc index queries timed out again and again.
> Not sure, but it may be a subtle situation...
>
> --
> Shunichi Shinohara
> Basho Japan KK
>
>
> On Mon, Aug 24, 2015 at 11:03 AM, changmao wang 
> wrote:
> > 1. root@cluster1-hd10:~# grep cs_root_host /etc/riak-cs/app.config
> >   {cs_root_host, "api2.cloud-datayes.com"},
> > root@cluster1-hd10:~# grep host_base .s3cfg
> > host_base = api2.cloud-datayes.com
> > root@cluster1-hd10:~# grep host_base s3cfg1
> > host_base = api2.cloud-datayes.com
> >
> > 2. please check attached file for "s3cmd -d" output and
> > '/etc/riak-cs/console.log'.
> >
> >
> > On Mon, Aug 24, 2015 at 9:54 AM, Shunichi Shinohara 
> wrote:
> >>
> >> What is "api2.cloud-datayes.com"? Your s3cfg attached at the first one
> >> in this email thread
> >> does not include it. Please make sure you provide correct / consistent
> >> information to
> >> debug the issue.
> >>
> >> - What is your riak cs config "cs_root_host"?
> >> - What is your host_base in s3cfg that you USE?
> >> - What is your host_bucket in s3cfg?
> >>
> >> Also, please attach s3cmd debug output AND riak cs console log at the
> same
> >> time
> >> interval.
> >> --
> >> Shunichi Shinohara
> >> Basho Japan KK
> >>
> >>
> >> On Mon, Aug 24, 2015 at 10:42 AM, changmao wang <
> wang.chang...@gmail.com>
> >> wrote:
> >> > I'm not sure who created it. This is a legacy production system.
> >> >
> >> > Just now, I used another "s3cfg" file to access it. Below is my
> output:
> >> > root@cluster1-hd10:~# s3cmd -c s3cfg1 info
> >> > s3://stock/XSHE/0/50/2008/XSHE-50-20080102
> >> > s3://stock/XSHE/0/50/2008/XSHE-50-20080102 (object):
> >> >File size: 397535
> >> >Last mod:  Thu, 05 Dec 2013 03:19:00 GMT
> >> >MIME type: binary/octet-stream
> >> >MD5 sum:   feb2609ecfc9bb21549f2401a5c9477d
> >> >ACL:   stockwrite: FULL_CONTROL
> >> >ACL:   *anon*: READ
> >> >URL:
> >> > http://stock.s3.amazonaws.com/XSHE/0/50/2008/XSHE-50-20080102
> >> > root@cluster1-hd10:~# s3cmd -c s3cfg1 ls
> >> > s3://stock/XSHE/0/50/2008/XSHE-50-20080102 -d
> >> > DEBUG: ConfigParser: Reading file 's3cfg1'
> >> > DEBUG: ConfigParser: access_key->TE...17_chars...0
> >> > DEBUG: ConfigParser: bucket_location->US
> >> > DEBUG: ConfigParser: cloudfront_host->cloudfront.amazonaws.com
> >> > DEBUG: ConfigParser: cloudfront_resource->/2010-07-15/distribution
> >> > DEBUG: ConfigParser: default_mime_type->binary/octet-stream
> >> > DEBUG: ConfigParser: delete_removed->False
> >> > DEBUG: ConfigParser: dry_run->False
> >> > DEBUG: ConfigParser: encoding->UTF-8
> >> > DEBUG: ConfigParser: encrypt->False
> >> > DEBUG: ConfigParser: follow_symlinks->False
> >> > DEBUG: ConfigParser: force->False
> >> > DEBUG: ConfigParser: get_continue->False
> >> > DEBUG: ConfigParser: gpg_command->/usr/bin/gpg
> >> > DEBUG: ConfigParser: gpg_decrypt->%(gpg_command)s -d --verbose
> >> > --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o
> >> > %(output_file)s %(input_file)s
> >> > DEBUG: ConfigParser: gpg_encrypt->%(gpg_command)s -c --verbose
> >> > --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o
> >> > %(output_file)s %(input_file)s
> >> > DEBUG: ConfigParser: gpg_passphrase->...-3_chars...
> >> > DEBUG: ConfigParser: guess_mime_type->True
> >> > DEBUG: ConfigParser: host_base->api2.cloud-datayes.com
> >> > DEBUG: ConfigParser: host_bucket->%(bucket)s.api2.cloud-datayes.com
> >> > DEBUG: ConfigParser: human_readable_sizes->False
> >> > DEBUG: ConfigParser: list_md5->False
> >> > DEBUG: ConfigParser: log_target_prefix->
> >> > DEBUG: ConfigParser: preserve_attrs->True
> >> > DEBUG: ConfigParser: progress_meter->True
> >> > DEBUG: ConfigParser: proxy_host->10.21.136.81
> >> > DEBUG: ConfigParser: proxy_port->8080
> >> > DEBUG: ConfigParser: recursive->False
> >> > DEBUG: ConfigParser: recv_chunk->4096
> >> > DEBUG: ConfigParser: reduced_redundancy->False
> >> > DEBUG: ConfigParser: secret_key->Hk...37_chars...=
> >> > DEBUG: ConfigParser: send_chunk->4096
> >> > DEBUG: ConfigParser: simpledb_host->sdb.amazonaws.com
> >> > DEBUG: ConfigParser: skip_existing->False
> >> > DEBUG: ConfigParser: socket_timeout->100
> >> > DEBUG: ConfigParser: urlencoding_mode->normal
> >> > DEBUG: ConfigParser: use_https->False
> >> > DEBUG: ConfigParser: verbosity->WARNING
> >> > DEBUG: Updating Config.Config encoding -> UTF-8
> >> > DEBUG: Updating Config.Config follow_symlinks -> False
> >> > DEBUG: Updating Config.Config verbosity -> 10
> >> > DEBUG: Unicodising 'ls' using UTF-8
> >> > DEBUG: Unicodising
> 's3://stock/XSHE/0/50/2008/XSHE-50-20080102'
> >> > using UTF-8
> >> > DEBUG: Command: ls
> >> > DEBUG: Bucket 's3://stock':
> >> > DEBUG: Strin