Re: Rolling upgrade from 1.4.2 to 2.0.5

2015-08-25 Thread Sujay Mansingh
Thanks Dmitri.

I tried that, but with no luck. I replaced the ring and bitcask data
directories.

But I can’t run riak-admin reip ... because it complains that riak isn’t
running.

However, I can’t start riak (I get the following in /var/log/riak/error.log).
(The existing cluster is the 192.168.3.x range, and the new one is the
172.16.16.x range.)

2015-08-25 11:13:39.217 [error] <0.161.0> gen_server
riak_core_capability terminated with reason: no function clause
matching orddict:fetch('riak@172.16.16.211',
[{'riak@192.168.3.5',[{{riak_control,member_info_version},[v1,v0]},{{riak_core,resizable_ring},...},...]},...])
line 72
2015-08-25 11:13:39.217 [error] <0.161.0> CRASH REPORT Process
riak_core_capability with 0 neighbours exited with reason: no function
clause matching orddict:fetch('riak@172.16.16.211',
[{'riak@192.168.3.5',[{{riak_control,member_info_version},[v1,v0]},{{riak_core,resizable_ring},...},...]},...])
line 72 in gen_server:terminate/6 line 747
2015-08-25 11:13:39.219 [error] <0.137.0> Supervisor riak_core_sup had
child riak_core_capability started with
riak_core_capability:start_link() at <0.161.0> exit with reason no
function clause matching orddict:fetch('riak@172.16.16.211',
[{'riak@192.168.3.5',[{{riak_control,member_info_version},[v1,v0]},{{riak_core,resizable_ring},...},...]},...])
line 72 in context child_terminated
2015-08-25 11:13:39.219 [error] <0.135.0> CRASH REPORT Process
<0.135.0> with 0 neighbours exited with reason:
{{function_clause,[{orddict,fetch,['riak@172.16.16.211',[{'riak@192.168.3.5',[{{riak_control,member_info_version},[v1,v0]},{{riak_core,resizable_ring},[true,false]},{{riak_core,staged_joins},[true,false]},{{riak_core,vnode_routing},[proxy,legacy]},{{riak_kv,anti_entropy},[enabled_v1,disabled]},{{riak_kv,crdt},[[pncounter],[]]},{{riak_kv,handoff_data_encoding},[encode_raw,encode_zlib]},{{riak_kv,index_backpressure},[true,false]},{{riak_kv,legacy_keylisting},[false]},{{riak_kv,listkeys_backpressure},...},...]},...]],...},...]},...}
in application_master:init/4 line 138

At the moment, it looks like I can’t restore the cluster. Is there any
other way of verifying the backup? Perhaps I can simply pull out all the
keys in the bitcask data dump?
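
Something like the following is what I had in mind: a rough sketch, assuming this Riak version stores each bitcask key as term_to_binary({Bucket, Key}) and using the bitcask Erlang API (bitcask:open/1, bitcask:fold/3) from an erl shell with the bitcask ebin directory on the code path. The data root below is just my default path.

    %% Rough sketch only: run in an erl shell started with Riak's bitcask
    %% library on the code path, e.g.:
    %%   erl -pa /usr/lib/riak/lib/bitcask-*/ebin
    DataRoot = "/var/lib/riak/bitcask".
    {ok, Partitions} = file:list_dir(DataRoot).
    lists:foreach(
      fun(P) ->
              Ref = bitcask:open(filename:join(DataRoot, P)),
              bitcask:fold(Ref,
                           fun(K, _V, Acc) ->
                                   %% keys should decode to {Bucket, Key} tuples
                                   io:format("~p~n", [catch binary_to_term(K)]),
                                   Acc
                           end,
                           ok),
              bitcask:close(Ref)
      end,
      Partitions).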

Thanks,
Sujay

On Mon, Aug 24, 2015 at 1:43 PM, Dmitri Zagidulin 
wrote:

> Hi Sujay,
>
> This is where we get into the fact that maintaining docs across many
> versions is a hard problem :)
>
> You'll want to follow the instructions laid out in
> http://docs.basho.com/riak/latest/ops/running/nodes/renaming/#Clusters-from-Backups
>  (the
> Clusters from Backup section, specifically). That outlines the instructions
> on renaming the ring on an existing new cluster from backup. (And keep in
> mind what I said earlier about renaming the Erlang cookie in vm.args).
>
> Since it's written for Riak version 2, you'll want to cross-reference it
> with the slightly older version of the doc that you're looking at,
> http://docs.basho.com/riak/1.4.2/ops/running/nodes/renaming/ . The
> procedure should be largely the same, just the names of the config files
> are different.
>
>
> On Monday, August 24, 2015, Sujay Mansingh  wrote:
>
>> Hi guys
>>
>> I am looking at the instructions here:
>> http://docs.basho.com/riak/1.4.2/ops/running/nodes/renaming/
>>
>> However, these are instructions for renaming an existing cluster
>> ‘in-place’.
>>
>> What I have is an existing 5 node cluster.
>> I have brought up a completely new (and separate) 5 node cluster.
>> I am copying over the bitcask data and /var/lib/riak/ring directories
>> from each existing node to the new cluster. (i.e. from existing-01 to
>> new-01, existing-02 to new-02, etc)
>>
>> The instructions above mention to join the cluster, but I don’t wish to
>> do that (as it would join a new node to the existing cluster).
>>
>> At the moment I have not formed the new cluster (all 5 riak nodes are
>> standalone).
>> What do I need to do in order to rename the ring on the nodes in the new
>> cluster?
>>
>> Sujay
>>
>> On Tue, Aug 11, 2015 at 4:47 PM, Dmitri Zagidulin 
>> wrote:
>>
>>> Hi Sujay,
>>>
>>> Yes, riak.conf is a riak 2 thing. If you're running 1.4, you would
>>> change the -setcookie in vm.args, exactly.
>>>
>>> And no, the node name doesn't have to match the cookie. The two are
>>> independent.
>>>
>>> On Tue, Aug 11, 2015 at 3:22 PM, Sujay Mansingh  wrote:
>>>
 Oh and also, does the first part of the riak node name have to match
 the cookie?
 I.e. If I change the cookie to riaktest, does the node name have to be
 riaktest@{{ ip_addr }} ?


 On Tuesday, August 11, 2015, Sujay Mansingh  wrote:

> Thanks Dmitri
>
> When you say the cookie must be modified in /etc/riak/riak.conf, is
> that a riak 2 thing?
> I can see a -setcookie riak line in /etc/riak/vm.args, is that what
> you mean?
>
> Sujay
>
> On Thu, Aug 6, 2015 at 2:11 PM, Dmitri Zagidulin  > wrote:
>
>> Sujay,
>>
>> 

Re: Search limitations

2015-08-25 Thread Magnus Kessler
On 18 August 2015 at 00:20, Brant Fitzsimmons 
wrote:

> Hello all,
>
> Are the search suggestions on
> http://docs.basho.com/riak/latest/dev/using/application-guide/#Search
> still valid?
>
> Specifically, is it still advisable to use 2i when deep pagination is
> required, and if the cluster is going to be larger than 8-10 nodes should I
> still use something else for search?
>
>
Hi Brant,

Regarding deep pagination, you may want to try Solr's deep
paging [0][1] for your use case. You can issue an appropriate HTTP request
through Riak's HTTP endpoint for Solr.
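
For illustration, here is a rough sketch of a first cursorMark request from an Erlang shell. The index name "famous" is only a placeholder, cursorMark paging needs a sort that ends on a unique field (I use _yz_id here), and this assumes the Riak 2.x /search/query/<index> endpoint on the default HTTP port.

    %% Rough sketch: first page of a cursorMark walk over a placeholder index.
    inets:start().
    Url = "http://127.0.0.1:8098/search/query/famous" ++
          "?q=*:*&rows=100&wt=json&sort=_yz_id+asc&cursorMark=*".
    {ok, {{_, 200, _}, _Headers, Body}} = httpc:request(get, {Url, []}, [], []).
    %% Parse Body as JSON, read "nextCursorMark", and repeat the request with
    %% cursorMark=<that value> until it stops changing.
    io:format("~s~n", [Body]).

The same parameters work through curl or any other HTTP client, of course.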

Regards,

Magnus

[0]: http://solr.pl/en/2014/03/10/solr-4-7-efficient-deep-paging/
[1]:
https://wiki.apache.org/solr/CommonQueryParameters#Deep_paging_with_cursorMark

-- 
Magnus Kessler
Client Services Engineer @ Basho

Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431


Re: s3cmd error: access to bucket was denied

2015-08-25 Thread changmao wang
Any ideas on this issue?

On Mon, Aug 24, 2015 at 5:09 PM, changmao wang 
wrote:

> Please check attached file for details.
>
> On Mon, Aug 24, 2015 at 4:48 PM, Shunichi Shinohara 
> wrote:
>
>> Then, back to my first questions:
>> Could you provide the results of the following commands with s3cfg1?
>> - s3cmd ls
>> - s3cmd info s3://stock
>>
>> From the log file, the gc index queries timed out again and again.
>> Not sure, but it may be a subtle situation...
>>
>> --
>> Shunichi Shinohara
>> Basho Japan KK
>>
>>
>> On Mon, Aug 24, 2015 at 11:03 AM, changmao wang 
>> wrote:
>> > 1. root@cluster1-hd10:~# grep cs_root_host /etc/riak-cs/app.config
>> >   {cs_root_host, "api2.cloud-datayes.com"},
>> > root@cluster1-hd10:~# grep host_base .s3cfg
>> > host_base = api2.cloud-datayes.com
>> > root@cluster1-hd10:~# grep host_base s3cfg1
>> > host_base = api2.cloud-datayes.com
>> >
>> > 2. please check attached file for "s3cmd -d" output and
>> > '/etc/riak-cs/console.log'.
>> >
>> >
>> > On Mon, Aug 24, 2015 at 9:54 AM, Shunichi Shinohara 
>> wrote:
>> >>
>> >> What is "api2.cloud-datayes.com"? The s3cfg you attached in the first
>> >> message of this email thread does not include it. Please make sure you
>> >> provide correct / consistent information to debug the issue.
>> >>
>> >> - What is your riak cs config "cs_root_host"?
>> >> - What is your host_base in s3cfg that you USE?
>> >> - What is your host_bucket in s3cfg?
>> >>
>> >> Also, please attach s3cmd debug output AND riak cs console log at the
>> same
>> >> time
>> >> interval.
>> >> --
>> >> Shunichi Shinohara
>> >> Basho Japan KK
>> >>
>> >>
>> >> On Mon, Aug 24, 2015 at 10:42 AM, changmao wang <
>> wang.chang...@gmail.com>
>> >> wrote:
>> >> > I'm not sure who created it. This is a legacy production system.
>> >> >
>> >> > Just now, I used another "s3cfg" file to access it. Below is my
>> output:
>> >> > root@cluster1-hd10:~# s3cmd -c s3cfg1 info
>> >> > s3://stock/XSHE/0/50/2008/XSHE-50-20080102
>> >> > s3://stock/XSHE/0/50/2008/XSHE-50-20080102 (object):
>> >> >File size: 397535
>> >> >Last mod:  Thu, 05 Dec 2013 03:19:00 GMT
>> >> >MIME type: binary/octet-stream
>> >> >MD5 sum:   feb2609ecfc9bb21549f2401a5c9477d
>> >> >ACL:   stockwrite: FULL_CONTROL
>> >> >ACL:   *anon*: READ
>> >> >URL:
>> >> >
>> http://stock.s3.amazonaws.com/XSHE/0/50/2008/XSHE-50-20080102
>> >> > root@cluster1-hd10:~# s3cmd -c s3cfg1 ls
>> >> > s3://stock/XSHE/0/50/2008/XSHE-50-20080102 -d
>> >> > DEBUG: ConfigParser: Reading file 's3cfg1'
>> >> > DEBUG: ConfigParser: access_key->TE...17_chars...0
>> >> > DEBUG: ConfigParser: bucket_location->US
>> >> > DEBUG: ConfigParser: cloudfront_host->cloudfront.amazonaws.com
>> >> > DEBUG: ConfigParser: cloudfront_resource->/2010-07-15/distribution
>> >> > DEBUG: ConfigParser: default_mime_type->binary/octet-stream
>> >> > DEBUG: ConfigParser: delete_removed->False
>> >> > DEBUG: ConfigParser: dry_run->False
>> >> > DEBUG: ConfigParser: encoding->UTF-8
>> >> > DEBUG: ConfigParser: encrypt->False
>> >> > DEBUG: ConfigParser: follow_symlinks->False
>> >> > DEBUG: ConfigParser: force->False
>> >> > DEBUG: ConfigParser: get_continue->False
>> >> > DEBUG: ConfigParser: gpg_command->/usr/bin/gpg
>> >> > DEBUG: ConfigParser: gpg_decrypt->%(gpg_command)s -d --verbose
>> >> > --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o
>> >> > %(output_file)s %(input_file)s
>> >> > DEBUG: ConfigParser: gpg_encrypt->%(gpg_command)s -c --verbose
>> >> > --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o
>> >> > %(output_file)s %(input_file)s
>> >> > DEBUG: ConfigParser: gpg_passphrase->...-3_chars...
>> >> > DEBUG: ConfigParser: guess_mime_type->True
>> >> > DEBUG: ConfigParser: host_base->api2.cloud-datayes.com
>> >> > DEBUG: ConfigParser: host_bucket->%(bucket)s.api2.cloud-datayes.com
>> >> > DEBUG: ConfigParser: human_readable_sizes->False
>> >> > DEBUG: ConfigParser: list_md5->False
>> >> > DEBUG: ConfigParser: log_target_prefix->
>> >> > DEBUG: ConfigParser: preserve_attrs->True
>> >> > DEBUG: ConfigParser: progress_meter->True
>> >> > DEBUG: ConfigParser: proxy_host->10.21.136.81
>> >> > DEBUG: ConfigParser: proxy_port->8080
>> >> > DEBUG: ConfigParser: recursive->False
>> >> > DEBUG: ConfigParser: recv_chunk->4096
>> >> > DEBUG: ConfigParser: reduced_redundancy->False
>> >> > DEBUG: ConfigParser: secret_key->Hk...37_chars...=
>> >> > DEBUG: ConfigParser: send_chunk->4096
>> >> > DEBUG: ConfigParser: simpledb_host->sdb.amazonaws.com
>> >> > DEBUG: ConfigParser: skip_existing->False
>> >> > DEBUG: ConfigParser: socket_timeout->100
>> >> > DEBUG: ConfigParser: urlencoding_mode->normal
>> >> > DEBUG: ConfigParser: use_https->False
>> >> > DEBUG: ConfigParser: verbosity->WARNING
>> >> > DEBUG: Updating Config.Config encoding -> UTF-8
>> >> > DEBUG: Updating Config.Config follow_symlinks -> False
>> >> > DEBUG: Updating Config.Config verbosity -> 10
>> >> > DEBUG: Un

Re: Search limitations

2015-08-25 Thread Brant Fitzsimmons
I’ll check those out.  Thanks.

> On Aug 25, 2015, at 11:28 AM, Magnus Kessler  wrote:
> 
> On 18 August 2015 at 00:20, Brant Fitzsimmons  > wrote:
> Hello all,
> 
> Are the search suggestions on
> http://docs.basho.com/riak/latest/dev/using/application-guide/#Search
> still valid?
> 
> Specifically, is it still advisable to use 2i when deep pagination is 
> required, and if the cluster is going to be larger than 8-10 nodes should I 
> still use something else for search?
> 
> 
> Hi Brant,
> 
> Regarding deep pagination, you may want to try Solr's deep paging [0][1] for 
> your use case. You can issue an appropriate HTTP request through Riak's HTTP 
> endpoint for Solr.
> 
> Regards,
> 
> Magnus
> 
> [0]: http://solr.pl/en/2014/03/10/solr-4-7-efficient-deep-paging/
> [1]:
> https://wiki.apache.org/solr/CommonQueryParameters#Deep_paging_with_cursorMark
> 
> -- 
> Magnus Kessler
> Client Services Engineer @ Basho
> 
> Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431


--
Brant Fitzsimmons



Re: Rolling upgrade from 1.4.2 to 2.0.5

2015-08-25 Thread Dmitri Zagidulin
Can you paste me the command-line output of 'riak-admin reip', and the
error message? As far as I know, 'reip' requires the node to not be
running. (This has been the case at least as far back as Riak 1.3, and
probably earlier.)

I don't think there are ways of verifying the backup while the nodes are
not running. I'm confident we can sort this out, though, and get them up
and running.
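
One thing that can be checked offline, though, is which node names the copied-over ring file actually contains, since that is what the capability lookup in your error log is tripping over. A rough sketch, assuming the default ring directory and that the ring file is a plain term_to_binary'd structure (which I believe it is in the 1.x series):

    %% Rough diagnostic sketch: node stopped, plain erl shell, no riak modules.
    RingDir = "/var/lib/riak/ring".
    {ok, Files} = file:list_dir(RingDir).
    [RingFile | _] = lists:reverse(lists:sort(Files)).   %% newest ring file, roughly
    {ok, Bin} = file:read_file(filename:join(RingDir, RingFile)).
    %% Look through the printed term for 'riak@192.168.3.x' vs 'riak@172.16.16.x' names.
    io:format("~p~n", [binary_to_term(Bin)]).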

On Tue, Aug 25, 2015 at 6:17 AM, Sujay Mansingh  wrote:

> Thanks Dmitri.
>
> I tried that, but with no luck. I replaced the ring and bitcask data
> directories.
>
> But I can’t run riak-admin reip ... because it complains that riak isn’t
> running.
>
> However, I can’t start riak (I get the following in
> /var/log/riak/error.log).
> (The existing cluster is the 192.168.3.x range, and the new one is the
> 172.16.16.x range.)
>
> 2015-08-25 11:13:39.217 [error] <0.161.0> gen_server riak_core_capability 
> terminated with reason: no function clause matching 
> orddict:fetch('riak@172.16.16.211', 
> [{'riak@192.168.3.5',[{{riak_control,member_info_version},[v1,v0]},{{riak_core,resizable_ring},...},...]},...])
>  line 72
> 2015-08-25 11:13:39.217 [error] <0.161.0> CRASH REPORT Process 
> riak_core_capability with 0 neighbours exited with reason: no function clause 
> matching orddict:fetch('riak@172.16.16.211', 
> [{'riak@192.168.3.5',[{{riak_control,member_info_version},[v1,v0]},{{riak_core,resizable_ring},...},...]},...])
>  line 72 in gen_server:terminate/6 line 747
> 2015-08-25 11:13:39.219 [error] <0.137.0> Supervisor riak_core_sup had child 
> riak_core_capability started with riak_core_capability:start_link() at 
> <0.161.0> exit with reason no function clause matching 
> orddict:fetch('riak@172.16.16.211', 
> [{'riak@192.168.3.5',[{{riak_control,member_info_version},[v1,v0]},{{riak_core,resizable_ring},...},...]},...])
>  line 72 in context child_terminated
> 2015-08-25 11:13:39.219 [error] <0.135.0> CRASH REPORT Process <0.135.0> with 
> 0 neighbours exited with reason: 
> {{function_clause,[{orddict,fetch,['riak@172.16.16.211',[{'riak@192.168.3.5',[{{riak_control,member_info_version},[v1,v0]},{{riak_core,resizable_ring},[true,false]},{{riak_core,staged_joins},[true,false]},{{riak_core,vnode_routing},[proxy,legacy]},{{riak_kv,anti_entropy},[enabled_v1,disabled]},{{riak_kv,crdt},[[pncounter],[]]},{{riak_kv,handoff_data_encoding},[encode_raw,encode_zlib]},{{riak_kv,index_backpressure},[true,false]},{{riak_kv,legacy_keylisting},[false]},{{riak_kv,listkeys_backpressure},...},...]},...]],...},...]},...}
>  in application_master:init/4 line 138
>
> At the moment, it looks like I can’t restore the cluster. Is there any
> other way of verifying the backup? Perhaps I can simply pull out all the
> keys in the bitcask data dump?
>
> Thanks,
> Sujay
>
> On Mon, Aug 24, 2015 at 1:43 PM, Dmitri Zagidulin 
> wrote:
>
>> Hi Sujay,
>>
>> This is where we get into the fact that maintaining docs across many
>> versions is a hard problem :)
>>
>> You'll want to follow the instructions laid out in
>> http://docs.basho.com/riak/latest/ops/running/nodes/renaming/#Clusters-from-Backups
>>  (the
>> Clusters from Backup section, specifically). That outlines the instructions
>> on renaming the ring on an existing new cluster from backup. (And keep in
>> mind what I said earlier about renaming the Erlang cookie in vm.args).
>>
>> Since it's written for Riak version 2, you'll want to cross-reference it
>> with the slightly older version of the doc that you're looking at,
>> http://docs.basho.com/riak/1.4.2/ops/running/nodes/renaming/ . The
>> procedure should be largely the same, just the names of the config files
>> are different.
>>
>>
>> On Monday, August 24, 2015, Sujay Mansingh  wrote:
>>
>>> Hi guys
>>>
>>> I am looking at the instructions here:
>>> http://docs.basho.com/riak/1.4.2/ops/running/nodes/renaming/
>>>
>>> However, these are instructions for renaming an existing cluster
>>> ‘in-place’.
>>>
>>> What I have is an existing 5 node cluster.
>>> I have brought up a completely new (and separate) 5 node cluster.
>>> I am copying over the bitcask data and /var/lib/riak/ring directories
>>> from each existing node to the new cluster. (i.e. from existing-01 to
>>> new-01, existing-02 to new-02, etc)
>>>
>>> The instructions above mention to join the cluster, but I don’t wish to
>>> do that (as it would join a new node to the existing cluster).
>>>
>>> At the moment I have not formed the new cluster (all 5 riak nodes are
>>> standalone).
>>> What do I need to do in order to rename the ring on the nodes in the new
>>> cluster?
>>>
>>> Sujay
>>>
>>> On Tue, Aug 11, 2015 at 4:47 PM, Dmitri Zagidulin 
>>> wrote:
>>>
 Hi Sujay,

 Yes, riak.conf is a riak 2 thing. If you're running 1.4, you would
 change the -setcookie in vm.args, exactly.

 And no, the node name doesn't have to match the cookie. The two are
 independent.

 On Tue, Aug 11, 2015 at 3:22 PM, Sujay Mansingh 
 wrote:

> Oh and also,

Re: s3cmd error: access to bucket was denied

2015-08-25 Thread Stanislav Vlasov
2015-08-25 11:03 GMT+05:00 changmao wang :
> Any ideas on this issue?

Can you check the credentials with another client?
s3curl, for example?

I ran into some bugs in s3cmd after a Debian upgrade, so if another client
works, then s3cmd has a bug.
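
If installing s3curl is a hassle, here is a rough alternative sketch: a raw v2-signed "GET Service" request from an Erlang shell, using only OTP's inets and crypto. The access and secret keys are placeholders, it talks to host_base directly rather than through your proxy, and a 200 response listing your buckets would mean the credentials themselves are fine.

    %% Rough sketch: v2-signed "GET Service" (list buckets) with plain OTP.
    inets:start().
    crypto:start().
    AccessKey = "YOUR_ACCESS_KEY".    %% placeholder
    SecretKey = "YOUR_SECRET_KEY".    %% placeholder
    Host = "api2.cloud-datayes.com".
    Date = httpd_util:rfc1123_date().
    %% S3 v2 string-to-sign for GET /: method, content-md5, content-type, date, resource.
    StringToSign = "GET\n\n\n" ++ Date ++ "\n/".
    Sig = base64:encode_to_string(crypto:hmac(sha, SecretKey, StringToSign)).
    {ok, {{_, Code, _}, _Hdrs, Body}} =
        httpc:request(get, {"http://" ++ Host ++ "/",
                            [{"Date", Date},
                             {"Authorization", "AWS " ++ AccessKey ++ ":" ++ Sig}]},
                      [], []).
    io:format("HTTP ~p~n~s~n", [Code, Body]).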

> On Mon, Aug 24, 2015 at 5:09 PM, changmao wang 
> wrote:
>>
>> Please check attached file for details.
>>
>> On Mon, Aug 24, 2015 at 4:48 PM, Shunichi Shinohara 
>> wrote:
>>>
>>> Then, back to my first questions:
>>> Could you provide the results of the following commands with s3cfg1?
>>> - s3cmd ls
>>> - s3cmd info s3://stock
>>>
>>> From the log file, the gc index queries timed out again and again.
>>> Not sure, but it may be a subtle situation...
>>>
>>> --
>>> Shunichi Shinohara
>>> Basho Japan KK
>>>
>>>
>>> On Mon, Aug 24, 2015 at 11:03 AM, changmao wang 
>>> wrote:
>>> > 1. root@cluster1-hd10:~# grep cs_root_host /etc/riak-cs/app.config
>>> >   {cs_root_host, "api2.cloud-datayes.com"},
>>> > root@cluster1-hd10:~# grep host_base .s3cfg
>>> > host_base = api2.cloud-datayes.com
>>> > root@cluster1-hd10:~# grep host_base s3cfg1
>>> > host_base = api2.cloud-datayes.com
>>> >
>>> > 2. please check attached file for "s3cmd -d" output and
>>> > '/etc/riak-cs/console.log'.
>>> >
>>> >
>>> > On Mon, Aug 24, 2015 at 9:54 AM, Shunichi Shinohara 
>>> > wrote:
>>> >>
>>> >> What is "api2.cloud-datayes.com"? The s3cfg you attached in the first
>>> >> message of this email thread does not include it. Please make sure you
>>> >> provide correct / consistent information to debug the issue.
>>> >>
>>> >> - What is your riak cs config "cs_root_host"?
>>> >> - What is your host_base in s3cfg that you USE?
>>> >> - What is your host_bucket in s3cfg?
>>> >>
>>> >> Also, please attach s3cmd debug output AND riak cs console log at the
>>> >> same
>>> >> time
>>> >> interval.
>>> >> --
>>> >> Shunichi Shinohara
>>> >> Basho Japan KK
>>> >>
>>> >>
>>> >> On Mon, Aug 24, 2015 at 10:42 AM, changmao wang
>>> >> 
>>> >> wrote:
>>> >> > I'm not sure who created it. This is a legacy production system.
>>> >> >
>>> >> > Just now, I used another "s3cfg" file to access it. Below is my
>>> >> > output:
>>> >> > root@cluster1-hd10:~# s3cmd -c s3cfg1 info
>>> >> > s3://stock/XSHE/0/50/2008/XSHE-50-20080102
>>> >> > s3://stock/XSHE/0/50/2008/XSHE-50-20080102 (object):
>>> >> >File size: 397535
>>> >> >Last mod:  Thu, 05 Dec 2013 03:19:00 GMT
>>> >> >MIME type: binary/octet-stream
>>> >> >MD5 sum:   feb2609ecfc9bb21549f2401a5c9477d
>>> >> >ACL:   stockwrite: FULL_CONTROL
>>> >> >ACL:   *anon*: READ
>>> >> >URL:
>>> >> >
>>> >> > http://stock.s3.amazonaws.com/XSHE/0/50/2008/XSHE-50-20080102
>>> >> > root@cluster1-hd10:~# s3cmd -c s3cfg1 ls
>>> >> > s3://stock/XSHE/0/50/2008/XSHE-50-20080102 -d
>>> >> > DEBUG: ConfigParser: Reading file 's3cfg1'
>>> >> > DEBUG: ConfigParser: access_key->TE...17_chars...0
>>> >> > DEBUG: ConfigParser: bucket_location->US
>>> >> > DEBUG: ConfigParser: cloudfront_host->cloudfront.amazonaws.com
>>> >> > DEBUG: ConfigParser: cloudfront_resource->/2010-07-15/distribution
>>> >> > DEBUG: ConfigParser: default_mime_type->binary/octet-stream
>>> >> > DEBUG: ConfigParser: delete_removed->False
>>> >> > DEBUG: ConfigParser: dry_run->False
>>> >> > DEBUG: ConfigParser: encoding->UTF-8
>>> >> > DEBUG: ConfigParser: encrypt->False
>>> >> > DEBUG: ConfigParser: follow_symlinks->False
>>> >> > DEBUG: ConfigParser: force->False
>>> >> > DEBUG: ConfigParser: get_continue->False
>>> >> > DEBUG: ConfigParser: gpg_command->/usr/bin/gpg
>>> >> > DEBUG: ConfigParser: gpg_decrypt->%(gpg_command)s -d --verbose
>>> >> > --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o
>>> >> > %(output_file)s %(input_file)s
>>> >> > DEBUG: ConfigParser: gpg_encrypt->%(gpg_command)s -c --verbose
>>> >> > --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o
>>> >> > %(output_file)s %(input_file)s
>>> >> > DEBUG: ConfigParser: gpg_passphrase->...-3_chars...
>>> >> > DEBUG: ConfigParser: guess_mime_type->True
>>> >> > DEBUG: ConfigParser: host_base->api2.cloud-datayes.com
>>> >> > DEBUG: ConfigParser: host_bucket->%(bucket)s.api2.cloud-datayes.com
>>> >> > DEBUG: ConfigParser: human_readable_sizes->False
>>> >> > DEBUG: ConfigParser: list_md5->False
>>> >> > DEBUG: ConfigParser: log_target_prefix->
>>> >> > DEBUG: ConfigParser: preserve_attrs->True
>>> >> > DEBUG: ConfigParser: progress_meter->True
>>> >> > DEBUG: ConfigParser: proxy_host->10.21.136.81
>>> >> > DEBUG: ConfigParser: proxy_port->8080
>>> >> > DEBUG: ConfigParser: recursive->False
>>> >> > DEBUG: ConfigParser: recv_chunk->4096
>>> >> > DEBUG: ConfigParser: reduced_redundancy->False
>>> >> > DEBUG: ConfigParser: secret_key->Hk...37_chars...=
>>> >> > DEBUG: ConfigParser: send_chunk->4096
>>> >> > DEBUG: ConfigParser: simpledb_host->sdb.amazonaws.com
>>> >> > DEBUG: ConfigParser: skip_existing->False
>>> >> > DEBUG: ConfigParser: socket_timeout->100
>>> >> > DEBUG: