Because I created them manually and then installed Rados Gateway. After
that I realised that Rados Gateway didn't work. I thought it was because
I had created the pools manually, so I removed the buckets I had created
and reinstalled Rados Gateway. But without success, of course.
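
If the missing pools simply need to exist again, I assume they could be
recreated by hand, something like this (the pool names and pg_num here
are my guess, not something I have verified):

ceph osd pool create .rgw.buckets 8
ceph osd pool create .rgw.buckets.index 8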

On Wed, Feb 17, 2016 at 10:13 PM, Василий Ангапов <anga...@gmail.com> wrote:

> First, it seems to me that you should not have deleted the pools
> .rgw.buckets and .rgw.buckets.index, because those are the pools where
> RGW actually stores buckets.
> But why did you do that?
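>
> For reference, I believe the pools that currently exist can be listed
> with something like:
>
> rados lspools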
>
> 2016-02-18 3:08 GMT+08:00 Alexandr Porunov <alexandr.poru...@gmail.com>:
> > When I try to create a bucket:
> > s3cmd mb s3://first-bucket
> >
> > I always get this error:
> > ERROR: S3 error: 405 (MethodNotAllowed)
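> >
> > One guess: the log below shows the failing request as "PUT /", so
> > perhaps s3cmd is putting the bucket name into the Host header and the
> > gateway does not recognize that hostname. My ~/.s3cfg points at the
> > gateway roughly like this; the addresses are just an example:
> >
> > host_base = 192.168.56.100
> > host_bucket = 192.168.56.100/%(bucket)s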
> >
> > /var/log/ceph/ceph-client.rgw.gateway.log:
> > 2016-02-17 20:22:49.282715 7f86c50f3700  1 handle_sigterm
> > 2016-02-17 20:22:49.282750 7f86c50f3700  1 handle_sigterm set alarm for 120
> > 2016-02-17 20:22:49.282646 7f9478ff9700  1 handle_sigterm
> > 2016-02-17 20:22:49.282689 7f9478ff9700  1 handle_sigterm set alarm for 120
> > 2016-02-17 20:22:49.285830 7f949b842880 -1 shutting down
> > 2016-02-17 20:22:49.285289 7f86f36c3880 -1 shutting down
> > 2016-02-17 20:22:49.370173 7f86f36c3880  1 final shutdown
> > 2016-02-17 20:22:49.467154 7f949b842880  1 final shutdown
> > 2016-02-17 22:23:33.388956 7f4a94adf880  0 ceph version 9.2.0 (bb2ecea240f3a1d525bcb35670cb07bd1f0ca299), process radosgw, pid 889
> > 2016-02-17 20:23:44.344574 7f4a94adf880  0 framework: civetweb
> > 2016-02-17 20:23:44.344583 7f4a94adf880  0 framework conf key: port, val: 80
> > 2016-02-17 20:23:44.344590 7f4a94adf880  0 starting handler: civetweb
> > 2016-02-17 20:23:44.344630 7f4a94adf880  0 civetweb: 0x7f4a951c8b00: set_ports_option: cannot bind to 80: 13 (Permission denied)
> > 2016-02-17 20:23:44.495510 7f4a65ffb700  0 ERROR: can't read user header: ret=-2
> > 2016-02-17 20:23:44.495516 7f4a65ffb700  0 ERROR: sync_user() failed, user=alex ret=-2
> > 2016-02-17 20:26:47.425354 7fb50132b880  0 ceph version 9.2.0 (bb2ecea240f3a1d525bcb35670cb07bd1f0ca299), process radosgw, pid 3149
> > 2016-02-17 20:26:47.471472 7fb50132b880 -1 asok(0x7fb503e51340) AdminSocketConfigObs::init: failed: AdminSocket::bind_and_listen: failed to bind the UNIX domain socket to '/var/run/ceph/ceph-client.rgw.gateway.asok': (17) File exists
> > 2016-02-17 20:26:47.554305 7fb50132b880  0 framework: civetweb
> > 2016-02-17 20:26:47.554319 7fb50132b880  0 framework conf key: port, val: 80
> > 2016-02-17 20:26:47.554328 7fb50132b880  0 starting handler: civetweb
> > 2016-02-17 20:26:47.576110 7fb4d2ffd700  0 ERROR: can't read user header: ret=-2
> > 2016-02-17 20:26:47.576119 7fb4d2ffd700  0 ERROR: sync_user() failed, user=alex ret=-2
> > 2016-02-17 20:27:03.504131 7fb49d7a2700  1 ====== starting new request req=0x7fb4e40008c0 =====
> > 2016-02-17 20:27:03.522989 7fb49d7a2700  1 ====== req done req=0x7fb4e40008c0 http_status=200 ======
> > 2016-02-17 20:27:03.523023 7fb49d7a2700  1 civetweb: 0x7fb4e40022a0: 192.168.56.100 - - [17/Feb/2016:20:27:03 +0200] "GET / HTTP/1.1" 200 0 - -
> > 2016-02-17 20:27:08.796459 7fb49bf9f700  1 ====== starting new request req=0x7fb4ec0343a0 =====
> > 2016-02-17 20:27:08.796755 7fb49bf9f700  1 ====== req done req=0x7fb4ec0343a0 http_status=405 ======
> > 2016-02-17 20:27:08.796807 7fb49bf9f700  1 civetweb: 0x7fb4ec0008c0: 192.168.56.100 - - [17/Feb/2016:20:27:08 +0200] "PUT / HTTP/1.1" 405 0 - -
> > 2016-02-17 20:28:22.088508 7fb49e7a4700  1 ====== starting new request req=0x7fb503f1bfd0 =====
> > 2016-02-17 20:28:22.090993 7fb49e7a4700  1 ====== req done req=0x7fb503f1bfd0 http_status=200 ======
> > 2016-02-17 20:28:22.091035 7fb49e7a4700  1 civetweb: 0x7fb503f2e9f0: 192.168.56.100 - - [17/Feb/2016:20:28:22 +0200] "GET / HTTP/1.1" 200 0 - -
> > 2016-02-17 20:28:35.943110 7fb4a77b6700  1 ====== starting new request req=0x7fb4cc0047b0 =====
> > 2016-02-17 20:28:35.945233 7fb4a77b6700  1 ====== req done req=0x7fb4cc0047b0 http_status=200 ======
> > 2016-02-17 20:28:35.945282 7fb4a77b6700  1 civetweb: 0x7fb4cc004c90: 192.168.56.100 - - [17/Feb/2016:20:28:35 +0200] "GET / HTTP/1.1" 200 0 - -
> > 2016-02-17 20:29:07.447283 7fb49dfa3700  1 ====== starting new request req=0x7fb4d8000bf0 =====
> > 2016-02-17 20:29:07.447743 7fb49dfa3700  1 ====== req done req=0x7fb4d8000bf0 http_status=405 ======
> > 2016-02-17 20:29:07.447913 7fb49dfa3700  1 civetweb: 0x7fb4d8002bb0: 192.168.56.100 - - [17/Feb/2016:20:29:07 +0200] "PUT / HTTP/1.1" 405 0 - -
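> >
> > Two things in this log stand out to me: civetweb cannot bind to port
> > 80 (Permission denied), and the admin socket from an earlier instance
> > was left behind. If radosgw is started as a non-root user, I assume
> > something like this would let it bind to the privileged port (I have
> > not verified it on my setup):
> >
> > sudo setcap cap_net_bind_service=+ep /usr/bin/radosgw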
> >
> > My ceph.conf:
> > [global]
> > fsid = 54060180-f49f-4cfb-a04e-72ecbda8692b
> > mon_initial_members = node1
> > mon_host = 192.168.56.101
> > auth_cluster_required = cephx
> > auth_service_required = cephx
> > auth_client_required = cephx
> > filestore_xattr_use_omap = true
> > osd_pool_default_size = 2
> > public_network = 192.168.56.0/24
> > cluster_network = 192.168.57.0/24
> > [client.rgw.gateway]
> > rgw_frontends = "civetweb port=80"
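> >
> > Since the log shows the bind to port 80 failing with Permission
> > denied, I suppose an unprivileged port would work as an alternative,
> > for example:
> >
> > [client.rgw.gateway]
> > rgw_frontends = "civetweb port=8080"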
> >
> > I have created several pools (like .rgw.buckets.index and so on) and
> > then deleted several of them (again, .rgw.buckets.index and so on).
> > This is my current list of pools:
> > 0 rbd, 1 .rgw.root, 2 .rgw.control, 3 .rgw, 5 .log, 6 .users.uid, 7 data,
> > 12 .intent-log, 13 .usage, 14 .users, 15 .users.email, 16 .users.swift,
> > 17 .rgw.gc,
> >
> > After a reboot my ceph-radosgw@rgw.gateway.service is running, but I
> > cannot send any requests to the Ceph Gateway Node (it returns errors).
> >
> > I start it manually with this command:
> > /usr/bin/radosgw --id=rgw.gateway
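> >
> > Given the "(17) File exists" error above, my guess is that the stale
> > admin socket has to be removed before restarting through systemd,
> > something like:
> >
> > rm /var/run/ceph/ceph-client.rgw.gateway.asok
> > systemctl restart ceph-radosgw@rgw.gateway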
> >
> > After this command the Ceph gateway becomes responsive, but s3cmd mb
> > s3://first-bucket still doesn't work.
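> >
> > To see why the PUT fails, I imagine the gateway could be run in the
> > foreground with higher log levels, something like:
> >
> > /usr/bin/radosgw -d --id=rgw.gateway --debug-rgw=20 --debug-ms=1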
> >
> > Please help me figure out how to create buckets.
> >
> > Regards