Hello everyone,
I have a Ceph test setup with 3 mons, 3 RGWs, 5 OSD nodes and 22 OSDs. The
RadosGW instances run on the monitor nodes, behind a load balancer. I run the
RGW instances in full debug mode (20/20 for rgw and 20/20 for civetweb).
I can easily access RGW via the S3 API with any…
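For reference, debug levels like those usually go into the RGW section of
ceph.conf; the instance name below is only a placeholder, not the one from
this setup:

  [client.rgw.mon01]
  debug rgw = 20/20
  debug civetweb = 20/20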
…PARTUUID="5a057b30-b697-4598-84c0-1794c608d70c"
/dev/nvme0n1p7: PARTLABEL="ceph journal" PARTUUID="c22c272d-5b75-40ca-970e-87b1b303944c"
/dev/nvme0n1p8: PARTLABEL="ceph journal" PARTUUID="ed9fd194-1490-42b1-a2b4-ae36b2a4f8ce"
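Output like the above typically comes from blkid; to list the Ceph journal
partitions on a node, something along these lines should work (the device
glob is just an example):

  $ sudo blkid /dev/nvme0n1p* | grep "ceph journal"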
> Anyways, it uses ceph-volume instead of ceph-disk and I think you have to
> specify the actual partition here.
> But I'd just downgrade to ceph-deploy 1.5.39 when running Luminous (not a
> long-term solution as ceph-disk will
> be removed in Nautilus)
>
> Paul
>
>
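For reference, the two workarounds above might look roughly like this; the
device paths are placeholders, not taken from this thread:

  # specify the journal partition explicitly with ceph-volume (filestore)
  $ sudo ceph-volume lvm create --filestore --data /dev/sdb --journal /dev/nvme0n1p7

  # or pin ceph-deploy to the last ceph-disk based release
  $ sudo pip install 'ceph-deploy==1.5.39'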
If anyone encounters a similar problem, this solution worked for me. FYI.
My best,
Huseyin
On 11 Jul 2018 20:31 +0300, Alfredo Deza wrote:
> On Wed, Jul 11, 2018 at 12:57 PM, Huseyin Cotuk wrote:
> > Hi Paul,
> >
> > Thanks for your reply. I did not menti…
Without rgw_dns_name defined, OpenStack object storage works as expected. Is
there any way to use both APIs with rgw_dns_name set? I would appreciate any
suggestion.
My best,
Dr. Huseyin COTUK
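For what it's worth, rgw_dns_name is normally set per RGW instance in
ceph.conf, e.g. (instance and domain names here are placeholders):

  [client.rgw.mon01]
  rgw dns name = objects.example.com

On newer setups the "hostnames" list of the zonegroup (radosgw-admin
zonegroup get / zonegroup set) serves a similar purpose and allows more than
one hostname, which might be worth a look here.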
…check or fix the reported epoch #?
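Assuming this refers to the OSD map epoch, it can be checked with something
like:

  # the first line of "ceph osd dump" shows the current osdmap epoch
  $ ceph osd dump | head -1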
Thanks in advance.
Best regards,
Huseyin Cotuk
hco...@gmail.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
…So I want to use those NVMe disks for a full-flash pool,
and choose other disks for the DB device.
Any suggestion or recommendation would be appreciated.
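One way this is sometimes laid out, sketched here with placeholder device
paths and pool/rule names:

  # NVMe-only OSDs for the flash pool
  $ sudo ceph-volume lvm create --bluestore --data /dev/nvme0n1

  # HDD OSDs with their RocksDB on a separate partition
  $ sudo ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/sdc1

  # CRUSH rule limited to the nvme device class, then a pool on top of it
  $ ceph osd crush rule create-replicated nvme-only default host nvme
  $ ceph osd pool create flashpool 128 128 replicated nvme-only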
Best regards,
Huseyin Cotuk
hco...@gmail.com
Regards,
Huseyin Cotuk
hco...@gmail.com