Hello Orit,
Please find attached the output from the radosgw commands and the
relevant section from ceph.conf (radosgw)
bbp-gva-master is running 10.2.5
bbp-gva-secondary is running 10.2.7
Kind regards,
Ben Morrice
Hi
any ideas?
thanks,
J.
On 17/04/17 12:50, magicb...@gmail.com wrote:
Hi
Is it possible to configure radosGW (10.2.6-0ubuntu0.16.04.1) to work
with OpenStack Keystone UUID-based tokens? RadosGW is expecting a list
of revoked tokens, but that option only works in keystone deployments
based
You may want to look here http://tracker.ceph.com/issues/19499 and
http://tracker.ceph.com/issues/9493
Thanks,
From: ceph-users on behalf of
"magicb...@gmail.com"
Date: Friday, 21 April 2017 1:11 pm
To: ceph-users
Subject: EXT: Re: [ceph-users] RadosGW and Openstack Keystone revoked tokens
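For the Keystone question above, a minimal sketch of the relevant radosgw section in ceph.conf, assuming Jewel-era option names; the section name, URL and values are placeholders, not taken from any attached config:

[client.rgw.gateway]
rgw keystone url = http://keystone.example.com:35357
rgw keystone admin token = PLACEHOLDER
rgw keystone accepted roles = Member, admin
rgw keystone token cache size = 500
# revocation lists only exist for PKI tokens; how radosgw behaves with
# UUID tokens is exactly what the tracker issues above discuss
rgw keystone revocation interval = 0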
Hi Felix,
I'm wondering if the assignments change if you reset the BIOS. Also,
when you insert a new disk in between, what happens to the old disks: do
they keep their assignments (or, as Mehmet said, do you get an in-between
number)? If not, you may get into worse problems if disks fail or you
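Either way, a quick way to check which /dev/sdX currently backs which OSD data or journal partition, before and after inserting a disk (standard ceph-disk subcommand, run on the OSD node):

$ sudo ceph-disk list   # prints each disk/partition, whether it is ceph data or journal, and the osd.N it belongs to
$ ceph osd tree         # confirm the OSD ids are still up and under the expected host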
Hi all,
today I probably found a solution for this (unfortunately not the reason).
The problem only occurs when using the ceph-kraken version on my clients.
If I use ceph-jewel (which was running on my iSCSI gateways), the problem does
not appear.
Best regards,
Sven
From: Rath, Sven
Sent: Thursday
Hi all,
we have a running Ceph cluster across 5 OSD nodes. Performance and latency are
good. Now we have two new Supermicro OSD nodes with HBAs. OSDs 0-26 are in
the old servers and OSDs 27-55 in the new ones. Is this latency normal? OSDs 27-55
are not in a CRUSH bucket and are not mapped to any pools.
osd fs_commit
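To compare commit/apply latency per OSD and to confirm where the new OSDs sit in the CRUSH tree, something along these lines (standard ceph CLI):

$ ceph osd perf   # fs_commit_latency(ms) and fs_apply_latency(ms) per OSD
$ ceph osd tree   # shows whether osd.27-55 are under a host/root bucket yet
$ ceph osd df     # per-OSD weight and utilisation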
Hi Tobias,
I had a similar problem with Supermicro and this HBA:
https://storage.microsemi.com/en-us/support/sas/sas/aha-1000-8i8e/
The problem was an incompatibility between the aacraid module/driver and
CentOS 7.3.
I had to go back to CentOS 7.2 with kernel 3.10.0-327.el7.x86_64, as the driver w
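To check which kernel and aacraid driver version a node is actually running (useful when comparing the 7.2 and 7.3 behaviour), for example:

$ uname -r                             # running kernel, e.g. 3.10.0-327.el7.x86_64 on CentOS 7.2
$ modinfo aacraid | grep -i '^version' # version of the aacraid module shipped with that kernel
$ dmesg | grep -i aacraid              # driver messages from the last boot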
Thank you all, guys.
Nikita
From: Ben Hines [mailto:bhi...@gmail.com]
Sent: Thursday, April 20, 2017 2:08 AM
To: Vincent Godin
Cc: Nikita Shalnov ; ceph-users
Subject: Re: [ceph-users] Creating journal on needed partition
This is my experience. For creating new OSDs, I just created Rundeck j
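For the "journal on a needed partition" part of the subject, the basic ceph-disk form is to pass the journal device or partition as the second argument; a minimal sketch with placeholder device names:

$ sudo ceph-disk prepare /dev/sdb /dev/sdc2   # data on sdb, journal on the existing partition sdc2
$ sudo ceph-disk activate /dev/sdb1           # activate the data partition created by prepare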
In my experience, for safety, you should treat /dev/sd* assignments
as randomly assigned during boot. You may also want to be wary
about using /dev/disk/by-path. I have had PCI bus/slot numbering
change after a kernel upgrade. From your descriptio
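Persistent names under /dev/disk/by-id are tied to the drive serial/WWN rather than to bus or probe order, so they survive reordering; a quick way to map them to the current /dev/sdX names:

$ ls -l /dev/disk/by-id/ | grep -v part          # whole-disk symlinks -> current sdX names
$ udevadm info --query=symlink --name=/dev/sdc   # all persistent names udev knows for one disk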
Hi
I have a 7 node ceph cluster built with ceph kraken. HW details: each node
has 5 x 1TB drives and a single SSD which has been partitioned to provide
ceph journal for each of the 5 drives per node.
Network is 10GigE. Each node has 16 cpus (Intel Haswell family chipset)
I also setup 7 x radosgw'
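With the default ceph-disk layout each OSD's journal is a symlink to its SSD partition, so the per-node mapping described above can be verified with something like (assuming the standard /var/lib/ceph paths):

$ ls -l /var/lib/ceph/osd/ceph-*/journal   # each OSD's journal symlink -> its SSD partition (by partuuid)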
Hi Everyone,
I am playing around a bit with Ceph on a test cluster with 3 servers (each is a MON
and an OSD at the same time).
I use some self-written Ansible rules to deploy the config and create
the OSDs with ceph-disk. Because ceph-disk uses the next free OSD ID, my
Ansible script is not aware which ID belongs
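One way to find out afterwards which ID ceph-disk allocated is to read it back from the OSD's data directory or from the cluster itself; a sketch, assuming the default ceph-disk mount points:

$ cat /var/lib/ceph/osd/ceph-*/whoami   # the numeric OSD id stored in each mounted data dir
$ ceph osd tree                         # shows which ids ended up under which host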
Hi all,
I built from source and proceeded to do a manual deployment starting on the
Mon. I'm getting the error shown below and it appears that Rados has not
been properly imported. How do I fix this?
Best,
Henry N.
cephadmin@node1:/var/lib/ceph/mon/ceph-node1$ sudo /etc/init.d/ceph start
mon.nod
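For a source build, "Rados has not been properly imported" usually means the ceph CLI (a Python script) cannot find the rados binding or librados on the default search paths; a sketch of pointing it at the build tree, where the paths below are placeholders for wherever your tree puts pybind and the built libraries:

$ export LD_LIBRARY_PATH=/path/to/ceph/build/lib:$LD_LIBRARY_PATH   # location of the built librados.so
$ export PYTHONPATH=/path/to/ceph/src/pybind:$PYTHONPATH            # location of the python rados binding
$ python -c 'import rados; print(rados.__file__)'                   # confirm which rados module gets picked up
$ ceph -s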
Henry,
Because you compiled from source, the system may not know where librados is, but
you can find out where it is being loaded from and check there. Maybe the error is
not that it is not properly imported, but that it is not where it is supposed to be.
$ (strace ceph -s > myout) >& myerror
$ cat myerror | grep librados.so.2
The
Henry,
The output in the last mail was from a Hammer installation.
$ ceph -v
ceph version 0.94.3 (95cefea9fd9ab740263bf8bb4796fd864d9afe2b)
This is the output in Jewel
root@ceph01:~# ceph -v
ceph version 10.2.6 (656b5b63ed7c43bd014bcafd81b001959d5f089f)
root@ceph01:~# (strace ceph -s > myout) >& myerror
Hi Alvaro,
I also have another cluster that was built using ceph-deploy with no
issues. I'll do some more digging, just want to check it's not a known bug
in the source.
On Apr 21, 2017 3:52 PM, "Alvaro Soto" wrote:
> Henry,
> The output in the last mail was from a Hammer installation.
>
> $ ceph -v
>
> ce
Hi
I am very new to Ceph. I have been studying for a few days for a deployment of a Ceph
cluster. I am going to deploy Ceph in a small data center where power
failure is a big problem. We have a single power supply, a single UPS and a
standby generator. So what happens if all nodes go down due to a power failure?
Will it
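For planned or expected outages, a common practice is to tell the cluster not to react while the nodes are down; a sketch using standard ceph CLI flags:

$ ceph osd set noout         # don't mark OSDs out (and trigger rebalancing) while they are down
$ ceph osd set norebalance   # optionally also suppress rebalancing
# ... shut down, power work, bring nodes back ...
$ ceph osd unset norebalance
$ ceph osd unset noout
$ ceph -s                    # check that the cluster returns to HEALTH_OK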
Hello,
On Fri, 21 Apr 2017 11:41:01 + Tobias Kropf - inett GmbH wrote:
> Hi all,
>
> we have a running Ceph cluster across 5 OSD nodes. Performance and latency are
> good. Now we have two new Supermicro OSD nodes with HBAs. OSDs 0-26 are in
> the old servers and OSDs 27-55 in the new ones. Is t