We upgraded our Ceph cluster from Hammer to Luminous and it is running
fine. Post upgrade we live-migrated all our OpenStack instances (not 100%
sure). Currently we still see 1658 clients on the Hammer version. To track
the clients down we increased the debug levels: debug_mon=10/10,
debug_ms=1/5, debug_mon
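For reference, rather than grepping the mon debug logs, the feature summary is
usually enough to see which client releases are still connected. A minimal
sketch, assuming an admin keyring and Luminous-or-later mons; the mon id used
for the daemon socket is assumed to be the short hostname:

  # Summarize connected daemons and clients by release / feature set
  ceph features

  # Raise the mon debug levels at runtime instead of editing ceph.conf
  ceph tell mon.* injectargs '--debug_mon 10/10 --debug_ms 1/5'

  # Dump sessions on one monitor to see individual client addresses
  # (mon id assumed to be the short hostname)
  ceph daemon mon.$(hostname -s) sessions

The sessions dump lists the address of each connected client, which is what you
need in order to find the instances that were never live-migrated.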
Hi all,
I am jumping in with a similar (same?) issue. Running Mimic with a pool
using EC 4+2, with ~280 OSDs, all 8 TB in size.
We are far from an optimal distribution by any metric (PGs per OSD or
space used per OSD), but the balancer claims the distribution is optimal.
I took a look at the ticket but d
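For comparison, these are the standard commands I use to judge the actual
distribution against what the balancer reports; nothing here is specific to my
cluster:

  # Per-OSD PG count and utilization, grouped by the CRUSH tree
  ceph osd df tree

  # What the balancer module thinks of the current layout
  ceph balancer status

  # Score of the current distribution (lower is better)
  ceph balancer eval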
Hey guys,
I’m struggling with the ceph-volume command in Nautilus 14.2.6. I have 12 disks
on each server, 3 of them SSDs (sda, sdb, sdc) and 9 spinning disks (sdd ..
sdl). The initial deploy with ceph-volume batch works fine: one SSD is used for
the WAL and DB of 3 spinning disks. But running the ‘cep
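For context, the initial deploy was roughly the invocation below; the device
names are just the ones from my servers, and --report only prints the intended
layout without creating anything:

  # One SSD (sda) carries the DB/WAL for three spinning disks; batch
  # splits rotational and non-rotational devices automatically
  ceph-volume lvm batch --report --bluestore /dev/sda /dev/sdd /dev/sde /dev/sdf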
Dear Ilya,
I had exactly the same problem with authentication of cephfs clients on a
mimic-13.2.2 cluster. The key created with "ceph fs authorize ..." did not
grant access to the data pool. I ended up adding "rw" access to this pool by
hand.
Following up on your remark about pool tags, could
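For the record, "by hand" on my side was roughly the following; the client and
pool names are from my cluster and will differ on yours:

  # Key originally created the documented way
  ceph fs authorize cephfs client.cephfs_user / rw

  # ...then the OSD cap widened to name the data pool explicitly
  # (client.cephfs_user and cephfs_data are example names)
  ceph auth caps client.cephfs_user \
      mds 'allow rw' \
      mon 'allow r' \
      osd 'allow rw pool=cephfs_data'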
On Fri, Jan 24, 2020 at 11:48:10AM +, Stolte, Felix wrote:
>Hey guys,
>
>I’m struggling with the ceph-volume command in nautilus 14.2.6. I have 12
>disks on each server, 3 of them ssds (sda,sdb,sdc) and 9 spinning disks (sdd
>.. sdl). Initial deploy with ceph-volume batch works fine, one ss
On 23.01.20 at 15:51, Ilya Dryomov wrote:
On Wed, Jan 22, 2020 at 8:58 AM Yoann Moulin wrote:
Hello,
On a fresh install (Nautilus 14.2.6) deployed with the ceph-ansible playbook
stable-4.0, I have an issue with CephFS. I can create a folder, I can
create empty files, but cannot write data on lik
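In case it is useful while debugging: creating directories and empty files only
touches the metadata pool through the MDS, so a failure that appears exactly at
the first data write usually points at the client's OSD caps (or the data
pool's application tag) rather than at the mount itself. Two quick checks, with
client and pool names that are assumptions about this deployment:

  # Which caps does the client actually hold?
  # (client.cephfs_user and cephfs_data are example names)
  ceph auth get client.cephfs_user

  # Does the data pool carry the cephfs application metadata the caps match on?
  ceph osd pool application get cephfs_data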
Hi Ceph Community.
We currently have a Luminous cluster running, and some machines are still on
Ubuntu 14.04.
We are looking to upgrade these machines to 18.04, but the only upgrade path for
Luminous with the Ceph repo is through 16.04.
It is doable to get to Mimic, but then we have to upgrade all those
I applied those packages for the same reason on a staging cluster and so far so
good.
> On Jan 24, 2020, at 9:15 AM, Atherion wrote:
>
>
> Hi Ceph Community.
> We currently have a luminous cluster running and some machines still on
> Ubuntu 14.04
> We are looking to upgrade these machines
There are two bugs that can cause these application tags to be
missing: one of them is fixed (but old pools aren't fixed
automatically); the other is https://tracker.ceph.com/issues/43061,
which happens if you create the cephfs pools manually.
You can fix the pools like this:
ceph osd pool applica
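Spelled out, the fix is of this shape (the pool and filesystem names are
placeholders for your own):

  ceph osd pool application set <data pool> cephfs data <fs name>
  ceph osd pool application set <metadata pool> cephfs metadata <fs name>

This sets the application metadata that the OSD caps generated by "ceph fs
authorize" match on ('allow rw tag cephfs data=<fs name>').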
On Fri, Jan 24, 2020 at 2:10 PM Yoann Moulin wrote:
>
> On 23.01.20 at 15:51, Ilya Dryomov wrote:
> > On Wed, Jan 22, 2020 at 8:58 AM Yoann Moulin wrote:
> >>
> >> Hello,
> >>
> >> On a fresh install (Nautilus 14.2.6) deploy with ceph-ansible playbook
> >> stable-4.0, I have an issue with ceph
On Sat, Jan 25, 2020 at 8:42 AM Ilya Dryomov wrote:
>
> On Fri, Jan 24, 2020 at 2:10 PM Yoann Moulin wrote:
> >
> > On 23.01.20 at 15:51, Ilya Dryomov wrote:
> > > On Wed, Jan 22, 2020 at 8:58 AM Yoann Moulin wrote:
> > >>
> > >> Hello,
> > >>
> > >> On a fresh install (Nautilus 14.2.6) deploy
This command is awesome, thank you!
--Pardhiv
On Fri, Jan 24, 2020 at 1:55 AM Konstantin Shalygin wrote:
> We upgraded our Ceph cluster from Hammer to Luminous and it is running
> fine. Post upgrade we live migrated all our Openstack instances (not 100%
> sure). Currently we see 1658 clients s