cephmailinglist writes:
> e) find /var/lib/ceph/ ! -uid 64045 -print0|xargs -0 chown ceph:ceph
> [...]
> [...] Also at that time one of our pools got a lot of extra data;
> those files were stored with root permissions since we had not
> restarted the Ceph daemons yet. The 'find' in step e
Hi,
We initially upgraded from Hammer to Jewel while keeping the ownership
unchanged, by adding "setuser match path =
/var/lib/ceph/$type/$cluster-$id" in ceph.conf
Later, we used the following steps to change from running as root to
running as ceph.
On the storage nodes, we ran the following
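As a rough sketch only, and not the poster's actual commands: on a default
Jewel install with systemd, the per-node switch might look like

ceph osd set noout                      # keep data from rebalancing while the OSDs are down
systemctl stop ceph-osd.target          # stop every OSD on this node
chown -R ceph:ceph /var/lib/ceph /var/log/ceph
# remove the "setuser match path" line from ceph.conf, then:
systemctl start ceph-osd.target
ceph osd unset noout                    # once the OSDs have rejoined

repeated node by node, waiting for the cluster to return to HEALTH_OK in
between.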
On 03/13/2017 02:02 PM, Christoph Adomeit wrote:
Christoph,
Thanks for the detailed upgrade report.
We have another scenario: We have already upgraded to Jewel 10.2.6, but
we are still running all our monitors and OSD daemons as root using the
setuser match path directive.
What would be the recommended way to have all daemons running as ceph:ceph user?
On 03/12/2017 07:54 PM, Florian Haas wrote:
Florian,
For others following this thread who still have the hammer→jewel upgrade
ahead: there is a ceph.conf option you can use here; no need to fiddle
with the upstart scripts.
setuser match path = /var/lib/ceph/$type/$cluster-$id
Ah, I did not k
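For anyone wanting to keep running as root for now, a sketch of how that
option can sit in ceph.conf (putting it under [global] is our assumption):

[global]
setuser match path = /var/lib/ceph/$type/$cluster-$id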
Thanks for the detailed upgrade report.
We have another scenario: We have already upgraded to Jewel 10.2.6, but
we are still running all our monitors and OSD daemons as root using the
setuser match path directive.
What would be the recommended way to have all daemons running as ceph:ceph user?
On 03/13/2017 11:07 AM, Dan van der Ster wrote:
On Sat, Mar 11, 2017 at 12:21 PM, wrote:
The next and biggest problem we encountered had to do with the CRC errors on
the OSD map. On every map update, the OSDs that were not yet upgraded got that
CRC error and asked the monitor for a full OSD map instead of just a delta update.
On Sat, Mar 11, 2017 at 12:21 PM, wrote:
>
> The next and biggest problem we encountered had to do with the CRC errors on
> the OSD map. On every map update, the OSDs that were not yet upgraded got
> that CRC error and asked the monitor for a full OSD map instead of just a
> delta update. At f
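One quick way to see whether not-yet-upgraded OSDs are hitting this is to grep
the logs for the "failed to encode map ... with expected crc" message, e.g. (a
sketch, assuming the default log location):

grep 'failed to encode map' /var/log/ceph/*.log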
Hello,
On Sun, 12 Mar 2017 19:54:10 +0100 Florian Haas wrote:
> On Sat, Mar 11, 2017 at 12:21 PM, wrote:
> > The upgrade of our biggest cluster, nr 4, did not go without
> > problems. Since we were expecting a lot of "failed to encode map
> > e with expected crc" messages, we disabled clog to
Hello,
On Sun, 12 Mar 2017 19:52:12 +1000 Brad Hubbard wrote:
> On Sun, Mar 12, 2017 at 6:36 AM, Christian Theune
> wrote:
> > Hi,
> >
> > thanks for that report! Glad to hear a mostly happy report. I’m still on the
> > fence … ;)
> >
> > I have had reports that Qemu (librbd connections) will
On Sat, Mar 11, 2017 at 12:21 PM, wrote:
> The upgrade of our biggest cluster, nr 4, did not go without
> problems. Since we were expecting a lot of "failed to encode map
> e with expected crc" messages, we disabled clog to monitors
> with 'ceph tell osd.* injectargs -- --clog_to_monitors=false'
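As a sketch of our own (not from the report): the setting can be flipped back
once every OSD runs Jewel, or kept disabled across restarts during the upgrade
window via ceph.conf:

ceph tell osd.* injectargs -- --clog_to_monitors=true   # re-enable cluster logging
# or, in ceph.conf while the upgrade is in progress:
#   clog to monitors = false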
On Sat, 11 Mar 2017, Udo Lembke wrote:
> On 11.03.2017 12:21, cephmailingl...@mosibi.nl wrote:
> > ...
> >
> >
> > e) find /var/lib/ceph/ ! -uid 64045 -print0|xargs -0 chown ceph:ceph
> > ... the 'find' in step e found so many files that xargs (the shell)
> > could not handle it (too many a
On Sun, Mar 12, 2017 at 6:36 AM, Christian Theune wrote:
> Hi,
>
> thanks for that report! Glad to hear a mostly happy report. I’m still on the
> fence … ;)
>
> I have had reports that Qemu (librbd connections) will require
> updates/restarts before upgrading. What was your experience on that side
On 03/11/2017 09:49 PM, Udo Lembke wrote:
Hi Udo,
Perhaps a "find /var/lib/ceph/ ! -uid 64045 -exec chown
ceph:ceph {} \;" would do a better job?!
We did exactly that (and also tried other combinations) and that is a
workaround for the 'argument too long' problem, but then it would call
an exec for every single file
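A middle ground (our suggestion, not something proposed in the thread) is
find's '+' terminator, which batches the file names into as few chown calls as
the kernel's argument-length limit allows, much like xargs, but without an
exec per file:

find /var/lib/ceph/ ! -uid 64045 -exec chown ceph:ceph {} +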
On 03/11/2017 09:36 PM, Christian Theune wrote:
Hello,
I have had reports that Qemu (librbd connections) will require
updates/restarts before upgrading. What was your experience on that
side? Did you upgrade the clients? Did you start using any of the new
RBD features, like fast diff?
We ha
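If one does decide to enable the new features on an existing image, a sketch
might look like this (the image spec rbd/vm-disk-1 is made up, and the
features are enabled in dependency order):

rbd feature enable rbd/vm-disk-1 exclusive-lock
rbd feature enable rbd/vm-disk-1 object-map
rbd feature enable rbd/vm-disk-1 fast-diff
rbd object-map rebuild rbd/vm-disk-1   # build the map for already existing data

Keep in mind that clients too old to understand a feature can no longer open
the image once it is enabled, which ties back into the client-upgrade question.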
Hi,
thanks for the useful info.
On 11.03.2017 12:21, cephmailingl...@mosibi.nl wrote:
>
> Hello list,
>
> A week ago we upgraded our Ceph clusters from Hammer to Jewel and with
> this email we want to share our experiences.
>
> ...
>
>
> e) find /var/lib/ceph/ ! -uid 64045 -print0|xargs -0
Hi,
thanks for that report! Glad to hear a mostly happy report. I’m still on the
fence … ;)
I have had reports that Qemu (librbd connections) will require updates/restarts
before upgrading. What was your experience on that side? Did you upgrade the
clients? Did you start using any of the new R
Hello list,
A week ago we upgraded our Ceph clusters from Hammer to Jewel and with
this email we want to share our experiences.
We have four clusters:
1) Test cluster for all the fun things, completely virtual.
2) Test cluster for OpenStack: 3 monitors and 9 OSDs, all bare metal
3) Cluster