Hi all,
I'm attempting to get a small Mimic cluster running on ARM, starting
with a single node. Since there don't seem to be any Debian ARM64
packages in the official Ceph repository, I had to build from source,
which was fairly straightforward.
After installing the .deb packages that I built an
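A rough sketch of one way to produce the .debs from the Ceph tree (assuming the
debian/ packaging that ships with the source; the tag and flags below are
illustrative, not necessarily what was used above):

git clone --recursive -b v13.2.2 https://github.com/ceph/ceph.git
cd ceph
./install-deps.sh                  # pull in the build dependencies
dpkg-buildpackage -us -uc -b       # build unsigned binary packages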
I found this discussion between Wido and Florian (two really good Ceph folks),
but it doesn't seem to go very deep into sharding (something I would like to
know more about).
https://www.spinics.net/lists/ceph-users/msg24420.html
None of my clusters are using multi-site sync (was thinking ab
Yes, I was referring to Windows Explorer copies, as that is what users
typically use, but we also see it with Windows robocopy set to 32 threads.
The difference is that we may go from a peak of 300 MB/s, to a more normal
100 MB/s, to a stall at 0 to 30 MB/s.
About every 7-8 seconds it stalls to 0 MB/s
being re
Apparently it is presently the case that when dynamic resharding
completes, the retired bucket index shards need to be deleted manually.
We plan to change this, but it's worth checking for such objects.
Alternatively, look for other large omap "objects", e.g., sync-error.log,
if you are using multi-site sync.
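A quick way to look for them is something like the following (a sketch; the
pool name default.rgw.buckets.index is an assumption, substitute your actual
index or log pool):

ceph health detail | grep -i 'large omap'
# count omap keys per object in the index pool and show the largest ones
for obj in $(rados -p default.rgw.buckets.index ls); do
  echo "$(rados -p default.rgw.buckets.index listomapkeys "$obj" | wc -l) $obj"
done | sort -n | tail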
I didn't want to attempt anything until I had more information. I have been
tied up with secondary stuff, so we are just monitoring for now. The only
thing I could find was a setting to make the warning go away, but that
doesn't seem like a good idea, as it was identified as an issue that should
be addressed.
Hi Cephers,
I am in the process of upgrading a cluster from FileStore to BlueStore, but
I'm concerned about frequent warnings popping up against the new BlueStore
devices. I frequently see messages like this; although the specific OSD
changes, it's always on one of the few hosts I've converted
I've heard you can do that with the manager's balancer module. You can set
the maximum fraction of misplaced objects you are willing to tolerate, and
the balancer will work the new node in until the cluster is balanced,
without moving more data than your setting allows at any one time.
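For concreteness, a minimal sketch of how that is wired up on a
Luminous/Mimic-era cluster (the 0.05 ratio and upmap mode are assumptions,
pick what suits your cluster; upmap also requires require-min-compat-client
to be luminous or newer):

ceph mgr module enable balancer
ceph config-key set mgr/balancer/max_misplaced 0.05   # cap misplaced objects at ~5%
ceph balancer mode upmap
ceph balancer on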
> If the active MDS is connected to a monitor and they fail at the same time,
> the monitors can't replace the mds until they've been through their own
> election and a full mds timeout window.
So how long are we talking?
--
Bryan Henderson San Jose, California
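For a rough sense of the knobs that bound that window (a sketch, assuming a
release with the centralized config database; the defaults noted are the
usual ones, check your own cluster):

ceph config get mon mon_lease          # monitor lease, default 5 seconds
ceph config get mds mds_beacon_grace   # MDS timeout window, default 15 seconds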
When adding a node, if I increment the crush weight like this, do I get
the most efficient data transfer to the 4th node?
sudo -u ceph ceph osd crush reweight osd.23 1
sudo -u ceph ceph osd crush reweight osd.24 1
sudo -u ceph ceph osd crush reweight osd.25 1
sudo -u ceph ceph osd crush rewei
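One approach (a sketch, not necessarily the most efficient; the step sizes
and the health check are assumptions) is to ramp the new OSDs' crush weights
up in stages and let backfill settle between steps rather than jumping
straight to the final weight; the mgr balancer mentioned elsewhere in the
thread achieves much the same thing without hand-tuning:

for w in 0.25 0.5 0.75 1.0; do
    for id in 23 24 25; do
        sudo -u ceph ceph osd crush reweight osd.$id $w
    done
    # wait for the cluster to settle before taking the next step
    while ! ceph health | grep -q HEALTH_OK; do sleep 60; done
done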