On Thu, Jul 25, 2019 at 7:48 AM Dan van der Ster <d...@vanderster.com> wrote:
>
> Hi all,
>
> In September we'll need to power down a CephFS cluster (currently
> mimic) for a several-hour electrical intervention.
>
> Having never done this before, I thought I'd check with the list.
> Here's our planned procedure:
>
> 1. Unmount /cephfs from all HPC clients.
> 2. ceph osd set noout
> 3. Wait until there is zero I/O on the cluster.
> 4. Stop all MDSs (active + standby).

You can also use `ceph fs set <name> down true` which will flush all
metadata/journals, evict any lingering clients, and leave the file
system down until manually brought back up even if there are standby
MDSs available.
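Putting the two together, a minimal sketch of the power-down and power-up sequence might look like the following. This assumes the file system is named `cephfs` (substitute your own `ceph fs ls` name); it is an illustration of the commands discussed above, not a tested runbook for your cluster.

```shell
#!/bin/sh
# --- Power-down (after unmounting all clients) ---

# Prevent OSDs from being marked out during the outage
ceph osd set noout

# Flush metadata/journals, evict lingering clients, and keep the
# file system down even if standby MDSs are available
ceph fs set cephfs down true

# (power off OSD/MON hosts, do the electrical intervention)

# --- Power-up ---

# Bring the file system back; an MDS will be picked up automatically
ceph fs set cephfs down false

# Allow normal out-marking again
ceph osd unset noout
```

Checking `ceph fs status` and `ceph -s` after each step is a sensible way to confirm the cluster is in the state you expect before proceeding.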

-- 
Patrick Donnelly, Ph.D.
He / Him / His
Senior Software Engineer
Red Hat Sunnyvale, CA
GPG: 19F28A586F808C2402351B93C3301A3E258DD79D
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com