[ceph-users] data loss on full file system?

2020-01-27 Thread Håkan T Johansson
Hi, for test purposes, I have set up two 100 GB OSDs, one taking a data pool and the other metadata pool for cephfs. Am running 14.2.6-1-gffd69200ad-1 with packages from https://mirror.croit.io/debian-nautilus Am then running a program that creates a lot of 1 MiB files by calling fopen()
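The test described above can be sketched as follows; this is a minimal Python sketch of the same idea (the original program reportedly used C's fopen(); the file count, size, and mount path here are assumptions for illustration):

```python
import os

def write_files(target_dir, count, size=1024 * 1024):
    """Create `count` files of `size` bytes each, mimicking the test:
    open, write 1 MiB, close -- with no fsync, so on a full file system
    write-back errors may only surface at close time, or be lost."""
    os.makedirs(target_dir, exist_ok=True)
    payload = b"\0" * size
    for i in range(count):
        path = os.path.join(target_dir, f"file_{i:06d}.dat")
        with open(path, "wb") as f:
            f.write(payload)

# Example against a hypothetical CephFS mount:
# write_files("/mnt/cephfs/test", 1000)
```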

[ceph-users] Re: data loss on full file system?

2020-02-02 Thread Håkan T Johansson
On Mon, Jan 27, 2020 at 9:11 PM Håkan T Johansson wrote: Hi, for test purposes, I have set up two 100 GB OSDs, one taking a data pool and the other metadata pool for cephfs. Am running 14.2.6-1-gffd69200ad-

[ceph-users] Re: data loss on full file system?

2020-02-05 Thread Håkan T Johansson
On Mon, 3 Feb 2020, Paul Emmerich wrote: On Sun, Feb 2, 2020 at 9:35 PM Håkan T Johansson wrote: Changing cp (or whatever standard tool is used) to call fsync() before each close() is not an option for a user. Also, doing that would lead to terrible performance generally. Just tested
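The fsync()-before-close pattern under discussion can be sketched like this; a minimal Python illustration (not what cp actually does, which is the thread's point) of how fsync surfaces write-back errors such as ENOSPC to the caller:

```python
import os

def write_with_fsync(path, data):
    """Write data and fsync before close, so write-back errors
    (e.g. ENOSPC on a full file system) are raised to the caller
    instead of being silently dropped after close."""
    with open(path, "wb") as f:
        f.write(data)
        f.flush()             # push Python's userspace buffer to the OS
        os.fsync(f.fileno())  # force write-back; raises OSError on failure
```

As noted above, forcing every file copy through this path would be costly: fsync blocks until the data is durable, so doing it per file serializes on storage latency.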

[ceph-users] cephfs file layouts, empty objects in first data pool

2020-02-09 Thread Håkan T Johansson
Hi, running 14.2.6, debian buster (backports). Have set up a cephfs with 3 data pools and one metadata pool: myfs_data, myfs_data_hdd, myfs_data_ssd, and myfs_metadata. Using ceph.dir.layout.pool, the data of all files is stored in either myfs_data_hdd or myfs_data_ssd.
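Directory layouts of this kind are set through CephFS's virtual extended attributes; a hedged sketch of how a directory can be pointed at a specific data pool (pool names from the message above; the mount path and helper name are assumptions):

```python
import os

def set_dir_pool(dirpath, pool_name):
    """Point the CephFS directory layout at `pool_name`, so new files
    created under `dirpath` store their data in that pool; equivalent to
    `setfattr -n ceph.dir.layout.pool -v <pool> <dir>`."""
    os.setxattr(dirpath, "ceph.dir.layout.pool", pool_name.encode())

# Example (hypothetical mount point; requires an actual CephFS mount):
# set_dir_pool("/mnt/cephfs/hdd-data", "myfs_data_hdd")
```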

[ceph-users] Re: cephfs file layouts, empty objects in first data pool

2020-02-10 Thread Håkan T Johansson
On Mon, 10 Feb 2020, Gregory Farnum wrote: On Sun, Feb 9, 2020 at 3:24 PM Håkan T Johansson wrote: Hi, running 14.2.6, debian buster (backports). Have set up a cephfs with 3 data pools and one metadata pool: myfs_data, myfs_data_hdd, myfs_data_ssd, and myfs_metadata

[ceph-users] Re: cephfs file layouts, empty objects in first data pool

2020-02-10 Thread Håkan T Johansson
On Mon, 10 Feb 2020, Gregory Farnum wrote: On Mon, Feb 10, 2020 at 12:29 AM Håkan T Johansson wrote: On Mon, 10 Feb 2020, Gregory Farnum wrote: On Sun, Feb 9, 2020 at 3:24 PM Håkan T Johansson wrote: Hi, running 14.2.6, debian buster (backports). Have set up a

[ceph-users] Re: Monitors' election failed on VMs : e4 handle_auth_request failed to assign global_id

2020-03-10 Thread Håkan T Johansson
Note that with 6 monitors, quorum requires 4, so if only 3 are running the cluster cannot form quorum. With one of the old monitors removed there would be 5 monitors, needing a quorum of only 3. Best regards, Håkan On Tue, 10 Mar 2020, Paul Emmerich wrote: On Tue, Mar 10, 2020 at 8:18 AM Yoann Moulin wrote: I have
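The majority arithmetic above can be written down directly; Ceph monitors need a strict majority to form quorum:

```python
def quorum_size(num_monitors):
    """Strict majority needed for the monitors to form quorum:
    more than half of them must be up."""
    return num_monitors // 2 + 1

# 6 monitors -> 4 required (so 3 up is not enough);
# 5 monitors -> 3 required, which is why removing the old
# monitor lets the remaining 3 form quorum.
```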