You'll find it said time and time again on the mailing list: avoid disks of
different sizes in the same cluster. It's a headache. It's not impossible,
and it's not even overly hard to pull off... but it's very easy to make a
mess and give yourself a lot of headaches. It will also make it harder to
diagnose per
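A quick way to see how evenly data lands on disks of different sizes is
something like the following (osd.12 and the weight are only placeholders
for whatever device needs adjusting):

# Per-OSD size, CRUSH weight and utilisation; mixed disk sizes show up as
# very different %USE values under the same host.
ceph osd df tree

# CRUSH weight normally tracks raw capacity in TiB, so a mis-weighted disk
# can be corrected by hand, e.g.:
ceph osd crush reweight osd.12 3.64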
Dear Cephalopodians,
in our cluster (CentOS 7.4, EC Pool, Snappy compression, Luminous 12.2.4),
we often have all (~40) clients accessing one file in readonly mode, even with
multiple processes per client doing that.
Sometimes (I do not yet know when, nor why!) the MDS ends up in a situation
Dear Cephalopodians,
a small addition.
As far as I know, the I/O the user is performing is based on the following
directory structure:
datafolder/some_older_tarball.tar.gz
datafolder/sometarball.tar.gz
datafolder/processing_number_2/
datafolder/processing_number_3/
datafolder/processing_number_
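When the MDS gets stuck like that, it can help to look at what the clients
and the MDS itself are doing; a minimal check, assuming the admin socket is
reachable on the active MDS host and mds.<name> stands in for the local
daemon name:

# Overall filesystem / MDS / client picture
ceph fs status

# On the active MDS host: client sessions and any requests stuck in flight
ceph daemon mds.<name> session ls
ceph daemon mds.<name> dump_ops_in_flight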
I have 65TB stored on 24 OSDs on 3 hosts (8 OSDs per host). SSD journals
and spinning disks. Our performance before was acceptable for our purposes
- 300+MB/s simultaneous transmit and receive. Now that we're up to about
50% of our total storage capacity (65/120TB, say), the write performance i
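To separate client-side effects from the cluster itself, it can be worth
measuring raw RADOS write throughput and per-OSD latency directly; a rough
sketch, with <pool> as a placeholder for a test pool that is safe to write
to:

# 30-second write benchmark, keeping the objects so a read pass can follow;
# clean up afterwards.
rados bench -p <pool> 30 write --no-cleanup
rados bench -p <pool> 30 seq
rados -p <pool> cleanup

# Commit/apply latency per OSD, to spot a slow journal or disk
ceph osd perf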
Hi everyone,
I found myself in a situation where dynamic resharding and writing data to a
bucket containing a little more than 5M objects at the same time caused data
corruption, rendering the entire bucket unusable. I tried several ways to
fix the bucket and ended up ditching it.
Wh
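For anyone hitting something similar, the resharding and index state of a
bucket can at least be inspected before deciding whether it is salvageable;
a minimal sketch, with <bucket> as a placeholder:

# Current shard layout and object counts
radosgw-admin bucket stats --bucket=<bucket>

# Pending or running dynamic reshard jobs
radosgw-admin reshard list
radosgw-admin reshard status --bucket=<bucket>

# Consistency check of the bucket index (only add --fix once you are sure)
radosgw-admin bucket check --bucket=<bucket>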
Evening,
When attempting to create an OSD we receive the following error.
[ceph-admin@ceph-storage3 ~]$ sudo ceph-volume lvm create --bluestore --data /dev/sdu
Running command: ceph-authtool --gen-print-key
Running command: ceph --cluster ceph --name client.bootstrap-osd --keyring
/var/lib/ceph
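Before retrying, it is usually worth checking what ceph-volume already
knows about the device and whether a previous attempt left LVM metadata
behind; a sketch, assuming /dev/sdu holds no data you need:

# What ceph-volume thinks already exists
sudo ceph-volume lvm list

# Remove leftover LVM/Ceph metadata from a failed create (destroys anything
# on the device!)
sudo ceph-volume lvm zap /dev/sdu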
On Thu, Apr 12, 2018 at 9:38 AM, Alex Gorbachev wrote:
> On Thu, Apr 12, 2018 at 7:57 AM, Jason Dillaman wrote:
>> If you run "partprobe" after you resize in your second example, is the
>> change visible in "parted"?
>
> No, partprobe does not help:
>
> root@lumd1:~# parted /dev/nbd2 p
> Model:
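For comparison, it can help to check whether the kernel's idea of the
device size matches what librbd reports after the resize; a quick sketch,
with <pool>/<image> as placeholders for the mapped image:

# Size according to librbd
rbd info <pool>/<image>

# Size and partition table according to the kernel
blockdev --getsize64 /dev/nbd2
blockdev --rereadpt /dev/nbd2
parted /dev/nbd2 print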
Hello,
On Fri, 13 Apr 2018 11:59:01 -0500 Robert Stanford wrote:
> I have 65TB stored on 24 OSDs on 3 hosts (8 OSDs per host). SSD journals
> and spinning disks. Our performance before was acceptable for our purposes
> - 300+MB/s simultaneous transmit and receive. Now that we're up to about