On 04/12/2018 04:36 AM, 宗友 姚 wrote:
> Hi,
>
> For anybody who may be interested, here I share a process of locating the
> reason for ceph cluster performance slow down in our environment.
>
> Internally, we have a cluster with 1.1PB capacity, 800TB used, and about
> 500TB of raw user data. E
On 04/12/2018 11:21 AM, 宗友 姚 wrote:
Currently, this can only be done by hand. Maybe we need some scripts to handle
this automatically.
Mixed hosts, i.e. half old disks + half new disks per host, are better than
separate "old hosts" and "new hosts" in your case.
k
Is that not obvious? The 8TB disk is handling twice as much as the 4TB one. AFAIK
there is not a linear relationship between the IOPS of a disk and its size.
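As a rough back-of-the-envelope illustration (the numbers are assumptions, not
measurements): with CRUSH weights proportional to capacity, an 8TB OSD gets
roughly twice the PGs of a 4TB OSD and therefore roughly twice the client IO,
while a 7200rpm spindle delivers on the order of 100-150 random IOPS no matter
how big it is. So the IOPS available per TB halve as the capacity doubles, and
the big disks hit their ceiling first.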
But the XFS defragmentation point is interesting; how does it
relate/compare to BlueStore?
-----Original Message-----
From: 宗友 姚 [mailto:yaoz
Hi,
thanks, but unfortunately it's not the thing I suspected :(
Anyway, there is something wrong with your snapshots; the log also contains
a lot of entries like this:
2018-04-09 06:58:53.703353 7fb8931a0700 -1 osd.28 pg_epoch: 88438 pg[0.5d(
v 88438'223279 (86421'221681,88438'223279] local-lis/l
Usually the problem is not that you are missing snapshot data, but that you
have too many snapshots, so your snapshots are probably fine. You're just
wasting space.
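If you want to see how many snapshots have piled up per image, something like
this should do it (pool and image names are placeholders):

rbd snap ls <pool>/<image>
rbd du <pool>/<image>

The second command also shows how much space the image and its snapshots are
holding onto.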
Paul
2018-04-10 16:07 GMT+02:00 Marc Roos :
>
> Hi Paul,
>
> This is a small test cluster, and the rbd pool is replicated. I am
> h
Hi,
you can also set the primary_affinity to 0.5 on the 8TB disks to reduce the
read load on them (this way you don't waste 50% of the space).
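For reference, that would be something like the following for each 8TB OSD
(osd.12 is just an example ID; on some releases you may first need to allow it
via the mon_osd_allow_primary_affinity option):

ceph osd primary-affinity osd.12 0.5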
Udo
On 2018-04-12 04:36, 宗友 姚 wrote:
> Hi,
> For anybody who may be interested, here I share a process of locating
> the reason for ceph cluster performance slow down in our environment.
Oh, that is very good to hear. So how should I be cleaning this up? I
read a post from Sage saying that scrubbing does not take care of this.
Should I be dumping the logs for objects like
17:e80576a8:::rbd_data.2cc7df2ae8944a.09f8:27 and trying to
delete these manually?
-----Original Message-----
If you run "partprobe" after you resize in your second example, is the
change visible in "parted"?
On Wed, Apr 11, 2018 at 11:01 PM, Alex Gorbachev wrote:
> On Wed, Apr 11, 2018 at 2:13 PM, Jason Dillaman wrote:
>> I've tested the patch on both 4.14.0 and 4.16.0 and it appears to
>> function co
On Wed, 2018-04-11 at 17:10 -0700, Patrick Donnelly wrote:
> No longer recommended. See:
> http://docs.ceph.com/docs/master/cephfs/upgrading/#upgrading-the-mds-
> cluster
Shouldn't docs.ceph.com/docs/luminous/cephfs/upgrading include that
too?
I can't comment directly on how XFS fragmentation relates to BlueStore,
but I had a similar issue probably 2-3 years ago where XFS fragmentation was
causing a significant degradation in cluster performance. The use case was RBDs
with lots of snapshots created and deleted at regular intervals.
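In case it is useful to anyone hitting this today, the usual way to check and
clean up XFS fragmentation is something like the following (device and mount
paths are placeholders for the OSD's XFS data partition):

xfs_db -c frag -r /dev/sdX1            # report the fragmentation factor
xfs_fsr -v /var/lib/ceph/osd/ceph-N    # defragment the mounted filesystem

xfs_fsr can run while the OSD is online, but it adds IO load, so a quiet period
is the time to do it.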
On Thu, Apr 12, 2018 at 7:57 AM, Jason Dillaman wrote:
> If you run "partprobe" after you resize in your second example, is the
> change visible in "parted"?
No, partprobe does not help:
root@lumd1:~# parted /dev/nbd2 p
Model: Unknown (unknown)
Disk /dev/nbd2: 2147MB
Sector size (logical/physica
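(One more data point that might help narrow it down: checking what size the
kernel itself reports for the device, e.g.

blockdev --getsize64 /dev/nbd2

would show whether the new size never reached the kernel, or whether it is only
parted/partprobe that is not picking it up. Just a suggestion, I have not
verified this against rbd-nbd.)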
Hi,
I am still struggling with my performance issue, and I noticed that "ceph
health detail" does not provide details about where the slow requests are.
Some other people have noticed that too
( https://www.spinics.net/lists/ceph-users/msg43574.html )
What am I missing, and/or how/where to find the OSD w
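A few things that might be worth checking, assuming a Luminous cluster (osd.N
is a placeholder for whichever OSD you suspect):

ceph osd perf                          # OSDs with unusually high commit/apply latency
ceph daemon osd.N dump_historic_ops    # on the OSD's host: recent slow ops and where they spent time
grep 'slow request' /var/log/ceph/ceph-osd.N.log

These are just the places to look first, not a guaranteed way to find the
culprit.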
On Thu, Apr 12, 2018 at 5:05 AM, Mark Schouten wrote:
> On Wed, 2018-04-11 at 17:10 -0700, Patrick Donnelly wrote:
>> No longer recommended. See:
>> http://docs.ceph.com/docs/master/cephfs/upgrading/#upgrading-the-mds-
>> cluster
>
> Shouldn't docs.ceph.com/docs/luminous/cephfs/upgrading include t
Hi ceph-users,
I am trying to figure out how to go about making the ceph balancer do its magic, as
I have some pretty unbalanced distribution across OSDs currently, both SSD and
HDD.
Cluster is 12.2.4 on Ubuntu 16.04.
All OSDs have been migrated to bluestore.
Specifically, my HDDs are the main
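In case the command sequence is the sticking point, this is roughly the
sequence I would expect to need on 12.2.x (hedging, since I have not run it on
this exact cluster):

ceph mgr module enable balancer                    # if not already enabled
ceph balancer status
ceph balancer mode crush-compat                    # or 'upmap' if all clients are Luminous-capable
ceph osd set-require-min-compat-client luminous    # only needed for upmap mode
ceph balancer on

The balancer then moves data incrementally until the distribution evens out.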
Hello,
I think your observations suggest that, to a first approximation,
filling drives with bytes to the same absolute level is better for
performance than filling drives to the same percentage full. Assuming
random distribution of PGs, this would cause the smallest drives to be
as active a
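(Rough numbers purely for illustration: a 4TB and an 8TB drive each holding
3TB would see roughly the same IO under a random distribution of PGs, whereas
filling both to 75% puts 3TB on one and 6TB on the other, sending about twice
the IO to the 8TB drive even though its spindle has no more IOPS to give.)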
On 13 April 2018 05:32, Chad William Seys wrote:
> Hello,
> I think your observations suggest that, to a first approximation,
> filling drives with bytes to the same absolute level is better for
> performance than filling drives to the same percentage full. Assuming
> random distribution of PGs, thi