Hi all,
is using jemalloc still recommended for Ceph?
There are multiple sites (e.g.
https://ceph.com/geen-categorie/the-ceph-and-tcmalloc-performance-story/) from
2015 where jemalloc
is praised for higher performance but I found a bug report that Bluestore
crashes when used with jemalloc.
Re
ou do!).
Thanks,
Mark
On 07/05/2018 08:08 AM, Uwe Sauter wrote:
Hi all,
is using jemalloc still recommended for Ceph?
There are multiple sites (e.g. https://ceph.com/geen-categorie/the-ceph-and-tcmalloc-performance-story/) from 2015
where jemalloc
is praised for higher performance but I found a bug report that Bluestore crashes when used with jemalloc.
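For anyone wondering which allocator their OSDs actually use, a quick check (a sketch; the
binary path is the packaging default and may differ on your distro):

ldd /usr/bin/ceph-osd | grep -Ei 'tcmalloc|jemalloc'
# or, for a running OSD, look at the libraries it has actually mapped:
grep -Ei 'tcmalloc|jemalloc' /proc/$(pidof -s ceph-osd)/maps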
Hi Glen,
about 16h ago there has been a notice on this list with subject "IMPORTANT: broken luminous 12.2.6 release in repo, do
not upgrade" from Sage Weil (main developer of Ceph).
Quote from this notice:
"tl;dr: Please avoid the 12.2.6 packages that are currently present on
download.ceph.com
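To confirm what your daemons are actually running before doing anything (a sketch; the package
query assumes an apt-based system and the package name ceph-osd):

ceph versions                # per-component summary of running mon/mgr/osd/mds versions
ceph tell osd.* version      # per-OSD, if you want the detail
apt-cache policy ceph-osd    # installed vs. candidate package version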
I asked a similar question about 2 weeks ago, subject "jemalloc / Bluestore".
Have a look at the archives
Regards,
Uwe
On 17.07.2018 at 15:27, Robert Stanford wrote:
> Looking here:
> https://ceph.com/geen-categorie/the-ceph-and-tcmalloc-performance-story/
>
> I see that it was a g
Given your formula, you would have 512 PGs in total. Instead of dividing that
evenly you could also do
128 PGs for pool-1 and 384 PGs for pool-2, which gives you 1/4 and 3/4 of total PGs. This might not be 100% optimal for
the pools but keeps the calculated total PG count and the 100 PG/OSD target.
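Roughly the arithmetic behind that, as a sketch (the OSD count and replica size are assumptions,
the pool names are taken from this thread):

OSDS=16; SIZE=3
echo $(( OSDS * 100 / SIZE ))          # ~533 raw target at 100 PG/OSD, rounded down to 512
ceph osd pool set pool-1 pg_num 128    # 1/4 of 512
ceph osd pool set pool-2 pg_num 384    # 3/4 of 512
# on Luminous and older, also set pgp_num to the same values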
Dear community,
TL;DR: Cluster runs fine with kernel 4.13, produces slow_requests with kernel
4.15. How to debug?
I'm running a combined Ceph / KVM cluster consisting of 6 hosts of 2 different
kinds (details at the end).
The main difference between those hosts is CPU generation (Westmere /
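To see which OSDs are implicated while the warning is active, something like this might help
(a sketch; the interval and the OSD id are examples):

watch -n 10 'ceph health detail | grep -i slow'   # lists the OSDs with blocked/slow requests
ceph daemon osd.3 dump_ops_in_flight              # run on the host of an implicated OSD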
Hi list,
Does any documentation exist that explains the structure of Ceph log files, other
than the source code?
Thanks,
Uwe
I'm also experiencing slow requests, though I cannot pin them on scrubbing.
Which kernel do you run? Would you be able to test against the same kernel with Spectre/Meltdown mitigations disabled
("noibrs noibpb nopti nospectre_v2" as boot options)?
Uwe
On 05.09.18 at 19:30, Brett
Hi folks,
I'm currently chewing on an issue regarding "slow requests are blocked". I'd
like to identify the OSD that is causing those events
once the cluster is back to HEALTH_OK (as I have no monitoring yet that would
get this info in realtime).
Collecting this information could help identify
Hi Mohamad,
>> I'm currently chewing on an issue regarding "slow requests are blocked". I'd
>> like to identify the OSD that is causing those events
>> once the cluster is back to HEALTH_OK (as I have no monitoring yet that
>> would get this info in realtime).
>>
>> Collecting this information
Hi,
I'm currently chewing on an issue regarding "slow requests are blocked". I'd
like to identify the OSD that is causing those events
once the cluster is back to HEALTH_OK (as I have no monitoring yet that would
get this info in realtime).
Collecting this information could help identify agin
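After the fact, the cluster log and the OSD admin sockets are probably the best sources
(a sketch; log paths are the packaging defaults, the OSD id is an example):

grep -i 'slow request' /var/log/ceph/ceph.log          # cluster log on the mon hosts
grep -i 'slow request' /var/log/ceph/ceph-osd.*.log    # per-OSD logs on each OSD host
ceph daemon osd.3 dump_historic_ops                    # recent slowest ops kept by that OSD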
Brad,
thanks for the bug report. This is exactly the problem I am having (log-wise).
>>>
>>> You don't give any indication what version you are running but see
>>> https://tracker.ceph.com/issues/23205
>>
>>
>> the cluster is a Proxmox installation, which is based on an Ubuntu kernel.
>>
>> # ceph
On 19.05.2018 at 01:45, Brad Hubbard wrote:
On Thu, May 17, 2018 at 6:06 PM, Uwe Sauter wrote:
Brad,
thanks for the bug report. This is exactly the problem I am having (log-wise).
You don't give any indication what version you are running but see
https://tracker.ceph.com/issues/23205
Hi,
I have several VM images sitting in a Ceph pool which are snapshotted. Is there
a way to move such images from one pool to another
and preserve the snapshots?
Regards,
Uwe
use "rbd export-diff" / "rbd import-diff" to manually transfer
> an image and its associated snapshots.
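A rough sequence for one image with a single snapshot could look like this (pool and image
names are taken from this thread, the snapshot name and size are assumptions):

SRC=vms/vm-102-disk-2
DST=vdisks/vm-102-disk-2
rbd create "$DST" --size 32G                                  # match the source size (32G is an assumption)
rbd export-diff "$SRC"@snap1 - | rbd import-diff - "$DST"     # everything up to snapshot "snap1", recreates the snap on $DST
rbd export-diff --from-snap snap1 "$SRC" - | rbd import-diff - "$DST"   # changes between snap1 and the current head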
> On Wed, Nov 7, 2018 at 7:11 AM Uwe Sauter wrote:
>>
>> Hi,
>>
>> I have several VM images sitting in a Ceph pool which are snapshotted. Is
The current content is copied but not the snapshots.
What am I doing wrong? Any help is appreciated.
Thanks,
Uwe
On 07.11.18 at 14:39, Uwe Sauter wrote:
I'm still on luminous (12.2.8). I'll have a look on the commands. Thanks.
On 07.11.18 at 14:31, Jason Dillaman wrote:
(You need to also specify the data format on import,
otherwise it will assume it's copying a raw image.)
On Wed, Nov 7, 2018 at 2:38 PM Uwe Sauter wrote:
I've been reading a bit and experimenting, but it seems I'm not quite where I
want to be.
I want to migrate from pool "vms" to pool "vdisks
Looks like I'm hitting this:
http://tracker.ceph.com/issues/34536
On 07.11.18 at 20:46, Uwe Sauter wrote:
I tried that but it fails:
# rbd export --export-format 2 vms/vm-102-disk-2 - | rbd import --export-format
2 - vdisks/vm-102-disk-2
rbd: import header failed.
Importing image: 0% complete...failed.
The fix is in the
pending 13.2.3 release.
On Wed, Nov 7, 2018 at 2:46 PM Uwe Sauter wrote:
I tried that but it fails:
# rbd export --export-format 2 vms/vm-102-disk-2 - | rbd import --export-format
2 - vdisks/vm-102-disk-2
rbd: import header failed.
Importing image: 0% complete...failed.
rbd: import f
On 07.11.18 at 21:17, Alex Gorbachev wrote:
On Wed, Nov 7, 2018 at 2:38 PM Uwe Sauter wrote:
I've been reading a bit and experimenting, but it seems I'm not quite where I
want to be.
I want to migrate from pool "vms" to pool "vdisks".
# ceph osd pool ls
v
Hi,
TL;DR: In my Ceph clusters I replaced all OSD disks (HDDs of several brands and models) with Samsung 860 Pro SSDs and used
the opportunity to switch from filestore to bluestore. Now I'm seeing blocked ops in Ceph and file system freezes inside
VMs. Any suggestions?
I have two Proxmox clust
On 28.02.19 at 10:42, Matthew H wrote:
> Have you made any changes to your ceph.conf? If so, would you mind copying
> them into this thread?
No, I just deleted an OSD, replaced the HDD with an SSD and created a new OSD (with
bluestore). Once the cluster was healthy again, I
repeated this with the next OSD.
Hi all,
thanks for your insights.
Eneko,
> We tried to use a Samsung 840 Pro SSD as OSD some time ago and it was a
> no-go; it wasn't that performance was bad, it
> just didn't work for that kind of OSD use. Any HDD was better than it (the
> disk was healthy and had been used in a
> softw
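A quick way to gauge whether a given SSD copes with the small synchronous writes Ceph issues is a
single-job sync write test (a sketch; the device path is an example and the test overwrites it):

fio --name=sync-write-test --filename=/dev/sdX --direct=1 --sync=1 \
    --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based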
>
> -Original Message-
> Sent: 28 February 2019 13:56
> To: uwe.sauter...@gmail.com; Uwe Sauter; Ceph Users
> Subject: *SPAM* Re: [ceph-users] Fwd: Re: Blocked ops after
> change from filestore on HDD to bluestore on SDD
>
> "Advanced power loss
> *From:* ceph-users on behalf of Uwe
> Sauter
> *Sent:* Thursday, February 28, 2019 8:33 AM
> *To:* Marc Roos; ceph-users; vitalif
> *Subject:* Re: [ceph-users] Fwd: Re: Blocked ops after change from filestore
> on HDD to bluestore on SDD
>
> Do you have anything particular
> problems since using this also.
>
>
>
> -Original Message-
> From: Uwe Sauter [mailto:uwe.sauter...@gmail.com]
> Sent: 28 February 2019 14:34
> To: Marc Roos; ceph-users; vitalif
> Subject: Re: [ceph-users] Fwd: Re: Blocked ops after change from
> filestore on HDD to bluestore
Hi,
On 28.03.19 at 20:03, c...@elchaka.de wrote:
> Hi Uwe,
>
> On 28 February 2019 11:02:09 CET, Uwe Sauter wrote:
>> On 28.02.19 at 10:42, Matthew H wrote:
>>> Have you made any changes to your ceph.conf? If so, would you mind
>> copying them into this thread?
You could also edit your ceph-mon@.service (assuming systemd) to depend on chrony and add a line
"ExecStartPre=/usr/bin/sleep 30" to stall the startup and give chrony a chance to sync before the Mon is started.
On 16.05.19 at 17:38, Stefan Kooman wrote:
Quoting Jan Kasprzak (k...@fi.muni.cz):