Hi,
I'm just reinstalling my Ceph storage:
ubuntu 14.04, hammer, ceph-deploy 1.5.23
GPT was created with 'parted /dev/sdaf mklabel gpt'
then 'ceph-deploy --overwrite-conf osd --zap-disk --fs-type btrfs create
bd-2:/dev/sdaf' (besides others, of course)
In /etc/ceph.conf: 'osd_mkfs_options_btr
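For reference, a minimal sketch of the steps described above, assuming ceph-deploy 1.5.x defaults; the ceph.conf fragment at the end is hypothetical, since the original message is cut off there:

# label the disk, then let ceph-deploy zap it and create a btrfs OSD
parted /dev/sdaf mklabel gpt
ceph-deploy --overwrite-conf osd --zap-disk --fs-type btrfs create bd-2:/dev/sdaf

# hypothetical /etc/ceph/ceph.conf fragment (the original line is truncated)
# [osd]
# osd mkfs options btrfs = -f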
Just realised this never went to the group, sorry folks.
Is it worth me trying the FUSE driver, is that likely to make a difference in
this type of scenario? I'm still concerned whether what I'm trying to do with
CephFS is even supposed to work like this. Ignoring the Openstack/libvirt parts
On Wed, Apr 22, 2015 at 3:36 PM, Neville wrote:
> Just realised this never went to the group, sorry folks.
>
> Is it worth me trying the FUSE driver, is that likely to make a difference
> in this type of scenario? I'm still concerned whether what I'm trying to do
> with CephFS is even supposed to
Hi folks,
thanks for the feedback regarding the network questions. Currently I am trying
to work out how much memory, how many cores and how many GHz are needed for
OSD nodes and monitors.
My research so far:
OSD nodes: 2 GB RAM, 2 GHz, 1 Core (?) per OSD
+ enough power to handle the network load.
For the monitors
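To make that rule of thumb concrete, a quick back-of-the-envelope example; the 12-OSD node is an assumed example, not a figure from the thread:

# assumed example: one node with 12 OSDs, using the figures quoted above
# RAM:  12 OSDs x 2 GB            = 24 GB (plus headroom for the OS / page cache)
# CPU:  12 OSDs x 1 core @ 2 GHz  = 12 cores at >= 2 GHz
# plus whatever is needed to drive the public and cluster NICs at line rate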
Hi,
I was doing some benchmarks,
I have found a strange behaviour.
Using fio with the rbd engine, I was able to reach around 100k iops.
(OSD data in the Linux buffer cache, iostat shows 0% disk access)
Then, after restarting all osd daemons,
the same fio benchmark now shows around 300k iops.
(OSD data in the Lin
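For context, a benchmark of this kind would look roughly like the job file below; the pool and image names and the queue-depth settings are assumptions, not the exact job that was run:

cat > rbd-bench.fio <<'EOF'
[global]
ioengine=rbd
clientname=admin
pool=rbd
rbdname=fio_test
rw=randread
bs=4k
iodepth=32
numjobs=4
direct=1
time_based=1
runtime=60
group_reporting

[rbd-randread]
EOF
fio rbd-bench.fio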
On Wed, 22 Apr 2015 10:11:22 +0200 Götz Reinicke - IT Koordinator wrote:
> Hi folks,
>
> thanks for the feedback regarding the network questions. Currently I try
> to solve the question of how much memory, cores and GHz for OSD nodes
> and Monitors.
>
> My research so far:
>
> OSD nodes: 2 GB R
Hi,
here I saw some links that sound interesting to me regarding hardware
planning: https://ceph.com/category/resources/
The links redirect to Red Hat, and I can't find the content.
Maybe someone has a newer guide? I found one from 2013 as a PDF.
Regards and thanks, Götz
--
Götz Reinicke
IT-
Hi all,
With Debian jessie scheduled to be released in a few days on April 25,
many of us will be thinking of upgrading wheezy based systems to jessie.
The Ceph packages in upstream Debian repos are version 0.80.7 (i.e.,
firefly), whilst many people may be running giant or hammer by now.
I've not
Hi
I have a problem with 1 incomplete pg:
ceph -w
cluster 9cb96455-4d70-48e7-8517-8fd9447ab072
health HEALTH_WARN
1 pgs incomplete
1 pgs stuck inactive
1 pgs stuck unclean
monmap e1: 2 mons at {s1=10.10.12.151:6789/0,s2=10.10.12.152:6789/0}
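A hedged sketch of the usual first diagnostic steps for a stuck/incomplete pg; <pgid> is a placeholder for whatever 'ceph health detail' reports:

ceph health detail | grep incomplete
ceph pg dump_stuck inactive
# look at the "recovery_state" section, in particular which OSDs are being probed
ceph pg <pgid> query | less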
I wonder if it could be numa related,
I'm using centos 7.1,
and auto numa balancing is enabled
cat /proc/sys/kernel/numa_balancing = 1
Maybe the osd daemon accesses buffers on the wrong numa node.
I'll try to reproduce the problem
----- Original Message -----
From: "aderumier"
To: "ceph-devel" , "ceph-u
Hi everyone,
I am curious about the current state of the roadmap as well. Alongside
the already-asked question about VMware support, where are we with
CephFS's multi-MDS stability and dynamic subtree partitioning?
Thanks
Regards,
Marc
On 04/22/2015 07:04 AM, Ray Sun wrote:
Sage,
I have the s
Hi all,
When running the command:
cephfs /cephfs/ show_layout
The result is :
WARNING: This tool is deprecated. Use the layout.* xattrs to query and modify
layouts.
Error getting layout: (25) Inappropriate ioctl for device
I didn't find an alternative for setting and showing layouts, nor another tool. Is th
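The xattr interface the warning refers to works along these lines; the path and pool name below are examples, and this is a sketch rather than a verified recipe:

# show the layout of a directory / of a file
getfattr -n ceph.dir.layout /cephfs/somedir
getfattr -n ceph.file.layout /cephfs/somefile
# change the pool used for new files created under a directory
setfattr -n ceph.dir.layout.pool -v cephfs_data /cephfs/somedir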
On 04/22/2015 11:22 AM, Stéphane DUGRAVOT wrote:
> Hi all,
> When running command :
>
> cephfs /cephfs/ show_layout
>
> The result is :
>
> WARNING: This tool is deprecated. Use the layout.* xattrs to query and modify
> layouts.
> Error getting layout: (25) Inappropriate ioctl for device
>
> I
Hi everyone,
I don't think this has been posted to this list before, so just
writing it up so it ends up in the archives.
tl;dr: Using RBD storage pools with libvirt is currently broken on
Ubuntu trusty (LTS), and any other platform using libvirt 1.2.2.
In libvirt 1.2.2, the rbd_create3 function
----- On 22 Apr 15, at 11:39, Wido den Hollander wrote:
> On 04/22/2015 11:22 AM, Stéphane DUGRAVOT wrote:
> > Hi all,
> > When running command :
> > cephfs /cephfs/ show_layout
> > The result is :
> > WARNING: This tool is deprecated. Use the layout.* xattrs to query and
> > modify
> > la
Hi all,
following up on my email from yesterday, I have interesting information
confirming that the problem is not related to Hammer at all.
Seeing really nothing that explains the weird behaviour, I reinstalled
Giant and had the same symptoms, leading me to think it has to be hardware
related...
A
Hi,
On Tue, Apr 21, 2015 at 6:05 PM, Scott Laird wrote:
>
> ceph-objectstore-tool --op remove --data-path /var/lib/ceph/osd/ceph-36/
> --journal-path /var/lib/ceph/osd/ceph-36/journal --pgid $id
>
Out of curiosity, what is the difference between the above and just rm'ing
the pg directory from /curre
On Wed, Apr 22, 2015 at 10:55 AM, Daniel Swarbrick
wrote:
> Hi all,
>
> With Debian jessie scheduled to be released in a few days on April 25,
> many of us will be thinking of upgrading wheezy based systems to jessie.
> The Ceph packages in upstream Debian repos are version 0.80.7 (i.e.,
> firefly
On Wed, Apr 22, 2015 at 5:01 AM, Alexandre DERUMIER
wrote:
> I wonder if it could be numa related,
>
> I'm using centos 7.1,
> and auto numa balacning is enabled
>
> cat /proc/sys/kernel/numa_balancing = 1
>
> Maybe osd daemon access to buffer on wrong numa node.
>
> I'll try to reproduce the p
On 04/22/2015 12:07 PM, Florian Haas wrote:
> Hi everyone,
>
> I don't think this has been posted to this list before, so just
> writing it up so it ends up in the archives.
>
> tl;dr: Using RBD storage pools with libvirt is currently broken on
> Ubuntu trusty (LTS), and any other platform using
Hello,
I've got heavily unbalanced OSDs.
Some are at 61% usage and some at 86%,
which is 372G free space vs 136G free space.
All are up and are weighted at 1.
I'm running firefly with tunables set to optimal and hashpspool = 1.
Also a reweight-by-utilization does nothing.
# ceph osd reweight-by-utiliz
Hi,
I'm a ceph newbie setting up some trial installs for evaluation.
Using Debian stable (Wheezy) with Ceph Firefly from backports
(0.80.7-1~bpo70+1).
I've been following the instructions at
http://docs.ceph.com/docs/firefly/install/manual-deployment/ and first
time through went well, using a pa
Hi Christian
This sounds like the same problem we are having. We get long wait times
on ceph nodes, with certain commands (in our case, mainly mkfs) blocking
for long periods of time, stuck in a wait (and not read or write) state.
We get the same warning messages in syslog, as well.
Jeff
On
Hi,
I tried to recreate a Ceph FS (well, actually an underlying pool, but
for that I first need to remove the fs), but this seems not that easy
to achieve.
When I run
`ceph fs rm ceph_fs`
I get:
`Error EINVAL: all MDS daemons must be inactive before removing filesystem`
I stopped the 3 MDSs
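For reference, a rough outline of the kind of sequence needed on hammer, based on the commands available in 'ceph --help'; treat it as a sketch under assumptions rather than a verified recipe, and note the pool name is just an example:

# stop the ceph-mds services on all MDS hosts, then mark the mds cluster down
ceph mds cluster_down
ceph mds fail 0
# now the filesystem can be removed
ceph fs rm ceph_fs --yes-i-really-mean-it
# and the underlying pool dropped / recreated
ceph osd pool delete cephfs_data cephfs_data --yes-i-really-really-mean-it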
On Wed, Apr 22, 2015 at 1:02 PM, Wido den Hollander wrote:
> On 04/22/2015 12:07 PM, Florian Haas wrote:
>> Hi everyone,
>>
>> I don't think this has been posted to this list before, so just
>> writing it up so it ends up in the archives.
>>
>> tl;dr: Using RBD storage pools with libvirt is curren
Hi Florian
On 22/04/15 11:07, Florian Haas wrote:
> Further information: Original Red Hat BZ:
> https://bugzilla.redhat.com/show_bug.cgi?id=1092208 Bug I just
> added to Launchpad so the Ubuntu folks (hopefully) backport the
> patch for trusty-updat
On 22/04/15 11:44, Florian Haas wrote:
>> Would it be possible to have at least firefly, giant and hammer
>> built
>> for the aforementioned distros?
> jftr, packages for saucy and trusty *are* available:
>
> http://ceph.com/debian-hammer/dists/
>
On 04/22/2015 03:20 PM, Florian Haas wrote:
> On Wed, Apr 22, 2015 at 1:02 PM, Wido den Hollander wrote:
>> On 04/22/2015 12:07 PM, Florian Haas wrote:
>>> Hi everyone,
>>>
>>> I don't think this has been posted to this list before, so just
>>> writing it up so it ends up in the archives.
>>>
>>>
Hi,
I've seen and read a few things about ceph-crush-location and I think that's
what I need.
What I need (want to try) is a way to have SSDs in non-dedicated hosts, but
also to put those SSDs in a dedicated Ceph root.
From what I read, using ceph-crush-location, I could add a hostname with
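As far as I understand it, such a hook is just a script configured via 'osd crush location hook' that prints key=value pairs for the OSD being started. Everything below (path, detection logic, root name) is a sketch under assumptions, not a tested hook:

# /etc/ceph/ceph.conf on the OSD hosts:
# [osd]
# osd crush location hook = /usr/local/bin/crush-location-ssd

# /usr/local/bin/crush-location-ssd (hypothetical):
#!/bin/sh
# ceph invokes the hook as: hook --cluster <name> --id <osd-id> --type osd
while [ $# -gt 0 ]; do
  case "$1" in --id) ID="$2"; shift ;; esac
  shift
done
DEV=$(findmnt -n -o SOURCE "/var/lib/ceph/osd/ceph-$ID" | sed 's/[0-9]*$//')
if [ "$(cat /sys/block/$(basename "$DEV")/queue/rotational 2>/dev/null)" = "0" ]; then
  echo "root=ssd host=$(hostname -s)-ssd"
else
  echo "root=default host=$(hostname -s)"
fi

The "-ssd" suffix on the host name keeps the SSD OSDs in a separate host bucket, so the same physical host doesn't have to appear under two CRUSH roots.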
On 04/22/2015 03:47 PM, SCHAER Frederic wrote:
> Hi,
>
> I've seen and read a few things about ceph-crush-location and I think that's
> what I need.
> What I need (want to try) is : a way to have SSDs in non-dedicated hosts, but
> also to put those SSDs in a dedicated ceph root.
>
> From what I
forgot to mention I'm running 0.94.1
On 04/22/2015 03:02 PM, Kenneth Waegeman wrote:
Hi,
I tried to recreate a ceph fs ( well actually an underlying pool, but
for that I need to first remove the fs) , but this seems not that easy
to achieve.
When I run
`ceph fs rm ceph_fs`
I get:
`Error EINVAL
Hi,
I have done a lot of test today, and it seem indeed numa related.
My numastat was
# numastat
                  node0        node1
numa_hit       99075422    153976877
numa_miss     167490965      1493663
numa_foreign    1493663    167491417
int
Hi Alexandre,
We should discuss this at the perf meeting today. We knew NUMA node
affinity issues were going to crop up sooner or later (and indeed
already have in some cases), but this is pretty major. It's probably
time to really dig in and figure out how to deal with this.
Note: this is
On Tue, Apr 21, 2015 at 9:53 PM, Mohamed Pakkeer wrote:
> Hi sage,
>
> When can we expect the fully functional fsck for cephfs? Can we get it in the
> next major release? Is there any roadmap or time frame for the fully
> functional fsck release?
We're working on it as fast as we can, and it'll be don
>>We should discuss this at the perf meeting today. We knew NUMA node
>>affinity issues were going to crop up sooner or later (and indeed
>>already have in some cases), but this is pretty major. It's probably
>>time to really dig in and figure out how to deal with this.
Damn, I'm on the road c
-----Original Message-----
(...)
> So I just have to associate the mountpoint with the device... provided OSD is
> mounted when the tool is called.
> Anyone willing to share experience with ceph-crush-location ?
>
Something like this? https://gist.github.com/wido/5d26d88366e28e25e23d
I've us
I feel it is due to the tcmalloc issue.
I have seen a similar issue in my setup after 20 days.
Thanks,
Srinivas
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Mark
Nelson
Sent: Wednesday, April 22, 2015 7:31 PM
To: Alexandre DERUMIER; Milosz Tansk
On 04/22/2015 03:38 PM, Wido den Hollander wrote:
> On 04/22/2015 03:20 PM, Florian Haas wrote:
>> On Wed, Apr 22, 2015 at 1:02 PM, Wido den Hollander wrote:
>>> On 04/22/2015 12:07 PM, Florian Haas wrote:
Hi everyone,
I don't think this has been posted to this list before, so just
Hi,
Is there a recommended way of powering down a ceph cluster and bringing it
back up?
I have looked through the docs and cannot find anything about it.
Thanks in advance
Hi,
I changed the cluster network parameter in the config files, restarted
the monitors, and then restarted all the OSDs (I shouldn't have done
that). Now the OSDs keep on crashing, and the cluster is not able to
recover. I eventually rebooted the whole cluster, but the problem
remains: For a
> On 22 Apr 2015, at 16:54, 10 minus wrote the
> following:
>
> Hi,
>
> Is there a recommended way of powering down a ceph cluster and bringing it
> back up ?
>
> I have looked thru the docs and cannot find anything wrt it.
>
Best way would be:
- Stop all client I/O
- Shut down
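Continuing that outline, the commonly recommended flag dance looks something like the sketch below; exact flags vary by taste and version:

# once client I/O has stopped, keep the cluster from reacting to the outage
ceph osd set noout
ceph osd set norecover
ceph osd set norebalance
# then stop the OSD nodes, then the monitors, and power off
# after power-on (monitors first, then OSD nodes), clear the flags again
ceph osd unset norebalance
ceph osd unset norecover
ceph osd unset noout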
On 04/10/2015 10:10 AM, Lionel Bouton wrote:
On 04/10/15 15:41, Jeff Epstein wrote:
[...]
This seems highly unlikely. We get very good performance without
ceph. Requisitioning and manipulating block devices through LVM
happens instantaneously. We expect that ceph will be a bit slower by
its
Hi,
>>I feel it is due to tcmalloc issue
Indeed, I had patched one of my nodes, but not the other.
So maybe I have hit this bug (but I can't confirm, I don't have traces).
But numa interleaving seems to help in my case (maybe not from 100->300k, but
250k->300k).
I need to do more long tests to
I need to do more long tests to
If you just rm the directory you're leaving behind all of the leveldb data
about it. :)
-Greg
On Wed, Apr 22, 2015 at 3:23 AM Dan van der Ster wrote:
> Hi,
>
> On Tue, Apr 21, 2015 at 6:05 PM, Scott Laird wrote:
> >
> > ceph-objectstore-tool --op remove --data-path /var/lib/ceph/osd/ceph-36/
> >
If you look at the "ceph --help" output you'll find some commands for
removing MDSes from the system.
-Greg
On Wed, Apr 22, 2015 at 6:46 AM Kenneth Waegeman
wrote:
> forgot to mention I'm running 0.94.1
>
> On 04/22/2015 03:02 PM, Kenneth Waegeman wrote:
> > Hi,
> >
> > I tried to recreate a ceph
Hi all,
I have a cluster currently on Giant - is Hammer stable/ready for production
use?
-Tony
On Fri, Apr 17, 2015 at 3:29 PM, Loic Dachary wrote:
> Hi,
>
> Although erasure coded pools cannot be used with CephFS, they can be used
> behind a replicated cache pool as explained at
> http://docs.ceph.com/docs/master/rados/operations/cache-tiering/.
>
> Cheers
>
> On 18/04/2015 00:26, Ben Ra
On Mon, Apr 20, 2015 at 3:31 PM, Blair Bethwaite
wrote:
> Hi all,
>
> I understand the present pool tiering infrastructure is intended to work for
> >2 layers? We're presently considering backup strategies for large pools and
> wondered how much of a stretch it would be to have a base tier sitting
On Wed, Apr 22, 2015 at 3:18 AM, f...@univ-lr.fr wrote:
> Hi all,
>
> responding to my yesterday email, I have interesting informations confirming
> that the problem is not at all related to Hammer.
> Seing really nothing explaining the weird comportment, I've reinstalled a
> Giant and had the sam
On Wed, Apr 22, 2015 at 7:12 AM, Stefan Priebe - Profihost AG
wrote:
> Also a reweight-by-utilization does nothing.
As a fellow sufferer from this issue, mostly what I can offer you is
sympathy rather than actual help. However, this may be beneficial:
By default, reweight-by-utilization only al
On Wed, Apr 22, 2015 at 8:17 AM, Kenneth Waegeman
wrote:
> Hi,
>
> I changed the cluster network parameter in the config files, restarted the
> monitors , and then restarted all the OSDs (shouldn't have done that).
Do you mean that you changed the IP addresses of the monitors in the
config files
On 04/22/15 17:57, Jeff Epstein wrote:
>
>
> On 04/10/2015 10:10 AM, Lionel Bouton wrote:
>> On 04/10/15 15:41, Jeff Epstein wrote:
>>> [...]
>>> This seems highly unlikely. We get very good performance without
>>> ceph. Requisitioning and manipulating block devices through LVM
>>> happens instanta
ceph osd reweight-by-utilization needs another argument to do
something. The recommended starting value is 120. Run it again with lower
and lower values until you're happy. The value is a percentage, and I'm
not sure what happens if you go below 100. If you get into trouble with
this (too much
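In concrete terms, something like the commands below; the osd id and weight in the last one are just examples:

# start at 120% of average utilization and tighten gradually
ceph osd reweight-by-utilization 120
ceph osd reweight-by-utilization 115
# or nudge a single over-full OSD by hand
ceph osd reweight 12 0.9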
On 04/22/15 19:50, Lionel Bouton wrote:
> On 04/22/15 17:57, Jeff Epstein wrote:
>>
>>
>> On 04/10/2015 10:10 AM, Lionel Bouton wrote:
>>> On 04/10/15 15:41, Jeff Epstein wrote:
[...]
This seems highly unlikely. We get very good performance without
ceph. Requisitioning and manupulati
On Thu, Apr 16, 2015 at 8:02 PM, Gregory Farnum wrote:
> Since I now realize you did a bunch of reweighting to try and make
> data match up I don't think you'll find something like badly-sized
> LevelDB instances, though.
It's certainly something I can check, just to be sure. Erm, what does
a Le
On Wed, Apr 22, 2015 at 11:04 AM, J David wrote:
> On Thu, Apr 16, 2015 at 8:02 PM, Gregory Farnum wrote:
>> Since I now realize you did a bunch of reweighting to try and make
>> data match up I don't think you'll find something like badly-sized
>> LevelDB instances, though.
>
> It's certainly so
Hi,
Christian Balzer wrote:
>> thanks for the feedback regarding the network questions. Currently I try
>> to solve the question of how much memory, cores and GHz for OSD nodes
>> and Monitors.
>>
>> My research so far:
>>
>> OSD nodes: 2 GB RAM, 2 GHz, 1 Core (?) per OSD
>>
> RAM is enough, but
A very small 3-node Ceph cluster with this OSD tree:
http://pastebin.com/mUhayBk9
has some performance issues. All 27 OSDs are 5TB SATA drives, it
keeps two copies of everything, and it's really only intended for
nearline backups of large data objects.
All of the OSDs look OK in terms of utiliz
On 04/22/2015 01:39 PM, Francois Lafont wrote:
Hi,
Christian Balzer wrote:
thanks for the feedback regarding the network questions. Currently I try
to solve the question of how much memory, cores and GHz for OSD nodes
and Monitors.
My research so far:
OSD nodes: 2 GB RAM, 2 GHz, 1 Core (?)
What Ceph version are you using?
It seems clients are not sending enough traffic to the cluster.
Could you try with rbd_cache=false or true and see if behavior changes?
What is the client-side cpu utilization?
Performance also depends on the QD (queue depth) you are driving with.
I would suggest running fio on top of V
On Wed, Apr 22, 2015 at 2:54 PM, Somnath Roy wrote:
> What ceph version are you using ?
Firefly, 0.80.9.
> Could you try with rbd_cache=false or true and see if behavior changes ?
As this is ZFS, running a cache layer below it that it is not aware of
violates data integrity and can cause corrup
Look through the output of 'ceph pg 0.37 query' and see if it gives you any
hints on where to look.
On Wed, Apr 22, 2015 at 2:57 AM, MEGATEL / Rafał Gawron <
rafal.gaw...@megatel.com.pl> wrote:
> Hi
>
> I have problem with 1 pg incomplete
>
> ceph -w
>
> cluster 9cb96455-4d70-48e7-8517-8fd94
I believe your problem is that you haven't created the bootstrap-osd key and
distributed it to your OSD node in /var/lib/ceph/bootstrap-osd/ (see the
sketch after the quoted message below).
On Wed, Apr 22, 2015 at 5:41 AM, Daniel Piddock
wrote:
> Hi,
>
> I'm a ceph newbie setting up some trial installs for evaluation.
>
> Using Debian stable (Wh
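If that is indeed the missing piece, one way to get the key in place by hand is roughly the following, assuming cephx and the default paths from the manual-deployment docs; the "osd-node" hostname is a placeholder:

# on a node that has the client.admin keyring
ceph auth get-or-create client.bootstrap-osd mon 'allow profile bootstrap-osd' \
  -o ceph.bootstrap-osd.keyring
# copy it to the OSD node
scp ceph.bootstrap-osd.keyring osd-node:/var/lib/ceph/bootstrap-osd/ceph.keyring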
On Wed, Apr 22, 2015 at 2:16 PM, Gregory Farnum wrote:
> Uh, looks like it's the contents of the "omap" directory (inside of
> "current") are the levelDB store. :)
OK, here's du -sk of all of those:
36740 ceph-0/current/omap
35736 ceph-1/current/omap
37356 ceph-2/current/omap
38096 ceph-3/curren
I have a PG that is in the active+inconsistent state and found the
following objects to have differing md5sums:
-fa8298048c1958de3c04c71b2f225987
./DIR_5/DIR_0/DIR_D/DIR_9/1008a75.017c__head_502F9D05__0
+b089c2dcd4f1d8b4419ba34fe468f784
./DIR_5/DIR_0/DIR_D/DIR_9/1008a75.017c__head_
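Once you've worked out which copy is the authoritative one, the usual next step is along these lines; <pgid> is a placeholder, and keep in mind that repair re-replicates from the primary, so make sure the primary holds the good object first:

ceph health detail | grep inconsistent
# after confirming which replica is good
ceph pg repair <pgid>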
The Ceph process should be listening on the IP address and port, not the
physical NIC. I haven't encountered a problem like this, but it is good to
know it may exist. You may want to tweak miimon, downdelay and updelay (I
have a feeling that these won't really help you much as it seems the link
was
So, it seems you are not limited by anything.
I am suggesting running a synthetic workload like fio on top of the VM to identify
where the bottleneck is. For example, if fio gives decent enough output, I
guess the ceph layer is doing fine; it is your client that is not driving enough.
Thanks & Regard
On Wed, Apr 22, 2015 at 12:35 PM, Stillwell, Bryan
wrote:
> I have a PG that is in the active+inconsistent state and found the
> following objects to have differing md5sums:
>
> -fa8298048c1958de3c04c71b2f225987
> ./DIR_5/DIR_0/DIR_D/DIR_9/1008a75.017c__head_502F9D05__0
> +b089c2dcd4f1d8b4
ceph pg query says all the OSDs are being probed. If those 6 OSDs are
staying up, it probably just needs some time. The OSDs need to stay up
longer than 15 minutes. If any of them are getting marked down at all,
that'll cause problems. I'd like to see the past intervals in the recovery
state ge
Hi David,
I suspect you are hitting problems with sync writes, which Ceph isn't known
for being the fastest thing for.
I'm not a big expert on ZFS but I do know that a SSD ZIL is normally
recommended to allow fast sync writes. If you don't have this you are
waiting on Ceph to Ack the write across
On 04/22/2015 03:38 PM, Wido den Hollander wrote:
> On 04/22/2015 03:20 PM, Florian Haas wrote:
>> I'm not entirely sure, though, why virStorageBackendRBDCreateImage()
>> enables striping unconditionally; could you explain the reasoning
>> behind that?
>>
>
> When working on this with Josh some ti
I could really use some eyes on the systemd change proposed here:
http://tracker.ceph.com/issues/11344
Specifically, on bullet #4 there, should we have a single
"ceph-mon.service" (implying that users should only run one monitor
daemon per server) or whether we should support multiple "ceph-mon@" servi
On Wed, Apr 22, 2015 at 2:57 PM, Ken Dreyer wrote:
> I could really use some eyes on the systemd change proposed here:
> http://tracker.ceph.com/issues/11344
>
> Specifically, on bullet #4 there, should we have a single
> "ceph-mon.service" (implying that users should only run one monitor
> daemon
On 4/22/15, 2:08 PM, "Gregory Farnum" wrote:
>On Wed, Apr 22, 2015 at 12:35 PM, Stillwell, Bryan
> wrote:
>> I have a PG that is in the active+inconsistent state and found the
>> following objects to have differing md5sums:
>>
>> -fa8298048c1958de3c04c71b2f225987
>> ./DIR_5/DIR_0/DIR_D/DIR_9/1000
Hi,
Pavel V. Kaygorodov wrote:
> I have updated my cluster to Hammer and got a warning "too many PGs
> per OSD (2240 > max 300)". I know that there is no way to decrease the
> number of placement groups, so I want to re-create my pools with a lower pg
> number, move all my data to them, delete the old pools and
Hi Cephers, :)
I would like to know if there are some rules to estimate (approximately)
the CPU and RAM needs for:
1. a radosgw server (for instance with Hammer and civetweb).
2. an mds server
If I am not mistaken, for these 2 types of server there is no particular
need concerning storage.
For a
Hi,
When I want to get an estimate of the pg_num for a new pool,
I use this very useful page: http://ceph.com/pgcalc/.
In the table, I must give the %data of a pool. For instance, for
a "rados gateway only" use case, I can see that, by default, the
page gives:
- .rgw.buckets => 96.90% of data
-
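For what it's worth, the calculation behind that page is roughly the one below; the numbers plugged in (100 target PGs per OSD, 24 OSDs, size 3) are only an assumed example:

# pg_num ~= (target PGs per OSD * number of OSDs * %data) / replica size,
# rounded up to the next power of two
python -c 'x = 100*24*0.969/3; print(2**((int(x)-1).bit_length()))'
# -> 1024 for the .rgw.buckets pool in this example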
Mark Nelson wrote:
> I'm not sure who came up with the 1GB for each 1TB of OSD daemons rule, but
> frankly I don't think it scales well at the extremes. You can't get by with
> 256MB of ram for OSDs backed by 256GB SSDs, nor do you need 6GB of ram per
> OSD for 6TB spinning disks.
>
> 2-4GB o
On Wed, 22 Apr 2015 13:50:21 -0500 Mark Nelson wrote:
>
>
> On 04/22/2015 01:39 PM, Francois Lafont wrote:
> > Hi,
> >
> > Christian Balzer wrote:
> >
> >>> thanks for the feedback regarding the network questions. Currently I
> >>> try to solve the question of how much memory, cores and GHz for
Hello David,
On Wed, 22 Apr 2015 21:30:49 +0100 Nick Fisk wrote:
> Hi David,
>
> I suspect you are hitting problems with sync writes, which Ceph isn't
> known for being the fastest thing for.
>
> I'm not a big expert on ZFS but I do know that a SSD ZIL is normally
> recommended to allow fast s
Do you have some idea how I can diagnose this problem?
I'll look at the ceph -s output while you get these stuck processes, to see
if there's any unusual activity (scrub/deep
scrub/recovery/backfills/...). Is it correlated in any way with rbd
removal (i.e. write blocking doesn't appear unless you remo