ot created when I applied the
spec?
When I add my next host, should I change the placement to that host name or
to '*'?
More generally, is there a higher level document that talks about Ceph spec
files and the orchestrator - something that deals with the general concepts?
Thanks.
-Dave
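For reference, a minimal OSD service spec of the kind cephadm consumes might look like the sketch below; the service name and device filters are placeholders, not taken from this thread. With host_pattern '*' the spec matches every host the orchestrator manages, so hosts added later pick it up automatically, whereas naming a single host restricts it to that host.

# sketch only: write the spec to a file, then hand it to the orchestrator
cat > osd-spec.yml <<'EOF'
service_type: osd
service_id: default_osds      # arbitrary name (assumed)
placement:
  host_pattern: '*'           # every managed host, including future ones
spec:
  data_devices:
    rotational: 1             # HDDs become data devices
  db_devices:
    rotational: 0             # flash devices hold WAL+DB
EOF
ceph orch apply -i osd-spec.yml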
ed by the
failed OSD. Do I need to pre-create the LV, or will 'ceph orch' do that
for me?
Thanks.
-Dave
--
Dave Hall
Binghamton University
kdh...@binghamton.edu
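A rough sketch of how an orchestrator-managed replacement usually goes; OSD id 7 and the flags are illustrative, so check your release's docs before trusting --zap:

# drain and remove the failed OSD but keep its id reserved for the replacement
ceph orch osd rm 7 --replace --zap
# after swapping the disk, confirm the new device shows up as available
ceph orch device ls --refresh
# if the OSD originally came from a 'ceph orch apply osd' spec, cephadm should
# recreate the data and DB LVs itself; watch for the rebuilt daemon
ceph orch ps | grep osd.7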
On Thu, Oct 31, 2024 at 3:52 PM Tim Holloway wrote:
> I migrated from gluster when I found out it's going uns
ate documents for containerized
installations?
Lastly, the above cited instructions don't say anything about the separate
WAL/DB LV.
Please advise.
Thanks.
-Dave
--
Dave Hall
Binghamton University
kdh...@binghamton.edu
by having the
> containers assemble the report data and the output is thus the OSD's
> internal view, not your server's view.
>
> Tim
>
>
> On 10/28/24 14:01, Dave Hall wrote:
> > Hello.
> >
> > Thanks to Robert's reply to 'Influencing the
On Mon, Oct 28, 2024 at 9:22 AM Anthony D'Atri
wrote:
>
>
> > Yes, but it's irritating. Ideally, I'd like my OSD IDs and hostnames to
> > track, so that if a server goes pong I can find it and fix it ASAP
>
> `ceph osd tree down` etc. (including alertmanager rules and Grafana
> panels) arguably mak
rwise, I'm open to suggestions.
Thanks.
-Dave
--
Dave Hall
Binghamton University
kdh...@binghamton.edu
value.
I assume that there is a document somewhere that would explain the extended
syntax for ceph config, but I haven't found it.
Thanks.
-Dave
--
Dave Hall
Binghamton University
kdh...@binghamton.edu
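The extended syntax in question appears to be the section/mask form of 'ceph config'; a few hedged examples, with hostnames and values made up:

ceph config set osd osd_memory_target 4294967296              # all OSDs
ceph config set osd/class:hdd osd_memory_target 6442450944    # only OSDs whose device class is hdd
ceph config set osd/host:ceph01 osd_memory_target 8589934592  # only OSDs on host ceph01
ceph config get osd.12 osd_memory_target                      # what a specific daemon would use
ceph config dump                                               # everything in the mon config database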
On Wed, Oct 16, 2024 at 12:02 PM Anthony D'Atri wrote:
>
> > Unfortunately,
lues for osd_memory_target, and
especially about the 4th one at 22GB.
Also, I'm recalling that there might be a recommendation to disable swap,
and I could easily do 'swapoff -a' when the swap usage is lower than the
free RAM.
Can anybody shed any light on this?
Thanks.
-Dave
--
Dave Hall
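A quick, assumption-laden way to see what a running OSD is actually using, and to take swap out of the equation; osd.3 and the fstab edit are illustrative:

ceph config show osd.3 osd_memory_target          # effective value for a running daemon
ceph daemon osd.3 config get osd_memory_target    # same answer via the admin socket, run on that OSD's host
swapoff -a                                        # drop swap now
sed -i '/ swap /s/^/#/' /etc/fstab                # keep it off across reboots (assumes a standard fstab)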
t, no down, no
scrub/deep-scrub, no recover/rebalance/backfill, and perhaps pause) is it
safe to apply an adjusted crush map? Is it safe to revert to the original
crush map if things don't seem quite right?
Thanks.
-Dave
--
Dave Hall
Binghamton University
kdh...@binghamton.edu
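A rough sequence for the quiesce/apply/revert dance, with crush.orig kept purely so the old map can be restored:

ceph osd set norebalance && ceph osd set norecover && ceph osd set nobackfill
ceph osd getcrushmap -o crush.orig        # save the original for rollback
crushtool -d crush.orig -o crush.txt      # decompile, edit crush.txt as needed
crushtool -c crush.txt -o crush.new
ceph osd setcrushmap -i crush.new         # apply the edited map
# if things don't look right:  ceph osd setcrushmap -i crush.orig
ceph osd unset norebalance && ceph osd unset norecover && ceph osd unset nobackfill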
Oddly, the Nautilus cluster that I'm gradually decommissioning seems to
have the same shadow root pattern in its crush map. I don't know if that
really means anything, but at least I know it's not something I did
differently when I set up the new Reef cluster.
-Dave
--
Dave
the real
problem.
Please advise.
Thanks.
-Dave
--
Dave Hall
Binghamton University
kdh...@binghamton.edu
On Thu, Sep 19, 2024 at 3:56 AM Stefan Kooman wrote:
> On 19-09-2024 05:10, Anthony D'Atri wrote:
> >
> >
> >>
> >> Anthony,
> >>
> >>
, I simply assign the pool to a new crush
rule using a command similar to the one shown in your note in the link you
referenced?
Thanks.
-Dave
--
Dave Hall
Binghamton University
kdh...@binghamton.edu
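Assuming the new rule already exists, pointing a pool at it really is a one-liner; the pool and rule names below are placeholders:

ceph osd pool set mypool crush_rule my-new-rule
ceph osd pool get mypool crush_rule       # confirm the change took
ceph -s                                   # watch the resulting remapped/misplaced PGs drain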
On Wed, Sep 18, 2024 at 2:10 PM Anthony D'Atri
wrote:
>
>
>
> Hello,
>
>
"item": -2,
"item_name": "default~hdd"
To be sure, all of my OSDs are identical - HDD with SSD WAL/DB.
Please advise on how to fix this.
Thanks.
-Dave
--
Dave Hall
Binghamton University
kdh...@binghamton.edu
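For what it's worth, the ~hdd entries are the device-class "shadow" hierarchy, which normally appears once a rule or profile selects a class; a hedged sketch of class-aware rules, with made-up names:

ceph osd crush tree --show-shadow         # lists default~hdd and friends
ceph osd crush rule create-replicated rep-hdd default host hdd
# for an EC pool the class is baked into the profile instead:
ceph osd erasure-code-profile set ec82-hdd k=8 m=2 crush-device-class=hdd crush-failure-domain=host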
the new cluster. In this context, I am looking to
enable and disable mirroring on specific RBD images and RGW buckets as the
client workload is migrated from accessing the old cluster to accessing the
new.
Thanks.
-Dave
--
Dave Hall
Binghamton University
k
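On the RBD side, per-image control looks roughly like the following; pool and image names are placeholders, and RGW buckets go through multisite sync policy instead, which is not reproduced here:

rbd mirror pool enable rbdpool image                  # pool in per-image mirroring mode
rbd mirror image enable rbdpool/vm-disk-1 snapshot    # start mirroring one image
rbd mirror image status rbdpool/vm-disk-1             # check replication progress
rbd mirror image disable rbdpool/vm-disk-1            # stop once the workload has moved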
ty graphs described above.
Thanks.
-Dave
--
Dave Hall
Binghamton University
kdh...@binghamton.edu
607-760-2328 (Cell)
607-777-4641 (Office)
apper also
back-filled the OSD as it was draining it.
So did I miss something here? What is the best way to proceed? I
understand that it would be mayhem to mark 8 of 72 OSDs out and then turn
backfill/rebalance/recover back on. But it seems like there should be a
better way.
Suggestions?
want to get to container-based Reef, but
I need to keep a stable cluster throughout.
Any advice or reassurance much appreciated.
Thanks.
-Dave
--
Dave Hall
Binghamton University
kdh...@binghamton.edu
new cluster on
fresh Debian installs, and migrating the data and remaining nodes. This
would be a long and painful process - decommission a node, move it, move
some data, decommission another node - and I don't know what effect it
would have on external references to our object store.
Ple
o mark two out simultaneously?
Thanks.
-Dave
--
Dave Hall
Binghamton University
kdh...@binghamton.edu
On Fri, Aug 4, 2023 at 10:16 AM Dave Holland wrote:
> On Fri, Aug 04, 2023 at 09:44:57AM -0400, Dave Hall wrote:
> > My inclination is to mark these 3 OSDs 'OUT' before they
major risk of data
loss. However, if it would be better to do them one per day or something,
I'd rather be safe.
I also assume that I should wait for the rebalance to complete before I
initiate the replacement procedure.
Your thoughts?
Thanks.
-Dave
--
Dave Hall
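A sketch of the order of operations being discussed, with OSD ids 10, 11, and 12 standing in for the real ones:

ceph osd out 10 11 12              # data begins migrating off them
ceph -s                            # wait until recovery finishes and health is clean
ceph osd safe-to-destroy osd.10    # verify no PG still depends on them
ceph osd safe-to-destroy osd.11
ceph osd safe-to-destroy osd.12
# only after all report safe, start the physical replacement procedure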
mber of OSDs is not large, there is an increased chance that more
than one scrub will want to read the same OSD. Scheduling nightmare if
the number of simultaneous scrubs is low and client traffic is given
priority.
-Dave
--
Dave Hall
Binghamton University
kdh...@binghamton.edu
607-760-2328
showed any errors. In order to finally
clear the 'inconsistent' status we had to run another 'pg repair' after the
object repair.
Since then all is good.
-Dave
--
Dave Hall
Binghamton University
kdh...@binghamton.edu
On Sun, Oct 3, 2021 at 1:09 PM 胡 玮文 wrote:
>
>
guideline for increasing osd_scrub_max_preemptions just enough to balance
scrub progress against client responsiveness?
Or perhaps there are other scrub attributes that should be tuned instead?
Thanks.
-Dave
--
Dave Hall
Binghamton University
kdh...@binghamton.edu
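The knobs that usually come up in this discussion, with purely illustrative values:

ceph config set osd osd_scrub_max_preemptions 10   # allow more preemptions before a scrub insists on finishing
ceph config set osd osd_max_scrubs 1               # at most one scrub per OSD at a time
ceph config set osd osd_scrub_sleep 0.1            # small pause between scrub chunks to favor client I/O
ceph config get osd osd_scrub_load_threshold       # scrubs only start below this load average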
y of this until I get rid of
these 29 slow ops.
Can anybody suggest a path forward?
Thanks.
-Dave
--
Dave Hall
Binghamton University
kdh...@binghamton.edu
up
> with some of the more rarely used metadata on the HDD but having it on
> flash certainly is nice.
>
>
> Mark
>
>
> On 6/3/21 5:18 PM, Dave Hall wrote:
> > Anthony,
> >
> > I had recently found a reference in the Ceph docs that indicated
> something
>
Anthony,
I had recently found a reference in the Ceph docs that indicated something
like 40GB per TB for WAL+DB space. For a 12TB HDD that comes out to
480GB. If this is no longer the guideline I'd be glad to save a couple
dollars.
-Dave
--
Dave Hall
Binghamton University
SATA drives (still
Enterprise). For Ceph, will the switch to SATA carry a performance
difference that I should be concerned about?
Thanks.
-Dave
--
Dave Hall
Binghamton University
kdh...@binghamton.edu
dcc1bc7b79" now active
ceph-block-b1fea172-71a4-463e-a3e3-8cdcc1bc7b79: autoactivation failed.
-Dave
--
Dave Hall
Binghamton University
kdh...@binghamton.edu
ything that is
'designed') is that a few people (developers) can produce something that a
large number of people (storage administrators, or 'users') will want to
use.
Please remember the ratio of users (cluster administrators) to developers
and don't lose sight of the users
ransferred this file
system to a new NFS server and new storage I was able to directly rsync
each user in parallel. I filled up a 10GB pipe and copied the whole FS in
an hour.
Typing in a hurry. If my explanation is confusing, please don't hesitate
to ask me to explain better.
-Dave
he scrubs progress, more scrub deadlines are missed, so it's not a
steady march to zero.
Please feel free to comment. I'd be glad to know if I'm on the right track
as we expect the cluster to double in size over the next 12 to 18 months.
Thanks.
-Dave
--
Dave Hall
500TB production cluster I am
asking for guidance from this list.
BTW, my cluster is currently running Nautilus 14.2.6 (stock Debian
packages).
Thank you.
-Dave
--
Dave Hall
Binghamton University
kdh...@binghamton.edu
systemd log messages were similar to those reported by Radoslav. A
Google search led me to the link above. The suggested addition to the
kernel command line fixed the issue.
-Dave
--
Dave Hall
Binghamton University
kdh...@binghamton.edu
On Thu, Apr 15, 2021 at 4:07 AM Eneko Lacunza wrote
containers.
-Dave
--
Dave Hall
Binghamton University
kdh...@binghamton.edu
On Wed, Apr 14, 2021 at 12:51 PM Radoslav Milanov <
radoslav.mila...@gmail.com> wrote:
> Hello,
>
> Cluster is 3 nodes Debian 10. Started cephadm upgrade on healthy 15.2.10
> cluster. Managers were upgraded
to do any of the ceph-volume stuff that seems to
be failing after the OSDs are configured?
Or maybe I just have something odd in my inventory file. I'd be glad to
share - either in this list or off line.
Thanks.
-Dave
--
Dave Hall
Binghamton University
kdh...@binghamton.edu
t the best practices
for Nautilus?
1) I couldn't find how to set this in Nautilus.
2) I found a mailing list post from August 2019 that talked about EC pools
and using a multiple of k * 4M.
Any insight, or a pointer to the right part of the docs would be greatly
appreciated.
Thanks.
-Dave
state, I'm wondering about the best
course of action - should I just mark it back in? Or should I destroy and
rebuild it. If clearing it in the way I have, in combination with updating
to 14.2.16, will prevent it from misbehaving, why go through the trouble of
destroying and rebuilding?
Thanks.
want to temporarily suspend autoscaling it would be required to
modify the setting for each pool and then to modify it back afterward.
Thoughts?
Thanks
-Dave
--
Dave Hall
Binghamton University
kdh...@binghamton.edu
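One way to handle it per pool, with the pool name as a placeholder:

ceph osd pool set mypool pg_autoscale_mode off   # or "warn" to keep the advice without the action
ceph osd pool autoscale-status                   # current vs. target PG counts per pool
ceph config set global osd_pool_default_pg_autoscale_mode warn   # default for newly created pools

Newer releases also grew a cluster-wide noautoscale flag, but Nautilus does not appear to have it.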
On Mon, Mar 29, 2021 at 1:44 PM Anthony D'Atri
wrote:
> Yes th
's off globally but enabled for this
particular pool. Also, I see that the target PG count is lower than the
current.
I guess you learn something new every day.
-Dave
--
Dave Hall
Binghamton University
kdh...@binghamton.edu
607-760-2328 (Cell)
607-777-4641 (Office)
On Mon, Mar 29, 2021 at
t my cluster is
slowly eating itself, and that I'm about to lose 200TB of data. It's also
possible to imagine that this is all due to the gradual optimization of the
pools.
Note that the primary pool is an EC 8 + 2 containing about 124TB.
Thanks.
-Dave
--
Dave Hall
Binghamton University
eph/mgr?
Same questions about MDS in the near term, but I haven't searched the
docs yet.
Thanks.
-Dave
--
Dave Hall
Binghamton University
kdh...@binghamton.edu
more like 480GB of NVMe.
Thanks.
-Dave
--
Dave Hall
Binghamton University
kdh...@binghamton.edu
setting is applied, right?
Thanks.
-Dave
--
Dave Hall
Binghamton University
kdh...@binghamton.edu
parts. Networking is
frequently an afterthought. In this case node-level traffic management -
weighted fair queueing - could make all the difference.
-Dave
--
Dave Hall
Binghamton University
kdh...@binghamton.edu
On Tue, Mar 16, 2021 at 4:20 AM Burkhard Linke <
burkhard
the right
answer, or should one side get a slight advantage?)
-Dave
Dave Hall
Binghamton University
kdh...@binghamton.edu
On 3/15/2021 12:48 PM, Andrew Walker-Brown wrote:
Dave
That’s the way our cluster is setup. It’s relatively small, 5 hosts, 12 osd’s.
Each host has 2x10G with LACP to t
demand from either side suddenly changed.
Maybe this is a crazy idea, or maybe it's really cool. Your thoughts?
Thanks.
-Dave
--
Dave Hall
Binghamton University
kdh...@binghamton.edu
Reed,
Thank you. This seems like a very well thought approach. Your note about
the balancer and the auto_scaler seem quite relevant as well. I'll give it
a try when I add my next two nodes.
-Dave
--
Dave Hall
Binghamton University
On Thu, Mar 11, 2021 at 5:53 PM Reed Dier wrote:
ilure will take out
at least 2 OSDs. Because of this it seems potentially worthwhile to go
through the trouble of defining failure domain = nvme to assure maximum
resilience.
-Dave
--
Dave Hall
Binghamton University
kdh...@binghamton.edu
607-760-2328 (Cell)
607-777-4641 (Office)
On Thu, Mar 11, 20
--
Dave Hall
Binghamton University
kdh...@binghamton.edu
607-760-2328 (Cell)
607-777-4641 (Office)
On Thu, Mar 11, 2021 at 1:28 PM Christian Wuerdig <
christian.wuer...@gmail.com> wrote:
> For EC 8+2 you can get away with 5 hosts by ensuring each host gets 2
> shards similar to t
ing 6 OSD nodes
and 48 HDDs) are NVMe write exhaustion and HDD failures. Since we have
multiple OSDs sharing a single NVMe device it occurs to me that we might
want to get Ceph to 'map' against that. In a way, NVMe devices are our
'nodes' at the current size of our cluste
ilure domains, resulting in protection against NVMe failure.
Please let me know if this is worth pursuing.
Thanks.
-Dave
--
Dave Hall
Binghamton University
kdh...@binghamton.edu
607-760-2328 (Cell)
607-777-4641 (Office)
umes on the NVMe.
Is this correct? It is also possible to lay these out as basic logical
partitions?
Second question: How do I decide whether I need WAL, DB, or both?
Third question: Once I answer the above WAL/DB question, what are the
guidelines for sizing them?
Thanks.
-Dave
--
Dave
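One plausible way to lay this out, with the volume group, LV size, and device names all assumed rather than taken from the thread:

vgcreate ceph-db-nvme0 /dev/nvme0n1
lvcreate -L 124G -n db-0 ceph-db-nvme0
ceph-volume lvm create --bluestore --data /dev/sda --block.db ceph-db-nvme0/db-0
# when --block.db lives on flash the WAL is placed there as well, so a separate
# --block.wal is normally only worth it if an even faster device is available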
D capacity by 33% and resulted in 33% PG
misplacement. The next node will only result in 25% misplacement. If too
high a percentage of misplaced PGs negatively impacts rebalancing or
data availability, what is a reasonable ceiling for this percentage?
Thanks.
-Dave
--
Dave Hall
Binghamton
I have been told that Rocky Linux is a fork of CentOS that will be what
CentOS used to be before this all happened. I'm not sure how that figures
in here, but it's worth knowing.
-Dave
--
Dave Hall
Binghamton University
kdh...@binghamton.edu
On Wed, Mar 3, 2021 at 12:41 PM D
-Dave
--
Dave Hall
Binghamton University
kdh...@binghamton.edu
On Tue, Mar 2, 2021 at 4:06 AM David Caro wrote:
> On 03/01 21:41, Dave Hall wrote:
> > Hello,
> >
> > I've had a look at the instructions for clean shutdown given at
> > https://ceph.io/planet/how-to-d
e got this right. The cluster contains 200TB of a
researcher's data that has taken a year to collect, so caution is needed.
Thanks.
-Dave
--
Dave Hall
Binghamton University
kdh...@binghamton.edu
-based approach is better than bare-metal.
I think I saw that Cephadm will only deploy container-based clusters.
Is this a hint that bare-metal is going away in the long run?
Thanks.
-Dave
--
Dave Hall
Binghamton University
kdh...@binghamton.edu
les it might be necessary to
allocate 300GB for DB per OSD.
-Dave
--
Dave Hall
Binghamton University
kdh...@binghamton.edu
On Mon, Nov 16, 2020 at 12:41 AM Zhenshi Zhou wrote:
> well, the warning message disappeared after I executed "ceph tell osd.63
> compact".
>
> Zhenshi Zhou
ool and failure-domain
= OSD, data loss may be possible due to the failure of a shared SSD/NVMe.
Maybe it's important, with a small cluster, to suggest placing WAL/DB on the
HDD and using SSD/NVMe only for the journal?
-Dave
--
Dave Hall
Binghamton University
kdh...@binghamton.edu
607-760-2328 (Cell)
won't get it
to work.
Hope this helps.
-Dave
--
Dave Hall
Binghamton University
kdh...@binghamton.edu
607-760-2328 (Cell)
607-777-4641 (Office)
On Mon, Nov 9, 2020 at 9:08 AM Frédéric Nass
wrote:
> Hi Luis,
>
> Thanks for your help. Sorry I forgot about the kernel detai
ake any improvements. I
also have 150GB left on my mirrored boot drive. I could un-mirror part
of this and get 300GB of SATA SSD.
Thoughts?
-Dave
Dave Hall
Binghamton University
kdh...@binghamton.edu
On 10/23/2020 6:00 AM, Eneko Lacunza wrote:
Hi Dave,
El 22/10/20 a las 19:43, Dave Hall escr
-37564008adca-osd--block--db--41f75c25--67db--46e8--a3fb--ddee9e7f7fc4
253:15 0 124G 0 lvm
Dave Hall
Binghamton University
kdh...@binghamton.edu
607-760-2328 (Cell)
607-777-4641 (Office)
On 10/23/2020 6:00 AM, Eneko Lacunza wrote:
Hi Dave,
El 22/10/20 a las 19:43, Dave Hall esc
Brian, Eneko,
BTW, the Tyan LFF chassis we've been using has 12 x 3.5" bays in front
and 2 x 2.5" SATA bays in back. We've been using 240GB SSDs in the rear
bays for mirrored boot drives, so any NVMe we add is exclusively for OSD
support.
-Dave
Dave Hall
Binghamton University
Eneko,
On 10/22/2020 11:14 AM, Eneko Lacunza wrote:
Hi Dave,
El 22/10/20 a las 16:48, Dave Hall escribió:
Hello,
(BTW, Nautilus 14.2.7 on Debian non-container.)
We're about to purchase more OSD nodes for our cluster, but I have a
couple questions about hardware choices. Our original
o big. I
also thought I might have seen some comments about cutting large drives
into multiple OSDs - could that be?
Thanks.
-Dave
--
Dave Hall
Binghamton University
kdh...@binghamton.edu
en't tried this yet, but there are at least some
discussions for MySQL.
-Dave
Dave Hall
Binghamton University
On 10/19/2020 10:49 PM, Brian Topping wrote:
Another option is to let PosgreSQL do the replication with local storage. There
are great reasons for Ceph, but databases optimize f
.
Please advise.
Thanks.
-Dave
--
Dave Hall
Binghamton University
kdh...@binghamton.edu
opied into the install package. Once I copied
the file over to /lib/systemd/system everything started working again.
If I had to guess it was either ceph-volume@.service, or more likely -
based on timestamps on one of my OSD servers, ceph-osd@.service.
Hope this helps.
-Dave
Dave
I'm running EC 8+2 with 'failure domain OSD' on a 3 node cluster with 24
OSDs. Until one has 10s of nodes it pretty much has to be failure domain
OSD.
The documentation lists certain other important settings which it took time
to find. Most important are recommendations to have a small replicat
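For context, an 8+2 profile pinned to the OSD failure domain is created roughly like this; profile and pool names are placeholders:

ceph osd erasure-code-profile set ec82-osd k=8 m=2 crush-failure-domain=osd
ceph osd pool create ecpool 128 128 erasure ec82-osd
ceph osd pool get ecpool erasure_code_profile     # confirm which profile the pool uses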
isk of a late packet getting sent to the wrong TCP
connection. Hard to imagine this happening, but it could.)
-Dave
Dave Hall
Binghamton University
kdh...@binghamton.edu
607-760-2328 (Cell)
607-777-4641 (Office)
On 5/29/2020 2:45 PM, Anthony D'Atri wrote:
I’m pretty sure I’ve seen that happen
arking.
-Dave
Dave Hall
Binghamton University
kdh...@binghamton.edu
607-760-2328 (Cell)
607-777-4641 (Office)
On 5/29/2020 6:29 AM, Paul Emmerich wrote:
Please do not apply any optimization without benchmarking *before* and
*after* in a somewhat realistic scenario.
No, iperf is likely not a
might be 1500 bytes short of
a low multiple of 9000.
It would be interesting to see the iperf tests repeated with
corresponding buffer sizing. I will perform this experiment as soon as
I complete some day-job tasks.
-Dave
Dave Hall
Binghamton University
kdh...@binghamton.edu
607-760-2328 (C
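A possible shape for that experiment, with the interface and peer names made up; the idea is to compare write sizes that do and do not pack neatly into 9000-byte frames:

ip link show eth0 | grep mtu       # confirm the interface MTU first
iperf3 -s                          # on the receiving node
iperf3 -c peer-host -l 8800        # sender: ~8800-byte writes fit a single jumbo frame
iperf3 -c peer-host -l 64K         # compare against large default-style writes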
It's rough,
but I'd be glad to share if anybody is interested.
-Dave
Dave Hall
Binghamton University
On 5/24/2020 12:29 PM, Martin Verges wrote:
Just save yourself the trouble. You won't have any real benefit from MTU
9000. It has some smallish, but it is not worth the effor
point, and in the worst case you can undo all of these
changes by rebooting the two test nodes.
If you want to move your production traffic to Jumbo Frames, change the
appropriate routes to MTU 8192 on all systems. Then test test test.
Lastly, change your network configuration on any
WAL or anything else.
Of course, if I've failed to create an optimal configuration I'm next
going to ask if I can adjust it without having to wipe and reinitialize
every OSD.
Thanks.
-Dave
Dave Hall
Binghamton University
kdh...@binghamton.edu
On 5/6/2020 2:20 AM, lin yunfan wr
r the
unfamiliar to find what they need.
-Dave
Dave Hall
Binghamton University
kdh...@binghamton.edu
607-760-2328 (Cell)
607-777-4641 (Office)
On 5/5/2020 10:42 AM, Igor Fedotov wrote:
Hi Dave,
wouldn't this help (particularly "Viewing runtime settings" section):
https://docs.ce
round a bit and, while I have found documentation on how
to configure, reconfigure, and repair a BlueStore OSD, I haven't found
anything on how to query the current configuration.
Could anybody point me to a command or link to documentation on this?
Thanks.
-Dave
--
Dave Hall
B
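A few ways to inspect an existing BlueStore OSD without reconfiguring it; osd.5 and the path are examples:

ceph osd metadata 5                                    # devices, sizes, db/wal layout as the OSD reports them
ceph daemon osd.5 config show | grep bluestore         # runtime values, run on the host carrying osd.5
ceph-bluestore-tool show-label --path /var/lib/ceph/osd/ceph-5    # on-disk labels for the data/db/wal volumes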
he virtual memory?
Also, is there something that needs to be tweaked to prevent the MDSs
from accumulating so much memory?
Thanks.
-Dave
--
Dave Hall
Binghamton University
e expanded,
but then the lookups got even slower.
-Dave
Dave Hall
Binghamton University
kdh...@binghamton.edu
On 3/23/2020 12:21 AM, Liviu Sas wrote:
Hi Dave,
Thank you for the answer.
Unfortunately the issue is that ceph uses the wrong source IP address,
and sends the traffic on the wrong
P requests.
-Dave
Dave Hall
Binghamton University
kdh...@binghamton.edu
On 3/22/2020 8:03 PM, Liviu Sas wrote:
Hello,
While testing our ceph cluster setup, I noticed a possible issue with the
cluster/public network configuration being ignored for TCP session
initiation.
Looks like the daemons (m
I need to create a new FS and copy
the data over?
Thanks.
-Dave
--
Dave Hall
Binghamton University
/DNS for the active mgr. When an active mgr fails
over (or goes down), the new active mgr enables this floating IP and
starts responding to mgr requests. Any passive mgrs would, of course,
turn this IP off to assure that requests go to the active mgr.
Just a thought...
-Dave
Dave Hall
t seem
to be able to use size as a criterion.
Does anybody have anything further to add that would help clarify this?
Thanks.
-Dave
Dave Hall
Binghamton University
On 2/10/20 1:26 PM, Gregory Farnum wrote:
On Mon, Feb 10, 2020 at 12:29 AM Håkan T Johansson wrote:
On Mon, 10 Feb 2020, Gregory
disk did fail, would the cluster re-balance and reconstruct
the lost data until the failed OSD was replaced.
Does this make sense? Or is it just wishful thinking.
Thanks.
-Dave
--
Dave Hall
Binghamton University
ath /dev/ceph-db-0/db-0 --osd-data
/var/lib/ceph/osd/ceph-24/ --osd-uuid 6441f236-8694-46b9-9c6a-bf82af89765d
--setuser ceph --setgroup ceph
root@ceph01:~#
Dave Hall
Binghamton University
kdh...@binghamton.edu
607-760-2328 (Cell)
607-777-4641 (Office)
On 1/29/2020 3:15 AM, Jan Fajerski wrote:
On T
commands that could be issued to
manually create these OSDs. There's some evidence of this in
/var/log/ceph/ceph-volume.log, but there's some detail missing and it's
really hard to follow.
If you can provide this list I'd gladly give it a try and let you know
how it
hose documents anyway if I could.
Thanks.
-Dave
Dave Hall
On 1/28/2020 3:05 AM, Jan Fajerski wrote:
On Mon, Jan 27, 2020 at 03:23:55PM -0500, Dave Hall wrote:
All,
I've just spent a significant amount of time unsuccessfully chasing
the _read_fsid unparsable uuid error on Debian 10 / Nautilus
/dev/sdi: 10.9 TiB, 12000138625024 bytes, 23437770752 sectors
Disk /dev/sdj: 10.9 TiB, 12000138625024 bytes, 23437770752 sectors
I'd send the output of ceph-volume inventory on Luminous, but I'm
getting -->: KeyError: 'human_readable_size'.
Please let me know if I can