Apparently it just stops after that. I already tried to find a
debug log-level for ceph-volume but it's not applicable to all
subcommands.
The cephadm.log also just stops without even finishing the "copying
blob", which makes me wonder if it actually pulls the entire image? I
assum
files changed, 78 insertions(+), 12 deletions(-)
I will try to investigate next week, but it would be great if some Ceph expert
developers could have a look at this commit ;-)
Have a nice weekend
Patrick
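A note on catching the faulty commit: git bisect can automate most of that search. A minimal sketch (the good/bad refs are illustrative):
git bisect start
git bisect bad HEAD            # the current checkout misses the drives
git bisect good v16.2.10       # a release where inventory still worked
# rerun "ceph-volume inventory" against the checked-out code, then mark the result:
git bisect good                # or: git bisect bad
git bisect reset               # when finished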
On 18/10/2023 at 13:48, Patrick Begou wrote:
Hi all,
I'm trying to catch the faulty commit. I'
On 23/10/2023 at 03:04, 544463...@qq.com wrote:
I think you can try to roll back this part of the Python code and wait for your
good news :)
Not so easy 😕
[root@e9865d9a7f41 ceph]# git revert
4fc6bc394dffaf3ad375ff29cbb0a3eb9e4dbefc
Auto-merging src/ceph-volume/ceph_volume/tests/util/te
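When the revert stops on a conflict like this, it can still be finished by hand; a minimal sketch (the exact files are whatever git status reports):
git status                  # lists the conflicted files
# edit the listed files and resolve the conflict markers, then:
git add -u                  # stage the resolved files
git revert --continue
# or abandon this attempt:
git revert --abort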
join(_sys_block_path, dev, 'removable')) == "1":
    continue
The thumb drive is removable, of course; apparently that is what gets filtered
here.
Regards,
Eugen
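For what it's worth, the flag that check reads can be inspected directly from sysfs; a quick loop (device glob illustrative):
for d in /sys/block/sd*; do
    echo "${d##*/} removable=$(cat "$d/removable")"
done
As discussed below, hot-swappable bays from some vendors report 1 here even for ordinary HDDs.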
Quoting Patrick Begou:
On 23/10/2023 at 03:04, 544463...@qq.com wrote:
I think you can try to roll back this part of
4070--97e9--e5e8b3970766-osd--block--7dec1808--d6f4--4f90--ac74--75a4346e1df5 253:1 0 465.8G 0 lvm
sdc 8:32 1 232.9G 0 disk
Patrick
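The same information is the RM column in that lsblk output (the "1" after "8:32" for sdc); asking for the columns explicitly makes it obvious:
lsblk -o NAME,MAJ:MIN,RM,SIZE,RO,TYPE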
On 24/10/2023 at 13:38, Patrick Begou wrote:
Hi Eugen,
Yes Eugen, all the devices /dev/sd[abc] have the removable flag set to
1. Maybe because they are
made was to enable removable (but not USB) devices, as there are
vendors that report hot-swappable drives as removable. Patrick, it
looks like this may resolve your issue as well.
On Tue, Oct 24, 2023 at 5:57 AM Eugen Block wrote:
Hi,
Maybe because they are hot-swappable hard drives.
yes, that's my assumption as well.
Quoting Patrick Begou:
Hi Robert,
On 05/12/2023 at 10:05, Robert Sander wrote:
On 12/5/23 10:01, duluxoz wrote:
Thanks David, I knew I had something wrong :-)
Just for my own edification: why is k=2, m=1 not recommended for
production? Is it considered too "fragile", or is it something else?
It is the same as a replicated p
times of a new
server, k+m+2 is not a luxury (depending on the growth of your volume).
Regards,
*David CASIER*
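For reference, a profile along the lines discussed here (k+m shards, one per host) would be created roughly like this; the profile name, k/m values and PG count are purely illustrative:
ceph osd erasure-code-profile set ec42 k=4 m=2 crush-failure-domain=host
ceph osd pool create ecpool 32 32 erasure ec42
ceph osd pool get ecpool min_size    # typically k+1, i.e. 5 for this profile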
On Tue, Dec 5, 2023 at 11:17, Patrick Begou
wrote:
Hi R
On 06/12/2023 at 00:11, Rich Freeman wrote:
On Tue, Dec 5, 2023 at 6:35 AM Patrick Begou
wrote:
OK, so I've misunderstood the meaning of failure domain. If there is no
way to request using 2 OSDs per node with node as the failure domain, then with 5 nodes
k=3, m=1 is not secure enough and I will ha
maintenance). It's not great
really, but sometimes there is no way around it. I was happy when I got the
extra hosts.
Best regards,
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
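On the "2 OSDs per node with node as failure domain" question: this is usually done with a custom CRUSH rule rather than a profile option. A hedged sketch of the common workflow (rule name, id and host count are illustrative, and the shard count must match the profile's k+m):
ceph osd getcrushmap -o crush.bin
crushtool -d crush.bin -o crush.txt
# add something like the following rule to crush.txt:
#   rule ec_2per_host {
#       id 10
#       type erasure
#       step set_chooseleaf_tries 5
#       step set_choose_tries 100
#       step take default
#       step choose indep 4 type host        # pick 4 hosts ...
#       step chooseleaf indep 2 type osd     # ... and 2 OSDs in each = 8 shards
#       step emit
#   }
crushtool -c crush.txt -o crush.new
ceph osd setcrushmap -i crush.new
ceph osd pool set <ec-pool> crush_rule ec_2per_host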
From: Curt
Sent: Wednesday, December 6, 2023 3:56 PM
To: Pat
Hi Sebastian
as you say "more than 3 public networks": did you manage to get Ceph daemons
listening on multiple public interfaces?
I'm looking for such a possibility, as the daemons seem to be bound to one
interface only, but I cannot find any how-to.
Thanks
Patrick
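One data point on the multi-network part: public_network accepts a comma-separated list of subnets, so daemons can pick addresses from more than one network (whether each daemon then binds more than one interface is a separate question). Subnets illustrative, and daemons need a restart to re-bind:
ceph config set global public_network 192.168.1.0/24,192.168.2.0/24
ceph config get mon public_network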
On 03/01/2024 at 21:31, Sebastian wrote:
Hi Erich,
About a similar problem I asked about some months ago, Frank Schilder published
this on the list (December 6, 2023) and it may be helpful for your
setup. I have not tested it yet; my cluster is still in deployment.
To provide some first-hand experience, I was operating a pool with
Hi everyone
I'm new to Ceph, with just a French 4-day training session with Octopus on
VMs that convinced me to build my first cluster.
At this time I have 4 old identical nodes for testing, with 3 HDDs each and
2 network interfaces, running Alma Linux 8 (el8). I'm trying to replay the
training session
5. ceph log last cephadm
1. This will show you what orchestrator has been trying to do,
and how it may be failing
Also, it’s never unhelpful to have a look at “ceph -s” and “ceph
health detail”, particularly for any people trying to help you without
access to your systems.
Best of l
inventory”. That should show you the devices available for OSD
deployment, and hopefully matches up to what your “lsblk” shows. If
you need to zap HDDs and orchestrator is still not seeing them, you
can try “cephadm ceph-volume lvm zap /dev/sdb”
Thank you,
Josh Beaman
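As a concrete example of that last step (device path illustrative; --destroy also wipes the LVM metadata so the disk comes back as available):
cephadm ceph-volume -- lvm zap --destroy /dev/sdb
ceph orch device ls --refresh      # the device should reappear as available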
From: Patrick Begou
lume -- inventory
would use the 17.2.6 version of ceph-volume for the inventory. It
works by running ceph-volume through the container, so you don't have
to worry about installing different packages to try them, and
it should pull the container image on its own if it isn't on the
ma
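Spelled out, that would be something like the following (image tag illustrative):
cephadm --image quay.io/ceph/ceph:v17.2.6 ceph-volume -- inventory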
orks well. Have you looked at
the cephadm logs (ceph log last cephadm)?
Unless you are using very specific hardware, I suspect Ceph is
suffering from a problem outside of it...
Cheers,
Michel
Sent from my mobile
On 26 May 2023 at 17:02:50, Patrick Begou
wrote:
Hi,
I'm back wor
I'm a new Ceph user and I have some trouble bootstrapping with
cephadm: using Pacific or Quincy, no hard drives are detected by Ceph.
Using Octopus, all the hard drives are detected. As I do not know how to
really clean up even a successful but non-functional install, each test
requires me a f
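For the cleanup part, cephadm can tear a broken bootstrap down completely; a sketch (take the fsid from "cephadm ls" or the bootstrap output):
cephadm ls                                   # note the fsid of the cluster
cephadm rm-cluster --fsid <fsid> --force     # newer versions also accept --zap-osds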
Hi,
I'm working on a small POC for a Ceph setup on 4 old C6100 PowerEdge servers. I
had to install Octopus since the latest versions were unable to detect the
HDDs (too old hardware??). No matter, this is only for training and
understanding the Ceph environment.
My installation is based on
https://downlo
Hi,
bad question, sorry.
I've just run
ceph mgr module enable snap_schedule --force
to solve this problem. I was just afraid to use "--force" 😕 but as I
can break this test configuration
Patrick
On 19/09/2023 at 09:47, Patrick Begou wrote:
Hi,
I'm working
subvolume rm [] [--force]
[--retain-snapshots]
Error EINVAL: invalid command
I think I need your help to go further 😕
Patrick
On 19/09/2023 at 10:23, Patrick Begou wrote:
Hi,
bad question, sorry.
I've just run
ceph mgr module enable snap_schedule --force
to solve this problem. I
//docs.ceph.com/en/quincy/cephfs/snap-schedule/#usage
ceph fs snap-schedule
(note the hyphen!)
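With the hyphenated command, basic usage then looks like this (path and interval illustrative):
ceph fs snap-schedule add / 1h
ceph fs snap-schedule status /
ceph fs snap-schedule list /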
On Tue, Sep 19, 2023 at 8:23 AM Patrick Begou
wrote:
Hi,
still some problems with snap_schedule, as the ceph fs snap-schedule
namespace is not available on my nodes.
[ceph: root@mostha1 /]# c
hard disk
drives (???). That seems more productive than debugging a long-EOLed
release.
On Tue, Sep 19, 2023 at 8:49 AM Patrick Begou
wrote:
Hi Patrick,
sorry for the bad copy/paste. As it was not working, I also tried
with the module name 😕
[ceph: root@mostha1 /]# ceph fs snap-schedule
no v
Hi,
After a power outage on my test Ceph cluster, 2 OSDs fail to restart.
The log file shows:
8e5f-00266cf8869c@osd.2.service: Failed with result 'timeout'.
Sep 21 11:55:02 mostha1 systemd[1]: Failed to start Ceph osd.2 for
250f9864-0142-11ee-8e5f-00266cf8869c.
Sep 21 11:55:12 mostha1 systemd[
usually indicates unclean termination of a previous run, or service
implementation deficiencies.
Patrick
On 21/09/2023 at 12:44, Igor Fedotov wrote:
Hi Patrick,
please share osd restart log to investigate that.
Thanks,
Igor
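To capture that restart log, either systemd's journal or cephadm can be used (unit name taken from the messages quoted above):
journalctl -u ceph-250f9864-0142-11ee-8e5f-00266cf8869c@osd.2.service -n 200
cephadm logs --name osd.2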
On 21/09/2023 13:41, Patrick Begou wrote:
Hi,
After a power outage on my test Ceph cluster, 2 OSDs fail to restart.
disks
connected to the same controller working OK? If so, I'd say the disk is dead.
Cheers
On 21/9/23 at 16:17, Patrick Begou wrote:
Hi Igor,
a "systemctl reset-failed" doesn't restart the osd.
I reboot the node and now it show some error on the HDD:
[ 107.716769] ata
On 02/10/2023 at 18:22, Patrick Bégou wrote:
Hi all,
still stuck with this problem.
I've deployed Octopus and all my HDDs have been set up as OSDs. Fine.
I've upgraded to Pacific and 2 OSDs have failed. They have been
automatically removed and the upgrade finished. Cluster health is finally
OK, no d
ous output you didn't specify the --destroy flag.
Which cephadm version is installed on the host? Did you also upgrade
the OS when moving to Pacific? (Sorry if I missed that.)
Quoting Patrick Begou:
On 02/10/2023 at 18:22, Patrick Bégou wrote:
Hi all,
still stuck with this problem.
utomatically (if
"all-available-devices" is enabled or your osd specs are already
applied). If it doesn't happen automatically, deploy it with 'ceph
orch daemon add osd <host>:<device-path>' [1].
[1] https://docs.ceph.com/en/quincy/cephadm/services/osd/#deploy-osds
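For example, with the host and device names used earlier in this thread:
ceph orch daemon add osd mostha1:/dev/sdb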
Quoting Patrick Begou:
no LVs for that disk) you can check the inventory:
cephadm ceph-volume inventory
Please also add the output of 'ceph orch ls osd --export'.
Quoting Patrick Begou:
Hi Eugen,
- the OS is Alma Linux 8 with the latest updates.
- this morning I've worked with ceph-volume but it ends
ceph/{fsid}/cephadm.{latest} ceph-volume inventory
Does the output differ? Paste the relevant cephadm.log from that
attempt as well.
[1]
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/LASBJCSPFGDYAWPVE2YLV2ZLF3HC5SLS/
Quoting Patrick Begou:
Hi Eugen,
first many thanks fo
really strange. Just out of curiosity, have you tried Quincy
(and/or Reef) as well? I don't recall what inventory does in the
background exactly, I believe Adam King mentioned that in some thread,
maybe that can help here. I'll search for that thread tomorrow.
Quoting Patrick Begou:
Hi Eugen,
[root@mostha1 ~]# rpm -q cephadm
cephadm-16.2.14-0.el8.noarch
Log associated t
I'm not
sure what it could be.
Quoting Patrick Begou:
I've run additional tests with the Pacific releases, and with "ceph-volume
inventory" things went wrong with the first v16.2.11 release
(v16.2.11-20230125)
=== Ceph v16.2.10-20220920
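That per-release testing can be scripted with the same --image trick mentioned above (registry path and tags illustrative, matching the builds named here):
for tag in v16.2.10-20220920 v16.2.11-20230125; do
    echo "=== Ceph $tag"
    cephadm --image quay.io/ceph/ceph:$tag ceph-volume -- inventory
done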
finishing the "copying
blob", which makes me wonder if it actually pulls the entire image? I
assume you have enough free disk space (otherwise I would expect a
message "failed to pull target image"), do you see any other warnings
in syslog or something? Or are the logs inc
Hi Johan,
so it is not OS-related, as you are running Debian and I am running
Alma Linux. But I'm surprised that so few people hit this bug.
Patrick
On 13/10/2023 at 17:38, Johan wrote:
At home I'm running a small cluster, Ceph v17.2.6, Debian 11 Bullseye.
I have recently added a new serv