[ceph-users] Re: User + Dev Meetup Tomorrow!

2024-05-24 Thread Sebastian Wagner
Hi Frédéric, I agree. Maybe we should re-frame things? Containers can run on bare-metal and containers can run virtualized. And distribution packages can run bare-metal and virtualized as well. What about asking independently about: * Do you run containers or distribution packages? * Do yo

[ceph-users] Re: Is it possible to stripe rados object?

2022-01-26 Thread Sebastian Wagner
libradosstriper ? Am 26.01.22 um 10:16 schrieb lin yunfan: > Hi, > I know with rbd and cephfs there is a stripe setting to stripe data > into multiple rados objects. > Is it possible to use the librados API to stripe a large object into many > small ones? > > linyunfan > ___

[ceph-users] Re: Single Node Cephadm Upgrade to Pacific

2022-01-10 Thread Sebastian Wagner
Hi Nathan, Should work, as long as you have two MGRs deployed. Please have a look at ceph config set mgr mgr/mgr_standby_modules = False Best, Sebastian Am 08.01.22 um 17:44 schrieb Nathan McGuire: > Hello! > > I'm running into an issue with upgrading Cephadm v15 to v16 on a single host. > I'v
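For reference, a minimal sketch of the single-host prerequisites before starting the upgrade; the placement count, option spelling (as quoted above) and target version are assumptions to adapt:
    ceph orch apply mgr 2                                # make sure a standby MGR gets deployed
    ceph config set mgr mgr/mgr_standby_modules false   # the option mentioned above
    ceph orch upgrade start --ceph-version 16.2.7
    ceph orch upgrade status                             # watch progress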

[ceph-users] Re: airgap install

2021-12-17 Thread Sebastian Wagner
Hi Zoran, I'd like to have this properly documented in the Ceph documentation as well.  I just created https://github.com/ceph/ceph/pull/44346 to add the monitoring images to that section. Feel free to review this one. Sebastian Am 17.12.21 um 11:06 schrieb Zoran Bošnjak: > Kai, thank you for y
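For reference, a sketch of pointing cephadm at a local registry for the monitoring stack images; the registry host and image tags below are placeholders:
    ceph config set mgr mgr/cephadm/container_image_prometheus registry.local:5000/prometheus/prometheus:v2.18.1
    ceph config set mgr mgr/cephadm/container_image_alertmanager registry.local:5000/prometheus/alertmanager:v0.20.0
    ceph config set mgr mgr/cephadm/container_image_node_exporter registry.local:5000/prometheus/node-exporter:v0.18.1
    ceph config set mgr mgr/cephadm/container_image_grafana registry.local:5000/ceph/ceph-grafana:6.7.4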

[ceph-users] Re: Octopus: conversion from ceph-ansible to Cephadm causes unexpected 15.2.15→.13 downgrade for MDSs and RGWs

2021-12-16 Thread Sebastian Wagner
Hi Florian, hi Guillaume Am 16.12.21 um 14:18 schrieb Florian Haas: > Hello everyone, > > my colleagues and I just ran into an interesting situation updating > our Ceph training course. That course's labs cover deploying a > Nautilus cluster with ceph-ansible, upgrading it to Octopus (also with >

[ceph-users] Re: v16.2.7 Pacific released

2021-12-08 Thread Sebastian Wagner
Hi Robert, it would have been much better to avoid this NFS situation altogether by avoiding those different implementations in the first place. Unfortunately this wasn't the case and I agree this is not great. In any case, here are the manual steps that are performed by the migration automatical

[ceph-users] Re: 16.2.7 pacific QE validation status, RC1 available for testing

2021-12-02 Thread Sebastian Wagner
Am 29.11.21 um 18:23 schrieb Yuri Weinstein: > Details of this release are summarized here: > > https://tracker.ceph.com/issues/53324 > Release Notes - https://github.com/ceph/ceph/pull/44131 > > Seeking approvals for: > > rados - Neha rados/cephadm looks good. Except for https://tracker.ceph.com/

[ceph-users] Re: Expose rgw using consul or service discovery

2021-11-09 Thread Sebastian Wagner
S, >> SMB and S3. >> Upgrades are done live via apt upgrade We do not use cephadm, we >> provide a web based deployment ui (wizard like steps) as well as ui >> for cluster management. >> For nginx, we use the upstream method to configure the load balancing >> of

[ceph-users] Re: cephadm does not find podman objects for osds

2021-10-28 Thread Sebastian Wagner
Some thoughts: * Do you have any error messages from the MDS daemons? https://docs.ceph.com/en/latest/cephadm/troubleshooting/#gathering-log-files * Do you have any error messages from the OSDs? * What do you mean by "osd podman object"? * Try downgrading to 3.0.1 Am 25.10.21 um 23
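A sketch of the log gathering described on the linked troubleshooting page, run on the affected host; the daemon name and fsid are examples:
    cephadm ls                                  # which daemons does cephadm see on this host?
    cephadm logs --name osd.12                  # journald logs of a single daemon
    journalctl -u ceph-<fsid>@osd.12.service    # the underlying systemd unit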

[ceph-users] Re: MDS and OSD Problems with cephadm@rockylinux solved

2021-10-28 Thread Sebastian Wagner
In case you still have the error messages and additional info, do you want to create a tracker issue for this? https://tracker.ceph.com/projects/orchestrator/issues/new . To me this sounds like a network issue and not like a rockylinux issue. Am 26.10.21 um 13:17 schrieb Magnus Harlander: > Hi, >

[ceph-users] Re: Expose rgw using consul or service discovery

2021-10-20 Thread Sebastian Wagner
Am 20.10.21 um 09:12 schrieb Pierre GINDRAUD: > Hello, > > I'm migrating from puppet to cephadm to deploy a ceph cluster, and I'm > using consul to expose radosgateway. Before, with puppet, we were > deploying radosgateway with "apt install radosgw" and applying upgrade > using "apt upgrade radosg

[ceph-users] Re: cephadm cluster behing a proxy

2021-10-14 Thread Sebastian Wagner
Hi Luis, Yes, there is downstream documentation for it here: https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/5/html-single/installation_guide/index#configuring-a-custom-registry-for-disconnected-installation_install but we clearly lack an upstream version of it. I'd love to me
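Until the upstream docs exist, a rough sketch of bootstrapping against a private registry in a proxied or disconnected environment; the registry URL, image and credentials are placeholders:
    cephadm --image registry.local:5000/ceph/ceph:v16.2.6 bootstrap \
        --mon-ip 192.0.2.10 \
        --registry-url registry.local:5000 \
        --registry-username myuser \
        --registry-password mypassword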

[ceph-users] Re: Cephadm set rgw SSL port

2021-09-29 Thread Sebastian Wagner
Here you go: https://github.com/ceph/ceph/pull/43332 Am 28.09.21 um 15:49 schrieb Sebastian Wagner: > Am 28.09.21 um 15:12 schrieb Daniel Pivonka: >> Hi, >> >> 1. I believe the field is called 'rgw_frontend_port' >> 2. I don't think something like that e

[ceph-users] Re: Cephadm set rgw SSL port

2021-09-28 Thread Sebastian Wagner
Am 28.09.21 um 15:12 schrieb Daniel Pivonka: > Hi, > > 1. I believe the field is called 'rgw_frontend_port' > 2. I don't think something like that exists but probably should At least for RGWs, we have: https://docs.ceph.com/en/pacific/cephadm/rgw/#service-specification > > -Daniel Pivonka > >
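A sketch of a service spec that uses rgw_frontend_port together with SSL; the service id, port and placement are examples:
    cat > rgw.yaml <<'EOF'
    service_type: rgw
    service_id: myrgw
    placement:
      count: 1
    spec:
      rgw_frontend_port: 8443
      ssl: true
      rgw_frontend_ssl_certificate: |
        -----BEGIN CERTIFICATE-----
        ...
        -----END CERTIFICATE-----
    EOF
    ceph orch apply -i rgw.yaml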

[ceph-users] Re: Error ceph-mgr on fedora 36

2021-09-27 Thread Sebastian Wagner
looks like you should create a tracker issue for this. https://tracker.ceph.com/projects/mgr/issues/new Am 18.09.21 um 14:34 schrieb Igor Savlook: OS: Fedora 36 (rawhide) Ceph: 16.2.6 Python: 3.10 When start ceph-mgr he is try load core pyth

[ceph-users] Re: Remoto 1.1.4 in Ceph 16.2.6 containers

2021-09-27 Thread Sebastian Wagner
Thank you David! Am 24.09.21 um 00:41 schrieb David Galloway: I just repushed the 16.2.6 container with remoto 1.2.1 in it. On 9/22/21 4:19 PM, David Orman wrote: https://bodhi.fedoraproject.org/updates/FEDORA-EPEL-2021-4b2736a28c ^^ if people want to test and provide feedback for a potential

[ceph-users] Re: How you loadbalance your rgw endpoints?

2021-09-27 Thread Sebastian Wagner
Hi Szabo, I think you can have a look at https://docs.ceph.com/en/latest/cephadm/rgw/#high-availability-service-for-rgw even if you don't deploy ceph using cephadm. Am 24.09.21 um 07:59 schrieb Szabo, Istvan (Ag
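For reference, a sketch of the ingress (haproxy + keepalived) service described on that page; service names, virtual IP and ports are examples:
    cat > rgw-ingress.yaml <<'EOF'
    service_type: ingress
    service_id: rgw.myrgw
    placement:
      count: 2
    spec:
      backend_service: rgw.myrgw
      virtual_ip: 192.0.2.100/24
      frontend_port: 443
      monitor_port: 1967
    EOF
    ceph orch apply -i rgw-ingress.yaml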

[ceph-users] Re: Restore OSD disks damaged by deployment misconfiguration

2021-09-27 Thread Sebastian Wagner
Hi Phil, Am 27.09.21 um 10:06 schrieb Phil Merricks: Hey folks, A recovery scenario I'm looking at right now is this: 1: In a clean 3-node Ceph cluster (pacific, deployed with cephadm), the OS Disk is lost from all nodes 2: Trying to be helpful, a self-healing deployment system reinstalls the

[ceph-users] Re: Docker & CEPH-CRASH

2021-09-16 Thread Sebastian Wagner
ceph-crash should work, as crash dumps aren't namespaced in the kernel. Note that you need a pid1 process in your containers in order for crash dumps to be created. Am 16.09.21 um 08:57 schrieb Eugen Block: I haven't tried it myself but it would probably work to run the crash services apart fr

[ceph-users] Re: How to purge/remove rgw from ceph/pacific

2021-09-11 Thread Sebastian Wagner
Yeah, looks like this was missing from the docs. See https://github.com/ceph/ceph/pull/43141 Am 11.09.21 um 12:46 schrieb Eugen Block: Edit your rgw service specs and set „unmanaged“ to true so cephadm won’t redeploy a daemon, then remove it as you did before. See [1] for more details. [1]
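A sketch of the removal steps described above; the service and daemon names are examples, check `ceph orch ls` and `ceph orch ps` for the real ones:
    ceph orch ls --service-type rgw --export > rgw.yaml
    # add "unmanaged: true" to rgw.yaml, then re-apply it:
    ceph orch apply -i rgw.yaml
    ceph orch daemon rm rgw.myrgw.host1.abcdef --force
    # or remove the whole service, spec included:
    ceph orch rm rgw.myrgw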

[ceph-users] Re: mon startup problem on upgrade octopus to pacific

2021-09-02 Thread Sebastian Wagner
Could you please verify that the mon_map of each mon contains all mons with the correct addresses? Am 30.08.21 um 21:45 schrieb Chris Dunlop: Hi, Does anyone have any suggestions? Thanks, Chris On Mon, Aug 30, 2021 at 03:52:29PM +1000, Chris Dunlop wrote: Hi, I'm stuck, mid upgrade from octopus to pacif
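A sketch for checking what monmap each mon actually has; the mon id is an example, and a mon has to be stopped before its map can be extracted:
    ceph mon dump                                    # monmap as the quorum sees it
    ceph daemon mon.host1 mon_status                 # per-daemon view, run on the mon's host
    # for a mon that won't start: stop it, then
    ceph-mon -i host1 --extract-monmap /tmp/monmap
    monmaptool --print /tmp/monmap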

[ceph-users] Re: Cephadm cannot aquire lock

2021-09-02 Thread Sebastian Wagner
Am 31.08.21 um 04:05 schrieb fcid: Hi ceph community, I'm having some trouble trying to delete an OSD. I've been using cephadm in one of our clusters and it's works fine, but lately, after an OSD failure, I cannot delete it using the orchestrator. Since the orchestrator is not working (for s

[ceph-users] Re: Very beginner question for cephadm: config file for bootstrap and osd_crush_chooseleaf_type

2021-09-02 Thread Sebastian Wagner
?: mon_allow_pool_size_one = 1 osd_pool_default_size = 1 Ignacio El 30/8/21 a las 17:31, Sebastian Wagner escribió: Try running `cephadm bootstrap --single-host-defaults` Am 20.08.21 um 18:23 schrieb Eugen Block: Hi, you can just set the config option with 'ceph config set ...' after your cluste
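For reference, a sketch of passing an initial config at bootstrap time; the option is the one asked about in this thread and the mon IP is a placeholder:
    cat > initial-ceph.conf <<'EOF'
    [global]
    osd_crush_chooseleaf_type = 0
    EOF
    cephadm bootstrap --mon-ip 192.0.2.10 --config initial-ceph.conf --single-host-defaults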

[ceph-users] Re: cephadm Pacific bootstrap hangs waiting for mon

2021-09-02 Thread Sebastian Wagner
By chance, do you still have the logs of the mon that never went up? https://docs.ceph.com/en/latest/cephadm/troubleshooting/#checking-cephadm-logs Sebastian Am 31.08.21 um 23:51 schrieb Matthew Pounsett: On Tue,

[ceph-users] Re: Brand New Cephadm Deployment, OSDs show either in/down or out/down

2021-09-02 Thread Sebastian Wagner
were recognized, and all went up/in without issue. Thanks On 9/1/21 06:15, Sebastian Wagner wrote: Am 30.08.21 um 17:39 schrieb Alcatraz: Sebastian, Thanks for responding! And of course. 1. ceph orch ls --service-type osd --format yaml Output: service_type: osd service_id: all

[ceph-users] Re: cephadm 15.2.14 - mixed container registries?

2021-09-02 Thread Sebastian Wagner
Am 02.09.21 um 02:54 schrieb Nigel Williams: I managed to upgrade to 15.2.14 by doing: ceph orch upgrade start --image quay.io/ceph/ceph:v15.2.14 (anything else I tried would fail) When I look in ceph orch ps output though I see quay.io for most image sources, but alertmanager, grafana, node-

[ceph-users] Re: podman daemons in error state - where to find logs?

2021-09-02 Thread Sebastian Wagner
We have a troubleshooting section here: https://docs.ceph.com/en/latest/cephadm/troubleshooting/#checking-cephadm-logs a ceph user should not be required for the containers to log to systemd. Did things end up in

[ceph-users] Re: Brand New Cephadm Deployment, OSDs show either in/down or out/down

2021-09-01 Thread Sebastian Wagner
Output: Error EINVAL: caps cannot be specified both in keyring and in command You only need to create the keyring, you don't need to store the keyring anywhere. I'd still suggest to somehow create the keyring, but I haven't seen this particular error before. hth Sebast

[ceph-users] Re: Very beginner question for cephadm: config file for bootstrap and osd_crush_chooseleaf_type

2021-08-30 Thread Sebastian Wagner
Try running `cephadm bootstrap --single-host-defaults` Am 20.08.21 um 18:23 schrieb Eugen Block: Hi, you can just set the config option with 'ceph config set ...' after your cluster has been bootstrapped. See [1] for more details about the config store. [1] https://docs.ceph.com/en/latest/

[ceph-users] Re: Brand New Cephadm Deployment, OSDs show either in/down or out/down

2021-08-30 Thread Sebastian Wagner
Could you run 1. ceph orch ls --service-type osd --format yaml 2. ceph orch ps --daemon-type osd --format yaml 3. try running the `ceph auth add` call from https://docs.ceph.com/en/mimic/rados/operations/add-or-rm-osds/#adding-an-osd-manual Am 30.08.21 um 14:49 schrieb Alcatraz: Hello al
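The `ceph auth add` call from the linked docs looks roughly like this; the OSD id and keyring path are examples:
    ceph auth add osd.0 osd 'allow *' mon 'allow rwx' \
        -i /var/lib/ceph/osd/ceph-0/keyring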

[ceph-users] Re: "ceph orch ls", "ceph orch daemon rm" fail with exception "'KeyError: 'not'" on 15.2.10

2021-08-10 Thread Sebastian Wagner
Hi, you managed to hit https://tracker.ceph.com/issues/51176 which will be fixed by https://github.com/ceph/ceph/pull/42177 . https://tracker.ceph.com/issues/51176#note-9 contains a list of steps for you to recover from this. Hope that helps, Sebastian Am 09.08.21 um 13:11 schrieb Erkk

[ceph-users] Re: Ceph Pacific mon is not starting after host reboot

2021-08-10 Thread Sebastian Wagner
Good morning Robert, Am 10.08.21 um 09:53 schrieb Robert Sander: Hi, Am 09.08.21 um 20:44 schrieb Adam King: This issue looks the same as https://tracker.ceph.com/issues/51027 which is being worked on. Essentially, it seems that hosts that were being rebooted were temporarily marked as offli

[ceph-users] Re: cephadm shell fails to start due to missing config files?

2021-07-05 Thread Sebastian Wagner
Hi Vladimir, The behavior of `cephadm shell` will be improved by https://github.com/ceph/ceph/pull/42028 In the meantime, as a workaround, you can either deploy a daemon on this host or copy the system's ceph.conf into the location shown in the error message. Hope that help

[ceph-users] Re: Module 'devicehealth' has failed:

2021-06-15 Thread Sebastian Wagner
Hi Torkil, you should see more information in the MGR log file. Might be an idea to restart the MGR to get some recent logs. Am 15.06.21 um 09:41 schrieb Torkil Svensgaard: Hi Looking at this error in v15.2.13: " [ERR] MGR_MODULE_ERROR: Module 'devicehealth' has failed:     Module 'devicehea
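Assuming a cephadm deployment, a sketch for restarting the MGR and grabbing fresh logs; the daemon name is an example taken from `ceph orch ps`:
    ceph orch ps --daemon-type mgr
    ceph orch daemon restart mgr.host1.abcdef
    cephadm logs --name mgr.host1.abcdef        # run on the MGR's host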

[ceph-users] Re: lib remoto in ubuntu

2021-06-11 Thread Sebastian Wagner
Hi Alfredo, if you don't use cephadm, then I'd recommend to not install the ceph-mgr-cephadm package. If you use cephadm with an ubuntu based container, you'll have to make sure that the MGR properly finds the remoto package within the container. Thanks, Sebastian Am 11.06.21 um 05:24 sch

[ceph-users] Re: mon vanished after cephadm upgrade

2021-05-14 Thread Sebastian Wagner
Hi Ashley, is sn-m01 listed in `ceph -s`? Which hosts are listed in `ceph orch ps --daemon-type mon`? Otherwise, there are two helpful commands now: * `ceph orch daemon rm mon.sn-m01` to remove the mon * `ceph orch daemon start mon.sn-m01` to start it again Am 14.05.21 um 14:14 schrieb

[ceph-users] Re: one of 3 monitors keeps going down

2021-04-29 Thread Sebastian Wagner
Right, here are the docs for that workflow: https://docs.ceph.com/en/latest/cephadm/mon/#mon-service Am 29.04.21 um 13:13 schrieb Eugen Block: Hi, instead of copying MON data to this one did you also try to redeploy the MON container entirely so it gets a fresh start? Zitat von "Robert W.
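A sketch of redeploying a single mon with the orchestrator; the daemon name is an example:
    ceph orch ps --daemon-type mon
    ceph orch daemon redeploy mon.ceph03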

[ceph-users] Re: cephadm: how to create more than 1 rgw per host

2021-04-19 Thread Sebastian Wagner
Hi Ivan, this is a feature that is not yet released in Pacific. It seems the documentation is a bit ahead of time right now. Sebastian On Fri, Apr 16, 2021 at 10:58 PM i...@z1storage.com wrote: > Hello, > > According to the documentation, there's count-per-host key to 'ceph > orch', but it doe

[ceph-users] Re: How to disable ceph-grafana during cephadm bootstrap

2021-04-14 Thread Sebastian Wagner
cephadm bootstrap --skip-monitoring-stack should do the trick. See man cephadm On Tue, Apr 13, 2021 at 6:05 PM mabi wrote: > Hello, > > When bootstrapping a new ceph Octopus cluster with "cephadm bootstrap", > how can I tell the cephadm bootstrap NOT to install the ceph-grafana > container? > >

[ceph-users] Re: cephadm custom mgr modules

2021-04-12 Thread Sebastian Wagner
You want to build a custom container for that use case indeed. On Mon, Apr 12, 2021 at 2:18 PM Rob Haverkamp wrote: > Hi there, > > I'm developing a custom ceph-mgr module and have issues deploying this on > a cluster deployed with cephadm. > With a cluster deployed with ceph-deploy, I can just
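A hypothetical sketch of such a custom image shipping an extra mgr module; the base image tag, registry and module name are placeholders:
    cat > Containerfile <<'EOF'
    FROM quay.io/ceph/ceph:v16.2.4
    COPY my_module/ /usr/share/ceph/mgr/my_module/
    EOF
    podman build -t registry.local:5000/ceph/ceph-custom:v16.2.4 .
    podman push registry.local:5000/ceph/ceph-custom:v16.2.4
    # then point the cluster at it, e.g. ceph orch upgrade start --image <that image>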

[ceph-users] Re: Unhealthy Cluster | Remove / Purge duplicate osds | Fix daemon

2021-03-16 Thread Sebastian Wagner
E = The high-level unit activation state, i.e. generalization of SUB. > SUB    = The low-level unit activation state, values depend on unit type. > > 4 loaded units listed. Pass --all to see loaded but inactive units, too. > To show all installed unit files use 'systemctl list-

[ceph-users] Re: Container deployment - Ceph-volume activation

2021-03-12 Thread Sebastian Wagner
Am 11.03.21 um 18:40 schrieb 胡 玮文: > Hi, > > Assuming you are using cephadm? Checkout this > https://docs.ceph.com/en/latest/cephadm/osd/#activate-existing-osds > > > ceph cephadm osd activate ... Might not be backported. see https://tracker.ceph.com/issues/46691#note-1 for the workaround

[ceph-users] Re: Unhealthy Cluster | Remove / Purge duplicate osds | Fix daemon

2021-03-12 Thread Sebastian Wagner
Hi Oliver, # ssh gedaopl02 # cephadm rm-daemon osd.0 should do the trick. Be careful to remove the broken OSD :-) Best, Sebastian Am 11.03.21 um 22:10 schrieb Oliver Weinmann: > Hi, > > On my 3 node Octopus 15.2.5 test cluster, that I haven't used for quite > a while, I noticed that it shows
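The fuller form of that command, as a sketch; the fsid is a placeholder, and `cephadm ls` helps to double-check the daemon name first:
    ssh gedaopl02
    cephadm ls | grep '"name"'                       # confirm which osd.N daemons exist here
    cephadm rm-daemon --name osd.0 --fsid <cluster-fsid>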

[ceph-users] Re: Cephadm: Upgrade 15.2.5 -> 15.2.9 stops on non existing OSD

2021-03-11 Thread Sebastian Wagner
yes Am 11.03.21 um 15:46 schrieb Kai Stian Olstad: > Hi Sebastian > > On 11.03.2021 13:13, Sebastian Wagner wrote: >> looks like >> >> $ ssh pech-hd-009 >> # cephadm ls >> >> is returning this non-existent OSDs. >> >> can you verif

[ceph-users] Re: cephadm (curl master)/15.2.9:: how to add orchestration

2021-03-11 Thread Sebastian Wagner
Hi Adrian, Am 11.03.21 um 13:55 schrieb Adrian Sevcenco: > Hi! After an initial bumpy bootstrapping (IMHO the defaults should be > whatever is already defined in .ssh of the user and custom values setup > with cli arguments) now i'm stuck adding any service/hosts/osds because > apparently i lack

[ceph-users] Re: Cephadm: Upgrade 15.2.5 -> 15.2.9 stops on non existing OSD

2021-03-11 Thread Sebastian Wagner
Hi Kai, looks like $ ssh pech-hd-009 # cephadm ls is returning this non-existent OSD. Can you verify that `cephadm ls` on that host doesn't print osd.355? Best, Sebastian Am 11.03.21 um 12:16 schrieb Kai Stian Olstad: > Before I started the upgrade the cluster was healthy but one > OSD(osd.
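A sketch of that check on the host; grepping the JSON output is enough:
    ssh pech-hd-009
    cephadm ls | grep '"name"'      # osd.355 should not show up here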

[ceph-users] Re: Alertmanager not using custom configuration template

2021-03-11 Thread Sebastian Wagner
Hi Mark, Indeed. I just merged https://github.com/ceph/ceph/pull/39932 which fixes the names of those config keys. Might want to try again (with slashes instead of underscores). Thanks for reporting this, Sebastian Am 10.03.21 um 15:34 schrieb Marc 'risson' Schmitt: > Hi, > > I'm trying to us

[ceph-users] Re: bug in latest cephadm bootstrap: got an unexpected keyword argument 'verbose_on_failure'

2021-03-03 Thread Sebastian Wagner
Indeed. That is going to be fixed by https://github.com/ceph/ceph/pull/39633 Am 03.03.21 um 07:31 schrieb Philip Brown: > Seems like someone is not testing cephadm on centos 7.9 > > Just tried installing cephadm from the repo, and ran > cephadm bootstrap --mon-ip=xxx > > it blew up, with > >

[ceph-users] Re: Python API mon_comand()

2021-01-15 Thread Sebastian Wagner
Am 15.01.21 um 09:24 schrieb Robert Sander: > Hi, > > I am trying to get some statistics via the Python API but fail to run the > equivalent of "ceph df detail". > > > ...snip... cluster.mon_command(json.dumps({'prefix': 'df detail', 'format': 'json'}), b'') > (-22, '', u'command n

[ceph-users] Re: Module 'dashboard' has failed: '_cffi_backend.CDataGCP' object has no attribute 'type'

2020-11-18 Thread Sebastian Wagner
Sounds like a bug. mind creating a tracker issue? https://tracker.ceph.com/projects/mgr/issues/new Am 17.11.20 um 17:39 schrieb Marcelo: > Hello all. > > I'm trying to deploy the dashboard (Nautilus 14.2.8), and after I run ceph > dashboard create-self-signed-cert, the cluster started to show th

[ceph-users] Re: cephadm & iSCSI

2020-09-04 Thread Sebastian Wagner
Thanks! do you want to create a bug for that? https://tracker.ceph.com/projects/orchestrator/issues/new Am 04.09.20 um 15:25 schrieb Robert Sander: > Hi, > > yes, I have read https://docs.ceph.com/docs/octopus/cephadm/stability/ > and know that the iSCSI support is still under development. > >

[ceph-users] Re: help me enable ceph iscsi gatewaty in ceph octopus

2020-08-05 Thread Sebastian Wagner
Until iSCSI is fully working in cephadm, you can install ceph-iscsi manually as described here: https://docs.ceph.com/docs/master/rbd/iscsi-target-cli/ Am 05.08.20 um 11:44 schrieb Hoài Thương: > hello swagner, > Can you give me the document? I use cephadm

[ceph-users] Re: help me enable ceph iscsi gatewaty in ceph octopus

2020-08-05 Thread Sebastian Wagner
Hi David, hi Ricardo, I think we first have to clarify whether that was actually a cephadm deployment (and not ceph-ansible). If you installed Ceph using ceph-ansible, then please refer to the ceph-ansible docs. If we're actually talking about cephadm here (which is not clear to me): iSCSI for cephadm

[ceph-users] Re: 6 hosts fail cephadm check (15.2.4)

2020-07-28 Thread Sebastian Wagner
Looks as if your cluster is still running 15.2.1. Have a look at https://docs.ceph.com/docs/master/cephadm/upgrade/ Am 28.07.20 um 09:57 schrieb Ml Ml: > Hello, > > i get: > > [WRN] CEPHADM_HOST_CHECK_FAILED: 6 hosts fail cephadm check > host ceph01 failed check: Failed to connect to ceph01

[ceph-users] Re: ceph orch apply [osd, mon] -i YAML file not found

2020-07-24 Thread Sebastian Wagner
Did you `alias ceph='cephadm shell -- ceph'`? Then `cat /root/osd_spec.yml | ceph orch apply -i -` should do the trick. Nevertheless, I'll remove the alias command in https://docs.ceph.com/docs/master/cephadm/install/?highlight=alias#enable-ceph-cli immediately. Thanks for the report. Am

[ceph-users] Re: Monitor IPs

2020-07-16 Thread Sebastian Wagner
Well, for a cephadm deployment, I'd recommend sticking to the workflow that deploys new MONs. In order to use the workflow based on injecting monmaps, I'd wait until we have tested documentation for it. Am 15.07.20 um 15:34 schrieb Amit Ghadge: > you can try, ceph mon set-addrs a [v2:

[ceph-users] Re: cephadm adoption failed

2020-07-14 Thread Sebastian Wagner
Strange. 0xc3 actually looks like UTF-8 encoded German to me. By chance, do you have a hexdump of the output of `chown -c -R $uid:$gid $data_dir_dst`? Am 13.07.20 um 20:51 schrieb Tobias Gall: > Hello, > > I'm trying to adopt an existing cluster. > The cluster consists of 5 converged (mon, mgr, osd, mds on

[ceph-users] Re: Error on upgrading to 15.2.4 / invalid service name using containers

2020-07-13 Thread Sebastian Wagner
Thanks! I've created https://tracker.ceph.com/issues/46497 Am 13.07.20 um 11:51 schrieb Mario J. Barchéin Molina: > Hello. We finally solved the problem, we just deleted the failed service > with: > > # ceph orch rm mds.label:mds > > and after that, we could finish the upgrade to 15.2.4. > >

[ceph-users] Re: Ceph SSH orchestrator?

2020-07-03 Thread Sebastian Wagner
Am 02.07.20 um 19:57 schrieb Oliver Freyermuth: > Dear Cephalopodians, > > as we all know, ceph-deploy is on its demise since a while and essentially in > "maintenance mode". > > We've been eyeing the "ssh orchestrator" which was in Nautilus as the > "successor in spirit" of ceph-deploy. >

[ceph-users] Re: How to ceph-volume on remote hosts?

2020-06-24 Thread Sebastian Wagner
Am 24.06.20 um 05:15 schrieb steven prothero: > Hello, > > I am new to CEPH and on a few test servers attempting to setup and > learn a test ceph system. > > I started off the install with the "Cephadm" option and it uses podman > containers. > Followed steps here: > https://docs.ceph.com/docs/

[ceph-users] Re: ceph orch upgrade stuck at the beginning.

2020-06-04 Thread Sebastian Wagner
] : > pgmap v69: 97 pgs: 97 active+clean; 4.8 GiB data, 12 GiB used, 87 TiB / 87 > TiB avail > 2020-05-21T18:54:45.703+ 7faed5caa700 0 log_channel(cluster) log [DBG] : > pgmap v70: 97 pgs: 97 active+clean; 4.8 GiB data, 12 GiB used, 87 TiB / 87 > TiB avail > 2020-05-21T1

[ceph-users] Re: Cephadm Setup Query

2020-06-04 Thread Sebastian Wagner
Am 26.05.20 um 08:16 schrieb Shivanshi .: > Hi, > > I am facing an issue on Cephadm cluster setup. Whenever, I try to add > remote devices as OSDs, command just hangs. > > The steps I have followed : > > sudo ceph orch daemon add osd node1:device > >   > > 1. For the setup I have followed t

[ceph-users] Re: Octopus 15.2.2 unable to make drives available (reject reason locked)...

2020-06-04 Thread Sebastian Wagner
Hi Marco, note that encrypted OSDs will land in the next octopus release. Regarding the locked state, you could run ceph-volume directly on the host to understand the issue better. c-v should give you the reasons. Am 29.05.20 um 03:18 schrieb Marco Pizzolo: > Rebooting addressed > > On Thu,
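A sketch of inspecting the devices directly on the affected host; cephadm wraps ceph-volume in the matching container:
    cephadm ceph-volume -- inventory
    cephadm ceph-volume -- lvm list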

[ceph-users] Re: Cephadm Hangs During OSD Apply

2020-06-04 Thread Sebastian Wagner
encrypted OSDS should land in the next octopus release: https://tracker.ceph.com/issues/44625 Am 27.05.20 um 20:31 schrieb m...@silvenga.com: > I noticed the luks volumes were open, even though luksOpen hung. I killed > cryptsetup (once per disk) and ceph-volume continued and eventually created

[ceph-users] Re: ceph orch upgrade stuck at the beginning.

2020-05-25 Thread Sebastian Wagner
Am 22.05.20 um 19:28 schrieb Gencer W. Genç: > Upgrade: It is NOT safe to stop mon.vx-rg23-rk65-u43-130 Please make sure `ceph mon ok-to-stop mon.vx-rg23-rk65-u43-130` returns ok.

[ceph-users] Re: ceph orch upgrade stuck at the beginning.

2020-05-20 Thread Sebastian Wagner
Hi Gencer, I'm going to need the full mgr log file. Best, Sebastian Am 20.05.20 um 15:07 schrieb Gencer W. Genç: > Ah yes, > > { >     "mon": { >         "ceph version 15.2.1 (9fd2f65f91d9246fae2c841a6222d34d121680ee) > octopus (stable)": 2 >     }, >     "mgr": { >         "ceph version 15.2.

[ceph-users] Re: Cephadm and rados gateways

2020-05-18 Thread Sebastian Wagner
This will be fixed in 15.2.2 https://tracker.ceph.com/issues/45215 ___ ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an email to ceph-users-le...@ceph.io

[ceph-users] Re: How to apply ceph.conf changes using new tool cephadm

2020-05-05 Thread Sebastian Wagner
ceph@elchaka.de wrote: > I am not absolutely sure but you should be able to do something like > > ceph config mon set Yes, please use `ceph config ...`. cephadm only uses a minimal ceph.conf which only contains the IPs of the other MONs. > > Or try to restart the mon/osd daemon > > Hth > > Am
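For reference, a sketch of the centralized config store usage; the option and value are just examples:
    ceph config set osd osd_max_backfills 2
    ceph config get osd osd_max_backfills
    ceph config dump                 # everything stored in the mon config database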

[ceph-users] Re: How to debug ssh: ceph orch host add ceph01 10.10.1.1

2020-04-29 Thread Sebastian Wagner
We've improved the docs a little bit. Does https://docs.ceph.com/docs/master/cephadm/troubleshooting/#ssh-errors help you now? ___ ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an email to ceph-users-le...@ceph.io
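The manual check from that page looks roughly like this; the host name is an example:
    ceph cephadm get-ssh-config > ssh_config
    ceph config-key get mgr/cephadm/ssh_identity_key > key
    chmod 0600 key
    ssh -F ssh_config -i key root@ceph01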

[ceph-users] Re: PGs unknown (osd down) after conversion to cephadm

2020-04-16 Thread Sebastian Wagner
edia"}, "last_refresh": > "2020-04-15T15:26:53.664473", "created": "2020-03-30T23:51:32.239555"}, > {"container_image_id": > "204a01f9b0b6710dd0c0af7f37ce7139c47ff0f0105d778d7104c69282dfbbf1", > "container_image

[ceph-users] Re: PGs unknown (osd down) after conversion to cephadm

2020-04-14 Thread Sebastian Wagner
Might be an issue with cephadm. Do you have the output of `ceph orch host ls --format json` and `ceph orch ls --format json`? Am 09.04.20 um 13:23 schrieb Dr. Marco Savoca: > Hi all, > >   > > last week I successfully upgraded my cluster to Octopus and converted it > to cephadm. The conversion

[ceph-users] Re: osd with specifiying directories

2020-04-06 Thread Sebastian Wagner
Hi Micha, cephadm does not (yet) support Filestore. See https://tracker.ceph.com/issues/44874 for details. Best, Sebastian Am 03.04.20 um 10:11 schrieb Micha: > Hi, > > I want to try using object storage with java. > Is it possible to set up osds with "only" directories as data destination >