Hi! After an initial bumpy bootstrapping (IMHO the defaults should be
whatever is already defined in .ssh of the user and custom values set up
with cli arguments) now i'm stuck adding any service/hosts/osds because
apparently i lack orchestration .. the documentation shows a big
"Page does no
On 3/11/21 3:07 PM, Sebastian Wagner wrote:
Hi Adrian,
Hi!
On 11.03.21 at 13:55, Adrian Sevcenco wrote:
Hi! After an initial bumpy bootstrapping (IMHO the defaults should be
whatever is already defined in .ssh of the user and custom values set up
with cli arguments) now i'm stuck addin
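In case "i lack orchestration" simply means that no orchestrator backend is active yet, here is a minimal sketch of checking and enabling the cephadm backend (this assumes the cluster was bootstrapped with cephadm, as above):

# is an orchestrator backend configured at all?
ceph orch status
# if not, enable the cephadm mgr module and select it as the backend
ceph mgr module enable cephadm
ceph orch set backend cephadm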
Hi! So, after i selected the tags to add 2 nvme ssds i declared a
replicated n=2 pool .. and for the last 30 min the progress shown in the
notification is 0% and iotop shows around 100K/s for 2 (???) ceph-mon
processes and that's all ...
and in my service list the osd services look somehow empty:
ht
On 3/11/21 4:45 PM, Adrian Sevcenco wrote:
Hi! So, after i selected the tags to add 2 nvme ssds i declared a
replicated n=2 pool .. and for the last 30 min the progress shown in the
notification is 0% and iotop shows around 100K/s for 2 (???) ceph-mon
processes and that's all ...
and in my service
On 3/11/21 5:01 PM, Adrian Sevcenco wrote:
On 3/11/21 4:45 PM, Adrian Sevcenco wrote:
Hi! So, after i selected the tags to add 2 nvme ssds i declared a
replicated n=2 pool .. and for the last 30 min the progress shown in the
notification is 0% and iotop shows around 100K/s for 2 (???) ceph-mon
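A minimal sketch of what I would check while the OSD daemons are still being created (nothing else is assumed about the setup beyond a cephadm bootstrap):

# which devices cephadm considers usable for OSDs
ceph orch device ls
# create OSDs on all available devices (or use a drive group spec instead)
ceph orch apply osd --all-available-devices
# watch the osd daemons appear and join the cluster
ceph orch ps --daemon-type osd
ceph osd tree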
Hi! yesterday i bootstrapped (with cephadm) my first ceph installation
and things looked somehow ok .. but today the osds are not yet ready and
i have these warnings in the dashboard:
MDS_SLOW_METADATA_IO: 1 MDSs report slow metadata IOs
PG_AVAILABILITY: Reduced data availability: 64 pgs inactive
PG_
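A quick sketch of the commands I'd use to dig into warnings like these (nothing cluster-specific is assumed):

# full detail on the active health warnings
ceph health detail
# which PGs are inactive and in what state
ceph pg dump_stuck inactive
# pool size/min_size and crush rule; on a very small cluster these often
# explain PGs that never become active
ceph osd pool ls detail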
On 3/12/21 12:31 PM, Eneko Lacunza wrote:
Hi Adrian,
Hi!
On 12/3/21 at 11:26, Adrian Sevcenco wrote:
Hi! yesterday i bootstrapped (with cephadm) my first ceph installation
and things looked somehow ok .. but today the osds are not yet ready
and i have these warnings in the dashboard
i will also try external clients but of course i will be capped at the
theoretical 120 MiB/s, which i'm curious if i can touch (9k MTU)
Thanks!!
Adrian
Hi! What steps/procedures are required for emergency shutdown and for machine
maintenance?
Thanks a lot!
Adrian
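For what it's worth, the sequence I've seen recommended for a planned shutdown or maintenance window looks roughly like this (a sketch, not an official procedure; on cephadm hosts the per-cluster systemd target is ceph-<fsid>.target):

# stop client I/O first, then keep the cluster from rebalancing while OSDs go down
ceph osd set noout
ceph osd set norebalance
# stop all ceph daemons on the host
systemctl stop ceph.target
# ... power off, do the maintenance, power back on ...
# once the daemons are back and PGs are active again, clear the flags
ceph osd unset norebalance
ceph osd unset noout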
Thanks a lot again :)
Adrian
If you scripted it, there’s probably not a lot of difference in the time taken
to shut down.
A.
Hi! I have a single machine ceph installation and after trying to update to
pacific the upgrade is stuck with:
ceph -s
  cluster:
    id:     d9f4c810-8270-11eb-97a7-faa3b09dcf67
    health: HEALTH_WARN
            Upgrade: Need standby mgr daemon
  services:
    mon: 1 daemons, quorum sev.spacescience.ro (age 3w)
    mgr: sev
Thank you!
Adrian
when the primary is restarted. I suspect you would then run into the same
thing with the mon. All sorts of things
tend to crop up on a cluster this minimal.
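For reference, on a cluster with more than one host the usual way past the "Need standby mgr daemon" check is simply to make sure a second mgr exists (a sketch; on a genuinely single-node setup this may not be enough, which is what the note further down about single-node upgrades refers to):

# ask cephadm for two mgr daemons so one can act as standby
ceph orch apply mgr --placement=2
ceph orch ps --daemon-type mgr
# once ceph -s reports a standby, check the upgrade again
ceph orch upgrade status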
On Apr 1, 2021, at 10:15 AM, Adrian Sevcenco wrote:
Hi! I have a single machine ceph installation and after tryi
Hi! How/where can i change the image configured for a service?
I tried to modify /var/lib/ceph///unit.{image,run}
but after restarting
ceph orch ps shows that the service uses the same old image.
What other configuration locations are there for the ceph components
besides /etc/ceph (which is quite
On 4/5/21 3:27 PM, 胡 玮文 wrote:
On 2021-04-05 at 19:29, Adrian Sevcenco wrote:
Hi! How/where can i change the image configured for a service?
I tried to modify /var/lib/ceph///unit.{image,run}
but after restarting
ceph orch ps shows that the service uses the same old image.
Hi Adrian,
Hi!
Try
IMAGE then again the single image)
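In case it helps, a sketch of doing this through the orchestrator instead of editing unit.run by hand (the image tag is only an example, the daemon name is a placeholder, and the exact flags can differ between releases):

# redeploy a single daemon with an explicit image
ceph orch daemon redeploy <daemon.name> --image quay.io/ceph/ceph:v16.2.7
# or move the whole cluster to a given image via the managed upgrade path
ceph orch upgrade start --image quay.io/ceph/ceph:v16.2.7
# verify which image each daemon reports afterwards
ceph orch ps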
Thanks a lot!!
Adrian
Supporting automated single-node upgrades is high on the list.. we
hope to have it fixed soon.
s
On Thu, Apr 1, 2021 at 1:24 PM Adrian Sevcenco wrote:
On 4/1/21 8:19 PM, Anthony D'Atri wrote:
I think what it’s saying is that i
Hi! is it (technically) possible to instruct cephfs to store files < 1 MiB on a
(replicated) pool
and the other files on another (ec) pool?
And even more, is it possible to take the same kind of decision based on the path
of the file?
(let's say that critical files with names like r"/critical_path/cri
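For the path-based part at least, CephFS file layouts can do this per directory; a minimal sketch (the pool name cephfs_ec, the fs name cephfs and the mount point /mnt/cephfs are made up):

# allow overwrites on the EC pool and attach it to the filesystem
ceph osd pool set cephfs_ec allow_ec_overwrites true
ceph fs add_data_pool cephfs cephfs_ec
# pin a directory subtree to that pool; new files created under it go there
setfattr -n ceph.dir.layout.pool -v cephfs_ec /mnt/cephfs/bulk
# inspect the resulting layout of a file
getfattr -n ceph.file.layout /mnt/cephfs/bulk/somefile

As far as I know there is no built-in way to route files by size, only by directory/file layout.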
Hi! I'm new to ceph and i struggle to make a mapping between
my current storage knowledge and ceph...
So, i will state my understanding of the context and the question,
so please correct me on anything that i got wrong :)
So, files (or pieces of files) are put in PGs that are given sections
o
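To make that mapping concrete, it can be inspected directly on a running cluster (the pool and object names below are just examples):

# which PG an object hashes to, and which OSDs that PG maps to
ceph osd map mypool myobject
# pools with their pg_num and crush rule
ceph osd pool ls detail
# overall PG-to-OSD state
ceph pg stat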
Original Message
Subject: [ceph-users] crushmap rules :: host selection
From: Anthony D'Atri
To: Adrian Sevcenco
Date: 1/28/2024, 3:56:21 AM
First of all, thanks a lot for the info and for taking the time to help
a beginner :)
Pools are a logical name for a storage space bu
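As a concrete example of how a pool gets tied to hosts, a sketch of creating a replicated rule with host as the failure domain and assigning it to a pool (the rule and pool names are placeholders):

# create a rule that spreads replicas across distinct hosts
ceph osd crush rule create-replicated replicated_hosts default host
# point a pool at it and set the replica count
ceph osd pool set mypool crush_rule replicated_hosts
ceph osd pool set mypool size 3
# inspect the resulting rule
ceph osd crush rule dump replicated_hosts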
Original Message
Subject: [ceph-users] Re: crushmap rules :: host selection
From: Anthony D'Atri
To: Adrian Sevcenco
Date: 1/28/2024, 6:03:21 PM
First of all, thanks a lot for the info and for taking the time to help
a beginner :)
Don't mention it. This is a community, it’s
Original Message
Subject: [ceph-users] crushmap rules :: host selection
From: Anthony D'Atri
To: Adrian Sevcenco
Date: 1/28/2024, 11:34:00 PM
so it depends on the failure domain .. but with host failure domain, if there is
space on some other OSDs
will the missing OS
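And to see what actually happens when a host in the failure domain drops out, a few commands worth watching (nothing here is specific to a particular setup):

# which failure domain each rule uses (look for "type host" in the steps)
ceph osd crush rule dump
# host/OSD layout and which OSDs are currently down
ceph osd tree
# degraded/remapped PGs backfilling onto OSDs on the surviving hosts
ceph pg stat
ceph -s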