unable to change the
URL.
I tried a
ceph config-key dump | grep OLD_IP
and didn't find it.
So where is this information stored?
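A few places where a lingering old address can hide, as a hedged checklist (OLD_IP is the placeholder from above):
  ceph config dump | grep OLD_IP             # centralized config options
  ceph mon dump                              # monitor addresses recorded in the monmap
  ceph dashboard get-prometheus-api-host     # URL the mgr/dashboard uses to reach Prometheus
  ceph dashboard get-alertmanager-api-host   # URL the mgr/dashboard uses to reach Alertmanager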
Regards
JAS
--
Albert SHIH 🦫 🐸
Observatoire de Paris
France
Heure locale/Local time:
jeu. 11 juil. 2024 10:22:58 CEST
___
On 11/07/2024 at 10:27:09+0200, Albert Shih wrote:
> Hi everyone
>
> I just changed the subnet of my cluster.
>
> The cephfs part seems to be working well.
>
> But I got many errors with
>
> Jul 11 10:08:35 hostname ceph-*** ts=2024-07-11T08:08:35.364Z
address...but I can't find where and of course I'm unable to change it.
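If the address turns out to be baked into the cephadm-generated monitoring configuration, one hedged approach (assuming the monitoring stack is managed by the orchestrator) is simply to let cephadm regenerate those configs:
  ceph orch redeploy prometheus
  ceph orch redeploy alertmanager
  ceph orch redeploy grafana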
Regards
--
Albert SHIH 🦫 🐸
Observatoire de Paris
France
Heure locale/Local time:
jeu. 11 juil. 2024 10:51:25 CEST
___
some service needs a restart.
>
> I misread the line, maybe you need to update alertmanager instead of
> prometheus.
Nope, neither.
Thanks.
Regards
--
Albert SHIH 🦫 🐸
Observatoire de Paris
France
Heure locale/Local time:
jeu. 11 juil. 2024 11:23:38 CEST
Any clue or debugging method ?
Regards
--
Albert SHIH 🦫 🐸
Observatoire de Paris
France
Heure locale/Local time:
mar. 16 juil. 2024 14:26:47 CEST
___
y mount they take a few minutes to answer me
> >
> > mount error: no mds server is up or the cluster is laggy
> >
> > on the client I can see :
> >
> > Jul 16 14:10:43 Debian12-1 kernel: [ 860.636012] ceph: corrupt mdsmap
> > Jul 16 14:23:37 Debian12-2 k
he OSD.
Then I have been trying some redeploys of the mds --> no joy.
This morning I restarted an osd and noticed the restarted osd listens on v2
and v1, so I restarted all the osds.
After that, every osd listens on v2 and v1.
But I am still unable to mount the cephfs.
I tried the option ms_mode=prefer
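For reference, a hedged sketch of a kernel-client mount with an explicit ms_mode (fsid, user and secret file are placeholders; valid values include prefer-crc, crc, secure, prefer-secure and legacy):
  mount -t ceph vo@<fsid>.cephfs=/ /mnt -o secretfile=/etc/ceph/vo.secret,ms_mode=prefer-crc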
I need to do a chown ceph:ceph on the directory, but on the next reboot the
dir returns to nobody.
Any way to fix this minor bug (besides some cron + chown)?
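One hedged workaround (the affected directory is cut off above, so the path below is only a guess) is a systemd-tmpfiles rule that reasserts ownership at every boot instead of a cron job:
  # hypothetical path, adjust to the directory that keeps reverting to nobody
  printf 'z /var/lib/ceph/<fsid> 0750 ceph ceph -\n' > /etc/tmpfiles.d/ceph-owner.conf
  systemd-tmpfiles --create /etc/tmpfiles.d/ceph-owner.conf   # apply now; it also runs at every boot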
Regards
--
Albert SHIH 🦫 🐸
Observatoire de Paris
France
Heure locale/Local time:
jeu. 18 juil. 2024 09:
On 18/07/2024 at 10:27:09+0200, David C. wrote:
Hi,
>
> perhaps a conflict with the udev rules of locally installed packages.
>
> Try uninstalling ceph-*
Sorry... not sure I understand. You want me to uninstall ceph?
Regards.
JAS
--
Albert SHIH 🦫 🐸
Observatoire de Paris
don't need the ceph-*
packages on the host hosting the containers.
I've no idea how they ended up installed. You mean I can safely uninstall all
of them on all my nodes?
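A hedged way to see what is actually there before removing anything (cephadm itself and ceph-common, which provides the ceph/rbd/mount.ceph client tools, are usually worth keeping on container hosts):
  dpkg -l | grep -i ceph                                          # list installed ceph-related packages
  apt-get remove ceph-base ceph-mon ceph-osd ceph-mds ceph-mgr    # daemon packages, not needed when daemons run in containers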
Regards.
--
Albert SHIH 🦫 🐸
Observatoire de Paris
France
Heure locale/Local time:
jeu. 18 juil.
On 18/07/2024 at 11:00:56+0200, Albert Shih wrote:
> On 18/07/2024 at 10:56:33+0200, David C. wrote:
>
Hi,
>
> > Your ceph processes are in containers.
>
> Yes I know but in my install process I just install
>
> ceph-common
> ceph-base
>
> then
all Python 3 utility libraries for Ceph
ii python3-cephfs 18.2.2-1~bpo12+1
amd64    Python 3 libraries for the Ceph libcephfs library
root@cthulhu1:~#
Thanks !!!
Regards.
--
Albert SHIH 🦫 🐸
Observatoire de Paris
France
Heure local
ally prefer Debian mainly for its stability and easy
> upgrade-in-place. What are your preferences?
I'm not the right person to answer you; I'm just wondering why not use the
orchestrator?
Regards
JAS
--
Albert SHIH 🦫 🐸
Observatoire de Paris
Heure locale/Local time:
jeu. 0
flavor with Ceph (well
almost).
So I'm running Debian just because it is the one I'm most familiar with, and
I'm running ceph with podman.
I'm new to ceph, so I'd just like to know if there is any downside to doing
that.
Regards.
--
Albert SHIH 🦫 🐸
Observatoire de Paris
Fra
to have a
replica (with only 1 copy of course) of a pool from the “row” primary to the
secondary.
How can I achieve that?
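One building block for that, as a sketch only (it assumes the CRUSH map already contains buckets named row-primary and row-secondary, and it only places a pool on the secondary row; actually copying data between the two pools still needs something like rbd mirroring or a client-side copy):
  ceph osd crush rule create-replicated on-row-secondary row-secondary host
  ceph osd pool create backup_pool 64 64 replicated on-row-secondary
  ceph osd pool set backup_pool size 1   # single copy; newer releases also require mon_allow_pool_size_one and --yes-i-really-mean-it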
Regards
--
Albert SHIH 🦫 🐸
mer. 08 nov. 2023 18:37:54 CET
___
opy everything
from the “row primary” to “row secondary”.
Regards
>
>
> On Wed 8 Nov 2023 at 18:45, Albert Shih wrote:
>
> Hi everyone,
>
> I'm a total newbie with ceph, so sorry if I'm asking a stupid question.
>
> I'm trying to
luster are not to get the maximum
I/O speed, I would not say the speed is not a factor, but it's not the main
point.
Regards.
--
Albert SHIH 🦫 🐸
Observatoire de Paris
France
Heure locale/Local time:
ven. 17 nov. 2023 10:49:27 CET
___
n is to use the capability of ceph to migrate the data by itself from
old to new hardware.
So short answer: not enough money ;-) ;-)
Regards.
--
Albert SHIH 🦫 🐸
France
Heure locale/Local time:
sam. 18 nov. 2023 09:19:03 CET
___
er, you can always think about relocating certain services.
Ok, thanks for the answer.
Regards.
--
Albert SHIH 🦫 🐸
Observatoire de Paris
France
Heure locale/Local time:
sam. 18 nov. 2023 09:26:56 CET
___
the default and make sure you have
> sufficient unused capacity to increase the chances for large bluestore writes
> (keep utilization below 60-70% and just buy extra disks). A workload with
> large min_alloc_sizes has to be S3-like, only upload, download and delete are
> allowed.
Thanks.
where I lose 2 disks out of 9-12
disks).
So my question is: does anyone use erasure coding at large scale for critical data
(same level as raidz1/raid5 or raidz2/raid6)?
Regards
--
Albert SHIH 🦫 🐸
Observatoire de Paris
France
Heure locale/Local time:
jeu. 23 nov. 20
don't manually touch ceph.conf?
And what about the future ?
Regards.
--
Albert SHIH 🦫 🐸
France
Heure locale/Local time:
jeu. 23 nov. 2023 15:21:47 CET
___
all interested in the future, for that is where you and I are going to
> spend the rest of our lives.
Correct ;-)
Sorry, my question was not very clear. My question was in fact about which
way we are headed. But I'm guessing the answer is “ceph config” or something
like
configuration
to put in the /etc/ceph/ceph.conf
> the label _admin to your host in "ceph orch host" so that cephadm takes care
> of maintaining your /etc/ceph.conf (outside the container).
Ok. I'm indeed using ceph orch & co.
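For the record, a minimal sketch of the _admin label approach mentioned above (the host name is taken from elsewhere in the thread):
  ceph orch host label add cthulhu1 _admin   # cephadm then maintains /etc/ceph/ceph.conf and the admin keyring on that host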
Thanks.
Regards.
JAS
--
Albert SHIH 🦫
t the install of ceph
«into» our puppet config.
As soon as I removed the old version of cephadm and installed the 17.2.7 version,
everything worked fine again.
Regards.
--
Albert SHIH 🦫 🐸
France
Heure locale/Local time:
mer. 29 nov. 2023 22:06:35 CET
___
ivate network.
Is there any way to configure both public_network and private_network
with cephadm bootstrap?
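A hedged sketch, assuming a recent cephadm that accepts --cluster-network; otherwise the cluster network can be set right after bootstrap (addresses are placeholders):
  cephadm bootstrap --mon-ip 192.0.2.10 --cluster-network 198.51.100.0/24
  # or, after bootstrap:
  ceph config set global cluster_network 198.51.100.0/24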
Regards.
--
Albert SHIH 🦫 🐸
France
Heure locale/Local time:
jeu. 30 nov. 2023 18:27:08 CET
___
r to the other ceph cluster server.
In fact I added something like

for host in $(cat /usr/local/etc/ceph_list_noeuds.txt)
do
    # push the local ceph.conf and keyrings to every node in the list
    /usr/bin/rsync -av /etc/ceph/ceph* "$host:/etc/ceph/"
done

in a cronjob.
Regards.
--
Albert SHIH 🦫 🐸
France
Heure locale/Lo
same
cephfs_data_replicated/erasure pool ?
Regards
--
Albert SHIH 🦫 🐸
France
Heure locale/Local time:
mer. 24 janv. 2024 09:33:09 CET
___
On 24/01/2024 at 09:45:56+0100, Robert Sander wrote:
Hi
>
> On 1/24/24 09:40, Albert Shih wrote:
>
> > Knowing I got two classes of osd (hdd and ssd), and I have a need for ~ 20/30
> > cephfs (currently, and that number will increase with time).
>
> Why do you n
ay but that's OK ;-)
>
> What Robert emphasizes is that creating pools dynamically is not without
> effect
> on the number of PGs and (therefore) on the architecture (PG per OSD,
> balancer,
> pg autoscaling, etc.)
Ok. No worries. I didn't know it was possib
On 24/01/2024 at 10:33:45+0100, Robert Sander wrote:
Hi,
>
> On 1/24/24 10:08, Albert Shih wrote:
>
> > 99.99% because I'm a newbie with ceph and don't understand clearly how
> > the authorization works with cephfs ;-)
>
> I strongly recommend you to ask f
deploy the mds, and the «new» way to do it is to use ceph fs volume.
But with ceph fs volume I didn't find any documentation on how to set the
metadata/data pools.
I also didn't find any way to change the pools after the creation of the
volume.
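For comparison, the «old» way lets you pick the pools explicitly, and an extra data pool can still be attached to a filesystem afterwards (names here are placeholders):
  ceph osd pool create cephfs_meta
  ceph osd pool create cephfs_data
  ceph fs new mycephfs cephfs_meta cephfs_data
  ceph fs add_data_pool mycephfs cephfs_data_ec   # attach an additional (e.g. erasure-coded) data pool later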
Thanks
--
Albert SHIH 🦫 🐸
France
Heure lo
Yes, but I'm guessing that
ceph fs volume
is the «future», so it would be super nice to add (at least) the option to
choose the pair of pools...
>
> I haven't looked too deep into changing the default pool yet, so there might
> be a way to switch that as well.
Ok. I will al
Hi
When I deployed my cluster I didn't notice that on two of my servers the private
network was not working (wrong vlan). Now it's working, but how can I check
that it's indeed working (currently I don't have data)?
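A couple of hedged checks that the OSDs really use the cluster network (osd id 0 is just an example):
  ceph config get osd cluster_network                    # what the OSDs are told to use
  ceph osd metadata 0 | grep -E 'back_addr|front_addr'   # the addresses a given OSD actually bound to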
Regards
--
Albert SHIH 🦫 🐸
France
Heure locale/Local time:
lun.
On 29/01/2024 at 22:43:46+0100, Albert Shih wrote:
> Hi
>
> When I deployed my cluster I didn't notice that on two of my servers the private
> network was not working (wrong vlan). Now it's working, but how can I check
> that it's indeed working (currently I don't have
ed-69f03a7303e9
/mnt
but on my test client I'm unable to mount
root@ceph-vo-m:/etc/ceph# mount -t ceph
vo@fxxx-c0f2-11ee-9307-f7e3b9f03075.cephfs=/volumes/_nogroup/erasure/998e3bdf-f92b-4508-99ed-69f03a7303e9/
/vo --verbose
parsing options: rw
source mount path was not specified
unable
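That error can simply mean the locally installed mount.ceph helper is too old to parse the new name@fsid.fsname=/path device string; a hedged fallback is the classic syntax (the monitor address and secret file are placeholders):
  mount -t ceph 192.0.2.10:6789:/volumes/_nogroup/erasure/998e3bdf-f92b-4508-99ed-69f03a7303e9/ /vo -o name=vo,secretfile=/etc/ceph/vo.secret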
On 02/02/2024 at 16:34:17+0100, Albert Shih wrote:
> Hi,
>
>
> A little basic question.
>
> I created a volume with
>
> ceph fs volume
>
> then a subvolume called «erasure» I can see that with
>
> root@cthulhu1:/etc/ceph# ceph fs subvolume info cephfs
s.ceph.com/en/quincy/radosgw/
I can see a lot of very detailed documentation about each component, but
cannot find more global documentation.
Is there any newer documentation somewhere? I think it's not a good idea to use
the one for octopus...
Regards
--
Albert SHIH 🦫 🐸
France
Heure locale/Loca
On 12/02/2024 at 18:38:08+0100, Kai Stian Olstad wrote:
> On 12.02.2024 18:15, Albert Shih wrote:
> > I couldn't find documentation about how to install an S3/Swift API (as I
> > understand, it's RadosGW) on quincy.
>
> It depends on how you have install
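With a cephadm-managed cluster, one hedged way to get RGW running is through the orchestrator (service id and placement are placeholders):
  ceph orch apply rgw myrgw --placement="2 host1 host2"
  ceph orch ps | grep rgw   # check that the daemons came up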
ay to keep the first answer ?
Regards
--
Albert SHIH 🦫 🐸
France
Heure locale/Local time:
jeu. 22 févr. 2024 08:44:17 CET
___
if I use 3 replicas, that means when I write 100G of data
available space = quota limit - 100G x 3
Regards
JAS
--
Albert SHIH 🦫 🐸
Observatoire de Paris
France
Heure locale/Local time:
sam. 24 févr. 2024 10:09:12 CET
___
//docs.ceph.com/en/quincy/mgr/rgw/#mgr-rgw-module
so now I have some rgw daemons running.
But I'd like to clean up and «erase» everything about rgw, not only to try
to understand but also because I think I mixed up realm and
zonegroup...
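A hedged cleanup sketch (service and pool names are placeholders; deleting the pools destroys all RGW data and the realm/zonegroup/period metadata, which lives in .rgw.root):
  ceph orch rm rgw.myrgw        # remove the orchestrator-managed rgw service
  ceph osd pool ls | grep rgw   # the pools RGW created (.rgw.root, default.rgw.*, ...)
  ceph config set mon mon_allow_pool_delete true
  ceph osd pool rm .rgw.root .rgw.root --yes-i-really-really-mean-it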
Regards
--
Albert SHIH 🦫 🐸
France
Heure lo
On 05/03/2024 at 11:54:34+0100, Robert Sander wrote:
Hi,
> On 3/5/24 11:05, Albert Shih wrote:
>
> > But I'd like to clean up and «erase» everything about rgw, not only to try
> > to understand but also because I think I mixed up realm and
> > zonegroup...
>
in icinga.
Thanks.
--
Albert SHIH 🦫 🐸
France
Heure locale/Local time:
ven. 22 mars 2024 22:24:35 CET
___
Hi,
With our small cluster (11 nodes) I notice that ceph logs a lot.
Besides keeping that somewhere «just in case», is there anything to check
regularly in the logs (to prevent more serious problems)? Or can we
trust «ceph health» and use the logs only for debugging?
Regards
--
Albert SHIH 🦫 🐸
2 ceph-mgr[2843]: mgr.server handle_open ignoring open
from mds.cephfs.cthulhu3.xvboir v2:145.238.187.186:6800/1297104944; not ready
for session (expect reconnect)
Mar 25 13:18:39 cthulhu2 ceph-mgr[2843]: mgr.server handle_open ignoring open
from mds.cephfs.cthulhu2.dqahyt v2
On 25/03/2024 at 08:28:54-0400, Patrick Donnelly wrote:
Hi,
>
> The fix is in one of the next releases. Check the tracker ticket:
> https://tracker.ceph.com/issues/63166
Oh thanks. Didn't find it with google.
Is there any risk/impact for the cluster?
Regards.
--
Alb
but I don't know if that's related.
Any clue ?
Regards
--
Albert SHIH 🦫 🐸
France
Heure locale/Local time:
mar. 26 mars 2024 10:52:53 CET
___
hat).
Regards
--
Albert SHIH 🦫 🐸
France
Heure locale/Local time:
mer. 27 mars 2024 09:18:04 CET
___
out it was a micron ssd.
So my question: what's the best thing to do?
Which «plugin» should I use and how do I tell cephadm what to do?
Regards
--
Albert SHIH 🦫 🐸
France
Heure locale/Local time:
mer. 27 mars 2024 15:43:54 CET
___
waiting)
But I tried to find «why», so I checked all the OSDs related to this pg and
didn't find anything: no error from the osd daemon, no errors from smartctl, no
error in the kernel messages.
So I'd just like to know if that's «normal» or whether I should dig deeper.
JAS
--
Albert SHIH 🦫 🐸
Fr
crubbing, but make sure
> you're using the alert module [1] so to at least get informed about the scrub
> errors.
Thanks. I will look into it because we already have icinga2 on site, so I use
icinga2 to check the cluster.
Is there a list of what the alert module is going to check?
Regar
troubleshooting-osd/
but no luck.
I don't find any message in dmesg.
Zero messages with journalctl,
zero messages with systemctl status.
In the end I rebooted the server once more and everything worked fine
again.
Has anyone encountered something like that? Is that “normal”?
Regards
-
bbing for Xs
or
queued for deep scrub
So my questions are:
Why does ceph tell me «1» pg has not been scrubbed when I see 15?
Is there any way to find which pg ceph status is talking about?
Is there any way to see the progress of scrubbing/remapping/backfill?
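A hedged example of drilling down (the pg id is a placeholder):
  ceph health detail          # lists the exact pg id(s) behind the warning
  ceph pg 4.12 query | less   # state, scrub timestamps and acting set of one pg
  ceph pg deep-scrub 4.12     # manually queue a deep scrub for it
  ceph status                 # progress of recovery/backfill shows up here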
Regards
--
Albert SH
On 20/09/2024 at 11:01:20+0200, Albert Shih wrote:
Hi,
>
> >
> > > Is there any way to find which pg ceph status is talking about.
> >
> > 'ceph health detail' will show you which PG it's warning about.
>
> Too easy for me ;-) ;-)...Th
[0]
> https://docs.ceph.com/en/latest/rados/operations/health-checks/#pg-not-deep-scrubbed
> [1]
> https://heiterbiswolkig.blogs.nde.ag/2024/09/06/pgs-not-deep-scrubbed-in-time/
Thanks...
Regards.
--
Albert SHIH 🦫 🐸
Observatoire de Paris
France
Heure locale/Local time:
ven. 20 sept.
ic is if the container fails before that point is reached.
>
> I've diagnosed Ceph problems by doing a "brute force" launch of a Ceph
> container without the "-d" option, but it's not for the faint of heart.
;-) ;-)
What I don't understand is why after an
Hi everyone,
Stupid question: after some tests I was able to dump a user's caps with
ceph auth get --format json
but I wasn't able to find the other way around, something like
ceph auth add fubar.json
Is there any way to add a user (without giving a key, and with a key)?
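For what it's worth, hedged examples of going the other way (client.foo and its caps are placeholders):
  ceph auth get client.foo -o client.foo.keyring   # export an existing user with key and caps
  ceph auth import -i client.foo.keyring           # (re)create users from a keyring file, keeping the key
  ceph auth get-or-create client.foo mon 'allow r' osd 'allow rw pool=foo'   # create a user and let ceph generate the key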
Regards
--
A
hat with some script.
Anyway thanks.
Regards
--
Albert SHIH 🦫 🐸
Observatoire de Paris
France
Heure locale/Local time:
mar. 03 déc. 2024 13:27:23 CET
___
much easier to do it with yaml/json because it's native to
puppet; no need to add a library to parse ini files. It's also easier to
check if something already exists (so we don't redo something that's already there).
Regards.
JAS
--
Albert SHIH 🦫 🐸
Observatoire de Paris
France
Hi,
I need to manually create a few VMs with KVM. I would like to know if there is
any difference between using the libvirt module and the kernel module to access
a ceph cluster.
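Roughly, the two paths compared (a sketch; pool/image/user names are placeholders): the kernel client (krbd) maps the image to a local block device that any tool can use, while libvirt/QEMU can open the image directly through librbd with no kernel mapping at all:
  # kernel rbd: the image shows up as a block device on the host
  rbd map vo/vm-disk1 --id libvirt --keyring /etc/ceph/ceph.client.libvirt.keyring
  ls -l /dev/rbd/vo/vm-disk1
  # librbd: no mapping step; a <disk type='network'> entry with protocol rbd in the
  # libvirt domain XML lets QEMU talk to the cluster itself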
Regards
--
Albert SHIH 🦫 🐸
Observatoire de Paris
France
Heure locale/Local time:
mer. 12 févr. 2025 16:14:15 CET
ow you add a disk
but I can also mount the disk directly with
/etc/ceph/rbdmap
at boot the disk will appear somewhere in /dev/sd* on the kvm server
and then I can use it in kvm as a «normal» disk.
I don't know if there is any difference or
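For reference, a hedged sketch of an /etc/ceph/rbdmap entry (names are placeholders); the rbdmap service then maps the listed images at boot:
  # format: pool/image   options passed to "rbd map"
  echo 'vo/vm-disk1 id=libvirt,keyring=/etc/ceph/ceph.client.libvirt.keyring' >> /etc/ceph/rbdmap
  systemctl enable rbdmap.service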
> rbd rm /
Ok, big thanks. I will (try to) keep that in my little brain.
Now I will just remove the erasure pool also ;-)
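A hedged sketch of removing the pool itself (pool deletion is guarded, so the monitor flag has to be flipped first; the pool name is assumed from the thread):
  ceph config set mon mon_allow_pool_delete true
  ceph osd pool rm erasure erasure --yes-i-really-really-mean-it
  ceph config set mon mon_allow_pool_delete false   # put the guard back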
Regards.
JAS
--
Albert SHIH 🦫 🐸
Observatoire de Paris
France
Heure locale/Local time:
lun. 17 mars 2025 16:11:41 CET
___
ID PGS STORED OBJECTS USED %USED MAX AVAIL
.mgr11 23 MiB7 68 MiB 0 46 TiB
erasure42 14 32 336 GiB 86.46k 504 GiB 0.36 92 TiB
it still uses 336 GiB.
How can I find where those 336 GiB are and delete the image or whatever is using them?
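Hedged ways to see what is actually holding that space (pool names are guesses based on the listing above; rbd images that keep their data in an erasure-coded pool are listed in their replicated metadata pool, so both need checking):
  rados -p erasure ls | head               # raw objects still sitting in the pool
  rbd ls -l --pool <replicated_rbd_pool>   # images whose metadata lives in the replicated pool
  rbd info <replicated_rbd_pool>/<image>   # the "data_pool" field shows whether its data sits in the EC pool
  rbd du --pool <replicated_rbd_pool>      # per-image space usage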
Regards
--
Albert