Hi,
Is it advisable to limit the sizes of data pools or metadata pools
of a cephfs filesystem for performance or other reasons?
I assume you don't mean quotas for pools, right? The pool size is
limited by the number and size of the OSDs, of course. I can't really
say what's advisable or not.
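For reference, if pool quotas are what is meant, they are set per pool, for
example (a sketch; pool names and values are just examples):
# ceph osd pool set-quota cephfs_data max_bytes 109951162777600
# ceph osd pool set-quota cephfs_metadata max_objects 10000000
The first limits the data pool to 100 TiB of stored data, the second caps the
object count in the metadata pool; setting a quota of 0 removes the limit again.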
Dear all,
I now attempted this and my host is back in the cluster but the `ceph
cephadm osd activate` does not work.
# ceph cephadm osd activate HOST
Created no osd(s) on host HOST; already created?
Using --verbose is not too helpful either:
bestcmds_sorted:
[{'flags': 8,
'help': 'Start OSD c
excellent work everyone!
Regarding this: "Quincy does not support LevelDB. Please migrate your OSDs
and monitors to RocksDB before upgrading to Quincy."
Is there a convenient way to determine this for cephadm and non-cephadm
setups?
What happens if LevelDB is still active? Does it cause an immedi
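One quick way to get an overview (a sketch; the mon path below assumes a
default, non-containerized layout):
# ceph osd count-metadata osd_objectstore
BlueStore OSDs always use RocksDB, so only FileStore OSDs need a closer look.
Each monitor records its backend in a small marker file in its data directory:
# cat /var/lib/ceph/mon/*/kv_backend
Run that on each mon host; it should print "rocksdb".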
On Wed, Apr 20, 2022 at 6:21 AM Harry G. Coin wrote:
>
> Great news! Any notion when the many pending bug fixes will show up in
> Pacific? It's been a while.
Hi Harry,
The 16.2.8 release is planned within the next week or two.
Thanks,
Ilya
Hello everybody,
I have the following hardware, which consists of 3 nodes with the
following specs:
* 8 HDDs 8TB
* 1 SSD 900G
* 2 NVME 260G
I planned to use the HDDs for the OSDs and the other devices for the
BlueStore DB (block.db).
According to the documentation, 2% of storage is needed for the BlueStore
DB as
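To put that 2% figure next to this hardware (just arithmetic, not a
recommendation): 2% of an 8TB HDD is about 160GB of DB space per OSD, so the
8 OSDs need roughly 1.28TB per node, while the 900GB SSD plus the two 260GB
NVMes add up to about 1.42TB of fast storage per node.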
Hi,
have you checked /var/log/ceph/cephadm.log for any hints?
ceph-volume.log may also provide some information about what might be
going on (/var/log/ceph/ceph-volume.log).
Quoting Manuel Holtgrewe:
Dear all,
I now attempted this and my host is back in the cluster but the `ceph
cephadm os
Hi fellow ceph users and developers,
we've got into quite a strange situation which I'm not sure is
not a Ceph bug.
We have a 4-node Ceph cluster with multiple pools. One of them
is a SATA EC 2+2 pool containing 4x3 10TB drives (one of them
is actually 12TB).
One day, after a planned downtime of the fourth node,
Dear Eugen,
thanks for the hint. The output is pasted below. I can't gather any useful
information from that.
I also followed the instructions from
https://docs.ceph.com/en/latest/cephadm/operations/#watching-cephadm-log-messages
```
ceph config set mgr mgr/cephadm/log_to_cluster_level debug
ceph -W cephadm --watch-debug
```
Hi,
and the relevant log output is here:
https://privatepastebin.com/?f95c66924a7ddda9#ADEposX5DCo5fb5wGv42czrxaHscnwoHB7igc3eNQMwc
This is just the output of 'ceph-volume lvm list', is that really all?
I haven't had the chance to test 'ceph cephadm osd activate' myself so
I can't really t
Dear Team,
We have two types of disks in our Ceph cluster: one is magnetic disk (HDD),
the other type is SSD.
# ceph osd crush class ls
[
"hdd",
"ssd"
]
HDD is normally a bit slower, which is expected. Initially the SSDs were faster
for read/write. Recently we are facing very slow operations on the SSDs. Need help t
Hi Stefan,
all daemons are 15.2.15 (I'm considering updating to 15.2.16 today)
> What do you have set as nearfull ratio? ceph osd dump | grep nearfull.
nearfull is 0.87
>
> Do you have the ceph balancer enabled? ceph balancer status
{
"active": true,
"last_optimize_duration": "0:00:00.
Hi,
thank you for your reply.
That really is all. I tried to call `cephadm ceph-volume lvm activate
--all`, see below, and this apparently crashes because of some unicode
problem... might that be the root cause?
Cheers,
Manuel
[root@dmz-host-4 rocky]# cephadm ceph-volume lvm activate --all
Infe
The Quincy release notes state that "MDS upgrades no longer require all
standby MDS daemons to be stopped before upgrading a file system's sole
active MDS." but the "Upgrading non-cephadm clusters" instructions still
include reducing ranks to 1, upgrading, then raising it again.
Does the new f
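For reference, the rank reduction in the non-cephadm instructions boils down
to this sequence (a sketch; <fs_name> and the original value are placeholders):
# ceph fs set <fs_name> max_mds 1
Wait for the extra ranks to stop (check ceph status), upgrade the MDS
daemons, then restore the previous setting:
# ceph fs set <fs_name> max_mds <original_value>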
IIUC it's just the arrow that can't be displayed when the systemd-unit
is enabled and the symlinks in
/etc/systemd/system/multi-user.target.wants/ are created. When OSDs
are created by cephadm the flag --no-systemd is usually used; can you
try this command?
cephadm ceph-volume lvm activate --all --no-systemd
Hm, not much more luck:
# cephadm --verbose ceph-volume lvm activate --all --no-systemd
cephadm ['--verbose', 'ceph-volume', 'lvm', 'activate', '--all',
'--no-systemd']
Using default config: /etc/ceph/ceph.conf
/bin/d
Thanks for the tip on the alternative balancer, I'll have a look at it.
However, I don't think the root of the problem is improper balancing;
those 3 OSDs just should not be that full. I'll see how it goes when the
snaptrims finish, usage seems to go down by 0.01%/minute now.
I'll report the results.
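Something like the following is usually enough to watch that (a sketch):
# ceph osd df tree
# ceph pg ls | grep -c snaptrim
The first shows per-OSD utilization, the second counts PGs that are still
trimming (or waiting to trim) snapshots.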
Well, at least it reports that the OSDs were activated successfully:
/bin/docker: --> ceph-volume lvm activate successful for osd ID: 12
/bin/docker: --> Activating OSD ID 25 FSID
3f3d61f8-6964-4922-98cb-6620aff5cb6f
Now you need to get the pods up; I'm not sure if cephadm will manage
that.
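If cephadm does not pick them up on its own, the orchestrator view is the
place to check (a sketch; the host and daemon names are just examples):
# ceph orch ps HOST --daemon-type osd
# ceph orch daemon restart osd.12
The first lists what cephadm believes is deployed on that host, the second
asks it to restart one of the activated OSDs.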
Thank you for your reply.
However, the cephadm command did not create the osd.X directories in
/var/lib/ceph/FSID... Subsequently, the start fails, which is also shown in
the journalctl output:
Apr 20 14:51:28 dmz-host-4 bash[2540969]: /bin/bash:
/var/lib/ceph/d221bc3c-8ff4-11ec-b4ba-b02628267680/o
I see. I'm not sure if cephadm should be able to handle that (and this
is a bug) or if you need to create those files and directories
yourself. I was able to revive OSDs in a test cluster this way, but
that should not be necessary. Maybe there is already an existing
tracker, but if not you sh
Hi,
Those are not enterprise SSDs; they are Samsung-labeled and all are 2TB in size.
We have three nodes, 12 disks on each node.
Regards,
Munna
On Wed, Apr 20, 2022 at 5:49 PM Stefan Kooman wrote:
> On 4/20/22 13:30, Md. Hejbul Tawhid MUNNA wrote:
> > Dear Team,
> >
> > We have two type to disk in our ceph
Hi, thank you.
Done, I put the link below. Maybe someone else on this list can enlighten
us ;-)
https://tracker.ceph.com/issues/55395
On Wed, Apr 20, 2022 at 2:55 PM Eugen Block wrote:
> I see. I'm not sure if cephadm should be able to handle that and this
> is a bug or if you need to create t
>
> Those are not enterprise SSD, Samsung labeled and all are 2TB in size. we
> have three node, 12 disk on each node
>
I think you should use the drives labeled with the lower case 's' of samsung.
On 4/19/22 10:56 PM, Vladimir Brik wrote:
Yeah, this must be the bug. I have about 180 clients
Yeah, a new bug. Maybe there are too many clients and they couldn't be shown
correctly in the terminal.
-- Xiubo
Vlad
On 4/18/22 23:52, Jos Collin wrote:
Do you have mounted clients? How many clients do you
On Tue, Apr 19, 2022 at 08:51:50PM +, Ryan Taylor wrote:
> Thanks for the pointers! It does look like
> https://tracker.ceph.com/issues/55090
> and I am not surprised Dan and I are hitting the same issue...
Just a wild guess (already asked this on the tracker):
Is it possible that you're usi
Hi Ceph users,
after a long time without any major incident (which is great for such a
complex piece of software!), I've finally encountered a problem with our
Ceph installation. All of a sudden the monitor service on one of the
nodes doesn't start anymore. It crashes immediately when I try to sta
Hi,
I will check and confirm the label. In the meantime, can you guys help me
to find out the root cause of this issue, and how I can resolve it? Is
there any Ceph configuration issue or anything we should check?
Please advise.
Regards,
Munna
On Wed, Apr 20, 2022 at 7:29 PM Marc wrote:
> >
>
Hi Stefan,
thanks for the hint. Yes, the mon store is owned by user ceph. It's a
normal folder under /var/lib/ceph/mon/... and there are various files in
there - all with owner ceph:ceph. Looks normal to me (like on the other
nodes).
I've read the docs you provided and I wonder which of those step
Hi Stefan,
Amazing. You're a genius. It worked. Everything is back in order.
Wonderful. These things are perhaps not so exciting for all of you Ceph
experts but it's certainly *very* exciting for me to do stuff like this
on a live system. *Phew, sweat*. So happy this is finally resolved and
nobody no
On Wed, Apr 20, 2022 at 7:22 AM Stefan Kooman wrote:
>
> On 4/20/22 03:36, David Galloway wrote:
> > We're very happy to announce the first stable release of the Quincy series.
> >
> > We encourage you to read the full release notes at
> > https://ceph.io/en/news/blog/2022/v17-2-0-quincy-released/
I added some OSDs, which are up and running, with:
ceph-volume lvm create --data /dev/sdX --dmcrypt
But I am still getting messages like this for the newly created OSDs:
systemd: Job
dev-disk-by\x2duuid-7a8df80d\x2d4a7a\x2d469f\x2d868f\x2d8fd9b7b0f09d.device/start
timed out.
systemd: Timed out waitin
My apologies if my previous message seemed negative about your docs in any way.
They are very good and work as documented.
The area that tripped me up originally and caused confusion on my part is that the
Ceph docs do not state clearly that one needs a separate radosgw daemon
for any of the Sync
Hello. I have a Ceph cluster (using Nautilus) in a lab environment on a smaller
scale than the production environment. We had some problems with timeouts in
production, so I started doing some benchmarking tests in this lab environment.
The problem is that the performance of RGW (with beast) is
Thanks very much for responding - my problem was not realizing that you need a
separate radosgw daemon for the zone that hosts the cloud tier. Once I got
past this concept your example worked great for a simple setup.
--
Mark Selby
Sr Linux Administrator, The Voleon Group
mse...@voleon.com
I rebuilt the setup from scratch and left off --master from the zones in the
other zonegroups and it had no effect on the outcome.
ulrich.kl...@ulrichklein.net has tried this as well and sees the same failure.
I hope there is someone who can see what we are doing wrong. This topology is
shown h
Hi Luís,
The same cephx key is used for both mounts. It is a regular rw key which does
not have permission to set any ceph xattrs (that was done separately with a
different key).
But it can read ceph xattrs and set user xattrs.
I just did a test using the latest Fedora 35 kernel and reproduce
Hi everyone,
This month's Ceph User + Dev Monthly Meetup has been canceled due to
the ongoing Ceph Developer Summit. However, we'd like to know if there
is any interest in an APAC friendly meeting. If so, we could alternate
between APAC and EMEA friendly meetings, like we do for CDM.
Thanks,
Neha
Hi everyone,
I’m trying to build my own Ceph .deb packages to solve the issue with this bug
https://tracker.ceph.com/issues/51327 in the Pacific release but, after
building all the .deb packages, I'm not able to install them on my RGW.
Thank you,
Fabio Pasetti
Does the v15.2.15-20220216 container include backports published since the
release of v15.2.15-20211027?
I'm interested in BACKPORT #53392 https://tracker.ceph.com/issues/53392,
which was merged into the ceph:octopus branch on February 10th.
Hi,
RGW rate limits are available starting from the Quincy release.
k
Sent from my iPhone
> On 20 Apr 2022, at 20:58, Marcelo Mariano Miziara
> wrote:
>
> Hello. I have a Ceph cluster (using Nautilus) in a lab environment on a
> smaller scale than the production environment. We had some problems wi
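For anyone looking for that feature: in Quincy the limits are managed with
radosgw-admin (a sketch; the uid and values are examples, and the exact flags
may differ slightly between releases):
# radosgw-admin ratelimit set --ratelimit-scope=user --uid=testuser --max-read-ops=1024 --max-write-ops=256
# radosgw-admin ratelimit enable --ratelimit-scope=user --uid=testuser
# radosgw-admin ratelimit get --ratelimit-scope=user --uid=testuser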