Hi,
On Windows, the RBD device map commands are dispatched to a centralized service
so that the daemons are not tied to the current Windows session.
The service gets configured automatically by the MSI installer [1]. However, if
you’d like to configure it manually, please check this document [2
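For what it's worth, a minimal sketch of registering the service by hand from an
elevated PowerShell prompt, in case the MSI route isn't available (the service
name and install path below are assumptions, adjust to your setup):

New-Service -Name "ceph-rbd" -Description "Ceph RBD Mapping Service" `
    -BinaryPathName '"C:\Program Files\Ceph\bin\rbd-wnbd.exe" service' `
    -StartupType Automatic
Start-Service -Name "ceph-rbd"

After that, rbd device map requests should be handed off to the service instead
of being tied to the interactive session.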
I upgraded to Pacific 16.2.5 about a month ago and everything was working fine.
For the past few days, the tcmu-runner containers on my iSCSI gateways have
suddenly started disappearing. I'm assuming this is because they have
crashed. I deployed the services using cephadm / ceph orch in D
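If it is a crash, a few hedged first checks that might confirm it (the daemon
name below is a placeholder, take it from the orch ps output):

ceph crash ls                               # any new crash entries from the gateways?
ceph orch ps --daemon-type iscsi            # current state of the iscsi daemons
cephadm logs --name <iscsi-daemon-name>     # run this one on the affected gateway host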
If Rocky 8 had its own ceph-common package I would go with that. It would (presumably) be
tested more, since it comes with the original distro.
In any case, I installed the el8 ceph packages and they seem to work … so far.
At least I can mount my ceph volume and initial testing looks good.
Thanks!
George
Aha, I knew it was too short to be true. It seems like a client is trying to
delete a file, which is triggering all of this.
There are many, many lines like the -5 and -4 entries here.
-5> 2021-08-24T21:17:38.293+ 7fe9a5b0b700 0 mds.0.cache.dir(0x609)
_fetched badness: got (but i already had
Full backtrace below. Seems pretty short for a ceph backtrace!
I'll get started on a link scan for the time being. It'll keep it from flapping
in and out of CEPH_ERR!
-1> 2021-08-24T21:17:38.313+ 7fe9a730e700 -1
/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH
Hi everyone,
We have a Ceph Tech Talk scheduled for this Thursday at 17:00 UTC with
Matan Brz on how to use Lua Scripting together with a NATS Lua client
to add NATS to the list of bucket notification endpoints.
https://ceph.io/en/community/tech-talks/
More information on this project can be fo
, which make CentOS 8.4 and Rocky Linux 8.4 not compatible.
>
> Maybe I’m mistaken, but I thought that CentOS included ceph-common in
> their own repos, so just doing “yum install ceph-common” worked. This
> doesn’t work on Rocky.
> Just wondering if adding the rh8 ceph repo will work on Rocky, o
Yes, it’s supposed to be, but they have their own package repo and mirrors,
separate from Red Hat. I’ve also heard that there are some differences between
the two under the hood, specifically OpenHPC-related, which make CentOS 8.4 and
Rocky Linux 8.4 not compatible.
Maybe I’m mistaken, but I th
Good morning all,
I deployed my radosgw on the first monitor node of my test cluster following
the instructions here:
https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html/installation_guide_for_red_hat_enterprise_linux/manually-installing-ceph-object-gateway
However the relat
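Without seeing the rest, a couple of hedged checks that usually narrow down a
manual radosgw deployment (the instance name follows that guide's rgw.<hostname>
convention and is an assumption here):

sudo systemctl status ceph-radosgw@rgw.$(hostname -s)
sudo journalctl -u ceph-radosgw@rgw.$(hostname -s) --since "1 hour ago"
curl http://localhost:8080    # or 7480, depending on the port set in ceph.conf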
On 24.08.21 09:32, Janne Johansson wrote:
As a simple test I copied an Ubuntu /usr/share/doc (580 MB in 23'000 files):
- rsync -a to a Cephfs took 2 min
- s3cmd put --recursive took over 70 min
Users reported that the S3 access is generally slow, not only with s3tools.
Single per-object a
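With that many small files the per-object round trips tend to dominate, so
pushing uploads in parallel usually helps a lot; a hedged sketch with GNU
parallel and s3cmd (bucket name is made up):

cd /usr/share/doc
find . -type f | sed 's|^\./||' | parallel -j 16 s3cmd put {} s3://doc-test/{}

Whether 16 parallel transfers is reasonable depends on the gateways, so adjust.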
Are there client packages available for Rocky Linux (specifically 8.4) for
Pacific? If not, when can we expect them?
I also looked at download.ceph.com and I couldn’t find anything relevant. I
only saw rh7 and rh8 packages.
I bootstrapped a test cluster under Rocky 8 and CentOS without meetin
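For what it's worth, the el8 builds on download.ceph.com appear to install fine
on Rocky 8 as well (as noted elsewhere in this thread); a sketch of the repo
definition (Pacific paths, double-check against the download site):

# /etc/yum.repos.d/ceph.repo
[ceph]
name=Ceph x86_64 packages
baseurl=https://download.ceph.com/rpm-pacific/el8/x86_64
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc

[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-pacific/el8/noarch
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc

Then dnf install ceph-common should pull in the client bits.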
Any progress on this? We have encountered the same problem and use the
rbd-nbd option timeout=120.
ceph version: 14.2.13
kernel version: 4.19.118-2+deb10u1
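For reference, that timeout can be set at map time; a sketch (pool and image
names are placeholders):

rbd-nbd map --timeout 120 rbd/myimage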
On Wed, May 19, 2021 at 10:55 PM Mykola Golub wrote:
>
> On Wed, May 19, 2021 at 11:32:04AM +0800, Zhi Zhang wrote:
> > On Wed, May 19, 2021 at
Hi,
I am trying to map an RBD image on Windows, but it fails with the message below.
1 -1 rbd-wnbd: Could not send device map request. Make sure that the ceph
service is running. Error: (5) Accès refusé.
rbd: rbd-wnbd failed with error: C:\Program Files\Ceph\bin\rbd-wnbd: exit
status: -22
I get the
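Error (5) is the localized Windows "access denied"; two hedged things to check
are whether the shell is elevated and whether the mapping service is installed
and running (the service name here is an assumption based on the MSI defaults):

Get-Service -Name ceph-rbd
Start-Service -Name ceph-rbd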
Hi,
I assume that the "latest" docs are already referring to quincy, if
you check the pacific docs
(https://docs.ceph.com/en/pacific/mgr/dashboard/) that command is not
mentioned. So you'll probably have to use the previous method of
configuring the credentials.
Regards,
Eugen
Zitat v
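If the credentials in question are the object gateway ones (an assumption on my
part), the pre-Quincy way looked roughly like this, with the keys read from
files:

echo -n "<access-key>" > rgw-access-key
echo -n "<secret-key>" > rgw-secret-key
ceph dashboard set-rgw-api-access-key -i rgw-access-key
ceph dashboard set-rgw-api-secret-key -i rgw-secret-key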
On Tue 24 Aug 2021 at 09:46, Francesco Piraneo G. wrote:
> On 24.08.21 09:32, Janne Johansson wrote:
> >> As a simple test I copied an Ubuntu /usr/share/doc (580 MB in 23'000
> >> files):
> >> - rsync -a to a Cephfs took 2 min
> >> - s3cmd put --recursive took over 70 min
> >> Users reporte
Can you check what ceph-volume would do if you did it manually?
Something like this:
host1:~ # cephadm ceph-volume lvm batch --report /dev/vdc /dev/vdd
--db-devices /dev/vdb
and don't forget the '--report' flag. One more question: did you
properly wipe the previous LV on that NVMe?
You sho
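On the wiping point: leftover LVs and partitions on the DB device can be
cleared with ceph-volume's zap (device name as in the example above, and note
that --destroy is destructive):

cephadm ceph-volume lvm zap --destroy /dev/vdb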
On Tue 24 Aug 2021 at 09:12, E Taka <0eta...@gmail.com> wrote:
> As a simple test I copied an Ubuntu /usr/share/doc (580 MB in 23'000 files):
>
> - rsync -a to a Cephfs took 2 min
> - s3cmd put --recursive took over 70 min
> Users reported that the S3 access is generally slow, not only with s3tool
Hi,
What is the actual backtrace from the crash?
We occasionally had dup inode errors like this in the past but they
never escalated to a crash.
You can see my old thread here:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-July/036294.html
The developer at that time suggested some mani
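Not necessarily what was suggested back then, but if a full metadata check is
useful here, a recursive scrub with repair can be started like this on recent
releases (the filesystem name is a placeholder):

ceph tell mds.<fsname>:0 scrub start / recursive,repair
ceph tell mds.<fsname>:0 scrub status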
One can find questions about this topic on the web, but most of them are
about older versions of Ceph, so I am asking specifically about the current
version:
· Pacific 16.2.5
· 7 nodes (each with many cores and plenty of RAM) and 111 OSDs in total
· all OSDs added with: ceph orch apply osd --all-available-devices
· bucket created in th