Hello!
A few years ago I built a "dc-a:12 + dc-b:12 = 24" node Ceph cluster
with Nautilus v14.2.16.
A year ago the cluster was upgraded to Octopus and it was running fine.
Recently I added 4+4=8 new nodes with identical hardware and SSD drives.
When I created the OSDs with Octopus, the cluster usage increas
Dear list,
we are upgrading our ceph infrastructure from Mimic to Octopus (please
be kind, we know that we are working with "old" tools, but these Ceph
releases are tied to our OpenStack installation needs) and _*all*_ the
ceph actors (mon/mgr/osd/rgw - no mds as we do not serve a filesystem)
Hello,
We’ve found that if we lose one of the nfs.cephfs service daemons in our
cephadm 19.2.2 cluster, all NFS traffic is blocked until either:
- the down nfs.cephfs daemon is restarted
- or we reconfigure the placement of the nfs.cephfs service to not use the
affected host. After this, the ing
Hello Community,
We are trying to use the s3cmd tool to set the ACL of one of the objects within a
bucket to public, using the command as follows, but we are unable to set it on the
Squid Ceph version; however, on the Reef version the same command worked and we
were able to set the object public successfully.
Please let
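For reference, a typical way to make a single object public with s3cmd looks like the
following; the bucket and object names here are placeholders, not the ones from the
report above:

  # make one object publicly readable
  s3cmd setacl s3://mybucket/myobject --acl-public
  # check the resulting ACL
  s3cmd info s3://mybucket/myobject

If Squid rejects the same call that Reef accepted, comparing the RGW log for the
PUT ...?acl request on both versions is usually the quickest way to see what changed.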
Hi, can someone help me understand what happens in a scenario where the OS disks
of all my Ceph nodes get destroyed somehow and I am only left with the
OSDs, i.e. the physical storage devices. How can I recreate the same Ceph cluster
using those old OSDs without any data loss? Is there something I should
reg
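For what it's worth, the documented recovery path for exactly this situation is to
rebuild the monitor store from the surviving OSDs; a rough sketch, with paths and
OSD ids assumed:

  # with the OSD stopped, add its contribution to a fresh mon store
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 \
      --no-mon-config --op update-mon-db --mon-store-path /tmp/mon-store
  # repeat for every OSD, then rebuild the store with a recreated keyring
  ceph-monstore-tool /tmp/mon-store rebuild -- --keyring /path/to/admin.keyring

The full procedure is described in the "Recovery using OSDs" section of the monitor
troubleshooting documentation.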
Hi all,
we seem to have a serious issue with our file system; the ceph version is latest
Pacific. After a large cleanup operation we had an MDS rank with 100 million stray
entries (yes, one hundred million). Today we restarted this daemon, which
cleans up the stray entries. It seems that this leads to a
I need help to remove a useless "HEALTH_ERR" in 19.2.0 on a fully dual-
stack Docker setup with Ceph using IPv6, public and private nets
separated, with a few servers. After upgrading from an error-free v18
release, I can't get rid of the HEALTH_ERR owing to the report that all
osds are unreach
Hi All,
Very new to Ceph and hoping someone can help me out.
We are implementing Ceph in our team's environment, and I have been able to
manually set up a test cluster using cephadm bootstrap and answering all the
prompts.
What we want to do is to automate the setup and maintenance of the prod
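One common way to skip the interactive prompts (a sketch only; the IP, SSH user and
spec file below are assumptions) is to bootstrap non-interactively and hand cephadm a
service spec describing the hosts and services to deploy:

  cephadm bootstrap --mon-ip 10.0.0.11 \
      --ssh-user cephadm \
      --apply-spec /root/cluster-spec.yaml

The spec file is ordinary "ceph orch apply"-style YAML (host entries, mon/mgr/osd
services), so the same file can be kept in version control and re-applied later.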
3 nodes, each with:
3 hdd – 21G
1 ssd – 80G
Create OSDs containing block_data with a 15G block_db located on the ssd –
this part works.
Create block_data OSDs on the remaining 35G of space in the ssd – this part is not
working.
ceph orch apply osd -i /path/to/osd_spec.yml
service_type: osd
service_id: osd_spec_hdd
place
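For comparison, a complete drive-group spec for the first (working) case might look
roughly like this; the host_pattern and the nesting under spec: are assumptions based
on the description above, not the actual file:

service_type: osd
service_id: osd_spec_hdd
placement:
  host_pattern: '*'
spec:
  data_devices:
    rotational: 1        # the 21G HDDs carry block_data
  db_devices:
    rotational: 0        # the 80G SSD carries block.db
  block_db_size: 15G

As far as I know the drive-group specs cannot easily carve additional block_data OSDs
out of space left on a device that is already claimed as db_devices, which would match
the "not working" part.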
Hello.
I would like to use mirroring to facilitate migrating from an existing
Nautilus cluster to a new cluster running Reef. Right now I'm looking at
RBD mirroring. I have studied the RBD Mirroring section of the
documentation, but it is unclear to me which commands need to be issued on
each cl
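Not an authoritative answer, but for one-way, journal-based RBD mirroring the usual
split of commands (pool and image names below are examples) is roughly:

  # on BOTH clusters: enable mirroring on the pool in per-image mode
  rbd mirror pool enable rbd image
  # on the SOURCE (Nautilus): enable journaling on each image to be migrated
  rbd feature enable rbd/myimage journaling
  rbd mirror image enable rbd/myimage
  # on the DESTINATION (Reef): run the rbd-mirror daemon and register the
  # source cluster as a peer (its conf/keyring must be present locally)
  rbd mirror pool peer add rbd client.mirror@source

The rbd-mirror daemon always runs on the cluster that receives the images, i.e. the
new Reef cluster in this case.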
Hello,
First of all, thanks for reading my message. I set up a Ceph 18.2.2
cluster with 4 nodes. Everything went fine for a while, but after copying some
files, the storage showed a warning status and the following message:
"HEALTH_WARN: 1 MDSs are read only mds.PVE-CZ235007SH(mds.0):
Hi,
we ran into a bigger problem today with our Ceph cluster (Quincy,
Alma 8.9).
We have 4 filesystems and a total of 6 MDSs, the largest fs having
two ranks assigned (i.e. one standby).
Since we often have the problem of MDSs lagging behind, we restart
the MDSs occasionally. That usually helps; the stan
Dear Ceph users,
in order to reduce the deep scrub load on my cluster I set the deep
scrub interval to 2 weeks, and tuned other parameters as follows:
# ceph config get osd osd_deep_scrub_interval
1209600.00
# ceph config get osd osd_scrub_sleep
0.10
# ceph config get osd osd_scrub_loa
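For anyone wanting to reproduce this, the values above correspond to the following
commands (1209600 s = 14 days):

  ceph config set osd osd_deep_scrub_interval 1209600
  ceph config set osd osd_scrub_sleep 0.1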
good morning,
I am trying to understand Ceph snapshot sizing. For example, if I have a 2.7
GB volume and I create a snap on it, the sizing says:
(BEFORE SNAP)
rbd du volumes/volume-d954915c-1dc1-41cb-8bf0-0c67e7b6e080
NAME PROVISIONED USED
volume-d954915c-1dc1-41cb-8bf0-0c67e7b6e080 10 GiB 2.7 Gi
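A quick way to see where the space goes (the snapshot name below is just an example)
is to create the snap and run rbd du again: it prints one line per snapshot plus one
for the live image, and the live image's USED only grows back as blocks are rewritten
after the snapshot (copy-on-write):

  rbd snap create volumes/volume-d954915c-1dc1-41cb-8bf0-0c67e7b6e080@snap1
  rbd du volumes/volume-d954915c-1dc1-41cb-8bf0-0c67e7b6e080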
Hi there!
Has anyone any experience with the Influx Ceph mgr module?
I am using 17.2.7 on CentOS 8 Stream. I configured one of my clusters and I
test with "ceph influx send" (whereas the official doc at
https://docs.ceph.com/en/quincy/mgr/influx/ mentions the non-existent
"ceph influx self-test"), but no
Hi
I have recently onboarded new OSDs into my Ceph cluster. Previously, I had
44 OSDs of 1.7TiB each and had been using them for about a year. About 1 year ago,
we onboarded an additional 20 OSDs of 14TiB each.
However, I observed that much of the data was still being written onto the
original 1.7TiB OS
good morning,
I was struggling to understand why I cannot find this setting in
my Reef version; is it because it is only in the latest dev Ceph version and not
before?
https://docs.ceph.com/en/latest/radosgw/metrics/#user-bucket-counter-caches
The Reef version gives a 404:
https://docs.ceph.com/en/reef
I configured a password for Grafana because I want to use Loki. I used the spec
parameter initial_admin_password and this works fine for a staging environment,
where I never tried to use Grafana with a password for Loki.
Using the username admin with the configured password gives a credential
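For context, the relevant part of the spec looks roughly like this (the password is a
placeholder):

  service_type: grafana
  placement:
    count: 1
  spec:
    initial_admin_password: mysecret

As far as I understand, initial_admin_password only takes effect the first time the
Grafana container initialises its database; redeploying an already-initialised Grafana
keeps the old admin credentials, which could explain the credential error on the
long-lived cluster.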
Hi,
I have a Ceph command stuck at `ceph --verbose stats fs fsname`. And in the
monitor log, I can find something like `audit [DBG] from='client.431973 -'
entity='client.admin' cmd=[{"prefix": "fs status", "fs": "fsname",
"target": ["mon-mgr", ""]}]: dispatch`.
What happened and what should I do?
--
Dear All,
After an unsuccessful upgrade to Pacific, the MDSs were offline and could not get
back online. I checked the MDS log and found the excerpt below; see the cluster info below as
well. I'd appreciate it if anyone can point me in the right direction. Thanks.
MDS log:
2023-05-24T06:21:36.831+1000 7efe56e7d700 1 m
Hi,
As discussed in another thread (Crushmap rule for multi-datacenter
erasure coding), I'm trying to create an EC pool spanning 3 datacenters
(the datacenters are present in the crushmap), with the objective of being
resilient to 1 DC down, at least keeping read-only access to the pool,
and if po
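A sketch of the layout usually suggested for this (the profile name, the k=4/m=2
choice and the rule id are assumptions, not a recommendation): place exactly 2 chunks
in each of the 3 datacenters, so that losing one DC still leaves 4 of the 6 chunks:

  ceph osd erasure-code-profile set ec42 k=4 m=2 crush-failure-domain=host

  # rule to add to the decompiled crushmap
  rule ec42_3dc {
      id 10
      type erasure
      step set_chooseleaf_tries 5
      step take default
      step choose indep 3 type datacenter
      step chooseleaf indep 2 type host
      step emit
  }

Note that the default EC min_size is k+1, so with one whole DC down you would still
have to decide whether running with min_size reduced to k is acceptable.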
Hi,
We've already converted two PRODUCTION storage nodes on Octopus to cephadm
without problems.
On the third one, we only succeeded in converting one OSD.
[root@server4 osd]# cephadm adopt --style legacy --name osd.0
Found online OSD at //var/lib/ceph/osd/ceph-0/fsid
objectstore_type is bluestore
Looking for some help as this is affecting production.
We run a 3-node cluster with a mix of 5x SSD, 15x SATA and 5x SAS in each node,
running 15.2.15. All use DB/WAL on an NVMe SSD except the SSDs.
Earlier today I increased the PG num from 32 to 128 on one of our pools,
due to the status complaining
Hi, we have a 3-node ceph cluster, installed a long time ago by another team.
Recently we had to reinstall the OS (due to a disk failure) on one of the nodes (node
3), so we lost all configs on that node.
As all the other hard drives on node 3 are intact, after installing fresh
ceph, "cephadm ceph-volume lvm lis
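If this is a cephadm-managed cluster (the cephadm ceph-volume call above suggests it
is), the usual sequence once /etc/ceph and the cluster's SSH key are restored on the
reinstalled node is roughly this (hostname assumed):

  # confirm the old LVM-based OSDs are still visible on node3
  cephadm ceph-volume lvm list
  # have the orchestrator recreate the OSD daemons from the existing LVs
  ceph cephadm osd activate node3

The second command exists in Pacific and later; on older releases the OSDs have to be
adopted or activated with ceph-volume directly.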
Hi!
I need help setting up the domain name for my company's ceph dashboard. I
tried using NGINX, but it would only display the Ceph dashboard over HTTP, and
logging in doesn't work. Using HTTPS returns a 5xx error message. Our
domain name is from CloudFlare and also has SSL enabled. Please help.
T
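A minimal reverse-proxy sketch that tends to work for the dashboard (the server name,
certificate paths and mgr address are assumptions): the key points are proxying to the
active mgr over HTTPS and not verifying its self-signed certificate:

  server {
      listen 443 ssl;
      server_name ceph.example.com;
      ssl_certificate     /etc/nginx/ssl/fullchain.pem;
      ssl_certificate_key /etc/nginx/ssl/privkey.pem;

      location / {
          # the dashboard runs on the active mgr, default SSL port 8443
          proxy_pass https://mgr-host:8443;
          proxy_ssl_verify off;
          proxy_set_header Host $host;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      }
  }

Proxying to a fixed host only works while that mgr is active; for failover you would
either proxy to all mgr hosts or rely on the standby mgrs redirecting to the active one.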
We are seeking information on configuring Ceph to work with Noobaa and
NextCloud.
Randy
--
Randy Morgan
CSR
Department of Chemistry/BioChemistry
Brigham Young University
ran...@chem.byu.edu
Please help me enable the Ceph iSCSI gateway in Ceph Octopus. When I finished
installing Ceph, I saw that the iSCSI gateway was not enabled. Please help me configure it.
Hi:
I am using Ceph Nautilus with CentOS 7.6 and am working on adding a pair of
iSCSI gateways to our cluster, following the documentation here:
https://docs.ceph.com/docs/master/rbd/iscsi-target-cli/
I was in the "Configuring" section, step #3, "Create the iSCSI gateways"
and ran into problems. Whe
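For reference, step 3 of that page boils down to a gwcli session along these lines
(the IQN, gateway names and IPs are the examples from the documentation, not ours):

  # gwcli
  /> cd /iscsi-targets
  /iscsi-targets> create iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw
  /iscsi-targets> cd iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw/gateways
  /iscsi-target...-igw/gateways> create ceph-gw-1 10.172.19.21
  /iscsi-target...-igw/gateways> create ceph-gw-2 10.172.19.22

If the gateway create step fails, the rbd-target-api service on the other gateway not
being reachable on the API port set in iscsi-gateway.cfg is a common cause.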
I need help with adding a node when installing Ceph with cephadm.
When I run `ceph orch add host ceph2` I get:
Error ENOENT: New host ceph2 (ceph2) failed check: ['Traceback (most recent
call last):',
Please help me fix it.
Thanks & Best Regards
David
I had 5 of 10 OSDs fail on one of my nodes; after a reboot the other 5 OSDs
failed to start.
I have tried running ceph-disk activate-all and get back an error message
about the cluster fsid not matching the one in /etc/ceph/ceph.conf.
Has anyone experienced an issue such as this?
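A quick sanity check for that error (paths assumed, ceph-disk era layout) is to
compare the fsid the OSDs were created with against the one in the local ceph.conf:

  # fsid the config file expects
  grep fsid /etc/ceph/ceph.conf
  # fsid recorded on each mounted OSD data directory
  cat /var/lib/ceph/osd/ceph-*/ceph_fsid

If they differ, the node is pointing at the wrong (or a regenerated) ceph.conf rather
than the OSDs being damaged.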
ceph version 14.2.4 (75f4de193b3ea58512f204623e6c5a16e6c1e1ba) nautilus (stable)
OS: CentOS Linux release 7.7.1908 (Core)
A single-node Ceph cluster with 1 mon, 1 mgr, 1 mds, 1 rgw and 12 OSDs, but only
CephFS is used.
ceph -s hangs after shutting down the machine (192.168.0.104), then ip
add
Dear All,
We are "in a bit of a pickle"...
There has been no reply to my message (23/03/2020), subject "OSD: FAILED
ceph_assert(clone_size.count(clone))",
so I'm presuming it's not possible to recover the crashed OSD.
This is bad news, as one PG may be lost (we are using EC 8+2; pg dump
shows [NONE,NONE,
Hi,
I upgraded Ceph from 14.2.7 to the new version 14.2.8. Bucket
notifications do not work:
I can't create a TOPIC.
I use Postman to send a POST request following
https://docs.ceph.com/docs/master/radosgw/notifications/#create-a-topic
REQUEST:
POST
http://rgw1:7480/?Action=CreateTopi
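For comparison, a CreateTopic request per that documentation page has roughly this
shape (host, topic name and push endpoint are placeholders; on Nautilus the topic
attributes were passed as plain query parameters):

  POST http://rgw1:7480/?Action=CreateTopic&Name=mytopic&push-endpoint=amqp://amqp-host:5672&amqp-exchange=ex1&amqp-ack-level=broker

The request still has to be signed with the user's S3 credentials (AWS auth), which
Postman can generate.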
Hi,
We observed that up to 10 times the expected space is consumed when running an
iozone test that writes 200 files concurrently
to an erasure-coded (k=8, m=4) data pool mounted with ceph-fuse, but
disk usage is normal if there is only one writing task.
Furthermore, everything is normal using a replicated data pool, no
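One plausible explanation, assuming the old bluestore HDD default min_alloc_size of
64 KiB (check with "ceph daemon osd.0 config get bluestore_min_alloc_size_hdd"): with
k=8, m=4 every object is cut into 8 data plus 4 coding chunks, and each chunk is
rounded up to min_alloc_size on disk, so many small concurrent writes allocate far
more than they store:

  # a 64 KiB write striped over k=8      -> 8 KiB per data chunk
  # each of the 12 chunks rounds up      -> 12 * 64 KiB = 768 KiB allocated
  # 768 KiB allocated / 64 KiB of data   -> ~12x, close to the observed 10x
  # same 64 KiB on a 3x replicated pool  -> 3 * 64 KiB = 192 KiB, the normal 3x

A single sequential writer produces large, full stripes, which would explain why one
iozone task does not show the blow-up.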
Hi,
I am using ceph version 13.2.6 (mimic) on a test setup, trying out CephFS.
My ceph health status is showing a warning.
"ceph health"
HEALTH_WARN Degraded data redundancy: 1197023/7723191 objects degraded
(15.499%)
"ceph health detail"
HEALTH_WARN Degraded data redundancy: 1197128/7723191 objects d