Hi, for bucket notifications, does anybody have experience using Kafka with
encryption? I tried to configure a topic with this configuration:
{
"topics": [
{
"owner": "farhad",
"name": "sre-s3",
"dest": {
"push_endpoint":
"kafka://sre_fkhedrian:d2Q9BAY
Hi, I need to install ceph-common from the Quincy repository, but I'm
getting this error:
---
Ceph x86_64
0.0 B/s | 0 B 00:01
Errors during downloading metadata for repository 'Ceph':
- Status code: 404 for
http://download.ceph.com/rpm-quincy/el8/x86_64/repodata/repo
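In case it helps, a minimal /etc/yum.repos.d/ceph.repo sketch for Quincy on el8
(hedged: assuming el8 packages are actually published for the release you need,
which the 404 above suggests double-checking against what exists on download.ceph.com):
[ceph]
name=Ceph packages for $basearch
baseurl=https://download.ceph.com/rpm-quincy/el8/$basearch
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc

[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-quincy/el8/noarch
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
Then `dnf clean all && dnf install ceph-common` should pull the package if the metadata resolves.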
Hi, when trying to deploy a cluster with cephadm version 19.2.1 and Docker
version 28.0.1 I get this error:
---
# cephadm --image opkbhfpsksp0101.p.fnst/ceph/ceph:v19.2.1 bootstrap
--mon-ip 10.248.35.143 --registry-json /root/reg.json
--allow-fqdn-hostname --initial-dashboard-user admin
-
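For what it's worth, cephadm expects the --registry-json file to be a small JSON
document along these lines (a sketch; the registry URL and credentials are
placeholders, not values from the original post):
{
    "url": "registry.example.com",
    "username": "myregistryuser",
    "password": "myregistrypassword"
}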
It would be very good to be able to use ingress for the manager service, to
get high availability for it. Will this feature be added in the next version,
or does it still have to be implemented manually?
Hi, I want to use Ceph bucket notifications. I tried to create a topic with
the command below, but I get an error when using Kafka with a user/password.
How can I solve this problem? Is there something wrong with my syntax?
https://www.ibm.com/docs/en/storage-ceph/7?topic=management-creating-bucket-notifications
https://doc
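As a point of comparison, a hedged sketch of creating the topic with the AWS CLI
against RGW, passing the Kafka credentials in the endpoint URI (the RGW endpoint,
broker host and credentials are placeholders, and the attribute names are the ones
I understand the bucket-notification docs to use):
aws --endpoint-url http://rgw.example.com:8000 sns create-topic --name mytopic \
    --attributes '{"push-endpoint": "kafka://myuser:mypassword@kafka.example.com:9093", "use-ssl": "true", "kafka-ack-level": "broker"}'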
Hello, following Ceph's own documentation and the article I linked to, I tried
to change the addresses of the Ceph machines and the cluster's public network.
But when I set the machines to the new address (ceph orch host set-addr
opcrgfpsksa0101 10.248.35.213)
, the command was n
Hi, I used the Ceph API to create an rgw/role, but there is no API for
deleting or editing an rgw/role.
How can I delete or edit them?
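If the mgr/dashboard REST API doesn't expose it, roles can usually still be removed
with radosgw-admin, and RGW also implements IAM-style REST actions (DeleteRole,
UpdateAssumeRolePolicy) on its own endpoint; a hedged sketch with an example role name:
radosgw-admin role list
radosgw-admin role rm --role-name=S3Access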
I have implemented a Ceph cluster with cephadm which has three monitors and
three OSDs; each node has one interface on the 192.168.0.0/24 network.
I want to change the addresses of the machines to the 10.4.4.0/24 range.
Is there a solution for this change without data loss or downtime?
I changed the public_ne
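Not authoritative, but the rough sequence I'd expect with cephadm (subnets and
hostnames are examples; the monitors are the delicate part, since their addresses
live in the monmap):
ceph config set global public_network 10.4.4.0/24
ceph orch host set-addr <host> <new-10.4.4.x-address>   # repeat per host
# monitors then need to be moved one at a time so quorum is never lost,
# e.g. by removing and re-adding each mon so it comes back on the new network
ceph orch daemon rm mon.<host> --force
ceph orch daemon add mon <host>:<new-address>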
Hi, thank you for the guidance.
There is no way to change the global image before launching; I need to
download the images from the private registry during the initial setup.
I used the --image option but it did not work.
# cephadm bootstrap --image rgistry.test/ceph/ceph:v18 --mon-ip 192.168.0.160
-
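A hedged sketch of what I'd try first, to separate registry-auth problems from
bootstrap problems (the registry host is taken from the command above; the
credentials are placeholders):
cephadm registry-login --registry-url rgistry.test --registry-username myuser --registry-password mypassword
cephadm --image rgistry.test/ceph/ceph:v18 pull
cephadm --image rgistry.test/ceph/ceph:v18 bootstrap --mon-ip 192.168.0.160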
Hello, I downloaded cephadm from the link below.
https://download.ceph.com/rpm-18.2.0/el8/noarch/
I changed the image addresses to the address of my private registry:
```
DEFAULT_IMAGE = 'opkbhfpspsp0101.fns/ceph/ceph:v18'
DEFAULT_IMAGE_IS_MAIN = False
DEFAULT_IMAGE_RELEASE = 'reef'
DEFAULT_P
I use Ceph 17.2.6, and when I deploy two separate RGW realms, each with its
own zonegroup and zone, the dashboard enables access for both object gateways
and I can create users, buckets, etc. But when I try to create a bucket in one
of the object gateways, I get the error below:
debug 2023-10
I deployed the RGW service and the default pools were created automatically,
but I get an error in the dashboard:
``
Error connecting to Object Gateway: RGW REST API request failed with
default 404 status code","HostId":"736528-default-default"}')
``
There is a dashboard user but I created the bucket ma
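A hedged guess, but this often comes down to the dashboard's RGW credentials or
host/port no longer matching the deployed gateway; a sketch of re-pointing it:
ceph dashboard set-rgw-credentials
# or set the keys explicitly from files containing the access/secret key
ceph dashboard set-rgw-api-access-key -i access_key.txt
ceph dashboard set-rgw-api-secret-key -i secret_key.txt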
Hi everybody,
we have a problem with the NFS Ganesha load balancer.
When we use rsync -avre to copy files from another share to the Ceph NFS
share path, we get this error:
`rsync -rav /mnt/elasticsearch/newLogCluster/acr-202*
/archive/Elastic-v7-archive`
rsync : close failed on "/archive/Elastic-v7-archive/"
When I set osd_memory_target to limit memory usage for an OSD, I expect the
value to be applied to the OSD container. But with the docker stats command
this value is not visible. Is my understanding of this process wrong?
---
[root@opcsdfpsbpp0201 ~]# ceph orch ps | grep osd.12
osd.12
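For context: as far as I know osd_memory_target is a target the OSD's own
memory/cache autotuning aims for, not a container cgroup limit, so docker stats
would not be expected to reflect it. A hedged sketch of checking the value itself
(the OSD id comes from the output above):
ceph config get osd.12 osd_memory_target
ceph config set osd osd_memory_target 4294967296   # example: 4 GiB default for all OSDs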
Hi,
I have a problem with Ceph 17.2.6: CephFS with MDS daemons, but I am seeing
unusual behavior.
I created a data pool with the default CRUSH rule, but data is stored on only
3 specific OSDs while the other OSDs stay empty.
PG auto-scaling is also active, but the PG count does not change as the pool
gets bigger.
I did this manua
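A hedged sketch of the checks I'd start with (the pool name is an example):
ceph osd pool get cephfs_data pg_num
ceph osd pool autoscale-status
ceph osd pool get cephfs_data crush_rule
ceph osd df tree    # shows how data and PGs are actually spread per OSD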
I noticed that in my scenario, when I mount CephFS via the kernel module, it
writes directly to one or three of the OSDs, and the client's write speed is
higher than the speed of replication and auto-scaling. This causes the write
operation to stop as soon as those OSDs are filled, and the
I deployed a Ceph cluster with 8 nodes (v17.2.6), and after adding all of the
hosts, Ceph created 5 mon daemon instances.
I tried to decrease that to 3 instances with ` ceph orch apply mon
--placement=label:mon,count:3 `; it worked, but after that I get the error "2
stray daemons not managed by cephadm".
But every ti
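For reference, a hedged sketch of expressing that placement as a service spec,
plus cleaning up a monitor that was scaled down but is still in the monmap
(names are examples):
# mon-spec.yaml
service_type: mon
placement:
  count: 3
  label: mon

ceph orch apply -i mon-spec.yaml
# if a removed mon still shows up as a stray daemon, remove it from the monmap too
ceph mon remove <mon-name>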
Hi guys,
I deployed the Ceph cluster with cephadm and the root user, but I need to
change the user to a non-root user.
I did these steps:
1- Created a non-root user on all hosts with passwordless SSH access and
sudo:
`$USER_NAME ALL = (root) NOPASSWD:ALL`
2- Generated an SSH key pair and used ssh-copy-
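For completeness, a hedged sketch of the cluster-side part once the user and key
exist on all hosts (the user name is an example):
ceph cephadm set-user cephadmin              # tell cephadm which SSH user to use
ceph cephadm get-pub-key > ceph.pub          # export the cluster's SSH public key
ssh-copy-id -f -i ceph.pub cephadmin@<host>  # push it to every host for that user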
Hi everyone,
I have a warning: ` 1 stray daemon(s) not managed by cephadm`
# ceph health detail
HEALTH_WARN 1 stray daemon(s) not managed by cephadm
[WRN] CEPHADM_STRAY_DAEMON: 1 stray daemon(s) not managed by cephadm
stray daemon mon.apcepfpspsp0111 on host apcepfpspsp0111 not
managed by cepha
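A hedged sketch of how I'd investigate a stray mon like this (the daemon/host name
comes from the health detail above):
ceph orch ps apcepfpspsp0111 --daemon-type mon            # does cephadm know about a mon there?
cephadm ls | grep mon                                     # run on the host: what is really running?
cephadm adopt --style legacy --name mon.apcepfpspsp0111   # if it's a legacy, non-cephadm daemon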
I tried to deploy a cluster from a private registry and used this command:
{cephadm bootstrap --mon-ip 10.10.128.68 --registry-url my.registry.xo
--registry-username myuser1 --registry-password mypassword1
--dashboard-password-noupdate --initial-dashboard-password P@ssw0rd }
I even changed the Default
How can I set an IO quota or a read/write limit for an erasure-coded pool in
Ceph?
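Not sure this is exactly what you're after, but a hedged sketch of the two knobs I
know of (pool name and values are examples): pool quotas cap capacity/object count
rather than IO rate, while RBD has per-pool/per-image QoS throttles:
ceph osd pool set-quota my-ec-pool max_bytes 1099511627776
ceph osd pool set-quota my-ec-pool max_objects 1000000
rbd config pool set my-ec-pool rbd_qos_iops_limit 1000      # only applies to RBD workloads
rbd config pool set my-ec-pool rbd_qos_bps_limit 104857600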
I have a cluster of three nodes, with three replicas per pool across the
cluster nodes.
-
HOST             ADDR             LABELS      STATUS
apcepfpspsp0101  192.168.114.157  _admin mon
apcepfpspsp0103  192.168.114.158  mon _admin
apcepfpspsp0105  192.168.114.159  mon _admin
3 hosts in cluster
--
I have a cluster (v17.2.4) deployed with cephadm.
---
[root@ceph-01 ~]# ceph -s
  cluster:
    id:     c61f6c8a-42a1-11ed-a5f1-000c29089b59
    health: HEALTH_OK
  services:
    mon: 3 daemons, quorum ceph-01.fns.com,ceph-03,ceph-02 (age 109m)
    mgr: ceph-01.fns.com.vdoxhd(active, since 1
I removed an OSD from the crushmap, but it still shows up in 'ceph osd tree':
[root@ceph2-node-01 ~]# ceph osd tree
ID   CLASS  WEIGHT    TYPE NAME                STATUS  REWEIGHT  PRI-AFF
 -1         20.03859  root default
-20         20.03859      datacenter dc-1
-21         20.03859          room serv
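In case it's useful, a hedged sketch of fully removing an OSD rather than only
deleting it from the CRUSH map (the OSD id is an example):
ceph osd purge 12 --yes-i-really-mean-it   # out + crush remove + auth del + osd rm in one step
# or the older step-by-step equivalent
ceph osd crush remove osd.12
ceph auth del osd.12
ceph osd rm 12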
I need a block storage device that is shared between two Windows servers.
The servers are active/standby (server certification): only one server can
write at a time, but both servers can read the created files, and if the
first server shuts down, the second server can edit the files or create a new
file.
I want to set a lifecycle rule for incomplete multipart uploads, but I can't
find documentation saying whether minutes or hours can be used for the time.
How can I set an LC time of less than a day?
Abort incomplete multipart upload after 1 day
Enabled
1
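As far as I know the S3 lifecycle schema only accepts whole days
(DaysAfterInitiation is an integer number of days), but for testing there is a
debug option that makes RGW treat each lifecycle "day" as N seconds; a hedged sketch:
ceph config set client.rgw rgw_lc_debug_interval 600   # testing only, not for production
ceph orch restart rgw.<service-name>                   # restart the gateways so it takes effect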
I upgraded my cluster to 17.2 and the upgrade process got stuck.
I have this error:
[root@ceph2-node-01 ~]# ceph -s
cluster:
id: 151b48f2-fa98-11eb-b7c4-000c29fa2c84
health: HEALTH_WARN
Reduced data availability: 32 pgs inactive
Degraded data redundancy: 32 pgs undersized
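A hedged sketch of where I'd look to see what the orchestrated upgrade is waiting
on and which PGs are stuck:
ceph orch upgrade status
ceph health detail
ceph pg dump_stuck inactive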
I deleted all objects in my bucket, but the used capacity is not zero.
When I list the objects in the pool with `rados -p default.rgw.buckets.data
ls`, it shows me a lot of objects:
2ee2e53d-bad4-4857-8bea-36eb52a83f34.5263789.1__shadow_1/16Q91ZUY34EAW9TH.2~zOHhukByW0DKgDIIihOEhtxtW85FO5m.74_1
2ee2e53d-bad4-4857-8bea-36eb
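Those __shadow_ objects are usually multipart/tail chunks waiting for RGW garbage
collection; a hedged sketch of checking and forcing it:
radosgw-admin gc list --include-all
radosgw-admin gc process --include-all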
I have an error in the Ceph dashboard:
--
CephMgrPrometheusModuleInactive
description
The mgr/prometheus module at opcpmfpskup0101.p.fnst.10.in-addr.arpa:9283 is
unreachable. This could mean that the module has been disabled or the mgr
itself is down. Without the mgr/prometheus module metrics and alert
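A hedged sketch of the basic checks for that alert (either the module isn't
enabled or the active mgr isn't reachable on port 9283):
ceph mgr module ls | grep prometheus
ceph mgr module enable prometheus
ceph mgr services   # shows the URL the prometheus module is actually serving on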
I want to update the cluster to version 16.2.9, but `ceph orch ps` does not
show versions for the other daemons:
[root@opcpmfpsbpp0101 c41ccd12-dc01-11ec-9e25-00505695f8a8]# ceph orch ps
NAME  HOST  PORTS  STATUS  REFRESHED  AGE  MEM USE  MEM LIM  VERSION  IMAGE ID  CONTAI
Multiple writers on a block device:
I have two Windows servers and I presented a LUN backed by Ceph RBD to both.
I need the second server to be able to update, write and read all files on
the disk when the disk is offline for the first Windows server,
but this does not work unless the first server is down or disconnected from
the LUN. What should I be doing?
Hi,
I have a problem in my cluster.
I used a cache tier for RGW data: three hosts for the cache and three hosts
for the data, with SSDs for the cache and HDDs for the data.
I set a 20 GiB quota on the cache pool.
When one of the cache-tier hosts had to go offline,
this warning was raised and I decreased the quota to 10
Hi,
I upgraded my cluster from 16.2.6 to 16.2.9
and I have this error in the dashboard, but not on the command line:
The mgr/prometheus module at opcpmfpsbpp0103.fst.20.10.in-addr.arpa:9283 is
unreachable. This could mean that the module has been disabled or the mgr
itself is down. Without the mgr/prometheus
Hi,
I get an error when deleting a service from the dashboard.
The Ceph version is 16.2.6.
HEALTH_ERR Module 'cephadm' has failed: dashboard iscsi-gateway-rm failed:
iSCSI gateway 'opcpmfpsbpp0101' does not exist retval: -2
[ERR] MGR_MODULE_ERROR: Module 'cephadm' has failed: dashboard
iscsi-gateway-rm failed: iSC
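A hedged sketch of what I'd try (the gateway name comes from the error above):
check what the dashboard thinks is configured, and clear the failed mgr module by
restarting the active mgr:
ceph dashboard iscsi-gateway-list
ceph dashboard iscsi-gateway-rm <gateway-name>
ceph mgr fail   # restarts the active mgr so the failed 'cephadm' module reloads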
Hi,
I want to use a private registry for running the Ceph storage cluster, so I
changed the default registry of my container runtime (Docker) in
/etc/docker/daemon.json:
{
"registery-mirrors": ["https://private-registery.fst";]
}
and I changed all the registry addresses in /usr/sbin/cephadm (quay.ceph.io
and docker.io) to my private
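For comparison, a hedged sketch of what Docker itself expects in
/etc/docker/daemon.json (note the key is spelled registry-mirrors; the URL is a
placeholder), followed by a daemon restart:
{
    "registry-mirrors": ["https://private-registry.example.com"]
}
systemctl restart docker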
Hi,
I get a lot of errors from the S3 API.
On the S3 client I see this:
2022-05-24 10:49:58.095 ERROR 156723 --- [exec-upload-21640003-285-2]
i.p.p.d.service.UploadDownloadService: Gateway Time-out (Service:
Amazon S3; Status Code: 504; Error Code: 504 Gateway Time-out; Request ID:
null; S3 Extended Re
I want to keep the data pools for RGW on HDD drives and use some SSD drives
as a cache tier on top of them.
Has anyone tested this scenario?
Is it practical and optimal?
How can I do this?
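In case it helps the discussion, a hedged sketch of the classic cache-tier commands
(pool names are examples); note that cache tiering has been deprecated in recent
Ceph releases, so separating HDD and SSD via CRUSH rules/device classes is usually
the recommended alternative:
ceph osd tier add rgw.data.hdd rgw.cache.ssd
ceph osd tier cache-mode rgw.cache.ssd writeback
ceph osd tier set-overlay rgw.data.hdd rgw.cache.ssd
ceph osd pool set rgw.cache.ssd hit_set_type bloom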
I lost some disks in my Ceph cluster, and it then began to repair the object
structure and re-replicate the data.
This caused some errors on the S3 API:
Gateway Time-out (Service: Amazon S3; Status Code: 504; Error Code: 504
Gateway Time-out; Request ID: null; S3 Extended Request ID: null; Prox
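A hedged sketch of throttling recovery so client IO isn't starved while the
cluster backfills (values are examples; on recent releases the mClock profile is
the main knob):
ceph config set osd osd_max_backfills 1
ceph config set osd osd_recovery_max_active 1
ceph config set osd osd_mclock_profile high_client_ops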
I have an error in my Ceph cluster:
HEALTH_WARN 1 daemons have recently crashed
[WRN] RECENT_CRASH: 1 daemons have recently crashed
    client.admin crashed on host node1 at 2022-05-16T08:30:41.205667Z
What does this mean?
How can I fix it?
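A hedged sketch of the crash-module commands for digging in and clearing the
warning once reviewed:
ceph crash ls
ceph crash info <crash-id>
ceph crash archive-all   # acknowledge reviewed crashes so the health warning clears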
Hi,
I deleted all objects in the bucket, but the used capacity of my bucket is
not zero, and the ls command shows many objects.
Why is that?
And how can I delete them all?
s3 ls s3://podspace-default-bucket-zone
/usr/lib/python3.6/site-packages/urllib3/connectionpool.py:847:
InsecureRequestWarning: Unverified HTTPS req
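One hedged thing to check is leftover incomplete multipart uploads, which consume
space but don't show up as normal objects, plus the bucket's own accounting (the
endpoint is a placeholder; the bucket name is taken from the command above):
aws --endpoint-url https://rgw.example.com s3api list-multipart-uploads --bucket podspace-default-bucket-zone
radosgw-admin bucket stats --bucket=podspace-default-bucket-zone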