Good morning,
a few days ago we created a new pool with 512 PGs and originally 5 OSDs.
We use the device class "ssd" and a CRUSH rule that maps all data for
the pool "ssd" to the OSDs with that device class.
While the pool was being created, one of the SSDs failed, so we are left with 4 OSDs:
[10:00:22] server2.place6
My first guess would be that PG overdose protection kicked in [1][2].
You can try fixing it by increasing the allowed number of PGs per OSD with
ceph tell mon.* injectargs '--mon_max_pg_per_osd 500'
ceph tell osd.* injectargs '--mon_max_pg_per_osd 500'
and then triggering a CRUSH algorithm update by restarting
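The arithmetic behind that guess can be sketched quickly. Assumptions not stated in the thread: a replicated pool with size 3, and a mon_max_pg_per_osd default of 200 (the default varies by release):

```python
# Back-of-the-envelope PG count per OSD for the pool above.
# Assumptions: replicated pool with size 3, default mon_max_pg_per_osd = 200.
pgs = 512
replica_size = 3

pg_per_osd_5 = pgs * replica_size / 5  # with the original 5 OSDs
pg_per_osd_4 = pgs * replica_size / 4  # after one SSD failed

print(pg_per_osd_5)  # 307.2
print(pg_per_osd_4)  # 384.0 -> over the default limit, under the new 500
```

Both values exceed the default limit, which is why raising it to 500 unblocks the PGs.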
You hit the nail on the head! Thanks a lot!
Any time you're around in Switzerland: free beer [tm]?
Vladimir Prokofev writes:
> My first guess would be PG overdose protection kicked in [1][2]
> You can try fixing it by increasing allowed number of PG per OSD with
> ceph tell mon.* injectargs '--mon_max_pg_pe
Good morning,
Does some kind of config param exist in Ceph to prevent two hosts from
accessing the same VM pool, or at least an image inside it? Can this be done
at the pool or image level?
Best regards,
--
sarenet
*Egoitz Aurrekoetxea*
Systems Department
944 209 470
Parque Tecnológico. Edificio 103
48
Hi Egoitz,
I think I did something similar, using a different Ceph pool key for each
pool.
Regards, I
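A sketch of what that can look like with per-pool cephx capabilities (the client and pool names below are examples, and `profile rbd` assumes Luminous or newer): give each host its own key, restricted to its own pool, so it cannot open images anywhere else:

```shell
ceph auth get-or-create client.host1 mon 'profile rbd' osd 'profile rbd pool=vmpool1'
ceph auth get-or-create client.host2 mon 'profile rbd' osd 'profile rbd pool=vmpool2'
```

Note this works at the pool level only; at the image level, the exclusive-lock feature serialises writers but does not stop a second host from opening the image.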
On Sat, 17 Mar 2018 at 12:46, Egoitz Aurrekoetxea
wrote:
> Good morning,
>
>
> Does some kind of config param exist in Ceph for avoid two hosts accesing
> to the same vm pool or at least ima
I am testing the LDAP auth with RGW. But is there a simple shell script
that I can use to test with? I am having problems with the signature in this
one:
#!/bin/bash
#
file=1MB.bin
bucket=test
s3Key="TEST"
s3Secret="test"
host="192.168.1.14"
resource="/${bucket}/${file}"
contentType="application/x
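For what it's worth, here is a minimal sketch of a PUT with an AWS v2 signature (HMAC-SHA1, the scheme the script above is attempting), using openssl for the HMAC. The keys, host, file and content type are placeholders, not values confirmed by the original post:

```shell
#!/bin/bash
# Minimal S3 PUT with an AWS signature v2 (HMAC-SHA1) -- a sketch.
# Keys, host, file and content type are placeholders.
file="1MB.bin"
bucket="test"
s3Key="TEST"
s3Secret="test"
host="192.168.1.14"
resource="/${bucket}/${file}"
contentType="application/octet-stream"
dateValue="$(date -R)"

# String-to-sign: VERB \n Content-MD5 \n Content-Type \n Date \n resource
stringToSign="PUT\n\n${contentType}\n${dateValue}\n${resource}"
signature=$(printf "${stringToSign}" \
  | openssl sha1 -hmac "${s3Secret}" -binary | base64)

# Only attempt the upload if the test file actually exists.
if [ -f "${file}" ]; then
  curl -X PUT -T "${file}" \
    -H "Date: ${dateValue}" \
    -H "Content-Type: ${contentType}" \
    -H "Authorization: AWS ${s3Key}:${signature}" \
    "http://${host}${resource}"
fi
```

The usual signature pitfall is a mismatch between the headers curl sends and the exact string that was signed (Date format, Content-Type, or resource path).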
Hi list,
My ceph version is jewel 10.2.10.
I tried to use rbd rm to remove a 50TB image (without object map, because krbd
doesn't support it). It takes about 30 minutes to complete just about 3%. Is this
expected? Is there a way to make it faster?
I know there are scripts to delete rados objects of the r
Yes, this is what object-map does: it tracks which objects are in use.
For your new 50TB image:
- Without object-map, rbd rm must iterate over every object, find out
that the object does not exist, look for the next object, etc.
- With object-map, rbd rm gets the used-objects list, finds it empty, and
the job is done.
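The scale of that iteration is easy to see, assuming the default 4 MB RBD object size (the post does not state the image's object size):

```python
# Number of backing objects rbd rm has to probe for a 50 TB image,
# assuming the default 4 MB object size (order 22).
image_bytes = 50 * 2**40   # 50 TB
object_bytes = 4 * 2**20   # 4 MB
objects = image_bytes // object_bytes
print(objects)  # 13107200 objects to stat, even if the image holds no data
```

Thirteen million existence checks, one per object, is why the removal crawls without an object map.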
I have been following these instructions, except for the LDAP token
step.
https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/2/html-single/ceph_object_gateway_with_ldapad_guide/
With ldapsearch I am able to query the ldap server and list the
userPassword.
I guess I can use a s
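If it helps, the token radosgw expects is just base64-encoded JSON; a sketch of building one by hand (the uid and password are placeholders, and the ceph packages also ship a radosgw-token helper that does the same thing):

```python
import base64
import json

# Hand-rolled equivalent of `radosgw-token --encode --ttype=ldap`.
# "ldapuser"/"ldappass" are placeholders for the LDAP uid and password.
token = base64.b64encode(json.dumps(
    {"RGW_TOKEN": {"version": 1, "type": "ldap",
                   "id": "ldapuser", "key": "ldappass"}}
).encode()).decode()
print(token)  # the resulting string is used as the S3 access key
```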
Hi Marc,
It looks like no search is being done there:
rgw::auth::s3::AWSAuthStrategy denied with reason=-13
(-13 is EACCES, i.e. permission denied.)
The same happens for me: http://tracker.ceph.com/issues/23091
But Yehuda closed this.
k
___
ceph-users mailing list
ceph-users@lists.ceph.com
Hi Marc,
But is there a simple shell script
that I can use to test with? I have problems with the signature in this
one
This is a 100% working test of the admin API (the uid should have caps="buckets=read"):
#!/bin/bash
s3_access_key=""
s3_secret_key=""
s3_host="objects-us-west-1.dream.io"
query="admin