Hello again,
The issues that you are seeing are because, as I mentioned in my previous
email, I missed backporting some commits to Octopus (apologies for that).
I have opened a backport PR
(https://github.com/ceph/ceph/pull/37640), and the fix should be available
in the next Octopus release.
Hi Michael,
it doesn't look too bad. All degraded objects are due to the undersized PG. If
this is an EC pool with m>=2, data is currently not in danger.
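If you want to double-check the m value, something along these lines will
show it (the pool and profile names below are placeholders):

  # which erasure-code profile does the pool use?
  ceph osd pool get <pool-name> erasure_code_profile
  # k and m are listed in the profile output
  ceph osd erasure-code-profile get <profile-name>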
I see a few loose ends to pick up; let's hope this is something simple. For any
of the below, before attempting the next step, please wait un
Hello,
The original cause of the OSD instability has already been fixed. It
was due to user jobs (via condor) consuming too much memory and causing
the machine to swap. The OSDs didn't actually crash, but weren't
responding in time and were being flagged as down.
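(For anyone debugging something similar, the flagged-down state and the
relevant grace period can be inspected roughly like this; I believe the
default grace is 20s:)

  # OSDs currently flagged down
  ceph osd tree down
  # heartbeat grace after which peers report an OSD as down
  ceph config get osd osd_heartbeat_grace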
In most cases, the problematic
Hi Frank,
Thanks for taking the time to help out with this. Here is the output
you requested:
ceph status: https://pastebin.com/v8cJJvjm
ceph health detail: https://pastebin.com/w9wWLGiv
ceph osd pool stats: https://pastebin.com/dcJTsXE1
ceph osd df tree: https://pastebin.com/LaZcBemC
I remo
Hi Pritha and thanks again for your reply. Unfortunately we are still stuck at
the AssumeRoleWithWebIdentity API call as shown below:
2020-10-14T08:24:26.314+ 7ff6600ff700 1 == starting new request req=0x7ff6b69496b0 =
2020-10-14T08:24:26.314+ 7ff6600ff700 2 req 7 0s initializi
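(For completeness, the request we issue is effectively the equivalent of the
following; the endpoint, role ARN and token below are placeholders, not our
real values:)

  aws sts assume-role-with-web-identity \
    --endpoint-url http://<rgw-host>:8000 \
    --role-arn arn:aws:iam:::role/S3Access \
    --role-session-name test-session \
    --web-identity-token "<oidc-jwt>"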
Hi,
We are trying to introduce SSD/NVMe OSDs, and to prevent data from moving off
the current (HDD-based) OSDs while still having erasure-coded pools, we could
not simply change the erasure-coding profile, or create a new one and just
apply it to the pool.
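For illustration, the device-class route would look roughly like this (the
profile name, pool name and k/m values are only examples):

  # new EC profile restricted to the ssd device class
  ceph osd erasure-code-profile set ec-ssd k=4 m=2 crush-failure-domain=host crush-device-class=ssd
  # a new pool has to be created with it; the profile of an existing pool cannot be changed
  ceph osd pool create mypool-ssd 128 128 erasure ec-ssd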
Reading this list and other posts on forum
Thanks for your information. I see that ceph-ansible sets it only for
filestore, and when I deploy my Ceph cluster there is no /etc/default/ceph
file for my OSDs, so I got confused :))
I have made a PR for it in ceph-ansible.
Thanks.
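For reference, and assuming the variable in question is the tcmalloc thread
cache size discussed below, the entry would normally live in /etc/default/ceph
along these lines (128 MB shown):

  # /etc/default/ceph (example: 134217728 bytes = 128 MB)
  TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728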
On Wed, Oct 14, 2020 at 4:26 PM Mark Nelson wrote:
> It *shou
On Wed, Oct 14, 2020 at 02:09:22PM +0200, Andreas John wrote:
> Hello Alwin,
>
> do you know if it makes a difference to disable "all green computing" in
> the BIOS vs. setting the governor to "performance" in the OS?
Well, for one, the governor will not be able to influence all BIOS
settings (e.g. I
It *should* be set to 128MB for both fwiw. There may be slightly less
need now with async messenger but we still have a ton of threads flying
around allocating memory and I don't think we can get away with lowering
it yet. Might be something for interested parties to retest though! :)
Mark
Hello Alwin,
do you know if it makes a difference to disable "all green computing" in
the BIOS vs. setting the governor to "performance" in the OS?
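(By "setting the governor" I mean something along these lines; the exact
tooling varies by distribution:)

  # via cpupower (linux-tools)
  cpupower frequency-set -g performance
  # or directly via sysfs
  echo performance | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor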
If not, I think I will have some service cycles to set up our
proxmox-ceph nodes correctly.
Best Regards,
Andreas
On 14.10.20 08:39, Alwin Antr
Doh,
I found my problem: I had somehow managed to swap the zonegroup and
zone in my setup.
Fixed that, and it works.
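In case anyone hits the same thing, comparing the two listings makes the
mix-up easy to spot:

  radosgw-admin zonegroup list
  radosgw-admin zone list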
---
- Karsten
On 14-10-2020 09:10, Karsten Nielsen wrote:
Hi,
I have been setting up a new cluster with a combination of cephadm and
ceph orch.
I have run into a problem wit
On 09.10.2020 15:44, Szabo, Istvan (Agoda) wrote:
I have a bucket which is close to 10 million objects (9.1 million); we have:
rgw_dynamic_resharding = false
rgw_override_bucket_index_max_shards = 100
rgw_max_objs_per_shard = 10
Do I need to increase the numbers soon, or is it not possible
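(For context, with dynamic resharding disabled the usual manual route is along
these lines; the bucket name and shard count are only examples:)

  # buckets approaching their per-shard object limit
  radosgw-admin bucket limit check
  # manual reshard with an example shard count
  radosgw-admin bucket reshard --bucket=mybucket --num-shards=199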
Hi,
I have been setting up a new cluster with a combination of cephadm and
ceph orch.
I have run into a problem with rgw daemons that do not start.
I have been following the documentation:
https://docs.ceph.com/en/latest/cephadm/install/ - the RGW section
ceph orch apply rgw ikea cn-dc9-1
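For debugging, commands along these lines show the daemon state and logs (the
daemon name is whatever cephadm generated):

  # what the orchestrator thinks is deployed and running
  ceph orch ls rgw
  ceph orch ps --daemon-type rgw
  # on the host itself, the daemon's own log
  cephadm logs --name rgw.<daemon-name>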
>>
>> Very nice and useful document. One thing is not clear to me: the fio
>> parameters in appendix 5:
>> --numjobs=<1|4> --iodepths=<1|32>
>> it is not clear if/when the iodepth was set to 32. Was it used for all
>> tests with numjobs=4, or was it:
>> --numjobs=<1|4> --iodepths=1
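(For comparison, a full invocation with those parameters would look roughly
like this; the device, block size and runtime are made up:)

  fio --name=randwrite --ioengine=libaio --direct=1 --rw=randwrite --bs=4k \
      --filename=/dev/nvme0n1 --numjobs=4 --iodepth=32 \
      --runtime=60 --time_based --group_reporting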
> We have