Finally master is merged now
k
Sent from my iPhone
> On 25 Mar 2021, at 23:09, Simon Oosthoek wrote:
>
> I'll wait a bit before upgrading the remaining nodes. I hope 14.2.19 will be
> available quickly.
Rotational device or not, as reported by the kernel.
k
Sent from my iPhone
> On 25 Mar 2021, at 15:06, Nico Schottelius
> wrote:
>
> The description I am somewhat missing is "set based on which criteria?"
Hi,
I want to set an alert on a user's pool before it gets full, but in Nautilus I
still haven't found which value in ceph df detail reflects their data usage:
POOL    ID    STORED    OBJECTS    USED    %USED
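Not from the thread, but a rough sketch of how this could be alerted on: the per-pool usage ratio is also in the JSON output, which is easier to script against than the table. The pool name "mypool", the 85% threshold and the jq dependency are my own placeholders:

# percent_used in the JSON output is a ratio between 0 and 1
USED=$(ceph df detail -f json | jq -r '.pools[] | select(.name=="mypool") | .stats.percent_used')
awk -v u="$USED" 'BEGIN { exit !(u > 0.85) }' && echo "WARNING: pool mypool is above 85% used"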
Based on a couple of documents I've read, I finally decided to
separate the index from the WAL+DB.
However, don't you think the density is a bit high with 12 HDDs per NVMe?
If you lose the NVMe you effectively lose your complete host, and a lot of data
movement will happen.
Istvan Szabo
Sen
Hello there,
Thank you in advance.
My Ceph version is 14.2.9.
I have a repair issue too.
ceph health detail
HEALTH_WARN Too many repaired reads on 2 OSDs
OSD_TOO_MANY_REPAIRS Too many repaired reads on 2 OSDs
osd.29 had 38 reads repaired
osd.16 had 17 reads repaired
~# ceph tell os
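Not an answer from the thread, but a sketch of how I'd approach this warning (the OSD ids come from the output above; whether clear_shards_repaired is available on 14.2.9 is an assumption, it may only exist in newer releases):

# find the physical drives behind the flagged OSDs and check them with smartctl
ceph device ls-by-daemon osd.29
ceph device ls-by-daemon osd.16
# once the drives check out, newer releases can reset the repaired-reads counter
ceph tell osd.29 clear_shards_repaired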
On 3/25/21 8:47 PM, Simon Oosthoek wrote:
On 25/03/2021 20:42, Dan van der Ster wrote:
netstat -anp | grep LISTEN | grep mgr
# netstat -anp | grep LISTEN | grep mgr
tcp        0      0 127.0.0.1:6801          0.0.0.0:*               LISTEN      1310/ceph-mgr
tcp        0      0 127.0.0.1:6800          0.0.
On 25/03/2021 20:56, Stefan Kooman wrote:
On 3/25/21 8:47 PM, Simon Oosthoek wrote:
On 25/03/2021 20:42, Dan van der Ster wrote:
netstat -anp | grep LISTEN | grep mgr
# netstat -anp | grep LISTEN | grep mgr
tcp        0      0 127.0.0.1:6801          0.0.0.0:*               LISTEN      1310/ceph-mgr
tcp
In each host's ceph.conf, put:
[global]
public addr =
cluster addr =
Or use the public network / cluster network options = a.b.c.0/24 or
however your network is defined.
-- dan
On Thu, Mar 25, 2021 at 8:50 PM Simon Oosthoek wrote:
>
> On 25/03/2021 20:42, Dan van der Ster wrote:
> > netsta
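As a concrete sketch of Dan's suggestion above (192.0.2.10 / 192.0.2.0/24 stand in for the host's real address and network):

[global]
public addr = 192.0.2.10
cluster addr = 192.0.2.10

or, with the network form he mentions:

[global]
public network = 192.0.2.0/24
cluster network = 192.0.2.0/24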
On 25/03/2021 20:42, Dan van der Ster wrote:
netstat -anp | grep LISTEN | grep mgr
has it bound to 127.0.0.1 ?
(also check the other daemons).
If so this is another case of https://tracker.ceph.com/issues/49938
Do you have any idea for a workaround (or should I downgrade?). I'm
running ceph
So do this. Get the IP of the host running the mgr, and put it in
the config file:
[global]
public addr =
cluster addr =
Then restart your mgr.
IMHO we really need a 14.2.19 with this fixed asap.
On Thu, Mar 25, 2021 at 8:47 PM Simon Oosthoek wrote:
>
> On 25/03/2021 20:42, Dan van de
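For the "restart your mgr" step above, something like this on the host running the mgr (the unit name depends on how the daemon was deployed; ceph-mgr@<id> with the short hostname as id is the usual non-containerized case):

systemctl restart ceph-mgr@$(hostname -s)
# or simply restart all mgr daemons on that host
systemctl restart ceph-mgr.target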
On 25/03/2021 20:42, Dan van der Ster wrote:
netstat -anp | grep LISTEN | grep mgr
# netstat -anp | grep LISTEN | grep mgr
tcp        0      0 127.0.0.1:6801          0.0.0.0:*               LISTEN      1310/ceph-mgr
tcp        0      0 127.0.0.1:6800          0.0.0.0:*               LISTEN      1310/ceph-mgr
tcp6
netstat -anp | grep LISTEN | grep mgr
has it bound to 127.0.0.1 ?
(also check the other daemons).
If so this is another case of https://tracker.ceph.com/issues/49938
-- dan
On Thu, Mar 25, 2021 at 8:34 PM Simon Oosthoek wrote:
>
> Hi
>
> I'm in a bit of a panic :-(
>
> Recently we started att
DOhhh...
Read David's procedure. And was surprised.
I thought the wording of the --replace flag was so blindingly obvious that I didn't
think I needed to read the docs. Except apparently I do.
(ceph orch osd rm --replace)
"This follows the same procedure as the “Remove OSD” part with the exception
Hi
I'm in a bit of a panic :-(
Recently we started attempting to configure a radosgw for our ceph
cluster, which was until now only doing cephfs (and rbd was working as
well). We were messing about with ceph-ansible, as this was how we
originally installed the cluster. Anyway, it installed nau
Thank you for the answers.
But I don't have a problem with setting up 8+2. The problem is the expansion.
I need to move the 5 nodes with data on them and add 5 nodes later, because
they're in a different city. The goal I'm trying to reach is 8+2 (host
crush rule).
So I want to cut the data into 10 pieces and put t
You can also prepare a crush rule to put only 2 chunks per host. That
way you can still operate the cluster even if a single host is down.
Regards,
On 3/25/21 7:53 PM, Martin Verges wrote:
You can change the crush rule to be OSD instead of HOST specific. That way
Ceph will put a chunk per OSD and
As we wanted to verify this behavior with 15.2.10, we went ahead and
tested with a failed OSD. The drive was replaced, and we followed the
steps below (comments for clarity on our process) - this assumes you
have a service specification that will perform deployment once
matched:
# capture "db devi
Here's a crush ruleset for 8+2 that will choose 2 osds per host:
rule cephfs_data_82 {
        id 4
        type erasure
        min_size 3
        max_size 10
        step set_chooseleaf_tries 5
        step set_choose_tries 100
        step take default class hdd
        step choose indep 5 typ
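The rule is cut off above; my assumption is that the remaining steps pick 5 hosts and then 2 OSDs on each, i.e. something like:

        step choose indep 5 type host
        step choose indep 2 type osd
        step emit
}

To apply a hand-written rule like this, the usual route is via the decompiled crushmap:

ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# paste the rule into crushmap.txt, then recompile and inject it
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new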
You can change the crush rule to be OSD instead of HOST specific. That way
Ceph will put a chunk per OSD and multiple Chunks per Host.
Please keep in mind that this will cause an outage if one of your hosts is
offline.
--
Martin Verges
Managing director
Mobile: +49 174 9335695
E-Mail: martin.ver...
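A minimal sketch of the OSD-level failure domain Martin describes, using an erasure-code profile so the crush rule is generated automatically (profile name, pool name, PG count and the hdd class are placeholders):

ceph osd erasure-code-profile set ec82-osd k=8 m=2 crush-failure-domain=osd crush-device-class=hdd
ceph osd pool create backup.ec 128 128 erasure ec82-osd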
Hello.
I have a 5-node cluster in datacenter A. I also have the same 5 nodes in datacenter B.
They're going to be a 10-node 8+2 EC cluster for backup, but I need to add
the 5 nodes later.
I have to sync my S3 data with multisite onto the 5-node cluster in datacenter A
and move
them to B and add the other 5 no
> However the meta-data is saved in a different LV, isn't it? I.e. isn't
> practically the same as if you'd have used gpt partitions?
No, it's not; it is saved in LVM tags. LVM takes care of storing these
transparently, somewhere other than in a separate partition.
Best regards,
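The tags themselves are easy to inspect if you want to see what is stored there:

# ceph-volume reconstructs the OSD metadata from the LV tags
ceph-volume lvm list
# or look at the raw tags directly
lvs -o lv_name,vg_name,lv_tags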
Hello everybody.
I searched in several places and I couldn't find any information about what
the best bucket index and WAL / DB organization would be.
I have several hosts consisting of 12 HDDs and 2 NVMes, and currently one
of the NVMes serves as WAL / DB for the 10 OSDs and the other NVMe is
pa
Hi,
I have a small cluster of 3 nodes. Each node has 10 or 11 OSDs, mostly HDDs
with a couple of SSDs for faster pools. I am trying to set up an erasure
coded pool with m=6 k=6, with each node storing 4 chunks on separate OSDs.
Since this does not seem possible with the CLI tooling, I have written my o
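For what it's worth, my assumption of roughly what such a rule looks like for 3 hosts with 4 chunks each (the id, rule name and the hdd class are placeholders):

rule ec_k6m6 {
        id 5
        type erasure
        min_size 3
        max_size 12
        step set_chooseleaf_tries 5
        step set_choose_tries 100
        step take default class hdd
        step choose indep 3 type host
        step choose indep 4 type osd
        step emit
}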
Frank Schilder writes:
> I think there are a couple of reasons for LVM OSDs:
>
> - bluestore cannot handle multi-path devices, you need LVM here
> - the OSD meta-data does not require a separate partition
However the meta-data is saved in a different LV, isn't it? I.e. isn't
practically the sa
Hi David,
Thanks for the insight.
We also run CentOS 8.3 and plan to stay on it as long as possible.
The workaround we used was to manually upgrade the managers first,
using 'ceph orch redeploy', so they no longer contain the bug.
We then continued upgrading the cluster normally using 'cep
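For anyone following along, the rough shape of that workaround (the target version is a placeholder, and the exact redeploy invocation may be 'ceph orch daemon redeploy mgr.<id>' on some releases):

# redeploy the mgr daemons first so the active mgr no longer carries the bug
ceph orch redeploy mgr
# then run the normal upgrade for the rest of the cluster
ceph orch upgrade start --ceph-version 15.2.10
ceph orch upgrade status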
Stefan Kooman writes:
> On 3/23/21 11:00 AM, Nico Schottelius wrote:
>> Stefan Kooman writes:
>>>> OSDs from the wrong class (hdd). Does anyone have a hint on how to fix
>>>> this?
>>>
>>> Do you have: osd_class_update_on_start enabled?
>> So this one is a bit funky. It seems to be off, but t
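A quick sketch for checking the setting and reassigning the class by hand (osd.12 and the ssd class are placeholders):

ceph config get osd osd_class_update_on_start
# the old class must be removed before a new one can be set
ceph osd crush rm-device-class osd.12
ceph osd crush set-device-class ssd osd.12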
Hi
I'm currently having a bit of an issue with setting up end-user authentication
and I would be thankful for any tips I could get.
The general scenario is this: end users are authorised through a web app and a
mobile app, through Keycloak. Users have to be able to upload and download data
usi
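Not an answer from the thread, but the usual building block for this is RGW's STS support: the app exchanges the Keycloak-issued OIDC token for temporary S3 credentials. A rough sketch of the client side (role ARN, endpoint and token are placeholders, and the OIDC provider/role setup on the RGW side is assumed to be in place):

aws sts assume-role-with-web-identity \
    --endpoint-url https://rgw.example.com:8000 \
    --role-arn "arn:aws:iam:::role/S3EndUser" \
    --role-session-name app-session \
    --web-identity-token "$KEYCLOAK_JWT"

The returned temporary credentials can then be used by the app for the actual uploads and downloads.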
Hi,
While looking at something else in the documentation, I came across
this:
https://docs.ceph.com/en/latest/cephfs/administration/#maximum-file-sizes-and-performance
"CephFS enforces the maximum file size limit at the point of appending
to files or setting their size. It does not affect ho
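For reference, the limit itself is per file system and adjustable; "cephfs" is a placeholder for the file system name, and the value is in bytes:

ceph fs get cephfs | grep max_file_size
ceph fs set cephfs max_file_size 17592186044416    # e.g. 16 TiB, up from the 1 TiB default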
Hello Ceph Users,
Has anyone come across this error when converting to cephadm?
2021-03-25 20:41:05,616 DEBUG /bin/podman: stderr Error: error getting image
"ceph-375dcabe-574f-4002-b322-e7f89cf199e1-rgw.COMPANY.LOCATION.NAS-COMPANY-RK2-CEPH06.pcckdr":
repository name must be lowercase
It seem