On 8/26/2021 4:18 PM, Dave Piper wrote:
I'll keep trying to repro and gather diags, but running in containers is making
it very hard to run debug commands while the ceph daemons are down. Is this a
known problem with a solution?
Sorry, not aware of this issue. I don't use containers though.
Hello,
recently I have been thinking about erasure coding and how to set k+m in a useful
way, also taking into account the number of hosts available for Ceph. Say
I had this setup:
The cluster has 6 hosts and I want to allow two *hosts* to fail without
losing data. So I might choose k+m as 4+2 w
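As a point of reference, a 4+2 profile with hosts as the failure domain could be
created roughly like this (the profile name, pool name and PG count below are
made up):

ceph osd erasure-code-profile set ec42 k=4 m=2 crush-failure-domain=host
ceph osd pool create ecpool 64 64 erasure ec42

With 6 hosts and k+m = 6, every PG then has to place exactly one shard on every host.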
On Fri, 27 Aug 2021 at 12:43, Rainer Krienke wrote:
>
> Hello,
>
> recently I thought about erasure coding and how to set k+m in a useful
> way also taking into account the number of hosts available for ceph. Say
> I would have this setup:
>
> The cluster has 6 hosts and I want to allow two *hosts
Hi,
1. What if two disks fail and the two failed disks are not on the same
host? I think Ceph would be able to find a PG distribution across all
hosts avoiding the two failed disks, so Ceph would be able to repair
and reach a healthy state after a while?
yes, if there is enough disk space an
This August, Debian testing became the new Debian stable with LTS support.
But I see that only a sid repo exists, no testing and no new stable bullseye repo.
Maybe someone knows when there are plans to have a bullseye build?
I cannot answer this question; however, I hope that the maintainers
make
Hello Janne,
thank you very much for answering my questions.
Rainer
On 27.08.21 at 12:51, Janne Johansson wrote:
On Fri, 27 Aug 2021 at 12:43, Rainer Krienke wrote:
Hello,
recently I thought about erasure coding and how to set k+m in a useful
way also taking into account the number of hos
On 27.08.21 at 12:51, Janne Johansson wrote:
Yes. You should have more hosts for EC 4+2, or .. less K.
I'll second that. You should have at least k+m+2 hosts in the cluster
for erasure coding. Not only because of redundancy but also for better
distributing the load. EC is CPU heavy.
Regar
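As a rough illustration of that rule of thumb (numbers invented): a 4+2 profile
would want at least 4 + 2 + 2 = 8 hosts, while a 6-host cluster only leaves room
for something like 2+2. A quick, approximate way to count the hosts in the CRUSH
tree:

ceph osd tree | grep -c host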
Hi,
thanks for the answers. My goal was to speed up the S3 interface, and
not only a single program. This was successful with this method:
https://docs.ceph.com/en/latest/rados/configuration/bluestore-config-ref/#block-and-block-db
However, one major disadvantage was that Cephadm considered the O
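For illustration, a cephadm OSD spec in the spirit of that doc page might look
roughly like the following; the service_id, placement and device selectors are
made-up assumptions, and the exact layout can differ between cephadm releases:

service_type: osd
service_id: hdd_with_ssd_db
placement:
  host_pattern: '*'
data_devices:
  rotational: 1
db_devices:
  rotational: 0

It would be applied with something like: ceph orch apply osd -i spec.yml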
Hello,
I have configured RGW in my Ceph cluster deployed using ceph-ansible and
created a sub-user to access the created containers, and I would like to replace
Swift with RGW on the OpenStack side. Can anyone help with the configuration to be
done on the OpenStack side in order to integrate those services? I
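In case it is useful, the OpenStack-side piece is usually just registering RGW as
the Keystone object-store service; a minimal sketch, assuming an RGW reachable at
rgw.example.com:8080 and the usual /swift/v1 prefix (both placeholders):

openstack service create --name swift --description "Ceph RGW" object-store
openstack endpoint create --region RegionOne object-store public http://rgw.example.com:8080/swift/v1
openstack endpoint create --region RegionOne object-store internal http://rgw.example.com:8080/swift/v1
openstack endpoint create --region RegionOne object-store admin http://rgw.example.com:8080/swift/v1

RGW itself also needs its rgw keystone options configured so it can validate
Keystone tokens; see the radosgw Keystone integration documentation.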
Can you try to update the `mon host` value with brackets?
mon host = [v2:192.168.1.50:3300,v1:192.168.1.50:6789],[v2:192.168.1.51:3300,v1:192.168.1.51:6789],[v2:192.168.1.52:3300,v1:192.168.1.52:6789]
https://docs.ceph.com/en/latest/rados/configuration/msgr2/#updating-ceph-conf-and-mon-host
Re
Ok - thanks Xiubo. Not sure I feel comfortable doing that without breaking
something else, so will wait for a new release that incorporates the fix. In
the meantime I’m trying to figure out what might be triggering the issue, since
this has been running fine for months and just recently started
Hello,
We are running a ceph nautilus cluster under centos 7. To upgrade to
pacific we need to change to a more recent distro (probably debian or
ubuntu because of the recent announcement about centos 8, but the distro
doesn't matter very much).
However, I couldn't find a clear procedure to
Hi,
On 27/08/2021 16:16, Francois Legrand wrote:
We are running a ceph nautilus cluster under centos 7. To upgrade to
pacific we need to change to a more recent distro (probably debian or
ubuntu because of the recent announcement about centos 8, but the distro
doesn't matter very much).
How
>
>> Yes. You should have more hosts for EC 4+2, or .. less K.
>
> I'll second that. You should have at least k+m+2 hosts in the cluster for
> erasure coding. Not only because of redundancy but also for better
> distributing the load. EC is CPU heavy.
>
> Regards
I agree operationally, but
Hi Frank,
On Wed, Aug 25, 2021 at 6:27 AM Frank Schilder wrote:
>
> Hi all,
>
> I have the notorious "LARGE_OMAP_OBJECTS: 4 large omap objects" warning and
> am again wondering if there is any proper action one can take except "wait it
> out and deep-scrub (numerous ceph-users threads)" or "ign
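For what it's worth, a common first step is simply locating the offending objects
before deciding anything; a minimal sketch, assuming a non-containerized cluster
with the default cluster log path:

ceph health detail
grep 'Large omap object found' /var/log/ceph/ceph.log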
What Ceph version is used for the cluster?
Because it looks like (according to the ceph config file) you're using
either Nautilus, Octopus, or Pacific (due to the msgr v2 config).
But are you using the same version on your radosgw node?
The ceph-common and radosgw packages in buster [1][2] are
This was a bug in some versions of ceph, which has been fixed:
https://tracker.ceph.com/issues/49014
https://github.com/ceph/ceph/pull/39083
You'll want to upgrade Ceph to resolve this behavior, or you can use
size or something else to filter if that is not possible.
David
On Thu, Aug 19, 2021
On Friday, 27 August 2021 at 09:18:01 CEST, Francesco Piraneo G. wrote:
> > This August, Debian testing became the new Debian stable with LTS support.
> >
> > But I see that only a sid repo exists, no testing and no new stable
> > bullseye repo.
> >
> > Maybe someone knows when there are plans to have a
Hi all,
on a working test cluster I am trying to install radosgw on a separate
machine; the system looks like this:
mon1...mon3 - 192.168.1.50 - 192.168.1.52
osd1...osd3 - 192.168.1.60 - 192.168.1.60 + Cluster network 172.16.0.0/16
radosgw - hostname - s3.anonicloud.test - IP: 192.168.1.70
Everything de
> Installed radosgw and ceph-common via apt; modified ceph.conf as
follows on mon1 and propagated the modified file to all hosts and
obviously to s3.anonicloud.test.
> Yes, I forgot the ceph.conf, sorry.
[global]
fsid = e79c0ace-b910-40af-ab2c-ae90fa4f5dd2
mon initial members = mon1, mon2, mo
Well, the scan_links cleaned up all the duplicate inode messages, and now it's
just crashing on:
-5> 2021-08-25T16:23:54.996+ 7f8b088e4700 10 monclient:
get_auth_request con 0x55e2cc18d400 auth_method 0
-4> 2021-08-25T16:23:55.098+ 7f8b080e3700 10 monclient:
get_auth_request
Same error, but with the brackets! :-)
# ceph -s
server name not found: [v2:192.168.1.50:3300 (Name or service not known)
unable to parse addrs in '[v2:192.168.1.50:3300,v1:192.168.1.50:6789],
[v2:192.168.1.51:3300,v1:192.168.1.51:6789],
[v2:192.168.1.52:3300,v1:192.168.1.52:6789]'
[errno 22]
Hi everyone,
I'd like to share a few updates on completed/ongoing RADOS and Crimson
projects with the community.
Significant PRs merged
- Remove allocation metadata from RocksDB - should significantly
improve small write performance
- PG Autoscaler scale-down profile - default in new clusters fo
So, now I discovered that Debian has its own Ceph packages! Foolishly, I
added the Ceph repository and installed ceph-common
without doing an apt clean; apt update; apt upgrade -y, and this led to
the whole cluster being on Pacific with just the radosgw on Luminous!
:-/ For this reason
Hi David! I very much appreciate your response.
I'm not sure that's the problem. I tried the following (without using
"rotational"):
...(snip)...
data_devices:
  size: "15G:"
db_devices:
  size: ":15G"
filter_logic: AND
placement:
  label: "osdj2"
service_id: test_db_device
service_ty