On Mon, Jan 23, 2023 at 6:51 PM Yuri Weinstein wrote:
>
> Ilya, Venky
>
> rbd, krbd, fs reruns are almost ready, pls review/approve
rbd and krbd approved.
Thanks,
Ilya
On Mon, Jan 23, 2023 at 11:22 PM Yuri Weinstein wrote:
>
> Ilya, Venky
>
> rbd, krbd, fs reruns are almost ready, pls review/approve
fs approved.
>
> On Mon, Jan 23, 2023 at 2:30 AM Ilya Dryomov wrote:
> >
> > On Fri, Jan 20, 2023 at 5:38 PM Yuri Weinstein wrote:
> > >
> > > The overall progre
Hi Team,
We have a ceph cluster with 3 storage nodes:
1. storagenode1 - abcd:abcd:abcd::21
2. storagenode2 - abcd:abcd:abcd::22
3. storagenode3 - abcd:abcd:abcd::23
The requirement is to mount Ceph using the domain name of the MON node.
Note: we resolve the domain name via a DNS server.
For
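A minimal sketch of such a mount, assuming a hypothetical hostname "storagenode.example.com" and an admin keyring (the mount.ceph helper resolves the name in userspace before handing the addresses to the kernel):
mount -t ceph storagenode.example.com:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret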
On Thu, Jan 19, 2023 at 9:07 PM Lo Re Giuseppe wrote:
>
> Dear all,
>
> We have started to use CephFS more intensively for some WLCG-related workloads.
> We have 3 active MDS instances spread across 3 servers, with
> mds_cache_memory_limit=12G; most of the other configs are defaults.
> One of them has
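For reference, that limit is normally set and checked through the config subsystem; a minimal sketch (the 12 GiB value mirrors the setting quoted above, mds.<name> is a placeholder):
ceph config set mds mds_cache_memory_limit 12884901888   # 12 GiB
ceph config show mds.<name> | grep mds_cache_memory_limit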
Hi,
On 24.01.23 15:02, Lokendra Rathour wrote:
My /etc/ceph/ceph.conf is as follows:
[global]
fsid = 7969b8a3-1df7-4eae-8ccf-2e5794de87fe
mon host = [v2:[abcd:abcd:abcd::21]:3300,v1:[abcd:abcd:abcd::21]:6789],[v2:[abcd:abcd:abcd::22]:3300,v1:[abcd:abcd:abcd::22]:6789],[v2:[abcd:abcd:abcd::23]:3300,v1:[abcd:abcd:abcd::23]:6789]
Hi,
you can also use SRV records in DNS to publish the IPs of the MONs.
Read https://docs.ceph.com/en/quincy/rados/configuration/mon-lookup-dns/
for more info.
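A minimal sketch of what that looks like, assuming a hypothetical zone example.com and the default service name:
; SRV records pointing at the MONs (v2 port 3300; add 6789 entries for v1 if needed)
_ceph-mon._tcp.example.com. 3600 IN SRV 10 20 3300 storagenode1.example.com.
_ceph-mon._tcp.example.com. 3600 IN SRV 10 20 3300 storagenode2.example.com.
_ceph-mon._tcp.example.com. 3600 IN SRV 10 20 3300 storagenode3.example.com.
With the records in place, ceph.conf only needs mon_dns_srv_name = ceph-mon (the default) instead of an explicit mon host list.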
Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin
https://www.heinlein-support.de
Tel: 030 / 4050
We encountered the following problems while trying to perform
maintenance on a Ceph cluster:
The cluster consists of 7 Nodes with 10 OSDs each.
There are 4 pools on it: 3 of them are replicated pools with 3/2
size/min_size and one is an erasure coded pool with m=2 and k=5.
The following glo
Hello team,
I have deployed a Ceph Pacific cluster using ceph-ansible running on Ubuntu
20.04, which has 3 OSD hosts and 3 MONs; on each OSD host we have 20 OSDs. I
am integrating Swift in the cluster but I fail to find the policy and to
upload objects to the container. I have deployed rgwloadbalance
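For reference, Swift access through RGW normally needs a Swift subuser and key before the swift client can create containers or upload objects; a minimal sketch, assuming a hypothetical user id "testuser" (host and port are placeholders):
radosgw-admin user create --uid=testuser --display-name="Test User"
radosgw-admin subuser create --uid=testuser --subuser=testuser:swift --access=full
radosgw-admin key create --subuser=testuser:swift --key-type=swift --gen-secret
swift -A http://<rgw-host>:<port>/auth/1.0 -U testuser:swift -K <swift_secret> upload mycontainer myfile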
Dear all
I have just changed the crush rule for all the replicated pools in the
following way:
ceph osd crush rule create-replicated replicated_hdd default host hdd
ceph osd pool set <pool-name> crush_rule replicated_hdd
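For completeness, a quick way to confirm which rule each pool ended up with (the pool name is a placeholder):
ceph osd pool get <pool-name> crush_rule
ceph osd crush rule dump replicated_hdd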
See also this [*] thread
Before applying this change, these pools were all using
the
Hi team,
We have a ceph cluster with 3 storage nodes:
1. storagenode1 - abcd:abcd:abcd::21
2. storagenode2 - abcd:abcd:abcd::22
3. storagenode3 - abcd:abcd:abcd::23
We have a DNS server with IP abcd:abcd:abcd::31 which resolves the above IPs
to a single hostname.
The resolution is as follows:
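The concrete records are cut off here, but the described setup presumably looks something like the following round-robin AAAA entries (the hostname is an assumption for illustration):
storagenode.example.com. 300 IN AAAA abcd:abcd:abcd::21
storagenode.example.com. 300 IN AAAA abcd:abcd:abcd::22
storagenode.example.com. 300 IN AAAA abcd:abcd:abcd::23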
Josh, this is ready for your final review/approval and publishing
Release notes - https://github.com/ceph/ceph/pull/49839
On Tue, Jan 24, 2023 at 4:00 AM Venky Shankar wrote:
>
> On Mon, Jan 23, 2023 at 11:22 PM Yuri Weinstein wrote:
> >
> > Ilya, Venky
> >
> > rbd, krbd, fs reruns are almost r
Looks good to go!
On Tue, Jan 24, 2023 at 7:57 AM Yuri Weinstein wrote:
> Josh, this is ready for your final review/approval and publishing
>
> Release notes - https://github.com/ceph/ceph/pull/49839
>
> On Tue, Jan 24, 2023 at 4:00 AM Venky Shankar wrote:
> >
> > On Mon, Jan 23, 2023 at 11:22
Dear all,
I have a two-host setup, and I recently rebooted a mgr machine without
running the "set noout" and "set norebalance" commands.
The "darkside2" machine is the cephadm machine, and "darkside3" is the
improperly rebooted mgr.
Now the darkside3 machine does not resume ceph configuration:
[root
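For reference, those flags are normally set before a planned reboot and cleared once the host is back; a general sketch, not specific to this cluster:
ceph osd set noout
ceph osd set norebalance
# reboot / maintenance window, wait for the host to rejoin, then:
ceph osd unset norebalance
ceph osd unset noout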
Hi,
what you can't change with EC pools is the EC profile; the pool's
ruleset you can change. The fix is the same as for the replicated
pools: assign a ruleset with the hdd device class, and after some data
movement the autoscaler should not complain anymore.
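A minimal sketch of that approach (profile name, rule name and the k/m values are placeholders that must match the pool's existing profile):
ceph osd erasure-code-profile set ec_hdd k=<k> m=<m> crush-device-class=hdd crush-failure-domain=host
ceph osd crush rule create-erasure ec_hdd_rule ec_hdd
ceph osd pool set <ec-pool> crush_rule ec_hdd_rule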
Regards
Eugen
Quoting Massimo Sgaravat
Hi Mark,
Thanks for your response, it helps!
Our Ceph cluster uses Samsung 870 EVO SSDs, all backed by NVMe drives: 12 SSDs
to 2 NVMe drives per storage node, and each 4 TB SSD is backed by a 283 GB NVMe
LVM partition as its DB device.
Now the cluster throughput is only ~300 MB/s of writes and around 5K IOPS. I could see NVMe
d
Hi,
Do you think the kernel should care about DNS resolution?
k
> On 24 Jan 2023, at 19:07, kushagra.gu...@hsc.com wrote:
>
> Hi team,
>
> We have a ceph cluster with 3 storage nodes:
> 1. storagenode1 - abcd:abcd:abcd::21
> 2. storagenode2 - abcd:abcd:abcd::22
> 3. storagenode3 - abcd:abcd:abcd:
Hi,
Your SSD is a "desktop" SSD, not an "enterprise" SSD, see [1].
These are mostly not suitable for Ceph.
[1] https://yourcmc.ru/wiki/Ceph_performance#CAPACITORS.21
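The page above benchmarks single-queue sync writes, which is where consumer SSDs without power-loss protection fall apart; a hedged fio sketch of such a test (the device path is a placeholder, and writing to a raw device destroys its data):
fio --name=synctest --filename=/dev/sdX --ioengine=libaio --direct=1 --fsync=1 --rw=randwrite --bs=4k --iodepth=1 --runtime=60 --time_based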
k
> On 25 Jan 2023, at 05:35, peter...@raksmart.com wrote:
>
> Hi Mark,
> Thanks for your response, it helps!
> Our Ceph cluster uses