Hi again,
Even with this, our 6+3 EC pool with the default bluestore_min_alloc_size of
64 KiB, filled with 4 MiB RBD objects, should not take 1.67x space; it should
be around 1.55x. There is still a 12% unaccounted overhead. Could there be
something else too?
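For reference, here is the arithmetic behind the 1.55x figure (assuming whole
4 MiB objects and the 64 KiB allocation unit): each of the k=6 data chunks is
4096 KiB / 6 ≈ 683 KiB, which BlueStore rounds up to 11 x 64 KiB = 704 KiB;
with the m=3 coding chunks that is 9 x 704 KiB = 6336 KiB on disk for 4096 KiB
of logical data, i.e. 6336 / 4096 ≈ 1.55.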
Best,
On Tue, Nov 26, 2019 at 8:08 PM Serkan Ç
Hello:
As I understand it, an OSD's heartbeat partners only come from
OSDs that serve the same PGs.
See below (# ceph osd tree): osd.10 and osd.0-6 cannot serve the same PGs,
because osd.10 and osd.0-6 are under different root trees, and PGs in my
cluster do not map across root trees (# cep
OK, given the bad formatting, I have put it here:
https://onlinenotepad.us/T8Kh9oZVNd
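If it helps, one way to check this (a rough sketch; the exact column layout
may differ between releases) is to list the PGs that osd.10 serves together
with their UP/ACTING sets, since any OSD that never appears there alongside
osd.10 should not become one of its heartbeat partners:
# ceph pg ls-by-osd osd.10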
Hi,
I have a CephFS instance and I am also planning on deploying an
Object Storage interface.
My servers have 2 network interfaces each. I would like to use the current
local one to talk to Ceph clients (both CephFS and Object Storage)
and use the second one for all Ceph processes to talk to one another.
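From what I have read so far, I assume this maps to something like the
following in ceph.conf (the subnets below are just placeholders for my two
NICs), but I would like to confirm whether that is the right approach:
[global]
public_network  = 192.168.1.0/24   # first NIC: clients (CephFS and Object Storage)
cluster_network = 10.0.0.0/24      # second NIC: OSD replication and heartbeat traffic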
Can this version be installed on Debian 10?
If not, is there a plan for Mimic to support Debian 10?
-Original Message-
From: Sage Weil
Sent: Monday, November 25, 2019 10:50 PM
To: ceph-annou...@ceph.io; ceph-users@ceph.io; d...@ceph.io
Subject: [ceph-users] v13.2.7 mimic released
This i
On 22.11.19 23:45, Paul Emmerich wrote:
tools), it means no mapping could be found; check your crush map and
crush rule
The simplest way to get into this state is to change an OSD's reweight on a
small cluster where the number of OSDs equals the EC n+k.
I do not know exactly, but it seems that straw2 crush a
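To confirm that a rule really cannot find a complete mapping, something along
these lines should show it (a sketch; substitute your pool's crush rule id and
size for the '1' and '9' below):
# ceph osd getcrushmap -o crushmap.bin
# crushtool -i crushmap.bin --test --rule 1 --num-rep 9 --show-bad-mappings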
Hello,
as far as I know, Mimic and Nautilus are still not available on Debian.
Unfortunately we do not provide Mimic on our mirror for Debian 10 Buster.
But if you want to migrate to Nautilus, feel free to use our public mirrors
described at https://croit.io/2019/07/07/2019-07-07-debian-mirror.
--
Ma
Since I sent the mail below a few days ago, we have found some clues.
We observed a lot of lossy connections like:
ceph-osd.9.log:2019-11-27 11:03:49.369 7f6bb77d0700 0 --
192.168.4.181:6818/2281415 >> 192.168.4.41:0/1962809518
conn(0x563979a9f600 :6818 s=STATE_ACCEPTING_WAIT_CONNECT_MSG_AUTH
pgs=0 cs=0
Hi Cephers,
To better understand how our current users utilize Ceph, we conducted a
public community survey. This information serves as a guide for the community
on where to spend our contribution efforts for future development. The survey
results will remain anonymous and aggregated in future Ceph Foundati
If it was a network issue, the counters should explode (as I said,
with a log level of 5 on the messenger, we observed more than 80,000
lossy channels per minute), but nothing abnormal shows up in the
counters (on switches and servers).
On the switches: no drops, no CRC errors, no packet loss, only so
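For anyone who wants to reproduce the measurement, something along these lines
should give a comparable count (the exact log wording and flag syntax may
differ by release):
# ceph tell osd.\* injectargs '--debug_ms 5/5'
# grep -c lossy /var/log/ceph/ceph-osd.9.log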
Thanks a lot for the information!
What is the relationship of this mirror to the official Ceph website?
Basically, we want to use an official release and are hesitant to use a
third-party build package.
From: Martin Verges
Sent: Wednesday, November 27, 2019 9:58 PM
To: Sang, Oliver
Cc: Sage Weil ; ceph-annou
On 11/27/19 8:04 PM, Rodrigo Severo - Fábrica wrote:
I have a CephFS instance and I am also planning on deploying an
Object Storage interface.
My servers have 2 network interfaces each. I would like to use the current
local one to talk to Ceph clients (both CephFS and Object Storage)
and use t
Hello,
I am new to Ceph and currently I am working on setting up a CephFS and RBD
environment. I have successfully set up a Ceph cluster with 4 OSDs (2 OSDs
of 50 GB and 2 OSDs of 300 GB).
But while setting up CephFS, the size I see allocated for the CephFS data
and metadata pools is 55
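I assume the right places to look are the commands below, but please correct
me if there is a better way to read the capacity figures:
# ceph df        (per-pool USED and MAX AVAIL)
# ceph osd df    (per-OSD size, weight and utilisation)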