Hi Andrei,
are you using Jumbo Frames? In my experience, I had a driver issue where
one NIC wouldn't accept the MTU set for the interface, and the cluster
ran into very similar behavior to what you are describing. After I set
the MTU on all NICs and servers to the working value of my troubling
NIC, the problem went away.
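In case it helps, this is roughly how I verified it back then (only a
sketch; eth0 and <peer-ip> are placeholders for your interface name and
the address of another cluster node):

    # show the MTU currently configured on the interface
    ip link show eth0 | grep -o 'mtu [0-9]*'
    # send a non-fragmentable jumbo frame (8972 bytes payload + 28 bytes
    # of IP/ICMP headers = 9000); if this fails, the path does not carry
    # jumbo frames end to end
    ping -c 3 -M do -s 8972 <peer-ip>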
Hi George,
interesting result for your benchmark. Could you please supply some more
numbers? We didn't get results that good in our tests.
Thanks.
Cheers,
Alwin
On 07/06/2016 02:03 PM, George Shuklin wrote:
> Hello.
>
> I've been testing the Intel 3500 as a journal store for a few HDD-based OSD
Hi Marco,
On Mon, Oct 23, 2017 at 04:10:34PM +0200, Marco Baldini - H.S. Amiata wrote:
> Thanks for reply
>
> My ceph.conf:
>
> [global]
> auth client required = none
> auth cluster required = none
> auth service required = none
> bluestore_bl
/usr/bin/ceph-mon -f --cluster ceph --id pve-hs-3
> --setuser ceph --setgroup ceph
>
>
> At 17:28 I have this in syslog / journal of pve-hs-2
>
> Oct 23 17:38:47 pve-hs-2 kernel: [255282.309979] libceph: mon1
> 10.10.10.252:6789 session lost, hunting for new mon
>
>
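When the clients log "session lost, hunting for new mon", I would first
check whether the monitors themselves stay in quorum. A few commands I
would run (just a sketch; the mon id pve-hs-3 is taken from the command
line above, adjust to the node you run it on):

    # overall cluster state and current monitor quorum
    ceph -s
    ceph quorum_status --format json-pretty
    # view of a single monitor through its admin socket, run on that node
    ceph daemon mon.pve-hs-3 mon_status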
Hi,
I am confused by the %USED calculation in the 'ceph df' output in luminous. In
the example below the pools show 2.92% "%USED", but my calculation, taken
from the source code, gives me 8.28%. On a hammer cluster my calculation
gives the same result as the 'ceph df' output.
Am I t
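For reference, this is the calculation I am applying, as a minimal sketch
(assumption: in luminous the per-pool %USED is roughly
used / (used + max_avail), with both values taken from 'ceph df detail'
in bytes):

    #!/bin/sh
    # usage: ./pct_used.sh <used_bytes> <max_avail_bytes>
    used=$1
    avail=$2
    # percentage used, following the assumed formula used / (used + avail)
    echo "scale=2; 100 * $used / ($used + $avail)" | bc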
Hi Nitin,
On Tue, Nov 07, 2017 at 12:03:15AM +, Kamble, Nitin A wrote:
> Dear Cephers,
>
> As seen below, I notice that 12.7% of raw storage is consumed with zero pools
> in the system. These are bluestore OSDs.
> Is this expected or an anomaly?
DB + WAL are consuming space already, if you ad
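To see where that raw usage goes, something like the following can help
(a sketch; osd.0 is only an example id and the perf dump has to be run on
the node hosting that OSD):

    # per-OSD raw usage, including bluestore overhead
    ceph osd df tree
    # bluefs counters (DB/WAL usage) of a single OSD via its admin socket
    ceph daemon osd.0 perf dump bluefs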
On Fri, Nov 03, 2017 at 12:09:03PM +0100, Alwin Antreich wrote:
> Hi,
>
> I am confused by the %USED calculation in the output 'ceph df' in luminous.
> In the example below the pools use 2.92% "%USED" but with my calculation,
> taken from the source code it gi
Hi Rudi,
On Thu, Nov 09, 2017 at 04:09:04PM +0200, Rudi Ahlers wrote:
> Hi,
>
> Can someone please tell me what the correct procedure is to upgrade a CEPH
> journal?
>
> I'm running ceph: 12.2.1 on Proxmox 5.1, which runs on Debian 9.1
>
> For a journal I have a 400GB Intel SSD drive and it seems C
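The usual procedure for a filestore OSD looks roughly like the following.
This is only a sketch under the assumption that the OSD uses filestore
(not bluestore); the OSD id 0 and the partition UUID are placeholders, so
double-check everything against your setup before running it:

    # avoid rebalancing while the OSD is down
    ceph osd set noout
    systemctl stop ceph-osd@0
    # write out the journal contents to the OSD's data store
    ceph-osd -i 0 --flush-journal
    # point the OSD to the new journal partition (placeholder UUID)
    ln -sf /dev/disk/by-partuuid/<new-journal-partuuid> /var/lib/ceph/osd/ceph-0/journal
    # initialize the new journal and bring the OSD back
    ceph-osd -i 0 --mkjournal
    systemctl start ceph-osd@0
    ceph osd unset noout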
extsz=4096 blocks=0, rtextents=0
> >> Warning: The kernel is still using the old partition table.
> >> The new table will be used at the next reboot or after you
> >> run partprobe(8) or kpartx(8)
> >> The operation has completed successfully.
> >
On Thu, Nov 09, 2017 at 05:38:46PM +0100, Caspar Smit wrote:
> 2017-11-09 17:02 GMT+01:00 Alwin Antreich :
>
> > Hi Rudi,
> > On Thu, Nov 09, 2017 at 04:09:04PM +0200, Rudi Ahlers wrote:
> > > Hi,
> > >
> > > Can someone please tell me what t
Hello Karun,
On Tue, Nov 14, 2017 at 04:16:51AM +0530, Karun Josy wrote:
> Hello,
>
> Recently, I deleted all the disks from an erasure pool 'ecpool'.
> The pool is empty. However the space usage shows around 400GB.
> What might be wrong?
>
>
> $ rbd ls -l ecpool
> $ ceph df
>
> GLOBAL:
> SIZ
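A few things I would check first, as a sketch (the pool name ecpool is
taken from your mail):

    # deleted images may still sit in the RBD trash
    rbd trash ls ecpool
    # per-pool object and space statistics as RADOS sees them
    rados df
    # any leftover objects in the pool
    rados -p ecpool ls | head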
Hello Marcus,
On Tue, Dec 05, 2017 at 07:09:35PM +0100, Marcus Priesch wrote:
> Dear Ceph Users,
>
> first of all, big thanks to all the devs and people who made all this
> possible, ceph is amazing !!!
>
> OK, so let me get to the point where I need your help:
>
> I have a cluster of 6 hosts, mixe
Hello Marcus,
On Thu, Dec 07, 2017 at 10:24:13AM +0100, Marcus Priesch wrote:
> Hello Alwin, Dear All,
>
> Yesterday we finished the cluster migration to Proxmox and I had the same
> problem again:
>
> A couple of OSDs down and out and a stuck request on a completely
> different OSD, which blocked the
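For the stuck request, these are the commands I would usually start with
(a sketch; replace <id> with the OSD the request is reported on and run
the daemon commands on that OSD's host):

    # which requests are blocked and on which OSDs
    ceph health detail
    # operations currently in flight / blocked on that OSD (admin socket)
    ceph daemon osd.<id> dump_ops_in_flight
    ceph daemon osd.<id> dump_blocked_ops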
Hi,
On Thu, Sep 06, 2018 at 03:52:21PM +0200, Menno Zonneveld wrote:
> ah yes, 3x replicated with a minimum of 2.
>
>
> my ceph.conf is pretty bare, just in case it might be relevant
>
> [global]
> auth client required = cephx
> auth cluster required = cephx
> auth service requi
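Just to make sure we compare the same setup, the replication settings can
be read directly from the pool (a sketch; replace <pool> with your pool
name):

    ceph osd pool get <pool> size
    ceph osd pool get <pool> min_size
    # and the crush rule the pool uses
    ceph osd pool get <pool> crush_rule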
On Thu, Sep 06, 2018 at 05:15:26PM +0200, Marc Roos wrote:
>
> It is idle, still testing, running backups at night on it.
> How do you fill up the cluster so you can test between empty and full?
> Do you have a "ceph df" from empty and full?
>
> I have done another test disabling new scrubs
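One way to fill the cluster for such a comparison is to let rados bench
write without cleaning up afterwards, roughly like this (a sketch; the
pool name and runtime are arbitrary, and the data has to be removed again
later):

    # write objects for 600 seconds into the test pool and keep them
    rados bench -p testpool 600 write --no-cleanup
    # compare usage before and after
    ceph df
    # remove the benchmark objects again when done
    rados -p testpool cleanup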
On Thu, Sep 13, 2018 at 02:17:20PM +0200, Menno Zonneveld wrote:
> Update on the subject. Warning: lengthy post, but with reproducible results
> and a workaround to get performance back to the expected level.
>
> One of the servers had a broken disk controller causing some performance
> issues on this one h
Hello Leonardo,
On Sat, Jun 30, 2018 at 11:09:31PM -0300, Leonardo Vaz wrote:
> Hey Cephers,
>
> The Ceph Community Newsletter of June 2018 has been published:
>
> https://ceph.com/newsletter/ceph-community-june-2018/
>
Thanks for the newsletter.
Sadly, I didn't find any mention of the "Ce
Hi All,
first I wanted to say hello, as I am new to the list.
Secondly, we want to use Ceph for VM disks and CephFS for our source
code, image data, login directories, etc.
I would like to know whether striping would improve performance if we
set something like the following and moved away from
and sources, I am definitely going to test these settings.
>
> Christian
>
Thanks for your replies.
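For reference, the kind of striping settings meant in my question above
could look like this (only a rough sketch; the image and directory names
are made up and the exact values would need benchmarking):

    # RBD: create an image striped over 8 objects with a 64 KiB stripe unit
    rbd create rbd/vm-test-disk --size 100G --stripe-unit 65536 --stripe-count 8
    # CephFS: set a directory layout so new files are striped the same way
    setfattr -n ceph.dir.layout.stripe_unit -v 65536 /mnt/cephfs/build
    setfattr -n ceph.dir.layout.stripe_count -v 8 /mnt/cephfs/build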
--
with best regards,
Alwin Antreich
IT Analyst
antre...@cognitec.com
Cognitec Systems GmbH
Grossenhainer Strasse 101
01127, Dresden
Germany
Managing Director: Alfredo Herrera
Amtsge
Hello,
On Mon, Dec 02, 2019 at 08:17:49AM +0100, GBS Servers wrote:
> Hi, I have a problem with creating a new OSD:
>
> stdin: ceph --cluster ceph --name client.bootstrap-osd --keyring
> /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new
> b9e52bda-7f05-44e0-a69b-1d47755343cf
> Dec 2 08:09:03 ser
On Mon, Dec 02, 2019 at 11:57:34AM +0100, GBS Servers wrote:
> How do I check?
>
> Thanks.
>
> Mon, 2 Dec 2019 at 10:38, Alwin Antreich wrote:
>
> > Hello,
> >
> > On Mon, Dec 02, 2019 at 08:17:49AM +0100, GBS Servers wrote:
> >
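One thing worth comparing here is whether the bootstrap-osd key on that
node matches the one the cluster knows, for example (a sketch):

    # key as stored in the cluster
    ceph auth get client.bootstrap-osd
    # key the node uses for 'osd new'
    cat /var/lib/ceph/bootstrap-osd/ceph.keyring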