Hi all,
just a friendly reminder to use this pad for CfP coordination.
Right now it seems like I'm the only one who submitted something to
Cephalocon and I can't believe that ;-)
https://pad.ceph.com/p/cfp-coordination
Thanks,
Kai
On 5/31/18 1:17 AM, Gregory Farnum wrote:
> Short version: ht
Hi folks,
My ceph cluster is used exclusively for cephfs, as follows:
---
root@node1:~# grep ceph /etc/fstab
node2:6789:/ /ceph ceph auto,_netdev,name=admin,secretfile=/root/ceph.admin.secret
root@node1:~#
---
"rados df" shows me the following:
---
root@node1:~# rados df
POOL_NAME USE
Hi,
you could try reducing "osd map message max"; the failures that surface as
-EIO (kernel: libceph: mon1 *** io error) come from messages exceeding
include/linux/ceph/libceph.h:CEPH_MSG_MAX_{FRONT,MIDDLE,DATA}_LEN.
This "worked for us" - YMMV.
-KJ
On Tue, Jan 15, 2019 at 6:14 AM Andras Pataki
wrote:
> An
Hi Ketil,
I have not tested creation/deletion, but the read/write performance
was much better than in the thread you posted. Using a CTDB setup based on
Robert's presentation, we were getting 800 MB/s write performance at
queue depth = 1 and 2.2 GB/s at queue depth = 32 from a single CTDB/Samba
g
Try lspci -vs and look for
`Capabilities: [148] Device Serial Number 00-02-c9-03-00-4f-68-7e`
in the output
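If you don't already know the slot address, this prints each device that exposes the serial number capability (needs root to read the extended capabilities):
---
sudo lspci -vvv | awk '/^[0-9a-f]/{dev=$0} /Device Serial Number/{print dev"\n  "$0}'
---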
--
With best regards,
Vitaliy Filippov
Robert,
Thanks, this is really interesting. Do you also have any details on how a
solution like this performs? I've been reading a thread about samba/cephfs
performance, and the stats aren't great - especially when creating/deleting
many files - but being a rookie, I'm not 100% clear on the hardwa
On Mon, Jan 14, 2019 at 7:11 AM Daniel Gryniewicz wrote:
>
> Hi. Welcome to the community.
>
> On 01/14/2019 07:56 AM, David C wrote:
> > Hi All
> >
> > I've been playing around with nfs-ganesha 2.7, exporting a CephFS
> > filesystem; it seems to be working pretty well so far. A few questions:
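For anyone trying the same setup, the export side is a small ganesha.conf block along these lines (export ID, pseudo path, and cephx user are illustrative):
---
# /etc/ganesha/ganesha.conf -- minimal CephFS export sketch
EXPORT {
    Export_ID = 100;             # any unique ID
    Path = /;                    # path inside CephFS to export
    Pseudo = /cephfs;            # where clients see it in the NFSv4 pseudo-fs
    Access_Type = RW;
    Squash = No_Root_Squash;
    FSAL {
        Name = CEPH;
        User_Id = "ganesha";     # cephx user, i.e. client.ganesha
    }
}
---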
Hi Ketil,
use Samba/CIFS with multiple gateway machines clustered with CTDB.
CephFS can be mounted with POSIX ACL support.
Slides from my last Ceph day talk are available here:
https://www.slideshare.net/Inktank_Ceph/ceph-day-berlin-unlimited-fileserver-with-samba-ctdb-and-cephfs
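The Samba side is mostly stock configuration; a minimal sketch, assuming CephFS is kernel-mounted at /ceph on every gateway and CTDB is already running (share name and path are illustrative):
---
# /etc/samba/smb.conf
[global]
    clustering = yes             # TDBs are kept consistent across gateways by CTDB

[projects]
    path = /ceph/projects        # directory on the kernel-mounted CephFS
    read only = no
    inherit acls = yes           # new files/dirs inherit POSIX ACLs from the parent
---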
Regards
--
Rob
Hi,
I'm pretty new to Ceph - pardon the newbie question. I've done a bit
of reading and searching, but I haven't seen an answer to this yet.
Is anyone using ceph to power a filesystem shared among a network of
Linux, Windows and Mac clients? How have you set it up? Is there a
mature Windows drive
Hi
On 15.01.19 at 12:45, Marc Roos wrote:
>
> I upgraded this weekend from 12.2.8 to 12.2.10 without such issues
> (OSDs are idle)
It turns out this was a kernel bug. Updating to a newer kernel has
solved this issue.
Greets,
Stefan
> -Original Message-
> From: Stefan Priebe - Pro
On Wed, Sep 19, 2018 at 7:01 PM Bryan Stillwell wrote:
>
> > On 08/30/2018 11:00 AM, Joao Eduardo Luis wrote:
> > > On 08/30/2018 09:28 AM, Dan van der Ster wrote:
> > > Hi,
> > > Is anyone else seeing rocksdb mon stores slowly growing to >15GB,
> > > eventually triggering the 'mon is using a lot
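To see where a given mon stands, the store size and a one-off compaction look like this (the default data dir layout and mon id = short hostname are assumed):
---
# size of the mon's rocksdb store
du -sh /var/lib/ceph/mon/ceph-$(hostname -s)/store.db

# ask that mon to compact its store once; growth tends to resume until the
# underlying cause (e.g. osdmaps not being trimmed) is addressed
ceph tell mon.$(hostname -s) compact
---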
An update on our cephfs kernel client troubles. After doing some
heavier testing with a newer kernel 4.19.13, it seems like it also gets
into a bad state when it can't connect to monitors (all back end
processes are on 12.2.8):
Jan 15 08:49:00 mon5 kernel: libceph: mon1 10.128.150.11:6789 ses
On Tue, Jan 15, 2019 at 3:51 PM Sergei Shvarts wrote:
>
> Hello ceph users!
>
> A couple of days ago I got a ceph health error - mds0: Metadata damage
> detected.
> Overall ceph cluster is fine: all pgs are clean, all osds are up and in, no
> big problems.
> Looks like there is not much infor
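The damage table can at least be listed from the MDS to see which entries are affected (rank 0 assumed, matching the mds0 in the health error):
---
# list the recorded damage entries
ceph tell mds.0 damage ls

# after the underlying metadata has been repaired, a specific entry can be cleared:
# ceph tell mds.0 damage rm <damage_id>
---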
Hi,
my use case for Ceph is serving as central backup storage.
This means I will back up multiple databases in the Ceph storage cluster.
My question is:
What is the best practice for creating pools & images?
Should I create multiple pools, meaning one pool per database?
Or should I create a single
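To make that concrete, the single-pool variant would be something like (pool name, image names, PG count, and sizes are illustrative):
---
# one shared pool, one RBD image per database
ceph osd pool create backups 128 128
rbd pool init backups
rbd create backups/db-oracle01 --size 2T
rbd create backups/db-mysql01  --size 1T
---
Separate pools would mainly matter for per-database quotas, cephx capabilities, or different replication settings, at the cost of more pools and PGs to manage.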
I upgraded this weekend from 12.2.8 to 12.2.10 without such issues
(OSDs are idle)
-Original Message-
From: Stefan Priebe - Profihost AG [mailto:s.pri...@profihost.ag]
Sent: 15 January 2019 10:26
To: ceph-users@lists.ceph.com
Cc: n.fahldi...@profihost.ag
Subject: Re: [ceph-users] s
Is this result to be expected from CephFS when comparing to a native
SSD speed test?
[flattened benchmark table: column groups for 4k and 1024k random and
sequential reads and writes, each reporting lat / iops / throughput
(kB/s or MB/s) per test size; the data rows are cut off in the digest]
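If anyone wants to reproduce a single cell, a minimal fio run for e.g. the 4k random write case would be (file path and size are illustrative):
---
fio --name=4k-randwrite --filename=/mnt/cephfs/fio.test --size=4G \
    --rw=randwrite --bs=4k --iodepth=1 --direct=1 --ioengine=libaio \
    --runtime=60 --time_based --group_reporting
---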
On 1/15/19 11:39 AM, Dan van der Ster wrote:
> Hi Wido,
>
> `rpm -q --scripts ceph-selinux` will tell you why.
>
> It was the same from 12.2.8 to 12.2.10: http://tracker.ceph.com/issues/21672
>
Thanks for pointing it out!
> And the problem is worse than you described, because the daemons ar
Hi Wido,
`rpm -q --scripts ceph-selinux` will tell you why.
It was the same from 12.2.8 to 12.2.10: http://tracker.ceph.com/issues/21672
And the problem is worse than you described, because the daemons are
even restarted before all the package files have been updated.
Our procedure on these upg
Hi,
I'm in the middle of upgrading a 12.2.8 cluster to 13.2.4 and I've
noticed that during the Yum/RPM upgrade the OSDs are being restarted.
Jan 15 11:24:25 x yum[2348259]: Updated: 2:ceph-base-13.2.4-0.el7.x86_64
Jan 15 11:24:47 x systemd[1]: Stopped target ceph target allowing to
start/
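The usual interim workaround is to set noout around the package upgrade so the surprise restarts at least don't trigger recovery; roughly:
---
ceph osd set noout
yum update "ceph*"        # or however the upgrade is driven on that host
# once the daemons on this host are back up:
ceph osd unset noout
---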
Hello list,
I also tested the current upstream/luminous branch and it happens as well. A
clean install works fine. It only happens on upgraded BlueStore OSDs.
Greets,
Stefan
On 14.01.19 at 20:35, Stefan Priebe - Profihost AG wrote:
> while trying to upgrade a cluster from 12.2.8 to 12.2.10 I'm expe