[ceph-users] Fwd: Re: Merging CephFS data pools

2016-08-23 Thread Burkhard Linke
Missing CC to list Forwarded Message Subject: Re: [ceph-users] Merging CephFS data pools Date: Tue, 23 Aug 2016 08:59:45 +0200 From: Burkhard Linke To: Gregory Farnum Hi, On 08/22/2016 10:02 PM, Gregory Farnum wrote: On Thu, Aug 18, 2016 at 12:21 AM

[ceph-users] Re: BlueStore write amplification

2016-08-23 Thread Zhiyuan Wang
Hi, only one node, and only one NVMe SSD; the SSD has 12 partitions, every three for one OSD. fio is 4k randwrite, iodepth is 128. No snapshot. Thanks. From: Jan Schermer [mailto:j...@schermer.cz] Sent: 23 August 2016 14:52 To: Zhiyuan Wang Cc: ceph-users@lists.ceph.com Subject: Re: [ceph-users] BlueStor
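A minimal fio job file matching the workload described above (4k random writes at iodepth 128) might look like the following; the target filename and runtime are assumptions, not taken from the thread:

    [global]
    ioengine=libaio
    direct=1
    bs=4k
    rw=randwrite
    iodepth=128
    numjobs=1
    runtime=300
    time_based=1

    [bluestore-writeamp]
    # assumed target; point this at the device or image under test
    filename=/dev/nvme0n1p1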

[ceph-users] BUG ON librbd or libc

2016-08-23 Thread Ning Yao
Hi all, our VM is terminated unexpectedly when using librbd in our production environment (CentOS 7.0, kernel 3.12, Ceph version 0.94.5, glibc version 2.17). We get this log from libvirtd: *** Error in `/usr/libexec/qemu-kvm': invalid fastbin entry (free): 0x7f7db7eed740 ***

[ceph-users] Ceph Day Munich - 23 Sep 2016

2016-08-23 Thread Patrick McGarry
Hey cephers, We now finally have a date and location confirmed for Ceph Day Munich in September: http://ceph.com/cephdays/ceph-day-munich/ If you are interested in being a speaker please send me the following: 1) Speaker Name 2) Speaker Org 3) Talk Title 4) Talk abstract I will be accepting sp

Re: [ceph-users] Recommended hardware for MDS server

2016-08-23 Thread Burkhard Linke
Hi, On 08/22/2016 07:27 PM, Wido den Hollander wrote: On 22 August 2016 at 15:52, Christian Balzer wrote: Hello, first off, not a CephFS user, just installed it on a lab setup for fun. That being said, I tend to read most posts here. And I do remember participating in similar discussio

Re: [ceph-users] BUG ON librbd or libc

2016-08-23 Thread Brad Hubbard
On Tue, Aug 23, 2016 at 03:45:58PM +0800, Ning Yao wrote: > Hi, all > > Our vm is terminated unexpectedly when using librbd in our production > environment with CentOS 7.0 kernel 3.12 with Ceph version 0.94.5 and > glibc version 2.17. we get log from libvirtd as below > > *** Error in `/usr/libex

[ceph-users] rbd-nbd: list-mapped: is it possible to display the association between rbd volume and nbd device?

2016-08-23 Thread Alexandre DERUMIER
Hi, I'm currently testing rbd-nbd, to use it in lxc instead of krbd (to support new rbd features). #rbd-nbd map pool/testimage /dev/nbd0 #rbd-nbd list-mapped /dev/nbd0 Is it possible to implement something like #rbd-nbd list-mapped /dev/nbd0 pool/testimage Regards, Alexandre _

Re: [ceph-users] Signature V2

2016-08-23 Thread jan hugo prins
Hi, I already created a ticket for this issue. http://tracker.ceph.com/issues/17076 The complete logfile should be in this ticket. Jan Hugo Jan Hugo Prins On 08/22/2016 10:36 PM, Gregory Farnum wrote: > On Thu, Aug 18, 2016 at 11:42 AM, jan hugo prins wrote: >> I have been able to reproduce

[ceph-users] Very slow S3 sync with big number of object.

2016-08-23 Thread jan hugo prins
Hi, I'm testing S3 and I created a test where I sync a big part of my home directory, about 4 GB of data in a lot of small objects, to an S3 bucket. The first part of the sync was very fast but after some time it became a lot slower. What I basically see is this for every file: The file gets t

Re: [ceph-users] BUG ON librbd or libc

2016-08-23 Thread Jason Dillaman
There was almost the exact same issue on the master branch right after the switch to cmake because tcmalloc was incorrectly (and partially) linked into librados/librbd. What occurred was that the std::list within ceph::buffer::ptr was allocated via tcmalloc but was freed within librados/librbd via
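A quick, hedged way to check whether the same mixed-allocator linkage affects a given installation is to look at what the installed libraries and the running process actually pull in; the library paths below are assumptions for a CentOS 7 layout and the check assumes a single qemu-kvm process:

    # do librbd/librados link tcmalloc? (paths are assumptions for CentOS 7)
    ldd /usr/lib64/librbd.so.1 | grep -i tcmalloc
    ldd /usr/lib64/librados.so.2 | grep -i tcmalloc
    # is the running qemu process loading both glibc malloc and tcmalloc?
    grep -E 'tcmalloc|libc-2' /proc/$(pidof qemu-kvm)/maps | awk '{print $NF}' | sort -u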

Re: [ceph-users] RBD Watch Notify for snapshots

2016-08-23 Thread Jason Dillaman
Looks good. Since you are re-using the RBD header object to send the watch notification, a running librbd client will most likely print out an error message along the lines of "failed to decode the notification" since you are sending "fsfreeze" / "fsunfreeze" as the payload, but it would be harmle

Re: [ceph-users] Fwd: Re: Merging CephFS data pools

2016-08-23 Thread Дробышевский , Владимир
> > > Missing CC to list > > > Forwarded Message > Subject: Re: [ceph-users] Merging CephFS data pools > Date: Tue, 23 Aug 2016 08:59:45 +0200 > From: Burkhard Linke > > To: Gregory Farnum > > Hi, > > > On 08/22/2016 10:02 PM, Gregory Farnum wrote: > > On Thu, Aug 18, 2016

Re: [ceph-users] Help with systemd

2016-08-23 Thread Robert Sander
On 22.08.2016 20:16, K.C. Wong wrote: > Is there a way > to force a 'remote-fs' reclassification? Have you tried adding _netdev to the fstab options? Regards -- Robert Sander Heinlein Support GmbH Schwedter Str. 8/9b, 10119 Berlin http://www.heinlein-support.de Tel: 030 / 405051-43 Fax: 030 /
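For readers hitting the same ordering problem, the _netdev option goes into the mount-options column of /etc/fstab; a sketch assuming a kernel CephFS mount (monitor addresses, mount point and secret file are placeholders):

    # /etc/fstab -- illustrative entry only
    10.0.0.1:6789,10.0.0.2:6789:/  /mnt/cephfs  ceph  name=admin,secretfile=/etc/ceph/admin.secret,_netdev,noatime  0  0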

[ceph-users] Ceph auth key generation algorithm documentation

2016-08-23 Thread Heller, Chris
I’d like to generate keys for Ceph on a system that does not have ceph-authtool. Looking over the ceph website and googling have turned up nothing. Is the ceph auth key generation algorithm documented anywhere? -Chris ___ ceph-users mailing li
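For what it's worth, the secret appears to be nothing more than a base64 encoding of a small little-endian header (u16 key type, u32+u32 creation timestamp, u16 key length) followed by 16 random bytes; a sketch under that assumption, with the timestamp simply left at zero:

    # hedged sketch: generate a cephx-style secret without ceph-authtool
    # header = type 1 (AES), created sec/nsec = 0, length = 16, then 16 random bytes
    ( printf '\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x10\x00'
      head -c 16 /dev/urandom ) | base64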

Re: [ceph-users] rbd-nbd: list-mapped: is it possible to display the association between rbd volume and nbd device?

2016-08-23 Thread Jason Dillaman
I don't think this is something that could be trivially added. The nbd protocol doesn't really support associating metadata with the device. Right now, that "list-mapped" command just tests each nbd device to see if it is connected to any backing server (not just rbd-nbd backed devices). On Tue,

[ceph-users] CephFS + cache tiering in Jewel

2016-08-23 Thread Burkhard Linke
Hi, the Firefly and Hammer releases did not support transparent use of cache tiering in CephFS. The cache tier itself had to be specified as the data pool, which prevented on-the-fly addition and removal of cache tiers. Does the same restriction also apply to Jewel? I would like to add a cache
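For context, attaching a cache tier to an existing pool is normally done with the tier commands below (pool names are placeholders); whether Jewel then lets CephFS use the base pool transparently is exactly the open question in this thread:

    # attach a cache pool in front of an existing CephFS data pool (names are placeholders)
    ceph osd tier add cephfs_data cache_pool
    ceph osd tier cache-mode cache_pool writeback
    ceph osd tier set-overlay cephfs_data cache_pool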

Re: [ceph-users] rbd-nbd: list-mapped: is it possible to display the association between rbd volume and nbd device?

2016-08-23 Thread Alexandre DERUMIER
I have found a way: the nbd device stores the pid of the running rbd-nbd process, so: #cat /sys/block/nbd0/pid 18963 #cat /proc/18963/cmdline rbd-nbd map pool/testimage - Original Message - From: "Jason Dillaman" To: "aderumier" Cc: "ceph-users" Sent: Tuesday 23 August 2016 16:30:38 Subject: Re:
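That workaround can be turned into a small loop that prints the device-to-image association for every connected nbd device; a sketch only, with no error handling:

    # list each /dev/nbdX together with the rbd-nbd command line backing it
    for pidfile in /sys/block/nbd*/pid; do
        [ -e "$pidfile" ] || continue
        dev=$(basename "$(dirname "$pidfile")")
        pid=$(cat "$pidfile")
        printf '/dev/%s\t%s\n' "$dev" "$(tr '\0' ' ' < /proc/$pid/cmdline)"
    done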

Re: [ceph-users] udev rule to set readahead on Ceph RBD's

2016-08-23 Thread Alex Gorbachev
On Mon, Aug 22, 2016 at 3:29 PM, Wido den Hollander wrote: > >> On 22 August 2016 at 21:22, Nick Fisk wrote: >> >> >> > -Original Message- >> > From: Wido den Hollander [mailto:w...@42on.com] >> > Sent: 22 August 2016 18:22 >> > To: ceph-users ; n...@fisk.me.uk >> > Subject: Re: [ceph-
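For readers landing here from a search, the kind of rule being discussed looks roughly like the following; the file name and readahead value are assumptions, not a recommendation from this thread:

    # /etc/udev/rules.d/99-rbd-readahead.rules (example path and value)
    KERNEL=="rbd*", ENV{DEVTYPE}=="disk", ACTION=="add|change", ATTR{bdi/read_ahead_kb}="4096"

    # verify afterwards from a shell (value is reported in 512-byte sectors)
    blockdev --getra /dev/rbd0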

Re: [ceph-users] udev rule to set readahead on Ceph RBD's

2016-08-23 Thread Nick Fisk
> -Original Message- > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Alex > Gorbachev > Sent: 23 August 2016 16:43 > To: Wido den Hollander > Cc: ceph-users ; Nick Fisk > Subject: Re: [ceph-users] udev rule to set readahead on Ceph RBD's > > On Mon, Aug 22, 2

Re: [ceph-users] udev rule to set readahead on Ceph RBD's

2016-08-23 Thread Ilya Dryomov
On Mon, Aug 22, 2016 at 9:22 PM, Nick Fisk wrote: >> -Original Message- >> From: Wido den Hollander [mailto:w...@42on.com] >> Sent: 22 August 2016 18:22 >> To: ceph-users ; n...@fisk.me.uk >> Subject: Re: [ceph-users] udev rule to set readahead on Ceph RBD's >> >> >> > On 22 August 2016

Re: [ceph-users] udev rule to set readahead on Ceph RBD's

2016-08-23 Thread Ilya Dryomov
On Tue, Aug 23, 2016 at 6:15 PM, Nick Fisk wrote: > >> -Original Message- >> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of >> Alex Gorbachev >> Sent: 23 August 2016 16:43 >> To: Wido den Hollander >> Cc: ceph-users ; Nick Fisk >> Subject: Re: [ceph-users] udev

[ceph-users] phantom osd.0 in osd tree

2016-08-23 Thread Reed Dier
Trying to hunt down a mystery osd populated in the osd tree. Cluster was deployed using ceph-deploy on an admin node, originally 10.2.1 at time of deployment, but since upgraded to 10.2.2. For reference, mons and mds do not live on the osd nodes, and the admin node is neither mon, mds, or osd.
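For anyone chasing a similar phantom entry, these are the commands typically used to inspect it and, only once it is confirmed to be stale, remove it; use with care:

    # inspect where osd.0 shows up
    ceph osd tree
    ceph osd dump | grep '^osd.0'
    # removal sequence for a confirmed-stale entry
    ceph osd crush remove osd.0
    ceph auth del osd.0
    ceph osd rm 0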

Re: [ceph-users] udev rule to set readahead on Ceph RBD's

2016-08-23 Thread Wido den Hollander
> On 23 August 2016 at 18:32, Ilya Dryomov wrote: > > > On Mon, Aug 22, 2016 at 9:22 PM, Nick Fisk wrote: > >> -Original Message- > >> From: Wido den Hollander [mailto:w...@42on.com] > >> Sent: 22 August 2016 18:22 > >> To: ceph-users ; n...@fisk.me.uk > >> Subject: Re: [ceph-users]

Re: [ceph-users] udev rule to set readahead on Ceph RBD's

2016-08-23 Thread Nick Fisk
> -Original Message- > From: Wido den Hollander [mailto:w...@42on.com] > Sent: 23 August 2016 19:45 > To: Ilya Dryomov ; Nick Fisk > Cc: ceph-users > Subject: Re: [ceph-users] udev rule to set readahead on Ceph RBD's > > > > On 23 August 2016 at 18:32, Ilya Dryomov wrote: > > > >

Re: [ceph-users] rbd-nbd: list-mapped: is it possible to display the association between rbd volume and nbd device?

2016-08-23 Thread Jason Dillaman
Would you mind opening a feature tracker ticket [1] to document the proposal? Any chance you are interested in doing the work? [1] http://tracker.ceph.com/projects/rbd/issues On Tue, Aug 23, 2016 at 11:15 AM, Alexandre DERUMIER wrote: > I have find a way, nbd device store the pid of the running

[ceph-users] issuse with data duplicated in ceph storage cluster.

2016-08-23 Thread Khang Nguyễn Nhật
Hi, I'm using ceph jewel 10.2.2 and I want to know what Ceph does with duplicate data. Will the Ceph OSD automatically delete the duplicates, or will Ceph RGW do it? My Ceph storage cluster uses the S3 API to PUT objects. Example: 1. Suppose I use one ceph-rgw S3 user to put two different objects of the sam

[ceph-users] Memory leak in ceph OSD.

2016-08-23 Thread Khang Nguyễn Nhật
Hi, I'm using ceph jewel 10.2.2. I noticed that when I put multiple objects of the same file, with the same user, to ceph-rgw S3, the RAM usage of ceph-osd increases and is never released. At the same time, the upload speed drops significantly. Please help me solve this problem. Thanks!

Re: [ceph-users] phantom osd.0 in osd tree

2016-08-23 Thread M Ranga Swami Reddy
Please share the crushmap. Thanks Swami On Tue, Aug 23, 2016 at 11:49 PM, Reed Dier wrote: > Trying to hunt down a mystery osd populated in the osd tree. > > Cluster was deployed using ceph-deploy on an admin node, originally 10.2.1 > at time of deployment, but since upgraded to 10.2.2. > > For
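For reference, the crush map is usually extracted and decompiled to text for sharing like this (temporary paths are arbitrary):

    ceph osd getcrushmap -o /tmp/crushmap.bin
    crushtool -d /tmp/crushmap.bin -o /tmp/crushmap.txt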

Re: [ceph-users] phantom osd.0 in osd tree

2016-08-23 Thread Burkhard Linke
Hi, On 08/23/2016 08:19 PM, Reed Dier wrote: Trying to hunt down a mystery osd populated in the osd tree. Cluster was deployed using ceph-deploy on an admin node, originally 10.2.1 at time of deployment, but since upgraded to 10.2.2. For reference, mons and mds do not live on the osd nodes,

Re: [ceph-users] udev rule to set readahead on Ceph RBD's

2016-08-23 Thread Wido den Hollander
> On 23 August 2016 at 22:24, Nick Fisk wrote: > > > > > > -Original Message- > > From: Wido den Hollander [mailto:w...@42on.com] > > Sent: 23 August 2016 19:45 > > To: Ilya Dryomov ; Nick Fisk > > Cc: ceph-users > > Subject: Re: [ceph-users] udev rule to set readahead on Ceph RB

[ceph-users] Finding Monitors using SRV DNS record

2016-08-23 Thread Wido den Hollander
Hi Ricardo (and rest), I see that http://tracker.ceph.com/issues/14527 / https://github.com/ceph/ceph/pull/7741 has been merged which would allow clients and daemons to find their Monitors through DNS. mon_dns_srv_name is set to ceph-mon by default, so if I'm correct, this would work? Let's s
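As I read the change, discovery is driven by SRV lookups, so a zone serving three monitors would contain records along these lines; the names, TTLs and addresses below are made up for illustration, assuming the default mon_dns_srv_name of ceph-mon:

    _ceph-mon._tcp.example.com. 3600 IN SRV 10 60 6789 mon1.example.com.
    _ceph-mon._tcp.example.com. 3600 IN SRV 10 60 6789 mon2.example.com.
    _ceph-mon._tcp.example.com. 3600 IN SRV 10 60 6789 mon3.example.com.
    mon1.example.com.           3600 IN A   192.0.2.1
    mon2.example.com.           3600 IN A   192.0.2.2
    mon3.example.com.           3600 IN A   192.0.2.3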