Missing CC to list
Forwarded Message
Subject: Re: [ceph-users] Merging CephFS data pools
Date: Tue, 23 Aug 2016 08:59:45 +0200
From: Burkhard Linke
To: Gregory Farnum
Hi,
On 08/22/2016 10:02 PM, Gregory Farnum wrote:
On Thu, Aug 18, 2016 at 12:21 AM
Hi
Only one node, with a single NVMe SSD; the SSD has 12 partitions, three per
OSD (four OSDs in total).
fio is running 4k random writes with an iodepth of 128.
No snapshots.
Thanks
From: Jan Schermer [mailto:j...@schermer.cz]
Sent: 23 August 2016 14:52
To: Zhiyuan Wang
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] BlueStor
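For reference, an fio invocation matching the workload described above would look roughly like this (the device path and ioengine are assumptions; the actual test may have run against RBD rather than a raw partition):
fio --name=randwrite4k --filename=/dev/nvme0n1p1 --direct=1 \
    --ioengine=libaio --rw=randwrite --bs=4k --iodepth=128 \
    --numjobs=1 --runtime=60 --time_based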
Hi, all
Our VM is terminated unexpectedly when using librbd in our production
environment (CentOS 7.0, kernel 3.12, Ceph 0.94.5, glibc 2.17). We get the
following log from libvirtd:
*** Error in `/usr/libexec/qemu-kvm': invalid fastbin entry (free):
0x7f7db7eed740 ***
Hey cephers,
We now finally have a date and location confirmed for Ceph Day Munich
in September:
http://ceph.com/cephdays/ceph-day-munich/
If you are interested in being a speaker please send me the following:
1) Speaker Name
2) Speaker Org
3) Talk Title
4) Talk abstract
I will be accepting sp
Hi,
On 08/22/2016 07:27 PM, Wido den Hollander wrote:
Op 22 augustus 2016 om 15:52 schreef Christian Balzer :
Hello,
first off, not a CephFS user, just installed it on a lab setup for fun.
That being said, I tend to read most posts here.
And I do remember participating in similar discussio
On Tue, Aug 23, 2016 at 03:45:58PM +0800, Ning Yao wrote:
> Hi, all
>
> Our vm is terminated unexpectedly when using librbd in our production
> environment with CentOS 7.0 kernel 3.12 with Ceph version 0.94.5 and
> glibc version 2.17. we get log from libvirtd as below
>
> *** Error in `/usr/libex
Hi, I'm currently testing rbd-nbd to use it in lxc instead of krbd (to
support new rbd features).
#rbd-nbd map pool/testimage
/dev/nbd0
#rbd-nbd list-mapped
/dev/nbd0
Is it possible to implement something like
#rbd-nbd list-mapped
/dev/nbd0 pool/testimage
Regards,
Alexandre
Hi,
I already created a ticket for this issue.
http://tracker.ceph.com/issues/17076
The complete logfile should be in this ticket.
Jan Hugo
Jan Hugo Prins
On 08/22/2016 10:36 PM, Gregory Farnum wrote:
> On Thu, Aug 18, 2016 at 11:42 AM, jan hugo prins wrote:
>> I have been able to reproduce
Hi,
I'm testing S3 and I created a test where I sync a big part of my
home directory, about 4 GB of data in a lot of small objects, to an S3
bucket.
The first part of the sync was very fast but after some time it became a
lot slower.
What I basically see is this for every file:
The file gets t
There was almost the exact same issue on the master branch right after
the switch to cmake because tcmalloc was incorrectly (and partially)
linked into librados/librbd. What occurred was that the std::list
within ceph::buffer::ptr was allocated via tcmalloc but was freed
within librados/librbd via
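A quick way to check which allocator a given build actually pulls in is to inspect the shared-library dependencies; the paths below are typical CentOS 7 locations and may differ on other distributions:
ldd /usr/lib64/librbd.so.1 | grep -E 'tcmalloc|jemalloc'
ldd /usr/lib64/librados.so.2 | grep -E 'tcmalloc|jemalloc'
ldd /usr/libexec/qemu-kvm | grep -E 'tcmalloc|jemalloc'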
Looks good. Since you are re-using the RBD header object to send the
watch notification, a running librbd client will most likely print out
an error message along the lines of "failed to decode the
notification" since you are sending "fsfreeze" / "fsunfreeze" as the
payload, but it would be harmle
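For anyone following along, you can locate the image's header object and see which clients are currently watching it roughly like this (pool and image names are examples, <id> is whatever suffix block_name_prefix reports, and the rbd_header.<id> naming applies to format 2 images):
rbd info rbd/testimage | grep block_name_prefix
rados -p rbd listwatchers rbd_header.<id>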
On 22.08.2016 20:16, K.C. Wong wrote:
> Is there a way
> to force a 'remote-fs' reclassification?
Have you tried adding _netdev to the fstab options?
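For example, a CephFS kernel mount entry with _netdev could look like this (monitor addresses, mount point and secret file are placeholders):
10.0.0.1:6789,10.0.0.2:6789,10.0.0.3:6789:/ /mnt/cephfs ceph name=admin,secretfile=/etc/ceph/admin.secret,_netdev,noatime 0 0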
Regards
--
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin
http://www.heinlein-support.de
Tel: 030 / 405051-43
Fax: 030 /
I'd like to generate keys for Ceph external to any system that would have
ceph-authtool.
Looking over the Ceph website and googling have turned up nothing.
Is the ceph auth key generation algorithm documented anywhere?
-Chris
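For what it's worth, the secret in a keyring appears to be a base64-encoded CryptoKey: a 16-bit key type (1 = AES), a creation timestamp (32-bit seconds plus 32-bit nanoseconds), a 16-bit length, and then the 16 secret bytes, all little-endian. A rough sketch with bash and coreutils, setting the creation time to zero; please compare the output against ceph-authtool --gen-print-key before trusting it:
# header: type=1 (AES), created sec/nsec = 0, secret length = 16 (all little-endian)
printf '\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x10\x00' > /tmp/cephkey.bin
# append 16 random bytes as the actual secret
head -c 16 /dev/urandom >> /tmp/cephkey.bin
base64 < /tmp/cephkey.bin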
I don't think this is something that could be trivially added. The
nbd protocol doesn't really support associating metadata with the
device. Right now, that "list-mapped" command just tests each nbd
device to see if it is connected to any backing server (not just
rbd-nbd backed devices).
On Tue,
Hi,
the Firefly and Hammer releases did not support transparent usage of
cache tiering in CephFS. The cache tier itself had to be specified as
the data pool, thus preventing on-the-fly addition and removal of cache tiers.
Does the same restriction also apply to Jewel? I would like to add a
cache
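For context, the usual sequence for putting a cache tier in front of an existing data pool looks roughly like the following (pool names are examples; whether Jewel then uses the tier transparently for CephFS is exactly the open question above):
ceph osd tier add cephfs_data cephfs_cache
ceph osd tier cache-mode cephfs_cache writeback
ceph osd tier set-overlay cephfs_data cephfs_cache
ceph osd pool set cephfs_cache hit_set_type bloom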
I have found a way: the nbd device stores the pid of the running rbd-nbd
process, so:
#cat /sys/block/nbd0/pid
18963
#cat /proc/18963/cmdline
rbd-nbd map pool/testimage
- Original Message -
From: "Jason Dillaman"
To: "aderumier"
Cc: "ceph-users"
Sent: Tuesday, 23 August 2016 16:30:38
Subject: Re:
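Building on the /sys/block/nbdX/pid trick above, a small loop can reconstruct the mapping for every connected device (this assumes the pid attribute is present for each mapped nbd device, as shown above):
for pidfile in /sys/block/nbd*/pid; do
  [ -e "$pidfile" ] || continue
  dev=/dev/$(basename "$(dirname "$pidfile")")
  # the kernel exposes the rbd-nbd pid; its cmdline holds pool/image
  printf '%s\t' "$dev"
  tr '\0' ' ' < "/proc/$(cat "$pidfile")/cmdline"
  echo
done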
On Mon, Aug 22, 2016 at 3:29 PM, Wido den Hollander wrote:
>
>> Op 22 augustus 2016 om 21:22 schreef Nick Fisk :
>>
>>
>> > -Original Message-
>> > From: Wido den Hollander [mailto:w...@42on.com]
>> > Sent: 22 August 2016 18:22
>> > To: ceph-users ; n...@fisk.me.uk
>> > Subject: Re: [ceph-
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Alex
> Gorbachev
> Sent: 23 August 2016 16:43
> To: Wido den Hollander
> Cc: ceph-users ; Nick Fisk
> Subject: Re: [ceph-users] udev rule to set readahead on Ceph RBD's
>
> On Mon, Aug 22, 2
On Mon, Aug 22, 2016 at 9:22 PM, Nick Fisk wrote:
>> -Original Message-
>> From: Wido den Hollander [mailto:w...@42on.com]
>> Sent: 22 August 2016 18:22
>> To: ceph-users ; n...@fisk.me.uk
>> Subject: Re: [ceph-users] udev rule to set readahead on Ceph RBD's
>>
>>
>> > Op 22 augustus 2016
On Tue, Aug 23, 2016 at 6:15 PM, Nick Fisk wrote:
>
>> -Original Message-
>> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
>> Alex Gorbachev
>> Sent: 23 August 2016 16:43
>> To: Wido den Hollander
>> Cc: ceph-users ; Nick Fisk
>> Subject: Re: [ceph-users] udev
Trying to hunt down a mystery osd populated in the osd tree.
Cluster was deployed using ceph-deploy on an admin node, originally 10.2.1 at
time of deployment, but since upgraded to 10.2.2.
For reference, mons and mds do not live on the osd nodes, and the admin node is
neither mon, mds, nor osd.
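If it turns out to be a stale leftover rather than a real daemon, the usual cleanup would be something like this (the id is a placeholder; only run the removal once you are sure nothing is using it):
ceph osd tree
ceph osd crush remove osd.<id>
ceph auth del osd.<id>
ceph osd rm osd.<id>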
> Op 23 augustus 2016 om 18:32 schreef Ilya Dryomov :
>
>
> On Mon, Aug 22, 2016 at 9:22 PM, Nick Fisk wrote:
> >> -Original Message-
> >> From: Wido den Hollander [mailto:w...@42on.com]
> >> Sent: 22 August 2016 18:22
> >> To: ceph-users ; n...@fisk.me.uk
> >> Subject: Re: [ceph-users]
> -Original Message-
> From: Wido den Hollander [mailto:w...@42on.com]
> Sent: 23 August 2016 19:45
> To: Ilya Dryomov ; Nick Fisk
> Cc: ceph-users
> Subject: Re: [ceph-users] udev rule to set readahead on Ceph RBD's
>
>
> > Op 23 augustus 2016 om 18:32 schreef Ilya Dryomov :
> >
> >
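For readers searching the archive for the rule itself: the kind of udev rule being discussed presumably looks something like this (the readahead value, match expression and file name are assumptions, not the exact rule from the thread):
# /etc/udev/rules.d/99-rbd-readahead.rules
KERNEL=="rbd[0-9]*", SUBSYSTEM=="block", ENV{DEVTYPE}=="disk", ACTION=="add|change", ATTR{queue/read_ahead_kb}="4096"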
Would you mind opening a feature tracker ticket [1] to document the
proposal? Any chance you are interested in doing the work?
[1] http://tracker.ceph.com/projects/rbd/issues
On Tue, Aug 23, 2016 at 11:15 AM, Alexandre DERUMIER
wrote:
> I have find a way, nbd device store the pid of the running
Hi,
I'm using Ceph Jewel 10.2.2 and I would like to know what Ceph does with
duplicate data.
Will the Ceph OSDs automatically remove the duplicates, or will Ceph RGW do
it? My Ceph storage cluster uses the S3 API to PUT objects.
Example:
1. Suppose I use one ceph-rgw S3 user to put two different objects of sam
Hi,
I'm using Ceph Jewel 10.2.2. I noticed that when I PUT multiple objects of
the same file, with the same user, to ceph-rgw S3, the RAM usage of ceph-osd
increases and is never released. At the same time, the upload speed drops
significantly.
Please help me solve this problem.
Thanks!
Please share the crushmap.
Thanks
Swami
On Tue, Aug 23, 2016 at 11:49 PM, Reed Dier wrote:
> Trying to hunt down a mystery osd populated in the osd tree.
>
> Cluster was deployed using ceph-deploy on an admin node, originally 10.2.1
> at time of deployment, but since upgraded to 10.2.2.
>
> For
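For reference, the crushmap can be dumped in readable form like this:
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt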
Hi,
On 08/23/2016 08:19 PM, Reed Dier wrote:
Trying to hunt down a mystery osd populated in the osd tree.
Cluster was deployed using ceph-deploy on an admin node, originally 10.2.1 at
time of deployment, but since upgraded to 10.2.2.
For reference, mons and mds do not live on the osd nodes,
> Op 23 augustus 2016 om 22:24 schreef Nick Fisk :
>
>
>
>
> > -Original Message-
> > From: Wido den Hollander [mailto:w...@42on.com]
> > Sent: 23 August 2016 19:45
> > To: Ilya Dryomov ; Nick Fisk
> > Cc: ceph-users
> > Subject: Re: [ceph-users] udev rule to set readahead on Ceph RB
Hi Ricardo (and rest),
I see that http://tracker.ceph.com/issues/14527 /
https://github.com/ceph/ceph/pull/7741 has been merged, which would allow
clients and daemons to find their Monitors through DNS.
mon_dns_srv_name is set to ceph-mon by default, so if I'm correct, this would
work?
Let's s
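If I read the PR correctly, the zone would then need SRV records along these lines (domain, TTL, priorities and mon hostnames are placeholders):
_ceph-mon._tcp.example.com. 3600 IN SRV 10 60 6789 mon1.example.com.
_ceph-mon._tcp.example.com. 3600 IN SRV 10 60 6789 mon2.example.com.
_ceph-mon._tcp.example.com. 3600 IN SRV 10 60 6789 mon3.example.com.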