> -Original Message-
> From: mq [mailto:maoqi1...@126.com]
> Sent: 04 July 2016 08:13
> To: Nick Fisk
> Subject: Re: [ceph-users]
> suse_enterprise_storage3_rbd_LIO_vmware_performance_bad
>
> Hi Nick
> I have tested NFS: since NFS cannot choose Eager Zeroed Thick Provision
> mode, I use
Hi All,
Quick question. I'm currently in the process of getting ready to deploy a
2nd cluster, and at some point in the next 12 months I will want to
enable RBD mirroring between the new and existing clusters. I'm leaning
towards deploying this new cluster with IPv6, because Wido says so ;-)
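For reference, the per-pool setup that RBD mirroring involves looks roughly like this; this is only a sketch with placeholder pool, cluster and user names ("rbd", "site-a", "site-b", "client.mirror"), and it assumes an rbd-mirror daemon runs on the receiving side with both clusters' configs and keyrings available. Whether the peer connection can mix IPv4 and IPv6 endpoints is exactly the open question here.

    rbd mirror pool enable rbd pool            # on each cluster; pool-level mirroring
    rbd --cluster site-b mirror pool peer add rbd client.mirror@site-a
    rbd mirror pool status rbd                 # verify the peer and image health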
On 01/07/16 16:01, Yan, Zheng wrote:
On Fri, Jul 1, 2016 at 6:59 PM, John Spray wrote:
On Fri, Jul 1, 2016 at 11:35 AM, Kenneth Waegeman
wrote:
Hi all,
While syncing a lot of files to CephFS, our MDS cluster went haywire: the
MDSs have a lot of segments behind on trimming: (58621/30)
Becau
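The (58621/30) figure reads as journal segments versus the trimming target (mds_log_max_segments, which defaults to 30). A sketch of how that could be inspected on the MDS host, assuming a hypothetical daemon id "a"; whether raising the target is appropriate depends on why trimming is falling behind:

    ceph daemon mds.a config get mds_log_max_segments    # the "30" above
    ceph daemon mds.a perf dump | grep -A 6 '"mds_log"'  # current journal counters
    ceph daemon mds.a config set mds_log_max_segments 120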
On Sun, Jul 3, 2016 at 8:06 AM, Lihang wrote:
> root@BoreNode2:~# ceph -v
>
> ceph version 10.2.0
>
>
>
> From: lihang 12398 (RD)
> Sent: 3 July 2016 14:47
> To: ceph-users@lists.ceph.com
> Cc: Ceph Development; 'uker...@gmail.com'; zhengbin 08747 (RD); xusangdi
> 11976 (RD)
> Subject: how to fix the mds
On 2016-07-01T13:04:45, mq wrote:
Hi MQ,
perhaps the upstream list is not the best one to discuss this. SUSE
includes adjusted backports for the iSCSI functionality that upstream
does not; very few people here are going to be intimately familiar with
the code you're running. If you're evaluating
On 2016-07-01T17:18:19, Christian Balzer wrote:
> First off, it's somewhat funny that you're testing the repackaged SUSE
> Ceph, but asking for help here (with Ceph being owned by Red Hat).
*cough* Ceph is not owned by RH. RH acquired the InkTank team and the
various trademarks, that's true (and
On 2016-07-01T19:11:34, Nick Fisk wrote:
> To summarise,
>
> LIO is just not working very well at the moment because of the ABORT Tasks
> problem, this will hopefully be fixed at some point. I'm not sure if SUSE
> works around this, but see below for other pain points with RBD + ESXi + iSCSI
Thank you very much for your advice. The command "ceph mds repaired 0" works
fine in my cluster; my cluster state is back to HEALTH_OK and the CephFS state
is normal again, but in the monitor and MDS log files it just records the
replay and recovery process without pointing out anything abnormal.
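For reference, the repair-and-verify sequence described above, as a minimal sketch (rank 0, as in the original command; the status checks are just one way to confirm the recovery):

    ceph mds repaired 0   # mark rank 0 as repaired
    ceph -s               # cluster should go back to HEALTH_OK
    ceph mds stat         # the rank should come back up:active after replay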
> On 4 July 2016 at 9:25, Nick Fisk wrote:
>
>
> Hi All,
>
> Quick question. I'm currently in the process of getting ready to deploy a
> 2nd cluster, and at some point in the next 12 months I will want to
> enable RBD mirroring between the new and existing clusters. I'm leaning
> towards d
> On 3 July 2016 at 11:34, Roozbeh Shafiee wrote:
>
>
> Actually I tried all the approaches I found in the Ceph docs and on the
> mailing lists, but none of them had any effect. As a last resort I changed
> pg/pgp.
>
> Anyway… What can I do as the best way to solve this problem?
>
Did you try to re
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Wido den Hollander
> Sent: 04 July 2016 14:34
> To: ceph-users@lists.ceph.com; n...@fisk.me.uk
> Subject: Re: [ceph-users] RBD mirroring between a IPv6 and IPv4 Cluster
>
>
> > On 4 July 2016
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Lars Marowsky-Bree
> Sent: 04 July 2016 11:36
> To: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users]
> suse_enterprise_storage3_rbd_LIO_vmware_performance_bad
>
> On 2016-07-01T19:11:34,
Gregory Farnum writes:
> On Thu, Jun 30, 2016 at 1:03 PM, Dzianis Kahanovich wrote:
>> Upgraded infernalis->jewel (git, Gentoo). The upgrade was done by globally
>> stopping and restarting everything in one shot.
>>
>> Infernalis: e5165: 1/1/1 up {0=c=up:active}, 1 up:standby-replay, 1
>> up:standby
>>
>> Now after up
On Wed, Jun 29, 2016 at 5:41 AM, Campbell Steven wrote:
> Hi Alex/Stefan,
>
> I'm in the middle of testing 4.7rc5 on our test cluster to confirm
> once and for all this particular issue has been completely resolved by
> Peter's recent patch to sched/fair.c referred to by Stefan above. For
> us any
Hi Nick,
On Fri, Jul 1, 2016 at 2:11 PM, Nick Fisk wrote:
> However, there are a number of pain points with iSCSI + ESXi + RBD and they
> all mainly centre on write latency. It seems VMFS was designed around the
> fact that Enterprise storage arrays service writes in 10-100us, whereas Ceph
Reproduce with 'debug mds = 20' and 'debug ms = 20'.
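A minimal sketch of raising those levels on a running MDS (the daemon id "a" is a placeholder; the settings can also be persisted under [mds] in ceph.conf):

    ceph tell mds.a injectargs '--debug_mds 20 --debug_ms 20'
    # afterwards, drop them back down, e.g.:
    ceph tell mds.a injectargs '--debug_mds 1/5 --debug_ms 0'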
shinobu
On Mon, Jul 4, 2016 at 9:42 PM, Lihang wrote:
> Thank you very much for your advice. The command "ceph mds repaired 0"
> works fine in my cluster; my cluster state is back to HEALTH_OK and the CephFS
> state is normal again, but in the
Dear All...
We have recently migrated all our ceph infrastructure from 9.2.0 to 10.2.2.
We are currently using ceph-fuse to mount cephfs in a number of clients.
The ceph-fuse 10.2.2 client is segfaulting in some situations. One of the
scenarios where ceph-fuse segfaults is when a user submits a pa
Can you reproduce with debug client = 20?
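One way to capture that for ceph-fuse, as a sketch (monitor address, mount point and log path are placeholders):

    # in ceph.conf on the client (or passed on the ceph-fuse command line):
    # [client]
    #     debug client = 20
    #     log file = /var/log/ceph/ceph-fuse.log
    ceph-fuse --debug-client=20 -m mon-host:6789 /mnt/cephfs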
On Tue, Jul 5, 2016 at 10:16 AM, Goncalo Borges <goncalo.bor...@sydney.edu.au> wrote:
> Dear All...
>
> We have recently migrated all our ceph infrastructure from 9.2.0 to 10.2.2.
>
> We are currently using ceph-fuse to mount cephfs in a number of client
On Tue, Jul 5, 2016 at 12:13 PM, Shinobu Kinjo wrote:
> Can you reproduce with debug client = 20?
In addition to this I would suggest making sure you have debug symbols in
your build and capturing a core file. You can do that by setting
"ulimit -c unlimited" in the environment where ceph-fuse is
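A sketch of that sequence; where the core file ends up depends on the system's core_pattern, and the distribution's ceph debuginfo/dbg package (or a local build with symbols) is needed for a readable backtrace:

    ulimit -c unlimited                       # in the shell that launches ceph-fuse
    ceph-fuse -m mon-host:6789 /mnt/cephfs    # reproduce the segfault
    gdb $(which ceph-fuse) ./core             # then: bt, or thread apply all bt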
Hi Goncalo,
I believe this segfault may be the one fixed here:
https://github.com/ceph/ceph/pull/10027
(Sorry for the brief top-post. I'm on mobile.)
On Jul 4, 2016 9:16 PM, "Goncalo Borges"
wrote:
>
> Dear All...
>
> We have recently migrated all our ceph infrastructure from 9.2.0 to
> 10.2.2.
>
> W
On Tue, Jul 5, 2016 at 1:34 PM, Patrick Donnelly wrote:
> Hi Goncalo,
>
> I believe this segfault may be the one fixed here:
>
> https://github.com/ceph/ceph/pull/10027
Ah, nice one Patrick.
Goncalo, the patch is fairly simple, just the addition of a lock on two lines to
resolve the race. Could
Hi Brad, Shinobu, Patrick...
Indeed if I run with 'debug client = 20' it seems I get a very similar
log to what Patrick has in the patch. However it is difficult for me to
really say if it is exactly the same thing.
One thing I could try is simply to apply the fix in the source code and
reco
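A sketch of one way to do that on top of the v10.2.2 source (the patch URL follows GitHub's pull/<N>.patch convention for the PR linked above; build steps assume the Jewel autotools tree):

    git clone https://github.com/ceph/ceph.git && cd ceph
    git checkout v10.2.2
    git submodule update --init --recursive
    curl -L https://github.com/ceph/ceph/pull/10027.patch | git am -3   # may need manual fix-up
    ./install-deps.sh && ./autogen.sh && ./configure && make -j"$(nproc)"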
Will do Brad. From your answer it should be a safe thing to do.
Will report later.
Thanks for the help
Cheers
Goncalo
On 07/05/2016 02:42 PM, Brad Hubbard wrote:
On Tue, Jul 5, 2016 at 1:34 PM, Patrick Donnelly wrote:
Hi Goncalo,
I believe this segfault may be the one fixed here:
https: