"loaded dup inode"
> > errors
> >
> >
> >
> > On 07/06/2018 01:47 PM, John Spray wrote:
> > >
> > > On Fri, Jul 6, 2018 at 12:19 PM Wido den Hollander wrote:
> > > >
> > > >
> > > >
cephfs-journal-tool journal reset
cephfs-table-tool all reset session
cephfs-table-tool all reset inode
cephfs-table-tool all take_inos 10
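For reference, the upstream disaster-recovery documentation suggests taking a
journal backup and replaying recoverable dentries before any reset; a minimal
sketch of that sequence, assuming a scratch path for the backup:

# back up the journal before touching anything, so there is something to fall back on
cephfs-journal-tool journal export /root/mds-journal-backup.bin
# flush whatever metadata can still be recovered from the journal
cephfs-journal-tool event recover_dentries summary
# only then reset the journal and the tables as above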
I'm worried that my FS is corrupt because files are not linked
correctly and have different content than they should.
Please help.
On Thu, 2018
Hi,
On Thu, 2018-07-05 at 09:55 +0800, Yan, Zheng wrote:
> On Wed, Jul 4, 2018 at 7:02 PM Dennis Kramer (DBS)
> wrote:
> >
> >
> > Hi,
> >
> > I have managed to get cephfs mds online again...for a while.
> >
> > These topics cover more
Hi,
I'm getting a bunch of "loaded dup inode" errors in the MDS logs.
How can this be fixed?
logs:
2018-07-05 10:20:05.591948 mds.mds05 [ERR] loaded dup inode 0x1991921
[2,head] v160 at , but inode 0x1991921.head v146 already
exists at
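A plausible cause, given the take_inos 10 above: re-seeding the inode table that
low makes the MDS hand out inode numbers that already exist, which shows up as
exactly these "loaded dup inode" errors. A hedged sketch of re-seeding the table
above the highest inode seen in the log (the margin chosen here is arbitrary):

# 0x1991921 is taken from the log line above; convert it to decimal
printf '%d\n' 0x1991921        # prints 26810657
# re-seed the inode table comfortably above any inode number still in use
cephfs-table-tool all take_inos 100000000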
heng wrote:
> On Wed, Jun 27, 2018 at 6:16 PM Dennis Kramer (DT)
> wrote:
> >
> >
> > Hi,
> >
> > Currently i'm running Ceph Luminous 12.2.5.
> >
> > This morning I tried running Multi MDS with:
> > ceph fs set max_mds 2
> >
>
Hi,
Currently I'm running Ceph Luminous 12.2.5.
This morning I tried running Multi MDS with:
ceph fs set max_mds 2
I have 5 MDS servers. After running the above command,
I had 2 active MDSs, 2 standby-active and 1 standby.
And after trying a failover on one
of the active MDSs, a standby-active d
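For what it's worth, a hedged way to see what each daemon ended up doing after a
change like this (the filesystem name "cephfs" below is a placeholder; on Luminous
it is a required argument to the set command):

ceph fs status              # per-rank state and which daemon holds each rank
ceph mds stat               # compact MDS map summary
ceph fs set cephfs max_mds 2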
Thanks for your response. But yes, the network is OK.
But I will double check to be sure.
Then again, if I copy other (big) files from the same client everything
works without any issues. The problem is isolated to a specific file.
With a mis-configured network I would see this kind of issue cons
Hi all,
I have an issue that when I copy a specific file with ceph-fuse on
cephfs (within the same directory) it stalls after a couple of GB of
data. Nothing happens. No error, it just "hangs".
When I copy the same file with the cephfs kernel client it works without
issues.
I'm running Jewel 10.
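One way to get more data when a ceph-fuse copy hangs like this is client-side
debug logging plus the client admin socket; a sketch, assuming Jewel-era option
names and a placeholder socket path:

# /etc/ceph/ceph.conf on the client, before mounting:
#   [client]
#   debug client = 20
#   debug ms = 1
#   log file = /var/log/ceph/ceph-client.log

# while the copy is hung, inspect in-flight requests via the admin socket
# (the socket file name varies per mount; the path below is a placeholder)
ceph daemon /var/run/ceph/ceph-client.admin.asok objecter_requests
ceph daemon /var/run/ceph/ceph-client.admin.asok mds_requests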
Hi Jim,
I'm using a location script for OSDs, so when I add an OSD this script
will determine its place in the cluster and in which bucket it belongs.
In your ceph.conf there is a setting you can use:
osd_crush_location_hook =
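For example (the script path and bucket names below are made up), the hook is
called with --cluster, --id and --type arguments and must print a single line
with the OSD's CRUSH location:

#!/bin/sh
# hypothetical /usr/local/bin/crush-location.sh, referenced by the setting above
# invoked as: crush-location.sh --cluster <cluster> --id <osd id> --type osd
echo "host=$(hostname -s) rack=rack1 root=default"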
With regards,
On 09/14/2016 09:30 PM, Jim Kilborn wrote:
> Reed,
>
Hi Burkhard,
Thank you for your reply, see inline:
On Wed, 14 Sep 2016, Burkhard Linke wrote:
Hi,
On 09/14/2016 12:43 PM, Dennis Kramer (DT) wrote:
Hi Goncalo,
Thank you. Yes, I have seen that thread, but I have no near-full OSDs and
my MDS cache size is pretty high.
You can use the
some near full osd blocking IO.
Cheers
G.
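For anyone checking the same two things, a hedged pair of commands (the MDS
daemon name is a placeholder, and mds_cache_size is the Jewel-era inode-count
limit):

ceph health detail | grep -i full                # any near-full or full OSDs?
ceph osd df                                      # per-OSD utilisation
ceph daemon mds.mds01 config get mds_cache_size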
From: ceph-users [ceph-users-boun...@lists.ceph.com] on behalf of Dennis Kramer
(DBS) [den...@holmes.nl]
Sent: 14 September 2016 17:44
To: ceph-users@lists.ceph.com
Subject: [ceph-users] cephfs/ceph-fuse: mds0
Hi All,
Running Ubuntu 16.04 with Ceph Jewel, version 10.2.2
(45107e21c568dd033c2f0a3107dec8f0b0e58374)
In our environment we are running cephfs and our clients are connecting
through ceph-fuse. Since I upgraded from Hammer to Jewel I have been
haunted by ceph-fuse segfaults, which wer
I also have this problem. Is it perhaps possible to block clients
entirely if they are not using a specific version of Ceph?
BTW, I often stumble upon the cephfs problem:
"client failing to respond to capability release", which results in
blocked requests as well. But I'm not entirely sure if you run C
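When that capability-release warning shows up, a hedged way to identify the
client involved (the MDS name below is a placeholder) is to compare the client
id from health detail with the MDS session list; the session can be evicted
from the MDS as a last resort, but that forces the client to remount:

ceph health detail                 # names the client id behind the warning
ceph daemon mds.mds01 session ls   # maps client ids to addresses/hostnames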
Hi all,
I just want to confirm that the patch works in our environment.
Thanks!
On 08/30/2016 02:04 PM, Dennis Kramer (DBS) wrote:
> Awesome Goncalo, that is very helpful.
>
> Cheers.
>
> On 08/30/2016 01:21 PM, Goncalo Borges wrote:
>> Hi Dennis.
>>
>> That
ceph-fuse using
> an infernalis client. That is how we did it during the 3 weeks we were
> debugging our issues.
>
> Cheers
> Goncalo
>
> ____
> From: Dennis Kramer (DBS) [den...@holmes.nl]
> Sent: 30 August 2016 20:59
> To: Goncalo
ephfs > /path/to/some/log 2>&1 &
>
> If you want an even bigger log level, you should set 'debug client = 20' in
> your
> /etc/ceph/ceph.conf before mounting.
>
>
> Cheers
> Goncalo
>
> On 08/24/2016 10:28 PM, Dennis Kramer (DT) wrote:
On 08/29/2016 08:31 PM, Gregory Farnum wrote:
> On Sat, Aug 27, 2016 at 3:01 AM, Francois Lafont
> wrote:
>> Hi,
>>
>> I had exactly the same error in my production ceph client node with
>> Jewel 10.2.1 in my case.
>>
>> In the client node :
>> - Ubuntu 14.04
>> - kernel 3.13.0-92-generic
>> - c
Hi all,
Running ceph version 10.2.2 (45107e21c568dd033c2f0a3107dec8f0b0e58374) on
Ubuntu 16.04LTS.
Currently I have the weirdest thing: I have a bunch of Linux clients,
mostly Debian-based (Ubuntu/Mint). They all use version 10.2.2 of
ceph-fuse. I've been running cephfs since Hammer without any is
Hi all,
Just wondering if the original issue has been resolved. I have the same
problems with inconsistent NFS and Samba directory listings. I'm running
Hammer.
Is it a confirmed seekdir bug in the kernel client?
On 01/14/2016 04:05 AM, Yan, Zheng wrote:
> On Thu, Jan 14, 2016 at 3:37 AM, Mike Ca
Ah, that explains a lot. Thank you.
Yes, it was a bit confusing which version it applied to.
Awesome addition by the way, I like the path parameter!
Cheers.
On 12/08/2015 03:15 PM, John Spray wrote:
> On Tue, Dec 8, 2015 at 1:43 PM, Den
Hi,
I'm trying to restrict clients to mount a specific path in CephFS.
I've been using the official doc for this:
http://docs.ceph.com/docs/master/cephfs/client-auth/
After setting these cap restrictions, the client can still mount and
use all dire
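For comparison, the cap layout that page describes looks roughly like this
(client name, path and pool are placeholders); note that the path= restriction
is only honoured by sufficiently new MDS daemons and clients, which is one
common reason it appears not to work:

ceph auth get-or-create client.restricted \
    mon 'allow r' \
    mds 'allow r, allow rw path=/restricted' \
    osd 'allow rw pool=cephfs_data'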
sly stolen from the Gluster FSAL):
> # If this flag is set to yes, a getattr is performed each time a
> readdir is done # if mtimes do not match, the directory is renewed.
> This will make the cache more # synchronous to the FSAL, but will
> strongly decrease the directory cache perfor
Sorry for raising this topic from the dead, but I'm having the same
issues with NFS-Ganesha reporting the wrong user/group information.
Do you maybe have a working ganesha.conf? I'm assuming I might have
misconfigured something in this file. It's also nice to ha
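Not a known-good configuration, just a minimal sketch of the FSAL_CEPH export
shape to compare against (the export id, paths and squash policy are
placeholders; squashing is one common cause of unexpected user/group mapping):

EXPORT {
    Export_Id = 1;
    Path = "/";
    Pseudo = "/cephfs";
    Access_Type = RW;
    Squash = No_Root_Squash;
    FSAL {
        Name = CEPH;
    }
}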
Awesome!
Just bought the paperback copy. The sample looked very good. Thanks!
Grt,
On Fri, 6 Feb 2015, Karan Singh wrote:
Hello Community Members
I am happy to introduce the first book on Ceph with the title "Learning Ceph".
Me and many folks f
On Fri, 6 Feb 2015, Gregory Farnum wrote:
On Fri, Feb 6, 2015 at 7:11 AM, Dennis Kramer (DT) wrote:
On Fri, 6 Feb 2015, Gregory Farnum wrote:
On Fri, Feb 6, 2015 at 6:39 AM, Dennis Kramer (DT)
wrote:
I've used the upstream module fo
On Wed, 11 Feb 2015, Wido den Hollander wrote:
On 11-02-15 12:57, Dennis Kramer (DT) wrote:
On Fri, 7 Nov 2014, Gregory Farnum wrote:
Did you upgrade your clients along with the MDS? This warning
indicates the
MDS asked the clients to boot some inodes out of cache and they have
taken
too
On Fri, 7 Nov 2014, Gregory Farnum wrote:
Did you upgrade your clients along with the MDS? This warning indicates the
MDS asked the clients to boot some inodes out of cache and they have taken
too long to do so.
It might also just mean that you're actively using more inodes at any given
time th
On Fri, 6 Feb 2015, Gregory Farnum wrote:
On Fri, Feb 6, 2015 at 6:39 AM, Dennis Kramer (DT) wrote:
I've used the upstream module for our production cephfs cluster, but I've
noticed a bug where timestamps aren't being updated correctly. Modified
files are being reset to the be
throughput when using this module instead of a re-export with the
kernel client. So I hope the VFS module will be actively maintained
again any time soon.
On Fri, 6 Feb 2015, Sage Weil wrote:
On Fri, 6 Feb 2015, Dennis Kramer (DT) wrote:
Hi,
Is the Samba VFS module for CephFS actively mai
Hi,
Is the Samba VFS module for CephFS actively maintained at this moment?
I haven't seen many updates in the ceph/samba git repo.
With regards,
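For context, the module is wired into a share roughly like this (the share
name, path and ceph user are placeholders; ceph:config_file and ceph:user_id
come from the vfs_ceph man page):

[cephfs]
    path = /
    vfs objects = ceph
    ceph:config_file = /etc/ceph/ceph.conf
    ceph:user_id = samba
    read only = no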
Hi Dmitry,
I've been using Ubuntu 14.04LTS + Icehouse with Ceph as a storage
backend for glance, cinder and nova (kvm/libvirt). I *really* would
love to see this patch land in the Juno cycle. It's been a real
performance issue because of the unnecessary re-copy
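For reference, RBD-to-RBD cloning (which avoids the re-copy) generally requires
raw images and exposing the image location in glance; a sketch with
illustrative values, using Icehouse/Juno-era option names:

# glance-api.conf
show_image_direct_url = True

# upload images in raw format so the RBD backends can clone them
glance image-create --name trusty-raw --disk-format raw --container-format bare --file trusty.raw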
Hi,
What do you recommend in case of a disk failure in this kind of
configuration? Are you bringing down the host when you replace the
disk and re-create the raid-0 for the replaced disk? I reckon that
linux doesn't automatically get the disk replacem
Hi all,
A couple of weeks ago I upgraded from Emperor to Firefly.
I'm using CloudStack with Ceph as the storage backend for VMs and templates.
Since the upgrade, Ceph is in a HEALTH_ERR with 500+ pgs inconsistent and
2000+ scrub errors. Not sure if it has to do with Firefly though, but
the u
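A hedged starting point for working through the scrub errors, one placement
group at a time (check the primary OSD's log for each error before repairing;
<pgid> is a placeholder):

ceph health detail | grep inconsistent   # list the inconsistent PGs
ceph pg repair <pgid>                    # repair one PG after reviewing its scrub errors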