On Mon, May 26, 2014 at 5:14 PM, Christian Balzer wrote:
>
> Hello,
>
> On Mon, 26 May 2014 10:28:12 +0200 Carsten Aulbert wrote:
>
>> Hi all
>>
>> first off, we have yet to start with Ceph (and other clustered file
>> systems other than QFS), therefore please consider me a total newbie
>> w.r.t t
On Tue, May 27, 2014 at 2:55 AM, Nulik Nol wrote:
> Hi, I am developing an application in C and it opens, reads, writes
> and closes many files, kind of like a custom database. I am using
> Linux's io_submit() system call, so all the read()/write() operations
> are asynchronous. But I don't know a
ter has no
> effect. No matter what combination of commands I try with the ceph-mds
> binary or via the ceph tool, I cannot make a second MDS start up and cause mds.1
> to leave resolve and move to the next step. Running with -debug_mds 10
> provides no really enlightening information, nor doe
On Fri, Jun 6, 2014 at 8:38 AM, David Jericho
wrote:
> Hi all,
>
>
>
> I did a bit of an experiment with multi-mds on firefly, and it worked fine
> until one of the MDS crashed when rebalancing. It's not the end of the
> world, and I could just start fresh with the cluster, but I'm keen to see if
e option
to benchmark multiple MDS is to disable the dynamic load balancer (make
MDBalancer::try_rebalance()
return at the very beginning) and use 'ceph mds tell \* export_dir
...' to statically distribute directories across multiple MDS.
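For example, a rough sketch (assuming export_dir takes a directory path and a destination rank; the paths and ranks below are just examples):

    # pin /dir1 to rank 0 and /dir2 to rank 1
    ceph mds tell \* export_dir /dir1 0
    ceph mds tell \* export_dir /dir2 1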
Regards
Yan, Zheng
> Cheers,
>
> -- Qing
>
On Mon, Jun 9, 2014 at 3:41 AM, Stuart Longland wrote:
> On 03/06/14 02:24, Mark Nelson wrote:
>> It's kind of a tough call. Your observations regarding the downsides of
>> using NFS with RBD are apt. You could try throwing another distributed
>> storage system on top of RBD and use Ceph for the
On Mon, Jun 9, 2014 at 3:49 PM, Liu Baogang wrote:
> Dear Sir,
>
> In our test, we use ceph firefly to build a cluster. On a node with kernel
> 3.10.xx, if using the kernel client to mount cephfs, when we use the 'ls' command,
> sometimes not all the files are listed. If using ceph-fuse 0.80.x, so far
> it
Were you using ceph-fuse or the kernel client? Which ceph version and kernel
version? How reliably can you reproduce this problem?
Regards
Yan, Zheng
On Sun, Jun 15, 2014 at 4:42 AM, Erik Logtenberg wrote:
> Hi,
>
> So... I wrote some files into that directory to test performance, and
> now I
I can't reproduce this locally. Please enable dynamic debugging for
ceph (echo module ceph +p > /sys/kernel/debug/dynamic_debug/control)
and send the kernel log to me.
Regards
Yan, Zheng
>
> Kind regards,
>
> Erik.
>
>
> On 06/19/2014 11:37 PM, Erik Logtenberg wrote:
>> I
getfacl: Removing leading '/' from absolute path names
> # file: ceph/sean
> # owner: scrosby
> # group: people
> user::rw-
> user:lucien:rw-
> user:jkahn:rw-
> group::---
> mask::rw-
> other::---
>
> Are there any outstanding bugs regarding CephFS and POSIX ACLs
Can you try the attached patch? It should solve this issue.
Regards
Yan, Zheng
On Thu, Jun 26, 2014 at 10:45 AM, Sean Crosby
wrote:
> Hi,
>
>
> On 26 June 2014 12:07, Yan, Zheng wrote:
>>
>> On Wed, Jun 25, 2014 at 2:56 PM, Sean Crosby
>> wrote:
>> > I
ls -al
> drwxr-xr-x 1 root root 0 29 jun 22:12 hoi
> drwxr-xr-x 1 root root 0 29 jun 22:16 hoi2
> # dmesg > /host1.log
Did you have POSIX ACL enabled? A bug in the POSIX ACL support code can
cause this issue.
Regards
Yan, Zheng
>
> On host2 I did:
>
> # echo mod
nal information?
Yes, I already have enough information. Thank you for reporting this.
Yan, Zheng
>
> Thanks,
>
> Erik.
>
>
> On 06/30/2014 05:13 AM, Yan, Zheng wrote:
>> On Mon, Jun 30, 2014 at 4:25 AM, Erik Logtenberg wrote:
>>> Hi Zheng,
>>>
There is a memory leak bug in the standby replay code; your issue is likely
caused by it.
Yan, Zheng
On Wed, Jul 9, 2014 at 4:49 PM, Florent B wrote:
> Hi all,
>
> I run a Firefly cluster with a MDS server for a while without any problem.
>
> I would like to setup a second one to
On Mon, Aug 11, 2014 at 10:34 PM, Micha Krause wrote:
> Hi,
>
> I'm trying to build a cephfs-to-nfs gateway, but somehow I can't mount the
> share if it is backed by cephfs:
>
>
> mount ngw01.ceph:/srv/micha /mnt/tmp/
> mount.nfs: Connection timed out
>
> cephfs mount on the gateway:
>
> 10.210.32.
Please first delete the old mds log, then run mds with "debug_mds = 15".
Send the whole mds log to us after the mds crashes.
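A minimal sketch of how that could be set (the log path and restart command are assumptions; adjust for your setup):

    # in ceph.conf, under the mds section:
    [mds]
        debug mds = 15

    # remove the old log, then restart the mds and wait for the crash
    rm /var/log/ceph/ceph-mds.*.log
    service ceph restart mds    # or whatever init command your distro uses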
Yan, Zheng
On Wed, Aug 27, 2014 at 12:12 PM, MinhTien MinhTien <
tientienminh080...@gmail.com> wrote:
> Hi Gregory Farmum,
>
> Thank you for
I suspect the client does not have permission to write to pool 3.
Could you check whether the contents of XXX.iso.2 are all zeros?
Yan, Zheng
On Wed, Aug 27, 2014 at 5:05 PM, Michael Kolomiets
wrote:
> Hi!
> I use ceph pool mounted via cephfs for cloudstack secondary storage
> and have pro
e same. If they are, then use "rados
-p <pool> ls" to check whether there is data in pool 3
Yan, Zheng
>
> root@lw01p01-mgmt01:/export/secondary# dd if=XXX.iso.2 bs=1M count=1000 |
> md5sum
> 2245e239a9e8f3387adafc7319191015 -
> 1000+0 records in
> 1000+0 records out
> 104
7: '--cluster'
> 2014-08-24 07:10:23.184296 7f2b575e7700 1 mds.-1.-1 8: 'ceph'
> 2014-08-24 07:10:23.274640 7f2b575e7700 1 mds.-1.-1 exe_path
> /usr/bin/ceph-mds
> 2014-08-24 07:10:23.606875 7f4c55abb800 0 ceph version 0.80.5
> (38b73c67d375a2552d8ed67843c8a6
which version of MDS are you using?
On Wed, Sep 3, 2014 at 10:48 PM, Florent Bautista wrote:
> Hi John and thank you for your answer.
>
> I "solved" the problem doing : ceph mds stop 1
>
> So one MDS is marked as "stopping". A few hours later, it is still
> "stopping" (active process, consuming C
unt seems to fix it.
>
Which version of the kernel do you use?
Yan, Zheng
>
> On Fri, Aug 29, 2014 at 11:26 AM, James Devine wrote:
>>
>> I am running active/standby and it didn't swap over to the standby. If I
>> shutdown the active server it swaps to the standby fine
On Fri, Sep 5, 2014 at 8:42 AM, James Devine wrote:
> I'm using 3.13.0-35-generic on Ubuntu 14.04.1
>
Was there any kernel message when the hang happened? We have fixed a
few bugs since the 3.13 kernel; please use a 3.16 kernel if possible.
Yan, Zheng
>
> On Thu, Sep 4, 2014 at 6:
On Fri, Sep 5, 2014 at 4:05 PM, Florent Bautista wrote:
> Firefly :) last release.
>
> After few days, second MDS is still "stopping" and consuming CPU
> sometimes... :)
Try restarting the stopping MDS and run "ceph mds stop 1" again.
>
> On 09/04/201
not.
The crash didn't happen immediately after 'ceph: mds0 caps stale', so I
have no idea what went wrong. Could you set up 'netconsole' to get a
call trace of the kernel crash?
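A rough netconsole sketch (the addresses, port, interface and MAC below are hypothetical; the receiver just needs something listening on UDP):

    # on the crashing client: stream kernel messages to 192.168.1.50:6666 via eth0
    modprobe netconsole netconsole=6665@192.168.1.10/eth0,6666@192.168.1.50/00:11:22:33:44:55

    # on the receiving machine: capture the console output
    nc -u -l 6666 | tee netconsole.log    # some netcat variants need: nc -u -l -p 6666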
Regards
Yan, Zheng
>
> Thanks,
>
> Chris
>
nt.ceph for the client. The cp command hangs for a
> very long time! When I restart the mds and cp again, it works well. But after
> some days, I again can't cp files from the ceph cluster.
Which kernel version? When the hang happens again, find the PID of cp and send
the content of /proc/<pid>/stack to us.
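Something like this should capture it (a small sketch, assuming a single hung cp process):

    # find the hung cp and dump its kernel stack
    pid=$(pgrep -x cp)
    cat /proc/$pid/stack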
On Sun, Mar 8, 2015 at 9:21 AM, Francois Lafont wrote:
> Hello,
>
> Thanks to Jcsp (John Spray I guess) that helps me on IRC.
>
> On 06/03/2015 04:04, Francois Lafont wrote:
>
>>> ~# mkdir /cephfs
>>> ~# mount -t ceph 10.0.2.150,10.0.2.151,10.0.2.152:/ /cephfs/ -o
>>> name=cephfs,secretfile=/etc/
ry on CephFS is empty.
>
> And metadata pool is 46 MB.
>
> Is it expected ? If not, how to debug this ?
The old mds does not work well in this area. Try unmounting the clients and
restarting the MDS.
Regards
Yan, Zheng
>
> Thank you.
On Sat, Mar 14, 2015 at 5:22 PM, Florent B wrote:
> Hi,
>
> What do you call "old MDS" ? I'm on Giant release, it is not very old...
>
> And I tried restarting both but it didn't solve my problem.
>
> Will it be OK in Hammer ?
>
> On 03/13/2015 04:2
This shouldn't happen. Which kernel are you using? Can you reproduce
it with ceph-fuse?
Regards
Yan, Zheng
>
> I could test this more; is there a command or proccess I can perform to
> flush the ceph-fuse cache?
>
> Thanks,
> Scott
>
>
> On Fri, Mar 13, 2015 at 1:49
0 b8089 1=1+0) (iversion lock) | dirtyparent=1 dirty=1 0x53c32c8]
> 2015-03-16 09:57:56.561404 7f417c9a1700 10 mds.0.cache unlisting
> unwanted/capless inode [inode 1a95e11 [2,head]
> /staging/api/easyrsa/vars auth v229 dirtyparent s=8089 n(v0 b8089 1=1+0)
> (iversion lock) | dirtyp
to speed up the mds?
Could you enable mds debugging for a few seconds (ceph daemon mds.x
config set debug_mds 10; sleep 10; ceph daemon mds.x config set
debug_mds 0) and upload /var/log/ceph/mds.x.log somewhere?
Regards
Yan, Zheng
>
> On Fri, Mar 27, 2015 at 4:50 PM, Gregory Farnum wrot
Which version of ceph/kernel are you using? Do you use ceph-fuse or the
kernel client, and what are the mount options?
Regards
Yan, Zheng
>
> Beeij
>
>
> On Mon, Mar 30, 2015 at 10:59 PM, Yan, Zheng wrote:
>> On Sun, Mar 29, 2015 at 1:12 AM, Barclay Jameson
>> wrote:
>>>
Could you try the newest development version of ceph (it includes
the fix), or apply the attached patch to the source of the giant release?
Regards
Yan, Zheng
>
> I may have found something.
> I did the build manually as such I did _NOT_ set up these config settings:
> filestore xattr use om
On Thu, Apr 9, 2015 at 7:09 AM, Scottix wrote:
> I was testing the upgrade on our dev environment and after I restarted the
> mds I got the following errors.
>
> 2015-04-08 15:58:34.056470 mds.0 [ERR] unmatched rstat on 605, inode has
> n(v70 rc2015-03-16 09:11:34.390905), dirfrags have n(v0 rc201
session map size increase
infinitely. Which version of the linux kernel are you using?
Regards
Yan, Zheng
>
> On Wed, Apr 15, 2015 at 4:16 PM, John Spray wrote:
>>
>> On 15/04/2015 20:02, Kyle Hutson wrote:
>>>
>>> I upgraded to 0.94.1 from 0.94 on Monday, and everything had
t has any hung mds request. (Check
/sys/kernel/debug/ceph/*/mdsc on the machine that contains the cephfs
mount, and see if there is any request whose ID is significantly smaller than
the other requests' IDs.)
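For example (a sketch; each cephfs mount gets its own directory under /sys/kernel/debug/ceph):

    # dump the outstanding MDS requests for every cephfs mount on this machine
    for d in /sys/kernel/debug/ceph/*; do
        echo "== $d"
        cat "$d/mdsc"
    done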
Regards
Yan, Zheng
> --
> Adam
>
> On Wed, Apr 15, 2015 at 8:02 PM, Yan, Zheng wrote:
>>
t machine to me (it should be in
/var/log/kernel.log or /var/log/messages).
To recover from the crash, you can either force-reset the machine that
contains the cephfs mount or add "mds wipe sessions = 1" to the mds section of
ceph.conf.
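A minimal ceph.conf sketch of the second option:

    [mds]
        # drop stale client sessions on startup so the mds can come back up
        mds wipe sessions = 1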
Regards
Yan, Zheng
> Thanks,
>
> Adam
>
> On Wed, Apr 15, 2015
On Wed, Apr 22, 2015 at 3:36 PM, Neville wrote:
> Just realised this never went to the group, sorry folks.
>
> Is it worth me trying the FUSE driver, is that likely to make a difference
> in this type of scenario? I'm still concerned whether what I'm trying to do
> with CephFS is even supposed to
On Sat, Apr 25, 2015 at 11:21 PM, François Lafont wrote:
> Hi,
>
> Gregory Farnum wrote:
>
>> The MDS will run in 1GB, but the more RAM it has the more of the metadata
>> you can cache in memory. The faster single-threaded performance your CPU
>> has, the more metadata IOPS you'll get. We haven't
On Mon, Apr 27, 2015 at 3:42 PM, Burkhard Linke
wrote:
> Hi,
>
> I've deployed ceph on a number of nodes in our compute cluster (Ubuntu 14.04
> Ceph Firefly 0.80.9). /ceph is mounted via ceph-fuse.
>
> From time to time some nodes loose their access to cephfs with the following
> error message:
>
e >
client_cache_size). Could you please run "mount -o remount <mount point>", then run the status command again and check if the number of pinned
dentries drops.
Regards
Yan, Zheng
>
> On Wed, Apr 29, 2015 at 12:19 PM Dexter Xiong wrote:
>>
>> I tried set client cache size = 100,
200020 7f9ad30a27c0 -1 fuse_parse_cmdline failed.
> ceph-fuse[2574]: mount failed: (22) Invalid argument.
>
> It seems that FUSE doesn't support remount? This link is google result.
>
please try "echo 3 > /proc/sys/vm/drop_caches". check if the pinned
dentries count
On Fri, May 8, 2015 at 11:15 AM, Dexter Xiong wrote:
> I tried "echo 3 > /proc/sys/vm/drop_caches" and dentry_pinned_count dropped.
>
> Thanks for your help.
>
could you please try the attached patch
usands or more ] seconds old, received at ...:
client_request(client.734537:23 ...) " in your ceph cluster log.
Regards
Yan, Zheng
could you try the attached patch
On Tue, May 19, 2015 at 5:10 PM, Markus Blank-Burian wrote:
> Forgot the attachments. Besides, is there any way to get the cluster
> running again without restarting all client nodes?
>
> On Tue, May 19, 2015 at 10:45 AM, Yan, Zheng wrote:
On Fri, May 22, 2015 at 6:14 AM, Erik Logtenberg wrote:
> Hi,
>
> Can anyone explain what the mount options nodcache and nofsc are for,
> and especially why you would want to turn these options on/off (what are
> the pros and cons either way?)
nodcache mount option make cephfs kernel driver not t
On Fri, May 22, 2015 at 1:57 PM, Francois Lafont wrote:
> Hi,
>
> Yan, Zheng wrote:
>
>> fsc means fs-cache. it's a kernel facility by which a network
>> filesystem can cache data locally, trading disk space to gain
>> performance improvements for access to slow
the kernel client bug should be fixed by
https://github.com/ceph/ceph-client/commit/72f22efb658e6f9e126b2b0fcb065f66ffd02239
I tried the 4.1 kernel and 0.94.2 ceph-fuse. Their performance is about the same.
fuse:
Files=191, Tests=1964, 60 wallclock secs ( 0.43 usr 0.08 sys + 1.16 cusr
0.65 csys = 2.32 CPU)
kernel:
Files=191, Tests=2286, 61 wallclock secs ( 0.45 usr 0.08 sys + 1.21 cusr
0.72 csys = 2.46 CPU)
> O
> On Jun 30, 2015, at 15:37, Ilya Dryomov wrote:
>
> On Tue, Jun 30, 2015 at 6:57 AM, Yan, Zheng wrote:
>> I tried 4.1 kernel and 0.94.2 ceph-fuse. their performance are about the
>> same.
>>
>> fuse:
>> Files=191, Tests=1964, 60 wallclock secs ( 0.
> On Jul 1, 2015, at 00:34, Dan van der Ster wrote:
>
> On Tue, Jun 30, 2015 at 11:37 AM, Yan, Zheng wrote:
>>
>>> On Jun 30, 2015, at 15:37, Ilya Dryomov wrote:
>>>
>>> On Tue, Jun 30, 2015 at 6:57 AM, Yan, Zheng wrote:
>>>> I tried
On Tue, Aug 4, 2015 at 9:40 AM, Goncalo Borges
wrote:
> Hey John...
>
> First of all. thank you for the nice talks you have been giving around.
>
> See the feedback on your suggestions bellow, plus some additional questions.
>
>> However, please note that in my example I am not doing only deletion
On Tue, Aug 4, 2015 at 12:42 PM, Goncalo Borges
wrote:
> Hi Yan Zheng
>
>
>>
>>
>>
>> Now my questions:
>>
>> 1) I performed all of this testing to understand what would be the minimum
>> size (reported by df) of a file of 1 char and I am still
On Fri, Aug 7, 2015 at 3:41 PM, Goncalo Borges
wrote:
> Hi All...
>
> I am still fighting with this issue. It may be something which is not
> properly implemented, and if that is the case, that is fine.
>
> I am still trying to understand what is the real space occupied by files in
> a /cephfs fil
On Sun, Aug 9, 2015 at 8:57 AM, Hadi Montakhabi wrote:
> I am using fio.
> I use the kernel module to Mount CephFS.
>
please send fio job file to us
> On Aug 8, 2015 10:52 AM, "Ketor D" wrote:
>
>> Hi Haidi,
>> Which bench tool do you use? And how you mount CephFS, ceph-fuse or
>> kern
HAS3YEkpHTj6CSQg8u4hk+jHBasejQNLDc9/KYkYVQ=
Could you use gdb to check where the crash happened? (gdb
/usr/local/bin/ceph-mds /core.x; you may need to re-compile the mds
with debuginfo.)
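For example (a sketch; the core file path is whatever your system actually produced):

    # load the core into gdb and print the backtrace of the crashed thread
    gdb /usr/local/bin/ceph-mds /path/to/core
    (gdb) bt
    (gdb) thread apply all bt    # backtraces of all threads, if needed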
Yan, Zheng
>
> cat /sys/kernel/debug/ceph/*/mdsc output:
>
> https://www.zerobi
MDS is back up and running you can try enabling
>> the mds_bal_frag setting.
>>
>> This is not a use case we have particularly strong coverage of in our
>> automated tests, so thanks for your experimentation and persistence.
>>
>> John
>>
>>>
>
3DAQAts8_TYXWrLh2FhGHcb7oC4uuhr2T8
>
Please try setting mds_reconnect_timeout to 0. It should make your MDS
able to recover, but this config will make client mounts unusable
after the MDS recovers.
Besides, please use a recent client kernel such as 4.0, or use ceph-fuse.
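For example (a sketch; set it either at runtime via the admin socket or in ceph.conf before restarting the mds; 'mds.a' is a placeholder name):

    # at runtime
    ceph daemon mds.a config set mds_reconnect_timeout 0

    # or persistently in ceph.conf
    [mds]
        mds reconnect timeout = 0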
> thanks,
> Bob
OCKSIZE}
> numjobs=1
> iodepth=1
> invalidate=1
>
I just tried the 4.2-rc kernel and everything went well. Which version of the kernel
were you using?
>
> On Sun, Aug 9, 2015 at 9:27 PM, Yan, Zheng wrote:
>
>>
>> On Sun, Aug 9, 2015 at 8:57 AM, Hadi Montakhabi wrote:
t mds cache size to a number greater than the number of files in the
fs, it requires lots of memory.
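A sketch of that setting (the value is a placeholder; size it to your file count and watch the mds memory usage):

    [mds]
        # number of inodes/dentries the mds may keep cached
        mds cache size = 4000000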
Yan, Zheng
>
> thanks again for the help.
>
> thanks,
> Bob
>
12, 2015 at 11:12 PM, Hadi Montakhabi wrote:
> 4.0.6-300.fc22.x86_64
>
> On Tue, Aug 11, 2015 at 10:24 PM, Yan, Zheng wrote:
>
>> On Wed, Aug 12, 2015 at 5:33 AM, Hadi Montakhabi wrote:
>>
>>>
>>> [sequential read]
>>> readwrite=read
ti-user Windows environment.
>
libcephfs does not support ACL. I have an old patch that adds ACL
support to samba's vfs ceph module, but haven't tested it carefully.
Yan, Zheng
> Thanks,
> Eric
The code is at https://github.com/ceph/samba.git wip-acl. So far the
code does not handle default ACL (files created by samba do not
inherit parent directory's default ACL)
Regards
Yan, Zheng
On Tue, Aug 18, 2015 at 6:57 PM, Gregory Farnum wrote:
> On Mon, Aug 17, 2015 at 4:12 AM, Ya
denied
>
> Oh, this might be a kernel bug, failing to ask for mdsmap updates when
> the connection goes away. Zheng, does that sound familiar?
> -Greg
This seems like a reconnect timeout. You can try enlarging the mds_reconnect_timeout
config option.
Which version of kernel are you using?
Yan
R_KEEPALIVE2. So the kernel client couldn't reliably
detect that the network cable had been unplugged. It kept waiting for
new events from the disconnected connection.
Regards
Yan, Zheng
>>
>> 10.15.0.3 was the active MDS at the time I unplugged the Ethernet cable.
>>
/ceph/ceph-client/commit/33b68dde7f27927a7cb1a7691e3c5b6f847ffd14
<https://github.com/ceph/ceph-client/commit/33b68dde7f27927a7cb1a7691e3c5b6f847ffd14>.
Yes, you should try ceph-fuse if this bug causes problems for you.
Regards
Yan, Zheng
>
> Simon
>
>> -Original Message-
>> From: Gre
ed in the 3.11 kernel by commit ccca4e37b1 (libceph:
fix truncate size calculation). We don't backport cephfs bug fixes to
old kernels.
Please update the kernel or use ceph-fuse.
Regards
Yan, Zheng
> Best regards,
> Tobi
>
the same (incarnations are the same). When the OSD receives MDS
requests for the newly created FS, it silently drops the requests
because it thinks they are duplicates. You can get around the bug by
creating new pools for the newfs.
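A sketch of the workaround (pool names and pg counts below are placeholders; check the newfs/fs-creation command for your ceph version before pointing it at the new pools):

    # create fresh data and metadata pools so the MDS requests hit new pool ids
    ceph osd pool create cephfs_data2 128
    ceph osd pool create cephfs_metadata2 128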
Regards
Yan, Zheng
>
>Regards,
>
> Oliver
On Wed, Sep 11, 2013 at 9:12 PM, Yan, Zheng wrote:
> On Wed, Sep 11, 2013 at 7:51 PM, Oliver Daudey wrote:
>> Hey Gregory,
>>
>> I wiped and re-created the MDS-cluster I just mailed about, starting out
>> by making sure CephFS is not mounted anywhere, stopping all M
On Wed, Sep 11, 2013 at 10:06 PM, Oliver Daudey wrote:
> Hey Yan,
>
> On 11-09-13 15:12, Yan, Zheng wrote:
>> On Wed, Sep 11, 2013 at 7:51 PM, Oliver Daudey wrote:
>>> Hey Gregory,
>>>
>>> I wiped and re-created the MDS-cluster I just mailed about, sta
est it for you.
>
>
Here is the patch, thanks for testing.
---
commit e42b371cc83aa0398d2c288d7a25a3e8f3494afb
Author: Yan, Zheng
Date: Thu Sep 12 09:50:51 2013 +0800
mon/MDSMonitor: don't reset incarnation when creating newfs
Signed-off-by: Yan, Zheng
diff --g
kport the fix to the 3.11 and 3.10 kernels soon. Please use ceph-fuse at
present.
Regards
Yan, Zheng
>
> Any ideas?
> cheers
> jc
>
> root@ineri ~$ ndo all_nodes grep instances /etc/fstab
> h0
> [2001:620:0:6::106]:6789,[2001:620:0:6::10e]:6789,[2001:620:0:6::10c]:6789:/
For cephfs, the size reported by 'ls -s' is the same as file size. see
http://ceph.com/docs/next/dev/differences-from-posix/
Regards
Yan, Zheng
On Mon, Sep 16, 2013 at 5:12 PM, Jens-Christian Fischer
wrote:
>
> Hi all
>
> as part of moving our OpenStack VM instance stor
outdated. You can increase the pg count by:
#ceph osd pool set <pool> pg_num xxx
#ceph osd pool set <pool> pgp_num xxx
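For example (a sketch; 'data' and 256 are placeholders, and pg_num can only be increased):

    ceph osd pool set data pg_num 256
    ceph osd pool set data pgp_num 256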
Yan, Zheng
-28 == -ENOSPC (No space left on device). I think it is due to the
fact that some osds are near full.
Yan, Zheng
On Mon, Sep 30, 2013 at 10:39 PM, Eric Eastman wrote:
> I have 5 RBD kernel based clients, all using kernel 3.11.1, running Ubuntu
> 1304, that all failed with a wri
On Mon, Sep 30, 2013 at 11:50 PM, Eric Eastman wrote:
>
> Thank you for the reply
>
>
>> -28 == -ENOSPC (No space left on device). I think it's is due to the
>
> fact that some osds are near full.
>>
>>
>> Yan, Zheng
>
>
> I thought that ma
vice
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
>
> Use `umount -l` to force umount.
`umount -l` is not a force umount; it just detaches the fs from the hierarchy.
The fs stays mounted internally in the kernel if there are still references.
I'm not surprised by the fs errors if you use `u
I solved this problem.
>
> By the way! I found that when I typed
> # ceph osd dump
> ...
> max_mds 1
> ...
>
> Does the field max_mds cause this problem? In my ceph.conf, the setting
> max_mds, whose value is 2, is in the [global]
r and subdirectories of the
>> directory can be used (and still are currently being used by VM's still
>> running from it). It's being mounted in a mixed kernel driver (ubuntu) and
>> centos (ceph-fuse) environment.
kernel, ceph-fuse and ceph-mds version? the hang
On Thu, Oct 24, 2013 at 5:43 PM, Michael wrote:
> On 24/10/2013 03:09, Yan, Zheng wrote:
>>
>> On Thu, Oct 24, 2013 at 6:44 AM, Michael
>> wrote:
>>>
>>> Tying to gather some more info.
>>>
>>> CentOS - hanging ls
>>> [root@srv ~]
On Thu, Oct 24, 2013 at 9:13 PM, Michael wrote:
> On 24/10/2013 13:53, Yan, Zheng wrote:
>>
>> On Thu, Oct 24, 2013 at 5:43 PM, Michael
>> wrote:
>>>
>>> On 24/10/2013 03:09, Yan, Zheng wrote:
>>>>
>>>> On Thu, Oct 24, 2013 at 6:44
On Sat, Oct 26, 2013 at 2:05 AM, Gregory Farnum wrote:
> Are you sure you're using only CephFS? Do you have any snapshots?
> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com
>
>
> On Fri, Oct 25, 2013 at 2:59 AM, Miguel Afonso Oliveira
> wrote:
>> Hi,
>>
>> I have a recent cep
On Sat, Oct 26, 2013 at 2:05 AM, Gregory Farnum wrote:
> Are you sure you're using only CephFS? Do you have any snapshots?
> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com
>
>
> On Fri, Oct 25, 2013 at 2:59 AM, Miguel Afonso Oliveira
> wrote:
>> Hi,
>>
>> I have a recent cep
s /disks/ show_layout
>
> Layout.data_pool: 9
>
>
>
> Great my layout is now 9 so my qcow2 pool but :
>
> Df-h | grep disks, shows the entire cluster size not only 900Gb why ? is it
> normal ? or am I doing something wrong ?
cephfs does not support any type of quota, df alwa
I first added to all my ceph.conf
> files:
>
>filestore zfs_snap = 1
>journal_aio = 0
>journal_dio = 0
There is no need to disable journal dio/aio if the journal is not on ZFS.
Regards
Yan, Zheng
>
> I then created the OSD with the commands:
>
> # ceph osd create
> 4
>
t enough then the
> problem seems to disappear.
>
>
Sounds like the d_prune_aliases() bug. Please try updating to a 3.12 kernel or
using ceph-fuse.
Yan, Zheng
> Any suggestions are welcome
> Atte,
>
> --
>
> *Alphé Salas*
> Ingeniero T.I
>
> [image: Descripción: cid:image00
ly"
>
> I wonder what "fully" means, and where is Ceph now on the road to "fully"
> supporting fancy striping. Could anyone please enlighten me?
The kernel ceph clients (rbd and cephfs) don't support "fancy
striping". Only user space client su
e cephfs mount point and check if there are
missing entries.
Regards
Yan, Zheng
>
> Alphe Salas
> I.T ingeneer
>
>
> On 11/22/13 10:15, Alphe Salas Michels wrote:
>>
>> Hello Yan,
>> Good guess ! thank you for your advice I updated this morning my
>> cephfs-p
On Mon, Dec 2, 2013 at 9:03 PM, Miguel Afonso Oliveira
wrote:
> Hi,
>
> Sorry for the very late reply. I have been trying a lot of things...
>
>
> On 25/10/13 22:40, Yan, Zheng wrote:
>>
>> On Sat, Oct 26, 2013 at 2:05 AM, Gregory Farnum wrote:
>>>
On Tue, Dec 3, 2013 at 4:00 PM, Miguel Afonso Oliveira
wrote:
>
>> If your issue is caused by the bug I presume, you need to use the
>> newest client (0.72 ceph-fuse or 3.12 kernel)
>>
>> Regards
>> Yan, Zheng
>
>
> Hi,
>
> We are running 0.72
On Wed, Dec 4, 2013 at 8:11 PM, Miguel Oliveira
wrote:
>
>
>> On 4 Dec 2013, at 04:37, "Yan, Zheng" wrote:
>>
>> On Tue, Dec 3, 2013 at 4:00 PM, Miguel Afonso Oliveira
>> wrote:
>>>
>>>> If your issue is caused by the bug I presume, y
gure out what's wrong from the short description. But please try a
3.17 kernel; it contains fixes for several bugs that can cause hangs.
Regards
Yan, Zheng
>
> My clients are Ubuntu 14.04 with kernel 3.13.0-24-generic
>
> And my servers are CentOS 6.5 with kernel 2.6.32-431.23.
le are async operations, but truncating a file is a sync
operation)
Regards
Yan, Zheng
> On Tue, Oct 21, 2014 at 3:32 PM, Sergey Nazarov wrote:
>> Ouch, I think client log is missing.
>> Here it goes:
>> https://www.dropbox.com/s/650mjim2ldusr66/ceph-client.admin.log.gz?dl=0
>
It's not a problem. Each file has an authority MDS.
Yan, Zheng
> thanks,
>
> -lorieri
>
>
>
> On Tue, Nov 4, 2014 at 9:47 PM, Shain Miley wrote:
>> +1 for fsck and snapshots, being able to have snapshot backups and protect
>> against accidental deletion, etc
On Sat, Jan 17, 2015 at 11:47 AM, Lindsay Mathieson
wrote:
> On Fri, 16 Jan 2015 08:48:38 AM Wido den Hollander wrote:
>> In Ceph world 0.72.2 is ancient en pretty old. If you want to play with
>> CephFS I recommend you upgrade to 0.90 and also use at least kernel 3.18
>
> Does the kernel version
On Fri, Jan 16, 2015 at 1:29 AM, Daniel Takatori Ohara
wrote:
> Hi,
>
> I have a problem removing one file in cephfs. With the ls command, all the
> attributes show as ???.
>
> ls: cannot access refseq/source_step2: No such file or directory
> total 0
> drwxrwxr-x 1 dtohara BioInfoHSL Users
On Wed, Jan 28, 2015 at 2:48 PM, 于泓海 wrote:
> Hi:
>
> I have completed the installation of ceph cluster,and the ceph health is
> ok:
>
> cluster 15ee68b9-eb3c-4a49-8a99-e5de64449910
> health HEALTH_OK
> monmap e1: 1 mons at {ceph01=10.194.203.251:6789/0}, election epoch 1,
> quor
On Wed, Jan 28, 2015 at 10:35 PM, Yan, Zheng wrote:
> On Wed, Jan 28, 2015 at 2:48 PM, 于泓海 wrote:
>> Hi:
>>
>> I have completed the installation of ceph cluster,and the ceph health is
>> ok:
>>
>> cluster 15ee68b9-eb3c-4a49-8a99-e5de64449910
On Fri, Dec 6, 2013 at 6:08 AM, Miguel Oliveira
wrote:
>> How do you mount cephfs, use ceph-fuse or kernel driver?
>>
>> Regards
>> Yan, Zheng
>
> I use ceph-fuse.
>
Looks like the issue is not caused by the bug I presume. Could you
please run following co