On Sun, Dec 6, 2015 at 7:01 AM, Don Waterloo wrote:
> Thanks for the advice.
>
> I dumped the filesystem contents, then deleted the cephfs, deleted the
> pools, and recreated from scratch.
>
> I did not track the specific issue in fuse, sorry. It gave an endpoint
> disconnected message. I will next time.
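(For anyone following along, the teardown and recreate described above roughly
corresponds to the commands below; the filesystem and pool names are
placeholders, not necessarily the ones used here.)

  # All MDS daemons must be stopped before the filesystem can be removed.
  ceph fs rm cephfs --yes-i-really-mean-it
  ceph osd pool delete cephfs_data cephfs_data --yes-i-really-really-mean-it
  ceph osd pool delete cephfs_metadata cephfs_metadata --yes-i-really-really-mean-it

  # Recreate the pools and the filesystem from scratch.
  ceph osd pool create cephfs_metadata 64
  ceph osd pool create cephfs_data 256
  ceph fs new cephfs cephfs_metadata cephfs_data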
Hi Loic,
output is:
/dev:
total 0
crw------- 1 root root 10, 235 Dec  2 17:02 autofs
drwxr-xr-x 2 root root    1000 Dec  2 17:02 block
drwxr-xr-x 2 root root      60 Dec  2 17:02 bsg
crw------- 1 root root 10, 234 Dec  5 06:29 btrfs-control
drwxr-xr-x 3 root root      60 D
kernel driver. One node is 4.3 kernel (ubuntu wily mainline) and one is 4.2
kernel (ubuntu wily stock)
I don't believe inline data is enabled (nothing in ceph.conf, nothing in
fstab).
It's mounted like this:
10.100.10.60,10.100.10.61,10.100.10.62:/ /cephfs ceph
_netdev,noauto,noatime,x-systemd.r
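For context, a complete kernel-client fstab entry usually looks something like
the line below; the auth name and secretfile options here are assumptions, not
copied from this setup:

  # example /etc/fstab entry for the CephFS kernel client
  10.100.10.60,10.100.10.61,10.100.10.62:/  /cephfs  ceph  _netdev,noauto,noatime,name=admin,secretfile=/etc/ceph/admin.secret  0 0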
> -----Original Message-----
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Ilja Slepnev
> Sent: 05 December 2015 19:45
> To: Blair Bethwaite
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] CephFS and single threaded RBD read performance
>
> Hi Blairo,
>
I didn't test that yet, but I guess we have to edit and inject a new
monmap...
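For what it's worth, the usual recovery procedure looks roughly like the sketch
below (untested here; the monitor names a/b/c and the /tmp path are
placeholders):

  # On the surviving monitor host, stop the daemon first.
  systemctl stop ceph-mon@a

  # Extract the current monmap from the surviving monitor's store.
  ceph-mon -i a --extract-monmap /tmp/monmap

  # Drop the two dead monitors from the map.
  monmaptool /tmp/monmap --rm b --rm c

  # Inject the edited map and start the monitor again.
  ceph-mon -i a --inject-monmap /tmp/monmap
  systemctl start ceph-mon@a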
On 3 December 2015 01:35:29 CET, Sam Huracan wrote:
>Hi,
>
>My Mon quorum includes 3 nodes. If 2 nodes fail unexpectedly, how could
>I recover the system from the 1 node that is left?
>
>Thanks and regards.
>
>
>--
On Mon, Dec 7, 2015 at 12:52 AM, Don Waterloo wrote:
> kernel driver. One node is 4.3 kernel (ubuntu wily mainline) and one is 4.2
> kernel (ubuntu wily stock)
>
> I don't believe inline data is enabled (nothing in ceph.conf, nothing in
> fstab).
>
> It's mounted like this:
>
> 10.100.10.60,10.100
On Mon, Dec 7, 2015 at 10:51 AM, Wuxiangwei wrote:
> Hi, Everyone
>
> Recently I'm trying to figure out how to use ceph-fuse. If we mount cephfs as
> the kernel client, there is a 'cephfs' command tool (though it seems to be
> 'deprecated') with 'map' and 'show_location' commands to show the RAD
Thanks Yan, what if we want to see some more specific or detailed information?
E.g. with cephfs we may run 'cephfs /mnt/a.txt show_location --offset' to find
the location of a given offset.
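For reference, roughly the same information can be obtained without the
deprecated tool by computing the object name from the inode and offset and
asking the cluster directly; this is only a sketch and assumes the default
4 MB object size and a data pool named cephfs_data:

  # 1. Get the file's inode number and convert it to hex.
  ino=$(stat -c %i /mnt/a.txt)
  hexino=$(printf '%x' "$ino")

  # 2. Work out which object holds a given byte offset (offset / object size).
  offset=123456789
  objno=$(printf '%08x' $((offset / 4194304)))

  # 3. Ask the cluster where that object lives (PG and acting OSDs).
  ceph osd map cephfs_data "${hexino}.${objno}"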
---
Wu Xiangwei
Tel : 0571-86760875
2014 UIS 2, TEAM BORE
-----Original Message-----
From: Yan, Zhe
Thank you all,
So using a cache is a solution for decreasing latency.
Ceph has built-in cache tiering. With my Ceph system serving approximately
100 VMs simultaneously, up to a maximum of 700 VMs, including SQL VMs,
which is the most efficient cache solution for me?
Thanks and regards.
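In case it helps, the built-in tiering is wired up roughly like this; the pool
names and the size threshold below are placeholders, not a sizing
recommendation for this VM workload:

  # Create a (typically SSD-backed) cache pool and put it in front of the RBD pool.
  ceph osd pool create cache-pool 128
  ceph osd tier add rbd cache-pool
  ceph osd tier cache-mode cache-pool writeback
  ceph osd tier set-overlay rbd cache-pool

  # The cache tier needs a hit set and a size limit before it will flush/evict.
  ceph osd pool set cache-pool hit_set_type bloom
  ceph osd pool set cache-pool target_max_bytes 1099511627776   # 1 TiB, placeholder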
2015-12-04 3:22 GMT+0