On Fri, May 22, 2015 at 1:57 PM, Francois Lafont wrote:
> Hi,
>
> Yan, Zheng wrote:
>
>> fsc means fs-cache. it's a kernel facility by which a network
>> filesystem can cache data locally, trading disk space to gain
>> performance improvements for access to slow networks and media. cephfs
>> does not use fs-cache by default.
On 5/21/15, 5:04 AM, "Blair Bethwaite" wrote:
>Hi Warren,
>
>On 20 May 2015 at 23:23, Wang, Warren wrote:
>> We've contemplated doing something like that, but we also realized that
>> it would result in manual work in Ceph every time we lose a drive or
>> server,
>> and a pretty bad experience for the customer when we have to do
>> maintenance.
Hi,
Yan, Zheng wrote:
> fsc means fs-cache. it's a kernel facility by which a network
> filesystem can cache data locally, trading disk space to gain
> performance improvements for access to slow networks and media. cephfs
> does not use fs-cache by default.
So enabling this option can improve performance?
On Fri, May 22, 2015 at 6:14 AM, Erik Logtenberg wrote:
> Hi,
>
> Can anyone explain what the mount options nodcache and nofsc are for,
> and especially why you would want to turn these options on/off (what are
> the pros and cons either way?)
The nodcache mount option makes the cephfs kernel driver not use the dcache
contents to satisfy lookups and readdir.
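For a concrete picture, here is a rough sketch of how those options appear on the
mount command line (monitor address, mount point and keyring details are placeholders):

    # default: dcache is used, fs-cache is not
    mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
    # opt in to local caching via fs-cache (needs a kernel with CONFIG_CEPH_FSCACHE and a running cachefilesd)
    mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret,fsc
    # tell the client not to use the dcache
    mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret,nodcache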
To be sure I understand: if I create a 2x replicated pool toto with 1024
PGs and 1 PGP, the PGs and data of pool toto will be mapped onto only 2 OSDs and on 2
servers, right?
> On 21 May 2015, at 18:58, Florent MONTHEL wrote:
>
> Thanks Ilya for this clear explanation!
> I've been searching for that for a long time.
Thanks Ilya for this clear explanation!
I've been searching for that for a long time.
Best practice is to have pg = pgp in order to "avoid" using the same set of
OSDs, right? (On a small cluster you will have)
> On 21 May 2015, at 07:49, Ilya Dryomov wrote:
>
>> On Thu, May 21, 20
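A quick sketch of how to check and align the two values on an existing pool
(pool name and number here are only examples):

    ceph osd pool get rbd pg_num
    ceph osd pool get rbd pgp_num
    # if pgp_num is lower, raise it to match pg_num:
    ceph osd pool set rbd pgp_num 1024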
Hi,
Can anyone explain what the mount options nodcache and nofsc are for,
and especially why you would want to turn these options on/off (what are
the pros and cons either way?)
Thanks,
Erik.
Hi,
I misread your initial question and did not notice the *.$host. AFAIK it has
never worked this way, even with Emperor. If you want to change the
configuration of a daemon running on the same host you're running the command
from, you can probably use something like
ceph daemon osd.0 config set osd
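For example, something along these lines against the local admin socket (the daemon
id, option and value are only illustrative):

    ceph daemon osd.0 config set osd_max_backfills 1
    ceph daemon osd.0 config get osd_max_backfills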
CephFS is, I believe, not very production-ready. Use production-quality
clustered filesystems, or consider using NFS or Samba shares.
The exact setup depends on what you need.
Cheers, Vasily.
On Thu, May 21, 2015 at 6:47 PM, gjprabu wrote:
> Hi Angapov,
>
> I have seen the below message in ceph off
Not supported at the moment, but it is in the eventual plans and I
think some of the code has been written such that it will help
facilitate the development.
Robert LeBlanc
/usr/bin/ceph -f json --cluster ceph tell *.mds01 injectargs --
--mon_osd_min_down_reports=26
2015-05-21 17:52:14.476099 7f03375e7700 -1 WARNING: the following
dangerous and experimental features are enabled: keyvaluestore
2015-05-21 17:52:14.497399 7f03375e7700 -1 WARNING: the following
dangerous and experimental features are enabled: keyvaluestore
Hi,
It should work. Could you copy/paste the command you run and its output?
Cheers
On 21/05/2015 17:34, Kenneth Waegeman wrote:
> Hi,
>
> We're using ceph tell in our configuration system since emperor, and before
> we could run 'ceph tell *.$host injectargs -- ...', and while I'm honestly
> not completely sure anymore that this did all I think it did
Hi,
We're using ceph tell in our configuration system since emperor, and
before we could run 'ceph tell *.$host injectargs -- ...', and while
I'm honestly not completely sure anymore that this did all I think it
did, it exited cleanly and I *suppose* it injected the config into all the
daemons
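As a stop-gap, here is a rough sketch of doing the same thing per host through the
admin sockets (the socket path assumes the default /var/run/ceph naming, and the
option and value are only examples):

    # inject a setting into every daemon running on the local host
    for sock in /var/run/ceph/ceph-*.asok; do
        ceph --admin-daemon "$sock" config set mon_osd_min_down_reports 26
    done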
Hi,
Some strange issue wrt boolean values in the config:
this works:
osd_crush_update_on_start = 0 -> osd not updated
osd_crush_update_on_start = 1 -> osd updated
In a previous version we could set boolean values in the ceph.conf file
with the integers 1 (true) and 0 (false), also for
mon_clust
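For reference, a small ceph.conf excerpt showing both spellings (whether both forms
are accepted may depend on the Ceph version, so treat this as a sketch):

    [osd]
    osd crush update on start = false
    # older configs often used the integer form instead:
    # osd crush update on start = 0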
You are not able to mount a normal FS on two hosts at once; to do this the
filesystem needs to be a clustered filesystem.
Doing the below with something like XFS or EXT4 will just result in corruption.
If you need to mount something like XFS or EXT4 and have it fail over between two
machines, then you nee
Hi, Prabu!
This behavior is expected because you are using a non-clustered filesystem
(ext4 or xfs or whatever), which is not meant to be mounted on multiple
hosts at the same time.
What's more - you can lose data by doing this. That's the nature of
local filesystems.
So if you need to acc
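One common way around this, sketched very roughly (paths, network range and the
choice of NFS are just an example, not the only option):

    # on server A, the only host that maps and mounts the image directly
    rbd map foo
    mount /dev/rbd0 /export/foo
    echo '/export/foo 192.168.0.0/24(rw,sync,no_root_squash)' >> /etc/exports
    exportfs -ra

    # on server B, consume it over NFS instead of mapping the rbd again
    mount -t nfs serverA:/export/foo /mnt/foo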
Hi All,
We are using rbd and map the same rbd image to an rbd device on two
different clients, but I can't see the data until I umount and mount the
partition again. Kindly share the solution for this issue.
Example
create rbd image named foo
map foo to /dev/rbd0 on server A, mount /dev/rbd0
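Roughly, the commands behind that example would look like this (the image size and
filesystem are assumptions):

    rbd create foo --size 10240
    # server A
    rbd map foo
    mkfs.ext4 /dev/rbd0
    mount /dev/rbd0 /mnt
    # server B - this second independent mount is what leads to the stale view
    rbd map foo
    mount /dev/rbd0 /mnt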
On 05/21/2015 02:36 PM, Brad Hubbard wrote:
> If that's correct then starting from there and building a new RPM
> with RBD support is the proper way of updating. Correct?
I guess there are two ways to approach this.
1. use the existing ceph source rpm here.
http://ceph.com/packages/ceph-extras/
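Very roughly, the rebuild path would look something like this (the package name and
the spec change are placeholders, not exact instructions):

    rpm -ivh <package>.src.rpm
    # edit ~/rpmbuild/SPECS/<package>.spec to enable RBD support, then:
    rpmbuild -ba ~/rpmbuild/SPECS/<package>.spec
    rpm -Uvh ~/rpmbuild/RPMS/x86_64/<package>-*.rpm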
On 05/21/2015 09:36 PM, Brad Hubbard wrote:
> On 05/21/2015 03:39 PM, Georgios Dimitrakakis wrote:
>> Hi Brad!
>> Thanks for pointing out that for CentOS 6 the fix is included! Good to know
>> that!
> No problem.
>> But I think that the original package doesn't support RBD by default so it has
>> to be bui
On 21/05/2015 13:49, Ilya Dryomov wrote:
> On Thu, May 21, 2015 at 12:12 PM, baijia...@126.com wrote:
>> Re: what's the difference between pg and pgp?
>
> pg-num is the number of PGs, pgp-num is the number of PGs that will be
> considered for placement, i.e. it's the pgp-num value that is used by
> CRUSH, not pg-num.
On Thu, May 21, 2015 at 12:12 PM, baijia...@126.com wrote:
> Re: what's the difference between pg and pgp?
pg-num is the number of PGs, pgp-num is the number of PGs that will be
considered for placement, i.e. it's the pgp-num value that is used by
CRUSH, not pg-num. For example, consider pg-num
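A concrete sketch of the effect (pool name borrowed from elsewhere in the thread,
numbers made up):

    # 1024 PGs, but only 1 of them considered for placement
    ceph osd pool create toto 1024 1
    ceph osd pool get toto pg_num    # -> 1024
    ceph osd pool get toto pgp_num   # -> 1
    # until pgp_num is raised, all PGs are placed as if there were a single PG,
    # so the pool's data ends up on one set of OSDs; spread it out with:
    ceph osd pool set toto pgp_num 1024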
On 05/21/2015 03:39 PM, Georgios Dimitrakakis wrote:
> Hi Brad!
> Thanks for pointing out that for CentOS 6 the fix is included! Good to know
> that!
No problem.
> But I think that the original package doesn't support RBD by default so it has
> to be built again, am I right?
I have not looked at
On 21/05/2015 11:12, baijia...@126.com wrote:
>
>
>
> baijia...@126.com
Hi,
weird question...
There's no relationship at all... Oh, yes, a single letter ;-)
pg stands for placement group in ceph storage
pgp stands for Pre
Hi,
I'm using the Ceph Giant version on 5 nodes. Ceph is used for the Glance,
Cinder and Nova compute services.
Last week I upgraded libvirt, qemu and the kernel of the Ubuntu 14.04 machines;
since then libvirt has started complaining about "Invalid relative path" in
the logs. Also storage performance on the v
Hello,
Is it possible to use the rados_clone_range() librados API call with an erasure
coded pool? The documentation doesn't mention it's not possible. However,
running the clonedata command from the rados utility (which seems to be calling
rados_clone_range()) gives an error when using an eras
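For context, roughly the kind of test being described (pool and object names are
hypothetical, and the exact clonedata syntax may vary between releases):

    ceph osd pool create ecpool 12 12 erasure
    rados -p ecpool put srcobj /tmp/somefile
    rados -p ecpool clonedata srcobj dstobj   # this step errors out on an EC pool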
Hi Warren,
On 20 May 2015 at 23:23, Wang, Warren wrote:
> We've contemplated doing something like that, but we also realized that
> it would result in manual work in Ceph every time we lose a drive or
> server,
> and a pretty bad experience for the customer when we have to do
> maintenance.
Yeah
Hi!
1) XFS fragmentation is not very high - from 6 to 10-12%; one OSD has 19%. Are these
values too high, enough to badly influence performance?
2) About 32/48 GB RAM. The cluster was created from slightly old HW, mostly on the Intel
5520 platform
- 3 nodes are the Intel SR2612URR platform,
- 1 node - Supermicro 6026T-