It appears that with --apparent-size, du adds the "size" of the
directories to the total as well. On most filesystems this is the
block size, or the amount of metadata space the directory is using. On
CephFS, this size is fabricated to be the sum of the sizes of all the
files beneath it,
i.e. a cheap/free 'du -sh $fo
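For reference, that per-directory recursive size can also be read directly as a
virtual xattr on any CephFS directory (a quick sketch, assuming the /mnt/cephfs
mount point used below):

~# getfattr -n ceph.dir.rbytes /mnt/cephfs   # recursive byte count under the dir
~# getfattr -n ceph.dir.rfiles /mnt/cephfs   # recursive file count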
On 19/01/2016 05:19, Francois Lafont wrote:
> However, I still have a question. Since my previous message, additional
> data has been put into the CephFS and the values have changed, as you can see:
>
> ~# du -sh /mnt/cephfs/
> 1.2G /mnt/cephfs/
>
> ~# du --apparent-size -sh /m
Hi all,
Does anyone know if RGW supports Keystone's PKIZ tokens, or, better yet,
know of a list of the supported token types?
Cheers,
--
Cheers,
~Blairo
Hi,
On 18/01/2016 05:00, Adam Tygart wrote:
> As I understand it:
I think you understand well. ;)
> 4.2G is used by ceph (all replication, metadata, et al.); it is a sum of
> all the space "used" on the OSDs.
I confirm that.
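For illustration, the same split shows up in 'ceph df': the global RAW USED is
the sum over all OSDs (replicas, journals, metadata), while the per-pool USED
is the logical data size. A rough sketch using the numbers from this thread
(the pool name, ID and omitted columns are placeholders):

~# ceph df
GLOBAL:
    SIZE     AVAIL     RAW USED     %RAW USED
    ...      ...       4.2G         ...
POOLS:
    NAME            ID     USED     %USED     MAX AVAIL     OBJECTS
    cephfs_data     1      958M     ...       ...           ...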
> 958M is the actual space the data in cephfs is using (without replic
Hi,
I have not followed this thread closely, so sorry in advance if I'm a little off
topic. Personally I'm using this udev rule and it works well (the servers are
Ubuntu Trusty):
~# cat /etc/udev/rules.d/90-ceph.rules
ENV{ID_PART_ENTRY_SCHEME}=="gpt",
ENV{ID_PART_ENTRY_NAME}=="osd-?*-journal
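The tail of the rule is cut off above; rules of this shape are typically just
there to hand the journal partitions to the ceph user at boot, something along
these lines (owner/group/mode here are an assumption, not necessarily the exact
original rule):

ENV{ID_PART_ENTRY_SCHEME}=="gpt", ENV{ID_PART_ENTRY_NAME}=="osd-?*-journal", OWNER="ceph", GROUP="ceph", MODE="660"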
On Mon, Jan 18, 2016 at 4:48 AM, Arthur Liu wrote:
>
>
> On Mon, Jan 18, 2016 at 11:34 PM, Burkhard Linke
> wrote:
>>
>> Hi,
>>
>> On 18.01.2016 10:36, david wrote:
>>>
>>> Hello All.
>>> Does anyone provide Ceph rbd/rgw/cephfs through NFS? I have a
>>> requirement for a Ceph cluster wh
Hi,
Thanks for your answer. Is CephFS stable enough to deploy in
production environments? And have you compared the performance of nfs-ganesha
with a standard kernel-based NFSd, both backed by CephFS?
Hi,
Is CephFS stable enough to deploy in production environments? And have
you compared the performance of nfs-ganesha with a standard kernel-based NFSd,
both backed by CephFS?
> On Jan 18, 2016, at 20:34, Burkhard Linke
> wrote:
>
> Hi,
>
> On 18.01.2016 10:36, david wrote:
>> H
Hello,
I have configured osd_crush_chooseleaf_type = 3 (rack), and I have 6 OSDs
in three hosts and three racks. My tree is this:
    datacenter datacenter1
-7  5.45999     rack rack1
-2  5.45999         host storage1
 0  2.73000             osd.0    up    1.0    1.00
Take Greg's comments to heart, because he's absolutely correct here.
Distributed storage systems almost as a rule love parallelism and if you
have enough you can often hide other issues. Latency is probably the
more interesting question, and frankly that's where you'll often start
seeing the k
One of the other guys on the list here benchmarked them. They spanked every
other ssd on the *recommended* tree..
- Original Message -
From: "Gregory Farnum"
To: "Tyler Bishop"
Cc: "David" , "Ceph Users"
Sent: Monday, January 18, 2016 2:01:44 PM
Subject: Re: [ceph-users] Again - state
On Sun, Jan 17, 2016 at 12:34 PM, Tyler Bishop
wrote:
> The changes you are looking for are coming from Sandisk in the ceph "Jewel"
> release coming up.
>
> Based on benchmarks and testing, Sandisk has contributed heavily to
> the tuning aspects and is promising 90%+ of the native IOPS of a driv
On Sun, Jan 17, 2016 at 6:34 PM, James Gallagher
wrote:
> Hi,
>
> I'm looking to implement the CephFS on my Firefly release (v0.80) with an
> XFS native file system, but so far I'm having some difficulties. After
> following the ceph/qsg and creating a storage cluster, I have the following
> topol
On Sunday, January 17, 2016, James Gallagher wrote:
> Hi,
>
> I'm looking to implement the CephFS on my Firefly release (v0.80) with
> an XFS native file system, but so far I'm having some difficulties. After
> following the ceph/qsg and creating a storage cluster, I have the following
> topolog
Not that I know of.
-
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1
On Mon, Jan 18, 2016 at 10:33 AM, deeepdish wrote:
> Thanks Robert. Will definitely try this. Is there a way to implement
Thanks Robert. Will definitely try this. Is there a way to implement
“gradual CRUSH” changes? I noticed that whenever cluster-wide changes are pushed
(the crush map, for instance) the cluster immediately attempts to align itself,
disrupting client access / performance…
> On Jan 18, 2016, at 12:2
Unfortunately, I haven't seen any obvious suspicious log messages from
either the OSD or the MON. Is there a way to query detailed information
on OSD monitoring, e.g. heartbeats?
On 01/18/2016 05:54 PM, Steve Taylor wrote:
With a single osd there shouldn't be much to worry about. It will have
I'm not sure why you have six monitors. Six monitors buy you nothing
over five other than more power used, more latency, and more headaches. See
http://docs.ceph.com/docs/hammer/rados/configuration/mon-config-ref/#monitor-quorum
for
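The arithmetic behind that: a monitor quorum is a strict majority, i.e.
quorum = floor(n/2) + 1. With 5 mons the quorum is 3, so 2 can fail; with
6 mons the quorum is 4, so still only 2 can fail, and you gain nothing in
failure tolerance.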
https://github.com/swiftgist/lrbd/wiki
According to the lrbd wiki it still uses KRBD (see those /dev/rbd/...
devices in the targetcli config).
I was thinking that Mike Christie developed a librbd module for LIO.
So what is it - KRBD or librbd?
2016-01-18 20:23 GMT+08:00 Tyler Bishop :
>
> Well that's inte
From what I understand, the scrub only scrubs PG copies in the same
pool, so there would not be much benefit to scrubbing a single
replication pool until Ceph starts storing the hash of the metadata
and data. Then you would only know that your data
With a single osd there shouldn't be much to worry about. It will have to get
caught up on map epochs before it will report itself as up, but on a new
cluster that should be pretty immediate.
You'll probably have to look for clues in the osd and mon logs. I would expect
some sort of error repor
On 01/16/2016 12:06 PM, David wrote:
Hi!
We’re planning our third Ceph cluster and have been trying to find out how to
maximize IOPS on this one.
Our needs:
* Pool for MySQL, rbd (mounted as /var/lib/mysql or equivalent on KVM
servers)
* Pool for storage of many small files, rbd (probably dovecot mail
Hi Steve
Thanks for your answer. I don't have a private network defined.
Furthermore, in my current testing configuration, there is only one OSD,
so communication between OSDs should be a non-issue.
Do you know how OSD up/down state is determined when there is only one OSD?
Best,
Jeff
On 01/18
Do you have a ceph private network defined in your config file? I've seen this
before in that situation where the private network isn't functional. The osds
can talk to the mon(s) but not to each other, so they report each other as down
when they're all running just fine.
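For reference, this is the split being described; a minimal ceph.conf sketch
(the subnets are placeholders):

[global]
    public network  = 192.168.1.0/24    # client and monitor traffic
    cluster network = 10.10.10.0/24     # OSD replication / backfill traffic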
Steve Taylor | Senior
On Mon, Jan 18, 2016 at 11:34 PM, Burkhard Linke <
burkhard.li...@computational.bio.uni-giessen.de> wrote:
> Hi,
>
> On 18.01.2016 10:36, david wrote:
>
>> Hello All.
>> Does anyone provide Ceph rbd/rgw/cephfs through NFS? I have a
>> requirement for a Ceph cluster which needs to provide
Hi,
On 18.01.2016 10:36, david wrote:
Hello All.
Does anyone provide Ceph rbd/rgw/cephfs through NFS? I have a
requirement for a Ceph cluster which needs to provide NFS service.
We export a CephFS mount point on one of our NFS servers. Works out of
the box with Ubuntu Trusty, a rec
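A minimal sketch of that kind of setup (the monitor host, client subnet and
fsid below are placeholders, not the exact configuration):

~# mount -t ceph mon1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
~# cat /etc/exports
/mnt/cephfs 192.168.1.0/24(rw,no_subtree_check,fsid=101)
~# exportfs -ra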
Well that's interesting.
I've mapped block devices via the kernel client and exported them over iSCSI,
but the performance was horrible. I wonder if this is any different?
From: "Dominik Zalewski"
To: ceph-users@lists.ceph.com
Sent: Monday, January 18, 2016 6:35:20 AM
Subject: [ceph-users] CentO
You should test out cephfs exported as an NFS target.
- Original Message -
From: "david"
To: ceph-users@lists.ceph.com
Sent: Monday, January 18, 2016 4:36:17 AM
Subject: [ceph-users] Ceph and NFS
Hello All.
Does anyone provide Ceph rbd/rgw/cephfs through NFS? I have a
require
Check these out too:
http://www.seagate.com/internal-hard-drives/solid-state-hybrid/1200-ssd/
- Original Message -
From: "Christian Balzer"
To: "ceph-users"
Sent: Sunday, January 17, 2016 10:45:56 PM
Subject: Re: [ceph-users] Again - state of Ceph NVMe and SSDs
Hello,
On Sat, 16 Jan
Hi,
I'm looking into implementing an iSCSI gateway with MPIO using lrbd -
https://github.com/swiftgist/lrb
https://www.suse.com/docrep/documents/kgu61iyowz/suse_enterprise_storage_2_and_iscsi.pdf
https://www.susecon.com/doc/2015/sessions/TUT16512.pdf
From the above examples:
*For iSCSI failover and
On 18-01-16 10:22, Alex Leake wrote:
> Hello All.
>
>
> Does anyone know if it's possible to retrieve the remaining OSD capacity
> via the Python or C API?
>
Using a mon_command in librados you can send an 'osd df' if you want to.
See this snippet: https://gist.github.com/wido/ac53ae01d661dd
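A rough sketch of that approach with the Python rados bindings (untested here;
the exact JSON fields of 'osd df' can vary between releases):

import json
import rados

# Connect using the local ceph.conf and the default admin keyring.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

# Ask the monitors for 'osd df' in JSON form; mon_command returns (ret, outbuf, outs).
cmd = json.dumps({'prefix': 'osd df', 'format': 'json'})
ret, outbuf, outs = cluster.mon_command(cmd, b'')
if ret != 0:
    raise RuntimeError('osd df failed: %s' % outs)

# Print per-OSD utilization and remaining capacity.
for node in json.loads(outbuf.decode('utf-8'))['nodes']:
    print(node['name'], node['utilization'], node['kb_avail'])

cluster.shutdown()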
Hello All.
Does anyone provide Ceph rbd/rgw/cephfs through NFS? I have a
requirement for a Ceph cluster which needs to provide NFS service.
Hello All.
Does anyone know if it's possible to retrieve the remaining OSD capacity via
the Python or C API?
I can get all other sorts of information, but I thought it would be nice to see
near-full OSDs via the API.
Kind Regards,
Alex.