Glance (and friends - Cinder etc.) work with the RBD layer, so yeah the
big 'devices' visible to OpenStack are made up of many (usually 4 MB)
RADOS objects.
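For example (a rough sketch - I'm assuming the Glance pool is called 'images'
and the image is format 2; names are placeholders), you can see the object size
and count the backing RADOS objects of an image with something like:

  rbd -p images info myimage                              # shows the object size (order 22 = 4 MB) and the block_name_prefix
  rados -p images ls | grep <block_name_prefix> | wc -l   # count of RADOS objects backing that image

(For format 1 images the prefix looks like 'rb.0.xxxx' instead of 'rbd_data.xxxx'.)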
Cheers
Mark
On 25/09/15 12:13, Cory Hawkless wrote:
>
> Upon bolting openstack Glance onto Ceph I can see hundreds of smaller objects
> are
I found a partial answer to some of the questions:
5./ My questions:
- Is there a simple command for me to check which sessions are
active? 'cephfs-table-tool 0 show session' does not seem to work.
- Is there a way for me to cross-check which sessions belong to
which clients (IPs)?
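(For what it's worth, the MDS admin socket may already cover this; a sketch,
assuming it is run on the MDS host and the daemon is called mds.a:)

  ceph daemon mds.a session ls    # lists active client sessions; entries usually include the client id and address (IP)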
Hi All...
I have some questions about client sessions in CephFS.
1./ My setup:
a. ceph 9.0.3
b. 32 OSDs distributed across 4 servers (8 OSDs per server).
c. 'osd pool default size = 3' and 'osd pool default min size = 2'
d. a single mds
e. dedicated pools for data and metadata
2./ I h
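(For reference, a sketch of how a layout like the one above - dedicated data and
metadata pools plus a single filesystem - is typically created; pool names and
PG counts are placeholders:)

  ceph osd pool create cephfs_data 512
  ceph osd pool create cephfs_metadata 128
  ceph fs new cephfs cephfs_metadata cephfs_data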
Ahh, and because that was my first insight into placing objects into my Ceph
pools I (incorrectly) made some assumptions!
From: Gregory Farnum [mailto:gfar...@redhat.com]
Sent: Friday, 25 September 2015 9:46 AM
To: Cory Hawkless
Cc: John Spray ; Ceph Users
Subject: Re: [ceph-users] Basic object
Aha, it seems like '--mark-init auto' (supposedly the default arg to
ceph-disk activate?) must be failing. I'll try re-activating my OSDs
with an explicit init system passed in.
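Something like this, I think - assuming sysvinit on these CentOS 6 boxes and
/dev/sdb1 as a placeholder for the OSD data partition:

  ceph-disk activate --mark-init sysvinit /dev/sdb1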
-Ben
On Thu, Sep 24, 2015 at 12:49 PM, Ben Hines wrote:
> Any idea why OSDs might revert to 'prepared' after reboot an
On Sep 24, 2015 5:12 PM, "Cory Hawkless" wrote:
>
> Hi all, thanks for the replies.
> So my confusion was because I was using "rados put test.file someobject testpool"
> This command does not seem to split my 'files' into chunks when they are
> saved as 'objects', hence the terminology
>
> Upon bolt
Hi all, thanks for the replies.
So my confusion was because I was using "rados put test.file someobject testpool"
This command does not seem to split my 'files' into chunks when they are saved
as 'objects', hence the terminology
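For instance, a quick way to check (just a sketch, using the same placeholder
names, with the pool given via -p):

  rados -p testpool put someobject test.file   # stores the whole file as a single RADOS object
  rados -p testpool stat someobject            # reported size matches the full file, i.e. no chunking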
Upon bolting OpenStack Glance onto Ceph I can see hundreds of small
Please review http://docs.ceph.com/docs/master/rados/operations/crush-map/
regarding weights
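As a rough sketch (the OSD id and value below are placeholders): CRUSH weights
are conventionally set to the OSD size in TB, so a 120 GB disk would get
something like:

  ceph osd crush reweight osd.0 0.12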
Best regards,
Alex
On Wed, Sep 23, 2015 at 3:08 AM, wikison wrote:
> Hi,
> I have four storage machines to build a Ceph storage cluster as
> storage nodes. Each of them has a 120 GB HDD attached
Any idea why OSDs might revert to 'prepared' after reboot and have to
be activated again?
These are older nodes which were manually deployed, not using ceph-deploy.
CentOS 6.7, Hammer 0.94.3
-Ben
On Tue, Sep 22, 2015 at 7:21 PM, Jevon Qiao wrote:
> Hi Sage and other Ceph experts,
>
> This is a greeting from Jevon. I'm from China and working in a company which
> is using Ceph as the backend storage. At present, I'm evaluating the
> following two options of using Ceph cluster to provide NAS
On Thu, Sep 24, 2015 at 2:06 AM, Ilya Dryomov wrote:
> On Thu, Sep 24, 2015 at 7:05 AM, Robert LeBlanc wrote:
>> If you use RADOS gateway, RBD or CephFS, then you don't need to worry
>> about striping. If you write your own application that
On Thu, Sep 24, 2015 at 8:59 AM, Mikaël Guichard wrote:
> Hi,
>
> I encounter this error:
>
>> /usr/bin/radosgw -d --keyring /etc/ceph/ceph.client.radosgw.keyring -n
>> client.radosgw.myhost
> 2015-09-24 17:41:18.223206 7f427f074880 0 ceph version 0.94.3
> (95cefea9fd9ab740263bf8bb4796fd864d9afe
Hi,
I encounter this error:
> /usr/bin/radosgw -d --keyring /etc/ceph/ceph.client.radosgw.keyring -n client.radosgw.myhost
2015-09-24 17:41:18.223206 7f427f074880 0 ceph version 0.94.3
(95cefea9fd9ab740263bf8bb4796fd864d9afe2b), process radosgw, pid 4570
2015-09-24 17:41:18.349037 7f427f074
The kernel client does not yet support the new RBD features of exclusive-lock,
object-map, etc. Work is currently in-progress to add this support in the
future. In the meantime, if you require something similar, you can script your
mount commands around the existing RBD advisory locking approa
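(A rough sketch of such a wrapper, with mypool/myimage and the lock id as
placeholders; the locker id in the last step is whatever 'rbd lock list'
reports:)

  rbd lock add mypool/myimage myhost          # take the advisory lock before mapping; fails if already held
  rbd map mypool/myimage
  # ... use the device ...
  rbd unmap /dev/rbd0
  rbd lock list mypool/myimage                # note the locker id, e.g. client.4567
  rbd lock remove mypool/myimage myhost client.4567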
We now have a gitbuilder up and running building test packages for arm64
(aarch64). The hardware for these builds has been graciously provided by
Cavium (thank you!).
Trusty aarch64 users can now install packages with
ceph-deploy install --dev BRANCH HOST
and build results are visible at
ht
On Thu, 24 Sep 2015, Alexander Yang wrote:
> I use 'ceph osd crush dump | tail -n 20' and get:
>
> "type": 1,
> "min_size": 1,
> "max_size": 10,
> "steps": [
> { "op": "take",
> "item": -62,
> "item_na
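(For reference, the full rule and weight layout is often easier to read by
decompiling the CRUSH map; the file names below are placeholders:)

  ceph osd getcrushmap -o crushmap.bin
  crushtool -d crushmap.bin -o crushmap.txt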
On Thu, Sep 24, 2015 at 12:33 PM, Wido den Hollander wrote:
>
>
> On 24-09-15 11:06, Ilya Dryomov wrote:
>> On Thu, Sep 24, 2015 at 7:05 AM, Robert LeBlanc wrote:
>>> If you use RADOS gateway, RBD or CephFS, then you don't need to worry
On 24-09-15 11:06, Ilya Dryomov wrote:
> On Thu, Sep 24, 2015 at 7:05 AM, Robert LeBlanc wrote:
>> If you use RADOS gateway, RBD or CephFS, then you don't need to worry
>> about striping. If you write your own application that uses librado
On Thu, Sep 24, 2015 at 7:05 AM, Robert LeBlanc wrote:
> If you use RADOS gateway, RBD or CephFS, then you don't need to worry
> about striping. If you write your own application that uses librados,
> then you have to worry about it. I understa
On Thu, Sep 24, 2015 at 1:51 AM, Cory Hawkless wrote:
> Hi all,
>
>
>
> I have a basic question about how Ceph stores individual objects.
>
> Say I have a pool with a replica size of 3 and I upload a 1GB file to this
> pool. It appears as if this 1GB file gets placed into 3 PGs on 3 OSDs,
> simpl
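(For what it's worth, the PG and OSD set that a single object maps to can be
checked directly; pool and object names below are placeholders:)

  ceph osd map testpool someobject    # prints the PG id and the up/acting OSD set for that object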