AFAIK, with dm-crypt LUKS (the default), ceph-disk stores each OSD
partition's key in the monitors' config-key store and uses the OSD partition
UUID as the ID for that key.
So you can list all your keys by running:
/usr/bin/ceph config-key ls
You'll get something like:
[
...
"dm-crypt/osd/5025
Did a few more tests:
Older Ceph server, OSD created with a pveceph create osd command:
pveceph create osd /dev/sdb
which is equivalent to:
ceph-disk prepare --zap-disk --fs-type xfs --cluster ceph --cluster-uuid
a5c0cfed-...4bf939ed70 /dev/sdb
sgdisk --print /dev/sdd
Disk /dev/sdd: 2930277168 sectors, 1.4 TiB
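To tie this back to the config-key IDs above, a hedged way to read a partition's
unique GUID with sgdisk (partition number 1 is only an example):
# prints details for partition 1, including its "Partition unique GUID",
# i.e. the partition UUID the first mail mentions as the config-key ID
sgdisk --info=1 /dev/sdd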
Hi,
How is MAX AVAIL calculated in 'ceph df'? I seem to be missing some space.
I have 26 OSDs, each 1484 GB (according to df), and 3 replicas.
Shouldn't MAX AVAIL be (26 * 1484 GB) / 3 = 12,861 GB?
Instead, 'ceph df' shows 7545G for the pool that is using the 26 OSDs.
What i
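For context, a rough sanity check of the numbers above; this is only a sketch of
the naive expectation, not the formula 'ceph df' actually uses, which also weighs
in the fullest OSD and the cluster's full ratio:
# naive estimate: total raw space divided by the replica count
echo "26 * 1484 / 3" | bc        # ~12861 GB
# MAX AVAIL is deliberately more pessimistic: it projects from the OSD that
# would fill up first (per CRUSH weights) and respects the full ratio
# (0.95 by default), so uneven utilisation across OSDs lowers the estimate.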
On Thu, Aug 3, 2017 at 4:41 PM, Willem Jan Withagen wrote:
> On 03/08/2017 09:36, Brad Hubbard wrote:
>> On Thu, Aug 3, 2017 at 5:21 PM, Martin Palma wrote:
>>> Hello,
>>>
>>> is there a way to get librados for MacOS? Has anybody tried to build
>>> librados for MacOS? Is this even possible?
Yes,
I am seeing OOM issues with some of my OSD nodes that I am testing with
Bluestore on 12.2.0, so I decided to try the StupidAllocator to see if it
has a smaller memory footprint, by setting the following in my ceph.conf:
bluefs_allocator = stupid
bluestore_cache_size_hdd = 1073741824
bluestore_cach
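A hedged way to confirm the new values actually took effect after restarting the
OSD (osd.0 is a placeholder; run this on the node hosting that OSD, through its
admin socket):
ceph daemon osd.0 config get bluefs_allocator
ceph daemon osd.0 config get bluestore_cache_size_hdd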
Hi Alexandre,
Am 07.09.2017 um 19:31 schrieb Alexandre DERUMIER:
> Hi Stefan
>
>>> Have you already done tests on how the performance changes with bluestore
>>> while putting all 3 block devices on the same SSD?
>
>
> I'm going to test bluestore with 3 nodes, 18 x Intel S3610 1.6TB, in coming
> w
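For reference, a hedged sketch of how ceph-disk can lay out the three BlueStore
devices, colocated or split (option names as I recall them from Luminous
ceph-disk; verify against your version, and the device paths are placeholders):
# everything on one SSD: without explicit options the DB and WAL simply
# live inside the block device
ceph-disk prepare --bluestore /dev/sdX
# data on one device, DB and WAL on separate devices
ceph-disk prepare --bluestore /dev/sdX --block.db /dev/sdY --block.wal /dev/sdZ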
Hi,
we have been using Ceph for multiple years now. It's unclear to me which of your
options fits best, but here are my preferences:
* Updates are risky enough that we'd rather not do them every year.
Also, having seen Jewel, we've been better off avoiding two
major issues that would have
Hi,
I'm facing a strange issue: I cannot remove an object from RGW (via the S3 API).
My steps:
s3cmd ls s3://bucket/object -> it exists
s3cmd rm s3://bucket/object -> success
s3cmd ls s3://bucket/object -> it still exists
At this point, I can still curl and get the object (so it does exist).
Doing the same vi
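A hedged way to cross-check from the RGW side (bucket and object names mirror
the placeholders above); RGW also reclaims deleted objects asynchronously, so
the garbage-collection queue is worth a look:
# does RGW itself still consider the object present?
radosgw-admin object stat --bucket=bucket --object=object
# list delete operations still pending garbage collection
radosgw-admin gc list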
Yes. Please open a ticket!
On Sat, Sep 9, 2017 at 11:16 AM Eric Eastman
wrote:
> I am seeing OOM issues with some of my OSD nodes that I am testing with
> Bluestore on 12.2.0, so I decided to try the StupidAllocator to see if it
> has a smaller memory footprint, by setting the following in my ce
Opened: http://tracker.ceph.com/issues/21332
On Sat, Sep 9, 2017 at 10:03 PM, Gregory Farnum wrote:
> Yes. Please open a ticket!
As a user, I would like to add that I'd like to see real 2-year support
for LTS releases. Hammer releases were sketchy at best in 2017. When
Luminous was released, the outstanding bugs were auto-closed; goodbye and
good riddance.
Also, the decision to drop support for certain OSes created a barrier