Hi,
On 1/31/19 6:11 PM, shubjero wrote:
Has anyone automated the ability to generate S3 keys for OpenStack
users in Ceph? Right now we take in a user's request manually (Hey we
need an S3 API key for our OpenStack project 'X', can you help?). We
as cloud/ceph admins just use radosgw-admin to cr
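If you end up scripting it, the core of what radosgw-admin does for us is
roughly the following; the uid and display name are just placeholders, and
the JSON it prints contains the access/secret pair to hand back or store:

# radosgw-admin user create --uid=project-x --display-name="OpenStack project X"
# radosgw-admin key create --uid=project-x --key-type=s3 --gen-access-key --gen-secret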
On Fri, Feb 01, 2019 at 08:44:51AM +0100, Abhishek wrote:
> * This release fixes the pg log hard limit bug that was introduced in
> 12.2.9, https://tracker.ceph.com/issues/36686. A flag called
> `pglog_hardlimit` has been introduced, which is off by default. Enabling
> this flag will limit t
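(For what it's worth, my reading of the release notes is that the flag is
only set by hand once the whole cluster is on a fixed version, along these
lines; the second command is just to verify it shows up in the cluster flags:)

# ceph osd set pglog_hardlimit
# ceph osd dump | grep flags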
On Fri, 1 Feb 2019 08:47:47 +0100
Wido den Hollander wrote:
>
>
> On 2/1/19 8:44 AM, Abhishek wrote:
> > We are glad to announce the eleventh bug fix release of the Luminous
> > v12.2.x long term stable release series. We recommend that all users
> > * There have been fixes to RGW dynamic and
Hello,
I'm a bit confused about how the journaling actually works in the MDS.
I was reading about these two configuration parameters (journal write head
interval) and (mds early reply). Does the MDS flush the journal
synchronously after each operation? And by setting mds early reply to true
it al
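(Not an answer to the flush semantics, but for checking what a live MDS is
actually using, the admin socket works; the option names in the grep are my
guess at the ones you mean:)

# ceph daemon mds.<id> config show | grep -E 'early_reply|write_head'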
Hi all,
I'm just in the process of migrating my 3-node Ceph cluster from
BTRFS-backed Filestore over to Bluestore.
Last weekend I did this with my first node, and while the migration went
fine, I noted that the OSD did not survive a reboot test: after
rebooting /var/lib/ceph/osd/ceph-0 was comple
Hi,
On 2/1/19 11:40 AM, Stuart Longland wrote:
Hi all,
I'm just in the process of migrating my 3-node Ceph cluster from
BTRFS-backed Filestore over to Bluestore.
Last weekend I did this with my first node, and while the migration went
fine, I noted that the OSD did not survive a reboot test: a
Hi,
On 31/01/2019 17:11, shubjero wrote:
Has anyone automated the ability to generate S3 keys for OpenStack users
in Ceph? Right now we take in a user's request manually (Hey we need an
S3 API key for our OpenStack project 'X', can you help?). We as
cloud/ceph admins just use radosgw-admin to c
On Fri, Feb 1, 2019 at 6:28 AM Burkhard Linke
wrote:
>
> Hi,
>
> On 2/1/19 11:40 AM, Stuart Longland wrote:
> > Hi all,
> >
> > I'm just in the process of migrating my 3-node Ceph cluster from
> > BTRFS-backed Filestore over to Bluestore.
> >
> > Last weekend I did this with my first node, and whi
Hi, PPL!
I disconnected the tier pool from the data pool.
"rados -p tier.pool ls" shows that there are no objects in the pool.
But "rados df -p=tier.pool" shows:
POOL_NAME  USED     OBJECTS CLONES COPIES MISSING_ON_PRIMARY UNFOUND DEGRADED RD_OPS RD WR_OPS WR
tier.pool  148 KiB  960
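For comparison, the detach sequence I believe is expected for a writeback
tier is roughly the following (base.pool stands in for your data pool, and
the flush-evict step is what clears the cache objects out):

# ceph osd tier cache-mode tier.pool forward --yes-i-really-mean-it
# rados -p tier.pool cache-flush-evict-all
# ceph osd tier remove-overlay base.pool
# ceph osd tier remove base.pool tier.pool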
On Fri, Feb 1, 2019 at 2:31 AM Sinan Polat wrote:
>
> Thanks for the clarification!
>
> Great that the next release will include the feature. We are running on Red
> Hat Ceph, so we might have to wait longer before having the feature available.
>
> Another related (simple) question:
> We are usin
I am using the "ceph-ansible" set of Ansible playbooks to try to get a test
cluster up and running (in Vagrant). I am deploying Mimic (13.2.4) on Ubuntu
16.04, with one (for now) monitor, and three osd servers.
I have a play in the Ansible that is erroring out, and in troubleshooting what
that
So the problem was that I was using the "master" branch of ceph-ansible
instead of a tagged branch...
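For anyone hitting the same thing, switching to a release branch is just a
checkout in the ceph-ansible working copy (stable-3.2 is my guess at the
right series for Mimic; check the ceph-ansible docs for the mapping):

# cd ceph-ansible
# git checkout stable-3.2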
From: Sebastien Han [mailto:s...@redhat.com]
Sent: Friday, February 01, 2019 9:40 AM
To: Will Dennis
Cc: ceph-ansi...@lists.ceph.com
Subject: Re: [Ceph-ansible] Problem wi
Hello,
We'll soon be building out four new luminous clusters with Bluestore.
Our current clusters are running Filestore, so we're not very familiar
with Bluestore yet and I'd like to have an idea of what to expect.
Here are the OSD hardware specs (5x per cluster):
2x 3.0GHz 18c/36t
22x 1.8TB 10K S
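One thing to budget for, assuming Luminous defaults: BlueStore keeps its
cache inside the OSD process rather than in the kernel page cache, so with
bluestore_cache_size_hdd at its default of 1 GB each of the 22 OSDs will sit
well above that in RSS. If memory is tight it can be tuned in ceph.conf,
e.g. (value below is only an example, 512 MB):

[osd]
bluestore_cache_size_hdd = 536870912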
On Fri, Feb 1, 2019 at 1:11 AM Mark Schouten wrote:
>
> On Fri, Feb 01, 2019 at 08:44:51AM +0100, Abhishek wrote:
> > * This release fixes the pg log hard limit bug that was introduced in
> > 12.2.9, https://tracker.ceph.com/issues/36686. A flag called
> > `pglog_hardlimit` has been introduce
On 1/2/19 10:43 pm, Alfredo Deza wrote:
>>> I think mounting tmpfs for something that should be persistent is highly
>>> dangerous. Is there some flag I should be using when creating the
>>> BlueStore OSD to avoid that issue?
>>
>> The tmpfs setup is expected. All persistent data for bluestore OSD
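For context, a sketch of how that works with ceph-volume (lvm) OSDs: the
tmpfs content is regenerated at activation time from metadata kept in LVM
tags, so after a reboot these should rebuild /var/lib/ceph/osd/ceph-0:

# ceph-volume lvm list
# ceph-volume lvm activate --all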
On Fri, Feb 1, 2019 at 3:08 PM Stuart Longland
wrote:
>
> On 1/2/19 10:43 pm, Alfredo Deza wrote:
> >>> I think mounting tmpfs for something that should be persistent is highly
> >>> dangerous. Is there some flag I should be using when creating the
> >>> BlueStore OSD to avoid that issue?
> >>
>
On 01.02.19 at 19:06, Neha Ojha wrote:
> If you had hit the bug, you would have seen failures like
> https://tracker.ceph.com/issues/36686.
> Yes, pglog_hardlimit is off by default in 12.2.11. Since you are
> running 12.2.9 (which has the patch that allows you to limit the length
> of the
On Fri, Feb 1, 2019 at 1:09 PM Robert Sander
wrote:
>
> On 01.02.19 at 19:06, Neha Ojha wrote:
>
> > If you had hit the bug, you would have seen failures like
> > https://tracker.ceph.com/issues/36686.
> > Yes, pglog_hardlimit is off by default in 12.2.11. Since you are
> > running 12.2.9
I thought a new cluster would have the 'rbd' pool already created; has this
changed? I'm using mimic.
# rbd ls
rbd: error opening default pool 'rbd'
Ensure that the default pool has been created or specify an alternate pool
name.
rbd: list: (2) No such file or directory
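If you just need the pool, creating it by hand is straightforward (the PG
count below is only an example, size it for your cluster):

# ceph osd pool create rbd 64
# rbd pool init rbd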
Confirm that no pools are created by default with Mimic.
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
solarflow99
Sent: Friday, February 1, 2019 2:28 PM
To: Ceph Users
Subject: [ceph-users] RBD default pool
I thought a new cluster would have the 'rbd' pool already cr
Hi,
I went to replace a disk today (which I had not had to do in a while)
and after I added it the results looked rather odd compared to times past:
I was attempting to replace /dev/sdk on one of our osd nodes:
#ceph-deploy disk zap hqosd7 /dev/sdk
#ceph-deploy osd create --data /dev/sdk hqos
Your output looks a bit weird, but still, this is normal for bluestore. It
creates a small separate metadata partition that is presented as XFS mounted
in /var/lib/ceph/osd, while the real data partition is hidden as a raw
(bluestore) block device.
It's no longer possible to check disk utilisation with df using
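The usual substitute for df with bluestore, as far as I know, is the
per-OSD view:

# ceph osd df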
O.k. thank you!
I removed the OSD after the fact just in case, but I will re-add it and
update the thread if things still don't look right.
Shain
On 2/1/19 6:35 PM, Vladimir Prokofev wrote:
Your output looks a bit weird, but still, this is normal for
bluestore. It creates a small separa
Hello @all,
On 18 January 2019 14:29:42 CET, Alfredo Deza wrote:
>On Fri, Jan 18, 2019 at 7:21 AM Jan Kasprzak wrote:
>>
>> Eugen Block wrote:
>> : Hi Jan,
>> :
>> : I think you're running into an issue reported a couple of times.
>> : For the use of LVM you have to specify the name of the Volu
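If it is the same issue I am thinking of, the call that works takes the
volume group / logical volume pair rather than a device path (the names
below are made up):

# ceph-volume lvm create --data ceph-vg/osd-block-lv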
On 01/02/2019 22:40, Alan Johnson wrote:
Confirm that no pools are created by default with Mimic.
I can confirm that. Mimic deploy doesn't create any pools.
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
solarflow99
Sent: Friday, February 1, 2019 2:28 PM
To:
Hi!
Right now, after adding an OSD:
# ceph health detail
HEALTH_ERR 74197563/199392333 objects misplaced (37.212%); Degraded data
redundancy (low space): 1 pg backfill_toofull
OBJECT_MISPLACED 74197563/199392333 objects misplaced (37.212%)
PG_DEGRADED_FULL Degraded data redundancy (low space): 1 pg
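Not knowing the rest of your cluster state, the commands I would start with
to find and relieve the full OSD (the ratio below is only an example):

# ceph osd df tree
# ceph osd set-backfillfull-ratio 0.91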