On Mon, Sep 23, 2019 at 10:51 AM Shawn A Kwang wrote:
>
> On 9/23/19 9:38 AM, Robert LeBlanc wrote:
> > On Wed, Sep 18, 2019 at 11:47 AM Shawn A Kwang wrote:
> >>
> >> We are planning our ceph architecture and I have a question:
> >>
> >> How should NVMe drives be used when our spinning storage d
On 9/23/19 9:38 AM, Robert LeBlanc wrote:
> On Wed, Sep 18, 2019 at 11:47 AM Shawn A Kwang wrote:
>>
>> We are planning our ceph architecture and I have a question:
>>
>> How should NVMe drives be used when our spinning storage devices use
>> bluestore:
>>
>> 1. block WAL and DB partitions
>> (htt
Awesome, thanks. I've added a link to the full dump in the tracker (I think
it's too big to attach directly).
From: Sage Weil
Sent: Monday, September 23, 2019 10:27 AM
To: Koebbe, Brian
Cc: ceph-users@ceph.io ; d...@ceph.io
Subject: Re: [ceph-users] Seemingly
On Mon, 23 Sep 2019, Koebbe, Brian wrote:
> Thanks Sage!
> ceph osd dump: https://pastebin.com/raw/zLPz9DQg
>
>
> ceph-monstore-tool /var/lib/ceph/mon/ceph-ufm03 dump-keys |grep osd_snap| cut
> -c-29 |uniq -c
> 2 osd_snap / purged_snap_10_000
> 1 osd_snap / purged_snap_12_000
>
On Fri, Sep 20, 2019 at 5:41 AM Amudhan P wrote:
> I have already set "mon osd memory target" to 1 GB and I have increased
> max-backfill from 1 to 8.
Reducing the number of backfills should reduce the amount of memory,
especially for EC pools.
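For reference, dialing backfills back down can be done at runtime; the value below is only an example:

    # lower concurrent backfills per OSD (centralized config, Mimic/Nautilus and later)
    ceph config set osd osd_max_backfills 1
    # or inject into already-running OSDs on older releases
    ceph tell osd.* injectargs '--osd-max-backfills 1'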
Robert LeBlanc
PGP Fingerprint 79A2 9CA4
Thanks Sage!
ceph osd dump: https://pastebin.com/raw/zLPz9DQg
ceph-monstore-tool /var/lib/ceph/mon/ceph-ufm03 dump-keys |grep osd_snap| cut
-c-29 |uniq -c
2 osd_snap / purged_snap_10_000
1 osd_snap / purged_snap_12_000
75 osd_snap / purged_snap_13_000
4 osd_snap / purged_s
Hi,
On Mon, 23 Sep 2019, Koebbe, Brian wrote:
> Our cluster has a little over 100 RBDs. Each RBD is snapshotted with a
> typical "frequently", hourly, daily, monthly type of schedule.
> A while back a 4th monitor was temporarily added to the cluster that took
> hours to synchronize with the oth
On Wed, Sep 18, 2019 at 11:47 AM Shawn A Kwang wrote:
>
> We are planning our ceph architecture and I have a question:
>
> How should NVMe drives be used when our spinning storage devices use
> bluestore:
>
> 1. block WAL and DB partitions
> (https://docs.ceph.com/docs/nautilus/rados/configuration
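For reference, option 1 usually comes down to pointing ceph-volume at the NVMe for block.db when the OSD is created; a sketch with placeholder device paths:

    # data on the spinner, RocksDB (and implicitly the WAL) on the NVMe partition
    ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1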
Hi,
this solved my problem! I had forgotten to set the permissions on the cache pool.
Best
Robert
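For context, "permissions on the cache pool" usually means the client's OSD caps have to cover the cache tier as well as the base pool; a sketch with placeholder client and pool names:

    # grant rwx on both the base pool and its cache tier
    ceph auth caps client.kvm mon 'allow r' osd 'allow rwx pool=data, allow rwx pool=data-cache'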
-Original Message-
From: Kees Meijs
Sent: Monday, 16 September 2019 13:21
To: Eikermann, Robert ; Fyodor Ustinov ;
Wido den Hollander
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] Re: Act
Our cluster has a little over 100 RBDs. Each RBD is snapshotted with a typical
"frequently", hourly, daily, monthly type of schedule.
A while back a 4th monitor was temporarily added to the cluster that took hours
to synchronize with the other 3.
While trying to figure out why that addition took
Forgot some important info: I'm running Mimic 13.2.5.
On Mon, Sep 23, 2019 at 8:49 AM Josh Haft wrote:
>
> Hi,
>
> I've been migrating data from one EC pool to another EC pool: two
> directories are mounted with ceph.dir.layout.pool file attribute set
> appropriately, then rsync from old to new a
Hi,
I've been migrating data from one EC pool to another EC pool: two
directories are mounted with ceph.dir.layout.pool file attribute set
appropriately, then rsync from old to new and finally, delete the old
files. I'm using the kernel client to do this. While the removed files
are no longer pres
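For anyone reproducing this, setting the directory layout looks roughly like the following; the pool, filesystem and path names here are placeholders:

    # register the new pool with the filesystem, then point the directory at it
    ceph fs add_data_pool cephfs newpool
    setfattr -n ceph.dir.layout.pool -v newpool /mnt/cephfs/newdir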
Hi,
this is another issue that I'm facing with my current Ceph cluster (2x
MDS, 6x OSD).
When I try to delete an RBD image, the CLI is flooded with error messages:
root@ld3955:# rbd rm hdd/vm-304-disk-0
Removing image: 3% complete...2019-09-23 14:25:01.954 7fb15c7cc700 0
--1- 10.97.206.91:0/3920420
On 23/09/2019 11:49, Marc Roos wrote:
And I was just about to upgrade. :) How is this even possible with this
change [0], where 50-100% of IOPS are lost?
[0]
https://github.com/ceph/ceph/pull/28573
-Original Message-
From: 徐蕴 [mailto:yu...@me.com]
Sent: Monday, 23 September 2019 8:28
To:
Hi Maged,
I also noticed that the write bandwidth is about 114 MB/s, which could be
limited by the 1G network. But why did the same hardware get better performance
marks when running Luminous or even Jewel? I ran the test on one server in this
cluster, so I assume that about 30% of write requests (I hav
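For reference, a 1 Gbit/s link carries at most 125 MB/s raw; after Ethernet and TCP/IP framing overhead the usable payload rate is roughly 115-118 MB/s, so a sustained 114 MB/s write bandwidth is essentially a saturated link.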
On 23/09/2019 08:27, 徐蕴 wrote:
Hi ceph experts,
I deployed Nautilus (v14.2.4) and Luminous (v12.2.11) on the same hardware, and
made a rough performance comparison. The results suggest Luminous is much better,
which is unexpected.
My setup:
3 servers, each with 3 HDD OSDs, 1 SSD as DB, two sepa
Thomas,
Check the documentation for the CLI command "ceph osd reweight-by-utilization" and run it.
Another problem with unbalanced data is the pool's globally available capacity
(it is limited by the fullest OSD).
Regards
Manuel
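For illustration, a dry run before the real reweight; the 120 threshold (OSDs more than 20% above average utilization) is just an example value:

    ceph osd test-reweight-by-utilization 120   # show what would change
    ceph osd reweight-by-utilization 120        # apply the reweights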
-Original Message-
From: Thomas <74cmo...@gmail.com>
Sent: Monday, 23 September 2019 11:49
To: EDH
Hi all,
I was not able to solve the problems with vfs_ceph; however, I got it working
using the kernel mount driver
(https://docs.ceph.com/docs/master/cephfs/kernel/).
I can't find the post right now, but I understand that the vfs module is
promising better features going forward, but the kernel
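For anyone hitting the same vfs_ceph issue, the kernel mount referenced above looks along these lines; the monitor address, credentials and mount point are placeholders:

    mount -t ceph 192.168.1.10:6789:/ /mnt/cephfs -o name=samba,secretfile=/etc/ceph/samba.secret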
And I was just about to upgrade. :) How is this even possible with this
change [0], where 50-100% of IOPS are lost?
[0]
https://github.com/ceph/ceph/pull/28573
-Original Message-
From: 徐蕴 [mailto:yu...@me.com]
Sent: Monday, 23 September 2019 8:28
To: ceph-users@ceph.io
Subject: [ceph-user
Hi,
I already have the balancer mode upmap enabled.
root@ld3955:/mnt/pve/pve_cephfs/template/iso# ceph balancer status
{
"active": true,
"plans": [],
"mode": "upmap"
}
However, there are OSDs with 60% and others with 90% usage, belonging to
the same pool with the same disk size.
This looks t
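With upmap active but no plans listed, it can help to check the current balance score and per-OSD utilization, e.g.:

    ceph balancer eval   # lower score means better balance
    ceph osd df tree     # per-OSD utilization and PG counts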
Hi Thomas,
For byte-based distribution of data across OSDs, you should set up the ceph
balancer in "byte" mode, not in PG mode.
The change will bring all OSDs to the same % of usage, but the objects will
NOT be redundant.
After several weeks and months of testing the balancer, the best profile is balance b
Hi,
I'm facing several issues with my ceph cluster (2x MDS, 6x OSD).
Here I would like to focus on the issue with pgs backfill_toofull.
I assume this is related to the fact that the data distribution on my
OSDs is not balanced.
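Besides rebalancing, a common stop-gap for backfill_toofull is to raise the backfillfull threshold slightly; the value below is only an example, and the OSDs must actually have the headroom:

    ceph osd set-backfillfull-ratio 0.91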
This is the current ceph status:
root@ld3955:~# ceph -s
cluster:
Hi Paul,
do you know how Ceph deletes objects in a bucket? What are the steps?
-
Br,
Dương Tuấn Dũng
0986153686
On Sat, Sep 21, 2019 at 6:44 PM Paul Emmerich
wrote:
> try increasing --max-concurrent-ios, the default is only 32
>
>
> Paul
>
> --
> Paul Emmerich
>
> Looking
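For reference, assuming this was about deleting a bucket with radosgw-admin, the option Paul mentions is passed like this (the bucket name and value are placeholders):

    radosgw-admin bucket rm --bucket=mybucket --purge-objects --max-concurrent-ios=128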
Why does it use so much RAM?
I am planning to use only CephFS, no block or object storage; is there a
way to control memory usage?
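On BlueStore the main memory knob is osd_memory_target; a sketch, assuming Mimic or later with the centralized config (the 2 GiB value is only an example):

    ceph config set osd osd_memory_target 2147483648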
On Mon, Sep 23, 2019 at 10:56 AM Konstantin Shalygin wrote:
> On 9/22/19 10:50 PM, Amudhan P wrote:
> > Do you think 4GB RAM for two OSD's is low, even with 1 OSD a
None of the KVM / LXC instances are starting.
Every KVM / LXC instance uses RBD.
The same pool hdd also provides the CephFS service, but this is only used for
storing KVM / LXC instance backups, ISOs and other files.
On 23.09.2019 at 08:55, Ashley Merrick wrote:
> Have you been able to start the V