+1
Operator's view: a 12-month cycle is definitely better than 9. March seems
to be a reasonable compromise.
Best
Dietmar
On 6/6/19 2:31 AM, Linh Vu wrote:
> I think a 12-month cycle is much better from the cluster operations
> perspective. I like March as a release month as well.
> -
Hello CUZA,
Are the rbd/disk_test images in the same ceph cluster?
Are you exporting rbd/disk_test with one user while importing it with
another one?
At 2019-06-05 23:25:45, "CUZA Frédéric" wrote:
>Thank you all for your quick answer.
>I think that will solve our problem.
>
>This is what we
On Thu, Jun 6, 2019 at 6:36 AM Jorge Garcia wrote:
>
> We have been testing a new installation of ceph (mimic 13.2.2) mostly
> using cephfs (for now). The current test is just setting up a filesystem
> for backups of our other filesystems. After rsyncing data for a few
> days, we started getting t
I think a 12-month cycle is much better from the cluster operations perspective.
I like March as a release month as well.
From: ceph-users on behalf of Sage Weil
Sent: Thursday, 6 June 2019 1:57 AM
To: ceph-us...@ceph.com; ceph-de...@vger.kernel.org; d...@ce
Hi
Checking our cluster logs, we found tons of these lines in the OSDs.
One osd
/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/14.2.1/rpm/el7/BUILD/ceph-14.2.1/src/cls/rgw/cls_rgw.cc:3461: could
It seems like since the change to the 9-month cadence it has been bumpy
for the Debian-based installs. Changing to a 12-month cadence sounds
like a good idea. Perhaps some Debian maintainers can suggest a good
month for them to get the packages in time for their release cycle.
On 2019-06-0
I think the mimic balancer doesn't include omap data when trying to
balance the cluster. (Because it doesn't get usable omap stats from
the cluster anyway; in Nautilus I think it does.) Are you using RGW or
CephFS?
-Greg
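For what it's worth, on Nautilus the per-OSD omap usage is reported directly,
so you can at least see where the omap data sits (column names from memory,
so double-check on your version):
ceph osd df tree    # Nautilus adds OMAP and META columns per OSD
On Mimic that breakdown isn't available, which is why the balancer there
can't account for it.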
On Wed, Jun 5, 2019 at 1:01 PM Josh Haft wrote:
>
> Hi everyone,
>
> On my 1
Hi everyone,
On my 13.2.5 cluster, I recently enabled the ceph balancer module in
crush-compat mode. A couple manual 'eval' and 'execute' runs showed
the score improving, so I set the following and enabled the auto
balancer.
mgr/balancer/crush_compat_metrics:bytes # from
https://github.com/ceph/c
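For reference, the rough crush-compat sequence on a Mimic cluster looks like
this ('myplan' is just a placeholder name; the config-key is the one
referenced above):
ceph balancer mode crush-compat
ceph balancer eval                  # current score
ceph balancer optimize myplan
ceph balancer eval myplan           # score the plan before applying it
ceph balancer execute myplan
ceph config-key set mgr/balancer/crush_compat_metrics bytes
ceph balancer on                    # hand over to the automatic balancer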
Hi,
>>- November: If we release Octopus 9 months from the Nautilus release
>>(planned for Feb, released in Mar) then we'd target this November. We
>>could shift to a 12-month cadence after that.
For the last 2 Debian releases, the freeze was around January-February;
November seems to be a go
On Wed, Jun 5, 2019 at 2:30 PM Sameh wrote:
>
> Le (On) Wed, Jun 05, 2019 at 01:57:52PM -0400, Alex Gorbachev ecrivit (wrote):
> >
> >
> > I get this in a lab sometimes, and do
> >
> > ceph osd set noout
> >
> > and reboot the node with the stuck PG.
>
> Thank you for your feedback.
>
> I trie
Le (On) Wed, Jun 05, 2019 at 01:57:52PM -0400, Alex Gorbachev ecrivit (wrote):
>
>
> I get this in a lab sometimes, and do
>
> ceph osd set noout
>
> and reboot the node with the stuck PG.
Thank you for your feedback.
I tried to do that, even rebooting all the nodes, but nothing changed.
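For reference, the full sequence being suggested is roughly the following
(the PG id is a placeholder):
ceph osd set noout           # keep the OSDs from being marked out during the reboot
# reboot the node holding the stuck PG's primary OSD
ceph osd unset noout
ceph pg dump_stuck stale     # check whether the PG is still stuck
ceph pg 1.2f query           # inspect it (1.2f is a placeholder)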
On Wed, Jun 5, 2019 at 1:36 PM Sameh wrote:
>
> Hello cephers,
>
> I was trying to reproduce a production situation involving a stuck stale PG.
>
> While playing with a test cluster, I aggressively removed 3 OSDs at once
> from the cluster. One OSD per host. All pools are size 3.
>
> After re-addi
On 6/5/19 5:57 PM, Sage Weil wrote:
> So far the balance of opinion seems to favor a shift to a 12 month
> cycle [...] it seems pretty likely we'll make that shift.
thanks, much appreciated (from a cluster operator's point of view).
> Thoughts?
GNOME and a few others are doing April and October
Hello cephers,
I was trying to reproduce a production situation involving a stuck stale PG.
While playing with a test cluster, I aggressively removed 3 OSDs at once
from the cluster. One OSD per host. All pools are size 3.
After re-adding them, I ended up in this situation (PG unfound, or acting
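A rough way to dig into that state (the PG id is a placeholder, and
mark_unfound_lost is destructive, so it's a last resort only):
ceph health detail                      # lists the affected PGs
ceph pg 1.2f query                      # acting/up sets and recovery state
ceph pg 1.2f list_missing               # which objects are unfound
ceph pg 1.2f mark_unfound_lost revert   # last resort: revert (or delete) the unfound objects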
On Wed, Jun 5, 2019 at 12:59 PM Jordan Share wrote:
>
> One thing to keep in mind when pipelining rbd export/import is that the
> default is just a raw image dump.
>
> So if you have a large, but not very full, RBD, you will dump all those
> zeroes into the pipeline.
>
> In our case, it was actual
On Wed, Jun 5, 2019 at 10:10 AM Jonas Jelten wrote:
>
> Hi!
>
> I'm also affected by this:
>
> HEALTH_WARN 13 pgs not deep-scrubbed in time; 13 pgs not scrubbed in time
> PG_NOT_DEEP_SCRUBBED 13 pgs not deep-scrubbed in time
> pg 6.b1 not deep-scrubbed since 0.00
> pg 7.ac not deep-scr
I am trying to resolve some kind of inconsistency.
My ceph -s:
  services:
    mon: 1 daemons, quorum cephback2 (age 22h)
    mgr: cephback2(active, since 28m), standbys: cephback1
    osd: 6 osds: 6 up (since 22h), 6 in (since 24h); 125 remapped pgs
But when I do
ceph mgr module enable dashboar
Hi!
I'm also affected by this:
HEALTH_WARN 13 pgs not deep-scrubbed in time; 13 pgs not scrubbed in time
PG_NOT_DEEP_SCRUBBED 13 pgs not deep-scrubbed in time
pg 6.b1 not deep-scrubbed since 0.00
pg 7.ac not deep-scrubbed since 0.00
pg 7.a0 not deep-scrubbed since 0.00
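If the goal is just to clear the warning, the scrubs can be kicked off
manually (PG ids taken from the warning above):
ceph pg deep-scrub 6.b1
ceph pg deep-scrub 7.ac
ceph pg deep-scrub 7.a0
If the OSDs simply can't keep up, raising osd_max_scrubs or widening the
osd_scrub_begin_hour / osd_scrub_end_hour window are the knobs to look at.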
One thing to keep in mind when pipelining rbd export/import is that the
default is just a raw image dump.
So if you have a large, but not very full, RBD, you will dump all those
zeroes into the pipeline.
In our case, it was actually faster to write to a (sparse) temp file and
read it in agai
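In other words, something along these lines (paths and image names are
placeholders; the temp file stays sparse on most filesystems, so it only
costs the allocated blocks):
rbd -c /etc/ceph/Oceph.conf export rbd/disk_test /tmp/disk_test.img
rbd -c /etc/ceph/Nceph.conf import /tmp/disk_test.img rbd/disk_test
rm /tmp/disk_test.img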
We have been testing a new installation of ceph (mimic 13.2.2) mostly
using cephfs (for now). The current test is just setting up a filesystem
for backups of our other filesystems. After rsyncing data for a few
days, we started getting this from ceph -s:
health: HEALTH_WARN
1 MDSs
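To see exactly which MDS warning that is and which daemon it refers to:
ceph health detail
ceph fs status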
Hi everyone,
Since luminous, we have had the following release cadence and policy:
- release every 9 months
- maintain backports for the last two releases
- enable upgrades to move either 1 or 2 releases ahead
(e.g., luminous -> mimic or nautilus; mimic -> nautilus or octopus; ...)
This has
Hi,
On 5/6/19 at 16:53, vita...@yourcmc.ru wrote:
Ok, average network latency from VM to OSDs is ~0.4ms.
It's rather bad; you can improve the latency by 0.3ms just by
upgrading the network.
Single-threaded performance is ~500-600 IOPS, or an average latency of 1.6ms.
Is that comparable to wh
On Wed, Jun 5, 2019 at 11:31 AM CUZA Frédéric wrote:
>
> Hi,
>
> Thank you all for your quick answer.
> I think that will solve our problem.
>
> This is what we came up with:
> rbd -c /etc/ceph/Oceph.conf --keyring /etc/ceph/Oceph.client.admin.keyring
> export rbd/disk_test - | rbd -c /etc/c
Hi,
Thank you all for your quick answer.
I think that will solve our problem.
This is what we came up with:
rbd -c /etc/ceph/Oceph.conf --keyring /etc/ceph/Oceph.client.admin.keyring
export rbd/disk_test - | rbd -c /etc/ceph/Nceph.conf --keyring
/etc/ceph/Nceph.client.admin.keyring import
On Wed, Jun 5, 2019 at 11:26 AM CUZA Frédéric wrote:
>
> Thank you all for your quick answer.
> I think that will solve our problem.
You might have hijacked another thread?
> This is what we came up with:
> rbd -c /etc/ceph/Oceph.conf --keyring /etc/ceph/Oceph.client.admin.keyring
> export
Thank you all for your quick answer.
I think that will solve our problem.
This is what we came up with:
rbd -c /etc/ceph/Oceph.conf --keyring /etc/ceph/Oceph.client.admin.keyring
export rbd/disk_test - | rbd -c /etc/ceph/Nceph.conf --keyring
/etc/ceph/Nceph.client.admin.keyring import - rbd
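To the earlier question about users: each side of the pipe authenticates
independently, so the export and the import can run as different cephx users
by giving each rbd invocation its own --id and --keyring, e.g. (user names
are placeholders):
rbd -c /etc/ceph/Oceph.conf --id backup --keyring /etc/ceph/Oceph.client.backup.keyring export rbd/disk_test - \
  | rbd -c /etc/ceph/Nceph.conf --id admin --keyring /etc/ceph/Nceph.client.admin.keyring import - rbd/disk_test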
We have this, if it is any help
write-4k-seq: (groupid=0, jobs=1): err= 0: pid=1446964: Fri May 24 19:41:48 2019
write: IOPS=760, BW=3042KiB/s (3115kB/s)(535MiB/180001msec)
slat (usec): min=7, max=234, avg=16.59, stdev=13.59
clat (usec): min=786, max=167483, avg=1295.60, stdev=1933.25
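That looks like a queue-depth-1 sequential 4k write over 180 seconds
(760 IOPS x 4 KiB = 3040 KiB/s, matching the reported bandwidth). The fio
invocation would have been something close to the following; the ioengine,
target device and direct flag are guesses:
fio --name=write-4k-seq --rw=write --bs=4k --iodepth=1 --ioengine=libaio \
    --direct=1 --time_based --runtime=180 --filename=/dev/rbd0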
It works okay. You need a ceph.conf and a generic radosgw cephx key. That's
it.
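A minimal sketch of that, assuming a dedicated client.rgw.gw1 user and
bind-mounting the two files into the container (the image and names are
placeholders, not a recommendation):
ceph auth get-or-create client.rgw.gw1 mon 'allow rw' osd 'allow rwx' \
  -o /etc/ceph/ceph.client.rgw.gw1.keyring
docker run -d --net=host \
  -v /etc/ceph/ceph.conf:/etc/ceph/ceph.conf:ro \
  -v /etc/ceph/ceph.client.rgw.gw1.keyring:/etc/ceph/ceph.client.rgw.gw1.keyring:ro \
  ceph/daemon-base \
  radosgw -f --name client.rgw.gw1 --keyring /etc/ceph/ceph.client.rgw.gw1.keyring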
On Wed, Jun 5, 2019, 5:37 AM Marc Roos wrote:
>
>
> Has anyone put the radosgw in a container? What files do I need to put
> in the sandbox directory? Are there other things I should consider?
>
>
>
Ok, average network latency from VM to OSDs is ~0.4ms.
It's rather bad; you can improve the latency by 0.3ms just by upgrading
the network.
Single-threaded performance is ~500-600 IOPS, or an average latency of 1.6ms.
Is that comparable to what others are seeing?
Good "reference" numbers are 0.5ms
What is wrong with:
service ceph-mgr@c stop
systemctl disable ceph-mgr@c
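If the goal is to strip the node completely, the rest is roughly this
(ids and paths are the defaults, adjust to your setup):
systemctl stop ceph-mgr@c
systemctl disable ceph-mgr@c
ceph auth del mgr.c                 # drop the daemon's cephx key
rm -rf /var/lib/ceph/mgr/ceph-c     # remove its data directory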
-Original Message-
From: Vandeir Eduardo [mailto:vandeir.edua...@gmail.com]
Sent: woensdag 5 juni 2019 16:44
To: ceph-users
Subject: [ceph-users] How to remove ceph-mgr from a node
Hi guys,
sorry, but I'm not f
Hi guys,
sorry, but I'm not finding in the documentation how to remove ceph-mgr
from a node. Is it possible?
Thanks.
Has anyone put the radosgw in a container? What files do I need to put
in the sandbox directory? Are there other things I should consider?
http://docs.ceph.com/docs/master/rados/configuration/bluestore-config-ref/#devices
"The BlueStore journal will always be placed on the fastest device
available, so using a DB device will provide the same benefit that the
WAL device
would while *also* allowing additional metadata to be stored there
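In practice that means an OSD created with only a DB device gets the WAL
placed there automatically, e.g. (device paths are placeholders):
ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1
A separate --block.wal only makes sense if there is a third, even faster
device available.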