As you can see, we are running 12.2.12 luminous.
I could not find out whether this fix has been backported to luminous.
Which version of luminous fixes this issue? Or is it fixed at all for
luminous?
On 20.07.19 at 03:25, Alex Litvak wrote:
> The issue should have been resolved by backport
> ht
On Mon, Jul 29, 2019 at 9:54 PM Dan van der Ster wrote:
>
> On Mon, Jul 29, 2019 at 3:47 PM Yan, Zheng wrote:
> >
> > On Mon, Jul 29, 2019 at 9:13 PM Dan van der Ster
> > wrote:
> > >
> > > On Mon, Jul 29, 2019 at 2:52 PM Yan, Zheng wrote:
> > > >
> > > > On Fri, Jul 26, 2019 at 4:45 PM Dan va
Good news! The CFP deadline has been extended to August 11, in case
anyone missed out.
On 7/25/19 9:21 PM, Tim Serong wrote:
> Hi All,
>
> Just a reminder, there's only a few days left to submit talks for this
> most excellent conference; the CFP is open until Sunday 28 July Anywhere
> on Earth.
<<< binary blob of length 12 >>>",
"config-history/7/+mgr/mgr/dashboard/RGW_API_SECRET_KEY": "",
"config/mgr/mgr/dashboard/RGW_API_ACCESS_KEY": "",
"config/mgr/mgr/dashboard/RGW_API_SECRET_KEY": "",
"config/mgr/mgr/dashboard/ssl": "false",
"config/mgr/mgr/devicehealth/enable_monitoring": "true",
"mgr/dashboard/accessdb_v1": "{\"version\": 1, \"users\": {\"ceph\":
{\"usernam
Hi,
When I get my ceph status, I do not understand the result:
ceph df detail
RAW STORAGE:
CLASS     SIZE        AVAIL      USED      RAW USED    %RAW USED
hdd       131 TiB     102 TiB    29 TiB    29 TiB          21.98
TOTAL     131 TiB     102 TiB    29 TiB    29 TiB          21.98
POO
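For what it's worth, %RAW USED is simply RAW USED divided by SIZE; a quick sanity check with the rounded figures above (assuming bc is available):
echo "scale=4; 29 / 131 * 100" | bc   # ~22.13, close to the reported 21.98 (the TiB values are rounded)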
Hello Jason,
I updated the ticket https://tracker.ceph.com/issues/40822
On 24.07.19 at 19:20, Jason Dillaman wrote:
> On Wed, Jul 24, 2019 at 12:47 PM Marc Schöchlin wrote:
>>
>> Testing with a 10.2.5 librbd/rbd-nbd is currently not that easy for me,
>> because the ceph apt source does not co
Same here,
Nautilus 14.2.2.
Evacuate one host and join another one at the same time and everything ends
up unbalanced.
Best
From: ceph-users On behalf of David
Herselman
Sent: Monday, 29 July 2019 11:31
To: ceph-users@lists.ceph.com
Subject: [ceph-users] Ceph Nautilus - can't balan
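A couple of read-only checks that usually help narrow this kind of thing down (a sketch, not specific to this cluster):
ceph balancer status   # shows whether the balancer is active and which mode it is in
ceph osd df tree       # per-OSD utilisation, to see how skewed the distribution actually is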
On Sat, Jul 27, 2019 at 06:08:58PM +0530, Ajitha Robert wrote:
> 1) Will there be any folder related to rbd-mirroring in /var/lib/ceph?
no
> 2) Is ceph rbd-mirror authentication mandatory?
no. But why are you asking?
> 3) Whenever I create any cinder volume loaded with a glance image I ge
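For context, a minimal sketch of the usual rbd-mirror setup; pool and site names are placeholders, see the rbd-mirror docs for the full procedure:
# enable mirroring on the pool (mode can be 'pool' or 'image')
rbd mirror pool enable <pool> pool
# optional but typical: a dedicated cephx user for the rbd-mirror daemon
ceph auth get-or-create client.rbd-mirror.<site> mon 'profile rbd-mirror' osd 'profile rbd'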
Thanks!
On Mon, Jul 29, 2019 at 5:26 PM Paul Emmerich
wrote:
> yes, that's good enough for "upmap".
>
> Mapping client features to versions is somewhat unreliable by design: not
> every new release adds a new feature, some features are backported to older
> releases, kernel clients are a comple
yes, that's good enough for "upmap".
Mapping client features to versions is somewhat unreliable by design: not
every new release adds a new feature, some features are backported to older
releases, kernel clients are a completely independent implementation not
directly mappable to a Ceph release.
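For completeness, the usual follow-up once all connected clients report luminous-or-newer features; a sketch of the standard balancer steps:
ceph osd set-require-min-compat-client luminous   # refuses if pre-luminous clients are still connected
ceph balancer mode upmap
ceph balancer on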
Your results are okay-ish. The general rule is that it's hard to achieve
read latencies below 0.5 ms and write latencies below 1 ms with Ceph, **no
matter what drives or network you use**. 10000 iops with one thread means
0.1 ms per request. It's just impossible with Ceph currently.
I've heard that some people ma
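The arithmetic behind that figure, for reference: with one thread (one outstanding request), latency is simply the inverse of iops, so
echo "scale=1; 1000 / 10000" | bc   # 0.1 ms per request at 10000 iops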
On Mon, Jul 29, 2019 at 3:47 PM Yan, Zheng wrote:
>
> On Mon, Jul 29, 2019 at 9:13 PM Dan van der Ster wrote:
> >
> > On Mon, Jul 29, 2019 at 2:52 PM Yan, Zheng wrote:
> > >
> > > On Fri, Jul 26, 2019 at 4:45 PM Dan van der Ster
> > > wrote:
> > > >
> > > > Hi all,
> > > >
> > > > Last night w
On Mon, Jul 29, 2019 at 9:13 PM Dan van der Ster wrote:
>
> On Mon, Jul 29, 2019 at 2:52 PM Yan, Zheng wrote:
> >
> > On Fri, Jul 26, 2019 at 4:45 PM Dan van der Ster
> > wrote:
> > >
> > > Hi all,
> > >
> > > Last night we had 60 ERRs like this:
> > >
> > > 2019-07-26 00:56:44.479240 7efc6cca1
On Mon, Jul 29, 2019 at 2:52 PM Yan, Zheng wrote:
>
> On Fri, Jul 26, 2019 at 4:45 PM Dan van der Ster wrote:
> >
> > Hi all,
> >
> > Last night we had 60 ERRs like this:
> >
> > 2019-07-26 00:56:44.479240 7efc6cca1700 0 mds.2.cache.dir(0x617)
> > _fetched badness: got (but i already had) [inod
On Fri, Jul 26, 2019 at 4:45 PM Dan van der Ster wrote:
>
> Hi all,
>
> Last night we had 60 ERRs like this:
>
> 2019-07-26 00:56:44.479240 7efc6cca1700 0 mds.2.cache.dir(0x617)
> _fetched badness: got (but i already had) [inode 0x
> [...2,head] ~mds2/stray1/10006289992 auth v14438219972 dirtypa
I have a ceph cluster where mon, osd and mgr are running ceph luminous.
If I run ceph features [*], I see that clients are grouped into 2 sets:
- the first one appears as luminous with features 0x3ffddff8eea4fffb
- the second one appears as luminous too, but with
features 0x3ffddff8eea
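One way to see which clients fall into which of those two groups (assuming admin-socket access on a monitor host; the mon id is a placeholder):
ceph daemon mon.<id> sessions   # lists connected clients with their addresses and feature bits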
On 24.07.19 09:18, nokia ceph wrote:
> Please let us know whether disabling bluestore warn on legacy statfs is the
> only option for upgraded clusters.
You can repair the OSD with
systemctl stop ceph-osd@$OSDID
ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-$OSDID
systemctl start ceph-osd@$OSDID
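If repairing every OSD straight away is not practical, the warning itself can also be silenced cluster-wide; a sketch assuming Nautilus-style centralized config:
ceph config set global bluestore_warn_on_legacy_statfs false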
Christian writes:
> Hi,
>
> I found this (rgw s3 auth order = local, external) on the web:
> https://opendev.org/openstack/charm-ceph-radosgw/commit/3e54b570b1124354704bd5c35c93dce6d260a479
>
> Which is seemingly exactly what I need for circumventing higher
> latency when switching on keystone au
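For reference, a minimal ceph.conf sketch of how that option sits next to keystone auth; the section name and keystone settings are placeholders, not Christian's actual config:
[client.rgw.<gateway>]
rgw s3 auth use keystone = true
rgw keystone url = https://<keystone-host>:5000
rgw s3 auth order = local, external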