Hi Stefan,
mds.mds1 [WRN] replayed op client.15327973:15585315,15585103 used ino
0x19918de but session next is 0x1873b8b
Nothing of importance is logged in the MDS ("debug_mds_log": "1/5").
What does this warning message mean / indicate?
We face these messages on a regular basis. The
On Wed, Sep 12, 2018 at 2:59 PM Stefan Kooman wrote:
>
> Hi,
>
> Once in a while, today a bit more often, the MDS is logging the
> following:
>
> mds.mds1 [WRN] replayed op client.15327973:15585315,15585103 used ino
> 0x19918de but session next is 0x1873b8b
>
> Nothing of importance is lo
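With "debug_mds_log" at "1/5" very little will be captured around that warning. A hedged sketch of temporarily raising the MDS verbosity (the daemon name mds.mds1 is taken from the thread; level 10 is only an illustrative choice):

ceph daemon mds.mds1 config set debug_mds 10
ceph daemon mds.mds1 config set debug_mds_log 10
# reproduce / wait for the warning, then drop back to the previous levels
ceph daemon mds.mds1 config set debug_mds "1/5"
ceph daemon mds.mds1 config set debug_mds_log "1/5"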
Hi List,
TL;DR: what application types are compatible with each other concerning
Ceph Pools?
I.e. is it safe to mix "RBD" pool with (some) native librados objects?
RBD / RGW / Cephfs all have their own pools. Since luminous release
there is this "application tag" to (somewhere in the future) pre
On Thu, Sep 13, 2018 at 9:03 AM Stefan Kooman wrote:
>
> Hi List,
>
> TL;DR: what application types are compatible with each other concerning
> Ceph Pools?
>
> I.e. is it safe to mix "RBD" pool with (some) native librados objects?
>
> RBD / RGW / Cephfs all have their own pools. Since luminous rel
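For reference, a small hedged example of how the application tag is set and read back since Luminous (the pool names "mypool" and "somepool" and the tag "myapp" are placeholders; "rbd", "rgw" and "cephfs" are the predefined names):

ceph osd pool application enable mypool rbd
ceph osd pool application get mypool
# an arbitrary tag can be used for a pool holding plain librados objects
ceph osd pool application enable somepool myapp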
Hi!
I want to use Prometheus+Grafana to monitor Ceph, and I found the URL below:
http://docs.ceph.com/docs/master/mgr/prometheus/
Then I downloaded the Ceph dashboard for Grafana:
https://grafana.com/dashboards/7056
It is so cool
But some metrics do not work for Ceph 13 (Mimic), like
"ceph_monit
Hi John,
Quoting John Spray (jsp...@redhat.com):
> On Wed, Sep 12, 2018 at 2:59 PM Stefan Kooman wrote:
>
> When replaying a journal (either on MDS startup or on a standby-replay
> MDS), the replayed file creation operations are being checked for
> consistency with the state of the replayed cli
On Thu, Sep 13, 2018 at 11:01 AM Stefan Kooman wrote:
>
> Hi John,
>
> Quoting John Spray (jsp...@redhat.com):
>
> > On Wed, Sep 12, 2018 at 2:59 PM Stefan Kooman wrote:
> >
> > When replaying a journal (either on MDS startup or on a standby-replay
> > MDS), the replayed file creation operations
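To look at the client session state that this consistency check is made against, the session list can be dumped on the MDS host (daemon name taken from the thread):

ceph daemon mds.mds1 session ls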
Update on the subject, warning, lengthy post but reproducible results and
workaround to get performance back to expected level.
One of the servers had a broken disk controller causing some performance issues
on this one host, FIO showed about half performance on some disks compared to
the other
On Thu, Sep 13, 2018 at 02:17:20PM +0200, Menno Zonneveld wrote:
> Update on the subject, warning, lengthy post but reproducible results and
> workaround to get performance back to expected level.
>
> One of the servers had a broken disk controller causing some performance
> issues on this one h
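For anyone who wants to reproduce the comparison at the RADOS level, a hedged example of this kind of benchmark (pool name, runtime and I/O sizes are placeholders):

rados bench -p testpool 60 write -b 4194304 -t 16 --no-cleanup
rados bench -p testpool 60 seq -t 16
rados -p testpool cleanup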
-Original message-
> From:Alwin Antreich
> Sent: Thursday 13th September 2018 14:41
> To: Menno Zonneveld
> Cc: ceph-users ; Marc Roos
>
> Subject: Re: [ceph-users] Rados performance inconsistencies, lower than
> expected performance
>
> > Am I doing something wrong? Did I run into so
I'm sure I'm not forgetting to free any buffers. I'm not even allocating
any heap memory in the example above.
On further investigation, the same issue *does* happen with the synchronous
read operation API. I erroneously said that the issue doesn't happen with
the synchronous API when what I meant
Dear list,
I am currently in the process of upgrading Proxmox 4/Jewel to
Proxmox5/Luminous.
I also have a new node to add to my Proxmox cluster.
What I plan to do is the following (from
https://pve.proxmox.com/wiki/Ceph_Jewel_to_Luminous):
* upgrade Jewel to Luminous
* let the "ceph osd c
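Not Proxmox-specific, but the usual Ceph-side checks once every daemon runs Luminous look roughly like this (a hedged sketch, not the full Proxmox procedure):

ceph versions                            # all daemons should report 12.2.x
ceph osd require-osd-release luminous    # only after all OSDs are upgraded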
On Thu, Sep 13, 2018 at 6:35 AM Daniel Goldbach wrote:
> I'm sure I'm not forgetting to free any buffers. I'm not even allocating
> any heap memory in the example above.
>
> On further investigation, the same issue *does* happen with the
> synchronous read operation API. I erroneously said that t
Yes I understand that. If you look at the example, the data buffer is stack
allocated and hence its memory is freed when the stack frame for readobj is
destroyed. Additionally, no leak occurs if I comment out the
rados_read_op_operate line. This is a problem with librados, not with my
example.
O
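For readers following along, here is a minimal, hedged reconstruction of the pattern being described (not the poster's reproducer; the pool/object names are placeholders and the connection boilerplate is added here only to make it compile standalone):

#include <rados/librados.h>
#include <stdio.h>

static int readobj(rados_ioctx_t io)
{
    char buf[4096];            /* stack-allocated read buffer, no heap use here */
    size_t bytes_read = 0;
    int prval = 0;

    rados_read_op_t op = rados_create_read_op();
    rados_read_op_read(op, 0, sizeof(buf), buf, &bytes_read, &prval);

    /* the reported growth is said to disappear when this call is skipped */
    int ret = rados_read_op_operate(op, io, "myobject", 0);

    rados_release_read_op(op);
    if (ret < 0 || prval < 0)
        return ret < 0 ? ret : prval;
    printf("read %zu bytes\n", bytes_read);
    return 0;
}

int main(void)
{
    rados_t cluster;
    rados_ioctx_t io;

    if (rados_create(&cluster, NULL) < 0 ||
        rados_conf_read_file(cluster, NULL) < 0 ||
        rados_connect(cluster) < 0)
        return 1;
    if (rados_ioctx_create(cluster, "mypool", &io) < 0) {
        rados_shutdown(cluster);
        return 1;
    }

    /* call readobj() repeatedly while watching the process RSS */
    for (int i = 0; i < 100000; i++)
        readobj(io);

    rados_ioctx_destroy(io);
    rados_shutdown(cluster);
    return 0;
}

Running something like the above while watching the process RSS is the kind of check that should show whether the growth really comes from the read-op path.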
On 2018-09-12 19:49:16-07:00 Jason Dillaman wrote:
On Wed, Sep 12, 2018 at 10:15 PM wrote:
>
> On 2018-09-12 17:35:16-07:00 Jason Dillaman wrote:
>
>
> Any chance you know the LBA or byte offset of the corruption so I can
> compare it against the log?
>
On Thu, Sep 13, 2018 at 1:54 PM wrote:
>
> On 2018-09-12 19:49:16-07:00 Jason Dillaman wrote:
>
>
> On Wed, Sep 12, 2018 at 10:15 PM wrote:
> >
> > On 2018-09-12 17:35:16-07:00 Jason Dillaman wrote:
> >
> >
> > Any chance you know the LBA or byte offset of
Hi all,
After upgrading from 12.2.7 to 12.2.8 the standby mgr instances in my cluster
stopped sending beacons.
The service starts and everything seems to work just fine, but after a period
of time the mgr disappears.
All of my three mgr daemons are running.
[root@ceph01 ~]# ceph mgr dump
{
Hi Hervé,
No answer from me, but just to say that I have exactly the same upgrade
path ahead of me. :-)
Please report here any tips, tricks, or things you encountered doing the
upgrades. It could potentially save us a lot of time. :-)
Thanks!
MJ
On 09/13/2018 05:23 PM, Hervé Ballans wrote:
I have a staging cluster with 4 HDDs and an SSD in each host. I have an EC
profile that specifically chooses HDDs for placement. Also several Replica
pools that write to either HDD or SSD. This has all worked well for a
while. When I updated the Tunables to Jewel on the cluster, all of a
sudden t
I'm now following up to my earlier message regarding data migration from
old to new hardware in our ceph cluster. As part of this we wanted to
move to device-class-based crush rules. For the replicated pools the
directions for this were straightforward; for our EC pool, it wasn't so
clear, but
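For the archives, a hedged sketch of what a device-class-aware EC setup can look like since Luminous (profile, rule and pool names as well as k/m are placeholders):

ceph osd erasure-code-profile set ec-hdd k=4 m=2 crush-device-class=hdd crush-failure-domain=host
ceph osd crush rule create-erasure ec-hdd-rule ec-hdd
ceph osd pool set my-ec-pool crush_rule ec-hdd-rule

Note that an existing pool's EC profile (k/m) cannot be changed, but pointing the pool at a new, class-aware crush rule works and triggers the expected data movement.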
Hi ceph-{maintainers,users,developers},
Recently, I ran into an issue[0] which popped up when we build Ceph on
CentOS 7.5 but test it on CentOS 7.4. As we know, the gperftools-libs
package provides the tcmalloc allocator shared library, but CentOS 7.4
and CentOS 7.5 ship different versions of gper
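A quick hedged way to check which gperftools/tcmalloc build a host actually ships and what the OSD binary links against (paths assume a standard CentOS RPM install):

rpm -q gperftools-libs
ldd /usr/bin/ceph-osd | grep tcmalloc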
Hi,
I have a Ceph cluster of version 12.2.5 on CentOS 7.
I created 3 pools: 'rbd' for RBD storage, as well as 'cephfs_data'
and 'cephfs_meta' for CephFS. CephFS is used for backups via
rsync and for mounting volumes with Docker.
The size of the backup files is 3.5T. Besides, Docker uses less than
60G spac
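When comparing what rsync and Docker wrote against what the cluster reports, per-pool usage is the first thing to look at (standard commands, nothing cluster-specific assumed):

ceph df detail
rados df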