This is what I get:
:/# ceph tell mds.kavehome-mgto-pro-fs01 heap dump
2018-07-24 09:05:19.350720 7fc562ffd700 0 client.145254
On 24.07.2018 07:02, Satish Patel wrote:
> My 5 node ceph cluster is ready for production; now I am looking for
> a good monitoring tool (open source). What are the majority of folks using in
> their production?
Some people already use Prometheus and the exporter from the Ceph Mgr.
Some use more traditiona
Hello, Cephers,
after trying different repair approaches I am out of ideas on how to repair an
inconsistent PG. I hope someone's sharp eye will notice what I overlooked.
Some info about cluster:
Centos 7.4
Jewel 10.2.10
Pool size 2 (yes, I know it's a very bad choice)
Pool with inconsistent PG: .rgw.bu
Thank you for the help, it is exactly what I need.
Regards
Mateusz
From: Jason Dillaman [mailto:jdill...@redhat.com]
Sent: Wednesday, July 18, 2018 1:28 PM
To: Mateusz Skala (UST, POL)
Cc: dillaman ; ceph-users
Subject: Re: [ceph-users] Read/write statistics per RBD image
Yup, on the host run
Just use collectd to start with. That is easiest with InfluxDB. However,
do not expect too much support for InfluxDB.
-Original Message-
From: Satish Patel [mailto:satish@gmail.com]
Sent: Tuesday, July 24, 2018 7:02
To: ceph-users
Subject: [ceph-users] ceph cluster monitoring to
I mean:
ceph tell mds.x heap start_profiler
... wait for some time
ceph tell mds.x heap stop_profiler
pprof --text /usr/bin/ceph-mds
/var/log/ceph/ceph-mds.x.profile..heap
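For a quick check without running the profiler at all, the MDS heap statistics can also be dumped directly (a minimal sketch; replace mds.x with your MDS name):

ceph tell mds.x heap stats
ceph tell mds.x heap release

The profile files written by start_profiler are typically numbered, e.g. /var/log/ceph/ceph-mds.x.profile.0001.heap.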
On Tue, Jul 24, 2018 at 3:18 PM Daniel Carrasco wrote:
>
> This is what I get:
>
> --
Hello,
How much time is necessary? Because it is a production environment, the memory
profiler plus the low cache size (because of the problem) causes a lot of CPU usage
on the OSD and MDS, which makes it fail while the profiler is running. Is there
any problem if it is done at a low-traffic time? (less usage and maybe it
OK, thank you very much. I will try to contact them and update the
problem. In the meantime, I will try to debug it by just setting up one
mon and one osd. Thanks again.
On Mon, Jul 23, 2018 at 3:49 PM John Hearns wrote:
> Will, looking at the logs which you sent, the connection canno
On Tue, Jul 24, 2018 at 4:59 PM Daniel Carrasco wrote:
>
> Hello,
>
> How much time is necessary? Because it is a production environment, the memory
> profiler plus the low cache size (because of the problem) causes a lot of CPU usage on
> the OSD and MDS, which makes it fail while the profiler is running. Is there a
Hi all,
The same server did it again with the same CATERR exactly 3 days after
rebooting (+/- 30 seconds).
If it weren't for the exact +3 days, I would think it was a random event.
But exactly 3 days after reboot does not seem random.
Nothing I added got me more information (mcelog, pstore, BMC vid
Hello again,
How can I determine the $cctid for a specific rbd name? Or is there any good way to
map an admin socket to an rbd?
Regards
Mateusz
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Mateusz Skala (UST, POL)
Sent: Tuesday, July 24, 2018 9:49 AM
To: dilla...@redhat.co
Hi,
I'm having an issue with enabling the mgr balancer plugin, probably because of a
misunderstanding of the fundamentals of the CRUSH algorithm. I hope the list
can help, thanks.
I've enabled the plugin itself and automatic balancing. The mode is set to
crush-compat and my minimum compatible cli
On 07/24/2018 12:58 PM, Martin Overgaard Hansen wrote:
> Creating a compat weight set manually with 'ceph osd crush weight-set
> create-compat' gives me: Error EPERM: crush map contains one or more
> bucket(s) that are not straw2
>
> What changes do I need to implement to get the mgr balancer plug
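A rough sketch of the usual remedy, assuming all clients are new enough to understand straw2 buckets (converting the buckets may move a small amount of data):

ceph osd crush set-all-straw-buckets-to-straw2
ceph osd crush weight-set create-compat
ceph balancer mode crush-compat
ceph balancer on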
I have a Luminous Ceph cluster that uses just rgw. We want to turn it
into a multi-site installation. Are there instructions online for this?
I've been unable to find them.
-R
On Tue, Jul 24, 2018 at 4:56 PM, Robert Stanford
wrote:
>
> I have a Luminous Ceph cluster that uses just rgw. We want to turn it
> into a multi-site installation. Are there instructions online for this?
> I've been unable to find them.
>
> -R
>
>
http://docs.ceph.com/docs/luminous/radosgw/mul
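Not from the thread, but as a rough sketch of what that guide boils down to for the first (master) zone -- the realm name and endpoint below are placeholders:

radosgw-admin realm create --rgw-realm=myrealm --default
radosgw-admin zonegroup modify --rgw-zonegroup=default --master --default --endpoints=http://rgw-host:8080
radosgw-admin zone modify --rgw-zone=default --master --default --endpoints=http://rgw-host:8080
radosgw-admin period update --commit

A secondary site then pulls the realm and period (radosgw-admin realm pull / period pull) and creates its own zone before committing a new period.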
Is there any way to safely switch the yum repo I am using from the CentOS
Storage repo to the official ceph repo for RPMs or should I just rebuild it?
Thanks,
-Drew
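One approach that usually avoids a rebuild, assuming the same Ceph version is published in both repositories (the repo file below is only a sketch; adjust release and arch as needed):

yum remove centos-release-ceph-luminous   # drops the Storage SIG repo definition
cat > /etc/yum.repos.d/ceph.repo <<'EOF'
[ceph]
name=Ceph packages
baseurl=https://download.ceph.com/rpm-luminous/el7/x86_64/
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
EOF
yum clean all
yum update ceph ceph-osd ceph-mon ceph-mds ceph-radosgw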
Hello,
We've got a 3 node cluster online as part of an openstack ansible installation:
Ceph Mimic
3x OSD nodes:
3x 800GB Intel S3710 SSD OSDs using whole-device BlueStore (node 3
has 4, for 10 total)
40Gbit networking
96GB Ram
2x E5-2680v2 CPU
Compute nodes are similar but with 192GB ram
Perfo
Hi,
On which node should we add the "admin socket" parameter to ceph.conf? On
the MON, the OSD, or on which node?
One of my clients (which is the Ansible node in this case) has the
following:
[client.libvirt]
admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok # must be
writable by QEMU
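Once the socket shows up it can be queried on that same node, for example (the socket filename below is illustrative; use whatever .asok file actually appears under /var/run/ceph/):

ceph --admin-daemon /var/run/ceph/ceph-client.libvirt.12345.140000000000000.asok perf dump
ceph --admin-daemon /var/run/ceph/ceph-client.libvirt.12345.140000000000000.asok config show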
On Mon, Jul 23, 2018 at 2:33 PM, Satish Patel wrote:
> Alfredo,
>
> Thanks, I think i should go with LVM then :)
>
> I have a question here. I have 4 physical SSDs per server; for some reason I
> am using ceph-ansible version 3.0.8, which doesn't create the LVM volume
> itself, so I have to create the LVM volume m
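For reference, manually preparing one of those SSDs usually comes down to plain LVM commands, roughly as follows (device and VG/LV names are placeholders):

pvcreate /dev/sdb
vgcreate ceph-block-0 /dev/sdb
lvcreate -l 100%FREE -n block-0 ceph-block-0

The resulting ceph-block-0/block-0 can then be listed in ceph-ansible's lvm_volumes, or consumed directly with 'ceph-volume lvm create --bluestore --data ceph-block-0/block-0'.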
On Tue, Jul 24, 2018 at 6:51 AM Mateusz Skala (UST, POL) <
mateusz.sk...@ust-global.com> wrote:
> Hello again,
>
> How can I determine the $cctid for a specific rbd name? Or is there any good way
> to map an admin socket to an rbd?
>
The $cctid is effectively pseudo-random (it's a memory location within th
If one VM is using multiple rbds, then using just $pid is not enough. The socket
shows statistics for only one (the first) rbd.
Regards
Mateusz
From: Jason Dillaman [mailto:jdill...@redhat.com]
Sent: Tuesday, July 24, 2018 2:39 PM
To: Mateusz Skala (UST, POL)
Cc: ceph-users
Subject: Re: [ceph-users] Re
I did that, but I am using ceph-ansible version 3.0.8, which doesn't
support auto-creation of LVM :( I think version 3.1 has LVM support.
For some reason I have to stick to 3.0.8, so I need to create it manually.
On Tue, Jul 24, 2018 at 8:34 AM, Alfredo Deza wrote:
> On Mon, Jul 23, 2018 at 2:
On Tue, Jul 24, 2018 at 8:48 AM Mateusz Skala (UST, POL) <
mateusz.sk...@ust-global.com> wrote:
> If one VM is using multiple rbds, then using just $pid is not enough.
> The socket shows statistics for only one (the first) rbd.
>
Yup, that's why $cctid was added. In your case, you would need to scrape all
of
On 07/24/2018 12:51 PM, Mateusz Skala (UST, POL) wrote:
> Hello again,
>
> How can I determine the $cctid for a specific rbd name? Or is there any good
> way to map an admin socket to an rbd?
>
Yes, check the output of 'perf dump', you can fetch the RBD image
information from that JSON output.
Wido
>
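A rough way to do that mapping from the shell (the socket path pattern is illustrative; jq is only used to pretty-print the keys):

for sock in /var/run/ceph/ceph-client.*.asok; do
  echo "$sock"
  ceph --admin-daemon "$sock" perf dump | jq -r 'keys[] | select(startswith("librbd-"))'
done

The librbd sections in 'perf dump' are named after the pool and image, which is enough to tell the sockets apart.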
Hi,
I read the 12.2.7 upgrade notes, and set "osd skip data digest = true" before I
started upgrading from 12.2.6 on my Bluestore-only cluster.
As far as I can tell, my OSDs all got restarted during the upgrade and all got
the option enabled:
This is what I see for a specific OSD taken at rand
Hi,
On 24/07/18 06:02, Satish Patel wrote:
> My 5 node ceph cluster is ready for production; now I am looking for
> a good monitoring tool (open source). What are the majority of folks using in
> their production?
This does come up from time to time, so it's worth checking the list
archives.
We use collec
OK, it would be a nice feature if we could get the name of the rbd from the admin
socket; for now I'm doing it the way you wrote.
Thanks for help,
Mateusz
From: Jason Dillaman [mailto:jdill...@redhat.com]
Sent: Tuesday, July 24, 2018 2:52 PM
To: Mateusz Skala (UST, POL)
Cc: ceph-users
Subject: Re: [ceph
Oh my...
Tried a yum upgrade in writeback mode and noticed this in the syslog on the VM:
Jul 24 15:16:57 dev7240 kernel: end_request: I/O error, dev vda, sector 1896024
Jul 24 15:16:57 dev7240 kernel: end_request: I/O error, dev vda, sector 1896064
Jul 24 15:16:57 dev7240 kernel: end_request: I/O erro
You must add this on the node that you are running the VMs on, and [client.libvirt] is
the name of the user configured in the VM. Additionally, if you run VMs as a standard
user, this user should have write permissions on the /var/run/ceph/ directory.
Regards,
Mateusz
-Original Message-
From: ceph-users [mailto:ceph-
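A minimal sketch of that permissions setup, assuming the VMs run as the qemu user (adjust the user/group to whatever libvirt actually uses on your host):

mkdir -p /var/run/ceph
chown qemu:qemu /var/run/ceph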
Satish,
I'm currently working on Monasca's roles for openstack-ansible.
We have plugins that monitor Ceph as well, and I use them in production. Below
you can see an example:
https://imgur.com/a/6l6Q2K6
On Tue, Jul 24, 2018 at 02:02, Satish Patel
wrote:
> My 5 node ceph cluster is ready
Hi all,
After the 12.2.6 release went out, we've been thinking on better ways
to remove a version from our repositories to prevent users from
upgrading/installing a known bad release.
The way our repos are structured today means every single version of
the release is included in the repository. T
`ceph versions` -- are you sure all the OSDs are running 12.2.7?
osd_skip_data_digest = true is supposed to skip any crc checks during reads.
But maybe the cache tiering IO path is different and checks the crc anyway?
-- dan
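For example, both can be checked quickly (osd.0 is just an example; the second command is run on that OSD's host):

ceph versions
ceph daemon osd.0 config get osd_skip_data_digest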
On Tue, Jul 24, 2018 at 3:01 PM SCHAER Frederic wrote:
>
> Hi,
>
>
>
On 07/24/2018 07:02 AM, Satish Patel wrote:
> My 5 node ceph cluster is ready for production; now I am looking for
> a good monitoring tool (open source). What are the majority of folks using in
> their production?
There are several, using Prometheus with the Ceph Exporter Manager
module is a popular choic
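For example, the manager-module route is usually just this (9283 is the module's default scrape port):

ceph mgr module enable prometheus

Prometheus can then scrape http://<active-mgr-host>:9283/metrics.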
On Tue, Jul 24, 2018 at 4:38 PM Alfredo Deza wrote:
>
> Hi all,
>
> After the 12.2.6 release went out, we've been thinking on better ways
> to remove a version from our repositories to prevent users from
> upgrading/installing a known bad release.
>
> The way our repos are structured today means e
On Tue, Jul 24, 2018 at 10:54 AM, Dan van der Ster wrote:
> On Tue, Jul 24, 2018 at 4:38 PM Alfredo Deza wrote:
>>
>> Hi all,
>>
>> After the 12.2.6 release went out, we've been thinking on better ways
>> to remove a version from our repositories to prevent users from
>> upgrading/installing a kn
On Tue, Jul 24, 2018 at 8:54 AM, Dan van der Ster wrote:
> On Tue, Jul 24, 2018 at 4:38 PM Alfredo Deza wrote:
>>
>> Hi all,
>>
>> After the 12.2.6 release went out, we've been thinking on better ways
>> to remove a version from our repositories to prevent users from
>> upgrading/installing a kno
On Tue, Jul 24, 2018 at 4:59 PM Alfredo Deza wrote:
>
> On Tue, Jul 24, 2018 at 10:54 AM, Dan van der Ster
> wrote:
> > On Tue, Jul 24, 2018 at 4:38 PM Alfredo Deza wrote:
> >>
> >> Hi all,
> >>
> >> After the 12.2.6 release went out, we've been thinking on better ways
> >> to remove a version
On Tue, Jul 24, 2018 at 5:08 PM Dan van der Ster wrote:
>
> On Tue, Jul 24, 2018 at 4:59 PM Alfredo Deza wrote:
> >
> > On Tue, Jul 24, 2018 at 10:54 AM, Dan van der Ster
> > wrote:
> > > On Tue, Jul 24, 2018 at 4:38 PM Alfredo Deza wrote:
> > >>
> > >> Hi all,
> > >>
> > >> After the 12.2.6 r
It would be nice if ceph-deploy could select the version as well as the
release, e.g. --release luminous --version 12.2.7.
Otherwise, I deploy the newest release to a new OSD server and then have to
upgrade the rest of the cluster (unless the cluster is on a previous
release at the highest level).
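Until something like that exists, a hedged workaround on the yum side is to pin the package version explicitly when deploying (the version string is just an example):

yum install ceph-12.2.7 ceph-osd-12.2.7 ceph-mon-12.2.7 ceph-mds-12.2.7
yum install yum-plugin-versionlock && yum versionlock 'ceph-*'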
On Tue, Jul 24, 2018 at 1:19 PM, Brent Kennedy wrote:
> It would be nice if ceph-deploy could select the version as well as the
> release, e.g. --release luminous --version 12.2.7.
>
> Otherwise, I deploy the newest release to a new OSD server and then have to
> upgrade the rest of the cluster (unles
Hello,
I've run the profiler for about 5-6 minutes and this is what I've got:
-