Re: [ceph-users] sharing single SSD across multiple HD based OSDs

2019-12-10 Thread Daniel Sung
The way I did this was to use: ceph-volume lvm batch /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf etc., where you just list all of the block devices you want to use in a group. It will automatically determine which devices are SSDs and then automatically partition it for you and share it am
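
For reference, a minimal sketch of that workflow (device paths are placeholders; nothing is written until you confirm):

# Preview what ceph-volume would do with a mixed set of HDDs and one SSD
# (the SSD is detected by its rotational flag and carved into block.db LVs):
ceph-volume lvm batch --report /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

# Apply the same layout once the report looks right:
ceph-volume lvm batch /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf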

[ceph-users] Ceph-mgr :: Grafana + Telegraf / InfluxDB metrics format

2019-12-10 Thread Miroslav Kalina
Hello guys, is there anyone using the Telegraf / InfluxDB metrics exporter with Grafana dashboards? I am asking because I was unable to find any existing Grafana dashboards based on InfluxDB. I am having a hard time creating the graphs I want to see. Metrics are exported in a way that every s

[ceph-users] Pool Max Avail and Ceph Dashboard Pool Usage on Nautilus giving different percentages

2019-12-10 Thread David Majchrzak, ODERLAND Webbhotell AB
Hi! While browsing /#/pool in the Nautilus ceph dashboard I noticed it said 93% used on the single pool we have (3x replica). ceph df detail however shows 81% used on the pool and 67% raw usage. # ceph df detail RAW STORAGE: CLASS SIZE AVAIL USED RAW USED %RAW USED
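
A quick way to line up the numbers being compared here (a sketch; the dashboard reads the same mgr data, and the pool name below is a placeholder):

# Cluster-wide and per-pool usage, including MAX AVAIL per pool:
ceph df detail
# Per-OSD fill levels; the most-full OSDs constrain a pool's MAX AVAIL:
ceph osd df
# Replication factor used when translating stored data into raw usage:
ceph osd pool get <poolname> size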

Re: [ceph-users] sharing single SSD across multiple HD based OSDs

2019-12-10 Thread Philip Brown
Interesting. What did the partitioning look like? - Original Message - From: "Daniel Sung" To: "Nathan Fish" Cc: "Philip Brown" , "ceph-users" Sent: Tuesday, December 10, 2019 1:21:36 AM Subject: Re: [ceph-users] sharing single SSD across multiple HD based OSDs The way I did this was t

Re: [ceph-users] Ceph-mgr :: Grafana + Telegraf / InfluxDB metrics format

2019-12-10 Thread Marc Roos
>I am having a hard time creating the graphs I want to see. Metrics are exported in a way that every single one is stored in a separate series in Influx like: > >> ceph_pool_stats,cluster=ceph1,metric=read value=1234 15506589110 >> ceph_pool_stats,cluster=ceph1,metric=write value=1234
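
If it helps, a hedged example of querying those per-metric series with the InfluxDB 1.x CLI (the database name and tag values are assumptions based on the lines quoted above):

influx -database 'telegraf' -execute \
  "SELECT last(value) FROM ceph_pool_stats WHERE cluster = 'ceph1' AND metric = 'read'"
# Each metric lives in its own series, selected via the 'metric' tag, so a
# Grafana panel typically needs one query (or one WHERE clause) per metric.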

Re: [ceph-users] sharing single SSD across multiple HD based OSDs

2019-12-10 Thread Daniel Sung
It just uses LVM to create a bunch of LVs. It doesn't actually create separate partitions on the block devices. You can run the command and it will give you a preview of what it will do and ask for confirmation. On Tue, 10 Dec 2019 at 13:36, Philip Brown wrote: > Interesting. What did the partit

Re: [ceph-users] sharing single SSD across multiple HD based OSDs

2019-12-10 Thread Marc Roos
Just also a bit curious. So it just creates a PV on sda and no partitioning is done on sda? -Original Message- From: Daniel Sung [mailto:daniel.s...@quadraturecapital.com] Sent: Tuesday, 10 December 2019 14:40 To: Philip Brown Cc: ceph-users Subject: Re: [ceph-users] sharing single SSD acr
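
A sketch of how to verify that on a node, using standard LVM and ceph-volume inspection commands (no partition table should show up on the disks):

# Physical volumes: the whole disk (e.g. /dev/sda) is used as a PV directly:
pvs
# Logical volumes carved out of the SSD and HDD volume groups:
lvs -o lv_name,vg_name,lv_size,devices
# ceph-volume's own view of which LV backs which OSD and block.db:
ceph-volume lvm list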

Re: [ceph-users] sharing single SSD across multiple HD based OSDs

2019-12-10 Thread Philip Brown
oh very nice! I tried it out with the "batch --report" option, using 1 SSD and 3 HDDs (and ceph nautilus). It gave me this: (snipped a bit) Solid State VG: Targets: block.db Total size: 110.00 GB Total LVs: 3 Size per LV: 36.67 GB Devices: /dev/sd

Re: [ceph-users] best pool usage for vmware backing

2019-12-10 Thread Heðin Ejdesgaard Møller
Seen from vSphere, the tcmu-runner iGW behaves as a generic iSCSI target, with dataflow on 1 AO and n ANOs as "hot-standby" for each LUN. Load balancing is therefore limited to being between images if you use a tcmu-runner based deployment. Each LUN will have one AO (Active-Optimized) path and n n
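
For what it's worth, one way to see this from the ESXi side (a sketch; the device identifier is a placeholder):

# List the paths for one RBD-backed LUN; only one path should report as
# active (the AO gateway), the others show as active unoptimized (ANO):
esxcli storage nmp path list -d naa.6001405xxxxxxxxxxxxxxxxxxxxxxxx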

[ceph-users] Use telegraf/influx to detect problems is very difficult

2019-12-10 Thread Mario Giammarco
Hi, I enabled the telegraf and influx plugins for my ceph cluster. I would like to use influx/chronograf to detect anomalies: - osd down - monitor down - osd near full But it is very difficult/complicated to make simple queries because, for example, I have osd up and osd total but not an osd down metric.

Re: [ceph-users] PG Balancer Upmap mode not working

2019-12-10 Thread Richard Bade
> How is that possible? I don't know how much more proof I need to present that > there's a bug. I also think there's a bug in the balancer plugin as it seems to have stopped for me also. I'm on Luminous though, so not sure if that will be the same bug. The balancer used to work flawlessly, giving

[ceph-users] ceph-mon is blocked after shutting down and ip address changed

2019-12-10 Thread Cc君
ceph version 14.2.4 (75f4de193b3ea58512f204623e6c5a16e6c1e1ba) nautilus (stable) os: CentOS Linux release 7.7.1908 (Core) single node ceph cluster with 1 mon, 1 mgr, 1 mds, 1 rgw and 12 osds, but only cephfs is used. ceph -s is blocked after shutting down the machine (192.168.0.104), then ip add
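
If the monitor really did come back on a new IP, the usual recovery is the documented monmap-edit procedure. A minimal sketch (the mon ID, paths and address are placeholders; stop the mon first and keep a backup of its store):

# Extract the current monmap from the stopped mon's store and inspect it:
ceph-mon -i ceph-node1 --extract-monmap /tmp/monmap
monmaptool --print /tmp/monmap
# Replace the old address with the new one:
monmaptool --rm ceph-node1 /tmp/monmap
monmaptool --add ceph-node1 <new-ip>:6789 /tmp/monmap
# Inject the edited map back and start the mon again:
ceph-mon -i ceph-node1 --inject-monmap /tmp/monmap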

Re: [ceph-users] Use telegraf/influx to detect problems is very difficult

2019-12-10 Thread Konstantin Shalygin
But it is very difficult/complicated to make simple queries because, for example, I have osd up and osd total but not an osd down metric. To determine how many OSDs are down you don't need a special metric, because you already have the osd_up and osd_in metrics. Just use math. k ___
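
As an example of "just use math", a hedged InfluxQL sketch (the measurement and field names here, ceph_cluster_stats, num_osd and num_osd_up, are assumptions; adjust them to whatever your exporter actually writes):

influx -database 'ceph' -execute \
  "SELECT last(num_osd) - last(num_osd_up) AS osds_down FROM ceph_cluster_stats WHERE cluster = 'ceph1'"
# Anything greater than zero means at least one OSD is down and can drive an alert.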

[ceph-users] PG Balancer Upmap mode not working

2019-12-10 Thread Philippe D'Anjou
My full OSD list (also here as pastebin https://paste.ubuntu.com/p/XJ4Pjm92B5/ ) ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS 14 hdd 9.09470 1.0 9.1 TiB 6.9 TiB 6.8 TiB 71 KiB 18 GiB 2.2 TiB 75.34 1.04 69 up 19 hdd 9.09470

Re: [ceph-users] ceph-mon is blocked after shutting down and ip address changed

2019-12-10 Thread Stefan Kooman
Quoting Cc君 (o...@qq.com): > ceph version 14.2.4 (75f4de193b3ea58512f204623e6c5a16e6c1e1ba) nautilus > (stable) > > os: CentOS Linux release 7.7.1908 (Core) > single node ceph cluster with 1 mon, 1 mgr, 1 mds, 1 rgw and 12 osds, but > only cephfs is used. > ceph -s is blocked after shutting down

[ceph-users] Re: ceph-mon is blocked after shutting down and ip address changed

2019-12-10 Thread Cc君
Hello Gr. Stefan, 1. OSDs marked noout, nobackfill, norecover before shutting down: $ ceph osd set noout $ ceph osd set nobackfill $ ceph osd set norecover 2. [root@ceph-node1 ~]# systemctl status firewalld ● firewalld.service - firewalld - dynamic firewall daemon Loaded: loaded (/usr/li

[ceph-users] Re: Re: ceph-mon is blocked after shutting down and ip address changed

2019-12-10 Thread Cc君
It's my personal "production" cluster, by the way. Hello Gr. Stefan, 1. OSDs marked noout, nobackfill, norecover before shutting down: $ ceph osd set noout $ ceph osd set nobackfill $ ceph osd set norecover 2. [root@ceph-node1 ~]# systemctl status firewalld ● firewalld.service - firewall

Re: [ceph-users] Ceph-mgr :: Grafana + Telegraf / influxdb metrics format

2019-12-10 Thread Stefan Kooman
Quoting Miroslav Kalina (miroslav.kal...@livesport.eu): > Hello guys, > > is there anyone using Telegraf / InfluxDB metrics exporter with Grafana > dashboards? I am asking like that because I was unable to find any > existing Grafana dashboards based on InfluxDB. \o (telegraf) > I am having hard

Re: [ceph-users] Re: ceph-mon is blocked after shutting down and ip address changed

2019-12-10 Thread Stefan Kooman
Quoting Cc君 (o...@qq.com): > 4.[root@ceph-node1 ceph]# ceph -s > just blocked ... > error 111 after a few hours Is the daemon running? You can check for the process to be alive in /var/run/ceph/ceph-mon.$hostname.asok If so ... then query the monitor for its status: ceph daemon mon.$hostname quo
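
For completeness, a sketch of the admin-socket check described above (run on the monitor host; the mon ID is a placeholder):

# Confirm the socket exists and the daemon answers:
ls /var/run/ceph/ceph-mon.ceph-node1.asok
ceph daemon mon.ceph-node1 mon_status
ceph daemon mon.ceph-node1 quorum_status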