The way I did this was to use:
ceph-volume lvm batch /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde
/dev/sdf etc
Where you just list all of the block devices you want to use in a group. It
will automatically determine which devices are SSDs and then automatically
partition it for you and share it am
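If you want to see what it would do first, there is a dry-run form that only prints the proposed layout (the device names below are just placeholders for whatever you pass in):

ceph-volume lvm batch --report /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf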
Hello guys,
is there anyone using the Telegraf / InfluxDB metrics exporter with Grafana
dashboards? I am asking because I was unable to find any
existing Grafana dashboards based on InfluxDB.
I am having a hard time creating the graphs I want to see. Metrics are
exported in a way that every s
Hi!
While browsing /#/pool in the Nautilus ceph dashboard I noticed it said 93%
used on the single pool we have (3x replica).
ceph df detail however shows 81% used on the pool and 67% raw usage.
# ceph df detail
RAW STORAGE:
    CLASS     SIZE      AVAIL     USED      RAW USED     %RAW USED
Interesting. What did the partitioning look like?
- Original Message -
From: "Daniel Sung"
To: "Nathan Fish"
Cc: "Philip Brown" , "ceph-users"
Sent: Tuesday, December 10, 2019 1:21:36 AM
Subject: Re: [ceph-users] sharing single SSD across multiple HD based OSDs
The way I did this was t
>I am having a hard time creating the graphs I want to see. Metrics are
exported in a way that every single one is stored in a separate series in
Influx like:
>
>> ceph_pool_stats,cluster=ceph1,metric=read value=1234 15506589110
>> ceph_pool_stats,cluster=ceph1,metric=write value=1234
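If the goal is to get all of those per-metric series onto one graph, a query along these lines should work with the InfluxDB 1.x CLI (the database name "telegraf" and the measurement/tag names are only assumed from the sample above, so adjust to what is actually written):

influx -database telegraf -execute "SELECT last(value) FROM ceph_pool_stats WHERE cluster = 'ceph1' GROUP BY metric"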
It just uses LVM to create a bunch of LVs. It doesn't actually create
separate partitions on the block devices. You can run the command and it
will give you a preview of what it will do and ask for confirmation.
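To see the LVs it ends up creating, something like this should show them (standard ceph-volume and LVM commands, nothing specific to this thread):

ceph-volume lvm list                # each OSD with its data LV and its block.db LV on the SSD
lvs -o lv_name,vg_name,lv_size      # the same volumes from the plain LVM side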
On Tue, 10 Dec 2019 at 13:36, Philip Brown wrote:
> Interesting. What did the partit
Just also a bit curious. So it just creates a PV on sda, with no
partitioning done on sda?
-Original Message-
From: Daniel Sung [mailto:daniel.s...@quadraturecapital.com]
Sent: Tuesday, 10 December 2019 14:40
To: Philip Brown
Cc: ceph-users
Subject: Re: [ceph-users] sharing single SSD acr
oh very nice!
I tried it out with the "batch --report" option, using 1 SSD and 3 HDDs (and ceph
nautilus).
It gave me this (snipped a bit):
Solid State VG:
  Targets:   block.db       Total size: 110.00 GB
  Total LVs: 3              Size per LV: 36.67 GB
  Devices:   /dev/sd
Seen from vSphere, a tcmu-runner iGW behaves as a generic iSCSI target,
with dataflow on 1 AO path and n ANO paths acting as "hot standby" for each
LUN.
Load balancing is therefore limited to being between images if you use a
tcmu-runner based deployment.
Each LUN will have one AO (Active-Optimized) path and n n
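If you want to verify that from the vSphere side, the path states per LUN can be listed with something like the following (the naa device ID is a placeholder):

esxcli storage nmp path list -d naa.6001405xxxxxxxxx   # shows each path's ALUA group state (active / active unoptimized)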
Hi,
I enabled telegraf and influx plugins for my ceph cluster.
I would like to use influx/chronograf to detect anomalies:
- osd down
- monitor down
- osd near full
But it is very difficult/complicated to make simple queries because, for
example, I have osd up and osd total metrics but no osd down metric.
> How is that possible? I don't know how much more proof I need to present that
> there's a bug.
I also think there's a bug in the balancer plugin, as it seems to have
stopped for me too. I'm on Luminous though, so I'm not sure if it will
be the same bug.
The balancer used to work flawlessly, giving
ceph version 14.2.4 (75f4de193b3ea58512f204623e6c5a16e6c1e1ba) nautilus (stable)
OS: CentOS Linux release 7.7.1908 (Core)
single node ceph cluster with 1 mon, 1 mgr, 1 mds, 1 rgw and 12 osds, but only
cephfs is used.
ceph -s is blocked after shutting down the machine
(192.168.0.104), then ip add
But it is very difficult/complicated to make simple queries because, for
example, I have osd up and osd total metrics but no osd down metric.
To determine how many OSDs are down you don't need a special metric, because
you already have the osd_up and osd_in metrics. Just use math.
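For example, with the 1.x influx CLI the down count can be computed right in the query; the measurement and field names here (ceph_cluster_stats, osd_total, osd_up) are placeholders, so substitute whatever your exporter actually writes:

influx -database telegraf -execute "SELECT last(osd_total) - last(osd_up) AS osd_down FROM ceph_cluster_stats"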
k
My full OSD list (also here as pastebin https://paste.ubuntu.com/p/XJ4Pjm92B5/ )
ID  CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA     OMAP    META    AVAIL    %USE   VAR   PGS  STATUS
14  hdd    9.09470  1.0       9.1 TiB  6.9 TiB  6.8 TiB  71 KiB  18 GiB  2.2 TiB  75.34  1.04  69   up
19  hdd    9.09470
Quoting Cc君 (o...@qq.com):
> ceph version 14.2.4 (75f4de193b3ea58512f204623e6c5a16e6c1e1ba) nautilus
> (stable)
>
> OS: CentOS Linux release 7.7.1908 (Core)
> single node ceph cluster with 1 mon, 1 mgr, 1 mds, 1 rgw and 12 osds, but
> only cephfs is used.
> ceph -s is blocked after shutting down
Hello Gr. Stefan,
1.
OSDs were marked noout, nobackfill, norecover before shutting down.
$ ceph osd set noout
$ ceph osd set nobackfill
$ ceph osd set norecover
2.
[root@ceph-node1 ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/li
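For completeness, the matching cleanup once the node is back up would just be unsetting the same flags (standard commands, not from the original mail):

ceph osd unset noout
ceph osd unset nobackfill
ceph osd unset norecover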
It's my personal "production" cluster , by the way.
Hello Gr. Stefan,
1.
OSDs were marked noout, nobackfill, norecover before shutting down.
$ ceph osd set noout
$ ceph osd set nobackfill
$ ceph osd set norecover
2.
[root@ceph-node1 ~]# systemctl status firewalld
● firewalld.service - firewall
Quoting Miroslav Kalina (miroslav.kal...@livesport.eu):
> Hello guys,
>
> is there anyone using Telegraf / InfluxDB metrics exporter with Grafana
> dashboards? I am asking because I was unable to find any
> existing Grafana dashboards based on InfluxDB.
\o (telegraf)
> I am having hard
Quoting Cc君 (o...@qq.com):
> 4.[root@ceph-node1 ceph]# ceph -s
> just blocked ...
> error 111 after a few hours
Is the daemon running? You can check whether the process is alive via
/var/run/ceph/ceph-mon.$hostname.asok
If so ... then query the monitor for its status:
ceph daemon mon.$hostname quo
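A concrete version of that check could look like this (mon_status and quorum_status are both standard admin-socket queries, pick whichever you need):

ls /var/run/ceph/ceph-mon.$hostname.asok      # the socket only exists while the mon process is alive
ceph daemon mon.$hostname mon_status          # ask the monitor directly over the socket, bypassing the cluster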