384 active+clean; 19 TiB data, 45 TiB used, 76 TiB / 122 TiB avail; 3.4 KiB/s rd, 573 KiB/s wr, 20 op/s
Dec 23 11:58:25 c02 ceph-mgr: 2019-12-23 11:58:25.194 7f7d3a2f8700 0 log_channel(cluster) log [DBG] : pgmap v411196: 384 pgs: 384 active+clean; 19 TiB data, 45 TiB used, 76 TiB / 122 TiB
Hi all,
I want to monitor my Luminous Ceph cluster with Zabbix. My Ceph runs in Docker.
I enabled the mgr zabbix module:
[root@ceph2 /]# ceph mgr module ls
{
    "enabled_modules": [
        "dashboard",
        "restful",
        "status",
        "zabbix"
    ],
    "disabled_modules": [
        "balancer",
        "influx",
        "localpool",
        "prometheus",
        "selftest"
    ]
}
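For reference, a minimal sketch of pointing the module at a Zabbix server and pushing a test report (zabbix.example.com is a placeholder; note that the zabbix_sender binary must be available inside the mgr container for the module to work):
[root@ceph2 /]# ceph zabbix config-set zabbix_host zabbix.example.com    (Zabbix server or proxy to send to)
[root@ceph2 /]# ceph zabbix config-show                                  (verify zabbix_host, zabbix_port, identifier, interval)
[root@ceph2 /]# ceph zabbix send                                         (push one report immediately as a test)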
Hi,
We have a situation where the sum of the bucket sizes, as reported by the
bucket stats command, is far less than the actual cluster usage:
Sum of bucket sizes = 11 TB
Replication size = 2
Total expected cluster occupancy = 22 TB
Actual cluster occupancy = 100 TB
Any pointers on debugging this?
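A few possible starting points for narrowing this down (not from the thread; pool and job names are placeholders):
# ceph df detail                                 (which pool actually holds the extra space?)
# radosgw-admin bucket stats                     (compare size_kb vs size_kb_actual per bucket)
# radosgw-admin gc list --include-all | wc -l    (a large garbage-collection backlog keeps space occupied)
# radosgw-admin orphans find --pool=default.rgw.buckets.data --job-id=orphan-check-1
                                                 (pre-Octopus releases; looks for RADOS objects no bucket references)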
Hello all. I'm trying to link a bucket from a user that's not "tenanted"
(user1) to a tenanted user (usert$usert), but I'm getting an error message.
I'm using Luminous:
# ceph version
ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous
(stable)
The steps I took were:
1) creat
I think you need this pull request to do this:
https://github.com/ceph/ceph/pull/28813
I don't think it was ever backported to any upstream release branch.
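For what it's worth, once that change is in, the move should look roughly like this (bucket1 is a hypothetical bucket name; usert$usert is the tenanted user from the question):
# radosgw-admin bucket link --bucket=/bucket1 --uid='usert$usert'
# radosgw-admin bucket chown --bucket='usert/bucket1' --uid='usert$usert'    (rewrites object ownership)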
Paul
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit GmbH
Freseniusstr. 31h
81247 München
Hohoho Merry Christmas and Hello,
I set up a "poor man's" Ceph cluster with 3 nodes, one switch and
standard HDDs.
My problem: with the rbd benchmark I get 190 MB/s write, but only
45 MB/s read.
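For comparison, both paths can be measured directly with rbd bench (a sketch; rbd/test is a placeholder pool/image):
# rbd bench --io-type write --io-size 4M --io-total 10G rbd/test
# rbd bench --io-type read --io-size 4M --io-total 10G rbd/test
(varying --io-pattern seq|rand and --io-threads helps show where the read path falls behind)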
Here is the Setup: https://i.ibb.co/QdYkBYG/ceph.jpg
I plan to implement a separate switc
Dear Jonas,
I tried just now on a 14.2.5 cluster, and sadly, the unexpected behaviour is
still there, i.e. an OSD marked "out" and then restarted is no longer
considered a data source.
I also tried with a 13.2.8 OSD (in a cluster running 13.2.6 on other OSDs, MONs
and MGRs), same effect.
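For anyone who wants to reproduce it, the sequence is roughly this (osd.12 and systemd-managed OSDs are assumptions, not from the original report):
# ceph osd out 12                    (mark the OSD out; its PGs start to move elsewhere)
# systemctl restart ceph-osd@12      (restart the daemon while it is still marked out)
# ceph -s                            (PGs turn degraded/undersized instead of merely remapped,
                                      i.e. the restarted OSD is no longer used as a data source)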
Hello,
On Mon, 23 Dec 2019 22:14:15 +0100 Ml Ml wrote:
> Hohoho Merry Christmas and Hello,
>
> I set up a "poor man's" Ceph cluster with 3 nodes, one switch and
> standard HDDs.
>
> My problem: with the rbd benchmark I get 190 MB/s write, but only
> 45 MB/s read.
>
Something is seve
Hi,
It seems that either you haven't imported the Zabbix template on your Zabbix
server, or you haven't added that template to the host
"ceph-5f23a710-ca98-44f6-a323-41d412256f4d", which should be present in the
Zabbix server.
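If the template is in place, it is also worth checking that the identifier the module reports under matches the host name defined in Zabbix; a quick sketch ("ceph2" is only an example value):
# ceph zabbix config-show                    (the "identifier" field is the host name the module reports as)
# ceph zabbix config-set identifier ceph2    (set it to the host name configured on the Zabbix server)
# ceph zabbix send                           (push a report immediately to re-test)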
Also, if you have named the host "ceph-5f23a710-ca98-44f6-a323-41d4122
Hi all. I deployed a Ceph cluster with Mimic 13.2.4. There are 26 nodes, 286
OSDs and 1.4 PiB of available space.
I created nearly 5,000,000,000 objects via ceph-rgw, each 4 KB in size,
so roughly 18 TB * 3 of disk should be used. But `ceph df detail` shows
that the RAW USED is 889 TiB.
Is this a bug or
Are you using HDDs?
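If so, one likely explanation (an assumption on my part, the thread does not show the setting) is BlueStore's default bluestore_min_alloc_size_hdd of 64 KiB on these releases: every 4 KB object still occupies a full 64 KiB allocation unit.
5,000,000,000 objects x 64 KiB ≈ 298 TiB per copy
298 TiB x 3 replicas ≈ 894 TiB, which is close to the reported 889 TiB RAW USED
(5,000,000,000 x 4 KiB x 3 ≈ 56 TiB is what you would expect if objects were stored at their logical size)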
> On Dec 24, 2019, at 3:06 PM, Ch Wan wrote:
>
> Hi all. I deployed a Ceph cluster with Mimic 13.2.4. There are 26 nodes, 286
> OSDs and 1.4 PiB of available space.
> I created nearly 5,000,000,000 objects via ceph-rgw, each 4 KB in size.
> So there should be roughly 18 TB * 3 of disk used. But `ceph