Hi all,
I had an issue on a hammer cluster (0.94.9, upgraded from 0.94.7
today).
There are three PGs incomplete:
root@ceph-06:~# ceph health detail
HEALTH_WARN 3 pgs incomplete; 3 pgs stuck inactive; 3 pgs stuck unclean
pg 24.cc is stuck inactive for 595902.285007, current state incomplete, l
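A first step to dig into such an incomplete PG would usually be to query it directly (the PG id here is taken from the health output above), e.g.:
# ceph pg 24.cc query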
Hi,
you can also set the primary_affinity to 0.5 on the 8TB disks to lower
the read load on them (this way you don't waste 50% of the space).
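A sketch of how that could look, assuming osd.12 is one of the 8TB disks (the osd id is only an example):
# ceph osd primary-affinity osd.12 0.5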
Udo
On 2018-04-12 04:36, ? ?? wrote:
Hi,
For anybody who may be interested, here I share a process of locating
the reason for ceph cluster performanc
On 2017-11-21 13:12, Rudi Ahlers wrote:
On Tue, Nov 21, 2017 at 10:46 AM, Christian Balzer
wrote:
On Tue, 21 Nov 2017 09:21:58 +0200 Rudi Ahlers wrote:
> On Mon, Nov 20, 2017 at 2:36 PM, Christian Balzer wrote:
>...
>
>
> Ok, so I have 4 physical servers and need to setup a highly redunda
Hi,
not flushing the ceph journal!
I am talking about the caching done by Linux.
If you run free, you can see how much is cached, for example:
# free
              total        used        free      shared  buff/cache   available
Mem:       41189692    16665960     4795700      124780    19728032    28247464
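To look at only the buffer/page-cache part, something like this should also work:
# grep -E '^(Buffers|Cached)' /proc/meminfo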
Hi Rudi,
On 2017-11-20 11:58, Rudi Ahlers wrote:
...
Some more stats:
root@virt2:~# rados bench -p Data 10 seq
hints = 1
  sec Cur ops   started  finished  avg MB/s  cur MB/s last lat(s)  avg lat(s)
    0       0         0         0         0         0           -           0
    1      16       402
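As a side note, a seq read benchmark like the one above normally needs data left behind by a prior write run, e.g. something along the lines of:
# rados bench -p Data 10 write --no-cleanup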
Hi,
again an update is available without release notes...
http://ceph.com/releases/v10-2-10-jewel-released/ isn't found.
No announcement on the mailing list (perhaps I missed something).
I know, normally it's safe to update ceph, but two releases ago it
wasn't.
Udo
Hi,
10.2.9 is available:
apt list --upgradable
Listing... Done
ceph/stable 10.2.9-1~bpo80+1 amd64 [upgradable from: 10.2.8-1~bpo80+1]
Changelog file??
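One general Debian/Ubuntu way to look at the packaged changes (not specific to this repository, and it may not work for third-party repos) would be:
# apt-get changelog ceph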
Udo
On 2017-07-14 09:26, Martin Palma wrote:
So only the ceph-mds is affected? Let's say if we have mons and osds
on 10.2.8 and the MDS on 10.2.6 or
Hi,
no, Ceph reads from the primary OSD of each PG - so your reads are approx. 33% local.
And why? Better distribution of the read access.
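To see which OSD is the primary for a given object, one can ask Ceph for the mapping, e.g. (pool and object name are only placeholders):
# ceph osd map rbd test-object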
Udo
On 2017-03-24 09:49, mj wrote:
Hi all,
Something that I am curious about:
Suppose I have a three-server cluster, all with identical OSD
configuration, and also a replic
Hi Piotr,
is your partition GUID right?
Look with sgdisk:
# sgdisk --info=2 /dev/sdd
Partition GUID code: 45B0969E-9B03-4F30-B4C6-B4B80CEFF106 (Unknown)
Partition unique GUID: 396A0C50-738C-449E-9FC6-B2D3A4469E51
First sector: 2048 (at 1024.0 KiB)
Last sector: 10485760 (at 5.0 GiB)
Partition size
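If the type GUID turns out to be wrong, it can usually be set with sgdisk as well, e.g. for partition 2 and the journal type code shown above:
# sgdisk --typecode=2:45B0969E-9B03-4F30-B4C6-B4B80CEFF106 /dev/sdd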
Hi Greg,
On 2017-01-12 19:54, Gregory Farnum wrote:
...
That's not what anybody intended to have happen. It's possible the
simultaneous loss of a monitor and the OSDs is triggering a case
that's not behaving correctly. Can you create a ticket at
tracker.ceph.com with your logs and what steps
Hi,
On 2017-01-12 11:38, Shinobu Kinjo wrote:
Sorry, I don't get your question.
Generally speaking, the MON maintains maps of the cluster state:
* Monitor map
* OSD map
* PG map
* CRUSH map
yes - and if an osd says "osd.5 marked itself down", the mon can update
the OSD map immediately (an
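To watch this happening, one can compare the osdmap epoch before and after such an event, e.g. with:
# ceph osd stat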
Hi all,
I had just rebooted all 3 nodes (one after another) of a small Proxmox VE
ceph cluster. All nodes are mons and have two OSDs.
During the reboot of one node, ceph was stuck longer than normal and I
looked at the "ceph -w" output to find the reason.
This is not the reason, but I'm wondering why "osd mar
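As a general aside, for planned reboots the usual way to avoid unnecessary rebalancing is to set the noout flag beforehand, roughly like this:
# ceph osd set noout
(reboot the node)
# ceph osd unset noout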
Hi Andre,
I use check_ceph_dash on top of ceph-dash for this (it is a Nagios/Icinga
plugin).
https://github.com/Crapworks/ceph-dash
https://github.com/Crapworks/check_ceph_dash
ceph-dash provides a simple, clear overview as a web dashboard.
Udo
On 2017-01-02 12:42, Andre Forigato wrote:
Hello,
Hi Björn,
I think he uses something like this:
http://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server
Udo
On 2016-12-15 11:10, Bjoern Laessig wrote:
On Thu, 2016-10-27 at 15:47 +0200, mj wrote:
Hi Jelle,
On 10/27/2016 03:04 PM, Jelle de Jong wrote:
> Hello everybody,
>
> I want to up
Hi,
if you wrote from a client, the data was written to one (or more)
placement groups in 4MB chunks. These PGs are written to the journal and
the osd disk, and due to this the data is in the Linux file buffer on the
osd node too (until the OS needs the memory for other data (file buffer
or anything el
Hi,
update to 10.2.5 - available since Saturday.
Udo
On 2016-12-12 13:40, George Kissandrakis wrote:
Hi
I have a jewel/xenial ceph installation with 61 OSDs, mixed SAS/SATA, in
hosts with two roots.
The installation has version jewel 10.2.3-1xenial (and monitors).
Two hosts were newly a
On 2016-11-28 10:29, Strankowski, Florian wrote:
Hey guys,
...
I simply can't get osd.0 back up. I took it offline, out, reinserted it,
set it up again, deleted the osd configs, remade them, no success
whatsoever. IMHO the documentation on this part is a bit "lousy" so I'm
missing some points of informa
Hi,
On 2016-05-10 05:48, Geocast wrote:
Hi members,
We have 21 hosts as ceph OSD servers; each host has 12 SATA disks (4TB
each) and 64GB memory.
ceph version 10.2.0, Ubuntu 16.04 LTS
The whole cluster is newly installed.
Can you help check whether the arguments we put in ceph.conf are
reasonable o