Hello Daniel,
just throw away your crappy Samsung SSD 860 Pro. It won't work in an
acceptable way.
See
https://docs.google.com/spreadsheets/d/1E9-eXjzsKboiCCX-0u0r5fAjjufLKayaut_FOPxYZjc/edit?usp=sharing
for a performance indication of individual disks.
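If you want numbers of your own, the usual quick check for Ceph suitability is
a single-job 4k sync write test with fio, something like this (the device path
is a placeholder, and the test writes to the raw device, so only run it on a
disk you can wipe):

  fio --name=sync-write-test --filename=/dev/sdX --direct=1 --sync=1 \
      --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based

The single-queue-depth sync write rate is what matters for Ceph journal/WAL
use, and that is where consumer SSDs tend to fall down.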
--
Martin Verges
Managing director
If somebody is looking, I found the docs
here: https://www.bookstack.cn/read/ceph-en
Thanks for the reply.
Unfortunately this is the case.
In Google you can use the "Show Page in cache" option to get your
desired site.
I guess there was a change from our documentation guys and they
forgot to set a
Hi Ceph users,
I'm working on a Common Lisp client utilizing the rados library. I got some
results, but I don't know how to estimate whether I am getting reasonable
performance. I'm running a test cluster from a laptop: 2 OSD VMs with 4 GB
RAM and 4 vCPUs each; the monitors and mgr are running on the same VM(s). As
f
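For a baseline I was planning to compare my numbers against the built-in
rados bench run against the same pool (the pool name below is just a
placeholder):

  rados bench -p testpool 30 write --no-cleanup
  rados bench -p testpool 30 seq
  rados bench -p testpool 30 rand
  rados cleanup -p testpool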
Hi Anthony,
Thanks for looking into this and opening the ticket - I'll keep an eye on it.
For prepping the LVMs etc. I was thinking I could probably use 'ceph-volume
lvm prepare' and then fix up the relevant LV tags with the appropriate values
from the origin OSD.
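Something along these lines is what I had in mind (VG/LV names and tag values
are made up, and I would double-check the exact tag names ceph-volume sets):

  ceph-volume lvm prepare --bluestore --data cephvg/osd-block-new
  # compare the tags on the new LV with those on the origin OSD's LV
  lvs -o lv_name,lv_tags
  # then swap in the origin OSD's values, e.g. for the OSD id
  lvchange --deltag ceph.osd_id=7 cephvg/osd-block-new
  lvchange --addtag ceph.osd_id=3 cephvg/osd-block-new
  # and likewise for ceph.osd_fsid and any other tags that differ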
Cheers,
Chris
On Mon, Oct 12
I'm not using Rook, although I think it will probably help a lot with that
recovery, as Rook is container-based too!
Thanks a lot!
On Tue, Oct 13, 2020 at 00:19, Brian Topping wrote:
> I see, maybe you want to look at these instructions. I don’t know if you
> are running Rook, but the point ab
I see, maybe you want to look at these instructions. I don’t know if you are
running Rook, but the point about getting the container alive by using `sleep`
is important. Then you can get into the container with `exec` and do what you
need to.
https://rook.io/docs/rook/v1.4/ceph-disaster-recover
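In case it helps, the gist of the sleep trick is something like this (the
namespace and deployment names are the Rook defaults and may differ on your
cluster):

  # stop the operator first so it doesn't revert the patch
  kubectl -n rook-ceph scale deployment rook-ceph-operator --replicas=0
  # replace the container command so the pod stays up without ceph-mon running
  kubectl -n rook-ceph patch deployment rook-ceph-mon-a --type=json \
    -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/command", "value": ["sleep", "infinity"]}]'
  # then get a shell inside the container and do the recovery work
  kubectl -n rook-ceph get pods
  kubectl -n rook-ceph exec -it <mon-pod-name> -- bash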
Hi there!
This isn’t a difficult problem to fix. For purposes of clarity, the monmap is
just a part of the monitor database. You generally have all the details correct
though.
Have you looked at the process in
https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-mon/#recoverin
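For what it's worth, pulling the monmap out of (and pushing it back into) a
stopped mon's store looks roughly like this (the mon id and file path are just
examples):

  # with the mon daemon stopped
  ceph-mon -i a --extract-monmap /tmp/monmap
  monmaptool --print /tmp/monmap
  monmaptool --rm <bad-mon-name> /tmp/monmap    # or --add <name> <addr>
  ceph-mon -i a --inject-monmap /tmp/monmap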
Hi everyone,
Because of unfortunate events, I have a container-based Ceph cluster
(Nautilus) in bad shape.
It is one of the lab clusters, made of only 2 nodes as control plane (I
know it's bad :-)); each of these nodes runs a mon, a mgr and a rados-gw
containerized ceph_daemon.
They were install
Unfortunately this is the case.
In Google you can use the "Show Page in cache" option to get your desired site.
I guess there was a change from our documentation guys and they forgot to set a
proper redirect/rewrite rule for "older" sites which are already crawled by
Google. But I am not sure...
Poking through the source I *think* the doc should indeed refer to the "dup"
function, vs "copy". That said, arguably we shouldn't have a section in the
docs that says "there's this thing you can do but we aren't going to tell you
how".
Looking at the history / blame info, which only seems to
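As far as I can tell the invocation would be something like the following,
but that is untested and the flag names (--target-data-path in particular) are
my reading of the code, so check the tool's help output before trusting it:

  systemctl stop ceph-osd@5
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-5 \
      --target-data-path /var/lib/ceph/osd/ceph-5-new --op dup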
If everything is stable, isn't it good to update this doc?
https://docs.ceph.com/en/latest/start/os-recommendations/
On Mon, Oct 12, 2020 at 12:56 PM Burkhard Linke <
burkhard.li...@computational.bio.uni-giessen.de> wrote:
> Hi,
>
> On 10/12/20 2:31 AM, Seena Fallah wrote:
> > Hi all,
> >
> > Does
Dear all,
occasionally, I find messages like
Health check update: Long heartbeat ping times on front interface seen, longest
is 1043.153 msec (OSD_SLOW_PING_TIME_FRONT)
in the cluster log. Unfortunately, I seem to be unable to find out which OSDs
were affected (a posteriori). I cannot find rel
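While the warning is active, 'ceph health detail' names the affected OSD
pairs, and as far as I understand each OSD can also dump its recent heartbeat
ping times via the admin socket, e.g. (untested here, threshold argument in
milliseconds):

  ceph health detail
  ceph daemon osd.0 dump_osd_network 1000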
On 2020-10-12 09:28, Robert Sander wrote:
> Hi,
>
> Am 12.10.20 um 02:31 schrieb Seena Fallah:
>>
>> Does anyone have a production cluster with Ubuntu 20 (Focal), or any
>> suggestions, or any bugs that prevent deploying Ceph Octopus on Ubuntu 20?
>
> The underlying distribution does not matter an
On 2020-10-12 08:58, Seena Fallah wrote:
> I've seen this PR that reverts the Ubuntu version from 20.04 back to
> 18.04 because of some failures!
> Are there any updates on this?
> https://github.com/ceph/ceph/pull/35110
Apparently there have been attempts to get Ceph built on Focal. I did
not g
I really should read these emails more carefully... Sorry, and thanks for
pointing that out. I haven't done the filestore migration per OSD. I
created a filestore OSD in my lab setup to play around with
ceph-objectstore-tool, but I couldn't find anything except for '--op
dup', and it's not really
Hi,
On 10/12/20 12:05 PM, Kristof Coucke wrote:
Diving into the different logs and searching for answers, I came across
the following:
PG_DEGRADED Degraded data redundancy: 2101057/10339536570 objects degraded
(0.020%), 3 pgs degraded, 3 pgs undersized
pg 1.4b is stuck undersized for 63
I'll answer it myself:
When CRUSH fails to find enough OSDs to map to a PG, the missing slot will
show up as 2147483647, which is ITEM_NONE, i.e. no OSD found.
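You can see it directly in the PG mapping, e.g. for one of the affected PGs
(the PG id is just an example):

  ceph pg map 1.4b
  # a 2147483647 entry in the reported "up" set is a slot CRUSH could not fill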
Diving into the different logs and searching for answers, I came across
the following:
PG_DEGRADED Degraded data redundancy: 2101057/10339536570 objects degraded
(0.020%), 3 pgs degraded, 3 pgs undersized
pg 1.4b is stuck undersized for 63114.227655, current state
active+undersized+degraded
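To dig further I have been looking at the individual PGs and the CRUSH tree,
e.g.:

  ceph pg 1.4b query          # check "up", "acting" and the recovery state
  ceph osd tree               # are enough OSDs/hosts up for the CRUSH rule?
  ceph osd pool ls detail     # size/min_size of the affected pool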
Hi,
On 10/12/20 2:31 AM, Seena Fallah wrote:
Hi all,
Does anyone have a production cluster with Ubuntu 20 (Focal), or any
suggestions, or any bugs that prevent deploying Ceph Octopus on Ubuntu 20?
We are running our new Ceph cluster on Ubuntu 20.04 and the Ceph Octopus
release. Packages are take
Hi all,
We've been having trouble with our Ceph cluster for over a week now.
Short info regarding our situation:
- Original cluster had 10 OSD nodes, each having 16 OSDs
- Expansion was necessary, so another 6 nodes have been added
- Version: 14.2.11
Last week we saw heavily loaded OSD servers, after help
Hi,
Am 12.10.20 um 02:31 schrieb Seena Fallah:
>
> Does anyone have a production cluster with Ubuntu 20 (Focal), or any
> suggestions, or any bugs that prevent deploying Ceph Octopus on Ubuntu 20?
The underlying distribution does not matter any more as long as you get
cephadm bootstrapped on one
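Roughly like this (the download URL is the one from the Octopus docs as far
as I remember, and the IP is a placeholder for the first host):

  curl --silent --remote-name --location https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm
  chmod +x cephadm
  ./cephadm bootstrap --mon-ip 192.0.2.10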