Thanks for your feedback!
I increased debug_ms to 1/5.
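For reference, a runtime way to set that, assuming the osd.5 admin socket is
reachable (or injectargs to hit all OSDs at once):

$ ceph daemon osd.5 config set debug_ms 1/5
$ ceph tell osd.* injectargs '--debug_ms 1/5'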
This is another slow request (full output from 'ceph daemon osd.5
dump_historic_ops' for this event is attached):
{
    "description": "osd_op(client.171725953.0:404377591 8.9b
8:d90adab6:::rbd_data.c47f3c390c8495.
Hi
On 27 January 2019 18:20:24 CET, Will Dennis wrote:
>Been reading "Learning Ceph - Second Edition"
>(https://learning.oreilly.com/library/view/learning-ceph-/9781787127913/8f98bac7-44d4-45dc-b672-447d162ea604.xhtml)
>and in Ch. 4 I read this:
>
>"We've noted that Ceph OSDs built with the new
Hi All,
Any further suggestions? Should I just ignore the error "Failed to load
ceph-mgr modules: telemetry", or is this the root cause of the missing
realtime I/O readings in the Dashboard?
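For context, the module state and the underlying import error can be inspected
with something like the following (the journalctl unit name is a guess for a
default systemd deployment):

$ ceph mgr module ls
$ ceph mgr module enable telemetry
$ journalctl -u ceph-mgr@$(hostname -s) | grep -i telemetry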
Thanks
On Fri, Feb 8, 2019 at 3:59 PM Ashley Merrick wrote:
> Message went away, but obviously still don't get t
Hi there,
Running 12.2.11-1xenial on a machine with 6 SSD OSD with bluestore.
Today we had two disks fail out of the controller, and after a reboot
they both seemed to come back fine, but ceph-osd was only able to start
on one of them. The other one gets this:
2019-02-08 18:53:00.703376 7f64f948c
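(A cautious first diagnostic for a BlueStore OSD that won't start is a
read-only fsck; "N" below is a placeholder for the failing OSD id:)

$ systemctl stop ceph-osd@N
$ ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-N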
Hello m8s,
I'm curious how we should do an upgrade of our Ceph cluster on Ubuntu 16.04/18.04,
as (at least on our 18.04 nodes) we only have 12.2.7 (or .8?).
For an upgrade to Mimic we should first update to the latest Luminous version,
currently 12.2.11 (IIRC), which is not possible on 18.04.
Is there an update path
Regarding available versions, are you using the Ubuntu repos or Ceph's own
18.04 repo?
The updates will always be slower to reach you if you're waiting for them to
hit the Ubuntu repo vs adding Ceph's own.
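A sketch of switching to Ceph's own repo on 18.04 (bionic), assuming the
release you are after actually publishes bionic builds (debian-mimic does):

$ wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
$ echo deb https://download.ceph.com/debian-mimic/ bionic main | sudo tee /etc/apt/sources.list.d/ceph.list
$ sudo apt-get update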
On Sun, 10 Feb 2019 at 12:19 AM, wrote:
> Hello m8s,
>
> I'm curious how we should do an upgrade
Hello Ashley,
Thank you for this fast response.
I can't prove this yet, but I am already using Ceph's own repo for Ubuntu 18.04,
and 12.2.7/8 is the latest available there...
- Mehmet
On 9 February 2019 17:21:32 CET, Ashley Merrick wrote:
>Regarding available versions, are you using the Ubun
What does the output of apt-get update look like on one of the nodes?
You can just list the lines that mention Ceph.
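e.g. something like:

$ apt-get update 2>&1 | grep -i ceph
$ grep -ri ceph /etc/apt/sources.list /etc/apt/sources.list.d/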
Thanks
On Sun, 10 Feb 2019 at 12:28 AM, wrote:
> Hello Ashley,
>
> Thank you for this fast response.
>
> I can't prove this yet, but I am already using Ceph's own repo for Ubuntu
>
Hi!
I have a Ceph cluster with 3 nodes running mon/mgr/mds servers.
I rebooted one node and saw this in the client log:
Feb 09 20:29:14 ceph-nfs1 kernel: libceph: mon2 10.5.105.40:6789 socket closed
(con state OPEN)
Feb 09 20:29:14 ceph-nfs1 kernel: libceph: mon2 10.5.105.40:6789 session lost,
hunting for
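(For what it's worth, whether the remaining mons kept quorum during the
reboot can be checked with:)

$ ceph quorum_status --format json-pretty
$ ceph mon stat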
Hi Roman,
We recently discussed your tests, and a simple idea came to mind: can
you repeat your tests targeting latency instead of max throughput? I mean,
just use iodepth=1. What is the latency, and on what hardware?
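For example, a sketch using fio's rbd engine (pool and image names here are
made up):

$ fio --name=lat --ioengine=rbd --clientname=admin --pool=bench \
      --rbdname=bench-img --rw=randwrite --bs=4k --iodepth=1 \
      --numjobs=1 --runtime=60 --time_based

At iodepth=1 the clat percentiles in fio's output are the numbers to look at.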
Well, I have been playing with the Ceph RDMA implementation for quite a while,
and it h
On Sun, Feb 10, 2019 at 1:56 AM Ruben Rodriguez wrote:
>
> Hi there,
>
> Running 12.2.11-1xenial on a machine with 6 SSD OSD with bluestore.
>
> Today we had two disks fail out of the controller, and after a reboot
> they both seemed to come back fine, but ceph-osd was only able to start
> on one o
The log ends at:
$ zcat ceph-osd.5.log.gz |tail -2
2019-02-09 07:37:00.022534 7f5fce60d700 1 --
192.168.61.202:6816/157436 >> - conn(0x56308edcf000 :6816
s=STATE_ACCEPTING pgs=0 cs=0 l=0)._process_connection sd=296 -
The last two messages are outbound to 192.168.222.204 and there are no
further m