Hi Loic,
I searched around for possible udev bugs, and then tried to run "yum update".
Udev did have a fresh update with the following version diffs;
udev-147-2.63.el6_7.1.x86_64 --> udev-147-2.63.el6_7.1.x86_64
From what I can see, this update fixes issues related to symbolic links /
external
Hi all,
I have a 3-node Ceph cluster running on Ubuntu 14.04 on Dell R720xd servers;
the Ceph version is 0.80.10.
I have 64 GB RAM on each node and 2 x E5-2695 v2 @ 2.40 GHz (so cat
/proc/cpuinfo shows 48 processors per node); each processor is reported at
1200 MHz and the cache size is 30720 kB.
3 mons (one on each n
I hope this can help anyone who is running into the same issue as us -
kernels 4.1.x appear to have terrible RBD sequential write performance.
Kernels before and after are great.
I tested with 4.1.6 and 4.1.15 on Ubuntu 14.04.3, ceph hammer 0.94.5 - a
simple dd test yields this result:
dd if=/dev
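The dd command is cut off above; a representative sequential-write test of
that shape (the device path, block size and count here are assumptions, not
the original invocation) would be:

  # note: this writes directly to the RBD block device and destroys its contents
  dd if=/dev/zero of=/dev/rbd0 bs=4M count=1024 oflag=direct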
Hi guys,
In which PG states is the cluster still usable for reads and writes?
Thanks
Dan
As long as all the PGs are active the cluster is usable; clean is not
necessary, but all the PGs should be active.
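A quick way to check that, for example (these commands are a suggestion, not
from the original mail):

  ceph pg stat                  # one-line summary of PG states
  ceph pg dump_stuck inactive   # lists any PGs that are not active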
On 2015-12-18 18:01, Dan Nica wrote:
Hi guys,
In which PG states is the cluster still usable for reads and writes?
Thanks
Dan
On Fri, Dec 18, 2015 at 10:55 AM, Alex Gorbachev
wrote:
> I hope this can help anyone who is running into the same issue as us -
> kernels 4.1.x appear to have terrible RBD sequential write performance.
> Kernels before and after are great.
>
> I tested with 4.1.6 and 4.1.15 on Ubuntu 14.04.3, ce
Hi all,
I rebooted all my OSD nodes; afterwards, I got some PGs stuck in the peering state.
root@ceph-osd-3:/var/log/ceph# ceph -s
    cluster 186717a6-bf80-4203-91ed-50d54fe8dec4
     health HEALTH_WARN
            clock skew detected on mon.ceph-osd-2
            33 pgs peering
            33 pgs stuck inact
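The output is truncated above. A few commands that usually help when PGs are
stuck in peering (the pgid below is a placeholder):

  ceph health detail            # lists the affected PG ids
  ceph pg dump_stuck inactive   # shows which OSDs each stuck PG maps to
  ceph pg <pgid> query          # the recovery_state section often shows
                                # peering_blocked_by with the culprit OSD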
That feature was added for the Infernalis release of Ceph -- the man pages for
Hammer are located here [1]. Prior to Infernalis, this site describes a
procedure to accomplish roughly the same task [2].
[1] http://docs.ceph.com/docs/hammer/man/8/rbd/
[2] http://www.sebastien-han.fr/blog/2013/12/
Hi Reno,
"Peering", as far as I understand it, is the osds trying to talk to each
other.
You have approximately 1 OSD worth of pgs stuck (i.e. 264 / 8), and osd.0
appears in each of the stuck pgs, alongside either osd.2 or osd.3.
I'd start by checking the comms between osd.0 and osds 2 and 3 (in
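For example, to find each OSD's address and check basic reachability (the IPs
and port range are placeholders; 6800-7300 is the default OSD port range):

  ceph osd find 0               # prints the host/IP osd.0 is running on
  ceph osd find 2
  ceph osd find 3
  # from the host running osd.0:
  nc -zv <osd2-ip> 6800-7300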
Gregory Farnum writes:
>
> What's the full output of "ceph -s"?
>
> The only time the MDS issues these "stat" ops on objects is during MDS
> replay, but the bit where it's blocked on "reached_pg" in the OSD
> makes it look like your OSD is just very slow. (Which could
> potentially make the MDS
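One way to see what a slow OSD is actually blocked on is its admin socket (a
suggestion added for reference, not from the original mail; the OSD id is a
placeholder):

  ceph daemon osd.<id> dump_ops_in_flight   # requests currently in flight/blocked
  ceph daemon osd.<id> dump_historic_ops    # recent slow requests with timings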
On Fri, Dec 18, 2015 at 7:03 AM, Bryan Wright wrote:
> Gregory Farnum writes:
>>
>> What's the full output of "ceph -s"?
>>
>> The only time the MDS issues these "stat" ops on objects is during MDS
>> replay, but the bit where it's blocked on "reached_pg" in the OSD
>> makes it look like your OSD
Gregory Farnum writes:
>
> Nonetheless, it's probably your down or incomplete PGs causing the
> issue. You can check that by seeing if seed 0.5d427a9a (out of that
> blocked request you mentioned) belongs to one of the dead ones.
> -Greg
Hi Greg,
How would I find out which pg this seed belongs
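For reference, one way to work that out, assuming the seed is of the form
<pool>.<object hash> and the pool's pg_num is a power of two (Ceph uses a
"stable mod", which reduces to a plain mask in that case; the pool name 'data'
is a placeholder):

  pg_num=$(ceph osd pool get data pg_num | awk '{print $2}')
  printf '0.%x\n' $(( 0x5d427a9a & (pg_num - 1) ))
  # then see whether that pgid appears among the down/incomplete PGs:
  ceph health detail | grep -E 'down|incomplete'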
Hi Christian,
On 18/12/2015 04:16, Christian Balzer wrote:
>> It seems to me very bad.
> Indeed.
> Firstly let me state that I don't use CephFS and have no clues how this
> influences things and can/should be tuned.
Ok, no problem. Anyway, thanks for your answer. ;)
> That being said, the fio
Good day everyone,
I currently manage a Ceph cluster running Firefly 0.80.10. We had some
maintenance which involved stopping OSDs and starting them back again. This
caused one of the hard drives to notice it had a bad sector, and Ceph then
marked it as inconsistent.
After repairing the physical issu
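For what it's worth, the usual Firefly-era sequence for an inconsistent PG is
roughly the following (the pgid is a placeholder, and it is worth confirming
first that the primary's copy of the object is the good one, since repair has
historically favoured the primary):

  ceph health detail | grep inconsistent   # find the inconsistent PG(s)
  ceph pg repair <pgid>                    # trigger a repair of that PG
  # then watch the primary OSD's log in /var/log/ceph/ for the scrub result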
Hi Chris,
Thanks for your answer.
All the nodes are on AWS and I didn't change security group configuration.
2015-12-18 15:41 GMT+01:00 Chris Dunlop :
> Hi Reno,
>
> "Peering", as far as I understand it, is the osds trying to talk to each
> other.
>
> You have approximately 1 OSD worth of pgs s
I think I was in a hurry, everything is fine now.
root@ceph-osd-1:/var/log/ceph# ceph -s
    cluster 186717a6-bf80-4203-91ed-50d54fe8dec4
     health HEALTH_OK
     monmap e1: 3 mons at {ceph-osd-1=10.200.1.11:6789/0,ceph-osd-2=10.200.1.12:6789/0,ceph-osd-3=10.200.1.13:6789/0}
     electi
Hello all, I seem to have a problem with the Ceph version available at
ports.ubuntu.com in the armhf branch.
The latest available version is now Infernalis 9.2; however, whenever I
try to update my system, I still get the Hammer version (0.94.5).
I've been checking every day, and it seems the a
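A quick way to see which version apt actually considers the candidate, and
which repository it comes from (assuming the package is simply named 'ceph'):

  apt-get update
  apt-cache policy ceph   # shows installed vs. candidate version per repo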
Hi Loic,
Damn, the updated udev didn't fix the problem :-(
The rc.local workaround is also complaining;
INFO:ceph-disk:Running command: /usr/bin/ceph-osd -i 0 --get-journal-uuid
--osd-journal /dev/sdc3
libust[2648/2648]: Warning: HOME environment variable not set. Disabling
LTTng-UST per-u
Hey cephers,
Before we all head off to various holiday shenanigans and befuddle our
senses with rest, relaxation, and glorious meals of legend, I wanted
to give you something to look forward to for 2016 in the form of Ceph
Tech Talks!
http://ceph.com/ceph-tech-talks/
First on the docket in Janua
Hi Jesper,
The goal of the rc.local is twofold, but mainly to ensure that the
/dev/disk/by-partuuid symlinks exist for the journals. Is that the case?
Cheers
On 18/12/2015 19:50, Jesper Thorhauge wrote:
> Hi Loic,
>
> Damn, the updated udev didn't fix the problem :-(
>
> The rc.local workaround is
I have 3 systems w/ a cephfs mounted on them.
And I am seeing material 'lag'. By 'lag' I mean it hangs for short stretches
of time (1s, sometimes 5s), but very non-repeatably.
If I run
time find . -type f -print0 | xargs -0 stat > /dev/null
it might take ~130ms.
But it might take 10s. Once I've done
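To quantify how variable it is, one simple approach (my suggestion, not from
the original mail) is to repeat the same scan a few times and compare:

  for i in $(seq 5); do
      time find . -type f -print0 | xargs -0 stat > /dev/null
  done
  # if your version exposes it, 'ceph daemon mds.<name> dump_ops_in_flight'
  # shows what the MDS is blocked on during one of the slow runs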
On 17 December 2015 at 21:36, Francois Lafont wrote:
> Hi,
>
> I have a Ceph cluster, currently unused, and I have (to my mind) very low
> performance.
> I'm not an expert in benchmarks; here is an example of a quick bench:
>
> ---
> # fio --randrep
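The fio command line is truncated above; a representative random-I/O
benchmark of that general shape (every parameter here is an assumption, not
the original invocation) would look like:

  fio --randrepeat=1 --ioengine=libaio --direct=1 --name=bench \
      --filename=/mnt/cephfs/fio.test --bs=4k --size=1G \
      --iodepth=32 --rw=randwrite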
On Fri, Dec 18, 2015 at 9:24 PM, Alex Gorbachev
wrote:
> Hi Ilya,
>
> On Fri, Dec 18, 2015 at 11:46 AM, Ilya Dryomov wrote:
>>
>> On Fri, Dec 18, 2015 at 5:40 PM, Alex Gorbachev
>> wrote:
>> > Hi Ilya
>> >
>> > On Fri, Dec 18, 2015 at 6:50 AM, Ilya Dryomov
>> > wrote:
>> >>
>> >> On Fri, Dec 1
Hi Loic,
Getting closer!
lrwxrwxrwx 1 root root 10 Dec 18 19:43 1e9d527f-0866-4284-b77c-c1cb04c5a168 ->
../../sdc4
lrwxrwxrwx 1 root root 10 Dec 18 19:43 c34d4694-b486-450d-b57f-da24255f0072 ->
../../sdc3
lrwxrwxrwx 1 root root 10 Dec 18 19:42 c83b5aa5-fe77-42f6-9415-25ca0266fb7f ->
../../
On 18/12/2015 22:09, Jesper Thorhauge wrote:
> Hi Loic,
>
> Getting closer!
>
> lrwxrwxrwx 1 root root 10 Dec 18 19:43 1e9d527f-0866-4284-b77c-c1cb04c5a168
> -> ../../sdc4
> lrwxrwxrwx 1 root root 10 Dec 18 19:43 c34d4694-b486-450d-b57f-da24255f0072
> -> ../../sdc3
> lrwxrwxrwx 1 root root 10
On 18 December 2015 at 15:48, Don Waterloo wrote:
>
>
> On 17 December 2015 at 21:36, Francois Lafont wrote:
>
>> Hi,
>>
>> I have a Ceph cluster, currently unused, and I have (to my mind) very low
>> performance.
>> I'm not an expert in benchmarks; here is an example of a quick bench:
>>
>> ---
I ran into a similar problem while in the middle of upgrading from Hammer
(0.94.5) to Infernalis (9.2.0). I decided to try rebuilding one of the OSDs by
using 'ceph-disk prepare /dev/sdb' and it never comes up:
root@b3:~# ceph daemon osd.10 status
{
"cluster_fsid": "----
Bryan,
I rebooted another host which wasn't updated to CentOS 7.2, and those OSDs
also failed to come out of the booting state. I thought I'd restarted each OSD
host after upgrading them to Infernalis, but I must have been mistaken: after
running "ceph tell osd.* version" I saw we were on a mix of v0.9