> Regards,
>
> Alexandre
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
--
Łukasz Jagiełło
lukaszjagielloorg
> […] perform a simple network latency test like this? I'd like to compare the
> results.
>
> --
> Wido den Hollander
> 42on B.V.
> Ceph trainer and consultant
>
> Phone: +31 (0)20 700 9902
> Skype: contact42on
> _____
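Wido's test script is not included in this excerpt. As a stand-in, a plain `ping -c 100 -i 0.2 -q <peer>` gives comparable min/avg/max numbers; where ICMP is filtered, a rough TCP connect-latency probe can be sketched with bash's /dev/tcp (host and port below are placeholders, not anything from the thread):

```shell
#!/bin/bash
# Rough connect-latency probe (a sketch, not Wido's script). Point it
# at a peer OSD host and a listening port (e.g. an OSD messenger port).
host=${1:-127.0.0.1}
port=${2:-22}
for i in 1 2 3 4 5; do
  start=$(date +%s%N)                        # nanoseconds (GNU date)
  if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
    end=$(date +%s%N)
    echo "connect $i: $(( (end - start) / 1000000 )) ms"
  else
    echo "connect $i: failed"
  fi
done
```

Note this measures TCP handshake time, not one-way latency; for sub-millisecond comparisons a dedicated tool such as qperf or netperf is more appropriate.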
ag -r /dev/sdc1
actual 9043587, ideal 8926438, fragmentation factor 1.30%
#v-
Any possible reason for that, and how can we avoid it in the future? Someone
earlier mentioned it's a fragmentation problem, but 122 GB?
Best Regards
--
Łukasz Jagiełło
lukaszjagielloorg
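For reference, the fragmentation factor in the output above (which appears to come from `xfs_db -c frag`) is just (actual - ideal) / actual, so the numbers can be sanity-checked directly:

```shell
# Recompute the fragmentation factor from the extent counts above.
awk 'BEGIN { a = 9043587; i = 8926438; printf "%.2f%%\n", (a - i) / a * 100 }'
# prints 1.30%
```

A factor of 1.30% is low, so file fragmentation alone is an unlikely explanation for 122 GB of unaccounted space; inode exhaustion (`df -i`, suggested elsewhere in the thread) is a better first suspect.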
ser/2261/match=pierre+blondeau
>
> Regards.
>
> On 10/12/2013 23:23, Łukasz Jagiełło wrote:
>
>> Hi,
>>
>> Today my ceph cluster suffered from the following problem:
>>
>> #v+
>> root@dfs-s1:/var/lib/ceph/osd/ceph-1# df -h | grep ceph-1
>> /dev/sdc1
/14 Sean Crosby
> Since you are using XFS, you may have run out of inodes on the device and
> need to enable the inode64 option.
>
> What does `df -i` say?
>
> Sean
>
>
> On 13 December 2013 00:51, Łukasz Jagiełło wrote:
>
>> Hi,
>>
>> 72 OSDs (12 serve
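Sean's suggestion can be sketched as a quick check; the mount point is the one from this thread, the remount needs root, and on older kernels switching to inode64 may require a full unmount/mount rather than a remount:

```shell
# IUse% at 100% means ENOSPC on file creation even though `df -h`
# still shows free blocks.
osd=/var/lib/ceph/osd/ceph-1
df -i "$osd" 2>/dev/null || df -i /
# inode64 lets XFS allocate inodes anywhere on the device instead of
# only within the first 1 TiB:
mount -o remount,inode64 "$osd" 2>/dev/null \
  || echo "remount skipped (needs root and an XFS mount at $osd)"
# To make it persistent, add inode64 to the options in /etc/fstab, e.g.:
#   /dev/sdc1  /var/lib/ceph/osd/ceph-1  xfs  rw,noatime,inode64  0 0
```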
We're still running ceph 0.67.11
Thanks,
--
Łukasz Jagiełło
lukaszjagielloorg
about the source of this error.
https://gist.github.com/ljagiello/06a4dd1f34a776e38f77
Result of a more verbose debug run.
> You're really behind.
>
I know; we have an update scheduled for 2016. It's a big project to ensure
everything goes smoothly.
--
Łukasz Jagiełło
lukaszjagielloorg
> likely
> radosgw-admin running a newer version).
From the last 12h it's just deep-scrub info:
#v+
2015-11-13 08:23:00.690076 7fc4c62ee700 0 log [INF] : 15.621 deep-scrub ok
#v-
But yesterday there was a big rebalance and a host with that osd was
rebuilding from scratch.
We're runn
't running the same major version? (more likely
> >> radosgw-admin running a newer version).
> >
> >
> > From the last 12h it's just deep-scrub info:
> > #v+
> > 2015-11-13 08:23:00.690076 7fc4c62ee700 0 log [INF] : 15.621 deep-scrub
> ok
> > #v-
>
do?
>
> Thanks in advance,
>
> Regards
> --
> *Guillaume Comte*
> 06 25 85 02 02 | guillaume.co...@blade-group.com
>
> 90 avenue des Ternes, 75 017 Paris
>
>
[~]:$ touch /var/lib/ceph/osd/ceph-123/123
> touch: cannot touch ‘/var/lib/ceph/osd/ceph-123/123’: No space left on
> device
>
> xfs_repair gives no error for FS.
>
> Kernel is
> root@ed-ds-c178:[~]:$ uname -r
> 4.7.0-1.el7.wg.x86_64
>
> What else can I do to rec
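When `touch` fails with ENOSPC while `xfs_repair` and `df -h` look clean, the usual XFS suspects are inode exhaustion and free-space fragmentation; both can be checked (the `xfs_db` line needs the OSD's block device, shown here as a placeholder):

```shell
# Inodes first: IUse% of 100% reproduces exactly this symptom.
df -i /var/lib/ceph/osd/ceph-123 2>/dev/null || df -i /
# Then the free-extent histogram: lots of free space split into tiny
# extents can also yield ENOSPC for larger allocations:
#   xfs_db -r -c freesp /dev/XXX    # XXX = device backing ceph-123
```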
f97000d420:op=0x7ff970066b80:24RGWCloneMetaLogCoroutine:
> >>>> operate()
> >>>> 2017-04-20 16:43:04.917238 7ff9777e6700 20 rgw meta sync: operate:
> >>>> shard_id=20: init request
> >>>> 2017-04-20 16:43:04.917240 7ff9777e6700 20
> >>>> cr:s=0x7ff97000d420:op=0x7ff970066b80:24RGWCloneMetaLogCoroutine:
> >>>> operate()
> >>>> 2017-04-20 16:43:04.917241 7ff9777e6700 20 rgw meta sync: operate:
> >>>> shard_id=20: reading shard status
> >>>> 2017-04-20 16:43:04.917303 7ff9777e6700 20 run: stack=0x7ff97000d420
> is
> >>>> io
> >>>> blocked
> >>>> 2017-04-20 16:43:04.918285 7ff9777e6700 20
> >>>> cr:s=0x7ff97000d420:op=0x7ff970066b80:24RGWCloneMetaLogCoroutine:
> >>>> operate()
> >>>> 2017-04-20 16:43:04.918295 7ff9777e6700 20 rgw meta sync: operate:
> >>>> shard_id=20: reading shard status complete
> >>>> 2017-04-20 16:43:04.918307 7ff9777e6700 20 rgw meta sync: shard_id=20
> >>>> marker=1_1492686039.901886_5551978.1 last_update=2017-04-20
> >>>> 13:00:39.0.901886s
> >>>> 2017-04-20 16:43:04.918316 7ff9777e6700 20
> >>>> cr:s=0x7ff97000d420:op=0x7ff970066b80:24RGWCloneMetaLogCoroutine:
> >>>> operate()
> >>>> 2017-04-20 16:43:04.918317 7ff9777e6700 20 rgw meta sync: operate:
> >>>> shard_id=20: sending rest request
> >>>> 2017-04-20 16:43:04.918381 7ff9777e6700 20 RGWEnv::set(): HTTP_DATE:
> Thu
> >>>> Apr
> >>>> 20 14:43:04 2017
> >>>> 2017-04-20 16:43:04.918390 7ff9777e6700 20 > HTTP_DATE -> Thu Apr 20
> >>>> 14:43:04 2017
> >>>> 2017-04-20 16:43:04.918404 7ff9777e6700 10 get_canon_resource():
> >>>> dest=/admin/log
> >>>> 2017-04-20 16:43:04.918406 7ff9777e6700 10 generated canonical header:
> >>>> GET
> >>>>
> >>>> --
> >>>> Kind regards,
> >>>>
> >>>> Ben Morrice
> >>>>
> >>>>
> >>>> Ben Morrice | e: ben.morr...@epfl.ch | t: +41-21-693-9670
> >>>> EPFL / BBP
> >>>> Biotech Campus
> >>>> Chemin des Mines 9
> >>>> 1202 Geneva
> >>>> Switzerland
> >>>>
> >
> >
>
--
Łukasz Jagiełło
lukaszjagielloorg
ph/ceph/blame/v10.2.6/src/rgw/rgw_rest.cc#L1781-L1782
>
> I'm really not sure we want to revert them. Still, it may be that they just
> unhid a misconfiguration issue while fixing the problems we had with the
> handling of virtual hosted buckets.
>
> Regards,
> Radek
>
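One concrete thing to check on the virtual-hosted-bucket path: `rgw dns name` must match the hostname clients put in the Host header, otherwise bucket-in-hostname requests can be rejected. A hypothetical ceph.conf fragment (section name and hostname are placeholders):

```
[client.rgw.gateway]
rgw dns name = s3.example.com
```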