Dear ceph overlords. It seems that the ceph download server is down.
Hello,
On Fri, 24 Jun 2016 15:45:52 -0400 Wade Holler wrote:
> Not reasonable as you say :
>
> vm.min_free_kbytes = 90112
>
Yeah, my nodes with IB adapters all have that set to at least 512MB, and to
1GB if they have more than 64GB of RAM.
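A minimal sketch of checking and raising that reserve, assuming a standard
sysctl setup (the drop-in file name below is only an example):

  # current value
  sysctl vm.min_free_kbytes
  # raise to 512MB for the running kernel
  sysctl -w vm.min_free_kbytes=524288
  # persist across reboots (picked up by sysctl --system / on boot)
  echo "vm.min_free_kbytes = 524288" > /etc/sysctl.d/90-min-free.conf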
> we're in recovery post expansion (48->54 OSDs) right now but free -t is:
Not reasonable as you say :
vm.min_free_kbytes = 90112
we're in recovery post expansion (48->54 OSDs) right now but free -t is:
# free -t
              total        used        free      shared  buff/cache   available
Mem:      693097104   378383384    36870080      369292   277843640     2509313
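For comparison with the numbers above, a quick sketch of reading the reserve
next to the kernel's own view of memory (standard procfs paths):

  sysctl vm.min_free_kbytes
  grep -E 'MemTotal|MemFree|MemAvailable' /proc/meminfo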
On Thu, Jun 23, 2016 at 9:01 AM, Yoann Moulin wrote:
> On 23/06/2016 08:25, Sarni Sofiane wrote:
>> Hi Florian,
>>
>
>
>> On 23.06.16 06:25, "ceph-users on behalf of Florian Haas"
>> wrote:
>>
>>> On Wed, Jun 22, 2016 at 10:56 AM, Yoann Moulin wrote:
Hello Florian,
> On Tue, Ju
Oops, that reminds me, do you have min_free_kbytes set to something
reasonable like at least 2-4GB?
Warren Wang
On 6/24/16, 10:23 AM, "Wade Holler" wrote:
>On the vm.vfs_cache_pressure = 1: We had this initially and I still
>think it is the best choice for most configs. However with our la
Hi,
I'm facing the same thing after I reinstalled a node directly on Jewel...
Reading: http://thread.gmane.org/gmane.comp.file-systems.ceph.devel/31917
I can confirm that running "udevadm trigger -c add -s block" fires the udev
rules and brings ceph-osd up.
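As a rough sketch for verifying the result (assuming the usual Jewel systemd
units; adjust the pattern to your deployment):

  udevadm trigger -c add -s block
  systemctl list-units 'ceph-osd@*'
  ceph osd tree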
Thing is : I now have reinstalled
On the vm.vfs_cache_pressure = 1: We had this initially and I still
think it is the best choice for most configs. However with our large
memory footprint, vfs_cache_pressure=1 increased the likelihood of
hitting an issue where our write response time would double; then a
drop of caches would ret
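For anyone reproducing this, the knobs discussed here are plain sysctls; a
sketch using the values mentioned in this thread (not a recommendation):

  # favour keeping dentry/inode caches, as tried initially
  sysctl -w vm.vfs_cache_pressure=1
  # kernel default, reclaims VFS caches more aggressively
  sysctl -w vm.vfs_cache_pressure=100
  # the manual cache drop referred to above
  echo 3 > /proc/sys/vm/drop_caches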
Hi Loïc,
Sorry for the delay. Well, it's a vanilla CentOS ISO image downloaded from a
centos.org mirror:
[root@hulk-stg ~]# cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)
This issue happens after a Ceph upgrade from Hammer; I haven't tested with the distro
starting with a fresh Ceph ins