Hi Ceph,
After moving to legacy tunables the cluster goes back to active+clean, but if I
revert to optimal or firefly it moves to "active+remapped" while health stays
"OK". With the legacy settings, however, I get "HEALTH_WARN crush map has legacy
tunables". Does anyone have any idea why? I cleared the warning
ce
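For reference, the tunables are switched with the ceph CLI, and the legacy
warning can be silenced in ceph.conf; a minimal sketch (the profile names and
the config option come from the Ceph docs, nothing here is quoted from the
original mail):
    ceph osd crush tunables legacy      # or: firefly / optimal
    # to keep legacy tunables but hide the HEALTH_WARN, add on the monitors:
    [mon]
        mon warn on legacy crush tunables = false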
Hi ceph,
I followed the guide at
http://ceph.com/docs/master/start/quick-start-preflight/#ceph-deploy-setup
and used the release rpm via rpm -Uvh
http://ceph.com/rpm-firefly/rhel6/noarch/ceph-release-1-0.el6.noarch.rpm
but there was no automatic directory creation and no udev rules.
As suggested in the guide I set "osd pool defa
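As a quick sanity check, the following shows whether the Firefly packages
actually shipped the udev rules and default directories (file and directory
names are assumptions based on typical ceph packaging, not quoted from the
thread):
    rpm -ql ceph | grep -i udev                  # udev rules installed by the package
    ls /lib/udev/rules.d/ | grep -i ceph         # e.g. 95-ceph-osd.rules, if present
    ls -ld /var/lib/ceph/osd /var/lib/ceph/mon   # default data directories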
Hi,
Late to the party, but just to be sure, does the switch support mc-lag
or mlag by any chance?
There might be firmware updates that add support for it.
Cheers,
Josef
Sven Budde wrote 2014-06-06 13:06:
Hi all,
thanks for the replies and the heads-up on the different bonding options.
I'll toy around with th
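For context, MC-LAG/MLAG on the switch side is what lets an 802.3ad (LACP) bond
span two switches; a rough, purely illustrative sketch of such a bond with
iproute2 (interface names are assumptions):
    ip link add bond0 type bond mode 802.3ad
    ip link set eth0 down && ip link set eth0 master bond0
    ip link set eth1 down && ip link set eth1 master bond0
    ip link set bond0 up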
Hi,
we are seeing similar behavior. The log files are growing quickly and are
filled with messages like this:
root@aixit-ceph-osd01:/var/log/ceph# tail -f ceph-osd.1.log
2014-06-07 23:36:39.708785 7f7836fbb700 0
xfsfilestorebackend(/var/lib/ceph/osd/ceph-1) set_extsize: FSSETXATTR: (22)
Invalid argument
20
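If your Ceph version already has the filestore XFS extsize option, one possible
way to stop these messages (an assumption on my part, not advice from this
thread) is to disable the extsize hint and restart the OSDs:
    [osd]
        filestore xfs disable extsize = true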
>Hi,
>
>On 05.06.2014 11:27, ale...@kurnosov.spb.ru wrote:
>>
>> ceph 0.72.2 on SL6.5 from the official repo.
>>
>> After taking one of the OSDs down (in order to mark the server out afterwards),
>> one of the PGs became incomplete: $ ceph health detail HEALTH_WARN 1 pgs
>> incomplete; 1 pgs stuck inactive; 1 pgs stuck unclean; 2 r
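The usual first step with an incomplete PG is to identify it and query its
peering state; a hedged sketch (the PG id is a placeholder):
    ceph health detail              # lists the incomplete/stuck PG ids
    ceph pg dump_stuck inactive     # shows stuck PGs and their acting OSDs
    ceph pg 2.1f query              # replace 2.1f with the reported PG id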
On Sat, 7 Jun 2014, Anil Dhingra wrote:
> Hi guys,
>
> Finally writing, after losing my patience configuring my cluster multiple
> times and still not being able to reach active+clean. It looks like it is
> almost impossible to configure this on CentOS 6.5.
>
> As I have to prepare a ceph+cinder POC b
Hi guys,
Finally writing, after losing my patience configuring my cluster multiple
times and still not being able to reach active+clean. It looks like it is
almost impossible to configure this on CentOS 6.5.
I have to prepare a ceph+cinder POC, but with this configuration it is
difficult to convince anyone. Als
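Without seeing the cluster it is hard to say, but on small test setups the
usual suspects are the pool replication size versus the number of OSDs/hosts
and the CRUSH failure domain; a hedged diagnostic sketch (pool name and values
are examples only, not taken from this thread):
    ceph -s
    ceph osd tree                        # how many OSDs/hosts are up and in?
    ceph osd pool set rbd size 2         # example: match pool size to available hosts
    # for a single-host test cluster, in ceph.conf before deploying:
    [global]
        osd crush chooseleaf type = 0    # allow replicas on the same host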
I think you mean "osd op threads"; you can find info at
http://ceph.com/docs/master/rados/configuration/osd-config-ref/
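For reference, the setting documented there goes under the [osd] section of
ceph.conf, roughly like this (the value is only an example):
    [osd]
        osd op threads = 4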
On Sat, Jun 7, 2014 at 9:47 PM, Cao, Buddy wrote:
> Hi,
>
> Is the "osd ops threads" parameter still valid in Firefly? I did not find any
> info related to it in the ceph.com onli
Hi,
Is the "osd ops threads" parameter still valid in Firefly? I did not find any
info related to it in the ceph.com online documentation.
Wei Cao (Buddy)
Thanks.
> There's some prefetching and stuff, but the rbd library and RADOS storage are
> capable of issuing reads and writes in any size (well, down to the minimal
> size of the underlying physical disk).
> There are some scenarios where you will see it writing a lot more if you use
> layering
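The layering case refers to clones: a small write to a cloned image may first
copy up the whole backing object from the parent, which inflates the observed
write traffic. A hedged illustration with the rbd CLI (pool and image names are
made up):
    rbd snap create rbd/parent@base
    rbd snap protect rbd/parent@base
    rbd clone rbd/parent@base rbd/child
    # writes to rbd/child touching a not-yet-copied object first copy it up from the parent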
Hmm, right... and it's better to promote official tools like
ceph-deploy, which is great.
Cheers
On 07/06/2014 12:03, Loic Dachary wrote:
> Hi,
>
> This script is unfit for the purpose of trying erasure coded pools. Using
>
> http://ceph.com/docs/master/start/quick-ceph-deploy/
>
> on a singl
Hi koleofuscus,
In order to figure out why the mon does not start after a reboot, it would be
useful to see the logs.
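In case it helps, these are the default places to look (paths assume a standard
package install):
    ls /var/log/ceph/                           # default log directory
    tail -n 200 /var/log/ceph/ceph-mon.*.log    # monitor log from the failed start
    # for more detail, set "debug mon = 10" under [mon] in ceph.conf and retry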
Cheers
On 06/06/2014 14:05, Koleos Fuskus wrote:
> Hello,
>
> My environment is only one machine with only one hard disk. I cannot restart
> the cluster after the machine reboots.
>
>
Hi,
This script is unfit for the purpose of trying erasure coded pools. Using
http://ceph.com/docs/master/start/quick-ceph-deploy/
on a single node is definitely the best option. The only thing that is not
entirely obvious (but works great) is
ceph-deploy osd prepare mynode:/var/local/osd1
cep
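For completeness, the full single-node sequence from that page goes roughly as
follows (reconstructed from the linked doc, so treat it as a sketch; the
hostname and path are reused from above):
    ceph-deploy new mynode
    ceph-deploy install mynode
    ceph-deploy mon create-initial
    ceph-deploy osd prepare mynode:/var/local/osd1
    ceph-deploy osd activate mynode:/var/local/osd1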
On 06/06/2014 14:05, Koleos Fuskus wrote:
> BTW, I know it is not a good idea to run all the mons and all the OSDs on the
> same machine, on the same disk. But on the other hand, it facilitates testing
> with small resources. It would be great to deploy such a small environment easily.
Loic wrote a nice scr