Good day!
Is it possible to change the frontend to something different than Apache? For
example Nginx.
Regards, Artem Silenkov
2013/11/30 Sebastian
> Hi Yehuda,
>
>
> > It's interesting: the responses are received, but it seems that they
> > aren't being handled (hence the following pings). There ar
2013/11/25 James Harper:
> Is the OS doing anything apart from ceph? Would booting a ramdisk-only system
> from USB or compact flash work?
This is the same question I asked some time ago.
Is it OK to use a USB drive as the standard OS disk (OS, not OSD!)? OSDs and
journals will be on dedicated disks.
USB wi
Are you using the Inktank-patched FastCGI server? http://gitbuilder.ceph.com
Alternatively, try another web server like nginx, as already suggested.
On Nov 29, 2013 12:23 PM, "German Anders" wrote:
> Thanks a lot Sebastian, I'm going to try that. Also, I'm having an issue
> while trying to test a
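(Not from the thread, just a sketch.) If the test that's giving trouble is the usual
S3-API check against radosgw, something like the boto snippet below is a quick way to
confirm the gateway answers, regardless of whether Apache or nginx sits in front of the
FastCGI socket. The hostname, port and credentials are placeholders, not values from
this thread:

    # Hypothetical connectivity check against radosgw's S3 API (boto 2.x).
    # Replace host and keys with your gateway's values.
    import boto
    import boto.s3.connection

    conn = boto.connect_s3(
        aws_access_key_id='ACCESS_KEY',          # placeholder
        aws_secret_access_key='SECRET_KEY',      # placeholder
        host='gateway.example.com', port=80,     # placeholder gateway endpoint
        is_secure=False,
        calling_format=boto.s3.connection.OrdinaryCallingFormat(),
    )
    for bucket in conn.get_all_buckets():
        print(bucket.name)

If this works but your own client doesn't, the problem is more likely in the client
configuration than in the frontend itself.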
> > Is the OS doing anything apart from ceph? Would booting a ramdisk-only
> > system from USB or compact flash work?
I haven't tested this kind of configuration myself but I can't think of
anything that would preclude this type of setup. I'd probably use squashfs
layered with a tmpfs via aufs to avoid
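For what it's worth, here is a rough sketch of that layering (all paths and the image
name are made up for illustration; it needs root and a kernel with aufs support): mount
the squashfs image read-only, put a tmpfs on top as the writable layer, and union the
two with aufs so nothing is ever written back to the USB stick.

    # Illustrative only: read-only squashfs base + tmpfs writable layer via aufs.
    import subprocess

    def sh(cmd):
        print('+', cmd)
        subprocess.check_call(cmd, shell=True)

    sh('mkdir -p /mnt/ro /mnt/rw /mnt/root')
    sh('mount -t squashfs -o loop /boot/root.squashfs /mnt/ro')      # base image (hypothetical path)
    sh('mount -t tmpfs tmpfs /mnt/rw')                               # RAM-backed writable layer
    sh('mount -t aufs -o br=/mnt/rw=rw:/mnt/ro=ro none /mnt/root')   # union: writes land in tmpfs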
> This journal problem is a bit of wizardry to me; I even had weird
> intermittent issues with OSDs not starting because the journal was not
> found, so please do not hesitate to suggest a better journal setup.
You mentioned using SAS for the journal; if your OSDs are SATA and an expander
is in the data pa
I ran HDTach on one of my VMs and got a graph that looks like this:
___--
The low points are all ~35Mbytes/sec and the high points are all ~60Mbytes/sec.
This is very reproducible.
HDTach does sample reads across the whole disk, so would I be right in thinking
that the variation is du
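One thing to keep in mind when reading a whole-disk scan like that (assuming the VM
disk is an RBD image, which the post doesn't actually say): the image is striped over
4 MB RADOS objects by default, so neighbouring sample points land in different objects
and usually on different OSDs. Purely as an illustration of that mapping:

    # Illustrative only: which RADOS object backs a given guest-disk offset,
    # assuming RBD's default order of 22 (4 MB objects).
    OBJECT_SIZE = 4 * 1024 * 1024  # 2**22 bytes

    def object_index(offset_bytes):
        return offset_bytes // OBJECT_SIZE

    # Two samples 100 MB apart fall in different objects:
    print(object_index(10 * 1024**3))                  # 2560
    print(object_index(10 * 1024**3 + 100 * 1024**2))  # 2585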
Dear all,
Greetings to all, I am new to this list. Please excuse my newbie question. :)
I am running a Ceph cluster with 3 servers and 4 drives / OSDs per server.
So in total there are currently 12 OSDs running in the cluster. I set the PG
(Placement Group) count to 600 based on the recommendation of the calcul
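In case it helps later readers, the rule of thumb from the Ceph docs is roughly 100 PGs
per OSD divided by the pool's replica count, often rounded up to the next power of two.
A small sketch (the replica count of 2 is an assumption, not something stated above):

    # Rule-of-thumb PG count: (OSDs * 100) / replicas, optionally rounded
    # up to a power of two. The replica count used below is assumed, not quoted.
    def pg_guideline(num_osds, replicas, pgs_per_osd=100, round_pow2=False):
        raw = num_osds * pgs_per_osd // replicas
        if not round_pow2:
            return raw
        power = 1
        while power < raw:
            power *= 2
        return power

    print(pg_guideline(12, 2))                   # 600, matching the value above
    print(pg_guideline(12, 2, round_pow2=True))  # 1024 with power-of-two rounding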