I am new to Ceph and am looking at trying to utilize some existing hardware
to perform 2 tasks per node. We have two servers which can hold 12 and 16
drives, and probably four servers which take 4 drives each.
Ideally, we would like to install XCP on each of these servers,
and use Ceph to clust
Ello —
I've been watching the design and features of Ceph with great eagerness,
especially compared to the current distributed file systems I use. One of the
pains with VM workloads is when writes stall for more than a few seconds;
virtual machines that think they are communicating with a r
On Thu, Mar 6, 2014 at 2:06 PM, McNamara, Bradley
wrote:
> I'm confused...
>
> The bug tracker says this was resolved ten days ago.
The release for that feature is not out yet.
Also, I actually used ceph-deploy on 2/12/2014 to add two monitors to
my cluster, and it worked, and the documentation
On 03/06/2014 08:38 PM, Dan van der Ster wrote:
Hi all,
We're about to go live with some qemu rate limiting to RBD, and I
wanted to crosscheck our values with this list, in case someone can
chime in with their experience or known best practices.
The only reasonable, non-test-suite values I f
Hi all,
We're about to go live with some qemu rate limiting to RBD, and I wanted
to crosscheck our values with this list, in case someone can chime in with
their experience or known best practices.
The only reasonable, non-test-suite values I found on the web are:
iops_wr 200
iops_rd 400
bps_
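For reference, those limits correspond to QEMU's per-drive throttle options
(iops_rd=, iops_wr=, bps_wr=) and can also be applied to a running libvirt
guest. A minimal sketch, assuming a domain named vm1 with device vda (both
placeholders), and leaving the bps value out since it is truncated above:

    # Cap the guest's RBD disk at 400 read IOPS and 200 write IOPS
    virsh blkdeviotune vm1 vda --read-iops-sec 400 --write-iops-sec 200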
I'm confused...
The bug tracker says this was resolved ten days ago. Also, I actually used
ceph-deploy on 2/12/2014 to add two monitors to my cluster, and it worked, and
the documentation says it can be done. However, I believe that I added the new
mons to the ceph.conf in the 'mon_initial_m
On Thu, 2014-03-06 at 09:02 -0500, Alfredo Deza wrote:
> > From the admin node:-
> > http://pastebin.com/AYKgevyF
>
> Ah, you added a monitor with ceph-deploy, but that is not something that
> is supported (yet).
>
> See: http://tracker.ceph.com/issues/6638
>
> This should be released in the upcomi
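Until that lands in ceph-deploy, the manual procedure from the Ceph
documentation is roughly the following sketch, assuming a new monitor named
newmon at a placeholder address:

    # Fetch the mon keyring and current monmap from the cluster
    ceph auth get mon. -o /tmp/mon.keyring
    ceph mon getmap -o /tmp/monmap
    # Initialize the new monitor's data directory
    ceph-mon -i newmon --mkfs --monmap /tmp/monmap --keyring /tmp/mon.keyring
    # Register it with the cluster and start it
    ceph mon add newmon 192.0.2.10:6789
    service ceph start mon.newmon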
On 05/03/2014 15:34, Guang Yang wrote:
Hello all,
Recently I have been working on Ceph performance analysis on our cluster; our
OSD hardware looks like:
11 SATA disks, 4TB each, 7200RPM
48GB RAM
When we broke down the latency, we found that half of the latency
(average latency is ar
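For a latency breakdown like this, the usual starting points are the OSD
admin socket and the cluster-wide perf summary. A minimal sketch, assuming
osd.0 and the default socket path:

    # Slowest recent ops on one OSD, with per-event timestamps
    ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok dump_historic_ops
    # Commit/apply latency for every OSD in the cluster
    ceph osd perf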
On Wed, Mar 5, 2014 at 12:49 PM, Jonathan Gowar wrote:
> On Wed, 2014-03-05 at 16:35 +, Joao Eduardo Luis wrote:
>> On 03/05/2014 02:30 PM, Jonathan Gowar wrote:
>> > In an attempt to add a mon server, I appear to have completely broken a
>> > mon service to the cluster:-
>>
>> Did you start t
On 03/06/2014 01:51 AM, Robert van Leeuwen wrote:
Hi,
We experience something similar with our Openstack Swift setup.
You can change the sysctl "vm.vfs_cache_pressure" to make sure more inodes are
being kept in cache.
(Do not set this to 0 because you will trigger the OOM killer at some point ;
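A minimal sketch of applying that tunable; the value 10 is illustrative, not
a recommendation from the thread:

    # Bias the kernel toward keeping dentry/inode caches (default is 100)
    sysctl -w vm.vfs_cache_pressure=10
    # Persist the setting across reboots
    echo 'vm.vfs_cache_pressure = 10' >> /etc/sysctl.conf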
Good spot!!!
The same problem here!!!
Best,
G.
On Thu, 6 Mar 2014 12:28:26 +0100 (CET), Jerker Nyberg wrote:
I had this error yesterday. I had run out of storage at
/var/lib/ceph/mon/ on the local file system on the monitor.
Kind regards,
Jerker Nyberg
On Wed, 5 Mar 2014, Georgios Dimitra
I had this error yesterday. I had run out of storage at
/var/lib/ceph/mon/ on the local file system on the monitor.
Kind regards,
Jerker Nyberg
On Wed, 5 Mar 2014, Georgios Dimitrakakis wrote:
Can someone help me with this error:
2014-03-05 14:54:27.253711 7f654fd3d700 0
mon.client1
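A quick way to confirm (and partially relieve) that condition; a sketch
assuming a default install, using the mon name from the log above:

    # Check free space on the monitor's data directory
    df -h /var/lib/ceph/mon
    # Ask the monitor to compact its store and reclaim some space
    ceph tell mon.client1 compact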
I've managed to get joao's assistance in tracking down the issue. I'll be
updating bug 7210.
Thanks joao and all!
- WP
On Thu, Mar 6, 2014 at 6:25 PM, YIP Wai Peng wrote:
> Ok, I think I got bitten by http://tracker.ceph.com/issues/7210, or
> rather, the cppool command in
> http://www.seba
Ok, I think I got bitten by http://tracker.ceph.com/issues/7210, or rather,
the cppool command in
http://www.sebastien-han.fr/blog/2013/03/12/ceph-change-pg-number-on-the-fly/
I did use "rados cppool " in a pool with snapshots
(openstack glance). A user feedback that ceph crashed when he deleted
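For context, the blog's procedure is essentially the sketch below (pool names
are placeholders); per issue 7210, rados cppool does not carry snapshots
over, so it should not be used on pools that have them:

    # Create a new pool with the desired pg_num, copy, then swap names
    ceph osd pool create glance-new 128
    rados cppool glance glance-new   # WARNING: snapshots are not copied (issue 7210)
    ceph osd pool delete glance glance --yes-i-really-really-mean-it
    ceph osd pool rename glance-new glance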
Hi,
I am currently facing a horrible situation. All my mons are crashing on
startup.
Here's a dump of mon.a.log. The last few ops are below. It seems to crash
trying to remove a snap? Any ideas?
- WP
-10> 2014-03-06 17:04:38.838490 7fb2a541a700 1 -- 192.168.116.24:6789/0 -->
osd.9 192.168.
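When every mon crashes at startup like this, a common first diagnostic step
(a sketch, not something from the thread) is to run one monitor in the
foreground with verbose logging to capture the full stack trace:

    # Run mon.a in the foreground with high mon/messenger debug levels
    ceph-mon -i a -d --debug-mon 20 --debug-ms 1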