Hi Igor
I suspect you have very much the same problem as me.
https://www.mail-archive.com/ceph-users@lists.ceph.com/msg22260.html
Basically Samsung drives (like many SATA SSDs) are very much hit and miss, so
you will need to test them as described here to see if they are any good.
http://ww
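The test being referred to is, presumably, a single-job, queue-depth-1 synchronous 4k
write run, which mimics the load a Ceph journal puts on a drive. A rough sketch,
assuming /dev/sdX is the SSD under test and holds nothing you care about:
$ # synchronous 4k writes, one job, iodepth 1 - journal-style load
$ fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
      --numjobs=1 --iodepth=1 --runtime=60 --time_based --name=journal-test
Decent journal SSDs sustain thousands of IOPS under this load; the consumer drives
that cause trouble often drop to a few hundred or less.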
wrote:
Hi.
Read this thread here:
https://www.mail-archive.com/ceph-users@lists.ceph.com/msg17360.html
Best regards, Irek Fasikhov (Фасихов Ирек Нургаязович)
Mobile: +79229045757
2015-08-12 14:52 GMT+03:00 Pieter Koorts :
Hi
Something that's been bugging me for a while: I am trying to diagnose iowait
time within KVM guests. Guests doing reads or writes tend to do about 50% to 90%
iowait, but the host itself is only doing about 1% to 2% iowait, so the result
is that the guests are extremely slow.
I currently run 3x ho
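One way to narrow this down is to compare what the guest and the OSD hosts see at
the same moment; if guest await is high while OSD commit/apply latency stays low,
the bottleneck is more likely in the QEMU/librbd path (e.g. rbd cache disabled) than
in the disks. A sketch (iostat comes from the sysstat package):
$ # inside a guest: per-device await and %util
$ iostat -x 2
$ # on any node with an admin key: per-OSD commit/apply latency in ms
$ sudo ceph osd perf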
:54 PM, Pieter Koorts wrote:
Hi
I suspect something more sinister may be going on. I have set the values
(though smaller) on my cluster, but the same issue happens. I also find that when
the VM is trying to start there might be an IRQ flood, as processes like ksoftirqd
seem to use more CPU than they
et_size: 0, seed: 0} 1800s
x1 stripe_width 0
Thanks
Pieter
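A quick way to see whether softirqs really flood when the VM starts is to watch the
counters and the per-CPU %soft figure while reproducing it (a sketch; mpstat is in
the sysstat package):
$ # highlights whichever softirq class is incrementing fastest
$ watch -d -n1 cat /proc/softirqs
$ # per-CPU breakdown; look at the %soft and %iowait columns
$ mpstat -P ALL 2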
On Aug 05, 2015, at 03:37 PM, Burkhard Linke
wrote:
Hi,
On 08/05/2015 03:09 PM, Pieter Koorts wrote:
Hi,
This is my OSD dump below
###
osc-mgmt-1:~$ sudo ceph osd dump | grep pool
p
###
I have also attached my crushmap (plain text version) if that can provide any detail too.
Thanks
Pieter
On Aug 05, 2015, at 02:02 PM, Burkhard Linke wrote:
Hi,
On 08/05/2015 02:54 PM, Pieter Koorts wrote:
Hi Burkhard,
I seemed to have missed that part but even though allowing access (rwx) to the
ait-5" but I
still seem to get it.
Thanks
Pieter
On Aug 05, 2015, at 01:42 PM, Burkhard Linke
wrote:
Hi,
On 08/05/2015 02:13 PM, Pieter Koorts wrote:
Hi All,
This seems to be a weird issue. Firstly, all deployment is done with
"ceph-deploy", with 3 host machines acting as MON and OSD, using the Hammer
release on Ubuntu 14.04.3 and running KVM (libvirt).
When using vanilla CEPH (single rbd pool, no log device or cache tiering), the
virtual machin
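For what it is worth, when a cache tier sits in front of the rbd pool the client key
normally needs rwx on the cache pool as well as on the backing pool. Something along
these lines (the client name and pool names are only placeholders):
$ ceph auth caps client.libvirt \
      mon 'allow r' \
      osd 'allow rwx pool=rbd, allow rwx pool=rbd-cache'
$ ceph auth get client.libvirt    # confirm the caps afterwards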
> sockets and memory banks to Ceph/Compute, but we haven't done a lot of
> > testing yet afaik.
> >
> > Mark
> >
> >
> > On 11/12/2014 07:45 AM, Pieter Koorts wrote:
Hi,
A while back on a blog I saw it mentioned that Ceph should not be run on
compute nodes and, in the general sense, should be on dedicated hardware.
Does this really still apply?
An example: if you have nodes comprised of
16+ cores
256GB+ RAM
Dual 10GBE Network
2+8 OSD (SSD log + HDD store)
I unde
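If Ceph and guests do end up sharing nodes like the spec above, one mitigation (as
Mark hints) is to confine the OSDs to one socket's CPUs and memory and leave the
other to the hypervisor. A rough sketch with numactl; the node numbers and OSD id are
placeholders, and in practice this belongs in the init script rather than run by hand:
$ numactl --hardware                      # show sockets with their CPUs and memory
$ # run an OSD confined to socket 0, leaving socket 1 for guests
$ sudo numactl --cpunodebind=0 --membind=0 /usr/bin/ceph-osd -i 0 --cluster ceph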
f you
create a newfs on other pools. But I think I saw a discussion
somewhere for having an 'rmfs' command in the future:)
- Message from Pieter Koorts -
Date: Thu, 05 Jun 2014 11:12:46 +0000 (GMT)
From: Pieter Koorts
Subject: [ceph-users] Remove data and metadata pools
Hi,
Is it possible to remove the metadata and data pools with Firefly (0.80)? As in,
delete them...
We will not be using any form of CephFS and the cluster is simply designed for
RBD devices, so these pools and their abilities will likely never be used
either; however, when I try to remove them it
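The "newfs on other pools" workaround mentioned above amounts to pointing the MDS map
at a pair of throwaway pools so that the original data/metadata pools are no longer
referenced and can then be deleted. Roughly, with placeholder pool names and the
numeric IDs taken from the osd dump output:
$ ceph osd pool create unused-meta 8
$ ceph osd pool create unused-data 8
$ ceph osd dump | grep pool               # note the IDs of the two new pools
$ ceph mds newfs <meta-id> <data-id> --yes-i-really-mean-it
$ ceph osd pool delete metadata metadata --yes-i-really-really-mean-it
$ ceph osd pool delete data data --yes-i-really-really-mean-it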
If looking for a DRBD alternative and not wanting to use CephFS, is it not
possible to just use something like OCFS2 or GFS on top of an RBD block device,
with all worker nodes accessing it via GFS or OCFS2 (obviously with
write-through mode)?
Would this method not present some advantages over DRBD
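In principle yes: the image just has to be mapped on every worker node and formatted
with a cluster filesystem, with the usual OCFS2/GFS2 cluster stack (o2cb or dlm)
configured alongside. A very rough sketch with OCFS2; the image name, size and device
path are placeholders:
$ rbd create shared-fs --size 102400            # 100 GB image
$ sudo rbd map shared-fs                        # appears as e.g. /dev/rbd0
$ sudo mkfs.ocfs2 -N 4 -L shared-fs /dev/rbd0   # 4 node slots
$ # then on every worker node (o2cb cluster already configured):
$ sudo rbd map shared-fs && sudo mount -t ocfs2 /dev/rbd0 /mnt/shared
The caching caveat is real: rbd cache should be off (or write-through) for an image
that several hosts write to at once.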
ou need, but
> not too many.
>
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Pieter Koorts
> Sent: Monday, May 12, 2014 5:21 AM
> To: ceph-us...@ceph.com
> Subject: [ceph-users] CEPH placement groups and pool sizes
>
> Hi,
>
> Been doi
Hi,
Been doing some reading on the CEPH documentation and just wanted to clarify
whether anyone knows the (approximate) correct number of PGs for CEPH.
What I mean is, let's say I have created one pool with 4096 placement groups.
Now, instead of one pool, I want two, so if I were to create 2 pools instead would
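The figure people usually work from is a total PG budget for the whole cluster rather
than a per-pool number: roughly 100 PGs per OSD divided by the replica count, rounded
up to a power of two, and then shared out between the pools according to how much data
each will hold. A worked example with made-up numbers (24 OSDs, size 3):
$ echo $(( 24 * 100 / 3 ))
800
$ # round up to 1024 PGs in total; two equally busy pools would get ~512 each
So, roughly speaking, two pools would split the budget between them rather than each
getting the full 4096.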
Hello,
Just a general question really. What is the recommended node size for Ceph
storage clusters? The Ceph documentation does say to use more, smaller
nodes rather than fewer large nodes, but what constitutes "large" in terms
of Ceph? Is it 16 OSDs, or more like 32 OSDs?
Where does Ceph tail off
Hello,
I am having an issue with CEPH in that it won't communicate over the
private network, so the moment I turn the firewalls on it will start
marking OSDs offline. I have specified in the ceph.conf file that it has a
separate cluster network, but it looks like it is not obeying my orders.
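For reference, both settings live in the [global] section, and the firewall has to
pass Ceph's ports on both networks, because OSD heartbeats travel over the public and
the cluster network. A sketch with placeholder subnets, using ufw since this is Ubuntu:
[global]
    public network  = 192.168.1.0/24    # MON and client traffic
    cluster network = 10.0.1.0/24       # OSD replication and heartbeats
$ sudo ufw allow 6789/tcp                                              # monitors
$ sudo ufw allow proto tcp from 192.168.1.0/24 to any port 6800:7300   # OSDs, public
$ sudo ufw allow proto tcp from 10.0.1.0/24 to any port 6800:7300      # OSDs, cluster
The OSDs only bind to these networks at startup, so they need a restart after the
ceph.conf change.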
zew...@artegence.com> wrote:
> On 2014-03-03, at 10:41:04,
> Pieter Koorts wrote:
Hi
Does the disk encryption have a major impact on performance for a busy(ish)
cluster?
What are the thoughts of having the encryption enabled for all disks by
default?
- Pieter
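On the performance side, dmcrypt sits below the OSD, so the cost is mostly CPU and is
usually modest for spinning disks when the hosts have AES-NI. With ceph-deploy the
encryption is just a flag at OSD creation time; the host and device names below are
placeholders:
$ grep -m1 -o aes /proc/cpuinfo            # non-empty output means AES-NI is available
$ ceph-deploy osd create --dmcrypt node1:sdb:sdc
$ # the LUKS keys land in the key dir on the OSD host
$ #   (default /etc/ceph/dmcrypt-keys; see --dmcrypt-key-dir)
Turning it on everywhere is mostly a key-management question: lose that key directory
and the OSDs on those disks are gone.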