Hi,
As per Joe's and Iban's suggestions, adding one more OSD makes everything
fine in my setup, but only if the OSDs are backed by directories, as shown
in the quick installation steps, not by disks.
Is there any restriction on using disks as OSDs? Is a virtual disk OK to
use for an OSD? Is 10 GB size of vi
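For reference, a rough sketch of building an OSD on a whole disk with
ceph-deploy (host and device names here are placeholders; note the disk is
wiped in the process):
$ ceph-deploy disk zap node1:sdb
$ ceph-deploy osd prepare node1:sdb
$ ceph-deploy osd activate node1:sdb1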
Hi, All.
I have run into a problem.
The status of one PG was inconsistent. RBD found this device and deleted
it; now the OSD reports the following error:
cod 0'0 active+inconsistent snaptrimq=[15~1,89~1]] exit
Started/Primary/Active/Recovering 0.025609 1 0.53
-8> 2014-07-23 12:03:13
Ricardo,
Thought I'd share my testing results.
I've been using IPoIB with ceph for quite some time now. I've got QDR
osd/mon/client servers serving rbd images to a KVM hypervisor. I've done some
performance testing using both rados and guest VM benchmarks while running the
last three stable ve
Hello,
I'm running a test cluster (mon and osd are debian 7
with 3.2.57-3+deb7u2 kernel). The client is a debian 7
with a 3.15.4 kernel that I compiled myself.
The cluster has 3 monitors and 16 osd servers.
I created a pool (periph) and used it a bit and then
I decided to create some buckets
Hi all,
I am studying Nagios for monitoring Ceph features.
Different Nagios plugins monitor Ceph cluster health, OSD status, monitor
status, etc.
My questions are:
* Does Nagios monitor Ceph at the cluster, pool, and per-PG level for
- CPU utilization
- memory utilization
- network utilization
- tot
On 07/23/2014 03:54 AM, Andrei Mikhailovsky wrote:
Ricardo,
Thought I'd share my testing results.
I've been using IPoIB with ceph for quite some time now. I've got QDR
osd/mon/client servers serving rbd images to a KVM hypervisor. I've done
some performance testing using both rados and guest vm b
Nagios can monitor anything you can script. If there isn’t a plugin for it,
write it yourself; it’s really not hard (a sketch follows below the quoted
message). I’d go for Icinga by the way, which is more actively maintained
than Nagios.
On Jul 23, 2014, at 3:07 PM, pragya jain wrote:
> Hi all,
>
> I am studying nagios for monitoring
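Along those lines, a minimal health check is little more than a wrapper
around the ceph CLI. A sketch, assuming the ceph binary and a readable
keyring on the monitoring host:
#!/bin/sh
# Map "ceph health" output onto Nagios exit codes.
STATUS=$(ceph health 2>/dev/null)
case "$STATUS" in
  HEALTH_OK*)   echo "OK: $STATUS";       exit 0 ;;
  HEALTH_WARN*) echo "WARNING: $STATUS";  exit 1 ;;
  HEALTH_ERR*)  echo "CRITICAL: $STATUS"; exit 2 ;;
  *)            echo "UNKNOWN: no output from ceph"; exit 3 ;;
esac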
Hi Everybody,
We are having Ceph issues where it recovers, then hangs, then crashes. We
are also now getting btrfs bcache errors and are stuck on what to do. We
have got one Ceph node up in the cluster, but when you try to get another
one up, it then goes down; we really are lost. Any ideas of w
We use Zabbix, but the same concept applies when writing your own scripts.
We take advantage of the command
$ ceph -s --format=json 2>/dev/null
stderr comes up with some noise sometimes, so we filter that out (a parsing
example follows below the quote).
On Wed, Jul 23, 2014 at 6:32 AM, Wolfgang Hennerbichler wrote:
> Nagios can monitor anythin
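To pull a single field out of that JSON for alerting, something like this
works (assumes jq is installed; the key layout changes between Ceph
releases, so check your own output first):
$ ceph -s --format=json 2>/dev/null | jq -r '.health.overall_status'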
Hi,
From my tests, I can't import a snapshot from a replicated pool (in
cluster1) into an erasure-coded pool (in cluster2).
Is it a known limitation? A temporary one?
Or did I make a mistake somewhere?
Cluster1 (aka production) is running Ceph 0.67.9, and cluster2
(aka backup) is runnin
OK, I just found this message from Gregory Farnum:
« You can't use erasure coded pools directly with RBD. They're only
suitable for use with RGW or as the base pool for a replicated cache
pool, and you need to be very careful/specific with the configuration. I
believe this is well-documented, so c
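For reference, the arrangement Greg describes (an EC base pool behind a
replicated cache pool) is created roughly like this; a sketch where pool
names and PG counts are placeholders and the cache mode/sizing needs
careful tuning:
$ ceph osd pool create ecpool 128 128 erasure
$ ceph osd pool create cachepool 128 128
$ ceph osd tier add ecpool cachepool
$ ceph osd tier cache-mode cachepool writeback
$ ceph osd tier set-overlay ecpool cachepool
RBD images are then created against ecpool, with client I/O actually
landing on cachepool.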
The 5-minute quick start was deprecated quite some time ago. Use
http://ceph.com/docs/master/start/
On Tue, Jul 22, 2014 at 1:20 AM, Vincenzo Pii wrote:
> Ceph packages are already in Ubuntu 14.04 repositories, no need to add
> more into any sources.list.
> So, undo your changes there and just proc
Hey Olivier,
On 07/23/2014 02:06 PM, Olivier DELHOMME wrote:
Hello,
I'm running a test cluster (mon and osd are debian 7
with 3.2.57-3+deb7u2 kernel). The client is a debian 7
with a 3.15.4 kernel that I compiled myself.
The cluster has 3 monitors and 16 osd servers.
I created a pool (periph)
Keep in mind that this coordination is largely happening above the
FileStore layer, so you are indeed not seeing any code within the
FileStore to support it. :) But operations within the OSD are ordered
on a per-PG basis, and while in-progress writes can overlap, a read
will be blocked until the wr
I'm evaluating Ceph for our new private and public cloud environment. I have a
"working" Ceph cluster running on CentOS 6.5, but have had a heck of a time
figuring out how to get rbd support to connect to CloudStack. Today I found
out that the default kernel is too old, and while I could compile
Brian,
Please see http://ceph.com/docs/master/start/os-recommendations/ I would go
with anything with a 'C' rating matching the version of Ceph that you will
want to install.
On Wed, Jul 23, 2014 at 11:12 AM, Brian Lovett
wrote:
> I'm evaluating ceph for our new private and public cloud enviro
Hello,
Recently I've started seeing very slow read speeds from the rbd images I
have mounted. After some analysis, I suspect the root cause is related
to krbd; if I run the rados benchmark, I see read bandwidth in the
400-600MB/s range; however, if I attempt to read directly from the block
device wi
On Wed, 23 Jul 2014, Steve Anthony wrote:
> Hello,
>
> Recently I've started seeing very slow read speeds from the rbd images I
> have mounted. After some analysis, I suspect the root cause is related
> to krbd; if I run the rados benchmark, I see read bandwidth in the
> 400-600MB/s range; however
Ah, ok. That makes sense. With one concurrent operation I see numbers
more in line with the read speeds I'm seeing from the filesystems on the
rbd images.
# rados -p bench bench 300 seq --no-cleanup -t 1
Total time run:       300.114589
Total reads made:     2795
Read size:            4194304
Ban
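For comparison, raising the concurrency again should reproduce the higher
numbers, since -t sets the number of concurrent operations (16 is the
rados bench default):
$ rados -p bench bench 300 seq --no-cleanup -t 16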
Using elrepo (http://elrepo.org/tiki/tiki-index.php) by adding it to your yum
repositories is much simpler than compiling your own kernel. Once you add the
repository:
1.) Install the kernel:
yum install <kernel>
where <kernel> can be:
kernel-lt (long-term support kernel - http:/
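For CentOS 6, the full sequence looks something like this (the
elrepo-release version below is an example; take the current package name
from elrepo.org):
$ rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
$ rpm -Uvh http://www.elrepo.org/elrepo-release-6-6.el6.elrepo.noarch.rpm
$ yum --enablerepo=elrepo-kernel install kernel-lt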
To be clear - this is on CentOS 6.5 :)
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Bachelder, Kurt
Sent: Wednesday, July 23, 2014 5:10 PM
To: Brian Lovett; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Which OS for fresh install?
Usin
On 07/23/2014 04:09 PM, Bachelder, Kurt wrote:
> 2.) update your grub.conf to boot to the appropriate image (default=0, or
> whatever kernel in the list you want to boot from).
Actually, edit /etc/sysconfig/kernel, set DEFAULTKERNEL=kernel-lt before
installing it.
--
Dimitri Maziuk
Programmer/
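Putting the two replies together, the relevant line in /etc/sysconfig/kernel
would be set like this before installing kernel-lt (a sketch; leave the rest
of the file untouched):
# /etc/sysconfig/kernel
DEFAULTKERNEL=kernel-lt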
Hi.
On Thu, Jul 24, 2014 at 1:07 AM, pragya jain wrote:
> Hi all,
>
> I am studying nagios for monitoring ceph features.
>
> different plugins of nagios monitor ceph cluster health, osd status,
> monitor status etc.
We use these:
https://github.com/rochaporto/ceph-nagios-plugins
and sent patch
Dear all,
I gather that Firefly 0.80.4 has a new feature that removes the need to
install Apache and FastCGI; am I right?
*Standalone radosgw (experimental): The radosgw process can now run in a
standalone mode without an apache (or similar) web server or fastcgi. This
simplifies deployment and can improve perfo
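If it helps, the standalone mode runs radosgw with the embedded civetweb
frontend; a minimal ceph.conf stanza might look like this (the section name
and port are placeholders to adapt, and the feature is experimental in
0.80.4):
[client.radosgw.gateway]
rgw frontends = civetweb port=7480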
Hello, I am running a Ceph cluster (RBD) in a production environment hosting
200 VMs. Under normal circumstances, Ceph's performance is quite good,
but when I delete a snapshot or image, the cluster shows a lot of
blocked requests (generally more than 1000); then the whole cluster hav
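Not a definitive answer, but the knob usually mentioned for
snapshot-deletion load is the snap trim throttle; a sketch of the tweak in
ceph.conf (the value is only a starting point to experiment with, and the
option should be verified against your release):
[osd]
osd snap trim sleep = 0.1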