Thanks, this solved my problem.
I think the documentation should mention this, and also how you need to
configure multiple compute nodes with the same UUID.
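(For context, the point about identical UUIDs usually refers to the libvirt secret that holds the Ceph client key: every compute node has to define the secret with the same UUID that is referenced in the nova/cinder configuration. A rough sketch, with the UUID and client name as placeholders:)
  secret.xml (identical on every compute node; the UUID below is a placeholder):
    <secret ephemeral='no' private='no'>
      <uuid>00000000-0000-0000-0000-000000000000</uuid>
      <usage type='ceph'>
        <name>client.volumes secret</name>
      </usage>
    </secret>
  Then on each node:
    virsh secret-define --file secret.xml
    virsh secret-set-value --secret 00000000-0000-0000-0000-000000000000 \
        --base64 $(ceph auth get-key client.volumes)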
-----Original Message-----
From: Josh Durgin <josh.dur...@inktank.com>
To: "Makkelie, R - SPLXL"
On Tue, Jun 4, 2013 at 2:20 PM, Chen, Xiaoxi wrote:
> Hi Greg,
> Yes, thanks for your advice. We did turn down the
> osd_client_message_size_cap to 100MB per OSD, and both the journal queue and
> filestore queue are set to 100MB as well.
> That's 300MB per OSD in total, but from top we see:
>
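(For reference, the limits discussed above map onto ceph.conf settings roughly like the following; the 100MB values are the ones quoted, and the option names should be double-checked against your Ceph release:)
  [osd]
      osd client message size cap = 104857600   # 100MB of in-flight client data per OSD
      journal queue max bytes = 104857600       # 100MB journal queue
      filestore queue max bytes = 104857600     # 100MB filestore queue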
Hi all,
I have 3 monitors in my cluster but 1 of them is a backup and I don't
want to use it as master (too far and on a server where I want to save
resources for something else ...)
But as far as I can see, mon priority is based on IP (lowest to highest, if I'm
not mistaken).
I would like to know if it's p
Hello,
I'm new to the Ceph mailing list, and I need some advice for our
testing cluster. I have 2 servers with 2 hard disks each. On the first
server I configured a monitor and an OSD, and on the second server only an OSD.
The configuration looks as follows:
[mon.a]
host = ceph1
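(For comparison, a minimal two-server layout of that style might look roughly like this; the host names follow the message and the monitor address is a placeholder:)
  [global]
      auth supported = cephx
  [mon.a]
      host = ceph1
      mon addr = 192.168.0.1:6789   # placeholder, use ceph1's real IP
  [osd.0]
      host = ceph1
  [osd.1]
      host = ceph2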
Just to add, this doesn't happen in just one pool.
When I changed the "data" pool replica size from 2 to 3, a few PGs (3) got
stuck too.
pg 0.7c is active+clean+degraded, acting [8,2]
pg 0.48 is active+clean+degraded, acting [4,8]
pg 0.1f is active+clean+degraded, acting [5,7]
I am already on tunab
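(A few commands commonly used to dig into this state; the pool name and PG ID below are the ones from the message:)
  ceph osd dump | grep 'pool'    # confirm the pool's size/min_size and crush ruleset
  ceph pg 0.7c query             # inspect one of the affected PGs in detail
  ceph pg dump_stuck unclean     # list PGs that are not fully clean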
Hijacking (because it's related): a couple of weeks ago on IRC it was
indicated that a repo with these (or updated) qemu builds for CentOS should be
coming soon from Ceph/Inktank. Did that ever happen?
Thanks,
Jeff
On Mon, Jun 3, 2013 at 10:25 PM, YIP Wai Peng wrote:
> Hi Andrei,
>
> Have you tried th
Yip, no, I have not tried them, but I certainly will! Do I need a patched
libvirtd as well, or is this working out of the box?
Thanks
Andrei
- Original Message -
From: "YIP Wai Peng"
To: "Andrei Mikhailovsky"
Cc: ceph-users@lists.ceph.com
Sent: Tuesday, 4 June, 2013 3:25:17 AM
Yavor,
I would highly recommend taking a look at the quick install guide:
http://ceph.com/docs/next/start/quick-start/
As per the guide, you need to precreate the directories prior to starting ceph.
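(The exact paths depend on your ceph.conf, but with the default locations pre-creating the directories typically means something like:)
  sudo mkdir -p /var/lib/ceph/mon/ceph-a
  sudo mkdir -p /var/lib/ceph/osd/ceph-0
  sudo mkdir -p /var/lib/ceph/osd/ceph-1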
Andrei
- Original Message -
From: "Явор Маринов"
To: ceph-users@lists.ceph.com
S
That's the exact documentation I'm using. The directory on ceph2 is
created, and the service starts without any problems on both nodes.
However, the cluster health is WARN and I was able to
mount the cluster
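(To see exactly why the cluster reports WARN, the usual first steps are:)
  ceph health detail   # lists the specific warnings
  ceph -s              # overall status, including PG states
  ceph osd tree        # confirms both OSDs are up and in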
On 06/04/2013 03:43 PM, Andrei Mikhailovsky wrote:
Yavor,
Jeff,
Perhaps these?
http://tracker.ceph.com/issues/4834
http://ceph.com/packages/qemu-kvm/
- Mike
On 6/4/2013 8:16 AM, Jeff Bachtel wrote:
Hijacking (because it's related): a couple weeks ago on IRC it was
indicated a repo with these (or updated) qemu builds for CentOS should
be coming soon
As noted in the ticket, the packages are built but untested. Once we've
done some QA we will push them out 'formally'.
We're also planning to host the Qemu+rbd package in a native CentOS ceph
repository, along with all the other Ceph packages. We're liaising with the
CentOS project lead
Hello everyone,
I used ceph-deploy and ran into some problems, so I want to know something about
it.
When I unzip ceph-deploy-master.zip and cd into the ceph-deploy-master folder
and run ./bootstrap, all things work well.
And then I run ceph-deploy new zphj1987 (my server host name is zphj1987).
I run c
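(For anyone following along, the usual ceph-deploy sequence from that release looked roughly like this; zphj1987 is the host name from the message:)
  ./bootstrap                        # builds the ceph-deploy environment from the source checkout
  ./ceph-deploy new zphj1987         # writes an initial ceph.conf and monitor keyring
  ./ceph-deploy install zphj1987     # installs the ceph packages on the host
  ./ceph-deploy mon create zphj1987  # creates and starts the monitor
  ./ceph-deploy gatherkeys zphj1987  # collects the admin and bootstrap keys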
On Tue, Jun 4, 2013 at 2:31 AM, Guilhem Lettron wrote:
> Hi all,
>
> I have 3 monitors in my cluster but 1 of them is a backup and I don't
> want to use it as master (too far and on a server where I want to save
> resources for something else ...)
>
> But as I can see, mon priority is based on IP
On 06/04/2013 06:27 PM, Gregory Farnum wrote:
On Tue, Jun 4, 2013 at 2:31 AM, Guilhem Lettron wrote:
Hi all,
I have 3 monitors in my cluster but 1 of them is a backup and I don't
want to use it as master (too far and on a server where I want to save
resources for something else ...)
But as I
-- Forwarded message --
From: Gandalf Corvotempesta
Date: 2013/5/31
Subject: Multi Rack Reference architecture
To: "ceph-users@lists.ceph.com"
In the reference architecture PDF, downloadable from your website, there was
a reference to a multi-rack architecture described in anothe
Any experiences with clustered FS on top of RBD devices?
Which FS would you suggest for roughly 10,000 mailboxes accessed by 10
dovecot nodes?
On 04.06.2013 20:03, Gandalf Corvotempesta wrote:
> Any experiences with clustered FS on top of RBD devices?
> Which FS do you suggest for more or less 10.000 mailboxes accessed by 10
> dovecot
> nodes ?
>
We use ocfs2 on top of rbd... the only bad thing is that ocfs2 will fence all
nodes if rb
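(For readers unfamiliar with this setup, the general shape is to map the same RBD image on every node and put a cluster-aware filesystem on it. A rough sketch, with pool/image names and size as placeholders, and assuming the o2cb cluster stack is already configured on each node:)
  rbd create mailstore --size 512000 --pool rbd          # one shared image, size in MB
  rbd map rbd/mailstore                                  # run on every dovecot node
  mkfs.ocfs2 -N 10 -L mailstore /dev/rbd/rbd/mailstore   # once, with slots for 10 nodes
  mount -t ocfs2 /dev/rbd/rbd/mailstore /srv/mail        # on every node
(Device naming depends on the rbd udev rules; the image may show up as /dev/rbd0 instead.)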
Hi,
The VM died, but on the root disk I found:
kern.log:
<5>1 2013-06-04T21:18:02.568823+02:00 vm-1 kernel - - - [ 220.717935]
sd 2:0:0:0: Attached scsi generic sg0 type 0
<5>1 2013-06-04T21:18:02.568848+02:00 vm-1 kernel - - - [ 220.718231]
sd 2:0:0:0: [sda] 1048576000 512-byte logical blocks: (536 GB/5
2013/6/4 Smart Weblications GmbH - Florian Wiessner <f.wiess...@smart-weblications.de>
> we use ocfs2 ontop of rbd... the only bad thing is that ocfs2 will fence
> all
> nodes if rbd is not responding within defined timeout...
if rbd is not responding to all nodes, having all ocfs2 fenced shoul
Hi Gandalf,
On 04.06.2013 21:45, Gandalf Corvotempesta wrote:
> 2013/6/4 Smart Weblications GmbH - Florian Wiessner
> <f.wiess...@smart-weblications.de>
>
> we use ocfs2 ontop of rbd... the only bad thing is that ocfs2 will fence
> all
> nodes if rbd is not responding within def
Behind a registration form, but iirc, this is likely what you are
looking for:
http://www.inktank.com/resource/dreamcompute-architecture-blueprint/
- Mike
On 5/31/2013 3:26 AM, Gandalf Corvotempesta wrote:
In reference architecture PDF, downloadable from your website, there was
some reference
Hi,
I'm trying to get Hadoop tested with the ceph:/// schema, but can't seem to
find a way to make Hadoop ceph-aware :(
Is the only way to get it to work to build Hadoop off
https://github.com/ceph/hadoop-common/tree/cephfs/branch-1.0/src or is it
possible to compile/obtain some sort of
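(At that time the CephFS Hadoop bindings were configured through core-site.xml properties along these lines; the property names come from the cephfs/branch-1.0 bindings linked above and should be treated as a sketch, with the host and paths as placeholders:)
  fs.ceph.impl    = org.apache.hadoop.fs.ceph.CephFileSystem
  fs.default.name = ceph://<mon-host>:6789/
  ceph.conf.file  = /etc/ceph/ceph.conf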
On Jun 4, 2013, at 2:58 PM, Ilja Maslov wrote:
> Is the only way to get it to work is to build Hadoop off the
> https://github.com/ceph/hadoop-common/tree/cephfs/branch-1.0/src or is it
> possible to compile/obtain some sort of a plugin and feed it to a stable
> hadoop version?
Hi Ilja,
We
I have a Ceph setup with Cuttlefish for a kernel rbd test. After I mapped rbd images to
the clients, I executed 'rbd showmapped'; the output looks as follows:
id  pool  image    snap  device
1   ceph  node7_1  -     /dev/rbd1
2   ceph  node7_2  -     /dev/rbd2
3   ceph  node7_3  -     /dev/rbd3
4   ceph  node7_4  -