Re: [ceph-users] ceph features monitored by nagios

2014-07-23 Thread Wolfgang Hennerbichler
Nagios can monitor anything you can script. If there isn’t a plugin for it, write it yourself, it’s really not hard. I’d go for icinga by the way, which is more actively maintained than nagios. On Jul 23, 2014, at 3:07 PM, pragya jain wrote: > Hi all, > > I am studying nagios for monitoring
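
A minimal sketch of such a self-written check (thresholds and wording are hypothetical; it only assumes the ceph CLI is runnable by the monitoring user):

  #!/bin/sh
  # map `ceph health` output to Nagios/Icinga exit codes
  STATUS=$(ceph health 2>/dev/null)
  case "$STATUS" in
    HEALTH_OK*)   echo "OK - $STATUS";       exit 0 ;;
    HEALTH_WARN*) echo "WARNING - $STATUS";  exit 1 ;;
    HEALTH_ERR*)  echo "CRITICAL - $STATUS"; exit 2 ;;
    *)            echo "UNKNOWN - no answer from cluster"; exit 3 ;;
  esac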

Re: [ceph-users] Ceph RBD and Backup.

2014-07-03 Thread Wolfgang Hennerbichler
if the rbd filesystem ‘belongs’ to you, you can do something like this: http://www.wogri.com/linux/ceph-vm-backup/ On Jul 3, 2014, at 7:21 AM, Irek Fasikhov wrote: > > Hi, All. > > Dear community. How do you make backups of Ceph RBD? > > Thanks > > -- > Fasihov Irek (aka Kataklysm). > Best regards, Fa
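
The linked write-up boils down to snapshot-then-export; a rough sketch with hypothetical pool/image names (not necessarily identical to the procedure behind the link):

  DATE=$(date +%F)
  rbd snap create rbd/vm-disk@backup-$DATE        # point-in-time snapshot
  rbd export rbd/vm-disk@backup-$DATE /backup/vm-disk-$DATE.img
  rbd snap rm rbd/vm-disk@backup-$DATE            # clean up afterwards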

Re: [ceph-users] RBD on Mac OS X

2014-05-06 Thread Wolfgang Hennerbichler
I'd use an RBD-to-iSCSI gateway and attach it via iSCSI on Mac OS X. On Tue, May 06, 2014 at 03:28:21PM +0400, Pavel V. Kaygorodov wrote: > Hi! > > I want to use ceph for time machine backups on Mac OS X. > Is it possible to map RBD or mount CephFS on mac directly, for example, using > osxfuse

Re: [ceph-users] What happened if rbd lose a block?

2014-04-29 Thread Wolfgang Hennerbichler
On Tue, Apr 29, 2014 at 01:13:25PM +0200, Wido den Hollander wrote: > When you go from the major release to another one there is no > problem. Dumpling -> Emperor -> Firefly, etc. > > That should all work without downtime. I can confirm that upgrading production instances since bobtail did never

Re: [ceph-users] 3 node setup with pools size=3

2014-01-14 Thread Wolfgang Hennerbichler
On 01/14/2014 10:06 AM, Dietmar Maurer wrote: > Yes, only a single OSD is down and marked out. Sorry for the misunderstanding then. >> Then there should definitively be a backfilling in place. > > no, this does not happen. Many PGs stay in degraded state (I tested this > several times now).

Re: [ceph-users] 3 node setup with pools size=3

2014-01-14 Thread Wolfgang Hennerbichler
On 01/14/2014 09:44 AM, Dietmar Maurer wrote: >>> When using a pool size of 3, I get the following behavior when one OSD >>> fails: >>> * the affected PGs get marked active+degraded >>> >>> * there is no data movement/backfill >> >> Works as designed, if you have the default crush map in place (a

Re: [ceph-users] 3 node setup with pools size=3

2014-01-13 Thread Wolfgang Hennerbichler
On 01/13/2014 12:39 PM, Dietmar Maurer wrote: > I am still playing around with a small setup using 3 Nodes, each running > 4 OSDs (=12 OSDs). > > > > When using a pool size of 3, I get the following behavior when one OSD > fails: > * the affected PGs get marked active+degraded > > * there is

[ceph-users] documentation comment

2014-01-09 Thread Wolfgang Hennerbichler
I think I found a comment in the documentation that's not intended to be there: http://ceph.com/docs/master/rbd/rbd-snapshot/ "For the rollback section, you could mention that rollback means overwriting the current version with data from a snapshot, and takes longer with larger images. So cloning i

Re: [ceph-users] RGW and Placement Group count

2014-01-07 Thread Wolfgang Hennerbichler
On 08 Jan 2014, at 04:47, Wido den Hollander wrote: >> I expect that basically only one pool (.rgw?) will hold the true data, >> all other stuff (like '.users' and so on) will not be data intensive, as >> it might only store metadata. >> > Indeed. So you can have less PGs for these pools. Only t

[ceph-users] RGW and Placement Group count

2014-01-07 Thread Wolfgang Hennerbichler
Hi, when I designed a ceph cluster nobody talked about radosgw, it was RBD only. Now we are thinking about adding radosgw, and I have some concern when it comes to the number of PGs per OSD (which will grow beyond the 50-100 recommended PGs). According to: http://ceph.com/docs/master/rados/opera
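
For reference, the rule of thumb from the docs works out roughly like this (illustrative numbers, not the poster's actual cluster):

  # total_pgs ≈ (num_osds * 100) / replica_count, rounded up to a power of two
  # e.g. 48 OSDs with size 3: 48 * 100 / 3 = 1600 -> 2048 PGs across all pools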

Re: [ceph-users] Monitor configuration issue

2014-01-01 Thread Wolfgang Hennerbichler
2014-01-01 16:23:07.821642 7fe8443f9700 0 -- :/1019476 >> 10.0.10.11:6789/0 pipe(0x7fe840004140 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7fe8400043a0).fault > > ^this fault error continues unt

Re: [ceph-users] Monitor configuration issue

2014-01-01 Thread Wolfgang Hennerbichler
Matt, first of all: four monitors is a bad idea. use an odd number for mons, e.g. three. your other problem is your configuration file. the mon_initial_members and mon_host directives should include all monitor daemons. see my cluster: mon_initial_members = node01,node02,node03 mon_host = 10
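
Spelled out for a three-mon cluster, those two directives would look like this (host names and addresses are placeholders):

  [global]
  mon_initial_members = node01, node02, node03
  mon_host = 10.0.10.11, 10.0.10.12, 10.0.10.13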

Re: [ceph-users] Failure probability with largish deployments

2013-12-19 Thread Wolfgang Hennerbichler
On 19 Dec 2013, at 16:43, Gruher, Joseph R wrote: > It seems like this calculation ignores that in a large Ceph cluster with > triple replication having three drive failures doesn't automatically > guarantee data loss (unlike a RAID6 array)? not true with RBD images, which are potentially stri

Re: [ceph-users] Failure probability with largish deployments

2013-12-19 Thread Wolfgang Hennerbichler
of those 4U 60 disk storage > servers (or 72 disk per 4U if you're happy with killing another drive when > replacing a faulty one in that Supermicro contraption), that ratio is down > to 1 in 21.6 which is way worse than that 8disk RAID5 I mentioned up there. > > Regards, >

Re: [ceph-users] pool size 1 RBD distribution

2013-12-05 Thread Wolfgang Hennerbichler
> On 12/05/2013 10:52 AM, Wolfgang Hennerbichler wrote: >> Now I do an rbd import of an RBD Image (which is 1G in size), and I would >> expect that RBD image to stripe across the two OSD’s. Well, this is just not >> happening, everything sits on OSD2 (osd1 and osd0 hav

[ceph-users] pool size 1 RBD distribution

2013-12-05 Thread Wolfgang Hennerbichler
hi ceph, just for testing (on emperor 0.72.1) I created two OSDs on a single server, resized the pool to a replication factor of one, and created 200 PGs for that pool: # ceph osd dump ... pool 4 'rbd' rep size 1 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 200 pgp_num 200 last
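
To check where the imported objects actually land, something along these lines works (the object name is a hypothetical example):

  rados -p rbd ls | head                     # list a few objects in the pool
  ceph osd map rbd rb.0.1234.000000000000    # show which PG/OSD one object maps to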

Re: [ceph-users] 2 probs after upgrade to emporer

2013-11-22 Thread Wolfgang Hennerbichler
I don’t think it helps if you keep sending the same e-mail over and over. somebody will eventually reply - or not. if you keep sending out your e-mail regularly you will start to become annoying. -- http://www.wogri.at On Nov 22, 2013, at 8:06 AM, Linke, Michael wrote: > Hi, > maybe you can

Re: [ceph-users] Size of RBD images

2013-11-21 Thread Wolfgang Hennerbichler
-- http://www.wogri.at On Nov 21, 2013, at 10:30 AM, nicolasc wrote: > Thanks Josh! This is a lot clearer now. > > I understand that librbd is low-level, but still, a warning wouldn't hurt, > would it? Just check if the size parameter is larger than the cluster > capacity, no? maybe I want

Re: [ceph-users] Size of RBD images

2013-11-19 Thread Wolfgang Hennerbichler
On Nov 19, 2013, at 3:47 PM, Bernhard Glomm wrote: > Hi Nicolas > just fyi > rbd format 2 is not supported yet by the linux kernel (module) I believe this is wrong. I think linux supports rbd format 2 images since 3.10. wogri

Re: [ceph-users] locking rbd device

2013-11-06 Thread Wolfgang Hennerbichler
On 08/26/2013 09:03 AM, Wolfgang Hennerbichler wrote: > hi list, > > I realize there's a command called "rbd lock" to lock an image. Can > libvirt use this to prevent virtual machines from being started > simultaneously on different virtualisation containers?

Re: [ceph-users] Inktank Ceph Enterprise Launch

2013-10-30 Thread Wolfgang Hennerbichler
I welcome this step. For me, more important than open-sourcing the fried calamari is to see inktank succeed, make money and become even more independent (from investors). Once this is done, and this young company is rock solid in business, you can think about open sourcing tools that you sell fo

Re: [ceph-users] Dumpling ceph.conf looks different

2013-10-09 Thread Wolfgang Hennerbichler
users are > less likely to have conflicting ceph.confs across multiple nodes, and > it doesn't present the illusion that a monolithic config file is > necessary — but you are of course free to do otherwise if you prefer! > -Greg > Software Engineer #42 @ http://inktank.com | http

Re: [ceph-users] Dumpling ceph.conf looks different

2013-10-09 Thread Wolfgang Hennerbichler
I would also love to see this answered, this is sometimes asked during my Geek on Duty shift and I don't know a real answer to this, and I myself always do it old-(bobtail)-style. Wolfgang -- http://www.wogri.at On Oct 9, 2013, at 13:54, su kucherova wrote: > Hi > > When I compare the /et

Re: [ceph-users] trouble adding OSDs - which documentation to use

2013-10-02 Thread Wolfgang Hennerbichler
On 10/01/2013 05:08 PM, Jogi Hofmüller wrote: > Dear all, Sers jogi, > I am back to managing the cluster before starting to use it even on > a test host. First of all a question regarding the docs: > > Is this [1] outdated? If not, why are the l

Re: [ceph-users] Ceph deployment issue in physical hosts

2013-09-25 Thread Wolfgang Hennerbichler
then ceph deploy will at least be able to contact that host. Hint: look at your /etc/hosts file. > Thanks, > Guang Wolfgang

Re: [ceph-users] Number of Monitors per OSDs

2013-09-11 Thread Wolfgang Hennerbichler
On 09/11/2013 11:55 AM, ian_m_por...@dell.com wrote: > *Dell - Internal Use - Confidential * if this is dell internal, I probably shouldn't answer :) > Hi, > > What’s a good rule of thumb to work out the number of monitors per OSDs > in a cluster AFAIK there is no rule of thumb. I would dimen

Re: [ceph-users] Number of Monitors per OSDs

2013-09-11 Thread Wolfgang Hennerbichler
each with their own drive)? > > Ian > > -Original Message- > From: ceph-users-boun...@lists.ceph.com > [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Wolfgang > Hennerbichler > Sent: 11 September 2013 11:35 > To: ceph-users@lists.ceph.com > Subject: Re

Re: [ceph-users] ceph-deploy install on remote machine error

2013-09-10 Thread Wolfgang Hennerbichler
Hi, I believe you need to tell apt about your proxy server: cat /etc/apt/apt.conf Acquire::http::Proxy "http://my.proxy.server:3142"; wogri On 09/11/2013 08:28 AM, kumar rishabh wrote: > I am new to ceph. I am trying to follow the official document to install > ceph on the machine. All things

Re: [ceph-users] OSD to OSD Communication

2013-08-30 Thread Wolfgang Hennerbichler
On Aug 30, 2013, at 20:38, Geraint Jones wrote: >> >> Yes, you can use "cluster_network" to direct OSD traffic over different >> network interfaces. > > Perfect, so now to buy some NICs :) or use VLANs on your 10GE and frickle around with QoS. >> >> Wido >> >>> If anyone has any suggesti
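
In ceph.conf that separation looks roughly like this (subnets are placeholders):

  [global]
  public_network  = 192.168.0.0/24   # client <-> OSD traffic
  cluster_network = 10.0.0.0/24      # OSD <-> OSD replication/heartbeat traffic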

Re: [ceph-users] metadata server - single point of failure?

2013-08-29 Thread Wolfgang Hennerbichler
On 08/29/2013 03:39 PM, Athanasios Kostopoulos wrote: > To change the question and expand a bit: are there SPOFs in ceph's > design? How one can built a really robust ceph "cluster"? There are no SPOFs in ceph. Except for the fact that a ceph cluster likes to reside in one close geographic reg

[ceph-users] locking rbd device

2013-08-26 Thread Wolfgang Hennerbichler
hi list, I realize there's a command called "rbd lock" to lock an image. Can libvirt use this to prevent virtual machines from being started simultaneously on different virtualisation containers? wogri -- http://www.wogri.at ___ ceph-users mailing
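
For reference, the advisory lock commands themselves (image and lock names are placeholders; the locker id comes from the ls output):

  rbd lock add rbd/vm-disk vm-lock                  # take the lock before starting the VM
  rbd lock ls rbd/vm-disk                           # shows lock id and locker, e.g. client.4123
  rbd lock remove rbd/vm-disk vm-lock client.4123   # release it again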

Re: [ceph-users] some newbie questions...

2013-08-20 Thread Wolfgang Hennerbichler
On Aug 20, 2013, at 15:18, Johannes Klarenbeek wrote: > > > From: Wolfgang Hennerbichler [mailto:wo...@wogri.com] > Sent: Tuesday, 20 August 2013 10:51 > To: Johannes Klarenbeek > CC: ceph-users@lists.ceph.com > Subject: Re: [ceph-users] some newbie questio

Re: [ceph-users] some newbie questions...

2013-08-20 Thread Wolfgang Hennerbichler
On Aug 20, 2013, at 09:54, Johannes Klarenbeek wrote: > dear ceph-users, > > although heavily active in the past, I didn’t touch linux for years, so I’m > pretty new to ceph and I have a few questions, which I hope someone could > answer for me. > > 1) I read somewhere that it is recommen

Re: [ceph-users] Ceph Deployments

2013-08-19 Thread Wolfgang Hennerbichler
don't need >> to use VMs at all for librbd. So you can install QEMU/KVM, libvirt and >> OpenStack all on the same host too. It's just not an ideal situation >> from performance or high availability perspective. >> >> >> >> On Mon, Aug 19

Re: [ceph-users] Ceph Deployments

2013-08-19 Thread Wolfgang Hennerbichler
On 08/19/2013 12:01 PM, Schmitt, Christian wrote: >> yes. depends on 'everything', but it's possible (though not recommended) >> to run mon, mds, and osd's on the same host, and even do virtualisation. > > Currently we don't want to virtualise on this machine since the > machine is really small, a

Re: [ceph-users] Usage pattern and design of Ceph

2013-08-19 Thread Wolfgang Hennerbichler
distribution. > > Regards > > Mark

Re: [ceph-users] Ceph Deployments

2013-08-19 Thread Wolfgang Hennerbichler
On 08/19/2013 10:36 AM, Schmitt, Christian wrote: > Hello, I just have some small questions about Ceph Deployment models and > if this would work for us. > Currently the first question would be, is it possible to have a ceph > single node setup, where everything is on one node? yes. depends on 'ev

Re: [ceph-users] Ceph VM Backup

2013-08-18 Thread Wolfgang Hennerbichler
On Sun, Aug 18, 2013 at 06:57:56PM +1000, Martin Rudat wrote: > Hi, > > On 2013-02-25 20:46, Wolfgang Hennerbichler wrote: > >maybe some of you are interested in this - I'm using a dedicated VM to > >backup important VMs which have their storage in RBD. This i

Re: [ceph-users] Ceph instead of RAID

2013-08-13 Thread Wolfgang Hennerbichler
ll harder than a local RAID. Keep that in mind. > Dmitry Wolfgang

Re: [ceph-users] Ceph instead of RAID

2013-08-13 Thread Wolfgang Hennerbichler
be a fool. > > "Every nonfree program has a lord, a master -- > and if you use the program, he is your master." > --Richard Stallman

Re: [ceph-users] Ceph instead of RAID

2013-08-13 Thread Wolfgang Hennerbichler
On 08/13/2013 03:49 AM, Dmitry Postrigan wrote: > Hello community, Hi, > I am currently installing some backup servers with 6x3TB drives in them. I > played with RAID-10 but I was not > impressed at all with how it performs during a recovery. > > Anyway, I thought what if instead of RAID-10 I

Re: [ceph-users] OCFS2 or GFS2 for cluster filesystem?

2013-07-12 Thread Wolfgang Hennerbichler
FYI: I'm using ocfs2 as you plan to (/var/lib/nova/instances/); it is stable, but performance isn't blasting. -- Sent from my mobile device On 12.07.2013, at 14:21, Tom Verdaat wrote: Hi Darryl, Would love to do that too but only if we can configure nova to do this a

Re: [ceph-users] Issues going from 1 to 3 mons

2013-07-10 Thread Wolfgang Hennerbichler
Sorry, no updates on my side. My wife got our second baby and I'm busy with reality (changing nappies and stuff) -- Sent from my mobile device On 09.07.2013, at 22:18, "Jeppesen, Nelson" wrote: Any updates on this? My production cluster has been running on

Re: [ceph-users] Antwort: Re: Replication between 2 datacenter

2013-06-26 Thread Wolfgang Hennerbichler
Also be aware that due to the nature how monitors work (and that you need an unequal number of them), that if the datacenter loses power with the majority of the monitors, you can't access your backup-data either (you can after fiddling with the monmap, but it doesn't failover). configuration i
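
The monmap fiddling mentioned above is roughly the following (mon ids and paths are placeholders, sketched from memory; test it before you depend on it):

  # on a surviving monitor, with the mon daemon stopped:
  ceph-mon -i a --extract-monmap /tmp/monmap    # dump the current monmap
  monmaptool --rm b --rm c /tmp/monmap          # drop the unreachable mons
  ceph-mon -i a --inject-monmap /tmp/monmap     # inject the reduced map, then restart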

Re: [ceph-users] Issues going from 1 to 3 mons

2013-06-25 Thread Wolfgang Hennerbichler
On Tue, Jun 25, 2013 at 02:24:35PM +0100, Joao Eduardo Luis wrote: > (Re-adding the list for future reference) > > Wolfgang, from your log file: > > 2013-06-25 14:58:39.739392 7fa329698780 -1 common/config.cc: In > function 'void md_config_t::set_val_or_die(const char*, const > char*)' thread 7fa

Re: [ceph-users] increasing stability

2013-06-25 Thread Wolfgang Hennerbichler
On 05/30/2013 11:06 PM, Sage Weil wrote: > Hi everyone, Hi again, > I wanted to mention just a few things on this thread. Thank you for taking the time. > The first is obvious: we are extremely concerned about stability. > However, Ceph is a big project with a wide range of use cases, and i

Re: [ceph-users] Issues going from 1 to 3 mons

2013-06-25 Thread Wolfgang Hennerbichler
On 06/25/2013 11:45 AM, Joao Eduardo Luis wrote: >> On mon a I see: >> >> # ceph --admin-daemon /run/ceph/ceph-mon.a.asok mon_status >> { "name": "a", >>"rank": 0, >>"state": "probing", >>"election_epoch": 1, >>"quorum": [], >>"outside_quorum": [ >> "a"], >>"ext

Re: [ceph-users] Issues going from 1 to 3 mons

2013-06-25 Thread Wolfgang Hennerbichler
monmap so it then shuts down. You'll need to convince it to turn on > and contact mon.0; I don't remember exactly how to do that (Joao?) but > I think you should be able to find what you need at > http://ceph.com/docs/master/dev/mon-bootstrap > -Greg > Software Engineer #

Re: [ceph-users] Backport of modern qemu rbd driver to qemu 1.0 + Precise packaging

2013-06-23 Thread Wolfgang Hennerbichler
Hi again, compiled, tested, seems to work for me. Fulfilling my own request, if that's OK for you, Alex. Download and try Alex' packages here on your own responsibility: http://www.wogri.at/Qemu-Ceph-Packages.343.0.html Wolfgang On Fri, Jun 21, 2013 at 03:41:53PM +0100, Alex Bligh wrote: > I

Re: [ceph-users] Backport of modern qemu rbd driver to qemu 1.0 + Precise packaging

2013-06-23 Thread Wolfgang Hennerbichler
Hi Alex, any chances you would also be sharing the compiled .deb of the ubuntu package? I'm willing to test, as we have issues with qemu-1.4.2 and bridging within a VM. Will try to build the .deb now. Wolfgang On Fri, Jun 21, 2013 at 03:41:53PM +0100, Alex Bligh wrote: > I've backported the

Re: [ceph-users] Upgrade from bobtail

2013-06-17 Thread Wolfgang Hennerbichler
n, Jun 17, 2013 at 12:27 PM, Sage Weil wrote: > > On Mon, 17 Jun 2013, Wolfgang Hennerbichler wrote: > >> Hi, i'm planning to Upgrade my bobtail (latest) cluster to cuttlefish. > >> Are there any outstanding issues that I should be aware of? Anything > >> that c

[ceph-users] Upgrade from bobtail

2013-06-17 Thread Wolfgang Hennerbichler
Hi, I'm planning to upgrade my bobtail (latest) cluster to cuttlefish. Are there any outstanding issues that I should be aware of? Anything that could break my productive setup? Wolfgang -- Sent from my mobile device

Re: [ceph-users] Ceph and open source cloud software: Path of least resistance

2013-06-17 Thread Wolfgang Hennerbichler
by hand, which wasn't really too hard (and I'm not a big fan of do-it-yourself-compiling or makefiles, either) > OpenNebula doesn't list 12.04 as a supported distribution, so I'm more > inclined to 12.10. it seems you're doomed :)

Re: [ceph-users] Ceph and open source cloud software: Path of least resistance

2013-06-17 Thread Wolfgang Hennerbichler
On 06/17/2013 12:51 PM, Jens Kristian Søgaard wrote: > Reg. goal b) The qemu-kvm binary in the supported Ubuntu 12.10 > distribution does not include async flush. I don't know if this is > available as a backport from somewhere else, as my attempts to simply > upgrade qemu didn't go well. I've

Re: [ceph-users] Ceph and open source cloud software: Path of least resistance

2013-06-16 Thread Wolfgang Hennerbichler
nd storages, but this may just be my limited view of the world, and is way too off-topic for this mailing list... > Thanks, Wolfgang

Re: [ceph-users] Influencing reads/writes

2013-06-16 Thread Wolfgang Hennerbichler
we stated in his other e-mail.

Re: [ceph-users] Live Migrations with cephFS

2013-06-16 Thread Wolfgang Hennerbichler

Re: [ceph-users] Libvirt, quemu, ceph write cache settings

2013-06-11 Thread Wolfgang Hennerbichler
anks hope this helps Wolfgang

Re: [ceph-users] PG active+clean+degraded, but not creating new replicas

2013-06-03 Thread Wolfgang Hennerbichler
On Mon, Jun 03, 2013 at 08:58:00PM -0700, Sage Weil wrote: > My first guess is that you do not have the newer crush tunables set and > some placements are not quite right. If you are prepared for some data > migration, and are not using an older kernel client, try > > ceph osd crush tunables
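
The truncated command presumably continues with a profile name; the usual invocations are:

  ceph osd crush tunables optimal   # may trigger data migration
  ceph osd crush tunables legacy    # revert, e.g. for older kernel clients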

Re: [ceph-users] qemu-1.4.2 rbd-fixed ubuntu packages

2013-06-02 Thread Wolfgang Hennerbichler
On Wed, May 29, 2013 at 04:16:14PM +0200, w sun wrote: > Hi Wolfgang, > > Can you elaborate the issue for 1.5 with libvirt? Wonder if that will impact > the usage with Grizzly. Did a quick compile for 1.5 with RBD support enabled, > so far it seems to be ok for openstack with a few simple tests.

[ceph-users] increasing stability

2013-05-29 Thread Wolfgang Hennerbichler
Hi, as most on the list here I also see the future of storage in ceph. I think it is a great system and overall design, and sage with the rest of inktank and the community are doing their best to make ceph great. Being a part-time developer myself I know how awesome new features are, and how great

Re: [ceph-users] qemu-1.4.2 rbd-fixed ubuntu packages

2013-05-28 Thread Wolfgang Hennerbichler

[ceph-users] qemu-1.4.2 rbd-fixed ubuntu packages

2013-05-27 Thread Wolfgang Hennerbichler
Hi, for anybody who's interested, I've packaged the latest qemu-1.4.2 (not 1.5, it didn't work nicely with libvirt) which includes important fixes to RBD for ubuntu 12.04 AMD64. If you want to save some time, I can share the packages with you. drop me a line if you're interested. Wolfgang

Re: [ceph-users] RBD image copying

2013-05-14 Thread Wolfgang Hennerbichler
, while there is for p1b16. > > Did I not understand the copy mechanism? You sure did understand it the way it is supposed to be. something's wrong here. what happens if you dd bs=1024 count=1 | hexdump your devices, do you see differences there? is your cluster healthy? > Thank

Re: [ceph-users] RBD image copying

2013-05-14 Thread Wolfgang Hennerbichler
and took a look inside, I see p1b16 > (along with binary data) but no trace of p2b16 > > I must have missed something somewhere... > > Cheers,

Re: [ceph-users] HEALTH WARN: clock skew detected

2013-05-06 Thread Wolfgang Hennerbichler
On 05/06/2013 07:34 AM, Varun Chandramouli wrote: > No, the ntp daemon is not running. Any other > suggestions? How do you sync your clocks then?
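
A quick way to verify time sync on each node (the ntpdate line is just an illustration with a public pool server):

  ntpq -p               # lists peers and current offsets if ntpd is running
  ntpdate pool.ntp.org  # one-shot sync if no daemon is configured yet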

Re: [ceph-users] Best solution for shared FS on Ceph for web clusters

2013-04-26 Thread Wolfgang Hennerbichler
On Wed, Apr 24, 2013 at 07:49:40AM -0500, Mark Nelson wrote: > On 04/24/2013 05:18 AM, Maik Kulbe wrote: > Any idea if this was more due to OCFS2 or more due to Ceph? I > confess I don't know much about how OCFS2 works. Is it doing some > kind of latency sensitive operation when two files are bei

Re: [ceph-users] I/O Speed Comparisons

2013-04-22 Thread Wolfgang Hennerbichler
il in > the wip-bobtail-rbd-backports-req-order branch. The backport isn't > fully tested yet, but it's there if anyone wants to try it out. Great, thanks. > Josh Wolfgang

Re: [ceph-users] Ceph Illustrations

2013-04-18 Thread Wolfgang Hennerbichler
at you need I can get you the raw files for > that or expand our search criteria. Let me know what works. Thanks. > > > Best Regards, > > Patrick McGarry

Re: [ceph-users] how configure cephfs to strip data across osd's?

2013-04-17 Thread Wolfgang Hennerbichler

Re: [ceph-users] Unable to read file on Ceph FS

2013-04-17 Thread Wolfgang Hennerbichler

[ceph-users] Ceph Illustrations

2013-04-17 Thread Wolfgang Hennerbichler
Hi, I do have to present ceph in front of a bunch of students in the following weeks. Are there any illustrations that you guys have that I could re-use? Like beautiful pictures that explain the whole concept, other than those in the documentation? Wolfgang

Re: [ceph-users] I/O Speed Comparisons

2013-04-16 Thread Wolfgang Hennerbichler
uded into their master branch yet as far as I've seen. Are they reliable in integrating it into upstream? This patch is REALLY relevant, IMHO, we should urge them to integrate it sooner rather than later. Wolfgang

Re: [ceph-users] RBD snapshots are not «readable», because of LVM ?

2013-04-16 Thread Wolfgang Hennerbichler
n I thought.

Re: [ceph-users] RBD snapshots are not «readable», because of LVM ?

2013-04-15 Thread Wolfgang Hennerbichler
like that?

Re: [ceph-users] Question about Backing Up RBD Volumes in Openstack

2013-04-09 Thread Wolfgang Hennerbichler
On Tue, Apr 09, 2013 at 10:09:11AM +0200, Sebastien Han wrote: > So the memory is _not_ saved, only the disk is. Note that it's always hard to > make consistent snapshot. I assume that freezing the filesystem itself is the > only solution to have a consistent snapshot, and still this doesn't mean
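
In command form, the freeze-snapshot-thaw dance could look like this (mount point and image name are hypothetical; fsfreeze runs in the guest, the snapshot is taken from outside):

  fsfreeze -f /data                        # flush and block writes in the guest
  rbd snap create rbd/vm-disk@consistent   # snapshot while the fs is quiesced
  fsfreeze -u /data                        # thaw the filesystem again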

Re: [ceph-users] I/O Speed Comparisons

2013-04-01 Thread Wolfgang Hennerbichler
On Fri, Mar 29, 2013 at 01:46:16PM -0700, Josh Durgin wrote: > The issue was that the qemu rbd driver was blocking the main qemu > thread when flush was called, since it was using a synchronous flush. > Fixing this involves patches to librbd to add an asynchronous flush, > and a patch to qemu to us

Re: [ceph-users] Ceph Crach at sync_thread_timeout after heavy random writes.

2013-03-25 Thread Wolfgang Hennerbichler
737? >>> I cannot find any direct link between them. I didn't turn on qemu cache and >>> my qemu/VM work fine >>> >>> Xiaoxi >>> On 2013-3-25, at 17:07, "Wolfgang Hennerbichler" >>> wrote:

Re: [ceph-users] Ceph Crach at sync_thread_timeout after heavy random writes.

2013-03-25 Thread Wolfgang Hennerbichler
nt turn on qemu cache and my > qemu/VM work fine > > Xiaoxi > On 2013-3-25, at 17:07, "Wolfgang Hennerbichler" > wrote: >> Hi, >> this could be related to this issue here and has been reported multiple >> times:

Re: [ceph-users] Ceph Crach at sync_thread_timeout after heavy random writes.

2013-03-25 Thread Wolfgang Hennerbichler
t failed. > > Could you please let me know if you need any more information > & have some solutions? Thanks > > Xiaoxi

Re: [ceph-users] I/O Speed Comparisons

2013-03-18 Thread Wolfgang Hennerbichler
pply a patch in git I can probably test within 24 hours. Wolfgang

Re: [ceph-users] I/O Speed Comparisons

2013-03-14 Thread Wolfgang Hennerbichler
-- Sent from my mobile device On 13.03.2013, at 18:38, "Josh Durgin" wrote: > On 03/12/2013 12:46 AM, Wolfgang Hennerbichler wrote: >> >> >> On 03/11/2013 11:56 PM, Josh Durgin wrote: >> >>>> dd if=/dev/zero of=/bigfile bs=2M &

[ceph-users] cluster-network documentation

2013-03-12 Thread Wolfgang Hennerbichler
it picked (and why there are so many of them): netstat -planet | egrep -E ':68.*LISTEN.*ceph-osd' | awk '{ print $4}' 0.0.0.0:6821 0.0.0.0:6822 0.0.0.0:6823 10.1.91.11:6800 10.1.91.11:6801 10.1.91.11:6802 10.1.91.11:6803 10.1.91.11:6804 10.1.91.11:6805 0.0.0.0:6812 0.0.0.0:6815 0.0

Re: [ceph-users] I/O Speed Comparisons

2013-03-12 Thread Wolfgang Hennerbichler
On 03/11/2013 11:56 PM, Josh Durgin wrote: >> dd if=/dev/zero of=/bigfile bs=2M & >> >> Serial console gets jerky, VM gets unresponsive. It doesn't crash, but >> it's not 'healthy' either. CPU load isn't very high, it's in the waiting >> state a lot: > > Does this only happen with rbd_cache tur

Re: [ceph-users] I/O Speed Comparisons

2013-03-11 Thread Wolfgang Hennerbichler
1/2013 01:42 PM, Mark Nelson wrote: > I guess first question is does the jerky mouse behavior only happen > during reads or writes too? How is the CPU utilization in each case? > > Mark > > On 03/11/2013 01:30 AM, Wolfgang Hennerbichler wrote: >> Let

Re: [ceph-users] test Ceph

2013-03-11 Thread Wolfgang Hennerbichler

Re: [ceph-users] (no subject)

2013-03-11 Thread Wolfgang Hennerbichler
en I will just be testing the basic functions of Ceph!

Re: [ceph-users] I/O Speed Comparisons

2013-03-10 Thread Wolfgang Hennerbichler
o are seeing the same behavior with > QEMU/KVM/RBD. Maybe it is a common symptom of high IO with this setup. > > > > Regards, > > > > > > Andrew > > > On 3/8/2013 12:46 AM, Mark Nelson wrote: > > On 03/07/2013 05:10 AM, Wolfgan

Re: [ceph-users] I/O Speed Comparisons

2013-03-07 Thread Wolfgang Hennerbichler
On 03/07/2013 12:46 PM, Mark Nelson wrote: > Thanks for the heads up Wolfgang. I'm going to be looking into QEMU/KVM > RBD performance in the coming weeks so I'll try to watch out for this > behaviour. Thanks for taking the time. It seems to me as if there are so many interrupts within the vir

Re: [ceph-users] I/O Speed Comparisons

2013-03-07 Thread Wolfgang Hennerbichler
On 03/06/2013 02:31 PM, Mark Nelson wrote: > If you are doing sequential reads, you may benefit by increasing the > read_ahead_kb value for each device in /sys/block/<device>/queue on the > OSD hosts. Thanks, that didn't really help. It seems the VM has to handle too much I/O, even the mouse-cursor is
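
For the record, that knob is set per device, e.g. (device name and value are illustrative):

  cat /sys/block/sdb/queue/read_ahead_kb          # current readahead in KB
  echo 4096 > /sys/block/sdb/queue/read_ahead_kb  # raise it for sequential reads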

[ceph-users] I/O Speed Comparisons

2013-03-06 Thread Wolfgang Hennerbichler
ts where I could turn some knobs? I'd rather trade some write-speed to get better read-speed. Wolfgang

Re: [ceph-users] Number of ODS per host

2013-03-06 Thread Wolfgang Hennerbichler
;> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com > > ___ > ceph-users mailing list > ceph-users@lists.ceph.com > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com > -- DI (FH) Wolfgang Hennerbichler Software Deve

Re: [ceph-users] Writes to only one OSD?

2013-03-01 Thread Wolfgang Hennerbichler
Without looking at your screencast - some thoughts: 2 mons means increasing failure probability, not reducing it. if you lose one mon, the other mon will stop working. This is on intention. You need at least 3 mons to create a quorum. so using ceph with two nodes is a bad idea. as for the distri
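
(A quorum needs a strict majority, i.e. floor(n/2)+1 monitors: with 2 mons that is 2 of 2, so losing either one halts the cluster; with 3 mons it is 2 of 3, so one may fail.)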

[ceph-users] Ceph VM Backup

2013-02-25 Thread Wolfgang Hennerbichler
Hi, maybe some of you are interested in this - I'm using a dedicated VM to backup important VMs which have their storage in RBD. This is nothing fancy and not implemented perfectly, but it works. The VM's don't notice that they're backed up, the only requirement is that the filesystem of the VM is
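
In command form the idea is roughly this (names are hypothetical; a mapped snapshot is read-only, so the source VM keeps running undisturbed):

  rbd snap create rbd/important-vm@nightly   # freeze a point-in-time view
  rbd map rbd/important-vm@nightly           # snapshots map read-only
  mount -o ro /dev/rbd0p1 /mnt/backup        # mount the VM's first partition
  rsync -a /mnt/backup/ /srv/backups/important-vm/
  umount /mnt/backup && rbd unmap /dev/rbd0
  rbd snap rm rbd/important-vm@nightly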