Nagios can monitor anything you can script. If there isn't a plugin for it,
write one yourself; it's really not hard. I'd go for Icinga, by the way, which is
more actively maintained than Nagios.
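A check plugin just prints one status line and exits with 0 (OK), 1 (WARNING),
2 (CRITICAL) or 3 (UNKNOWN). A minimal sketch of a ceph health check (untested,
adjust paths and keyrings to your setup):

#!/bin/sh
# map the output of 'ceph health' to nagios/icinga exit codes
STATUS=$(ceph health 2>/dev/null)
case "$STATUS" in
  HEALTH_OK*)   echo "OK - $STATUS"; exit 0 ;;
  HEALTH_WARN*) echo "WARNING - $STATUS"; exit 1 ;;
  HEALTH_ERR*)  echo "CRITICAL - $STATUS"; exit 2 ;;
  *)            echo "UNKNOWN - could not talk to the cluster"; exit 3 ;;
esac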
On Jul 23, 2014, at 3:07 PM, pragya jain wrote:
> Hi all,
>
> I am studying nagios for monitoring
If the filesystem on the RBD 'belongs' to you, you can do something like this:
http://www.wogri.com/linux/ceph-vm-backup/
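The basic idea is always the same: take a snapshot for a consistent
point-in-time view and copy that snapshot out of the cluster. A rough sketch
(pool and image names made up; sync or freeze the filesystem in the VM first):

# create a point-in-time snapshot of the image
rbd snap create rbd/vm-disk@backup-20140703
# full export of the snapshot to a file outside the cluster
rbd export rbd/vm-disk@backup-20140703 /backup/vm-disk-20140703.img
# (rbd export-diff against the previous snapshot gives you incrementals)
# clean up old snapshots once you no longer need them
rbd snap rm rbd/vm-disk@backup-20140703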
On Jul 3, 2014, at 7:21 AM, Irek Fasikhov wrote:
>
> Hi,All.
>
> Dear community. How do you make backups of Ceph RBD?
>
> Thanks
>
> --
> Fasihov Irek (aka Kataklysm).
> Best regards, Fa
I'd use RBD-to-iSCSI software and attach it via iSCSI on Mac OS X.
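Roughly (a sketch; the image name, IQN and the use of tgt are all just
assumptions - any Linux box with the kernel rbd client can act as the gateway):

# on the gateway: map the image to a local block device (e.g. /dev/rbd0)
rbd map rbd/timemachine
# export that device as an iSCSI LUN with tgt
tgtadm --lld iscsi --op new --mode target --tid 1 -T iqn.2014-05.com.example:timemachine
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /dev/rbd0
tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL

Then point the Mac's iSCSI initiator at the gateway and format the LUN as HFS+
for Time Machine.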
On Tue, May 06, 2014 at 03:28:21PM +0400, Pavel V. Kaygorodov wrote:
> Hi!
>
> I want to use ceph for time machine backups on Mac OS X.
> Is it possible to map RBD or mount CephFS on mac directly, for example, using
> osxfuse
On Tue, Apr 29, 2014 at 01:13:25PM +0200, Wido den Hollander wrote:
> When you go from the major release to another one there is no
> problem. Dumpling -> Emperor -> Firefly, etc.
>
> That should all work without downtime.
I can confirm that upgrading production instances since bobtail has never caused me any downtime.
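The order I follow (a rough sketch, not an official procedure; daemon names are
made up): upgrade and restart the mons one by one, then the OSDs one by one,
and watch cluster health in between.

# optional: keep OSDs from being marked out while their node is restarted
ceph osd set noout
# per node: upgrade the packages, then restart its daemons, e.g. with sysvinit
/etc/init.d/ceph restart mon.a
/etc/init.d/ceph restart osd.0
# check health before moving on to the next node
ceph -s
# once every node runs the new version
ceph osd unset noout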
On 01/14/2014 10:06 AM, Dietmar Maurer wrote:
> Yes, only a single OSD is down and marked out.
Sorry for the misunderstanding then.
>> Then there should definitively be a backfilling in place.
>
> no, this does not happen. Many PGs stay in degraded state (I tested this
> several times now).
On 01/14/2014 09:44 AM, Dietmar Maurer wrote:
>>> When using a pool size of 3, I get the following behavior when one OSD
>>> fails:
>>> * the affected PGs get marked active+degraded
>>>
>>> * there is no data movement/backfill
>>
>> Works as designed, if you have the default crush map in place (a
On 01/13/2014 12:39 PM, Dietmar Maurer wrote:
> I am still playing around with a small setup using 3 Nodes, each running
> 4 OSDs (=12 OSDs).
>
>
>
> When using a pool size of 3, I get the following behavior when one OSD
> fails:
> * the affected PGs get marked active+degraded
>
> * there is
I think I found a comment in the documentation that's not intended to be
there:
http://ceph.com/docs/master/rbd/rbd-snapshot/
"For the rollback section, you could mention that rollback means
overwriting the current version with data from a snapshot, and takes
longer with larger images. So cloning i
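In command terms the difference is roughly this (a sketch, image and snapshot
names made up):

# rollback copies the snapshot contents back over the image - scales with image size
rbd snap rollback rbd/vm-disk@before-upgrade
# a clone only references the (protected) snapshot - nearly instant
rbd snap protect rbd/vm-disk@before-upgrade
rbd clone rbd/vm-disk@before-upgrade rbd/vm-disk-restored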
On 08 Jan 2014, at 04:47, Wido den Hollander wrote:
>> I expect that basically only one pool (.rgw?) will hold the true data,
>> all other stuff (like '.users' and so on) will not be data intensive, as
>> it might only store metadata.
>>
> Indeed. So you can have less PGs for these pools. Only t
Hi,
when I designed a ceph cluster nobody talked about radosgw; it was RBD
only. Now we are thinking about adding radosgw, and I have some concerns
when it comes to the number of PGs per OSD (which will grow beyond the
recommended 50-100 PGs).
According to:
http://ceph.com/docs/master/rados/opera
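For reference, the commonly cited rule of thumb is roughly:

total PGs ~ (number of OSDs x 100) / replica count, rounded up to a power of two
e.g. 12 OSDs x 100 / 3 ~ 400  ->  512 PGs

and that total is shared by all pools, so the small radosgw metadata pools
(.users and friends) should each get far fewer PGs than the data-carrying pools.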
> 2014-01-01 16:23:07.821642 7fe8443f9700 0 -- :/1019476 >>
> 10.0.10.11:6789/0 pipe(0x7fe840004140 sd=3
> :0 s=1 pgs=0 cs=0 l=1
> c=0x7fe8400043a0).fault
>
> ^this fault error continues unt
Matt,
First of all: four monitors is a bad idea. Use an odd number of mons, e.g.
three. Your other problem is your configuration file. The mon_initial_members
and mon_host directives should include all monitor daemons. See my cluster:
mon_initial_members = node01,node02,node03
mon_host = 10
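i.e. with the addresses of all three monitors listed (IPs made up here):

mon_initial_members = node01,node02,node03
mon_host = 10.0.0.11,10.0.0.12,10.0.0.13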
On 19 Dec 2013, at 16:43, Gruher, Joseph R wrote:
> It seems like this calculation ignores that in a large Ceph cluster with
> triple replication having three drive failures doesn't automatically
> guarantee data loss (unlike a RAID6 array)?
not true with RBD images, which are potentially stri
of those 4U 60 disk storage
> servers (or 72 disk per 4U if you're happy with killing another drive when
> replacing a faulty one in that Supermicro contraption), that ratio is down
> to 1 in 21.6 which is way worse than that 8disk RAID5 I mentioned up there.
>
> Regards,
>
> On 12/05/2013 10:52 AM, Wolfgang Hennerbichler wrote:
>> Now I do an rbd import of an RBD Image (which is 1G in size), and I would
>> expect that RBD image to stripe across the two OSD’s. Well, this is just not
>> happening, everything sits on OSD2 (osd1 and osd0 hav
hi ceph,
just for testing (on emperor 0.72.1) I created two OSDs on a single server,
resized the pool to a replication factor of one, and created 200 PGs for that
pool:
# ceph osd dump
...
pool 4 'rbd' rep size 1 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num
200 pgp_num 200 last
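If you want to see where the data really ends up, ask the cluster for the
object-to-OSD mapping (a sketch, the object name below is made up):

# list the objects the image was chunked into
rados -p rbd ls | head
# ask CRUSH where one of them is placed
ceph osd map rbd rb.0.1234.000000000000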
I don't think it helps if you keep sending the same e-mail over and over.
Somebody will eventually reply - or not. If you keep sending out your e-mail
regularly, you will start to become annoying.
--
http://www.wogri.at
On Nov 22, 2013, at 8:06 AM, Linke, Michael wrote:
> Hi,
> maybe you can
--
http://www.wogri.at
On Nov 21, 2013, at 10:30 AM, nicolasc wrote:
> Thanks Josh! This is a lot clearer now.
>
> I understand that librbd is low-level, but still, a warning wouldn't hurt,
> would it? Just check if the size parameter is larger than the cluster
> capacity, no?
maybe I want
On Nov 19, 2013, at 3:47 PM, Bernhard Glomm wrote:
> Hi Nicolas
> just fyi
> rbd format 2 is not supported yet by the linux kernel (module)
I believe this is wrong. I think Linux supports RBD format 2 images since kernel 3.10.
wogri
___
ceph-users mailing
On 08/26/2013 09:03 AM, Wolfgang Hennerbichler wrote:
> hi list,
>
> I realize there's a command called "rbd lock" to lock an image. Can
> libvirt use this to prevent virtual machines from being started
> simultaneously on different virtualisation containers?
I welcome this step. For me, more important than open-sourcing the fried
calamari is to see inktank succeed, make money and become even more independent
(from investors). Once this is done, and this young company is rock solid in
business, you can think about open sourcing tools that you sell fo
users are
> less likely to have conflicting ceph.confs across multiple nodes, and
> it doesn't present the illusion that a monolithic config file is
> necessary — but you are of course free to do otherwise if you prefer!
> -Greg
> Software Engineer #42 @ http://inktank.com | http
I would also love to see this answered; this is sometimes asked during my Geek
on Duty shift and I don't know a real answer to it - I myself always do it the
old (bobtail) style.
Wolfgang
--
http://www.wogri.at
On Oct 9, 2013, at 13:54 , su kucherova wrote:
> Hi
>
> When I compare the /et
On 10/01/2013 05:08 PM, Jogi Hofmüller wrote:
> Dear all,
Hi Jogi,
> I am back to managing the cluster before starting to use it even on
> a test host. First of all a question regarding the docs:
>
> Is this [1] outdated? If not, why are the l
...then ceph-deploy will at least be able to
contact that host.
Hint: look at your /etc/hosts file.
> Thanks,
> Guang
Wolfgang
> __
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users
On 09/11/2013 11:55 AM, ian_m_por...@dell.com wrote:
> *Dell - Internal Use - Confidential *
if this is Dell internal, I probably shouldn't answer :)
> Hi,
>
> What’s a good rule of thumb to work out the number of monitors per OSDs
> in a cluster
AFAIK there is no rule of thumb. I would dimen
each with their own drive)?
>
> Ian
>
> -Original Message-
> From: ceph-users-boun...@lists.ceph.com
> [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Wolfgang
> Hennerbichler
> Sent: 11 September 2013 11:35
> To: ceph-users@lists.ceph.com
> Subject: Re
Hi,
I believe you need to tell apt about your proxy server:
cat /etc/apt/apt.conf
Acquire::http::Proxy "http://my.proxy.server:3142";
wogri
On 09/11/2013 08:28 AM, kumar rishabh wrote:
> I am new to ceph. I am trying to follow the official document to install
> ceph on the machine. All things
On Aug 30, 2013, at 20:38 , Geraint Jones wrote:
>>
>> Yes, you can use "cluster_network" to direct OSD traffic over different
>> network interfaces.
>
> Perfect, so now to buy some NIC's :)
or use VLANs on your 10GE and fiddle around with QoS.
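For reference, splitting the traffic is just two directives in the [global]
section of ceph.conf (subnets made up):

[global]
  # clients and monitors
  public network = 192.168.1.0/24
  # OSD replication / recovery traffic
  cluster network = 10.10.10.0/24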
>>
>> Wido
>>
>>> If anyone has any suggesti
On 08/29/2013 03:39 PM, Athanasios Kostopoulos wrote:
> To change the question and expand a bit: are there SPOFs in ceph's
> design? How one can built a really robust ceph "cluster"?
There are no SPOFs in ceph. Except for the fact that a ceph cluster
likes to reside in one close geographic reg
hi list,
I realize there's a command called "rbd lock" to lock an image. Can libvirt use
this to prevent virtual machines from being started simultaneously on different
virtualisation containers?
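For reference, the CLI side of it looks like this (a sketch, names made up -
the open question is whether libvirt can be made to call it):

# take a lock before starting the VM
rbd lock add rbd/vm-disk hypervisor-01
# see who holds a lock (prints the lock id and the locker, e.g. client.4711)
rbd lock list rbd/vm-disk
# release it again after shutdown
rbd lock remove rbd/vm-disk hypervisor-01 client.4711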
wogri
--
http://www.wogri.at
___
ceph-users mailing
On Aug 20, 2013, at 15:18 , Johannes Klarenbeek
wrote:
>
>
> From: Wolfgang Hennerbichler [mailto:wo...@wogri.com]
> Sent: Tuesday, 20 August 2013 10:51
> To: Johannes Klarenbeek
> CC: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] some newbie questio
On Aug 20, 2013, at 09:54 , Johannes Klarenbeek
wrote:
> dear ceph-users,
>
> although heavily active in the past, i didn’t touch linux for years, so I’m
> pretty new to ceph and i have a few questions, which i hope someone could
> answer for me.
>
> 1) i read somewhere that it is recommen
>> ...don't need
>> to use VMs at all for librbd. So you can install QEMU/KVM, libvirt and
>> OpenStack all on the same host too. It's just not an ideal situation
>> from performance or high availability perspective.
>>
>>
>>
>> On Mon, Aug 19
On 08/19/2013 12:01 PM, Schmitt, Christian wrote:
>> yes. depends on 'everything', but it's possible (though not recommended)
>> to run mon, mds, and osd's on the same host, and even do virtualisation.
>
> Currently we don't want to virtualise on this machine since the
> machine is really small, a
distribution.
>
> Regards
>
> Mark
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
--
DI (FH) Wolfgang Hennerbichler
Software Development
Unit Advanc
On 08/19/2013 10:36 AM, Schmitt, Christian wrote:
> Hello, I just have some small questions about Ceph Deployment models and
> if this would work for us.
> Currently the first question would be, is it possible to have a ceph
> single node setup, where everything is on one node?
yes. depends on 'ev
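The main thing to adjust for a single node is that the default CRUSH rule wants
to place replicas on different hosts; a sketch of the usual workaround:

[global]
  # let CRUSH pick OSDs instead of hosts as failure domains
  osd crush chooseleaf type = 0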
On Sun, Aug 18, 2013 at 06:57:56PM +1000, Martin Rudat wrote:
> Hi,
>
> On 2013-02-25 20:46, Wolfgang Hennerbichler wrote:
> >maybe some of you are interested in this - I'm using a dedicated VM to
> >backup important VMs which have their storage in RBD. This i
ll harder than a local RAID. Keep that in mind.
> Dmitry
Wolfgang
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
--
DI (FH) Wolfgang Hennerbichler
Software Development
Unit Advanced Computing
be a fool.
>
> "Every nonfree program has a lord, a master --
> and if you use the program, he is your master."
> --Richard Stallman
>
--
DI (FH) Wolfgang Hennerbichler
Software Development
Unit Advanced Computing Technologies
RISC Software GmbH
A company of the Johan
On 08/13/2013 03:49 AM, Dmitry Postrigan wrote:
> Hello community,
Hi,
> I am currently installing some backup servers with 6x3TB drives in them. I
> played with RAID-10 but I was not
> impressed at all with how it performs during a recovery.
>
> Anyway, I thought what if instead of RAID-10 I
FYI: I'm using OCFS2 as you plan to (/var/lib/nova/instances/); it is stable,
but performance isn't blazing.
--
Sent from my mobile device
On 12.07.2013, at 14:21, "Tom Verdaat" <t...@server.biz> wrote:
Hi Darryl,
Would love to do that too but only if we can configure nova to do this
a
Sorry, no updates on my side. My wife just had our second baby and I'm busy with
reality (changing nappies and stuff).
--
Sent from my mobile device
On 09.07.2013, at 22:18, "Jeppesen, Nelson" <nelson.jeppe...@disney.com> wrote:
Any updates on this? My production cluster has been running on
Also be aware that, due to the way monitors work (and the fact that you need an
odd number of them), if the datacenter that holds the majority of the monitors
loses power, you can't access your backup data either (you can after
fiddling with the monmap, but it doesn't fail over automatically).
configuration i
On Tue, Jun 25, 2013 at 02:24:35PM +0100, Joao Eduardo Luis wrote:
> (Re-adding the list for future reference)
>
> Wolfgang, from your log file:
>
> 2013-06-25 14:58:39.739392 7fa329698780 -1 common/config.cc: In
> function 'void md_config_t::set_val_or_die(const char*, const
> char*)' thread 7fa
On 05/30/2013 11:06 PM, Sage Weil wrote:
> Hi everyone,
Hi again,
> I wanted to mention just a few things on this thread.
Thank you for taking the time.
> The first is obvious: we are extremely concerned about stability.
> However, Ceph is a big project with a wide range of use cases, and i
On 06/25/2013 11:45 AM, Joao Eduardo Luis wrote:
>> On mon a I see:
>>
>> # ceph --admin-daemon /run/ceph/ceph-mon.a.asok mon_status
>> { "name": "a",
>>"rank": 0,
>>"state": "probing",
>>"election_epoch": 1,
>>"quorum": [],
>>"outside_quorum": [
>> "a"],
>>"ext
monmap so it then shuts down. You'll need to convince it to turn on
> and contact mon.0; I don't remember exactly how to do that (Joao?) but
> I think you should be able to find what you need at
> http://ceph.com/docs/master/dev/mon-bootstrap
> -Greg
> Software Engineer #
Hi again,
compiled, tested, seems to work for me.
Fulfilling my own request, if that's OK with you, Alex. Download and try Alex's
packages here at your own risk:
http://www.wogri.at/Qemu-Ceph-Packages.343.0.html
Wolfgang
On Fri, Jun 21, 2013 at 03:41:53PM +0100, Alex Bligh wrote:
> I
Hi Alex,
any chance you would also share the compiled .deb of the Ubuntu package?
I'm willing to test, as we have issues with qemu-1.4.2 and bridging within a
VM. Will try to build the .deb now.
Wolfgang
On Fri, Jun 21, 2013 at 03:41:53PM +0100, Alex Bligh wrote:
> I've backported the
n, Jun 17, 2013 at 12:27 PM, Sage Weil wrote:
> > On Mon, 17 Jun 2013, Wolfgang Hennerbichler wrote:
> >> Hi, i'm planning to Upgrade my bobtail (latest) cluster to cuttlefish.
> >> Are there any outstanding issues that I should be aware of? Anything
> >> that c
Hi, I'm planning to upgrade my bobtail (latest) cluster to cuttlefish. Are
there any outstanding issues that I should be aware of? Anything that could
break my production setup?
Wolfgang
--
Sent from my mobile device
___
ceph-users mailing list
ceph-
by hand, which wasn't really too hard (and I'm not a big fan
of do-it-yourself compiling or makefiles, either)
> OpenNebula doesn't list 12.04 as a supported distribution, so I'm more
> inclined to 12.10.
it seems you're doomed :)
--
DI (FH) Wolfgang Hennerbichler
On 06/17/2013 12:51 PM, Jens Kristian Søgaard wrote:
> Reg. goal b) The qemu-kvm binary in the supported Ubuntu 12.10
> distribution does not include async flush. I don't know if this is
> available as a backport from somewhere else, as my attempts to simply
> upgrade qemu didn't go well.
I've
nd storages, but this may just
be my limited view of the world, and is way too off-topic for this
mailinglist...
> Thanks,
Wolfgang
--
DI (FH) Wolfgang Hennerbichler
Software Development
Unit Advanced Computing Technologies
RISC Software GmbH
A company of the Johannes Kepler Univers
we stated in his other e-mail.
--
DI (FH) Wolfgang Hennerbichler
Software Development
Unit Advanced Computing Technologies
RISC Software GmbH
A company of the Johannes Kepler University Linz
IT-Center
Softwarepark 35
4232 Hagenberg
Austria
Phone: +43 7236 3343 245
Fax: +43 7236 3343 250
wolfg
smitted with it in error please notify
> postmas...@openet.com. Although Openet has taken reasonable precautions to
> ensure no viruses are present in this email, we cannot accept responsibility
> for any loss or damage arising from the use of this email or attachments.
> ___
anks
hope this helps
Wolfgang
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
--
DI (FH) Wolfgang Hennerbichler
Software Development
Unit Advanced Computing Technologies
RISC So
On Mon, Jun 03, 2013 at 08:58:00PM -0700, Sage Weil wrote:
> My first guess is that you do not have the newer crush tunables set and
> some placements are not quite right. If you are prepared for some data
> migration, and are not using an older kernel client, try
>
> ceph osd crush tunables
On Wed, May 29, 2013 at 04:16:14PM +0200, w sun wrote:
> Hi Wolfgang,
>
> Can you elaborate the issue for 1.5 with libvirt? Wonder if that will impact
> the usage with Grizzly. Did a quick compile for 1.5 with RBD support enabled,
> so far it seems to be ok for openstack with a few simple tests.
Hi,
as most on the list here, I also see the future of storage in ceph. I
think it is a great system and overall design, and Sage with the rest of
Inktank and the community are doing their best to make ceph great. Being
a part-time developer myself, I know how awesome new features are, and
how great
> Email : sebastien@enovance.com – Skype : han.sbastien
> Address : 10, rue de la Victoire – 75009 Paris
> Web : www.enovance.com – Twitter : @enovance
>
> On May 28, 2013, at 8:10 PM, Alex Bligh wrote:
>
>
Hi,
for anybody who's interested, I've packaged the latest qemu-1.4.2 (not 1.5, it
didn't work nicely with libvirt), which includes important fixes to RBD, for
Ubuntu 12.04 AMD64. If you want to save some time, I can share the packages
with you. Drop me a line if you're interested.
Wolfgang
__
, while there is for p1b16.
>
> Did I not understand the copy mechanism ?
You sure did understand it the way it is supposed to be. Something's
wrong here. What happens if you dd bs=1024 count=1 | hexdump your
devices - do you see differences there? Is your cluster healthy?
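i.e. something like this (a sketch, assuming the images are mapped under
/dev/rbd/<pool>/<image>):

dd if=/dev/rbd/rbd/p1b16 bs=1024 count=1 2>/dev/null | hexdump -C | head
dd if=/dev/rbd/rbd/p2b16 bs=1024 count=1 2>/dev/null | hexdump -C | head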
> Thank
and took a look inside, I see p1b16
> (along with binary data) but no trace of p2b16
>
> I must have missed something somewhere...
>
> Cheers,
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.
On 05/06/2013 07:34 AM, Varun Chandramouli wrote:
> No, the ntp daemon is not running. Any other
> suggestions?
How do you sync your clocks then?
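Two quick checks (a sketch): the monitors complain about skew themselves, and
ntpq shows whether ntpd actually has usable peers:

# the monitors report clock skew here if it is a problem
ceph health detail
# with ntpd running this should list reachable time servers
ntpq -p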
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
On Wed, Apr 24, 2013 at 07:49:40AM -0500, Mark Nelson wrote:
> On 04/24/2013 05:18 AM, Maik Kulbe wrote:
> Any idea if this was more due to OCFS2 or more due to Ceph? I
> confess I don't know much about how OCFS2 works. Is it doing some
> kind of latency sensitive operation when two files are bei
il in
> the wip-bobtail-rbd-backports-req-order branch. The backport isn't
> fully tested yet, but it's there if anyone wants to try it out.
Great, thanks.
> Josh
Wolfgang
--
DI (FH) Wolfgang Hennerbichler
Software Development
Unit Advanced Computing Technologies
RISC
at you need I can get you the raw files for
> that or expand our search criteria. Let me know what works. Thanks.
>
>
> Best Regards,
>
> Patrick McGarry
> Director, Community || Inktank
>
> http://ceph.com || http://inktank.com
> @scuttlemonkey || @ceph || @inkt
s mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
--
DI (FH) Wolfgang Hennerbichler
Software Development
Unit Advanced Computing Technologies
RISC Software GmbH
A company of the Johannes Kepler University Linz
IT-Center
Softwarepark 35
31 (0)20 700 9902
> Skype: contact42on
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> ___
> ceph-users mailing list
>
Hi,
I do have to present ceph in front of a bunch of students in the
following weeks. Are there any illustrations that you guys have that I
could re-use? Like beautiful pictures that explain the whole concept,
other than those in the documentation?
Wolfgang
--
DI (FH) Wolfgang Hennerbichler
uded into
their master branch yet as far as I've seen. Are they reliable in
integrating it into upstream? This patch is REALLY relevant, IMHO; we
should urge them to integrate it sooner rather than later.
Wolfgang
--
DI (FH) Wolfgang Hennerbichler
Software Development
Unit Advanced Computing Tec
n I thought.
--
DI (FH) Wolfgang Hennerbichler
Software Development
Unit Advanced Computing Technologies
RISC Software GmbH
A company of the Johannes Kepler University Linz
IT-Center
Softwarepark 35
4232 Hagenberg
Austria
Phone: +43 7236 3343 245
Fax: +43 7236 3343 250
wolfgang.hennerb
like that ?
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
--
DI (FH) Wolfgang Hennerbichler
Software Development
Unit Advanced Computing Technologies
RISC
On Tue, Apr 09, 2013 at 10:09:11AM +0200, Sebastien Han wrote:
> So the memory is _not_ saved, only the disk is. Note that it's always hard to
> make consistent snapshot. I assume that freezing the filesystem itself is the
> only solution to have a consistent snapshot, and still this doesn't mean
On Fri, Mar 29, 2013 at 01:46:16PM -0700, Josh Durgin wrote:
> The issue was that the qemu rbd driver was blocking the main qemu
> thread when flush was called, since it was using a synchronous flush.
> Fixing this involves patches to librbd to add an asynchronous flush,
> and a patch to qemu to us
737?
>>> I cannot find any direct link between them. I didnt turn on qemu cache and
>>> my qumu/VM work fine
>>>
>>>
>>>Xiaoxi
>>>
>>> On 2013-3-25, at 17:07, "Wolfgang Hennerbichler"
>>> wrote:
nt turn on qemu cache and my
> qumu/VM work fine
>
>
> Xiaoxi
>
> On 2013-3-25, at 17:07, "Wolfgang Hennerbichler"
> wrote:
>
>> Hi,
>>
>> this could be related to this issue here and has been reported multiple
>> times:
>>
t failed.
>
>
>
> Could you please let me know if you need any more informations
> & have some solutions? Thanks
>
>
>
>
>
> Xiaoxi
>
>
>
> _______
> ceph-users maili
pply a patch in git I can probably test within 24 hours.
Wolfgang
--
DI (FH) Wolfgang Hennerbichler
Software Development
Unit Advanced Computing Technologies
RISC Software GmbH
A company of the Johannes Kepler University Linz
IT-Center
Softwarepark 35
4232 Hagenberg
Austria
P
--
Sent from my mobile device
On 13.03.2013, at 18:38, "Josh Durgin" wrote:
> On 03/12/2013 12:46 AM, Wolfgang Hennerbichler wrote:
>>
>>
>> On 03/11/2013 11:56 PM, Josh Durgin wrote:
>>
>>>> dd if=/dev/zero of=/bigfile bs=2M &
it picked (and why there are so many of them):
netstat -planet | egrep -E ':68.*LISTEN.*ceph-osd' | awk '{ print $4}'
0.0.0.0:6821
0.0.0.0:6822
0.0.0.0:6823
10.1.91.11:6800
10.1.91.11:6801
10.1.91.11:6802
10.1.91.11:6803
10.1.91.11:6804
10.1.91.11:6805
0.0.0.0:6812
0.0.0.0:6815
0.0
On 03/11/2013 11:56 PM, Josh Durgin wrote:
>> dd if=/dev/zero of=/bigfile bs=2M &
>>
>> Serial console gets jerky, VM gets unresponsive. It doesn't crash, but
>> it's not 'healthy' either. CPU load isn't very high, it's in the waiting
>> state a lot:
>
> Does this only happen with rbd_cache tur
1/2013 01:42 PM, Mark Nelson wrote:
> I guess first question is does the jerky mouse behavior only happen
> during reads or writes too? How is the CPU utilization in each case?
>
> Mark
>
> On 03/11/2013 01:30 AM, Wolfgang Hennerbichler wrote:
>> Let
com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
--
DI (FH) Wolfgang Hennerbichler
Software Development
Unit Advanced Computing Technologies
RISC Software GmbH
A company of the Johannes Kepler University Linz
IT-Center
Softwarepark 35
4232 Hagenberg
Austria
Phone: +43 7236 33
en i will just be testing the basic functions of Ceph!
>
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
--
DI (FH) Wolfgang Hennerbichler
Software Develop
o are seeing the same behavior with
> QEMU/KVM/RBD. Maybe it is a common symptom of high IO with this setup.
>
>
>
> Regards,
>
>
>
>
>
> Andrew
>
>
> On 3/8/2013 12:46 AM, Mark Nelson wrote:
>
> On 03/07/2013 05:10 AM, Wolfgan
On 03/07/2013 12:46 PM, Mark Nelson wrote:
> Thanks for the heads up Wolfgang. I'm going to be looking into QEMU/KVM
> RBD performance in the coming weeks so I'll try to watch out for this
> behaviour.
Thanks for taking the time. It seems to me as if there are so many
interrupts within the vir
On 03/06/2013 02:31 PM, Mark Nelson wrote:
> If you are doing sequential reads, you may benefit by increasing the
> read_ahead_kb value for each device in /sys/block/<device>/queue on the
> OSD hosts.
Thanks, that didn't really help. It seems the VM has to handle too much
I/O, even the mouse-cursor is
ts where I could
turn some knobs? I'd rather trade some write-speed to get better
read-speed.
Wolfgang
--
DI (FH) Wolfgang Hennerbichler
Software Development
Unit Advanced Computing Technologies
RISC Software GmbH
A company of the Johannes Kepler University Linz
IT-Center
Softwarepark 35
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
--
DI (FH) Wolfgang Hennerbichler
Software Deve
Without looking at your screencast - some thoughts:
Two mons increase failure probability rather than reducing it. A majority of the
monitors has to be up to form a quorum, and the majority of two is two, so if
you lose one mon the other one will stop working - this is intentional. You need
at least three mons to survive the loss of one, so using ceph with two nodes is
a bad idea.
as for the distri
Hi,
maybe some of you are interested in this - I'm using a dedicated VM to
back up important VMs which have their storage in RBD. This is nothing
fancy and not implemented perfectly, but it works. The VMs don't notice
that they're backed up; the only requirement is that the filesystem of
the VM is