Hi All,
I finally got around to progressing with this but immediately got this
message. Any thoughts?
alphaceph@cephadmin1:~$ rbd create fooimage --size 1024 --pool barpool -m
cephserver1.zion.bt.co.uk -k /etc/ceph/ceph.client.admin.keyring
2013-06-17 08:38:43.955683 7f76a6b72780 -1 did not load
Hi All,
A bit of an update... I should have run the command from the my-cluster
directory. I am now receiving this error:
alphaceph@cephadmin1:~/ceph-deploy/my-cluster$ rbd create fooimage --size
1024 --pool barpool -m cephserver1.zion.bt.co.uk -k
/etc/ceph/ceph.client.admin.keyring
2013-06-17 08:55:
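If the truncated error above is the usual complaint about not finding a config
file, a minimal workaround sketch is to point the command at the ceph.conf that
ceph-deploy generated (the paths here are illustrative, assuming it lives in
~/ceph-deploy/my-cluster):

rbd create fooimage --size 1024 --pool barpool \
    -c ~/ceph-deploy/my-cluster/ceph.conf \
    -k /etc/ceph/ceph.client.admin.keyring

Alternatively, copy that ceph.conf to /etc/ceph/ceph.conf so the command-line
tools find it from any directory.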
On 2013-06-14 19:59, Joao Eduardo Luis wrote:
On 06/14/2013 02:39 PM, pe...@2force.nl wrote:
On 2013-06-13 20:10, pe...@2force.nl wrote:
On 2013-06-13 18:57, Joao Eduardo Luis wrote:
On 06/13/2013 05:25 PM, pe...@2force.nl wrote:
On 2013-06-13 18:06, Gregory Farnum wrote:
On Thursday, June 1
Thank you, Sebastien Han. I am sure many are thankful you've published your thoughts and experiences with Ceph and even OpenStack.
Thanks Bo! :)
If I may, I would like to reword my question/statement with greater clarity: To force all instances to always boot from RBD volumes, would a person ha
Hi Jens,
with regard to OpenNebula I would like to point out a couple of things.
OpenNebula has official support not just for CentOS but for three other
distros as well, among them Ubuntu, which as far as I know ships libvirt and
qemu-kvm versions with Ceph/RBD support.
Also, as far as I know the
Hi Jaime,
We spoke on IRC when I was trying to setup OpenNebula. Thanks for all
the help and hints there!
It is true that my primary problem was that I chose CentOS 6.4 from the
list of supported distributions, as that is the one I'm most comfortable
with.
If I had chosen Ub
On 06/17/2013 12:51 PM, Jens Kristian Søgaard wrote:
> Reg. goal b) The qemu-kvm binary in the supported Ubuntu 12.10
> distribution does not include async flush. I don't know if this is
> available as a backport from somewhere else, as my attempts to simply
> upgrade qemu didn't go well.
I've
Hi Wolfgang,
I've packaged those for ubuntu 12.04 amd64, and you can download them here:
Thanks for the link!
I'm not that familiar with Ubuntu, so sorry for the stupid question.
Will this .deb be compatible with 12.10?
OpenNebula doesn't list 12.04 as a supported distribution, so I'm more
On 06/17/2013 01:03 PM, Jens Kristian Søgaard wrote:
> Hi Wolfgang,
>
>> I've packaged those for ubuntu 12.04 amd64, and you can download them
>> here:
>
> Thanks for the link!
no problem.
> I'm not that familiar with Ubuntu, so sorry for the stupid question.
>
> Will this .deb be compatible w
Hi Wolfgang,
won't install. So the worst that can happen to you is that you have to
build qemu by hand, which wasn't really too hard (and I'm not a big fan
of do-it-yourself compiling or Makefiles either)
Well, as a veteran C-programmer I have no problems compiling things or
tweaking Makefiles
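For reference, a rough sketch of such a by-hand qemu build with RBD enabled;
the qemu version is only an example, and this assumes the Ceph client headers
are available from the distro or Ceph repositories:

sudo apt-get install librbd-dev librados-dev    # Ceph client development headers
sudo apt-get build-dep qemu-kvm                 # qemu's usual build deps (needs deb-src entries)
tar xf qemu-1.5.0.tar.bz2 && cd qemu-1.5.0      # version illustrative
./configure --target-list=x86_64-softmmu --enable-rbd
make -j4 && sudo make install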
On 06/16/2013 08:48 PM, Jens Kristian Søgaard wrote:
> Hi guys,
>
> I'm looking to setup an open source cloud IaaS system that will work
> well together with Ceph. I'm looking for a system that will handle
> running KVM virtual servers with persistent storage on a number of
> physical servers with
Hi Stratos,
you might want to take a look at Synnefo. [1]
I did take a look at it earlier, but decided not to test it.
Mainly I was deterred because I found the documentation a bit lacking. I
opened up the section on File Storage and found that there were only
chapter titles, but no actual
Hi, I'm planning to upgrade my bobtail (latest) cluster to cuttlefish. Are
there any outstanding issues that I should be aware of? Anything that could
break my production setup?
Wolfgang
--
Sent from my mobile device
Hi,
http://tracker.ceph.com/issues/5232
http://tracker.ceph.com/issues/5238
http://tracker.ceph.com/issues/5375
Stefan
This mail was sent from my iPhone.
Am 17.06.2013 um 17:06 schrieb Wolfgang Hennerbichler
:
> Hi, I'm planning to upgrade my bobtail (latest) cluster to cuttlefish. Are
> t
On Mon, 17 Jun 2013, Wolfgang Hennerbichler wrote:
> Hi, I'm planning to upgrade my bobtail (latest) cluster to cuttlefish.
> Are there any outstanding issues that I should be aware of? Anything
> that could break my production setup?
There will be another point release out in the next day or tw
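For what it's worth, a rough sketch of the usual rolling-upgrade order (check
the cuttlefish release notes before relying on this; daemon types below are
examples and assume the sysvinit scripts):

sudo apt-get update && sudo apt-get install ceph ceph-common   # on each node
sudo service ceph restart mon    # monitor nodes first, one at a time, waiting for quorum (ceph -s)
sudo service ceph restart osd    # then the OSDs, node by node
sudo service ceph restart mds    # finally any MDS daemons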
Yep, you can't connect to your monitors so nothing else is going to
work either. There's a wealth of conversations about debugging monitor
connection issues in the mailing list and irc archives (and I think
some in the docs), but as a quick start list:
1) make sure the monitor processes are actuall
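A rough sketch of those first checks (the admin-socket path and mon id "a" are
defaults and may differ; the hostname is the one from the earlier post):

ps aux | grep ceph-mon                                              # is the mon daemon actually running?
sudo ceph --admin-daemon /var/run/ceph/ceph-mon.a.asok mon_status   # ask the local mon for its state
ceph -s -m cephserver1.zion.bt.co.uk                                # can this client reach the monitor at all?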
I'm actually planning this same upgrade on Saturday. Is the memory
leak from Bobtail during deep-scrub known to be squashed? I've been
seeing that a lot lately.
I know Bobtail->Cuttlefish is one-way only, due to the mon
re-architecting. But in general, whenever we do upgrades we usually
have a
On Sun, Jun 16, 2013 at 11:10 PM, Wolfgang Hennerbichler
wrote:
>
>
> On 06/16/2013 01:27 AM, Matthew Walster wrote:
>> In the same way that we have CRUSH maps for determining placement
>> groups, I was wondering if anyone had stumbled across a way to influence
>> a *client* (be it CephFS or RBD)
Hello!
How do you add access to a new pool for an existing Ceph client?
e.g.
First, create a new user -- openstack-volumes:
ceph auth get-or-create client.openstack-volumes mon 'allow r' osd 'allow
class-read object_prefix rbd_children, allow rwx pool=openstack-volumes,
allow rx pool=openstack-images'
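If the goal is to grant that existing client access to an additional pool, one
way is ceph auth caps. A sketch, where "newpool" is a placeholder for the pool
being added; note that ceph auth caps replaces the entire capability set, so
the existing caps have to be repeated:

ceph auth caps client.openstack-volumes \
    mon 'allow r' \
    osd 'allow class-read object_prefix rbd_children, allow rwx pool=openstack-volumes, allow rwx pool=newpool, allow rx pool=openstack-images'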
If you followed the standard setup, each OSD is its own disk +
filesystem. /var/lib/ceph/osd/ceph-2 is in use as the mount point for
the osd.2 filesystem. Double-check by examining the output of the
`mount` command.
I get the same error when I try to rename a directory that's used as a
mount p
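A quick way to confirm this (paths as in the standard layout):

mount | grep /var/lib/ceph/osd      # shows which device is mounted at each OSD directory
df -h /var/lib/ceph/osd/ceph-2      # confirms ceph-2 sits on its own filesystem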
Thanks. I'll have to get more creative. :-)
On 6/14/13 18:19 , Gregory Farnum wrote:
Yeah. You've picked up on some warty bits of Ceph's error handling
here for sure, but it's exacerbated by the fact that you're not
simulating what you think. In a real disk error situation the
filesystem wo
Hi Florian,
If you can trigger this with logs, we're very eager to see what they say
about this! The http://tracker.ceph.com/issues/5336 bug is open to track
this issue.
Thanks!
sage
On Thu, 13 Jun 2013, Smart Weblications GmbH - Florian Wiessner wrote:
> Hi,
>
> Is really no one on the li
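For anyone wanting to capture such logs, a sketch of turning up logging; the
subsystems and levels here are generic examples, and which ones matter depends
on the bug:

# in ceph.conf, then restart the daemon:
[osd]
    debug osd = 20
    debug ms = 1

# or injected at runtime for a single daemon:
ceph osd tell 0 injectargs '--debug-osd 20 --debug-ms 1'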
On Mon, Jun 17, 2013 at 02:10:27PM -0400, Travis Rhoden wrote:
> I'm actually planning this same upgrade on Saturday. Is the memory
> leak from Bobtail during deep-scrub known to be squashed? I've been
> seeing that a lot lately.
this is actually the reason why we're planning to upgrade, too. on
Hi List,
I want to deploy a ceph cluster with the latest cuttlefish and export it
over an iSCSI interface to my applications.
Some questions here:
1. Which Linux distro and release would you recommend? I used Ubuntu 13.04
for testing purposes before.
2. Which iSCSI target is better? LIO, SCST, or other
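One common approach is to map the image with the kernel RBD client and export
the resulting block device through whichever target framework is chosen (LIO,
SCST, or tgt). A sketch, with the pool and image names used only as examples:

sudo rbd map barpool/fooimage        # appears as /dev/rbd0 (or /dev/rbd/barpool/fooimage)
# then configure /dev/rbd0 as the backing block device / LUN in the chosen iSCSI target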
Hi Derek -
If you are still having problems with ceph-deploy, please forward the ceph.log
file to me, and I can start trying to figure out what's gone wrong.
Thanks,
Gary
On Jun 12, 2013, at 7:09 PM, Derek Yarnell wrote:
> Hi,
>
> I am trying to run ceph-deploy on a very basic 1 node configura
Hi
==
root@xtream:~# service ceph start
=== mds.a ===
Starting Ceph mds.a on xtream...already running
=== osd.0 ===
Mounting xfs on xtream:/var/lib/ceph/osd/ceph-0
2013-06-18 04:26:16.373075 7f30ffd2d700