Every time I have seen this message, it has been because there are
snapshots of the image that haven't been deleted yet. You can see
the snapshots with "rbd snap list <image-name>".
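For example, something along these lines usually clears it (the image
name is a placeholder; any protected snapshots would need an unprotect
first):
rbd snap ls myimage
rbd snap purge myimage     # removes all unprotected snapshots of the image
rbd rm myimage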
On Tue, May 20, 2014 at 4:26 AM, James Eckersall
wrote:
> Hi,
>
>
>
> I'm having some trouble with an rbd image. I want
You can use librados directly or you can use radosgw, which, I think,
would be pretty much exactly what you are looking for.
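Roughly, the radosgw route looks like this once the gateway itself is
configured and running (the user name is just an example); the keys it
prints can then be used from any S3- or Swift-compatible client over HTTP:
radosgw-admin user create --uid=webuser --display-name="Web User"
radosgw-admin subuser create --uid=webuser --subuser=webuser:swift --access=full
radosgw-admin key create --subuser=webuser:swift --key-type=swift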
On Tue, Apr 29, 2014 at 4:36 PM, Stuart Longland wrote:
> Hi all,
>
> Is there some kind of web-based or WebDAV-based front-end for accessing
> a Ceph cluster?
>
> Our situ
You need to add a line to /etc/lvm/lvm.conf:
types = [ "rbd", 1024 ]
It should be in the "devices" section of the file.
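For reference, it ends up looking like this, and you can then check that
LVM accepts the mapped device (device name is just an example):
# /etc/lvm/lvm.conf, inside the devices { } section:
#     types = [ "rbd", 1024 ]
pvcreate /dev/rbd0    # should now succeed instead of the device being filtered out
pvs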
On Tue, Sep 24, 2013 at 5:00 PM, John-Paul Robinson wrote:
> Hi,
>
> I'm exploring a configuration with multiple Ceph block devices used with
> LVM. The goal is to provide a
The easy solution to this is to create a really tiny image in glance (call
it fake_image or something like that) and tell nova that it is the image
you are using. Since you are booting from the RBD anyway, it doesn't
actually use the image for anything, and should only put a single copy of
it in t
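A rough sketch with the old glance/nova CLIs, just as an illustration
(names, flavor, and the volume id are placeholders):
dd if=/dev/zero of=fake.img bs=1K count=1
glance image-create --name fake_image --disk-format raw --container-format bare --file fake.img
nova boot --flavor m1.small --image fake_image \
    --block-device-mapping vda=<volume-id>:::0 boot-from-volume-test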
Hmm. This sounds very similar to the problem I reported (with
debug-mon = 20 and debug ms = 1 logs as of today) on our support site
(ticket #438) - Sage, please take a look.
On Mon, Aug 12, 2013 at 9:49 PM, Sage Weil wrote:
> On Mon, 12 Aug 2013, Jeppesen, Nelson wrote:
>> Joao,
>>
>> (log file
One of our tests last night failed in a weird way. We started with a
three-node cluster with three monitors, expanded to a five-node cluster
with five monitors, and then dropped back to a four-node cluster with
three monitors.
The sequence of events was:
start 3 monitors (monitors 0, 1, 2) - monmap e1
add one
I just hit a kernel oops from drivers/block/rbd.c:1736
https://gist.github.com/mdegerne/c61ecb99368e3155cd2b
Running ceph version:
ceph version 0.61.4 (1669132fcfc27d0c0b5e5bb93ade59d147e23404)
Running kernel:
Linux node-172-20-0-13 3.9.10 #1 SMP Tue Jul 16 10:02:16 UTC 2013
x86_64 Intel(R) Xeon
I'm not certain what the correct behavior should be in this case, so
maybe it is not a bug, but here is what is happening:
When an OSD becomes full, a process fails and we unmount the rbd and
attempt to remove the lock associated with the rbd for the process.
The unmount works fine, but removing the l
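For anyone wanting to reproduce this, the commands involved are roughly
the following (mount point, image name, and lock id/locker are placeholders):
umount /mnt/myrbd
rbd lock list myimage                        # shows the lock id and the locker
rbd lock remove myimage mylockid client.4123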
Is there any command (in the shell or Python API) that can tell me
whether ceph is still creating pgs, other than actually attempting a
modification of the pg_num or pgp_num of a pool? I would like to
minimize the number of errors I get and not keep trying the commands
until success, if possible.
Right no
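One shell-level check that seems workable is to look for PGs still in the
"creating" state, e.g.:
ceph pg stat                       # summary counts PGs per state, e.g. "... creating ..."
ceph pg dump | grep -c creating    # 0 means nothing is still being created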
On Mon, Jun 24, 2013 at 11:26 PM, Alex Bligh wrote:
>
> On 25 Jun 2013, at 00:39, Mandell Degerness wrote:
>
>> The issue, Sage, is that we have to deal with the cluster being
>> re-expanded. If we start with 5 monitors and scale back to 3, running
>> the "ceph mon remove N"
were previously removed. They will suicide at startup.
On Mon, Jun 24, 2013 at 4:22 PM, Sage Weil wrote:
> On Mon, 24 Jun 2013, Mandell Degerness wrote:
>> Hmm. This is a bit ugly from our perspective, but not fatal to your
>> design (just our implementation). At the time we run the
On Mon, Jun 24, 2013 at 1:54 PM, Sage Weil wrote:
> On Mon, 24 Jun 2013, Mandell Degerness wrote:
>> I'm testing the change (actually re-starting the monitors after the
>> monitor removal), but this brings up the issue with why we didn't want
>> to do this in the first plac
file. In that case, the outage is
immediate and will last until the problem is corrected on the first
server to have the monitor restarted.
On Mon, Jun 24, 2013 at 10:07 AM, John Nielsen wrote:
> On Jun 21, 2013, at 5:00 PM, Mandell Degerness
> wrote:
>
>> There is a scenario w
monitor still suicides when started.
Regards,
Mandell Degerness
It is possible to create all of the pools manually before starting
radosgw. That allows control of the pg_num used. The pools are (a
creation loop is sketched below the list):
.rgw, .rgw.control, .rgw.gc, .log, .intent-log, .usage, .users,
.users.email, .users.swift, .users.uid
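For example, a loop along these lines works (the pg_num of 64 is just a
placeholder; pick whatever fits your cluster):
for p in .rgw .rgw.control .rgw.gc .log .intent-log .usage .users \
         .users.email .users.swift .users.uid; do
    ceph osd pool create "$p" 64 64
done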
On Wed, Jun 19, 2013 at 6:13 PM, Derek Yarnell wrote:
> Hi,
I have used the examples on the website to make fast, slow, and mixed
pools. What I'd like to be able to do, but which I fear may not be
possible, is to have a mixed storage rule (see the sketch after this
list) such that:
primary copy goes to fast disk
secondary copies go to slow disk
primary and secondary are never on the same
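A rough sketch of the kind of rule I mean, assuming CRUSH roots named
"fast" and "slow" already exist (and note this alone doesn't force the
fast and slow copies onto different hosts):
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# append something like the following rule to crushmap.txt:
#   rule mixed {
#           ruleset 4
#           type replicated
#           min_size 2
#           max_size 10
#           step take fast
#           step chooseleaf firstn 1 type host
#           step emit
#           step take slow
#           step chooseleaf firstn -1 type host
#           step emit
#   }
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new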
Sorry. I should have mentioned, this is using the bobtail version of ceph.
On Mon, May 13, 2013 at 1:13 PM, Mandell Degerness
wrote:
> I know that there was another report of the bad behavior when deleting
> an RBD that is currently mounted on a host. My problem is related,
> but
I know that there was another report of the bad behavior when deleting
an RBD that is currently mounted on a host. My problem is related,
but slightly different.
We are using openstack and Grizzly Cinder to create a bootable ceph
volume. The instance was booted and all was well. The server on
w
On Thu, Apr 25, 2013 at 11:05 AM, Gregory Farnum wrote:
> On Wed, Apr 24, 2013 at 6:05 PM, Mandell Degerness
> wrote:
>> Given a partition, is there a command which can be run to validate if
>> the partition is used as a journal of an OSD and, if so, what OSD it
>> belongs
Given a partition, is there a command which can be run to validate if
the partition is used as a journal of an OSD and, if so, what OSD it
belongs to?
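For what it's worth, an ad-hoc check along these lines works with the
default /var/lib/ceph layout (the device path is a placeholder, and it
only catches journals that are symlinks to a partition):
part=/dev/sdb2
for d in /var/lib/ceph/osd/*; do
    if [ "$(readlink -f "$d/journal" 2>/dev/null)" = "$(readlink -f "$part")" ]; then
        echo "$part is the journal for $d"
    fi
done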
at 7:15 PM, Yehuda Sadeh wrote:
> On Fri, Mar 15, 2013 at 5:06 PM, Mandell Degerness
> wrote:
>> How are the pools used by rgw defined?
>>
>> Specifically, if I want to ensure that all of the data stored by rgw
>> uses pools which are replicated 3 times and have a pgnum a
How are the pools used by rgw defined?
Specifically, if I want to ensure that all of the data stored by rgw
uses pools which are replicated 3 times and have a pg_num and a pgp_num
greater than 8, what do I need to set?
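For instance, would it be enough to pre-create each rgw pool with the
settings I want before the gateway first starts, along these lines
(numbers are examples)?
ceph osd pool create .rgw 128 128
ceph osd pool set .rgw size 3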
the moment.
On Mon, Feb 25, 2013 at 1:34 PM, Josh Durgin wrote:
> On 02/25/2013 11:12 AM, Mandell Degerness wrote:
>>
>> I keep running into this error when attempting to create a volume from an
>> image:
>>
>> ProcessExecutionError: Unexpected error while running
I keep running into this error when attempting to create a volume from an image:
ProcessExecutionError: Unexpected error while running command.
Command: rbd import --pool rbd /mnt/novadisk/tmp/tmpbjwv9l
volume-d777527e-9754-4779-bd4c-d869968aba0c
Exit code: 1
Stdout: ''
Stderr: 'rbd: unable to get