Hi,
On 21.11.2013 01:29, m...@linuxbox.com wrote:
> On Tue, Nov 19, 2013 at 09:02:41AM +0100, Stefan Priebe wrote:
> ...
>>> You might be able to vary this behavior by experimenting with sdparm,
>>> smartctl or other tools, or possibly with different microcode in the drive.
>> Which values or wh
Thanks Josh! This is a lot clearer now.
I understand that librbd is low-level, but still, a warning wouldn't
hurt, would it? Just check if the size parameter is larger than the
cluster capacity, no?
Thank you for pointing out the trick of simply deleting the rbd_header,
I will try that now.
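For concreteness, here is a rough sketch of the kind of client-side check I
have in mind, using the stock CLI (pool and image names are made up, and the
"ceph df" JSON field names may differ between releases):

  # Requested image size in MB ("rbd create --size" takes MB).
  SIZE_MB=$((100 * 1024 * 1024))   # 100 TB, deliberately huge
  # Total free capacity in KB, as reported by "ceph df".
  AVAIL_KB=$(ceph df --format json | python -c \
    'import sys, json; print(json.load(sys.stdin)["stats"]["total_avail"])')
  if [ $((SIZE_MB * 1024)) -gt "$AVAIL_KB" ]; then
    echo "WARNING: requested image size exceeds cluster capacity" >&2
  fi
  rbd create huge-image --size $SIZE_MB

And for the record, my understanding of the header trick is that, for a
format 2 image, it amounts to something like this (image id taken from
"rados ls"; this is just my reading, so corrections welcome):

  rados -p rbd ls | grep '^rbd_header\.'
  rados -p rbd rm rbd_header.<image-id>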
Hello,
Are there any methods to monitor OSD nodes? For example, the free space of
one OSD node.
--
http://www.wogri.at
On Nov 21, 2013, at 10:30 AM, nicolasc wrote:
> Thanks Josh! This is a lot clearer now.
>
> I understand that librbd is low-level, but still, a warning wouldn't hurt,
> would it? Just check if the size parameter is larger than the cluster
> capacity, no?
maybe I want
Yes, I understand that creating an image larger than the cluster may
sometimes be considered a feature. I am not suggesting it should be
forbidden, simply that it should display a warning message to the operator.
Full disclosure: I am not a Ceph dev; this is just a user's opinion.
Best regards,
Nicolas
On 19 November 2013 20:12, LaSalle, Jurvis
wrote:
>
> On 11/19/13, 2:10 PM, "Wolfgang Hennerbichler" wrote:
>
> >
> >On Nov 19, 2013, at 3:47 PM, Bernhard Glomm
> >wrote:
> >
> >> Hi Nicolas
> >> just fyi
> >> RBD format 2 is not yet supported by the Linux kernel (module)
> >
> >I believe this i
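For anyone following along: the image format is fixed at creation time; the
dumpling-era CLI creates format 1 by default and format 2 only when asked,
roughly like this (flag spelling as of that era's rbd tool):

  rbd create myimage --size 10240               # 10 GB, format 1, kernel-mappable
  rbd create myimage2 --size 10240 --format 2   # format 2, librbd/QEMU only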
On 11/21/2013 12:52 PM, Gregory Farnum wrote:
> If you want a logically distinct copy (? this seems to be what Dimitri
> is referring to with adding a 3rd DRBD copy on another node)
Disclaimer: I haven't done "stacked DRBD"; this is from my reading of
the fine manual -- I was referring to "stacked
Hi Knut Moe,
On 21.11.2013 22:51, Knut Moe wrote:
> Thanks, Alfredo.
>
> The challenge is that it still tries to fetch those links when issuing the
> following command:
>
> sudo apt-get update && sudo apt-get install ceph-deploy
>
> It then goes through a lot of different steps before displaying those error
> messages.
The weird thing is that I have some volumes that were created from a snapshot
that actually boot (they complain about not being able to connect to the
metadata server, which I guess is a totally different problem), but in the end
they come up.
I haven't been able to see the difference between t
Perhaps you mean these instructions, from
http://ceph.com/docs/master/start/quick-start-preflight/#ceph-deploy-setup?
---clip---
2. Add the Ceph packages to your repository. Replace
{ceph-stable-release} with a stable Ceph release (e.g., cuttlefish,
dumpling, etc.).
For example:
echo deb htt
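With the placeholder filled in (say, for dumpling on Ubuntu), that line ends
up looking something like this -- the exact release name is up to you:

  echo deb http://ceph.com/debian-dumpling/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list

The "Failed to fetch http://ceph.com/debian-{ceph-stable-release}/..." error
suggests the placeholder was pasted into the sources list literally.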
Hi all
I'm playing with the boot from volume options in Havana and have run into
problems:
(OpenStack Havana, Ceph Dumpling (0.67.4), rbd for glance, cinder and
experimental ephemeral disk support)
The following things do work:
- glance images are in rbd
- cinder volumes are in rbd
- creating
Is this statement accurate?
As I understand DRBD, you can replicate online block devices reliably,
but with Ceph the replication for RBD images requires that the file
system be offline.
Thanks for the clarification,
~jpr
On 11/08/2013 03:46 PM, Gregory Farnum wrote:
>> Does Ceph provide the d
Hi all,
I am trying to install Ceph using the Preflight Checklist and when I issue
the following command
sudo apt-get update && sudo apt-get install ceph-deploy
I get the following error after it goes through a lot of different steps:
Failed to fetch
http://ceph.com/debian-{ceph-stable-release}/di
Thanks, Alfredo.
The challenge is that it still tries to fetch those links when issuing the
following command:
sudo apt-get update && sudo apt-get install ceph-deploy
It then goes through a lot of different steps before displaying those error
messages. See more of the error in this screenshot link:
http://w
Hello all!
I have been experiencing a strange issue since the last update to Ubuntu 13.10
(Saucy) and Ceph Emperor 0.72.1.
Kernel version: 3.11.0-13-generic #20-Ubuntu
The Ceph packages installed are the ones for RARING.
When I mount my ceph cluster us
Ugis,
Can you provide the results for:
ceph osd tree
ceph osd crush dump
On Thu, Nov 21, 2013 at 7:59 AM, Gregory Farnum wrote:
> On Thu, Nov 21, 2013 at 7:52 AM, Ugis wrote:
> Thanks, I reread that section in the docs and found the tunables profile - nice
> to have, I hadn't noticed it before (Ceph
On Thu, Nov 21, 2013 at 10:13 AM, John-Paul Robinson wrote:
> Is this statement accurate?
>
> As I understand DRBD, you can replicate online block devices reliably,
> but with Ceph the replication for RBD images requires that the file
> system be offline.
It's not clear to me what replication you
On 11/21/2013 02:36 AM, Stefan Priebe - Profihost AG wrote:
> Hi,
> On 21.11.2013 01:29, m...@linuxbox.com wrote:
>> On Tue, Nov 19, 2013 at 09:02:41AM +0100, Stefan Priebe wrote:
>> ...
>>> You might be able to vary this behavior by experimenting with sdparm,
>>> smartctl or other tools, or possibly with dif
As an OSD is just a partition, you could use any of the monitoring packages
out there (I like Opsview…).
We use the check-ceph-status nagios plugin[1] to monitor overall cluster
status, but I'm planning on adding/finding more monitoring functionality soon
(e.g. ceph df)
John
1: https://github.
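For the original question (free space per OSD node), the stock CLI already
covers the basics -- roughly, and with output formats that vary by release:

  ceph health          # overall cluster state
  ceph df              # cluster-wide and per-pool usage
  ceph pg dump osds    # per-OSD kb_used / kb_avail columns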
Thanks, I reread that section in the docs and found the tunables profile - nice
to have, I hadn't noticed it before (Ceph docs develop so fast that you
need RSS to follow all the changes :) )
Still, the problem persists in a different way.
I set the profile to "optimal"; rebalancing started, but I had an "rbd
delete" in backg
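For reference, the profile switch mentioned above is the one-liner from the
CRUSH tunables docs:

  ceph osd crush tunables optimal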
Sorry, I was mixing concepts.
I was thinking of RBD snapshots, which require the file system to be
consistent before they are created.
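In practice that can be as simple as quiescing the mounted filesystem around
the snapshot -- a sketch with made-up names (fsfreeze ships with util-linux
and works on XFS/ext4):

  fsfreeze --freeze /mnt/myvol        # flush and block writes
  rbd snap create rbd/myimage@snap1   # snapshot is crash-consistent now
  fsfreeze --unfreeze /mnt/myvol      # resume writes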
I have been exploring an idea for creating remote, asynchronous copies
of RBD images, hopefully with some form of delayed state updating. I've
been reviewing the feature
On Fri, Nov 22, 2013 at 9:19 AM, Alphe Salas Michels wrote:
> Hello all!
>
> I have been experiencing a strange issue since the last update to Ubuntu 13.10
> (Saucy) and Ceph Emperor 0.72.1.
>
> Kernel version: 3.11.0-13-generic #20-Ubuntu
>
> The Ceph packages installed are the ones for RARING.
>
> When I mount my ce
Hi,
Maybe you can help us with the following problems.
If you need more info about our cluster or any debugging logs, I will be
happy to help.
Environment:
---
Small test cluster with 7 nodes, 1 OSD per node
Upgraded from Dumpling to Emperor 0.72.1
2 problems after upgrade:
On Thu, Nov 21, 2013 at 7:52 AM, Ugis wrote:
> Thanks, I reread that section in the docs and found the tunables profile - nice
> to have, I hadn't noticed it before (Ceph docs develop so fast that you
> need RSS to follow all the changes :) )
>
> Still, the problem persists in a different way.
> I set the profile to "optima
On Thu, Nov 21, 2013 at 4:01 PM, Knut Moe wrote:
> Hi all,
>
> I am trying to install Ceph using the Preflight Checklist and when I issue
> the following command
>
> sudo apt-get update && sudo apt-get install ceph-deploy
>
> I get the following error after it goes through a lot of different steps:
>
On Thu, Nov 21, 2013 at 4:51 PM, Knut Moe wrote:
> Thanks, Alfredo.
>
> The challenge is that it still tries to fetch those links when issuing the
> following command:
>
> sudo apt-get update && sudo apt-get install ceph-deploy
>
> It then goes through a lot of different steps before displaying those error
> messages.
What do you mean the filesystem disappears? Is it possible you're just
pushing more traffic to the disks than they can handle, and not waiting
long enough for them to catch up?
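One way to check would be to watch the OSD disks while the load is running,
e.g. with iostat from the sysstat package:

  iostat -x 5    # per-device %util and await, every 5 seconds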
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Thu, Nov 21, 2013 at 5:19 PM, Alphe Salas Michels wrote:
On 11/21/2013 12:13 PM, John-Paul Robinson wrote:
> Is this statement accurate?
>
> As I understand DRBD, you can replicate online block devices reliably,
> but with Ceph the replication for RBD images requires that the file
> system be offline.
>
> Thanks for the clarification,
Basic DRBD is RA