On Fri, Mar 15, 2013 at 8:40 PM, Mandell Degerness wrote:
> Thanks. One further question: I notice from the docs that there is a
> setting for rgw_data which defaults to
> "/var/lib/ceph/radosgw/$cluster-$id". What is stored here? How big
> should it be? Is it even necessary? The directory does not exist at the moment.
Thanks. One further question: I notice from the docs that there is a
setting for rgw_data which defaults to
"/var/lib/ceph/radosgw/$cluster-$id". What is stored here? How big
should it be? Is it even necessary? The directory does not exist at
the moment.
On Fri, Mar 15, 2013 at 7:15 PM, Yehu
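For reference, a minimal ceph.conf sketch showing where rgw_data fits; the section name and the other values below are illustrative, not taken from this thread:

  [client.radosgw.gateway]
      host = gateway-host
      keyring = /etc/ceph/keyring.radosgw.gateway
      rgw socket path = /tmp/radosgw.sock
      rgw data = /var/lib/ceph/radosgw/ceph-radosgw.gateway    ; the $cluster-$id default expanded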
On Fri, Mar 15, 2013 at 5:06 PM, Mandell Degerness wrote:
> How are the pools used by rgw defined?
>
> Specifically, if I want to ensure that all of the data stored by rgw
> uses pools which are replicated 3 times and have a pgnum and a pgpnum
> greater than 8, what do I need to set?
There are a
How are the pools used by rgw defined?
Specifically, if I want to ensure that all of the data stored by rgw
uses pools which are replicated 3 times and have a pgnum and a pgpnum
greater than 8, what do I need to set?
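One hedged sketch of how to get that (pool names are illustrative; radosgw creates several pools such as .rgw, .rgw.buckets and .users, and the exact set depends on the version): pre-create each pool with the desired pg_num, then raise the replica count, e.g.

  $ ceph osd pool create .rgw.buckets 128 128     # pg_num / pgp_num
  $ ceph osd pool set .rgw.buckets size 3         # three replicas

and repeat for the other rgw pools.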
On Tue, Feb 26, 2013 at 4:35 PM, wrote:
> Hello,
>
> I was wondering whether it would be feasible to manage existing FC-SAN
> storage with ceph. Now this may sound somewhat weird, so let me explain:
>
> as it turns out you can't actually trust SAN boxes with RAID-6 devices
> to actually hold your
On Fri, Mar 1, 2013 at 1:29 AM, nighteblis li wrote:
> OS:
> # uname -a
> Linux QA-DB-009 2.6.18-238.el5 #1 SMP Sun Dec 19 14:22:44 EST 2010 x86_64
> x86_64 x86_64 GNU/Linux
>
> distro: # cat /etc/redhat-release
> Red Hat Enterprise Linux Server release 5.6 (Tikanga)
>
> ceph: 0.56.3
>
> # gcc -v
On Thu, Mar 14, 2013 at 4:09 AM, Léon Keijser wrote:
> Hi,
>
> Every now and then I'm unable to unmap an RBD device:
>
> root@c2-backup ~ # rbd showmapped
> id pool       image         snap device
> 0  20-kowin-a 20-kowin-a-01 -    /dev/rbd0
> root@c2-backup ~ # rbd unmap /dev/rbd0
>
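Generic checks, not specific to this thread, that often explain a refused unmap (assuming the device is simply still in use):

  $ grep rbd0 /proc/mounts      # still mounted somewhere?
  $ fuser -v /dev/rbd0          # any process holding the device open?
  $ rbd unmap /dev/rbd0         # retry once nothing is using it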
On Friday, March 15, 2013 at 3:40 PM, Marc-Antoine Perennou wrote:
> Thanks a lot for these explanations; we're looking forward to these fixes!
> Do you have any public bug reports regarding this that you can link us to?
>
> Good luck, thank you for your great job and have a nice weekend
>
> Marc-Antoine Peren
On 15 March 2013 at 21:32, Greg Farnum wrote:
> On Friday, March 8, 2013 at 3:29 PM, Kevin Decherf wrote:
>> On Fri, Mar 01, 2013 at 11:12:17AM -0800, Gregory Farnum wrote:
>>> On Tue, Feb 26, 2013 at 4:49 PM, Kevin Decherf wrote:
You will find the archive he
Patrick,
Absolutely yes, it makes sense.
We will wait with (im)patience.
Thank you
-- Marco Aroldi
2013/3/15 Patrick McGarry :
> Marco,
>
> There are definitely folks who would love to see exactly what you are
> asking for. However, it's not always as simple as it might seem.
> With the ability t
On Friday, March 8, 2013 at 3:29 PM, Kevin Decherf wrote:
> On Fri, Mar 01, 2013 at 11:12:17AM -0800, Gregory Farnum wrote:
> > On Tue, Feb 26, 2013 at 4:49 PM, Kevin Decherf wrote:
> > > You will find the archive here:
> > > The data is not anonymized. Interesting
Marco,
There are definitely folks who would love to see exactly what you are
asking for. However, it's not always as simple as it might seem.
With the ability to set replication levels per pool, and given that no
space is used until you write data to a given pool, there are often too
many variable
Yes Bill,
but it would be nice to see the real available space reported, at least
by the cephfs clients, by retrieving the pools and their rep sizes
from the monitors and dividing the total space accordingly.
This could be a suggestion for Greg and the other guys working on the
first stable Cephf
Yes, that is the TOTAL amount in the cluster.
For example, if you have a replica size of '3', 81489 GB available, and
you write 1 GB of data, then that data is written to the cluster 3 times,
so your total available will be 81486 GB. It definitely threw me off at
first, but seeing as you can hav
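A rough sketch of that arithmetic with the numbers from this thread, assuming every pool uses size 3:

  $ RAW_GB=81489; REP=3
  $ echo $(( RAW_GB - 1 * REP ))   # raw "avail" after writing 1 GB replicated 3 times
  81486
  $ echo $(( RAW_GB / REP ))       # approximate usable capacity in GB
  27163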
Hi,
I have a test cluster of 80TB raw.
My pools use rep size = 2, so the real storage capacity is 40TB,
but pgmap reports a total of 80TB available, and the cephfs
mount on a client reports 80TB available too.
I would expect to see a "40TB available" somewhere.
Is this behavior correct?
Fantastic diagram, Wido, many thanks for doing this. Ross is in the
process of setting up a community wiki so hopefully we can host this
diagram there.
Neil
On Fri, Mar 15, 2013 at 6:22 AM, Wido den Hollander wrote:
> Hi,
>
> In the last couple of months I got several questions from people who a
On Fri, Mar 15, 2013 at 4:44 PM, Marco Aroldi wrote:
> Dan,
> this sounds weird:
> how can you run "cephfs /mnt/mycephfs set_layout 10" on an unmounted
> mountpoint?
We had cephfs still mounted from earlier (before the copy pool, delete
pool). Basically, any file reads resulted in an I/O error, but
Dan,
this sounds weird:
how can you run "cephfs /mnt/mycephfs set_layout 10" on an unmounted mountpoint?
My client says:
root@gw1:~# cephfs /mnt/ceph/ set_layout -p 3
Error setting layout: Inappropriate ioctl for device
And in IRC I've found that "path need to be a path to an already mounted cephfs"
We eventually resolved the problem by doing "ceph mds add_data_pool
10; cephfs /mnt/mycephfs set_layout 10", where 10 is the id of our new
"data" volume, and then rebooting the client machine (since the cephfs
mount was hung).
Cheers, Dan
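Condensed, the sequence Dan describes looks roughly like this (pool id 10 is the example from this thread; the set_layout flags vary between versions, and Dan rebooted the client rather than remounting):

  $ ceph mds add_data_pool 10                # allow the new pool as a CephFS data pool
  $ cephfs /mnt/mycephfs set_layout -p 10    # point the default layout at pool 10
  # then remount (or reboot) the client so it picks up the new layout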
On Fri, Mar 15, 2013 at 4:13 PM, Marco Aroldi wrote:
> Sam
Same here.
Now mounting cephfs hangs for a minute and then says "mount error 5 =
Input/output error".
Since the new pool has id=3, I've also executed "ceph mds
add_data_pool 3" and "ceph mds remove_data_pool 0"
The monitor log has this line:
2013-03-15 16:08:08.327049 7fe957441700 0 -- 192.168.21.11:6
Hi,
In the last couple of months I got several questions from people who
asked how the Ceph integration with CloudStack works internally.
CloudStack is being developed rapidly lately and the documentation about
how it all works internally isn't always up to date.
I made a small diagram whic
Hi,
On Fri, Mar 15, 2013 at 9:52 AM, Sebastien Han wrote:
> Hi,
>
> It's not recommended to use this command yet.
>
> As a workaround you can do:
>
> $ ceph osd pool create <new-pool> <pg_num>
> $ rados cppool <old-pool> <new-pool>
> $ ceph osd pool delete <old-pool>
> $ ceph osd pool rename <new-pool> <old-pool>
>
We've just done exactly this on the default p
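For concreteness, a sketch of that workaround with hypothetical names (the thread applied it to the default data pool; the pool name "data_new" and the pg count are made up):

  $ ceph osd pool create data_new 512 512
  $ rados cppool data data_new
  $ ceph osd pool delete data
  $ ceph osd pool rename data_new data

Note that the recreated pool gets a new pool id, which is why the mds add_data_pool / set_layout steps discussed above become necessary for CephFS.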
Hi,
On 03/14/2013 05:05 PM, Patrick McGarry wrote:
Hey Ceph fans,
The World Hosting Days event in Rust, Germany [1] is fast-approaching,
and a couple of Ceph disciples will be available to beat about the
head and shoulders if you care to stop by the Dell booth.
So far I know both Nigel Thomas
I need to create the directory "/var/lib/ceph/mds/mds.$id" by hand, right?
I started the service as you said, and it succeeded.
But no "mds.$id" directory exists.
Will this affect how it works?
And what will be stored in that directory?
The FS's metadata will be stored in the OSDs, right?
Thanks.
-c
Hi,
It's not recommended to use this command yet.
As a workaround you can do:
$ ceph osd pool create <new-pool> <pg_num>
$ rados cppool <old-pool> <new-pool>
$ ceph osd pool delete <old-pool>
$ ceph osd pool rename <new-pool> <old-pool>
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving blood."
PHONE : +33 (0)1 49 70 99 72 – MOBILE : +33 (0)6 52 84 44 7
Hi,
* Edit `ceph.conf` and add an MDS section like so:
    [mds]
    mds data = /var/lib/ceph/mds/mds.$id
    keyring = /var/lib/ceph/mds/mds.$id/mds.$id.keyring
    [mds.0]
    host = {hostname}
* Create the authentication key (if you use cephx):
    $ sudo ceph auth get-or-create mds.0 mon 'allow rwx' mds 'allow *' osd 'allow *' > /var/
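A minimal sketch of the same steps end to end, assuming mds.0 and the data directory above (the -o keyring destination and the sysvinit invocation are illustrative and vary by distro/version):

  $ sudo mkdir -p /var/lib/ceph/mds/mds.0
  $ sudo ceph auth get-or-create mds.0 mon 'allow rwx' mds 'allow *' osd 'allow *' \
        -o /var/lib/ceph/mds/mds.0/mds.0.keyring
  $ sudo service ceph start mds.0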
Hi,
I have a new cluster with no data.
It now has 44 OSDs, and my goal is to grow it over the next few months
to a total of 88 OSDs.
My pgmap is:
pgmap v841: 8640 pgs: 8640 active+clean; 8730 bytes data, 1733 MB
used, 81489 GB / 81491 GB avail
2880 PGs each for the data, metadata and rbd pools.
This valu
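For what it's worth, the commonly quoted rule of thumb (not stated in this message, which is cut off) is on the order of 100 PGs per OSD divided by the replica count, rounded up to a power of two; assuming a replica size of 2 purely for illustration:

  $ echo $(( 88 * 100 / 2 ))    # target of 88 OSDs -> ~4400 PGs total, i.e. 4096 or 8192
  4400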
Hi list,
I'm a new user of Ceph.
I have built a setup to try RBDs, and it works brilliantly.
I haven't enabled the MDS service, because I didn't think I needed it at the time.
But now I want to try CephFS on the same cluster.
The question here is: I already have some data stored in RBDs.
Is t