Hi Pavel,
Will try and answer some of your questions:
My first question will be about the monitor data directory. How much space do I
> need to reserve for it? Can the monitor's filesystem be corrupted if the monitor
> runs out of storage space?
>
We have about 20GB partitions for monitors - they really don't use much
space.
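If you want to see what yours is using, the mon store lives under
/var/lib/ceph/mon/ by default (path assumed from a standard install):

  du -sh /var/lib/ceph/mon/*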
Hi All,
By default, in which directory or directories does ceph store the block device
files? Is it in /dev or in some other filesystem?
Thanks
Kumar
To start over, it is easier to do it from scratch to avoid configuration
problems and other things that might bite you back,
like stale keyrings.
It has happened to me more than once :)
When I need to start from scratch I call 'ceph-deploy purge NODE &&
ceph-deploy purge-data'
And then remove all the
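For reference, the full sequence I use looks roughly like this (node name is an
example; recent ceph-deploy spells the data subcommand 'purgedata'):

  ceph-deploy purge node1        # remove the ceph packages
  ceph-deploy purgedata node1    # remove /var/lib/ceph and /etc/ceph
  ceph-deploy forgetkeys         # drop locally cached keyrings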
Hi All,
I am using cinder as a front end for volume storage in my OpenStack
configuration.
Ceph is used as the storage back-end.
Currently cinder uses only one pool (in my case the pool name is "volumes")
for its volume storage.
I want cinder to use multiple ceph pools for volume storage
Thanks Alfredo. 'ceph-deploy purge NODE &&
ceph-deploy purge-data' really helps.
On Tue, Feb 25, 2014 at 9:24 PM, Alfredo Deza wrote:
> To start over is easier to do it from scratch to avoid configuration
> problems and other things that might bite you back
> like stale keyrings.
>
> It has happened to me more than once :)
Thanks Srinivasa. The instructions are not for Ubuntu but I'll take them as a
reference.
I actually found the problem was due to the public IP setting in ceph.conf
on my admin node. I removed that line, the problem was fixed, and the
cluster could be set up. But a new problem occurred: I can only have one
monitor.
Hi,
Please have a look at the cinder multi-backend functionality; examples here:
http://www.sebastien-han.fr/blog/2013/04/25/ceph-and-cinder-multi-backend/
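Roughly, you declare one backend section per pool in cinder.conf and map each
to a volume type (pool and backend names below are made up):

  [DEFAULT]
  enabled_backends = rbd-volumes,rbd-fast

  [rbd-volumes]
  volume_driver = cinder.volume.drivers.rbd.RBDDriver
  rbd_pool = volumes
  volume_backend_name = RBD_VOLUMES

  [rbd-fast]
  volume_driver = cinder.volume.drivers.rbd.RBDDriver
  rbd_pool = volumes-fast
  volume_backend_name = RBD_FAST

Then create one volume type per backend, e.g.:

  cinder type-create fast
  cinder type-key fast set volume_backend_name=RBD_FAST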
Cheers.
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1 49 70 99 72
Mail: sebastien.
Hi all,
I hit the same problem here when adding new monitors:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-October/005483.html
I understand that I should set up the public address in ceph.conf. But I am
really confused about the public network setting in the docs.
Where should I set the public network?
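If I read the docs correctly, something like this is what is expected
(addresses below are made up):

  [global]
  public network = 192.168.0.0/24
  cluster network = 10.0.0.0/24

  [mon.a]
  host = node1
  mon addr = 192.168.0.10:6789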
Hi,
RBD blocks are stored as objects on a filesystem usually under:
/var/lib/ceph/osd//current//
RBD is just an abstraction layer.
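You can see the objects backing an image like this (pool and image names are
examples):

  rbd info rbd/myimage                        # note the block_name_prefix
  rados -p rbd ls | grep <block_name_prefix>  # list the backing objects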
Cheers.
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance.com
Addres
Hi all,
I'm new to Ceph and I would like to know if there is any way of
changing the size of Ceph's internal objects.
I mean, when I put an image on RBD, for example, I can see this:
rbd -p CephTest info base-127-disk-1
rbd image 'base-127-disk-1':
size 32768 MB in 8192 objects
order 22 (4096 kB objects)
Hi,
The value can be set during image creation.
Start with this: http://ceph.com/docs/master/man/8/rbd/#striping
Followed by the example section.
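For example, something along these lines (names and values are illustrative;
the striping options need format 2 images):

  rbd create CephTest/striped-img --size 10240 --image-format 2 \
      --order 22 --stripe-unit 65536 --stripe-count 8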
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance.
On 02/25/2014 03:54 PM, Florent Bautista wrote:
Hi all,
I'm new to Ceph and I would like to know if there is any way of
changing the size of Ceph's internal objects.
I mean, when I put an image on RBD, for example, I can see this:
rbd -p CephTest info base-127-disk-1
rbd image 'base-127-disk-1':
>
> You can't change it afterwards, but when creating an image you can
> supply the --order value and change the default 22 into something you
> like:
>
> 22 = 4096KB
> 23 = 8192KB
> 24 = 16384KB
> 25 = 32768KB
> 26 = 65536KB
>
>> Or is it a fixed value in the Ceph architecture?
>>
>
> No, you can set it when you create the image.
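To make that concrete (image name is an example):

  rbd create CephTest/base-128-disk-1 --size 32768 --order 23
  rbd info CephTest/base-128-disk-1   # should report: order 23 (8192 kB objects)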
Okay, well, let's try and track some of these down. What's the content
of the "ceph.layout" xattr on the directory you're running this test
in? Can you verify that pool 0 is the data pool used by CephFS, and
that all reported slow ops are in that pool? Can you record the IO
patterns on an OSD while
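Something like this should show the layout (assuming getfattr is available; on
newer code the names are ceph.dir.layout / ceph.file.layout):

  getfattr -n ceph.layout /mnt/myceph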
On Mon, Feb 24, 2014 at 11:48 PM, Markus Goldberg
wrote:
> Hi Gregory,
> here we go:
>
> root@bd-a:/mnt/myceph#
> root@bd-a:/mnt/myceph# ls -la
total 4
> drwxr-xr-x 1 root root 25928099891213 Feb 24 14:14 .
> drwxr-xr-x 4 root root 4096 Aug 30 10:34 ..
drwx------ 1 root root 2592
Greetings,
I've been running dumpling for several months and it seems very stable.
I'm about to spin up a new ceph environment. Would I be advised to
install emperor? Or, since dumpling is solid, just stick with it?
Thanks much,
JR
After reading the following on Fscache integration with CephFS, I would
like to know which version of the Linux kernel has all the Fscache patches
available?
http://ceph.com/community/first-impressions-through-fscache-and-ceph/
Do we know when these patches will be available in a future release of Ubuntu?
So the "backfill_tooful" was an old state; it disappeared after I
reweighted. Yesterday, I even set up the Ceph system's tunables to optimal,
added one more osd, let it rebalance, and then after rebalancing, I ran a
"ceph osd reweight-by-utilization 105". After several hours, though, CEPH
stabilize
On Feb 25, 2014, at 14:13, Srinivasa Rao Ragolu wrote:
> It is always better to have the same version on all the nodes of the cluster
> to rule out integration issues.
But while updating, some nodes will run an older version for some period. Is
this OK?
Pavel.
> On Tue, Feb 25, 2014 at
Hi!
> 2. One node (with 8 osds) goes offline. Will ceph automatically replicate all
> objects on the remaining node to maintain the number of replicas = 2?
> No, because it can no longer satisfy your CRUSH rules. Your crush rule states
> 1x copy per node and it will keep it that way. The cluster will remain
> degraded until the node comes back.
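For reference, the host-level constraint comes from the chooseleaf step in the
rule; a typical default rule looks roughly like this:

  rule replicated_ruleset {
      ruleset 0
      type replicated
      min_size 1
      max_size 10
      step take default
      step chooseleaf firstn 0 type host
      step emit
  }

With size = 2 and only one host up, CRUSH cannot pick two different hosts, so
the cluster stays degraded instead of putting both copies on the surviving node.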
With the reweight-by-utilization applied, CRUSH is failing to generate
mappings of enough OSDs, so the system is falling back to keeping
around copies that already exist, even though they aren't located on
the correct CRUSH-mapped OSDs (since there aren't enough OSDs).
Are your OSDs correctly weighted?
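You can check with:

  ceph osd tree    # shows both the CRUSH weight and the reweight column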
Greetings!
Just wanted to let people know that the schedule has been published
for the next Ceph Developer Summit (March 04-05, 2014):
https://wiki.ceph.com/Planning/CDS/CDS_Giant_(Mar_2014)
There may still be a few last minute tweaks, but for the most part
that should be what we're working with
Hi All,
Just wondering if there was a reason for no packages for Ubuntu Saucy in
http://ceph.com/packages/ceph-extras/debian/dists/. Could do with
upgrading to fix a few bugs but would hate to have to drop Ceph from
being handled through the package manager!
Thanks,
-Michael
Hello,
Recently, when looking at PG folder splitting, I found that there was
only one sub folder in each of the top 3 or 4 levels, with 16 sub folders
starting from level 6. What is the design consideration behind this?
For example, if the PG root folder is ‘3.1905_head’, in the first le
On Tue, Feb 25, 2014 at 7:13 PM, Guang wrote:
> Hello,
> Recently, when looking at PG folder splitting, I found that there was
> only one sub folder in each of the top 3 or 4 levels, with 16 sub folders
> starting from level 6. What is the design consideration behind this?
>
> For example,
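A sketch of the on-disk layout in question, as I understand FileStore's hashed
directories (directory names assumed): every object in PG 3.1905 has a hash
ending in 1905, and each DIR_ level is keyed on successive hex digits of the
hash taken from the end, so the path looks like

  current/3.1905_head/DIR_5/DIR_0/DIR_9/DIR_1/...

The first four levels can only ever contain one subdirectory each (5, 0, 9, 1),
and fan-out into up to 16 subdirectories only begins at the first digit that
actually varies between objects.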
Got it. Thanks Greg for the response!
Thanks,
Guang
On Feb 26, 2014, at 11:51 AM, Gregory Farnum wrote:
> On Tue, Feb 25, 2014 at 7:13 PM, Guang wrote:
>> Hello,
>> Most recently when looking at PG's folder splitting, I found that there was
>> only one sub folder in the top 3 / 4 levels and st
Thanks Sebastien.
-----Original Message-----
From: Sebastien Han [mailto:sebastien@enovance.com]
Sent: Tuesday, February 25, 2014 8:23 PM
To: Gnan Kumar, Yalla
Cc: ceph-users
Subject: Re: [ceph-users] storage
Hi,
RBD blocks are stored as objects on a filesystem usually under:
/var/lib/cep
Hi Larry,
As you suggested, I have changed to Ubuntu 10.04. Still, I have not been able
to figure out what this problem is. I skipped only two sections in the ceph
documentation: 1) SSL and 2) DNS, as I thought my gateway did not need any
security.
1) I strongly suspect the issue is with specifying the hostname in ceph.