The other thing to note is that it appears you're trying to decrease the pg_num/pgp_num
parameters, which is not supported. In order to decrease those settings, you'll need to
delete and recreate the pools. Any new pools you create will use the defaults defined
in the ceph.conf file.
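For example, something along these lines (untested sketch; 'mypool' and the PG counts
are placeholders for your own values):

    # remove the oversized pool and recreate it with fewer PGs
    ceph osd pool delete mypool mypool --yes-i-really-really-mean-it
    ceph osd pool create mypool 1024 1024

    # defaults picked up by newly created pools, in the [global] section of ceph.conf
    osd pool default pg num = 1024
    osd pool default pgp num = 1024

Keep in mind that deleting a pool destroys its data, so copy anything you need off of
it first.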
-Original Message-
Hello,
This is my first real issue since I started running Ceph several months ago. Here's the
situation:
I've been running an Emperor cluster and all was good. I decided to upgrade since I'm
running Ubuntu 13.10 and Ceph 0.72.2, starting by first upgrading Ceph to 0.80.4, which
was the la
tadata and
data pools to eliminate the HEALTH_WARN issue.
-Original Message-
From: Gregory Farnum [mailto:g...@inktank.com]
Sent: Thursday, September 11, 2014 2:09 PM
To: McNamara, Bradley
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Upgraded now MDS won't start
On Wed, Se
I'd like to see a Solaris client.
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Dennis
Chen
Sent: Wednesday, March 04, 2015 2:00 AM
To: ceph-devel; ceph-users; Sage Weil; Loic Dachary
Subject: [ceph-users] The project of ceph client file syste
Correct me if I'm wrong (I'm new to this), but I think the distinction between the two
methods is that 'qemu-img create -f rbd' creates an RBD for a VM either to boot from or
to mount within the VM, whereas the OP wants a single RBD, formatted with a cluster
file system, to use as a pl
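For what it's worth, the first method looks roughly like this (pool and image names are
just examples; newer QEMU spells the format as raw, with rbd: as the protocol):

    # create a 20 GB RBD image directly through QEMU's rbd support
    qemu-img create -f raw rbd:rbd/vm-disk 20G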
CephFS, yes, but it's not considered production-ready.
You can also use an RBD volume, place OCFS2 on it, and share it that way.
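A rough sketch of that approach (image name and size are just examples, and the O2CB
cluster has to be configured on every node first):

    rbd create shared-vol --size 102400          # 100 GB image, example name
    rbd map shared-vol                           # repeat on every node needing access
    mkfs.ocfs2 -N 4 /dev/rbd/rbd/shared-vol      # format once, from a single node
    mount -t ocfs2 /dev/rbd/rbd/shared-vol /mnt/shared

The device may also show up as /dev/rbd0 depending on your udev rules.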
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
yang.bi...@zte.com.cn
Sent: Friday, October 31, 2014 12:22 AM
To: ceph-users@lists.ceph
I have a somewhat interesting scenario. I have an RBD of 17TB formatted using
XFS. I would like it accessible from two different hosts, one mapped/mounted
read-only, and one mapped/mounted as read-write. Both are shared using Samba
4.x. One Samba server gives read-only access to the world fo
I finally have my first test cluster up and running. No data on it yet. The config is:
three mons and three OSD servers. Each OSD server has eight 4TB SAS drives and two SSD
journal drives.
The cluster is healthy, so I started playing with PG and PGP values. By the
provided calculation
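For reference, the commonly quoted guideline of roughly 100 PGs per OSD works out to
something like this for that hardware (my own arithmetic, and 'rbd' is just the default
pool name):

    # 3 OSD servers x 8 drives = 24 OSDs
    # (24 OSDs x 100) / 3 replicas = 800 -> round up to the next power of two = 1024
    ceph osd pool set rbd pg_num 1024
    ceph osd pool set rbd pgp_num 1024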
-Original Message-
From: ceph-users-boun...@lists.ceph.com
[mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Peter Matulis
Sent: Wednesday, January 29, 2014 8:11 AM
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] PG's and Pools
On 01/28/2014 09:46 PM, McNamara, Bradley wrote:
Just for clarity, since I didn't see it explained: how are you accessing Ceph from
ESXi? Is it via iSCSI or NFS? Thanks.
Brad McNamara
-Original Message-
From: ceph-users-boun...@lists.ceph.com
[mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Maciej Bonin
Sent: Tuesday, Feb
I have a test cluster that is up and running. It consists of three mons and three OSD
servers, with each OSD server having eight OSDs and two SSDs for journals. I'd like to
move from the flat CRUSH map to a CRUSH map with typical depth using most of the
predefined types. I have the current c
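The decompile/edit/recompile cycle I'm assuming looks something like this (file names
are arbitrary):

    ceph osd getcrushmap -o crushmap.bin         # grab the compiled map
    crushtool -d crushmap.bin -o crushmap.txt    # decompile to editable text
    # edit crushmap.txt: add host/rack/root buckets and adjust the rules
    crushtool -c crushmap.txt -o crushmap.new    # recompile
    ceph osd setcrushmap -i crushmap.new         # inject the new map

Data will start moving as soon as the new map is injected.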
I'm confused...
The bug tracker says this was resolved ten days ago. Also, I actually used
ceph-deploy on 2/12/2014 to add two monitors to my cluster, and it worked, and
the documentation says it can be done. However, I believe that I added the new
mons to the ceph.conf in the 'mon_initial_m
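For what it's worth, the sequence I'd expect with ceph-deploy is roughly this (host
names and addresses are placeholders, and mon_initial_members is the ceph.conf key I
assume is meant here):

    # in the [global] section of ceph.conf on the admin node:
    #   mon_initial_members = mon1, mon2, mon3
    #   mon_host = 192.168.0.11,192.168.0.12,192.168.0.13
    ceph-deploy --overwrite-conf config push mon2 mon3
    ceph-deploy mon add mon2                     # or 'mon create' on older ceph-deploy
    ceph-deploy mon add mon3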
Round up your pg_num and pgp_num to the next power of 2, 2048.
Ceph will start moving data as soon as you implement the new 'size 3', so I would
increase the pg_num and pgp_num first, then increase the size. It will start creating
the new PGs immediately. You can see all this going on using
times may be exaggerated, but the cluster
will be completely functional.
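In other words, the sequence boils down to something like this, assuming the default
'rbd' pool and a target of 2048 (ceph -w is just one way to watch the data movement):

    ceph osd pool set rbd pg_num 2048
    ceph osd pool set rbd pgp_num 2048           # this is what actually rebalances data
    ceph osd pool set rbd size 3                 # last, once the PG changes are in
    ceph -w                                      # watch the backfill/recovery progress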
Brad
From: Karol Kozubal [mailto:karol.kozu...@elits.com]
Sent: Wednesday, March 12, 2014 1:52 PM
To: McNamara, Bradley; ceph-users@lists.ceph.com
Subject: Re: PG Scaling
Thank you for your response.
The number of replic
There was a very recent thread discussing PG calculations, and it made me doubt my
cluster setup. So, Inktank, please provide some clarification.
I followed the documentation and interpreted it to mean that the PG and PGP
calculation is done on a per-pool basis. The rece
What you are seeing is expected behavior. Pool numbers do not get reused; they
increment up. Pool names can be reused once they are deleted. One note, though: if
you delete and recreate the data pool, and want to use CephFS, you'll need to run
'ceph mds newfs --yes-i-really-mean-it' before
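The invocation I have in mind takes the numeric pool ids, something like this
(placeholders in braces; double-check the argument order against your release):

    ceph osd lspools                             # note the numeric ids of the new pools
    ceph mds newfs {metadata-pool-id} {data-pool-id} --yes-i-really-mean-it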
Take a look at Proxmox VE. It has full support for Ceph, is commercially supported, and
uses KVM/QEMU.
-Original Message-
From: ceph-users-boun...@lists.ceph.com
[mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Brian Candler
Sent: Friday, April 04, 2014 1:44 AM
To: Brian Beverage; ceph-users@list
Do you have all of the cluster IPs defined in the hosts file on each OSD server? As I
understand it, the mons do not use a cluster network, only the OSD servers do.
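For reference, the relevant pieces look something like this (addresses are made up):

    # ceph.conf, [global] section
    public network = 192.168.1.0/24
    cluster network = 10.10.10.0/24

    # /etc/hosts on each OSD server
    10.10.10.11   osd1
    10.10.10.12   osd2
    10.10.10.13   osd3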
-Original Message-
From: ceph-users-boun...@lists.ceph.com
[mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Gandalf
I believe any kernel greater than 3.9 supports format 2 RBDs. I'm sure someone will
correct me if this is a misstatement.
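For example (image name and size are placeholders):

    rbd create myimage --size 10240 --image-format 2    # 10 GB, format 2 image
    rbd map myimage                                      # per the above, kernel 3.10+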
Brad
-Original Message-
From: ceph-users-boun...@lists.ceph.com
[mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Dyweni - Ceph-Users
Sent: Thursday, April 2
The underlying file system on the RBD needs to be a clustered file system, like OCFS2,
GFS2, etc., and a cluster needs to be created between the two (or more) iSCSI target
servers to manage the clustered file system.
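As a sketch, the O2CB side of that (for OCFS2) is just a shared /etc/ocfs2/cluster.conf
on every target node, along these lines (node names and addresses made up):

    cluster:
        node_count = 2
        name = ocfs2

    node:
        ip_port = 7777
        ip_address = 10.0.0.11
        number = 0
        name = target1
        cluster = ocfs2

    node:
        ip_port = 7777
        ip_address = 10.0.0.12
        number = 1
        name = target2
        cluster = ocfs2

Then bring the cluster online with 'service o2cb online' (or your distro's equivalent)
before formatting and mounting.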
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Andrei
The formula was designed to be used on a per-pool basis. Having said that, when looking
at the number of PGs from a system-wide perspective, one does not want too many total
PGs. So it's a balancing act, and it has been suggested that it's better to have
slightly more PGs than you
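A quick way to sanity-check the system-wide total (the 100-200 PGs per OSD figure is
the usual rule of thumb, not gospel):

    # (total PGs across all pools x replica count) / number of OSDs ~= 100-200
    ceph osd dump | grep pg_num                  # shows pg_num/pgp_num for every pool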
I'm new, too, and I guess I just need a little clarification on Greg's
statement. The RBD filesystem is mounted to multiple VM servers, say, in a
Proxmox cluster, and as long as any one VM image file on that filesystem is
only being accessed from one node of the cluster, everything will work, a
Instead of using ext4 for the file system, you need to use a clustered file
system on the RBD device.
From: ceph-users-boun...@lists.ceph.com
[mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Jon
Sent: Wednesday, May 29, 2013 7:55 AM
To: Igor Laskovy
Cc: ceph-users
Subject: Re: [ceph-users
...@gmail.com]
Sent: Wednesday, May 29, 2013 11:47 AM
To: McNamara, Bradley
Cc: ceph-users
Subject: Might Be Spam -RE: [ceph-users] Mounting a shared block device on
multiple hosts
Hello Bradley,
Please excuse my ignorance, I am new to CEPH and what I thought was a good
understanding of file