Hi,
Recently we have been testing a CTDB + CephFS + Samba/NFS HA cluster, but it
has not been successful. Does CephFS support being used as a CTDB cluster
filesystem? If it does, could you please point us to a guideline link?
We are using Ceph version 0.67.4. Thanks a lot!
Saravanan,
Please activate the OSDs running on the specific nodes from the monitor node,
like below:
# ceph-deploy osd activate ceph-node1:sdb1
Then restart the ceph service on the same node:
# sudo service ceph restart
A simple check: your OSDs should be mounted on the respective nodes.
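To verify, something along these lines should show the OSD data directories mounted and the OSDs up (paths assume the default /var/lib/ceph/osd layout):
# mount | grep /var/lib/ceph/osd
# ceph osd tree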
Please see th
Hi all,
Today I'm testing CephFS with client-side kernel drivers.
My installation is composed of 2 nodes, each one with a monitor and an OSD.
One of them is also MDS.
root@test2:~# ceph -s
cluster 42081905-1a6b-4b9e-8984-145afe0f22f6
health HEALTH_OK
monmap e2: 2 mons at
{0=192.168
On 02/27/2014 09:42 AM, Michael wrote:
> Thanks Tim, I'll give the raring packages a try.
> Found a tracker for Saucy packages; it looks like the person they were
> assigned to hasn't checked in for a fair while, so they might have just
> been overlooked: http://tracker.ceph.com/issues/6726.
Packages
On Wed, 26 Feb 2014, haiquan...@sina.com wrote:
> Hi ,
>
> Recently we have been testing a CTDB + CephFS + Samba/NFS HA cluster, but
> it has not been successful. Does CephFS support being used as a CTDB
> cluster filesystem? If it does, could you please point us to a guideline link.
>
> We use Ceph 0.67.4 version.
Hi Karan,
First of all, many thanks for your blog post on OpenStack
integration with Ceph.
I was able to integrate OpenStack Cinder with Ceph successfully and
attach volumes to running VMs,
but I am facing an issue with the Glance service while uploading an image, as shown
below:
[controller@co
Hi Florent,
It sounds like the capability for the user you are authenticating as does
not have access to the new OSD data pool. Try doing
ceph auth list
and see if there is an osd cap that mentions the data pool but not the new
pool you created; that would explain your symptoms.
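If that is what you find, the fix is usually to extend the client's caps; a rough sketch, with client and pool names as placeholders (note that ceph auth caps replaces the whole cap set, so restate the existing caps too):
# ceph auth caps client.foo mon 'allow r' mds 'allow' osd 'allow rwx pool=data, allow rwx pool=newpool'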
sage
On Fr
On Fri, Feb 28, 2014 at 6:14 AM, Sage Weil wrote:
> On Wed, 26 Feb 2014, haiquan...@sina.com wrote:
>> Hi ,
>>
>> Recently we have been testing a CTDB + CephFS + Samba/NFS HA cluster, but
>> it has not been successful. Does CephFS support being used as a CTDB cluster
>> filesystem? If it does, could you please offer a g
Seems that you may also need to tell CephFS to use the new pool instead of
the default..
After CephFS is mounted, run:
# cephfs /mnt/ceph set_layout -p 4
Michael J. Kidd
Sr. Storage Consultant
Inktank Professional Services
On Fri, Feb 28, 2014 at 9:12 AM, Sage Weil wrote:
> Hi Florent,
>
> I
By default your filesystem data is stored in the "data" pool, ID 0.
You can change to a different pool (for files going forward, not
existing ones) by setting the root directory's layout via the
ceph.layout.pool virtual xattr, but it doesn't look like you've done
that yet.
Until then, you've got tw
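For what it's worth, a minimal sketch of that xattr approach, assuming the new pool has ID 4 as elsewhere in this thread and has already been added as a CephFS data pool (the xattr name varies by version; newer clients expose it as ceph.dir.layout.pool, and add_data_pool takes the pool ID or name depending on release):
# ceph mds add_data_pool 4
# setfattr -n ceph.layout.pool -v 4 /mnt/ceph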
Okay... I forgot that!
Thank you both, Gregory & Michael!
I had to set all the layout options to make it work:
cephfs /mnt/ceph set_layout -p 4 -s 4194304 -u 4194304 -c 1
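If you want to confirm what the layout ended up as, the same (now deprecated) cephfs tool can read it back:
# cephfs /mnt/ceph show_layout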
On 02/28/2014 04:52 PM, Michael J. Kidd wrote:
> Seems that you may also need to tell CephFS to use the new pool
> instead of
Hi Sage, Thank you for your answer.
I do not see anything about that...
root@test2:~# ceph auth list
installed auth entries:
mds.0
key: AQCfOw9TgF4QNBAAkiVjKh5sGPULV8ZsO4/q1A==
caps: [mds] allow
caps: [mon] allow rwx
caps: [osd] allow *
osd.0
key: AQCnbgtTKAdABBAAIjnQLlzMnXg2
Hi Srinivasa,
A few things for troubleshooting:
1) Check glance-api.conf; it should have (a fuller sample is sketched after this list):
rbd_store_ceph_conf = /etc/ceph/ceph.conf
2) If not already done:
cp /etc/ceph/ceph.client.images.keyring /etc/glance
3) I am not sure if there is any difference between glance image-create and
glan
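For comparison, the rbd-related part of glance-api.conf on a working setup looks roughly like this (user and pool names are examples; adjust to your environment):
default_store = rbd
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_user = images
rbd_store_pool = images
rbd_store_chunk_size = 8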
Does anyone have any general performance information or experience with
retrieving large numbers of objects from the Ceph object store? I know there
are all sorts of variables involved, but I am just looking for some general
experience that people might have.
For example.
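To make it concrete, one rough way to get a read baseline with the rados CLI (pool name is a placeholder; the write pass with --no-cleanup leaves objects behind for the sequential-read pass):
# rados bench -p testpool 60 write --no-cleanup
# rados bench -p testpool 60 seq
# rados -p testpool cleanup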
https://ask.openstack.org/en/question/9570/glance-image-create-returns-httpinternalservererror-http-500/
Check this, and also try to upload a new image; I know it sounds crazy, but it's worth a try.
—karan
On 28 Feb 2014, at 16:24, Srinivasa Rao Ragolu wrote:
> Failed to upload image
On Thu, Feb 27, 2014 at 9:29 PM, Michael Sevilla wrote:
> I'm looking for the debug messages in Client.cc, which uses ldout
> (library debugging). I increased the client debug level for all
> daemons (i.e. under [global] in ceph.conf) and verified that it got
> set:
>
> $ ceph --admin-daemon /var/
On Wed, Feb 26, 2014 at 11:39 AM, David Champion wrote:
> * On 26 Feb 2014, Gregory Farnum wrote:
>> >> > q1. CephFS has a tunable for max file size, currently set to 1TB. If
>> >> > I want to change this, what needs to be done or redone? Do I have to
>> >> > rebuild, or can I just change the pa
According to the documentation at
https://ceph.com/docs/master/rbd/rbd-snapshot/ -- snapshots require that
all I/O to a block device be stopped prior to making the snapshot. Is there
any plan to allow for online snapshotting so that we could do incremental
snapshots of running VMs on a regular basi
RBD itself will behave fine whenever you take the snapshot. The
thing to worry about is that it's a snapshot at the block device
layer, not the filesystem layer, so if you don't quiesce IO and sync
to disk the filesystem might not be entirely happy with you for the
same reasons that it won't b
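One common pattern, as a sketch with placeholder names, is to freeze the filesystem around the snapshot (from inside the guest, or via the qemu guest agent if you have it):
# fsfreeze -f /mnt/data                   <- inside the VM, quiesce the filesystem
# rbd snap create rbd/vm-disk@nightly     <- on a Ceph client
# fsfreeze -u /mnt/data                   <- inside the VM, thaw the filesystem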
* On 28 Feb 2014, Gregory Farnum wrote:
> > No dice -- "set" not supported. I can set this directly in ceph.conf,
> > though, right? This is the advice I've seen before. Is a restart of
> > ceph-mds sufficient to make that work, or would something need to be
> > recreated?
>
> Unfortunately, i
Turns out I deployed everything correctly but didn't pass the debug
flag to ceph-fuse. I expected everything to go to /var/log like rgw.
Thanks, Greg.
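In other words, the debug options need to go on the ceph-fuse command line itself; roughly (mon address, mount point, and log path are placeholders):
# ceph-fuse -m mon-host:6789 /mnt/cephfs --debug-client=20 --log-file=/var/log/ceph/ceph-fuse.log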
On Fri, Feb 28, 2014 at 1:42 PM, Gregory Farnum wrote:
> On Thu, Feb 27, 2014 at 9:29 PM, Michael Sevilla
> wrote:
>> I'm looking for the debug
Hi,
On 28.02.2014 03:45, Haomai Wang wrote:
[...]
> I use fio with the rbd engine support from
> TelekomCloud (https://github.com/TelekomCloud/fio/commits/rbd-engine)
> to test rbd.
I would recommend no longer using this branch; it's outdated. The rbd
engine got contributed back to upstream fio and is
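With the upstream engine, a minimal rbd job file looks roughly like this (pool, image, and client names are placeholders, and the image must already exist):
[rbd-test]
ioengine=rbd
clientname=admin
pool=rbd
rbdname=fio-test
rw=randwrite
bs=4k
iodepth=32
runtime=60
time_based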
How do we provide multiple pool names in nova.conf? Do we need to use the same
configuration as in cinder.conf for a multi-backend setup?
Please provide the config details for nova.conf with multiple pools.
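For reference, the cinder.conf multi-backend layout I am referring to is roughly the following (backend and pool names are examples):
enabled_backends=rbd-fast,rbd-slow

[rbd-fast]
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_pool=volumes-fast
rbd_user=cinder
rbd_ceph_conf=/etc/ceph/ceph.conf
volume_backend_name=rbd-fast

[rbd-slow]
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_pool=volumes-slow
rbd_user=cinder
rbd_ceph_conf=/etc/ceph/ceph.conf
volume_backend_name=rbd-slow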
On Sat, Mar 1, 2014 at 8:04 AM, Danny Al-Gaaf wrote:
> Hi,
>
> On 28.02.2014 03:45, Haomai Wang wrote:
> [...]
>> I use fio with the rbd engine support from
>> TelekomCloud (https://github.com/TelekomCloud/fio/commits/rbd-engine)
>> to test rbd.
>
> I would recommend no longer using this branch; it's ou