On Wednesday, 15 May 2013 at 00:15 +0200, Olivier Bonvalet wrote:
> Hi,
>
> I have some PGs in state down and/or incomplete on my cluster, because I
> lost 2 OSDs and a pool had only 2 replicas. So of course that data is
> lost.
>
> My problem now is that I can't retrieve a "HEALTH_OK" state
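For anyone hitting the same situation, here is a minimal sketch of the usual way to inspect the stuck PGs and write the lost data off, assuming the contents of that 2-replica pool really are unrecoverable (the OSD id and PG id below are placeholders):

# show exactly which PGs keep the cluster out of HEALTH_OK, and why
ceph health detail
ceph pg dump_stuck inactive
ceph pg dump_stuck unclean

# tell the cluster the dead OSDs are gone for good (id is a placeholder)
ceph osd lost 12 --yes-i-really-mean-it

# for PGs still reporting unfound objects, give up on the missing copies
ceph pg 3.1f mark_unfound_lost revert

# last resort for PGs whose copies are all gone: recreate them empty
ceph pg force_create_pg 3.1f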
Hi Again,
Reading your message again:
> original file:
> ~# getfacl /samba/data/test.txt
> getfacl: Removing leading '/' from absolute path names
> # file: samba/data/test.txt
> # owner: 300
> # group: users
> user::rwx
> user:root:rwx
> group::---
> group:users:---
> group:300:rwx
> mask:
Hi,
`cp -a` implies preserving attributes. It is obvious that some attributes
are set which are not supported by any Linux filesystem.
I have had the same experience running `rsync -a` from a FAT32 or NTFS
filesystem to ext4 or btrfs; it cannot preserve the attributes.
Try to copy the same files to a
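One way to check whether the destination filesystem is the culprit is to test ACL support on it directly and compare source and destination after the copy; a rough sketch, with all paths as examples only:

# does the destination filesystem accept a POSIX ACL at all?
touch /samba/ceph/acltest
setfacl -m u:root:rwx /samba/ceph/acltest && getfacl /samba/ceph/acltest

# compare the ACLs on the original and the copied file
getfacl /samba/data/test.txt
getfacl /samba/ceph/test.txt

# if the extended attributes are not actually needed, copy without them
cp --preserve=mode,ownership,timestamps /samba/data/test.txt /samba/ceph/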
Hi Matt -
Sounds like you installed ceph-deploy by downloading from
github.com/ceph/ceph-deploy, then running the bootstrap script.
We have debian packages for ceph-deploy and python-pushy that are included in
the debian-cuttlefish repo, as well as
http://ceph.com/packages/ceph-deploy/debian.
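For reference, a sketch of pulling ceph-deploy from those packages on a Debian/Ubuntu admin node; the key URL and repo line here follow the preflight checklist of the time, so adjust them to whatever the current docs say:

# import the release key and add the cuttlefish repo
wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -
echo "deb http://ceph.com/debian-cuttlefish/ $(lsb_release -sc) main" \
    | sudo tee /etc/apt/sources.list.d/ceph.list

# ceph-deploy from the repo should pull in python-pushy as a dependency
sudo apt-get update && sudo apt-get install ceph-deploy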
Hi,
I used ceph-deploy successfully a few days ago, but I recently reinstalled my
admin machine following the same instructions
http://ceph.com/docs/master/rados/deployment/preflight-checklist/
and am now getting the error below. Then I figured I'd just use the debs, but
they are missing the python-pushy dependency
On Thu, 16 May 2013, ian_m_por...@dell.com wrote:
> Hi Sage,
>
> Looks like the ceph-deploy config push HOST command does work; however, it
> doesn't copy over comments (which is what caused me to question whether it
> was working, as I was using comments to test if the config was changing).
>
> For ex
Hi Sage,
Looks like the ceph-deploy config push HOST command does work; however, it
doesn't copy over comments (which is what caused me to question whether it was
working, as I was using comments to test if the config was changing).
For example, the config file on the admin node
[global]
fsid = b9567f
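A comment-free way to verify the push is to change a real setting, push, and then diff what landed on the node against the local copy; a quick sketch with HOST as a placeholder:

# push the admin node's ceph.conf to the remote host
ceph-deploy config push HOST

# compare the file that arrived with the local one
ssh HOST cat /etc/ceph/ceph.conf | diff -u ceph.conf -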
On 05/16/2013 05:57 PM, Dewan Shamsul Alam wrote:
I don't think 4.2 is coming out anytime soon. Right now all 4.1 builds
are stable except for the systemvm. Have a look at their Jenkins server
http://jenkins.cloudstack.org/. 4.2 is not even listed there. :(
4.2 is scheduled for July. It curren
On Thu, 16 May 2013, ian_m_por...@dell.com wrote:
> Yeah that's what I did, created another server and I got HEALTH_OK :)
>
> I guess I was a bit confused with the osd_crush_chooseleaf_type setting
> in the ceph.conf file. I had this set to 0, which, according to the
> documentation, means it should p
I don't think 4.2 is coming out anytime soon. Right now all 4.1 builds are
stable except for the systemvm. Have a look at their Jenkins server
http://jenkins.cloudstack.org/. 4.2 is not even listed there. :(
Best Regards,
Dewan Shamsul Alam
On Thu, May 16, 2013 at 9:04 PM, Wido den Hollander w
Yeah that's what I did, created another server and I got HEALTH_OK :)
I guess I was a bit confused with the osd_crush_chooseleaf_type setting in the
ceph.conf file. I had this set to 0, which, according to the documentation,
means it should peer with my 2 OSDs on the single node (clearly it didn't, or
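For reference, osd_crush_chooseleaf_type is only consulted when the initial CRUSH map is generated at cluster creation time, so for a fresh single-node test cluster the setting has to be in ceph.conf before the cluster is built; a minimal sketch:

[global]
# 0 = separate replicas across OSDs, 1 = across hosts (the default)
osd crush chooseleaf type = 0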
On 05/16/2013 04:34 PM, Dewan Shamsul Alam wrote:
Hi,
I will be deploying CloudStack and will use Ceph. Too bad CloudStack
requires NFS as the primary storage for the System VM, so I have to use a
DRBD+NFS setup for that. The setup is as follows:
Wait for CloudStack 4.2 :) As we speak I'm working on the
Hi,
I will be deploying CloudStack and will use Ceph. Too bad CloudStack
requires NFS as the primary storage for the System VM, so I have to use a
DRBD+NFS setup for that. The setup is as follows:
3 Node Ceph Cluster [Bobtail] - Planning to upgrade to Cuttlefish after
trying this.
2 Node for DRBD+NFS
1 VM
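For the Ceph side of that plan, a rough sketch of preparing an RBD pool plus a restricted cephx user for CloudStack primary storage; the pool name, PG count and capabilities are examples only:

# pool to hold the primary-storage RBD images (PG count is an example)
ceph osd pool create cloudstack 128

# restricted key for CloudStack to authenticate with (capabilities are examples)
ceph auth get-or-create client.cloudstack mon 'allow r' osd 'allow rwx pool=cloudstack'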
Greetings ceph-ers,
As you may have noticed lately, there has been a lot of talk about
Ceph and OpenStack. While we love all of the excitement that this has
generated, we want to make sure that other cloud setups aren't getting
neglected or ignored. CloudStack, for instance, also has a great Cep
Ian,
If you are only running one server (with 2 OSDs), you should probably
take a look at your CRUSH map. I haven't used ceph-deploy myself yet,
but with mkcephfs the default CRUSH map is constructed such that the 2
replicas must be on different hosts, not just on different OSDs. This
is so that
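On an existing cluster the placement rule can be inspected, and switched from host-level to OSD-level separation if that is really what is wanted, by round-tripping the CRUSH map; the file names below are placeholders:

# dump and decompile the current CRUSH map
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt

# in crushmap.txt, change the rule step
#   step chooseleaf firstn 0 type host
# to
#   step chooseleaf firstn 0 type osd

# recompile and inject it back into the cluster
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new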
On 05/16/2013 03:34 AM, 大椿 wrote:
I'm a newbie to Ceph and to storage in general, just attracted by Ceph's modern
features.
We want to build a backup/restore system with two layers: HDD and tape.
With Ceph as the virtualization layer, it would export NFS/CIFS/iSCSI interfaces
to business storage systems.
Some r
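As a very rough illustration of putting a legacy interface in front of Ceph, one common pattern is to map an RBD image on a gateway host and re-export it over NFS; the pool, image name, size and export path below are examples, not a recommendation:

# create a RADOS-backed block device (size is in MB here, 100 GB as an example)
ceph osd pool create backup 128
rbd create backup/vol1 --size 102400
rbd map backup/vol1

# put a filesystem on it and export it over NFS from the gateway host
mkfs.xfs /dev/rbd/backup/vol1
mkdir -p /export/vol1
mount /dev/rbd/backup/vol1 /export/vol1
echo "/export/vol1 *(rw,no_root_squash)" >> /etc/exports
exportfs -ra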
Hi there,
Today I set up my Ceph cluster. It's up and running fine.
Mounting CephFS on my "client" machine works fine as well.
~# mount
172.17.50.71:6789:/ on /samba/ceph type ceph
(rw,relatime,name=admin,secret=)
Touching a file in that directory and setting xattrs works perfectly.
~# touch /s
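For anyone following along, a sketch of the same sequence with the xattr commands spelled out; the mount options, attribute name and value are examples, and the secret is deliberately left out:

# kernel client mount (a secretfile avoids putting the key on the command line)
mount -t ceph 172.17.50.71:6789:/ /samba/ceph -o name=admin,secretfile=/etc/ceph/admin.secret

# create a file and set/read a user xattr on it
touch /samba/ceph/test.txt
setfattr -n user.comment -v "hello" /samba/ceph/test.txt
getfattr -d /samba/ceph/test.txt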