Ciao Gippa,
From http://ceph.com/releases/v0-61-cuttlefish-released/
* ceph-disk: dm-crypt support for OSD disks
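A minimal sketch of how that is used when preparing an OSD; the device path below is a placeholder and the exact flags can differ between versions:

    # prepare an OSD whose data is encrypted with dm-crypt (device is a placeholder)
    ceph-disk prepare --dmcrypt /dev/sdb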
Hope this helps,
On 01/10/2013 08:57, Giuseppe 'Gippa' Paterno' wrote:
> Hi!
> Maybe an FAQ, but is encryption of data available (or will be available)
> in ceph at a storage level?
Eric,
Yeah, your OSD weights are a little crazy...
For example, looking at one host from your output of "ceph osd tree"...
-3   31.5    host tca23
1    3.63    osd.1    up    1
7    0.26    osd.7    up    1
13   2.72    osd.13
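For reference, an individual CRUSH weight can be changed on the fly if you decide to rebalance; the OSD id and value here are purely illustrative, and data will start migrating as soon as the weight changes:

    # adjust the CRUSH weight of one OSD (id and new weight are illustrative)
    ceph osd crush reweight osd.7 0.50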
I found some weird behavior (or at least it looks weird) with ceph 0.67.3.
I have 5 servers. The monitor runs on server 1, and servers 2 to 5 each run one OSD
(osd.0 - osd.3).
I ran 'ceph pg dump' and can see that the PGs got distributed more or less randomly across all
4 OSDs, which is the expected behavior.
However, if
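To see exactly which OSDs a given PG or object maps to, these commands can help; the PG id and the pool/object names below are only illustrative:

    # show the acting set and primary OSD for one PG (PG id is illustrative)
    ceph pg map 0.1

    # show where a named object in a pool would be placed (names are illustrative)
    ceph osd map rbd some-object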
Dear all,
I am back to managing the cluster before starting to use it, even on a
test host. First of all, a question regarding the docs:
Is this [1] outdated? If not, why are the links to chef-* not working?
Is chef-* still recommended/used?
After adding a new OSD (with ceph-deploy version 1.2.
Ching-Cheng,
Data placement is handled by CRUSH. Please examine the following:
ceph osd getcrushmap -o crushmap && crushtool -d crushmap -o crushmap.txt && cat crushmap.txt
That will show the topology and placement rules Ceph is using.
Pay close attention to the "step chooseleaf" lines inside
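To check what mappings a rule actually produces, crushtool can also simulate placement offline; a rough sketch, where the rule number, replica count, and input range are assumptions and the options may vary by version:

    # replay placements through the decompiled map (rule 0, 2 replicas, 10 inputs assumed)
    crushtool -i crushmap --test --rule 0 --num-rep 2 --min-x 0 --max-x 9 --show-mappings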
Hi
According to http://ceph.com/docs/master/radosgw/s3/, radosgw supports ACLs, but I
cannot figure out how to use them.
What we need is one key/secret with read-write permission and another
with read-only permission to a certain bucket. Is this possible? How?
Regards
Andi
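One way this is often handled (a sketch only, with placeholder names, assuming s3cmd is configured with the bucket owner's key and your s3cmd version supports --acl-grant): create a second radosgw user and grant it read-only access on the bucket through the S3 ACL.

    # create the user whose key will only get read access (uid/display-name are placeholders)
    radosgw-admin user create --uid=readonly-user --display-name="Read-only user"

    # as the bucket owner, grant that user READ on the bucket (bucket name is a placeholder)
    s3cmd setacl s3://mybucket --acl-grant=read:readonly-user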
Mike:
Thanks for the reply.
However, I ran the crushtool command, but the output doesn't give me any obvious
explanation of why osd.4 should be the primary OSD for the PGs.
All the rules have "step chooseleaf firstn 0 type host". According to the Ceph
documentation, a PG should select two buckets from the ho
Hi Travis,
Both you and Yan saw the same thing: the drives in my test
system range from 300GB to 4TB. I used ceph-deploy to create all the
OSDs, which I assume picked the weights of 0.26 for my 300GB drives,
and 3.63 for my 4TB drives. All the OSDs that are reporting nearly full
are the
On Tue, Oct 1, 2013 at 10:11 AM, Chen, Ching-Cheng (KFRM 1) <
chingcheng.c...@credit-suisse.com> wrote:
> Mike:
>
> Thanks for the reply.
>
> However, I did the crushtool command but the output doesn't give me any
> obvious explanation why osd.4 should be the primary OSD for PGs.
>
> All the rule
Aaron:
Bingo!
All my 5 VMs have exactly the same setup, so I didn't bother with the weight settings,
thinking that with all weights at 0.000 they would be treated equally.
After following your suggestion and putting in some numbers (I made them all 0.200),
I got the expected behavior.
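Roughly speaking, the reweighting was along these lines (the loop itself is just a sketch, with the OSD ids from my earlier mail):

    # give all four OSDs the same non-zero CRUSH weight
    for id in 0 1 2 3; do
        ceph osd crush reweight osd.$id 0.200
    done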
Really appreciated,
Ching-Cheng Chen
MDS
Hello-
I've set up a rados gateway but I'm having trouble accessing it from clients.
I can access it using the rados command line just fine from any system in my ceph
deployment, including my monitors and OSDs, the gateway system, and even the
admin system I used to run ceph-deploy. However, when
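One quick sanity check from a client is a plain HTTP request against the gateway (the hostname is a placeholder); an anonymous request should at least come back with an S3-style XML response or error document, which confirms the client can reach radosgw at all:

    # basic reachability test against the gateway (hostname is a placeholder)
    curl -i http://rgw.example.com/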
Hi Piers,
On 2013-09-27 22:59, Piers Dawson-Damer wrote:
> I'm trying to set up my first cluster (I have never manually
> bootstrapped a cluster)
I am at about the same stage here ;)
> Is ceph-deploy osd activate/prepare supposed to write to the master
> ceph.conf file, specific entries for e
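For reference, the usual ceph-deploy OSD sequence is along these lines; the host and device names are placeholders, and the exact partition passed to activate depends on how prepare laid out the disk:

    # prepare and then activate an OSD on a data disk (host/device are placeholders)
    ceph-deploy osd prepare cephnode1:/dev/sdb
    ceph-deploy osd activate cephnode1:/dev/sdb1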