Hello,
I'm trying to set up my first Ceph cluster on Hammer.
[root@linsrv002 ~]# ceph -v
ceph version 0.94.3 (95cefea9fd9ab740263bf8bb4796fd864d9afe2b)
[root@linsrv002 ~]# ceph -s
cluster 7a8cc185-d7f1-4dd5-9fe6-42cfd5d3a5b7
health HEALTH_OK
monmap e1: 3 mons at {linsrv001=10.10.1
Hi Cephers,
Recently, when I did some tests of RGW functions, I found that the swift key of a
subuser is kept after removing the subuser. As a result, this subuser-swift_key
pair can still pass the authentication system and get an auth token (without any
permission, though). Moreover, if we create a s
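A rough sketch of the steps to reproduce this (the uid and subuser names here are made up):

# create a subuser with a swift key, then remove the subuser
radosgw-admin subuser create --uid=testuser --subuser=testuser:swift --key-type=swift --gen-secret
radosgw-admin subuser rm --uid=testuser --subuser=testuser:swift
# the swift key can still show up in "radosgw-admin user info --uid=testuser";
# removing it explicitly (or passing --purge-keys to subuser rm) seems to clean it up
radosgw-admin key rm --uid=testuser --subuser=testuser:swift --key-type=swift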
Test
-
This e-mail and its attachments contain confidential information from Hangzhou H3C Technologies Co., Ltd., intended only for the individuals or groups listed in the addresses above. Any use of the information in this e-mail by anyone else, in any form (including but not limited to disclosing, copying, or distributing it in whole or in part), is prohibited. If you have received this e-mail in error, please notify the sender immediately by phone or e-mail and delete it!
- Original Message -
> From: "Fangzhe Chang (Fangzhe)"
> To: ceph-users@lists.ceph.com
> Sent: Saturday, 5 September, 2015 6:26:16 AM
> Subject: [ceph-users] Cannot add/create new monitor on ceph v0.94.3
>
>
>
> Hi,
>
> I’m trying to add a second monitor using ‘ceph-deploy mon new hos
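For what it's worth, adding a monitor with ceph-deploy on Hammer usually looks roughly like this (the hostname is a placeholder):

ceph-deploy mon add mon-host2
# then check that the new monitor joined the quorum
ceph quorum_status --format json-pretty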
On Sat, 5 Sep 2015 07:13:29 -0300 German Anders wrote:
> Hi Christian,
>
> OK, so you would say that it's better to rearrange the nodes so I don't
> mix the HDD and SSD disks, right? And create high-perf nodes with SSDs and
> others with HDDs; that's fine since it's a new deployment.
>
It is what I would do,
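If it helps, once the nodes are split the SSD hosts can get their own CRUSH root and rule; a rough sketch with invented bucket, host, and pool names:

# separate root for the SSD hosts
ceph osd crush add-bucket ssd-root root
ceph osd crush move ssd-node1 root=ssd-root
# rule that only places data under the SSD root, with host as the failure domain
ceph osd crush rule create-simple ssd-rule ssd-root host
# pool that uses that rule
ceph osd pool create fastpool 128 128 replicated ssd-rule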
> On Sep 3, 2015, at 21:19, Janusz Borkowski
> wrote:
>
> Hi!
>
> Actually, it looks like O_APPEND does not work even if the file is kept open
> read-only (reader + writer). Test:
>
> in one session
>> less /mnt/ceph/test
> in another session
>> echo "start or end" >> /mnt/ceph/test
I can’t
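A slightly different variant that is easy to check from two clients, reusing the same file and mount path as the test above: run both appenders at the same time and count the lines afterwards.

# client A
for i in $(seq 100); do echo "A $i" >> /mnt/ceph/test; done
# client B, at the same time
for i in $(seq 100); do echo "B $i" >> /mnt/ceph/test; done
# if O_APPEND is honoured, no append should overwrite another
wc -l /mnt/ceph/test    # expect 200 lines plus whatever was there before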
>> Thank you, will these packages be provided to Debian upstream as well?
Debian manages its own repository and only provides Firefly.
You can add the ceph.com repository if you want newer releases (Giant, Hammer, ...).
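For Hammer on Debian that is roughly the following (repository and key URLs assumed from the ceph.com install docs):

wget -q -O- https://download.ceph.com/keys/release.asc | sudo apt-key add -
echo deb http://download.ceph.com/debian-hammer/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
sudo apt-get update && sudo apt-get install ceph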
- Original Message -
From: "Jelle de Jong"
To: "ceph-users"
Sent: Saturday, 5 Se
Just a quick update: after upping the thresholds, not much happened. This is
probably because the merge threshold is several times lower than the trigger for
the split. So I have now bumped the merge threshold up to 1000 temporarily to
hopefully force some directories to merge.
I believe this has star
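For reference, these are the relevant ceph.conf options; the merge value mirrors the temporary bump above, and the split multiple is shown at its usual default:

[osd]
filestore merge threshold = 1000
filestore split multiple = 2
# subdirectories split at roughly merge_threshold * split_multiple * 16 files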
Hi,
I've just set up Ceph Hammer (latest version) on a single node (1 MON, 1
MDS, 4 OSDs) for testing purposes. I used ceph-deploy. I only
configured CephFS as I don't use RBD. My pool config is as follows:
$ sudo ceph df
GLOBAL:
SIZE AVAIL RAW USED %RAW USED
7428G 7258G
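In case it helps anyone reproduce the setup, the CephFS part on Hammer can be created with something along these lines (pool names and PG counts are arbitrary):

ceph osd pool create cephfs_data 128
ceph osd pool create cephfs_metadata 128
ceph fs new cephfs cephfs_metadata cephfs_data
ceph df    # both pools should now show up here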