On 12/12/13 09:45, Wido den Hollander wrote:
>> I keep on top of the stable releases during the development cycle;
>> we also have a minor release exception for Ceph which means I can
>> push point releases as stable release updates.
>>
>
> Great! B
Nicolas,
Does atop show anything out of the ordinary when you run the benchmark (both on
the Ceph nodes and the node you run the benchmark from)?
It should give a good indication of what could be limiting your performance.
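For concreteness, a minimal sketch of the kind of atop run meant here (interval and output path are my choices, not from this thread):

```shell
# Run atop on each Ceph node and on the benchmark client while the
# benchmark is in flight. Sample every 2 seconds; watch for disks near
# 100% busy, a saturated NIC, or CPUs stuck in iowait.
atop 2

# Non-interactive alternative: record 30 samples at 2s intervals to a
# raw log that can be replayed later with `atop -r /tmp/atop.bench`.
atop -w /tmp/atop.bench 2 30
```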
I would highly recommend against using 9 disk RAID0 for the disks:
* I expec
How do I avoid "slow requests" on rbd v1 snapshot delete? Some time ago this
looked solved, but on "emperor" it is seen again.
Can migrating to rbd v2 solve it?
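One knob sometimes suggested for this symptom (an assumption on my part, not something confirmed in this thread) is the OSD snapshot-trim throttle, which makes snapshot deletion generate less load at the cost of trimming more slowly:

```shell
# Throttle snapshot trimming at runtime on all OSDs (value in seconds
# of sleep between trim operations; 0.05 is only an illustrative value).
ceph tell osd.\* injectargs '--osd_snap_trim_sleep 0.05'
```

To make it persistent, the equivalent `osd snap trim sleep = 0.05` line would go in the `[osd]` section of ceph.conf.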
--
WBR, Dzianis Kahanovich AKA Denis Kaganovich, http://mahatma.bspu.unibel.by/
How did you cancel the uploads? Note that gc entries are not going to
show immediately in the gc list, only after some period. Also, not
sure if rados df counts the entries in omap, where all the gc data
resides.
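The checks Yehuda describes can be done with radosgw-admin; note the flag to include entries whose deferred time has not yet expired:

```shell
# List radosgw garbage-collection entries, including ones not yet
# eligible for processing (these are the ones that lag behind deletes).
radosgw-admin gc list --include-all

# Optionally kick off a gc pass manually instead of waiting for the
# periodic cycle.
radosgw-admin gc process
```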
Yehuda
On Thu, Dec 12, 2013 at 4:32 PM, Joel van Velden wrote:
> In a similar probl
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
Hello Jiangang,
Thank you for the links, they are very helpful. I am wondering whether your
Ceph tuning configuration is safe for a production environment.
Thanks
--
Howie C.
On Thursday, December 12, 2013 at 11:07 PM, jiangang duan wrote:
> Hope this is helpful.
> http://software.intel.com/e
Hello German
Can you check the following and let us know.
1. After you execute `service ceph start`, are the services getting started?
What is the output of `service ceph status`?
2. What does `ceph status` say?
3. Check what is mounted on ceph-node02.
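The checklist above, as commands (a sketch; run these on the affected node, e.g. ceph-node02):

```shell
# 1. Are the Ceph daemons running on this node?
service ceph status

# 2. Overall cluster health as seen by the monitors.
ceph status

# 3. Which Ceph-related filesystems are actually mounted here?
mount | grep -i ceph
```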
Many Thanks
Karan Singh
Can you come to the #ceph IRC channel? We can troubleshoot in real time.
Many Thanks
Karan Singh
- Original Message -
From: "German Anders"
To: "Karan Singh"
Cc: ceph-users@lists.ceph.com
Sent: Friday, 13 December, 2013 8:05:00 PM
Subject: Re: [ceph-users] Ceph not responding after
On Wed, Dec 11, 2013 at 6:13 PM, Sherry Shahbazi wrote:
>
> Hi all,
>
> I was wondering if you could answer my question regarding cache pools:
> If I have got two servers with 1 SSD in front of each of them, what CRUSH
> map would be like?
>
> For example:
> If I have defined the following CRUSH map:
Hi CEPH,
An introduction to Savanna for those who haven't heard of it:
The Savanna project aims to provide users with a simple means to provision a
Hadoop cluster on OpenStack by specifying several parameters such as Hadoop
version, cluster topology, node hardware details, and a few more.
For now, Savanna c
Since you are using XFS, you may have run out of inodes on the device and
need to enable the inode64 option.
What does `df -i` say?
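A sketch of the check and the fix Sean is pointing at (the OSD mount path is a placeholder):

```shell
# Show inode usage per filesystem. An IUse% of 100% means the filesystem
# is out of inodes even when `df -h` still shows free space.
df -i

# If an XFS OSD filesystem is affected, remount with inode64 so inodes
# can be allocated across the whole device rather than only the first
# allocation groups:
# mount -o remount,inode64 /var/lib/ceph/osd/ceph-0
```

For a permanent fix, `inode64` would also need to be added to that mount's options in /etc/fstab.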
Sean
On 13 December 2013 00:51, Łukasz Jagiełło wrote:
> Hi,
>
> 72 OSDs (12 servers with 6 OSD per server) and 2000 placement groups.
> Replica factor is 3.
>
>
Is this useful?
http://techs.enovance.com/6424/back-from-the-summit-cephopenstack-integration
On 2013-12-14, Kai wrote: - Original Message -
From: Kai
Sent: Saturday, December 14, 2013
To: "ceph-us...@ceph.com"
Subject: [ceph-users] CEPH and Savanna Integration
Hi CEPH, An introduction to Savanna for those who haven't heard of