On Tue, Apr 11, 2017 at 4:01 AM, Alex Gorbachev
wrote:
> On Mon, Apr 10, 2017 at 2:16 PM, Alex Gorbachev
> wrote:
>> I am trying to understand the cause of a problem we started
>> encountering a few weeks ago. There are 30 or so messages per hour on
>> OSD nodes of this type:
>>
>> ceph-osd.33.log:
Thanks for your help:-)
By the way, could you give us a hint as to why Infernalis and later releases don't
have this problem, please? Thank you.
-----Original Message-----
From: Jason Dillaman [mailto:jdill...@redhat.com]
Sent: April 11, 2017 4:30
To: 许雪寒
Cc: ceph-users@lists.ceph.com
Subject: Re: Re: [ceph-users] Re:
This point release fixes several important bugs in RBD mirroring, librbd & RGW.
We recommend that all v10.2.x users upgrade.
For more detailed information, refer to the complete changelog[1] and the
release notes[2].
Notable Changes
---
* librbd: possible race in ExclusiveLock handl
The changes involved in introducing the deep-flatten feature in the
Infernalis release resulted in new clone copy-on-write handling. I
would suggest retrying with a jewel release client.
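If it helps to confirm what the client is dealing with, here is a minimal sketch, assuming the librbd Python bindings and placeholder pool/image names, that checks whether the deep-flatten feature bit is set on the clone in question:

# Hypothetical sketch using the librbd Python bindings; the pool and
# image names are placeholders. It prints whether the deep-flatten
# feature bit is enabled on a clone.
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('rbd')
try:
    with rbd.Image(ioctx, 'my-clone') as img:
        feats = img.features()
        print('deep-flatten enabled:',
              bool(feats & rbd.RBD_FEATURE_DEEP_FLATTEN))
finally:
    ioctx.close()
    cluster.shutdown()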
On Tue, Apr 11, 2017 at 6:56 AM, 许雪寒 wrote:
> Thanks for your help:-)
>
> By the way, could you give us some h
Hi Piotr,
On Tue, Apr 11, 2017 at 2:41 AM, Piotr Dałek wrote:
> On 04/10/2017 08:16 PM, Alex Gorbachev wrote:
>>
>> I am trying to understand the cause of a problem we started
>> encountering a few weeks ago. There are 30 or so messages per hour on
>> OSD nodes of this type:
>>
>> ceph-osd.33.log:201
Hi Ilya,
On Tue, Apr 11, 2017 at 4:06 AM, Ilya Dryomov wrote:
> On Tue, Apr 11, 2017 at 4:01 AM, Alex Gorbachev
> wrote:
>> On Mon, Apr 10, 2017 at 2:16 PM, Alex Gorbachev
>> wrote:
>>> I am trying to understand the cause of a problem we started
>>> encountering a few weeks ago. There are 30
In Ceph, a large file will be cut into small objects (2 MB ~ 4 MB), which then follow the
process Pool --(CRUSH)--> PG --> OSD.
Here I have a question: how is a large file cut into small objects? Is it done
by Ceph itself, or in some other way?
I tried this command: rados put test-object xxx.iso --pool
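As an aside on the Pool --(CRUSH)--> PG --> OSD step above, a minimal sketch, assuming the librados Python bindings and placeholder pool/object names, of asking the cluster where a given object name maps (the same information as "ceph osd map <pool> <object>"):

# Hypothetical sketch: ask the monitors how an object name maps through
# CRUSH to a PG and an OSD set. Pool and object names are placeholders.
import json
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
cmd = json.dumps({'prefix': 'osd map',
                  'pool': 'rbd',
                  'object': 'test-object',
                  'format': 'json'})
ret, outbuf, errs = cluster.mon_command(cmd, b'')
if ret == 0:
    # the output includes the pgid and the up/acting OSD sets
    print(json.loads(outbuf.decode('utf-8')))
cluster.shutdown()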
Hey,
I can see lots of these error messages on an RGW instance of our secondary
cluster:
2017-04-11 04:36:35.572659 6d23b15e5700 1 rgw meta sync: ERROR: failed to
read mdlog info with (2) No such file or directory
2017-04-11 04:36:35.572754 6d23b15e5700 1 rgw meta sync: ERROR: failed to
read md
Hi,
rados - Does not shard your object (as far as I know, there may be a
striping API, although it may not do quite what you want; see the sketch
after this list)
cephfs - implemented on top of rados - does its own object sharding (I'm
fuzzy on the details)
rbd - implemented on top of rados - does shard into 2^order sized objec
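To make the first point concrete, a rough sketch, assuming the librados Python bindings, a pool named 'rbd' and a 4 MB chunk size (all placeholders), of doing the splitting yourself; a plain "rados put" stores the whole file as a single object:

# Hypothetical sketch: split a large file into fixed-size RADOS objects
# by hand, since librados itself does not shard for you. The pool name,
# file name and 4 MB chunk size are all placeholders.
import rados

CHUNK = 4 * 1024 * 1024  # 4 MB, similar to RBD's default 2^22 object size

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('rbd')
try:
    with open('xxx.iso', 'rb') as f:
        idx = 0
        while True:
            chunk = f.read(CHUNK)
            if not chunk:
                break
            # one RADOS object per chunk: xxx.iso.0000, xxx.iso.0001, ...
            ioctx.write_full('xxx.iso.%04d' % idx, chunk)
            idx += 1
finally:
    ioctx.close()
    cluster.shutdown()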
I'm actually using ZFS, as I've been a ZFS user for 15 years now and have
been saved too many times by its ability to detect issues (especially
non-disk-related hardware issues) that other filesystems don't.
With regard to the stripe_length, I have verified that changing the
stripe_length changes t
Thank you, the answer gives me the confidence to use Ceph well.
-- Original Message --
From: "Kjetil Jørgensen" ;
Sent: Wednesday, April 12, 2017 1:16
To: "冥王星" <945019...@qq.com>;
Cc: "ceph-users" ;
Subject: Re: [ceph-users] How to cut a large file into small objects
Hi,
rados - Does n
Is this now supported? I read in one of the revisions that only systematic
is supported. I just want to know whether the latest version now supports
non-systematic.
http://tracker.ceph.com/projects/ceph/repository/revisions/14c31ddf1056e48d0361c9650c4e62d95603f1b8
After much banging on this and reading through the Ceph RGW source, I
figured out that Ceph RadosGW returns -13 (EACCES - AccessDenied) if you don't
pass in a 'Prefix' in your S3 lifecycle configuration setting. It also
returns EACCES if the XML is invalid in any way, which is probably not the
most corre
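For reference, a minimal sketch of a lifecycle rule that avoids this, assuming boto3 and placeholder endpoint, bucket, prefix and credentials; the important part is that the rule carries an explicit 'Prefix':

# Hypothetical sketch with boto3 against a RadosGW endpoint (endpoint,
# bucket, prefix and credentials are placeholders). The explicit 'Prefix'
# is the important part -- omitting it makes RGW return AccessDenied.
import boto3

s3 = boto3.client(
    's3',
    endpoint_url='http://rgw.example.com:7480',
    aws_access_key_id='ACCESS_KEY',
    aws_secret_access_key='SECRET_KEY',
)

s3.put_bucket_lifecycle_configuration(
    Bucket='mybucket',
    LifecycleConfiguration={
        'Rules': [{
            'ID': 'expire-old-logs',
            'Prefix': 'logs/',          # must be present in the rule
            'Status': 'Enabled',
            'Expiration': {'Days': 30},
        }],
    },
)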
Hi,
I noticed that the Debian package for ceph-deploy was updated yesterday, but
the version number remains the same (1.5.37). Any idea what is going on?
Thanks,
Brendan