Hi all,
thank you for the reply. I will answer your questions, try to reproduce it,
and if I succeed, start a new thread. It can take a while; I'm quite busy.
My cluster was upgraded from Hammer or Jewel.
The Luminous cluster was healthy when I started my test. It could happen
that load temporarily cau…
On 27 Feb 2018 06:46, "Jan Pekař - Imatic" wrote:
I think I hit the same issue.
I have corrupted data on CephFS, and I don't remember the same issue
before Luminous (I did the same tests before).
It is on my test 1-node cluster with lower memory than recommended (so
the server is swapping), but it shouldn't lose data (it never did before).
So slow…
On Thu, Jan 18, 2018 at 6:39 PM, Florent B wrote:
> I still have file corruption on Ceph-fuse with Luminous (on Debian
> Jessie, default kernel)!
>
> My mounts are using fuse_disable_pagecache=true
>
> And I have a lot of errors like "EOF reading msg header (got 0/30
> bytes)" in my app.
does th…
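(A practical way to double-check that a running ceph-fuse client actually
picked the option up is to query its admin socket; a sketch, assuming the
client exposes one, and the .asok path below is only an example that will
differ per host:)

    # Ask the live ceph-fuse process for the effective value of the option.
    # The socket filename depends on the client name and pid on your system.
    ceph daemon /var/run/ceph/ceph-client.admin.asok config get fuse_disable_pagecache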
On Thu, Dec 14, 2017 at 8:52 PM, Florent B wrote:
> On 14/12/2017 03:38, Yan, Zheng wrote:
>> On Thu, Dec 14, 2017 at 12:49 AM, Florent B wrote:
>>>
>>> Systems are on Debian Jessie: kernel 3.16.0-4-amd64 & libfuse 2.9.3-15.
>>>
>>> I don't know the pattern of corruption, but according to the error mess…
On Wed, Dec 13, 2017 at 11:23 PM, Florent B wrote:
> The problem is: only a single client accesses each file!
>
Do you mean the file is only accessed by one client from beginning to end?
> It is not related to multiple clients accessing the same file at the
> same time, at all.
The problem c…
Hi,
The Ceph MDS keeps all the capabilities for the files; however, the
clients modify the RADOS data pool objects directly (they do not do
the content modification through the MDS).
IMHO, if the file (really) gets corrupted because of a client write (not
some corruption from the MDS/OSD) th…
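(To make that concrete: a CephFS file's contents live as RADOS objects named
after the file's inode number in hex plus a stripe index, so you can inspect
what a client actually wrote without involving the MDS at all. A sketch; the
mount point, file name, and the data pool name "cephfs_data" are examples:)

    # Find the file's inode number and convert it to hex
    ino=$(stat -c %i /mnt/cephfs/somefile)
    printf '%x\n' "$ino"                    # e.g. 10000000000
    # The first stripe of that file is then this object in the data pool:
    rados -p cephfs_data stat 10000000000.00000000

If a client writes bad bytes, they land directly in objects like these; the
MDS only ever handled the metadata and capabilities.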
On Fri, Dec 8, 2017 at 10:04 PM, Florent B wrote:
> When I look in MDS slow requests I have a few like this:
>
> {
>     "description": "client_request(client.460346000:5211
> setfilelock rule 1, type 2, owner 9688352835732396778, pid 660, start 0,
> length 0, wait 1 #0x100017da2aa 2017-12…
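(Dumps like the one above can be pulled from the MDS admin socket; "mds.a"
below is an example daemon name:)

    # List requests currently in flight on the MDS, including stuck
    # setfilelock requests, with their age and current flag_point:
    ceph daemon mds.a dump_ops_in_flight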
On 8 Dec 2017 14:49, Florent B wrote:
On 08/12/2017 14:29, Yan, Zheng wrote:
On Fri, Dec 8, 2017 at 6:51 PM, Florent B wrote:
I don't know, I didn't touch that setting. Which one is recommended?
If multiple dovecot instances are running at the same time and they
all modify the same fil…
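(If several Dovecot instances really can write the same mailbox over CephFS,
Dovecot's general guidance for shared filesystems may be relevant; a sketch
of the documented dovecot.conf knobs, not something verified on CephFS:)

    # dovecot.conf - settings Dovecot documents for NFS-like shared storage
    mmap_disable = yes      # do not mmap index files on shared storage
    mail_fsync = always     # fsync aggressively to avoid lost writes
    mail_nfs_index = yes    # flush caches around index file access
    mail_nfs_storage = yes  # flush caches around mail file access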
: "ceph-users"
>> Envoyé: Vendredi 8 Décembre 2017 10:54:59
>> Objet: Re: [ceph-users] Corrupted files on CephFS since Luminous upgrade
>>
>> On 08/12/2017 10:44, Wido den Hollander wrote:
>>>
>>> On 12/08/2017 10:27 AM, Florent B wrote:
>>>
do you have disabled fuse pagecache on your clients ceph.conf ?
[client]
fuse_disable_pagecache = true
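(Note that ceph-fuse reads this option at mount time, so editing ceph.conf
alone is not enough; a sketch, assuming a mount at /mnt/cephfs:)

    # On each client, after editing /etc/ceph/ceph.conf:
    umount /mnt/cephfs
    ceph-fuse /mnt/cephfs   # remounts using the monitors from ceph.conf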
On 12/08/2017 10:27 AM, Florent B wrote:
Hi everyone,
A few days ago I upgraded a cluster from Kraken to Luminous.
I have a few mail servers on it, running Ceph-Fuse & Dovecot.
And since the day of the upgrade, Dovecot has been reporting some corrupted
files on a large account:
doveadm(myu...@mydoma…
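(When Dovecot reports corrupted index files, the usual first-aid step is to
force a resync of the affected mailboxes; a sketch, with the user name as a
placeholder:)

    # Rebuild Dovecot's indexes for that user from what is on disk.
    doveadm force-resync -u user@example.com '*'

That rebuilds the indexes, but it obviously cannot recover mail data that
was already written corrupted to CephFS.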