> 4.4.25 kernel with some additional ceph patches borrowed from
> newer kernel releases.
>
> Thanks,
> Markus
>
> -----Original Message-----
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com]
> On Behalf Of Nikolay Borisov
> Sent: Monday, 10 October 2016
On 10/10/2016 12:22 PM, Ilya Dryomov wrote:
> On Fri, Oct 7, 2016 at 1:40 PM, Nikolay Borisov wrote:
>> Hello,
>>
>> I've encountered yet another cephfs crash:
>>
>> [990188.822271] BUG: unable to handle kernel NULL pointer dereference at
>
Hello,
I've encountered yet another cephfs crash:
[990188.822271] BUG: unable to handle kernel NULL pointer dereference at
001c
[990188.822790] IP: [] __free_pages+0x5/0x30
[990188.823090] PGD 180dd8f067 PUD 1bf2722067 PMD 0
[990188.823506] Oops: 0002 [#1] SMP
[990188.831274] CPU
On 10/05/2016 05:26 AM, Yan, Zheng wrote:
>
>> On 3 Oct 2016, at 20:27, Ilya Dryomov wrote:
>>
>> On Mon, Oct 3, 2016 at 1:19 PM, Nikolay Borisov wrote:
>>> Hello,
>>>
>>> I've been investigating the following crash with cephfs:
>>>
Hello,
I'd like to ask whether the recordings of Ceph's performance weekly meetings
are going to be updated at http://pad.ceph.com/p/performance_weekly.
I can see that the minutes from those meetings are being updated; however,
the links to the videos of the discussions are lagging by more than a year
(latest
On 10/03/2016 03:27 PM, Ilya Dryomov wrote:
> On Mon, Oct 3, 2016 at 1:19 PM, Nikolay Borisov wrote:
>> Hello,
>>
>> I've been investigating the following crash with cephfs:
>>
>> [8734559.785146] general protection fault: [#1] SMP
>> [8734559.
Hello,
I've been investigating the following crash with cephfs:
[8734559.785146] general protection fault: [#1] SMP
[8734559.791921] ioatdma shpchp ipmi_devintf ipmi_si ipmi_msghandler
tcp_scalable ib_qib dca ib_mad ib_core ib_addr ipv6 [last unloaded:
stat_faker_4410clouder4]
[8734559
On 09/22/2016 06:36 PM, Ilya Dryomov wrote:
> On Thu, Sep 15, 2016 at 3:18 PM, Ilya Dryomov wrote:
>> On Thu, Sep 15, 2016 at 2:43 PM, Nikolay Borisov wrote:
>>>
>>> [snipped]
>>>
>>> cat /sys/bus/rbd/devices/47/client_id
>>>
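For reference, the same sysfs directory exposes further attributes of the
mapped image; a quick sketch, assuming the same device id 47 as above:

  # Identify which image /dev/rbd47 maps to and which client owns it
  cat /sys/bus/rbd/devices/47/client_id     # kernel client's global id
  cat /sys/bus/rbd/devices/47/pool          # pool the image lives in
  cat /sys/bus/rbd/devices/47/name          # image name
  cat /sys/bus/rbd/devices/47/current_snap  # mapped snapshot, if any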
On 09/15/2016 03:15 PM, Ilya Dryomov wrote:
> On Thu, Sep 15, 2016 at 12:54 PM, Nikolay Borisov wrote:
>>
>>
>> On 09/15/2016 01:24 PM, Ilya Dryomov wrote:
>>> On Thu, Sep 15, 2016 at 10:22 AM, Nikolay Borisov
>>> wrote:
>>>>
>
On 09/15/2016 01:24 PM, Ilya Dryomov wrote:
> On Thu, Sep 15, 2016 at 10:22 AM, Nikolay Borisov
> wrote:
>>
>>
>> On 09/15/2016 09:22 AM, Nikolay Borisov wrote:
>>>
>>>
>>> On 09/14/2016 05:53 PM, Ilya Dryomov wrote:
>>>
On 09/14/2016 05:53 PM, Ilya Dryomov wrote:
> On Wed, Sep 14, 2016 at 3:30 PM, Nikolay Borisov wrote:
>>
>>
>> On 09/14/2016 02:55 PM, Ilya Dryomov wrote:
>>> On Wed, Sep 14, 2016 at 9:01 AM, Nikolay Borisov wrote:
>>>>
>>>>
>>>>
On 09/14/2016 02:55 PM, Ilya Dryomov wrote:
> On Wed, Sep 14, 2016 at 9:01 AM, Nikolay Borisov wrote:
>>
>>
>> On 09/14/2016 09:55 AM, Adrian Saul wrote:
>>>
>>> I found I could ignore the XFS issues and just mount it with the
>>> a
On 09/14/2016 09:55 AM, Adrian Saul wrote:
>
> I found I could ignore the XFS issues and just mount it with the appropriate
> options (below from my backup scripts):
>
> #
> # Mount with nouuid (conflicting XFS) and norecovery (ro snapshot)
> #
> if ! mount -o r
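The command above is cut off; a minimal sketch of what such a backup-script
mount step could look like, with hypothetical device and mountpoint paths:

  #!/bin/sh
  # Mount an XFS filesystem from a read-only RBD snapshot.
  # nouuid: the snapshot carries the same UUID as the live filesystem,
  #         so skip the duplicate-UUID check.
  # norecovery: skip journal replay, which cannot be performed on a
  #             read-only block device.
  DEV=/dev/rbd47          # hypothetical device
  MNT=/mnt/backup-snap    # hypothetical mountpoint
  if ! mount -o ro,nouuid,norecovery "$DEV" "$MNT"; then
      echo "mount of $DEV on $MNT failed" >&2
      exit 1
  fi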
On 09/13/2016 04:30 PM, Ilya Dryomov wrote:
[SNIP]
>
> Hmm, it could be about whether it is able to do journal replay on
> mount. When you mount a snapshot, you get a read-only block device;
> when you mount a clone image, you get a read-write block device.
>
> Let's try this again, suppose im
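The two cases being contrasted can be shown with the rbd CLI; a sketch with
hypothetical pool and image names:

  # Case 1: map the snapshot itself -- the resulting block device is
  # read-only, so XFS cannot replay its journal at mount time
  # (hence the norecovery option).
  rbd snap create mypool/myimage@backup
  rbd map mypool/myimage@backup          # read-only /dev/rbdX

  # Case 2: clone the snapshot first -- the clone is a normal
  # read-write image, so mounting it allows journal replay.
  rbd snap protect mypool/myimage@backup
  rbd clone mypool/myimage@backup mypool/backup-clone
  rbd map mypool/backup-clone            # read-write /dev/rbdY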
On 09/13/2016 01:33 PM, Ilya Dryomov wrote:
> On Tue, Sep 13, 2016 at 12:08 PM, Nikolay Borisov wrote:
>> Hello list,
>>
>>
>> I have the following cluster:
>>
>> ceph status
>> cluster a2fba9c1-4ca2-46d8-8717-a8e42db14bb0
>> health
Hello list,
I have the following cluster:
ceph status
    cluster a2fba9c1-4ca2-46d8-8717-a8e42db14bb0
     health HEALTH_OK
     monmap e2: 5 mons at
{alxc10=x:6789/0,alxc11=x:6789/0,alxc5=x:6789/0,alxc6=x:6789/0,alxc7=x:6789/0}
     election epoch 196, quorum 0,1,2