Hi David,
Apologies for the late response.
NodeB is mon+client, nodeC is client:
Ceph health details:
HEALTH_ERR 819 pgs are stuck inactive for more than 300 seconds; 883 pgs
degraded; 64 pgs stale; 819 pgs stuck inactive; 1064 pgs stuck unclean; 883
pgs undersized; 22 requests are blocked >
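For a state like this, a couple of stock commands usually help narrow down which PGs are stuck and why (assuming the standard ceph CLI on a mon/admin node):

# detailed health summary plus the stuck PGs by state
ceph health detail
ceph pg dump_stuck inactive
ceph pg dump_stuck unclean
# then query one of the listed PGs for specifics (1.2f is just a placeholder id)
ceph pg 1.2f query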
Reported http://tracker.ceph.com/issues/16388
ceph version 10.2.1
2016-06-19 20:54 GMT+03:00 Victor Efimov :
> That was 5 megabytes in size. I tried 6 megabytes and 600 bytes, same
> story. So it seems unrelated to size. I think the important things here are: 1)
> the actual object size is zero 2) Range request
On 20 June 2016 at 09:21, Blair Bethwaite wrote:
> slow request issues). If you watch your xfs stats you'll likely get
> further confirmation. In my experience xs_dir_lookups balloons (which
> means directory lookups are missing cache and going to disk).
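If it helps, a minimal sketch of watching that counter, assuming the OSD data disks are XFS and /proc/fs/xfs/stat is readable on the host (the second field of the "dir" line is xs_dir_lookup):

# print a timestamped snapshot of the XFS directory-op counters every 10 seconds
while sleep 10; do
    echo "$(date +%T) $(grep '^dir ' /proc/fs/xfs/stat)"
done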
Murphy's a bitch. Today we upgraded a clus
On Sun, Jun 19, 2016 at 3:12 PM, ServerPoint wrote:
> Hi,
>
> Nothing particular in dmesg. I noticed that the client server hangs and I
> have to restart it to get it back up.
>
> Kernel version is
> --
> root@cephclient:~# uname -r
> 3.16.0-4-amd64
>
3.16 is too old for the cephfs kernel client.
Hi All,
We have a Jewel (10.2.1) cluster on CentOS 7 - I am using an elrepo 4.4.1
kernel on all machines and we have an issue where some of the machines hang -
not sure if it's hardware or OS, but essentially the host, including the console,
is unresponsive and can only be recovered with a hardwar
Hello Blair,
On Mon, 20 Jun 2016 09:21:27 +1000 Blair Bethwaite wrote:
> Hi Wade,
>
> (Apologies for the slowness - AFK for the weekend).
>
> On 16 June 2016 at 23:38, Wido den Hollander wrote:
> >
> >> On 16 June 2016 at 14:14, Wade Holler wrote:
> >>
> >>
> >> Hi All,
> >>
> >> I have a r
On Mon, 20 Jun 2016 00:14:55 +0700 Lazuardi Nasution wrote:
> Hi,
>
> Is it possible to do cache tiering for some storage pools with the same
> cache pool?
As mentioned several times on this ML, no.
There is a strict 1:1 relationship between base and cache pools.
You can of course (if your SSDs
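For reference, a minimal sketch of that 1:1 wiring with the Jewel-era CLI (pool names here are placeholders):

# attach exactly one cache pool to one base pool and route client IO through it
ceph osd tier add base-pool cache-pool
ceph osd tier cache-mode cache-pool writeback
ceph osd tier set-overlay base-pool cache-pool
# a writeback cache also still needs its sizing/hit-set parameters configured
# (hit_set_type, target_max_bytes, etc.) before it behaves sanely

Trying to attach the same cache pool to a second base pool should simply be refused by the monitor.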
Hi Wade,
(Apologies for the slowness - AFK for the weekend).
On 16 June 2016 at 23:38, Wido den Hollander wrote:
>
>> On 16 June 2016 at 14:14, Wade Holler wrote:
>>
>>
>> Hi All,
>>
>> I have a repeatable condition when the object count in a pool gets to
>> 320-330 million the object write ti
That was 5 megabytes in size. I tried 6 megabytes and 600 bytes, same
story. So it seems unrelated to size. I think the important things here are: 1)
the actual object size is zero 2) Range request
I'll ask my sysadmin team for the version; they'll answer tomorrow. I'll report
the issue tomorrow.
2016-06-19 18:28 GM
Hi,
Is it possible to do cache tiering for some storage pools with the same
cache pool? What will happen if the cache pool is broken or at least doesn't
meet quorum while the storage pool is OK?
Best regards,
> On 19 June 2016 at 12:21, Victor Efimov wrote:
>
>
> When I submit a request to a zero-size object with a Range header, I get
> the wrong Content-Length and Content-Range.
>
> See "Content-Range: bytes 0-5242880/0" and "Content-Length: 5242881" below.
>
That sounds like a bug! Good catch :)
When I submit a request to a zero-size object with a Range header, I get
the wrong Content-Length and Content-Range.
See "Content-Range: bytes 0-5242880/0" and "Content-Length: 5242881" below.
GET
http:///test-vsespb-1/mykey?AWSAccessKeyId=XXX&Expires=1467330825&Signature=XXX
Range: bytes=0-5242
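In case anyone wants to reproduce it, a rough curl sketch (host, bucket/key and the pre-signed query string are placeholders, not the exact ones from the report):

# assuming 'mykey' already exists as a zero-byte object, issue a ranged GET
# and dump only the response headers
curl -s -D - -o /dev/null -H 'Range: bytes=0-5242880' \
    "http://RGW-HOST/test-vsespb-1/mykey?AWSAccessKeyId=XXX&Expires=XXX&Signature=XXX"
# with the bug present the headers reportedly come back as:
#   Content-Range: bytes 0-5242880/0
#   Content-Length: 5242881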
Hi,
so far the key values for that are:
osd_client_op_priority = 63 (the default anyway, but I set it explicitly so I remember it)
osd_recovery_op_priority = 1
In addition I set:
osd_max_backfills = 1
osd_recovery_max_active = 1
---
But according to your settings it's all OK.
According to
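In case it is useful, a sketch of pushing those values to running OSDs on a Jewel cluster (injectargs changes are not persistent, so mirror them in ceph.conf as well):

# throttle recovery/backfill in favour of client IO on all OSDs
ceph tell osd.* injectargs '--osd_max_backfills 1 --osd_recovery_max_active 1 --osd_recovery_op_priority 1'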
Hi,
Nothing particular in dmesg. I noticed that the client server hangs and
I have to restart it to get it back up.
Kernel version is
--
root@cephclient:~# uname -r
3.16.0-4-amd64
On 6/19/2016 7:29 AM, Lincoln Bryant wrote:
Hi,
Are there any messages in 'dmesg'? Are you running a recen