On Sat, Nov 29, 2014 at 2:13 AM, Andrei Mikhailovsky wrote:
> Ilya, here is what I got shortly after starting the dd test:
>
>
>
> [ 288.307993]
> [ 288.308004] =
> [ 288.308008] [ INFO: possible irq lock inversion dependency detected ]
>
Ilya, so what is the best action plan now? Should I continue using the kernel
that you've sent me? I am running production infrastructure and I'm not sure if
this is the right way forward.
Do you have a patch by any chance against the LTS kernel that I can use to
recompile the ceph module?
Thanks
I am seeing a lot of failures with Giant/radosgw and s3tests,
particularly with fastcgi.
I am using the community-patched apache and fastcgi; civetweb is doing much better.
1. Both tests hang at
s3tests.functional.test_headers.test_object_create_bad_contentlength_mismatch_above
I have to exclude this test (a sketch of how is below).
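For reference, excluding that one case from a ceph/s3-tests run can look
roughly like the sketch below; the config file name and virtualenv path are
assumptions, not taken from this thread:

# skip the hanging test case via nose's exclude regex
S3TEST_CONF=s3tests.conf ./virtualenv/bin/nosetests -v \
    -e 'test_object_create_bad_contentlength_mismatch_above' \
    s3tests.functional.test_headers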
On Sat, Nov 29, 2014 at 2:33 AM, Andrei Mikhailovsky wrote:
> Ilya,
>
> not sure if the dmesg output in the previous message is related to cephfs, but
> from what I can see it looks good with your kernel. I would normally have seen
> hang tasks by now, but not anymore. I've run a bunch of concurrent dd tests and a
Ilya, I will give it a try and get back to you shortly,
Andrei
----- Original Message -----
> From: "Ilya Dryomov"
> To: "Andrei Mikhailovsky"
> Cc: "ceph-users"
> Sent: Saturday, 29 November, 2014 10:40:48 AM
> Subject: Re: [ceph-users] Giant + nfs over cephfs hang tasks
> On Sat, Nov 29,
Hi all!
I am setting up a new cluster with 10 OSDs
and the state is degraded!
# ceph health
HEALTH_WARN 940 pgs degraded; 1536 pgs stuck unclean
#
There are only the default pools
# ceph osd lspools
0 data,1 metadata,2 rbd,
with each one having 512 pg_num and 512 pgp_num
# ceph osd dump |
I think I had a similar issue recently when I added a new pool. All pgs that
corresponded to the new pool were shown as degraded/unclean. After doing a bit
of testing I realized that my issue was down to this:
replicated size 2
min_size 2
The replicated size and min_size were the same. In m
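For anyone hitting the same thing, checking and adjusting those pool settings
looks roughly like the sketch below; the pool name and target value are
examples, not taken from this thread:

# show the replicated size and min_size of every pool
ceph osd dump | grep '^pool'

# with size == min_size, a pool cannot serve I/O if even one replica is
# missing; relaxing min_size is one way out (raising size is another)
ceph osd pool set rbd min_size 1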
Ilya,
The 3.17.4 kernel that you've given me is also good so far. No hang tasks as seen
before. However, I do have the same message in dmesg as with the 3.18 kernel
that you've sent. I've not seen this message in the past while using kernel
versions 3.2 onwards.
Not really sure if this message s
Ilya,
I think I spoke too soon in my last message. I've now given it more load
(running 8 concurrent dds with bs=4M) and about a minute or so after starting
I've seen problems in the dmesg output. I am attaching the kern.log file for your
reference.
Please check starting with the following line: Nov
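For anyone trying to reproduce this, the concurrent dd load described above
would look something like the sketch below; the mount point and file names are
assumptions:

# eight parallel 4M-block writers against the NFS-over-CephFS mount
for i in $(seq 1 8); do
    dd if=/dev/zero of=/mnt/cephfs-nfs/ddtest.$i bs=4M count=2048 &
done
wait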
On Sat, Nov 29, 2014 at 3:10 PM, Andrei Mikhailovsky wrote:
> Ilya,
>
> The 3.17.4 kernel that you've given is also good so far. No hang tasks as
> seen before. However, I do have the same message in dmesg as with the 3.18
> kernel that you've sent. This message I've not seen in the past while usi
On Sat, Nov 29, 2014 at 3:22 PM, Andrei Mikhailovsky wrote:
> Ilya,
>
> I think I spoke too soon in my last message. I've now given it more load
> (running 8 concurrent dds with bs=4M) and about a minute or so after
> starting I've seen problems in dmesg output. I am attaching kern.log file
> for
On Sat, Nov 29, 2014 at 3:49 PM, Ilya Dryomov wrote:
> On Sat, Nov 29, 2014 at 3:22 PM, Andrei Mikhailovsky
> wrote:
>> Ilya,
>>
>> I think I spoke too soon in my last message. I've now given it more load
>> (running 8 concurrent dds with bs=4M) and about a minute or so after
>> starting I've se
Hi,
On 26.11.2014 at 23:36, Geoff Galitz wrote:
>
> Hi.
>
> If I create an RBD instance, and then use a fuse mount to access it from various
> locations as a POSIX entity, I assume I'll need to create a filesystem on it.
> To access it from various remote servers I assume I'd also need a
> distribu
Ilya, do you have a ticket reference for the bug?
Andrei, we run NFS tests on CephFS in our nightlies and it does pretty well,
so in the general case we expect it to work. Obviously not at the moment,
with whatever bug Ilya is looking at, though. ;)
-Greg
On Sat, Nov 29, 2014 at 4:51 AM Ilya Dryomov
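For context, re-exporting a kernel CephFS mount over NFS generally looks
something like the sketch below; the monitor address, mount point, network and
export options are assumptions, not taken from this thread:

# on the NFS server: mount CephFS with the kernel client
mount -t ceph mon1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret

# /etc/exports entry -- an explicit fsid is needed because CephFS has no
# backing block device for NFS to derive one from:
#   /mnt/cephfs  192.168.0.0/24(rw,no_subtree_check,fsid=100)

# reload the export table and verify
exportfs -ra
showmount -e localhost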
On 29/11/14 11:40, Yehuda Sadeh wrote:
On Fri, Nov 28, 2014 at 1:38 PM, Ben wrote:
On 29/11/14 01:50, Yehuda Sadeh wrote:
On Thu, Nov 27, 2014 at 9:22 PM, Ben wrote:
On 2014-11-28 15:42, Yehuda Sadeh wrote:
On Thu, Nov 27, 2014 at 2:15 PM, b wrote:
On 2014-11-27 11:36, Yehuda Sadeh wrote
I have 2 OSDs on two nodes on top of ZFS that I'd like to rebuild in a more
standard (xfs) setup.
Would the following be a non-destructive, if somewhat tedious, way of doing so?
Following the instructions from here:
http://ceph.com/docs/master/rados/operations/add-or-rm-osds/#removing-osds-manua
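For one OSD at a time, the remove-and-recreate cycle from that doc page looks
roughly like the sketch below; the OSD id and device name are examples, and
waiting for the cluster to return to active+clean between OSDs is what keeps
the process non-destructive:

# drain the OSD and wait for recovery to finish (watch ceph -w / ceph health)
ceph osd out 0

# once everything is active+clean again, stop the daemon and remove the OSD
service ceph stop osd.0
ceph osd crush remove osd.0
ceph auth del osd.0
ceph osd rm 0

# recreate it on an xfs-backed disk, let it backfill, then move to the next one
ceph-disk prepare --fs-type xfs /dev/sdX
ceph-disk activate /dev/sdX1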
That's not actually so unusual:
http://techreport.com/review/26058/the-ssd-endurance-experiment-data-retention-after-600tb
The manufacturers are pretty conservative with their ratings and
warranties. ;)
-Greg
On Thu, Nov 27, 2014 at 2:41 AM Andrei Mikhailovsky
wrote:
> Mark, if it is not too much
According to the docs, Ceph block devices are thin-provisioned. But how do I
list the actual size of VM images hosted on Ceph?
I do something like:
rbd ls -l rbd
But that only lists the provisioned sizes, not the real usage.
thanks,
--
Lindsay
Yeah, we still have no way to inspect the actual usage of an image.
But we already have an existing blueprint to implement it:
https://wiki.ceph.com/Planning/Blueprints/Hammer/librbd%3A_shared_flag%2C_object_map
On Sun, Nov 30, 2014 at 9:13 AM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:
> According t
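Until that object-map work lands, a common approximation is to sum the
allocated extents reported by rbd diff; a rough sketch, assuming an image
named vm-disk in the rbd pool:

# approximate used space of one image by summing its allocated extents
rbd diff rbd/vm-disk | awk '{ sum += $2 } END { print sum/1024/1024 " MB" }'

# the same for every image in the pool
for img in $(rbd ls rbd); do
    printf '%s: ' "$img"
    rbd diff rbd/"$img" | awk '{ sum += $2 } END { print sum/1024/1024 " MB" }'
done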
On Sun, 30 Nov 2014 11:37:06 AM Haomai Wang wrote:
> Yeah, we still have no way to inspect the actual usage of image.
>
> But we already have existing bp to impl it.
> https://wiki.ceph.com/Planning/Blueprints/Hammer/librbd%3A_shared_flag%2C_ob
> ject_map
Thanks, good to know.
I did find this:
On Sun, Nov 30, 2014 at 1:19 AM, Gregory Farnum wrote:
> Ilya, do you have a ticket reference for the bug?
Opened a ticket, assigned to myself.
http://tracker.ceph.com/issues/10208
> Andrei, we run NFS tests on CephFS in our nightlies and it does pretty well
> so in the general case we expect i