On Mon, Mar 12, 2018 at 8:20 PM, Maged Mokhtar wrote:
> On 2018-03-12 21:00, Ilya Dryomov wrote:
>
> On Mon, Mar 12, 2018 at 7:41 PM, Maged Mokhtar wrote:
>
> On 2018-03-12 14:23, David Disseldorp wrote:
>
> On Fri, 09 Mar 2018 11:23:02 +0200, Maged Mokhtar wrote:
>
2) I understand that before sw
Hallo Jason,
thanks for your feedback!
----- Original Message -----
> > * decorated a CentOS image with hw_scsi_model=virtio--scsi, hw_disk_bus=scsi
> Is that just a typo for "hw_scsi_model"?
Yes, it was a typo when I wrote my message. The image has virtio-scsi as
it should.
I see
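For reference, a typical way to set those properties on the image (a sketch,
assuming the OpenStack CLI and a made-up image name):

  openstack image set \
      --property hw_scsi_model=virtio-scsi \
      --property hw_disk_bus=scsi \
      centos7-cloud-image

With both properties in place the volume gets attached through a virtio-scsi
controller, which is what allows discard/unmap requests from the guest to be
passed through to the RBD image.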
Hallo!
> Discards appear like they are being sent to the device. How big of a
> temporary file did you create and then delete? Did you sync the file
> to disk before deleting it? What version of qemu-kvm are you running?
I made several tests with commands like the following (issuing sync after
each operation):
d
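A minimal sketch of this kind of test (assuming the guest filesystem sits on
the RBD-backed disk and is mounted at a made-up /mnt/data; pool/image names
are placeholders):

  # create a reasonably large temporary file and flush it to disk
  dd if=/dev/zero of=/mnt/data/tempfile bs=1M count=4096 oflag=direct
  sync
  # delete it and explicitly ask the filesystem to issue discards
  rm /mnt/data/tempfile
  sync
  fstrim -v /mnt/data
  # on the Ceph side, the image's reported usage should shrink once the
  # discards arrive
  rbd du <pool>/<image>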
Hi all,
I've tested some new Samsung SM863 960GB and Intel DC S4600 240GB SSDs
using the method described at Sebastien Han's blog:
https://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/
The first thing stated there is to disable the drive's write cache.
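For reference, the test from that post boils down to disabling the drive's
volatile write cache and then measuring small synchronous writes, roughly
like this (a sketch; /dev/sdX is a placeholder, and the fio run is
destructive to data on that device):

  # turn off the drive's volatile write cache
  sudo hdparm -W 0 /dev/sdX

  # 4k synchronous sequential writes, one job, queue depth 1
  sudo fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
           --numjobs=1 --iodepth=1 --runtime=60 --time_based \
           --group_reporting --name=journal-test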
On 03/10/2018 12:58 AM, Amardeep Singh wrote:
On Saturday 10 March 2018 02:01 AM, Casey Bodley wrote:
On 03/08/2018 07:16 AM, Amardeep Singh wrote:
Hi,
I am trying to configure server-side encryption using a Key Management
Service as per the documentation at
http://docs.ceph.com/docs/master/radosgw
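For what it is worth, a quick way to exercise SSE-KMS before wiring up a real
KMS is the testing-only key store in ceph.conf (a sketch; the RGW section
name, key name, endpoint and bucket are made up, and this is explicitly not
meant for production):

  [client.rgw.gateway]
  rgw crypt s3 kms encryption keys = testkey-1=<base64 of a 256-bit key>
  rgw crypt require ssl = false

  # upload an object encrypted with that key via the S3 API
  aws --endpoint-url http://rgw.example.com:8080 s3 cp plain.txt \
      s3://mybucket/ --sse aws:kms --sse-kms-key-id testkey-1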
Hi, Maged
Not a big difference in either case.
Performance of a 4-node pool with 5x PM863a per node is:
4k bs: 33-37k IOPS krbd with 128 threads and 42-51k IOPS with 1024 threads
(fio numjobs 128-256-512)
The same situation happens when we try to increase the RBD workload: 3 RBD
images get the same IOPS figure.
Dead end & l
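For anyone wanting to reproduce this kind of test, a sketch (pool/image name,
iodepth and runtime are made up; the run writes to the mapped device):

  sudo rbd map rbdbench/test-image
  sudo fio --filename=/dev/rbd0 --direct=1 --ioengine=libaio \
           --rw=randwrite --bs=4k --iodepth=32 --numjobs=128 \
           --runtime=120 --time_based --group_reporting --name=krbd-4k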
Hi All
I have a Samba server that is exporting directories from a CephFS kernel
mount. Performance has been pretty good for the last year, but users have
recently been complaining of short "freezes", which seem to coincide with
MDS-related slow requests in the monitor ceph.log, such as:
2018-03-13
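When one of these freezes hits, it can help to look at what the MDS itself
thinks is stuck (a sketch; substitute your MDS daemon name):

  # requests currently stuck in the MDS
  sudo ceph daemon mds.<name> dump_ops_in_flight
  # per-client session state (caps held, etc.)
  sudo ceph daemon mds.<name> session ls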
Well, I have it mostly wrapped up and writing to Graylog; however, the ops log
has a `remote_addr` field that, as far as I can tell, is always blank. I found
this fix, but it seems to only be in v13.0.1:
https://github.com/ceph/ceph/pull/16860
Is there any chance we'd see backports of this to Jewel
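For anyone setting up the same thing, the ops log can be exposed on a local
socket roughly like this in ceph.conf (a sketch; the section name and socket
path are arbitrary):

  [client.rgw.gateway]
  rgw enable ops log = true
  rgw ops log socket path = /var/run/ceph/rgw-opslog.sock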
Hello, Caspar!
Would you mind sharing the controller model you use? I would say these
results are pretty low.
Here are my results on an Intel RMS25LB (LSI2308-based) SAS controller in IT
mode:
I set write_cache to write through
Test command, fio 2.2.10:
sudo fio --filename=/dev/sdb --direct=1 --sy
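One way to set write_cache to write through is the sysfs knob (a sketch,
assuming the drive is /dev/sdb):

  # report the current cache mode, then force write-through
  cat /sys/block/sdb/queue/write_cache
  echo "write through" | sudo tee /sys/block/sdb/queue/write_cache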
Hi Vladimir,
Yeah, the results are pretty low compared to yours, but I think this is
because the SSD is in a fairly old server (Supermicro X8, SAS2 expander
backplane).
Controller is LSI/Broadcom 9207-8i on the latest IT firmware (Same LSI2308
chipset as yours)
Kind regards,
Caspar
Discards appear like they are being sent to the device. How big of a
temporary file did you create and then delete? Did you sync the file
to disk before deleting it? What version of qemu-kvm are you running?
On Tue, Mar 13, 2018 at 11:09 AM, Fulvio Galeazzi
wrote:
> Hallo Jason,
> thanks for
Can you provide the output from "rbd info /volume-80838a69-e544-47eb-b981-a4786be89736"?
On Tue, Mar 13, 2018 at 12:30 PM, Fulvio Galeazzi
wrote:
> Hallo!
>
>> Discards appear like they are being sent to the device. How big of a
>> temporary file did you create and then delete? Did you sync the
On Tue, Mar 13, 2018 at 12:17 PM, David C wrote:
> Hi All
>
> I have a Samba server that is exporting directories from a Cephfs Kernel
> mount. Performance has been pretty good for the last year but users have
> recently been complaining of short "freezes", these seem to coincide with
> MDS relate
Thanks for the detailed response, Greg. A few follow-ups inline:
On 13 Mar 2018 20:52, "Gregory Farnum" wrote:
On Tue, Mar 13, 2018 at 12:17 PM, David C wrote:
> Hi All
>
> I have a Samba server that is exporting directories from a Cephfs Kernel
> mount. Performance has been pretty good for the
On Tue, Mar 13, 2018 at 2:56 PM David C wrote:
> Thanks for the detailed response, Greg. A few follow ups inline:
>
>
> On 13 Mar 2018 20:52, "Gregory Farnum" wrote:
>
> On Tue, Mar 13, 2018 at 12:17 PM, David C wrote:
> > Hi All
> >
> > I have a Samba server that is exporting directories from
I have now updated the cluster to 12.2.4, and the cycle of
inconsistent -> repair -> unfound seems to continue, though possibly
slightly differently. A pg does pass through an "active+clean" phase
after repair, which might be new, but more likely I never observed it at
the right time before.
I see messages l
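For anyone following along, the commands involved in that cycle are roughly
(a sketch; substitute the actual pg id):

  # see which pgs are flagged and what scrub found
  ceph health detail
  rados list-inconsistent-obj <pgid> --format=json-pretty
  # trigger a repair and watch the pg state afterwards
  ceph pg repair <pgid>
  ceph pg <pgid> query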
I try to add a data pool:
OSD_STAT  USED   AVAIL  TOTAL  HB_PEERS             PG_SUM  PRIMARY_PG_SUM
       9  1076M  930G   931G   [0,1,2,3,4,5,6,7,8]  128     5
       8  1076M  930G   931G   [0,1,2,3,4,5,6,7,9]  128     14
       7  1076M  930G   931G   [0,1,2,3,4,5,6,8,9]  128
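For reference, adding a data pool generally looks like this (a sketch,
assuming the pool is meant for CephFS; pool name, pg count and path are made
up):

  # create the pool and attach it to the filesystem
  ceph osd pool create cephfs_data2 128
  ceph fs add_data_pool <fs_name> cephfs_data2
  # place a directory on the new pool via a layout attribute
  setfattr -n ceph.dir.layout.pool -v cephfs_data2 /mnt/cephfs/somedir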