Hello Marc,
Here is the fio profile and the output:
[global]
ioengine=libaio
invalidate=1
ramp_time=30
iodepth=1
runtime=180
time_based
direct=1
filename=/dev/sdd
[randwrite-4k-d32-rand]
stonewall
bs=4k
rw=randwrite
iodepth=32
[randread-4k-d32-rand]
stonewall
bs=4k
rw=randread
iodepth=32
[write-4096
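For comparison through Ceph itself, a similar job can be pointed at an RBD image
with fio's rbd engine. A minimal sketch, assuming a test pool and image that are
not part of the original profile and the default admin keyring:

[global]
ioengine=rbd
clientname=admin
pool=testpool
rbdname=fio-test
runtime=180
time_based

[randwrite-4k-d32-rbd]
bs=4k
rw=randwrite
iodepth=32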
Hi Marc,
Thanks for your prompt response.
We have tested direct random writes on the disk (without Ceph) and it reaches
200 MB/s. We wonder why we only get 80 MB/s through Ceph.
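One way we could narrow down where the drop happens is to benchmark the RADOS
layer directly; a sketch, with the pool name as a placeholder:

rados bench -p testpool 60 write -b 4096 -t 32

(rados bench defaults to 4 MB objects, so -b 4096 keeps it comparable to the 4k
fio jobs; -t sets the number of concurrent operations.)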
Your help is much appreciated.
Regards,
Behzad
On Sun, Jan 16, 2022 at 11:56 AM Marc wrote:
>
>
> > Detailed (somehow) problem de
And here is the disk information that we base our testing on:
HPE EG1200FDJYT 1.2TB 10kRPM 2.5in SAS-6G Enterprise
On Sun, Jan 16, 2022 at 11:23 AM Behzad Khoshbakhti
wrote:
> Hi all,
>
> We are curious about single-disk performance, as we experience
> performance degradation when the disk is controlled via Ceph.
Hi all,
We are curious about single-disk performance, as we experience
performance degradation when the disk is controlled via Ceph.
Problem description:
We are curious about Ceph write performance, and we have found that when
we write data via Ceph, it does not use the full disk write bandwidth.
On Thu, Apr 8, 2021 at 9:49 AM Behzad Khoshbakhti
> wrote:
>
>> I believe there is some problem with the systemd unit, as the Ceph OSD starts
>> successfully when run manually using the ceph-osd command.
>>
>> On Thu, Apr 8, 2021, 10:32 AM Enrico Kern
>> wrote:
>>
> > >
> > > Hello,
> > >
> > > +1, I am facing the same problem on Ubuntu after upgrading to Pacific
> > >
> > > 2021-04-03T10:36:07.698+0300 7f9b8d075f00 -1 bluestore(/var/lib/ceph/osd/ceph-29/block) _read_bdev_label fai
2021-04-05T11:24:08.823+0430 7f91772c5f00 -1 osd.2 496 log_to_monitors {default=true}
2021-04-05T11:24:09.943+0430 7f916f7b9700 -1 osd.2 496 set_numa_affinity unable to identify public interface 'ens160' numa node: (0) Success
On Mon, Apr 5, 2021, 10:51 AM Behzad Khoshbakhti
wrote:
> running as ceph us
root@osd03:/var/lib/ceph/osd/ceph-2#
On Sun, Apr 4, 2021 at 7:06 PM Andrew Walker-Brown <
andrew_jbr...@hotmail.com> wrote:
> And after a reboot what errors are you getting?
>
> Sent from my iPhone
>
> On 4 Apr 2021, at 15:33, Behzad Khoshbakhti wrote:
not permitted
>> 2021-04-03T10:36:07.698+0300 7f9b8d075f00 -1 ** ERROR: unable to
>> open OSD superblock on /var/lib/ceph/osd/ceph-29: (2) No such file or
>> directory
>>
>> On Sun, Apr 4, 2021 at 1:52 PM Behzad Khoshbakhti
>> wrote:
It is worth mentioning that when I issue the following command, the Ceph OSD
starts and joins the cluster:
/usr/bin/ceph-osd -f --cluster ceph --id 2 --setuser ceph --setgroup ceph
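Since the manual invocation works, comparing it with what systemd actually runs
may help; a sketch, assuming the unit on this host is ceph-osd@2:

systemctl cat ceph-osd@2        # print the unit file, including its ExecStart line
systemctl status ceph-osd@2     # current state and last start attempt
journalctl -u ceph-osd@2 -b --no-pager | tail -n 50   # unit logs from the current boot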
On Sun, Apr 4, 2021 at 3:00 PM Behzad Khoshbakhti
wrote:
> Hi all,
>
> As I have upgraded my Ceph cluster from 1
e430--4b89--bcd4--105b2df26352   253:10   16G   0 lvm
root@osd03:~#
root@osd03:/var/lib/ceph/osd/ceph-2# mount | grep -i ceph
tmpfs on /var/lib/ceph/osd/ceph-2 type tmpfs (rw,relatime)
root@osd03:/var/lib/ceph/osd/ceph-2#
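The tmpfs mount itself is normal for a ceph-volume lvm OSD; its contents are
rebuilt at activation time. A sketch of re-activating it manually, with the OSD
fsid left as a placeholder:

ceph-volume lvm list                    # look up the osd fsid for OSD.2 in the output
ceph-volume lvm activate 2 <osd-fsid>   # repopulate /var/lib/ceph/osd/ceph-2 and enable the systemd unit
# or, for every OSD on the host:
ceph-volume lvm activate --all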
Any help is much appreciated.