We can get 513558 IOPS of 4K reads per NVMe with fio, but only 45146 IOPS
per OSD with rados bench.
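For reference, numbers like these are usually measured along roughly these
lines -- the device path, pool name and queue depths below are placeholders,
not the exact commands used here:

    # raw 4K random read against the NVMe itself (read-only, no data destroyed)
    fio --name=rawread --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
        --rw=randread --bs=4k --iodepth=32 --numjobs=4 --runtime=60 --group_reporting

    # 4K objects through the OSD/RADOS stack
    rados bench -p testpool 60 write -b 4096 -t 16 --no-cleanup
    rados bench -p testpool 60 rand -t 16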
Don't expect Ceph to fully utilize NVMes; it's software and it's slow :)
Some colleagues say that SPDK works out of the box but barely increases
performance, because the userland-kernel interaction is not the main bottleneck.
One thing that's worked for me to get more out of nvmes with Ceph is to
create multiple partitions on the nvme with an osd on each partition. That
way you get more osd processes and CPU per nvme device. I've heard of
people using up to 4 partitions like this.
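If it helps, a rough sketch of two common ways to do this (device paths are
placeholders, and the --osds-per-device flag only exists in reasonably recent
ceph-volume releases, so check yours):

    # Option 1: let ceph-volume split the device into multiple OSDs
    ceph-volume lvm batch --osds-per-device 4 /dev/nvme0n1

    # Option 2: partition the device yourself (parted/sgdisk), then one OSD per partition
    ceph-volume lvm create --data /dev/nvme0n1p1
    ceph-volume lvm create --data /dev/nvme0n1p2
    # ... and so on for the remaining partitions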
On Sun, Feb 24, 2019, 10:25 AM Vitaliy
There is a scheduled birds of a feather for Ceph tomorrow night, but I also
noticed that there are only trainings tomorrow. Unless you are paying more
for those, you likely don't have much to do on Monday. That's the boat I'm
in. Is anyone interested in getting together tomorrow in Boston during th
Oh you are so close David, but I have to go to Tampa to a client site,
otherwise I'd hop on a flight to Boston to say hi.
Hope you are doing well. Are you going to the Cephalocon in Barcelona?
--
Alex Gorbachev
Storcium
On Sun, Feb 24, 2019 at 10:40 AM David Turner wrote:
I've tried 4x OSD on fast SAS SSDs in a test setup with only 2 such drives
in the cluster. It increased CPU consumption a lot, but total 4 KiB random
write IOPS (RBD) only went from ~11000 to ~22000. So it was a 2x increase,
but at a huge cost.
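For anyone wanting to reproduce this kind of test, a 4K random-write RBD
benchmark typically looks something like the following (pool and image names
are placeholders, and fio has to be built with rbd support):

    fio --name=rbdwrite --ioengine=rbd --clientname=admin --pool=rbd --rbdname=testimg \
        --rw=randwrite --bs=4k --iodepth=64 --numjobs=1 --direct=1 --runtime=60 --group_reporting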
> Date: Fri, 22 Feb 2019 16:26:34 -0800
> From: solarflow99
>
>
> Aren't you undersized at only 30GB? I thought you should have 4% of your
> OSDs
The 4% guidance is new. Until relatively recently the oft-suggested and
default value was 1%.
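As a rough worked example, assuming the percentage refers to the size of the
data device (which is how the BlueStore sizing guidance is usually phrased):

    4 TB data device:  4% -> ~160 GB block.db    1% -> ~40 GB block.db
    1 TB data device:  4% ->  ~40 GB block.db    1% -> ~10 GB block.db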
> From: "Vitaliy Filippov"
> Numbers are easy to
That sounds more like the result I expected; maybe there's something
wrong with my disk or server (other disks perform fine, though).
Paul
On Fri, Feb 22, 2019 at 8:25 PM Jacob DeGlopper wrote:
>
> What are you connecting it to? We just got the exact same drive for
> testing, and I'm se
After a reboot of a node I have one particular OSD that won't boot. (Latest
Mimic)
When I "/var/lib/ceph/osd/ceph-8 # ls -lsh"
I get " 0 lrwxrwxrwx 1 root root 19 Feb 25 02:09 block.db -> '/dev/sda5
/dev/sdc5'"
For some reason it is trying to link block.db to two disks; if I remove
the block.
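In case it's useful, one way to check which partition actually holds the DB
and to repoint the symlink might look like this -- the device paths below are
just the two candidates from your ls output, so verify against your own
layout before changing anything:

    # check the BlueStore labels on the candidate partitions
    ceph-bluestore-tool show-label --dev /dev/sda5
    ceph-bluestore-tool show-label --dev /dev/sdc5

    # once the right partition is identified, repoint the symlink
    ln -snf /dev/sda5 /var/lib/ceph/osd/ceph-8/block.db
    chown -h ceph:ceph /var/lib/ceph/osd/ceph-8/block.db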
Hello Ceph!
I am tracking down a performance issue with some of our Mimic 13.2.4 OSDs. It
feels like a lack of memory, but I have no real proof of the issue. I have used
memory profiling (the pprof tool) and the OSDs are staying within their 4GB
allocated limit.
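Not sure if you've already looked at these, but the admin socket exposes a
few things that help distinguish "OSD within its target" from memory pressure
elsewhere (the osd id below is a placeholder):

    # per-pool memory accounting inside the OSD
    ceph daemon osd.12 dump_mempools

    # the configured memory target (defaults to 4 GiB on Mimic)
    ceph daemon osd.12 config get osd_memory_target

    # tcmalloc heap statistics
    ceph daemon osd.12 heap stats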
My questions are:
1. How do you