On 18/06/18 09:09, Alfredo Deza wrote:
On Fri, Jun 15, 2018 at 11:59 AM, Alfredo Daniel Rezinovsky
wrote:
> Too long is 120 seconds.
>
> The DB is on SSD devices. The devices are fast. The OSD process reads about
> 800 MB, but I cannot be sure from where.

You didn't mention what version of Ceph you are using and how you
deployed these OSDs.
Too long is 120 seconds.

The DB is on SSD devices. The devices are fast. The OSD process reads
about 800 MB, but I cannot be sure from where.
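Taken together, those two figures are telling. A quick arithmetic sketch (assuming all ~800 MB really is read from the DB device) shows the implied throughput is nowhere near SSD class:

```shell
# Implied read throughput: ~800 MB read in 120 s (integer shell arithmetic).
echo "$((800 / 120)) MB/s"   # prints "6 MB/s" -- seek-bound HDD territory, not SSD
# Expected time at a modest 400 MB/s SATA SSD:
echo "$((800 / 400)) s"      # prints "2 s"
```

If the SSD is only being read at ~6 MB/s, the reads are likely small and random rather than sequential, or they are not coming from the SSD at all, which would fit the "I cannot be sure from where" above.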
On 13/06/18 11:36, Gregory Farnum wrote:
On 06/13/2018 08:22 PM, Alfredo Daniel Rezinovsky wrote:
> I have 3 boxes, and I'm installing a new one. Any box can be lost
> without losing data.
>
> If any SSD is lost I will just reinstall the whole box; I will still
> have data duplicates, and in about 40 hours the triplicates will be
> ready.
>
> I understa

How long is “too long”? 800 MB on an SSD should only take a second or three.
I’m not sure whether that’s a reasonable amount of data; you could try
compacting the RocksDB instance, etc. But if reading 800 MB is noticeable, I
would start wondering about the quality of your disks as a journal or
RocksDB device.
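The compaction Gregory suggests can be done offline with ceph-kvstore-tool. A sketch, not verified against this cluster: the OSD id (0) and the default data path are placeholder assumptions, the OSD must be stopped first, and the command is only printed here rather than executed:

```shell
# Offline RocksDB compaction for a BlueStore OSD (stop the OSD first).
# OSD_ID and the data path below are placeholders for illustration.
OSD_ID=0
CMD="ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-${OSD_ID} compact"
echo "would run: ${CMD}"   # drop the echo to actually run the compaction
```

A smaller, compacted DB should shrink the amount the OSD has to read back at startup, which is a cheap way to test whether the 800 MB itself is the problem.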
On 13/06/18 01:03, Konstantin Shalygin wrote:
> Each node now has 1 SSD with the OS and the BlockDBs, and 3 HDDs with
> bluestore data.

Very, very bad idea. When your SSD/NVMe dies, you lose your Linux box.

k
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/