Re: [ceph-users] Latency spikes in OSD's based on bluestore

2019-04-08 Thread Patrik Martinsson
…or if one actually should take some sort of action. Thanks for answering! Best Regards, Patrik Martinsson, Sweden

[ceph-users] Latency spikes in OSD's based on bluestore

2019-04-07 Thread Patrik Martinsson
364688, "job": 6777, "event": "table_file_deletion", "file_number": 17625} 2019-04-07 04:13:57.436121 7fba248c4700 4 rocksdb: [/builddir/build/BUILD/ceph-12.2.8/src/rocksdb/db/db_impl_write.cc:684] reusing log 17656 from recycle list 2019-04-07 04:13:57.

Re: [ceph-users] New Ceph-cluster and performance "questions"

2018-02-08 Thread Patrik Martinsson
Hi Christian, First of all, thanks for all the great answers and sorry for the late reply. On Tue, 2018-02-06 at 10:47 +0900, Christian Balzer wrote: > Hello, > > I'm not a "storage-guy" so please excuse me if I'm missing / overlooking something obvious. > > My question is in the…

[ceph-users] New Ceph-cluster and performance "questions"

2018-02-05 Thread Patrik Martinsson
…more reliable ways of measuring it? Is there a way to calculate this "theoretically" (i.e. with 6 nodes and 36 SSD's we should get these numbers) and then compare it to the reality? Again, not a storage guy and haven't really done this before so please excuse me for my lay…
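A back-of-the-envelope sketch of the kind of "theoretical" calculation being asked about (my own, not from the thread). All per-device figures are placeholder assumptions; plug in vendor or fio numbers for the actual SSD model.

  # First-order estimate of what 6 nodes x 6 SSDs "should" deliver,
  # to compare against measured numbers. Ignores network, CPU and
  # journal/WAL overhead, which only lower these ceilings further.
  nodes = 6
  ssds_per_node = 6
  replication = 3            # size=3 pool

  ssd_write_mbps = 450       # assumed per-SSD sequential write, MB/s
  ssd_write_iops = 25000     # assumed per-SSD random 4k write IOPS

  total_ssds = nodes * ssds_per_node  # 36

  # Every client write is stored `replication` times, so the aggregate
  # device capability is divided by the replica count.
  max_write_mbps = total_ssds * ssd_write_mbps / replication
  max_write_iops = total_ssds * ssd_write_iops / replication

  print("theoretical aggregate write: %.0f MB/s, %.0f IOPS"
        % (max_write_mbps, max_write_iops))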

Re: [ceph-users] Yet another hardware planning question ...

2016-10-20 Thread Patrik Martinsson
…*always* was the best solution:
- if you use mainly spinners, have the journals on ssd's
- if you mainly use ssd's, have journals on nvme's
But that's not always the case I guess, and thanks for pointing that out. Best regards, Patrik Martinsson, Sweden On fre, 2016-1…
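The reasoning behind that rule of thumb, sketched with my own placeholder numbers (not from the thread): with FileStore every write hits the journal first, so a journal co-located on the data device roughly halves its effective write bandwidth, while a journal on a faster tier largely removes the penalty.

  spinner_mbps = 150          # assumed HDD sequential write, MB/s
  ssd_mbps = 450              # assumed SATA SSD sequential write, MB/s

  # Journal on the same device: each client byte is written twice.
  colocated = spinner_mbps / 2.0

  # Journal on a (sufficiently fast) separate SSD: the data device
  # keeps roughly its full rate.
  offloaded = min(spinner_mbps, ssd_mbps)

  print("HDD, journal co-located: ~%.0f MB/s" % colocated)   # ~75 MB/s
  print("HDD, journal on SSD:     ~%.0f MB/s" % offloaded)   # ~150 MB/s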

Re: [ceph-users] Yet another hardware planning question ...

2016-10-13 Thread Patrik Martinsson
…? Best regards, Patrik Martinsson, Sweden > On Oct 13, 2016 10:23 AM, "Patrik Martinsson" wrote: > > Hello everyone, > > We are in the process of buying hardware for our first ceph-cluster. We > > will start with some testing and…

[ceph-users] Yet another hardware planning question ...

2016-10-13 Thread Patrik Martinsson
…bottlenecks or flaws that we are missing, or could this very well work as a good start (and the ability to grow by adding more servers)?" When it comes to workload-wise issues, I think we'll just have to see and grow as we learn. We'll be grateful for any input, thoughts, ideas…
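On the "grow by adding more servers" part, a quick planning sketch (mine, not from the thread) of usable capacity at the start and per added node. Device size and ratios are placeholder assumptions.

  nodes = 6
  ssds_per_node = 6
  ssd_tb = 1.0          # assumed raw TB per SSD
  replication = 3       # size=3 pool
  nearfull = 0.85       # plan to stay below the default nearfull ratio

  def usable_tb(n_nodes):
      # Raw capacity divided by replica count, derated by the nearfull target.
      raw = n_nodes * ssds_per_node * ssd_tb
      return raw / replication * nearfull

  print("start:           %.1f TB usable" % usable_tb(nodes))
  print("per added node: +%.1f TB usable"
        % (usable_tb(nodes + 1) - usable_tb(nodes)))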