> It seems all the big vendors feel 2x is safe with NVMe but
> I get the feeling this community feels otherwise
Definitely!
As someone who works for a big vendor (and I have since I worked at
Fusion-IO way back in the old days), IMO the correct way to phrase
this would probably be that "someone i
I have just one more suggestion for you:
> but even our Supermicro contact that we worked the
> config out with was in agreement with 2x on NVMe
These kinds of settings aren't set in stone; it is a one-line command
to change them and let the cluster rebalance (admittedly, you wouldn't
want to just do this casually) -- see the sketch below.
I don't kno
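For reference, the kind of one-liner I mean -- a sketch assuming a
replicated pool named "mypool" (the name is just a placeholder):

ceph osd pool set mypool size 3
ceph osd pool set mypool min_size 2

Ceph starts backfilling to the new replica count on its own, which is
exactly why you wouldn't want to fire this off casually on a busy cluster.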
> Redhat/Micron/Samsung/Supermicro have all put out white papers backing the
> idea of 2 copies on NVMe's as safe for production.
It's not like you can just jump from "unsafe" to "safe" -- it is about
comparing the probability of losing data against how valuable that
data is.
A vendor's decision
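For what it's worth, the back-of-envelope version of that comparison,
with numbers invented purely for illustration: with 2 copies you lose
data if a second OSD that shares PGs with a failed one dies before
recovery finishes. Assume a 2% annual failure rate, a 1-hour recovery
window, and ~100 peer OSDs:

P(second failure during recovery) ~ 100 * 0.02 * (1 / 8760) ~ 0.0002

That's roughly 1 in 4,000-5,000 per failure event; multiply by the
failures you expect per year, then weigh the result against what the
data is worth and what 3x costs you.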
> I am interested in benchmarking the cluster.
dstat is great, but can you send an example of the output of this
command on your OSD machine: iostat -mtxy 1
This will also show some basic CPU info and more detailed analysis of
the I/O pattern.
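If the box has a lot of drives, you can also point it at just the OSD
devices -- the device names here are only examples:

iostat -mtxy nvme0n1 nvme1n1 1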
What kind of drives are you using? Random access can be very slo
> One nvme sudden crash again. Could anyone please help shed some light here?
It looks like a flaky NVMe drive. Do you have a spare to try?
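Before swapping it, it may also be worth pulling the drive's own
counters -- something like this, assuming nvme-cli and smartmontools
are installed (the device path is just an example):

nvme smart-log /dev/nvme0
nvme error-log /dev/nvme0
smartctl -a /dev/nvme0

Media errors, a growing error-log count, or heavy thermal throttling
would point at the drive rather than Ceph.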
On Mon, Feb 22, 2021 at 1:56 AM zxcs wrote:
>
> One nvme sudden crash again. Could anyone please help shed some light here?
> Thank a ton!!!
> Below ar
> It’s very strange, the load not very high at that time. and both ssd and
> nvme seems healthy.
>
> If cannot fix it. I am afraid I need to setup more nodes and set out remove
> these OSDs which using this Nvme?
>
> Thanks,
> zx
>
>
> > On 2021
I just did this recently. The only painful part was using
"monmaptool" to change the monitor IP addresses on disk. Once you do
that, and change the monitor IPs in ceph.conf everywhere, it should
come up just fine.
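For reference, the rough shape of it -- mon ID "a" and the new address
are placeholders; do it with the monitor stopped, and keep a copy of
the original map:

ceph-mon -i a --extract-monmap /tmp/monmap
monmaptool --print /tmp/monmap
monmaptool --rm a /tmp/monmap
monmaptool --add a 192.0.2.10:6789 /tmp/monmap
ceph-mon -i a --inject-monmap /tmp/monmap

Then update mon_host in ceph.conf everywhere and start the monitor again.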
Mark
On Tue, Apr 6, 2021 at 8:08 AM Jean-Marc FONTANA
wrote:
>
> Hello everyon
> One server has LSI SAS3008 [0] instead of the Perc H800,
> which comes with 512MB RAM + BBU. On most servers latencies are around
> 4-12ms (average 6ms), on the system with the LSI controller we see
> 20-60ms (average 30ms) latency.
Are these reads, writes, or a mixed workload? I would expect a
use MegaRAID-based controllers (such as the H800).
Good luck,
Mark
On Tue, Apr 20, 2021 at 2:28 PM Nico Schottelius
wrote:
>
>
> Mark Lehrer writes:
>
> >> One server has LSI SAS3008 [0] instead of the Perc H800,
> >> which comes with 512MB RAM + BBU. On most servers la
Can you collect the output of this command on all 4 servers while your
test is running:
iostat -mtxy 1
This should show how busy the CPUs are as well as how busy each drive is.
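One way to capture it is to let it run for the length of the test and
dump to a file per host -- the 30-sample count and filename are just
suggestions:

iostat -mtxy 1 30 > iostat-$(hostname).log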
On Thu, Apr 29, 2021 at 7:52 AM Schmid, Michael
wrote:
>
> Hello folks,
>
> I am new to ceph and at the moment I am d
I've had good luck with the Ubuntu LTS releases - no need to add extra
repos. 20.04 uses Octopus.
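For what it's worth, on a stock 20.04 box the install is just the
distro packages -- a sketch, assuming you want the cephadm route:

sudo apt update
sudo apt install cephadm ceph-common
apt policy ceph-common   # on 20.04 this should show a 15.2.x (Octopus) build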
On Fri, Apr 30, 2021 at 1:14 PM Peter Childs wrote:
>
> I'm trying to set up a new ceph cluster, and I've hit a bit of a blank.
>
> I started off with centos7 and cephadm. Worked fine to a point, ex
I've been using MySQL on Ceph forever, and have been down this road
before but it's been a couple of years so I wanted to see if there is
anything new here.
So the TL;DR version of this email - is there a good way to improve
16K write IOPS with a small number of threads? The OSDs themselves
are i
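For reference, the kind of run I'm talking about -- the pool name and
thread count are placeholders, not my actual setup:

rados bench -p testpool 60 write -b 16384 -t 4

With -t in the single digits this exposes per-op latency rather than
aggregate throughput, which is much closer to what a single database
instance actually generates.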
i wrote:
>
> Please describe:
>
> * server RAM and CPU
> * osd_memory_target
> * OSD drive model
>
> > On Jun 7, 2024, at 11:32, Mark Lehrer wrote:
> >
> > I've been using MySQL on Ceph forever, and have been down this road
> > before but it's bee
> Not the most helpful response, but on a (admittedly well-tuned)
Actually this was the most helpful since you ran the same rados bench
command. I'm trying to stay away from rbd & qemu issues and just test
rados bench on a non-virtualized client.
I have a test instance with newer drives, CPUs, and Ce
>
> Eh? cf. Mark and Dan's 1TB/s presentation.
>
> On Jun 10, 2024, at 13:58, Mark Lehrer wrote:
>
> It
> seems like Ceph still hasn't adjusted to SSD performance.
>
>
> I'm reading and trying to figure out how crazy
> is using Ceph for all of the above targets [MySQL]
Not crazy at all; it just depends on your performance needs. 16K I/O
is not the best Ceph use case, but the snapshot/qcow2 features may
justify it.
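A sketch of the snapshot/clone workflow I have in mind, with
hypothetical pool and image names:

rbd snap create rbd/mysql-vol@pre-upgrade
rbd snap protect rbd/mysql-vol@pre-upgrade
rbd clone rbd/mysql-vol@pre-upgrade rbd/mysql-test

Getting a writable copy of a large database in seconds, without
touching the primary, can outweigh the raw 16K IOPS numbers.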
The biggest problem I have with MySQL is that
Fabian said:
> The output of "ceph osd pool stats" shows ~100 op/s, but our disks are doing:
What does the iostat output look like on the old cluster?
Thanks,
Mark
On Mon, Feb 24, 2020 at 11:02 AM Fabian Zimmermann wrote:
>
> Hi,
>
> we currently creating a new cluster. This cluster is (as far
>> fio test on local disk(NVME) and ceph rbd.
I would suggest trying rados bench as well. This will show the basic
Ceph object performance level. If you have good rados performance
with a 1M object size (which is usually the case in my experience),
then you can look at what is happening at the R
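Something along these lines -- the pool name is a placeholder, and
--no-cleanup keeps the objects around for the read pass:

rados bench -p testpool 30 write -b 1048576 -t 16 --no-cleanup
rados bench -p testpool 30 seq -t 16
rados -p testpool cleanup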