This may be the explanation:
https://serverfault.com/questions/857271/better-performance-when-hdd-write-cache-is-disabled-hgst-ultrastar-7k6000-and
Other manufacturers may have started to do the same, I suppose.
--
With best regards,
Vitaliy Filippov
Looks like it, as the Toshiba drives I use seem to have their own version of
that.
That would explain the same kind of results.
On Tue, 13 Nov 2018 at 4:26 PM, Виталий Филиппов wrote:
> This may be the explanation:
>
>
> https://serverfault.com/questions/857271/better-performance-when-hdd-write-c
I read the whole thread and it looks like the write cache should always be
disabled, as in the worst case the performance is the same(?).
This is based on that discussion.
I will test some WD4002FYYZ drives, which don't mention a "media cache".
Kevin
On Tue, 13 Nov 2018 at 09:27, Виталий Фили
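In case it helps with the WD test mentioned above, a quick way to toggle a
drive's volatile write cache for an A/B comparison is sketched below (device
name is a placeholder, and this is only an illustration, not a recommendation):
# disable the on-disk volatile write cache on a SATA drive
hdparm -W 0 /dev/sdX
# re-enable it
hdparm -W 1 /dev/sdX
# SAS equivalent via mode pages
sdparm --clear=WCE /dev/sdX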
As I am not sure how to correctly use tracker.ceph.com, I'll post my report here:
Using the dashboard to delete an RBD image via the GUI throws an error when the
image name ends with a whitespace (a user input error led to this situation).
Editing this image via the dashboard also throws an error.
Deleting
Hi Brendan,
in fact you can alter the RocksDB settings by using the
bluestore_rocksdb_options config parameter, and hence change
"max_bytes_for_level_base" and others.
Not sure about dynamic level sizing though.
Current defaults are:
"compression=kNoCompression,max_write_buffer_number=4,min_write_b
Hi
I remember that there was a bug when using cephfs after
upgrading ceph from L to M. Is that bug fixed now?
On 11/13/18 12:49 PM, Zhenshi Zhou wrote:
> Hi
>
> I remember that there was a bug when using cephfs after
> upgrading ceph from L to M. Is that bug fixed now?
>
No. 13.2.2 still has this bug, you will need to wait for 13.2.3 before
upgrading if you use CephFS.
Wido
Hi,
I've been trying to set up an active-active (or even active-passive) NFS
share for a while without any success.
I'm using Mimic 13.2.2 and nfs-ganesha 2.8 with rados_cluster as the recovery
mechanism.
I focused on corosync/pacemaker as the HA controlling software, but I would
not mind using anything else.
Hello,
what do you think about this Supermicro server:
http://www.supermicro.com/products/system/1U/5019/SSG-5019D8-TR12P.cfm ?
We are considering about eight or ten servers, each with twelve 10 TB SATA
drives, one M.2 SSD and 64 GB RAM. Public and cluster network will be
10 Gbit/s. The question is if
Not sure about the CPU, but I would definitely suggest more than 64 GB of RAM.
With the next release of Mimic the default memory target will be set to 4 GB
per OSD (if I am correct); this only covers the BlueStore layer, so I'd easily
expect to see you getting close to 64 GB after OS caches etc., and the
l
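If your release already has the osd_memory_target option referred to above,
a minimal ceph.conf sketch for pinning it explicitly would be (value is only
illustrative):
[osd]
# roughly 4 GiB of RAM budget per OSD daemon
osd_memory_target = 4294967296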
Hi,
The server supports up to 128 GB RAM, so upgrading the RAM will not be a problem.
The storage will be used for storing data from microscopes. Users will
download data from the storage to a local PC, make some changes and then
upload the data back to the storage. We want to use the cluster for direct
compu
Hello,
we believe 1 core or thread per OSD + 2-4 for the OS and other services
is enough for most use cases, so yes. The same goes for 64 GB RAM; we
suggest ~4 GB per OSD (12*4 = 48 GB), so 16 GB for Linux is more
than enough. Buy good drives (SSD & HDD) to prevent performance
issues.
--
Martin Verges
I’d say those CPUs should be more than fine for your use case and
requirements then.
You have more than one thread per OSD, which seems to be the ongoing
recommendation.
On Tue, 13 Nov 2018 at 10:12 PM, Michal Zacek wrote:
> Hi,
>
> The server support up to 128GB RAM, so upgrade RAM will not
I'd ensure the IO performance you expect can be achieved. If your scopes
create tons of small files, you may have a problem. You mentioned 10TB/day.
But what is the scope's expectation with regard to dumping the data to
network storage? For instance, does the scope function normally while it is
tr
Hi,
On my CephFS production cluster (Luminous 12.2.8), I would like to add a
CephFS client on a server installed with Debian Buster (the testing release).
But the Ceph packages proposed by default in this release are still Jewel:
# cat /etc/debian_version
buster/sid
# apt search ceph-common
Sor
Hello,
unfortunately there is no such deb package at the moment.
However, you could extract the sbin/mount.ceph command from the desired
version and copy the file into your Debian Buster installation. After
that you should be able to use the CephFS kernel client from Debian
Buster.
I tried it on
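A rough sketch of that extraction step, assuming you have fetched a newer
ceph-common .deb from somewhere (the file name below is just a placeholder):
# unpack the package without installing it
dpkg-deb -x ceph-common_13.2.2-1bionic_amd64.deb /tmp/ceph-common
# copy only the mount helper into place
sudo cp /tmp/ceph-common/sbin/mount.ceph /sbin/mount.ceph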
Hello again,
maybe another hint: if you want to mount CephFS without
modifying your system, you could also do the "trick" that mount.ceph does by hand.
=
echo "XXX" | base64 --decode | keyctl padd ceph client.admin @u
mount -t ceph X.X.X.X:/ /mnt/ -o name=admin,key=client.admin
Use the Ubuntu bionic repository; Mimic installs without problems from there.
You can also build it yourself: all you need is to install gcc-7 and the
other build dependencies, git clone, check out 13.2.2 and say
`dpkg-buildpackage -j4`.
It takes some time, but overall it builds without issues, except
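Roughly, those build steps could look like this (a sketch, adjust versions
and flags as needed):
git clone https://github.com/ceph/ceph.git
cd ceph
git checkout v13.2.2
git submodule update --init --recursive
./install-deps.sh              # pulls in gcc-7 and the other build dependencies
dpkg-buildpackage -j4 -us -uc  # -us -uc skips package signing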
https://techcrunch.com/2018/11/12/the-ceph-storage-project-gets-a-dedicated-open-source-foundation/
What does this mean for:
1. Governance
2. Development
3. Community
Forgive me if I’ve missed the discussion previously on this list.
Or is it possible to mount one OSD directly for read file access?
v
On Sun, Nov 11, 2018 at 1:47 PM Vlad Kopylov wrote:
> Maybe it is possible if done via gateway-nfs export?
> Settings for gateway allow read osd selection?
>
> v
>
> On Sun, Nov 11, 2018 at 1:01 AM Martin Verges
> wrote:
>
>>
Hi Vlad,
No need for a specific CRUSH map configuration. I’d suggest you use the
primary-affinity setting on the OSD so that only the OSDs that are close to
your read point are selected as primary.
See https://ceph.com/geen-categorie/ceph-primary-affinity/ for information
Just set the prim
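For illustration, adjusting primary affinity for a single OSD looks like this
(osd.1 is a placeholder for an OSD far from the read point):
# 1.0 is the default; 0 means "avoid picking this OSD as primary where possible"
ceph osd primary-affinity osd.1 0
# the current primary-affinity values are visible in the output of `ceph osd tree`
ceph osd tree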
Hi Martin,
Thank you a lot, this solution works perfectly!
rv
On 13/11/2018 at 18:07, Martin Verges wrote:
Hello,
unfortunately there is no such deb package at the moment.
However you could extract the sbin/mount.ceph command from the desired
version and copy the file into your debian bust
Hi,
OK, I hadn't realised that the Ubuntu packages were still so close to
Debian! The solution given by Martin works and my issue is solved, but
I'll keep this option as an alternative...
Thanks,
rv
On 13/11/2018 at 18:30, vita...@yourcmc.ru wrote:
Use Ubuntu bionic repository, Mimic inst
Hi Igor,
Thank you for that information. This means I would have to reduce the
"write_buffer_size" in order to reduce the L0 size in addition to reducing
"max_bytes_for_level_base" to make the L1 size match.
Does anyone on the list have experience making these kinds of modifications?
Or bett
Each of the 3 clients from different buildings picks the same
primary OSDs, and everything is slow on at least two of them.
Instead of just reading from their local OSDs, they read mostly from
the primaries.
*What I need is something like primary affinity per client connection*
ID CLASS WEIGHT TYP
Hi,
This was indeed the trick. It has cost me a few years of my life due to the
raised blood pressure. :)
Should this be documented somewhere? That even though the cluster does not seem
to be recovering as it should, you should just continue to restart OSDs and
run 'ceph osd require-osd-release lum
Hi all,
We want to compare the performance of an HDD partition as the journal (inline
on the OSD disk) versus an SSD partition as the journal. Here is what we have
done: we have 3 nodes used as Ceph OSD hosts, each with 3 OSDs on it. First, we
created the OSDs with the journal on an OSD-disk partition and ran "rado
Only certain SSDs are good for Ceph journals, as can be seen at
https://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/
The SSD you're using isn't listed, but doing a quick search online it appears
to be an SSD designed for read workloads, as an "upgrade
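If I remember that post correctly, the test boils down to small synchronous
writes with fio; a sketch of such a check is below (destructive, /dev/sdX is
a placeholder for an empty SSD):
fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
    --numjobs=1 --iodepth=1 --runtime=60 --time_based \
    --group_reporting --name=journal-test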
Thanks Merrick!
I checked the Intel spec [1]; the performance Intel quotes is:
• Sequential Read (up to) 500 MB/s
• Sequential Write (up to) 330 MB/s
• Random Read (100% Span) 72000 IOPS
• Random Write (100% Span) 2 IOPS
I think these indicators should be much better than a general HDD, and
Well, as you mentioned journals I guess you were using FileStore in your test?
You could go down the route of BlueStore and put the WAL + DB onto the SSD
and the BlueStore data onto the HDD; you should notice an increase in
performance over both methods you have tried on FileStore.
On Wed, Nov 14, 2
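A minimal sketch of creating such an OSD with ceph-volume (Luminous or newer;
device names are placeholders):
# the HDD carries the BlueStore data, an SSD partition carries the RocksDB DB
# (and, by default, the WAL as well)
ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/sdc1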
Or is it possible to mount one OSD directly for read file access?
In Ceph it is impossible to do I/O directly to an OSD, only to a PG.
k
Please never use the datasheet values to select your SSD. We never had a
single one that delivered the quoted performance in a Ceph journal use
case.
However, do not use FileStore anymore, especially with newer kernel
versions. Use BlueStore instead.
--
Martin Verges
Managing director
Mobile: +
Thanks Merrick!
I haven't tried BlueStore, but I believe what you said. I tried again with
"rbd bench-write" on FileStore; the result shows more than a 50% performance
increase with the SSD as the journal, so I still cannot understand why
"rados bench" doesn't show any difference, what's
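For reference, the kind of rbd bench-write invocation meant above might look
like this (pool/image names and sizes are placeholders):
# 4 KiB IOs, 16 threads, 1 GiB total
rbd bench-write testpool/testimage --io-size 4096 --io-threads 16 --io-total 1073741824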
Thanks for the reply Wido. I will delay the upgrading plan.
On Tue, 13 Nov 2018 at 22:35, Wido den Hollander wrote:
>
>
> On 11/13/18 12:49 PM, Zhenshi Zhou wrote:
> > Hi
> >
> > I remember that there was a bug when using cephfs after
> > upgrading ceph from L to M. Is that bug fixed now?
> >
>
> No. 13.2.
Thanks Martin for your suggestion!
I will definitely try BlueStore later. The version of Ceph I am using is
v10.2.10 Jewel; do you think it's stable enough to use BlueStore with Jewel, or
should I upgrade Ceph to Luminous?
Best Regards,
Dave Chen
From: Martin Verges
Sent: Wednesday, November 14
Hi Dave,
The SSD journal will help boost IOPS & latency, which will be more
apparent for small block sizes. The rados benchmark default block size
is 4M; use the -b option to specify the size. Try 4k, 32k, 64k ...
As a side note, this is a RADOS-level test, so the rbd image size is not
releva
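As a concrete illustration of the -b suggestion (pool name is a placeholder):
# 4 KiB writes, 16 concurrent ops, 60 seconds; repeat with -b 32768, -b 65536, ...
rados bench -p testpool 60 write -b 4096 -t 16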