Hi.
On 02.08.2017 17:43, Ronald Klop wrote:
On Fri, 28 Jul 2017 12:56:11 +0200, Eugene M. Zheganin
<e...@norma.perm.ru> wrote:
Hi,
I'm using several FreeBSD ZFS installations as iSCSI production
systems; they basically consist of an LSI HBA and a JBOD with a
bunch of SSD disks (12-24; Intel, Toshiba or SanDisk (avoid SanDisk,
btw)). And I observe a problem very often: gstat shows 20-30% of disk
load, but the system reacts very slowly: cloning a dataset takes 10
seconds, and similar operations aren't exactly light-speed either. To my
knowledge, this shouldn't happen until the disks are 90-100% busy.
My systems are equipped with 32-64 gigs of RAM, and the only tuning I
use is limiting the ARC size (in a very gentle manner - to at least
16 gigs) and playing with TRIM. The number of datasets is quite high
- hundreds of clones, dozens of snapshots; most of the data objects
are zvols. The pools aren't overfilled; most are filled up to 60-70% (no
questions about low-space pools, but even in that case the situation
is clearer - %busy goes up into the sky).
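For reference, the ARC/TRIM tuning mentioned above boils down to a
couple of loader tunables; a minimal sketch, with illustrative values
rather than the actual settings from these machines:

    # /boot/loader.conf -- cap the ARC so it leaves RAM for everything else
    # (illustrative value; the right cap depends on the machine's RAM)
    vfs.zfs.arc_max="16G"
    # TRIM knobs commonly played with on SSD pools; the values shown
    # are illustrative, not recommendations
    vfs.zfs.trim.enabled=1
    vfs.zfs.vdev.trim_max_active=64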
So, my question is: is there some obvious ZFS tuning not mentioned
in the Handbook? On the other hand, the Handbook isn't very clear on
how to tune ZFS; it's written mostly in the manner of "these are the
sysctl OIDs you can play with". Of course I have seen several ZFS
tuning guides, like the OpenSolaris one, but they are mostly file- and
application-specific. Is there some special approach to tuning ZFS in
an environment with loads of disks? I don't know... like tuning
the vdev cache or something similar?
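For what it's worth, the vdev cache is controlled by a handful of
loader tunables on FreeBSD and has been disabled by default for years;
a sketch of the relevant knobs (values shown are the usual defaults -
check sysctl -d vfs.zfs.vdev.cache on the box to be sure):

    # vdev read-ahead cache (loader.conf); size=0 means disabled (the default)
    vfs.zfs.vdev.cache.size=0
    # reads smaller than this get inflated to a bshift-sized block
    vfs.zfs.vdev.cache.max=16384
    vfs.zfs.vdev.cache.bshift=16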
What version of FreeBSD are you running?
Well, different ones. Mostly some versions of 11.0-RELEASE-pX and 11-STABLE.
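The exact kernel and userland revisions, if needed, can be pulled with
the stock tools:

    # kernel and userland patch levels, plus the full kernel ident
    freebsd-version -ku
    uname -a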
What is the system doing during all this?
What do you mean by "what"? Nothing else except serving iSCSI - that's
the main purpose of every one of these servers.
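For the record, the stock tools that show what a box like this is
doing while it feels slow (nothing here is specific to this setup):

    # per-thread CPU view -- is it computing, or waiting on something?
    top -SH
    # per-disk latency and queue depth
    gstat -p
    # per-vdev bandwidth and IOPS for all pools, 1-second intervals
    zpool iostat -v 1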
How are your pools set up (raidz1/2/3, mirror, 3-way mirror)?
zroot is a mirrored two-disk pool; the others are raidz, mostly spans
of multiple 5-disk raidzs.
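To illustrate that layout, such a pool would have been created along
these lines (the pool and disk names here are made up):

    # hypothetical example: one pool spanning two 5-disk raidz vdevs;
    # ZFS stripes writes across the top-level vdevs
    zpool create tank \
        raidz da0 da1 da2 da3 da4 \
        raidz da5 da6 da7 da8 da9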
How is your iSCSI configured and what are the clients doing with it?
Using the kernel ctld, of course. As you may know, ctl.conf doesn't
provide any performance tweaks; it's just a way of organizing the
authorization layer. The clients are VMware ESX hypervisors, using
iSCSI as disk devices - as ESX SRs, and as direct iSCSI disks in
Windows VMs.
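To illustrate the point, a minimal ctl.conf looks like this - targets,
LUNs and auth, nothing performance-related (the IQN, portal and zvol
path below are made up):

    portal-group pg0 {
        discovery-auth-group no-authentication
        listen 0.0.0.0
    }

    target iqn.2017-08.ru.example:target0 {
        auth-group no-authentication
        portal-group pg0
        lun 0 {
            # a zvol exposed as a block-backed LUN
            path /dev/zvol/tank/vol0
        }
    }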
Is the data distributed evenly on all disks?
It's not. Does it ever distribute evenly anywhere?
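Though the spread is easy enough to check per vdev:

    # capacity and allocation broken down per top-level vdev and disk
    zpool list -v
    # cumulative I/O statistics per vdev
    zpool iostat -v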
Do the clients write a lot of sync data?
What exactly do you mean by "sync data"?
Eugene.