FreeBSD_STABLE_10-i386 - Build #664 - Still Failing:
Build information: https://jenkins.FreeBSD.org/job/FreeBSD_STABLE_10-i386/664/
Full change log:
https://jenkins.FreeBSD.org/job/FreeBSD_STABLE_10-i386/664/changes
Full build log:
https://jenkins.FreeBSD.org/job/FreeBSD_STABLE_10-i386/664/conso
FreeBSD_STABLE_10-i386 - Build #665 - Still Failing:
Build information: https://jenkins.FreeBSD.org/job/FreeBSD_STABLE_10-i386/665/
Full change log:
https://jenkins.FreeBSD.org/job/FreeBSD_STABLE_10-i386/665/changes
Full build log:
https://jenkins.FreeBSD.org/job/FreeBSD_STABLE_10-i386/665/conso
Hi all,
Please feel free to direct me to a list that is more suitable.
We are trying to set up a fileserver solution for a web application that we
are building. This fileserver is running FreeBSD 10.2 and ZFS. Files are
written over CIFS with Samba running on the fileserver host.
However, we are
FreeBSD_STABLE_9-i386 - Build #238 - Still Failing:
Build information: https://jenkins.FreeBSD.org/job/FreeBSD_STABLE_9-i386/238/
Full change log:
https://jenkins.FreeBSD.org/job/FreeBSD_STABLE_9-i386/238/changes
Full build log:
https://jenkins.FreeBSD.org/job/FreeBSD_STABLE_9-i386/238/console
FreeBSD_STABLE_10-i386 - Build #666 - Still Failing:
Build information: https://jenkins.FreeBSD.org/job/FreeBSD_STABLE_10-i386/666/
Full change log:
https://jenkins.FreeBSD.org/job/FreeBSD_STABLE_10-i386/666/changes
Full build log:
https://jenkins.FreeBSD.org/job/FreeBSD_STABLE_10-i386/666/conso
make sure atime is off on the filesystem, for starters
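On ZFS that is a per-dataset property; a quick sketch (the dataset name `tank/files` below is a placeholder for whatever dataset Samba exports):

```shell
# Disable atime updates on the dataset serving the files
# (dataset name is a placeholder, adjust to your pool/dataset):
zfs set atime=off tank/files
# Confirm the property took effect:
zfs get atime tank/files
```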
On 24 November 2015 at 14:00, Albert Cervin wrote:
> Hi all,
>
> Please feel free to direct me to a list that is more suitable.
>
> We are trying to set up a fileserver solution for a web application that we
> are building. This fileserver is
On Tue, Nov 24, 2015 at 8:00 AM, Albert Cervin wrote:
> Hi all,
>
> Please feel free to direct me to a list that is more suitable.
>
> We are trying to set up a fileserver solution for a web application that we
> are building. This fileserver is running FreeBSD 10.2 and ZFS. Files are
> written o
On 11/24/2015 9:00 AM, Albert Cervin wrote:
> However, we are seeing an exponential decrease in performance to write to
> the file server when the number of files in the directory grows (when it
> goes up to ~6000 files it becomes unusable and the write time has gone from
> a fraction of a second t
Thanks!
"I should hope not. ext4 vs zfs comparison isn't fair for either."
I do realize that comparing ext4 and ZFS does not prove much in itself,
but it does tell us one thing: ext4 would work for our use case whereas
ZFS would not, which was unexpected, at least to me.
vfs.zfs.txg.timeout is alre
On 11/24/2015 10:26 AM, Albert Cervin wrote:
> vfs.zfs.txg.timeout is already verified to be 5 (the default). I have
> also turned off atime and vfs.zfs.arc_meta_limit is 1287906304.
>
> "Do you have any memory pressures on your server ? Have a look at this
> thread"
>
> The server has 4 cores
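The tunables mentioned above can be verified from the command line; for example (FreeBSD 10.x sysctl names, dataset name is a placeholder):

```shell
# Confirm the transaction-group commit interval (default is 5 seconds):
sysctl vfs.zfs.txg.timeout
# And confirm atime really is off on the exported dataset:
zfs get atime tank/files
```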
"8G is not that much really. In the thread they suggested increasing the
meta limit so that the giant directory can fit into cache."
It is not really short on RAM, judging from the usage, though. Sure,
8 GB is not that much, but on the other hand, neither is 8000 files in
one directory, in my opinion.
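Whether the metadata limit is actually the constraint can be checked against the ARC counters before raising it; a sketch using FreeBSD 10.x sysctl names (the value below is an example, not a recommendation):

```shell
# Compare current ARC metadata usage against the configured limit:
sysctl kstat.zfs.misc.arcstats.arc_meta_used
sysctl vfs.zfs.arc_meta_limit
# If usage sits at the limit, raise it at runtime (example value, ~2 GB),
# assuming the sysctl is writable on your release:
sysctl vfs.zfs.arc_meta_limit=2147483648
# Or persist it across reboots via /boot/loader.conf:
#   vfs.zfs.arc_meta_limit="2147483648"
```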
Maybe patch around this change
http://sourceforge.net/p/sshpass/code/48/tree//trunk/main.c?diff=5182883de88f3d77deda7b5c:47
similarly to what was done in the Salt issue
https://github.com/saltstack/salt/pull/22120/files
--Nikolay
On Tue, Nov 24, 2015 at 1:05 AM, Adam Vande More wrote:
> sshpass
FreeBSD_STABLE_10-i386 - Build #667 - Still Failing:
Build information: https://jenkins.FreeBSD.org/job/FreeBSD_STABLE_10-i386/667/
Full change log:
https://jenkins.FreeBSD.org/job/FreeBSD_STABLE_10-i386/667/changes
Full build log:
https://jenkins.FreeBSD.org/job/FreeBSD_STABLE_10-i386/667/conso
On Tue, 24 Nov 2015 17:11:54 +0100 Albert Cervin
wrote about Re: ZFS - poor performance with "large" directories:
AC> Will try a bit with the meta limit.
You can also put metadata on a flash device to speed things up. To
check if this is really the bottleneck in your case, something simple like
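The message breaks off here, but a minimal micro-benchmark along those lines might look like the sketch below: create files in batches and watch whether the per-batch time grows with directory size. `TESTDIR` is a placeholder; point it at the ZFS dataset under test.

```shell
#!/bin/sh
# Micro-benchmark sketch: time file creation as a directory fills up.
# TESTDIR is a placeholder; set it to a path on the dataset under test.
TESTDIR=${TESTDIR:-/tmp/zfs-create-bench}
mkdir -p "$TESTDIR"
for batch in 1 2 3 4; do
    start=$(date +%s)
    i=0
    while [ "$i" -lt 2000 ]; do
        # Create an empty file; creation cost is what we are measuring.
        : > "$TESTDIR/file_${batch}_${i}"
        i=$((i + 1))
    done
    end=$(date +%s)
    echo "batch $batch: $((end - start))s for 2000 creates ($((batch * 2000)) files total)"
done
```

If the reported time per batch climbs sharply as the directory grows, the directory itself (rather than Samba or the network) is the likely bottleneck.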