We have a zpool made of 4 512GB iSCSI LUNs located on a network appliance.
We are seeing poor read performance from the ZFS pool.
The release of Solaris we are using is:
Solaris 10 10/09 s10s_u8wos_08a SPARC
The server itself is a T2000.
I was wondering how we can tell if the zfs_vdev_max_pending setting needs tuning.
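For reference, a rough sketch of how one might inspect and change that tunable on Solaris 10, assuming root access and that mdb -k works on this kernel; the value 10 below is only an example, not a recommendation.

To read the current value on the live kernel:
# echo "zfs_vdev_max_pending/D" | mdb -k
To change it without a reboot (example value only):
# echo "zfs_vdev_max_pending/W0t10" | mdb -kw
To make a change persistent, add a line like this to /etc/system and reboot:
set zfs:zfs_vdev_max_pending = 10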
> Has anyone seen files created on a Linux client with negative or zero
> creation timestamps on zfs+nfs exported datasets?
We are less likely to experience many of these problems in an Enterprise-class
data center, but I still don't look forward to having to deal with the
consequences of encountering these types of problems.
Maybe ZFS is not ready to be considered a general purpose filesystem.
--
Ed Spencer
> ONLINE 0 0 1 43K resilvered
>
> errors: No known data errors
>
>
> A checksum error on one of the other disks! Thank god I went with raid-z2.
>
> Ross
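For what it's worth, the usual follow-up after a resilver turns up a checksum
error is roughly the sequence below; "tank" and the device name are placeholders.

# zpool status -xv tank      (identify the affected device and any damaged files)
# zpool clear tank c2t1d0    (reset the error counters once the cause is understood)
# zpool scrub tank           (re-read and verify every block against its checksum)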
Let me give a real-life example of what I believe is a fragmented ZFS pool.
Currently the pool is 2 terabytes in size (55% used) and is made of 4 SAN LUNs
(512GB each).
The pool has never gotten close to being full. We increase the size of the pool
by adding two 512GB LUNs about once a year or so.
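Growing it that way is just a zpool add of the new LUNs once the appliance
presents them; a sketch, with placeholder pool and device names. Keep in mind
that on Solaris 10 a zpool add is one-way: a plain vdev cannot be removed later.

# zpool add mailpool c4t0d0 c4t1d0
# zpool list mailpool        (confirm the new capacity)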
On Fri, 2009-08-07 at 19:33, Richard Elling wrote:
> This is very unlikely to be a "fragmentation problem." It is a
> scalability problem
> and there may be something you can do about it in the short term.
You could be right.
Our test mail server consists of the exact same design and the same hardware.
On Sat, 2009-08-08 at 09:17, Bob Friesenhahn wrote:
> Many of us here already tested our own systems and found that under
> some conditions ZFS was offering up only 30MB/second for bulk data
> reads regardless of how exotic our storage pool and hardware was.
Just so we are using the same units:
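To get a directly comparable bulk-read MB/s figure, something along these lines
should do; the file path is a placeholder, and the file needs to be much larger
than RAM so the ARC cannot satisfy the reads from cache:

# /usr/bin/time dd if=/mailpool/fs1/bigfile of=/dev/null bs=128k

Divide the file size in megabytes by the elapsed ("real") seconds to get MB/s.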
On Sat, 2009-08-08 at 08:14, Mattias Pantzare wrote:
> Your scalability problem may be in your backup solution.
We've eliminated the backup system as being involved with the
performance issues.
The servers run Solaris 10 with the OS on UFS filesystems. (In ZFS
terms, the pool is old/mature.)
On Sat, 2009-08-08 at 09:17, Bob Friesenhahn wrote:
> Enterprise storage should work fine without needing to run a tool to
> optimize data layout or repair the filesystem. Well designed software
> uses an approach which does not unravel through use.
Hmm, this is counter to my understanding.
On Sat, 2009-08-08 at 15:12, Mike Gerdts wrote:
> The DBA's that I know use files that are at least hundreds of
> megabytes in size. Your problem is very different.
Yes, definitely.
I'm relating records in a table to my small files because our email
system treats the filesystem as a database.
On Sat, 2009-08-08 at 15:20, Bob Friesenhahn wrote:
> A SSD slog backed by a SAS 15K JBOD array should perform much better
> than a big iSCSI LUN.
Now... yes. We implemented this pool years ago. I believe that, back then, the
server would crash if a ZFS drive failed. We decided to let the
NetApp handle the redundancy.
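If we were building it today, adding a separate log device is a one-liner; a
sketch, with placeholder pool and device names (the SSD would ideally be mirrored):

# zpool add mailpool log c5t0d0
or, mirrored:
# zpool add mailpool log mirror c5t0d0 c5t1d0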
On Sat, 2009-08-08 at 16:09, Mike Gerdts wrote:
> Right... but ZFS doesn't understand your application. The reason that
> a file system would put files that are in the same directory in the
> same general area on a disk is to minimize seek time. I would argue
> that seek time doesn't matter a w
On Sat, 2009-08-08 at 17:25, Mike Gerdts wrote:
> ndd -get /dev/tcp tcp_xmit_hiwat
> ndd -get /dev/tcp tcp_recv_hiwat
> grep tcp-nodelay /kernel/drv/iscsi.conf
# ndd -get /dev/tcp tcp_xmit_hiwat
2097152
# ndd -get /dev/tcp tcp_recv_hiwat
2097152
# grep tcp-nodelay /kernel/drv/iscsi.conf
#
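So tcp-nodelay isn't set on the initiator. If we try it, my understanding is
that it's a one-line property in the initiator's driver.conf, picked up the
next time the driver reads the file (in practice, after a reboot); treat this
as a sketch, since I haven't tested it here:

# echo "tcp-nodelay=1;" >> /kernel/drv/iscsi.conf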
I've come up with a better name for the concept of file and directory
fragmentation: "Filesystem Entropy". Over time, an
active and volatile filesystem moves from an organized state to a
disorganized state, resulting in backup difficulties.
Here are some stats which illustrate the
On Tue, 2009-08-11 at 07:58, Alex Lam S.L. wrote:
> At a first glance, your production server's numbers are looking fairly
> similar to the "small file workload" results of your development
> server.
>
> I thought you were saying that the development server has faster performance?
The developmen
Concurrency/Parallelism testing.
I have 6 different filesystems populated with email data on our mail
development server.
I rebooted the server before beginning the tests.
The server is a T2000 (sun4v) machine, so it's ideally suited for this
type of testing.
The test was to tar (to /dev/null) each of the 6 filesystems, roughly as
sketched below.
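A sketch of the parallel run (the paths are placeholders for the six dev
filesystems; a serial baseline is the same loop without the & and the wait):

#!/bin/ksh
# start one tar-to-/dev/null stream per filesystem, all at once
for fs in /mail/d1 /mail/d2 /mail/d3 /mail/d4 /mail/d5 /mail/d6
do
        tar cf /dev/null $fs &
done
wait    # returns once all six streams have finished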
On Tue, 2009-08-11 at 14:56, Scott Lawson wrote:
> > Also, is atime on?
> Turning atime off may make a big difference for you. It certainly does
> for Sun Messaging server.
> Maybe worth doing and reposting result?
Yes. All these results were attained with atime=off. We made that change
on all the filesystems.
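For the record, the change itself is just the following, with a placeholder
dataset name; it takes effect immediately, no remount needed:

# zfs set atime=off mailpool/fs1
# zfs get atime mailpool/fs1      (verify)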
to a sun4v machine.
This architecture is well suited to running more jobs in parallel.
Thanx for all your help and advice.
Ed
On Tue, 2009-08-11 at 22:47, Mike Gerdts wrote:
> On Tue, Aug 11, 2009 at 9:39 AM, Ed Spencer wrote:
> > We back up 2 filesystems on Tuesday, 2 filesystems on Thursday, and