Of course, Nexenta OS is a build of Ubuntu on an OpenSolaris kernel.
On Jun 26, 2010, at 12:27 AM, Freddie Cash wrote:
> On Sat, Jun 26, 2010 at 12:20 AM, Ben Miles wrote:
>> What supporting applications are there on Ubuntu for RAIDZ?
>
> None. Ubuntu doesn't officially support ZFS.
>
> Yo
All true. I just saw too many "need Ubuntu and ZFS" posts and thought to state the
obvious, in case the patch set for Nexenta happens to differ enough to provide a
working set. I've had Nexenta succeed where OpenSolaris quarterly releases failed,
and vice versa.
On Jun 27, 2010, at 9:54 PM, Erik Trimble w
I've had this happen to me too. I found some DTrace scripts at the
time that showed the file system was spending too much time
finding available 128k blocks, or the like, as I was nearly full on each
disk, even though combined I still had 140GB left of my 3TB pool. The
SPA code, I believe it was, w
Well, here's my previous off-list summary to different Solaris folk
(regarding NFS serving via ZFS and iSCSI):
I want to use ZFS as a NAS with no bounds on the backing hardware (not
restricted to one box's capacity). Thus, there are two options: FC SAN
or iSCSI. In my case, I have multi-building c
On 1/24/07, Jonathan Edwards <[EMAIL PROTECTED]> wrote:
On Jan 24, 2007, at 09:25, Peter Eriksson wrote:
>> too much of our future roadmap, suffice it to say that one should
>> expect
>> much, much more from Sun in this vein: innovative software and
>> innovative
>> hardware working together to
On 2/1/07, Al Hopper <[EMAIL PROTECTED]> wrote:
On Thu, 1 Feb 2007, Tom Buskey wrote:
> I got an Addonics eSATA card. SATA 3.0. PCI *or* PCI-X. Works right off the
bat w/ 10u3. No firmware update needed. It was $130. But I don't pull out my hair,
and I can use it if I upgrade my server fo
On 2/5/07, Robert Milkowski <[EMAIL PROTECTED]> wrote:
Hello Casper,
Monday, February 5, 2007, 2:32:49 PM, you wrote:
>>Hello zfs-discuss,
>>
>> I've patched a U2 system to 118855-36. Several ZFS-related bug IDs
>> should be covered between -19 and -36, like hot spare support.
>>
>> However desp
So, I'm attempting to find the inode from the result of a "zpool status -v":
errors: The following persistent errors have been detected:
DATASET OBJECT RANGE
cc 21e382 lvl=0 blkid=0
Well, 21e282 appears not to be a valid number for "find . -inum blah".
Any suggestions?
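For reference, a minimal sketch of the lookup that usually works here, assuming the OBJECT column from "zpool status -v" is printed in hex while find(1) expects a decimal inode number (the mountpoint below is a placeholder):

  obj=$(printf '%d' 0x21e382)              # hex object id -> 2220930
  find /pool/dataset -xdev -inum "$obj" -print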
integrated into Nevada build 57.
Jeff
On Sat, Feb 10, 2007 at 05:18:05PM -0800, Joe Little wrote:
> So, I'm attempting to find the inode from the result of a "zpool status -v":
>
> errors: The following persistent errors have been detected:
>
> DATASET OBJECT RANGE
On 2/11/07, Matty <[EMAIL PROTECTED]> wrote:
Howdy,
On one of my Solaris 10 11/06 servers, I am getting numerous errors
similar to the following:
Feb 11 09:30:23 rx scsi: WARNING: /[EMAIL PROTECTED],2000/[EMAIL
PROTECTED],1/[EMAIL PROTECTED],0 (sd1):
Feb 11 09:30:23 rx Error for Command:
On 2/27/07, Eric Haycraft <[EMAIL PROTECTED]> wrote:
I am no scripting pro, but I would imagine it would be fairly simple to create
a script and batch it to make symlinks in all subdirectories.
I've done something similar using NFS aggregation products. The real
problem is when you export, e
On 6/7/07, Al Hopper <[EMAIL PROTECTED]> wrote:
On Wed, 6 Jun 2007, Erast Benson wrote:
> Announcing new direction of Open Source NexentaOS development:
> NexentaCP (Nexenta Core Platform).
>
> NexentaCP is Dapper/LTS-based core Operating System Platform distributed
> as a single-CD ISO, integra
I consider myself an early adopter of ZFS and pushed it hard on this
list and in real life with regard to iSCSI integration, ZFS
performance issues with the latency thereof, and how best to use it with
NFS. Well, I finally get to talk more about the ZFS-based product I've
been beta testing for quite
On 11/2/07, MC <[EMAIL PROTECTED]> wrote:
> > I consider myself an early adopter of ZFS and pushed
> > it hard on this
> > list and in real life with regards to iSCSI
> > integration, zfs
> > performance issues with the latency thereof, and how
> > best to use it with
> > NFS. Well, I finally get to t
On 11/2/07, Rob Logan <[EMAIL PROTECTED]> wrote:
>
> I'm confused by this and NexentaStor... wouldn't it be better
> to use b77? with:
>
> Heads Up: File system framework changes (supplement to CIFS' "head's up")
> Heads Up: Flag Day (Addendum) (CIFS Service)
> Heads Up: Flag Day (CIFS Service)
> c
Not for NexentaStor yet, to my knowledge. I'd like to caution that
the target of the initial product release is digital
archiving/tiering/etc. and is not necessarily primary NAS usage, though
it can be used as such for those so inclined. However, interested
parties should contact them as they fles
I have historically noticed that in ZFS, whenever there is a heavy
writer to a pool via NFS, the reads can be held back (basically paused).
An example is a RAID10 pool of 6 disks, where writing a directory of
files, including some large ones 100+MB in size, can cause other
clients over NFS to pause
NFS. We may have 16, 32 or however many
threads, but if a single writer keeps the ZIL pegged, prohibiting
reads, it's all for nought. Is there any way to tune/configure the
ZFS/NFS combination to balance reads and writes so that one doesn't starve
the other? It's either feast or famine, or so tests have shown.
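A minimal sketch of what was commonly tried for this at the time (device and pool names are placeholders, not from the thread): give the ZIL a dedicated slog so synchronous NFS writes stop competing with reads on the data vdevs, then watch the balance per vdev.

  zpool add tank log c3t0d0       # dedicated log device, e.g. NVRAM or SSD
  zpool iostat -v tank 5          # per-vdev read/write balance, every 5 seconds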
On Nov 16, 2007 9:17 PM, Joe Little <[EMAIL PROTECTED]> wrote:
> On Nov 16, 2007 9:13 PM, Neil Perrin <[EMAIL PROTECTED]> wrote:
> > Joe,
> >
> > I don't think adding a slog helped in this case. In fact I
> > believe it made performance worse. Previou
On Nov 16, 2007 10:41 PM, Neil Perrin <[EMAIL PROTECTED]> wrote:
>
>
> Joe Little wrote:
> > On Nov 16, 2007 9:13 PM, Neil Perrin <[EMAIL PROTECTED]> wrote:
> >> Joe,
> >>
> >> I don't think adding a slog helped in this case. In fact I
On Nov 18, 2007 1:44 PM, Richard Elling <[EMAIL PROTECTED]> wrote:
> one more thing...
>
>
> Joe Little wrote:
> > I have historically noticed that in ZFS, whenever there is a heavy
> > writer to a pool via NFS, the reads can be held back (basically paused).
> &
On Nov 19, 2007 9:41 AM, Roch - PAE <[EMAIL PROTECTED]> wrote:
>
> Neil Perrin writes:
> >
> >
> > Joe Little wrote:
> > > On Nov 16, 2007 9:13 PM, Neil Perrin <[EMAIL PROTECTED]> wrote:
> > >> Joe,
> > >>
> > >&
On Nov 20, 2007 6:34 AM, MC <[EMAIL PROTECTED]> wrote:
> > So there is no current way to specify the creation of
> > a 3 disk raid-z
> > array with a known missing disk?
>
> Can someone answer that? Or does the zpool command NOT accommodate the
> creation of a degraded raidz array?
>
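For what it's worth, the workaround usually described (not a supported zpool feature; device names and the size are placeholders) is to stand in for the missing disk with a sparse file, then take the fake member offline so the pool runs degraded:

  mkfile -n 500g /var/tmp/fakedisk          # sparse, consumes no real space
  zpool create tank raidz c1t1d0 c1t2d0 /var/tmp/fakedisk
  zpool offline tank /var/tmp/fakedisk      # pool is now DEGRADED
  rm /var/tmp/fakedisk
  # later: zpool replace tank /var/tmp/fakedisk c1t3d0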
can't start
I was playing with a Gigabyte i-RAM card and found that it works great
for improving overall performance when there are a lot of writes of small
files over NFS to such a ZFS pool.
However, I noted a recurring situation during periods of long writes of
small files over NFS. Here's a snippet of iostat during
r answer explains why it's
60 seconds or so. What's sad is that this is a ramdisk, so to speak,
albeit connected via SATA-I to the sil3124. Any way to isolate this
further? Any way to limit I/O timeouts to a drive? This is just two
sticks of RAM... ms would be fine :)
> -- richard
>
>
>
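A minimal sketch of the usual knob for this, assuming the ~60 second stalls come from the sd driver's per-command timeout; the value is a placeholder and only takes effect after a reboot:

  echo 'set sd:sd_io_time = 10' >> /etc/system    # default is 60 seconds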
On Nov 26, 2007 7:57 PM, Richard Elling <[EMAIL PROTECTED]> wrote:
> Joe Little wrote:
> > On Nov 26, 2007 7:00 PM, Richard Elling <[EMAIL PROTECTED]> wrote:
> >
> >> I would expect such iostat output from a device which can handle
> >> only a single
On Tue, Apr 8, 2008 at 9:55 AM, <[EMAIL PROTECTED]> wrote:
> [EMAIL PROTECTED] wrote on 04/08/2008 11:22:53 AM:
>
>
> > In our environment, the politically and administratively simplest
> > approach to managing our storage is to give each separate group at
> > least one ZFS pool of their own (
Hello list,
We discovered a failed disk with checksum errors. We took out the disk
and resilvered, which reported many errors. A few of the sub-filesystems in
the pool won't mount anymore, with "zpool import poolname" reporting
that "cannot mount 'poolname/proj': I/O error".
OK, we have a problem. I can succ
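A minimal sketch of the first steps usually suggested before assuming anything is truly lost (pool name is a placeholder):

  zpool status -v tank     # map the persistent errors to files where possible
  zpool clear tank         # reset the error counters
  zpool scrub tank         # re-verify every block against its checksum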
This past weekend, my holiday was ruined due to a log device
"replacement" gone awry.
I posted all about it here:
http://jmlittle.blogspot.com/2008/05/problem-with-slogs-how-i-lost.html
In a nutshell, a resilver of a single log device with itself, due to
the fact that one can't remove a log device
On Mon, May 26, 2008 at 6:10 AM, Gerard Henry <[EMAIL PROTECTED]> wrote:
> hello all,
> I have Indiana freshly installed on a Sun Ultra 20 machine. It only acts as an NFS
> server. During one night, the kernel crashed, and I got these messages:
> "
> May 22 02:18:57 ultra20 unix: [ID 836849 kern.noti
log evacuation would make logs useful now instead of
waiting.
> - Eric
>
> On Tue, May 27, 2008 at 01:13:47PM -0700, Joe Little wrote:
>> This past weekend, my holiday was ruined due to a log device
>> "replacement" gone awry.
>>
>> I posted all about
ced). At one point there were plans to do this as a separate
>> piece of work (since the vdev changes are needed for the general case
>> anyway), but I don't know whether this is still the case.
>>
>> - Eric
>>
>> On Tue, May 27, 2008 at 01:13:47PM -0700, Joe
On Tue, May 27, 2008 at 5:04 PM, Neil Perrin <[EMAIL PROTECTED]> wrote:
> Joe Little wrote:
>>
>> On Tue, May 27, 2008 at 4:50 PM, Eric Schrock <[EMAIL PROTECTED]>
>> wrote:
>>>
>>> Joe -
>>>
>>> We definitely don't do
On Thu, May 29, 2008 at 7:25 PM, Jeb Campbell <[EMAIL PROTECTED]> wrote:
> Meant to add that zpool import -f pool doesn't work b/c of the missing log
> vdev.
>
> All the other disks are there and show up with "zpool import", but it won't
> import.
>
> Is there any way a util could clear the log de
On Thu, May 29, 2008 at 8:59 PM, Joe Little <[EMAIL PROTECTED]> wrote:
> On Thu, May 29, 2008 at 7:25 PM, Jeb Campbell <[EMAIL PROTECTED]> wrote:
>> Meant to add that zpool import -f pool doesn't work b/c of the missing log
>> vdev.
>>
>> All the o
On Fri, May 30, 2008 at 7:43 AM, Paul Raines <[EMAIL PROTECTED]> wrote:
>
> It seems when a ZFS filesystem with a reservation/quota is 100% full, users can no
> longer even delete files to fix the situation, getting errors like these:
>
> $ rm rh.pm6895.medial.V2.tif
> rm: cannot remove `rh.pm6895.medial.V2
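A minimal sketch of the workarounds usually offered, since the unlink itself needs a little copy-on-write headroom (the dataset and snapshot names are placeholders):

  cp /dev/null rh.pm6895.medial.V2.tif     # truncate in place, then rm succeeds
  zfs set quota=none tank/users/someuser   # or lift the quota briefly, delete, restore it
  zfs destroy tank/users/someuser@oldsnap  # or drop an old snapshot to free space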
On Fri, May 30, 2008 at 6:30 AM, Jeb Campbell <[EMAIL PROTECTED]> wrote:
> Ok, here is where I'm at:
>
> My install of OS 2008.05 (snv_86?) will not even come up in single user.
>
> The OS 2008.05 live cd comes up fine, but I can't import my old pool b/c of
> the missing log (and I have to import
On Fri, May 30, 2008 at 7:07 AM, Hugh Saunders <[EMAIL PROTECTED]> wrote:
> On Fri, May 30, 2008 at 10:37 AM, Akhilesh Mritunjai
> <[EMAIL PROTECTED]> wrote:
>> I think it's right. You'd have to move to a 64 bit kernel. Any reasons to
>> stick to a 32 bit
>> kernel ?
>
> My reason would be lack of
On Thu, Jun 5, 2008 at 8:16 PM, Tim <[EMAIL PROTECTED]> wrote:
>
>
> On Thu, Jun 5, 2008 at 9:17 PM, Peeyush Singh <[EMAIL PROTECTED]>
> wrote:
>>
>> Hey guys, please excuse me in advance if I say or ask anything stupid :)
>>
>> Anyway, Solaris newbie here. I've built for myself a new file server
On Thu, Jun 5, 2008 at 9:26 PM, Tim <[EMAIL PROTECTED]> wrote:
>
>
> On Thu, Jun 5, 2008 at 11:12 PM, Joe Little <[EMAIL PROTECTED]> wrote:
>>
>> On Thu, Jun 5, 2008 at 8:16 PM, Tim <[EMAIL PROTECTED]> wrote:
>> >
>> >
>> &g
Well, I would caution at this point against the iSCSI backend if you
are planning on using NFS. We took a long-winded conversation off-list
and have yet to return to this list, but the gist of it is that the
latency of iSCSI, along with the tendency for NFS to fsync 3 times per
write, causes performan
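A quick way to watch that behavior on the server, sketched as a one-liner (the function name is taken from the OpenSolaris source of that era and is meant as an illustration, not a prescription):

  dtrace -n 'fbt::zil_commit:entry { @[execname] = count(); }'    # ZIL commits per process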
Please add to the list the differences between locally and remotely attached
vdevs: FC, SCSI/SATA, or iSCSI. This is the part that is troubling me
most, as there are wildly different performance characteristics when
you use NFS with any of these backends with the various configs of
ZFS. Another thing is w
I've been writing some stuff from backup to a pool via tar, around
500GB. It's taken quite a while, as the tar is being read from NFS. My
ZFS partition in this case is a 3-disk RAIDZ using 3 400GB SATA
drives (sil3124 card).
Every once in a while, a "df" stalls, and during that time my I/Os go
fla
So, if I recall from this list, a mid-June release to the web was
expected for S10U2. I'm about to do some final production testing, and
I was wondering if S10U2 was near-term or more of a July thing now.
This may not be the perfect venue for the question, but the subject
was previously covered wi
What if your 32-bit system is just a NAS -- ZFS and NFS, nothing else?
I think it would still be ideal to allow tweaking of things at runtime
to make 32-bit systems more usable.
On 6/21/06, Mark Maybee <[EMAIL PROTECTED]> wrote:
Yup, you're probably running up against the limitations of 32-bit kern
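A minimal sketch of the usual mitigation on a 32-bit kernel: cap the ARC so it fits inside the small kernel address space. The tunable spelling and the value are assumptions for this build, and it is a boot-time setting rather than the runtime tweak asked about.

  echo 'set zfs:zfs_arc_max = 0x20000000' >> /etc/system    # cap the ARC at 512MB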
Well, I should weigh in here.
I have been using ZFS with an iSCSI backend and an NFS front end to my
clients. Until B41 (not sure what fixed this) I was getting 20KB/sec
for RAIDZ and 200KB/sec for plain ZFS on large iSCSI LUNs
(non-RAIDZ) when I was receiving many small writes, such as untarrin
On 6/22/06, Darren J Moffat <[EMAIL PROTECTED]> wrote:
Rich Teer wrote:
> On Thu, 22 Jun 2006, Joe Little wrote:
>
> Please don't top post.
>
>> What if your 32bit system is just a NAS -- ZFS and NFS, nothing else?
>> I think it would still be ideal to allow
On 6/22/06, Jeff Bonwick <[EMAIL PROTECTED]> wrote:
> a test against the same iscsi targets using linux and XFS and the
> NFS server implementation there gave me 1.25MB/sec writes. I was about
> to throw in the towel and deem ZFS/NFS has unusable until B41 came
> along and at least gave me 1.25MB
I guess the only hope is to find pin-compatible Xeons that are 64-bit
to replace what is a large chassis with 24 slots of disks that has a
specific motherboard form factor, etc. We have 6 of these things from
a government grant that must be used for the stated purpose. So, yes,
we can buy product, bu
order for the change to take effect.
If you don't have time, no big deal.
--Bill
On Thu, Jun 22, 2006 at 04:22:22PM -0700, Joe Little wrote:
> On 6/22/06, Jeff Bonwick <[EMAIL PROTECTED]> wrote:
> >> a test against the same iscsi targets using linux and XFS and the
>
On 6/23/06, Roch <[EMAIL PROTECTED]> wrote:
Joe Little writes:
> On 6/22/06, Bill Moore <[EMAIL PROTECTED]> wrote:
> > Hey Joe. We're working on some ZFS changes in this area, and if you
> > could run an experiment for us, that would be great. Just do this:
On 6/23/06, Roch <[EMAIL PROTECTED]> wrote:
Joe Little writes:
> On 6/22/06, Bill Moore <[EMAIL PROTECTED]> wrote:
> > Hey Joe. We're working on some ZFS changes in this area, and if you
> > could run an experiment for us, that would be great. Just do this:
To clarify what has just been stated: with the ZIL disabled I got 4MB/sec.
With the ZIL enabled I get 1.25MB/sec.
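For context, a sketch of how the ZIL was typically toggled for tests like this back then; it is a global, test-only switch and not something to leave enabled with data you care about:

  echo 'zil_disable/W0t1' | mdb -kw                # live, lasts until reboot
  echo 'set zfs:zil_disable = 1' >> /etc/system    # persistent across reboots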
On 6/23/06, Tao Chen <[EMAIL PROTECTED]> wrote:
On 6/23/06, Roch <[EMAIL PROTECTED]> wrote:
>
> > > On Thu, Jun 22, 2006 at 04:22:22PM -0700, Joe Little wrote:
On 6/27/06, Erik Trimble <[EMAIL PROTECTED]> wrote:
Darren J Moffat wrote:
> Peter Rival wrote:
>
>> storage arrays with the same arguments over and over without
>> providing an answer to the customer problem doesn't do anyone any
>> good. So. I'll restate the question. I have a 10TB database
On 6/28/06, Nathan Kroenert <[EMAIL PROTECTED]> wrote:
On Thu, 2006-06-29 at 03:40, Nicolas Williams wrote:
> But Joe makes a good point about RAID-Z and iSCSI.
>
> It'd be nice if RAID HW could assist RAID-Z, and it wouldn't take much
> to do that: parity computation on write, checksum verificat
I've always seen this curve in my tests (local disk or iSCSI) and just
think it's ZFS as designed. I haven't seen much parallelism when I have
multiple I/O jobs going; the filesystem seems to go mostly into one
mode or the other. Perhaps per vdev (in iSCSI I'm only exposing one or
two), there is on
On 7/31/06, Dale Ghent <[EMAIL PROTECTED]> wrote:
On Jul 31, 2006, at 8:07 PM, eric kustarz wrote:
>
> The 2.6.x Linux client is much nicer... one thing fixed was the
> client doing too many commits (which translates to fsyncs on the
> server). I would still recommend the Solaris client but i'm
y and some major
penalties for streaming writes of various sizes with the NFS
implementation and its fsync happiness (3 fsyncs per write from an NFS
client). It's all very true that it's stable/safe, but it's also very
slow in various use cases!
On 8/1/06, eric kustarz <[EMAIL PROTECTED]>
One of the things espoused on this list again and again is that quotas
for users are not ideal, and that one should just make a filesystem
per user.
OK... I did that. I now have, within just one "volume" in my pool, some
380-odd users. By way of example, let's say I have
/pool/common/users/user1 ...
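A minimal sketch of how that layout is usually built (the pool path and quota are placeholders): the children inherit the parent's NFS share, and each child gets its own quota instead of a per-user quota.

  zfs set sharenfs=rw pool/common/users
  for u in user1 user2 user3; do
      zfs create pool/common/users/$u
      zfs set quota=10g pool/common/users/$u
  done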
On 8/16/06, Frank Cusack <[EMAIL PROTECTED]> wrote:
On August 16, 2006 10:25:18 AM -0700 Joe Little <[EMAIL PROTECTED]> wrote:
> Is there a way to allow simple export commands that traverse multiple
> ZFS filesystems for exporting? I'd hate to have to have hundreds of
>
On 8/16/06, Frank Cusack <[EMAIL PROTECTED]> wrote:
On August 16, 2006 10:34:31 AM -0700 Joe Little <[EMAIL PROTECTED]> wrote:
> On 8/16/06, Frank Cusack <[EMAIL PROTECTED]> wrote:
>> On August 16, 2006 10:25:18 AM -0700 Joe Little <[EMAIL PROTECTED]> wrote:
We finally flipped the switch on one of our ZFS-based servers, with
approximately 1TB used of 2.8TB (3 stripes of 950GB or so, each of which is
a RAID5 volume on the Adaptec card). We have snapshots every 4 hours
for the first few days. If you add up the snapshot references, it
appears somewhat high ver
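A sketch of how to eyeball that (pool name is a placeholder), keeping in mind the per-snapshot USED column only counts blocks unique to that snapshot, so the column does not simply sum to the total space held by snapshots:

  zfs list -t snapshot -o name,used,referenced -r tank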
On 8/24/06, Matthew Ahrens <[EMAIL PROTECTED]> wrote:
On Thu, Aug 24, 2006 at 07:07:45AM -0700, Joe Little wrote:
> We finally flipped the switch on one of our ZFS-based servers, with
> approximately 1TB of 2.8TB (3 stripes of 950MB or so, each of which is
> a RAID5 volume on t
On 8/24/06, Matthew Ahrens <[EMAIL PROTECTED]> wrote:
On Thu, Aug 24, 2006 at 02:21:33PM -0700, Joe Little wrote:
> well, by deleting my 4-hourlies I reclaimed most of the space. To
> answer some of the questions, it's about 15 filesystems (descendants
> included). I'm aware
So, people here recommended the Marvell cards, and one even provided a
link to acquire them for SATA JBOD support. Well, this is what the
latest bits (B47) say:
Sep 12 13:51:54 vram marvell88sx: [ID 679681 kern.warning] WARNING:
marvell88sx0: Could not attach, unsupported chip stepping or unable
On 9/12/06, James C. McPherson <[EMAIL PROTECTED]> wrote:
Joe Little wrote:
> So, people here recommended the Marvell cards, and one even provided a
> link to acquire them for SATA jbod support. Well, this is what the
> latest bits (B47) say:
>
> Sep 12 13:51:54 vram ma
Yeah. I got the message from a few others, and we are hoping to
return it and buy the newer one. I'm sort of surprised by the limited set of
SATA RAID or JBOD cards that one can actually use. Even the ones
linked to on this list sometimes aren't supported :). I need to get up
and running like yesterday
The latest OpenSolaris release? Perhaps Nexenta in the end is the way
to best deliver/maintain that.
On 10/27/06, David Blacklock <[EMAIL PROTECTED]> wrote:
What is the current recommended version of Solaris 10 for ZFS ?
-thanks,
-Dave
On 11/22/06, Chad Leigh -- Shire.Net LLC <[EMAIL PROTECTED]> wrote:
On Nov 22, 2006, at 4:11 PM, Al Hopper wrote:
> No problem there! ZFS rocks. NFS/ZFS is a bad combination.
Has anyone tried sharing a ZFS fs using Samba or AFS or something
else besides NFS? Do we have the same issues?
I
On 12/12/06, James F. Hranicky <[EMAIL PROTECTED]> wrote:
Jim Davis wrote:
>> Have you tried using the automounter as suggested by the linux faq?:
>> http://nfs.sourceforge.net/#section_b
>
> Yes. On our undergrad timesharing system (~1300 logins) we actually hit
> that limit with a standard au
We just put together a new system for ZFS use at a company, and twice
in one week we've had the system wedge. You can log on, but the zpools
are hosed, and a reboot never completes if requested, since it can't
unmount the ZFS volumes. So, only a power cycle works.
In both cases, we get this:
Dec 20
On 12/20/06, Joe Little <[EMAIL PROTECTED]> wrote:
We just put together a new system for ZFS use at a company, and twice
in one week we've had the system wedge. You can log on, but the zpools
are hosed, and a reboot never occurs if requested since it can't
unmount the zfs vol
Some further joy:
http://bugs.opensolaris.org/view_bug.do?bug_id=6504404
On 12/20/06, Joe Little <[EMAIL PROTECTED]> wrote:
On 12/20/06, Joe Little <[EMAIL PROTECTED]> wrote:
> We just put together a new system for ZFS use at a company, and twice
> in one week we've ha
and specific models, and the driver used? Looks like there may be
stability issues with the Marvell, which appear to go unanswered...
On 12/21/06, Jason J. W. Williams <[EMAIL PROTECTED]> wrote:
Hi Naveen,
I believe the newer LSI cards work pretty well with Solaris.
Best Regards,
Jason
On 12/
On 12/21/06, Al Hopper <[EMAIL PROTECTED]> wrote:
On Thu, 21 Dec 2006, Joe Little wrote:
> and specific models, and the driver used? Looks like there may be
> stability issues with the marvell, which appear to go unanswered..
I've tested a box running two Marvell based 8
I've been writing to the Solaris NFS list since I was getting some bad
performance copying a large set of small files via NFS (noticeably
there). We have various source trees, including a tree with many Linux
versions that I was copying to my ZFS NAS-to-be. On large files, it
flies pretty well, an
nd ." times on a local zfs.
Neil Perrin wrote On 05/04/06 21:01,:
> Was this a 32 bit intel system by chance?
> If so this is quite likely caused by:
>
> 6413731 pathologically slower fsync on 32 bit systems
>
> This was fixed in snv_39.
>
> Joe Little wrote On 0
1 pathologically slower fsync on 32 bit systems
>
> This was fixed in snv_39.
>
> Joe Little wrote On 05/04/06 15:47,:
>
>> I've been writing to the Solaris NFS list since I was getting some bad
>> performance copying via NFS (noticeably there) a large set of small
&
Well, it was already an NFS-discuss list message. Someone else added
dtrace-discuss to it. I have already noted this to a degree on
zfs-discuss, but it seems to be mainly an NFS-specific issue at this
stage.
On 5/5/06, Spencer Shepler <[EMAIL PROTECTED]> wrote:
On Fri, Joe Little wrote:
188
RFS3_COMMIT
306
On 5/5/06, Joe Little <[EMAIL PROTECTED]> wrote:
well, it was already an NFS-discuss list message. Someone else added
dtrace-discuss to it. I have already noted this to a degree on
zfs-discuss, but it seems to be mainly a NFS specific issue at this
s
Thanks for the tip. In the local case, I could send to the
iSCSI-backed ZFS RAIDZ at even faster rates, with a total elapsed time
of 50 seconds (17 seconds better than UFS). However, I didn't even bother
finishing the NFS client test, since it was taking a few seconds
between multiple 27K files. So,
On 5/5/06, Eric Schrock <[EMAIL PROTECTED]> wrote:
On Fri, May 05, 2006 at 03:46:08PM -0700, Joe Little wrote:
> Thanks for the tip. In the local case, I could send to the
> iSCSI-backed ZFS RAIDZ at even faster rates, with a total elapsed time
> of 50seconds (17 seconds
And of course, just to circle back, an rsync via ssh from the client
to the Solaris ZFS/iSCSI server came in at 17.5MB/sec, taking 1 minute
16 seconds, or about 20% longer. So, NFS (over TCP) is 1.4k/s, and
encrypted ssh is 17.5MB/sec following the same network path.
On 5/5/06, Joe Little <[EM
Are there any known I/O or iSCSI DTrace scripts available?
On 5/5/06, Spencer Shepler <[EMAIL PROTECTED]> wrote:
On Fri, Joe Little wrote:
> On 5/5/06, Eric Schrock <[EMAIL PROTECTED]> wrote:
> >On Fri, May 05, 2006 at 03:46:08PM -0700, Joe Little wrote:
> >> Thanks
such a different code path?
On 5/5/06, Lisa Week <[EMAIL PROTECTED]> wrote:
These may help:
http://opensolaris.org/os/community/dtrace/scripts/
Check out iosnoop.d
http://www.solarisinternals.com/si/dtrace/index.php
Check out iotrace.d
- Lisa
Joe Little wrote On 05/05/06 18:
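Assuming the scripts carry the usual "#!/usr/sbin/dtrace -s" header, a minimal way to run them as root from wherever they were downloaded:

  ./iosnoop.d     # per-I/O detail: device, size, and the issuing process
  ./iotrace.d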
uling. Is
this tunable for either ZFS or NFS, and/or can it be set?
On 5/5/06, Lisa Week <[EMAIL PROTECTED]> wrote:
These may help:
http://opensolaris.org/os/community/dtrace/scripts/
Check out iosnoop.d
http://www.solarisinternals.com/si/dtrace/index.php
Check out iotrace.d
- Lisa
ing R2T: 1
Max Receive Data Segment Length: 8192
Max Connections: 1
Header Digest: NONE
Data Digest: NONE
On 5/6/06, Nicolas Williams <[EMAIL PROTECTED]> wrote:
On Fri, May 05, 2006 at 09:48:00PM -0700, Joe Little
amely, XFS, JFS, etc which I've tested
before)
On 5/8/06, Nicolas Williams <[EMAIL PROTECTED]> wrote:
On Fri, May 05, 2006 at 11:55:17PM -0500, Spencer Shepler wrote:
> On Fri, Joe Little wrote:
> > Thanks. I'm playing with it now, trying to get the most succinct te
ee with NFS. I definitely think the bug is on the NFS server end, even considering that the SMB protocol is different.
On 5/8/06, Joe Little <[EMAIL PROTECTED]> wrote:
I was asked to also snoop the iSCSI end of things, trying to find something different between the two. iSCSI being relativ
mance testing and benchmarking.
I will hand off my configuration for the Sun NFS teams internally
to check out.
-David
Joe Little wrote:
> Well, I tried some suggested iSCSI tunings to no avail. I did try
> something else though: I brought up Samba. My Linux 2.2 source tree
> copying i
How did you get the average time for async writes? My client (lacking
ptime; it's Linux) comes in at 50 minutes, not 50 seconds. I'm running
again right now for a more accurate number. I'm untarring from a local
file into the directory on the NFS share.
On 5/11/06, Roch Bourbonnais - Performance En
well, here's my first pass result:
[EMAIL PROTECTED] loges1]# time tar xf /root/linux-2.2.26.tar
real    114m6.662s
user    0m0.049s
sys     0m1.354s
On 5/11/06, Roch Bourbonnais - Performance Engineering
<[EMAIL PROTECTED]> wrote:
Joe Little writes:
> How did you get the av