I think that this fix may be backported as part of the
BrandZ project backport, but I don't think anyone is backporting it
outside of that. You might want to add a new call record and open
a subCR if you need this to be backported.
The workaround is just what you've already discovered.
dele
Thanks Ed. The ticket shows the customer running Solaris 10. Do you
know if the fix will be incorporated in an S10 update or patch? Or
possibly an S10 workaround made available?
Thanks again!
Dave Radden
x74861
---
Edward Pilatowicz wrote on 10/31/06 18:53:
If you're running Solaris 10 or an early Nevada build then it's
possible you're hitting this bug (which I fixed in build 35):
4976415 devfsadmd for zones could be smarter when major numbers change
if you're running a recent nevada build then this could be a new issue.
so what version of sola
Jay Grogan wrote:
Ran 3 tests using mkfile to create a 6GB file on UFS and ZFS file systems.
Command run: mkfile -v 6gb /ufs/tmpfile
Test 1: UFS mounted LUN (2m2.373s)
Test 2: UFS mounted LUN with directio option (5m31.802s)
Test 3: ZFS LUN (single LUN in a pool) (3m13.126s)
Sunfire V120
1 Qlogic 2
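For anyone wanting to repeat a comparison like this, a rough sketch (untested
here; the pool name "tank" and the paths are placeholders):
  # time mkfile -v 6g /ufs/tmpfile    (sequential write to the UFS mount)
  # time mkfile -v 6g /tank/tmpfile   (same-size write into a ZFS file system)
  # zpool iostat -v tank 5            (in another window, to watch per-vdev throughput during the ZFS run)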
Rince wrote:
Hi all,
I recently created a RAID-Z1 pool out of a set of 7 SCSI disks, using
the following command:
# zpool create magicant raidz c5t0d0 c5t1d0 c5t2d0 c5t3d0 c5t4d0 c5t5d0
c5t6d0
It worked fine, but I was slightly confused by the size yield (99 GB vs
the 116 GB I had on my o
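As a rough sanity check on raidz1 sizing (assuming seven roughly equal-sized
disks): one disk's worth of space goes to parity, so usable capacity is about
(N - 1) x (per-disk size), and part of any remaining gap is usually just GB
(10^9 bytes) vs GiB (2^30 bytes) accounting. As I understand it, the two views
can be compared with:
  # zpool list magicant   (raw pool size, parity space included)
  # zfs list magicant     (space actually available to file systems)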
Hello Luke,
Wednesday, November 1, 2006, 12:59:49 AM, you wrote:
LL> Robert,
LL> On 10/31/06 3:55 PM, "Robert Milkowski" <[EMAIL PROTECTED]> wrote:
>> Right now with S10U3 beta with over 40 disks I can get only about
>> 1.6GB/s peak.
LL> That's decent - is that the number reported by "zpool io
Robert,
On 10/31/06 3:55 PM, "Robert Milkowski" <[EMAIL PROTECTED]> wrote:
> Right now with S10U3 beta with over 40 disks I can get only about
> 1.6GB/s peak.
That's decent - is that the number reported by "zpool iostat"? In that case
then I think 1GB = 1024^3, my GB measurements are roughly "b
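(For reference on the unit gap: 1.6 GiB/s = 1.6 x 2^30 bytes/s, which is about
1.72 x 10^9 bytes/s, so binary and decimal readings differ by roughly 7% at
this scale.)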
Hello Luke,
Wednesday, November 1, 2006, 12:13:28 AM, you wrote:
LL> Robert,
LL> On 10/31/06 3:10 PM, "Robert Milkowski" <[EMAIL PROTECTED]> wrote:
>> Even then I would first try to test with a more realistic load on ZFS as it
>> can turn out that ZFS performs better anyway. Despite problems with
>> l
Robert,
On 10/31/06 3:12 PM, "Robert Milkowski" <[EMAIL PROTECTED]> wrote:
> Almost definitely not true. I did some simple tests today with U3 beta
> on thumper and still can observe "jumping" writes with sequential
> 'dd'.
We crossed posts. There are some firmware issues with the Hitachi disks
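For anyone who wants to reproduce the "jumping" pattern, a minimal sketch
(pool name and sizes are placeholders):
  # dd if=/dev/zero of=/tank/ddtest bs=1024k count=8192   (roughly 8 GB of sequential writes)
  # zpool iostat tank 1   (in a second window; the bursts show up as alternating high and near-zero write samples)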
Robert,
On 10/31/06 3:10 PM, "Robert Milkowski" <[EMAIL PROTECTED]> wrote:
> Even then I would first try to test with a more realistic load on ZFS as it
> can turn out that ZFS performs better anyway. Despite problems with
> large sequential writes I find ZFS to perform better in many more
> complex s
Hello Luke,
Tuesday, October 31, 2006, 6:09:23 PM, you wrote:
LL> Robert,
LL>
>> I believe it's not solved yet but you may want to try with
>> the latest Nevada and see if there's a difference.
LL> It's fixed in the upcoming Solaris 10 U3 and also in Solaris Express
LL> post build 47 I think.
Al
Hello Jay,
Tuesday, October 31, 2006, 7:09:12 PM, you wrote:
JG> Thanks Robert, I was hoping something like that hadn't turned up;
JG> a lot of what I will need to use ZFS for will be sequential writes at this
JG> time.
JG>
Even then I would first try to test with a more realistic load on ZFS as it
can tur
Team,
**Please respond to me and my coworker listed in the Cc, since neither
one of us is on this alias**
QUICK PROBLEM DESCRIPTION:
The customer created a dataset which contains all the zvols for a particular
zone. The zone is then given access to all the zvols in the dataset
using a match statement in
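For context, the configuration being described probably looks roughly like
this (the zone name, pool and dataset are made up for illustration):
  # zonecfg -z myzone
  zonecfg:myzone> add device
  zonecfg:myzone:device> set match=/dev/zvol/rdsk/tank/myzone/*
  zonecfg:myzone:device> end
  zonecfg:myzone> commit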
Robert Petkus wrote:
When using sharenfs, do I really need to NFS export the parent zfs
filesystem *and* all of its children? For example, if I have
/zfshome
/zfshome/user1
/zfshome/user1+n
it seems to me like I need to mount each of these exported filesystems
individually on the NFS client. T
Robert Petkus wrote:
Folks,
When using sharenfs, do I really need to NFS export the parent zfs
filesystem *and* all of its children? For example, if I have
/zfshome
/zfshome/user1
/zfshome/user1+n
it seems to me like I need to mount each of these exported filesystems
individually on the NFS cli
On Oct 31, 2006, at 11:09 AM, Jay Grogan wrote:
Thanks Robert, I was hoping something like that hadn't turned up;
a lot of what I will need to use ZFS for will be sequential writes
at this time.
I don't know what it is worth, but I was using iozone (http://www.iozone.org/) on my ZFS on top of Areca R
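An iozone run along these lines gives a quick sequential profile to compare
against (an untested sketch; the file path and the 4g size cap are examples):
  # iozone -a -g 4g -f /tank/iozone.tmp
The -a auto mode sweeps record and file sizes up to the -g limit and reports
read and write throughput for each combination.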
Folks,
When using sharenfs, do I really need to NFS export the parent zfs
filesystem *and* all of its children? For example, if I have
/zfshome
/zfshome/user1
/zfshome/user1+n
it seems to me like I need to mount each of these exported filesystems
individually on the NFS client. This scheme doesn'
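In case it helps, the server side only needs the property set at the top;
the children inherit it (the dataset name below assumes the file system is
called zfshome):
  # zfs set sharenfs=on zfshome
  # zfs get -r sharenfs zfshome   (shows user1, user1+n, ... inheriting the property)
As far as I know each child file system is still its own NFS share, though,
so clients without NFSv4 mirror-mount support do end up mounting each one
separately, which matches what you are seeing.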
Thanks Richard, this seems to be exactly what I was looking for.
There are several ways to do this. Two of the most popular are syslog
and SNMP. syslog works, just like it always did (or didn't). For more
details on FMA and how it works with SNMP traps, see the conversations on
the OpenSolaris fault management community,
http://www.opensolaris.org/os
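For a quick do-it-yourself stopgap while you look at the FMA/SNMP route,
something like this cron entry is one rough approach (the mail alias is an
example, and if your release prints a header from fmadm faulty even when the
system is healthy you will want to filter that out first):
  0 * * * * /usr/sbin/fmadm faulty > /var/tmp/fma.out 2>&1; [ -s /var/tmp/fma.out ] && mailx -s "FMA faults" root < /var/tmp/fma.out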
Thanks Robert, I was hoping something like that hadn't turned up; a lot of what I
will need to use ZFS for will be sequential writes at this time.
Robert,
> I believe it's not solved yet but you may want to try with
> the latest Nevada and see if there's a difference.
It's fixed in the upcoming Solaris 10 U3 and also in Solaris Express
post build 47 I think.
- Luke
On 10/31/06, Wes Williams <[EMAIL PROTECTED]> wrote:
Okay, so now that I'm planning to build my NAS using ZFS, I now need to devise
or learn of a preexisting method to receive notification of ZFS handled errors
on a remote machine.
For example, if a disk fails and I don't regularly login o
Jay Grogan wrote:
To answer your question: "Yes, I did expect the same or better performance than standard
UFS", based on all the hype and, to quote Sun, "Blazing performance
ZFS is based on a transactional object model that removes most of the
traditional constraints on the order of issuing I/Os, w
On 10/31/06, Robert Milkowski <[EMAIL PROTECTED]> wrote:
Hello Cyril,
Tuesday, October 31, 2006, 8:30:50 AM, you wrote:
CP> On 10/30/06, Robert Milkowski <[EMAIL PROTECTED]> wrote:
>>
>>
>> 1. rebooting the server could take several hours right now with so many
>> file systems
>>
>> I believe this p
Hello Cyril,
Tuesday, October 31, 2006, 8:30:50 AM, you wrote:
CP> On 10/30/06, Robert Milkowski <[EMAIL PROTECTED]> wrote:
>>
>>
>> 1. rebooting the server could take several hours right now with so many
>> file systems
>>
>> I believe this problem is being addressed right now
CP> Well, I've done
To answer your question: "Yes, I did expect the same or better performance than
standard UFS", based on all the hype and, to quote Sun, "Blazing performance
ZFS is based on a transactional object model that removes most of the
traditional constraints on the order of issuing I/Os, which results in huge
Okay, so now that I'm planning to build my NAS using ZFS, I now need to devise
or learn of a preexisting method to receive notification of ZFS handled errors
on a remote machine.
For example, if a disk fails and I don't regularly login or SSH into the ZFS
server, I'd like an email or some oth
> I use the smartmontools smartd daemon to email me
> when disk drives are
> about to fail. If you are interested in configuring
> smartd to send
> email notifications prior to a disk failing, check
> out the following
> blog post:
>
> http://prefetch.net/blog/index.php/2006/01/05/using-smartd-
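For reference, the smartd side of that boils down to a line in smartd.conf
along these lines (the address is an example), plus making sure smartd is
started at boot:
  DEVICESCAN -H -m root@localhost
That scans for SMART-capable disks, checks their health status, and mails the
listed address when a check fails.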
Erblichs writes:
> Hi,
>
> My suggestion is to direct any command output
> that may print thousands of lines to a file.
>
> I have not tried that number of FSs. So, my first
> suggestion is to have a lot of physical memory installed.
I seem to recall 64K per FS and being worked on t
Hello Jay,
Tuesday, October 31, 2006, 3:31:54 AM, you wrote:
JG> Ran 3 tests using mkfile to create a 6GB file on UFS and ZFS file systems.
JG> Command run: mkfile -v 6gb /ufs/tmpfile
JG> Test 1 UFS mounted LUN (2m2.373s)
JG> Test 2 UFS mounted LUN with directio option (5m31.802s)
JG> Test 3 ZFS LUN
I was doing some experimentation of my own, using SCSI attached JBOD.
I built a test zpool spanning 7 drives (raidz) on S10U2. The 7 disks were
split between 3 controllers.
I then started replacing the 18GB drives with 36GB drives, one at a time, and
watched it rebuild the zpool, growing as it
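For anyone repeating this, the per-disk loop is roughly (pool and device
names are placeholders; this assumes the new disk goes into the same slot):
  # zpool replace tank c1t0d0   (after physically swapping in the larger drive)
  # zpool status tank           (wait for the resilver to finish before touching the next disk)
In my understanding the extra capacity only becomes visible once the last
disk in the raidz vdev has been replaced, and on some releases not until the
pool is exported and imported again.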