On 12/16/06, Richard Elling <[EMAIL PROTECTED]> wrote:
Jason J. W. Williams wrote:
> Hi Jeremy,
>
> It would be nice if you could tell ZFS to turn off fsync() for ZIL
> writes on a per-zpool basis. That being said, I'm not sure there's a
> consensus on that...and I'm sure not smart enough to be a
I use zfs in a san. I have two Sun V440s running solaris 10 U2, which
have luns assigned to them from my Sun SE 3511. So far, it has worked
flawlessly.
Robert Milkowski wrote:
Hello Dave,
Friday, December 15, 2006, 9:02:31 PM, you wrote:
DB> Does anyone have a document that describes ZFS in
Jason J. W. Williams wrote:
Hi Jeremy,
It would be nice if you could tell ZFS to turn off fsync() for ZIL
writes on a per-zpool basis. That being said, I'm not sure there's a
consensus on that...and I'm sure not smart enough to be a ZFS
contributor. :-)
The behavior is a reality we had to deal
> >> JV> For zones: use standard upgrade, because it is not yet possible to
> >> use Live
> >> JV> Upgrade on a zoned system. Also, see the Zones FAQ for other
> >> important
> >>
> >> IIRC upgrade on system with Zones won't work (it was only lately
> >> integrated into Nevada).
> >
> > That is
Richard Lowe wrote:
Jeff Victor wrote:
Robert Milkowski wrote:
Hello Jeff,
Friday, December 15, 2006, 9:36:48 PM, you wrote:
JV> David Smith wrote:
We currently have a couple of servers at Solaris 10 U2, and we would like
to get to Solaris 10 U3 for the new zfs features. Can this be accompl
Jeff Victor wrote:
Robert Milkowski wrote:
Hello Jeff,
Friday, December 15, 2006, 9:36:48 PM, you wrote:
JV> David Smith wrote:
We currently have a couple of servers at Solaris 10 U2, and we would like
to get to Solaris 10 U3 for the new zfs features. Can this be accomplished
via patching,
Robert Milkowski wrote:
Hello Jeff,
Friday, December 15, 2006, 9:36:48 PM, you wrote:
JV> David Smith wrote:
We currently have a couple of servers at Solaris 10 U2, and we would like
to get to Solaris 10 U3 for the new zfs features. Can this be accomplished
via patching, or do you have to do
Hello Kory,
Friday, December 15, 2006, 6:58:53 PM, you wrote:
KW> Basically then, with data being stored on the ZFS disks (no
KW> applications), and web server logs, would it benefit us more to
KW> have the 3 LUNs set up in one ZFS Storage Pool?
If you put all 3 LUNs in one pool just make sure t
Hello Christine,
Saturday, December 16, 2006, 12:17:12 AM, you wrote:
CT> Hi,
CT> I guess we are acquainted with the ZFS Wikipedia?
CT> http://en.wikipedia.org/wiki/ZFS
CT> Customers refer to it, I wonder where the Wiki gets its numbers. For
CT> example there's a Sun marketing slide that say
Hello Jeff,
Friday, December 15, 2006, 9:36:48 PM, you wrote:
JV> David Smith wrote:
>> We currently have a couple of servers at Solaris 10 U2, and we would like
>> to get to Solaris 10 U3 for the new zfs features. Can this be accomplished
>> via patching, or do you have to do an upgrade from S1
Hello Dave,
Friday, December 15, 2006, 9:02:31 PM, you wrote:
DB> Does anyone have a document that describes ZFS in a pure
DB> SAN environment? What will and will not work?
ZFS is "just" a filesystem with "just" an integrated volume manager.
Ok, it's more than that.
The point is that if any oth
Hi,
I guess we are acquainted with the ZFS Wikipedia?
http://en.wikipedia.org/wiki/ZFS
Customers refer to it, and I wonder where the Wiki gets its numbers. For
example there's a Sun marketing slide that says "unlimited snapshots",
contradicted by the first bullet:
2^48 — Number of snapshots
On Friday 15 December 2006 21:54, Eric Schrock wrote:
> Ah, you're running into this bug:
>
> 650054 ZFS fails to see the disk if devid of the disk changes due to driver
> upgrade
You mean 6500545 ;)
>
> Basically, if we have the correct path but the wrong devid, we bail out
> of vdev_disk_open()
Yes. Use one pool.
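For what it's worth, a minimal sketch of the one-pool approach (pool and device names here are made up; substitute your three LUNs):

# One pool built from all three LUNs; device names are hypothetical.
zpool create webpool c4t0d0 c4t1d0 c4t2d0
# Separate filesystems let the web data and the logs share the pool's free
# space while still being snapshotted and tuned independently.
zfs create webpool/data
zfs create webpool/logs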
Just to make sure there's no confusion ;-), this error message was added to
'ls' after Solaris 10, and hasn't been backported yet. (Bug 4985395, *ls* does
not report errors from getdents().)
On Fri, Dec 15, 2006 at 08:11:08PM +, Ricardo Correia wrote:
> With the help of dtrace, I found out that in vdev_disk_open() (in
> vdev_disk.c), the ddi_devid_compare() function was failing.
>
> I don't know why the devid has changed, but simply doing zpool export ;
> zpool import did th
On Fri, 15 Dec 2006, Ben Rockwood wrote:
> Stuart Glenn wrote:
> > A little back story: I have a Norco DS-1220, a 12-bay SATA box. It is
> > connected via eSATA (SiI3124) on PCI-X; two drives are straight
> > connections, then the other two ports go to 5x multipliers within the
> > box. My needs/ho
On Dec 15, 2006, at 13:49, Ben Rockwood wrote:
I have similar issues on my home workstation. They started
happening when I put Seagate SATA-II drives with NCQ on a SI3124.
I do not believe this to be an issue with ZFS. I've largely
dismissed the issue as hardware caused, although I may b
David Smith wrote:
We currently have a couple of servers at Solaris 10 U2, and we would like
to get to Solaris 10 U3 for the new zfs features. Can this be accomplished
via patching, or do you have to do an upgrade from S10U2 to S10U3? Also
what about a system with Zones? What is the best pract
David Smith wrote:
We currently have a couple of servers at Solaris 10 U2, and we would like to
get to Solaris 10 U3 for the new zfs features. Can this be accomplished via
patching, or do you have to do an upgrade from S10U2 to S10U3? Also what about
a system with Zones? What is the best pr
We currently have a couple of servers at Solaris 10 U2, and we would like to
get to Solaris 10 U3 for the new zfs features. Can this be accomplished via
patching, or do you have to do an upgrade from S10U2 to S10U3? Also what about
a system with Zones? What is the best practice for upgrading
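For the servers without zones, one route from U2 to U3 is Live Upgrade; the sketch below is only illustrative (the boot-environment name, spare slice and media path are made up), and as noted elsewhere in this thread, zoned systems had to use standard upgrade at the time.

lucreate -n s10u3 -m /:/dev/dsk/c0t1d0s0:ufs   # build the alternate boot environment
luupgrade -u -n s10u3 -s /cdrom/cdrom0         # upgrade it from the U3 media
luactivate s10u3                               # make it the environment booted next
init 6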
Dave Burleson wrote:
Does anyone have a document that describes ZFS in a pure
SAN environment? What will and will not work?
From some of the information I have been gathering
it doesn't appear that ZFS was intended to operate
in a SAN environment.
What information? ZFS works on a SAN just as
With the help of dtrace, I found out that in vdev_disk_open() (in
vdev_disk.c), the ddi_devid_compare() function was failing.
I don't know why the devid has changed, but simply doing zpool export ; zpool
import did the trick - the pool imported correctly and the contents seem to
be intact.
Ex
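For reference, the export/import cycle described above is just the following ("pool" matches the name used in the zdb output elsewhere in this thread; use your own pool name):

# Export releases the pool; import rescans the device labels and
# refreshes the devids recorded in the pool configuration.
zpool export pool
zpool import pool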
Does anyone have a document that describes ZFS in a pure
SAN environment? What will and will not work?
From some of the information I have been gathering
it doesn't appear that ZFS was intended to operate
in a SAN environment.
Thanks,
Dave
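To ZFS a SAN LUN is just another block device, so pool creation looks the same as with local disks. A hedged sketch (LUN device names are invented) of a mirrored pool across two array LUNs, which also gives ZFS the redundancy it needs to self-heal checksum errors:

# Mirror two LUNs presented by the array; names are hypothetical.
zpool create sanpool mirror c5t0d0 c5t1d0
zpool status sanpool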
Stuart Glenn wrote:
A little back story: I have a Norco DS-1220, a 12-bay SATA box. It is
connected via eSATA (SiI3124) on PCI-X; two drives are straight
connections, then the other two ports go to 5x multipliers within the
box. My needs/hopes for this were to use 12 500GB drives and ZFS to make a
v
Hi Jeremy,
It would be nice if you could tell ZFS to turn off fsync() for ZIL
writes on a per-zpool basis. That being said, I'm not sure there's a
consensus on that...and I'm sure not smart enough to be a ZFS
contributor. :-)
The behavior is a reality we had to deal with and workaround, so I
pos
The instructions will tell you how to configure the array to ignore
SCSI cache flushes/syncs on Engenio arrays. If anyone has additional
instructions for other arrays, please let me know and I'll be happy to
add them!
Wouldn't it be more appropriate to allow the administrator to disable
ZFS from
Hi Folks,
Roch Bourbonnais and Richard Elling helped me tremendously with the
issue of ZFS killing performance on arrays with battery-backed cache.
Since this seems to have been mentioned a bit recently, and there are
no instructions on how to fix it on Sun StorageTek/Engenio arrays, I
wanted to
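Where the array itself can't be told to ignore cache flushes, later ZFS releases grew a system-wide tunable that stops ZFS from issuing them; whether it exists on your build needs checking, and it is only safe when every pool device sits on non-volatile (battery-backed) cache. A sketch of the /etc/system entry:

* /etc/system -- only appropriate when all pool devices have
* battery-backed (non-volatile) write cache.
set zfs:zfs_nocacheflush = 1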
Not sure if this is helpful, but anyway..:
[EMAIL PROTECTED]:~# zdb -bb pool
Traversing all blocks to verify nothing leaked ...
No leaks (block sum matches space maps exactly)
bp count: 1617816
bp logical: 91235889152 avg: 56394
bp physical: 8
Basically then, with data being stored on the ZFS disks (no applications), and
web server logs, would it benefit us more to have the 3 LUNs set up in one ZFS
Storage Pool?
On Fri, Dec 15, 2006 at 09:33:51AM -0800, Jeb Campbell wrote:
> One thing, on the home dir create, would you want to chown it to the user?
Yes. I guess I made my minimal example a bit too minimal. I updated it
to chown to the user and also added a 'zfs set quota' which may be
fairly common as we
Trevor Watson wrote:
Anton B. Rang wrote:
Were there any errors reported in /var/adm/messages, or do you see any
logged via fmdump?
Nothing, unfortunately.
In Solaris 10, 'ls' will not print any error message if reading from a
directory fails. (Fixed in Nevada.) If something damaged a dire
This might help in diagnosing the problem: zdb successfully traversed the pool.
Here's the output:
[EMAIL PROTECTED]:~# zdb -c pool
Traversing all blocks to verify checksums and verify nothing leaked ...
zdb_blkptr_cb: Got error 50 reading <5, 3539, 0, 12e7> -- skipping
Error counts:
err
On Friday 15 December 2006 16:27, Anton B. Rang wrote:
> This is 0x7FFFFFFFFFFFFFFF, which is MAXOFFSET_T, aka UNKNOWN_SIZE. Not sure
> why...a damaged label on this device?
'format' seems to show the partition table correctly:
[EMAIL PROTECTED]:~# format
Searching for disks...done
AVAILABLE DISK
Ed, thanks for the great work on Samba/ZFS! I will be testing the shadow copy
patch soon.
One thing, on the home dir create, would you want to chown it to the user?
Now if we can just get Samba+ZFS acls, we would really be rocking.
Thanks again,
Jeb
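A minimal sketch of the per-user home-directory provisioning being discussed (pool layout, user name and quota are invented):

# One filesystem per user, handed to that user, with a space quota.
zfs create tank/home/jeb
chown jeb:staff /tank/home/jeb
zfs set quota=10G tank/home/jeb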
> The implication in what you've written is that ZFS doesn't report an error if
> it detects an invalid checksum. Is that correct?
No, sorry I wasn't more clear.
ZFS detects and reports the invalid checksum. If the checksum error occurs on a
directory, this can result in an error being returned
For those who work with Samba, I've setup a page about the ZFS Shadow
Copy VFS module for Samba that I've been working on. This module helps
to make ZFS snapshots easily navigable through a Microsoft-provided GUI in
Windows which can be used to provide end users with an easy way to
restore files
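For orientation, Samba VFS modules are enabled per share in smb.conf; the snippet below uses the stock shadow_copy module name purely as an illustration (the module announced here may ship under a different name and with its own options):

; smb.conf sketch -- module name and share are illustrative only.
[homes]
    vfs objects = shadow_copy

ZFS snapshots themselves surface under .zfs/snapshot/<name> on each filesystem, while the stock module expects per-share directories named @GMT-YYYY.MM.DD-HH.MM.SS, which is the gap a ZFS-aware module has to bridge.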
Anton B. Rang wrote:
Were there any errors reported in /var/adm/messages, or do you see any logged
via fmdump?
Nothing, unfortunately.
In Solaris 10, 'ls' will not print any error message if reading from a
directory fails. (Fixed in Nevada.) If something damaged a directory (including
ZFS
Anton B. Rang wrote:
Were there any errors reported in /var/adm/messages, or do you see any logged
via fmdump?
In Solaris 10, 'ls' will not print any error message if reading from a
directory fails. (Fixed in Nevada.) If something damaged a directory (including
ZFS detecting a checksum error)
> found this rather strange:
> [EMAIL PROTECTED]:~# stat -L /dev/dsk/c2t0d0s1
> File: `/dev/dsk/c2t0d0s1'
> Size: 9223372036854775807 Blocks: 0 IO
>
> Notice the size!
This is 0x7FFFFFFFFFFFFFFF, which is MAXOFFSET_T, aka UNKNOWN_SIZE. Not sure
why...a damaged label on this device?
Were there any errors reported in /var/adm/messages, or do you see any logged
via fmdump?
In Solaris 10, 'ls' will not print any error message if reading from a
directory fails. (Fixed in Nevada.) If something damaged a directory (including
ZFS detecting a checksum error), its contents (or some
On Dec 15, 2006, at 7:37 AM, Jesus Cea wrote:
I'm interested in ZFS redundancy when vdevs are "remote". The idea,
for example, is to use remote vdev mirroring as a cluster FS layer, or
for one-off backups.
Has anybody tried to mount an iSCSI target as a
On Friday 15 December 2006 15:28, Trevor Watson wrote:
> Does anyone have any ideas or suggestions as to how I might try to figure
> out what's wrong?
I have no idea, but I had the same thing happen to me yesterday (see
http://www.opensolaris.org/jive/thread.jspa?threadID=20294&tstart=0 ).
Wh
I have a non-redundant zpool configured on one slice of my disk, and in the
past week have had two directories simply disappear, in two different filesystems.
The first was my email directory under my homedir (which is a ZFS fs) - I put
this disappearance down to Thunderbird despite it never ha
A little back story: I have a Norco DS-1220, a 12-bay SATA box. It is
connected via eSATA (SiI3124) on PCI-X; two drives are straight
connections, then the other two ports go to 5x multipliers within the
box. My needs/hopes for this were to use 12 500GB drives and ZFS to make a
very large & simpl
I'm interested in ZFS redundancy when vdevs are "remote". The idea,
for example, is to use remote vdev mirroring as a cluster FS layer, or
for one-off backups.
Has anybody tried to mount an iSCSI target as a ZFS device? Are machine
reboots / connectivity pr
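For anyone who wants to try it, once the Solaris iSCSI initiator presents the target as a local device node, zpool create treats it like any other disk. A rough sketch only (portal address and device names are made up, and the reboot/connectivity behaviour being asked about is exactly what needs testing):

# Point the initiator at the target portal and enable SendTargets discovery.
iscsiadm add discovery-address 192.168.1.50:3260
iscsiadm modify discovery --sendtargets enable
# Create device nodes for the discovered LUNs, then build a mirrored pool.
devfsadm -i iscsi
zpool create remotepool mirror c6t0d0 c6t1d0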
>Is anyone working on fixing it? On some slower servers this is really
>annoying (I know flash would 'fix' it).
Not that I am aware of; it is really annoying on older
hardware.
Casper