Are there any potential problems one should be aware of when dual-using a pair of
MLC SSD units: part of each as mirrored (ZFS) boot disks, and the rest of each as
ZFS L2ARC cache devices (for another zpool)?
The one thing I can think of is potential
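(For concreteness, roughly the layout I have in mind - pool and slice names are just
placeholders: s0 on each SSD holds the mirrored root, and the remaining slice on each
is added as L2ARC to the data pool:)
# zpool add tank cache c1t0d0s1 c1t1d0s1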
Just a quick follow-up: the same issue still seems to be present on our X4500s
running the latest Solaris 10 with all the latest patches, and with the following SSD
disks:
Intel X25-M G1 firmware 8820 (80GB MLC)
Intel X25-M G2 firmware 02HD (160GB MLC)
However - things seem to work smoothly with:
Inte
> What kind of overhead do we get from this kind of thing?
Overheadache...
[i](Thanks, Kronberg, for the answer)[/i]
I figured I'd post the solution to this problem here as well.
Anyway, I solved the problem the old-fashioned way: tell Solaris to fake the
disk device IDs... I added the following to /kernel/drv/ssd.conf:
> ssd-config-list=
> "EUROLOGC", "unsupported-hack";
>
> unsupported-hack=1,0x8,0,0,
I'm trying to put an older Fibre Channel RAID (Fujitsu Siemens S80) box into
use again with ZFS on a Solaris 10 (Update 8) system, but it seems ZFS gets
confused about which "disk" (LUN) is which...
Back in the old days when we used these disk systems on another server we had
problems with Disk
What type of disks are you using?
Have you tried wrapping your disks inside LVM metadevices and then using those
for your ZFS pool?
Now I've tested a firmware 8850 X25-E in one of our X4500s, and things look better:
> # /ifm/bin/smartctl -d scsi -l selftest /dev/rdsk/c5t7d0s0
> smartctl version 5.38 [i386-pc-solaris2.10] Copyright (C) 2002-8 Bruce Allen
> Home page is http://smartmontools.sourceforge.net/
>
> No self-tests have be
I can confirm that on an X4240 with the LSI (mpt) controller:
X25-M G1 with 8820 still returns invalid selftest data
X25-E G1 with 8850 now returns correct selftest data
(I haven't got any X25-M G2)
Going to replace an X25-E with the old firmware in one of our X4500s
soon and we'll see if things
I've done some more testing, and I think my X4240/mpt/X25 problems must be something
else.
Attempting to read (with smartctl) the self-test log on the 8850-firmware X25-E
gives better results than with the old firmware:
X25-E running firmware 8850 on an X4240 with mpt controller:
# smartctl -d sc
> You can "zpool replace" a bad slog device now.
From which kernel release is this implemented/working?
I'm curious whether there are any potential problems with using LVM
metadevices as ZFS zpool targets. I have a couple of situations where letting ZFS use
a device directly causes "Bus ..." errors on the console and lots of
"stalled" I/O. But as soon as I wrap that device inside an LVM metadevice
I wonder exactly what's going on. Perhaps it is the cache flushes that are
causing the SCSI errors when trying to use the SSD (Intel X25-E and X25-M) disks?
Btw, I'm seeing the same behaviour on both an X4500 (SATA/Marvell controller)
and the X4240 (SAS/LSI controller). Well, almost. On the
X4
Oh, and for completeness: if I wrap 'c1t12d0s0' inside an SVM metadevice and
use that to create the "TEST" zpool (without a log), I run the same test command
in 36.3 seconds... I.e.:
# metadb -f -a -c3 c1t13d0s0
# metainit d0 1 1 c1t13d0s0
# metainit d2 1 1 c1t12d0s0
# zpool create TEST /dev/md/d
You might wanna try one thing I just noticed - wrapping the log device inside an SVM
(DiskSuite) metadevice works wonders for the performance on my test server
(Sun Fire X4240)... I do wonder what the downsides might be (apart from having
to fiddle with DiskSuite again). I.e.:
# zpool create TEST c1
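(That command got cut off; a minimal sketch of what the whole sequence might look
like, with hypothetical device names - the SSD slice is wrapped in a one-way
metadevice and then handed to zpool as the log device:)
# metadb -f -a -c3 c1t13d0s0
# metainit d1 1 1 c1t3d0s0
# zpool create TEST c1t2d0 log /dev/md/dsk/d1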
Interesting... I wonder what differs between your system and mine. With my
dirt-simple stress-test:
server1# zpool create X25E c1t15d0
server1# zfs set sharenfs=rw X25E
server1# chmod a+w /X25E
server2# cd /net/server1/X25E
server2# gtar zxf /var/tmp/emacs-22.3.tar.gz
and a fully patched X4240
Still no news on when a real patch will be released for this issue?
> [0] andromeda:/<2>common/sge# wc /etc/dfs/sharetab
> 18537412 157646 /etc/dfs/sharetab
This machine (Thumper) currently runs Solaris 10 Update 3 (with some patches)
and things work just fine. Now, I'm a bit worried about reboot times due to the
number of exported filesystems and I'm think
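(As an aside, a rough way to count how many datasets will actually end up shared at
boot - assuming sharenfs is what drives the sharetab entries:)
# zfs list -H -o name,sharenfs -t filesystem | awk '$2 != "off"' | wc -l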
Yeah, this is annoying. I'm seeing this on a Thumper running Update 3 too...
Has this issue been fixed in Update 4 and/or current releases of OpenSolaris?
Speaking of error recovery due to bad blocks - does anyone know if the SATA disks
that are delivered with the Thumper have "enterprise" or "desktop"
firmware/settings by default? If I'm not mistaken, one of the differences is
that the "enterprise" variant more quickly gives up on bad blocks and
re
We too are seeing this problem on some of our Thumpers - the ones with U4
and/or all the latest patches installed. We have one that we stopped patching
before the kernel patch that introduced this problem, and it works fine...
Works:
[0] andromeda:/<2>ncri86pc/sbin# uname -a
SunOS andromeda 5.10
Sun's disks are labeled with a standard label that is smaller than the actual
disk (so that they can be interchangeable in the future). I'd first try to wipe the
Sun label from the disk and have format write a new label on it... I.e.:
dd if=/dev/zero of=/dev/rdsk/YOURDISK bs=512 count=1024
format
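(Inside format, the steps would presumably be roughly, once the disk is selected:)
format> type        <- choose "0. Auto configure"
format> label
format> quit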
I'm about to start migrating a lot of files on UFS filesystems from a Solaris 9
server to a new server running Solaris 10 (u3) with ZFS (a Thumper). Now...
What's the "best" way to move all these files? Should one use Solaris tar,
Solaris cpio, ufsdump/ufsrestore, rsync or what?
I currently us
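(For what it's worth, one classic approach for a bulk UFS-to-ZFS move is to pipe
ufsdump into ufsrestore - just a sketch, with hypothetical dataset and device names;
rsync is probably the better choice if the copy needs to be re-run incrementally:)
# zfs create tank/data
# cd /tank/data
# ufsdump 0f - /dev/rdsk/c0t0d0s7 | ufsrestore rf -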
Ah :-)
Btw, that bug note is a bit misleading - our use case had nothing to do with
ZFS root filesystems. He was trying to install into a completely separate
filesystem - a very large one. And yes, he found out that setting a quota was a
good workaround :-)
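(I.e., presumably something along these lines - the dataset name here is made up:)
# zfs set quota=100g tank/swinstall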
A coworker of mine ran into a large ZFS-related bug the other day. He was
trying to install Sun Studio 11 on a ZFS filesystem and it just kept on
failing. Then he tried to install on a UFS filesystem on the same machine and
it worked just fine...
After much headscratching and testing and trussi
If you _boot_ the original machine, then it should see that the pool is now
"owned" by the other host and ignore it (you'd have to do a "zpool import -f" again,
I think). Not tested though, so don't take my word for it...
However, if you simply type "go" and let it continue from where it was, then
t
>> ZFS -> HBA -> FC Switch -> JBOD -> "Simple" FC-SATA converter -> SATA disk
> Why bother with a switch here?
Think multiple JBODs.
With a single JBOD a switch is not needed, and then FC is probably overkill too -
normal SCSI would work.
- Peter
> #1 is speed. You can aggregate 4x1Gbit ethernet and still not touch 4Gb/sec
> FC.
> #2 drop in compatibility. I'm sure people would love to drop this into an
> existing SAN
#2 is the key for me. And I also have a #3:
FC has been around a long time now. The HBAs and Switches are (more or le
> too much of our future roadmap, suffice it to say that one should expect
> much, much more from Sun in this vein: innovative software and innovative
> hardware working together to deliver world-beating systems with undeniable
> economics.
Yes please. Now give me a fairly cheap (but still quality
Hmm... I just noticed this qla2100.conf option:
# During link down conditions enable/disable the reporting of
# errors.
#0 = disabled, 1 = enable
hba0-link-down-error=1;
hba1-link-down-error=1;
I _wonder_ what might possibly happen if I change that 1 to a 0 (zero)... :-)
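(I.e., presumably changing the file to read:)
hba0-link-down-error=0;
hba1-link-down-error=0;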
>> So ZFS should be more resilient against write errors, and the SCSI disk or
>> FC drivers
>> should be more resilient against LIPs (the most likely cause of your
>> problem) or other
>> transient errors. (Alternatively, the ifp driver should be updated to
>> support the
>> maximum number of ta
> If you take a look at these messages the somewhat unusual condition
> that may lead to unexpected behaviour (ie. fast giveup) is that
> whilst this is a SAN connection it is achieved through a non-
> Leadville config, note the fibre-channel and sd references. In a
> Leadville compliant instal
There is nothing in the ZFS FAQ about this. I also fail to see how FMA could
make any difference since it seems that ZFS is deadlocking somewhere in the
kernel when this happens...
It works if you wrap all the physical devices inside SVM metadevices and use
those for your ZFS zpool instead. I.e.:
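(The example itself got cut off; a minimal sketch along the lines of the metadevice
commands earlier in the thread, with made-up device names:)
# metadb -f -a -c3 c1t13d0s0
# metainit d1 1 1 c1t1d0s0
# metainit d2 1 1 c1t2d0s0
# zpool create tank mirror /dev/md/dsk/d1 /dev/md/dsk/d2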
Suppose I have a server that is used as a backup system for many other ("live")
servers. It uses ZFS snapshots to enable people to recover files from any date
a year back (or so).
Now, I want to back up this backup server to some kind of external stable
storage in case disaster happens and this
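(One option - just a sketch with made-up pool and host names - would be to replicate
the snapshots to the external box with zfs send/receive, following up with
incremental sends of later snapshots:)
# zfs snapshot -r backup@2009-10-01
# zfs send -R backup@2009-10-01 | ssh offsitehost zfs receive -Fd safepool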
... and in a related question - since rsync uses the ACL code from the Samba
project - has there been some progress in that direction too?
Has anyone looked into adding support for ZFS ACLs to rsync? It would be
really convenient if it supported transparent conversion from old-style
POSIX ACLs to ZFS ACLs on the fly.
One-way POSIX->ZFS is probably good enough. I've tried Googling, but haven't
come up with much. There see