On 10/24/12 3:59 AM, Darren J Moffat wrote:
So in this case you should have a) created the pool with a version that
matches the pool version of the backup server and b) made sure you
create the ZFS file systems with a version that is supported by the
backup server.
And AI allows you to set the
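For example, something like this (a sketch only; substitute the versions the backup server actually reports from its own "zpool upgrade -v" and "zfs upgrade -v"):
zpool upgrade -v                      # list pool versions this host supports
zfs upgrade -v                        # list fs versions this host supports
zpool create -o version=28 tank c0t0d0
zfs create -o version=5 tank/backup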
On 6/18/12 4:07 PM, Koopmann, Jan-Peter wrote:
Thanks. Just noticed that the Hitachi 3TB drives are not available. The
4TB ones are, but with 512B emulation only. However I can get Barracudas
7200.14 with supposedly real 4k quite cheap. Anyone any experience with
those? I might be getting one or tw
On 6/18/12 12:19 AM, Koopmann, Jan-Peter wrote:
Hi Carson,
I have 2 Sans Digital TR8X JBOD enclosures, and they work very well.
They also make a 4-bay TR4X.
http://www.sansdigital.com/towerraid/tr4xb.html
http://www.sansdigital.com/towerraid/tr8xb.html
looks nice! The only th
On 6/17/12 6:36 PM, Timothy Coalson wrote:
No problem, and yes, I think that should work. One thing to keep in
mind, though, is that if the internals of the enclosure simply split
the multilane SAS cable into 4 connectors without an expander, and you
use SATA drives, the controller will use SAT
On 6/17/12 3:21 PM, Koopmann, Jan-Peter wrote:
Hi Tim,
you might be able to use
an adapter to the SFF-8088 external 4 lane SAS connector, which may
increase your options.
So what you are saying is that something like this will do the trick?
http://www.pc-pitstop.com/sata_enclosu
On 6/11/12 3:12 PM, Alan Hargreaves wrote:
There is a ZFS Community on the Oracle Communities that was just kicked
off this month -
https://communities.oracle.com/portal/server.pt/community/oracle_solaris_zfs_file_system/526
Thanks for the heads up, but it's just another horrid Oracle web UI wi
On 5/14/12 2:02 AM, Ian Collins wrote:
Adding the log was OK:
zpool add -f export log mirror c10t3d0s0 c10t4d0s0
But adding the cache fails:
zpool add -f export cache c10t3d0s1 c10t4d0s1
invalid vdev specification
the following errors must be manually repaired:
/dev/dsk/c10t3d0s2 is part of ac
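Roughly what I'd check before retrying (device names taken from the commands above; s2 is conventionally the whole-disk slice, so it overlaps s0 and s1):
prtvtoc /dev/rdsk/c10t3d0s2           # show the slice layout and overlaps
zdb -l /dev/dsk/c10t3d0s2             # see whether an old ZFS label is still present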
On 4/26/12 2:17 PM, J.P. King wrote:
Shared storage is evil (in this context). Corrupt the storage, and you
have no DR.
Now I am confused. We're talking about storage which can be used for
failover, aren't we? In which case we are talking about HA not DR.
Depends on how you define DR - we h
On 4/25/12 10:10 PM, Richard Elling wrote:
On Apr 25, 2012, at 8:30 PM, Carson Gaspar wrote:
And applications that don't pin the mount points, and can be idled
during the migration. If your migration is due to a dead server, and
you have pending writes, you have no choice but to reboo
On 4/25/12 6:57 PM, Paul Kraus wrote:
On Wed, Apr 25, 2012 at 9:07 PM, Nico Williams wrote:
On Wed, Apr 25, 2012 at 7:37 PM, Richard Elling
wrote:
Nothing's changed. Automounter + data migration -> rebooting clients
(or close enough to rebooting). I.e., outage.
Uhhh, not if you
maximum memory page size and is limited to no more than 4KB. iSCSI
appears to acknowledge every individual block that is sent. That means
the most data one can stream without an ACK is 4KB. That means the
throughput is limited by the latency of the network rather than the
bandwidth.
I am _far_
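For illustration of the claim being made (whether or not it actually holds for iSCSI): if only 4 KB can be outstanding per round trip, throughput is roughly window / RTT, e.g. 4 KB / 0.5 ms = 8 MB/s, far below the ~112 MB/s a GigE link can carry.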
On 8/7/11 6:36 AM, Roy Sigurd Karlsbakk wrote:
That's why, back in 1992, the sliding window protocol was created
(http://tools.ietf.org/html/rfc1323), so that a peer won't wait for a TCP ACK
before resuming operation.
It was part of TCP _long_ before that (it was never as stupid as XMODEM
;
On 3/17/2011 8:11 AM, David Magda wrote:
From: Paul Kraus [mailto:p...@kraus-haus.org]
[...]
2. Unix / Solaris limitation of 16 / 32 group membership
#2 is fixed in OpenSolaris as of snv_129:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=4088757
The new limit is 1024--the
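If memory serves, the limit is the ngroups_max tunable; a sketch of raising it (verify the exact name and ceiling against your release):
# in /etc/system, then reboot
set ngroups_max = 1024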
On 2/16/11 9:58 PM, Krunal Desai wrote:
When I try to do a SMART status read (more than just a simple
identify), looks like the 1068E drops the drive for a little bit. I
bought the Intel-branded LSI SAS3081E:
Current active firmware version is 0120 (1.32.00)
Firmware image's version is MPTFW
On 2/2/11 5:47 PM, Krunal Desai wrote:
Fails for me, my version does not recognize the 'sat' option. I've
been using -d scsi:
movax@megatron:~# smartctl -h
smartctl version 5.36 [i386-pc-solaris2.8] Copyright (C) 2002-6 Bruce Allen
So build the current version of smartmontools. As you should
On 2/2/11 5:43 PM, Krunal Desai wrote:
I updated firmware on both of my USAS-L8i (LSI1068E based), and while
controller numbering has shifted around in Solaris (went from c10/c11
to c11/c12, not a big deal I think), suddenly smartctl is able to
pull temperatures. Can't get a full SMART listing,
On 2/1/11 5:52 PM, Krunal Desai wrote:
SMART status was reported healthy as well (got smartctl kind of
working), but I cannot read the SMART data of my disks behind the
1068E due to limitations of smartmontools I guess. (e.g. 'smartctl -d
scsi -a /dev/rdsk/c10t0d0' gives me serial #, model, and
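Roughly what I'd try with a current smartmontools build (device path from the message above):
smartctl -d scsi -a /dev/rdsk/c10t0d0
smartctl -d sat -a /dev/rdsk/c10t0d0
smartctl -d sat,12 -a /dev/rdsk/c10t0d0   # 12-byte SAT pass-through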
eally"? Use metattach to grow a metadevice
or soft partition. Use growfs to grow UFS on the grown device.
He is probably referring to the fact that growfs locks the filesystem.
--
Carson Gaspar
On 9/10/10 4:16 PM, Piotr Jasiukajtis wrote:
Ok, now I know it's not related to the I/O performance, but to the ZFS itself.
At some time all 3 pools were locked in that way:
          extended device statistics       ---- errors ---
    r/s    w/s   kr/s   kw/s wait actv wsv
Cindy Swearingen wrote:
Hi Daniel,
No conversion from a mirrored to RAIDZ configuration is available yet.
Well... you can do it, but it's a bit byzantine, and leaves you without
redundancy during the migration.
1) Add your new disks
2) Create a sparse file at least as large as your smallest
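The rest of that sequence, roughly (a sketch only; disk names and sizes are made up, and you run without redundancy until the data is copied and the fake device is replaced by a real disk):
mkfile -n 2000g /var/tmp/fakedisk      # sparse file at least as large as the smallest real disk
zpool create -f newpool raidz c2t0d0 c2t1d0 c2t2d0 /var/tmp/fakedisk
zpool offline newpool /var/tmp/fakedisk   # never actually write to the sparse file
# copy the data, e.g. zfs send -R oldpool@snap | zfs receive -d newpool
# then destroy the old mirror and: zpool replace newpool /var/tmp/fakedisk c3t0d0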
Ragnar Sundblad wrote:
I was referring to the case where zfs has written data to the drive but
still hasn't issued a cache flush, and before the cache flush the drive
is reset. If zfs finally issues a cache flush and then isn't informed
that the drive has been reset, data is lost.
I hope this
Arne Jansen wrote:
David Magda wrote:
On Wed, June 16, 2010 10:44, Arne Jansen wrote:
David Magda wrote:
I'm not sure you'd get the same latency and IOps with disk that you can
with a good SSD:
http://blogs.sun.com/brendan/entry/slog_screenshots
[...]
Please keep in mind I'm talking ab
Jan Kryl wrote:
the bug (6798273) has been closed as incomplete with following
note:
"I cannot reproduce any issue with the given testcase on b137."
So you should test this with b137 or newer build. There have
been some extensive changes going to treeclimb_* functions,
so the bug is probably fi
Forrest Aldrich wrote:
I've seen this product mentioned before - the problem is, we use
Veritas heavily on a public network and adding yet another software
dependency would be a hard sell. :(
Be very certain that you need synchronous replication before you do
this. For some ACID systems it re
Willard Korfhage wrote:
This afternoon, messages like the following started appearing in
/var/adm/messages:
May 18 13:46:37 fs8 scsi: [ID 365881 kern.info]
/p...@0,0/pci8086,2...@1/pci15d9,a...@0 (mpt0):
May 18 13:46:37 fs8 Log info 0x3108 received for target 5.
May 18 13:46:37 fs8
Miles Nordin wrote:
"bh" == Brandon High writes:
bh> From what I've read, the Hitachi and Samsung drives both
bh> support CCTL, which is in the ATA-8 spec. There's no way to
bh> toggle it on from OpenSolaris (yet) and it doesn't persist
bh> through reboot so it's not really ide
Glenn Lagasse wrote:
How about ease-of-use, all you have to do is plug in the usb disk and
zfs will 'do the right thing'. You don't have to remember to run zfs
send | zfs receive, or bother with figuring out what to send/recv etc
etc etc.
It should be possible to automate that via syseventd/s
Nicolas Williams wrote:
On Wed, Apr 21, 2010 at 01:03:39PM -0500, Jason King wrote:
ISTR POSIX also doesn't allow a number of features that can be turned
on with zfs (even ignoring the current issues that prevent ZFS from
being fully POSIX compliant today). I think an additional option for
the
Richard Elling wrote:
So you are saying that the OnTap .snapshot directory is equivalent to a symlink
to $FSROOT/.zfs/snapshot? That would "solve" the directory shuffle problem.
Not quite. It's equivalent(ish) to:
cd "$MYDIR" && mkdir .snapshot && cd .snapshot
for s in "$FSROOT"/.zfs/snapsho
Nicolas Williams wrote:
On Tue, Apr 20, 2010 at 04:28:02PM +, A Darren Dunham wrote:
On Sat, Apr 17, 2010 at 09:03:33AM -0400, Edward Ned Harvey wrote:
"zfs list -t snapshot" lists in time order.
Good to know. I'll keep that in mind for my "zfs send" scripts but it's not
relevant for the
Bob Friesenhahn wrote:
On Sun, 18 Apr 2010, Carson Gaspar wrote:
Before (Mac OS 10.6.3 NFS client over GigE, local subnet, source file
in RAM):
carson:arthas 0 $ time tar jxf /Volumes/RamDisk/gcc-4.4.3.tar.bz2
real    92m33.698s
user    0m20.291s
sys     0m37.978s
That's awful!
..
Edward Ned Harvey wrote:
I'm saying that even a single pair of disks (maybe 4 disks if you're using
cheap slow disks) will outperform a 1Gb Ethernet. So if your bottleneck is
the 1Gb Ethernet, you won't gain anything (significant) by accelerating the
stuff that isn't the bottleneck.
And you a
Carson Gaspar wrote:
I just found an 8 GB SATA Zeus (Z4S28I) for £83.35 (~US$127) shipped to
California. That should be more than large enough for my ZIL @home,
based on zilstat.
The web site says EOL, limited to current stock.
http://www.dpieshop.com/stec-zeus-z4s28i-8gb-25-sata-ssd-solid
Carson Gaspar wrote:
Miles Nordin wrote:
"re" == Richard Elling writes:
How do you handle the case when a hotplug SATA drive is powered off
unexpectedly with data in its write cache? Do you replay the writes,
or do they go down the ZFS hotplug write hole?
If zfs never got
Miles Nordin wrote:
"re" == Richard Elling writes:
How do you handle the case when a hotplug SATA drive is powered off
unexpectedly with data in its write cache? Do you replay the writes,
or do they go down the ZFS hotplug write hole?
If zfs never got a positive response to a cache flush,
Erik Trimble wrote:
On Tue, 2010-04-06 at 08:26 -0700, Anil wrote:
Seems a nice sale on Newegg for SSD devices. Talk about choices. What's the
latest recommendations for a log device?
http://bit.ly/aL1dne
The Vertex LE models should do well as ZIL (though not as well as an
X25-E or a Zeus)
Willard Korfhage wrote:
Yes, I was hoping to find the serial numbers. Unfortunately, it
doesn't show any serial numbers for the disk attached to the Areca
raid card.
Does Areca provide any Solaris tools that will show you the drive info?
If you are using the Areca in JBOD mode, smartctl will f
Jeroen Roodhart wrote:
The thread was started to get insight in behaviour of the F20 as ZIL.
_My_ particular interest would be to be able to answer why performance
doesn't seem to scale up when adding vmod-s...
My best guess would be latency. If you are latency bound, adding
additional paralle
Brandon High wrote:
On Thu, Apr 1, 2010 at 11:46 AM, Carson Gaspar <car...@taltos.org> wrote:
"Nonsensical" may be a bit strong, but I can see no possible use
case where a 3 disk raidz2 isn't better served by a 3-way mirror.
Once bp_rewrite is done, you
Cindy Swearingen wrote:
If someone new to ZFS wants to take 3 old (but reliable) disks and make
a raidz2 configuration for testing, we would not consider this is a
nonsensical idea. You can then apply what you learn about ZFS space
allocation and redundancy to a new configuration.
"Nonsensical
Richard Elling wrote:
On Mar 30, 2010, at 3:32 PM, Jeroen Roodhart wrote:
If you are going to trick the system into thinking a volatile cache is
nonvolatile, you
might as well disable the ZIL -- the data corruption potential is the same.
I'm sorry? I believe the F20 has a supercap or the like?
Slack-Moehrle wrote:
And I should mention that I have a boot drive (500GB SATA) so I don't have to
consider booting from the RAID, I just want to use it for storage.
- Original Message -
From: "Slack-Moehrle"
To: "zfs-discuss"
Sent: Friday, March 26, 2010 11:39:35 AM
Subject: [zfs-disc
Freddie Cash wrote:
So, is it just a "standard" that hardware/software RAID setups require 3
drives for a RAID5 array? And 4 drives for RAID6?
It's padding on the sharp edges. See my earlier post - a 2 disk RAID5 is
silly, use a mirror. A 3 disk RAID6 is silly, use a 3-way mirror. Both
are
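To make that concrete (illustrative device names): both layouts below give one disk's worth of usable space and survive any two failures, but the mirror can read from any disk and resilvers by simple copy:
zpool create tank mirror c1t0d0 c1t1d0 c1t2d0   # 3-way mirror
zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0   # 3-disk raidz2, same usable capacity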
Bruno Sousa wrote:
What do you mean by "Using fewer than 4 disks in a raidz2 defeats the
purpose of raidz2, as you will always be in a degraded mode" ?
Does it mean that having 2 vdevs with 3 disks won't be redundant in
the event of a drive failure?
Technically a 3 disk raidz2 won't be de
Fajar A. Nugraha wrote:
On Thu, Mar 25, 2010 at 10:31 AM, Carson Gaspar wrote:
Fajar A. Nugraha wrote:
You will do best if you configure the raid controller to JBOD.
Problem: HP's storage controller doesn't support that mode.
It does, ish. It forces you to create a bunch of single
Carson Gaspar wrote:
Fajar A. Nugraha wrote:
On Thu, Mar 25, 2010 at 1:02 AM, Edward Ned Harvey
wrote:
I think the point is to say: ZFS software raid is both faster and more
reliable than your hardware raid. Surprising though it may be for a
newcomer, I have statistics to back that up
Fajar A. Nugraha wrote:
On Thu, Mar 25, 2010 at 1:02 AM, Edward Ned Harvey
wrote:
I think the point is to say: ZFS software raid is both faster and more
reliable than your hardware raid. Surprising though it may be for a
newcomer, I have statistics to back that up,
Can you share it?
You w
Bob Friesenhahn wrote:
On Thu, 18 Mar 2010, erik.ableson wrote:
Ditto on the Linux front. I was hoping that Solaris would be the
exception, but no luck. I wonder if Apple wouldn't mind lending one
of the driver engineers to OpenSolaris for a few months...
Perhaps the issue is the filesyst
Someone wrote (I haven't seen the mail, only the unattributed quote):
My guess is unit conversion and rounding. Your pool has 11 base 10 TB,
which is 10.2445 base 2 TiB. Likewise your fs has 9 base 10 TB, which is
8.3819 base 2 TiB.
Not quite.
11 x 10^12 =~ 10.004 x (1024^4).
So, th
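Applying the same arithmetic to the filesystem number: 9 x 10^12 =~ 8.185 x (1024^4), i.e. about 8.19 TiB rather than 8.38.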
Tonmaus wrote:
I am lacking 1 TB on my pool:
u...@filemeister:~$ zpool list daten
NAME    SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
daten    10T  3,71T  6,29T  37%  1.00x  ONLINE  -
u...@filemeister:~$ zpool status daten
  pool: daten
 state: ONLINE
 scrub: none requested
config:
        NAME
Khyron wrote:
I believe Richard Elling recommended "cfgadm -v". I'd also suggest
"iostat -E", with and without "-n" for good measure.
So that's "iostat -E" and "iostat -En". As long as you know the
physical drive
specification for the drive (ctd which appears to be c9t1d0 from
the other e-m
Tomas Ögren wrote:
On 02 March, 2010 - Carson Gaspar sent me these 0,5K bytes:
I strongly suggest that folks who are thinking about this examine what
NetApp does when exporting NTFS security model qtrees via NFS. It
constructs a mostly bogus set of POSIX permission info based on the ACL
I strongly suggest that folks who are thinking about this examine what
NetApp does when exporting NTFS security model qtrees via NFS. It
constructs a mostly bogus set of POSIX permission info based on the ACL.
All access is enforced based on the actual ACL. Sadly for NFSv3 clients
there is no w
Richard Elling wrote:
...
As you can see, so much has changed, hopefully for the better, that running
performance benchmarks on old software just isn't very interesting.
NB. Oracle's Sun OpenStorage systems do not use Solaris 10 and if they did, they
would not be competitive in the market. The n
Paul Armstrong wrote:
I'm surprised at the number as well.
Running it again, I'm seeing it jump fairly high just before the fork errors:
bash-4.0# ps -ef | grep zfsdle | wc -l
20930
(the next run of ps failed due to the fork error).
So maybe it is running out of processes.
ZFS file data fro
Travis Tabbal wrote:
If someone from Sun will confirm that it should work to use the mpt
driver from 2009.06, I'd be willing to set up a BE and try it. I
still have the snapshot from my 2009.06 install, so I should be able
to mount that and grab the files easily enough.
I tried, it doesn't work
Carson Gaspar wrote:
Mark Johnson wrote:
I think there are two different bugs here...
I think there is a problem with MSIs and some variant of mpt
card on xVM. These seem to be showing up as timeout errors.
Disabling MSIs for this adapter seems to fix this problem.
For folks seeing this
Mark Johnson wrote:
I think there are two different bugs here...
I think there is a problem with MSIs and some variant of mpt
card on xVM. These seem to be showing up as timeout errors.
Disabling MSIs for this adapter seems to fix this problem.
For folks seeing this problem, what HBA adapter a
Travis Tabbal wrote:
I have a possible workaround. Mark Johnson has
been emailing me today about this issue and he proposed the
following:
You can try adding the following to /etc/system, then rebooting...
set xpv_psm:xen_support_msi = -1
I am also running XVM, and after modifying /etc/syste
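For reference, a sketch of applying and verifying that setting (the mdb incantation is from memory; double-check the symbol name on your build):
# append to /etc/system, then reboot:
set xpv_psm:xen_support_msi = -1
# after reboot, confirm the value the kernel is using:
echo 'xpv_psm`xen_support_msi/D' | mdb -k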
For all of those suffering from mpt timeouts in snv_127, I decided to
give the ancient itmpt driver a whirl. It works fine, and in my brief
testing a zfs scrub that would generate about 1 timeout every 2 minutes
or so now runs with no problems.
The downside is that lsiutil and raidctl both fai
On 10/26/09 5:33 PM, p...@paularcher.org wrote:
I can't find much on gam_server on Solaris (couldn't find too much on it
at all, really), and port_create is apparently a system call. (I'm not a
developer--if I can't write it in BASH, Perl, or Ruby, I can't write
it.)
I appreciate the suggestions,
On 10/26/09 3:31 PM, Richard Elling wrote:
How about a consumer for gvfs-monitor-dir(1) or gvfs-monitor-file(1)? :-)
The docs are... ummm... "skimpy" is being rather polite. The docs I can find via
Google say that they will launch some random unspecified daemons via d-bus (I
assume gvfsd ans
On 10/25/09 5:38 PM, Paul Archer wrote:
5:12pm, Cyril Plisko wrote:
while there is no inotify for Solaris, there are similar technologies
available.
Check port_create(3C) and gam_server(1)
I can't find much on gam_server on Solaris (couldn't find too much on it
at all, really), and port_cre
On 10/24/09 9:43 AM, Richard Elling wrote:
OK, here we see 4 I/Os pending outside of the host. The host has
sent them on and is waiting for them to return. This means they are
getting dropped either at the disk or somewhere between the disk
and the controller.
When this happens, the sd driver w
On 10/24/09 8:37 AM, Richard Elling wrote:
At LISA09 in Baltimore next week, Darren is scheduled to give an update
on the ZFS crypto project. We should grab him, take him to our secret
rendition site at Inner Harbor, force him into a comfy chair, and
beer-board him until he confesses.
I can su
On 10/22/09 4:07 PM, James C. McPherson wrote:
Adam Cheal wrote:
It seems to be timing out accessing a disk, retrying, giving up and then
doing a bus reset?
...
ugh. New bug time - bugs.opensolaris.org, please select
Solaris / kernel / driver-mpt. In addition to the error
messages and descript
To recap for those who don't recall my plaintive cries for help, I lost a pool
due to the following sequence of events:
- One drive in my raidz array becomes flaky, has frequent "stuck" I/Os due to
drive error recovery, trashing performance
- I take flaky drive offline (zpool offline...)
- I b
Maurilio Longo wrote:
I did try to use smartmontools, but it cannot report SMART logs nor start
SMART tests, so I don't know how to look at their internal state.
Really? That's odd...
You could also have a firmware bug on your disks. You might try lowering
the number of tagged commands per d
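One way to do that, if I recall the tunables correctly (this is the global knob; per-target settings live in sd.conf/driver.conf and vary by driver):
# /etc/system, then reboot: cap outstanding commands per device
set sd:sd_max_throttle = 8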
Maurilio Longo wrote:
the strange thing is that this is happening on several disks (can it be that
they are all failing?)
Possible, but less likely. I'd suggest running some disk I/O tests, looking at
the drive error counters before/after.
What is the controller bug you're talking about? I'm run
Maurilio Longo wrote:
Hi,
I have a pc with a MARVELL AOC-SAT2-MV8 controller and a pool made up of a
six disks in a raid-z pool with a hot spare.
...
Now, the problem is that issuing an
iostat -Cmnx 10
or any other time interval, I've seen, sometimes, a complete stall of disk
I/O due to a d
Also can someone tell me if I'm too late for an uberblock rollback to help me?
Diffing "zdb -l" output between c7t0 and c7t1 I see:
-txg=12968048
+txg=12968082
Is that too large a txg gap to roll back, or is it still possible?
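For anyone following along, the labels being diffed come from something like (a sketch; slice numbers assumed):
zdb -l /dev/rdsk/c7t0d0s0 > /tmp/label0
zdb -l /dev/rdsk/c7t1d0s0 > /tmp/label1
diff /tmp/label0 /tmp/label1 | grep txg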
Carson Gaspar wrote:
Carson Gaspar wro
Carson Gaspar wrote:
I'm booted back into snv118 (booting with the damaged pool disks
disconnected so the host would come up without throwing up). After hot
plugging the disks, I get:
bash-3.2# /usr/sbin/zdb -eud media
zdb: can't open media: File exists
OK, things are now
Carson Gaspar wrote:
Carson Gaspar wrote:
I'll also note that the kernel is certainly doing _something_ with my
pool... from "iostat -n -x 5":
extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
   40.5    5.4  154
Carson Gaspar wrote:
I'll also note that the kernel is certainly doing _something_ with my
pool... from "iostat -n -x 5":
extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
   40.5    5.4 1546.4    0.0  0.0  0.3
Carson Gaspar wrote:
Carson Gaspar wrote:
Victor Latushkin wrote:
Carson Gaspar wrote:
is zdb happy with your pool?
Try e.g.
zdb -eud
I'm booted back into snv118 (booting with the damaged pool disks
disconnected so the host would come up without throwing up). After hot
pluggin
Carson Gaspar wrote:
Victor Latushkin wrote:
Carson Gaspar wrote:
is zdb happy with your pool?
Try e.g.
zdb -eud
I'm booted back into snv118 (booting with the damaged pool disks
disconnected so the host would come up without throwing up). After hot
plugging the disks, I get:
Victor Latushkin wrote:
Carson Gaspar wrote:
zpool online media c7t0d0
j...@opensolaris:~# zpool online media c7t0d0
cannot open 'media': no such pool
Already tried that ;-)
--
This message posted from opensolaris.org
D'oh! Of course, I should have been paying attention
t
> On Wed, 30 Sep 2009 11:01:13 PDT, Carson Gaspar
> wrote:
>
> >> zpool online media c7t0d0
> >
> >j...@opensolaris:~# zpool online media c7t0d0
> >cannot open 'media': no such pool
> >
> >Already tried that ;-)
>
> Perhaps you
> >> zpool online media c7t0d0
> >
> > j...@opensolaris:~# zpool online media c7t0d0
> > cannot open 'media': no such pool
> >
> > Already tried that ;-)
> > --
> > This message posted from opensolaris.org
> >
> >
> D'oh! Of course, I should have been paying attention
> to the fact that the
> pool
> zpool online media c7t0d0
j...@opensolaris:~# zpool online media c7t0d0
cannot open 'media': no such pool
Already tried that ;-)
--
This message posted from opensolaris.org
One of the disks in my RAIDZ array was behaving oddly (lots of bus errors) so I
took it offline to replace it. I shut down the server, put in the replacement
disk, and rebooted. Only to discover that a different drive had chosen that
moment to fail completely. So I replace the failing (but not y
Joe Toppi wrote:
I have machine that had 2x 1TB drives in it. They were in the same zpool and
that entire zpool is set to "copies=2". From what I understand this will
store all my data twice, and if the SPA is doing its job right it will store
the copies on different disks and store the checksum
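For reference, the property in question (a sketch; pool/fs names are made up):
zfs set copies=2 tank/data        # applies only to blocks written after it is set
zfs get copies tank/data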
Erik Trimble wrote:
You are correct in that restoring a full stream creates the appropriate
versioned filesystem. That's not the problem.
The /much/ more likely scenario is this:
(1) Let's say I have a 2008.11 server. I back up the various ZFS
filesystems, with both incremental and full stre
Erik Trimble wrote:
> I haven't see this specific problem, but it occurs to me thus:
For the reverse of the original problem, where (say) I back up a 'zfs
send' stream to tape, then later on, after upgrading my system, I want
to get that stream back.
Does 'zfs receive' support reading a ver
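The kind of round trip being discussed, sketched with made-up names (redirect to a file here; a tape or dd pipeline works the same way):
zfs snapshot tank/home@monday
zfs send tank/home@monday > /backup/home-monday.zfs
zfs receive -v tank/restored < /backup/home-monday.zfs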
Carson Gaspar wrote:
Except you replied to me, not to the person who has SSDs. I have dead
standard hard disks, and the mpt driver is just not happy. After
applying 141737-04 to my Sol 10 system, things improved greatly, and
the constant bus resets went away. After upgrading to OpenSolaris 6
James C. McPherson wrote:
On Thu, 10 Sep 2009 12:31:11 -0700
Carson Gaspar wrote:
Alex Li wrote:
We finally resolved this issue by changing the LSI driver. For details, please
refer to here
http://enginesmith.wordpress.com/2009/08/28/ssd-faults-finally-resolved/
Anyone from Sun have any knowledge
Alex Li wrote:
We finally resolved this issue by changing the LSI driver. For details, please
refer to here
http://enginesmith.wordpress.com/2009/08/28/ssd-faults-finally-resolved/
Anyone from Sun have any knowledge of when the open source mpt driver will be
less broken? Things improved greatly for
Erik Trimble wrote:
Yes, if you stick (say) a 1.5TB, 1TB, and .5TB drive together in a
RAIDZ, you will get only 1TB of usable space. Of course, there is
always the ability to use partitions instead of the whole disk, but I'm
not going to go into that. Suffice to say, RAIDZ (and practically
Ross Walker wrote:
On Aug 4, 2009, at 8:36 PM, Carson Gaspar wrote:
Isn't the PERC 6/e just a re-branded LSI? LSI added SSD support recently.
Yes, but the LSI support of SSDs is on later controllers.
Please cite your source for that statement.
The PERC 6/e is an LSI 1078. The LS
Ross Walker wrote:
I get pretty good NFS write speeds with NVRAM (40MB/s 4k sequential
write). It's a Dell PERC 6/e with 512MB onboard.
...
there, dedicated slog device with NVRAM speed. It would be even better
to have a pair of SSDs behind the NVRAM, but it's hard to find
compatible SSDs for
Frank Middleton wrote:
Finally, a number of posters blamed VB for ignoring a flush, but
according to the evil tuning guide, without any application syncs,
ZFS may wait up to 5 seconds before issuing a synch, and there
must be all kinds of failure modes even on bare hardware where
it never gets a
stephen bond wrote:
can you provide an example of how to read from dd cylinder by cylinder?
What's a cylinder? That's a meaningless term these days. You dd byte ranges.
Pick whatever byte range you want. If you want mythical cylinders, fetch the
cylinder size from "format" and use that as yo
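A sketch of reading an arbitrary byte range (offsets are made up; use the raw device on Solaris):
# read 16 MiB starting 1 GiB into the disk, in 512-byte blocks
dd if=/dev/rdsk/c0t0d0p0 of=/tmp/chunk bs=512 iseek=2097152 count=32768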
Martin Englund wrote:
I'm wondering if someone has tried using Sans Digital's Tower Raid
TR8M[1] with ZFS (I'm especially curious about the bundled 2-port
eSATA PCIe Host Bus Adapter)
The port multiplier issue will probably prevent this from working right
now, as someone else has already menti
Miles Nordin wrote:
There's also been talk of two tools, MegaCli and lsiutil, which are
both binary only and exist for both Linux and Solaris, and I think are
used only with the 1078 cards but maybe not.
lsiutil works with LSI chips that use the Fusion-MPT interface (SCSI,
SAS, and FC), inclu
Rob Healey wrote:
Does anyone know if problems related to the panics dismissed as
"duplicate of 6746456" ever resulted in Solaris 10 patches? It sounds
like they were actually solved in OpenSolaris but S10 is still
panicing predictably when Linux NFS clients try to change a nobody
UID/GID on a Z
James C. McPherson wrote:
Use raidctl(1m). For fwflash(1m), this is on the "future project"
list purely because we've got much higher priority projects on the
boil - if we couldn't use raidctl(1m) this would be higher up the
list.
Nice to see that raidctl can do that. Although I don't see a wa
James C. McPherson wrote:
On Sun, 21 Jun 2009 19:01:31 -0700
As a member of the team which works on mpt(7d), I'm disappointed that
you believe you need to use lsiutil to "fully access all the functionality"
of the board.
What gaps have you found in mpt(7d) and the standard OpenSolaris
tools th
I'll chime in as a happy owner of the LSI SAS 3081E-R PCI-E board. It
works just fine. You need to get "lsiutil" from the LSI web site to
fully access all the functionality, and they cleverly hide the download
link only under their FC HBAs on their support site, even though it
works for everyth
Daniel Carosone wrote:
This sounds like FUD.
There's a comprehensive test suite, and it apparently passes.
It's not exactly FUD. If you search the list archives, you'll find
messages about multiple bugs in the 32-bit code. I strongly suspect that
these have been fixed in the interim, but it
Lori Alt wrote:
A root pool is composed of one top-level vdev, which can be a mirror
(i.e. 2 or more disks). A raidz vdev is not supported for the root pool
yet. It might be supported in the future, but the timeframe is unknown
at this time.
The original poster was asking about a zpool of m