where the pools
never got an error, same panic.
Any ideas for volume rescue are welcome - if I missed some important
information, please tell me.
Regards, Mark
some screenshots that may help:
pool: tank
id: 5649976080828524375
state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:
data ONLINE
mirror-0 ONLINE
c27t2d0  ONLINE
c27t0d0  ONLINE
m
I'm new with ZFS, but I have had good success using it with raw physical disks.
One of my systems has access to an iSCSI storage target. The underlying
physical array is in a proprietary disk storage device from Promise. So the
question is, when building an OpenSolaris host to store its data on a
Hello, first time posting. I've been working with ZFS on and off, with limited
*nix experience, for a year or so now, and have read a lot of things by a lot of
you, I'm sure. There's still a ton I don't understand or know, I'm sure.
We've been having awful IO latencies on our 7210 running about 40 VM's spread
I'm trying to understand how snapshots work in terms of how I can use them for
recovering and/or duplicating virtual machines, and how I should set up my file
system.
I want to use OpenSolaris as a storage platform with NFS/ZFS for some
development VMs; that is, the VMs use the OpenSolaris box
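(A minimal sketch of the snapshot/clone workflow being asked about, assuming one ZFS filesystem per VM under a hypothetical tank/vms dataset; all names below are placeholders, not from the original post.)
zfs snapshot tank/vms/dev01@golden               # instant point-in-time copy of the VM's filesystem
zfs rollback tank/vms/dev01@golden               # recover the VM back to that point
zfs clone tank/vms/dev01@golden tank/vms/dev02   # writable duplicate, e.g. for a second development VM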
Ok. Thanks. Why does '/' show up in the newly created /BE/etc/vfstab but not
in the current /etc/vfstab? Should '/' be in the /BE/etc/vfstab?
btw, thank you for responding so quickly to this.
Mark
On Wed, Oct 21, 2009 at 12:49 PM, Enda O'Connor wrote:
> Mark Horst
I have a 1U server that supports 2 SATA drives in the chassis. I have 2 750 GB
SATA drives. When I install opensolaris, I assume it will want to use all or
part of one of those drives for the install. That leaves me with the remaining
part of disk 1, and all of disk 2.
Question is, how do I be
I thank each of you for all of your insights. I think if this was a production
system I'd abandon the idea of 2 drives and get a more capable system, maybe a
2U box with lots of SAS drives so I could use RAIDZ configurations. But in this
case, I think all I can do is try some things until I unde
Otherwise does anyone have any other tips for monitoring usage? I
wonder how they have it all working in Fishworks gear as some of the
analytics demos show you being able to drill down on through file
activity in real time.
Any advice or suggestions greatly appreciated.
Cheers,
Mark
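(Not Fishworks Analytics, but a rough sketch of what the command line already gives you; "tank" is a placeholder pool name.)
zpool iostat -v tank 5    # per-vdev bandwidth and IOPS, refreshed every 5 seconds
fsstat zfs 5              # VFS-level operation counts for ZFS filesystems
iostat -xn 5              # per-device service times and utilisation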
her level compared to HP's pre-sales
department... not that they're bad but in this realm you're the man :)
Thanks,
Mark
Good call Saso. Sigh... I guess I wait to hear from HP on supported IT
mode HBAs in their D2000s or other jbods.
On Tue, Jan 8, 2013 at 11:40 AM, Sašo Kiselkov wrote:
> On 01/08/2013 04:27 PM, mark wrote:
> >> On Jul 2, 2012, at 7:57 PM, Richard Elling wrote:
> >>
> >
We are using a 7210, 44 disks I believe, 11 stripes of RAIDz sets. When I
installed I selected the best bang for the buck on the speed vs capacity chart.
We run about 30 VM's on it, across 3 ESX 4 servers. Right now, it's all running
NFS, and it sucks... sooo slow.
iSCSI was no better.
I a
Hey thanks for the replies everyone.
Sadly most of those options will not work, since we are using a Sun Unified
Storage 7210; the only option is to buy the Sun SSDs for it, which is about
$15k USD for a pair. We also don't have the ability to shut off the ZIL or any of
the other options that o
It does, it's on a pair of large APCs.
Right now we're using NFS for our ESX Servers. The only iSCSI LUN's I have are
mounted inside a couple Windows VM's. I'd have to migrate all our VM's to
iSCSI, which I'm willing to do if it would help and not cause other issues.
So far the 7210 Applia
Hi Harry,
I doubt it too. Try here to be sure (no need to install, unzip in a folder
and just run).
CPUID <http://www.cpuid.com/>
Check the processor features when you run the app. I hope that helps.
/Mark :-)
2009/2/26 Tim
> Then you would be looking for AMD-V extensions. VT is
. They will be connected by gigabit ethernet. So my
question is how do I mirror one raidz Array across the network to the other?
Thanks for all your help
Mark.
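(ZFS has no built-in synchronous mirroring between hosts; the usual answer is periodic replication with send/receive. A rough sketch, assuming both pools are named "tank" and the remote host is "otherhost" - all placeholder names.)
zfs snapshot -r tank@sync1                                  # consistent point-in-time copy of the whole pool
zfs send -R tank@sync1 | ssh otherhost zfs recv -Fdu tank   # initial full replication
zfs snapshot -r tank@sync2                                  # later, send only what changed since sync1
zfs send -R -i tank@sync1 tank@sync2 | ssh otherhost zfs recv -Fdu tank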
t it really. Is this
correct?
thanks again for all your help
Cheers
Mark
to buy it all for
around AUS$350 - CPU, mobo, RAM and everything. I've tried it with a few Solaris
distros and it's worked fine and been rather fast.
Cheers
Mark
Hey,
I will submit it. However, does OpenSolaris have a separate HCL, or do I just
use the Solaris one?
Cheers
Mark
.
Does anybody see a problem with this?
Also, I know this isn't ZFS, but is there any upper limit on file size with Samba?
Thanks For all your help.
Mark
at by
using IPMP the bandwidth is increased due to sharing across all the network
cards - is this true?
Thanks again for all your help
Cheers
Mark
Hey all again,
Looking into a few other options. How about InfiniBand? It would give us more
bandwidth, but will it increase complexity/price? Any thoughts?
Cheers
Mark
sage (shown in the /var/adm/messages) :
>
> scsi: [ID 107833 kern.warning] WARNING:
> /p...@0,0/pci8086,3...@4/pci1028,1...@0 (mpt0):
>
> Does anyone have any tips on how to start tracing the problem?
>
Have a look at Bug ID: 6894775
http://bugs.opensolaris.org/bugdatabase/view_
On 4/21/10 6:49 AM, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Nicolas Williams
And you can
On 04/21/10 08:45 AM, Edward Ned Harvey wrote:
From: Mark Shellenbaum [mailto:mark.shellenb...@oracle.com]
You can create/destroy/rename snapshots via mkdir, rmdir, mv inside
the
.zfs/snapshot directory, however, it will only work if you're running
the
command locally. It will not
On 23 Apr, 2010, at 7.06, Phillip Oldham wrote:
>
> I've created an OpenSolaris 2009.06 x86_64 image with the zpool structure
> already defined. Starting an instance from this image, without attaching the
> EBS volume, shows the pool structure exists and that the pool state is
> "UNAVAIL" (as
On 23 Apr, 2010, at 7.31, Phillip Oldham wrote:
> I'm not actually issuing any when starting up the new instance. None are
> needed; the instance is booted from an image which has the zpool
> configuration stored within, so simply starts and sees that the devices
> aren't available, which beco
On 23 Apr, 2010, at 8.38, Phillip Oldham wrote:
> The instances are "ephemeral"; once terminated they cease to exist, as do all
> their settings. Rebooting an image keeps any EBS volumes attached, but this
> isn't the case I'm dealing with - its when the instance terminates
> unexpectedly. For
rror message.
Since removing that drive, we have not encountered that issue.
You might want to look at
http://bugs.opensolaris.org/bugdatabase/view_bug.do;jsessionid=7acda35c626180d9cda7bd1df451?bug_id=6894775
too.
-Mark
> Machine specs :
>
> Dell R710, 16 GB memory, 2 Intel Qua
On 28 May, 2010, at 17.21, Vadim Comanescu wrote:
> In a stripe zpool configuration (no redundancy) is a certain disk regarded as
> an individual vdev or do all the disks in the stripe represent a single vdev
> ? In a raidz configuration I'm aware that every single group of raidz disks is
> reg
Can you find the devices in /dev/rdsk? I see there is a path in /pseudo at
least, but the zpool import command only looks in /dev. One thing you can try
is doing this:
# mkdir /tmpdev
# ln -s /pseudo/vpat...@1:1 /tmpdev/vpath1a
And then see if 'zpool import -d /tmpdev' finds the pool.
On 2
I'm guessing that the VirtualBox VM is ignoring write cache flushes. See this
for more info:
http://forums.virtualbox.org/viewtopic.php?f=8&t=13661
On 12 Jun, 2010, at 5.30, zfsnoob4 wrote:
> Thanks, that works. But it only works when I do a proper export first.
>
> If I export the pool then I can
I had an interesting dilemma recently and I'm wondering if anyone here can
illuminate on why this happened.
I have a number of pools, including the root pool, in on-board disks on the
server. I also have one pool on a SAN disk, outside the system. Last night the
SAN crashed, and shortly thereaf
You can also use the "zpool split" command and save yourself having to do the
zfs send|zfs recv step - all the data will be preserved.
"zpool split rpool preserve" does essentially everything up to and including
the "zpool export preserve" commands you listed in your original email. Just
don'
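(A short sketch of that sequence, using the pool names already mentioned in this thread; zpool split needs a mirrored pool and leaves the new pool exported by default.)
zpool split rpool preserve   # detach one side of each mirror into a new, exported pool named "preserve"
zpool import preserve        # import it here, or move the disks and import on another box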
oking
into a USB boot/raidz root combination for 1U storage.
I ran Red Hat 9 with updated packages for quite a few years.
As long as the kernel is stable, and you can work through the hurdles, it can
still do the job.
Mark.
for what eventually becomes a commercial
Enterprise offering. OpenSolaris was the Solaris equivalent.
Losing the free bleeding-edge testing community will no doubt have an impact on
Solaris code quality.
It is now even more likely Solaris will revert to it'
I have a snapshot that I'd like to destroy:
# zfs list rpool/ROOT/be200909160...@200909160720
NAME USED AVAIL REFER MOUNTPOINT
rpool/ROOT/be200909160...@200909160720 1.88G - 4.18G -
But when I try it warns me of dependent clones:
# zfs destroy rpool
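(The output above is cut off, but the usual ways forward look like this; the dataset names below are placeholders, not the real ones in this pool.)
zfs list -o name,origin -r rpool/ROOT    # find which clone(s) originate from the snapshot
zfs promote rpool/ROOT/newBE             # the snapshot is transferred to the promoted clone...
zfs destroy rpool/ROOT/newBE@20090916    # ...and can then be destroyed under its new name
# or, destructively, remove the snapshot together with every dependent clone:
# zfs destroy -R rpool/ROOT/oldBE@20090916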
Sorry. My environment:
# uname -a
SunOS xx 5.10 Generic_141414-10 sun4v sparc SUNW,SPARC-Enterprise-T5220
en inheriting an ACL?
I just tried it locally and it appears to work.
# ls -ld test.dir
drwsr-sr-x   2 marks    storage        4 Oct 12 16:45 test.dir
my primary group is "staff"
$ touch file
$ ls -l file
-rw-r--r--   1 marks    storage
I'm seeing the same [b]lucreate[/b] error on my fresh SPARC sol10u8 install
(and my SPARC sol10u7 machine I keep patches up to date), but I don't have a
separate /var:
# zfs list
NAME     USED  AVAIL  REFER  MOUNTPOINT
pool00  3.36G   532G    20K  none
pool00/global
Neither the virgin SPARC sol10u8 nor the (up to date) patched SPARC sol10u7
has any local zones.
More input:
# cat /etc/lu/ICF.1
sol10u8:-:/dev/zvol/dsk/rpool/swap:swap:67108864
sol10u8:/:rpool/ROOT/sol10u8:zfs:0
sol10u8:/appl:pool00/global/appl:zfs:0
sol10u8:/home:pool00/global/home:zfs:0
sol10u8:/rpool:rpool:zfs:0
sol10u8:/install:pool00/shared/install:zfs:0
sol10u8:/opt/local:pool00/shared
more input:
# lumount foobar /mnt
/mnt
# cat /mnt/etc/vfstab
#live-upgrade: updated boot environment
#device         device          mount           FS      fsck    mount   mount
#to mount       to fsck         point           type    pass    at boot options
#
fd -
Then why the warning on the lucreate? It hasn't done that in the past.
Mark
On Oct 21, 2009, at 12:41 PM, "Enda O'Connor"
wrote:
Hi T
his will boot ok in my opinion, not seeing any issues there.
Enda
Mark Horstman wrote:
more input:
# lumount foobar /mnt
/mnt
#
card installed. The mptsas cards are
not generally
available yet (they're 2nd generation), so I would be
surprised if
you had one.
No...I had set the other two variables after Mark contacted me offline to
do some testing mainly to verify the problem was, indeed, not xVM specific.
I had
This is basically just a me too. I'm using different hardware but essentially
the same problems. The relevant hardware I have is:
---
SuperMicro MBD-H8Di3+-F-O motherboard with LSI 1068E onboard
SuperMicro SC846E2-R900B 4U chassis with two LSI SASx36 expander chips on the
backplane
24 Western D
Chad Cantwell wrote:
Hi,
I was using for quite awhile OpenSolaris 2009.06
with the opensolaris-provided mpt driver to operate a zfs raidz2 pool of
about ~20T and this worked perfectly fine (no issues or device errors
logged for several months, no hanging). A few days ago I decided to
reinsta
Mark Johnson wrote:
Chad Cantwell wrote:
Hi,
I was using for quite awhile OpenSolaris 2009.06
with the opensolaris-provided mpt driver to operate a zfs raidz2 pool of
about ~20T and this worked perfectly fine (no issues or device errors
logged for several months, no hanging). A few days
Yeah, this is my main concern with moving from my cheap Linux server with no
redundancy to ZFS RAID on OpenSolaris; I don't really want to have to pay twice
as much to buy the 'enterprise' disks which appear to be exactly the same
drives with a flag set in the firmware to limit read retries, but
> From what I remember the problem with the hardware RAID controller is that the
> long delay before the drive responds causes the drive to be dropped from the
> RAID and then if you get another error on a different drive while trying to
> repair the RAID then that disk is also marked failed and you
Thanks, sounds like it should handle all but the worst faults OK then; I
believe the maximum retry timeout is typically set to about 60 seconds in
consumer drives.
Hi,
Is it possible to import a zpool and stop it mounting the zfs file systems, or
override the mount paths?
Mark.
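(Broadly yes - a sketch with "tank" as a placeholder pool name; the -N flag is not present on every release, so treat it as an assumption.)
zpool import -R /a tank                   # import under an alternate root; /a is prefixed to every mountpoint
zpool import -N tank                      # import without mounting any datasets (newer releases)
zfs set mountpoint=/elsewhere tank/data   # or override individual mountpoints after the import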
signed for the UIO slot. The cards are a mirror image of a normal
pci-e card and may overlap adjacent slots.
They "may" work in other servers, but I have found some Supermicro non-UIO
servers that wouldn't run them.
Mark.
I'd recommend a SAS non-RAID controller (with a SAS backplane) over SATA.
It has better hot-plug support.
I use the Supermicro SC836E1 and an AOC-USAS-L4i with a UIO M/B.
Mark.
backplane price difference diminishes when you get to 24
bays.
Mark.
Check if your card has the latest firmware.
Mark.
Will,
sorry for picking an old thread, but you mentioned a psu monitor to supplement
the CSE-PTJBOD-CB1.
I have two of these and am interested in your design.
Oddly, the LSI backplane chipset supports 2 x I2C buses that Supermicro didn't
make use of for monitoring the PSUs.
Mark
uit needed in a few V240 PSU's.
Much cheaper than replacing the whole psu due to poor fan lifespan.
Mark.
Ben,
I have found that booting from CD-ROM and importing the pool on the new host,
then booting from the hard disk, will prevent these issues.
That will reconfigure ZFS to use the new disk device.
When running, zpool detach the missing mirror device and attach a new one.
Mark.
As in they work without any possibility of mpt timeout issues? I'm at my wits
end with a machine right now that has an integrated 1068E and is dying almost
hourly at this point.
If I could spend three hundred dollars or so and have my problems magically go
away, I'd love to pull the trigger on
t or hot
plug.
The robustness of ZFS certainly helps keep things running.
Mark.
> It may depend on the firmware you're running. We've
> got a SAS1068E based
> card in Dell R710 at the moment, connected to an
> external SAS JBOD, and
> we did have problems with the as shipped firmware.
Well, I may have misspoke. I just spent a good portion of yesterday upgrading
to the lat
I would definitely be interested to see if the newer firmware fixes the problem
for you. I have a very similar setup to yours, and finally forcing the
firmware flash to 1.26.00 of my on-board LSI 1068E on a SuperMicro H8DI3+
running snv_131 seemed to address the issue. I'm still waiting to see
ntroller and see
if that helps.
P.S. I have a client with a "suspect", nearly full, 20TB zpool to try to scrub,
so this is a big issue for me. A resilver of a 1TB disk takes up to 40 hrs., so
I expect a scrub to be a week (or two), and at present, would probably result
in multiple dis
Hi Giovanni,
I have seen these while testing the mpt timeout issue, and on other systems
during resilvering of failed disks and while running a scrub.
Once so far on this test scrub, and several on yesterday's.
I checked the iostat errors, and they weren't that high on that device,
compared to
Hello,
Can anybody tell me if EMC's Replication software is supported using
ZFS, and if so is there any particular version
of Solaris that it is supported with?
thanks,
-mark
--
Mark Woelfel
Storage TSC Backline Volume Products
Sun Microsystems
Work: 78
24 x WD10EARS in 6 disk vdev sets, 1 on 16 bay and 2 on 24 bay.
Mark
> Also, I noticed you're using 'EARS' series drives.
> Again, I'm not sure if the WD10EARS drives suffer
> from a problem mentioned in these posts, but it might
> be worth looking into -- especially the last link:
Aren't the EARS drives the first ones using 4k sectors? Does OpenSolaris
support th
many scsi cards.
Mark.
256*512/4096=32
Mark.
ceived for target 13.
---
With no problems at all. I don't think that scrub takes
nearly that long (I think it was less than 12 hours previously),
and the percentage is barely moving, although it is increasing.
Even still, the exported volumes still appear to be
work
to be the only option available.
Mark.
luate alternative suppliers of low-cost disks for low-end
high-volume storage.
Mark.
SI IT mode firmware changing the disk
order so the bootable disk is no longer the one booted from with expanders?
It boots with only two disks installed (bootable ZFS mirror). Add some more and
the target "boot disk" moves to one of them.
Mark.
> That's good to hear. Which revision are they: 00R6B0
> or 00P8B0? It's marked on the drive top.
Interesting. I wonder if this is the issue too with the 01U1B0 2.0TB drives?
I have 24 WD2002FYPS-01U1B0 drives under OpenSolaris with an LSI 1068E
controller that have weird timeout issues and I
Looks like I got the textbook response from Western Digital:
---
Western Digital technical support only provides jumper configuration and
physical installation support for hard drives used in systems running the
Linux/Unix operating systems. For setup questions beyond physical installation
of yo
Thomas Burgess wrote:
I've got a strange issue. If this is covered elsewhere, I apologize in
advance for my newbness.
I've got a couple of ZFS filesystems shared over CIFS and NFS; I've managed to
get ACLs working the way I want, provided things are accessed via CIFS
and NFS.
If I create a new dir
't lie, but it's good to have it anyways, and is critical for
> personal systems such as laptops.
IIRC, fsck was seldom needed at
my former site once UFS journalling
became available. Sweet update.
Mark
Why don't you see which byte differs, and how it does?
Maybe that would suggest the "failure mode". Is it the
same byte data in all affected files, for instance?
Mark
Sent from my iPhone
On Oct 22, 2011, at 2:08 PM, Robert Watzlavick wrote:
> On Oct 22, 2011, at 13:14,
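(For example - hypothetical file names, assuming a known-good copy to compare against.)
cmp -l good/file.bin bad/file.bin | head     # prints the offset and the two differing byte values (octal)
digest -a sha256 good/file.bin bad/file.bin  # quick whole-file comparison on Solaris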
ing a drive to "sorry no more writes allowed" scenarios.
Thanks
Mark
n...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Neil Perrin
Sent: Friday, October 28, 2011 11:38 AM
To: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Log disk with all ssd pool?
On 10/28/11 00:54, Neil Perrin wrote:
On 10/28/11 00:04, Mark Wolek wrote:
Still kick
ying to get me to do. Do I have to do:
zfs create datastore/zones/zonemaster
before I can create a zone in that path? That's not in the documentation,
so I didn't want to do anything until someone can point out my error for
me. Thanks for your help!
--
Mark
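(A rough sketch of the sequence being asked about, reusing the names from the question; whether zoneadm creates the dataset for you varies by release, so creating it explicitly is the safe assumption.)
zfs create -p datastore/zones/zonemaster   # -p also creates any missing parent datasets
chmod 700 /datastore/zones/zonemaster      # zoneadm requires a 0700 zonepath
zonecfg -z zonemaster "create; set zonepath=/datastore/zones/zonemaster; commit"
zoneadm -z zonemaster install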
You can see the original ARC case here:
http://arc.opensolaris.org/caselog/PSARC/2009/557/20091013_lori.alt
On 8 Dec 2011, at 16:41, Ian Collins wrote:
> On 12/ 9/11 12:39 AM, Darren J Moffat wrote:
>> On 12/07/11 20:48, Mertol Ozyoney wrote:
>>> Unfortunetly the answer is no. Neither l1 nor l2
thing?
Thanks
Mark
,root=192.168.1.52:192.168.1.51:192.168.1.53
local
-Original Message-
From: Jim Klimov [mailto:jimkli...@cos.ru]
Sent: Wednesday, February 29, 2012 1:44 PM
To: Mark Wolek
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Problem with ESX NFS store on ZFS
2012-02-29 21:15, Mark
: Permanent errors have been detected in the following files:
rpool/filemover:<0x1>
# zfs list
NAME              USED  AVAIL  REFER  MOUNTPOINT
rpool            6.64T      0  29.9K  /rpool
rpool/filemover  6.64T   323G  6.32T  -
Thanks
Mark
_
On 11/16/12 17:15, Peter Jeremy wrote:
I have been tracking down a problem with "zfs diff" that reveals
itself variously as a hang (unkillable process), panic or error,
depending on the ZFS kernel version but seems to be caused by
corruption within the pool. I am using FreeBSD but the issue look
On 11/19/12 1:14 PM, Jim Klimov wrote:
On 2012-11-19 20:58, Mark Shellenbaum wrote:
There is probably nothing wrong with the snapshots. This is a bug in
ZFS diff. The ZPL parent pointer is only guaranteed to be correct for
directory objects. What you probably have is a file that was hard
s encouraging, so I exported it and booted from the original 132 boot
drive.
Well, it came back, and at 1:00AM I was able to get back to the original issue
I was chasing.
So, don't give up hope when all hope appears to be lost.
Mark.
Still an OpenSolaris fan keen to help the commu
On 16 Aug 2010, at 22:30, Robert Hartzell wrote:
>
> cd /mnt ; ls
> bertha export var
> ls bertha
> boot etc
>
> where is the rest of the file systems and data?
By default, root filesystems are not mounted. Try doing a "zfs mount
bertha/ROOT/snv_134"
You need to let the resilver complete before you can detach the spare. This is
a known problem, CR 6909724.
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6909724
On 18 Aug 2010, at 14:02, Dr. Martin Mundschenk wrote:
> Hi!
>
> I had trouble with my raidz in the way, that some o
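(Once the resilver reported by zpool status finishes, the cleanup is just a detach; the device name below is a placeholder.)
zpool status tank         # the spare shows as INUSE until resilvering completes
zpool detach tank c3t2d0  # afterwards, detaching returns the spare to the spares list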
SSD's I have been testing to work properly.
On the other hand, I could just use the spare 7210 Appliance boot disk I have
lying about.
Mark.
m, the experts there seem to recommend NOT having TLER
enabled when using ZFS as ZFS can be configured for its timeouts, etc,
and the main reason to use TLER is when using those drives with hardware
RAID cards which will kick a drive out of the array if it takes longer
than 10 seconds.
Can anyone
Hi Steve,
Couple of options.
Create a new boot environment on the SSD, and this will copy the data over.
Or
zfs send -R rp...@backup | zfs recv altpool
I'd use the alt boot environment, rather than the send and receive.
Cheers,
-Mark.
On 19/09/2010, at 5:37 PM, Steve Arkley
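(A sketch of the boot-environment route, assuming OpenSolaris beadm and a hypothetical pool name "ssdpool"; the SSD will still need boot blocks installed, e.g. with installgrub on x86.)
beadm create -p ssdpool newBE   # copy the active BE into a new boot environment on the SSD pool
beadm activate newBE            # make it the default boot environment for the next boot
beadm list                      # confirm the active-on-reboot flag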
You should only see a "HOLE" in your config if you removed a slog after having
added more stripes. Nothing to do with bad sectors.
On 14 Oct 2010, at 06:27, Matt Keenan wrote:
> Hi,
>
> Can someone shed some light on what this ZPOOL_CONFIG is exactly.
> At a guess is it a bad sector of the dis
sue.
I'd search the archives, but they don't seem searchable. Or am I wrong about
that?
Thanks.
Mark (subscription pending)
On Nov 2, 2010, at 12:10 AM, Ian Collins wrote:
> On 11/ 2/10 08:33 AM, Mark Sandrock wrote:
>>
>>
>> I'm working with someone who replaced a failed 1TB drive (50% utilized),
>> on an X4540 running OS build 134, and I think something must be wrong.
>>
&
Edward,
I recently installed a 7410 cluster, which had added Fiber Channel HBAs.
I know the site also has Blade 6000s running VMware, but no idea if they
were planning to run fiber to those blades (or even had the option to do so).
But perhaps FC would be an option for you?
Mark
On Nov 12
maybe
get in touch with their support and see if you can use something
similar.
Cheers,
Mark
On 5 Dec 2010, at 16:06, Roy Sigurd Karlsbakk wrote:
>> Hot spares are dedicated spares in the ZFS world. Until you replace
>> the actual bad drives, you will be running in a degraded state. The
>> idea is that spares are only used in an emergency. You are degraded
>> until your spares are