On Wed, Jan 5, 2011 at 1:55 PM, Damien Fleuriot wrote:
> Well actually...
>
> raidz2:
> - 7x 1.5 tb = 10.5tb
> - 2 parity drives
>
> raidz1:
> - 3x 1.5 tb = 4.5 tb
> - 4x 1.5 tb = 6 tb, total 10.5tb
> - 2 parity drives total, split across the two different raidz1 arrays
>
> So really, in both cases 2 different
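(Spelling out the usable-capacity arithmetic behind that comparison, using the
numbers above:
  raidz2:     (7 - 2) x 1.5 TB = 7.5 TB usable
  2x raidz1:  (3 - 1) x 1.5 TB + (4 - 1) x 1.5 TB = 7.5 TB usable
so both layouts spend two drives on parity and end up with the same usable space.)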
On Fri, Jan 7, 2011 at 3:16 AM, Matthew D. Fuller
wrote:
> On Thu, Jan 06, 2011 at 03:45:04PM +0200 I heard the voice of
> Daniel Kalchev, and lo! it spake thus:
>>
>> You should also know that having large L2ARC requires that you also
>> have larger ARC, because there are data pointers in the ARC
2011/4/4 Gerrit Kühn :
> On Mon, 4 Apr 2011 14:36:25 +0100 Bruce Cran wrote
> about Re: drives >2TB on mpt device:
>
> Hi Bruce,
>
> BC> It looks like a known issue:
> BC> http://www.freebsd.org/cgi/query-pr.cgi?pr=bin/147572
>
> Hm, I don't know if this is exactly what I'm seeing here (although t
On Mon, Apr 4, 2011 at 1:56 PM, Boris Kochergin wrote:
> The problem persists, I'm afraid, and seems to have crept up a lot more
> quickly than before:
>
> # uname -a
> FreeBSD exodus.poly.edu 8.2-STABLE FreeBSD 8.2-STABLE #3: Sat Apr 2
> 11:48:43 EDT 2011 sp...@exodus.poly.edu:/usr/obj/usr/s
On Thu, Apr 28, 2011 at 6:08 PM, Jeremy Chadwick
wrote:
> I will note something, however: your ARC max is set to 3072MB, yet Wired
> is around 4143MB. Do you have something running on this box that takes
> up a lot of RAM? mysqld, etc.? I'm trying to account for the "extra
> gigabyte" in Wired
On Fri, May 6, 2011 at 5:43 AM, Holger Kipp wrote:
> Resilvering a disk in raidz2 ZFS is taking ages. Any ideas? I had replaced a
> different disk this morning (da7) and it took only about 1 hour altogether.
> Any ideas? Or did I do something very wrong (tm)?
Don't believe everything you see.
On Tue, May 31, 2011 at 7:31 AM, Freddie Cash wrote:
> On Tue, May 31, 2011 at 5:48 AM, Matt Thyer wrote:
>
>> What do people recommend for 8-STABLE as a PCIe SATA II HBA for someone
>> using ZFS ?
>>
>> Not wanting to break the bank.
>> Not interested in SATA III 6GB at this time... though it co
On Thu, Jun 2, 2011 at 12:31 PM, Torfinn Ingolfsen
wrote:
> FYI, in case it is interesting
> my zfs fileserver[1] just had a panic: (transcribed from screen)
> panic: kmem_malloc(131072): kmem_map too small: 1324613632 total allocated
> cpuid = 1
It's probably one of the most frequently reported
On Thu, Jun 9, 2011 at 1:00 PM, Greg Bonett wrote:
> Hi all,
> I know this is a long shot, but I figure it's worth asking. Is there
> any way to recover a file from a zfs snapshot which was destroyed? I know
> the name of the file and a unique string that should be in it. The zfs
> pool is on geli
On Thu, Jun 9, 2011 at 3:43 PM, Greg Bonett wrote:
> One question though, you say it's necessary that "appropriate
> disk blocks have not been reused by more recent transactions"
> Is it not possible for me to just read all the disk blocks looking for
> the filename and string it contained? How b
On Fri, Jun 17, 2011 at 6:06 AM, Bartosz Stec wrote:
> On 2011-06-11 18:43, Sergey Kandaurov wrote:
>>
>> On 11 June 2011 20:01, Rolf Nielsen wrote:
>>>
>>> Hi all,
>>>
>>> After going from 8.2-RELEASE to 8-STABLE (to get ZFS v28), I get
>>>
>>> log_sysevent: type 19 is not implemented
>>>
>>
On Tue, Jul 19, 2011 at 6:31 AM, John Baldwin wrote:
> The only reason it might be nice to stick with two fields is due to the line
> length (though the first line is over 80 cols even in the current format).
> Here
> are two possible suggestions:
>
> old:
>
> hostb0@pci0:0:0:0: class=0x060
2011/8/17 Daniel Kalchev :
> On 17.08.11 16:35, Miroslav Lachman wrote:
>>
>> I tried mfsBSD installation on Dell T110 with PERC H200A and 4x 500GB SATA
>> disks. If I create zpool with RAIDZ, the boot immediately hangs with
>> following error:
>>
> Maybe it is that the BIOS does not see all drives a
On Wed, Aug 17, 2011 at 12:40 PM, Miroslav Lachman <000.f...@quip.cz> wrote:
> Thank you guys, you are right. The BIOS provides only 1 disk to the loader!
> I checked it from loader prompt by lsdev (booted from USB external HDD).
>
> So I will try to make a small zpool mirror for root and boot (if
On Tue, Oct 11, 2011 at 2:34 AM, Steven Hartland
wrote:
> - Original Message - From: "Mickaël Maillot"
>
>
>
>> same problem here after ~ 30 days with a production server and 2 SSD Intel
>> X25M as L2.
>> so we update and reboot the 8-STABLE server every month.
>
> Old thread but also see
On Tue, Oct 11, 2011 at 10:21 AM, Steven Hartland
wrote:
> Thanks for the confirmation there Artem, we currently can't use 8-STABLE
> due to the serious routing issue, seems like every packet generates a
> RTM_MISS routing packet to be sent, which causes high cpu load.
>
> Thread: "Re: serious pack
On Tue, Oct 11, 2011 at 1:17 PM, Steven Hartland
wrote:
>> It's a bummer. If you can build your own kernel cherry-picking
>> following revisions may help with long-term stability:
>> r218429 - fixes original overflow causing CPU hogging by l2arc feeding
>> thread. It will keep you up and running f
On Sun, Oct 23, 2011 at 8:54 AM, Matthew Seaman
wrote:
> On the other hand, for anything Gb capable nowadays connected to a
> switch autoneg pretty much just works -- em(4), bce(4) are excellent,
> and even re(4) gets this stuff right.
There are still cases of incompatibility. I've got a cheap D-
On Thu, Nov 17, 2011 at 6:41 AM, David Wolfskill wrote:
> MAKE=/usr/obj/usr/src/make.i386/make sh /usr/src/sys/conf/newvers.sh GENERIC
> cc -c -O -pipe -std=c99 -g -Wall -Wredundant-decls -Wnested-externs
> -Wstrict-prototypes -Wmissing-prototypes -Wpointer-arith -Winline
> -Wcast-qual -Wunde
Hi,
On Tue, Dec 20, 2011 at 7:03 AM, Andriy Gapon wrote:
> on 20/12/2011 16:31 Ganael LAPLANCHE said the following:
>> On Tue, 20 Dec 2011 15:02:01 +0100 (CET), Ganael LAPLANCHE wrote
>>
>>> But there is still something I don't understand: on the Linux
>>> machine where I ran my test program, th
On Mon, Jan 2, 2012 at 5:41 AM, Victor Balada Diaz wrote:
...
> System wide totals computed every five seconds: (values in kilobytes)
> ===
> Processes: (RUNQ: 2 Disk Wait: 0 Page Wait: 0 Sleep: 51)
> Virtual Memory: (Total: 10980171
On Wed, Feb 8, 2012 at 4:28 PM, Jeremy Chadwick
wrote:
> On Thu, Feb 09, 2012 at 01:11:36AM +0100, Miroslav Lachman wrote:
...
>> ARC Size:
>> Current Size: 1769 MB (arcsize)
>> Target Size (Adaptive): 512 MB (c)
>> Min Size (Hard Limit): 512 MB (zfs_arc
On Sat, Feb 18, 2012 at 10:10 PM, Ask Bjørn Hansen wrote:
> Hi everyone,
>
> We're recycling an old database server with room for 16 disks as a backup
> server (our old database servers had 12-20 15k disks; the new ones have one or two
> SSDs and they're faster).
>
> We have a box running FreeBSD 8.2
On Wed, Aug 4, 2010 at 9:47 PM, Alex V. Petrov wrote:
...
>> > vfs.zfs.cache_flush_disable=1
>> > vfs.zfs.zil_disable=1
>>
>> I question both of these settings, especially the latter. Please remove
>> them both and re-test your write performance.
>
> I removed all ZFS settings.
> Now it defaul
On Sat, Aug 7, 2010 at 4:51 AM, Ivan Voras wrote:
> On 5.8.2010 6:47, Alex V. Petrov wrote:
>
>> camcontrol identify ada2
>> pass2: ATA-8 SATA 2.x device
>
> Aren't those 4k sector drives?
EADS drives use regular 512-byte sectors AFAIK. It's EA*R*S models
that use 4K sectors.
--Artem
IMHO the key here is whether hardware is broken or not. The only case
where correctable ECC errors are OK is when a bit gets flipped by a
high-energy particle. That's a normal but fairly rare event. If you
get bit flips often enough that you can recall details of more than
one of them on the same h
On Tue, Sep 28, 2010 at 3:22 PM, Andriy Gapon wrote:
> BTW, have you seen my posts about UMA and ZFS on hackers@ ?
> I found it advantageous to use UMA for ZFS I/O buffers, but only after
> reducing
> size of per-CPU caches for the zones with large-sized items.
> I further modified the code in my
On Wed, Sep 29, 2010 at 11:04 AM, Dan Langille wrote:
> It's taken about 15 hours to copy 800GB. I'm sure there's some tuning I
> can do.
>
> The system is now running:
>
> # zfs send storage/bac...@transfer | zfs receive storage/compressed/bacula
Try piping zfs data through mbuffer (misc/mbuffer in ports).
Hmm. It did help me a lot when I was replicating ~2TB worth of data
over GigE. Without mbuffer things were roughly in the ballpark of your
numbers. With mbuffer I've got around 100MB/s.
Assuming that you have two boxes connected via ethernet, it would be
good to check that nobody generates PAUSE frames during the transfer
over the network.
If you're running send/receive locally just pipe the data through
mbuffer -- zfs send|mbuffer|zfs receive
--Artem
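(A minimal sketch of both variants; the dataset names, host name, port, and
buffer sizes here are made up for illustration:
  # local copy, single pipe
  zfs send pool/src@snap | mbuffer -s 128k -m 1G | zfs receive pool/dst
  # over the network: start the receiver first, then the sender
  mbuffer -s 128k -m 1G -I 9090 | zfs receive pool/dst              # receiving host
  zfs send pool/src@snap | mbuffer -s 128k -m 1G -O recvhost:9090   # sending host
mbuffer prints its in/out rates while running, which helps spot where the
pipeline stalls.)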
>
> --
> Dan Langille
> http://langille.org/
>
>
> On Oct 1, 2010, at 5:56 PM, Artem Belevich wrote:
>
>> Hmm. It did help me a lo
> As soon as I opened this email I knew what it would say.
>
>
> # time zfs send storage/bac...@transfer | mbuffer | zfs receive
> storage/compressed/bacula-mbuffer
> in @ 197 MB/s, out @ 205 MB/s, 1749 MB total, buffer 0% full
...
> Big difference. :)
I'm glad it helped.
Does anyone know wh
I've just tested on my box and loopback interface does not seem to be
the bottleneck. I can easily push through ~400MB/s through two
instances of mbuffer.
--Artem
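(One way to run that kind of loopback test, with made-up port and sizes:
  # terminal 1: listen and discard
  mbuffer -s 128k -m 512M -I 9090 > /dev/null
  # terminal 2: push data through the loopback interface
  dd if=/dev/zero bs=1m count=4096 | mbuffer -s 128k -m 512M -O localhost:9090
The rates mbuffer reports give the loopback ceiling without ZFS in the picture.)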
On Fri, Oct 1, 2010 at 7:51 PM, Sean wrote:
>
> On 02/10/2010, at 11:43 AM, Artem Belevich wrote:
>
>>> A
On Sun, Oct 3, 2010 at 6:11 PM, Dan Langille wrote:
> I'm rerunning my test after I had a drive go offline[1]. But I'm not
> getting anything like the previous test:
>
> time zfs send storage/bac...@transfer | mbuffer | zfs receive
> storage/compressed/bacula-buffer
>
> $ zpool iostat 10 10
>
On Sun, Oct 10, 2010 at 5:25 PM, Alex Goncharov
wrote:
> (If only www/opera stopped crashing on File/Exit now...)
I think I've accidentally stumbled on a workaround for this crash on exit issue.
Once you've started opera and opened a page (any page), turn on print
preview on and off (Menu->Print
On Mon, Oct 11, 2010 at 4:32 AM, Jakub Lach wrote:
> The remedy for ugly file dialogs is a skin with skinned ones.
>
> e.g. http://my.opera.com/community/customize/skins/info/?id=10071
That may make them look better, but the main issue was that file
open/save dialogs turned into a simple text input fie
Are you interested in what's wrong or in how to fix it?
If fixing is the priority, I'd boot from OpenSolaris live CD and would
try importing the array there. Just make sure you don't upgrade ZFS to
a version that is newer than the one FreeBSD supports.
OpenSolaris may be able to fix the array. On
> but only those 3 devices in /dev/gpt and absolutely nothing in /dev/gptid/
> So is there a way to bring all the gpt labeled partitions back into the pool
> instead of using the mfidXX devices?
Try re-importing the pool with "zpool import -d /dev/gpt". This will
tell ZFS to use only the devices found under /dev/gpt.
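(Illustrative sequence, assuming the pool is named tank:
  zpool export tank
  zpool import -d /dev/gpt tank
  zpool status tank
The -d option restricts the device scan to the given directory, so the vdevs
come back under their gpt/ labels instead of the mfidXX device names.)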
On Thu, Oct 28, 2010 at 10:51 PM, Rumen Telbizov wrote:
> Hi Artem, everyone,
>
> Thanks for your quick response. Unfortunately I already did try this
> approach.
> Applying -d /dev/gpt only limits the pool to the bare three remaining disks
> which turns the
> pool completely unusable (no mfid devices
On Fri, Oct 29, 2010 at 11:34 AM, Rumen Telbizov wrote:
> The problem I think comes down to what I have written in the zpool.cache
> file.
> It stores the mfid path instead of the gpt/disk one.
> children[0]
> type='disk'
> id=0
> guid=16413940568249554
On Fri, Oct 29, 2010 at 2:19 PM, Rumen Telbizov wrote:
> You're right. zpool export tank seems to remove the cache file, so import has
> nothing to consult and doesn't make any difference.
> I guess my only chance at this point would be to somehow manually edit
> the zpool configuration, via the zpo
On Fri, Oct 29, 2010 at 4:42 PM, Rumen Telbizov wrote:
> FreeBSD 8.1-STABLE #0: Sun Sep 5 00:22:45 PDT 2010
> That's when I csuped and rebuilt world/kernel.
There were a lot of ZFS-related MFCs since then. I'd suggest updating
to the most recent -stable and try again.
I've got another idea that
On Sat, Nov 6, 2010 at 9:09 AM, Thomas Zander
wrote:
> This means for now I have to trust the BIOS that ECC is enabled and I
> should see MCA reports in the dmesg output once a bit error is
> detected?
Well, you don't have to take the BIOS' word for it; you can test whether ECC
really works. All you nee
) worked well enough for me. It
didn't get damaged by the slot connector and it didn't leave any
residue. YMMV, caveat emptor, beware, use at your own risk, you know
the drill..
--Artem
On Sat, Nov 6, 2010 at 5:00 PM, wrote:
> Artem Belevich wrote:
>
>> All you need is
On Thu, Dec 2, 2010 at 1:33 PM, Thomas Zander
wrote:
> Hi,
>
> do we have any way to monitor which LBAs of which block device are
> read/written at a given time?
>
> I stumbled upon this,
> http://southbrain.com/south/2008/02/fun-with-dtrace-and-zfs-mirror.html
>
> which is pretty intriguing. Unfo
> GEOM sounds like a good candidate for probing of that kind.
>
> sudo dtrace -n 'fbt:kernel:g_io_deliver:entry { printf("%s %d %d
> %d\n",stringof(args[0]->bio_from->geom->name), args[0]->bio_cmd,
> args[0]->bio_offset, args[0]->bio_length); }'
By the way, in order for this to work one would need
On Sun, Dec 5, 2010 at 11:31 AM, Andriy Gapon wrote:
>> By the way, in order for this to work one would need r207057 applied
>> to -8. Any chance that could be MFC'ed?
>>
>> http://svn.freebsd.org/viewvc/base?view=revision&revision=207057
>
> Nice catch.
>
> Alexander,
> can that commit be trivial
On Sat, Jan 1, 2011 at 10:18 AM, Attila Nagy wrote:
> What I see:
> - increased CPU load
> - decreased L2 ARC hit rate, decreased SSD (ad[46]), therefore increased
> hard disk load (IOPS graph)
>
...
> Any ideas on what could cause these? I haven't upgraded the pool version and
> nothing was chang
> all right and understood, but shouldn't something like fsck correct the
> error? Seems kind of problematic to me, mounting zfs in single user mode,
> deleting the file and restarting the OS?
According to Sun's documents, removing the corrupted file seems to be the
'official' way to get rid of the problem.
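(Roughly, the procedure looks like this; pool name and path are placeholders:
  zpool status -v tank          # lists files with permanent errors
  rm /tank/path/to/damaged.file
  zpool scrub tank              # re-check; the error count should drop afterwards
)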
> Note that Western Digital's "RAID edition" drives claim to take up to 7
> seconds to reallocate sectors, using something they call TLER, which
> force-limits the amount of time the drive can spend reallocating. TLER
> cannot be disabled:
TLER can be enabled/disabled on recent WD drives (SE16/RE
> LSI SAS 3080X-R 8-port SATA/SATA PCI-X
This one uses LSI1068 chip which is supported by mpt driver. I'm using
motherboard with an on-board equivalent of this and don't have much to
complain about. I did see some CRC errors with SATA drives in 3Gbps
mode, but those went away after updating fi
--Artem
On Tue, Nov 17, 2009 at 6:12 PM, Artem Belevich wrote:
>> LSI SAS 3080X-R 8-port SATA/SATA PCI-X
>
> This one uses LSI1068 chip which is supported by mpt driver. I'm using
> motherboard with an on-board equivalent of this and don't have much to
> complain
Hi,
> If that one uses the LSI1068 chipset, do you know which one uses the
> LSI1078 chipset?
Supermicro's AOC-USAS-H8iR uses LSI1078:
http://www.supermicro.com/products/accessories/addon/AOC-USAS-H8iR.cfm
Dell PERC 6/i is based on LSI1078 as well.
http://www.dell.com/content/topics/topic.aspx/g
Keep an eye on ARC size and on active/inactive/cache/free memory lists:
sysctl kstat.zfs.misc.arcstats.size
sysctl vm.stats.vm.v_inactive_count
sysctl vm.stats.vm.v_active_count
sysctl vm.stats.vm.v_cache_count
sysctl vm.stats.vm.v_free_count
ZFS performance does degrade a lot if ARC becomes too
aoc-sat2-mv8 was somewhat slower compared to ICH9 or LSI1068
controllers when I tried it with 6 and 8 disks.
I think the problem is that MV8 only does 32K per transfer and that
does seem to matter when you have 8 drives hooked up to it. I don't
have hard numbers, but peak throughput of MV8 with 8-d
> will do, thank you. is fletcher4 faster?
Not necessarily. But it does work much better as a checksum. See
following link for the details.
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6740597
--Artem
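(For reference, switching an existing dataset is a one-liner; tank is a
placeholder, and only blocks written after the change use the new checksum:
  zfs set checksum=fletcher4 tank
  zfs get checksum tank
)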
> I'd like a technical explanation of exactly what this loader.conf
> tunable does. The sysctl -d explanation is useful if one has insight to
> what purpose it serves. I can find mention on Solaris lists about "txg
> timeout", but the context is over my head (intended for those very
> familiar wi
Check your vm.kmem_size. The default setting is way too low. Set it to at
least double the desired ARC size.
--Artem
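(Example /boot/loader.conf lines following that rule of thumb; the values are
illustrative and depend on how much RAM the box has:
  vm.kmem_size="8G"
  vm.kmem_size_max="8G"
  vfs.zfs.arc_max="4G"
)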
On Fri, Feb 12, 2010 at 10:31 AM, Steve Polyack wrote:
> Has anyone had an issue with the ZFS ARC max being limited below what has
> been defined in /boot/loader.conf? I just upgrad
> On 02/12/10 13:47, Artem Belevich wrote:
>>
>> On Fri, Feb 12, 2010 at 10:31 AM, Steve Polyack
>> wrote:
>>
>>
>>>
>>> Has anyone had an issue with the ZFS ARC max being limited below what has
>>> been defined in /boot/loader.con
Can you check if kstat.zfs.misc.arcstats.memory_throttle_count sysctl
increments during your tests?
ZFS self-throttles writes if it thinks system is running low on
memory. Unfortunately on FreeBSD the 'free' list is a *very*
conservative indication of available memory so ZFS often starts
throttlin
> your ZFS pool of SATA disks has 120gb worth of L2ARC space
Keep in mind that housekeeping of 120G L2ARC may potentially require
a fair amount of RAM, especially if you're dealing with tons of small
files.
See this thread:
http://www.mail-archive.com/zfs-disc...@opensolaris.org/msg34674.html
--Artem
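(Back-of-the-envelope illustration, assuming on the order of 200 bytes of ARC
header per L2ARC-resident record, which is only an approximation for that ZFS
vintage:
  120 GB L2ARC / 8 KB records    = ~15.7M records  -> ~3 GB of ARC just for headers
  120 GB L2ARC / 128 KB records  = ~1M records     -> ~200 MB of ARC headers
which is why lots of small files or a small recordsize make a big L2ARC expensive.)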
>> * vm.kmem_size
>> * vm.kmem_size_max
>
> I tried kmem_size_max on -current (this year), and I got a panic during use.
> I changed kmem_size to the same value I have for _max and it didn't panic
> anymore. It looks (from mails on the lists) like _max is supposed to give a
> max value for auto-enh
> How much ram are you running with?
8GB on amd64. kmem_size=16G, zfs.arc_max=6G
> In a latest test with 8.0-R on i386 with 2GB of ram, an install to a ZFS
> root *will* panic the kernel with kmem_size too small with default
> settings. Even dropping down to Cy Schubert's uber-small config will p
I've got another PCI UART card based on OX16PCI952 that needs its
clock multiplied by 8 in order to work correctly. It was some
el-cheapo card I got at Fry's.
p...@pci0:1:0:0: class=0x070006 card=0x00011415 chip=0x95211415
rev=0x00 hdr=0x00
vendor = 'Oxford Semiconductor Ltd'
> On this specific system, it has 32 GB physical memory and has
> vfs.zfs.arc_max="2G" and vm.kmem_size="64G" in /boot/loader.conf. The
> latter was added per earlier suggestions on this list, but appears to be
> ignored as "sysctl vm.kmem_size" returns about 2 GB (2172452864) anyway.
Set vm.kmem
vm.kmem_size limitation has been this way for a pretty long time.
What's changed recently is that ZFS ARC now uses UMA for its memory
allocations. If I understand it correctly, this would make ARC's
memory use more efficient as allocated chunks will end up in a zone
tuned for allocations of partic
You can try disabling ZIO_USE_UMA in sys/modules/zfs/Makefile
Comment out the following line in that file:
CFLAGS+=-DZIO_USE_UMA
This should revert memory allocation method back to its previous mode.
Let us know whether it helps or not.
--Artem
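(The mechanical steps, sketched under the assumption of a stock /usr/src tree
and zfs loaded as a module:
  # in /usr/src/sys/modules/zfs/Makefile, comment out:
  #CFLAGS+=-DZIO_USE_UMA
  cd /usr/src/sys/modules/zfs && make clean all install
then reboot, or unload/reload zfs.ko if the pool is not in use. With zfs
compiled into the kernel a full buildkernel/installkernel cycle is needed
instead.)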
On Mon, May 10, 2010 at 4:14 PM, Richard Perini wro
On Wed, Oct 31, 2012 at 10:55 AM, Steven Hartland
wrote:
> At that point with the test seemingly successful I went
> to delete test files which resulted in:-
> rm random*
> rm: random1: Unknown error: 122
ZFS is a logging filesystem. Even removing a file apparently requires
some space to write a
On Fri, Dec 28, 2012 at 12:46 PM, Greg Bonett wrote:
> However, I can't figure out how to destroy the /tank filesystem without
> destroying /tank/tempfs (and the other /tank children). Is it possible to
> destroy a parent without destroying the children? Or, create a new parent
> zfs file system
On Sat, Dec 29, 2012 at 12:35 AM, Greg Bonett wrote:
>
> >
> > Does:
> >
> > cat /dev/null > bad.file
> >
> > Cause a kernel panic?
> >
> >
> >
> ah, sadly that does cause a kernel panic. I hadn't tried it though, thanks
> for the suggestion.
It's probably a long shot, but you may try removing ba
On Sun, Jun 16, 2013 at 10:00 AM, Andy Farkas wrote:
> On 16/06/13 20:30, Jeremy Chadwick wrote:
> > * Output from: strings /boot/kernel/kernel | egrep ^option
>
> Thanks.
>
> I stumbled across this one about a week ago:
>
> strings /boot/kernel/kernel | head -1
>
> and was wondering about the histo
On Mon, Jul 22, 2013 at 2:50 AM, Dominic Fandrey wrote:
> Occasionally stopping amd freezes my system. It's a rare occurrence,
> and I haven't found a reliable way to reproduce it.
>
> It's also a real freeze, so there's no way to get into the debugger
> or grab a core dump. I only can perform the
On Tue, Jul 23, 2013 at 10:43 AM, Dominic Fandrey wrote:
> > Don't use KILL or make sure that nobody tries to use amd mountpoints
> until
> > new instance starts. Manually unmounting them before killing amd may
> help.
> > Why not let amd do it itself with "/etc/rc.d/amd stop" ?
>
> That was a typ
On Fri, Jul 26, 2013 at 10:10 AM, Dominic Fandrey wrote:
> Amd exhibits several very strange behaviours.
>
> a)
> During the first start it writes the wrong PID into the pidfile,
> it however still reacts to SIGTERM.
>
> b)
> After starting it again, it no longer reacts to SIGTERM.
>
amd does blo
>> This information is outdated. The current max in RELENG_7 for amd64 is
>> ~3.75GB.
Technically, RELENG_7 should allow kmem_size of up to 6G, but the
sysctl variables used for tuning are 32-bit and *that* limits
kmem_size to ~4G.
It's been fixed in -current and can easily be fixed in RELENG_7 (
I had the same problem on -current. Try the attached patch. It may not
apply cleanly on -stable, but should be easy enough to make equivalent
changes on -stable.
--Artem
On Wed, May 27, 2009 at 3:00 AM, Henri Hennebert wrote:
> Kip Macy wrote:
>>
>> On Wed, May 20, 2009 at 2:59 PM, Kip Macy wrote
Did you by any chance do that from single-user mode? ZFS seems to rely
on hostid being set.
Try running "/etc/rc.d/hostid start" and then re-try your zfs commands.
--Artem
On Wed, May 27, 2009 at 1:06 PM, Henri Hennebert wrote:
> Artem Belevich wrote:
>>
>> I had the
Do you have ZIL disabled? I think I saw the same scrub stall on -7
when I had vfs.zfs.zil_disable=1. After re-enabling ZIL scrub
proceeded normally.
--Artem
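(Quick check, assuming the tunable name of that ZFS vintage; re-enabling means
removing the line from /boot/loader.conf and rebooting:
  sysctl vfs.zfs.zil_disable          # should report 0 when the ZIL is enabled
  grep zil_disable /boot/loader.conf
)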
On Sun, Sep 20, 2009 at 2:42 PM, Christof Schulze
wrote:
> Hello,
>
> currently I am running a 7.2 stable with zfs v13.
> Things work nic
> - untested support for raidz, I can't test this because virtualbox only
> provides one BIOS disk to the bootloader and raidz needs at least two
> disks for booting :-/
You can try creating raidz from multiple GPT slices on the same disk.
--Artem
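(Something along these lines; the disk name, sizes, and labels are arbitrary
test values, and this just builds a raidz pool out of partitions on one disk:
  gpart create -s gpt ada0
  gpart add -t freebsd-zfs -s 2G -l rz0 ada0
  gpart add -t freebsd-zfs -s 2G -l rz1 ada0
  gpart add -t freebsd-zfs -s 2G -l rz2 ada0
  zpool create testpool raidz gpt/rz0 gpt/rz1 gpt/rz2
)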
> Unfortunately it appears ZFS doesn't search for GPT partitions so if you
> have them and swap the drives around you need to fix it up manually.
When I used raw disks or GPT partitions, if the disk order was changed the
pool would come up in DEGRADED or UNAVAILABLE state. Even then, all
that had to b