>> Can't you slice the SSD in two, and then give each slice to the two zpools?
> This is exactly what I do ... use 15-20 GB for root and the rest for an L2ARC.
I like the idea of swapping on SSD too, but why not make a zvol for the L2ARC
so you're not limited by the hard partitioning?
> I like the idea of swapping on SSD too, but why not make a zvol for the L2ARC
> so you're not limited by the hard partitioning?
it lives through a reboot..
zpool create -f test c9t3d0s0 c9t4d0s0
zfs create -V 3G rpool/cache
zpool add test cache /dev/zvol/dsk/rpool/cache
reboot
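a quick check after the reboot that the zvol came back as the cache device
(a sketch; pool names as above):
# cache device should be listed and ONLINE
zpool status test
# and the L2ARC counters should be moving
kstat -p zfs:0:arcstats:l2_size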
> you can't use anything but a block device for the L2ARC device.
sure you can...
http://mail.opensolaris.org/pipermail/zfs-discuss/2010-March/039228.html
it even lives through a reboot (rpool is mounted before other pools)
zpool create -f test c9t3d0s0 c9t4d0s0
zfs create -V 3G rpool/cache
zpool add test cache /dev/zvol/dsk/rpool/cache
> if you disable the ZIL altogether, and you have a power interruption, failed
> cpu,
> or kernel halt, then you're likely to have a corrupt unusable zpool
the pool will always be fine, no matter what.
> or at least data corruption.
yeah, it's a good bet that data recently sent to your file or zvol will be lost.
> zfs will use as much memory as is "necessary" but how is "necessary"
> calculated?
using arc_summary.pl from http://www.cuddletech.com/blog/pivot/entry.php?id=979
my tiny system shows:
Current Size: 4206 MB (arcsize)
Target Size (Adaptive): 4207 MB (c)
Mi
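the same numbers are also available straight from the kernel stats, without
the script (a sketch):
kstat -p zfs:0:arcstats:size zfs:0:arcstats:c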
>> Directory "1" takes between 5-10 minutes for the same command to return
>> (it has about 50,000 files).
> That said, directories with 50K files list quite quickly here.
a directory with 52,705 files lists in half a second here
36 % time \ls -1 > /dev/null
0.41u 0.07s 0:00.50 96.0%
perh
Action: Restore the file in question if possible. Otherwise restore the
entire pool from backup.
:<0x0>
:<0x15>
bet it's in a snapshot that looks to have been destroyed already. try:
zpool clear POOL01
zpool scrub POOL01
> are you going to ask NetApp to support ONTAP on Dell systems,
well, ONTAP 5.0 is built on freebsd, so it wouldn't be too
hard to boot on dell hardware. Hey, at least it can do
aggregates larger than 16T now...
http://www.netapp.com/us/library/technical-reports/tr-3786.html
> So the solution is to never get more than 90% full disk space
while that's true, it's not Henrik's main discovery. Henrik points
out that 1/4 of the arc is used for metadata, and sometimes
that's not enough..
if
echo "::arc" | mdb -k | egrep ^size
isn't reaching
echo "::arc" | mdb -k | egrep "^
frequent snapshots offer outstanding "oops" protection.
Rob
> Maybe to create snapshots "after the fact"
how does one quiesce a drive "after the fact"?
> from a two disk (10krpm) mirror layout to a three disk raidz-1.
writes will be unnoticeably slower on raidz1 because of the parity calculation
and the latency of a third spindle, but reads will be about half the speed
of the mirror, because a mirror can split reads between its two disks.
another way to say the s
> P45 Gigabyte EP45-DS3P. I put the AOC card into a PCI slot
I'm not sure how many "half your disks" are or how your vdevs
are configured, but the ICH10 has 6 sata ports at 300MB/s and
one PCI port at 266MB/s (that's also shared with the IT8213 IDE chip)
so in an ideal world your scrub bandwidth
> The ICH10 has a 32-bit/33MHz PCI bus which provides 133MB/s at half duplex.
you are correct, I thought the ICH10 used a 66MHz bus, when in fact it's 33MHz. The
AOC card works fine in a PCI-X 64-bit/133MHz slot good for 1,067 MB/s
even if the motherboard uses a PXH chip via 8 lane PCIE.
> Chenbro 16 hotswap bay case. It has 4 mini backplanes that each connect via
> an SFF-8087 cable
> StarTech HSB430SATBK
hmm, both are passive backplanes with one SATA tunnel per link...
no SAS Expanders (LSISASx36) like those found in SuperMicro or J4x00 with 4
links per connection.
wonder
> 2 x 500GB mirrored root pool
> 6 x 1TB raidz2 data pool
> I happen to have 2 x 250GB Western Digital RE3 7200rpm
> be better than having the ZIL 'inside' the zpool.
listing two log devices (stripe) would have more spindles
than your single raidz2 vdev.. but for low cost fun one
might make a t
this one has me a little confused. ideas?
j...@opensolaris:~# zpool import z
cannot mount 'z/nukeme': mountpoint or dataset is busy
cannot share 'z/cle2003-1': smb add share failed
j...@opensolaris:~# zfs destroy z/nukeme
internal error: Bad exchange descriptor
Abort (core dumped)
j...@opensolaris
> By partitioning the first two drives, you can arrange to have a small
> zfs-boot mirrored pool on the first two drives, and then create a second
> pool as two mirror pairs, or four drives in a raidz to support your data.
agreed..
2 % zpool iostat -v
              capacity     operations
> a 1U or 2U JBOD chassis for 2.5" drives,
from http://supermicro.com/products/nfo/chassis_storage.cfm
the E1 (single) or E2 (dual) options have a SAS expander so
http://supermicro.com/products/chassis/2U/?chs=216
fits your build or build it your self with
http://supermicro.com/products/accessori
> true. but I buy a Ferrari for the engine and bodywork and chassis
> engineering. It is totally criminal what Sun/EMC/Dell/Netapp do charging
it's interesting to read this with another thread containing:
> timeout issue is definitely the WD10EARS disks.
> replaced 24 of them with ST32000542AS (f
> I am leaning towards AMD because of ECC support
well, let's look at Intel's offerings... RAM is faster than AMD's
at 1333MHz DDR3, and one gets ECC and a thermal sensor for $10 over non-ECC
http://www.newegg.com/Product/Product.aspx?Item=N82E16820139040
This MB has two Intel ethernets and for a
> if zfs overlaps mirror reads across devices.
it does... I have one very old disk in this mirror and
when I attach another element one can see more reads going
to the faster disks... this paste isn't from right after the attach
but from everything since the reboot, but one can still see the reads are
load balanced d
> Intel's RAM is faster because it needs to be.
I'm confused how AMD's dual channel, two way interleaved
128-bit DDR2-667 into an on-cpu controller is faster than
Intel's Lynnfield dual channel, Rank and Channel interleaved
DDR3-1333 into an on-cpu controller.
http://www.anandtech.com/printarti
> I like the original Phenom X3 or X4
we all agree RAM is the key to happiness. The debate is what offers the most ECC
RAM for the least $. I failed to realize the AM3 cpus accept unbuffered ECC
DDR3-1333 like Lynnfield. To use Intel's 6 slots vs AMD's 4 slots, one must use
Registered ECC.
So t
> RFE open to allow you to store [DDT] on a separate top level VDEV
hmm, add to this spare, log and cache vdevs, and it's to the point of making
another pool and thinly provisioning volumes to maintain partitioning
flexibility.
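for example, something like (a sketch; pool names and size made up):
# carve a sparse (thin) zvol out of one pool...
zfs create -s -V 32G bigpool/slog0
# ...and hand it to another pool as its log device
zpool add datapool log /dev/zvol/dsk/bigpool/slog0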
taemun: hey, thanks for closing the loop!
> An UPS plus disabling zil, or disabling synchronization, could possibly
> achieve the same result (or maybe better) iops wise.
Even with the fastest slog, disabling the zil will always be faster...
(fewer bytes to move)
> This would probably work given that your computer never crashes
> in an uncon
> BTW, any new storage-controller-related drivers introduced in snv151a?
the 64bit driver in 147
-rwxr-xr-x 1 root sys 401200 Sep 14 08:44 mpt
-rwxr-xr-x 1 root sys 398144 Sep 14 09:23 mpt_sas
is a different size than 151a
-rwxr-xr-x 1 root sys 400936 Nov 15 23
plus virtualbox 4.1 with "network in a box" would like snv_159
from http://www.virtualbox.org/wiki/Changelog
Solaris hosts: New Crossbow based bridged networking driver for Solaris 11
build 159 and above
Rob
> ECC?
$60 unbuffered 4GB 800MHz DDR2 ECC CL5 DIMM (Kit Of 2)
http://www.provantage.com/kingston-technology-kvr800d2e5k2-4g~7KIN90H4.htm
for Intel 32x0 north bridge like
http://www.provantage.com/supermicro-x7sbe~7SUPM11K.htm
> I don't think the Pentium E2180 has the lanes to use ECC RAM.
look at the north bridge, not the cpu.. the PowerEdge SC440
uses the Intel 3000 MCH which supports up to 8GB unbuffered ECC
or non-ECC DDR2 667/533 SDRAM. it's been replaced with
the Intel 32x0 that uses DDR2 800/667MHz unbuffered ECC /
the sata framework uses the sd driver so it's:
4 % smartctl -d scsi -a /dev/rdsk/c4t2d0s0
smartctl version 5.36 [i386-pc-solaris2.8] Copyright (C) 2002-6 Bruce Allen
Home page is http://smartmontools.sourceforge.net/
Device: ATA WDC WD1001FALS-0 Version: 0K05
Serial number:
Device type: disk
> Not. Intel decided we don't need ECC memory on the Core i7
I thought that was a Core i7 vs Xeon E55xx for socket
LGA-1366 so that's why this X58 MB claims ECC support:
http://supermicro.com/products/motherboard/Xeon3000/X58/X8SAX.cfm
When I type `zpool import` to see what pools are out there, it gets to
/1: open("/dev/dsk/c5t2d0s0", O_RDONLY) = 6
/1: stat64("/usr/local/apache2/lib/libdevid.so.1", 0x08042758) Err#2 ENOENT
/1: stat64("/usr/lib/libdevid.so.1", 0x08042758)= 0
/1: d=0x02D90002 i
> use a bunch of 15K SAS drives as L2ARC cache for several TBs of SATA disks?
perhaps... depends on the workload, and if the working set
can live on the L2ARC
> used mainly as astronomical images repository
hmm, perhaps two trays of 1T SATA drives all
mirrors rather than raidz sets of one tra
> zpool offline grow /var/tmp/disk01
> zpool replace grow /var/tmp/disk01 /var/tmp/bigger_disk01
one doesn't need to offline before the replace; as long as you
have one free disk interface you can cfgadm -c configure sata0/6
each disk as you go... or you can offline and cfgadm each
disk in the
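with real disks that looks roughly like (a sketch; port and device names made up):
# bring the new disk online in the free bay
cfgadm -c configure sata0/6
# replace in place; no offline needed, the resilver starts right away
zpool replace grow c0t5d0 c0t6d0
zpool status grow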
> How does one look at the disk traffic?
iostat -xce 1
> OpenSolaris, raidz2 across 8 7200 RPM SATA disks:
> 17179869184 bytes (17 GB) copied, 127.308 s, 135 MB/s
> OpenSolaris, "flat" pool across the same 8 disks:
> 17179869184 bytes (17 GB) copied, 61.328 s, 280 MB/s
one raidz2 set of 8 disk
> correct ratio of arc to l2arc?
from http://blogs.sun.com/brendan/entry/l2arc_screenshots
"It costs some DRAM to reference the L2ARC, at a rate proportional to record
size.
For example, it currently takes about 15 Gbytes of DRAM to reference 600 Gbytes
of
L2ARC - at an 8 Kbyte ZFS record size
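(that's roughly 600G / 8K = ~75M records, so about 15G / 75M = ~200 bytes of
DRAM per L2ARC record; halve the recordsize and the DRAM cost doubles)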
> try to be spread across different vdevs.
% zpool iostat -v
              capacity     operations    bandwidth
pool        used  avail   read  write   read  write
----------  ----  -----  -----  -----  -----  -----
z           686G   434G     40      5  2.46M   271K
  c1t0d0s7  250G   194G
> CPU is smoothed out quite a lot
yes, but the area under the CPU graph is less, so the
rate of real work performed is less, so the entire
job took longer. (albeit "smoother")
Rob
>> We have a SC846E1 at work; it's the 24-disk, 4u version of the 826e1.
>> It's working quite nicely as a SATA JBOD enclosure.
> use the LSI SAS 3442e which also gives you an external SAS port.
I'm confused, I thought expanders only worked with SAS disks, and SATA disks
took an entire SAS port. c
> c4                scsi-bus  connected  configured  unknown
> c4::dsk/c4t15d0   disk      connected  configured  unknown
:
> c4::dsk/c4t33d0   disk      connected  configured  unknown
> c4::es/ses0       ESI       connected
> the machine hung and I had to power it off.
kinda getting off the "zpool import --tgx -3" request, but
"hangs" are exceptionally rare and usually a ram or other
hardware issue; Solaris usually abends on software faults.
r...@pdm # uptime
9:33am up 1116 day(s), 21:12, 1 user, load average:
> The post I read said OpenSolaris guest crashed, and the guy clicked
> the ``power off guest'' button on the virtual machine.
I seem to recall "guest hung". 99% of Solaris hangs (without
a crash dump) are "hardware" in nature. (my experience, backed by
an uptime of 1116 days) so the finger is stil
> FWIW, the Micropolis 1355 is a 141 MByte (!) ESDI disk.
> The MD21 is an ESDI to SCSI converter.
yup... it's the board in the middle left of
http://rob.com/sun/sun2/md21.jpg
Rob
This is a lightly loaded v20z but it has zfs across its two disks..
it's hung (requiring a power cycle) twice since running
5.11 opensol-20060904
the last time I had a `vmstat 1` running... nice page rates
right before death :-)
kthr      memory            page            disk          faults
> With modern journalling filesystems, I've never had to fsck anything or
> run a filesystem repair. Ever. On any of my SAN stuff.
you will.. even if the SAN is perfect, you will hit
bugs in the filesystem code.. from lots of rsync hard
links or like this one from raidtools last week:
Feb 9 05
I'm sure it's not blessed, but another process to maximize the zfs space
on a system with few disks is
1) boot from SXCR http://www.opensolaris.org/os/downloads/on/
2) select "min install" with
512M /
512M swap
rest /export/home
use format to copy the partition table from disk0 to disk1
umount
updating my notes with Lori's rootpool notes found in
http://www.opensolaris.org/os/community/zfs/boot/zfsboot-manual/ using
the Solaris Express: Community Release DVD (no asserts like bfu code) from
http://www.opensolaris.org/os/downloads/on/ and installing the "Solaris
Express" (second option,
> sits there for a second, then boot loops and comes back to the grub menu.
I noticed this too when I was playing... using
kernel$ /platform/i86pc/kernel/$ISADIR/unix -v -B $ZFS-BOOTFS
I could see vmunix loading, but it quickly NMIed around the
rootnex: [ID 349649 kern.notice] isa0 at root
point.
> Patching zfs_prefetch_disable = 1 has helped
It's my belief this mainly aids scanning metadata. my
testing with rsync and yours with find (and seen with
du & ; zpool iostat -v 1 ) bears this out..
mainly tracked in bug 6437054 vdev_cache: wise up or die
http://www.opensolaris.org/jive/thread.js
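for reference, the usual ways to set it (a sketch):
# persistent, takes effect after a reboot
echo "set zfs:zfs_prefetch_disable = 1" >> /etc/system
# or patch the running kernel, no reboot
echo "zfs_prefetch_disable/W 1" | mdb -kw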
On the third upgrade of the home nas, I chose
http://www.addonics.com/products/raid_system/ae4rcs35nsa.asp to hold the
disks. each holds 5 disks in the space of three slots, and 4 fit into a
http://www.google.com/search?q=stacker+810 case for a total of 20
disks.
But if given a chance to go back
we know time machine requires an extra disk (local or remote) so it's
reasonable to guess the non-bootable "time machine disk" could use zfs.
someone with a Leopard dvd (Rick Mann) could answer this...
with no visible effects, `dmesg` reports lots of
kern.warning] WARNING: marvell88sx1: port 3: error in command 0x2f: status 0x51
found in snv_62 and opensol-b66 perhaps
http://bugs.opensolaris.org/view_bug.do?bug_id=6539787
can someone post part of the headers even if the code is closed?
> [hourly] marvell88sx error in command 0x2f: status 0x51
ah, it's some kinda SMART or FMA query that
model WDC WD3200JD-00KLB0
firmware 08.05J08
serial number WD-WCAMR2427571
supported features:
48-bit LBA, DMA, SMART, SMART self-test
SATA1 compatible
capacity = 625142448 sectors
drives d
> an array of 30 drives in a RaidZ2 configuration with two hot spares
> I don't want to mirror 15 drives to 15 drives
ok, so space over speed... and you're willing to toss somewhere between 4
and 15 drives for protection.
raidz splits the (up to 128k) write/read recordsize into each element of
the
> issues does ZFS have with running in only 32-bit mode?
with less than 2G ram, no worry... with more than 3G ram
and you don't need the mem in userspace, give it to the kernel
in virtual memory for zfs cache by moving the kernelbase...
eeprom kernelbase=0x8000
or for only 1G userland:
eeprom ker
> How does eeprom(1M) work on the Xeon that the OP said he has?
it's faked via /boot/solaris/bootenv.rc
built into /platform/i86pc/$ISADIR/boot_archive
> which is better 8+2 or 8+1+spare?
8+2 is safer for the same speed
8+2 requires a little more math, so it's slower in theory. (unlikely seen)
(4+1)*2 is 2x faster, and in theory is less likely to have wasted space
in a transaction group (unlikely seen)
(4+1)*2 is cheaper to upgrade in plac
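spelled out with ten disks (a sketch; device names made up):
# 8+2: one raidz2 vdev
zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 c1t8d0 c1t9d0
# 8+1+spare: one raidz vdev plus a hot spare
zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 c1t8d0 spare c1t9d0
# (4+1)*2: two raidz vdevs, so roughly twice the iops
zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 raidz c1t5d0 c1t6d0 c1t7d0 c1t8d0 c1t9d0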
#define DRIVE_SIZE_GB 300
#define MTBF_YEARS 2
#define MTTR_HOURS_NO_SPARE 48
#define MTTR_HOURS_SPARE 8
#define NUM_BAYS 10
- can have 3 (2+1) w/ 1 spares providing 1800 GB with MTTDL of 243.33 years
- can have 2 (4+1) w/ 0 spares providing 2400 GB with MTTDL of 18.25 years
- can have 1
> I'm not surprised that having /usr in a separate pool failed.
while this is discouraging, (I have several b62 machines with
root mirrored and /usr on raidz) if booting from raidz
is a priority, and comes soon, at least I'd be happy :-)
Rob
> I suspect that the bad ram module might have been the root
> cause for that "freeing free segment" zfs panic,
perhaps. I removed two 2G simms but left the two 512M
simms, and also removed the kernelbase setting, but the zpool import
still crashed the machine.
it's also registered ECC ram; memtest86 v1.7 di
I'm confused by this and NexentaStor... wouldn't it be better
to use b77? with:
Heads Up: File system framework changes (supplement to CIFS' "head's up")
Heads Up: Flag Day (Addendum) (CIFS Service)
Heads Up: Flag Day (CIFS Service)
caller_context_t in all VOPs - PSARC/2007/218
VFS Feature Regist
> On the other hand, the pool of 3 disks is obviously
> going to be much slower than the pool of 5
while today that's true, "someday" io will be
balanced by the latency of vdevs rather than
the number... plus two vdevs are always going
to be faster than one vdev, even if one is slower
than the
grew tired of the recycled 32bit cpus in
http://www.opensolaris.org/jive/thread.jspa?messageID=127555
and bought this to put the two marvell88sx cards in:
$255 http://www.supermicro.com/products/motherboard/Xeon3000/3210/X7SBE.cfm
http://www.supermicro.com/manuals/motherboard/3210/MNL-0970.p
here is a simple layout for 6 disks toward "speed" :
/dev/dsk/c0t0d0s1  -  -      swap  -  no   -
/dev/dsk/c0t1d0s1  -  -      swap  -  no   -
root/snv_77        -  /      zfs   -  no   -
z/snv_77/usr       -  /usr   zfs   -  yes  -
z/snv_77/var       -
> with 4 cores and 2-4G of ram.
not sure 2G is enough... at least with 64bit there are no kernel space
issues.
6 % echo '::memstat' | mdb -k
Page Summary                Pages                MB  %Tot
Kernel                     692075
After a fresh SMI labeled c0t0d0s0 / swap /export/home jumpstart
in /etc check
hostname.e1000g0 defaultrouter netmasks resolv.conf nsswitch.conf
services hosts
coreadm.conf acctadm.conf dumpadm.conf named.conf rsync.conf
svcadm disable fc-cache cde-login cde-calendar-manager cde-printinfo
>>    r/s   w/s  kr/s    kw/s  wait  actv  wsvc_t  asvc_t  %w   %b  device
>>    0.0  48.0   0.0  3424.6   0.0  35.0     0.0   728.9   0  100  c2t8d0
> That service time is just terrible!
yeah, that service time is unreasonable. almost a second for each
command? and 35 more commands queued? (reorder =
> panic[cpu0]/thread=fbc257a0: cannot mount root path /[EMAIL PROTECTED],0/
when booted from snv_77 type:
zpool import rootpool
zpool get bootfs rootpool
mkdir /mnt
mount -F zfs "the bootfs string" /mnt
my guess is it will fail... so then do
zfs list
and find one that will mount,
> bootfs rootpool/rootfs
does "grep zfs /mnt/etc/vfstab" look like:
rootpool/rootfs  -  /  zfs  -  no  -
(bet it doesn't... edit like above and reboot)
or second guess (well, third :-) is your theory that
can be checked with:
zpool import rootpool
zpool import datap
> I guess the zpool.cache in the bootimage got corrupted?
not on zfs :-) perhaps a path to a drive changed?
Rob
> I've only started using ZFS this week, and hadn't even touched a Unix
welcome to ZFS... here is a simple script you can start with:
#!/bin/sh
snaps=15
today=`date +%j`
nuke=`expr $today - $snaps`
yesterday=`expr $today - 1`
if [ $yesterday -lt 0 ] ; then
yesterday=365
fi
if [ $nuke -lt 0
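a filled-out version of the same idea might look like this (a sketch, not the
original script; the filesystem name tank/home is made up):
#!/bin/sh
# keep the last $snaps daily snapshots, named by day-of-year
fs=tank/home
snaps=15
today=`date +%j`
today=`expr $today + 0`              # strip leading zeros (002 -> 2)
nuke=`expr $today - $snaps`
if [ $nuke -lt 1 ] ; then
    nuke=`expr $nuke + 365`
fi
zfs snapshot $fs@$today
zfs destroy $fs@$nuke 2>/dev/null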
> space_map_add+0xdb(ff014c1a21b8, 472785000, 1000)
> space_map_load+0x1fc(ff014c1a21b8, fbd52568, 1,
ff014c1a1e88, ff0149c88c30)
> running snv79.
hmm.. did you spend any time in snv_74 or snv_75 that might
have gotten http://bugs.opensolaris.org/view_bug.do?bug_id=660
fun example that shows NCQ lowers wait and %w, but doesn't have
much impact on final speed. [scrubbing, devs reordered for clarity]
                extended device statistics
device     r/s   w/s     kr/s  kw/s  wait  actv  svc_t  %w  %b
sd2      454.7   0.0  47168.0   0.0   0.0   5.7   12.6
> what causes a dataset to get into this state?
while I'm not exactly sure, I do have the steps leading up to when
I saw it trying to create a snapshot. ie:
10 % zfs snapshot z/b80nd/[EMAIL PROTECTED]
cannot create snapshot 'z/b80nd/[EMAIL PROTECTED]': dataset is busy
13 % mount -F zfs z/b80nd/
as it's been pointed out, it's likely 6458218,
but a zdb -e poolname
will tell you a little more
Rob
> appears to have unlimited backups for 4.95 a month.
http://rsync.net/ $1.60 per month per G (no experience)
to keep this "more" ontopic and not spam like. what about [home]
backups??.. what's the best "deal" for you:
1) a 4+1 (space) or 2*(2+1) (speed) 64bit 4G+ zfs nas
(data for
> Way crude, but effective enough:
kinda cool, but isn't that what
sar -f /var/adm/sa/sa`date +%d` -A | grep -v ","
is for? crontab -e sys
to start..
for more fun
acctadm -e extended -f /var/adm/exacct/proc process
Rob
> have 4x500G disks in a RAIDZ. I'd like to repurpose [...] as the second
> half of a mirror in a machine going into colo.
rsync or zfs send -R the 128G to the machine going to the colo
if you need more space in colo, remove one disk faulting sys1
and add (stripe) it on colo (note: you will ne
> ZFS is not 32-bit safe.
while this is kinda true, if the system has 2G or less of ram
it shouldn't be an issue other than poor performance for lack of
ARC.
Rob
> Because then I have to compute yesterday's date to do the
> incremental dump.
snaps=15
today=`date +%j`
# to change the second day of the year from 002 to 2
today=`expr $today + 0`
nuke=`expr $today - $snaps`
yesterday=`expr $today - 1`
if [ $yesterday -lt 1 ] ; then
yesterday=365
fi
if [
> I did the cp -r dir1 dir2 again and when it hanged
when it's hung, can you type: iostat -xce 1
in another window and is there a 100 in the %b column?
when you reset and try the cp again, and look at
iostat -xce 1 on the second hang, is the same disk at 100 in %b?
if all your windows are hung,
hmm, three drives with 35 io requests in the queue
and none active? remind me not to buy a drive
with that FW..
1) upgrade the FW in the drives or
2) turn off NCQ with:
echo "set sata:sata_max_queue_depth = 0x1" >> /etc/system
Rob
or work around the NCQ bug in the drive's FW by typing:
su
echo "set sata:sata_max_queue_depth = 0x1" >> /etc/system
reboot
Rob
> would do and booted from the CD. OK, now I zpool imported rpool,
> modified [], exported the pool, and rebooted.
the oops part is the "exported the pool"; a reboot right after editing
would have worked as expected, since rpool wouldn't have been marked as exported.
so boot from the cdrom again, zpool import rpool, and this time just reboot
without exporting.
type:
zpool import 11464983018236960549 rpool.old
zpool import -f mypool
zpool upgrade -a
zfs upgrade -a
> There's also a spare attached to the pool that's not showing here.
can you make it show?
Rob
> How do I go about making it show?
zdb -e exported_pool_name
will show the children's paths; find the path of the "spare"
that's missing, and once you get it to show up you can import the pool.
Rob
> 1) Am I right in my reasoning?
yes
> 2) Can I remove the new disks from the pool, and re-add them under the
> raidz2 pool
copy the data off the pool, destroy and remake the pool, and copy back
> 3) How can I check how much zfs data is written on the actual disk (say
> c12)
> There is something more to consider with SSDs uses as a cache device.
why use SATA as the interface? perhaps
http://www.tgdaily.com/content/view/34065/135/
would be better? (no experience)
"cards will start at 80 GB and will scale to 320 and 640 GB next year.
By the end of 2008, Fusion io also
> I'd like to take a backup of a live filesystem without modifying
> the last accessed time.
why not take a snapshot?
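e.g. (a sketch; names made up):
# a snapshot is read-only and reading it won't touch the live atimes
zfs snapshot tank/home@backup
tar cf /backup/home.tar /tank/home/.zfs/snapshot/backup
zfs destroy tank/home@backup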
Rob
> Is there a way to efficiently replicating a complete zfs-pool
> including all filesystems and snapshots?
zfs send -R
-R Generate a replication stream package,
which will replicate the specified
filesystem, and
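typical use looks like (a sketch; pool and host names made up):
# one recursive snapshot, then replicate filesystems, snapshots and
# properties to another pool or host
zfs snapshot -r sourcepool@migrate
zfs send -R sourcepool@migrate | ssh otherhost zfs receive -Fd destpool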
> making all the drives in a *zpool* the same size.
The only issue with having vdevs of different sizes is when
one fills up, reducing the stripe width for writes.
> making all the drives in a *vdev* (of almost any type) the same
The only issue is the unused space of the largest device, but
then we c
> 1) and l2arc or log device needs to evacuation-possible
how about evacuation of any vdev? (pool shrink!)
> 2) any failure of a l2arc or log device should never prevent
> importation of a pool.
how about import or creation of any kinda degraded pool?
Rob
> replace a current raidz2 vdev with a mirror.
you're asking for vdev removal or pool shrink, which isn't
finished yet.
Rob
`mv`ing files from a zfs dir to another zfs filesystem
in the same pool will panic an 8 sata raidz
http://supermicro.com/Aplus/motherboard/Opteron/nForce/H8DCE.cfm
system with
::status
debugging crash dump vmcore.3 (64-bit) from zfs
operating system: 5.11 opensol-20060523 (i86pc)
panic message:
a
-0400, Rob Logan wrote:
> `mv`ing files from a zfs dir to another zfs filesystem
> in the same pool will panic a 8 sata zraid
> http://supermicro.com/Aplus/motherboard/Opteron/nForce/H8DCE.cfm
> system with
>
> ::status
> debugging crash dump vmcore.3 (64-bit) from zfs
> o
why is the sum of the disks' bandwidth from `zpool iostat -v 1`
less than the pool total while watching `du /zfs`
on opensol-20060605 bits?
              capacity     operations    bandwidth
pool        used  avail   read  write   read  write
----------  ----  -----  -----  -----  -----  -----
zfs
> a total of 4*64k = 256k to fetch a 2k block.
> http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6437054
perhaps a quick win would be to tell vdev_cache
about the DMU_OT_* type so it can read ahead appropriately.
it seems the largest losses are metadata. (du,find,scrub/resilver)
ERj> 2) is it possible to easily add (-> more available space) and
> you can add disks to a raidz pool but it won't actually grow stripe
> width and in order to preserver redundancy you will have to add at
> least pairs of disks.
if one is drive bay limited, replace *all* the raidz drives,
one