On 04/02/2013 at 11:21:12-0500, Paul Kraus wrote:
> On Jan 31, 2013, at 5:16 PM, Albert Shih wrote:
>
> > Well, I have a server running FreeBSD 9.0 (not counting /, which is on
> > different disks) with a zfs pool of 36 disks.
> >
> > The performance is very very g
d that change nothing either.
Does anyone have any idea?
Regards.
JAS
--
Albert SHIH
DIO bâtiment 15
Observatoire de Paris
5 Place Jules Janssen
92195 Meudon Cedex
Phone: 01 45 07 76 26 / 06 86 69 95 71
xmpp: j...@obspm.fr
Heure local/Local time:
Thu 31 Jan 2013 23:04:47 CET
00.4
>
>
> > And what happens if I have 24 or 36 disks to change? It would take months
> > to do that.
>
> Those are the current limitations of zfs. Yes, with 12x2TB of data to
> copy it could take about a month.
OK.
>
> If you are feeling particularly risky and
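For reference, the disk-at-a-time approach that reply is describing looks roughly
like this (a rough sketch; the pool and device names are placeholders):

# zpool set autoexpand=on tank
Swap one old disk for a new, larger one and let the resilver finish:
# zpool replace tank da0 da36
# zpool status tank
Repeat for each disk in the vdev; the extra space only shows up once the last
disk of the vdev has been replaced and resilvered.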
On 30/11/2012 at 15:52:09+0100, Tomas Forsman wrote:
> On 30 November, 2012 - Albert Shih sent me these 0,8K bytes:
>
> > Hi all,
> >
> > I would like to know if it's possible to do something like this with ZFS:
> >
> > http://tldp.org/HOWTO/LV
put in the server, add them to the zpool, and ask the zpool
to migrate all the data from those 12 old disks onto the new ones, then remove
the old disks?
Regards.
--
Albert SHIH
DIO bâtiment 15
Observatoire de Paris
5 Place Jules Janssen
92195 Meudon Cedex
Phone: 01 45 07 76 26 / 06 86 69 95 71
xmpp: j...@obspm.fr
Heure l
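For the record, ZFS has no pvmove equivalent and top-level vdevs cannot be removed
from a pool, so the usual way to migrate onto new disks is either to zpool replace
the old disks one by one, or to copy everything to a second pool with send/receive
(a rough sketch; pool and snapshot names are made up):

# zfs snapshot -r oldpool@migrate
# zfs send -R oldpool@migrate | zfs receive -F -d newpool
Then verify the data on newpool before destroying or detaching the old disks.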
what I understand I can add more MD1200s. But if I lose one MD1200
for any reason I lose the entire pool.
In your experience, what's the "limit"? 100 disks?
How does FreeBSD manage 100 disks? /dev/da100?
Regards.
JAS
--
Albert SHIH
DIO bâtiment 15
Observatoire de Paris
5 Place Jules J
m.
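One way to keep a single MD1200 failure from taking out the pool is to build each
raidz2 vdev from at most two disks per shelf, e.g. with three shelves of twelve
disks (a sketch; the device names are illustrative):

# zpool create tank \
    raidz2 da0 da1 da12 da13 da24 da25 \
    raidz2 da2 da3 da14 da15 da26 da27
and so on for the remaining disks. Losing a whole shelf then costs each vdev only
two members, which raidz2 survives. As for naming, FreeBSD just keeps counting:
da0 ... da99, da100, da101, and so on.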
OK thanks.
>
> > 2/ How can I know the total size of all snapshots for a partition?
> > (OK, I can add up zfs list -t snapshot)
>
> zfs get usedbysnapshots
Thanks.
Can I say
USED - REFER = snapshot size?
Regards.
JAS
--
Albert SHIH
DIO bâtiment
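To put numbers on that (a sketch; the dataset name is a placeholder):

# zfs list -o name,used,referenced,usedbysnapshots,usedbydataset tank/home
# zfs get usedbysnapshots tank/home

REFER is the space the dataset's live data references; USED also includes
snapshots and children, so USED - REFER is only a rough approximation of
snapshot usage, while usedbysnapshots is the exact figure.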
Hi all,
Two questions from a newbie.
1/ What does REFER mean in zfs list?
2/ How can I know the total size of all snapshots for a partition?
(OK, I can add up zfs list -t snapshot)
Regards.
JAS
--
Albert SHIH
DIO bâtiment 15
Observatoire de Paris
5 Place Jules Janssen
On 17/01/2012 at 06:31:22-0800, Brad Stone wrote:
> Try zpool import
Thanks.
It's working.
Regards.
--
Albert SHIH
DIO batiment 15
Observatoire de Paris
5 Place Jules Janssen
92195 Meudon Cedex
Phone: 01 45 07 76 26 / 06 86 69 95 71
Heure local/Local time:
Tue 17 Jan 2012 15:3
ll /dev/da0 --> /dev/da47.
I've created a zpool with 4 raidz2 vdevs (one for each MD1200).
After that I re-installed the server (I had set the wrong swap size), and I
can't find my zpool at all.
Is that normal ?
Regards.
JAS
--
Albert SHIH
DIO batiment 15
Observatoire de Paris
5 Place
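For completeness, the fix mentioned above is simply (the pool name is a
placeholder):

# zpool import
# zpool import tank

The first form scans the disks and lists importable pools; the second imports one
by name. Since the old install never exported the pool, -f may be needed to force
the import.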
On Mon, Dec 12, 2011 at 03:01:08PM -0500, "Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D."
wrote:
> 4c@2.4ghz
Yep, that's the plan. Thanks.
> On 12/12/2011 2:44 PM, Albert Chin wrote:
> >On Mon, Dec 12, 2011 at 02:40:52PM -0500, "Hung-Sheng Tsao (Lao Tsao 老曹)
> >Ph.D
On Mon, Dec 12, 2011 at 02:40:52PM -0500, "Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D."
wrote:
> please check out the ZFS appliance 7120 spec: 2.4GHz / 24GB memory and
> ZIL (SSD)
> maybe try the ZFS simulator SW
Good point. Thanks.
> regards
>
> On 12/12/2011 2:28 PM,
Recommendations?
--
albert chin (ch...@thewrittenword.com)
Regards.
--
Albert SHIH
DIO batiment 15
Observatoire de Paris
5 Place Jules Janssen
92195 Meudon Cedex
Phone: 01 45 07 76 26 / 06 86 69 95 71
Heure local/Local time:
Thu 27 Oct 2011 17:20:11 CEST
On 19/10/2011 at 10:52:07-0400, Krunal Desai wrote:
> On Wed, Oct 19, 2011 at 10:14 AM, Albert Shih wrote:
> > When we buy an MD1200 we need a RAID PERC H800 card in the server, so we have
> > two options:
> >
> > 1/ create a LV on the PERC H800 so the se
so
> > 12x2TB disks)
>
> The more the better :)
Well, my employer is not so rich.
It's the first time I'm going to use ZFS on FreeBSD in production (I use it on my
laptop but that doesn't mean anything), so in your opinion what's the minimum RAM
I need? Is something like 48
on ?
Any advice about the RAM I need on the server (currently one MD1200, so 12x2TB
disks)?
Regards.
JAS
--
Albert SHIH
DIO batiment 15
Observatoire de Paris
5 Place Jules Janssen
92195 Meudon Cedex
Phone: 01 45 07 76 26 / 06 86 69 95 71
Heure local/Local time:
Wed 19 Oct 2011 16:
man zpool /failmode
-Albert
On Tue, May 24, 2011 at 1:20 PM, Roy Sigurd Karlsbakk
wrote:
> Hi all
>
> I just attended this HTC conference and had a chat with a guy from UiO
> (university of oslo) about ZFS. He claimed Solaris/OI will die silently if a
> single pool fails. I ha
On 04.04.2011 12:44, Fajar A. Nugraha wrote:
On Mon, Apr 4, 2011 at 4:49 PM, For@ll wrote:
What can I do so that zpool shows the new value?
zpool set autoexpand=on TEST
zpool set autoexpand=off TEST
-- richard
I tried your suggestion, but it had no effect.
Did you modify the partition table?
IIRC if
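If the label does cover the whole LUN, the expansion can also be nudged per
device (a sketch; pool and device names are placeholders):

# zpool set autoexpand=on TEST
# zpool online -e TEST c0t1d0

zpool online -e tells ZFS to grow that vdev to the size of the underlying device;
if the device sits on a slice or partition that was never enlarged, neither
command will show any new space.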
Hi there,
I have FreeNAS installed with a raidz1 pool of 3 disks. One of them has now failed
and it gives me errors like "Unrecovered read error: auto reallocate failed" or
"MEDIUM ERROR asc:11,4", and the system won't even boot up. So I bought a
replacement drive, but I am a bit concerned since norma
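For what it's worth, once the system is up and the new drive is in, the
replacement itself is a single command (a sketch; pool and device names are
placeholders, FreeBSD-style):

# zpool replace tank ada1
# zpool status tank

With the new disk in the same slot, zpool replace only needs the old device name.
The raidz1 pool stays usable while it resilvers, but it has no redundancy left
until the resilver completes.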
Hi,
I wonder which is the better option: install the system on UFS and keep only the
sensitive data on ZFS, or is it best to put everything on ZFS?
What are the pros and cons of each solution?
f...@ll
On 2010-12-01 15:19, Menno Lageman wrote:
f...@ll wrote:
Hi,
I must send a zfs snapshot from one server to another. The snapshot is
130GB. Now I have a question: does zfs have any limit on the size of what it sends?
If you are sending the snapshot to another zpool (i.e. using 'zfs send |
zfs recv') then
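As far as I know there is no practical limit on the size of the stream itself; the
usual pattern between two machines is just a pipe over ssh (a sketch; host, pool
and snapshot names are placeholders):

# zfs snapshot tank/data@xfer
# zfs send tank/data@xfer | ssh otherhost zfs receive -F backup/data

If the link is slow, compressing the stream (ssh -C, or gzip at both ends) usually
helps more than anything on the ZFS side.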
This forum has been tremendously helpful, but I decided to get some help from a
Solaris guru to install Solaris for a backup application.
I do not want to disturb the flow of this forum, but where can I post to get
some paid help on this forum? We are located in the San Francisco Bay Area. Any
hel
OK, thanks for the fast info. That sounds really awesome. I am glad I tried out
ZFS, so I no longer have to worry about these issues, and the fact that I can
go back and forth between stripe and mirror is amazing. Money was short, so
only 2 disks had been put in, and since the data is not that w
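The stripe/mirror flexibility mentioned there is the zpool attach/detach pair (a
sketch; device names are placeholders). Any single disk in the pool can be turned
into a mirror, and back, on the fly:

# zpool attach tank c0t0d0 c0t2d0
# zpool status tank
# zpool detach tank c0t2d0

Note that attach mirrors one disk at a time, so a two-disk stripe needs two extra
disks, one attached to each half, before the whole pool is mirrored.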
Hi there,
Since I am really new to ZFS, I have 2 important questions to start with. I have a
NAS up and running ZFS in stripe mode with 2x 1.5TB HDDs. My question, for future
proofing, would be: could I add just another drive to the pool and have ZFS
integrate it flawlessly? And second, if this HDD cou
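On the first question: adding a disk to a striped pool is a one-liner (a sketch;
placeholder names):

# zpool add tank c0t2d0

The disk becomes another top-level vdev and its space is available immediately.
Existing data is not rebalanced, and a top-level vdev cannot be removed again, so
it is worth double-checking the command before running it.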
OK, nice to know :) Thank you very much for your quick answer.
Hi there,
Maybe this is a stupid question, yet I haven't found an answer anywhere ;)
Let's say I have 3x 1.5TB HDDs: can I create equal partitions out of each and make
a RAID5 out of them? Sure, the safety would drop, but that is not that important
to me. With roughly 500GB partitions and the RAID5 fo
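ZFS will happily build a raidz out of slices or partitions instead of whole disks
(a sketch; the slice names are made up):

# zpool create tank raidz c0t0d0s3 c0t1d0s3 c0t2d0s3

giving roughly two slices' worth of usable space. Whole disks are still preferable
when you can spare them, since ZFS can then manage the drive write cache itself.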
oots/hppa1.1-hp-hpux11...@ab
zfs snapshot tww/opt/chroots/hppa1.1-hp-hpux11...@ab
zfs clone tww/opt/chroots/hppa1.1-hp-hpux11...@ab
tww/opt/chroots/ab/hppa1.1-hp-hpux11.11
...
and then perform another zfs send/receive, the error above occurs. Why?
--
albert chin (ch...@the
the thread:
http://opensolaris.org/jive/thread.jspa?threadID=115503&tstart=0
--
albert chin (ch...@thewrittenword.com)
ans it will
> take 100 hours. Is this normal? If I had 30TB to back up, it would
> take 1000 hours, which is more than a month. Can I speed this up?
It's not immediately obvious what the cause is. Maybe the server running
zfs send has slow MB/s performance reading from disk. Maybe the
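A quick way to narrow that down is to time the stages separately (a sketch; names
are placeholders):

Raw read/send speed on the sender, no network involved:
# zfs send tank/data@backup > /dev/null
Network path alone:
# dd if=/dev/zero bs=1024k count=1000 | ssh backuphost "cat > /dev/null"

If the local send is fast but the full pipeline is slow, the transport or the
receiver is the bottleneck; a buffering tool such as mbuffer between send and
receive often smooths out the bursty stream.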
> ZFS ACL activity shown by DTrace. I wonder if there is a lot of sync
>> I/O that would benefit from separately defined ZILs (whether SSD or
>> not), so I've asked them to look for fsync activity.
>>
>> Data collected thus far is listed below. I've asked f
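A DTrace one-liner along these lines is usually enough to see whether fsync is the
driver (a sketch only):

# dtrace -n 'syscall::fsync:entry { @[execname] = count(); }'

Let it run for a minute under load; if one or two processes dominate the counts,
synchronous I/O is significant and a separate ZIL device is likely to help them.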
On Mon, Oct 19, 2009 at 09:02:20PM -0500, Albert Chin wrote:
> On Mon, Oct 19, 2009 at 03:31:46PM -0700, Matthew Ahrens wrote:
> > Thanks for reporting this. I have fixed this bug (6822816) in build
> > 127.
>
> Thanks. I just installed OpenSolaris Preview based on 125
> --matt
>
> Albert Chin wrote:
>> Running snv_114 on an X4100M2 connected to a 6140. Made a clone of a
>> snapshot a few days ago:
>> # zfs snapshot a...@b
>> # zfs clone a...@b tank/a
>> # zfs clone a...@b tank/b
>>
>> The system started pan
- switching to Comstar, snv124, VBox
> 3.08, etc., but such a dramatic loss of performance probably has a
> single cause. Is anyone willing to speculate?
Maybe this will help:
http://mail.opensolaris.org/pipermail/storage-discuss/2009-September/007118.html
--
a
receive [-vnF] -d
>
> For the property list, run: zfs set|get
>
> For the delegated permission list, run: zfs allow|unallow
> r...@xxx:~# uname -a
> SunOS xxx 5.10 Generic_13-03 sun4u sparc SUNW,Sun-Fire-V890
>
> What's wrong?
Looks like -u wa
On Mon, Sep 28, 2009 at 07:33:56PM -0500, Albert Chin wrote:
> When transferring a volume between servers, is it expected that the
> usedbydataset property should be the same on both? If not, is it cause
> for concern?
>
> snv114# zfs list tww/opt/vms/images/vios/
NAME                            USED   AVAIL  REFER  MOUNTPOINT
t/opt/vms/images/vios/near.img  14.5G  2.42T  14.5G  -
snv119# zfs get usedbydataset t/opt/vms/images/vios/near.img
NAME                            PROPERTY       VALUE  SOURCE
t/opt/vms/images/vios/near.img  usedbydataset  14.5G  -
--
albert chin (ch
properties are
not sent?
--
albert chin (ch...@thewrittenword.com)
On Mon, Sep 28, 2009 at 10:16:20AM -0700, Richard Elling wrote:
> On Sep 28, 2009, at 3:42 PM, Albert Chin wrote:
>
>> On Mon, Sep 28, 2009 at 12:09:03PM -0500, Bob Friesenhahn wrote:
>>> On Mon, 28 Sep 2009, Richard Elling wrote:
>>>>
>>>> Scrub co
a.
>> So you simply need to read the data.
>
> This should work but it does not verify the redundant metadata. For
> example, the duplicate metadata copy might be corrupt but the problem
> is not detected since it did not happen to be used.
Too bad we cannot scrub a data
Without doing a zpool scrub, what's the quickest way to find files in a
filesystem with cksum errors? Iterating over all files with "find" takes
quite a bit of time. Maybe there's some zdb fu that will perform the
check for me?
--
albert chin (ch..
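For errors the pool has already tripped over, the list is kept for you (e.g.):

# zpool status -v tww

That only covers blocks that have actually been read and failed their checksum,
though; finding errors in data that has not been touched means reading it all,
which is exactly what a scrub does.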
l. I'd
pop up there and ask. There are somewhat similar bug reports at
bugs.opensolaris.org. I'd post a bug report just in case.
--
albert chin (ch...@thewrittenword.com)
I started getting this as
> well.. My Mirror array is unaffected.
>
> snv111b (2009.06 release)
What does the panic dump look like?
--
albert chin (ch...@thewrittenword.com)
On Fri, Sep 25, 2009 at 05:21:23AM +, Albert Chin wrote:
> [[ snip snip ]]
>
> We really need to import this pool. Is there a way around this? We do
> have snv_114 source on the system if we need to make changes to
> usr/src/uts/common/fs/zfs/dsl_dataset.c. It seems like the
It seems like the "zfs
destroy" transaction never completed and it is being replayed, causing
the panic. This cycle continues endlessly.
--
albert chin (ch...@thewrittenword.com)
c40 zfs:txg_sync_thread+265 ()
ff00104c0c50 unix:thread_start+8 ()
System is a X4100M2 running snv_114.
Any ideas?
--
albert chin (ch...@thewrittenword.com)
0 0
c4t6d0 ONLINE 0 0 0
c4t7d0 ONLINE 0 0 0
errors: 855 data errors, use '-v' for a list
--
albert chin (ch...@thewrittenword.com)
INUSE currently in use
c6t600A0B800029996605C84668F461d0 INUSE currently in use
c6t600A0B80002999660A454A93CEDBd0 AVAIL
c6t600A0B80002999660ADA4A9CF2EDd0 AVAIL
--
albert chin (ch...@thewrittenword.com
On Mon, Aug 31, 2009 at 02:40:54PM -0500, Albert Chin wrote:
> On Wed, Aug 26, 2009 at 02:33:39AM -0500, Albert Chin wrote:
> > # cat /etc/release
> > Solaris Express Community Edition snv_105 X86
> >Copyright 2008 Sun Microsystems, Inc.
On Wed, Aug 26, 2009 at 02:33:39AM -0500, Albert Chin wrote:
> # cat /etc/release
> Solaris Express Community Edition snv_105 X86
>Copyright 2008 Sun Microsystems, Inc. All Rights Reserved.
> Use is subject to l
ation that might help track this down, just lots of checksum
> errors.
So, on snv_121, can you read the files with checksum errors? Is it
simply the reporting mechanism that is wrong or are the files really
damaged?
--
albert chin (ch...@thewrittenword.com)
up.
see: http://www.sun.com/msg/ZFS-8000-8A
scrub: resilver in progress for 0h11m, 2.82% done, 6h21m to go
config:
...
So, why is a resilver in progress when I asked for a scrub?
--
albert chin (ch...@thewrittenword.com)
On Tue, Aug 25, 2009 at 06:05:16AM -0500, Albert Chin wrote:
> [[ snip snip ]]
>
> After the resilver completed:
> # zpool status tww
> pool: tww
> state: DEGRADED
> status: One or more devices has experienced an error resulting in data
> corruption. Appl
0299CCC0A194A89E634d0 \
c6t600A0B800029996609EE4A89DA51d0
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c6t600A0B800029996609EE4A89DA51d0s0 is part of active ZFS
pool tww. Please see zpool(1M).
So, what is going on?
--
al
On Mon, Aug 24, 2009 at 02:01:39PM -0500, Bob Friesenhahn wrote:
> On Mon, 24 Aug 2009, Albert Chin wrote:
>>
>> Seems some of the new drives are having problems, resulting in CKSUM
>> errors. I don't understand why I have so many data errors though. Why
>> does th
ors though. Why
does the third raidz2 vdev report 34.0K CKSUM errors?
The number of data errors appears to be increasing as well as the
resilver process continues.
--
albert chin (ch...@thewrittenword.com)
On Mon, Nov 24, 2008 at 08:43:18AM -0800, Erik Trimble wrote:
> I _really_ wish rsync had an option to "copy in place" or something like
> that, where the updates are made directly to the file, rather than a
> temp copy.
Isn't this what --inplace does?
--
alber
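In other words, something along these lines (a sketch; the paths are placeholders):

# rsync -a --inplace --no-whole-file /data/src/ /backup/dst/

--inplace writes updates directly into the destination file instead of building a
temporary copy and renaming it, and --no-whole-file keeps the delta algorithm on
for local copies, so only the changed blocks of a file are rewritten, which is
what you want on a snapshotted ZFS target.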
unfortunately I am unable
> to verify the driver. "pkgadd -d umem_Sol_Drv_Cust_i386_v01_11.pkg"
> hangs on "## Installing part 1 of 3." on snv_95. I do not have other
> Solaris versions to experiment with; this is really just a hobby for
> me.
Does the card c
he clients though.
--
albert chin ([EMAIL PROTECTED])
> > start thinking bigger.
> >
> > I'd also like to know if there's any easy way to see the current performance
> > of the system once it's in use? I know VMware has performance monitoring
> > built into the console, bu
ve too much ram
Well, if the server attached to the J series is doing ZFS/NFS,
performance will increase with zfs:zfs_nocacheflush=1. But, without
battery-backed NVRAM, this really isn't "safe". So, for this usage case,
unless the server has battery-backed NVRAM, I don't see how
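For completeness, the tunable goes in /etc/system (a sketch, and as said above
only sane when the array really has battery-backed cache):

set zfs:zfs_nocacheflush = 1

followed by a reboot. It stops ZFS from issuing cache-flush commands to the array,
which is what makes NFS workloads fly on NVRAM-backed storage and what makes it
unsafe without it.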
e is another version called
> J4400 with 24 disks.
>
> Doc is here :
> http://docs.sun.com/app/docs/coll/j4200
--
albert chin ([EMAIL PROTECTED])
tp://www.vmetro.com/category4304.html, and I don't have any space in
> this server to mount a SSD.
Maybe you can call Vmetro and get the names of some resellers whom you
could call to get pricing info?
--
albert chin ([EMAIL PROTECTED])
oid? Any other caveats I would want to take into consideration?
>
I don't think there will be any spec changes for S10u6 from the ZFS boot
support currently available in SX, but the JumpStart configuration for
SX might not be compatible for other reasons (install-discuss may know
better).
en asking zfs-discuss about a backup solution. This
is 2008, not 1960.
If he wanted that he could just use dd (or partimage for a slight
optimisation). =P
-Albert
e no data
> would be rewritten.
Correct, and Live Upgrade also clones the active BE when you do
lucreate. Unless you copy all the data manually, it's going to inherit
the uncompressed blocks from the current filesystem.
-Albert
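If a compressed BE is the goal, the usual workaround after lucreate is something
like this (a rough sketch; the BE dataset name is invented):

# zfs set compression=on rpool/ROOT/newBE

and then rewrite the data, e.g. copy it aside and back or restore it from backup.
Compression only applies to blocks written after the property is set, which is
why the clone by itself doesn't help.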
it manually:
# zpool set bootfs=zpl/ROOT/snv_90 zpl
Update the boot archive (which is stale for some reason):
# mount -F zfs zpl/ROOT/snv_90 /a
# bootadm update-archive -R /a
# umount /a
Cross fingers, reboot!
# init 6
-Albert
> Suggestions welcome, maybe we'll try out some of them and report ;)
The LU support for ZFS root is part of a set of updates to the installer
that are not available until snv_90. There is a hack to do an offline
upgrade from DVD/CD (zfs_ttinstall), if you can't wait.
-Albert
> Same with: boot disk1 -Z Root/nv88_zfs
>
> What is missing in the setup?
> Unfortunately opensolaris contains only the preliminary setup for x86,
> so it does not help me...
>
> Regards,
>
> Ulrich
>
Does newboot automatically construct
instead of SAMBA (or, you
> know your macs can speak NFS ;>).
Alternatively you could run Banshee or mt-daapd on the Solaris box and
just rely on iTunes sharing. =P
Seriously, NFS is a totally reasonable way to go.
-Albert
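Exporting the music filesystem over NFS is a one-liner (a sketch; the dataset name
and options are placeholders):

# zfs set sharenfs=ro tank/music

and the Macs can mount it read-only; sharenfs=on gives the default read-write
export if that is preferred.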
yesterday's date to do the incremental
> dump.
Not if you set a ZFS property with the date of the last backup.
--
albert chin ([EMAIL PROTECTED])
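ZFS user properties work nicely for that kind of bookkeeping (a sketch; the
property and dataset names are invented, and a user property just needs a colon
in its name):

# zfs set backup:last=2008-06-01 tank/home
# zfs get -H -o value backup:last tank/home
2008-06-01

The backup script reads the property, uses it as the "since" date for the
incremental dump, and rewrites it after a successful run.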
current scoop?
>
> Thanks!
>
PSARC 2007/567 added the "failmode" option to the zpool(1) command to
specify what happens when a pool fails. This was integrated in Nevada
b77; it probably won't be available in S10 until the next update.
-Albert
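So on b77 or later the behaviour can be chosen per pool (a sketch; the pool name
is a placeholder):

# zpool get failmode tank
# zpool set failmode=continue tank

wait (the default) blocks I/O until the devices come back, continue returns errors
to applications instead of hanging, and panic panics the host.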
as terrible. I then manually transferred half the LUNs
> to controller A and it started to fly.
http://groups.google.com/group/comp.unix.solaris/browse_frm/thread/59b43034602a7b7f/0b500afc4d62d434?lnk=st&q=#0b500afc4d62d434
--
albert chin ([EMAIL PROTECTED])
=255
You will have to edit it to fit your zfs layout.
> --
> Michael Hale
> <[EMAIL PROTECTED]
> >
> Manager of Engineering Support
> Enterprise Engineering Group
> Transcom Enhanced Services
> http://www
thumper_bench.html
>
Many thanks for doing this work. Let me go read it.
Regards.
--
Albert SHIH
Observatoire de Paris Meudon
SIO batiment 15
Heure local/Local time:
Fri 1 Feb 2008 23:03:59 CET
On 30/01/2008 at 11:01:35-0500, Kyle McDonald wrote:
> Albert Shih wrote:
>> What kind of pool would you use with 46 disks? (46=2*23 and 23 is a prime
>> number, which means I can make raidz with 6 or 7 or any number of disks.)
>>
>>
> Depending on needs for
pool uses 5-way raidz and new vdev uses 6-way
raidz
I can force this with the "-f" option.
But what does that mean (sorry if the question is stupid)?
What kind of pool would you use with 46 disks? (46=2*23 and 23 is a prime number,
which means I can make raidz with 6 or 7 or any number of dis
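For what it's worth, the vdev widths don't have to divide 46 evenly; the pool just
stripes across whatever top-level vdevs it has, and something like four 11-disk
raidz2 vdevs plus two hot spares is one reasonable carve-up of 46 drives. The -f
is only demanded because the vdev being added has a different replication level
(6-way raidz) from the vdevs already in the pool (5-way raidz). Forcing it works,
but mixed widths give uneven space and performance, so matching the existing width
avoids the warning entirely (a sketch; device names are made up):

# zpool add tank raidz c5t0d0 c5t1d0 c5t2d0 c5t3d0 c5t4d0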
ething equivalent to the performance of
ZIL disabled with ZIL/RAM. I'd do ZIL with a battery-backed RAM in a
heartbeat if I could find a card. I think others would as well.
--
albert chin ([EMAIL PROTECTED])
pt.
>
> All these things are being worked on, but it might take sometime
> before everything is made aware that yes it's no longer unusual that
> there can be 1+ filesystems on one machine.
But shouldn't sharemgr(1M) be "a
S to
> do the mirroring.
Why even bother with a H/W RAID array when you won't use the H/W RAID?
Better to find a decent SAS/FC JBOD with cache. Would definitely be
cheaper.
--
albert chin ([EMAIL PROTECTED])
Unfortunately, no inexpensive
cards exist for the common consumer (with ECC memory anyways). If you
convince http://www.micromemory.com/ to sell you one, let us know :)
Set "set zfs:zil_disable = 1" in /etc/system to gauge the type of
improvement you can expect. Don't use this in p
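To gauge it (a sketch, and as said above not for production): add

set zfs:zil_disable = 1

to /etc/system, reboot, rerun the workload, and compare. The difference is roughly
the ceiling of what a fast slog (battery-backed RAM or SSD) could buy, since both
take synchronous writes out of the main pool's latency path.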
behavior? It really makes
> ZFS less than desirable/reliable.
http://blogs.sun.com/eschrock/entry/zfs_and_fma
FMA For ZFS Phase 2 (PSARC/2007/283) was integrated in b68:
http://www.opensolaris.org/os/community/arc/caselog/2007/283/
http://www.opensolaris.org/os/community/on/flag-days
it works for earlier builds, too.
Good luck,
-Albert
> > Computer Officer, University of Cambridge, Unix Support
> >
>
> --
> Jorgen Lundman | <[EMAIL PROTECTED]>
> Unix Administrator | +81 (0)3 -5456-2687 ext 1017 (work)
> Shibuya-ku, Tokyo| +81 (0)90-5578-8500 (cell)
> Japan| +81 (0)3 -3375-1767 (home)
>
On Mon, 2007-11-26 at 08:21 -0800, Roman Morokutti wrote:
> Hi
>
> I am very interested in using ZFS as a whole: meaning
> on the whole disk in my laptop. I would now make a
> complete reinstall and don't know how to partition
> the disk initially for ZFS.
>
> --
> Roman
>
>
> This message
On Tue, Nov 20, 2007 at 11:39:30AM -0600, Albert Chin wrote:
> On Tue, Nov 20, 2007 at 11:10:20AM -0600, [EMAIL PROTECTED] wrote:
> >
> > [EMAIL PROTECTED] wrote on 11/20/2007 10:11:50 AM:
> >
> > > On Tue, Nov 20, 2007 at 10:01:49AM -0600, [EMAIL PROTECTED] wrote:
| tail -1
> > > > 2007-11-20.02:37:13 zpool replace tww
> > > c0t600A0B8000299966059E4668CBD3d0
> > > > c0t600A0B8000299CCC06734741CD4Ed0
> > > >
> > > > So, why did resilvering restart when no zfs operations occurred? I
> > &
and now I get:
> > # zpool status tww
> > pool: tww
> >state: DEGRADED
> > status: One or more devices is currently being resilvered. The pool
> will
> > continue to function, possibly in a degraded state.
> > action: Wait for the resilve
graded state.
action: Wait for the resilver to complete.
scrub: resilver in progress, 0.00% done, 134h45m to go
What's going on?
--
albert chin ([EMAIL PROTECTED])
On Mon, Nov 19, 2007 at 06:23:01PM -0800, Eric Schrock wrote:
> You should be able to do a 'zpool detach' of the replacement and then
> try again.
Thanks. That worked.
> - Eric
>
> On Mon, Nov 19, 2007 at 08:20:04PM -0600, Albert Chin wrote:
> > Running ON b66
0
cannot replace c0t600A0B8000299966059E4668CBD3d0 with
c0t600A0B8000299CCC06734741CD4Ed0: cannot replace a replacing device
--
albert chin ([EMAIL PROTECTED])
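Spelled out, the fix Eric describes above looks like this, using the device names
from that error:

# zpool detach tww c0t600A0B8000299CCC06734741CD4Ed0
# zpool replace tww c0t600A0B8000299966059E4668CBD3d0 \
    c0t600A0B8000299CCC06734741CD4Ed0

Detaching the half-finished replacement clears the "replacing" vdev so the replace
can be issued again.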
re drive.
>
> Ian.
AFAIK the write cache is always enabled for PATA drives:
http://src.opensolaris.org/source/search?q=ata_write_cache+&defs=&refs=&path=&hist=&project=%2Fonnv
-Albert
>
> It didn't. It still isn't supported by the installer in SXCE either.
>
> Ian
ZFS root and boot have worked fine since the later snv_6x builds; the installer
is a different matter. You'll have to install to UFS and move to ZFS
http://mail.opensolaris.org/pipermail/storage-discuss/2007-July/003080.html
You'll need to determine the performance impact of removing NVRAM from
your data LUNs. Don't blindly do it.
--
albert chin ([EMAIL PROTECTED])
please read the script.
Let me know if this was helpful,
-Albert
On Tue, Sep 18, 2007 at 12:59:02PM -0400, Andy Lubel wrote:
> I think we are very close to using zfs in our production environment.. Now
> that I have snv_72 installed and my pools set up with NVRAM log devices
> things are hauling butt.
How did you get NVRAM log devices?
--
al
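For anyone following along: once a suitable device is found, attaching it as a
log is straightforward (a sketch; the device names are placeholders):

# zpool add tank log c3t0d0
or, mirrored:
# zpool add tank log mirror c3t0d0 c4t0d0

Separate intent log devices need zpool version 7, which if I remember correctly
arrived around snv_68, so snv_72 is fine.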
If exporting is not possible, 'zpool import -f' will ignore that warning
and allow importing anyway.
-Albert
connector and not sure if it is worth the whole effort
> for my personal purposes.
Huh? So your MM-5425CN doesn't fit into a PCI slot?
> Any comments are very appreciated
How did you obtain your card?
--
albert chin ([EMAIL PROTECTED])
On Wed, Jul 18, 2007 at 01:54:23PM -0600, Neil Perrin wrote:
> Albert Chin wrote:
> > On Wed, Jul 18, 2007 at 01:29:51PM -0600, Neil Perrin wrote:
> >> I wrote up a blog on the separate intent log called "slog blog"
> >> which describes the interface; some