[zfs-discuss] Solaris License with ZFS USER quotas?

2009-09-28 Thread Jorgen Lundman
turning our 12x x4540, and calling NetApp. I would rather not (more work for me). I understand Sun is probably experiencing some internal turmoil at the moment, but it has been rather frustrating for us. Lund -- Jorgen Lundman | Unix Administrator | +81 (0)3 -5456-2687 ext 1017 (w

Re: [zfs-discuss] Solaris License with ZFS USER quotas?

2009-09-28 Thread Jorgen Lundman
"future releases" of Solaris. Thanks Lund

Re: [zfs-discuss] Comments on home OpenSolaris/ZFS server

2009-09-30 Thread Jorgen Lundman
bootable Solaris. Very flexible and can put on the Admin GUIs, and so on. https://sourceforge.net/projects/embeddedsolaris/ Lund

Re: [zfs-discuss] zfs on s10u8

2009-10-17 Thread Jorgen Lundman
: Any known issues for the new ZFS on solaris 10 update 8? Or is it still wiser to wait doing a zpool upgrade? Because older ABE's can no longer be accessed then.

[zfs-discuss] ZFS user quota, userused updates?

2009-10-19 Thread Jorgen Lundman
...@1029 54.0M local Any suggestions would be most welcome, Lund

[zfs-discuss] ZFS dedup vs compression vs ZFS user/group quotas

2009-11-03 Thread Jorgen Lundman
aves space, that is profit to us) Is the space saved with dedup charged in the same manner? I would expect so, I figured some of you would just know. I will check when b128 is out. I don't suppose I can change the model? :) Lund

Re: [zfs-discuss] ZFS directory and file quota

2009-11-18 Thread Jorgen Lundman
same with ZFS userquotas, and did not need any changes.

[zfs-discuss] Replacing log with SSD on Sol10 u8

2009-11-20 Thread Jorgen Lundman
ny thoughts? What would you experts do in this situation? We have to run Solaris 10 (long battle there, no support for Opensolaris from anyone in Japan). Can I delete the sucker using zdb? Thanks for any reply,

Re: [zfs-discuss] Replacing log with SSD on Sol10 u8

2009-11-25 Thread Jorgen Lundman
gh. Lund [*1] http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6574286 [*2] http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6739497

Re: [zfs-discuss] Replacing log with SSD on Sol10 u8

2009-11-26 Thread Jorgen Lundman
0 0 It does at least have a solution, even if it is rather unattractive. 12 servers, and has to be done at 2am means I will be testy for a while. Lund Jorgen Lundman wrote: Interesting. Unfortunately, I can not "zpool offline", nor "zpool detach", nor "zpo

Re: [zfs-discuss] rquota didnot show userquota (Solaris 10)

2009-11-26 Thread Jorgen Lundman
OTA No quota Why 'no quota'? Both systems are nearly fully patched. Any help is appreciated. Thanks in advance. Willi

Re: [zfs-discuss] quotas on zfs at solaris 10 update 9 (10/09)

2009-12-23 Thread Jorgen Lundman
things up a little faster.

Re: [zfs-discuss] opensolaris lightweight install

2010-01-06 Thread Jorgen Lundman
On my NAS I use Velitium: http://sourceforge.net/projects/velitium/ which goes down to about 70MB at the smallest. (2010/01/07 15:23), Frank Cusack wrote: been searching and searching ...

[zfs-discuss] ZFS panic on blade BL465c G1

2010-10-03 Thread Jorgen Lundman
Hello list, I got a c7000 with BL465c G1 blades to play with and have been trying to get some form of Solaris to work on it. However, this is the state: OpenSolaris 134: Installs with ZFS, but no BNX nic drivers. OpenIndiana 147: Panics on "zpool create" every time, even from console. Has no U

[zfs-discuss] Mirroring raidz ?

2011-01-12 Thread Jorgen Lundman
I have a server, with two external drive cages attached, on separate controllers: c0::dsk/c0t0d0 disk connected configured unknown c0::dsk/c0t1d0 disk connected configured unknown c0::dsk/c0t2d0 disk connected co

Re: [zfs-discuss] zil and root on the same SSD disk

2011-01-13 Thread Jorgen Lundman
Whenever I do a root pool, ie, configure a pool using the c?t?d?s0 notation, it will always complain about overlapping slices, since *s2 is the entire disk. This warning seems excessive, but "-f" will ignore it. As for ZIL, the first time I created a slice for it. This worked well, the second t

[zfs-discuss] x4500 performance tuning.

2008-07-23 Thread Jorgen Lundman
doubled... are there better values?) set ufs_ninode=259594 in /etc/system, and reboot. But it is costly to reboot based only on my guess. Do you have any other suggestions to explore? Will this help? Sincerely, Jorgen Lundman
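
The /etc/system change discussed here can be staged and reviewed before committing to a reboot. A minimal sketch — the scratch file path and comment are illustrative, the value is the one quoted in the mail:

```shell
# Stage the ufs_ninode tuning from the mail in a scratch copy first, so it can
# be reviewed before being moved over /etc/system and made live by a reboot.
SYSFILE=/tmp/system.example
printf '* ufs inode cache tuning (x4500 thread)\nset ufs_ninode=259594\n' > "$SYSFILE"
grep '^set ufs_ninode' "$SYSFILE"   # → set ufs_ninode=259594
```

On a live Solaris box the current value can be read back with `echo ufs_ninode/D | mdb -k` (assuming kernel-debugger access), which avoids guessing whether the reboot picked it up.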

Re: [zfs-discuss] x4500 performance tuning.

2008-07-23 Thread Jorgen Lundman
taking upwards of 7 seconds to complete. Lund

Re: [zfs-discuss] x4500 performance tuning.

2008-07-24 Thread Jorgen Lundman
and if the x4500's do lock up I'm a bit concerned about how they > handle hardware failures. > > thanks, > > Ross

Re: [zfs-discuss] x4500 performance tuning.

2008-07-24 Thread Jorgen Lundman
1038376 maxsize reached 993770 (Increased it by nearly x10 and it still gets a high 'reached'). Lund Jorgen Lundman wrote: > We are having slow performance with the UFS volumes on the x4500. They > are slow even on the local server. Which makes me think i

[zfs-discuss] Replacing the boot HDDs in x4500

2008-07-31 Thread Jorgen Lundman
filesystems if I were to simply drop in the two mirrored Sol 10 5/08 boot HDDs on the x4500 and reboot? I assume Sol10 5/08 zpool version would be newer, so in theory it would work. Comments?

Re: [zfs-discuss] Replacing the boot HDDs in x4500

2008-08-01 Thread Jorgen Lundman

Re: [zfs-discuss] Replacing the boot HDDs in x4500

2008-08-01 Thread Jorgen Lundman
s to be no way to resume a "half transferred" zfs send. So, rsyncing smaller bits. zfs send -i only works if you have a full copy already, which we can't get from above.

Re: [zfs-discuss] Replacing the boot HDDs in x4500

2008-08-01 Thread Jorgen Lundman
s/OS, are only ZFS version 1. I do not think zfs version 1 will read version 2. I see no script talking about converting a version 2 to a version 1.

[zfs-discuss] x4500 dead HDD, hung server, unable to boot.

2008-08-10 Thread Jorgen Lundman
he command for now, as it definitely hangs the server every time. Hard reset done again. Lund

Re: [zfs-discuss] x4500 dead HDD, hung server, unable to boot.

2008-08-10 Thread Jorgen Lundman
[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci11ab,[EMAIL PROTECTED]/[EMAIL PROTECTED],0 (sd30):" And I need to get the answer "40". The "hd" output additionally gives me "sdar" ? Lund

Re: [zfs-discuss] x4500 dead HDD, hung server, unable to boot.

2008-08-10 Thread Jorgen Lundman
> See http://www.sun.com/servers/x64/x4500/arch-wp.pdf page 21. > Ian Referring to Page 20? That does show the drive order, just like it does on the box, but not how to map them from the kernel message to drive slot number. Lund

Re: [zfs-discuss] x4500 dead HDD, hung server, unable to boot.

2008-08-10 Thread Jorgen Lundman
. I suspect we are one of the first to try x4500 here as well. Anyway, it has almost rebooted, so I need to go remount everything. Lund

Re: [zfs-discuss] x4500 dead HDD, hung server, unable to boot.

2008-08-10 Thread Jorgen Lundman
Jorgen Lundman wrote: > > Anyway, it has almost rebooted, so I need to go remount everything. > Not that it wants to stay up for longer than ~20 mins, then hangs. In that all IO hangs, including "nfsd". I thought this might have been related: http://sunsolve.sun.com

Re: [zfs-discuss] x4500 dead HDD, hung server, unable to boot.

2008-08-10 Thread Jorgen Lundman
0 mins or so), and we can only log a call with vendor, and if they feel like it, will push it to Sun. Although, we do have SunSolve logins, can we by-pass the middleman, and avoid the whole translation fiasco, and log directly with Sun? Lund

Re: [zfs-discuss] x4500 dead HDD, hung server, unable to boot.

2008-08-10 Thread Jorgen Lundman
l32+0x101()

Re: [zfs-discuss] x4500 dead HDD, hung server, unable to boot.

2008-08-11 Thread Jorgen Lundman
"zpool status". Going to get some sleep, and really hope it has been fixed. Thank you to everyone who helped. Lund Jorgen Lundman wrote: > > Jorgen Lundman wrote: >> Anyway, it has almost rebooted, so I need to go remount everything. >> > > Not that it wants t

[zfs-discuss] x4500 vs AVS ?

2008-09-03 Thread Jorgen Lundman
ere methods in AVS to handle fail-back? Since 02 has been used, it will have newer/modified files, and will need to replicate backwards until synchronised, before fail-back can occur. We did ask our vendor, but we were just told that AVS does not support x4500. Lund

Re: [zfs-discuss] x4500 vs AVS ?

2008-09-16 Thread Jorgen Lundman
ter. > >> Even for a mirror, the data is stale and >> it's removed from the active set. I thought you were talking about >> block parity run across columns... >> >> -- >> Darren

[zfs-discuss] Replacing HDD in x4500

2009-01-26 Thread Jorgen Lundman
y is rather frustrating. Lund

Re: [zfs-discuss] Replacing HDD in x4500

2009-01-27 Thread Jorgen Lundman
' and 'zfs > upgrade' to all my mirrors (3 3-way). I'd been having similar > troubles to yours in the past. > > My system is pretty puny next to yours, but it's been reliable now for > slightly over a month. > > > On Tue, Jan 27, 2009 at 12:19 AM, Jor

Re: [zfs-discuss] Replacing HDD in x4500

2009-01-27 Thread Jorgen Lundman
is "wait", since it almost behaves like it. Not sure why it would block "zpool", "zfs" and "df" commands as well though? Lund

Re: [zfs-discuss] Replacing HDD in x4500

2009-02-03 Thread Jorgen Lundman
I've been told we got a BugID: "3-way deadlock happens in ufs filesystem on zvol when writing ufs log" but I can not view the BugID yet (presumably due to my accounts weak credentials) Perhaps it isn't something we do wrong, that would be a nice change. Lund Jorgen

Re: [zfs-discuss] Introducing zilstat

2009-02-04 Thread Jorgen Lundman

Re: [zfs-discuss] Introducing zilstat

2009-02-04 Thread Jorgen Lundman
For the most part, the defaults work well. But you can experiment > with them and see if you can get better results. It came shipped with 16. And I'm sorry but 16 didn't cut it at all :) We set it at 1024 as it was the highest number I found via Google. Lund

[zfs-discuss] User quota design discussion..

2009-03-11 Thread Jorgen Lundman
wo sets. Advantages are that only small hooks are required in ZFS. The byte updates, and the blacklist with checks for being blacklisted. Disadvantages are that it is loss of precision, and possibly slower rescans? Sanity? But I do not really know the internals of ZFS, so I might be complet

Re: [zfs-discuss] User quota design discussion..

2009-03-12 Thread Jorgen Lundman
. This I did not know, but now that you point it out, this would be the right way to design it. So the advantage of requiring less ZFS integration is no longer the case. Lund

Re: [zfs-discuss] User quota design discussion..

2009-03-12 Thread Jorgen Lundman
ufs filesystem on zvol when writing ufs log

Re: [zfs-discuss] User quota design discussion..

2009-03-12 Thread Jorgen Lundman
, but consider a rescan to be the answer. We don't ZFS send very often as it is far too slow. Lund

Re: [zfs-discuss] User quota design discussion..

2009-03-14 Thread Jorgen Lundman
to support quotas for ZFS JL> send, but consider a rescan to be the answer. We don't ZFS send very JL> often as it is far too slow. Since build 105 it should be *MUCH* faster.

Re: [zfs-discuss] Zfs send speed. Was: User quota design discussion..

2009-04-09 Thread Jorgen Lundman
'ing. Since build 105 it should be *MUCH* faster.

Re: [zfs-discuss] Can the new consumer NAS devices run OpenSolaris?

2009-04-20 Thread Jorgen Lundman

[zfs-discuss] Zfs and b114 version

2009-05-17 Thread Jorgen Lundman
compiling osol compared to, say, NetBSD/FreeBSD, Linux etc ? (IRIX and its quickstarting??)

Re: [zfs-discuss] Zfs and b114 version

2009-05-18 Thread Jorgen Lundman
sp-...@cds-cds_smi I don't mind learning something new, but that's even faster! I will try that image and work on my kernel building projects a little later... Thanks!

Re: [zfs-discuss] Zfs and b114 version

2009-05-18 Thread Jorgen Lundman
r after all :) Lund Jorgen Lundman wrote: The website has not been updated yet to reflect its availability (thus it may not be "official" yet), but you can get SXCE b114 now from https://cds.sun.com/is-bin/INTERSHOP.enfinity/WFS/CDS-CDS_SMI-Site/en_US/-/USD/viewproductdetail-start?produc

Re: [zfs-discuss] Zfs and b114 version

2009-05-19 Thread Jorgen Lundman
from CD instead of using LiveUpdate Jorgen Lundman wrote: I used LUpdate to create a b114 BE on the spare X4540, and booted it, but alas, I get the following message on boot: SunOS Release 5.11 Version snv_114 64-bit Copyright 1983-2009 Sun Microsystems, Inc. All rights reserved. Us

Re: [zfs-discuss] Zfs and b114 version

2009-05-19 Thread Jorgen Lundman
I tried LUpdate 3 times with same result, burnt the ISO and installed the old fashioned way, and it boots fine. Jorgen Lundman wrote: Most annoying. If "su.static" really had been static I would be able to figure out what goes wrong. When I boot into miniroot/failsafe it

[zfs-discuss] ZFS userquota groupquota test

2009-05-20 Thread Jorgen Lundman
ng). I assume rquota is just not implemented, not a problem for us. perl cpan module Quota does not implement ZFS quotas. :)
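
The userquota feature under test here is driven entirely from the zfs command; a sketch of the basic operations, with an illustrative pool/filesystem and user name:

```shell
# Per-user quota basics (snv_114+); zpool1/home and alice are made-up names.
zfs set userquota@alice=1G zpool1/home   # cap what alice may consume
zfs get userquota@alice zpool1/home      # read the limit back
zfs userspace zpool1/home                # per-user table of userused@/userquota@
```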

Re: [zfs-discuss] ZFS userquota groupquota test

2009-05-20 Thread Jorgen Lundman
at confused the situation. Perhaps something to do with that "mount" doesn't think it is mounted with "quota" when local. I could try mountpoint=legacy and explicitly list rq when mounting maybe . But we don't need it to work, it was just different from legacy

Re: [zfs-discuss] ZFS userquota groupquota test

2009-05-20 Thread Jorgen Lundman
or similar? If not, I could potentially use zfs ioctls perhaps to write my own bulk import program? Large imports are rare, but I was just curious if there was a better way to issue large amounts of "zfs set" commands. Jorgen Lundman wrote: Matthew Ahrens wrote: Thanks for the
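
One hedged answer to the bulk-import question above, short of writing against the ioctls: generate the `zfs set` commands from a user/quota list and review them before piping to a shell. The file path, pool name, and users below are all illustrative:

```shell
# Dry-run bulk quota import: emit one "zfs set" per line of a user<TAB>quota
# list. Pipe the output to sh (on a real Solaris host) once it looks right.
QUOTAS=/tmp/quotas.tsv
printf 'alice\t1G\nbob\t512M\n' > "$QUOTAS"
awk -F'\t' '{ printf "zfs set userquota@%s=%s zpool1/home\n", $1, $2 }' "$QUOTAS"
# → zfs set userquota@alice=1G zpool1/home
# → zfs set userquota@bob=512M zpool1/home
```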

Re: [zfs-discuss] Zfs send speed. Was: User quota design discussion..

2009-05-21 Thread Jorgen Lundman
To finally close my quest. I tested "zfs send" in osol-b114 version: received 82.3GB stream in 1195 seconds (70.5MB/sec) Yeeaahh! That makes it completely usable! Just need to change our support contract to allow us to run b114 and we're set! :) Thanks, Lund Jorgen Lund
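
The 70.5MB/sec figure comes from a plain full send over the wire; as a sketch (snapshot, dataset, and host names are illustrative, not from the mail):

```shell
# Timing a full send the way the test above was run; all names are made up.
zfs snapshot zpool1/leroy@speedtest
time zfs send zpool1/leroy@speedtest | ssh backuphost zfs recv -F zpool2/leroy
```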

Re: [zfs-discuss] Zfs send speed. Was: User quota design discussion..

2009-05-22 Thread Jorgen Lundman
, 2009 at 10:17 PM, Jorgen Lundman wrote: To finally close my quest. I tested "zfs send" in osol-b114 version: received 82.3GB stream in 1195 seconds (70.5MB/sec) Yeeaahh! That makes it completely usable! Just need to change our support contract to allow us to run b114 and we'

[zfs-discuss] Replacing HDD with larger HDD..

2009-05-22 Thread Jorgen Lundman
lable And alas, "grow" is completely gone, and no amount of "import" would see it. Oh well.

Re: [zfs-discuss] Replacing HDD with larger HDD..

2009-05-22 Thread Jorgen Lundman
Rob Logan wrote: you meant to type zpool import -d /var/tmp grow Bah - of course, I can not just expect zpool to know what random directory to search. You Sir, are a genius. Works like a charm, and thank you. Lund
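
Reconstructed as a full round-trip, the trick looks like this (file and pool names are illustrative). `zpool import` only scans /dev/dsk by default, so pools on file-backed vdevs need `-d` pointed at their directory:

```shell
# Create, lose, and re-find a pool backed by a plain file.
mkfile 128m /var/tmp/grow-vdev
zpool create grow /var/tmp/grow-vdev
zpool export grow
zpool import grow               # finds nothing: only /dev/dsk is searched
zpool import -d /var/tmp grow   # succeeds: search the right directory
```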

Re: [zfs-discuss] Zfs send speed. Was: User quota design discussion..

2009-05-26 Thread Jorgen Lundman
, what is the size of the sending zfs? I thought replication speed depends on the size of the sending fs, too not only size of the snapshot being sent. Regards Dirk --On Freitag, Mai 22, 2009 19:19:34 +0900 Jorgen Lundman wrote: Sorry, yes. It is straight; # time zfs send zpool1/leroy_c

Re: [zfs-discuss] Zfs send speed. Was: User quota design discussion..

2009-05-27 Thread Jorgen Lundman
I changed to try zfs send on a UFS on zvolume as well: received 92.9GB stream in 2354 seconds (40.4MB/sec) Still fast enough to use. I have yet to get around to trying something considerably larger in size. Lund Jorgen Lundman wrote: So you recommend I also do speed test on larger

Re: [zfs-discuss] Can the new consumer NAS devices run OpenSolaris?

2009-05-28 Thread Jorgen Lundman
't re-flash it with osol, or eon, or freenas.

Re: [zfs-discuss] zfs on 32 bit?

2009-06-17 Thread Jorgen Lundman
is really good at. Lund

Re: [zfs-discuss] Best controller card for 8 SATA drives ?

2009-06-21 Thread Jorgen Lundman
l (SATA-II) but I have not personally tried it. Lund

Re: [zfs-discuss] PicoLCD Was: Best controller card for 8 SATA drives ?

2009-06-21 Thread Jorgen Lundman
whole load of ZFS data. Has someone already been down this road too?

Re: [zfs-discuss] ZFS - SWAP and lucreate..

2009-06-28 Thread Jorgen Lundman
That is, after lucreate, but before you "init 6" to reboot. Or indeed any time after, as long as you "swap -d", "swap -a" to make it notice the new size. (I believe you should set volsize and refreservation to the same value).
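
A sketch of that resize dance on a swap zvol; the pool name and size are illustrative, and the volsize/refreservation pairing follows the suggestion above:

```shell
# Grow the swap zvol and make swap(1M) notice the new size.
swap -d /dev/zvol/dsk/rpool/swap       # release the old device
zfs set volsize=4G rpool/swap          # new size (illustrative)
zfs set refreservation=4G rpool/swap   # keep the reservation in step
swap -a /dev/zvol/dsk/rpool/swap       # re-add at the new size
```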

Re: [zfs-discuss] Interposing on readdir and friends

2009-07-02 Thread Jorgen Lundman
der. However, I'm having a bit of trouble hacking this together (the current source doesn't compile in isolation on my S10 machine).

[zfs-discuss] Open Solaris version recommendation? b114, b117?

2009-07-02 Thread Jorgen Lundman
yet to experience any problems. But b117 is what 2010/02 version will be based on, so perhaps that is a better choice. Other versions worth considering? I know it's a bit vague, but perhaps there is a known panic in a certain version that I may not be aware of. Lund

Re: [zfs-discuss] how to discover disks?

2009-07-06 Thread Jorgen Lundman

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-13 Thread Jorgen Lundman
x4540 running svn117 # ./zfs-cache-test.ksh zpool1 zfs create zpool1/zfscachetest creating data file set 93000 files of 8192000 bytes0 under /zpool1/zfscachetest ... done1 zfs unmount zpool1/zfscachetest zfs mount zpool1/zfscachetest doing initial (unmount/mount) 'cpio -o . /dev/null' 4800024

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-14 Thread Jorgen Lundman

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-14 Thread Jorgen Lundman
h, nevermind, it looks like there's just a rogue 9 appeared in your output. It was just a standard run of 3,000 files.

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-14 Thread Jorgen Lundman
hear about systems which do not suffer from this bug. Bob -- Bob Friesenhahn bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/ GraphicsMagick Maintainer, http://www.GraphicsMagick.org/

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-14 Thread Jorgen Lundman
o -C 131072 -o > /dev/null' 48000256 blocks real 7m27.87s user 0m6.51s sys 1m20.28s Doing second 'cpio -C 131072 -o > /dev/null' 48000256 blocks real 7m25.34s user 0m6.63s sys 1m32.04s Feel free to clean up with 'zfs destroy zboot/zfscachetest

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-14 Thread Jorgen Lundman
rs, not x4500s configured for desktops :( They are cheap though! Nothing like being the Wal-Mart of Storage! That is how the pools were created as well. Admittedly it may be down to our Vendor again. Lund

[zfs-discuss] ZFS Mirror cloning

2009-07-14 Thread Jorgen Lundman
In fact, can I mount that disk to make changes to it before pulling out the disk? Most documentation on cloning uses "zfs send", which would be possible, but 4 minutes is hard to beat when your cluster is under heavy load. Lund

Re: [zfs-discuss] Solaris live CD that supports ZFS root mount for fs fixes

2009-07-16 Thread Jorgen Lundman
? Thanks, Matt

Re: [zfs-discuss] ZFS Mirror cloning

2009-07-23 Thread Jorgen Lundman
't export the "/" pool before pulling out the disk, either. Jorgen Lundman wrote: Hello list, Before we started changing to ZFS bootfs, we used DiskSuite mirrored ufs boot. Very often, if we needed to grow a cluster by another machine or two, we would simply clone a run

Re: [zfs-discuss] ZFS Mirror cloning

2009-07-23 Thread Jorgen Lundman
Jorgen Lundman wrote: However, "zpool detach" appears to mark the disk as blank, so nothing will find any pools (import, import -D etc). zdb -l will show labels, For kicks, I tried to demonstrate this does indeed happen, so I dd'ed the first 1024 1k blocks from the disk,
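
The label experiment reads as follows when spelled out (the device name is illustrative). ZFS keeps four copies of the vdev label, two at the front of the device and two at the end, which is why wiping only the first 1MB still leaves readable labels for zdb:

```shell
# Show labels, wipe the front of the disk, show what survives.
zdb -l /dev/rdsk/c1t1d0s0                                 # all four labels
dd if=/dev/zero of=/dev/rdsk/c1t1d0s0 bs=1k count=1024    # the "1024 1k blocks"
zdb -l /dev/rdsk/c1t1d0s0                                 # back-of-disk labels remain
```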

Re: [zfs-discuss] ZFS Mirror cloning

2009-07-24 Thread Jorgen Lundman
and 5097228. Ah of course, you have a valid point and mirrors can be used in much more complicated situations. Been reading your blog all day, while impatiently waiting for zfs-crypto.. Lund

Re: [zfs-discuss] ZFS Mirror cloning

2009-07-24 Thread Jorgen Lundman
zfs send speed fixes", like official Sol 10 10/08. (I am not sure, but zfs send sounds like you already need the 2nd server set up and running with IPs etc? ) Anyway, we have found a procedure now, so it is all possible. But it would have been nicer to be able to detach the disk "po

Re: [zfs-discuss] [n/zfs-discuss] Strange speeds with x4500, Solaris 10 10/08

2009-07-28 Thread Jorgen Lundman
y close regardless as to whether the application did or not? This I have not yet wrapped my head around. For example, I know rsync and tar do not use fdsync (but dovecot does) on close(), but does NFS make it fdsync anyway? Sorry for the giant email.

Re: [zfs-discuss] [n/zfs-discuss] Strange speeds with x4500, Solaris 10 10/08

2009-07-28 Thread Jorgen Lundman
ame for it, as I doubt it'll stay standing after the next earthquake. :) Lund Jorgen Lundman wrote: This thread started over in nfs-discuss, as it appeared to be an nfs problem initially. Or at the very least, interaction between nfs and zil. Just summarising speeds we have found

Re: [zfs-discuss] [n/zfs-discuss] Strange speeds with x4500, Solaris 10 10/08

2009-07-29 Thread Jorgen Lundman
't actually find any with Solaris drivers. Peculiar. Lund

Re: [zfs-discuss] [n/zfs-discuss] Strange speeds with x4500, Solaris 10 10/08

2009-07-30 Thread Jorgen Lundman
d ZIL logs can live together and put /var in the data pool. That way we would not need to rebuild the data-pool and all the work that comes with that. Shame I can't zpool replace to a smaller disk (500GB HDD to 32GB SSD) though, I will have to lucreate and reboot one time. Lund

[zfs-discuss] Lundman home NAS

2009-07-31 Thread Jorgen Lundman
to start around 80,000. Anyway, sure has been fun. Lund

Re: [zfs-discuss] Lundman home NAS

2009-07-31 Thread Jorgen Lundman
i, Jul 31, 2009 at 5:22 AM, Jorgen Lundman wrote: I have assembled my home RAID finally, and I think it looks rather good. http://www.lundman.net/gallery/v/lraid5/p1150547.jpg.html Feedback is welcome. I have yet to do proper speed tests, I will do so in the coming week should people be intereste

Re: [zfs-discuss] Lundman home NAS

2009-08-01 Thread Jorgen Lundman
Some preliminary speed tests, not too bad for a pci32 card. http://lundman.net/wiki/index.php/Lraid5_iozone Jorgen Lundman wrote: Finding a SATA card that would work with Solaris, and be hot-swap, and more than 4 ports, sure took a while. Oh and be reasonably priced ;) Double the price of

Re: [zfs-discuss] Lundman home NAS

2009-08-01 Thread Jorgen Lundman
en/ GraphicsMagick Maintainer,http://www.GraphicsMagick.org/ ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] Lundman home NAS

2009-08-01 Thread Jorgen Lundman
. ;) Jorgen Lundman wrote: I was following Toms Hardware on how they test NAS units. I have 2GB memory, so I will re-run the test at 4, if I figure out which option that is. I used Excel for the graphs in this case, gnuplot did not want to work. (Nor did Excel mind you) Bob Friesenhahn wrote: On

Re: [zfs-discuss] Lundman home NAS

2009-08-02 Thread Jorgen Lundman
umb did not seem to enable it either). Jorgen Lundman wrote: Ok I have redone the initial tests as 4G instead. Graphs are on the same place. http://lundman.net/wiki/index.php/Lraid5_iozone I also mounted it with nfsv3 and mounted it for more iozone. Alas, I started with 100mbit, so it has

[zfs-discuss] x4540 dead HDD replacement, remains "configured".

2009-08-05 Thread Jorgen Lundman
:dsk/c1t5d0 disk connected configured failed I am fairly certain that if I reboot, it will all come back ok again. But I would like to believe that I should be able to replace a disk without rebooting on an X4540. Any other commands I should try? Lund

Re: [zfs-discuss] Lundman home NAS

2009-08-05 Thread Jorgen Lundman
. I never thought about using it with a motherboard inside. Could you provide a complete parts list? What sort of temperatures at the chip, chipset, and drives did you find? Thanks!

Re: [zfs-discuss] x4540 dead HDD replacement, remains "configured".

2009-08-05 Thread Jorgen Lundman
...@6,0:a,raw Perhaps because it was booted with the dead disk in place, it never configured the entire "sd5" mpt driver. Why the other hard-disks work I don't know. I suspect the only way to fix this, is to reboot again. Lund Jorgen Lundman wrote: x4540 snv_117 We lost a HDD

Re: [zfs-discuss] x4540 dead HDD replacement, remains "configured".

2009-08-06 Thread Jorgen Lundman
s you've taken each time? I appreciate you're probably more concerned with getting an answer to your question, but if ZFS needs a reboot to cope with failures on even an x4540, that's an absolute deal breaker for everything we want to do with ZFS. Ross

[zfs-discuss] libzfs API: sharenfs, sharesmb, shareiscsi, $custom ?

2009-08-16 Thread Jorgen Lundman
but I was under the impression that the API is flexible. The ultimate goal is to move away from static paths listed in the config file. Lund

Re: [zfs-discuss] libzfs API: sharenfs, sharesmb, shareiscsi, $custom ?

2009-08-17 Thread Jorgen Lundman
e, since I would rather not system("zfs") hack it. Lund Ross wrote: Hi Jorgen, Does that software work to stream media to an xbox 360? If so could I have a play with it? It sounds ideal for my home server. cheers, Ross

Re: [zfs-discuss] libzfs API: sharenfs, sharesmb, shareiscsi, $custom ?

2009-08-17 Thread Jorgen Lundman
LL, zfs); if (spawn) lion_set_handler(spawn, root_zfs_handler); # zfs set net.lundman:sharellink=on zpool1/media # ./llink -d -v 32 ./llink - Jorgen Lundman v2.2.1 lund...@shinken.interq.or.jp build 1451 (Tue Aug 18 14:02:44 2009) (libdvdnav). : looking for ZFS filesystems

Re: [zfs-discuss] x4540 dead HDD replacement, remains "configured".

2009-08-19 Thread Jorgen Lundman
INE 0 0 0 c5t4d0 ONLINE 0 0 0 c5t7d0 ONLINE 0 0 0

Re: [zfs-discuss] Ssd for zil on a dell 2950

2009-08-19 Thread Jorgen Lundman
you as well. Only issue with using the third-party parts is that the involved support organizations for the software/hardware will make it very clear that such a configuration is quite unsupported. That said, we've had pretty good luck with them. -Greg
