turning our 12x x4540, and calling NetApp. I
would rather not (more work for me).
I understand Sun is probably experiencing some internal turmoil at the moment,
but it has been rather frustrating for us.
Lund
"future releases" of Solaris.
Thanks
Lund
--
Jorgen Lundman |
Unix Administrator | +81 (0)3 -5456-2687 ext 1017 (work)
Shibuya-ku, Tokyo| +81 (0)90-5578-8500 (cell)
Japan| +81 (0)3 -3375-1767 (home)
bootable
Solaris. Very flexible, and you can put the admin GUIs on it, and so on.
https://sourceforge.net/projects/embeddedsolaris/
Lund
Any known issues for the new ZFS on solaris 10 update 8?
Or is it still wiser to wait before doing a zpool upgrade, because older ABEs
can no longer be accessed afterwards?
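For reference, a rough way to check what a zpool upgrade would involve before
committing to it (the pool name is only an example):
# zpool upgrade -v
# zpool get version zpool1
'zpool upgrade' with no arguments lists pools still on an older on-disk
version; only once the old ABEs are no longer needed would you run
'zpool upgrade zpool1'.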
...@1029 54.0M local
Any suggestions would be most welcome,
Lund
aves space, that is profit to us)
Is the space saved with dedup charged in the same manner? I would expect so,
but I figured some of you would just know. I will check when b128 is out.
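A rough sketch of how one might inspect it once dedup is enabled on b128+
(pool and dataset names are only examples, untested here):
# zpool get dedupratio zpool1
# zpool list zpool1
# zfs get used,referenced zpool1/somefs
Pool-level ALLOC should reflect post-dedup allocation; the per-dataset 'used'
figure is where the charging question comes in.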
I don't suppose I can change the model? :)
Lund
same with ZFS userquotas, and did not need any changes.
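For anyone following along, the ZFS userquota usage looks roughly like this
(user and dataset names are examples):
# zfs set userquota@alice=10G zpool1/home
# zfs get userquota@alice zpool1/home
# zfs userspace zpool1/home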
Any thoughts? What would you
experts do in this situation? We have to run Solaris 10 (long battle there;
no support for OpenSolaris from anyone in Japan).
Can I delete the sucker using zdb?
Thanks for any reply,
gh.
Lund
[*1]
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6574286
[*2]
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6739497
0 0
It does at least have a solution, even if it is rather unattractive. 12 servers,
and it has to be done at 2am, which means I will be testy for a while.
Lund
Jorgen Lundman wrote:
Interesting. Unfortunately, I can not "zpool offline", nor "zpool
detach", nor "zpo
OTA No quota
Why 'no quota'?
Both systems are nearly fully patched.
Any help is appreciated. Thanks in advance.
Willi
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
things up a little faster.
On my NAS I use Velitium: http://sourceforge.net/projects/velitium/ which goes
down to about 70MB at the smallest.
(2010/01/07 15:23), Frank Cusack wrote:
been searching and searching ...
Hello list,
I got a c7000 with BL465c G1 blades to play with and have been trying to get
some form of Solaris to work on it.
However, this is the state:
OpenSolaris 134: Installs with ZFS, but no BNX nic drivers.
OpenIndiana 147: Panics on "zpool create" every time, even from console. Has no
U
I have a server, with two external drive cages attached, on separate
controllers:
c0::dsk/c0t0d0 disk connected configured unknown
c0::dsk/c0t1d0 disk connected configured unknown
c0::dsk/c0t2d0 disk connected co
Whenever I do a root pool, i.e., configure a pool using the c?t?d?s0 notation, it
will always complain about overlapping slices, since *s2 is the entire disk.
This warning seems excessive, but "-f" will ignore it.
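A minimal sketch of the case in question (disk name is an example):
# zpool create -f rpool c0t0d0s0
Without -f it refuses, because c0t0d0s2 overlaps the s0 slice.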
As for ZIL, the first time I created a slice for it. This worked well, the
second t
doubled... are there better values?)
set ufs_ninode=259594
in /etc/system, and reboot. But it is costly to reboot based only on my
guess. Do you have any other suggestions to explore? Will this help?
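One way to peek at the live values before deciding whether a reboot is worth
it (a sketch, assuming mdb is available; ncsize shown only for comparison):
# echo ufs_ninode/D | mdb -k
# echo ncsize/D | mdb -k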
Sincerely,
Jorgen Lundman
taking upwards of 7 seconds to complete.
Lund
and if the x4500's do lock up I'm a bit concerned about how they
> handle hardware failures.
>
> thanks,
>
> Ross
1038376
maxsize reached 993770
(Increased it by nearly x10 and it still gets a high 'reached').
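The counters above can be pulled with kstat, if that helps anyone reproduce
this (my guess is that they come from the UFS inode cache kstat):
# kstat -n inode_cache | egrep 'maxsize|size'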
Lund
Jorgen Lundman wrote:
> We are having slow performance with the UFS volumes on the x4500. They
> are slow even on the local server. Which makes me think i
filesystems if I were
to simply drop in the two mirrored Sol 10 5/08 boot HDDs on the x4500
and reboot? I assume Sol10 5/08 zpool version would be newer, so in
theory it would work.
Comments?
s to be no way to resume a "half
transferred" zfs send. So, rsyncing smaller bits.
zfs send -i only works if you have a full copy already, which we can't
get from above.
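For completeness, the usual incremental pattern once a full copy exists
(dataset names and hosts are examples):
# zfs snapshot zpool1/fs@full
# zfs send zpool1/fs@full | ssh backuphost zfs receive -F zpool2/fs
# zfs snapshot zpool1/fs@incr1
# zfs send -i @full zpool1/fs@incr1 | ssh backuphost zfs receive zpool2/fs
If the initial full stream is interrupted there is nothing to resume; it
starts over, which is the whole problem.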
s/OS, are only ZFS
version 1. I do not think zfs version 1 will read version 2. I see no
script talking about converting a version 2 to a version 1.
he command for now, as it definitely
hangs the server every time. Hard reset done again.
Lund
[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci11ab,[EMAIL
PROTECTED]/[EMAIL PROTECTED],0 (sd30):"
And I need to get the answer "40". The "hd" output additionally gives me
"sdar" ?
Lund
> See http://www.sun.com/servers/x64/x4500/arch-wp.pdf page 21.
> Ian
Referring to Page 20? That does show the drive order, just like it does
on the box, but not how to map them from the kernel message to drive
slot number.
Lund
.
I suspect we are one of the first to try the x4500 here as well.
Anyway, it has almost rebooted, so I need to go remount everything.
Lund
Jorgen Lundman wrote:
>
> Anyway, it has almost rebooted, so I need to go remount everything.
>
Not that it wants to stay up for longer than ~20 mins; then it hangs, in
the sense that all IO hangs, including "nfsd".
I thought this might have been related:
http://sunsolve.sun.com
0 mins or so), and we can only log a call
with the vendor, and if they feel like it, they will push it to Sun. Although
we do have SunSolve logins; can we bypass the middleman, avoid the
whole translation fiasco, and log directly with Sun?
Lund
l32+0x101()
"zpool status".
Going to get some sleep, and really hope it has been fixed. Thank you to
everyone who helped.
Lund
Jorgen Lundman wrote:
>
> Jorgen Lundman wrote:
>> Anyway, it has almost rebooted, so I need to go remount everything.
>>
>
> Not that it wants t
Are there methods in AVS to handle fail-back? Since 02 has
been used, it will have newer/modified files, and will need to replicate
backwards until synchronised, before fail-back can occur.
We did ask our vendor, but we were just told that AVS does not support
x4500.
Lund
ter.
>
>> Even for a mirror, the data is stale and
>> it's removed from the active set. I thought you were talking about
>> block parity run across columns...
>>
>> --
>> Darren
y is rather frustrating.
Lund
' and 'zfs
> upgrade' to all my mirrors (3 3-way). I'd been having similar
> troubles to yours in the past.
>
> My system is pretty puny next to yours, but it's been reliable now for
> slightly over a month.
>
>
> On Tue, Jan 27, 2009 at 12:19 AM, Jor
is "wait", since it almost behaves
like it. Not sure why it would block "zpool", "zfs" and "df" commands as
well though?
Lund
I've been told we got a BugID:
"3-way deadlock happens in ufs filesystem on zvol when writing ufs log"
but I can not view the BugID yet (presumably due to my account's weak
credentials)
Perhaps it isn't something we do wrong, that would be a nice change.
Lund
Jorgen
> For the most part, the defaults work well. But you can experiment
> with them and see if you can get better results.
It came shipped with 16. And I'm sorry but 16 didn't cut it at all :) We
set it at 1024 as it was the highest number I found via Google.
Lund
wo sets.
Advantages are that only small hooks are required in ZFS. The byte
updates, and the blacklist with checks for being blacklisted.
Disadvantages are that it is a loss of precision, and possibly slower
rescans? Sanity?
But I do not really know the internals of ZFS, so I might be complet
.
This I did not know, but now that you point it out, this would be the
right way to design it. So the advantage of requiring less ZFS
integration is no longer the case.
Lund
ufs filesystem on zvol when writing ufs log
, but consider a rescan to be the answer. We don't ZFS send very
often as it is far too slow.
Lund
to support quotas for ZFS
JL> send, but consider a rescan to be the answer. We don't ZFS send very
JL> often as it is far too slow.
Since build 105 it should be *MUCH* faster.
'ing.
Since build 105 it should be *MUCH* faster.
compiling osol compared to,
say, NetBSD/FreeBSD, Linux etc ? (IRIX and its quickstarting??)
sp-...@cds-cds_smi
I don't mind learning something new, but that's even faster! I will try
that image and work on my kernel building projects a little later...
Thanks!
r after all :)
Lund
Jorgen Lundman wrote:
The website has not been updated yet to reflect its availability (thus
it may not be "official" yet), but you can get SXCE b114 now from
https://cds.sun.com/is-bin/INTERSHOP.enfinity/WFS/CDS-CDS_SMI-Site/en_US/-/USD/viewproductdetail-start?produc
from CD instead of using
Live Upgrade
Jorgen Lundman wrote:
I used Live Upgrade to create a b114 BE on the spare X4540, and booted it,
but alas, I get the following message on boot:
SunOS Release 5.11 Version snv_114 64-bit
Copyright 1983-2009 Sun Microsystems, Inc. All rights reserved.
Us
I tried Live Upgrade 3 times with the same result, burnt the ISO and installed
the old fashioned way, and it boots fine.
Jorgen Lundman wrote:
Most annoying. If "su.static" really had been static I would be able to
figure out what goes wrong.
When I boot into miniroot/failsafe it
ng).
I assume rquota is just not implemented; not a problem for us.
The Perl CPAN module Quota does not implement ZFS quotas. :)
at confused the situation. Perhaps something to do with the fact that
"mount" doesn't think it is mounted with "quota" when local.
I could try mountpoint=legacy and explicitly list rq when mounting, maybe.
But we don't need it to work, it was just different from legacy
or similar?
If not, I could potentially use zfs ioctls perhaps to write my own bulk
import program? Large imports are rare, but I was just curious if there
was a better way to issue large amounts of "zfs set" commands.
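For reference, the baseline here is just a shell loop (dataset layout and
property are examples only):
# zfs list -H -o name -r zpool1/home | while read fs; do
>   zfs set quota=10G "$fs"
> done
It works, but it is one fork of /usr/sbin/zfs per dataset, which is the slow
part.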
Jorgen Lundman wrote:
Matthew Ahrens wrote:
Thanks for the
To finally close my quest. I tested "zfs send" in osol-b114 version:
received 82.3GB stream in 1195 seconds (70.5MB/sec)
Yeeaahh!
That makes it completely usable! Just need to change our support
contract to allow us to run b114 and we're set! :)
Thanks,
Lund
lable
And alas, "grow" is completely gone, and no amount of "import" would see
it. Oh well.
Rob Logan wrote:
you meant to type
zpool import -d /var/tmp grow
Bah - of course, I can not just expect zpool to know what random
directory to search.
You Sir, are a genius.
Works like a charm, and thank you.
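For the archives, the whole round-trip with file-backed vdevs looks like this
(sizes and paths are examples):
# mkfile 128m /var/tmp/disk1
# zpool create grow /var/tmp/disk1
# zpool export grow
# zpool import -d /var/tmp grow
'zpool import' only scans /dev/dsk by default, hence the -d.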
Lund
,
what is the size of the sending zfs?
I thought replication speed depends on the size of the sending fs too,
not only on the size of the snapshot being sent.
Regards
Dirk
--On Freitag, Mai 22, 2009 19:19:34 +0900 Jorgen Lundman
wrote:
Sorry, yes. It is straight;
# time zfs send zpool1/leroy_c
I changed to try zfs send on a UFS on zvolume as well:
received 92.9GB stream in 2354 seconds (40.4MB/sec)
Still fast enough to use. I have yet to get around to trying something
considerably larger in size.
Lund
Jorgen Lundman wrote:
So you recommend I also do speed test on larger
't re-flash it with osol, or eon, or freenas.
is really good at.
Lund
l (SATA-II) but I have not
personally tried it.
Lund
whole
load of ZFS data. Has someone already been down this road too?
That is, after lucreate, but before you "init 6" to reboot.
Or indeed any time after, as long as you "swap -d", "swap -a" to make it
notice the new size.
(I believe you should set volsize and refreservation to the same value).
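Roughly, assuming the conventional rpool/swap zvol (adjust names and sizes):
# zfs set volsize=4G rpool/swap
# zfs set refreservation=4G rpool/swap
# swap -d /dev/zvol/dsk/rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap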
der. However, I'm having a bit of trouble hacking this
together (the current source doesn't compile in isolation on my S10
machine).
yet to experience any
problems. But b117 is what 2010/02 version will be based on, so perhaps
that is a better choice. Other versions worth considering?
I know it's a bit vague, but perhaps there is a known panic in a certain
version that I may not be aware of.
Lund
x4540 running snv_117
# ./zfs-cache-test.ksh zpool1
zfs create zpool1/zfscachetest
creating data file set 93000 files of 8192000 bytes0 under
/zpool1/zfscachetest ...
done1
zfs unmount zpool1/zfscachetest
zfs mount zpool1/zfscachetest
doing initial (unmount/mount) 'cpio -o > /dev/null'
4800024
h, nevermind, it looks like there's just a rogue 9 appeared in your output.
It was just a standard run of 3,000 files.
hear about systems which do not suffer from this bug.
Bob
o -C 131072 -o > /dev/null'
48000256 blocks
real    7m27.87s
user    0m6.51s
sys     1m20.28s
Doing second 'cpio -C 131072 -o > /dev/null'
48000256 blocks
real    7m25.34s
user    0m6.63s
sys     1m32.04s
Feel free to clean up with 'zfs destroy zboot/zfscachetest
rs, not x4500s configured for
desktops :( They are cheap though! Nothing like being the Wal-Mart of Storage!
That is how the pools were created as well. Admittedly it may be down to
our Vendor again.
Lund
In fact, can I mount that disk to make changes to it before
pulling out the disk?
Most documentation on cloning uses "zfs send", which would be possible,
but 4 minutes is hard to beat when your cluster is under heavy load.
Lund
?
Thanks,
Matt
't export the "/" pool
before pulling out the disk, either.
Jorgen Lundman wrote:
Hello list,
Before we started changing to ZFS bootfs, we used DiskSuite mirrored ufs
boot.
Very often, if we needed to grow a cluster by another machine or two, we
would simply clone a run
Jorgen Lundman wrote:
However, "zpool detach" appears to mark the disk as blank, so nothing
will find any pools (import, import -D etc). zdb -l will show labels,
For kicks, I tried to demonstrate this does indeed happen, so I dd'ed
the first 1024 1k blocks from the disk,
and 5097228.
Ah, of course, you have a valid point, and mirrors can be used in much
more complicated situations.
Been reading your blog all day, while impatiently waiting for zfs-crypto..
Lund
zfs send speed fixes", like official Sol 10
10/08. (I am not sure, but zfs send sounds like you already need the
2nd server set up and running with IPs etc? )
Anyway, we have found a procedure now, so it is all possible. But it
would have been nicer to be able to detach the disk "po
y close regardless of whether the application did or not?
This I have not yet wrapped my head around.
For example, I know rsync and tar do not use fdsync (but dovecot does)
on its close(), but does NFS make it fdsync anyway?
Sorry for the giant email.
ame for it, as I doubt it'll
stay standing after the next earthquake. :)
Lund
Jorgen Lundman wrote:
This thread started over in nfs-discuss, as it appeared to be an nfs
problem initially. Or at the very least, interaction between nfs and zil.
Just summarising speeds we have found
't actually find any with Solaris drivers.
Peculiar.
Lund
d ZIL logs can live
together and put /var in the data pool. That way we would not need to
rebuild the data-pool and all the work that comes with that.
Shame I can't zpool replace to a smaller disk (500GB HDD to 32GB SSD)
though, I will have to lucreate and reboot one time.
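Adding the SSD slice as a separate log device to the existing data pool
should just be (device name is an example, untested by me):
# zpool add zpool1 log c5t0d0s0
# zpool status zpool1
The slice then shows up under a separate 'logs' section in zpool status.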
Lund
to start around 80,000.
Anyway, sure has been fun.
Lund
i, Jul 31, 2009 at 5:22 AM, Jorgen Lundman wrote:
I have assembled my home RAID finally, and I think it looks rather good.
http://www.lundman.net/gallery/v/lraid5/p1150547.jpg.html
Feedback is welcome.
I have yet to do proper speed tests, I will do so in the coming week should
people be intereste
Some preliminary speed tests, not too bad for a pci32 card.
http://lundman.net/wiki/index.php/Lraid5_iozone
Jorgen Lundman wrote:
Finding a SATA card that would work with Solaris, and be hot-swap, and
more than 4 ports, sure took a while. Oh and be reasonably priced ;)
Double the price of
. ;)
Jorgen Lundman wrote:
I was following Toms Hardware on how they test NAS units. I have 2GB
memory, so I will re-run the test at 4, if I figure out which option
that is.
I used Excel for the graphs in this case, gnuplot did not want to work.
(Nor did Excel mind you)
Bob Friesenhahn wrote:
On
umb
did not seem to enable it either).
Jorgen Lundman wrote:
Ok I have redone the initial tests as 4G instead. Graphs are on the same
place.
http://lundman.net/wiki/index.php/Lraid5_iozone
I also mounted it with nfsv3 and mounted it for more iozone. Alas, I
started with 100mbit, so it has
:dsk/c1t5d0 disk connected configured failed
I am fairly certain that if I reboot, it will all come back ok again.
But I would like to believe that I should be able to replace a disk
without rebooting on a X4540.
Any other commands I should try?
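For reference, the cfgadm hot-swap sequence I would expect to work, although
it clearly is not behaving here (attachment point is an example; check
'cfgadm -al' for the real one):
# cfgadm -c unconfigure sata1/5
(swap the drive)
# cfgadm -c configure sata1/5
# zpool replace zpool1 c1t5d0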
Lund
. I never thought
about using it with a motherboard inside.
Could you provide a complete parts list?
What sort of temperatures at the chip, chipset, and drives did you find?
Thanks!
...@6,0:a,raw
Perhaps because it was booted with the dead disk in place, it never
configured the entire "sd5" mpt driver. Why the other hard-disks work I
don't know.
I suspect the only way to fix this, is to reboot again.
Lund
Jorgen Lundman wrote:
x4540 snv_117
We lost a HDD
s
you've taken each time?
I appreciate you're probably more concerned with getting an answer to your
question, but if ZFS needs a reboot to cope with failures on even an x4540,
that's an absolute deal breaker for everything we want to do with ZFS.
Ross
but I was under the impression that the API is
flexible. The ultimate goal is to move away from static paths listed in
the config file.
Lund
e, since I would rather not system("zfs") hack it.
Lund
Ross wrote:
Hi Jorgen,
Does that software work to stream media to an xbox 360? If so could I have a
play with it? It sounds ideal for my home server.
cheers,
Ross
LL, zfs);
if (spawn) lion_set_handler(spawn, root_zfs_handler);
# zfs set net.lundman:sharellink=on zpool1/media
# ./llink -d -v 32
./llink - Jorgen Lundman v2.2.1 lund...@shinken.interq.or.jp build 1451
(Tue Aug 18 14:02:44 2009) (libdvdnav).
: looking for ZFS filesystems
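Reading the property back from the shell side, for anyone curious (the same
works for any user property):
# zfs get -H -o value net.lundman:sharellink zpool1/media
# zfs get -r -s local net.lundman:sharellink zpool1
The second form lists every dataset that sets it locally.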
INE 0 0 0
c5t4d0 ONLINE 0 0 0
c5t7d0 ONLINE 0 0 0
you as well. Only issue
with using the third-party parts is that the involved support
organizations for the software/hardware will make it very clear that
such a configuration is quite unsupported. That said, we've had pretty
good luck with them.
-Greg