able to throw some light on what might be going on under the hood?
thanks! Andy.
zpool import rpool newrpool,
would that work?
Cheers
Andy
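For reference, renaming a pool on import is normally just an export followed by an
import under the new name - a rough sketch, with the usual caveat that a root pool
also needs its boot configuration (bootfs property, boot menu entries) brought into
line and is best renamed while booted from other media:

# zpool export rpool
# zpool import rpool newrpool
# zpool list

The datasets underneath keep their own names; only the pool name changes.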
help some folks out there.
Cheers!
Andy
Hi Bart,
yep, I got Bruno to run a kernel profile lockstat...
it does look like the mpt issue.
Andy
---
Count indv cuml rcnt     nsec Hottest CPU+PIL  Caller
 2861   7%  55
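For anyone wanting to gather the same data, the usual kernel-profiling invocation
is something along these lines (the sample window is just an example):

# lockstat -kIW -D 20 sleep 30

-I samples the profiling interrupt, -k coalesces PCs within kernel functions, -W
aggregates by caller regardless of lock, and -D 20 limits output to the top 20 entries.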
the fact that my cwd was in my current
filesystem, so it couldn't be unmounted, and therefore
couldn't be removed! Phew!! Nice to learn something and only get singed
eyebrows instead of losing a leg!
HTH, Andy
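The classic symptom, in case anyone else trips over it (dataset name made up, and
the exact error text may vary slightly by build):

# cd /tank/scratch
# zfs destroy tank/scratch
cannot unmount '/tank/scratch': Device busy
# cd /
# zfs destroy tank/scratch

Step out of the mountpoint first and the destroy goes through.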
CIFS File System client support (Kernel)' - is this the
same package as SUNWsmbskr?
Thanks in advance for any suggestions,
Andy
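One quick way to check is to ask the SVR4 packaging tools directly - a sketch:

# pkginfo | grep -i cifs
# pkginfo -l SUNWsmbskr

pkginfo -l prints the NAME and DESC fields, so it's easy to see whether that
description really belongs to SUNWsmbskr or to a different SUNWsmb* package.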
& running for nearly a
year with no problems to date - there are two other RAIDz1 pools on this
server but these are working fine.
Andy
-
Andy Thomas,
Time Domain Systems
Tel: +44 (0)7866 556626
Fax: +44 (0)20 8372 2582
http://www.time-domain.co.uk
On Tue, 14 Feb 2012, Richard Elling wrote:
Hi Andy
On Feb 14, 2012, at 10:37 AM, andy thomas wrote:
On one of our servers, we have a RAIDz1 ZFS pool called 'maths2' consisting of
7 x 300 Gb disks which in turn contains a single ZFS filesystem called 'home'.
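For context, a pool laid out like that would have been created along these lines
(device names invented for illustration):

# zpool create maths2 raidz c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0
# zfs create maths2/home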
Yest
is a Netra 150 dating from 1997 - still going
strong, crammed with 12 x 300 Gb disks and running Solaris 9. I think one
ought to have more faith in Sun hardware.
Andy
On Thu, 16 Feb 2012, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of andy thomas
One of my most vital servers is a Netra 150 dating from 1997 - still going
strong, crammed with 12 x 300 Gb disks and running Solaris 9
since the first snapshot, week01, and therefore includes those in
week02?
To roll back to week03, it's necessary to delete snapshots week04 and
week05 first, but what if week01 and week02 have also been deleted - will
the rollback still work, or is it necessary to keep the earlier snapshots?
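For what it's worth, rollback only cares about snapshots newer than the target, so
week01 and week02 being gone doesn't matter. The -r flag destroys the intervening
ones for you - a sketch, assuming the filesystem is called tank/home:

# zfs rollback -r tank/home@week03

Without -r the command refuses to run while week04 and week05 still exist.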
spares, but the other day someone here was talking of seeing hundreds of
disks in a single pool! So what is the current advice for ZFS in Solaris
and FreeBSD?
Andy
On Thu, 11 Oct 2012, Freddie Cash wrote:
On Thu, Oct 11, 2012 at 2:47 PM, andy thomas wrote:
According to a Sun document called something like 'ZFS best practice' I read
some time ago, best practice was to use the entire disk for ZFS and not to
partition or slice it in any way.
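In practice that just means handing zpool whole device names rather than slices,
e.g. (illustrative device names):

# zpool create tank raidz c0t1d0 c0t2d0 c0t3d0           (whole disks - recommended)
# zpool create tank raidz c0t1d0s0 c0t2d0s0 c0t3d0s0     (slices - what it advises against)

With whole disks ZFS puts an EFI label on the drive and can safely enable its write cache.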
I have an X4500 Thumper box with 48 x 500 GB drives set up in a pool and split
into raidz2 sets of 8-10 drives within the single pool.
I had a failed disk which I unconfigured with cfgadm and replaced, no problem, but it
wasn't recognised as a Sun drive in format and, unbeknown to me, someone else
logg
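For the archives, the usual swap sequence on those boxes goes roughly like this
(pool, device and attachment-point names are made up):

# zpool offline tank c5t3d0
# cfgadm -c unconfigure sata1/3
  ... pull the old drive, insert the new one ...
# cfgadm -c configure sata1/3
# zpool replace tank c5t3d0
# zpool status tank

and then watch the resilver complete before touching anything else.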
FS (OpenSolaris or Indiana or something ZFS
compatible) and use the ACLs in this operating system, will it work in
the manner I'm anticipating? i.e. files inherit ACLs no matter whether they
are created in the folder or copied to the folder - the ACL is the same.
Am
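As far as I understand it, inheritance is driven by the inheritance flags on the
directory's ACEs plus the dataset's aclinherit property, so the usual recipe looks
something like this (dataset and group names are only examples):

# zfs set aclinherit=passthrough tank/share
# chmod A+group:staff:rwxpdDaARWcCos:fd:allow /tank/share
# ls -dV /tank/share

The 'fd' flags (file_inherit/dir_inherit) are what make newly created files pick up
the ACE; a plain copy creates a new file at the destination, so it inherits too,
unless the copying tool explicitly rewrites the ACL afterwards.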
, candida and andy.
I've created both users, andy and candida;
I've created a 'finance' group;
I've added andy and candida to the 'finance' group;
I've created the /srv/Finance directory;
I've set: chown candida:finance /srv/Finance
I've then done: /bin/chmod g+s
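Presumably that last command took the directory as its argument, i.e. something
like the sequence below (the 2770 mode is just one sensible choice, not gospel):

# groupadd finance
# usermod -G finance andy
# usermod -G finance candida
# mkdir -p /srv/Finance
# chown candida:finance /srv/Finance
# chmod 2770 /srv/Finance

The setgid bit on the directory makes new files land in the 'finance' group.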
t are there any
other things I should take into consideration? It's not a major problem as
the system is intended for storage and users are not supposed to go in and
untar huge tarfiles on it as it's not a fast system ;-)
Andy
----
Andy Thomas,
Time Domain Syste
On Sat, 13 Aug 2011, Bob Friesenhahn wrote:
On Sat, 13 Aug 2011, andy thomas wrote:
However, one of our users recently put a 35 Gb tar.gz file on this server
and uncompressed it to a 215 Gb tar file. But when he tried to untar it,
after about 43 Gb had been extracted we noticed the disk usage
On Sat, 13 Aug 2011, Joerg Schilling wrote:
andy thomas wrote:
What 'tar' program were you using? Make sure to also try using the
Solaris-provided tar rather than something like GNU tar.
I was using GNU tar actually as the original archive was created on a
Linux machine. I w
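For a straight comparison it's worth extracting the same archive with each tar on
the same ZFS filesystem, e.g. (archive name is a placeholder, and the gtar path
assumes the sfw packages are installed):

# /usr/bin/tar xf big-archive.tar
# /usr/sfw/bin/gtar xf big-archive.tar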
the Sun SAS HBA card ;)
-Andy
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of James C.
McPherson
Sent: Saturday, July 26, 2008 8:18 AM
To: Miles Nordin
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Ideal Setup: RAID-5, Areca, etc!
Miles Nordin
a web driven GUI.
Anyone know if something like that is in the works? It looks like a
nice appliance for file shares in a corp network.
-Andy
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Tom Buskey
Sent: Monday, November 10, 2008 3:40 PM
To: zfs-di
ll :)
3. I like that; even better would be a way to install it without
dedicating spindles to the OS.
4. I am OK with value-added software being sold by Sun. We don't mind
paying money if it makes our job actually less complex each workday!
I'm going to give this VMware image a whirl and see w
and from
the chassis layout, it looks fairly involved. We don't want to "upgrade"
something that we just bought so we can take advantage of this software,
which appears to finally complete the Sun NAS picture with ZFS!
-Andy
e out the whole guts in one tray (from
the bottom rear?).
-Andy
-Original Message-
From: Chris Greer [mailto:[EMAIL PROTECTED]
Sent: Wednesday, November 12, 2008 3:57 PM
To: Andy Lubel; zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] OpenStorage GUI
I was hoping for a swap out o
AFAIK the drives are pretty much the same; it's the chipset that
changed, which also meant a change of CPU and memory.
-Andy
From: Tim [mailto:[EMAIL PROTECTED]
Sent: Wednesday, November 12, 2008 7:24 PM
To: Andy Lubel
Cc: Chris Greer; zfs-discuss
e are HP-UX 11i and OS X 10.4.9, and they both
have corresponding performance characteristics.
Any insight would be appreciated - we really like ZFS compared to any
filesystem we have EVER worked on and don't want to revert if at all possible!
TIA,
Andy
solve this!
-Andy
-Original Message-
From: [EMAIL PROTECTED] on behalf of Torrey McMahon
Sent: Fri 4/20/2007 6:00 PM
To: Marion Hakanson
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] ZFS+NFS on storedge 6120 (sun t4)
Marion Hakanson wrote:
> [EMAIL PROTECTED] said:
>
cache memsize : 1024 MBytes
fc_topology: auto
fc_speed : 2Gb
disk_scrubber : on
ondg : befit
Am I missing something? As for the RW test, I will tinker some more and
paste the results soonish.
Thanks in advance,
Andy Lubel
-Original Message-
Software Engineer
-
Leon Koll wrote:
>
> Welcome to the club, Andy...
>
> I tried several times to attract the attention of the community to the
> dramatic performance degradation (about 3 times) of the NFS/ZFS vs. NFS/UFS
> combination - without any result.
7 seconds!
We are likely going to just try iSCSI instead, where this behaviour is non-existent.
At some point, though, we would like to use ZFS-based NFS mounts for things -
the current difference in performance just scares us!
-Andy
-Original Message-
From: [EMAIL PROTECTED] on behalf of Roch
ng ZFS and iSCSI (initiator and target) in Leopard. After
all, OS X borrows from FreeBSD, and FreeBSD 7 has ZFS ;)
Andy
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Luke Scharf
Sent: Wednesday, April 25, 2007 3:00 PM
To: Toby Thain
Cc: [EMAIL PROTECTED]
Sub
Anyone who has an Xraid should have one (or 2) of these BBC (battery-backed cache) modules.
Good mojo.
http://store.apple.com/1-800-MY-APPLE/WebObjects/AppleStore.woa/wa/RSLID?mco=6C04E0D7&nplm=M8941G/B
Can you tell I <3 Apple?
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of
hieve to everyone else. If you want more details about my setup, just
email me directly, I don't mind :)
-Andy
On 5/7/07 4:48 PM, "[EMAIL PROTECTED]" <[EMAIL PROTECTED]>
wrote:
> Lee,
>
> Yes, the hot spare (disk4) should kick in if another disk in the pool fails
I'm using:
set zfs:zil_disable = 1   (in /etc/system)
on my SE6130 with ZFS accessed over NFS, and write performance almost
doubled. Since you have BBC, why not just set that?
-Andy
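For the record, on those older builds that tunable was applied in one of two ways -
and since disabling the ZIL can lose acknowledged writes on a crash, treat this as
a sketch rather than a recommendation:

# echo "set zfs:zil_disable = 1" >> /etc/system     (takes effect at next reboot)
# echo zil_disable/W0t1 | mdb -kw                   (changes the running kernel)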
On 5/24/07 4:16 PM, "Albert Chin"
<[EMAIL PROTECTED]> wrote:
> On Thu, May 24, 2007 at 11:55:58AM -0700
Andy Lubel
Application Administrator / IT Department
--
Andy Lubel
--
, thanks Andy
panic[cpu65]/thread=2a104299cc0: assertion failed: dmu_read(os,
smo->smo_object, offset, size, entry_map) == 0 (0x5 == 0x0), file:
../../common/fs/zfs/space_map.c, line: 307
02a104299000 genunix:assfail3+94 (7b3866c8, 5, 7b386708, 0, 7b386710, 133)
%l0-3: 2
Is there a way to get past this? I cannot re-create it until I export it:
# zpool export -f zonesHA2
cannot iterate filesystems: I/O error
# zpool status
  pool: zonesHA2
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption. Applications may be affec
my data on the RAIDZ and remount the ZFS after
> reinstall, or am I screwed?
>
> Please help ...
On 9/4/07 4:34 PM, "Richard Elling" <[EMAIL PROTECTED]> wrote:
> Hi Andy,
> my comments below...
> note that I didn't see zfs-discuss@opensolaris.org in the CC for the
> original...
>
> Andy Lubel wrote:
>> Hi All,
>>
>> I have been as
> instead of a netapp or something like that.
I don't see why it wouldn't, using zvols and iSCSI. We use iSCSI in our
rather large Exchange implementation - not backed by ZFS, but I don't see why
it couldn't be.
PS: no "NAS" solution will work for Exchange, will it?
I guess we can probably be OK using SXCE (as Joyent did).
Thanks,
Andy Lubel
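The zvol route on a build of that vintage was roughly this (names and size are
placeholders):

# zfs create -V 200G tank/exchange-lun
# zfs set shareiscsi=on tank/exchange-lun
# iscsitadm list target

and then point the Windows iSCSI initiator at the advertised target.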
On 9/18/07 1:02 PM, "Bryan Cantrill" <[EMAIL PROTECTED]> wrote:
>
> Hey Andy,
>
> On Tue, Sep 18, 2007 at 12:59:02PM -0400, Andy Lubel wrote:
>> I think we are very close to using zfs in our production environment.. Now
>> that I have snv_72 installed an
On 9/18/07 2:26 PM, "Neil Perrin" <[EMAIL PROTECTED]> wrote:
>
>
> Andy Lubel wrote:
>> On 9/18/07 1:02 PM, "Bryan Cantrill" <[EMAIL PROTECTED]> wrote:
>>
>>> Hey Andy,
>>>
>>> On Tue, Sep 18, 2007 at 12:59:
corrupted data.
>
> That would also be my preference, but if I were forced to use hardware
> RAID, the additional loss of storage for ZFS redundancy would be painful.
>
> Would anyone happen to have any good recommendations for an enterprise
> scale storage subsystem suitab
On 9/20/07 7:31 PM, "Paul B. Henson" <[EMAIL PROTECTED]> wrote:
> On Thu, 20 Sep 2007, Tim Spriggs wrote:
>
>> It's an IBM re-branded NetApp which can which we are using for NFS and
>> iSCSI.
Yeah, it's fun to see IBM compete with its OEM provider NetApp.
>
> Ah, I see.
>
> Is it comparable s
motley group of disks' on an E450 acting as our JumpStart
server, and server build times are noticeably quicker since U4.
just made these things dumb JBODs!
-Andy
On 9/28/07 7:37 PM, "Marion Hakanson" <[EMAIL PROTECTED]> wrote:
> Greetings,
>
> Last April, in this discussion...
> http://www.opensolaris.org/jive/thread.jspa?messageID=143517
>
> ...we never found out how (or if) th
Yeah, I'm pumped about this new release today - such harmony in my
storage to be had. Now if only OS X had a native iSCSI target/initiator!
-Andy Lubel
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Peter Woodman
Sent: Friday, October 26, 2007 8:14
I don't know, but I
don't think it's possible. Don't hold me to that, however; I only say that
because THE way I demote them to SATA I is by removing a jumper, actually :)
HTH,
Andy
On 11/2/07 12:29 PM, "Eric Haycraft" <[EMAIL PROTECTED]> wrote:
> I have a supermicro
Marvell controllers work great with Solaris.
The Supermicro AOC-SAT2-MV8 is what I currently use. I bought it on a
recommendation from this list, actually. I think I paid $110 for mine.
-Andy
On 11/2/07 4:10 PM, "Peter Schuller" <[EMAIL PROTECTED]> wrote:
> Hello,
>
>
easing.
I dream of a JBOD with lots of disks + something like this built into 3U.
Too bad Sun's forthcoming JBODs probably won't have anything similar to
this...
-Andy
Areca, nice!
Any word on whether 3ware has come around yet? I've been bugging them for
months to do something about getting a driver made for Solaris.
-Andy
From: [EMAIL PROTECTED] on behalf of James C. McPherson
Sent: Thu 11/22/2007 5:06 PM
To: mike
Cc
#zpool create pool2 raidz c3t8d0 c3t9d0 c3t10d0 c3t11d0
#zpool add pool2 raidz c3t12d0 c3t13d0 c3t14d0 c3t15d0
I have really learned not to do it this way with raidz and raidz2:
#zpool create pool2 raidz c3t8d0 c3t9d0 c3t10d0 c3t11d0 c3t12d0
c3t13d0 c3t14d0 c3t15d0
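(The difference being that the first form stripes two 4-disk raidz vdevs in the
pool, while the second makes one wide 8-disk raidz vdev, which gives noticeably
worse random IOPS and longer resilvers.)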
So when is thumper going to have
ords out
real    0m3.372s
user    0m0.088s
sys     0m1.209s

real    0m0.015s
user    0m0.001s
sys     0m0.012s

bash-3.00# time dd if=/pool0-raidz/w-test.lo1 of=/dev/null bs=8192; time sync
655360+0 records in
655360+0 records out

real    0m15.863s
user    0m0.431s
sys     0m6.077s
re
www.rite-group.com/rich
> http://www.linkedin.com/in/richteer
> http://www.myonlinehomeinventory.com
-Andy
On Mar 11, 2008, at 4:58 PM, Bart Smaalders wrote:
> Frank Bottone wrote:
>> I'm using the latest build of opensolaris express available from
>> opensolaris.org.
>>
>> I had no problems with the install (its an AMD64 x2 3800+, 1gb
>> physical ram, 1 ide drive for the os and 4*250GB sata drives at
tually want to delete the
oldest snapshot, similar to the zsnap.pl script floating around.
Can't wait to try this on NFS; the whole reason we objected to snapshots in the
first place in our org was because our admins didn't want to be involved with
the users in the routine of working with snapshots.
-Andy
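Something like this is usually all it takes to drop the oldest snapshot of a
dataset (dataset name is hypothetical):

# zfs list -H -t snapshot -o name -s creation -r tank/home | head -1 | xargs zfs destroy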
filesystem
creation upon connection to an AD-joined CIFS server? Samba had some cool
stuff with preexec, and I just wonder if something like that is available for
the kernel-mode CIFS driver.
-Andy
-Original Message-
From: [EMAIL PROTECTED] on behalf of Andy Lubel
Sent: Sun 5/11/2008 2:24 A
server, attach it to the new server then run 'zpool import' - and then
do a 'zpool upgrade'. Unfortunately this doesn't help the thumpers so
much :(
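i.e. after cabling the old array to the new box, roughly:

# zpool import              (lists importable pools)
# zpool import tank         ('tank' being whatever the pool is called)
# zpool upgrade tank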
>
>
> - cks
Echo. We like the 2540 as well, and will be buying lots of them
shortly.
>
>
> --
> Best regards,
> Robert Milkowski mailto:[EMAIL PROTECTED]
> http://milek.blogspot.com
>
-Andy
> ___
lers giving you
theoretically more throughput so long as MPxIO is functioning properly. The only
(minor) downside is that parity is being transmitted from the host to the disks
rather than living on the controller entirely.
-Andy
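A quick way to confirm MPxIO really is in play, using the standard multipath tools:

# mpathadm list lu
# stmsboot -L

mpathadm shows the operational path count per LUN; more than one path each means
multipathing is actually happening.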
From: [EMAIL PROTECTED] on behalf of
at 'mountd' was one of the top three resource
> consumers on my system, there would be bursts of high network traffic
> (1500 packets/second), and the affected OS-X system would operate
> more strangely than normal.
>
> The simple solution was to simply create a "/home
SAN SSDs (ours is RAM-based, not flash).
-Andy
>
>
> "cards will start at 80 GB and will scale to 320 and 640 GB next year.
> By the end of 2008, Fusion io also hopes to roll out a 1.2 TB
> card.
> 160 parallel pipelines that can read data at 800 megabytes per seco
Did you try mounting with NFS version 3?
mount -o vers=3
On May 28, 2008, at 10:38 AM, kevin kramer wrote:
> that is my thread and I'm still having issues even after applying
> that patch. It just came up again this week.
>
> [locahost] uname -a
> Linux dv-121-25.centtech.com 2.6.18-53.1.14.el
console to listen on something other than localhost did you do
this?
# svccfg -s svc:/system/webconsole setprop options/tcp_listen = true
# svcadm disable svc:/system/webconsole
# svcadm enable svc:/system/webconsole
-Andy
When I open the link, the left frame lists a stacktrace (below) and
d start to figure out what is going on? truss, dtrace,
snoop.. so many choices!
Thanks,
-Andy
, in a couple of months we will be dumping this server
for new X4600s.
Thanks for the help,
-Andy
On Jun 5, 2008, at 6:19 PM, Robert Thurlow wrote:
> Andy Lubel wrote:
>
>> I've got a real doozie. We recently implemented a b89 box as a ZFS/
>> NFS/CIFS server. The
On Jun 6, 2008, at 11:22 AM, Andy Lubel wrote:
> That was it!
>
> hpux-is-old.com -> nearline.host NFS C GETATTR3 FH=F6B3
> nearline.host -> hpux-is-old.com NFS R GETATTR3 OK
> hpux-is-old.com -> nearline.host NFS C SETATTR3 FH=F6B3
> nearline.host -> hpux-is-old.c
On Jun 9, 2008, at 12:28 PM, Andy Lubel wrote:
>
> On Jun 6, 2008, at 11:22 AM, Andy Lubel wrote:
>
>> That was it!
>>
>> hpux-is-old.com -> nearline.host NFS C GETATTR3 FH=F6B3
>> nearline.host -> hpux-is-old.com NFS R GETATTR3 OK
>> hpux-is-ol
nt spin at 7k+ rpm and
have no 'moving' parts. I do agree that there is a lot of circuitry
involved and eventually they will reduce that just like they did with
mainboards. Remember how packed they used to be?
Either way, I'm really interested in the v