I should be able to reply to you next Tuesday -- my 6140 SATA
expansion tray is due to arrive. Meanwhile, what kind of problem do
you have with the 3511?
--
Just me,
Wire ...
On 3/23/07, Frank Cusack <[EMAIL PROTECTED]> wrote:
Does anyone have a 6140 expansion shelf that they can hook directly
On Thu, Mar 22, 2007 at 08:39:55AM -0700, Eric Schrock wrote:
> Again, thanks to devids, the autoreplace code would not kick in here at
> all. You would end up with an identical pool.
Eric, maybe I'm missing something, but why does ZFS depend on devids at all?
As I understand it, devid is something th
On Fri, Mar 23, 2007 at 11:31:03AM +0100, Pawel Jakub Dawidek wrote:
> On Thu, Mar 22, 2007 at 08:39:55AM -0700, Eric Schrock wrote:
> > Again, thanks to devids, the autoreplace code would not kick in here at
> > all. You would end up with an identical pool.
>
> Eric, maybe I'm missing something,
Hi.
bash-3.00# uname -a
SunOS nfs-14-2.srv 5.10 Generic_125101-03 i86pc i386 i86pc
I created the first zpool (a stripe of 85 disks) and did some simple stress testing -
everything seems almost all right (~700 MB/s sequential reads, ~430 MB/s sequential writes).
Then I destroyed the pool and put an SVM stripe on top of the same
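As a minimal sketch (device names here are placeholders), that test pool was created
and torn down with commands like:

    zpool create tank c1t0d0 c1t1d0 c1t2d0 c1t3d0   # ...and so on for all 85 disks
    zpool destroy tank                              # before re-using the disks for SVM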
When I'm trying to do the following in the kernel, in a zfs ioctl:
1. snapshot destroy PREVIOUS
2. snapshot rename LATEST->PREVIOUS
3. snapshot create LATEST
code is:
/* delete previous snapshot */
zfs_unmount_snap(snap_previous, NULL);
dmu_objset_destroy(snap_previous);
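For reference, the same rotation from userland, as a sketch with a hypothetical dataset name:

    zfs destroy tank/fs@previous                 # 1. drop the old snapshot
    zfs rename tank/fs@latest tank/fs@previous   # 2. keep the last one as "previous"
    zfs snapshot tank/fs@latest                  # 3. take a fresh "latest"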
Dear all.
I've setup the following scenario:
Galaxy 4200 running OpenSolaris build 59 as iSCSI target; the remaining
disk space of the two internal drives, 90GB in total, is used as a zpool
for the two 32GB volumes "exported" via iSCSI
The initiator is an up to date Solaris 10 11/06 x86 box usi
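As a sketch (pool and volume names are only examples), the two backing volumes are plain zvols:

    zfs create -V 32g tank/iscsivol0
    zfs create -V 32g tank/iscsivol1
    zfs set shareiscsi=on tank/iscsivol0   # if the build's shareiscsi property is used for the export
    zfs set shareiscsi=on tank/iscsivol1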
Hello,
Our Solaris 10 machine needs to be reinstalled.
Inside we have 2 HDDs in a striped ZFS pool with 4 filesystems.
After Solaris is installed, how can I "mount" or recover the 4 filesystems
without losing the existing data?
Thank you very much!
>See fsattr(5)
It was helpful :). Thanks!
On 3/23/07, Ionescu Mircea <[EMAIL PROTECTED]> wrote:
Hello,
Our Solaris 10 machine need to be reinstalled.
Inside we have 2 HDDs in striping ZFS with 4 filesystems.
After Solaris is installed how can I "mount" or recover the 4 filesystems
without losing the existing data?
Check "zfs import
Hello Robert,
Forget it, silly me.
Pool was mounted on one host, SVM metadevice was created on another
host on the same disk at the same time and both hosts were issuing
IOs.
Once I corrected it I no longer see CKSUM errors with ZFS on top of
SVM, and performance is similar.
Thomas Nau writes:
> Dear all.
> I've setup the following scenario:
>
> Galaxy 4200 running OpenSolaris build 59 as iSCSI target; remaining
> diskspace of the two internal drives with a total of 90GB is used as zpool
> for the two 32GB volumes "exported" via iSCSI
>
> The initiator is
On Mar 23, 2007, at 6:13 AM, Łukasz wrote:
When I'm trying to do the following in the kernel, in a zfs ioctl:
1. snapshot destroy PREVIOUS
2. snapshot rename LATEST->PREVIOUS
3. snapshot create LATEST
code is:
/* delete previous snapshot */
zfs_unmount_snap(snap_previous, NULL);
where the name of the pool is xyz:
zpool export xyz
rebuild the system (stay clear of the pool disks)
zpool import xyz
Ron Halstead
Robert Milkowski wrote:
Basically we've implemented a mechanism to replicate a ZFS file system,
implementing a new ioctl based on zfs send|recv. The difference is that
we sleep() for a specified time (default 5s) and then ask for a new
transaction, and if there's one we send it out.
More details really so
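A rough userland sketch of the same idea, using periodic incremental zfs send|recv
with made-up dataset and host names:

    # initial full copy, run once
    zfs snapshot tank/fs@repl0
    zfs send tank/fs@repl0 | ssh target zfs recv backup/fs

    # afterwards, every few seconds: snapshot and ship only the delta
    zfs snapshot tank/fs@repl1
    zfs send -i tank/fs@repl0 tank/fs@repl1 | ssh target zfs recv backup/fs
    sleep 5   # and so on with repl2, repl3, ... (the target copy must stay unmodified)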
It looks like we're between a rock and a hard place. We want to use
ZFS for one project because of snapshots and data integrity - both
would give us considerable advantages over ufs (not to mention
filesystem size). Unfortunately, this is critical company data and the
access control has to be e
With the latest Nevada, setting zfs_arc_max in /etc/system is
sufficient. Playing with mdb on a live system is trickier and is
what caused the problem here.
-r
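For the record, the /etc/system form is a single line such as the following
(the value 0x20000000, i.e. 512MB, is only an example, and it takes effect at the next boot):

    set zfs:zfs_arc_max = 0x20000000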
[EMAIL PROTECTED] writes:
> Jim Mauro wrote:
>
> > All righty...I set c_max to 512MB, c to 512MB, and p to 256MB...
> >
> > > arc::
> How it got that way, I couldn't really say without looking at your code.
It works like this:
In the new ioctl operation
zfs_ioc_replicate_send(zfs_cmd_t *zc)
we open the filesystem (not a snapshot):
dmu_objset_open(zc->zc_name, DMU_OST_ANY,
    DS_MODE_STANDARD | DS_MODE_READONLY,
On Fri, Mar 23, 2007 at 11:31:03AM +0100, Pawel Jakub Dawidek wrote:
>
> Eric, maybe I'm missing something, but why does ZFS depend on devids at all?
> As I understand it, devid is something that never changes for a block
> device, e.g. disk serial number, but on the other hand it is optional, so
> we ca
On 3/23/07, Mark Shellenbaum <[EMAIL PROTECTED]> wrote:
The original plan was to allow the inheritance of owner/group/other
permissions. Unfortunately, during ARC reviews we were forced to remove
that functionality, due to POSIX compliance and security concerns.
What exactly is the POSIX compl
Peter Tribble wrote:
On 3/23/07, Mark Shellenbaum <[EMAIL PROTECTED]> wrote:
The original plan was to allow the inheritance of owner/group/other
permissions. Unfortunately, during ARC reviews we were forced to remove
that functionality, due to POSIX compliance and security concerns.
What exac
Thanks for advice.
I removed my snap_previous and snap_latest buffers and it helped.
I'm using zc->value as the buffer.
On Fri, 23 Mar 2007, Roch - PAE wrote:
I assume the rsync is not issuing fsyncs (and its files are
not opened O_DSYNC). If so, rsync just works against the
filesystem cache and does not commit the data to disk.
You might want to run sync(1M) after a successful rsync.
A larger rsync would pr
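I.e. something like (paths are placeholders):

    rsync -a /data/src/ /tank/dst/ && sync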
Thank you all !
The machine crashed unexpectedly so no export was possible.
Anyway just using "zpool import pool_name" helped me to recover everything.
Thanks again for your help!
On March 23, 2007 5:38:20 PM +0800 Wee Yeh Tan <[EMAIL PROTECTED]> wrote:
I should be able to reply to you next Tuesday -- my 6140 SATA
expansion tray is due to arrive. Meanwhile, what kind of problem do
you have with the 3511?
I'm not sure that it had anything to do with the raid controller be
On March 23, 2007 6:51:10 PM +0100 Thomas Nau <[EMAIL PROTECTED]> wrote:
Thanks for the hints, but this would make our worst nightmares come
true. At least they could, because it means that we would have to check
every application handling critical data, and I think it's not the apps'
responsibility
I recently integrated this fix into ON:
6536606 gzip compression for ZFS
With this, ZFS now supports gzip compression. To enable gzip compression
just set the 'compression' property to 'gzip' (or 'gzip-N' where N=1..9).
Existing pools will need to upgrade in order to use this feature, and, yes,
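For example, with a hypothetical dataset name (note that the pool upgrade is one-way):

    zpool upgrade tank                   # bring the pool version up so it understands gzip
    zfs set compression=gzip tank/data   # default level, equivalent to gzip-6
    zfs set compression=gzip-9 tank/old  # or pick an explicit level between 1 and 9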
On Fri, 23 Mar 2007, Adam Leventhal wrote:
> I recently integrated this fix into ON:
>
> 6536606 gzip compression for ZFS
Cool! Can you recall into which build it went?
--
Rich Teer, SCSA, SCNA, SCSECA, OpenSolaris CAB member
CEO,
My Online Home Inventory
Voice: +1 (250) 979-1638
URLs: ht
On Fri, Mar 23, 2007 at 11:41:21AM -0700, Rich Teer wrote:
> > I recently integrated this fix into ON:
> >
> > 6536606 gzip compression for ZFS
>
> Cool! Can you recall into which build it went?
I put it back yesterday so it will be in build 62.
Adam
--
Adam Leventhal, Solaris Kernel Devel
>Peter Tribble wrote:
>> On 3/23/07, Mark Shellenbaum <[EMAIL PROTECTED]> wrote:
>>>
>>> The original plan was to allow the inheritance of owner/group/other
>>> permissions. Unfortunately, during ARC reviews we were forced to remove
>>> that functionality, due to POSIX compliance and security conc
Well, I am aware that /tmp can be mounted on swap as tmpfs and that this is
really fast as almost all writes go straight to memory, but this is of little to
no value to the server in question.
The server in question is running 2 enterprise third party applications. No
compilers are installed...in
>I'd tend to disagree with that. POSIX/SUS does not guarantee data makes
>it to disk until you do an fsync() (or open the file with the right flags,
>or other techniques). If an application REQUIRES that data get to disk,
>it really MUST DTRT.
Indeed; want your data safe? Use:
    fflush(fp); fsync(fileno(fp)); fclose(fp);
On Fri, 23 Mar 2007, Matt B wrote:
> The server in question is running 2 enterprise third party
> applications. No compilers are installed...in fact its a super minimal
> Solaris 10 core install (06/06). The reasoning behind moving /tmp onto
> ZFS was to protect against the occasional misdirected
On Fri, Mar 23, 2007 at 11:57:40AM -0700, Matt B wrote:
>
> The server in question is running 2 enterprise third party
> applications. No compilers are installed...in fact its a super minimal
> Solaris 10 core install (06/06). The reasoning behind moving /tmp onto
> ZFS was to protect against the
On 3/23/07, Mark Shellenbaum <[EMAIL PROTECTED]> wrote:
Peter Tribble wrote:
> What exactly is the POSIX compliance requirement here?
>
The ignoring of a user's umask.
Where in POSIX does it specify the interaction of ACLs and a
user's umask?
--
-Peter Tribble
http://www.petertribble.co.uk/ - h
Anton B. Rang wrote:
Is this because C would already have a devid? If I insert an unlabeled disk,
what happens? What if B takes five minutes to spin up? If it never does?
N.B. You get different error messages from the disk. If a disk is not ready
then it will return a not ready code and the s
workaround below...
Richard Elling wrote:
Anton B. Rang wrote:
Is this because C would already have a devid? If I insert an unlabeled
disk, what happens? What if B takes five minutes to spin up? If it
never does?
N.B. You get different error messages from the disk. If a disk is not
ready
Peter Tribble wrote:
On 3/23/07, Mark Shellenbaum <[EMAIL PROTECTED]> wrote:
Peter Tribble wrote:
> What exactly is the POSIX compliance requirement here?
>
The ignoring of a user's umask.
Where in POSIX does it specify the interaction of ACLs and a
user's umask?
Let me try and summarize the
Thomas Nau wrote:
Dear all.
I've setup the following scenario:
Galaxy 4200 running OpenSolaris build 59 as iSCSI target; remaining
diskspace of the two internal drives with a total of 90GB is used as
zpool for the two 32GB volumes "exported" via iSCSI
The initiator is an up to date Solaris 1
> Consider that 18GByte disks are old and their failure
> rate will
> increase dramatically over the next few years.
I guess that's why I am asking about raidz and mirrors, not just creating a huge
stripe of them
> Do something to
> have redundancy. If raidz2 works for your workload,
> I'd go wit
Just to clarify
pool1 -> 5 disk raidz2
pool2 -> 4 disk raid 10
spare for both pools
Is that correct?
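As a sketch of that layout with placeholder disk names (the same disk can be added
as a spare to both pools):

    zpool create pool1 raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0
    zpool create pool2 mirror c2t5d0 c2t6d0 mirror c2t7d0 c2t8d0
    zpool add pool1 spare c2t9d0
    zpool add pool2 spare c2t9d0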
OK, so you are suggesting that I simply mount /tmp as tmpfs on my existing 8GB
swap slice and then put in the VM limit on /tmp? Will that limit only affect
users writing data to /tmp, or will it also affect the system's use of swap?
Robert Milkowski wrote:
Hello Robert,
Forget it, silly me.
Pool was mounted on one host, SVM metadevice was created on another
host on the same disk at the same time and both hosts were issuing
IOs.
Once I corrected it I do no longer see CKSUM errors with ZFS on top of
SVM and performance is s
For reference...here is my disk layout currently (one disk of two, but both are
identical)
s4 is for the MetaDB
s5 is dedicated for ZFS
partition> print
Current partition table (original):
Total disk cylinders available: 8921 + 2 (reserved cylinders)
Part      Tag    Flag     Cylinders         Si
On Fri, 23 Mar 2007, Matt B wrote:
> Ok so you are suggesting that I simply mount /tmp as tmpfs on my
> existing 8GB swap slice and then put in the VM limit on /tmp? Will that
Yes.
> limit only affect users writing data to /tmp or will it also affect the
> systems use of swap?
Well, they'd pote
OK, since I already have an 8GB swap slice I'd like to use, what would be the
best way of setting up /tmp on this existing swap slice as tmpfs and then applying
the 1GB quota limit?
I know how to get rid of the zpool/tmp filesystem in ZFS, but I'm not sure how
to actually get to the above in a pos
On Fri, 23 Mar 2007, Matt B wrote:
> Ok, since I already have an 8GB swap slice i'd like to use, what
> would be the best way of setting up /tmp on this existing SWAP slice as
> tmpfs and then apply the 1GB quota limit?
Have a line similar to the following in your /etc/vfstab:
swap    -       /tmp    tmpfs   -       yes     size=1024m
And just doing this will automatically target my /tmp at my 8GB swap slice on
s1, as well as putting the quota in place?
On Fri, 23 Mar 2007, Matt B wrote:
> And just doing this will automatically target my /tmp at my 8GB swap
> slice on s1 as well as placing the quota in place?
After a reboot, yes.
--
Rich Teer
Oh, one other thing...s1 (8GB swap) is part of an SVM mirror (on d1)
On Fri, 23 Mar 2007, Matt B wrote:
> Oh, one other thing...s1 (8GB swap) is part of an SVM mirror (on d1)
That's not relevant in this case.
--
Rich Teer
Worked great. Thanks
I'd take your 10 data disks and make a single raidz2 stripe. You can sustain
two disk failures before losing data, and presumably you'd replace the failed
disks before that was likely to happen. If you're very concerned about
failures, I'd have a single 9-wide raidz2 stripe with a hot spare.
Adam
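As a sketch of the 9-wide raidz2 plus hot spare, with placeholder device names:

    zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 \
                             c1t5d0 c1t6d0 c1t7d0 c1t8d0 \
                 spare c1t9d0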
Dear Fran & Casper
I'd tend to disagree with that. POSIX/SUS does not guarantee data makes
it to disk until you do an fsync() (or open the file with the right flags,
or other techniques). If an application REQUIRES that data get to disk,
it really MUST DTRT.
Indeed; want your data safe? Use
snv_62
On Fri, 23 Mar 2007, Rich Teer wrote:
On Fri, 23 Mar 2007, Adam Leventhal wrote:
Richard,
Like this?
disk--zpool--zvol--iscsitarget--network--iscsiclient--zpool--filesystem--app
exactly
I'm in a way still hoping that it's an iSCSI-related problem, as detecting
dead hosts in a network can be a non-trivial problem and it takes quite
some time for TCP to time out and inform th
>Thanks for clarifying! Seems I really need to check the apps with truss or
>dtrace to see if they use that sequence. Allow me one more question: why
>is fflush() required prior to fsync()?
When you use stdio, you need to make sure the data is in the
system buffers prior to calling fsync().
fclose(
Łukasz wrote:
How it got that way, I couldn't really say without looking at your code.
It works like this:
...
we set max_txg
ba.max_txg = (spa_get_dsl(filesystem->os->os_spa))->dp_tx.tx_synced_txg;
So, how do you send the initial stream? Presumably you need to do it
with ba.max_txg = 0
If I create a mirror, presumably if possible I use two or more identically
sized devices,
since it can only be as large as the smallest. However, if later I want to
replace a disk
with a larger one, and detach the mirror (and anything else on the disk),
replace the
disk (and if applicable repar
Yes, this is supported now. Replacing one half of a mirror with a larger device,
letting it resilver, then replacing the other half does indeed get a larger
mirror.
I believe this is described somewhere but I can't remember where now.
Neil.
Richard L. Hamilton wrote On 03/23/07 20:45,:
If I cr
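As a sketch (pool and device names are made up), the sequence Neil describes is:

    zpool replace tank c1t0d0 c2t0d0   # swap in the first, larger disk
    zpool status tank                  # wait until the resilver completes
    zpool replace tank c1t1d0 c2t1d0   # then swap the other half
    # once both halves have resilvered, the mirror grows to the new size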
Hi guys!
Please share your experience on how to back up ZFS with ACLs using commercially
available backup software. Has anyone tested backup of ZFS with ACLs using
Tivoli (TSM)?
Thanks,
Ayaz
On Fri, Mar 23, 2007 at 11:28:19AM -0700, Frank Cusack wrote:
> >I'm in a way still hoping that it's a iSCSI related Problem as detecting
> >dead hosts in a network can be a non trivial problem and it takes quite
> >some time for TCP to timeout and inform the upper layers. Just a
> >guess/hope here