I am trying to configure a system where I have two different NFS shares
which point to the same directory. The idea is that if you come in via one
path you get read-only access and can't delete any files, while if you come
in via the second path you get read/write access.
For example, create the
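For what it's worth, a sketch of one way this is sometimes done on Solaris-style
NFS (the paths and options below are my own illustration, not taken from the
thread): loopback-mount the directory at a second path, then share each path
with different options.
# Illustrative only -- /tank/data and /export/data-ro are made-up paths.
mount -F lofs -o ro /tank/data /export/data-ro   # second, read-only view of the same directory
share -F nfs -o rw /tank/data                    # read/write when mounted via this path
share -F nfs -o ro /export/data-ro               # read-only when mounted via this path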
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of artiepen
>
> I'm using zfs/osol snv_134. I have 2 zfs volumes: /zpool1/test/share1 and
> /zpool1/test/share2. share1 is using CIFS, share2: nfs.
>
> I've recently put a cronjob in place that c
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Stephan Budach
>
> Now, I want to use zfs send -R t...@movetank | zfs recv targetTank/...
> which would place all zfs fs one level down below targetTank.
> Overwriting targetTank is not an opti
On Fri, 17 Dec 2010, Edward Ned Harvey wrote:
Also, if a 2nd disk fails during resilver, it's more likely to be in the same
vdev if you have only 2 vdevs. Your odds are better with smaller vdevs,
both because the resilver completes faster and because the probability of a 2nd
failure in the same vdev
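For a rough back-of-the-envelope illustration (my own numbers, not from the
original post): with 14 disks in 2 raidz vdevs of 7, once one disk has failed,
6 of the remaining 13 disks sit in the same vdev, so a second failure during
resilver lands in that vdev with probability 6/13, about 46%. With 7 two-disk
mirrors, only 1 of the remaining 13 disks shares a vdev with the failed one,
i.e. roughly 8%.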
Hi,
I want to move all the ZFS fs from one pool to another, but I don't want
to "gain" an extra level in the folder structure on the target pool.
On the source zpool I used zfs snapshot -r t...@movetank on the root fs
and I got a new snapshot in all sub fs, as expected.
Now, I want to use zfs
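A minimal sketch of the usual suggestion for this (pool and dataset names are
placeholders, since the real ones are truncated above): zfs recv -d discards
the source pool name from the received dataset paths, so the file systems land
at the same depth under the target pool instead of one level deeper.
# Placeholder names ("sourcepool", "home"); -d drops the source pool name,
# so sourcepool/home/* arrives as targetTank/home/*, not targetTank/sourcepool/home/*.
zfs send -R sourcepool/home@movetank | zfs recv -d targetTank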
Here's a long overdue update for you all...
After updating countless drivers, BIOSes and Nexenta, it seems that our issue
has disappeared. We're slowly moving our production to our three appliances
and things are going well so far. Sadly we don't know exactly what update
fixed our issue. I wish I
I have 159x 15K RPM SAS drives I want to build a ZFS appliance with.
75x 145G
60x 300G
24x 600G
The box has 4 CPUs, 256G of RAM, 14x 100G SLC SSDs for the cache and a mirrored
pair of 4G DDRDrive X1s for the SLOG.
My plan is to mirror all these drives and keep some hot spares.
My question is
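To make the mirror-plus-spares idea concrete, here is a purely illustrative
fragment (device names invented, and heavily abbreviated -- a real 159-drive
command would list many more mirrors) of the kind of layout being described:
two-way mirrors, hot spares, a mirrored log device for the SLOG, and cache
devices for the L2ARC.
# Invented device names, shown only to illustrate the vdev grouping.
zpool create tank \
  mirror c1t0d0 c2t0d0 \
  mirror c1t1d0 c2t1d0 \
  mirror c1t2d0 c2t2d0 \
  spare c1t3d0 c2t3d0 \
  log mirror c3t0d0 c3t1d0 \
  cache c4t0d0 c4t1d0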
at December 17 2010, 17:48 wrote in [1]:
> By single drive mirrors, I assume, in a 14 disk setup, you mean 7
> sets of 2 disk mirrors - I am thinking of traditional RAID1 here.
> Or do you mean 1 massive mirror with all 14 disks?
Edward means a set of two-way-mirrors.
Do you remember what he
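In zpool terms that would look something like the sketch below (device names
are placeholders): seven separate two-disk mirror vdevs in one pool, striped
across the mirrors, not one vdev containing all 14 disks.
# Placeholder device names: 7 two-way mirrors = 14 disks.
zpool create tank \
  mirror c1t0d0 c1t1d0  mirror c1t2d0 c1t3d0  mirror c1t4d0 c1t5d0 \
  mirror c1t6d0 c1t7d0  mirror c2t0d0 c2t1d0  mirror c2t2d0 c2t3d0 \
  mirror c2t4d0 c2t5d0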
The chown may affect access - possibly due to user ACE
versus owner@ ACE behavior. A user ACE always refers to the
specific user mentioned in the ACE. An owner@ ACE applies
to the current owner of the file, which changes with chown.
owner@ represents the typical, expected behavior on UNIX
but c
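A small illustration of the difference (the file name and user names are made
up): an ACE granted to a named user stays with that user across a chown, while
an owner@ ACE follows whoever owns the file at the moment of access.
# Made-up file and user names, Solaris NFSv4 ACL syntax.
chmod A+user:webuser:read_data/write_data:allow /tank/share2/report.txt   # tied to webuser
chmod A+owner@:read_data/write_data:allow /tank/share2/report.txt         # tied to the current owner
chown operator /tank/share2/report.txt   # owner@ entries now apply to operator; the user: entry still to webuser
ls -v /tank/share2/report.txt            # inspect the resulting ACL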
Hi all,
I'm getting a very strange problem with a recent OpenSolaris b134 install.
System is:
Supermicro X5DP8-G2 BIOS 1.6a
2x Supermicro AOC-SAT2-MV8 1.0b
11 Seagate Barracuda 1TB ES.2 ST31000340NS drives
If I have any of the 11 1TB Seagate drives plugged into the controller,
the AOC-SAT2-MV8
You should take a look at the ZFS best practices guide for RAIDZ and
mirrored configuration recommendations:
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
It's easy for me to say because I don't have to buy storage, but
mirrored storage pools are currently more flexible,
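One concrete reason mirrors are often called more flexible (commands shown with
invented device names, purely as illustration): individual mirror vdevs can be
widened and narrowed after the fact, which raidz vdevs cannot.
# Invented device names.
zpool attach tank c1t0d0 c1t9d0   # widen the vdev containing c1t0d0 into a bigger mirror
zpool detach tank c1t9d0          # and shrink it again later; there is no equivalent for raidz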
Thanks!
By single drive mirrors, I assume, in a 14 disk setup, you mean 7 sets of 2
disk mirrors - I am thinking of traditional RAID1 here.
Or do you mean 1 massive mirror with all 14 disks?
This is always a tough one for me. I too prefer RAID1 where redundancy is king,
but the trade off for m
I'm using zfs/osol snv_134. I have 2 zfs volumes: /zpool1/test/share1 and
/zpool1/test/share2. share1 is using CIFS, share2: nfs.
I've recently put a cronjob in place that changes the ownership of share2 to a
user and a group, on the test filer every 5 minutes. The cron job actually runs
in ope
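The crontab entry being described is presumably something along these lines
(the user, group and use of -R are my guesses):
# Guessed entry: classic Solaris cron has no */5 shorthand, so the minutes are listed out.
0,5,10,15,20,25,30,35,40,45,50,55 * * * * chown -R someuser:somegroup /zpool1/test/share2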
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Lanky Doodle
>
> This is relevant as my final setup was planned to be 15 disks, so only one
> more than the example.
>
> So, do I drop one disk and go with 2 7 drive vdevs, or stick to 3 5 dri
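Spelled out as pool layouts (placeholder device names), the two options being
weighed look roughly like this:
# Option A: drop one disk, 14 disks in two 7-disk raidz vdevs
zpool create tank \
  raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 \
  raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0
# Option B: keep all 15 disks in three 5-disk raidz vdevs
zpool create tank \
  raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 \
  raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 \
  raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0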
Miles Nordin wrote:
> > "js" == Joerg Schilling
> delivered the following alternate reality of ideological
> partisan hackery:
>
> js> GPLv3 does not give you anything you don't have from CDDL
> js> also.
>
> I think this is wrong. The patent indemnification is totally
>
OK cool.
One last question. Reading the Admin Guid for ZFS, it says:
[i]"A more complex conceptual RAID-Z configuration would look similar to the
following:
raidz c1t0d0 c2t0d0 c3t0d0 c4t0d0 c5t0d0 c6t0d0 c7t0d0
raidz c8t0d0 c9t0d0 c10t0d0 c11t0d0 c12t0d0 c13t0d0 c14t0d0
If you are creating a
On 12/17/2010 2:12 AM, Lanky Doodle wrote:
Thanks for all the replies.
The bit about combining zpools came from this command on the southbrain
tutorial;
zpool create mail \
  mirror c6t600D0230006C1C4C0C50BE5BC9D49100d0 \
         c6t600D0230006B66680C50AB7821F0E900d0 \
  mirror c6t600D0230006B66680C50A
Thanks for all the replies.
The bit about combining zpools came from this command on the southbrain
tutorial;
zpool create mail \
  mirror c6t600D0230006C1C4C0C50BE5BC9D49100d0 \
         c6t600D0230006B66680C50AB7821F0E900d0 \
  mirror c6t600D0230006B66680C50AB0187D75000d0 \
         c6t600D0230006C1C4C0C50BE27386C4