Hello,
I have an X4540 running 2008.11 snv_106
I rebooted it tonight because of a hung iSCSI connection to the Sun
box that wouldn't go away (I couldn't delete that particular ZFS
filesystem until the initiator dropped the connection).
Upon reboot, the system will hang after printing the license header. I
Hi all,
Recently there's been discussion [1] in the Linux community about how
filesystems should deal with rename(2), particularly in the case of a crash.
ext4 was found to truncate files after a crash when they had been written with
open("foo.tmp"), write(), close(), and then rename("foo.tmp", "foo").
Wow Craig - thank you so much for that thorough response.
I am only using 1 vdev and I didn't realize two things:
1) that 50 GB on each of the 300s is essentially wasted. I thought it
would spread 300 GB of parity across all 6 disks, leaving me with 1350
GB of "data" space. Instead, you're saying
Brent,
Brent Wagner wrote:
> Can someone point me to a document describing how available space in a
> zfs is calculated or review the data below and tell me what I'm
> missing?
>
> Thanks in advance,
> -Brent
> ===
> I have a home project with 3x250 GB+3x300 GB in raidz, so I expect to
> lose 1x3
Great explanation. Thanks, Lori.
From: Lori Alt
To: Grant Lowe
Cc: cindy.swearin...@sun.com; zfs-discuss@opensolaris.org
Sent: Tuesday, March 17, 2009 2:52:04 PM
Subject: Re: [zfs-discuss] Mounting zfs file systems
no, this is an incorrect diagnosis. The pr
On 17 Mar, 2009, at 16.21, Bryan Allen wrote:
Then mirror the VTOC from the first (zfsroot) disk to the second:
# prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t1d0s2
# zpool attach -f rpool c1t0d0s0 c1t1d0s0
# zpool status -v
And then you'll still need to run installgrub to put grub
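For an x86 box the usual invocation is something like (second disk name taken
from the commands above):
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0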
Can someone point me to a document describing how available space in a
zfs is calculated or review the data below and tell me what I'm
missing?
Thanks in advance,
-Brent
===
I have a home project with 3x250 GB+3x300 GB in raidz, so I expect to
lose 1x300 GB to parity.
Total size: 1650 GB
Total siz
Are there any plans for an API that would allow ZFS commands, including
snapshot/rollback, to be integrated with a customer's application?
Thanks,
Cherry
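For example, the operations in question as they look with today's CLI
(dataset and snapshot names are only illustrative):
# zfs snapshot tank/appdata@pre_upgrade
# zfs rollback -r tank/appdata@pre_upgrade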
On 03/17/09 12:32 PM, cindy.swearin...@sun.com wrote:
Neal,
You'll need to use the text-based initial install option.
The steps for configuring a ZFS root pool during an initial
install are covered here:
http://opensolaris.org/os/community/zfs/docs/
Page 114:
Example 4–1 Initial Installation
On Mar 17, 2009, at 4:45 PM, Grant Lowe wrote:
bash-3.00# zfs create -b 8192 -V 44Gb oracle/prd_data/db1
I'm trying to set a mountpoint. But trying to mount it doesn't work.
bash-3.00# zfs list
NAME USED AVAIL REFER MOUNTPOINT
oracle 44.0G 653G 25.5K /ora
no, this is an incorrect diagnosis. The problem is that by
using the -V option, you created a volume, not a file system.
That is, you created a raw device. You could then newfs
a ufs file system within the volume, but that is almost certainly
not what you want.
Don't use -V when you create th
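A sketch of the file-system variant, reusing the names from this thread
(the quota is optional and only illustrates a size cap; recordsize is the
file-system counterpart of the volume's -b block size):
# zfs create -o recordsize=8k -o mountpoint=/opt/mis/oracle/data/db1 oracle/prd_data/db1
# zfs set quota=44g oracle/prd_data/db1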
Ok, Cindy. Thanks. I would like to have one big pool and divide it into
separate file systems for an Oracle database. What I had before was a separate
pool for each file system. So does it look I have to go back to what I had
before?
- Original Message
From: "cindy.swearin...@sun
Grant,
If I'm following correctly, you can't mount a ZFS resource
outside of the pool in which it resides.
Is this a UFS directory, here:
# mkdir -p /opt/mis/oracle/data/db1
What are you trying to do?
Cindy
Grant Lowe wrote:
Another newbie question:
I have a new system with zfs
Another newbie question:
I have a new system with zfs. I create a directory:
bash-3.00# mkdir -p /opt/mis/oracle/data/db1
I do my zpool:
bash-3.00# zpool create -f oracle c2t5006016B306005AAd0 c2t5006016B306005AAd1
c2t5006016B306005AAd3 c2t5006016B306005AAd4 c2t5006016B306005AAd5
c2t5006016B
+--
| On 2009-03-17 16:37:25, Mark J Musante wrote:
|
| >Then mirror the VTOC from the first (zfsroot) disk to the second:
| >
| ># prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t1d0s2
| ># zpool attach -f rpool c1
+--
| On 2009-03-17 16:13:27, Toby Thain wrote:
|
| Right, but what if you didn't realise on that screen that you needed
| to select both to make a mirror? The wording isn't very explicit, in
| my opinion. Yesterday I
On 17-Mar-09, at 3:32 PM, cindy.swearin...@sun.com wrote:
Neal,
You'll need to use the text-based initial install option.
The steps for configuring a ZFS root pool during an initial
install are covered here:
http://opensolaris.org/os/community/zfs/docs/
Page 114:
Example 4–1 Initial Install
Neal,
You'll need to use the text-based initial install option.
The steps for configuring a ZFS root pool during an initial
install are covered here:
http://opensolaris.org/os/community/zfs/docs/
Page 114:
Example 4–1 Initial Installation of a Bootable ZFS Root File System
Step 3, you'll be p
On Tue, 17 Mar 2009, Neal Pollack wrote:
Can anyone share some instructions for setting up the rpool mirror of
the boot disks during the Solaris Nevada (SXCE) install?
You'll need to use the text-based installer, and in there you choose the
two bootable disks instead of just one. They're
I'm setting up a new X4500 Thumper, and noticed suggestions/blogs
for setting up two boot disks as a zfs rpool mirror during installation.
But I can't seem to find instructions/examples for how to do this using
google, the blogs, or the Sun docs for X4500.
Can anyone share some instructions for s
The links to the Part 1 and Part 2 demos on this page
(http://www.opensolaris.org/os/project/avs/Demos/) appear to be broken.
http://www.opensolaris.org/os/project/avs/Demos/AVS-ZFS-Demo-V1/
http://www.opensolaris.org/os/project/avs/Demos/AVS-ZFS-Demo-V2/
James D. Rogers
NRA, GOA, DAD -
James,
there is also this demo:
http://www.nexenta.com/demos/auto-cdp.html
showing how AVS/ZFS is integrated in NexentaStor.
On Tue, 2009-03-17 at 10:25 -0600, James D. Rogers wrote:
> The links to the Part 1 and Part 2 demos on this page
> (http://www.opensolaris.org/os/project/avs/Demos/) appear
On Sun, March 15, 2009 15:37, Ross wrote:
> Not sure if this is what you mean, but I always start CIFS shares by
> granting everybody full permissions, and then set the rest from windows.
> I find otherwise deny permissions cause all kinds of problems since
> they're implemented differently on win
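If it helps, that initial everyone-gets-everything ACL is typically set with
the Solaris chmod ACL syntax on the share's root directory, e.g. (path is
illustrative):
# chmod A=everyone@:full_set:fd:allow /tank/cifs_share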
On Mon, March 16, 2009 06:10, Tobs wrote:
> There's a share with the A=everyone@:full_set:fd:allow folder_name
> permission set, but it seems that people didn't get identified the right
> way.
>
> For example, it's not possible to start Portable Thunderbird from this CIFS
> share.
>
> Did you use
Sorry, no, I assume you are on Sol 10.
...the value "space" to display space usage
properties on file systems and volumes.
This is a shortcut for "-o
name,avail,used,usedsnap,usedds,
use
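On builds that do have it, the shortcut is simply (dataset name from this
thread):
# zfs list -o space r12_data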
If you meant available, here's the output of that:
bash-3.00# zfs list -o available r12_data
AVAIL
62.7G
bash-3.00# zfs list -o available r12_data/d24
AVAIL
2.14G
bash-3.00# zfs list -o available r12_data/d25
AVAIL
62.7G
bash-3.00#
- Original Message
From: Michael Ramchand
To: Grant
Hi Mike,
Yes, that does help things. Thanks.
bash-3.00# zfs get compression r12_data/d25
NAME PROPERTY VALUE SOURCE
r12_data/d25 compression off default
bash-3.00# zfs get compression r12_data/d24
NAME PROPERTY VALUE SOURCE
r12_data/d24 com
Well, it is kinda confusing...
In short, df -h will always return the size of the WHOLE pool for "size"
(unless you've set a quota on the dataset in which case it says that),
the amount of space that particular dataset is using for "used", and the
total amount of free space on the WHOLE pool f
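One way to see where df gets its numbers is to query the underlying
properties directly, e.g. (dataset name from this thread):
# zfs get used,available,referenced,quota,reservation r12_data/d25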
Hi Mike,
Yes, d25 is a clone of d24. Here are some data points about it:
bash-3.00# zfs get reservation r12_data/d25
NAME PROPERTY VALUE SOURCE
r12_data/d25 reservation none default
bash-3.00# zfs get quota r12_data/d25
NAME PROPERTY VALUE SOURCE
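To confirm the clone relationship itself, the origin property shows which
snapshot the clone was created from:
# zfs get origin r12_data/d25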
Grant Lowe wrote:
Hey all,
I have a question/puzzle with zfs. See the following:
bash-3.00# df -h | grep d25 ; zfs list | grep d25
FILESYSTEM SIZE USED AVAIL CAPACITY MOUNTED ON
r12_data/d25 659G 40G 63G 39% /opt/d25/oakwc12
df -h says the d25 file system