On Thu, 29 Oct 2009 casper@sun.com wrote:
> Do you have the complete NFS trace output? My reading of the source code
> says that the file will be created with the proper gid, so I actually
> believe that the client "over-corrects" the attributes after creating
> the file/directory.
I dug
create a dedicated zfs zvol or filesystem for each file representing
your virtual machine.
Then if you need to clone a VM you clone its zvol or the filesystem.
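For example (pool and dataset names here are only placeholders), the per-VM workflow would look roughly like:
zfs create -V 20G tank/vm1               # one zvol per VM disk image
zfs snapshot tank/vm1@gold               # point-in-time source for clones
zfs clone tank/vm1@gold tank/vm1-clone   # writable copy, shares unchanged blocks with the snapshot
The clone only consumes extra space for blocks that later diverge from the snapshot.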
Jeffry Molanus wrote:
I'm not doing anything yet; I just wondered whether ZFS provides any methods to
do file-level cloning instead of comp
On Thu, 29 Oct 2009 casper@sun.com wrote:
> Do you have the complete NFS trace output? My reading of the source code
> says that the file will be created with the proper gid, so I actually
> believe that the client "over-corrects" the attributes after creating
> the file/directory.
Yes,
Miles Nordin wrote:
"pt" == Peter Tribble writes:
pt> Does it make sense to fold this sort of intelligence into the
pt> filesystem, or is it really an application-level task?
in general, it seems app writers constantly want to access hundreds
of thousands of files by uni
After several days of trying to get a 1.5 TB drive to resilver, with the
resilver continually restarting, I eliminated all of the snapshot-taking
facilities which were enabled and
2009-10-29.14:58:41 [internal pool scrub done txg:567780] complete=0
2009-10-29.14:58:41 [internal pool scrub txg:567780] func
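(Those timestamped lines look like zpool history -i output.) For anyone hitting the same thing, something along these lines would show the internal events and disable the Time Slider auto-snapshot services (pool name is a placeholder; service names assume the stock OpenSolaris setup):
zpool history -i tank | tail                  # recent internal events, including scrub/resilver restarts
svcs '*auto-snapshot*'                        # list the Time Slider snapshot instances
svcadm disable auto-snapshot:frequent         # repeat for hourly/daily/weekly/monthly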
Hi all,
I received my SSD and wanted to test it out using fake zpools, with files as
backing stores, before attaching it to my production pool. However, when I
exported the test pool and imported it again, I got an error. Here is what I did:
I created a file to use as a backing store for my new pool:
mkf
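(Presumably mkfile.) A minimal version of that sequence, with placeholder names and sizes, would be:
mkfile 1g /var/tmp/ssdtest                # file used as a fake disk
zpool create testpool /var/tmp/ssdtest
zpool export testpool
zpool import -d /var/tmp testpool         # file-backed vdevs are not found without -d <dir>
If the error was that no pools were available to import, a missing -d <dir> is the usual cause, since zpool import only scans /dev/dsk by default.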
> "pt" == Peter Tribble writes:
pt> Does it make sense to fold this sort of intelligence into the
pt> filesystem, or is it really an application-level task?
in general, it seems app writers constantly want to access hundreds
of thousands of files by unique id rather than filename, a
On Oct 29, 2009, at 15:08, Henrik Johansson wrote:
On Oct 29, 2009, at 5:23 PM, Bob Friesenhahn wrote:
On Thu, 29 Oct 2009, Orvar Korvar wrote:
So the solution is to never get more than 90% full disk space, damn it?
Right. While UFS created artificial limits to keep the filesystem
from
>I posted a little while back about a problem we are having: when a
>new directory gets created over NFS on a Solaris NFS server from a Linux
>NFS client, the new directory's group ownership is that of the primary group
>of the process, even if the parent directory has the sgid bit set and is
On Sat, Oct 24, 2009 at 12:12 PM, Orvar Korvar wrote:
> Would this be possible to implement on top of ZFS? Maybe it is a dumb idea, I
> don't know. What do you think, and how could it be improved?
>
> Assume all files are put in the zpool, helter-skelter. And then you can
> create arbitrary different filt
I posted a little while back about a problem we are having: when a
new directory gets created over NFS on a Solaris NFS server from a Linux
NFS client, the new directory's group ownership is that of the primary group
of the process, even if the parent directory has the sgid bit set and is
owned
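A quick way to reproduce and check this (paths and group names are placeholders):
# on the Solaris server:
chmod g+s /export/data                    # set the sgid bit on the exported parent directory
ls -ld /export/data                       # should show something like drwxr-sr-x ... staff
# on the Linux client:
mkdir /mnt/data/newdir
ls -ld /mnt/data/newdir                   # expected: group staff inherited from the parent, not the user's primary group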
> So the solution is to never get more than 90% full disk space
while that's true, it's not Henrik's main discovery. Henrik points
out that 1/4 of the ARC is used for metadata, and sometimes
that's not enough.
if
echo "::arc" | mdb -k | egrep ^size
isn't reaching
echo "::arc" | mdb -k | egrep "^
On Oct 29, 2009, at 5:23 PM, Bob Friesenhahn wrote:
On Thu, 29 Oct 2009, Orvar Korvar wrote:
So the solution is to never get more than 90% full disk space, damn it?
Right. While UFS created artificial limits to keep the filesystem
from getting so full that it became sluggish and "sick",
Daniel,
What is the actual size of c1d1?
>I notice that the size of the first partition is wildly inaccurate.
If format doesn't understand the disk, then ZFS won't either.
Do you have some kind of intervening software, like EMC PowerPath,
or are these disks under some virtualization control?
If
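Two quick ways to see what size the OS thinks c1d1 has (assuming it carries a valid label):
iostat -En                                # look for c1d1 and its "Size:" line
prtvtoc /dev/rdsk/c1d1s2                  # slice layout from the label (s2 assumes an SMI label)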
Yes, I am trying to create a non-redundant pool of two disks.
The output of format -> partition for c0d0:
Current partition table (original):
Total disk sectors available: 976743646 + 16384 (reserved sectors)
Part      Tag    Flag     First Sector         Size         Last Sector
  0        usr
I might need to see the format --> partition output for both c0d0 and
c1d1.
But in the meantime, you could try this:
# zpool create tank2 c1d1
# zpool destroy tank2
# zpool add tank c1d1
Adding the c1d1 disk to the tank pool will create a non-redundant pool
of two disks. Is this what you had in mind?
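For completeness, the dry run from the original post followed by the real thing:
zpool add -n tank c1d1                    # shows the resulting layout without changing anything
zpool add tank c1d1
zpool status tank                         # c0d0 and c1d1 should both appear as top-level vdevs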
On Thu, 29 Oct 2009, Orvar Korvar wrote:
So the solution is to never get more than 90% full disk space, damn it?
Right. While UFS created artificial limits to keep the filesystem
from getting so full that it became sluggish and "sick", ZFS does not
seem to include those protections. Don't
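One way to enforce that headroom rather than just watch for it (pool, dataset, and size here are only examples):
zpool list tank                           # the CAP column shows percent used
zfs create tank/headroom                  # empty dataset that exists only to hold a reservation
zfs set reservation=20G tank/headroom     # roughly 10% of a 200G pool kept permanently free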
Here is the output of zpool status and format.
# zpool status tank
pool: tank
state: ONLINE
scrub: none requested
config:
NAME      STATE     READ WRITE CKSUM
tank      ONLINE       0     0     0
  c0d0    ONLINE       0     0     0
errors: No known data errors
Hi Dan,
Could you provide a bit more information, such as:
1. zpool status output for tank
2. the format entries for c0d0 and c1d1
Thanks,
Cindy
- Original Message -
From: Daniel
Date: Thursday, October 29, 2009 9:59 am
Subject: [zfs-discuss] adding new disk to pool
To: zfs-discuss@opensolaris.org
Hi,
I just installed two new disks in my Solaris box and would like to add them to
my ZFS pool.
After installing the disks I run
# zpool add -n tank c1d1
and I get:
would update 'tank' to the following configuration:
tank
c0d0
c1d1
Which is what I want. However, when I o
So the solution is to never get more than 90% full disk space, damn it?
Lasse Osterild wrote:
Hi,
Seems either Solaris or SunSolve is in need of an update.
pool: dataPool
state: DEGRADED
status: One or more devices has experienced an unrecoverable error. An
attempt was made to correct the error. Applications are unaffected.
action: Determine if the device needs to
Hi,
Seems either Solaris or SunSolve is in need of an update.
pool: dataPool
state: DEGRADED
status: One or more devices has experienced an unrecoverable error. An
attempt was made to correct the error. Applications are
unaffected.
action: Determine if the device needs to be repla
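The usual follow-up for that status, assuming the device checks out (device name is a placeholder):
zpool status -v dataPool                  # identify the affected vdev and error counts
zpool clear dataPool c7t3d0               # reset the error counters if the device looks healthy
zpool scrub dataPool                      # verify everything is still readable afterwards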
Hi,
Did anyone ever get to the bottom of this? After enabling smb, I'm now seeing
this behaviour - zfs create just hangs.
Thanks
Miles
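A couple of things that might narrow down where it hangs (assuming the in-kernel CIFS/smb service):
svcs -xv smb/server                       # is the SMB service itself healthy?
pstack `pgrep -x zfs`                     # userland stack of the hung zfs create
echo "::pgrep zfs | ::walk thread | ::findstack -v" | mdb -k    # kernel-side stacks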