On 08/12/11 04:42 PM, Vikash Gupta wrote:
> Hi Ian,
> It's there in the subject line.
> I am unable to see the zfs file system in df output.
How did you mount it and did it fail? As I said, what commands did you
use and what errors did you get?
What is the output of zfs mount -a?
--
Ian.
I used the df command and it's not showing the zfs file system in the list.
zfs mount -a does not return any error.
Rgds
Vikash
-----Original Message-----
From: Ian Collins [mailto:i...@ianshome.com]
Sent: Friday, August 12, 2011 1:24 PM
To: Vikash Gupta
Cc: zfs-discuss@opensolaris.org
Subject: Re: [
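A minimal set of checks for a dataset that is missing from df output; the pool
and dataset names here are only placeholders, not taken from this thread:

  zfs list -r -o name,mounted,canmount,mountpoint tank
  zfs get mountpoint,canmount tank/data
  zfs mount tank/data      # mount one dataset explicitly and watch for errors
  df -h /tank/data         # on Solaris, "df -F zfs" lists only zfs file systems

If canmount=off, or the mountpoint is set to none or legacy, "zfs mount -a"
skips the dataset without reporting an error, which would match the symptom
described above.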
Hiya,
My S11E server is needed to serve Windows clients. I read a while ago (last
year!) about 'fudging' it so that Everyone has read/write access.
Is it possible for me to lock this down to users? I only have a single user on
my Windows clients and in some cases (htpc) this user is logged on automatically.
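One way to tighten that up on Solaris, sketched with placeholder names (user
"alice", dataset tank/media); treat this as an illustration rather than a recipe:

  zfs set sharesmb=on tank/media
  zfs set aclinherit=passthrough tank/media
  # give the owner and one named user full control and drop the wide-open entry
  /usr/bin/chmod A=owner@:full_set:file_inherit/dir_inherit:allow,user:alice:full_set:file_inherit/dir_inherit:allow /tank/media

The Windows-side user still needs a matching local account (or an idmap rule)
so it authenticates as "alice" rather than falling through to Everyone.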
On Fri, Aug 12, 2011 at 3:05 PM, Vikash Gupta wrote:
> I used the df command and it's not showing the zfs file system in the list.
>
> zfs mount -a does not return any error.
First of all, please check whether you're posting to the right place.
zfs-discuss@opensolaris.org, as the name implies, mostly r
Which 320 series drive are you targeting, specifically? The ~$100 80GB variant
should perform as well as the more expensive versions if your workload is more
random from what I've seen/read.
Hello
Well I have got to the bottom of it (sort of).
I have a shared IRQ, but this is not the problem.
The controller is working, either with the LSI firmware 1.26 or the Sun
firmware 1.28 (no setup, no config, it just works).
All the drives are healthy; they are an odd mix of sizes and types.
Hi All,
I have turned off dedup on the pool and tried to delete the old snapshot.
It took 2 to 3 hours to destroy the snapshot.
So now I have deleted all the old snapshots, created a new one, and tried to
destroy it; this time it finished within a few seconds :-)
# time zfs destroy tank/h...@dail
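For reference, the commands involved look roughly like this (pool and snapshot
names are placeholders, not the ones from the output above):

  zfs get dedup tank                      # confirm dedup is now off
  zpool status -D tank                    # -D prints dedup table (DDT) statistics
  zfs list -t snapshot -r tank            # list the snapshots before destroying any
  time zfs destroy tank/data@2011-08-12   # time a single snapshot destroy

Destroying a snapshot that still references deduplicated blocks has to update
the DDT for every block it frees, which is why the old snapshots took hours
while a fresh, non-deduplicated one is gone in seconds.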
On Thu, Aug 11, 2011 at 09:17:38PM -0700, Cooper Hubbell wrote:
> Which 320 series drive are you targeting, specifically? The ~$100
> 80GB variant should perform as well as the more expensive versions if
> your workload is more random from what I've seen/read.
The workload is ESX NFS-attached datastore activity.
> # uname -a
>
> Linux testbox 2.6.18-194.el5 #1 SMP Tue Mar 16 21:52:39 EDT 2010
> x86_64 x86_64 x86_64 GNU/Linux
>
> # rpm -qa|grep zfs
> zfs-test-0.5.2-1
> zfs-modules-0.5.2-1_2.6.18_194.el5
> zfs-0.5.2-1
> zfs-modules-devel-0.5.2-1_2.6.18_194.el5
> zfs-devel-0.5.2-1
This smells like an old r
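A few quick checks to see what is actually running, assuming a zfs-on-linux
style install (these are generic commands, not something from the original
report):

  lsmod | grep zfs                 # is the module loaded at all?
  modinfo zfs | grep -i version    # version of the module on disk
  grep zfs /proc/mounts            # any zfs file systems currently mounted?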
Hi all
We've migrated from an old samba installation to a new box with openindiana,
and it works well, but... It seems Windows now honours the executable bit, so
that .exe files for installing packages are no longer directly executable.
While it is positive that Windows honours this bit, it br
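A sketch of the usual workaround, assuming the share lives at /tank/share (the
path and the ACL entry are illustrative, not from the post):

  # set the execute bit on the existing installer files (repeat for *.EXE, *.msi as needed)
  find /tank/share -type f -name '*.exe' -exec chmod a+x {} +
  # add an inheritable ACL entry so files copied in later also get read+execute
  /usr/bin/chmod A+everyone@:read_data/execute:file_inherit/dir_inherit:allow /tank/share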
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Ray Van Dolson
>
> For ZIL, I
> suppose we could get the 300GB drive and overcommit to 95%!
What kind of benefit does that offer? I suppose, if you have a 300G drive
and the OS can only see 3
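For what it's worth, over-provisioning a log SSD usually just means giving ZFS
a small slice and leaving the rest of the device unallocated; the device and
pool names below are made up for illustration:

  # after creating a small slice (say ~15 GB as s0) with format, add it as a
  # dedicated log device and leave the remaining space untouched
  zpool add tank log c5t0d0s0
  zpool status tank     # the slice appears under its own "logs" section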
On 08/13/11 01:53 PM, Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Ray Van Dolson
>> For ZIL, I
>> suppose we could get the 300GB drive and overcommit to 95%!
> What kind of benefit does that offer? I suppose, if you hav