Box running osol_133 with smb/server enabled. I create a file on a Windows box
that has a remote ZFS fs mounted. I go to the Solaris box and try to remove the
file and get "permission denied" for up to 30 sec. Then it works. A "sync"
immediately before the rm seems to speed things up and the rm is successful.
I may be wrong, but I think it would depend on how you have your ACLs set up
and whether or not ACL inheritance is on.
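(For what it's worth, and not from the original reply: on OpenSolaris you can inspect the NFSv4 ACL on the file and the dataset's inheritance settings directly; the file and dataset names below are just placeholders.)
$ ls -v /tank/share/file.txt             # full ACL entries on the file
$ zfs get aclinherit,aclmode tank/share  # how new files inherit/keep ACLs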
On Sun, Feb 21, 2010 at 5:46 AM, Peter Radig wrote:
> Box running osol_133 with smb/server enabled. I create a file on a Windows
> box that has a remote ZFS fs mounted. I go to the S
Hello.
I got an idea: how about creating a ramdisk, making a pool out of it,
then making compressed zvols and adding those as L2ARC? Instant compressed
ARC ;)
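A rough sketch of what that would look like, with made-up names and sizes (untested):
$ ramdiskadm -a rpad0 2g                                    # create a 2 GB ramdisk
$ zpool create rampool /dev/ramdisk/rpad0                   # build a pool on it
$ zfs create -V 1800m -o compression=gzip rampool/l2vol     # compressed zvol
$ zpool add tank cache /dev/zvol/dsk/rampool/l2vol          # add it as L2ARC to "tank"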
So I did some tests with secondarycache=metadata...
               capacity     operations    bandwidth
pool         used  avail   read
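(For context, secondarycache is a per-dataset property; a minimal sketch with a placeholder pool name:)
$ zfs set secondarycache=metadata tank       # L2ARC holds metadata only
$ zfs get primarycache,secondarycache tank   # 'all' (default), 'metadata' or 'none'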
On 20.02.10 03:22, Tomas Ögren wrote:
On 19 February, 2010 - Christo Kutrovsky sent me these 0,5K bytes:
How do you tell how much of your l2arc is populated? I've been looking for a
while now, can't seem to find it.
Must be easy, as this blog entry shows it over time:
http://blogs.sun.com/b
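One way to check it directly (not mentioned in that post; stat names from memory) is the ARC kstats:
$ kstat -p zfs:0:arcstats:l2_size       # bytes of data currently in the L2ARC
$ kstat -p zfs:0:arcstats:l2_hdr_size   # ARC memory spent on L2ARC headers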
On 21 February, 2010 - Felix Buenemann sent me these 0,7K bytes:
> On 20.02.10 03:22, Tomas Ögren wrote:
>> On 19 February, 2010 - Christo Kutrovsky sent me these 0,5K bytes:
>>> How do you tell how much of your l2arc is populated? I've been looking for
>>> a while now, can't seem to find it.
>
On Feb 21, 2010, at 9:18 AM, Tomas Ögren wrote:
> On 21 February, 2010 - Felix Buenemann sent me these 0,7K bytes:
>
>> On 20.02.10 03:22, Tomas Ögren wrote:
>>> On 19 February, 2010 - Christo Kutrovsky sent me these 0,5K bytes:
How do you tell how much of your l2arc is populated? I've bee
I don't see why this couldn't be extended beyond metadata (+1 for the
idea): if a zvol is compressed, the ARC/L2ARC could store compressed data.
The gain is apparent: if a user has compression enabled for the volume,
he/she expects the volume's data to be compressible at a good ratio,
yielding a significant reduction
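As a rough illustration, the ratio actually achieved is already visible per dataset (dataset name is a placeholder):
$ zfs get compression,compressratio,volsize tank/vol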
On Thu, Feb 18, 2010 at 16:03, Ethan wrote:
> On Thu, Feb 18, 2010 at 15:31, Daniel Carosone wrote:
>
>> On Thu, Feb 18, 2010 at 12:42:58PM -0500, Ethan wrote:
>> > On Thu, Feb 18, 2010 at 04:14, Daniel Carosone wrote:
>> > Although I do notice that right now, it imports just fine using the p0
On 21 February, 2010 - Richard Elling sent me these 1,3K bytes:
> On Feb 21, 2010, at 9:18 AM, Tomas Ögren wrote:
>
> > On 21 February, 2010 - Felix Buenemann sent me these 0,7K bytes:
> >
> >> On 20.02.10 03:22, Tomas Ögren wrote:
> >>> On 19 February, 2010 - Christo Kutrovsky sent me these 0
Working from a remote linux machine on a zfs fs that is an nfs mounted
share (set for nfs availability on zfs server, mounted nfs on linux);
I've been noticing a certain kind of sloth when messing with files.
What I see: After writing a file it seems to take the fs too long to
be able to display
I thought this was simple. Turns out not to be.
bash-3.2$ zfs list -t snapshot zp1
cannot open 'zp1': operation not applicable to datasets of this type
Fails equally on all the variants of pool name that I've tried,
including "zp1/" and "zp1/@" and such.
You can do "zfs list -t snapshot" and
Try:
zfs list -r -t snapshot zp1
--
Dave
On 2/21/10 5:23 PM, David Dyer-Bennet wrote:
I thought this was simple. Turns out not to be.
bash-3.2$ zfs list -t snapshot zp1
cannot open 'zp1': operation not applicable to datasets of this type
Fails equally on all the variants of pool name that I
On Feb 21, 2010, at 7:47 PM, Harry Putnam wrote:
>
> Working from a remote linux machine on a zfs fs that is an nfs mounted
> share (set for nfs availability on zfs server, mounted nfs on linux);
> I've been noticing a certain kind of sloth when messing with files.
>
> What I see: After writin
On 2/21/2010 7:33 PM, Dave wrote:
Try:
zfs list -r -t snapshot zp1
I hate to sound ungrateful, but what you suggest I try is something that
I listed in my message as having *already* tried.
Still down there in the quotes, you can see it. I listed two ways to
get a superset of what I wanted
Not quite where you were looking, but there is always:
$ ls /my/data/set/.zfs/snapshot
--
Dan.
Hi, any idea why ZFS does not dedup files with this format?
file /opt/XXX/XXX/data
VAX COFF executable - version 7926
On Mon, 22 Feb 2010, Henrik Johansson wrote:
You will not see the on-disk size of the file with du before the transaction
group has been committed, which can take up to 30 seconds. ZFS does not even
know how much space it will consume before writing out the data to disk, since
compression might
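A minimal illustration of that delay (paths and sizes made up; sync pushes the open transaction group to disk):
$ dd if=/dev/urandom of=/tank/fs/bigfile bs=1024k count=100
$ du -h /tank/fs/bigfile    # may report much less than the final on-disk size
$ sync
$ du -h /tank/fs/bigfile    # now reflects the committed on-disk size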
> Hi, any idea why ZFS does not dedup files with this format?
> file /opt/XXX/XXX/data
> VAX COFF executable - version 7926
With dedup enabled, ZFS will identify and remove duplicates regardless of the
data format.
Adam
--
Adam Leventhal, Fishworks          http://blogs.sun.com/ah
Hello out there,
is there any progress in shrinking zpools?
i.e. removing vdevs from a pool?
Cheers,
Ralf
I have these exact same files placed under zfs with dedup enabled:
-rw-r- 1 root root 6800152 Feb 21 20:17 /mypool/test1/data
-rw-r- 1 root root 6800152 Feb 21 20:17 /mypool/test2/data
~ # zpool list
NAME     SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
mypool 1.98T 14.7G 1.97T
Never mind. I just tried larger files and now it does show a dedup-% status
change.
-rw-r- 1 root root 2097157724 Feb 21 20:58 /mypool/test1/data
-rw-r- 1 root root 2097157724 Feb 21 20:58 /mypool/test2/data
NAME     SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
mypool 1.98T 16.8G
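For a closer look than the rounded DEDUP column, the pool-wide ratio and the dedup table itself can be inspected (pool name as above; zdb output details vary by build):
$ zpool get dedupratio mypool
$ zdb -DD mypool    # DDT statistics and refcount histogram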
I am having a bit of an issue. I have an OpenSolaris box set up as a fileserver,
running CIFS to provide shares to some Windows machines.
Now let's call my zpool /tank1: when I create a zfs filesystem called /test it
gets shared as /test and I can see it as "test" on my Windows machines...
It doesn't work with CIFS. There has been an open RFE on that for quite some
time now.
Peter
On 22.02.2010, at 08:09, "Tau" wrote:
> I am having a bit of an issue. I have an OpenSolaris box set up as a
> fileserver, running CIFS to provide shares to some Windows
> machines.
>
> Now let's
On 2/21/10 11:08 PM -0800 Tau wrote:
I am having a bit of an issue. I have an OpenSolaris box set up as a
fileserver, running CIFS to provide shares to some Windows
machines.
Now let's call my zpool /tank1,
Let's not, because '/' is an illegal character in a zpool name.
when I create