Hi,
I reinstalled our Solaris 10 box using the latest update available.
However, I could not upgrade the zpool:
bash-3.00# zpool upgrade -v
This system is currently running ZFS version 4.
The following versions are supported:
VER DESCRIPTION
--- --
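(A minimal sketch of the usual upgrade path; the pool name store1 is only an example borrowed from a later post, and note that a pool cannot be rolled back to an older on-disk version once upgraded:)
bash-3.00# zpool upgrade          # list pools still running an older on-disk version
bash-3.00# zpool upgrade store1   # upgrade one pool to the newest version this release supports
bash-3.00# zpool upgrade -a       # or upgrade every pool on the system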
Folks,
I am running into an issue with a quota-enabled ZFS file system. I tried to check
out the ZFS properties but could not figure out a workaround.
I have a file system /data/project/software which has a 250G quota set. There
are no snapshots enabled for this system. When the quota is reached on this,
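(For reference, a quick way to watch how close that dataset is to its 250G quota; the dataset name here is assumed to mirror the mountpoint /data/project/software:)
bash-3.00# zfs get quota,used,available data/project/software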
All,
I assume this issue is pretty old given how long ZFS has been around. I have
tried searching the list but could not work out how
ZFS actually takes snapshot space into account.
I have a user, walter, on whose file system I try the following ZFS operations:
bash-3.00# zfs get quo
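(A small sketch for seeing how much space the snapshots of a user's file system hold; the dataset name pool/home/walter is only an assumption:)
bash-3.00# zfs list -r -t snapshot -o name,used,referenced pool/home/walter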
Hi Robert,
Thanks, it worked like a charm.
--Walter
On Dec 7, 2007 7:33 AM, Robert Milkowski <[EMAIL PROTECTED]> wrote:
> Hello Walter,
>
>
> Thursday, December 6, 2007, 7:05:54 PM, you wrote:
>
> Hi All,
>
> We are currently having a hardware issue with our zfs file server, hence the file
> s
Hi All,
We are currently having a hardware issue with our zfs file server, hence the file
system is unusable.
We are planning to move it to a different system.
The setup on the file server when it was running was:
bash-3.00# zpool status
pool: store1
state: ONLINE
scrub: none requested
config:
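(If the disks themselves are still good, the usual way to move a pool between hosts is export/import; a minimal sketch, assuming the array can be physically attached to the new machine:)
bash-3.00# zpool export store1   # on the old host, if it will still boot
bash-3.00# zpool import          # on the new host, scan for importable pools
bash-3.00# zpool import store1   # add -f if the pool was never cleanly exported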
Hi Lukas,
The system that we use for ZFS is Solaris 10 Update 3 on SPARC.
I assume all the scripts you gave have to be run on the NFS/ZFS server
and not on any client.
Thanks,
--Walter
On Nov 8, 2007 2:34 AM, Łukasz K <[EMAIL PROTECTED]> wrote:
> On 8-11-2007 at 7:58, Walte
enough RAM available on machine.
> Check ::kmastat in mdb.
> Space map uses kmem_alloc_40 (on thumpers this is a real problem)
>
> Workaround:
> 1. first you can change pool recordsize
> zfs set recordsize=64K POOL
>
> Maybe you will have to use 32K or even 16K
>
>
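(A quick way to check the kmem cache Łukasz refers to, run as root on the ZFS/NFS server itself:)
bash-3.00# echo ::kmastat | mdb -k | grep kmem_alloc_40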
Hi,
We have a zfs file system configured using a Sunfire 280R with a 10T
Raidweb array
bash-3.00# zpool list
NAME    SIZE   USED   AVAIL  CAP  HEALTH  ALTROOT
filer   9.44T  6.97T  2.47T  73%  ONLINE  -
bash-3.00# zpool status
pool: backup
Hi,
I have a ZFS file system that consists of a
Sunfire V280R + 10T of attached Raidweb array.
bash-3.00# zpool status
pool: filer
state: ONLINE
scrub: none requested
config:
NAME      STATE   READ WRITE CKSUM
backup    ONLINE     0     0     0
  c1t2d1  ONL
Hi,
We have implemented a ZFS file system for home directories and have enabled
it with quotas + snapshots. However, the snapshots are causing an issue with
the user quotas. The snapshots appear under
~username/.zfs/snapshot, which is part of the user's file system. So if the
quota is 10G an
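(On later ZFS versions the usual workaround is refquota, which limits only the data the file system itself references, so snapshot space no longer counts against the user's 10G; this is a sketch only, it needs a newer pool version than the Update 3 system mentioned earlier, and the dataset name is an example:)
bash-3.00# zfs set refquota=10G pool/home/username
bash-3.00# zfs get refquota,quota pool/home/username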