I upgraded a server today that has been running SXCE b111 to the
OpenSolaris preview b134. It has three pools and two are fine, but one
comes up with no space available in the pool (SCSI jbod of 300GB disks).
The zpool version is at 14.
I tried exporting the pool and re-importing and I get se
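For reference, the export/re-import was just the usual sequence ("tank" below is a
stand-in, since I didn't name the pool above):

  # zpool export tank
  # zpool import tank
  # zpool list tank

The zpool list at the end is only there to check the reported capacity afterwards.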
We'll be in touch.
Thanks,
Cindy
On 06/17/10 07:02, Ben Miller wrote:
I upgraded a server today that has been running SXCE b111 to the
OpenSolaris preview b134. It has three pools and two are fine, but
one comes up with no space available in the pool (SCSI jbod of 300GB
disks). The zpool
I have an X4540 running b134 where I'm replacing 500GB disks with 2TB disks
(Seagate Constellation) and the pool seems sick now. The pool has four
raidz2 vdevs (8+2) where the first set of 10 disks were replaced a few
months ago. I replaced two disks in the second set (c2t0d0, c3t0d0) a
coupl
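Each swap was done with the usual replace-in-place sequence, roughly as below ("tank"
is a stand-in for the pool name; the device is one of the slots mentioned above):

  # zpool replace tank c2t0d0
  # zpool status tank

That is, pull the old 500GB disk, seat the 2TB disk in the same slot, run the
replace, and watch the resilver in zpool status.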
On 09/20/10 10:45 AM, Giovanni Tirloni wrote:
On Thu, Sep 16, 2010 at 9:36 AM, Ben Miller <bmil...@mail.eecis.udel.edu> wrote:
I have an X4540 running b134 where I'm replacing 500GB disks with 2TB
disks (Seagate Constellation) and the pool seems sick now. The pool
On 09/21/10 09:16 AM, Ben Miller wrote:
On 09/20/10 10:45 AM, Giovanni Tirloni wrote:
On Thu, Sep 16, 2010 at 9:36 AM, Ben Miller <bmil...@mail.eecis.udel.edu> wrote:
I have an X4540 running b134 where I'm replacing 500GB disks with 2TB
disks (Seagate Constellation) and the
On 09/22/10 04:27 PM, Ben Miller wrote:
On 09/21/10 09:16 AM, Ben Miller wrote:
I had tried a clear a few times with no luck. I just did a detach and that
did remove the old disk and has now triggered another resilver which
hopefully works. I had tried a remove rather than a detach before
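For the archives, the commands in question were of this form ("tank" is a stand-in
for the pool name; c2t0d0 is one of the disks mentioned earlier):

  # zpool clear tank            (no effect here)
  # zpool remove tank c2t0d0    (no joy; on b134 remove only handles spares, cache and log devices)
  # zpool detach tank c2t0d0    (this is what removed the old disk and kicked off the new resilver)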
This post from close to a year ago never received a response. We just had this
same thing happen to another server that is running Solaris 10 U6. One of the
disks was marked as removed and the pool degraded, but 'zpool status -x' says
all pools are healthy. After doing a 'zpool online' on th
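The online step was of this form (pool and device names below are placeholders):

  # zpool online pool1 c1t2d0
  # zpool status pool1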
I just put in a (low priority) bug report on this.
Ben
> This post from close to a year ago never received a
> response. We just had this same thing happen to
> another server that is running Solaris 10 U6. One of
> the disks was marked as removed and the pool
> degraded, but 'zpool status -x'
Bug ID is 6793967.
This problem just happened again.
% zpool status pool1
  pool: pool1
 state: DEGRADED
 scrub: resilver completed after 0h48m with 0 errors on Mon Jan 5 12:30:52 2009
config:

        NAME        STATE     READ WRITE CKSUM
        pool1       DEGRADED     0     0     0
> I'm just curious - I had a similar situation which seems to be resolved
> now that I've gone to Solaris 10u6 or OpenSolaris 2008.11).
>
>
>
> On Wed, Jan 21, 2009 at 2:11 PM, Ben Miller
> wrote:
> > Bug ID is 6793967.
> >
> > This problem just hap
e]/[filesystem(s)]'
>
> What does 'zfs upgrade' say? I'm not saying this is the source of
> your problem, but it's a detail that seemed to affect stability for
> me.
>
>
> On Thu, Jan 22, 2009 at 7:25 AM, Ben Miller
> >
cannot unmount '/var/mysql': Device busy
cannot unmount '/var/postfix': Device busy
6 filesystems upgraded
821 filesystems already at this version
Ben
> You can upgrade live. 'zfs upgrade' with no arguments shows you the
> zfs version status of filesystems present w
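For anyone following along, the usage being described is roughly:

  # zfs upgrade                (lists filesystems not yet at the current version; changes nothing)
  # zfs upgrade -a             (upgrades everything it can; busy filesystems fail as shown above)
  # zfs upgrade -r tank/home   (or limit the upgrade to one subtree; the name is a placeholder)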
# zpool status -xv
all pools are healthy
Ben
> What does 'zpool status -xv' show?
>
> On Tue, Jan 27, 2009 at 8:01 AM, Ben Miller
> wrote:
> > I forgot the pool that's having problems was recreated recently so
> > it's already at zfs version 3. I
I have an Ultra 10 client running Sol10 U3 that has a zfs pool set up on the
extra space of the internal ide disk. There's just the one fs and it is shared
with the sharenfs property. When this system reboots nfs/server ends up
getting disabled and this is the error from the SMF logs:
[ Apr 1
It does seem like an ordering problem, but nfs/server should be starting up
late enough with SMF dependencies. I need to see if I can duplicate the
problem on a test system...
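A quick way to compare the declared ordering, in case anyone wants to check their
own box:

  # svcs -d nfs/server    (what nfs/server waits on before starting)
  # svcs -l nfs/server    (long listing, including its dependencies and their states)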
I just rebooted this host this morning and the same thing happened again. I
have the core file from zfs.
[ Apr 26 07:47:01 Executing start method ("/lib/svc/method/nfs-server start") ]
Assertion failed: pclose(fp) == 0, file ../common/libzfs_mount.c, line 380, function zfs_share
Abort - core dumped
I was able to duplicate this problem on a test Ultra 10. I put in a workaround
by adding a service that depends on /milestone/multi-user-server which does a
'zfs share -a'. It's strange this hasn't happened on other systems, but maybe
it's related to slower systems...
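The method script behind that workaround service amounts to little more than the
share itself; a minimal sketch (the manifest, not shown, just declares the dependency
on svc:/milestone/multi-user-server and points its start method at this script):

  #!/sbin/sh
  # re-share all ZFS filesystems once multi-user-server is up
  . /lib/svc/share/smf_include.sh

  /usr/sbin/zfs share -a || exit $SMF_EXIT_ERR_FATAL
  exit $SMF_EXIT_OK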
Ben
I just threw in a truss in the SMF script and rebooted the test system and it
failed again.
The truss output is at http://www.eecis.udel.edu/~bmiller/zfs.truss-Apr27-2007
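The truss was captured by wrapping the share step in the method script with something
along these lines (the output path is just where I happened to put it):

  truss -f -o /tmp/zfs.truss /usr/sbin/zfs share -a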
thanks,
Ben
We have around 1000 users all with quotas set on their ZFS filesystems on
Solaris 10 U3. We take snapshots daily and rotate out the week old ones. The
situation is that some users ignore the advice of keeping space used below 80%
and keep creating large temporary files. They then try to remov
Has anyone else run into this situation? Does anyone have any solutions other
than removing snapshots or increasing the quota? I'd like to put in an RFE to
reserve some space so files can be removed when users are at their quota. Any
thoughts from the ZFS team?
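A hypothetical illustration of the trap (names and sizes made up):

  # zfs create -o quota=1g tank/home/jdoe
  # mkfile 900m /tank/home/jdoe/scratch
  # zfs snapshot tank/home/jdoe@daily
  # rm /tank/home/jdoe/scratch

The rm gives nothing back to the user since the snapshot still references the blocks,
and once the filesystem is hard against its quota the remove itself can fail with a
quota error, which is the bind these users end up in.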
Ben
> We have around 1000 use
> > > Hello Matthew,
> > > Tuesday, September 12, 2006, 7:57:45 PM, you wrote:
> > > MA> Ben Miller wrote:
> > > >> I had a strange ZFS problem this morning. The entire system would
> > > >> hang when mounting the Z
We run a cron job that does a 'zpool status -x' to check for any degraded
pools. We just happened to find a pool degraded this morning by running 'zpool
status' by hand and were surprised that it was degraded as we didn't get a
notice from the cron job.
# uname -srvp
SunOS 5.11 snv_78 i386
#
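The cron job is essentially the following sketch (the real one sends us a notice; the
recipient here is a placeholder):

  #!/bin/sh
  # complain unless 'zpool status -x' reports that everything is healthy
  status=`/usr/sbin/zpool status -x`
  host=`/usr/bin/hostname`
  if [ "$status" != "all pools are healthy" ]; then
          echo "$status" | /usr/bin/mailx -s "zpool problem on $host" root
  fi

So a degraded pool that 'zpool status -x' still calls healthy slips right past it,
which is what happened here.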
I had a strange ZFS problem this morning. The entire system would hang when
mounting the ZFS filesystems. After trial and error I determined that the
problem was with one of the 2500 ZFS filesystems. When mounting that users'
home the system would hang and need to be rebooted. After I remove
> Hello Matthew,
> Tuesday, September 12, 2006, 7:57:45 PM, you wrote:
> MA> Ben Miller wrote:
> >> I had a strange ZFS problem this morning. The entire system would
> >> hang when mounting the ZFS filesystems. After trial and error I
> >> de
> > Hello Matthew,
> > Tuesday, September 12, 2006, 7:57:45 PM, you wrote:
> > MA> Ben Miller wrote:
> > >> I had a strange ZFS problem this morning. The entire system would
> > >> hang when mounting the ZFS filesystems. After