> By the way, is there a way to view just the responses that have accumulated in
> this forum since I last visited - or just those I've never looked at before?
Not through the web interface itself, as far as I can tell, but there's an RSS
feed of messages that might do the trick. Unfortunately
On September 15, 2006 3:49:14 PM -0700 "can you guess?"
<[EMAIL PROTECTED]> wrote:
(I looked at my email before checking here, so I'll just cut-and-paste the
email response in here rather than send it. By the way, is there a way to view
just the responses that have accumulated in this forum since I last visited -
or just those I've never looked at before?)
Bill Moore wrote:
Yes sir:
[EMAIL PROTECTED]:/
# zpool status -v fserv
  pool: fserv
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress, 5.90% d
On Fri, Sep 15, 2006 at 01:26:21PM -0700, Tim Cook wrote:
> says it's online now so I can only assume it's working. Doesn't seem
> to be reading from any of the other disks in the array though. Can it
> resilver without traffic to any other disks? /noob
Can you send the output of "zpool status -v
Quoth Darren J Moffat on Fri, Sep 08, 2006 at 01:59:16PM +0100:
> Nicolas Dorfsman wrote:
> > Regarding "system partitions" (/var, /opt, all mirrored + alternate
> > disk), what would be YOUR recommendations ? ZFS or not ?
>
> /var for now must be UFS since Solaris 10 doesn't have ZF
says it's online now so I can only assume it's working. Doesn't seem to be
reading from any of the other disks in the array though. Can it resilver without
traffic to any other disks? /noob
hrmm... "cannot replace c5d0 with c5d0: cannot replace a replacing device"
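A hedged aside on that error: it usually means an earlier, interrupted replace is
still represented as a "replacing" group inside the pool, so the options are to
let its resilver finish or to detach the stale half first. A rough sketch, where
the device name/GUID below is a placeholder rather than verified output:

    # look for a "replacing" group under the affected vdev
    zpool status -v fserv
    # detach the stale half, using the name or GUID exactly as zpool status prints it
    zpool detach fserv c5d0/old
    # then retry the replace if it is still needed
    zpool replace fserv c5d0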
On Fri, Sep 15, 2006 at 01:10:25PM -0700, Tim Cook wrote:
> the status showed 19.46% the first time I ran it, then 9.46% the
> second. The question I have is I added the new disk, but it's showing
> the following:
>
> Device: c5d0
> Storage Pool: fserv
> Type: Disk
> Device State: Faulted (cannot
the status showed 19.46% the first time I ran it, then 9.46% the second. The
question I have is I added the new disk, but it's showing the following:
Device: c5d0
Storage Pool: fserv
Type: Disk
Device State: Faulted (cannot open)
The disk is currently unpartitioned and unformatted. I was under
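For what it's worth, a minimal sketch of the usual follow-up for a whole-disk
replacement, assuming the new drive really is visible to the OS: ZFS writes its
own EFI label when given the whole disk, so the disk being unpartitioned and
unformatted is normally not a problem in itself.

    # rebuild onto the new disk in place (pool and device names from this thread)
    zpool replace fserv c5d0
    # watch the resilver progress
    zpool status -v fserv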
On Fri, Sep 15, 2006 at 12:43:19PM -0700, Tim Cook wrote:
> Being resilvered 444.00 GB 168.21 GB 158.73 GB
>
> Just wondering if anyone has any rough guesstimate of how long this
> will take? It's 3x1200JB ata drives and one Seagate SATA drive. The
> SATA drive is the one that w
s10u2, once zoned, always zoned? i see that zoned property is not
cleared after removing the dataset from a zone cfg or even
uninstalling the entire zone... [right, i know how to clear it by
hand, but maybe i am missing a bit of magic otherwise anodyne
zonecfg et al.]
oz
--
ozan s. yigit | [EMAIL
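For reference, the "by hand" fix being alluded to is a one-liner; the dataset
name below is made up:

    # clear the property left behind after the dataset was removed from the zone
    zfs set zoned=off mypool/zonedata
    # after which it can be mounted in the global zone again
    zfs mount mypool/zonedata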
Being resilvered  444.00 GB  168.21 GB  158.73 GB
Just wondering if anyone has a rough guesstimate of how long this will take?
It's 3x1200JB ATA drives and one Seagate SATA drive. The SATA drive is the one
that was replaced. Any idea? As in, 5 hours?
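There is no universal answer, but a back-of-the-envelope estimate is easy if you
note the "resilvered so far" figure from two zpool status runs taken a known
interval apart. A sketch, where the interval and the first sample are made up and
the remaining numbers come from this message:

    #!/bin/sh
    total=444.00      # GB being rebuilt
    done_then=150.00  # GB resilvered at the first check (hypothetical)
    done_now=158.73   # GB resilvered at the second check
    hours=0.5         # hours between the two checks (hypothetical)
    echo "$total $done_then $done_now $hours" | awk \
        '{ rate = ($3 - $2) / $4; printf "about %.1f hours to go\n", ($1 - $3) / rate }'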
Not sure if this is a bug, or desired behavior, but it doesn't seem
right to me, and a possible admin headache.
bash-3.00# zfs create pool/test2
bash-3.00# zfs create pool/test2/blah
On another box (in this case it was a Linux box), mount the first filesystem.
[EMAIL PROTECTED] systemtap]# mo
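If the surprise here is that the child filesystem's contents do not show up on
the Linux client, a hedged sketch of the usual workaround follows: each ZFS
dataset is a separate NFS export, so the client has to mount each one explicitly
(server name and mount points below are made up):

    # Solaris side: share the parent; pool/test2/blah inherits sharenfs
    zfs set sharenfs=on pool/test2
    # Linux side: the parent mount only shows an empty directory for the child,
    # so mount the child dataset as well
    mount server:/pool/test2      /mnt/test2
    mount server:/pool/test2/blah /mnt/test2/blah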
It is highly likely you are seeing a duplicate of:
6413510 zfs: writing to ZFS filesystem slows down fsync() on
other files in the same FS
which was fixed recently in build 48 on Nevada.
The symptoms are very similar. That is, an fsync from vi would, prior
to the bug being fixed, have
Mika Borner wrote:
-The mechanism to asynchronously replicate to another host could be
simulated using zfs send/receive. Still, I would prefer replication
that is triggered automatically, the way Sun's StorEdge Network Data
Replicator does it for UFS. This could be easily implemented in
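A minimal sketch of that send/receive simulation, driven from cron; the pool,
dataset, snapshot names and standby host are all made up, and an initial full
send is assumed to have already seeded the target:

    #!/bin/sh
    # take a new snapshot and ship only the delta since the last replicated one
    now=rep-`date +%Y%m%d%H%M`
    zfs snapshot archive/worm@$now
    zfs send -i archive/worm@rep-prev archive/worm@$now | \
        ssh standbyhost /usr/sbin/zfs receive archive/worm
    # roll the "previous" marker forward for the next run
    zfs destroy archive/worm@rep-prev
    zfs rename archive/worm@$now archive/worm@rep-prev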
On Fri, Sep 15, 2006 at 01:23:31AM -0700, can you guess? wrote:
> Implementing it at the directory and file levels would be even more
> flexible: redundancy strategy would no longer be tightly tied to path
> location, but directories and files could themselves still inherit
> defaults from the fil
On Fri, Sep 15, 2006 at 10:55:48AM -0500, Nicolas Williams wrote:
> On Fri, Sep 15, 2006 at 09:31:04AM +0100, Ceri Davies wrote:
> > On Thu, Sep 14, 2006 at 05:08:18PM -0500, Nicolas Williams wrote:
> > > Yes, but the checksum is stored with the pointer.
> > >
> > > So then, for each file/director
How did you get these images onto ZFS? Did you just put them there yourself, or
did you run setup_install_server? When I try to use add_install_client,
if the image is on ZFS, it refuses. How do you get around that?
--Scott
Steffen Weiberle wrote:
I have a jumpstart server where the install images are
On Fri, Sep 15, 2006 at 09:31:04AM +0100, Ceri Davies wrote:
> On Thu, Sep 14, 2006 at 05:08:18PM -0500, Nicolas Williams wrote:
> > Yes, but the checksum is stored with the pointer.
> >
> > So then, for each file/directory there's a dnode, and that dnode has
> > several block pointers to data blo
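One hedged way to see that point on a live system is zdb, which dumps an object's
block pointers together with the checksum recorded in each pointer; the dataset
and object number here are placeholders:

    # dump block pointers (and the checksums embedded in them) for object 5
    zdb -dddddd mypool/myfs 5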
What's the brand and model of the cards?
Luke Scharf wrote:
It sounded to me like he wanted to implement tripwire, but save some
time and CPU power by querying the checksumming-work that was already
done by ZFS.
Nevermind. The e-mail client that I chose to use broke up the thread,
and I didn't see that the issue had already been thor
Matthew Ahrens wrote:
Bady, Brant RBCM:EX wrote:
Actually to clarify - what I want to do is to be able to read the
associated checksums ZFS creates for a file and then store them in an
external system, e.g., an Oracle database most likely
Rather than storing the checksum externally, you could si
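For comparison, an application-level sketch (not the elided suggestion above):
record your own MD5 per file with Solaris digest(1), independent of the
block-level checksums ZFS keeps internally; the archive path and output file are
made up:

    # one "<md5>  <path>" line per file, ready to load into a database
    find /archive -type f -exec ksh -c \
        'print "$(digest -a md5 "$1")  $1"' ksh {} \; >> /var/tmp/archive-md5.txt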
The disks in that Blade 100, are these IDE disks?
The performance problem is probably bug 6421427:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6421427
A fix for the issue was integrated into the Opensolaris 20060904 source
drop (actually closed binary drop):
http://dlc.sun.com/os
Hi forum,
I'm currently playing around a little with ZFS on my workstation.
I created a standard mirrored pool over 2 disk slices.
# zpool status
  pool: mypool
 state: ONLINE
 scrub: none requested
config:
        NAME      STATE     READ WRITE CKSUM
        mypool    ONLINE       0
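For reference, a two-slice mirror like this is typically created along these
lines (the slice names are placeholders, not the poster's actual devices):

    zpool create mypool mirror c0t0d0s4 c0t1d0s4
    zpool status mypool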
Yup, it's almost certain that this is the bug you are hitting.
-Mark
Alan Hargreaves wrote:
I know, bad form replying to myself, but I am wondering if it might be
related to
6438702 error handling in zfs_getpage() can trigger "page not
locked"
Which is marked "fix in progress" with
Hi
We are thinking about moving away from our Magneto-Optical based archive system
(WORM technology). At the moment, we use a volume manager, which virtualizes
the WORMs in the jukebox and presents them as UFS filesystems. The volume
manager automatically does asynchronous replication to an id
On Thu, Sep 14, 2006 at 05:08:18PM -0500, Nicolas Williams wrote:
> On Thu, Sep 14, 2006 at 10:32:59PM +0200, Henk Langeveld wrote:
> > Bady, Brant RBCM:EX wrote:
> > >Part of the archiving process is to generate checksums (I happen to use
> > >MD5), and store them with other metadata about the dig
> On 9/13/06, Matthew Ahrens <[EMAIL PROTECTED]> wrote:
> > Sure, if you want *everything* in your pool to be mirrored, there is no
> > real need for this feature (you could argue that setting up the pool
> > would be easier if you didn't have to slice up the disk though).
>
> Not necessar