Well, I tried some suggested iSCSI tunings to no avail. I did try something else though: I brought up Samba. Copying my Linux 2.2 source tree into the ZFS volume (in other words, SMB->ZFS->iSCSI) did far better, taking a minute to copy 102MB. And that's from a 100MB/sec client.
My original tes
On Tue, May 09, 2006 at 04:56:05PM -0500, Al Hopper wrote:
> While I agree that zfs send is incredibly useful, after reading this post
> I'm asking myself:
>
> a) This already sounds like we're descending the slippery slope of
> 'checkpointing' - which is an incredibly hard problem to solve and
[...]
On Tue, 9 May 2006, Nicolas Williams wrote:
> On Tue, May 09, 2006 at 01:33:33PM -0700, Darren Reed wrote:
> > Eric Schrock wrote:
> >
> > >...
> > >Asynchronous remote replication can be done today with 'zfs send' and
> >'zfs receive', though it needs some more work to be truly useful. It has
>
Wout Mertens <[EMAIL PROTECTED]> wrote:
> >> WOFS lives on a Write once medium, WOFS itself is not write once.
> >>
> >> I would need to check my papers there is a solution.
> >
> > If you unlink the original name/inode entry you can mark it as deleted
> > without actually deleting it, thus le
On 09 May 2006, at 18:09, Nicolas Williams wrote:
On Tue, May 09, 2006 at 05:37:07PM +0200, Joerg Schilling wrote:
Wout Mertens <[EMAIL PROTECTED]> wrote:
Yes, but WOFS is a write-once filesystem. ZFS is read-write. What
happens if you delete the file referenced by the inode-softlinks?
WOFS
On Tue, 9 May 2006, Matthew Ahrens wrote:
> FYI folks, I have implemented "clone promotion", also known as "clone
> swap" or "clone pivot", as described in this bug report:
>
> 6276916 support for "clone swap"
>
> Look for it in an upcoming release...
>
> Here is a copy of PSARC case which i
On Tue, May 09, 2006 at 12:55:34PM -0700, Alan Romeril wrote:
>
> I've set off a scrub to check things; there was no resilver of any
> data on boot, but there's mention of corruption... Is there any way
> of translating this output to filenames? As this is a zfs root, I'd
> like to be absolutely
Yes. What happened is that you had a transient error which resulted in
EIO being returned to the application. We dutifully recorded this fact
in the persistent error log. When you ran a scrub, it verified that the
blocks were in fact still readable, and hence removed them from the
error log. Me
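A minimal sketch of the sequence described above, using the raidpool pool that
appears later in this thread (the exact commands are not quoted in the original
message):

# zpool status -v raidpool   {lists any files recorded in the persistent error log}
# zpool scrub raidpool       {re-reads every block and re-verifies its checksum}
# zpool status -v raidpool   {entries that proved readable have been cleared}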
On Tue, May 09, 2006 at 04:01:53PM -0500, Al Hopper wrote:
> > In the future, we may improve this by "joining" less-than-full ZAP
> > blocks, or by simply special-casing "empty again" or "small again"
> > directories and essentially automatically re-writing them in the most
> > compact form. So fa
On Tue, May 09, 2006 at 04:01:53PM -0500, Al Hopper wrote:
> Does this mean that if I have a zfs filesystem that is
> creating/writing/reading/deleting millions of short-lived files in a day,
> the directory area would keep growing? Or am I missing something?
The space used by a directory i
Matthew Ahrens wrote:
On Tue, May 09, 2006 at 01:33:33PM -0700, Darren Reed wrote:
Eric Schrock wrote:
...
Asynchronous remote replication can be done today with 'zfs send' and
'zfs receive', though it needs some more work to be truly useful. It has
the properties that it doesn't tax
Actually this would explain the behavior: because I have regular snapshots
taken every hour, the scrub keeps restarting, and that is why I am seeing it.
Is there any way to disable scrubs until this is fixed? Or somehow prevent them
from starting? I can certainly add an additional line to a scrip
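One way to stop a scrub that is already running, sketched here with a
hypothetical pool name (the thread does not show which command was actually
suggested):

# zpool scrub -s mypool   {-s stops an in-progress scrub}
# zpool status mypool     {confirm that no scrub is running}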
Well, I did manually start it a few days ago, unless there is some automatic way to
start it... Where could I find out whether it starts automatically or not?
Thanks for the info.. I like that this is a system I have to play with and
experiment on; I would hate for it to be in production and behave li
On Tue, 9 May 2006, Matthew Ahrens wrote:
> On Sun, May 07, 2006 at 11:38:52PM -0700, Darren Dunham wrote:
> > I was doing some tests with creating and removing subdirectories and
> > watching how long that takes. The directory retains the size and
> > performance issues after the files are remov
On Tue, May 09, 2006 at 01:33:33PM -0700, Darren Reed wrote:
> Eric Schrock wrote:
>
> >...
> >Asynchronous remote replication can be done today with 'zfs send' and
> >'zfs receive', though it needs some more work to be truly useful. It has
> >the properties that it doesn't tax local activity, but
This did work and all activity is stopped. :) Thank you.
Chris
On Tue, 9 May 2006, Eric Schrock wrote:
On Tue, May 09, 2006 at 11:04:05AM -0400, Krzys wrote:
Yes, I did run that command but it was quite a few days ago... :( Would it
take that long to complete? I would never imagine it would..
On Tue, May 09, 2006 at 01:33:33PM -0700, Darren Reed wrote:
> Eric Schrock wrote:
>
> >...
> >Asynchronous remote replication can be done today with 'zfs send' and
> >'zfs receive', though it needs some more work to be truly useful. It has
> >the properties that it doesn't tax local activity, but
Eric Schrock wrote:
...
Asynchronous remote replication can be done today with 'zfs send' and
'zfs receive', though it needs some more work to be truly useful. It has
the properties that it doesn't tax local activity, but your data will be
slightly out of sync (depending on how often you sync yo
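A minimal sketch of that kind of asynchronous replication, assuming
hypothetical dataset and host names:

# zfs snapshot tank/data@rep1
# zfs send tank/data@rep1 | ssh backuphost zfs receive backup/data
{later, send only the changes since the previous snapshot}
# zfs snapshot tank/data@rep2
# zfs send -i tank/data@rep1 tank/data@rep2 | ssh backuphost zfs receive backup/data

How far out of sync the remote copy is depends on how often you repeat the
snapshot/send cycle.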
Eh maybe it's not a problem after all, the scrub has completed well...
--a
bash-3.00# zpool status -v
pool: raidpool
state: ONLINE
scrub: scrub completed with 0 errors on Tue May 9 21:10:55 2006
config:
NAME        STATE     READ WRITE CKSUM
raidpool    ONLINE       0     0
FYI folks, I have implemented "clone promotion", also known as "clone
swap" or "clone pivot", as described in this bug report:
6276916 support for "clone swap"
Look for it in an upcoming release...
Here is a copy of PSARC case which is currently under review.
1. Introduction
1.1. Pr
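Once this integrates, usage should look roughly like the following
(hypothetical dataset names, sketched from the case description rather than
from shipped bits):

# zfs snapshot tank/prod@before
# zfs clone tank/prod@before tank/test
{...modify tank/test and decide to keep it...}
# zfs promote tank/test   {tank/test takes over the origin's snapshots; tank/prod becomes a clone of it}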
I'm not sure exactly what happened with my box here, but something caused a
hiccup on multiple SATA disks...
May 9 16:40:33 sol scsi: [ID 107833 kern.warning] WARNING: /[EMAIL
PROTECTED],0/pci10de,[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED]
(ata6):
May 9 16:47:43 sol scsi: [ID 10783
On Sun, May 07, 2006 at 11:38:52PM -0700, Darren Dunham wrote:
> I was doing some tests with creating and removing subdirectories and
> watching how long that takes. The directory retains the size and
> performance issues after the files are removed.
>
> /rootz/test> ls -la .
> total 42372
> drwx
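A minimal reproduction sketch of that test, with hypothetical paths and a
smaller count:

# cd /rootz/test
# i=0; while [ $i -lt 10000 ]; do mkdir d$i; i=$((i+1)); done
# ls -ld .   {the directory has grown to hold the entries}
# i=0; while [ $i -lt 10000 ]; do rmdir d$i; i=$((i+1)); done
# ls -ld .   {the size, and the lookup cost, remain after the entries are gone}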
FYI...
--- Begin Message ---
Hi All,
I am sponsoring the following fast track for myself and Richard Lowe as
part of the OpenSolaris bug sponsor program. It times out on 5/16/2006.
Proposed man page changes are in the case directory, zfs.txt.
thanks,
sarah
*
All ARC Project Materials a
On Tue, 2006-05-09 at 14:06 +0200, Frank Hofmann wrote:
> I second the call for consistency, but think that this means dumping
> partitions/slices from the actual device name. A disk is a disk - one unit
> of storage. How it is subdivided and how/whether the subdivisions are made
> available as
On Tue, 9 May 2006, Bertrand Sirodot wrote:
> I have Solaris Express snv_27 installed on an x86 pc with 4 SATA
> drives. I have 1 zpool called data with 16 ZFS filesystems. From time
> to time, it looks like ZFS falls asleep, i.e. when I do a df -k, it
> takes about a second to list each ZFS fil
Hi,
I have Solaris Express snv_27 installed on an x86 pc with 4 SATA drives. I have
1 zpool called data with 16 ZFS filesystems. From time to time, it looks like
ZFS falls asleep, i.e. when I do a df -k, it takes about a second to list each
ZFS filesystem. If I re-issue the command straight a
On Tue, May 09, 2006 at 03:47:42AM -0700, Daniel Rock wrote:
> found the BugID:
Yep, sorry about that.
6416101 du inside snapshot produces bad sizes and paths
--matt
Darren J Moffat wrote:
Scott Rotondo wrote:
Joseph Kowalski wrote:
This is just a request for elaboration/education. I find reason #1
compelling enough to accept your answer, but I really don't understand
reason #2. Why wouldn't the Solaris audit facility be correct here?
The Solaris aud
On Tue, May 09, 2006 at 05:37:07PM +0200, Joerg Schilling wrote:
> Wout Mertens <[EMAIL PROTECTED]> wrote:
> > On 07 May 2006, at 17:03, Joerg Schilling wrote:
> > > If ZFS did use my concept, you don't have the problems you have
> > > with FAT.
> >
> > Yes, but WOFS is a write-once filesystem. Z
Wout Mertens <[EMAIL PROTECTED]> wrote:
>
> On 07 May 2006, at 17:03, Joerg Schilling wrote:
>
> > Look at my WOFS from 1990... It uses 'gnodes' that include the
> > filename
> > in one single meta data chunk for a file. Hard links are
> > implemented as
> > inode number related soft links (wh
> Yes, I did run that command but it was quite a few
> days ago... :( Would it take
> that long to complete? I would never imagine it
> would... Is there any way to
> stop it?
>
> Chris
>
The really odd part, Chris, is that the scrub indicates it's at 35.37% with 1 hour
and 1 minute left to finis
On Tue, May 09, 2006 at 11:04:05AM -0400, Krzys wrote:
> Yes, I did run that command but it was quite a few days ago... :( Would it
> take that long to complete? I would never imagine it would... Is there any
> way to stop it?
Are you taking regular snapshots? There is currently a bug whereby
s
On 07 May 2006, at 17:03, Joerg Schilling wrote:
Look at my WOFS from 1990... It uses 'gnodes' that include the
filename
in one single meta data chunk for a file. Hard links are
implemented as
inode number related soft links (while symlinks are name related
soft links).
If ZFS did use my
Yes, I did run that command but it was quite a few days ago... :( Would it take
that long to complete? I would never imagine it would... Is there any way to
stop it?
Chris
On Tue, 9 May 2006, Bill Sommerfeld wrote:
On Tue, 2006-05-09 at 09:44, Krzys wrote:
scrub: scrub in progress, 35.37%
On Tue, 2006-05-09 at 09:44, Krzys wrote:
> scrub: scrub in progress, 35.37% done, 1h1m to go
> Any idea what is going on and why there is so much reading going on?
see above. someone must have done a "zpool scrub" recently.
(unfortunate that it doesn't tell you when the scrub started..)
I/O
I am running a ZFS setup on 3 x 300GB HDs, and I see my disk activity going crazy
all the time; is there any reason for it? I have nothing running on this system,
I just set it up for testing purposes. I do replicate data from a different
system once a day through rsync, but that is a quick process an
"Maury Markowitz" <[EMAIL PROTECTED]> wrote:
> I believe xattrs were added to store things just like what we're talking
> about here. Specifically, if I'm not mistaken, many originally used them for
> ACL storage. Now that zfs promotes ACLs to first-class citizens, it seems
> that a reevaluati
On Tue, 9 May 2006, Darren J Moffat wrote:
Paul van der Zwan wrote:
I just booted up Minix 3.1.1 today in Qemu and noticed to my surprise
that it has a disk naming scheme similar to what Solaris uses.
It has c?d?p?s?; note that both p (PC FDISK, I assume) and s are used,
HP-UX uses the same sc
Paul van der Zwan <[EMAIL PROTECTED]> wrote:
> > HP-UX uses the same scheme.
> >
>
> I think any system descending from the old SysV branch has the c?t?d?
> s? naming convention.
> I don't remember which version first used it but as far as I remember
> it was already used in the mid 80's.
I be
Paul van der Zwan wrote:
I just booted up Minix 3.1.1 today in Qemu and noticed to my surprise
that it has a disk naming scheme similar to what Solaris uses.
It has c?d?p?s?; note that both p (PC FDISK, I assume) and s are used,
HP-UX uses the same scheme.
I think any system descending from t
On 9 May 2006, at 11:35, Joerg Schilling wrote:
Darren J Moffat <[EMAIL PROTECTED]> wrote:
Jeff Bonwick wrote:
I personally hate this device naming semantic (/dev/rdsk/c-t-d
not meaning what you'd logically expect it to). (It's a generic
Solaris bug, not a ZFS thing.) I'll see
Ok,
found the BugID:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6416101
and the relevant code:
http://cvs.opensolaris.org/source/diff/on/usr/src/uts/common/fs/zfs/zfs_dir.c?r2=1.7&r1=1.6
So I will wait for snv_39
--
Daniel
Darren J Moffat <[EMAIL PROTECTED]> wrote:
> Jeff Bonwick wrote:
> >
> > I personally hate this device naming semantic (/dev/rdsk/c-t-d
> > not meaning what you'd logically expect it to). (It's a generic
> > Solaris bug, not a ZFS thing.) I'll see if I can get it changed.
> >
Jeff Bonwick wrote:
I personally hate this device naming semantic (/dev/rdsk/c-t-d
not meaning what you'd logically expect it to). (It's a generic
Solaris bug, not a ZFS thing.) I'll see if I can get it changed.
Because almost everyone gets bitten by this.
I've heard lots
> bash-3.00# dd if=/dev/urandom of=/dev/dsk/c1t10d0 bs=1024 count=20480
A couple of things:
(1) When you write to /dev/dsk, rather than /dev/rdsk, the results
are cached in memory. So the on-disk state may have been unaltered.
(2) When you write to /dev/rdsk/c-t-d, without specifying a slic
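For comparison, a sketch of a write that does go straight to disk: use the
raw device and name an explicit slice (the device name here is the one from
the quoted command; the slice is hypothetical):

# dd if=/dev/urandom of=/dev/rdsk/c1t10d0s0 bs=1024 count=20480
{raw device, explicit slice: the write bypasses the block-device cache and
is confined to that slice}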
Hi Tim,
thank you for your comments.
Bringing in SMF is an excellent idea and should make the things admins like to
do much more elegant.
I guess the question here is to find out:
- What degree of canned functionality is needed to address 80% of every admin's
need.
- Who should provide the functi
Just noticed this:
# zfs create scratch/test
# cd /scratch/test
# mkdir d1 d2 d3
# zfs snapshot scratch/[EMAIL PROTECTED]
# cd .zfs/snapshot/snap
# ls
d1   d2   d3
# du -k
1 ./d3
3 .
{so "du" doesn't traverse the other directories 'd1' and 'd2'}
# pwd
/scratch/test/.zfs/snapshot/snap
#