On Tue, Aug 22, 2006 at 06:15:08AM -0700, Tony Galway wrote:
> A question (well let's make it 3 really) – Is vdbench a useful tool when
> testing file system performance of a ZFS file system? Secondly - is ZFS write
> performance really much worse than UFS or VxFS? and Third - what is a good
>
> Filed as 6462690.
>
> If our storage qualification test suite doesn't yet
> check for support of this bit, we might want to get
> that added; it would be useful to know (and gently
> nudge vendors who don't yet support it).
Is either the test suite, or at least a list of what it tests
(which it
Anton B. Rang wrote:
If you issue aligned, full-record write requests, there is a definite advantage
to continuing to set the record size. It allows ZFS to process the write
without the read-modify-write cycle that would be required for the default 128K
record size. (While compression results
Hello Sarah,
Wednesday, August 23, 2006, 12:56:05 AM, you wrote:
SJ> Hi Robert,
SJ> Looks like you are using libumem? And it looks like there is a possible
SJ> memory issue in the libmeta code when we are trying to dlopen it from
SJ> libdiskmgt.
SJ> I think we would have seen this more if it w
Hi Robert,
Looks like you are using libumem? And it looks like there is a possible
memory issue in the libmeta code when we are trying to dlopen it from
libdiskmgt.
I think we would have seen this more if it was happening every time with
u2 bits. Doesn't mean it's not a bug, but looks like it
Hello Roch,
Monday, August 21, 2006, 12:07:02 PM, you wrote:
R> Hi Robert, Maybe this RFE would contribute to alleviate your
R> problem:
R> 6417135 need generic way to dissociate disk or slice from it's
filesystem
R> http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6417135
Y
Hello Anton,
Tuesday, August 22, 2006, 9:53:57 PM, you wrote:
ABR> Filed as 6462690.
ABR> If our storage qualification test suite doesn't yet check for
ABR> support of this bit, we might want to get that added; it would be
ABR> useful to know (and gently nudge vendors who don't yet support it).
Hello Eric,
Tuesday, August 22, 2006, 11:51:55 PM, you wrote:
ES> This looks like a bug in the in-use checking for SVM (?). What build
ES> are you running?
S10 update2 + patches, kernel Generic_118833-20 sparc
ES> In the meantime, you can work around this by setting 'NOINUSE_CHECK' in
ES> you
Hello James,
Tuesday, August 22, 2006, 11:52:37 PM, you wrote:
JCM> Robert Milkowski wrote:
>> Hello zfs-discuss,
>>
>> S10U2 SPARC + patches
>>
>> Generic_118833-20
>>
>> LUNs from 3510 array.
>>
>>
>> bash-3.00# zpool import
>> no pools available to import
>> bash-3.00# z
Hello Robert,
After server restart I got:
bash-3.00# zpool create test c5t600C0FF0098FD535C3D2B900d0
warning: device in use checking failed: No such device
bash-3.00# zpool list
NAME    SIZE    USED    AVAIL    CAP    HEALTH    ALTROOT
test    204G    84.5K
This looks like a bug in the in-use checking for SVM (?). What build
are you running?
In the meantime, you can work around this by setting 'NOINUSE_CHECK' in
your environment to disable in-use checking. Just be careful that
you're not specifying disks which are actually in use, of course ;-)
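For example, a minimal sketch of that workaround (reusing the LUN name from this thread, and assuming the variable merely needs to be present in the environment):
bash-3.00# NOINUSE_CHECK=1; export NOINUSE_CHECK
bash-3.00# zpool create test c5t600C0FF0098FD535C3D2B900d0
bash-3.00# zpool status test
Since this silences the in-use checking entirely, it is worth double-checking the device list before running the create.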
-
Robert Milkowski wrote:
Hello zfs-discuss,
S10U2 SPARC + patches
Generic_118833-20
LUNs from 3510 array.
bash-3.00# zpool import
no pools available to import
bash-3.00# zpool create f3-1 mirror c5t600C0FF0098FD535C3D2B900d0
c5t600C0FF0098FD54CB01E1100d0 mir
Hello zfs-discuss,
S10U2 SPARC + patches
Generic_118833-20
LUNs from 3510 array.
bash-3.00# zpool import
no pools available to import
bash-3.00# zpool create f3-1 mirror c5t600C0FF0098FD535C3D2B900d0
c5t600C0FF0098FD54CB01E1100d0 mirror
c5t600C0FF009
If you issue aligned, full-record write requests, there is a definite advantage
to continuing to set the record size. It allows ZFS to process the write
without the read-modify-write cycle that would be required for the default 128K
record size. (While compression results in records of variable
On Tue, Aug 22, 2006 at 04:04:50PM -0500, Neil A. Wilson wrote:
> Do both compression and fixed record sizes work together?
Yes.
> Our Directory Server uses a fixed page size (8KB by default) for
> database records, so I'm in the habit of setting the ZFS recordsize to
> equal the database page
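As a concrete sketch of the above (pool and filesystem names are hypothetical), matching the recordsize to an 8 KB database page while keeping compression enabled might look like:
bash-3.00# zfs set recordsize=8k tank/directory
bash-3.00# zfs set compression=on tank/directory
bash-3.00# zfs get recordsize,compression,compressratio tank/directory
Note that recordsize only takes effect for files written after the property is changed, so it is best set before the database files are created.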
Constantin Gonzalez wrote:
Hi Eric,
This means that we have one pool with 3 vdevs that access up to 3 different
slices on the same physical disk.
minor correction: 1 pool, 3 vdevs, 3 slices per disk on 4 disks.
Question: Does ZFS consider the underlying physical disks when
loa
Do both compression and fixed record sizes work together?
Our Directory Server uses a fixed page size (8KB by default) for
database records, so I'm in the habit of setting the ZFS recordsize to
equal the database page size. However, we also typically use
compression because it often helps imp
This seems like /etc/dfs/sharetab was somehow corrupted. Basically, ZFS
saw the share there and assumed that it was shared. But then when we
went to unshare it, the in-kernel list of shares showed that it
wasn't actually shared.
If you can reproduce this, please capture the contents of
/etc
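One hedged starting point for capturing that state (dataset name hypothetical, and not necessarily the exact data being asked for here): compare what the sharetab file contains with what share(1M) reports and with the dataset's sharenfs property:
bash-3.00# cat /etc/dfs/sharetab
bash-3.00# share
bash-3.00# zfs get sharenfs tank/export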
Filed as 6462690.
If our storage qualification test suite doesn't yet check for support of this
bit, we might want to get that added; it would be useful to know (and gently
nudge vendors who don't yet support it).
Saw this while writing a script today -- while debugging the script, I was
Ctrl-C-ing it a lot rather than waiting for the zfs create / zfs set commands
to complete. After doing so, my cleanup script failed to zfs destroy the new
filesystem:
[EMAIL PROTECTED]:/ # zfs destroy -f raid/www/user-test
On Tue, Aug 22, 2006 at 11:46:30AM -0700, Anton B. Rang wrote:
> I realized just now that we're actually sending the wrong variant of
> SYNCHRONIZE CACHE, at least for SCSI devices which support SBC-2.
>
> SBC-2 (or possibly even SBC-1, I don't have it handy) added the
> SYNC_NV bit to the command
We're running ZFS with compress=ON on a E2900. I'm hosting SAS/SPDS datasets
(files) on these filesystems and am achieving 1:3.87 (as reported by zfs)
compression. Your mileage will vary depending on the data you are writing. If
your data is already compressed (zip files) then don't expect any p
Just updating the discussion with some email chains. After more digging, this
does not appear to be a version 2 or 3 replication issue. I believe it to be
an invalid named snapshot that causes zpool and zfs commands to core.
Tim mentioned it may be similar to bug 6450219.
I agree it seems s
Bill,
I realized just now that we're actually sending the wrong variant of
SYNCHRONIZE CACHE, at least for SCSI devices which support SBC-2.
SBC-2 (or possibly even SBC-1, I don't have it handy) added the SYNC_NV bit to
the command. If SYNC_NV is set to 0, the device is required to flush data f
Shane, I wasn't able to reproduce this failure on my system. Could you
try running Eric's D script below and send us the output while running
'zfs list'?
thanks,
--matt
On Fri, Aug 18, 2006 at 09:47:45AM -0700, Eric Schrock wrote:
> Can you send the output of this D script while running 'zfs li
Wow, congratulations, nice work!
I'm the one porting ZFS to FUSE, and seeing you make such fast progress is
very, very encouraging :)
On Tue, Aug 22, 2006 at 07:02:53PM +0200, Thomas Deutsch wrote:
> >ZFS' RAIDZ1 uses one parity disk per RAIDZ set, similarly to RAID-5.
> >ZFS' RAIDZ2 uses two parity disks per RAIDZ set.
>
> This means that RAIDZ2 can tolerate problems with two disks?
That's right. A third failure would cause data lo
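For reference, a minimal sketch of creating a double-parity set (device names hypothetical); such a pool stays intact through any two simultaneous disk failures within the raidz2 vdev, while a third would lose data:
bash-3.00# zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0
bash-3.00# zpool status tank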
On 8/10/06, Neil Perrin <[EMAIL PROTECTED]> wrote:
Myron Scott wrote:
> Is there any difference between fdatasync and fsync on ZFS?
No. ZFS does not log data and metadata separately; rather,
it logs essentially the system call records, e.g. writes, mkdir,
truncate, setattr, etc. So fdatasync and
For what it's worth, an unbuilt Solaris workspace (containing only source files
and SCCS files) stored on ZFS has a reported compression ratio of about 1.87.
A large filesystem on which I keep primarily compilers (including several
versions of the full Sun Studio install) has a ratio of 1.57.
On Tue, Aug 22, 2006 at 10:09:19AM -0700, Rich Teer wrote:
> On Tue, 22 Aug 2006, Matthew Ahrens wrote:
>
> > gzip. We plan to implement a broader range of compression algorithms in
> > the future.
>
> Cool. Presumably, the algorithm used will be a user-settable property?
That's correct, you w
On Tue, 22 Aug 2006, Matthew Ahrens wrote:
> gzip. We plan to implement a broader range of compression algorithms in
> the future.
Cool. Presumably, the algorithm used will be a user-settable property?
--
Rich Teer, SCNA, SCSA, OpenSolaris CAB member
President,
Rite Online Inc.
Voice: +1 (2
>-Original Message-
>From: [EMAIL PROTECTED]
[mailto:zfs-discuss->[EMAIL PROTECTED] On Behalf Of roland
>Can someone tell how effective ZFS compression and space-efficiency are
>(regarding small files)?
>linux-kernel source tree
% ls -l linux-2.6.17.tar.gz
-rw--- 1 nliu staff
Hi
2006/8/22, Constantin Gonzalez <[EMAIL PROTECTED]>:
Thomas Deutsch wrote:
> I'm thinking about changing from Linux/software RAID to
> OpenSolaris/ZFS. Along the way, I've got some (probably stupid)
> questions:
don't worry, there are no stupid questions :).
> 1. Is ZFS able to encrypt all the
On Tue, Aug 22, 2006 at 08:43:32AM -0700, roland wrote:
> Can someone tell how effective ZFS compression and
> space-efficiency are (regarding small files)?
>
> Since compression works at the block level, I assume compression may
> not come into effect as some may expect. (Maybe I'm wrong here.)
On Tue, Aug 22, 2006 at 06:15:08AM -0700, Tony Galway wrote:
> A question (well let's make it 3 really) – Is vdbench a useful tool
> when testing file system performance of a ZFS file system? Secondly -
> is ZFS write performance really much worse than UFS or VxFS? and Third
> - what is a good bench
Michael Schuster - Sun Microsystems wrote:
Pawel Jakub Dawidek wrote:
On Tue, Aug 22, 2006 at 04:30:44PM +0200, Jeremie Le Hen wrote:
I don't know much about ZFS, but Sun states this is a "128 bits"
filesystem. How will you handle this in regards to the FreeBSD
kernel interface that is alrea
Hi,
Thomas Deutsch wrote:
> Hi
>
> I'm thinking about changing from Linux/software RAID to
> OpenSolaris/ZFS. Along the way, I've got some (probably stupid)
> questions:
don't worry, there are no stupid questions :).
> 1. Is ZFS able to encrypt all the data? If yes, how safe is this
> encryption?
Hello!
I searched the net and the forum for this, but couldn't find anything about it.
Can someone tell how effective ZFS compression and space-efficiency are
(regarding small files)?
Since compression works at the block level, I assume compression may not come
into effect as some may exp
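One hedged way to measure this for yourself (pool name and source path hypothetical): copy a tree of small files onto a compressed dataset, then compare the apparent sizes with the space actually allocated:
bash-3.00# zfs create tank/test
bash-3.00# zfs set compression=on tank/test
bash-3.00# cp -r /var/tmp/linux-2.6.17 /tank/test/
bash-3.00# du -sk /tank/test/linux-2.6.17
bash-3.00# zfs get used,compressratio tank/test
du reports allocated blocks, which on ZFS already reflects compression, so comparing it against the uncompressed tree shows the real space saving for that data.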
On Tue, Aug 22, 2006 at 04:42:57PM +0200, Michael Schuster - Sun Microsystems
wrote:
> Pawel Jakub Dawidek wrote:
> >On Tue, Aug 22, 2006 at 04:30:44PM +0200, Jeremie Le Hen wrote:
> >>I don't know much about ZFS, but Sun states this is a "128 bits"
> >>filesystem. How will you handle this in reg
Hi
I'm thinking about changing from Linux/software RAID to
OpenSolaris/ZFS. Along the way, I've got some (probably stupid)
questions:
1. Is ZFS able to encrypt all the data? If yes, how safe is this
encryption? I'm currently using dm-crypt on Linux for this.
2. How big is the usable diskspa
Pawel Jakub Dawidek wrote:
On Tue, Aug 22, 2006 at 04:30:44PM +0200, Jeremie Le Hen wrote:
I don't know much about ZFS, but Sun states this is a "128 bits"
filesystem. How will you handle this in regards to the FreeBSD
kernel interface that is already struggling to be 64 bits
compliant ? (I'm
Hi folks,
thanks for the responses. We've noticed a couple of switches in this code:
un_f_write_cache_enabled - loaded in sd_get_write_cache_enabled() after looking
at sense data
and
un_f_sync_cache_supported - referenced in sdioctl :
22025 case DKIOCFLUSHWRITECACHE:
22026
On Tue, Aug 22, 2006 at 04:30:44PM +0200, Jeremie Le Hen wrote:
> I don't know much about ZFS, but Sun states this is a "128 bits"
> filesystem. How will you handle this in regards to the FreeBSD
> kernel interface that is already struggling to be 64 bits
> compliant ? (I'm stating this based on
A question (well let's make it 3 really) – Is vdbench a useful tool when testing
file system performance of a ZFS file system? Secondly - is ZFS write
performance really much worse than UFS or VxFS? and Third - what is a good
benchmarking tool to test ZFS vs UFS vs VxFS?
The reason I ask is this
On Tue, Aug 22, 2006 at 12:22:44PM +0100, Dick Davies wrote:
> This is fantastic work!
>
> How long have you been at it?
As I said, 10 days, but this is really far from being finished.
--
Pawel Jakub Dawidek http://www.wheel.pl
[EMAIL PROTECTED]
Roch wrote:
Dick Davies writes:
> On 22/08/06, Bill Moore <[EMAIL PROTECTED]> wrote:
> > On Mon, Aug 21, 2006 at 02:40:40PM -0700, Anton B. Rang wrote:
> > > Yes, ZFS uses this command very frequently. However, it only does this
> > > if the whole disk is under the control of ZFS, I believe
On 8/22/06, Pawel Jakub Dawidek <[EMAIL PROTECTED]> wrote:
I started porting the ZFS file system to the FreeBSD operating system.
Mighty cool!! Please keep us posted!!
raj
Michael Schuster - Sun Microsystems writes:
> Roch wrote:
> > Michael Schuster writes:
> > > IHAC who is using a very similar test (cp -pr /zpool1/Studio11
> > > /zpool1/Studio11.copy) and is seeing behaviour similar to what we've
> > > seen described here; BUT since he's using a single-C
Roch wrote:
Michael Schuster writes:
> IHAC who is using a very similar test (cp -pr /zpool1/Studio11
> /zpool1/Studio11.copy) and is seeing behaviour similar to what we've
> seen described here; BUT since he's using a single-CPU box (SunBlade
> 1500) and has a single disk in his pool, every
This is fantastic work!
How long have you been at it?
You seem a lot further on than the ZFS-Fuse project.
On 22/08/06, Pawel Jakub Dawidek <[EMAIL PROTECTED]> wrote:
Hi.
I started porting the ZFS file system to the FreeBSD operating system.
There is a lot to do, but I'm making good progress,
Hi.
I started porting the ZFS file system to the FreeBSD operating system.
There is a lot to do, but I'm making good progress, I think.
I'm doing my work in those directories:
contrib/opensolaris/ - userland files taken directly from
OpenSolaris (libzfs, zpool, zfs and o
Hi Eric,
>> This means that we have one pool with 3 vdevs that access up to 3 different
>> slices on the same physical disk.
minor correction: 1 pool, 3 vdevs, 3 slices per disk on 4 disks.
>> Question: Does ZFS consider the underlying physical disks when
>> load-balancing
>> or does it only
Michael Schuster writes:
> IHAC who is using a very similar test (cp -pr /zpool1/Studio11
> /zpool1/Studio11.copy) and is seeing behaviour similar to what we've
> seen described here; BUT since he's using a single-CPU box (SunBlade
> 1500) and has a single disk in his pool, every time the CPU
Dick Davies writes:
> On 22/08/06, Bill Moore <[EMAIL PROTECTED]> wrote:
> > On Mon, Aug 21, 2006 at 02:40:40PM -0700, Anton B. Rang wrote:
> > > Yes, ZFS uses this command very frequently. However, it only does this
> > > if the whole disk is under the control of ZFS, I believe; so a
> > > w
IHAC who is using a very similar test (cp -pr /zpool1/Studio11
/zpool1/Studio11.copy) and is seeing behaviour similar to what we've seen
described here; BUT since he's using a single-CPU box (SunBlade 1500) and has a
single disk in his pool, every time the CPU goes into "100%-mode", interactive
On 22/08/06, Bill Moore <[EMAIL PROTECTED]> wrote:
On Mon, Aug 21, 2006 at 02:40:40PM -0700, Anton B. Rang wrote:
> Yes, ZFS uses this command very frequently. However, it only does this
> if the whole disk is under the control of ZFS, I believe; so a
> workaround could be to use slices rather th
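A minimal sketch of the two configurations being contrasted (device name hypothetical); only one would be used at a time:
bash-3.00# zpool create tank c1t0d0     # whole disk handed to ZFS (EFI label, cache handling as discussed above)
bash-3.00# zpool create tank c1t0d0s0   # a single slice instead, the suggested workaround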
However, I can't help but think that if my file server is
compressing every data block that it writes, it would be able to write
more data if it used a thread (or more) per core, and I would come out ahead.
No arguments here. MT-hot compression was part of the ZFS design from day one.
A bug got i