Brad,
I'm investigating a similar issue and would like to get a coredump if
you have one available.
Thanks,
George
Brad Plecs wrote:
I have similar problems ... I have a bunch of D1000 disk shelves attached via
SCSI HBAs to a V880. If I do something as simple as unplugging a drive in a raidz
vdev, it generates SCSI errors that eventually freeze the entire system. I can
access the filesystem okay for a couple of minutes until the SCS
Hi Matthew,
In the case of the 8 KB Random Write to the 128 KB recsize filesystem,
the I/Os were not full-block rewrites, yet the expected COW Random Read
(RR) at the pool level was somehow avoided. I suspect it was able to
coalesce enough I/O in the 5-second transaction window to construct 128
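One way to check whether those COW reads really are being avoided is to compare what the application issues with what the pool actually does while the test runs. A minimal sketch, assuming the test filesystem is tank/fs on a pool named tank (both names are placeholders):
# zfs get recordsize tank/fs     (confirm the 128 KB recsize under test)
# zpool iostat -v tank 5         (pool-level read/write ops and bandwidth per vdev, 5-second intervals)
# iostat -xn 5                   (device-level rates, to see whether extra reads actually hit the LUNs)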
On Wed, Aug 09, 2006 at 04:24:55PM -0700, Dave C. Fisk wrote:
> Hi Eric,
>
> Thanks for the information.
>
> I am aware of the recsize option and its intended use. However, when I
> was exploring it to confirm the expected behavior, what I found was the
> opposite!
>
> The test case was build
Hi Eric,
Thanks for the information.
I am aware of the recsize option and its intended use. However, when I
was exploring it to confirm the expected behavior, what I found was the
opposite!
The test case was build 38, Solaris 11, a 2 GB file, initially
created with 1 MB SW, and a recsize
On Wed, Aug 09, 2006 at 03:29:05PM -0700, Dave Fisk wrote:
>
> For example the COW may or may not have to read old data for a small
> I/O update operation, and a large portion of the pool vdev capability
> can be spent on this kind of overhead.
This is what the 'recordsize' property is for. If y
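The usual tuning is to match recordsize to the application's I/O size before the data files are written, since the property only affects newly written blocks. A minimal sketch, assuming a dataset tank/db and a fixed 8 KB application block size (both placeholders):
# zfs create tank/db
# zfs set recordsize=8k tank/db
# zfs get recordsize tank/db     (verify; files created before the change keep their old block size)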
Hi,
Note that these are page-cache rates, and that if the application pushes harder
and exposes the supporting device rates there is another world of performance
to be observed. This is where ZFS gets to be a challenge, as the relationship
between the application-level I/O and the pool level is v
Torrey McMahon wrote:
I'm with ya on that one. I'd even go so far as to change "single parity
RAID" to "single parity block". The talk of RAID throws people off
pretty easily, especially when you start layering ZFS on top of things
other than a JBOD.
Agree, I still have people who think RAID =
I'm with ya on that one. I'd even go so far as to change "single parity
RAID" to "single parity block". The talk of RAID throws people off
pretty easily, especially when you start layering ZFS on top of things
other than a JBOD.
Eric Schrock wrote:
I don't see why you would distinguish between
Steffen,
Are they open to Postgres if it performs 1000 times faster, clusters to 120
nodes and 1.2 Petabytes?
- Luke
On 8/9/06 1:34 PM, "Steffen Weiberle" <[EMAIL PROTECTED]> wrote:
> Does anybody have real-world experience with a MySQL 5 datastore on ZFS? Any
> feedback on clustering of
> nodes?
<...>
So having 4 pools isn't a recommended config - I would destroy those 4
pools and just create 1 RAID-0 pool:
#zpool create sfsrocks c4t00173801014Bd0 c4t00173801014Cd0
c4t001738010140001Cd0 c4t0017380101400012d0
Each of those devices is a 64 GB LUN, right?
I did it - created one po
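A quick way to confirm the result after creating the single striped pool (device names as in the command above):
# zpool status sfsrocks     (all four LUNs should appear as top-level vdevs in one dynamic stripe)
# zpool list sfsrocks       (capacity should be roughly 4 x 64 GB, minus overhead)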
I don't see why you would distinguish between single-level and multiple
levels. ZFS pools are always dynamically striped; I don't see why you'd
call out the degenerate case of a single top-level vdev as anything
special. I would use simple terminology:
Unreplicated
Mirrored
I'd like to get a consensus on how to describe ZFS RAID configs in a
short-hand method (see the command sketch after this list). For example,
single-level
no RAID (1 disk)
RAID-0 (dynamic stripe, > 1 disk)
RAID-1
RAID-Z
RAID-Z2
multiple l
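To make the short-hand concrete, these are the corresponding pool layouts, sketched with placeholder device names:
# zpool create tank c0t0d0                                       (no RAID, 1 disk)
# zpool create tank c0t0d0 c0t1d0                                (RAID-0, dynamic stripe)
# zpool create tank mirror c0t0d0 c0t1d0                         (RAID-1)
# zpool create tank raidz c0t0d0 c0t1d0 c0t2d0                   (RAID-Z, single parity)
# zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0           (RAID-Z2, double parity)
# zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0    (multiple top-level vdevs, striped mirrors)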
Does anybody have real-world experience with a MySQL 5 datastore on ZFS? Any feedback on clustering of
nodes?
Customer is looking at X4500s for DB and data storage.
Thanks
Steffen
Jesus Cea wrote:
Anton B. Rang wrote:
I have a two-vdev pool, just plain disk slices
If the vdevs are from the same disk, you are doomed.
ZFS tries to spread the load among the vdevs, so if the vdevs are from
the same disk, you will end up in seek hell.
It is not clear to me that this is a p
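One way to see whether two slice-backed vdevs really are fighting over the same spindle is to watch the underlying device while the pool is busy. A minimal sketch, assuming a pool named tank (placeholder):
# zpool status tank     (shows which slices back each vdev, e.g. c0t0d0s3 and c0t0d0s4)
# iostat -xn 5          (if both slices live on c0t0d0, that one device will show all the %b and the long asvc_t)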
Eric Lowe wrote:
Eric Schrock wrote:
Well, the fact that it's a level-2 indirect block indicates why it can't
simply be removed. We don't know what data it refers to, so we can't
free the associated blocks. The panic on move is quite interesting -
after BFU, give it another shot and file a bug
Robert Milkowski wrote:
> JC> Using ZFS over SVM is undocumented, but seems to work fine. Make sure
> JC> the ZFS pool is accessible after a machine reboot, nevertheless.
>
> Then create a zvol and put UFS on top of it:
>
> ok, just kid
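Joking aside, UFS on a zvol does work. A minimal sketch, assuming a pool named tank and a 10 GB volume (size, names and mount point are placeholders):
# zfs create -V 10g tank/ufsvol
# newfs /dev/zvol/rdsk/tank/ufsvol
# mount /dev/zvol/dsk/tank/ufsvol /mnt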
Hello Roch,
Wednesday, August 9, 2006, 5:36:39 PM, you wrote:
R> mario heimel writes:
>> Hi.
>>
>> I am very interested in ZFS compression on vs. off tests; maybe you can run
>> another one with the 3510.
>>
>> I have seen a slight benefit with compression on in the following test
>> (
mario heimel writes:
> Hi.
>
> I am very interested in ZFS compression on vs. off tests; maybe you can run
> another one with the 3510.
>
> I have seen a slight benefit with compression on in the following test
> (also with high system load):
> S10U2
> V880 8xcore 16 GB RAM
> (only s
Eric Schrock wrote:
Well, the fact that it's a level-2 indirect block indicates why it can't
simply be removed. We don't know what data it refers to, so we can't
free the associated blocks. The panic on move is quite interesting -
after BFU, give it another shot and file a bug if it still happens.
On Wed, Aug 09, 2006 at 01:24:30PM +0200, Jesus Cea wrote:
>
> That is, I support ZFS 2 but the loaded modules are ZFS 1.
>
The ZFS module version is irrelevant. There is an open RFE to have this
match the on-disk version number, but I don't have it off hand.
- Eric
--
Eric Schrock, Solaris K
So while I'm feeling optimistic :-) we really ought to be
able to do this in two I/O operations. If we have, say, 500K
of data to write (including all of the metadata), we should
be able to allocate a contiguous 500K block on disk and
write that with a single operation. Th
Hi.
I am very interested in ZFS compression on vs. off tests; maybe you can run
another one with the 3510.
I have seen a slight benefit with compression on in the following test (also
with high system load):
S10U2
V880 8xcore 16 GB RAM
(only six internal disks at this moment, I am waiting for the SAN
eric kustarz writes:
>
> >ES> Second, you may be able to get more performance from the ZFS filesystem
> >ES> on the HW LUN by tweaking the max pending # of requests. One thing
> >ES> we've found is that ZFS currently has a hardcoded limit of how many
> >ES> outstanding requests to send to t
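For reference, the per-vdev limit described here is most likely the zfs_vdev_max_pending kernel variable (default 35 outstanding I/Os per vdev). Assuming that is the knob in question, it can be inspected and changed on a live system with mdb, or set persistently in /etc/system:
# echo zfs_vdev_max_pending/D | mdb -k        (print the current value)
# echo zfs_vdev_max_pending/W0t10 | mdb -kw   (set it to 10 on the running kernel)
In /etc/system:
set zfs:zfs_vdev_max_pending = 10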
Hello Jesus,
Wednesday, August 9, 2006, 2:21:24 PM, you wrote:
JC> Anton B. Rang wrote:
>> I have a two-vdev pool, just plain disk slices
JC> If the vdevs are from the same disk, you are doomed.
JC> ZFS tries to spread the load among the
Anton B. Rang wrote:
> I have a two-vdev pool, just plain disk slices
If the vdevs are from the same disk, you are doomed.
ZFS tries to spread the load among the vdevs, so if the vdevs are from
the same disk, you will end up in seek hell.
I would sug
George Wilson wrote:
> Luke,
>
> You can run 'zpool upgrade' to see what on-disk version you are capable
> of running. If you have the latest features then you should be running
> version 3:
>
> hadji-2# zpool upgrade
> This system is currently runni
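For anyone following along, 'zpool upgrade' with no arguments shows the current status, '-v' lists every on-disk version the running bits support, and naming a pool upgrades it (one-way, so older bits can no longer import it). Assuming a pool named tank:
# zpool upgrade -v      (list supported versions and what each adds)
# zpool upgrade tank    (move the pool to the newest supported version)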
Hello Torrey,
Wednesday, August 9, 2006, 5:39:54 AM, you wrote:
TM> I read through the entire thread, I think, and have some comments.
TM> * There are still some "Granny Smith" to "Macintosh" comparisons
TM> going on. Different OS revs, it looks like different server types,
TM> a
Hello Torrey,
Wednesday, August 9, 2006, 4:59:08 AM, you wrote:
TM> Robert Milkowski wrote:
>> Hello Richard,
>>
>> Monday, August 7, 2006, 6:54:37 PM, you wrote:
>>
>> RE> Hi Robert, thanks for the data.
>> RE> Please clarify one thing for me.
>> RE> In the case of the HW raid, was there just on
Hello Luke,
Wednesday, August 9, 2006, 6:07:38 AM, you wrote:
LL> We routinely get 950MB/s from 16 SATA disks on a single server with internal
LL> storage. We're getting 2,000 MB/s on 36 disks in an X4500 with ZFS.
Can you share more data? How are these disks configured, and what kind of
access pat
John Danielson wrote:
Patrick Petit wrote:
David Edmondson wrote:
On 4 Aug 2006, at 1:22pm, Patrick Petit wrote:
When you're talking to Xen (using three Control-A's), you should
hit 'q', which caus
On Tue, Aug 08, 2006 at 11:33:28AM -0500, Tao Chen wrote:
> On 8/8/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> >
> >Hello,
> >
> >Solaris 10 GA + latest recommended patches:
> >
> >while running dtrace:
> >
> >bash-3.00# dtrace -n 'io:::start [EMAIL PROTECTED], args[2]->fi_pathname] =
> >coun
On Tue, Aug 08, 2006 at 04:47:51PM +0200, Robert Milkowski wrote:
> Hello przemolicc,
>
> Tuesday, August 8, 2006, 3:54:26 PM, you wrote:
>
> ppf> Hello,
>
> ppf> Solaris 10 GA + latest recommended patches:
>
> ppf> while running dtrace:
>
> ppf> bash-3.00# dtrace -n 'io:::start [EMAIL PROTECTE
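The one-liner above has been mangled by the archive's address scrubbing, so here is the general shape of such a script, aggregating I/O starts by process and file (a sketch, not necessarily the exact command the original poster ran):
# dtrace -n 'io:::start { @[execname, args[2]->fi_pathname] = count(); }'
Let it run for a while, then press Ctrl-C to print the per-process, per-file counts.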