Has anyone run Solaris on one of these:
http://acmemicro.com/estore/merchant.ihtml?pid=4014&step=4
2U with 12 hotswap SATA disks. Supermicro motherboard; I would have to add a
second Supermicro SATA2 controller to cover all the disks, since the onboard
Intel controller can only handle 6.
Nicholas
On 26-Feb-07, at 11:32 PM, Richard Elling wrote:
Rayson Ho wrote:
NT kernel has the filter driver framework:
http://www.microsoft.com/whdc/driver/filterdrv/default.mspx
It seems to be useful for things like FS encryption and compression...
is there any plan to implement something similar in Solaris??
Rayson,
NT kernel has the filter driver framework:
http://www.microsoft.com/whdc/driver/filterdrv/default.mspx
It seems to be useful for things like FS encryption and compression...
is there any plan to implement something similar in Solaris??
The Availability Suite product set
(http://www.open
Rayson Ho wrote:
NT kernel has the filter driver framework:
http://www.microsoft.com/whdc/driver/filterdrv/default.mspx
It seems to be useful for things like FS encryption and compression...
is there any plan to implement something similar in Solaris??
If you google "stacking file system" you'
AoE and ZFS do not work together, as the Coraid driver does not support this yet:
REF: http://www.coraid.com/support/solaris/aoe-1.3.1/doc/aoe-guide.html
Search for ZFS
---
Pascal Gauthier
http://www.nihilisme.ca/
NT kernel has the filter driver framework:
http://www.microsoft.com/whdc/driver/filterdrv/default.mspx
It seems to be useful for things like FS encryption and compression...
is there any plan to implement something similar in Solaris??
Rayson
Jens Elkner wrote:
Currently I'm trying to figure out the best zfs layout for a thumper wrt. read AND write performance.
First things first. What is the expected workload? Random, sequential, lots of
little files, few big files, 1 Byte iops, synchronous data, constantly changing
access tim
> for what purpose ?
Darren's correct, it's a simple case of ease of use. Not
show-stopping by any means but would be nice to have.
Thomas
On Sat, Feb 24, 2007 at 09:29:48PM +1300, Nicholas Lee wrote:
> I'm not really a Solaris expert, but I would have expected vol4 to appear on
> the iscsi target list automatically. Is there a way to refresh the target
> list? Or is this a bug?
Hi Nicholas,
This is a bug either in ZFS or in the iS
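For anyone hitting the same thing, a minimal sketch of the usual sequence, assuming the volume was shared via the shareiscsi property (pool and volume names are placeholders); restarting the target service is a blunt way to force a refresh:
  zfs create -V 10g tank/vol4        # placeholder zvol
  zfs set shareiscsi=on tank/vol4    # should register it as an iSCSI target
  iscsitadm list target              # check whether the new target shows up
  svcadm restart iscsitgt            # heavy-handed workaround to re-read the config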
Currently I'm trying to figure out the best zfs layout for a thumper wrt. read
AND write performance.
I did some simple mkfile 512G tests and found that on average ~500 MB/s seems
to be the maximum one can reach (tried the initial default setup, all 46 HDDs
as R0, etc.).
According to
h
>
> for what purpose ?
For me, I'd say ease of use. Using Netapp .snapshot directories, it's
often easier to find files in a snapshot by relative path from the
directory in question rather than from all the way back at the top of
the filesystem.
In addition, others have mentioned zone mounts:
h
for what purpose ?
mkfile files also compress rather nicely, when you have ZFS compression
enabled.
-- richard
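A quick way to see this on a scratch filesystem (dataset and path are made up); mkfile writes zero-filled blocks, which compress away to almost nothing:
  zfs set compression=on tank/test
  mkfile 512m /tank/test/bigfile
  zfs get compressratio tank/test    # very high ratio for all-zero data
  du -h /tank/test/bigfile           # on-disk usage far below 512 MB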
Jens Elkner wrote:
Hi Wire ;-),
What's the output of
zpool list
zfs list
?
Ooops, already destroyed the pool. Anyway, slept a night over it and found a "maybe explanation":
Files were created with mkfile, and mkfile has an option -n. It was not used to
create the files; however, I interrupted mkfile (^C).
Hello Paul,
Monday, February 26, 2007, 8:28:43 PM, you wrote:
>> From: Eric Schrock [mailto:[EMAIL PROTECTED]
>> Sent: Monday, February 26, 2007 12:05 PM
>>
>> The slow part of zpool import is actually discovering the
>> pool configuration. This involves examining every device on
>> the syst
It does not concern the ACLs on ZFS; rather, it concerns the mapping of ZFS
snapshots to Samba shares.
The root cause is in the acl(2) call. The ZFS implementation team has not implemented
backward compatibility for the SETACL/GETACL/GETACLCNT functions of this
syscall. Only the extended functions ACE_SETACL/ACE_GETACL/ACE_GETACLCNT are
implemented on ZFS. The old ones return (errno == ENOTSUP) on ZFS (
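A quick way to see the split from the shell on a ZFS file (path is a placeholder): the old POSIX-draft tool goes through the legacy GETACL interface, while ls -v uses the ACE interface that ZFS does implement:
  ls -v /tank/somefile      # prints the NFSv4/ACE ACL that ZFS stores
  getfacl /tank/somefile    # uses the old GETACL call and is rejected on ZFS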
I am currently working (among the other problems) on this issue. I hope the
module will be finished before 3.0.25 is released.
> From: Eric Schrock [mailto:[EMAIL PROTECTED]
> Sent: Monday, February 26, 2007 12:05 PM
>
> The slow part of zpool import is actually discovering the
> pool configuration. This involves examining every device on
> the system (or every device within a 'import -d' directory)
> and seeing if i
Hi Gino,
Was there more than one LUN in the RAID-Z using the port you disabled?
-J
On 2/26/07, Gino Ruopolo <[EMAIL PROTECTED]> wrote:
Hi Jason,
On Saturday we ran some tests and found that disabling an FC port under heavy load
(MPxIO enabled) often leads to a panic (using RAID-Z!).
No problems with UFS ...
Hi Wire ;-),
> What's the output of
> zpool list
> zfs list
> ?
Ooops, already destroyed the pool. Anyway, slept a night over it and found a
"maybe explanation":
Files were created with mkfile, and mkfile has an option -n. It was not used to
create the files; however, I interrupted mkfile (^C).
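For what it's worth, a small illustration of what -n changes (paths are placeholders); an interrupted mkfile likewise leaves a file shorter than the size requested:
  mkfile 512m /tank/full           # writes 512 MB of zeros, allocating every block
  mkfile -n 512m /tank/sparse      # records the size but allocates no blocks until written
  ls -ls /tank/full /tank/sparse   # the first column (blocks) shows the difference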
On Mon, Feb 26, 2007 at 10:32:22AM -0800, Eric Schrock wrote:
> On Mon, Feb 26, 2007 at 12:27:48PM -0600, Nicolas Williams wrote:
> >
> > What is slow, BTW? The open(2)s of the devices? Or the label reading?
> > And is there a way to do async open(2)s w/o a thread per-open? The
> > open(2) man
On Mon, Feb 26, 2007 at 12:27:48PM -0600, Nicolas Williams wrote:
>
> What is slow, BTW? The open(2)s of the devices? Or the label reading?
> And is there a way to do async open(2)s w/o a thread per-open? The
> open(2) man page isn't very detailed about O_NONBLOCK/O_NDELAY behaviour
> on device
On Mon, Feb 26, 2007 at 10:10:15AM -0800, Eric Schrock wrote:
> On Mon, Feb 26, 2007 at 12:06:14PM -0600, Nicolas Williams wrote:
> > Couldn't all that tasting be done in parallel?
>
> Yep, that's certainly possible. Sounds like a perfect feature for
> someone in the community to work on :-) Sim
[EMAIL PROTECTED] wrote on 02/26/2007 11:36:18 AM:
> Jeff Davis wrote:
> >> Given your question are you about to come back with a
> >> case where you are not
> >> seeing this?
> >
> > Actually, the case where I saw the bad behavior was in Linux using
> the CFQ I/O scheduler. When reading the
On 26 Feb 2007, at 18:30, Frank Cusack wrote:
On February 26, 2007 9:05:21 AM -0800 Jeff Davis
<[EMAIL PROTECTED]> wrote:
That got me worried about the project I'm working on, and I wanted to
understand ZFS's caching behavior better to prove to myself that the
problem wouldn't happen under ZFS.
On Mon, Feb 26, 2007 at 12:06:14PM -0600, Nicolas Williams wrote:
> On Mon, Feb 26, 2007 at 10:05:08AM -0800, Eric Schrock wrote:
> > The slow part of zpool import is actually discovering the pool
> > configuration. This involves examining every device on the system (or
> > every device within a '
On Mon, Feb 26, 2007 at 10:05:08AM -0800, Eric Schrock wrote:
> The slow part of zpool import is actually discovering the pool
> configuration. This involves examining every device on the system (or
> every device within a 'import -d' directory) and seeing if it has any
> labels. Internally, the
The slow part of zpool import is actually discovering the pool
configuration. This involves examining every device on the system (or
every device within a 'import -d' directory) and seeing if it has any
labels. Internally, the import action itself should be quite fast, and
is essentially the same
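One way to cut down the tasting, sketched with made-up paths, is to point import at a directory that contains links to only the devices belonging to the pool:
  mkdir /tmp/pooldevs
  ln -s /dev/dsk/c2t0d0s0 /tmp/pooldevs/c2t0d0s0   # repeat for each member device
  zpool import -d /tmp/pooldevs tank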
It's perfectly reasonable to have multiple exported/destroyed pools with
the same name. Pool names are unique only when active on the system.
This is why 'zpool import' also prints out the pool GUID and allows
import by ID, instead of just names. In your output below, you'd see
that each pool has
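For reference, roughly what that looks like (the id below is invented):
  zpool import                             # lists pools available for import, with name, id and state
  zpool import 6458428349387471234 tank2   # import by numeric id, renaming the pool to tank2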
Jeff Davis wrote:
Given your question are you about to come back with a
case where you are not
seeing this?
Actually, the case where I saw the bad behavior was in Linux using the CFQ I/O
scheduler. When reading the same file sequentially, adding processes
drastically reduced total disk throughput
On February 26, 2007 9:05:21 AM -0800 Jeff Davis <[EMAIL PROTECTED]>
wrote:
That got me worried about the project I'm working on, and I wanted to
understand ZFS's caching behavior better to prove to myself that the
problem wouldn't happen under ZFS. Clearly the block will be in cache on
the secon
After having no luck with a pair of Syba SD-SATA-4P PCI-X SATA II controllers
(Sil3114 chipset), I've now successfully used a Tekram TR-834A 4-port SATA-II
controller (Sil3124-2 chipset) at the full PCI-X 133MHz bus speed and b50.
Since my disk mirror on the previous SATA controller (built-in W1100
This is a lightly loaded v20z, but it has ZFS across its two disks.
It's hung (requiring a power cycle) twice since running
5.11 opensol-20060904
the last time I had a `vmstat 1` running... nice page rates
right before death :-)
kthr memory page disk faults
> Given your question are you about to come back with a
> case where you are not
> seeing this?
Actually, the case where I saw the bad behavior was in Linux using the CFQ I/O
scheduler. When reading the same file sequentially, adding processes
drastically reduced total disk throughput (single d
In other words, say you have 4 x 500 GB drives. In a standard raidz
configuration, you should yield (4-1) * 500 GB of space, or 1.5 TB.
In your case, I will mention one caveat. Say you have 8 drives in raidz. If you
have 7 x 500 GB drives and 1 x 20 GB drive, you will only yield (8-1) * 20 GB, or 140 GB of
space.
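A quick way to double-check the arithmetic on a scratch pool (device names are placeholders); note that zpool list reports the raw size including parity, while zfs list shows what is actually usable:
  zpool create demo raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0
  zpool list demo    # raw size, parity included
  zfs list demo      # space available to datasets, roughly (N-1) x smallest disk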
>> devfsadm -i iscsi # to create the device on sf3
>> iscsiadm list target -Sv| egrep 'OS Device|Peer|Alias' # not empty
>> Alias: vol-1
>> IP address (Peer): 10.194.67.111:3260
>> OS Device Name:
>> /dev/rdsk/c1t014005A267C12A0045E2F524d0s2
this i
On Mon, 2007-02-26 at 07:00 -0800, dudekula mastan wrote:
>
> Hi All,
>
> I have a zpool (named testpool) on /dev/dsk/c0t0d0.
>
> The command $zpool import testpool imports the testpool (i.e., mounts
> the testpool).
>
> How does the import command come to know that testpool was created
> on /dev/dsk/
Also note that you may not need new hardware. While Solaris is not as compatible as Linux,
it will run on a variety of hardware, so you may just be able to get by with
new disks.
Hi All,
I have a zpool (named testpool) on /dev/dsk/c0t0d0.
The command $zpool import testpool imports the testpool (i.e., mounts the
testpool).
How does the import command come to know that testpool was created on /dev/dsk/c0t0d0?
And also, the command $zpool import lists out all
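Short answer: zpool import walks the devices and reads the ZFS label each pool member carries, which records the pool name, GUID and vdev layout. You can inspect a label directly; the slice below is a guess, since it depends on how the disk is labeled:
  zdb -l /dev/dsk/c0t0d0s0   # dumps the ZFS labels stored on the device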
On 2/26/07, cedric briner <[EMAIL PROTECTED]> wrote:
hello,
I'm trying to consolidate my HDs in a cheap but (I hope) reliable
manner. To do so, I was thinking of using ZFS over iSCSI.
Unfortunately, I'm having some issues with it when I do:
# iscsi server (nexenta alpha 5)
#
svcadm enable iscsitgt
hello,
I'm trying to consolidate my HDs in a cheap but (I hope) reliable
manner. To do so, I was thinking of using ZFS over iSCSI.
Unfortunately, I'm having some issues with it when I do:
# iscsi server (nexenta alpha 5)
#
svcadm enable iscsitgt
iscsitadm delete target --lun 0 vol-1
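For completeness, a rough sketch of the initiator side once the target is exported (the address and final device name are placeholders):
  iscsiadm add discovery-address 10.0.0.1:3260
  iscsiadm modify discovery --sendtargets enable
  devfsadm -i iscsi                # create the device nodes
  iscsiadm list target -Sv         # note the OS Device Name it reports
  zpool create tank c1tXXXXd0      # placeholder; use the device from the listing above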
Has anyone done benchmarking on the scalability and performance of zpool import
in terms of the number of devices in the pool on recent opensolaris builds?
In other words, what would the relative performance be for "zpool import" for
the following three pool configurations on multi-pathed 4G FC
> My plan was to have 8-10 cheap drives, most of them IDE drives from
> 120 gig and up to 320 gig. Does that mean that I can get 7-9 drives
> with data plus full redundancy from the last drive? It sounds almost
> like magic to me to be able to have the data on maybe 1 TB of drives
> and have one dr
On Mon, Feb 26, 2007 at 01:53:17AM -0800, Tor wrote:
> [...] if using redundancy on ZDF
The ZFS Document Format? ;-)
> uses less disk space than simply getting extra drives and doing identical copies,
> with periodic CRC checks of the source material to check the health.
If you create a 2-disk mirro
If I'm gonna use OpenSolaris, I will have to buy new hardware, which I can't
really defend at the moment. But I may be able to defend it in the near future
if using redundancy on ZDF uses less disk space than simply getting extra drives
and doing identical copies, with periodic CRC checks of the source material to check the health.
Dang, I think I'm dead as far as Solaris goes. I checked the HCL and the Java
compatibility check, and neither of the two controllers I would need to use, one
PCI IDE and one S-ATA on the KT-4 motherboard, will work with OpenSolaris.
Annoying as heck, but it looks like I'm gonna have to stick with
On 2/26/07, Thomas Garner <[EMAIL PROTECTED]> wrote:
Since I have been unable to find the answer online, I thought I would
ask here. Is there a knob to turn on a zfs filesystem to put the .zfs
snapshot directory into all of the child directories of the
filesystem, like the .snapshot directori
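As far as I know there is no such knob for child directories; the closest existing property only controls whether .zfs is visible at the root of each filesystem (dataset name is a placeholder):
  zfs set snapdir=visible tank/home   # .zfs then shows up in listings at the filesystem root only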
Hi Jason,
On Saturday we ran some tests and found that disabling an FC port under heavy load
(MPxIO enabled) often leads to a panic (using RAID-Z!).
No problems with UFS ...
later,
Gino