A good place to start is: http://www.opensolaris.org/os/community/zfs/
Have a look at:
http://www.opensolaris.org/os/community/zfs/docs/
as well as
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#
Create some files, which you can use as disks within zfs and demo to
you
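A minimal sketch of that approach, using file-backed vdevs (the file names,
sizes, and pool name below are just examples):
# mkfile 128m /var/tmp/zdisk1 /var/tmp/zdisk2
# zpool create demopool mirror /var/tmp/zdisk1 /var/tmp/zdisk2
# zpool status demopool
File vdevs are fine for experimenting and demos, though not something you
would want to keep production data on.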
I did say depends on the guarantees, right? :-) My point is that all
hw raid systems are not created equally.
Nathan Kroenert wrote:
Which has little benefit if it's the HBA or the array internals that change
the meaning of the message...
That's the whole point of ZFS's checksumming - It's end to end...
Nathan.
Torrey McMahon wrote:
Toby Thain wrote:
On 22-May-07, at 11:01 AM, Louwtjie Burger wrote:
On 5/22/07, Pål Baltzersen <[EMAIL PROTECTED]> wrote:
What if your HW RAID controller dies? In, say, 2 years or more...
What will read your disks as a configured RAID? Do you know how to
(re)configure the controller or restore
Albert Chin wrote:
On Thu, May 24, 2007 at 11:55:58AM -0700, Grant Kelly wrote:
I'm getting really poor write performance with ZFS on a RAID5 volume
(5 disks) from a storagetek 6140 array. I've searched the web and
these forums and it seems that this zfs_nocacheflush option is the
solution,
On 24-May-07, at 6:51 PM, [EMAIL PROTECTED] wrote:
[EMAIL PROTECTED] wrote:
You're right of course and lots of people use them. My point is that
Solaris has been 64 bits longer than most others. ...
IRIX was much earlier than Solaris; Solaris was pretty late in the
64 bit game wi
Don't take these numbers too seriously - those were only first tries to
see where my port stands, and I was using OpenSolaris for comparison, which
has debugging turned on.
Yeah, ZFS does a lot of extra work with debugging on (such as
verifying checksums in the ARC), so always do serious performa
On 24 May, 2007 - Russell Baird sent me these 0,6K bytes:
> I have my ZFS mirror on 2 external drives and no ZFS on my boot drive.
> If I crash my boot drive and I don't have a complete backup of the
> boot drive, can I restore just the /etc/zfs/zpool.cache file?
>
> I tried it and it worked.
I have my ZFS mirror on 2 external drives and no ZFS on my boot drive. If I
crash my boot drive and I don't have a complete backup of the boot drive, can I
restore just the /etc/zfs/zpool.cache file?
I tried it and it worked. Once I rebooted with the new drive, the ZFS pool
reappeared. I ju
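For what it's worth, even without zpool.cache a pool whose disks survive can
normally be rediscovered; a sketch (the pool name is made up):
# zpool import            (scans attached devices and lists importable pools)
# zpool import mypool
The cache file mainly saves the device scan at boot; the pool configuration
itself also lives in the labels on the pool's disks.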
I have my ZFS mirror on 2 external drives and no ZFS on my boot drive. If I
lose my boot drive and I don't have a complete backup, can I restore just the
/etc/zfs
Hi,
I believe Solaris 10 Update 3 supports zfs backup and restore. How can I
upgrade previous versions of Solaris to run zfs backup/restore, and where can I
download the relevant versions?
Also, I have a customer who wants to know (now I am interested too) the detailed
information of how the zfs sn
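For reference, a minimal sketch of snapshot-based backup and restore with
zfs send/receive (the pool, dataset, and file names are made up):
# zfs snapshot tank/home@backup1
# zfs send tank/home@backup1 > /backup/home.zfs
# zfs receive tank/restored < /backup/home.zfs
zfs send writes a stream you can store in a file or pipe to another host;
zfs receive recreates the dataset from that stream.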
Arif,
You need to boot from {net | DVD} in single-user mode, like this:
boot net -s or boot cdrom -s
Then, when you get to a shell prompt, relabel the disk like this:
# format -e
format> label
[0] SMI Label
[1] EFI Label
Specify Label type[0]: 0
Then, you should be able to repartition howev
[EMAIL PROTECTED] wrote:
>
> >You're right of course and lots of people use them. My point is that
> >Solaris has been 64 bits longer than most others. I think AIX got
> >64 bits after Solaris and Linux (via Alpha) did.
> >Irix was 64 bit near the same time as Solaris but the end of
I accidentally created a zpool on a boot disk; it panicked the system
and now I can't jumpstart and install the OS on it.
This is what it looks like.
partition> p
Current partition table (original):
Total disk sectors available: 17786879 + 16384 (reserved sectors)
Part      Tag    Flag     First
>You're right of course and lots of people use them. My point is that
>Solaris has been 64 bits longer than most others. I think AIX got
>64 bits after Solaris and Linux (via Alpha) did.
>Irix was 64 bit near the same time as Solaris but the end of the Irix
>is visible. Did they por
> On Wed, May 23, 2007 at 08:03:41AM -0700, Tom Buskey wrote:
> >
> > Solaris is 64 bits with support for 32 bits. I've been running 64 bit
> > Solaris since Solaris 7 as I imagine most Solaris users have. I don't
> > think any other major 64 bit OS has been in general use as long (VMS?).
>
On Thu, May 24, 2007 at 11:55:58AM -0700, Grant Kelly wrote:
> I'm running SunOS Release 5.10 Version Generic_118855-36 64-bit
> and in /etc/system I put:
>
> set zfs:zfs_nocacheflush = 1
>
> And after rebooting, I get the message:
>
> sorry, variable 'zfs_nocacheflush' is not d
On 24/05/07, Brian Hechinger <[EMAIL PROTECTED]> wrote:
I don't know about FreeBSD ports, but NetBSD's ports system works very
well on Solaris. The only thing I didn't like about it is that it considers
gcc a dependency for certain things, so even though I have Studio 11
installed, it would insist on
Anton B. Rang wrote:
Richard wrote:
Any system which provides a single view of data (eg. a persistent storage
device) must have at least one single point of failure.
Why?
Consider this simple case: A two-drive mirrored array.
Use two dual-ported drives, two controllers, two power supplies,
a
Hi,
I'm running SunOS Release 5.10 Version Generic_118855-36 64-bit
and in /etc/system I put:
set zfs:zfs_nocacheflush = 1
And after rebooting, I get the message:
sorry, variable 'zfs_nocacheflush' is not defined in the 'zfs' module
So is this variable not available in the
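One way to check whether the kernel you are running knows about the tunable at
all (a sketch; assumes mdb is available, and output formatting may differ):
# echo zfs_nocacheflush/D | mdb -k
If mdb cannot find the symbol, the kernel predates the tunable, and setting it
in /etc/system is expected to produce exactly the 'not defined' message above.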
Richard wrote:
> Any system which provides a single view of data (eg. a persistent storage
> device) must have at least one single point of failure.
Why?
Consider this simple case: A two-drive mirrored array.
Use two dual-ported drives, two controllers, two power supplies,
arranged roughly as fo
Right now -- as I'm sure you have noticed -- we use the dataset name for
the alias. To let users explicitly set the alias we could add a new property
as you suggest or allow other options for the existing shareiscsi property:
shareiscsi='alias=potato'
This would sort of match what we do for th
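For reference, the auto-generated alias can already be inspected on the target
side; a sketch (the dataset name is made up and assumed to be an existing zvol):
# zfs set shareiscsi=on tank/iscsivol
# iscsitadm list target -v
The verbose listing should show the target's IQN and its alias, which today is
the dataset name.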
> Please tell us how many storage arrays are required to meet a
> theoretical I/O bandwidth of 244 GBytes/s?
Just considering disks, you need approximately 6,663 disks, all streaming
50 MB/sec, with RAID-5 3+1 (for example).
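Spelling out the arithmetic behind that figure (treating 1 GB as 1024 MB):
244 GB/s = 244 x 1024 = 249,856 MB/s; at 50 MB/s per disk that is about
249,856 / 50 ≈ 4,997 data disks; with RAID-5 3+1 only 3 of every 4 disks
carry data, so 4,997 x 4/3 ≈ 6,663 disks in total.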
That is assuming sustained large block sequential I/O. If you have 8 KB
Ran
This looks like another instance of
6429205 each zpool needs to monitor its throughput and throttle heavy writers
or at least it is a contributing factor.
Note that your /etc/system is misspelled (maybe just in the e-mail).
Didn't you get a console message?
-r
On 24 May 07, at 09:50, Amer
Anton B. Rang wrote:
Thumper seems to be designed as a file server (but curiously, not for high
availability).
hmmm... Often people think that because a system is not clustered, it is not
designed to be highly available. Any system which provides a single view of
data (eg. a persistent
On 24-May-07, at 6:26 AM, Henk Langeveld wrote:
Richard Elling wrote:
It all depends on the configuration. For a single disk system, copies
should generally be faster than mirroring. For multiple disks, the
performance should be similar as copies are spread out over different
disks.
Here
I'd love to be able to serve zvols out as SCSI or FC targets. Are
there any plans to add this to ZFS? That would be amazingly awesome.
-brian
--
"Perl can be fast and elegant as much as J2EE can be fast and elegant.
In the hands of a skilled artisan, it can and does happen; it's just
that most
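The iSCSI half of that is already doable with zvols and shareiscsi; a sketch
(pool and volume names are made up):
# zfs create -V 10g tank/vol1
# zfs set shareiscsi=on tank/vol1
# iscsitadm list target
Native SCSI/FC target mode is a separate question from the iSCSI case sketched
here.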
On Thu, May 24, 2007 at 01:16:32PM +0200, Claus Guttesen wrote:
>
> iozone. So I installed solaris 10 on this box and wanted to keep it
> that way. But solaris lacks FreeBSD ports ;-) so when current upgraded
Not entirely. :)
I don't know about FreeBSD PORTS, but NetBSD's ports system works ver
Starting from this thread:
http://www.opensolaris.org/jive/thread.jspa?messageID=118786
I would love to have the possibility to set an iSCSI alias when doing
shareiscsi=on on ZFS. This would make it much easier to identify where an
IQN is hosted.
The iSCSI alias is defined in RFC 3721,
e.g. ht
On Thu, May 24, 2007 at 01:16:32PM +0200, Claus Guttesen wrote:
> >I'm all set for doing performance comparison between Solaris/ZFS and
> >FreeBSD/ZFS. I spent the last few weeks on FreeBSD/ZFS optimizations and I
> >think I'm ready. The machine is 1xQuad-core DELL PowerEdge 1950, 2GB
> >RAM, 15 x 74GB
I'm all set for doing performance comparison between Solaris/ZFS and
FreeBSD/ZFS. I spent the last few weeks on FreeBSD/ZFS optimizations and I
think I'm ready. The machine is 1xQuad-core DELL PowerEdge 1950, 2GB
RAM, 15 x 74GB-FC-10K disks accessed via 2x2Gbit FC links. Unfortunately the
links to disks are
On Thu, May 24, 2007 at 11:20:44AM +0100, Darren J Moffat wrote:
> Pawel Jakub Dawidek wrote:
> >Hi.
> >I'm all set for doing performance comparison between Solaris/ZFS and
> >FreeBSD/ZFS. I spent the last few weeks on FreeBSD/ZFS optimizations and I
> >think I'm ready. The machine is 1xQuad-core DELL
Or if you do want to use bfu because you really want to match your
source code revisions up to a given day, then you will need to build the
ON consolidation yourself and you can then install the non-debug bfu
archives (note you will need to download the non-debug closed bins to do
that).
The README
Pawel Jakub Dawidek wrote:
Hi.
I'm all set for doing performance comparison between Solaris/ZFS and
FreeBSD/ZFS. I spent the last few weeks on FreeBSD/ZFS optimizations and I
think I'm ready. The machine is 1xQuad-core DELL PowerEdge 1950, 2GB
RAM, 15 x 74GB-FC-10K disks accessed via 2x2Gbit FC links. Unf
Richard Elling wrote:
It all depends on the configuration. For a single disk system, copies
should generally be faster than mirroring. For multiple disks, the
performance should be similar as copies are spread out over different
disks.
Here's a crazy idea: could we use zfs on dvd for s/w dist
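For anyone following along, the copies property being discussed is set per
dataset; a sketch (the dataset name is made up):
# zfs set copies=2 tank/export
# zfs get copies tank/export
ZFS then keeps two copies of each block for that dataset, spread across the
pool's devices where possible.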
IHAC complaining about database startup failure after large files are
copied into a ZFS filesystem. If he waits for some time, then it works. It
seems that ZFS is not freeing buffers from its ARC cache fast enough.
Lockstat shows long block events for the lock arc_reclaim_thr_lock:
Adaptive mutex hold: 5
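One hedged workaround for this kind of ARC-vs-database memory pressure is to
cap the ARC in /etc/system so the database keeps its headroom; a sketch,
assuming the kernel exposes zfs_arc_max (the 1 GB value is purely illustrative
and needs sizing for the real box):
set zfs:zfs_arc_max = 0x40000000
This limits how large the ARC can grow rather than making it shrink faster,
so it is a mitigation, not a fix for the reclaim latency itself.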