[EMAIL PROTECTED] wrote:
One of the benefits of ZFS is that not only is head synchronization not
needed, but also block offsets do not have to be the same. For example,
in a traditional mirror, block 1 on device 1 is paired with block 1 on
device 2. In ZFS, this 1:1 mapping is not required. I
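The mirroring described above can be seen with an ordinary two-way ZFS mirror; a minimal sketch, assuming two free disks (the device names are illustrative):

```shell
# Create a two-way mirror; ZFS tracks each block by its DVA rather than
# by a fixed 1:1 offset pairing between the two devices.
zpool create tank mirror c1t0d0 c2t0d0

# Both sides hold a full copy of every block, wherever it was allocated.
zpool status tank
```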
On 1/30/07, Karen Chau <[EMAIL PROTECTED]> wrote:
dmpk14a603# zoneadm -z snitch-zone04 halt
zoneadm: zone 'snitch-zone04': unable to unmount '/snitch-zone04/root/tmp'
zoneadm: zone 'snitch-zone04': unable to unmount file systems in zone
zoneadm: zone 'snitch-zone04': unable to destroy zone
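A common cause of this error is a process still holding the zone's /tmp busy. A sketch of one way to chase it down from the global zone (assuming the paths from the error messages above):

```shell
# See which processes are keeping the zone's tmp filesystem busy.
fuser -c /snitch-zone04/root/tmp

# Once the offending processes are dealt with, force the unmount
# and retry the halt.
umount -f /snitch-zone04/root/tmp
zoneadm -z snitch-zone04 halt
```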
I wo
Jeremy is correct. There is actually an RFE open to allow a "zpool split" that
would have allowed you to detach the second disk while keeping the vdev data
(and thus allowing you to pull in the data on the detached disk using some sort
of "import" type command).
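For readers finding this thread later: that RFE was eventually delivered as `zpool split` in newer releases. A sketch, assuming a mirrored pool named tank (pool names are illustrative):

```shell
# Split one side of each mirror off into a new pool of its own.
zpool split tank tank2

# The detached halves now form a separate pool that can be imported.
zpool import tank2
```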
Rainer
This message posted f
> On 1/18/07, . <[EMAIL PROTECTED]> wrote:
>
> SYBA SD-SATA-4P PCI SATA Controller Card (
> http://www.newegg.com/product/Product.asp?item=N82E168
> 15124020 )
>
>
From my home ZFS server setup, I had tried two Syba SD-SATA2-2E2I PCI-X SATA
II Controller Cards without any luck; both cards' BI
> > I think he means that if a block fails to write on a VDEV, ZFS can write
> > that data elsewhere and is not forced to use that location. As opposed
> > to SVM as an example, where the mirror must try to write at a particular
> > offset or fail.
>
> Understood, I am asking if the current code
> > > One of the benefits of ZFS is that not only is head synchronization not
> > > needed, but also block offsets do not have to be the same. For example,
> > > in a traditional mirror, block 1 on device 1 is paired with block 1 on
> > > device 2. In ZFS, this 1:1 mapping is not required. I
Richard Elling wrote:
>
> One of the benefits of ZFS is that not only is head synchronization not
> needed, but also block offsets do not have to be the same. For example,
> in a traditional mirror, block 1 on device 1 is paired with block 1 on
> device 2. In ZFS, this 1:1 mapping is not require
On 30-Jan-07, at 5:48 PM, Richard Elling wrote:
...
One of the benefits of ZFS is that not only is head synchronization not
needed, but also block offsets do not have to be the same. For example,
in a traditional mirror, block 1 on device 1 is paired with block 1 on
device 2. In ZFS, thi
Nicolas Williams wrote:
On Tue, Jan 30, 2007 at 06:41:25PM +0100, Roch - PAE wrote:
I think I got the point. Mine was that if the data travels a
single time toward the storage and is corrupted along the
way then there will be no hope of recovering it since the
array was given bad data. Having t
I've blogged about the trade-offs for space, performance, and MTTDL (RAS)
for ZFS and RAID in general.
http://blogs.sun.com/relling/entry/zfs_raid_recommendations_space_performance
Enjoy.
-- richard
On January 30, 2007 9:59:45 AM -0800 Neal Pollack <[EMAIL PROTECTED]>
wrote:
I've got my first server deployment with ZFS.
Consolidating a pair of other file servers that used to have
a dozen or so NFS exports in /etc/dfs/dfstab similar to;
/export/solaris/images
/export/tools
/export/ws
. a
Neal Pollack wrote:
I've got my first server deployment with ZFS.
Consolidating a pair of other file servers that used to have
a dozen or so NFS exports in /etc/dfs/dfstab similar to;
/export/solaris/images
/export/tools
/export/ws
. and so on
For the new server, I have one large zfs po
I'm unable to boot a zone after I did a sys-unconfig, how do I recover?
GLOBAL ZONE:
---
dmpk14a603# zoneadm list -cv
  ID NAME            STATUS    PATH
   0 global          running   /
   1 snitch-zone02   running   /snitch-zone02
   4 snitch-zone04   down
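After a sys-unconfig, the usual recovery is to boot the zone and answer the system identification prompts on its console; a sketch using the zone name above:

```shell
# Boot the unconfigured zone from the global zone.
zoneadm -z snitch-zone04 boot

# Attach to the zone console and answer the system identification
# prompts (hostname, timezone, root password, ...).
zlogin -C snitch-zone04
```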
On Tue, Jan 30, 2007 at 09:24:26AM -0800, Richard Elling wrote:
>
> For Solaris labeled disks, block 0 contains the vtoc. If you overwrite
> block 0 with junk, then this is the error message you should see.
>
Also note that for EFI labelled disks, Solaris will create a 'bare' dev
link that corre
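Tangentially: if only the label in block 0 was clobbered and a saved copy of the VTOC exists, it can be written back. A sketch with an illustrative device name:

```shell
# Save a disk's VTOC while it is still healthy ...
prtvtoc /dev/rdsk/c0t0d0s2 > /var/tmp/c0t0d0.vtoc

# ... and write it back after block 0 has been overwritten.
fmthard -s /var/tmp/c0t0d0.vtoc /dev/rdsk/c0t0d0s2
```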
I've got my first server deployment with ZFS.
Consolidating a pair of other file servers that used to have
a dozen or so NFS exports in /etc/dfs/dfstab similar to;
/export/solaris/images
/export/tools
/export/ws
. and so on
For the new server, I have one large zfs pool;
-bash-3.00# df -h
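With one large pool, the dozen dfstab entries above typically become per-export ZFS filesystems whose NFS shares ZFS manages itself; a sketch, assuming the pool is called tank:

```shell
# One filesystem per old export, so each keeps its own properties.
zfs create -p tank/export/solaris/images
zfs create tank/export/tools
zfs create tank/export/ws

# sharenfs replaces the /etc/dfs/dfstab entries; children inherit it.
zfs set sharenfs=on tank/export
zfs get -r sharenfs tank/export
```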
On Tue, Jan 30, 2007 at 06:41:25PM +0100, Roch - PAE wrote:
> I think I got the point. Mine was that if the data travels a
> single time toward the storage and is corrupted along the
> way then there will be no hope of recovering it since the
> array was given bad data. Having the data travel twic
Nicolas Williams writes:
> On Tue, Jan 30, 2007 at 06:32:14PM +0100, Roch - PAE wrote:
> > > The only benefit of using a HW RAID controller with ZFS is that it
> > > reduces the I/O that the host needs to do, but the trade off is that ZFS
> > > cannot do combinatorial parity reconstruction
On Tue, Jan 30, 2007 at 06:32:14PM +0100, Roch - PAE wrote:
> > The only benefit of using a HW RAID controller with ZFS is that it
> > reduces the I/O that the host needs to do, but the trade off is that ZFS
> > cannot do combinatorial parity reconstruction so that it could only
> > detect erro
Nicolas Williams writes:
> On Thu, Jan 25, 2007 at 10:57:17AM +0800, Wee Yeh Tan wrote:
> > On 1/25/07, Bryan Cantrill <[EMAIL PROTECTED]> wrote:
> > >...
> > >after all, what was ZFS going to do with that expensive but useless
> > >hardware RAID controller? ...
> >
> > I almost rolled ov
dudekula mastan wrote:
I don't know whether this is the right place to discuss my questions.
I opened a device (in raw mode) and filled the entire space (from the
first block to the last block) with random data. While writing, I am
seeing the following warning messages in the dmesg buffer.
J
On Mon, Jan 29, 2007 at 09:37:57PM -0500, David Magda wrote:
> On Jan 29, 2007, at 20:27, Toby Thain wrote:
>
> >On 29-Jan-07, at 11:02 PM, Jason J. W. Williams wrote:
> >
> >>I seem to remember the Massive Array of Independent Disk guys ran
> >>into
> >>a problem I think they called static fric
ZFS does release memory if I e.g. do a simple malloc(), but with intimate
shared memory (flag SHM_SHARE_MMU in the call to shmat()) this does not
happen. BTW the OS here was Solaris 10 U2; the 8 GB machines I'm using now
are running U3.
Hmm, looks like this may have been fixed in U3, th
Bjorn Munch writes:
> Hello,
>
> I am doing some tests using ZFS for the data files of a database
> system, and ran into memory problems which has been discussed in a
> thread here a few weeks ago.
>
> When creating a new database, the data files are first initialized to
> their configur
David Magda wrote:
What about a rotating spare?
When setting up a pool a lot of people would (say) balance things
around buses and controllers to minimize single points of failure,
and a rotating spare could disrupt this organization, but would it be
useful at all?
Functionally, that sound
Hello,
On 1/30/07, Robert Milkowski <[EMAIL PROTECTED]> wrote:
Hello zfs-discuss,
I had a pool with only two disks in a mirror. I detached one disk
and later erased the first disk. Now I would really like to quickly
get the data from the second disk available again. Other than detaching
t
Hello,
I am doing some tests using ZFS for the data files of a database
system, and ran into memory problems which has been discussed in a
thread here a few weeks ago.
When creating a new database, the data files are first initialized to
their configured size (written in full), then the servers a
Hello zfs-discuss,
I had a pool with only two disks in a mirror. I detached one disk
and later erased the first disk. Now I would really like to quickly
get the data from the second disk available again. Other than detaching
the second disk, nothing else was done to it.
Has anyone written
Hi All,
I don't know whether this is the right place to discuss my questions.
I opened a device (in raw mode) and filled the entire space (from the
first block to the last block) with random data. While writing, I am
seeing the following warning messages in the dmesg buffer.
>Ok, I'll bite. It's been a long day, so that may be why I can't see
>why the radioisotopes in lead that was dug up 100 years ago would be
>any more depleted than the lead that sat in the ground for the
>intervening 100 years. Half-life is half-life, no?
>Now if it were something about the