Actually, random writes on a RAID-5, while not performing that well because of
the pre-read, don't require a full stripe read (or write). They only require
reading the old data and parity, then writing the new data and parity. This is
quite a bit better than a full-stripe operation, since only two actual disks
are touched.
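(To spell out the arithmetic of that small-write path, as a generic RAID-5
sketch rather than anything array-specific: updating a single block D in a
stripe with parity P costs four I/Os on two disks:

    read D_old, read P_old
    P_new = P_old XOR D_old XOR D_new
    write D_new, write P_new

A full-stripe operation, by contrast, touches every disk in the group.)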
Hey,
> You'll need to use one of the OpenSolaris/ZFS community releases
> to use the snapshot -r option, starting at build 43.
Bugger,
Anyone have an idea if it'll be patched into 06/06, or is it only planned
for a future release?
P
Hi Patrick,
You'll need to use one of the OpenSolaris/ZFS community releases
to use the snapshot -r option, starting at build 43.
Cindy
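(A sketch of what that looks like on a build that has the -r option; the
pool, snapshot and host names here are made up:

    # zfs snapshot -r tank@backup1
    # zfs send tank/mail@backup1 | ssh otherhost zfs recv backuppool/mail

snapshot -r takes one snapshot per dataset in the pool, all with the same
name, so each dataset can then be sent individually.)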
Patrick wrote:
> Hey,
> > Would 'zfs snapshot -r poolname' achieve what you want?
> I suppose the idea would... but alas:
> [EMAIL PROTECTED]:/# zfs snapshot -r [EMAIL PROTECTED]
Hey,
> Would 'zfs snapshot -r poolname' achieve what you want?
I suppose the idea would... but alas:
[EMAIL PROTECTED]:/# zfs snapshot -r [EMAIL PROTECTED]
invalid option 'r'
usage:
snapshot <[EMAIL PROTECTED]|[EMAIL PROTECTED]>
[EMAIL PROTECTED]:/#
( solaris 06/06, with all patches )
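(A rough workaround on 06/06, until -r is available there, is to snapshot
each filesystem in the pool one at a time; something along these lines, with
the snapshot name being a placeholder:

    # zfs list -H -o name -t filesystem -r blah | \
          while read fs; do zfs snapshot "$fs@migrate"; done

Unlike snapshot -r this isn't atomic across the pool, so it's best done while
the mailstore is quiet.)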
Would 'zfs snapshot -r poolname' achieve what you want?
On 29/09/06, Patrick <[EMAIL PROTECTED]> wrote:
> Hi,
> Is it possible to create a snapshot, for ZFS send purposes, of an entire pool ?
--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
Hi,
Is it possible to create a snapshot, for ZFS send purposes, of an entire pool ?
Patrick
--
Patrick
patrick eefy net
On Fri, 2006-09-29 at 09:41 +0200, Roch wrote:
> Erik Trimble writes:
> > On Thu, 2006-09-28 at 10:51 -0700, Richard Elling - PAE wrote:
> > > Keith Clay wrote:
> > > > We are in the process of purchasing new san/s that our mail server
> runs
> > > > on (JES3). We have moved our mailstores t
On Sep 29, 2006, at 6:24 AM, Roch wrote:
Keith Clay writes:
On Sep 29, 2006, at 2:41 AM, Roch wrote:
IMO, RAIDZn should perform admirably on the write loads.
The random read aspect is more limited. The simple rule of
thumb is to consider that a RAIDZ group will deliver random
read IOPS with
In a raid config, you can have hot spares in case a drive goes bad.
Is replacing a mirror the only way to do something similar?
keith
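(ZFS does have hot spare support in the newer bits, though whether it is in a
given release is another question. A sketch, with made-up device names:

    # zpool add tank spare c2t0d0
      (adds a hot spare to the pool)
    # zpool replace tank c1t3d0 c2t0d0
      (manually swaps a failed disk for another; zpool status shows the resilver)

Without spare support, a manual zpool replace after a failure is the way to
do it.)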
Pierre Klovsjo wrote:
> Is ZFS/MPxIO strong enough to be an alternative to VxVM/DMP today? Has anyone
> done this change and if so, what was your experience?
I can only speak for the multipathing side, but I know that we've had
plenty of sites change from DMP to MPxIO. Most of the rationale the
cu
Mirroring is more efficient for small reads (the size of one ZFS block or less)
because only one disk has to be accessed. Since RAID-Z spreads a ZFS block
across multiple disks, and the data from all disks is required to verify the
checksum, every read accesses every disk in the group.
Mirror read:
1. Pic
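(To put rough, purely illustrative numbers on the read side: take 12 disks
that each manage on the order of 150 random read IOPS. As 6 two-way mirrors,
small random reads can be spread across all 12 spindles, so the pool can
approach 12 x 150 = ~1800 IOPS. As two 6-disk RAID-Z groups, each group
behaves like roughly one disk for small random reads, so the pool delivers
about 2 x 150 = ~300 IOPS. Writes are a different matter, since ZFS batches
them into full-stripe writes.)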
On 9/29/06, Patrick <[EMAIL PROTECTED]> wrote:
> Hey,
> so i created a snapshot, ( zfs snapshot blah/mail/[EMAIL PROTECTED] ) and
> then i tried to send it across the network, ( zfs send
> blah/mail/[EMAIL PROTECTED] | ssh [EMAIL PROTECTED] zfs recv
> blah/mail/[EMAIL PROTECTED] ) and i only seem to be getting 2 - 4 mb /
> s...
On September 29, 2006 11:09:21 AM -0500 Keith Clay <[EMAIL PROTECTED]> wrote:
> Folks,
> I've heard that for small reads/writes (like a mailstore), mirroring is
> preferred to raiding. Can someone explain why or direct me to that info?
> I assumed that raidzn would be the preferred method but apparently not.
Folks,
I've heard that for small reads/writes (like a mailstore), mirroring
is preferred to raiding. Can someone explain why or direct me to that
info? I assumed that raidzn would be the preferred method but
apparently not.
thanks,
keith
Hello Keith,
Friday, September 29, 2006, 3:17:14 PM, you wrote:
KC> Folks,
KC> Let's say you have a box that dies with zfs filesystems from a san on
KC> them (a mirror config) and so you can't export the fs. Can you move
KC> the fs to a new box? How so?
If you configure your SAN so new host
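(If the new host can see the same LUNs, the pool can simply be imported on
it; a sketch with a made-up pool name:

    # zpool import
      (scans the attached devices and lists any importable pools)
    # zpool import -f mailpool
      (-f is needed because the dead box never got to export it)

After the import the filesystems mount as before and show up in zfs list.)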
Hey,
so i created a snapshot, ( zfs snapshot blah/mail/[EMAIL PROTECTED] ) and
then i tried to send it across the network, ( zfs send
blah/mail/[EMAIL PROTECTED] | ssh [EMAIL PROTECTED] zfs recv
blah/mail/[EMAIL PROTECTED] ) and i only seem to be getting 2 - 4 mb /
s... unless i'm missing something
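(One way to narrow down where the 2 - 4 mb/s is being lost; dataset and host
names below are placeholders:

    # ptime sh -c 'zfs send pool/fs@snap > /dev/null'
    # ptime sh -c 'zfs send pool/fs@snap | ssh otherhost "cat > /dev/null"'

The first run shows what the pool can push on its own, the second adds ssh
and the network. If only the second one is slow, the bottleneck is the
transport (ssh encryption is a common culprit) rather than zfs send or
zfs recv.)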
Keith Clay writes:
> On Sep 29, 2006, at 2:41 AM, Roch wrote:
> > IMO, RAIDZn should perform admirably on the write loads.
> > The random read aspect is more limited. The simple rule of
> > thumb is to consider that a RAIDZ group will deliver random
> > read IOPS with
Folks,
Let's say you have a box that dies with zfs filesystems from a san on
them (a mirror config) and so you can't export the fs. Can you move
the fs to a new box? How so?
keith
On Sep 29, 2006, at 2:41 AM, Roch wrote:
IMO, RAIDZn should perform admirably on the write loads.
The random read aspect is more limited. The simple rule of
thumb is to consider that a RAIDZ group will deliver random
read IOPS with the performance characteristic of a single
device. That rul
Erik Trimble writes:
> On Thu, 2006-09-28 at 10:51 -0700, Richard Elling - PAE wrote:
> > Keith Clay wrote:
> > > We are in the process of purchasing new san/s that our mail server runs
> > > on (JES3). We have moved our mailstores to zfs and continue to have
> > > checksum errors -- they
> Why map it to mkdir rather than using zfs create ? Because mkdir means
> it will work over NFS or CIFS.
Also, your users and applications don't need to know. The administrator sets the
policy and then it just happens, plus the resulting "directories" would end up
with the correct ownership and mode.
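(A small illustration of the "administrator sets the policy" part, using the
existing zfs create path rather than the proposed mkdir mapping; names are
made up:

    # zfs set compression=on tank/home
    # zfs create tank/home/bob
      (the new filesystem inherits compression=on from tank/home)
    # zfs get -r compression tank/home
      (shows the inherited value on every child)

The idea in the proposal is that an mkdir over NFS or CIFS would end up
creating the same kind of child dataset, with the inherited properties and
the caller's ownership, without the user ever running a zfs command.)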