> Using a ZFS filesystem within a zone will go just as fast as in the
> global zone, so there's no need to create multiple pools.
So, Robert is actually wrong (at least in theory): using a ZFS filesystem via
add:fs:dir..,type=lofs probably gives lower performance than using it via
add:dataset:name. Co
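For reference, the two ways of exposing a ZFS filesystem to a zone look roughly
like this in zonecfg (just a sketch; the zone name "myzone" and dataset
"pool1/data" are made-up examples, not anything from this thread):

  # lofs route: the global zone mounts the filesystem, the zone sees it via loopback
  zonecfg -z myzone
  add fs
  set dir=/export/data
  set special=/pool1/data
  set type=lofs
  end

  # dataset route: the dataset itself is delegated to the zone
  zonecfg -z myzone
  add dataset
  set name=pool1/data
  end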
Steffen Weiberle wrote:
Is there any known processing that runs every 4 minutes and lasts 10 to 15
seconds that might be introduced by ZFS?
No. But you can use tools like dtrace or lockstat -kgIW to determine
where that additional CPU time is being spent, and then see if ZFS is to
blame.
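For example, something along these lines, run as root during one of the 10-15
second busy periods (generic invocations, not specific to this system):

  # sample kernel profiling data for 15 seconds, show the top 20 callers
  lockstat -kgIW -D 20 sleep 15

  # or profile kernel stacks with DTrace for 15 seconds
  dtrace -n 'profile-997 /arg0/ { @[stack()] = count(); } tick-15s { exit(0); }'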
Jens Elkner wrote:
Yes, I guessed that, but hopefully not that much ...
Thinking about it, that would suggest to me (if I need absolute maximum
performance) that the best thing to do is to create a pool inside the zone
and use ZFS on it?
Using a ZFS filesystem within a zone will go just as fast as in the
global zone, so there's no need to create multiple pools.
Robert Milkowski wrote:
Hello Jeremy,
Monday, October 23, 2006, 5:04:09 PM, you wrote:
JT> Hello,
Shrinking the vdevs requires moving data. Once you move data, you've
got to either invalidate the snapshots or update them. I think that
will be one of the more difficult parts.
JT> Updating
Robert Milkowski wrote:
>
Hi Robert,
>
> Monday, October 23, 2006, 7:15:39 PM, you wrote:
> JE> 3) the /pool1/flexlm.ulo filesystem has atime=off set. Do I need
> JE> to specify this option or something similar, when creating the zone?
>
> no, you don't.
OK.
> JE> 4) Wrt. best performance, onl
How I managed to make this happen, I'm now no longer sure of.
After upgrading my workstation to Solaris 10, Update 2, I could
not find any ZFS pools to import where I thought they were.
Whether this is due to the partitioning not being correctly preserved
or some other problem remains a mystery.
Hey Robert,
No, all the code fixes and features I mentioned before were developed and
putback before I left Sun, so no active development is happening or
anything. I still like to hang out on the zfs alias though, just
because I still luv ZFS and want to keep tabs on it even if I'm not at
Sun.
> > We use VxVM quite a bit at my place of employment, and are extremely
> > interested in moving to ZFS to reduce complexity and costs. One useful
> > feature that is in VxVM that doesn't seem to be in ZFS is the ability to
> > migrate vdevs between pools.
>
> Could you be more specific? Are
Hello Jeremy,
Monday, October 23, 2006, 5:04:09 PM, you wrote:
JT> Hello,
>> Shrinking the vdevs requires moving data. Once you move data, you've
>> got to either invalidate the snapshots or update them. I think that
>> will be one of the more difficult parts.
JT> Updating snapshots would be
Hello Noël,
I've just had to ask this - your email address is @apple.com and looks
like you are actively developing ZFS - does that mean Apple is looking
into porting/using ZFS for real?
--
Best regards,
Robert    mailto:[EMAIL PROTECTED]
Hello Jens,
Monday, October 23, 2006, 7:15:39 PM, you wrote:
JE> 3) the /pool1/flexlm.ulo filesystem has atime=off set. Do I need
JE> to specify this option or something similar, when creating the zone?
no, you don't.
JE> 4) Wrt. best performance, only , what should one prefer: add fs:dir or ad
typo: there shouldn't be a leading '/' before snap1 in the example below. Apologies.
Noel
On Oct 23, 2006, at 2:10 PM, Noël Dellofano wrote:
# zfs send -i /snap1 /tank/[EMAIL PROTECTED] > backup.out
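In other words, without the leading '/' the incremental send takes a form like
the following (illustrative dataset and snapshot names only, since the archive
mangled the original ones):

  # zfs send -i snap1 tank/home@snap2 > backup.out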
It's also worth it to note that I recently added a '-F' flag to zfs
receive for precisely this sort of annoying problem :) I meant to send
a heads up to everyone about it but had not gotten to it yet.
Basically, when you specify '-F' flag to receive, a zfs rollback and a
receive are done at
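As a rough sketch of how that would be used (hypothetical pool and snapshot
names):

  # zfs send -i snap1 tank/home@snap2 | zfs receive -F backup/home

With -F, the destination is rolled back to its most recent snapshot before the
stream is applied, instead of the receive failing because the destination has
been modified since the last receive.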
Matty wrote:
We use VxVM quite a bit at my place of employment, and are extremely
interested in moving to ZFS to reduce complexity and costs. One useful
feature that is in VxVM that doesn't seem to be in ZFS is the ability to
migrate vdevs between pools.
Could you be more specific? Are you
Howdy,
We use VxVM quite a bit at my place of employment, and are extremely
interested in moving to ZFS to reduce complexity and costs. One useful
feature that is in VxVM that doesn't seem to be in ZFS is the ability to
migrate vdevs between pools. This is extremely useful when you want to
s
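One workaround in ZFS today (a sketch with hypothetical pool and dataset names)
is to copy the data with a snapshot plus send/receive rather than re-homing the
devices:

  # zfs snapshot pool1/data@move
  # zfs send pool1/data@move | zfs receive pool2/data

That moves the blocks, though, which is not the same as VxVM handing a disk
group from one host or pool to another.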
Dennis Clarke wrote:
While ZFS may do a similar thing, *I don't know* if there is a published
document yet that shows conclusively that ZFS will survive multiple disk
failures.
?? why not? Perhaps this is just too simple and therefore doesn't get
explained well.
That is
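For what it's worth, double-parity raidz2 is the configuration intended to
survive two simultaneous disk failures per vdev; a minimal sketch (hypothetical
device names):

  # zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0
  # zpool status tank

With any two of those five disks faulted the pool should remain online, and
zpool scrub tank can be used to verify the checksums of the surviving data.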
I've created a zone which should mount the /pool1/flexlm.ulo ZFS filesystem via lofs:
+ zfs create pool1/flexlm.ulo
+ zfs set atime=off pool1/flexlm.ulo
+ zfs set sharenfs=off pool1/flexlm.ulo
+ zonecfg -z flexlm
...
add fs
set dir=/usr/local
set special=/pool1/flexlm.ulo
set type=lofs
end
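Once the zone is installed, a quick sanity check might look like this (the zone
name flexlm, the dataset, and the /usr/local mountpoint come from the steps
above; the exact commands are just one way to verify it):

  # zoneadm -z flexlm boot
  # zlogin flexlm df -h /usr/local
  # zfs get atime pool1/flexlm.ulo     (run in the global zone)

The atime=off setting lives on the dataset in the global zone, so the lofs
mount inside the zone inherits it without any extra zonecfg options.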
> > Shrinking the vdevs requires moving data. Once you move data, you've
> > got to either invalidate the snapshots or update them. I think that
> > will be one of the more difficult parts.
>
> Updating snapshots would be non-trivial, but doable. Perhaps some sort
> of reverse mapping or brute f
Customer benchmarked an X4600 using UFS on top of VxVM a while back
and got consistent performance under heavy load. Now they have put the
system into system test, but in the process moved from UFS/VxVM to
ZFS. This is Solaris 10 6/06.
They are running at approximately 40% idle most of the time, with 10+%
Hello Krzys,
Monday, October 23, 2006, 5:14:06 PM, you wrote:
K> Awesome, thanks for your help. Will there be any way to convert raidz to
K> raidz2?
No, there's no such tool right now.
--
Best regards,
Robert    mailto:[EMAIL PROTECTED]
SVM did RAID 0+1, i.e. it mirrored entire sub-mirrors. However, SVM
mirroring did not incur the problem that Richard alludes to: a
single disk failure on a sub-mirror did not take down the entire
sub-mirror, because the reads and writes are smart and acted as though
it was RAID 1+0. Th
Awesome, thanks for your help. Will there be any way to convert raidz to
raidz2?
Thanks again for your help.
Chris
On Mon, 23 Oct 2006, Robert Milkowski wrote:
Hello Krzys,
Sunday, October 22, 2006, 8:42:06 PM, you wrote:
K> I have solaris 10 U2 and I have raidz partition setup on 5 disks, I ju
Hello,
Shrinking the vdevs requires moving data. Once you move data, you've
got to either invalidate the snapshots or update them. I think that
will be one of the more difficult parts.
Updating snapshots would be non-trivial, but doable. Perhaps some sort
of reverse mapping or brute force s
Hello Krzys,
Sunday, October 22, 2006, 8:42:06 PM, you wrote:
K> I have Solaris 10 U2 and I have a raidz pool set up on 5 disks. I just added a
K> new disk and was wondering, can I add another disk to the raidz? I was able to add
K> it to the pool but I do not think it added it to the raidz.
You can't
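If the goal is simply to grow the pool, the usual approach (a sketch, with
made-up device names) is to add a second raidz vdev alongside the existing one
rather than trying to widen it:

  # zpool add tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0
  # zpool status tank

zpool will warn if the new vdev's redundancy level doesn't match the existing
one, which is a hint that a single bare disk added with plain zpool add would
have become an unprotected top-level vdev.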
Perfect, thanks for all the answers. The solution that Darren suggested to me
can be implemented even between Linux -> Linux. No more no_root_squash on home
directories, which was a bad thing.
Thanks again.
Torrey McMahon writes:
> Reads? Maybe. Writes are another matter. Namely the overhead associated
> with turning a large write into a lot of small writes. (Checksums for
> example.)
>
> Jeremy Teo wrote:
> > Hello all,
> >
> > Isn't a large block size a simple case of prefetching? In oth
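The relevant per-dataset knob here is recordsize; as a hedged example, for a
database doing fixed 8K I/O (dataset name made up):

  # zfs set recordsize=8k tank/db
  # zfs get recordsize tank/db

Matching recordsize to the application's I/O size avoids the read-modify-write
(and re-checksum) of a full 128K record for every small write; note that the
setting only affects files created after the change.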