> [...] ZFS gives me the ability to snapshot to archive (I assume it
> works across pools?).
No. Snapshots are only within a pool. Pools are independent storage
arenas.
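A snapshot itself can't leave its pool, but you can copy one into another pool
with zfs send/receive. A minimal sketch, with hypothetical pool and dataset
names:

  zfs snapshot tank/data@archive-1
  zfs send tank/data@archive-1 | zfs receive archive/data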
--
Darren Dunham [EMAIL PROTECTED]
Senior Technical Consultant TAOS
First Post!
Sorry, I had to get that out of the way to break the ice...
I was wondering if it makes sense to zone ZFS pools by disk slice, and if it
makes a difference with RAIDZ. As I'm sure we're all aware, the end of a drive
is half as fast as the beginning (where the zoning stipulates the [...]
A couple of questions for you:
(1) What OS are you running (Solaris, BSD, MacOS X, etc)?
(2) What's your config? In particular, are any of the partitions
on the same disk?
(3) Are you copying a few big files or lots of small ones?
(4) Have you measured UFS-to-UFS and ZFS-to-ZFS performance? (A rough dd-based check is sketched below.)
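For (4), a rough sketch for comparing raw sequential write throughput; the
mountpoints are hypothetical, and on ZFS an all-zero test file may not be
representative if compression is enabled:

  dd if=/dev/zero of=/ufs/testfile bs=1024k count=1024
  dd if=/dev/zero of=/tank/fs/testfile bs=1024k count=1024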
On 7/6/07, Orvar Korvar <[EMAIL PROTECTED]> wrote:
> I have set up a ZFS raidz with 4 Samsung 500GB hard drives.
>
> It is extremely slow when I mount an NTFS partition and copy everything to
> ZFS. It's like 100 KB/sec or less. Why is that?
How are you mounting said NTFS partition?
> When I copy from the ZFS pool to UFS, I get like 40 MB/sec [...]
I have set up a ZFS raidz with 4 Samsung 500GB hard drives.
It is extremely slow when I mount an NTFS partition and copy everything to ZFS.
It's like 100 KB/sec or less. Why is that?
When I copy from the ZFS pool to UFS, I get like 40 MB/sec. Isn't that very low
considering I have 4 new 500GB disks in RAID?
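A couple of things worth watching while the copy runs, assuming a hypothetical
pool name of 'tank':

  zpool iostat -v tank 5
  iostat -xn 5

If the NTFS side (e.g. a FUSE-based NTFS mount) is only delivering ~100 KB/sec
of reads, the bottleneck is the source rather than the raidz pool.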
> But now I have another question.
> How will 8K blocks impact performance?
When tuning recordsize for things like databases, we try to recommend
that the customer's recordsize match the I/O size of the database
record.
I don't think that's the case in your situation. ZFS is clever enough
th[...]
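For example, if the database does 8K I/O, a minimal sketch (the dataset name is
hypothetical):

  zfs set recordsize=8K tank/db
  zfs get recordsize tank/db

Note that recordsize only affects newly created files, not existing ones.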
Łukasz writes:
> After a few hours of dtrace and source-code browsing, I found that in my space
> map there are no 128K blocks left.
Actually, you may have some free segments of 128K or more, but alignment
requirements will not allow them to be allocated. Consider the following
example:
1. Space [...]
Adam wrote:
> Just to let everyone know what I did to 'fix' the problem. By halting the
> zones and then exporting the zpool I was able to duplicate the drive without
> issue. I just had to import the zpool upon booting and boot the zones. Although
> my setup uses slices for the zpool (this is not [...]
Just to let everyone know what I did to 'fix' the problem. By halting the
zones and then exporting the zpool I was able to duplicate the drive without
issue. I just had to import the zpool upon booting and boot the zones. Although
my setup uses slices for the zpool (this is not supported by Sun), [...]
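For anyone following along, a sketch of that sequence with hypothetical zone
and pool names:

  zoneadm -z myzone halt
  zpool export mypool
  # ... duplicate the drive here ...
  zpool import mypool
  zoneadm -z myzone boot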
If you want to know which block sizes you are failing to allocate:

dtrace -n 'fbt::metaslab_group_alloc:entry { self->s = arg1; }' \
       -n 'fbt::metaslab_group_alloc:return /arg1 != -1/ { self->s = 0; }' \
       -n 'fbt::metaslab_group_alloc:return /self->s && (arg1 == -1)/ { @s = quantize(self->s); self->s = 0; }' \
       -n 'tick-10s { printa(@s); }'
After a few hours of dtrace and source-code browsing, I found that in my space
map there are no 128K blocks left.
Try this on your ZFS:

dtrace -n 'fbt::metaslab_group_alloc:return /arg1 == -1/ {}'

If you see probes firing, then you have the same problem.
Allocating from the space map works like this: [...]
All,
As a follow-up on this issue: this was not a ZFS issue after all; it was a
configuration issue which I'm still curious about.
I had changed the ownership of the directory that was going to collect the
anonymous downloads from root:sys to a user that had the same UID and GID on
both hosts, and per[...]
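One way to sanity-check that kind of setup (the path is hypothetical) is to
compare numeric ownership on both hosts, since the same name can map to
different IDs:

  ls -ln /export/ftp/incoming    # -n prints numeric UID/GID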
The ms_smo.smo_objsize field in the metaslab struct is the on-disk size of the space map.
I checked the size of the metaslabs in memory:

::walk spa | ::walk metaslab | ::print struct metaslab ms_map.sm_root.avl_numnodes

I got 1GB.
But only some metaslabs are loaded:
::walk spa | ::walk metaslab | ::print struct metaslab [...]
13 matches
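A hedged sketch for counting loaded metaslabs from the shell, assuming a live
kernel target and the same field names used above:

  echo '::walk spa | ::walk metaslab | ::print struct metaslab ms_map.sm_loaded' | \
      mdb -k | grep -c 0x1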