Hello Sascha,
Wednesday, February 14, 2007, 6:45:30 AM, you wrote:
SB> On 13.02.2007 at 22:46, Ian Collins wrote:
>> [EMAIL PROTECTED] wrote:
>>
>>> Hello,
>>>
>>> I switched my home server from Debian to Solaris. The main reasons for
>>> this move were stability and ZFS.
>>> But now after the mig
ek> Have you increased the load on this machine? I have seen a similar
ek> situation (new requests being blocked waiting for the sync thread to
ek> finish), but that's only been when either 1) the hardware is broken
ek> and taking too long or 2) the server is way overloaded.
I don't thin
Not sure if it is related to fragmentation, but I can say that the serious
performance degradation in my NFS/ZFS benchmarks [1] is a result of the on-disk
ZFS data layout.
Read operations on directories (NFS3 readdirplus) are abnormally time-consuming.
That kills the server. After a cold restart of the
An update:
Not sure if it is related to fragmentation, but I can say that the serious
performance degradation in my NFS/ZFS benchmarks is a result of the on-disk ZFS
data layout.
Read operations on directories (NFS3 readdirplus) are abnormally time-consuming.
That kills the server. After a cold restart
Sascha Brechenmacher wrote:
>
> On 13.02.2007 at 22:46, Ian Collins wrote:
>
>> Looks like poor hardware. How was the pool built? Did you give ZFS the
>> entire drive?
>>
>> On my nForce4 Athlon64 box with two 250G SATA drives,
>>
>> zpool status tank
>>   pool: tank
>>  state: ONLINE
>>  scrub
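For reference, the whole-disk question matters because ZFS only enables the
disk's write cache when it owns the entire device. A minimal sketch of the two
variants, with hypothetical device names:

  # whole disk: ZFS labels it (EFI) and can turn on the write cache
  zpool create tank mirror c1d0 c2d0

  # slice only: ZFS leaves the write cache alone
  zpool create tank mirror c1d0s0 c2d0s0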
Hello Leon,
Wednesday, February 14, 2007, 10:35:05 AM, you wrote:
LK> An update:
LK> Not sure if it is related to fragmentation, but I can say that the
LK> serious performance degradation in my NFS/ZFS benchmarks is a
LK> result of the on-disk ZFS data layout.
LK> Read operations on directories (NFS3 r
Hello eric,
Wednesday, February 14, 2007, 9:50:29 AM, you wrote:
>>
>> ek> Have you increased the load on this machine? I have seen a similar
>> ek> situation (new requests being blocked waiting for the sync thread to
>> ek> finish), but that's only been when either 1) the hardware is
I don't think it is a good idea to create a ZFS file system on VxVM volumes (I
feel it is not possible to create a ZFS file system on VxVM volumes, but I'm not
sure).
-Masthan
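For what it's worth, the form one would try is simply pointing zpool at the
volume's block device; the paths below are hypothetical (disk group "testdg",
volume "testvol"), and whether ZFS actually supports sitting on a VxVM volume
is exactly the open question:

  zpool create testpool /dev/vx/dsk/testdg/testvol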
Mike Gerdts <[EMAIL PROTECTED]> wrote:
On 1/18/07, Tan Shao Yi wrote:
> Hi,
>
> Was wondering if anyone had experience working
[EMAIL PROTECTED]:~# zpool import
  pool: new_zpool
    id: 3042040702885268372
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        new_zpool    ONLINE
          c0t2d0s6   ONLINE
It shows that there is one pool available for import on one of my disks.
Here is a list of
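To actually bring that pool in, a minimal sketch (either by name or by the
numeric id from the listing above):

  zpool import new_zpool
  # or, using the id:
  zpool import 3042040702885268372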
[i]I create the default storage pool during the install, but then when it
reboots, the hostname/hostid has changed so I need to re-associate the pool. I
know you're frustrated with this stuff, but once you've figured it out it
really is very powerful. :-)[/i]
If you read my contributions, I hav
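For what it's worth, a minimal sketch of that re-association step, assuming a
hypothetical pool name "mypool"; -f is needed because the pool still appears to
belong to the old hostid:

  zpool import            # list pools that are visible but not yet imported
  zpool import -f mypool  # force the import despite the stale hostid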
I've been using it in another CR where destroying one of the snapshots
was helping performance. Nevertheless, here it is on that server:
Short period of time:
bash-3.00# ./metaslab-6495013.d
^C
  Loops count
           value  ------------- Distribution ------------- count
              -1 |
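(The metaslab-6495013.d source isn't shown here; the same kind of distribution
can be produced with a plain fbt one-liner, a sketch in which the choice of
metaslab_alloc as the probe point is an assumption:)

  dtrace -n '
  fbt::metaslab_alloc:entry  { self->ts = timestamp; }
  fbt::metaslab_alloc:return /self->ts/ {
      /* time spent per allocation, as a power-of-two histogram */
      @["metaslab_alloc time (ns)"] = quantize(timestamp - self->ts);
      self->ts = 0;
  }'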
Matt,
thanks for some examples and your understanding.
While I am still struggling to get a pool mounted,
I still find some unexpected (at least in my legacy terms) behaviour:
% zfs mount pool /export/home
is a clear intention. Maybe too much legacy?
% zpool import
" can be imported us
I would also recommend using CF or an IDE drive for your boot drive. From there,
I would simply set up a raidz with 4 of the drives. Since you have 8 available
SATA ports, it would make sense to use 4 drives now in a raidz with 3 data plus
1 parity drive, giving you roughly 960GB of storage in the
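A minimal sketch of that 3+1 raidz layout, with hypothetical device names for
the four SATA drives:

  zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0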
Matty wrote:
Howdy,
I have seen a number of folks run into issues due to ZFS file system
fragmentation, and was curious if anyone on the ZFS team is working on
this issue? Would it be possible to share with the list any changes
that will be made to help address fragmentation problems?
We have s
Anantha N. Srirama wrote:
I did find zfs.h and libzfs.h (thanks Eric). However, when I try to
compile the latest version (4.87C) of lsof it finds the following
files missing: dmu.h zfs_acl.h zfs_debug.h zfs_rlock.h zil.h spa.h
zfs_context.h zfs_dir.h zfs_vfsops.h zio.h txg.h zfs_ctldir.h
zfs_ioct
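If it helps, those are private headers that live in the OpenSolaris (ON) source
tree rather than on an installed system, so one unsupported route is to copy
them out of a source checkout; the checkout location below is hypothetical:

  mkdir -p ./zfs-include/sys
  cp /opt/onnv/usr/src/uts/common/fs/zfs/sys/*.h ./zfs-include/sys/
  # then add -I./zfs-include to the lsof build's compiler flags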
I get around 100 Mbit/s sustained on big files transferring to/from my
Solaris/ZFS box and Vista over Samba. That is over gigabit Ethernet through one
switch and one router. I personally think it should be faster, but it is
probably just due to my network hardware and not Samba or ZFS sinc
Luke Scharf wrote:
Dave Sneddon wrote:
Can anyone shed any light on whether the actual software side of this can
be achieved? Can I share my entire ZFS pool as a "folder" or "network drive"
so WinXP can read it? Will this be fast enough to read/write to at DV speeds
(25mbit/s)? Once the pool is
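At DV rates the sharing itself isn't the hard part; the usual route here is
Samba on the Solaris box pointed at the pool's mount point. A minimal
hypothetical share stanza, assuming the data sits under /tank/video and that
the bundled Samba reads /etc/sfw/smb.conf:

  cat >> /etc/sfw/smb.conf <<'EOF'
  [video]
     path = /tank/video
     read only = no
  EOF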