On 8/11/06, eric kustarz <[EMAIL PROTECTED]> wrote:
Leon Koll wrote:
> <...>
>
>> So having 4 pools isn't a recommended config - I would destroy those 4
>> pools and just create 1 RAID-0 pool:
>> #zpool create sfsrocks c4t00173801014Bd0 c4t00173801014Cd0
>> c4t001738010140001Cd0 c4t00173
Hello eric,
Friday, August 11, 2006, 3:04:38 AM, you wrote:
ek> Leon Koll wrote:
>> <...>
>>
>>> So having 4 pools isn't a recommended config - I would destroy those 4
>>> pools and just create 1 RAID-0 pool:
>>> #zpool create sfsrocks c4t00173801014Bd0 c4t00173801014Cd0
>>> c4t001738010
David Magda wrote:
On Aug 10, 2006, at 13:23, Eric Schrock wrote:
# zfs create pool/accounting/bob
# zfs create pool/engineering/anne
# ls /export/home
anne bob
#
Is it easy to determine where 'anne' and 'bob' are really located? What
would the output of (say) 'df' be?
Fi
On Aug 10, 2006, at 13:23, Eric Schrock wrote:
# zfs create pool/accounting/bob
# zfs create pool/engineering/anne
# ls /export/home
anne bob
#
Is it easy to determine where 'anne' and 'bob' are really located? What
would the output of (say) 'df' be?
I
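For illustration only (not from the thread, and assuming the /export/home mountpoints from Eric's example): every ZFS filesystem appears in df under its own dataset name, so the output identifies the parent pools directly. A sketch with made-up sizes:
# df -h /export/home/anne /export/home/bob
Filesystem              size  used  avail capacity  Mounted on
pool/engineering/anne    10G    1K    10G     1%    /export/home/anne
pool/accounting/bob      10G    1K    10G     1%    /export/home/bob
The dataset name, not a shared device, shows up in the Filesystem column, so 'anne' and 'bob' can be traced back to their pools at a glance.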
Leon Koll wrote:
<...>
So having 4 pools isn't a recommended config - I would destroy those 4
pools and just create 1 RAID-0 pool:
#zpool create sfsrocks c4t00173801014Bd0 c4t00173801014Cd0
c4t001738010140001Cd0 c4t0017380101400012d0
each of those devices is a 64GB LUN, right?
I di
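As a quick sketch (assuming the single striped pool is created as above), the result can be sanity-checked with the standard commands:
# zpool status sfsrocks
# zpool list sfsrocks
zpool status should list all four LUNs as top-level vdevs of one RAID-0 pool, and zpool list shows the combined capacity (roughly 4 x 64GB if the LUNs are indeed 64GB each).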
On Fri, Aug 11, 2006 at 01:50:05AM +0100, Darren J Moffat wrote:
>
> I'm assuming since you don't use that syntax in your examples that
> there will be a separate proposal/ARC case for the set at create time.
Yes. I have a prototype almost done and will send out a similar
proposal, probably tomor
Eric Schrock wrote:
Also, I am working on 6367103, which would allow for this option (and
any others) to be set at create time, so you don't have the weird
situation where the filesystem is temporarily mounted.
I'm assuming since you don't use that syntax in your examples that
there will be a s
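Purely as a hypothetical sketch of what set-at-create-time syntax could look like (an assumption, not the actual proposal; the mountpoint value is a placeholder):
# zfs create -o mountpoint=/export/acct pool/accounting
i.e. 'zfs create' accepting one or more -o property=value pairs, so the property takes effect before the filesystem is ever mounted with its defaults.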
Humberto Ramirez wrote:
I'm wondering if I can get all the benefits of EVMS and LVM with ZFS.
I'm planning on using ZFS with RAID-Z.
1.- Can I expand volumes like I do with LVM ?
You simply add more LUNs to the pool. The filesystems in the pool can grow
until the pool's capacity is reached.
T
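A minimal sketch of that expansion step (pool and device names are placeholders):
# zpool add mypool c5t0d0
Once the LUN is added, every filesystem in the pool sees the extra capacity immediately; there is no separate volume-resize or filesystem-grow step.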
I'm wondering if I can get all the benefits of EVMS and LVM with ZFS.
I'm planning on using ZFS with RAID-Z.
1.- Can I expand volumes like I do with LVM ?
2.- Is there a central console (like EVMS) to do all the management ?
3.- How do I monitor the hard drives ? (see the sketch below this list)
4.- If there is a drive failure do the re
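Touching on question 3 above, a minimal sketch of the usual drive-health checks (pool name is a placeholder):
# zpool status -x
all pools are healthy
# zpool status mypool
The -x form reports only pools with problems, while the per-pool output lists each device's state and its read/write/checksum error counters.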
Thanks for the list.
Phi
Eric Schrock wrote:
Yes, there are three incremental fixes that we plan in this area:
6417772 need nicer message on write failure
This just cleans up the failure mode so that we get a nice
FMA failure message and can distinguish this from a random
Yes, there are three incremental fixes that we plan in this area:
6417772 need nicer message on write failure
This just cleans up the failure mode so that we get a nice
FMA failure message and can distinguish this from a random
failed assert.
6417779 ZFS: I/O failure (wri
I remember a discussion about I/O write failures causing a panic for a
non-replicated pool and a plan to fix this in the future. I couldn't
find a bug for this work though. Is there still a plan to fix this?
Phi
On 08/08/2006, at 10:44 PM, Luke Scharf wrote:
The release I'm playing with (Alpha 5) does, indeed, have ZFS.
However, I can't determine what version of ZFS is included.
Dselect gives the following information, which doesn't ring any
bells for me:
*** Req base sunwzfsr 5.11.40
Myron Scott wrote:
Is there any difference between fdatasync and fsync on ZFS?
No. ZFS does not log data and metadata separately; rather, it logs
essentially the system call records, e.g. writes, mkdir, truncate,
setattr, etc. So fdatasync and fsync are identical on ZFS.
Is there any difference between fdatasync and fsync on ZFS?
Regards,
Myron
On 8/10/06, Eric Schrock <[EMAIL PROTECTED]> wrote:
On Thu, Aug 10, 2006 at 12:11:46PM -0700, David Coronel wrote:
> So far I understand that if a file is to be modified, it will first
> copy the data to be modified to a new location in the ZFS pool, then
> modify that new data and do all the ZFS
On Thu, Aug 10, 2006 at 12:11:46PM -0700, David Coronel wrote:
> So far I understand that if a file is to be modified, it will first
> copy the data to be modified to a new location in the ZFS pool, then
> modify that new data and do all the ZFS voodoo it does, and ultimately
> do the very last ste
So far I understand that if a file is to be modified, it will first copy the
data to be modified to a new location in the ZFS pool, then modify that new
data and do all the ZFS voodoo it does, and ultimately do the very last step of
the operation (which I think is the final change of the pointer
Robert Milkowski wrote:
Hello Neil,
Thursday, August 10, 2006, 7:02:58 PM, you wrote:
NP> Robert Milkowski wrote:
Hello Matthew,
Thursday, August 10, 2006, 6:55:41 PM, you wrote:
MA> On Thu, Aug 10, 2006 at 06:50:45PM +0200, Robert Milkowski wrote:
btw: wouldn't it be possible to write b
On Thu, Aug 10, 2006 at 10:44:46AM -0700, Eric Schrock wrote:
> Right now I'm using the generic property mechanism, but have a special
> case in dsl_prop_get_all() to ignore searching parents for this
> particular property. I'm not thrilled about it, but I only see two
> other options:
>
> 1. Do
Robert Milkowski wrote:
Hello Neil,
Thursday, August 10, 2006, 7:02:58 PM, you wrote:
NP> Robert Milkowski wrote:
Hello Matthew,
Thursday, August 10, 2006, 6:55:41 PM, you wrote:
MA> On Thu, Aug 10, 2006 at 06:50:45PM +0200, Robert Milkowski wrote:
btw: wouldn't it be possible to write b
Right now I'm using the generic property mechanism, but have a special
case in dsl_prop_get_all() to ignore searching parents for this
particular property. I'm not thrilled about it, but I only see two
other options:
1. Do not use the generic infrastructure. This requires much more
invasive
Yet another reason it was removed. This proposal specifically does not
use the word 'container', nor will the documentation refer to it as
such. I was merely providing background (possibly too much) for why
this option was originally implemented and then removed.
- Eric
On Thu, Aug 10, 2006 at
On Thu, Aug 10, 2006 at 10:23:20AM -0700, Eric Schrock wrote:
> A new option will be added, 'canmount', which specifies whether the
> given filesystem can be mounted with 'zfs mount'. This is a boolean
> property, and is not inherited.
Cool, looks good. Do you plan to implement this using the ge
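A minimal sketch of how the proposed property could be driven from the CLI (dataset names are placeholders):
# zfs create pool/engineering
# zfs set canmount=off pool/engineering
# zfs create pool/engineering/anne
With canmount=off the parent dataset is never mounted itself, yet children such as pool/engineering/anne still inherit its mountpoint and mount normally; because the property is not inherited, the child is unaffected by the parent's setting.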
Hello Neil,
Thursday, August 10, 2006, 7:02:58 PM, you wrote:
NP> Robert Milkowski wrote:
>> Hello Matthew,
>>
>> Thursday, August 10, 2006, 6:55:41 PM, you wrote:
>>
>> MA> On Thu, Aug 10, 2006 at 06:50:45PM +0200, Robert Milkowski wrote:
>>
btw: wouldn't it be possible to write block onl
Hi Eric,
Eric Schrock wrote:
...
Second, it forced the CLI to distinguish between a container and a
filesystem. At first this was accomplished with a trailing slash on the
name, and later by introducing the 'ctr' type. Both were confusing to
users for different reasons.
Speaking of confusing
This is a draft proposal to address this RFE:
6366244 'nomount' option for container-like functionality
Any feedback from the community would be welcome. In particular, I
struggled with the name of the option. The original 'nomount' name
suffered from the double-negative problem ('nomount=off'
Hello Neil,
Thursday, August 10, 2006, 7:02:58 PM, you wrote:
NP> Robert Milkowski wrote:
>> Hello Matthew,
>>
>> Thursday, August 10, 2006, 6:55:41 PM, you wrote:
>>
>> MA> On Thu, Aug 10, 2006 at 06:50:45PM +0200, Robert Milkowski wrote:
>>
btw: wouldn't it be possible to write block onl
Robert Milkowski wrote:
Hello Matthew,
Thursday, August 10, 2006, 6:55:41 PM, you wrote:
MA> On Thu, Aug 10, 2006 at 06:50:45PM +0200, Robert Milkowski wrote:
btw: wouldn't it be possible to write the block only once (for synchronous
IO) and then just point to that block instead of copying it aga
Hello Matthew,
Thursday, August 10, 2006, 6:55:41 PM, you wrote:
MA> On Thu, Aug 10, 2006 at 06:50:45PM +0200, Robert Milkowski wrote:
>> btw: wouldn't it be possible to write the block only once (for synchronous
>> IO) and then just point to that block instead of copying it again?
MA> We actually d
Robert Milkowski wrote:
Hello Matthew,
Thursday, August 10, 2006, 4:50:31 PM, you wrote:
MA> On Thu, Aug 10, 2006 at 11:48:09AM +0200, Robert Milkowski wrote:
MA> This test fundamentally requires waiting for lots of synchronous writes.
MA> Assuming no other activity on the system, the performa
On Thu, Aug 10, 2006 at 06:50:45PM +0200, Robert Milkowski wrote:
> btw: wouldn't it be possible to write the block only once (for synchronous
> IO) and then just point to that block instead of copying it again?
We actually do exactly that for larger (>32k) blocks.
--matt
Hello Matthew,
Thursday, August 10, 2006, 4:50:31 PM, you wrote:
MA> On Thu, Aug 10, 2006 at 11:48:09AM +0200, Robert Milkowski wrote:
>> MA> This test fundamentally requires waiting for lots of synchronous writes.
>> MA> Assuming no other activity on the system, the performance of synchronous
>> M
Steffen,
On 8/10/06 8:12 AM, "Steffen Weiberle" <[EMAIL PROTECTED]> wrote:
> Those are compelling numbers! Have you seen them yourself? Or know who has?
O'Reilly Research is a good one; they were using MySQL for data mining work
and each query was taking 10 hours, despite all tuning on modern ha
Luke Lonergan wrote On 08/09/06 17:58,:
Steffen,
Are they open to Postgres if it performs 1000 times faster, clusters to 120
nodes and 1.2 Petabytes?
Thanks, Luke.
Those are compelling numbers! Have you seen them yourself? Or know who has?
I had mentioned that we support it in 6/06 and I tho
On Thu, Aug 10, 2006 at 11:48:09AM +0200, Robert Milkowski wrote:
> MA> This test fundamentally requires waiting for lots of synchronous writes.
> MA> Assuming no other activity on the system, the performance of synchronous
> MA> writes does not scale with the number of drives; it scales with the
> M
Brad,
I have a suspicion about what you might be seeing and I want to confirm
it. If it locks up again, you can also collect a threadlist:
"echo $
The core dump timed out (related to the SCSI bus reset?), so I don't
have one. I can try it again, though; it's easy enough to reproduce.
I was
The core dump timed out (related to the SCSI bus reset?), so I don't
have one. I can try it again, though; it's easy enough to reproduce.
I was seeing errors on the fibre channel disks as well, so it's possible
the whole thing was locked up.
BP
Hello Matthew,
Tuesday, August 8, 2006, 8:08:39 PM, you wrote:
MA> On Tue, Aug 08, 2006 at 10:42:41AM -0700, Robert Milkowski wrote:
>> filebench in varmail by default creates 16 threads - I confirmed it
>> with prstat; 16 threads are created and running.
MA> Ah, OK. Looking at these results, i
Hello Dave,
Thursday, August 10, 2006, 12:29:05 AM, you wrote:
DF> Hi,
DF> Note that these are page cache rates and that if the application
DF> pushes harder and exposes the supporting device rates there is
DF> another world of performance to be observed. This is where ZFS
DF> gets to be a chall