On 8/15/06, Kevin Maguire <[EMAIL PROTECTED]> wrote:
Hi
Is the following an accurate statement of the current status with (for me) the
3 main commercial backup software solutions out there?
It seems to me that if zfs send/receive were hooked in with ndmp
(http://ndmp.org), zfs would very
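For reference, the send/receive primitives being referred to can already be
scripted by hand today; a minimal sketch, with hypothetical dataset and host
names:

  # take a snapshot and stream it to a remote pool over ssh
  zfs snapshot tank/home@backup-20060817
  zfs send tank/home@backup-20060817 | ssh backuphost zfs receive backup/home

  # later, send only the delta between two snapshots
  zfs send -i tank/home@backup-20060817 tank/home@backup-20060818 | \
      ssh backuphost zfs receive backup/home

An NDMP hook would presumably let backup software drive the same streams
without the ad-hoc ssh plumbing.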
Following up on a string of related proposals, here is another draft
proposal for user-defined properties. As usual, all feedback and
comments are welcome.
The prototype is finished, and I would expect the code to be integrated
sometime within the next month.
- Eric
INTRODUCTION
ZFS currently
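As a rough illustration only (the exact syntax is whatever the proposal
finally specifies), user-defined properties would presumably be set and read
much like the built-in ones, with a namespaced name to avoid collisions;
the property and dataset names below are made up:

  zfs set com.example:backup-policy=daily tank/home
  zfs get com.example:backup-policy tank/home
  zfs inherit com.example:backup-policy tank/home/fred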
Bob Evans wrote:
Hi, this is a follow up to "Significant pauses to zfs writes".
I'm getting about 15% slower performance using ZFS raidz than if I just mount
the same type of drive using ufs.
What is your expectation?
-- richard
Dick Davies wrote:
On 15/08/06, Lori Alt <[EMAIL PROTECTED]> wrote:
Brian Hechinger wrote:
> On Fri, Jul 28, 2006 at 02:26:24PM -0600, Lori Alt wrote:
>
>>>What about Express?
>>
>>Probably not any time soon. If it makes U4,
>>I think that would make it available in Express late
>>this year
On Thu, Aug 17, 2006 at 10:28:10AM -0700, Adam Leventhal wrote:
> On Thu, Aug 17, 2006 at 10:00:32AM -0700, Matthew Ahrens wrote:
> > (Actually, I think that a RLE compression algorithm for metadata is a
> > higher priority, but if someone from the community wants to step up, we
> > won't turn your
On Thu, Aug 17, 2006 at 10:00:32AM -0700, Matthew Ahrens wrote:
> (Actually, I think that a RLE compression algorithm for metadata is a
> higher priority, but if someone from the community wants to step up, we
> won't turn your code away!)
Is RLE likely to be more efficient for metadata? Have you
On Thu, Aug 17, 2006 at 02:53:09PM +0200, Robert Milkowski wrote:
> Hello zfs-discuss,
>
> Is someone actually working on it? Or any other algorithms?
> Any dates?
Not that I know of. Any volunteers? :-)
(Actually, I think that a RLE compression algorithm for metadata is a
higher priority,
First, I apologize: I listed the Antares in my original post, but it was one of two
SCSI cards I tested with. The posted CPU snapshots were from the LSI 22320
card (mentioned below).
I've tried this with two different SCSI cards. As far as I know, both are
standard SCSI cards used for Suns. Sun
On Thu, Aug 17, 2006 at 02:20:47PM +0200, Robert Milkowski wrote:
>
> I just did :)
>
> However currently 'zpool import A B' means importing pool A and
> renaming it to pool B.
>
> I think it would be better to change the current behavior and rename only
> with a given switch, like 'zpool import A -
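For context, the rename-on-import behaviour being discussed looks like this
today (pool names are hypothetical):

  # import the pool named 'data' but give it the new name 'archive'
  zpool import data archive

  # a plain import keeps the original name
  zpool import data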
No ACL's ...
Hi Bob,
you are using a non-Sun SCSI HBA. Could you please be more specific
about the HBA model and driver?
You are getting pretty much the same high CPU load writing to single-disk
UFS as to raid-z. This may mean that the problem is not with ZFS itself.
Victor
Bob Evans wrote:
Robert,
Sorry
Dear Mark,
You say:
Append_data does not work, and it is not a bug; it is simply not implemented. OK?!
What I am asking is: if you did not implement the append_data function, why do
you specify or define it as a file permission? If you are planning some feature for a
future release, please do not add it
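For reference, append_data is one of the NFSv4-style ACL permissions exposed by
the new chmod ACL syntax; a sketch of how one would attempt to grant it (user
and file names are hypothetical):

  # grant append-only access (not full write) to user fred
  chmod A+user:fred:append_data:allow /tank/logs/app.log
  ls -v /tank/logs/app.log    # list the resulting ACL entries

The complaint above is that the permission can be specified in an ACL entry but
is not yet implemented by ZFS.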
Hello Louwtjie,
Thursday, August 17, 2006, 2:54:54 PM, you wrote:
LB> Hi there
LB> Did a backup/restore on TSM, works fine.
I assume without ACLs, right?
--
Best regards,
Robert mailto:[EMAIL PROTECTED]
http://milek.blogspot.com
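One quick way to answer the ACL question is to compare the ACLs before backup and
after restore, e.g. with ls (paths are hypothetical):

  ls -v /tank/data/somefile       # ACL entries on the original file
  ls -v /restore/data/somefile    # compare after the TSM restore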
Robert Milkowski writes:
> Hello Roch,
>
> Thursday, August 17, 2006, 11:08:37 AM, you wrote:
> R> My general principles are:
>
> R> If you can, to improve your 'Availability' metrics,
> R> let ZFS handle one level of redundancy;
>
> R> For Random Read performanc
Robert,
Sorry about not being clearer.
The storage unit I am using is configured as follows:
X X X X X X X X X X X X X X   (each X is an 18 GB SCSI disk)
The first 7 disks were used for the ZFS raidz; I used the last disk (#14)
for my UFS target. The first 7 are on one scsi cha
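For completeness, the two configurations under comparison would have been set up
roughly like this (device names are hypothetical):

  # 7-disk raidz pool from the first seven 18 GB drives
  zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0

  # single-disk UFS target on disk #14 for the comparison run
  newfs /dev/rdsk/c2t6d0s0
  mount /dev/dsk/c2t6d0s0 /mnt/ufs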
Hi there
Did a backup/restore on TSM, works fine.
Hello zfs-discuss,
Is someone actually working on it? Or any other algorithms?
Any dates?
--
Best regards,
Robert mailto:[EMAIL PROTECTED]
http://milek.blogspot.com
Hello Roch,
Thursday, August 17, 2006, 11:08:37 AM, you wrote:
R> My general principles are:
R> If you can, to improve your 'Availability' metrics,
R> let ZFS handle one level of redundancy;
R> For Random Read performance prefer mirrors over
R> raid-z. If you use
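In pool-creation terms, the two layouts being contrasted look roughly like this
(device names are hypothetical):

  # mirrored pairs: better random-read IOPS, 50% usable capacity
  zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0

  # raid-z over the same disks: more usable space, but random reads are
  # limited to roughly one disk's worth of IOPS per raid-z group
  zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0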
Hello Eric,
Wednesday, August 16, 2006, 4:49:27 PM, you wrote:
ES> This seems like a reasonable RFE. Feel free to file it at
ES> bugs.opensolaris.org.
I just did :)
However currently 'zpool import A B' means importing pool A and
renaming it to pool B.
I think it would be better to change the current behavior and rename only
with a given switch, like 'zpool import A -
Anantha N. Srirama writes:
> Therein lies my dilemma:
>
> - We know the I/O sub-system is capable of much higher I/O rates
> - Under the test setup I have SAS datasets which lend
> themselves to compression. This should manifest itself as lots of read
> I/O resulting in much sma
Therein lies my dilemma:
- We know the I/O sub-system is capable of much higher I/O rates
- Under the test setup I have SAS datasets which lend themselves to
compression. This should manifest itself as lots of read I/O resulting in much
smaller (4x) write I/O due to compression. This m
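One way to sanity-check the assumed ratio is to read it back from the dataset
itself; a quick check, with a hypothetical dataset name:

  zfs get compression,compressratio tank/sas
  # if compressratio reports about 4.00x, then e.g. 200 MB/s of logical
  # (uncompressed) reads should turn into only ~50 MB/s of physical writes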
Hi,
IHAC who is simulating disk failure and came across behaviour which seems
wrong:
1. zpool status -v
pool: data
state: ONLINE
scrub: resilver completed with 0 errors on Thu Aug 10 16:55:22 2006
config:
NAME        STATE     READ WRITE CKSUM
data        ONLINE       0
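For reproducing this kind of test from software, the usual sequence is to fault a
device and then re-check the pool (device name hypothetical):

  zpool offline data c1t2d0     # simulate losing one disk
  zpool status -v data          # the pool should now show DEGRADED
  zpool online data c1t2d0      # bring it back; a resilver kicks off
  zpool status -v data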
WYT said:
Hi all,
My company will be acquiring the Sun SE6920 for our storage
virtualization project and we intend to use quite a bit of ZFS as
well. The 2 technologies seem somewhat at odds, since the 6920 means
layers of hardware abstraction but ZFS seems to prefer more direct
acc