That's a lot of talking without an answer :)
> internal EIDE 320GB (boot drive), internal
250, 200 and 160 GB drives, and an external USB 2.0 600 GB drive.
> So, what's the best zfs configuration in this situation?
RAIDZ uses disk space like RAID5. So the best you could do here for redundant
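For illustration, one possible redundant layout for that mix of drives, sketched with hypothetical device names (a raidz vdev only uses as much of each member as its smallest disk):

  # device names are made up; check yours with: format
  zpool create tank raidz c1t1d0 c1t2d0 c1t3d0   # the 250, 200 and 160 GB drives
  # each member contributes only 160 GB, so this yields roughly
  # 2 x 160 GB of usable single-parity space
  zpool status tank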
The software that we use for our production backups is compatible with ZFS. I
cannot comment on its stability with ZFS or its ability to handle multi-TB
filesystems, as we do not have any ZFS systems in production yet. I can say
that overall, the software is solid, stable and very well documented.
Al Hopper wrote:
> On Fri, 4 May 2007, mike wrote:
> > Isn't the benefit of ZFS that it will allow you to use even the most
> > unreliable disks and be able to inform you when they are attempting to
> > corrupt your data?
> Yes - I won't argue that ZFS can be applied exactly as you state above.
> Howe
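As a concrete illustration of ZFS "informing you" about a disk that is corrupting data, the usual sequence is (pool name hypothetical):

  zpool scrub tank       # read every block and verify it against its checksum
  zpool status -v tank   # per-device READ/WRITE/CKSUM counters and any affected files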
kyusun Chang wrote on 05/04/07 19:34:
> If system crashes some time after last commit of transaction group (TxG), what
> happens to the file system transactions since the last commit of TxG
They are lost, unless they were synchronous (see below).
> (I presume last commit of TxG represents the last
If system crashes some time after last commit of transaction group (TxG), what
happens to the file system transactions since the last commit of TxG
(I presume last commit of TxG represents the last on-disk consistency)?
Does ZFS recover all file system transactions which it returned with success
si
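A note on "synchronous" here: an application gets that guarantee by opening with O_DSYNC or calling fsync(), and ZFS releases newer than the bits discussed in this thread also expose a per-dataset knob (dataset name hypothetical):

  zfs get sync tank/home          # standard: only O_DSYNC/fsync() writes go through the ZIL
  zfs set sync=always tank/home   # force every write to be committed before returning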
Lee Fyock wrote:
> I didn't mean to kick up a fuss.
>
> I'm reasonably zfs-savvy in that I've been reading about it for a year
> or more. I'm a Mac developer and general geek; I'm excited about zfs
> because it's new and cool.
>
> At some point I'll replace my old desktop machine with something new
I didn't mean to kick up a fuss.
I'm reasonably zfs-savvy in that I've been reading about it for a
year or more. I'm a Mac developer and general geek; I'm excited about
zfs because it's new and cool.
At some point I'll replace my old desktop machine with something new
and better -- probab
On 5/4/07, Al Hopper <[EMAIL PROTECTED]> wrote:
> Yes - I won't argue that ZFS can be applied exactly as you state above.
> However, ZFS is no substitute for bad practices that include:
> - not proactively replacing mechanical components *before* they fail
> - not having maintenance policies in place
On Fri, 4 May 2007, mike wrote:
> Isn't the benefit of ZFS that it will allow you to use even the most
> unreliable disks and be able to inform you when they are attempting to
> corrupt your data?
Yes - I won't argue that ZFS can be applied exactly as you state above.
However, ZFS is no substitut
mike wrote:
> Isn't the benefit of ZFS that it will allow you to use even the most
> unreliable disks and be able to inform you when they are attempting to
> corrupt your data?
>
> To me it sounds like he is a SOHO user; may not have a lot of funds to
> go out and swap hardware on a whim like a com
Isn't the benefit of ZFS that it will allow you to use even the most
unreliable disks and be able to inform you when they are attempting to
corrupt your data?
To me it sounds like he is a SOHO user; may not have a lot of funds to
go out and swap hardware on a whim like a company might.
ZFS in my
On 4-May-07, at 6:53 PM, Al Hopper wrote:
...
[1] it continues to amaze me that many sites, large or small, don't
have a
(written) policy for mechanical component replacement - whether disk
drives or fans.
You're not the only one. In fact, while I'm not exactly talking
"enterprise" level
On Fri, 4 May 2007, Lee Fyock wrote:
> Hi--
>
> I'm looking forward to using zfs on my Mac at some point. My desktop
> server (a dual-1.25GHz G4) has a motley collection of discs that has
> accreted over the years: internal EIDE 320GB (boot drive), internal
> 250, 200 and 160 GB drives, and an ext
I've put together some thoughts on the ZFS copies property.
http://blogs.sun.com/relling/entry/zfs_copies_and_data_protection
I hope that you might find this useful. I tried to use simplified drawings
to illustrate the important points. Feedback appreciated.
There is more work to be do
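For anyone who wants to try the property the post describes, it is set per dataset and only affects newly written blocks (dataset name hypothetical):

  zfs set copies=2 tank/photos   # keep two copies of each block, spread across the pool
  zfs get copies tank/photos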
On 5/4/07, Roch - PAE <[EMAIL PROTECTED]> wrote:
Manoj Joseph writes:
> Hi,
>
> I was wondering about the ARC and its interaction with the VM
> pagecache... When a file on a ZFS filesystem is mmaped, does the ARC
> cache get mapped to the process' virtual memory? Or is there another copy?
Manoj Joseph writes:
> Hi,
>
> I was wondering about the ARC and its interaction with the VM
> pagecache... When a file on a ZFS filesystem is mmaped, does the ARC
> cache get mapped to the process' virtual memory? Or is there another copy?
>
My understanding is,
The ARC does not get m
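One way to watch the ARC while exercising an mmap-heavy workload is the standard arcstats kstats (values are in bytes):

  kstat -p zfs:0:arcstats:size   # current ARC size
  kstat -p zfs:0:arcstats:c      # current ARC target size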
Hi--
I'm looking forward to using zfs on my Mac at some point. My desktop
server (a dual-1.25GHz G4) has a motley collection of discs that has
accreted over the years: internal EIDE 320GB (boot drive), internal
250, 200 and 160 GB drives, and an external USB 2.0 600 GB drive.
My guess is
> A couple more questions here.
...
> You still have idle time in this lockstat (and mpstat).
>
> What do you get for a lockstat -A -D 20 sleep 30?
>
> Do you see anyone with long lock hold times, long
> sleeps, or excessive spinning?
Hmm, I ran a series of "lockstat -A -l ph_mutex -s 16 -D 20 s
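For reference, the suggested command with its options spelled out (the 30-second sleep is just an arbitrary sampling window):

  # -A: watch contention and hold events; -D 20: show the top 20 entries per event type
  lockstat -A -D 20 sleep 30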
Mario Goebbels wrote:
I'm just in sort of a scenario, where I've added devices to a pool and would
now like the existing data to be spread across the new drives, to increase the
performance. Is there a way to do it, like a scrub? Or would I have to have all
files to copy over themselves, or si
Mario Goebbels wrote:
I'm just in sort of a scenario, where I've added devices
to a pool and would now like the existing data to be spread
across the new drives, to increase the performance. Is
there a way to do it, like a scrub? Or would I have to
have all files to copy over themselves, or si
Ian Collins writes:
> Roch Bourbonnais wrote:
> >
> > with recent bits ZFS compression is now handled concurrently with many
> > CPUs working on different records.
> > So this load will burn more CPUs and achieve its results
> > (compression) faster.
> >
> Would changing (selecting a smal
Roch Bourbonnais wrote:
> with recent bits ZFS compression is now handled concurrently with
> many CPUs working on different records.
> So this load will burn more CPUs and achieve its results
> (compression) faster.
Is this done using the taskq's, created in spa_activate()?
http://src.opens
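Related to the question above about picking a level, the trade-off is made per dataset and the resulting ratio can be checked afterwards (dataset name hypothetical):

  zfs set compression=gzip-1 tank/data   # cheaper per record, lower ratio
  zfs set compression=gzip-9 tank/data   # more CPU per record, better ratio
  zfs get compression,compressratio tank/data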
cedric briner wrote:
hello dear community,
Is there a way to have a ``local_name'' as defined in iscsitadm(1M) when
you shareiscsi a zvol? This way, it would give an even easier
way to identify a device through its IQN.
Ced.
Okay no reply from you so... maybe I didn't make myself well understandab
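As far as I know, shareiscsi=on always auto-generates the target, so the usual way to get a local name of your own choosing is to create the target by hand with iscsitadm (pool and target names hypothetical):

  # with shareiscsi the target name is auto-generated:
  zfs set shareiscsi=on tank/vol01
  iscsitadm list target -v                                 # shows the IQN that was assigned
  # to pick your own local name, leave shareiscsi off and create the target directly:
  iscsitadm create target -b /dev/zvol/rdsk/tank/vol01 vol01-target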
I'm just in sort of a scenario, where I've added devices to a pool and would
now like the existing data to be spread across the new drives, to increase the
performance. Is there a way to do it, like a scrub? Or would I have to have all
files to copy over themselves, or similar hacks?
Thanks,
-m
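For what it's worth, ZFS only spreads data across the new vdevs as blocks are (re)written, so one common workaround is to rewrite a dataset with send/receive (snapshot and dataset names hypothetical):

  zfs snapshot tank/data@move
  zfs send tank/data@move | zfs receive tank/data.new   # rewritten blocks land on all vdevs
  # after verifying the copy, swap the names:
  zfs rename tank/data tank/data.old
  zfs rename tank/data.new tank/data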
> A couple more questions here.
...
> What do you have zfs compression set to? The gzip level is
> tunable, according to zfs set, anyway:
>
> PROPERTY EDIT INHERIT VALUES
> compression YES YES on | off | lzjb | gzip | gzip-[1-9]
I've used the "default" gzip compression level
Darren Moffat,
Yes and no. An earlier statement within this discussion
was whether gzip is appropriate for .wav files. This just
gets a relative time to compress, and the relative
sizes of the files after compression.
My assumption is that gzip will run as a
> On Mon, Apr 23, 2007 at 09:38:47AM -0700, Gino wrote:
> >
> > we had 5 corrupted zpools (on different servers and different SANs)!
> > With Solaris up to S10U3 and Nevada up to snv59 we are able to
> > corrupt easily a zpool only by disconnecting a few times one or more
> > LUNs of a zpool und
On Thu, May 03, 2007 at 02:15:45PM -0700, Bakul Shah wrote:
> [originally reported for ZFS on FreeBSD but Pawel Jakub Dawid
> says this problem also exists on Solaris hence this email.]
Thanks!
> Summary: on ZFS, overhead for reading a hole seems far worse
> than actual reading from a disk. Sma
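A quick way to reproduce the comparison on a ZFS filesystem is to read a dense file and an equally sized sparse one (paths hypothetical):

  dd if=/dev/zero of=/tank/fs/dense bs=1024k count=1024   # 1 GB of real blocks
  mkfile -n 1g /tank/fs/sparse                            # 1 GB hole (-n = don't allocate blocks)
  # note: the dense file may still be cached from its creation
  ptime dd if=/tank/fs/dense of=/dev/null bs=1024k
  ptime dd if=/tank/fs/sparse of=/dev/null bs=1024k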
Hello,
Is someone able to explain to me why ZFS does not report a full filesystem in
/var/adm/messages? Did I miss something, or is it expected behaviour?
Tested on Solaris 11/06 (ZFS version 3)
Thank you for your feedback !
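For what it's worth, I don't believe ZFS logs an out-of-space condition to syslog at all; applications simply get ENOSPC. A cron-driven check is one workaround (dataset name and threshold hypothetical):

  zfs list -H -o name,available tank/home
  # in a script, compare the value against a threshold and raise your own alert, e.g.:
  # logger -p daemon.warning "tank/home is low on free space"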
Erblichs wrote:
So, my first order would be to take 1GB or 10GB .wav files
AND time both the kernel implementation of Gzip and the
user application. Approx the same times MAY indicate
that the kernel gzip functions should
be treated maybe more
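A rough version of that test, using a gzip-compressed dataset for the in-kernel path and ptime for both (names hypothetical; not a rigorous benchmark, since the ZFS write completes asynchronously unless forced out with sync):

  zfs create -o compression=gzip-6 tank/wavtest
  ptime sh -c 'cp big.wav /tank/wavtest/ && sync'   # in-kernel gzip, forced to disk
  zfs get compressratio tank/wavtest
  ptime gzip -6 -c big.wav > /dev/null              # userland gzip at the same level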