Manoj Nayak writes:
> Hi All.
>
> The ZFS documentation says ZFS schedules its I/O in such a way that it
> manages to saturate a single disk's bandwidth using enough concurrent
> 128K I/Os. The number of concurrent I/Os is decided by vq_max_pending.
> The default value for vq_max_pending is 35.
>
> We
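For illustration, a hedged sketch of how that queue depth could be inspected or tuned on builds of that era: the per-vdev limit was exposed as the kernel variable zfs_vdev_max_pending (default 35), though the exact name, and whether tuning it is advisable at all, varies by release.

    # Sketch only; tunable name and availability depend on the build.
    # Read the current per-vdev queue depth from the live kernel:
    echo zfs_vdev_max_pending/D | mdb -k
    # Lower it on the running system (example value 10, decimal):
    echo zfs_vdev_max_pending/W0t10 | mdb -kw
    # Or set it persistently in /etc/system:
    #   set zfs:zfs_vdev_max_pending = 10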
Roch - PAE wrote:
> Manoj Nayak writes:
> > Hi All.
> >
> > The ZFS documentation says ZFS schedules its I/O in such a way that it
> > manages to saturate a single disk's bandwidth using enough concurrent
> > 128K I/Os. The number of concurrent I/Os is decided by vq_max_pending.
> > The default value for vq_max_pending is 35.
Hello,
Thought I'd mention a recent (slightly biased) article comparing
DragonflyBSD's new HAMMER file system and ZFS:
Infinite [automatic] snapshots
As-of mounts [like PITR on Postgres]
Clustered
Backups made easy
File database
Huge ("multi-hundr
This is one feature I've been hoping for... old threads and blogs talk about
this feature possibly showing up by the end of 2007. Just curious what the
status of this feature is...
thanks,
john
On Jan 23, 2008 6:36 AM, Manoj Nayak <[EMAIL PROTECTED]> wrote:
> It means a 4-disk raid-z group inside a ZFS pool is exposed to ZFS as a
> single device (vdev), and ZFS assigns a vq_max_pending value of 35 to this
> vdev. To get higher throughput, do I need to do the following?
>
> 1. Reduce the number of disks
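A hedged sketch of the trade-off behind that list: more, narrower raid-z groups mean more top-level vdevs, and each top-level vdev gets its own 35-deep I/O queue. Pool and disk names below are made up for illustration.

    # One wide 8-disk raid-z group: a single top-level vdev, one I/O queue.
    zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 \
        c1t4d0 c1t5d0 c1t6d0 c1t7d0
    # The same disks as two 4-disk raid-z groups: two top-level vdevs and
    # two independent I/O queues, at the cost of one extra parity disk.
    zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 \
        raidz c1t4d0 c1t5d0 c1t6d0 c1t7d0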
Manoj Nayak wrote:
> Roch - PAE wrote:
>
>> Manoj Nayak writes:
>> > Hi All.
>> >
>> > The ZFS documentation says ZFS schedules its I/O in such a way that it
>> > manages to saturate a single disk's bandwidth using enough concurrent
>> > 128K I/Os. The number of concurrent I/Os is decided by vq_max_pending
I remember reading a discussion where this kind of problem was discussed.
Basically it boils down to "everything" not being aware of the radical changes
in the "filesystems" concept.
All these things are being worked on, but it might take some time before
everything is made aware that yes, it's no
On Wed, Jan 23, 2008 at 08:02:22AM -0800, Akhilesh Mritunjai wrote:
> I remember reading a discussion where this kind of problem was
> discussed.
>
> Basically it boils down to "everything" not being aware of the
> radical changes in the "filesystems" concept.
>
> All these things are being worked
> Is this service something that we'd like to put into OpenSolaris
Heck yes, at least Indiana needs something like that. I guess nobody is
spearheading the "Indiana data backup solution" right now, but that work of
yours could be part of it.
To the user there is no difference between "regularl
Hi,
We're currently testing a Thumper running Solaris10 with a view to buying a
couple.
I'm not seeing what I would expect to see as far as snapshot space utilization
goes, and I'm hoping someone can explain why.
Here's what I'm doing:
Creating an empty zpool & zfs
Creating a 6MB text file
Taking a snapshot
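For reference, a minimal reproduction of the steps above might look like this; the pool, device, and file names are placeholders, and mkfile stands in for the 6MB text file.

    # Hypothetical reproduction of the test described above.
    zpool create testpool c2t0d0            # empty pool on a spare disk
    zfs create testpool/fs
    mkfile 6m /testpool/fs/file.txt         # stand-in for the 6MB text file
    zfs snapshot testpool/fs@snap1
    zfs list -r testpool                    # snapshot should report ~0 used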
Say I'm firing off an at(1) or cron(1) job to do scrubs, and say I want to
scrub two pools sequentially because they share one device. The first pool,
BTW, is a mirror comprising a smaller disk and a subset of a larger disk. The
other pool is the remainder of the larger disk.
I see no docu
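In the absence of a documented wait mechanism, one common workaround is simply to poll zpool status between the two scrubs. A hedged ksh sketch, assuming the "scrub in progress" wording of that release and made-up pool names:

    #!/bin/ksh
    # Scrub two pools that share a device, one after the other.
    zpool scrub pool1
    while zpool status pool1 | grep "scrub in progress" > /dev/null
    do
            sleep 60
    done
    zpool scrub pool2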
On Wed, Jan 23, 2008 at 11:11:38AM -0800, Matt Newcombe wrote:
> Creating an empty zpool & zfs
> Creating a 6MB text file
> Taking a snapshot
>
> So far so good. The filesystem size is 6MB and the snapshot 0MB
>
> Now I edit the first 4 characters of the text file. I would have
> expected the siz
How are you editing the file? Are you sure your editor isn't writing
out the entire file even though only four characters have changed? If
you truss the app, do you see a single 4 byte write to the file?
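For example, something along these lines might show it (the file path is a placeholder):

    # Trace only write(2) calls made by the editor and its children:
    truss -f -t write vi /testpool/fs/file.txt 2> /tmp/vi.truss
    grep 'write(' /tmp/vi.truss    # one 4-byte write, or the whole file?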
- Eric
On Wed, Jan 23, 2008 at 11:11:38AM -0800, Matt Newcombe wrote:
> Hi,
>
> We're curr
Sorry, no such feature exists. We do generate sysevents for when
resilvers are completed, but not scrubs. Adding those sysevents would
be an easy change, but doing anything more complicated (such as baking
that functionality into zpool(1M)) would be annoying.
If you want an even more hacked up v
OK, to answer my own question (with a little help from Eric!) ...
I was using vi to edit the file, which must be rewriting the entire file back
out to disk, hence the larger-than-expected growth of the snapshot.
Matt
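To see the behaviour the test originally expected, the four bytes can be overwritten in place instead of letting an editor rewrite the whole file; a hedged sketch, reusing the placeholder names from the earlier example:

    # Overwrite only the first 4 bytes in place (no truncate, no full rewrite):
    printf 'ABCD' | dd of=/testpool/fs/file.txt bs=1 count=4 conv=notrunc
    zfs list -r testpool    # @snap1 should now use ~one 128K record, not ~6MB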
Hi,
I have been experiencing corruption on one of my ZFS pools over the last couple
of days. I have tried running zpool scrub on the pool, but every time it comes
back with new files being corrupted. I would have thought that zpool scrub
would have identified the corrupted files once and for all
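As a first step, it is worth listing exactly which files the pool currently considers damaged and what the devices have been reporting; these are standard commands, and the pool name is made up:

    zpool status -v mypool   # per-device error counters plus the list of
                             # files with permanent errors
    fmdump -eV | more        # raw error telemetry (driver/transport errors)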
Dan McDonald wrote:
> Say I'm firing off an at(1) or cron(1) job to do scrubs, and say I want to
> scrub two pools sequentially
> because they share one device. The first pool, BTW, is a mirror comprising
> a smaller disk and a subset of a larger disk. The other pool is the
> remainder of t
On Wed, Jan 23, 2008 at 12:56:16PM -0800, Richard Elling wrote:
> This is pretty trivial to code in a script. Here is a ksh function
> I use for testing resilvering performance.
>
> function wait_for_resilver {
> date
> while zpool status $POOLNAME | grep "resilver in progress"
>
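The quoted function is cut off above; a completed version of the same idea might look like the following. The grep string and sleep interval are assumptions; adjust them to the zpool status wording of your build, or grep for "scrub in progress" to wait on scrubs instead.

    function wait_for_resilver {
            date
            while zpool status $POOLNAME | grep "resilver in progress" > /dev/null
            do
                    sleep 30
            done
            date
    }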
> Thought I'd mention a recent (slightly biased) article comparing
> DragonflyBSD's new HAMMER file system and ZFS:
>
> For those who don't know, DragonflyBSD is a fork of FreeBSD 4.x whose
> "ultimate goal is to provide generic clustering support natively in
> the kernel":
Let's wait and see. Di
Thiago Sobral wrote:
> Hi Thomas,
>
> Thomas Maier-Komor wrote:
>> Thiago Sobral wrote:
>>>
>>> I need to manage volumes like LVM does on Linux or AIX, and I think
>>> that ZFS can solve this issue.
>>>
>>> I read the SVM specification, and it certainly will not be the
>>> solution that
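For the LVM-style use case, the usual mapping is: the pool plays the role of a volume group, a ZFS filesystem replaces a logical volume that would otherwise need manual resizing, and a zvol is the closest analogue to a fixed-size logical volume exposed as a block device. A hedged sketch with made-up names:

    # Pool plays the role of an LVM volume group (device names are made up):
    zpool create datapool mirror c1t0d0 c1t1d0
    # Filesystems share the pool's space; no manual LV resizing needed:
    zfs create datapool/home
    # An emulated volume (zvol) appears as a block device under
    # /dev/zvol/dsk/datapool/vol1:
    zfs create -V 10g datapool/vol1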
The Silicon Image 3114 controller is known to corrupt data.
Google for "silicon image 3114 corruption" to get a flavor.
I'd suggest getting your data onto different h/w, quickly.
Jeff
On Wed, Jan 23, 2008 at 12:34:56PM -0800, Bertrand Sirodot wrote:
> Hi,
>
> I have been experiencing corruption
I believe the issue has been fixed in snv_72+, no?
On Wed, 2008-01-23 at 16:41 -0800, Jeff Bonwick wrote:
> The Silicon Image 3114 controller is known to corrupt data.
> Google for "silicon image 3114 corruption" to get a flavor.
> I'd suggest getting your data onto different h/w, quickly.
>
> Jeff
>
>
Jeff Bonwick wrote:
> The Silicon Image 3114 controller is known to corrupt data.
> Google for "silicon image 3114 corruption" to get a flavor.
> I'd suggest getting your data onto different h/w, quickly.
I'll second this; the 3114 is a piece of junk if you value your data. I
bought a 4-port LSI
Actually s10_72, but it's not really a fix, it's a workaround
for a bug in the hardware. I don't know how effective it is.
Jeff
On Wed, Jan 23, 2008 at 04:54:54PM -0800, Erast Benson wrote:
> I believe the issue has been fixed in snv_72+, no?
>
> On Wed, 2008-01-23 at 16:41 -0800, Jeff Bonwick wrote:
>
Well, we had some problems with the si3124 driver, but with the driver binary
posted in this forum the problem seems to have been fixed. Later we saw the
same fix go into b72.
On Thu, 2008-01-24 at 05:11 +0300, Jonathan Stewart wrote:
> Jeff Bonwick wrote:
> > The Silicon Image 3114 controller is known to co
Hi,
if I want to stay with SATA and not go to SAS, do you have a recommendation on
which SATA controller is actually supported by Solaris?
The weird thing about the corruption is that everything was fine until one of
the disks went flaky and things went downhill during the resilvering. Now I am lef
Bertrand Sirodot wrote:
> Hi,
>
> if I want to stay with SATA and not go to SAS, do you have a
> recommendation on which SATA controller is actually supported by
> Solaris?
SAS controllers do support SATA drives actually (not the other way
around though). I'm running SATA drives on mine without
John wrote:
> This is one feature I've been hoping for... old threads and blogs talk about
> this feature possibly showing up by the end of 2007. Just curious what the
> status of this feature is...
It's still a high priority on our road map, just pushed back a bit. Our
current goal is t