On 6/1/07, Frank Cusack <[EMAIL PROTECTED]> wrote:
On June 1, 2007 9:44:23 AM -0700 Richard Elling <[EMAIL PROTECTED]>
wrote:
[...]
> Semiconductor memories are accessed in parallel. Spinning disks are
> accessed serially. Let's take a look at a few examples and see what
> this looks like...
>
On June 1, 2007 9:44:23 AM -0700 Richard Elling <[EMAIL PROTECTED]>
wrote:
Frank Cusack wrote:
On May 31, 2007 1:59:04 PM -0700 Richard Elling <[EMAIL PROTECTED]>
wrote:
CF cards aren't generally very fast, so the solid state disk vendors are
putting them into hard disk form factors with SAS/SATA interfaces.
Eric Schrock <[EMAIL PROTECTED]> wrote on Friday, June 01, 2007 12:50:50:
Only devices that use the SATA framework (Marvell, Silicon Image,
and others - I don't remember the full list) use the SCSI emulation
required to make this work.
* Do I need any special SATA configuration to get the SMART data?
On Jun 1, 2007, at 18:37, Richard L. Hamilton wrote:
Can one use a spare SCSI or FC controller as if it were a target?
we'd need an FC or SCSI target mode driver in Solaris .. let's just say
we used to have one, and leave it mysteriously there. smart idea though!
---
.je
> I'd love to be able to serve zvols out as SCSI or FC targets. Are
> there any plans to add this to ZFS? That would be amazingly awesome.
Can one use a spare SCSI or FC controller as if it were a target?
Even if the hardware is capable, I don't see what you describe as
a ZFS thing really;
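Not FC, but for what it's worth, recent builds can already export a
zvol as an iSCSI target via the shareiscsi property. A minimal sketch,
with invented pool/volume names:

  zfs create -V 10g tank/vol1
  # advertise the zvol as an iSCSI target
  zfs set shareiscsi=on tank/vol1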
On 1-Jun-07, at 7:50 PM, Eric Schrock wrote:
On Fri, Jun 01, 2007 at 12:33:29PM -1000, J. David Beutel wrote:
Excellent! Thanks! I've gleaned the following from your blog.
Is this
correct?
* A week ago you committed a change that will:
** get current SMART parameters and faults for SATA on x86 via a single
function in a private library using SCSI emulation;
On Fri, Jun 01, 2007 at 12:33:29PM -1000, J. David Beutel wrote:
> Excellent! Thanks! I've gleaned the following from your blog. Is this
> correct?
>
> * A week ago you committed a change that will:
> ** get current SMART parameters and faults for SATA on x86 via a single
> function in a private library using SCSI emulation;
I'm trying to test an install of ZFS to see if I can backup data from one
machine to another. I'm using Solaris 5.10 on two VMware installs.
When I do the zfs send | ssh zfs recv part, the file system (folder) is getting
created, but none of the data that I have in my snapshot is sent. I can
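For reference, the basic shape of such a transfer looks like this
(host and dataset names are made up):

  # snapshot the source, then stream the snapshot to the second machine
  zfs snapshot tank/data@backup1
  zfs send tank/data@backup1 | ssh host2 zfs recv tank2/data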
Excellent! Thanks! I've gleaned the following from your blog. Is this
correct?
* A week ago you committed a change that will:
** get current SMART parameters and faults for SATA on x86 via a single
function in a private library using SCSI emulation;
** decide whether they indicate any problems
Mark J Musante wrote:
Note that if you use the recursive snapshot and destroy, only one line is
added to the pool history.
My "problem" (and it really is /not/ an important one) was that
I had a cron job that every minute did
min=`date "+%d"`
snap="$pool/[EMAIL PROTECTED]"
zfs destroy "$snap"
See:
http://blogs.sun.com/eschrock/entry/solaris_platform_integration_generic_disk
Prior to the above work, we only monitored disks on Thumper (x4500)
platforms. With these changes we monitor basic SMART data for SATA
drives. Monitoring for SCSI drives will be here soon. The next step
will be
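Once that monitoring is in place, the results surface through the
usual FMA tools, e.g.:

  fmdump -eV | head   # raw error-report telemetry (ereports)
  fmadm faulty        # any diagnosed faults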
On Fri, 1 Jun 2007, John Plocher wrote:
> This seems especially true when there is closure on actions - the set of
> zfs snapshot foo/[EMAIL PROTECTED]
> zfs destroy foo/[EMAIL PROTECTED]
> commands is (except for debugging zfs itself) a noop
Note that if you use the recursive snapshot and destroy, only one line is
added to the pool history.
On Solaris x86, does zpool (or anything) support PATA (or SATA) IDE
SMART data? With the Predictive Self Healing feature, I assumed that
Solaris would have at least some SMART support, but what I've googled so
far has been discouraging.
http://prefetch.net/blog/index.php/2006/10/29/solaris-ne
On Fri, Jun 01, 2007 at 02:09:55PM -0700, John Plocher wrote:
> eric kustarz wrote:
> >We specifically didn't allow the admin the ability to truncate/prune the
> >log as then it becomes unreliable - ooops i made a mistake, i better
> >clear the log and file the bug against zfs
>
> I understand - auditing means never getting to blame someone else :-)
On Jun 1, 2007, at 2:09 PM, John Plocher wrote:
eric kustarz wrote:
We specifically didn't allow the admin the ability to truncate/prune
the log as then it becomes unreliable - ooops i made a mistake, i
better clear the log and file the bug against zfs
I understand - auditing means never getting to blame someone else :-)
eric kustarz wrote:
We specifically didn't allow the admin the ability to truncate/prune the
log as then it becomes unreliable - ooops i made a mistake, i better
clear the log and file the bug against zfs
I understand - auditing means never getting to blame someone else :-)
There are th
2) Following Chris's advice to do more with snapshots, I
played with his cron-triggered snapshot routine:
http://blogs.sun.com/chrisg/entry/snapping_every_minute
Now, after a couple of days, zpool history shows almost
100,000 lines of output (from all the snapshots and
deletions...)
Hello Richard,
Thursday, May 31, 2007, 10:59:04 PM, you wrote:
>>
>> Having 2 cards would certainly make the "unlikely replacement" of a card
>> a LOT more straight-forward than a single-card failure... Much of this
>> would depend on the quality of these CF-cards and how they put up under
>> load
Hello Richard,
RE> But I am curious as to why you believe 2x CF are necessary?
RE> I presume this is so that you can mirror. But the remaining memory
RE> in such systems is not mirrored. Comments and experiences are welcome.
I was thinking about mirroring - it's not clear from the comment above
I managed to correct the problem by writing a script inspired
by Chris Gerhard's blog that did a zfs send | zfs recv. Now
that things are back up, I have a couple of lingering questions:
1) I noticed that the filesystem size information is not the
same between the src and dst filesystem sets
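For reference, the incremental form of that pipeline, assuming the
destination already holds the earlier snapshot (names invented):

  zfs snapshot tank/fs@tue
  # send only the delta between @mon and @tue
  zfs send -i tank/fs@mon tank/fs@tue | zfs recv newpool/fs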
> If I put the database in hot-backup mode, then I will have to ensure
> that the filesystem is consistent as well. So, you are saying that
> taking a ZFS snapshot is the only method to guarantee consistency in
> the filesystem, since it flushes all the buffers to the filesystem,
> so it's consistent.
Both levels, application and filesystem.
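The usual pattern combines the two, something like this (Oracle 10g
syntax; the pool name is an assumption):

  # quiesce the database, take an atomic snapshot, then resume
  echo "alter database begin backup;" | sqlplus -s / as sysdba
  zfs snapshot -r dbpool@hotbackup
  echo "alter database end backup;" | sqlplus -s / as sysdba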
If I put the database in hot-backup mode, then I will have to ensure
that the filesystem is consistent as well. So, you are saying that
taking a ZFS snapshot is the only method to guarantee consistency in
the filesystem, since it flushes all the buffers to the filesystem, so
it's consistent.
Yes they can.
-- Fred
benita ulisano wrote:
Hi,
I would like to clarify one point for the forum experts on what I would like to
do after it was brought to my attention that my posting might not describe a
true picture of what I am trying to accomplish.
All I want to do is setup a separate zfs file system running Oracle on the
machine
Original Message
Subject: zone mount points are busy following reboot of global zone
65505676
Date: Fri, 01 Jun 2007 12:33:57 -0400
From: [EMAIL PROTECTED]
To: [EMAIL PROTECTED] <[EMAIL PROTECTED]>
I need help on this customer's issue. I appreciate any help that you can
provide.
Frank Cusack wrote:
On May 31, 2007 1:59:04 PM -0700 Richard Elling <[EMAIL PROTECTED]>
wrote:
CF cards aren't generally very fast, so the solid state disk vendors are
putting them into hard disk form factors with SAS/SATA interfaces. These
If CF cards aren't fast, how will putting them into hard disk form
factors make them faster?
Hi,
I would like to clarify one point for the forum experts on what I would like to
do after it was brought to my attention that my posting might not describe a
true picture of what I am trying to accomplish.
All I want to do is setup a separate zfs file system running Oracle on the
machine ru
benita ulisano wrote:
Hi,
I have been given the task to research converting our vxfs/vm file
systems and volumes to zfs. The volumes are attached to an EMC Clariion
running raid-5, and raid 1_0. I have no test machine, just a migration
machine that currently hosts other things. It is possible to setup a zfs
file system
> Patching zfs_prefetch_disable = 1 has helped
It's my belief this mainly aids scanning metadata; my testing with
rsync and yours with find (and what's seen with "du &" alongside
"zpool iostat -v 1") bears this out.
This is mainly tracked in bug 6437054 "vdev_cache: wise up or die":
http://www.opensolaris.org/jive/thread.js
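(For reference, the tunable mentioned above goes in /etc/system and
takes effect at the next boot:)

  set zfs:zfs_prefetch_disable = 1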
I wrote
> Has anyone else noticed a significant zfs performance
> deterioration when running recent opensolaris bits?
>
> My 32-bit / 768 MB Toshiba Tecra S1 notebook was able
> to do a full opensolaris release build in ~ 4 hours 45
> minutes (gcc shadow compilation disabled; using an lzjb
> compressed
zpool replace == zpool attach + zpool detach
It is not a good practice to detach and then attach as you
are vulnerable after the detach and before the attach completes.
It is a good practice to attach and then detach. There is no
practical limit to the number of sides of a mirror in ZFS.
-- richard
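Concretely, with the devices from this thread:

  zpool attach mypool c1t2d0 emcpower0a   # mirror the old disk onto the new one
  zpool status mypool                     # wait for "resilver completed"
  zpool detach mypool c1t2d0              # then drop the old disk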
Hi,
I have been given the task to research converting our vxfs/vm file systems and
volumes to zfs. The volumes are attached to an EMC Clariion running raid-5, and
raid 1_0. I have no test machine, just a migration machine that currently hosts
other things. It is possible to setup a zfs file system
On Fri, 1 Jun 2007, Krzys wrote:
> bash-3.00# zpool replace mypool c1t2d0 emcpower0a
> bash-3.00# zpool status
>   pool: mypool
>  state: ONLINE
> status: One or more devices is currently being resilvered.  The pool will
>         continue to function, possibly in a degraded state.
> action: Wait for the resilver to complete.
Thank you, Lori, that's fantastic.
Ok, now it seems to be working the way I wanted:
bash-3.00# zpool status
  pool: mypool
 state: ONLINE
 scrub: resilver completed with 0 errors on Thu May 31 23:01:09 2007
config:

        NAME            STATE     READ WRITE CKSUM
        mypool          ONLINE       0     0     0
          mirror
Yeah, it does something funky that I did not expect: zpool seems to be
taking slice 0 of that EMC LUN rather than the whole device...
So when I created that LUN, I formatted the disk and it looked like this:
format> verify
Primary label contents:
Volume name = <        >
ascii name  =
OK, I think I figured out what the problem is: what zpool does for
that EMC powerpath device is take partition 0 from the disk and try to
attach it to my pool, so when I added emcpower0a I got the following:
bash-3.00# zpool list
NAME       SIZE    USED   AVAIL    CAP  HEALTH
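One possible workaround, if the label is the issue (device names
assumed): check how much of the LUN slice 0 actually covers, and grow
it in format(1M) before attaching:

  prtvtoc /dev/rdsk/emcpower0a   # does slice 0 span the whole LUN?
  format -e                      # select the emcpower device, resize slice 0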
On 6/1/07, Krzys <[EMAIL PROTECTED]> wrote:
bash-3.00# zpool list
NAME       SIZE    USED   AVAIL    CAP  HEALTH   ALTROOT
mypool      68G   53.1G   14.9G    78%  ONLINE   -
mypool2    123M   83.5K    123M     0%  ONLINE   -

Are you sure you've allocated
Nevertheless I get the following error:
bash-3.00# zpool attach mypool emcpower0a
missing <new_device> specification
usage:
        attach [-f] <pool> <device> <new_device>
bash-3.00# zpool status
  pool: mypool
 state: ONLINE
 scrub: resilver completed with 0 errors on Thu May 31 23:01:09 2007
config:

        NAME            STATE
Yes, but my goal is to replace the existing 72GB internal disk with a
SAN storage disk which is 100GB in size... As long as I will be able
to detach the old one then it's going to be great... otherwise I will
be stuck with one internal disk and one SAN disk, which I do not like
that much to