On Aug 2, 2010, at 8:18 PM, Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Jonathan Loran
>>
> Because you're at pool v15, it does not matter if the log device fails while
> you
Will the GUID for each pool get found by
the system from the partitioned log drives?
Please give me your sage advice. Really appreciate it.
Jon
he zfs layer, and also do backups.
Unfortunately for me, penny pinching has precluded both for us until
now.
Jon
On Jun 1, 2009, at 4:19 PM, A Darren Dunham wrote:
On Mon, Jun 01, 2009 at 03:19:59PM -0700, Jonathan Loran wrote:
Kinda scary then. Better make sure we delete all the bad fil
on
On Jun 1, 2009, at 2:41 PM, Paul Choi wrote:
"zpool clear" just clears the list of errors (and # of checksum
errors) from its stats. It does not modify the filesystem in any
manner. You run "zpool clear" to make the zpool forget that it ever
had any issues.
-Paul
Jonat
es intact?
I'm going to perform a full backup of this guy (not so easy on my
budget), and I would rather only get the good files.
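(Aside: a minimal sketch of the CLI side of this, with a made-up pool name "tank" -
"zpool status -v" lists the files with permanent errors, while "zpool clear", as Paul
says, only resets the counters:)
# list files flagged with permanent (unrecoverable) errors
zpool status -v tank
# forget the error/checksum counts; the data itself is untouched
zpool clear tank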
Thanks,
Jon
the system board for this machine would make use of ECC
memory either, which is not good from a ZFS perspective. How many SATA
plugs are there on the MB in this guy?
Jon
tools, resilience of the platform, etc.)..
>
> .. Of course though, I guess a lot of people who may have never had a
> problem wouldn't even be signed up on this list! :-)
>
>
> Thanks!
two vdevs out
of two raidz to see if you get twice the throughput, more or less. I'll
bet the answer is yes.
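One low-effort way to sanity-check that without rebuilding anything (pool name "tank"
is hypothetical) is to watch per-vdev bandwidth while the load runs; if both raidz
vdevs show similar numbers, the I/O is being spread across them:
# per-vdev read/write bandwidth, refreshed every 5 seconds
zpool iostat -v tank 5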
Jon
value of a failure in one year:
Fe = 46% failures/month * 12 months = 5.52 failures
Jon
Jorgen Lundman wrote:
> # /usr/X11/bin/scanpci | /usr/sfw/bin/ggrep -A1 "vendor 0x11ab device
> 0x6081"
> pci bus 0x0001 cardnum 0x01 function 0x00: vendor 0x11ab device 0x6081
> Marvell Technology Group Ltd. MV88SX6081 8-port SATA II PCI-X Controller
>
> But it claims resolved for our version:
Miles Nordin wrote:
>> "s" == Steve <[EMAIL PROTECTED]> writes:
>>
>
> s> http://www.newegg.com/Product/Product.aspx?Item=N82E16813128354
>
> no ECC:
>
> http://en.wikipedia.org/wiki/List_of_Intel_chipsets#Core_2_Chipsets
>
This MB will take these:
http://www.inte
e best position to monitor the device.
> >
> > The primary goal of ZFS is to be able to correctly read data which was
> > successfully committed to disk. There are programming interfaces
> > (e.g. fsync(), msync()) which may be used to en
it be possible to have a number of possible places to store this
> log? What I'm thinking is that if the system drive is unavailable,
> ZFS could try each pool in turn and attempt to store the log there.
>
> In fact e-mail alerts or external error logging would be a great
> addition to ZFS. Surely it makes sense that filesy
sed upon
block reference count. If a block has few references, it should expire
first, and vice versa, blocks with many references should be the last
out. With all the savings on disks, think how much RAM you could buy ;)
Jon
> Check out the following blog..:
>
> http://blogs.sun.com/erickustarz/entry/how_dedupalicious_is_your_pool
>
>
Unfortunately we are on Solaris 10 :( Can I get a zdb for zfs V4 that
will dump those checksums?
Jon
e willing to run it and provide feedback. :)
>
> -Tim
>
>
Me too. Our data profile is just like Tim's: terabytes of satellite
data. I'm going to guess that the d11p ratio won't be fantastic for
us. I sure would like
ardware and software, but they are all steep on the ROI
curve. I would be very excited to see block level ZFS deduplication
roll out. Especially since we already have the infrastructure in place
using Solaris/ZFS.
Cheers,
Jon
ions.
>
>
Ben,
Haven't read this whole thread, and this has been brought up before, but
make sure your power supply is running clean. I can't tell you how many
times I've seen very strange and intermittent system errors occur from a
Jonathan Loran wrote:
> Since no one has responded to my thread, I have a question: Is zdb
> suitable to run on a live pool? Or should it only be run on an exported
> or destroyed pool? In fact, I see that it has been asked before on this
> forum, but is there a users
--
Jonathan Loran - IT Manager
Space Sciences Laboratory, UC Berkeley
(510) 643-5146 [EMAIL PROTECTED]
Hi List,
First of all: S10u4 120011-14
So I have a weird situation. Earlier this week, I finally mirrored up
two iSCSI based pools. I had been wanting to do this for some time,
because the availability of the data in these pools is important. One
pool mirrored just fine, but the other po
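(For reference, a minimal sketch of the "mirroring up" step itself, with made-up pool
and device names; the iSCSI LUN just shows up as another cNtNdN device:)
# attach the iSCSI LUN as a mirror of the existing device; ZFS resilvers automatically
zpool attach tank c2t0d0 c3t1d0
# watch the resilver progress
zpool status tank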
s, which use an indirect map,
we just use the Solaris map, thus:
auto_home:
*    zfs-server:/home/&
Sorry to be so off (ZFS) topic.
Jon
Dominic Kay wrote:
> Hi
>
> Firstly apologies for the spam if you got this email via multiple aliases.
>
> I'm trying to document a number of common scenarios where ZFS is used
> as part of the solution such as email server, $homeserver, RDBMS and
> so forth but taken from real implementations
Bob Friesenhahn wrote:
> On Tue, 22 Apr 2008, Jonathan Loran wrote:
>>>
>> But that's the point. You can't correct silent errors on write-once
>> media because you can't write the repair.
>
> Yes, you can correct the error (at time of read) due to
Bob Friesenhahn wrote:
>> The "problem" here is that by putting the data away from your machine,
>> you lose the chance to "scrub"
>> it on a regular basis, i.e. there is always the risk of silent
>> corruption.
>>
>
> Running a scrub is pointless since the media is not writeable. :-)
>
>
Luke Scharf wrote:
> Maurice Volaski wrote:
>
>>> Perhaps providing the computations rather than the conclusions would
>>> be more persuasive on a technical list ;>
>>>
>>>
>> 2 16-disk SATA arrays in RAID 5
>> 2 16-disk SATA arrays in RAID 6
>> 1 9-disk SATA array in RAID 5.
>>
>
Chris Siebenmann wrote:
> | What you're saying is independent of the iqn id?
>
> Yes. SCSI objects (including iSCSI ones) respond to specific SCSI
> INQUIRY commands with various 'VPD' pages that contain information about
> the drive/object, including serial number info.
>
> Some Googling turns up
Just to report back to the list... Sorry for the lengthy post
So I've tested the iSCSI based zfs mirror on Sol 10u4, and it does more
or less work as expected. If I unplug one side of the mirror - unplug
or power down one of the iSCSI targets - I/O to the zpool stops for a
while, perhaps a
Vincent Fox wrote:
> Followup, my initiator did eventually panic.
>
> I will have to do some setup to get a ZVOL from another system to mirror
> with, and see what happens when one of them goes away. Will post in a day or
> two on that.
>
>
On Sol 10 U4, I could have told you that. A few
kristof wrote:
> If you have a mirrored iSCSI zpool, it will NOT panic when one of the
> submirrors is unavailable.
>
> zpool status will hang for some time, but after (I think) 300 seconds it
> will mark the device as unavailable.
>
> The panic was the default in the past, and it only occurs if all
> This guy seems to have had lots of fun with iSCSI :)
> http://web.ivy.net/~carton/oneNightOfWork/20061119-carton.html
>
>
This is scaring the heck out of me. I have a project to create a zpool
mirror out of two iSCSI targets, and if the failure of one of them will
panic my system, that wil
Bob Friesenhahn wrote:
> On Tue, 25 Mar 2008, Robert Milkowski wrote:
>> As I wrote before - it's not only about RAID config - what if you have
>> hundreds of file systems, with some share{nfs|iscsi|cifs} enabled with
>> specific parameters, then specific file system options, etc.
>
> Some zfs-re
Robert Milkowski wrote:
Hello Jonathan,
Friday, March 14, 2008, 9:48:47 PM, you wrote:
>
Carson Gaspar wrote:
Bob Friesenhahn wrote:
On Fri, 14 Mar 2008, Bill Shannon wrote:
What's the best way to back up a zfs filesystem to tape, where the size
of the filesystem is la
's choice of NFS v4 ACLs. This is the only way to ensure
CIFS compatibility, and it is the way the industry will be moving.
Jon
Patrick Bachmann wrote:
Jonathan,
On Tue, Mar 04, 2008 at 12:37:33AM -0800, Jonathan Loran wrote:
I'm not sure I follow how this would work.
The keyword here is thin provisioning. The sparse zvol only uses
as much space as the actual data needs. So, if you use a sparse
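A minimal sketch of the sparse-zvol route Patrick is describing (names are made up; -s
skips the reservation, so the zvol only charges the pool for blocks actually written):
# create a 2 TB thin-provisioned (sparse) zvol
zfs create -s -V 2T tank/thinvol
# 'used' stays small until real data lands in the volume
zfs list -o name,volsize,used tank/thinvol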
Patrick Bachmann wrote:
> Jonathan,
>
> On Mon, Mar 03, 2008 at 11:14:14AM -0800, Jonathan Loran wrote:
>
>> What I'm left with now is to do more expensive modifications to the new
>> mirror to increase its size, or to use zfs send | receive or rsync to
>>
Shawn Ferry wrote:
On Mar 3, 2008, at 2:14 PM, Jonathan Loran wrote:
Now I know this is counterculture, but it's biting me in the backside
right now, and ruining my life.
I have a storage array (iSCSI SAN) that is performing badly, and
requires some upgrades/reconfiguration. I h
with Solaris instead on the SAN box? It's just commodity x86 server
hardware.
My life is ruined by too many choices, and not enough time to evaluate
everything.
Jon
Roch Bourbonnais wrote:
>
> On Feb 28, 2008, at 21:00, Jonathan Loran wrote:
>
>>
>>
>> Roch Bourbonnais wrote:
>>>
>>> On Feb 28, 2008, at 20:14, Jonathan Loran wrote:
>>>
>>>>
>>>> Quick question:
>>>>
Roch Bourbonnais wrote:
>
> On Feb 28, 2008, at 20:14, Jonathan Loran wrote:
>
>>
>> Quick question:
>>
>> If I create a ZFS mirrored pool, will the read performance get a boost?
>> In other words, will the data/parity be read round robin between the
>
Quick question:
If I create a ZFS mirrored pool, will the read performance get a boost?
In other words, will the data/parity be read round robin between the
disks, or do both mirrored sets of data and parity get read off of both
disks? The latter case would have a CPU expense, so I would thi
David Magda wrote:
> On Feb 24, 2008, at 01:49, Jonathan Loran wrote:
>
>> In some circles, CDP is big business. It would be a great ZFS offering.
>
> ZFS doesn't have it built-in, but AVS may be an option in some cases:
>
> http://opensolaris.org/os/project/avs
Uwe Dippel wrote:
> [i]google found that solaris does have file change notification:
> http://blogs.sun.com/praks/entry/file_events_notification
> [/i]
>
> Didn't see that one, thanks.
>
> [i]Would that do the job?[/i]
>
> It is not supposed to do a job, thanks :), it is for a presentation at a
[EMAIL PROTECTED] wrote:
On Tue, Feb 12, 2008 at 10:21:44PM -0800, Jonathan Loran wrote:
Thanks for any help anyone can offer.
I have faced a similar problem (although not exactly the same) and was going to
monitor the disk queue with dtrace but couldn't find any docs/urls abo
up for the VFS layer.
>
> I'd also check syscall latencies - it might be too obvious, but it can be
> worth checking (eg, if you discover those long latencies are only on the
> open syscall)...
>
> Brendan
>
>
>
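Along the lines of Brendan's suggestion, a rough, untested sketch that buckets open(2)
latency per call:
# histogram of open()/open64() call latency, in nanoseconds
dtrace -n '
syscall::open*:entry  { self->ts = timestamp; }
syscall::open*:return /self->ts/ {
    @["open latency (ns)"] = quantize(timestamp - self->ts);
    self->ts = 0;
}'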
Marion Hakanson wrote:
[EMAIL PROTECTED] said:
It's not that old. It's a Supermicro system with a 3ware 9650SE-8LP.
Open-E iSCSI-R3 DOM module. The system is plenty fast. I can pretty
handily pull 120MB/sec from it, and write at over 100MB/sec. It falls apart
more on random I/O. The s
Marion Hakanson wrote:
[EMAIL PROTECTED] said:
...
I know, I know, I should have gone with a JBOD setup, but it's too late for
that in this iteration of this server. When we set this up, I had the gear
already, and it's not in my budget to get new stuff right now.
What kind of arra
Hi List,
I'm wondering if one of you expert DTrace guru's can help me. I want to
write a DTrace script to print out a histogram of how long IO requests
sit in the service queue. I can output the results with the quantize
method. I'm not sure which provider I should be using for this. Doe
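A rough, untested sketch with the io provider - it times each buffer from io:::start to
io:::done, so it captures total service time rather than queue wait alone:
# latency histogram for block I/O, keyed on the buf address (arg0)
dtrace -n '
io:::start { ts[arg0] = timestamp; }
io:::done /ts[arg0]/ {
    @["I/O service time (ns)"] = quantize(timestamp - ts[arg0]);
    ts[arg0] = 0;
}'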
Anton B. Rang wrote:
Careful here. If your workload is unpredictable, RAID 6 (and RAID 5, for
that matter) will break down under highly randomized write loads.
Oh? What precisely do you mean by "break down"? RAID 5's write performance is
well-understood and it's used successfully in
Richard Elling wrote:
Nick wrote:
Using the RAID card's capability for RAID6 sounds attractive?
Assuming the card works well with Solaris, this sounds like a
reasonable solution.
Careful here. If your workload is unpredictable, RAID 6 (and RAID 5)
for that matter wil
The irony is that the
requirement for this very stability is why we haven't seen the features
in the ZFS code we need in Solaris 10.
Thanks,
Jon
Mike Gerdts wrote:
On Jan 30, 2008 2:27 PM, Jonathan Loran <[EMAIL PROTECTED]> wrote:
Before ranting any more, I'll do the test of disablin
o
using fast SSD for the ZIL when it comes to Solaris 10 U? as a preferred
method.
Jon
Neil Perrin wrote:
>
>
> Roch - PAE wrote:
>> Jonathan Loran writes:
>> > Is it true that Solaris 10 u4 does not have any of the nice ZIL
>> > controls that exist in the various recent Open Solaris flavors? I
>> > would like to move my ZIL t
ZIL off to see how my NFS on ZFS performance is affected before spending
the $'s. Anyone know when will we see this in Solaris 10?
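(For context, the control in question on newer bits is the separate log vdev - a hedged
sketch, pool and device names made up:)
# put the ZIL on a dedicated (ideally SSD/NVRAM) log device, on pool versions that allow it
zpool add tank log c4t0d0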
Thanks,
Jon
worse yet, run windoz in
a VM. Hardly practical. Why is it we always have to be second class
citizens! Power to the (*x) people!
Jon
Joerg Schilling wrote:
Carsten Bormann <[EMAIL PROTECTED]> wrote:
On Dec 29 2007, at 08:33, Jonathan Loran wrote:
We snapshot the file as it exists at the time of
the mv in the old file system until all referring file handles are
closed, then destroy the single file snap.
l
with the semantics. It's not just a path change as in a directory mv.
Jon
duced. Moving
large file stores between zfs file systems would be so handy! From my
own sloppiness, I've suffered dearly from the lack of it.
Jon
Gary Mills wrote:
On Fri, Dec 14, 2007 at 10:55:10PM -0800, Jonathan Loran wrote:
This is the same configuration we use on 4 separate servers (T2000, two
X4100, and a V215). We do use a different iSCSI solution, but we have
the same multi path config setup with scsi_vhci. Dual GigE
of the iSCSI Ethernet interfaces. It certainly appears
> to be doing round-robin. The I/Os are going to the same disk devices,
> of course, but by two different paths. Is this a correct configuration
> for ZFS? I assume it's safe, but I thought I should check.
Richard Elling wrote:
> Jonathan Loran wrote:
...
> Do not assume that a compressed file system will send compressed.
> IIRC, it
> does not.
Let's say, if it were possible to detect the remote compression support,
couldn't we send it compressed? With higher compression
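One workaround in the meantime is to compress the stream in transit by hand - a sketch
with made-up dataset and host names:
# gzip the send stream on the wire and unpack it on the receiving side
zfs snapshot tank/fs@migrate
zfs send tank/fs@migrate | gzip -c | ssh remotehost 'gunzip -c | zfs receive backup/fs'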
Nicolas Williams wrote:
On Thu, Oct 04, 2007 at 10:26:24PM -0700, Jonathan Loran wrote:
I can envision a highly optimized, pipelined system, where writes and
reads pass through checksum, compression, encryption ASICs, that also
locate data properly on disk. ...
I've argued b
rites enough to
make a difference? Possibly not.
Anton
Paul B. Henson wrote:
On Sat, 22 Sep 2007, Jonathan Loran wrote:
My gut tells me that you won't have much trouble mounting 50K file
systems with ZFS. But who knows until you try. My question for you is:
can you lab this out?
Yeah, after this research phase has been comp
roblem of worrying about where a user's
files are when they want to access them :(.
C-SAT2-MV8.cfm)
> for about $100 each
>
>> Good luck,
> Getting there - can anybody clue me into how much CPU/Mem ZFS
> needs? I have an old 1.2GHz with 1GB of mem lying around - would
> it be sufficient?
>
>
> Thanks!
> Kent
be very much appreciated.
Thanks,
Jon