http://en.wikipedia.org/wiki/Time-Limited_Error_Recovery
Is there a way to use normal drives in an array, rather than having to
buy enterprise (RAID-specific) drives?
Does anyone have any success stories regarding a particular model?
Unfortunately, TLER cannot be changed on newer drives from Western Digital.
Sorry I probably didn't make myself exactly clear.
Basically, drives without suitable TLER settings randomly drop out of RAID arrays.
* Error Recovery - This is called various things by various manufacturers
(TLER, ERC, CCTL). In a desktop drive, the goal is to do everything possible to
recover the data
http://www.stringliterals.com/?p=77
This guy talks about it too under "Hard Drives".
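For drives that still honour SCT commands, smartmontools can query and
sometimes set the error recovery timeout. A hedged sketch - the device path
is illustrative, support is entirely model-dependent, and on many drives the
setting does not survive a power cycle:
$ smartctl -l scterc /dev/rdsk/c1t0d0         (show current ERC timeouts)
$ smartctl -l scterc,70,70 /dev/rdsk/c1t0d0   (set read/write recovery to 7.0 seconds)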
On 29/05/2012 6:39 AM, Richard Elling wrote:
On May 28, 2012, at 5:48 AM, Nathan Kroenert wrote:
Hi folks,
Looking to get some larger drives for one of my boxes. It runs
exclusively ZFS and has been using Seagate 2TB units up until now
(which are 512 byte sector).
Anyone offer up
On 29/05/2012 11:10 PM, Jim Klimov wrote:
2012-05-29 16:35, Nathan Kroenert wrote:
Hi John,
Actually, last time I tried the whole AF (4k) thing, its performance
was worse than woeful.
But admittedly, that was a little while ago.
The drives were the Seagate Green Barracudas, IIRC.
requirements seems like a
big downer for *my* configuration, as I have just the one SSD, but I'll
persist and see what I can get out of it.
Thanks for the thoughts thus far!
Cheers,
Nathan.
On 21/11/2012 8:33 AM, Fajar A. Nugraha wrote:
On Wed, Nov 21, 2012 at 12:07 AM, Edward Ned Harvey
(open
o be
able to claim more available space for the same device, and to be lazy
in the CRC generation/checking arena. And to profoundly impact the time
it takes to read or update anything less than 4K. But - then again,
maybe I'm missing something.
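As an aside, you can check what sector size ZFS actually decided on for a
pool by grepping the ashift out of zdb output (ashift 9 = 512-byte sectors,
ashift 12 = 4K); 'tank' is just a placeholder pool name:
$ zdb tank | grep ashift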
I am about to embark on building a home NAS box using OpenSolaris with
ZFS.
Currently I have a chassis that will hold 16 hard drives, although not in
caddies - downtime doesn't bother me if I need to switch a drive; I could
probably even do it with the system running, just a bit of a pain. :)
I am afte
tely 100MB/s (which is about an average PC HDD
reading sequentially), I'd have thought it should be a lot faster than 12x.
Can we really only pull stuff from cache at only a little over one
gigabyte per second if it's dedup data?
Cheers!
Nathan.
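One thing worth checking while the test runs is whether the DDT lookups are
even being served from the ARC - the arcstats kstats carry hit/miss and size
counters (kstat names as I remember them on these builds):
$ kstat -p zfs:0:arcstats:hits zfs:0:arcstats:misses
$ kstat -p zfs:0:arcstats:size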
Do note, that though Frank is correct, you have to be a little careful
around what might happen should you drop your original disk, and only
the large mirror half is left... ;)
On 12/16/11 07:09 PM, Frank Cusack wrote:
You can just do fdisk to create a single large partition. The
attached mi
worth considering something different ;)
Cheers!
Nathan.
On 12/19/11 09:05 AM, Jan-Aage Frydenbø-Bruvoll wrote:
Hi,
On Sun, Dec 18, 2011 at 22:00, Fajar A. Nugraha wrote:
From http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
(or at least Google's cache of it, since it s
be looking at layers below ZFS. If you *can*, then
you start looking further up the stack.
Hope this helps somewhat. Let us know how you go.
Cheers!
Nathan.
On 02/ 1/12 04:52 AM, Mohammed Naser wrote:
Hi list!
I have seen less-than-stellar ZFS performance on a setup of one main
head connecte
Jim Klimov wrote:
>> It is hard enough already to justify to an average wife that...
That made my night. Thanks, Jim. :)
On 03/20/12 10:29 PM, Jim Klimov wrote:
2012-03-18 23:47, Richard Elling wrote:
...
Yes, it is wrong to think that.
Ok, thanks, we won't try that :)
copy out, co
se so-called 'Advanced Format'
drives (which as far as I can tell are in no way actually advanced, and
only benefit the HDD makers, not the end user).
Cheers!
Nathan.
ust replace the current
ones...)
I might just have to bite the bullet and try something with current SW. :).
Nathan.
On 05/29/12 08:54 PM, John Martin wrote:
On 05/28/12 08:48, Nathan Kroenert wrote:
Looking to get some larger drives for one of my boxes. It runs
exclusively ZFS and has been us
gs and a few other
things, but it doesn't seem to change the behaviour.
Again - I'm looking for thoughts here - as I have only really just
started looking into this. Should I happen across anything interesting,
I'll followup this post.
Cheers,
Nathan. :)
I get the chance, I'll give the rpool thing a crack again, but
overall, it seems to me that the behavior I'm observing is not great...
I'm also happy to supply lockstats / dtrace output etc if it'll help.
Thoughts?
Cheers!
Nathan.
ue is identical. (Though I have since determined
that my HP RAID controller is actually *slowing* my reads and writes to
disk! ;)
Cheers!
Nathan.
On 14/02/2011 4:08 AM, gon...@comcast.net wrote:
Hi Nathan,
Maybe it is buried somewhere in your email, but I did not see what
zfs version
On 14/02/2011 4:31 AM, Richard Elling wrote:
On Feb 13, 2011, at 12:56 AM, Nathan Kroenert wrote:
Hi all,
Exec summary: I have a situation where I'm seeing lots of large reads starving
writes from being able to get through to disk.
What is the average service time of each disk? Mul
ng to tune zfs_vdev_max_pending...
Nonetheless, I'm now at a far more balanced point than when I started,
so that's a good thing. :)
Cheers,
Nathan.
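For anyone else playing along, the tunable can be set persistently in
/etc/system, or poked live with mdb - the value 10 here is only an example,
measure before and after:
set zfs:zfs_vdev_max_pending = 10                (in /etc/system, takes effect at boot)
# echo zfs_vdev_max_pending/W0t10 | mdb -kw      (live; 0t means decimal)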
On 15/02/2011 6:44 AM, Richard Elling wrote:
Hi Nathan,
comments below...
On Feb 13, 2011, at 8:28 PM, Nathan Kroenert wrote:
On 14/02/2
pushing 4 disks pretty much flat out on a PCI-X 133 3124-based card.
(Note that there was a PCI and a PCI-X version of the 3124, so watch
out.)
Cheers!
Nathan.
On 02/24/11 02:10 AM, Andrew Gabriel wrote:
Krunal Desai wrote:
On Wed, Feb 23, 2011 at 8:38 AM, Mauricio Tavares
wrote:
I se
rticularly when they are sequential - using eSATA.
Note: All of this is with the 'cheap' view... You can most certainly buy
much better hardware... But bang for buck - I have been happy with the
above.
Cheers!
Nathan.
On 02/26/11 01:58 PM, Brandon High wrote:
On Fri, Feb 25, 2011 at 4:3
Actually, I find that tremendously encouraging. Lots of internal
Oracle folks still subscribed to the list!
Much better than none... ;)
Nathan.
On 02/26/11 03:29 PM, Yaverot wrote:
Sorry all, didn't realize that half of Oracle would auto-reply to a public
mailing list since they'
have done something administratively silly... ;)
Nathan.
On 7/03/2011 12:14 PM, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Yaverot
We're heading into the 3rd hour of the zpool destroy on "others".
T
te you get when you disable the disk cache.
Nathan.
On 8/03/2011 11:53 PM, Edward Ned Harvey wrote:
From: Jim Dunham [mailto:james.dun...@oracle.com]
ZFS only uses system RAM for read caching,
If your email address didn't say oracle, I'd just simply come out and say
you're craz
Hi Karl,
Is there any chance at all that some other system is writing to the
drives in this pool? You say other things are writing to the same JBOD...
Given that the amount flagged as corrupt is so small, I'd imagine not,
but thought I'd ask the question anyways.
Cheers!
Nathan.
mply catastrophically slow.)
Hope this helps at least a little.
Cheers,
Nathan.
On 06/14/11 03:20 PM, Maximilian Sarte wrote:
Hi,
I am posting here in a tad of desperation. FYI, I am running FreeNAS 8.0.
Anyhow, I created a raidz1 (tank1) with 4 x 2TB WD EARS HDDs.
All was doing ok until I dec
eadful xen
experiment :) so I'll be watching this thread with renewed interest to
see who else is doing what...
Nathan.
Bob Friesenhahn wrote:
> On Thu, 17 Jul 2008, Ben Rockwood wrote:
>
>> zfs list is mighty slow on systems with a large number of objects,
>> but ther
It starts with Z, which makes it one of the last to be considered if
it's listed alphabetically?
Nathan.
Rahul wrote:
> hi
> can you give some disadvantages of the ZFS file system??
>
> plzz its urgent...
>
> help me.
>
>
> This mes
AHCI ports.
It might seem like it'll be a lot of hassle getting it working, but in
the ZFS space, it works great pretty much out of the box (plus ethernet
address change if the nvidia driver is still busted... ;)
Cheers!
Nathan.
*Going like stink means going like a hairy goat - like lig
I second that question, and also ask what brand folks like for
performance and compatibility?
Ebay is killing me with vast choice and no detail... ;)
Nathan.
Al Hopper wrote:
> On Wed, Aug 20, 2008 at 12:57 PM, Neal Pollack <[EMAIL PROTECTED]> wrote:
>> Ian Collins wrote:
>
ication...
I generally look to keep directories to a size that allows the utilities
that work on and in them to perform at a reasonable rate... which for the
most part means around 100K files or less...
Perhaps you are using larger hardware than I am for some of this stuff? :)
Nathan.
On 1/10
Interesting.
heh - I was piping to tail -10, so output rate was not an issue.
That being said, there is a large delta in your results and mine... If I
get a chance, I'll look into it...
I suspect it's a cached versus I/O issue...
Nathan.
On 1/10/08 10:02 AM, Bob Friesenhahn wrote
ptions are available in that current zfs / zpool version...
That way, you would never need to do anything to bash/zfs once it was
done the first time... do it once, and as ZFS changes, the prompts
change automatically...
Or - is this old hat, and how we do it already? :)
Nathan.
On 10/10/08 05:0
Not wanting to hijack this thread, but...
I'm a simple man with simple needs. I'd like to be able to manually spin
down my disks whenever I want to...
Anyone come up with a way to do this? ;)
Nathan.
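The closest I've come is plain old Solaris power management: give the disk a
short idle threshold in /etc/power.conf, then tell the system to re-read it.
The device path is illustrative, and whether a given driver honours it is
another matter:
device-thresholds /dev/dsk/c1t0d0 5m      (in /etc/power.conf: spin down after 5 idle minutes)
# pmconfig                                (re-read power.conf)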
Jens Elkner wrote:
> On Mon, Nov 03, 2008 at 02:54:10PM -0800, Yuan Ch
A quick google shows that it's not so much about the mirror, but the BE...
http://opensolaris.org/os/community/zfs/boot/zfsbootFAQ/
Might help?
Nathan.
On 7/11/08 02:39 PM, Krzys wrote:
> What am I doing wrong? I have sparc V210 and I am having difficulty with boot
> -L, I wa
ach
'surprise!'.
:)
I scrub once every month or so, depending on the system.
So, in direct answer to your question, No - You don't *need* to scrub.
But - It's better if you do. ;)
My 2c.
Nathan.
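If you'd rather not have to remember it, a root crontab entry does the
monthly scrub for you - pool name and schedule are only examples:
0 3 1 * * /usr/sbin/zpool scrub tank      (03:00 on the 1st of each month)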
On 10/11/08 11:38 AM, Douglas Walker wrote:
> Hi,
>
> I'm
I have a ZFS pool that has been corrupted. The pool contains a single device
which was actually a file on UFS. The machine was accidentally halted and now
the pool is corrupt. There are (of course) no backups and I've been asked to
recover the pool. The system panics when trying to do anything w
I have moved the zpool image file to an OpenSolaris machine running 101b.
root@opensolaris:~# uname -a
SunOS opensolaris 5.11 snv_101b i86pc i386 i86pc Solaris
Here I am able to attempt an import of the pool and at least the OS does not
panic.
root@opensolaris:~# zpool import -d /mnt
pool: zones
I don't know if this is relevant or merely a coincidence but the zdb command
fails an assertion in the same txg_wait_synced function.
root@opensolaris:~# zdb -p /mnt -e zones
Assertion failed: tx->tx_threads == 2, file ../../../uts/common/fs/zfs/txg.c,
line 423, function txg_wait_synced
Abort (core dumped)
Thanks for the reply. I tried the following:
$ zpool import -o failmode=continue -d /mnt -f zones
But the situation did not improve. It still hangs on the import.
I've had some success.
I started with the ZFS on-disk format PDF.
http://opensolaris.org/os/community/zfs/docs/ondiskformat0822.pdf
The uberblocks all have magic value 0x00bab10c. Used od -x to find that value
in the vdev.
root@opensolaris:~# od -A x -x /mnt/zpool.zones | grep "b10c 00ba"
0200
So - will it be arriving in a patch? :)
Nathan.
Richard Elling wrote:
> Marion Hakanson wrote:
>> richard.ell...@sun.com said:
>>
>>> L2ARC arrived in NV at the same time as ZFS boot, b79, November 2007. It was
>>> not back-ported to Solaris 10u6.
>>
enable stuff like gzip-9 compression, which
might, on the slower Atom style chips, get in the way.
Looking forward to any reports.
Nathan.
On 13/01/09 01:47 PM, JZ wrote:
> ok, was I too harsh on the list?
> sorry folks, as I said, I have the biggest ego.
>
> no one can hurt that by trying
quick
explanation...
It would be interesting to see if you see the same issues using a
Solaris or other OS client.
Hope this helps somewhat. Let us know how it goes.
Nathan.
fredrick phol wrote:
> I'm currently experiencing exactly the same problem and it's been driving me
>
Hey, Tom -
Correct me if I'm wrong here, but it seems you are not allowing ZFS any
sort of redundancy to manage.
I'm not sure how you can class it a ZFS fail when the Disk subsystem has
failed...
Or - did I miss something? :)
Nathan.
Tom Bird wrote:
> Morning,
>
> F
An interesting interpretation of using hot spares.
Could it be that the hot-spare code only fires if the disk goes down
whilst the pool is active?
hm.
Nathan.
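Worst case, you can press the spare into service by hand with zpool
replace - device names below are made up for illustration:
# zpool status tank                  (identify the faulted disk and the spare)
# zpool replace tank c1t2d0 c1t5d0   (swap the faulted disk for the spare)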
Scot Ballard wrote:
> I have configured a test system with a mirrored rpool and one hot spare.
> I powered the systems off,
Are you able to qualify that a little?
I'm using a realtek interface with OpenSolaris and am yet to experience
any issues.
Nathan.
Brandon High wrote:
> On Wed, Jan 21, 2009 at 5:40 PM, Bob Friesenhahn
> wrote:
>> Several people reported this same problem. They changed
Interesting. I'll have a poke...
Thanks!
Nathan.
Brandon High wrote:
> On Thu, Jan 22, 2009 at 1:29 PM, Nathan Kroenert
> wrote:
>> Are you able to qualify that a little?
>>
>> I'm using a realtek interface with OpenSolaris and am yet to experience an
command to work, but it would have its merits...
Cheers!
Nathan.
Jacob Ritorto wrote:
> Hi,
> I just said zfs destroy pool/fs, but meant to say zfs destroy
> pool/junk. Is 'fs' really gone?
>
> thx
> jake
> _
re keen to test the *actual* disk performance, you should just
use the underlying disk device, like /dev/rdsk/c0t0d0s0.
Beware, however, that any writes to these devices will indeed result in
the loss of the data on those devices, zpools or otherwise.
Cheers.
Nathan.
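A minimal read-only sketch - reads won't hurt anything, but triple-check the
device name before you ever point a write at it:
# dd if=/dev/rdsk/c0t0d0s0 of=/dev/null bs=1024k count=1024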
Richard Elling wrote:
> Ro
5461/gcfhw?a=view
--
//////
// Nathan Kroenert       nathan.kroen...@sun.com //
// Systems Engineer      Phone: +61 3 9869-6255  //
// Sun Microsystems      Fax:   +61 3 9869-6288  //
// Level 7, 4
akes?
--
//
// Nathan Kroenert nathan.kroen...@sun.com //
// Senior Systems Engineer Phone: +61 3
device...
Seems a little pricey for what it is though.
It's going onto my list of what I'd buy if I had the money... ;)
Nathan.
On 01/30/09 12:10, Janåke Rönnblom wrote:
> ACARD have launched a new RAM disk which can take up to 64 GB of ECC RAM
> while still looking like a standar
You could be the first...
Man up! ;)
Nathan.
Will Murnane wrote:
> On Thu, Jan 29, 2009 at 21:11, Nathan Kroenert
> wrote:
>> Seems a little pricey for what it is though.
> For what it's worth, there's also a 9010B model that has only one sata
> port and room for s
and sorting out its own
ZIL and L2ARC would be interesting, though, given the propensity for
SSDs to be either fast-read or fast-write at the moment, you may well
require some whacky knobs to get it to do what you actually want it to...
hm.
Nathan.
Bill Sommerfeld wrote:
On Wed,
g up all your memory, and your
physical backing storage is taking a while to catch up?
Nathan.
Blake wrote:
My dump device is already on a different controller - the motherboards
built-in nVidia SATA controller.
The raidz2 vdev is the one I'm having trouble with (copying the same
definitely time to bust out some mdb -K or boot -k and see what it's
moaning about.
I did not see the screenshot earlier... sorry about that.
Nathan.
Blake wrote:
I start the cp, and then, with prstat -a, watch the cpu load for the
cp process climb to 25% on a 4-core machine.
Load, mea
LI-DS4
Cheers!
Nathan.
On 13/03/09 09:21 AM, Dave wrote:
Tim wrote:
On Thu, Mar 12, 2009 at 2:22 PM, Blake <mailto:blake.ir...@gmail.com>> wrote:
I've managed to get the data transfer to work by rearranging my disks
so that all of them sit on the integrated SATA contr
:04:31.2783 ereport.fs.zfs.checksum
Score one more for ZFS! This box has a measly 300GB mirrored, and I have
already seen dud data. (heh... It's also got non-ECC memory... ;)
Cheers!
Nathan.
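For anyone wanting to see those ereports on their own box, fmdump lists them
(-e for error events, -V for the full detail of each):
# fmdump -e
# fmdump -eV | grep -i checksum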
Dennis Clarke wrote:
On Tue, 24 Mar 2009, Dennis Clarke wrote:
You would think so eh?
But a tr
Regarding the SATA card and the mainboard slots, make sure that
whatever you get is compatible with the OS. In my case I chose
OpenSolaris which lacks support for Promise SATA cards. As a result,
my choices were very limited since I had chosen a Chenbro ES34069 case
and Intel Little Falls 2 mainboa
This is probably bug #6462803. The work-around goes something like this:
$ pfexec bash
# beadm mount opensolaris /mnt
# beadm unmount opensolaris
# svcadm clear svc:/system/filesystem/zfs/auto-snapshot:frequent
# svcadm clear svc:/system/filesystem/zfs/auto-snapshot:hourly
# svcadm clear svc:/system/filesystem/zfs/auto-snapshot:daily
Yes, please write more about this. The photos are terrific and I
appreciate the many useful observations you've made. For my home NAS I
chose the Chenbro ES34069 and the biggest problem was finding a
SATA/PCI card that would work with OpenSolaris and fit in the case
(technically impossible without
I have not carried out any research into this area, but when I was
building my home server I wanted to use a Promise SATA-PCI card, but
alas (Open)Solaris has no support at all for the Promise chipsets.
Instead I used a rather old card based on the sil3124 chipset.
On Mon, Aug 3, 2009 at 9:35
h, if all disks are rotated, we
end up with a whole bunch of disks that are evenly worn out again, which
is just what we are really trying to avoid! ;)
Nathan.
Wee Yeh Tan wrote:
On 1/30/07, David Magda <[EMAIL PROTECTED]> wrote:
What about a rotating spare?
When setting up a pool a lot
Urk!
Where is this documented? And - is it something you can do nothing
about, or are we ultimately trying to address it somewhere / somehow?
Thanks!!
Nathan.
Bill Moore wrote:
On Wed, Jan 31, 2007 at 05:01:19AM -0800, Tom Buskey wrote:
As a followup, the system I'm trying to use th
I am trying to understand whether ZFS checksums apply at a file or a block level.
We know that ZFS provides end-to-end checksum integrity, and I assumed that
when I write a file to a ZFS filesystem, the checksum was calculated at the file
level, as opposed to, say, the block level. However, I have notic
Thank you. So that means that even if I use something that writes raw I/O to a
ZFS emulated volume, I still get the checksum protection, and hence protection
against data corruption.
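Right - checksum is a per-dataset property and applies to every block ZFS
writes, zvols included. You can see it (and change it) per volume; the names
here are illustrative:
# zfs create -V 10g tank/vol01
# zfs get checksum tank/vol01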
that provided dumb dumb protection
would be very cool. I was saved a number of times by the hackery above...
cheers!
Nathan.
Robert Milkowski wrote:
Hello Jeremy,
Monday, February 19, 2007, 1:58:18 PM, you wrote:
Something similar was proposed here before and IIRC someone even has a
worki
A salvage / undelete would have been gold.
Nathan.
James Dickens wrote:
Yes - Snapshots are great, but how often do you run a snapshot? Every 60
seconds? That's going to get real ugly if you have a filesystem per
user...
I'm sure every 15 minutes is sufficient, if the worker doesn
Simple test - mkfile 8gb now and see where the data goes... :)
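And while the mkfile runs, per-vdev activity will show straight away whether
all the disks are participating:
# zpool iostat -v 1      (per-vdev I/O statistics, refreshed every second)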
Victor Latushkin wrote:
Robert Milkowski wrote:
Hello Leon,
Thursday, May 10, 2007, 10:43:27 AM, you wrote:
LM> Hello,
LM> I've got some weird problem: ZFS does not seem to be utilizing
LM> all disks in my pool properly. For some
Which has little benefit if the HBA or the array internals change
the meaning of the message...
That's the whole point of ZFS's checksumming - It's end to end...
Nathan.
Torrey McMahon wrote:
Toby Thain wrote:
On 22-May-07, at 11:01 AM, Louwtjie Burger wrot
= PROBLEM
To create a disk storage system that will act as an archive point for
user data (Non-recoverable data), and also act as a back end storage
unit for virtual machines at a block level.
= BUDGET
Currently I have about 25-30k to start the project, more could be
allocated in the ne
s
working anyways...
My 2c...
Nathan.
Blake wrote:
> I have re-flashed the BIOS.
>
> Blake
>
> On 8/7/07, *Ian Collins* <[EMAIL PROTECTED] <mailto:[EMAIL PROTECTED]>> wrote:
>
> Blake wrote:
> > Hi.
> >
> > I'm running snv 65
take a look at this box
and see if it's a new bug or just me being a bonehead and not
understanding what I'm seeing, please respond to me directly, and I can
provide access. (I'll make an effort not to reboot the box just in case
it's only this boot that sees the problems.)
options, instance #0 (driver name: options)
agpgart, instance #0 (driver name: agpgart)
xsvc, instance #0 (driver name: xsvc)
used-resources
cpus
cpu, instance #0
cpu, instance #1
Nathan.
Ben Middleton wrote:
> I've just purchased an Asus P5K WS, which
And if there is a rubbish file somewhere, I *think* you should be able
to cat /dev/null > thatfile,
which would free up its blocks.
Assuming you don't have snapshots... ;)
Nathan.
Anton B. Rang wrote:
> At least three alternatives --
>
> 1. If you don't have t
I think I can offer a straightforward explanation to the following:
> I like the error-correction quality of ZFS; however, the ZFS
> Administration Guide states: "A non-redundant pool configuration is
> not recommended for production environments even if the single storage
> object is presented from
this by accident and panic
a big box for what I see as no good reason. (though I'm happy to be
educated... ;)
Oh - and also - Kudos to the ZFS team and the other involved in the
whole iSCSI thing. So easy and funky. Great work guys...
Cheers!
Nathan.
ecause I tried to import a dud pool...
I'm ok(ish) with the panic on a failed write to a non-redundant storage.
I expect it by now...
Cheers!
Nathan.
Victor Engle wrote:
> Wouldn't this be the known feature where a write error to zfs forces a panic?
>
> Vic
>
>
Erik -
Thanks for that, but I know the pool is corrupted - that was kind of the
point of the exercise.
The bug (at least to me) is ZFS panicking Solaris just trying to import
the dud pool.
But, maybe I'm missing your point?
Nathan.
eric kustarz wrote:
>>
>> Client A
step. :)
Cheers.
Nathan.
Eric Schrock wrote:
> On Fri, Oct 05, 2007 at 08:20:13AM +1000, Nathan Kroenert wrote:
>> Erik -
>>
>> Thanks for that, but I know the pool is corrupted - That was kind of the
>> point of the exercise.
>>
>> The bug (at least to me)
Hey all -
Time for my silly question of the day, and before I bust out vi and
dtrace...
Is there a simple, existing way I can observe the read / write / IOPS on
a per-zvol basis?
If not, is there interest in having one?
Cheers!
Nathan.
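It'd probably end up being dtrace anyway - something like the following
rough sketch using the io provider, aggregating by device name and direction
every five seconds. Whether zvols show up under the io provider on your
build is something to verify; this is a starting point, not a polished tool:
# dtrace -n 'io:::start
{
    /* count each I/O by device name and direction */
    @[args[1]->dev_statname,
      args[0]->b_flags & B_READ ? "read" : "write"] = count();
}
tick-5sec
{
    printa("%-24s %-6s %@d\n", @);
    trunc(@);
}'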
You have not mentioned whether you have swapped the 3114-based HBA itself...?
Have you tried a different HBA? :)
Nathan.
Ed Saipetch wrote:
> Hello,
>
> I'm experiencing major checksum errors when using a syba silicon image 3114
> based pci sata controller w/ nonraid firmware
occasion...
Maybe it's not just me... Unfortunately, I'm still running old nv and
xen bits, so I can't speak to the 'current' situation...
Cheers.
Nathan.
Martin wrote:
> Hello
>
> I've got Solaris Express Community Edition build 75 (75a) installed on an
and failover testing with ZFS and VCS.
Furthermore, if anyone has implemented ZFS on SRDF, I would also be
interested in hearing about those implementation experiences.
Any and all input would be most appreciated.
Kind Regards,
Nathan Dietsch
se guys and the way they treat the UFS buffers versus the
ZFS buffers?
Cheers!
Nathan.
ourse, both of these would require non-sparse file creation for the
DB etc, but would it be plausible?
For very read intensive and position sensitive applications, I guess
this sort of capability might make a difference?
Just some stabs in the dark...
Cheers!
Nathan.
Louwtjie Burger wrote:
>
I was interested in that one till I read:
One 240-pin DDR2 SDRAM Dual Inline Memory Module (DIMM) sockets
Support for DDR2 667 MHz, DDR2 533 MHz and DDR2 400 MHz DIMMs (DDR 667
MHz validated to run at 533 MHz only)
Support for up to 1 GB of system memory
Boo!!!
:)
Nathan.
Vincent Fox wrote
Run format -e,
then from there re-label the disk using an SMI label instead of EFI.
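From memory, the interaction looks roughly like this (prompts may differ
slightly between builds):
# format -e
(select the disk from the menu)
format> label
[0] SMI Label
[1] EFI Label
Specify Label type[1]: 0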
Cheers
Al Slater wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> Hi,
>
> What is the quickest way of clearing the label information on a disk
> that has been previously used in a zpool?
>
> regards
>
> - --
> Al Sl
I see a business opportunity for someone...
Backups for the masses... for Unix / VMS and the other OSes out there.
any takers? :)
Nathan.
Jonathan Loran wrote:
>
>
> eric kustarz wrote:
>> On Jan 14, 2008, at 11:08 AM, Tim Cook wrote:
>>
>>
>>> www.mozy.c
Any chance the disks are being powered down, and you are waiting for
them to power back up?
Nathan. :)
Neal Pollack wrote:
> I'm running Nevada build 81 on x86 on an Ultra 40.
> # uname -a
> SunOS zbit 5.11 snv_81 i86pc i386 i86pc
> Memory size: 8191 Megabytes
>
> I sta
rites.
(A single thread of an N2 is only so fast... Just think of what you
could do with 64 of them ;)
I'll be interested to see what the others have to say. :)
Hope this helps.
Nathan.
Michael Stalnaker wrote:
> We’re looking at building out several ZFS servers, and are considering
so looking for any other ideas on what
might be hurting me.
I also have set
zfs:zfs_nocacheflush = 1
in /etc/system
The Oracle Logs are on a separate Zpool and I'm not seeing the issue on
those filesystems.
The lockstats I have run are not yet all that interesting. If anyo
uming I understand correctly.
Hopefully someone else on the list will be able to confirm.
Cheers!
Nathan.
Richard Elling wrote:
> Anton B. Rang wrote:
>>> Create a pool [ ... ]
>>> Write a 100GB file to the filesystem [ ... ]
>>> Run I/O against that file, doing
What about new blocks written to an existing file?
Perhaps we could make that clearer in the manpage too...
hm.
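For reference, the property is per-dataset and only governs how newly
written data is laid down - a quick illustrative sequence (dataset name
made up):
# zfs set recordsize=8k tank/db
# zfs get recordsize tank/db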
Mattias Pantzare wrote:
>> >
>> > If you created them after, then no worries, but if I understand
>> > correctly, if the *file* was created with 128K recordsize, then it'll
>> > k
w existing
files are updated as well...
hm.
Cheers!
Nathan.
Richard Elling wrote:
> Nathan Kroenert wrote:
>> And something I was told only recently - It makes a difference if you
>> created the file *before* you set the recordsize property.
>
> Actually, it has always been
are no newer patches for it, just in case it's one for which
there was a known problem. (which was worked around in the driver)
I *think* there was an issue with at least one or two...
Cheers!
Nathan.
Sandro wrote:
> hi folks
>
> I've been running my fileserver at home with
And would drive storage requirements through the roof!!
I like it!
;)
Nathan.
Jonathan Loran wrote:
>
> David Magda wrote:
>> On Feb 24, 2008, at 01:49, Jonathan Loran wrote:
>>
>>> In some circles, CDP is big business. It would be a great ZFS offering.
>>