On 2-Mar-10, at 4:31 PM, valrh...@gmail.com wrote:
Freddie: I think you understand my intent correctly.
This is not about a perfect backup system. The point is that I have
hundreds of DVDs that I don't particularly want to sort out, but
they are pretty useless from a management standpoint
On 6-Jun-10, at 7:11 AM, Thomas Maier-Komor wrote:
On 06.06.2010 08:06, devsk wrote:
I had an unclean shutdown because of a hang and suddenly my pool is
degraded (I realized something is wrong when python dumped core a
couple of times).
This is before I ran scrub:
pool: mypool
state: DEGRADED
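As an aside, a minimal sketch of the usual first steps for a pool reported as degraded after an unclean shutdown; the pool name mypool comes from the output above, the rest is an assumption about a typical workflow:

  zpool status -v mypool   # inspect vdev state and per-device error counters
  zpool scrub mypool       # re-read and verify every allocated block; repairs what redundancy allows
  zpool status -v mypool   # after the scrub: any remaining permanent errors are listed per file
  zpool clear mypool       # once healthy, reset the logged error counters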
On 10-Jul-10, at 4:57 PM, Roy Sigurd Karlsbakk wrote:
- Original Message -
Depends on the failure mode. I've spent hundreds (thousands?) of hours
attempting to recover data from backup tape because of bad hardware,
firmware, and file systems. The major difference is that ZFS cares th
On 26-Sep-09, at 2:55 PM, Frank Middleton wrote:
On 09/26/09 12:11 PM, Toby Thain wrote:
Yes, but unless they fixed it recently (>=RHFC11), Linux doesn't
actually nuke /tmp, which seems to be mapped to disk. One side
effect is that (like MSWindows) AFAIK there isn't a native tmp
On 30-Sep-09, at 10:48 AM, Brian Hubbleday wrote:
I had a 50mb zfs volume that was an iscsi target. This was mounted
into a Windows system (ntfs) and shared on the network. I used
notepad.exe on a remote system to add/remove a few bytes at the end
of a 25mb file.
I'm astonished that's ev
On 5-Oct-09, at 3:32 PM, Miles Nordin wrote:
"bm" == Brandon Mercer writes:
I'm now starting to feel that I understand this issue,
and I didn't for quite a while. And that I understand the
risks better, and have a clearer idea of what the possible
fixes are. And I didn't before.
haha, y
On 18-Oct-09, at 6:41 AM, Adam Mellor wrote:
I too have seen this problem.
I had done a zfs send from my main pool "terra" (6 disk raidz on
seagate 1TB drives) to a mirror pair of WD Green 1TB drives.
ZFS send was successful; however, I noticed the pool was degraded
after a while (~1 week)
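For reference, a hedged sketch of the kind of replication described here; the pool name terra is from the post, while the snapshot name and destination pool name are hypothetical:

  zfs snapshot -r terra@backup-2009-10-18                         # consistent point-in-time source
  zfs send -R terra@backup-2009-10-18 | zfs recv -Fd greenpool    # stream into the mirrored pool
  zpool scrub greenpool                                           # verify the copy actually reads back cleanly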
On 27-Oct-09, at 1:43 PM, Dale Ghent wrote:
I have a single-fs, mirrored pool on my hands which recently went
through a bout of corruption. I've managed to clean up a good bit of it
How did this occur? Isn't a mirrored pool supposed to self heal?
--Toby
but it appears that I'm left with
On 2-Nov-09, at 3:16 PM, Nicolas Williams wrote:
On Mon, Nov 02, 2009 at 11:01:34AM -0800, Jeremy Kitchen wrote:
forgive my ignorance, but what's the advantage of this new dedup over
the existing compression option? Wouldn't full-filesystem
compression
naturally de-dupe?
...
There are man
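To make the contrast concrete, a minimal sketch using the standard dataset properties (the dataset name tank/data is hypothetical): compression works within each block as it is written, while dedup matches the checksums of whole blocks across the pool, so deduplication can reclaim space that compression alone never sees (e.g. many identical copies of an already-compressed file):

  zfs set compression=on tank/data   # per-block compression (lzjb by default)
  zfs set dedup=on tank/data         # identical blocks stored once, reference-counted
  zpool get dedupratio tank          # pool-wide ratio achieved by dedup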
On 8-Nov-09, at 12:20 PM, Joe Auty wrote:
Tim Cook wrote:
On Sun, Nov 8, 2009 at 2:03 AM, besson3c wrote:
...
Why not just convert the VM's to run in virtualbox and run Solaris
directly on the hardware?
That's another possibility, but it depends on how Virtualbox stacks
up against V
On 25-Nov-09, at 4:31 PM, Peter Jeremy wrote:
On 2009-Nov-24 14:07:06 -0600, Mike Gerdts wrote:
... fill a 128
KB buffer with random data then do bitwise rotations for each
successive use of the buffer. Unless my math is wrong, it should
allow 128 KB of random data to be used to write 128 GB of data
On 26-Nov-09, at 8:57 PM, Richard Elling wrote:
On Nov 26, 2009, at 1:20 PM, Toby Thain wrote:
On 25-Nov-09, at 4:31 PM, Peter Jeremy wrote:
On 2009-Nov-24 14:07:06 -0600, Mike Gerdts
wrote:
... fill a 128
KB buffer with random data then do bitwise rotations for each
successive use of
On 5-Dec-09, at 8:32 AM, nxyyt wrote:
Thank you very much for your quick response.
My question is: I want to figure out whether there is data loss
after a power outage. I have replicas on other machines so I can
recover from the data loss. But I need a way to know whether there
is data lo
On 5-Dec-09, at 9:32 PM, nxyyt wrote:
The "rename trick" may not work here. Even if I renamed the file
successfully, the data of the file may still reside in memory
instead of being flushed back to the disk. If I made any mistake here,
please correct me. Thank you!
I'll try to find out w
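A hedged sketch of one way to answer the "was anything lost?" question with ZFS's own tools (pool name hypothetical): a scrub re-reads every allocated block against its checksum, and files with unrecoverable errors are then enumerated. Note this only covers data that reached the disk; writes still sitting in memory at the moment of the outage never become part of the pool at all:

  zpool scrub mypool
  zpool status -v mypool   # after completion: permanent errors, if any, are listed with file names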
On 12-Dec-09, at 1:32 PM, Mattias Pantzare wrote:
On Sat, Dec 12, 2009 at 18:08, Richard Elling
wrote:
On Dec 12, 2009, at 12:53 AM, dick hoogendijk wrote:
On Sat, 2009-12-12 at 00:22 +, Moritz Willers wrote:
The host identity had - of course - changed with the new motherboard and
On 16-Dec-09, at 10:47 AM, Bill Sprouse wrote:
Hi Brent,
I'm not sure why Dovecot was chosen. It was most likely a
recommendation by a fellow university. I agree that it is lacking in
efficiency in a lot of areas. I don't think I would be
successful in suggesting a change at this point
On 19-Dec-09, at 4:35 AM, Colin Raven wrote:
...
There is no original, there is no copy. There is one block with
reference counters.
Many blocks, potentially shared, make up a de-dup'd file. Not sure
why you write "one" here.
- Fred can rm his "file" (because clearly it isn't a file,
On 19-Dec-09, at 11:34 AM, Colin Raven wrote:
...
Wait...whoah, hold on.
If snapshots reside within the confines of the pool, are you saying
that dedup will also count what's contained inside the snapshots?
Snapshots themselves are only references, so yes.
I'm not sure why, but that thoug
On 19-Dec-09, at 2:01 PM, Colin Raven wrote:
On Sat, Dec 19, 2009 at 19:08, Toby Thain
wrote:
On 19-Dec-09, at 11:34 AM, Colin Raven wrote
Then again (not sure how gurus feel on this point) but I have this
probably naive and foolish belief that snapshots (mostly) oughtta
reside on
On 19-Dec-09, at 11:34 AM, Colin Raven wrote:
...
When we are children, we are told that sharing is good. In the
case of references, sharing is usually good, but if there is a huge
amount of sharing, then it can take longer to delete a set of files
since the mutual references create a "
On 22-Dec-09, at 12:42 PM, Roman Naumenko wrote:
On Tue, 22 Dec 2009, Ross Walker wrote:
Applying classic RAID terms to zfs is just plain
wrong and misleading since zfs does not directly implement these
classic RAID approaches
even though it re-uses some of the algorithms for data recovery.
On 22-Dec-09, at 3:33 PM, James Risner wrote:
...
Joerg Moellenkamp:
I do "consider RAID5 as 'Stripeset with an interleaved
Parity'", so I don't agree with the strong objection in this thread
by many about the use of RAID5 to describe what raidz does. I
don't think many particularly
On 29-Dec-09, at 11:53 PM, Ross Walker wrote:
On Dec 29, 2009, at 12:36 PM, Bob Friesenhahn
wrote:
...
However, zfs does not implement "RAID 1" either. This is easily
demonstrated since you can unplug one side of the mirror and the
writes to the zfs mirror will still succeed, catching
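To see the behaviour being described, a minimal sketch with hypothetical device names (for a scratch pool only): with one side of the mirror offline, the pool is DEGRADED but still accepts writes, and the returning side is resilvered from the surviving one:

  zpool create tank mirror c0t0d0 c0t1d0
  zpool offline tank c0t1d0     # simulate losing one side
  cp /some/file /tank/          # writes still succeed on the degraded mirror
  zpool online tank c0t1d0      # ZFS resilvers only the blocks written while it was away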
On 11-Jan-10, at 1:12 PM, Bob Friesenhahn wrote:
On Mon, 11 Jan 2010, Anil wrote:
What is the recommended way to make use of a Hardware RAID
controller/HBA along with ZFS?
...
Many people will recommend against using RAID5 in "hardware" since
then zfs is not as capable of repairing err
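A hedged sketch of the configuration that advice points toward, with hypothetical device names: export the disks individually from the controller (JBOD / pass-through) and let ZFS supply the redundancy, rather than handing it a single hardware RAID5 LUN:

  zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0
  # or mirrors, if IOPS matter more than capacity:
  zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0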
On 11-Jan-10, at 5:59 PM, Daniel Carosone wrote:
With all the recent discussion of SSD's that lack suitable
power-failure cache protection, surely there's an opportunity for a
separate modular solution?
I know there used to be (years and years ago) small internal UPS's
that fit in a few 5.25"
On 12-Jan-10, at 5:53 AM, Brad wrote:
Has anyone worked with a x4500/x4540 and know if the internal raid
controllers have a BBU? I'm concerned that we won't be able to turn
off the write-cache on the internal hds and SSDs to prevent data
corruption in case of a power failure.
A power fai
On 12-Jan-10, at 10:40 PM, Brad wrote:
"(Caching isn't the problem; ordering is.)"
Weird I was reading about a problem where using SSDs (intel x25-e)
if the power goes out and the data in cache is not flushed, you
would have loss of data.
Could you elaborate on "ordering"?
ZFS integri
On 16-Jan-10, at 7:30 AM, Edward Ned Harvey wrote:
I am considering building a modest sized storage system with zfs.
Some
of the data on this is quite valuable, some small subset to be backed
up "forever", and I am evaluating back-up options with that in mind.
You don't need to store the "z
On 16-Jan-10, at 6:51 PM, Mike Gerdts wrote:
On Sat, Jan 16, 2010 at 5:31 PM, Toby Thain
wrote:
On 16-Jan-10, at 7:30 AM, Edward Ned Harvey wrote:
I am considering building a modest sized storage system with
zfs. Some
of the data on this is quite valuable, some small subset to be
On 24-Jan-10, at 11:26 AM, R.G. Keen wrote:
...
I’ll just blather a bit. The most durable data backup medium humans
have come up with was invented about 4000-6000 years ago. It’s
fired cuneiform tablets as used in the Middle East. Perhaps one
could include stone carvings of Egyptian and/or
On 25-Jan-10, at 2:59 PM, Freddie Cash wrote:
We have the WDC WD15EADS-00P8B0 1.5 TB Caviar Green drives.
Unfortunately, these drives have the "fixed" firmware and the 8
second idle timeout cannot be changed.
That sounds like a laptop spec, not a server spec! How silly. Maybe
you can set
On 2-Feb-10, at 1:54 PM, Orvar Korvar wrote:
100% uptime for 20 years?
So what makes OpenVMS so much more stable than Unix? What is the
difference?
The short answer is that uptimes like that are VMS *cluster* uptimes.
Individual hosts don't necessarily have that uptime, but the cluster
On 2-Feb-10, at 10:11 PM, Marc Nicholas wrote:
On Tue, Feb 2, 2010 at 9:52 PM, Toby Thain
wrote:
On 2-Feb-10, at 1:54 PM, Orvar Korvar wrote:
100% uptime for 20 years?
So what makes OpenVMS so much more stable than Unix? What is the
difference?
The short answer is that uptimes
On 5-Feb-10, at 11:35 AM, J wrote:
Hi all,
I'm building a whole new server system for my employer, and I
really want to use OpenSolaris as the OS for the new file server.
One thing is keeping me back, though: is it possible to recover a
ZFS Raid Array after the OS crashes? I've spent h
On 9-Feb-10, at 2:02 PM, Frank Cusack wrote:
On 2/9/10 12:03 PM +1100 Daniel Carosone wrote:
Snorcle wants to sell hardware.
LOL ... snorcle
But apparently they don't. Have you seen the new website? Seems
like a
blatant attempt to kill the hardware business to me.
That's very sad.
On 19-Feb-10, at 5:40 PM, Eugen Leitl wrote:
On Fri, Feb 19, 2010 at 11:17:29PM +0100, Felix Buenemann wrote:
I found the Hyperdrive 5/5M, which is a half-height drive bay sata
ramdisk with battery backup and auto-backup to compact flash at power
failure.
Promises 65,000 IOPS and thus should
On 24-Feb-10, at 3:38 PM, Tomas Ögren wrote:
On 24 February, 2010 - Bob Friesenhahn sent me these 1,0K bytes:
On Wed, 24 Feb 2010, Steve wrote:
The overhead I was thinking of was more in the pointer structures...
(bearing in mind this is a 128 bit file system), I would guess that
memory req
On 15/10/11 2:43 PM, Richard Elling wrote:
On Oct 15, 2011, at 6:14 AM, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Tim Cook
In my example - probably not a completely clustered FS.
A clustered ZFS pool with datas
On 15/01/12 10:38 AM, Edward Ned Harvey wrote:
...
Linux is going with btrfs. MS has their own thing. Oracle continues with
ZFS closed source. Apple needs a filesystem that doesn't suck, but they're
not showing inclinations toward ZFS or anything else that I know of.
Rumours have long circu
On 01/08/12 3:34 PM, opensolarisisdeadlongliveopensolaris wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jim Klimov
Well, there is at least a couple of failure scenarios where
copies>1 are good:
1) A single-disk pool, as in a laptop.
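For that first scenario, a minimal sketch using the standard copies property (dataset name hypothetical); it guards against bad sectors on the single disk, not against losing the disk outright, and only affects blocks written after it is set:

  zfs set copies=2 rpool/export/home   # each block stored twice, placed apart on the disk where possible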
On 11/10/12 5:47 PM, andy thomas wrote:
...
This doesn't sound like a very good idea to me, as surely disk seeks for
swap and for ZFS file I/O are bound to clash, aren't they?
As Phil implied, if your system is swapping, you already have bigger
problems.
--Toby
Andy
On 27/10/12 11:56 AM, Ray Arachelian wrote:
On 10/26/2012 04:29 AM, Karl Wagner wrote:
Does it not store a separate checksum for a parity block? If so, it
should not even need to recalculate the parity: assuming checksums
match for all data and parity blocks, the data is good.
...
Parity is
On 16/02/13 3:51 PM, Sašo Kiselkov wrote:
On 02/16/2013 06:44 PM, Tim Cook wrote:
We've got Oracle employees on the mailing list, that while helpful, in no
way have the authority to speak for company policy. They've made that
clear on numerous occasions. And that doesn't change the fact that w
On 17-Aug-10, at 1:05 PM, Andrej Podzimek wrote:
I did not say there is something wrong with published reports. I
often read
them. (Who doesn't?) However, there are no trustworthy reports on
this topic
yet, since Btrfs is unfinished. Let's see some examples:
(1) http://www.phoronix.com/sc
On 21-Aug-10, at 3:06 PM, Ross Walker wrote:
On Aug 21, 2010, at 2:14 PM, Bill Sommerfeld wrote:
On 08/21/10 10:14, Ross Walker wrote:
...
Would I be better off forgoing resiliency for simplicity, putting
all my faith into the Equallogic to handle data resiliency?
IMHO, no; the resultin
On 7-Oct-10, at 1:22 AM, Stephan Budach wrote:
Hi Edward,
these are interesting points. I have considered a couple of them,
when I started playing around with ZFS.
I am not sure whether I disagree with all of your points, but I
conducted a couple of tests, where I configured my raids as
On 14-Oct-10, at 3:27 AM, Stephan Budach wrote:
I'd like to see those docs as well.
As all HW raids are driven by software, of course - and software can
be buggy.
It's not that the software 'can be buggy' - that's not the point here.
The point being made is that conventional RAID just d
On 14-Oct-10, at 11:48 AM, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Toby Thain
I don't want to heat up the discussion about ZFS managed discs vs.
HW raids, but if RAID5/6 would be that bad, no one woul
On 27/10/10 3:14 PM, Harry Putnam wrote:
> It seems my hardware is getting bad, and I can't keep the os running
> for more than a few minutes until the machine shuts down.
>
> It will run 15 or 20 minutes and then shut down
> I haven't found the exact reason for it.
>
One thing to try is a thorou
supply or other
> components.
>
> Your CPU temperature is 56C, which is not out-of-line for most modern
> CPUs (you didn't state what type of CPU it is). Heck, 56C would be
> positively cool for a NetBurst-based Xeon.
>
> On Wed, Oct 27, 2010 at 4:17 PM, Harry Putnam wrote:
On 09/11/10 11:46 AM, Maurice Volaski wrote:
> ...
>
Is that horrendous mess Outlook's fault? If so, please consider not
using it.
--Toby
On 15/11/10 10:32 AM, Bryan Horstmann-Allen wrote:
> +--
> | On 2010-11-15 10:21:06, Edward Ned Harvey wrote:
> |
> | Backups.
> |
> | Even if you upgrade your hardware to better stuff... with ECC and so on ...
> | There
On 15/11/10 7:54 PM, Bryan Horstmann-Allen wrote:
> +--
> | On 2010-11-15 11:27:02, Toby Thain wrote:
> |
> | > Backups are not going to save you from bad memory writing corrupted data
> to
> | &
On 15/11/10 9:28 PM, Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Toby Thain
>>
>> The corruption will at least be detected by a scrub, even in cases where
> it
>> cannot
On 22/12/10 2:44 PM, Jerry Kemp wrote:
> I have a coworker, whose primary expertise is in another flavor of Unix.
>
> This coworker lists floating point operations as one of ZFS detriments.
>
Perhaps he can point you also to the equally mythical competing
filesystem which offers ZFS' advantages.
On 27/02/11 9:59 AM, Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of David Blasingame Oracle
>>
>> Keep pool space under 80% utilization to maintain pool performance.
>
> For what it's worth, the same is true for a
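A quick way to keep an eye on that threshold (pool name hypothetical):

  zpool list tank   # the CAP column shows the percentage of the pool currently allocated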
On 18/03/11 5:56 PM, Paul B. Henson wrote:
> We've been running Solaris 10 for the past couple of years, primarily to
> leverage zfs to provide storage for about 40,000 faculty, staff, and
> students ... and at this point want to start reevaluating our best
> migration option to move forward from S
On 23/03/11 12:13 PM, Linder, Doug wrote:
> OK, I know this is only tangentially related to ZFS, but we’re desperate
> and I thought someone might have a clue or idea of what kind of thing to
> look for. Also, this issue is holding up widespread adoption of ZFS at
> our shop. It’s making the powe
On 07/04/11 7:53 PM, Learner Study wrote:
> Hello,
>
> I was thinking of moving (porting) ZFS into my linux environment
> (2.6.30sh kernel) on MIPS architecture i.e. instead of using native
> ext4/xfs file systems, I'd like to try out ZFS.
>
> I tried to google for it but couldn't find anything r
On 06/05/11 9:17 PM, Erik Trimble wrote:
> On 5/6/2011 5:46 PM, Richard Elling wrote:
>> ...
>> Yes, perhaps a bit longer for recursive destruction, but everyone here
>> knows recursion is evil, right? :-)
>> -- richard
> You, my friend, have obviously never worshipped at the Temple of the
> Lam
On 08/05/11 10:31 AM, Edward Ned Harvey wrote:
>...
> Incidentally, do fsync() and sync return instantly or wait? Cuz "time
> sync" might produce 0 sec every time even if there were something waiting to
> be flushed to disk.
The semantics need to be synchronous. Anything else would be a horribl
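As a hedged illustration of the quoted "time sync" check: on an idle pool it can legitimately report near zero because nothing is dirty, so it only says something if there is known outstanding data (paths hypothetical):

  dd if=/dev/zero of=/tank/scratch/bigfile bs=1024k count=1024   # create some dirty data
  time sync                                                      # time how long flushing it takes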
On 09/06/11 1:33 PM, Paul Kraus wrote:
> On Thu, Jun 9, 2011 at 1:17 PM, Jim Klimov wrote:
>> 2011-06-09 18:52, Paul Kraus wrote:
>>>
>>> On Thu, Jun 9, 2011 at 8:59 AM, Jonathan Walker wrote:
>>>
New to ZFS, I made a critical error when migrating data and
configuring zpools according t
On 15/06/11 7:45 AM, Darren J Moffat wrote:
> On 06/15/11 12:29, Edward Ned Harvey wrote:
>>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>>> boun...@opensolaris.org] On Behalf Of Richard Elling
>>>
>>> That would suck worse.
>>
>> Don't mind Richard. He is of the mind that ZFS
On 15/06/11 8:30 AM, Simon Walter wrote:
> On 06/15/2011 09:01 PM, Toby Thain wrote:
>>>> I know I've certainly had many situations where people wanted to
>>>> snapshot or
>>>> rev individual files every time they're modified. As I said - perfe
On 16/06/11 3:09 AM, Simon Walter wrote:
> On 06/16/2011 09:09 AM, Erik Trimble wrote:
>> We had a similar discussion a couple of years ago here, under the
>> title "A Versioning FS". Look through the archives for the full
>> discussion.
>>
>> The gist is that application-level versioning (and cons
On 18/06/11 12:44 AM, Michael Sullivan wrote:
> ...
> Way off-topic, but Smalltalk and its variants do this by maintaining the
> state of everything in an operating environment image.
>
...Which is in memory, so things are rather different from the world of
filesystems.
--Toby
> But then again
On 21/06/11 7:54 AM, Todd Urie wrote:
> The volumes sit on HDS SAN. The only reason for the volumes is to
> prevent inadvertent import of the zpool on two nodes of a cluster
> simultaneously. Since we're on SAN with RAID internally, it didn't seem that
> we would need zfs to provide that redundancy al
On 09/09/11 6:33 AM, Sriram Narayanan wrote:
> Plus, you'll need an & character at the end of each command.
>
Only one of the commands needs to be backgrounded.
--Toby
> -- Sriram
>
> On 9/9/11, Tomas Forsman wrote:
>> On 09 September, 2011 - cephas maposah sent me these 0,4K bytes:
>>
>>> i
On 10/09/11 8:31 AM, LaoTsao wrote:
> imho, there is no harm in using & in both cmds
>
There is a difference.
--T
> Sent from my iPad
> Hung-Sheng Tsao ( LaoTsao) Ph.D
>
> On Sep 10, 2011, at 4:59, Toby Thain wrote:
>
>> On 09/09/11 6:33 AM, Sriram Narayanan
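For context, a hedged illustration of the distinction, assuming the common pattern from such threads of streaming a zfs send through nc (hostnames, port, dataset and snapshot names are all hypothetical, and listener flags vary between nc implementations): the receiver has to be backgrounded so the shell can go on to start the sender, but if the sender is backgrounded too, the script carries on before the transfer has actually finished:

  # receiving host: listen in the background
  nc -l 3333 | zfs recv -F backup/data &
  # sending host: run in the foreground so completion (and the exit status) is observed
  zfs send tank/data@snap | nc recvhost 3333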
On 27-Aug-08, at 1:41 PM, W. Wayne Liauh wrote:
>> Please read Akhilesh's answer carefully and stop
>> repeating
>> the same thing. Staroffice is to Latex/Framemaker
>> what a
>> mid-size sedan is to an 18-wheeler. To the untrained
>> eye,
>> they appear to perform similar actions, but the
>> a
On 27-Aug-08, at 5:47 PM, Ian Collins wrote:
> Tim writes:
>
>> On Wed, Aug 27, 2008 at 3:29 PM, Ian Collins <[EMAIL PROTECTED]>
>> wrote:
>>
>>>
>>> Does anyone have any tuning tips for a Subversion repository on
>>> ZFS? The
>>> repository will mainly be storing binary (MS Office documents
On 27-Aug-08, at 7:21 PM, Ian Collins wrote:
> Miles Nordin writes:
>
>>
>> In addition, I'm repeating myself like crazy at this point, but ZFS
>> tools used for all pools like 'zpool status' need to not freeze
>> when a
>> single pool, or single device within a pool, is unavailable or slow,
>>
On 28-Aug-08, at 10:11 AM, Richard Elling wrote:
> It is rare to see this sort of "CNN Moment" attributed to file
> corruption.
> http://www.eweek.com/c/a/IT-Infrastructure/Corrupt-File-Brought-
> Down-FAAs-Antiquated-IT-System/?kc=EWKNLNAV08282008STR4
>
"two 20-year-old redundant mainframe c
On 28-Aug-08, at 10:54 AM, Toby Thain wrote:
>
> On 28-Aug-08, at 10:11 AM, Richard Elling wrote:
>
>> It is rare to see this sort of "CNN Moment" attributed to file
>> corruption.
>> http://www.eweek.com/c/a/IT-Infrastructure/Corrupt-File-Brought-
On 30-Aug-08, at 2:32 AM, Todd H. Poole wrote:
>> Wrt. what I've experienced and read in ZFS-discussion etc. list
>> I've the
>> __feeling__, that we would have got really into trouble, using
>> Solaris
>> (even the most recent one) on that system ...
>> So if one asks me, whether to run Sola
On 4-Sep-08, at 4:52 PM, Richard Elling wrote:
> Marcelo Leal wrote:
>> Hello all,
>> Any plans (or already have), a send/receive way to get the
>> transfer backup statistics? I mean, the "how much" was transfered,
>> time and/or bytes/sec?
>>
>
> I'm not aware of any plans, you should file
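In the absence of a built-in counter, a hedged workaround sketch: put a byte-counting tool in the pipeline (pv is a third-party utility and may not be installed; dd prints its record counts to stderr when the stream ends); dataset names hypothetical:

  zfs send tank/fs@snap | pv | zfs recv -F backup/fs
  # or, with only base tools:
  zfs send tank/fs@snap | dd bs=1024k | zfs recv -F backup/fs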
On 30-Sep-08, at 6:58 AM, Ahmed Kamal wrote:
> Thanks for all the answers .. Please find more questions below :)
>
> - Good to know EMC filers do not have end2end checksums! What about
> netapp ?
Bluntly - no remote storage can have it by definition. The checksum
needs to be computed as close
On 30-Sep-08, at 7:50 AM, Ram Sharma wrote:
> Hi,
>
> can anyone please tell me what is the maximum number of files that
> can be there in 1 folder in Solaris with the ZFS file system.
>
> I am working on an application in which I have to support 1mn
> users. In my application I am using MySql My
On 30-Sep-08, at 6:31 PM, Tim wrote:
On Tue, Sep 30, 2008 at 5:19 PM, Erik Trimble
<[EMAIL PROTECTED]> wrote:
To make Will's argument more succinct, with a NetApp,
undetectable (by the NetApp) errors can be introduced at the HBA
and transport layer (FC switch, slightly damaged cable
On 30-Sep-08, at 9:54 PM, Tim wrote:
On Tue, Sep 30, 2008 at 8:50 PM, Toby Thain
<[EMAIL PROTECTED]> wrote:
NetApp's block-appended checksum approach appears similar but is
in fact much stronger. Like many arrays, NetApp formats its drives
with 520-byte sectors. It then
On 1-Oct-08, at 1:56 AM, Ram Sharma wrote:
> Hi Guys,
>
> Thanks for so many good comments. Perhaps I got even more than what
> I asked for!
>
> I am targeting 1 million users for my application. My DB will be on a
> Solaris machine. And the reason I am making one table per user is
> that it wi
On 18-Oct-08, at 12:46 AM, Roch Bourbonnais wrote:
>
> Leave the default recordsize. With 128K recordsize, files smaller than
> 128K are stored as a single record
> tightly fitted to the smallest possible # of disk sectors. Reads and
> writes are then managed with fewer ops.
>
> Not tuning the reco
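A minimal sketch of inspecting, and only if genuinely needed changing, the property under discussion (dataset name hypothetical); the setting affects newly written files only, and matching it to a database or application record size is about the only common reason to move off the default:

  zfs get recordsize tank/db
  zfs set recordsize=16K tank/db   # e.g. to match a database page size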
On 23-Nov-08, at 12:21 PM, Scara Maccai wrote:
> I watched both the youtube video
>
> http://www.youtube.com/watch?v=CN6iDzesEs0
>
> and the one on http://www.opensolaris.com/, "ZFS – A Smashing Hit".
>
> In the first one it is obvious that the app stops working when they
> smash the drives; they
On 24-Nov-08, at 10:40 AM, Scara Maccai wrote:
>> Why would it be assumed to be a bug in Solaris? Seems
>> more likely on
>> balance to be a problem in the error reporting path
>> or a controller/
>> firmware weakness.
>
> Weird: they would use a controller/firmware that doesn't work? Bad
> cal
On 24-Nov-08, at 3:49 PM, Miles Nordin wrote:
>>>>>> "tt" == Toby Thain <[EMAIL PROTECTED]> writes:
>
> tt> Why would it be assumed to be a bug in Solaris? Seems more
> tt> likely on balance to be a problem in the error reporting
On 25-Nov-08, at 5:10 AM, Ross Smith wrote:
> Hey Jeff,
>
> Good to hear there's work going on to address this.
>
> What did you guys think to my idea of ZFS supporting a "waiting for a
> response" status for disks as an interim solution that allows the pool
> to continue operation while it's wai
On 26-Nov-08, at 10:30 AM, C. Bergström wrote:
> ... Also is it more efficient/better
> performing to give swap a 2nd slice on the inner part of the disk
> or not
> care and just toss it on top of zfs?
I think the thing about swap is that if you're swapping, you probably
have more to worry a
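For completeness, a hedged sketch of the "just toss it on top of zfs" option on Solaris (volume size and dataset name are illustrative):

  zfs create -V 4G rpool/swapvol
  swap -a /dev/zvol/dsk/rpool/swapvol
  swap -l   # confirm the new device is in the swap list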
On 1-Dec-08, at 10:05 PM, Glaser, David wrote:
> Hi all,
>
>
>
> I have a Thumper (ok, actually 3) with each having one large pool,
> multiple filesystems and many snapshots. They are holding rsync
> copies of multiple clients, being synced every night (using
> snapshots to keep ‘incrementa
On 2-Dec-08, at 8:24 AM, Glaser, David wrote:
> Ok, thanks for all the responses. I'll probably do every other week
> scrubs, as this is the backup data (so doesn't need to be checked
> constantly).
Even that is probably more frequent than necessary. I'm sure somebody
has done the MTTDL ma
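A minimal sketch of scheduling that kind of periodic scrub from root's crontab (pool name and cadence illustrative):

  # crontab entry: scrub 'tank' at 03:00 on the 1st and 15th of each month
  0 3 1,15 * * /usr/sbin/zpool scrub tank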
On 2-Dec-08, at 3:35 PM, Miles Nordin wrote:
>> "r" == Ross <[EMAIL PROTECTED]> writes:
>
> r> style before I got half way through your post :) [...status
> r> problems...] could be a case of oversimplifying things.
> ...
> And yes, this is a religious argument. Just because it sp
On 6-Dec-08, at 7:10 AM, Orvar Korvar wrote:
> It's not me. There are people on Linux forums that want to try out
> Solaris + ZFS and this is a concern for them. What should I tell
> them? That it is not fixed? That they have to reboot every week?
> Someone knows?
That it's not recommended f
On 11-Dec-08, at 12:28 PM, Robert Milkowski wrote:
> Hello Anton,
>
> Thursday, December 11, 2008, 4:17:15 AM, you wrote:
>
>>> It sounds like you have access to a source of information that the
>>> rest of us don't have access to.
>
> ABR> I think if you read the archives of this mailing list, a
On 12-Dec-08, at 3:10 PM, Miles Nordin wrote:
>>>>>> "tt" == Toby Thain writes:
>>>>>> "mg" == Mike Gerdts writes:
>
> tt> I think we have to assume Anton was joking - otherwise his
> tt> measure is uselessly uns
On 12-Dec-08, at 3:38 PM, Johan Hartzenberg wrote:
...
The only bit that I understand about why HW raid "might" be bad is
that if ZFS had access to the disks behind a HW RAID LUN, then _IF_
zfs were to encounter corrupted data in a read, it would probably be
able to re-construct that data.
>
> Maybe the format allows unlimited O(1) snapshots, but it's at best
> O(1) to take them. All over the place it's probably O(n) or worse to
> _have_ them: to boot with them, to scrub with them.
Why would a scrub be O(n snapshots)?
The O(n filesystems) effects reported from time to time in
O
On 6-Jan-09, at 1:19 PM, Bob Friesenhahn wrote:
> On Tue, 6 Jan 2009, Jacob Ritorto wrote:
>
>> Is urandom nonblocking?
>
> The OS provided random devices need to be secure and so they depend on
> collecting "entropy" from the system so the random values are truely
> random. They also execute co
On 7-Jan-09, at 9:43 PM, JZ wrote:
> ok, Scott, that sounded sincere. I am not going to do the pic thing
> on you.
>
> But do I have to spell this out to you -- somethings are invented
> not for
> home use?
>
> Cindy, would you want to do ZFS at home,
Why would you disrespect your personal d
On 11-Jan-09, at 3:28 PM, Tom Bird wrote:
> Bob Friesenhahn wrote:
>> On Sun, 11 Jan 2009, Eric D. Mudama wrote:
>>> My impression is not that other OS's aren't interested in ZFS, they
>>> are, it's that the licensing restrictions limit native support to
>>> Solaris, BSD, and OS-X.
>>
>> Perhaps
On 12-Jan-09, at 3:43 PM, JZ wrote:
> [having late lunch hour for beloved Orvar]
>
> one more baby scenario for your consideration --
> you can give me some ZFS based codes and I will go to china and
> burn some HW
> RAID ASICs to fulfill your desire?
Is that what passes for product developmen
On 18-Jan-09, at 6:12 PM, Nathan Kroenert wrote:
> Hey, Tom -
>
> Correct me if I'm wrong here, but it seems you are not allowing ZFS
> any
> sort of redundancy to manage.
Which is particularly catastrophic when one's 'content' is organized
as a monolithic file, as it is here - unless, of co