On 16/02/13 3:51 PM, Sašo Kiselkov wrote:
On 02/16/2013 06:44 PM, Tim Cook wrote:
We've got Oracle employees on the mailing list who, while helpful, in no
way have the authority to speak for company policy. They've made that
clear on numerous occasions. And that doesn't change the fact that w
On 27/10/12 11:56 AM, Ray Arachelian wrote:
On 10/26/2012 04:29 AM, Karl Wagner wrote:
Does it not store a separate checksum for a parity block? If so, it
should not even need to recalculate the parity: assuming checksums
match for all data and parity blocks, the data is good.
...
Parity is
On 11/10/12 5:47 PM, andy thomas wrote:
...
This doesn't sound like a very good idea to me as surely disk seeks for
swap and for ZFS file I/O are bound to clash, aren't they?
As Phil implied, if your system is swapping, you already have bigger
problems.
--Toby
Andy
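(As an aside, a quick hedged check for whether a Solaris box is actually paging, using stock commands:)
  vmstat 5 5   # sustained non-zero sr/pi/po columns suggest real paging pressure
  swap -l      # lists swap devices and how much of each is in use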
On 01/08/12 3:34 PM, opensolarisisdeadlongliveopensolaris wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Jim Klimov
Well, there is at least a couple of failure scenarios where
copies>1 are good:
1) A single-disk pool, as in a laptop.
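(A minimal sketch of that first case, assuming a single-disk laptop pool; the dataset name is only illustrative:)
  zfs set copies=2 rpool/export/home   # keep two copies of each block on the one disk
  zfs get copies rpool/export/home     # confirm; only affects data written after the change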
On 15/01/12 10:38 AM, Edward Ned Harvey wrote:
...
Linux is going with btrfs. MS has their own thing. Oracle continues with
ZFS closed source. Apple needs a filesystem that doesn't suck, but they're
not showing inclinations toward ZFS or anything else that I know of.
Rumours have long circu
On 15/10/11 2:43 PM, Richard Elling wrote:
On Oct 15, 2011, at 6:14 AM, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Tim Cook
In my example - probably not a completely clustered FS.
A clustered ZFS pool with datas
On 10/09/11 8:31 AM, LaoTsao wrote:
> imho, there is no harm in using & in both cmds
>
There is a difference.
--T
> Sent from my iPad
> Hung-Sheng Tsao ( LaoTsao) Ph.D
>
> On Sep 10, 2011, at 4:59, Toby Thain wrote:
>
>> On 09/09/11 6:33 AM, Sriram Narayanan
On 09/09/11 6:33 AM, Sriram Narayanan wrote:
> Plus, you'll need an & character at the end of each command.
>
Only one of the commands needs to be backgrounded.
--Toby
> -- Sriram
>
> On 9/9/11, Tomas Forsman wrote:
>> On 09 September, 2011 - cephas maposah sent me these 0,4K bytes:
>>
>>> i
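(A generic sketch of the point; the command names are placeholders, not the ones from the original thread:)
  first_command &    # backgrounded: the shell returns immediately
  second_command     # runs in the foreground while the first continues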
On 21/06/11 7:54 AM, Todd Urie wrote:
> The volumes sit on HDS SAN. The only reason for the volumes is to
> prevent inadvertent import of the zpool on two nodes of a cluster
> simultaneously. Since we're on SAN with RAID internally, it didn't seem like
> we would need zfs to provide that redundancy al
On 18/06/11 12:44 AM, Michael Sullivan wrote:
> ...
> Way off-topic, but Smalltalk and its variants do this by maintaining the
> state of everything in an operating environment image.
>
...Which is in memory, so things are rather different from the world of
filesystems.
--Toby
> But then again
On 16/06/11 3:09 AM, Simon Walter wrote:
> On 06/16/2011 09:09 AM, Erik Trimble wrote:
>> We had a similar discussion a couple of years ago here, under the
>> title "A Versioning FS". Look through the archives for the full
>> discussion.
>>
>> The gist is that application-level versioning (and cons
On 15/06/11 8:30 AM, Simon Walter wrote:
> On 06/15/2011 09:01 PM, Toby Thain wrote:
>>>> I know I've certainly had many situations where people wanted to
>>>> snapshot or
>>>> rev individual files every time they're modified. As I said - perfe
On 15/06/11 7:45 AM, Darren J Moffat wrote:
> On 06/15/11 12:29, Edward Ned Harvey wrote:
>>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>>> boun...@opensolaris.org] On Behalf Of Richard Elling
>>>
>>> That would suck worse.
>>
>> Don't mind Richard. He is of the mind that ZFS
On 09/06/11 1:33 PM, Paul Kraus wrote:
> On Thu, Jun 9, 2011 at 1:17 PM, Jim Klimov wrote:
>> On 2011-06-09 18:52, Paul Kraus wrote:
>>>
>>> On Thu, Jun 9, 2011 at 8:59 AM, Jonathan Walker wrote:
>>>
New to ZFS, I made a critical error when migrating data and
configuring zpools according t
On 08/05/11 10:31 AM, Edward Ned Harvey wrote:
>...
> Incidentally, do fsync() and sync return instantly or wait? Cuz "time
> sync" might produce 0 sec every time even if there were something waiting to
> be flushed to disk.
The semantics need to be synchronous. Anything else would be a horribl
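(One rough way to observe it from the shell; the file and pool names are only illustrative:)
  dd if=/dev/urandom of=/tank/fs/junk bs=1M count=512   # dirty a few hundred MB
  time sync                                             # a truly synchronous sync should then take noticeably more than 0s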
On 06/05/11 9:17 PM, Erik Trimble wrote:
> On 5/6/2011 5:46 PM, Richard Elling wrote:
>> ...
>> Yes, perhaps a bit longer for recursive destruction, but everyone here
>> knows recursion is evil, right? :-)
>> -- richard
> You, my friend, have obviously never worshipped at the Temple of the
> Lam
On 07/04/11 7:53 PM, Learner Study wrote:
> Hello,
>
> I was thinking of moving (porting) ZFS into my linux environment
> (2.6.30sh kernel) on MIPS architecture i.e. instead of using native
> ext4/xfs file systems, I'd like to try out ZFS.
>
> I tried to google for it but couldn't find anything r
On 23/03/11 12:13 PM, Linder, Doug wrote:
> OK, I know this is only tangentially related to ZFS, but we’re desperate
> and I thought someone might have a clue or idea of what kind of thing to
> look for. Also, this issue is holding up widespread adoption of ZFS at
> our shop. It’s making the powe
On 18/03/11 5:56 PM, Paul B. Henson wrote:
> We've been running Solaris 10 for the past couple of years, primarily to
> leverage zfs to provide storage for about 40,000 faculty, staff, and
> students ... and at this point want to start reevaluating our best
> migration option to move forward from S
On 27/02/11 9:59 AM, Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of David Blasingame Oracle
>>
>> Keep pool space under 80% utilization to maintain pool performance.
>
> For what it's worth, the same is true for a
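(Checking utilization is a one-liner; the pool name is illustrative:)
  zpool list -H -o name,capacity tank   # CAP is the percentage of pool space in use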
On 22/12/10 2:44 PM, Jerry Kemp wrote:
> I have a coworker whose primary expertise is in another flavor of Unix.
>
> This coworker lists floating point operations as one of ZFS's detriments.
>
Perhaps he can point you also to the equally mythical competing
filesystem which offers ZFS' advantages.
On 15/11/10 9:28 PM, Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Toby Thain
>>
>> The corruption will at least be detected by a scrub, even in cases where
> it
>> cannot
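(The usual sequence, with an illustrative pool name:)
  zpool scrub tank       # read and verify every checksummed block in the pool
  zpool status -v tank   # afterwards, reports checksum errors and any affected files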
On 15/11/10 7:54 PM, Bryan Horstmann-Allen wrote:
> +--
> | On 2010-11-15 11:27:02, Toby Thain wrote:
> |
> | > Backups are not going to save you from bad memory writing corrupted data
> to
> | &
On 15/11/10 10:32 AM, Bryan Horstmann-Allen wrote:
> +--
> | On 2010-11-15 10:21:06, Edward Ned Harvey wrote:
> |
> | Backups.
> |
> | Even if you upgrade your hardware to better stuff... with ECC and so on ...
> | There
On 09/11/10 11:46 AM, Maurice Volaski wrote:
> ...
>
Is that horrendous mess Outlook's fault? If so, please consider not
using it.
--Toby
> ... supply or other
> components.
>
> Your CPU temperature is 56C, which is not out-of-line for most modern
> CPUs (you didn't state what type of CPU it is). Heck, 56C would be
> positively cool for a NetBurst-based Xeon.
>
> On Wed, Oct 27, 2010 at 4:17 PM, Harry Putnam wrote:
On 27/10/10 3:14 PM, Harry Putnam wrote:
> It seems my hardware is getting bad, and I can't keep the os running
> for more than a few minutes until the machine shuts down.
>
> It will run 15 or 20 minutes and then shut down
> I haven't found the exact reason for it.
>
One thing to try is a thorou
On 14-Oct-10, at 11:48 AM, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Toby Thain
I don't want to heat up the discussion about ZFS managed discs vs.
HW raids, but if RAID5/6 would be that bad, no one woul
On 14-Oct-10, at 3:27 AM, Stephan Budach wrote:
I'd like to see those docs as well.
As all HW raids are driven by software, of course - and software can
be buggy.
It's not that the software 'can be buggy' - that's not the point here.
The point being made is that conventional RAID just d
On 7-Oct-10, at 1:22 AM, Stephan Budach wrote:
Hi Edward,
these are interesting points. I have considered a couple of them,
when I started playing around with ZFS.
I am not sure whether I disagree with all of your points, but I
conducted a couple of tests, where I configured my raids as
On 21-Aug-10, at 3:06 PM, Ross Walker wrote:
On Aug 21, 2010, at 2:14 PM, Bill Sommerfeld wrote:
On 08/21/10 10:14, Ross Walker wrote:
...
Would I be better off forgoing resiliency for simplicity, putting
all my faith into the Equallogic to handle data resiliency?
IMHO, no; the resultin
On 17-Aug-10, at 1:05 PM, Andrej Podzimek wrote:
I did not say there is something wrong about published reports. I
often read
them. (Who doesn't?) However, there are no trustworthy reports on
this topic
yet, since Btrfs is unfinished. Let's see some examples:
(1) http://www.phoronix.com/sc
On 10-Jul-10, at 4:57 PM, Roy Sigurd Karlsbakk wrote:
- Original Message -
Depends on the failure mode. I've spent hundreds (thousands?) of
hours
attempting to recover data from backup tape because of bad hardware,
firmware,
and file systems. The major difference is that ZFS cares th
On 6-Jun-10, at 7:11 AM, Thomas Maier-Komor wrote:
On 06.06.2010 08:06, devsk wrote:
I had an unclean shutdown because of a hang and suddenly my pool is
degraded (I realized something is wrong when python dumped core a
couple of times).
This is before I ran scrub:
pool: mypool
state: DE
On 2-Mar-10, at 4:31 PM, valrh...@gmail.com wrote:
Freddie: I think you understand my intent correctly.
This is not about a perfect backup system. The point is that I have
hundreds of DVDs that I don't particularly want to sort out, but
they are pretty useless from a management standpoint
On 24-Feb-10, at 3:38 PM, Tomas Ögren wrote:
On 24 February, 2010 - Bob Friesenhahn sent me these 1,0K bytes:
On Wed, 24 Feb 2010, Steve wrote:
The overhead I was thinking of was more in the pointer structures...
(bearing in mind this is a 128 bit file system), I would guess that
memory req
On 19-Feb-10, at 5:40 PM, Eugen Leitl wrote:
On Fri, Feb 19, 2010 at 11:17:29PM +0100, Felix Buenemann wrote:
I found the Hyperdrive 5/5M, which is a half-height drive bay sata
ramdisk with battery backup and auto-backup to compact flash at power
failure.
Promises 65,000 IOPS and thus should
On 9-Feb-10, at 2:02 PM, Frank Cusack wrote:
On 2/9/10 12:03 PM +1100 Daniel Carosone wrote:
Snorcle wants to sell hardware.
LOL ... snorcle
But apparently they don't. Have you seen the new website? Seems
like a
blatant attempt to kill the hardware business to me.
That's very sad.
On 5-Feb-10, at 11:35 AM, J wrote:
Hi all,
I'm building a whole new server system for my employer, and I
really want to use OpenSolaris as the OS for the new file server.
One thing is keeping me back, though: is it possible to recover a
ZFS Raid Array after the OS crashes? I've spent h
On 2-Feb-10, at 10:11 PM, Marc Nicholas wrote:
On Tue, Feb 2, 2010 at 9:52 PM, Toby Thain
wrote:
On 2-Feb-10, at 1:54 PM, Orvar Korvar wrote:
100% uptime for 20 years?
So what makes OpenVMS so much more stable than Unix? What is the
difference?
The short answer is that uptimes
On 2-Feb-10, at 1:54 PM, Orvar Korvar wrote:
100% uptime for 20 years?
So what makes OpenVMS so much more stable than Unix? What is the
difference?
The short answer is that uptimes like that are VMS *cluster* uptimes.
Individual hosts don't necessarily have that uptime, but the cluster
On 25-Jan-10, at 2:59 PM, Freddie Cash wrote:
We have the WDC WD15EADS-00P8B0 1.5 TB Caviar Green drives.
Unfortunately, these drives have the "fixed" firmware and the 8
second idle timeout cannot be changed.
That sounds like a laptop spec, not a server spec! How silly. Maybe
you can set
On 24-Jan-10, at 11:26 AM, R.G. Keen wrote:
...
I’ll just blather a bit. The most durable data backup medium humans
have come up with was invented about 4000-6000 years ago. It’s
fired cuneiform tablets as used in the Middle East. Perhaps one
could include stone carvings of Egyptian and/or
On 16-Jan-10, at 6:51 PM, Mike Gerdts wrote:
On Sat, Jan 16, 2010 at 5:31 PM, Toby Thain
wrote:
On 16-Jan-10, at 7:30 AM, Edward Ned Harvey wrote:
I am considering building a modest sized storage system with
zfs. Some
of the data on this is quite valuable, some small subset to be
On 16-Jan-10, at 7:30 AM, Edward Ned Harvey wrote:
I am considering building a modest sized storage system with zfs.
Some
of the data on this is quite valuable, some small subset to be backed
up "forever", and I am evaluating back-up options with that in mind.
You don't need to store the "z
On 12-Jan-10, at 10:40 PM, Brad wrote:
"(Caching isn't the problem; ordering is.)"
Weird, I was reading about a problem where, using SSDs (Intel X25-E),
if the power goes out and the data in cache is not flushed, you
would have loss of data.
Could you elaborate on "ordering"?
ZFS integri
On 12-Jan-10, at 5:53 AM, Brad wrote:
Has anyone worked with an x4500/x4540 and know if the internal raid
controllers have a bbu? I'm concerned that we won't be able to turn
off the write-cache on the internal hds and SSDs to prevent data
corruption in case of a power failure.
A power fai
On 11-Jan-10, at 5:59 PM, Daniel Carosone wrote:
With all the recent discussion of SSD's that lack suitable
power-failure cache protection, surely there's an opportunity for a
separate modular solution?
I know there used to be (years and years ago) small internal UPS's
that fit in a few 5.25"
On 11-Jan-10, at 1:12 PM, Bob Friesenhahn wrote:
On Mon, 11 Jan 2010, Anil wrote:
What is the recommended way to make use of a Hardware RAID
controller/HBA along with ZFS?
...
Many people will recommend against using RAID5 in "hardware" since
then zfs is not as capable of repairing err
On 29-Dec-09, at 11:53 PM, Ross Walker wrote:
On Dec 29, 2009, at 12:36 PM, Bob Friesenhahn
wrote:
...
However, zfs does not implement "RAID 1" either. This is easily
demonstrated since you can unplug one side of the mirror and the
writes to the zfs mirror will still succeed, catching
On 22-Dec-09, at 3:33 PM, James Risner wrote:
...
Joerg Moellenkamp:
I do "consider RAID5 as 'Stripeset with an interleaved
Parity'", so I don't agree with the strong objection in this thread
by many about the use of RAID5 to describe what raidz does. I
don't think many particularly
On 22-Dec-09, at 12:42 PM, Roman Naumenko wrote:
On Tue, 22 Dec 2009, Ross Walker wrote:
Applying classic RAID terms to zfs is just plain
wrong and misleading since zfs does not directly implement these
classic RAID approaches
even though it re-uses some of the algorithms for data recovery.
On 19-Dec-09, at 11:34 AM, Colin Raven wrote:
...
When we are children, we are told that sharing is good. In the
case of references, sharing is usually good, but if there is a huge
amount of sharing, then it can take longer to delete a set of files
since the mutual references create a "
On 19-Dec-09, at 2:01 PM, Colin Raven wrote:
On Sat, Dec 19, 2009 at 19:08, Toby Thain
wrote:
On 19-Dec-09, at 11:34 AM, Colin Raven wrote
Then again (not sure how gurus feel on this point) but I have this
probably naive and foolish belief that snapshots (mostly) oughtta
reside on
On 19-Dec-09, at 11:34 AM, Colin Raven wrote:
...
Wait...whoah, hold on.
If snapshots reside within the confines of the pool, are you saying
that dedup will also count what's contained inside the snapshots?
Snapshots themselves are only references, so yes.
I'm not sure why, but that thoug
On 19-Dec-09, at 4:35 AM, Colin Raven wrote:
...
There is no original, there is no copy. There is one block with
reference counters.
Many blocks, potentially shared, make up a de-dup'd file. Not sure
why you write "one" here.
- Fred can rm his "file" (because clearly it isn't a file,
On 16-Dec-09, at 10:47 AM, Bill Sprouse wrote:
Hi Brent,
I'm not sure why Dovecot was chosen. It was most likely a
recommendation by a fellow University. I agree that it is lacking in
efficiency in a lot of areas. I don't think I would be
successful in suggesting a change at this point
On 12-Dec-09, at 1:32 PM, Mattias Pantzare wrote:
On Sat, Dec 12, 2009 at 18:08, Richard Elling
wrote:
On Dec 12, 2009, at 12:53 AM, dick hoogendijk wrote:
On Sat, 2009-12-12 at 00:22 +, Moritz Willers wrote:
The host identity had - of course - changed with the new
motherboard
and
On 5-Dec-09, at 9:32 PM, nxyyt wrote:
The "rename trick" may not work here. Even if I renamed the file
successfully, the data of the file may still reside in the memory
instead of flushing back to the disk. If I made any mistake here,
please correct me. Thank you!
I'll try to find out w
On 5-Dec-09, at 8:32 AM, nxyyt wrote:
Thank you very much for your quick response.
My question is that I want to figure out whether there is data loss
after a power outage. I have replicas on other machines so I can
recover from the data loss. But I need a way to know whether there
is data lo
On 26-Nov-09, at 8:57 PM, Richard Elling wrote:
On Nov 26, 2009, at 1:20 PM, Toby Thain wrote:
On 25-Nov-09, at 4:31 PM, Peter Jeremy wrote:
On 2009-Nov-24 14:07:06 -0600, Mike Gerdts
wrote:
... fill a 128
KB buffer with random data then do bitwise rotations for each
successive use of
On 25-Nov-09, at 4:31 PM, Peter Jeremy wrote:
On 2009-Nov-24 14:07:06 -0600, Mike Gerdts wrote:
... fill a 128
KB buffer with random data then do bitwise rotations for each
successive use of the buffer. Unless my math is wrong, it should
allow 128 KB of random data to be used to write 128 GB of data
On 8-Nov-09, at 12:20 PM, Joe Auty wrote:
Tim Cook wrote:
On Sun, Nov 8, 2009 at 2:03 AM, besson3c wrote:
...
Why not just convert the VM's to run in virtualbox and run Solaris
directly on the hardware?
That's another possibility, but it depends on how Virtualbox stacks
up against V
On 2-Nov-09, at 3:16 PM, Nicolas Williams wrote:
On Mon, Nov 02, 2009 at 11:01:34AM -0800, Jeremy Kitchen wrote:
forgive my ignorance, but what's the advantage of this new dedup over
the existing compression option? Wouldn't full-filesystem
compression
naturally de-dupe?
...
There are man
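(For context, the two are independent per-dataset properties and can be combined; the dataset name is illustrative:)
  zfs set compression=on tank/data   # compresses each block individually
  zfs set dedup=on tank/data         # lets identical blocks be stored once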
On 27-Oct-09, at 1:43 PM, Dale Ghent wrote:
I have a single-fs, mirrored pool on my hands which recently went
through a bout of corruption. I've managed to clean up a good bit of
it
How did this occur? Isn't a mirrored pool supposed to self heal?
--Toby
but it appears that I'm left with
On 18-Oct-09, at 6:41 AM, Adam Mellor wrote:
I too have seen this problem.
I had done a zfs send from my main pool "terra" (6 disk raidz on
seagate 1TB drives) to a mirror pair of WD Green 1TB drives.
ZFS send was successful; however, I noticed the pool was degraded
after a while (~1 week)
On 5-Oct-09, at 3:32 PM, Miles Nordin wrote:
"bm" == Brandon Mercer writes:
I'm now starting to feel that I understand this issue,
and I didn't for quite a while. And that I understand the
risks better, and have a clearer idea of what the possible
fixes are. And I didn't before.
haha, y
On 30-Sep-09, at 10:48 AM, Brian Hubbleday wrote:
I had a 50mb zfs volume that was an iscsi target. This was mounted
into a Windows system (ntfs) and shared on the network. I used
notepad.exe on a remote system to add/remove a few bytes at the end
of a 25mb file.
I'm astonished that's ev
On 26-Sep-09, at 2:55 PM, Frank Middleton wrote:
On 09/26/09 12:11 PM, Toby Thain wrote:
Yes, but unless they fixed it recently (>=RHFC11), Linux doesn't
actually nuke /tmp, which seems to be mapped to disk. One side
effect is that (like MSWindows) AFAIK there isn't a native tmp
On 26-Sep-09, at 9:56 AM, Frank Middleton wrote:
On 09/25/09 09:58 PM, David Magda wrote:
...
Similar definition for [/tmp] Linux FWIW:
Yes, but unless they fixed it recently (>=RHFC11), Linux doesn't
actually
nuke /tmp, which seems to be mapped to disk. One side effect is
that (like
M
On 25-Sep-09, at 2:58 PM, Frank Middleton wrote:
On 09/25/09 11:08 AM, Travis Tabbal wrote:
... haven't heard if it's a known
bug or if it will be fixed in the next version...
Out of courtesy to our host, Sun makes some quite competitive
X86 hardware. I have absolutely no idea how difficult
On 14-Aug-09, at 11:14 AM, Peter Schow wrote:
On Thu, Aug 13, 2009 at 05:02:46PM -0600, Louis-Frédéric Feuillette
wrote:
I saw this question on another mailing list, and I too would like to
know. And I have a couple questions of my own.
== Paraphrased from other list ==
Does anyone have any
On 4-Aug-09, at 9:28 AM, Roch Bourbonnais wrote:
On 26 Jul 09, at 01:34, Toby Thain wrote:
On 25-Jul-09, at 3:32 PM, Frank Middleton wrote:
On 07/25/09 02:50 PM, David Magda wrote:
Yes, it can be affected. If the snapshot's data structure /
record is
underneath the corrupted
On 31-Jul-09, at 7:15 PM, Richard Elling wrote:
wow, talk about a knee jerk reaction...
On Jul 31, 2009, at 3:23 PM, Dave Stubbs wrote:
I don't mean to be offensive Russel, but if you do
ever return to ZFS, please promise me that you will
never, ever, EVER run it virtualized on top of NTFS
(
On 27-Jul-09, at 3:44 PM, Frank Middleton wrote:
On 07/27/09 01:27 PM, Eric D. Mudama wrote:
Everyone on this list seems to blame lying hardware for ignoring
commands, but disks are relatively mature and I can't believe that
major OEMs would qualify disks or other hardware that willingly
ig
On 27-Jul-09, at 5:46 AM, erik.ableson wrote:
The zfs send command generates a differential file between the two
selected snapshots so you can send that to anything you'd like.
The catch of course is that then you have a collection of files on
your Linux box that are pretty much useless s
with metadata other
than to manage it.
Now if you were too lazy to bother to follow the instructions
properly,
we could end up with bizarre things. This is what happens when
storage
lies and re-orders writes across boundaries.
On 07/25/09 07:34 PM, Toby Thain wrote:
The problem is assumed
On 25-Jul-09, at 3:32 PM, Frank Middleton wrote:
On 07/25/09 02:50 PM, David Magda wrote:
Yes, it can be affected. If the snapshot's data structure / record is
underneath the corrupted data in the tree then it won't be able to be
reached.
Can you comment on if/how mirroring or raidz mitigat
On 24-Jul-09, at 6:41 PM, Frank Middleton wrote:
On 07/24/09 04:35 PM, Bob Friesenhahn wrote:
Regardless, it [VirtualBox] has committed a crime.
But ZFS is a journalled file system! Any hardware can lose a flush;
No, the problematic default in VirtualBox is flushes being *ignored*,
whic
On 20-Jul-09, at 6:26 AM, Russel wrote:
Well I did have a UPS on the machine :-)
but the machine hung and I had to power it off...
(yep it was virtual, but that happens on direct HW too,
As has been discussed here before, the failure modes are different as
the layer stack from filesystem t
On 19-Jul-09, at 7:12 AM, Russel wrote:
Guys guys please chill...
First thanks to the info about virtualbox option to bypass the
cache (I don't suppose you can give me a reference for that info?
(I'll search the VB site :-))
I posted about that insane default, six months ago. Obviously ZFS
On 14-Jul-09, at 5:18 PM, Orvar Korvar wrote:
With dedup, will it be possible somehow to identify files that are
identical but have different names? Then I can find and remove all
duplicates. I know that with dedup, removal is not really needed
because the duplicate will just be a reference
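(Dedup works on blocks and doesn't report duplicate files; a userland sketch using GNU tools, path illustrative:)
  find /tank/data -type f -exec sha256sum {} + | sort | uniq -w64 --all-repeated=separate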
On 23-Jun-09, at 1:58 PM, Erik Trimble wrote:
Richard Elling wrote:
Erik Trimble wrote:
All this discussion hasn't answered one thing for me: exactly
_how_ does ZFS do resilvering? Both in the case of mirrors, and
of RAIDZ[2] ?
I've seen some mention that it goes in chronological order
On 18-Jun-09, at 12:14 PM, Miles Nordin wrote:
"bmm" == Bogdan M Maryniuk writes:
"tt" == Toby Thain writes:
...
tt> /. is no person...
... you and I both know it's plausible
speculation that Apple delayed unleashing ZFS on their consumers
because of
On 17-Jun-09, at 5:42 PM, Miles Nordin wrote:
"bmm" == Bogdan M Maryniuk writes:
"tt" == Toby Thain writes:
"ok" == Orvar Korvar writes:
tt> Slashdot was never the place to go for accurate information
tt> about ZFS.
again, the posts in t
On 17-Jun-09, at 7:37 AM, Orvar Korvar wrote:
Ok, so you mean the comments are mostly FUD and bull shit? Because
there are no bug reports from the whiners? Could this be the case?
It is mostly FUD? Hmmm...?
Having read the thread, I would say "without a doubt".
Slashdot was never the pl
On 16-Jun-09, at 6:22 PM, Ray Van Dolson wrote:
On Tue, Jun 16, 2009 at 03:16:09PM -0700, milosz wrote:
yeah i pretty much agree with you on this. the fact that no one has
brought this up before is a pretty good indication of the demand.
there are about 1000 things i'd rather see fixed/improv
On 10-Jun-09, at 7:25 PM, Alex Lam S.L. wrote:
On Thu, Jun 11, 2009 at 2:08 AM, Aaron Blew
wrote:
That's quite a blanket statement. MANY companies (including Oracle)
purchased Xserve RAID arrays for important applications because of
their
price point and capabilities. You easily could buy
On 26-May-09, at 10:21 AM, Frank Middleton wrote:
On 05/26/09 03:23, casper@sun.com wrote:
And where exactly do you get the second good copy of the data?
From the first. And if it is already bad, as noted previously, this
is no worse than the UFS/ext3 case. If you want total freedom fro
On 25-May-09, at 11:16 PM, Frank Middleton wrote:
On 05/22/09 21:08, Toby Thain wrote:
Yes, the important thing is to *detect* them, no system can run
reliably
with bad memory, and that includes any system with ZFS. Doing nutty
things like calculating the checksum twice does not buy
On 22-May-09, at 5:24 PM, Frank Middleton wrote:
There have been a number of threads here on the reliability of ZFS
in the
face of flaky hardware. ZFS certainly runs well on decent (e.g.,
SPARC)
hardware, but isn't it reasonable to expect it to run well on
something
less well engineered?
On 19-Apr-09, at 10:38 AM, Uwe Dippel wrote:
casper@sun.com wrote:
We are back at square one; or, at the subject line.
I did a zpool status -v, everything was hunky dory.
Next, a power failure, 2 hours later, and this is what zpool
status -v thinks:
zpool status -v
pool: rpool
state:
On 17-Apr-09, at 11:49 AM, Frank Middleton wrote:
... One might argue that a machine this flaky should
be retired, but it is actually working quite well,
If it has bad memory, you won't get much useful work done on it until
the memory is replaced - unless you want to risk your data with
r
On 16-Apr-09, at 5:27 PM, Florian Ermisch wrote:
Uwe Dippel schrieb:
Bob Friesenhahn wrote:
Since it was not reported that user data was impacted, it seems
likely that there was a read failure (or bad checksum) for ZFS
metadata which is redundantly stored.
(Maybe I am too much of a lingu
On 15-Apr-09, at 8:31 PM, Frank Middleton wrote:
On 04/15/09 14:30, Bob Friesenhahn wrote:
On Wed, 15 Apr 2009, Frank Middleton wrote:
zpool status shows errors after a pkg image-update
followed by a scrub.
If a corruption occurred in the main memory, the backplane, or the
disk
controller
On 10-Apr-09, at 5:05 PM, Mark J Musante wrote:
On Fri, 10 Apr 2009, Patrick Skerrett wrote:
degradation) when these write bursts come in, and if I could
buffer them even for 60 seconds, it would make everything much
smoother.
ZFS already batches up writes into a transaction group, which
On 10-Apr-09, at 2:03 PM, Harry Putnam wrote:
David Magda writes:
On Apr 7, 2009, at 16:43, OpenSolaris Forums wrote:
if you have a snapshot of your files and rsync the same files again,
you need to use the "--inplace" rsync option, otherwise completely new
blocks will be allocated for the ne
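(A hedged example; paths are illustrative, and --no-whole-file matters for local copies, where rsync would otherwise rewrite whole files:)
  rsync -a --inplace --no-whole-file /data/src/ /tank/backup/src/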
On 17-Mar-09, at 3:32 PM, cindy.swearin...@sun.com wrote:
Neal,
You'll need to use the text-based initial install option.
The steps for configuring a ZFS root pool during an initial
install are covered here:
http://opensolaris.org/os/community/zfs/docs/
Page 114:
Example 4–1 Initial Install
On 14-Mar-09, at 12:09 PM, Blake wrote:
I just thought of an enhancement to zfs that would be very helpful in
disaster recovery situations - having zfs cache device serial/model
numbers - the information we see in cfgadm -v.
+1 I haven't needed this but it sounds very sensible. I can imagine
On 5-Mar-09, at 2:03 PM, Miles Nordin wrote:
"gm" == Gary Mills writes:
gm> There are many different components that could contribute to
gm> such errors.
yes of course.
gm> Since only the lower ZFS has data redundancy, only it can
gm> correct the error.
um, no?
...
For wri