Bob Friesenhahn wrote:
> Are there any plans to support ZFS for write-only media such as
> optical storage? It seems that if mirroring or even raidz is used
> that ZFS would be a good basis for long term archival storage.
I'm just going to assume that "write-only" here means "write-once,
read-many"
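For the archival use Bob describes, the redundancy would come from the pool
layout rather than from the media itself. A minimal sketch of a raidz pool
built from three disks (pool and device names are placeholders, not from the
thread):

    # Create a single-parity raidz pool from three disks.
    zpool create archive raidz c1t0d0 c1t1d0 c1t2d0

    # Confirm the layout and health before committing archival data to it.
    zpool status archive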
Matt Cohen wrote:
> We have a system with two drives in it, part UFS, part ZFS. It's a software
> mirrored system with slices 0,1,3 setup as small UFS slices, and slice 4 on
> each drive being the ZFS slice.
>
> One of the drives is failing and we need to replace it.
>
> I just want to make sure
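For the ZFS slice, replacing the disk usually comes down to a zpool replace
once the new drive carries the same slice layout; the small UFS slices are
handled separately by whatever is doing the software mirroring. A sketch,
assuming the pool is called tank and the failing disk is c1t1d0 (both names
are placeholders):

    # Copy the slice layout from the surviving disk to the replacement,
    # then ask ZFS to resilver onto the new slice 4.
    prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t1d0s2
    zpool replace tank c1t1d0s4

    # Watch the resilver; wait for it to finish before trusting the new disk.
    zpool status tank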
Torrey McMahon wrote:
> Dana H. Myers wrote:
>> Ed Gould wrote:
>>
>>> On Jan 26, 2007, at 12:13, Richard Elling wrote:
>>>
>>>> On Fri, Jan 26, 2007 at 11:05:17AM -0800, Ed Gould wrote:
>>>>
>>>>> A number that
Ed Gould wrote:
> On Jan 26, 2007, at 12:13, Richard Elling wrote:
>> On Fri, Jan 26, 2007 at 11:05:17AM -0800, Ed Gould wrote:
>>> A number that I've been quoting, albeit without a good reference,
>>> comes from Jim Gray, who has been around the data-management industry
>>> for longer than I have
Neal Pollack wrote:
> I have an 800GB raidz2 zfs filesystem. It already has approx. 142GB of
> data.
> Can I simply turn on compression at this point, or do you need to start
> with compression at the creation time?
As I understand it, you can turn compression on and off at will.
Data will be written
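In other words, compression is an ordinary dataset property: turning it on
affects only blocks written afterwards, and existing data stays uncompressed
until it is rewritten. A minimal sketch (pool and filesystem names are
placeholders):

    # Enable compression on an existing filesystem; no downtime needed.
    zfs set compression=on tank/data

    # Check the setting and the ratio achieved on data written since.
    zfs get compression,compressratio tank/data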
Karen Chau wrote:
> How do you reconfigure ZFS on the server after an OS upgrade? I have a
> ZFS pool on a 6130 storge array.
> After upgrade the data on the storage array is still intact, but ZFS
> configuration is gone due to new OS.
>
> Do I use the same commands/procedure to recreate the zpool
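Assuming the pool on the array survived the upgrade intact, the step after
the new OS is installed is normally an import rather than a re-creation;
zpool scans the attached devices for existing pool labels. A sketch (the pool
name is a placeholder):

    # List pools visible on attached storage but not yet part of this
    # system's configuration.
    zpool import

    # Import the pool found on the 6130; -f may be needed if it was never
    # cleanly exported from the old OS image.
    zpool import -f mypool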
Chad Leigh -- Shire.Net LLC wrote:
>
> On Dec 2, 2006, at 12:06 AM, Ian Collins wrote:
[...]
>> I don't think that's the issue here, it's more one of perceived data
>> integrity. People who have been happily using a single RAID 5 are now
>> finding that the array has been silently corrupting their
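The data-integrity point is where ZFS differs from a plain RAID 5:
end-to-end checksums let it detect, and with redundancy repair, corruption
that an array would otherwise return silently. A minimal sketch of how that
surfaces to the administrator (pool name is a placeholder):

    # Read and verify every allocated block in the pool.
    zpool scrub tank

    # The CKSUM column and any list of permanent errors show what was
    # found or repaired.
    zpool status -v tank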
Chad Leigh -- Shire.Net LLC wrote:
>
> On Dec 1, 2006, at 10:17 PM, Ian Collins wrote:
>
>> Chad Leigh -- Shire.Net LLC wrote:
>>
>>>
>>> On Dec 1, 2006, at 4:34 PM, Dana H. Myers wrote:
>>>
>>>> Chad Leigh -- Shire.Net LLC wrote:
Chad Leigh -- Shire.Net LLC wrote:
>
> On Dec 1, 2006, at 4:34 PM, Dana H. Myers wrote:
>
>> Chad Leigh -- Shire.Net LLC wrote:
>>>
>>> On Dec 1, 2006, at 9:50 AM, Al Hopper wrote:
>>>
>>>> Followup: When you say you "fixed the HW"
Chad Leigh -- Shire.Net LLC wrote:
>
> On Dec 1, 2006, at 9:50 AM, Al Hopper wrote:
>
>> Followup: When you say you "fixed the HW", I'm curious as to what you
>> found and if this experience with ZFS convinced you that your trusted
>> RAID
>> H/W did, in fact, have issues?
>>
>> Do you think that
Al Hopper wrote:
> On Wed, 11 Oct 2006, Dana H. Myers wrote:
>
>> Al Hopper wrote:
>>
>>> Memory: DDR-400 - your choice but Kingston is always a safe bet. 2*512Mb
>>> sticks for a starter, cost effective, system. 4*512Mb for a good long
>>> term solution.
Al Hopper wrote:
> Memory: DDR-400 - your choice but Kingston is always a safe bet. 2*512Mb
> sticks for a starter, cost effective, system. 4*512Mb for a good long
> term solution.
Due to fan-out considerations, every BIOS I've seen will run DDR400
memory at 333MHz when connected to more than 1
David Dyer-Bennet wrote:
[...]
> So, having gotten this far, and it being a scratch install and all, I
> reached over and pulled out C3D0. I then typed a zpool status
> command. This hung after the first line of output. And I started
> getting messages on the console, saying things like (retyp
Neal Miskin wrote:
> Hi Robert
>
>> When ZFS can't write to a pool, it panics the system.
>
> Thanks for the info.
> I find this hard to understand, though; the same wouldn't happen for VxVM or
> SVM. Is this a flaw with zfs?
It is ZFS bug 6322646; a flaw.
Dana
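As a forward-looking note rather than a description of the build discussed
here: later OpenSolaris builds added a pool-level failmode property that
controls what ZFS does when a pool loses all of its devices for writing. A
sketch, assuming a release that has the property:

    # failmode accepts wait, continue, or panic; "panic" matches the
    # behavior described above. Pool name is a placeholder.
    zpool set failmode=continue tank
    zpool get failmode tank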
Jonathan Wheeler wrote:
I'm not a ZFS expert - I'm just an enthusiastic user inside Sun.
Here are some brief observations:
> Bonnie
>           ---Sequential Output---  ---Sequential Input--  --Random--
>           -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Richard Elling wrote:
> Michael Schuster - Sun Microsystems wrote:
>> Sean Meighan wrote:
>>> I am not sure if this is ZFS, Niagara or something else issue? Does
>>> someone know why commands have the latency shown below?
>>>
>>> *1) do a ls of a directory. 6.9 seconds total, truss only shows .07
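One way to see where the latency sits is to have truss report per-call
timing, which separates slow system calls from time spent elsewhere. A
sketch (the directory is a placeholder):

    # -d adds timestamps, -D the delta since the previous traced call;
    # -c prints a per-syscall count/time summary on exit.
    truss -dD ls /some/dir
    truss -c ls /some/dir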
Darren J Moffat wrote:
> Bill Sommerfeld wrote:
>> On Wed, 2006-06-21 at 14:15, Neil Perrin wrote:
>>> Of course we would need to stress the dangers of setting 'deferred'.
>>> What do you guys think?
>>
>> I can think of a use case for "deferred": improving the efficiency of a
>> large mega-"transaction"
Richard Elling wrote:
> Erik Trimble wrote:
>> Oh, and the newest thing in the consumer market is called "hybrid
>> drives", which is a melding of a Flash drive with a Winchester
>> drive. It's originally targeted at the laptop market - think a 1GB
>> flash memory welded to a 40GB 2.5" hard drive
Robert Milkowski wrote:
> I issued svcadm disable nfs/server
> nfsd is still there with about 1300 threads (down from 2052).
> stack pointer for thread 3002f4bd300: 2a1084b7021
> [ 02a1084b7021 cv_wait+0x40() ]
> 02a1084b70d1 exitlwps+0x11c(0, 20, 4202, 300116ec7e0, 10,
> 3
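To see what the service framework still thinks it owns, and where the
lingering nfsd is stuck, the standard SMF and proc tools are usually enough;
a sketch:

    # Service state plus the processes still bound to its contract.
    svcs -l nfs/server
    svcs -p nfs/server

    # User-level stacks of the lingering nfsd process.
    pstack `pgrep nfsd`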
Daniel Rock wrote:
> Sean Meighan schrieb:
>
>> The box runs less than 20% load. Everything has been working perfectly
>> until two days ago, now it can take 10 minutes to exit from vi. The
>> following truss shows that the 3 line file that is sitting on the ZFS
>> volume (/archives) took almost 1
Dana H. Myers wrote:
> Phil Brown wrote:
>> Pawel Wojcik wrote:
>>> Only SATA drives that operate under SATA framework and SATA HBA
>>> drivers have this option available to them via format -e. That's
>>> because they are treated and controlled by the system as scsi drives.
Phil Brown wrote:
> Pawel Wojcik wrote:
>> Only SATA drives that operate under SATA framework and SATA HBA
>> drivers have this option available to them via format -e. That's
>> because they are treated and controlled by the system as scsi drives.
>> From your e-mail it appears that you are talking
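For drives driven through the SATA framework (and therefore presented as
SCSI targets), the write cache can be toggled from format's expert mode. A
sketch of the interactive sequence; exact menu wording may differ between
releases:

    format -e          # expert mode exposes an extra "cache" menu
      -> select the SATA disk
      -> cache
      -> write_cache
      -> display       # show the current setting
      -> enable        # or disable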
22 matches