Hello,
I have an 8-port SATA controller and I don't want to spend the money for 8 x
750 GB SATA disks right now. I'm thinking about an optimal way of building a
growing raidz pool without losing any data.
As far as I know there are two ways to achieve this:
- Adding 750 GB disks from time to time
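With the first approach a raidz pool grows by whole vdevs rather than by single disks. A rough sketch, with made-up pool and device names:

  # start with the first batch of disks as one raidz vdev
  zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0

  # later, when the next batch of 750 GB disks arrives, add them as a
  # second raidz vdev; the pool grows by that vdev's usable capacity
  zpool add tank raidz c0t4d0 c0t5d0 c0t6d0 c0t7d0

The catch is that each batch has to be large enough to form its own redundant vdev; an existing raidz can't be widened one disk at a time.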
You could estimate how long it will take for ZFS to get the feature you need,
and then buy enough space so that you don't run out before then.
Alternatively, Linux mdadm DOES support growing a RAID5 array by adding devices,
so you could use that instead.
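If the mdadm route is tempting, the reshape is roughly the following (the md device, disk name and filesystem are only assumptions):

  # add the new disk to the array, then reshape the RAID5 onto it
  mdadm --add /dev/md0 /dev/sde1
  mdadm --grow /dev/md0 --raid-devices=5

  # once the reshape finishes, grow the filesystem (ext3 assumed here)
  resize2fs /dev/md0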
On 5-May-07, at 2:07 AM, MC wrote:
That's a lot of talking without an answer :)
internal EIDE 320GB (boot drive), internal
250, 200 and 160 GB drives, and an external USB 2.0 600 GB drive.
So, what's the best zfs configuration in this situation?
RAIDZ uses disk space like RAID5. So the
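To make the space accounting concrete: every member of a raidz is limited to the size of the smallest one, so a sketch with those drives (hypothetical device names) would be:

  # raidz1 across the 250, 200 and 160 GB drives: each member counts
  # as 160 GB, one disk's worth goes to parity, so usable space is
  # roughly 2 x 160 GB = 320 GB
  zpool create tank raidz c1d0 c2d0 c3d0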
I spent all day yesterday evacuating my data from one of the Windows disks, so that
I can add it to the pool. Using mount-ntfs, it's a pain due to its slowness.
But once I finished, I thought "Cool, let's do it". So I added the disk using
the zero slice notation (c0d0s0), as suggested for performance reasons
At 04:41 AM 5/5/2007, Christian Rost wrote:
>My Question now:
>Is the second way reasonable or do i missing some things? Anything else to
>consider?
Pardon me for jumping into a group I just joined, but I sense you are
asking sort of a "philosophy of buying" question, and I have a different
one
On Sat, May 05, 2007 at 02:41:28AM -0700, Christian Rost wrote:
>
> - Buying "cheap" 8x250 GB SATA disks at first and replacing them from time to
> time with 750 GB or bigger disks. Disadvantage: in the end I've bought
> 8x250 GB + 8x750 GB hard disks.
Look at it this way. The amount you spend
Harold Ancell wrote:
At 04:41 AM 5/5/2007, Christian Rost wrote:
My Question now:
Is the second way reasonable or am I missing something? Anything else to consider?
Mirroring is the simplest way to expand in size and performance.
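For example, something along these lines (pool and device names are just placeholders):

  # turn an existing single-disk vdev into a two-way mirror
  zpool attach tank c0d0 c1d0

  # or grow the pool by adding a whole new mirrored pair
  zpool add tank mirror c2d0 c3d0

Replacing one side of a mirror with a bigger disk, letting it resilver, and then replacing the other side is also how a mirror grows in place.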
Pardon me for jumping into a group I just joined, but I sense
On May 5, 2007, at 09:34, Mario Goebbels wrote:
I spent all day yesterday evacuating my data from one of the Windows
disks, so that I can add it to the pool. Using mount-ntfs, it's a
pain due to its slowness. But once I finished, I thought "Cool,
let's do it". So I added the disk using the zero slice notation (c0d0s0)
Why did you choose to deploy the database on ZFS?
- On-disk consistency was big - one of our datacenters was having power problems
and the systems would sometimes drop live. I had a couple of instances of data
errors with VxVM/VxFS and we had to restore from tape.
- zfs snapshot saves us many hours
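A typical pattern for that, with an example dataset name:

  # take a cheap point-in-time snapshot before risky maintenance
  zfs snapshot tank/db@before-upgrade

  # if things go wrong, roll the dataset back in seconds instead of
  # restoring from tape
  zfs rollback tank/db@before-upgrade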
> What's the maximum filesystem size you've used in a production environment? How
> did it work out?
I have a 26 TB pool that will be upgraded to 39 TB in the next couple of months.
This is the backend for backup images. The ease of managing this sort of
expanding storage is a little bit
>
> > Does ZFS recover all file system transactions which it returned with success
> > since the last commit of TxG, which implies that the ZIL must flush log records
> > for each successful file system transaction before it returns to the caller
> > so that it can replay the filesystem transaction?
Lee Fyock wrote:
least this year. I'd like to favor available space over performance, and
be able to swap out a failed drive without losing any data.
Lee Fyock later wrote:
In the meantime, I'd like to hang out with the system and drives I
have. As "mike" said, my understanding is that zfs wo
Mario Goebbels wrote:
do it". So I added the disk using the zero slice notation (c0d0s0),
as suggested for performance reasons. I checked the pool status and
noticed, however, that the pool size didn't increase.
I believe you got this wrong. You should have given ZFS the whole disk -
c0d0 and not a slice.
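In other words, roughly this (the pool name is just an example):

  # give ZFS the whole disk; it will label it itself and can safely
  # enable the disk's write cache
  zpool add tank c0d0

rather than adding the c0d0s0 slice.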
Is this possible?
I want an external case with 4 HDs in it, each having an individual eSATA cable. I
plug in this case only when needed (each HD plugs into my PC separately), and
do something like ">import ZFS pool" and use my ZFS RAID. When done, I unplug
it with ">export ZFS pool" or something similar
What brand is your 8-port SATA controller? I want a SATA controller too, but I
heard that Solaris is picky about the model. Not all controllers work. Does
yours?
>No one has said that you can't increase the size of a zpool. What can't
>be increased is the size of a RAID-Z vdev (except by increasing the size
>of all of the components of the RAID-Z). You have created additional
>RAID-Z vdevs and added them to the pool.
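The "increasing the size of all of the components" route mentioned there would look something like this (device names are placeholders):

  # swap one 250 GB member for a 750 GB disk and wait for the
  # resilver to complete before touching the next one
  zpool replace tank c0t0d0 c1t0d0
  zpool status tank

  # repeat for the remaining members; once every member is larger the
  # vdev can use the extra capacity (depending on the ZFS version this
  # may need an export/import, or the autoexpand property on newer builds)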
If the following is nonsense, please bear with me
kyusun Chang wrote:
Does ZFS recover all file system transactions which it returned with success
since the last commit of TxG, which implies that the ZIL must flush log records
for each successful file system transaction before it returns to the caller so
that it can replay the filesystem transaction