Jorgen Lundman wrote:
> If we were interested in finding a method to replicate data to a 2nd
> x4500, what other options are there for us?
If you already have an X4500, I think the best option for you is a cron
job with incremental 'zfs send'. Or rsync.
--
Ralf Ramge
Senior Solaris Administrator
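As a rough sketch, such a cron-driven replication could look like the
following; the pool, dataset, and host names here are placeholders, not
anything from this thread:

    # take a fresh snapshot, then send only the delta since the last one
    zfs snapshot tank/data@2008-09-17
    zfs send -i tank/data@2008-09-16 tank/data@2008-09-17 | \
        ssh second-x4500 zfs receive -F tank/data
    # once the receive succeeds, the older snapshot can be rotated out
    zfs destroy tank/data@2008-09-16

The -F on the receiving side rolls the target dataset back to the last
common snapshot before applying the incremental stream.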
Just one more thing on this:
Run with a 64-bit processor. Don't even think of using a 32-bit one -
there are known issues with ZFS not quite properly using 32-bit-only
structures. That is, ZFS is really 64-bit clean, but not 32-bit clean.
--
Erik Trimble
Java System Support
Mailstop: usca2
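As a quick sanity check (a sketch, not part of the original mail),
isainfo will tell you whether the running kernel is actually 64-bit:

    # prints e.g. "64-bit amd64 kernel modules" on a 64-bit kernel
    isainfo -kv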
Sorry, I popped up to Hokkaido for a holiday. I want to thank you all
for the replies.
I mentioned AVS as I thought it to be the only product close to
enabling us to do a (makeshift) fail-over setup.
We have 5-6 ZFS filesystems, and 5-6 zvols with UFS (for quotas). To do
"zfs send" snapshot
On Tue, Sep 16, 2008 at 2:28 PM, Peter Tribble <[EMAIL PROTECTED]> wrote:
> For what it's worth, we put all the disks on our thumpers into a single pool -
> mostly it's 5x 8+1 raidz1 vdevs with a hot spare and 2 drives for the OS, and
> I would happily go much bigger.
so you have 9-drive raidz1 (8 d
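That layout would be created along these lines; the device names are
invented for illustration (an X4500 has 48 disks, so 45 data/parity
disks plus a spare leaves two for the OS):

    # five 9-disk raidz1 vdevs (8 data + 1 parity each) plus a hot spare
    zpool create tank \
        raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0 c1t0d0 \
        raidz c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 c2t0d0 c2t1d0 \
        raidz c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0 c3t0d0 c3t1d0 c3t2d0 \
        raidz c3t3d0 c3t4d0 c3t5d0 c3t6d0 c3t7d0 c4t0d0 c4t1d0 c4t2d0 c4t3d0 \
        raidz c4t4d0 c4t5d0 c4t6d0 c4t7d0 c5t0d0 c5t1d0 c5t2d0 c5t3d0 c5t4d0 \
        spare c5t5d0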
> "jd" == Jim Dunham <[EMAIL PROTECTED]> writes:
jd> If at the time the SNDR replica is deleted the set was
jd> actively replicating, along with ZFS actively writing to the
jd> ZFS storage pool, I/O consistency will be lost, leaving the ZFS
jd> storage pool in an indeterministic st
I've recently upgraded my x4500 to Nevada build 97, and am having problems with
the iscsi target.
Background: this box is used to serve NFS underlying a VMware ESX environment
(zfs filesystem-type datasets) and presents iSCSI targets (zfs zvol datasets)
for a Windows host and to act as zoneroot
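For reference, publishing a zvol as an iSCSI target on a Nevada build of
that era was usually done through the shareiscsi property; the names and
size below are placeholders:

    # create a zvol and expose it as an iSCSI target
    zfs create -V 100g tank/winvol
    zfs set shareiscsi=on tank/winvol
    # confirm the target exists
    iscsitadm list target -v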
2008/9/15 gm_sjo:
> 2008/9/15 Ben Rockwood:
>> On Thumpers I've created single pools of 44 disks, in 11 disk RAIDZ2's.
>> I've come to regret this. I recommend keeping pools reasonably sized
>> and to keep stripes thinner than this.
>
> Could you clarify why you came to regret it? I was intending
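For concreteness, one of the 11-wide raidz2 stripes being discussed
would be created like this (device names invented; the pool described
had four such vdevs, appended as further raidz2 groups on the same
command line):

    # a single 11-disk raidz2 vdev (9 data + 2 parity)
    zpool create bigtank raidz2 \
        c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 \
        c0t6d0 c0t7d0 c1t0d0 c1t1d0 c1t2d0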
> "s" == Solaris <[EMAIL PROTECTED]> writes:
s> Point being that even if you can't run OpenSolaris due to
s> support issues, you may still be able to use OpenSolaris to
s> help resolve ZFS issues that you might run into in Solaris 10.
glad ZFS is improving, but this sentence i
[EMAIL PROTECTED] wrote on 09/15/2008 11:32:15 PM:
> Brandon High wrote:
> > On Fri, Sep 12, 2008 at 11:49 AM, Dale Ghent <[EMAIL PROTECTED]> wrote:
> >
> >> Did I detect a (well-done) metaphor for shared ZFS?
> >>
> >
> > Probably not. It looks like a deduplication / MAID solution.
> >
>
> Yeah
Hello... Since there has been much discussion about zpool import failures
resulting in loss of an entire pool, I thought I would illustrate a scenario
I just went through to recover a faulted pool that wouldn't import under
Solaris 10 U5. While this is a simple scenario, and the data was not
terr
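The usual starting point for that kind of recovery looks like the
following; the pool name is a placeholder, and the poster's actual steps
are cut off above:

    # list pools that are visible but not yet imported
    zpool import
    # force-import a pool that was not cleanly exported
    zpool import -f tank
    # then check the pool over
    zpool scrub tank
    zpool status -v tank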
[EMAIL PROTECTED] wrote on 09/16/2008 03:10:52 AM:
> Marion Hakanson wrote:
> > [EMAIL PROTECTED] said:
> >
> >> greenBytes has a very well produced teaser commercial on their site.
> >> http://www.green-bytes.com
> >> Actually, I think it is one of the better commercials done by
> tech compan