Hello,
I think I have gained "sufficient fool" status for testing the
fool-proof-ness of zfs. I have a cluster of T1000 servers running
Solaris 10 and two x4100's running an OpenSolaris dist (Nexenta) which
is at b68. Each T1000 hosts several zones each of which has its own
zpool associate
Neil Perrin wrote:
>
>
> Tim Spriggs wrote:
>> Hello,
>>
>> I think I have gained "sufficient fool" status for testing the
>> fool-proof-ness of zfs. I have a cluster of T1000 servers running
>> Solaris 10 and two x4100's running an Open
I'm far from an expert, but my understanding is that the ZIL is spread
across the whole pool by default, so in theory the one slow drive could
drag everything down. I don't know what it would mean in this respect to
keep the PATA drive as a hot spare though.
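As a rough sketch (assuming a build recent enough to have separate intent
log support, and purely hypothetical pool/device names), the slower drive
could instead be dedicated to the ZIL rather than kept as a spare:

   zpool add tank log c2d0    # give the PATA drive to the pool as a slog
   zpool status tank          # it should then show up under a "logs" section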
-Tim
Christopher Gibbs wrote:
> Anyone?
>
zfs get creation pool|filesystem|snapshot
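For example, with a purely hypothetical pool and snapshot name:

   # zfs get creation tank/home@monday
   NAME              PROPERTY  VALUE                  SOURCE
   tank/home@monday  creation  Fri Sep 14 10:32 2007  -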
Poulos, Joe wrote:
>
> Hello,
>
>
>
> Is there a way to find out the timestamp of a specific
> snapshot? Currently, I have a system with 5 snapshots, and would like
> to know when each of them was created. Thanks, Joe
>
> Any way to determine which snapshot was created
> earlier?
>
> This would be helpful to know in order to predict the effect of a
> rollback or promote command.
>
> Fred Oliver
>
>
> Tim Spriggs wrote:
>
>> zfs get creation pool|filesystem|snapshot
>>
>>
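To compare several snapshots at once, something like this should also work
(a sketch only, with a hypothetical dataset name):

   zfs list -r -t snapshot -o name,creation -s creation tank/home

Sorting on the creation property lists the snapshots oldest first, which
answers the "which came earlier" question directly.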
Andy Lubel wrote:
> On 9/20/07 3:49 PM, "Paul B. Henson" <[EMAIL PROTECTED]> wrote:
>
>
>> On Thu, 20 Sep 2007, Richard Elling wrote:
>>
>>
>> That would also be my preference, but if I were forced to use hardware
>> RAID, the additional loss of storage for ZFS redundancy would be painful.
Paul B. Henson wrote:
> On Thu, 20 Sep 2007, Tim Spriggs wrote:
>
>
>> We are in a similar situation. It turns out that buying two thumpers is
>> cheaper per TB than buying more shelves for an IBM N7600. I don't know
>> about power/cooling considerations yet t
Paul B. Henson wrote:
> Is it comparable storage though? Does it use SATA drives similar to the
> x4500, or more expensive/higher performance FC drives? Is it one of the
> models that allows connecting dual clustered heads and failing over the
> storage between them?
>
> I agree the x4500 is a swee
Gino wrote:
>> The x4500 is very sweet and the only thing stopping
>> us from buying two
>> instead of another shelf is the fact that we have
>> lost pools on Sol10u3
>> servers and there is no easy way of making two pools
>> redundant (ie the
>> complexity of clustering.) Simply sending increme
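The incremental-send approach being alluded to might look something like
this (only a sketch, with hypothetical pool, snapshot and host names):

   zfs snapshot tank/fs@tuesday
   zfs send -i tank/fs@monday tank/fs@tuesday | ssh thumper2 zfs recv tank/fs

i.e. periodic snapshots replicated to the second box, rather than any real
clustering.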
Paul B. Henson wrote:
> On Thu, 20 Sep 2007, Tim Spriggs wrote:
>
>
>> The x4500 is very sweet and the only thing stopping us from buying two
>> instead of another shelf is the fact that we have lost pools on Sol10u3
>> servers and there is no easy way of making
eric kustarz wrote:
>
> On Sep 21, 2007, at 3:50 PM, Tim Spriggs wrote:
>
>> m2# zpool create test mirror iscsi_lun1 iscsi_lun2
>> m2# zpool export test
>> m1# zpool import -f test
>> m1# reboot
>> m2# reboot
>
> Since I haven't actually looke
James C. McPherson wrote:
> Gregory Shaw wrote:
>
>> Hi. I'd like to request a feature be added to zfs. Currently, on
>> SAN attached disk, zpool shows up with a big WWN for the disk. If
>> ZFS (or the zpool command, in particular) had a text field for
>> arbitrary information, it woul
zdb?
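(As a rough sketch of what I mean, with a hypothetical pool name: zdb can
dump the cached pool configuration, which at least ties each vdev's WWN
path back to its GUID.)

   zdb -C tank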
Damon Atkins wrote:
> ZFS should allow 31+NULL chars for a comment against each disk.
> This would work well with the host name string (which I assume is
> max_hostname, 255+NULL).
> If a disk fails it should report c6t4908029d0 failed "comment from
> disk"; it should also remember the comment unt
Nicolas Williams wrote:
> On Thu, Oct 04, 2007 at 10:26:24PM -0700, Jonathan Loran wrote:
>
>> I can envision a highly optimized, pipelined system, where writes and
>> reads pass through checksum, compression, encryption ASICs, that also
>> locate data properly on disk. ...
>>
>
> I've a
Would the bootloader have issues here? On x86 I would imagine that you
would have to reinstall grub; would a similar thing need to be done on SPARC?
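For reference, the commands involved would presumably be along these lines
(device names hypothetical, and slice numbers depend on your layout):

   # x86, after the new disk is in place:
   installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t0d0s0
   # SPARC equivalent for a ZFS root:
   installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t0d0s0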
Ivan Wang wrote:
>>> Erik Trimble wrote:
>>> After both drives are replaced, you will automatically see the
>>> additional space.
>>>
>> I be
Yeah, that would have saved me several weeks ago.
Samuel Borgman wrote:
> Hi,
>
> Having my 700 GB single-disk ZFS pool crash on me created an urgent need
> for a recovery tool.
>
> So I spent the weekend creating a tool that lets you list directories and
> copy files from any pool on a one disk ZFS fil
Jonathan Loran wrote:
> Richard Elling wrote:
>
>> Jonathan Loran wrote:
>>
> ...
>
>
>> Do not assume that a compressed file system will send compressed.
>> IIRC, it
>> does not.
>>
> Let's say it were possible to detect the remote end's compression
> support; couldn't we send it
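A minimal sketch of one workaround, with hypothetical dataset names: the
stream itself goes over the wire uncompressed, but if the receiving side's
parent dataset has compression turned on, the blocks get recompressed as
they are written:

   zfs set compression=on otherpool/backups
   zfs send tank/fs@snap | ssh otherhost zfs recv otherpool/backups/fs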
Joe Little wrote:
> On 11/2/07, MC <[EMAIL PROTECTED]> wrote:
>
>>> I consider myself an early adopter of ZFS and pushed
>>> it hard on this
>>> list and in real life with regards to iSCSI
>>> integration, zfs
>>> performance issues with latency thereof, and how
>>> best to use it with
>>> NFS.
Chill. It's a filesystem. If you don't like it, don't use it.
Sincere Regards,
-Tim
can you guess? wrote:
>> can you guess? wrote:
>>
>
> ...
>
>
>> Most of the balance of your post isn't addressed in
>> any detail because it carefully avoids the
>> fundamental issues tha
In your previous and current responses, you seem quite convinced of other
people's misconceptions. Given that, and the first paragraph of your
response below, I think you can figure out why nobody on this list will
reply to you again.
can you guess? wrote:
>> No, you aren't cool, and no it isn't a
Cyril Plisko wrote:
> On Nov 12, 2007 5:51 PM, Neelakanth Nadgir <[EMAIL PROTECTED]> wrote:
>
>> You could always replace this device by another one of same, or
>> bigger size using zpool replace.
>>
>
> Indeed. Provided that I always have an unused device of same or
> bigger size, which i
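For reference, the replace being discussed looks like this, with
hypothetical pool and device names:

   zpool replace tank c1t2d0 c1t3d0
   zpool status tank        # watch the resilver progress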
Hi Boris,
When you create a Solaris2 partition under x86, Solaris "sees" the
partition as a disk that you can cut into slices. You can find a list of
the disks available via the "format" command.
A slice is much like a partition, but the two are not quite the same thing;
that's most or all you really need to know
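As a quick sketch, with hypothetical disk/slice names: run format, pick the
disk, lay out the fdisk partition and slices from its menus, then hand a
slice (or the whole disk) to ZFS:

   format                      # select the disk, use the fdisk/partition menus
   zpool create tank c1d0s4    # build a pool on a slice,
   zpool create tank c1d0      # or give ZFS the whole disk instead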
Rich Teer wrote:
> I should know better than to reply to a troll, but I can't let this
> personal attack stand. I know Al, and I can tell you for a fact that
> he is *far* from "technically incompetent".
>
> Judging from the length of your diatribe (which I didn't bother reading),
> you seem to
can you guess? wrote:
> he isn't being
>
>> paid by NetApp.. think bigger
>>
>
> O frabjous day! Yet *another* self-professed psychic, but one whose internal
> voices offer different counsel.
>
> While I don't have to be psychic myself to know that they're *all* wrong
> (that's an adva
Yet another prime example.
can you guess? wrote:
>> Please see below for an example.
>>
>
> Ah - I see that you'd rather be part of the problem than part of the
> solution. Perhaps you're also one of those knuckle-draggers who believes
> that a woman with the temerity to leave her home af
Look, it's obvious this guy talks about himself as if he is the person
he is addressing. Please stop taking this personally and feeding the troll.
can you guess? wrote:
>> Bill - I don't think there's a point in continuing
>> that discussion.
>>
>
> I think you've finally found something u
Mike Gerdts wrote:
> On Jan 29, 2008 5:55 PM, Andrew Gabriel <[EMAIL PROTECTED]> wrote:
>
>> Having attached new bigger disks to a mirror, and detached all the older
>> smaller disks, how to I tell ZFS to expand the size of the mirror to
>> match that of the bigger disks? I had a look through th
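A sketch of the usual answers, with a hypothetical pool name (which one
applies depends on the build): on older bits the extra space shows up after
an export/import (or a reboot), and later builds, if you have them, add an
explicit property for it:

   zpool export tank
   zpool import tank
   zpool list tank                # capacity should now reflect the larger disks
   # on later builds only:
   zpool set autoexpand=on tank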
Does anyone know of a tool that can look over a dataset and give
duplication statistics? I'm not looking for something incredibly
efficient, but I'd like to know how much deduplication would actually
benefit our dataset: HiRISE has a large set of spacecraft data (images)
that could potentially have large amounts of duplicate data.
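As a crude sketch in the meantime (whole-file duplicates only, a
hypothetical path, and slow since it hashes every file):

   find /hirise/data -type f -exec digest -a md5 {} \; | sort | uniq -c | sort -rn | head

Any count greater than 1 is file content that occurs more than once;
block-level duplication obviously won't show up this way.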
Darren J Moffat wrote:
> Glaser, David wrote:
>
>> Hi all,
>>
>> I'm a little (ok, a lot) confused on the whole zfs send/receive commands.
>>
>> I've seen mention of using zfs send between two different machines,
>> but no good howto in order to make it work.
>
> zfs(1) man page, Examples
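The shape of it, as a sketch with hypothetical pool, filesystem and host
names, is just a pipeline with ssh in the middle:

   zfs snapshot mypool/fs@today
   zfs send mypool/fs@today | ssh otherhost zfs recv otherpool/fs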
Will Murnane wrote:
> On Thu, Jul 10, 2008 at 12:43, Glaser, David <[EMAIL PROTECTED]> wrote:
>
>> I guess what I was wondering was whether there was a direct method rather
>> than the overhead of ssh.
>>
> On receiving machine:
> nc -l 12345 | zfs recv mypool/[EMAIL PROTECTED]
> and on sending mac
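The sending side of that pipeline would be roughly the following, again
with hypothetical names:

   zfs send mypool/fs@snap | nc receivinghost 12345

nc avoids the ssh encryption overhead, but it also gives you no security at
all, so it's best kept to a trusted network.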