Jorgen Lundman wrote:
> We did ask our vendor, but we were just told that AVS does not support
> x4500.
The officially supported AVS has worked on the X4500 since the X4500 came
out. But, although Jim Dunham and others will tell you otherwise, I
absolutely can *not* recommend using it on this hardware.
On 03 September, 2008 - Aaron Blew sent me these 2,5K bytes:
> On Wed, Sep 3, 2008 at 1:48 PM, Miles Nordin <[EMAIL PROTECTED]> wrote:
>
> > I've never heard of a battery that's used for anything but RAID
> > features. It's an interesting question, if you use the controller in
> > ``JBOD mode''
Brent Jones wrote:
> I did some Googling, but I saw some limitations sharing your ZFS pool
> via NFS while using HAStorage Cluster product as well.
[...]
> If you are using the zettabyte file system (ZFS) as the exported file
> system, you must set the sharenfs property to off.
That's not a limitation so much as a division of labor: the clustering
framework manages the NFS shares itself, so ZFS's own sharenfs property has
to stay off to keep the two from conflicting.
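A minimal sketch of that division of labor, assuming a pool named tank and
an HA-NFS resource named nfs-rs (both names hypothetical; the dfstab path
follows the usual Sun Cluster convention under the resource's Pathprefix):

    # let the cluster, not ZFS, export the filesystem
    zfs set sharenfs=off tank/export
    # the share itself goes into the dfstab file read by the SUNW.nfs
    # resource, named after the resource
    echo "share -F nfs -o rw /global/tank/export" >> \
        /global/nfs/SUNW.nfs/dfstab.nfs-rs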
Thanks for the replies.
I guess I misunderstood the manual:
    zpool replace [-f] pool old_device [new_device]

        Replaces old_device with new_device. This is equivalent to
        attaching new_device, waiting for it to resilver, and then
        detaching old_device. The size of new_device must be greater
        than or equal to the minimum size of all the devices in a
        mirror or raidz configuration.
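For what it's worth, the invocation itself is short (pool and device names
below are examples only):

    # replace the old disk with a new one of at least the same size;
    # ZFS attaches the new device, resilvers, then detaches the old one
    zpool replace tank c1t2d0 c1t3d0
    # watch the resilver progress until it completes
    zpool status tank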
Hi all,
ZFS sends a message to FMA in case of disk failure.
The detail of the message references a vdev by a hexadecimal number, as in:

    # fmdump -V -u 50ea07a0-2cd9-6bfb-ff9e-e219740052d5
    TIME                 UUID                                 SUNW-MSG-ID
    Feb 18 11:07:29.5195 50ea07a0-2cd9-6bfb-ff9e-e219740052d5
You should be able to do 'zpool status -x' to find out what vdev is
broken. A useful extension to the DE would be to add a label to the
suspect corresponding to /.
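In other words, something like this (the pool name in the second command is
an example):

    # report status only for pools that are not healthy
    zpool status -x
    # then drill into the affected pool for per-vdev detail
    zpool status -v tank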
- Eric
On Thu, Sep 04, 2008 at 06:34:33PM +0200, Alain Chéreau wrote:
> Hi all,
>
> ZFS sends a message to FMA in case of disk failure
Alain,
I think you want to use fmdump -eV to display the extended device
information. See the output below.
Cindy
    class = ereport.fs.zfs.checksum
    ena = 0x3242b9cdeac00401
    detector = (embedded nvlist)
        nvlist version: 0
        version = 0x0
        scheme = "zfs"
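If you then need to map the hexadecimal vdev number back to a physical
disk, one way (a sketch; the device path is an example) is to read the
on-disk labels:

    # print the ZFS label of a suspect disk; its 'guid' field should
    # match the vdev value shown in the ereport
    zdb -l /dev/rdsk/c1t2d0s0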
Hi,
My problem is that one of my customers wants to move his Exanet systems to
ZFS, but Sun told him that there is a real limitation with ZFS.
Customer env:
Incoming data
FS size: 50 TB, with at least 100 thousand files written per day, around 20
million files.
Consulting data
FS size: 200 TB with
The issue is that Exanet isn't really holding "300 million files" in one
dataset like you're talking about doing with ZFS; it's a clustered approach
with a single namespace.
The reality is you can do what the customer wants to do, but you'd be
leveraging something like pNFS, which I don't think is quite
production-ready yet.
Hello all,
Are there any plans (or does it already exist) for a way to get transfer
statistics out of send/receive backups? I mean, how much was transferred,
the elapsed time, and/or bytes/sec?
And the last question: I did see in many threads the question about the
consistency of send/receive through ssh, but no
Hello all,
I'm used to mirrors on Solaris 10, where the scrub process for 500 GB took
about two hours... yet in tests with Solaris Express (snv_79a), terabytes
finish in minutes. I searched for release changes in the scrub process, and
could not find anything about enhancements of this magnitude
Marcelo Leal wrote:
> Hello all,
> Are there any plans (or does it already exist) for a way to get transfer
> statistics out of send/receive backups? I mean, how much was transferred,
> the elapsed time, and/or bytes/sec?
>
I'm not aware of any plans; you should file an RFE.
> And the last question: I did see in many threads the question about the
> consistency of send/receive through ssh, but no
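In the meantime, a common workaround is to drop a byte counter into the
pipeline. A sketch, assuming a snapshot tank/fs@snap and a receiving host
named backuphost (both hypothetical); pv is a third-party tool, while dd at
least reports record counts when it finishes:

    # running byte count and throughput via pv, if it is installed
    zfs send tank/fs@snap | pv | ssh backuphost zfs receive backup/fs
    # portable fallback: dd prints records in/out on completion
    zfs send tank/fs@snap | dd bs=128k | ssh backuphost zfs receive backup/fs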
On Wed, 3 Sep 2008, Richard Elling wrote:
> Source packages are usually in a Solaris distribution (overloaded term,
> but look at something like Solaris 10 5/08) and typically end in "S". So
> look in the Product directory for something like SUNWsambaS. Of course,
SUNWsmbaS as it turns out... You
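If it helps, the lookup comes down to something like this (the media path
is an example; adjust it to your install image):

    # source packages end in "S"; list the candidates on the media
    ls /cdrom/sol_10_508_x86/Solaris_10/Product | grep 'S$'
    # then install the one you want
    pkgadd -d /cdrom/sol_10_508_x86/Solaris_10/Product SUNWsmbaS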
Hi Lori,
is ZFS boot still planned for S10 update 6?
Thanks,
Steve
If you're using a mirror, and each disk manages 50 MB/second (unlikely if it's
a single disk doing a lot of seeks, but you might do better using a hardware
array for each half of the mirror), simple math says that scanning 1 TB would
take roughly 20,000 seconds, or about five and a half hours. So your speed
under Solaris 10 sounds about right.
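Spelling the arithmetic out (decimal units, 1 TB = 1,000,000 MB):

    # seconds to scan 1 TB at 50 MB/s, then converted to hours
    echo '1000000 / 50' | bc            # 20000 seconds
    echo 'scale=1; 20000 / 3600' | bc   # 5.5 hours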
I made a bad judgment call and now my raidz pool is corrupted. I have a raidz
pool running on OpenSolaris b85. I wanted to try out FreeNAS 0.7 and tried to
add my pool to FreeNAS.
After adding the ZFS disks, vdev, and pool, I decided to back out and went
back to OpenSolaris. Now my raidz pool will
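A cautious first step, whatever the exact failure mode, is to see whether
the pool is still importable; a sketch, with 'tank' standing in for the
real pool name:

    # scan attached disks for pools without importing anything yet
    zpool import
    # if the pool shows up, import it, forcing if it is still marked
    # as in use by the other (FreeNAS) system
    zpool import -f tank
    # if the devices are not found, point the scan at the device directory
    zpool import -d /dev/dsk -f tank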