Brothers,
I've fixed the issue by reconfiguring the system device tree with:
# devfsadm -Cv
Some new devices were added, and then ZFS works fine.
Thanks for your kind attention.
Rgds,
Simon
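For reference, a minimal sketch of the recovery Simon describes; the pool name "tank" is a placeholder:

    # devfsadm -Cv        # -C cleans up dangling /dev links, -v lists each change
    # zpool status tank   # confirm the pool's devices show ONLINE afterwards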
On 5/10/07, Simon <[EMAIL PROTECTED]> wrote:
Gurus,
My freshly installed Solaris 10 U3 can't boot up normally
On 9-May-07, at 3:44 PM, Bakul Shah wrote:
Robert Milkowski wrote:
Hello Mario,
Wednesday, May 9, 2007, 5:56:18 PM, you wrote:
MG> I've read that it's supposed to go at full speed, i.e. as fast as
MG> possible. I'm doing a disk replace and what zpool reports kind of
MG> surprises me. The
>
> Doug has been doing some performance optimization to
> the sharemgr to allow faster boot up in loading
>
Doug has blogged about his performance numbers here:
http://blogs.sun.com/dougm/entry/recent_performance_improvement_in_zfs
Gurus,
My freshly installed Solaris 10 U3 can't boot up normally on a T2000
server (System Firmware 6.4.4); the OS can only enter
single-user mode, as one critical service fails to start:
# uname -a
SunOS t2000 5.10 Generic_118833-33 sun4v sparc SUNW,Sun-Fire-T200
(it's not patched, just finish
I was thinking of setting up rotating snapshots... probably do
pool/[EMAIL PROTECTED]
Is Tim's method (
http://blogs.sun.com/timf/entry/zfs_automatic_snapshots_0_8 ) the current
preferred plan?
Thanks,
Malachi
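A minimal rotating-snapshot sketch, assuming a daily cron job and a placeholder filesystem name (this is not Tim's tool, just the idea):

    #!/bin/sh
    # Keep one snapshot per weekday by recycling snapshot names.
    FS=pool/home                        # placeholder filesystem
    DAY=`date +%a`                      # e.g. Mon, Tue, ...
    zfs destroy $FS@$DAY 2>/dev/null    # drop last week's snapshot of this name
    zfs snapshot $FS@$DAY               # take today's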
I've since stopped making the second clone, since I realized that
.zfs/snapshot/ still exists after the clone operation has completed.
So my need for the local clone is met by direct access to the snapshot.
However, the poor performance of the destroy is still an issue. It is quite
possible that w
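For context, the direct access mentioned above works because every snapshot is browsable read-only under the filesystem's .zfs directory; the names here are placeholders:

    ls /tank/data/.zfs/snapshot/mysnap/   # read files straight from the snapshot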
Folks,
We're following up with EMC on this. We'll post something on the alias
when we get it.
Please note that EMC would probably never say anything about
OpenSolaris, but they'll talk about Solaris ZFS
Bev.
Torrey McMahon wrote:
Anantha N. Srirama wrote:
For whatever reason EMC notes
Anantha N. Srirama wrote:
For whatever reason, EMC notes (on PowerLink) suggest that ZFS is not supported
on their arrays. If one is going to use a ZFS filesystem on top of an EMC array,
be warned about support issues.
They should have fixed that in their matrices. It should say something
like,
Hello Michael,
Tuesday, May 8, 2007, 9:20:56 PM, you wrote:
>> Probably RAID-Z, as you don't have enough disks for 1+0
>> to be interesting.
>> Paul
MC> How do you configure ZFS RAID 1+0?
MC> Will the next lines do that right?:
MC> zpool create -f zfs_raid1 mirror c0t1d0 c1t1d0
MC> zpool
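For what it's worth, the quoted command creates only a single mirror; a RAID 1+0 layout is a stripe across several mirror vdevs, e.g. (device names are placeholders):

    zpool create tank mirror c0t1d0 c1t1d0 mirror c0t2d0 c1t2d0
    # later, widen the stripe with another mirror:
    zpool add tank mirror c0t3d0 c1t3d0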
> Robert Milkowski wrote:
> > Hello Mario,
> >
> > Wednesday, May 9, 2007, 5:56:18 PM, you wrote:
> >
> > MG> I've read that it's supposed to go at full speed, i.e. as fast as
> > MG> possible. I'm doing a disk replace and what zpool reports kind of
> > MG> surprises me. The resilver goes on at 1
Which one is more performant: copies=2 or a ZFS mirror?
s.
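As a sketch of the two alternatives being compared (pool and filesystem names are placeholders): copies=2 stores two copies of each block but may place both on the same disk, while a mirror guarantees copies on separate disks:

    zfs set copies=2 tank/data               # redundancy within one vdev
    zpool create tank mirror c0t0d0 c1t0d0   # redundancy across disks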
On 5/9/07, Richard Elling <[EMAIL PROTECTED]> wrote:
comment below...
Toby Thain wrote:
>
> On 9-May-07, at 4:45 AM, Andreas Koppenhoefer wrote:
>
>> Hello,
>>
>> The Solaris Internals wiki contains many interesting things about ZFS.
>>
Go ahead with FileBench, and don't forget to put
set zfs:zfs_nocacheflush=1
in /etc/system (if using Nevada).
s.
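A sketch of applying the tunable; the mdb check is a common way to read a live kernel variable, offered here as an assumption rather than an official procedure:

    # append to /etc/system, then reboot:
    set zfs:zfs_nocacheflush = 1
    # verify the live value afterwards:
    echo zfs_nocacheflush/D | mdb -k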
On 5/9/07, cesare VoltZ <[EMAIL PROTECTED]> wrote:
Hi,
I'm planning to test a ZFS solution for our application in our
pre-production data center, and I'm searching for a good filesystem benchm
Hello Richard,
Wednesday, May 9, 2007, 9:10:22 PM, you wrote:
RE> Robert Milkowski wrote:
>> Hello Mario,
>>
>> Wednesday, May 9, 2007, 5:56:18 PM, you wrote:
>>
>> MG> I've read that it's supposed to go at full speed, i.e. as fast as
>> MG> possible. I'm doing a disk replace and what zpool rep
Hello Anantha,
Wednesday, May 9, 2007, 4:45:10 PM, you wrote:
ANS> For whatever reason, EMC notes (on PowerLink) suggest that ZFS is
ANS> not supported on their arrays. If one is going to use a ZFS
ANS> filesystem on top of an EMC array, be warned about support issues.
Nope. For a couple of months
On Wed, 2007-05-09 at 21:09 +0200, Louwtjie Burger wrote:
> > > LUNs are configured as RAID5 across 15 disks.
>
> Won't such a large number of spindles have a negative impact on
> performance (in a single RAID-5 setup) ... a single I/O from the system
> generates lots of back-end I/Os?
yes, a single io
Robert Milkowski wrote:
Hello Mario,
Wednesday, May 9, 2007, 5:56:18 PM, you wrote:
MG> I've read that it's supposed to go at full speed, i.e. as fast as
MG> possible. I'm doing a disk replace and what zpool reports kind of
MG> surprises me. The resilver goes on at 1.6MB/s. Did resilvering get
> LUNs are configured as RAID5 across 15 disks.
Won't such a large number of spindles have a negative impact on
performance (in a single RAID-5 setup) ... a single I/O from the system
generates lots of back-end I/Os?
On Wed, 2007-05-09 at 16:27 +0200, cesare VoltZ wrote:
> Hi,
>
> I'm planning to test a ZFS solution for our application in our
> pre-production data center, and I'm searching for a good filesystem
> benchmark to see which configuration is the best solution.
>
> Servers are Solaris 10 connected to an EMC C
Hello Mario,
Wednesday, May 9, 2007, 5:56:18 PM, you wrote:
MG> I've read that it's supposed to go at full speed, i.e. as fast as
MG> possible. I'm doing a disk replace and what zpool reports kind of
MG> surprises me. The resilver goes on at 1.6MB/s. Did resilvering get
MG> throttled at some poin
Adam Leventhal wrote:
On Wed, May 09, 2007 at 11:52:06AM +0100, Darren J Moffat wrote:
Can you give some more info on what these problems are.
I was thinking of this bug:
6460622 zio_nowait() doesn't live up to its name
Which I was surprised to find was fixed by Eric in build 59.
Adam
It
cesare VoltZ wrote:
Hi,
I'm planning to test a ZFS solution for our application in our
pre-production data center, and I'm searching for a good filesystem
benchmark to see which configuration is the best solution.
Pedantically, your own application is always the best benchmark.
-- richard
On Wed, May 09, 2007 at 11:52:06AM +0100, Darren J Moffat wrote:
> Can you give some more info on what these problems are.
I was thinking of this bug:
6460622 zio_nowait() doesn't live up to its name
Which I was surprised to find was fixed by Eric in build 59.
Adam
--
Adam Leventhal, Solaris
comment below...
Toby Thain wrote:
On 9-May-07, at 4:45 AM, Andreas Koppenhoefer wrote:
Hello,
The Solaris Internals wiki contains many interesting things about ZFS.
But I have no clue about the reasons for this entry:
In Section "ZFS Storage Pools Recommendations - Storage Pools" you can
read
I've read that it's supposed to go at full speed, i.e. as fast as possible. I'm
doing a disk replace and what zpool reports kind of surprises me. The resilver
goes on at 1.6MB/s. Did resilvering get throttled at some point between the
builds, or is my ATA controller having bigger issues?
Thanks
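For watching a resilver like this, something along these lines should show progress and point at a slow device (pool name is a placeholder):

    zpool status tank   # reports "resilver in progress" with % done and time to go
    iostat -xn 5        # watch per-device throughput to spot a struggling controller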
For whatever reason, EMC notes (on PowerLink) suggest that ZFS is not supported
on their arrays. If one is going to use a ZFS filesystem on top of an EMC array,
be warned about support issues.
Have you tried FileBench before?
http://www.solarisinternals.com/wiki/index.php/FileBench
Rayson
On 5/9/07, cesare VoltZ <[EMAIL PROTECTED]> wrote:
In the past I used IOzone (http://www.iozone.org/), but I'm wondering
if there are other tools.
Thanks.
Cesare
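For a starting point, a typical FileBench session from that era looked roughly like this (the varmail profile is just an example, and the binary name may vary by install):

    go_filebench
    filebench> load varmail
    filebench> run 60       # run the workload for 60 seconds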
We have Solaris 10 Update 3 (aka 11/06) running on an E2900 (24 x 96). On this
server we've been running a large SAS environment totalling well over 2TB. We
also take daily snapshots of the filesystems and clone them for use by a local
zone. This setup has been in use for well over 6 months.
Star
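The daily snapshot-plus-clone cycle described above can be sketched as follows (dataset names are placeholders):

    zfs snapshot sas/data@`date +%Y%m%d`        # daily snapshot
    zfs clone sas/data@20070509 sas/zoneclone   # clone for the local zone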
Hi,
I'm planning to test a ZFS solution for our application in our
pre-production data center, and I'm searching for a good filesystem
benchmark to see which configuration is the best solution.
Servers are Solaris 10 connected to an EMC Clariion CX3-20 with two FC
cables in a total high-availability (two H
On 9-May-07, at 4:45 AM, Andreas Koppenhoefer wrote:
Hello,
The Solaris Internals wiki contains many interesting things about ZFS.
But I have no clue about the reasons for this entry:
In Section "ZFS Storage Pools Recommendations - Storage Pools" you
can read:
For all production environments
The drive in my Solaris box that had the OS on it decided to kick the bucket this
evening, a joyous occasion for all, but luckily all my data is stored on a zpool
and the OS is nothing but a shell to serve it up. One quick install later
and I'm back trying to import my pool, and things are not goin
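Recovering the pool after a reinstall is normally just an import; a sketch with a placeholder pool name:

    zpool import          # scan attached devices and list importable pools
    zpool import -f tank  # -f overrides the "pool in use" guard left by the old install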
Adam Leventhal wrote:
The in-kernel version of zlib is the latest version (1.2.3). It's not
surprising that we're spending all of our time in zlib if the machine is
being driven by I/O. There are outstanding problems with compression in
the ZIO pipeline that may contribute to the bursty behavio
On Thu, May 03, 2007 at 11:43:49AM -0500, [EMAIL PROTECTED] wrote:
> I think this may be a premature leap -- it is still undetermined whether we are
> running up against an as-yet-unknown bug in the kernel implementation of gzip
> used for this compression type. From my understanding, the gzip code has
> bee
Hello,
The Solaris Internals wiki contains many interesting things about ZFS.
But I have no clue about the reasons for this entry:
In Section "ZFS Storage Pools Recommendations - Storage Pools" you can read:
For all production environments, set up a redundant ZFS storage pool, such
as a raidz, ra
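The redundant configurations the wiki entry refers to look like this (device names are placeholders):

    zpool create tank raidz c0t0d0 c0t1d0 c0t2d0   # single-parity RAID-Z
    zpool create tank mirror c0t0d0 c1t0d0         # 2-way mirror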