What does
echo "::memstat" | mdb -k
show?
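(For anyone unfamiliar with it, ::memstat prints a page-level breakdown of
physical memory. The layout looks roughly like the mock-up below; the numbers
are placeholders and the exact set of rows varies by release:)

    Page Summary                Pages                MB  %Tot
    ------------     ----------------  ----------------  ----
    Kernel                     262144              2048   13%
    Anon                       524288              4096   25%
    Exec and libs               65536               512    3%
    Page cache                 131072              1024    6%
    Free (cachelist)           131072              1024    6%
    Free (freelist)            983040              7680   47%

    Total                     2097152             16384
    Physical                  2097151             16383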
max
On Aug 2, 2011, at 4:10 PM, Mister Anonyme wrote:
>
> Hi,
>
> We have a host, Solaris 10 10/08 s10s_u6wos_07b on SPARC.
>
> SWAP is on ZFS.
>
> We allocated two swap devices of 64G each, for a total of around 128G.
Hi Ed,
I have been using the Dell R710 for a while. You might try
disabling c-states, as the problem you saw is identical to one I
was seeing (disk i/o stops working, other things are ok). Since
disabling c-states, I haven't seen the problem again.
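One way to do that from inside Solaris, rather than in the BIOS, assuming
your release's power.conf supports the cpu-deep-idle keyword (a sketch, not
verified against your exact update):

    # /etc/power.conf -- keep the CPUs out of deep C-states
    cpu-deep-idle disable

Then re-read the file with pmconfig(1M):

    # pmconfig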
max
On Oct 13, 2010, at 4:56 PM, Edwar
I was looking for a way to do this without downtime... It seems that
this kind of basic relayout operation should be easy to do.
On Mon, Jul 19, 2010 at 12:44 PM, Freddie Cash wrote:
> On Mon, Jul 19, 2010 at 9:06 AM, Max Levine wrote:
>> Is it possible in ZFS to do the following.
>
Is it possible in ZFS to do the following.
I have an 800GB LUN as a single device in a pool, and I want to migrate
that to eight 100GB LUNs. Is it possible to create an 800GB concat out of
the eight devices, mirror that to the original device, and then detach the
original device? It is possible to do this onl
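For the record, zpool attach only mirrors one device to another (there is no
concat vdev to attach), so the usual workaround is a second pool plus
send/receive. A sketch, with made-up pool and device names, on a release that
supports zfs send -R:

    # build the new pool from the eight 100GB LUNs
    zpool create newpool c2t0d0 c2t1d0 c2t2d0 c2t3d0 \
        c2t4d0 c2t5d0 c2t6d0 c2t7d0

    # snapshot everything and replicate it into the new pool
    zfs snapshot -r oldpool@migrate
    zfs send -R oldpool@migrate | zfs recv -Fd newpool

This still needs a brief cutover window for a final incremental send, so it
is not the zero-downtime relayout being asked for.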
Veritas has this feature called fast mirror resync, where they have a
DRL on each side of the mirror, and detaching/re-attaching a mirror
causes only the changed bits to be re-synced. Is anything similar
planned for ZFS?
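Worth noting: ZFS already keeps a dirty time log (DTL) per vdev, so the
offline/online case resilvers only the changed ranges; it is detach that
forgets the device and forces a full resilver on re-attach. Roughly, with
hypothetical pool and device names:

    zpool offline tank c1t2d0    # take one mirror half out
    # ... maintenance window ...
    zpool online tank c1t2d0     # delta resilver driven by the DTL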
tries at proper events. At least, I wish ZFS allowed
us to create the cache files while the pools are not currently imported,
so that I could just have a simple daily job to maintain the cache files
on every node of a cluster automatically.
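For the pools that are imported, something like the following could serve as
that daily job from cron (a sketch; the cachefile path is made up):

    #!/bin/sh
    # refresh a per-pool cachefile for every pool imported on this node
    for pool in `zpool list -H -o name`; do
            zpool set cachefile=/var/cluster/zpcache/$pool.cache $pool
    done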
Thanks.
Max
--
d with the end. If the file is large and does not fail for
many mmap/write calls, you can just truss opens and closes:
truss -topen,close cp -r ...
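For example, capturing the trace in a file while copying a (hypothetical)
tree:

    truss -o /tmp/cp.truss -topen,close cp -r /data/src /data/dst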
max
Quoting Simon Breden <[EMAIL PROTECTED]>:
> OK, I tried replying by email, and got a message that a moderator
> will approv
complains about the LUNs on the OS or the array. And now, I suspect the
symptoms are showing up on the 2nd node of this 3-node cluster.
max
Anyone has the same problem or knows what might be the cause/fix?
Thanks.
Max Holm
; whole_disk=0
> DTL=84
>
> Thank you in advance,
> James C. McPherson
> --
> Senior Kernel Software Engineer, Solaris
> Sun Microsystems
> http://blogs.sun.com/jmcp http://www.jmcp.homeun
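For reference, fields like whole_disk and DTL in that output come from the
vdev label, which can be dumped directly with zdb against a (hypothetical)
device path:

    zdb -l /dev/rdsk/c1t0d0s0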
Hi,
No, I can't offer insight, but I do have some questions
that are not really on topic.
What version of Solaris are you running? Is this
the console output at the time of the panic? When did the
panic code (or mdb) learn about frame recycling?
Or are you using scat to get this output?
thanks
o PT),
with incremental snapshots, and after some host/array failures
on either end of the pair you cannot be sure the two copies of the
archives on the two ZFS pools still contain exactly the same files/contents.
How do you resync them or verify their status efficiently? Thanks.
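One hedged way to do the resync (dataset and snapshot names here are made
up): find the newest snapshot both pools agree on, roll the copy back to it,
and resend the intermediates:

    # on the destination, discard anything after the last common snapshot
    zfs rollback -r copy/archive@common

    # resend every snapshot between @common and the current head
    zfs send -I archive@common archive@latest | zfs recv copy/archive

For verification, comparing the output of zfs list -t snapshot on both sides
is a cheap first check before resorting to a full content comparison.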
-Max
V490/V890 on some SATA drives on 3511 arrays)?
Any more options? Many thanks.
Max