Ok the "logfix" program compiled for svn111 does run, and lets me change the HDD
32GB slog, with the new SSD (~29GB) slog, comes up as faulty, but I can replace
it with itself, and everything is OK. I can attach the second SSD without issues.
Assuming that it doesn't try to write the full 32
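For the archive, the sequence boils down to something like this (pool and
device names here are placeholders, not my real ones):

# ./logfix ...                       # arguments omitted; see the logfix thread
# zpool import tank
# zpool status tank                  # the new SSD slog shows up as FAULTED
# zpool replace tank c3t0d0 c3t0d0   # replace the slog with itself -> ONLINE
# zpool attach tank c3t0d0 c3t1d0    # attach the second SSD as a slog mirror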
> "re" == Richard Elling writes:
re> although a spec might say that hot-plugging works, that
re> doesn't mean the implementers support it.
hotplug means you can plug in a device after boot and use it. That's
not the same thing as being able to unplug a device after boot.
Yes, both
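In practice that means an explicit cfgadm step on Solaris after plugging a
disk in, e.g. (the port name is just an example):

# cfgadm -al                      # find the newly connected port
# cfgadm -c configure sata1/3     # bring the disk into the configured state

Clean unplugging would need the reverse (cfgadm -c unconfigure sata1/3), and
a driver that actually supports it.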
Mike Gerdts wrote:
Is there still any interest in this? I've done a bit of hacking (then
searched for this thread - I picked -P instead of -c)...
$ zfs get -P compression,dedup /var
NAME                PROPERTY     VALUE  SOURCE
rpool/ROOT/zfstest  compression  on     inherited
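For comparison, the two-step way of getting the same answer today, with the
dataset name taken from the example above:

$ df -h /var | awk 'NR == 2 {print $1}'
rpool/ROOT/zfstest
$ zfs get compression,dedup rpool/ROOT/zfstest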
Hi all,
on a x4500 with a relatively well patched Sol10u8
# uname -a
SunOS s13 5.10 Generic_141445-09 i86pc i386 i86pc
I've started a scrub after about 2 weeks of operation and have a lot of
checksum errors:
s13:~# zpool status
pool: atlashome
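The triage I plan next is the usual one, roughly:

s13:~# zpool status -v atlashome   # -v lists files hit by unrecoverable errors
s13:~# zpool clear atlashome       # reset the error counters
s13:~# zpool scrub atlashome       # scrub again and see if the errors return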
I installed Solaris 10 x86 on an HP DL 360G5 with an HP Smart Array P400i
controller. Two mirrored RAID volumes were created, each with its own
spare.
I installed Solaris onto a ZFS partition during setup onto one of the
mirrored volumes. Used the second mirrored volume to create another ZFS
po
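Creating that second pool was nothing special, roughly (the device name is a
stand-in):

# zpool create datapool c0t1d0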
Hello,
I am new to this list, but I have a big problem:
We have a Sun Fire V440 with a SCSI RAID system connected. I can see all the
devices and partitions.
After a failure in the UPS system, the zpool is not accessible anymore.
The zpool is a normal stripe over 4 partitions.
First
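In case it matters, the standard first moves would be something like this
(the pool name is a stand-in):

# zpool import            # does the pool show up at all?
# zpool import -f tank    # force the import if it was never cleanly exported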
Hi,
we have a new fileserver running on X4275 hardware with Solaris 10U8.
On this fileserver we created one test dir with a quota and mounted it
on another Solaris 10 system. There the quota command did not show the
used quota. Does this feature only work with OpenSolaris, or is it
intended to wo
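On the server itself the numbers do show up, e.g. (user and dataset names
are just examples):

server# zfs get userquota@wb,userused@wb tank/test

It is only the quota command on the NFS client that stays silent.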
On 11/25/09 22:19, Mike Gerdts wrote:
Is there still any interest in this? I've done a bit of hacking (then
searched for this thread - I picked -P instead of -c)...
$ zfs get -P compression,dedup /var
NAME                PROPERTY     VALUE  SOURCE
rpool/ROOT/zfstest  compression  on
On Nov 26, 2009, at 12:33 AM, Miles Nordin wrote:
"re" == Richard Elling writes:
re> although a spec might say that hot-plugging works, that
re> doesn't mean the implementers support it.
hotplug means you can plug in a device after boot and use it. That's
not the same thing as being a
On Nov 26, 2009, at 2:35 AM, Carsten Aulbert wrote:
Hi all,
on a x4500 with a relatively well patched Sol10u8
# uname -a
SunOS s13 5.10 Generic_141445-09 i86pc i386 i86pc
I've started a scrub after about 2 weeks of operation and have a lot
of
checksum errors:
s13:~# zpool status
pool: a
> Hi all,
>
> on a x4500 with a relatively well patched Sol10u8
>
> # uname -a
> SunOS s13 5.10 Generic_141445-09 i86pc i386 i86pc
>
> I've started a scrub after about 2 weeks of operation
> and have a lot of
> checksum errors:
>
> s13:~# zpool status
>
On 26 November, 2009 - Willi Burmeister sent me these 1,7K bytes:
> Hi,
>
> we have a new fileserver running on X4275 hardware with Solaris 10U8.
>
> On this fileserver we created one test dir with a quota and mounted it
> on another Solaris 10 system. There the quota command did not show the
>
On Thu, Nov 26, 2009 at 06:16:59PM +0100, Tomas Ögren wrote:
> On 26 November, 2009 - Willi Burmeister sent me these 1,7K bytes:
>
> > we have a new fileserver running on X4275 hardware with Solaris 10U8.
> >
> > On this fileserver we created one test dir with a quota and mounted it
> > on anoth
On 25-Nov-09, at 4:31 PM, Peter Jeremy wrote:
On 2009-Nov-24 14:07:06 -0600, Mike Gerdts wrote:
... fill a 128
KB buffer with random data then do bitwise rotations for each
successive use of the buffer. Unless my math is wrong, it should
allow 128 KB of random data to be used to write 128 GB of data
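A crude shell rendering of the idea, using a whole-byte rotation instead of
the bitwise one (file and pool names are made up):

dd if=/dev/urandom of=/tmp/buf bs=128k count=1   # one buffer of real random data
i=1
while [ $i -le 4 ]; do
  # append the buffer left-rotated by $i bytes; each pass is unique data
  { dd if=/tmp/buf bs=1 skip=$i
    dd if=/tmp/buf bs=1 count=$i; } 2>/dev/null >> /testpool/bigfile
  i=`expr $i + 1`
done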
Hopefully this will confirm to you that it should work:
x4500-10:~# zfs get userquota@57564 zpool1/sd02_www
NAME             PROPERTY         VALUE  SOURCE
zpool1/sd02_www  userquota@57564  29.5G  local
prov01# df -h | grep sd02
x4500-10.unix:/export/sd02/www  16T  839G
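(The quota itself was set the normal way beforehand, i.e. something like

x4500-10:~# zfs set userquota@57564=29.5g zpool1/sd02_www

to match the 29.5G shown above.)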
On Nov 24, 2009, at 1:59 PM, Conner, Neil wrote:
I installed Solaris 10 x86 on an HP DL 360G5 with an HP Smart Array
P400i controller. Two mirrored RAID volumes were created, each with
its own spare.
I installed Solaris onto a ZFS partition during setup onto one of
the mirrored volumes
On Nov 26, 2009, at 1:20 PM, Toby Thain wrote:
On 25-Nov-09, at 4:31 PM, Peter Jeremy wrote:
On 2009-Nov-24 14:07:06 -0600, Mike Gerdts wrote:
... fill a 128
KB buffer with random data then do bitwise rotations for each
successive use of the buffer. Unless my math is wrong, it should
allow 1
On 26-Nov-09, at 8:57 PM, Richard Elling wrote:
On Nov 26, 2009, at 1:20 PM, Toby Thain wrote:
On 25-Nov-09, at 4:31 PM, Peter Jeremy wrote:
On 2009-Nov-24 14:07:06 -0600, Mike Gerdts
wrote:
... fill a 128
KB buffer with random data then do bitwise rotations for each
successive use of the
On Thu, Nov 26, 2009 at 8:53 PM, Toby Thain wrote:
>
> On 26-Nov-09, at 8:57 PM, Richard Elling wrote:
>
>> On Nov 26, 2009, at 1:20 PM, Toby Thain wrote:
>>>
>>> On 25-Nov-09, at 4:31 PM, Peter Jeremy wrote:
>>>
On 2009-Nov-24 14:07:06 -0600, Mike Gerdts wrote:
>
> ... fill a 128
>>
Hi Jorgen,
> Hopefully this will confirm to you that it should work:
thanks for confirmation.
> I would suggest the usual things to check:
server# svcs \*nfs\*
STATE          STIME    FMRI
online         10:57:32 svc:/network/nfs/status:default
online         10:57:32 svc:/network/nfs/nlockmgr
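The service the client-side quota command actually talks to is rquotad, so
that one is worth a look as well:

server# svcs -l svc:/network/nfs/rquota:default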