Hi,
I recently moved to a FreeBSD/ZFS system for the sake of data integrity, after
losing my data on Linux. I've now had my first hard disk failure; the BIOS
refused to even boot with the failed drive (ad18) connected, so I removed it.
I have another drive, ad16, which had enough space to replac
Hi Geoff,
I also tested a RAM disk as a ZIL and found I could recover the pool:

ramdiskadm -a zil 1g                      # create a 1 GB ramdisk named "zil"
zpool create -f tank c1t3d0 c1t4d0 log /dev/ramdisk/zil
zpool status tank
reboot                                    # the ramdisk, and thus the log device, is lost
zpool status tank                         # pool now shows the missing log device
ramdiskadm -a zil 1g                      # recreate a ramdisk at the same path
zpool replace -f tank /dev/ramdisk/zil    # replace the dead log with the new ramdisk
zpool status tank
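(A hedged aside, not from the original message: on pool versions that support
log device removal, version 19 and later, the dead log could instead be
removed outright:

zpool remove tank /dev/ramdisk/zil

The replace trick above is what works on older pools.)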
Cheers
Richard.
Hi.
I know that I can view statistics for the pool (zpool iostat).
I want to view statistics for each file system on the pool. Is that possible?
Thanks.
On 17/05/2010 12:41, eXeC001er wrote:
I know that I can view statistics for the pool (zpool iostat).
I want to view statistics for each file system on the pool. Is that possible?
See fsstat(1M)
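A minimal sketch of fsstat(1M) usage (the mount points below are hypothetical;
fsstat accepts file system types or paths, plus an interval):

fsstat zfs 5                        # aggregate ZFS activity every 5 seconds
fsstat /tank/home /tank/media 5     # per-file-system statistics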
--
Darren J Moffat
Good, but this utility only shows statistics for mounted file systems.
How can I view statistics for an iSCSI-shared file system?
Thanks.
2010/5/17 Darren J Moffat
> On 17/05/2010 12:41, eXeC001er wrote:
>
>> I know that I can view statistics for the pool (zpool iostat).
>> I want to view statistics for each file system on the pool. Is that possible?
Hi,
On 05/17/10 01:57 PM, eXeC001er wrote:
Good, but this utility only shows statistics for mounted file systems.
How can I view statistics for an iSCSI-shared file system?
fsstat(1M) relies on certain kstat counters for its operation -
last I checked, I/O against zvols does not update those counters.
If your
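(A hedged workaround, not from the original message: for zvols exported via
COMSTAR, the per-LU I/O counters can be read straight from the stmf kstats,
e.g.

kstat -p -m stmf -c io    # all I/O-class kstats from the STMF module, parsable form

which is what the rest of this thread goes on to use.)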
On 12/05/2010 22:19, Ian Collins wrote:
On 05/13/10 03:27 AM, Lori Alt wrote:
On 05/12/10 04:29 AM, Ian Collins wrote:
I just tried moving a dump volume from rpool into another pool, so I
used zfs send/receive to copy the volume (to keep some older dumps),
then ran dumpadm -d to use the new location
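(A minimal sketch of that procedure, assuming a hypothetical snapshot name
and a target pool "tank":

zfs snapshot rpool/dump@move
zfs send rpool/dump@move | zfs receive tank/dump
dumpadm -d /dev/zvol/dsk/tank/dump    # point the dump device at the new zvol

)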
Perfect!
I found info about kstat for Perl.
Where can I find the meaning of each field?
r...@atom:~# kstat stmf:0:stmf_lu_io_ff00d1c2a8f8
1274100947
module: stmf                            instance: 0
name:   stmf_lu_io_ff00d1c2a8f8         class:    io
        crtime
On 05/17/10 03:05 PM, eXeC001er wrote:
Perfect!
I found info about kstat for Perl.
Where can I find the meaning of each field?
Most of them can be found here under the section "I/O kstat":
http://docs.sun.com/app/docs/doc/819-2246/kstat-3kstat?a=view
r...@atom:~# kstat stmf:0:stmf_lu_io_
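(A hedged example of putting those fields to use, not from the original
thread: sampling one counter over an interval gives throughput; the LU name
is the one from the post above:

kstat -p stmf:0:stmf_lu_io_ff00d1c2a8f8:nread 10 2
# two samples, 10 seconds apart; (second nread - first nread) / 10 = average read bytes/sec

The same works for nwritten, reads, and writes; rtime and wtime are
cumulative run- and wait-queue times in nanoseconds.)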
Good! I found all the necessary information.
Thanks.
2010/5/17 Henrik Johansen
> On 05/17/10 03:05 PM, eXeC001er wrote:
>
>> Perfect!
>>
>> I found info about kstat for Perl.
>>
>> Where can I find the meaning of each field?
>>
>
> Most of them can be found here under the section "I/O kstat":
>
Hello.
I've got a home storage server set up with OpenSolaris (currently dev build 134)
that is quickly running out of storage space, and I'm looking through what kind
of options I have for expanding it.
I currently have my "storage-pool" in a 4x 1TB drive RAIDZ1 setup, and have
room for 8-9
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Geoff Nordli
>
> I was messing around with a ramdisk on a pool and I forgot to remove it
> before I shut down the server. Now I am not able to mount the pool. I am
> not concerned with the
On Sun, May 16, 2010 at 01:14:24PM -0700, Charles Hedrick wrote:
> We use this configuration. It works fine. However, I don't know
> enough about the details to answer all of your questions.
>
> The disks are accessible from both systems at the same time. Of
> course with ZFS you had better not act
On May 17, 2010, at 5:29 PM, Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Geoff Nordli
>>
>> I was messing around with a ramdisk on a pool and I forgot to remove it
>> before I shut down the server. Now I am
>-----Original Message-----
>From: Edward Ned Harvey [mailto:solar...@nedharvey.com]
>Sent: Monday, May 17, 2010 6:29 AM
>>
>> I was messing around with a ramdisk on a pool and I forgot to remove
>> it before I shut down the server. Now I am not able to mount the
>> pool. I am not concerned wit
On Mon, May 17, 2010 at 6:25 AM, Andreas Gunnarsson wrote:
> I've got a home storage server set up with OpenSolaris (currently dev build
> 134) that is quickly running out of storage space, and I'm looking through
> what kind of options I have for expanding it.
>
> I currently have my "storage-pool
On Thu, May 13, 2010 at 06:09:55PM +0200, Roy Sigurd Karlsbakk wrote:
> 1. even though they're 5900, not 7200, benchmarks I've seen show they are
> quite good
Minor correction: they are 5400 rpm. Seagate makes some 5900 rpm drives.
The "green" drives have a reasonable raw throughput rate, due to t
>On Thu, May 13, 2010 at 06:09:55PM +0200, Roy Sigurd Karlsbakk wrote:
>> 1. even though they're 5900, not 7200, benchmarks I've seen show they are
>> quite good
>
>Minor correction, they are 5400rpm. Seagate makes some 5900rpm drives.
>
>The "green" drives have reasonable raw throughput rate,
On 17 May, 2010 - Dan Pritts sent me these 1,6K bytes:
> On Thu, May 13, 2010 at 06:09:55PM +0200, Roy Sigurd Karlsbakk wrote:
> > 1. even though they're 5900, not 7200, benchmarks I've seen show they are
> > quite good
>
> Minor correction, they are 5400rpm. Seagate makes some 5900rpm drives.
On Mon, May 17, 2010 at 9:25 AM, Tomas Ögren wrote:
> On 17 May, 2010 - Dan Pritts sent me these 1,6K bytes:
>
> > On Thu, May 13, 2010 at 06:09:55PM +0200, Roy Sigurd Karlsbakk wrote:
> > > 1. even though they're 5900, not 7200, benchmarks I've seen show they
> are quite good
> >
> > Minor corre
On Mon, May 17, 2010 at 06:25:18PM +0200, Tomas Ögren wrote:
> Resilver does a whole lot of random I/O itself, not bulk reads; it reads
> the filesystem tree, not "block 0, block 1, block 2...". You won't get
> 60 MB/s sustained, not even close.
Even with large, unfragmented files?
danno
--
Dan P
Hey, when I do this single-user boot, is there any way to capture what pops
up on the screen? It's a LOT of stuff.
Anyway, it seems to work fine when I boot single-user with -srv.
cpustat -h lists exactly what you said it should, plus a lot more (though the
"more" is above, so like you said, it shows what i
psrinfo -pv shows:
The physical processor has 8 virtual processors (0-7)
x86 (AuthenticAMD 100F91 family 16 model 9 step 1 clock 200 MHz)
AMD Opteron(tm) Processor 6128 [ Socket: G34 ]
On Sat, May 15, 2010 at 8:35 PM, Dennis Clarke wrote:
> - Original Message -
No, it doesn't. The only SATA ports that show up are the ones connected
to the backplane via the reverse breakout SAS cable, and they show as
empty, so I'm thinking that OpenSolaris isn't working with the chipset
SATA.
In the BIOS I can select from:
Native IDE
AMD_AHCI
RAID
Legacy IDE
On Mon, May 17, 2010 at 12:51 PM, Thomas Burgess wrote:
> In the bios i can select from:
> Native IDE
> AMD_AHCI
This is probably what you want. AHCI is supposed to be chipset agnostic.
> I also have an option called "SATA IDE combined mode"
See if there's anything in the docs about what this
Hello everybody,
thank you for your support. I have been able to observe a sustained 50-70 MB/s
resilver with the "iostat -x 10" command, on one out of 3 disks. The other
two disks are now on their way back to the vendor, and I hope to be able to
report better success when I get them back.
Thanks
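(A hedged aside, not from the original message: zpool status also reports
resilver progress, percent done and an estimated time to completion, so it
can complement iostat; the pool name is a placeholder:

zpool status <pool>    # the "scrub:" line tracks the resilver

)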
OK, well, this was part of the problem.
I disabled the SATA IDE combined mode and reinstalled OpenSolaris (I tried
to just disable it, but osol wouldn't boot).
Now the drive connected to the SSD DOES show up in cfgadm, so it seems to be
in SATA mode... but the drives connected to the reverse breakout
I'd have to agree. Option 2 is probably the best.
I recently found myself in need of more space... I had to build an entirely
new server... my first one was close to full (it has 20 1TB drives in 3
raidz2 groups, 7/7/6, and I was down to 3 TB). I ended up going with a whole
new server with 2TB dri
When I did a similar upgrade a while back, I did #2: create a new 6-drive
raidz2 pool, copy the data to it, verify the data, delete the old pool, then
add the old drives plus some new drives as another 6-disk raidz2 vdev in the
new pool.
Performance has been quite good, and the migration was very smooth.
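A sketch of those steps; the pool and device names (c0t0d0-c0t5d0 new,
c1t0d0-c1t5d0 old) are hypothetical:

zpool create newpool raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
zfs snapshot -r oldpool@migrate
zfs send -R oldpool@migrate | zfs receive -F -d newpool   # copy every dataset
zpool scrub newpool                                       # verify the copy
zpool destroy oldpool
zpool add newpool raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0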
Thanks for the tips, guys. I'll go with 2x 6-drive raidz2 vdevs then.
Regards
Andreas Gunnarsson
>On 05-17-10, Thomas Burgess wrote:
>psrinfo -pv shows:
>
>The physical processor has 8 virtual processors (0-7)
> x86 (AuthenticAMD 100F91 family 16 model 9 step 1 clock 200 MHz)
> AMD Opteron(tm) Processor 6128 [ Socket: G34 ]
>
That's odd.
Please try this:
# kstat -m
On Mon, 2010-05-17 at 12:54 -0400, Dan Pritts wrote:
> On Mon, May 17, 2010 at 06:25:18PM +0200, Tomas Ögren wrote:
> > Resilver does a whole lot of random io itself, not bulk reads.. It reads
> > the filesystem tree, not "block 0, block 1, block 2..". You won't get
> > 60MB/s sustained, not even c
On Tue, May 11, 2010 at 04:15:24AM -0700, Bertrand Augereau wrote:
> Is there an O(nb_blocks_for_the_file) solution, then?
>
> I know O(nb_blocks_for_the_file) == O(nb_bytes_in_the_file), from Mr.
> Landau's POV, but I'm quite interested in a good constant factor.
If you were considering the hash
The LSI SAS1064E slipped through the cracks when I built the list.
This is a 4-port PCIe x8 HBA with very good Solaris (and Linux)
support. I don't remember having seen it mentioned on zfs-discuss@
before, even though many were looking for 4-port controllers. Perhaps
the fact that it is priced too clos
On Mon, May 17, 2010 at 03:12:44PM -0700, Erik Trimble wrote:
> On Mon, 2010-05-17 at 12:54 -0400, Dan Pritts wrote:
> > On Mon, May 17, 2010 at 06:25:18PM +0200, Tomas Ögren wrote:
> > > Resilver does a whole lot of random io itself, not bulk reads.. It reads
> > > the filesystem tree, not "block