On Fri, 21 Nov 2008 17:20:48 PST, Vincent Kéravec
<[EMAIL PROTECTED]> wrote:
> I just tried ZFS on one of our slaves and got some really
> bad performance.
>
> When I started the server yesterday, it was able to keep
> up with the main server without problems, but after two
> days of consecuti
> Posted for my friend Marko:
>
> I've been reading up on ZFS with the idea to build a
> home NAS.
>
> My ideal home NAS would have:
>
> - high performance via striping
> - fault tolerance with selective use of multiple
> copies attribute
> - cheap by getting the most efficient space
> utilizati
On Fri, Nov 21, 2008 at 11:33 PM, zerk <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I have OpenSolaris on an AMD64 Asus-A8NE with 2 GB of RAM and 4x320 GB
> SATA drives in raidz1.
>
> With dd, I can write at close to the disks' maximum speed of 80 MB/s each,
> for a total of 250 MB/s, if I have no Xsession at all (o
Hi,
I have OpenSolaris on an AMD64 Asus-A8NE with 2 GB of RAM and 4x320 GB SATA
drives in raidz1.
With dd, I can write at close to the disks' maximum speed of 80 MB/s each, for a
total of 250 MB/s, if I have no Xsession at all (only a console tty).
But as soon as I have an Xsession running, the write speed
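(For anyone who wants to reproduce this kind of measurement, a minimal sketch of
a sequential-write test; the pool name, file path, block size and count below are
my own assumptions, not the exact command used above.)

# dd if=/dev/zero of=/tank/ddtest bs=1M count=4096   # write a 4 GB test file
# zpool iostat -v tank 5                             # in another terminal: per-disk throughput
# rm /tank/ddtest                                    # clean up the test file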
On Fri, Nov 21, 2008 at 9:38 PM, Jens Elkner
<[EMAIL PROTECTED]> wrote:
>
>
> The incomplete one - where is the '-t all' option? It's really annoying,
> error-prone, and time-consuming to type stories on the command line ...
> Does anybody remember the "keep it small and simple" th
On Fri, Nov 21, 2008 at 03:42:17PM -0800, David Pacheco wrote:
> Pawel Tecza wrote:
> > But I still don't understand why `zfs list` doesn't display snapshots
> > by default. I saw it in the Net many times at the examples of zfs usage.
>
> This was PSARC/2008/469 - excluding snapshot info from 'zfs
I just tried ZFS on one of our slaves and got some really bad performance.
When I started the server yesterday, it was able to keep up with the main server
without problems, but after two days of consecutive running the server is crushed
by I/O.
After running the DTrace script iopattern, I noticed that the
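(For reference, iopattern is one of the DTraceToolkit scripts; a rough sketch of
how it is usually run, assuming the toolkit is installed under /opt/DTT, which may
differ on your system.)

# /opt/DTT/iopattern 10     # one summary line every 10 seconds: %random, %sequential, I/O sizes
# /opt/DTT/iopattern 5 12   # sample every 5 seconds, 12 samples, then exit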
Andrew Gabriel writes:
> Pawel Tecza wrote:
>> But I still don't understand why `zfs list` doesn't display snapshots
>> by default. I saw it in the Net many times at the examples of zfs usage.
>>
>
> It was changed.
>
> zfs list -t all
>
> gives you everything, like zfs list used to.
Hi Andrew,
Pawel Tecza wrote:
> But I still don't understand why `zfs list` doesn't display snapshots
> by default. I saw it in the Net many times at the examples of zfs usage.
This was PSARC/2008/469 - excluding snapshot info from 'zfs list'
http://opensolaris.org/os/community/on/flag-days/pages/2008091003
It used to. Although, with the Time Slider now, I agree that it shouldn't show
them by default.
Malachi
On Fri, Nov 21, 2008 at 3:29 PM, Pawel Tecza <[EMAIL PROTECTED]> wrote:
> Ahmed Kamal writes:
> > zfs list -t snapshot ?
> Hi Ahmed,
>
> Thanks a lot for the hint! It works. I didn't know that I have so m
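(If you want the old behaviour back pool-wide, the listsnapshots pool property
that came in with that change should do it; a hedged sketch, with rpool used just
as an example.)

# zpool set listsnapshots=on rpool    # plain 'zfs list' includes snapshots again
# zpool get listsnapshots rpool       # check the current setting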
Prabahar Jeyaram writes:
> 'zfs list' by default does not list the snapshots.
>
> You need to use '-t snapshot' option with "zfs list" to view the snapshots.
Hello Prabahar,
Thank you very much for your quick explanation! Did `zfs list` always
work that way, or is it the default behaviour of the lates
Pawel Tecza wrote:
> But I still don't understand why `zfs list` doesn't display snapshots
> by default. I saw it in the Net many times at the examples of zfs usage.
>
It was changed.
zfs list -t all
gives you everything, like zfs list used to.
--
Andrew
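(To spell out the difference, a short sketch of the type-filtering forms; the pool
name is just a placeholder.)

# zfs list                       # filesystems and volumes only (the new default)
# zfs list -t snapshot           # snapshots only
# zfs list -t all                # everything, the way plain 'zfs list' used to behave
# zfs list -r -t snapshot rpool  # snapshots under a single pool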
Ahmed Kamal writes:
> zfs list -t snapshot ?
Hi Ahmed,
Thanks a lot for the hint! It works. I didn't know that I have so many
snapshots :D
# zfs list -t snapshot
NAME                                USED  AVAIL  REFER  MOUNTPOINT
[EMAIL PROTECTED]
'zfs list' by default does not list the snapshots.
You need to use '-t snapshot' option with "zfs list" to view the snapshots.
--
Prabahar.
On Sat, Nov 22, 2008 at 12:14:47AM +0100, Pawel Tecza wrote:
> Hello All,
>
> This is my zfs list:
>
> # zfs list
> NAME USED AVAIL
zfs list -t snapshot ?
On Sat, Nov 22, 2008 at 1:14 AM, Pawel Tecza <[EMAIL PROTECTED]> wrote:
> Hello All,
>
> This is my zfs list:
>
> # zfs list
> NAME                      USED  AVAIL  REFER  MOUNTPOINT
> rpool                    10,5G  3,85G    61K  /rpool
> rpool/ROOT               9,04G
Hello All,
This is my zfs list:
# zfs list
NAME                       USED  AVAIL  REFER  MOUNTPOINT
rpool                     10,5G  3,85G    61K  /rpool
rpool/ROOT                9,04G  3,85G    18K  legacy
rpool/ROOT/opensolaris    89,7M  3,85G  5,44G  legacy
rpool/ROOT/opensolaris-1  8,95G  3
Posted for my friend Marko:
I've been reading up on ZFS with the idea to build a home NAS.
My ideal home NAS would have:
- high performance via striping
- fault tolerance with selective use of multiple copies attribute
- cheap by getting the most efficient space utilization possible (not raidz,
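(For the 'multiple copies' item in the list above, a minimal sketch of how the
copies property is applied per dataset; the names are placeholders, and note that
copies=2 roughly doubles the space used by that dataset rather than giving
raidz-style parity.)

# zfs create tank/photos
# zfs set copies=2 tank/photos    # store two copies of every block in this dataset
# zfs get copies tank/photos      # verify the setting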
The drives are all connected to the motherboard's (Intel S3210SHLX) SATA ports.
I've scrubbed the pool several times in the last two days, no errors:
[EMAIL PROTECTED]:~# zpool status -v
pool: main_pool
state: ONLINE
scrub: none requested
config:
NAME          STATE     READ WRITE CKSU
On Fri, Nov 21, 2008 at 14:35, Charles Menser <[EMAIL PROTECTED]> wrote:
> I have a 5-drive raidz2 pool which I have an iSCSI share on. While
> backing up a MacOS drive to it I noticed some very strange access
> patterns, and wanted to know if what I am seeing is normal or not.
>
> There are times
I have a 5-drive raidz2 pool which I have an iSCSI share on. While
backing up a MacOS drive to it I noticed some very strange access
patterns, and wanted to know if what I am seeing is normal or not.
There are times when all five drives are accessed equally, and there
are times when only three of
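(An easy way to watch how evenly the member disks are being hit is to sample the
pool while the backup runs; a sketch, with the pool name as a placeholder.)

# zpool iostat -v tank 5    # per-vdev and per-disk ops/bandwidth every 5 seconds
# iostat -xnz 5             # the same picture from the Solaris disk-driver side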
Chris Gerhard wrote:
> If you have a separate ZIL device is there any way to scrub the data in it?
zpool scrub traverses the ZIL regardless of whether or not it is in a
slog device or in one of the normal pool devices.
> I appreciate that the data in the ZIL is only there for a short time but
>
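(In other words, a regular scrub should cover the slog as well; a minimal sketch,
with the pool name as a placeholder.)

# zpool scrub tank
# zpool status -v tank    # the log device shows up with its own error counters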
If you have a separate ZIL device is there any way to scrub the data in it?
I appreciate that the data in the ZIL is only there for a short time, but since
it is never read, if you had a misbehaving ZIL device that was just throwing the
data away you could potentially run like this for many months