On 09/15/10 12:56 PM, Peter Jeremy wrote:
I am looking at backing up my fileserver by replicating the
filesystems onto an external disk using send/recv with something
similar to:
zfs send ... myp...@snapshot | zfs recv -d backup
but have run into a bit of a gotcha with the mountpoint property:
- If I use "zfs send -R ..." then the mountp
The difference between multi-user thinking and single-user thinking is
really quite dramatic in this area. I came up on the time-sharing side
(PDP-8, PDP-11, DECSYSTEM-20); TOPS-20 didn't have any sort of disk
defragmenter, and nobody thought one was particularly desirable, because
the normal access
On Tue, Sep 14, 2010 at 04:13:31PM -0400, Linder, Doug wrote:
I recently created a test zpool (RAIDZ) on some iSCSI shares. I made a few
test directories and files. When I do a listing, I see something I've never
seen before:
[r...@hostname anewdir] # ls -la
total 6160
drwxr-xr-x 2 root other 4 Sep 14 14:16 .
drwxr-xr-x 4 root root
Cool, we can get the Intel X25-E's for around $300 a piece from HP with the
sled. I don't see the X25-M available so we will look at 4 of the X25-E's.
Thanks :)
Here is the solution (thanks to Gavin Maltby from the mdb forum):
Boot with the -kd option to enter kmdb and type the following commands:
aok/W 1
::bp zfs`zfs_panic_recover
:c
Wait until it stops at the breakpoint, then type:
zfs_recover/W 1
:z
:c
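A hedged aside: once the machine can boot again, the same two settings are usually made persistent in /etc/system (comment lines there start with "*"):
* sketch: persistent equivalents of the kmdb writes above
set aok=1
set zfs:zfs_recover=1
aok=1 turns failed kernel ASSERTs into warnings instead of panics, and zfs:zfs_recover=1 tells ZFS to attempt recovery where it would otherwise call zfs_panic_recover().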
We are looking into the possibility of adding a dedicated ZIL and/or L2ARC
devices to our pool. We are looking into getting 4 – 32GB Intel X25-E SSD
drives. Would this be a good solution to slow write speeds? We are currently
sharing out different slices of the pool to Windows servers using com
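As a sketch of what adding those devices would look like (the pool name "tank" and the c#t#d# device names are placeholders, not from the thread), e.g. splitting the four SSDs into a mirrored slog plus two cache devices:
zpool add tank log mirror c7t0d0 c7t1d0
zpool add tank cache c7t2d0 c7t3d0
Note that a dedicated ZIL only speeds up synchronous writes; if the slow writes are asynchronous, a slog will not help, while the cache (L2ARC) devices only benefit reads.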
Richard Elling wrote:
> Define "fragmentation"?
Maybe this is the wrong thread. I have noticed that an old pool can take 4
hours to scrub, with a large portion of the time spent reading from the pool disks
at 150+ MB/s while zpool iostat reports only a 2 MB/s read speed. My naive
interpretation i
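One way to sanity-check that gap (pool name is a placeholder) is to watch both views side by side during a scrub:
zpool iostat -v tank 5
iostat -xn 5
zpool iostat -v breaks the numbers down per vdev, and comparing it against the OS-level device view makes it easier to see where the scrub time is actually going.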
> From: Richard Elling [mailto:rich...@nexenta.com]
> > With appropriate write caching and grouping or re-ordering of writes
> > algorithms, it should be possible to minimize the amount of file
> > interleaving and fragmentation on write that takes place.
>
> To some degree, ZFS already does this. Th
> From: Haudy Kazemi [mailto:kaze0...@umn.edu]
>
> With regard to multiuser systems and how that negates the need to
> defragment, I think that is only partially true. As long as the files
> are defragmented enough so that each particular read request only
> requires one seek before it is time to
When I execute
::load zfs
I get a kernel panic because of this space_map_add problem.
If I launch OpenSolaris with "-kd" I'm able to do this:
aok/W 1
but if I type:
zfs_recover/W 1
then I get an unknown symbol name error.
Any idea how I can force these variables?
I can't edit my /etc/system file now because the system is not booting.
Is there a way to force these parameters on the Solaris kernel at boot time with GRUB?
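As a hedged sketch: at the GRUB menu press "e" to edit the boot entry, select the kernel$ line, press "e" again, and append -kd so the system drops into kmdb at boot:
kernel$ /platform/i86pc/kernel/$ISADIR/unix -kd
From kmdb you can then set the variables with the breakpoint procedure described earlier in the thread. The same -kd option can be made permanent by editing the kernel$ line in /boot/grub/menu.lst.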