Thanks for that. Success. I was apprehensive about prying too hard.
Tony
On Dec 12, 2011, at 11:53 AM, Edmund White wrote:
> You need to pry the drive sled off of the disk once the screws are
> removed. There are two or four notches that hold onto the disk. You'll end
> up spreadi
eant to be separated? I've looked at the x4540 user guide but it
does not say anything about it.
Tony Schreiner
Hello,
Looks like the mirror was removed or deleted... Can I get it back to its
original?
Original:
  mirror-0  ONLINE  0  0  0
    c1t2d0  ONLINE  0  0  0
    c1t3d0  ONLINE  0  0  0
  mirror-1  ONLINE  0  0  0
    c1t4
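A minimal sketch of restoring the layout, assuming the pool is named tank and c1t3d0 is the disk that was detached from mirror-0:
  # zpool attach tank c1t2d0 c1t3d0
This re-adds c1t3d0 as a mirror of c1t2d0 and starts a resilver; zpool status shows the progress.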
I have a zfs pool called logs (about 200G).
I would like to create 2 volumes using this chunk of storage.
However, they would have different mount points.
i.e. 50G would be mounted as /oracle/logs
100G would be mounted as /session/logs
is this possible? Do I have to use the legacy mount options
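A minimal sketch of doing this with two file systems rather than volumes, assuming the pool is named logs and using quotas to cap each at the desired size:
  # zfs create -o mountpoint=/oracle/logs -o quota=50G logs/oracle
  # zfs create -o mountpoint=/session/logs -o quota=100G logs/session
No legacy mounts are needed; each dataset mounts automatically at its mountpoint property.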
on a per drive basis (rather than overall pool
stats), but you can adapt the idea to whatever you need.
-Tony
On Tue, Jun 28, 2011 at 10:08 AM, Matt Harrison <
iwasinnamuk...@genestate.com> wrote:
> Hi list,
>
> I want to monitor the read and write ops/bandwidth for a couple of po
Hello,
Does anyone have any scripts that will monitor the performance of a specific
zvol?
Thanks
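Per-zvol statistics generally need DTrace on these releases, but pool- and vdev-level numbers are easy to get. A minimal sketch, assuming the pool is named logs:
  # zpool iostat -v logs 5
This prints read/write operations and bandwidth for the pool and each vdev every five seconds.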
We seem to be having write issues with zfs, does anyone see anything in the
following:
bash-3.00# kstat -p -n arcstats
zfs:0:arcstats:c        655251456
zfs:0:arcstats:c_max    5242011648
zfs:0:arcstats:c_min    655251456
zfs:0:arcstats:class    misc
zfs:0:arcstats:crtime   5699201.4918501
zfs:0:a
The below ZFS pool:
zpool create tank mirror c1t2d0 c1t3d0 mirror c1t4d0 c1t5d0 spare c1t6d0
Is this RAID 0+1 (a mirror of stripes) or RAID 1+0 (a stripe of mirrors)?
Thanks
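ZFS stripes writes across all top-level vdevs, so two mirror vdevs behave as a stripe of mirrors (1+0). A rough sketch of the resulting layout, in the same form as the status listings elsewhere in this thread:
  tank
    mirror-0
      c1t2d0
      c1t3d0
    mirror-1
      c1t4d0
      c1t5d0
    spares
      c1t6d0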
the prefetch off for just a single pool? Also, is this something that
can be done online, or will it require a reboot to take effect?
Thanks in advance for your assistance in this matter.
Regards
Tony
--
Tony Marshall | Technical Architect
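For context, zfs_prefetch_disable is a system-wide tunable on these releases rather than a per-pool property. A hedged sketch of flipping it online with mdb (no reboot needed, but it affects every pool on the box):
  # echo zfs_prefetch_disable/W0t1 | mdb -kw
Adding 'set zfs:zfs_prefetch_disable = 1' to /etc/system makes the change persist across reboots.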
Is it possible to expand the size of a ZFS volume?
It was created with the following command:
zfs create -V 20G ldomspool/test
Thanks
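A minimal sketch of growing the zvol in place (the 40G target is just an example; whatever sits on top of the volume, such as a file system inside the LDom, still has to be grown separately):
  # zfs set volsize=40G ldomspool/test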
c5t7d0 AVAIL
am I supposed to do something with c1t3d0 now?
Thanks,
Tony Schreiner
Boston College
Is it possible to have a shared LUN between 2 servers using ZFS? The server
can see both LUNs, but when I do an import I get:
bash-3.00# zpool import
pool: logs
id: 3700399958960377217
state: ONLINE
status: The pool was last accessed by another system.
action: The pool can be imported using its
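A plain ZFS pool may only be imported on one host at a time; importing it on both servers at once will corrupt it. If the other system no longer has the pool imported, the last-accessed warning can be overridden. A minimal sketch:
  # zpool import -f logs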
Is it possible to add 2 disks to increase the size of the pool below?
NAME          STATE   READ WRITE CKSUM
testpool      ONLINE     0     0     0
  mirror-0    ONLINE     0     0     0
    c1t2d0    ONLINE     0     0     0
    c1t3d0    ONLINE     0     0     0
  mirror-1    ONLINE     0     0     0
    c1t4d0    ONLINE     0     0     0
    c1t5d0    ONLINE     0     0     0
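A minimal sketch of growing this pool by adding a third mirror vdev (c1t6d0 and c1t7d0 are hypothetical device names):
  # zpool add testpool mirror c1t6d0 c1t7d0
The extra capacity is available immediately, though existing data is not rebalanced onto the new vdev.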
Using ZFS v22, is it possible to add a hot spare to rpool?
Thanks
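The syntax is the same as for any other pool, though spares on root pools are restricted in some releases (the device generally needs an SMI label/slice, and some versions refuse spares on rpool entirely). A hedged sketch with a hypothetical device:
  # zpool add rpool spare c1t1d0s0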
Hello,
We have been running into a few issues recently with CPU panics while trying
to reboot the control/service domains on various T-series platforms. Has
anyone seen the message below?
Thanks
SunOS Release 5.10 Version Generic_142900-03 64-bit
Copyright 1983-2009 Sun Microsystems, Inc. All ri
I have 2 ZFS pools all using the same drive type and size. The question is
can I have 1 global hot spare for both of those pools?
Thanks
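A spare device can be listed in more than one pool on the same host, and whichever pool faults first will use it. A minimal sketch, assuming the pools are named pool1 and pool2 and the spare is c1t6d0:
  # zpool add pool1 spare c1t6d0
  # zpool add pool2 spare c1t6d0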
I am getting the following error message when trying to do a zfs snapshot:
r...@pluto#zfs snapshot datapool/m...@backup1
cannot create snapshot 'datapool/m...@backup1': out of space
r...@pluto#zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
datapool 556G 110G 446G 19% ONLINE -
rpool 278G 12
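Since the pool itself still has space, the usual suspect is a quota or reservation on the dataset or one of its parents. A hedged sketch of checking:
  # zfs get -r quota,reservation,refreservation,used,available datapool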
Hello,
Has anyone encountered the following error message, running Solaris 10 u8 in
an LDom?
bash-3.00# devfsadm
devfsadm: write failed for /dev/.devfsadm_dev.lock: Bad exchange descriptor
bash-3.00# zpool status -v rpool
pool: rpool
state: DEGRADED
status: One or more devices has experienced a
Is it currently possible (Solaris 10 u8) to encrypt a ZFS pool?
Thanks
Ok, this is definitely the kind of feedback I was looking for. It looks like
I'll have to check out the docs on these technologies. Appreciate it.
I figured I would load balance the hosts with a Cisco device, since I can get
around the IOS ok.
I want to offer a online backup service that pr
Let's say I have two servers, both running OpenSolaris with ZFS. I basically
want to be able to create a filesystem where the two servers have a common
volume, that is mirrored between the two. Meaning, each server keeps an
identical, real time backup of the other's data directory. Set them both
How would one determine whether I should have a separate ZIL disk? We are
using ZFS as the backend for our guest domain boot drives using LDoms, and we
are seeing very slow write performance.
Thanks
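A rough rule: a separate log device helps when the workload is heavy on synchronous writes, which zvol-backed LDom boot disks typically are. The zilstat DTrace script is one way to measure ZIL traffic before buying hardware. A hedged sketch of adding a log device (placeholder names):
  # zpool add <pool> log <ssd-device>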
Was wondering if any of you see any issues with the following on Solaris
10 u8 ZFS?
System Memory:
Physical RAM: 11042 MB
Free Memory : 5250 MB
LotsFree: 168 MB
ZFS Tunables (/etc/system):
ARC Size:
Current Size: 4309 MB (arcsize)
Target Size (Adaptive): 10018 MB (c)
Min Size (Hard Limit): 12
I am getting the following error; however, as you can see below, this is an
SMI label...
cannot set property for 'rpool': property 'bootfs' not supported on EFI
labeled devices
# zpool get bootfs rpool
NAME PROPERTY VALUE SOURCE
rpool bootfs - default
# zpool set bootfs=rpool/ROOT/s10s_u8wos_08a rp
I was wondering if any data could be lost while taking a snapshot on a running
system. Does it flush everything to disk, or could some data be lost?
Thanks
Do the following ZFS stats look ok?
> ::memstat
Page Summary          Pages      MB   %Tot
Kernel               106619     832    28%
ZFS File Data         79817     623    21%
Anon                  28553     223     7%
Exec and libs          3055      23     1%
Page cache            18024     140     5%
Free (cachelist)       2880      22     1%
Free (freelist)      146309     114
I am trying to understand how "refreservation" works with snapshots.
If I have a 100G zfs pool
I have 4 20G volume groups in that pool.
refreservation = 20G on all volume groups.
Now when I want to do a snapshot will this snapshot need 20G + the amount
changed (REFER)? If not I get a "o
Can I rollback a snapshot that I did a zfs send on?
ie: zfs send testpool/w...@april6 > /backups/w...@april6_2010
Thanks
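Sending a snapshot only reads it, so the snapshot can still be rolled back afterwards. A minimal sketch with a hypothetical snapshot name (rollback discards everything written after the snapshot):
  # zfs rollback testpool/web@april6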
Was wondering if anyone can see any issues with the ARC in the following
output?
bash-3.00# ./arc_summary.pl
System Memory:
Physical RAM: 6023 MB
Free Memory : 784 MB
LotsFree: 90 MB
ZFS Tunables (/etc/system):
ARC Size:
Current Size: 1159 MB (arcsize)
Target Size (Adaptive): 2106 MB (c)
Min Si
When would it be necessary to scrub a ZFS filesystem?
We have many "rpool" and "datapool" pools, plus a NAS 7130; would you
recommend scheduling monthly scrubs at off-peak hours, or is it really necessary?
Thanks
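Scrubs are not strictly required, but a periodic scrub finds latent errors while redundancy can still repair them. A minimal sketch of a monthly off-peak cron entry in root's crontab (the schedule is just an example):
  0 2 1 * * /usr/sbin/zpool scrub datapool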
Can I create a devalias to boot from the other half of the mirror, as I would with UFS?
Thanks
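Yes; the OBP alias just needs to point at the second disk's device path. A hedged sketch with a hypothetical path (derive the real one from the /devices link behind /dev/dsk/c1t3d0s0):
  ok nvalias rootmirror /pci@0/pci@0/pci@2/scsi@0/disk@1,0:a
  ok boot rootmirror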
Can I assign a disk to multiple pools?
The only problem is that one pool is "rpool" with an SMI label and the other
is a standard ZFS pool.
Thanks
What is the following syntax?
zpool create tank mirror c1t2d0 c1t3d0 mirror c1t4d0 c1t5d0 spare c1t6d0

Is this RAID 0+1 or 1+0?
Thanks
So in a ZFS boot disk configuration (rpool) in a running environment, it's
not possible?
On Fri, Feb 19, 2010 at 9:25 AM, wrote:
>
>
> >Is it possible to grow a ZFS volume on a SPARC system with a SMI/VTOC
> label
> >without losing data as the OS is built on this volume?
>
>
> Sure as long as th
Is it possible to grow a ZFS volume on a SPARC system with an SMI/VTOC label
without losing data as the OS is built on this volume?
Thanks
Why would I get the following error:
Reading ZFS config: done.
Mounting ZFS filesystems: (1/6)cannot mount '/data/apache': directory is not
empty
(6/6)
svc:/system/filesystem/local:default: WARNING: /usr/sbin/zfs mount -a
failed: exit status 1
And yes, there is data in the /data/apache file syste
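ZFS refuses to mount a dataset over a non-empty directory. The stray files in the underlying /data/apache directory can be moved aside, or the dataset can be overlay-mounted on top of them; a hedged sketch of the latter:
  # zfs mount -O data/apache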
Was wondering if anyone has had any performance issues with Oracle running
on ZFS as compared to UFS?
Thanks
I am getting the following message when I try to remove a snapshot from a
clone:
bash-3.00# zfs destroy data/webser...@sys_unconfigd
cannot destroy 'data/webser...@sys_unconfigd': snapshot has dependent clones
use '-R' to destroy the following datasets:
The datasets are being used, so why can't
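A snapshot with dependent clones cannot be destroyed while those clones exist, because they share its blocks. A hedged sketch of finding which clones hang off it (if the goal is to free the original file system rather than the snapshot, zfs promote can move the snapshot over to a clone):
  # zfs list -o name,origin -t filesystem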
I have a 2-disk/2-way mirror and was wondering if I can remove 1/2 the
mirror and plunk it in another system?
Thanks
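Detaching a disk erases its pool labels, so where the pool version supports it the cleaner route is zpool split, which turns one side of each mirror into a new, importable pool. A hedged sketch with hypothetical names:
  # zpool split tank tank2 c1t3d0
Then move the disk and run 'zpool import tank2' on the other system.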
Has anyone encountered any file corruption when snapshotting ZFS file systems?
How does ZFS handle open files compared to other file systems that use similar
technology, e.g. Veritas?
Thanks
Getting the following error when trying to do a ZFS Flash install via
jumpstart.
error: field 1 - keyword "pool"
Do I have to have Solaris 10 u8 installed as the mini-root, or will previous
versions of Solaris 10 work?
jumpstart profile below
install_type flash_install
archive_location nfs://19
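For what it's worth, ZFS-root flash installs need a Solaris 10 10/09 (u8) or newer miniroot, and the profile declares the root pool with the five-field pool keyword. A hedged sketch with hypothetical values:
  install_type flash_install
  archive_location nfs://server/export/flash/s10u8.flar
  partitioning explicit
  pool rpool auto auto auto c0t0d0s0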
Is it possible to create a RAW device on a ZFS pool?
Thanks
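A zvol provides exactly that: a block device carved out of the pool, with raw and block nodes under /dev/zvol. A minimal sketch with hypothetical names and size:
  # zfs create -V 10G logs/rawvol
  # ls -l /dev/zvol/rdsk/logs/rawvol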
Can I move the below mounts under / ?
rpool/export       /export
rpool/export/home  /export/home
It was a result of the default install...
Thanks
and would probably
call for Solaris 10 instead of OpenSolaris. Are my statements on this valid or
am I off track?
Thanks
Tony Russell
I believe I had just redirected zfs send to a file, then ftp'ed the file. I
tried that after my script had been producing the error. It had been trying to
do "zfs send | socat | zfs receive" essentially.
I'm running into a problem trying to do "zfs receive", with data being
replicated from a Solaris 10 (11/06 release) to a storage server running OS
118. Here is the error:
r...@lznas2:/backup# cat backup+mcc+use...@zn2---pre_messages_delete_20090430 |
zfs receive backup/mcc/users
cannot restore
the running OS.
Thanks
Tony
Thanks ... the -F works perfectly, and it provides a further benefit: the
client can mess with the file system as much as they want for testing purposes,
but when it comes time to synchronize each night, it will revert to the
previous state.
Thanks
-Tony
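The nightly pattern being described is roughly the following, with hypothetical dataset and host names (-F rolls the target back to the last common snapshot before applying the new increment):
  # zfs snapshot tank/data@tonight
  # zfs send -i tank/data@lastnight tank/data@tonight | ssh drhost zfs receive -F tank/data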
I am trying to keep a file system (actually quite a few) in sync across two
systems for DR purposes, but I am encountering something that I find strange.
Maybe it's not strange and I just don't understand, but I will pose it to you
fine people to help answer my question. This is all scripted, but
I'm not sure if this is the right forum as we are running Solaris 10 (not
OpenSolaris). We do have all the latest patches.
I am trying to get a better understanding of where our memory is going. The
server is a T2000 with 8 GB RAM. I understand that the ARC is a major consumer
but it is only c
SUM
zfs-bo ONLINE 0 0 0
c4t60060E801419DC0119DC01A0d0        ONLINE  0  0  0
c4t600A0B800026A5EC07FE47557497d0s0  ONLINE  0  0  0
c4t600A0B800026A5EC07FE47557497d0    ONLINE  0  0  0
This strikes me as bad; should I be able to do this?
Tha
a faster deployment method for new zones; is this supported, and if not,
when is it likely to be supported?
Thanks
Tony
its load across all disks somewhat evenly.
Can someone explain this result? I can consistently reproduce it as well.
Thanks
-Tony
Anton & Roch,
Thank you for helping me understand this. I didn't want to make too many
assumptions that were unfounded and then incorrectly relay that information
back to clients.
So if I might just repeat your statements, so my slow mind is sure it
understands, and Roch, yes your assumption i
Let me elaborate slightly on the reason I ask these questions.
I am performing some simple benchmarking, and during this a file is created by
sequentially writing 64k blocks until the 100Gb file is created. I am seeing,
and this is the exact same as VxFS, large pauses while the system reclaims t
he without messing with c_max / c_min directly
in the kernel.
-Tony
To give you fine people an update, it seems that the reason for the skewed
results shown earlier is due to Veritas' ability to take advantage of all the
free memory available on my server. My test system has 32 GB of RAM, and my test
data file is 10G. Basically, Veritas was able to cache the entir
I will retest with
a much larger file to defeat this cache (I do not want to modify my mount
options). If this then shows similar performance (I will also retest with
ZFS with the same file size) then the question will probably have more to do
with how ZFS handles file system caching.
-Tony
If anyone knows whether this is a VDbench issue not properly reporting the
information or a ZFS issue, I'd appreciate some further insights.
Thanks
-Tony