Hi all,
We have a problem of losing the permissions and ownership of the raw ZFS devices
when the pool is moved from one system to another.
The owner is an application account, and each time we fail over to another
machine we have to set the permissions and owner manually on that server.
Is this a ZFS limitation?
Thanks,
I found the bug report at:
http://bugs.opensolaris.org/view_bug.do?bug_id=6717940
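Until that bug is fixed, the usual workaround is to re-apply ownership right
after each import, e.g. from the failover script. A minimal sketch, assuming
the application account is oracle:dba and the pool is named apppool (all names
here are hypothetical):

# zpool import apppool
# chown oracle:dba /dev/zvol/rdsk/apppool/*
# chmod 660 /dev/zvol/rdsk/apppool/*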
Hi,
I am wondering if there is a guideline on how to configure ZFS on a server
with an Oracle database.
We are experiencing some slowness on writes to a ZFS filesystem. It takes about
530 ms to write 2 KB of data.
We are running Solaris 10 u5 (kernel patch 127127-11), and the back-end storage
is a RAID-5 EMC DMX.
This is a
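One commonly cited first step (e.g. in the ZFS Best Practices Guide) is to
match the dataset recordsize to the Oracle block size, since the 128K default
forces read-modify-write cycles for small database writes. A sketch with a
hypothetical dataset name; note that recordsize only affects files created
after the change, so it should be set before the data files exist:

# zfs set recordsize=8k dpool/oradata    (8 KB matching db_block_size)
# zfs get recordsize dpool/oradata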
Thank you all for the replies. I will apply the recommendations and see if there is
any change in performance.
I had seen the documents in the link, but I needed to make sure that we could
not do anything to improve the performance before going to the storage team.
Thanks again,
Vahid.
On Tue, Mar 3, 2009 at 9:4
On Wed, Mar 4, 2009 at 12:18 PM, Richard Elling wrote:
> Vahid Moghaddasi wrote:
>
>> Thank you all for the replies. I will apply the recommendations and see if there is
>> any change in performance.
>> I had seen the documents in the link, but I needed to make sure that we could
>>
Hi,
All the pools seem healthy and the ZFS file systems are all fine according to
"zpool status -x", but during boot we get the following error:
fmadm faulty also returns this:
   STATE RESOURCE / UUID
-------- ------------------------------------------------------------
degraded zfs://pool=pe09_01
         8f5e
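If the pool really is healthy, the usual cleanup is to clear the pool's error
counters and then mark the stale FMA fault repaired, using the full EVENT-ID
UUID from the fmadm faulty output (truncated to "8f5e" above). A sketch:

# zpool clear pe09_01
# fmadm faulty                 (note the full EVENT-ID)
# fmadm repair <event-id-uuid>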
Hi,
Can anybody explain why a zpool is completely destroyed and must be
restored from tape after hitting bug 6458218?
Also, why have most of the machines been OK while just one so far (but a very
high-profile one) was hit?
Is this not a matter of if, but when, we get hit... until we upgrade?
Is th
Hi all,
We need to move about 1 TB of data from one zpool on an EMC DMX-3000 to another
storage device (a DMX-3). The DMX-3 can be made visible on the same host where
the DMX-3000 is in use, or on another host.
What is the best way to transfer the data from the DMX-3000 to the DMX-3?
Is it possible to add the new d
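For reference, one option is to replicate the data with zfs send/receive onto a
new pool built from DMX-3 LUNs (a sketch; pool names are hypothetical, and
recursive replication needs a release that supports zfs send -R):

# zfs snapshot -r dmxpool@migrate
# zfs send -R dmxpool@migrate | zfs receive -d -F newpool

The replies below instead suggest zpool replace, which keeps the same pool and
swaps the LUNs underneath it.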
Thanks for your reply Jim,
The current pool consists of a few EMC LUNs, and we are moving the entire pool
to the new EMC storage with different devices.
What would be my old and new devices? Here are the relevant parts of zpool status:
        NAME                      STATE     READ WRITE CKSUM
        rd_01                     ONLINE       0     0     0
          c4t60060481878701505
Simple enough, thanks. I assume that once I start the zpool replace operation, the
original LUNs will no longer be in the rd_01 pool.
Not that I will do that, but theoretically I can perform this on a live machine
without interruption, is that right?
Thank you,
On 3/14/08, Vahid Moghaddasi wrote:
On Fri, Mar 14, 2008 at 11:26 PM, Tim wrote:
replace your LUNs one at a time:
zpool replace -f rd_01 c4t6006048187870150525244353543d0 first_lun_off_dmx-3
zpool replace -f rd_01 c4t6006048187870150525244353942d0
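While each replace runs, the old LUN stays attached in a "replacing" vdev until
resilvering completes, so the pool remains online throughout (which answers the
live-machine question above); progress can be watched with, for example:

# zpool status rd_01           (shows "resilver in progress" and % done)
# zpool iostat rd_01 5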
Hi everyone,
I am not exactly sure if this is a ZFS problem, or Java, or something else.
On a T2000 with the latest patch (120011-14), we are not able to kill a Java process;
e.g., kill or kill -9 has no effect on the process. I did an lsof on the PID and
saw over 200 open files and many are showing the foll
Thanks for your reply,
Before I used lsof, I tried pstack and truss -p, but I got the following messages:
# pstack 22861
pstack: cannot examine 22861: unanticipated system error
# truss -p 22861
truss: unanticipated system error: 22861
# pstack -F 22861
pstack: cannot examine 22861: unanticipated sy
> You are looking for mdb.
>
> echo '0t22861::pid2proc | ::walk thread | ::findstack' | mdb -k
>
There are over 120 threads in the very long output. I will post some sections
of the output here, but they mostly look alike. What can I do with this
information? Is there a way to kill 22861 at
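One way to make the long output manageable is to filter the stacks for the
functions the threads are blocked in, e.g.:

# echo '0t22861::pid2proc | ::walk thread | ::findstack' | mdb -k | grep -i zfs

If the threads are sleeping in the kernel, kill -9 cannot take effect until
they return, which would explain why the process will not die.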
Hi All,
This is a step-by-step procedure to upgrade Solaris 10 06/06 (u2) to Solaris 10
08/07 (u4).
Before any attempt to upgrade, you will need to install at least the
Recommended patch cluster dated March 2008.
- The kernel patch level of the u2 system must be 120011-14.
- The zones will have to be moved
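A quick sanity check of the prerequisite kernel level before starting (a sketch):

# uname -v                      (should report Generic_120011-14)
# showrev -p | grep 120011      (confirms the kernel patch is installed)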
I did it and it worked. Some systems had problems with the patch, but it eventually worked.
I created a raidz pool from three 70 GB disks and got a total of 200 GB out of it.
Isn't it supposed to give 140 GB? Here are some details:
# zpool status zpoll_c2raidz
  pool: zpoll_c2raidz
 state: ONLINE
 scrub: none requested
config:
        NAME            STATE     READ WRITE CKSUM
        zpoll_c2raid
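The likely explanation is that zpool status and zpool list report raw capacity
including parity, while zfs list reports usable space; with three 70 GB disks
in raidz, roughly one disk's worth goes to parity:

# zpool list zpoll_c2raidz      (about 200 GB: 3 x 70 GB raw, minus labels)
# zfs list zpoll_c2raidz        (about 140 GB usable after parity)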
Hi,
Why would I ever need to specify ZFS mounts in /etc/vfstab at all? I see
in some documents that ZFS can be defined in /etc/vfstab with fstype zfs.
Thanks.
> On Wed, Jan 10, 2007 at 08:07:59PM -0800, Vahid Moghaddasi wrote:
> >
> > Why would I ever need to specify ZFS mounts in /etc/vfstab at all? I
> > see in some documents that ZFS can be defined in /etc/vfstab with
> > fstype zfs.
> >
>
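Presumably the (truncated) answer concerns legacy mounts: ZFS normally mounts
its datasets automatically, and /etc/vfstab only comes into play when a
dataset's mountpoint is set to legacy. A minimal sketch, with hypothetical names:

# zfs set mountpoint=legacy tank/export

and the matching /etc/vfstab entry:

tank/export  -  /export  zfs  -  yes  -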
This is not really a ZFS question, but I would like to know how those ZFS demo
movies are made. They are really good for training staff and for demonstrations.
Thanks,