On Mon, Jan 11, 2010 at 10:10:37PM -0800, Lutz Schumann wrote:
> P.S. While writing this I'm wondering whether the a-card handles this case
> well? ... maybe not.
Apart from the fact that they seem to be hard to source, this is a big
question about this interesting device for me too. I hope so, since
it
Your machines won't come up running; they'll start from scratch (as if you
had hit the reset button).
If you want your machines to come back up running, you have to take VMware
snapshots, which capture the state of the running VM (memory, etc.). Typically
this is automated with solutions like VCB (VMwa
Thanks for your answer.
I asked primarily because of the mpt timeout issues I saw on the list.
I never experienced timeouts with my (personal) USAS-L8i (LSI 1068E) but feared
this issue might cause some problems with the 3081.
Anyway, thanks again.
Arnaud
-Original Message-
From: james.mc
Has anyone worked with an x4500/x4540 and know if the internal RAID controllers
have a BBU? I'm concerned that we won't be able to turn off the write cache on
the internal HDs and SSDs to prevent data corruption in case of a power failure.
Hi,
I have a MySQL instance which, if I point more load at it, suddenly goes to
100% in SYS. It can work fine for an hour, but eventually it jumps from 5-15%
CPU utilization to 100% in SYS, as shown in the mpstat output below:
# prtdiag | head
System Configuration: SUN M
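(A minimal sketch of one way to see where that system time goes, assuming
DTrace can be run as root on this box: sample kernel stacks for 30 seconds and
keep the 20 hottest.)
# prints the 20 most frequent kernel stacks after a 30-second sample
dtrace -n 'profile-1001 /arg0/ { @[stack()] = count(); }
  tick-30s { trunc(@, 20); exit(0); }'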
On Mon, Jan 11, 2010 at 01:43:27PM -0600, Gary Mills wrote:
>
> This line was a workaround for bug 6642475 that had to do with
> searching for large contiguous pages. The result was high system
> time and slow response. I can't find any public information on this
> bug, although I assume it's
I had an emergency need for 400 GB of storage yesterday and spent 8 hours
looking for a way to get iSCSI working via a QLogic QLA4010 TOE card, but was
unable to get my Windows QLogic 4050C TOE card to recognize the target. I do
have a NetApp iSCSI connection on the client
cat /etc/release
Hello!
Can anybody help me with a problem:
j...@opensolaris:~# zpool status -v
pool: green
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore the file in question if possible
Thanks for all the suggestions. Now for a strange tale...
I tried upgrading to dev build 130 and, as expected, things did not go well. All
sorts of permission errors flew by during the upgrade stage, and it would not
start X. I've heard that things installed from the contrib and extras
rep
On 12-Jan-10, at 5:53 AM, Brad wrote:
Has anyone worked with an x4500/x4540 and know if the internal RAID
controllers have a BBU? I'm concerned that we won't be able to turn
off the write cache on the internal HDs and SSDs to prevent data
corruption in case of a power failure.
A power fai
On Jan 12, 2010, at 2:53 AM, Brad wrote:
> Has anyone worked with an x4500/x4540 and know if the internal RAID
> controllers have a BBU? I'm concerned that we won't be able to turn off the
> write cache on the internal HDs and SSDs to prevent data corruption in case
> of a power failure.
Yes, w
I'm working with a Cyrus IMAP server running on a T2000 box under
Solaris 10 10/09 with current patches. Mailboxes reside on six ZFS
filesystems, each containing about 200 gigabytes of data. These are
part of a single zpool built on four iSCSI devices from our NetApp
filer.
One of these ZFS file
On Tue, 12 Jan 2010, Gary Mills wrote:
Is moving the databases (IMAP metadata) to a separate ZFS filesystem
likely to improve performance? I've heard that this is important, but
I'm not clear why this is.
There is an obvious potential benefit in that you are then able to
tune filesystem para
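As an illustration (the dataset names and the 8K recordsize below are only
examples, not from the original setup), the metadata could live in its own
filesystem with a recordsize matched to the database page size:
# sketch only: hypothetical pool/dataset names
zfs create -p tank/imap/meta
zfs set recordsize=8k tank/imap/meta   # affects newly written files only
zfs get recordsize tank/imap/meta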
Hi--
The best approach is to correct the issues that are causing these
problems in the first place. The fmdump -eV command will identify
the hardware problems that caused the checksum errors and the corrupted
files.
You might be able to use some combination of zpool scrub, zpool clear,
and remo
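A rough sketch of that sequence, using the pool name from your output (the
exact steps depend on what fmdump reports):
fmdump -eV | more        # inspect the device errors behind the checksum errors
zpool scrub green        # re-verify the pool after restoring/removing damaged files
zpool status -v green    # once the scrub finishes, review the remaining error list
zpool clear green        # clear the error counters when the pool is clean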
Hi Dan,
I'm not sure I'm following everything here but I will try:
1. How do you offline a zvol? Can you show your syntax?
You can only offline a redundant pool component, such as a file, slice,
or whole disk.
2. What component does "black" represent? Only a pool can be exported.
3. In genera
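For example, offlining applies to a device inside a redundant vdev rather than
to a zvol or a pool (the pool and device names below are placeholders):
zpool offline tank c1t2d0    # take one side of a mirror offline
zpool online tank c1t2d0     # bring it back and let it resilver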
Dan,
I see now how you might have created this config.
I tried to reproduce this issue by creating a separate pool on another
disk and a volume to attach to my root pool, but my system panics when
I try to attach the volume to the root pool.
This is on Nevada, build 130.
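A sketch of the kind of configuration described above (disk, pool, and volume
names are placeholders):
zpool create testpool c2t1d0                              # separate pool on another disk
zfs create -V 10g testpool/vol1                           # a volume in that pool
zpool attach rpool c1t0d0s0 /dev/zvol/dsk/testpool/vol1   # attach the zvol to the root pool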
Panic aside, we don't
On Tue, Jan 12, 2010 at 11:11:36AM -0600, Bob Friesenhahn wrote:
> On Tue, 12 Jan 2010, Gary Mills wrote:
> >
> >Is moving the databases (IMAP metadata) to a separate ZFS filesystem
> >likely to improve performance? I've heard that this is important, but
> >I'm not clear why this is.
>
> There is
On Tue, Jan 12, 2010 at 12:37:30PM -0800, Gary Mills wrote:
> On Tue, Jan 12, 2010 at 11:11:36AM -0600, Bob Friesenhahn wrote:
> > On Tue, 12 Jan 2010, Gary Mills wrote:
> > >
> > >Is moving the databases (IMAP metadata) to a separate ZFS filesystem
> > >likely to improve performance? I've heard t
OK, I have found the issue; however, I do not know how to get around it.
iscsiadm list target-param
Target: iqn.1986-03.com.sun:01:0003ba08d5ae.47571faa
Alias: -
Target: iqn.2000-04.com.qlogic:qla4050c.gs10731a42094.1
Alias: -
I need to attach all iSCSI targets to
iqn.2000-04.com.qlo
Hello,
I've got auto-snapshots enabled in the global zone for the home directories of
all users. Users log in to their individual zones, and their home directories
are loaded from the global zone. All works fine, except that new auto-snapshots
have no properties and therefore can't be accessed in the zones.
example from zone:
On Jan 12, 2010, at 12:37 PM, Gary Mills wrote:
> On Tue, Jan 12, 2010 at 11:11:36AM -0600, Bob Friesenhahn wrote:
>> On Tue, 12 Jan 2010, Gary Mills wrote:
>>>
>>> Is moving the databases (IMAP metadata) to a separate ZFS filesystem
>>> likely to improve performance? I've heard that this is imp
Cindys, thank you for the answer, but I need to explain some details. This pool
is new hardware for my system - 2x1TB WD Green hard drives - but the data on
this pool was copied from an old 9x300GB hard drive pool with a hardware
problem. While I copied the data there were many errors, but at the end I see
this pictur
We have a zpool made of four 512 GB iSCSI LUNs located on a network appliance.
We are seeing poor read performance from the zfs pool.
The release of solaris we are using is:
Solaris 10 10/09 s10s_u8wos_08a SPARC
The server itself is a T2000
I was wondering how we can tell if the zfs_vdev_max_pending
Hi,
I think you are saying that you copied the data on this system from a
previous system with hardware problems. It looks like the data that was
copied was corrupt, which is causing the permanent errors on the new
system (?)
The manual removal of the corrupt files, zpool scrub and zpool clear
m
> "ah" == Al Hopper writes:
ah> The main issue is that most flash devices support 128k byte
ah> pages, and the smallest "chunk" (for want of a better word) of
ah> flash memory that can be written is a page - or 128kb. So if
ah> you have a write to an SSD that only changes 1 b
On Tue, 12 Jan 2010, Ed Spencer wrote:
I was wondering how we can tell if the zfs_vdev_max_pending setting
is impeding read performance of the zfs pool? (The pool consists of
lots of small files).
If 'iostat -x' shows that svc_t is quite high, then reducing
zfs_vdev_max_pending might help.
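A sketch of how that can be checked and adjusted on Solaris 10 (the value 10 is
only an example):
iostat -xn 5                                  # watch svc_t per device under load
echo "zfs_vdev_max_pending/W0t10" | mdb -kw   # change it on the running kernel
# or persistently, via /etc/system (takes effect after a reboot):
#   set zfs:zfs_vdev_max_pending = 10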
On 12/01/2010 23:47, Bob Friesenhahn wrote:
On Tue, 12 Jan 2010, Ed Spencer wrote:
I was wondering how we can tell if the zfs_vdev_max_pending setting
is impeding read performance of the zfs pool? (The pool consists of
lots of small files).
If 'iostat -x' shows that svc_t is quite high, the
On Jan 12, 2010, at 2:54 PM, Ed Spencer wrote:
> We have a zpool made of four 512 GB iSCSI LUNs located on a network appliance.
> We are seeing poor read performance from the zfs pool.
> The release of solaris we are using is:
> Solaris 10 10/09 s10s_u8wos_08a SPARC
>
> The server itself is a T2000
>
On Tue, Jan 12, 2010 at 01:26:15PM -0700, Cindy Swearingen wrote:
> I see now how you might have created this config.
>
> I tried to reproduce this issue by creating a separate pool on another
> disk and a volume to attach to my root pool, but my system panics when
> I try to attach the volume to t
"(Caching isn't the problem; ordering is.)"
Weird - I was reading about a problem where, with SSDs (Intel X25-E), if the
power goes out and the data in the cache is not flushed, you would lose data.
Could you elaborate on "ordering"?
Richard,
"Yes, write cache is enabled by default, depending on the pool configuration."
Is it enabled for a striped (mirrored configuration) zpool? I'm asking because
of a concern I've read on this forum about a problem with SSDs (and disks)
where, if a power outage occurs, any data in cache woul
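If it helps, the per-drive write cache state can be inspected (and toggled) from
format's expert mode on drives that support it; an interactive sketch, not
specific to any one drive:
format -e           # select the disk, then at the prompts:
# format> cache
# cache> write_cache
# write_cache> display    (or enable / disable)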
On 12-Jan-10, at 10:40 PM, Brad wrote:
"(Caching isn't the problem; ordering is.)"
Weird I was reading about a problem where using SSDs (intel x25-e)
if the power goes out and the data in cache is not flushed, you
would have loss of data.
Could you elaborate on "ordering"?
ZFS integri