Can you do an iostat -xCzn 3 from the start of the test until it drops speed, please?
Can you also show:
echo "::interrupts" ¦ mdb -k
On 16 Nov 2010 01:45, "Louis Carreiro" wrote:
Thanks for pointing me towards that site! Saying that "txg_synctime_ms"
controls ZFS's breathing was how I was thinking abou
Sridhar,
You have switched to a new disruptive filesystem technology, and it has
to be disruptive in order to break out of all the issues older
filesystems have, and give you all the new and wonderful features.
However, you are still trying to use old filesystem techniques with it,
which is
On 11/16/10 07:19 PM, sridhar surampudi wrote:
Hi,
How would it help with instant recovery or point-in-time recovery, i.e.
restoring data at the device/LUN level?
Why would you want to? If you are sending snapshots to another pool,
you can do instant recovery at the pool level.
Currently
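A minimal sketch of the "sending snapshots to another pool" approach mentioned above (pool, dataset, and snapshot names are illustrative):

  # replicate a point-in-time snapshot of the dataset tree to a second pool
  zfs snapshot -r tank/data@before-change
  zfs send -R tank/data@before-change | zfs receive -Fd backup

  # later, point-in-time recovery on the source is just a rollback to that snapshot
  zfs rollback -r tank/data@before-change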
Hi,
How would it help with instant recovery or point-in-time recovery, i.e.
restoring data at the device/LUN level?
Currently it is easy, as I can unwind the primary device stack, restore data
at the device/LUN level, and recreate the stack.
Thanks & Regards,
sridhar.
> On Nov 15, 2010, at 8:32 AM, Bryan Horstmann-Allen wrote:
>
> > +--
> > | On 2010-11-15 10:21:06, Edward Ned Harvey wrote:
> > |
> > | Backups.
> > |
> > | Even if you upgrade your hardware to better stuff... with ECC
To add:
Even if you have great faith in ZFS, a backup helps in dealing with the unknown.
Consider:
- multiple disk failures that you are somehow unable to respond to.
- hardware failures (power supplies, motherboard, RAM).
- damage to the building.
- having to recreate everything elsewhere - even a
On 15/11/10 9:28 PM, Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Toby Thain
>>
>> The corruption will at least be detected by a scrub, even in cases where it
>> cannot be repaired.
>
> Not necessarily. Let's
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Toby Thain
>
> The corruption will at least be detected by a scrub, even in cases where it
> cannot be repaired.
Not necessarily. Let's suppose you have some bad memory, and no ECC. Your
app
On 15/11/10 7:54 PM, Bryan Horstmann-Allen wrote:
> +--
> | On 2010-11-15 11:27:02, Toby Thain wrote:
> |
> | > Backups are not going to save you from bad memory writing corrupted data to
> | > disk.
> |
> | It is, how
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Bryan Horstmann-Allen
> |
> | I am a newbie on Solaris.
> | We recently purchased a Sun Sparc M3000 server. It comes with 2 identical
> | hard drives. I want to setup a raid 1. After searching on
Almost! It seems like it held out a bit further than last time. Now "arcsz"
hits 2G (matching 'c'). But it still drops off. It started at 5.6GB/min and
fell off to less than 700MB/min.
A snippet of my arcstat.pl output looks like the following:
Time read miss miss% dmis dm% pmis pm% mm
I'm currently having a few problems with my storage server. Server specs are -
Open Solaris snv_134
Supermicro X8DTi motherboard
Intel Xeon 5520
6x 4GB DDR3
LSI RAID Card - running 24x 1.5TB SATA drives
Adaptec 2405 - running 4x Intel SSD X25-E's
Boots from an 8GB USB flash drive
The initial problem
Edward,
I recently installed a 7410 cluster, which had added Fiber Channel HBAs.
I know the site also has Blade 6000s running VMware, but no idea if they
were planning to run fiber to those blades (or even had the option to do so).
But perhaps FC would be an option for you?
Mark
On Nov 12, 201
From the OpenSolaris ZFS FAQ page:
http://hub.opensolaris.org/bin/view/Community+Group+zfs/faq
> If you want to use a hardware-level backup or snapshot feature instead of
the ZFS snapshot feature, then you will need to do the following steps:
* zpool export pool-name
* Hardware-level sn
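As a rough sketch of the shape of that procedure (pool name illustrative), it amounts to export, hardware snapshot, import:

  # close the pool so the array sees a consistent, quiesced image
  zpool export tank
  # ...take the hardware-level snapshot or backup of the underlying LUNs here...
  # then bring the pool back online
  zpool import tank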
Hi,
This may seem odd, but I had errors just like this coming from a
faulty CD-ROM drive.
The drive in question was unable to read the entire media, resulting
in the following:
1. The Live CD boots up fine, though it probably took longer than expected.
Installation appears to be successful, some drive related
On Nov 2, 2010, at 12:10 AM, Ian Collins wrote:
> On 11/ 2/10 08:33 AM, Mark Sandrock wrote:
>>
>>
>> I'm working with someone who replaced a failed 1TB drive (50% utilized),
>> on an X4540 running OS build 134, and I think something must be wrong.
>>
>> Last Tuesday afternoon, zpool status re
+--
| On 2010-11-15 11:27:02, Toby Thain wrote:
|
| > Backups are not going to save you from bad memory writing corrupted data to
| > disk.
|
| It is, however, a major motive for using ZFS in the first place.
In this con
Points to check: iostat, fsstat, zilstat, mpstat, prstat. Check for sw
interrupt sharing; try disabling ohci.
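A rough sketch of checking those two items (interval illustrative; removing the ohci driver is drastic, so only try it if nothing depends on USB 1.x ports):

  # watch which CPUs are servicing which device interrupts during the slowdown
  intrstat 5

  # one way to keep the USB OHCI driver from attaching (takes effect after a reboot)
  rem_drv ohci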
On 16 Nov 2010 00:27, "Khushil Dep" wrote:
> That controls zfs breathing. I'm on a phone writing this, so I hope you won't
> mind me pointing you to
>
listware.net/201005/opensolaris-zfs/11556
That controls zfs breathing. I'm on a phone writing this, so I hope you won't
mind me pointing you to
listware.net/201005/opensolaris-zfs/115564-zfs-discuss-small-stalls-slowing-down-rsync-from-holding-network-saturation-every-5-seconds.html
On 16 Nov 2010 00:20, "Louis Carreiro" wrote:
Almost! It
On Nov 15, 2010, at 4:15 PM, Erik Trimble wrote:
> On 11/15/2010 2:55 PM, Matt Banks wrote:
>> I asked this on the x86 mailing list (and got a "it should work" answer),
>> but this is probably more of the appropriate place for it.
>>
>> In a 2 node Sun Cluster (3.2 running Solaris 10 u8, but co
Is there an up to date reference following on from
http://hub.opensolaris.org/bin/view/Community+Group+zfs/24
listing what's in the zpool versions up to the current 31?
--
Ian.
Set your txg_synctime_ms to 0x3000 and retest please?
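A hedged sketch of the two usual ways to change it, assuming the tunable on your build is named zfs_txg_synctime_ms (older builds expose zfs_txg_synctime in seconds instead); 0x3000 is 12288 ms:

  # persistent: add this line to /etc/system and reboot
  set zfs:zfs_txg_synctime_ms = 0x3000

  # or poke the running kernel for a quick test (reverts on reboot)
  echo "zfs_txg_synctime_ms/W 0x3000" | mdb -kw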
On 15 Nov 2010 23:23, "Louis" wrote:
> Hey all!
>
> Recently I've decided to implement OpenSolaris as a target for BackupExec.
>
> The server I've converted into a "Storage Appliance" is an IBM x3650 M2 w/
> ~4TB of on board storage via ~10 loca
Hey all!
Recently I've decided to implement OpenSolaris as a target for BackupExec.
The server I've converted into a "Storage Appliance" is an IBM x3650 M2 w/ ~4TB
of onboard storage via ~10 local SATA drives, and I'm using OpenSolaris
snv_134. I'm using a QLogic 4Gb FC HBA w/ the QLT driver an
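In case it helps anyone following along, a rough COMSTAR sketch of how a zvol typically gets exposed over FC once the stmf service is enabled and the HBA port is in target (qlt) mode; the size, names, and GUID below are illustrative:

  # back the LUN with a zvol
  zfs create -V 2T tank/backupexec-lun0

  # register it as a SCSI logical unit and expose it to all initiators (no view restrictions)
  stmfadm create-lu /dev/zvol/rdsk/tank/backupexec-lun0
  stmfadm add-view 600144F0XXXXXXXXXXXXXXXXXXXXXXXX   # GUID as printed by create-lu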
On 11/15/2010 2:55 PM, Matt Banks wrote:
I asked this on the x86 mailing list (and got a "it should work" answer), but
this is probably more of the appropriate place for it.
In a 2 node Sun Cluster (3.2 running Solaris 10 u8, but could be running u9 if
needed), we're looking at moving from VXF
Do you need registered ECC, or will non-registered ECC be enough to get around
the issue you described?
On Mon, 2010-11-15 at 16:48 +0700, VO wrote:
> Hello List,
>
> I recently got bitten by a "panic on `zpool import`" problem (same CR
> 6915314), while testing a ZFS file server. Seems the pool is pretty m
I asked this on the x86 mailing list (and got an "it should work" answer), but
this is probably the more appropriate place for it.
In a 2 node Sun Cluster (3.2 running Solaris 10 u8, but could be running u9 if
needed), we're looking at moving from VXFS to ZFS. However, quite frankly,
part of
On Mon, Nov 15, 2010 at 09:13:42AM -0800, Ray Van Dolson wrote:
> We need to move the disks comprising our mirrored rpool on a Solaris 10
> U9 x86_64 (not SPARC) system.
>
> We'll be relocating both drives to a different controller in the same
> system (should go from c1* to c0*).
>
> We're curio
+--
| On 2010-11-15 08:48:55, Frank wrote:
|
| I am a newbie on Solaris.
| We recently purchased a Sun Sparc M3000 server. It comes with 2 identical
hard drives. I want to setup a raid 1. After searching on google, I fou
On 11/15/10 10:50 PM, sridhar surampudi wrote:
Hi Andrew,
Regarding your point
-
You will not be able to access the hardware
snapshot from the system which has the original zpool mounted, because
the two zpools will have the same pool GUID (there's an RFE outstanding
on fixing this).
Hi Cindy,
> I haven't seen this in a while but I wonder if you just need to set the
> bootfs property on your new root pool and/or reapply the bootblocks.
I've created the new BE using beadm create, which did this for me:
$ zpool get bootfs rpool2
NAME    PROPERTY  VALUE
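For reference, a hedged sketch of the two things Cindy mentions, assuming an x86 box with GRUB (the BE and disk names are illustrative; SPARC would use installboot instead):

  # point the pool at the new boot environment's root dataset
  zpool set bootfs=rpool2/ROOT/mynewbe rpool2

  # reapply the boot blocks to the disk backing rpool2
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0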
On Mon, November 15, 2010 14:14, Darren J Moffat wrote:
> Today Oracle Solaris 11 Express was released and is available for
> download[1], this release includes on disk encryption support for ZFS.
>
> Using ZFS encryption support can be as easy as this:
>
> # zfs create -o encryption=on tank/d
Today Oracle Solaris 11 Express was released and is available for
download[1]; this release includes on-disk encryption support for ZFS.
Using ZFS encryption support can be as easy as this:
# zfs create -o encryption=on tank/darren
Enter passphrase for 'tank/darren':
Enter again:
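As a hedged follow-up (paths and dataset name illustrative), the wrapping key does not have to come from a prompt; the keysource property can point at a key file instead:

  # generate a 256-bit AES key and use it as the wrapping key for a new dataset
  pktool genkey keystore=file outkey=/root/darren.key keytype=aes keylen=256
  zfs create -o encryption=on -o keysource=raw,file:///root/darren.key tank/darren2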
On Sun, Nov 14, 2010 at 11:45 PM, sridhar surampudi
wrote:
> Thank you for the details. I am aware of export/import of a zpool, but with
> zpool export the pool is not available for writes.
>
> Is there a way I can freeze a ZFS file system at the file system level?
> As an example, for the JFS file system using
We need to move the disks comprising our mirrored rpool on a Solaris 10
U9 x86_64 (not SPARC) system.
We'll be relocating both drives to a different controller in the same
system (should go from c1* to c0*).
We're curious as to what the best way is to go about this. We'd love
to be able to just
I am a newbie on Solaris.
We recently purchased a Sun SPARC M3000 server. It comes with 2 identical hard
drives. I want to set up a RAID 1. After searching on Google, I found that the
hardware RAID was not working with the M3000. So I am here to look for help on how
to set up ZFS to use RAID 1. Curre
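A rough sketch of the usual ZFS answer, assuming Solaris is already installed on a ZFS root pool on the first disk (device names illustrative; give the second disk a matching slice layout with format/fmthard first):

  # attach the second disk to turn the root pool into a two-way mirror
  zpool attach rpool c0t0d0s0 c0t1d0s0

  # SPARC: put a boot block on the newly attached disk so it is bootable too
  installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0

  # watch the resilver complete
  zpool status rpool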
Just went to Oracle's website and noticed that you can download Solaris 11
Express.
On 15/11/10 10:32 AM, Bryan Horstmann-Allen wrote:
> +--
> | On 2010-11-15 10:21:06, Edward Ned Harvey wrote:
> |
> | Backups.
> |
> | Even if you upgrade your hardware to better stuff... with ECC and so on ...
> | There
On Nov 15, 2010, at 8:32 AM, Bryan Horstmann-Allen wrote:
> +--
> | On 2010-11-15 10:21:06, Edward Ned Harvey wrote:
> |
> | Backups.
> |
> | Even if you upgrade your hardware to better stuff... with ECC and so on ...
>
Are those really your requirements? What is it that you're trying to
accomplish with the data? Make a copy and provide it to another host?
On 11/15/2010 5:11 AM, sridhar surampudi wrote:
Hi, I am looking along similar lines;
my requirement is
1. create a zpool on one or many devices ( LUNs ) from a
+--
| On 2010-11-15 10:21:06, Edward Ned Harvey wrote:
|
| Backups.
|
| Even if you upgrade your hardware to better stuff... with ECC and so on ...
| There is no substitute for backups. Period. If you care about your da
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of VO
>
> The server hardware is pretty ghetto with whitebox components such as
> non-ECC RAM (cause of the pool loss). I know the hardware sucks but
> sometimes non-technical people don't underst
sridhar surampudi wrote:
Hi, I am looking along similar lines;
my requirement is
1. create a zpool on one or many devices (LUNs) from an array (the array can be
IBM or HP EVA or EMC etc., not SS7000).
2. Create file systems on the zpool
3. Once file systems are in use (I/O is happening) I need to take
Hi, I am looking along similar lines;
my requirement is
1. create a zpool on one or many devices (LUNs) from an array (the array can be
IBM or HP EVA or EMC etc., not SS7000).
2. Create file systems on the zpool
3. Once file systems are in use (I/O is happening) I need to take a snapshot at
array level
a
Hello List,
I recently got bitten by a "panic on `zpool import`" problem (same CR
6915314) while testing a ZFS file server. It seems the pool is pretty much
gone; I did try
- zfs:zfs_recover=1 and aok=1 in /etc/system
- `zpool import -fF -o ro`
to no avail. I don't think I will be taking the time tryi
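For anyone hitting the same CR, a minimal sketch of what those /etc/system entries look like (both are undocumented debug switches; remove them once the recovery attempt is over):

  * let the import continue past some otherwise-fatal ZFS errors, and turn assertions into warnings
  set zfs:zfs_recover = 1
  set aok = 1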
Hi Andrew,
Regarding your point
-
You will not be able to access the hardware
snapshot from the system which has the original zpool mounted, because
the two zpools will have the same pool GUID (there's an RFE outstanding
on fixing this).
Could you please pr
Hi Cyril,
I also need to change the GUID of a zpool (again because cloning at the LUN level
has produced a duplicate).
Do you have a solution?
CD
sridhar surampudi wrote:
Hi Darren,
In short, I am looking for a way to freeze and thaw a ZFS file system so that, for a
hardware snapshot, I can do:
1. run zfs freeze
2. run the hardware snapshot on the devices belonging to the zpool where the given file system is residing.
3. run zfs thaw
Unlike other filesys
On Sun, 14 Nov 2010 23:52:52 PST, sridhar surampudi
wrote:
>Hi Darren,
>
>In short, I am looking for a way to freeze and thaw a ZFS file system so that for a
>hardware snapshot, I can do
>1. run zfs freeze
>2. run the hardware snapshot on devices belonging to the zpool where the given file
>system is resid
Hi,
we have the same issue with ESX(i) and Solaris on the storage side.
Link aggregation does not work with ESX(i) (I tried a lot with it for
NFS); when you want to use more than one 1G connection, you must
configure one network or VLAN and at least one share for each connection.
But this is also limited