Could you show us 'iostat -En' please?
On 21 Oct 2010 13:31, "Harry Putnam" wrote:
Ian Collins writes:
> On 10/21/10 03:47 PM, Harry Putnam wrote:
>> build 133
>> zpool version 22
>>
>> I'm getting:
>>
>> zpool status:
NAME        STATE     READ WRITE CKSUM
>> z3
We had the same issue with a 24-core box a while ago. Check your L2 cache
hits and misses. Sometimes more cores does not mean more performance. DTrace
is your friend!
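For reference, a minimal way to read those counters (a sketch; the kstat module/name pair `zfs:0:arcstats` is assumed to match your OpenSolaris build):

```shell
# Read the L2ARC hit/miss counters from the ZFS ARC kstats.
kstat -p zfs:0:arcstats:l2_hits
kstat -p zfs:0:arcstats:l2_misses
# Sample twice a few seconds apart: a climbing miss count while
# throughput stalls suggests the working set is overflowing the L2ARC.
```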
On 30 Oct 2010 14:12, "zfs user" wrote:
Here is a total guess - but what if it has to do with zfs processing running
on one CPU ha
If you take a look at http://www.brendangregg.com/cachekit.html you will see
some DTrace yummyness which should let you tell...
---
W. A. Khushil Dep - khushil@gmail.com - 07905374843
Visit my blog at http://www.khushil.com/
On 30 October 2010 15:49, Eugen Leitl wrote:
> On Sat,
If you do a dd to the storage from the heads do you still get the same
issues?
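As a sketch of that local baseline (pool name `z3` taken from earlier in the thread; note that /dev/zero writes compress away if compression is on):

```shell
# Sequential-write sanity test run directly on the head, bypassing NFS/iSCSI.
dd if=/dev/zero of=/z3/ddtest bs=1M count=4096
# If local throughput is healthy but clients are slow, suspect the
# network/protocol layer rather than the pool.
rm /z3/ddtest
```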
On 31 Oct 2010 12:40, "Ian D" wrote:
I get that more cores doesn't necessarily mean better performance, but I doubt
that both the latest AMD CPUs (the Magny-Cours) and the latest Intel CPUs
(the Beckton) suffer from incr
Check your TXG settings; it could be a timing issue, a Nagle's algorithm issue,
or a TCP buffer issue. Check the system's setup properties.
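On the Nagle and buffer side, something like this on Solaris (a sketch; tunable names as on OpenSolaris-era TCP):

```shell
# Inspect TCP tunables that commonly hurt iSCSI/NFS latency.
ndd -get /dev/tcp tcp_naglim_def   # 1 effectively disables Nagle coalescing
ndd -get /dev/tcp tcp_xmit_hiwat   # default send buffer (bytes)
ndd -get /dev/tcp tcp_recv_hiwat   # default receive buffer (bytes)
```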
On 1 Nov 2010 19:36, "SR" wrote:
What if you connect locally via NFS or iscsi?
SR
--
This message posted from opensolaris.org
How is your current system set up, as Chris asked? What's the config of the new
system? Separate disk array and head nodes, or all-in-one boxes?
On 5 November 2010 13:15, Sriram Nara
Can you send the output of iostat -xCzn as well as fmadm faulty please? Is this
an E2 chassis? Are you using interposers?
On 6 Nov 2010 18:28, "Dave Pooser" wrote:
My setup: A SuperMicro 24-drive chassis with Intel dual-processor
motherboard, three LSI SAS3081E controllers, and 24 SATA 2TB hard dri
Sorry, I meant iostat -En - I'm looking for errors.
On 6 Nov 2010 18:56, "Dave Pooser" wrote:
On 11/6/10 Nov 6, 1:35 PM, "Khushil Dep" wrote:
> Is this an E2 chassis? Are you using interposers?
No, it's an SC846A chassis. There are no interposers or expanders; si
hba it was connected to is on the
blink.
Restore from backup might be inevitable, unless you're snapping and
auto-syncing to another system?
On 6 Nov 2010 19:25, "Dave Pooser" wrote:
On 11/6/10 Nov 6, 2:21 PM, "Khushil Dep" wrote:
> Sorry I meant iostat -En ...
# iostat -E
The fmdump output will let you get the serial of the disk and identify the
controller it's on, so you can swap it out and check.
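A sketch of that sequence (exact fmdump field names vary by build):

```shell
fmadm faulty                  # summarized faults, with FRU/location strings
fmdump -eV | grep -i serial   # raw error telemetry often carries the disk serial
iostat -En                    # per-device error counters plus vendor/serial info
```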
On 6 Nov 2010 19:45, "Dave Pooser" wrote:
On 11/6/10 Nov 6, 2:35 PM, "Khushil Dep" wrote:
> Similar to what I've seen...
It's been up for a
I think you may be wanting the same kind of thing that NexentaStor does when
it upgrades - it takes a snapshot and marks it as a checkpoint in case the
upgrade fails - right? I think you may have to snap, then clone from that and
use beadm, though it's something you should play with...
I would also add that you should try the NexentaStor Enterprise demo - fully
functional for 45 days. If you find a partner they will most likely be able
to provide you a managed trial. I'd be interested to hear what parts of the
GUI didn't work for you.
Hi,
# savecore -vf vmdump.0
This should produce two files: unix.0 and vmcore.0
Now we use mdb on these as follows:
# mdb unix.0 vmcore.0
Now, when presented with the '>' prompt, type "::status" and send us all the
output, please.
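Put together, the whole sequence looks roughly like this (a sketch, assuming the dump is vmdump.0 in the crash directory):

```shell
savecore -vf vmdump.0      # expands the compressed dump into unix.0 + vmcore.0
mdb unix.0 vmcore.0 <<'EOF'
::status
::stack
EOF
```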
Think those might have been the thumper screen shots? Take a look at
nexentastor
On 13 Nov 2010 20:12, "Brad Henderson" wrote:
> I am new to OpenSolaris and I have been reading about and seeing
screenshots of the ZFS Administration Console. I have been looking at the
dates on it and every pos
Wait I thought the x4/x7 was the thumper series?
On 13 Nov 2010 21:54, "Erik Trimble" wrote:
> On 11/13/2010 1:06 PM, Khushil Dep wrote:
>>
>> Think those might have been the thumper screen shots? Take a look at
>> nexentastor
>>
>> On 13 N
Ok so what range was the thumper?
On 13 Nov 2010 22:00, "Tim Cook" wrote:
> On Sat, Nov 13, 2010 at 3:52 PM, Erik Trimble wrote:
>
>> On 11/13/2010 1:06 PM, Khushil Dep wrote:
>>
>> Think those might have been the thumper screen shots? Take a look at
>> ne
Now I feel stupid lol. Thanks for the clarification!
On 13 Nov 2010 22:30, "Erik Trimble" wrote:
> On 11/13/2010 1:56 PM, Khushil Dep wrote:
>>
>> Wait I thought the x4/x7 was the thumper series?
>>
>
> Nope. Thumper is specifically the codename for
Set your txg_synctime_ms to 0x3000 and retest please?
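For reference, 0x3000 is 12288 decimal, i.e. roughly 12 seconds. A sketch of changing it on a live system (the tunable name `zfs_txg_synctime_ms` is assumed to match your build - verify with mdb first):

```shell
# 0x3000 hex = 12288 decimal (milliseconds).
echo 'zfs_txg_synctime_ms/W 0x3000' | mdb -kw
# To persist across reboots, add to /etc/system:
#   set zfs:zfs_txg_synctime_ms = 12288
```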
On 15 Nov 2010 23:23, "Louis" wrote:
> Hey all!
>
> Recently I've decided to implement OpenSolaris as a target for BackupExec.
>
> The server I've converted into a "Storage Appliance" is an IBM x3650 M2 w/
~4TB of on board storage via ~10 loca
t have mentioned values lower than 12288 ms.
On Mon, Nov 15, 2010 at 6:35 PM, Khushil Dep wrote:
>
> Set your txg_synct...
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Points to check are iostat, fsstat, zilstat, mpstat, prstat. Check for software
interrupt sharing; disable ohci.
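A sketch of the interrupt-sharing check (x86 mdb dcmd names assumed):

```shell
echo '::interrupts' | mdb -k   # which interrupt vectors land on which CPU
intrstat 5                     # per-device interrupt load, 5-second samples
# If the usb ohci instance shares a vector with the NIC/HBA, moving or
# disabling it (rem_drv, on a test box first) takes it out of the picture.
```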
On 16 Nov 2010 00:27, "Khushil Dep" wrote:
> That controls zfs breathing. I'm on a phone writing this so I hope you
won't
> mind me pointing you to
>
listwa
so I'm not going to be able to do much
more tonight (I'm working remotely).
I do notice that when the ARC size reaches capacity, that's when things slow
down. Also, it never appears to drop after I kill the IO. If I stop all IO,
arcstat shows all numbers but the arcsz drop. Should arc
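On the arcsz point: the ARC only gives memory back under pressure, so a flat size after I/O stops is expected. A sketch of the relevant counters:

```shell
kstat -p zfs:0:arcstats:size    # current ARC size ("arcsz" in arcstat)
kstat -p zfs:0:arcstats:c       # adaptive target the ARC grows/shrinks toward
kstat -p zfs:0:arcstats:c_max   # hard ceiling (zfs_arc_max, if set)
```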
I'm not sure that leaving the ZIL enabled whilst replacing the log devices
is a good idea?
Also - I had no idea Elvis was coming back tomorrow! Sweet. ;-)
On 19 November 2010 14:57,
Check dmesg and the system logs for any output concerning those devices, and
re-seat one then the other, just in case.
On 20 December 2010 13:10, Paul Piscuc wrote:
> Hi, this is curr
We've always bought 2.5" drives and adapters for the Super Micro cradles -
works well, no issues to report here.
Normally Intels or Samsungs, though we also use STEC.
On 22 Dece
"Friends don't let friends disable the ZIL" - right Richard? :-)
On 24 Dec 2010 20:34, "Richard Elling" wrote:
Do you have SSD in? Which ones and any errors on those?
On 26 Dec 2010 13:35, "Jackson Wang" wrote:
> Dear Richard,
> Thanks for your reply.
>
> Actually there is NO other disk/controller fault in this system. An
> engineer of NexentaStor, Andrew, just add a line in /kernel/drv/sd.conf of
> "
We do have a major commercial interest - Nexenta. It's been quiet but I do
look forward to seeing something come out of that stable this year? :-)
On 5 January 2011 14:34, Edwar
o know a whole lot of *nix land.
My 2p. YMMV.
---
W. A. Khushil Dep - khushil@gmail.com - 07905374843
Windows - Linux - Solaris - ZFS - Nexenta - Development - Consulting &
Contracting
http://www.khushil.com/ - http://www.facebook.com/GlobalOverlord
On 6 January 2011 00:14, Edwa
n an argument but it's always interesting to
find out why someone went for certain solutions over others.
My 2p. YMMV.
*goes off to collect cheque from Nexenta* ;-)
You should also check out VA Technologies (
http://www.va-technologies.com/servicesStorage.php) in the UK, which supply a
range of JBODs. I've used these in very large deployments with no JBOD-related
failures to date. Interestingly, they also list CoRAID boxes.
Could you not also pin processes to cores? Preventing switching should help
too. I've done this for performance reasons before on a 24-core Linux box.
Sent from my HTC Desire
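On Solaris the binding is done with pbind (a sketch; PID 1234 is a placeholder):

```shell
pbind -b 2 1234    # bind PID 1234 to CPU 2
pbind -q 1234      # query the current binding
pbind -u 1234      # remove the binding
# psrset(1M) goes further: fence off a dedicated processor set
# so nothing else is scheduled on those CPUs.
```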
On 16 Feb 2011 05:12, "Richard Elling" wrote:
> On Feb 15, 2011, at 7:46 PM, ian W wrote:
>
>> Thanks..
>>
>> given this box
I'd back that. X25-Es are great, but also look at the STEC ZeusIOPS as well as
the new Intels.
The adage I adhere to with ZFS features is "just because you can doesn't mean
you should!" I would suspect that with that many filesystems the normal ZFS
tools would also take an inordinate length of time to complete their
operations - they scale according to size.
Generally snapshots are quic