Hi all,
Recently I got myself a new machine (Dell R710) with 1 internal Dell
SAS/i and 2 Sun HBAs (non-RAID).
From time to time this system just freezes, and I noticed that it always
freezes after this message (shown in /var/adm/messages):
scsi: [ID 107833 kern.warning] WARNING:
/p...@0,0/pci
overo wrote:
> -Original Message-
> From: zfs-discuss-boun...@opensolaris.org
> [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Bruno Sousa
> Sent: 5 March 2010 10:34
> To: ZFS filesystem discussion list
> Subject: [zfs-discuss] snv_133 mpt0 freezing m
Seems like it... and the workaround doesn't help.
Bruno
On 5-3-2010 16:52, Mark Ogden wrote:
> Bruno Sousa on Fri, Mar 05, 2010 at 09:34:19AM +0100 wrote:
>
>> Hi all,
>>
>> Recently i got myself a new machine (Dell R710) with 1 internal Dell
>> SAS/i and
Hi all,
Today a new message appeared on my system and another freeze happened.
The message is:
Mar 9 06:20:01 zfs01 failed to configure smp w50016360001e06bf
Mar 9 06:20:01 zfs01 mpt: [ID 201859 kern.warning] WARNING: smp_start
do passthru error 16
Mar 9 06:20:01 zfs01 scsi
Well... I can only say "well said".
BTW, I have a raidz2 pool with 9 vdevs of 4 disks each (enterprise SATA
disks) and a scrub of the pool takes between 12 and 39 hours, depending
on the workload of the server.
So far it's acceptable, but every case is different, I think...
Bruno
On 16-3-2010 14:04, Khyron
Hi,
As far as I know this is "normal" behaviour in ZFS...
So what we need is some sort of "rebalance" task that moves data around
the multiple vdevs in order to achieve the best performance possible...
Take a look at
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6855425
Bruno
On 24-3-
Hi,
Actually the idea of having the ZFS code inside HW RAID controllers
does seem quite interesting. Imagine the possibility of having any OS
with RAID volumes backed by all the good aspects of ZFS, especially
the checksums and raidz versus the "RAID-5 write hole"...
I also consider the
Hi,
Have you never experienced any faulted drives, or something similar? So far I
only saw imbalance if the set of vdevs changed, if a hot spare is used, and I
think even during the replacement of one disk of a raidz2 group.
I
Bruno
On 25-3-2010 9:46, Ian Collins wrote:
> On 03/25/10 09:32 PM, Bruno So
e i'm talking a huge mistake.
If someone with more knowledge about ZFS would like to comment, please
do so... it's always a learning experience.
Bruno
On 25-3-2010 11:53, Ian Collins wrote:
> On 03/25/10 11:23 PM, Bruno Sousa wrote:
>> On 25-3-2010 9:46, Ian Collins wrote:
>
Hi all,
Yet another question regarding raidz configuration...
Assuming a system with 24 disks available, with reliability as the
crucial factor, usable space second, and performance as the last
criterion, what would be the preferable configuration?
Should it be :
re indexes...) .
So far the system seems to behave quite nicely... but then again we are
just getting started with it.
Thanks for the input,
Bruno
On 25-3-2010 16:46, Freddie Cash wrote:
> On Thu, Mar 25, 2010 at 6:28 AM, Bruno Sousa wrote:
>
> Assumi
On 25-3-2010 15:28, Richard Jahnel wrote:
> I think I would do 3xraidz3 with 8 disks and 0 hotspares.
>
> That way you have a better chance of resolving bit rot issues that might
> become apparent during a rebuild.
>
Indeed, raidz3... I didn't consider it.
In short, a raidz3 could sustain 3 brok
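For illustration, here is a hedged sketch of the 3x8 raidz3 layout described above; the pool name and device names are hypothetical:
zpool create tank \
  raidz3 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 \
  raidz3 c1t8d0 c1t9d0 c1t10d0 c1t11d0 c1t12d0 c1t13d0 c1t14d0 c1t15d0 \
  raidz3 c1t16d0 c1t17d0 c1t18d0 c1t19d0 c1t20d0 c1t21d0 c1t22d0 c1t23d0
Each 8-disk raidz3 vdev can lose any 3 of its disks before data loss, at the cost of 3 disks of parity per vdev.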
c3t0d0  ONLINE  0 0 0
So... what am I missing here? Just a bad example in the Sun documentation
regarding ZFS?
Bruno
On 25-3-2010 20:10, Freddie Cash wrote:
> On Thu, Mar 25, 2010 at 11:47 AM, Bruno Sousa wrote:
>
> What do
Hi all,
The more reading and experimenting I do with ZFS, the more I like this
stack of technologies.
Since we all like to see real figures from real environments, I might as
well share some of my numbers...
The replication has been done with zfs send / zfs receive, but
piped through mbuffer (h
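For reference, a hedged sketch of that kind of mbuffer pipeline; the host name, port, dataset/snapshot names and buffer sizes below are only illustrative:
# on the receiving host: listen on TCP port 9090 and feed zfs receive
mbuffer -s 128k -m 1G -I 9090 | zfs receive -F tank/backup
# on the sending host: stream the snapshot through mbuffer over the network
zfs send tank/data@snap1 | mbuffer -s 128k -m 1G -O receiver:9090
The buffer smooths out the bursty nature of zfs send, so the network link stays busy instead of stalling along with the disks.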
Thanks for the tip. BTW, is there any advantage to JBOD over simple volumes?
Bruno
On 25-3-2010 21:08, Richard Jahnel wrote:
> BTW, if you download the solaris drivers for the 52445 from adaptec, you can
> use jbod instead of simple volumes.
>
oes seem quite good.
However, like I said, I would like to hear results from other people...
Thanks for the time.
Bruno
On 25-3-2010 21:52, Ian Collins wrote:
> On 03/26/10 08:47 AM, Bruno Sousa wrote:
>> Hi all,
>>
>> The more readings i do about ZFS, and experime
Hi,
I think that in this case the CPU is not the bottleneck, since I'm not
using SSH.
However, my 1 Gb network link probably is the bottleneck.
Bruno
On 26-3-2010 9:25, Erik Ableson wrote:
>
> On 25 mars 2010, at 22:00, Bruno Sousa wrote:
>
>> Hi,
>>
>> Indeed the
will still deliver a good
performance.
And what a relief to know that I'm not alone when I say that storage
management is part science, part art and part "voodoo magic" ;)
Cheers,
Bruno
On 25-3-2010 23:22, Ian Collins wrote:
> On 03/26/10 10:00 AM, Bruno Sousa wrote:
>
> [Boy
Hello all,
Currently I'm evaluating a system with an Adaptec 52445 RAID HBA, and
the driver supplied by OpenSolaris doesn't support JBOD drives.
I'm running snv_134, but when I try to uninstall the SUNWaac driver I
get the following error:
pkgrm SUNWaac
The following package is currently ins
On Mon, Mar 29, 2010 at 4:25 PM, Bruno Sousa wrote:
>
>> Hello all,
>>
>> Currently i'm evaluating a system with an Adaptec 52445 Raid HBA, and
>> the driver supplied by Opensolaris doesn't support JBOD drives.
>> I'm running snv_134 but when
Hi all,
I recently had to install the Adaptec AAC driver on an OpenSolaris
system, and given that it took a couple of steps, I might
as well share them here; hopefully someone will benefit from it.
So here goes:
1 - The Adaptec AAC driver is needed when we have drives configured as
JBOD
On 30-3-2010 0:39, Nicolas Williams wrote:
> One really good use for zfs diff would be: as a way to index zfs send
> backups by contents.
>
> Nico
>
Any estimate of the release target? snv_13x?
Bruno
Thanks... that was what I had to do.
Bruno
On 29-3-2010 19:12, Cyril Plisko wrote:
> On Mon, Mar 29, 2010 at 4:57 PM, Bruno Sousa wrote:
>
>> pkg uninstall aac
>> Creating Plan
>> pkg: Cannot remove
>> 'pkg://opensolaris.org/driver/storage/a...@0.5.11
>> ,5.11
On 31-3-2010 14:52, Charles Hedrick wrote:
> Incidentally, this is on Solaris 10, but I've seen identical reports from
> Opensolaris.
>
Probably you need to delete any existing view on the LUN you want to
destroy.
Example:
stmfadm list-lu
LU Name: 600144F0B67340004BB31F060001
stmfad
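For completeness, a hedged sketch of how the view(s) and the LU itself could then be removed, reusing the GUID from the listing above; the backing zvol name is hypothetical and should be checked first:
stmfadm list-view -l 600144F0B67340004BB31F060001
stmfadm remove-view -l 600144F0B67340004BB31F060001 -a
stmfadm delete-lu 600144F0B67340004BB31F060001
zfs destroy tank/iscsivol    # only once the LU no longer references the zvol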
, for doing this to your poor unfortunate
>> readers. It would be nice if the page were a wiki, or somehow able
>> to have feedback submitted…
>>
>> From: zfs-discuss-boun...@opensolaris.org
>> [mailto:zfs-discuss-boun...@opensolaris.
Hi,
I also ran into the Dell+Broadcom problem. I fixed it by downgrading
the firmware to version 4.xxx instead of running version 5.xxx.
You may try that as well.
Bruno
On 6-4-2010 16:54, Eric D. Mudama wrote:
> On Tue, Apr 6 at 13:03, Markus Kovero wrote:
>>> Install nexenta on a de
Hi all,
Recently one of the servers, a Dell R710 attached to 2 J4400s, started
to crash quite often.
Finally I got a message in /var/adm/messages that might point to
something useful, but I don't have the expertise to start
troubleshooting this problem, so any help would be highly valuable.
B
QUIESCED
EXISTS
ENABLE
On 13-4-2010 11:42, Bruno Sousa wrote:
> Hi all,
>
> Recently one of the servers , a Dell R710, attached to 2 J4400 started
> to crash quite often.
> Finally i got a message in /var/adm/messages that might point to
Hi,
Maybe your ZFS box used for dedup is under heavy load, and is therefore
causing timeouts in the Nagios checks?
I ask because I also suffer from that effect on a system with 2
Intel Xeon 3.0 GHz CPUs ;)
Bruno
On 14-4-2010 15:48, Paul Archer wrote:
> So I turned deduplication on on my staging FS (the one th
Hi all,
Yet another story regarding mpt issues. To make a long story short,
every time a Dell R710 running snv_134 logs the message
scsi: [ID 107833 kern.warning] WARNING:
/p...@0,0/pci8086,3...@4/pci1028,1...@0 (mpt0): , the system freezes and
only a hard-reset fixes the issue
e tip, and I will try to understand what's wrong
with this machine.
Bruno
On 27-4-2010 16:41, Mark Ogden wrote:
> Bruno Sousa on Tue, Apr 27, 2010 at 09:16:08AM +0200 wrote:
>
>> Hi all,
>>
>> Yet another story regarding mpt issues, and in order to make a long
>>
Indeed the scrub seems to take too many resources from a live system.
For instance, I have a server with 24 disks (SATA 1TB) serving as an NFS
store to a Linux machine holding user mailboxes. I have around 200
users, with maybe 30-40% of them active at the same time.
As soon as the scrub process kicks
Hi all,
I have faced yet another kernel panic that seems to be related to the mpt
driver.
This time I was trying to add a new disk to a running system (snv_134)
and the new disk was not being detected... following a tip I ran
lsiutil to reset the bus, and this led to a system panic.
MPT driver :
Hi James,
Thanks for the information, and if there's any test/command to be run
on this server, just let me know.
Regards,
Bruno
On 5-5-2010 15:38, James C. McPherson wrote:
>
> On 5/05/10 10:42 PM, Bruno Sousa wrote:
>> Hi all,
>>
>> I have faced yet another
Hi all,
It seems the market has yet another type of SSD device, this time a
USB 3.0 portable SSD by OCZ.
Going by the specs, it seems to me that if this device has a good price
it might be quite useful for caching purposes on ZFS-based storage.
Take a look at
http://www.ocztechnology.co
Hmm...that easy? ;)
Thanks for the tip, I will see if that works out.
Bruno
On 29-6-2010 2:29, Mike Devlin wrote:
> I haven't tried it yet, but supposedly this will back up/restore the
> COMSTAR config:
>
> $ svccfg export -a stmf > comstar.bak.${DATE}
>
> If you ever need to restore the configur
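Assuming the export above works as advertised, the restore would presumably be the matching import; untested, just a sketch, and the file name is hypothetical:
# stop STMF, import the saved configuration, start STMF again
svcadm disable stmf
svccfg import ./comstar.bak.20100629
svcadm enable stmf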
iscsi/LUN_10GB
Thanks for all the tips.
Bruno
On 29-6-2010 14:10, Preston Connors wrote:
> On Tue, 2010-06-29 at 08:58 +0200, Bruno Sousa wrote:
>
>> Hmm...that easy? ;)
>>
>> Thanks for the tip, i will see if that works out.
>>
>> Bruno
>>
Hi all,
Today I noticed that one of the ZFS-based servers within my company is
complaining about disk errors, but I would like to know whether this is a
real physical error or something like a transport error.
The server in question runs snv_134 attached to 2 J4400 JBODs, and the
head-node ha
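One hedged way to tell real media errors from transport errors on a box like this is to compare the per-device error counters with the FMA error log; the disk and pool names below are hypothetical:
iostat -En c7t2d0      # shows Soft Errors / Hard Errors / Transport Errors per device
fmdump -eV | less      # raw FMA error telemetry, including transport-level events
zpool status -v tank   # per-vdev READ/WRITE/CKSUM counters as ZFS sees them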
On 17-7-2010 15:49, Bob Friesenhahn wrote:
> On Sat, 17 Jul 2010, Bruno Sousa wrote:
>> Jul 15 12:30:48 storage01 SOURCE: eft, REV: 1.16
>> Jul 15 12:30:48 storage01 EVENT-ID: 859b9d9c-1214-4302-8089-b9447619a2a1
>> Jul 15 12:30:48 storage01 DESC: The command was terminated wi
Hi,
If you can share those scripts that make use of mbuffer, please feel
free to do so ;)
Bruno
On 19-7-2010 20:02, Brent Jones wrote:
> On Mon, Jul 19, 2010 at 9:06 AM, Richard Jahnel
> wrote:
>
>> I've tried ssh blowfish and scp arcfour. both are CPU limited long before
>> the 10g link i
On 19-7-2010 20:36, Brent Jones wrote:
> On Mon, Jul 19, 2010 at 11:14 AM, Bruno Sousa wrote:
>
>> Hi,
>>
>> If you can share those scripts that make use of mbuffer, please feel
>> free to do so ;)
>>
>>
>> Bruno
>> On 19-7-2010 20:02, Br
Hi all,
That's what I have, so I'm probably on the right track :)
Basically I have a Sun X4240 with 2 Sun HBAs attached to 2 Sun J4400s,
each of them with 12 SATA 1TB disks.
The configuration is
- ZFS mirrored pool with 22x2 + 2 spares, with 1 disk on JBOD A attached
to HBA A and the other disk
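As a hedged sketch of that cross-JBOD layout (hypothetical device names, c1* on the JBOD A path and c2* on the JBOD B path), each mirror pairs one disk from each JBOD:
zpool create tank \
  mirror c1t0d0 c2t0d0 \
  mirror c1t1d0 c2t1d0 \
  mirror c1t2d0 c2t2d0 \
  spare  c1t11d0 c2t11d0
With one side of every mirror on each JBOD/HBA, the pool should survive the loss of an entire enclosure or HBA path.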
Hi all,
I'm running snv_134 and I'm testing the COMSTAR framework; during
those tests I created an iSCSI zvol and exported it to a server.
Now that the tests are done I have renamed the zvol and so far so
good... things get really weird (at least to me) when I try to destroy
this zvol.
*r...@san
'dataset does not exist', but you can check
again (see 1)
3. Destroy snapshot(s) that could not be destroyed previously
So my thanks go to Cindy Swearingen, but I wonder... wasn't this bug
fixed in build 122, as seen in the OpenSolaris bug database?
Bruno
On 27-7-2010 19:
Hi all,
I have two servers in the lab running snv_134, and while doing some
experiments with iSCSI volumes and replication I ran into a roadblock
that I would like to ask your help with.
So on server A I have a LUN created in COMSTAR without any views attached
to it, and I can zfs send it to server B wi
On 2-8-2010 2:53, Richard Elling wrote:
> On Jul 30, 2010, at 11:35 AM, Andrew Gabriel wrote:
>
>
>> Just wondering if anyone has experimented with working out the best zvol
>> recordsize for a zvol which is backing a zpool over iSCSI?
>>
>
> This is an interesting question. Today, most Z
Hi all !
I have a serious problem with a server, and I'm hoping that someone
could help me understand what's wrong.
So basically I have a server with a pool of 6 disks, and after a zpool
scrub I got the message:
errors: Permanent errors have been detected in the following files:
Hi,
Something like
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6855425 ?
Bruno
Matthias Appel wrote:
You will see more IOPS/bandwidth, but if your existing disks are very
full, then more traffic may be sent to the new disks, which results in
less benefit.
OK, that mean
If you use an LSI HBA, maybe you can install the LSI Logic MPT
Configuration Utility (lsiutil).
Example usage:
lsiutil
LSI Logic MPT Configuration Utility, Version 1.61, September 18, 2008
1 MPT Port found
Port Name    Chip Vendor/Type/Rev    MPT Rev    Firmware Rev    IOC
1. mpt0 LS
Hi all,
Recently I upgraded from snv_118 to snv_125, and suddenly I started to
see these messages in /var/adm/messages:
Oct 22 12:54:37 SAN02 scsi: [ID 243001 kern.warning] WARNING:
/p...@0,0/pci10de,3...@a/pci1000,3...@0 (mpt0):
Oct 22 12:54:37 SAN02 mpt_handle_event: IOCStatus=0x8000,
e harmless errors to display.
According to the 6694909 comments, this issue is documented in the
release notes.
As they are harmless, I wouldn't worry about them.
Maybe someone from the driver group can comment further.
Cindy
On 10/22/09 05:40, Bruno Sousa wrote:
Hi all,
Recently
Hi Adam,
How many disks, zpools and ZFS filesystems do you have behind that LSI?
I have a system with 22 disks and 4 zpools with around 30 filesystems, and so
far it works like a charm, even during heavy load. The OpenSolaris
release is snv_101b.
Bruno
Adam Cheal wrote:
Cindy: How can I view the bug report you
Could the reason Sun's X4540 Thumper has 6 LSI HBAs be some sort of
"hidden" problem found by Sun where the HBA resets, and due to
time-to-market pressure the "quick and dirty" solution was to spread the
load over multiple HBAs instead of a software fix?
Just my 2 cents...
Bruno
Adam Cheal wrote:
J
Hi all,
I fully understand that from a cost-effectiveness point of view,
developing Fishworks for a reduced set of hardware makes a lot of
sense.
However, I think that Sun/Oracle would increase their user base if they
made available a Fishworks framework certified for only a reduced set of
arket will say what's the best approach.
Bruno
Tim Cook wrote:
On Tue, Oct 27, 2009 at 2:35 AM, Bruno Sousa wrote:
Hi all,
I fully understand that within a cost effective point of view,
developing the fishworks for a reduced set
series.
Regarding Apple... well, they have marketing gurus.
Bruno
On Wed, 28 Oct 2009 09:47:31 +1300, Trevor Pretty wrote:
Bruno Sousa
wrote: Hi,
I can agree that the software is what really adds the
value, but in my opinion, allowing a stack like Fishworks to run
outside the
I'm just curious to see how much effort it would take to get the Fishworks
software running on a Sun X4275...
Anyway, let's wait and see.
Bruno
On Tue, 27 Oct 2009 13:29:24 -0500 (CDT), Bob Friesenhahn
wrote:
> On Tue, 27 Oct 2009, Bruno Sousa wrote:
>
>> I can agree that the sof
r time and energy is crystal clear...
>
> - Bryan
>
>
--
> Bryan Cantrill, Sun Microsystems Fishworks.
http://blogs.sun.com/bmc
you look back through the archives,
I am FAR from a Sun fanboy... I just feel you guys aren't even grounded in
reality when making these requests.
--Tim
--
Bruno Sousa
Hi,
I currently have a 1U server (Sun X2200) with 2 LSI HBAs attached to
Supermicro JBOD chassis, each one with 24 SATA 1TB disks, and so far so
good...
So I have 48 TB of raw capacity, with a mirror configuration for NFS
usage (Xen VMs), and I feel that for the price I paid I have a very nice
sys
wrote:
> Hi Bruno,
>
> Bruno Sousa wrote:
>> Hi,
>>
>> I currently have a 1U server (Sun X2200) with 2 LSI HBA attached to a
>> Supermicro JBOD chassis each one with 24 disks , SATA 1TB, and so far so
>> good..
>> So i have a 48 TB raw capacity, with a mir
, that "only" has disks, disk
backplane, jbod power interface and power supplies
Hope this helps...
Bruno
Sriram Narayanan wrote:
> On Wed, Nov 18, 2009 at 3:24 AM, Bruno Sousa wrote:
>
>> Hi Ian,
>>
>> I use the Supermicro SuperChassis 846E1-R710B, an
Interesting, at least to me, the part where "this storage node is very
small (~100TB)" :)
Anyway, how are you using your ZFS? Are you creating volumes and presenting
them to end-nodes over iSCSI/fibre, NFS, or something else? It could be helpful
to use some sort of cluster filesystem to have some more contro
Maybe 11/30/2009 ?
According to
http://hub.opensolaris.org/bin/view/Community+Group+on/schedule we have
onnv_129  11/23/2009  11/30/2009
But as far as I know those release dates are on a "best effort" basis.
Bruno
Karl Rossing wrote:
> When will SXCE 129 be released since 128 was passed over? T
Hello !
I'm currently using an X2200 with an LSI HBA connected to a Supermicro
JBOD chassis, but I want to have more redundancy in the JBOD.
So I have looked at the market, and at my wallet, and I think
that the Sun J4400 suits my goals nicely. However I have some
concerns, and if anyon
er, I do not know if the bottleneck was
> at the disk, controller, backplane, or software level... I'm too close to my
> deadline to do much besides randomly shotgunning different configs to see
> what works best!
>
> -K
>
>
> Karl Katzke
> Systems Analyst II
>
BOD, including power off/power on, and see
how it goes
* replace HBA/disk?
* other?
Thanks for your time, and if any other information is required (even SSH
access can be granted) please feel free to ask.
Best regards,
Bruno Sousa
System specs :
* OpenSolaris snv_101b, wi
wrote:
> Bruno Sousa wrote:
>> Hi all,
>>
>> During this problem i did a power-off/power-on in the server and the
>> bus reset/scsi timeout issue persisted. After that i decided to
>> poweroff/power on the jbod array, and after that everything became
>> nor
But don't forget that "The unknown is what makes life interesting" :)
Bruno
Cindy Swearingen wrote:
> Hi Mike,
>
> In theory, this should work, but I don't have any experience with this
> particular software; maybe someone else does.
>
> One way to determine if it might work is by using the zd
Hi all,
Is there any way to generate a report on the deduplication
feature of ZFS within a zpool/ZFS filesystem?
I mean, it's nice to have the dedup ratio, but I think it would also be
good to have a report where we could see which directories/files have
been found to be duplicated and therefore t
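As far as I know there is no per-file or per-directory dedup report, but the pool-wide dedup table statistics can be inspected; a hedged sketch, with a hypothetical pool name:
zpool list tank   # the DEDUP column shows the pool-wide dedup ratio
zdb -DD tank      # prints the DDT histogram: how many blocks are referenced 1x, 2x, 4x, ...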
ote:
> On Wed, Dec 9, 2009 at 2:26 PM, Bruno Sousa wrote:
>
>> Hi all,
>>
>> Is there any way to generate some report related to the de-duplication
>> feature of ZFS within a zpool/zfs pool?
>> I mean, its nice to have the dedup ratio, but it think it would
On Wed, Dec 9, 2009 at 2:47 PM, Bruno Sousa wrote:
>
>> Hi Andrey,
>>
>> For instance, i talked about deduplication to my manager and he was
>> happy because less data = less storage, and therefore less costs .
>> However, now the IT group of my company needs
cost centre.
But indeed, you're right; in my case a possible technical solution is
trying to answer a managerial problem... however, isn't that why IT was
invented, and I believe that's why I get my paycheck each month :)
Bruno
Richard Elling wrote:
> On Dec 9, 2009, at 3:
, but in order to do that there has to be a way to measure those
costs/savings.
But yes, these costs probably represent less than 20% of the total cost,
but it's a cost no matter what.
However, maybe I'm driving down the wrong road...
Bruno
Bob Friesenhahn wrote:
> On Wed, 9 Dec 2009, Bruno So
Hi,
Couldn't agree more... but I just asked if there was such a tool :)
Bruno
Richard Elling wrote:
> On Dec 9, 2009, at 11:07 AM, Bruno Sousa wrote:
>> Hi,
>>
>> Despite the fact that i agree in general with your comments, in reality
>> it all comes to money.
Hi all,
According to what I have been reading, OpenSolaris 2010.03 should be
released around March this year, but with the whole Oracle/Sun deal
process I was wondering if anyone knows whether this schedule still
makes sense, and if not, whether snv_132/133 looks very similar to the
future 2010.03.
In o
Hi all,
I'm currently evaluating the possibility of migrating an NFS server
(Linux CentOS 5.4 / RHEL 5.4 x64-32) to an OpenSolaris box, and I'm
seeing some huge CPU usage on the OpenSolaris box.
The ZFS box is a Dell R710 with 2 quad-cores (Intel E5506 @ 2.13GHz),
16 GB RAM, 2 Sun non-RAID HB
Hi,
I don't have compression or deduplication enabled, but checksums are.
However, disabling checksums gives only a 0.5 load reduction...
Bruno
On 23-2-2010 20:27, Eugen Leitl wrote:
> On Tue, Feb 23, 2010 at 01:03:04PM -0600, Bob Friesenhahn wrote:
>
>
>> Zfs can consume appreciable CPU if c
Hi Bob,
I have neither deduplication nor compression enabled. Checksums are
enabled, but if I disable them I gain around 0.5 less load on the box,
so it still seems to be too much.
Bruno
On 23-2-2010 20:03, Bob Friesenhahn wrote:
> On Tue, 23 Feb 2010, Bruno Sousa wrote:
>> Could th
acpi also affect
the performance of the system?
Regards,
Bruno
On 23-2-2010 20:47, Bob Friesenhahn wrote:
> On Tue, 23 Feb 2010, Bruno Sousa wrote:
>
>> I don't have compression and deduplication enabled, but checksums are.
>> However disabling checksums gives a 0.5 load redu
Hi,
Just some comments on your situation; please take a look at the following
things:
* Sometimes the hardware looks the same (I'm talking specifically about the
SSDs), but it can be somewhat different and that may lead to some
problems in the fut
Hi all,
Using "old" way of sharing volumes over iscsi in zfs (zfs set
shareiscsi=on) i can see i/o stats per iscsi volume running a command
iscsitadm show stats -I 1 volume.
However i couldn't find something similar in new framework,comstar.
Probably i'm missing something, so if anyone has some ti
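One hedged workaround, assuming the LU is backed by a zvol, is to watch the backing pool/devices rather than the iSCSI layer itself; the pool name and interval are only examples:
zpool iostat -v tank 5   # per-vdev bandwidth and IOPS every 5 seconds
iostat -xn 5             # per-device service times and throughput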
Hi all,
I still haven't found the problem, but it seems to be related to
interrupt sharing between the onboard network cards (Broadcom) and the
Intel 10GbE PCI-e card.
Running a simple iperf from a Linux box to my ZFS box, if I use bnx2 or
bnx3 I get performance over 100 mbs, but if I use bnx0 or bnx1
Yes, I'm using the mpt driver. In total this system has 3 HBAs: 1
internal (Dell PERC) and 2 Sun non-RAID HBAs.
I'm also using multipath, but if I disable multipath I get pretty much
the same results...
Bruno
On 24-2-2010 19:42, Andy Bowers wrote:
> Hi Bart,
> yep, I got Bruno to run a
Hi,
Until it's fixed, should build 132 be used instead of 133?
Bruno
On 25-2-2010 3:22, Bart Smaalders wrote:
> On 02/24/10 12:57, Bruno Sousa wrote:
>> Yes i'm using the mtp driver . In total this system has 3 HBA's, 1
>> internal (Dell perc), and 2 Sun non
in sol 11, I do want it. ;-)
--
Bruno Sousa
to remind me about Broadcom NIC issues.
> Different (not fully supported) hardware revision causing issues?
>
> Yours
> Markus Kovero
>
I confirm that, from the fileserver and storage point of view, I saw
more network connections being used.
Bruno
On Wed, 17 Nov 2010 22:00:21 +0200, Pasi Kärkkäinen wrote:
> On Wed, Nov 17, 2010 at 10:14:10AM +0000, Bruno Sousa wrote:
>>Hi all,
>>
>>Let me tell you al
On Wed, 17 Nov 2010 16:31:32 -0500, Ross Walker
wrote:
> On Wed, Nov 17, 2010 at 3:00 PM, Pasi Kärkkäinen wrote:
>> On Wed, Nov 17, 2010 at 10:14:10AM +0000, Bruno Sousa wrote:
>>> Hi all,
>>>
>>> Let me tell you all that the MC/S *does* make a
Hello everyone,
I have a pool consisting of 28 1TB SATA disks configured as 15x2 vdevs
(RAID-1, 2 disks per mirror), 2 SSDs mirrored for the ZIL and 3 SSDs for L2ARC,
and recently I added two more disks.
For some reason the resilver process kicked in, and the system is
noticeably slower, but I'm cluel