On 5/23/12 11:28 PM, Richard Elling wrote:
The man page is clear on this topic, IMHO
Indeed, even in snv_117 the zpool man page says that. But the
console/dmesg message was also quite clear, so go figure whom
to trust (or fear) more ;)
The FMA message is consistent with the man page.
The ma
On 3/25/12 10:25 AM, Aubrey Li wrote:
On Mon, Mar 26, 2012 at 12:48 AM, Richard Elling
wrote:
This is the wrong forum for general-purpose performance tuning, so I won't
continue this much further. Notice the huge number of icsw (involuntary
context switches); that is a bigger symptom than the locks.
-- richard
thanks anyway
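For anyone chasing a similar problem, the per-CPU icsw figure comes straight
out of mpstat; a minimal sketch (the interval is arbitrary):

    # watch involuntary context switches per CPU (the icsw column)
    mpstat 5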
On 4/27/11 4:00 AM, Markus Kovero wrote:
Sync was disabled on the main pool and then left to inherit to everything else.
The reason for disabling this in the first place was to fix bad NFS write
performance (even with a ZIL on an X25-E SSD it was under 1 MB/s).
I've also tried setting the log
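For reference, both the property and a dedicated log device are one-liners;
the pool and device names below are made up:

    # disable synchronous write semantics pool-wide (child datasets inherit it)
    zfs set sync=disabled tank

    # or add a separate log (slog) device, e.g. the X25-E
    zpool add tank log c4t2d0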
On 1/23/11 10:30 AM, Roy Sigurd Karlsbakk wrote:
If you're looking for stats to give an indication of likely wear, and
thus increasing probability of failure, POH (power-on hours) is probably
not very useful by itself (or even at all). Things like Head Flying Hours
and Load Cycle Count are probably more indicative,
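If you want to pull those counters yourself, smartmontools will dump the
vendor attributes; the device path here is only an example:

    # Power_On_Hours, Load_Cycle_Count and, where the drive reports it,
    # Head_Flying_Hours all show up in the attribute table
    smartctl -A /dev/rdsk/c0t0d0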
On 11/17/10 12:04 PM, Miles Nordin wrote:
"djm" == Darren J Moffat writes:
djm> http://blogs.sun.com/darren/entry/introducing_zfs_crypto_in_oracle
djm> http://blogs.sun.com/darren/entry/assued_delete_with_zfs_dataset
djm> http://blogs.sun.com/darren/entry/compress_encrypt_check
2 PM, Erik Trimble wrote:
On 10/30/2010 7:07 PM, zfs user wrote:
I did it deliberately - how dumb are these product managers that they name
products with weird names and not expect them to be abused? On the other
hand, if you do a search for mangy cours you'll find a bunch of hits where it
turns up, e.g.
"I am seeing some spotty performance with my new Mangy Cours CPU"...
It is like they are asking for it. I think they would be better off doing
something like Intel's core-architecture naming, using city names ("Santa
Rosa", etc.).
On 10/30/10 3:49 PM, Eugen Leitl wrote:
On Sat, Oct 30, 2010
Here is a total guess - but what if it has to do with ZFS processing running
on one CPU having to talk to memory "owned" by a different CPU? I don't
know if many people are running fully populated boxes like you are, so maybe
it is something people are not seeing due to not having huge amounts of memory.
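If you want to see how the CPUs and memory are actually grouped on that box,
lgrpinfo prints the locality groups on Solaris-derived systems; this only
shows the topology, it doesn't prove the guess above:

    lgrpinfo    # list locality groups with their CPUs and memory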
So maybe a next step is to run zilstat, arcstat, iostat -xe?? (I forget what
people like to use for these params), zpool iostat -v in 4 term windows while
running the same test and try to see what is spiking when that high load
period occurs.
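Something like the following in the four windows; the intervals and pool name
are only a guess at what people typically use:

    zilstat 10               # ZIL (synchronous write) activity
    arcstat.pl 10            # ARC hit/miss rates
    iostat -xn 10            # per-device service times and %busy
    zpool iostat -v tank 10  # per-vdev bandwidth and IOPS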
Not sure if there is a better version than this:
h
Sorry for the long post, but I know people trying to decide on hardware often
want to see details about what others are using.
I have the following AS-2022G-URF machine running OpenGaryIndiana[1] that I am
starting to use.
I successfully transferred a deduped zpool with 1.x TB of files and 60 or so
I believe it goes something like this -
ZFS filesystems with dedupe turned on can be thought of as hippie/socialist
filesystems, wanting to "share", etc. Filesystems with dedupe turned off are
a grey Randian landscape where sharing blocks between files is seen as a
weakness/defect. They all
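If you want to see how much sharing is actually happening, the pool-wide ratio
is a zpool property and zdb can dump the dedup table histogram; the pool name
is made up:

    zpool list -o name,size,alloc,free,dedupratio tank
    zdb -DD tank    # print DDT (dedup table) statistics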
If you are going to post about your commercial products - please include some
price points, so people know whether to ignore the info based on their budget.
Is the beta free? For how long? If not, how much for 5 machines?
One of my biggest pet peeves is secret prices; it is such a waste of time f
I would get a new 1.5 TB drive and make sure it has the new firmware and replace
c6t3d0 right away - even if someone here comes up with a magic solution, you
don't want to wait for another drive to fail.
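Once the new drive is in, the swap itself is one command; the pool name and
replacement device name here are invented:

    zpool replace tank c6t3d0 c6t8d0   # start resilvering onto the new disk
    zpool status -v tank               # watch resilver progress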
http://hardware.slashdot.org/article.pl?sid=09/01/17/0115207
http://techreport.com/discussions.x/1
Charles Wright wrote:
> Under Solaris 10 u6, no matter how I configured my ARECA 1261ML RAID card
> I got errors on all drives that resulted from SCSI timeouts.
[snip litany of errors]
I had similar problems on an 1120 card with 2008.05.
I upgraded to 2008.11 and the something*.16 Sun Areca driver
Bob Friesenhahn wrote:
> On Tue, 16 Dec 2008, Reed Gregory wrote:
>
>> 8 hardware RAID-5 groups (5 drives each) and 2 SAN hot spares.
>> A raidz of these 8 RAID groups. ~14 TB usable.
>>
>> I did read in a FAQ that doing double redundancy is not recommended
>> since parity would have to be calculated
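For what it's worth, that layout would be built roughly like this (LUN names
invented), which is why parity ends up being computed both in the array and
again in ZFS:

    # single-parity raidz across the 8 hardware RAID-5 LUNs
    zpool create tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 \
                            c2t4d0 c2t5d0 c2t6d0 c2t7d0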
Francois Dion wrote:
> >>"Francois Dion" wrote:
> >> Source is local to rsync, copying from a zfs file system,
> >> destination is remote over a dsl connection. Takes forever to just
> >> go through the unchanged files. Going the other way is not a
> >> problem, it takes a fraction of the t