Hi all,
I'm brand new to OpenSolaris ... feel free to call me a noob :)
I need to build a home server for media and general storage,
and ZFS sounds like the perfect solution,
but I need to buy an 8-port (or more) SATA controller.
Any suggestions for OpenSolaris-compatible products would be really
appreciated.
Hi,
I have a machine running 2009.06 with 8 SATA drives in a SCSI-connected enclosure.
I had a drive fail and accidentally replaced the wrong one, which
unsurprisingly caused the rebuild to fail. The status of the zpool then ended
up as:
pool: storage2
state: FAULTED
status: An intent log reco
I've attached the output of those commands. The machine is a v20z if that makes
any difference.
Thanks,
George
mdb: logging to "debug.txt"
> ::status
debugging crash dump vmcore.0 (64-bit) from crypt
operating system: 5.11 snv_
Another related question -
I have a second enclosure with blank disks which I would like to use to take a
copy of the existing zpool as a precaution before attempting any fixes. The
disks in this enclosure are larger than those in the enclosure with the problem.
What would be the best way to do this?
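Since the pool currently won't import, one conservative option is a raw
block-level copy of each member disk onto a disk in the new enclosure before
touching anything (a rough sketch only; the device names are placeholders, and
on x86 the p0 node addresses the whole disk):
# dd if=/dev/rdsk/c2t0d0p0 of=/dev/rdsk/c3t0d0p0 bs=1024k
Repeat per disk; the extra space on the larger target disks simply goes unused.
If the pool can be imported later, 'zfs snapshot -r' plus 'zfs send -R' into a
pool built on the spare disks would be the cleaner alternative.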
> I suggest you to try running 'zdb -bcsv storage2' and
> show the result.
r...@crypt:/tmp# zdb -bcsv storage2
zdb: can't open storage2: No such device or address
then I tried
r...@crypt:/tmp# zdb -ebcsv storage2
zdb: can't open storage2: File exists
George
ge about being unable to find the device (output attached).
George
r...@crypt:~# zdb -C storage2
version=14
name='storage2'
state=0
txg=1807366
pool_guid=14701046672203578408
hostid=8522651
hostname='crypt'
Aha:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6794136
I think I'll try booting from a b134 Live CD and see if that will let me fix
things.
> I think I'll try booting from a b134 Live CD and see
> that will let me fix things.
Sadly it appears not - at least not straight away.
Running "zpool import" now gives
pool: storage2
id: 14701046672203578408
state: FAULTED
status: The pool was last accessed by another system.
action: Th
> Because of that I'm thinking that I should try
> to change the hostid when booted from the CD to be
> the same as the previously installed system to see if
> that helps - unless that's likely to confuse it at
> all...?
I've now tried changing the hostid using the code from
http://forums.sun.com
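For reference, the usual way past the "pool was last accessed by another
system" message is simply to force the import rather than match the hostid (a
hedged suggestion; it assumes no other host is actually using the pool):
# zpool import -f storage2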
corrupted metadata. This seems to be caused by the functions
print_import_config and print_status_config having slightly different case
statements and not a difference in the pool itself.
Hopefully I'll be able to complete the reinstall soon and see if that fixes things.
Hi,
I created a zpool consisting of
zpool
__mirror
disk1 500gb
disk2 500gb
__raidz
disk3 1tb
disk4 1tb
disk5 1tb
It works fine, but 'zpool list' displays the wrong size. It should be 500 GB
(mirrored) + 2 TB (usable from the 3 TB raidz) = 2.5 TB, right? But it displays
it has
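The likely explanation: 'zpool list' reports raw pool capacity, including the
raidz parity space, while 'zfs list' reports usable space after parity. A quick
way to compare the two views (pool name assumed to be 'tank'):
# zpool list tank
# zfs list tank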
Wow, alright... I'm wondering if there are still some "top secret"
items up Apple's sleeve. Someone just told me yesterday that
Microsoft had some tricks coming and that Apple not having a more
refulgent keynote was likely due to this. I.e., they want Microsoft to
tip their hand first prior to an
I agree wholeheartedly. ZFS is a must for desktop, small
business and enterprise. I've been hanging out in #zfs and reading
quite a bit over the last couple weeks and I will never trust my data
again unless I have ZFS in place. I look to transfer this to my
clients' setups as well somehow
I'm curious about something. Wouldn't ZFS `send` and `recv` be a
perfect fit for Apple Time Machine in Leopard if glued together by
some scripts? In this scenario you could have an external volume and
simply send snapshots to it and reciprocate as needed with recv.
Also, it would seem that Appl
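A minimal sketch of that send/recv idea (the pool and dataset names are
hypothetical; 'backup' would be a pool created on the external volume):
# zfs snapshot tank/home@mon
# zfs send tank/home@mon | zfs recv backup/home
# next time, send only the changes since the previous snapshot:
# zfs snapshot tank/home@tue
# zfs send -i tank/home@mon tank/home@tue | zfs recv backup/home
A small cron-driven script around those two commands gets you Time Machine-like
rolling backups.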
Where can you find the timeframe on that Tomas?
Thanks!
On 6/16/07, Tomas Ögren <[EMAIL PROTECTED]> wrote:
On 16 June, 2007 - roland sent me these 0,5K bytes:
> hi !
>
> i think i have read somewhere that zfs gzip compression doesn't scale
> well since the in-kernel compression isn't done multithreaded
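For anyone following along, enabling gzip compression and checking its effect
is just (dataset name assumed; gzip-1 through gzip-9 select the level):
# zfs set compression=gzip tank/data
# zfs get compressratio tank/data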
one dealt with this before and perhaps be able to assist or at least
throw some further information towards me to troubleshoot this?
Thanks much,
-George
http://fixunix.com/solaris-rss/570361-make-most-your-ssd-zfs.html
I think this is what you are looking for. GParted FTW.
Cheers,
_GP_
ers well.
We are actively designing our soon-to-be-available support plans. Your voice
will be heard; please email me directly with requests, comments,
and/or questions.
Thanks,
Christopher George
Founder/CTO
www.ddrdrive.com
Thanks,
Christopher George
Founder/CTO
www.ddrdrive.com
and an overwhelming attention to detail.
Thanks,
Christopher George
Founder/CTO
www.ddrdrive.com
market.
Thanks,
Christopher George
Founder/CTO
www.ddrdrive.com
to/from removable media.
Thanks,
Christopher George
Founder/CTO
www.ddrdrive.com
t practice to size your system accordingly such that the dedup table
can stay resident in the ARC or L2ARC.
- George
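Two ways to gauge whether the dedup table will fit in the ARC/L2ARC (pool name
assumed; 'zdb -S' simulates dedup on a pool that doesn't have it enabled yet,
while 'zdb -DD' reports the actual table on a deduped pool):
# zdb -S tank
# zdb -DD tank
The histogram output gives the number of DDT entries; multiplying that by the
per-entry in-core size gives a rough RAM estimate.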
> No Slogs as I haven't seen a compliant SSD drive yet.
As the architect of the DDRdrive X1, I can state categorically the X1
correctly implements the SCSI Synchronize Cache (flush cache)
command.
Christopher George
Founder/CTO
www.ddrdrive.com
SSDs that fully comply with the POSIX requirements for synchronous write
transactions and do not lose transactions on a host power failure, we are
competitively priced at $1,995 SRP.
Christopher George
Founder/CTO
www.ddrdrive.com
zfs_vdev_max_pending defaults to 10,
which helps. You can tune it lower as described in the Evil Tuning Guide.
Also, as Robert pointed out, CR 6494473 offers a more resource-management-friendly
way to limit scrub traffic (b143). Everyone can buy George a beer for
implementing this change :-)
I
/Nikos
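Regarding the zfs_vdev_max_pending tuning mentioned above, the Evil Tuning
Guide style adjustment looks roughly like this (the value 4 is only an
illustration):
# echo "zfs_vdev_max_pending/W0t4" | mdb -kw
or, to make it persistent across reboots, add to /etc/system:
set zfs:zfs_vdev_max_pending = 4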
as not power protecting on-board
volatile caches. The X25-E does implement the ATA FLUSH
CACHE command, but it does not have the required power protection to
avoid transaction (data) loss.
Best regards,
Christopher George
Founder/CTO
www.ddrdrive.com
. The same principles
and benefits of multi-core processing apply here with multiple controllers.
The performance potential of NVRAM-based SSDs dictates moving away
from a single/separate HBA-based controller.
Best regards,
Christopher George
Founder/CTO
www.ddrdrive.com
I use ZFS (on FreeBSD) for my home NAS. I started on 4 drives then added 4 and
have now added another 4, bringing the total up to 12 drives in 3 raidz vdevs in
one pool.
I was just wondering if there was any advantage or disadvantage to spreading
the data across the 3 raidz vdevs, as two are currently full.
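For what it's worth, ZFS dynamically stripes new writes across all top-level
vdevs that still have free space, favoring the emptier ones, so new data will
land mostly on the newest raidz. One way to watch per-vdev allocation and I/O
(pool name assumed):
# zpool iostat -v tank 5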
I don't recall seeing this issue before. Best thing to do is file a bug
and include a pointer to the crash dump.
- George
zhihui Chen wrote:
It looks like the txg_sync_thread for this pool has been blocked and
never returns, which leads to many other threads being
blocked. I have tried
Here is another very recent blog post from ConstantThinking:
http://constantin.glez.de/blog/2010/07/solaris-zfs-synchronous-writes-and-zil-explained
Very well done, a highly recommended read.
Christopher George
Founder/CTO
www.ddrdrive.com
current version
22 (snv_129)?
Dmitry,
I can't comment on when this will be available but I can tell you that
it will work with version 22. This requires that you have a pool that is
running a minimum of version 19.
Thanks,
George
[r...@storage ~]# zpool import
pool: tank
Darren,
It looks like you've lost your log device. The newly integrated missing
log support will help once it's available. In the meantime, you should
run 'zdb -l' on your log device to make sure the label is still intact.
Thanks,
George
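A minimal invocation, with the device path standing in for the actual log
device:
# zdb -l /dev/dsk/c1t2d0s0
This prints the four vdev labels; if they all come back readable, the label is
still intact.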
Darren Taylor wrote:
I'm a
are a
ZIL accelerator well matched to the 24/7 demands of enterprise use.
Thanks,
Christopher George
Founder/CTO
www.ddrdrive.com
working on it.
We have a fix for this and it should be available in a couple of days.
- George
- Eric
--
Regards,
Cyril
--
Eric Schrock, Fishworks
http://blogs.sun.com/eschrock
-
zp_dd allocated 4.22G -
The dedupe ratio has climbed to 1.95x with all those unique files that are
less than recordsize bytes.
You can get more dedup information by running 'zdb -DD zp_dd'. This
should show you how we break things down. Add more 'D's for more detail.
I've been following the use of SSD with ZFS and HSPs for some time now, and I
am working (in an architectural capacity) with one of our IT guys to set up our
own ZFS HSP (using a J4200 connected to an X2270).
The best practice seems to be to use an Intel X25-M for the L2ARC (Readzilla)
and an I
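For reference, attaching the cache (L2ARC) and log (slog) devices to an
existing pool is a one-liner each (pool and device names are placeholders):
# zpool add tank cache c3t0d0
# zpool add tank log c3t1d0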
Is there a way to use only 2 or 3 digits for the second level of the
var/pkg/download cache? This directory hierarchy is particularly problematic
relative to moving, copying, sending, etc. This would probably speed up
lookups as well.
its
entirety. Not a situation to be tolerated in production.
Expect the fix for this issue this month.
Thanks,
George
NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
rpool   464G  64.6G   399G    13%  1.00x  ONLINE  -
tank   2.27T   207K  2.27T     0%  1.00x  ONLINE  -
jaggs# zpool get allocated,free rpool
NAME PROPERTY VALUE SOURCE
rpool allocated 64.6G -
rpool free 399G -
We realize that these
advancement of Open Storage and explore the far-reaching potential of
ZFS-based Hybrid Storage Pools?
If so, please send an inquiry to "zfs at ddrdrive dot com".
The drive for speed,
Christopher George
Founder/CTO
www.ddrdrive.com
*** Special thanks goes out to SUN employees Garrett D'
ovides an optional (user
configured) backup/restore feature.
Christopher George
Founder/CTO
www.ddrdrive.com
r HBAs
which do require an x4 or x8 PCIe connection.
Very appreciative of the feedback!
Christopher George
Founder/CTO
www.ddrdrive.com
oduct cannot be supported by
any of the BBUs currently found on RAID controllers. It would require either a
substantial increase in energy density or a decrease in packaging volume,
both of which incur additional risks.
> Interesting product though!
Thanks,
Christopher George
Founder/CTO
www
because it is a proven and industry-standard
method of enterprise-class data backup.
Thanks,
Christopher George
Founder/CTO
www.ddrdrive.com
ree to disagree. I respect your point of view, and do
agree strongly that Li-Ion batteries play a critical and highly valued role in
many industries.
Thanks,
Christopher George
Founder/CTO
www.ddrdrive.com
t for the DC jack to be
unpopulated so that an internal power source could be utilized. We will
make this modification available to any customer who asks.
Thanks,
Christopher George
Founder/CTO
www.ddrdrive.com
> Personally I'd say it's a must. Most DC's I operate in wouldn't tolerate
> having a card separately wired from the chassis power.
May I ask the list, if this is a hard requirement for anyone else?
Please email me directly "cgeorge at ddrdrive dot com".
Th
rs (non-clustered) an
additional option.
Thanks,
Christopher George
Founder/CTO
www.ddrdrive.com
rotection" at:
http://www.intel.com/content/www/us/en/solid-state-drives/ssd-320-series-power-loss-data-protection-brief.html
Intel's brief also clears up a prior controversy about what types of
data are actually cached; per the brief, it's both user and system
data!
Best regards,
Christophe
?
Yes! Customers using Illumos-derived distros make up a
good portion of our customer base.
Thanks,
Christopher George
www.ddrdrive.com
ta" to mean the SSD's internal meta data...
I'm curious, any other interpretations?
Thanks,
Chris
--------
Christopher George
cgeorge at ddrdrive.com
http://www.ddrdrive.com/
ing more than to continue to
design and offer our unique ZIL accelerators as an alternative to Flash-only
SSDs and hopefully help (in some small way) the success of ZFS.
Thanks again for taking the time to share your thoughts!
The drive for speed,
Chris
----
Christopher Geor
we target (enterprise customers).
The beauty of ZFS is the flexibility of its implementation. By supporting
multiple log device types and configurations, it ultimately enables a broad
range of performance capabilities!
Best regards,
Chris
--
C
The root filesystem on the root pool is set to 'canmount=noauto' so
you need to manually mount it first using 'zfs mount '.
Then run 'zfs mount -a'.
- George
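A rough sequence when doing this from alternate media (the altroot and the boot
environment dataset name below are examples only; add -f to the import if the
pool was last used on another host):
# zpool import -R /a rpool
# zfs mount rpool/ROOT/snv_134
# zfs mount -a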
On 08/16/10 07:30 PM, Robert Hartzell wrote:
I have a disk which is 1/2 of a boot disk mirror from a fa
Robert Hartzell wrote:
On 08/16/10 07:47 PM, George Wilson wrote:
The root filesystem on the root pool is set to 'canmount=noauto' so you
need to manually mount it first using 'zfs mount '. Then
run 'zfs mount -a'.
- George
mounting the dataset failed because
(one device failed, and the other device is good)
... Do you read the data from *both* sides of the mirror, in order to
discover the corrupted log device, and correctly move forward without data
loss?
Yes, we read all sides of the mirror when we claim (i.e. read) the log
blocks for a log device. Th
oss is not possible should it fail.
- George
g from the data portion of an empty device wouldn't really show us
much as we're going to be reading a bunch of non-checksummed data. The
best we can do is to "probe" the device's label region to determine its
health. This
Bob Friesenhahn wrote:
On Thu, 26 Aug 2010, George Wilson wrote:
What gets "scrubbed" in the slog? The slog contains transient data
which exists for only seconds at a time. The slog is quite likely to be
empty at any given point in time.
Bob
Yes, the typical ZIL block never
SSD does *not* suffer the same fate, as its
performance is not bound by, nor does it vary with, partition (mis)alignment.
Best regards,
Christopher George
Founder/CTO
www.ddrdrive.com
en
though we delineate the storage media used depending on host
power condition. The X1 exclusively uses DRAM for all IO
processing (host is on) and then Flash for permanent non-volatility
(host is off).
Thanks,
Christopher George
Founder/CTO
www.ddrdrive.com
remove the log device and
then re-add it to the pool as a mirrored log device.
- George
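Roughly (pool and device names are placeholders; the first device is the
existing log, the second the new one being paired with it):
# zpool remove tank c3t0d0
# zpool add tank log mirror c3t0d0 c3t1d0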
be past the import phase and into the mounting
phase. What I would recommend is that you 'zpool import -N zp' so that
none of the datasets get mounted and only the import happens. Then one
by one you can mount the datasets in order (starting with 'zp') so you
can find out wh
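That sequence looks roughly like this (dataset names below are placeholders):
# zpool import -N zp
# zfs mount zp
# zfs mount zp/data
Continue dataset by dataset until one of the mounts hangs or fails; that is the
one to investigate.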
It sounds like you're hitting '6891824 7410 NAS head "continually
resilvering" following HDD replacement'. If you stop taking and
destroying snapshots you should see the resilver finish.
Thanks,
George
ll the work over again. Are drives still
failing randomly for you?
3. Can I force remove c9d1 as it is no longer needed, but c11t3 can
be resilvered instead?
You can detach the spare and let the resilver work on only c11t3. Can
you send me t
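Detaching the spare would be (assuming c9d1 is the spare in question and the
pool is named tank):
# zpool detach tank c9d1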
Can you post the output of 'zpool status'?
Thanks,
George
LIC mesh wrote:
Most likely an iSCSI timeout, but that was before my time here.
Since then, there have been various individual drives lost along the way
on the shelves, but never a whole LUN, so, theoretically, /except/
nt (or aggregate) write pattern trends to random. Over
50% random with a pool containing just 5 filesystems. This makes
intuitive sense knowing each filesystem has its own ZIL and they
all share the dedicated log (ZIL Accelerator).
Best regards,
Christopher George
Founder/CTO
www.d
If your pool is on version > 19 then you should be able to import a pool
with a missing log device by using the '-m' option to 'zpool import'.
- George
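For example (pool name assumed):
# zpool import -m tank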
On Sat, Oct 23, 2010 at 10:03 PM, David Ehrmann wrote:
> > > From: zfs-discuss-boun...@opensolaris.org
>
The guid is stored on the mirrored pair of the log and in the pool config.
If your log device was not mirrored then you can only find it in the pool
config.
- George
On Sun, Oct 24, 2010 at 9:34 AM, David Ehrmann wrote:
> How does ZFS detect that there's a log device attached
This value is hard-coded in.
- George
On Fri, Oct 29, 2010 at 9:58 AM, David Magda wrote:
> On Fri, October 29, 2010 10:00, Eric Schrock wrote:
> >
> > On Oct 29, 2010, at 9:21 AM, Jesus Cea wrote:
> >
> >> When a file is deleted, its block are freed, and that
> Any opinions? stories? other models I missed?
I was a speaker at the recent OpenStorage Summit,
my presentation "ZIL Accelerator: DRAM or Flash?"
might be of interest:
http://www.ddrdrive.com/zil_accelerator.pdf
Best regards,
Christopher George
Founder/CTO
www.ddrdrive.com
1 Express!
Best regards,
Christopher George
Founder/CTO
www.ddrdrive.com
IOPS / $1,995) = 19.40
Best regards,
Christopher George
Founder/CTO
www.ddrdrive.com
the hour time limit.
The reason the graphs are done in a timeline fashion is so you can look
at any point in the one-hour series to see how each device performs.
Best regards,
Christopher George
Founder/CTO
www.ddrdrive.com
> TRIM was putback in July... You're telling me it didn't make it into S11
> Express?
Without top level ZFS TRIM support, SATA Framework (sata.c) support
has no bearing on this discussion.
Best regards,
Christopher George
Founder/CTO
www.ddrdrive.com
is
drive inactivity has no effect on the eventual outcome. So with either a bursty
or sustained workload the end result is always the same: dramatic write IOPS
degradation after unpackaging or secure erase of the tested Flash-based SSDs.
Best regards,
Christopher George
Founder/CTO
www.
he size of the resultant binaries?
Thanks,
Christopher George
Founder/CTO
www.ddrdrive.com
are valid, the resulting degradation
will vary depending on the controller used.
Best regards,
Christopher George
Founder/CTO
www.ddrdrive.com
e.
Best regards,
Christopher George
Founder/CTO
www.ddrdrive.com
> got it attached to a UPS with very conservative shut-down timing. Or
> are there other host failures aside from power a ZIL would be
> vulnerable too (system hard-locks?)?
Correct, a system hard-lock is another example...
Best regards,
Christopher George
Founder/CTO
www.ddrdrive.com
y" than sync=disabled.
Best regards,
Christopher George
Founder/CTO
www.ddrdrive.com
ng to perform a Secure Erase every hour, day, or even
week really be the most cost-effective use of an administrator's time?
Best regards,
Christopher George
Founder/CTO
www.ddrdrive.com
g in a larger context:
http://www.oug.org/files/presentations/zfszilsynchronicity.pdf
Best regards,
Christopher George
Founder/CTO
www.ddrdrive.com
The above excerpts were written by an OCZ-employed thread moderator (Tony).
Best regards,
Christopher George
Founder/CTO
www.ddrdrive.com
aster,
> assuming that cache disabled on a rotating drive is roughly 100
> IOPS with queueing), that it'll still provide a huge performance boost
> when used as a ZIL in their system.
I agree 100%. I never intended to insinuate otherwise :-)
Best regards,
Christopher George
Fou
ing.com/Home/scripts-and-programs-1/zilstat
Best regards,
Christopher George
Founder/CTO
www.ddrdrive.com
SATA cable, see slides 15-17.
Best regards,
Christopher George
Founder/CTO
www.ddrdrive.com
k that has
always been there. Now you can monitor how much CPU is being used by the
underlying ZFS I/O subsystem. If you're seeing a specific performance
problem feel free to provide more details about the issue.
- George
On Mon, Jan 31, 2011 at 4:54 PM, Gary Mills wrote:
> After an upgrad
is immune to TRIM support status and
thus unaffected. Actually, TRIM support would only add
unnecessary overhead to the DDRdrive X1's device driver.
Best regards,
Christopher George
Founder/CTO
www.ddrdrive.com
Chris,
I might be able to help you recover the pool but will need access to your
system. If you think this is possible just ping me off list and let me know.
Thanks,
George
On Sun, Feb 6, 2011 at 4:56 PM, Chris Forgeron wrote:
> Hello all,
>
> Long time reader, first ti
Can you share your 'zpool status' output for both pools?
Also you may want to run the following a few times in a loop and
provide the output:
# echo "::walk spa | ::print spa_t spa_name spa_last_io
spa_scrub_inflight" | mdb -k
Thanks,
George
On Sat, May 14, 2011 at 8:29 AM,
Can you check
that you didn't mistype this?
Thanks,
George
On Mon, May 16, 2011 at 7:41 AM, Donald Stahl wrote:
>> Can you share your 'zpool status' output for both pools?
> Faster, smaller server:
> ~# zpool status pool0
> pool: pool0
> state: ONLINE
> sc
system so you may want to make this
change during off-peak hours.
Then check your performance and see if it makes a difference.
- George
On Mon, May 16, 2011 at 10:58 AM, Donald Stahl wrote:
> Here is another example of the performance problems I am seeing:
>
> ~# dd if=/dev/zero of=/p
's or processor load -
> so I'm wondering what else I might be missing.
Scrub will impact performance although I wouldn't expect a 60% drop.
Do you mind sharing more data on this? I would like to see the
spa_scrub_* values I sent you earlier while you're running your test
(in a loop s
4) In one internet post I've seen suggestions about this
> value to be set as well:
> set zfs:metaslab_smo_bonus_pct = 0xc8
>
> http://www.mail-archive.com/zfs-discuss@opensolaris.org/msg40765.html
This is used to add more weight (i.e. preference) to specific
metaslabs. A metasla
as to spare the absent
> whiteboard ,)
No. Imagine if you started allocations on a disk and used the
metaslabs that are at the edge of the disk and some out a 1/3 of the way
in. Then you want all the metaslabs which are a 1/3 of the way in and
lower to get the bonus. This keeps the allocations tow
Don,
Try setting the zfs_scrub_delay to 1 but increase the
zfs_top_maxinflight to something like 64.
Thanks,
George
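On a live system those two changes would look roughly like this (written with
mdb to the running kernel; the values are the ones suggested above):
# echo "zfs_scrub_delay/W0t1" | mdb -kw
# echo "zfs_top_maxinflight/W0t64" | mdb -kw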
On Wed, May 18, 2011 at 5:48 PM, Donald Stahl wrote:
> Wow- so a bit of an update:
>
> With the default scrub delay:
> echo "zfs_scrub_delay/K" | mdb
slice instead of the entire device will
automatically disable the on-board write cache.
Christopher George
Founder / CTO
http://www.ddrdrive.com/
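To illustrate the distinction (pool and device names are placeholders): giving
ZFS the whole disk lets it manage the drive's write cache, while handing it a
slice leaves the cache alone:
# zpool add tank log c4t0d0     (whole disk: ZFS enables the drive's write cache)
# zpool add tank log c4t0d0s0   (slice: the write cache is left disabled)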