On 03/16/2013 12:57 AM, Richard Elling wrote:
> On Mar 15, 2013, at 6:09 PM, Marion Hakanson wrote:
>> So, has anyone done this? Or come close to it? Thoughts, even if you
>> haven't done it yourself?
>
> Don't forget about backups :-)
> -- richard
Transf
t see a fault.
You can have only one Solaris partition at a time. Ian already shared the
answer, "Create one 100%
Solaris partition and then use format to create two slices."
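For example, something along these lines (a rough sketch, not from the original thread; c0t0d0 is a placeholder for your disk):
  fdisk -B /dev/rdsk/c0t0d0p0   # one Solaris2 partition spanning the whole disk
  format c0t0d0                 # then: partition -> define slices 0 and 1 -> label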
-- richard
>
> I'm not keen on using Solaris slices because I don't have an understanding of
> ... multiple
> NFS servers into a single global namespace, without any sign of that
> happening anytime soon.
NFS v4 or DFS (or even clever sysadmin + automount) offers single namespace
without needing the complexity of NFSv4.1, lustre, glusterfs, etc.
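A minimal automounter sketch of that idea (server and path names here are invented):
  /etc/auto_master entry:   /home   auto_home
  /etc/auto_home entries:   alice   nfssrv1:/export/home/alice
                            bob     nfssrv2:/export/home/bob
Every client then sees one /home namespace, regardless of which NFS server holds the data.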
>
> So, has anyone done this?
Create file systems to match the policies. For example:
/home/richard = compressed (default top-level, since properties are
inherited)
/home/richard/media = compressed
/home/richard/backup = compressed + dedup
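In command form that would be something like (pool name "tank" is assumed):
  zfs set compression=on tank/home/richard         # children inherit compression
  zfs create tank/home/richard/media
  zfs create -o dedup=on tank/home/richard/backup  # dedup only where it is worth the RAM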
-- richard
> I am not sure how much memory I will need (my es
iSCSI target service is svc:/network/iscsi/target:default
STMF service is svc:/system/stmf:default
>
> On Solaris 11.1, how would I determine what's busying it?
One would think that fuser would work, but in my experience, fuser rarely does
what I expect.
If you suspect STMF, then try
stmfadm list-lu -v
-- richard
On Feb 20, 2013, at 3:27 PM, Tim Cook wrote:
> On Wed, Feb 20, 2013 at 5:09 PM, Richard Elling
> wrote:
> On Feb 20, 2013, at 2:49 PM, Markus Grundmann wrote:
>
>> Hi!
>>
>> My name is Markus and I live in Germany. I'm new to this list and I have a
> be rejected when the "protected=on" property is set).
Look at the delegable properties (zfs allow). For example, you can delegate a
user to have
specific privileges and then not allow them to destroy.
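A sketch of that kind of delegation (user and dataset names assumed):
  zfs allow markus create,mount,snapshot,send tank/projects
  zfs allow tank/projects     # verify: destroy is not in the delegated set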
Note: I'm only 99% sure this is implemented in FreeBSD, hopeful
rests in seeing ZFS thrive on all platforms, I'm happy
> to
> suggest that we'd welcome all comers on z...@lists.illumos.org.
+1
-- richard
--
ZFS and performance consulting
http://www.RichardElling.com
storage 909G 4.01G 32.5K /storage
>>
>> So I guess the lesson is (a) refreservation and zvol alone aren't enough to
>> ensure your VM's will stay up. and (b) if you want to know how much room is
>> *actually* available, as in "usable," as in, "how
On Jan 29, 2013, at 6:08 AM, Robert Milkowski wrote:
>> From: Richard Elling
>> Sent: 21 January 2013 03:51
>
>> VAAI has 4 features, 3 of which have been in illumos for a long time. The
> remaining
>> feature (SCSI UNMAP) was done by Nexenta and exists in their
On Jan 20, 2013, at 4:51 PM, Tim Cook wrote:
> On Sun, Jan 20, 2013 at 6:19 PM, Richard Elling
> wrote:
> On Jan 20, 2013, at 8:16 AM, Edward Harvey wrote:
> > But, by talking about it, we're just smoking pipe dreams. Cuz we all know
> > zfs is developmentally ch
n every way: # of developers, companies, OSes, KLOCs, features.
Perhaps the level of maturity makes progress appear to move more slowly than
it did early in its life?
-- richard
bloom filters are a great fit for this :-)
-- richard
On Jan 19, 2013, at 5:59 PM, Nico Williams wrote:
> I've wanted a system where dedup applies only to blocks being written
> that have a good chance of being dups of others.
>
> I think one way to do this would be t
sector disks, then
there will be one data and one parity block. There will not be 4 data + 1
parity with 75%
space wastage. Rather, the space allocation more closely resembles a variant of
mirroring,
like some vendors call "RAID-1E"
-- richard
--
richard.ell...@richardelling.com
+1-7
e results a bit because it
is in LBA order for zvols, not the creation order as seen in the real world.
That said, trying to get high performance out of HDDs is an exercise like
fighting the tides :-)
-- richard
--
richard.ell...@richardelling.com
+1-760-896-4422
On Jan 17, 2013, at 9:35 PM, Thomas Nau wrote:
> Thanks for all the answers (more inline)
>
> On 01/18/2013 02:42 AM, Richard Elling wrote:
>> On Jan 17, 2013, at 7:04 AM, Bob Friesenhahn <bfrie...@simple.dallas.tx.us> wrote:
>>
>>
ere chugging
away doing 4K random I/Os, on the wire I was seeing 1MB NFS
writes. In part, this analysis led to my cars-and-trains analogy.
In some VMware configurations, over the wire you could see a 16k
read for every 4k random write. Go figure. Fortunately, those 16k
reads find their way
>> default volblocksize of 8k to a bunch of Citrix Xen Servers through iSCSI.
>> The pool is made of SAS2 disks (11 x 3-way mirrored) plus mirrored STEC RAM
>> ZIL
>> SSDs and 128G of main memory
>>
>> The iSCSI access pattern (1 hour daytime average) looks like
folks who want JBOD control.
-- richard
>
> I recently posted on Server Fault with the Nexenta console representation of
> the HP D2700 JBOD. It's already integrated with NexentaStor.
>
> --
> Edmund White
> ewwh...@mac.com
>
> From: Mark -
> Date: Tuesday, Janu
whitelist them,
https://www.illumos.org/issues/644
-- richard
>
> The OS is oi151a7, running on an existing server with a 54TB pool
> of internal drives. I believe the server hardware is not relevant
> to the JBOD issue, although the internal drives do appear to the
> OS with mul
8i.
> There is a separate intel 320 ssd for the OS. The purpose is to backup data
> from the customer's windows workstations. I'm leaning toward using BackupPC
> for the backups since it seems to combine good efficiency with a fairly
> customer-friendly web interface.
Soun
On Jan 3, 2013, at 8:38 PM, Geoff Nordli wrote:
> Thanks Richard, Happy New Year.
>
> On 13-01-03 09:45 AM, Richard Elling wrote:
>> On Jan 2, 2013, at 8:45 PM, Geoff Nordli wrote:
>>
>>> I am looking at the performance numbers for the Oracle VDI admin guide.
Solaris 11 that are
destabilizing. By contrast, the number of new features being added to
illumos-gate (not to be confused with illumos-based distros) is relatively
modest and in all cases are not gratuitous.
-- richard
--
richard.ell...@richardelling.com
+1-760-896-4422
> I've set the pool sync to disabled, and added a couple
> of
>
> 8. c4t1d0
> /pci@0,0/pci1462,7720@11/disk@1,0
> 9. c4t2d0
> /pci@0,0/pci1462,7720@11/disk@2,0
Setting sync=disabled means your log SSDs (slogs) will not be used.
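You can confirm that from the pool itself (pool name assumed):
  zfs get sync tank        # sync=disabled bypasses the slog entirely
  zpool iostat -v tank 5   # the log vdev should then show no write activity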
-- richard
requirements for
the performance-critical work. So there are a bunch of companies offering
SSD-based arrays
for that market. If you're stuck with HDDs, then effective use of
snapshots+clones with a few
GB of RAM and slog can support quite a few desktops.
On Jan 2, 2013, at 2:03 AM, Eugen Leitl wrote:
> On Sun, Dec 30, 2012 at 10:40:39AM -0800, Richard Elling wrote:
>> On Dec 30, 2012, at 9:02 AM, Eugen Leitl wrote:
>
>>> The system is a MSI E350DM-E33 with 8 GByte PC1333 DDR3
>>> memory, no ECC. All the systems
happy 2013.
>
> P.S. Not sure whether this is pathological, but the system
> does produce occasional soft errors like e.g. dmesg
More likely these are due to SMART commands not being properly handled
for SATA devices. They are harmless.
-- richard
>
> Dec 30 17:45:00 oizfs scsi: [
ct?
>
> Quick update... I found at least one reference to the rate limiting I was
> referring to. It was Richard from ~2.5 years ago :)
> http://marc.info/?l=zfs-discuss&m=127060523611023&w=2
>
> I assume the source code reference is still valid, in which case a popul
bug fix below...
On Dec 5, 2012, at 1:10 PM, Richard Elling wrote:
> On Dec 5, 2012, at 7:46 AM, Matt Van Mater wrote:
>
>> I don't have anything significant to add to this conversation, but wanted to
>> chime in that I also find the concept of a QOS-like capability
with clones.
There are plenty of good ideas being kicked around here, but remember that to
support
things like QoS at the application level, the applications must be written to
an interface
that passes QoS hints all the way down the stack. Lacking these interfaces,
QoS needs to be managed
On Dec 5, 2012, at 5:41 AM, Jim Klimov wrote:
> On 2012-12-05 04:11, Richard Elling wrote:
>> On Nov 29, 2012, at 1:56 AM, Jim Klimov <jimkli...@cos.ru> wrote:
>>
>>> I've heard a claim that ZFS relies too much on RAM caching, but
y or QoS information (eg read() or write()) into the file system VFS
interface. So the granularity of priority control is by zone or dataset.
-- richard
--
richard.ell...@richardelling.com
+1-760-896-4422
available in Entic.net-sponsored OpenIndiana
> and probably in Nexenta, too, since it is implemented inside Illumos.
NexentaStor 3.x is not an illumos-based distribution; it is based on OpenSolaris
b134.
-- richard
y years. Prior to the crypto option being integrated
as a first class citizen in OpenSolaris, the codename used was "xlofi," so
try that in your google searches, or look at the man page for lofiadm
-- richard
known
> expander)
> * Dell PowerVault MD 1200 (2U 12 drives, dual 600w PS, dual unknown expanders)
> * HP StorageWorks D2600 (2U 12 drives, dual 460w PS, single/dual unknown
> expanders)
I've used all of the above and all of the DataOn systems, too (Hi Rocky!)
No real complaints, th
On Oct 19, 2012, at 4:59 PM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris)
wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Richard Elling
>>
>>> At some point, people will bitterly regret s
ams were affecting
> the cache when they started and finished.
There are other cases where data is evicted from the ARC, though I don't
have a complete list at my fingertips. For example, if a zvol is closed, then
the data for the zvol is evicted.
-- richard
>
> Thanks for the res
e to physical paths.
It is fine. The boot process is slightly different in that zpool.cache
is not consulted at first. However, it is consulted later, so there are
edge cases where this can cause problems when there are significant
changes in the de
slides on slideshare:
http://www.slideshare.net/relling
source available on request.
-- richard
>
> Does anyone have a short set of presentation slides or maybe
> a short video I could pillage for that purpose? Thanks.
>
> -- Eugen
> In that fragmented world, some common exchange (replication) format would be
> reassuring.
>
> In this respect, I suppose Arne Jansen's "zfs fits-send" portable streams is
> good news, though it's write-only (to BTRFS). And it looks like a filesys
On Oct 12, 2012, at 5:50 AM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris)
wrote:
>> From: Richard Elling [mailto:richard.ell...@gmail.com]
>>
>> Pedantically, a pool can be made in a file, so it works the same...
>
> Pool can only be made in a file, by
issued to ensure that the data made it to the platters.
Write cache is flushed after uberblock updates and for ZIL writes. This is
important for
uberblock updates, so the uberblock doesn't point to a garbaged MOS. It is
important
for ZIL writes, because they must be guaranteed written to media befor
On Oct 11, 2012, at 6:03 AM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris)
wrote:
>> From: Richard Elling [mailto:richard.ell...@gmail.com]
>>
>> Read it again he asked, "On that note, is there a minimal user-mode zfs thing
>> that would allow
The good thing is that it works with or without ZFS.
The bad thing is that some SMART tools and devices trigger complaints that
show up as errors (that can be safely ignored)
-- richard
>
> Perhaps it's of use for others as well:
>
>
On Oct 10, 2012, at 9:29 AM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris)
wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Richard Elling
>>
>>>> If the recipient syste
>> inside and just get one file. But if you had been doing send |
>> receive, then you obviously can look inside the receiving filesystem
>> and extract some individual specifics.
>>
>> If the recipient system doesn't support "zfs receive," [...]
>
> On that note, is there a minimal user-mode zfs thing that would allow
> receiving a stream into an image file? No need for file/directory access
> etc.
cat :-)
> I was thinking maybe the zfs-fuse-on-linux project may have suitable bits?
I'm sure most Linux distros have cat
-- richard
I can't speak for current FreeBSD, but I've seen more than 400
disks (HDDs) in a single pool.
-- richard
per systems engineering design. Fortunately,
people
tend to not do storage vmotion on a continuous basis.
-- richard
--
richard.ell...@richardelling.com
+1-760-896-4422
don't have any numbers to share.
It is not unusual for workloads to exceed the performance of a single device.
For example, if you have a device that can achieve 700 MB/sec, but a workload
generated by lots of clients accessing the server via 10GbE (1 GB/sec), then it
should be immediately obvious
On Oct 4, 2012, at 9:07 AM, Dan Swartzendruber wrote:
> On 10/4/2012 11:48 AM, Richard Elling wrote:
>>
>> On Oct 4, 2012, at 8:35 AM, Dan Swartzendruber wrote:
>>
>>>
>>> This whole thread has been fascinating. I really wish we (OI) had the two
>
> without encountering the upgrade notice?
>
> I'm using OpenIndiana 151a6 on x86.
-- richard
--
richard.ell...@richardelling.com
+1-760-896-4422
consistency. Both are required to provide a single view of the data.
> 2. CARP.
This exists as part of the OHAC project.
-- richard
--
richard.ell...@richardelling.com
+1-760-896-4422
We'd love to see y'all
there in person, but if you can't make it, be sure to register for the streaming
video feeds. Details at:
www.zfsday.com
Be sure to prep your ZFS war stories for the beer bash afterwards -- thanks
Delphix!
-- richard
--
illumos Day & ZFS Day, Oct 1-2, 2012
speed systems or
modern SSDs. It is probably not a bad idea to change the default to
reflect more modern systems, thus avoiding surprises.
-- richard
> *) l2arc_headroom (default 2x): multiplies the above parameter and
> determines how far into the ARC lists we will s
t. Once again, each zpool will only be imported on one host, but in the
> event of a failure, you could force import it on the other host.
>
> Can anybody think of a reason why Option 2 would be stupid, or can you think
> of a better solution?
If they are close enough for
byte references separately.
>
> I am not sure if there is a simple way to get exact
> byte-counts instead of roundings like "422M"...
zfs get -p
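For example (dataset name assumed):
  zfs get -Hp -o name,property,value used,referenced,available tank/home
-p prints exact byte counts instead of the "422M"-style roundings.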
-- richard
--
illumos Day & ZFS Day, Oct 1-2, 2012 San Francisco
www.zfsday.com
richard.ell...@richardelling.com
+1-760-896-4422
On Sep 25, 2012, at 1:32 PM, Jim Klimov wrote:
> 2012-09-26 0:21, Richard Elling wrote:
>>> Does this mean that importing a pool with iSCSI zvols
>>> on a fresh host (LiveCD instance on the same box, or
>>> via failover of shared storage to a different host)
>>
incur
> ram/cpu costs across the entire pool...
>
> It depends. -- richard
>
>
>
>
> Can you elaborate at all ? Dedupe can have fairly profound performance
> implications, and I'd like to know if I am paying a huge price just to get a
> dedupe
do I (hopefully) miss something?
That is pretty much how it works, with one small wrinkle -- the
configuration is stored in SMF. So you can either do it the hard
way (by hand), use a commercially-available HA solution
(eg. RSF-1 from high-availability.com), or use SMF export/import.
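The SMF export/import route is roughly this shape (a sketch only; test it before trusting a failover to it):
  svccfg export -a stmf > /tmp/stmf.xml     # on the old head
  svccfg import /tmp/stmf.xml               # on the new head, after zpool import
  svcadm restart svc:/system/stmf:default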
-- richard
--
il
>>> In a sense, yes. The dedup machinery is pool-wide, but only writes from
>>> filesystems which have dedup enabled enter it. The rest simply pass it
>>> by and work as usual.
>>
>>
>> Ok - but from a per
y, today. Can you provide a use case for how you want this to work?
We might want to create an RFE here :-)
-- richard
> I've checked "zfs allow" already but it only helps in restricting the user to
> create, destroy, etc something. There is no permission subcommand for lis
want
> to restore is huge, rollback might be a better option.
Yes, rollback is not used very frequently. It is more common to copy out or
clone the older snapshot. For example, you can clone week03, creating
what is essentially a fork.
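For example (dataset and snapshot names assumed):
  zfs clone tank/home@week03 tank/home-week03
  # copy out whatever you need from /tank/home-week03, then destroy the clone
  zfs destroy tank/home-week03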
-- richard
--
illumos Day & Z
unt and
performance.
-- richard
--
illumos Day & ZFS Day, Oct 1-2, 2012 San Francisco
www.zfsday.com
richard.ell...@richardelling.com
+1-760-896-4422
space utilization more like mirroring. The good news is that performance
is also more like mirroring.
-- richard
--
illumos Day & ZFS Day, Oct 1-2, 2012 San Francisco
www.zfsday.com
richard.ell...@richardelling.com
+1-760-896-4422
property. For convenience, "zfs get -p creation ..." will return the time as a
number. Something like this:
for i in $(zfs list -t snapshot -H -o name); do echo $(zfs get -p -H -o value creation $i) $i; done | sort -n
-- richard
--
illumos Day &
For illumos-based distributions, there is a "written" and "written@" property
that shows the
amount of data written to each snapshot. This helps to clear the confusion
over the way
the "used" property is accounted.
https://www.illumos.org/issues/1645
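Example usage (dataset and snapshot names assumed):
  zfs list -t snapshot -o name,used,written tank/home
  zfs get written@monday tank/home    # space written since the 'monday' snapshot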
-- richard
to make massive investments in development and
testing because of an assumption. Build test cases, prove that the
benefits of the investment can outweigh other alternatives, and then
deliver code.
-- richard
>> This IMHO includes primarily the block pointer tree
>> and the DDT for t
On Aug 13, 2012, at 8:59 PM, Scott wrote:
> On Mon, Aug 13, 2012 at 10:40:45AM -0700, Richard Elling wrote:
>>
>> On Aug 13, 2012, at 2:24 AM, Sašo Kiselkov wrote:
>>
>>> On 08/13/2012 10:45 AM, Scott wrote:
>>>> Hi Saso,
>>>>
uid for the device.
It is possible, though nontrivial, to recreate.
That said, I've never seen a failure that just takes out only the ZFS labels.
-- richard
--
ZFS Performance and Training
richard.ell...@richardelling.com
+1-760-896-4422
ou frequently read this list, can you tell me whether they discuss the
> on-disk format in this list?
Yes, but nobody has posted proposals for new on-disk format changes
since feature flags was first announced.
NB, the z...@lists.illumos.org is but one of the many discuss groups
where ZFS users can
On Aug 2, 2012, at 5:40 PM, Nigel W wrote:
> On Thu, Aug 2, 2012 at 3:39 PM, Richard Elling
> wrote:
>> On Aug 1, 2012, at 8:30 AM, Nigel W wrote:
>>
>>
>> Yes. +1
>>
>> The L2ARC as is it currently implemented is not terribly useful for
>> sto
On Jul 31, 2012, at 8:05 PM, opensolarisisdeadlongliveopensolaris wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Richard Elling
>>
>> I believe what you meant to say was "dedup with HDDs sux."
, it occurs on older ZFS implementations and the missing
device is an auxiliary device: cache or spare.
-- richard
--
ZFS Performance and Training
richard.ell...@richardelling.com
+1-760-896-4422
data is compressed. So you cannot
make a direct correlation between the DDT entry size and the effect on the
stored metadata on disk sectors.
-- richard
--
ZFS Performance and Training
richard.ell...@richardelling.com
+1-760-896-4422
> This is on my wishlist as well. I believe ZEVO supports it so possibly
> it'll be available in ZFS in the near future.
ZEVO does not. The only ZFS vendor I'm aware of with a separate top-level
vdev for metadata is Tegile, and it is available today.
-- richard
--
is displayed and should point to a website that tells you how to correct this
(NB, depending on the OS, that URL may or may not exist at Oracle (nee Sun))
-- richard
--
ZFS Performance and Training
richard.ell...@richardelling.com
+1-760-896-4422
ystem that has 4 GB of RAM and
> 2 TB of deduped data, when it crashes (panic, powerfailure, etc) it
> would take 8-12 hours to boot up again. It now has <1TB of data and
> will boot in about 5 minutes or so.
I believe what you meant to say was "dedup with HDDs sux." If you had
On Jul 30, 2012, at 12:25 PM, Tim Cook wrote:
> On Mon, Jul 30, 2012 at 12:44 PM, Richard Elling
> wrote:
> On Jul 30, 2012, at 10:20 AM, Roy Sigurd Karlsbakk wrote:
>> - Opprinnelig melding -
>>> On Mon, Jul 30, 2012 at 9:38 AM, Roy Sigurd Karlsbakk
>>>
"import -m". :)
>
> On 151a2, man page just says 'use this or that mountpoint' with import -m,
> but the fact was zpool refused to import the pool at boot when 2 SLOG devices
> (mirrored) and 10 L2ARC devices were offline. Should OI/Illumos be able to
or the ZIL.
You are both right and wrong, at the same time. It depends on the data.
Without a slog, writes that are larger than zfs_immediate_write_sz are
written to the permanent place in the pool. Please review (again) my
slides on the subject.
http://www.slideshare.net/relling/zfs-t
o sync-heavy scenarios like databases or NFS servers?
Async writes don't use the ZIL.
-- richard
--
ZFS Performance and Training
richard.ell...@richardelling.com
+1-760-896-4422
that you
think is 4KB might look very different coming out of ESXi. Use nfssvrtop
or one of the many dtrace one-liners for observing NFS traffic to see what is
really on the wire. And I'm very interested to know if you see 16KB reads
during the "write-only" workload.
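One such one-liner, assuming the illumos nfsv3 provider (adjust for v4), to see the
actual I/O sizes on the wire:
  dtrace -n 'nfsv3:::op-read-start,nfsv3:::op-write-start { @[probename] = quantize(args[2]->count); }'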
more below...
Important question, what is the interconnect? iSCSI? FC? NFS?
-- richard
On Jul 24, 2012, at 9:44 AM, matth...@flash.shanje.com wrote:
> Working on a POC for high IO workloads, and I’m running into a bottleneck
> that I’m not sure I can solve. Testbed looks like this:
>
> Sup
On Jul 22, 2012, at 10:18 PM, Yuri Vorobyev wrote:
> Hello.
>
> I am faced with a strange performance problem with a new disk shelf.
> We have been using a ZFS system with SATA disks for a while.
What OS and release?
-- richard
> It is Supermicro SC846-E16 chassis, Supermicro X8DTH-6F mother
If you see averages > 20ms, then you are likely
running into scheduling issues.
-- richard
> This is on
> opensolaris 130b, rebooting with openindiana 151a live cd gives the
> same results, dd tests give the same results, too. Storage controller
> is an lsi 1068 using mpt driver. The
eprint.iacr.org or the like...
Agree. George was in that section of the code a few months ago (zio.c) and I
asked him to add a kstat, at least. I'll follow up with him next week, or get
it done some other way.
-- richard
--
ZFS Performance and Training
richard.ell...@richar
On Jul 11, 2012, at 10:23 AM, Sašo Kiselkov wrote:
> Hi Richard,
>
> On 07/11/2012 06:58 PM, Richard Elling wrote:
>> Thanks Sašo!
>> Comments below...
>>
>> On Jul 10, 2012, at 4:56 PM, Sašo Kiselkov wrote:
>>
>>> Hi guys,
>>>
>&g
On Jul 11, 2012, at 10:11 AM, Bob Friesenhahn wrote:
> On Wed, 11 Jul 2012, Richard Elling wrote:
>> The last studio release suitable for building OpenSolaris is available in
>> the repo.
>> See the instructions at
>> http://wiki.illumos.org/display/illumos/How+To+Bui
splay/illumos/How+To+Build+illumos
I'd be curious about whether you see much difference based on studio 12.1,
gcc 3.4.3 and gcc 4.4 (or even 4.7)
-- richard
--
ZFS Performance and Training
richard.ell...@richardelling.com
+1-760-896-4422
allocated size (as in ls -s, units = "blocks").
So you can see that it takes only space for metadata:
1 -rw-r--r-- 1 root root 1073741824 Nov 26 06:52 1gig
(leading "1" = allocated size in blocks; 1073741824 = file length)
-- richard
--
ZFS Performance and Training
richard.ell...@richardelling.com
+1-760-896-44
er import the same pool on those changed
>> drives again?
Yes, we do this quite frequently. And it is tested ad nauseum. Methinks it is
simply a bug, perhaps one that is already fixed.
> If you were splitting ZFS mirrors to read data from one half all would be
> sweet (and you wouldn
, HP also sells an 8-port IT-style HBA (SC-08Ge), but it is hard to locate
with their configurators. There might be a more modern equivalent cleverly
hidden somewhere difficult to find.
-- richard
--
ZFS Performance and Training
richard.ell...@richardelling.com
+1-760-896-4422
Hi Ian,
Chapter 7 of the DTrace book has some examples of how to look at iSCSI target
and initiator behaviour.
-- richard
On Jun 28, 2012, at 10:47 PM, Ian Collins wrote:
> I'm trying to work out the cause and a remedy for a very sick iSCSI pool on a
> Solaris 11 host.
>
> The v
feature flags at
the ZFS Meetup in January 2012 here:
http://blog.delphix.com/ahl/2012/zfs10-illumos-meetup/
-- richard
--
ZFS and performance consulting
http://www.RichardElling.com
On Jun 20, 2012, at 5:08 PM, Jim Klimov wrote:
> 2012-06-21 1:58, Richard Elling wrote:
>> On Jun 20, 2012, at 4:08 AM, Jim Klimov wrote:
>>>
>>> Also by default if you don't give the whole drive to ZFS, its cache
>>> may be disabled upon pool import
that ZFS disables
write caches.
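If in doubt, check the device directly; a sketch for a SCSI/SAS disk (device name assumed):
  format -e c0t1d0    # then: cache -> write_cache -> display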
-- richard
--
ZFS and performance consulting
http://www.RichardElling.com
On Jun 15, 2012, at 7:37 AM, Hung-Sheng Tsao Ph.D. wrote:
> by the way
> when you format, start with cylinder 1, do not use 0
There is no requirement for skipping cylinder 0 for root on Solaris, and there
never has been.
-- richard
--
ZFS and performance consulting
http://www.RichardElli
> Only sync writes will go to the ZIL right away (and not always; see
> logbias, etc.) and to the ARC, to be committed later to the pool when the txg closes.
In this specific case, there are separate log devices, so logbias doesn't apply.
-- ric
will be committed when the size reaches this limit, rather than waiting for the
txg_timeout. For streaming writes, this can work better than tuning the
txg_timeout.
-- richard
>
> Thanks for all the help,
> Tim
>
> On Thu, Jun 14, 2012 at 10:30 PM, Phil Harman wrote:
>>
can react to long commit times differently. In this
example,
we see 1.9 seconds for the commit versus about 400 microseconds for each
async write. The cause of the latency of the commit is not apparent from any
bandwidth measurements (eg zpool iostat) and you should conside