Jack,

>> A cache might already "know" that the directories are more
>> useful because they are accessed more often ...

Sorry about the misunderstanding - I did not mean that the
cache would know what a FAT or directory is, just that those
sectors are accessed frequently, giving the cache a chance
(e.g. by LRU) to know that it should throw them away later
than "less useful" (less frequently accessed) sectors when
new space is needed to cache something. Also, the frequent
access means that FAT and directories are likely to be in
the cache again soon...



>> ... Also, you do not have to cache data on the first write,
>> you can wait until it is read again before caching it ...
> 
> This would also add code in UIDE (a table of "pending cache"
> disk output areas), that I do not want to include.

Not necessarily. The code in lbacache for this is very
short. A command line argument sets a flag, and then
"write to disk" does the following:

- check in which bin cached data for this sector is
  (you have to do that anyway, to update the cache)

- if any (cache hit) update the cached copy. done.
  (this is needed to keep cache and disk consistent)

- else (cache miss) if flag is not set, we are done.
  (a cache miss for a write does not NEED action...)

- else allocate fresh space in the cache and store
  a copy of the written data (anticipating future reads)

So if the "alloc on write" flag is set, the cache will
assume that written data will eventually be read again.

If the flag is not set, the cache will just write the
data to disk and forget about it. It will be cached at
the moment when the data is really read from disk again.

So if you set the flag, cache memory is consumed faster
and if not, reading just written data can be slower :-)



> UIDE is rather efficient as-is, and I want to keep it so.

I understand that. Maybe my little "algorithm" above
is nevertheless still interesting for UIDE, in particular
for people who have less RAM and copy big files but do
not often use the new copies immediately after copying.

> many folks use DOS as a "simple" (and cheap!) backup/restore
> system for Gawd-AWFUL Windows...

In particular, those users will usually NOT use their
new backup copy right after copying it, so they are a
good example of a case where "alloc on write" can be
disabled.

> copy speed in such operations still matters to me, and since
> memory is now ludicrously cheap, why not USE it for a cache?

A cache is good, but in theory, copying many big files
is fastest with a cache which only caches metadata and
read-ahead content, but does not try to keep the written
data in memory, since that data will not be read back
soon anyway.



One COULD (but normally would not) do something like:
lbacache cool
dir /s > nul
lbacache temp
xcopy ...

That would give the directory metadata extra "stickiness"
in the cache so the xcopy does not flush it out so fast,
in particular with "tuna" option (avoid "tunw" option).

However, the above example is a VERY manual way to tune
a single XCOPY and, as you say, it would add much extra
complexity to make the cache REALLY know what directory
and FAT metadata is...

On the other hand, deciding whether or not to set the TUNW
alloc-on-write option is quite painless for everybody, as
it can be explained with "if you have little memory, use
tuna; if you have a lot of memory, use tunw when loading".

> Thanks!   I am "up and about", but food with high fat levels
> still HURTS when I eat it, due to "less" digestive fluids in
> my system!   The Doc says it can take 18 months to "get used
> to this", so I must simply be patient.

Yes - and maybe take some substitute enzymes, I guess...

Eric



PS: The TUNA option makes the cache fully associative, so
it considers ALL memory slots for caching (slower), while
the default is to consider only e.g. 16 candidate slots,
selected by a hash of the sector number in question
(16-way associative).

When new data has to be cached, the least valuable slot
of old cached data is replaced, based on access counts.


_______________________________________________
Freedos-user mailing list
Freedos-user@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/freedos-user
