was much faster both in clean run time and in the number
of logical reads/scans.
Has anyone thought about it ?
regards, devik
"Mikheev, Vadim" wrote:
>
> > > > > No. Checkpoints are to speed up after-crash recovery and
> > > > > to remove/archive log files. With WAL the server doesn't write
> > > > > any datafiles on commit; only the commit record goes to the log
> > > > > (and the log is fsynced). Dirty buffers remain in memory long
> >
Has anyone thought about query costs ? Say you prepare a
query like SELECT id FROM t1 WHERE type=$1 and
execute it with $1=1 and $1=2. For type=1 there is one record
in t1 and all the others have type=2.
Without caching, the first query will use the index, the second
will not.
Should the cached plan use the index or not ?
devik
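To make the dilemma concrete, here is a minimal sketch in today's SQL;
t1 is taken from the message above, while the skewed data, the index
name and generate_series are invented purely for illustration:

    CREATE TABLE t1 (id int, type int);
    CREATE INDEX t1_type_idx ON t1 (type);
    INSERT INTO t1 VALUES (1, 1);          -- the single row with type = 1
    INSERT INTO t1 SELECT g, 2             -- every other row has type = 2
    FROM generate_series(2, 100000) g;
    ANALYZE t1;

    PREPARE q (int) AS SELECT id FROM t1 WHERE type = $1;
    EXECUTE q(1);  -- an index scan is right here: one matching row
    EXECUTE q(2);  -- a seq scan is right here: nearly every row matches
    -- a single cached plan cannot be the right choice for both values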
Christof
Ah yes .. you are right. Thanks for the explanation.
devik
r improvements.
Ahh, I did not know that there is a need to test a tuple for
validity against some past cid. I thought that we only need
to know whether a tuple has been updated by the current cid,
to ensure that it will not be scanned again in the same
cid... Where am I wrong ?
devik
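The same-command rule in question can at least be seen from SQL; a
minimal sketch with a hypothetical table t:

    CREATE TABLE t (x int);
    INSERT INTO t VALUES (1);
    -- The scan below must not see the row inserted by this very command
    -- (same cid), or the statement would chase its own insertions forever:
    INSERT INTO t SELECT x + 1 FROM t;
    SELECT * FROM t;   -- two rows: 1 and 2

One case where a *past* cid must stay testable is an open cursor: rows
changed by later commands in the same transaction must still look
unchanged to the snapshot the cursor was opened under.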
me of insert or delete. This saves 4 bytes from the
header ..
devik
hen
should also continue to hold all "locks" (or
HEAP_MARKED_FOR_UPDATE in PG) until the PREPARED TX is
resolved.
It probably would not be hard .. ?
devik
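For reference, a minimal sketch of what two-phase commit can look like
at the SQL level; this is the syntax PostgreSQL eventually adopted, the
accounts table and the transaction id are made up, and it assumes
max_prepared_transactions > 0:

    BEGIN;
    UPDATE accounts SET balance = balance - 100 WHERE id = 1;
    -- Phase one: make the transaction durable but undecided.
    -- Its locks (incl. marked-for-update rows) stay held.
    PREPARE TRANSACTION 'tx_42';
    -- Phase two, once every participant has voted to commit:
    COMMIT PREPARED 'tx_42';
    -- or, if any participant aborted:
    -- ROLLBACK PREPARED 'tx_42';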
richard excite wrote:
>
> i'm developing one. a library for batch transactions, so you
> can continue processing in the mi
Hello,
has anyone thought about implementing two-phase commit to
be able to support distributed transactions ?
I have no clue how complex it would be; does anyone know ?
devik
> > > Do you expect to load several million transaction statuses into memory,
> > > then scan the index and look up these values ?
> > > Am I missing something ?
> > > devik
> > Not sure. I figured they were pretty small values.
> IIRC the whole point was to avoid scanning the table ?
Yes. This was the main point ! For a small number of records the
curre
> index. No need to store/update the transaction
> status in the index that way.
Huh ? How ? That is how you do it now. Do you expect
to load several million transaction statuses into memory,
then scan the index and look up these values ?
Am I missing something ?
devik
Where will the WAL sit ? Can you explain it a bit ?
thanks devik
> > The question is whether we need it for hash indices. It is definitely
> > good for btree, as btrees support range retrieval. Hash indices don't,
> > so I wouldn't implement it for them.
>
> We need fast heap tuple --> index tuple lookup for the overwriting
> storage manager anyway...
oh .. there w
> What is *true* CLUSTER ?
>
> 'grep CLUSTER' over the latest SQL standards gives back nothing.
Storing data in a b-tree instead of a heap, for example.
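A sketch of the difference in PostgreSQL terms, using today's syntax
and a hypothetical index:

    -- What CLUSTER does today: a one-time physical rewrite of the heap
    -- in index order; the ordering decays as the table is updated.
    CLUSTER t1 USING t1_type_idx;
    -- A *true* CLUSTER would keep the rows inside the btree itself
    -- (an index-organized table), so the order is maintained forever.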
> > And update the *entire* heap after addition of a new index?!
>
> I guess that this should be done even for a limited number of
> indices' TIDs in a heap
-tree
which would store precomputed aggregate values (like
cnt, min, max, sum) in btree node keys. It could then be
used for extremely fast evaluation of aggregates. But
in the case of MVCC it is more complicated, and I'd like
to see how it would be affected by WAL.
devik
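The btree idea itself cannot be written in SQL, but a per-key summary
table is a rough sketch of the intended payoff (all names invented):

    CREATE TABLE data (k int, v int);
    CREATE TABLE data_agg (
        k    int PRIMARY KEY,
        cnt  bigint,
        vmin int,
        vmax int,
        vsum bigint
    );
    -- Refresh the precomputed aggregates; the btree variant would keep
    -- them up to date incrementally inside the index nodes themselves:
    INSERT INTO data_agg
    SELECT k, count(*), min(v), max(v), sum(v) FROM data GROUP BY k
    ON CONFLICT (k) DO UPDATE SET
        cnt = EXCLUDED.cnt, vmin = EXCLUDED.vmin,
        vmax = EXCLUDED.vmax, vsum = EXCLUDED.vsum;
    -- Aggregate lookup without scanning "data" at all:
    SELECT cnt, vmin, vmax, vsum FROM data_agg WHERE k = 7;

Under MVCC the hard part is exactly the one noted above: an aggregate
stored in a node would have to be correct for every snapshot that can
still see that node.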
> > > The last step could be done in two ways. First, by limiting
> > > the number of indices for one table we can store the corresponding
> > > indices' TIDs in each heap tuple. The update is then simple,
> > > taking one disk write.
> >
> > Why limit it ? One could just save a TID array in each tuple.
b
be able to go without
disturbing. The index page should be locked in memory only for a few
ticks during the actual memcpy to the page.
BTW: IMHO once WAL becomes functional, do we still need multi-versioned
tuples in the heap ? Why not just version tuples in the WAL log and add
them during scans ?
devik
Is anyone interested in this ??
regards devik
With current indexscan:
! system usage stats:
! 1812.534505 elapsed 93.060547 user 149.447266 system sec
! [93.118164 user 149.474609 sys total]
! 0/0 [0/0] filesystem blocks in/out
! 130978/32 [131603/297] page faults/r
Hi,
I'm just curious how MVCC will work with WAL ? Will
it work in the same fashion as now, only with tuples written
using WAL ?
Or will it search for old tuple versions in the log ?
thanks devik