Igal, thanks for the analysis.
Craig>Part of the question for Pg is what exactly we should and should not
be returning.
I think the following might be a good starting point: return set of columns
that would identify the inserted row(s).
E.g. columns of the primary key would do, as would columns of any unique constraint.
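The idea of deriving the identifying columns from catalog metadata can be sketched as follows. This is a toy illustration in Python using SQLite's catalog as a stand-in; in PostgreSQL the equivalent lookup would go against pg_index/pg_attribute, and the table name here is made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE orders (region INT, seq INT, item TEXT,"
            " PRIMARY KEY (region, seq))")

# PRAGMA table_info rows are (cid, name, type, notnull, dflt_value, pk);
# pk > 0 is the 1-based position of the column within the primary key.
info = cur.execute("PRAGMA table_info(orders)").fetchall()
pk_rows = sorted((r for r in info if r[5] > 0), key=lambda r: r[5])
pk_cols = [r[1] for r in pk_rows]
print(pk_cols)  # → ['region', 'seq']
```

In PostgreSQL terms, the driver would then ask for `INSERT ... RETURNING region, seq`.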
2016-03-23 16:21 GMT+03:00 Merlin Moncure :
> On Wed, Mar 23, 2016 at 7:27 AM, Craig Ringer wrote:
Craig>> With PREPARE IF NOT EXISTS the client is also paying parse and network
Craig>> overhead for every time you send that statement. Much better not
Craig>> to send it repeatedly in the first place.
Merlin>No one is arguing that you should send it every time (at
least -- I hope not).
Well, what is your suggestion exactly?
pgjdbc is NOT using the "PREPARE ..." SQL command.
I'm inclined to suppose it will not use "PREPARE ..." even after your fix.
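Concretely, instead of the SQL-level command, the driver emits a protocol-level Parse message. A minimal sketch of that wire layout (field order per the documented v3 protocol; the statement name and query are examples):

```python
import struct

def parse_message(stmt_name: str, query: str, param_oids=()) -> bytes:
    """Build a v3 extended-protocol Parse message: what a driver sends
    instead of a textual "PREPARE ..." SQL command."""
    body = stmt_name.encode() + b"\x00"       # statement name, NUL-terminated
    body += query.encode() + b"\x00"          # query text, NUL-terminated
    body += struct.pack("!H", len(param_oids))
    body += b"".join(struct.pack("!I", oid) for oid in param_oids)
    # 'P' tag, then Int32 length that counts itself but not the tag
    return b"P" + struct.pack("!I", len(body) + 4) + body

msg = parse_message("S_1", "SELECT * FROM users WHERE id = $1", (23,))
```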
Merlin>Again, not in pooling scenarios
Merlin>proposed would allow use of server side prepared statements with JDBC.
It would not. If we discussed end-to-end scenarios in detail, we would end up
with "send full query on each execution" -> "lex/gram on each execution" kinds
of overhead.
That is hardly a proper way of using prepared statements.
Merlin> All I'm saying is that the use of
Merlin> server side prepared statements is extremely problematic in
Merlin> conjunction with pgbouncer
I've filed https://github.com/pgbouncer/pgbouncer/issues/126 to get
pgbouncer improved in regard to prepared statements.
Vladimir
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
Craig>I really, really doubt you can change this before we do a
protocol version bump.
Technically speaking, the idea of using first bytes of statement name
to convey extra information does not require protocol version bump. It
can be backward and forward compatible.
For instance: if the statement name starts with an agreed-upon prefix, old
servers would simply treat it as part of the name.
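A hypothetical illustration of conveying a flag in the first bytes of a statement name (the marker byte and names here are made up, not a real PostgreSQL feature). Servers that know the convention strip the prefix; servers that don't simply see a slightly longer, still-valid statement name, which is why no version bump would be required:

```python
NO_CACHE_PREFIX = "\x01"   # assumed marker byte, chosen for illustration

def encode_statement_name(name: str, skip_generic_cache: bool) -> str:
    """Client side: prepend the marker when the flag is set."""
    return (NO_CACHE_PREFIX + name) if skip_generic_cache else name

def decode_statement_name(wire_name: str):
    """Server side: strip the marker if present, report the flag."""
    if wire_name.startswith(NO_CACHE_PREFIX):
        return wire_name[len(NO_CACHE_PREFIX):], True
    return wire_name, False

name, flag = decode_statement_name(encode_statement_name("S_1", True))
```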
Tom>If you think that's not a protocol change, you are mistaken. It
Tom>changes a behavior that's specified in the protocol documentation.
Even if it requires documentation, this particular change will work seamlessly
across existing implementations of v3 protocol.
For instance, it would not require existing clients or poolers to change.
Tom> Not to mention that the whole idea of that being a semantically
Tom> significant property of a name is a monstrous kluge.
You are right here.
Just in case: Marko Kreen says (see [1]) pgbouncer has all the information
required to remap statement names, so he says pgbouncer needs no
cooperation from clients.
Here's the simplified testcase:
https://gist.github.com/vlsi/df08cbef370b2e86a5c1
It reproduces the problem in both 9.4.4 and 9.5rc1.
It is reproducible via both psql and pgjdbc.
I use a single table, however my production case includes a join of
two tables and the query is like
select ... from f
> I don't know if there is a good solution except disabling server prepared
> statements.
Why doesn't the backend reuse the already existing good plan?
That plan does account for the skew.
Can the backend take selectivities from the original bind values?
Vladimir
VS>>Why doesn't backend reuse already existing good plan?
PS>this logic is driven by plan cost, not by plan execution time.
I do understand that currently PG replans with $1, $2 and uses
default selectivities for those.
What I am asking is to make PG aware of "previously used bind values",
so it can plan for realistic values rather than defaults.
>the implementation is simply - but it hard to design some really general - it
>is task for UI
Can you please rephrase?
The current design is "if the cost of a generic plan is less than the cost
of a custom plan plus replanning, prefer generic".
I think that is wrong.
The "generic plan" underestimates the cost.
Please read my suggestion again.
TL;DR: I suggest to create "generic plan" with regard to current bind values.
What's wrong with that approach?
Vladimir
Note: I do not suggest changing already cached plans yet.
I suggest looking at the 6th set of bind values when building the cached plan.
In other words: "if the first 5 executions do not reveal a dependence on bind
values, then cache the generated plan".
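That heuristic can be sketched as follows. This is a toy illustration of the proposal, not PostgreSQL's actual plancache logic; the threshold and tolerance are assumptions:

```python
def should_cache_plan(custom_plan_costs, tolerance=0.1):
    """Watch the first five custom plans; if their costs stayed stable,
    the plan apparently does not depend on the concrete bind values,
    so it is safe to cache the plan built for those values."""
    if len(custom_plan_costs) < 5:
        return False          # keep planning with real values for now
    lo, hi = min(custom_plan_costs), max(custom_plan_costs)
    return (hi - lo) <= tolerance * hi
```

A stable cost history caches; a spiky one (plans that depend on the values) keeps replanning.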
>Say you already have a plan which looks like this:
>Now th
> The custom plan is *more expensive*;
You compare costs of custom vs generic plans.
I suggest: do not compare costs *at all*.
>I don't know, it's your proposal :-) But it looks like I misunderstood.
It is not.
My suggestion is: build a generic plan (that is, a plan that will
return proper results for any bind values) using the current bind values
for cost estimation.
> for one custom plans can be much better than the generic plan, independent of
> cardinalities
So what? I do not suggest dropping custom plans entirely.
I perfectly understand there are cases when it is better to replan every time.
> consider e.g a table with one somewhat common and otherwise just uniqu
>This completely screws over PL/PgSQL, among other things.
Can you elaborate a bit?
Vladimir
>so you don't get to (or want to) have any control over the underlying prepared
>statement.
That is pl/pgsql's problem, isn't it?
In the meantime, users can use different query texts (e.g. by adding
the "offset 0", "offset 0*1", "offset 0*2" kind of stuff they typically use
to tune queries) to convince the backend to plan each text separately.
> Generally using a very small sample
That is another issue. Inventing some other algorithm instead of
current "cache after 5 executions" is another effort.
However, I suggest to "learn" from what client is sending.
You suggest to completely ignore that and just prepare for the case
he/she will s
>Basically you're arguing to fix one specific edge case which bugs you
>personally, by creating a lot of others, which don't bug you. Not
>convincing.
It bugs me.
It bugs clients of pgjdbc (e.g. Thomas, who started the thread).
Note: support of prepared statements for Java applications has just
landed in pgjdbc.
>A value of -1 could disable generic plans
I do not like the idea.
I've seen dramatic performance improvements from using cached plans.
The numbers are like "20ms to plan vs 1ms to execute" for an often
used OLTP query. Query text is involved (~5-10KiB).
Vladimir
>(1) the generic plan is falsely optimistic
That is my case.
The application is sending the most common value on every execution, while
the backend is optimistic and thinks the app will stop sending
MCVs.
Costs for the plans are OK. However, there is a data skew, so it is
hard to tell what the "true" cost is.
>If plan is ok for one value parameters, then can be pretty bad for following
>parameters.
Happy statements are all alike; every unhappy statement is unhappy in
its own way (see [1]).
If users send different kinds of parameters, they are shooting themselves
in the foot.
>Albe's proposal can be good enough.
t significant time is
spent in standard_ExecutorStart: see [3]
In fact, the time spent in standard_ExecutorStart even exceeds the
time spent in standard_ExecutorRun.
[1]: http://www.postgresql.org/message-id/55130dc8.2070...@redhat.com
[2]: https://github.com/pgjdbc/pgjdbc/pull/491#issuecomment-171780054
>I guess you mean there's a transaction surrounding it?
Sure there is a transaction.
I measure the latency from the first Bind message to the ReadyForQuery response.
The database is at localhost.
The flow is as follows (I've used 4 queries in a batch for brevity;
however, the actual test is executed with more):
=0)
FE=> Sync
<=BE ParseComplete [null]
<=BE BindComplete [unnamed]
<=BE NoData
<=BE CommandStatus(SET)
<=BE ReadyForQuery(I)
Am I missing something?
[1]: http://www.postgresql.org/message-id/22921.1358876...@sss.pgh.pa.us
[2]: https://github.com/pgjdbc/pgjdbc/issues/496
Vladimi
> I believe, and the conclusion was that
>if you think you need this, you're doing it wrong
So what is the recommended approach for using server-prepared statements
on the client side (I mean at the JDBC driver side)?
Currently "prepare, switch search_path, execute" leads to "cached plan
must not change result type" errors.
Robert>Are you really seeing the same behavior in all versions?
I do not have "pre 9.1" at hand, however all 9.1, 9.2, 9.3, 9.4, and
9.5 are affected.
9.1 just silently executes the "old statement" as if search_path was not
modified at all.
9.2, 9.3, 9.4, and 9.5 all fail with "cached plan must not change result
type".
I want to treat 'prepare' operation as an optimization step, so it is
functionally equivalent to sending a query text.
In other words, I would like the backend to track search_path and other
parameters transparently, creating (caching) different execution plans if
different plans are required.
Why do you want that at the database level?
Do you have an end-to-end scenario that benefits from using Lucee?
>I was wondering how difficult it would be to implement a Postgres extension
>that will act as a wrapper around it and will allow to write functions in that
>language?
Have you checked PL/v8?
> If the pl/v8 was easily ported to Windows then I probably wouldn't even try
> to add Lucee,
That is a good question. ChakraCore has been open sourced recently. It
might be easier to build under Windows.
>That seems like a good place to start, thanks
I am not sure you would be able to bind high
Robert>Hmm, so in your example, you actually want replanning to be able to
Robert>change the cached plan's result type?
I want the backend to cache _several_ plans behind a single "statement name".
I want to treat the "prepare...exec...deallocate" dance as an optimization
step for a simple "exec...exec...exec" flow.
Tatsuo>understanding it always uses unnamed portal even if the SQL is like
Tatsuo>"BEGIN" or "COMMIT" (no parameters). They are too often used. Why not
Tatsuo>doing like this?
Does it actually work?
The documentation says named portals last till the end of the transaction:
https://www.postgresql.org/docs/9.5/
Hi,
I've tried your indexonlypatch5.patch against REL9_6_BETA3.
Here are some results.
TL;DR:
1) <> does not support an index-only
scan for the index (type, upper(vc) varchar_pattern_ops).
3) <<(... where type=42 offset 0) where upper_vc like '%ABC%'>> does
trigger an index-only scan. IOS reduces the number of heap fetches.
Ildar>The reason why this doesn't work is that '~~' operator (which is a
Ildar>synonym for 'like') isn't supported by operator class for btree. Since
Ildar>the only operators supported by btree are <, <=, =, >=, >, you can use
Ildar>it with queries like:
Ildar>And in 3rd query 'OFFSET' statement p
Ildar> Could you please try the patch and tell if it works for you?
I've tested patch6 against recent head. The patch applies with no problems.
The previous case (a filter on top of an index-only scan) is fixed. Great work.
Here are the test cases and results:
https://gist.github.com/vlsi/008e18e18b609fcaaec53
Tsunakawa> PgJDBC has supported the feature with autosave parameter only
recently
PgJDBC has the implementation for more than a year (REL9.4.1210,
2016-09-07, see https://github.com/pgjdbc/pgjdbc/pull/477 )
Tsunakawa> The point raised in this thread was that it creates
Tsunakawa> too much network traffic.
Tsunakawa>So the statement-level rollback is newer to users, isn't it?
Technically speaking, the feature was listed in the changelog.
Tsunakawa>Doesn't PgJDBC execute RELEASE after each SQL statement?
It does not.
Tsunakawa>That said, even with RELEASE, the server memory bloat is not
solved.
Hi,
In pgjdbc we have regular regression testing against "built from
master" PostgreSQL, and recent master builds fail the "statement cancel"
test.
PostgreSQL as of Mon Oct 17 00:09:39 UTC 2016 was fine; then "statement
cancel" started to fail.
The test executes "select pg_sleep(10)" and tries to cancel it.
legrand>when using version 42.0.0 with
legrand> jdbc:postgresql://localhost:5432/postgres?autosave=always
The pitfall there is that the value should be written in upper case, like
autosave=ALWAYS.
I've filed https://github.com/pgjdbc/pgjdbc/issues/769 to improve that at
some point.
Vladimir
++pgjdbc dev list.
>I am facing unusual connection breakdown problem. Here is the simple code
that I am using to read WAL file:
Does it always fail?
Can you create a test case? For instance, if you file a pull request with
the test, it will get automatically tested across various PG versions.
>OK, so here's a patch. Review appreciated.
Please correct typo "Write an unsigned integer field (anythign written with
UINT64_FORMAT)". anythign -> anything.
Vladimir
tion (does it?), however that is
a different discussion.
Regards,
Vladimir Sitnikov
There is already a patch for that (ignore/respect nulls in lead/lag):
https://commitfest.postgresql.org/action/patch_view?id=1096
--
Vladimir
-manes/caffeine/issues/23#issuecomment-161536706
Vladimir Sitnikov
> a global lock would be good enough for a proof of concept that only
evaluates cache hit ratios.
I think an emulator can be used to check hit ratios. That way we can see
how different algorithms affect the hit ratio.
Is there a set of traces of "buffer load events"? (I did some Google
searches like "po
>Have you got numbers showing any actual performance win for postgres_fdw?
For JDBC purposes, it would be nice to have the ability to ask the
backend "to stop fetching if it produced more than X MiB of response
data".
For a small table (4 int4 fields), having a decent fetchSize (~1000) makes
result processing noticeably faster.
>and fetch a number of rows that, by its estimation, would fit in the memory
>available
What's wrong with having a size limit in the first place? It seems to
make much more sense.
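The size-limit idea can be sketched client-side. This is a hypothetical helper, not an existing pgjdbc or backend API; the byte estimate is deliberately crude:

```python
def fetch_with_byte_budget(row_iter, max_bytes):
    """Accumulate rows until the estimated response size crosses the
    budget -- a client-side stand-in for "stop fetching after X MiB"
    (the actual proposal would enforce this on the backend)."""
    batch, total = [], 0
    for row in row_iter:
        batch.append(row)
        total += sum(len(str(col)) for col in row)  # rough size estimate
        if total >= max_bytes:
            break
    return batch
```

With a fixed byte budget, wide rows yield small batches and narrow rows yield large ones, which is exactly what a row-count-only fetchSize cannot express.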
Vladimir
The thing is, the PostgreSQL JDBC driver has to parse SQL in order to
tell whether a specific question mark is a bind variable, or inside a string
literal, or inside a comment.
I do not believe the JDBC spec forces drivers to treat every ? as a bind
placeholder.
That's unusable.
Oracle JDBC allows h
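The kind of scanning involved can be sketched as follows. This is a simplified toy, not pgjdbc's actual parser: it ignores dollar-quoting, nested block comments, and E'' escapes that a real driver must also handle:

```python
def count_bind_placeholders(sql: str) -> int:
    """Count '?' characters that are real bind placeholders, skipping
    string literals, line comments, and block comments."""
    count = i = 0
    n = len(sql)
    while i < n:
        if sql[i] == "'":                     # string literal
            i += 1
            while i < n:
                if sql[i] == "'":
                    if i + 1 < n and sql[i + 1] == "'":   # doubled ''
                        i += 2
                        continue
                    break                      # closing quote
                i += 1
        elif sql.startswith("--", i):          # line comment
            while i < n and sql[i] != "\n":
                i += 1
        elif sql.startswith("/*", i):          # block comment (no nesting)
            i += 2
            while i < n and not sql.startswith("*/", i):
                i += 1
            i += 1                             # skip past '*'
        elif sql[i] == "?":
            count += 1
        i += 1
    return count
```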
> The problem is that, even if I set the reltuples and relpages of my choice, when I run the EXPLAIN clause for a query in which the 'newTable' is involved in (e.g. EXPLAIN SELECT * FROM newTable), I get the same
Craig>That moves work further away from the DB, which has its own costs,
and isn't something you're likely to be happy with if you're looking at
things like optimising PL/PgSQL with a bytecode compiler. But it's the best
we have right now.
What if the JVM was started within a background worker?
Then Java code would run next to the data.
>
>
> >> That's a recipe for runaway table bloat; VACUUM can't do much because
> >> there's always some minutes-old transaction hanging around (and SNAPSHOT
> >> TOO OLD doesn't really help, we're talking about minutes here), and
> >> because of all of the indexes HOT isn't effective.
>
Just curious.
Shay Rojansky :
> I'm well aware of how the extended protocol works, but it seems odd for a
> 30% increase in processing time to be the result exclusively of processing
> 5 messages instead of just 1 - it doesn't seem like that big a deal
> (although I may be mistaken). I was imagining that there'
>
> I really don't get what's problematic with posting a message on a mailing
> list about a potential performance issue, to try to get people's reactions,
> without diving into profiling right away
>
"Benchmark data is a perfect substitute for benchmarking results. Data is
easy to misinterpret, s
Tom Lane :
> Bruce Momjian writes:
> > On Sun, Jul 31, 2016 at 05:57:12PM -0400, Tom Lane wrote:
> >> In hindsight it seems clear that what a lot of apps want out of extended
> >> protocol is only the ability to send parameter values out-of-line
> instead
> >> of having to quote/escape them into
Tatsuo Ishii :
> Doesn't this patch break an existing behavior of unnamed statements?
> That is, an unnamed statement shall exist until next parse message
> using unnamed statement received. It is possible to use the same
> unnamed statement multiple times in a transaction.
>
>Doesn't this patch
>
> Sorry, but I don't buy that. I think sending both server_version and
> server_version_num would be silly, and we're certainly not going to stop
> sending server_version.
>
What is wrong with sending a machine-readable value?
Vladimir
Tom Lane :
> [ shrug... ] What do you claim is not machine-readable about
> server_version?
>
0) server_version needs a dance to parse.
For instance, recent "Stamp version 10devel" patch did touch
"server_version" parsing in fe-exec.c:
https://github.com/vlsi/postgres/pull/2/files#diff-2df5cad06
Shay Rojansky :
>
> That sounds right to me. As you say, the server version is sent early in
> the startup phase, before any queries are sent to the backend, so frontends
> know which server they're communicating with.
>
> We could call this "protocol 3.1" since it doesn't break backwards
> compat
Tom Lane :
> FWIW, this thread started on 25-July, less than two weeks ago.
Technically speaking, there was a pgsql-jdbc thread started on May 14:
https://www.postgresql.org/message-id/nh72v6%24582%241%40ger.gmane.org
9.6beta1 was released on May 12
The fact that it wasn't raised
> till more t
Shay Rojansky :
> That's definitely a valid point. But do you think it's a strong enough
> argument to avoid ever adding new messages?
>
The point is "adding a message to the current v3 protocol is not a backward
compatible change".
The problem with adding new message types is not only "client support".
Shay Rojansky :
> Ah, I understand the proposal better now - you're not proposing encoding a
> new message type in an old one, but rather a magic statement name in Parse
> which triggers an optimized processing path in PostgreSQL, that wouldn't go
> through the query cache etc.
>
Exactly.
> If
I'm sorry, we are discussing technical details with no real-life use case
to cover them.
I do not want to sink time for no reason. Please accept my sincere
apologies for not asking for the real-life case earlier.
Shay, can you come up with a real-life use case when those "I claim the
statement will be
Shay>There are many scenarios where connections are very short-lived (think
about webapps where a pooled connection is allocated per-request and reset
in between)
Why is the connection reset in between in the first place?
In pgjdbc we do not reset the per-connection statement cache, thus we easily
reuse server-prepared statements across requests.
Shay>But here's the more important general point. We're driver developers,
not application developers. I don't really know what performance is "just
fine" for each of my users, and what is not worth optimizing further. Users
may follow best practices, or they may not for various reasons.
Of course
Robert Haas:
> but for some reason you can't use prepared statements, for example because
> the queries are dynamically generated and . That case is analogous to -M
> extended, not -M prepared. And -M extended is well-known to be SLOWER
>
I do not buy the "dynamically generated queries defeat prepared
statements" argument.
Robert>But that makes it the job of every driver to implement some
sort of cache, which IMHO isn't a very reasonable position
Let's wait and see what Shay decides on implementing a query cache in npgsql.
Here's the issue: https://github.com/npgsql/npgsql/issues/1237
He could change his mind when it comes to the implementation.
Stephen>I encourage you to look through the archives
The thing is, pl/pgsql suffers from exactly the same problem.
pl/pgsql is not a typical language of choice (e.g. see the Tiobe index and
the like), so the probability of running into "prepared statement issues" was
low.
As more languages use server-prepared statements, the issue will surface
more often.
Stephen> While it may have good results in many cases, it's not accurate to
say that using prepared statements will always be faster than not.
There's no silver bullet. <-- that is accurate, but it is useless for
end-user applications.
I've never claimed that "server prepared statements" are a silver bullet.
Shay> it's important to note that query parsing and rewriting isn't an
"inevitable evil".
OK, I stand corrected. ADO.NET has a raw mode in the API. That's interesting.
Let's say "lots of heavily used languages do have their own notion of bind
placeholders".
And for the rest, it is still not that hard.
Shay:
> Prepared statements can have very visible effects apart from the speedup
> they provide (e.g. failure because of schema changes) It's not that these
> effects can't be worked around - they can be - but programmers can be
> surprised by these effects, which can cause difficult-to-diagnose
Shay>As I said, an error is going to kill the ongoing transaction, how can
this be solved without application logic?
1) At least, some well-defined error code should be created for that kind
of matter.
2) The driver can use savepoints and automatic rollback to the good "right
before failure" state in case of an error.
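Point 2 can be sketched as follows, with SQLite standing in for PostgreSQL (the savepoint SQL is the same); the driver would issue the hidden savepoint automatically before each statement:

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)  # manual tx control
cur = conn.cursor()
cur.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")
cur.execute("BEGIN")
cur.execute("INSERT INTO t VALUES (1)")

cur.execute("SAVEPOINT before_stmt")          # the driver's hidden savepoint
try:
    cur.execute("INSERT INTO t VALUES (1)")   # duplicate key -> error
except sqlite3.IntegrityError:
    # auto-rollback to the "right before failure" state
    cur.execute("ROLLBACK TO SAVEPOINT before_stmt")

cur.execute("INSERT INTO t VALUES (2)")       # transaction is still alive
cur.execute("COMMIT")
rows = [r[0] for r in cur.execute("SELECT id FROM t ORDER BY id")]
print(rows)  # → [1, 2]
```

The failed statement is undone, but the ongoing transaction survives and commits.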
Victor>We don't have 190 message catalog translations in the PostgreSQL.
Victor>So problem with encoding for messages is quite limited.
Even though the number of translations is limited, there's a problem when
trying to tell one one-byte encoding from another.
It would be so much easier with an explicit encoding field.
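The ambiguity is easy to demonstrate: the very same byte is a valid character in many one-byte encodings, so decoding "succeeds" either way and the client cannot detect that it picked the wrong one.

```python
# One byte from a server error message, encoding unknown to the client.
payload = b"\xe4"
as_latin1 = payload.decode("latin-1")  # 'ä' if the server spoke LATIN1
as_cp1251 = payload.decode("cp1251")   # 'д' if the server spoke WIN1251
print(as_latin1, as_cp1251)            # both decodes succeed, yet differ
```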
Shay>I don't know much about the Java world, but both pgbouncer and pgpool
(the major pools?)
In the Java world, https://github.com/brettwooldridge/HikariCP is a very good
connection pool.
Neither pgbouncer nor pgpool is required.
The architecture is: application <=> HikariCP <=> pgjdbc <=> PostgreSQL.
Victor>It is not a client job to convert encodings.
Of course.
However, there is a vast number of old PG versions deployed in the wild
that send wrong data to clients.
This indeed makes for a bad user experience, so it is worth doing 2 things:
1) Implement a proper solution in a future PostgreSQL version
Tom> while giving something at least passable in the cases
that are broken today.
Would you mind adding an explicit "encoding" field to the error message?
At least it would give a clear explanation of how to parse that message without
resorting to a guessing dance.
The biggest problem is the client has no idea which encoding was used.
Shay>To be honest, the mere idea of having an SQL parser inside my driver
makes me shiver.
Same for me.
However, I cannot wait for a PostgreSQL 18 that does not need client-side
parsing.
Shay>We did, you just dismissed or ignored them
Please prove me wrong, but I did provide a justified answer to both.
Tatsuo>Interesting. What would happen if a user changes some of GUC
parameters? Subsequent session accidentally inherits the changed GUC
parameter?
Yes, that is what happens.
The idea is not to mess with gucs.
Tatsuo>There's nothing wrong with DISCARD ALL
Tatsuo>"DISCARD ALL" is perfect for this g
Shay> What? I really didn't understand your point here. All the doc is saying is
Shay> that if the driver doesn't support prepared statements, then using them
Please read again. PreparedStatement is the only way to execute parameterized
statements in the JDBC API. There's no API that allows the user to specify
"use server-side preparation" explicitly.
Vladimir>> Yes, that is what happens.
Vladimir>> The idea is not to mess with gucs.
Shay:> Wow... That is... insane...
Someone might say that "programming languages that enable side effects
are insane".
Lots of connection pools work by sharing connections, and it is up
to the developer to decide if the side effects are bearable.
Shay> your analogy breaks down. Of course L2 was invented to improve
performance,
Shay> but that doesn't mean that all caches are the same. More precisely, what I
Shay> find objectionable about your approach is not any caching - it's the
Shay> implicit or automatic preparation of statements. This p
collected data.
I think this change is going to split page access statistics by each
and every execution node (much like 'actual time' is split now).
Will this patch ever be accepted?
Regards,
Vladimir Sitnikov
current counters in buf_init.c is ResetBufferUsage
that zeroes those counters at random (e.g. if log_executor_stats or
log_statement_stats is enabled). I am not sure if it could interfere with
instrumentation.
Regards,
Vladimir Sitnikov
gure out
the session identifier where shared memory was created.
I do not see an elegant solution here. It looks quite easy to store the SID
in some file in the working directory, then read it and connect to the shared
memory.
If that makes sense, I could implement it that way and test it under Vista.
Sincerely yours,
Vladimir Sitnikov
have snapshots every 15 minutes. It does not look too
bad to get the statistics of last 15 minutes lost in case of database crash
either.
Regards,
Vladimir Sitnikov
Bitmap Heap Scan" means the operation fetched 116
pages into shared buffers and 108 of those 116 were buffer hits.
Sincerely yours,
Vladimir Sitnikov
buffer_stats_in_explain_analyze.patch
Description: Binary data
" and similar information after the query
finishes (even in plain "execute" mode).
The second application of those statistics could be server health monitoring:
provided there is an interface for ongoing integral statistics, one could
create a job that takes snapshots, computes the difference, and plots it on a
graph.
Sincerely,
Vladimir Sitnikov
lues
select ''>>>'prev_time', '0'>>>'dense_rank';
-- marathon query
select *
from (
select (((case when time::text = <<<'prev_time' then 0 else 1
end)+(<<<'dense_rank')::int4)::text>>>'dense_rank')::int4 as position,
runnerid, time
from big_marathon
order by time
) results
where position=2
Best regards,
Vladimir Sitnikov
hen '' then null else lag
end)::numeric as used
from (
select date, <<<'lag' as lag, reading::text >>>'lag' as reading
from meter_readings
order by date
) as t
order by used asc nulls last limit 1
Best regards,
Vladimir Sitnikov
d of (4,97) and (4,98) -- it loses
page number somewhere on the way.
Both testcases reveal defects in index creation.
Regards,
Vladimir Sitnikov
BTW: is there a framework to test recovery-related features?
The only idea I can think of off the top of my head is to comment out all the
page writes and leave only WAL logging. Then crash the database at random and
verify that the index still performs as expected.
Regards,
Vladimir Sitnikov
lowing query should calculate cume_dist
properly (and it does so in Oracle):
SELECT name,CAST(r AS FLOAT) / c, cd
FROM (SELECT name,
COUNT(*) OVER(ORDER BY salary) as r,
COUNT(*) OVER() AS c,
CUME_DIST() OVER(ORDER BY salary) AS cd
FROM employees
) t;
Sincerely yours,
Vladimir Sitnikov
word-by-word
translation from SQL to the English and vice versa.
If the patch returns "row_number" it is wrong, since there is no way for
row_number to be "the number of rows preceding or peer with R", is there?
Regards,
Vladimir Sitnikov
One more point on pg_am: amsearchnull is "f"; however, the index
stores nulls and can find them perfectly.
Regards,
Vladimir Sitnikov
block of tid that triggered flushing. Does that make sense?
Regards,
Vladimir Sitnikov
ticles" in *one*
> query, eliminating the need for a separate count(*) query.
Sometimes it is better to perform several separate queries, since the optimizer
could use an index scan to get the "top 20" and a seq scan to get the count(*).
Regards,
Vladimir Sitnikov
afraid the idea of having an API for "reset" is broken and it
makes sense to remove that function. However, that looks out of scope
for this patch.
Regards,
Vladimir Sitnikov
>
> "Vladimir Sitnikov" <[EMAIL PROTECTED]> writes:
>
> > I wish there was a way to get the results of explain into some table. I
> wish
> > it was the default output "format". That would make life of pgAdmin
> easier,
> > and improve re