Hi folks,
I'm not sure if this is the right place for this but thought I'd ask.
I'm relatively new to Postgres, having only used it on 3 projects, and
am just delving into the setup and admin for the second time.
I decided to try tsearch2 for this project's search requirements but am having
"Constantine Filin" <[EMAIL PROTECTED]> writes:
> Do you have any ideas why libpq is so much slower than unixODBC?
Perhaps ODBC is batching the queries into a transaction behind your
back? Or preparing them for you?
regards, tom lane
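One way to test that hypothesis from the libpq side is to do explicitly
what ODBC might be doing implicitly: prepare the statement once and batch
the executions into a single transaction. A minimal sketch, assuming a
hypothetical one-column table and connection string:

#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    /* Hypothetical connection string; adjust for your setup. */
    PGconn *conn = PQconnectdb("dbname=test");
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    /* Prepare once, execute many times -- what ODBC may be doing for you. */
    PGresult *res = PQprepare(conn, "ins",
                              "INSERT INTO test(val) VALUES ($1)", 1, NULL);
    PQclear(res);

    /* Batch all executions into one transaction instead of one per statement. */
    PQclear(PQexec(conn, "BEGIN"));
    for (int i = 0; i < 10000; i++) {
        char buf[32];
        const char *params[1] = { buf };
        snprintf(buf, sizeof(buf), "%d", i);
        res = PQexecPrepared(conn, "ins", 1, params, NULL, NULL, 0);
        if (PQresultStatus(res) != PGRES_COMMAND_OK)
            fprintf(stderr, "insert failed: %s", PQerrorMessage(conn));
        PQclear(res);
    }
    PQclear(PQexec(conn, "COMMIT"));

    PQfinish(conn);
    return 0;
}

If either change closes the gap with ODBC, that points at per-statement
transaction or parse overhead rather than libpq itself.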
Greetings -
I really love Postgres and enjoy hacking around with it, but I just
ran into a libpq performance issue that I would like your help with.
I have Postgres 8.0.1 running on Linux version 2.6.10-1.771_FC2.
I have an application that makes queries to Postgres. I want to keep
the database
Jim C. Nasby wrote:
If you're dealing with something that's performance critical you're not
going to be constantly re-connecting anyway, so I don't see what the
issue is.
I didn't include the mailing list in my second reply :( so here it is again.
Someone may find this interesting...
http://arc
David Lang <[EMAIL PROTECTED]> writes:
> On Sat, 21 Jan 2006, Tom Lane wrote:
>> Ron <[EMAIL PROTECTED]> writes:
>>> Maybe we are overthinking this. What happens if we do the obvious
>>> and just make a new page and move the "last" n/2 items on the full
>>> page to the new page?
>>
>> Search per
On Sat, 21 Jan 2006, Tom Lane wrote:
Ron <[EMAIL PROTECTED]> writes:
At 07:23 PM 1/20/2006, Tom Lane wrote:
Well, we're trying to split an index page that's gotten full into
two index pages, preferably with approximately equal numbers of items in
each new page (this isn't a hard requirement though).
On Sat, Jan 21, 2006 at 06:22:52PM +0300, Oleg Bartunov wrote:
> >I see how it works, what I don't quite get is whether the "inverted
> >index" you refer to is what we're working with here, or just what's in
> >tsearchd?
>
> just tsearchd. We plan to implement an inverted index in the PostgreSQL core
>
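For anyone unfamiliar with the term, an inverted index maps each term to
the list of documents containing it. A toy C sketch, with names and fixed
sizes that are purely illustrative:

#include <stdio.h>
#include <string.h>

#define MAX_TERMS 100
#define MAX_POSTINGS 100

/* One entry: a term plus the list of document ids that contain it. */
struct entry {
    char term[32];
    int  docs[MAX_POSTINGS];
    int  ndocs;
};

static struct entry index_[MAX_TERMS];
static int nterms;

/* Add (term, doc) to the index, creating the posting list on first use. */
static void add(const char *term, int doc)
{
    for (int i = 0; i < nterms; i++)
        if (strcmp(index_[i].term, term) == 0) {
            index_[i].docs[index_[i].ndocs++] = doc;
            return;
        }
    strcpy(index_[nterms].term, term);
    index_[nterms].docs[0] = doc;
    index_[nterms].ndocs = 1;
    nterms++;
}

int main(void)
{
    add("postgres", 1); add("index", 1);
    add("postgres", 2); add("split", 2);

    /* Query: which documents contain "postgres"? */
    for (int i = 0; i < nterms; i++)
        if (strcmp(index_[i].term, "postgres") == 0)
            for (int j = 0; j < index_[i].ndocs; j++)
                printf("doc %d\n", index_[i].docs[j]);
    return 0;
}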
Ron <[EMAIL PROTECTED]> writes:
> At 07:23 PM 1/20/2006, Tom Lane wrote:
>> Well, we're trying to split an index page that's gotten full into
>> two index pages, preferably with approximately equal numbers of items in
>> each new page (this isn't a hard requirement though).
> Maybe we are overthinking this. What happens if we do the obvious
> and just make a new page and move the "last" n/2 items on the full
> page to the new page?
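A sketch of that "obvious" split, with a toy page layout standing in for
the real GiST structures:

#include <stdio.h>

#define PAGE_CAP 8

/* Toy page: just an array of item keys. */
struct page {
    int items[PAGE_CAP];
    int nitems;
};

/* Ron's suggestion: move the "last" n/2 items of a full page to a new one. */
static void naive_split(struct page *full, struct page *fresh)
{
    int keep = full->nitems / 2;
    fresh->nitems = 0;
    for (int i = keep; i < full->nitems; i++)
        fresh->items[fresh->nitems++] = full->items[i];
    full->nitems = keep;
}

int main(void)
{
    struct page a = { {1, 2, 3, 4, 5, 6, 7, 8}, 8 }, b;
    naive_split(&a, &b);
    printf("left has %d items, right has %d items\n", a.nitems, b.nitems);
    return 0;
}

It costs nothing to compute, but it ignores item similarity entirely,
which is exactly what the rest of this thread is worried about.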
gevel is available from
http://www.sai.msu.su/~megera/postgres/gist/
Oleg
On Sat, 21 Jan 2006, Martijn van Oosterhout wrote:
On Sat, Jan 21, 2006 at 04:29:13PM +0300, Oleg Bartunov wrote:
Martijn, you're right! We want not only to split the page into very
different parts, but also not to increase
On Sat, 21 Jan 2006, Martijn van Oosterhout wrote:
On Sat, Jan 21, 2006 at 04:29:13PM +0300, Oleg Bartunov wrote:
Martijn, you're right! We want not only to split the page into very
different parts, but also not to increase the number of set bits in
the resulting signatures, which are the union (OR'ed) of all signatures
Perhaps a different approach to this problem is called for:
_Managing Gigabytes: Compressing and Indexing Documents and Images_ 2ed
Witten, Moffat, Bell
ISBN 1-55860-570-3
This is a VERY good book on the subject.
I'd also suggest looking at the publicly available work on indexing
and searching
On Sat, 21 Jan 2006, Ron wrote:
Perhaps a different approach to this problem is called for:
_Managing Gigabytes: Compressing and Indexing Documents and Images_ 2ed
Witten, Moffat, Bell
ISBN 1-55860-570-3
This is a VERY good book on the subject.
I'd also suggest looking at the publicly available
On Sat, Jan 21, 2006 at 04:29:13PM +0300, Oleg Bartunov wrote:
> Martijn, you're right! We want not only to split the page into very
> different parts, but also not to increase the number of set bits in
> the resulting signatures, which are the union (OR'ed) of all signatures
> in the part. We need not only fast index c
I have worked around the issue by using 2 separate queries with the LIMIT
construct.
LogSN and create_time are indeed directly correlated, both monotonically
increasing, occasionally with multiple LogSNs having the same create_time.
What puzzles me is why the query with COUNT, MIN, MAX uses id
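For reference, that workaround relies on LogSN and create_time being
monotonically correlated: min and max of LogSN can be fetched with two
index-friendly ORDER BY ... LIMIT 1 queries instead of the aggregates.
A minimal libpq sketch, reusing the table and column names from the query
quoted later in this thread (the connection string is hypothetical):

#include <stdio.h>
#include <libpq-fe.h>

/* Run one query and print the single value it returns. */
static void one(PGconn *conn, const char *sql)
{
    PGresult *res = PQexec(conn, sql);
    if (PQresultStatus(res) == PGRES_TUPLES_OK && PQntuples(res) == 1)
        printf("%s -> %s\n", sql, PQgetvalue(res, 0, 0));
    else
        fprintf(stderr, "query failed: %s", PQerrorMessage(conn));
    PQclear(res);
}

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=esdt");  /* hypothetical connection */
    if (PQstatus(conn) != CONNECTION_OK)
        return 1;

    /* Index-friendly substitutes for min(LogSN) and max(LogSN);
     * valid only because LogSN rises with create_time, as noted above. */
    one(conn, "SELECT LogSN FROM Log WHERE create_time < '2005/10/19' "
              "ORDER BY create_time ASC LIMIT 1");
    one(conn, "SELECT LogSN FROM Log WHERE create_time < '2005/10/19' "
              "ORDER BY create_time DESC LIMIT 1");

    PQfinish(conn);
    return 0;
}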
On Sat, 21 Jan 2006, Ron wrote:
At 07:23 PM 1/20/2006, Tom Lane wrote:
"Steinar H. Gunderson" <[EMAIL PROTECTED]> writes:
> On Fri, Jan 20, 2006 at 06:52:37PM -0500, Tom Lane wrote:
>> It's also worth considering that the entire approach is a heuristic,
>> really --- getting the furthest-apart
On Sat, 21 Jan 2006, Martijn van Oosterhout wrote:
However, IMHO, this algorithm is optimising the wrong thing. It
shouldn't be trying to split into sets that are far apart, it should be
trying to split into sets that minimize the number of set bits (i.e.
distance from zero), since that's what's w
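A sketch of that cost function: OR together the signatures assigned to
each half and count the set bits, so a candidate split can be scored
directly by it. The 64-bit signature width is a hypothetical stand-in for
the real signature size:

#include <stdint.h>
#include <stdio.h>

/* Count set bits in a 64-bit signature (GCC/Clang builtin). */
static int popcount64(uint64_t x) { return __builtin_popcountll(x); }

/* Union-signature of a set of items: OR of all member signatures. */
static uint64_t union_sig(const uint64_t *sigs, int n)
{
    uint64_t u = 0;
    for (int i = 0; i < n; i++)
        u |= sigs[i];
    return u;
}

/* Score a split as the total set bits in the two union signatures;
 * lower is better, per the "distance from zero" criterion. */
static int split_cost(const uint64_t *left, int nl,
                      const uint64_t *right, int nr)
{
    return popcount64(union_sig(left, nl)) + popcount64(union_sig(right, nr));
}

int main(void)
{
    uint64_t left[]  = { 0x0fULL, 0x03ULL };  /* similar items: few bits */
    uint64_t right[] = { 0xf0ULL, 0xc0ULL };
    printf("cost = %d\n", split_cost(left, 2, right, 2));
    return 0;
}

Fewer set bits in the union signatures means fewer false matches during
index scans, which is why this is the quantity worth minimizing.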
At 01:20 06/01/21, Jim C. Nasby wrote:
BTW, these queries below are meaningless; they are not equivalent to
min(logsn).
> esdt=> explain analyze select LogSN from Log where create_time <
> '2005/10/19' order by create_time limit 1;
Thank you for pointing it out.
It actually returns the min(logsn)
On Fri, Jan 20, 2006 at 05:50:36PM -0500, Tom Lane wrote:
> Yeah, but fetching from a small constant table is pretty quick too;
> I doubt it's worth getting involved in machine-specific assembly code
> for this. I'm much more interested in the idea of improving the
> furthest-distance algorithm in
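Assuming the operation in question is counting set bits in signatures,
the "small constant table" approach is the classic byte-wise lookup; a
sketch of it next to the compiler builtin that would otherwise tempt one
toward machine-specific code:

#include <stdint.h>
#include <stdio.h>

/* 256-entry table: number of set bits in each possible byte value. */
static uint8_t nbits[256];

static void init_nbits(void)
{
    for (int i = 1; i < 256; i++)
        nbits[i] = (uint8_t)((i & 1) + nbits[i / 2]);
}

/* Table-driven popcount: one lookup per byte, no machine-specific code. */
static int popcount_lut(uint64_t x)
{
    int n = 0;
    for (int i = 0; i < 8; i++) {
        n += nbits[x & 0xff];
        x >>= 8;
    }
    return n;
}

int main(void)
{
    init_nbits();
    uint64_t sig = 0xdeadbeefULL;
    /* The two should agree; the builtin may compile to a native instruction. */
    printf("lut=%d builtin=%d\n", popcount_lut(sig), __builtin_popcountll(sig));
    return 0;
}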
At 07:23 PM 1/20/2006, Tom Lane wrote:
"Steinar H. Gunderson" <[EMAIL PROTECTED]> writes:
> On Fri, Jan 20, 2006 at 06:52:37PM -0500, Tom Lane wrote:
>> It's also worth considering that the entire approach is a heuristic,
>> really --- getting the furthest-apart pair of seeds doesn't guarantee
>>
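For readers without the source at hand, the heuristic under discussion
picks as seeds the two entries whose keys are furthest apart, via an
O(n^2) scan of all pairs; a sketch using Hamming distance on 64-bit
signatures (the width again hypothetical):

#include <stdint.h>
#include <stdio.h>

/* Hamming distance between two signatures: set bits in their XOR. */
static int hamming(uint64_t a, uint64_t b)
{
    return __builtin_popcountll(a ^ b);
}

/* O(n^2) scan for the furthest-apart pair of seeds; returns their indices. */
static void pick_seeds(const uint64_t *sigs, int n, int *s1, int *s2)
{
    int best = -1;
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++) {
            int d = hamming(sigs[i], sigs[j]);
            if (d > best) {
                best = d;
                *s1 = i;
                *s2 = j;
            }
        }
}

int main(void)
{
    uint64_t sigs[] = { 0x0f, 0x0e, 0xf0, 0xf1 };
    int a, b;
    pick_seeds(sigs, 4, &a, &b);
    printf("seeds: %d and %d\n", a, b);
    return 0;
}

As Tom notes, this is a heuristic: the furthest-apart pair is a starting
point, not a guarantee of a good final split.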
Jim C. Nasby wrote:
If you're dealing with something that's performance critical you're not
going to be constantly re-connecting anyway, so I don't see what the
issue is.
I really missed your point.
In a multi-user environment where each user uses its own connection
for identification purposes,
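One way to reconcile the two positions: keep a long-lived connection and
switch the effective user per request with SET SESSION AUTHORIZATION
instead of reconnecting. A libpq sketch, assuming the pooled connection
belongs to a superuser and that the role names (here alice/bob) are
hypothetical:

#include <stdio.h>
#include <libpq-fe.h>

/* Switch the effective user on an existing connection, run one statement,
 * then switch back.  Role names are assumed trusted (not user input). */
static void run_as(PGconn *conn, const char *role, const char *sql)
{
    char buf[128];
    snprintf(buf, sizeof(buf), "SET SESSION AUTHORIZATION %s", role);
    PQclear(PQexec(conn, buf));
    PQclear(PQexec(conn, sql));
    PQclear(PQexec(conn, "RESET SESSION AUTHORIZATION"));
}

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=app");  /* hypothetical pooled connection */
    if (PQstatus(conn) != CONNECTION_OK)
        return 1;

    /* Each "user" gets its identity without the cost of a fresh connection. */
    run_as(conn, "alice", "INSERT INTO audit_log(msg) VALUES ('from alice')");
    run_as(conn, "bob",   "INSERT INTO audit_log(msg) VALUES ('from bob')");

    PQfinish(conn);
    return 0;
}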