that nature if they need to do substrings (trigrams can also be
used to provide "Did you mean" results).
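To make that concrete, here is a minimal sketch using the pg_trgm contrib module (table and column names are illustrative; on versions before 9.1 you would load the contrib SQL script instead of CREATE EXTENSION, and LIKE acceleration via the trigram index is 9.1+):

```sql
-- assumes a table "words" with a text column "word" (illustrative names)
CREATE EXTENSION pg_trgm;
CREATE INDEX words_word_trgm ON words USING gin (word gin_trgm_ops);

-- substring search can use the trigram index (9.1 and later):
SELECT word FROM words WHERE word LIKE '%ostgre%';

-- "Did you mean" style lookup: closest matches by trigram similarity:
SELECT word, similarity(word, 'postgrse') AS sim
FROM words
WHERE word % 'postgrse'
ORDER BY sim DESC
LIMIT 5;
```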
--
Jeff Trout <[EMAIL PROTECTED]>
www.dellsmartexitin.com
www.stuarthamm.net
_block_level is on, to track_counts.
Now it works for PG 8.3.
Hey, neat. I wrote that years ago and it's been quite dusty.
It worked then, so I had nothing else to add to it.
I haven't migrated to 8.3 yet, so I didn't know it broke.
Can you send me a diff?
dir by default (with no opt to change it during install).
Thanks for any info on the suggested update procedure.
Regards,
Jeff
ridiculously
low. (120 minutes?)
Just a suggestion... Good luck!
Jeff
sed as the arg is correct)
Any perl gurus have any further info on this? It was a bit surprising
to encounter this. I'm guessing it has something to do with variable
scope and the fact plperl funcs are just anonymous functions.
Stuffing it in $_SHARED seems to work fine and ends up with res
On Feb 24, 2010, at 8:44 AM, Jeff wrote:
Notice on the second run the plan is still "beef" when it was set to
49abf0 (which when passed as the arg is correct)
Any perl gurus have any further info on this? It was a bit
surprising to encounter this. I'm guessing it has
On Feb 24, 2010, at 5:10 PM, Tom Lane wrote:
Richard Huxton writes:
On 24/02/10 21:34, Tom Lane wrote:
Hmm. Jeff found some relevant material on perlref. Should that
link be
added? Should the link(s) be more specific than telling you to
read the
whole d*mn man page? Neither of those
l need to load
the schema on the slaves, but not the data). That is a BIG win for
us folks who can't take a machine down while pg_dump runs (and while
it is restored on the slave)
e the dump & when you restore on the
slave and then fire up dbmirror. Although it might work if you
install the dbmirror triggers, then dump & restore.
Slony uses the COPY interface to read/load data. This is the same
method used by pg_dump so the only throttle will be the network.
onnect to PG on an alternate port and that may irritate some.
In a web hosting (multi-user) environment it may make more sense, but
you could also just give each user his own db and only allow sameuser
access to each db via pg_hba.conf.
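A hypothetical pg_hba.conf fragment for that per-user setup; the sameuser keyword restricts each user to the database matching their username (the address range and auth method here are placeholders):

```
# TYPE  DATABASE   USER  ADDRESS         METHOD
host    sameuser   all   192.168.0.0/24  md5
```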
ething else' end
as blah, public.Z_documents as d ...
destination_table set column_a = (select column_b from
source_table where blah blah blah ) where blah blah blah
You can reference the values in destination_table inside the select.
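A minimal sketch of that UPDATE shape (all table and column names are illustrative; the correlated subquery can reference the destination table's columns):

```sql
-- set column_a from the matching row in source_table;
-- the WHERE EXISTS guard avoids nulling rows with no match
UPDATE destination_table d
SET column_a = (SELECT s.column_b
                FROM source_table s
                WHERE s.id = d.id)
WHERE EXISTS (SELECT 1 FROM source_table s WHERE s.id = d.id);
```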
supported for pqread().
Well, it SHOULD support text that big. Was there anything in the log?
Also, you may want to consider bytea instead of base64 stored in text.
y who can share some real-world experience with that.
Works great over here.
Our app sits around doing 50-150 requests/sec constantly and spends
most of its time sitting there with a blank look on its face.
Just make sure you take the normal precautions you need to take with
threaded apps (no strtok, etc. )
On Jun 24, 2004, at 9:50 AM, Campano, Troy wrote:
Hello,
Is PostgreSQL’s optimizer cost based or syntactical based?
Cost based.
into the code for the pgSQL library, and as I understand it,
we can put support for user defined exceptions (something we use a lot)
within the pgSQL library by intercepting the
I'm not sure we have user-defined exceptions yet...
perhaps you could work on implementing them.
lived connections.
Given you have long-lived connections, I'm not sure if pgpool would
help.
It might; give it a whirl.
however it seems Tom found the actual problem.
On Aug 31, 2004, at 10:19 AM, David Parker wrote:
I have also been unable to reach archives.
The archives have been down for days.
It is quite unfortunate.
I'd suggest using Google's cache feature until the problems can be
resolved.
I am however getting list mail...
t_mem settings set to in
postgresql.conf?
Remember in Linux top will include how much shared memory an app has
touched in its SIZE. (But you can also look at the shared column to see
how much of the size is shared)
nguages?
>
PL/PGSQL is probably the most popular one. I use it quite a bit and it works like a
champ. It is quite comparable to Oracle's PL/SQL (hence the name).
Check it out in the docs on http://www.postgresql.org/
How many rows should that return?
[explain analyze will tell you that]
and while that runs is your disk thrashing? vmstat's bi/bo columns will
tell you.
7.1 is quite old, but I do understand the pain of migrating to 7.3 [or
.4beta] with huge db's
looked on the website and haven't been able to find
> them.
>
http://www.postgresql.org/docs/7.3/static/sql-commands.html
Got there from clicking documentation, 7.3 and scrolled down till I saw
it.
ow.
you know. we should do something about that :)
>  Column  |  Type   | Modifiers
> ---------+---------+------------------------------------------------------
>  id      | integer | not null default nextval('public.forn_id_seq'::text)
>  forn_id | integer |
>  descrip | text    |
>
Why not make forn_id a sequence as well?
then you simply c
oice but to use
locking.
Don't use max() - it isn't fast on PG. Use:
select forn_id from thetable order by forn_id desc limit 1;
You'll need an index on forn_id or performance will suffer.
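A sketch of that approach (names taken from the thread; note that PostgreSQL 8.1 and later can use an index for min()/max() directly, so this workaround matters mostly on older versions):

```sql
-- the index that makes the LIMIT 1 scan cheap
CREATE INDEX thetable_forn_id ON thetable (forn_id);

-- index-friendly replacement for SELECT max(forn_id) FROM thetable
SELECT forn_id FROM thetable ORDER BY forn_id DESC LIMIT 1;
```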
user id as a foreign key in the address table?
Thanks!
Jeff
cter in the psql prompt.
The latter does not discard semi-colons after the prompt.
PostgreSQL 7.0.2 on i686-pc-linux-gnu, compiled by gcc
egcs-2.91.66
Peter T Mount wrote:
>
> Quoting Jeff <[EMAIL PROTECTED]>:
>
I'm getting this when accessing the CGI:
The specified CGI application misbehaved by not returning a complete set of
HTTP headers. The headers it did return are:
Here's the cgi code:
int main()
{
    printf("Content-Type: %s\n\n", contentType);
    conn = PQconnectdb("dbname=template1");
    if (PQstat
I've tried fruitlessly to get cygwin 1.1.8 working with postgresql 7.03.
Has anybody out there done it?
Where can I get the latest postgresql 7.1?
Thank you
Hi,
How can I programmatically check if a database exists using c lib?
Thank you
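One hedged approach from C: run a catalog query through libpq's PQexec() and check whether it returns any rows ('mydb' is a placeholder database name):

```sql
-- returns one row if the database exists, zero rows otherwise
SELECT 1 FROM pg_database WHERE datname = 'mydb';
```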
Also how to programmatically check if a table exists in the database?
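A similar catalog query works for tables; schema and table names here are illustrative:

```sql
-- returns one row if an ordinary table "public.mytable" exists
SELECT 1
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relname = 'mytable'
  AND n.nspname = 'public'
  AND c.relkind = 'r';
```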
Hi,
Can someone show me the best way to back up a table or copy a table?
Thanks
Hi,
Is it possible to have PostgreSQL auto-assign a unique key to each
new record that is added to the database?
Thanks
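The usual answer is a SERIAL column (a sequence-backed integer) used as the primary key; table and column names below are illustrative:

```sql
CREATE TABLE employees (
    id   SERIAL PRIMARY KEY,  -- auto-assigned from an implicit sequence
    name text
);

-- omit "id" and it is filled in automatically:
INSERT INTO employees (name) VALUES ('Alice');
```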
Hi,
I've accidentally deleted the file employees.key.seq, however there's still an
entry in pg_class.
It would not allow me to create employees.key.seq anymore because it thinks
it already exists.
How can I delete it from pg_class?
Thanks
Hi,
When there's only one record to insert/update, the server CGI works fine.
When I try to insert/update multiple records the CGI keeps running and never
ends.
Here's the code snippet:
while (ip)
{
    strcpy(buffer, ip->Item);
    ip = ip->next;
    while (strstr(buffer,
=
> 2937136;
>
But the thing being reported in the lock wait log message is the relation
OID itself, not the relfilenode.
You would use your query if you get the number by watching the file names
in the file system as they grow,
but not if you get it from the log message.
Cheers,
Jeff
gt; AESTLOG: redirecting log output to logging collector process
> 2596 5533cb8c.a24 2015-04-20 01:36:44 AESTHINT: Future log output will
appear in directory "pg_log"
However, it appears that pg_upgrade does honor the logging-collector
related settings from the config file (should it? I can see arguments both
ways) and so you have to look in the pg_log directory to figure out
what the actual problem is.
Cheers,
Jeff
mail me at j...@pipelinedb.com if your company might be
interested to participate in the beta.
Thanks,
Jeff
--
Jeff Ferguson
President and Co-Founder
<http://www.pipelinedb.com>
eeded. Part of that is using existing
> functions in new functions/classes. In fact in the Postgres source I see
> this in many places. Now it is entirely possible I missed a memo, so I am
> open to a more detailed explanation of the inefficiencies involved.
>
The Postgres source is written in C, not in plpgsql. C has a good
optimizing compiler and plpgsql doesn't.
Cheers,
Jeff
see enough
activity to pass autovacuum_vacuum_scale_factor then the vacuum that occurs
will get promoted to be a full scan.
If they never see that amount of activity, then the tables will continue to
age until autovacuum_freeze_max_age, at which point a vacuum will get
launched specifically for the purpose of advancing relfrozenxid.
Cheers,
Jeff
On Wed, May 6, 2015 at 10:51 AM, Steve Kehlet
wrote:
> On Wed, May 6, 2015 at 9:46 AM Jeff Janes wrote:
>
>> vacuum_freeze_table_age controls when it promotes a vacuum *which is
>> already going to occur* so that it scans the whole table. It doesn't
>> special
. Any questions, please feel free to ask.
Are you applying any load to master? If not, nothing needs to be
replicated so there is nothing to happen. If this is just for testing, run
pgbench -T 3600.
Cheers, Jeff
ecause each backend gets its own address space. The total
amount of memory used remains the same, but the 32 bit limit doesn't apply
to the aggregate, only to each process.
Cheers,
Jeff
dn't fix the problem, as the change to ShmemVariableCache has to
be transactional with the change to pg_database. So it would have to be
WAL logged inside the commit record or any transaction which changes
pg_database.
Cheers,
Jeff
_timing numbers and
their rusage numbers into the server log, rather than just their wall-time
durations as it does now. I'm not sure how that idea is going to work out
yet, though. Anyway, it wouldn't be until version 9.6 at minimum.
Also, for temp file, see log_temp_files config parameter.
Cheers,
Jeff
nd then make a
functional index, for example. It would require you to write the query
differently. Whether it would be a btree index or a gin index would depend
on whether you can have more than one IP address in a msg.
Cheers,
Jeff
On Mon, Jun 22, 2015 at 10:39 AM, Christian Ramseyer wrote:
> On 22/06/15 19:00, Jeff Janes wrote:
>
> >
> >
> > A typical query on this table looks like this:
> >
> > explain analyze
> > select log_date, host, msg
> > from logs
is needed because, well, that is just how udt works.
Cheers,
Jeff
t seems like you are
just going to confuse yourself about what is a standby meant for fail over,
and what is a clone meant for testing. With possibly disastrous
consequences.
Cheers,
Jeff
these changes on the main database (master).
>
Sounds like you are trying to use a bulldozer to change a lightbulb.
Improving queries (including the effect running some queries has on the
entire system) starts with "EXPLAIN (ANALYZE, BUFFERS)", not with
multimaster replication.
Cheers,
Jeff
etty easy to do, as long as your client allows you to configure which port
number it connects to. If you really want fsync on for one database and
off for another one, put each database in a different cluster.
Cheers,
Jeff
hen take
postgresql out of the loop entirely by installing that certificate to be
used by apache (for instance) instead of by postgresql. Then use s_client
against apache.
Cheers,
Jeff
needs to be written out without the benefit of write-combining or
sequential writes.
However, this is usually only a concern for bulk loading operations, not
real time operations. If you are generating money-equivalent vouchers at a
rate of thousands per second, hopefully you can afford some nice hardware
to run it on.
Cheers,
Jeff
trgm_ops);
But, LIKE '%10.10%' is going to be tough for a pg_trgm index to help with,
unless you compile your own code after removing "#define KEEPONLYALNUM"
Cheers,
Jeff
em, create a temp or
unlogged table with the same check constraints as the real table but
without the unique or foreign key constraints and see if each row inserts.
Cheers,
Jeff
erwise you
will quickly run out of memory.
Also, if there was an error, then after rolling back to the savepoint
either release it, or refrain from starting a new one at the beginning of
the next insert.
Cheers,
Jeff
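A sketch of the per-row savepoint pattern described above (table name, values, and the savepoint name are illustrative):

```sql
BEGIN;

-- for each row to load:
SAVEPOINT row_attempt;
INSERT INTO target_table VALUES (1, 'a');
-- if the INSERT errors: ROLLBACK TO SAVEPOINT row_attempt;
-- on success, release it so savepoints don't pile up and eat memory:
RELEASE SAVEPOINT row_attempt;

COMMIT;
```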
te the spirit of the benchmark, as
presumably you are expected to inspect the results of the SELECT statement
before proceeding with the rest of the transaction.
Or you could write a custom benchmark which more closely resembles whatever
your true workload will be.
Cheers,
Jeff
On Mon, Jul 27, 2015 at 3:19 AM, Chris Withers
wrote:
> On 24/07/2015 22:51, Jeff Janes wrote:
>
> starting vacuum...end.
>
>> transaction type: TPC-B (sort of)
>> scaling factor: 1
>>
>
> This is your problem. There is only one row in the pgbench_branch
&
after.
Is there anything out there like this? This would be for 9.4.
I'm willing to put the query text, and its replacement, directly into the
extension source code and compile it, but of course something more flexible
would be ideal.
Thanks,
Jeff
> On Thu, Aug 13, 2015 at 1:37 PM, Melvin Davidson
wrote:
> On Thu, Aug 13, 2015 at 3:49 PM, Jeff Janes wrote:
>
>> I am looking for an extension or a technique that will allow me to
>> intercept a query by the exact query text, and replace that query with a
>&g
On Thu, Aug 13, 2015 at 2:02 PM, Adrian Klaver
wrote:
> On 08/13/2015 12:49 PM, Jeff Janes wrote:
>
>> I am looking for an extension or a technique that will allow me to
>> intercept a query by the exact query text, and replace that query with a
>> different one.
>>
gured (I also
> started from defaults and tuned up as best I could).
>
> 32 14.982111
> 64 14.894859
> 128 14.277631
> 256 13.679516
> 512 13.679516
>
Pick the query that dropped in performance the most, then run it with
"explain (analyze, buffers)" and with track_io_timing turned on, and
compare this between the servers. Did the plan change, or just the time?
Cheers,
Jeff
at other problems am I setting myself up for by doing this?
Cheers,
Jeff
ze_threshold" to control the autovacuum process for that
table. Note that a manual ANALYZE will *not* clear the pending list, it
has to be an autoanalyze. (Manual VACUUM will clear the pending list, but
you might have trouble getting manual VACUUM to complete fast enough)
> Any other ideas or suggestions would be appreciated, thanks!
>
The bottleneck is generally a lot of random IO. Can you put the gin index
on SSD?
Cheers,
Jeff
has 50 keys, that would be 64,000 keys. If the keys don't
have a lot of internal overlap inserting them would take 6.4 minutes with
10,000 rpm drives, assuming all the non-leaf pages are already in memory.
There is no pre-fetching (not even with multiple backends trying to do the
same work at the same time), so a RAID won't speed things up.
Cheers,
Jeff
On Fri, Aug 28, 2015 at 11:06 AM, Steve Kehlet
wrote:
> On Fri, Aug 28, 2015 at 10:42 AM Jeff Janes wrote:
>
>
>
>> Vacuum is overkill (and can be extremely slow to run a large gin index),
>> you just need to get it to autoanalyze by changing
On Fri, Aug 28, 2015 at 11:06 AM, Steve Kehlet
wrote:
> On Fri, Aug 28, 2015 at 10:42 AM Jeff Janes wrote:
>
>>
> Vacuum is overkill (and can be extremely slow to run a large gin index),
>> you just need to get it to autoanalyze by changing the
On Fri, Aug 28, 2015 at 12:25 PM, Steve Kehlet
wrote:
> On Fri, Aug 28, 2015 at 12:10 PM Jeff Janes wrote:
>
>> Did you change the system-wide autovacuum_analyze_scale_factor? If so,
>> don't do that. You can use a table's storage parameters to set a custom
>&g
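The per-table storage-parameter override mentioned above might look like this (table name and threshold values are illustrative, not a recommendation):

```sql
-- trigger autoanalyze on this table by row count rather than fraction,
-- without touching the system-wide autovacuum_analyze_scale_factor
ALTER TABLE mytable SET (
    autovacuum_analyze_scale_factor = 0,
    autovacuum_analyze_threshold = 5000
);
```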
t aware that the space is free, which is the
root of the problem. The newest version of pageinspect has
gin_page_opaque_info which will show those pages as being deleted, but that
version is not yet released.
Cheers,
Jeff
e total amount of data being sorted,
but on the size of the array of row headers, and so limits the number of
rows, regardless of the size of the rows.
Cheers,
Jeff
k if it was a WAL issue the OP could never get the server to
> start and get to the point the query failed on a single table and column.
With pg_basebackup, that is probably the case, as it either doesn't copy
xlog at all, or if it does it makes sure it is complete. But with tar, you
have no such protection.
> All that being said, I think the OP would be better served by
> pg_basebackup:
>
> http://www.postgresql.org/docs/9.4/static/app-pgbasebackup.html
Yes, indeed.
Cheers,
Jeff
nd of token
or time stamp, to make it easier to detect lost work. In which case NULL
would mean not yet sent, so the partial index would be "where
sent_for_processing is null".
Cheers,
Jeff
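A sketch of that partial index (table and column names are illustrative):

```sql
-- indexes only the not-yet-sent rows, so the queue scan stays small
CREATE INDEX queue_pending ON queue (id)
WHERE sent_for_processing IS NULL;
```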
id1 for transaction
B, I'll create a deadlock and the deadlock detector will kill one of
the transactions.
I'd really love to learn:
1. Why the xmax for foo_id1 goes from 696 to 1 and what does that
mean?
2. How does transaction A know it needs to take a ShareLock on
transaction B?
3. What is a virtualtransaction and what do its numerator and denominator
mean?
Thanks,
Jeff
On Tue, Sep 22, 2015 at 10:44 PM, Alvaro Herrera
wrote:
> Jeff Dik wrote:
>
> > I'd really love to learn:
> >
> > 1. Why the xmax for foo_id1 goes from 696 to 1 and what does that
> >mean?
>
> When two transactions want to lock the same row, the xmax
On Wed, Sep 23, 2015 at 11:26 AM, Alvaro Herrera
wrote:
> Jeff Dik wrote:
>
> > Is there any way to inspect a multixact via psql to see what transaction
> ID
> > values it has? I wasn't able to find anything while searching for an
> hour
> &
turns text AS $$
SELECT encode(public.digest($1, 'sha1'), 'hex')
$$ LANGUAGE SQL STRICT IMMUTABLE;
It is a good idea to do that anyway.
Cheers,
Jeff
ry or other public repositories?
Cheers,
Jeff
k your best bet is to do a subquery against the unaggregated table.
select * from aggregated a where exists
(select 1 from unaggregated ua where a.lognum=ua.lognum and flightnum
like '8%')
This is a common problem. If you find a better solution, I'd love to hear
it!
Cheers,
Jeff
On Mon, Oct 12, 2015 at 11:39 AM, Tom Lane wrote:
> Jeff Janes writes:
> > On Mon, Oct 12, 2015 at 10:58 AM, Israel Brewster >
> > wrote:
> >> My first thought was to do something like this:
> >>
> >> SELECT * FROM (SELECT lognum,array_agg(fligh
nough discarded hard drive, or any number of other
things that can get you an offline copy of some (or all) of the data, but
doesn't give you live access to the running database (until you hack the
passwords)
Cheers,
Jeff
On Wed, Oct 14, 2015 at 5:49 PM, Michael Paquier
wrote:
> On Thu, Oct 15, 2015 at 7:19 AM, Jeff Janes wrote:
> > On Wed, Oct 14, 2015 at 1:41 PM, John R Pierce
> wrote:
> >>
> >> On 10/14/2015 1:31 PM, Quiroga, Damian wrote:
> >>
> >>
> >
On Fri, Oct 9, 2015 at 1:36 AM, Albe Laurenz
wrote:
> Jeff Janes wrote:
> > I am facing a scenario where I have different version of an extension,
> say 1.0 and 2.0, which have
> > some different functionality between them (so not merely a bug fix), so
> people might want t
mp_test(fld_1);
> CREATE INDEX
>
I think he means more like:
create temporary table temp_test(id int, fld_1 varchar);
create temporary index on permanent_table (fld_1);
select something from temp_test join permanent_table using (fld_1) where
a=b;
select something_else from temp_test join permanent_table using (fld_1)
where c=d;
Cheers,
Jeff
f
copy, but you can't do that within pgAdmin.
Cheers,
Jeff
ffers) select o_count from
Cheers,
Jeff
>
> cost only 112.754 ms . the High load time log is :
>
> 2015-10-28 00:00:17.177 CST "SELECT",2015-10-23 19:30:20
> CST,276/59546142,0,LOG,0,"duration: 3300.237 ms execute :
> SELECT o_count FROM
> tshow.p_hpart_getcount_intitle($1)&quo
ening every 5 minutes as
your conf has not changed checkpoint_timeout away from the default.
Since you have log_checkpoints on, what do you see in the log files
about how often they occur, and what the checkpoint write time, sync
time, etc. are?
Cheers,
Jeff
> 84.88 2.76 4.14
> 06:35:01 PM dev8-17 15.02 0.00 3994.82 265.95 1.27
> 84.88 2.76 4.14
I'm no expert in `sar -d`, but isn't 84.88 pretty high for await?
> As far as the checkpoint goes, it does happen every 5 minutes and takes
> abou
gor only needs &&, @> and <@ then he can just use the builtin gin
opclass for those. But he can't get @@ and ~~ from the built-ins.
Cheers,
Jeff
t sometimes refrains from setting hint bits as aggressively
as it could. I would not think that Slony would have that problem,
though.
Cheers,
Jeff
On Sat, May 28, 2016 at 11:32 AM, hubert depesz lubaczewski
wrote:
> On Sat, May 28, 2016 at 10:32:15AM -0700, Jeff Janes wrote:
>> If that wasn't informative, I'd attach to one of the processes with
>> the gdb debugger and get a backtrace. (You might want to do tha
s there another way to fix this?
It sounds like you did not include pg_xlog in your rsync. What you
have done is basically a cold backup. Cold backups must include
pg_xlog, at least if you want them to work without WAL archival. If
you look farther up in the log, it should tell y
the pg_xlog directory over to the slave.
I don’t see any errors on the master log file, or any other messages on
the slave’s. What happened? How can this be prevented in the future?
Thanks,
Jeff
ld be very difficult. The project has kind of painted
itself into a corner on that.
If it were easy, I doubt we would have added the % operator with the
ugly set_limit() wart in the first place (although I was not around at
the time that was done--maybe there were other considerations).
Cheers,
On Fri, Jun 3, 2016 at 1:02 PM, Tom Lane wrote:
> Jeff Janes writes:
>> On Fri, Jun 3, 2016 at 12:13 PM, Greg Navis wrote:
>>> I'm curious ... would it be difficult to modify PostgreSQL so that it'd use
>>> the index for `similarity(lhs, rhs) >= show_li
rry around the baggage of new operators and types
they have no interest in using, and vice versa. True, we did just add
several new functions and operators to pg_trgm that many people will
have no interest in, so maybe that is not a big deal.
Cheers,
Jeff
eful, and I am more familiar with it. YMMV of course.
Once you do that, you will probably start getting errors from the
gtrgm_consistent C function (if not others in the list of functions
first) because it is being asked to evaluate a strategy it doesn't
understand. So then the next step is to teac
On Fri, Jun 10, 2016 at 9:20 AM, Jeff Janes wrote:
> On Thu, Jun 9, 2016 at 1:57 AM, Greg Navis wrote:
>> Artur, no worries, I'm not writing any code ;-)
>>
>> I did the following:
>>
>> CREATE TYPE trgm_match AS (match TEXT, threshold NUMERIC);
>
>
lue here, and how does
> that value affect the result?
>
This is essentially identical to ORDER BY ABS(sent -
'2000-01-01'::TIMESTAMP); except it can use the index.
So maybe pick a constant outside the range of possible values, and use that
as one argument to <->.
Cheers,
Jeff
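A sketch of that index-assisted ordering, assuming the btree_gist contrib module (9.1 or later), which provides the <-> distance operator for timestamps; table and column names are illustrative:

```sql
CREATE EXTENSION btree_gist;
CREATE INDEX msgs_sent_gist ON msgs USING gist (sent);

-- equivalent to ORDER BY ABS(sent - '2000-01-01'::timestamp),
-- but able to walk the index in distance order:
SELECT * FROM msgs
ORDER BY sent <-> '2000-01-01'::timestamp
LIMIT 10;
```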