tring($radius));
Does anyone have any ideas on how I can get this to work? I'm not
sure what is wrong, since it doesn't seem like I need to create a
distance column and when I do create one, I get this:
Invalid query: ERROR: column "aaafacilities.latitude" must appear in
t
It's the whole query as far as I can tell. The app takes input from
the user --- the user enters an address and chooses a radius ("show me
all facilities within 5 miles of this address") and then the latitude
and longitude of the address and the radius is passed into the query
so that the database
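For reference, a common way to express this kind of radius search is the earthdistance extension; computing the distance directly in SELECT/WHERE (no aggregate) also avoids the "must appear in GROUP BY" error. This is only a sketch, with table and column names assumed from the error message and :lat/:lon/:radius as placeholders for the user's input:

```sql
-- Requires the cube and earthdistance extensions:
CREATE EXTENSION IF NOT EXISTS cube;
CREATE EXTENSION IF NOT EXISTS earthdistance;

-- Find facilities within :radius miles of (:lat, :lon).
-- earth_distance() returns meters; 1 mile = 1609.344 m.
SELECT f.id,
       f.latitude,
       f.longitude,
       earth_distance(ll_to_earth(f.latitude, f.longitude),
                      ll_to_earth(:lat, :lon)) / 1609.344 AS distance_miles
FROM aaafacilities f
WHERE earth_distance(ll_to_earth(f.latitude, f.longitude),
                     ll_to_earth(:lat, :lon)) <= :radius * 1609.344
ORDER BY distance_miles;
```

Since nothing is aggregated, no GROUP BY is needed; the distance is just a computed column.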
Hi everyone,
I posted to this forum once before and was able to receive help.
Thanks again!
I'm trying to implement full text search capabilities. Basically, I
have a very simple "data catalog" type of website
(http://gis.drcog.org/datacatalog), where the user can type in a word or words
to sea
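A minimal full text search of this kind usually looks like the following; the table and column names here are assumptions, not from the post:

```sql
-- Match user-supplied words against a couple of text columns.
SELECT id, title
FROM catalog_entries
WHERE to_tsvector('english', coalesce(title, '') || ' ' || coalesce(description, ''))
      @@ plainto_tsquery('english', 'search words');

-- An expression index on the exact same expression makes the search fast:
CREATE INDEX catalog_entries_fts_idx
    ON catalog_entries
    USING gin (to_tsvector('english', coalesce(title, '') || ' ' || coalesce(description, '')));
```

The expression in the index must match the expression in the query for the planner to use it.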
Just wanted to thank everyone for the input so far. I do appreciate
it!
I'm going to read through some of this tonight and see what I can do!
Thanks!
Jonathan
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
Hi!
Thanks again for the help. Just wanted to let you all know that it
seems like we had an issue with our install of Postgres 8.3 on our
development server. I installed 8.4 on my own local machine and the
indexing works as it should.
Jonathan
I'm trying to upgrade some code that powers a newsfeed type stream, and hoping
someone can offer some insight on better ways to structure some parts of the
query
The part that has me stumped right now...
There are several criteria for why something could appear in a stream. for
example, here
Sorry, I was trying to ask something very abstract as I have similar situations
on multiple groups of queries/tables (and they're all much more complex).
I'm on pg 9.3
The relevant structure is:
posting:
id
timestamp_publish
group_id__in
user_id__author
Thanks all! These point me in much better directions!
Jim Nasby's approach to selecting an expression addressed some things (SELECT
f.useraccount_id_b IS NOT NULL AS in_friends)
Ladislav Lenart's usage of the CTE is also of a different format that I've used
in the past.
I think i'll be able
On Apr 29, 2015, at 12:25 PM, Ladislav Lenart wrote:
> Could you please explain to me the error(s) in my reasoning?
Let me just flip your list in reverse... and add in some elements (marked with
a *):
posting ts context
p60 60 friend
p55 55 friend*
p54 54 friend*
p50 50
On Apr 29, 2015, at 6:50 PM, Jim Nasby wrote:
> Only because you're using UNION. Use UNION ALL instead.
The difference between "union" and "union all" was negligible. The problem was
in the subselect and the sheer size of the tables, even when we could handle it
as an index-only scan.
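For context on the UNION vs UNION ALL point being discussed: UNION has to deduplicate the combined rows (an extra sort or hash step), while UNION ALL just concatenates them. A sketch with assumed table names:

```sql
-- UNION deduplicates the combined result (extra sort/hash work):
SELECT posting_id FROM friend_postings
UNION
SELECT posting_id FROM group_postings;

-- UNION ALL simply appends the two result sets, which is cheaper
-- when duplicates are impossible or acceptable:
SELECT posting_id FROM friend_postings
UNION ALL
SELECT posting_id FROM group_postings;
```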
On Ap
I have a very simple query that is giving me some issues due to the size of the
database and the number of requests I make to it in order to compile the report
I need:
A dumbed down version of the table and query:
CREATE TABLE a_to_b (
id_a INT NOT NULL REFERENCES table_
Hoping to glean some advice from the more experienced
The major component of our application currently tracks a few dozen object
types, and the total number of objects is in the 100s Millions range. Postgres
will potentially be tracking billions of objects.
Right now the primary key for ou
Thanks for the reply.
On Oct 2, 2015, at 3:26 PM, Jim Nasby wrote:
> I'm not really following here... the size of an index is determined by the
> number of tuples in it and the average width of each tuple. So as long as
> you're using the same size of data type, 18 vs 1 sequence won't change t
On Oct 7, 2015, at 11:58 AM, john.tiger wrote:
> has anyone used postgres jsonb for holding session ? Since server side
> session is really just a piece of data, why bother with special "session"
> plugins and just use postgres to hold the data and retrieve it with psycopg2
> ? Maybe use som
I couldn't find any mention of this on the archives...
Have the project maintainers ever considered extending CREATE INDEX to support
"temporary" indexes like CREATE TEMPORARY TABLE?
When creating temporary tables for analytics/reporting, I've noticed that I
often need to create (then drop) ind
On Oct 21, 2015, at 2:59 PM, Jeff Janes wrote:
> I think he means more like:
>
> create temporary table temp_test(id int, fld_1 varchar);
> create temporary index on permanent_table (fld_1);
>
> select something from temp_test join permanent_table using (fld_1) where a=b;
> select something_else
On Oct 21, 2015, at 3:42 PM, Adrian Klaver wrote:
> I misunderstood then. The only thing I can think of is to wrap in a
> transaction, though that presents other issues with open transactions and/or
> errors in the transaction.
I just explicitly drop. The convenience of an auto-drop would be
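Since CREATE TEMPORARY INDEX doesn't exist, the explicit create/drop being described looks roughly like this (index and table names assumed):

```sql
BEGIN;
CREATE INDEX tmp_fld_1_idx ON permanent_table (fld_1);

-- ... run the analytics/reporting queries that benefit from the index ...

DROP INDEX tmp_fld_1_idx;
COMMIT;
```

Note that a plain CREATE INDEX inside a transaction locks out writes to the table until commit, and CREATE INDEX CONCURRENTLY cannot run inside a transaction block, which is part of why wrapping this in a transaction "presents other issues".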
On Oct 22, 2015, at 2:08 PM, Tom Lane wrote:
> FWIW, I don't find much attraction in the idea of building an index for
> use by a single query. There basically isn't any scenario where that's
> going to beat running a plan that doesn't require the index. The value of
> an index is generally to
On Oct 22, 2015, at 5:04 PM, Jim Nasby wrote:
>
> What % of execution time is spent creating those indexes? Or is that factored
> into the 1000%? Also, could your analysis queries be run in a REPEATABLE READ
> transaction (meaning that once the transaction starts it doesn't get any new
> data
i'm cleaning up some queries for performance, and noticed that we never use
precision beyond the second (ie, `timestamp(0)`) in our business logic.
would there be any savings in storage or performance improvements from losing
the resolution on fractional seconds, or are `timestamp(precision)` ef
On Jun 21, 2016, at 4:50 PM, Tom Lane wrote:
> Storage-wise, no. If you have a resolution spec on your columns now,
> I think dropping the resolution spec would save you a few nanoseconds per
> row insertion due to not having to apply the roundoff function. Adding
> one would certainly not impr
I have a handful of queries in the following general form that I can't seem to
optimize any further (same results on 9.3, 9.4, 9.5)
I'm wondering if anyone might have a suggestion, or if they're done.
The relevant table structure:
t_a2b
a_id INT references t_a(id)
On Jun 21, 2016, at 6:55 PM, David G. Johnston wrote:
> Aside from the name these indexes are identical...
sorry. tired eyes copy/pasting between windows and trying to 'average' out 40
similar queries.
> These two items combined reduce the desirability of diagnosing this...it
> doesn't see
On Jun 22, 2016, at 4:25 AM, Erik Gustafson wrote:
> don't you want an index on t_a2b.col_a, maybe partial where col_a=1 ?
that table has indexes on all columns. they're never referenced because the
rows are so short. this was just an example query too, col_a has 200k
variations
After a
On Jun 22, 2016, at 2:38 PM, David G. Johnston wrote:
> What query? A self-contained email would be nice.
This was the same query as in the previous email in the thread. I didn't think
to repeat it. I did include it below.
> https://www.postgresql.org/docs/9.6/static/indexes-index-only-sc
ter and
growing constantly so this would make deploying BDR a painful process - if
we had to add a connection for each existing DB and then every new DB.
Is there a way around this or are there plans to make this type of config
available?
Thanks in advance.
*Jonathan J. Eastgate*
Chief Technology O
aving potentially 200 schemas within the DB -
performance or replication wise?
Thanks in advance.
*Jonathan J. Eastgate*
Chief Technology Officer | simPRO Software Group
Ph: 1300 139 467 / +61 7 3147 8777
<http://simprogroup.com/email-signature-promo/>
Keep up to date with simPRO at: http://simp
e type
pg_ddl_command. Am I missing something? Is the documentation lacking?
--
Jonathan Rogers
Socialserve.com by Emphasys Software
jrog...@emphasys-software.com
On 08/24/2016 12:58 AM, Alvaro Herrera wrote:
> Jonathan Rogers wrote:
>> I am trying to use an event trigger to do something when a column
>> changes. I can declare an event trigger "ON sql_drop WHEN TAG IN ('ALTER
>> TABLE')" to get dropped columns. Howev
We've been storing some "enumerated"/"set" data in postgresql as INT or BIT(32)
for several years for some flags/toggles on records.
This was preferable for storage to the ENUM type (or multiple columns), as we
often changed the number of enumerated options or their labels -- and computing
ev
On Sep 27, 2016, at 10:54 AM, Brian Dunavant wrote:
> db=# select 'foo' where (9 & 1) > 0;
A HA
Thank you Brian and David -- I didn't realize that you needed to do the
comparison to the result.
(or convert the result as these work):
select 'foo' where (9 & 1)::bool;
select 'f
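The point Brian and David made, in sketch form: the & operator yields an integer, not a boolean, so it must be compared or cast before it can sit in a WHERE clause. The flags table at the end is an assumed example, not from the thread:

```sql
-- The bitwise AND returns an integer, so compare it...
SELECT 'foo' WHERE (9 & 1) > 0;

-- ...or cast it to boolean:
SELECT 'foo' WHERE (9 & 1)::bool;

-- Against an INT flag column (table/column names assumed):
SELECT id FROM records WHERE (flags & 4) > 0;  -- rows with the 0b100 flag set
```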
On Sep 27, 2016, at 2:46 PM, Israel Brewster wrote:
> I do have those on, and I could write a parser that scans through the logs
> counting connections and disconnections to give a number of current
> connections at any given time. Trying to make it operate "in real time" would
> be interestin
resolve or counter this would be appreciated.
Thanks in advance.
*Jonathan J. Eastgate*
Chief Technology Officer | simPRO Software Group
Ph: 1300 139 467 / +61 7 3147 8777
<http://simprogroup.com/au/email-redirect>
Keep up to date with simPRO at: http://simprogroup.com/blog
The contents o
And further to my last post - another post in the forums related to this:
https://devon.so/2015/02/06/as-tale-of-sequences-and-postgresql-replication-9/
Thanks.
*Jonathan J. Eastgate*
Chief Technology Officer | simPRO Software Group
Ph: 1300 139 467 / +61 7 3147 8777
<http://simprogroup.
Is there a way to find out when a materialized view was created/refreshed? I
couldn't find this information anywhere in the docs.
the use-case is that I wish to update a materialized view a few times a day in
a clustered environment. i'd like to make sure one of the redundant nodes
doesn't re
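PostgreSQL does not record when a materialized view was last refreshed, so one workaround is to log the refresh yourself. A sketch, with all names assumed (ON CONFLICT needs 9.5+):

```sql
CREATE TABLE matview_refreshes (
    view_name    text PRIMARY KEY,
    refreshed_at timestamptz NOT NULL
);

-- Each node can check/record this table when deciding whether to refresh:
REFRESH MATERIALIZED VIEW my_matview;
INSERT INTO matview_refreshes (view_name, refreshed_at)
VALUES ('my_matview', now())
ON CONFLICT (view_name) DO UPDATE SET refreshed_at = EXCLUDED.refreshed_at;
```

With the timestamp in a shared table, a redundant node can skip the refresh if another node has done it recently.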
On Jan 9, 2017, at 12:49 PM, Israel Brewster wrote:
> Planning time: 4.554 ms
> Execution time: 225998.839 ms
> (20 rows)
>
> So a little less than four minutes. Not bad (given the size of the database),
> or so I thought.
>
> This morning (so a couple of days later) I ran the query again wi
I've run into a performance issue, and I think autovacuum may be involved.
does anyone know if its possible to temporarily stop autovacuum without a
server restart ?
It seems that it either requires a server restart, or specific tables to be
configured.
Several times a day/week, I run a handfu
On Jan 11, 2017, at 8:19 PM, Melvin Davidson wrote:
>
> Yes, you're right about ALTER SYSTEM. Unfortunately, the op provided neither
> PostgreSQL version or O/S, so we can't even be sure that is
> an option. That is why I stated "I cannot confirm".
I didn't think that would matter, but postg
I'm just wondering if there's a more efficient way of handling a certain
periodic data migration.
We have a pair of tables with this structure:
table_a__live
column_1 INT
column_2 INT
record_timestamp TIMESTAMP
table_a__archive
On Jan 12, 2017, at 5:52 PM, Merlin Moncure wrote:
> On Thu, Jan 12, 2017 at 2:19 PM, bto...@computer.org
> wrote:
>>
>> Review manual section 7.8.2. Data-Modifying Statements in WITH
>>
>>
>> https://www.postgresql.org/docs/9.6/static/queries-with.html
>
> this.
>
> with data as (delete fr
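The data-modifying CTE pattern being quoted, applied to the table pair from the question (column list assumed from the structure shown above), moves rows from the live table to the archive in a single statement:

```sql
WITH moved AS (
    DELETE FROM table_a__live
    WHERE record_timestamp < now() - interval '30 days'
    RETURNING column_1, column_2, record_timestamp
)
INSERT INTO table_a__archive (column_1, column_2, record_timestamp)
SELECT column_1, column_2, record_timestamp
FROM moved;
```

The DELETE and INSERT happen in one snapshot, so no row can be lost between the two steps.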
There are over 20 million records in a self-referential database table, where
one record may point to another record as a descendant.
Because of a bug in application code, there was no limit on recursion. The max
was supposed to be 4. A few outlier records have between 5 and 5000
descendants
On Jan 26, 2017, at 7:07 PM, David G. Johnston wrote:
> Thinking aloud - why doesn't just finding every record with 5 descendants
> not work? Any chain longer than 5 would have at least 5 items.
Oh it works. This is why I ask these questions -- new perspectives!
> Even without recursion you
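One way to find the outlier records is a recursive CTE that stops descending once a chain has proven itself deeper than the intended limit. This is a sketch that assumes a `records` table with a `parent_id` self-reference; the actual schema isn't shown in the thread:

```sql
WITH RECURSIVE chain AS (
    SELECT id, parent_id, 1 AS depth
    FROM records
    WHERE parent_id IS NULL          -- roots of each chain
  UNION ALL
    SELECT r.id, r.parent_id, c.depth + 1
    FROM records r
    JOIN chain c ON r.parent_id = c.id
    WHERE c.depth < 6                -- stop once depth > 5 is proven
)
SELECT id FROM chain WHERE depth > 5;
```

Capping the recursion at depth 6 keeps the CTE from walking the full 5000-deep outliers.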
As a temporary fix I need to write some uploaded image files to PostgreSQL
until a task server can read/process/delete them.
The problem I've run into (via server load tests that model our production
environment), is that these read/writes end up pushing the indexes used by
other queries out
Thanks. Unfortunately, this is in a clustered environment. NFS and other
shared drive systems won't scale well. I'd need to run a service that can
serve/delete the local files, which is why I'm just stashing it in Postgres for
now.
> On Nov 19, 2015, at 2:26 AM, Roxanne Reid-Bennett wrot
Is it possible to select index values ?
I haven't found any documentation that says "No", but I haven't found anything
that says "Yes" either.
The reason - I have a few function indexes that are working as partial indexes.
I'd like to run some analytics on them (to determine uniqueness of valu
On Feb 1, 2016, at 6:58 PM, David G. Johnston wrote:
> You can query the statistics portion of the database to get some basic
> statistics of the form mentioned.
Yeah, i didn't think there would be support. The stats collector doesn't have
the info that I want... it's focused on how the data
I'm running postgresql on ubuntu. the 9.4 branch from postgresql.org
I think the only way to disable ipv6 is to edit postgresql.conf and explicitly
state localhost in ipv4 as follows
- listen_addresses = 'localhost'
+ listen_addresses = '127.0.0.1'
can anyone confirm?
ilar format
// Jonathan Vanasco
| CEO/Founder SyndiClick Netw
Thank you Jon -- thats the exact sort of trick I was hoping for.
Cheers!
On Jun 7, 2007, at 6:36 PM, Jon Sime wrote:
Jonathan Vanasco wrote:
Does anyone have a trick to list all columns in a db ?
No trickery, just exploit the availability of the SQL standard
information_schema views
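The information_schema approach being referred to is a single query, for example:

```sql
-- List every user-table column in the current database.
SELECT table_schema, table_name, column_name, data_type
FROM information_schema.columns
WHERE table_schema NOT IN ('pg_catalog', 'information_schema')
ORDER BY table_schema, table_name, ordinal_position;
```

Because information_schema is part of the SQL standard, the same query works on other databases too.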
rather have a permission problem, instead of no more disk space.
However, I don't know why :)
- Jonathan
---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?
http://www.postgresql.org/docs/faq
Hi Everyone,
I'm trying to work with connection pools. The example I'm looking at is
located here:
http://www.developer.com/img/2009/11/Listing1_ConnectionPoolClass.html
You will notice that the getConnectionFromPool method does not implement
any blocking, and only returns null.
I would l
Hi Everyone,
I'm trying to create a server for a database system which will be used
by multiple clients. Of course, table locking is very important. Reading
the Postgresql docs, locking occurs on a transaction-by-transaction basis.
In our java code, we are doing this:
//Start Code Block
Con
On 01/11/10 18:08, Andy Colson wrote:
On 11/1/2010 12:37 PM, Jonathan Tripathy wrote:
Hi Everyone,
I'm trying to create a server for a database system which will be used
by multiple clients. Of course, table locking is very important. Reading
the Postgresql docs, locking occurs
Hi Andy,
Thanks for your reply. Would the above code be classed as a single
transaction then?
Yes, assuming there's no explicit transaction control
(COMMIT/ROLLBACK/END) in your queries.
Actually, we do have maybe one or 2 queries that use ROLLBACK, however
ROLLBACK happens at the end of a "c
On 01/11/10 18:38, Jonathan Tripathy wrote:
Hi Andy,
Thanks for your reply. Would the above code be classed as a single
transaction then?
Yes, assuming there's no explicit transaction control
(COMMIT/ROLLBACK/END) in your queries.
Actually, we do have maybe one or 2 queries tha
I'll give you the exact case where I'm worried:
We have a table of customers, and each customer can have multiple
memberships (which are stored in the memberships table). We want our
deleteMembership(int membershipID) method to remove the membership, then
check to see if there are no more membe
On 01/11/10 19:12, Andy Colson wrote:
On 11/1/2010 2:01 PM, Jonathan Tripathy wrote:
I'll give you the exact case where I'm worried:
We have a table of customers, and each customer can have multiple
memberships (which are stored in the memberships table). We want our
deleteMemb
Hi Everyone,
I'm looking for the best solution for "Hot Standbys" where once the
primary server fails, the standby will take over and act just like the
master did. The standby must support INSERTS and UPDATES as well (once
the master has failed)
Are there any solutions like this? Looking on
On 01/11/10 19:56, Andy Colson wrote:
On 11/1/2010 2:29 PM, Jonathan Tripathy wrote:
On 01/11/10 19:12, Andy Colson wrote:
On 11/1/2010 2:01 PM, Jonathan Tripathy wrote:
I'll give you the exact case where I'm worried:
We have a table of customers, and each customer can hav
On 01/11/10 20:01, Thomas Kellerer wrote:
Jonathan Tripathy wrote on 01.11.2010 20:53:
Hi Everyone,
I'm looking for the best solution for "Hot Standbys" where once the
primary server fails, the standby will take over and act just like
the master did. The standby must supp
On 01/11/10 20:21, Scott Marlowe wrote:
On Mon, Nov 1, 2010 at 2:12 PM, Jonathan Tripathy wrote:
On 01/11/10 20:01, Thomas Kellerer wrote:
Jonathan Tripathy wrote on 01.11.2010 20:53:
Hi Everyone,
I'm looking for the best solution for "Hot Standbys" where once the
primary se
On 01/11/10 20:26, Thomas Kellerer wrote:
Jonathan Tripathy wrote on 01.11.2010 21:12:
9.0 has streaming replication and "Hot Standby"
http://www.postgresql.org/docs/current/static/hot-standby.html
http://www.postgresql.org/docs/current/static/warm-standby.html#STREAMING-R
On 01/11/10 21:10, Vick Khera wrote:
On Mon, Nov 1, 2010 at 3:53 PM, Jonathan Tripathy wrote:
The standby must support INSERTS and UPDATES as well (once the master has
failed)
Are there any solutions like this? Looking on the Postgresql site, all the
standby solutions seem to be read only
On 02/11/10 01:56, Scott Marlowe wrote:
On Mon, Nov 1, 2010 at 4:39 PM, Jonathan Tripathy wrote:
On 01/11/10 21:10, Vick Khera wrote:
On Mon, Nov 1, 2010 at 3:53 PM, Jonathan Tripathy
wrote:
The standby must support INSERTS and UPDATES as well (once the master has
failed)
Are there any
On 02/11/10 09:53, Craig Ringer wrote:
On 11/02/2010 03:01 AM, Jonathan Tripathy wrote:
user1 goes to customer page, clicks on "delete membership" of the last
member ship, which blows away the membership,
user2 goes to customer page, clicks on "add membership" and sta
From: pgsql-general-ow...@postgresql.org on behalf of Vick Khera
Sent: Tue 02/11/2010 13:18
To: pgsql-general
Subject: Re: [GENERAL] Replication
On Tue, Nov 2, 2010 at 2:59 AM, Jonathan Tripathy wrote:
> What is the difference between the "Hot-Stan
Hi everyone,
When adding a new record, we run an insert query which auto-increments
the primary key for the table. However the method (in java) which calls
this query must return the newly created key.
Any ideas on how to do this, preferably using a single transaction?
Thanks
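PostgreSQL's RETURNING clause answers this directly: the generated key comes back from the same INSERT, in the same transaction. Table and column names here are assumptions:

```sql
INSERT INTO customers (name, email)
VALUES ('Alice', 'alice@example.com')
RETURNING id;
```

From JDBC, passing Statement.RETURN_GENERATED_KEYS to prepareStatement and reading getGeneratedKeys() achieves the same thing through the driver.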
On 02/11/10 23:11, Craig Ringer wrote:
On 02/11/10 18:29, Jonathan Tripathy wrote:
I don't really mind what happens, as
long as the user is made aware of what has happen, and there aren’t any
memberships with no corresponding customers.
Well, that's taken care of by a referential
Sorry, I don't get it. I usually have an application that knows if it
wants to write some data to database, or not. So it writes the data, and
just gets from database the id that was set by database. No need of
getting the id earlier in a transaction, although the simple insert that
saves the d
Hi Everyone,
When I create a prepared statement like this in java (using JDBC):
pStmt = conn.prepareStatement(qry);
everything works ok. However when I want a scrollable resultset and use this:
pStmt = conn.prepareStatement(qry,ResultSet.TYPE_SCROLL_INSENSITIVE);
I get a syntax error:
org.pos
on a project, i find myself continually finding the database locked up with
"idle in transaction" connections
are there any commands that will allow me to check exactly what was going on in
that transaction ?
i couldn't find anything in the docs, and the project has decent traffic, so
its goin
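pg_stat_activity is the usual place to look. A sketch for 9.2 and later (older versions named the columns procpid and current_query instead):

```sql
SELECT pid, usename, state, query, state_change
FROM pg_stat_activity
WHERE state = 'idle in transaction'
ORDER BY state_change;
```

Note that `query` shows only the last statement the backend ran, not the whole transaction, so it identifies the culprit connection rather than replaying its history.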
begin w/o commit or rollback?
and thanks. you've been very helpful!
On Nov 30, 2010, at 2:21 PM, Merlin Moncure wrote:
> Begin w/o commit is a grave application error and you should
> consider reworking your code base so that it doesn't happen (ever).
Hi Everyone,
What does PG do with data that has been inserted into a table, but was
never committed? Does the data get discarded once the connection dies?
Thanks
I was setting TIME ZONE to 'EST' in my formatting function, and I've now
learned that EST is NOT the same as 'America/New_York', as EST is not
DST-sensitive.
I mistyped, the 2011-03-17 18:21:50-04 should have been 2011-03-17
10:21:50-04
Thank you all!!
-Original Message-
From: Steve C
I'm having some issues with fulltext searching.
I've gone though the list archives and stack overflow, but can't seem to get
the exact answers. hoping someone can help.
Thanks in advance and apologies for these questions being rather basic. I just
felt the docs and some online posts are lea
Option A (less fast):
create gin index on tsvector(searchable_column)
Option B (faster):
create tsvector column for `searchable_column`
create gin index on the tsvector column
> On Mon, Jun 9, 2014 at 8:55 PM, Jonathan Vanasco wrote:
>>I can't figure out whic
On Jun 19, 2014, at 11:21 AM, Andy Colson wrote:
> I think it depends on how you are going to use them. I, for example, have
> lots of images that are served on a web page, after benchmarks I found it was
> faster to store them on filesystem and let apache serve them directly.
I rarely store
I'm getting a handful of 'can not index words longer than 2047 characters' on
my `gin` indexes.
1. does this 2047 character count correspond to tokens / indexed words?
2. if so, is there a way to lower this number ?
3. is there a way to profile the index for the frequency of tokens ?
( apolo
I have a growing database with millions of rows that track resources against an
event stream.
i have a few handfuls of queries that interact with this stream in a variety of
ways, and I have managed to drop things down from 70s to 3.5s on full scans and
offer .05s partial scans.
no matter
In the past, to accomplish the same thing I've done this:
- store the data in hstore/json. instead of storing snapshots, I store deltas.
i've been using a second table though, because it's improved performance on
reads and writes.
- use a "transaction" log. every write session gets logged in
> And does your implementation worry about multiple timelines?
Not sure I understand this... but every object is given a revision id. edits
between consecutive revisions are allowed, edits spanning multiple revisions
are rejected.
On Sep 29, 2014, at 5:25 PM, Abelard Hoffman wrote:
> Felix &am
On Sep 27, 2014, at 7:48 PM, snacktime wrote:
> The schema is that a key is a string, and the value is a string or binary. I
> am actually storing protocol buffer messages, but the library gives me the
> ability to serialize to native protobuf or to json. Json is useful at times
> especially
I'm trying to improve the speed of suite of queries that go across a few
million rows.
They use 2 main "filters" across a variety of columns:
WHERE (col_1 IS NULL ) AND (col_2 IS NULL) AND ((col_3 IS NULL) OR
(col_3 = col_1))
WHERE (col_1 IS True ) AND (col_2 IS True) AND (col_
On Sep 30, 2014, at 8:04 PM, John R Pierce wrote:
> if col_1 IS NULL, then that OR condition doesn't make much sense. just
> saying...
I was just making a quick example. There are two commonly used "filter sets",
each are mostly on Bool columns that allow null -- but one checks to see if
On Oct 1, 2014, at 12:34 AM, Misa Simic wrote:
> Have you considered maybe partial indexes?
>
> http://www.postgresql.org/docs/9.3/static/indexes-partial.html
>
> I.e idx1 on pk column of the table with where inside index exactly the same
> as your first where
>
> Idx2 on pk column with where
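The partial-index suggestion, spelled out against the two filter sets from the question (table name and indexed column assumed):

```sql
-- Idx1: matches the first filter set.
CREATE INDEX idx_filter_nulls
    ON my_table (id)
    WHERE col_1 IS NULL AND col_2 IS NULL;

-- Idx2: matches the second filter set.
CREATE INDEX idx_filter_trues
    ON my_table (id)
    WHERE col_1 IS TRUE AND col_2 IS TRUE;
```

Each index only stores the rows its WHERE clause accepts, so both stay small, and the planner can use one whenever a query's predicate implies the index predicate.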
Does anyone have a good solution for benching queries under various conditions,
and collecting the EXPLAIN data ?
I looked at pgbench, but it doesn't seem to be what I want.
My situation is this-
- For a given query, there are 3-5 different ways that I can run it.
- Each form of the query ha
On Oct 2, 2014, at 7:30 PM, john gale wrote:
> The GUI installer for Mac OS X downloaded from postgresql.org works fine.
Unless you NEED to use the source/etc version, use the GUI installer.
Unless you are already on a system where installing from Fink/Macports/Source
is commonplace... you'
I've been able to fix most of my slow queries into something more acceptable,
but I haven't been able to shave any time off this one. I'm hoping someone has
another strategy.
I have 2 tables:
resource
resource_2_tag
I want to calculate the top 25 "tag_ids" in "resource_2_tag "
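The basic top-25 aggregation, before any filtering against `resource`, is:

```sql
SELECT tag_id, count(*) AS uses
FROM resource_2_tag
GROUP BY tag_id
ORDER BY uses DESC
LIMIT 25;
```

The hard part discussed in the thread is adding the filter on the `resource` table without forcing a scan of the whole join.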
On Oct 6, 2014, at 5:56 PM, Jim Nasby wrote:
> Don't join to the resource table; there's no reason to because you're not
> pulling anything from it.
Thanks the reply!
I'm not pulling anything from the resource table, but the join is necessary
because I'm filtering based on it. ( see the WHERE
On Oct 7, 2014, at 10:02 AM, Marc Mamin wrote:
> Hi,
> it seems to me that your subquery may deliver duplicate ids.
> And with the selectivity of your example, I would expect an index usage
> instead of a table scan. You may check how up to date your statistics are
> and try to raise the statist
I have a table with over 1MM records and 15 columns.
I had created a "unique index" on a mix of two columns to enforce a constraint
: (resource_type_id, lower(archive_pathname))
i've noticed that searches never use this. no matter what I query, even if
it's only the columns in the index. I'm
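The likely explanation is that an expression index is only considered when the query repeats the indexed expression exactly. A sketch, with the table name assumed:

```sql
CREATE UNIQUE INDEX uq_type_path
    ON my_table (resource_type_id, lower(archive_pathname));

-- Will NOT use the index: the bare column is not the indexed expression.
SELECT * FROM my_table
WHERE resource_type_id = 1 AND archive_pathname = '/some/path';

-- CAN use the index: lower(...) matches the indexed expression.
SELECT * FROM my_table
WHERE resource_type_id = 1 AND lower(archive_pathname) = lower('/some/path');
```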
ld be a more appropriate place to make such an announcement?
Best regards,
Jonathan Neve
http://www.copycat.fr
Hello Amit,
I tried pg-announce as well, but my message didn't go through…
Jonathan
On Tuesday 14 October 2014 at 11:38:02, Amit Langote wrote:
On Tue, Oct 14, 2014 at 4:58 PM, Jonathan Neve wrote:
Hello,
I am the author of a database replication solution to which I recently added
Pos
Sorry for being a bit dense, but where should I submit a website news
posting ?
On Tuesday 14 October 2014 at 11:45:55, Magnus Hagander wrote:
On Tue, Oct 14, 2014 at 11:42 AM, Jonathan Neve wrote:
Hello Amit,
I tried pg-announce as well, but my message didn't go through…
Not having
Thanks, I'll give that a try!
Jonathan
On 14/10/14 at 12:01, Magnus Hagander wrote:
It's on http://www.postgresql.org/, click the link that says "Submit news".
//Magnus
On Tue, Oct 14, 2014 at 11:55 AM, Jonathan Neve wrote:
Sorry for being a bit dense, but where shoul
s within SPI_execute_plan().
Can anyone clarify what occurs when plpy.prepare() is called? Is it
worth using a Python conditional to determine whether to call it rather
than using SD.setdefault()?
--
Jonathan Ross Rogers
On 10/15/2014 05:51 PM, Adrian Klaver wrote:
> On 10/15/2014 02:39 PM, Jonathan Rogers wrote:
>> I was just reading the PL/Python docs section "42.7.1 Database Access
>> Functions" and saw this example:
>>
>> CREATE FUNCTION usesavedplan() RETURNS trigger AS
On 11/01/2014 12:13 PM, Peter Eisentraut wrote:
> On 10/15/14 5:58 PM, Jonathan Rogers wrote:
>> BTW, I would rewrite the 9.1 example to be shorter while
>> behaving the same:
>>
>>
>> CREATE FUNCTION usesavedplan() RETURNS trigger AS $$
>> plan = SD.get(
On Nov 11, 2014, at 5:38 PM, Robert DiFalco wrote:
> Thoughts? Do I just choose one or is there a clear winner? TIA!
I prefer this model
user_id__a INT NOT NULL REFERENCES user(id),
user_id__b INT NOT NULL REFERENCES user(id),
is_reciprocal BOOLEAN
primary key (