I'm trying to identify which postgresql.conf file I should be editing, in order
to change the default database file location for Postgres 9.6.6, when
installed on CentOS 7.x.
Is the best method for changing the default data directory, at the time of
database init, to use the $PGDATA variable?
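The running server will tell you exactly which files it is using, so a quick
sanity check from psql (a minimal sketch; the paths in the comments are just
the usual PGDG package defaults, not something this thread confirms):

SHOW config_file;      -- e.g. /var/lib/pgsql/9.6/data/postgresql.conf
SHOW data_directory;   -- wherever initdb was pointed

If the cluster isn't initialized yet, pointing initdb at the desired directory
(directly with -D, or via PGDATA in the service environment) avoids having to
edit data_directory in postgresql.conf afterwards.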
Guys,
New to Postgres - here's my code inside an event trigger:
ELSIF (TG_OP = 'INSERT') THEN
    EXECUTE format('INSERT INTO %I SELECT statement_timestamp(), ''INSERT'', $1.*',
                   TG_TABLE_NAME || '_cdc')
    USING NEW;
    RETURN NEW;
Here's the error I am receiving when I attempt
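For context, a minimal self-contained sketch of how such a branch usually sits
inside a row-level trigger function. Note that TG_OP and NEW exist only in
ordinary triggers, not event triggers. The listings table and the layout of
the _cdc table here are assumptions for illustration:

CREATE OR REPLACE FUNCTION cdc_capture() RETURNS trigger AS $$
BEGIN
    IF (TG_OP = 'INSERT') THEN
        -- assumes listings_cdc has (timestamptz, text, <all columns of listings>)
        EXECUTE format('INSERT INTO %I SELECT statement_timestamp(), ''INSERT'', ($1).*',
                       TG_TABLE_NAME || '_cdc')
        USING NEW;
        RETURN NEW;
    END IF;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER listings_cdc_trg
    AFTER INSERT ON listings
    FOR EACH ROW EXECUTE PROCEDURE cdc_capture();

Writing ($1).* rather than $1.* avoids one common parsing surprise when
expanding a row parameter.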
I am new to Postgres and I am trying to build this SQL statement in my SQL
script:
ALTER TABLE listings_cdc ALTER COLUMN table_id SET DEFAULT
nextval('tab_id_seq');
I am trying to build the above-stated command as a dynamic SQL statement:
EXECUTE 'ALTER TABLE listings_cdc ALTER COLUMN table_id SE
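The usual stumbling block here is the nested single quotes around the sequence
name. Two ways to write it, the second using format() so identifiers and the
literal are quoted for you:

EXECUTE 'ALTER TABLE listings_cdc ALTER COLUMN table_id SET DEFAULT nextval(''tab_id_seq'')';

EXECUTE format('ALTER TABLE %I ALTER COLUMN %I SET DEFAULT nextval(%L)',
               'listings_cdc', 'table_id', 'tab_id_seq');

Both only work inside PL/pgSQL (e.g. a DO block); a plain SQL script has no
EXECUTE-a-string statement.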
ive queries, but (I think -- I'll have to check!)
that is all
>
> Cheers,
>
> Jeff
>
Robert.
--
Robert Inder, 0131 229 1052 / 07808 492 213
Interactive Information Ltd, 3, Lauriston Gardens, Edinburgh EH3 9HH
Registered in Scotland, Company no. SC 150689
Interactions speak louder than words
dump of
database_c is timestamped less than 10 minutes after the dump of database_a,
which does not fit with the dump of database_b being given 10 minutes in
which to finish.
Have I misunderstood something? Or is Postgres not actually configured the
way I think it is?
Robert.
--
Robert Inder,
Hi guys,
I am new to Postgres and I am trying to write my first function to insert,
update or delete, and to trap errors when the table does not exist or a
column does not exist, or, on any other error, simply pass back the SQLSTATE.
Here's my code; can you help?
CREATE OR REPLACE FUNCTION listings_aud
All the code for creating and managing partitions is part of the core
Postgres code. What we are interested in looking into is what that workflow
might look like and how that workflow can be supported with a GUI
management tool.
-- Rob
On Thu, May 18, 2017 at 3:21 PM, Melvin Davidson
wrote:
>
On Fri, Mar 31, 2017 at 11:29 AM, Tom Lane wrote:
> Robert Haas writes:
>> On Thu, Mar 30, 2017 at 4:45 PM, Tom Lane wrote:
>>> In short, it seems like this statement in the docs is correctly describing
>>> our code's behavior, but said behavior is wrong and sh
> branches
> should also be changed.
Sounds reasonable, but I don't see much advantage to changing it in
the back-branches.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Thanks, everyone, for your comments.
I think I've got a clearer idea of what's going on now...
Robert.
On 1 December 2016 at 13:55, Robert Inder wrote:
> I'm running Postgres9.4 in master/hot-standby mode on a few pairs of servers.
>
> While recovering from A Bit Of
ate it/them. So I'd looked
in "the obvious places" -- Section 18 (Server configuration), and in
particular 18.2 "File Locations". Could I suggest that the motivation
for doing this, and the consequences for backups, should be discussed
in "the right place" -- in
I was wondering if anyone might be able to help me out with a table design
question.
A quick intro -- I'm helping a company switch from a mongo database over to
postgresql (yay!). The company is a marketplace app for musicians and
hosts. The basic idea is that a host can book a musician for an eve
Hello,
If you have jdk 1.8 or above installed go to www.executequery.org and
download the latest jar file. Download the JDBC driver from Postgres and
set it up. It's open source.
It has an ERD generator but obviously depends on having all your foreign
keys declared in order to link tables, etc. Af
of pg_dump to support catalog
> ACLs. I am sure by the way that checking for (owning_tab->dobj.dump &&
> DUMP_COMPONENT_DEFINITION) != 0 is not good because of, for example, the
> case of a serial column created in an extension where the sequence is
> dropped from the extension afterwards.
Ste
dumpable as well.
>
> That's the reason why the PgQ event tables created by
> pgq.create_queue() are not dumped.
That sucks.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On Mon, 2016-06-20 at 11:43 +0200, Job wrote:
> Hi Andreas,
>
> >I would suggest running only autovacuum, and with time you will see the
> >table stop growing. There is no need for VACUUM FULL.
>
> So will new records, when loaded with pg_bulkload, reuse the "marked-free"
> space?
Yes, but you may
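If the concern is keeping such a table from growing between bulk loads,
per-table autovacuum settings can make vacuum trigger much sooner than the
defaults; a sketch (the table name is hypothetical):

ALTER TABLE bulk_loaded_table SET (
    autovacuum_vacuum_scale_factor = 0.01,  -- vacuum after ~1% dead rows
    autovacuum_vacuum_threshold   = 1000
);

Space freed by vacuum stays in the table's free space map and is reused by
later inserts, which is why the table stops growing rather than shrinking.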
ordering quals in query by their perceived cost
> is the solution. Keep optimizer informed by setting costs appropriately
> and it will do the right thing more often than not. :)
I think that if the costs are actually identical, the system will keep
the quals in the same order they were written
How long has it been taking?
backend_start| 2016-05-07 11:48:39.218398-03
More than 50 hours.
What is your maintenance_work_mem set to?
maintenance_work_mem = 352MB
2016-05-09 14:34 GMT-03:00 Joshua D. Drake :
> On 05/09/2016 10:32 AM, Robert Anderson wrote:
>
>> Only one li
| CREATE INDEX CONCURRENTLY index_texto
| ON flip_pagina_edicao
| USING hash
| (texto COLLATE pg_catalog."default");
postgres=#
2016-05-09 14:20 GMT-03:00 Tom Lane :
> Robert Anderson writes:
> > There
pid = l.pid)
>JOIN pg_stat_user_tables t ON (l1.relation = t.relid)
>WHERE w.waiting;
>
>
> On Mon, May 9, 2016 at 11:50 AM, Joshua D. Drake
> wrote:
>
>> On 05/09/2016 05:04 AM, Robert Anderson wrote:
>>
>>> Hi,
>>>
>>> We are tryi
380, {{5, -1, 0}}, 1) = 0
...
Thanks in advance.
Robert
a variable that you set up and repeat and you show. A
> bit cumbersome and mixes the parts that are title and those that are present
> only because you are watching.
Ah, come on. This doesn't really seem like an issue we should spend
more time quibbling about. I think Tom's versio
to the people
who do. Whatever is the consensus is OK with me. I just don't want
to get yelled at later for committing something here, so it would be
nice to see a few votes for whatever we're gonna do here.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreS
, and then you don't need to
reference the number 50 again further down.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
rn false;
+}
Instead of repeating the cleanup code, how about making this a break?
Then change the return statement at the bottom of the function to
return (res != -1).
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
FROM pg_stat_all_indexes i
> JOIN pg_class c ON (c.oid = i.relid)
> JOIN pg_namespace n ON (n.oid = c.relnamespace)
> JOIN pg_index idx ON (idx.indexrelid = i.indexrelid )
> WHERE i.idx_scan = 0
> AND NOT idx.indisprimary
> AND NOT idx.indisunique
> ORDER BY 1, 2,
ta.
On Thu, Mar 10, 2016 at 7:44 PM, Adrian Klaver
wrote:
> On 03/10/2016 04:11 PM, Robert McAlpine wrote:
>
>>
>> Postgresql 9.5, Ubuntu 14.04.
>>
>> I broke my ability to access postgres after attempting to install
>> postgres-xc (ironic, since I insta
stall with a
correct default postgres role, but my gut tells me that screwing around
with those files is doomed to fail.
I would appreciate any help or thoughts on how to recover access to the
data.
--
Robert McAlpine
r...@pfcta.com
Hi,
I’m reading about the ranking functions [1], and I have a couple questions…
1. Is ts_rank taking proximity of terms into account? It seems like it is, but
the docs suggest that only ts_rank_cd does that.
2. Is there a way to search multiple terms like ‘a | b | c …’ but score higher
when mu
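For question 1, the docs have it right: only ts_rank_cd factors in term
proximity (it implements "cover density"); ts_rank looks at frequency alone.
A small sketch comparing the two on an OR query, using a hypothetical
docs(body) table:

SELECT ts_rank(to_tsvector('english', body), q)    AS rank_freq,
       ts_rank_cd(to_tsvector('english', body), q) AS rank_cd
FROM docs,
     to_tsquery('english', 'guitar | drums | vocals') AS q
WHERE to_tsvector('english', body) @@ q
ORDER BY rank_cd DESC;

With ts_rank_cd, rows matching more of the OR'd terms (and matching them close
together) generally rank higher, which is at least a partial answer to
question 2.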
On Fri, Jul 10, 2015 at 9:40 AM, John McKown
wrote:
> On Fri, Jul 10, 2015 at 11:15 AM, Robert DiFalco wrote:
>
>> I have a table something like this:
>>
>> CREATE TABLE devices (
>> owner_id BIGINT NOT NULL,
>> utc_offset_secs INT,
>>
I have a table something like this:
CREATE TABLE devices (
owner_id BIGINT NOT NULL,
utc_offset_secs INT,
PRIMARY KEY (uid, platform),
FOREIGN KEY (owner_id) REFERENCES users(id) ON DELETE CASCADE
);
I want to do a query from an application that returns all devices whose
time is b
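A sketch of the sort of query I'd expect here, under the assumption that
utc_offset_secs is the device's offset from UTC and we want devices whose
local wall-clock time falls in some window:

SELECT *
FROM devices
WHERE ((now() AT TIME ZONE 'UTC')
        + utc_offset_secs * interval '1 second')::time
      BETWEEN time '20:00' AND time '22:00';

now() AT TIME ZONE 'UTC' yields a timestamp without time zone, so adding the
per-device offset and casting to time gives each device's local clock time.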
>
> I am fairly certain this does not give you the correct results.
> Specifically, the minimum value for each cDate is going to be 1 since
> count(*) counts NULLs. count(u) should probably work.
>
>
> Yes, you are right. I forgot to change COUNT(*) to COUNT(id); as you
mention, COUNT(u.*) will a
Paul, I'm sure I'm missing something but it seems like your approach will
not work. It's because the LEFT OUTER JOIN is on the numeric day of the
week. So if you had this query going over weeks or months of data wouldn't
you have the same issue with the days that had no new users not being
factored
Thanks Paul, I guess I'm not sure how a generate_series between 0 to 6
would solve this problem. Wouldn't I have to generate a series based on the
date range (by day) and then group by DOW _after_ that? Can you give me an
example of how I'd do it with a series based on 0 to 6?
On Mon, Jul 6, 2015
I'm not sure how to create a result where I get the average number of new
users per day of the week. My issue is that days that did not have any
new users will not be factored into the average, giving an overinflated
result.
This is what I started with:
WITH userdays AS
(SELECT u.created::DAT
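The excerpt cuts off, but for what it's worth, a self-contained sketch of the
generate_series approach (the date range and column names are assumptions):
build one row per calendar day first, so zero-user days survive, then average
per day of week:

WITH days AS (
    SELECT d::date AS day
    FROM generate_series(date '2015-01-01', date '2015-06-30',
                         interval '1 day') AS g(d)
), per_day AS (
    SELECT days.day, count(u.id) AS n   -- count(u.id) is 0 on empty days
    FROM days
    LEFT JOIN users u ON u.created::date = days.day
    GROUP BY days.day
)
SELECT extract(dow FROM day) AS dow, avg(n) AS avg_new_users
FROM per_day
GROUP BY 1
ORDER BY 1;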
Hi,
(Maybe my subject line should be: `is not distinct from` and indexes.)
In Postgres 9.4, I’ve got a table of ‘items’ that references a table ‘colors’.
Not all items have colors, so I created a nullable column in items like:
color_id bigint references colors
There is also an index o
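In case it helps a later reader: a plain b-tree index can serve both the
equality and the IS NULL cases, but IS NOT DISTINCT FROM is not an indexable
construct, so the planner will not use the index for it. A sketch:

CREATE INDEX items_color_id_idx ON items (color_id);

-- both of these can use the index (42 is a placeholder value):
SELECT * FROM items WHERE color_id = 42;
SELECT * FROM items WHERE color_id IS NULL;

-- this one gets a seq scan, even though it means the same thing:
SELECT * FROM items WHERE color_id IS NOT DISTINCT FROM 42;

So issuing the two indexable forms separately (or building the query
dynamically depending on whether the parameter is null) is the usual
workaround.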
-06-12 16:13:44 PDT [6454]: [1-1] LOG: MultiXact member wraparound
>> protections are disabled because oldest checkpointed MultiXact 1 does not
>> exist on disk
>
> **Thank you Robert and all involved for the resolution to this.**
>
>> With the fixes introduced in this re
I want to make sure I understand the repercussions of this before making it
a global setting.
As far as I can tell this will not put data/referential integrity at risk. It
only means that there is a period of time (maybe 600 msecs) between when a
commit occurs and when that data is safe in the case of
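That matches my reading of the docs: with synchronous_commit = off a crash can
lose the most recent commits (the window is bounded by roughly three times
wal_writer_delay), but what is replayed is still consistent, so referential
integrity is not at risk, only the durability of the very last transactions:

SHOW wal_writer_delay;         -- default 200ms, so worst case ~600ms of commits
SET synchronous_commit = off;  -- affects only this session unless set globally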
s patch is just improving the existing mechanism so
that it's reliable, and you're proposing something notably different
which might be better, but which is really a different proposal
altogether.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
tch I committed today introduces some
regression that is even worse, life will suck.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On Fri, Jun 5, 2015 at 2:36 PM, Alvaro Herrera wrote:
> Tom Lane wrote:
>> Robert Haas writes:
>
>> > There are at least two other known issues that seem like they should
>> > be fixed before we release:
>>
>> > 1. The problem that we might truncat
On Fri, Jun 5, 2015 at 2:47 PM, Andres Freund wrote:
> On 2015-06-05 14:33:12 -0400, Tom Lane wrote:
>> Robert Haas writes:
>> > 1. The problem that we might truncate an SLRU members page away when
>> > it's in the buffers, but not drop it from the buffers, leadi
On Fri, Jun 5, 2015 at 12:00 PM, Andres Freund wrote:
> On 2015-06-05 11:43:45 -0400, Tom Lane wrote:
>> Robert Haas writes:
>> > On Fri, Jun 5, 2015 at 2:20 AM, Noah Misch wrote:
>> >> I read through this version and found nothing to change. I encourage
>>
On Fri, Jun 5, 2015 at 2:20 AM, Noah Misch wrote:
> On Thu, Jun 04, 2015 at 05:29:51PM -0400, Robert Haas wrote:
>> Here's a new version with some more fixes and improvements:
>
> I read through this version and found nothing to change. I encourage other
> hackers to
On Thu, Jun 4, 2015 at 5:29 PM, Robert Haas wrote:
> - Forces aggressive autovacuuming when the control file's
> oldestMultiXid doesn't point to a valid MultiXact and enables member
> wraparound at the next checkpoint following the correction of that
> problem.
Err, ena
On Thu, Jun 4, 2015 at 12:57 PM, Robert Haas wrote:
> On Thu, Jun 4, 2015 at 9:42 AM, Robert Haas wrote:
>> Thanks for the review.
>
> Here's a new version. I've fixed the things Alvaro and Noah noted,
> and some compiler warnings about set but unused variables.
On Thu, Jun 4, 2015 at 1:27 PM, Andres Freund wrote:
> On 2015-06-04 12:57:42 -0400, Robert Haas wrote:
>> + /*
>> + * Do we need an emergency autovacuum? If we're not sure, assume yes.
>> + */
>> + return !oldestOffsetKnown ||
>> +
On Thu, Jun 4, 2015 at 9:42 AM, Robert Haas wrote:
> Thanks for the review.
Here's a new version. I've fixed the things Alvaro and Noah noted,
and some compiler warnings about set but unused variables.
I also tested it, and it doesn't quite work as hoped. If started
>> +
>> + /* if nothing has changed, we're done */
>> + if (prevOffsetStopLimitKnown && offsetStopLimit == prevOffsetStopLimit)
>> + return;
>>
>> LWLockAcquire(MultiXactGenLock, LW_EXCLUSIVE);
>> - /* always leave o
On Wed, Jun 3, 2015 at 8:24 AM, Robert Haas wrote:
> On Tue, Jun 2, 2015 at 5:22 PM, Andres Freund wrote:
>>> > Hm. If GetOldestMultiXactOnDisk() gets the starting point by scanning
>>> > the disk it'll always get one at a segment boundary, right? I'm not s
ar numbering space.
>
> I think it *might* (I'm really jetlagged) be fine because that'll only
> happen after a upgrade from < 9.3. And in that case we initialize
> nextOffset to 0. That ought to save us?
That's pretty much betting the farm on the bugs we know about to
recently:
>
> http://www.postgresql.org/message-id/20150601045534.gb23...@tornado.leadboat.com
>
> He noticed the problem for segment boundaries, when not in recovery.
> In recovery, segment boundaries don't raise an error (the read-zeroes
> case applies), but page boundari
which should be correct, even though
> possibly overly conservative, in these cases.
Uh oh. That seems like a real bad problem for this approach. What
keeps that from being the opposite of too conservative? There's no
"safe" value in a circular numbering space.
--
Robert
On Tue, Jun 2, 2015 at 11:44 AM, Andres Freund wrote:
> On 2015-06-02 11:37:02 -0400, Robert Haas wrote:
>> The exact circumstances under which we're willing to replace a
>> relminmxid with a newly-computed one that differs are not altogether
>> clear to me, but
se, it
will buy us some time to figure out what else we want to do.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On Tue, Jun 2, 2015 at 11:27 AM, Andres Freund wrote:
> On 2015-06-02 11:16:22 -0400, Robert Haas wrote:
>> I'm having trouble figuring out what to do about this. I mean, the
>> essential principle of this patch is that if we can't count on
>> relminmxid, datmin
yet
> consistent. That really sucks, because we'll possibly end up with
> multixacts that are completely filled by the time we've reached
> consistency.
That would be a departure from the behavior of every existing release
that includes this code based on, to my knowledge, zer
On Tue, Jun 2, 2015 at 1:21 AM, Noah Misch wrote:
> On Mon, Jun 01, 2015 at 02:06:05PM -0400, Robert Haas wrote:
>> On Mon, Jun 1, 2015 at 12:46 AM, Noah Misch wrote:
>> > On Fri, May 29, 2015 at 03:08:11PM -0400, Robert Haas wrote:
>> >> SetMultiXactIdLimit()
ly once we're sure that the mxact we plan to call it on actually
exists on disk. That won't be called until we replay the first
checkpoint, but that might still be prior to consistency.
Since I forgot to attach the revised patch with fixes for the points
Noah mentioned to that email, here i
On Mon, Jun 1, 2015 at 12:46 AM, Noah Misch wrote:
> Incomplete review, done in a relative rush:
Thanks.
> On Fri, May 29, 2015 at 03:08:11PM -0400, Robert Haas wrote:
>> OK, here's a patch. Actually two patches, differing only in
>> whitespace, for 9.3 and for master (
hink if we do this, we need to
be very careful about step #2. Right now, we decide what we need to
keep and then remove everything else, but that's kind of wonky because
new stuff may be getting created at the same time, so we keep
adjusting our idea of exactly what needs to be removed. It
On Fri, May 29, 2015 at 9:46 PM, Andres Freund wrote:
> On 2015-05-29 15:08:11 -0400, Robert Haas wrote:
>> It seems pretty clear that we can't effectively determine anything
>> about member wraparound until the cluster is consistent.
>
> I wonder if this doesn
On Fri, May 29, 2015 at 3:08 PM, Robert Haas wrote:
> It won't fix the fact that pg_upgrade is putting
> a wrong value into everybody's datminmxid field, which should really
> be addressed too, but I've been working on this for about three days
> virtually non-stop and
On Fri, May 29, 2015 at 12:43 PM, Robert Haas wrote:
> Working on that now.
OK, here's a patch. Actually two patches, differing only in
whitespace, for 9.3 and for master (ha!). I now think that the root
of the problem here is that DetermineSafeOldestOffset() and
SetMultiXactIdLimi
On Fri, May 29, 2015 at 10:17 AM, Tom Lane wrote:
> Thomas Munro writes:
>> On Fri, May 29, 2015 at 11:24 AM, Robert Haas wrote:
>>> B. We need to change find_multixact_start() to fail softly.
>
>> Here is an experimental WIP patch that changes StartupMultiXact an
If we remove
> member files, what is it that we try to read and find not to be present?
Do you have a link to the previous discussion?
I mean, the problem we're having right now is that sometimes we have
an offset, but the corresponding member isn't there. So clearly
offsets reference members. Do members also reference offsets? I
didn't think so, but life is full of surprises.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On Thu, May 28, 2015 at 4:06 PM, Joshua D. Drake wrote:
> FTR: Robert, you have been a Samurai on this issue. Our many thanks.
Thanks! I really appreciate the kind words.
So, in thinking through this situation further, it seems to me that
the situation is pretty dire:
1. If you pg_upgrade
On Thu, May 28, 2015 at 8:51 AM, Robert Haas wrote:
> [ speculation ]
OK, I finally managed to reproduce this, after some off-list help from
Steve Kehlet (the reporter), Alvaro, and Thomas Munro. Here's how to
do it:
1. Install any pre-9.3 version of the server and generate enough
mu
On Thu, May 28, 2015 at 8:03 AM, Robert Haas wrote:
>> Steve, is there any chance we can get your pg_controldata output and a
>> list of all the files in pg_clog?
>
> Err, make that pg_multixact/members, which I assume is at issue here.
> You didn't show us the DET
On Thu, May 28, 2015 at 8:01 AM, Robert Haas wrote:
> On Wed, May 27, 2015 at 6:21 PM, Alvaro Herrera
> wrote:
>> Steve Kehlet wrote:
>>> I have a database that was upgraded from 9.4.1 to 9.4.2 (no pg_upgrade, we
>>> just dropped new binaries in place) but it w
performed an
immediate shutdown or just pulled the plug, it would have said
"database system was interrupted" or some such.
There may be bugs in redo, also, but they don't explain what happened to Steve.
Steve, is there any chance we can get your pg_controldata output and a
list of all the fi
already been removed.
Steve: Can you tell us more about how you shut down the old cluster?
Did you by any chance perform an immediate shutdown? Do you have the
actual log messages that were written when the system was shut down
for the upgrade?
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
ting at a file that has already
> been removed -- again considering the pg_basebackup scenario where the
> multixact files are copied much later than pg_control, so the checkpoint
> to replay is old but the pg_multixact contents have already been
> truncated in the master and
logic elsewhere to disregard oldestOffset when the
accompanying flag is false.
This still leaves open an ugly possibility: can we reach normal
running without a valid oldestOffset? If so, until the next
checkpoint happens, autovacuum has no clue whether it needs to worry.
There's got to be a
y there is something I'm missing or not understanding, can
anyone help? Thanks!
--
Robert
Version 9.2.4
On 3/15/15, David G. Johnston wrote:
> On Sunday, March 15, 2015, Robert James wrote:
>
>> How do I calculate the sum of a field filtered by multiple windows
>> defined by another field?
>>
>> I have table event with fields event_date, num_events
When I save a VIEW, Postgres seems to convert it to a different
format, functionally equivalent but unrecognizable (whitespace and
comments stripped, lots of casts added, etc.).
Is there any simple way to preserve my original code?
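Not that I know of: the original text is parsed into an internal rewrite tree
at CREATE VIEW time, and the server reconstructs SQL from that tree on demand,
so the original formatting simply is not stored. Keeping the source in version
control is the usual answer; what the server can give you back looks like:

SELECT pg_get_viewdef('my_view'::regclass, true);  -- pretty-printed, but reconstructed

(my_view is a placeholder name.)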
How do I calculate the sum of a field filtered by multiple windows
defined by another field?
I have table event with fields event_date, num_events, site_id. I can
easily use aggregate SQL to do SELECT SUM(num_events) GROUP BY
site_id.
But I also have another table site with fields site_id, target
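Since the excerpt cuts off, this is only a guess at the intended shape: if
each site row carries a target date and the "windows" are date ranges around
it, conditional aggregation works on 9.2 (there is no FILTER clause before
9.4). The target_date column and window sizes below are assumptions:

SELECT e.site_id,
       SUM(CASE WHEN e.event_date BETWEEN s.target_date - 7 AND s.target_date
                THEN e.num_events ELSE 0 END) AS events_last_7_days,
       SUM(CASE WHEN e.event_date BETWEEN s.target_date - 30 AND s.target_date
                THEN e.num_events ELSE 0 END) AS events_last_30_days
FROM event e
JOIN site s USING (site_id)
GROUP BY e.site_id;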
I'm doing that. Steven's suggestion of
making the dump to a RAM file system, then filing it as a separate
step, looks simple enough to be worth trying as a stop-gap...
Robert.
--
Robert Inder,
Interactive Information Ltd, 3, Lauriston Gardens, Edinburgh
Is this something that has changed with Postgres 9?
We're currently running Postgres 8.4.
Is this in itself a reason to embark on an upgrade?
Robert.
--
Robert Inder,
Interactive Information Ltd, 3, Lauriston Gardens, Edinburgh EH3 9HH
Registere
The other issue would be how long the "replay" would take. But, I
realise, that's not a major concern: the delay would only be seen by a
client that
had had a major problem. Everyone else would see service as normal.
I think I'll be doing some experiments to find out:-)
Robert.
--
way to go about
things. Or are we missing something? Is there some other way to
restore one database without affecting the others?
Thanks in advance.
Robert.
--
Robert Inder,
Interactive Information Ltd, 3, Lauriston Gardens, Edinburgh EH3 9HH
Registered in Scotland, Company no. SC 150689
I don't think an advisory lock would remove the deadlock.
On Sun, Jan 18, 2015 at 10:33 PM, Roxanne Reid-Bennett
wrote:
> On 1/16/2015 2:41 AM, Jim Nasby wrote:
>
>> On 1/15/15 10:57 PM, Roxanne Reid-Bennett wrote:
>>
>>>
>>> try this: (if you still get deadlocks, uncomment the advisory lock
>>
Hometowns get selected and possibly inserted in unpredictable ways even
from multiple concurrent sessions. The only way I could figure out how to
solve it was to force each INSERT into hometowns to be in its own transaction.
On Mon, Jan 19, 2015 at 1:56 PM, Robert DiFalco
wrote:
> I don't
Maybe my understanding is off.
On Mon, Jan 19, 2015 at 11:10 AM, Andreas Kretschmer <
akretsch...@spamfence.net> wrote:
> Robert DiFalco wrote:
>
> > I have several tables that I use for logging and real-time stats. These
> are not
> > critical and since they are a bottleneck I
I have several tables that I use for logging and real-time stats. These are
not critical and since they are a bottleneck I want transactions against
them to always be asynchronous. Is there a way to specify this at a table
level or do I have to make sure to call set synchronous_commit='off' every
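There is no table-level setting, but it doesn't have to be per-statement
either: synchronous_commit can be attached to a role, a database, or a
function, or set per transaction. A sketch (the role and table names are
hypothetical):

ALTER ROLE stats_writer SET synchronous_commit = off;  -- every session of this role

BEGIN;
SET LOCAL synchronous_commit = off;   -- this transaction only
INSERT INTO page_hits (url) VALUES ('/');
COMMIT;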
would be picked
up. And there should only be a quick, recoverable deadlock.
On Fri, Jan 16, 2015 at 7:49 AM, Daniel Verite
wrote:
> Robert DiFalco wrote:
>
> > I must be doing something wrong because both of these approaches are
> giving
> > me deadlock exceptions.
>
>
The code shown in the Doc (I think) will still give you deadlock in the
case where you have two sessions concurrently trying to insert the same
'hometown'. For example:
INSERT INTO users VALUES('Tom', select_hometown_id('Portland, OR'));
INSERT INTO users VALUES('Waits', select_hometown_id('P
name = hometown_name;
> BEGIN
> insert into hometowns (name)
> select hometown_name where v_id is null
> returning id into v_id;
> EXCEPTION WHEN unique_violation
> THEN
> select id into v_id from hometowns where name = hometown_name;
> END;
> inse
insert into hometowns (name)
> select hometown_name where v_id is null
> returning id into v_id;
> EXCEPTION WHEN unique_violation
> THEN
> select id into v_id from hometowns where name = hometown_name;
> END;
> insert into users (name, hometown_id)
> values
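Assembled from the quoted fragments, a complete version of the retry-loop
pattern (essentially the UPSERT example from the PL/pgSQL docs, adapted to
these table names; it assumes a unique constraint on hometowns.name):

CREATE OR REPLACE FUNCTION select_hometown_id(hometown_name text)
RETURNS bigint AS $$
DECLARE
    v_id bigint;
BEGIN
    LOOP
        SELECT id INTO v_id FROM hometowns WHERE name = hometown_name;
        IF found THEN
            RETURN v_id;
        END IF;
        BEGIN
            INSERT INTO hometowns (name) VALUES (hometown_name)
            RETURNING id INTO v_id;
            RETURN v_id;
        EXCEPTION WHEN unique_violation THEN
            -- a concurrent session inserted the same name first:
            -- do nothing, loop back, and re-select its id
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;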
Good points. I guess my feeling is that if there can be a race condition on
INSERT then the CTE version is not truly atomic, hence the LOOP.
On Tue, Jan 13, 2015 at 3:11 PM, Brian Dunavant wrote:
> A very good point, but it does not apply as here (and in my article)
> we are not using updates, o
switching you are buying
> yourself much (if anything) by using a CTE query instead of something
> more traditional here.
>
> The advantages of switching to a CTE would be if this code was all
> being done inside of the app code with multiple queries.
>
> On Tue, Jan 13, 2015
Well, traditionally I would create a LOOP where I tried the SELECT; if
there was nothing, I did the INSERT, and if that raised an exception I would
repeat the LOOP.
What's the best way to do it with the CTE? Currently I have the following
which gives me Duplicate Key Exceptions when two sessions try to
This CTE approach doesn't appear to play well with multiple concurrent
transactions/connections.
On Tue, Jan 13, 2015 at 10:05 AM, John McKown
wrote:
> On Tue, Jan 13, 2015 at 11:45 AM, Robert DiFalco wrote:
>
>> Thanks John. I've been seeing a lot of examples like
WHERE NOT EXISTS (SELECT 1 FROM sel)
RETURNING id
)
INSERT INTO users(name, hometown_id)
VALUES ('Robert', (SELECT id FROM ins UNION ALL SELECT id FROM sel));
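(Note the scalar subquery in VALUES must be parenthesized to be valid syntax.)
Even then, this CTE is not atomic: two sessions can both find sel empty and
both attempt the insert, and one gets a unique_violation, which is the same
race discussed elsewhere in the thread. For readers on 9.5 or later (which did
not exist when this was written), INSERT ... ON CONFLICT closes the race; a
sketch assuming a unique constraint on hometowns.name:

WITH ht AS (
    INSERT INTO hometowns (name) VALUES ('Portland, OR')
    ON CONFLICT (name) DO UPDATE SET name = EXCLUDED.name
    RETURNING id
)
INSERT INTO users (name, hometown_id)
SELECT 'Robert', id FROM ht;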
On Tue, Jan 13, 2015 at 8:50 AM, John McKown
wrote:
> On Tue, Jan 13, 2015 at 10:42 AM, Robert DiFalco wro
users are created. For example, a
client may submit {"name":"Robert", "hometown":"Portland"}.
The hometowns table will never be updated, only either queried or inserted
into. So given this I need to INSERT a row into "users" and either SELECT the
homet
cost approach than what I was already doing.
On Thu, Dec 18, 2014 at 2:07 PM, David Johnston
wrote:
>
> On Thu, Dec 18, 2014 at 3:02 PM, Robert DiFalco
> wrote:
>
>> Is the intersect any better than what I originally showed? On the ROW
>> approach, I'm not sure
David G Johnston <
david.g.johns...@gmail.com> wrote:
>
> Robert DiFalco wrote
> > For 2 arbitrary ids, I need a query to get two pieces of data:
> >* Are the two users friends?
>
> This seems easy...ROW(u_id, f_id) = ROW(n1, n2)
>
>
> >* How many f
I have a table called friends with a user_id and a friend_id (both of these
relate to an id in a users table).
For each friend relationship there are two rows. There are currently ONLY
reciprocal relationships. So if user ids 1 and 2 are friends there will be
two rows (1,2) and (2,1).
For 2 arbit
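Guessing at the truncated second question, a sketch for two arbitrary ids
(1 and 2 here as placeholders); since every relationship is stored
reciprocally, checking one direction suffices:

-- are they friends?
SELECT EXISTS (
    SELECT 1 FROM friends WHERE user_id = 1 AND friend_id = 2
) AS are_friends;

-- how many friends do they have in common?
SELECT count(*) AS mutual_friends
FROM friends a
JOIN friends b ON b.friend_id = a.friend_id
WHERE a.user_id = 1
  AND b.user_id = 2;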
OUTER JOINs vs EXISTS queries and
if there was a better alternative I had not considered.
On Tue, Dec 9, 2014 at 11:44 AM, Arthur Silva wrote:
> On Tue, Dec 9, 2014 at 4:18 PM, Robert DiFalco
> wrote:
>
>> I'm sorry, I missed a JOIN on the second variation. It is:
>>
>