On Wed, 13 Feb 2019, Brent Wood wrote:
I have not really followed this thread but would not a query along the lines of
select * from activity where person_id = n and timestamp = (select
max(timestamp) from activity where person_id = n);
give the required answer, i.e., always return the latest result?
On 2/13/19 9:17 AM, Joe Conway wrote:
> On 2/13/19 3:32 AM, Niels Jespersen wrote:
>> Dive into the source for pgaudit and try to understand how it works,
>> perhaps getting a small example extension working along the way.
>> Perhaps contacting the authors of pgaudit to have them confirm that
>> there exist working implementations of pgaudit
On Wed, 13 Feb 2019, Adrian Klaver wrote:
Given a sufficiently large date range that may not be true, as you may have
contacted a given person multiple times during that range and generated
multiple activities records.
Adrian,
This is true, as it has occurred. I want only the most recent activity.
On 2/13/19 2:24 PM, Rich Shepard wrote:
On Wed, 13 Feb 2019, Andrew Gierth wrote:
Adrian> Close to your last posted query. person_id 2 and 3 have NULL
Adrian> values for activities data, as there is no record for 2, and 3 is
Adrian> out of the date range:
DISTINCT ON with no matching ORDER BY
I’m a bit stuck with a few LOG messages. The one in the subject disturbs me
the most.
(FreeBSD 11.2, PostgreSQL 10.6)
When I restart the hot standby, sometimes I see
2019-02-13 21:17:31 CET LOG: redo starts at 0/488A2748
2019-02-13 21:17:31 CET LOG: invalid record length at 0/488A2780: wanted
On Wed, 13 Feb 2019, Andrew Gierth wrote:
Adrian> Close to your last posted query. person_id 2 and 3 have NULL
Adrian> values for activities data, as there is no record for 2, and 3 is
Adrian> out of the date range:
DISTINCT ON with no matching ORDER BY at the _same_ query level is
non-deterministic
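[A minimal sketch of the point Andrew is making, not part of the original message: DISTINCT ON picks a predictable row only when the ORDER BY at the same query level leads with the DISTINCT ON expressions. The table and column names (activities, person_id, next_contact) come from elsewhere in this thread; everything else is illustrative.]

  -- One row per person: the activities row with the latest next_contact.
  -- ORDER BY must start with the DISTINCT ON expression, then the column
  -- that defines which row "wins".
  SELECT DISTINCT ON (person_id) *
  FROM activities
  ORDER BY person_id, next_contact DESC;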
Ah. I didn't realize PostgREST was an actual thing, rather than just a typo. An
fkey to a foreign table is not supported.
Related-
https://dba.stackexchange.com/questions/138591/foreign-key-references-constraint-on-postgresql-foreign-data-wrapper
Michael Lewis | Software Engineer
Entrata
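[Not part of the original message: a hedged illustration of the statement above that an fkey to a foreign table is not supported. The table and column names (contractline, contract, contract_id, id) are guesses based on this thread, and "contract" is assumed to be a foreign table imported into db1 via postgres_fdw.]

  -- PostgreSQL is expected to reject this: a FOREIGN KEY constraint may only
  -- reference a plain (or partitioned) table, never a foreign table.
  ALTER TABLE contractline
      ADD CONSTRAINT contractline_contract_fkey
      FOREIGN KEY (contract_id) REFERENCES contract (id);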
On 2/13/19 1:36 PM, Michael Lewis wrote:
You don't need an fkey to write a select statement with a join. I think
I must be missing something. Are you wanting it to enforce integrity
Me too, until I looked at this:
http://postgrest.org/en/v5.2/api.html#resource-embedding
"PostgREST can also det
You don't need an fkey to write a select statement with a join. I think I
must be missing something. Are you wanting it to enforce integrity across
the dblink? Or are you adding an fkey with the assumption that you will get
an index?
Michael Lewis | Software Engineer
Entrata
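[A rough sketch of what Michael means by joining without an fkey, not taken from the original messages. It assumes "contract" has been imported into db1 as a foreign table and that "contractline" carries a contract_id column; those column names are guesses.]

  -- No foreign key constraint (and no fkey-backed index) is needed to join a
  -- local table against a foreign table; the join condition alone is enough.
  SELECT cl.*, c.*
  FROM contractline AS cl
  JOIN contract AS c              -- foreign table served by postgres_fdw
    ON c.id = cl.contract_id;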
On 2/13/19 1:04 PM, adrien ruffie wrote:
My colleague and I have a tricky problem.
We have two databases, db1 and db2, linked by a foreign data wrapper
connection:
one table "contractline" in db1 and one table "contract" in db2.
We use PostgREST in order to query db2 via curl.
But we
My colleague and I have a tricky problem.
We have two databases, db1 and db2, linked by a foreign data wrapper connection:
one table "contractline" in db1 and one table "contract" in db2.
We use PostgREST in order to query db2 via curl.
But we want to add a link between these two tables.
In db2 we have a fore
> "Adrian" == Adrian Klaver writes:
Adrian> Close to your last posted query. person_id 2 and 3 have NULL
Adrian> values for activities data as there is no record for 2 and 3 is
Adrian> out of the date range.:
Adrian> select
Adrian>   p.person_id,
Adrian>   p.desc_fld,
Adrian>   a.
On 2/13/19 7:37 AM, Rich Shepard wrote:
On Wed, 13 Feb 2019, Adrian Klaver wrote:
The LEFT JOIN. There are rows in people for which there are no records coming
from the sub-select on activities, so the row is 'padded' with NULL values
for the missing data.
Adrian,
I assume it's the inner left join.
On Wed, 13 Feb 2019, Adrian Klaver wrote:
It will work if you use it to filter after the joins are done.
I'll work on finding the proper syntax to do this. Need to do more reading
and trial-and-error testing.
Regards,
Rich
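[For readers following along, a hedged sketch of "filter after the joins are done", not part of the original messages. The table and column names (people, activities, person_id, next_contact) come from this thread; the date condition is purely illustrative.]

  -- A condition in the LEFT JOIN's ON clause only restricts which activities
  -- rows may match; every person still comes back, padded with NULLs:
  SELECT p.person_id, a.next_contact
  FROM people AS p
  LEFT JOIN activities AS a
         ON a.person_id = p.person_id
        AND a.next_contact >= current_date;

  -- The same condition placed in WHERE is applied after the join is done,
  -- so the NULL-padded people drop out as well:
  SELECT p.person_id, a.next_contact
  FROM people AS p
  LEFT JOIN activities AS a ON a.person_id = p.person_id
  WHERE a.next_contact >= current_date;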
I didn't see your email yesterday, sorry about that. Index scans instead of
sequential scans and nested loops instead of hash joins mean that you have
bad row count estimates on the "Non prod Aurora RDS instance" as far as I can
figure. Have you run commands like-
analyze asset_info_2019_2_part4;
analy
On 12 February 2019 17:20:09 CET, Vikas Sharma wrote:
>Hello All,
>
>I have a 4 node PostgreSQL 9.6 cluster with streaming replication. We
>encountered the Out of Memory error on the master today, which resulted
>in all postgres processes restarting; the cluster recovered itself.
>Please let me kno
On Wed, Feb 13, 2019, at 9:06 AM, Rich Shepard wrote:
> On Wed, 13 Feb 2019, Adrian Klaver wrote:
>
> > Should have been clearer in my previous post, you can get rid of the nulls
> > by filtering out the entire row.
>
> Adrian,
>
> Thank you. I'm rebuilding the statement from the inside out (which helps me
> learn more SQL in the process).
On Wed, 13 Feb 2019, Adrian Klaver wrote:
Should have been clearer in my previous post, you can get rid of the nulls
by filtering out the entire row.
Adrian,
Thank you. I'm rebuilding the statement from the inside out (which helps me
learn more SQL in the process). For example,
select * from
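[The statement is cut off in the archive. As a hedged illustration of the inside-out approach plus Adrian's suggestion to filter out the whole row, assuming the table and column names used elsewhere in the thread (people, activities, person_id, lname, fname, next_contact); this is not Rich's actual query.]

  -- Step 1: test the inner query on its own.
  SELECT person_id, max(next_contact) AS next_contact
  FROM activities
  GROUP BY person_id;

  -- Step 2: wrap it, join to people, and drop the NULL-padded rows.
  SELECT p.person_id, p.lname, p.fname, a.next_contact
  FROM people AS p
  LEFT JOIN (SELECT person_id, max(next_contact) AS next_contact
             FROM activities
             GROUP BY person_id) AS a
         ON a.person_id = p.person_id
  WHERE a.next_contact IS NOT NULL;   -- filters out the entire padded row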
On Wed, 13 Feb 2019, Adrian Klaver wrote:
AFAIK there is no inner left join:
https://www.postgresql.org/docs/10/sql-select.html#SQL-FROM
Sigh. There are two LEFT JOINs in the statement. I referred to the second
one as INNER. I'll correct my message to refer to the second of the two left
joins.
Rich
On Tue, Feb 12, 2019 at 12:55 PM github kran wrote:
>
>
> On Tue, Feb 12, 2019 at 12:33 PM Michael Lewis wrote:
>
>> Did you update the stats by running ANALYZE on the tables involved, or
>> perhaps the entire database on the 'Non prod Aurora RDS instance'? Can you
>> share the two execution plans
On Wed, 13 Feb 2019, Adrian Klaver wrote:
The LEFT JOIN. There are rows in people for which there are no records coming
from the sub-select on activities, so the row is 'padded' with NULL values
for the missing data.
Adrian,
I assume it's the inner left join. I'll trace what's happening at each step
On 2/13/19 6:28 AM, Rich Shepard wrote:
On Tue, 12 Feb 2019, Ken Tanzer wrote:
If that's getting you what you want, then great and more power to you. It
looks like you'll only get people who have a next_contact in your target
window there. You might also consider something like this...
select
On Tue, 12 Feb 2019, Ken Tanzer wrote:
If that's getting you what you want, then great and more power to you. It
looks like you'll only get people who have a next_contact in your target
window there. You might also consider something like this...
select
p.person_id,
p.lname,
p.fname,
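[Ken's example is cut off in the archive. A hedged reconstruction of the general shape he seems to be suggesting, so that people without a next_contact in the window still appear; the date window and any names beyond people, activities, person_id, lname, fname, and next_contact are guesses, and this is not necessarily what Ken actually wrote.]

  -- Keep every person; show next_contact only when it falls in the target
  -- window, otherwise NULL.
  SELECT p.person_id,
         p.lname,
         p.fname,
         a.next_contact
  FROM people AS p
  LEFT JOIN activities AS a
         ON a.person_id = p.person_id
        AND a.next_contact BETWEEN '2019-02-01' AND '2019-02-28'
  ORDER BY p.lname, p.fname;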
On 2/13/19 3:32 AM, Niels Jespersen wrote:
> Dive into the source for pgaudit and try to understand how it works,
> perhaps getting a small example extension working along the way.
> Perhaps contacting the authors of pgaudit to have them confirm that
> there exist working implementations of pgaudit
Sorry, it's a missing part of the CTE;
those constants should be at the beginning of it.
with feriados as (
  SELECT dia, repete
  FROM (
    VALUES ('2014-10-11'::DATE, FALSE), ('2014-10-13', FALSE),
           ('2014-10-14', FALSE), ('2014-10-15', FALSE), ('2014-10-16', FALSE),
           ('2014-10-17', FALSE
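[The fragment above is cut off in the archive. For context only, a small self-contained example of the same FROM VALUES CTE pattern; the two dates are placeholders, not the missing part of Marcos's query.]

  WITH feriados (dia, repete) AS (
      VALUES ('2014-10-11'::date, FALSE),
             ('2014-10-13'::date, FALSE)
  )
  -- The CTE behaves like a throwaway table, so no real table is needed.
  SELECT dia
  FROM feriados
  WHERE NOT repete;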
On Wed, 2019-02-13 at 05:32 -0700, PegoraroF10 wrote:
> It is a SQL query using FROM VALUES, why do you need tables?
>
>
I can see that, however:
ERROR: relation "col_diasaula" does not exist
Position: 7477
[SQL State: 42P01]
Is that a table, view or a missing CTE?
On Wed, 13 Feb 2019, Andrew Gierth wrote:
Rich> Will probably try DISTINCT ON, too, if that makes it simpler or
Rich> faster.
You want LATERAL.
Andrew,
That's new to me so I'll read about it.
Thanks,
Rich
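[Since LATERAL is new to Rich, a minimal sketch of the pattern Andrew is pointing at, not part of the original messages. It uses the table and column names that appear elsewhere in the thread (people, activities, person_id, lname, fname, next_contact); anything beyond those is a guess.]

  -- For each person, the inner query runs once and returns only that
  -- person's single most recent activities row.  LEFT JOIN ... ON true
  -- keeps people who have no activities at all.
  SELECT p.person_id, p.lname, p.fname, a.next_contact
  FROM people AS p
  LEFT JOIN LATERAL (
      SELECT act.next_contact
      FROM activities AS act
      WHERE act.person_id = p.person_id
      ORDER BY act.next_contact DESC
      LIMIT 1
  ) AS a ON true;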
Thank you Adrian for the reply. I did check the postgres processes running
around the time when the OOM was invoked; there were lots of high-CPU-consuming
postgres processes running long-running selects.
I am not sure how to interpret the memory terms appearing in Linux
dmesg or /var/log/messages b
It is a SQL query using FROM VALUES, why do you need tables?
Hello Marcos,
On Wed, 2019-02-13 at 04:41 -0700, PegoraroF10 wrote:
> Here is a SQL query which will get into that recovery mode. You can run it on
> any database because we created it with FROM VALUES, so ...
> But this one is the one which grows and grows memory use until all
> memory and swap space are gone
Here is a SQL query which will get into that recovery mode. You can run it on any
database because we created it with FROM VALUES, so ...
But this one is the one which grows and grows memory use until all memory
and swap space are gone, and then the problem occurs. That other SQL which gives us
the same problem but
On Tuesday, 12 February 2019 at 23:23:49 CET, Rich Shepard wrote:
> The query is to return the latest next_contact date for each person. Using
> the max() aggregate function and modeling the example of lo_temp on page 13
> of the rel. 10 manual I wrote this statement:
You don't even need a subselect
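[The reply is cut off in the archive. As a hedged illustration only: one subselect-free way to get the latest next_contact per person, assuming an activities table with person_id and next_contact as used elsewhere in the thread; this may not be what the poster actually suggested.]

  -- One row per person with their latest next_contact; no subselect needed.
  SELECT person_id, max(next_contact) AS latest_next_contact
  FROM activities
  GROUP BY person_id;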
Hi,
We just wondered: when a huge table is loaded into the buffer cache it goes
through the ring buffer so as not to pollute the cache. The same is apparently
not true for indexes, as many more index blocks end up cached.
-- Restarted the instance
pgbench=# explain (analyze,buffers) select count(*) from pgbench_accounts
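[Not part of the original message: one common way to compare how many blocks of the table versus its indexes are actually sitting in shared buffers is the pg_buffercache extension, roughly along these lines.]

  CREATE EXTENSION IF NOT EXISTS pg_buffercache;

  -- Count cached buffers per relation in the current database.
  SELECT c.relname,
         count(*) AS buffers
  FROM pg_buffercache b
  JOIN pg_class c
    ON b.relfilenode = pg_relation_filenode(c.oid)
  WHERE b.reldatabase = (SELECT oid FROM pg_database
                         WHERE datname = current_database())
  GROUP BY c.relname
  ORDER BY buffers DESC
  LIMIT 10;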
Hi Tom (and others)
Thank you for the input. I think my next steps will be approaching the problem from
different angles:
Implement on Linux as Proof of Concept.
Dive into the source for pgaudit and try to understand how it works, perhaps
getting a small example extension working along the way.