auses inside the CTE to get decent performance on
large tables - mostly for my interactive queries.
David J.
--
View this message in context:
http://postgresql.1045698.n5.nabble.com/Problem-with-left-join-when-moving-a-column-to-another-table-tp5760187p5760255.html
Sent from the PostgreSQL -
objects. You just need to get "@admin_company" and
"admin@admin_company" into the database (via deferred constraint resolution
or before constraints are added) and you are good to go.
David J.
--
View this message in context:
http://postgresql.1045698.n5.nabble.com
Chris Angelico wrote
> On Fri, Jun 21, 2013 at 7:36 AM, David Johnston <
> polobo@
> > wrote:
>> SELECT input
>> FROM ( SELECT unnest($1) AS input ) src
>> WHERE input IS NOT NULL AND input <> ''
>> LIM
Jashaswee wrote
> i have tried in that way but its showing that the debit column doesn't
> exist
So show us what exactly it is that you tried and maybe someone can tell you
what is wrong.
David J.
--
View this message in context:
http://postgresql.1045698.n5.nabble.com/postg
ze 1.
Depending on whether you allow an empty array you might want to:
CREATE FUNCTION my_fn(required_first varchar, VARIADIC optional_others
varchar[] DEFAULT '{}'::varchar[]) ...
Then join the two values together and move on to processing.
You would still need separate function
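As a sketch of how such a function could then be called (the argument values are invented, and a body is assumed to have been filled in):

    SELECT my_fn('first');                                    -- optional_others falls back to the '{}' default
    SELECT my_fn('first', 'second', 'third');                 -- optional_others becomes '{second,third}'
    SELECT my_fn('first', VARIADIC ARRAY['second','third']);  -- or pass an existing array explicitly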
David Johnston wrote
>
> Tom Lane-2 wrote
>>
>>> -- A stored procedure which can accept two argument, which can be a
>>> single
>>> integer field, or an array of integers.
>>
>> Those two cases seem unlikely to be supportable by the same
In my development environment, I am using the auto_explain module to help debug
queries the developers complain about being slow. I am also using the
oracle_fdw to perform queries against some oracle servers. These queries are
generally very slow and the application allows them to be. The troubl
>-Original Message-
>From: Albe Laurenz [mailto:laurenz.a...@wien.gv.at]
>Sent: Friday, June 28, 2013 4:05 AM
>To: David Greco; pgsql-general@postgresql.org
>Subject: RE: auto_explain & FDW
>David Greco wrote:
>> In my development environment, I am usin
Came across an interesting situation as part of our Oracle to PostgreSQL
migration. In Oracle, it appears that immediate constraints are checked after
the entire statement is run, including any AFTER ROW triggers. In Postgres,
they are applied before the AFTER ROW triggers. In some of our AFTER
From: Vick Khera [mailto:vi...@khera.org]
Sent: Friday, June 28, 2013 9:35 AM
To: David Greco
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] AFTER triggers and constraints
On Fri, Jun 28, 2013 at 8:45 AM, David Greco
mailto:david_gr...@harte-hanks.com>> wrote:
The last delete sta
-Original Message-
From: Tom Lane [mailto:t...@sss.pgh.pa.us]
Sent: Friday, June 28, 2013 10:10 AM
To: David Greco
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] AFTER triggers and constraints
David Greco writes:
> Since the trigger is defined as AFTER ROW, versus AF
ns system is lacking some features or the
resources to get people to properly use the features is insufficient.
I'm hoping that this is all a case of humor just being difficult to
communicate in text, since even if the OP is simply misunderstood, the
complaint is legitimate. I'd look deepe
ge the output columns the error
will tell me I forgot something instead of simply re-aliasing all of my
columns and then continuing as if nothing is wrong.
David J.
--
View this message in context:
http://postgresql.1045698.n5.nabble.com/incomplete-CTE-declaration-and-column-reference-x-is-ambi
have to be spooled before the commit could finish.
If you provide more of the how/why of what you are doing people may be able
to suggest alternative solutions - though 3 seconds for a 10-million row
cursor/temporary-table does not seem that hideous.
David J.
--
View this message in context:
using STRICT without
RETURNING in similar situations to easily define when only one (and only
one) record is expected to be affected. This thought falls into the same
usability category.
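For reference, a minimal sketch of the existing RETURNING ... INTO STRICT
behavior being alluded to (the accounts table and values are made up):

    DO $$
    DECLARE
        new_balance numeric;
    BEGIN
        UPDATE accounts SET balance = balance - 100
        WHERE id = 42
        RETURNING balance INTO STRICT new_balance;  -- errors unless exactly one row was affected
        RAISE NOTICE 'new balance: %', new_balance;
    END
    $$;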
Thoughts?
David J.
--
View this message in context:
http://postgresql.1045698.n5.nabble.com/Feature-Idea-
ormance probably 98%+ of the time.
Yes, large offsets can be problematic but reverse ordering can help AND in
most use cases the frequency of high page numbers compared to lower ones is
significantly less.
David J.
--
View this message in context:
http://postgresql.1045698.n5.nabble.com/H
cost=0.00..8.38
rows=1 width=27)
Index Cond: (id = fd.typeid)
SubPlan 2
-> Index Scan using my_table_2_pk on my_table_2 o (cost=0.00..8.38
rows=1 width=23)
Index Cond: (id = s3.id)
SubPlan 3
-> Nested Loop (cost=0.00..14.66 rows=1 width=23)
-> Index Scan using my_table_1_pk on my_table_1 fd (cost=0.00..6.27
rows=1 width=4)
Index Cond: (id = s6.my_table_1_id)
-> Index Scan using my_table_2_pk on my_table_2 o (cost=0.00..8.38
rows=1 width=27)
Index Cond: (id = fd.typeid)
(120 rows)
Is there another way to do this query faster?
Thank you for the tips,
David Carpio
boraldomaster wrote
>
> David Johnston wrote
>> Your original examples only create the cursor and do not actually use it.
>> You should be comparing how long it takes both examples to fetch the
>> first 10 pages of records to get a meaningful comparison. It won't
t=0.00..6.27
rows=1 width=4)
Index Cond: (id = s2.my_table_1_id)
-> Index Scan using my_table_2_pk on my_table_2 o (cost=0.00..8.38
rows=1 width=27)
Index Cond: (id = fd.typeid)
SubPlan 2
-> Index Scan using my_table_2_pk on my_table_2 o (cost=0.00..8.38
rows=1 width=23)
Index Cond: (id = s3.id)
SubPlan 3
-> Nested Loop (cost=0.00..14.66 rows=1 width=23)
-> Index Scan using my_table_1_pk on my_table_1 fd (cost=0.00..6.27
rows=1 width=4)
Index Cond: (id = s6.my_table_1_id)
-> Index Scan using my_table_2_pk on my_table_2 o (cost=0.00..8.38
rows=1 width=27)
Index Cond: (id = fd.typeid)
(120 rows)
Is there another way to do this query faster?
Thank you for the tips,
David Carpio
On Thu 04 Jul 2013 08:10:45 PM PET, Michael Paquier wrote:
On Fri, Jul 5, 2013 at 10:04 AM, bricklen wrote:
On Thu, Jul 4, 2013 at 5:26 PM, David Carpio
wrote:
Also, can you supply the EXPLAIN (ANALYZE, BUFFERS) plan instead of the
simple EXPLAIN plan?
Then it might be interesting that you
David Carpio wrote
> Thank you for your time
You're not likely to get many, if any, takers who want to try and decipher
that mess you call a query/explain. Especially since you've made it pretty
much impossible to read by removing/obfuscating information. It is not
self-containe
use the "ALL"
form).
Combine various incantations of WITH/CTE expressions to compile whatever
final result you require.
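A rough sketch of that combination (tables a/b and column x are placeholders):

    WITH q1 AS (SELECT x FROM a),
         q2 AS (SELECT x FROM b),
         both_q AS (SELECT x FROM q1 INTERSECT ALL SELECT x FROM q2)
    SELECT count(*) FROM both_q;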
David J.
--
View this message in context:
http://postgresql.1045698.n5.nabble.com/Computing-count-of-intersection-of-two-queries-Relational-Algebra-SQL-tp57629
runs
> to completion with no errors.
VACUUM FULL fixes the problem for us by recouping all the wasted disk
space. I don't have the knowledge to investigate much further on my
own, but I'd be happy to try out a few things. The database is,
unfortunately, sensitive data that I can'
he
index. For me the "how" doesn't matter and it is working as expected.
Do you have some larger intent, beyond just understanding how it works, that
you have not made clear?
David J.
--
View this message in context:
http://postgresql.1045698.n5.nabble.com/Changing-the-function-used-in
...
to handle the second question.
PostgreSQL has added a "RETURNING" clause to INSERT/UPDATE/DELETE for this
very use-case.
Also, starting with 9.1, you can use these constructs within a CTE/WITH
clause (prior to 9.1 you could only use SELECT).
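Two quick sketches of what that looks like (the table and column names are invented):

    -- RETURNING hands the affected rows straight back to the caller
    INSERT INTO orders (customer_id, total)
    VALUES (42, 99.95)
    RETURNING id;

    -- Since 9.1, data-modifying statements can also live inside WITH
    WITH archived AS (
        DELETE FROM pending_orders
        WHERE created < now() - interval '30 days'
        RETURNING *
    )
    INSERT INTO archived_orders
    SELECT * FROM archived;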
HTH,
David J.
--
View this mess
ect.
I suspect that the actual value isn't terribly relevant, and the
how's and why's of why it is the way it is are best left for a different
discussion.
--
David N. Welton
http://www.dedasys.com/
On Fri, Jul 12, 2013 at 02:37:19PM -0700, ktewari1 wrote:
- Hi,
- I need to have some different settings(like NAMEDATALEN etc.) and
- that's why I'm trying to build postgres from the source and to create an rpm
- to be send for install.
-
- Now, the build works fine but, I don't see a way to
smells
like a bug of some sort.
On Mon, Jul 15, 2013 at 7:23 PM, Bradley McCune wrote:
> David,
>
> I'm sorry, but I'm not sure that I follow how this is pertinent to this
> particular thread. Are you proposing a way to replicate the scenario we
> experienced of our m
or what values it could take on.
I'm also curious if you realize that "EXECUTE" used this way has to be done
within a function. I assume you do because of the presence of the "tabname"
variable in your example but again you provide no actually executable code
so there is
On Wed, Jul 17, 2013 at 03:10:37PM +0800, Xiang Jun Wu wrote:
- Hello,
-
- I'd like to ask a common question about scale out for postgres.
-
- Our current data volume is about 500GB ~ 1TB in one pg cluster(postgres 9.2).
We've set up master/slave replication to keep sync.
- To reach better perf
tion and try things
and if you get stumped post a specific question about what is stumping you
(with as much detail as possible; but take time to organize and comment it)
to get past the block.
David J.
--
View this message in context:
http://postgresql.1045698.n5.nabble.com/dynamic-table-names
rks and the fact that columns can be created on-the-fly (i.e., not
belonging to any schema) this is not surprising. You could try running and
capturing the output of EXPLAIN with various options like JSON and VERBOSE
and store that - it depends on your use-case.
David J.
--
View this message in
uld not either.
The really tricky part is that I prefer to abbreviate the more commonly used
table prefixes (and the really long ones), so generally either interpolation
or verbosity is needed for any given ID, but it is a small price compared to
the sanity it provides.
David J.
--
View this message i
uot;bigint" to "integer" in order
to get the view to work:
SELECT myfunction(sum(foo)::integer, sum(bar)::integer);
and just hope the sums are small enough.
David J.
Note that by habit I use integer much too often but I haven't actually
explored the downsides to abolishing in
create type node as (r integer, s integer);
alter type node add attribute children node[];
end;
I'm running 9.0 so cannot readily test this at the moment.
David J.
--
View this message in context:
http://postgresql.1045698.n5.nabble.com/Self-referencing-composite-datatype-tp57666
uly an instance of this behavior
but I'm guessing you tried doing this because similar syntax has worked for
you before due to the behavior I've referenced above.
David J.
--
View this message in context:
http://postgresql.1045698.n5.nabble.com/Populating-array-of-composite-datatype-tp5
the fact the "node.*" will NOT give you the children; you must still
explicitly invoke the function somehow.
Other solutions are possible, but since we do not know the use case,
more meaningful and specific solutions are hard to envision or suggest. Chris'
solution is fairly generic i
at previously
imported and unchanged or does it only reflect changes from the last
production load? How and what kind of QC is going to be performed and will
you need to build interfaces to support those activities? Would you end up
QCing the same data time-after-time even if it was unchanged?
D
ectly
to your users) the degree of encumbering that the database places on the
application is generally minimal if any at all.
David J.
--
View this message in context:
http://postgresql.1045698.n5.nabble.com/PostrgreSQL-Commercial-restrictions-tp576p5766674.html
Sent from the Postg
ddr, xact_start, query_start
FROM pg_stat_activity
ORDER BY xact_start ASC, client_addr;
to what activity is currently in-progress.
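The preview clips the start of that query; filled out it would be something
along these lines (the extra columns are additions, using the 9.2+ column names):

    SELECT pid, client_addr, state, xact_start, query_start, query
    FROM pg_stat_activity
    ORDER BY xact_start ASC, client_addr;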
David J.
--
View this message in context:
http://postgresql.1045698.n5.nabble.com/DB-transactions-when-browser-freezes-tp5766824p5766829.html
Sent f
Hi,
I am working on a plpgsql function that is not acting as I would hope.
I am working with the Postgres 9.3 beta load and would like to
solicit some feedback.
Looking at the outermost FOR loop of the function below, if I run this CTE
query from the psql command line
I am returned what I exp
Basavaraj wrote
> Now i want to insert data to the table
> the format should be
>
> id | marks
> --
> 1 |50
> 1 |30
> 1 |30
>
>
> 2 |...
>
>
> the actual query is
>
> insert into table(id,marks) values(1,unnest(array[marks]))
>
>
> But we should not use array an
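The preview cuts off there, but if arrays are off the table, a plain multi-row
VALUES list produces the same rows (the table name is a placeholder):

    INSERT INTO marks_table (id, marks) VALUES
        (1, 50),
        (1, 30),
        (1, 30);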
rs -
END LOOP; -- drow
END LOOP; -- drow
END
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
select admin.activate_translate_user1(1);
-Original Message-
From: Tom Lane [mailto:t...@sss.pgh.pa.us]
Sent: Friday, August 09, 2013 10:20 AM
To: Day, David
Cc
wsrow."pattern_match", wsrow."screen_class",
wsrow."term_mode",
wsrow."trans_result", wsrow."port_id", wsrow."tag_id",
wsrow."cause__q850", wsrow."cause__redcom",
wsrow.
, 2013 6:14 PM
To: Day, David
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] plpgsql FOR LOOP CTE problem ?
"Day, David" writes:
> A complete self contained test case: example of the problem with my FOR LOOP
> using a COMMON table expression.
Ah, I see the problem.
ns where t2 has row numbers not existing in t1.
What you are doing, by the example given, is wrong. Proposing an
alternative is impossible since you have not explained WHY you feel you need
to do this or WHAT you are actually trying to accomplish.
David J.
--
View this message in c
Hello,
Is there a query out there where I can get a list of permissions associated
with a schema?
Something like the below query that I can do for a table, but for a schema
instead?
SELECT grantee, privilege_type
FROM infor
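One possible way to get schema-level privileges (USAGE/CREATE), since they are
not exposed quite like table grants; the schema name is just an example:

    SELECT n.nspname AS schema,
           r.rolname  AS grantee,
           p.privilege_type
    FROM pg_namespace n
    CROSS JOIN pg_roles r
    CROSS JOIN (VALUES ('USAGE'), ('CREATE')) AS p (privilege_type)
    WHERE n.nspname = 'public'
      AND has_schema_privilege(r.rolname, n.nspname, p.privilege_type);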
f
"t" you can simply explode the "SELECT * FROM r, (x+y)" into an explicit
column-list that has the same ordering as "t". There are limitations,
mainly as they relate to default values, but for a table "t" with a large number
of columns it can be conside
You have to write three queries (select statements) and then "UNION ALL"
them together. Each sub-query has a where clause matching the range you
wish to aggregate for.
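Something along these lines (table, column, and boundaries are invented):

    SELECT '0-25'    AS bucket, count(*) FROM scores WHERE val BETWEEN 0 AND 25
    UNION ALL
    SELECT '26-50',  count(*) FROM scores WHERE val BETWEEN 26 AND 50
    UNION ALL
    SELECT 'over 50', count(*) FROM scores WHERE val > 50;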
David J.
the value then add that
category to your group-by.
Both are equally valid and the second one is probably easier to comprehend;
the first option just happened to occur to me first. I have no idea which
one would perform better, either in theory or specifically with your data.
David J.
--
Vie
equal intervals.
Another option, though I am unfamiliar with the exact syntax, is to use the
contains operator and an "intrange" (range type, for integer or whatever
actual type is needed).
SELECT CASE WHEN val @> '(0, 25]' -- '(25,50]', etc...
This allows f
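For the record, the built-in integer range type is spelled int4range, and it
is the range that contains the value rather than the other way around; a
sketch with invented table/column names:

    SELECT CASE
             WHEN val <@ '(0,25]'::int4range  THEN '1-25'
             WHEN val <@ '(25,50]'::int4range THEN '26-50'
             ELSE 'over 50'
           END AS bucket,
           count(*)
    FROM scores
    GROUP BY 1;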
little more verbose so the other options, if available, are preferred
from a readability standpoint. I do not know whether the different options
may have different performance characteristics.
David J.
--
View this message in context:
http://postgresql.1045698.n5.nabble.com/how-to-use-aggre
David Johnston wrote
>
> Janek Sendrowski wrote
>> Hi,
>>
>>
>>
>> thanks for all your answers.
>>
>> I'll have a try with the contains operator and the intrange, but
>> before I'd like to know if this would work:
>>
months and possible recognized abbreviations as well.
^.*\m(June|July|August|September)\M[, a-zA-Z0-9]+
I'd consider helping more with forming an actual expression but a single
input sample with zero context on how such a string is created gives little
to work with.
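Purely for illustration — the input string below is invented, since no real
sample was given:

    SELECT 'Invoice issued July 4, 2013 by ACME'
           ~ '\m(June|July|August|September)\M[, a-zA-Z0-9]+' AS matches;

    SELECT substring('Invoice issued July 4, 2013 by ACME'
                     from '\m(June|July|August|September)\M') AS month;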
Though after the month ther
Hi All,
We are using pgpool 3.6.1 with two nodes in the cluster. We are seeing
frequent occurrences where one node will be marked as offline.
When this occurs we are seeing the following statements in our logs.
2017-02-09 09:45:38: pid 12125: WARNING: write on backend 0 failed with
error :"Suc
functions.
> >
> > In 9.6:
> >
>
> That's why in 9.1 I didn't find that command...
You can use the 9.6 client without problems on 9.1, well, apart from
the fact that 9.1 is already past its end of life.
Best,
David.
--
David Fetter http://fetter.org/
Phone: +1 415
g
wildly off. ALTER TABLE set n_distinct doesn't seem to be used by the
planner as it doesn't change any of the plans I've generated or seem
to be taken into account in the row estimates. I'm out of ideas.
Anybody have any ideas?
--
David Hinkle
Senior Software Developer
P
rsion
───
PostgreSQL 9.4.10 on x86_64-unknown-linux-gnu, compiled by gcc
(Ubuntu 4.8.2-19ubuntu1) 4.8.2, 64-bit
(1 row)
On Mon, Feb 13, 2017 at 1:26 PM, Jeff Janes wrote:
> On Mon, Feb 13, 2017 at 9:40 AM, David Hinkle wrote:
>>
>> I'm h
;::regclass) │ plain│ │
Indexes:
"titles_pkey" PRIMARY KEY, btree (titleid)
"titles_md5_title_idx" btree (md5(title::text))
Do you see anything in there that would be problematic?
On Mon, Feb 13, 2017 at 2:41 PM, Jeff Janes wrote:
> On Mon, Feb 13, 2017 at 11
How about using pg_isready?
https://www.postgresql.org/docs/current/static/app-pg-isready.html
--
David Rader
dav...@openscg.com
On Sun, Feb 12, 2017 at 12:23 PM, Nikolai Zhubr wrote:
> Hello all,
>
> In order to locate the problem more precisely, I'd like to prepare a test,
swap.
On Mon, Feb 13, 2017 at 3:21 PM, Jeff Janes wrote:
> On Mon, Feb 13, 2017 at 12:43 PM, David Hinkle
> wrote:
>>
>> Thanks Jeff,
>>
>> No triggers or foreign key constrains:
>>
>> psql:postgres@cipafilter = \d+ titles
>>
loops=1)
Buffers: shared hit=598 read=7324082 dirtied=34
Planning time: 0.072 ms
Execution time: 248807.285 ms
(9 rows)
On Mon, Feb 13, 2017 at 3:47 PM, David Hinkle wrote:
> psql:postgres@cipafilter = EXPLAIN (ANALYZE, BUFFERS) select titleid
> from titles WHERE NOT EXIST
0) (actual time=0.003..274920.970 rows=544670242
loops=1)
Buffers: shared hit=918 read=7323762
Planning time: 0.158 ms
Execution time: 727533.213 ms
(17 rows)
On Mon, Feb 13, 2017 at 3:57 PM, David Hinkle wrote:
> I managed to get this version to finish:
>
>
--
David Rader
dav...@openscg.com
On Tue, Feb 14, 2017 at 5:28 AM, Nikolai Zhubr wrote:
> 14.02.2017 12:47, John R Pierce:
>
>> On 2/13/2017 11:03 PM, Nikolai Zhubr wrote:
>>
>>> Now I'd like to locate a CPU eater more precisely - supposedly there
>>>
Hi All,
We're running two nodes using with replication enabled.
pgpool routinely (every day) performs a failover with the following
statements appearing the in log:
2017-02-15 13:16:01: pid 16190: WARNING: write on backend 1 failed with
error :"Success"
2017-02-15 13:16:01: pid 16190: DETAIL:
ller going wild with
>> 4.7/4.8/some 4.9 kernels.
>
> I guess what I'm trying to say is that it may actually not be
> PostgreSQL's fault but rather the kernel invoking the OOM
> killer way prematurely.
>
> Karsten
> --
> GPG key ID E4071346 @ eu.pool.sks-ke
Maybe also useful to know:
cipafilter=# select reltuples from pg_class where relname = 'log_raw';
reltuples
-----------
5.40531e+08
(1 row)
On Wed, Feb 15, 2017 at 7:55 PM, David Hinkle wrote:
> Thanks for your help!
>
> Karsten: The system does fill up swap before
;, 'urls');
select patch_ndistinct('log_raw', 'hostid', 'hosts');
ANALYZE log_raw;
On Thu, Feb 16, 2017 at 10:54 AM, Tom Lane wrote:
> David Hinkle writes:
>> Tom, there are three columns in this table that exhibit the problem,
>> h
th=10)
(6 rows)
I guess I will also try throwing in 'set enable_hashjoin = false;' and
see if that gets these purges to go.
On Thu, Feb 16, 2017 at 2:22 PM, David Hinkle wrote:
> Yep, 420ish million records out of 540 million records have a titleid
> of 1. There are about 880,000 other u
greSQL (pre
8.4) you may have also preferred to use GROUP BY over DISTINCT as
GROUP BY could be implemented internally by sorting or hashing the
results, whereas DISTINCT used to only be implemented by sorting the
results, although that has long since ceased to be the case.
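For example, these two return the same set of rows, and on anything recent
both can be planned with either a sort or a hash (orders/customer_id are
placeholder names):

    SELECT DISTINCT customer_id FROM orders;

    SELECT customer_id FROM orders GROUP BY customer_id;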
--
David Rowley
be cached.
You may also want to consider running the EXPLAIN (ANALYZE, BUFFERS)
after having SET track_io_timing = on;
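That is, something like the following (the table name is a stand-in; note
that track_io_timing normally requires superuser to change):

    SET track_io_timing = on;
    EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM big_table;  -- BUFFERS output then includes I/O timings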
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
On Sun, Apr 23, 2017 at 10:33 PM, Tatsuo Ishii wrote:
> > Andres Freund writes:
> >> On 2017-04-23 12:08:51 -0700, Konstantin Izmailov wrote:
> >>> Some systems (Presto) are still using protocol 2, and I need to
> understand
> >>> the scope of changes in the middleware to support protocol 2.
> >
there are windows
where they may not actually hold true to their word.
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
On 29 April 2017 at 11:37, David G. Johnston wrote:
>> > Perhaps there are reasons why this optimization is not safe that I
>> > haven't
>> > thought about?
>>
>> Yeah, I think so. What happens if an A row cannot find a match in B or
>> C? Thi
psql, you can execute the
above then execute \gexec which will execute the previous result set
as commands.
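A small illustration of the \gexec trick — the VACUUM ANALYZE statements built
here are just an example:

    SELECT format('VACUUM ANALYZE %I.%I', schemaname, tablename)
    FROM pg_tables
    WHERE schemaname = 'public' \gexec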
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
table_constraints row matching the join
condition.
If you can state what you want to achieve then I'm sure someone will help.
(Please, in the future, give your emails a suitable subject line)
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7
erencing or referenced in a
foreign key constraint.
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
'a' tablename from a
where id=1 union all select *,'b' tablename from b where id=1) ab
order by id,tablename;
Assuming that id is what you want to be unique.
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Serv
I have a table that includes two text columns t1 and t2, and a composite
index on these columns. When issuing a query of the following form:
SELECT * FROM test WHERE t1 = 'X' and t2 = ANY(ARRAY['Y1', 'Y2', ..])
I have observed that it will use the index and have reasonable performance
if the whol
efined as character(1) rather than text. I
can see Postgres has cast the first parameter to bpchar in the first case
but I guess Npgsql is explicitly sending it as type text, which bypasses
the index.
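If that is indeed what is happening, one possible workaround (a sketch only,
not tested against that schema) is to cast the parameter back to the column's
type on the SQL side so the comparison stays bpchar = bpchar and the index
qualifies:

    SELECT *
    FROM test
    WHERE t1 = $1::character(1)   -- force the driver-supplied text parameter back to the column type
      AND t2 = ANY ($2);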
On 16 May 2017 at 15:17, Adrian Klaver wrote:
> On 05/16/2017 06:01 AM, David Chapman wrote:
>
crypted, so the DB contents will likely be meaningless), if
we are missing any, it's not too bad, well, no less bad than whatever we
have now.
Thanks,
David
The OS it is running on shows:
cat /proc/version
Linux version 2.6.18-92.1.10.el5.xs5.0.0.39xen (root@pondo-2) (gcc
version 4.1
On 5/24/17 4:18 PM, Tom Lane wrote:
David Wall writes:
We have not noted any issues, but when I ran a pg_dump on an 8.3.3
database, it failed after an hour or so with the error:
8.3.3?
Yes, it's old.
cat /proc/version
Linux version 2.6.18-92.1.10.el5.xs5.0.0.39xen (root@pondo-2)
On 5/25/17 6:30 AM, Tom Lane wrote:
David Wall writes:
They do have a slave DB running via WAL shipping. Would that likely
help us in any way?
Have you tried taking a backup from the slave? It's possible that
the corruption exists only on the master.
We will give this a try onc
I'm running a Spark job that is writing to a postgres db (v9.6), using
the JDBC driver (v42.0.0), and running into a puzzling error:
2017-06-06 16:05:17.718 UTC [36661] dmx@dmx ERROR: deadlock detected
2017-06-06 16:05:17.718 UTC [36661] dmx@dmx DETAIL: Process 36661 waits
for ExclusiveLock o
On 06/07/2017 10:32 AM, Merlin Moncure wrote:
On Wed, Jun 7, 2017 at 9:16 AM, David Rosenstrauch wrote:
* How could it be possible that there are 2 PG processes trying to acquire
the same lock? Spark's partitioning should ensure that all updates to the
same user record get routed t
this being a bug.
The attached fixes.
(CC'd -hackers since we're starting to discuss code changes. Further
discussion which includes -hackers should drop the general list)
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
safe. Even if
it says it is, there are potential gotchas. For example, the backup may
not be consistent if you are using multiple volumes.
In addition, data loss on restore will be greater if there is no WAL
archive to play forward from.
--
-David
da...@pgmasters.net
.due_row_id
) c ON c.due_row_id = a.row_id;
SQL Server will probably be doing this rewrite.
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
]
https://www.postgresql.org/message-id/flat/CAKJS1f9FK_X_5HKcPcSeimy16Owe3EmPmmGsGWLcKkj_rW9s6A%40mail.gmail.com#cakjs1f9fk_x_5hkcpcseimy16owe3empmmgsgwlckkj_rw9...@mail.gmail.com
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
on? Any progress in btree-support?
Not exactly what you're asking, but perhaps
https://commitfest.postgresql.org/14/1001/ could improve your
workload, or perhaps you could just manually rewrite the query.
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development
mer_id) to
parameterise the nested loop, at least, it likely would, if you have
one.
It's pretty bad practice to have ORDER BY in views. I kinda wish we
didn't even allow it, but that ship sailed many years ago...
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL
On 10 October 2017 at 02:51, Tom Lane wrote:
> David Rowley writes:
>> It's pretty bad practice to have ORDER BY in views. I kinda wish we
>> didn't even allow it, but that ship sailed many years ago...
>
> I think it's actually disallowed by the SQL spec (al
On 10 October 2017 at 12:44, Tom Lane wrote:
> David Rowley writes:
>> If the only reason that is_simple_subquery() rejects subqueries with
>> ORDER BY is due to wanting to keep the order by of a view, then
>> couldn't we make is_simple_subquery() a bit smarter a
he WHERE to pass through ORDER BY.
>
> A special case can be allowed for WHERE to pass the ORDER BY if the column is
> part of DISTINCT ON.
Yeah, we do allow predicates to be pushed down in that case.
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24
that the planner was able to pull the
subquery (or view) up a level. When the planner is able to do this
it's much more flexible to the types of plans it can generate. It's
just that we don't ever pull up subqueries with DISTINCT ON, plus a
bunch of other reasons.
--
David Rowley
Hi everyone,
I have a question: is it possible to log the row count of a COPY command to
csv/syslog? I know that there are some limitations, like BEFORE INSERT
triggers. I don't know whether others would also be pleased by this feature.
Have a nice day.
David
--
-
Ing.
nect to the db as rakesh who is part of db_rw role.
This will not work because settings (GUCs) on a role are not inherited
by roles (or users) that are members of that role. This is a
characteristic of the roles system and not inherent to pgAudit.
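In other words (a sketch using the role names from above; the pgaudit.log
value is just an example):

    ALTER ROLE db_rw  SET pgaudit.log = 'write';  -- NOT picked up by members such as rakesh
    ALTER ROLE rakesh SET pgaudit.log = 'write';  -- applies when rakesh connects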
--
-David
da...@pgmasters.net
Hello,
I ran into what appears to be a deadlock in the logging subsystem. It
looks like what happened was that the syslogger process exited because it
ran out of memory. But before the postmaster got a chance to handle the
SIGCLD to restart it, it handled a SIGUSR1 to start an autovacuum worker.
On Mon, Nov 6, 2017 at 12:35 PM, Tom Lane wrote:
> David Pacheco writes:
> > ... that process appears to have exited due to a fatal error
> > (out of memory). (I know it exited because the process still exists in
> the
> > kernel -- it hasn't been reaped yet -- and
rish.
> weigh higher than you gain (in flexibility) in relational databases.
> But it sure has its uses cases.
Why, yes. I encourage all my competitors to use it. ;)
Cheers,
David.
--
David Fetter <[EMAIL PROTECTED]> http://fetter.org/
Phone: +1 415 235 3778 AIM: dfetter666 Yah