Hi all, I seem to be in a bit of a jam with the query planner choosing
a significantly less optimal route for a query.
First, a common checklist I've seen around when researching these
types of issues on the list:
1. pgsql version is PostgreSQL 9.4.7 running on Amazon RDS
2. VACUUM ANALYZE was run
3. [...]

[... any advice on tuning] RDS on this kind of
workload or what cost constant parameters would help this situation
would be extremely helpful.
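
To make that concrete, these are the sorts of cost constants I have
in mind (the values below are purely illustrative for SSD-backed
storage, and on RDS a persistent change would go through the DB
parameter group rather than a session-level SET):

    -- Random reads on SSD-backed storage cost only a little more
    -- than sequential ones, so lower random_page_cost from its
    -- default of 4.0 toward seq_page_cost.
    SET random_page_cost = 1.1;

    -- Tell the planner how much of the data is likely to be cached;
    -- a common rule of thumb is ~75% of instance memory.
    SET effective_cache_size = '12GB';

    -- Then re-plan the problem query under the new settings.
    EXPLAIN ANALYZE SELECT ...;   -- placeholder for the real query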
Thanks again,
Branden
On Sat, Jul 30, 2016 at 11:21 AM, Branden Visser wrote:
> Hi all, I seem to be in a bit of a jam with the query planner choosing
> a significantly less optimal route for a query. [...]
Thanks for your reply, Tom.
On Mon, Aug 1, 2016 at 6:56 PM, Tom Lane wrote:
> Branden Visser writes:
>> I just wanted to update that I've found evidence that fixing the
>> planner row estimation may not actually influence it to use the more
>> performant merge join in [...]
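
For anyone following along, the way I've been comparing the two plans
is the standard trick of toggling the planner's enable_* settings for
a single session (these GUCs are for testing, not production; the
SELECT is a placeholder for my real query):

    BEGIN;
    -- Penalize the join types the planner currently prefers so it
    -- falls back to the merge join, then compare actual runtimes.
    SET LOCAL enable_nestloop = off;
    SET LOCAL enable_hashjoin = off;
    EXPLAIN (ANALYZE, BUFFERS) SELECT ...;  -- placeholder query
    ROLLBACK;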
On Wed, Aug 17, 2016 at 4:58 AM, gilad905 wrote:
> Vik, note that your new suggestion for a query might be more
> readable/efficient than Jeff's, but IT STILL DOESN'T SOLVE THE PROBLEM OF
> DETECTING DUPLICATE ROWS.
>
I've done this in the past and in my use case it was easy enough to
export the [...]
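
In general, a query along these lines will surface the duplicates
(sketch only; the_table, col_a and col_b stand in for the real table
and candidate-key columns):

    -- Groups that occur more than once on the candidate key.
    SELECT col_a, col_b, count(*) AS copies
    FROM the_table
    GROUP BY col_a, col_b
    HAVING count(*) > 1;

    -- Or number the rows within each group; rows with rn > 1 are
    -- the extra copies (ctid works as a tiebreaker when there is
    -- no primary key).
    SELECT ctid, col_a, col_b,
           row_number() OVER (PARTITION BY col_a, col_b
                              ORDER BY ctid) AS rn
    FROM the_table;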
My first instinct would be to look into triggers. In addition to an FK
a(b_id) -> b(id), you could have an insert and update trigger on
a(b_id) and b(active) to ensure the additional constraints.
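
A rough sketch of the shape that could take (untested, and it assumes
the extra rule is that a.b_id may only reference rows of b where
active is true):

    -- Refuse to point a row of a at an inactive row of b.
    CREATE FUNCTION a_check_b_active() RETURNS trigger AS $$
    BEGIN
      IF NOT EXISTS (SELECT 1 FROM b WHERE id = NEW.b_id AND active)
      THEN
        RAISE EXCEPTION 'b row % is not active', NEW.b_id;
      END IF;
      RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER a_b_active_check
      BEFORE INSERT OR UPDATE OF b_id ON a
      FOR EACH ROW EXECUTE PROCEDURE a_check_b_active();

    -- A companion BEFORE UPDATE trigger on b(active) would refuse
    -- to set active = false while rows of a still reference it.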
On Thu, Aug 18, 2016 at 1:10 PM, Mark Lybarger wrote:
> I have two tables that I want to link with a [...]