Greetings,
Please don't top-post on these lists.
* Derek Hans (derek.h...@gmail.com) wrote:
> Thanks for the detailed response, super helpful in understanding what's
> happening, in particular understanding the risk of not marking functions as
> leakproof. I'll take a look at the underlying code
Thanks for the detailed response, super helpful in understanding what's
happening, in particular understanding the risk of not marking functions as
leakproof. I'll take a look at the underlying code to understand what's
involved in getting a function to be leakproof.
That said, it does seem like i
On Tue, 13 Aug 2019 16:38:44 +0530
Shital A wrote:
> On Tue, 13 Aug 2019, 11:50 Shital A, wrote:
>
> > On Mon, 12 Aug 2019, 18:54 Adrien Nayrat,
> > wrote:
> >
> >> On 8/12/19 2:57 PM, Shital A wrote:
> >> > Postgres version : 9.6
> >> > OS:Rhel 7.6
> >> >
> >> > We are working on HA setup
Greetings,
* Derek Hans (derek.h...@gmail.com) wrote:
> Unfortunately only "alter function" supports "leakproof" - "alter operator"
> does not. Is there a function-equivalent for marking operators as
> leakproof? Is there any documentation for which operators/functions are
> leakproof?
Tom's quer
Derek Hans writes:
> Unfortunately only "alter function" supports "leakproof" - "alter operator"
> does not. Is there a function-equivalent for marking operators as
> leakproof?
Leakproofness is a property of the underlying function, not the operator,
so that's where you change it.
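Since the operator delegates to a function, both the check and the change
happen on the pg_proc side. A sketch (requires superuser; `textlike` is the
function behind text ~~ text, but verify with the catalog query first, and
only mark a function LEAKPROOF after auditing that it really cannot leak its
argument values, e.g. through error messages):

```sql
-- Which function implements an operator, and is it currently leakproof?
SELECT o.oprname, p.proname, p.proleakproof
FROM pg_operator o
JOIN pg_proc p ON p.oid = o.oprcode
WHERE o.oprname = '~~';

-- Only after auditing the function:
ALTER FUNCTION pg_catalog.textlike(text, text) LEAKPROOF;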
> Is there an
Thanks for the pointer for marking functions as leakproof, I was unaware of
that whole concept.
Unfortunately only "alter function" supports "leakproof" - "alter operator"
does not. Is there a function-equivalent for marking operators as
leakproof? Is there any documentation for which operators/functions are
leakproof?
Derek Hans writes:
>> However, wild-guess time: it might be that without access to the
>> table statistics, the "search like '%yo'" condition is estimated
>> to be too unselective to make an indexscan profitable. And putting
>> RLS in the way would disable that access if the ~~ operator is not
>>
>
>
> Your example is obscuring the issue by incorporating a tenant_name
> condition (where did that come from, anyway?) in one case and not
> the other. Without knowing how selective that is, it's hard to
> compare the EXPLAIN results.
>
>
That's RLS kicking in - RLS condition is defined as
((tenant_name)::name = CURRENT_USER)
>
>
> What are the RLS policies on the table?
>
> From select * from pg_policies:
"((tenant_name)::name = CURRENT_USER)"
> What is the definition of the GIN index?
>
> CREATE INDEX search__gist
ON public.search USING gist
(search COLLATE pg_catalog."default" gist_trgm_ops)
TABLESPACE
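Restated from the truncated pg_policies output and index DDL above, the setup
appears to be the following (a reconstruction; tablespace and collation
clauses omitted, and the policy name is a guess):

```sql
CREATE EXTENSION IF NOT EXISTS pg_trgm;

CREATE INDEX search__gist ON public.search
    USING gist (search gist_trgm_ops);

ALTER TABLE public.search ENABLE ROW LEVEL SECURITY;
CREATE POLICY tenant_policy ON public.search
    USING ((tenant_name)::name = CURRENT_USER);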
On 8/13/19 11:57 AM, Derek Hans wrote:
When using row level security, GIN and GIST indexes appear to get
ignored. Is this expected behavior? Can I change the query to get
PostgreSQL using the index? For example, with RLS enabled, this query:
What are the RLS policies on the table?
What is the definition of the GIN index?
Derek Hans writes:
> When using row level security, GIN and GIST indexes appear to get ignored.
> Is this expected behavior? Can I change the query to get PostgreSQL using
> the index? For example, with RLS enabled, this query:
Your example is obscuring the issue by incorporating a tenant_name
condition (where did that come from, anyway?) in one case and not
the other. Without knowing how selective that is, it's hard to
compare the EXPLAIN results.
When using row level security, GIN and GIST indexes appear to get ignored.
Is this expected behavior? Can I change the query to get PostgreSQL using
the index? For example, with RLS enabled, this query:
select * from search where search like '%yo'
Creates this query plan:
"Seq Scan on search (co
On 8/13/19 10:34 AM, Daulat Ram wrote:
Hi,
We are using the ora2pg version below, and these are the data types for our tables.
bash-4.2$ ora2pg -v
Ora2Pg v20.0
bash-4.2$
SQL> SELECT distinct data_type FROM dba_tab_columns WHERE owner='GBOP';
DATA_TYPE
-
Hi,
We are using the ora2pg version below, and these are the data types for our tables.
bash-4.2$ ora2pg -v
Ora2Pg v20.0
bash-4.2$
SQL> SELECT distinct data_type FROM dba_tab_columns WHERE owner='GBOP';
DATA_TYPE
TIMESTAMP(6)
FLOA
On 08/13/2019 09:04 PM, rihad wrote:
On 08/13/2019 08:44 PM, rihad wrote:
On 08/13/2019 08:22 PM, Luca Ferrari wrote:
On Tue, Aug 13, 2019 at 5:59 PM rihad wrote:
[dbname] LOG: temporary file: path
"base/pgsql_tmp/pgsql_tmp93683.257381", size 594
The setting 'work_mem' is within context 'user', that means it will
affect running sessions unless the session itself has already issued a
SET work_mem to xxx.
On 08/13/2019 08:44 PM, rihad wrote:
On 08/13/2019 08:22 PM, Luca Ferrari wrote:
On Tue, Aug 13, 2019 at 5:59 PM rihad wrote:
[dbname] LOG: temporary file: path
"base/pgsql_tmp/pgsql_tmp93683.257381", size 594
The setting 'work_mem' is within context 'user', that means it will
affect running sessions unless the session itself has already issued a
SET work_mem to xxx.
On 8/13/19 11:04 AM, rihad wrote:
On 08/13/2019 08:44 PM, rihad wrote:
On 08/13/2019 08:22 PM, Luca Ferrari wrote:
On Tue, Aug 13, 2019 at 5:59 PM rihad wrote:
[dbname] LOG: temporary file: path
"base/pgsql_tmp/pgsql_tmp93683.257381", size 594
The setting 'work_mem' is within context 'user', that means it will
affect running sessions unless the session itself has already issued a
SET work_mem to xxx.
On 08/13/2019 08:22 PM, Luca Ferrari wrote:
On Tue, Aug 13, 2019 at 5:59 PM rihad wrote:
[dbname] LOG: temporary file: path
"base/pgsql_tmp/pgsql_tmp93683.257381", size 594
The setting 'work_mem' is within context 'user', that means it will
affect running sessions unless the session itself has already issued a
SET work_mem to xxx.
> All I have left is:
> select tab2 row_number () OVER (order by attr2, attr1 )AS id, attr1,
> attr2 into tab2 from tab1;
> That will not create a serial type in the id column though. You can
> attach a sequence to that column. Something like:
> 1) create sequence tab2_id start owned by tab2.id
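The pseudo-steps above, spelled out against the thread's table and column
names (a sketch; `tab2_id_seq` is a name chosen here, not from the thread):

```sql
-- Attach a sequence to an existing integer column so it behaves like serial.
CREATE SEQUENCE tab2_id_seq OWNED BY tab2.id;
ALTER TABLE tab2
    ALTER COLUMN id SET DEFAULT nextval('tab2_id_seq');
-- Continue numbering after the rows already present.
SELECT setval('tab2_id_seq', COALESCE(max(id), 0) + 1, false) FROM tab2;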
On Tue, Aug 13, 2019 at 5:59 PM rihad wrote:
> [dbname] LOG: temporary file: path
> "base/pgsql_tmp/pgsql_tmp93683.257381", size 594
>
The setting 'work_mem' is within context 'user', that means it will
affect running sessions unless the session itself has already issued a
SET work_mem to xxx.
S
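A 'user'-context setting can be inspected and overridden per session; a small
sketch (the value is illustrative):

```sql
-- Check the effective value and its context.
SELECT name, setting, context
FROM pg_settings
WHERE name = 'work_mem';       -- context is 'user'

SET work_mem = '256MB';        -- this session, until RESET
SET LOCAL work_mem = '256MB';  -- current transaction only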
On 08/13/2019 07:41 PM, Peter Eisentraut wrote:
On 2019-08-13 17:16, rihad wrote:
If I increase it in postgresql.conf and SIGHUP the master server, will
the change be applied to all running backends, or only to the ones
started after the change? Thanks.
It will be applied to all running backends.
On 2019-08-13 17:16, rihad wrote:
> If I increase it in postgresql.conf and SIGHUP the master server, will
> the change be applied to all running backends, or only to the ones
> started after the change? Thanks.
It will be applied to all running backends.
--
Peter Eisentraut http:
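From SQL, the same reload can be done without sending SIGHUP by hand; since
work_mem has 'user' context, running backends pick the new value up on their
next query unless they have SET it themselves (a sketch; the value is
illustrative and ALTER SYSTEM requires superuser):

```sql
ALTER SYSTEM SET work_mem = '64MB';  -- writes postgresql.auto.conf
SELECT pg_reload_conf();             -- equivalent to SIGHUP on the postmaster
SHOW work_mem;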
If I increase it in postgresql.conf and SIGHUP the master server, will
the change be applied to all running backends, or only to the ones
started after the change? Thanks.
On 8/13/19 6:34 AM, Souvik Bhattacherjee wrote:
> If the selects are returning more than one row then you are already
> doing bulk inserts. If they are returning single rows or you want to
> batch them then you need some sort of code to do that. Something
> like (pseudo-Python-like code):
>
On Tue, Aug 13, 2019 at 10:23 AM Daulat Ram wrote:
> Initially did not have LongReadLen set, so I thought this was the cause. But,
> I have set LongReadLen, on the db handle, equal to 9000.
Apparently this is an Oracle problem because it accepted data longer
than its type, so my guess would
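ORA-24345 with "LongReadLen too small" usually means the fetch buffer for
LONG/LOB columns is smaller than the data Oracle returns. Besides setting the
LongReadLen/LongTruncOk attributes on the DBI handle, ora2pg exposes the
buffer size through its configuration file; the value below is illustrative,
not a recommendation:

```
# ora2pg.conf -- enlarge the buffer used when fetching LONG/LOB data
LONGREADLEN   10485760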
> On 13 Aug 2019, at 15:19, David G. Johnston
> wrote:
>
> On Tuesday, August 13, 2019, Alban Hertroys wrote:
>
> > On 13 Aug 2019, at 13:10, stan wrote:
> >
> > select
> > project.proj_no ,
>
> Removed columns that get in the way of your desired result. You can’t have
> both details and the sum over them in a meaningful way.
> If the selects are returning more than one row then you are already
> doing bulk inserts. If they are returning single rows or you want to
> batch them then you need some sort of code to do that. Something
> like (pseudo-Python-like code):
> attr2_vals= [(10, 20, 30, 40), (50, 60, 70, 80)]
> for
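The point of the reply is that no client-side loop is needed when the rows
come from a SELECT; the batching happens in one statement (names are
illustrative, values taken from the snippet):

```sql
INSERT INTO tab2 (attr1, attr2)
SELECT attr1, attr2
FROM tab1
WHERE attr2 IN (10, 20, 30, 40, 50, 60, 70, 80);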
On Tuesday, August 13, 2019, Alban Hertroys wrote:
>
> > On 13 Aug 2019, at 13:10, stan wrote:
> >
> > select
> > project.proj_no ,
>
> Removed columns that get in the way of your desired result. You can’t have
> both details and the sum over them in a meaningful way.
>
Sure you can, at l
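The usual way to keep the detail rows and still show the per-project total is
a window function. A sketch using the column names from the thread (the join
clauses are stand-ins, since the full query is cut off above):

```sql
SELECT project.proj_no,
       rate.rate * task_instance.hours AS line_cost,
       SUM(rate.rate * task_instance.hours)
           OVER (PARTITION BY project.proj_no) AS proj_total
FROM task_instance
JOIN project ON project.id = task_instance.project_id  -- stand-in join
JOIN rate    ON rate.id    = task_instance.rate_id;    -- stand-in join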
Hi Stan,
Am Dienstag, 13. August 2019, 13:10:18 CEST schrieb stan:
> Maybe I have a basic misunderstanding. What I am trying to get is a total
> cost for each project. This would be calculated by multiplying rate and
> hours for each row, on a per project base, and then summing all of th
> product
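For a single total per project, the detail columns come out of the SELECT
list and proj_no goes into GROUP BY. A sketch with stand-in join conditions
(the original query is truncated above):

```sql
SELECT project.proj_no,
       SUM(rate.rate * task_instance.hours) AS total_cost
FROM task_instance
JOIN project ON project.id = task_instance.project_id  -- stand-in join
JOIN rate    ON rate.id    = task_instance.rate_id     -- stand-in join
GROUP BY project.proj_no;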
> On 13 Aug 2019, at 13:10, stan wrote:
>
> select
> project.proj_no ,
Removed columns that get in the way of your desired result. You can’t have both
details and the sum over them in a meaningful way.
> SUM (rate.rate * task_instance.hours)
> from
> task_instance
> join
Actually yes, thanks
Was just blind when checking EXPLAIN :D
On 13 Aug 2019, at 12:55, Thomas Kellerer wrote:
> Thomas Rosenstein schrieb am 13.08.2019 um 12:17:
>> we would like to evaluate what data a SQL query will access, is there
>> a possibility to ask the Planner directly which tables an
On Tue, Aug 13, 2019 at 05:54:03AM -0400, stan wrote:
> I am trying to write, what is for me, a fairly complex query. It uses JOINS,
> and also GROUP BY. I have this working with the exception of adding the
> GROUP BY clause.
>
> Is there some reason I cannot add a GROUP BY function to a JOIN?
>
On Tue, 13 Aug 2019, 11:50 Shital A, wrote:
>
>
> On Mon, 12 Aug 2019, 18:54 Adrien Nayrat,
> wrote:
>
>> On 8/12/19 2:57 PM, Shital A wrote:
>> >
>> >
>> > Hello,
>> >
>> > Postgres version : 9.6
>> > OS:Rhel 7.6
>> >
>> > We are working on HA setup for postgres cluster of two nodes in
>> activ
Thomas Rosenstein schrieb am 13.08.2019 um 12:17:
> we would like to evaluate what data a SQL query will access, is there
> a possibility to ask the Planner directly which tables and columns
> will be affected in the end?
>
explain (verbose) will show the output columns for each step.
Is that
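As the reply notes, EXPLAIN (VERBOSE) annotates every plan node with its
output columns and the relations it scans, which is the closest built-in
answer to "which tables and columns will be touched" (hypothetical tables
below):

```sql
EXPLAIN (VERBOSE)
SELECT a.id, b.name
FROM a JOIN b ON b.a_id = a.id;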
Hi,
we would like to evaluate what data a SQL query will access,
is there a possibility to ask the Planner directly which tables and
columns will be affected in the end?
Thanks
Thomas
I am trying to write, what is for me, a fairly complex query. It uses JOINS,
and also GROUP BY. I have this working with the exception of adding the
GROUP BY clause.
Is there some reason I cannot add a GROUP BY function to a JOIN?
Here is what I have:
CREATE OR REPLACE view tasks_view as
sele
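GROUP BY combines with JOINs without restriction; the usual stumbling block
is that every non-aggregated column in the SELECT list must also appear in
the GROUP BY clause. A minimal sketch (hypothetical tables and joins, not the
poster's schema):

```sql
CREATE OR REPLACE VIEW tasks_view AS
SELECT p.proj_no,
       SUM(t.hours) AS total_hours
FROM project p
JOIN task_instance t ON t.project_id = p.id   -- hypothetical join
GROUP BY p.proj_no;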
Hi team ,
I am getting the below error while fetching the data from Oracle 12c using
ora2pg.
DBD::Oracle::st fetchall_arrayref failed: ORA-24345: A Truncation or null fetch
error occurred (DBD SUCCESS_WITH_INFO: OCIStmtFetch, LongReadLen too small
and/or LongTruncOk not set)ERROR no statement e