49 rows=1 width=12) (actual time=0.003..0.003 rows=1 loops=882)
        Index Cond: (product_id = so.product_id)
  ->  Index Scan using cs_id_c on circuit_status cst  (cost=0.14..0.16 rows=1 width=19) (actual time=0.001..0.001 rows=1 loops=882)
        Index Cond: (id = c.status)
Planning time: 3.889 ms
Execution time: 1122.357 ms
(68 rows)
Thank you very much for your help,
Kind regards,
Alessandro Aste.
.00..21.91 rows=3 width=12)
Filter: (vendor_id = 12346)
(55 rows)
On Wed, Mar 21, 2018 at 8:01 PM, Tomas Vondra wrote:
>
>
> On 03/21/2018 05:09 PM, Alessandro Aste wrote:
> > Hi there, we are using postgresql
the self-contained test case - I'll do my best to prepare it.
Thank you very much, please let me know if this answers your questions.
On 22 Mar 2018, 3:04 AM, "Tomas Vondra" wrote:
>
> On 03/21/2018 08:44 PM, Alessandro Aste wrote:
> > Thanks for your reply
Thanks Tomas. We're currently building Postgres from source. To enable
symbols, do you want me to re-configure Postgres with --enable-debug and
then run perf?
Regards,
On Thu, Mar 22, 2018 at 5:00 PM, Tomas Vondra wrote:
>
>
> On 03/22/2018 11:33 AM, Alessandro Aste wrote:
|219
(1 row)
Time: 2245.073 ms (00:02.245)
On Fri, Mar 23, 2018 at 9:31 AM, Alessandro Aste wrote:
> Tomas, I'm attaching a 4MB file with the perf report. Let me know if it
> gets blocked; if so, I'll shrink it to the first 1000 lines.
>
> Thank you,
>
Hello, any news?
Thank you,
Alessandro.
On Fri, Mar 23, 2018 at 8:22 PM, Alessandro Aste wrote:
> PS, in the meantime I discovered a second workaround (besides disabling
> parallel processing). I added OFFSET 0 to the subquery and, according
> to the documentation, “OFFSET 0 is the same as omitting the OFFSET clause”
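A minimal sketch of that OFFSET 0 workaround, with made-up table and column
names (the real query from the thread is not shown here):

-- Hypothetical example: OFFSET 0 keeps the planner from flattening the
-- subquery into the outer query, which in this case also avoided the slow
-- parallel plan.
SELECT so.id, cst.name
FROM   (SELECT id, product_id, status
        FROM   service_order            -- hypothetical table
        WHERE  vendor_id = 12346
        OFFSET 0) so                    -- the optimization-fence trick
JOIN   circuit_status cst ON cst.id = so.status;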
Hello, I am trying to put together a query to monitor index bloat for
a database I maintain.
Is there a SQL way to identify bloated indexes? I googled around but
found nothing that works.
I'm currently running 9.6, but I'm looking for something compatible with
version 10 too.
Thank you very much
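If installing the pgstattuple extension is an option (it ships with both 9.6
and 10), a rough sketch is below; the index name is just a placeholder:

-- CREATE EXTENSION pgstattuple;  -- needed once per database
-- A low avg_leaf_density together with high leaf_fragmentation is a
-- common sign of a bloated B-tree index.
SELECT avg_leaf_density, leaf_fragmentation
FROM   pgstatindex('public.my_index');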
Thanks much, I'll check that out. I see the queries are 3 years old, so I'm
wondering if they still work on 9.6.x or 10.
On Mon 16 Jul 2018, 17:44, Adrien NAYRAT wrote:
> On 07/16/2018 05:16 PM, Alessandro Aste wrote:
> > Hello, I am trying to put together a query to monitor index bloat
Hi, we have a logical backup process that has run every night for 5+ years.
It is a logical backup we use to restore a non-production environment. We
use pg_dump in parallel mode with the directory format.
The Postgres version is 9.6.6.
Tonight's schedule failed with the following error:
pg_dump: [archiver
Thanks much, I'll keep my eyes open tonight, hoping it will not happen
again.
On Thu, Jul 19, 2018 at 5:39 PM, Tom Lane wrote:
> [ please keep the list cc'd for the archives' sake ]
>
> Alessandro Aste writes:
> > Hello Tom, thanks for your reply:
> >
You can run this query to identify the relations owned by the users you're
not allowed to drop; just replace ('username1', 'username2', ..., 'usernameN')
with your role names. Then, once you have identified the tables/objects,
change the owner like this:
ALTER TABLE <table_name> OWNER TO <new_owner>;
and try to drop the user again.
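As an illustration, a minimal sketch of that kind of catalog query (role,
table, and owner names are placeholders):

-- Ordinary tables owned by the roles you want to drop.
SELECT c.relname, r.rolname AS owner
FROM   pg_class c
JOIN   pg_roles r ON r.oid = c.relowner
WHERE  c.relkind = 'r'
  AND  r.rolname IN ('username1', 'username2');

-- Then reassign each object, e.g.:
ALTER TABLE some_table OWNER TO new_owner;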
Hi,
PostgreSQL version: 10.5
I need to convert an SQL field from real to numeric, but I'm getting
strange behavior.
See the following query in preprod:
select amount, amount::numeric, amount::numeric(16,4),
amount::varchar::numeric from mytable where id = 32560545;
Result:
17637.75
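To reproduce that kind of discrepancy without the original table, a
self-contained sketch (the literal value is only illustrative):

-- On PostgreSQL 10, real -> numeric is converted through a 6-significant-digit
-- text form, while real -> varchar honors extra_float_digits, so the casts
-- below can disagree about the last digits.
SELECT v::numeric           AS direct_cast,
       v::numeric(16,4)     AS constrained_cast,
       v::varchar::numeric  AS via_text
FROM   (SELECT 17637.75::real AS v) AS s;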
Hi there, we are running PostgreSQL 10.5 on a CentOS 7 server.
We're seeing multiple connections (in pg_stat_activity) from our
application with the same query, same user, same application_name, same
query_start, etc.
We are 100% sure the query is duplicated and not referring to multiple
queries.
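A quick way to spot those duplicates, as a rough sketch against
pg_stat_activity:

-- Sessions sharing user, application_name, query_start and query text;
-- a count above 1 flags the duplicated connections.
SELECT usename, application_name, query_start,
       count(*)        AS sessions,
       left(query, 60) AS query_prefix
FROM   pg_stat_activity
WHERE  state <> 'idle'
GROUP  BY usename, application_name, query_start, query
HAVING count(*) > 1;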
Sent: 14 December 2017 16:13
To: Nicola Contu
Cc: Rene Romero Benavides ;
pgsql-general@lists.postgresql.org; Alessandro Aste
Subject: Re: pgstattuple free_percent to high
Greetings Nicola,
* Nicola Contu (nicola.co...@gmail.com) wrote:
> I think tuning the autovacuum settings may in
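In case it helps, one common form of per-table autovacuum tuning is sketched
below (the table name and values are purely illustrative):

-- Trigger autovacuum at roughly 5% dead tuples instead of the 20% default.
ALTER TABLE big_table
  SET (autovacuum_vacuum_scale_factor = 0.05,
       autovacuum_vacuum_threshold    = 1000);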