>I also tried cstore_fdw for this, but my queries
>(building a 2-D histogram) were taking 4+ seconds,
>compared to 500ms using arrays.
> ...
> but maybe I could write my own extension
Have you checked the new TimescaleDB extension? [
https://github.com/timescale/timescaledb ]
"TimescaleDB is pack
Paul A Jungwirth wrote on 21.09.2017 at 23:05:

but maybe I could write my own extension to
load regular files into Postgres arrays, sort of getting the best of
both worlds.
There is a foreign data wrapper for that:
https://github.com/adunstan/file_text_array_fdw
but it's pretty old and s
> It's going to suck big-time :-(.
Ha ha that's what I thought, but thank you for confirming. :-)
> We ended up keeping
> the time series data outside the DB; I doubt the conclusion would be
> different today.
Interesting. That seems a little radical to me, but I'll consider it
more seriously no
Paul A Jungwirth writes:
> I'm considering a table structure where I'd be continuously appending
> to long arrays of floats (10 million elements or more). Keeping the
> data in arrays gives me much faster SELECT performance vs keeping it
> in millions of rows.
> But since these arrays keep growin
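The append pattern under discussion can be sketched like this (table and column names are assumptions):

```sql
-- Hypothetical schema: one long float8[] per series.
CREATE TABLE series (id int PRIMARY KEY,
                     samples float8[] NOT NULL DEFAULT '{}');

-- Appending new samples. Note that each such UPDATE rewrites the whole
-- (TOASTed) array value, which is why continuously growing arrays
-- become expensive to maintain.
UPDATE series
SET samples = samples || ARRAY[1.5, 2.5]::float8[]
WHERE id = 1;
```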
On 8 September 2017 at 00:23, Jeff Janes wrote:
> On Tue, Aug 29, 2017 at 1:20 AM, Alban Hertroys wrote:
>>
>> On 28 August 2017 at 21:32, Jeff Janes wrote:
>> > On Mon, Aug 28, 2017 at 5:22 AM, Alban Hertroys
>> > wrote:
>> >>
>> >> Hi all,
>> >>
>> >> It's been a while since I actually got to
On Tue, Aug 29, 2017 at 1:20 AM, Alban Hertroys wrote:
> On 28 August 2017 at 21:32, Jeff Janes wrote:
> > On Mon, Aug 28, 2017 at 5:22 AM, Alban Hertroys
> wrote:
> >>
> >> Hi all,
> >>
> >> It's been a while since I actually got to use PG for anything serious,
> >> but we're finally doing som
On 28 August 2017 at 21:32, Jeff Janes wrote:
> On Mon, Aug 28, 2017 at 5:22 AM, Alban Hertroys wrote:
>>
>> Hi all,
>>
>> It's been a while since I actually got to use PG for anything serious,
>> but we're finally doing some experimentation @work now to see if it is
>> suitable for our datawareh
On Mon, Aug 28, 2017 at 5:22 AM, Alban Hertroys wrote:
> Hi all,
>
> It's been a while since I actually got to use PG for anything serious,
> but we're finally doing some experimentation @work now to see if it is
> suitable for our datawarehouse. So far it's been doing well, but there
> is a part
On 28 August 2017 at 14:22, Alban Hertroys wrote:
> This is on:
Just noticed I forgot to paste this in:
warehouse=# select version();
version
--
On Mon, Jun 19, 2017 at 2:29 PM, Rob Nikander wrote:
> I'm wondering about the tradeoffs, specifically: is it possible to update
> one piece of a jsonb value without having to rewrite the entire field? There
> are cases where that data field was getting pretty big (500kb). Would you
> expect any o
> On 19 June 2017 at 21:29, Rob Nikander wrote:
>
> I'm wondering about the tradeoffs, specifically: is it possible to update
one piece of a jsonb value without having to rewrite the entire field?
There are cases where that data field was getting pretty big (500kb). Would
you expect any obvious pe
On 19 June 2017 at 21:29:40 CEST, Rob Nikander wrote:
>
>I'm wondering about the tradeoffs, specifically: is it possible to
>update one piece of a jsonb value without having to rewrite the entire
>field?
Updates in PostgreSQL are always Delete & Insert. So the answer is no. For
session-data pl
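A sketch of the point about jsonb updates (table and column names are assumptions): even a one-key change goes through delete-and-insert of the row, so the full 500 kB datum is rewritten:

```sql
-- jsonb_set builds a complete new jsonb value; the row (and its TOASTed
-- jsonb datum) is rewritten even though only one key changed.
UPDATE sessions
SET data = jsonb_set(data, '{last_seen}', to_jsonb(now()::text))
WHERE sid = 42;
```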
> Another idea, if you haven't tried it already, is to run these test cases
> in a server built with --enable-debug and --enable-cassert. The memory
> clobber stuff that's enabled by the latter is very good at turning coding
> errors into reproducible, debuggable crashes ;-)
>
>
Jeff Janes writes:
> On Fri, Jun 9, 2017 at 9:13 AM, Eric Lemoine
> wrote:
>> I now think that the performance bug is not related to the fn_extra
>> thing. I had hope but not anymore :) I don't see where the Pointcloud
>> and PostGIS extensions could conflict.
> Can you run 'perf top' on the slo
On Fri, Jun 9, 2017 at 9:13 AM, Eric Lemoine
wrote:
> On 06/08/2017 10:41 PM, Éric wrote:
> >
> >
> >
> >> Have you experimented with other queries that don't involve PostGIS?
> >> I'm wondering if your hook-installation code fails to work properly
> >> unless PostGIS was loaded first. This woul
On 06/09/2017 09:13 AM, Eric Lemoine wrote:
On 06/08/2017 10:41 PM, Éric wrote:
Have you experimented with other queries that don't involve PostGIS?
I'm wondering if your hook-installation code fails to work properly
unless PostGIS was loaded first. This would be easier to credit if
there a
On 06/08/2017 10:41 PM, Éric wrote:
>
>
>
>> Have you experimented with other queries that don't involve PostGIS?
>> I'm wondering if your hook-installation code fails to work properly
>> unless PostGIS was loaded first. This would be easier to credit if
>> there are hooks both extensions try t
>Have you experimented with other queries that don't involve PostGIS?
>I'm wondering if your hook-installation code fails to work properly
>unless PostGIS was loaded first. This would be easier to credit if
>there are hooks both extensions try to get into.
I think you're right on, Tom. It look
Eric Lemoine writes:
> The initial "select pc_typmod_pcid(1)" query completely screws the
> connection.
> "select pc_typmod_pcid(1)" is just an example of a simple query that
> triggers the problem. There are many others. But it has to be a query
> using the Pointcloud extension.
My guess is that
On 06/08/2017 07:27 PM, Moreno Andreo wrote:
> On 08/06/2017 19:10, Eric Lemoine wrote:
>>
>> How can such a thing happen? Thanks for any insight on what could cause
>> this.
>>
>>
> I'd try raising shared_buffers to 1 GB or something near 40% of the
> available memory
I tried to make it 4G,
On 08/06/2017 19:10, Eric Lemoine wrote:
How can such a thing happen? Thanks for any insight on what could cause
this.
I'd try raising shared_buffers to 1 GB or something near 40% of the
available memory
If you run the query again, after getting bad results, what do you get?
Cheers
M
> Note that the execution time is 46 ms when the query is wrapped in an
> explain analyze (while it's 3 s when it's not!)
Actually, it seems to me that the performance issue is not on the query
itself, it is on the fetching of the data returned by the query. Which
explains why the query is fast
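One way to separate the two costs, per the observation above (the column name here is an assumption): EXPLAIN ANALYZE executes the query entirely server-side, while fetching through a cursor additionally exercises serialization and transfer of the result data:

```sql
-- Server-side execution only: no result rows are shipped to the client.
EXPLAIN (ANALYZE) SELECT pa FROM sthelens;

-- Including fetch cost: rows are serialized and sent to the client.
BEGIN;
DECLARE c CURSOR FOR SELECT pa FROM sthelens;
FETCH 100 FROM c;
COMMIT;
```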
> Lots of missing information here ...
>
> Is there an index on public.sthelens.points?
Yes, there are.
lopocs=# \d sthelens;
Table "public.sthelens"
 Column |  Type   | Modifiers
--------+---------+-----------
On 06/08/2017 09:20 AM, Eric Lemoine wrote:
Looks like you also have postgis and pointcloud_postgis in mix. I would
say this may get an answer sooner here:
http://lists.osgeo.org/mailman/listinfo/pgpointcloud/
I am actually one of the developers of the Pointcloud extension. I
haven't been ab
> Looks like you also have postgis and pointcloud_postgis in mix. I would
> say this may get an answer sooner here:
>
> http://lists.osgeo.org/mailman/listinfo/pgpointcloud/
I am actually one of the developers of the Pointcloud extension. I
haven't been able to debug this up to now.
--
Éric L
On 06/08/2017 09:00 AM, Eric Lemoine wrote:
Hi
We have a rather strange performance issue with the Pointcloud extension
[*]. The issue/bug may be in the extension, but we don't know for sure
at this point. I'm writing to the list to hopefully get some guidance on
how to further debug this.
[*]
On Thu, 8 Jun 2017 18:00:04 +0200
Eric Lemoine wrote:
> We have a rather strange performance issue with the Pointcloud extension
> [*]. The issue/bug may be in the extension, but we don't know for sure
> at this point. I'm writing to the list to hopefully get some guidance on
> how to further deb
On 01/04/2017 08:54 PM, Kisung Kim wrote:
On Wed, Jan 4, 2017 at 1:21 AM, Andreas Kretschmer wrote:
Kisung Kim wrote:
> And finally I found that auto_explain is the cause of the problem.
real hardware or virtual hardwa
On Wed, Dec 28, 2016 at 3:15 AM, Tim Uckun wrote:
> I have seen various links on the internet which indicate that PLV8 is
> significantly faster than PL-PGSQL sometimes an order of magnitude faster.
>
> Is this uniformly true or is it just in certain circumstances?
>
> Is there any benefit to choo
On Wed, Jan 4, 2017 at 1:21 AM, Andreas Kretschmer <akretsch...@spamfence.net> wrote:
> Kisung Kim wrote:
>
> > And finally I found that auto_explain is the cause of the problem.
>
> real hardware or virtual hardware? On virtual there are sometimes
> problems with exact timings, please read:
>
>
Kisung Kim wrote:
> And finally I found that auto_explain is the cause of the problem.
real hardware or virtual hardware? On virtual there are sometimes
problems with exact timings, please read:
https://www.postgresql.org/docs/current/static/pgtesttiming.html
Regards, Andreas Kretschmer
--
A
Yes I use a migrations tool as well. I like ruby so I use either the Sequel
migrations or the ActiveRecord migrations depending on the project. That's
a great way to work. Having said that I think it might be an interesting
thing to do to create the PGAdmin hierarchy of objects on disk. Not the
dat
On Fri, Dec 30, 2016 at 12:06 PM, ajmcello wrote:
> Reducing worker mem shaved about 12 minutes off the query time.. Thanks
> for the suggestion. I lowered it to 10MB instead of 100MB
>
> [SNIP]
>
> >>> [postgresql.conf]
> >>> max_connections = 10
> >>> max_files_per_process = 100
> >>> s
The reason that you cannot use git for that is that it was never intended for
live data and stuff that resides on a database.
That said, I agree with you that all table DDLs and stored procedures should be
kept under version control. And in fact I do keep them under VC on my database.
How do I
cello [mailto:ajmcell...@gmail.com]
> Sent: Friday, 30 December 2016 07:05
> To: Charles Clavadetscher
> Subject: Re: [GENERAL] performance tuning postgresql 9.5.5.10 [enterprisedb]
>
> There are no connections except one cli when running the query. After that
> finishes then I get con
Hello
> -Original Message-
> From: pgsql-general-ow...@postgresql.org
> [mailto:pgsql-general-ow...@postgresql.org] On Behalf Of ajmcello
> Sent: Friday, 30 December 2016 05:54
> To: POSTGRES
> Subject: [GENERAL] performance tuning postgresql 9.5.5.10 [enterprisedb]
>
>
> I am trying
I have datagrip and it's OK but it doesn't really do everything I want.
I don't understand why it doesn't fetch all objects from the database and
then put them into the disk in a directory so I can put it all under git
and then let me work on them syncing the files back as they change. For
exampl
If you want an IDE, Jetbrains, the makers of great IDEs like IntelliJ, PyCharm.
and AppCode, among others, have recently come out with what is arguably the
BEST IDE for DBAs, DataGrip. It runs on most major platforms, and is so good
that I have bitten the bullet and paid the yearly subscription
I am not saying the postgres core people should work on an IDE, just that
an IDE like thing would be nice.
On Fri, Dec 30, 2016 at 12:51 PM, Rob Sargent wrote:
> I would hope Postgres core folk take no more than a nanosecond to reject
> the idea that they work on an IDE. Focus on reading and wri
I would hope Postgres core folk take no more than a nanosecond to reject the
idea that they work on an IDE. Focus on reading and writing faster and faster
ACID all the while.
> On Dec 29, 2016, at 5:32 PM, Tim Uckun wrote:
>
> Honestly I don't even like JS. Having said that I am not too crazy
Honestly I don't even like JS. Having said that I am not too crazy about
PL-PGSQL either. I am willing to put up with either given that they are
supported widely in default installs of postgres in AWS, Linux and MacOSX,
As I said before, I think posgres gives a unique and underutilized language
pl
On 29/12/16 09:12, Francisco Olarte wrote:
On Wed, Dec 28, 2016 at 5:53 PM, Jan de Visser wrote:
...but the term "impedance mismatch"
is at least 25 years old;
Much older, I was told it in class at least 32 years ago.
as far as I know it was coined
_Borrowed_ from electrical engineering /
On 12/29/2016 10:35 AM, Pavel Stehule wrote:
2016-12-29 10:03 GMT+01:00 Tim Uckun mailto:timuc...@gmail.com>>:
I think it's awesome that postgres allows you to code in different
languages like this. It really is a unique development environment
and one that is overlooked as a develo
2016-12-29 10:03 GMT+01:00 Tim Uckun :
> I think it's awesome that postgres allows you to code in different
> languages like this. It really is a unique development environment and one
> that is overlooked as a development platform. It would be nice if more
> languages were delivered in the defau
2016-12-29 10:04 GMT+01:00 Tim Uckun :
> Mostly generating SQL statements to execute. Like for example deciding
> which partition to insert into.
>
Then you don't find any possible performance difference - the query is
about 10-100x slower than expression - so the plpgsql should be good.
More y
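The partition-routing case can be sketched as a PL/pgSQL function whose dominant cost is the INSERT itself, not the string handling (all names here are assumptions for illustration):

```sql
CREATE OR REPLACE FUNCTION insert_measurement(ts timestamptz, val float8)
RETURNS void
LANGUAGE plpgsql AS $$
BEGIN
  -- Pick the monthly partition from the timestamp and route the row there.
  -- format() builds the table name; USING passes the values as parameters.
  EXECUTE format('INSERT INTO measurements_%s VALUES ($1, $2)',
                 to_char(ts, 'YYYYMM'))
  USING ts, val;
END;
$$;
```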
Mostly generating SQL statements to execute. Like for example deciding
which partition to insert into.
On Thu, Dec 29, 2016 at 10:00 PM, Pavel Stehule
wrote:
>
>
> 2016-12-29 9:23 GMT+01:00 Tim Uckun :
>
>> I am not doubting the efficacy of stored procs, just wondering which
>> language is bette
I think it's awesome that postgres allows you to code in different
languages like this. It really is a unique development environment and one
that is overlooked as a development platform. It would be nice if more
languages were delivered in the default package especially lua, V8 and
mruby.
On T
2016-12-29 9:23 GMT+01:00 Tim Uckun :
> I am not doubting the efficacy of stored procs, just wondering which
> language is better. From the sound of it string manupilation is slow in
> PL-PGSQL but looking at my procs there does seem to be a lot of string
> manipulation going on so maybe I better
My recommendation. See them as tools in a toolkit, not a question of what
is best.
For places where you have SQL statements as primary do SQL or PLPGSQL
functions.
For places where you are manipulating values (parsing strings for example)
use something else (I usually use pl/perl for string mani
I am not doubting the efficacy of stored procs, just wondering which
language is better. From the sound of it string manupilation is slow in
PL-PGSQL but looking at my procs there does seem to be a lot of string
manipulation going on so maybe I better do some tests.
On Thu, Dec 29, 2016 at 3:02
On Wed, Dec 28, 2016 at 5:53 PM, Jan de Visser wrote:
> ...but the term "impedance mismatch"
> is at least 25 years old;
Much older, I was told it in class at least 32 years ago.
> as far as I know it was coined
_Borrowed_ from electrical engineering / communication techs.
It is used to highl
On Wednesday, December 28, 2016 6:02:51 AM EST Mike Sofen wrote:
> the natural lashup of plpgsql to postgres (I liked Alban’s term,
> “impedance”), is a key aspect.
Not to deprive Alban of any of his credit, but the term "impedance mismatch"
is at least 25 years old; as far as I know it was coined
From: Tim Uckun
I have seen various links on the internet which indicate that PLV8 is
significantly faster than PL-PGSQL sometimes an order of magnitude faster.
Is there any benefit to choosing PL-PGSQL?
I can’t speak to PLV8. However, I can speak to plpgsql, and s
2016-12-28 10:46 GMT+01:00 Pavel Stehule :
> Hi
>
> 2016-12-28 10:15 GMT+01:00 Tim Uckun :
>
>> I have seen various links on the internet which indicate that PLV8 is
>> significantly faster than PL-PGSQL sometimes an order of magnitude faster.
>>
>> Is this uniformly true or is it just in certain
Hi
2016-12-28 10:15 GMT+01:00 Tim Uckun :
> I have seen various links on the internet which indicate that PLV8 is
> significantly faster than PL-PGSQL sometimes an order of magnitude faster.
>
> Is this uniformly true or is it just in certain circumstances?
>
It depends on usage
>
> Is there a
Hey Tom,
I appreciate you're busy, but did you ever get a chance to look at this?
On 19/09/2016 08:40, Chris Withers wrote:
On 16/09/2016 15:29, Tom Lane wrote:
Chris Withers writes:
On 16/09/2016 14:54, Igor Neyman wrote:
So, what is the value for "end ts", when the record is inserted (the
On Fri, Sep 16, 2016 at 2:01 AM, Chris Withers
wrote:
> Hi All,
>
> I have quite a few tables that follow a pattern like this:
>
> Table "public.my_model"
> Column |  Type   | Modifiers
> --------+---------+-----------
> period | tsrange | not null
> k
On 16/09/2016 15:29, Tom Lane wrote:
Chris Withers writes:
On 16/09/2016 14:54, Igor Neyman wrote:
So, what is the value for "end ts", when the record is inserted (the range just
started)?
It's open ended, so the period is [start_ts, )
I've not looked at the GiST range opclass, but I wou
On 16/09/2016 10:26, John R Pierce wrote:
On 9/16/2016 2:23 AM, John R Pierce wrote:
wait, what is a tsrange? the standard textsearch data types in
postgres are tsvector and tsquery,
never mind, I should have known, its a timestamp range. ...
when you do updates, are you changing any
On 16/09/2016 12:00, John R Pierce wrote:
On 9/16/2016 3:46 AM, Chris Withers wrote:
when you do updates, are you changing any of the indexed fields, or
just "value" ?
Yeah, it's a temporal table, so "updates" involve modifying the period
column for a row to set its end ts, and then inserting
Chris Withers writes:
> On 16/09/2016 14:54, Igor Neyman wrote:
>> So, what is the value for "end ts", when the record is inserted (the range
>> just started)?
> It's open ended, so the period is [start_ts, )
I've not looked at the GiST range opclass, but I would not be surprised if
having lots
On 16/09/2016 14:54, Igor Neyman wrote:
-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Chris Withers
Sent: Friday, September 16, 2016 6:47 AM
To: John R Pierce ; pgsql-general@postgresql.org
Subject: Re: [GENERAL
-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Chris Withers
Sent: Friday, September 16, 2016 6:47 AM
To: John R Pierce ; pgsql-general@postgresql.org
Subject: Re: [GENERAL] performance problems with bulk inserts
On 16/09/2016 12:00, John R Pierce wrote:
On 9/16/2016 3:46 AM, Chris Withers wrote:
when you do updates, are you changing any of the indexed fields, or
just "value" ?
Yeah, it's a temporal table, so "updates" involve modifying the period
column for a row to set its end ts, and then inserting
On 9/16/2016 3:46 AM, Chris Withers wrote:
when you do updates, are you changing any of the indexed fields, or
just "value" ?
Yeah, it's a temporal table, so "updates" involve modifying the period
column for a row to set its end ts, and then inserting a new row with
a start ts running on from
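The temporal "update" described above can be sketched against the my_model definition shown elsewhere in the thread; the key value, timestamps, and the assumption that only these two columns matter are illustrative:

```sql
-- Close the current open-ended period for the key...
UPDATE my_model
SET period = tsrange(lower(period), now()::timestamp)
WHERE key = 'sensor-1' AND upper_inf(period);

-- ...and start a new open-ended period running on from there.
INSERT INTO my_model (period, key)
VALUES (tsrange(now()::timestamp, NULL), 'sensor-1');
```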
On 16/09/2016 10:26, John R Pierce wrote:
On 9/16/2016 2:23 AM, John R Pierce wrote:
wait, what is a tsrange? the standard textsearch data types in
postgres are tsvector and tsquery,
never mind, I should have known, its a timestamp range. ...
when you do updates, are you changing any o
On 9/16/2016 2:23 AM, John R Pierce wrote:
wait, what is a tsrange? the standard textsearch data types in
postgres are tsvector and tsquery,
never mind, I should have known, its a timestamp range. ...
when you do updates, are you changing any of the indexed fields, or just
"value" ?
On 9/16/2016 2:12 AM, John R Pierce wrote:
 Column |  Type   | Modifiers
--------+---------+-----------
period | tsrange | not null
wait, what is a tsrange? the standard textsearch data types in
postgres are tsvector and tsquery,
--
john r pierce, recycli
On 9/16/2016 2:01 AM, Chris Withers wrote:
Hi All,
I have quite a few tables that follow a pattern like this:
Table "public.my_model"
 Column |  Type   | Modifiers
--------+---------+-----------
period | tsrange | not null
key| character varying |
On 28 January 2016 at 08:41, Matt wrote:
> Moving from NUMERIC to FLOAT(8) did indeed lower query times by about 20%.
>
> I will try fixeddecimal and agg() as time permits.
That's surprisingly little gain. Please note that you'll not gain any
further improvements from the fixeddecimal type than y
Moving from NUMERIC to FLOAT(8) did indeed lower query times by about
20%.
I will try fixeddecimal and agg() as time permits.
On 25 Jan 2016, at 4:44, David Rowley wrote:
On 25 January 2016 at 15:45, Matt wrote:
I have a warehousing case where data is bucketed by a key of an
hourly
timesta
On 25 January 2016 at 15:45, Matt wrote:
> I have a warehousing case where data is bucketed by a key of an hourly
> timestamp and 3 other columns. In addition there are 32 numeric columns. The
> tables are partitioned on regular date ranges, and aggregated to the lowest
> resolution usable.
>
> Th
Matt wrote:
> I have a warehousing case where data is bucketed by a key of an hourly
> timestamp and 3 other columns. In addition there are 32 numeric columns. The
> tables are partitioned on regular date ranges, and aggregated to the lowest
> resolution usable.
>
> The principal use case is to
> Hi,
>
>
> does no one have an idea?
>
> It may be a rare case doing the same UPDATE a thousand times. But I'm really
> interested why this is not happening when doing DIFFERENT updates. And, of
> course, if something could be done on the database side to prevent this
> behavior in case so
Hi
it is side effect of MVCC implementation of Postgres. There is not possible
vacuum inside open transaction.
If you need it, then you should to use a different database - Postgres
doesn't work well when one record is highly often used in transaction.
Usual solution for Postgres is some proxy, t
Hi,
does no one have an idea?
It may be a rare case doing the same UPDATE a thousand times. But I'm really
interested why this is not happening when doing DIFFERENT updates. And, of
course, if something could be done on the database side to prevent this
behavior in case some application dev
Greg Sabino Mullane wrote
> Disabling triggers requires a heavy lock. A better way is to use
> the session_replication_role feature. See:
>
> http://blog.endpoint.com/2015/01/postgres-sessionreplication-role.html
This is a very effective solution to my problem. Thanks for the tip, Greg.
--
Vi
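The session_replication_role approach looks like this; unlike ALTER TABLE ... DISABLE TRIGGER it takes no exclusive lock on the table (it does require sufficient privilege, traditionally superuser):

```sql
SET session_replication_role = replica;  -- ordinary triggers stop firing
-- ... run the maintenance updates ...
SET session_replication_role = DEFAULT;  -- restore normal trigger firing
```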
> I'm doing some maintenance - which is done quite often, never had this
> problem before - which requires me to disable triggers, run some updates and
> then re-enable the triggers.
Disabling triggers requires a heavy lock. A better way is to
David Steele wrote
>
> ALTER TABLE requires an exclusive lock - my guess is that another
> process has a lock on the table. It could even be a select.
>
> pg_locks is your friend in this case:
> http://www.postgresql.org/docs/9.4/static/view-pg-locks.html
Hi David
I'm a bit confused on how to
On 2/27/15 9:07 AM, gmb wrote:
> Hi all
>
> I'm doing some maintenance - which is done quite often, never had this
> problem before - which requires me to disable triggers, run some updates and
> then re-enable the triggers.
> Where the whole process normally take 30 sec , it took much longer tod
Thanks Laurenz, very good point!
Luckily (phew!) the business scenario is such that race conditions cannot
occur (and the transaction table is append only). There is business
workflow to address duplicates but
1) it occurs extremely rarely (it would be a deliberate sabotage if it
occurs)
2) there
Anil Menon wrote:
> I would like to ask from your experience which would be the best "generic"
> method for checking if row
> sets of a certain condition exists in a PLPGSQL function.
>
> I know of 4 methods so far (please feel free to add if I missed out any
> others)
[...]
Are you aware that
Thanks Adrian
On Thu, Nov 20, 2014 at 3:46 AM, Adrian Klaver
wrote:
> On 11/19/2014 08:26 AM, Anil Menon wrote:
>
>> Hello,
>>
>> I would like to ask from your experience which would be the best
>> "generic" method for checking if row sets of a certain condition exists
>> in a PLPGSQL function.
On 11/19/2014 08:26 AM, Anil Menon wrote:
Hello,
I would like to ask from your experience which would be the best
"generic" method for checking if row sets of a certain condition exists
in a PLPGSQL function.
I know of 4 methods so far (please feel free to add if I missed out any
others)
1) ge
Hi Anil:
On Wed, Nov 19, 2014 at 5:26 PM, Anil Menon wrote:
> Cons : It seems doing a count(*) is not the best option for PG
>
For this and some of the following options, if you are going to just test
for existence, I would consider adding a limit 1 somewehere on the query,
to let the optimize
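The existence-test advice can be sketched as follows (table and predicate are assumptions): EXISTS stops at the first matching row, whereas count(*) has to visit every qualifying row.

```sql
-- Fragment for use inside a PL/pgSQL function:
IF EXISTS (SELECT 1 FROM orders WHERE customer_id = p_cust) THEN
  RAISE NOTICE 'rows found';
END IF;

-- Standalone form; LIMIT 1 gives the same early-exit effect:
SELECT 1 FROM orders WHERE customer_id = 42 LIMIT 1;
```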
Thank you so much!
So to recap the general way to reproduce this issue is:
create table products(id int primary key, type varchar);
insert into products select generate_series(1,10000), 'aaa';
insert into products select generate_series(10001,20000), 'bbb';
create index idx on products(type);
pre
David Johnston writes:
> "While "planner hints" comes to mind...on the SQL side can we extend the
> "PREPARE" command with two additional keywords?"
> PREPARE
> name [ ( data_type [, ...] ) ] [
> [NO] GENERIC
> ]
> AS statement
Don't really see the point. The OP's problem is t
On Thu, Nov 13, 2014 at 5:47 PM, Tom Lane wrote:
> David G Johnston writes:
> > Tom Lane-2 wrote
> >> In the meantime, I assume that your real data contains a small
> percentage
> >> of values other than these two? If so, maybe cranking up the statistics
> >> target would help. If the planner
Sam Saffron writes:
> I have hit a rather odd issue with prepared queries on both pg 9.3 and 9.4
> beta.
> I have this table (copy at http://samsaffron.com/testing.db.gz) with a
> very odd performance profile:
Interesting case. The issue seems to be that your statistics look like
this:
select
On Thu, Nov 06, 2014 at 02:55:20PM +, Shaun Thomas wrote:
>
> These updates aren't equivalent. It's very important you know this, because
> you're also inflating your table with a lot of extra updated rows.
>
> Take the first UPDATE:
>
> > UPDATE second SET time1 = orig.time1
> > FROM orig
These updates aren't equivalent. It's very important you know this, because
you're also inflating your table with a lot of extra updated rows.
Take the first UPDATE:
> UPDATE second SET time1 = orig.time1
> FROM orig
> WHERE second.key1 = orig.key1;
If you wrote this as a SELECT, it would look
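The usual fix for the row inflation described here is to filter out no-op matches, so rows whose time1 is already correct are not rewritten:

```sql
UPDATE second SET time1 = orig.time1
FROM orig
WHERE second.key1 = orig.key1
  AND second.time1 IS DISTINCT FROM orig.time1;  -- skip no-op updates
```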
Thank you very much for the insights (and how to read correctly a explain
:)).
There is a difference in data between the 2 databases, crons where running
on one and not the other.
But your insights enabled us to dive deeper, thanks !
2014-11-05 18:30 GMT+01:00 Tom Lane :
> "Tomas Vondra" writes
"Tomas Vondra" writes:
> On 5 November 2014 18:10, Tom Lane wrote:
>> For some reason there's over 1000 times more rows in andalertsmatch in
>> the 9.1.14 installation. I'm betting on a foulup somewhere in the data
>> dump/restore process.
> I'd bet that's mostly due to rescans in the merg
On 5 November 2014 18:10, Tom Lane wrote:
> "Tomas Vondra" writes:
>> On 5 November 2014 17:31, Rémy-Christophe Schermesser wrote:
>>> We have 2 instances of PG, one in 9.1.1 and 9.1.14. They have the same
>>> data, schema, PG configuration, and are almost identical machines, same
>>>
"Tomas Vondra" writes:
> Dne 5 Listopad 2014, 17:31, Rémy-Christophe Schermesser napsal(a):
>> We have 2 instances of PG, one in 9.1.1 and 9.1.14. They have the same
>> data, schema, PG configuration, and are almost identical machines, same
>> number of cores and memory, but different cloud provi
Hi,
On 5 November 2014 17:31, Rémy-Christophe Schermesser wrote:
> Hi,
>
> We have 2 instances of PG, one in 9.1.1 and 9.1.14. They have the same
> data, schema, PG configuration, and are almost identical machines, same
> number of cores and memory, but different cloud provider. The data was
On 29 August 2014 at 18:13, "Emmanuel Medernach" wrote:
>
> Hello,
>
> I use Postgres version 9.3.5 and spot a performance issue
> with postgres_fdw.
>
> I have a table object_003_xyz with 275000 lines and is
> exported to the master node as master_object_003_xyz.
>
> ( The following query
On Sun, Jul 13, 2014 at 3:20 AM, Ramesh T wrote:
> Hi,
> How performance monitoring/tuning on postgres..?
That's a vast subject, and the following link can help you depending
on your needs:
https://wiki.postgresql.org/wiki/Performance_Optimization
--
Michael