On 22.3.2015 22:50, Vivekanand Joshi wrote:
> Any documentation regarding how to configure postgresql.conf file as per
> individual user?
That can't be done in postgresql.conf, but by ALTER ROLE commands.
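For example, a per-user override is a one-liner (a sketch; the role name and
the setting chosen are illustrative, not from this thread):

    -- Applies at the start of the role's new sessions:
    ALTER ROLE report_user SET work_mem = '64MB';

    -- Review existing per-role/per-database overrides:
    SELECT * FROM pg_db_role_setting;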
--
Tomas Vondra                  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Any documentation regarding how to configure postgresql.conf file as per
individual user?
On 21 Mar 2015 13:10, "Josh Krupka" wrote:
> The other approaches of fixing the estimates, cost params, etc are the
> right way of fixing it. *However* if you needed a quick fix for just this
> report and can't find a way of setting it in Jaspersoft for just the report ...
The other approaches of fixing the estimates, cost params, etc are the
right way of fixing it. *However* if you needed a quick fix for just this
report and can't find a way of setting it in Jaspersoft for just the report
(I don't think it will let you run multiple sql statements by default,
maybe
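A sketch of such a session-level quick fix (the specific setting is
illustrative, not something the thread settled on):

    BEGIN;
    SET LOCAL enable_nestloop = off;  -- scoped to this transaction only
    -- ... run the report query here ...
    COMMIT;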
Hi,
On 18.3.2015 18:31, Vivekanand Joshi wrote:
> So, here is the first taste of success, which gives me the
> confidence that if properly worked out with good hardware and
> proper tuning, PostgreSQL could be a good replacement.
>
> Out of the 9 reports which need to be migrated to PostgreSQL ...
> To: vjo...@zetainteractive.com
> Cc: Tomas Vondra; pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] Performance issues
>
> Vivekanand Joshi writes:
>
> > So, here is the first taste of success, which gives me the
> > confidence that if properly worked out with good hardware and
> > proper tuning, PostgreSQL could be a good replacement.
iever...@comcast.net]
Sent: Thursday, March 19, 2015 12:06 AM
To: vjo...@zetainteractive.com
Cc: Tomas Vondra; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Performance issues
Vivekanand Joshi writes:
> So, here is the first taste of success, which gives me the
> confidence that if properly worked out with good hardware and
> proper tuning, PostgreSQL could be a good replacement.
>
>
> Regards,
> Vivek
>
> -Original Message-
> From: pgsql-performance-ow...@postgresql.org
> [mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Tomas Vondra
> Sent: Tuesday, March 17, 2015 9:00 PM
> To: pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] Performance issues
Vivek
>
> -Original Message-
> From: pgsql-performance-ow...@postgresql.org
> [mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Tomas Vondra
> Sent: Tuesday, March 17, 2015 9:00 PM
> To: pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] Performance issues
From: pgsql-performance-ow...@postgresql.org
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Tomas Vondra
Sent: Tuesday, March 17, 2015 9:00 PM
To: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Performance issues
On 17.3.2015 16:24, Thomas Kellerer wrote:
> Tomas Vondra schrieb am 17.03.2015 um 15:43:
On 17.3.2015 16:24, Thomas Kellerer wrote:
> Tomas Vondra schrieb am 17.03.2015 um 15:43:
>> On 17.3.2015 15:19, Thomas Kellerer wrote:
>>> Tomas Vondra schrieb am 17.03.2015 um 14:55:
(2) using window functions, e.g. like this:
SELECT * FROM (
SELECT *,
Tomas Vondra schrieb am 17.03.2015 um 15:43:
> On 17.3.2015 15:19, Thomas Kellerer wrote:
>> Tomas Vondra schrieb am 17.03.2015 um 14:55:
>>> (2) using window functions, e.g. like this:
>>>
>>> SELECT * FROM (
>>>SELECT *,
>>> ROW_NUMBER() OVER (PARTITION BY touchpoint_execution_id
On 17.3.2015 16:10, Vivekanand Joshi wrote:
> The confusion for me here is that:
>
>
> I am getting results from the view in around 3 seconds
> (S_V_D_CAMPAIGN_HIERARCHY) and 25 seconds (S_V_F_PROMOTION_HISTORY_EMAIL)
>
> But when I am using these two views in the query as the joining
> tables
Regards,
Vivek
-Original Message-
From: Vivekanand Joshi [mailto:vjo...@zetainteractive.com]
Sent: Tuesday, March 17, 2015 8:40 PM
To: 'Tomas Vondra'; 'pgsql-performance@postgresql.org'
Subject: RE: [PERFORM] Performance issues
The confusion for me here is that:
From: pgsql-performance-ow...@postgresql.org
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of Tomas Vondra
Sent: Tuesday, March 17, 2015 8:13 PM
To: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Performance issues
On 17.3.2015 15:19, Thomas Kellerer wrote:
> Tomas Vondra schrieb am 17.03.2015 um 14:55:
>> (2) using window functions, e.g. like this:
On 17.3.2015 15:19, Thomas Kellerer wrote:
> Tomas Vondra schrieb am 17.03.2015 um 14:55:
>> (2) using window functions, e.g. like this:
>>
>> SELECT * FROM (
>>SELECT *,
>> ROW_NUMBER() OVER (PARTITION BY touchpoint_execution_id
>>           ORDER BY max_creation_dt) AS rn
Tomas Vondra schrieb am 17.03.2015 um 14:55:
> (2) using window functions, e.g. like this:
>
> SELECT * FROM (
>SELECT *,
> ROW_NUMBER() OVER (PARTITION BY touchpoint_execution_id
>           ORDER BY max_creation_dt) AS rn
>FROM s_f_touc
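The snippets above all truncate the same example; a reconstruction for
reference (the full table name completing "s_f_touc", the DESC ordering, and
the final rn = 1 filter are assumptions, since every snippet cuts off):

    SELECT * FROM (
       SELECT *,
              ROW_NUMBER() OVER (PARTITION BY touchpoint_execution_id
                                 ORDER BY max_creation_dt DESC) AS rn
       FROM s_f_touchpoint_execution_status_history  -- assumed full name
    ) t
    WHERE rn = 1;  -- keep one row per touchpoint_execution_id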
Just as I feared, the attached explain analyze results show significant
misestimates, like this for example:
Nested Loop (cost=32782.19..34504.16 rows=1 width=16)
(actual time=337.484..884.438 rows=46454 loops=1)
Nested Loop (cost=18484.94..20366.29 rows=1 width=776)
(actual time=
To: 'Tomas Vondra'; 'pgsql-performance@postgresql.org'
Subject: RE: [PERFORM] Performance issues
Hi Tomas,
This is what I am getting,
EXPLAIN ANALYZE SELECT * FROM s_f_promotion_history WHERE (send_dt >=
'2014-03-13 00:00:00');
http://explain.depesz.com/s/WxI
Regards,
Vivek
-Original Message-
From: Tomas Vondra [mailto:tomas.von...@2ndquadrant.com]
Sent: Tuesday, March 17, 2015 5:15 PM
To: vjo...@zetainteractive.com; Jim Nasby; Scott Marlowe; Varadharajan
Mukundan
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Performance issues
On 17.3.2015 12:07, Vivekanand Joshi wrote:
> EXPLAIN ANALYZE didn't give result even after three hours.
In that case the only thing you can do is 'slice' the query into smaller
parts (representing subtrees of the plan), and analyze those first. Look
for misestimates (significant differences between estimated and actual
row counts).
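Applied to this thread, slicing might look like timing each view on its own
before timing their join (a sketch; the view names are the ones quoted
earlier):

    EXPLAIN ANALYZE SELECT * FROM S_V_D_CAMPAIGN_HIERARCHY;
    EXPLAIN ANALYZE SELECT * FROM S_V_F_PROMOTION_HISTORY_EMAIL;
    -- then EXPLAIN ANALYZE the join of the two and compare estimated
    -- vs. actual row counts at each plan node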
Hi,
On 17.3.2015 08:41, Vivekanand Joshi wrote:
> Hi Guys,
>
> Next level of query is following:
>
> If this works, I guess 90% of the problem will be solved.
>
> SELECT
> COUNT(DISTINCT TARGET_ID)
> FROM
> S_V_F_PROMOTION_HISTORY_EMAIL PH
Cc: 'pgsql-performance@postgresql.org'
Subject: RE: [PERFORM] Performance issues
Hi Guys,
Next level of query is following:
If this works, I guess 90% of the problem will be solved.
SELECT
COUNT(DISTINCT TARGET_ID)
FROM
S_V_F_PROMOTION_HISTORY_EMAIL PH
Sent: Tuesday, March 17, 2015 5:36 AM
To: Tomas Vondra; vjo...@zetainteractive.com; Scott Marlowe; Varadharajan
Mukundan
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Performance issues
On 3/16/15 3:59 PM, Tomas Vondra wrote:
> On 16.3.2015 20:43, Jim Nasby wrote:
>> On 3/13/15 7:12 PM, Tomas Vondra wrote:
On 3/16/15 3:59 PM, Tomas Vondra wrote:
> On 16.3.2015 20:43, Jim Nasby wrote:
>> On 3/13/15 7:12 PM, Tomas Vondra wrote:
>>> (4) I suspect many of the relations referenced in the views are not
>>>     actually needed in the query, i.e. the join is performed but
>>>     then it's just discarded because those columns are not used.
On 16.3.2015 20:43, Jim Nasby wrote:
> On 3/13/15 7:12 PM, Tomas Vondra wrote:
>> (4) I suspect many of the relations referenced in the views are not
>> actually needed in the query, i.e. the join is performed but
>> then it's just discarded because those columns are not used.
>> Try to simplify the views as much as possible - remove ...
On 3/13/15 7:12 PM, Tomas Vondra wrote:
> (4) I suspect many of the relations referenced in the views are not
>     actually needed in the query, i.e. the join is performed but
>     then it's just discarded because those columns are not used.
>     Try to simplify the views as much as possible - remove ...
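An illustration of point (4) (the view and table names here are hypothetical,
not from the thread): PostgreSQL generally cannot eliminate an inner join
even when none of its columns are used, so a view that joins only to expose
unused columns still pays for the join.

    -- Joins a dimension table only to expose campaign_name:
    CREATE VIEW v_promo_full AS
    SELECT ph.*, c.campaign_name
      FROM s_f_promotion_history ph
      JOIN d_campaign c ON c.campaign_id = ph.campaign_id;

    -- Queries that never read campaign_name still execute that join;
    -- a slimmer view avoids it:
    CREATE VIEW v_promo_slim AS
    SELECT ph.* FROM s_f_promotion_history ph;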
Hey guys, thanks a lot.
This is really helping. I am learning a lot. BTW, I changed the CTE into
a subquery and it improved the performance by miles. I am getting the result
in less than 3 seconds, though I am using a 24 GB RAM server. It is still a
great turnaround time as compared to previous executions ...
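The CTE-to-subquery gain fits how the planner treated WITH at the time: before
PostgreSQL 12, a CTE was an optimization fence. A minimal sketch (the filter
and columns are illustrative, borrowed from queries quoted in this thread):

    -- CTE form: the outer date filter cannot be pushed into the CTE,
    -- so the whole table is scanned and materialized first.
    WITH ph AS (SELECT target_id, send_dt FROM s_f_promotion_history)
    SELECT COUNT(DISTINCT target_id)
      FROM ph
     WHERE send_dt >= '2014-03-13 00:00:00';

    -- Subquery form: the planner flattens it and can use an index on send_dt.
    SELECT COUNT(DISTINCT target_id)
      FROM (SELECT target_id, send_dt FROM s_f_promotion_history) ph
     WHERE send_dt >= '2014-03-13 00:00:00';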
On 16.3.2015 18:49, Marc Mamin wrote:
>
>> Hi Team,
>>
>> This is the EXPLAIN ANALYZE for one of the views: S_V_D_CAMPAIGN_HIERARCHY:
FWIW, this is a somewhat more readable version of the plan:
http://explain.depesz.com/s/nbB
In the future, please do two things:
(1) Attach the plan as a text file ...
> ... rows=1 width=120) (actual time=0.001..0.002 rows=1 loops=67508)
>       Index Cond: (camp.category_id = category_id)
> ->  CTE Scan on valid_executions  (cost=0.00..0.02 rows=1 width=8)
>     (actual time=0.004..6.803 rows=52997 loops=67508)
> Total runtime: 966566.574 ms
>
>
Sent: Sunday, March 15, 2015 3:02 AM
To: Varadharajan Mukundan
Cc: Tomas Vondra; vjo...@zetainteractive.com; Scott Marlowe;
pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Performance issues
On 15/03/15 10:23, Varadharajan Mukundan wrote:
> Hi Gavin,
>
> Vivekanand in his first mail itself mentioned the below configuration ...
On 15/03/15 10:23, Varadharajan Mukundan wrote:
Hi Gavin,
Vivekanand in his first mail itself mentioned the below configuration
of postgresql.conf. It looks good enough to me.
Total Memory : 8 GB
shared_buffers = 2GB
work_mem = 64MB
maintenance_work_mem = 700MB
effective_cache_size = 4GB
Hi Gavin,
Vivekanand in his first mail itself mentioned the below configuration
of postgresql.conf. It looks good enough to me.
Total Memory : 8 GB
shared_buffers = 2GB
work_mem = 64MB
maintenance_work_mem = 700MB
effective_cache_size = 4GB
On Sat, Mar 14, 2015 at 10:06 PM, Gavin Flower
wrote:
On 14/03/15 13:12, Tomas Vondra wrote:
> On 14.3.2015 00:28, Vivekanand Joshi wrote:
>> Hi Guys,
>>
>> So here is the full information attached as well as in the link
>> provided below:
>>
>> http://pgsql.privatepaste.com/41207bea45
>>
>> I can provide new information as well.
>
> Thanks.
>
> We still don't have EXPLAIN ANALYZE - how long was the query ...
On 14.3.2015 00:28, Vivekanand Joshi wrote:
> Hi Guys,
>
> So here is the full information attached as well as in the link
> provided below:
>
> http://pgsql.privatepaste.com/41207bea45
>
> I can provide new information as well.
Thanks.
We still don't have EXPLAIN ANALYZE - how long was the query ...
Sent: Saturday, March 14, 2015 3:56 AM
To: Varadharajan Mukundan
Cc: vjo...@zetainteractive.com; Tomas Vondra;
pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Performance issues
On Fri, Mar 13, 2015 at 4:03 PM, Varadharajan Mukundan
wrote:
>> We might even consider taking experts' advice on how to tune queries and ...
On Fri, Mar 13, 2015 at 4:03 PM, Varadharajan Mukundan
wrote:
>> We might even consider taking experts' advice on how to tune queries and
>> server, but if postgres is going to behave like this, I am not sure we would
>> be able to continue with it.
>>
>> Having said that, I would say again that I am completely new to this
>> territory, so I might miss lots and lots ...
> We might even consider taking experts' advice on how to tune queries and
> server, but if postgres is going to behave like this, I am not sure we would
> be able to continue with it.
>
> Having said that, I would say again that I am completely new to this
> territory, so I might miss lots and lots
On 13.3.2015 21:46, Vivekanand Joshi wrote:
> Since I was doing it only for testing purposes and on a
> development server which has only 8 GB of RAM, I used only 10m rows.
> But the original table has 1.5 billion rows. We will obviously be
> using a server with very high capacity, but I am not satisfied with the
> performance at all.
Since I was doing it only for testing purposes and on a development
server which has only 8 GB of RAM, I used only 10m rows. But the original
table has 1.5 billion rows. We will obviously be using a server with very
high capacity, but I am not satisfied with the performance at all. This
might
Hi,
On 13.3.2015 20:59, Vivekanand Joshi wrote:
> I am really worried about the performance of PostgreSQL as we have
> almost 1.5 billion records in the promotion history table. Do you guys
In the previous message you claimed the post table has 10M rows ...
> really think PostgreSQL can handle this much load?
I am really worried about the performance of PostgreSQL as we have almost
1.5 billion records in the promotion history table. Do you guys really think
PostgreSQL can handle this much load? We have fact tables which are more
than 15 GB in size and we have to make joins with those tables in almost
every
If the s_f_promotion_history table will have explosive growth, then it's
worth considering partitioning by date and using constraint exclusion to
speed up the queries. Else, it makes sense to get started with multiple
partial indexes (like an index for each week or something like that). You
may ...
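Sketches of both suggestions (the index/partition names and date ranges are
illustrative; the table and its send_dt column come from the thread):

    -- A weekly partial index; usable when the query's WHERE clause
    -- implies the index predicate:
    CREATE INDEX promo_hist_2015_w11 ON s_f_promotion_history (target_id)
     WHERE send_dt >= '2015-03-09' AND send_dt < '2015-03-16';

    -- An inheritance-style partition with a CHECK constraint, so
    -- constraint_exclusion can skip irrelevant children (pre-10 style):
    CREATE TABLE s_f_promotion_history_2015_03 (
        CHECK (send_dt >= DATE '2015-03-01' AND send_dt < DATE '2015-04-01')
    ) INHERITS (s_f_promotion_history);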
10 million records in s_f_promotion_history table.
From: Varadharajan Mukundan [mailto:srinath...@gmail.com]
Sent: Friday, March 13, 2015 6:29 PM
To: vjo...@zetainteractive.com
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Performance issues
Hi Vivekanand,
Hi Vivekanand,
From the query plan, we can see that a good amount of time is spent in this
line:
-> Seq Scan on public.s_f_promotion_history base (cost=0.00..283334.00
rows=1296 width=74)
Output: base.promo_hist_id, base.target_id,
base.audience_member_id, base.touchpoint_execution_id, ...
Hi Team,
I am a novice to this territory. We are trying to migrate a few Jasper
reports from Netezza to PostgreSQL.
I have one report ready with me but queries are taking too much time. To be
honest, it is not giving any result most of the time.
The same query in Netezza is running in less
On 3/8/2011 10:58 AM, Andreas Forø Tollefsen wrote:
Andy, thanks. That is a great tip. I tried it but I get the error:
NOTICE: ptarray_simplify returned a <2 pts array.
Query:
SELECT ST_Intersection(priogrid_land.cell,
ST_Simplify(cshapeswdate.geom,0.1)) AS geom,
priogrid_land.gid AS divider, gwcode, gwsyear, gweyear, startdate, enddate,
capname, ...
Forgot to mention that the query terminates the connection because of a
crash of the server process.
2011/3/8 Andreas Forø Tollefsen
> Andy, thanks. That is a great tip. I tried it but I get the error:
> NOTICE: ptarray_simplify returned a <2 pts array.
>
> Query:
> SELECT ST_Intersection(priogrid_land.cell,
> ST_Simplify(cshapeswdate.geom,0.1)) AS geom, ...
Andy, thanks. That is a great tip. I tried it but I get the error:
NOTICE: ptarray_simplify returned a <2 pts array.
Query:
SELECT ST_Intersection(priogrid_land.cell,
ST_Simplify(cshapeswdate.geom,0.1)) AS geom,
priogrid_land.gid AS divider, gwcode, gwsyear, gweyear, startdate, enddate,
capname,
I have seen really complex geometries cause problems. If you have
thousands of points, when 10 would do, try ST_Simplify and see if it
doesn't speed things up.
-Andy
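One possible variant for the "<2 pts array" NOTICE above (my suggestion, not
something proposed in the thread): ST_SimplifyPreserveTopology is documented
to avoid creating degenerate geometries. A sketch against the same tables;
the spatial join predicate is an assumption:

    SELECT ST_Intersection(
             priogrid_land.cell,
             ST_SimplifyPreserveTopology(cshapeswdate.geom, 0.1)) AS geom,
           priogrid_land.gid AS divider
      FROM priogrid_land, cshapeswdate
     WHERE ST_Intersects(priogrid_land.cell, cshapeswdate.geom);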
On 3/8/2011 2:42 AM, Andreas Forø Tollefsen wrote:
Hi. Thanks for the comments. My data is right, and the result is exactly
what I want ...
Hi. Thanks for the comments. My data is right, and the result is exactly
what I want, but as you say I think what causes the query to be slow is the
ST_Intersection which creates the intersection between the vector grid
(fishnet) and the country polygons.
I will check with the postgis user list if
Andreas Forø Tollefsen writes:
> This is a query i am working on now. It creates an intersection of two
> geometries. One is a grid of 0.5 x 0.5 decimal degree sized cells, while the
> other is the country geometries of all countries in the world for a certain
> year.
Hm, are y
On Mon, Mar 07, 2011 at 10:49:48PM +0100, Andreas Forø Tollefsen wrote:
> The synchronous_commit off increased the TPS, but not the speed of the below
> query.
>
> Oleg:
> This is a query I am working on now. It creates an intersection of two
> geometries. One is a grid of 0.5 x 0.5 decimal degree
The synchronous_commit off increased the TPS, but not the speed of the below
query.
Oleg:
This is a query I am working on now. It creates an intersection of two
geometries. One is a grid of 0.5 x 0.5 decimal degree sized cells, while the
other is the country geometries of all countries in the world for a certain
year ...
On Mon, 7 Mar 2011, Andreas Forø Tollefsen wrote:
> Ok. Cheers. I will do some more testing on my heavy PostGIS queries which
> often take hours to complete.
I'd like to see hours long queries :) EXPLAIN ANALYZE
Thanks.
Andreas
2011/3/7 Kenneth Marshall
On Mon, Mar 07, 2011 at 03:17:05PM +0100, Andreas Forø Tollefsen wrote:
Ok. Cheers. I will do some more testing on my heavy PostGIS queries which
often take hours to complete.
Thanks.
Andreas
2011/3/7 Kenneth Marshall
> On Mon, Mar 07, 2011 at 03:17:05PM +0100, Andreas Forø Tollefsen wrote:
> > Thanks, Ken.
> >
> > It seems like the tip to turn off synchronous_com
On Mon, Mar 07, 2011 at 03:17:05PM +0100, Andreas Forø Tollefsen wrote:
> Thanks, Ken.
>
> It seems like the tip to turn off synchronous_commit did the trick:
>
> /usr/lib/postgresql/8.4/bin/pgbench -T 60 test1
> starting vacuum...end.
> transaction type: TPC-B (sort of)
> scaling factor: 1
> query mode: simple
Thanks, Ken.
It seems like the tip to turn off synchronous_commit did the trick:
/usr/lib/postgresql/8.4/bin/pgbench -T 60 test1
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 1
query mode: simple
number of clients: 1
duration: 60 s
number of transactions actually processed: ...
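For reference, the change being tested is a single postgresql.conf line
(a sketch; the setting is the only thing the thread confirms):

    # Commit without waiting for WAL flush: higher TPS, at the risk of
    # losing the last few transactions on a crash (no data corruption):
    synchronous_commit = off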
On Mon, Mar 07, 2011 at 02:45:03PM +0100, Andreas Forø Tollefsen wrote:
> Hi,
>
> I am running Postgresql 8.4.7 with Postgis 2.0 (for raster support).
> The server has mainly 1 user, for spatial data processing. This involves queries
> that can take hours.
>
> This is running on an Ubuntu 10.10 server with a Core2Duo 6600 @ 2.4 GHz,
> 6 GB RAM.
Hi,
I am running Postgresql 8.4.7 with Postgis 2.0 (for raster support).
The server has mainly 1 user, for spatial data processing. This involves queries
that can take hours.
This is running on an Ubuntu 10.10 server with a Core2Duo 6600 @ 2.4 GHz, 6 GB
RAM.
My postgresql.conf:
# - Memory -
shared_buffers = ...
On Fri, Jul 2, 2010 at 1:40 AM, Sachin Kumar wrote:
> At times we have observed that postgres stops responding for several
> minutes, even couldn't fetch the number of entries in a particular table.
> One such instance happens when we execute the following steps:
Sounds sort of like a checkpoint
On 29/06/10 15:01, Sachin Kumar wrote:
> At times we have observed that postgres stops responding for several
> minutes, even couldn't fetch the number of entries in a particular
> table.
Quick guess: checkpoints. Enable checkpoint logging, follow the logs,
see if there's any correspondence.
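A sketch of the suggested logging, with 8.4-era checkpoint knobs (values are
illustrative):

    # postgresql.conf
    log_checkpoints = on                 # log each checkpoint's timing/buffers
    checkpoint_segments = 16             # spread checkpoints further apart
    checkpoint_completion_target = 0.7   # smooth checkpoint I/O over time

If the stalls line up with "checkpoint starting" lines in the log, checkpoint
tuning is the likely fix.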
Hi,
We are using postgresql-8.4.0 on a 64-bit Linux machine (openSUSE 11.x).
It's a master/slave deployment & slony-2.0.4.rc2 is used for DB
replication on the slave box.
At times we have observed that postgres stops responding for several
minutes, even couldn't fetch the number of entries in a particular table.
On Fri, Jul 2, 2010 at 1:40 AM, Sachin Kumar wrote:
> Hi,
>
> We are using postgresql-8.4.0 on a 64-bit Linux machine (openSUSE 11.x). It's
> a master/slave deployment & slony-2.0.4.rc2 is used for DB replication (from
> master to slave).
You should really be running 8.4.4, not 8.4.0, as there are
Hi,
We are using postgresql-8.4.0 on a 64-bit Linux machine (openSUSE 11.x).
It's a master/slave deployment & slony-2.0.4.rc2 is used for DB
replication (from master to slave).
At times we have observed that postgres stops responding for several
minutes, even couldn't fetch the number of entries in a particular table.
On Wed, May 12, 2010 at 1:45 AM, venu madhav wrote:
> [Venu] Yes, autovacuum is running every hour. I could see in the log
> messages. All the configurations for autovacuum are disabled except that it
> should run for every hour. This application runs on an embedded box, so
> can't change the parameters ...
On Wed, May 12, 2010 at 7:26 PM, Kevin Grittner wrote:
> venu madhav wrote:
>
> >> > If the records are more in the interval,
> >>
> >> How do you know that before you run your query?
> >>
> > I calculate the count first.
>
> This and other comments suggest that the data is totally static
> while this application is running. Is that correct?
On Wed, May 12, 2010 at 5:25 PM, Kevin Grittner wrote:
> venu madhav wrote:
>
> >>> AND e.timestamp >= '1270449180'
> >>> AND e.timestamp < '1273473180'
> >>> ORDER BY
> >>> e.cid DESC,
> >>> e.cid DESC
> >>> limit 21
> >>> offset 10539780
>
> > The second column acts as a secondary key for sorting if the primary
> > sorting key is a different column.
On Wed, May 12, 2010 at 3:20 AM, Kevin Grittner wrote:
> venu madhav wrote:
>
> > When I try to get the last twenty records from the database, it
> > takes around 10-15 mins to complete the operation.
>
> Making this a little easier to read (for me, at least) I get this:
>
> select e.cid, timestamp, s.sig_class, s.sig_priority, s.sig_name, ...
>
> From: pgsql-performance-ow...@postgresql.org [mailto:
> pgsql-performance-ow...@postgresql.org] On Behalf Of venu madhav
> Sent: Tuesday, May 11, 2010 2:18 PM
> To: pgsql-performance@postgresql.org
On Wed, May 12, 2010 at 3:17 AM, Jorge Montero <
jorge_mont...@homedecorators.com> wrote:
> First, are you sure you are getting autovacuum to run hourly? Autovacuum
> will only vacuum when certain configuration thresholds are reached. You can
> set it to only check for those thresholds every so often, but no vacuuming
> or analyzing will be done unless they are hit, regardless of how often ...
venu madhav wrote:
> Kevin Grittner wrote:
>>> I calculate the count first.
>>
>> This and other comments suggest that the data is totally static
>> while this application is running. Is that correct?
>>
> No, the data gets added when the application is running. As I've
> mentioned before it could be as
On 5/12/10 4:55 AM, Kevin Grittner wrote:
> venu madhav wrote:
>> we display in sets of 20/30 etc. The user also has the option to
>> browse through any of those records, hence the limit and offset.
>
> Have you considered alternative techniques for paging? You might
> use values at the edges of the page to ...
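The alternative Kevin is pointing at is keyset (a.k.a. cursor) pagination.
A sketch; :last_cid is a placeholder for the edge value saved from the
previous page, and the query shape comes from the one quoted later in this
thread:

    SELECT e.cid, timestamp, s.sig_class, s.sig_priority, s.sig_name
      FROM event e, signature s
     WHERE s.sig_id = e.signature
       AND e.timestamp >= '1270449180'
       AND e.timestamp <  '1273473180'
       AND e.cid < :last_cid   -- seek past the previous page
     ORDER BY e.cid DESC
     LIMIT 21;   -- no OFFSET, so cost no longer grows with page number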
venu madhav wrote:
>> > If the records are more in the interval,
>>
>> How do you know that before you run your query?
>>
> I calculate the count first.
This and other comments suggest that the data is totally static
while this application is running. Is that correct?
> If I generate all the
venu madhav wrote:
>>> AND e.timestamp >= '1270449180'
>>> AND e.timestamp < '1273473180'
>>> ORDER BY
>>> e.cid DESC,
>>> e.cid DESC
>>> limit 21
>>> offset 10539780
> The second column acts as a secondary key for sorting if the
> primary sorting key is a different column. For this query both ...
> * select e.cid, timestamp, s.sig_class, s.sig_priority, s.sig_name,
> e.sniff_ip, e.sniff_channel, s.sig_config, e.wifi_addr_1,
> e.wifi_addr_2, e.view_status, bssid FROM event e, signature s WHERE
> s.sig_id = e.signature AND e.timestamp >= '1270449180' AND e.timestamp
> < '1273473180' ORDER BY e.cid DESC, e.cid DESC LIMIT 21 OFFSET 10539780
To: pgsql-performance@postgresql.org
Subject: [PERFORM] Performance issues when the number of records are around 10
Million
Hi all,
In my database application, I have a table whose records can reach 10M and
insertions can happen at a fast rate, like 100 insertions per second in the
peak times ...
First, are you sure you are getting autovacuum to run hourly? Autovacuum will
only vacuum when certain configuration thresholds are reached. You can set it
to only check for those thresholds every so often, but no vacuuming or
analyzing will be done unless they are hit, regardless of how often a
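The thresholds Jorge describes are configurable; for illustration, the stock
settings of that era look like this:

    # postgresql.conf -- a table is vacuumed when its dead rows exceed
    # threshold + scale_factor * reltuples, checked every naptime:
    autovacuum = on
    autovacuum_naptime = 1min
    autovacuum_vacuum_threshold = 50
    autovacuum_vacuum_scale_factor = 0.2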
venu madhav wrote:
> When I try to get the last twenty records from the database, it
> takes around 10-15 mins to complete the operation.
Making this a little easier to read (for me, at least) I get this:
select e.cid, timestamp, s.sig_class, s.sig_priority, s.sig_name,
e.sniff_ip, e.sniff_channel, s.sig_config, e.wifi_addr_1, ...
Hi all,
In my database application, I have a table whose records can reach 10M
and insertions can happen at a fast rate, like 100 insertions per second in
the peak times. I configured postgres to do auto vacuum on an hourly basis. I
have a frontend GUI application in CGI which displays the data from ...
Hi Tom, Greg,
Thanks for your helpful suggestions - switching the BIGINT to FLOAT
and fixing the postgresql.conf to better match my server configuration
gave me about a 30% speedup on the queries.
Because my data insert order was almost never the
data retrieval order, I also got a
On Wed, 26 Aug 2009, Hrishikesh (हृषीकेश मेहेंदळे) wrote:
key = {device_id (uint64), identifier (uint32), sub_identifier (uint32),
unix_time} (these four taken together are unique)
You should probably tag these fields as NOT NULL to eliminate needing to
consider that possibility during query planning.
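A sketch of the table that key implies (the table name, the example data
columns, and the primary-key arrangement are assumptions; the key fields
and their types come from the message above):

    CREATE TABLE stats_300 (
        device_id      bigint  NOT NULL,  -- uint64 in the original store
        identifier     integer NOT NULL,  -- uint32
        sub_identifier integer NOT NULL,  -- uint32
        unix_time      integer NOT NULL,  -- epoch seconds
        bytes_in       bigint,            -- example data columns
        bytes_out      bigint,
        PRIMARY KEY (device_id, identifier, sub_identifier, unix_time)
    );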
2009/8/26 Tom Lane:
>> How does a float ("REAL") compare in terms of SUM()s ?
>
> Casting to float or float8 is certainly a useful alternative if you
> don't mind the potential for roundoff error. On any non-ancient
> platform those will be considerably faster than numeric. BTW,
> I think that 8
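Tom's float8 suggestion as a concrete sketch (run against the hypothetical
stats_300 table sketched above):

    -- SUM(bigint) returns numeric, which is slow; casting to float8
    -- trades exactness for speed (roundoff becomes possible):
    SELECT device_id, SUM(bytes_in::float8) AS bytes_in
      FROM stats_300
     GROUP BY device_id;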
Hrishikesh (हृषीकेश मेहेंदळे) writes:
> 2009/8/26 Tom Lane
>> Do the data columns have to be bigint, or would int be enough to hold
>> the expected range?
> For the 300-sec tables I probably can drop it to an integer ...
Hi Tom,
Thanks for your quick response.
2009/8/26 Tom Lane
> Hrishikesh (हृषीकेश मेहेंदळे) writes:
> > In my timing tests, the performance of PG is quite a lot worse than the
> > equivalent BerkeleyDB implementation.
>
> Are you actually comparing apples to apples? I don't recall that BDB
> has any built-in aggregation functionality ...
Hrishikesh (हृषीकेश मेहेंदळे) writes:
> In my timing tests, the performance of PG is quite a lot worse than the
> equivalent BerkeleyDB implementation.
Are you actually comparing apples to apples? I don't recall that BDB
has any built-in aggregation functionality ...
Hi All,
We are improving our network appliance monitoring system, and are evaluating
using PostgreSQL as the back-end traffic statistics database (we're
currently running a home-grown Berkeley-DB based statistics database).
We log data from various network elements (it's mainly in/out bytes and
packets ...
Scott Marlowe wrote:
Whatever email agent you're using seems to be quoting in a way that
doesn't get along well with gmail, so I'm just gonna chop most of it
rather than have it quoted confusingly... Heck, I woulda chopped a
lot anyway to keep it small. :)
Thanks again for your time. I'm u
Hi Gregory/All,
Thanks for your time.
Yes, the difference is pretty small but does seem to be consistent. The
problem I have is that this is just part of the query; I have tried
to break things down so that I can see where the time is being spent. I
set the default_statistics_target to
Whatever email agent you're using seems to be quoting in a way that
doesn't get along well with gmail, so I'm just gonna chop most of it
rather than have it quoted confusingly... Heck, I woulda chopped a
lot anyway to keep it small. :)
On Jan 28, 2008 9:27 AM, Matthew Lunnon <[EMAIL PROTECTED]> wrote:
Hi Scott,
Thanks for your time
Regards
Matthew
Scott Marlowe wrote:
On Jan 28, 2008 5:41 AM, Matthew Lunnon <[EMAIL PROTECTED]> wrote:
Hi
I am investigating migrating from postgres 743 to postgres 826 but
although the performance in postgres 826 seems to be generally better
there are some instances where it seems to be markedly worse, a factor ...
On Jan 28, 2008 5:41 AM, Matthew Lunnon <[EMAIL PROTECTED]> wrote:
> Hi
>
> I am investigating migrating from postgres 743 to postgres 826 but
> although the performance in postgres 826 seems to be generally better
> there are some instances where it seems to be markedly worse, a factor
> of up to 10. The problem seems to occur when I join to more than 4 ...
"Matthew Lunnon" <[EMAIL PROTECTED]> writes:
> In this case the query takes 6.037 ms to run on 826 and 2.332 ms to run on 743.
The difference between 2ms and 6ms is pretty negligible. A single context
switch or disk cache miss could throw the results off by that margin in either
direction.
But what
Hi
I am investigating migrating from postgres 743 to postgres 826 but
although the performance in postgres 826 seems to be generally better
there are some instances where it seems to be markedly worse, a factor
of up to 10. The problem seems to occur when I join to more than 4
tables. Has an
/ session / query?
Gregory
-Original Message-
From: Jim C. Nasby [mailto:[EMAIL PROTECTED]
Sent: Thursday, May 04, 2006 12:47 PM
To: Gregory Stewart
Cc: Mark Kirkwood; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Performance Issues on Opteron Dual Core
All the machines I
> > > > > FWIW, I've found problems running PostgreSQL on Windows in a
> > > > > multi-CPU environment on w2k3. It runs fine for some
> period, and
> > > > > then CPU and throughput drop to zero. So far I've
> been unable to
> > > > > track down any more information than that, other than the
>
On Wed, May 03, 2006 at 09:29:15AM +0200, Magnus Hagander wrote:
> > > > FWIW, I've found problems running PostgreSQL on Windows in a
> > > > multi-CPU environment on w2k3. It runs fine for some period, and
> > > > then CPU and throughput drop to zero. So far I've been unable to
> > > > track down any more information than that, other than the ...
>
>
> -Original Message-
> From: Jim C. Nasby [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, May 02, 2006 3:29 PM
> To: Mark Kirkwood
> Cc: Gregory Stewart; pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] Performance Issues on Opteron Dual Core
>
>
> On
On Wednesday 03 May 2006 03:29, Magnus Hagander wrote:
> > > > FWIW, I've found problems running PostgreSQL on Windows in a
> > > > multi-CPU environment on w2k3. It runs fine for some period, and
> > > > then CPU and throughput drop to zero. So far I've been unable to
> > > > track down any more information than that, other than the ...
> > > FWIW, I've found problems running PostgreSQL on Windows in a
> > > multi-CPU environment on w2k3. It runs fine for some period, and
> > > then CPU and throughput drop to zero. So far I've been unable to
> > > track down any more information than that, other than the
> fact that
> > > I h
Cc: Mark Kirkwood; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Performance Issues on Opteron Dual Core
On Sun, Apr 30, 2006 at 10:59:56PM +1200, Mark Kirkwood wrote:
> Pgadmin can give misleading times for queries that return large result
> sets over a network, due to:
>
> 1/ It takes time t