type of bounty
system where people can contribute to developing features?
Regards, David
On 29/07/12 00:13, Tom Lane wrote:
David Barton writes:
I am running postgres 9.1.4 on Ubuntu 12.04 and the stats collector is
generating very high IO usage even when nothing appears to be happening
on the
Thanks, Pavel.
I was a bit worried about trying this because of a comment elsewhere
that the file was supposed to be permanent. Tom's solution of
increasing the vacuum delay has solved it for now.
Regards, David
On 28/07/12 15:07, Pavel St
filter condition on something indexed, e.g. last
week or last month or location or something, you might do better if the
data does exhibit disk locality. If the data really is scattered, then
a seq scan really will be quicker.
Regards, David
On 06/08/12 23:08, Ioannis Anagnostopoulos wrote:
H
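A hedged sketch of that trade-off, with hypothetical names (an "orders" table
filtered on "created_at"): an index range scan only beats a seq scan when the
matching rows are physically clustered.
CREATE INDEX orders_created_idx ON orders (created_at);
-- pays off when created_at order roughly matches heap order:
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM orders
WHERE created_at >= now() - interval '7 days';
-- CLUSTER rewrites the table in index order, restoring disk locality:
CLUSTER orders USING orders_created_idx;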
Hi Jeff,
Thanks for the detailed reply.
On 13/08/12 05:23, Jeff Janes wrote:
On Fri, Jul 27, 2012 at 9:33 PM, David Barton wrote:
Hi,
I have roughly 150 different databases, each of which is running in 1 of
roughly 30 tablespaces. The databases are small (the dump of most is
under
rows, so my questions are
just so you can line up the right info for when the real experts get
online :-)
Regards, David
On 16/08/12 11:23, J Ramesh Kumar wrote:
Hi,
My application performs highly data-intensive operations (a high number of
inserts: 1500 per sec). I switched my application from MySQ
On 9/27/2012 1:11 PM, M. D. wrote:
I want to buy a new server, and am contemplating a Dell R710 or the
newer R720. The R710 has the x5600 series CPU, while the R720 has the
newer E5-2600 series CPU.
For this the best data I've found (excepting actually running tests on
the physical hardwar
On 9/27/2012 1:37 PM, Craig James wrote:
We use a "white box" vendor (ASA Computers), and have been very happy
with the results. They build exactly what I ask for and deliver it in
about a week. They offer on-site service and warranties, but don't
pressure me to buy them. I'm not locked in to
On 9/27/2012 1:56 PM, M. D. wrote:
I'm in Belize, so what I'm considering is from ebay, where it's
unlikely that I'll get the warranty. Should I consider some other
brand instead? To build my own or buy custom might be an option too,
but I would not get any warranty.
I don't have any recent ex
On 9/27/2012 2:55 PM, Scott Marlowe wrote:
Whatever you do, go for the Intel ethernet adaptor option. We've had so many
> headaches with integrated Broadcom NICs. :(
Sound advice, but not a get out of jail card unfortunately : we had a
horrible problem with the Intel e1000 driver in RHEL for sever
On 9/27/2012 2:47 PM, Shaun Thomas wrote:
On 09/27/2012 02:40 PM, David Boreham wrote:
I think the newer CPU is the clear winner with a specintrate
performance of 589 vs 432.
The comparisons you linked to had 24 absolute threads pitted against
32, since the newer CPUs have a higher maximum
On 9/27/2012 3:16 PM, Claudio Freire wrote:
Careful with AMD, since many (I'm not sure about the latest ones)
cannot saturate the memory bus when running single-threaded. So, great
if you have a high concurrent workload, quite bad if you don't.
Actually we test memory bandwidth with John McCalp
On 9/28/2012 9:46 AM, Craig James wrote:
Your best warranty would be to have the confidence to do your own
repairs, and to have the parts on hand. I'd seriously consider
putting your own system together. Maybe go to a few sites with
pre-configured machines and see what parts they use. Order th
On 10/2/2012 2:20 AM, Glyn Astill wrote:
newer R910s recently all of a sudden went dead to the world; no prior symptoms
showing in our hardware and software monitoring, no errors in the os logs,
nothing in the dell drac logs. After a hard reset it's back up as if
nothing happened, and it's an is
On Mon, Oct 08, 2012 at 04:40:31PM -0700, Craig James wrote:
>Nobody has commented on the hyperthreading question yet ... does it
>really matter? The old (fast) server has hyperthreading disabled, and
>the new (slower) server has hyperthreads enabled.
>If hyperthreading is definitel
I'm bringing up a new type of server using Intel E5-2620 (unisocket)
which was selected for good SpecIntRate performance vs cost/power (201
for $410 and 95W).
I was assuming it was 6-core, but I just noticed it has HT, which is
currently enabled since I see 12 cores in /proc/cpuinfo.
Question f
On 11/6/2012 9:16 PM, Mark Kirkwood wrote:
I've been benchmarking a E5-4640 (4 socket) and hyperthreading off
gave much better scaling behaviour in pgbench (gentle rise and flatten
off), whereas with hyperthreading on there was a dramatic falloff
after approx number clients = number of (hype
On 11/7/2012 6:37 AM, Devrim GÜNDÜZ wrote:
HT should be good for file servers, or say many of the app servers, or
small web/mail servers. PostgreSQL relies on the CPU power, and since
the HT CPUs don't have the same power as the original CPU, when the OS
submits a job to that particular HTed CPU, qu
Well, the results are in and at least in this particular case
conventional wisdom is overturned. Not a huge benefit, but throughput is
definitely higher with HT enabled and nthreads >> ncores:
HT off :
bash-4.1$ /usr/pgsql-9.2/bin/pgbench -T 600 -j 48 -c 48
starting vacuum...end.
transaction t
On 11/8/2012 6:58 AM, Shaun Thomas wrote:
On 11/07/2012 09:16 PM, David Boreham wrote:
bash-4.1$ /usr/pgsql-9.2/bin/pgbench -T 600 -j 48 -c 48
Unfortunately without -S, you're not really testing the processors. A
regular pgbench can fluctuate more than that due to writing and
checkp
Here are the SELECT only pgbench test results from my E5-2620 machine,
with HT on and off:
HT off:
bash-4.1$ /usr/pgsql-9.2/bin/pgbench -T 600 -j 48 -c 48 -S
starting vacuum...end.
transaction type: SELECT only
scaling factor: 100
query mode: simple
number of clients: 48
number of threads: 48
d
Have a query using a CTE that is performing very poorly. The equivalent query
against the same data in an Oracle database runs in under 1 second, in Postgres
it takes 2000 seconds.
The smp_pkg.get_invoice_charges queries fedexinvoices for some data and
normalizes it into a SETOF some record ty
id)).*
from fedexinvoices
WHERE
trim(fedexinvoices.trackno)='799159791643'
and
(smp_pkg.get_invoice_charges(id)).charge_name IN ('ADDRESS CORRECTION
CHARGE','ADDRESS CORRECTION')
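A hedged note on the query shape above, not necessarily this thread's root
cause: (f(x)).* historically expands to one call of f per output column, so
smp_pkg.get_invoice_charges(id) may run several times per row. On 9.3+, and
assuming the function returns a named composite type, a LATERAL join
evaluates it once per row:
SELECT fi.*, ch.*
FROM fedexinvoices fi
CROSS JOIN LATERAL smp_pkg.get_invoice_charges(fi.id) AS ch  -- one call per row
WHERE trim(fi.trackno) = '799159791643'
  AND ch.charge_name IN ('ADDRESS CORRECTION CHARGE', 'ADDRESS CORRECTION');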
-Original Message-
From: Andrew Dunstan [mailto:and...@dunslane.net]
Sent: W
-Original Message-
From: Andrew Dunstan [mailto:and...@dunslane.net]
Sent: Wednesday, November 14, 2012 11:08 AM
To: David Greco
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Poor performance using CTE
On 11/14/2012 10:56 AM, David Greco wrote:
> You're righ
I have a database with a few hundred million rows. I'm running the following
query:
select * from "Payments" as p
inner join "PaymentOrders" as po
on po."Id" = p."PaymentOrderId"
inner join "Users" as u
On u."Id" = po."UserId"
INNER JOIN "Roles" as r
on u."RoleId" = r."Id"
Where r."Name" = 'Moses
po."Id" WHERE
r."Name" = 'Moses' ) xLIMIT 1000;
but this solution also generates an incorrect query plan. Any idea how to solve
this query without omitting the LIMIT keyword? Thanks
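One hedged workaround on the 9.x releases current in this thread, where a CTE
acts as an optimization fence (this changed in PostgreSQL 12): materialize the
join first so the LIMIT cannot distort the join plan.
WITH matches AS (
    SELECT p.*
    FROM "Payments" p
    INNER JOIN "PaymentOrders" po ON po."Id" = p."PaymentOrderId"
    INNER JOIN "Users" u ON u."Id" = po."UserId"
    INNER JOIN "Roles" r ON u."RoleId" = r."Id"
    WHERE r."Name" = 'Moses'
)
SELECT * FROM matches LIMIT 1000;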
> Subject: RE: [PERFORM] PostgreSQL strange query plan for my query
> Date: Fri,
will be fixed
someday :)
Date: Fri, 16 Nov 2012 08:32:24 -0800
Subject: Re: [PERFORM] PostgreSQL strange query plan for my query
From: cja...@emolecules.com
To: dato0...@hotmail.com
CC: pgsql-performance@postgresql.org
On Fri, Nov 16, 2012 at 3:40 AM, David Popiashvili wrote:
I have databa
On 12/11/2012 7:13 PM, Evgeny Shishkin wrote:
Yes, I am aware of this issue. I have never experienced it with either the
Intel 520 or the OCZ Vertex 3.
Have you heard of them on this list?
People have done plug-pull tests and reported the results on the list
(sometime in the past couple of years).
But you
On 12/11/2012 7:20 PM, Evgeny Shishkin wrote:
Oh, there is no 100% safe system.
In this case we're discussing specifically "safety in the event of power
loss shortly after the drive indicates to the controller that it has
committed a write operation". Some drives do provide 100% safety against
On 12/11/2012 7:38 PM, Evgeny Shishkin wrote:
Which drives would you recommend? Besides intel 320 and 710.
Those are the only drive types we have deployed in servers at present
(almost all 710, but we have some 320 for less mission-critical
machines). The new DC-S3700 Series looks nice too, but
On 12/11/2012 7:49 PM, Evgeny Shishkin wrote:
Yeah, the S3700 looks promising, but the SATA interface is a limiting factor
for this drive.
I'm looking towards SMART ssd
http://www.storagereview.com/smart_storage_systems_optimus_sas_enterprise_ssd_review
What don't you like about SATA?
I prefer to avo
On 12/11/2012 8:11 PM, Evgeny Shishkin wrote:
Quoting
http://www.storagereview.com/intel_ssd_dc_s3700_series_enterprise_ssd_review
Heh. A fine example of the kind of hand-waving of which I spoke ;)
Higher performance is certainly a benefit, although at present we can't
saturate even a single
I'm working with my own fork of the FDW, so I could build these in myself,
but I was wondering if there's more generic support for this sort of stuff.
Or at the least, if I do implement it can I push it into another stats
collection framework rather than logging it.
Thanks,
David Crawford
mething else
might be a good test.
David
Josh/Johnny,
We've been seeing a similar problem as well, and had also figured THP was
involved. We found this in syslog:
https://gist.github.com/davewhittaker/4723285, which led us to disable THP
2 days ago. At first the results seemed good. In particular, our issues
always seemed interrupt re
Hi Josh,
On Wed, Feb 6, 2013 at 1:20 PM, Josh Krupka wrote:
> David,
>
> Interesting observations. I had not been tracking the interrupts but
> perhaps I should take a look. How are you measuring them over a period of
> time, or are you just getting them real time?
>
Hi all,
I'm encountering very poor query plans when joining against a union,
where one branch of the union is itself a join: specifically, Postgres
does the entire inside join and then filters the result, rather than
pushing the filters down to the joined tables. I've provided a
standalone test c
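A hedged sketch of the reported shape, with hypothetical names:
SELECT *
FROM t
JOIN (
    SELECT a.k, a.v FROM a
    UNION ALL
    SELECT b.k, c.v FROM b JOIN c USING (k)  -- the join branch
) u ON u.k = t.k
WHERE t.k = 42;
-- reported behavior: b JOIN c is computed in full and then filtered,
-- instead of k = 42 being pushed down into b and c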
On 7 March 2013 05:52, Tom Lane wrote:
> Josh Berkus writes:
>> On 03/06/2013 06:54 AM, David Leverton wrote:
>>> I'm encountering very poor query plans when joining against a union,
>
>> Actually, in case #4, Postgres *is* pushing down the join qual into the
On 7 March 2013 18:47, Tom Lane wrote:
> Good idea, but no such luck in that form: it's still not going to try to
> push the parameterization down into the sub-query. I think you'd have
> to write out the query with the views expanded and manually put the
> WHERE restrictions into the lowest join
On 3/13/2013 1:23 PM, Steve Crawford wrote:
What concerns me more than wear is this:
InfoWorld Article:
http://www.infoworld.com/t/solid-state-drives/test-your-ssds-or-risk-massive-data-loss-researchers-warn-213715
Referenced research paper:
https://www.usenix.org/conference/fast13/understa
On 3/13/2013 9:29 PM, Mark Kirkwood wrote:
Just going through this now with a vendor. They initially assured us
that the drives had "end to end protection" so we did not need to
worry. I had to post stripdown pictures from Intel's s3700, showing
obvious capacitors attached to the board before I
On 3/14/2013 3:37 PM, Mark Kirkwood wrote:
I'm not convinced about the need for BBU with SSD - you *can* use them
without one, just need to make sure about suitable longevity and also
the presence of (proven) power off protection (as discussed
previously). It is worth noting that using unproven o
On Thu, Mar 14, 2013 at 4:37 PM, David Boreham wrote:
> You might want to evaluate the performance you can achieve with a single-SSD
> (use several for capacity by all means) before considering a RAID card + SSD
> solution.
> Again I bet it depends on the application but our experie
On 3/20/2013 6:44 PM, David Rees wrote:
On Thu, Mar 14, 2013 at 4:37 PM, David Boreham wrote:
You might want to evaluate the performance you can achieve with a single-SSD
(use several for capacity by all means) before considering a RAID card + SSD
solution.
Again I bet it depends on the
On Thu, May 16, 2013 at 7:46 AM, Cuong Hoang wrote:
> For our application, a few seconds of data loss is acceptable.
If a few seconds of data loss is acceptable, I would seriously look at
the synchronous_commit setting and think about turning that off rather
than risk silent corruption with non-e
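A hedged sketch of that suggestion: asynchronous commit can lose the last few
seconds of acknowledged commits after a crash, but unlike fsync=off it never
corrupts the database. ALTER SYSTEM is 9.4+ syntax; edit postgresql.conf on
older releases.
ALTER SYSTEM SET synchronous_commit = off;
SELECT pg_reload_conf();
-- or scope it to sessions/transactions that can tolerate the loss window:
SET synchronous_commit = off;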
On 5/22/2013 8:18 AM, Greg Smith wrote:
They can easily hit that number. Or they can do this:
Device:      r/s    w/s  rMB/s  wMB/s avgrq-sz avgqu-sz  await  svctm  %util
sdd      2702.80  19.40  19.67   0.16    14.91   273.68  71.74   0.37 100.00
sdd      2707.60  13.00  19.53   0.10    14.78   2
-time plan. Whether during the PREPARE phase the
planner tags the resultant plan with some kind of "allow runtime plan" flag
I do not know, though, so maybe the first few executions will always use
run-time plans and only after N executions does the cached plan come into
effect.
It's proba
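For what it's worth, a hedged sketch of the behavior being guessed at, as of
9.2 and with a hypothetical table: the first five executions of a prepared
statement are planned with the actual parameter values, after which the server
may switch to a cached generic plan if it does not look costlier than the
custom plans' average.
PREPARE q(text) AS
    SELECT count(*) FROM orders WHERE status = $1;
EXECUTE q('shipped');  -- executions 1-5: custom, parameter-aware plans
-- from the sixth execution on, a reusable generic plan may take over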
+ log_temp_files = 0.
My max_connections setting is 150.
Please, what could my error be?
Thank you for the tips,
David Carpio
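For reference, a hedged note on the setting quoted above: log_temp_files = 0
logs every temporary file, while a positive value logs only files of at least
that many kilobytes. ALTER SYSTEM is 9.4+ syntax.
ALTER SYSTEM SET log_temp_files = 0;
SELECT pg_reload_conf();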
David Carpio wrote
> I have made some changes in my postgresql.conf; well, I made two changes
> in this file. After the first, my queries improved considerably in
> execution time, but after the second change my queries seem not to have
> improved; on the contrary, they have com
Ellen Rothman wrote
> I have the same table definition in two different databases on the same
> computer.
You really should prove this to us by running schema commands on the table
and providing results.
Also, version information has not been provided and you do not state whether
the databases a
On Wed, Jul 17, 2013 at 07:50:06PM +, Ellen Rothman wrote:
- I have the same table definition in two different databases on the same
computer. When I explain a simple query in both of them, one database uses a
sequence scan and the other uses an index scan. If I try to run the Seq Scan
vers
On Thu, Aug 01, 2013 at 07:17:27PM +0400, Sergey Burladyan wrote:
- Sergey Burladyan writes:
-
- > # explain
- > # select i.item_id, u.user_id from items i
- > # left join users u on u.user_id = i.user_id
- > # where item_id = 169946840;
- > QUERY PLAN
- >
--
n/need specific but it would avoid having to calculate/maintain
these two values in a separate part of the application.
David J.
I've got a normalized data table from which I'm trying to select a small
subset of rows determined by both traditional filtering as well as the
result of a cpu-expensive function that I've defined. What I'm seeing
is that the query planner always attempts to defer the de-normalizing
JOIN over the
properly optimising queries by detecting functional dependencies in tables
to speed up joins and grouping.
Regards
David
On 9/4/2013 3:01 AM, Johan Loubser wrote:
I am tasked with getting specs for a postgres database server for the
core purpose of running moodle at our university.
The main question at the moment is 12-core AMD or 6/8-core (E series)
Intel.
What would be the most important metric in planning a
Hi Roberto,
Yes you could partition by vendor and then truncate the partition before
loading.
Truncate reclaims space immediately and is generally much faster than delete.
On Thu, Sep 05, 2013 at 06:05:08PM +0200, Roberto Grandi wrote:
- Hi Jeff,
-
- the problem is that when continuously uploa
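A hedged sketch of that approach with hypothetical names (the thread predates
declarative partitioning, so an inheritance-style child is shown):
CREATE TABLE products_vendor_42 (
    CHECK (vendor_id = 42)
) INHERITS (products);
-- before each reload: near-instant, and reclaims space immediately
TRUNCATE products_vendor_42;
COPY products_vendor_42 FROM '/path/to/vendor_42.csv' WITH (FORMAT csv);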
Hi All,
I've been seeing a strange issue with our Postgres install for about a year
now, and I was hoping someone might be able to help point me at the cause.
At what seem like fairly random intervals Postgres will become unresponsive
to the 3 application nodes it services. These periods tend to l
Hi Andrew,
On Tue, Sep 10, 2013 at 11:26 AM, Andrew Dunstan wrote:
>
> On 09/10/2013 11:04 AM, David Whittaker wrote:
>
>>
>> Hi All,
>>
>> I've been seeing a strange issue with our Postgres install for about a
>> year now, and I was hoping someone mi
Hi Ken,
On Tue, Sep 10, 2013 at 11:33 AM, k...@rice.edu wrote:
> On Tue, Sep 10, 2013 at 11:04:21AM -0400, David Whittaker wrote:
> > Hi All,
> >
> > I've been seeing a strange issue with our Postgres install for about a
> year
> > now, and I was hoping som
Hi Jeff
On Tue, Sep 10, 2013 at 1:44 PM, Jeff Janes wrote:
> On Tue, Sep 10, 2013 at 8:04 AM, David Whittaker wrote:
>
>> Hi All,
>>
>> I've been seeing a strange issue with our Postgres install for about a
>> year now, and I was hoping someone might be ab
Hi All,
We lowered shared_buffers to 8G and increased effective_cache_size
accordingly. So far, we haven't seen any issues since the adjustment. The
issues have come and gone in the past, so I'm not convinced it won't crop
up again, but I think the best course is to wait a week or so and see how
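For reference, a hedged sketch of that change. ALTER SYSTEM is 9.4+ syntax
(edit postgresql.conf on the versions in this thread), and the 48GB figure is
a placeholder since the machine's RAM is not quoted here:
ALTER SYSTEM SET shared_buffers = '8GB';         -- takes effect after a restart
ALTER SYSTEM SET effective_cache_size = '48GB';  -- planner hint only, allocates nothing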
On Fri, Sep 13, 2013 at 10:52 AM, Merlin Moncure wrote:
> On Thu, Sep 12, 2013 at 3:06 PM, David Whittaker wrote:
> > Hi All,
> >
> > We lowered shared_buffers to 8G and increased effective_cache_size
> > accordingly. So far, we haven't seen any issues since t
n and post back, but if you don't hear from me again you can assume the
problem is solved.
Thank you all again for the help.
-Dave
On Fri, Sep 13, 2013 at 11:05 AM, David Whittaker wrote:
>
>
>
> On Fri, Sep 13, 2013 at 10:52 AM, Merlin Moncure wrote:
>
>> On Thu, Sep
there.
http://wiki.postgresql.org/wiki/SlowQueryQuestions
David J.
erely slow down processing.
Hopefully someone more knowledgeable and experienced will chime in to help
you.
David J.
On Thu, Oct 03, 2013 at 01:47:29AM +, Samuel Stearns wrote:
- Thanks, Claudio:
-
- http://explain.depesz.com/s/WJQx
You're spending a lot of time in the hash join which can kill a system with
low ram.
You may, just for fun, want to try the query with enable_hashjoin=false.
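Spelled out, that experiment might look like this (session-local, nothing
persists):
SET enable_hashjoin = off;
EXPLAIN (ANALYZE, BUFFERS) ...;  -- re-run the query from the thread here
RESET enable_hashjoin;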
On Thu, Oct 03, 2013 at 09:20:52AM -0700, David Kerr wrote:
- On Thu, Oct 03, 2013 at 01:47:29AM +, Samuel Stearns wrote:
- - Thanks, Claudio:
- -
- - http://explain.depesz.com/s/WJQx
-
- You're spending a lot of time in the hash join which can kill a system with
- low ram.
-
- Yo
stently reference the "live" table and only the
subset of changes introduced would need to be checked.
David J.
On Wed, Nov 6, 2013 at 8:35 AM, Scott Marlowe wrote:
> That's a mostly religious argument. I.e. you're going on feeling here
> that pooling in jdbc alone is better than either jdbc/pgbouncer or
> plain pgbouncer alone. My experience is that jdbc pooling is not in
> the same category as pgbouncer f
very predictably spiky load and you
can add/remove machines to meet that load, but if you end up needing to have the
machines running a significant percentage of the time, dedicated boxes are
cheaper (as well as faster)
David Lang
pectations, and it changes over
time. Scaling up/down has helped us cope.
how do you add another server without having to do a massive data copy in the
process?
David Lang
- Live relocation of databases helps with hardware upgrades and spreading
of load.
Main issues:
- We are not overpr
On 11/26/2013 7:26 AM, Craig James wrote:
For those of us with small (a few to a dozen servers), we'd like to
get out of server maintenance completely. Can anyone with experience
on a cloud VM solution comment? Do the VM solutions provided by the
major hosting companies have the same good pe
On Tue, Nov 26, 2013 at 11:18:41AM -0800, Craig James wrote:
- On Tue, Nov 26, 2013 at 10:40 AM, Ben Chobot wrote:
-
- > On Nov 26, 2013, at 9:24 AM, Craig James wrote:
- >
- > So far I'm impressed by what I've read about Amazon's Postgres instances.
- > Maybe the reality will be disappointing, b
the DB, etc...)
Adding in clientId overhead will degrade performance somewhat but increase
your flexibility considerably. That is often a worthwhile trade-off to make
even if you decided to create separate schemas/databases.
David J.
f - is flawed
but as you have only provided code it is impossible to pinpoint where
exactly the disconnect resides. You can either fix the model or the query -
the latter by implementing sub-selects with where clauses manually - which
then encodes an assumption about your data that the current query can
only for 10 rows in result where
> conditions are very good ... is strange
I suppose the equivalent query that you'd want would be:
SELECT ... FROM gd JOIN gd_data USING (id_gd)
WHERE id_gd IN (
SELECT id_gd FROM gd WHERE ...
UNION ALL -- distinct not required in this situation
SELECT id_gd
in this
situation.
I'd give creating the function index a try, but keep in mind the overhead
that it will cause with inserts, updates and deletes.
Regards
David Rowley
> Thanks,
> Dave
>
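A hedged sketch of the suggested expression index, with hypothetical names;
every insert, update and delete must also maintain it, which is the overhead
mentioned above:
CREATE INDEX items_lower_name_idx ON items (lower(name));
-- only queries using the same expression can use the index:
SELECT * FROM items WHERE lower(name) = 'widget';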
I am running PG 9.2.4 and I am trying to figure out why my database size
shows one value, but the sum of my total relation sizes is so much less.
Basically, I'm told my database is 188MB, but the sum of my total
relation sizes adds up to just 8.7MB, which is 1/20th of the reported
total. Wher
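A hedged sketch of the two measurements being compared; the gap is typically
indexes, TOAST, large objects and system catalogs that a per-table sum can
miss:
SELECT pg_size_pretty(pg_database_size(current_database()));
SELECT pg_size_pretty(sum(pg_total_relation_size(c.oid)))
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND n.nspname NOT IN ('pg_catalog', 'information_schema');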
On 2/18/2014 12:34 AM, Heikki Linnakangas wrote:
On 02/18/2014 12:14 AM, David Wall wrote:
I am running PG 9.2.4 and I am trying to figure out why my database size
shows one value, but the sum of my total relation sizes is so much less.
Basically, I'm told my database is 188MB, but the s
r sure.
Depending on how many rows are hidden I'm not sure an unqualified query on
this view can run in 1/60th the time even with indexes present - the
sequential scans are efficient if the proportion of the tables being
returned is high.
David J.
Bikram Kesari Naik wrote
> Hi David,
>
> We have indexes on all the columns which are used in the where clause and
> these tables are linked by foreign key constraint.
>
>
> Thanks,
> Bikram
>
> -Original Message-
> From:
> pgsql-performance-owne
ool is typically a net positive. Now, I had no clue what
percentage is actually true, or under what conditions and pool sizes it may
vary, but that is a calculation that someone deciding between managed and
un-managed pools would need to make.
David J.
--
View this message in c
experienced user would decide they need based
upon their usage patterns and need to eke out every last bit of performance
in extreme situations while only trading a little bit of performance
when the connections are not maxed out.
David J.
nt database, and measure
the time and effort while practicing both routines (backup and restoring)
yourself.
David J.
for this is that currently the WHERE clause of the outer query
is not pushed down into the view due to some overly strict code which
completely disallows pushdowns of where clauses into subqueries that
contain windowing functions...
In your case, because you have this id in your partition by clause, then
technically it is possible to push the where clause down into the sub
query. I wrote a patch a while back which lifts this restriction. It
unfortunately missed the boat for 9.4, but with any luck it will make it
into 9.5. If you're up for compiling postgres from source, then you can
test the patch out:
http://www.postgresql.org/message-id/cahoyfk9ihosarntwc-nj5tphko4wcausd-1c_0wecogi9ue...@mail.gmail.com
It should apply to current HEAD without too much trouble.
Regards
David Rowley
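A hedged sketch of the shape in question, with hypothetical names:
CREATE VIEW v AS
SELECT id, ts,
       row_number() OVER (PARTITION BY id ORDER BY ts) AS rn
FROM events;
-- filtering on a PARTITION BY column cannot change any window result,
-- so pushing the qual into the subquery is safe (what the patch allows):
SELECT * FROM v WHERE id = 42;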
On 5/22/2014 7:27 AM, Dimitris Karampinas wrote:
Is there any way to get the call stack of a function when profiling
PostgreSQL with perf?
I configured with --enable-debug, I run a benchmark against the system
and I'm able to identify a bottleneck.
40% of the time is spent on a spinlock yet I
On 5/27/2014 9:09 AM, Shaun Thomas wrote:
On 05/27/2014 10:00 AM, Albe Laurenz wrote:
I know that Oracle recommends it - they even built an NFS client
into their database server to make the most of it.
That's odd. Every time the subject of NFS comes up, it's almost
immediately shot down wit
Please do not top-post on these lists.
On Wednesday, June 4, 2014, Huang, Suya wrote:
> Thank you David, I copied the detailed activity from the report as below.
> As it shows, it has prepare and bind queries. One of the items has
> Bind/Prepare pretty high, at 439.50, so that looks li
t RAM you might as well commit and free up that RAM
for the next batch.
David J.
On Wed, Aug 27, 2014 at 1:02 AM, Alex Goncharov <
alex.goncharov@gmail.com> wrote:
> > Thank you, Kevin -- this is helpful.
>
> Thank you David, too.
>
>
> > But it still leaves questions for me.
>
> Still...
>
>
> Alex Goncharov wrote:
>
it's only the contract_contract table that's looked at. This
is because PostgreSQL sees that none of the columns from the 2 tables
which are being left joined to are used, and also that the columns that
you're joining to on these tables are unique, therefore joining to them
cannot duplicate any rows, and since these are left joined, if there was no
matching row, then it wouldn't filter out rows from the contract_contract
table, as it would with INNER JOINs. The planner sees that these left joins
are pointless, so just removes them from the plan.
Regards
David Rowley
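A hedged illustration of that join-removal logic; contract_status and its
unique key are hypothetical stand-ins:
EXPLAIN
SELECT c.*
FROM contract_contract c
LEFT JOIN contract_status s ON s.id = c.status_id;
-- s.id is unique and no column of s is referenced, so the left join can
-- neither add nor remove rows; the plan scans only contract_contract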
asking if there's some way to get PostgreSQL to run the 1st query faster?
Or are you asking if both queries are equivalent?
Regards
David Rowley
t noticed last night that someone is working on them.
Regards
David Rowley
case of
queries perform better at execution time?
Is there a valid reason why you don't just write the query with the <=
operator?
Regards
David Rowley
stick the 50 UUIDs in some table,
analyze it, then perform a join between the 2 tables, using IN() or
EXISTS()... Is that any faster?
Also how well does it perform with: set enable_bitmapscan = off; ?
Regards
David Rowley
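Sketching that suggestion out (hypothetical names, and the UUID list itself
is elided):
CREATE TEMP TABLE wanted (id uuid PRIMARY KEY);
-- INSERT the ~50 target UUIDs into wanted, then:
ANALYZE wanted;
EXPLAIN ANALYZE
SELECT t.*
FROM big_table t
JOIN wanted w USING (id);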
'::bpchar)
Buffers: shared hit=14723
-> Bitmap Index Scan on table_code_idx (cost=0.00..5206.91
rows=278731 width=0) (actual time=23.298..23.298 rows=275909 loops=1)
Index Cond: (code = 'XX'::bpchar)
Buffers: shared hit=765
Total runtime: 184.043 ms
(13 rows)
http://explain.depesz.com/s/E9VE
Thanks in advance for any help.
Regards,
--
David Osborne
Qcode Software Limited
http://www.qcode.co.uk
Shishkin wrote:
>
> > On 01 May 2015, at 13:54, David Osborne wrote:
> >
> > Hi,
> >
> > We have a query which finds the latest row_id for a particular code.
> >
> > We've found a backwards index scan is much slower than a forward one, to
>
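One hedged option for this pattern, with hypothetical names: an index whose
sort order matches the query turns the backward scan into a forward one.
CREATE INDEX some_table_code_row_id_idx ON some_table (code, row_id DESC);
-- max(row_id) for one code becomes a single forward index probe:
SELECT max(row_id) FROM some_table WHERE code = 'XX';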
performance is better with 32 cores than 64
> cores?
>
>
You might be interested in
http://www.postgresql.org/message-id/53f4f36e.6050...@agliodbs.com
Regards
David Rowley
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL