Dear all,
With reference to the discussions and valuable suggestions I got from the
list, the code has been reviewed and updated with explicit commits. There is
a good improvement in performance. I am also planning to upgrade the
database from 8.1 to 8.3/8.4.
My current OS is SLES 10 SP3 def
You should also keep in mind that JDBC uses prepared statements, so you have
to run EXPLAIN ANALYZE accordingly.
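Something along these lines in psql -- the table and parameter names here
are made up, just to show the shape of it:

    -- Plan the query the way the JDBC driver does, with a placeholder:
    PREPARE q(int) AS
        SELECT * FROM orders WHERE customer_id = $1;

    -- Time the generic (parameterized) plan, not one built for a
    -- specific literal value:
    EXPLAIN ANALYZE EXECUTE q(42);

    DEALLOCATE q;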
Dave
On Thu, Oct 8, 2009 at 5:18 PM, Tom Lane wrote:
> G B writes:
> > How can EXPLAIN ANALYZE return significantly faster than other
> > means?
>
> If the returned data is large or
On Mon, Oct 5, 2009 at 9:58 AM, Omar Kilani wrote:
> Hi Xia,
>
> Try this patch:
>
> http://treehou.se/~omar/postgresql-8.4.1-array_sel_hack.patch
>
> It's a hack, but it works for us. I think you're probably spending
> most of your query time planning, and this patch helps speed things up
> 10x o
On Fri, Oct 09, 2009 at 08:31:54PM +0800, Xia Qingran wrote:
> On Mon, Oct 5, 2009 at 9:58 AM, Omar Kilani wrote:
> > Hi Xia,
> >
> > Try this patch:
> >
> > http://treehou.se/~omar/postgresql-8.4.1-array_sel_hack.patch
> >
> > It's a hack, but it works for us. I think you're probably spending
> >
On Fri, Oct 9, 2009 at 3:11 AM, Shiva Raman wrote:
> Dear all,
> With reference to the discussions and valuable suggestions I got from the
> list, the code has been reviewed and updated with explicit commits. There is
> a good improvement in performance. I am also planning to upgrade the
> datab
Hey folks,
CentOS / PostgreSQL shop over here.
I'm hitting 3 of my favorite lists with this, so here's hoping that
the BCC trick is the right way to do it :-)
We've just discovered thanks to a new Munin plugin
http://blogs.amd.co.at/robe/2008/12/graphing-linux-disk-io-statistics-with-munin.html
that our production DB is completely maxing out in I/O for about a 3
hour stretch from 6am til 9am
I have a system where it would be very useful for the primary keys for
a few tables to be UUIDs (actually MD5s of files, but UUID seems to be
the best 128-bit type available). What is the expected performance of
using a UUID as a primary key which will have numerous foreign
references to it, versu
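For concreteness, the sort of thing I have in mind -- table and column
names invented:

    CREATE TABLE files (
        id   uuid PRIMARY KEY,   -- 16 bytes, vs. 8 for a bigint
        name text
    );

    -- An MD5 digest is 128 bits, so its hex form casts straight to uuid
    -- (uuid is a built-in type as of 8.3):
    INSERT INTO files VALUES (md5('file contents here')::uuid, 'example');

    CREATE TABLE file_tags (
        file_id uuid REFERENCES files(id),
        tag     text
    );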
More updates:
If I run the concurrent reindex locally (psql session) it works fine, but
when run via a connection through PHP I get the error
Can't be slon, since I can do this locally, but why would Postgres have an
issue with a remote connection?
the basic script:
$connectString = "host=serve
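One guess, since the error text didn't make it through here: CREATE INDEX
CONCURRENTLY refuses to run inside a transaction block, and some PHP
database layers open one implicitly. The pattern I'd expect to work,
sketched with invented index and table names:

    -- Must run outside any transaction block (autocommit on, no BEGIN):
    CREATE INDEX CONCURRENTLY orders_customer_idx_new
        ON orders (customer_id);

    -- The swap itself can be done transactionally:
    BEGIN;
    DROP INDEX orders_customer_idx;
    ALTER INDEX orders_customer_idx_new RENAME TO orders_customer_idx;
    COMMIT;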
I am seeking advice on what the best setup for the following would be.
My company provides a hosted web calendaring solution for school
districts. For each school district we have a separate database. Each
database has 57 tables. There are a total of 649 fields in those
tables. Here is a ta
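For comparison, the usual alternative to database-per-district is
schema-per-district in a single database; a rough sketch with invented
names:

    -- One schema per district instead of one database per district:
    CREATE SCHEMA district_12345;

    CREATE TABLE district_12345.events (
        id        serial PRIMARY KEY,
        starts_at timestamptz NOT NULL,
        title     text
    );

    -- The application selects the tenant per connection:
    SET search_path = district_12345;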
Scott Otis wrote:
> I am seeking advice on what the best setup for the following would be.
> My company provides a hosted web calendaring solution for school
> districts. For each school district we have a separate database.
> Each database has 57 tables.
> Over the next couple of months
On 10/09/2009 12:56 PM, tsuraan wrote:
> I have a system where it would be very useful for the primary keys for
> a few tables to be UUIDs (actually MD5s of files, but UUID seems to be
> the best 128-bit type available). What is the expected performance of
> using a UUID as a primary key which will have
- "Alan McKay" escreveu:
> CentOS / PostgreSQL shop over here.
>
> Our system
> IBM 3650 - quad 2Ghz e5405 Xeon
> 8K SAS RAID Controller
> 6 x 300G 15K/RPM SAS Drives
> /dev/sda - 2 drives configured as a RAID 1 for 300G for the OS
> /dev/sdb - 3 drives configured as RAID5 for 600G for the DB
On Fri, Oct 9, 2009 at 10:45 AM, Alan McKay wrote:
> Hey folks,
>
> CentOS / PostgreSQL shop over here.
>
> I'm hitting 3 of my favorite lists with this, so here's hoping that
> the BCC trick is the right way to do it :-)
I added pgsql-performance back in with my reply so we can share with the
rest
On Fri, Oct 9, 2009 at 1:46 PM, Scott Otis wrote:
> Over the next couple of months we will be creating an instance of our
> solution for each public school district in the US which is around 18,000.
> That means currently we would be creating 18,000 databases (all on one server
> right now – w
On Fri, Oct 9, 2009 at 9:45 AM, Alan McKay wrote:
> We've just discovered thanks to a new Munin plugin
> http://blogs.amd.co.at/robe/2008/12/graphing-linux-disk-io-statistics-with-munin.html
> that our production DB is completely maxing out in I/O for about a 3
> hour stretch from 6am til 9am
> Th
>
>> any thoughts here? recommendations on what to do with a tight budget?
>> It could be the answer is that I just have to go back to the bean
>> counters and tell them we have no choice but to start spending some
>> real money. But on what? And how do I prove that this is the only
>> choice
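Since the question is where the I/O is going, one zero-cost first step
is to ask Postgres which tables are actually hitting disk instead of
shared_buffers (a sketch; it assumes the stats collector is enabled):

    -- Tables whose reads most often miss the buffer cache:
    SELECT relname,
           heap_blks_read,   -- blocks read from disk (or OS cache)
           heap_blks_hit,    -- blocks found in shared_buffers
           round(heap_blks_hit::numeric
                 / nullif(heap_blks_hit + heap_blks_read, 0), 3) AS hit_ratio
    FROM pg_statio_user_tables
    ORDER BY heap_blks_read DESC
    LIMIT 10;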
On 10/9/09 2:02 PM, "Merlin Moncure" wrote:
> On Fri, Oct 9, 2009 at 1:46 PM, Scott Otis wrote:
>> Over the next couple of months we will be creating an instance of our
>> solution for each public school district in the US which is around 18,000.
>> That means currently we would be creating
Scott Carey writes:
> I've got 200,000 tables in one db (8.4), and some tools barely work. The
> system catalogs get inefficient when large and psql especially has trouble.
> Tab completion takes forever, even if I make a schema "s" with one table in
> it and type "s." and try and tab complete --
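A quick way to get a feel for how heavy the catalogs have become with
that many tables (plain catalog queries, nothing version-specific):

    SELECT count(*) FROM pg_class;

    SELECT pg_size_pretty(pg_total_relation_size('pg_catalog.pg_class')),
           pg_size_pretty(pg_total_relation_size('pg_catalog.pg_attribute'));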
> The most significant impact is that it takes up twice as much space,
> including the primary key index. This means fewer entries per block,
> which means slower scans and/or more blocks to navigate through. Still,
> compared to the rest of the overhead of an index row or a table row, it
> is low
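To put numbers on "twice as much space" (the literal values here are
arbitrary):

    SELECT pg_column_size('a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11'::uuid)
               AS uuid_bytes,    -- 16
           pg_column_size(1234567890123::bigint)
               AS bigint_bytes;  -- 8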