Sorry I haven't been responsive for a little while.
I ran your script, but created a new schema instead of using my existing
schema. My timings were similar to yours (more or less), except for the
vacuums, which took roughly 147891 ms.
On Tue, Jan 22, 2013 at 3:21 PM, Kevin Grittner wrote:
>
> What if you do:
> alter table cars.imports set (fillfactor=50);
> before the vacuum full, and then try the update again?
This makes a dramatic difference when combined with a vacuum.
UPDATE 98834
Time: 3408.210 ms
Ten times faster!
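Not spelled out in the thread, but the full sequence being tested would look
something like this (the integer column name is hypothetical, since the thread
never shows it):

    alter table cars.imports set (fillfactor=50);
    vacuum full cars.imports;             -- rewrites the table, applying the new fillfactor
    update cars.imports set make_id = 0;  -- hypothetical integer column

The free space lets new row versions land on the same page as the old ones
(HOT updates), so, as long as the updated column itself isn't indexed, none of
the indexes, including the GIN index on the hstore column, need new entries
for every updated row.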
relid         | 26710
schemaname    | cars
relname       | imports
seq_scan      | 280
seq_tup_read  | 25873543
idx_scan      | 4
idx_tup_fetch | 2749
n_tup_ins     | 98926
n_tup_upd     | 6350466
n_tup_del     | 92
n_tup_hot_upd | 625286
n_live_tup    |
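A per-table record like the one above can be pulled with a query along these
lines (\x turns on expanded display in psql):

    \x
    select * from pg_stat_user_tables
    where schemaname = 'cars' and relname = 'imports';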
> I agree that seems like the most likely cause. Each update to the
> row holding the hstore column requires adding new index entries for
> all the hstore elements, and autovacuum will need to clean up the
> old ones in the background. The best solution would be to either
> normalize the data instead of using hstore, or split the hstore into
> a separate table.
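A sketch of the "separate table" option; every name here is hypothetical,
since the thread never shows the real schema:

    create table cars.import_attrs (
        -- assumes imports has an integer primary key "id" (hypothetical)
        import_id integer primary key references cars.imports (id),
        attrs     hstore
    );
    create index import_attrs_gin on cars.import_attrs using gin (attrs);

The frequently updated integer columns stay in cars.imports, so their updates
no longer generate new entries in the GIN index.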
On 25/01/13 08:57, Tim Uckun wrote:
> What if you do:
> alter table cars.imports set (fillfactor=50);
> before the vacuum full, and then try the update again?
> This makes a dramatic difference when combined with a vacuum.
> UPDATE 98834
> Time: 3408.210 ms
> Ten times faster!
That suggests (to me, at least) that it is related to index updating. Again,
your GIN index seems the primary candidate.
Hi,
I'm getting an out of memory error running the following query over 6
tables (the *BASE* tables have over 1 million rows each) on PostgreSQL
9.1. The machine has 4GB RAM:
SELECT *
FROM dtng."Z_BASE" zb
LEFT JOIN dtng."Z_BASE_COUNTRY" zbc ON zb."ISIN" = zbc."ISIN"
LEFT JOIN dtng."PRODUCT_T
>
> That suggests (to me, at least) that it is related to index updating. Again,
> your GIN index seems the primary candidate.
>
> Try running iostat (I think that's available on a Mac) with/without the
> fillfactor and with/without the GIN index while you do the updates. It's
> possible your SSD is ju
On Wed, Jan 23, 2013 at 3:56 PM, Scott Ribe wrote:
> For a client who needs to learn how to query the db:
>
> - No SQL knowledge at all; needs to start from square 1.
>
> - Smart, capable person, who will be in this position for a long time, using
> this db for a long time.
>
> - No chance in hell
On 25/01/13 11:38, Tim Uckun wrote:
> That suggests (to me, at least) that it is related to index updating. Again,
> your GIN index seems the primary candidate.
> Try running iostat (I think that's available on a Mac) with/without the
> fillfactor and with/without the GIN index while you do the updates.
Tim Uckun wrote:
> I dropped the index and the numbers shot up tenfold or more. I don't
> know why postgres feels the need to update the GIN index on the hstore
> field when I am only updating an integer field but it looks like I
> need to split the hstore into a different table.
Every UPDATE that is not HOT creates a new version of the row, and new entries
must be made in every index on the table, including the GIN index on the
hstore column, no matter which column was changed.
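The pattern Tim describes, as it might be scripted; the index, column, and
hstore-column names here are all hypothetical:

    drop index cars.imports_attrs_gin;     -- the GIN index on the hstore field
    update cars.imports set make_id = 0;   -- the integer-only bulk update
    create index imports_attrs_gin on cars.imports using gin (attrs);

Rebuilding once at the end costs one bulk operation instead of one GIN
maintenance hit per updated row.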
I can recommend "SQL Queries for Mere Mortals: A Hands-on Guide to Data
Manipulation in SQL" as a good general SQL book.
From: Scott Ribe
To: Postgresql General
Sent: Wednesday, 23 January 2013, 22:56
Subject: [GENERAL] seeking SQL book recommendation
Hi Guys,
We are having a problem with our infrastructure provider because the
network traffic between the master and slave server is reaching more than
30k packets per second (the SLA says 20k/second).
Is there any way to throttle the streaming replication? I mean, is there
any parameter to set the max number of packets per second?
On Fri, Jan 25, 2013 at 1:59 PM, Rodrigo Pereira da Silva
wrote:
> Hi Guys,
>
> We are having a problem with our infrastructure provider because the network
> traffic between the master and slave server is reaching more than 30k packets
> per second (the SLA says 20k/second).
> Is there any way to throttle the streaming replication?
You really need to show the full, exact text of the error from the database as
well as the SQL run by both apps and relevant details of table structures.
Can you not change your method of streaming replication? What is your
archive_command? If you are using rsync, you can control the bandwidth limit.
We use: archive_command = 'test ! -f
/opt/postgres/remote_pgsql/wal_archive/%f && rsync -az %p
/opt/postgres/remote_pgsql/wal_archive/%f'
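For example, rsync's --bwlimit option (a KB/s figure; the value below is
purely illustrative) could be added to a command like the one above:

    archive_command = 'test ! -f /opt/postgres/remote_pgsql/wal_archive/%f && rsync -az --bwlimit=4096 %p /opt/postgres/remote_pgsql/wal_archive/%f'

Note that this throttles WAL shipping via archiving; streaming replication
itself has no bandwidth-limit parameter.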
Jan Strube writes:
> I'm getting an out of memory error running the following query over 6
> tables (the *BASE* tables have over 1 million rows each) on PostgreSQL
> 9.1. The machine has 4GB RAM:
It looks to me like you're suffering an executor memory leak that's
probably unrelated to the hash join.
That's a good idea, but we are using the Windows copy command, and I'm not
sure it has something similar. Anyway, we just talked to the infrastructure
provider and they figured out it was a problem with their monitoring
system, so we are not actually reaching 30k packets.
Thanks a lot for the replies,
On Fri, 2013-01-25 at 16:13 +0100, Alexander Farber wrote:
> Hi -
>
> On Thu, Jan 24, 2013 at 8:12 PM, Guillaume Lelarge
> wrote:
> > You would be better off creating an index on pref_money(yw, money). It could
> > help you get rid of the seqscan and sort operations.
>
> I've created an index with
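The reply is cut off above, but the suggested index would be created along
these lines:

    create index pref_money_yw_money_idx on pref_money (yw, money);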
I'm looking for a tutorial on how to move an Oracle 11gR2
database that consists of 4 instances with table spaces spread across
multiple file systems to PostgreSQL. I need to do this because most of
the open source CMS packages do not play well with Oracle. The amount of
data stored in the four instances
On Jan 25, 2013, at 8:21 AM, Gene Poole wrote:
> I'm looking for a tutorial on how to move an Oracle 11gR2 database
> that consists of 4 instances with table spaces spread across multiple file
> systems to PostgreSQL. I need to do this because most of the open source CMS
> packages do not play well with Oracle.
On Fri, Jan 25, 2013 at 3:38 AM, Tim Uckun wrote:
>>
>> That suggests (to me, at least) that it is related to index updating. Again,
>> your GIN index seems the primary candidate.
>>
>> Try running iostat (I think that's available on a Mac) with/without the
>> fillfactor and with/without the GIN index while you do the updates.
Is it possible for LC_TIME locale to affect the format with which
timestamps are input?
I have DB2 CSV dumps with timestamps like '2003-10-21-22.59.44.00'
that I want to load into Postgres with \copy. I would like to eliminate
the sed scripts that convert the timestamps in order to speed up the load.
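Not from the original mail, but one way to drop the sed step is to load the
raw strings into a text column and convert inside the database; the staging
table and file name here are hypothetical:

    create temp table staging (ts_raw text);
    \copy staging from 'db2_dump.csv' with csv
    select to_timestamp(ts_raw, 'YYYY-MM-DD-HH24.MI.SS.US') from staging;

The numeric format fields used here don't depend on LC_TIME, so the
conversion behaves the same regardless of locale.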
On Thu, Jan 24, 2013 at 1:57 AM, Alexander Farber
wrote:
> Hello,
>
> for a PostgreSQL 8.4.13 database + pgbouncer
Using 8.4 is really going to limit your options.
...
>
> LOG: duration: 12590.394 ms statement:
> select count(id) from (
> select id,
>ro
On 01/25/2013 09:00 AM, Anson Abraham wrote:
identical.
This is on Debian, correct?
Are you using the Debian pg_cluster system?
In particular, are you using pg_ctlcluster to start the server?
From what I remember of the pg_cluster system, the init.d script calls
pg_ctlcluster, and pg_ctlcluster
On Fri, Jan 25, 2013 at 5:59 AM, Rodrigo Pereira da Silva
wrote:
> Hi Guys,
>
> We are having a problem with our infrastructure provider because the network
> traffic between the master and slave server is reaching more than 30k packets
> per second (the SLA says 20k/second).
> Is there any way to throttle the streaming replication?
I can't be the first person (or even the 10,000th) to want to define my
own SQLSTATE codes when raising errors in a stored procedure. I've
just tested doing so in a PL/pgSQL function accessed via JDBC, and I had
no problem retrieving the non-standard state from the SQLException. (I
used 'W' and
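What that test might have looked like; a minimal sketch with a made-up
five-character code (SQLSTATE values must be five digits or upper-case
ASCII letters):

    create or replace function fail_custom() returns void as $$
    begin
        -- 'WX001' is an invented, non-standard SQLSTATE
        raise exception 'custom failure' using errcode = 'WX001';
    end;
    $$ language plpgsql;

On the JDBC side, SQLException.getSQLState() then returns 'WX001'.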
On Friday, January 25, 2013 at 23:59, Rodrigo Pereira da Silva wrote:
> We are having a problem with our infrastructure provider because the
> network traffic between the master and slave server is reaching more than
> 30k packets per second (the SLA says 20k/second).
I note the later post where the infrastructure provider traced the problem
to their monitoring system.