On Thu, Nov 3, 2011 at 08:20, Guillaume Lelarge wrote:
>
> OK, found it. Some time ago we introduced a cache for type lookups. It
> seems the boolean type doesn't make it into the cache. The patch is
> committed. It should be available with the release of 1.14.1.
>

Nice, thanks for fixing this.

Great, thank you a lot, Guillaume!
Cheers,
Boris
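The fix Guillaume describes lives in pgAdmin's C++ code, but the effect of one type missing a lookup cache is easy to model. Below is a hypothetical Python sketch (the `TypeCache` class, the OIDs, and the lookup counter are illustrative, not pgAdmin internals): when one type is never stored in the cache, every cell of that column pays a fresh catalog lookup, which is exactly how a single boolean column can dominate the runtime.

```python
# Hypothetical sketch of a type-lookup cache, NOT pgAdmin's actual code.
# LOOKUPS counts simulated round-trips to the pg_type catalog.

LOOKUPS = [0]

def catalog_lookup(oid):
    """Simulated expensive lookup of a type name by OID."""
    LOOKUPS[0] += 1
    return {16: "bool", 23: "int4", 25: "text"}[oid]

class TypeCache:
    def __init__(self, buggy=False):
        self.cache = {}
        self.buggy = buggy  # model the pre-1.14.1 behaviour

    def type_name(self, oid):
        if oid in self.cache:
            return self.cache[oid]
        name = catalog_lookup(oid)
        if not (self.buggy and oid == 16):  # bug: bool is never cached
            self.cache[oid] = name
        return name

def render_rows(rows, oids, cache):
    """Format every cell, asking the cache for each column's type name."""
    return [[f"{cache.type_name(oid)}:{value}" for oid, value in zip(oids, row)]
            for row in rows]

rows = [(True, 1, "x")] * 3000
oids = (16, 23, 25)

LOOKUPS[0] = 0
render_rows(rows, oids, TypeCache(buggy=True))
print(LOOKUPS[0])   # 3002: one lookup per row for bool, one each for the rest

LOOKUPS[0] = 0
render_rows(rows, oids, TypeCache(buggy=False))
print(LOOKUPS[0])   # 3: one lookup per column type
```

With the bug, the lookup count scales with the row count; with the fix, it scales with the number of column types, which matches boris's observation that 3000 boolean cells took about a minute.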
On Thu, 2011-11-03 at 11:22 +0100, boris pezzatti wrote:
> I tested the column types, and the column that appears to create the
> very slow behaviour is of type boolean. Retrieving only 1 column for
> 3000 records takes about 1 minute.
>
> Do you know any possible reason?
>
That's good to know.
On 11/02/2011 11:58 PM, Fernando Hevia wrote:

On Tue, Nov 1, 2011 at 07:23, Guillaume Lelarge wrote:
> [...]
>
With 10k rows (2.3 MB) it took 3.5 seconds to retrieve the data from the DB
and 40 seconds to write the file to a SATA 7200 disk with write-through
cache. With 100k rows (23 MB) the DB retrieve went for 35 seconds while the
file [...]
On Wed, 2011-11-02 at 08:58 +0100, boris pezzatti wrote:
> Thank you Fernando for reproducing this.
> I suspect there must be some part of code in the
>
>   * for each row
>     * for each column
>
> loops that is inefficient only on some machines or OS's (I'm using
> Archlinux).

I'm using Fedora.
On Wed, 2011-11-02 at 09:21 +0100, boris pezzatti wrote:

... or (I promise this is the last guess) each row is appended to the
file in the loop, and on some OS's there is a problem keeping the file
open (so that each time the file must be opened again ... adding a
lot of extra time). Maybe creating the whole "virtual file" in a variable
and storing it to disk in one go could help.

Or maybe the retrieved data in RAM are somehow lazily bound ... ?
Thank you Fernando for reproducing this.
I suspect there must be some part of code in the

  * for each row
    * for each column

loops that is inefficient only on some machines or OS's (I'm using
Archlinux).
In fact, the extra time Fernando and I get cannot be attributed only to
adding commas.
On Mon, 2011-10-31 at 18:26 -0300, Fernando Hevia wrote:
> [...]
> I could reproduce the issue in a fresh Windows 7 install with no apps
> running other than pgAdmin v1.14.0.
> From what I could see, the execute-to-file function runs in 2 stages:
> 1. Rows are retrieved from the DB server to RAM
> 2. Rows are written from RAM to the file
Write the output file to the same server as the DB
and then copy that file to your PC.
IMO your problem is network related.

*From:* boris pezzatti
*To:* Francisco Leovey
*Cc:* "pgadmin-support@postgresql.org"
*Sent:* Sunday, October 30, 2011 6:28 PM
*Subject:* Re: [pgadmin-support] very slow when writing query to file
Is the file where you write the query output located on the same server as
the DB?

*From:* boris pezzatti
*To:* Guillaume Lelarge
*Cc:* pgadmin-support@postgresql.org
*Sent:* Saturday, October 29, 2011 5:44 PM
*Subject:* Re: [pgadmin-support] very slow when writing query to file

Does anyone have a suggestion for how I could further test where the
problem is? This is really strange behaviour, which I have noticed with
different versions of pgAdmin.
Could a firewall make a difference between querying data visually and
writing it to a file? (It should not ...)
Thank you,
Boris
On Wed, 2011-10-26 at 14:57 +0200, boris pezzatti wrote:
> I have a PostgreSQL 8.3 database on a server. When querying the data
> with the pgAdmin SQL editor, I can get an answer in about 10 s for
> 100'000 rows. When I press the button to execute the query to a
> file, it takes more than 1 hour to get the query results saved (it
> writes about 10 MB in 45 [...])