On Wed, 12 Nov 2008, at 08:08:08 +0100, Csaba Együd wrote the following:
> Hi All,
>
> --PG8.3
> --Windows 2k3 SBS
>
> I would like to apply an automatic filter condition to a table. I create a
> TEMP table at the beginning of the session to store a value to build up a
> filter condition and I would like to apply this condition to every select statement.
Hi All,
--PG8.3
--Windows 2k3 SBS
I would like to apply an automatic filter condition to a table. I create a
TEMP table at the beginning of the session to store a value to build up a
filter condition and I would like to apply this condition to every select
statement on that table during the current session.
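A minimal sketch of one way to do that (table and column names here are hypothetical): since a view that references a temp table is itself temporary, it has to be recreated in each session, right after the temp table:

CREATE TEMP TABLE session_filter (company_id integer);
INSERT INTO session_filter VALUES (42);
-- automatically a temporary view, because it references a temp table
CREATE TEMP VIEW mytable_v AS
    SELECT *
    FROM mytable
    WHERE company_id IN (SELECT company_id FROM session_filter);
-- every SELECT against mytable_v is now filtered for this session:
SELECT count(*) FROM mytable_v;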
Ivan Sergio Borgonovo <[EMAIL PROTECTED]> writes:
> Any suggestion about how to track down the problem?
What you are describing sounds rather like a use-of-uninitialized-memory
problem, wherein the behavior depends on what happened to be in that
memory previously. If so, using a debug/cassert-enabled build may help track it down.
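For reference, a debug/cassert-enabled build is usually configured like this (any other configure options are up to your setup):

./configure --enable-debug --enable-cassert
make && make install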
On Tuesday 11 November 2008 6:31:04 pm Adrian Klaver wrote:
> On Tuesday 11 November 2008 6:20:09 pm Glen Beane wrote:
> > Python with the psycopg2 library. I swear this was working earlier
> > today. Maybe I am imagining things :)
> > It does work with a unix socket, and I have a deadline to meet
On Tuesday 11 November 2008 6:20:09 pm Glen Beane wrote:
>
> Python with the psycopg2 library. I swear this was working earlier today.
> Maybe I am imagining things :)
> It does work with a unix socket, and I have a deadline to meet, so for now
> I just need to make sure I am running this on the same machine as the server.
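For anyone comparing the two connection styles, a rough psycopg2 sketch (database name and user are hypothetical):

import psycopg2

# TCP connection:
conn = psycopg2.connect(host="127.0.0.1", dbname="mydb", user="me")

# unix-socket connection: simply omit host (libpq then uses the default socket)
conn = psycopg2.connect(dbname="mydb", user="me")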
"Andrus" <[EMAIL PROTECTED]> writes:
> explain analyze SELECT sum(xxx)
>FROM dok JOIN rid USING (dokumnr)
>WHERE dok.kuupaev BETWEEN '2008-04-01' AND '2008-04-30'
> "Hash Join (cost=29584.84..308259.32 rows=142686 width=0) (actual
> time=68510.748..96932.174 rows=117883 loops=1)"
> "   Hash Cond: ("outer".dokumnr = "inner".dokumnr)"
On 11/11/08 9:09 PM, "Tom Lane" <[EMAIL PROTECTED]> wrote:
> Glen Beane <[EMAIL PROTECTED]> writes:
>> On 11/11/08 2:25 PM, "Tom Lane" <[EMAIL PROTECTED]> wrote:
>>> The most direct evidence about why it's stuck would probably be had by
>>> attaching to the backend process with gdb and getting a stack trace.
Glen Beane <[EMAIL PROTECTED]> writes:
> On 11/11/08 2:25 PM, "Tom Lane" <[EMAIL PROTECTED]> wrote:
>> The most direct evidence about why it's stuck would probably be had by
>> attaching to the backend process with gdb and getting a stack trace.
> It wasn't built with debugging symbols so there is some missing info, but here is what I get:
On 11/11/08 2:25 PM, "Tom Lane" <[EMAIL PROTECTED]> wrote:
> The most direct evidence about why it's stuck would probably be had by
> attaching to the backend process with gdb and getting a stack trace.
It wasn't built with debugging symbols so there is some missing info, but
here is what I get:
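For the archives, the usual recipe for grabbing that backtrace looks like this (use the PID of the stuck backend, as shown by ps):

gdb -p <backend_pid>
(gdb) bt
(gdb) detach
(gdb) quit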
On Tue, 11 Nov 2008, Thom Brown wrote:
I've noticed that if I set my SHMMAX to 256 * 1024 * 1024 (268435456)
and my shared_buffers value in postgresql.conf to 256MB the server
fails to start. I managed to find a tipping point: 249MB seems to be
too much and 248MB seems to be okay.
Buffers are not the only consumer of shared memory.
I'm still fighting with my GIN index creation, which randomly takes a very
long time.
At the beginning I had a pretty long transaction that filled several
tables starting from some "temporary"[1] tables.
After filling the tables I updated a tsvector column in one of them
and finally in another connection
Hello.
Hope this will be helpful for agile developers.
http://en.dklab.ru/lib/dklab_pgunit/
PGUnit is an xUnit-style framework for stored procedures in PostgreSQL 8.3+.
It allows database developers to write automated tests for existing stored
procedures, or to develop procedures using the concepts of Test-Driven Development.
On 11/11/08 2:25 PM, "Tom Lane" <[EMAIL PROTECTED]> wrote:
> Glen Beane <[EMAIL PROTECTED]> writes:
>> I am using the copy_from command from the python psycopg2 library to do some
>> bulk data loading of a postgres database. This had been working OK, until
>> my script barfed because I was being careless, and it seemed to leave the database in a strange state.
explain analyze SELECT sum(xxx)
FROM dok JOIN rid USING (dokumnr)
WHERE dok.kuupaev BETWEEN '2008-04-01' AND '2008-04-30'
"Hash Join (cost=29584.84..308259.32 rows=142686 width=0) (actual
time=68510.748..96932.174 rows=117883 loops=1)"
" Hash Cond: ("outer".dokumnr = "inner".dokumnr)"
" -
Please, I don't want to keep receiving messages.
Vaclav TVRDIK wrote:
Tom Lane wrote:
Hm, I can't replicate that here --- but if you're using floating-point
timestamps, as is default in 8.3, then a certain amount of
machine-dependent roundoff fuzziness is not surprising.
Please, I don't want to keep receiving messages.
Tom Lane wrote:
Richard Huxton <[EMAIL PROTECTED]> writes:
If you connect via psql and then (as root, in another terminal) do:
ps auxw | grep postgres
you should see the backend that corresponds to your psql connection.
strace -p <pid>
should then show system calls as they are executed.
Tom Lane wrote:
> Hm, I can't replicate that here --- but if you're using floating-point
> timestamps, as is default in 8.3, then a certain amount of
> machine-dependent roundoff fuzziness is not surprising.
>
> regards, tom lane
>
The column is defined this way:
CREATE_
Richard Huxton <[EMAIL PROTECTED]> writes:
> If you connect via psql and then (as root, in another terminal) do:
> ps auxw | grep postgres
> you should see the backend that corresponds to your psql connection.
> strace -p <pid>
> should then show system calls as they are executed (assuming you have
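Putting those steps together (the PID shown is hypothetical):

ps auxw | grep postgres    # find the PID of the backend serving your psql session
strace -p 12345            # 12345 = that backend's PID; prints system calls as they happen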
"Sergey Konoplev" <[EMAIL PROTECTED]> writes:
>> Finally - did you compile this from source yourself, or is it installed
>> via apt? I'm wondering whether you have an unusual version of a library
>> linked in, and it's taking a long time to parse the query.
> I've compiled it from sources. BTW, I
Vaclav TVRDIK <[EMAIL PROTECTED]> writes:
> I have one question about converting timestamps to text. I believed
> that using to_char function with proper mask and casting by style
> COLUMN::text is equal, but when I issue following query against 8.3.3
> database it returns me different values (they differ in the last position of the microseconds).
Sam Mason <[EMAIL PROTECTED]> writes:
> On Tue, Nov 11, 2008 at 02:17:20PM -0300, Alvaro Herrera wrote:
>> shared_buffers is not the only factor to shared memory, so you need to
>> provide some extra SHMMAX slop.
> what is this "extra slop" needed for?
See table 17-2 here
http://www.postgresql.or
Glen Beane <[EMAIL PROTECTED]> writes:
> I am using the copy_from command from the python psycopg2 library to do some
> bulk data loading of a postgres database. This had been working OK, until
> my script barfed because I was being careless, and it seemed to leave the
> database in a strange state.
I'm not sure if this is the right place to post this question or not, but I
hope someone can help me out.
I am using the copy_from command from the python psycopg2 library to do some
bulk data loading of a postgres database. This had been working OK, until
my script barfed because I was being careless, and it seemed to leave the database in a strange state.
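For context, a minimal copy_from call looks roughly like this (connection details, file, and table names are hypothetical):

import psycopg2

conn = psycopg2.connect(dbname="mydb")
cur = conn.cursor()
with open("data.tsv") as f:
    # loads tab-separated rows, the same format COPY ... FROM STDIN expects
    cur.copy_from(f, "my_table", sep="\t")
conn.commit()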
Scott Marlowe escribió:
> On Tue, Nov 11, 2008 at 10:58 AM, Sam Mason <[EMAIL PROTECTED]> wrote:
> > what is this "extra slop" needed for?
> free space map and temp buffers. Not sure what else.
Lock space, pg_clog, pg_subtrans and pg_multixact buffers, FSM, and some
other things. External modules may use shared memory as well.
On Tue, Nov 11, 2008 at 10:58 AM, Sam Mason <[EMAIL PROTECTED]> wrote:
> On Tue, Nov 11, 2008 at 02:17:20PM -0300, Alvaro Herrera wrote:
>> Thom Brown escribió:
>> > I've noticed that if I set my SHMMAX to 256 * 1024 * 1024 (268435456)
>> > and my shared_buffers value in postgresql.conf to 256MB the server fails to start.
It isn't guaranteed, but I think a clustered index on the attrs you want
the dump ordered by will give an ordered dump.
This may depend on your filesystem, and on what else your system
is doing at the time, as interrupted disk reads may disrupt the sequence.
It has worked for me on SuSE Linux.
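Two hedged sketches of what that looks like in practice (table, index, and column names are hypothetical); only the COPY form actually guarantees the order:

-- physically reorder the table (PostgreSQL 8.3 syntax):
CLUSTER my_table USING my_index;
-- or dump with a guaranteed order:
COPY (SELECT * FROM my_table ORDER BY col1) TO '/tmp/my_table.txt';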
On Tue, Nov 11, 2008 at 02:17:20PM -0300, Alvaro Herrera wrote:
> Thom Brown escribió:
> > I've noticed that if I set my SHMMAX to 256 * 1024 * 1024 (268435456)
> > and my shared_buffers value in postgresql.conf to 256MB the server
> > fails to start. I managed to find a tipping point: 249MB seems
Thanks Erik...
I found an alternative to psql copy to stdout | psql copy from stdout.
I used pg_dump -n schema | psql
This approach replicated the entire schema, rather than just the table
contents, into the new database, and therefore copied over all the sequence
data as well. It worked well in my case.
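For reference, the pipeline in question looks like this (schema and database names are hypothetical):

pg_dump -n myschema sourcedb | psql targetdb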
Thom Brown escribió:
> Hi,
>
> I've noticed that if I set my SHMMAX to 256 * 1024 * 1024 (268435456)
> and my shared_buffers value in postgresql.conf to 256MB the server
> fails to start. I managed to find a tipping point: 249MB seems to be
> too much and 248MB seems to be okay.
>
> Could someone explain what I'm missing here?
Actually, I think I may have solved it, but I would like someone to verify it.
The temp_buffers setting takes up 8MB, which suggests it needs to be taken
into account, so 256MB - 8MB = 248MB.
Is this right?
Thanks
Thom
On Tue, Nov 11, 2008 at 4:57 PM, Thom Brown <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I
Hi,
I've noticed that if I set my SHMMAX to 256 * 1024 * 1024 (268435456)
and my shared_buffers value in postgresql.conf to 256MB the server
fails to start. I managed to find a tipping point: 249MB seems to be
too much and 248MB seems to be okay.
Could someone explain what I'm missing here?
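On Linux the kernel limit can be raised on the fly; a sketch (pick your own value, in bytes):

# allow up to 384MB of shared memory:
sysctl -w kernel.shmmax=402653184
# to make it permanent, add the same setting to /etc/sysctl.conf:
# kernel.shmmax = 402653184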
Sergey Konoplev wrote:
>
>> Finally - did you compile this from source yourself, or is it installed
>> via apt? I'm wondering whether you have an unusual version of a library
>> linked in, and it's taking a long time to parse the query.
>
> I've compiled it from sources. BTW, I tested it on both
>> I did \timing and run my query in console. You can find the result in
>> attachement. Will it be enough?
>
> Very strange.
>
> The explain runtime is 3.1 seconds, but \timing shows 37.8 seconds
> before it returns.
>
> And it only does this for the NOT IN version of the query, but not for
> the IN version.
Hello all,
I have one question about converting timestamps to text. I believed
that using to_char function with proper mask and casting by style
COLUMN::text is equal, but when I issue following query against 8.3.3
database it returns me different values (they differ in the last position
of the microseconds).
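A minimal way to reproduce the comparison (now() stands in for any timestamp column); with the floating-point timestamps that 8.3 defaults to, the last microsecond digits can round differently between the two paths:

SELECT ts::text                                AS cast_style,
       to_char(ts, 'YYYY-MM-DD HH24:MI:SS.US') AS to_char_style
FROM (SELECT now() AS ts) s;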
Sergey Konoplev wrote:
>> Maybe something to do with process startup is delaying things - could
>> you tweak the test script to send the outputs of the explain somewhere
>> other than /dev/null? That way we'd know if there was a big difference
>> between query-execution-time and process-execution-time.
On Tue, 2008-11-11 at 06:45 -0800, NetGraviton wrote:
> Web Application Engineer - Drupal, PHP, CSS, JavaScript, Postgresql
>
Please use the proper list... pgsql-jobs.
Joshua D. Drake
Web Application Engineer - Drupal, PHP, CSS, JavaScript, Postgresql
Position Title: Web Application Developer - Drupal, PHP, CSS,
JavaScript, Postgresql, Linux, HTML, Apache
Location: Boston, MA (Back Bay - Prudential Building - Commuter Rail
and T-accessible).
Status: Full Time, Regular
Responsibilities:
Hi Sergey,
- It's just a guess, but it could be the range of a SERIAL type that is
causing this behavior.
An example is:
Knowing that "table1_id" is a primary key (the table will be ordered
by it) and that a serial range is 2147483647 long:
(a) you use 2000 different numbers of this range
Sergey Konoplev wrote:
>
> Another thing is that even I set statement_timeout to 20s the query
> with NOT IN finishes working after 30+ seconds without "canceled by
> statement timeout" error.
Maybe it's not taking that long to execute the query then.
Maybe something to do with process startup is delaying things.
Sergey Konoplev wrote:
>> I've never heard of EXPLAIN ANALYSE being *faster* than the actual
>> query, it's usually slower due to all the timing calls. The only thing
>> it doesn't do is actually send the results over the connection to the
>> client. In your case, you're not actually selecting any columns, so that can't be it.
>> I've never heard of EXPLAIN ANALYSE being *faster* than the actual
>> query, it's usually slower due to all the timing calls. The only thing
>> it doesn't do is actually send the results over the connection to the
>> client. In your case, you're not actually selecting any columns, so that
>> can't be it.
"Sergey Konoplev" <[EMAIL PROTECTED]> writes:
> I've faced strange parser (or may be planner) behaviour. When I do
> "EXPLAIN SELECT 1 FROM table1 WHERE table1_id IN (...~2000 ids
> here...);" it works as fast as I expect (50 ms). But when I rewrite it
> using NOT IN ("EXPLAIN SELECT 1 FROM table1 WHERE table1_id NOT IN (...the same ids...);") it takes far longer.
> I've never heard of EXPLAIN ANALYSE being *faster* than the actual
> query, it's usually slower due to all the timing calls. The only thing
> it doesn't do is actually send the results over the connection to the
> client. In your case, you're not actually selecting any columns, so that
> can't be it.
Bruce Momjian wrote:
> Bruce Momjian wrote:
>> Alvaro Herrera wrote:
>>> Tom Lane wrote:
Bruce Momjian <[EMAIL PROTECTED]> writes:
> I don't know of a way to make MD5 and db_user_namespace work cleanly so
> we are considering removing db_user_namespace in 8.4.
We are? It's no
Tom Lane wrote:
> Richard Huxton <[EMAIL PROTECTED]> writes:
>> I've never heard of EXPLAIN ANALYSE being *faster* than the actual
>> query, it's usually slower due to all the timing calls. The only thing
>> it doesn't do is actually send the results over the connection to the
>> client. In your case, you're not actually selecting any columns, so that can't be it.
Richard Huxton <[EMAIL PROTECTED]> writes:
> I've never heard of EXPLAIN ANALYSE being *faster* than the actual
> query, it's usually slower due to all the timing calls. The only thing
> it doesn't do is actually send the results over the connection to the
> client. In your case, you're not actually selecting any columns, so that can't be it.
Yasuo Ohgaki <[EMAIL PROTECTED]> writes:
> It would be nice to have API like PQquerySingle that allows only a single SQL
> statement at a time.
We have one (the "extended query" protocol).
regards, tom lane
Joao Ferreira gmail wrote:
> On Tue, 2008-11-11 at 11:16 +, Richard Huxton wrote:
>> Joao Ferreira gmail wrote:
>>> hello all,
>>>
>>> I have 2 dumps of the same Pg database at different instants.
>>>
>>> I'd like to merge the two dumps into one single dump in order to
>>> restore all data at one time.
Sergey Konoplev wrote:
>> Can you post the EXPLAIN ANALYSE output from your NOT IN query?
>>
>
> Seq Scan on table1 (cost=0.00..12648.25 rows=3351 width=0) (actual
> time=0.054..140.596 rows=5000 loops=1)
>Filter: (table1_id <> ALL ('{123456789000, ... plus 1999 ids'::bigint[]))
> Total runtime: 142.303 ms
>
> Can you post the EXPLAIN ANALYSE output from your NOT IN query?
>
Seq Scan on table1 (cost=0.00..12648.25 rows=3351 width=0) (actual
time=0.054..140.596 rows=5000 loops=1)
Filter: (table1_id <> ALL ('{123456789000, ... plus 1999 ids'::bigint[]))
Total runtime: 142.303 ms
(3 rows)
But ac
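Not what the thread settled on, but a common alternative formulation for a large NOT IN list is an anti-join against a VALUES list (the ids shown are made up):

SELECT 1
FROM table1 t
LEFT JOIN (VALUES (123456789000::bigint), (123456789001)) v(id)
       ON t.table1_id = v.id
WHERE v.id IS NULL;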
On Mon, Nov 10, 2008 at 02:30:41PM -0800, Steve Atkins wrote:
> On Nov 10, 2008, at 1:35 PM, Tom Lane wrote:
> >Alvaro Herrera <[EMAIL PROTECTED]> writes:
> >>It seems that there is enough need for this feature, that it has been
> >>implemented multiple times -- but most of them will fail in corner
On Tue, 2008-11-11 at 11:16 +, Richard Huxton wrote:
> Joao Ferreira gmail wrote:
> > hello all,
> >
> > I have 2 dumps of the same Pg database at different instants.
> >
> > I'd like to merge the two dumps into one single dump in order to
> > restore all data at one time.
>
> Is there any overlap in the data?
Joao Ferreira gmail wrote:
> hello all,
>
> I have 2 dumps of the same Pg database at different instants.
>
> I'd like to merge the two dumps into one single dump in order to
> restore all data at one time.
Is there any overlap in the data?
If so, simplest might be to restore dump1, rename all the tables, restore dump2, and then merge the data across.
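A rough sketch of that rename-and-restore approach (database, table, and file names are hypothetical):

psql mydb < dump1.sql
psql mydb -c "ALTER TABLE mytable RENAME TO mytable_old;"   # repeat per overlapping table
psql mydb < dump2.sql
# then reconcile, e.g. copy over the rows from mytable_old that dump2 lacks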
Diego Manilla Suárez wrote:
> Hi. I found this somewhere:
>
> select a from b order by a using ~<~
>
> I've been searching the docs but I found nothing about this weird ~<~
> comparator. Not in comparison operators, String functions and operators,
> nor the order by clause. Maybe it has something to do with regular expressions.
Dmitry Teslenko wrote:
>> richardh=> SELECT * FROM chartbl WHERE c LIKE '111';
>>  c
>> ---
>> (0 rows)
>>
>> richardh=> SELECT * FROM chartbl WHERE c LIKE '111 ';
>>  c
>> ---
>>  111
>> (1 row)
>>
>> richardh=> SELECT * FROM chartbl WHERE c LIKE '111%';
>>  c
>> ---
Sergey Konoplev wrote:
>
> I've faced strange parser (or may be planner) behaviour. When I do
> "EXPLAIN SELECT 1 FROM table1 WHERE table1_id IN (...~2000 ids
> here...);" it works as fast as I expect (50 ms). But when I rewrite it
> using NOT IN ("EXPLAIN SELECT 1 FROM table1 WHERE table1_id NOT IN (...the same ids...);") it takes far longer.
It seems that there is enough need for this feature, that it has been
implemented multiple times -- but most of them will fail in corner
cases. Seems an obvious candidate for an in-core function ...
... which will still fail in corner cases. Not to mention the race
condition when the logger ha
hello all,
I have 2 dumps of the same Pg database at different instants.
I'd like to merge the two dumps into one single dump in order to
restore all data at one time.
Is this possible ? are there any helper tools to aid in dealing with
text dump files ?
thanks
Joao
Hi. I found this somewhere:
select a from b order by a using ~<~
I've been searching the docs but I found nothing about this weird ~<~
comparator. Not in comparison operators, String functions and operators,
nor the order by clause. Maybe it has something to do with regular
expressions, but so far I have found nothing there either.
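For what it's worth, ~<~ is the "less than" member of the text pattern-matching operator family, the one text_pattern_ops btree indexes order by; it compares character codes directly, ignoring locale. A sketch reusing the poster's names:

SELECT a FROM b ORDER BY a USING ~<~;
-- an index that can support this ordering:
CREATE INDEX b_a_pattern_idx ON b (a text_pattern_ops);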
Developers,
It seems you are overlooking the application-user/system-admin perspective.
I agree developers should use prepared statements, but application users and
system admins usually cannot modify the code.
There are many PostgreSQL/MySQL applications that generate SQL statements.
MySQL's
Hi all,
Here is my environment information:
# select version();
version
PostgreSQL 8.3.4 on i686-pc-linux-gnu, compiled by GCC gcc (GCC)
4.2.4 (Ubuntu 4.2.
Appreciate all the help. Thank you
On Mon, Nov 10, 2008 at 8:36 AM, Michael Black
<[EMAIL PROTECTED]> wrote:
> James,
>
> It is not good practice to delete an account without first transferring the
> amount in that account to another account. You will also need to make sure
> the account has a zero balance.
Dear Group,
Please note that to avoid confusion with a previously subscribed
member, I have unsubscribed as James N Hitz ([EMAIL PROTECTED]), and I
am now using WaGathoni ([EMAIL PROTECTED]).
After all... I realized GMail "knits" the threads together better than Yahoo.
Just thought I should clear that up.
On Mon, Nov 10, 2008 at 18:14, Richard Huxton <[EMAIL PROTECTED]> wrote:
> Dmitry Teslenko wrote:
>> Hello!
>> There's table:
>> CREATE TABLE table1 (
>> field1 CHARACTER(10),
>> ...
>> );
>>
>> Then there's a record: INSERT INTO table1(field1, ..) VALUES ('111', ...);
>>
>> Then I query
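A self-contained reconstruction of the behavior under discussion (filling in the elided parts by guesswork): CHARACTER(10) pads values with spaces, and LIKE sees the padding.

CREATE TABLE table1 (field1 CHARACTER(10));
INSERT INTO table1 (field1) VALUES ('111');
SELECT * FROM table1 WHERE field1 LIKE '111';    -- 0 rows: the stored value is '111       '
SELECT * FROM table1 WHERE field1 LIKE '111%';   -- 1 row
-- using VARCHAR(10) or TEXT instead of CHARACTER(10) avoids the padding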