[GENERAL] Re: [HACKERS] getting at the actual int4 value of an abstime

1999-08-18 Thread Tom Lane

[EMAIL PROTECTED] (Jim Mercer) writes:
> [ concern about speed of converting datetime values to/from text for
>   Postgres ]

FWIW, I used to be really concerned about that too, because my
applications do lots of storage and retrieval of datetimes.
Then one day I did some profiling, and found that the datetime
conversion code was down in the noise.  Now I don't worry so much.

It *would* be nice though if there were some reasonably cheap documented
conversions between datetime and a standard Unix time_t displayed as a
number.  Not so much because of speed, as because there are all kinds
of ways to get the conversion wrong on the client side --- messing up
the timezone and not coping with all the Postgres datestyles are two
easy ways to muff it.
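For the datetime-to-number direction, date_part() already does the job
cheaply enough; it's the reverse that is murkier.  Something along these
lines should work (a sketch only; the timespan arithmetic is from memory
and untested):

SELECT date_part('epoch', 'now'::datetime);
-- datetime -> seconds since 1970, as a plain number

SELECT 'epoch'::datetime + '935000000 seconds'::timespan;
-- the reverse; assumes 'epoch'::datetime and datetime + timespan
-- addition behave the way I think they do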

BTW, I believe Thomas is threatening to replace all the datetime-like
types with what is currently called datetime (ie, a float8 measuring
seconds with epoch 1/1/2000), so relying on the internal representation
of abstime would be a bad idea...

regards, tom lane



Re: [GENERAL] CVS Import/Export

1999-08-18 Thread The Hermit Hacker


There is a COPY command that you can use...there is a man page for it.
Sorry, I don't use it myself, so I don't know the syntax :(  I've never had
much luck with it, so I generally cheat and write a quick perl script
to do it as normal inserts :(

On Wed, 18 Aug 1999, Bruce Tong wrote:

> I feel like such a bone-head asking this question, but I didn't find the
> answer in the FAQ or the documentation, other than pgaccess is supposed to
> have some of this functionality...
> 
> How do I import/export comma delimited tables?
> 
> I thought a combination of pg_dump and psql might do it, but if so I must
> have missed it. I saw a mention of it for pgaccess, but I'm looking for
> something I can put in a shell script.
> 
> --
> 
> Bruce Tong |  Got me an office; I'm there late at night.
> Systems Programmer |  Just send me e-mail, maybe I'll write.
> Electronic Vision / FITNE  |  
> [EMAIL PROTECTED]|  -- Joe Walsh for the 21st Century
> 
> 
> 

Marc G. Fournier   ICQ#7615664   IRC Nick: Scrappy
Systems Administrator @ hub.org 
primary: [EMAIL PROTECTED]   secondary: scrappy@{freebsd|postgresql}.org 




Re: [GENERAL] CVS Import/Export

1999-08-18 Thread Simon Drabble

On Wed, 18 Aug 1999, Bruce Tong wrote:

> I feel like such a bone-head asking this question, but I didn't find the
> answer in the FAQ or the documentation, other than pgaccess is supposed to
> have some of this functionality...
> 
> How do I import/export comma delimited tables?
> 
> I thought a combination of pg_dump and psql might do it, but if so I must
> have missed it. I saw a mention of it for pgaccess, but I'm looking for
> something I can put in a shell script.
> 
> --
> 
> Bruce Tong |  Got me an office; I'm there late at night.

If you're after changing the field separator, psql has a \f command.

You could do something like:

$ psql -e  < out.sql > dump

where out.sql looks like:

\f ,
\o 
-- some select statements go here
SELECT foo FROM bar;

-- EOF


A method for importing would be similar.
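For example, an in.sql for loading comma-delimited data might look like
this (a sketch only; the USING DELIMITERS clause is from memory, and the
table and rows are made up):

COPY bar FROM stdin USING DELIMITERS ',';
1,first row
2,second row
\.

and gets fed to psql the same way as out.sql above.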



Simon.
-- 
 "Don't anthropomorphise computers - they don't like it."
   
   Simon Drabble  It's like karma for your brain.
   [EMAIL PROTECTED]




Re: [GENERAL] CVS Import/Export

1999-08-18 Thread Herouth Maoz

At 17:14 +0300 on 18/08/1999, Bruce Tong wrote:

> How do I import/export comma delimited tables?
>
> I thought a combination of pg_dump and psql might do it, but if so I must
> have missed it. I saw a mention of it for pgaccess, but I'm looking for
> something I can put in a shell script.

It has nothing to do with pgaccess. The way to import/export any tables is
using either the COPY command in PostgreSQL's SQL dialect, or the \copy
command in psql.

The difference between them is where they look for the file to convert
to/from. The COPY command is executed by the backend, and looks for the file
on the backend's machine. The \copy command looks on the client machine that
runs psql. More often than not these are the same machine, so the way to
remember it is: COPY is executed by the backend, and therefore the file must
be readable by the postgres superuser (or writable, for an export), while
\copy runs in the client, so the file should be readable/writable by whoever
runs psql.

COPY has an option to read from standard input instead of a file, which is
how clients like psql are able to implement things like \copy. You can use
COPY FROM STDIN in shell scripts.

COPY is better than \copy in that it allows you to set a delimiter, which
\copy does not - it always expects tabs.

Anyway, this imports data from a file named "stam.txt" into the table
"test5" of the database "testing":

psql -c 'COPY test5 FROM stdin' testing < stam.txt

The following exports the same table:

psql -qc 'COPY test5 TO stdout' testing > stam.txt
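
And if the file is comma-delimited rather than tab-delimited, the delimiter
option mentioned above should slot straight in (the exact USING DELIMITERS
syntax is from memory, so check the COPY man page):

psql -c "COPY test5 FROM stdin USING DELIMITERS ','" testing < stam.txt

psql -qc "COPY test5 TO stdout USING DELIMITERS ','" testing > stam.txt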

Herouth

--
Herouth Maoz, Internet developer.
Open University of Israel - Telem project
http://telem.openu.ac.il/~herutma





Re: [GENERAL] CVS Import/Export

1999-08-18 Thread Bruce Tong

> There is a COPY command that you can use...there is a man page for it,
> sorry, don't use it myself, so dont know the syntax.

Ahh, COPY. All I really needed was the pointer. I remember skimming
that one and concluding it wasn't what I wanted. I must have skimmed
too fast; I was certain it wouldn't be in SQL, since nothing turned up in
my search of "The Practical SQL Handbook" index.

Thanks to all for the examples.

--

Bruce Tong |  Got me an office; I'm there late at night.
Systems Programmer |  Just send me e-mail, maybe I'll write.
Electronic Vision / FITNE  |  
[EMAIL PROTECTED]|  -- Joe Walsh for the 21st Century





Re: [GENERAL] CVS Import/Export

1999-08-18 Thread Stuart Rison

>> There is a COPY command that you can use...there is a man page for it,
>> sorry, don't use it myself, so dont know the syntax.

Then there was some bit about usually resorting to Perl because of trouble
getting COPY to behave exactly right, and then paying the price with slow
inserts instead of a fast COPY (sorry, I over-hastily deleted it).  I'm
pretty sure Marc posted it (sorry about the cc if it wasn't you, Marc)...

Yes, I usually have a similar problem, especially with 'buggy' CSV files or
other delimited files that haven't been rigorously generated, or with the
handling of NULL fields etc.

I clean up the file with Perl but use this code to still use fast COPYs:

#!/usr/local/bin/perl5

my $database = 'test';
open PGSQL, "|psql $database"
    or die "hey man, you crazy or what!  I canny open pipe psql $database!";

my $table = 'test';

open LIST, 'data.csv'   # 'data.csv' is a placeholder for your CSV/delimited file
    or die "canny open data.csv either!";

print PGSQL "COPY $table from stdin;\n";   # First COPY
my $print_count = 0;                       # Set counter to zero

while (<LIST>) {   # go through the CSV/delimited file line by line

  # Clean up each line
  # And put each element in array @values
  # In the order of the fields in the table definition
  # And replacing NULLs with the string \N (written '\N' in Perl, quotes and all)

  print PGSQL join("\t", @values), "\n";
  ++$print_count;

  if (!($print_count % 50)) {                # every fifty rows
    print PGSQL "\\.\n";                     # close that batch of entries
    print PGSQL "COPY $table from stdin;\n"; # start the next batch
  }

}

print PGSQL "\\.\n";
# we've just printed a COPY, so the worst that can happen is we copy in nothing!
# but we must print this at the end to make sure all entries are copied

close(LIST);
close(PGSQL);

I must say that it goes like the proverbial stuff off the shovel.

HTH,

Stuart.
+--------------------------+--------------------------------------+
| Stuart C. G. Rison       | Ludwig Institute for Cancer Research |
+--------------------------+ 91 Riding House Street               |
| N.B. new phone code!!    | London, W1P 8BT                      |
| Tel. +44 (0)207 878 4041 | UNITED KINGDOM                       |
| Fax. +44 (0)207 878 4040 | [EMAIL PROTECTED]                    |
+--------------------------+--------------------------------------+





[GENERAL] huge backend processes

1999-08-18 Thread Jim Mercer


maybe i'm doing something wrong here:

CREATE TABLE samples
  (
  mark      abstime,
  subnet    inet,
  bytes_in  float8,
  bytes_out float8
  );
CREATE INDEX samples_mark ON samples (mark);

--- fill it with lots and lots of data

BEGIN WORK;
DECLARE mycurs CURSOR FOR
   SELECT DATE_PART('epoch', mark), subnet, bytes_in, bytes_out
   FROM samples
   WHERE mark >= 'epoch or another date'::abstime;

-- LOOP
FETCH FORWARD 1000 IN mycurs;

END WORK;


given the above (actually done using C/libpq), i run my program, which
does a PQclear after each FETCH.

after reading 25 records, top says:

  PID USERNAME PRI NICE   SIZE    RES STATE    TIME   WCPU    CPU COMMAND
13748 postgres 105    0 22724K 20588K RUN      3:05 86.14% 86.14% postgres

at some point, it runs out of memory (or something):

 279001 records read (1177 rec/sec)
   testprog: query failed - FETCH FORWARD 1000 IN samples;
   testprog: (7) pqReadData() -- backend closed the channel unexpectedly.
   This probably means the backend terminated abnormally before or while
   processing the request.

i have had this problem with other processes on other tables.

the problem usually shows up when i am selecting a huge number of records
and applying some conversion or function to the selected fields.

if i revert my code to (note: less the DATE_PART conversion):

DECLARE mycurs CURSOR FOR
   SELECT mark, subnet, bytes_in, bytes_out
   FROM samples
   WHERE mark >= 'epoch or another date'::abstime;

it works fine.

-- 
[ Jim MercerReptilian Research  [EMAIL PROTECTED]   +1 416 410-5633 ]
[ The telephone, for those of you who  have forgotten, was a commonly used ]
[ communications technology in the days before electronic mail.]
[ They're still easy to find in most large cities. -- Nathaniel Borenstein ]





Re: [GENERAL] huge backend processes

1999-08-18 Thread Jim Mercer

> if i revert my code to (note: less the DATE_PART conversion):
> 
> DECLARE mycurs CURSOR FOR
>SELECT mark, subnet, bytes_in, bytes_out
>FROM samples
>WHERE mark >= 'epoch or another date'::abstime;
> 
> it works fine.

as a followup, if i use:

DECLARE mycurs CURSOR FOR
   SELECT mark::int8, subnet, bytes_in, bytes_out
   FROM samples
   WHERE mark >= 'epoch or another date'::abstime;

it works fine as well.

-- 
[ Jim MercerReptilian Research  [EMAIL PROTECTED]   +1 416 410-5633 ]
[ The telephone, for those of you who  have forgotten, was a commonly used ]
[ communications technology in the days before electronic mail.]
[ They're still easy to find in most large cities. -- Nathaniel Borenstein ]





Re: [GENERAL] huge backend processes

1999-08-18 Thread Bruce Momjian

> 
> maybe i'm doing something wrong here:
> 
> CREATE TABLE samples
>   (
>   mark  abstime,
>   subnetinet,
>   bytes_in  float8,
>   bytes_out float8
>   );
> CREATE INDEX samples_mark ON samples (mark);
> 
> --- fill it with lots and lots of data
> 
> BEGIN WORK;
> DECLARE mycurs CURSOR FOR
>SELECT DATE_PART('epoch', mark), subnet, bytes_in, bytes_out

This is clearly a known problem.  We need a per-tuple memory context.
Memory palloc'd by function calls is not freed after each tuple.


-- 
  Bruce Momjian|  http://www.op.net/~candle
  [EMAIL PROTECTED]|  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026

