Dan Sugalski wrote:

> getting back a full row as an array, getting back a full
> row as a hash, and stuff like that. Nothing fancy, and nothing that
> high-level, but enough to work the basics without quite as much manual
> work as the current libpq requires.

OK.


I am now at the point where I need to know what format you want the
data to come out in.

We have the following options, although some of them will be
impractical in production. I can drop the data into any type of
structure currently available to Parrot, or at least I am fairly sure
I can.

I can build the entire dataset and return it to the caller as a hash of
arrays or some other structure. For large datasets this will be
completely impractical, but I am highlighting it as an option for
testing, or possibly for avoiding multiple calls between Parrot and Any
Old Language (AOL).
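
To make the memory issue concrete, here is a minimal plain-libpq sketch
of the grab-everything approach (the "dbname=test" connection string is
just my test setup): PQexec buffers every row client-side before you see
any of it, which is exactly why this falls over on large datasets.

/* Sketch: fetch the whole result set in one call. */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn   *conn = PQconnectdb("dbname=test");   /* test setup, adjust to taste */
    PGresult *res;
    int       row, col;

    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connect failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* Every one of the 10000 rows is buffered in this PGresult. */
    res = PQexec(conn, "SELECT * FROM test");
    if (PQresultStatus(res) == PGRES_TUPLES_OK) {
        for (row = 0; row < PQntuples(res); row++) {
            for (col = 0; col < PQnfields(res); col++)
                printf("%s ", PQgetvalue(res, row, col));
            printf("\n");
        }
    }

    PQclear(res);
    PQfinish(conn);
    return 0;
}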

We can call a function to return the data in any format you want, i.e. a
single record gets passed back per call. This method is probably the
closest to most libraries in circulation and is the one that makes the
most sense to me. It could be extended to pass back N records depending
on what the caller wants; this might be faster than making lots of AOL
calls to Parrot but would involve some more work on our part.
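
At the C level the record-per-call shape I have in mind looks something
like the sketch below, using a server-side cursor (the cursor name
parrot_cur and the "dbname=test" connection string are made up for the
example). Each FETCH pulls back only what was asked for, and bumping
"FETCH 1" up to "FETCH 50" gives the N-records variant.

/* Sketch: one record per call via a cursor; error checking trimmed. */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn   *conn = PQconnectdb("dbname=test");
    PGresult *res;
    int       done = 0;

    /* Cursors only live inside a transaction. */
    PQclear(PQexec(conn, "BEGIN"));
    PQclear(PQexec(conn, "DECLARE parrot_cur CURSOR FOR SELECT * FROM test"));

    while (!done) {
        res = PQexec(conn, "FETCH 1 FROM parrot_cur");  /* or "FETCH 50" */
        if (PQresultStatus(res) != PGRES_TUPLES_OK || PQntuples(res) == 0) {
            done = 1;                                   /* no more rows */
        } else {
            int col;
            for (col = 0; col < PQnfields(res); col++)
                printf("%s ", PQgetvalue(res, 0, col));
            printf("\n");
        }
        PQclear(res);
    }

    PQclear(PQexec(conn, "CLOSE parrot_cur"));
    PQclear(PQexec(conn, "COMMIT"));
    PQfinish(conn);
    return 0;
}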

For later use, would it make things easier for people working at a
higher level of abstraction if some metadata were passed along with the
data, i.e. the very first row returned contains an array of the types
that subsequent calls will return? Perl is lovely in the way it converts
types, but this might not be very practical for other languages that are
a bit stricter about that sort of thing. At the moment I am using
"strings" for all the data coming from the database, but this won't work
for everyone. This needs to be decided now to avoid a rewrite later. It
would make my life easier if the folks at the higher levels were to deal
with type conversion, but I am not sure that is a good choice.



The following is the table that I am testing this against. There are
only a few of the basic types here, although for what I have done so far
the types have no real effect. The table is loaded with 10000 records
(not realistic data).


                  Table "public.test"
   Column   |            Type             |   Modifiers
------------+-----------------------------+---------------
 _key       | integer                     | not null
 _bigint8   | bigint                      |
 _bool      | boolean                     |
 _char      | character(10)               |
 _varchar   | character varying(100)      |
 _float8    | double precision            |
 _int       | integer                     |
 _float4    | real                        |
 _int2      | smallint                    |
 _text      | text                        |
 _timestamp | timestamp without time zone | default now()
Indexes: parrot_pkey primary key btree (_key)

For the speed freaks, doing "select * from test":

real    0m0.997s
user    0m0.630s
sys     0m0.010s

Displaying all 10000 records to screen (last few rows shown):

9996 9176 t a          Varchar here 9176 9176 9176 9176 smallint <- Text here -> timestamp 2004-01-11 16:45:28.79144
9997 2182 t a          Varchar here 2182 2182 2182 2182 smallint <- Text here -> timestamp 2004-01-11 16:45:28.79379
9998 4521 t a          Varchar here 4521 4521 4521 4521 smallint <- Text here -> timestamp 2004-01-11 16:45:28.79614
9999 4152 t a          Varchar here 4152 4152 4152 4152 smallint <- Text here -> timestamp 2004-01-11 16:45:28.79849

real    0m4.189s
user    0m0.570s
sys     0m0.280s


Any requests, pointers, advice, abuse or general chit chat welcome.


Harry Jackson


