Thanks Merlin,
I've tried arrays, but plpython does not support returning arrays of custom
db types (which is what I'd need to do).
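
For reference, the temp table route (option 2 in my mail below) would look
roughly like the sketch that follows. This is untested: the table layouts are
placeholders, and the split() call just stands in for the expensive decode of
the blob into the canonical model.

CREATE FUNCTION fill_temp_tables(blob bytea) RETURNS void AS $$
    # placeholder for the expensive decode of the blob
    parts = blob.split('|')
    plan_a = plpy.prepare("INSERT INTO tmp_a(val) VALUES ($1)", ["text"])
    plan_b = plpy.prepare("INSERT INTO tmp_b(val) VALUES ($1)", ["text"])
    for p in parts:
        plpy.execute(plan_a, [p])           # projection to "type A"
        plpy.execute(plan_b, [p.upper()])   # projection to "type B"
$$ LANGUAGE plpythonu;

-- the plpgsql caller (or psql here) owns the temp tables and reads them after
CREATE TEMP TABLE tmp_a(val text);
CREATE TEMP TABLE tmp_b(val text);
SELECT fill_temp_tables('a|b|c'::bytea);
SELECT * FROM tmp_a;
SELECT * FROM tmp_b;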

On Monday, 8 October 2012, Merlin Moncure wrote:

> On Mon, Oct 8, 2012 at 3:14 PM, Seref Arikan
> <serefari...@kurumsalteknoloji.com> wrote:
> > Greetings,
> > I have a binary blob which is passed to a plpython function by a plpgsql
> > function. plpython is used to create 2 different transformations of this
> > binary blob to sets of postgresql type instances.
> > The flow is: blob -> plpython -> canonical python based data model ->
> > (set of db_type_As + set of db_type_Bs)
> > The problem is, transforming the binary blob to postgresql types is
> > expensive, and a single binary blob is the source of two transformations.
> > I have not found a way of returning two sets of data from the plpython
> > function.
> > At the moment, I have two options:
> > 1) calling two functions in plpython that use the same blob and return
> > different sets of postgresql types (heavyweight transformation will
> > happen twice: bad)
> > 2) creating two temp tables and calling the plpython function which in
> > turn writes to these temp tables, and then using the temp tables from
> > plpgsql.
> >
> > Do you think there are any other options that I might be missing? What
> > would be the most efficient way of passing temp tables to a plpython
> > function?
>
> Are the two sets the same size?  If so, you probably want to do a
> vanilla SRF.  If not, consider a composite containing arrays:
>
> create type foo as(a int[], b int[]);
>
> CREATE FUNCTION get_stuff()
>   RETURNS foo
> AS $$
> return [(1, 2, 3, 4, 5), (1,2,3)];
> $$ LANGUAGE plpythonu;
>
> postgres=# select * from get_stuff();
>       a      |    b
> -------------+---------
>  {1,2,3,4,5} | {1,2,3}
>
> merlin
>
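
(For the archives: the "vanilla SRF" route Merlin mentions would look
something like the sketch below when the two sets line up row for row.
The ab_pair type and the generated values are purely illustrative.)

create type ab_pair as (a int, b int);

CREATE FUNCTION get_pairs() RETURNS SETOF ab_pair AS $$
    # each yielded tuple becomes one row of the composite type
    for i in range(1, 4):
        yield (i, i * 10)
$$ LANGUAGE plpythonu;

Querying it with "select * from get_pairs();" then gives one (a, b) row per
element, one db_type_A-like value paired with one db_type_B-like value.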
