I wrote:
> Tracing suggests that the expanded array object created by the
> subscript assignment is getting flattened on the way out of the
> procedure in order to stuff it into the composite value that is the
> procedure's actual result.  So that's pretty sad from a performance
> standpoint: it means we aren't getting any real benefit from the
> INOUT variable.

BTW, just to flesh out what "pretty sad" means, here are two
equivalent implementations of my example:

CREATE OR REPLACE PROCEDURE prc(INOUT c int[], i int, j int)
AS $$
BEGIN
  c[i] := j;
END;
$$ LANGUAGE plpgsql;

CREATE OR REPLACE FUNCTION fnc(INOUT c int[], i int, j int)
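-- (no RETURNS clause needed: the INOUT parameter implies RETURNS int[])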
AS $$
BEGIN
  c[i] := j;
END;
$$ LANGUAGE plpgsql;

\timing on

DO $$
DECLARE
  c int[];
BEGIN
  FOR i IN 1..100000 LOOP
    CALL prc(c, i, i+10);
  END LOOP;
--RAISE NOTICE 'c = %', c;
END;
$$;

DO $$
DECLARE
  c int[];
BEGIN
  FOR i IN 1..100000 LOOP
    c := fnc(c, i, i+10);
  END LOOP;
--RAISE NOTICE 'c = %', c;
END;
$$;

The first DO-block takes about 47 seconds on my workstation
(in an --enable-cassert build, so take that with a grain of
salt, but certainly it's slow).  The second takes 50ms.
If I mark the function IMMUTABLE, as it really ought to be,
that drops to 45ms.
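
Applying that marking after the fact is a one-liner, assuming the
definition above:

ALTER FUNCTION fnc(int[], int, int) IMMUTABLE;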

While I'd be glad to see the procedure's speed improved,
there's a lot standing in the way of making that happen.

Certainly this case shouldn't crash, so there's something to fix
here; but in the meantime it's best not to use a procedure when a
function could serve.
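
For comparison, when the logic is this simple you can skip the call
altogether and subscript the array directly in the caller; an untimed
sketch in the same shape as the DO-blocks above:

DO $$
DECLARE
  c int[];
BEGIN
  FOR i IN 1..100000 LOOP
    -- direct subscript assignment: no procedure or function call at all
    c[i] := i + 10;
  END LOOP;
END;
$$;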

                        regards, tom lane

