I am not doubting the efficacy of stored procs, just wondering which
language is better. From the sound of it, string manipulation is slow in
PL/pgSQL, but looking at my procs there does seem to be a lot of string
manipulation going on, so maybe I'd better do some tests.
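
Something like the following would probably be a fair first pass (just a
sketch; the function names are made up, and the second one assumes the
plv8 extension is already installed):

CREATE OR REPLACE FUNCTION concat_test_plpgsql(n integer) RETURNS text AS $$
DECLARE
    result text := '';
BEGIN
    FOR i IN 1..n LOOP
        result := result || 'x' || i;   -- repeated string concatenation
    END LOOP;
    RETURN result;
END;
$$ LANGUAGE plpgsql;

CREATE OR REPLACE FUNCTION concat_test_plv8(n integer) RETURNS text AS $$
    // the same work done in JavaScript via plv8
    var result = '';
    for (var i = 1; i <= n; i++) {
        result += 'x' + i;
    }
    return result;
$$ LANGUAGE plv8;

-- then, with \timing on in psql:
-- SELECT length(concat_test_plpgsql(100000));
-- SELECT length(concat_test_plv8(100000));

If the plpgsql version turns out to be dominated by the || calls, rewriting
just the string-heavy procs in PLV8 (or leaning on string_agg()/format()
where possible) might be enough rather than switching languages wholesale.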



On Thu, Dec 29, 2016 at 3:02 AM, Mike Sofen <mso...@runbox.com> wrote:

> *From:* Tim Uckun
> I have seen various links on the internet which indicate that PLV8 is
> significantly faster than PL/pgSQL, sometimes by an order of magnitude.
>
>
>
> Is there any benefit to choosing PL/pgSQL?
>
> ------------------------
>
> I can’t speak to PLV8.  However, I can speak to plpgsql, and specifically
> to stored functions (procs).  I use it exclusively to create a database
> API for real-time web applications to hit.  My API calls (procs) hit large
> tables, sometimes doing complex logic within the sproc.  This lets me
> provide a simple, standardized interface to the web devs so they can focus
> on the app code.
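>
> To give a flavor of the pattern, a minimal “get” proc in that style could
> look something like this (a sketch only; the table and column names are
> invented for illustration):
>
> CREATE OR REPLACE FUNCTION api_get_orders(p_customer_id bigint)
> RETURNS TABLE (order_id bigint, ordered_at timestamptz, total numeric)
> AS $$
> BEGIN
>     -- a single parameterized query; the web app only ever runs
>     -- SELECT * FROM api_get_orders($1) and never touches the tables
>     RETURN QUERY
>     SELECT o.order_id, o.ordered_at, o.total
>     FROM orders o
>     WHERE o.customer_id = p_customer_id
>     ORDER BY o.ordered_at DESC;
> END;
> $$ LANGUAGE plpgsql STABLE;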
>
>
>
> Performance is superb and continues to surprise me (I came from the SQL
> Server world).  As others have mentioned, the natural lashup of plpgsql to
> postgres (I liked Alban’s term, “impedance”) is a key aspect.  Also:
>
>
>
> - Stored procs provide another security layer against SQL injection
> attacks.
>
> - Caching SEEMS to be more efficient/effective with stored procs (that
> could be wishful thinking too).
>
> - Stored procs allow skilled SQL practitioners to provide far more
> sophisticated SQL solutions than the typical Python developer is capable
> of.  My experience is that most web devs don’t really understand databases
> (or even care about them – they are a necessary evil), so providing a
> pure, encapsulated SQL solution (via stored procs) removes that mental
> impedance mismatch.
>
> - Performance?  Simple “get” procs return data for a specific indexed
> query against larger tables (50m+ rows) in a few milliseconds.  I can live
> with that kind of performance.
>
> - I’m also doing some heavy lifting in the SQL, calculating histograms and
> boxplots for data visualizations (sketched below).  This is an unusual
> scenario, but the other option is sending a massive chunk of data to
> another server for processing – just the transit time would kill the deal.
> I am mindful that at a certain point there won’t be enough memory and I/O
> to go around, but the web app is a low-user-count / high-task-complexity
> app, so I’ve tailored the model to match.
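>
> As a sketch of the histogram piece (illustrative only; the measurements
> table and its value column are invented for the example), width_bucket()
> keeps the whole thing in a single aggregate pass over the data:
>
> -- 20 equal-width buckets over measurements.value in the range [0, 100),
> -- computed where the data lives instead of shipping rows to an app server
> SELECT width_bucket(value, 0, 100, 20) AS bucket,
>        count(*)                        AS freq
> FROM measurements
> GROUP BY bucket
> ORDER BY bucket;
>
> The boxplot side works the same way with percentile_cont() for the
> quartiles, so only a handful of summary rows ever leave the server.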
>
>
>
> Mike Sofen  (Synthetic Genomics)
>
