I am pleased to report that with Merlin's suggestion of using the
pg-large-object middleware, I have a test case now showing that I can write a
25MB buffer from Node.js to Postgres in roughly 700 milliseconds (about 36 MB
per second). Here is the JavaScript code, which is nearly verbatim from the
example in the pg-large-object documentation.
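In outline (a sketch based on that example; the connection string, buffer size,
and the way the 25MB buffer is obtained are placeholders, and error handling is
reduced to throws):

var pg = require('pg');
var fs = require('fs');
var LargeObjectManager = require('pg-large-object').LargeObjectManager;

// Placeholder connection string
var client = new pg.Client('postgres://user:password@localhost/mydb');

client.connect(function (err) {
  if (err) throw err;

  // pg-large-object requires the large-object work to happen inside a transaction
  client.query('BEGIN', function (err) {
    if (err) throw err;

    var man = new LargeObjectManager({ pg: client });
    var bufferSize = 16384;

    man.createAndWritableStream(bufferSize, function (err, oid, stream) {
      if (err) throw err;

      // oid identifies the new large object; store it in a table so the
      // object can be looked up and read back later
      console.log('writing large object', oid);

      // contents stands in for the ~25MB Buffer being written
      var contents = fs.readFileSync('/path/to/some/25MB/file');
      stream.end(contents);

      stream.on('finish', function () {
        client.query('COMMIT', function () {
          client.end();
        });
      });
    });
  });
});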
Thanks, Merlin - lots of good information here, and I had not yet stumbled
across pg-large-object - I will look into it.
Eric
-----Original Message-----
From: Merlin Moncure [mailto:mmonc...@gmail.com]
Sent: Thursday, May 18, 2017 9:49 AM
To: Eric Hill
Cc: Thomas Kellerer; PostgreSQL General
My apologies: I said I ran "this query" but failed to include the query. It
was merely this:
SELECT "indexFile"."_id", "indexFile"."contents"
FROM "mySchema"."indexFiles" AS "indexFile"
WHERE "indexFile&q
I would be thrilled to get 76 MB per second, and it is comforting to know that
we have that as a rough upper bound on performance. I've got work to do to
figure out how to approach that upper bound from Node.js.
In the meantime, I've been looking at performance on the read side. For that,
I
OK, thanks very much. It seems like my process is somehow flawed. I'll try
removing some layers and see if I can figure out what is killing the
performance.
Eric
>
> Do these numbers surprise you? Are these files just too large for
> storage in PostgreSQL to be practical? Could there be
Hey,
I searched and found a few discussions in the archives about storing large
files in the database, but none that specifically address performance or how
large a file can realistically be stored in the database.
I have a Node.js application using PostgreSQL to store uploaded files. The
col
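To make the setup concrete, a rough sketch of this kind of bytea-based storage,
reusing the "mySchema"."indexFiles" table and "contents" column from the query
shown above in this thread (connection string, file path, and id are
placeholders):

var pg = require('pg');
var fs = require('fs');

// Placeholder connection string
var client = new pg.Client('postgres://user:password@localhost/mydb');

client.connect(function (err) {
  if (err) throw err;

  // The uploaded file is read into a Buffer; node-postgres sends a Buffer
  // parameter as bytea
  var contents = fs.readFileSync('/path/to/uploaded/file');

  client.query(
    'INSERT INTO "mySchema"."indexFiles" ("_id", "contents") VALUES ($1, $2)',
    [1, contents], // placeholder id
    function (err) {
      if (err) throw err;
      client.end();
    }
  );
});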