On 01/31/2010 04:46 AM, Joe Kramer wrote:
Hi,
I need to store a lot of large files (thousands of 10-100 MB files)
uploaded through my web application, and I find that storing them in
the database as bytea fields is not practical for backup purposes.
My database has a full backup performed every 12 hours, and the backup is
encrypted and copied to a server on another continent. Keeping all the
heavy binary data in the database would make those backups impractical.
So I am thinking of having a table just for the metadata and a file id
sequence, and storing the files on the file system:
CREATE TABLE business_logo
(
  file_id   bigserial NOT NULL PRIMARY KEY,  -- id also determines the on-disk path
  file_name varchar,
  file_date timestamp
);
Each file is stored in a path composed from its serial id, e.g. the file
with id 2345 will be stored as
/webapp/files/2/3/4/5/2345
That way I can back up the files separately, and the database backup stays
quick and painless.
This is a very simplistic and straightforward method.
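For what it's worth, here is a minimal sketch of how the application side
could derive that path from the id. The /webapp/files root and the digit-
per-level split are the scheme from the example above; the function names
and the write helper are just illustrative, not anything PostgreSQL gives
you:

import os

FILES_ROOT = "/webapp/files"  # root directory from the example above

def path_for_id(file_id):
    """One directory level per digit, e.g. 2345 -> /webapp/files/2/3/4/5/2345."""
    digits = list(str(file_id))
    return os.path.join(FILES_ROOT, *digits, str(file_id))

def store_file(file_id, data):
    """Write the uploaded bytes under the id-derived path and return that path."""
    path = path_for_id(file_id)
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "wb") as f:
        f.write(data)
    return path

The file_id itself would come from the bigserial column, e.g. via
INSERT ... RETURNING file_id, so the path never has to be stored in the
table.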
I suppose there are better ways of doing it, using some virtual file system?
Has anyone had a similar issue with avoiding storing large files in the
database? How did you solve it?
Thanks.
A bonus you get from using a standard file system is rsync'able backups. I'd
bet most of those thousands of files don't change that often? Your backup
times with rsync will be outstanding. If you use dark magic, you may be
limiting your backup options.
And +1 with Craig: I've also stored thousands of files in the same dir (using
XFS) and it was not slower than splitting them into smaller subdirs. (It
wasn't faster, but it wasn't slower either.)
-Andy