Re: [GENERAL] Amazon High I/O instances
Just saw your email among all the others. Pinterest, Instagram, Netflix, Shazam, NASDAQ, Cycle Computing (http://arstechnica.com/business/2011/09/3-core-cluster-built-on-amazon-ec2-cloud/) ... that list could go on and on; see http://aws.amazon.com/solutions/case-studies/ for some more. For a small all-in-one web server, any kind of web hosting is fine, and Amazon would most certainly be the pricier option.

Sébastien

On Thu, Sep 13, 2012 at 12:40 AM, Chris Travers wrote:

> On Tue, Aug 21, 2012 at 1:18 AM, Vincent Veyron wrote:
>
>> On Tuesday, 21 August 2012 at 01:33 -0400, Sébastien Lorion wrote:
>>
>>> Since Amazon has added new high I/O instance types and EBS volumes, has
>>> anyone done some benchmarking of PostgreSQL on them?
>>
>> I wonder: is there a reason why you have to go through the complexity
>> of such a setup, rather than simply use bare metal and get good
>> performance with simplicity?
>>
>> For instance, the dedibox I use for my app (visible in sig) costs 14.00
>> euros/month, and sits at .03% load average with 5 active users; you can
>> admin it like a home PC.
>
> The main use cases I know of are relatively small instances where the web
> server and db server for an app may be on the same system.
>
>> --
>> Vincent Veyron
>> http://marica.fr/
>> Computerized management of insurance claims and litigation files for the
>> legal department
>>
>> --
>> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
>> To make changes to your subscription:
>> http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Compressed binary field
On Tue, Sep 11, 2012 at 9:34 AM, Edson Richter wrote:

> No, there is no problem. Just trying to reduce database size by forcing
> these fields to compress.
>
> Actual database size = 8 GB
> Backup size = 1.6 GB (5x smaller)
>
> It seems to me (IMHO) that there is room for improvement in database
> storage (we don't have many indexes, and the biggest tables are just the
> ones with bytea fields). That's why I've asked for expert counsel.

There are two things to keep in mind. One is that each datum is compressed separately, so redundancy that occurs between fields of different tuples, but not within any given tuple, is not available to TOAST, but is available to the compression of a dump file. The other is that PG's TOAST compression was designed to be simple, fast, and patent-free, and often it is not all that good. It is quite good if you have long stretches of repeats of a single character, or exact densely spaced repeats of a sequence of characters ("123123123123123..."), but when the redundancy is less simple it does a much worse job than, for example, gzip does. It is possible but unlikely that there is a bug somewhere; most likely your documents just aren't very compressible using pglz_compress.

Cheers,

Jeff
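Jeff's first point, that per-datum compression cannot exploit redundancy shared *between* rows while a dump compressor can, is easy to demonstrate outside of PostgreSQL. A minimal sketch using Python's zlib as a stand-in for both compressors (the row contents are made up for illustration):

```python
import zlib

# 100 "rows" that share a lot of redundancy with each other,
# but have little repetition inside any single row.
rows = [f"user-{i:04d} | status=active | region=eu-west-1".encode()
        for i in range(100)]

# TOAST-style: each datum is compressed on its own, so the shared
# structure between rows is invisible to the compressor.
per_datum_total = sum(len(zlib.compress(r)) for r in rows)

# Dump-style: one compressor sees the whole stream at once.
whole_stream = len(zlib.compress(b"\n".join(rows)))

print(per_datum_total, whole_stream)
```

On a run of this sketch, the per-datum total comes out several times larger than the whole-stream size, which mirrors the kind of gap Edson observed between the on-disk database and its dump.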
Re: [GENERAL] Index creation takes more time?
Herouth,

I don't know if you saw Tomas Vondra's follow-up, as it was only to the list and not CC'd to you. Here's the archive link:

http://archives.postgresql.org/message-id/e87a2f7a91ce1fca7143bcadc4553...@fuzzy.cz

The short version: "More information required".

On 09/09/2012 05:25 PM, Herouth Maoz wrote:

> We have tables which we archive and shorten every day. That is, the main
> table that receives the daily inserts and updates is kept small, and there
> is a parallel table with all the old data up to a year ago.
>
> In the past we noticed that the bulk transfer from the main table to the
> archive table takes a very long time, so we decided to do this in three
> steps: (1) drop the indexes on the archive table, (2) insert a week's
> worth of data into the archive table, (3) recreate the indexes. This
> proved to take much less time than having each row update the indexes.
>
> However, this week we finally upgraded from PG 8.3 to 9.1, and suddenly
> the archiving process takes a lot more time than it used to: 14:30 hours
> for the most important table, to be exact, spent only on index creation.
> The same work running on the same data in 8.3 on a much weaker PC took
> merely 4:30 hours.
>
> There are 8 indexes on the archive table. The main table currently (after
> archiving) holds 7,805,009 records. The archive table currently holds
> 177,328,412 records.
>
> Has there been a major change in index creation that would cause 9.1 to
> be this much slower? Should I go back to simply copying over the data, or
> is the whole concept breaking down?
>
> TIA,
> Herouth
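One setting worth checking before concluding that 9.1 itself is slower: index builds do their sorting within maintenance_work_mem, and a freshly installed 9.1 left at the default value can rebuild large indexes far more slowly than an 8.3 installation that had been tuned. A hedged sketch of the three-step archive cycle with that budget raised for the transaction (all table, column, and index names here are hypothetical):

```sql
-- Hypothetical names throughout; adjust to the real schema.
BEGIN;
SET LOCAL maintenance_work_mem = '1GB';  -- sort memory for index builds in this transaction

-- 1. drop the indexes on the archive table
DROP INDEX archive_msgs_sent_at_idx;

-- 2. insert a week's worth of data into the archive table
INSERT INTO archive_msgs
SELECT * FROM main_msgs
WHERE sent_at < now() - interval '7 days';

DELETE FROM main_msgs
WHERE sent_at < now() - interval '7 days';

-- 3. recreate the indexes
CREATE INDEX archive_msgs_sent_at_idx ON archive_msgs (sent_at);
COMMIT;
```

Comparing SHOW maintenance_work_mem on the old 8.3 box and the new 9.1 box would quickly confirm or rule this out.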
[GENERAL] On Ubuntu 12.04 i do have two psql one of those isn't working
I'm testing my second PostgreSQL install on Ubuntu 12.04 after a disk crash. Everything went well using the command line, where I was able, after config and setup, to log into a db using:

$ psql -U yt -d yt_tests

Then I wanted to test Postgres through PHP, where I got an HTTP error 500. After that I discovered that I have TWO psql binaries installed.

The first:

lrwxrwxrwx 1 root root 37 Mar 6 2012 /usr/bin/psql -> ../share/postgresql-common/pg_wrapper

The second:

-rwxr-xr-x 1 root root 433224 Aug 17 00:58 /usr/lib/postgresql/9.1/bin/psql

The first works; the second does not:

yt@D620 $ sudo -s -u postgres
[sudo] password for yt:
zsh: locking failed for /home/yt/.zsh_history: permission denied: reading anyway

Using the first one:

postgres@D620 $ /usr/bin/psql
psql (9.1.5)
Type "help" for help.

postgres=# \q

Using the second:

postgres@D620 $ /usr/lib/postgresql/9.1/bin/psql
psql: server closed the connection unexpectedly
        This probably means the server terminated abnormally
        before or while processing the request.
could not send startup packet: Broken pipe
zsh: exit 2     /usr/lib/postgresql/9.1/bin/psql

So I wonder how to work around this "installation bug", because I suspect PHP is using the second, non-working psql, while the first is the one in my PATH.

--
Yvon
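For what it's worth, having both files is normal on Debian/Ubuntu: /usr/bin/psql is the pg_wrapper symlink from postgresql-common, which figures out the right cluster, port, and socket before execing the real binary, while /usr/lib/postgresql/9.1/bin/psql is that raw binary with only its compiled-in connection defaults, so running it directly can fail even when the server is fine. Note also that PHP's pgsql extension talks to the server through libpq rather than by running either psql, so the HTTP 500 most likely has a different cause. As for which of two same-named commands a shell runs: simply the first match in PATH. A small self-contained demo of that lookup order (the /tmp/pathdemo paths and the echoed words are made up):

```shell
#!/bin/sh
# Create two fake "psql" commands in two separate directories.
mkdir -p /tmp/pathdemo/a /tmp/pathdemo/b
printf '#!/bin/sh\necho wrapper\n' > /tmp/pathdemo/a/psql
printf '#!/bin/sh\necho raw\n'     > /tmp/pathdemo/b/psql
chmod +x /tmp/pathdemo/a/psql /tmp/pathdemo/b/psql

# Put both directories on PATH; the shell runs whichever comes first.
PATH=/tmp/pathdemo/a:/tmp/pathdemo/b:$PATH
export PATH

command -v psql   # resolves to /tmp/pathdemo/a/psql
psql              # prints "wrapper"
```

So anything that invokes a bare `psql` gets the wrapper here; only code that hardcodes the full /usr/lib/... path bypasses it.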