Re: [GENERAL] Weight BLOB objects in postgreSQL? How?
On 3/4/2015 7:03 AM, María Griensu wrote:
> I need to figure out how I can weight BLOB objects in a table of a DB. I'm not an expert on these topics, so I appreciate any help you can give me.

postgres's closest equivalent of BLOB is BYTEA. I'm not sure what you mean by 'weight' here.

-- john r pierce 37N 122W somewhere on the middle of the left coast
Re: [GENERAL] Postgresql CIFS
On 3/4/2015 9:10 PM, AI Rumman wrote:
> I am working on setting up a new database server with shared disk and cluster failover. In this environment, only one node will be active at a certain time. Underneath, we are planning to have shared storage with the CIFS protocol. As I am a newbie with this storage, can anyone please help me with some info on what database issues I can face with this kind of file system protocol with Postgresql 9.1?

why are you building a NEW system with the 2nd oldest supported release of postgres? within a year or so, 9.1 will be obsolete and unsupported.

CIFS will be pretty slow at the sorts of random writes that a database server does a lot of, and there's all sorts of room for hard-to-diagnose issues with unsafe write cache buffering in the file server, depending on the specifics of the CIFS server implementation.

I'm not sure how you implement a high-availability CIFS server without single points of failure, either... that's hard enough with shared block storage implementations (requiring redundant storage networks, switches, and dual storage controllers with shared cache, dual-homing the actual physical block storage, which is dual ported and typically all RAID 10). iSCSI or a proper SAN (Fibre Channel) would be a much better choice for a shared-storage active/passive cluster; just implement some sort of storage fencing to ensure only one node can have the file system mounted at a time.

with postgres, it's usually better to implement an HA cluster via streaming replication, the master and slave each with their own dedicated storage, and promoting the slave to master if/when the master dies.

-- john r pierce 37N 122W somewhere on the middle of the left coast
Re: [GENERAL] #PERSONAL# Reg: date going as 01/01/0001
On 3/5/2015 8:34 PM, Medhavi Mahansaria wrote:
> I need to enter the date in a null column based on the results obtained, but my date gets inserted as 01/01/0001 in postgresql 9.3 when there is no value. I need to enter NULL in the column.

umm, it seems to work fine for me? you give us no clue how you're inserting this date field.

test=# create table test (calendar date);
CREATE TABLE
test=# insert into test (calendar) values (null);
INSERT 0 1
test=# select calendar, calendar IS NULL from test;
 calendar | ?column?
----------+----------
          | t
(1 row)

-- john r pierce 37N 122W somewhere on the middle of the left coast
Re: [GENERAL] #PERSONAL# Reg: date going as 01/01/0001
On 3/6/2015 2:12 AM, Medhavi Mahansaria wrote:
> I am porting my application from oracle to postgresql. in oracle it enters as NULL.

Oracle has the unique 'feature' that an empty string is NULL. This is contrary to the SQL spec.

-- john r pierce 37N 122W somewhere on the middle of the left coast
Re: [GENERAL] Postgres and data warehouses
On 3/8/2015 7:40 AM, Nigel Gardiner wrote:
> I'm looking at making a data warehouse to address our rapidly spiralling report query times against the OLTP. I'm looking first at what it would take to make this a real-time data warehouse, as opposed to batch-driven.

we use a hybrid architecture. we have a 'republisher' process that repeatedly slurps new data from the OLTP database and sends it to the back end databases, using a 'publish/subscribe' messaging bus. several back end databases subscribe to this data, and their subscriber processes insert the incoming data into those OLAP and various other reporting databases. this way the reporting databases can have completely different schemas, optimized for their needs, and different retention requirements than the OLTP database.

this republisher is usually within a few seconds of live new data. in our case it's made fairly easy to track 'new' because all our OLTP transactions are event-oriented.

-- john r pierce 37N 122W somewhere on the middle of the left coast
Re: [GENERAL] Postgres and data warehouses
On 3/8/2015 8:24 PM, Rob Sargent wrote:
> I strongly suspect many would like further details on your implementation. A user conference session, well recorded, perhaps?

the details are proprietary and quite specific to our workload and requirements.

-- john r pierce 37N 122W somewhere on the middle of the left coast
Re: [GENERAL] Creating composite keys from csv
On 3/8/2015 10:32 PM, David G Johnston wrote:
> This solves the explicit problem given the assumption that (Name, Total Salary) is indeed a uniquely identifying constraint.

that constraint seems flawed to me.

-- john r pierce 37N 122W somewhere on the middle of the left coast
Re: [GENERAL] DB Connections
On 3/13/2015 6:51 AM, sameer malve wrote:
> Just use the pgtune utility; it will give you an output postgresql.conf tuned to your machine hardware.

I find that on newer, large hardware, pgtune makes some over-the-top choices.

-- john r pierce 37N 122W somewhere on the middle of the left coast
Re: [GENERAL] DB Connections
On 3/13/2015 2:59 AM, Job wrote:
> The application I use needs lots of static DB connections.

if "lots" is much over 100 or so, I'd strongly advise using a connection pooler like pgbouncer, or whatever is built into your language framework (for instance, Java frameworks like Tomcat have built-in connection pools), and configuring your app to grab a connection from the pool, do a transaction, and release the connection.

-- john r pierce 37N 122W somewhere on the middle of the left coast
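(For illustration only, a minimal pgbouncer setup along the lines suggested above; the database name, file paths, and pool size here are assumptions, not from the thread:)

    [databases]
    appdb = host=127.0.0.1 port=5432 dbname=appdb

    [pgbouncer]
    listen_addr = 127.0.0.1
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    pool_mode = transaction
    default_pool_size = 20

The application then connects to port 6432 instead of 5432, and pgbouncer multiplexes the many client connections over a much smaller number of real server connections.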
Re: [GENERAL] Archeiving and Purging
On 3/18/2015 7:20 AM, adityagis wrote:
> I have lots of data in my DB. I need to do archiving and purging of my data. Can anyone please help me with step by step rules?

like this?

select * from table where datefield < current_date - interval '6 months';
delete from table where datefield < current_date - interval '6 months';

obviously, save the data you selected in a suitable archive file. repeat this for each table you wish to 'archive and purge', and alter the interval to match whatever criteria you want to use for this archive and purge operation.

-- john r pierce, from the mid left coast
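(To make that concrete, here is a sketch of an archive-and-purge pass for one table done in a single transaction; the table name, column name, and file path are placeholders:)

    begin;
    -- server-side COPY TO a file requires superuser; from psql, \copy writes a client-side file instead
    copy (select * from mytable where datefield < current_date - interval '6 months')
      to '/archive/mytable_old.csv' with (format csv, header);
    delete from mytable where datefield < current_date - interval '6 months';
    commit;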
Re: [GENERAL] Unicode license compatibility with PostgreSQL license
On 3/18/2015 11:03 PM, Haribabu Kommi wrote:
> For our next set of development activities in PostgreSQL, we want to use the Unicode organization's code with PostgreSQL and open source that feature. Is the Unicode license compatible with the PostgreSQL license?

I'm curious... what does this Unicode Inc code do that the existing UTF8 support doesn't?

-- john r pierce, from the mid left coast
Re: [GENERAL] SELinux context of PostgreSQL connection process
On 3/24/2015 5:16 AM, Мартынов Александр wrote:
> There is a postgres db with sepgsql enabled. When a user connects to the postgres db with psql, postgres creates a new process for each connection. These processes have the selinux context unconfined_u:unconfined_r:postgresql_t. Is there a way to assign the process the context of the user that connected to the db?

what if that user is on a different system, connecting over the network?

no, the only user the postgres server processes should run as is that of the postgres server itself, as it needs to read and write files in the postgres data directory tree.

-- john, recycling bits in santa cruz
Re: [GENERAL] Index corruption
On 3/24/2015 11:49 AM, Bankim Bhavsar wrote:
> - 9.2.0

9.2 is currently at 9.2.10; 9.2.0 was released 2.5 years ago. several of the bugs fixed in the 10 incremental updates were data corruption related.

-- john, recycling bits in santa cruz
Re: [GENERAL] Index corruption
On 3/24/2015 12:05 PM, Bankim Bhavsar wrote:
> We'll upgrade to 9.2.10 and attempt to reproduce the issue. If possible, can someone point to corruption related issues fixed after 9.2.0?

in the postgres manual, see the release notes for 9.2.1 through 9.2.10.

-- john, recycling bits in santa cruz
Re: [GENERAL] Load Data with COPY
On 3/24/2015 2:16 PM, Raymond O'Donnell wrote:
> Is there a header row in the CSV file? - if so, delete it and try again.

or specify CSV HEADER in the COPY options.

-- john, recycling bits in santa cruz
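(For example, with a placeholder table name and path:)

    copy mytable from '/path/to/data.csv' with (format csv, header);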
Re: [GENERAL] :Posgres - performance problem
On 3/25/2015 2:19 AM, ginkgo36 wrote:
> Hi all, I have 1 table which has: 417 columns, 600,000 rows of data, and 34 indexes. When I run a query on this table, it takes very long. ex:
> update master_items set temp1 = '' where temp1 <> ''
> --Query returned successfully: 435214 rows affected, 1016137 ms execution time.

that query is modifying 435,000 rows of your table, and if temp1 is an indexed field, the index has to be updated 435,000 times, too. note that in postgres, an UPDATE translates into an INSERT of the new row version plus a DELETE of the old one.

> alter table master_items add "TYPE-DE" varchar default ''
> -- Query returned successfully with no result in 1211019 ms.

that is rewriting all 600,000 rows to add this new field with its default empty-string content.

> update master_items set "feedback_to_de" = 'Yes'
> --Query returned successfully: 591268 rows affected, 1589335 ms execution time.

that is modifying 591,000 rows, essentially rewriting the whole table.

> Can you help me find any way to increase performance?

more/faster storage. faster CPU. more RAM. or, completely rethink how you store this data and normalize it, as everyone else has said.

-- john, recycling bits in santa cruz
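(One common way to soften the impact of whole-table updates like these, sketched here on the assumption that the table has an integer primary key named id, which the thread doesn't show, is to update in slices so that each transaction, and the index churn and dead rows it produces, stays bounded:)

    update master_items set temp1 = '' where temp1 <> '' and id between 1 and 100000;
    vacuum master_items;
    update master_items set temp1 = '' where temp1 <> '' and id between 100001 and 200000;
    vacuum master_items;
    -- ...and so on across the key range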
Re: [GENERAL] pgadmin3 installation on Oracle Linux 6.6 64-bit ?
On 3/29/2015 12:48 AM, Yuri Budilov wrote:
> Red Hat/Oracle Linux 6.x

is that anything like Ford/Chevy?

Oracle Linux, while originally forked from Red Hat Enterprise Linux aka RHEL, has diverged significantly and is now its own thing. AFAIK, the Postgres yum repository has made no effort at maintaining compatibility with Oracle. Oracle Linux exists almost solely to help Oracle keep its Oracle Database users captive, and for Oracle to capture support revenue that might otherwise go to Red Hat.

If you want a free alternative to RHEL, may I suggest you install CentOS? it's a package-by-package replica of RHEL, without divergence other than branding.

# cat /etc/redhat-release
CentOS release 6.6 (Final)
# ls /etc/yum.repos.d/pg*
/etc/yum.repos.d/pgdg-92-centos.repo

(this system has postgres 9.2 on it for testing purposes)

# yum install pgadmin3_92
Loaded plugins: fastestmirror, presto
Setting up Install Process
Loading mirror speeds from cached hostfile
 * base: centos.arvixe.com
 * extras: mirrors.sonic.net
 * updates: mirror.web-ster.com
Resolving Dependencies
--> Running transaction check
---> Package pgadmin3_92.x86_64 0:1.18.1-2.rhel6 will be installed
--> Processing Dependency: wxGTK for package: pgadmin3_92-1.18.1-2.rhel6.x86_64
--> Processing Dependency: libwx_gtk2u_xrc-2.8.so.0(WXU_2.8)(64bit) for package: pgadmin3_92-1.18.1-2.rhel6.x86_64
--> Processing Dependency: libwx_gtk2u_stc-2.8.so.0(WXU_2.8)(64bit) for package: pgadmin3_92-1.18.1-2.rhel6.x86_64
--> Processing Dependency: libwx_gtk2u_html-2.8.so.0(WXU_2.8)(64bit) for package: pgadmin3_92-1.18.1-2.rhel6.x86_64
--> Processing Dependency: libwx_gtk2u_core-2.8.so.0(WXU_2.8)(64bit) for package: pgadmin3_92-1.18.1-2.rhel6.x86_64
--> Processing Dependency: libwx_gtk2u_aui-2.8.so.0(WXU_2.8.5)(64bit) for package: pgadmin3_92-1.18.1-2.rhel6.x86_64
--> Processing Dependency: libwx_gtk2u_aui-2.8.so.0(WXU_2.8)(64bit) for package: pgadmin3_92-1.18.1-2.rhel6.x86_64
--> Processing Dependency: libwx_gtk2u_adv-2.8.so.0(WXU_2.8)(64bit) for package: pgadmin3_92-1.18.1-2.rhel6.x86_64
--> Processing Dependency: libwx_baseu_xml-2.8.so.0(WXU_2.8)(64bit) for package: pgadmin3_92-1.18.1-2.rhel6.x86_64
--> Processing Dependency: libwx_baseu_net-2.8.so.0(WXU_2.8)(64bit) for package: pgadmin3_92-1.18.1-2.rhel6.x86_64
--> Processing Dependency: libwx_baseu-2.8.so.0(WXU_2.8.5)(64bit) for package: pgadmin3_92-1.18.1-2.rhel6.x86_64
--> Processing Dependency: libwx_baseu-2.8.so.0(WXU_2.8)(64bit) for package: pgadmin3_92-1.18.1-2.rhel6.x86_64
--> Processing Dependency: libwx_gtk2u_xrc-2.8.so.0()(64bit) for package: pgadmin3_92-1.18.1-2.rhel6.x86_64
--> Processing Dependency: libwx_gtk2u_stc-2.8.so.0()(64bit) for package: pgadmin3_92-1.18.1-2.rhel6.x86_64
--> Processing Dependency: libwx_gtk2u_html-2.8.so.0()(64bit) for package: pgadmin3_92-1.18.1-2.rhel6.x86_64
--> Processing Dependency: libwx_gtk2u_core-2.8.so.0()(64bit) for package: pgadmin3_92-1.18.1-2.rhel6.x86_64
--> Processing Dependency: libwx_gtk2u_aui-2.8.so.0()(64bit) for package: pgadmin3_92-1.18.1-2.rhel6.x86_64
--> Processing Dependency: libwx_gtk2u_adv-2.8.so.0()(64bit) for package: pgadmin3_92-1.18.1-2.rhel6.x86_64
--> Processing Dependency: libwx_baseu_xml-2.8.so.0()(64bit) for package: pgadmin3_92-1.18.1-2.rhel6.x86_64
--> Processing Dependency: libwx_baseu_net-2.8.so.0()(64bit) for package: pgadmin3_92-1.18.1-2.rhel6.x86_64
--> Processing Dependency: libwx_baseu-2.8.so.0()(64bit) for package: pgadmin3_92-1.18.1-2.rhel6.x86_64
--> Running transaction check
---> Package wxBase.x86_64 0:2.8.12-1.el6.centos will be installed
---> Package wxGTK.x86_64 0:2.8.12-1.el6.centos will be installed
--> Processing Dependency: libXxf86vm.so.1()(64bit) for package: wxGTK-2.8.12-1.el6.centos.x86_64
--> Processing Dependency: libSDL-1.2.so.0()(64bit) for package: wxGTK-2.8.12-1.el6.centos.x86_64
--> Running transaction check
---> Package SDL.x86_64 0:1.2.14-3.el6 will be installed
---> Package libXxf86vm.x86_64 0:1.1.3-2.1.el6 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package         Arch      Version               Repository    Size
================================================================================
Installing:
 pgadmin3_92     x86_64    1.18.1-2.rhel6        pgdg92        3.2 M
Installing for dependencies:
 SDL             x86_64    1.2.14-3.el6          base          193 k
 libXxf86vm      x86_64    1.1.3-2.1.el6         base           16 k
 wxBase          x86_64    2.8.12-1.el6.centos   extras        572 k
 wxGTK           x86_64    2.8.12-1.el6.centos   extras        2.9 M

Transaction Summary
Re: [GENERAL] pgadmin3 installation on Oracle Linux 6.6 64-bit ?
On 3/29/2015 1:39 AM, Yuri Budilov wrote:
> my employer runs Oracle Linux 6.x and also Red Hat 6.x, so if we were to drop the Oracle database and take PostgreSQL instead, I am afraid everything must work on those two Linux platforms. We pay for support from Oracle and Red Hat for the Linux OS.

it's installing on Red Hat, isn't it? might be worth opening a ticket with Oracle to find out why it's not working on their system, then.

> On the client OS we use Windows 7 64-bit, so the PostgreSQL tools and all drivers (JDBC, .NET, etc) need to also work on Windows as well as on Linux. If this is not available and we need to switch to CentOS or another flavour of Linux, I am afraid PostgreSQL is dead in the water; it won't even get a look in.

well, these were the only RPMs it took to install pgadmin3 for postgres 9.2 on my EL 6.6 system... so I dunno why Oracle Linux is not working.

SDL-1.2.14-3.el6.x86_64.rpm
libXxf86vm-1.1.3-2.1.el6.x86_64.rpm
pgadmin3_92-1.18.1-2.rhel6.x86_64.rpm
wxBase-2.8.12-1.el6.centos.x86_64.rpm
wxGTK-2.8.12-1.el6.centos.x86_64.rpm

I just ran the install for 9.4 and, other than the actual pgadmin3 rpm, the rest of the dependencies are the same.

-- john, recycling bits in santa cruz
Re: [GENERAL] Link Office Word form document with data from PostgreSQL
On 3/30/2015 1:20 AM, avpro avpro wrote:
> Have you ever populated fields in a MS Office Word document with PostgreSQL data available in either a table or a view? I haven't seen anything on the web; it seems only possible with MS products, VS or Access. Thank you for your input.

Microsoft Office tools like Word should be able to directly access Postgres databases via ODBC or OLEDB, with a suitable Postgres driver (odbc or oledb) installed on the Windows system and configured to access the Postgres database. I've done this in the past with Excel; no reason it wouldn't work in Word.

-- john r pierce, recycling bits in santa cruz
Re: [GENERAL] Regarding bytea column in Posgresql
On 4/9/2015 4:10 AM, Bill Moran wrote:
>> 1. Is a 'bytea' column intended for storing text data?
>
> No, it's intended for storing binary data.
>
>> 2. Typically a chat can have text data with several special characters (which can be represented in multiple bytes); how can these characters be stored in a bytea column and retrieved back properly?
>
> bytea won't help you here. You'll have to manage the special characters entirely in your code. bytea gives you back the exact same bytes you put in, with no changes or interpretation. A better choice would be to use a text field with a proper text encoding (such as utf-8).

one possible rationale for using BYTEA is that the data could be in various encodings, which the application wishes to preserve and keeps track of somewhere else (perhaps in a field within the XML?). PostgreSQL text types would insist that all text be stored in the same encoding, and anything in a different encoding would have to be converted to the database encoding.

Me, I'd be inclined to convert everything to UTF8 and store it as such, and convert it back to the user's encoding on display, but the feasibility of this really depends on the use cases.

-- john r pierce, recycling bits in santa cruz
Re: [GENERAL] Finding values of bind variables
On 4/8/2015 10:55 AM, Vasudevan, Ramya wrote:
> state | idle in transaction
> query | INSERT into distributed_events (type, action, id, properties) VALUES ($1, $2, $3, $4) RETURNING "distributed_event_id"

idle means it's NOT executing any query... that's just the LAST query it executed. idle in transaction means someone/something started a transaction but isn't actively doing anything, so there's a pending transaction that's idle.

> backend_start | 2015-04-06 20:55:07.921089-07

this client connected at that date/time.

> xact_start | 2015-04-06 21:16:26.820822-07

the transaction started at that date/time (about 21 minutes after creating the connection).

> query_start | 2015-04-06 21:16:26.834017-07

the last query started about 0.014 seconds after creating the transaction.

> state_change | 2015-04-06 21:16:26.834144-07

the state change to idle-in-transaction took place about 0.0001 seconds later (so the insert took all of a millisecond).

What date/time was it when you did that select from pg_stat_activity? that's how long this transaction has been pending, which, if it's more than a few seconds to at most a minute, is almost certainly a 'very bad thing' unless the transaction is actively crunching data.

if you're using a client such as JDBC which auto-wraps queries in transactions by default, you need to invoke COMMITs when you're done doing that unit of work.

-- john r pierce, recycling bits in santa cruz
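(You can reproduce that state from psql as a quick illustration; the values below are made up:)

    begin;
    insert into distributed_events (type, action, id, properties)
        values ('demo', 'insert', 42, '{}');
    -- until this session issues COMMIT (or ROLLBACK), pg_stat_activity shows it
    -- as 'idle in transaction', and any locks it took are held the whole time
    commit;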
Re: [GENERAL] schema or database
On 4/12/2015 7:20 PM, Ian Barwick wrote:
> If, as you say, access to the database is via a single application database user, it will probably make more sense to use multiple schemas rather than multiple databases. Keeping everything in one database will simplify administration (e.g. making backups - you'll just need to dump the one database rather than looping through a variable number) and will make life easier if you ever need to do some kind of query involving multiple customers. There will also be less overhead when adding a new schema vs adding a new database.

and less overhead in connections, as one client connection can serve multiple customers.

-- john r pierce, recycling bits in santa cruz
Re: [GENERAL] schema or database
On 4/13/2015 12:07 AM, Alban Hertroys wrote:
> That's easier to back up, sure, but you can't restore a single customer's schema easily that way. So if one customer messes up their data big time, you'll need to restore a backup for all customers in the DB.

if you use pg_dump -Fc, then you can specify the schema at pg_restore time.

-- john r pierce, recycling bits in santa cruz
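(For example, with placeholder database and schema names; you'd typically drop or clean out the damaged schema before restoring it:)

    pg_dump -Fc -f alldb.dump mydb
    pg_restore -d mydb -n customer42 alldb.dump    # restore just that one customer's schema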
Re: [GENERAL] function to send email with query results
On 4/17/2015 10:30 PM, Suresh Raja wrote:
> I'm looking to write a function to send email with the result of a query. Is it possible to send email from within a function? Any help is appreciated.

I would do that in an application, not a pl/pgsql function: make a query, fetch the results, format them as you see fit for the email, and toss it at your language-of-choice's email facility.

-- john r pierce, recycling bits in santa cruz
Re: [GENERAL] LDAP Authentication
On 4/22/2015 11:37 AM, Joseph Kregloh wrote:
> I have successfully set up LDAP authentication using simple bind. This was my test case. Now I need to move to the next level, which would be search and bind. This will allow me to grant access to particular servers for some people. I am not sure where or how the ldapsearchattribute comes into play.

you would do this by CREATE USER on the various servers for those people, along with GRANT. LDAP only provides authentication; it doesn't manage authorization.

-- john r pierce, recycling bits in santa cruz
Re: [GENERAL] Connecting to 2 different DB on same machine
On 4/22/2015 10:49 AM, puneet252002 wrote:
> Hi Team, I am confused about how to connect to two different databases at different paths: 1. the default PostgreSQL DB, and 2. the ManageEngine ServiceDesk Plus DB. I have installed PostgreSQL for Windows successfully and connected to the default PostgreSQL database. Now I want to connect to the ManageEngine ServiceDesk Plus database located at the path "C:\ManageEngine\ServiceDesk"; the database name is ServiceDesk. How do I connect to the above 2 different DBs in pgAdmin III? Regards, Puneet

if that's two different database *servers* running on the same system, they would need to use two different port numbers. the default port is 5432, so this alternate database server would have to be running on some other port, like 5433.

if that's just two different databases on the same server instance, you'd specify the database name when you connect to the default port.

-- john r pierce, recycling bits in santa cruz
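(In psql or pgAdmin that just means two connection definitions differing in port; 5433 here is only a guess at what ManageEngine's bundled server might use:)

    psql -h localhost -p 5432 -U postgres -d postgres        # the default PostgreSQL instance
    psql -h localhost -p 5433 -U postgres -d ServiceDesk     # the ServiceDesk instance, if it listens on 5433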
Re: [GENERAL] LDAP Authentication
On 4/22/2015 2:57 PM, Joseph Kregloh wrote:
> I see. That would still require a manual process to create the user on each server. I was planning on using some already existing scripts to create the user automatically on all servers, and then LDAP would authorize depending on attributes in their LDAP profile.

but that's not how it works, so all the 'planning' in the world won't change a thing. access rights per database are managed with GRANT, and users must be created with CREATE USER on each server regardless of how they are authenticated.

-- john r pierce, recycling bits in santa cruz
Re: [GENERAL] Restarting DB after moving to another drive
On 5/13/2015 11:06 AM, Daniel Begin wrote:
> I am then currently running pg_dumpall on the database. I will restore the cluster on the new drive once completed. However, there is still something obscure in the process. The doc says "pg_dumpall requires all needed tablespace directories to exist before the restore". External tablespace directories are easy to create, but what about the pg_default and pg_global tablespaces, since I never created specific tablespaces for them?

those are created when you initdb the target cluster, prior to restoring the pg_dumpall output.

-- john r pierce, recycling bits in santa cruz
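(So a restore onto the new drive looks roughly like this; the paths are placeholders, and any custom tablespace directories must be recreated at the paths the dump's CREATE TABLESPACE commands expect:)

    initdb -D /newdrive/pgdata              # creates pg_default and pg_global for you
    mkdir -p /newdrive/tblspc_foo           # only needed for tablespaces you created yourself
    pg_ctl -D /newdrive/pgdata start
    psql -d postgres -f dumpall.sql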
Re: [GENERAL] Restarting DB after moving to another drive
I use this script, run nightly via crontab, on my small pgsql servers. it runs as the postgres user.

#!/bin/bash

/usr/pgsql-9.3/bin/pg_dumpall --globals-only | gzip > /home2/backups/pgsql/pgdumpall.globals.`date +\%a`.sql.gz

for db in $(psql -tc "select datname from pg_database where not datistemplate"); do \
    pg_dump -Fc -f /home2/backups/pgsql/pgdump.$db.$(date +\%a).dump $db
done

this creates a globals-only backup and a separate backup of each database, for each day of the week.

-- john r pierce, recycling bits in santa cruz
Re: [GENERAL]
On 5/18/2015 4:48 AM, Michael Paquier wrote:
>> And easily I can upgrade this and what are the steps to upgrade, kindly confirm?
>
> The on-disk format for the same major version is compatible, so simply install the new binaries and restart your server. The installation of the new binaries depends on your OS and/or the way things have been installed.

you need to read the release notes to see if there are any special steps due to bug fixes. some incremental updates may require a reindex of certain index types, for instance.

-- john r pierce, recycling bits in santa cruz
Re: [GENERAL] Consistent state for pg_dump and pg_dumpall
On 5/20/2015 10:44 AM, Michael Nolan wrote:
> When I moved to a new release of pg (9.3) last December, I stopped all transaction processing first so that pg_dumpall had no consistency issues.

the only possible consistency issue would be if you have applications doing 2-phase commits to two different databases on the same server; otherwise each database is dumped as a single transaction, and all data elements within that database are point-in-time consistent.

my preferred backup procedure for a whole-server dump is to A) pg_dumpall --globals-only | gzip > ...  and B) for each database, pg_dump -Fc -f $database.Fc.pgdump $database.

I do this via the following crontab entry for the postgres user...

$ crontab -l
30 1 * * * /var/lib/pgsql/cronbackup.sh

and this script...

#!/bin/bash
#
/usr/pgsql-9.3/bin/pg_dumpall --globals-only | gzip > /home2/backups/pgsql/pgdumpall.globals.`date +\%a`.sql.gz

for i in $(psql -tc "select datname from pg_database where not datistemplate"); do \
    pg_dump -Fc -f /home2/backups/pgsql/pgdump.$i.$(date +\%a).dump $i
done

-- john r pierce, recycling bits in santa cruz
Re: [GENERAL] [SQL] extracting PII data and transforming it across table.
On 5/21/2015 9:51 AM, Suresh Raja wrote:
> I'm looking for directions or help in extracting data from production and altering employee id information while extracting, but at the same time maintaining referential integrity across tables. Is it possible to dump data to a flat file and then run some script to change emp id data in all files? I'm looking for an easy solution. Thanks, -Suresh Raja
>
> Steve: I too would like to update the id's before dumping. can i write a sql to union all tables and at the same time create a unique key valid across tables?

it sounds like you have a weak grasp of relational database design.

I would have a single Employee table, with employee ID as the primary key and any other attributes which are common to all employees. then I would have other tables which contain information about specific employee types or groups; these other tables could have their own primary key, but would reference the Employee table's EmployeeID field for the common employee attributes.

-- john r pierce, recycling bits in santa cruz
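(A minimal sketch of that layout; the table and column names here are invented for illustration:)

    create table employee (
        employee_id  serial primary key,
        full_name    text not null,
        hire_date    date
    );

    create table contractor_details (
        employee_id  integer not null references employee(employee_id),
        agency       text,
        primary key (employee_id)
    );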
Re: [GENERAL] Between with a list of ranges possible ?
On 5/29/2015 9:32 AM, Arup Rakshit wrote:
> Can I do the below 3 queries in a single query?
> select * from table where number * 3 between start_value1 and end_value2;
> select * from table where number * 3 between start_value2 and end_value2;
> select * from table where number * 3 between start_value3 and end_value3;

select * from table where
    (number * 3 between start_value1 and end_value2) OR
    (number * 3 between start_value2 and end_value2) OR
    (number * 3 between start_value3 and end_value3);

-- john r pierce, recycling bits in santa cruz
Re: [GENERAL] Between with a list of ranges possible ?
On 5/29/2015 10:41 AM, John R Pierce wrote:
> On 5/29/2015 9:32 AM, Arup Rakshit wrote:
>> Can I do the below 3 queries in a single query?
>> select * from table where number * 3 between start_value1 and end_value2;
>> select * from table where number * 3 between start_value2 and end_value2;
>> select * from table where number * 3 between start_value3 and end_value3;
>
> select * from table where
>     (number * 3 between start_value1 and end_value2) OR
>     (number * 3 between start_value2 and end_value2) OR
>     (number * 3 between start_value3 and end_value3);

oh, do note, this won't be /exactly/ the same if the ranges overlap. your first version would return the overlapping rows once per query, while the OR version will return just one of each row that falls in any of the ranges.

-- john r pierce, recycling bits in santa cruz
Re: [GENERAL] odbc to emulate mysql for end programs
On 6/1/2015 12:04 PM, Mimiko wrote:
> I have several programs which can use mysql, access, or oracle DBs as a front-end DB via odbc. Is there a method to emulate or hide the back-end DB so that the program sees mysql, but the real DB is on postgres?

ODBC *is* that method. you just need a database-specific ODBC driver, which William already described.

now, if your code is making use of MySQL-specific features, then it's not going to work with Postgres (but it also likely won't work with Oracle or Access/Jet databases either).

-- john r pierce, recycling bits in santa cruz
Re: [GENERAL] replicating many to one
On 6/4/2015 10:07 AM, Doiron, Daniel wrote:
> I have a situation where I need to replicate the databases in 4 clusters on 4 individual servers to a single cluster on 1 server. So far, the best option looks like pgpool statement-based replication, the major downside being sequences. Does anyone know a better way to achieve this type of replication?

do all of your applications have inherent protection against collisions? to use an overly simple example, if you have an inventory table, what happens if two nodes concurrently sell the same last units of a given SKU? Or, a banking example: what's to prevent concurrent transactions on distributed servers handing out the same money?

-- john r pierce, recycling bits in santa cruz
Re: [GENERAL] alter column type
On 6/5/2015 11:37 AM, Ravi Krishna wrote:
> Why is PG even re-writing all rows when the data type is being changed from a smaller (int) to a larger (bigint) type, which automatically means existing data is safe? Likewise, changing from varchar(30) to varchar(50) should involve no rewrite of existing rows.

int to bigint requires a storage change, as all bigints are 64 bit while all ints are 32 bit. it would be a MESS to try and keep track of a table that has some int and some bigint storage of a given field.

now, varchar(30) to varchar(50), that I can't answer; are you sure that does a rewrite? the storage is exactly the same for those.

-- john r pierce, recycling bits in santa cruz
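(One way to check whether a given ALTER actually rewrote the table is to compare pg_class.relfilenode before and after; it changes whenever the heap is rewritten. Table and column names below are placeholders:)

    select relfilenode from pg_class where relname = 't';
    alter table t alter column c type bigint;               -- int -> bigint: expect a rewrite
    select relfilenode from pg_class where relname = 't';   -- a new value means the table was rewritten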
Re: [GENERAL] alter column type
On 6/5/2015 11:46 AM, Casey Deccio wrote:
> On Fri, Jun 5, 2015 at 2:36 PM, Michael Nolan <htf...@gmail.com> wrote:
>> Probably too late for this time, but in the past when I've needed to redefine the type for a column, I've made a dump, edited the dump file to change the type, and then renamed the table and reloaded it. That's usually several orders of magnitude faster.
>
> Actually, not too late. My first ALTER is still running, and I still have four more to go. Sigh. I had thought of this but wasn't sure how it might compare. Thanks for the data point :)

if all 5 ALTERs were to the same table, you should have combined them into one ALTER statement.

-- john r pierce, recycling bits in santa cruz
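(ALTER TABLE accepts multiple actions in one statement, so the table only gets rewritten once instead of five times; table and column names here are placeholders:)

    alter table mytable
        alter column a type bigint,
        alter column b type bigint,
        alter column c type bigint;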
Re: [GENERAL] GCC error and libmpfr.so.4 not found
On 6/11/2015 7:20 AM, Asma Riyaz wrote:
> I have seen an earlier post with gcc errors, however I couldn't figure out what the actual problem here is: is it that libmpfr.so.4 is not found? or does gcc need to be installed fresh?

on RHEL 7 / CentOS 7, that library is provided by the package mpfr... ask the system administrator to install it:

yum install mpfr

if your host is not providing basic working development tools, you either need to find a new host or discuss this with management.

-- john r pierce, recycling bits in santa cruz
Re: [GENERAL]
On 6/23/2015 10:58 AM, Bruno Hass de Andrade wrote:
> I really don't know what is happening, why postgres hangs and doesn't close the connections. This django web interface is used only for management and viewing logs; most servers have two users only. I've sent this email just to find out whether any of you have seen this, or something like this, before.

use lsof or equivalent to determine what process owns the other side of the socket each of those backend processes is attached to. my guess is, you're not releasing resources in django and they are piling up.

-- john r pierce, recycling bits in santa cruz
Re: [GENERAL] native api or odbc?
On 6/23/2015 3:56 PM, Ted Toth wrote:
> What are people's experiences with either/both, pluses/minuses?

ODBC adds a whole layer of obfuscation which, IMHO, if you know you're only dealing with PostgreSQL, is infuriating. I'd rather program the native API; it's just much cleaner and simpler, unless you're using a programming framework that has native ODBC support built in.

-- john r pierce, recycling bits in santa cruz
Re: [GENERAL] Correct place for feature requests
On 6/25/2015 11:59 AM, Алексей Бережняк wrote:
> I think that PostgreSQL is a great RDBMS, but one important (for me) feature it is missing is case-insensitive identifier quotes ([table].[column]) like in Microsoft SQL Server.

putting names in [ ] is not part of any SQL specification.

> I know that there are double quotes, but they are case-sensitive. Or maybe some option that will make double quotes case-insensitive.

the current behavior is compliant with the SQL specification. if you want case-insensitive, don't quote the identifiers. if you do quote them, they are implicitly case sensitive according to the specifications.

-- john r pierce, recycling bits in santa cruz
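(A quick illustration of that spec-mandated behavior:)

    create table Foo (Id int);        -- unquoted names fold to lower case: stored as foo(id)
    select id from FOO;               -- works, FOO also folds to foo
    create table "Bar" ("Id" int);    -- quoted names are stored exactly as written
    select "Id" from "Bar";           -- must match the case exactly
    select id from Bar;               -- ERROR: relation "bar" does not exist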
Re: [GENERAL] Correct place for feature requests
On 6/25/2015 12:33 PM, Alvaro Herrera wrote:
> What happens if you have spaces in your identifiers?

either don't do that, or use quoted identifiers and be consistent about your case.

what if you have spaces in your identifiers in your java/c/python/perl/php/basic/fortran/etc program? oh wait, you can't DO that.

-- john r pierce, recycling bits in santa cruz
Re: [GENERAL] create index on a field of udt
On 6/28/2015 10:08 PM, Shujie Shang wrote:
> create type info as (id int, name text);
> I want to create an index on info.id.

you can't create an index on a type, just on a table.

create table info (id serial primary key, name text);

or

create table info (id serial, name text);
alter table info add primary key(id);

or, more generically,

create index on some_table ( some_field [, ...] );

(a primary key is a unique not null constraint; this implies an index in postgresql)

-- john r pierce, recycling bits in santa cruz
Re: [GENERAL] create index on a field of udt
On 6/28/2015 10:31 PM, Shujie Shang wrote:
> Oh, I didn't explain my question well. actually I want to create an index on a udt in a table, e.g.
> create type info as (id int, name text);
> create table test (i info);
> I want to run: create index myindex on test (i.id)

create table test of info (primary key (id));

or, if you want to use your type plus other stuff in the table, I believe it's something like...

create table test (i info, stuff ...);
create index on test (((i).id));

(you can't declare a primary key on a sub-field of a composite column, but an expression index like the one above works.)

watch out for ambiguity if the type names match the table or field name. see http://www.postgresql.org/docs/current/static/rowtypes.html#AEN7836

-- john r pierce, recycling bits in santa cruz
Re: [GENERAL] Turn off streaming replication - leaving Master running
On 6/28/2015 10:52 PM, Michael Paquier wrote:
> Well, yes. It depends on what you want to do then. If this testing really requires promoting the standby, then you will need to take a new fresh base backup, knowing that you are using 9.2.

if the standby is running on a file system with snapshotting (like zfs), and the master is doing WAL archiving, you could, in theory, pause the replication and snapshot the slave, do read/write tests on the slave, then restore that snapshot and resume replication, pulling from the WAL archive til it catches up.

-- john r pierce, recycling bits in santa cruz
Re: [GENERAL] create index on a field of udt
On 6/28/2015 11:24 PM, Shujie Shang wrote:
> insert into test values (generate_series(1, 300), (1, 'hi')::info);
> explain select * from test where i.id = 1;
> the result is: seqscan

doesn't every row of that match i.id = 1? try...

insert into test values (generate_series(1, 300), (generate_series(1, 300), 'hi')::info);
analyze test;
explain select * from test where (i).id = 1;

-- john r pierce, recycling bits in santa cruz
Re: [GENERAL] Turn off streaming replication - leaving Master running
On 6/28/2015 11:58 PM, Andy Erskine wrote:
> If I touch my trigger file and promote my secondary to a master - what effect will that have on the Master - will I need to make any changes on that side? Will it still try and stream data across to the promoted secondary and just fill up the log files with error messages?

it's not 'push'; the slave pulls the data from the master.

as long as the master has WAL file archiving enabled (which is a separate but related thing to streaming), the slave will catch up... if the slave understands where the WAL archive is, then when it's woken back up after being restored to how it was before your testing, it will query the master, find out the master is way ahead of its own timeline, and consult the WAL archives, fetching as many as are needed to catch up to the server's current timeline, then resume streaming...

-- john r pierce, recycling bits in santa cruz
Re: [GENERAL] Turn off streaming replication - leaving Master running
On 6/29/2015 12:06 AM, Jeff Janes wrote:
> But since you want a clone, what is the point of first setting up streaming and then breaking it? Just use pg_basebackup to set up a clone directly, without ever having started streaming. It seems like you are just going to confuse yourself about what is a standby meant for failover, and what is a clone meant for testing. With possibly disastrous consequences.

VERY good point!

-- john r pierce, recycling bits in santa cruz
Re: [GENERAL] Which replication is the best for our case ?
On 6/29/2015 10:41 AM, Melvin Davidson wrote:
> I think it would help immensely if you provided details such as the table structure, indexes, the actual UPDATE query, and the reason all rows of the table must be updated.

indeed, the whole model of massaging the entire database every 10 minutes is highly suspect.

-- john r pierce, recycling bits in santa cruz
Re: [GENERAL] Turn off streaming replication - leaving Master running
On 6/29/2015 10:55 PM, Andy Erskine wrote:
> agreed, there is an element of risk. however a backup of the master will be carried out prior (ok, there could potentially be a gap of data during downtime). unfortunately i have nothing else big enough (disk-wise) to run my tests on.

rent a virtual server for a few days from Amazon or someone.

-- john r pierce, recycling bits in santa cruz
Re: [GENERAL] Which replication is the best for our case ?
On 6/30/2015 9:57 AM, ben.play wrote:
> To be more precise: we have a database with more than 400 GB and ONE table with more than 100 GB of data. This is huge for Doctrine. When the cron runs, it writes a lot to the disks in temporary files (although we have 128 GB of RAM...). Of course, each table is well indexed...
> That is why I'm thinking about replication: my server A (master) is for my users... and my server B is a server reserved for calculations (and it is this server B which writes to the base).
> This is an image of my dream system: http://postgresql.nabble.com/file/n5855916/9e41ce1f-38ea-4fba-a437-a43c598e655c.jpg (if you can't see the image: http://tof.canardpc.com/view/9e41ce1f-38ea-4fba-a437-a43c598e655c.jpg)

what happens when master A continues to update/insert into these tables that your cron job is batch updating on the offline copy? How would you merge those changes in?

-- john r pierce, recycling bits in santa cruz
Re: [GENERAL] Which replication is the best for our case ?
On 7/1/2015 3:08 AM, ben.play wrote:
> In fact, the cron job will:
> -> select about 10,000 lines from a big table (>100 GB of data). 1 user has about 10 lines.
> -> each line will be examined by an algorithm
> -> at the end of each line, the cron job updates a few parameters for the user (adds some points, for example)
> -> then, it inserts a line in another table to indicate each transaction to the user.
> All updates and inserts can be made ONLY by the cron job... Therefore... the merge can be done easily: no one else can update these new data. But... how do big companies like Facebook or Youtube calculate on (a) dedicated server(s) without impacting users?

that sort of batch processing is not normally done in database-centric systems; rather, databases are usually updated continuously in realtime as the events come in, via transactions.

your cron task is undoubtedly single threaded, which means it runs on one core only, so the whole system ends up waiting on a single task crunching massive amounts of data while your other processor cores have nothing to do.

it sounds to me like whoever designed this system didn't have a solid grip on transactional database processing.

-- john r pierce, recycling bits in santa cruz
Re: [GENERAL] database-level lockdown
On 7/4/2015 10:49 AM, Filipe Pina wrote:
> Terminating all sessions will break every other transaction (except for the one doing it). Locking the database will put all the others on hold. As we're talking about quick/instant operations, being on hold will have an impact on performance but won't cause anything to abort.

you can't lock something that's already in use by a transaction, as that transaction already has locks on it. better would be to design your workflow so it can all be done transactionally, and get away from this batch processing model.

-- john r pierce, recycling bits in santa cruz
Re: [GENERAL] [pg_hba.conf] publish own Python application using PostgreSQL
On 7/5/2015 3:15 AM, c.bu...@posteo.jp wrote:
> These are the modifications I have to make so my application runs with the connection string "postgres://puser@localhost/FoobarTest". The settings are in the /etc/postgresql/9.3/main/pg_hba.conf file. There I change this line
>     host    all    all    127.0.0.1/32    md5
> to
>     host    all    all    127.0.0.1/32    trust
> I have to create a user without a password (besides the admin/postgres), too.

why not connect as postgres://puser:somepass@localhost/dbname and create puser with a password? that way mucking with configuration files is not required.

-- john r pierce, recycling bits in santa cruz
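(That is, once, as a superuser; the role, database, and connection-string names are taken from the post above and the password is a placeholder:)

    create role puser login password 'somepass';
    create database "FoobarTest" owner puser;

With that in place, the stock "host all all 127.0.0.1/32 md5" line works as shipped and the app connects with postgres://puser:somepass@localhost/FoobarTest.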
Re: [GENERAL] [pg_hba.conf] publish own Python application using PostgreSQL
On 7/5/2015 9:43 PM, c.bu...@posteo.jp wrote:
> But isn't there a way to use PostgreSQL without that setup and configuration work?

no, not really, as it's a generic database server meant to be used by multiple applications across a network, with a wide range of configuration options, plugins and addons, etc. at a bare minimum, a database administrator needs to create database roles (users) and databases for an app like yours.

-- john r pierce, recycling bits in santa cruz
Re: [GENERAL] [pg_hba.conf] publish own Python application using PostgreSQL
On 7/6/2015 9:55 PM, c.bu...@posteo.jp wrote:
> On 2015-07-05 22:16 John R Pierce wrote:
>> at a bare minimum, a database administrator needs to create database roles (users) and databases for an app like yours.
>
> The admin doesn't need to create the db. It is done by the application (sqlalchemy-utils on Python3) itself.

an application should not have the privileges to do that. you don't run your apps as 'root', do you? why would you run them as a database administrator?

-- john r pierce, recycling bits in santa cruz
Re: [GENERAL] encrypt psql password in unix script
On 7/8/2015 11:34 AM, Suresh Raja wrote:
> I cannot use .pgpass, as the password stored there is not encrypted. Can I use an encrypted password from a unix shell script? Has anybody run into the same situation? What options do I have?

I believe anywhere you enter a password in postgres, it can be the md5 hash instead. but what security does that gain you? if someone gets your encrypted/hashed password, he can still log on. the .pgpass file has to have permissions 0600, so only YOU (and root) can read it.

if these are LOCAL connections to a pg server on the same machine, you can use 'peer' (formerly 'ident') as your authentication method, where your unix user is used as the postgres username. or, you can use ssl certificates for authentication; this is more complex to set up.

-- john r pierce, recycling bits in santa cruz
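(For the local-connection case, a sketch of the pg_hba.conf entry, assuming the script runs as an OS user whose name matches the database role; the names here are placeholders:)

    # TYPE    DATABASE    USER         METHOD
    local     mydb        backupuser   peer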
Re: [GENERAL] [SQL] encrypt psql password in unix script
On 7/8/2015 12:01 PM, Steve Midgley wrote:
> My suggestion is to put it in an environment variable and set that variable from a shell startup script that is secured with permissions. (http://www.postgresql.org/docs/9.4/static/libpq-envars.html)

that just moves the problem: now the plaintext password is in a script file somewhere, AND many OSes let other users see your environment.

> If you can't do that, the only other method I've used is to set up Postgres with Ansible and store the Pg passwords in an ansible vault, which is encrypted. Ansible asks for the decrypt key when it runs.

how would that work for unattended scripts, such as cron jobs?

-- john r pierce, recycling bits in santa cruz
Re: [GENERAL] Oracle to PostgreSQL Migration - Need Information
On 7/8/2015 1:16 PM, dinesh kumar wrote:
> We recently did a similar migration for one of our customers. We used all opensource tools to achieve this migration process. We used the Pentaho Data Integration tool for the online DB migration, which needed minimal downtime with a CDC (Change Data Capture) approach. Also, we used the Ora2Pg tool to migrate the DB objects, with some manual syntax modifications.

that's the easy part. now what about the massive code base of PL/SQL and triggers he mentioned?

-- john r pierce, recycling bits in santa cruz
Re: [GENERAL] Socket Connection Authentication
On 7/8/2015 11:32 PM, basti wrote:
> When I try "local database user md5" I get the error: "Peer authentication failed for user ..."
> Is it possible to authenticate a user with md5 via a socket connection?

yes, it is; I do it all the time.

was this the first 'local' line, or at least in front of any 'local all all' lines? were you using localhost to connect, or leaving the host empty? localhost uses a 'host' entry.

-- john r pierce, recycling bits in santa cruz
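(pg_hba.conf is matched top-down and the first matching line wins, so the md5 line has to come before any broader peer line; the database and user names below are placeholders:)

    # TYPE    DATABASE    USER     METHOD
    local     mydb        basti    md5
    local     all         all      peer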
Re: [GENERAL] INSERT ... ON CONFLICT DO UPDATE
On 7/20/2015 7:01 AM, Geoff Winkless wrote:
> Some web research suggests that random sequences are not great for indexes because of the resultant "keyspace fragmentation". I'm assuming that means a low number of nodes in the btree leaves, so an increase in memory usage for the index?

that suggests some folks overthink their indexing strategies and end up 'overoptimized'. anyways, a simple REINDEX fixes all sorts of index fragmentation.

-- john r pierce, recycling bits in santa cruz
Re: [GENERAL] Creating a user for pg_start_backup
On 7/21/2015 1:31 AM, Andrew Beverley wrote:
> I had to specify a database name when connecting:
> psql -U backup -c "select pg_start_backup('Daily backup')" -d postgres

psql defaults to the current OS user name for both the database name and the user name. I probably would have run that psql command as the system postgres user and not specified any -U or -d...

-- john r pierce, recycling bits in santa cruz
Re: [GENERAL] Creating a user for pg_start_backup
On 7/21/2015 1:51 AM, Andrew Beverley wrote:
> Thanks John. The backup script is running as root, so presumably I'd have to use sudo? Or should I run a separate cron job as postgres to do the above, and run the backup script separately?

those are both possibilities. I'd either use su (not sudo) from root, or I'd cron it from the postgres DBA account, depending.

-- john r pierce, recycling bits in santa cruz
Re: [GENERAL] Creating a user for pg_start_backup
On 7/21/2015 8:36 AM, Andrew Beverley wrote:
> Sorry to be dragging this off-topic, but what's the reason for using su instead of sudo?

sudo is for non-root users; it ends up running the su command. normally root doesn't use sudo at all. look at all the init.d scripts that run daemons as other users; they typically use su, like...

$SU -l postgres -c "$PGENGINE/postmaster -p '$PGPORT' -D '$PGDATA' ${PGOPTS} &" >> "$PGLOG" 2>&1 < /dev/null

(where $SU is su or runuser)

-- john r pierce, recycling bits in santa cruz
Re: [GENERAL] Postgresql upgrade from 8.4 to latest
On 7/28/2015 1:29 PM, Joshua D. Drake wrote:
>> I need to upgrade a Postgresql database from 8.4 to the latest stable version (9.4). The db size is almost 2.5 TB. Is an in-place pg_upgrade a good idea for it?
>
> With quite a bit of testing, yes.

yeah, that was my thought too... clone the database to a test/staging machine, and practice running pg_upgrade on it til you get it right.

frankly, I'd want to use a new machine for the final upgrade, too: stopping your applications, taking a fresh clone, doing the upgrade, then swapping it into place and restarting your applications.

-- john r pierce, recycling bits in santa cruz
Re: [GENERAL] Postgresql upgrade from 8.4 to latest
On 7/28/2015 1:35 PM, AI Rumman wrote:
> But from what I read, an in-place upgrade has a smaller outage compared to dump/restore. Yet there are so many articles about having bugs afterwards. Do you think it is a good idea to use pg_upgrade for a critical database application?

most of those are application issues with 9.4 vs 8.4, not directly related to the pg_upgrade process.

-- john r pierce, recycling bits in santa cruz
Re: [GENERAL] postgres 9.3
On 7/31/2015 9:39 AM, Ramesh T wrote: -bash-4.1$ service postgres-9.3 start postgres-9.3: unrecognized service on redhat/centos type systems, the service name is postgresql-9.3. note that the service command is meant to be run by the root user; a $ prompt in bash generally indicates you're not running as root. -- john r pierce, recycling bits in santa cruz -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
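i.e., as root, something along these lines:
    service postgresql-9.3 start
    # and, if you want it started at boot (assuming the stock init script):
    chkconfig postgresql-9.3 on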
Re: [GENERAL] postgres 9.3
On 8/1/2015 3:08 AM, Andreas Joseph Krogh wrote: The version goes last: service postgresql start 9.3 that's very OS/distribution-specific. -- john r pierce, recycling bits in santa cruz
Re: [GENERAL] scaling postgres - can child tables be in a different tablespace?
On 8/4/2015 6:14 AM, Melvin Davidson wrote: As additional advice, to get the best performance, you will want all your tablespaces to be on separate spindles/disks. EG: disk1/tblspc1 disk2/tblspc2 disk3/tblspc3 ... disk99/tblspc99 actually, I find that to get the best performance you most often want to stripe all the disks together and put everything on one big volume; that way all IO is evenly distributed. otherwise you'll find too much IO on some tables/partitions and not enough on others, so most of the disks are idle most of the time. -- john r pierce, recycling bits in santa cruz -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] postgres connection
On 8/6/2015 6:21 AM, Ramesh T wrote: Hi all, I got a problem when I start the postgres server, like: could not connect to the server: connection refused on host 2.3.421.1 and accepting tcp/ip connection on port 5432. any help appreciated. you could start with telling us what operating system/distribution/version you're running, and how you installed postgresql. in most default configurations, the server only listens on localhost, not other IP addresses; you need to change the listen_addresses parameter in postgresql.conf and adjust your authentication methods appropriately in pg_hba.conf... but before messing with that, verify whether the server is actually running. Do note, /etc/init.d scripts are meant to be run by the root user only, not by a regular user account. -- john r pierce, recycling bits in santa cruz -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
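for what it's worth, the config changes I mean are along these lines (the address range is only an example, use your own network):
    # postgresql.conf
    listen_addresses = '*'          # or a specific interface address
    # pg_hba.conf
    host    all    all    192.168.1.0/24    md5
then restart the server (a reload is enough for pg_hba.conf changes alone).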
Re: [GENERAL] PostgreSQL - The Best Overall Database
On 8/14/2015 11:06 AM, Ravi Krishna wrote: are there any details available on this poll? the linked article links to this report, which requires registration to download. http://www.dbta.com/DBTA-Downloads/ResearchReports/The-Real-World-of-the-Database-Administrator-5237.aspx -- john r pierce, recycling bits in santa cruz -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] deletion of the thread posted by me
On 8/18/2015 3:08 AM, Mitu Verma wrote: I need to delete the following threads posted by me on the postgreSQL forum. Kindly suggest a way forward as it is urgent. good luck with that. email is write-only; the postgres mailing lists are archived on dozens and dozens of different systems. -- john r pierce, recycling bits in santa cruz
Re: [GENERAL] How to tune my new server
On 8/18/2015 6:33 AM, Joseph Kregloh wrote: Of course not every machine is built the same and not every PostgreSQL server is used the same, so you tune according to the machine and the usage of the server. indeed, your use case heavily affects the optimal decisions; there's no one-size-fits-all answer. are you doing OLTP, characterized by frequent small transactions with a high percentage of updates and many clients, or is this OLAP, where you bulk load large data sets, then do relatively few very large aggregate-type queries for reporting purposes? -- john r pierce, recycling bits in santa cruz -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Problem with pl/python procedure connecting to the internet
listen_addresses should only affect the interfaces that the postgres server listens on for incoming connections. as long as your app is on the same machine, and uses localhost:someport to connect to the postgres server, then the default listen_addresses='localhost' should be sufficient. if you want clients to connect from other systems, then listen_addresses = '*' is appropriate. listen_addresses should have no impact on what your plpython app can connect to outside of postgres, unless you're running the 'safe' version of pl*** -- john r pierce, recycling bits in santa cruz -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Problem with pl/python procedure connecting to the internet
On 8/22/2015 12:23 AM, Igor Sosa Mayor wrote: Somehow I have the impression postgres is trying to make a connection without having the rights to get the answer (I'm on linux). I don't understand exactly which process (and with which rights) is executing the plpython procedure... Thanks in any case for your answer. each client connection creates a postgresql server process. pl-anything runs in that context, along with all sql for that client. say... is selinux by any chance running in enforcing mode? you might try temporarily setting it to 'permissive' with the setenforce command, and see if your python works. -- john r pierce, recycling bits in santa cruz if this is a reply to a list, I'm subscribed to that list, please reply to the list not me personally. -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
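concretely, something like this, as root (the change only lasts until reboot):
    getenforce
    setenforce permissive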
Re: [GENERAL] PostgreSQL Developer Best Practices
On 8/22/2015 12:40 PM, Melvin Davidson wrote: What is occurring is that the developers are sending strings like 'Mr. M\'vey', which, if we set standard_conforming_strings = ON, would, and does, result in errors and the statement failing, which is a lot less desirable than a simple warning. if your developers were writing invalid C code (or whatever other language they program in), would you modify the compiler to accept their invalid syntax? or would you tell the developer to fix their code properly? if the developers refused, why would you not fire them on the spot for incompetence? -- john r pierce, recycling bits in santa cruz -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
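for the record, the standards-conforming way to write that literal is a doubled quote, or an explicit E'' string if backslash escapes are really wanted, e.g.:
    SELECT 'Mr. M''vey';
    SELECT E'Mr. M\'vey';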
Re: [GENERAL] Problem with pl/python procedure connecting to the internet
On 8/23/2015 10:49 AM, Tom Lane wrote: However, what this smells like to me is a permissions problem. I think you were way too quick to dismiss the idea that SELinux (or something just like it) is restricting outbound internet connections from Postgres. It's standard for SELinux to be configured so that network-accessible daemons like Postgres are locked down harder than the very same code would be treated when being invoked from the command line --- and network access would be one of the prime candidates to be disabled by default. Have you poked around under /var/log/ to see if the kernel logs anything when the connection attempt doesn't work? For that matter, have you checked the postmaster log to see what Postgres logs about it? also, `getenforce` ... if it comes back Enforcing, as root, do... setenforce permissive and try your trigger again (don't reboot or restart anything, just do it) -- john r pierce, recycling bits in santa cruz
Re: [GENERAL] PostgreSQL Developer Best Practices
On 8/24/2015 9:34 AM, Melvin Davidson wrote: And again, I am talking about _database_ design, not Web apps. Letting Web developers design a database to work with their app, is a very, Very, VERY bad idea. It is far better to let DBA's and "database developers" design a good database, than to let those apps mold a db into a non-optimum design. if you let the app drive the database design, you tend to end up with a database which is only useful to that single app, and likely breaks when that app changes. -- john r pierce, recycling bits in santa cruz -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] PostgreSQL Developer Best Practices
On 8/25/2015 1:42 PM, Gavin Flower wrote: Account numbers are externally generated, and may potentially change. Management might suddenly decide that they want to start using the year the account started as the first 4 digits, or that the branch code should be reflected in it, or something else. The database should be protected from these arbitrary changes. Hence the account_no is not a good candidate for a primary key. such practices would wreak total havoc on a traditional paper-ledger accounting system, as well as on things like pending AR/AP where external companies will be referencing your account numbers. -- john r pierce, recycling bits in santa cruz -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] two factor authentication for pgadmin iii
On 8/26/2015 11:55 PM, Nima Azizzadeh wrote: Hello, I'm going to create two-factor authentication for the pgadmin server... you mean postgresql server, don't you? pgadmin is a client program/utility. I'm using postgresql 9.4 with pgadmin III on Linux Mint 17.2 32bit... I already have password authentication, but for better security I want to require 2 factors. The authentication factors could be anything (what people have, what people know, where people are, or what people are). Could anybody help me on this? one standard way of doing 2-factor is to use ssl keys, with a password on the ssl key store. But how will that work with application programs connecting to postgres, such as a webserver-based application? -- john r pierce, recycling bits in santa cruz -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
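one way this is often wired up in pg_hba.conf is to combine password auth with a required client certificate, something like (a sketch only; the address range and your CA setup via ssl_ca_file are up to you):
    hostssl    all    all    0.0.0.0/0    md5    clientcert=1
so the client needs both a valid certificate signed by your CA and the password.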
Re: [GENERAL] Ubuntu installed postgresql password failure
On 9/13/2015 1:57 PM, Dale Seaburg wrote: I have reached my wit's end. lol I installed postgresql (9.3.9) and pgadmin3 with the Ubuntu Software Center. When using pgadmin or psql, I get an error "password authentication failed for user "postgres" ". Ubuntu (14.04.3). I created a postgres user BEFORE installing the postgresql/pgadmin toolset. The password I used for the postgres user will not work with the installed postgresql tools. What in the world am I doing wrong, and more importantly, how do I determine the password used for postgres during the postgresql install? AND, why should it be different from the postgres user I created. Any help will be greatly appreciated. try `su - postgres` THEN run psql. if that works, do this at the psql prompt: CREATE USER yourname SUPERUSER PASSWORD 'somepassword'; CREATE DATABASE yourname OWNER yourname; where yourname is your regular login. now exit psql, exit the su, and try running pgadmin. don't specify localhost, leave the host empty/blank, and you should be allowed to log in as yourname. explaining why this is the way it is requires more energy than I have at the moment. -- john r pierce, recycling bits in santa cruz -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Online backup of PostgreSQL data.
On 9/17/2015 12:31 AM, Raman, Karthik IN BLR STS wrote: I need a small clarification. We are using PostgreSQL version 9.4.1 We have a requirement to take an online backup of a running PostgreSQL system (without stopping the database). As per the design / architecture of the PostgreSQL system, is it technically allowed to dynamically copy the “Data” folder of the PostgreSQL system (without using any replication concept)? While restoring (we are allowed to stop the database!), can we directly restore this data folder? Are there any potential problems we will come across by doing it this way? (Or) Should I use some replication concept like pg_basebackup (?) to achieve the same. (In our case, we do not need replication as a feature to a different PC). It is sufficient for us to take the backup on the same PC. Any suggestions / help is highly appreciated. Thanks in advance! to copy the data directory and have it be useful you need to bracket the copy with calls to pg_start_backup() and pg_stop_backup(); this ensures the data files are coherent. this is in fact what pg_basebackup does for you. if the data is large enough that regular full copies are expensive, you might consider implementing a WAL archive system. you would periodically do a pg_basebackup as above, and also maintain a WAL archive of all WAL files since the beginning of the base backup. you can use this base backup plus WAL archive to implement 'Point In Time Recovery' or PITR, which will let you restore the database to just before any specified transaction number, so if bad data got introduced, you can restore it to just prior to that point. you might read all of this chapter in the manual... http://www.postgresql.org/docs/9.4/static/backup.html -- john r pierce, recycling bits in santa cruz
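roughly, the two approaches look like this (paths and labels are made up):
    # low level: bracket a file copy with the backup functions
    psql -U postgres -c "SELECT pg_start_backup('nightly', true);"
    rsync -a /var/lib/pgsql/9.4/data/ /backups/pgdata/
    psql -U postgres -c "SELECT pg_stop_backup();"
    # or let pg_basebackup do all of that for you
    pg_basebackup -U postgres -D /backups/base -X fetch -P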
Re: [GENERAL] looking for old rpm
On 9/21/2015 12:31 AM, Etienne Champetier wrote: I'm looking for an old postgres rpm, like postgresql93-server-9.3.6-1PGDG.rhel6.x86_64.rpm (one of our software packages is "certified" with this minor version ...) I cringe when I see requirements like this. did they also 'certify' this application to only run on whatever was the day's snapshot build of the OS, such that any newer security updates and bug fixes can't be applied to RHEL 6? specific to 9.3.6, you'd be missing out on the fixes ... http://www.postgresql.org/docs/current/static/release-9-3-7.html http://www.postgresql.org/docs/current/static/release-9-3-8.html http://www.postgresql.org/docs/current/static/release-9-3-9.html -- john r pierce, recycling bits in santa cruz -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] postgres 9.3
On 9/21/2015 11:52 AM, Melvin Davidson wrote: You cannot directly "connect" one schema with another, but you can set the default to have both in the search_path. Uncomment the #search_path parameter in Postgresql.conf and change it to search_path = 'preview, role,public,"$user",public' the above would change it globally for all databases and users on the system, which is, IMHO, probably not what you want to do. better would be to... ALTER ROLE username SET SEARCH_PATH='preview,"$user", public'; or ALTER DATABASE dbname SET...; and then this change just applies to that named role or database... to the OP, a simple SET only applies to the current session; it's not remembered. ALTER ROLE (or ALTER DATABASE) will remember the setting for the specified user or database. but if, as you said, it's the "preview" role that you want to access the "preview" schema, then the default search path would suffice, as $USER == preview. -- john r pierce, recycling bits in santa cruz -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] pgsql-95 repo in rsync
On 9/21/2015 5:46 AM, Kjetil Nygård wrote: 2. yum repo for Fedora 23 Is it possible to have fedora 23-repo for PostgreSQL? Especially 9.4 and 9.5. I thought F23 starts beta next week? it appears 9.4.4 is included in the F23 base repository. postgres 9.5 is itself still in beta. -- john r pierce, recycling bits in santa cruz -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Convert number to string
On 9/24/2015 1:09 AM, Hengky Liwandouw wrote: Hi, I don’t have any idea how I can use a postgres function to convert a series of numbers (currency) to my own string. My string : F G H I J K L M N as the replacement for number : 1 2 3 4 5 6 7 8 9 Dozens = O Hundreds = P Thousands = C Ten thousands = B So… 200 = GP 2000 = GC 1150 = FFJO 3 = HB Any idea? going from FFJO -> 1150 is simple character substitution (assuming 'dozens' actually means tens, and not its traditional meaning of 12s). going the other way, that's probably a good excuse for a plperl function. -- john r pierce, recycling bits in santa cruz
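the plain digit-for-letter half of that can be done with translate(), e.g. (this ignores the O/P/C/B place markers entirely):
    SELECT translate('315', '123456789', 'FGHIJKLMN');   -- gives HFJ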
[GENERAL] 9.4 and centos/yum multiversion installs
I have a CentOS 6 server that's already running 9.0, ... 9.3 for development. I added 9.4 to it by ... yum install http://yum.postgresql.org/9.4/redhat/rhel-6-x86_64/pgdg-centos94-9.4-1.noarch.rpm yum install postgresql94{,-devel,-server,-contrib} I created /etc/sysconfig/pgsql/postgresql-9.4 with... PGENGINE=/usr/pgsql-${PGMAJORVERSION}/bin PGPORT=5436 PGDATA=/var/lib/pgsql/${PGMAJORVERSION}/data PGLOG=/var/lib/pgsql/${PGMAJORVERSION}/pgstartup.log (same as all the others except a unique port) and ran initdb via... service postgresql-9.4 initdb all good so far. but when I ... service postgresql-9.4 start it fails, leaving the following in pgstartup.log... < 2015-09-29 09:38:15.270 PDT >LOG: could not bind IPv6 socket: Address already in use < 2015-09-29 09:38:15.270 PDT >HINT: Is another postmaster already running on port 5432? If not, wait a few seconds and retry. < 2015-09-29 09:38:15.270 PDT >LOG: could not bind IPv4 socket: Address already in use < 2015-09-29 09:38:15.270 PDT >HINT: Is another postmaster already running on port 5432? If not, wait a few seconds and retry. < 2015-09-29 09:38:15.270 PDT >WARNING: could not create listen socket for "localhost" < 2015-09-29 09:38:15.270 PDT >FATAL: could not create any TCP/IP sockets does the 9.4 yum install no longer support using the sysconfig settings fragments? I see what looks like the correct shell code for it in /etc/rc.d/init.d/postgresql-9.4 ... # Override defaults from /etc/sysconfig/pgsql if file is present [ -f /etc/sysconfig/pgsql/${NAME} ] && . /etc/sysconfig/pgsql/${NAME} agh, a comment in /etc/rc.d/init.d/postgresql-9.4 would suggest so... # Version 9.4.0 Devrim Gunduz # Remove PGPORT variable so... how do you run multiple versions?? -- john r pierce, recycling bits in santa cruz -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] 9.4 and centos/yum multiversion installs
On 9/29/2015 10:09 AM, John R Pierce wrote: does the 9.4 yum install no longer support using the sysconfig settings fragments? I see what looks like the correct shell code for it in /etc/rc.d/init.d/postgresql-9.4 ... # Override defaults from /etc/sysconfig/pgsql if file is present [ -f /etc/sysconfig/pgsql/${NAME} ] && . /etc/sysconfig/pgsql/${NAME} agh, a comment in /etc/rc.d/init.d/postgresql-9.4 would suggest so... # Version 9.4.0 Devrim Gunduz # Remove PGPORT variable so... how do you run multiple versions?? and apparently the answer is, you set the port in postgresql.conf rather than via the shell fragment. ugh. that means you can't tell the port from ps... $ ps auxww |grep postmaster postgres 3291 0.0 0.0 1288244 57876 ? S Aug30 13:31 /usr/pgsql-9.1/bin/postmaster -p 5433 -D /var/lib/pgsql/9.1/data postgres 0.0 0.0 2352980 76048 ? S Aug30 21:28 /usr/pgsql-9.2/bin/postmaster -p 5432 -D /var/lib/pgsql/9.2/data postgres 3442 0.0 0.0 2352848 76060 ? S Aug30 10:57 /usr/pgsql-9.3/bin/postmaster -p 5435 -D /var/lib/pgsql/9.3/data postgres 44139 0.0 0.0 324628 15084 ? S 10:12 0:00 /usr/pgsql-9.4/bin/postmaster -D /var/lib/pgsql/9.4/data -- john r pierce, recycling bits in santa cruz
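i.e., in the new cluster's postgresql.conf, something like:
    # /var/lib/pgsql/9.4/data/postgresql.conf
    port = 5436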
Re: [GENERAL] Postgresql 9.4 and ZFS?
On 9/29/2015 10:01 AM, Benjamin Smith wrote: Does anybody here have any recommendations for using PostgreSQL 9.4 (latest) with ZFS? For databases, I've always used mirrored pools, not raidz*. put pgdata in its own zfs file system in your zpool. on that dedicated zfs, set the blocksize to 8k. -- john r pierce, recycling bits in santa cruz -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Postgresql 9.4 and ZFS?
On 9/29/2015 1:08 PM, Benjamin Smith wrote: put pgdata in its own zfs file system in your zpool. on that dedicated zfs, set the blocksize to 8k. Based on my reading here, that would be -o ashift=13? HowDoesZFSonLinuxHandleAdvacedFormatDrives EG: 2^13 = 8192 sorry, I meant recordsize. zfs set recordsize=8192 zpool/pgdata -- john r pierce, recycling bits in santa cruz
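putting the two suggestions together, it would look something like this (the pool/dataset names are just examples):
    zfs create tank/pgdata
    zfs set recordsize=8k tank/pgdata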
Re: [GENERAL] postgresql doesn't start
On 10/2/2015 2:02 PM, Paolo De Michele wrote: I have postgresql 9.3 installed on docker docker is a packaging system for linux containers; it's not an operating system per se. what OS is your docker container running on? supervisord is a process/daemon manager; it's not part of 'postgresql', although apparently your docker container is using it to run postgresql. your questions all seem related to the specific packaging of this particular docker container; you might see who created this container configuration and ask them these questions, as there's a lot going on there abstracting things. -- john r pierce, recycling bits in santa cruz -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] postgresql doesn't start
On 10/2/2015 2:02 PM, Paolo De Michele wrote: exec su postgres -c "/usr/lib/postgresql/9.3/bin/postgres -D /var/lib/postgresql/9.3/main -c config_file=/etc/postgresql/9.3/main/postgresql.conf" until yesterday there was no problem; right now I see this in the /var/log/supervisor directory: 2015-10-01 21:40:18 UTC HINT: The file seems accidentally left over, but it could not be removed. Please remove the file by hand and try again. 2015-10-01 21:40:20 UTC FATAL: could not remove old lock file "postmaster.pid": Permission denied try... ls -la /var/lib/postgresql/9.3/main ... the directory itself (".") should be owned by the postgres user, and it should have 700, 750, or 770 permissions. all the files in it should also be owned by postgres. also look and see if postgres logged anything in its own system log files (/var/log/postgresql/9.3 or whatever). -- john r pierce, recycling bits in santa cruz -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
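if the ownership or permissions turn out to be wrong, the usual fix is something like this as root (path taken from your startup command above):
    chown -R postgres:postgres /var/lib/postgresql/9.3/main
    chmod 700 /var/lib/postgresql/9.3/main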
Re: [GENERAL] Recording exceptions within function (autonomous transactions?)
On 10/6/2015 2:38 AM, Steve Pritchard wrote: I am porting several stored procedures from Oracle to Postgres. In the Oracle code, if an exception is thrown within a stored procedure, the exception is caught and details are written to a database table using an autonomous transaction (as the main transaction is rolled back). we were faced with a similar task a few years ago, a massive complex system with 100s of stored procedures written in Oracle's PL/SQL. We opted to completely rebase the business logic software in a conventional high-level language (Java), and only used pl/pgsql functions when they had significant performance enhancements. This Java version was designed to work with either Oracle or Postgres, and in fact performs BETTER than the original version, and is easier to maintain. I know that's probably not what you wanted to hear, but I'm just throwing it out there. -- john r pierce, recycling bits in santa cruz
Re: [GENERAL] Best practices for aggregate table design
On 10/6/2015 11:24 AM, droberts wrote: OR a dimension 'type' with values outbound/inbound and a single measure column 'total'? that smells a bit too much like an "EAV" (entity-attribute-value), which is considered an antipattern in relational circles. -- john r pierce, recycling bits in santa cruz -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
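to illustrate the difference (table and column names are purely made up):
    -- the attribute/value shape described above
    CREATE TABLE call_stats_tall (day date, direction text, total bigint);
    -- vs. one column per measure
    CREATE TABLE call_stats_wide (day date, outbound_total bigint, inbound_total bigint);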
Re: [GENERAL] backup.old
On 10/6/2015 8:28 PM, Steve Pribyl wrote: What do I need to do to recover a database server that has a backup.old file in the data_directory. I have see references to a database being stopped during a backup and/or a second backup running and moving the old file aside, but it was not clear to me what needs to be done do recover. postgresql-9.3 9.3.0-2.pgdg12.4+1 what method or software were you using to perform this backup? -- john r pierce, recycling bits in santa cruz -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Best practices for aggregate table design
On 10/9/2015 6:07 PM, Roxanne Reid-Bennett wrote: On 10/9/2015 3:16 PM, droberts wrote: Thanks for everyone's help. Can anyone recommend a good book on database modeling, around these 'cube' or aggregate concepts in particular? I'm using Postgres but shouldn't matter too much I assume. Given the shift towards NoSQL for BI, and the age of the consultants, I don't know how much of a driver of design patterns the Kimball Group still is in the BI/Data warehouse space - but their blogs and books have been good fodder for thought for the modeling I've done. actually, there've been numerous cases where NoSQL deployments never worked right, and people reverted to SQL for reporting. -- john r pierce, recycling bits in santa cruz -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Trouble setting up replication
On 10/10/2015 12:02 PM, Chuck Martin wrote: I only find one version of pg_basebackup on the replicant server, but CentOS 7 comes with PG 9.2, so I suspect that pg_basebackup is left over from that installation. But I’m not sure how to update that. yum update pg_basebackup did not work. try... /usr/pgsql-9.3/bin/pg_basebackup -V if you don't have that path, you'll need to install postgresql-9.3 on it, using the PGDG yum repository at http://yum.postgresql.org... # yum install http://yum.postgresql.org/9.3/redhat/rhel-7-x86_64/pgdg-centos93-9.3-1.noarch.rpm # yum install postgresql93 ... (note, the last command just installs the runtime libraries and utilities; it does NOT install the postgresql 9.3 database server, which is the postgresql93-server package) -- john r pierce, recycling bits in santa cruz -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] ID column naming convention
On 10/13/2015 3:27 PM, droberts wrote: Thanks. My only question is how do you create a schema diagram (ERD) then? The tool won't know what the relationships are unless maybe you put foreign key constraints on. BTW does anyone recommend a tool to do that? I've been playing with DbVisualizer. I don't know of any ERD tool that will recognize foreign key references without FK constraint definitions, regardless of what things are named. -- john r pierce, recycling bits in santa cruz -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
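in other words, if you want the ERD tool to draw the relationship, declare it, e.g. (table and column names are made up):
    ALTER TABLE orders
        ADD CONSTRAINT orders_customer_id_fkey
        FOREIGN KEY (customer_id) REFERENCES customers (id);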
Re: [GENERAL] Not storing MD5 hashed passwords
On 10/14/2015 1:31 PM, Quiroga, Damian wrote: Does postgres support other (stronger) hashing algorithms than MD5 to store the database passwords on disk? If not, is there any plan to move away from MD5? if you can read the password database, you already have superuser access to the full database, so what threat does a stronger hash address? if you need stronger security, don't use passwords; use ssl certificates, or LDAP, or something. -- john r pierce, recycling bits in santa cruz
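for example, pg_hba.conf lines for those alternatives might look like this (the server name, DN parts, and address ranges are made up):
    hostssl    all    all    0.0.0.0/0     cert
    host       all    all    10.0.0.0/8    ldap ldapserver=ldap.example.com ldapprefix="uid=" ldapsuffix=",ou=people,dc=example,dc=com"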