Hi Richard,
This means a file-system backup, e.g.:
tar -cvpf data_backup.tar /var/lib/pgsql/data
Here's a script I use to automate this process. You may find it helpful to
customize it for your own setup.
#!/bin/bash
#
# PostgreSQL Weekly Backup
#
DATE=$(date +%Y%m%d)   # %Y (calendar year) rather than %G (ISO week-numbering year), which can differ near year boundaries
MAILLOG="/backup/weekly_$DATE.log"
WALAR
Someone please please help me :-(
Steve
Steve wrote:
Hi,
I've been running postgres on my server for over a year now and the
tables have become huge. I have 3 tables that have data over 10GB each
and these tables are read very very frequently. In fact, heavy searches
on these table
king the files (WAL files only?) across.
Can anyone please tell me what to do here and how to harness the power
of the three SCSI drives that I have. Which files in the data directory
need to be moved? Is this safe? Can backups etc be easily done? Any
information will be greatly
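PostgreSQL's standard mechanism for spreading large tables across additional drives is tablespaces; a minimal sketch, with a hypothetical mount point and table name:
CREATE TABLESPACE fastdisk LOCATION '/mnt/scsi2/pgdata';
ALTER TABLE big_table SET TABLESPACE fastdisk;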
there is activity on the database. I do/can get a catalog_xmin value.
Should I expect the 'xmin' value to be null? Is there another way to monitor
the replication latency when using BDR?
Thanks,
Steve Boyle
4 the compat-postgresql-libs,
which offered libpq.so.4x, has disappeared from the repository causing
dependency issues with packages that expect libpq.so.4.
Any recommendations on which of many approaches might be best to address
this?
Cheers,
Steve
Craig,
Your response was very helpful, thank you.
I was looking at some of the standard bits in Postgres like
txid_current_snapshot() and txid_snapshot_xmin(). Can the results from
txid_snapshot_xmin be used with pg_get_transaction_committime() to get the
latency?
Thanks again,
Steve Boyle
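For reference, the standard functions mentioned above can be exercised on their own; pg_get_transaction_committime() is BDR-specific and its exact signature is not shown here, so it is left out of this sketch:
SELECT txid_current_snapshot();                      -- xmin:xmax:running-xid-list
SELECT txid_snapshot_xmin(txid_current_snapshot());  -- oldest transaction id still in progress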
onfigure tzdata
Did you reload/restart PostgreSQL after making the change?
What is the setting of timezone in postgresql.conf?
What is set as the timezone on the *client* side? (That's really more
relevant for timestamp_tz data.)
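Quick session-level checks that correspond to these questions (the SET value is just an example):
SHOW timezone;                          -- effective setting for the current session
SET timezone = 'UTC';                   -- hypothetical client-side override
SELECT now(), now() AT TIME ZONE 'UTC';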
Cheers,
Steve
,text,text,integer,text[]) line 3 at
PERFORM
I can see that the schema exists:
postgres=# \dn
List of schemas
  Name  |  Owner
--------+----------
 bdr    | postgres
 public | postgres
Any pointers on what I am missing are appreciated.
Thanks,
Steve Boyle
eded for your application:
extract(epoch from now())*10
Cheers,
Steve
On 04/02/2015 10:34 AM, David G. Johnston wrote:
On Thu, Apr 2, 2015 at 10:27 AM, James Cloos <cl...@jhcloos.com> wrote:
>>>>> "SC" == Steve Crawford <scrawf...@pinpointresearch.com> writes:
...
What I haven't dete
real disk subsystem then just access it from wherever on the LAN you
need to? Postgresql is a client-server database, and you can access it over the
network as easily as you can from the machine it's running on.
Cheers,
Steve
[1] Almost certainly, unless you make absolutely sure post
data
transformation.
If you're looking to use C++ then Qt is an excellent framework for a GUI app -
one that you won't beat for cross-platform work - but you might consider
whether using libpqxx or libpq to connect to the database might suit your needs
better.
Cheers,
Steve
appears that a temporary table and temporary
index were being created within the same second that the query was run.
Any advice?
Cheers,
Steve
m. Fix the query and don't worry?
Cheers,
Steve
aightforward:
>
> INSERT INTO schema_b.my_table
> SELECT * FROM schema_a.my_table WHERE ...
Would ALTER TABLE ... SET SCHEMA do what you need? A schema is mostly just a
name space, so there's no need to create new tables or copy data around.
Cheers,
Steve
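A minimal sketch of that suggestion, assuming both schemas already exist:
ALTER TABLE schema_a.my_table SET SCHEMA schema_b;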
>
> What I am t
We have a table:
sjr_local1db=# \d cons_refs
Table "public.cons_refs"
  Column  |  Type  | Modifiers
----------+--------+-----------
 code     | bigint | not null
 ...
 ref_type
System catalogs should help, unless I misunderstand your question:
http://www.postgresql.org/docs/9.4/static/catalogs.html
http://www.postgresql.org/docs/9.4/static/view-pg-tables.html
http://stackoverflow.com/questions/20698169/how-to-use-postgres-pg-tables-to-compare-contraints
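One way to pull constraint definitions from the catalogs; the schema name here is an assumption:
SELECT conrelid::regclass AS table_name,
       conname,
       pg_get_constraintdef(oid) AS definition
FROM pg_constraint
WHERE connamespace = (SELECT oid FROM pg_namespace WHERE nspname = 'public');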
On Tue, May 5
Hello, recently one of my tables needed a vacuum (to prevent wraparound)
and of course it happened at a really bad time, so since then I've been
learning about how Transaction ID Wraparound works and its associated
parameters.
I'm trying this query to see how close my tables are to hitting the
vac
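The query in the message is cut off above; a commonly used check of this kind looks roughly like the following (the threshold comparison is an assumption):
SELECT relname,
       age(relfrozenxid) AS xid_age,
       current_setting('autovacuum_freeze_max_age')::bigint - age(relfrozenxid)
         AS xids_until_forced_vacuum
FROM pg_class
WHERE relkind = 'r'
ORDER BY age(relfrozenxid) DESC
LIMIT 10;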
I just use sql tables. But if I represent them outside of pg I usually use
migration files which are part of ruby on rails. But sqitch looks good too.
Steve
On May 6, 2015 7:31 AM, "Suresh Raja" wrote:
> Thanks Melvin, Joshua, PT and Steve!
>
> your information is good. Im l
On Wed, May 6, 2015 at 9:46 AM Jeff Janes wrote:
> vacuum_freeze_table_age controls when it promotes a vacuum *which is
> already going to occur* so that it scans the whole table. It doesn't
> specially schedule a vacuum to occur. When those tables see enough
> activity to pass autovacuum_vacuu
scr 1 | b | descr 1
a | descr 1 | c | descr 4
c | descr 4 | d | descr 3
d | descr 3 | b | descr 1
Thanks,
Steve
On 05/11/2015 07:16 AM, Oliver Elphick wrote:
On Mon, 2015-05-11 at 06:46 -0400, Steve Clark wrote:
Hi List,
I am having trouble trying to figure out
how to get the result listed at the bottom.
I have 3 tables units, types of units which has a description of the units,
and a table that list
On Wed, May 6, 2015 at 7:24 PM Jeff Janes wrote:
> I've booked-marked these but haven't really looked into them to any
> extent. It would be awesome if you put the SQL one somewhere on
> http://wiki.postgresql.org. That way it is easier to find, and anyone
> who finds it can contribute explanat
I would update the IDs using SQL before dumping if possible. If necessary
clone the tables, adjust the IDs and then dump. SQL has better tools than
most languages/scripts to adjust values in columns across multiple
entities. Plus it should be easier to build some test queries in SQL to
validate tha
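A rough sketch of that clone-adjust-dump idea; the table, column, and offset are hypothetical:
CREATE TABLE orders_staging AS SELECT * FROM orders;  -- clone the table
UPDATE orders_staging SET id = id + 100000;           -- adjust the IDs in SQL
-- run validation queries against orders_staging, then dump it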
tips but if you tell us more about the problem
you are attempting to solve you may get some better advice and/or
warnings about pitfalls.
-Steve
s of converting the timestamp without time zone to
UTC, converting that to the desired local zone, doing the calculations,
converting back to UTC and back to timestamp without time zone, all the
while potentially adding an easy error such as doing things in the wrong
order and checking for DST changeover in the wrong time-zone.
Cheers,
Steve
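Written out, the round trip being described looks like this; the column, zone, and interval are hypothetical:
SELECT ((((ts_col AT TIME ZONE 'UTC')            -- treat the naive value as UTC
          AT TIME ZONE 'America/Los_Angeles')    -- view it as local wall-clock time
          + interval '1 day')                    -- do the calculation in local time
          AT TIME ZONE 'America/Los_Angeles')    -- back to timestamptz
          AT TIME ZONE 'UTC' AS result_utc       -- back to timestamp without time zone
FROM some_table;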
Hello, I'd like to postpone an "autovacuum: VACUUM public.mytable (to
prevent wraparound)" and handle it manually at another time. I thought I
could set these storage parameters on the large table in question
("mytable") like this:
ALTER TABLE mytable SET (
autovacuum_freeze_min_age=1000,
On Tue, May 26, 2015 at 7:27 PM Alvaro Herrera
wrote:
> See the docs about the freeze max age storage parameter -- the per-table
> setting can decrease the global setting but not increase it.
Thanks Alvaro, that explains it. I found it in the docs: "Note that
autovacuum will ignore attempts to
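In other words, a per-table setting only takes effect if it is lower than the global autovacuum_freeze_max_age; a sketch with an assumed value:
ALTER TABLE mytable SET (autovacuum_freeze_max_age = 150000000);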
I have a database that was upgraded from 9.4.1 to 9.4.2 (no pg_upgrade, we
just dropped new binaries in place) but it wouldn't start up. I found this
in the logs:
waiting for server to start2015-05-27 13:13:00 PDT [27341]: [1-1] LOG:
database system was shut down at 2015-05-27 13:12:55 PDT
20
On Wed, May 27, 2015 at 3:21 PM Alvaro Herrera
wrote:
> I think a patch like this should be able to fix it ... not tested yet.
>
Thanks Alvaro. I got a compile error, so looked for other uses of
SimpleLruDoesPhysicalPageExist and added MultiXactOffsetCtl, does this look
right?
+ (!InRecovery |
On Fri, May 29, 2015 at 12:08 PM Robert Haas wrote:
> OK, here's a patch.
>
I grabbed branch REL9_4_STABLE from git, and Robert got me a 9.4-specific
patch. I rebuilt, installed, and postgres started up successfully! I did a
bunch of checks, had our app run several thousand SQL queries against
Good Afternoon,
Built a fresh 9.3 Postgres server and added some users and noticed that any
user can create tables in any database including the postgres database by
default.
Have I missed some step in securing the default install?
Steve Pribyl
.
Steve Pribyl
From: pgsql-general-ow...@postgresql.org
on behalf of Joshua D. Drake
Sent: Tuesday, June 2, 2015 12:44 PM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] postgres db permissions
On 06/02/2015 10:36 AM, Steve Pribyl wrote:
>
>
ture.
Is this what is allowing write access to the database?
Steve Pribyl
Sr. Systems Engineer
steve.pri...@akunacapital.com
Desk: 312-994-4646
From: Melvin Davidson
Sent: Tuesday, June 2, 2015 12:55 PM
To: Steve Priby
They all look like this.
CREATE ROLE dbA
NOSUPERUSER INHERIT NOCREATEDB NOCREATEROLE NOREPLICATION;
Steve Pribyl
From: Adrian Klaver
Sent: Tuesday, June 2, 2015 1:06 PM
To: Steve Pribyl; Joshua D. Drake; pgsql-general@postgresql.org
Subject: Re
ne)/Reinst-required (Status,Err: uppercase=bad)
||/ Name            Version        Architecture Description
+++-===============-==============-============-=================================
ii  postgresql-9.4  9.4.0-1.pgdg   amd64        object-relational SQL database, v
St
N postgres;
GRANT ALL ON SCHEMA public TO postgres;
COMMENT ON SCHEMA public
Bad and happens to be the default:
CREATE SCHEMA public
AUTHORIZATION postgres;
GRANT ALL ON SCHEMA public TO postgres;
GRANT ALL ON SCHEMA public TO public;
COMMENT ON SCHEMA public
St
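The usual tightening of that default, run in each affected database; the second statement is one common additional step:
REVOKE CREATE ON SCHEMA public FROM PUBLIC;
REVOKE CONNECT ON DATABASE postgres FROM PUBLIC;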
Rows Removed by Filter: 22
Total runtime: 19.769 ms
A couple initial questions:
1. Does the result change if you analyze the table and rerun the query?
2. Are there any non-default settings for statistics collection on your
database?
-Steve
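What question 1 amounts to in practice; the table and predicate are hypothetical:
ANALYZE my_table;
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM my_table WHERE some_col = 42;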
On Fri, Jun 5, 2015 at 11:47 AM Andres Freund wrote:
> But I'd definitely like some
> independent testing for it, and I'm not sure if that's doable in time
> for the wrap.
>
I'd be happy to test on my database that was broken, for however much that
helps. It's a VM so I can easily revert back as
'statistics';)
Cheers,
Steve
P.S. The convention on the PostgreSQL mailing lists is to bottom-post,
not top-post replies.
or name ~ 'statistics';)
On 06/05/2015 12:28 PM, Steve Crawford wrote:
On 06/05/2015 11:38 AM, Sheena, Prabhjot wrote:
When I run vacuum analyze it fixes the problem but after 1 or 2 days
the problem comes back
Is autovacuum running and using what settings?
(select name, setting from pg_settings where name
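Spelled out, the suggested check looks roughly like this; the first pattern is an assumption, since the original line is cut off:
SELECT name, setting
FROM pg_settings
WHERE name ~ 'autovacuum' OR name ~ 'statistics';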
Just wanted to report that I rolled back my VM to where it was with 9.4.2
installed and it wouldn't start. I installed 9.4.4 and now it starts up
just fine:
> 2015-06-12 16:05:58 PDT [6453]: [1-1] LOG: database system was shut down
at 2015-05-27 13:12:55 PDT
> 2015-06-12 16:05:58 PDT [6453]: [2-1
sible, and store the Pg passwords in an ansible vault, which is
encrypted. Ansible asks for the decrypt key when it runs.
Steve
On Wed, Jul 8, 2015 at 11:34 AM, Suresh Raja
wrote:
> Hi:
>
> I cannot use .pgpass as the password stored here is not encrypted.
>
> can i use a encrypted
:
CREATE OR REPLACE FUNCTION is_valid_date(text)
RETURNS bool AS
'
begin
return case when $1::date is null then false else true end;
exception when others then
return false;
end;
'
LANGUAGE 'plpgsql' VOLATILE;
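Example usage, with hypothetical input values:
SELECT is_valid_date('2015-02-28');  -- true
SELECT is_valid_date('2015-02-30');  -- false: the cast raises an error and the handler returns false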
Cheers,
Steve
You might check the stunnel settings. A quick search of "stunnel 12-hours"
indicates that this is the stunnel default for idle connections.
Cheers,
Steve
On Fri, Aug 21, 2015 at 11:16 AM, Rich Schaaf wrote:
> I’m running into a problem where the connection between application i
This is Postgres 9.4.4. I am troubleshooting some occasional (every 3-4
hours) slowness with UPDATEs on a table that has a GIN index on a JSONB
column. During these episodes, UPDATEs that normally take < 1sec take
upwards of 2-4 minutes, and all finish simultaneously, like they were all
blocked on
On Fri, Aug 28, 2015 at 10:11 AM Tom Lane wrote:
> Steve Kehlet writes:
> > This is Postgres 9.4.4. I am troubleshooting some occasional (every 3-4
> > hours) slowness with UPDATEs on a table that has a GIN index on a JSONB
> > column. During these episodes, UPDATEs that
On Fri, Aug 28, 2015 at 10:42 AM Jeff Janes wrote:
> Or what I usually do in a case like this is clone the database to a
>>> test/QA server then run pg_upgrade to get that running on 9.5, then hope
>>> what I learn transfers back to production.
>>
>>
I'll save this great idea.
> But the symptom
On Fri, Aug 28, 2015 at 12:10 PM Jeff Janes wrote:
> Did you change the system-wide autovacuum_analyze_scale_factor? If so,
> don't do that. You can use a table's storage parameters to set a custom
> autovacuum_analyze_scale_factor just for individual tables. So just the
> table with the troub
On Fri, Aug 28, 2015 at 10:27 AM Tom Lane wrote:
> Steve Kehlet writes:
> > Yeah head scratch. That is really weird. Still gathering data, any way I
> > can see for sure when these cleanup cycles are occurring?
>
> contrib/pgstattuple has a function that can report th
On Fri, Aug 28, 2015 at 11:18 AM Tom Lane wrote:
> It looks like a VACUUM will do the cleanup during the first ginbulkdelete
> call, so you could probably handle this by running a manual "VACUUM
> VERBOSE" with the smallest possible maintenance_work_mem, and canceling it
> as soon as you see some
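Concretely, that suggestion could look like the following; the table name is hypothetical:
SET maintenance_work_mem = '1MB';  -- smallest allowed value
VACUUM VERBOSE my_jsonb_table;     -- cancel once the pending-list cleanup has been reported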
On Fri, Aug 28, 2015 at 1:23 PM Jeff Janes wrote:
> You should RESET the autovacuum_vacuum_scale_factor for the table. You
> don't want it to be vacuumed aggressively, just autoanalyzed aggressively.
> Sorry if my copy-paste error led you astray on that.
>
No problem, done, thank you.
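As SQL, with a hypothetical table name and an assumed analyze threshold:
ALTER TABLE my_jsonb_table RESET (autovacuum_vacuum_scale_factor);
ALTER TABLE my_jsonb_table SET (autovacuum_analyze_scale_factor = 0.01);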
There is
PostgreSQL as a
service?
Cheers,
Steve
adjusting our autovacuum settings and looking into
improving that blocked/blocking query.
On Fri, Aug 28, 2015 at 1:44 PM Steve Kehlet wrote:
> On Fri, Aug 28, 2015 at 1:23 PM Jeff Janes wrote:
>
>> You should RESET the autovacuum_vacuum_scale_factor for the table. You
>> d
Any null values in first name??
-Steve
On Thu, Sep 10, 2015 at 12:35 PM, Ken Tanzer wrote:
> Hi. In a table that includes these columns:
>
> my_db=> \d tbl_client
> ...
> name_last | character varying(40) | not null
> name_first
    sent_for_processing
    and pg_try_advisory_xact_lock(12345, the_id)
  order by the_priority
  limit 1 for update)
update queuetest
   set sent_for_processing = true
 where the_id = (select the_id from next_up)
returning the_work_to_do;
Cheers,
Steve
appropriately.
I'm really looking for any things like planner ordering nuances that would
make the query operate in unexpected ways.
Cheers,
Steve
On Tue, Sep 22, 2015 at 9:26 AM, Jeff Janes wrote:
> On Mon, Sep 21, 2015 at 3:51 PM, Steve Crawford <
> scrawf...@pinpointresearch.c
h(t.*::text)) FROM observation t;
-- 287 bytes
-- Number of rows * average size of row:
select pg_size_pretty(count(obs_id) * 287) from observation;
-- 9.4 GB
If the live tuples take 9.4 GB and the dead ones 1.2 GB, that's 10.6 GB total.
What accounts for the remaining 9.4GB? (20GB - 10.6GB)
Steve Pritchard
British Trust for Ornithology
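One way to see where the remaining space lives (heap vs. indexes vs. TOAST and other overhead):
SELECT pg_size_pretty(pg_relation_size('observation'))        AS heap,
       pg_size_pretty(pg_indexes_size('observation'))         AS indexes,
       pg_size_pretty(pg_total_relation_size('observation')
                      - pg_relation_size('observation')
                      - pg_indexes_size('observation'))       AS toast_and_other,
       pg_size_pretty(pg_total_relation_size('observation'))  AS total;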
l of routine vacuuming is to do standard VACUUMs often enough
to avoid needing VACUUM FULL. The autovacuum daemon attempts to work this
way, and in fact will never issue VACUUM FULL."
http://www.postgresql.org/docs/9.4/static/routine-vacuuming.html#VACUUM-FOR-SPACE-RECOVERY
I'll do a VACUU
as a PDF).
All working out well in testing at the moment.
Steve Pritchard
British Trust for Ornithology
On 23 September 2015 at 17:36, Israel Brewster
wrote:
>
> With my application servers, I have a system set up using corosync and
> pacemaker that allows for seamless fail-over bet
postgresql-common 154 all
PostgreSQL database-cluster manager
TIA,
Steve
____
Steve Pribyl | Senior Systems Engineer
Akuna Capital LLC
36 S Wabash, Suite 310 Chicago IL 606
Good Morning,
Has anyone had a moment to look at this?
It is a bit of a show stopper.
Thanks Steve
From: pgsql-general-ow...@postgresql.org
on behalf of Steve Pribyl
Sent: Thursday, October 1, 2015 12:27 PM
To: pgsql-general@postgresql.org
Subject
omething that would meet my requirements above?
Steve Pritchard
British Trust for Ornithology, UK
That was it thanks.
Steve Pribyl
Sr. Systems Engineer
steve.pri...@akunacapital.com
Desk: 312-994-4646
From: Craig Ringer
Sent: Tuesday, October 6, 2015 12:35 AM
To: Steve Pribyl
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] BDR Rejoin of
recover.
postgresql-9.3 9.3.0-2.pgdg12.4+1
TIA
Steve
Steve Pribyl | Senior Systems Engineer
Akuna Capital LLC
36 S Wabash, Suite 310 Chicago IL 60603 USA | www.akunacapital.com
ackup
BACKUP FROM: master
START TIME: 2015-05-30 01:39:24 CDT
LABEL: base-backup
Steve Pribyl
Sr. Systems Engineer
steve.pri...@akunacapital.com
Desk: 312-994-4646
From: pgsql-general-ow...@postgresql.org
on behalf of John R Pierce
Sent: Tuesday, Octo
?
Thanks
Steve Pribyl
From: pgsql-general-ow...@postgresql.org
on behalf of Steve Pribyl
Sent: Tuesday, October 6, 2015 10:47 PM
To: John R Pierce; pgsql-general@postgresql.org
Subject: Re: [GENERAL] backup.old
Honestly I don't know. It was e
Great, dur(rtfm), so is it safe to delete the backup.old, if the db is not in
backup mode.
Steve Pribyl
Sr. Systems Engineer
steve.pri...@akunacapital.com
Desk: 312-994-4646
From: David G. Johnston
Sent: Wed
Thank you very much. I read someplace that if you run pg_start_backup twice, the
backup.old will be created, but there was not much beyond that and now I can't
seem to find the reference.
Steve Pribyl
Sr. Systems Engineer
steve.pri...@akunacapital.com
9:04.395 CDT,,,12693,,561bb1ae.3195,21,,2015-10-12 08:12:14
CDT,,0,LOG,0,"starting background worker process ""bdr
(6204748238611542317,1,16494,)->bdr (6204748255428234532,1,""",""
Steve Pribyl
Thanks
he
change callback, associated LSN 2/FD250E48""bdr
(6204748238611542317,1,16494,):receive"
Steve Pribyl
Sr. Systems Engineer
steve.pri...@akunacapital.com
Desk: 312-994-4646
From: Andres Freund
Sent: Monday, October 12, 2015 11:08 AM
To: Steve Prib
bdr.
Glad to have a workaround, but I would like to understand the failure.
Steve Pribyl
From: Steve Pribyl
Sent: Monday, October 12, 2015 11:19 AM
To: Andres Freund
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] BDR workers exiting?
Yup, th
want to find the max(id) whose sts is 0 but whose id is not referenced by
ref_id.
so the answer would be id=3.
Thanks for any pointers,
Steve
ef_id,'')='' if
it can be null or empty string.
Cheers,
Steve
On Thu, Jun 2, 2016 at 10:16 AM, Steve Clark
wrote:
> Hi List,
>
> I am a noob trying to do something that seems like it should be easy but I
> can't figure it out.
>
> I have a tab
On Thu, Jun 2, 2016 at 10:40 AM, Felipe Santos wrote:
>
>
> 2016-06-02 14:23 GMT-03:00 Steve Crawford
> :
>
>> Something like:
>>
>> select max(id) from yourtable where sts=0 and ref_id is null;
>>
>> That assumes that ref_id is null. It would help
Thanks all, the below seem to do the trick.
On 06/02/2016 01:58 PM, David G. Johnston wrote:
select max(id) from yourtable where sts=0 and id not in (select ref_id from
yourtable);
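One caveat: NOT IN returns no rows at all if ref_id can be NULL, which is why coalescing was suggested earlier in the thread; a NULL-safe equivalent:
SELECT max(t.id)
FROM yourtable t
WHERE t.sts = 0
  AND NOT EXISTS (SELECT 1 FROM yourtable r WHERE r.ref_id = t.id);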
--
Stephen Clark
) from sometable where sts=0
-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Steve Clark
Sent: Thursday, June 2, 2016 9:56 AM
To: pgsql
Subject: [GENERAL] dumb question
Hi List,
I am a noob trying to do something that
could easily pull PHP.
I'd go for that first one, if possible. Robust, and zero overhead in the happy
path.
> I'm looking at trying to parse/search/replace. This might well be imperfect,
> and error-prone. But if I can get something that at least works in a lot of
> cases,
sedb.com/postgres-plus-edb-blog/fred-dalrymple/postgres-meets-hipaa-cloud
http://www.slideshare.net/EnterpriseDB/achieving-hipaa-compliance-with-postgres-plus-cloud-database
Cheers,
Steve
Hi, I've been searching for a 9.2.15 version of the postgresql script for "init
script for starting up the PostgreSQL". I have managed to find older versions
than what we are currently using, 8.2.5 but haven't had any luck finding a new
version in the postgres 9.2.15 rpms. We are moving from Ce
modify the script to
get 9.2 to run but I was hoping this script had been updated for 9.x. The
current script uses --auth='ident sameuser' when calling initdb for instance
which is not supported in 9.2.
Steve
steve.langl...@tavve.com
From: Adrian Kl
Using Postgres 9.2 with CentOS7.
I ran
+ /usr/bin/initdb --pgdata=/usr/xxx/databases/pgsql/data --auth=ident
without issue however when I try to start the database it complains about the
lockfile.
FATAL: could not create lock file "/var/run/postgresql/.s.PGSQL.5432.lock":
Permission denied
My apologies but I didn't include the command I am using to start the database
/usr/bin/postmaster -p 5432 -D /usr//databases/pgsql/data
FATAL: could not create lock file "/var/run/postgresql/.s.PGSQL.5432.lock":
Permission denied
Thank you.
>Presumably, you are working with a distro-modified version of Postgres,
>because the stock sources don't use /var/run/postgresql as a socket
>directory. You will likely find that your version of libpq.so also
>expects /var/run/postgresql as the socket directory, so you won't be
>able to make non
Steve Langlois writes:
> I ran
> + /usr/bin/initdb --pgdata=/usr/xxx/databases/pgsql/data --auth=ident
> without issue however when I try to start the database it complains about the
> lockfile.
> FATAL: could not create lock file "/var/run/postgresql/.s.PGSQL.5432.lock"
you never did answer my previous question, why are you messing about with
manually starting postgres from the wrong user account, when it should be run
as a system service with systemctl ?
I am upgrading an existing system running CentOS 5.6 with Postgres 8.2.5 to
CentOS 7 with 9.2.15. The orig
On Jul 20, 2016, at 7:48 PM, Steve Langlois
mailto:steve.langl...@tavve.com>> wrote:
you never did answer my previous question, why are you messing about with
manually starting postgres from the wrong user account, when it should be run
as a system service with systemctl ?
I am upgrad
> On Jul 20, 2016, at 8:03 PM, John R Pierce wrote:
>
> On 7/20/2016 4:48 PM, Steve Langlois wrote:
>> I am upgrading an existing system running CentOS 5.6 with Postgres 8.2.5 to
>> CentOS 7 with 9.2.15. The original system modified the postgresql script to
>> man
you don't
have lots of roles with different ownership and permissions across your
database you should be fine.
Or create role(s) on your test database that match those on the production
database. This may require updating pg_hba.conf on the test database.
Cheers,
Steve
On Wed, Aug 17, 2016
ance to a master and kill off the original master instance.
As always in these instances, testing and practice are mandatory.
Cheers,
Steve
's a good resource
> for reporting slow queries:
>
> https://wiki.postgresql.org/wiki/Slow_Query_Questions
+1
Cheers,
Steve
tried adding -testing to the repo but no joy:
deb http://apt.postgresql.org/pub/repos/apt/ xenial-pgdg-testing main
Is packaging just not complete or am I missing something? (I had hoped that
getting configured for testing would be more friction-free.)
Cheers,
Steve
On Tue, Sep 13, 2016 at 11:03 AM, Steve Crawford <
scrawf...@pinpointresearch.com> wrote:
> I'm trying to install 9.6 RC1 on Ubuntu 16.04 Xenial on my laptop and it
> seems broken.
>
> Installation of 9.6 RC1 on Centos was straightforward by comparison - just
> add the 9
due to the lack of availability
of the actual server and client.
I know this is open source. I know that people work on their "itch" or what
their employer sponsors. I'm just sharing the user experience should it
provide value and increase the number of testers.
Cheers,
Steve
If the former then a regular btree
index
on the (case-folded text form of the) value, possibly using text_pattern_ops,
is the right thing.
The prefix module isn't what you want - it's for matching, e.g., an entire
phone number
against a table of possible prefixes, not a prefix
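A concrete form of the first suggestion; the table and column names are hypothetical:
CREATE INDEX phones_number_pattern_idx
    ON phones (lower(number) text_pattern_ops);
SELECT * FROM phones WHERE lower(number) LIKE '0132%';  -- anchored prefix search the index can serve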
Hello List,
I am occasionally seeing the following error:
ALERT 3 sqlcode=-400 errmsg=deadlock detected on line 3351
from an application written using ecpg when trying an update to the table.
Can autovacuum be causing this,
since no one else is updating this database table?
Thanks,
--
Stephen
On 10/28/2016 09:15 AM, Adrian Klaver wrote:
On 10/28/2016 05:28 AM, Steve Clark wrote:
Hello List,
I am occasionally seeing the following error:
ALERT 3 sqlcode=-400 errmsg=deadlock detected on line 3351
So what exactly is it doing at line 3351?
from an application written using ecpg when
On 10/28/2016 09:48 AM, Tom Lane wrote:
Steve Clark writes:
No. But I examined the pg_log/log_file and saw an error indicating it was
autovacuum:
2016-10-27 09:47:02 EDT:srm2api:12968:LOG: sending cancel to blocking
autovacuum PID 12874
2016-10-27 09:47:02 EDT:srm2api:12968:DETAIL: Process
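If it recurs, a quick way to see who is waiting on whom at that moment (this uses the pre-9.6 pg_stat_activity.waiting column):
SELECT pid, waiting, state, query
FROM pg_stat_activity
WHERE waiting;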