On Sat, Jul 30, 2016 at 8:51 AM, Stephen Frost wrote:
> * Larry Rosenman (l...@lerctr.org) wrote:
>> On 2016-07-29 15:14, Bruce Momjian wrote:
>> >On Fri, Jul 29, 2016 at 03:09:59PM -0500, Larry Rosenman wrote:
>> >>>Data Directory naming, as well as keeping the init-scripts straight.
> Are you saying that?:
>
> 1) You ran pg_basebackup against a live cluster and sent the output to
> another location.
>
> 2) At the other location the cluster is not in use.
>
> 3) You want to grab the contents of the inactive cluster directly off the
> disk.
>
> If that is the case, then no it is
On 07/29/2016 02:16 PM, Rakesh Kumar wrote:
If a cluster is backed up physically using pg_basebackup, how can we
restore only a particular schema from it? Is it even possible?
Are you saying that?:
1) You ran pg_basebackup against a live cluster and sent the output to
another location.
2) A
If a cluster is backed up physically using pg_basebackup, how can we
restore only a particular schema from it? Is it even possible?
Thanks
* Larry Rosenman (l...@lerctr.org) wrote:
> On 2016-07-29 15:14, Bruce Momjian wrote:
> >On Fri, Jul 29, 2016 at 03:09:59PM -0500, Larry Rosenman wrote:
> >>>Data Directory naming, as well as keeping the init-scripts straight.
> >>>
> >>And who gets 5432, and Unix socket naming, it starts to get me
On Fri, Jul 29, 2016 at 3:22 PM, Larry Rosenman wrote:
> I'm willing to help on the FreeBSD side.
One more tip -- if you are running multiple clusters (same version
or not) on the same machine, it is best to run each cluster under a
separate OS user. It's not *required*, but it makes a restart
On Fri, Jul 29, 2016 at 3:18 PM, Condor wrote:
> On 29-07-2016 20:33, Jerry Sievers wrote:
>> I've done several ~7TB pg_upgrades and with the hard link option and a
>> framework that parallelizes the post-analyzer phase...
>>
>> ...45 minutes till completion.
> GL to you
Luck has nothing to do
On 07/29/2016 01:17 PM, D'Arcy J.M. Cain wrote:
On Fri, 29 Jul 2016 15:09:59 -0500
Larry Rosenman wrote:
version to the bare version of the binary name. Example:
- psql.8.3
- psql.9.1
- psql.9.3
- psql ==> psql.9.3
Other than linking to the correct library can you think of any
other issue
On 29-07-2016 20:33, Jerry Sievers wrote:
Condor writes:
On 26-07-2016 21:04, Dorian Hoxha wrote:
Many comments: https://news.ycombinator.com/item?id=12166585
https://www.reddit.com/r/programming/comments/4uph84/why_uber_engineering_switched_from_postgres_to/
On Tue, Jul 26, 2016 at 7:39 PM
Given:
CREATE TABLE jsontbl ( id int, label text, json_obj json);
I need to create two rows from the following JSON, with the json_obj in
each row equal to the text/json object in the content
1,'one',{"id":1,"label":"one"}
2,'two',{"id":2,"label":"two"}
I'd like to not have to hard-code the co
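A minimal sketch of one way to do that (assuming the two objects arrive as a
single JSON array, shown inline below as a hypothetical literal;
json_array_elements unnests the array, so only the key names are spelled out):

  INSERT INTO jsontbl (id, label, json_obj)
  SELECT (elem ->> 'id')::int,   -- pull "id" out of the object as an integer
         elem ->> 'label',       -- pull "label" out as text
         elem                    -- keep the whole object in json_obj
  FROM json_array_elements(
         '[{"id":1,"label":"one"},{"id":2,"label":"two"}]'::json
       ) AS elem;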
On Fri, 29 Jul 2016 15:09:59 -0500
Larry Rosenman wrote:
> >> version to the bare version of the binary name. Example:
> >> - psql.8.3
> >> - psql.9.1
> >> - psql.9.3
> >> - psql ==> psql.9.3
> >>
> >> Other than linking to the correct library can you think of any
> >> other issues with this
On 2016-07-29 15:14, Bruce Momjian wrote:
On Fri, Jul 29, 2016 at 03:09:59PM -0500, Larry Rosenman wrote:
>>I might take a look at the NetBSD package (I'm a developer) to see how
>>hard it would be to allow multiple versions. We do keep all the lib
>>stuff in a separate directory so that part w
On Fri, Jul 29, 2016 at 03:09:59PM -0500, Larry Rosenman wrote:
> >>I might take a look at the NetBSD package (I'm a developer) to see how
> >>hard it would be to allow multiple versions. We do keep all the lib
> >>stuff in a separate directory so that part would be relatively simple.
> >>We just
On 2016-07-29 15:06, Larry Rosenman wrote:
On 2016-07-29 15:04, D'Arcy J.M. Cain wrote:
On Fri, 29 Jul 2016 15:07:53 -0400
Bruce Momjian wrote:
> The answer is either chroot or mount and run pg_upgrade on another
> server. If you can afford the downtime you can also delete PG,
> install the ne
On 2016-07-29 15:04, D'Arcy J.M. Cain wrote:
On Fri, 29 Jul 2016 15:07:53 -0400
Bruce Momjian wrote:
> The answer is either chroot or mount and run pg_upgrade on another
> server. If you can afford the downtime you can also delete PG,
> install the new version and run pg_upgrade without modifyi
On Fri, 29 Jul 2016 15:07:53 -0400
Bruce Momjian wrote:
> > The answer is either chroot or mount and run pg_upgrade on another
> > server. If you can afford the downtime you can also delete PG,
> > install the new version and run pg_upgrade without modifying the
> > existing DB. If it succeeds the
On Fri, Jul 29, 2016 at 02:50:32PM -0400, Stephen Frost wrote:
> > Er, using a not yet invented pg_downgrade:-)
>
> The short answer is 'no'. Consider a case like the GIN page changes -- as
> soon as you execute DML on a column that has a GIN index on it, we're
> going to rewrite that page using a
On Fri, Jul 29, 2016 at 03:03:46PM -0400, D'Arcy J.M. Cain wrote:
> As does NetBSD. The problem is that, unlike Python (of which BSD allows
> multiple versions), there is only one executable to deal with. It's not
> an insurmountable problem but it could get messy.
>
> The answer is either chroot or
On Fri, 29 Jul 2016 13:06:04 -0500
Larry Rosenman wrote:
> > Is that because it is hard to install the old and new clusters on
> > the same server on FreeBSD?
> >
> The current FreeBSD Ports collection only allows ONE version to
> be installed at a time.
As does NetBSD. The problem is that
Jerry,
* Jerry Sievers (gsiever...@comcast.net) wrote:
> Bruce Momjian writes:
> > I agree, but I am not sure how to improve it. The big complaint I have
> > heard is that once you upgrade and open up writes on the upgraded
> > server, you can't re-apply those writes to the old server if you nee
Bruce Momjian writes:
> On Wed, Jul 27, 2016 at 10:22:27AM -0400, Scott Mead wrote:
>
>> That being said, it doesn't really provide a back-out plan. The beauty of
>> replication is that you can halt the upgrade at any point if need be and cut
>> your (hopefully small) losses. If you use -k, you
On 2016-07-29 12:59, Bruce Momjian wrote:
On Fri, Jul 29, 2016 at 07:49:36PM +0200, Maeldron T. wrote:
And yes, I hate upgrading PostgreSQL especially on FreeBSD where
pg_upgrade
isn’t really an option.
Is that because it is hard to install the old and new clusters on the
same server on FreeB
On Fri, Jul 29, 2016 at 07:49:36PM +0200, Maeldron T. wrote:
> And yes, I hate upgrading PostgreSQL especially on FreeBSD where pg_upgrade
> isn’t really an option.
Is that because it is hard to install the old and new clusters on the
same server on FreeBSD?
--
Bruce Momjian http://mo
On 26/07/16 19:39, Guyren Howe wrote:
Honestly, I've never heard of anyone doing that. But it sounds like they had
good reasons.
https://eng.uber.com/mysql-migration/
Thoughts?
1. Open the page
2. Press Cmd-F on Mac, Ctrl-F on Linux/Windows. (Find on page)
3. Type "transaction" in the sear
On Fri, Jul 29, 2016 at 7:35 AM, ben.play
wrote:
> Hi guys,
>
> We have a storage problem on our dedicated server and we have to
> migrate.
> Do you know how I can log all queries so that there is no downtime during
> the migration?
>
>
I don't understand how logging of queries has anything to d
log_min_duration_statement = 0
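A minimal sketch of applying that setting, assuming a superuser session and
PostgreSQL 9.4 or later for ALTER SYSTEM (the same line can simply be put in
postgresql.conf instead):

  ALTER SYSTEM SET log_min_duration_statement = 0;  -- 0 ms threshold: every completed statement gets logged
  SELECT pg_reload_conf();                          -- this setting only needs a reload, not a restart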
On 07/29/2016 05:35 AM, ben.play wrote:
Hi guys,
We have a storage problem on our dedicated server and we have to migrate.
Do you know how I can log all queries so that there is no downtime during
the migration?
I tried many options in the conf, but I never fin
Condor writes:
> On 26-07-2016 21:04, Dorian Hoxha wrote:
>
>> Many comments: https://news.ycombinator.com/item?id=12166585
>> https://www.reddit.com/r/programming/comments/4uph84/why_uber_engineering_switched_from_postgres_to/
>>
>> On Tue, Jul 26, 2016 at 7:39 PM, Guyren Howe wrote:
>>
>>> Hon
On Wed, Jul 27, 2016 at 01:02:40PM -0400, Bruce Momjian wrote:
> Watching the video was helpful:
>
> https://vimeo.com/145842299
>
> You can see the failover happened because of various user errors. That
> doesn't excuse our bug, but I am not sure exactly how much they
> understood of Post
On 7/29/16, Keith Fiske wrote:
> On Fri, Jul 29, 2016 at 11:49 AM, Vitaly Burovoy
> wrote:
>
>> On 7/29/16, Keith Fiske wrote:
>> > On Fri, Jul 29, 2016 at 12:53 AM, Vitaly Burovoy <
>> vitaly.buro...@gmail.com>
>> > wrote:
>> >
>> >> On 7/28/16, Keith Fiske wrote:
>> >> > Working on trying to
rob stone writes:
> So, could somebody tell me if the 9.6beta1 version of initdb contained
> an incorrect version constant?
It did, see
https://git.postgresql.org/gitweb/?p=postgresql.git&a=commitdiff&h=99dd8b05aa5647a59f30ca67e67e2e3377f50094
You would have had to do an initdb or pg_upgrade fo
On 07/29/2016 09:06 AM, rob stone wrote:
Hi,
"The database cluster was initialized with PG_CONTROL_VERSION 942, but
the server was compiled with PG_CONTROL_VERSION 960."
So, could somebody tell me if the 9.6beta1 version of initdb contained
an incorrect version constant?
9.3Beta3 requires a ne
On Wed, Jul 27, 2016 at 2:24 AM, Michael Paquier
wrote:
> On Wed, Jul 27, 2016 at 8:07 AM, Stephen Frost wrote:
>> That'd be great. It's definitely on my list of things to look into, but
>> I'm extremely busy this week. I hope to look into it on Friday, would
>> be great to see what you find.
>
On 07/29/2016 09:06 AM, rob stone wrote:
Hi,
Some weeks ago I downloaded 9.6beta1 source, compiled it and used
initdb to create a cluster, then psql to create roles, schema, etc.,
etc. I loaded some data into tables using COPY FROM.
Today I downloaded the 9.6beta3 source, compiled it and tried t
Hi,
Some weeks ago I downloaded 9.6beta1 source, compiled it and used
initdb to create a cluster, then psql to create roles, schema, etc.,
etc. I loaded some data into tables using COPY FROM.
Today I downloaded the 9.6beta3 source, compiled it and tried to start
postgres using pg_ctl but pulled th
On Fri, Jul 29, 2016 at 11:49 AM, Vitaly Burovoy
wrote:
> On 7/29/16, Keith Fiske wrote:
> > On Fri, Jul 29, 2016 at 12:53 AM, Vitaly Burovoy <
> vitaly.buro...@gmail.com>
> > wrote:
> >
> >> On 7/28/16, Keith Fiske wrote:
> >> > Working on trying to get a C version of the maintenance function
On 7/29/16, Keith Fiske wrote:
> On Fri, Jul 29, 2016 at 12:53 AM, Vitaly Burovoy
> wrote:
>
>> On 7/28/16, Keith Fiske wrote:
>> > Working on trying to get a C version of the maintenance function for my
>> > pg_partman extension working so I can hopefully make it more flexible
>> > and
>> > eff
On Fri, Jul 29, 2016 at 12:53 AM, Vitaly Burovoy
wrote:
> On 7/28/16, Keith Fiske wrote:
> > Working on trying to get a C version of the maintenance function for my
> > pg_partman extension working so I can hopefully make it more flexible and
> > efficient.
> >
> >
> https://github.com/keithf4/p
Hi:
On Fri, Jul 29, 2016 at 10:35 AM, Moreno Andreo
wrote:
> After Andreas' post and thinking about it for a while, I came to the decision
> that it's better not to use RAM but another persistent disk, because there
> can be an instant between when a WAL is written and when it's fsync'ed, and if a
> failur
On 29/07/2016 15:13, FarjadFarid(ChkNet) wrote:
If you add a URL to an FTP server with an SSL certificate, your backup
will be much quicker, and if someone steals the computer the
images are still
On Fri, Jul 29, 2016 at 7:08 AM, Moreno Andreo
wrote:
> Regarding backups, I disagree. Files related to the database must be
> consistent with the database itself, so a backup must save both the database
> and the images.
>
I'd suggest you consider that such binary data be defined as immutable.
T
Sorry the URL should have been https://www.maytech.net/
Of course there are other companies in this space.
From: Moreno Andreo [mailto:moreno.and...@evolu-s.it]
Sent: 29 July 2016 12:08
To: FarjadFarid(ChkNet) ;
pgsql-general@postgresql.org
Subject: Re: [SPAM] Re: [GENERAL] WAL directory
Actually, you can increase the overall performance of your system severalfold
by distributing the source of the data with encryption. CDN services use this
old technique all the time.
If you add a URL to an FTP server with an SSL certificate, your backup will be
much quicker, and if someon
Hi guys,
We have a storage problem on our dedicated server and we have to migrate.
Do you know how I can log all queries so that there is no downtime during
the migration?
I tried many options in the conf, but I never found the right one. I didn't
figure out how to generate a .sql file with all the queries.
On 29/07/2016 11:44, FarjadFarid(ChkNet) wrote:
The question to ask is: what benefit would you gain by saving a
BLOB object in a database rather than on, say, a flat file server or as a
URL on an FTP se
Another option which is growing in popularity is a distributed in-memory cache.
There are quite a few companies providing such technology.
Pricing can range from free to quite expensive.
One recommendation with these technologies is to test them under heavy load
conditions.
Good luck
The question to ask is: what benefit would you gain by saving a BLOB object in a
database rather than on, say, a flat file server or as a URL on an FTP server?
Especially larger ones.
BLOBs cause a lot of problems for all DBs, unless the DB engine can understand
their structure and process them. It is no
On 29/07/2016 10:43, John R Pierce wrote:
Aside from this, I have 350 DBs that add up to a bit more than 1 TB, and I plan
to use wal_level=archive because I plan to have a backup
Aside from this, I have 350 DBs that add up to a bit more than 1 TB, and I plan
to use wal_level=archive because I plan to have a backup server with
barman.
With that many databases, that many objects, and undoubtedly many client
connections, I'd want to spread that across a cluster of smaller
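Picking up the wal_level=archive point above, a minimal sketch of the
archiving-related settings, assuming PostgreSQL 9.4+ for ALTER SYSTEM; the
archive_command is only a placeholder, since the real command depends on how
barman is set up:

  ALTER SYSTEM SET wal_level = 'archive';  -- enough WAL detail for file-based archiving
  ALTER SYSTEM SET archive_mode = on;      -- wal_level and archive_mode both need a server restart
  ALTER SYSTEM SET archive_command =
    'rsync -a %p barman-host:/path/to/incoming/%f';  -- placeholder; %p = WAL file path, %f = file name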
On 28/07/2016 20:45, Francisco Olarte wrote:
On Thu, Jul 28, 2016 at 3:25 PM, Moreno Andreo wrote:
Obviously a ramdisk will be many times faster than disk, but having a, say, 512 GB
ramdisk will be a little too expensive :-)
Besides defeating the purpose of WAL, if you are going to use non
persistent