In a complex query I noticed that the planner made a bad estimate for a
join between two tables (I made sure statistics were up to date).
The join was on a single column. The first table has about 985 rows. All
rows except one have a NULL value in the join column; the
On Tue, Oct 22, 2013 at 10:50 AM, James Sewell wrote:
> That looks great, but it doesn't really help with my problem unless I'm
> missing something (very possible!)
>
> I need a way to backup either from SQL in PSQL (possibly \!) or from a
> PG/PLSQL function to a file with a name set from a :vari
On 10/23/2013 9:10 PM, Jayadevan M wrote:
I went through
http://www.postgresql.org/docs/9.3/static/continuous-archiving.html
and set up the archiving process. With this approach, if my database
crashes a couple of weeks after the base backup is taken,
recovering would mean replaying the W
Hi,
I went through
http://www.postgresql.org/docs/9.3/static/continuous-archiving.html
and set up the archiving process. With this approach, if my database
crashes a couple of weeks after the base backup is taken, recovering
would mean replaying the WAL logs for about two weeks, right? To avoid
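The archiving setup from that docs page boils down to a few postgresql.conf
settings; a minimal 9.3-era fragment (the archive directory is an assumption):

```
wal_level = archive            # 9.3: 'archive' or higher enables WAL archiving
archive_mode = on              # requires a server restart
archive_command = 'cp %p /mnt/server/archivedir/%f'   # assumed destination
```

To bound the replay window, take a fresh base backup (e.g. with pg_basebackup)
periodically; recovery then only replays WAL written since the most recent
base backup, not the full two weeks.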
Newb question.
I'm running 9.1 with a slave using streaming replication. A coworker wants
to turn off fsync on the master and insists that the slave will still be in
a usable state if there is a failure on the master. We all know that
turning off fsync is a bad idea, but I was under the impression
On 10/23/2013 04:28 PM, Jeff Janes wrote:
My page response time is sub-second, and I run quite a few queries to build
the page. But also, my server isn't too busy at the moment. The load is around
0.3 to 0.5 when it's busy.
Wasn't your question to figure out how to make sure things conti
On Tue, Oct 22, 2013 at 1:13 PM, andy wrote:
> On 10/22/2013 2:18 PM, John R Pierce wrote:
>
>> On 10/22/2013 11:25 AM, andy wrote:
>>
>>> Hum.. I had not thought of that. My current setup uses 40 max
>>> connections, and I don't think I've ever hit it. I use apache and
>>> php, and my db conne
On Tue, Oct 22, 2013 at 12:18 PM, John R Pierce wrote:
> On 10/22/2013 11:25 AM, andy wrote:
>
>> Hum.. I had not thought of that. My current setup uses 40 max
>> connections, and I don't think I've ever hit it. I use apache and php, and
>> my db connections are not persistent.
>>
>
> that styl
On Tue, Oct 22, 2013 at 10:41 AM, andy wrote:
> Hi all.
>
> My website is about to get a little more popular. I'm trying to add in
> some measurements to determine an upper limit of how many concurrent
> database connections I'm currently using.
>
> I've started running this:
>
> SELECT sum(numb
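A common way to snapshot the current connection count is to query the
statistics views (a sketch; `pg_stat_activity` and `pg_stat_database` both
exist in 9.1, any thresholds or scheduling around them are up to you):

```sql
-- Total connections right now:
SELECT count(*) FROM pg_stat_activity;

-- Per-database backend counts:
SELECT datname, numbackends
FROM pg_stat_database
ORDER BY numbackends DESC;
```

Sampling one of these from cron and keeping the maximum observed value gives
a rough upper bound on concurrent connections over time.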
On Tue, Oct 22, 2013 at 09:45:24PM -0500, Andy Colson wrote:
> On 10/22/2013 12:59 PM, Stephen Frost wrote:
> >Andy,
> >
> >* andy (a...@squeakycode.net) wrote:
> >>My website is about to get a little more popular. I'm trying to add
> >>in some measurements to determine an upper limit of how many
On Tue, Oct 22, 2013 at 1:10 PM, Shaun Thomas wrote:
> > So you can grab the extra files, but you can't make it apply them,
> > as you are telling it that it doesn't need to.
>
> Do I have to, though? Replaying transaction logs is baked into the crash
> recovery system. If I interrupt it in the mi
On Wed, Oct 23, 2013 at 12:11:39PM -0500, andy wrote:
- On 10/23/2013 11:07 AM, David Kerr wrote:
- >On Tue, Oct 22, 2013 at 12:41:58PM -0500, andy wrote:
- >- Hi all.
- >-
- >- My website is about to get a little more popular. I'm trying to add in
- >- some measurements to determine an upper limi
Hi James, I want to share a script that I use; I schedule it with crontab.
It generates a backup of each database. After that, another scheduled
script copies the backups to another server.
#! /bin/bash
TIEMPO=$(date '+%d%m%Y_%H%M%S')
UBICACION="/var/lib/pgsql/9.3/backups_a
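A self-contained sketch of such a per-database backup script (the destination
path and database names are assumptions, and the pg_dump commands are echoed
rather than executed here):

```shell
#!/bin/bash
# Timestamp in the same ddmmYYYY_HHMMSS format as the script above.
TIEMPO=$(date '+%d%m%Y_%H%M%S')
UBICACION="/var/lib/pgsql/9.3/backups"    # assumed destination directory

# Hypothetical database list; a real script might build it with
# psql -Atc "SELECT datname FROM pg_database WHERE NOT datistemplate"
for DB in app_db reporting_db; do
    ARCHIVO="${UBICACION}/${DB}_${TIEMPO}.dump"
    # Echoed instead of executed in this sketch:
    echo "pg_dump -Fc -f ${ARCHIVO} ${DB}"
done
```

A cron entry pointing at this script, plus an rsync job for the copy step,
matches the two-script setup described above.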
On 10/23/2013 11:07 AM, David Kerr wrote:
On Tue, Oct 22, 2013 at 12:41:58PM -0500, andy wrote:
- Hi all.
-
- My website is about to get a little more popular. I'm trying to add in
- some measurements to determine an upper limit of how many concurrent
- database connections I'm currently using.
On Tue, Oct 22, 2013 at 12:41:58PM -0500, andy wrote:
- Hi all.
-
- My website is about to get a little more popular. I'm trying to add in
- some measurements to determine an upper limit of how many concurrent
- database connections I'm currently using.
-
- I've started running this:
-
- SELE
You could write a plperlu function that runs a shell script to back up
your database; you can even pass it parameters and put a call to it in a
trigger.
BUT, this could result in multiple backups running at the same time and
become a performance drag.
--
Mike Nolan
On Tue, Oct 22, 2013 at 9:1
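A minimal sketch of such a plperlu wrapper (the function name, dump path, and
pg_dump flags are all assumptions; plperlu rather than plperl is needed
because it shells out):

```sql
CREATE OR REPLACE FUNCTION backup_db(dbname text) RETURNS integer AS $$
  my ($dbname) = @_;
  # Returns system()'s status value; 0 means pg_dump succeeded.
  return system("pg_dump -Fc -f /var/lib/pgsql/backups/$dbname.dump $dbname");
$$ LANGUAGE plperlu;
```

As noted above, firing this from a trigger risks overlapping backups; wrapping
the call in an advisory lock (e.g. pg_try_advisory_lock) is one way to make
sure only one backup runs at a time.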
> On Wednesday, October 23, 2013 9:45 PM, Tom Lane wrote:
> Patrick Dung writes:
>
> By default, FreeBSD ports does not build postgresql with icu
> (http://www.icu-project.org/).
>
> Postgres does not have any option to use ICU, default or otherwise.
> Nor is it likely to happen in future,
Patrick Dung writes:
> By default, FreeBSD ports does not build postgresql with icu
> (http://www.icu-project.org/).
Postgres does not have any option to use ICU, default or otherwise.
Nor is it likely to happen in future, judging from previous discussions
of the idea.
r
Hi all,
By default, FreeBSD ports does not build postgresql with icu
(http://www.icu-project.org/).
Some questions:
1) Is it necessary to use ICU on FreeBSD?
I have heard that FreeBSD's locale (glibc) has sorting problems with PostgreSQL
UTF8 databases.
reference: http://wiki.postgresql.
For archive :
FOR x IN q_query behaves like RETURN QUERY regarding previously described
behavior.
Cheers,
Rémi-C
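For reference, the two forms being compared look like this (table and column
names are made up for illustration):

```sql
-- RETURN QUERY form:
CREATE OR REPLACE FUNCTION ids_a() RETURNS SETOF integer AS $$
BEGIN
  RETURN QUERY SELECT id FROM some_table;
END;
$$ LANGUAGE plpgsql;

-- FOR x IN query form, which behaves the same way here:
CREATE OR REPLACE FUNCTION ids_b() RETURNS SETOF integer AS $$
DECLARE
  x record;
BEGIN
  FOR x IN SELECT id FROM some_table LOOP
    RETURN NEXT x.id;
  END LOOP;
END;
$$ LANGUAGE plpgsql;
```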
2013/10/23 Rémi Cura
> Hey,
> thanks for the answers,
>
> sorry for the cross post, I didn't know if it was a PostGIS or Postgres
> issue, hence the double post (removed postgis now)
Anson Abraham wrote:
> No client connecting to the slave. It's just streamed replication for HA.
> This occurs when the slave
> starts immediately. SSL is used. And as I mentioned the libraries are
> identical on both slave and
> master. Interestingly, another slave that replicates from mast
OK, thanks for this clarification, Merlin.
It seems aggregates are far more powerful than I thought.
Clearly I need to read a lot more about custom aggregates before fully
understanding them.
Elliot's query is pure SQL, which is very cool!
It could be improved at the margin, and aggregates/funct
Shaun Thomas wrote:
>> Wrong. The database cannot check all data for consistency
>> upon backup. For one, that would take way too long.
>
> Well, what I meant, was that it would stop the database if it couldn't
> apply one of the transaction logs for whatever reason. It wasn't
> "inconsistent en
Hey,
thanks for the answers,
sorry for the cross post, I didn't know if it was a PostGIS or Postgres
issue, hence the double post (removed postgis now).
I'm afraid I don't fully understand the answer. Are you both saying
that it is normal behavior for a function that should return a custom