Ah, got it. Thanks!
On Dec 23, 2006, at 5:59 PM, Tom Lane wrote:
Ben <[EMAIL PROTECTED]> writes:
But, this page confuses me when it talks about pg_start_backup and
pg_stop_backup. What do these functions do? It seems like they do
nothing more than let me know which wal files were in use over t
Tom Lane wrote:
Well, you could update --- 8.2 contains code to recognize that the IS
NULL expression is constant, but prior releases do not.
That's excellent to hear -- I'd missed that in my perusing of the
changelogs between 8.0.x and 8.2. That does give me one more reason to
upgrade. It
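(Purely as illustration -- the original query isn't shown in this thread, so the table t and indexed column col below are made up. This is the kind of expression 8.2 can now fold away at plan time:)

   EXPLAIN SELECT * FROM t WHERE col = 42 OR 42 IS NULL;
   -- 8.2 recognizes that "42 IS NULL" is constant (false), so the OR collapses
   -- and an index on col can be used; per the above, 8.0/8.1 keep the OR as-is.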
Centuries ago, Nostradamus foresaw when [EMAIL PROTECTED] would write:
> Suggest you download my little application and read the documentation,
> you'll see it's very different, maybe even interesting.
> Maybe they should change that to Postgres DOES HAVE a free multi-master
> replication system
Hi,
I did a vacuum with -z and it fixed the issue. I was not aware that
vacuumdb didn't ANALYZE by default. Thanks everybody for all of the help!
Benjamin
Tom Lane wrote:
> Benjamin Arai <[EMAIL PROTECTED]> writes:
>
>>-> Index Scan using mutualfd_weekday_qbid_pkey_idx on
>> mutualf
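(For reference, what the flag maps to -- plain vacuumdb issues VACUUM, and -z makes it issue VACUUM ANALYZE, which is what refreshes the planner statistics:)

   VACUUM;           -- what "vacuumdb mydb" runs: reclaims dead space, no new stats
   VACUUM ANALYZE;   -- what "vacuumdb -z mydb" runs: also updates planner statistics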
After takin a swig o' Arrakan spice grog, [EMAIL PROTECTED] (Csaba Nagy)
belched out:
> On Thu, 2006-12-21 at 18:41, Alvaro Herrera wrote:
>> > From all the discussion here I think the most benefit would result from
>> > a means to assign tables to different categories, and set up separate
>> > au
Ben <[EMAIL PROTECTED]> writes:
> But, this page confuses me when it talks about pg_start_backup and
> pg_stop_backup. What do these functions do? It seems like they do
> nothing more than let me know which wal files were in use over the
> duration of the backup, which is certainly useful. Bu
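(A minimal sketch of how the two functions are normally used around a file-level base backup -- the label and paths here are made up, and WAL archiving must already be configured:)

   SELECT pg_start_backup('nightly base backup');
   -- now copy the data directory with tar/rsync while the server stays up, e.g.
   --   tar -cf /backup/base.tar /usr/local/pgsql/data
   SELECT pg_stop_backup();
   -- pg_stop_backup() records which archived WAL segments are needed to make the
   -- copy consistent; replaying the WAL archived after that point brings a
   -- restore up to the present.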
Benjamin Arai <[EMAIL PROTECTED]> writes:
>-> Index Scan using mutualfd_weekday_qbid_pkey_idx on
> mutualfd_weekday_qbid (cost=0.00..6.01 rows=1 width=19) (actual
> time=34.579..8510.801 rows=253 loops=1)
> Index Cond: ((pkey >= '2005-12-15'::date) AND (pkey <=
> '2006-12-15'::dat
Alexander Farber wrote:
> I started to prepare a test case and realized I had a bug.
> So convert() works for me, sorry for my previous message!
In any case, it's probably saner if you SET client_encoding at the start
of the session instead of using convert() everywhere. The
server_encoding shoul
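(A minimal sketch, assuming the database itself is in WIN1251 and the client wants UTF-8:)

   SET client_encoding TO 'UTF8';
   -- from here on the server converts every result to UTF-8 and converts
   -- incoming literals from UTF-8 back to the server encoding automatically
   SHOW client_encoding;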
I started to prepare a test case and realized I had a bug.
So convert() works for me, sorry for my previous message!
Regards
Alex
--
http://preferans.de
Hello,
I'm programming a small Flash game with PostgreSQL 8.1.4
and phpBB serving as backend. The data in the database is
in windows_1251 encoding. For my game I have to convert
it into utf8, and at the command prompt it seems to work (I reckon
this because 7 win1251 chars seem to produce 14 utf8
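(One way to confirm the conversion rather than eyeballing it -- a sketch with a made-up table and column, using the 8.1 three-argument form of convert():)

   SELECT octet_length(title)                             AS win1251_bytes,
          octet_length(convert(title, 'WIN1251', 'UTF8')) AS utf8_bytes
     FROM some_table LIMIT 1;
   -- Cyrillic characters take 1 byte in WIN1251 and 2 bytes in UTF-8, so a
   -- 7-character Cyrillic string should show 7 vs 14 here.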
Thanks for the pointer. This does look like what I want, because in
retrospect I don't know how I would know which wal logs I would start
to replay after a given pg_dumpall to bring myself up to the present
after a recovery.
But, this page confuses me when it talks about pg_start_backup and
On Sat, 23 Dec 2006, Benjamin Arai wrote:
"-> Index Scan using mutualfd_weekday_qbid_pkey_idx on mutualfd_weekday_qbid
(cost=0.00..6.01 rows=1 width=19) (actual time=34.579..8510.801 rows=253
loops=1)"
You're right that this is the problem and shows that the planner was expecting
a very low
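(The usual quick check, sketched against the table from the plan above -- the query below is reconstructed from the plan fragment, not the original:)

   ANALYZE mutualfd_weekday_qbid;
   EXPLAIN ANALYZE
     SELECT * FROM mutualfd_weekday_qbid
      WHERE pkey >= '2005-12-15' AND pkey <= '2006-12-15';
   -- with fresh statistics the estimate should move much closer to the 253 rows
   -- actually returned, and the plan can change accordingly.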
Yes, ANALYZE should definitely improve the performance for the query...
--
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)
On 12/24/06, Benjamin Arai <[EMAIL PROTECTED]> wrote:
Just to make things more clear I ran EXPLAIN ANALYZE on the slow query.
I got
Merge Full Join (cost=62.33..7
Just to make things more clear I ran EXPLAIN ANALYZE on the slow query.
I got
Merge Full Join (cost=62.33..73.36 rows=1000 width=19) (actual
time=39.205..8521.644 rows=272 loops=1)
Merge Cond: ("outer".pkey = "inner".d1)
-> Index Scan using mutualfd_weekday_qbid_pkey_idx on
mutualfd_we
Adding to the last email: for now, try the work_mem change, but you should be
running ANALYZE along with the VACUUM you do regularly (with a cron job,
I guess).
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)
On 12/24/06, Shoaib Mir <[EMAIL PROTECTED]> wrote:
Try increasing the work_mem
Try increasing the work_mem first to see the change, that might help.
--
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)
On 12/24/06, Benjamin Arai <[EMAIL PROTECTED]> wrote:
I have been running pieces of my PL function by hand and I have found
that the following queries work by them
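(This can be tried per session with no restart -- on 8.1 the value is in kilobytes, e.g.:)

   SHOW work_mem;          -- 8.1 reports it in KB
   SET work_mem = 65536;   -- 64MB for this session only
   -- rerun the EXPLAIN ANALYZE and compare; only move it into postgresql.conf
   -- if it actually helps and the machine has RAM to spare.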
I have been running pieces of my PL function by hand and I have found
that the following queries work by themselves taking less than a second
to execute.
getDateRange"('12/1/2005','12/1/2006') <- simply generates a date
list. Doesn't even access a table
SELECT * FROM mutualfd_weekday_qbid
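(One thing worth checking: inside a PL/pgSQL function the dates arrive as parameters, so the queries are planned without knowing the actual values, which can give a different plan than the hand-run version with literal dates. A rough way to reproduce that from psql -- the query below is only a sketch, since the function body isn't shown:)

   PREPARE q(date, date) AS
     SELECT * FROM mutualfd_weekday_qbid
      WHERE pkey >= $1 AND pkey <= $2;
   EXPLAIN ANALYZE EXECUTE q('2005-12-15', '2006-12-15');
   -- this shows the generic, parameterized plan, which is closer to what the
   -- function actually uses than a plan built for hard-coded literals.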
On Sat, 23 Dec 2006, Benjamin Arai wrote:
I thought that you only need to use the -z flag if the distribution of the
data is changing.
You're absolutely correct. Have you not been inserting, updating or deleting
data? It sounds like you have been, based on the followup email you just sent:
One m
I thought that you only need to use the -z flag if the distribution of
the data is changing.
Jeff Frost wrote:
On Sat, 23 Dec 2006, Benjamin Arai wrote:
The largest table in my database (30GB) has mysteriously gone from
taking milliseconds to perform a query to minutes. The disks are
fine
Function Scan on "getTimeSeries" (cost=0.00..12.50 rows=1000 width=24)
(actual time=11065.981..11067.008 rows=262 loops=1)
Total runtime: 11067.991 ms
(2 rows)
It is a PL function. Do I need to break it down?
A. Kretschmer wrote:
On Sat, 23.12.2006, at 11:26:08 -0800, Benjamin Arai wrote the following:
On Sat, 23 Dec 2006, Benjamin Arai wrote:
The largest table in my database (30GB) has mysteriously gone from taking
milliseconds to perform a query to minutes. The disks are fine and I have
a 4GB shared_memory. Could this slow down have to do with the max_fsm_pages
or something else like t
On Sat, 23.12.2006, at 11:26:08 -0800, Benjamin Arai wrote the following:
> because it would literally take over a week to complete. Any help would
> be greatly appreciated.
What does an 'explain analyse' say?
Andreas
--
Andreas Kretschmer
Kontakt: Heynitz: 035242/47215, D1: 0160/7141639 (meh
Hi,
The largest table in my database (30GB) has mysteriously gone from
taking milliseconds to perform a query to minutes. The disks are fine
and I have a 4GB shared_memory. Could this slow down have to do with
the max_fsm_pages or something else like that? I made it larger but the
querie
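(A quick way to see whether the free space map is actually undersized -- the tail of a database-wide VACUUM VERBOSE reports how much FSM space is needed:)

   SHOW max_fsm_pages;
   VACUUM VERBOSE;
   -- the last lines print something like "free space map contains N pages in M
   -- relations" plus the number of page slots needed; if that exceeds
   -- max_fsm_pages, raise it in postgresql.conf (needs a postmaster restart).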
Thanks Bruce... just read it, still running 8.1 so I didn't see the 8.2
documentation.
Nice to know my work has not been duplicated... quote from
http://www.postgresql.org/docs/8.2/static/high-availability.html
"PostgreSQL does not offer this type of replication, though PostgreSQL
two-phase commit (PREP
I think you might want to do incremental backups, so a better approach, as you
mentioned too, would be WAL files. For details you can refer to
--> http://www.postgresql.org/docs/current/static/continuous-archiving.html
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)
On 12
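(The core of that setup is small -- a sketch, with a made-up archive directory; see the page above for the full procedure:)

   # postgresql.conf (8.1): setting archive_command enables WAL archiving
   archive_command = 'cp -i %p /mnt/backup/wal/%f </dev/null'
   # each completed 16MB WAL segment is handed to that command; the archived
   # segments are the incremental part a per-file tool like Bacula can pick up,
   # on top of an occasional base backup.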
Hi everybody,
I'm trying to find a good solution to making backups to tape, where I
want to define "good" as:
- easy to use, like pg_dumpall, BUT
- not in a single file, so I don't back up my entire database cluster
with every differential backup
As I understand my backup program (Bacula)
I assume you have read this new documentation for 8.2:
http://www.postgresql.org/docs/8.2/static/high-availability.html
---
[EMAIL PROTECTED] wrote:
> >Hi,
> >
> >I am currently investigating the preferred method of
Maxime Zakharov <[EMAIL PROTECTED]> writes:
> I have a problem restarting postgres 8.1.3 running on FreeBSD 5.4 after
> a crash. Just after starting, postgresql dumps core; the backtrace follows:
> Core was generated by `postgres'.
> Program terminated with signal 4, Illegal instruction.
At a guess
Maxime Zakharov wrote:
I have a problem restarting postgres 8.1.3 running on FreeBSD 5.4 after
a crash. Just after starting, postgresql dumps core; the backtrace follows:
Rebooting the server solved the problem.
--
http://www.maxime.net.ru/ http://sochi.org.ru/
Hi,
I have a problem restarting postgres 8.1.3 running on FreeBSD 5.4 after
a crash. Just after starting, postgresql dumps core; the backtrace follows:
gdb ../bin/postgres postgres.core
GNU gdb 6.1.1 [FreeBSD]
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU
Joost Kuckartz wrote:
> Thanks for all the help guys,
> I actually found the problem and solution.
>
> Maybe for others who will run into this, let me recap:
> - While installing the msi file I had to disable the initialization in
> the installer, because of installation on a FAT32 partition
> - I th
>Hi,
>
>I am currently investigating the preferred method of clustering a postgresql
>database on Redhat.
>
>I would really appreciate some suggestions or experiences you guys have had.
>
>note: performance & redundancy are both equally desirable and I have plenty of
>resources.
>
>we already