some daily dump/reload scripts for all
projects right away.
--
Robins Tharakan
On Thu, Feb 9, 2012 at 9:11 PM, Tom Lane wrote:
> Robins Tharakan writes:
> > This is a case where I changed the name of a field in a table that a VIEW
> > referred to, but the VIEW definition still
this server may help.
Further (I am unsure here), I believe the field name was changed ~1-2
weeks back and the server was restarted just the day before. Is it possible
that this survives a restart as well?
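If it helps, a minimal sketch of what a rename does to a dependent view (table and view names here are hypothetical). PostgreSQL stores the view's parsed query, referencing columns internally rather than by name, so the rename propagates into the view, while the view's own output column keeps its original name:

```sql
CREATE TABLE t (oldcol int);
CREATE VIEW v AS SELECT oldcol FROM t;
ALTER TABLE t RENAME COLUMN oldcol TO newcol;
-- pg_get_viewdef('v') now deparses as roughly:
--   SELECT t.newcol AS oldcol FROM t;
-- The rewritten query is stored in the system catalogs,
-- so yes, it survives a server restart.
```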
Thanks
--
Robins Tharakan
==
[pgsql@server /webstats/pgsql]$ psql
psql (
This message has been digitally signed by the sender.
Re___GENERAL__Why_does_index_not_use_for_CTE_query_.eml
Description: Binary data
-
Hi-Tech Gears Ltd, Gurgaon, India
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to y
). This is
expected behaviour.
--
Robins Tharakan
On 12/27/2011 02:24 PM, AI Rumman wrote:
I know that. I wrote here only a sample. I have to use UNION ALL on the
CTE expression several times, where the UNION ALL and a CONCAT SELECT
will be changed.
That's why I can't include
--
Hi,
The CTE is a distinct query, and you're doing a SELECT * FROM t1, which
is quite expected to do a table scan.
If you apply the WHERE i=2 *within the CTE*, you should start seeing the
index usage you're expecting.
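A minimal sketch of the two forms (table, column, and index names are hypothetical). In PostgreSQL of this era a CTE acts as an optimization fence, so a filter outside it cannot be pushed down into the CTE:

```sql
-- Filter outside the CTE: the CTE scans the whole table first.
WITH t1 AS (SELECT * FROM big_table)
SELECT * FROM t1 WHERE i = 2;            -- sequential scan

-- Filter inside the CTE: an index on big_table(i) can be used.
WITH t1 AS (SELECT * FROM big_table WHERE i = 2)
SELECT * FROM t1;                        -- index scan possible
```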
--
Robins Tharakan
On 12/27/2011 02:15 PM, AI Rumman wrote:
mass appeal, whatever. No offence, but the base platform isn't
always a striking factor. Personally, I don't care if I have a
steam-engine under the bonnet as long as it runs like a Ferrari ;)
--
Robins Tharakan
You could also do a
pg_dump -Fc | gzip -1 -c > dumpfile.gz
at the cost of a slightly larger (but faster) backup.
Actually, if you're going this route, you could skip the pg_dump
compression as well...
pg_dump db | gzip -1 -c > dumpfile.gz
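A quick sketch of the trade-off being described, using generated sample data instead of a real dump (all file names here are hypothetical): gzip -1 trades some compression ratio for speed, which is usually the right call when pg_dump itself is the bottleneck.

```shell
# Generate ~5 MB of compressible sample data standing in for a dump:
yes "sample row of backup data" | head -c 5000000 > sample.dump

gzip -1 -c sample.dump > fast.gz    # fastest, somewhat larger output
gzip -9 -c sample.dump > small.gz   # slowest, smallest output

# Compare the resulting sizes:
ls -l fast.gz small.gz
```

In a real pipeline you would replace the sample file with `pg_dump db | gzip -1 -c > dumpfile.gz`, as above.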
--
Robins Tharakan
on that (pg_dump dbname | gzip > file.gz)
http://www.postgresql.org/docs/8.4/static/backup-dump.html#BACKUP-DUMP-LARGE
You could also do a
pg_dump -Fc | gzip -1 -c > dumpfile.gz
at the cost of a slightly larger (but faster) backup.
--
Robins Tharakan
zip before writing to disk)?
--
Robins Tharakan
On 11/13/2011 05:08 PM, Phoenix Kiula wrote:
Hi.
I currently have a cronjob to do a full pg_dump of the database every
day, and then gzip it for saving to my backup drive.
However, my db is now 60GB in size, so this daily operation is making
less and
This message has been digitally signed by the sender.
Re___GENERAL__dblink_not_returning_result.eml
Description: Binary data
doing a simple SELECT expecting one row, one column
after each call (before you make the next SELECT)?
--
Robins Tharakan
On 11/03/2011 12:56 PM, AI Rumman wrote:
select new_conn('conn1');
select dblink_send_query
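For reference, a hedged sketch of the async dblink pattern under discussion (the connection string and remote table are hypothetical, and `new_conn` above is presumably a wrapper around `dblink_connect`):

```sql
SELECT dblink_connect('conn1', 'dbname=otherdb');

-- dblink_send_query() returns immediately; it does not return rows.
SELECT dblink_send_query('conn1', 'SELECT count(*) FROM remote_table');

-- The result must be collected with dblink_get_result() before issuing
-- the next dblink_send_query() on the same connection:
SELECT * FROM dblink_get_result('conn1') AS t(cnt bigint);

SELECT dblink_disconnect('conn1');
```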