Re: [GENERAL] general questions

2014-01-08 Thread Raghavendra
On Thu, Jan 9, 2014 at 5:04 AM, Tom Lane wrote:
> CS DBA writes:
> > 1) \d and schemas
> > - I set up 2 schemas (sch_a and sch_b)
> > - I added both schemas to my search_path
> > - I created 2 tables: sch_a.test_tab and sch_b.test_tab
> >
> > If I do a \d with no parameters I only see the first

Re: [GENERAL] general questions

2014-01-08 Thread Tom Lane
CS DBA writes:
> 1) \d and schemas
> - I set up 2 schemas (sch_a and sch_b)
> - I added both schemas to my search_path
> - I created 2 tables: sch_a.test_tab and sch_b.test_tab
> If I do a \d with no parameters I only see the first test_tab table
> based on the order of my search_path.
> I ge
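
A minimal psql sketch of the behavior being asked about, using the schema and table names from the thread: with both schemas on search_path, a bare \d lists only the test_tab that is visible first, while a schema-qualified pattern shows both.

    CREATE SCHEMA sch_a;
    CREATE SCHEMA sch_b;
    CREATE TABLE sch_a.test_tab (id int);
    CREATE TABLE sch_b.test_tab (id int);
    SET search_path = sch_a, sch_b;

    -- \d with no pattern shows only sch_a.test_tab, the one first in search_path
    \d

    -- a schema-qualified pattern lists test_tab in every schema
    \d *.test_tab

    -- or name the schema explicitly
    \d sch_b.test_tab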

Re: [GENERAL] general questions postgresql performance config

2010-01-26 Thread Jayadevan M
Hi. Regarding Pentaho: please keep in mind that Pentaho needs a significant amount of memory. We had a lot of issues with Pentaho crashing with a Java out-of-memory error. If you are using a 64-bit machine, you may be able to give it sufficient RAM and keep it happy. If all you have is one 4 GB ma
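
As a rough illustration only: the fix usually amounts to raising the JVM heap bounds passed to whatever script launches Pentaho. The variable and script names below are assumptions and vary by Pentaho version; check your own start script.

    # Assumption: your Pentaho launcher reads JAVA_OPTS (many Tomcat-based setups do)
    export JAVA_OPTS="-Xms1g -Xmx4g"   # raise the maximum heap available to the 64-bit JVM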

Re: [GENERAL] general questions postgresql performance config

2010-01-26 Thread Greg Smith
Andy Colson wrote:
> I recall seeing someplace that you can avoid WAL if you start a
> transaction, then truncate the table, then start a COPY. Is that
> correct? Still hold true? Would it make a lot of difference?

That is correct, still true, and can make a moderate amount of difference if the
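
A minimal sketch of the pattern being discussed, with a placeholder table name and file path. Note that the WAL-skipping optimization only applies when WAL archiving/streaming replication is not in use, since archived WAL has to contain every change.

    BEGIN;
    TRUNCATE my_table;                               -- hypothetical table
    COPY my_table FROM '/path/to/data.csv' WITH CSV; -- server-side file path
    COMMIT;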

Re: [GENERAL] general questions postgresql performance config

2010-01-26 Thread Andy Colson
On 1/25/2010 8:12 PM, Craig Ringer wrote:
> On 26/01/2010 12:15 AM, Dino Vliet wrote:
> > 5) Other considerations?
>
> Even better is to use COPY to load large chunks of data. libpq provides
> access to the COPY interface if you feel like some C coding. The JDBC
> driver (dev version only so far) now prov
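
For anyone not keen on C or the development JDBC driver, psql's \copy drives the same COPY protocol from the client side and reads the file on the client machine. A minimal sketch with placeholder names:

    -- run inside psql; the file path refers to the client, not the server
    \copy my_table FROM 'data.csv' CSV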

Re: [GENERAL] general questions postgresql performance config

2010-01-25 Thread Scott Marlowe
On Mon, Jan 25, 2010 at 9:15 AM, Dino Vliet wrote:
>
> Introduction
> Today I've been given the task to proceed with my plan to use PostgreSQL and
> other open source techniques to demonstrate to the management of my
> department the usefulness and the "cost savings" potential that lies ahead.

Re: [GENERAL] general questions postgresql performance config

2010-01-25 Thread Craig Ringer
On 26/01/2010 12:15 AM, Dino Vliet wrote:
> 5) Other considerations?

To get optimal performance for bulk loading you'll want to do concurrent data
loading over several connections, up to as many as you have disk spindles.
Each connection will individually be slower, but the overall throughp
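
One simple way to get several concurrent loading connections is to split the input file and run one psql \copy per chunk in parallel. A rough shell sketch with hypothetical file, database, and table names:

    # assumes the CSV has no embedded newlines, so a line-based split is safe
    split -l 1000000 data.csv chunk_
    for f in chunk_*; do
        # each psql process is its own connection, loading one chunk
        psql -d mydb -c "\copy big_table FROM '$f' CSV" &
    done
    wait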

Re: [GENERAL] general questions about joins in queries

2006-01-17 Thread Viktor Lacina
Hi, it's the same; try "EXPLAIN" if you are not sure. Viktor

On Monday, 16 January 2006 at 18:01, Zlatko Matić wrote:
> Hello.
> Is it better to use A) or B)?
>
> A)
>
> SELECT
> "public"."departments".*,
> "public"."plants".*,
> "public"."batches_microbs".*,
> "public"."results_microbs".*
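
To see that the planner treats the two spellings the same, compare their plans with EXPLAIN. A sketch using two of the tables from the question; the join column is hypothetical.

    -- A) implicit join in the WHERE clause
    EXPLAIN
    SELECT d.*, p.*
    FROM "public"."departments" d, "public"."plants" p
    WHERE p.department_id = d.department_id;

    -- B) explicit JOIN ... ON syntax; the plan should come out the same
    EXPLAIN
    SELECT d.*, p.*
    FROM "public"."departments" d
    JOIN "public"."plants" p ON p.department_id = d.department_id;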

Re: [GENERAL] general questions on Postgresql and deployment on

2004-12-03 Thread Doug McNaught
Calvin Wood <[EMAIL PROTECTED]> writes:
> symbolic link. But on win32, there is no equivalent. However, even under
> a *nix system, I believe a symbolic link can only be created for directories
> on the same hard drive. This seems less than optimal. Typically, one would
> place database files on RA
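
For context, the 8.0 release being discussed adds tablespaces, which let you place a table on another drive without hand-made symbolic links. A minimal sketch with hypothetical paths and names:

    -- the directory must already exist and be owned by the PostgreSQL service account
    CREATE TABLESPACE fastspace LOCATION 'E:/pgdata_fast';

    -- place a new table (or index, or database) on that drive
    CREATE TABLE big_table (id int, payload text) TABLESPACE fastspace;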

Re: [GENERAL] general questions on Postgresql and deployment on win32 platform

2004-12-03 Thread Magnus Hagander
> I have gone through the documentation that comes with version
> 8 beta 4 and I have a number of questions.
>
> (1) backup/restore
> I notice that the documentation seems to suggest that
> an online backup, made via the pg_start_backup() and
> pg_stop_backup() functions, would back up all dat
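
A minimal sketch of the online-backup sequence the question refers to; the label is arbitrary, and the file-level copy in the middle stands in for whatever backup tool you use.

    SELECT pg_start_backup('nightly');
    -- ... copy the whole data directory (and any tablespaces) with a file-level tool ...
    SELECT pg_stop_backup();
    -- the WAL segments generated between the two calls must be kept as well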

Re: [GENERAL] general questions on Postgresql and deployment on win32

2004-12-03 Thread Richard Huxton
Calvin Wood wrote:
> Does it also mean that I must back up and restore all the databases (or the
> database cluster, in PostgreSQL speak) even if I am only interested in one
> database?

You can use pg_dump to back up individual databases (or tables, etc.). A file-level backup does require all databases in one "
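
A rough sketch of the two approaches, with a placeholder database name: pg_dump for a single database, versus pg_dumpall (or a file-level copy), which covers the whole cluster.

    # dump just one database, in pg_dump's compressed custom format
    pg_dump -Fc mydb > mydb.dump

    # restore it later into an (empty) database of the same name
    pg_restore -d mydb mydb.dump

    # by contrast, pg_dumpall (like a file-level backup) takes every database in the cluster
    pg_dumpall > cluster.sql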