On Aug 4, 5:23 pm, [EMAIL PROTECTED] ("Bill Bartlett")
wrote:
> Couldn't this be offered pre-built as an alternative in the Windows
> installer or as a "psql2.exe" program? Seems like there might be more
> than a few PostgreSQL users running with US keyboard layouts under
> native Windows, and pro
RPK wrote:
I installed PGSQL on Windows XP. I ran:
Select pg_start_backup('label');
It responded: "Archive disabled"
Is WAL archiving disabled by default?
Can this feature be enabled while installing PGSQL?
Can I change the path of the logs to another partition on my hard disk?
Enable it in the "Write Ahead Log" section of postgresql.conf file in
the PGDATA directory.
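For the 8.2-series installs discussed here, archiving is enabled by giving
archive_command a non-empty value; a minimal sketch of the relevant line,
assuming a pre-created D:\wal_archive folder as the destination:

    # postgresql.conf, "Write Ahead Log" section (Windows-style command)
    archive_command = 'copy "%p" "D:\\wal_archive\\%f"'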
Before modifying postgresql.conf, do I need to stop PGSQL service?
>Enable it in the "Write Ahead Log" section of postgresql.conf file in
>the PGDATA directory.
RPK wrote:
Before modifying postgresql.conf, do I need to stop PGSQL service?
Enable it in the "Write Ahead Log" section of postgresql.conf file in
the PGDATA directory.
You need to restart PostgreSQL after the file has been modified - it
does not need to be shut down whilst the modification is made.
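For settings that are reloadable, the server can also be told to re-read the
file from SQL; a minimal sketch (requires superuser, and settings flagged as
needing a full restart still need one):

    -- ask the postmaster to re-read postgresql.conf
    SELECT pg_reload_conf();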
How do I change the path of the WAL? How much free space should I leave on
the hard disk for effective performance? Since it is disabled by default,
does that mean enabling it is unimportant?
>You need to restart PostgreSQL after the file has been modified - it
>does not need to be shut down whilst the modification is made.
RPK wrote:
How do I change the path of the WAL? How much free space should I leave on
the hard disk for effective performance? Since it is disabled by default,
does that mean enabling it is unimportant?
Just change the settings under the WAL section of the config file. The
WAL files themselves will end up in the pg_xlog directory under your data
directory.
When PGSQL is installed, it creates two default tablespaces, postgres and
pg_default. When a user creates a new database he is prompted to select a
tablespace from the above two. Do I need to create a different tablespace
before creating a database, in order to avoid playing with these default
tablespaces?
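(Strictly, the built-in tablespaces are pg_default and pg_global; user
databases normally land in pg_default.) A minimal sketch of creating a
separate tablespace and a database inside it, with illustrative path and
names:

    -- the directory must already exist and be owned by the service account
    CREATE TABLESPACE app_space LOCATION 'D:/pgdata/app_space';

    -- objects in this database default to the new tablespace
    CREATE DATABASE appdb TABLESPACE app_space;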
On 8/4/07, Decibel! <[EMAIL PROTECTED]> wrote:
> On Wed, Aug 01, 2007 at 08:22:24AM -0400, Kenneth Downs wrote:
> > Ubuntu: What Windows wants to be, what the Mac is w/o the and with
> > more control. I just replaced a hard drive in a dell machine. A
> ...
> > Great graphics, great package m
I am thinking about how to model tags for a web site in the database.
(Each site entry is assigned a number of tags and a match can then later
be performed between these tags and tags entered by the visitor).
Is tsearch2 the way to do it? Each site entry could then store its tags
in a tsvector
Isn't plainto_tsquery() what you're looking for?
Yes, if plainto_tsquery() is safe enough for user-entered search
keywords, then it looks like it is. I didn't know it existed.
Thanks
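A minimal sketch of the suggestion, following the contrib-tsearch2 style used
elsewhere in this thread (the 'default' configuration and the tags_vector
column are assumptions):

    -- plainto_tsquery() sanitizes raw visitor input into an AND-ed tsquery
    SELECT entry_id
    FROM site_entries
    WHERE tags_vector @@ plainto_tsquery('default', 'words the visitor typed');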
On 8/3/07, Guy Fraser <[EMAIL PROTECTED]> wrote:
> On Wed, 2007-08-01 at 07:14 +0530, Merlin Moncure wrote:
> > On 8/1/07, Decibel! <[EMAIL PROTECTED]> wrote:
> > > David Fetter and I just came up with these, perhaps others will find
> > > them useful:
> > >
> > > CREATE OR REPLACE FUNCTION array_t
How hard would it be to do the following two statements (or their
equivalent) using a client?
pg_dump -i -U u -s -x -O -f file dbname
psql -U u -f file dbname
I'm trying to help out the rails people. Currently they have "rake"
scripts which call pg_dump, dropdb, createdb, and psql. It would be
nicer if this could be done via a database connection.
Owing to a problem with the way Access and a couple of other programs
I've found handle text columns in PG as something they call 'memo', I
want to change all of the text columns in my database to varchar.
There are about 600 text columns all up, so I'm looking for a quick way of
doing this.
I
On Aug 5, 2007, at 16:31 , Perry Smith wrote:
I'm trying to help out the rails people. Currently they have
"rake" scripts which call pg_dump, dropdb, createdb, and psql. It
would be nicer if this could be done via a database connection.
Why would this be nicer? What's the advantage? pg_d
Paul Lambert wrote:
Owing to a problem with the way Access and a couple of other programs
I've found handle text columns in PG as something they call 'memo', I
want to change all of the text columns in my database to varchar.
There are about 600 text columns all up, so I'm looking for a quick way
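One possible approach (a sketch, assuming all the columns live in the public
schema): have the catalog generate the ALTER statements, then run the output
back through psql:

    -- emits one ALTER statement per text column
    SELECT 'ALTER TABLE ' || quote_ident(table_name)
        || ' ALTER COLUMN ' || quote_ident(column_name)
        || ' TYPE varchar;'
    FROM information_schema.columns
    WHERE table_schema = 'public'
      AND data_type = 'text';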
On Aug 5, 2007, at 6:07 PM, Michael Glaesemann wrote:
On Aug 5, 2007, at 16:31 , Perry Smith wrote:
I'm trying to help out the rails people. Currently they have
"rake" scripts which call pg_dump, dropdb, createdb, and psql. It
would be nicer if this could be done via a database connection.
Perry Smith <[EMAIL PROTECTED]> writes:
> I find it odd that you are resistant to the idea. To me, the
> advantages are clear if it can be done without a tremendous amount of
> work.
Well, it can't. pg_dump is an enormously complicated and frequently
changed bit of code, and so you really re
I installed 8.2 on ubuntu 6.10 using the backport a while ago. In pgadmin III
I still can't connect to the server settings. When I open server status in
pgadmin on the ubuntu box, it first messages me that server instrumentation
functions are missing and then presents what may be a subset of the s
Are those geographical copies, or geographical subsets? Multi-master
replication is hard with postgres (read: probably not going to happen) but
if you can partition your data up so that you have one writer for a
subset of records, that could work quite well. Especially if you have rich
clients
You may import the adminpack.sql that comes with the -contrib-8.2 package
to create those administrative functions.
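A sketch of loading it with psql, mirroring the psql usage elsewhere in this
thread (the install path is an assumption; connect as a superuser to the
maintenance database pgAdmin uses, normally "postgres"):

    psql -U postgres -d postgres -f "C:/Program Files/PostgreSQL/8.2/share/contrib/adminpack.sql"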
Regards
novnov wrote:
I installed 8.2 on ubuntu 6.10 using the backport a while ago. In pgadmin III
I still can't connect to the server settings. When I open server status in
pgadmin
On Fri, 2007-08-03 at 07:55 -0600, Josh Tolley wrote:
> On 8/3/07, Ow Mun Heng <[EMAIL PROTECTED]> wrote:
> > Can anyone shed some light on this. I just would like to know if queries
> > for raw data (not aggregates) are expected to take a long time.
> > Running times between 30 minutes - 2 hours for l
Single-master subsets weren't the plan, but they're doable. Each
geographic region should have a local read copy.
Ben wrote:
> Are those geographical copies, or geographical subsets? Multi-master
> replication is hard with postgres (read: probably not going to happen)
> but if you can partition your data up so that you have one writer for a
> subset of records, that could work quite well.
On Aug 5, 2007, at 7:37 PM, Tom Lane wrote:
PS: *please* see if you can get them to take out the usage of
pg_dump's
-i switch. Having code invoke that blindly borders on criminal
negligence.
http://dev.rubyonrails.org/ticket/9198
Thank you again.
Take care,
Perry Smith ( [EMAIL PROTECTED]
I get the following error when tying to index a large text field:
dev=# CREATE INDEX idx_fulltext_articel_block ON fulltext_article
(article_block);
ERROR: index row requires 8724 bytes, maximum size is 8191
I am actually creating a GIN index on another field but I need to
index the original "text" field to perform exact phrase matches.
Ow Mun Heng <[EMAIL PROTECTED]> writes:
> Queries involve 11 joins (of 5 main tables, of which 3 are large, 2 are
> fact tables)
It would be a good idea to increase join_collapse_limit to more than 11.
This will increase the planning time, but the investment will likely be
worthwhile for such a l
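A sketch of what that looks like in a session (12 just clears the 11 joins
mentioned above; tune to taste):

    -- let the planner consider the full join order for up to 12 relations
    SET join_collapse_limit = 12;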
Benjamin Arai <[EMAIL PROTECTED]> writes:
> I am actually creating a GIN index on another field but I need to
> index the original "text" field to perform exact phrase matches.
Why do you think an extra index will be useful for that? Especially
a btree index?
regards
To clarify, I am using the GIN index to perform the tsearch2 queries,
which works fine. The problem is that you cannot use the tsearch2
fields for exact match searches, since they are vectors.
The tsearch2 webpage says to do something like
SELECT intindex, strTopic FROM tblmessages ...
Benjamin Arai <[EMAIL PROTECTED]> writes:
> The tsearch2 webpage says to do something like
> SELECT intindex, strTopic FROM tblmessages
> WHERE idxfti @@ to_tsquery('default', 'gettysburg & address')
> AND strMessage ~* '.*men are created equal.*';
I will have to look into Slony... not much into DB admin yet...
OK, so:
1. Let's stick with complete, identical duplicates in each region.
2. What open-source/free RDBMSs are there that can do this better?
Hmm, I got an answer from Ben but didn't see my post on the list...
B
I think the same way :). But as I mentioned in my first message, I'm not a C
guy. So I wonder: why doesn't Postgres store hashes for all queries and skip
the parsing step when it isn't needed, the way Oracle does?
On 8/3/07, Sibte Abbas <[EMAIL PROTECTED]> wrote:
>
> On 8/3/07, Sergey Moroz <[EMAIL PROTECTED]> wrote:
I've been going back and forth on the best way to model this.
A user can have one to many bill and ship addresses.
An order can have one bill address and one to many ship addresses
Let's assume I have a single address table, with an address_type
column that is a foreign key to the address_types table.
"Sergey Moroz" <[EMAIL PROTECTED]> writes:
> So I wonder why doesn't postgres store hashes for all queries and
> misses parsing step if not needed like Oracle does?
Because we think it's a waste of cycles, not to mention storage.
regards, tom lane
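For the record, the explicit way to skip repeated parse/plan work in Postgres
is a prepared statement; a minimal sketch (the table and column are
hypothetical):

    -- parsed and planned once per session
    PREPARE get_msg(int) AS
        SELECT * FROM messages WHERE id = $1;

    -- executed repeatedly without re-parsing
    EXECUTE get_msg(42);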
I am trying to create a function to handle inserts into a partitioned
table in my database.
The function looks like:
AS $$
DECLARE
    v_sql TEXT;
BEGIN
    v_sql := 'INSERT INTO messagecatalog_' ||
        to_char(NEW.timestampfield,'') ||
        '(field1, field2) values ('
        || NEW.field1 || ','
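The snippet above is cut off; a complete sketch of that style of trigger
function follows. The 'YYYY' partition suffix, the text-typed columns fed to
quote_literal(), and returning NULL to suppress the parent-table insert are
all assumptions about the intent:

    CREATE OR REPLACE FUNCTION messagecatalog_insert() RETURNS trigger AS $$
    DECLARE
        v_sql TEXT;
    BEGIN
        -- build an INSERT aimed at the child table for this row's period
        v_sql := 'INSERT INTO messagecatalog_'
            || to_char(NEW.timestampfield, 'YYYY')   -- assumed suffix format
            || ' (field1, field2) VALUES ('
            || quote_literal(NEW.field1) || ', '
            || quote_literal(NEW.field2) || ')';
        EXECUTE v_sql;
        RETURN NULL;  -- the row has been routed; skip the parent table
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER messagecatalog_partition
        BEFORE INSERT ON messagecatalog
        FOR EACH ROW EXECUTE PROCEDURE messagecatalog_insert();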
snacktime wrote:
I've been going back and forth on the best way to model this.
A user can have one to many bill and ship addresses.
I would think that one billing address and many shipping addresses would be
most likely, e.g. head office for billing and shipping to each branch office.
An order can have one bill address and one to many ship addresses
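A sketch of one way to lay that out (all names are illustrative; the stub
users table just makes the example stand alone):

    CREATE TABLE users (user_id serial PRIMARY KEY);

    CREATE TABLE address_types (
        address_type varchar PRIMARY KEY      -- e.g. 'billing', 'shipping'
    );

    CREATE TABLE addresses (
        address_id   serial  PRIMARY KEY,
        user_id      integer NOT NULL REFERENCES users,
        address_type varchar NOT NULL REFERENCES address_types,
        line1        varchar NOT NULL,
        city         varchar NOT NULL
    );

    -- exactly one billing address per order...
    CREATE TABLE orders (
        order_id        serial  PRIMARY KEY,
        bill_address_id integer NOT NULL REFERENCES addresses
    );

    -- ...but an order can ship to several addresses
    CREATE TABLE order_ship_addresses (
        order_id   integer NOT NULL REFERENCES orders,
        address_id integer NOT NULL REFERENCES addresses,
        PRIMARY KEY (order_id, address_id)
    );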