On 2008-11-20 12:56, WireSpot wrote:
On Thu, Nov 20, 2008 at 10:56, Albe Laurenz <[EMAIL PROTECTED]> wrote:
Do you still need the old prepared statement?
If not, you can simple DEALLOCATE it and then try the PREPARE again.
Yes, I'd like to keep the old statements, that's part of the perks --
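For reference, the suggested sequence looks like this (statement name and
query are hypothetical):

    DEALLOCATE my_stmt;
    PREPARE my_stmt (integer) AS
        SELECT * FROM my_table WHERE id = $1;
    EXECUTE my_stmt(42);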
On 2008-11-17 23:26, Blazej wrote:
CREATE TABLESPACE ram_space LOCATION '/mnt/ram0/pgspace';
And then:
CREATE TABLE (...) TABLESPACE ram_space;
and table is in memory.
And if your server loses power or hangs, your database will not start
after a reboot.
I'd rather start a new, t
On 2008-11-05 08:13, Christian Schröder wrote:
> If I have 5 disks available, how should I use them to get best
> performance without the risk of severe data loss?
What percentage of your usage is writes? What do you need the most:
high throughput or minimal latency?
> How important is data in
On 2008-10-31 09:01, Christian Schröder wrote:
> We will now move the database to a raid5
> (which should be faster than the raid1)
This is wrong. RAID5 is slower than RAID1.
You should go for RAID1+0 for fast and reliable storage. Or RAID0 for
even faster but unreliable.
Regards
Tometzky
--
.
On 2008-10-17 12:13, Mikkel Høgh wrote:
>> You're supposed to use "Reply to all" if you want to reply to the
>> list.
>
> Well, I think the most common use case for a mailing list is to reply
> back to the list, isn't that the whole point?
That is the point of having a "Reply to all" button. With "r
On 2008-10-16 23:17, Mikkel Høgh wrote:
> P.S.: Why are e-mails from this list not sent with a Reply-To: header
> of the lists e-mail-address?
Because it is dangerous - it is too easy to send to the list when you
really mean to reply to one person. Most e-mail programs have two buttons
for replying:
ordinar
On 2008-10-16 16:40, Stephen Frost wrote:
There is an issue report with lengthy discussion on drupal.org:
http://drupal.org/node/196862
And a proposed patch:
I don't see 'limit 1' anywhere in that patch..
Sorry - I haven't checked it - I have checked only the 6.x version
http://drupal.org/files/
On 2008-10-16 10:34, Mikkel Hogh wrote:
> It's not only to avoid one query, but to avoid one query every time
> drupal_lookup_path() is called (which is every time the system builds
> a link, which can be dozens of times on a page).
Oh, $count is static. My bad. Using count for testing for emp
On 2008-10-14 23:57, Mikkel Hogh wrote:
> one is the dreaded "SELECT COUNT(pid) FROM
> url_alias" which takes PostgreSQL a whopping 70.65ms out of the
> 115.74ms total for 87 queries.
This is stupid.
The Drupal code looks like this:
// Use $count to avoid looking up paths in subsequent call
On 2008-09-24 18:01, William Garrison wrote:
> Then I commented-out the constraints from the schema. Then I loaded
> the data.
Don't forget to restore these constraints back after loading data.
> I made a schema only dump, then a data only dump with --inserts. (...)
> Unfortunately, the INSERT
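A minimal sketch of the drop-and-restore cycle (constraint and table
names are hypothetical); note that ADD CONSTRAINT revalidates all
existing rows, so any load errors surface at this step:

    ALTER TABLE orders DROP CONSTRAINT orders_customer_fk;
    -- ... bulk load ...
    ALTER TABLE orders ADD CONSTRAINT orders_customer_fk
        FOREIGN KEY (customer_id) REFERENCES customers (id);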
On 2008-09-23 19:03, William Garrison wrote:
> I have several .SQL files created from pg_dump, and I find that when I
> feed them into psql that I get tons of foreign key errors because the
> INSERT statements in the dump are not in the correct order. After
> reading the docs, mailing lists, an
On 2008-09-12 15:52, Jack Orenstein wrote:
Sorry, I misspoke. I have an index, but preferred doing a scan without
the index in this case.
Why?
The only reason I can think of is that you'd like to avoid disk seeking.
But you get at most 1 row every 30 seconds, so disk latency (only several
mill
On 2008-09-11 17:21, Jack Orenstein wrote:
> The id > last_id trick doesn't work for me -- I don't have an index that
> would
> support it efficiently.
You do not have a primary key? If you do, then you have an index, as one
is created automatically.
Watch this:
test=> create temporary table tes
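The demonstration is truncated above; a sketch of the idea it introduces
(hypothetical table):

    create temporary table test (id serial primary key, payload text);
    -- the primary key creates a unique index automatically,
    -- so this pattern is efficient even on a big table:
    select * from test where id > 1000 order by id limit 1;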
On 2008-09-11 18:03, Jack Orenstein wrote:
>> When you do:
>> result = query("select something from sometable")
>> then all rows of a result will be cached by a client program.
>
> I am very sure this is not happening. Maybe some rows are being
> cached (specifying fetch size), but certainly not
On 2008-09-11 17:21, Jack Orenstein wrote:
>> Then do the processing in separate transactions like this (in pseudocode):
> The id > last_id trick doesn't work for me -- I don't have an index that
> would
> support it efficiently.
>
> Turning on autocommit seems to work, I'm just not clear on th
On 2008-09-10 16:46, Jack Orenstein wrote:
> Application requirement. We need to do something for each row retrieved from
> BIG
> and the something is expensive. We do the scan slowly (30 second sleep inside
> the loop) to amortize the cost.
Then do the processing in separate transactions like
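A sketch of that pattern, assuming an indexed id column (names are
hypothetical; the loop control is pseudocode in comments):

    -- last_id := 0; repeat until no row is returned:
    BEGIN;
    SELECT * FROM big WHERE id > last_id ORDER BY id LIMIT 1;
    -- ... expensive processing of the returned row ...
    COMMIT;
    -- last_id := id of the row just processed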
On 2008-09-09 09:30, Tomasz Ostrowski wrote:
> On 2008-09-08 11:46, Ivan Zolotukhin wrote:
>
>> vmstat 5
>> procs    memory        page                      disk   faults       cpu
>> r b w    avm    fre    flt  re pi po  fr  sr    am0    in  sy  cs    us sy
>>
On 2008-09-08 11:46, Ivan Zolotukhin wrote:
> vmstat 5
> procs    memory        page                      disk   faults        cpu
> r  b w    avm     fre    flt re pi po   fr sr   am0    in    sy    cs us sy id
> 28 77 0 2328792 793424 34813  0  0  0 4351  0    41  1913 21230 20337 14 86  0
On 2008-09-09 05:07, Kashmir wrote:
> querying data from august 1st - august 25:
> Total runtime: 26762.999 ms
>
> now querying data from august 1st - august 29:
> Total runtime: 20123.584 ms
>
> Any idea why these analyses look so different? The only
> query-difference is the 2nd timestamp
On 2008-08-26 13:39, Phoenix Kiula wrote:
> Ok done. Slony is installed. Now what?
http://www.slony.info/documentation/versionupgrade.html
I don't think anybody will guide you step by step. Either read the
documentation and do it yourself, or hire an expert:
http://www.postgresql.org/support/professional_su
On 2008-08-21 11:09, Ow Mun Heng wrote:
> I want to find out if there's a method to change this
> select to_char('1 day 09:18:42.37996'::interval,'HH24:MI:SS')
> to something like
> 24+9 (hours) = 33:18:42 instead of it being returned as 09:18:42
That's because 1 day doesn't always have 24 hours, becau
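One way to get a total-hours rendering is to go through the interval's
epoch value; a sketch that deliberately hard-codes the 1 day = 24 hours
assumption the poster wants:

    SELECT (floor(extract(epoch FROM '1 day 09:18:42.37996'::interval)
                  / 3600))::int
           || ':' || to_char('1 day 09:18:42.37996'::interval, 'MI:SS');
    -- returns 33:18:42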
On 2008-08-21 11:12, Shashwat_Nigam wrote:
> I did the same, but the problem remains: the user is still able to
> access all the databases on the current server. The order is the same,
> and if I comment out the second line (all access), then the error is
> there.
Did you issue "pg_ctl reload" after changing the configuration?
On 2008-08-21 10:53, Shashwat_Nigam wrote:
> host    all              all     127.0.0.1/32    md5
> host    HMRI_database    hmri    127.0.0.1/32    md5
> In the above case hmri is the user and HMRI_database is the database.
> But still, when the user logs in as 'hmri', it is able to access all
> the databases.
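pg_hba.conf is matched top-down and the first matching line wins, so the
broad "all all" line lets hmri in everywhere. A sketch of a stricter
layout (the intended policy is an assumption), followed by "pg_ctl
reload":

    host    HMRI_database    hmri    127.0.0.1/32    md5
    host    all              hmri    127.0.0.1/32    reject
    host    all              all     127.0.0.1/32    md5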
On 2008-08-21 05:29, Andrew Maclean wrote:
> Is char(1) one byte in size?
No. It will also depend on database encoding, etc.
I think you should go with smallint, which is exactly 2 bytes. You'll
have 15 bits of storage (16 if you want to implement the special case of
the minus sign).
IMHO the onl
On 2008-08-20 12:13, Roshni Mani wrote:
> Do sequence commands like nextval(), setval(), currval() exist when
> using libpqxx?
You just run an ordinary query:
select nextval('sequence_name')
Regards
Tometzky
--
...although Eating Honey was a very good thing to do, there was a
moment just
On 2008-08-14 20:03, [EMAIL PROTECTED] wrote:
> I'd like to ask you about some experience in managing huge databases which
> store mostly binary files.
Do you mean BYTEA or large objects? Both have pros and cons.
> We're developing a system which is likely to grow up to terabytes in
> some years
On 2008-08-13 10:06, Bartels, Eric wrote:
> After a while the whole system memory is eaten up and every
> operation becomes very slow.
Show us:
- output of "free" command, when server gets slow.
- output of "ps v --sort=-size | head -10"
- output of "ps auxww | grep postgres"
- in terminal st
On 2008-08-07 11:49, [EMAIL PROTECTED] wrote:
> Maybe I misworded my problem. I am looking for a way to install a
> psql client (without a server) from the sources.
That is not the basic problem - it is your solution to a problem, which
may have another, better solution.
> I have
On 2008-08-06 17:38, [EMAIL PROTECTED] wrote:
> What is the best way to get a psql client on a Linux system without
> needing a server?
On Fedora/RedHat/CentOS or other yum based distribution:
# yum install postgresql
For Debian/Ubuntu or other dpkg based distribution I think it will be:
# dp
On 2008-08-03 12:12, Sim Zacks wrote:
> SELECT m.message_idnr,k.messageblk
> FROM dbmail_messageblks k
> JOIN dbmail_physmessage p ON k.physmessage_id = p.id
> JOIN dbmail_messages m ON p.id = m.physmessage_id
> WHERE
> mailbox_idnr = 8
> AND status IN (0,1 )
> AND k.is_header = '0'
> GROUP BY
On 2008-07-29 23:47, John Cheng wrote:
> Slony-I replication is also a viable choice for backups.
No, it's not. Redundancy is not a substitute for backups. Slony will not
help you if you do by mistake "delete from important_table" - as a copy
will also have all rows deleted.
For backups I'd recom
On 2008-07-23 21:24, Keaton Adams wrote:
> We run into a problem when an equality search on a timestamp column
> returns no records, even when there are records to return, as in this
> example:
>
> mxl=# select * from mxl_scheduler_queue where status_modified =
> '2008-07-03 16:55:06.44695-06';
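A hedged workaround when equality on a fractional-second timestamp misses
(for example with float-based timestamp storage): compare against a
narrow range instead of a single value:

    SELECT * FROM mxl_scheduler_queue
    WHERE status_modified >= '2008-07-03 16:55:06.44695-06'
      AND status_modified <  '2008-07-03 16:55:06.44696-06';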
On 2008-06-24 16:30, David Siebert wrote:
> Which distro is best for running a Postgres server?
I'd go for CentOS 5.2 (or better RedHat Enterprise Linux 5.2, if you can
afford it, as $349/year for basic support can save you several hours of
problem solving).
But by default CentOS5/RHEL5 have Postg
On 2008-06-06 07:25, Brent Wood wrote:
> Would "real" tables in a tablespace defined on a ramdisk meet this
> need?
Bad idea. This would mean an unusable database after a restart.
> You could also mount a tablespace on a physical disk with a
> filesystem which has delayed/deferred writes to disk
On 2008-06-04 23:18, Jason Long wrote:
> I have a query that takes 2.5 sec if I run it from a freshly restored
> dump. If I run a full vacuum on the database it then takes 30 seconds.
1. Don't run "vacuum full"; run plain "vacuum". If you do run "vacuum
full", then run "reindex" afterwards.
2. Run "an
On 2008-05-12 20:49, Justin wrote:
> We take (List Price * discount Percent) * Number of Pieces = net
> price.
This is wrong. You should do it in Excel as:
( price * amount ) * discount
as otherwise any small error in the representation of price*discount
would be multiplied by a usually large amount.
I'd do
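The amplification is easy to demonstrate; a sketch with hypothetical
numbers, contrasting float with exact numeric arithmetic:

    SELECT (0.10::float8 * 3) = 0.3;             -- false: the float product carries an error
    SELECT ((0.10::float8 * 3) - 0.3) * 1000000; -- that tiny error, amplified by the multiplier
    SELECT (0.10::numeric * 3) = 0.3;            -- true: numeric is exact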
On 2008-04-23 17:22, Terry Lee Tucker wrote:
> On Wednesday 23 April 2008 11:14, Gabor Siklos wrote:
>> The advantage of the first method would be that I would not have to wait
>> for pg_dump (it takes quite long on our 60G+ database) and would just be
>> able to configure the backup agent to monit
On 2008-04-21 00:19, [EMAIL PROTECTED] wrote:
> I have a very large table with about 60 million rows. I sometimes
> need to do a simple update to ALL rows that resets a status-flag to
> zero.
> UPDATE table SET flag=0;
First optimization:
UPDATE table SET flag=0 where flag!=0;
Second optimi
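The preview cuts off before the second optimization. One common
continuation - an assumption, not necessarily what the post went on to
say - is to update in batches, so each transaction and the dead tuples it
leaves behind stay small:

    -- hypothetical id ranges; vacuum between batches:
    UPDATE table SET flag=0 WHERE flag!=0 AND id BETWEEN 1 AND 1000000;
    VACUUM table;
    -- ... repeat for the next id range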
On 2008-04-22 23:46, David Wilson wrote:
> Upping the segments to 50, timeout to 30m and completion target to
> 0.9 has improved average copy time to between 2 and 10 seconds, which
> is definitely an improvement.
I'd up them to 128 (or even 256) and set completion target back to 0.5.
But make su
On 2008-04-11 08:53, J Ottery wrote:
> I install PostgreSQL on a client PC and put the data files on a
> networked drive (instead of the local drive). Postgres as user and
> localport. This works well.
This is not the way it is meant to work, and it can eat your data.
> Now I install postgresSQ
On 04/04/2008 05:43 PM, mark wrote:
> stats with new settings are below..
These stats look good to me.
> but even with this, update queries sometimes take more than a couple
> of seconds...
> checkpoint_completion_target = 0.8
It looks like this is set too high for your workload. If y
On 2008-04-02 09:30, mark wrote:
> Based on what Tomasz suggested a day ago, I had changed settings to
>   checkpoint_segments = 16
>   checkpoint_timeout = 20min
>   checkpoint_completion_target = 0.8
> but I still do get statements that take over 2 or 3 seconds to execute
> someti
On 2008-04-01 09:44, mark wrote:
> I already am running 8.3.1 [ i mentioned in subject].
Missed that, sorry.
I'd first try to set the following on a write-busy 8.3 server to smooth
checkpoint spikes:
checkpoint_segments = 16
checkpoint_timeout = 20min
checkpoint_completio
On 2008-03-31 21:16, mark wrote:
> is the query I am running , and it takes over 10 seconds to complete
> this query...
> update users set number_recieved=number_recieved+1 where
> uid=738889333;
Every time or only sometimes?
If it is sometimes, then I think this query is waiting for a checkpoint
On 2008-03-28 13:27, Teemu Juntunen wrote:
> I am developing an ERP for a customer and I have made a few tables
> using a row number as part of the key. When deleting a line from such
> a table, I have made an after-delete trigger which fixes the row
> numbers with the following command:
> UPDATE orderro
On 2008-03-28 02:00, Andrej Ricnik-Bay wrote:
> On 28/03/2008, Dawid Kuroczko <[EMAIL PROTECTED]> wrote:
>> Agree, except I would prefer "pg" instead of "pgc".
>
> And it's been taken for about 35 years by a Unix command called "page".
> From its man-page:
> pg - browse pagewise through text files
On 2008-03-25 09:51, sathiya psql wrote:
> Table size is nearly 700 MB, and my machine has 1 GB + 256 MB of RAM.
> I created the tablespace in RAM, and then created this table in that
> RAM.
>
> So now everything is in RAM; if I do a count(*) on this table it
> returns 327600 in 3 seconds,
Stumo wrote:
> Each client retrieves an unprocessed record from the server, marks
> that it's processing that record, does some processing (this will take
> some time), and adds some extra data to the server based on the
> processing. [snip]
>
> However, if a client disconnects (which, because of
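A hedged sketch of the claim step being described (names hypothetical;
written for the 8.x era, so no SKIP LOCKED). A claimed_at timestamp also
answers the disconnect worry: a cron job can reset stale claims:

    UPDATE jobs SET status = 'processing', claimed_at = now()
    WHERE id = (SELECT min(id) FROM jobs WHERE status = 'unprocessed');

    -- janitor for clients that vanished mid-processing:
    UPDATE jobs SET status = 'unprocessed'
    WHERE status = 'processing' AND claimed_at < now() - interval '1 hour';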
On Fri, 22 Feb 2008, Daniel Verite wrote:
> You could use the function below that breaks a bytea value into
> pieces of 'chunksize' length and returns them as a set of rows.
When you do this, make sure that this column has EXTERNAL storage:
alter column [column_name] set storage external
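Spelled out (table/column names hypothetical): with EXTERNAL storage the
value is kept uncompressed in TOAST, so substring() can fetch just the
requested slice instead of detoasting the whole value:

    ALTER TABLE files ALTER COLUMN data SET STORAGE EXTERNAL;
    SELECT substring(data FROM 1 FOR 65536) FROM files WHERE id = 1;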
On Wed, 23 Jan 2008, Tom Hart wrote:
>>> pg_restore: [archiver (db)] COPY failed: ERROR: invalid byte sequence
>>> for encoding "UTF8": 0xc52f
>>
>> Try editing your dump-file and change the line which reads "SET
>> client_encoding = 'SQL_ASCII';" to "SET client_encoding = 'LATIN1';"
>
> I trie
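The suggested edit as a one-liner (assumes a plain-SQL dump named
dump.sql):

    sed -i "s/client_encoding = 'SQL_ASCII'/client_encoding = 'LATIN1'/" dump.sql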
On Sun, 06 Jan 2008, Chuck wrote:
> Sort order, and specifically setting LC_COLLATE and LC_CTYPE was less of a
> concern. (I still need to read and learn more.)
It should be, as it is not only sort order. Try for example this:
select upper('ąŧäɣ');
- these are
Polish a_ogonek, t stroke, g
On Tue, 01 Jan 2008, Chuck wrote:
> I'm not sure how to "make sure automatic updates are turned on" as
> Tometzky recommended. Is that a yum setting?
You need to install and configure "yum-updatesd" to perform automatic
updates for you. I don't use it so I don't know exactly how to do
this, but I
On Mon, 31 Dec 2007, Chuck wrote:
> I asked the web host to make sure that Postgres is installed. They did this
> by running the following command:
> yum -y install postgre postgre-server postgre-devel
I think the proper command was:
yum -y install postgresql postgresql-server postgresql-devel
On Fri, 28 Dec 2007, Anthony B. Colson wrote:
> Is it advisable or ok to install v8.2.5 to a Red Hat 9
> configuration?
No.
It is not OK to run any RedHat Linux release, as they are completely
unsupported and insecure.
You should as soon as possible install RedHat Enterprise 5 (or a
clone like CentOS 5)
On Mon, 10 Dec 2007, A. Ozen Akyurek wrote:
> We have a large table (about 9,000,000 rows and total size is about 2.8 GB)
> which is exported to a binary file.
How was it exported? With "COPY tablename TO 'filename' WITH BINARY"?
"The BINARY key word causes all data to be stored/read as binary
format rather than as text."
On Sat, 01 Dec 2007, Martin Marques wrote:
>> Also check whether the disks have write-caching turned off - they should.
>> On Linux the command is:
>> # hdparm -W /dev/sda
>> /dev/sda:
>> write-caching = 0 (off)
>
> Uh? That command as written lacks an argument for -W (0/1). From
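For the record, both usages are valid on Linux: -W with no value reports
the current setting (as the output quoted above shows), while -W0/-W1
sets it:

    hdparm -W /dev/sda     # show the current write-caching state
    hdparm -W0 /dev/sda    # turn write-caching off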
On Sat, 01 Dec 2007, rihad wrote:
> fsync = on;
> synchronous_commit = off;
> full_page_writes = off;
> Thanks for any insights or clarifying missed points.
Also check whether the disks have write-caching turned off - they should.
On Linux the command is:
# hdparm -W /dev/sda
/dev/sda
On Fri, 30 Nov 2007, Madison Kelly wrote:
> If there a ./configure switch (or config file/command line switch) to
> tell postgresql to put the lock file '.s.PGSQL..lock' and socket
> '.s.PGSQL.' in a different directory?
There's no ./configure option, so you should change
DEFAULT_PGSOCKET_DIR
On Tue, 27 Nov 2007, Erik Jones wrote:
> I'm just wondering what is considered the general wisdom on config setting
> for large pg_restore runs.
I think the first thing you can do is to set "fsync=off" temporarily.
But do remember to turn it back on when you're done restoring.
Regards
Tometzky
--
On Thu, 11 Oct 2007, Carlos H. Reimer wrote:
> the problem happens with many machines where our Visual Basic
> application is running. After debugging the application we discovered
> that the problem was always with "select *" statements.
I'd try locally:
$ psql -c 'select * from table where pk=
On Mon, 01 Oct 2007, Mike Charnoky wrote:
> I altered the table in question, with "set statistics 100" on the
> timestamp column, then ran analyze. This seemed to help somewhat. Now,
> queries don't seem to hang, but it still takes a long time to do the count:
> * "where evtime between '2007-09
On Tue, 10 Jul 2007, Alexander Staubo wrote:
> My take: Stick with TOAST unless you need fast random access. TOAST
> is faster, more consistently supported (e.g., in Slony), and easier
> to work with.
Toasted bytea columns also have some other disadvantages:
1.
It is impossible to create its valu
On Thu, 14 Jun 2007, Lza wrote:
> Does anyone have any suggestions on how to store historical
> information in databases?
-- I have tables:
create table history_columns (
column_id smallint primary key,
column_name varchar(63) not null,
table_name varchar(63) not null,
On Wed, 14 Feb 2007, Peter Eisentraut wrote:
> By installing functions or operators with appropriate signatures in
> other schemas, users can then redirect any function or operator
> call in the function code to implementations of their choice
> [snip]
> The proper fix for this problem is to inser
On Thu, 18 Jan 2007, [EMAIL PROTECTED] wrote:
> When I installed Fedora Core 5 Linux to my x86 Desktop machine, it
> automatically included PostgreSQL. What is the proper way to
> uninstall?
Because you are probably a new user of Fedora, you should use the
graphical interface called "pirut" - if sho
On Thu, 21 Dec 2006, Bruno Wolff III wrote:
> > But I need an ability to change passwords.
>
> But do you have to use the native passwords in Postgres? If you use
> ldap or pam, you could use passwords maintained somewhere else that
> had more strict requirements.
As I've written earlier I'd hav
On Thu, 21 Dec 2006, Steve Atkins wrote:
> >Is there any way to disallow self changing of password by ordinary
> >users? Or force password strength in any other way?
>
> If you check
> http://www.postgresql.org/docs/8.2/static/client-authentication.html
> you'll see a bunch of different ways to
I'm working on a project which needs to satisfy some legal
requirements for password strength. But any PostgreSQL user can do:
alter role [session_user] password 'foo';
Is there any way to disallow self changing of password by ordinary
users? Or force password strength in any other way?
R
On Tue, 19 Dec 2006, Tomasz Ostrowski wrote:
> - it will be possible to use, for example, a psql session left open
> to change the password of the logged-in user without knowledge of the
> previous password.
Forget it - I just found on
http://www.postgresql.org/docs/8.2/static/sql-alterrole.html
I need to implement an option to change passwords in my application,
which authenticates using MD5. This application, however, operates over
an unencrypted link, so it'll be a little bit of a challenge.
I've come up with several possible implementations.
On Mon, 06 Nov 2006, Roman Neuhauser wrote:
> # [EMAIL PROTECTED] / 2006-11-06 12:26:43 +0100:
> > On Sun, 05 Nov 2006, CSN wrote:
> >
> > > Anybody know of a script that dumps all databases into
> > > corresponding dump files
> >
> > I've written this one in bash:
> > [snip]
> > This would brea
On Sun, 05 Nov 2006, CSN wrote:
> Anybody know of a script that dumps all databases into
> corresponding dump files
I've written this one in bash:
#!/bin/bash
pg_dumpall -g > /var/lib/pgsql/backups/globals.sql
for dbname in `psql -qXtc "
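The script is truncated above; a hedged reconstruction of the idea (the
rest of the query and the dump options are assumptions):

    #!/bin/bash
    pg_dumpall -g > /var/lib/pgsql/backups/globals.sql
    for dbname in `psql -qXtc "select datname from pg_database where not datistemplate" postgres`
    do
        pg_dump -Fc -f "/var/lib/pgsql/backups/$dbname.dump" "$dbname"
    done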
On Thu, 19 Oct 2006, Peter Bauer wrote:
> A vacuum of the whole database is performed every 10 minutes with
> cron
What is the command you use? Maybe you are vacuuming with "-f" and
locking the whole table.
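For contrast, a sketch of the two variants:

    vacuumdb --all --analyze    # plain vacuum; safe for a cron job
    vacuumdb --all --full       # rewrites tables under exclusive locks - avoid in cron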
Regards
Tometzky
--
...although Eating Honey was a very good thing to do, there was a
mo
On Thu, 19 Oct 2006, Joe Kramer wrote:
> Question: is it okay to use timestamp as primary key, or there is
> possibility of collision? (Meaning two processes may INSERT into table
> within same millisecond.) It is a web application.
If your insert fails you can always try again after some random s
On Wed, 20 Sep 2006, Tom Lane wrote:
> Tomasz Ostrowski <[EMAIL PROTECTED]> writes:
> > Now for inserting data to a bytea we need 5*data_size in a buffer for
> > escaped data representation. 6*data_size if we do PQescapeByteaConn
> > in one chunk.
>
> If you
On Wed, 20 Sep 2006, [EMAIL PROTECTED] wrote:
> Any ideas how I could check the hardware?
1. memtest86 or memtest86+ at least 8 hours
2. CPU Burn-in
http://users.bigpond.net.au/cpuburn/ at least 8 hours
3. badblocks -s -v -t random /dev/sd%
WARNING: this will destroy your data!
4. smartctl -a
On Wed, 20 Sep 2006, Tomasz Ostrowski wrote:
> I've made some experiments with the attached program and found that
> for inserting 1kB of data this program needs about 5120kB of virtual
> memory; for 1MB of data, 10MB; and for 10MB, 55MB.
Forgot to attach it.
Regards
Tometzky
--
Best of prhn - n
I'd like to propose something which would provide an easy way for
memory-efficient insertion/retrieval of bytea data.
Currently, for inserting data into a bytea we need 5*data_size in a
buffer for the escaped data representation; 6*data_size if we do
PQescapeByteaConn in one chunk. I've made some experiments with
On Thu, 27 Jul 2006, Tom Lane wrote:
> Tomasz Ostrowski <[EMAIL PROTECTED]> writes:
> > * When somebody knows md5('secret_salt' || '5') he will be able to
> > easily compute
> > md5('secret_salt' || '50')
> > md
On Thu, 27 Jul 2006, Lexington Luthor wrote:
> >Session id's for web cannot be predictable because this will create a
> >security hole in application.
>
> Using a sequence does not mean it will be predictable.
> In the past I have used something similar to this:
>
> SELECT md5('secret_salt' || n
On Wed, 26 Jul 2006, Tom Lane wrote:
> "Antimon" <[EMAIL PROTECTED]> writes:
> > As the id field is primary key, it should generate a unique violation
> > if duplicate ids created, might be seen rarely but wanted to solve it
> > anyway.
>
> Why don't you just use a serial generator?
If I may int