On Sat, 1 Dec 2007, Joshua D. Drake wrote:
Changing wal_sync_method to open_sync with fsync=on isn't nearly as bad as
it sounds.
Just be warned that there's been one report that some Linux versions have
bugs that make open_sync problematic:
http://archives.postgresql.org/pgsql-hackers/2007-10/
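For reference, the settings under discussion live in postgresql.conf; a minimal
sketch follows (the fdatasync line reflects the usual Linux default, but check
your own platform and benchmark before changing anything, given the report above):
# postgresql.conf (sketch only)
fsync = on                     # keep crash safety on
wal_sync_method = open_sync    # candidate to benchmark against the default
#wal_sync_method = fdatasync   # usual Linux default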
Rainer Bauer wrote:
> Alvaro Herrera wrote:
> >It has been theorized that cluster would be faster in general if instead
> >of doing an indexscan we would instead use a seqscan + sort step. It
> >would be good to measure it.
>
> Could a reindex on the clustered index speed up the clustering (when
Hi,
I'm using a python script w/ the PyGreSQL library to insert 1 billion rows
into a database table for an experiment (performing a commit every 10K
rows). My script failed at about 170M rows with the following exception:
File "/usr/lib64/python2.3/site-packages/pgdb.py", line 163, in execute
Hello
I did not think about this. Anyway, I think it is quite unusable in my
environment. We're talking 50+ servers (and in the near future 100+ servers)
and 500+ users, each of whom will be granted access to a small number of
servers (like 2 or 3). So it is very easy to tell one server who is
allowed to
On 30 Nov, 16:12, [EMAIL PROTECTED] (Tom Lane) wrote:
>
[Quoting a re-telling of the myth of products living happily ever
after under the control of big companies]
> Anyone who thinks that's a reason to feel good is living on some other
> planet than I do. Consider that if the company *does* dec
On Thursday 29 November 2007 16:08, Jennifer Spencer wrote:
> I am looking for suggestions in setting up a large postgres database
> scenario. We are running a science project with a lot of data expected from
> the science instrument. If you have time to comment, any advice is most
> welcome!
>
> H
On 12/02/07 04:43, oruc çimen wrote:
> Hi,
> I have tested PostgreSQL in memory, but on a ramdisk it is not faster than
> on a hard disk.
> Why? If there is some option for PostgreSQL on a ramdisk, please help me.
> I need a very fast DB; if you know another way for
On 12/02/07 07:35, rokj wrote:
> Hi.
>
> For example, let's say I have a big (over 1 million) user
> "base". Then every user does a lot of inserting/updating of data.
> Would it be better to create different tables for inserting/updating for
> ev
Postgres User wrote:
The problem turned out to be related to my function..
Given this table:
CREATE TABLE "table2" (
"s_val" numeric(6,2),
"e_val" numeric(6,2)
) WITH OIDS;
I am curious what would happen if you wrote your procedure like this:
declare
retval numeric(6,2);
rec table
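For comparison, here is a minimal self-contained PL/pgSQL sketch of such a
procedure against table2; the function name, the loop, and the COALESCE
handling are illustrative assumptions, not the original poster's code:
CREATE OR REPLACE FUNCTION table2_total() RETURNS numeric AS $$
DECLARE
    retval numeric(6,2) := 0;
    rec    record;
BEGIN
    FOR rec IN SELECT s_val, e_val FROM table2 LOOP
        -- COALESCE keeps a NULL column from turning the whole total into NULL
        retval := retval + COALESCE(rec.e_val, 0) - COALESCE(rec.s_val, 0);
    END LOOP;
    RETURN retval;
END;
$$ LANGUAGE plpgsql;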
Lew wrote:
Postgres User wrote:
The problem turned out to be related to my function..
Given this table:
CREATE TABLE "table2" (
"s_val" numeric(6,2),
"e_val" numeric(6,2)
) WITH OIDS;
The following lines of code will set retval = NULL:
declare
retval numeric(6,2);
rec record;
b
Hi,
I have tested PostgreSQL in memory, but on a ramdisk it is not faster than
on a hard disk.
Why? If there is some option for PostgreSQL on a ramdisk, please help me.
I need a very fast DB; if you know another way to get a fast DB, please send
a mail to me.
Thank you.
(sorry for my bad English :( )
G.Oruc Cimen
Hi.
For example, let's say I have a big (over 1 million) user
"base". Then every user does a lot of inserting/updating of data.
Would it be better to create different tables for inserting/updating for
every user, or would it be better just to have one big table with all
the data (tables would hav
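For what it's worth, the usual starting point is the single-table design with
a per-user key, roughly as sketched below (table and column names are made up
for illustration):
CREATE TABLE user_data (
    user_id    integer     NOT NULL,
    created_at timestamptz NOT NULL DEFAULT now(),
    payload    text
);
-- one index keeps per-user lookups fast even with millions of users
CREATE INDEX user_data_user_id_idx ON user_data (user_id);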
> That works fine for me... are you sure log_line_prefix is line 482 in your
> config file? You might have inadvertently put a superfluous % somewhere
> else.
I use the config file below. I have added only some lines to the end of the
file; all other content is from the Windows installer-created conf
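For reference, a known-good line of the kind being debugged (the exact escapes
are only an example; a literal percent sign inside the value has to be written
as %%):
# postgresql.conf
log_line_prefix = '%t [%p] %u@%d '   # timestamp, backend PID, user@database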
Hi Usama
yes, currently I am reading a brochure about Continuent uni/cluster for
PostgreSQL. Looks quite interesting.
Another product sounds promising: Cybercluster from www.postgres.at
English Product Description:
http://www.postgresql.at/picts/download/dokumentation/documentation_cybercluster
On Nov 29, 5:09 pm, [EMAIL PROTECTED] (Martijn van Oosterhout) wrote:
> On Wed, Nov 28, 2007 at 02:00:58PM -0800, JonXP wrote:
> > I have a table that contains a nested set (using paths), and I'm
> > trying to create a trigger that updates the timestamps of a node and
> > all of its parents on a mo
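Without the original schema one can only sketch the materialized-path variant
of this; the table and column names below are assumptions, and the trigger is
limited to INSERT so that its own UPDATE of the ancestors cannot re-fire it:
-- assumed schema: nodes(id, path, mtime), with paths such as '1/', '1/4/', '1/4/9/'
CREATE OR REPLACE FUNCTION touch_ancestors() RETURNS trigger AS $$
BEGIN
    -- every ancestor's path is a prefix of the new node's path
    UPDATE nodes
       SET mtime = now()
     WHERE NEW.path LIKE path || '%'
       AND path <> NEW.path;
    RETURN NULL;   -- result is ignored for AFTER triggers
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER nodes_touch_ancestors
    AFTER INSERT ON nodes
    FOR EACH ROW EXECUTE PROCEDURE touch_ancestors();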
On Thu, Nov 29, 2007 at 02:44:25PM -0800, Gautam Sampathkumar wrote:
> Hi,
>
> I'm using a python script w/ the PyGreSQL library to insert 1 billion rows
> into a database table for an experiment (performing a commit every 10K
> rows). My script failed at about 170M rows with the following excepti
On Dec 2, 2007 7:40 AM, Dragan Zubac <[EMAIL PROTECTED]> wrote:
> Hello
>
> I have a stored procedure which does the billing stuff
> in our system. It works OK, but if I put it into
> production, where there are some 5-10 billing events per
> second, the whole database slows down. It won't even
> drop some
On Dec 2, 2007 6:35 PM, rokj <[EMAIL PROTECTED]> wrote:
> Hi.
>
> For example, let's say I have a big (over 1 million) user
> "base". Then every user does a lot of inserting/updating of data.
> Would it be better to create different tables for inserting/updating for
> every user or would it
On Sat, 1 Dec 2007, Tomasz Ostrowski wrote:
You can also use "hdparm -I" to check this - look for a "Write
caching" in "Commands/features" section. If it has a "*" in front
then it is enabled and dangerous.
Right; using -I works with most Linux hdparm versions:
# hdparm -V
hdparm v6.6
# hdpar
Alvaro Herrera wrote:
> Alvaro Herrera wrote:
>Probably most of the time is going into creating the new table then.
>
>If you are looking for a short-term solution to your problem, maybe the
>best is to follow the recommendation on CLUSTER ref page:
I've read that section before, but I have lots
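For anyone else following along, the recommendation on that reference page
amounts to rewriting the table with a sort instead of running CLUSTER itself;
a rough sketch with placeholder names (indexes, constraints, and grants have
to be recreated afterwards):
BEGIN;
CREATE TABLE mytable_sorted AS
    SELECT * FROM mytable ORDER BY cluster_col;  -- seqscan + sort, no index scan
DROP TABLE mytable;
ALTER TABLE mytable_sorted RENAME TO mytable;
COMMIT;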
On Friday 30 November 2007 2:31 am, Andrus wrote:
> > That works fine for me... are you sure log_line_prefix is line 482 in
> > your config file? You might have inadvertently put a superfluous %
> > somewhere else.
>
> I use the config file below. I have added only some lines to the end of
> file,
On Fri, 30 Nov 2007, Wolfgang Keller wrote:
it was impossible for me to find a similarly priced
(Linux/*BSD on Intel/AMD) equivalent to my PowerMac G5 over here at the
time when I bought it.
The problem from my perspective is the common complaint that Apple doesn't
ship an inexpensive desktop
On Thursday 29 November 2007 2:44 pm, Gautam Sampathkumar wrote:
> Hi,
>
> I'm using a python script w/ the PyGreSQL library to insert 1 billion rows
> into a database table for an experiment (performing a commit every 10K
> rows). My script failed at about 170M rows with the following exception:
>
"Andrus" <[EMAIL PROTECTED]> writes:
> I use the config file below. I have added only some lines to the end of the
> file; all other content is from the Windows installer-created conf file.
> If I remove the # sign in front of the last line (line 482) and reload the
> configuration, I get a syntax error
> in the log fil
Hello
Please find attached the stored procedure
(proc_uni.txt), as well as a description of the tables
involved in the calculations.
The idea of the procedure is to find the longest prefix match
for the destination number, try to find it in the
'billing' table for the particular user, find the price, and
insert the message into his
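The attachment isn't reproduced here, so only the general shape of a
longest-prefix-match lookup can be sketched; the table and column names below
are assumptions, not the poster's actual schema:
-- most specific prefix the destination number starts with, for one user
SELECT price
  FROM billing
 WHERE user_id = 42                          -- example user id
   AND '38163123456' LIKE prefix || '%'      -- example destination number
 ORDER BY length(prefix) DESC
 LIMIT 1;
-- note: this form cannot use an ordinary index on prefix, which is one
-- common reason such a lookup gets slow at several calls per second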
On 12/02/07 14:58, Usama Dar wrote:
> On Dec 2, 2007 6:35 PM, rokj <[EMAIL PROTECTED]> wrote:
>
>> Hi.
>>
>> For example, let's say I have a big (over 1 million) user
>> "base". Then every user does a lot of inserting/updating of data.
>> Woul
Ow Mun Henq wrote:
> Ingres is also an open source RDBMS (and data warehousing product), and I'm
> wondering if anyone here has anything to say about it. They also offer
> community editions, but I've not checked how much it differs from / offers
> compared to PG.
>
> I've tried to DL the community edition,
On Friday 30 November 2007 05:02:25, Aarni Ruuhimäki wrote:
>
> I followed the recent thread about 'replication in Postgres', but any
> info on experience in similar circumstances and pointers / comments /
> recommendations are still more than welcome.
Your problem is not one where replication wou
Hello
Here's the stored procedure itself, as well as the
related tables involved in its calculations.
The idea of the procedure is to find the longest prefix match
for the destination number, try to find it in the
'billing' table for the particular user, find the price, and
insert the message into the history and inqueue tabl
On Fri, Nov 30, 2007 at 10:30:47PM +0100, Pascal Cohen wrote:
> I am facing a probably very common problem. I searched the
> recent archives and could find many posts related to my issue, but I did
> not get exactly "the answer" to my question.
No, and I doubt you will.
> But I don't kn