Hi.
Trying to implement some simple digest routines via UDFs and for whatever
reason I get: ERROR: invalid memory alloc request size 4294967293 on
PG_RETURN_TEXT_P(); any ideas what the issue is exactly?
The code is verified as working when pulled out of a UDF and put into a
normal C program. Th
>
> hash = (text *)palloc(hlen+1);
??? palloc(hlen + VARHDRSZ)
>
> memset(VARDATA(hash), 0, hlen);
> SHA512(VARDATA(plain), hlen, VARDATA(hash));
++ VARATT_SIZEP(hash) = VARHDRSZ + hlen;
> PG_RETURN_TEXT_P(hash);
> }
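For what it's worth, the SQL-level declaration for such a function would look
roughly like this (a sketch; the library path and symbol name are placeholders,
not the poster's actual ones):

  CREATE OR REPLACE FUNCTION sha512(text) RETURNS text
      AS '$libdir/digest', 'sha512'
      LANGUAGE C STRICT;

Declaring it STRICT is what makes explicit NULL checks in the C code
unnecessary: the server returns NULL itself without ever calling the function.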
On Mon, Aug 13, 2007 at 03:17:36PM +, jf wrote:
> Hi.
>
> Trying to implement some simple digest routines via UDFs and for whatever
> reason I get: ERROR: invalid memory alloc request size 4294967293 on
> PG_RETURN_TEXT_P(); any ideas what the issue is exactly?
A few points about your code:
Yup, an uninitialized variable makes sense; that appears to have worked, I
appreciate it. As for the checks for NULL, I wasn't sure whether they were
necessary, so I opted for the safe route. Thanks for clearing that up for me.
The output from the function is binary; does it matter if I use
The part of the php code for the connection is
$dbconn=pg_connect( "dbname=lumbribase host=localhost port=5432
user=postgres password=$PG_PASS" );
if ( ! $dbconn ) {
echo "Error connecting to the database ! " ;
printf("%s", pg_errormessage( $dbconn ) );
exit(); }
This code works on z
On Mon, Aug 13, 2007 at 03:17:36PM +, jf wrote:
> Trying to implement some simple digest routines via UDFs and for whatever
> reason I get: ERROR: invalid memory alloc request size 4294967293 on
> PG_RETURN_TEXT_P(); any ideas what the issue is exactly?
just checking - you do realize that it'
My understanding was that pgcrypto is not compiled by default?
Furthermore, finding next to no documentation about it online, and needing
only one function rather than an entire crypto API, I decided it would make
the most sense to just code the ten lines to do it myself.
On Mon, 13 Aug
On 8/13/07, John Coulthard <[EMAIL PROTECTED]> wrote:
> The part of the php code for the connection is
>
>
> $dbconn=pg_connect( "dbname=lumbribase host=localhost port=5432
> user=postgres password=$PG_PASS" );
> if ( ! $dbconn ) {
> echo "Error connecting to the database ! " ;
> printf("%s
On Mon, Aug 13, 2007 at 05:07:50PM +, jf wrote:
> my understanding was that pgcrypto was not compiled by default?
Your function is also not compiled by default, but pgcrypto is at the
very least available by default (in the sources, or in precompiled
packages).
> Furthermore, finding next to no d
.ep wrote:
> Hi,
>
> I'm moving from the mysql camp and quite liking things like functions
> and such, but a lot of my functionality depends on queries such as
>
>SELECT id, name, start_date
>FROM customer
>WHERE name LIKE 'eri%';
>
> These kinds of queries are super fast in MySQL be
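The usual explanation is locale: in a non-C locale, an ordinary btree index
cannot be used for LIKE prefix matches. The standard remedy is an index built
with the pattern-matching operator class; a sketch (use text_pattern_ops if
the column is text rather than varchar):

  CREATE INDEX customer_name_like_idx
      ON customer (name varchar_pattern_ops);

  -- prefix searches such as this can then use the index:
  SELECT id, name, start_date FROM customer WHERE name LIKE 'eri%';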
The part of the php code for the connection is
$dbconn=pg_connect( "dbname=lumbribase host=localhost port=5432
user=postgres password=$PG_PASS" );
if ( ! $dbconn ) {
echo "Error connecting to the database ! " ;
printf("%s", pg_errormessage( $dbconn ) );
exit(); }
This code works on z
Hi
I am getting the following error while running queries such as "vacuum
analyze TABLE", even on small tables with a piddly 35,000 rows!
The error message:
--
ERROR: out of memory
DETAIL: Failed on request of size 67108860.
--
My postgresql.conf is below. I am on a Dual Core server with 4GB o
>
> your function is also not compiled on default. but pgcrypto is at the
> very least available by default (in sources, or in precompiled
> packages).
Yes, I understand, and trust me, I'm typically not a 'not invented here'
type, but google for pgcrypto and you don't find much out there other than
On Mon, Aug 13, 2007 at 07:22:26PM +, jf wrote:
> understood, I appreciate the suggestion. In addition I couldn't find any
> documentation that told me how to install the functions in pgcrypto (do I
> need to CREATE FUNCTION for every function in there?), the README mentions
> a .sql file thats
On Mon, Aug 13, 2007 at 07:36:41PM +, jf wrote:
> > usually it's in: /share/postgresql/contrib/pgcrypto.sql
> > in the database you want to use pgcrypto functions, you simply run this
> > sql (as superuser), and that's all.
> theory# pwd
> /home/jf/postgresql-8.2.4
> theory# cd share
> bash: cd
> share of *installed* system. if you compiled with --prefix=/usr/local,
> then it would be /usr/local/share/postgresql/...
Ah, you have to forgive me, I'm in the States and it's quite late ;]
> of course - you dont need all (on the other hand - i strongly suggest
> that you get some familiarity wi
On Mon, Aug 13, 2007 at 07:36:41PM +, jf wrote:
> > usually it's in: /share/postgresql/contrib/pgcrypto.sql
> > in the database you want to use pgcrypto functions, you simply run this
> > sql (as superuser), and that's all.
>
> theory# pwd
> /home/jf/postgresql-8.2.4
> theory# cd share
> bash:
> 1. as for installing contrib - usually when you install postgresql from
> prebuilt binary packages, there is also contrib package. for example on
> ubuntu it is postgresql-contrib-8.2, so simple: apt-get install
> postgresql-contrib-8.2 will install it.
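Putting the pieces together, loading and smoke-testing pgcrypto is only two
steps; a sketch, assuming --prefix=/usr/local and PostgreSQL 8.2:

  -- run in psql, connected to the target database as a superuser:
  \i /usr/local/share/postgresql/contrib/pgcrypto.sql

  -- quick test:
  SELECT encode(digest('hello', 'sha512'), 'hex');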
I actually built from source, and just did
"Lim Berger" <[EMAIL PROTECTED]> writes:
> Hi
>
> I am getting the following error while running queries such as "vacuum
> analyze TABLE", even on small tables with a piddly 35,000 rows!
>
> The error message:
> --
> ERROR: out of memory
> DETAIL: Failed on request of size 67108860.
> --
>
> My
Hello,
I'm doing some select statements on my table that look like:
SELECT * FROM table WHERE prod_num = '1234567' AND transaction_timestamp >
'2007-07-18 21:29:57' OR prod_num > '1234567' ORDER BY prod_num ASC,
transaction_timestamp ASC LIMIT 1;
I've added two indices one for prod_num and anot
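Two things worth checking here. AND binds tighter than OR, so parentheses make
the (probably intended) grouping explicit; and a single composite index
matching the ORDER BY usually beats two separate ones. A sketch:

  SELECT * FROM table
  WHERE (prod_num = '1234567'
         AND transaction_timestamp > '2007-07-18 21:29:57')
     OR prod_num > '1234567'
  ORDER BY prod_num ASC, transaction_timestamp ASC
  LIMIT 1;

  CREATE INDEX table_prod_ts_idx
      ON table (prod_num, transaction_timestamp);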
On 8/13/07, Gregory Stark <[EMAIL PROTECTED]> wrote:
> "Lim Berger" <[EMAIL PROTECTED]> writes:
>
> > Hi
> >
> > I am getting the following error while running queries such as "vacuum
> > analyze TABLE", even on small tables with a piddly 35,000 rows!
> >
> > The error message:
> > --
> > ERROR: o
In response to Alan J Batsford <[EMAIL PROTECTED]>:
>
> Hello,
>
> I'm doing some select statements on my table that look like:
> SELECT * FROM table WHERE prod_num = '1234567' AND transaction_timestamp >
> '2007-07-18 21:29:57' OR prod_num > '1234567' ORDER BY prod_num ASC,
> transaction_timesta
"Lim Berger" <[EMAIL PROTECTED]> writes:
> On 8/13/07, Gregory Stark <[EMAIL PROTECTED]> wrote:
>> "Lim Berger" <[EMAIL PROTECTED]> writes:
>>
>> > Hi
>> >
>> > I am getting the following error while running queries such as "vacuum
>> > analyze TABLE", even on small tables with a piddly 35,000 row
From: "Lim Berger" <[EMAIL PROTECTED]>
To: "John Coulthard" <[EMAIL PROTECTED]>
CC: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Unable to connect to PostgreSQL server via PHP
Date: Mon, 13 Aug 2007 18:51:37 +0800
On 8/13/07, John Coulthard <[EMAIL PROTECTED]> wrote:
> The part of the
[EMAIL PROTECTED] wrote on 08/13/2007 08:36:23 AM:
> While it's difficult to be sure, I'm guessing you have either a hardware
> problem, or a tuning problem -- but I don't think your indexes are a
problem.
>
> Keep in mind that once PostgreSQL has determined which rows to return, it
> has to actua
"Lim Berger" <[EMAIL PROTECTED]> writes:
> ERROR: out of memory
> DETAIL: Failed on request of size 67108860.
Apparently, this number:
> maintenance_work_mem = 64MB
is more than your system can actually support. Which is a bit odd for
any modern-day machine. I suspect the postmaster is being
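A quick way to test that theory is to lower the setting for a single session
and retry the command; a sketch (table name hypothetical):

  SET maintenance_work_mem = '16MB';
  VACUUM ANALYZE mytable;

If that succeeds, the limit is coming from the environment the postmaster was
started under rather than from PostgreSQL itself.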
"Alan J Batsford" <[EMAIL PROTECTED]> writes:
> Thanks for the help, after your email I went to capture some analyze output
> for you and when I did I figured to bump up the statistics on the two
> columns of interest from 100 to 1000. Now all statements return close to
> instantly.
Note that 100
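For reference, the statistics bump described above is done per column; a
sketch using the column names from the earlier query:

  ALTER TABLE table ALTER COLUMN prod_num SET STATISTICS 1000;
  ALTER TABLE table ALTER COLUMN transaction_timestamp SET STATISTICS 1000;
  ANALYZE table;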
"John Coulthard" <[EMAIL PROTECTED]> writes:
> That's not my problem though this is "could not connect to server:
> Permission denied" If it's denying permission I must have the permissions
> set wrong but where to I start looking for them?
"Permission denied" is a pretty strange error for a TC
On Aug 13, 2007, at 0:35 , Naz Gassiep wrote:
As clearly stated in the documentation
http://www.postgresql.org/docs/8.2/interactive/datatype-datetime.html#DATATYPE-TIMEZONES
Perhaps I'm thick, but I don't find that particular page to be
clear on this at all.
Had you read the document
Hello,
I am trying to get this query to work with no luck...
select * from foobar where ts between now() and now() - interval '5 days'
btw, the column ts is defined as:
ts timestamp with time zone NOT NULL DEFAULT now()
No rows are returned, but I know there are at least 100 rows that should
Well that was easy enough... Thanks!
Michael Fuhr <[EMAIL PROTECTED]> wrote: On Mon, Aug 13, 2007 at 07:16:30AM
-0700, Jeff Lanzarotta wrote:
> select * from foobar where ts between now() and now() - interval '5 days'
>
> btw, the column ts is defined as:
>
> ts timestamp with time zone NOT NUL
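The catch is that BETWEEN expects the lower bound first: BETWEEN now() AND
now() - interval '5 days' describes an empty range. A sketch of the intended
query:

  SELECT * FROM foobar
  WHERE ts BETWEEN now() - interval '5 days' AND now();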
On Mon, 13 Aug 2007 09:44:26 -0500
Erik Jones <[EMAIL PROTECTED]> wrote:
> I'll agree with Scott on this one. (Not that I can recall
> specifically ever disagreeing with him before...). Unless you
> > know all of the potential caveats associated with php's persistent
> postgres connections and ha
"Terri Reid" <[EMAIL PROTECTED]> writes:
> I have data that is being updated in a table that I need to export to a flat
> file via a database trigger on insert or update. The user performing the
> update will not be a superuser. I've tried to use COPY TO, but that doesn't
> work for non-superusers.
On Aug 13, 2007, at 9:35 AM, Scott Marlowe wrote:
On 8/13/07, Naz Gassiep <[EMAIL PROTECTED]> wrote:
Hi,
Does the connection pooling feature of PHP cause the persistent
connections to keep the properties between accesses? E.g., if a user
takes a connection, sets a timezone to it using SET
On Mon, Aug 13, 2007 at 07:16:30AM -0700, Jeff Lanzarotta wrote:
> select * from foobar where ts between now() and now() - interval '5 days'
>
> btw, the column ts is defined as:
>
> ts timestamp with time zone NOT NULL DEFAULT now()
>
> No rows are returned, but I know there are at least 100 ro
On 8/13/07, Naz Gassiep <[EMAIL PROTECTED]> wrote:
> Hi,
> Does the connection pooling feature of PHP cause the persistent
> connections to keep the properties between accesses? E.g., if a user
> takes a connection, sets a timezone to it using SET TIMEZONE, will the
> next user who happens to t
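If lingering session state on a reused persistent connection is the worry,
one guard (a sketch) is to clear it explicitly when a request takes the
connection over:

  RESET ALL;  -- returns timezone and other run-time parameters to defaults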
I have data that is being updated in a table that I need to export to a flat
file via a database trigger on insert or update. The user performing the
update will not be a superuser. I've tried to use COPY TO, but that doesn't
work for non-superusers. Is there some other functionality that can write
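One common workaround, sketched below with hypothetical names, is to wrap the
COPY in a function owned by a superuser and marked SECURITY DEFINER, so the
trigger can invoke it without the calling user being a superuser (the dollar
quoting needs 8.0 or later):

  CREATE OR REPLACE FUNCTION export_flat_file() RETURNS void AS $$
  BEGIN
      COPY mytable TO '/tmp/mytable.txt';  -- server-side path
  END;
  $$ LANGUAGE plpgsql SECURITY DEFINER;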
On 8/12/07, novice <[EMAIL PROTECTED]> wrote:
> I resolved it by doing this - is there another more efficient method?
> And yes, the text file I am working with doesn't have any TABs
>
> 5162 OK SM 06/12/04 06:12
>
> substr("data", 30, 2)||'-'||substr("data", 27,
> 2)||'-20'||substr("da
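If the file layout is truly fixed-width, it may be tidier to let to_timestamp
do the parsing in one call; a sketch, with the substring position and format
purely illustrative since the real offsets are not shown here:

  SELECT to_timestamp(substr("data", 24, 14), 'MM/DD/YY HH24:MI');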
Tom Lane wrote:
> Oleg Bartunov <[EMAIL PROTECTED]> writes:
>> On Wed, 8 Aug 2007, cluster wrote:
>>> Does anyone know where I can request an OR-version of plainto_tsquery()?
>
>> plainto_tsquery expects plain text, use to_tsquery for boolean operators.
>
> Are either of these definitions really
In response to "Lim Berger" <[EMAIL PROTECTED]>:
> On 8/13/07, Tom Lane <[EMAIL PROTECTED]> wrote:
> > "Lim Berger" <[EMAIL PROTECTED]> writes:
> > > ERROR: out of memory
> > > DETAIL: Failed on request of size 67108860.
> >
> > Apparently, this number:
> >
> > > maintenance_work_mem = 64MB
> >
On 8/13/07, Tom Lane <[EMAIL PROTECTED]> wrote:
> "Lim Berger" <[EMAIL PROTECTED]> writes:
> > ERROR: out of memory
> > DETAIL: Failed on request of size 67108860.
>
> Apparently, this number:
>
> > maintenance_work_mem = 64MB
>
> is more than your system can actually support. Which is a bit odd
My responses below yours. Thanks so much for bearing with me..
On 8/13/07, Gregory Stark <[EMAIL PROTECTED]> wrote:
> "Lim Berger" <[EMAIL PROTECTED]> writes:
>
> > On 8/13/07, Gregory Stark <[EMAIL PROTECTED]> wrote:
> >> "Lim Berger" <[EMAIL PROTECTED]> writes:
> >>
> >> > Hi
> >> >
> >> > I a
On Aug 13, 2007, at 9:50 AM, Tom Lane wrote:
"Terri Reid" <[EMAIL PROTECTED]> writes:
I have data that is being updated in a table that I need to export
to a flat
file via a database trigger on insert or update. The user
performing the
update will not be a superuser. I've tried to use COPY
From: Tom Lane <[EMAIL PROTECTED]>
To: "John Coulthard" <[EMAIL PROTECTED]>
CC: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Unable to connect to PostgreSQL server via PHP Date:
Mon, 13 Aug 2007 10:09:15 -0400
"John Coulthard" <[EMAIL PROTECTED]> writes:
> That's not my problem thoug
On 8/13/07, Bill Moran <[EMAIL PROTECTED]> wrote:
> In response to "Lim Berger" <[EMAIL PROTECTED]>:
>
> > On 8/13/07, Tom Lane <[EMAIL PROTECTED]> wrote:
> > > "Lim Berger" <[EMAIL PROTECTED]> writes:
> > > > ERROR: out of memory
> > > > DETAIL: Failed on request of size 67108860.
> > >
> > > Ap
On 8/13/07, Erik Jones <[EMAIL PROTECTED]> wrote:
>
> On Aug 13, 2007, at 9:50 AM, Tom Lane wrote:
>
> > "Terri Reid" <[EMAIL PROTECTED]> writes:
> >> I have data that is being updated in a table that I need to export
> >> to a flat
> >> file via a database trigger on insert or update. The user
> >
Le lundi 13 août 2007, Erik Jones a écrit :
> If you need something to pool connections, look at pgpool.
Or better yet, pgbouncer. At least for my values of better :)
https://developer.skype.com/SkypeGarage/DbProjects/PgBouncer
http://pgfoundry.org/projects/pgbouncer/
Hope this helps,
--
dim
On Aug 13, 2007, at 9:25 , Coarr, Matt wrote:
Is there some way that I can treat a two dimensional array as a table
that can be referenced in the from clause?
I know of no way off hand, and if so, not easily. This is a pretty
clear sign that you shouldn't be using arrays in this context and
Alejandro Torras wrote:
Hi,
Is there some way to put values in a INSERT statement
without taking care of apostrophes?
In example:
INSERT INTO persons VALUES ('Harry', 'O'Callaghan');
^^^
I think that it can be used some kind of len
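In standard SQL the apostrophe inside a string literal is escaped by doubling
it:

  INSERT INTO persons VALUES ('Harry', 'O''Callaghan');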
Is there some way that I can treat a two dimensional array as a table
that can be referenced in the from clause?
Thanks,
Matt
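For what it's worth, a two-dimensional array can be unrolled with
generate_series; a sketch with a literal 2x2 array and hard-coded bounds
(computing the bounds from the array itself inside FROM would need
LATERAL-style references, which are not available here):

  SELECT i, j, a.arr[i][j]
  FROM (SELECT '{{1,2},{3,4}}'::int[][] AS arr) a,
       generate_series(1, 2) AS i,
       generate_series(1, 2) AS j;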
On 8/13/07, Scott Marlowe <[EMAIL PROTECTED]> wrote:
> On 8/13/07, Josh Trutwin <[EMAIL PROTECTED]> wrote:
> > On Mon, 13 Aug 2007 09:44:26 -0500
> > Erik Jones <[EMAIL PROTECTED]> wrote:
> >
> > > I'll agree with Scott on this one. (Not that I can recall
> > > specifically ever disagreeing with h
--- Michael Glaesemann <[EMAIL PROTECTED]> wrote:
> > Is there some way that I can treat a two dimensional array as a table
> > that can be referenced in the from clause?
>
> I know of no way off hand, and if so, not easily. This is a pretty
> clear sign that you shouldn't be using arrays in thi
Can someone give me a simple example of the way in which I might be able to
call a stored procedure or query using pgagent? I have never scripted a
postgres .sql etc to run via cron or pgagent and an example of the syntax
would be helpful. In pgagent terms, the Step is set up as SQL, and it's the
> 6: The reason for connection pooling is primarily twofold. One is
> to allow very fast connections to your database when doing lots of
> small things where connection time will cost too much. The other is
> to prevent your database from having lots of stale / idle connections
> that cause i
On 8/14/07, Alvaro Herrera <[EMAIL PROTECTED]> wrote:
> Lim Berger escribió:
>
> > Thanks. I did "su postgres" and ran the ulimit command again. All
> > values are the same, except for "open files" which is double in the
> > case of this user (instead of 4096, it is 8192). Not sure what I can
> > g
I would like to make a request for this feature to be added to postgres.
Postgres is a really great database. I'm still very much a novice at using
postgres but in general, it's been a very good experience and I plan to use
it as often as I can. The community is very helpful.
My projects tend t
Lim Berger escribió:
> Thanks. I did "su postgres" and ran the ulimit command again. All
> values are the same, except for "open files" which is double in the
> case of this user (instead of 4096, it is 8192). Not sure what I can
> gather from that?
Try "su - postgres" instead (which will run the
On Aug 13, 2007, at 11:21 , Osvaldo Rosario Kussama wrote:
Dollar-Quoted String Constants?
http://www.postgresql.org/docs/8.2/interactive/sql-syntax-lexical.html#SQL-SYNTAX-CONSTANTS
INSERT INTO persons VALUES ($$Harry$$, $$O'Callaghan$$);
Do not interpolate values into SQL literals, rega
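At the SQL level, keeping values out of the statement text entirely can be
sketched with a prepared statement:

  PREPARE ins_person(text, text) AS
      INSERT INTO persons VALUES ($1, $2);
  EXECUTE ins_person('Harry', 'O''Callaghan');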
On 8/13/07, Josh Trutwin <[EMAIL PROTECTED]> wrote:
> On Mon, 13 Aug 2007 09:44:26 -0500
> Erik Jones <[EMAIL PROTECTED]> wrote:
>
> > I'll agree with Scott on this one. (Not that I can recall
> > specifically ever disagreeing with him before...). Unless you
> > know all of the potential caveats
On 8/14/07, Lim Berger <[EMAIL PROTECTED]> wrote:
> On 8/14/07, Alvaro Herrera <[EMAIL PROTECTED]> wrote:
> > Lim Berger escribió:
> >
> > > Thanks. I did "su postgres" and ran the ulimit command again. All
> > > values are the same, except for "open files" which is double in the
> > > case of this
I am trying to run pg_dump on the database with the corrupt table, and
try to restore the database. I also tried to vacuumdb the database and
get the same error.
I get the following error.
pg_dump
pg_dump: query to obtain list of data types failed: PANIC: read of clog
file 100
"Lim Berger" <[EMAIL PROTECTED]> writes:
> Wow, you are right! The "su - postgres" showed up with wildly
> different values! Most notably, the "max user processes" is only 20!!
> Whereas in the regular user stuff it was above 14000. Would you know
> how to change this in a CentOS Linux machine? Whe
Mary Ellen Fitzpatrick <[EMAIL PROTECTED]> writes:
> I am trying to run pg_dump on the database with the corrupt table, and
> try to restore the database. I also tried to vacuumdb the database and
> get the same error.
> I get the following error.
> pg_dump
> pg_dump: query to o
We recently installed and populated a new postgres 7.3 server, which was
quickly abused with a good 12 hours of 115-degree heat. Now, we see ~1000
rows missing from a single table, and given our application, a delete of
those rows seems a very remote possibility. Is there some database analogy
On Aug 13, 2007, at 12:50 , Ben wrote:
We recently installed and populated a new postgres 7.3 server,
Why would you deploy a new server with 7.3? Current release is 8.2.
The 7.3 branch is no longer even updated.
Michael Glaesemann
grzm seespotcode net
Ben wrote:
> We recently installed and populated a new postgres 7.3 server, which was
> quickly abused with a good 12 hours of 115-degree heat. Now, we see
> ~1000 rows missing from a single table, and given our application, a
> delete of those rows se
On 8/14/07, Sander Steffann <[EMAIL PROTECTED]> wrote:
> Hi Lim,
>
> > "Lim Berger" <[EMAIL PROTECTED]> writes:
> >> Wow, you are right! The "su - postgres" showed up with wildly
> >> different values! Most notably, the "max user processes" is only 20!!
> >> Whereas in the regular user stuff it was
Hi Lim,
"Lim Berger" <[EMAIL PROTECTED]> writes:
Wow, you are right! The "su - postgres" showed up with wildly
different values! Most notably, the "max user processes" is only 20!!
Whereas in the regular user stuff it was above 14000. Would you know
how to change this in a CentOS Linux machine?
On Mon, 13 Aug 2007, Michael Glaesemann wrote:
On Aug 13, 2007, at 12:50 , Ben wrote:
We recently installed and populated a new postgres 7.3 server,
Why would you deploy a new server with 7.3? Current release is 8.2. The 7.3
branch is no longer even updated.
Because our product uses a de
Hi Lim,
It might also be in /etc/security/limits.conf.
Thanks. I see these two lines in that file:
postgres  soft  nofile  8192
postgres  hard  nofile  8192
How should I change these values? I am not sure how this reflects the
"ulimit" options.
Those are limits to the allow
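If the goal is to raise the process limit ("max user processes") rather than
open files, the corresponding limits.conf lines use the same four-column
format; a sketch with illustrative values:

  postgres  soft  nproc  4096
  postgres  hard  nproc  4096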
[EMAIL PROTECTED] (Ben) writes:
> We recently installed and populated a new postgres 7.3 server, which
> was quickly abused with a good 12 hours of 115-degree heat. Now, we
> see ~1000 rows missing from a single table, and given our application,
> a delete of those rows seems a very remote possibil
> --- Original Message ---
> From: novnov <[EMAIL PROTECTED]>
> To: pgsql-general@postgresql.org
> Sent: 13/08/07, 17:36:12
> Subject: [GENERAL] Running a stored procedure via pgagent, need an example
>
> Can someone give me a simple example of the way in which I might be able to
> call
On 13/08/07, novnov <[EMAIL PROTECTED]> wrote:
>
>
> I would like to make a request for this feature to be added to postgres.
>
> Postgres is a really great database. I'm still very much a novice at using
> postgres but in general, it's been a very good experience and I plan to
> use
> it as often
I'm confused. Shouldn't this index be used?
(It's running on v7.4.7)
airburst=> \d stats2
Table "public.stats2"
Column | Type | Modifiers
---+---+---
lab | character varying(30) |
name | character varying(50) |
stat
On 8/13/07, Ralph Smith <[EMAIL PROTECTED]> wrote:
> I'm confused. Shouldn't this index be used?
> (It's running on v7.4.7)
>
> airburst=> \d stats2
> Table "public.stats2"
> Column | Type | Modifiers
> ---+---+---
> lab
On Aug 13, 2007, at 4:39 PM, Scott Marlowe wrote:
On 8/13/07, Ralph Smith <[EMAIL PROTECTED]> wrote:
I'm confused. Shouldn't this index be used?
(It's running on v7.4.7)
airburst=> \d stats2
Table "public.stats2"
Column | Type | Modifiers
---+---
On 8/13/07, Ralph Smith <[EMAIL PROTECTED]> wrote:
>
>
>
> On Aug 13, 2007, at 4:39 PM, Scott Marlowe wrote:
>
> On 8/13/07, Ralph Smith <[EMAIL PROTECTED]> wrote:
> I'm confused. Shouldn't this index be used?
> (It's running on v7.4.7)
>
> airburst=> \d stats2
> Table "public.stats2"
Oh, and you can use the sledgehammer of tuning by using the
set enable_xxx = off
settings for the planner. It's not a normal way to tune most queries,
but it certainly can let you know if the problem is using the index or
not.
psql mydb
\timing
select count(*) from table where field > 12345;
set enable_seqscan = off;
select count(*) from table where field > 12345;
Oh yeah, go read this:
http://www.westnet.com/~gsmith/content/postgresql/pg-5minute.htm
Note that you shouldn't set your shared buffers quite as high as in
that guide, since you're running 7.4 which isn't quite as good at
using shared_buffers
Hi,
I've googled and yahooed and most of the performance tweaks suggested
cover SELECT speed, some cover COPY speed with things like turning
fsync off and such. But I still have not found how to improve regular
INSERT speed on Postgresql.
I have a table in MySQL with three compound indexes. I hav
On 8/14/07, Sander Steffann <[EMAIL PROTECTED]> wrote:
> Hi Lim,
>
> >> It might also be in /etc/security/limits.conf.
> >
> > Thanks. I see these two lines in that file:
> >
> > postgres  soft  nofile  8192
> > postgres  hard  nofile  8192
> >
> > How should I change these values?
Hi,
Writing a script to pull data from SQL server into a flat-file (or just
piped in directly to PG using Perl DBI)
Just wondering if the copy command is able to do a replace if there are
existing data in the Db already. (This is usually in the case of updates
to specific rows and there be a time
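COPY itself cannot replace existing rows. The usual pattern, sketched here
with hypothetical table and column names, is to COPY into a staging table and
then merge:

  CREATE TEMP TABLE staging (LIKE target_table);
  COPY staging FROM '/tmp/data.txt';  -- or \copy from the client

  UPDATE target_table t SET val = s.val
  FROM staging s WHERE t.id = s.id;

  INSERT INTO target_table
  SELECT * FROM staging s
  WHERE NOT EXISTS (SELECT 1 FROM target_table t WHERE t.id = s.id);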
Lim Berger wrote:
INSERTing into MySQL takes 0.0001 seconds per insert query.
INSERTing into PgSQL takes 0.871 seconds per (much smaller) insert query.
What can I do to improve this performance? What could be going wrong
to elicit such poor insertion performance from Postgresql?
Thanks.
On 8/14/07, Lim Berger <[EMAIL PROTECTED]> wrote:
> INSERTing into MySQL takes 0.0001 seconds per insert query.
> INSERTing into PgSQL takes 0.871 seconds per (much smaller) insert query.
>
> What can I do to improve this performance? What could be going wrong
> to elicit such poor insertion perfo
On Mon, 13 Aug 2007, Scott Marlowe wrote:
We can look at how big your shared_buffers are, your work_mem, and a
few others in postgresql.conf.
That's going to be sort_mem, not work_mem, with 7.4
--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD
On 8/14/07, Andrej Ricnik-Bay <[EMAIL PROTECTED]> wrote:
> On 8/14/07, Lim Berger <[EMAIL PROTECTED]> wrote:
>
> > INSERTing into MySQL takes 0.0001 seconds per insert query.
> > INSERTing into PgSQL takes 0.871 seconds per (much smaller) insert query.
> >
> > What can I do to improve this performa
"Lim Berger" <[EMAIL PROTECTED]> writes:
> I think I have located the problem. It is in "/etc/profile" where some
> ulimits are added. This is the offending text, I think:
> #* cPanel Added Limit Protections -- BEGIN
> #unlimit so we can run the whoami
> ulimit -n 4096 -u 143
"Lim Berger" <[EMAIL PROTECTED]> writes:
> I have a table in MySQL with three compound indexes. I have only three
> columns from this table also in PostgreSQL, which serves as a cache of
> sorts for fast queries, and this table has only ONE main index on the
> primary key!
> INSERTing into MySQL t
Lim Berger wrote:
On 8/14/07, Andrej Ricnik-Bay <[EMAIL PROTECTED]> wrote:
On 8/14/07, Lim Berger <[EMAIL PROTECTED]> wrote:
INSERTing into MySQL takes 0.0001 seconds per insert query.
INSERTing into PgSQL takes 0.871 seconds per (much smaller) insert query.
What can I do to improve t
On 8/14/07, Tom Lane <[EMAIL PROTECTED]> wrote:
> "Lim Berger" <[EMAIL PROTECTED]> writes:
> > I have a table in MySQL with three compound indexes. I have only three
> > columns from this table also in PostgreSQL, which serves as a cache of
> > sorts for fast queries, and this table has only ONE ma
On 8/14/07, Lim Berger <[EMAIL PROTECTED]> wrote:
> On 8/14/07, Tom Lane <[EMAIL PROTECTED]> wrote:
> > "Lim Berger" <[EMAIL PROTECTED]> writes:
> > > I have a table in MySQL with three compound indexes. I have only three
> > > columns from this table also in PostgreSQL, which serves as a cache of
"Lim Berger" <[EMAIL PROTECTED]> writes:
>> Thanks Tom. But on a newly minted table, sure, the performance would
>> be great. My table now has about 3 million rows (both in MySQL and
>> PG).
Well, INSERT speed is really not very dependent on table size (else I'd
have inserted a few zillion rows be
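In autocommit mode every INSERT is its own transaction, and each commit must
wait for an fsync, so per-statement overhead is the usual first thing to rule
out. Batching many inserts per transaction is the standard fix; a sketch with
a hypothetical table:

  BEGIN;
  INSERT INTO cache_table VALUES (1, 'a');
  INSERT INTO cache_table VALUES (2, 'b');
  -- ...hundreds more rows...
  COMMIT;

One fsync per batch instead of one per row usually brings the per-row cost
down dramatically.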