You might unknowingly be asking a FAQ. See the end of this section:
http://www.postgresql.org/docs/faqs.FAQ.html#4.6
Depending on the version of postgres you're running and the data types
of the symbol and source columns, you might need to quote (or cast)
your constant data for symbol and source.
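For illustration, a sketch against the article table from this thread (on pre-8.0 servers the planner can ignore an index when the constant's type does not match the column's; quoting leaves the constant untyped, and the int8 cast shown is only an assumption about the schema):

    -- quoted constants are coerced to the column types at plan time:
    SELECT * FROM article WHERE symbol = '12646' AND source = '19';
    -- or cast explicitly (int8 is assumed here):
    SELECT * FROM article WHERE symbol = 12646::int8 AND source = 19::int8;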
The Bricolage development team is pleased to announce the release of
Bricolage 1.8.5. This maintenance release addresses a number of issues in
Bricolage 1.8.3 and adds several improvements (there was no announcement
for the short-lived 1.8.4 release). The SOAP server in par
I can also do the same thing without sub-queries - I messed about some more,
since I was keen to ensure backward compatibility with prior versions of
MySQL that have left/right joins but no subqueries ... and it's quite a bit
quicker still!
Query took 0.0037 sec - 1/10th of the sub-query time.
SELECT ch
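The query itself is truncated above; purely as an illustration of the kind of rewrite described (table and column names are hypothetical), a NOT IN sub-query can become a LEFT JOIN anti-join that older MySQL versions also accept:

    -- sub-query form (requires MySQL 4.1+):
    --   SELECT * FROM orders o
    --   WHERE o.id NOT IN (SELECT i.order_id FROM invoices i);
    -- join form, which pre-4.1 MySQL can run too:
    SELECT o.*
    FROM orders o
    LEFT JOIN invoices i ON i.order_id = o.id
    WHERE i.order_id IS NULL;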
On Fri, Mar 18, 2005 at 09:22:52AM -0500, Stephen Howie wrote:
> I have a java program that accepts print streams and inserts them into a
> spool table as a bytea. This fires a pl/pgsql trigger that passes the
> bytea to a pl/perl function to process the bytea and spits the results
> back as an array
On Fri, Mar 18, 2005 at 10:33:01AM -0500, Tom Lane wrote:
>
> I think it would be reasonable to back-patch a small fix to convert CRLF.
> The sort of rewrite Marco is considering, I wouldn't back-patch.
I just submitted a small patch to convert CRLF => LF, CR => LF.
--
Michael Fuhr
http://www.f
The first time I ran it, it took 5127.243 ms ... then I did a full vacuum,
and then:
SQL executed.
Total runtime: 33.707 ms
I am keen to have just the one lot of code in the scripts ... so I was
pleased when the identical SQL also worked on MySQL!
Your SQL-query has been executed successfully
Mark Rae <[EMAIL PROTECTED]> writes:
> Even with the numa support, which makes sure any memory allocated by
> malloc or the stack ends up local to the processor which originally
> called it, and then continues to schedule the process on that CPU,
> there is still the problem that all table access
Phil Daintree wrote:
I appreciated your help, Paul - many thanks for taking the time.
I view this as merely passing on all the pearls of wisdom I have gleaned
from this list. : )
Advice:
Are you running this inside a transaction? Do so, because if you don't,
then each UPDATE or INSERT or SELECT runs in its own transaction and pays
the commit overhead every time.
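A minimal sketch of that advice (the table names are placeholders):

    BEGIN;
    -- everything here shares one transaction, so the commit
    -- (and its disk flush) is paid once rather than per statement
    UPDATE accounts SET balance = balance - 100 WHERE id = 1;
    INSERT INTO audit_log (account_id, delta) VALUES (1, -100);
    COMMIT;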
On Fri, 18 Mar 2005 20:28:50 -0300, Fernando Lujan <[EMAIL PROTECTED]> wrote:
> Hi folks,
>
> I wanna know from you: how often must I run vacuum analyze on my db?
>
> Once per day, twice... once per hour and so on...
>
> I didn't find a specific document about this question.
That's because it depends.
Search for "pg_autovacuum"
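If a fixed schedule turns out to be enough, a cron job using the stock vacuumdb client is a common alternative; a sketch (the schedule is illustrative):

    # crontab: vacuum and analyze every database nightly at 03:30
    30 3 * * *  vacuumdb --all --analyze --quiet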
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Fernando Lujan
Sent: Friday, March 18, 2005 3:29 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Vacuum analyze.
Hi folks,
I wanna know from you: how often must I run vacuum analyze on my db?
Hi folks,
I wanna know from you: how often must I run vacuum analyze on my db?
Once per day, twice... once per hour and so on...
I didn't find a specific document about this question.
Thanks in advance.
Fernando Lujan
/* On the test stub: */
[snip]
#include <cstddef>   /* size_t */
#include <cstdlib>   /* rand() */
#include <iostream>  /* std::cout, std::endl */

double data[30];

int main(void)
{
    size_t i;
    size_t size = sizeof(data) / sizeof(data[0]);
    /* fill the array with pseudo-random values */
    for (i = 0; i < size; i++) {
        data[i] = rand();
    }
    /* print one value per line */
    for (i = 0; i < size; i++) {
        std::cout << data[i] << std::endl;
    }
    return 0;
}
On Fri, Mar 18, 2005 at 01:31:51PM -0500, Tom Lane wrote:
> BTW, although I know next to nothing about NUMA, I do know that it is
> configurable to some extent (eg, via numactl). What was the
> configuration here exactly, and did you try alternatives? Also,
> what was the OS exactly? (I've heard
Rafael Martinez Guerrero <[EMAIL PROTECTED]> writes:
> We are trying to dump a 30GB+ database using pg_dump with the --file
> option. In the beginning everything works fine, pg_dump runs and we get
> a dumpfile. But when this file becomes 16GB it disappears from the
> filesystem,
FWIW, I tried and
On Fri, Mar 18, 2005 at 10:23:14AM -, Alex Stapleton wrote:
> If I SET ENABLE_SEQSCAN TO OFF it uses the Index, but it does an Index scan
> backward if I do
>
> SELECT * FROM article WHERE symbol=12646 AND source = 19 ORDER BY
> symbol desc, source DESC, time DESC LIMIT 1000;
>
> Which is better but still quite slow.
Sorry, I did not see your email off-list until after I sent the other
one.
On Mar 18, 2005, at 11:56 AM, Devrim GUNDUZ wrote:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
Hi,
On Fri, 18 Mar 2005, Dianne Yumul wrote:
Why don't you try yum? It will install the dependencies for you ;)
http://linux
Yup, I didn't see your email off-list till after I re-sent the other
email. But thanks for the info.
On Mar 18, 2005, at 11:56 AM, Devrim GUNDUZ wrote:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
Hi,
On Fri, 18 Mar 2005, Dianne Yumul wrote:
Why don't you try yum? It will install the dependencies for you ;)
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
Hi,
On Fri, 18 Mar 2005, Dianne Yumul wrote:
Why don't you try yum? It will install the dependencies for you ;)
http://linux.duke.edu/projects/yum/
As we talked off-list before, currently PGDG RPMs cannot be installed via
yum, AFAIK.
Cheers,
- --
Dev
Why don't you try yum? It will install the dependencies for you ;)
http://linux.duke.edu/projects/yum/
On Mar 18, 2005, at 11:32 AM, Devrim GUNDUZ wrote:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
Hi,
On Fri, 18 Mar 2005, Edgar Amorim wrote:
I'm kind of a newbie with Linux & related stuff. I'm t
Folks,
I'm kind of a newbie with Linux & related stuff. I'm trying to install PG on an FC3 using "postgresql-8.0.1-2PGDG.i686.rpm", but after issuing "rpm -ivh (the rpm file)" I got a message telling me "error: Failed dependencies: libpq.so.3 is needed". I took a look at the documentation and it see
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1
Hi,
On Fri, 18 Mar 2005, Edgar Amorim wrote:
I'm kind of a newbie with Linux & related stuff. I'm trying to install
PG on an FC3 using "postgresql-8.0.1-2PGDG.i686.rpm", but after issuing
"rpm -ivh (the rpm file)" I got a message telling me "error: Failed dependencies: libpq.so.3 is needed".
Hello all,
I have a java program that accepts print streams and inserts them into a
spool table as a bytea. This fires a pl/pgsql trigger that passes the
bytea to a pl/perl function to process the bytea and spits the results
back as an array. It then proceeds to insert the data into multiple
tables.
Mean is just sum(col)/count(col).
Mode can be calculated with HAVING, max, and count.
Median can be computed by sorting, using a cursor, and going to the
middle.
There are more efficient and more accurate ways to do it, but those
have to be implemented at a low level. Of course, since you have
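For instance, a sketch of the three methods above (the table name pset_intensity is an assumption, the column names come from the question; the mode query uses a simpler ORDER BY/LIMIT variant of the HAVING/max/count idea, and the OFFSET expression needs a release that accepts sub-queries there, otherwise use the cursor approach):

    -- mean per PsetID
    SELECT psetid, avg(intensityvalue) AS mean
    FROM pset_intensity
    GROUP BY psetid;

    -- mode: the most frequent IntensityValue for one PsetID
    SELECT intensityvalue, count(*) AS freq
    FROM pset_intensity
    WHERE psetid = 42
    GROUP BY intensityvalue
    ORDER BY freq DESC
    LIMIT 1;

    -- median: sort and step to the middle row
    SELECT intensityvalue
    FROM pset_intensity
    WHERE psetid = 42
    ORDER BY intensityvalue
    OFFSET (SELECT count(*) / 2 FROM pset_intensity WHERE psetid = 42)
    LIMIT 1;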
On Fri, Mar 18, 2005 at 01:37:10PM -0500, Hrishikesh Deshmukh wrote:
> Hi All,
>
> Is there a way to do simple statistics like mean/median/mode in
> PostgreSQL? I have tables like PsetID | IntensityValue. I want to
> find out the mean (IntensityValue) of some PsetID(s)?!
> Any urls/pointers/books would be a big help.
Hi All,
Is there a way to do simple statistics like mean/median/mode in PostgreSQL?
I have tables like PsetID | IntensityValue. I want to find out the mean
(IntensityValue) of some PsetID(s)?!
Any urls/pointers/books would be a big help.
Thanks,
Hrishi
Mark Rae <[EMAIL PROTECTED]> writes:
> The altix still only scales up to 10x rather than 16x, but that probably
> is the NUMA configuration taking effect now.
BTW, although I know next to nothing about NUMA, I do know that it is
configurable to some extent (eg, via numactl). What was the
config
Bruce Momjian writes:
> So it seems our entire SMP problem was that global lock. Nice.
Yeah, I was kind of expecting to see the LockMgrLock up next, but
it seems we're still a ways away from having a problem there. I guess
that's because we only tend to touch locks once per query, whereas
we're
Mark Rae wrote:
> On Fri, Mar 18, 2005 at 10:38:24AM -0500, Tom Lane wrote:
> > Hey, that looks pretty sweet. One thing this obscures though is whether
> > there is any change in the single-client throughput rate --- ie, is "1.00"
> > better or worse for CVS tip vs 8.0.1?
>
> Here are the figures
Marco Colombo <[EMAIL PROTECTED]> writes:
> Right now I'm parsing the string first, changing the resulting
> parse tree adding missing nodes (def, INDENT, DEINDENT) and
> then compiling it.
Hmmm ... is this really going to be simpler or more robust than lexing
the string carefully enough to insert
On Fri, 18 Mar 2005, Tom Lane wrote:
Michael Fuhr <[EMAIL PROTECTED]> writes:
On Fri, Mar 18, 2005 at 12:35:07AM -0500, Tom Lane wrote:
So that part of it can be solved fairly easily.
Should I submit a patch? It should be only a few additional lines
in PLy_procedure_munge_source(). Would you apply it only to HEAD, or would you back-patch it?
Hello,
I made a database and have a problem with Japanese characters, maybe with
other characters too. Below is an example.
The Linux system is Fedora (RH).
The locale for PostgreSQL is pl_PL.utf8.
The database is in Unicode.
Thanks for help.
==
[EMAIL PROTECTED] 1.0.4]
At the suggestion of several people, I have increased the default settings
in postgresql.conf before continuing my postgresql vs mysql performance
tests. To date, I have only been loading a (roughly) million-row file,
creating indexes during the load, running a vacuum analyze, and a
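For reference, a hypothetical example of such raised settings (8.0-era parameter names; the values are illustrative, not the poster's):

    shared_buffers = 10000          # 8 kB pages, roughly 80 MB
    work_mem = 16384                # kB per sort/hash operation
    maintenance_work_mem = 65536    # kB, speeds VACUUM and index builds
    checkpoint_segments = 16        # fewer checkpoint pauses during bulk loads
    effective_cache_size = 100000   # planner hint only, roughly 800 MB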
On Fri, 2005-03-18 at 09:58 -0500, Tom Lane wrote:
> Rafael Martinez <[EMAIL PROTECTED]> writes:
> > On Thu, 2005-03-17 at 10:17 -0500, Tom Lane wrote:
> >> Is that a plain text, tar, or custom dump (-Ft or -Fc)? Is the behavior
> >> different if you just write to stdout instead of using --file?
>
On Fri, Mar 18, 2005 at 10:38:24AM -0500, Tom Lane wrote:
> Hey, that looks pretty sweet. One thing this obscures though is whether
> there is any change in the single-client throughput rate --- ie, is "1.00"
> better or worse for CVS tip vs 8.0.1?
Here are the figures in queries per second.
Cli
Mark Rae <[EMAIL PROTECTED]> writes:
> Ok, I've done the tests comparing 8.0.1 against a snapshot from the 16th
> and the results are impressive.
> Clients    1    2    3    4    6    8   12   16   32   64
> -
Michael Fuhr <[EMAIL PROTECTED]> writes:
> On Fri, Mar 18, 2005 at 12:35:07AM -0500, Tom Lane wrote:
>> So that part of it can be solved fairly easily.
> Should I submit a patch? It should be only a few additional lines
> in PLy_procedure_munge_source(). Would you apply it only to HEAD,
> or would you back-patch it?
Joe Maldonado <[EMAIL PROTECTED]> writes:
> db=# vacuum verbose analyze pg_catalog.pg_attribute;
> INFO: vacuuming "pg_catalog.pg_attribute"
> INFO: index "pg_attribute_relid_attnam_index" now contains 9965 row
> versions in 181557 pages
REINDEX is probably the only realistic way to fix that. I
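For example (a sketch; REINDEX takes an exclusive lock, and older releases had extra restrictions on reindexing system catalogs):

    REINDEX TABLE pg_catalog.pg_attribute;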
On Fri, Mar 18, 2005 at 11:34:46AM +0100, Marco Colombo wrote:
> Just let me suggest not to mimic its behaviour, but to use the
> Real Thing if we manage to. That is, directly use the Universal Line
> Support code provided by python itself, so that we don't even have
> to think about being compati
Rafael Martinez <[EMAIL PROTECTED]> writes:
> On Thu, 2005-03-17 at 10:17 -0500, Tom Lane wrote:
>> Is that a plain text, tar, or custom dump (-Ft or -Fc)? Is the behavior
>> different if you just write to stdout instead of using --file?
> - In this example, it is a plain text (--format=p).
> -
Hello,
After a VACUUM FULL I saw that pg_attribute's indexes haven't been
shrunk, as reported by a subsequent vacuum analyze. But the pages
for the table itself have been reduced to 196 pages from
about 181557 pages. Are all system tables affected by this? How can
we reclaim this space?
On Tue, Mar 15, 2005 at 07:00:25PM -0500, Bruce Momjian wrote:
> Oh, you have to try CVS HEAD or a nightly snapshot. Tom made a major
> change that allows scaling in SMP environments.
Ok, I've done the tests comparing 8.0.1 against a snapshot from the 16th
and the results are impressive.
As well
On Thu, 17 Mar 2005, Michael Fuhr wrote:
On Thu, Mar 17, 2005 at 09:48:51PM -0500, Tom Lane wrote:
Michael Fuhr <[EMAIL PROTECTED]> writes:
Line-ending CRs stripped, even inside quotes; mid-line CRs converted
to LF. Tests done with Python 2.4 on FreeBSD 4.11-STABLE; I wonder
what Python on Windows
I should point out that there's no guarantee our data is inserted in anything
like the order we want (time desc), but there is a high correlation. Most of
the time it is almost in order.
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Alex Stapleton
Sent: 18 Mar
If I SET ENABLE_SEQSCAN TO OFF it uses the Index, but it does an Index scan
backward if I do
SELECT * FROM article WHERE symbol=12646 AND source = 19 ORDER BY
symbol desc, source DESC, time DESC LIMIT 1000;
Which is better but still quite slow.
-Original Message-
From: Hegyvari Krisztian
Doesn't it look like the index you are actually using is on article, and
then PG has to filter for symbol and source?
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Alex Stapleton
Sent: Friday, March 18, 2005 10:57 AM
To: pgsql-general@postgresql.org
Subj
On Thu, 17 Mar 2005, Tom Lane wrote:
Martijn van Oosterhout writes:
On Thu, Mar 17, 2005 at 01:03:36PM +0100, Marco Colombo wrote:
OMG! It's indenting the function body. I think you can't do that
w/o being syntax-aware. I'm not familiar with the code, why is it
adding a 'def' in front of it at all?
On Fri, Mar 18, 2005 at 09:29:06AM -, Alex Stapleton wrote:
> We have a ~10million row table but are expecting it to get larger, possibly
> by a factor of 10 or more. The rows are quite long and fixed length (just
> over 500 bytes.)
>
> We have an index of (symbol, source, date) on this table
Whoops, sorry - we have indexes on (symbol, source, time) and there is no
date column :/
SELECT * FROM article WHERE symbol=12646 AND source = 19 ORDER BY time DESC
LIMIT 1000;
QUERY PLAN
-
Alex Stapleton wrote:
SELECT * FROM article WHERE symbol=12646 AND source = 19 ORDER BY time DESC
LIMIT 1000;
To get the latest 1000 rows for that symbol and source.
My (not yet implemented) solution to this problem is to add a SEQUENCE and
index it so that by adding a WHERE id > [max_id]-1000 and
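A sketch of that idea (the sequence and index names are made up, and [max_id] above stays a placeholder):

    CREATE SEQUENCE article_id_seq;
    ALTER TABLE article
        ADD COLUMN id bigint DEFAULT nextval('article_id_seq');
    CREATE INDEX article_id_idx ON article (id);

    -- the newest ~1000 rows are then reachable through the id index:
    SELECT * FROM article
    WHERE id > (SELECT max(id) FROM article) - 1000
      AND symbol = 12646 AND source = 19
    ORDER BY time DESC;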
We have a ~10million row table but are expecting it to get larger, possibly
by a factor of 10 or more. The rows are quite long and fixed length (just
over 500 bytes.)
We have an index of (symbol, source, date) on this table and doing queries
like this
SELECT * FROM article WHERE symbol=12646 AND
On Thu, 2005-03-17 at 10:41 -0700, Aly Dharshi wrote:
Hello
> Would it help to use a different filesystem like SGI's XFS ?
I do not see the connection between this problem and using another
filesystem. I think we would have this problem with all the programs in
the system if we had a problem with the filesystem.
On Thu, 2005-03-17 at 10:17 -0500, Tom Lane wrote:
> Rafael Martinez Guerrero <[EMAIL PROTECTED]> writes:
> > We are trying to dump a 30GB+ database using pg_dump with the --file
> > option. In the beginning everything works fine, pg_dump runs and we get
> > a dumpfile. But when this file becomes 16GB it disappears from the filesystem.