On Sat, 16 Feb 2008, Phoenix Kiula wrote:
The top output shows httpd on top, and sometimes postmaster, but I don't
know how to repeatedly capture it. Any suggestions?
Try this:
top -bc | tee topdata
That will save everything to a file called topdata while also letting you
watch it scroll by.
On Feb 15, 2008 10:38 PM, Phoenix Kiula <[EMAIL PROTECTED]> wrote:
> My hosting provider tells me that the Postgresql server is taking up a
> lot of memory but I've been running the same db with the same config
> for over 2 years. Yes we have been growing but what happened in the
> last 3 days to
On 16/02/2008, Greg Smith <[EMAIL PROTECTED]> wrote:
> On Sat, 16 Feb 2008, Phoenix Kiula wrote:
>
> > The script you suggested doesn't work:
> >tmp > ./trackusage.sh
> >-bash: ./trackusage.sh: /bin/sh: bad interpreter: Permission denied
>
>
> Try changing the first line to
>
> #!/bin/bash
On Sat, 16 Feb 2008, Phoenix Kiula wrote:
The script you suggested doesn't work:
tmp > ./trackusage.sh
-bash: ./trackusage.sh: /bin/sh: bad interpreter: Permission denied
Try changing the first line to
#!/bin/bash
Anyway, I did the vmstat command. I was running it while the system
was
What would the command be to have a query result written to a
position-sensitive, fixed-width file [positions 1 through 5 would be one
field, positions 6 through 10 the next field, 11 through 16
another, etc.]? Is this even possible with Postgres?
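One way to get such fixed-width output is to pad each column to its positions and concatenate the pieces, then send the result to a file with psql's \o (plus \t to suppress headers) or with COPY. A minimal sketch, with a made-up table and field widths:

   -- Hypothetical table and widths, for illustration only:
   -- rpad/lpad force each value into its fixed position range.
   SELECT rpad(code, 5)                       -- positions 1 through 5
       || rpad(name, 5)                       -- positions 6 through 10
       || lpad(amount::text, 6)               -- positions 11 through 16
     FROM mytable;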
On 15/02/2008, Richard Huxton <[EMAIL PROTECTED]> wrote:
> Phoenix Kiula wrote:
> > On 15/02/2008, Richard Huxton <[EMAIL PROTECTED]> wrote:
> >>
>
> >> Are you sure the two sets of vmstat/top figures are from when PG was
> >> crashing/running queries slow? Everything seems idle to me in those
Thanks for all the help. Performance is back where I thought it
should be, after I fixed our pooling bug.
I didn't think that postgres would be released with performance issues
like that - it's just too good :)
Thanks,
Dan
"Jozsef Szalay" <[EMAIL PROTECTED]> writes:
> 8.1.2 on Linux
> I've got a user defined data type that has been defined in the "public"
> schema. I use pg_dump to dump a table that has a column of this type:
> create table myschema.mytable (id public.mytype primary key, name varchar);
> ...
> psql:mytabl
On Feb 15, 2008 5:25 PM, Scott Marlowe <[EMAIL PROTECTED]> wrote:
> On Feb 15, 2008 3:31 PM, James B. Byrne <[EMAIL PROTECTED]> wrote:
> >
> > On Fri, February 15, 2008 14:43, Scott Marlowe wrote:
> > >
> > >> For something externally provided and widely used like country codes
> > >> then option
On Feb 15, 2008 3:31 PM, James B. Byrne <[EMAIL PROTECTED]> wrote:
>
> On Fri, February 15, 2008 14:43, Scott Marlowe wrote:
> >
> >> For something externally provided and widely used like country codes
> >> then option one is attractive and possibly the most sensible and
> >> robust solution.
Tom Lane wrote:
It's possible to special-case any particular function you really feel
you need this behavior for. We did special-case || (the string
concatenation operator), and there was some discussion of also putting
in a built-in special case for LIKE, but we desisted from sliding any
furth
Jeff Davis wrote:
If postgresql were to revert to 8.2 implicit casting behavior, would
that actually improve compatibility with other DBMSs? Every DBMS
probably has its own rules for implicit casting, different from every
other DBMS.
So are you sure it wouldn't just introduce more compatibility
Due to limitations (perceived or real) within my client application I am
trying to return a complex dataset which I am assembling using an expensive
PL/pgSQL function, which I would like to wrap in a writeable view.
I have written the function as both a row-level function which returns a
RO
"Keaton Adams" <[EMAIL PROTECTED]> writes:
> Version: Postgres 8.1.4
> Platform: RHEL
>
> Given this scenario with the indexes in place, when I ask for the distinct
> field1_id values, why does the optimizer choose a sequential scan instead of
> just reading from the kda_log_fid_cre_20080123_idx index?
Hi,
I've searched the archives for this issue but I could not find an
answer. I apologize if this has been beaten to death already.
Postgresql version:
8.1.2 on Linux
The issue:
===
I've got a user defined data type that has been defined in the "public"
schema. I use pg_dump to dump a table that has a column of this type:
On Fri, 15 Feb 2008, Phoenix Kiula wrote:
I am not sure what checkpoint stuff means. But I added that entry, and
now my log has ONLY this:
LOG: test message did not get through on socket for statistics collector
LOG: disabling statistics collector for lack of working socket
If you're gettin
On Feb 15, 2008, at 2:56 PM, Anton Andreev wrote:
Hi,
How do I return a result set? Is there a better way in 'plpgsql'
than the one described below?
I do not want to make a select from a function (which is pretty useful
in many cases):
SELECT * FROM getfoo();
, but I want to just call the function with SELECT getfoo();
Version: Postgres 8.1.4
Platform: RHEL
Given this scenario with the indexes in place, when I ask for the distinct
field1_id values, why does the optimizer choose a sequential scan instead of
just reading from the kda_log_fid_cre_20080123_idx index? The time it takes
to perform the sequential scan
On Fri, February 15, 2008 14:43, Scott Marlowe wrote:
>
>> For something externally provided and widely used like country codes
>> then option one is attractive and possibly the most sensible and
>> robust solution. But consider things like transaction status codes.
>> Perhaps an invoice tran
On Fri, Feb 15, 2008 at 9:56 AM, Balázs Klein <[EMAIL PROTECTED]> wrote:
> > given that answers for a questionnaire are stored as a
> > batch
>
> Not in our setup - for all sorts of reasons (preserving responses on a
> connection failure or restart, monitoring response latency in real time,
> creating adaptive/branching questionnaires) we send each response separately.
Greg Smith wrote:
I recall a couple of people asking about when 8.3 would be available for
Debian and Ubuntu. Here's an update now that some useful packages have
come out this week.
Thanks for the summary, Greg.
Colin
Hi,
How do I return a result set? Is there a better way in 'plpgsql' than
the one described below?
I do not want to make a select from a function (which is pretty useful in
many cases):
SELECT * FROM getfoo();
, but I want to just call the function with SELECT getfoo();
--DROP FUNCTION getfoo();
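For reference, a minimal sketch of a set-returning function and the two calling styles (table and column names are invented; RETURN QUERY needs 8.3, older releases loop with RETURN NEXT):

   CREATE TABLE foo (id int, name text);

   CREATE OR REPLACE FUNCTION getfoo() RETURNS SETOF foo AS $$
   BEGIN
       RETURN QUERY SELECT id, name FROM foo;
   END;
   $$ LANGUAGE plpgsql;

   -- As a row source:
   SELECT * FROM getfoo();
   -- Called directly, each row comes back as a single composite value:
   SELECT getfoo();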
Magnus Hagander wrote:
For the case of upgrading, it wouldn't work. But there are certainly
other cases where it would help. Say from your central pgadmin console
administering 10 servers from 3 different major release trees :-(
It can be done with commandline pg_dump, but it means you have
Phoenix Kiula wrote:
> Thanks. Comments below. (PS: I am still unable to connect to
> postgresql even in SSH! I see this message:
>
>psql: could not connect to server: Connection timed out
> Is the server running on host "localhost" and accepting
> TCP/IP connections on
On Fri, Feb 15, 2008 at 12:12 PM, James B. Byrne <[EMAIL PROTECTED]> wrote:
>
> I can over-ride Rails assumptions and force a primary key formed by multiple
> columns which will have a unique index automatically created for the
> previously described "system_values_table". My question still hin
On Fri, 15 Feb 2008, Tom Lane wrote:
He claims it didn't work, but there are no details about what went wrong.
He also seems entirely misinformed on the difference between "portable"
and "PG-specific" pg_dump output.
I just left a note on this and related subjects on the blog. If you
search f
paul rivers wrote:
Going from 8.2.4 and 8.2.6 to 8.3.0 has been painless for me.
However, unlike the blogger you cite, I read the directions before,
not after, attempting it.
The blogger has a point about pg_dump and restore; it could be much
better. For example,
the backup process could b
On Fri, Feb 15, 2008 at 8:21 AM, Phoenix Kiula <[EMAIL PROTECTED]> wrote:
> I'm glad I didn't go from 8.2.3 to 8.3 straight!
>
> http://ogasawalrus.com/blog/node/462
If only he were on debian or ubuntu, he could run pg_upgradecluster
and he'd have been done.
Dave Page wrote:
On Fri, Feb 15, 2008 at 4:21 PM, Tony Caduto
<[EMAIL PROTECTED]> wrote:
paul rivers wrote:
>>
> Going from 8.2.4 and 8.2.6 to 8.3.0 has been painless for me.
> However, unlike the blogger you cite, I read the directions before,
> not after, attempting it.
The blogger has
On Fri, February 15, 2008 12:38, Richard Huxton wrote:
>
> I'm not a Rails guy, but everything I've read about it suggests if
> you're going to gain any advantage from it, then you should follow its
> way of doing things. That means not converting anything, but rather
> writing a rails app that do
On 15/02/2008, Richard Huxton <[EMAIL PROTECTED]> wrote:
> Phoenix Kiula wrote:
> > On 15/02/2008, Richard Huxton <[EMAIL PROTECTED]> wrote:
> >
> >> Ah, more new information! This does seem to point to the load,
> >> particularly if it's exactly the same query each time. So what do
> >> top
On Fri, Feb 15, 2008 at 10:21:16PM +0800, Phoenix Kiula wrote:
>
> http://ogasawalrus.com/blog/node/462
"Reading more carefully"
sounds like it was the first read to me.
> I would instead queue messages (or suitable information about them) in
> a table, and have a process outside PostgreSQL periodically poll for them
Why poll when you can wait?
http://www.postgresql.org/docs/8.2/interactive/sql-notify.html
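A rough sketch of that queue-plus-notify pattern (table and channel names are invented; NOTIFY in 8.2 carries no payload, so the woken listener simply re-scans the queue table):

   CREATE TABLE outgoing_mail (
       id        serial PRIMARY KEY,
       recipient text NOT NULL,
       body      text NOT NULL,
       sent      boolean NOT NULL DEFAULT false
   );

   -- In the transaction that queues a message:
   INSERT INTO outgoing_mail (recipient, body) VALUES ('someone@example.com', 'hello');
   NOTIFY mail_queue;

   -- The external mailer's connection waits with:
   LISTEN mail_queue;
   -- ...and on wake-up selects the rows WHERE NOT sent, delivers them, and marks them sent.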
James B. Byrne wrote:
I am considering how best to handle the issue of attribute encoding for an
OLTP application conversion.
[snip]
The conversion project framework is Ruby on Rails which embeds the practice of
arbitrary integer primary keys assigned by sequencers rather than so-called
"natur
On Fri, Feb 15, 2008 at 4:21 PM, Tony Caduto
<[EMAIL PROTECTED]> wrote:
> paul rivers wrote:
> >>
> > Going from 8.2.4 and 8.2.6 to 8.3.0 has been painless for me.
> > However, unlike the blogger you cite, I read the directions before,
> > not after, attempting it.
>
>
> The blogger has a poin
I am considering how best to handle the issue of attribute encoding for an
OLTP application conversion. The existing system, which does not employ a
relational DBMS in the commonly accepted sense, uses a "system_table" to
validate system codes. This dataset uses concatenated fields to form a uniq
Tony Caduto <[EMAIL PROTECTED]> writes:
> paul rivers wrote:
>> However, unlike the blogger you cite, I read the directions before,
>> not after, attempting it.
> The blogger has a point about pg_dump and restore,
Does he? He claims it didn't work, but there are no details about what
went wrong.
> -Original Message-
> > Do you think there is a way to ensure that the order of the values in the
> > array below is the same for each person?
> >
> > tbl(eID, aID, value)
> >
> > Select eID, array_accum(value) from
> > (
> > (Select Distinct eID from tbl) e
> > CROSS JOIN
> > (Select
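The usual pre-9.0 idiom is to sort in a subquery and aggregate over that, so each person's array is built in aID order (array_accum here is the example aggregate from the PostgreSQL documentation; the ordering is not formally guaranteed by the planner, but in practice the sorted subquery drives the aggregate's input order):

   -- array_accum as defined in the PostgreSQL docs:
   CREATE AGGREGATE array_accum (anyelement) (
       sfunc    = array_append,
       stype    = anyarray,
       initcond = '{}'
   );

   SELECT eID, array_accum(value)
     FROM (SELECT eID, aID, value FROM tbl ORDER BY eID, aID) AS s
    GROUP BY eID;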
Tony Caduto wrote:
paul rivers wrote:
Going from 8.2.4 and 8.2.6 to 8.3.0 has been painless for me.
However, unlike the blogger you cite, I read the directions before,
not after, attempting it.
The blogger has a point about pg_dump and restore; it could be much
better. For example,
the bac
On Feb 15, 2008, at 9:56 AM, Balázs Klein wrote:
given that answers for a questionnaire are stored as a
batch
Not in our setup - for all sorts of reasons (preserving responses
on a connection failure or restart, monitoring response latency in
real time, creating adaptive/branching questionnaires) we send each response separately.
> Balázs Klein wrote:
> >
> > I was hoping that now with PG supporting plan invalidation it would
> > be possible to return a recordset.
>
> Plan invalidation has nothing to do with it. In Postgres a returned
> recordset can be used as a row source in the FROM clause -- this
> requires data type
On Feb 14, 2008, at 8:19 PM, Joe Conway wrote:
Erik Jones wrote:
See how postgres handles filling the NULLs for you? What you'd
really want to do with this would be to define some functions for
setting and getting a person's answers to a given question or set
of questions so that you cou
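For completeness, the two-argument crosstab() from contrib/tablefunc is the form that fills missing answers with NULLs; a sketch with invented table and column names:

   -- First query: (row key, category, value); second query: the full category list.
   SELECT *
     FROM crosstab(
            'SELECT person_id, question_id, answer FROM answers ORDER BY 1',
            'SELECT DISTINCT question_id FROM answers ORDER BY 1'
          )
       AS ct(person_id int, q1 text, q2 text, q3 text);
   -- Anyone who skipped a question gets NULL in that question's column.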
> given that answers for a questionnaire are stored as a
> batch
Not in our setup - for all sorts of reasons (preserving responses on a
connection failure or restart, monitoring response latency in real time,
creating adaptive/branching questionnaires) we send each response separately.
> peopl
Phoenix Kiula wrote:
On 15/02/2008, Richard Huxton <[EMAIL PROTECTED]> wrote:
Are you sure the two sets of vmstat/top figures are from when PG was
crashing/running queries slow? Everything seems idle to me in those figures.
No. They are the vmstat figures from when I was replying to your
em
Phoenix Kiula wrote:
I'm glad I didn't go from 8.2.3 to 8.3 straight!
http://ogasawalrus.com/blog/node/462
Going from 8.2.4 and 8.2.6 to 8.3.0 has been painless for me. However,
unlike the blogger you cite, I read the directions before, not after,
attempting it.
Paul
On Fri, Feb 15, 2008 at 8:36 AM, Phoenix Kiula <[EMAIL PROTECTED]> wrote:
> No. They are the vmstat figures from when I was replying to your
> email. What will vmstat tell me and how should I set it up to do
> "vmstat 10 logging"?
Something like
vmstat 10 > vmstat.log
> LOG: could not rece
On Fri, Feb 15, 2008 at 8:21 AM, Phoenix Kiula <[EMAIL PROTECTED]> wrote:
> I'm glad I didn't go from 8.2.3 to 8.3 straight!
Either way, you need to update to 8.2.6.
On Feb 15, 2008, at 6:29 AM, Balázs Klein wrote:
Erik Jones wrote:
First, please stop top-posting. It makes it difficult for both me
and others to know to whom/what you are replying.
Sorry, I don't know much about mailing list customs - I had to look
up what top-posting is. I will behave
On Feb 15, 2008, at 4:49 AM, [EMAIL PROTECTED] wrote:
Hi all,
I have a large sdf file with many records of molecules and associated
data items and I want to save them in a PostgreSQL database. There are
fewer than 40 data items for every molecule (the names of the data
items for each mole
Balázs Klein wrote:
I was hoping that now with PG supporting plan invalidation it would
be possible to return a recordset.
Plan invalidation has nothing to do with it. In Postgres a returned
recordset can be used as a row source in the FROM clause -- this
requires data type information to be
If you can select stable structure (common columns) and additional columns
then you can:
1. Use a base class (table) with the common columns + inherited tables with their
own additional columns (see the sketch after this message)
2. We use contrib/hstore as storage for semistructured data - we store
additional columns as a key->
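A small sketch of option 1, with invented table names (column defaults are inherited, so child rows draw ids from the parent's sequence):

   CREATE TABLE molecule (
       id   serial PRIMARY KEY,
       name text NOT NULL
   );

   -- A child table adds its own data items on top of the common columns:
   CREATE TABLE molecule_assay (
       logp       numeric,
       solubility numeric
   ) INHERITS (molecule);

   INSERT INTO molecule_assay (name, logp, solubility) VALUES ('aspirin', 1.2, 4.6);  -- values illustrative
   SELECT * FROM molecule;        -- common columns, rows from parent and all children
   SELECT * FROM molecule_assay;  -- common + additional columns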
On Friday 15. February 2008, [EMAIL PROTECTED] wrote:
>Hi all,
>
>I have a large sdf file with many records of molecules and associated
>data items and I want to save them in a PostgreSQL database. There are
>fewer than 40 data items for every molecule (the names of the data
>items for each mo
On 2/15/08, Phoenix Kiula <[EMAIL PROTECTED]> wrote:
> LOG: could not receive data from client: Connection reset by peer
> LOG: unexpected EOF on client connection
> LOG: could not receive data from client: Connection reset by peer
> LOG: unexpected EOF on client connection
This means you
On 2/14/08, hewei <[EMAIL PROTECTED]> wrote:
> Can I send email from a stored procedure in Postgres?
In principle, yes, using one of the "untrusted" stored function
languages. pl/perl, pl/sh, pl/python, and such.
I wouldn't do things that way...
I would instead queue messages (or suitable informati
Geez. My default terminal didn't support UNICODE. Shame on me :P
Thanks!
Douglas McNaught wrote:
> On 2/15/08, Tomás Di Doménico <[EMAIL PROTECTED]> wrote:
>
>> Now I have the data into the UTF8 DB, and using graphical clients
>> everything seems to be great. The thing is, when I query the dat
I'm glad I didn't go from 8.2.3 to 8.3 straight!
http://ogasawalrus.com/blog/node/462
On 2/15/08, Harald Fuchs <[EMAIL PROTECTED]> wrote:
> But you have to ensure that you build PostgreSQL on your desktop
> machine in exactly the same way as the RPM got built
> (integer_datetimes etc).
It'd probably be much easier to just install the -contrib RPM. :)
--
-Doug
On 2/15/08, Tomás Di Doménico <[EMAIL PROTECTED]> wrote:
> Now I have the data into the UTF8 DB, and using graphical clients
> everything seems to be great. The thing is, when I query the data via
> psql, with \encoding UTF8 I get weird data ("NeuquÃ©n" for "Neuquén").
> However, with \encod
In article <[EMAIL PROTECTED]>,
"Shoaib Mir" <[EMAIL PROTECTED]> writes:
> On Fri, Feb 15, 2008 at 6:13 PM, Phoenix Kiula <[EMAIL PROTECTED]> wrote:
> Thanks. But I had installed from rpm. Can I just download that .so
> file and put in the lib folder for pgsql and then start using it?
Greetings.
I'm currently using 8.3, but I've been coping with this since previous
versions.
I'm trying to integrate some LATIN1 and some UTF8 DBs into a single UTF8
one. To avoid the "Invalid UNICODE character..." error, I used iconv to
convert the LATIN1 dumps to UTF8.
Now I have the data into
Phoenix Kiula wrote:
On 15/02/2008, Richard Huxton <[EMAIL PROTECTED]> wrote:
Ah, more new information! This does seem to point to the load,
particularly if it's exactly the same query each time. So what do
top/vmstat etc show for these "go-slow" periods?
I included top and vmstat info in
On Fri, Feb 15, 2008 at 6:13 PM, Phoenix Kiula <[EMAIL PROTECTED]>
wrote:
>
>
> Thanks. But I had installed from rpm. Can I just download that .so
> file and put in the lib folder for pgsql and then start using it?
>
Well I would say download the source for the same version you have, copy it
to
On 15/02/2008, Shoaib Mir <[EMAIL PROTECTED]> wrote:
>
> You do not need to restart the database server for that purpose as all you
> need is the pgstattuple.so file copied to /lib folder.
>
> Do the following (in case you have installed server from source):
>
> - Go to the /contrib/pgstattuple fol
Actually my host has just told me that I have a number of "hung
semaphores" in my server. And he is relating them to postgresql. I am
not surprised, because this is the only utility that has issues. All
the rest is working (apache, mysql, exim, etc). Any thoughts on where
I should start looking for
On Fri, Feb 15, 2008 at 5:18 PM, Phoenix Kiula <[EMAIL PROTECTED]>
wrote:
>
>
> How should I install a "contrib" without bringing down my database, or
> stopping it, or doing ANYTHING to it? It's in production. I can't
> touch it. Will it be installed on the side and then I simply start
> using i
Hi all,
I have a large sdf file with many records of molecules and associated
data items and I want to save them in a PostgreSQL database. There are
fewer than 40 data items for every molecule (the names of the data
items for each molecule are the same, but values differ). The number
of mol
Phoenix Kiula wrote:
On 15/02/2008, Richard Huxton <[EMAIL PROTECTED]> wrote:
First guess is that it's not using the index. What does
EXPLAIN ANALYSE SELECT user_id FROM clients WHERE user_id LIKE '...'
show?
Check the list archives for locale and like and text_pattern_ops too -
that's a
Erik Jones wrote:
> First, please stop top-posting. It makes it difficult for both me
> and others to know to whom/what you are replying.
Sorry, I don't know much about mailing list customs - I had to look up what
top-posting is. I will behave now ...
I would prefer to keep the complications f
On 15/02/2008, Richard Huxton <[EMAIL PROTECTED]> wrote:
> First guess is that it's not using the index. What does
> EXPLAIN ANALYSE SELECT user_id FROM clients WHERE user_id LIKE '...'
> show?
>
> Check the list archives for locale and like and text_pattern_ops too -
> that's a good place to
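In a non-C locale a plain btree index cannot be used for LIKE 'prefix%' matching; the usual fix is an index with text_pattern_ops (or varchar_pattern_ops if the column is varchar), then re-checking the plan. A sketch, with the index name invented:

   CREATE INDEX clients_user_id_like_idx
       ON clients (user_id text_pattern_ops);

   -- Verify the planner now uses it before running the DELETE:
   EXPLAIN ANALYZE
       SELECT user_id FROM clients WHERE user_id LIKE '64.22.91.%';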
On 15/02/2008, Shoaib Mir <[EMAIL PROTECTED]> wrote:
>
>
> On Fri, Feb 15, 2008 at 3:55 PM, Phoenix Kiula <[EMAIL PROTECTED]>
> wrote:
> > Hi,
> >
> > I have an index on the user_id field in the query below:
> >
> > myuser=# delete from clients where user_id like '64.22.91.%';
> > DELETE 22
> >
Phoenix Kiula wrote:
Hi,
I have an index on the user_id field in the query below:
myuser=# delete from clients where user_id like '64.22.91.%';
DELETE 22
Time: 220324.975 ms
Is there any reason why it's taking 220 seconds to run this simple
query? There are about 3 million rows in thi
On Fri, Feb 15, 2008 at 3:55 PM, Phoenix Kiula <[EMAIL PROTECTED]>
wrote:
> Hi,
>
> I have an index on the user_id field in the query below:
>
> myuser=# delete from clients where user_id like '64.22.91.%';
> DELETE 22
> Time: 220324.975 ms
>
> Is there any reason why it's taking 220 seconds
Hi,
I have an index on the user_id field in the query below:
myuser=# delete from clients where user_id like '64.22.91.%';
DELETE 22
Time: 220324.975 ms
Is there any reason why it's taking 220 seconds to run this simple
query? There are about 3 million rows in this table.
How can I deb
Joe wrote:
> It occurs to me that it shouldn't be terribly difficult to make an
> alternate version of crosstab() that returns an array rather than tuples
> (back when crosstab() was first written, Postgres didn't support NULL
> array elements). Is this worth considering for 8.4?
I think there shou
ah, of course.
the exclusive lock was preventing tty1 from reading "test", and when the lock was
gone, so was the table.
I get it. Thanks a lot.
But what about the "ERROR: tuple concurrently updated"? (in TTY3)
What should have happened, I guess, is "ERROR: table "test" does not exist",
upon " drop t
On Thu, Feb 14, 2008 at 7:56 PM, Dan Armbrust
<[EMAIL PROTECTED]> wrote:
> On Thu, Feb 14, 2008 at 1:31 PM, Dave Page <[EMAIL PROTECTED]> wrote:
>
> > You must have enabled the debugger when you installed (or didn't
> > disable it). You can turn it back off in postgresql.conf if you like -
> >