On 8/19/07, Michael Glaesemann <[EMAIL PROTECTED]> wrote:
> As you mention, you could use a trigger instead of explicitly setting
> updated_at to DEFAULT, which might be more convenient because you
> don't need to remember to set the updated_at column explicitly on update.
>
> Whether or not this info
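A trigger of the kind described might be sketched like this (table and column names are hypothetical, and this assumes PL/pgSQL is installed in the database; syntax matches the 8.x series discussed in this thread):

```sql
-- Hypothetical table with created/updated timestamps
CREATE TABLE items (
    id         serial PRIMARY KEY,
    payload    text,
    entered_at timestamptz NOT NULL DEFAULT now(),
    updated_at timestamptz NOT NULL DEFAULT now()
);

-- Trigger function: stamp updated_at on every UPDATE
CREATE OR REPLACE FUNCTION set_updated_at() RETURNS trigger AS $$
BEGIN
    NEW.updated_at := now();
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER items_set_updated_at
    BEFORE UPDATE ON items
    FOR EACH ROW EXECUTE PROCEDURE set_updated_at();
```

With this in place, plain UPDATE statements keep updated_at current without the application having to remember anything.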
Bill Thoen <[EMAIL PROTECTED]> writes:
> Tom, here's the "explain" results: Does this help explain what went wrong?
> (And yes, I think there will be a *lot* of groups.)
> explain select count(*) as count,fips_st_cd, fips_cnty_cd, farm_nbr,
> tract_nbr, field_nbr,crop_cd,crop_status_cd,practice_cd
On 8/9/07, cluster <[EMAIL PROTECTED]> wrote:
> Thanks for your response! Let me try to elaborate on what I meant in my
> original post.
>
> If R is the set of words in the tsvector for a given table row and S is
> the set of keywords to search for (entered by e.g. a website user) I
> would like to
"David Azevedo" <[EMAIL PROTECTED]> writes:
> 2007-08-16 13:49:43||/labrsangioro/forms/recepcao_temp.php||ERROR: could
> not open relation 1663/1491040/1555634: Invalid argument
> 2007-08-16 14:12:36||/labrsangioro/forms/recepcao_temp.php||PANIC: could
> not open file "pg_xlog/00010
On 8/19/07, David Azevedo <[EMAIL PROTECTED]> wrote:
> Please, I need help desperately.
>
> I'm running PostgreSQL 8.1.9 on Windows 2003 Server Standard Edition Service
> Pack 2, Intel Pentium 4 3.2 GHz, 1 GB RAM.
> I have 5 databases on this server and they all see activity all day (
> inserts, selects
Please, I need help desperately.
I'm running PostgreSQL 8.1.9 on Windows 2003 Server Standard Edition Service
Pack 2, Intel Pentium 4 3.2 GHz, 1 GB RAM.
I have 5 databases on this server and they all see activity all day
(inserts, selects, etc.).
I have 2 problems.
First, PostgreSQL crashes many times in
On Aug 19, 2007, at 14:04 , Robin Helgelin wrote:
When I started with MySQL I exploited their "bug" with timestamp
fields and always had an entered and an updated field on my tables.
As I'm blissfully ignorant of MySQL's peculiarities, without a more
detailed explanation of what you're trying to
Tom, here's the "explain" results: Does this help explain what went wrong?
(And yes, I think there will be a *lot* of groups.)
explain select count(*) as count,fips_st_cd, fips_cnty_cd, farm_nbr,
tract_nbr, field_nbr,crop_cd,crop_status_cd,practice_cd,seq_nbr from
compliance_2006 group by fips_st_
Ron Mayer schrieb:
Scott Marlowe wrote:
On 8/14/07, Harpreet Dhaliwal <[EMAIL PROTECTED]> wrote:
Hi,
I read a few lines about SP compilation in postgres
http://searchoracle.techtarget.com/originalContent/0,289142,sid41_gci1179016,00.html
1. stored procedure compilation is transactional.
"You
Hi,
When I started with MySQL I exploited their "bug" with timestamp
fields and always had an entered and an updated field on my tables.
My question: is this information interesting enough to save on the
table itself? If so, I guess this could easily be solved with a
trigger; however, should one inst
So is there really any version-control mechanism for functions in PostgreSQL
or not?
~Harpreet
On 8/18/07, Ron Mayer <[EMAIL PROTECTED]> wrote:
>
> Scott Marlowe wrote:
> > On 8/14/07, Harpreet Dhaliwal <[EMAIL PROTECTED]> wrote:
> >> Hi,
> >> I read a few lines about SP compilation in postgres
>
Bill Thoen <[EMAIL PROTECTED]> writes:
> I knew this would take some time, but what I didn't expect was that about
> an hour into the select, my mouse and keyboard locked up and also I
> couldn't log in from another computer via SSH. This is a Linux machine
> running Fedora Core 6 and PostgresQL is
> The command you gave only shows currently
> active users, not the historic peak of connections for instance. I'll
> keep digging through the manual but would love any nudges in the right
> direction, thanks!
Can you set up a snapshot in a cronjob? It would still only be a sample
of a sample, but it might help.
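A snapshot cronjob along those lines could be as simple as appending a periodic count to a log file (database name, path, and interval here are invented for illustration):

```
# crontab entry: sample pg_stat_activity every 5 minutes
*/5 * * * * psql -At -c "SELECT now(), count(*) FROM pg_stat_activity" mydb >> /var/log/pg_connections.log
```

Scanning the log afterwards gives a rough picture of the peak concurrent connection count over time.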
>
> --
Bill Thoen <[EMAIL PROTECTED]> wrote:
>
> I'm new to PostgreSQL and I ran into a problem I don't want to repeat. I have
> a database with a little more than 18 million records that takes up about
> 3GB. I need to check to see if there are duplicate records, so I tried a
> command like this:
>
> SELE
I'm new to PostgreSQL and I ran into a problem I don't want to repeat. I have
a database with a little more than 18 million records that takes up about
3GB. I need to check to see if there are duplicate records, so I tried a
command like this:
SELECT count(*) AS count, fld1, fld2, fld3, fld4 FROM My
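For reference, a duplicate check of that shape is usually written with GROUP BY ... HAVING, so only the repeated combinations come back (table and field names here are placeholders):

```sql
-- Return only the field combinations that occur more than once
SELECT count(*) AS count, fld1, fld2, fld3, fld4
FROM mytable
GROUP BY fld1, fld2, fld3, fld4
HAVING count(*) > 1;
```

On a table this size the grouping step can need a lot of memory; raising work_mem for the session, or grouping on an indexed prefix of the columns first, may keep it manageable.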
[EMAIL PROTECTED] writes:
> When is WAITING in pg_stat_activity set to TRUE?
> Is it when the connection is waiting on a lock, or are there other
> reasons, such as waiting on another resource?
Only when waiting on a lock.
regards, tom lane
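So on the 8.1-era view, lock-waiting backends can be listed directly (column names as they existed in 8.1; on later releases the columns were renamed):

```sql
-- Backends currently blocked waiting for a lock
SELECT procpid, usename, current_query
FROM pg_stat_activity
WHERE waiting;
```

Joining procpid against pg_locks can then show which lock each blocked backend is waiting for.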
We use PHP, but think of it as a universal PgSQL proxy. If you connect to
a connection you set up in pgBouncer via psql, it looks like a normal
database. Nothing is different in your code but where you connect (for us,
it's the same as our core DB server on a different server). Let me know if
t
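A minimal pgBouncer setup of that shape might look like the following sketch (host, port, database name, and pool size are all invented for illustration; see the pgBouncer docs for the full option list):

```
; pgbouncer.ini -- minimal sketch
[databases]
mydb = host=10.0.0.5 port=5432 dbname=mydb

[pgbouncer]
listen_addr = *
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
default_pool_size = 30
```

Clients then connect with `psql -p 6432 mydb` exactly as they would to a real server, and pgBouncer multiplexes them onto the 30 backend connections.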
On 19/08/07, Gavin M. Roy <[EMAIL PROTECTED]> wrote:
> We use PHP, but think of it as a universal PgSQL proxy.. If you connect to
> a connection you setup in pgBouncer via psql, it looks like a normal
> database. Nothing is different in your code but where you connect (for us,
> it's the same as
On 19/08/07, Magnus Hagander <[EMAIL PROTECTED]> wrote:
> Phoenix Kiula wrote:
> No need to match. If you have 150 relations, 200 is a reasonable value.
> But once you get the proper output from the vacuum command, it tells you
> that as well (74 in my example above)
Found it! You get those word
Phoenix Kiula wrote:
> On 19/08/07, Magnus Hagander <[EMAIL PROTECTED]> wrote:
>> Phoenix Kiula wrote:
>
> ...snip
>
>> There should be a line like this at the end of a "VACUUM VERBOSE" command:
>> INFO: free space map contains 33 pages in 74 relations
>> DETAIL: A total of 1184 page slots
Btw, related to one of my earlier questions: where can I see how many
connections are being made to the DB, what was the maximum number
attempted at any given time, and so on? The connection-related info.
Thanks!
On 19/08/07, Magnus Hagander <[EMAIL PROTECTED]> wrote:
> Phoenix Kiula wrote:
...snip
> There should be a line like this at the end of a "VACUUM VERBOSE" command:
> INFO: free space map contains 33 pages in 74 relations
> DETAIL: A total of 1184 page slots are in use (including overhead).
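The numbers from that VACUUM VERBOSE line map directly onto two settings in postgresql.conf on pre-8.4 releases (the values below are only placeholders; size them from your own output, with headroom for growth):

```
# postgresql.conf -- free space map sizing (pre-8.4 only)
max_fsm_pages     = 200000   # >= total page slots needed, per VACUUM VERBOSE
max_fsm_relations = 1000     # >= number of relations with free space
```

Both require a server restart to take effect.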
On 19 Aug 2007, at 06:34, Felix Ji wrote:
Hi all,
I am using PQexecParams() to SELECT about 3 million records in C++,
and it takes several minutes to complete, with memory usage
dramatically increased (about 200 MB).
It seems that when I use PQexecParams(), I can't use the query result
before al
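The usual workaround on that era of libpq, which always buffers the whole result client-side, is to page through the data with a server-side cursor, e.g. (table name is a placeholder):

```sql
BEGIN;
DECLARE big_cur CURSOR FOR SELECT * FROM big_table;
FETCH 10000 FROM big_cur;   -- repeat until no rows come back
CLOSE big_cur;
COMMIT;
```

Each FETCH is an ordinary PQexec/PQexecParams call, so only one batch of rows is held in client memory at a time, and rows can be processed as soon as each batch arrives.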
Phoenix Kiula wrote:
> On 19/08/07, Magnus Hagander <[EMAIL PROTECTED]> wrote:
>> Phoenix Kiula wrote:
>>> On 19/08/07, Magnus Hagander <[EMAIL PROTECTED]> wrote:
>
>
>>> should we do one (VACUUM FULL) now given that we've overrun our
>>> max_fsm_pages?
>> Yes, but not until you've fixed it. And
"Joey K." <[EMAIL PROTECTED]> wrote:
>
> Greetings,
I'm guessing you're in a hurry or in a pinch if you need to repost
after one day on a weekend. I was waiting to let someone more
knowledgeable answer, but I've had some experience with this, so
I'll answer to the best of my ability.
> We have
On 19/08/07, Magnus Hagander <[EMAIL PROTECTED]> wrote:
> Phoenix Kiula wrote:
> > On 19/08/07, Magnus Hagander <[EMAIL PROTECTED]> wrote:
> > should we do one (VACUUM FULL) now given that we've overrun our
> > max_fsm_pages?
>
> Yes, but not until you've fixed it. And only once.
>
Fixed what
On Sat, Aug 18, 2007 at 10:54:11PM -0700, Vance Maverick wrote:
> This is my experience with a Java client too. Writing the data with
> PreparedStatement.setBinaryStream works great for long strings, but
> reading it with the complementary method ResultSet.getBinaryStream runs
> into the memory p
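One common workaround for the read side is to fetch the bytea value in slices server-side, so the client never materializes the whole thing at once (table and column names are hypothetical; substring works on bytea with 1-based byte positions):

```sql
-- Read a large bytea value 64 KB at a time
SELECT substring(data FROM 1     FOR 65536) FROM blobs WHERE id = 42;
SELECT substring(data FROM 65537 FOR 65536) FROM blobs WHERE id = 42;
-- ...loop, advancing the offset, until fewer than 65536 bytes come back
```

The client then streams the chunks to wherever they are going instead of holding the full value in memory.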
On Jul 23, 5:18 am, [EMAIL PROTECTED] ("Gavin M. Roy") wrote:
> You might want to look at pgBouncer to pool your drupal pgsql needs. I've
> found with 2000 needed connections, I can pool out to only 30 backends and
> still push 8k transactions per second.
>
How you do use pgBouncer -- through a
Joey K. wrote:
> After reading the docs, PITR is still fuzzy. Our ideas for backup are (do
> not worry about the syntax),
Maybe consider using LVM snapshots in conjunction with rdiff-backup as
an option as well. Simple to set up, simple to manage, no downtime, very
space efficient.
--
Best regards,
Phoenix Kiula wrote:
> On 19/08/07, Magnus Hagander <[EMAIL PROTECTED]> wrote:
>> Phoenix Kiula wrote:
>
>> There are ways to do this, but if you can't just use timeouts to expire
>> from the cache, things can become pretty complicated pretty fast. But
>> perhaps you can isolate some kinds of quer
Phoenix Kiula wrote:
> On 19/08/07, Magnus Hagander <[EMAIL PROTECTED]> wrote:
>> Phoenix Kiula wrote:
>
> .snipped
>
>>> I can merrily increase the "max_fsm_pages" directive, but the manual
>>> also caveats that with "this can use more system V memory than
>>> available on your system".
On 19/08/07, Magnus Hagander <[EMAIL PROTECTED]> wrote:
> Phoenix Kiula wrote:
> There are ways to do this, but if you can't just use timeouts to expire
> from the cache, things can become pretty complicated pretty fast. But
> perhaps you can isolate some kinds of queries that can be cached for
>
Hi,
little question:
When is WAITING in pg_stat_activity set to TRUE?
Is it when the connection is waiting on a lock, or are there other
reasons, such as waiting on another resource?
TIA
ruediger
On 19/08/07, Magnus Hagander <[EMAIL PROTECTED]> wrote:
> Phoenix Kiula wrote:
.snipped
> > I can merrily increase the "max_fsm_pages" directive, but the manual
> > also caveats that with "this can use more system V memory than
> > available on your system". My full verbose vacuum info be
Muhyiddin A.M Hayat wrote:
> Dear all,
>
> I'm unable to connect to the Postgres server with this error:
>
> C:\Program Files\PostgreSQL\8.2\bin>psql -U postgres siakad
> Password for user postgres:
> psql: server closed the connection unexpectedly
> This probably means the server terminated abnor
Phoenix Kiula wrote:
> On 19/08/07, Phoenix Kiula <[EMAIL PROTECTED]> wrote:
[... ]
> Well based on some past posts, I looked into my pg_log stuff and found
> a number of these lines:
>
>
> [
> LOG: could not fork new process for connection: Resource temporarily
> unavailable
>
Phoenix Kiula wrote:
> [Sorry for the length of this post. It stretched as I provided as much
> info as possible..]
>
> So the rubber meets the road. We've put postgresql in a production
> environment with some heavy simultaneous usage. It works well in
> general, but often PG doesn't respond. How
Phoenix Kiula wrote:
>
>
> Well based on some past posts, I looked into my pg_log stuff and found
> a number of these lines:
>
>
> [
> LOG: could not fork new process for connection: Resource temporarily
> unavailable
> LOG: could not fork new process for connection: Resource
On Sun, Aug 19, 2007 at 05:15:34PM +0800, Phoenix Kiula wrote:
> Well based on some past posts, I looked into my pg_log stuff and found
> a number of these lines:
> [
> LOG: could not fork new process for connection: Resource temporarily
> unavailable
Usually this message means th
On 19/08/07, Phoenix Kiula <[EMAIL PROTECTED]> wrote:
> [Sorry for the length of this post. It stretched as I provided as much
> info as possible..]
>
> So the rubber meets the road. We've put postgresql in a production
> environment with some heavy simultaneous usage. It works well in
> general, b
[Sorry for the length of this post. It stretched as I provided as much
info as possible.]
So the rubber meets the road. We've put PostgreSQL in a production
environment with some heavy simultaneous usage. It works well in
general, but often PG doesn't respond. How should I test what is going
wron