As Tom said, you need to declare tmp_var to match the type of the
select distinct(value) result column.
I gave it a try.
create or replace function prn_test() returns void as $$
declare
  tmp_var test_table.name%type;  -- test_table has a name column, varchar(20) in my case
begin
  for tmp_var in select distinct name from test_table loop
    raise notice 'value: %', tmp_var;
  end loop;
end;
$$ language plpgsql;
Hi Josh,
Josh Kupershmidt wrote:
>
> Using a recent pg_restore (tested with 9.1.2, verbose mode on) and a
> custom-format dump containing large objects, you should see messages
> like this:
>
> pg_restore: creating TABLE image
> pg_restore: executing BLOB 126810
> pg_restore: executing BLOB 1331
On Fri, May 18, 2012 at 3:11 PM, Bosco Rama wrote:
> One thing you may want to look at (if this is PG 8.4.x) is the number of
> large objects in pg_largeobjects. If your apps don't use large objects
> this is not relevant. If they do, then it may be. I've noticed that
> pg_restore no longer re
Oh.. and while I'm polluting this list (sorry) it's a timestamp field
without a time zone.
thanks for any ideas,
-Dave
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
Actually, figured I'd post the whole function, painful as it
might be for anyone to read. If anyone sees something that's a bit
of a risk (like perhaps the whole thing ;)
On 5/18/12 5:19 PM, David Salisbury wrote:
I'm trying to debug an intermittent problem I'm seeing in one of our rollup
scripts.
I'll try to summarize. A table has a measured_at field, from which, together
with a longitude value, I calculate another time value called solar_noon, and I
summarize min/max values grouped around this
On May 18, 2012, at 17:06, Jon Smark wrote:
> Hi,
>
>> ...WHERE wid = ANY(string_to_array(?,';'))
>>
>> where the ? is a parameter that you replace with a semi-colon delimited
>> listing of widget IDs
>>
>> Performance depends on specifics you have not provided, especially the
>> expected nu
Hi,
Poul Møller Hansen wrote:
> I have just restored a database about 347GB in size
>
> postgres=# select * from pg_database_size('dbname');
> pg_database_size
> ------------------
>     346782483256
>
> using this command:
> pg_restore -d dbname -O -Fc dbname.backup
>
> It started at 13/5
Hi,
> ...WHERE wid = ANY(string_to_array(?,';'))
>
> where the ? is a parameter that you replace with a semi-colon delimited
> listing of widget IDs
>
> Performance depends on specifics you have not provided, especially the
> expected number of widgets you are going to be filtering on.
Thanks
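Putting the suggestion together, a minimal sketch (the widgets table with wid
and data columns is from the original question; the literal stands in for the
application-side parameter, and the ::int[] cast assumes wid is an integer,
which the thread does not state):

```sql
-- Fetch all requested widgets in a single round trip.
-- '101;102;103' stands in for the bound parameter ?.
SELECT wid, data
FROM widgets
WHERE wid = ANY (string_to_array('101;102;103', ';')::int[]);
```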
"J.V." writes:
> for tmp_var in select distinct(value) from mytable where
> value2='literal'
> tmp_var has to be in ' ' ticks or it will not work. It is failing on the
> first FOR statement, stating: invalid input syntax for integer:
> "some_distinct_value".
Um, how do you have tmp_var dec
I have a table with a varchar column.
I want to select the distinct values from this column and loop through
them (using as a variable) in a raise notice statement and also in an
update statement.
I have not been able to do this trying over 100 things in the last two
hours. I could not find
On May 18, 2012, at 13:46, Jon Smark wrote:
> Dear postgresql-general,
>
> What would be the best way to fetch in a single round trip a set of table
> rows?
> To clarify, suppose I have a 'widgets' table with columns 'wid' and 'data',
> and I wish to retrieve all rows that belong to the client
On 05/18/2012 11:01 AM, Aaron Burnett wrote:
... One particular query will run perfectly fine (around 5 seconds)
for several weeks, then suddenly decide to hang indefinitely and never
finish
Is the machine busy processing the query or is it idle?
It is processing and in fact drives the loa
Thanks Steve,
Answers are inserted below:
On 5/18/12 11:09 AM, "Steve Crawford"
wrote:
>On 05/18/2012 09:17 AM, Aaron Burnett wrote:
>> Greetings,
>>
>> I run a handful of queries overnight when traffic is at its lowest on
>> our
>> system. One particular query will run perfectly fine (around
Dear postgresql-general,
What would be the best way to fetch in a single round trip a set of table rows?
To clarify, suppose I have a 'widgets' table with columns 'wid' and 'data',
and I wish to retrieve all rows that belong to the client side array $targets.
Obviously one solution would be to l
On 05/17/12 4:17 PM, E-Blokos wrote:
sorry for my ignorance, but can a db like pg work like bittorrent,
taking bits wherever possible, i.e. setting multiple primary_conninfo,
"secondary_conninfo", etc.?
torrents work because the torrent itself is static and constant.
databases are dynamic, a
On 05/18/2012 09:17 AM, Aaron Burnett wrote:
Greetings,
I run a handful of queries overnight when traffic is at its lowest on our
system. One particular query will run perfectly fine (around 5 seconds)
for several weeks, then suddenly decide to hang indefinitely and never
finish. It needs to be
"John Watts" writes:
> Anyone?
I'm still suspicious that you're not really re-ANALYZE'ing the relevant
tables, because there are some discrepancies in the row count estimates
that seem hard to explain otherwise, eg here:
-> Index Scan using tblcompanyindidnumber on tblcompany
(cost=0.
Greetings,
I run a handful of queries overnight when traffic is at its lowest on our
system. One particular query will run perfectly fine (around 5 seconds)
for several weeks, then suddenly decide to hang indefinitely and never
finish. It needs to be killed manually after several hours (I've
int
On Fri, May 18, 2012 at 8:18 AM, Chris Angelico wrote:
> On Sat, May 19, 2012 at 12:09 AM, Vincenzo Romano
> wrote:
>> You said enough about PQstatus. So issuing a "blank" command seems to
>> be a good idea. Possibly the only one.
>>
>> Well, a "ping" function trying to use the current "PGconn*"
When logging long queries we get a DETAIL record with the query parameters,
like this:
2012-05-17 08:33:33 CDT [15053]: [1-1] user@database x.x.x.x(35559)LOG:
0: duration: 587.476 ms execute dbdpg_p15947_158: select
a_bunch_of_stuff from table_b where id = $1 and
2012-05-17 08:33:33
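Log lines like the above are typically produced by a setting along these lines
(a postgresql.conf sketch; the 500 ms threshold is illustrative, not from the
thread):

```
# postgresql.conf -- log any statement running longer than 500 ms;
# parameters bound via the extended protocol appear in a DETAIL line.
log_min_duration_statement = 500
```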
On 05/18/2012 06:47 AM, yxj wrote:
Hi Steve,
Thanks for your reply.
My PostgreSQL version is 8.2. I am not sure whether we have the autovacuum job
running or not. How can I check it?
Unless something is broken it is already running. You can connect to the
db and run "show autovacuum;". It should be o
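A quick sketch of the check described above (psql session; any connected
session can read these settings, on 8.2 as well as current releases):

```sql
-- Confirm autovacuum is enabled.
SHOW autovacuum;

-- Or list every autovacuum-related setting at once:
SELECT name, setting
FROM pg_settings
WHERE name LIKE 'autovacuum%';
```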
Thanks. Steve
At 2012-05-18 02:54:36,"Steve Crawford [via PostgreSQL]"
wrote:
On 05/17/2012 11:30 AM, leaf_yxj wrote:
> Hi Guys. Please help me about this.
>
> For a Postgres database, it looks like we need to analyze and vacuum all
> the tables periodically. I need to write a script which can be exe
On Sat, May 19, 2012 at 12:09 AM, Vincenzo Romano
wrote:
> You said enough about PQstatus. So issuing a "blank" command seems to
> be a good idea. Possibly the only one.
>
> Well, a "ping" function trying to use the current "PGconn*" to ping
> the server would make a lot of sense, though.
If it h
2012/5/18 Tom Lane :
> Vincenzo Romano writes:
>> I'd need to check from a C language program whether the server is
>> running or not.
>> I was planning to use either "PGPing PQping(const char *conninfo)" or
>> "ConnStatusType PQstatus(const PGconn *conn)".
>> I have my program running and checkin
Thanks Steve.
At 2012-05-18 03:04:02,"Steve Crawford [via PostgreSQL]"
wrote:
On 05/17/2012 11:54 AM, leaf_yxj wrote:
> I know we can see the current activity via pg_stat_activity. What else
> do you guys use to debug?
> And for earlier periods, how can we check the activities?
>
> Thanks.
>
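For the first part of that question, a minimal sketch (column names are the
8.x ones; procpid and current_query were later renamed pid and query in 9.2):

```sql
-- Show what every backend is doing right now (PostgreSQL 8.x column names).
SELECT procpid, usename, current_query, query_start
FROM pg_stat_activity
ORDER BY query_start;
```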
Hi Steve,
This is a very good website. We use PostgreSQL 8.2; there may be a few
differences. It's very useful to me.
Thanks.
Regards.
Grace
At 2012-05-18 03:02:41,"Steve Crawford" wrote:
>On 05/17/2012 11:54 AM, leaf_yxj wrote:
>> I know we can see the current activity via pg_s
Hi Steve,
Thanks for your reply.
My PostgreSQL version is 8.2. I am not sure whether we have the autovacuum job
running or not. How can I check it?
Thanks.
Grace
At 2012-05-18 02:53:15,"Steve Crawford" wrote:
>On 05/17/2012 11:30 AM, leaf_yxj wrote:
>> Hi Guys. Please help me about this.
>>
Vincenzo Romano writes:
> I'd need to check from a C language program whether the server is
> running or not.
> I was planning to use either "PGPing PQping(const char *conninfo)" or
> "ConnStatusType PQstatus(const PGconn *conn)".
> I have my program running and checking from time to time whether
Anyone?
I've tried playing with GEQO on version 9.1, setting it on/off,
adjusting its parameters.
I've also increased default_statistics_target to 1000, set
join_collapse_limit to 1, ran ANALYZE again, no progress.
The query plan on the restored db remains 34 rows instead of 21 on the
current p
Hi Marti,
Thanks, this is exactly the kind of feedback I was looking for.
I am already storing the whole XML in a payload table, actually. My problem
is that the queries are created in a domain-specific language and
then transformed to SQL.
There is no way of knowing what kind of qu
CentOS 6.0 - 2.6.32-220.13.1.el6.x86_64
PostgreSQL version: 8.4.11
Also checked on PG 8.4.9 (same OS); the problem occurs there too. A few times a
day I get a situation where PostgreSQL stops running for 1-2 minutes. CPU is
at 99% in system time. IO is OK; only interrupts are extremely high (over
100k
Hi all.
I'd need to check from a C language program whether the server is
running or not.
I was planning to use either "PGPing PQping(const char *conninfo)" or
"ConnStatusType PQstatus(const PGconn *conn)".
I have my program running and checking from time to time whether the
connection is kicking a
On Thu, May 17, 2012 at 8:40 PM, Seref Arikan wrote:
> Is there a glaring error in my approach? Should I be better off with another
> SQL query, or Ltree/XPATH queries?
For the particular query you posted, I would suggest the following indexes:
(rm_type_name, payload_id, parent_feature_mapping_i
Hi Steve,
> Out of memory or OOM killer?? Any such messages in system logs?
That was my first thought too - but I couldn't find anything indicating
an OOM event in the logs.
Usually the server only uses ~110mb out of the available 2GB assigned to it.
So if this isn't a known postgres behaviour, I
Hi,
just wondering if PG 9.2 can accept
multiple primary_conninfo entries on standby servers.
If I understood correctly, we can now replicate from slave to slave; that's
great, but with only one primary_conninfo, and so one upstream server,
is it not a point of failure if the upstream slave goes down?
sorry
Good day.
Continuing the subject of a DB cluster based on plproxy, a question for the
developers:
Is it possible to manage schema visibility from one database for different
databases, at least in read-only mode? For example, like the schema
pg_catalog.
Or is it possible to create FDW on the "shared m
Scott Briggs wrote:
> So if you do need to use wal files to catch up a slave, what would
> that process be? If you caught up with wal files, how would streaming
> replication know what positon to start at? And how would you tell
> streaming replication the new position after catching up with wal
>
On Thu, May 17, 2012 at 11:10 PM, John Mudd wrote:
> For example, I take it that if the master is unavailable then you
> switch to a slave. The former slave becomes the current master. When
> the original "master" is ready to run and network accessible then do
> you bring it online in slave mode a
John Mudd wrote:
> Sorry if this is a dumb question. Feel free to just point me to a doc.
Sure, here:
http://www.postgresql.org/docs/current/static/warm-standby-failover.html
> I've read a little about Postgres replication and the concept of a
> master and one or more slaves. If one db is down th