Thanks, Sergey, for such a quick response, but I don't think this is a
patch problem, because we have other DB servers running fine on the same
version, and the message is also different:
host= PANIC: _bt_restore_page: cannot add item to page
And replication works fine the whole day, but at midni
Alexander Reichstadt writes:
> I think I solved it:
> SELECT * FROM (SELECT DISTINCT ON(refid_messages) refid_messages as msgid, *
> FROM messagehistorywithcontent WHERE
> (lower(ARRAY_TO_STRING(ARRAY[login,firstname,lastname,content,msgtitle], '
> ')) LIKE '%gg%') ORDER BY refid_messages DES
On 2013.04.24 7:16 PM, wrote:
Maybe you should take a look at this extension [1] ;-)
[1] http://pgxn.org/dist/session_variables/
Fabrízio de Royes Mello
Thanks for your response.
/*
 * Author: Fabrízio de Royes Mello
 * Created at: Thu Oct 27 14:37:36 -0200 2011
 *
 */
CREATE FUNCTION set_value(
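For comparison, here is a minimal sketch of the same idea using only the built-in set_config()/current_setting() functions; the 'myapp.changeset_id' setting name is an illustrative placeholder, not something taken from the extension:
-- Store a session-scoped value; the third argument 'false' makes the
-- setting last for the whole session rather than only the transaction.
SELECT set_config('myapp.changeset_id', '42', false);
-- Read it back later in the same session (returned as text).
SELECT current_setting('myapp.changeset_id');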
On Wed, Apr 24, 2013 at 4:21 AM, Dmitriy Igrishin wrote:
> I've spent some time tracking down a bug in the application,
> which performed a query with the entire schema-qualified
> relation name quoted (i.e. "schema.relation" instead of "schema"."relation"
> or just schema.relation), and the error handler pri
On Wed, Apr 24, 2013 at 8:49 AM, François Beausoleil
wrote:
> Presently, I'm migrating each partition individually to add NOT NULL, set a
> default value and update the table to have correct values. Essentially, I'm
> doing this:
>
> ALTER TABLE parent ADD COLUMN new_field int; -- adds the field
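For context, a hedged sketch of the per-partition steps being described; the child table name parent_2013_w17 and the default value 0 are illustrative placeholders, not taken from the thread:
-- Hypothetical per-partition migration: set a default, backfill the
-- existing rows, then add NOT NULL once every row has a value.
ALTER TABLE parent_2013_w17 ALTER COLUMN new_field SET DEFAULT 0;
UPDATE parent_2013_w17 SET new_field = 0 WHERE new_field IS NULL;
ALTER TABLE parent_2013_w17 ALTER COLUMN new_field SET NOT NULL;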
On Wed, Apr 24, 2013 at 5:05 PM, Adarsh Sharma wrote:
> I have a PostgreSQL 9.2 instance running on a CentOS 6.3 box. Yesterday I
> set up a hot standby using pg_basebackup. Today I got the alert below from
> the standby box:
>
> [1] (from line 412,723)
> 2013-04-24 23:07:18 UTC [13445]: [6-1] user=
On 04/24/2013 11:20 AM, Daniel Cristian Cruz wrote:
I've done an EXPLAIN ANALYZE in the test environment, and there is no
aggressive memory usage.
So I dropped the database in the new cluster and restored a fresh dump
from production (in theory, that's the difference between the two
environme
On 04/24/2013 06:48 PM, S H wrote:
What should the interval be for running VACUUM on the template1 and postgres
databases in the case where
1. No table is created in template1.
Should I pass the ANALYZE parameter when vacuuming template1 and postgres?
I would suggest reading this section of the docs:
http://www.po
On Wed, Apr 24, 2013 at 9:47 PM, Darren Duncan wrote:
> Thank you for that.
>
> I had actually thought of this, but it seemed to me that using a temporary
> table was a heavy-handed approach and that a temporary scalar variable
> would be more efficient or less verbose to use.
>
> It is *a* soluti
What should the interval be for running VACUUM on the template1 and postgres
databases in the case where
1. No table is created in template1.
Should I pass the ANALYZE parameter when vacuuming template1 and postgres?
Thank you for that.
I had actually thought of this, but it seemed to me that using a temporary table
was a heavy-handed approach and that a temporary scalar variable would be more
efficient or less verbose to use.
It is *a* solution certainly, and potentially a better one than the url I
poin
Hello
You could just use temporary tables like:
BEGIN;
CREATE TEMPORARY TABLE csid (i int);
-- somehow write the ID you want into that table
-- and then...
INSERT INTO other_table (changeset_ids, msg) VALUES
((SELECT i FROM csid), 'Some log message');
COMMIT;
When insertin
Hi all,
I have a PostgreSQL 9.2 instance running on a CentOS 6.3 box. Yesterday I
set up a hot standby using pg_basebackup. Today I got the alert below
from the standby box:
[1] (from line 412,723)
2013-04-24 23:07:18 UTC [13445]: [6-1] user= db= host= PANIC:
_bt_restore_page: cannot add item to pag
Hello,
I would like to have one or more session-scoped global variables that are
useable in a similar way to sequence generators, via analogies to
setval()+currval().
Here's a (simplified) scenario ...
Say that for auditing purposes all regular database tables have a changeset_id
column, wh
I'm guessing the reason is something like this: even though the "things"
returned by these two statements are the same logical entity (from a
mathematics/set theory standpoint):
pg_dev=# select * from unnest(array[1,2,3]);
 unnest
--------
      1
      2
      3
(3 rows)
pg_dev=# select unnest(a
Possibly due to my lack of thorough SQL understanding. Perhaps there's a
better way of doing what I'm ultimately trying to accomplish, but still the
question remains - why does this work:
pg_dev=# select unnest(array[1,2,3]);
 unnest
--------
      1
      2
      3
(3 rows)
But not this:
pg_dev
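The second statement is cut off here, but for reference, a hedged sketch of the form that accepts a set-returning call regardless of context: put it in FROM and select from it:
-- Works in any context: the set-returning call sits in FROM, and its
-- rows can then be selected or aggregated normally.
SELECT x FROM unnest(ARRAY[1,2,3]) AS x;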
On Wed, Apr 24, 2013 at 12:48:44PM -0700, Stephen Scheck wrote:
> I have a UDF (written in C) that returns SETOF RECORD of an anonymous
> record type
> (defined via OUT parameters). I'm trying to use array_agg() to transform
> its output to
> an array:
> pg_dev=# SELECT array_agg((my_setof_record_r
I think I solved it:
SELECT * FROM (SELECT DISTINCT ON(refid_messages) refid_messages as msgid, *
FROM messagehistorywithcontent WHERE
(lower(ARRAY_TO_STRING(ARRAY[login,firstname,lastname,content,msgtitle], ' '))
LIKE '%gg%') ORDER BY refid_messages DESC) as foo ORDER BY messagekind ASC
Tha
Hi,
I have a UDF (written in C) that returns SETOF RECORD of an anonymous
record type
(defined via OUT parameters). I'm trying to use array_agg() to transform
its output to
an array:
pg_dev=# SELECT array_agg((my_setof_record_returning_func()).col1);
ERROR: set-valued function called in context
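For what it's worth, a hedged sketch of the usual workaround (assuming, purely for illustration, that one of the function's OUT parameters is named col1): move the set-returning call into FROM and aggregate over its rows:
-- Aggregating over the function's result set instead of calling it in the
-- SELECT list avoids the "set-valued function called in context" error.
SELECT array_agg(t.col1)
FROM my_setof_record_returning_func() AS t;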
I've done an EXPLAIN ANALYZE in the test environment, and there is no
aggressive memory usage.
So I dropped the database in the new cluster and restored a fresh dump from
production (in theory, that's the difference between the two environments).
Some minutes later I got an answer: after a dum
I'll give TRUNCATE a try for clearing the tables and see what happens.
Dominic Jones
Quoting Sergey Konoplev :
On Tue, Apr 23, 2013 at 8:50 AM, wrote:
Good morning. I'm seeing several of the following log messages each morning
(for example, there were five this morning, spaced approxim
I would also try increasing the statistics on the service_id and follower_id
fields and tuning how distinct values are collected for the optimizer.
Cheers,
Amador A.
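A hedged illustration of that suggestion; the table name persona_followers appears in the plan quoted later in this thread, and the target value 1000 is only an example:
-- Raise the per-column statistics targets, then re-collect statistics.
ALTER TABLE persona_followers ALTER COLUMN service_id SET STATISTICS 1000;
ALTER TABLE persona_followers ALTER COLUMN follower_id SET STATISTICS 1000;
ANALYZE persona_followers;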
On Wed, Apr 24, 2013 at 9:04 AM, François Beausoleil
wrote:
> Hi!
>
> On 2013-04-17, at 14:15, Jeff Janes wrote:
>
> On Wed, Apr
Hi!
On 2013-04-17, at 14:15, Jeff Janes wrote:
> On Wed, Apr 17, 2013 at 4:26 AM, François Beausoleil
> wrote:
>
>
> Insert on public.persona_followers (cost=139261.12..20483497.65
> rows=6256498 width=16) (actual time=4729255.535..4729255.535 rows=0 loops=1)
>Buffers: shared hit=3313
Hi all!
I have a partitioned table with millions of rows per weekly partition. I am
adding new fields, with null values and no default values, to keep the
migration time reasonable. All downstream code knows how to work with null
fields.
Presently, I'm migrating each partition individuall
On Apr 24, 2013, at 6:14 AM, Bill Moran wrote:
>>>
>
> Write your own client that uses the copy interface to
> load a file from wherever and send it to the server.
>
> Or just use the one built in to psql, as Jasen suggested.
>
I am using "copy to" to write data from the db out to csv files.
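As a hedged sketch of the psql route mentioned above: client-side \copy writes the file from the psql process, so it is owned by whoever runs psql rather than by the postgres server user. The table name and output path below are illustrative placeholders:
-- Run inside psql as an ordinary database user.
\copy (SELECT * FROM some_table) TO '/home/someuser/some_table.csv' CSV HEADER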
Alexander Reichstadt wrote:
> SELECT
> DISTINCT ON (msgid)
> msgid
> FROM (
> SELECT refid_messages as msgid
> FROM messagehistorywithcontent
> WHERE 1=1
> AND
>(lower(ARRAY_TO_STRING(ARRAY[login,firstname,lastname,content,msgtitle], ' '))
>LI
On 04/24/2013 09:03 AM, Alexander Reichstadt wrote:
The order is correct. Now from the outer SELECT I would expect then to get:
53
29
46
Please re-read the manual on DISTINCT ON.
"SELECT DISTINCT ON ( expression [, ...] ) keeps only the first row of
each set of rows where the given expressio
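A hedged illustration of that rule, reusing column names from the query in this thread: the leading ORDER BY expression must match the DISTINCT ON expression, and it determines which row of each group survives:
-- One row per refid_messages, keeping the row with the highest messagekind
-- within each group; any further reordering has to happen in an outer query.
SELECT DISTINCT ON (refid_messages) refid_messages, messagekind
FROM messagehistorywithcontent
ORDER BY refid_messages, messagekind DESC;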
Hi,
here is a query:
SELECT DISTINCT ON (msgid) msgid FROM (SELECT refid_messages as msgid FROM
messagehistorywithcontent WHERE 1=1 AND
(lower(ARRAY_TO_STRING(ARRAY[login,firstname,lastname,content,msgtitle], ' '))
LIKE '%gg%') ORDER BY messagekind DESC) as foo
This query rearranges the sor
On 2013-04-24, at 09:15, hubert depesz lubaczewski wrote:
> On Tue, Apr 23, 2013 at 03:08:52PM -0400, François Beausoleil wrote:
>> I used omnipitr to launch a base backup, but I fumbled a couple of
>> things, so I Ctrl+C'd *once* the console where I had
>> omnipitr-backup-master running. omnipi
On 04/24/2013 04:57 AM, S H wrote:
$ sudo -u postgres psql template1
psql (9.2.4)
Type "help" for help.
template1=# \d
No relations found.
I am getting the output shown above. I am not sure why vacuuming is taking
a long time.
If you are getting 'No relations found' then t
On 04/24/2013 03:35 AM, jesse.wat...@gmail.com wrote:
I am receiving an error when running a pg_dump. These are older legacy
systems and upgrading them is not in the plan.
Any help will be appreciated.
CentOS 5.3 (64bit)
psql (PostgreSQL) 8.3.11
full error message:
pg_dump: SQL command failed
On Tue, Apr 23, 2013 at 03:08:52PM -0400, François Beausoleil wrote:
> I used omnipitr to launch a base backup, but I fumbled a couple of
> things, so I Ctrl+C'd *once* the console where I had
> omnipitr-backup-master running. omnipitr-backup-master correctly
> launched pg_stop_backup, but pg_stop_
$ sudo -u postgres psql template1
psql (9.2.4)
Type "help" for help.
template1=# \d
No relations found.
I am getting the output shown above. I am not sure why vacuuming is taking a
long time.
- What is the recommendation for vacuuming template1 with regard to the wraparound issue -
Once
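As a hedged pointer for the wraparound question: one way to judge how urgently each database needs a wraparound vacuum is to look at its datfrozenxid age; autovacuum normally steps in once age(datfrozenxid) approaches autovacuum_freeze_max_age:
-- Transaction-ID age of every database, oldest first; template1 and
-- postgres show up here like any other database.
SELECT datname, age(datfrozenxid) AS xid_age
FROM pg_database
ORDER BY xid_age DESC;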
Hey,
It seems to me that this is confusing:
dmitigr=> create schema test;
CREATE SCHEMA
dmitigr=> create table test.test();
CREATE TABLE
dmitigr=> table "test.test";
ERROR: relation "test.test" does not exist
LINE 1: table "test.test";
^
dmitigr=> table test.test1;
ERROR: relatio
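When such a name has to be built dynamically, a hedged sketch using the built-in quoting helpers (the names mirror the session above): quote each part separately, never the dotted name as a whole:
-- Produces test.test (no quoting needed for lower-case simple names).
SELECT format('%I.%I', 'test', 'test');
-- Produces "Test"."my table", quoting only the parts that need it.
SELECT format('%I.%I', 'Test', 'my table');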
> On 2013-04-23, Kirk Wythers wrote:
> > I would like to run the COPY command as a user other than "postgres". I
> > find it a bit of a pain (or at least requiring an extra step or two) to
> > have the postgres user own the files that I am creating with COPY TO. Here
> > is a simple example wh
On 2013-04-23, Kirk Wythers wrote:
> I would like to run the COPY command as a user other than "postgres". I find
> it a bit of a pain (or at least requiring an extra step or two) to have the
> postgres user own the files that I am creating with COPY TO. Here is a simple
> example where the loc
I am receiving an error when running a pg_dump. These are older legacy
systems and upgrading them is not in the plan.
Any help will be appreciated.
CentOS 5.3 (64bit)
psql (PostgreSQL) 8.3.11
full error message:
pg_dump: SQL command failed
pg_dump: Error message from server: ERROR: could not acce
>> [regression tests have different plans or row orderings]
>> It seems that the problem only occurs when configuring the build
>> with these settings:
>>
>> --with-libraries=/lib64 --with-blocksize=2 --with-wal-blocksize=2
>> is this problem common, i.e. the expected results files need to
>> be
On 03/24/2013 12:11 PM, Rafał Pietrak wrote:
On 03/24/2013 12:06 PM, Misa Simic wrote:
maybe,
SELECT DISTINCT issuer,amount, array_agg(REFERENCE) over (partition
by invoice_nr) from invoices;
RIGHT. Thanks. (And the first thing I did was read the doc on
array_agg(), which stress
Hello all,
In our application we set session_replication_role TO 'replica' in
certain situations so that triggers will not fire on a table during DML
operations. However, we observed that when setting session_replication_role
TO 'replica', referential integrity constraints will not fire on
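A hedged sketch of the behaviour being described (the child/parent table names are illustrative): foreign-key checks are implemented as system triggers on ordinary tables, so they are skipped along with user triggers while the role is 'replica':
SET session_replication_role TO 'replica';
-- User triggers and FK enforcement triggers on ordinary tables do not fire
-- here, so this insert is not validated against the parent table.
INSERT INTO child (parent_id) VALUES (12345);
SET session_replication_role TO DEFAULT;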