[GENERAL] Please help me regarding the WITH RECURSIVE query

2013-08-26 Thread gajendra s v
Hello All,



I am migrating Oracle queries to Postgres queries.


The Oracle query is below:

select * from (
  select * from KM_COURSE_MAST where ID in (
    select OBJ_ID
    from (select OBJ_ID, PERFORMER_TYPE, PERFORMER_ID
          from KM_REL_OBJ_PER_ACTION where OBJ_TYPE = 'COURSETYPE')
    where PERFORMER_TYPE = 'GROUP'
      and PERFORMER_ID in (
        select PARENT_ID from KM_REL_SELF_GROUP
        start with CHILD_ID in (SELECT GROUP_ID FROM KM_REL_GRP_USER WHERE USER_ID = 52247)
        connect by CHILD_ID = prior PARENT_ID
        union
        SELECT GROUP_ID PARENT_ID FROM KM_REL_GRP_USER WHERE USER_ID = 52247))
  union select * from KM_COURSE_MAST where CREATED_BY = 52247)
order by DISPLAYORDER

We have changed it to the following Postgres query:

select * from (
  select * from KM_COURSE_MAST where ID in (
    select OBJ_ID
    from (select OBJ_ID, PERFORMER_TYPE, PERFORMER_ID
          from KM_REL_OBJ_PER_ACTION where OBJ_TYPE = 'COURSETYPE') g
    where PERFORMER_TYPE = 'GROUP'
      and PERFORMER_ID in (
        WITH RECURSIVE parents as (
          select PARENT_ID from KM_REL_SELF_GROUP
          where CHILD_ID in (SELECT GROUP_ID FROM KM_REL_GRP_USER WHERE USER_ID = 52247)
          UNION
          select a.PARENT_ID from KM_REL_SELF_GROUP a, parents p
          where a.CHILD_ID = p.PARENT_ID)
        select PARENT_ID from parents order by PARENT_ID asc))
  union select * from KM_COURSE_MAST where CREATED_BY = '52247') KM_COURSE_MAST
where ID = 214

The Postgres query above works fine when the result set has multiple tuples,
but it returns nothing when the result set has a single row.

I then changed the query as follows:

select * from (
  select * from KM_COURSE_MAST where ID in (
    select OBJ_ID
    from (select OBJ_ID, PERFORMER_TYPE, PERFORMER_ID
          from KM_REL_OBJ_PER_ACTION where OBJ_TYPE = 'COURSETYPE') g
    where PERFORMER_TYPE = 'GROUP'
      and PERFORMER_ID in (
        WITH RECURSIVE parents as (
          select PARENT_ID from KM_REL_SELF_GROUP
          where CHILD_ID in (SELECT GROUP_ID FROM KM_REL_GRP_USER WHERE USER_ID = 52247)
          UNION
          select a.PARENT_ID from KM_REL_SELF_GROUP a, KM_REL_SELF_GROUP p  -- was: parents p
          where a.CHILD_ID = p.PARENT_ID)
        select PARENT_ID from parents order by PARENT_ID asc))
  union select * from KM_COURSE_MAST where CREATED_BY = '52247') KM_COURSE_MAST
where ID = 214

This version returns the single-row result set.

Please explain why this happens.


Thanks,
Gajendra


Re: [GENERAL] What is the relationship between checkpoint and wal

2013-08-26 Thread Luca Ferrari
On Mon, Aug 26, 2013 at 4:57 AM, 高健  wrote:
> But why "writes the entire content of each disk page to WAL "?
>

The documentation states: "The row-level change data normally
stored in WAL will not be enough to completely restore such a page
during post-crash recovery." I guess that a mixed page (i.e., a page
that contains old and new data) cannot be safely recovered with
deltas, so you need a "clean" page image from which to start
recovery.

Someone with more knowledge of the internals can comment on this.

Luca




Re: [GENERAL] What is the relationship between checkpoint and wal

2013-08-26 Thread Florian Weimer

On 08/26/2013 11:37 AM, Luca Ferrari wrote:

On Mon, Aug 26, 2013 at 4:57 AM, 高健  wrote:

But why "writes the entire content of each disk page to WAL "?



The documentation states: "The row-level change data normally
stored in WAL will not be enough to completely restore such a page
during post-crash recovery." I guess that a mixed page (i.e., a page
that contains old and new data) cannot be safely recovered with
deltas, so you need a "clean" page image from which to start
recovery.


Correct, the full-page image is needed for restoring the known state of 
a page.


It also speeds up recovery because you can just fire off writes to the 
pages under recovery, followed by incremental updates to the cached 
copy.  Without full page writes, you'd have to read the current version 
of the page from the disk first, often resulting in somewhat random read 
activity during recovery.


--
Florian Weimer / Red Hat Product Security Team




[GENERAL] Re: postmaster.pid still exists after pacemaker stopped postgresql - how to remove

2013-08-26 Thread Mistina Michal
Hi there.

I didn't find out why this issue happened. Only backing up and reformatting
the filesystem where the corrupted postmaster.pid file lived got rid of it.
Hopefully the file won't reappear in the future.

 

Best regards,

Michal Mistina

 

From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Mistina Michal
Sent: Sunday, August 18, 2013 12:11 AM
To: pgsql-general@postgresql.org
Subject: [GENERAL] postmaster.pid still exists after pacemaker stopped
postgresql - how to remove

 

Hi all.

I'm running a 2-node pacemaker/pgsql cluster in one technical center (TC1) and
a second 2-node pacemaker/pgsql cluster in TC2.

After streaming replication had been established between TC1 (master) and TC2
(slave), I tried to migrate resources within TC1 from node 1 to node 2.

The Pacemaker operation FAILED to stop the postgres resource. PostgreSQL was
stopped, but postmaster.pid remained behind, corrupted. This situation didn't
occur when streaming replication was not established.

Why did this happen?

How can I delete the corrupted pid file? The standard way of deleting a file,
"rm -f", does not work.

 

It looks like this:

[root@tstcaps01 data]# ll
ls: cannot access postmaster.pid: No such file or directory
total 56
drwx------ 7 postgres postgres    62 Jun 26 17:13 base
drwx------ 2 postgres postgres  4096 Aug 17 23:38 global
drwx------ 2 postgres postgres    17 Jun 26 09:54 pg_clog
-rw------- 1 postgres postgres  5127 Aug 17 16:24 pg_hba.conf
-rw------- 1 postgres postgres  1636 Jun 26 09:54 pg_ident.conf
drwx------ 2 postgres postgres  4096 Jul  2 00:00 pg_log
drwx------ 4 postgres postgres    34 Jun 26 09:53 pg_multixact
drwx------ 2 postgres postgres    17 Aug 17 23:38 pg_notify
drwx------ 2 postgres postgres     6 Jun 26 09:53 pg_serial
drwx------ 2 postgres postgres     6 Jun 26 09:53 pg_snapshots
drwx------ 2 postgres postgres     6 Aug 17 23:38 pg_stat_tmp
drwx------ 2 postgres postgres    17 Jun 26 09:54 pg_subtrans
drwx------ 2 postgres postgres     6 Jun 26 09:53 pg_tblspc
drwx------ 2 postgres postgres     6 Jun 26 09:53 pg_twophase
-rw------- 1 postgres postgres     4 Jun 26 09:53 PG_VERSION
drwx------ 3 postgres postgres  4096 Aug 17 23:38 pg_xlog
-rw------- 1 postgres postgres 19884 Aug 17 22:54 postgresql.conf
-rw------- 1 postgres postgres    71 Aug 17 23:38 postmaster.opts
?????????? ? ?        ?            ? ?            ? postmaster.pid
-rw-r--r-- 1 postgres postgres   491 Aug 17 16:33 recovery.done

 

Some additional information.

[root@tstcaps01 data]# crm_mon -1

Last updated: Sun Aug 18 00:08:04 2013
Last change: Sat Aug 17 23:26:19 2013 via crm_resource on tstcaps01
Stack: openais
Current DC: tstcaps02 - partition WITHOUT quorum
Version: 1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14
2 Nodes configured, 4 expected votes
6 Resources configured.

Online: [ tstcaps01 tstcaps02 ]

Resource Group: PGServer
 pg_lvm (ocf::heartbeat:LVM):          Started tstcaps01
 pg_fs  (ocf::heartbeat:Filesystem):   Started tstcaps01
 pg_lsb (lsb:postgresql-9.2):          Started tstcaps01 (unmanaged) FAILED
 pg_vip (ocf::heartbeat:IPaddr2):      Stopped
Master/Slave Set: ms_drbd_pg [drbd_pg]
 Masters: [ tstcaps01 ]
 Slaves: [ tstcaps02 ]

Failed actions:
pg_lsb_stop_0 (node=tstcaps01, call=90, rc=-2, status=Timed Out):
unknown exec error

[root@tstcaps01 data]# rpm -qa | grep postgres
postgresql92-9.2.4-1PGDG.rhel6.x86_64
postgresql-libs-8.4.11-1.el6_2.x86_64
postgresql92-libs-9.2.4-1PGDG.rhel6.x86_64
postgresql92-server-9.2.4-1PGDG.rhel6.x86_64
postgresql92-devel-9.2.4-1PGDG.rhel6.x86_64
postgresql92-contrib-9.2.4-1PGDG.rhel6.x86_64

 

 

Any help would be appreciated.

 

Best regards,

Michal Mistina

 





Re: [GENERAL] Re: postmaster.pid still exists after pacemaker stopped postgresql - how to remove

2013-08-26 Thread Fujii Masao
On Mon, Aug 26, 2013 at 9:53 PM, Mistina Michal  wrote:
> Hi there.
>
> I didn't find out why this issue happened. Only backing up and reformatting
> the filesystem where the corrupted postmaster.pid file lived got rid of it.
> Hopefully the file won't reappear in the future.

I have encountered a similar problem when I broke the filesystem with
a double mount. You may have hit the same problem.

>  Master/Slave Set: ms_drbd_pg [drbd_pg]
>
>  Masters: [ tstcaps01 ]
>
>  Slaves: [ tstcaps02 ]

Why do you use DRBD with streaming replication? If you locate
the database cluster on DRBD, it's better to check the status of
the DRBD filesystem.

Regards,

-- 
Fujii Masao




Re: [GENERAL] effective_io_concurrency with an SSD SAN?

2013-08-26 Thread Samrat Revagade
>Given a system with 32 cores, an SSD SAN with 48x drives, and 2x 8Gbps
>paths from the server to the SAN, what would be a good starting point
>to set effective_io_concurrency?  I currently have it set to 32, but I
>kind of feel like the right setting would be "2" since we have two
>paths.  We don't often saturate both links but it does happen from
>time to time.

One rule of thumb for the optimal value is that it is usually the number of
data drives participating in the I/O system, excluding the parity drives.
Here is an article about setting it in Postgres Plus Advanced Server;
have a look at it, as it may be useful for a normal PostgreSQL setup as well:
http://www.enterprisedb.com/docs/en/8.3R2/perf/Postgres_Plus_Advanced_Server_Performance_Guide-17.htm
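
A minimal sketch of applying that rule of thumb (assuming all 48 SSDs carry
data; the parameter can be set per session or in postgresql.conf):

  SET effective_io_concurrency = 48;  -- number of data-bearing drives
  SHOW effective_io_concurrency;      -- verify the session value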




-
Greetings,
Samrat Revagade,
NTT DATA OSS Center Pune, India.



Re: [GENERAL] how to use aggregate functions in this case

2013-08-26 Thread David Johnston
BladeOfLight16 wrote
> Then again, I guess you don't need a nested query.
> 
> SELECT  v_rec1.user,
> CASE WIDTH_BUCKET(v_rec_fts.lev, 0, 100, 4)
>   WHEN 1 THEN '0 to 25'
>   WHEN 2 THEN '25 to 50'
>   WHEN 3 THEN '50 to 75'
>   WHEN 4 THEN '75 to 100'
>   ELSE 'But how?'
> END AS quarter_percentage,
>  COUNT(*) as count
> FROM v_rec2
> GROUP BY user, quarter_percentage;

This is clean but requires the use of equal intervals.

Another option, though I am unfamiliar with the exact syntax, is to use the
contains operator and an "int4range" (a range type; use whichever range type
matches the actual column type).

SELECT CASE WHEN val <@ '(0,25]'::int4range -- '(25,50]'::int4range, etc...

This allows for uneven bucket sizes and avoids the double-inclusive-endpoints
problem that results from using BETWEEN.

This requires 9.2 or later; a rough sketch is below.
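
A minimal sketch of the idea (hypothetical table scores(userid int, lev int);
int4range uses '[)' bounds by default, so the buckets below don't overlap):

  SELECT userid,
         CASE
           WHEN lev <@ int4range(0, 25)   THEN '0 to 24'
           WHEN lev <@ int4range(25, 60)  THEN '25 to 59'  -- uneven sizes are fine
           WHEN lev <@ int4range(60, 101) THEN '60 to 100'
         END AS bucket,
         COUNT(*) AS cnt
  FROM scores
  GROUP BY userid, bucket;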

David J.






Re: [GENERAL] Re: postmaster.pid still exists after pacemaker stopped postgresql - how to remove

2013-08-26 Thread Mistina Michal
Hi Masao.
Thank you for the suggestion. Indeed, that could have occurred, most probably
while I was testing a split-brain situation. In that test I turned off the
network card on one node, and DRBD was in the primary role on both nodes.
After the split-brain occurred I resynced DRBD: of the two primaries I
promoted one as "primary" (winner) and the other as "secondary" (victim).
The data should have been consistent by that moment, but probably it wasn't.

I am using DRBD in only one technical center. Data is synced by streaming
replication to the secondary technical center, where there is another DRBD
instance.

It's like this:

TC1:
--- node1: DRBD (primary), pgsql
--- node2: DRBD (secondary), pgsql

TC2:
--- node1: DRBD (primary), pgsql
--- node2: DRBD (secondary), pgsql

Within one technical center, pgsql runs on only one node at a time; this is
managed by pacemaker/corosync.
From the outside it looks as if only one postgresql server is running in each TC:
TC1 (master)  streaming replication => TC2 (slave)

If one node in a technical center fails, failover to the secondary node is
really quick, thanks to the fast network within the technical center.
Between TC1 and TC2 there is a WAN link. If something goes wrong and TC1
becomes unavailable, I can switch to TC2 manually or automatically.

Is there a more appropriate solution? Would you use something else?

Best regards,
Michal Mistina

On Mon, Aug 26, 2013 at 9:53 PM, Mistina Michal 
wrote:
> Hi there.
>
> I didn't find out why this issue happened. Only backing up and reformatting
> the filesystem where the corrupted postmaster.pid file lived got rid of it.
> Hopefully the file won't reappear in the future.

I have encountered a similar problem when I broke the filesystem with a double
mount. You may have hit the same problem.

>  Master/Slave Set: ms_drbd_pg [drbd_pg]
>
>  Masters: [ tstcaps01 ]
>
>  Slaves: [ tstcaps02 ]

Why do you use DRBD with streaming replication? If you locate the database
cluster on DRBD, it's better to check the status of the DRBD filesystem.

Regards,

--
Fujii Masao




[GENERAL] Problem creating index

2013-08-26 Thread Torello Querci
Hi to all 

On my PostgreSQL 9.1 instance I had a problem with an index:
using the index, I get fewer tuples than expected.
I dropped the index and the query works fine, but obviously it is slow,
so I tried to recreate the index.

I ran the CREATE INDEX statement, but after a long time I get this error
message:

ERROR:  unexpected end of tape

I looked in the PostgreSQL and syslog log files but found nothing.


Any suggestion?


The CREATE INDEX statement I use is:

CREATE INDEX dati_impianto_id_tipo_dato_id_data_misurazione_idx
  ON dati
  USING btree
  (impianto_id, tipo_dato_id, data_misurazione DESC);


Re: [GENERAL] Re: postmaster.pid still exists after pacemaker stopped postgresql - how to remove

2013-08-26 Thread Fujii Masao
On Mon, Aug 26, 2013 at 11:02 PM, Mistina Michal
 wrote:
> Hi Masao.
> Thank you for the suggestion. Indeed, that could have occurred, most probably
> while I was testing a split-brain situation. In that test I turned off the
> network card on one node, and DRBD was in the primary role on both nodes.
> After the split-brain occurred I resynced DRBD: of the two primaries I
> promoted one as "primary" (winner) and the other as "secondary" (victim).
> The data should have been consistent by that moment, but probably it wasn't.
>
> I am using DRBD in only one technical center. Data is synced by streaming
> replication to the secondary technical center, where there is another DRBD
> instance.
>
> It's like this:
>
> TC1:
> --- node1: DRBD (primary), pgsql
> --- node2: DRBD (secondary), pgsql
>
> TC2:
> --- node1: DRBD (primary), pgsql
> --- node2: DRBD (secondary), pgsql
>
> Within one technical center, pgsql runs on only one node at a time; this is
> managed by pacemaker/corosync.
> From the outside it looks as if only one postgresql server is running in
> each TC:
> TC1 (master)  streaming replication => TC2 (slave)
>
> If one node in a technical center fails, failover to the secondary node is
> really quick, thanks to the fast network within the technical center.
> Between TC1 and TC2 there is a WAN link. If something goes wrong and TC1
> becomes unavailable, I can switch to TC2 manually or automatically.
>
> Is there a more appropriate solution? Would you use something else?

Nope. I've heard of a similar configuration, though it used a shared-disk
failover solution instead of DRBD.

Regards,

-- 
Fujii Masao




Re: [GENERAL] Is there any method to limit resource usage in PG?

2013-08-26 Thread Igor Neyman
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of 高健
Sent: Monday, August 26, 2013 2:08 AM
To: pgsql-general
Subject: [GENERAL] Is there any method to limit resource usage in PG?

Hello:

Sorry to disturb you.

I am now encountering a serious problem: memory is not enough.

My customer reported that when they run a program, total memory and disk I/O
usage both reach the threshold value (80%).

The program is written in Java.
It uses JDBC to pull data out of the DB with a query that joins several tables
and returns about 3,000,000 records.
The program then uses JDBC again to write the records row by row, inserting
them into another table in the DB.

My first question is:

Currently, my customer cannot update their applications, and they also have no
budget to buy new hardware.

In fact, my customer wants PG to handle the program's JDBC-related load this
way:
    PG keeps all of its processes running under a usage threshold.
    If resource usage approaches the threshold, PG slows the processes down to
stay under it.

I know that Oracle has resource plans etc. for this,
but I haven't found anything like that in PG.

I also want to know:

When so many records need to be sent to the client,
will PG use as much memory as it can?
If the total amount of records is larger than shared_buffers, will PG hang, or
will it just do its work and communicate with the client in several rounds
until all the data is transferred?


Best Regards
  

No, PG does not have a feature similar to Oracle's Resource Manager.
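
For the specific workload described, one workaround is to keep the data
server-side instead of round-tripping 3,000,000 rows through JDBC; a minimal
sketch, with hypothetical table and column names:

  -- one statement, no client-side memory needed for the rows at all
  INSERT INTO target_table (col1, col2)
  SELECT a.col1, b.col2
  FROM table_a a
  JOIN table_b b ON b.a_id = a.id;

If the rows must pass through the client, a server-side cursor
(DECLARE ... / FETCH ...) keeps the client's working set small.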

Regards,
Igor Neyman



Re: [GENERAL] Problem creating index

2013-08-26 Thread Florian Weimer

On 08/26/2013 04:27 PM, Torello Querci wrote:

Create index statement that I use is:

CREATE INDEX dati_impianto_id_tipo_dato_id_data_misurazione_idx
  ON dati
  USING btree
  (impianto_id, tipo_dato_id, data_misurazione DESC);


What are the data types of these columns?

--
Florian Weimer / Red Hat Product Security Team




Re: [GENERAL] Problem creating index

2013-08-26 Thread Luca Ferrari
On Mon, Aug 26, 2013 at 4:27 PM, Torello Querci  wrote:
> ERROR:  unexpected end of tape

Really strange; if I understand correctly, something went wrong while sorting tuples.
Is it possible to test with an increased work_mem value?

Luca




Re: [GENERAL] Problem creating index

2013-08-26 Thread Torello Querci
2013/8/26 Florian Weimer 

> On 08/26/2013 04:27 PM, Torello Querci wrote:
>
>> Create index statement that I use is:
>>
>> CREATE INDEX dati_impianto_id_tipo_dato_id_data_misurazione_idx
>>   ON dati
>>   USING btree
>>   (impianto_id, tipo_dato_id, data_misurazione DESC);
>>
>
> What are the data types of these columns?
>
>
impianto_id -> integer
tipo_dato_id -> integer
data_misurazione -> date

The index had been present in the database for about a year and a half before
I dropped it.


Re: [GENERAL] Problem creating index

2013-08-26 Thread Torello Querci
2013/8/26 Luca Ferrari 

> On Mon, Aug 26, 2013 at 4:27 PM, Torello Querci  wrote:
> > ERROR:  unexpected end of tape
>
> Really strange, if I get it right something went wrong while sorting
> tuples.
> Is it possible to test with an incremented work_mem value?
>
Actually, I use the default work_mem value (1MB).
Of course it is possible to increase work_mem to any value.
Now I'll try with 10MB (shared_buffers is set to 412MB).
If you think that 10MB is a very low value for a table with millions of
records, I can try a higher value.

Best Regards, Torello


Re: [GENERAL] What is the relationship between checkpoint and wal

2013-08-26 Thread Jeff Janes
On Sun, Aug 25, 2013 at 7:57 PM, 高健  wrote:
> Hi :
>
> Thanks to Alvaro! Sorry for the late reply.
>
> I have understood a little about it.
>
> But the description of full_page_writes made me even more confused. Sorry,
> maybe I am moving to another problem:
>
> It is said:
> http://www.postgresql.org/docs/9.2/static/runtime-config-wal.html#GUC-FULL-PAGE-WRITES
> --
> When this parameter is on, the PostgreSQL server writes the entire content
> of each disk page to WAL during the first modification of that page after a
> checkpoint. This is needed because a page write that is in process during an
> operating system crash might be only partially completed, leading to an
> on-disk page that contains a mix of old and new data.
> ---
>
> Let me imagine this:
> On a disk page, there is the following data:
>
> id=1 val=1 with transaction id  of 1001
> id=2 val=2 with transaction id  of 1002
> id=3 val=3 with transaction id  of 1003
>
> If I start the DB,
> and a transaction with id 2002 deals with the data of id=2, changing val
> to 20,
> then a transaction with id 2003 deals with the data of id=3, changing val to 30.
>
> With full_page_writes = off,
> when my checkpoint occurs, it succeeds for transaction 2002 but fails for
> 2003 because of a crash.

A checkpoint either succeeds or fails.  It cannot succeed with some
transactions and fail with others.

> Then the disk page will contain:
>
> id=1 val=1  with transaction id of 1001 -- maybe this is very old data
> id=2 val=20 with transaction id of 2002 -- this is new data
> id=3 val=3  with transaction id of 1003 -- this is old data


Postgres does not do in-place updates; it marks the old row as
obsolete and creates a new one.

id=1 val= 1 with transaction id  of 1001
id=2 val= 2 with transaction id  of 1002 xmax of 2002
id=3 val= 3 with transaction id  of 1003 xmax of 2003
id=2 val=20 with transaction id  of 2002
id=3 val=30 with transaction id of 2003
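
A quick way to see this, as a sketch on a scratch table:

  CREATE TABLE t (id int, val int);
  INSERT INTO t VALUES (1, 1), (2, 2), (3, 3);
  SELECT ctid, xmin, xmax, * FROM t;
  UPDATE t SET val = 20 WHERE id = 2;
  SELECT ctid, xmin, xmax, * FROM t;  -- id=2 now shows a new ctid and xmin:
                                      -- a new row version, not an in-place change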

Of course, the whole point of a torn page write is that you don't know how
much got written, so you don't know what is actually on the disk.

> When the DB restarts from the crash,
> I think there is WAL data for transaction ids 2002 and 2003, because
> WAL is written to wal_buffers before data is written to shared_buffers.

The WAL is written out of wal_buffers and flushed before the
corresponding block is written out of shared_buffers.

> So if the online WAL files are OK, no data will be lost, and
> roll-forward and roll-back can happen.
> If some online WAL file is damaged during the crash,
> some data might be lost, but if we have archived WAL, we can restore
> up to the latest transaction id in the archived WAL.

Most WAL records would have no problem being applied to a block that
is an otherwise uncorrupted mix of old and new.

But some WAL records have instructions that amount to "grab 134 bytes
from offset 7134 in the block and move them to offset 1623".  If the
block is an unknown mix of old and new data, that can't be carried out
safely.

Cheers,

Jeff




Re: [GENERAL] Problem creating index

2013-08-26 Thread Tom Lane
Torello Querci  writes:
> 2013/8/26 Luca Ferrari 
>> Is it possible to test with an incremented work_mem value?

> Actually, I use the default work_mem value (1MB).

maintenance_work_mem is what would be used for CREATE INDEX.
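
For example, a minimal sketch (the 256MB figure is only illustrative):

  SET maintenance_work_mem = '256MB';  -- for this session only
  CREATE INDEX dati_impianto_id_tipo_dato_id_data_misurazione_idx
    ON dati USING btree (impianto_id, tipo_dato_id, data_misurazione DESC);
  RESET maintenance_work_mem;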

FWIW, though, the combination of this weird error and the fact that you
had a corrupt index to begin with makes me suspicious that there's some
low-level problem.  You might be well advised to do some memory testing
on that machine, for example.

regards, tom lane




Re: [GENERAL] Problem creating index

2013-08-26 Thread Alvaro Herrera
Torello Querci wrote:
> 2013/8/26 Luca Ferrari 
> 
> > On Mon, Aug 26, 2013 at 4:27 PM, Torello Querci  wrote:
> > > ERROR:  unexpected end of tape
> >
> > Really strange, if I get it right something went wrong while sorting
> > tuples.
> > Is it possible to test with an incremented work_mem value?
> >
> > Actually, I use the default work_mem value (1MB).
> Of course it is possible to increase work_mem to any value.
> Now I'll try with 10MB (shared_buffers is set to 412MB).
> If you think that 10MB is a very low value for a table with millions of
> records, I can try a higher value.

Note that index creation uses maintenance_work_mem to limit memory used,
not work_mem.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [GENERAL] Problem creating index

2013-08-26 Thread Torello Querci
2013/8/26 Tom Lane 

> Torello Querci  writes:
> > 2013/8/26 Luca Ferrari 
> >> Is it possible to test with an incremented work_mem value?
>
> > Actually, I use the default work_mem value (1MB).
>
> maintenance_work_mem is what would be used for CREATE INDEX.
>
Ok, thanks.


> FWIW, though, the combination of this weird error and the fact that you
> had a corrupt index to begin with makes me suspicious that there's some
> low-level problem.  You might be well advised to do some memory testing
> on that machine, for example.
>
I checked for ECC memory, but unfortunately the machine uses non-ECC memory.
The machine is installed at a remote site, so I would have to run a memory
tester from a normal Linux shell; I can't use memtest at boot time.

At the moment I am getting this error while restoring the big table into a
different database on the same machine:

psql:dump_ess_2013_08_26.sql:271177424: SSL error: sslv3 alert unexpected
message
psql:dump_ess_2013_08_26.sql:271177424: connection to server was lost

I was connected with psql -h localhost.


Any other suggestions?

Best Regards


Re: [GENERAL] Is there any method to limit resource usage in PG?

2013-08-26 Thread Jeff Janes
On Sun, Aug 25, 2013 at 11:08 PM, 高健  wrote:
> Hello:
>
> Sorry for disturbing.
>
> I am now encountering a serious problem: memory is not enough.
>
> My customer reported that when they run a program, total memory and disk I/O
> usage both reach the threshold value (80%).
>
> The program is written in Java.
> It uses JDBC to pull data out of the DB with a query that joins several
> tables and returns about 3,000,000 records.
> The program then uses JDBC again to write the records row by row, inserting
> them into another table in the DB.

What is using the memory, the postgres backend or the client program?

Cheers,

Jeff




Re: [GENERAL] Problem creating index

2013-08-26 Thread Torello Querci
Ok,

the CREATE INDEX has now finished, using maintenance_work_mem = 100MB.

Thanks to all.

I suppose a clearer error message would help.


Best Regards, Torello


2013/8/26 Torello Querci 

>
>
>
> 2013/8/26 Tom Lane 
>
>> Torello Querci  writes:
>> > 2013/8/26 Luca Ferrari 
>> >> Is it possible to test with an incremented work_mem value?
>>
>> > Actually, I use the default work_mem value (1MB).
>>
>> maintenance_work_mem is what would be used for CREATE INDEX.
>>
> Ok, thanks.
>
>
>> FWIW, though, the combination of this weird error and the fact that you
>> had a corrupt index to begin with makes me suspicious that there's some
>> low-level problem.  You might be well advised to do some memory testing
>> on that machine, for example.
>>
> I checked for ECC memory, but unfortunately the machine uses non-ECC memory.
> The machine is installed at a remote site, so I would have to run a memory
> tester from a normal Linux shell; I can't use memtest at boot time.
>
> At the moment I am getting this error while restoring the big table into a
> different database on the same machine:
>
> psql:dump_ess_2013_08_26.sql:271177424: SSL error: sslv3 alert unexpected
> message
> psql:dump_ess_2013_08_26.sql:271177424: connection to server was lost
>
> I was connected with psql -h localhost.
>
>
> Any other suggestions?
>
> Best Regards
>


Re: [GENERAL] Problem creating index

2013-08-26 Thread Rafael Martinez Guerrero
On 08/26/2013 06:37 PM, Torello Querci wrote:
> 
> 
> In this moment I get this error while executing the restore of the big
> table in a different database on the same machine:
> 
> psql:dump_ess_2013_08_26.sql:271177424: SSL error: sslv3 alert
> unexpected message
> psql:dump_ess_2013_08_26.sql:271177424: connection to server was lost
> 

Hello

This error has probably nothing to do with your index problem.

How big is the restore file you are restoring? More than 512MB?

We had a similar error on our RHEL6 servers, and the problem was that the
server could not do an SSL renegotiation. Some systems have SSL libraries
that are incapable of SSL renegotiation, as a safeguard against (old) bugs
in the protocol.

If you are restoring a database/table bigger than 512MB and your system
cannot do SSL renegotiation, you have to set the parameter
ssl_renegotiation_limit to 0 in your postgresql.conf and restart
PostgreSQL.

regards,
-- 
Rafael Martinez Guerrero
Center for Information Technology
University of Oslo, Norway

PGP Public Key: http://folk.uio.no/rafael/




Re: [GENERAL] filling database

2013-08-26 Thread Steve Crawford

On 08/23/2013 03:29 AM, sachin kotwal wrote:

Create a sample table with one or two rows, then use the following command to
populate data:

INSERT INTO TABLE_NAME VALUES(generate_series(1,10));

Cartesian joins are also useful - especially when you want 
semi-realistic data. A quick Google will get you a list of 1000 surnames 
and another 1000 given names. Then a cartesian join like:


select given.givenname, surnames.surname from given,surnames

will give you a million rows. Add a middle-initial column and you 
quickly expand to 26 million unique names. Need more? There are lists 
that go well beyond 1,000 names, or you can just use a full middle name 
instead of an initial and get 1,000,000,000 names (or somewhat fewer if 
you eliminate names where the first/middle/last name are the same).
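
A minimal sketch of the middle-initial expansion (assuming the given(givenname)
and surnames(surname) tables above):

  SELECT g.givenname,
         chr(64 + i.n) AS middle_initial,  -- 'A' through 'Z'
         s.surname
  FROM given g
  CROSS JOIN generate_series(1, 26) AS i(n)
  CROSS JOIN surnames s;                   -- 1000 * 26 * 1000 = 26 million rows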


Then you can chuckle at your newly integrated society with Huang 
Alexopoulos O'Leary, Albrecht Fernandez Grebenshchikov and Wekesa 
Étienne Nikoleta.


Add in similar lists of common town names and street names, along with 
some generated street numbers, and you will quickly exceed the world 
population and/or your storage capacity.


Cheers,
Steve






Re: [GENERAL] Problem creating index

2013-08-26 Thread Jeff Janes
On Mon, Aug 26, 2013 at 10:01 AM, Torello Querci  wrote:
> Ok,
>
> now create index is finished using maintenance_work_mem=100MB.
>
> Thanks to all.
>
> I suppose that an error message more clear can help.

Unfortunately, since no one knows what the real problem is, we can't
make the message clearer.  Something that is never supposed to
happen has happened.

One thing you could do is set log_error_verbosity to verbose.

It seems like the most likely cause is flaky hardware, either memory
or hard-drive.  In which case, your database is in serious danger of
irrecoverable corruption.

Is it reproducible: if you lower maintenance_work_mem, do you get the error
again, and if you raise it, does the error go away?

Cheers,

Jeff




Re: [GENERAL] how to use aggregate functions in this case

2013-08-26 Thread Janek Sendrowski

Hi,

 

thanks for all your answers.

I'll give the contains operator and int4range a try, but first I'd like to know if this would work:

 

CASE WHEN a >= 0 AND a < 25

CASE WHEN a >= 25 AND a < 50

 

There wouldn't be a double endpoint. I just have to decide which range the endpoint includes.

 

Janek Sendrowski




Re: [GENERAL] how to use aggregate functions in this case

2013-08-26 Thread David Johnston
Janek Sendrowski wrote
> Hi, 
> 
>   
> 
> thanks for all your answers. 
> 
> I'll give the contains operator and int4range a try, but first I'd
> like to know if this would work: 
> 
>   
> 
> CASE WHEN a >= 0 AND a < 25 
> 
> CASE WHEN a >= 25 AND a < 50 
> 
>   
> 
> There wouldn't be a double endpoint. I just have to decide which range
> the endpoint includes. 
> 
>   
> 
> Janek Sendrowski

Yes, using explicit comparison operators with "AND" will work just fine; it's
just a little more verbose, so the other options, when available, are preferred
from a readability standpoint.  I do not know whether the different options
have different performance characteristics.
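
For instance, a minimal sketch of that form (the table t and column a are
hypothetical):

  SELECT CASE
           WHEN a >= 0  AND a < 25 THEN '0 to 24'
           WHEN a >= 25 AND a < 50 THEN '25 to 49'
           ELSE 'other'
         END AS bucket,
         COUNT(*) AS cnt
  FROM t
  GROUP BY bucket;  -- each endpoint lands in exactly one bucket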

David J.







Re: [GENERAL] Please help me regarding the WITH RECURSIVE query

2013-08-26 Thread BladeOfLight16
On Mon, Aug 26, 2013 at 3:17 AM, gajendra s v  wrote:

> Please explain why this happens.
>

A good place to start would be removing all the parts that don't seem to
matter. Your problem seems to be with the recursive query (since that is the
part you're changing). Cut everything else away and compare the results of the
recursive queries; if you still can't figure it out, come back here with your
findings (and the isolated recursive queries). You'll be much more likely to
get responses if you narrow down the problem yourself instead of asking people
on this list to do it.

You might find this a good read: http://sscce.org/. And it wouldn't hurt if
you could create a SQL Fiddle that demonstrates your problem; the simpler and
more trimmed down, the better.
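
For reference, here is the recursive part isolated from the rest (derived from
the queries in the original post):

  WITH RECURSIVE parents AS (
      SELECT PARENT_ID
      FROM KM_REL_SELF_GROUP
      WHERE CHILD_ID IN (SELECT GROUP_ID FROM KM_REL_GRP_USER WHERE USER_ID = 52247)
      UNION
      SELECT a.PARENT_ID
      FROM KM_REL_SELF_GROUP a
      JOIN parents p ON a.CHILD_ID = p.PARENT_ID  -- the second variant joined
                                                  -- KM_REL_SELF_GROUP p here,
                                                  -- which is not recursive at all
  )
  SELECT PARENT_ID FROM parents ORDER BY PARENT_ID;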


Re: [GENERAL] Replication Postgre > Oracle

2013-08-26 Thread Greg Sabino Mullane



> ...replication between PostgreSQL 9.2.4 and Oracle Database
...
> We are also thinking of developing a solution based on triggers and/or WAL

Before you reinvent that first wheel, check out Bucardo, which is a 
trigger-based solution that can go from Postgres to Oracle.

-- 
Greg Sabino Mullane g...@turnstep.com
End Point Corporation http://www.endpoint.com/
PGP Key: 0x14964AC8 201308262157
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8






[GENERAL] Call for translations

2013-08-26 Thread yinminmin
Peter Eisentraut,

   My name is Minmin and I come from China. At present I am responsible for
translation work. On the site I saw some information about the call for
translations. I have great interest in this job
and have time to do it. I hope I can have the opportunity to do this work.
   Thank you!

With Regards,
Minmin

Re: [GENERAL] Problem creating index

2013-08-26 Thread Torello Querci
Yes,

the table is bigger than 512MB.
Thanks for your tips.


Best Regards, Torello


2013/8/26 Rafael Martinez Guerrero 

> On 08/26/2013 06:37 PM, Torello Querci wrote:
> >
> >
> > In this moment I get this error while executing the restore of the big
> > table in a different database on the same machine:
> >
> > psql:dump_ess_2013_08_26.sql:271177424: SSL error: sslv3 alert
> > unexpected message
> > psql:dump_ess_2013_08_26.sql:271177424: connection to server was lost
> >
>
> Hello
>
> This error has probably nothing to do with your index problem.
>
> How big is the restore file you are restoring? More than 512MB?
>
> We had a similar error on our RHEL6 servers, and the problem was that the
> server could not do an SSL renegotiation. Some systems have SSL libraries
> that are incapable of SSL renegotiation, as a safeguard against (old) bugs
> in the protocol.
>
> If you are restoring a database/table bigger than 512MB and your system
> cannot do SSL renegotiation, you have to set the parameter
> ssl_renegotiation_limit to 0 in your postgresql.conf and restart
> PostgreSQL.
>
> regards,
> --
> Rafael Martinez Guerrero
> Center for Information Technology
> University of Oslo, Norway
>
> PGP Public Key: http://folk.uio.no/rafael/
>


Re: [GENERAL] Problem creating index

2013-08-26 Thread Torello Querci
2013/8/26 Jeff Janes 

> On Mon, Aug 26, 2013 at 10:01 AM, Torello Querci 
> wrote:
> > Ok,
> >
> > now create index is finished using maintenance_work_mem=100MB.
> >
> > Thanks to all.
> >
> > I suppose that an error message more clear can help.
>
> Unfortunately, since no one knows what the real problem is, we can't
> make the message clearer.  Something that is never supposed to
> happen has happened.
>
> One thing you could do is set log_error_verbosity to verbose.
>
> It seems like the most likely cause is flaky hardware, either memory
> or hard-drive.  In which case, your database is in serious danger of
> irrecoverable corruption.
>
> Is it reproducible: if you lower maintenance_work_mem, do you get the error
> again, and if you raise it, does the error go away?
>
I'll try to restore the database on the same hardware into a different DB,
using different maintenance_work_mem and verbosity settings, and I'll post the
results here if they can help improve the error message.


Cheers, Torello