Re: [GENERAL] Appending a newline to a column value - in a psql cronjob

2012-01-18 Thread Jasen Betts

On 2012-01-13, Alexander Farber  wrote:
> Hello!
>
> I'm using PostgreSQL 8.4.9 on CentOS 6.2 and with bash.
>
> The following cronjob works well for me
> (trying to send a mail to myself - for moderation):
>
> 6   6   *   *   *   psql -c "select
> 'http://mysite/user.php?id=' ||id, about from pref_rep where
> length(about) > 1 and last_rated > now() - interval '1 day'"
>
> but I can't figure out how to append a newline to the
> 1st value (because otherwise the line is too long
> and I have to scroll right in my mail reader):

[several command-line attempts skipped]

I'd be inclined to cheat and use a literal newline like this:

psql -c "select 'http://mysite/user.php?id=' ||id|| '
' . ";

I think the one you're groping in the dark for is this:

psql -c "select 'http://mysite/user.php?id=' ||id|| e'\\n' . ";
 
but I think the real problem is that that road doesn't lead where you
want to go: after appending the newline, psql reformats the content
into columns (this is usually a good thing).

As you're using cron and not the command line, the rules about what's
allowable change.

try this:

psql -c "select http://mysite/user.php?id=' ||id || e'\n' || about from 
pref_rep where
 length(about) > 1 and last_rated > now() - interval '1 day'"

or possibly with more backslashes: I'm not sure what cron does to backslashes 
(if anything)
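
If it's the column reformatting that's the real nuisance, another option (a
sketch, not something from the original thread) is to ask psql for unaligned,
tuples-only output so the embedded newline survives as-is:

psql -A -t -c "select 'http://mysite/user.php?id=' || id || e'\n' || about from pref_rep where length(about) > 1 and last_rated > now() - interval '1 day'"

-A (unaligned) and -t (tuples only) suppress the column alignment and headers.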


-- 
⚂⚃ 100% natural


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] On duplicate ignore

2012-01-18 Thread Gnanakumar
Hi,

Ours is a web-based application.  We're trying to implement ON DUPLICATE
IGNORE for one of our application tables, named EMAILLIST.  After a quick
Google search, I found the following "easy & convenient" single-SQL-statement
approach:

INSERT INTO EMAILLIST (EMAIL)
   SELECT 'j...@example.net'
   WHERE NOT EXISTS (SELECT 1 FROM EMAILLIST WHERE EMAIL =
'j...@example.net');

My question is, in a single threaded INSERT, this will *definitely* work.
Since ours is a web-based application, will this work out in a concurrent
multi-threaded environment too?  In other words, will it be truly unique
when INSERT calls are concurrent?

Regards,
Gnanam



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] On duplicate ignore

2012-01-18 Thread Atul Goel
Just create a unique index on EMAIL column and handle error if it comes
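
For example (the index name is illustrative):

CREATE UNIQUE INDEX emaillist_email_uniq ON emaillist (email);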

Thanks,
Atul Goel

-Original Message-
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Gnanakumar
Sent: 18 January 2012 11:04
To: pgsql-general@postgresql.org
Subject: [GENERAL] On duplicate ignore

Hi,

Ours is a web-based application.  We're trying to implement ON DUPLICATE IGNORE
for one of our application tables, named EMAILLIST.  After a quick Google
search, I found the following "easy & convenient" single-SQL-statement
approach:

INSERT INTO EMAILLIST (EMAIL)
   SELECT 'j...@example.net'
   WHERE NOT EXISTS (SELECT 1 FROM EMAILLIST WHERE EMAIL = 
'j...@example.net');

My question is, in a single threaded INSERT, this will *definitely* work.
Since ours is a web-based application, will this work out in a concurrent 
multi-threaded environment too?  In other words, will it be truly unique when 
INSERT calls are concurrent?

Regards,
Gnanam



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make 
changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Partial index does not make query faster

2012-01-18 Thread Ruben Blanco
Hi, folks:

I'm trying to reduce execution time on a query using a partial index,
but Postgres doesn't show a significant improvement, even though the
partial index is 30 times smaller than the index currently used. The query
plan even reports a slightly higher cost (cost=0.00..327952.12) for the
partial index than for the index used instead (cost=0.00..327446.61).

The table is a partitioned table, holding telephone calls for one
month. The partial index holds the calls for just one day.

The table:

\d calls_201109
...
Indexes:
"calls_201109_index_1" UNIQUE, btree (company, call_date,
caller_cli, receiver_cli, call_time, caller_cli_whs, outgoing_call)
"calls_201109_index_2" btree (company, call_date, caller_cli)
"calls_201109_index_partial" btree (company, call_date,
caller_cli) WHERE call_date = '2011-09-01'::date


Using partial index "calls_201109_index_partial":

REINDEX TABLE calls_201109;
ANALYZE calls_201109;

EXPLAIN ANALYZE
SELECT  *
FROMcalls_201109
WHERE   company = 1
AND call_date = '20110901'
AND outgoing_call!='I'
;

                                 QUERY PLAN
--------------------------------------------------------------------------------
 Index Scan using calls_201109_index_2 on calls_201109  (cost=0.00..327952.12 rows=225604 width=866) (actual time=0.061..456.512 rows=225784 loops=1)
   Index Cond: ((company = 1) AND (call_date = '2011-09-01'::date))
   Filter: (outgoing_call <> 'I'::bpchar)
 Total runtime: 643.349 ms



Size of the (partial) index used:

SELECT 
pg_size_pretty(pg_total_relation_size('calls_201109_index_partial'));
 pg_size_pretty

 11 MB


Without using partial index ("calls_201109_index_2" is used instead):

DROP INDEX calls_201109_index_partial;
REINDEX TABLE calls_201109;
ANALYZE calls_201109;

EXPLAIN ANALYZE
SELECT  *
FROMcalls_201109
WHERE   company = 1
AND call_date = '20110901'
AND outgoing_call!='I'
;


                                 QUERY PLAN
--------------------------------------------------------------------------------
 Index Scan using calls_201109_index_2 on calls_201109  (cost=0.00..327446.61 rows=225015 width=865) (actual time=0.103..468.209 rows=225784 loops=1)
   Index Cond: ((company = 1) AND (call_date = '2011-09-01'::date))
   Filter: (outgoing_call <> 'I'::bpchar)
 Total runtime: 656.103 ms



Size of the index used:

SELECT pg_size_pretty(pg_total_relation_size('calls_201109_index_2'));
 pg_size_pretty

 330 MB


Any idea on how to make the partial index effective?

Thanks in advance.
Ruben.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] On duplicate ignore

2012-01-18 Thread Gnanakumar
> Just create a unique index on EMAIL column and handle error if it comes

Thanks for your suggestion.  Of course, I do understand that this could be
enforced/imposed at the database-level at any time.  But I'm trying to find
out whether this could be solved at the application layer itself.  Any
thoughts/ideas?



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] On duplicate ignore

2012-01-18 Thread Dmitriy Igrishin
Hey Gnanakumar,

2012/1/18 Gnanakumar 

> > Just create a unique index on EMAIL column and handle error if it comes
>
> Thanks for your suggestion.  Of course, I do understand that this could be
> enforced/imposed at the database-level at any time.  But I'm trying to find
> out whether this could be solved at the application layer itself.  Any
> thoughts/ideas?
>
Right at the application level you just need to ignore
the unique constraint violation error reported by the backend.
You may also wrap the INSERT statement in a PL/pgSQL
function or in a DO statement and catch the exception
generated by the backend.
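
A minimal sketch of the DO-statement variant (DO needs 9.0+; the table and
column are as in the original question, and the unique index from the earlier
reply is assumed to exist):

DO $$
BEGIN
    INSERT INTO emaillist (email) VALUES ('j...@example.net');
EXCEPTION
    WHEN unique_violation THEN
        NULL;  -- duplicate row: ignore it
END;
$$;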

// Dmitriy.


[GENERAL] Table permissions

2012-01-18 Thread salah jubeh
Hello,

I have created a table from another table, such as:

CREATE TABLE tmp_XXX AS SELECT * FROM XXX;


The tmp_XXX table has no permissions assigned to it, and I want to give it
the same owner and access privileges as the XXX table. I had a look at the pg
catalog tables http://www.postgresql.org/docs/8.4/static/catalogs.html to
create a stored procedure to do this job for me. I have found some tables which
might be useful for getting the original table's information. For example, I
can use pg_tables to get the owner of the original table.  Also, I have found
pg_roles. However, I was not able to find where the access privileges are stored.

Is there a better way to do this task than extracting the access privileges
from the pg catalog?  If not, where can I find the access privileges information?


Thanks in advance 

[GENERAL] Redirect ERROR, FATAL and other messages

2012-01-18 Thread pittgs
Hello!

I'm executing a postgres command from C code; the command that I run
is the following:

"sudo -u pgsql psql -p 3306 -d triage_dump -c 'insert into control select
* from uuid' > logfile"

In the logfile, the output appears, but certain lines do not, for
example:

ERROR:  duplicate key value violates unique constraint "control_pkey"
DETAIL:  Key (uuid)=(1717) already exists.
could not find a "psql" to execute

I understand the error messages, but I don't want them to appear on the
screen (stdout). Is there any way to send these messages to a log file? I
need all messages from postgres to go to a logfile, but it seems "> logfile"
is not enough.
Can somebody give me a hint to solve my issue?

Thanks in advance
Gabs

--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/Redirect-ERROR-FATAL-and-other-messages-tp5154551p5154551.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] scenario with a slow query

2012-01-18 Thread Volodymyr Kostyrko

Hi all.

Maybe I'm missing something, but I have found a case where the planner is
suboptimal.


# Creating table

create table test_stat(id smallint, count smallint, date date);
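
# The plan below refers to an index named test_stat__id_date; its definition
# is not shown in the original message, but presumably something like:
create index test_stat__id_date on test_stat(id, date);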

# Filling table, sorry for php
<?php
$insert = $db->prepare('insert into test_stat (id, count, date) values
(?, 1, to_timestamp(?)::date)');

$today = mktime(0, 0, 0);

$db->beginTransaction();
for($i = 0; $i < 150; $i++) {
  // bind the two placeholders: the id and the unix timestamp for the date
  $insert->execute(array(rand(0, 1000), $today));
}
$db->commit();
?>

And now goes the query.

select * from (
  select id, sum(count) as today
  from test_stat
  where date = now()::date
  group by id
)a natural full join (
  select id, sum(count) as lastday
  from test_stat
  where date = (now() - interval '1 day')::date
  group by id
)b natural full join (
  select id, sum(count) as week
  from test_stat
  where date_trunc('week', now()) = date_trunc('week', date)
and date <> now()::date
  group by id
)c natural full join (
  select id, sum(count) as whole
  from test_stat
  where date <> now()::date
or date is null
  group by id
)d
where id = ?;

Which yields this explain:


                                     QUERY PLAN
-------------------------------------------------------------------------------------
 Hash Full Join  (cost=94830.30..126880.73 rows=5 width=48)
   Hash Cond: (COALESCE(COALESCE(public.test_stat.id, public.test_stat.id), public.test_stat.id) = public.test_stat.id)
   Filter: (COALESCE(COALESCE(COALESCE(public.test_stat.id, public.test_stat.id), public.test_stat.id), public.test_stat.id) = 1)
   ->  Hash Full Join  (cost=91193.49..123240.10 rows=1001 width=36)
         Hash Cond: (COALESCE(public.test_stat.id, public.test_stat.id) = public.test_stat.id)
         ->  Hash Full Join  (cost=40259.93..72302.74 rows=1001 width=24)
               Hash Cond: (public.test_stat.id = public.test_stat.id)
               ->  GroupAggregate  (cost=0.01..32042.63 rows=1 width=8)
                     ->  Index Scan using test_stat__id_date on test_stat  (cost=0.01..32042.61 rows=1 width=8)
                           Index Cond: (date = ((now() - '1 day'::interval))::date)
               ->  Hash  (cost=40247.41..40247.41 rows=1001 width=12)
                     ->  HashAggregate  (cost=40227.39..40237.40 rows=1001 width=8)
                           ->  Seq Scan on test_stat  (cost=0.00..33089.97 rows=1427484 width=8)
                                 Filter: (date = (now())::date)
         ->  Hash  (cost=50933.55..50933.55 rows=1 width=12)
               ->  HashAggregate  (cost=50933.53..50933.54 rows=1 width=8)
                     ->  Seq Scan on test_stat  (cost=0.00..50933.52 rows=1 width=8)
                           Filter: ((date <> (now())::date) AND (date_trunc('week'::text, now()) = date_trunc('week'::text, (date)::timestamp with time zone)))
   ->  Hash  (cost=3636.80..3636.80 rows=1 width=12)
         ->  GroupAggregate  (cost=34.80..3636.79 rows=1 width=8)
               ->  Bitmap Heap Scan on test_stat  (cost=34.80..3636.78 rows=1 width=8)
                     Recheck Cond: (id = 1)
                     Filter: ((date <> (now())::date) OR (date IS NULL))
                     ->  Bitmap Index Scan on test_stat__id_date  (cost=0.00..34.80 rows=1378 width=0)
                           Index Cond: (id = 1)
(25 rows)

The part which yields a Seq scan is a:
  select id, sum(count) as today
  from test_stat
  where date = now()::date
  group by id

And it uses the index when executed like this:
select * from (
  select id, sum(count) as today
  from test_stat
  where date = now()::date
  group by id
)a where id = 1

Where am I wrong here? What have I done so that this subquery can't inherit
the constraint from the outer query?


--
Sphinx of black quartz judge my vow.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Table permissions - solved

2012-01-18 Thread salah jubeh
I found this view 


information_schema.table_privileges



Regards


 




 From: salah jubeh 
To: pgsql  
Sent: Wednesday, January 18, 2012 2:48 PM
Subject: [GENERAL] Table permissions 
 

Hello,

I have created a table from another table, such as:

CREATE TABLE tmp_XXX AS SELECT * FROM XXX;


The tmp_XXX table has no permissions assigned to it, and I want to give it
the same owner and access privileges as the XXX table. I had a look at the pg
catalog tables http://www.postgresql.org/docs/8.4/static/catalogs.html to
create a stored procedure to do this job for me. I have found some tables which
might be useful for getting the original table's information. For example, I
can use pg_tables to get the owner of the original table.  Also, I have found
pg_roles. However, I was not able to find where the access privileges are stored.

Is there a better way to do this task than extracting the access privileges
from the pg catalog?  If not, where can I find the access privileges information?


Thanks in advance 

Re: [GENERAL] Redirect ERROR, FATAL and other messages

2012-01-18 Thread Jerry Sievers
pittgs  writes:

> Hello!
>
> I'm executing a postgres command from a C code, the command that I introduce
> is the following:
>
> "sudo -u pgsql psql -p 3306 -d triage_dump -c 'insert into control select
> *from uuid ' > logfile"
>
> In the logfile, the output is introduced, but certain lines are not, like
> for example:
> *ERROR:  duplicate key value violates unique constraint "control_pkey"
> DETAIL:  Key (uuid)=(1717) already exists.
> could not find a "psql" to execute*
>
> I understand the error messages, but I don't want them to appear in the
> screen (stdout), is there any way to send this messages to a log file? I
> need all messages from postgres goes to a logfile, but seems is not enough
> with "> logfile"
> Can somebody give me a hint to solve my issue?

See these two settings:

client_min_messages
log_min_messages
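
For reference, these control what reaches the client versus what is written to
the server log (a sketch of the postgresql.conf entries, defaults shown; note
that ERROR-level messages still go to the client's stderr, so the 2>
redirection suggested in the next reply is what keeps them off the screen):

client_min_messages = notice    # lowest severity sent to the client
log_min_messages = warning      # lowest severity written to the server log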

> Thanks in advance
> Gabs
>
> --
> View this message in context: 
> http://postgresql.1045698.n5.nabble.com/Redirect-ERROR-FATAL-and-other-messages-tp5154551p5154551.html
> Sent from the PostgreSQL - general mailing list archive at Nabble.com.
>
> -- 
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

-- 
Jerry Sievers
Postgres DBA/Development Consulting
e: postgres.consult...@comcast.net
p: 305.321.1144

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Redirect ERROR, FATAL and other messages

2012-01-18 Thread hubert depesz lubaczewski
On Wed, Jan 18, 2012 at 03:20:05AM -0800, pittgs wrote:
> with "> logfile"
> Can somebody give me a hint to solve my issue?

errors are written not to stdout (which is redirected with >), but to
stderr, which means that you can redirect them, as with any other program,
with 2>, like:

psql ... > logfile 2>&1

depesz

-- 
The best thing about modern society is how easy it is to avoid contact with it.
 http://depesz.com/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Full text search - How to build a filtering dictionary

2012-01-18 Thread Antonio Franzoso

Hi all,
I need to build a synonym dictionary that performs a normalization of 
tokens just like a filtering dictionary does. I've searched for a 
filtering dictionary template but I haven't found one. Where can I find it? 
Or, if there isn't such a template, how can I build a simple filter 
dictionary that simply maps one term to another (in a synonym dict-like 
way)?


Thanks in advance,
Antonio

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Full text search - How to build a filtering dictionary

2012-01-18 Thread Oleg Bartunov

Antonio,

you can see contrib/unaccent dictionary, which is a filtering 
dictionary. I have a page about it - http://mira.sai.msu.su/~megera/wiki/unaccent
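
For context, a filtering dictionary is simply listed ahead of the other
dictionaries in the configuration's mapping, e.g. (a sketch; the configuration
and thesaurus names are illustrative, and unaccent must be installed from
contrib first):

ALTER TEXT SEARCH CONFIGURATION my_config
    ALTER MAPPING FOR asciiword, word, hword, hword_part
    WITH unaccent, my_thesaurus, english_stem;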





Oleg
On Wed, 18 Jan 2012, Antonio Franzoso wrote:


Hi all,
I need to build a synonym dictionary that performs a normalization of 
tokens just like a filtering dictionary does. I've searched for a filtering 
dictionary template but I haven't found one. Where can I find it? Or, if there 
isn't such a template, how can I build a simple filter dictionary that 
simply maps one term to another (in a synonym dict-like way)?


Thanks in advance,
Antonio




Regards,
Oleg
_
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: o...@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Table permissions

2012-01-18 Thread A.M.

On Jan 18, 2012, at 8:48 AM, salah jubeh wrote:

> Hello,
> 
> I have create a table from another table such as 
> 
> CREATE TABLE tmp_XXX AS SELECT * FROM XXX;  
> 
> 
> The  tmp_XXX  tables has no permissions assigned to it and I  want to assign 
> it with the same owner and access privileges  of XXX  table. I had a look on 
> pg catalog tables  http://www.postgresql.org/docs/8.4/static/catalogs.html to 
> create a stored procedure to do this job for me. I have found some tables 
> which might be useful to get the original table iformation. For example, I 
> can use  pg_table to get the owner of the original table.  Also, I have found 
> pg_roles. However, I was not able to find where the access privileges are 
> stored.
> 
> Is there a better way to do this task than extracting the access privileges 
> from pg catalog ?  If not, where I can find the access privileges information 
> ?
> 

You are looking for pg_catalog.pg_class.relacl. Just copy that from the 
original table to the duplicate (and perhaps relowner, depending on your 
situation), and you will have duplicate permissions.
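
A quick sketch for inspecting both (the table name is illustrative):

SELECT c.relacl, r.rolname AS owner
FROM pg_class c
JOIN pg_roles r ON r.oid = c.relowner
WHERE c.relname = 'xxx';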

Cheers,
M
-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] self referencing table.

2012-01-18 Thread David Salisbury



On 1/17/12 6:00 PM, Chris Travers wrote:

On Tue, Jan 17, 2012 at 4:31 PM, David Salisbury  wrote:


I've got a table:

   Taxa
 Column |Type
+-
id  | integer |
parent_id   | integer |
taxonomic_rank  | character varying(32)   |
latin_name  | character varying(32)

It's basically a self referential table, with
values in the taxonomic_rank like

  phylum
  family
  order
  genus
  species

So at any row in the table I can get all the parent
information by traversing upward using the parent id.

However I'm interested in only getting just genus and species
when I'm given a taxa.id value.  It would be a nice simple
self join if the taxa.id I was given was always to a
row with rank of 'species'.  Problem is, grasses don't
have species, so sometimes my id is pointing to a genus
row instead ( the id will be to lowest rank  ), so the
parent is of no use.


So basically you are just getting genus and species, why not just join
the table against itself?  It's not like you need recursion here.
Something like:

select g.latin_name as genus, s.latin_name as species
from "Taxa" s
join "Taxa" g ON s.parent_id = g.id
WHERE s.taxonomic_rank = 'species' AND s.id = ?

If you want the whole taxonomic ranking, you'd probably have to do a
with recursive...

Best Wishes,
Chris Travers


Well, that works fine if my s.id is pointing to a row that has a
taxonomic_rank of 'species'.  But that's not always the case.  If
there is no species for a plant's classification, the rank will be
'genus' for that s.id, so the query above would return nothing.
Instead, for that case I'd like the query to return s.latin_name as genus, and
null for species.  I'm wondering if I'm missing something
clever to do this, but I'm seeing this logic as row based iteration
type stuff.. :(

Cheers,

-ds




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] self referencing table.

2012-01-18 Thread David Salisbury



On 1/18/12 9:46 AM, David Salisbury wrote:



On 1/17/12 6:00 PM, Chris Travers wrote:

On Tue, Jan 17, 2012 at 4:31 PM, David Salisbury
wrote:


I've got a table:

Taxa
Column | Type
+-
id | integer |
parent_id | integer |
taxonomic_rank | character varying(32) |
latin_name | character varying(32)

It's basically a self referential table, with
values in the taxonomic_rank like

phylum
family
order
genus
species

So at any row in the table I can get all the parent
information by traversing upward using the parent id.

However I'm interested in only getting just genus and species
when I'm given a taxa.id value. It would be a nice simple
self join if the taxa.id I was given was always to a
row with rank of 'species'. Problem is, grasses don't
have species, so sometimes my id is pointing to a genus
row instead ( the id will be to lowest rank ), so the
parent is of no use.


So basically you are just getting genus and species, why not just join
the table against itself? It's not like you need recursion here.
Something like:

select g.latin_name as genus, s.latin_name as species
from "Taxa" s
join "Taxa" g ON s.parent_id = g.id
WHERE s.taxonomic_rank = 'species' AND s.id = ?

If you want the whole taxonomic ranking, you'd probably have to do a
with recursive...

Best Wishes,
Chris Travers


Well, that works fine if my s.id is pointing to a row that has a
taxonomic_rank of 'species'. But that's not always the case. If
there is no species for a plant's classification, the rank will be
'genus' for that s.id, so the query above would return nothing.
Instead, for that case I'd like the query to return s.latin_name as
genus, and
null for species. I'm wondering if I'm missing something
clever to do this, but I'm seeing this logic as row based iteration
type stuff.. :(

Cheers,

-ds


Think I'll answer myself on this.  I'll join whatever rows I get from
the self-referential query above to the base table, include the rank column,
and then figure out some sort of post-processing on the resultant view
(I hope).
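
For what it's worth, one sketch along those lines (not from the thread; table
and column names as above) that returns the row's own latin_name as genus and
a NULL species when the given id stops at genus:

select case when s.taxonomic_rank = 'species' then g.latin_name
            else s.latin_name end as genus,
       case when s.taxonomic_rank = 'species' then s.latin_name end as species
from taxa s
left join taxa g on g.id = s.parent_id
where s.id = ?;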

-ds

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] On duplicate ignore

2012-01-18 Thread Atul Goel
There is no way you can make sure of this at the application level. Think of an
uncommitted row and another session inserting the same value at the same moment
in time.


Thanks,
Atul Goel


-Original Message-
From: Gnanakumar [mailto:gna...@zoniac.com]
Sent: 18 January 2012 12:59
To: Atul Goel; pgsql-general@postgresql.org
Subject: RE: [GENERAL] On duplicate ignore

> Just create a unique index on EMAIL column and handle error if it
> comes

Thanks for your suggestion.  Of course, I do understand that this could be 
enforced/imposed at the database-level at any time.  But I'm trying to find out 
whether this could be solved at the application layer itself.  Any 
thoughts/ideas?




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Full text search - How to build a filtering dictionary

2012-01-18 Thread Antonio Franzoso

Thanks for the reply,
is there any simpler way? I have to do just a simple mapping (in a similar 
way to the synonym dictionary), set the TSL_FILTER flag (if there's a map 
for a token) and then pass the normalized token to my own thesaurus 
dictionary. I'm working on Windows and I have to write a C library to do 
these operations (and I cannot see the unaccent code because it's a dll 
file).
If there is no other solution, I thought that I could integrate this 
filtering dictionary into the thesaurus in a similar way:


token: lemma, term1, term2,

where token is the denormalized term, lemma is one entry of the thesaurus 
and term1, term2, ... are the terms associated with lemma in the original 
thesaurus structure. What do you think about this solution?


Il 18/01/2012 17:40, Oleg Bartunov ha scritto:

Antonio,

you can see contrib/unaccent dictionary, which is a filtering 
dictionary. I have a page about it - 
http://mira.sai.msu.su/~megera/wiki/unaccent





Oleg
On Wed, 18 Jan 2012, Antonio Franzoso wrote:


Hi all,
I need to build a synonym dictionary that performs a normalization of 
tokens just like a filtering dictionary does. I've searched for a 
filtering dictionary template but I haven't found one. Where can I find it? 
Or, if there isn't such a template, how can I build a simple filter 
dictionary that simply maps one term to another (in a synonym 
dict-like way)?


Thanks in advance,
Antonio




Regards,
Oleg
_
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: o...@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Table permissions

2012-01-18 Thread salah jubeh
Hello, 

Thanks for the info, I have already solved this by writing the following 
function. Also, I think it is better than changing the catalog tables directly.

Regards
  

CREATE OR REPLACE FUNCTION grant_permissions  (org_tbl TEXT  , new_tbl TEXT , 
change_owner BOOLEAN) RETURNS VOID
AS 
$$
    DECLARE 
        tblOwner      text  := '';
        roleName      text  := '';
        privilegeType      text  := '';
        grantSql       text  := '';
    BEGIN
        -- Some checks
        IF NOT EXISTS (SELECT 1 FROM information_schema.tables WHERE table_name 
= $1) THEN
            RAISE EXCEPTION 'The relation % does not exists', $1;
        ELSIF NOT EXISTS (SELECT 1 FROM information_schema.tables WHERE 
table_name = $2) THEN
            RAISE EXCEPTION 'The relation % does not exists', $2;
        ELSIF  NOT EXISTS (SELECT 1 FROM information_schema.table_privileges 
WHERE table_name = $1) THEN
            RAISE EXCEPTION 'No privileges assigned to the relation %', $1;
        END IF;

        -- Change the table owner 
        IF  (change_owner) THEN
            SELECT tableowner INTO  tblOwner  FROM pg_tables WHERE tablename = $1;
            grantSql = 'ALTER TABLE ' ||  quote_ident ($2) || ' OWNER TO ' || 
tblOwner || E';\n' ;
        END IF;

        -- Grant the privileges
        FOR roleName IN SELECT DISTINCT grantee FROM 
information_schema.table_privileges WHERE table_name = $1 LOOP 
            -- 6 = DELETE, INSERT, UPDATE, SELECT, TRIGGER, REFERENCE
            IF (SELECT count(*) FROM information_schema.table_privileges WHERE 
table_name = $1 AND grantee = roleName)::INTEGER = 6 THEN
                grantSql = grantSql || 'GRANT SELECT ON TABLE ' ||quote_ident 
($2) || ' TO ' || roleName ||E';\n' ;
            END IF;

            FOR privilegeType IN SELECT privilege_type FROM 
information_schema.table_privileges WHERE table_name = $1 AND grantee = 
roleName LOOP
                grantSql = grantSql || 'GRANT ' || privilegeType ||' ON TABLE ' 
|| quote_ident ($2)  ||' TO '|| roleName || E';\n' ;
            END LOOP;
             
        END  LOOP; 

        --Execute ALL
        EXECUTE grantSQL;
    END 
$$ LANGUAGE 'plpgsql' ;




 



 From: A.M. 
To: salah jubeh  
Cc: pgsql  
Sent: Wednesday, January 18, 2012 5:44 PM
Subject: Re: [GENERAL] Table permissions 
 

On Jan 18, 2012, at 8:48 AM, salah jubeh wrote:

> Hello,
> 
> I have create a table from another table such as 
> 
> CREATE TABLE tmp_XXX AS SELECT * FROM XXX;  
> 
> 
> The  tmp_XXX  tables has no permissions assigned to it and I  want to assign 
> it with the same owner and access privileges  of XXX  table. I had a look on 
> pg catalog tables  http://www.postgresql.org/docs/8.4/static/catalogs.html to 
> create a stored procedure to do this job for me. I have found some tables 
> which might be useful to get the original table iformation. For example, I 
> can use  pg_table to get the owner of the original table.  Also, I have found 
> pg_roles. However, I was not able to find where the access privileges are 
> stored.
> 
> Is there a better way to do this task than extracting the access privileges 
> from pg catalog ?  If not, where I can find the access privileges information 
> ?
> 

You are looking for pg_catalog.pg_class.relacl. Just copy that from the 
original table to the duplicate (and perhaps relowner, depending on your 
situation), and you will have duplicate permissions.

Cheers,
M
-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: [GENERAL] Partial index does not make query faster

2012-01-18 Thread Tom Lane
Ruben Blanco  writes:
> I'm trying to reduce execution time on a query using a partial index,
> but Postgres doesn't make a significant improvement, even when the
> partial index is 30 times smaller than the index used currently.

That doesn't really matter that much.  The part of the index a given
query will actually access is about the same size either way, ie same
number of leaf tuples of the same size.  If you're lucky, you might save
one level of btree descent to reach the leaf pages; but considering that
btree fanout for integer-size keys is several hundred to one, you need a
size ratio of several hundred even to be assured of that.

The planner does have a small correction to favor smaller indexes over
larger, but it's so small that it's often lost in the noise.  In this
case I think it's probably getting swamped by rounding off the estimate
of the number of leaf pages accessed to the nearest number of pages.
So it doesn't see the partial index as being any cheaper to use than the
full index.

>   Indexes:
>   "calls_201109_index_2" btree (company, call_date, caller_cli)
>   "calls_201109_index_partial" btree (company, call_date, caller_cli) 
> WHERE call_date = '2011-09-01'::date

In this case you could have made the partial index smaller in a useful
way (ie, reducing the number of leaf pages touched) by omitting the
call_date column, which is quite redundant given the WHERE clause.
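For concreteness, that slimmer index would be something like (a sketch, same
predicate with call_date dropped from the key):

CREATE INDEX calls_201109_index_partial
    ON calls_201109 (company, caller_cli)
    WHERE call_date = '2011-09-01'::date;
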
I experimented a bit with that, but found that there actually is a
planner bug in that case --- it misestimates the number of index tuples
to be read because of failing to account for the partial index predicate
in one place.  I'll see about fixing that, but in the meantime I don't
think you are really going to get any win with the above line of
thought, for a couple reasons:

First, is there some reason why 2011-09-01 is such a special date that
it deserves its own index, or are you just showing us a fragment of a
grand plan to manually partition the index through creating a large set
of partial indexes?  That sort of approach is almost certainly going to
be a dead loss once you consider the extra overhead of updating the
indexes and the extra planning costs.  Basically, while partitioning a
table can be useful when it comes time to drop one partition, it doesn't
save anything for routine queries except in very special cases; and
since there's no equivalent administrative need at the index level,
partitioning an index is not going to be a win.

Second, even if you can omit the call_date column, that's not going to
make this index much smaller, perhaps even not at all smaller depending
on alignment considerations.  You need to reduce the size of an index
entry by probably a factor of 2 before it's worth the extra complexity.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Table permissions

2012-01-18 Thread A.M.

On Jan 18, 2012, at 12:25 PM, salah jubeh wrote:

> Hello, 
> 
> Thanks for the info, I have already solved this by writing the following 
> function. Also, i think it is better than changing the schema tables directly 
> 
> Regards
>   

It doesn't look like the procedure handles grant options (WITH GRANT OPTION), 
so the output ACL will not be the same as the input ACL.

Cheers,
M
-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] scenario with a slow query

2012-01-18 Thread Tom Lane
Volodymyr Kostyrko  writes:
> Maybe I'm missing something but I have found a case when planner is 
> unoptimal.

The planner knows next to nothing about optimizing FULL JOIN, and
I would not recommend holding your breath waiting for it to get better
about that, because there's basically no demand for the work that'd
be involved.  I'd suggest refactoring this query instead.  A nest of
full joins seems like a rather unintuitive way to get the result
anyway ...
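
For instance, a conditional-aggregation rewrite (a sketch, not from the
original reply; it scans the table once instead of full-joining four
aggregates, returning NULLs where a branch has no rows):

select id,
       sum(case when date = now()::date then count end) as today,
       sum(case when date = (now() - interval '1 day')::date then count end) as lastday,
       sum(case when date_trunc('week', now()) = date_trunc('week', date)
                 and date <> now()::date then count end) as week,
       sum(case when date <> now()::date or date is null then count end) as whole
from test_stat
where id = ?
group by id;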

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Transaction ID wraparound, Oracle style

2012-01-18 Thread Igor Polishchuk
Here is an article on a recently discovered Oracle flaw, which allows SCN to
reach its limit.
http://www.computerworld.com/s/article/9223506/Fundamental_Oracle_flaw_revea
led?taxonomyId=18&pageNumber=1
 
Please don't beat me for posting a link to an Oracle-related article.
If you despise the very notion of mentioning Oracle, please just don't read
the post.
This article may be interesting to any RDBMS professional, no matter what db
flavor he/she is working with.
Also, this story may be a lesson for the PostgreSQL community on how not to do
things. I'm not a developer, but it seems that having synchronized
transaction ids between, let's say, streaming-replicated databases would give
some advantages if done properly.
Regards
Igor Polishchuk



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Transaction ID wraparound, Oracle style

2012-01-18 Thread Scott Marlowe
On Wed, Jan 18, 2012 at 11:21 AM, Igor Polishchuk  wrote:
> Here is an article on a recently discovered Oracle flaw, which allows SCN to
> reach its limit.
> http://www.computerworld.com/s/article/9223506/Fundamental_Oracle_flaw_revea
> led?taxonomyId=18&pageNumber=1
>
> Please don't beat me for posting a link for an Oracle related article.
> If you despise a very notion of mentioning Oracle, please just don't read
> the post.
> This article may be interesting to any RDBMS  professional, no mater what db
> flavor he/she is working with.
> Also, this story may be a lesson for the Postgresql community on how not do
> things. I'm not a developer, but it seems that having synchronized
> transaction id between let say streaming-replicated databases would give
> some advantages if done properly.

Wow, interesting difference: postgresql occasionally resets its smaller
transaction id to prevent wraparound, whereas oracle just uses a bigger
number.  If my calcs are right, Oracle has about 500 years before it hits
the wraparound limit at 16ktps etc.

Thanks for the link, it was a fascinating read.


-- 
To understand recursion, one must first understand recursion.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Transaction ID wraparound, Oracle style

2012-01-18 Thread A.M.

On Jan 18, 2012, at 2:15 PM, Scott Marlowe wrote:

> On Wed, Jan 18, 2012 at 11:21 AM, Igor Polishchuk  
> wrote:
>> Here is an article on a recently discovered Oracle flaw, which allows SCN to
>> reach its limit.
>> http://www.computerworld.com/s/article/9223506/Fundamental_Oracle_flaw_revea
>> led?taxonomyId=18&pageNumber=1
>> 
>> Please don't beat me for posting a link for an Oracle related article.
>> If you despise a very notion of mentioning Oracle, please just don't read
>> the post.
>> This article may be interesting to any RDBMS  professional, no mater what db
>> flavor he/she is working with.
>> Also, this story may be a lesson for the Postgresql community on how not do
>> things. I'm not a developer, but it seems that having synchronized
>> transaction id between let say streaming-replicated databases would give
>> some advantages if done properly.
> 
> Wow, interesting difference between postgresql which occasionally
> resets its smaller transaction id to prevent wrap whereas oracle just
> uses a bigger number.  If my calcs are right, Oracle has about 500
> years to figure out the wrap around limit at 16ktps etc.
> 
> Thanks for the link, it was a fascinating read.

By the way, this is called a Lamport clock.

http://en.wikipedia.org/wiki/Lamport_timestamps?banner=none

"On receiving a message, the receiver process sets its counter to be greater 
than the maximum of its own value and the received value before it considers 
the message received."

Cheers,
M
-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Does Version 9.1 Streaming Replication Supports Multi-Master?

2012-01-18 Thread Simon Riggs
On Tue, Jan 17, 2012 at 6:33 PM, Fujii Masao  wrote:
> On Wed, Jan 18, 2012 at 3:09 AM, Jerry Richards
>  wrote:
>> I know PostgreSQL version 9.1 supports  built-in streaming replication.
>> Just wondering if that supports only a single-master or also multi-master
>> implementation?
>
> Only a single-master. If you want a multi-master solution, see Postgres-XC.

Postgres-XC doesn't support multi-site disaster recovery.

Oracle RAC recommends the use of Data Guard as well. Does Postgres-XC
support streaming replication for replay on another Postgres-XC
cluster? If so, you have been a busy bee.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Does Version 9.1 Streaming Replication Supports Multi-Master?

2012-01-18 Thread Koichi Suzuki
Yes it does, though not for hot standby.  Each node can have a streaming
replication slave for recovery.   Also, GTM has its own
backup/recovery mechanism which provides no-loss failover when GTM
fails.

Regards;
--
Koichi Suzuki

# For XC to provide the hot standby feature, we need to synchronize the
visibility of each node, which is not implemented yet.  For recovery,
XC provides a feature to synchronize the recovery point of each node.


2012/1/19 Simon Riggs :
> On Tue, Jan 17, 2012 at 6:33 PM, Fujii Masao  wrote:
>> On Wed, Jan 18, 2012 at 3:09 AM, Jerry Richards
>>  wrote:
>>> I know PostgreSQL version 9.1 supports  built-in streaming replication.
>>> Just wondering if that supports only a single-master or also multi-master
>>> implementation?
>>
>> Only a single-master. If you want a multi-master solution, see Postgres-XC.
>
> Postgres-XC doesn't support multi-site disaster recovery.
>
> Oracle RAC recommends the use of Data Guard as well. Does Postgres-XC
> support streaming replication for replay on another Postgres-XC
> cluster? If so, you have been a busy bee.
>
> --
>  Simon Riggs   http://www.2ndQuadrant.com/
>  PostgreSQL Development, 24x7 Support, Training & Services
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general