Re: [GENERAL] Longest prefix matching CTE

2015-02-25 Thread Tim Smith
Will take a look. Thanks steve. On 24 February 2015 at 23:57, Steve Atkins wrote: > > On Feb 24, 2015, at 3:50 PM, Tim Smith wrote: > >> >> >> The goal being to match the longest prefix given a full phone number, e.g. >> >> >> 61234567890 would match "australia proper 61" >> whilst >> 61134567
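A plain-SQL way to express the longest-prefix lookup described above is to generate every prefix of the dialed number and keep the longest one present in the codes table. This is a sketch, not a tested solution; the `codes(pfx, info)` schema is inferred from the thread.

```sql
-- Hypothetical schema from the thread: codes(pfx text, info text).
-- Generate every prefix of the phone number, join against codes,
-- and keep the longest match.
WITH prefixes AS (
    SELECT substr('61234567890', 1, n) AS pfx
    FROM generate_series(1, length('61234567890')) AS n
)
SELECT c.pfx, c.info
FROM prefixes p
JOIN codes c ON c.pfx = p.pfx
ORDER BY length(c.pfx) DESC
LIMIT 1;
```

For a phone number of length n this probes at most n prefixes, so an ordinary btree index on `codes.pfx` is enough.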

Re: [GENERAL] Longest prefix matching CTE

2015-02-25 Thread Pavel Stehule
Some other solutions http://postgres.cz/wiki/PostgreSQL_SQL_Tricks_II#Fast_searching_of_longer_prefix 2015-02-25 9:04 GMT+01:00 Tim Smith : > Will take a look. Thanks steve. > > On 24 February 2015 at 23:57, Steve Atkins wrote: > > > > On Feb 24, 2015, at 3:50 PM, Tim Smith > wrote: > > > >>

Re: [GENERAL] Longest prefix matching CTE

2015-02-25 Thread Alban Hertroys
> On 25 Feb 2015, at 24:50, Tim Smith wrote: > > Have an Oracle "connect by" SQL that looks something like : > > select phone, pfx, len, (select info from codes where > pfx = x.pfx) infot > from ( > select :x phone, to_number(substr( :x, 1, length(:x)-level+1 )) pfx, > length(:x)-level+1 len >

Re: [GENERAL] 9.3: bug related to json

2015-02-25 Thread Torsten Förtsch
On 25/02/15 07:22, David G Johnston wrote: > I'm doubting you intended to join a bunch of commas using the field value as > the delimiter...methinks you got the argument order reversed for > string_agg. OMG, I am so stupid. Thanks. -- Sent via pgsql-general mailing list (pgsql-general@postgres
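For reference, `string_agg` takes the value first and the delimiter second; with the arguments reversed you aggregate the literal delimiter string and use the column as the separator, which is the symptom described above. A minimal illustration:

```sql
-- Correct: aggregate the column values, separated by ', '.
SELECT string_agg(name, ', ')          -- 'a, b, c'
FROM (VALUES ('a'), ('b'), ('c')) AS t(name);

-- Reversed arguments aggregate the literal ', ' once per row,
-- joined by the column values instead.
SELECT string_agg(', ', name)
FROM (VALUES ('a'), ('b'), ('c')) AS t(name);
```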

Re: [GENERAL] 9.3: bug related to json

2015-02-25 Thread Torsten Förtsch
On 25/02/15 07:34, David G Johnston wrote: > Torsten Förtsch wrote >> > Is there anything similar for JSON scalars? > IDK, but have you tried "::text"? yes. Here is the difference select * from (values (('{"a":"b"}'::json -> 'a')::text), ('{"a":"b"}'::json ->> 'a')) t; col
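The difference being shown: casting a `json` value to `text` keeps the JSON quoting, while the `->>` operator extracts the dequoted text value.

```sql
SELECT ('{"a":"b"}'::json -> 'a')::text AS cast_to_text,  -- '"b"' (quotes kept)
       '{"a":"b"}'::json ->> 'a'        AS double_arrow;  -- 'b'   (dequoted)
```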

Re: [GENERAL] Sequences not moved to new tablespace

2015-02-25 Thread Albe Laurenz
Guillaume Drolet wrote: If you want to move a whole database to a different tablespace (the only reason I can think of for doing what you are trying to so), use the command ALTER DATABASE ... SET TABLESPACE ... >>> Thanks Laurenz. I tried your suggestion: >>> >>> psql -U postg
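The command being discussed moves every object in the database, sequences included, in one step. A sketch with example names:

```sql
-- Moves all objects in the database, sequences included, to the new
-- tablespace. Must be run while connected to a *different* database,
-- and no other sessions may be connected to "mydb".
-- "mydb" and "newspace" are placeholder names.
ALTER DATABASE mydb SET TABLESPACE newspace;
```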

Re: [GENERAL] 9.3: bug related to json

2015-02-25 Thread Tom Lane
Torsten Förtsch writes: > On 25/02/15 07:34, David G Johnston wrote: >> Torsten Förtsch wrote >>> Is there anything similar for JSON scalars? >> IDK, but have you tried "::text"? > yes. Here is the difference > select * from (values (('{"a":"b"}'::json -> 'a')::text), >

[GENERAL] utf8 issues

2015-02-25 Thread Felix Ivan Romero Rodríguez
I've got the following issue with postgres 9.3, trying to restore a backup from an sql script: invalid byte sequence for encoding "UTF8": 0xc2de. How can I solve it?

Re: [GENERAL] utf8 issues

2015-02-25 Thread Vick Khera
Fix the character sequence to be valid UTF8 before you restore it. I'm assuming you're restoring from a pretty old version of Postgres which did not do strict UTF8 character validation. Are you sure the data is encoded as UTF8 and not some other encoding?
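One common way to make such a dump valid UTF-8 before restoring is to round-trip it through iconv; with `-c`, any byte sequences that are not valid UTF-8 are silently dropped, so the cleaned file should be diffed against the original to see what was removed. The file names are hypothetical; the `printf` line only simulates a corrupted dump for demonstration.

```shell
# Simulate a dump containing the invalid byte pair 0xc2 0xde.
printf 'good line\n\xc2\xdebad byte pair\n' > dump.sql

# -c drops bytes that are not valid UTF-8.
iconv -f UTF-8 -t UTF-8 -c dump.sql > dump_clean.sql

# Verify the cleaned file is now valid UTF-8.
iconv -f UTF-8 -t UTF-8 dump_clean.sql > /dev/null && echo "clean"
```

After cleaning, `psql -f dump_clean.sql` should no longer raise the encoding error, though rows that contained the stripped bytes will have lost those characters.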

Re: [GENERAL] utf8 issues

2015-02-25 Thread Adrian Klaver
On 02/25/2015 07:11 AM, Felix Ivan Romero Rodríguez wrote: I've got the following issue with postgres 9.3, trying to restore a backup from sql script: invalid byte sequence for encoding "UTF8": 0xc2de. How can I solve it? Provide more information:) Where is the backup coming from, another databa

Re: [GENERAL] Some indexing advice for a Postgres newbie, please?

2015-02-25 Thread Merlin Moncure
On Sat, Feb 21, 2015 at 5:33 AM, Arjen Nienhuis wrote: > Hi, > > On 19 Feb 2015 17:12, "brian" wrote: >> >> >> Hi folks, >> >> I have a single-user application which is growing beyond the >> fixed-format data files in which it currently holds its data, I need a >> proper database as the backend.

[GENERAL] : :Full text search query ::

2015-02-25 Thread JD
Hi All, please find herewith the following query 1. select * from partdetails where scode=118 and (to_tsvector('english'::regconfig, part_number::text)) @@ to_tsquery('104') it is showing only 1 record as output, it is expected to give 17 records as output. 2. select * from partdetails where s

[GENERAL] Locking during UPDATE query with SUBSELECT

2015-02-25 Thread Cenkar, Maciej
Hi All, I wonder if anyone can explain something I cannot easily find on google. Given PostgreSQL 9.3.5 what is locking strategy when executing query such as: UPDATE table SET some_col = some_val WHERE id IN (SELECT id FROM expensive_query_with_joins). Is this starting to lock rows after it ex

Re: [GENERAL] Row-level Security vs Application-level authz

2015-02-25 Thread Dowwie
Thanks, Stephen and David for your responses. My setup is as you described it. Thanks for clarifying. -- View this message in context: http://postgresql.nabble.com/Row-level-Security-vs-Application-level-authz-tp5839069p5839291.html Sent from the PostgreSQL - general mailing list

Re: [GENERAL] : :Full text search query ::

2015-02-25 Thread Tomas Vondra
Hi, On 25.2.2015 12:50, JD wrote: > Hi All, > > please find herewith the following query > > 1. select * from partdetails where scode=118 and > (to_tsvector('english'::regconfig, part_number::text)) @@ to_tsquery('104') > > it is showing only 1 record as output, it is expected to give 17 reco
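When a full text query misses rows unexpectedly, `ts_debug` shows how the parser broke the input into tokens; digits embedded in part numbers often end up inside larger compound tokens that a bare `to_tsquery('104')` will not match. A diagnostic sketch (the sample part number is hypothetical; `partdetails` is from the thread):

```sql
-- Show the token types and lexemes the parser produces.
SELECT alias, token, lexemes
FROM ts_debug('english', 'PN-104-118');

-- Prefix matching can widen the net when '104' is only the start of
-- a larger token.
SELECT *
FROM partdetails
WHERE scode = 118
  AND to_tsvector('english', part_number::text) @@ to_tsquery('104:*');
```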

Re: [GENERAL] newbie how to access the information scheme

2015-02-25 Thread frank ernest
Thanks -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general

[GENERAL] [postgresql 9.3.5] autovacuums stuck on non-existent pg_toast tables

2015-02-25 Thread tpham
Hi everyone, Two weeks ago, one of our Postgres databases crashed violently and had to be brought up again. This certainly resulted in some lost pg_clog files, and we had to zero-fill them in one by one to get autovacuum up and running again. Now, we have two autovacuuming processes constantly st
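For context, the zero-filling mentioned above is a last-resort recovery step: each pg_clog segment is a 256 kB file, and replacing a missing one with zeroes marks the transactions it covered as aborted, so any data referenced by them is silently lost. A sketch of the mechanics (the segment name `0A1B` and the demo directory are placeholders; on a real system the target is `$PGDATA/pg_clog` and the server must be stopped first):

```shell
# DANGEROUS: only for recovering a database whose clog is already gone.
# Using a demo directory here; the real path is $PGDATA/pg_clog.
CLOG_DIR=/tmp/pg_clog_demo
mkdir -p "$CLOG_DIR"

# One clog segment is 256 kB of zeroes (= "all transactions aborted").
dd if=/dev/zero of="$CLOG_DIR/0A1B" bs=256k count=1

ls -l "$CLOG_DIR/0A1B"
```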

Re: [GENERAL] Locking during UPDATE query with SUBSELECT

2015-02-25 Thread David Steele
On 2/25/15 10:49 AM, Cenkar, Maciej wrote: > Given PostgreSQL 9.3.5 what is locking strategy when executing query > such as: > > UPDATE table SET some_col = some_val WHERE id IN (SELECT id FROM > expensive_query_with_joins). > > Is this starting to lock rows after it executed sub-select or is it
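In an UPDATE like this, rows are locked one at a time as the executor reaches them, not up front when the subselect finishes. One way to take the locks deterministically first, and so avoid lock-ordering races with concurrent writers during a long-running subselect, is a prior `SELECT ... FOR UPDATE`. A sketch with the thread's placeholder names (`some_val` is illustrative):

```sql
BEGIN;

-- Lock the target rows first, in a deterministic order.
SELECT id
FROM some_table
WHERE id IN (SELECT id FROM expensive_query_with_joins)
ORDER BY id
FOR UPDATE;

UPDATE some_table
SET some_col = some_val
WHERE id IN (SELECT id FROM expensive_query_with_joins);

COMMIT;
```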

Re: [GENERAL] [postgresql 9.3.5] autovacuums stuck on non-existent pg_toast tables

2015-02-25 Thread Alvaro Herrera
tpham wrote: > Hi everyone, > > Two weeks ago, one of our Postgres databases crashed violently and had to be > brought up again. This certainly resulted in some lost pg_clog files, and we > had to zero-fill them in one by one to get autovacuum up and running again. You should never lose pg_clog f

Re: [GENERAL] Some indexing advice for a Postgres newbie, please?

2015-02-25 Thread Jeremy Harris
On 25/02/15 15:42, Merlin Moncure wrote: > On Sat, Feb 21, 2015 at 5:33 AM, Arjen Nienhuis > wrote: >> Hi, >> >> On 19 Feb 2015 17:12, "brian" wrote: >>> >>> >>> Hi folks, >>> >>> I have a single-user application which is growing beyond the >>> fixed-format data files in which it currently holds

Re: [GENERAL] parallel dump fails to dump large tables

2015-02-25 Thread Shanker Singh
There is no problem dumping large tables using parallel dump. My script had a limit on the file size that was causing parallel dump to abort on large tables. Thanks everyone for the valuable suggestions. Thanks shanker From: Shanker Singh Sent: Monday, February 23, 2015 6:18 PM To: Sterfield Cc:

Re: [GENERAL] [postgresql 9.3.5] autovacuums stuck on non-existent pg_toast tables

2015-02-25 Thread Alvaro Herrera
Tong Pham wrote: > We do have fsync turned on, and there was no disk failure. The database > had to be shut down forcefully because it was becoming nonresponsive > (probably due to inadequate earlier vacuuming) and we could not get the > remaining queries to terminate with normal cancel/terminate

Re: [GENERAL] [postgresql 9.3.5] autovacuums stuck on non-existent pg_toast tables

2015-02-25 Thread Erik Jones
> On Feb 25, 2015, at 3:27 PM, Alvaro Herrera wrote: > > Tong Pham wrote: > >> We do have fsync turned on, and there was no disk failure. The database >> had to be shut down forcefully because it was becoming nonresponsive >> (probably due to inadequate earlier vacuuming) and we could not get t

[GENERAL] Triggers Operations

2015-02-25 Thread Emanuel Araújo
Hi, I have an application that replicates data from an Oracle database to PostgreSQL. The flow goes as follows: oracle.table1 -> AppSincronizador -> postgresql.table1 -> Trigger (upd, ins, del) -> postgresql.table2 I'm having situations where the data volume is so large that the changes that shoul

[GENERAL] Help with tokenization of age-ranges in full text search

2015-02-25 Thread Mason Hale
Hello, I've got a 9.3 database hosted at Heroku. I'm using full text search to search for "group names" in part of my application, and some of my group names are the names of youth sports age groups like "Boys 9-10" or "Girls 11-12". I would like for a search for the terms "Boys", "Boys 9-10", "9", "1

Re: [GENERAL] Help with tokenization of age-ranges in full text search

2015-02-25 Thread Alvaro Herrera
Mason Hale wrote: > Hello, I've got a 9.3 database hosted at Heroku. > > I'm using full text search to search for "group names" in part of my application, > and some of my group names are the names of youth sports age groups like > "Boys 9-10" or "Girls 11-12". > > I would like for a search for the ter
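To see why "Boys 9-10" and its pieces do not all match, the first step is to inspect how the default parser tokenizes the age range; from there, indexing with the `simple` configuration and querying with prefix terms is one possible workaround. A sketch, not a tested solution:

```sql
-- Inspect how the default parser splits the age range into tokens.
SELECT alias, token
FROM ts_debug('english', 'Boys 9-10');

-- With the 'simple' configuration and a prefix query, a search on a
-- bare number can still reach the row.
SELECT to_tsvector('simple', 'Boys 9-10')
       @@ to_tsquery('simple', '9:*');
```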

[GENERAL] Re: [pgadmin-support] Issue with a hanging apply process on the replica db after vacuum works on primary

2015-02-25 Thread Sergey Shchukin
Hi Radovan! Thank you for the reply. The point is that this table is not subject to massive updates/deletes. Are there any additional traces, besides perf or pg_top, to see what the replica is doing at the particular moment when we are lagging in replay? To see locks or spins or slee

[GENERAL] Create Virtual Indexes on Postgres

2015-02-25 Thread Sreerama Manoj
Hi, I use a Postgres 9.4 database. Now, I am optimizing queries by using the results of "explain" and "explain analyze", and sometimes I create indexes to optimize them. But I was not always successful: even when I create an index, the planner sometimes does not use it. So my question
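One answer to "virtual indexes" in Postgres is the hypopg extension, a third-party module (not part of core, so it must be installed separately and checked for compatibility with your server version). It creates hypothetical indexes that occupy no disk and are visible only to the planner, so a plain `EXPLAIN` (without `ANALYZE`) shows whether the index would be used before you pay the build cost. A sketch using the `partdetails` table from the earlier thread:

```sql
-- hypopg must be installed on the server first.
CREATE EXTENSION hypopg;

-- Create a hypothetical index: no disk space, no build time.
SELECT * FROM hypopg_create_index('CREATE INDEX ON partdetails (scode)');

-- Plain EXPLAIN (not EXPLAIN ANALYZE) will consider the hypothetical
-- index; if it appears in the plan, building the real index is worthwhile.
EXPLAIN SELECT * FROM partdetails WHERE scode = 118;
```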