I fear I have an involved challenge concerning FTS.
Assume I have the following text in a column:
Graz,06.Bez.:Blah
This parses as:
SELECT alias, description, token FROM ts_debug('german',
'Graz,06.Bez.:Blah');
 alias | description | token
-------+-------------+-------
 ascii
On 2016-03-08 10:16:57 +, Geoff Winkless wrote:
> On 7 March 2016 at 20:40, Peter J. Holzer wrote:
> > As Tom wrote, the estimate of having to read only about 140 rows is only
> > valid if sc_id and sc_date are uncorrelated. In reality your query has
> > to read a lot more than 140 rows, so it
On 03/12/2016 08:11 AM, Condor wrote:
Ccing list
On 03/10/2016 01:09 AM, Condor wrote:
The process that populates them isn't the same, but the data is coming
from the database, not from a user input field.
Any ideas ?
Not at the moment, but some unanswered questions:
Is '28411123315' the only value y
Hi,
I have been searching but haven't been able to find the
answer to the following question:
How can I (programmatically) find out which database a dump
was taken from given the dump file ?
Constraints of the question:
- existing dump in directory format
- dump was taken of only one particular
On Sat, Mar 12, 2016 at 09:33:33PM +0100, Karsten Hilbert wrote:
> Not-so-nice solutions coming to mind:
>
> - rely on the dump file name
> - use pg_restore to create an SQL dump
> with --create and grep the SQL file
> for "create database ..."
> - restore and compare psql -l output
> befor
hmmm, let's see. You haven't specified PostgreSQL version or O/S as is
common sense and courtesy, so I will choose one for you.
You are using PostgreSQL version 8.4 on Ubuntu 14.04
Since pg_dump requires an output file, and the database you are dumping
must be known, just use the db name in the p
On Sat, Mar 12, 2016 at 09:38:13PM +0100, Karsten Hilbert wrote:
> > Not-so-nice solutions coming to mind:
> >
> > - rely on the dump file name
> > - use pg_restore to create an SQL dump
> > with --create and grep the SQL file
> > for "create database ..."
> > - restore and compare psql -l ou
On 12 March 2016 at 18:43, Peter J. Holzer wrote:
> On 2016-03-08 10:16:57 +, Geoff Winkless wrote:
>> On 7 March 2016 at 20:40, Peter J. Holzer wrote:
>> > As Tom wrote, the estimate of having to read only about 140 rows is only
>> > valid if sc_id and sc_date are uncorrelated. In reality yo
On Sat, Mar 12, 2016 at 03:55:32PM -0500, Melvin Davidson wrote:
> hmmm, let's see. You haven't specified PostgreSQL version or O/S as is
> common sense and courtesy, so I will choose one for you.
:-) Sorry. I am on 9.5.1 on Debian 8.0.
OTOH, in the wild it could be any OS and PG 9.1.0 upward
On 3/12/2016 12:33 PM, Karsten Hilbert wrote:
How can I (programmatically) find out which database a dump
was taken from given the dump file ?
Constraints of the question:
- existing dump in directory format
- dump was taken of only one particular database
I know of no documentation on the fo
On Sat, Mar 12, 2016 at 10:05:47PM +0100, Karsten Hilbert wrote:
> :-) Sorry. I am on 9.5.1 on Debian 8.0.
Debian Testing to be precise:
root@hermes:~/tmp# apt-cache policy postgresql
postgresql:
Installed: 9.5+172
Candidate: 9.5+172
BTW, other than the obvious of including the name in path or file, if you
are referring to previous/existing dumps than one of two options apply.
grep -i 'CREATE DATABASE' some_dump_file
If it's found, then you know it's from at least one known database.
If nothing is found, then the dump can be a
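For the plain-SQL variant of this idea, a small self-contained sketch (the dump text below is made up; note that grep takes the pattern first, then the input, and a CREATE DATABASE line only appears in dumps taken with --create):

```shell
# Hypothetical plain-SQL dump text; a CREATE DATABASE line is only
# present when the dump was taken with --create.
sample_dump='--
-- PostgreSQL database dump
--
CREATE DATABASE mydb WITH TEMPLATE = template0;'

# Pattern first, then input; the database name is the third field.
printf '%s\n' "$sample_dump" | grep -i 'CREATE DATABASE' | awk '{print $3}'
# prints "mydb"
```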
On Sat, Mar 12, 2016 at 01:12:52PM -0800, John R Pierce wrote:
>> Constraints of the question:
>>
>> - existing dump in directory format
>> - dump was taken of only one particular database
>
> I know of no documentation on the format of the toc.dat file contained in
> that directory format pg_dum
On Sat, Mar 12, 2016 at 04:17:07PM -0500, Melvin Davidson wrote:
> BTW, other than the obvious of including the name in path or file, if you
> are referring to previous/existing dumps
I do.
> grep -i 'CREATE DATABASE' some_dump_file
That will not work (directly) because the dump is in
directory
Hello again,
no such luck (no easy fix). I turned SIP off and rebuilt PG 9.5.1, copied
libpq.5.8.dylib to /usr/lib, bent the two sym links to the new library but I'm
still getting "Bus error: 10". You owe somebody a nickel, Tom ;).
Larsaf:postgresql-9.5.1 root# cp src/interfaces/libpq/libpq.5.8
On 3/12/2016 12:59 PM, Karsten Hilbert wrote:
Another option that comes to mind is
pg_restore -l $DUMPDIR | grep dbname: | cut -f 7 -d ' ' -s
but that is quite fragile on the
-f 7 -d ' '
side of things. That, however, is another question.
starting with...
pg_dump -Fd -f junky "dat
On 3/12/2016 1:40 PM, John R Pierce wrote:
pg_restore -l junky | awk '/^;\wdbname: (.*)/{print $3}'
oops, pasted the wrong one, meant that one to be ...
pg_restore -l junky | awk '/^; +dbname: /{print $3}'
but I think the 2nd one is more robust
--
john r pierce, recycling bits in sant
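That one-liner can be checked against a mocked-up pg_restore -l header; the exact header layout below is an assumption from memory, and only the "; dbname: ..." line matters:

```shell
# Simulated first lines of `pg_restore -l` output for a directory dump.
# (Header text assumed from memory; only the "; dbname: ..." line matters.)
toc=';
; Archive created at 2016-03-12 22:00:00
;     dbname: mydb
;     TOC Entries: 42'

# Same extraction as the one-liner above: ";" is field 1, "dbname:" is
# field 2, so the database name is field 3.
printf '%s\n' "$toc" | awk '/^; +dbname: /{print $3}'
# prints "mydb"
```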
On Sat, Mar 12, 2016 at 2:05 PM, Karsten Hilbert
wrote:
> On Sat, Mar 12, 2016 at 03:55:32PM -0500, Melvin Davidson wrote:
>
> > hmmm, let's see. You haven't specified PostgreSQL version or O/S as is
> > common sense and courtesy, so I will choose one for you.
>
> :-) Sorry. I am on 9.5.1 on D
On 2016-03-12 21:00:04 +, Geoff Winkless wrote:
> On 12 March 2016 at 18:43, Peter J. Holzer wrote:
> > The question is what can be done to improve the situation.
> >
> > Tom thinks that correlation statistics would help. That seems plausible
> > to me.
[...]
> > You claim that no statistics a
On Sat, Mar 12, 2016 at 02:59:05PM -0700, David G. Johnston wrote:
> And a much more reasonable assumption would have been 9.5 - let the user
> complain if/when the advice doesn't work because they are on an unstated
> older release that doesn't support the feature in question.
>
> I guess the O/
On 2016-03-10 11:09:00 +0200, Condor wrote:
> I am using postgresql 9.5.1 and I have a problem matching a value in one field.
> Both tables are text:
[...]
> =# select imsi from list_cards_tbl where imsi = '28411123315';
> imsi
> --
> (0 rows)
>
> No value, lets change to LIKE
>
> =# select ims
Chris Ruprecht writes:
> no such luck (no easy fix). I turned SIP off and rebuilt PG 9.5.1, copied
> libpq.5.8.dylib to /usr/lib, bent the two sym links to the new library but
> I'm still getting "Bus error: 10". You owe somebody a nickel, Tom ;).
Oh well.
Personally I'd not have messed with t
Hi,
Debian Stretch
PG 9.5.1
I am trying to pg_restore from a directory dump.
However, despite using
--clean
--create
--if-exists
I am getting an error because schema PUBLIC already exists.
That schema is, indeed, included in the dump to be restored
and
On Sun, Mar 13, 2016 at 12:09:19AM +0100, Karsten Hilbert wrote:
In case it is needed:
> pg_restore: erstelle SCHEMA „public“
creating SCHEMA "public"
> pg_restore: [Archivierer (DB)] Fehler in Phase PROCESSING TOC:
Error in Phase ...
> pg_restore: [Archivier
On 03/12/2016 03:09 PM, Karsten Hilbert wrote:
Hi,
Debian Stretch
PG 9.5.1
I am trying to pg_restore from a directory dump.
However, despite using
--clean
--create
--if-exists
I am getting an error because schema PUBLIC already exists.
That schema is,
On Sat, Mar 12, 2016 at 03:32:15PM -0800, Adrian Klaver wrote:
> > pg_restore: [archiver (db)] Error while PROCESSING TOC:
> > pg_restore: [archiver (db)] Error from TOC entry 8;
> > 2615 2200 SCHEMA public postgres
> > pg_restore: [archiver (db)] could not
On Saturday, March 12, 2016, Karsten Hilbert
wrote:
> Hi,
>
> Debian Stretch
> PG 9.5.1
>
> I am trying to pg_restore from a directory dump.
>
> However, despite using
>
> --clean
> --create
> --if-exists
>
> I am getting an error because schema PUBLIC alre
On Sun, Mar 13, 2016 at 12:37:02AM +0100, Karsten Hilbert wrote:
> > > pg_restore: [archiver (db)] Error while PROCESSING TOC:
> > > pg_restore: [archiver (db)] Error from TOC entry 8;
> > > 2615 2200 SCHEMA public postgres
> > > pg_restore: [archiver (db)] could
On Sat, Mar 12, 2016 at 4:32 PM, Adrian Klaver
wrote:
> On 03/12/2016 03:09 PM, Karsten Hilbert wrote:
>
>> Hi,
>>
>> Debian Stretch
>> PG 9.5.1
>>
>> I am trying to pg_restore from a directory dump.
>>
>> However, despite using
>>
>> --clean
>> --create
>>
On Sat, Mar 12, 2016 at 04:53:20PM -0700, David G. Johnston wrote:
> The docs could probably use improvement here - though I am inferring
> behavior from description and not code.
>
> The create option tells restore that it is pointless to use conditions or
> actively drop objects since the newly
On Saturday, March 12, 2016, Karsten Hilbert
wrote:
> On Sat, Mar 12, 2016 at 04:53:20PM -0700, David G. Johnston wrote:
>
> > The docs could probably use improvement here - though I am inferring
> > behavior from description and not code.
> >
> > The create option tells restore that it is pointl
On Sat, Mar 12, 2016 at 05:31:38PM -0700, David G. Johnston wrote:
> > The reason being, of course, that I want to check the exit
> > code in a pg_restore wrapper script.
> >
> >
> I mistakenly thought public only came from template1...I wouldn't be
> opposed to that change. This all seems awfull
On Sat, Mar 12, 2016 at 5:31 PM, David G. Johnston <
david.g.johns...@gmail.com> wrote:
> You probably should just drop the existing database and use --create by
> itself.
>
> You can even use the dropdb command to avoid SQL in your script.
>
>
This seems like it is the main problem:
# dropdb po
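David's suggestion can be sketched as a dry-run shell function that prints, rather than executes, the commands it would run (the database name and dump path are hypothetical):

```shell
# Dry-run sketch of the suggested sequence: drop the old database, then
# let pg_restore recreate it from the dump. "mydb" and the dump path
# are hypothetical; echo instead of executing so no server is needed.
restore_fresh() {
    db=$1; dumpdir=$2
    echo "dropdb --if-exists $db"
    echo "pg_restore --create --exit-on-error -d postgres $dumpdir"
}

restore_fresh mydb /backups/mydb.dir
```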
On Sat, Mar 12, 2016 at 5:43 PM, Karsten Hilbert
wrote:
> On Sat, Mar 12, 2016 at 05:31:38PM -0700, David G. Johnston wrote:
>
> > > The reason being, of course, that I want to check the exit
> > > code in a pg_restore wrapper script.
> > >
> > >
> > I mistakenly thought public only came from tem
On Sat, Mar 12, 2016 at 05:49:56PM -0700, David G. Johnston wrote:
> I'd operate under the premise that all warnings and errors are fatal
> (i.e., keep --exit-on-error) until you cannot for some very specific
> reason.
--exit-on-error will exit on _any_ perceived error,
regardless of whether it c
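A minimal sketch of such a wrapper, with the restore command passed in as arguments so the exit-code handling can be exercised without a server (in real use the command would be a pg_restore invocation like the one in the comment):

```shell
# run_restore CMD ARGS... : run the given restore command and propagate
# any failure loudly; the real command would be something like
#   pg_restore --exit-on-error --create -d postgres /path/to/dumpdir
run_restore() {
    if "$@"; then
        echo "restore ok"
    else
        rc=$?
        echo "restore failed (exit $rc)" >&2
        return $rc
    fi
}

# Illustration with a stand-in command instead of a real pg_restore:
run_restore true
# prints "restore ok"
```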
I have this problem: a Master table containing records with a timestamp
column registering creation date-time, and one Detail table containing
info related to the Master table.
As time went by, those tables grew enormously, and I can't afford to
expand my SSD VPS. So I'm thinking about storing
Hi. I think pgpool-II can do that job for you. It's a middleware, so you can
use it without even changing your app code (only your postgres configuration). It
supports many clustering functions, including replication, failover, and a lot
more; it also supports partitioning, so that may be suitabl
On Saturday, March 12, 2016, Karsten Hilbert
wrote:
> On Sat, Mar 12, 2016 at 05:49:56PM -0700, David G. Johnston wrote:
>
> > I'd operate under the premise that all warnings and errors are fatal
> > (i.e., keep --exit-on-error) until you cannot for some very specific
> > reason.
>
> --exit-on-er
On Sat, Mar 12, 2016 at 8:33 PM, Alvaro Aguayo Garcia-Rada <
aagu...@opensysperu.com> wrote:
> Hi. I think pgpool-II can do that job for you. It's a middleware, so you
> can use it without even changing your app code(but your postgres
> configuration). It supports many clustering functions, inclu
oops! Better example
eg: {note: below is pseudo code}
parent {master} (SSD): holds NO ROWS itself
  trigger tg_insert_child BEFORE INSERT EXECUTE tgf_split_data()
child1 (SSD)  CONSTRAINT: timestamp > {specified time}   -- recent data
child2 (SATA) CONSTRAINT: timestamp <= {specified time}  -- historical data
tgf_split_data(): routes each new row into child1 or child2 by its timestamp
On Tue, Mar 8, 2016 at 2:16 AM, Geoff Winkless wrote:
> On 7 March 2016 at 20:40, Peter J. Holzer wrote:
>> As Tom wrote, the estimate of having to read only about 140 rows is only
>> valid if sc_id and sc_date are uncorrelated. In reality your query has
>> to read a lot more than 140 rows, so it
On Sat, Mar 12, 2016 at 1:00 PM, Geoff Winkless wrote:
>
> As Jeff points out I'd have a much larger win in this instance by
> someone spending the time implementing skip index scans rather than
> messing with the planner :)
But I think the hardest part of implementing skip index scans would
prob