As far as I understand it (which is not much), a GiST index over spatial data
is in fact a GiST index over range(x), range(y).
This is why GiST works in n dimensions: conceptually, it always works on
ranges.
In fact a rectangle is (literally) the intersection of a range on x and a
range on y;
in the same way, a 3D box is the intersection of ranges on x, y and z.
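Not from the original message, just a tiny illustration of that last point
using Postgres' built-in box type (the values are made up): two boxes overlap
exactly when their x ranges overlap and their y ranges overlap.

SELECT box '((0,0),(2,2))' && box '((1,1),(3,3))';  -- true:  [0,2] and [1,3] overlap on both axes
SELECT box '((0,0),(2,2))' && box '((3,0),(4,2))';  -- false: [0,2] and [3,4] do not overlap on x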
Hey, Postgres 9.3 here,
a question for partitioning experts.
I have several child tables with columns (gid serial, patch pcpatch(n)),
where n may change depending on the table, and pcpatch is a type from the
pgpointcloud extension
(here is the definition):
-
CREATE TYPE pcpatch
(INPUT=pcpatch_in,
OUT
> ) INHERITS (father);
Okay,
thanks for the answer.
I take it the answer is "do it differently, then".
I might have an idea but it is not working yet, so I'll post another
message.
Thanks, dear co-list users ^^
Cheers,
Rémi-C
2015-04-01 16:56 GMT+02:00 David G. Johnston :
> On Wed, Apr 1, 2015 at
Hey dear list,
I'd like to partition geographical (geometry) data with the Postgres
partitioning mechanism.
(My usage is in fact related to point clouds, but I use geometry as a
workaround.)
From the examples I read on constraint exclusion, nothing should prevent it
from working.
Here is a self-contained example; the planner doesn't prune as expected.
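The original self-contained example is cut off above; here is a hedged sketch
of the kind of setup being described (assumes PostGIS, and every name here is
made up): inheritance partitioning on a geometry column, one CHECK constraint
per child giving its bounding box.

CREATE TABLE measures (gid serial, geom geometry);
CREATE TABLE measures_west (CHECK (geom && ST_MakeEnvelope(0, 0, 50, 100)))
    INHERITS (measures);
CREATE TABLE measures_east (CHECK (geom && ST_MakeEnvelope(50, 0, 100, 100)))
    INHERITS (measures);

-- constraint_exclusion must be 'partition' (the default) or 'on' for the
-- planner to even consider pruning children from their CHECK constraints.
SET constraint_exclusion = partition;
EXPLAIN SELECT * FROM measures WHERE geom && ST_MakeEnvelope(10, 10, 20, 20);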
Hey dear List,
this is on a Windows XP 64 machine.
I installed Python (64-bit); it works.
Creating plpython3u works, and Python works within the database.
I installed numpy (manually compiled, 64-bit);
it works outside of Postgres,
but inside a plpython3u function, simply doing
'import numpy' raises an error saying that p
Hey, thanks for the help!
> > Hey dear List,
> >
> > On a windows XP 64.
> >
> > I installed python (64b),
> > it works.
>
> What version of Python: 2, 3, or both?
>
> What does python -V show at the command line?
>
Python 3.2 and Python 2.6, both 64-bit, are installed on the PC.
Whe
Hey, thanks for helping me with that.
I started fresh to have a truly reproducible process,
so you can have all information and rule out some error possibilities.
- Uninstall all python.
- Check that PythonPath doesn't exist anymore
- check that python doesn't exist anymore
- install python 3.2.5
s = h.helloworld()
plpy.notice(s)
return
$$ LANGUAGE plpython3u IMMUTABLE STRICT;
--call function
SELECT *
FROM rc_test_python( ) ;
2015-06-01 18:09 GMT+02:00 Rémi Cura :
> Hey, thanks for helping me with that.
> I started fresh to have a truly reprodu
+02:00 Adrian Klaver :
> On 06/01/2015 09:09 AM, Rémi Cura wrote:
>
>> Hey, thanks for helping me with that.
>> I started fresh to have a truly reproducible process,
>> so you can have all information and rule out some error possibilities.
>>
>> - Uninstall all py
Hey,
the only straightforward workaround I know (which is pretty bad)
is to cast your record down to text.
Then you have an array of text, which is manageable.
For this you can either 'flatten' your record into a single text value,
or cast each part of your record to text, then emulate an array of arrays
(you n
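A minimal sketch of that workaround (the table and values are made up):

CREATE TABLE demo (a int, b text);
INSERT INTO demo VALUES (1, 'foo');

-- flatten the whole record into a single text value: gives '(1,foo)'
SELECT d::text FROM demo AS d;

-- or split it into an array of text (naive split: breaks if a field itself contains a comma)
SELECT string_to_array(trim(both '()' from d::text), ',') FROM demo AS d;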
(evolution difficult),
and having many columns will make you create functions with many arguments,
which is often a bad idea.
Cheers,
Rémi-C
2015-06-02 10:36 GMT+02:00 Rémi Cura :
> Hey,
> the only straight workaround I know (which is pretty bad)
> is to cast down your record to text.
> Then
Hey list,
I'm stuck on a problem that I can't figure out (Postgres 9.3).
In short, using an
INSERT INTO __view_with_trigger__ ... RETURNING gid INTO _gid;
returns nothing.
I need this feature because I use views on tables as a user interface.
This must have to do with the Postgres order of execution
I think I got it:
I always have to return something (like NEW) in the INSTEAD OF trigger,
but fill NEW with the RETURNING values of the INSERT into the regular table.
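A minimal sketch of that fix, with hypothetical table/view/column names: the
INSTEAD OF trigger does the real INSERT, copies the generated key back into
NEW, and returns NEW so that INSERT ... RETURNING on the view yields a value.

CREATE TABLE base_table (gid serial PRIMARY KEY, val text);
CREATE VIEW base_view AS SELECT gid, val FROM base_table;

CREATE FUNCTION base_view_insert() RETURNS trigger AS $$
BEGIN
    INSERT INTO base_table (val) VALUES (NEW.val) RETURNING gid INTO NEW.gid;
    RETURN NEW;  -- NEW now carries the generated gid, so RETURNING sees it
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER base_view_insert_trg
    INSTEAD OF INSERT ON base_view
    FOR EACH ROW EXECUTE PROCEDURE base_view_insert();

INSERT INTO base_view (val) VALUES ('x') RETURNING gid;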
Cheers,
Rémi-C
2015-09-02 13:44 GMT+02:00 Rémi Cura :
> Hey list,
> I'm stuck on a problem that I can't figure out (postgres
You could check the max number of points in your geometries:
SELECT max(ST_NumPoints(geom))
FROM ...
Of course you could still have invalid / aberrant geometries,
which you could also check (ST_IsValid, ST_IsSimple).
You could rule out both of those hypotheses if you could perform your buffer
in batches.
Hey dear list,
I can't find a nice solution to enforce a necessary behaviour in my case:
I want a parent table to remain empty.
Of course I could define a trigger and return NULL in every case, but I'd
like a more elegant approach using a check or a constraint.
Any thought appreciated,
Cheers,
Rémi C
Thanks for the fast answer,
Cheers,
Rémi C
2016-10-11 18:33 GMT+02:00 Manuel Gómez :
> On Tue, Oct 11, 2016 at 12:27 PM, Rémi Cura wrote:
> > Hey dear list,
> > I can't find a nice solution to enforce a necessary behaviour in my case
> :
> > I want a parent table to remain emp
Perfect!
Cheers,
Rémi C
2016-10-11 19:12 GMT+02:00 Julien Rouhaud :
> On 11/10/2016 19:04, Rémi Cura wrote:
> > This solution is very nice.
> > Sadly the check is inherited by the children
> > (I only want the parent to be empty, not the children).
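The quoted reply is cut off, but assuming the fix was the usual one (a NO
INHERIT check constraint), a minimal sketch with made-up names looks like
this:

CREATE TABLE parent (gid serial, val text);
CREATE TABLE child () INHERITS (parent);

-- always-false check, NOT inherited by the children
ALTER TABLE parent ADD CONSTRAINT parent_stays_empty CHECK (false) NO INHERIT;

INSERT INTO child (val) VALUES ('ok');      -- allowed
INSERT INTO parent (val) VALUES ('nope');   -- rejected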
Hey,
1 second seems really good in this case,
and I'm assuming you tuned Postgres so the main index fits into RAM
(work_mem and all the other settings).
You could avoid a CTE by merging the two CTEs:
WITH pts AS (
SELECT (pt).geom, (pt).path[1] as vert
FROM
ST_DumpPoints(
ST_Segmentize(
Hey,
I like your curiosity!
At the billion-point range, you __have__ to use pgpointcloud,
a pyramid raster solution (actually the more common way to perform this
task), or another database (hello MonetDB).
Cheers,
Rémi-C
2017-01-09 20:11 GMT+01:00 Jonathan Vanasco :
>
> On Jan 9, 2017, at 12:49 PM, Isra
Hey dear list,
after a few years of experiments and crashes,
I ended up with a grossly bloated Postgres folder.
I had about 8 GB of useless files.
Everything is in a VirtualBox, so I'm sure I can reproduce it exactly, and I
fried my Postgres folder a couple of times before getting it right.
Julien (Rouha
I would gladly do it,
but with this wiki "cool-off" thing
I still can't create a page.
Cheers,
Rémi-C
2016-02-29 20:44 GMT+01:00 Alvaro Herrera :
> Rémi Cura wrote:
> > Hey dear list,
> > after a few years of experiments and crashes,
> > I ended up with a grossly bloate
.0 │ 7584 kB
> 1.6 │ 71 MB
> 1.0 │ 704 kB
> 1.1 │ 6968 kB
> (25 rows)
>
> So actually, quite a lot of bloated data :)
>
> What am I doing wrong?
>
> Mit freundlichen Grüßen / With kind regards,
> Johnny Morano
Hey List,
would it be considered safe to use plpythonu for a production database?
What would be the limitations/ dangers?
Thanks,
Cheers,
Rémi-C
administration-script-like functions, advanced processing, etc.)
Cheers,
Rémi-C
2016-03-03 20:55 GMT+01:00 David G. Johnston :
> On Thu, Mar 3, 2016 at 12:35 PM, Adrian Klaver
> wrote:
>
>> On 03/03/2016 10:09 AM, Rémi Cura wrote:
>>
>>> Hey List,
>>>
>>> wo
t tablespace is also important.
I'll put all of this on the wiki as soon as I have permission to create a
new page
(man, how long is this "cool-off"? It's already been several days!).
Cheers,
Rémi-C
2016-03-03 20:10 GMT+01:00 Julien Rouhaud :
> On 03/03/2016 18:
Thanks !
Cheers,
Rémi-C
2016-03-05 0:38 GMT+01:00 Adrian Klaver :
> On 03/04/2016 01:46 AM, Rémi Cura wrote:
>
>> Thanks for the answer guys.
>>
>> I should have mentioned that I had read the docs,
>> and was looking for non-explicit knowledge,
>> like:
>
Hey List,
I use PL/Python with PostGIS and two Python modules (numpy and shapely).
Sadly, importing these modules in the PL/Python function is very slow
(several hundred milliseconds).
I also don't know whether this overhead applies each time the function is
called in the same session.
Is there a way
get loaded by default,
wouldn't it be possible to hack this module to add other imports inside it?
I also use PL/R (untrusted, I guess) and there you can create a special
table to indicate which modules to load at startup.
Cheers,
Rémi-C
2014-06-25 21:46 GMT+02:00 Jeff Janes :
> On Thu, Jun 19
eom;
2014-06-27 4:27 GMT+02:00 Adrian Klaver :
> On 06/26/2014 02:14 AM, Rémi Cura wrote:
>
>> Hey,
>> thanks for your answer !
>>
>> Yep you are right, the function I would like to test are going to be
>> called a lot (100k times), so even 15 ms pe
Hey,
Postgres already takes care of multiple clients writing/reading,
so you don't really need to be afraid of concurrency (for most use cases).
If you so desire, you could also run multiple servers on the same
machine (although on different ports).
This way each server would have its own
Hey,
I guess you know all about PL/R,
the R language extension for Postgres.
It is very convenient, though be careful, as it sometimes crashed my server.
Cheers,
Rémi-C
2014-07-16 3:42 GMT+02:00 John McKown :
> On Tue, Jul 15, 2014 at 8:46 AM, David G Johnston <
> david.g.johns...@gmail.com> wr
Hey dear list,
following the advice of Depesz, I'm trying to use advisory locks.
I'm trying to perform parallel upserts.
I have 2 different sessions in which I run a very complicated query (lots
of CTEs) concurrently. In fact this complicated query is simply put inside
a plpgsql function for ease
2014-08-11 5:33 GMT+02:00 John R Pierce :
>
> -
>
>> PERFORM pg_advisory_lock(#1 , 2 or 3 depending on the table#,id of the
>> stuff I want to upsert) ;
>> WITH stuff_to_upsert ()
>> ,updating AS (update returning id)
>> ,inserting AS (insert if not updated)
>> PERFORM pg_advisory_unlock(s
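A sketch of that pattern (not the original function), using the
transaction-scoped advisory lock variant instead of explicit lock/unlock; the
table, function and key numbers are made up. Pre-9.5 there is no
INSERT ... ON CONFLICT, hence the update-then-insert CTE dance.

CREATE TABLE stuff (id int PRIMARY KEY, payload text);

CREATE FUNCTION upsert_stuff(_id int, _payload text) RETURNS void AS $$
BEGIN
    -- serialize concurrent upserts on the same (table tag, row id) pair
    PERFORM pg_advisory_xact_lock(1, _id);

    WITH updated AS (
        UPDATE stuff SET payload = _payload WHERE id = _id RETURNING id
    )
    INSERT INTO stuff (id, payload)
    SELECT _id, _payload
    WHERE NOT EXISTS (SELECT 1 FROM updated);
    -- the advisory lock is released automatically at end of transaction
END;
$$ LANGUAGE plpgsql;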
parallel queries to compute faster (load dividing). I guess it would be
very slow with about 8 parallel queries with locks.
I should test this lock approach to be sure.
Thanks to both of you!
Cheers,
Rémi-C
2014-08-11 17:51 GMT+02:00 Merlin Moncure :
> On Mon, Aug 11, 2014 at 9:49 AM, Kevin Grittner
2014-08-11 22:48 GMT+02:00 Kevin Grittner :
> Rémi Cura wrote:
>
> > as you (both?) suggested it works using advisory lock used at the
> > beginning and end of the transaction. This way there is no upsert
> > at all if the element is locked? (I used general advisory loc
4-08-12 14:45 GMT+02:00 Merlin Moncure :
> On Tue, Aug 12, 2014 at 5:07 AM, Rémi Cura wrote:
> >
> >
> >
> > 2014-08-11 22:48 GMT+02:00 Kevin Grittner :
> >
> >> Rémi Cura wrote:
> >>
> >> > as you (both?) suggested it works using ad
Hey,
I'm working in the GIS field and I had the same problems.
The solution I found, which has been working for the past year:
a VirtualBox on an external drive!
This way you can have an independent OS (Linux, for easy
Postgres/PostGIS/whatever GIS you want).
I find it very comfortable because my server is s
Hey,
if you are using postgis and postgis_topology,
there is a specific backup/restore process.
Cheers,
Rémi-C
2014-09-11 12:49 GMT+02:00 Iain Mott :
> Hello,
>
> The server for my websites was recently changed and upgraded. I have ssh
> access to the server and since the upgrade I am no longer ab
BASE > dump.sql
>
> I tried it, but it results in the same error messages
>
> Cheers,
>
> On Thu, 2014-09-11 at 12:53 +0200, Rémi Cura wrote:
> > Hey,
> >
> > if you are using postgis and postgis_topology,
>
Hey,
I had many external hard drive crashes (savage unplugs, power-offs, forced
PC restarts).
The server on the virtual machine was never hurt, nor was the data.
Cheers,
Rémi-C
2014-09-12 15:34 GMT+02:00 George Neuner :
> Hi Craig,
>
> On Fri, 12 Sep 2014 11:33:55 +0800, Craig Ringer
> wrote:
>
> >On 09
Why don't you use "COPY"?
Cheers,
Rémi-C
2014-09-23 12:50 GMT+02:00 FarjadFarid(ChkNet) <
farjad.fa...@checknetworks.com>:
> Hi,
>
> Quotation marks should be around both the name of each and every column
> and their values.
>
> Also replace column commas as separation character for sql i
Hey,
http://www.postgresql.org/docs/9.3/static/contrib-dblink-connect.html
It seems dblink is for Postgres-to-Postgres connections.
Cheers,
Rémi-C
2014-10-22 12:45 GMT+02:00 Postgres India :
> Hi,
> I am trying to connect DB2 from postgres using dblink, is there any
> configuration required at DB2 an
You may want to use a foreign data wrapper (FDW):
https://wiki.postgresql.org/wiki/Foreign_data_wrappers
Cheers,
Rémi-C
2014-10-22 12:50 GMT+02:00 Rémi Cura :
> Hey
> http://www.postgresql.org/docs/9.3/static/contrib-dblink-connect.html
> Seems dblink is for postgres to postgres co
Hey,
a trivial question I guess,
but I can't make it work.
If I define a function returning several values, I can't use it to update
multiple columns of a table at once.
I __don't__ want to use a CTE or a subquery,
and of course I don't want to compute the function several times.
CREATE TABLE test_update_
Thanks,
waiting for 9.5 then =)
Cheers,
Rémi-C
2014-12-02 18:23 GMT+01:00 Tom Lane :
> Rémi Cura writes:
> > If I define a function returning several values, I can't use it to
> update
> > multiple columns of a table at once.
> > ...
> > UPDATE test_update_m_values SET (gid
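The quoted answer is cut off; a sketch of what 9.5 allows (a multi-column
UPDATE from a sub-SELECT), with made-up table, function and column names:

CREATE TABLE test_update_m_values (gid int PRIMARY KEY, a int, b int);
INSERT INTO test_update_m_values VALUES (1, 0, 0);

CREATE FUNCTION two_values(i int, OUT a int, OUT b int) AS $$
    SELECT i * 10, i * 100;
$$ LANGUAGE sql;

-- 9.5+: both columns are set from a single evaluation of the function
UPDATE test_update_m_values
SET (a, b) = (SELECT * FROM two_values(gid))
WHERE gid = 1;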
Hey,
you may want to post this on the PostGIS list.
I take it that so many rows mean either raster or point cloud data.
If it is a point cloud, simply consider using pgpointcloud.
A 6-billion-point cloud is about 600k rows for one of my data sets.
If it is raster, you may consider using the PostGIS raster type.
I
Please let me have one more guess ^^
Third guess: you are using topology (nodes are indexed by node_id).
- If this is the case, you could use PostGIS topology.
- The gain is that with this topology model, you store *shared linestrings*,
and not shared points.
More seriously, from what you say it
More bluntly, maybe:
if you can do it in Excel,
you can do it in Postgres.
Cheers,
Rémi-C
2015-01-21 16:37 GMT+01:00 Raymond O'Donnell :
> On 21/01/2015 14:38, Pierre Hsieh wrote:
> >
> >
> > Hi,
> >
> >
> >
> > Would you please tell me whether PostgreSQL can execute the following
> > tasks? If
Are you sure you don't want a moving window
(stddev on 0 to 50, then stddev on 1 to 51),
etc.?
If you don't want a moving window, your query would look like:
DROP TABLE IF EXISTS your_data;
CREATE TABLE your_data AS
SELECT s AS gid, random() AS your_data_value
FROM generate_series(1,1) AS s;
SEL
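And a sketch of the moving-window variant mentioned above, using a window
frame over the same hypothetical table:

SELECT gid,
       stddev(your_data_value)
           OVER (ORDER BY gid ROWS BETWEEN 50 PRECEDING AND CURRENT ROW) AS moving_stddev
FROM your_data;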
Hey,
I'm not a guru; here is what I understood.
You are mixing several problems in the same question:
- 1. why the planner isn't more efficient
- 2. why the workaround is difficult to use with an ORM.
For 1 you can't do much (as said by others, you don't really need a CASE
here anyway). I thin
Hello dear list,
I would appreciate some help on a small matter that has been bothering me
for a long time:
CREATE TABLE test_insert_returning(
gid SERIAL
,some_value int
);
WITH serie AS (
select s, s*10 as computing
from generate_series(1,10) as s
)
INSERT INTO test_insert_returning (some_valu
Hey,
pg_hba.conf manages who has *access* to the database.
Your problem seems to be who has *SELECT permission* on table x.
Cheers,
Rémi-C
2015-02-18 12:03 GMT+01:00 BladeOfLight16 :
> On Wed, Feb 18, 2015 at 5:48 AM, Brian Sutherland > wrote:
>
>> # dump and reload
>> pg_dump --username sup
Hey folks,
thanks for the answers.
As you guessed, it is just a synthetic example
(so of course RETURNING some_value / 10 is possible in this case, but not
in general, when the function is much more complicated than "/10").
Likewise, adding a column is just not a serious option.
It corresponds to
Hello dear list,
this is more of a plpgsql dev issue.
I wrote a plpgsql function that computes the union of time ranges:
[1,4]U[3,8]U[12,14]U[16,18] ---> [1,8]U[12,14]U[16,18]
It works on multiple rows.
My issue is a design issue:
I want to work on a set of rows and return a set of rows.
I am awa
Hello dear list,
this is a re-post with a clearer title and a rewrite.
I have a design issue:
my function works on an ordered set of rows and returns an ordered set of
rows (fewer than the input). I want to use it in a standard SQL query (in a
WITH, for example).
What is the best practice for the input?
I see 3 solu
Be careful: you have a number of limitations with recursive CTEs (I'm
thinking of UPDATE and so on).
You can work around them with plpgsql, but it might be painful.
You forgot a solution:
if you need powerful graph features,
use Postgres as the database and a SPARQL-speaking frontend.
It may be a bit of ove
To be very straightforward:
your bash script will dynamically build the SQL query in a string, then
send it to the database using psql.
You can also use pipes.
For example :
$4 -c "COPY $1 FROM '/tmp/pipe_ply_binaire_vers_ply_ascii_"$1"' WITH
CSV DELIMITER AS ' ';";
where $4 is the psql command to
Hey, a short trick
to avoid using the schema name multiple times (more readable and easier
to reuse):
you can use
SET search_path TO gpstracking_device_tracks, public;
(see the manual here:
http://www.postgresql.org/docs/current/static/sql-set.html)
Cheers,
Rémi-C
2013/10/2 Carlos Eduardo Sot
Hey sorry if my answer is stupid,
but there is an extension for array, even if it is limited to int (but int
could be indexes of row)
It's named http://www.postgresql.org/docs/9.3/static/intarray.html
It provides essential function, although lacking some (I re-implemented
union of array with disjoi
Hey,
can you check line 424855 in the file maincluster-20131011.csv?
You may have a comma in an unprotected field, or an empty field, or a wrong
line ending, etc.
Cheers,
Rémi-C
2013/10/18 Adrian Klaver
> On 10/18/2013 08:07 AM, Helen Griffiths wrote:
>
>> Hello.
>>
>> I've got a table set up on s
You're welcome;
best wishes for the fixes =)
Cheers,
Rémi-C
2013/10/18 Helen Griffiths
> On Fri, 18 Oct 2013, Adrian Klaver wrote:
>
> This came up before recently in this thread:
>>
>> http://www.postgresql.org/message-id/CADK3HHJNEWKD9gNyXmjv9ABbn+
>> 37rY3Mvp9=1j7msg9YpoBBBw@mail.gmail
Hey,
I tried something very similar to compute a generalized union of numeric
ranges (which was inappropriate, anyway).
My conclusion was that it's not possible using window functions, as you
need either memory (window functions are not allowed in UPDATE) or
iterations to propagate information (win
10/22/13, Rémi Cura wrote:
> > But it is immensely easier and sometimes mandatory to use instead
> > a plpgsql function using cursor (or cursors).
> >
> > It would be something like that in plpgsql :
> >
> > cursor on table of letter ordered
> > accum = 0;
héhé,
nice snippet, Merlin!
I guess you are almost there, but the output is still wrong (it should be:
> Eg: Take A,A,A,B,C,A,D,A,A,D,D,B,C,C and return: A,3; B,1; C,1; A,1;
> D,1; A,2; D,2; B,1; C,2
)
I don't understand it well enough to make the modifications =)
Cheers,
Rémi-C
2013/10/22 hubert depesz luba
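Not Merlin's snippet, but a sketch of the classic pure-SQL "gaps and islands"
way to get the expected output quoted above, on a hypothetical table
letters(pos int, val text) holding A,A,A,B,C,A,D,A,A,D,D,B,C,C in pos order:

WITH grouped AS (
    SELECT pos, val,
           row_number() OVER (ORDER BY pos)
         - row_number() OVER (PARTITION BY val ORDER BY pos) AS grp
    FROM letters
)
SELECT val, count(*) AS run_length
FROM grouped
GROUP BY val, grp
ORDER BY min(pos);
-- yields A,3; B,1; C,1; A,1; D,1; A,2; D,2; B,1; C,2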
Thanks for this good example, Merlin!
I didn't know you could use variables inside custom aggregates, and this
allows solving the problem!
In my own problem I couldn't use aggregates because,
as they output at most one row, it would have meant a lot of useless
computation (as in this example I guess
't be
written, mat views weren't available.
Anyway I solved it using cursors; not optimal, but it works!
(
http://stackoverflow.com/questions/18873797/plpgsql-expert-set-of-record-input-and-output-for-a-function/18874283#18874283
)
Cheers,
Rémi-C
2013/10/22 Merlin Moncure
> On Tue, Oct 22
Hmm, exactly what I was thinking!
Thank you a lot; I spent many hours thinking about this and this solution
is very nice.
Cheers,
Rémi-C
2013/10/22 Merlin Moncure
> On Tue, Oct 22, 2013 at 10:01 AM, Elliot
> wrote:
> > On 2013-10-21 20:38, Robert James wrote:
> >>
> >> I have a table of even
OK,
just out of pure curiosity,
is it always the case or is it due to this particular aggregate?
Cheers,
Rémi-C
2013/10/22 Merlin Moncure
> On Tue, Oct 22, 2013 at 9:43 AM, Rémi Cura wrote:
> > Thanks again for the precision !
> >
> > I still don't understand perfec
Hey dear lists,
here is a self-contained example showing strange behavior, from a real-life
case, concerning the use of the postgis_topology topogeometry type.
The problem is:
when trying to return SETOF topogeometry,
the RETURN QUERY gives a type error where there is none, and the
return n
,1,1,1;
>
> The error message means the cast failed between ttt.fake_topogeometry and
> the topology_id (i.e. first) field of the return type of the function,
> which isn't what you wanted to do.
>
> Pls. don't cross-post between lists.
>
>
> On 23 October 2013 0
OK, thanks for this clarification, Merlin.
It seems aggregates are way more powerful than I thought.
Obviously I need a lot more reading about custom aggregates before fully
understanding them.
Elliot's query is pure SQL, so obviously very cool!
It could be improved at the margins, and aggregates/funct
For the archives:
FOR x IN q_query behaves like RETURN QUERY regarding the previously
described behavior.
Cheers,
Rémi-C
2013/10/23 Rémi Cura
> Hey,
> thanks for the answers,
>
> sorry for the cross-post, I didn't know if it was a PostGIS or Postgres
> issue, hence the double post
Hey List,
I would like to be able to get the rows following the order of an index
(*NOT* an ORDER BY accelerated by an index, but the order defined by the
index itself).
Something like this:
SELECT my_row
FROM my_table
ORDER BY the_index ASC
where the_index is a GiST index over points.
I know there i
Hello,
I'm interested in the tree structure inherent to GiST indexing.
I was thinking of retrieving it from the index order.
Do you know how I could access it directly?
My use case would be to take advantage of this GiST ordering to order 2D
points
s1 : N1 N2 .. Nn
so that for any given t
Ok,
thank you Tom for this precise answer!
I don't understand how the CLUSTER ... USING index command works, then.
It is supposed to rewrite the table on disk following the index order. Does
it do nothing for a GiST index?
Cheers,
Rémi-C
2013/10/24 Tom Lane
> Rémi Cura writes:
> > I'm interes
Great,
thanks.
Now that you say it, I never saw any improvement when clustering a table
with GiST.
You just saved me a lot of unnecessary queries :-)
Cheers,
Rémi-C
2013/10/24 Tom Lane
> Rémi Cura writes:
> > I don't understand how the CLUSTER ... USING index command works, then.
>
Are GeoServer and Postgres on the same computer?
Cheers,
Rémi-C
2013/10/30 si24
> I'm not sure if its suppose to be under general so please let me know if I
> need to move it to another topic area please.
>
>
>
> --
> View this message in context:
> http://postgresql.1045698.n5.nabble.com/Con
Isn't it a client problem?
Shouldn't the client application close the connection when done with data
retrieval, and not the other way around?
Cheers,
Rémi-C
2013/10/31 Jayadevan
> I have never used pgbouncer myself. But my guess is you have to look at the
> Timeout parameters in the configuration
Hey,
I might be completely wrong, but when you say
"get the connections to close if they are not being used",
I'd say that it is bad client design not to close a connection when it
isn't needed anymore.
The client should retrieve the data, or close the connection when it hasn't
been used for a certain amount of time
Maybe a stupid answer,
but why not use another language (plpgsql? Python?).
Is it because of a transaction issue?
Sorry for not being more helpful.
Cheers,
Rémi-C
2013/11/8 Tim Kane
> Hi all,
>
> I’m having difficulty using variable interpolation within psql, where that
> variable is within a ta
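For the quoted question itself: psql can interpolate a variable as an
identifier with :"var" (and as a literal with :'var'); the variable and table
names here are made up.

\set tname my_table
SELECT count(*) FROM :"tname";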
Hey all,
does somebody know of a way to auto-document plpgsql functions, doxygen-style
(adding tags in comments, for instance, or creating doc templates to fill)?
It would really help to write the documentation and maintain it.
Thanks,
Rémi-C
Hey, thanks for the answer.
I'm thinking of way more than that.
Typically, in your comments you include special tags, like @input; then
doxygen parses them and generates HTML documentation.
Cheers,
Rémi-C
2013/11/26 Albe Laurenz
> Rémi Cura wrote:
> > somebody knows of a way t
Now it's too late,
but maybe you could avoid using a single transaction (and instead use
127k transactions).
Then at the end of every transaction you could print something in the GUI
(PRINT for pgScript, RAISE for plpgsql) or execute a command to write to a
file (COPY, for instance).
It would also b
First serious answer:
you don't have to use the command line;
you can use the pgAdmin GUI, load your file with all the commands, and
then hit F6 (or select "run as pgScript").
This will wrap each command in a transaction, and will print messages all
along.
Please test this on a few lines before tryin
I'm not an expert,
but I would think that if you can get away with using only one transaction,
it would be way, way faster to do it!
The system could simply skip keeping the log needed to roll back a
1-billion-row update!
Of course it would be preferable to use psql to execute statement by
statement, as se
to
> lose two days worth of run time...
>
> If I do run the same script (a bit over 100k lines) in PGS mode, will
> that affect the speed? If so, how much? 1%? 5%? More?
>
>
>
>
> On Wed, Nov 27, 2013 at 3:20 AM, Rémi Cura wrote:
>
>> First serious answer :
>
Maybe a totally bad idea:
explode your sentences into (sentence_number, one_word) rows, n times (this
makes a big table; you may want to partition it).
Then put classic indexes on the sentence number and on the word (btree if
you do = comparisons, something more subtle if you do "like 'word'").
Depending on performance, it cou
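A sketch of that idea with made-up names: one row per (sentence_number, word),
then plain btree indexes on both columns.

CREATE TABLE sentence_words AS
SELECT s.sentence_number,
       lower(w.word) AS one_word
FROM sentences AS s,
     LATERAL regexp_split_to_table(s.sentence_text, '\s+') AS w(word);

CREATE INDEX ON sentence_words (one_word);
CREATE INDEX ON sentence_words (sentence_number);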
Hello,
could it then be possible to define a CTE as the result of a
function?
Something like:
with first_cte AS (
select blabla)
, second_cte AS (
a_function_returning_a_set(parameters)
)
SELECT blabla
A CTE is much like what you would like, bborie:
you can reference previously defined CTEs.
There is a trick to simplify things and avoid using aggregates;
I think it will give you your answer:
http://postgresql.1045698.n5.nabble.com/Count-of-records-in-a-row-td5775363i20.html
Cheers,
Rémi-C
2013/12/13 Misa Simic
> Hi All,
>
> I am not sure how to define with words what I want t
Hey List,
kind of a simple question:
I'm using the postgis_topology extension,
and I'm trying to figure out where the slowness comes from when importing
data.
It involves plpgsql functions calling other plpgsql functions, inserts,
updates, etc.
I know I can use EXPLAIN ANALYZE for one query, but
Thanks all =)
Cheers,
Rémi-C
2014/1/13 Guillaume Lelarge
> On Fri, 2014-01-10 at 10:10 +0100, Rémi Cura wrote:
> > Hey List,
> > kind of a simple question :
> >
> > I'm using the postgis_topology extension,
> > and I'm trying to figure where th
Hey,
I'm trying to use an array of text as input to a C function in a custom
extension.
The prototype of the SQL function is:
CREATE OR REPLACE FUNCTION pc_subset( dimensions TEXT[])
It is called like:
pc_subset( ARRAY['X'::text,'Y'::text])
and the C function that tries to read the text array:
        /* loop body; the loop header and the array deconstruction above are cut off */
        text *dimensiontext = DatumGetTextP(dimdatums[i]);
        char *text_str = VARDATA(dimensiontext);
        int   text_len = VARSIZE(dimensiontext) - VARHDRSZ;
        char *s;
        s = TextDatumGetCString(dimdatums[i]);
        final_dimension_array[i] = s;
    }
    //pcinfo("end of the text retrieval\n");
Cheers,
Rémi-C
2
Hey list,
another tricky C-function interface problem:
how do I write a set-returning function that returns an array for each row?
It seems the main macro SRF_RETURN_NEXT takes a Datum, so I can't
use PG_RETURN_ARRAYTYPE_P().
Shall I encapsulate the array into a composite field (only 1
Another auto-answer:
surprisingly,
result = construct_array(...)
SRF_RETURN_NEXT(funcctx, PointerGetDatum(result));
But Datum memory must be allocated
Cheers,
Rémi-C
2014/1/16 Rémi Cura
> Hey list,
>
> another tricky C function interface problem :
>
> How to write
Hey,
I don't understand the difference between this ORDINALITY option and
adding a "row_number() over()" in the SELECT.
Thanks,
Cheers,
Remi-C
2014-02-06 Vik Fearing :
> On 02/06/2014 04:16 AM, Michael Sacket wrote:
>
> Often times I find it necessary to work with table rows in a specific,
>
On my private computer I first upgraded Postgres to 9.3, then upgraded
PostGIS.
Sadly, according to
http://trac.osgeo.org/postgis/wiki/UsersWikiPostgreSQLPostGIS ,
PostGIS 1.5 is not compatible with Postgres 9.3.
However, PostGIS 2.1 is compatible with your current Postgres version.
So, as suggested
Ok, thanks ^^
Cheers,
Rémi-C
2014-02-06 Vik Fearing :
> On 02/06/2014 10:00 AM, Rémi Cura wrote:
> > Hey,
> > I don"t understand the difference between this ORDINALITY option and
> > adding a "row_number() over()" in the SELECT.
>
> WITH ORDINALITY
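The quoted answer is cut off; a small sketch of the difference (WITH
ORDINALITY is 9.4+): the ordinality column is attached to each row as the
function emits it, whereas row_number() OVER () numbers rows in whatever
order they happen to reach the window function.

SELECT * FROM unnest(ARRAY['a','b','c']) WITH ORDINALITY AS t(val, ord);

SELECT val, row_number() OVER () AS ord
FROM unnest(ARRAY['a','b','c']) AS t(val);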