Hi all
I've recently installed pg 8.2.5 on a new server and transferred my data
from 8.2.4 running on a slow old thing, via pg_dump.
One of these tables has UK point address data, with 27 million rows, and
another the UK roads data, approx 4 million rows.
My problem is I have several text fields
Genius!
Thanks Richard,
The old locale was C and the new one English_United Kingdom.1252
I created a new index with "varchar_pattern_ops" and off it went!
> the strange thing is my btree indexes on the uk roads data work fine.
>
> Do they use like, or explicit range-checks?
>
> I wasn't using l
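For anyone who hits the same thing, a minimal sketch of the fix, with made-up table and column names along the lines of the address data above. Under a non-C locale such as English_United Kingdom.1252 an ordinary btree index cannot be used for LIKE 'prefix%' searches, but an index built with the pattern operator class can:

CREATE INDEX uk_address_street_like_idx
    ON uk_address (street varchar_pattern_ops);

-- A prefix search such as this can now use the index:
-- SELECT * FROM uk_address WHERE street LIKE 'HIGH STREET%';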
Hi
I would be most grateful if someone could help me create an xml doc in the form:
-3.04,53.56,0
-2.04,55.56,0
-3.44,57.56,0
This will be created from a single table of point geometries where
each belongs to a layer, e.g. x1 or x2 etc. The layer a geometry
belongs to is
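A minimal sketch of one way to build that, assuming the SQL/XML functions are available and using made-up names (point_table, layer, the_geom) plus the PostGIS x()/y() accessors; the exact element names would depend on the format you actually need:

SELECT xmlelement(name layer,
         xmlattributes(l.layer AS name),
         xmlelement(name coordinates,
           array_to_string(ARRAY(
             SELECT x(p.the_geom) || ',' || y(p.the_geom) || ',0'
             FROM point_table p
             WHERE p.layer = l.layer), E'\n')))
FROM (SELECT DISTINCT layer FROM point_table) l;

This gives one element per layer, with the points inside it newline-separated in the x,y,0 form shown above.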
Hi
Does anyone know if there is an 8.2.X windows build that has xml support,
including the XML datatype and SQL/XML functions such as xmlagg and
xmlelement?
I know 8.3 has excellent support for this, however I have a client that
requires a routing solution using the PGRouting extension, which onl
Hi All
I hope this isn't a FAQ, but does anyone have any suggestions as to
how to make a query that selects using:
"where in()"
secure from an sql injection point of view?
I have a grid of tiles I'm using to reference geographical points.
These tiles are identical to the tiling system google maps
Thanks for the replies,
"Rodrigo E. De León Plicet" <[EMAIL PROTECTED]> wrote:
>Use a prepared query and ANY, e.g.:
>select st_collect(the_geom) from tiles
>where tilename = any('{foo,bar,baz}');
Thanks, that's what I was looking for!
$sql = "select uid, accredited as acc, x(the_geom), y(the_geom
Hi all
I'm trying to calculate the percentile rank for a record based on a
'score' column, e.g. a column of integers such as:
23,77,88,23,23,23,12,12,12,13,13,13
without using a stored procedure.
So,
select count(*) as frequency, score
from scoretable
group by score
order by score
Yields:
frequ
On Thu, Apr 10, 2008 at 4:36 PM, Osvaldo Rosario Kussama
<[EMAIL PROTECTED]> wrote:
>
> Try:
>
> SELECT count(*) AS frequency, score,
> (SELECT count(*) FROM scoretable st2 WHERE st2.score <= st1.score) AS
> runningtotal
> FROM scoretable st1
> GROUP BY score
> ORDER BY score
>
> Osvaldo
>
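For the percentile-rank part of the original question, a minimal sketch along the same lines, assuming "percentile rank" here means the fraction of all scores less than or equal to each score (still no stored procedure needed):

SELECT score,
       count(*) AS frequency,
       (SELECT count(*) FROM scoretable st2 WHERE st2.score <= st1.score)::float
           / (SELECT count(*) FROM scoretable) AS percentile_rank
FROM scoretable st1
GROUP BY score
ORDER BY score;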
Viktor
The quick and dirty method would be to pass the subquery as a string,
then execute the subquery in the function.
Will T
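A minimal sketch of that quick-and-dirty approach, with a made-up function name and a SETOF record return type (a real function would probably return a concrete type instead):

CREATE OR REPLACE FUNCTION run_subquery(subquery text)
RETURNS SETOF record AS $$
DECLARE
    rec record;
BEGIN
    -- Execute whatever SQL string the caller passed in and stream the rows back.
    FOR rec IN EXECUTE subquery LOOP
        RETURN NEXT rec;
    END LOOP;
END;
$$ LANGUAGE plpgsql;

-- Callers of a SETOF record function must supply a column definition list, e.g.:
-- SELECT * FROM run_subquery('select score from scoretable') AS t(score int);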
> Jan Christian Dittmer wrote:
>
>
> >Thank you very much!
> >You have reminded me that our server runs under Linux and not under
> > Windows like our clients :-)
> >So indeed I can use a sed-pipe construct to switch '.' and ','.
> >But wait, then there is another problem. Our
Dear All
Sometimes postgres.exe will thrash one of the cores and won't stop
until I kill the process. I know it's the statistics collector as I
get this message when I kill the process:
"statistics collector process (PID 172) exited with exit code 1"
Nothing other than this app is accessing my PG
> >
> > Any ideas why this might be happening, and how I can stop it?
>
> It'd be interesting to know what the stats collector is actually doing.
> Could you, using Process Explorer or a debugger, get a stack trace from
> that process while it's in the thrashing state?
>
> //Magnus
>
Certainl
On Thu, May 8, 2008 at 6:14 PM, Magnus Hagander <[EMAIL PROTECTED]> wrote:
> William Temperley wrote:
>> > >
>> > > Any ideas why this might be happening, and how I can stop it?
>> >
>> > It'd be interesting to know what the stats co
On Fri, May 9, 2008 at 2:55 PM, Magnus Hagander <[EMAIL PROTECTED]> wrote:
> William Temperley wrote:
> > On Thu, May 8, 2008 at 6:14 PM, Magnus Hagander <[EMAIL PROTECTED]>
> > wrote:
> > > William Temperley wrote:
> > >> > >
> > >&
> Filtering out with the pid showed that it was the file
> pgdata/global/pgstat.tmp
>> Filtering out with the pid showed that it was the file
>> pgdata/global/pgstat.tmp
>
> That's the statistics collector -- which makes sense, depending
> on your settings, it has to write stats for every operation
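For reference, a rough sketch of the postgresql.conf settings involved, using the 8.3-era names; turning them off is only a diagnostic step, since autovacuum depends on track_counts:

track_activities = on    # current-query reporting in pg_stat_activity
track_counts = on        # table/index usage counters, which is most of what pgstat.tmp holds
# From 8.4 onwards, stats_temp_directory can point the temporary stats file at a
# faster location (e.g. a RAM disk) to reduce the cost of rewriting it.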
Hi All,
I've got two transactions I tried to kill 3 days ago using "select
pg_cancel_backend()", then SIGTERM, and have since then been
using 100% of a cpu core each. They were supposed to insert the
results of large unions with PostGIS and appear to have failed.
Could someone tell me what's the l
2009/6/22 Tom Lane :
> William Temperley writes:
>> I've got two transactions I tried to kill 3 days ago using "select
>> pg_cancel_backend()", then SIGTERM, and have since then been
>> using 100% of a cpu core each. They were supposed to insert the
>>
2009/6/22 Tom Lane :
> William Temperley writes:
>> I'm wondering if it happened because I'd started the same query twice.
>> The first had work_mem = 1MB so I tried to kill it and started another
>> with work_mem = 1000MB, but both were attempting to insert the same i
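For anyone reading along, a minimal sketch of the usual first steps, which in this case were evidently not enough (the PID is a placeholder; procpid and current_query are the column names used in the 8.x releases discussed here):

SELECT procpid, usename, current_query, query_start
FROM pg_stat_activity
ORDER BY query_start;

SELECT pg_cancel_backend(12345);   -- cancels the current query of that backend
-- From 8.4 onwards, pg_terminate_backend(12345) ends the whole backend process
-- if cancelling alone is not enough.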
Tom Lane writes:
> Just out of curiosity, does anyone know of any ORM anywhere that doesn't
> suck? They seem to be uniformly awful, at least in terms of their
> interfaces to SQL databases. If there were some we could recommend,
> maybe people would be less stuck with these bogus legacy archite
2009/10/15 Merlin Moncure :
> On Thu, Oct 15, 2009 at 12:31 PM, danclemson wrote:
>>
>> Hi,
>>
>> As postgres now has enum type, does npgsql driver support the enum type?
>>
>> I use C# and npgsql as the database driver. One of the database stored
>> procedures takes an enum as its parameter.
>>
>> What w
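While waiting for native driver support, a common workaround is to send the value as plain text and cast it to the enum type inside the SQL; a minimal sketch with made-up names (the type mood and the procedure name are not from the thread):

CREATE TYPE mood AS ENUM ('sad', 'ok', 'happy');

-- From the application, pass the enum value as a text parameter and cast it:
--   SELECT my_stored_procedure($1::mood);
-- or, inline for testing:
SELECT my_stored_procedure('happy'::mood);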
2009/10/28 Richard Huxton :
> Xai wrote:
>> I want to create a type for an email field but I'm not good with regex;
>> can someone help me?
>
> Google for "email regex". Be warned - this is very complicated if you
> want to match *all* possible email addresses.
>
Just send your users an email askin
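If a rough-and-ready check is enough, a minimal sketch of a domain with a deliberately loose pattern (one @, no whitespace, a dot in the host part), rather than an attempt to match every RFC-valid address:

CREATE DOMAIN email_address AS text
    CHECK (VALUE ~ '^[^@[:space:]]+@[^@[:space:]]+[.][^@[:space:]]+$');

-- Used as a column type, e.g.: CREATE TABLE app_user (email email_address, ...);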
Dear all,
I have a single "source" table that is referenced by six
specialization tables, which include:
"journal_article"
"report"
4 more
There is a "citation" column in the source, which is what will be
displayed to users. This is generated by a trigger function on each
specialization table
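A minimal sketch of that arrangement for one of the specialization tables, with made-up column names (source_id, authors, year, title) and a made-up citation format:

CREATE OR REPLACE FUNCTION journal_article_citation() RETURNS trigger AS $$
BEGIN
    -- Rebuild the displayed citation in the parent row whenever the detail row changes.
    UPDATE source
       SET citation = NEW.authors || ' (' || NEW.year || '). ' || NEW.title
     WHERE id = NEW.source_id;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER journal_article_citation_trg
    AFTER INSERT OR UPDATE ON journal_article
    FOR EACH ROW EXECUTE PROCEDURE journal_article_citation();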
On 21 September 2010 18:39, Alban Hertroys
wrote:
> On 21 Sep 2010, at 16:13, William Temperley wrote:
>
>> Dear all,
>>
>> I have a single "source" table that is referenced by six
>> specialization tables, which include:
>> "journal_article"
Dear all
I'd really appreciate a little advice here - I'm designing a PG
database to manage a scientific dataset.
I've these fairly clear requirements:
1. Multiple users of varying skill will input data.
2. Newly inserted data will be audited and marked good / bad
3. We must have a dataset that i
On Thu, Aug 14, 2008 at 2:55 AM, Craig Ringer
<[EMAIL PROTECTED]> wrote:
> William Temperley wrote:
>> A. Two databases, one for transaction processing and one for
>> modelling. At arbitrary intervals (days/weeks/months) all "good" data
>> will be moved
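A minimal sketch of what that periodic move could look like under option A, assuming the dblink contrib module and made-up table/column names (observations, audit_status), run from the modelling database:

INSERT INTO observations
SELECT *
FROM dblink('host=oltp-server dbname=transactions',
            'SELECT * FROM observations WHERE audit_status = ''good''')
     AS t(id integer, measured_at timestamp, value double precision, audit_status text);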
Hi all
Has anyone any experience with very large tables?
I've been asked to store a grid of 1.5 million geographical locations,
fine. However, associated with each point are 288 months, and
associated with each month are 500 float values (a distribution
curve), i.e. 1,500,000 * 288 * 500 = 216 billion values.
On Fri, Nov 28, 2008 at 3:48 PM, Alvaro Herrera
<[EMAIL PROTECTED]> wrote:
> William Temperley wrote:
>> So a 216 billion row table is probably out of the question. I was
>> considering storing the 500 floats as bytea.
>
> What about a float array, float[]?
I gues
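A minimal sketch of the float[] layout being suggested, with made-up table and column names: one row per location per month, with the 500-value curve stored as an array, which brings the row count down from 216 billion to 1,500,000 * 288 = 432 million:

CREATE TABLE distribution_curve (
    location_id integer NOT NULL,
    month       integer NOT NULL,   -- 1..288
    curve       real[]  NOT NULL,   -- the 500 values; real = 4-byte float
    PRIMARY KEY (location_id, month)
);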
On Fri, Nov 28, 2008 at 5:46 PM, Simon Riggs <[EMAIL PROTECTED]> wrote:
>
> I would look carefully at the number of bits required for each float
> value. 4 bytes is the default, but you may be able to use less bits than
> that rather than rely upon the default compression scheme working in
> your f
Hi all
Could anyone tell me what's the best thing to do with idle transactions
that are holding locks?
I just killed the process as I wanted to get on with some work. I'm
just not sure this is a good idea when we go into production.
Cheers
Will T
>>
>> Could anyone tell me what's the best thing to do with idle
>> transactions
>> that are holding locks?
On Fri, Dec 5, 2008 at 2:25 PM, Glyn Astill <[EMAIL PROTECTED]> wrote:
>
> select pg_cancel_backend();
>
Thanks. Sorry for the basic question.
Will
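For next time, a minimal sketch of how to see which sessions are idle in a transaction and what they hold, before deciding whether to cancel them (pre-9.2 column names procpid / current_query; the query itself is not from the thread):

SELECT a.procpid, a.current_query, l.relation::regclass, l.mode, l.granted
FROM pg_locks l
JOIN pg_stat_activity a ON a.procpid = l.pid
WHERE a.current_query = '<IDLE> in transaction';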
Hi All
I'm wondering if anyone can share any insights or experience with
temporary versions of databases, allowing "disconnected editing"
during Internet downtime.
The use-case is that I run a Postgres database, hosted in the UK, but
used by scientists in several other countries - Ecuador, Vietna
On Wed, Apr 15, 2009 at 9:34 AM, Dimitri Fontaine
wrote:
>> On Tue, 14 Apr 2009, William Temperley wrote:
>> > I could potentially run a database in each of these countries and
>> > provide 100% uptime, obviously raising the issue of version conflicts
>> > that wo