Hi,
On 14/01/2011 7:35, Craig Ringer wrote:
On 01/14/2011 06:26 AM, Jensen Somers wrote:
I know that some applications (Poker Tracker 3 is the first one that
comes to my mind)
It's not a good example, either, as demonstrated by the number of
questions that pop up about it on this list, and
On 01/14/2011 06:26 AM, Jensen Somers wrote:
I know that some applications (Poker Tracker 3 is the first one that
comes to my mind)
It's not a good example, either, as demonstrated by the number of
questions that pop up about it on this list, and the incredibly ancient
versions of Pg that th
On 01/14/2011 12:01 PM, Nimesh Satam wrote:
Hi,
We have a stale connection on one of our machines. The connection is
shown as follows:
26286 ?  Ss  1:31 postgres: postgres reporting localhost (37118) idle in transaction (aborted)
Can anybody let me know how we can get rid of such conn
On 01/12/2011 01:37 AM, Karayappalli, Raghunath (Raghu) wrote:
I am using version 8.2.
I have enabled autovacuum (assuming that you are referring to the
"autovacuum = on" configuration).
Autovacuum improves significantly with every Pg release. 8.3 and 8.4
both saw quite big improvements.
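For what it's worth, a quick sanity check that autovacuum is enabled and actually
visiting tables (a minimal sketch; the per-table timestamps exist from 8.2 onward):

    SHOW autovacuum;

    SELECT relname, last_autovacuum, last_autoanalyze
    FROM pg_stat_user_tables
    ORDER BY relname;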
Hi,
We have a stale connection on one of our machines. The connection is shown as
follows:
26286 ?  Ss  1:31 postgres: postgres reporting localhost (37118) idle in transaction (aborted)
Can anybody let me know how we can get rid of such a connection? Is there
any command which will ki
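One way to clean such a backend up, sketched against the pre-9.2 pg_stat_activity
layout (procpid column) and using the PID from the ps line above;
pg_terminate_backend() requires 8.4 or later:

    SELECT procpid, usename, current_query, query_start
    FROM pg_stat_activity
    WHERE procpid = 26286;

    -- if it really is abandoned, disconnect it server-side
    SELECT pg_terminate_backend(26286);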
Allen Chen writes:
>> That won't really help. The fundamental point here is that '1 day' is
>> not the same concept as '24 hours', because of DST changes; and the
>> interval type treats them as different.
> I don't understand how DST changes matter for a time interval or how that
> could even b
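A small illustration of the difference, assuming the US/Eastern zone, where DST
began on 2011-03-13:

    SET timezone = 'US/Eastern';

    SELECT timestamptz '2011-03-12 12:00' + interval '1 day';
    -- 2011-03-13 12:00:00-04  (same wall-clock time, only 23 elapsed hours)

    SELECT timestamptz '2011-03-12 12:00' + interval '24 hours';
    -- 2011-03-13 13:00:00-04  (exactly 24 elapsed hours)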
OK, given your response, this is my understanding of how the WAL works:
When you begin a transaction, all your changes are written to the in-memory WAL
buffer, and that buffer is flushed to disk when:
a) Somebody commits a synchronous transaction
b) The WAL buffer runs out of space
Please correct me if I'm
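A rough sketch of the knobs involved (assumes 8.3 or later for synchronous_commit;
the accounts table is made up): wal_buffers sizes the in-memory buffer, and
synchronous_commit controls whether COMMIT waits for the flush.

    SHOW wal_buffers;                     -- size of the in-memory WAL buffer

    BEGIN;
    SET LOCAL synchronous_commit = off;   -- COMMIT returns before the WAL flush
    UPDATE accounts SET balance = balance - 10 WHERE id = 1;
    COMMIT;                               -- still WAL-logged, just flushed shortly after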
On Jan 13, 2011, at 1:15 PM, John R Pierce wrote:
> On 01/13/11 1:08 PM, Ben Chobot wrote:
>> On Jan 13, 2011, at 11:03 AM, Tom Lane wrote:
>>
>>> If you don't care about that, you can use justify_hours (I think that's
>>> the right function) to smash them to the same thing.
>> I use justify_hour
On 01/13/2011 12:55 PM, Allen Chen wrote:
That won't really help. The fundamental point here is that '1 day' is
not the same concept as '24 hours', because of DST changes; and the
interval type treats them as different.
If you don't care about that, you can use justify_hours (
Hi,
I am looking into using PostgreSQL as a database service in a desktop
application since it's one of the local database systems supported by
DataObjects.net and has more features and higher limits than SQL
Server Express. I prefer a database over a traditional binary data
file, but I
>
> That won't really help. The fundamental point here is that '1 day' is
> not the same concept as '24 hours', because of DST changes; and the
> interval type treats them as different.
>
> If you don't care about that, you can use justify_hours (I think that's
> the right function) to smash them
No, the order is not supposed to make any difference. The planner plans
the query from costs and statistics and shouldn't care about the order
(I think)
What you should do is to divide the big geometries and build new index.
That can be done by building a grid with generate_series and cutting th
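Roughly what that grid-and-cut idea could look like (an untested sketch for
PostGIS 1.5; big_polygons and the one-degree grid bounds are made up for
illustration):

    CREATE TABLE big_polygons_cut AS
    SELECT ST_Intersection(p.geom,
                           ST_MakeEnvelope(x, y, x + 1, y + 1, 4326)) AS geom
    FROM big_polygons p,
         generate_series(-85, -30) AS x,   -- longitude range of the data
         generate_series(-60,  15) AS y    -- latitude range of the data
    WHERE p.geom && ST_MakeEnvelope(x, y, x + 1, y + 1, 4326);

    CREATE INDEX big_polygons_cut_gix ON big_polygons_cut USING gist (geom);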
On 01/13/11 1:08 PM, Ben Chobot wrote:
On Jan 13, 2011, at 11:03 AM, Tom Lane wrote:
If you don't care about that, you can use justify_hours (I think that's
the right function) to smash them to the same thing.
I use justify_hours, and I still get entries like '1 day 35:31:10' intermixed
with
On Jan 13, 2011, at 11:03 AM, Tom Lane wrote:
> If you don't care about that, you can use justify_hours (I think that's
> the right function) to smash them to the same thing.
I use justify_hours, and I still get entries like '1 day 35:31:10' intermixed
with the entries I'd expect like '2 days 03
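Two sketches for taming that: re-justify the final value, or go through epoch
seconds if you always want plain hours:

    SELECT justify_hours(interval '1 day 35:31:10');
    -- 2 days 11:31:10

    SELECT round(extract(epoch FROM interval '1 day 35:31:10') / 3600) AS hours;
    -- 60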
Hi, so one solution is to use the most common feature first, in this case
the houses?
> What version of PostGIS are you using?
1.5.1
2011/1/13 Nicklas Avén
> Hello Håvard
>
> The planner is supposed to take care of that. It estimates what is the
> cheapest part of the OR statements and che
2011/1/13 Alban Hertroys :
> On 13 Jan 2011, at 20:21, Pavel Stehule wrote:
>
>>> I'm not sure what you mean here, Postgres certainly _does_ support
>>> set-returning functions. Maybe you were referring to something in the
>>> particular context of the problem the OP is trying to solve?
>>>
>>
>>
On 13 Jan 2011, at 20:21, Pavel Stehule wrote:
>> I'm not sure what you mean here, Postgres certainly _does_ support
>> set-returning functions. Maybe you were referring to something in the
>> particular context of the problem the OP is trying to solve?
>>
>
> The name of feature "SET RETURNED
2011/1/13 Alban Hertroys :
> On 13 Jan 2011, at 19:25, Pavel Stehule wrote:
>
>>> The above does work, thanks. There is however one drawback: the type
>>> associated with _page.users is now an array. Is there a way to make
>>> it a 'SETOF user_t'?
>>>
>>
>> no. PostgreSQL doesn't support SET. On
On 13 Jan 2011, at 17:22, Jon Smark wrote:
> create type page_t AS
> (
>     total int4,
>     users user_t[]
> );
>
> create function get_page ()
> returns page_t
> language plpgsql as
> $$
> declare
>     _page page_t;
> begin
>     _page.total := select count (*
On 13 Jan 2011, at 19:25, Pavel Stehule wrote:
>> The above does work, thanks. There is however one drawback: the type
>> associated with _page.users is now an array. Is there a way to make
>> it a 'SETOF user_t'?
>>
>
> no. PostgreSQL doesn't support SET. Only arrays are supported.
I'm not
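For reference, plain set-returning functions are supported; what isn't supported
is SETOF as a field type inside a composite. A minimal sketch (the function name
is made up, users is the table from the thread):

    CREATE FUNCTION first_users(p_limit int)
    RETURNS SETOF users
    LANGUAGE sql AS
    $$ SELECT * FROM users LIMIT p_limit $$;

    SELECT * FROM first_users(10);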
Gary Chambers writes:
>>> Why do some of the intervals show days broken out whereas others only
>>> show hours? I have seen intervals left in hours even when the intervals
>>> are more than two days long. FWIW, I would prefer if it was always left
>>> in hours, but would be happy if it would ju
On 01/13/11 9:34 AM, Allen Chen wrote:
Has anyone else out there noticed inconsistencies in how pgsql formats
time intervals over 1 day?
For example, I have a query that returns a column of intervals and I
get output like this:
30:30:00
1 day 03:02:47
1 day 01:38:34
26:25:29.50
Why do some
Why do some of the intervals show days broken out whereas others only
show hours? I have seen intervals left in hours even when the intervals
are more than two days long. FWIW, I would prefer if it was always left
in hours, but would be happy if it would just be consistent either way.
I have
Have you considered switching to embedded Linux instead of XP?
This has the potential to help increase performance, as embedded Linux
will most likely have a smaller footprint.
Give this a read:
http://www.lynuxworks.com/products/whitepapers/xp-vs-linux.php3
Of course, if you're using an applica
On Jan 13, 2011, at 9:34 AM, Allen Chen wrote:
> Has anyone else out there noticed inconsistencies in how pgsql formats time
> intervals over 1 day?
>
> For example, I have a query that returns a column of intervals and I get
> output like this:
>
> 30:30:00
> 1 day 03:02:47
> 1 day 01:38:34
>
2011/1/13 Jon Smark :
> Hi,
>
>> That is certainly not going to work: that select does not produce an
>> array, it produces a column of user_t (of which SELECT INTO is only
>> gonna take the first, anyway).
>>
>> Untested, but I think you'd have better results with
>>
>> _page.users := array(se
Hi,
> That is certainly not going to work: that select does not produce an
> array, it produces a column of user_t (of which SELECT INTO is only
> gonna take the first, anyway).
>
> Untested, but I think you'd have better results with
>
> _page.users := array(select users from users limit 10
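Putting Tom's suggestion together, the whole function might look like this (an
untested sketch; it assumes user_t is, or casts cleanly from, the users row type):

    CREATE FUNCTION get_page()
    RETURNS page_t
    LANGUAGE plpgsql AS
    $$
    DECLARE
        _page page_t;
    BEGIN
        _page.total := (SELECT count(*) FROM users);
        _page.users := array(SELECT users FROM users LIMIT 10);
        RETURN _page;
    END;
    $$;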
Has anyone else out there noticed inconsistencies in how pgsql formats time
intervals over 1 day?
For example, I have a query that returns a column of intervals and I get
output like this:
30:30:00
1 day 03:02:47
1 day 01:38:34
26:25:29.50
Why do some of the intervals show days broken out wherea
Jon Smark writes:
> create type page_t AS
> (
> total int4,
> users user_t[]
> );
> create function get_page ()
> returns page_t
> language plpgsql as
> $$
> declare
> _page page_t;
> begin
> _page.total := select count (*) from users;
>
Hi, I have a spatial query with multiple OR statements; for large tables
it's very slow. Is it possible to skip the spatial lookup on the other
conditions if the first (previous) condition is true, and thereby increase the
performance?
SELECT vciia_main.sitrp,vciia_main.date_time from vciia_main,
south
Hello
try to SELECT INTO ARRAY(SELECT user_t FROM users LIMIT 10) _page.users;
Regards
Pavel Stehule
2011/1/13 Jon Smark :
> Hi,
>
> I am trying to create a PL/pgSQL function whose return type is a tuple
> consisting of an integer and a list of table rows. I emulate the tuple
> by defining a
>
> Von: pgsql-general-ow...@postgresql.org im Auftrag von Ivan Voras
> Gesendet: Do 13.01.2011 15:47
> An: pgsql-general@postgresql.org
> Betreff: Re: [GENERAL] Optimal settings for embedded system running
> PostgreSQL
>
> On 13/01/2011 14:30, Christian Walter wrot
Hi,
I am trying to create a PL/pgSQL function whose return type is a tuple
consisting of an integer and a list of table rows. I emulate the tuple
by defining a record 'page_t' with the two fields; however, the naïve
approach of doing a SELECT INTO one of the record's fields does not work
(see functi
On Thu, Jan 13, 2011 at 03:29:03PM -, Phillip Smith wrote:
> modifying user inputted data is not an option for me. I need to
> maintain the original data. I had read about indexing on a lower
> function. This all seems a bit of a bodgy workaround to me. A clean
> Case Insensitive collation set
Most examples in the array documentation use ARRAY[1,2,3] and similar.
http://www.postgresql.org/docs/9.0/interactive/functions-array.html
I think (actually I have experienced it, both in myself and others)
that this can be misleading in some cases.
For example: array_upper(ARRAY[1,2,3,4], 1)
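One way those examples can mislead (a guess at the point being made, shown as a
sketch): ARRAY[...] literals always start at index 1, so array_upper() happens
to equal the length, but that does not hold for arrays with other lower bounds:

    SELECT array_upper(ARRAY[1,2,3,4], 1);             -- 4
    SELECT array_upper('[0:3]={1,2,3,4}'::int[], 1);   -- 3, yet there are 4 elements
    SELECT array_length('[0:3]={1,2,3,4}'::int[], 1);  -- 4 (8.4 and later)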
2011/1/13 Phillip Smith :
> Thanks Pavel,
>
> Is this the only way? I would prefer to use a collation setting if this is
> possible. Do you know whether there would be a performance hit using
> non-standard character strings?
It is the preferred way. PostgreSQL doesn't support collations, and what
Hi Karsten,
modifying user inputted data is not an option for me. I need to maintain the
original data. I had read about indexing on a lower() function. This all seems a
bit of a bodgy workaround to me. A case-insensitive collation setting would be
cleaner. Is this supported in PostgreSQL 9?
R
Thanks Pavel,
Is this the only way? I would prefer to use a collation setting if this is
possible. Do you know whether there would be a performance hit using
non-standard character strings?
Phillip
> you should use the citext datatype
>
> http://www.postgresql.org/docs/9.0/interactive/citext.html
Or:
- use a functional index with lower() to ensure uniqueness
- use a BEFORE trigger to lower() the inserted data
Karsten
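Sketches of both suggestions, using a made-up products table (CREATE EXTENSION
needs 9.1 or later; on 9.0 you would run the citext contrib SQL script instead):

    -- citext: comparisons and the unique constraint are case-insensitive
    CREATE EXTENSION citext;
    CREATE TABLE products (part_number citext PRIMARY KEY);

    -- or: plain text with a unique functional index on lower()
    CREATE TABLE products_alt (part_number text NOT NULL);
    CREATE UNIQUE INDEX products_alt_part_number_key
        ON products_alt (lower(part_number));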
On 1/13/2011 7:02 AM, Jaiswal Dhaval Sudhirkumar wrote:
Hi List,
I have a question about loading a huge amount of data into PostGIS. I am
using the shp2pgsql tool, which loads data through a single channel only
inside the database, i.e. one shape file at a time.
Is there any tool which can load multiple shape
Hello
you should use the citext datatype
http://www.postgresql.org/docs/9.0/interactive/citext.html
regards
Pavel Stehule
2011/1/13 Phillip Smith :
> Hi,
> I'm moving over from MS SQL. I've been googling this for ages now and
> surprisingly cannot find a clear answer.
>
> I want my data in tab
Hi,
I'm moving over from MS SQL. I've been googling this for ages now and
surprisingly cannot find a clear answer.
I want my data in tables to be case-insensitive.
This is so I can:
1. Put keys on natural key fields, for instance a product part number. If
product number 'ABC123' is inserted I need
On 13/01/2011 14:30, Christian Walter wrote:
Dear Members,
We are currently using PostgreSQL 7.3 on an Embedded System (Based on
http://www.msc-ge.com/de/produkte/com/qseven/q7-us15w.html) running
Windows XP Embedded / SP3. The onboard flash shows the following
performance figures:
- Average re
On Thu, Jan 13, 2011 at 8:30 AM, Christian Walter
wrote:
> We are currently using PostgreSQL 7.3 on an Embedded System (Based on
> http://www.msc-ge.com/de/produkte/com/qseven/q7-us15w.html) running Windows
> XP Embedded / SP3.
You're asking for an explosion of some sort to happen. Take a read o
Dear Members,
We are currently using PostgreSQL 7.3 on an Embedded System (Based on
http://www.msc-ge.com/de/produkte/com/qseven/q7-us15w.html) running
Windows XP Embedded / SP3. The onboard flash shows the following
performance figures:
- Average read = 15,6Mb/s
- 4Kbyte reads = 3,5Mb/
Hi List,
I have a question about loading a huge amount of data into PostGIS. I am using
the shp2pgsql tool, which loads data through a single channel only inside the
database, i.e. one shape file at a time.
Is there any tool which can load multiple shape files at a time?
Another question is that I want to sto