Re: [GENERAL] Timestamp with and without timezone conversion confusion.

2013-10-02 Thread Albe Laurenz
Tim Uckun wrote:
> I have the following query.

[...]
>  SELECT
>  interval_start,
>  (interval_start AT TIME ZONE 'Africa/Monrovia')::timestamp with time zone  as
> interval_start_in_africa,
>   min_datetime,
>  min_datetime AT TIME ZONE 'Europe/Berlin' as min_datetime_in_berlin,
>   max_datetime,
>  max_datetime AT TIME ZONE 'America/New_York' as max_datetime_in_new_york
> 
>   FROM grouped_data gd

> When I run this query in pgadmin I get the following results
> 
> "interval_start","interval_start_in_africa","min_datetime","min_datetime_in_berlin","max_datetime","ma
> x_datetime_in_new_york"
> "2013-10-04 15:35:00+13","2013-10-04 02:35:00+13","2013-10-04 
> 15:35:00+13","2013-10-04
> 04:35:00","2013-10-04 15:39:59+13","2013-10-03 22:39:59"
> "2013-10-04 15:25:00+13","2013-10-04 02:25:00+13","2013-10-04 
> 15:28:11+13","2013-10-04
> 04:28:11","2013-10-04 15:29:59+13","2013-10-03 22:29:59"
> "2013-10-04 15:40:00+13","2013-10-04 02:40:00+13","2013-10-04 
> 15:40:00+13","2013-10-04
> 04:40:00","2013-10-04 15:44:39+13","2013-10-03 22:44:39"
> "2013-10-04 15:30:00+13","2013-10-04 02:30:00+13","2013-10-04 
> 15:30:00+13","2013-10-04
> 04:30:00","2013-10-04 15:34:59+13","2013-10-03 22:34:59"
> 
> Notice that all the offsets are set to +13 which is my laptop's offset. Why 
> don't they show the offset
> of Africa or Berlin or whatever?

The configuration parameter "TimeZone" determines how "timestamp with
time zone" is interpreted and converted to a string.

The reason for that is that in PostgreSQL there is no time zone
information stored along with a "timestamp with time zone";
it is stored in UTC.

> Also note that unless I explicitly cast the data as timestamp with
> time zone, all the offsets go away and it's reported as timestamp without
> time zone.

That is because AT TIME ZONE returns a "timestamp without time zone"
in this case; see the documentation.
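The return-type behavior can be checked directly (a minimal sketch; pg_typeof reports the type of an expression):

```sql
-- Applied to timestamp WITH time zone, AT TIME ZONE strips the zone:
SELECT pg_typeof(now() AT TIME ZONE 'Africa/Monrovia');
-- timestamp without time zone

-- Applied to timestamp WITHOUT time zone, it attaches one:
SELECT pg_typeof(TIMESTAMP '2013-10-04 15:35:00' AT TIME ZONE 'Africa/Monrovia');
-- timestamp with time zone
```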

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Who is pgFoundery administrator?

2013-10-02 Thread KONDO Mitsumasa
Hi,

I want to submit a new project to pgFoundry.

I submitted a new project, a WAL archive copy tool using the directIO method,
on the pgFoundry homepage 2 weeks ago, but it has not been approved or
responded to at all :-(

Who is the pgFoundry administrator or a board member now? I would like to
send them e-mail. At least, there is no information or support page on the
pgFoundry homepage.

Regards,
-- 
Mitsumasa KONDO
NTT Open Source Software Center




Re: [GENERAL] [HACKERS] Who is pgFoundery administrator?

2013-10-02 Thread Tatsuo Ishii
> Hi,
> 
> I want to submit new project in pgFoundery project.
> 
> I submitted new project which is WAL archive copy tool with directIO method in
> pgFoundery homepage 2 weeks ago, but it does not have approved and responded 
> at
> all:-(
> 
> Who is pgFoundery administrator or board member now? I would like to send 
> e-mail
> them. At least, it does not have information and support page in pgFoundery 
> homepage.

It's Marc (scra...@hub.org).
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp




Re: [GENERAL] Timestamp with and without timezone conversion confusion.

2013-10-02 Thread Tim Uckun
> The reason for that is that in PostgreSQL there is no time zone
> information stored along with a "timestamp with time zone",
> it is stored in UTC.

That seems unintuitive. What is the difference between timestamp without
time zone and timestamp with time zone? I was expecting to have the time
zone stored in the field. For example, one row might be in UTC but the
other row might be in my local time.

Maybe the question I need to ask is "how can I store the time zone along
with the timestamp"

> That is because AT TIME ZONE returns a "timestamp without time zone"

Also seems counterintuitive, but I guess I can always convert it. I am just
not getting the right offset when I convert. That's what's puzzling.



On Wed, Oct 2, 2013 at 9:15 PM, Albe Laurenz wrote:

> Tim Uckun wrote:
> > I have the following query.
>
> [...]
> >  SELECT
> >  interval_start,
> >  (interval_start AT TIME ZONE 'Africa/Monrovia')::timestamp with time
> zone  as
> > interval_start_in_africa,
> >   min_datetime,
> >  min_datetime AT TIME ZONE 'Europe/Berlin' as min_datetime_in_berlin,
> >   max_datetime,
> >  max_datetime AT TIME ZONE 'America/New_York' as max_datetime_in_new_york
> >
> >   FROM grouped_data gd
>
> > When I run this query in pgadmin I get the following results
> >
> >
> "interval_start","interval_start_in_africa","min_datetime","min_datetime_in_berlin","max_datetime","ma
> > x_datetime_in_new_york"
> > "2013-10-04 15:35:00+13","2013-10-04 02:35:00+13","2013-10-04
> 15:35:00+13","2013-10-04
> > 04:35:00","2013-10-04 15:39:59+13","2013-10-03 22:39:59"
> > "2013-10-04 15:25:00+13","2013-10-04 02:25:00+13","2013-10-04
> 15:28:11+13","2013-10-04
> > 04:28:11","2013-10-04 15:29:59+13","2013-10-03 22:29:59"
> > "2013-10-04 15:40:00+13","2013-10-04 02:40:00+13","2013-10-04
> 15:40:00+13","2013-10-04
> > 04:40:00","2013-10-04 15:44:39+13","2013-10-03 22:44:39"
> > "2013-10-04 15:30:00+13","2013-10-04 02:30:00+13","2013-10-04
> 15:30:00+13","2013-10-04
> > 04:30:00","2013-10-04 15:34:59+13","2013-10-03 22:34:59"
> >
> > Notice that all the offsets are set to +13 which is my laptop's offset.
> Why don't they show the offset
> > of Africa or Berlin or whatever?
>
> The configuration parameter "TimeZone" determines how "timestamp with
> time zone" is interpreted and converted to a string.
>
> The reason for that is that in PostgreSQL there is no time zone
> information stored along with a "timestamp with time zone",
> it is stored in UTC.
>
> > Also note then unless I explictly cast the data as timestamp with
> > time zone all the offsets go away and it's reported as timestamp without
> time zone.
>
> That is because AT TIME ZONE returns a "timestamp without time zone"
> in this case, see the documentation.
>
> Yours,
> Laurenz Albe
>


Re: [GENERAL] Timestamp with and without timezone conversion confusion.

2013-10-02 Thread Tony Theodore

On 02/10/2013, at 6:49 PM, Tim Uckun  wrote:

>> The reason for that is that in PostgreSQL there is no time zone
>> information stored along with a "timestamp with time zone",
>> it is stored in UTC.
> 
> That seems unintuitive. What is the difference between timestamp without time 
> zone and timestamp with time zone? I was expecting to have the time zone 
> stored in the field. For example one row might be in UTC  but the other row 
> might be in my local time.
> 
> Maybe the question I need to ask is "how can I store the time zone along with 
> the timestamp"   
> 
>> That is because AT TIME ZONE returns a "timestamp without time zone"
> 
> Also seems counterintuitive, but I guess I can always convert it. I am just not 
> getting the right offset when I convert. That's what's puzzling.


Here's a handy blog post from Josh Berkus about timestamps:

http://it.toolbox.com/blogs/database-soup/zone-of-misunderstanding-48608

Cheers,

Tony



Re: [GENERAL] Timestamp with and without timezone conversion confusion.

2013-10-02 Thread Tim Uckun
That's an interesting article, but it tells me that I can't really use the
timestamp with time zone data type.  I really need to store the time zone
information along with the datetime, and I do not want to automatically
convert the timestamp to the connection time zone.

If one row has a timestamp in UTC and another has a timestamp in EST, I want
the user (all users) to view the timestamps "as is", with the offset, so they
can see how they may differ from each other and from their own time zone.

I guess I have to store the time zones separately in another field.


On Wed, Oct 2, 2013 at 10:05 PM, Tony Theodore wrote:

>
> On 02/10/2013, at 6:49 PM, Tim Uckun  wrote:
>
> >The reason for that is that in PostgreSQL there is no time zone
> information stored along with a "timestamp with time zone",
> it is stored in UTC.
>
> That seems unintuitive. What is the difference between timestamp without
> time zone and timestamp with time zone? I was expecting to have the time
> zone stored in the field. For example one row might be in UTC  but the
> other row might be in my local time.
>
> Maybe the question I need to ask is "how can I store the time zone along
> with the timestamp"
>
> >That is because AT TIME ZONE returns a "timestamp without time zone"
>
> Also seems counterintutive but I guess I can aways convert it. I am just
> not getting the right offset when I convert. That's what's puzzling.
>
>
>
> Here's a handy blog post from Josh Berkus about timestamps:
>
> http://it.toolbox.com/blogs/database-soup/zone-of-misunderstanding-48608
>
> Cheers,
>
> Tony
>
>


Re: [GENERAL] Timestamp with and without timezone conversion confusion.

2013-10-02 Thread Albe Laurenz
Tim Uckun wrote:
>> The reason for that is that in PostgreSQL there is no time zone
>> information stored along with a "timestamp with time zone",
>> it is stored in UTC.
> 
> That seems unintuitive. What is the difference between timestamp without time 
> zone and timestamp with
> time zone? I was expecting to have the time zone stored in the field. For 
> example one row might be in
> UTC  but the other row might be in my local time.

It is unintuitive and has caused many similar complaints
in the past, not least because other databases do it differently.

The main difference between timestamp with time zone and
timestamp without is that the former will get converted
to your time zone (specified with the "TimeZone" parameter)
automatically, while the latter always looks the same.
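A quick way to see this difference in psql (a sketch; the rendered offsets assume standard tzdata):

```sql
-- The same timestamptz value, rendered per the session's TimeZone:
SET TimeZone = 'UTC';
SELECT TIMESTAMPTZ '2013-10-04 15:35:00+13';  -- 2013-10-04 02:35:00+00

SET TimeZone = 'Pacific/Auckland';
SELECT TIMESTAMPTZ '2013-10-04 15:35:00+13';  -- 2013-10-04 15:35:00+13

-- A timestamp without time zone ignores TimeZone entirely:
SELECT TIMESTAMP '2013-10-04 15:35:00';       -- 2013-10-04 15:35:00
```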

> Maybe the question I need to ask is "how can I store the time zone along with 
> the timestamp"

Store an additional field "offset".
If you want to invest more energy and don't mind writing C,
you could create your own data type.
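One possible shape for that (a hypothetical sketch; table and column names are made up, and storing a zone name such as 'America/New_York' is often more useful than a bare offset because it tracks DST):

```sql
-- Store the instant (normalized to UTC internally) plus the origin zone:
CREATE TABLE reading (
    id       serial PRIMARY KEY,
    taken_at timestamptz NOT NULL,
    tz       text NOT NULL          -- e.g. 'America/New_York'
);

-- Render each row in the zone it was recorded in:
SELECT id, taken_at AT TIME ZONE tz AS local_time, tz FROM reading;
```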

>> That is because AT TIME ZONE returns a "timestamp without time zone"
> 
> Also seems counterintutive but I guess I can aways convert it. I am just not 
> getting the right offset
> when I convert. That's what's puzzling.

I think that this is required by the SQL standard.

But think of it this way:
It is the answer to the question "What is 2013-10-02 00:00:00 UTC
in Vienna?"
The answer is not time zone dependent.  It should be
"2013-10-02 02:00:00" and not "2013-10-02 02:00:00 CEST".
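In SQL terms:

```sql
-- "What is 2013-10-02 00:00:00 UTC in Vienna?"
SELECT TIMESTAMPTZ '2013-10-02 00:00:00+00' AT TIME ZONE 'Europe/Vienna';
-- 2013-10-02 02:00:00  (a timestamp without time zone)
```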

Yours,
Laurenz Albe



Re: [GENERAL] [HACKERS] Who is pgFoundery administrator?

2013-10-02 Thread Michael Paquier
On Wed, Oct 2, 2013 at 5:37 PM, KONDO Mitsumasa
 wrote:
> Who is pgFoundery administrator or board member now? I would like to send 
> e-mail
> them. At least, it does not have information and support page in pgFoundery 
> homepage.
Why don't you consider GitHub as a potential solution? With it you get
a bug tracker, a wiki, and a website for documentation. git also
integrates more easily when working with the pg core.
Creating an account and a new project takes no more than 5 minutes,
too...
-- 
Michael




Re: [GENERAL] Timestamp with and without timezone conversion confusion.

2013-10-02 Thread Karsten Hilbert
On Wed, Oct 02, 2013 at 09:49:38PM +1300, Tim Uckun wrote:

> Maybe the question I need to ask is "how can I store the time zone along
> with the timestamp"

You need an extra field, say, of type interval.

Karsten
-- 
GPG key ID E4071346 @ gpg-keyserver.de
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346




Re: [GENERAL] Timestamp with and without timezone conversion confusion.

2013-10-02 Thread Karsten Hilbert
On Wed, Oct 02, 2013 at 09:49:38PM +1300, Tim Uckun wrote:

>> The reason for that is that in PostgreSQL there is no time zone
>> information stored along with a "timestamp with time zone",
>> it is stored in UTC.

A better name might perhaps have been "timezone-aware timestamp".

Karsten
-- 
GPG key ID E4071346 @ gpg-keyserver.de
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346




Re: [GENERAL] Timestamp with and without timezone conversion confusion.

2013-10-02 Thread Karsten Hilbert
On Wed, Oct 02, 2013 at 09:18:30AM +, Albe Laurenz wrote:

> > Maybe the question I need to ask is "how can I store the time zone along 
> > with the timestamp"
> 
> Store an additional field "offset".
> If you want to invest more energy and don't mind writing C,
> you could create your own data type.

Might not a composite type (timestamp without time zone, interval) suffice?

Or does that still need some C sprinkling (for operator support, say)?

Karsten
-- 
GPG key ID E4071346 @ gpg-keyserver.de
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346




Re: [GENERAL] Timestamp with and without timezone conversion confusion.

2013-10-02 Thread Albe Laurenz
Karsten Hilbert wrote:
> > > Maybe the question I need to ask is "how can I store the time zone along 
> > > with the timestamp"
> >
> > Store an additional field "offset".
> > If you want to invest more energy and don't mind writing C,
> > you could create your own data type.
> 
> Might not a composite type (timestamp without timezone, interval) suffice ?

Depends on what you want.

If all you want is store timestamp and time zone, a composite
type is fine.
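A minimal sketch of such a composite type (hypothetical names; note that the default record comparison simply compares fields left to right, which is why correct ordering needs the C work discussed here):

```sql
-- Hypothetical composite of local timestamp plus UTC offset:
CREATE TYPE ts_and_zone AS (
    local_ts   timestamp without time zone,
    utc_offset interval
);

CREATE TABLE t (stamp ts_and_zone);
INSERT INTO t VALUES (ROW(TIMESTAMP '2013-10-02 02:00:00', INTERVAL '2 hours'));

-- The absolute instant has to be reconstructed by hand:
SELECT (stamp).local_ts - (stamp).utc_offset AS utc_instant FROM t;
```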

> Or does that still need some C sprinkling (for operator support, say) ?

Exactly. If you want "<" to work right for this data type
that's the road you have to go.

Yours,
Laurenz Albe



[GENERAL] partitioning for speed, but query planner ignores

2013-10-02 Thread David Rysdam
We have a by-our-standards large table (about 40e6 rows). Since it is
the bottleneck in some places, I thought I'd experiment with
partitioning. I'm following the instructions here:

http://www.postgresql.org/docs/current/static/ddl-partitioning.html

The table holds data about certain objects, each of which has an object
number and some number of historical entries (like account activity at a
bank, say). The typical usage pattern is: relatively rare inserts that
happen in the background via an automated process (meaning I don't care
if they take a little longer) and frequent querying, including some
where a human is sitting in front of it (i.e. I'd like it to be a lot
faster).

Our most frequent queries either select "all history for object N" or
"most recent item for some subset of objects".

Because object numbers figure so prominently, I thought I'd partition on
that. To me, it makes the most sense from a load-balancing perspective
to partition on the mod of the object number (for this test, evens vs
odds, but planning to go up to mod 10 or even mod 100). Lower numbers
are going to be queried much less often than higher numbers. This scheme
also means I never have to add partitions in the future.

I set up my check constraints ((objnum % 2) = 0 and (objnum % 2) = 1 on
the relevant tables) and turned constraint_exclusion to 'partition' in
postgresql.conf. I also turned it to 'on' in my psql interface.

However, when I run an EXPLAIN or an EXPLAIN ANALYZE, I still see it
checking both partitions. Is this because the query planner doesn't want
to do a mod? Should I go with simple ranges, even though this adds a
maintenance task?
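For reference, a sketch of the setup described (hypothetical table names, 9.2-era inheritance partitioning):

```sql
CREATE TABLE objhist (objnum integer NOT NULL, entry text);

-- Children with mod-based CHECK constraints:
CREATE TABLE objhist_even (CHECK ((objnum % 2) = 0)) INHERITS (objhist);
CREATE TABLE objhist_odd  (CHECK ((objnum % 2) = 1)) INHERITS (objhist);

SET constraint_exclusion = partition;
```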




Re: [GENERAL] partitioning for speed, but query planner ignores

2013-10-02 Thread David Rysdam
On Wed, 2 Oct 2013 08:34:44 -0400, David Rysdam  wrote:
> However, when I run an explain or an explain analyze, I still seeing it
> checking both partitions. Is this because the query planner doesn't want
> to do a mod? Should I go with simple ranges, even though this adds a
> maintenance task? 

I guess I should give some administrivia as well: Server is 9.2.1
running Linux. The configuration is otherwise pretty vanilla with only
minor, and poorly-understood, conf changes.




Re: [GENERAL] partitioning for speed, but query planner ignores

2013-10-02 Thread Bill Moran
On Wed, 2 Oct 2013 08:34:44 -0400
David Rysdam  wrote:

> We have a by-our-standards large table (about 40e6 rows). Since it is
> the bottleneck in some places, I thought I'd experiment with
> partitioning. I'm following the instructions here:
> 
> http://www.postgresql.org/docs/current/static/ddl-partitioning.html
> 
> The table holds data about certain objects, each of which has an object
> number and some number of historical entries (like account activity at a
> bank, say). The typical usage pattern is: relatively rare inserts that
> happen in the background via an automated process (meaning I don't care
> if they take a little longer) and frequent querying, including some
> where a human is sitting in front of it (i.e. I'd like it to be a lot
> faster).
> 
> Our most frequent queries either select "all history for object N" or
> "most recent item for some subset of objects".
> 
> Because object number figure so prominently, I thought I'd partition on
> that. To me, it makes the most sense from a load-balancing perspective
> to partition on the mod of the object number (for this test, evens vs
> odds, but planning to go up to mod 10 or even mod 100). Lower numbers
> are going to be queried much less often than higher numbers. This scheme
> also means I never have to add partitions in the future.
> 
> I set up my check constraints ((objnum % 2) = 0 and (objnum % 2) = 1 on
> the relevant tables) and turned constraint_exclusion to 'partition' in
> postgresql.conf. I also turned it to 'on' in my psql interface.
> 
> However, when I run an explain or an explain analyze, I still seeing it
> checking both partitions. Is this because the query planner doesn't want
> to do a mod? Should I go with simple ranges, even though this adds a
> maintenance task? 

Last I looked, the partitioning mechanism isn't _quite_ as smart as could
be desired.  For example:
SELECT * FROM table WHERE objnum = 5; -- will not take advantage of partition
You have to give the planner a bit more of a hint that it can
take advantage of the partition:
SELECT * FROM table WHERE (objnum % 2) = 1 AND objnum = 5;
As silly as it seems, this is enough information for the planner to know
that it only needs to scan one partition.
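Assuming an inheritance setup along the lines described (hypothetical names), the effect shows up directly in EXPLAIN:

```sql
-- Scans both children: the planner does not derive (objnum % 2) = 1
-- from objnum = 5 on its own:
EXPLAIN SELECT * FROM objhist WHERE objnum = 5;

-- Matches the CHECK constraint's form, so the even child is excluded:
EXPLAIN SELECT * FROM objhist WHERE (objnum % 2) = 1 AND objnum = 5;
```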

If this doesn't answer your question, you should probably provide some
more details (actual query and actual explain output, for example) to
help people better help you.

-- 
Bill Moran 




Re: [GENERAL] Timestamp with and without timezone conversion confusion.

2013-10-02 Thread Karsten Hilbert
On Wed, Oct 02, 2013 at 11:48:02AM +, Albe Laurenz wrote:

>> Or does that still need some C sprinkling (for operator support, say) ?
> 
> Exactly. If you want "<" to work right for this data type
> that's the road you have to go.

I see.

Whatever became of the 2011 intent to implement
the above that's linked to in the blog post ?

Karsten
-- 
GPG key ID E4071346 @ gpg-keyserver.de
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346




Re: [GENERAL] partitioning for speed, but query planner ignores

2013-10-02 Thread David Rysdam
On Wed, 2 Oct 2013 09:12:02 -0400, Bill Moran  wrote:
> Last I looked, the partitioning mechanism isn't _quite_ as smart as could
> be desired.  For example:
> SELECT * FROM table WHERE objnum = 5; -- will not take advantage of partition
> You have to give the planner a little more hint as to the fact that it can
> take advantage of the partition:
> SELECT * FROM table WHERE (objnum % 2) = 1 AND objnum = 5;
> As silly as it seems, this is enough information for the planner to know
> that it only needs to scan one partition.

This seemed ridiculously silly until I thought about it. I guess it has
no way of "unwrapping" my constraint and figuring out what to do. Would
this also apply if I did ranges, or is that a common enough constraint
that it *can* figure it out without me having to modify all my queries?




Re: [GENERAL] [HACKERS] Who is pgFoundery administrator?

2013-10-02 Thread Merlin Moncure
On Wed, Oct 2, 2013 at 3:37 AM, KONDO Mitsumasa
 wrote:
> Hi,
>
> I want to submit new project in pgFoundery project.
>
> I submitted new project which is WAL archive copy tool with directIO method in
> pgFoundery homepage 2 weeks ago, but it does not have approved and responded 
> at
> all:-(
>
> Who is pgFoundery administrator or board member now? I would like to send 
> e-mail
> them. At least, it does not have information and support page in pgFoundery 
> homepage.

I have not been able to get in contact with Marc either to help
resolve a mailing list administration issue.  Github is nice, but a
lot of projects are still hosted on pgfoundry and it kind of sucks to
be forced to move on this basis.

merlin




[GENERAL] Re: [GENERAL] Help on performance

2013-10-02 Thread Merlin Moncure
On Mon, Sep 30, 2013 at 10:03 AM, Carlos Eduardo Sotelo Pinto
 wrote:
>
> I need help with PostgreSQL performance.
>
> I have configured my postgresql files to tune my server; however, it is
> slow and CPU usage is higher than 120%.
>
> I have no idea how to solve this issue. I tried to search for more info
> on Google, but it is not enough. I have also tried autovacuum and
> reindexing the db, but it continues being slow.
>
> My app is a GPS listener that inserts more than 6000 records per minute
> using a TCP server developed on Python Twisted, where there are no
> problems. The problem is when I try to follow the GPS devices on a map in
> real time: I query my database every 6 seconds from my Django app to
> request the last position using a stored procedure, but the query gets
> slow with more than 50 devices and the CPU starts to use more than 120%
> of its resources.
>
> The Django app connects to the Postgres database directly, and the TCP
> listener server for the devices connects to the database in a threaded
> way using pgbouncer. I have not put my Django web app on pgbouncer
> because I don't want to crash the GPS devices' connections on pgbouncer.
>
> I hope you could help me get better performance.
>
> I am attaching my stored procedure, my conf files, and my CPU and memory
> information.
>
> **Stored procedure**
>
> CREATE OR REPLACE FUNCTION gps_get_live_location (
> _imeis varchar(8)
> )
> RETURNS TABLE (
> imei varchar,
> device_id integer,
> date_time_process timestamp with time zone,
> latitude double precision,
> longitude double precision,
> course smallint,
> speed smallint,
> mileage integer,
> gps_signal smallint,
> gsm_signal smallint,
> alarm_status boolean,
> gsm_status boolean,
> vehicle_status boolean,
> alarm_over_speed boolean,
> other text,
> address varchar
> ) AS $func$
> DECLARE
> arr varchar[];
> BEGIN
> arr := regexp_split_to_array(_imeis, E'\\s+');
> FOR i IN 1..array_length(arr, 1) LOOP
> RETURN QUERY
> SELECT
> gpstracking_device_tracks.imei,
> gpstracking_device_tracks.device_id,
> gpstracking_device_tracks.date_time_process,
> gpstracking_device_tracks.latitude,
> gpstracking_device_tracks.longitude,
> gpstracking_device_tracks.course,
> gpstracking_device_tracks.speed,
> gpstracking_device_tracks.mileage,
> gpstracking_device_tracks.gps_signal,
> gpstracking_device_tracks.gsm_signal,
> gpstracking_device_tracks.alarm_status,
> gpstracking_device_tracks.gps_status,
> gpstracking_device_tracks.vehicle_status,
> gpstracking_device_tracks.alarm_over_speed,
> gpstracking_device_tracks.other,
> gpstracking_device_tracks.address
> FROM gpstracking_device_tracks
> WHERE gpstracking_device_tracks.imei = arr[i]::VARCHAR
> AND gpstracking_device_tracks.date_time_process >= date_trunc('hour',
> now())
> AND gpstracking_device_tracks.date_time_process <= NOW()
> ORDER BY gpstracking_device_tracks.date_time_process DESC
> LIMIT 1;
> END LOOP;
> RETURN;
> END;
> $func$
> LANGUAGE plpgsql VOLATILE SECURITY DEFINER;


Why are you doing this in a loop?  What's the point of the LIMIT 1?
You can almost certainly refactor this procedure into a vanilla query.
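One way such a refactoring might look (a sketch, not tested against the original schema; DISTINCT ON picks the newest row per imei in a single scan, and _imeis stands for the whitespace-separated list the function receives):

```sql
-- Hypothetical rewrite: latest position per device, no loop:
SELECT DISTINCT ON (t.imei) t.*
FROM gpstracking_device_tracks t
WHERE t.imei = ANY (regexp_split_to_array(_imeis, E'\\s+'))
  AND t.date_time_process >= date_trunc('hour', now())
  AND t.date_time_process <= now()
ORDER BY t.imei, t.date_time_process DESC;
```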

merlin




Re: [GENERAL] Timestamp with and without timezone conversion confusion.

2013-10-02 Thread Albe Laurenz
Karsten Hilbert wrote:
> Whatever became of the 2011 intent to implement
> the above that's linked to in the blog post ?

You'd have to ask Alvaro.

Yours,
Laurenz Albe



Re: [GENERAL] Timestamp with and without timezone conversion confusion.

2013-10-02 Thread Karsten Hilbert
On Wed, Oct 02, 2013 at 02:09:23PM +, Albe Laurenz wrote:

> Karsten Hilbert wrote:
> > Whatever became of the 2011 intent to implement
> > the above that's linked to in the blog post ?
> 
> You'd have to ask Alvaro.

I figured he'd maybe read this on-list :-)

Karsten
-- 
GPG key ID E4071346 @ gpg-keyserver.de
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346




Re: [GENERAL] Timestamp with and without timezone conversion confusion.

2013-10-02 Thread Adrian Klaver

On 10/02/2013 05:58 AM, Karsten Hilbert wrote:
> On Wed, Oct 02, 2013 at 11:48:02AM +, Albe Laurenz wrote:
>
>>> Or does that still need some C sprinkling (for operator support, say)?
>>
>> Exactly. If you want "<" to work right for this data type
>> that's the road you have to go.
>
> I see.
>
> Whatever became of the 2011 intent to implement
> the above that's linked to in the blog post ?

Not sure about that, but I do remember this:

http://svana.org/kleptog/pgsql/taggedtypes.html

No indication of whether it will work with 9.x servers though.



> Karsten




--
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] partitioning for speed, but query planner ignores

2013-10-02 Thread Kevin Grittner
David Rysdam  wrote:

> We have a by-our-standards large table (about 40e6 rows). Since it is
> the bottleneck in some places, I thought I'd experiment with
> partitioning.

In my personal experience I have gone into hundreds of millions of
rows with good performance without partitioning.  It's all about
designing good indexes for the workload.

I have only seen partitioning help in two cases:
(1)  There will be bulk deletes of rows, and you know at insert
time which bulk delete the row belongs with.  Dropping a partition
table is a very fast way to delete a large number of rows.
(2)  The bulk of activity will be on a relatively small subset of
the rows at any one time, and you can partition such that the set
of active rows will be in a small number of partitions.
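Case (1) is fast because dropping a child table is a catalog operation rather than a row-by-row DELETE; a hypothetical sketch:

```sql
-- Hypothetical monthly partitions (inheritance style):
CREATE TABLE activity (logged_at timestamptz NOT NULL, detail text);
CREATE TABLE activity_2013_09
    (CHECK (logged_at >= '2013-09-01' AND logged_at < '2013-10-01'))
    INHERITS (activity);

-- The "bulk delete" of September:
DROP TABLE activity_2013_09;
```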

In all other cases, I have only seen partitioning harm performance.
There is no reason to think that checking the table-level
constraints on every partition table will be faster than descending
through an index tree level.

> The table holds data about certain objects, each of which has an object
> number and some number of historical entries (like account activity at a
> bank, say). The typical usage pattern is: relatively rare inserts that
> happen in the background via an automated process (meaning I don't care
> if they take a little longer) and frequent querying, including some
> where a human is sitting in front of it (i.e. I'd like it to be a lot
> faster).
>
> Our most frequent queries either select "all history for object N" or
> "most recent item for some subset of objects".
>
> Because object number figure so prominently, I thought I'd partition on
> that. To me, it makes the most sense from a load-balancing perspective

Load balancing?  Hitting a single partition more heavily improves
your cache hit ratio.  What sort of benefit are you expecting from
spreading the reads across all the partitions?  *Maybe* that could
help if you carefully placed each partition table on a separate set
of spindles, but usually you are better off having one big RAID so
that every partition is spread across all the spindles
automatically.

> Lower numbers are going to be queried much less often than higher
> numbers.

This suggests to me that you *might* get a performance boost if you
define partitions on object number *ranges*.  It still seems a bit
dubious, but it has a chance.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [GENERAL] [HACKERS] Who is pgFoundery administrator?

2013-10-02 Thread Marc Fournier

direct_cp project approved, sorry for the delay …

As to the mailing list issue, where did you send it? Can you resend it to
me here?


On 2013-10-02, at 6:46 , Merlin Moncure  wrote:

> On Wed, Oct 2, 2013 at 3:37 AM, KONDO Mitsumasa
>  wrote:
>> Hi,
>> 
>> I want to submit new project in pgFoundery project.
>> 
>> I submitted new project which is WAL archive copy tool with directIO method 
>> in
>> pgFoundery homepage 2 weeks ago, but it does not have approved and responded 
>> at
>> all:-(
>> 
>> Who is pgFoundery administrator or board member now? I would like to send 
>> e-mail
>> them. At least, it does not have information and support page in pgFoundery 
>> homepage.
> 
> I have not been able to get in contact with Marc either to help
> resolve a mailing list administration issue.  Github is nice, but a
> lot of projects are still hosted on pgfoundry and it kind of sucks to
> be forced to move on this basis.
> 
> merlin





Re: [GENERAL] partitioning for speed, but query planner ignores

2013-10-02 Thread David Rysdam
On Wed, 2 Oct 2013 11:19:58 -0400, Kevin Grittner  wrote:
> David Rysdam  wrote:
> 
> > We have a by-our-standards large table (about 40e6 rows). Since it is
> > the bottleneck in some places, I thought I'd experiment with
> > partitioning.
> 
> In my personal experience I have gone into hundreds of millions of
> rows with good performance without partitioning.  It's all about
> designing good indexes for the workload.

Well, our performance is still good. Certainly better than a lot of
projects I've seen even with less data. But it's still our "worst" table
and I have some free time to experiment...

> > Because object numbers figure so prominently, I thought I'd partition on
> > that. To me, it makes the most sense from a load-balancing perspective
> 
> Load balancing?  Hitting a single partition more heavily improves
> your cache hit ratio.  What sort of benefit are you expecting from
> spreading the reads across all the partitions?  *Maybe* that could
> help if you carefully placed each partition table on a separate set
> of spindles, but usually you are better off having one big RAID so
> that every partition is spread across all the spindles
> automatically.

Now that you spell it out, I guess that does make more sense. I had some
vague notion of tables "doing work" but really if it can load one
partition into RAM and get most of my hits from there, it'd be a big
win.
 
> > Lower numbers are going to be queried much less often than higher
> > numbers.
> 
> This suggests to me that you *might* get a performance boost if you
> define partitions on object number *ranges*.  It still seems a bit
> dubious, but it has a chance.

Would the planner be smart enough to figure out ranges without me having
to "hint" my queries? 

In any case, my speed tests are coming out the opposite of what I
expected. Within-partition queries are taking longer than the whole
table did, while across-partition queries are faster. I'll have to do
more thinking on that.




Re: [GENERAL] [HACKERS] Who is pgFoundry administrator?

2013-10-02 Thread Merlin Moncure
On Wed, Oct 2, 2013 at 10:42 AM, Marc Fournier  wrote:
>
> direct_cp project approved, sorry for delay …
>
> as to the the mailing list issue, where did you send it?   can you resend it 
> to me here?

Sure -- I sent it initially to the www list, then directly to you.

"Trying to figure out what/who to contact -- I admin the libpqtypes
mailing lists and I'm unable to authenticate through the moderator
interface -- is there a way to reset or recover the admin password?"

merlin




[GENERAL] Doubt with physical storage being used by postgres when storing LOBs

2013-10-02 Thread Víctor Cosqui
Hi all


I am developing an application which uses postgres 9.2 to store binaries as
oid objects.

CREATE TABLE content (contentname text, contentoid oid);

I am making some tests to evaluate how much disk space I will need to
allocate for these objects.

To measure the space used by postgres I have used two different tools, both
with the same results

1.- Checking physical HD space by making a "sudo du -sb
/opt/PostgreSQL/9.2/data/base/" before and after inserting the data

2.- Asking postgres directly for the table size estimate: "select
pg_size_pretty(pg_relation_size('pg_largeobject'))"

I have tested with different binaries and I am getting different results.
For example, when I store the content of a 17 MB zipped file, the disk
space grows by 24 MB. The reason for this increment seems to be an index
created on the table "pg_largeobject". The index is
"pg_largeobject_loid_pn_index".

On the other hand, when I store, say, 17 MB of zeroes, the increase in
disk usage is much smaller.

I think this could be because TOAST compresses the stored content, so it
can compress the zeroes but not the previously compressed zip content.

My question is: Is this increase of ~40% normal? Has someone else
experienced this?

Thanks in advance for your help!

Víctor Cosqui


[GENERAL] Re: [GENERAL] Help on performance

2013-10-02 Thread Carlos Eduardo Sotelo Pinto
Thanks to all

I have fixed that by refactoring the function:

BEGIN
arr := regexp_split_to_array(_imeis, E'\\s+');
 RETURN QUERY
SELECT
gpstracking_device_tracks.imei,
gpstracking_device_tracks.device_id,
gpstracking_device_tracks.date_time_process,
gpstracking_device_tracks.latitude,
gpstracking_device_tracks.longitude,
gpstracking_device_tracks.course,
gpstracking_device_tracks.speed,
gpstracking_device_tracks.mileage,
gpstracking_device_tracks.gps_signal,
gpstracking_device_tracks.gsm_signal,
gpstracking_device_tracks.alarm_status,
gpstracking_device_tracks.gps_status,
gpstracking_device_tracks.vehicle_status,
gpstracking_device_tracks.alarm_over_speed,
gpstracking_device_tracks.other,
gpstracking_device_tracks.address
FROM (
SELECT
gpstracking_device_tracks.imei,
gpstracking_device_tracks.device_id,
gpstracking_device_tracks.date_time_process,
gpstracking_device_tracks.latitude,
gpstracking_device_tracks.longitude,
gpstracking_device_tracks.course,
gpstracking_device_tracks.speed,
gpstracking_device_tracks.mileage,
gpstracking_device_tracks.gps_signal,
gpstracking_device_tracks.gsm_signal,
gpstracking_device_tracks.alarm_status,
gpstracking_device_tracks.gps_status,
gpstracking_device_tracks.vehicle_status,
gpstracking_device_tracks.alarm_over_speed,
gpstracking_device_tracks.other,
gpstracking_device_tracks.address,
ROW_NUMBER() OVER(PARTITION BY gpstracking_device_tracks.imei ORDER BY
gpstracking_device_tracks.date_time_process DESC) as rnumber
FROM gpstracking_device_tracks
WHERE gpstracking_device_tracks.imei = ANY(arr)
AND gpstracking_device_tracks.date_time_process >= date_trunc('hour',
now())
AND gpstracking_device_tracks.date_time_process <= NOW()
) AS gpstracking_device_tracks
WHERE gpstracking_device_tracks.rnumber = 1;
END;
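
For queries of this shape, the other thing that usually keeps CPU down is a composite index matching both the filter and the per-imei ordering used by the window function. A sketch (the index name is made up, and whether it helps depends on the actual data distribution):

```sql
-- Supports the imei = ANY(...) filter and the per-imei
-- ORDER BY date_time_process DESC inside ROW_NUMBER().
CREATE INDEX gpstracking_device_tracks_imei_dtp_idx
    ON gpstracking_device_tracks (imei, date_time_process DESC);
```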


2013/10/2 Merlin Moncure 

> On Mon, Sep 30, 2013 at 10:03 AM, Carlos Eduardo Sotelo Pinto
>  wrote:
> >
> > I need help with postgresql performance.
> >
> > I have configured my postgresql files to tune my server, however it is
> > slow and cpu usage is higher than 120%.
> >
> > I have no idea how to solve this issue. I tried to search for more info
> > on google but it was not enough, and I have also tried autovacuum and
> > reindexing the db, but it continues to be slow.
> >
> > My app is a gps listener that inserts more than 6000 records per minute
> > using a tcp server developed on python twisted, where there are no
> > problems. The problem is when I try to follow the gps devices on a map
> > in realtime: I query my database every 6 seconds from my django app to
> > request the last position using a stored procedure, but the query gets
> > slow with more than 50 devices and cpu usage climbs above 120%.
> >
> > The django app connects to the postgres database directly, and the tcp
> > listener server for the devices connects in a threaded way through
> > pgbouncer. I am not using pgbouncer for my django web app because I
> > don't want to disrupt the gps device connections on pgbouncer.
> >
> > I hope you can help me get better performance.
> >
> > I am attaching my stored procedure, my conf files and my cpu and memory
> > information.
> > **Stored procedure**
> >
> > CREATE OR REPLACE FUNCTION gps_get_live_location (
> > _imeis varchar(8)
> > )
> > RETURNS TABLE (
> > imei varchar,
> > device_id integer,
> > date_time_process timestamp with time zone,
> > latitude double precision,
> > longitude double precision,
> > course smallint,
> > speed smallint,
> > mileage integer,
> > gps_signal smallint,
> > gsm_signal smallint,
> > alarm_status boolean,
> > gsm_status boolean,
> > vehicle_status boolean,
> > alarm_over_speed boolean,
> > other text,
> > address varchar
> > ) AS $func$
> > DECLARE
> > arr varchar[];
> > BEGIN
> > arr := regexp_split_to_array(_imeis, E'\\s+');
> > FOR i IN 1..array_length(arr, 1) LOOP
> > RETURN QUERY
> > SELECT
> > gpstracking_device_tracks.imei,
> > gpstracking_device_tracks.device_id,
> > gpstracking_device_tracks.date_time_process,
> > gpstracking_device_tracks.latitude,
> > gpstracking_device_tracks.longitude,
> > gpstracking_device_tracks.course,
> > gpstracking_device_tracks.speed,
> > gpstracking_device_tracks.mileage,
> > gpstracking_device_tracks.gps_signal,
> > gpstracking_device_tracks.gsm_signal,
> > gpstracking_device_tracks.alarm_status,
> > gpstracking_device_tracks.gps_status,
> > gpstracking_device_tracks.vehicle_status,
> > gpstracking_device_tracks.alarm_over_speed,
> > gpstracking_device_tracks.other,
> > gpstracking_device_tracks.address
> > FROM gpstracking_device_tracks
> > WHERE gpstracking_device_tracks.imei = arr[i]::VARCHAR
> > AND gpstracking_device_tracks.date_time_process >= date_trunc('hour',
> > now())
> > AND gpstracking_device_tracks.date_time_process <= NOW()
> >

Re: [GENERAL] partitioning for speed, but query planner ignores

2013-10-02 Thread Kevin Grittner
David Rysdam  wrote:

> Would the planner be smart enough to figure out ranges without me
> having to "hint" my queries?

Yes, it handles ranges well.
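
A quick way to confirm the pruning yourself, with no query hints needed (a sketch; the table and values are made up):

```sql
-- Inheritance-based range partitions, 9.x style:
CREATE TABLE obj (objnum bigint);
CREATE TABLE obj_a (CHECK (objnum < 1000))  INHERITS (obj);
CREATE TABLE obj_b (CHECK (objnum >= 1000)) INHERITS (obj);

SET constraint_exclusion = partition;  -- the default setting

-- The plan should show scans of obj and obj_a only; obj_b is excluded
-- because its CHECK constraint contradicts the WHERE clause.
EXPLAIN SELECT * FROM obj WHERE objnum BETWEEN 100 AND 200;
```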

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [GENERAL] Timestamp with and without timezone conversion confusion.

2013-10-02 Thread Steve Crawford

On 10/02/2013 04:19 AM, Karsten Hilbert wrote:
> On Wed, Oct 02, 2013 at 09:49:38PM +1300, Tim Uckun wrote:
>
>>> The reason for that is that in PostgreSQL there is no time zone
>>> information stored along with a "timestamp with time zone",
>>> it is stored in UTC.
>
> A better name might perhaps have been "timezone aware timestamp".
>
> Karsten


The trouble is that it isn't timezone aware.

When I have to explain this I tend to tell people to mentally change 
"timestamp with time zone" to "point-in-time". That "point-in-time" data 
can be represented in many different formats and "localized" to 
different zones but they are all the identical point-in-time.
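
That is easy to see in psql; the rendering depends only on the session's TimeZone setting (a sketch):

```sql
BEGIN;
SET LOCAL timezone = 'Pacific/Auckland';
SELECT '2013-10-04 02:35:00+13'::timestamptz;  -- 2013-10-04 02:35:00+13
SET LOCAL timezone = 'UTC';
SELECT '2013-10-04 02:35:00+13'::timestamptz;  -- 2013-10-03 13:35:00+00
ROLLBACK;
-- The same point-in-time both times; only the rendering changes.
```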


Cheers,
Steve





Re: [GENERAL] [HACKERS] Who is pgFoundry administrator?

2013-10-02 Thread Andrew Gierth
> "Merlin" == Merlin Moncure  writes:

 >> Who is the pgFoundry administrator or board member now? I would like
 >> to send them e-mail. At least, there is no information or support
 >> page on the pgFoundry homepage.

 Merlin> I have not been able to get in contact with Marc either to
 Merlin> help resolve a mailing list administration issue.  Github is
 Merlin> nice, but a lot of projects are still hosted on pgfoundry and
 Merlin> it kind of sucks to be forced to move on this basis.

Also, many pgfoundry projects had all their admins and members removed
leaving them empty; no attempt seems to have been made to restore this
(surely any project with no admins could have had them restored from a
backup, rather than expecting every individual user to complain about
it?)

The help-with-pgfoundry forum was also broken last I looked.

-- 
Andrew (irc:RhodiumToad)




Re: [GENERAL] Timestamp with and without timezone conversion confusion.

2013-10-02 Thread Steve Crawford

On 10/02/2013 01:49 AM, Tim Uckun wrote:
>> The reason for that is that in PostgreSQL there is no time zone
>> information stored along with a "timestamp with time zone",
>> it is stored in UTC.
>
> That seems unintuitive. What is the difference between timestamp
> without time zone and timestamp with time zone? I was expecting to
> have the time zone stored in the field. For example one row might be
> in UTC but the other row might be in my local time.
>
> Maybe the question I need to ask is "how can I store the time zone
> along with the timestamp?"
>
>> That is because AT TIME ZONE returns a "timestamp without time zone"
>
> Also seems counterintuitive, but I guess I can always convert it. I am
> just not getting the right offset when I convert. That's what's puzzling.



As I mentioned in a separate reply, the best mental-model I've found for 
the ill-named "timestamp with time zone" is "point in time."


If you also need the location (or just the time zone) of an event, I
would recommend using two fields: event_timestamp, a timestamp with time
zone (point in time), and event_timezone, a text column holding the full
timezone name. You can get a full list of recognized time-zone names
with "select * from pg_timezone_names".


I recommend storing the data as a timestamp with time zone plus a full
time-zone name to avoid data ambiguity during daylight saving changes.
I.e. when the clocks fall back you will have 1:30 am twice if you are
storing a timestamp without time zone. This *may* be disambiguated by an
appropriate zone abbreviation like 1:30 EST vs 1:30 EDT, but
abbreviations lead to other problems in worldwide data, including the
fact that abbreviations can be reused -- weirdness like needing to set
the AUSTRALIAN_TIMEZONES parameter properly to avoid conflict between
EST (Australian Eastern Standard Time) and EST (US Eastern Standard
Time), among others. This will be even more "fun" when selecting from a
table that includes both Australian and United States data.


If you structure the data as recommended above you can simply get the 
local time as:


SELECT ..., event_timestamp at time zone event_timezone as 
event_local_time, ...


when you need the local time but you will still retain the exact 
point-in-time for use as needed.
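
A minimal sketch of that two-column layout (the table and column names are just for illustration):

```sql
CREATE TABLE event (
    event_timestamp timestamptz NOT NULL,  -- the point in time (stored in UTC)
    event_timezone  text NOT NULL          -- full zone name, e.g. 'Europe/Berlin'
);

INSERT INTO event VALUES ('2013-10-04 02:35:00+13', 'Europe/Berlin');

-- Local wall-clock time in the event's own zone:
SELECT event_timestamp AT TIME ZONE event_timezone AS event_local_time
  FROM event;  -- 2013-10-03 15:35:00, a timestamp without time zone
```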


Cheers,
Steve



Re: [GENERAL] Doubt with physical storage being used by postgres when storing LOBs

2013-10-02 Thread Merlin Moncure
On Wed, Oct 2, 2013 at 11:08 AM, Víctor Cosqui  wrote:
> Hi all
>
>
> I am developing an application which uses postgres 9.2 to store binaries as
> oid objects.
>
> CREATE TABLE content (contentname text, contentoid oid);
>
> I am making some tests to evaluate how much disk space I will need to
> allocate for these objects.
>
> To measure the space used by postgres I have used two different tools, both
> with the same results
>
> 1.- Checking physical HD space by making a "sudo du -sb
> /opt/PostgreSQL/9.2/data/base/" before and after inserting the data
>
> 2.- Asking postgres directly for the table size estimate: "select
> pg_size_pretty(pg_relation_size('pg_largeobject'))"
>
> I have tested with different binaries and I am getting different results.
> For example, when I store the content of a 17 MB zipped file, the disk
> space grows by 24 MB. The reason for this increment seems to be an index
> created on the table "pg_largeobject". The index is
> "pg_largeobject_loid_pn_index".
>
> On the other hand, when I store, say, 17 MB of zeroes, the increase in
> disk usage is much smaller.
>
> I think this could be because TOAST compresses the stored content, so it
> can compress the zeroes but not the previously compressed zip content.
>
> My question is: Is this increase of ~40% normal? Has someone else
> experienced this?

TOAST will compress data if it thinks it can (you can disable this
behavior and arguably should if your data is pre-compressed).  40% for
the index seems high but it may be accurate.  Personally, I prefer
bytea storage to LOB although LOB is a little bit faster.
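
To see the index's share of the space directly, and to skip the compression attempts for data that is already compressed (the bytea column below is hypothetical):

```sql
-- Heap vs heap-plus-indexes for the large-object store:
SELECT pg_size_pretty(pg_relation_size('pg_largeobject'))       AS heap,
       pg_size_pretty(pg_total_relation_size('pg_largeobject')) AS heap_plus_indexes;

-- If switching to bytea, EXTERNAL storage keeps the data out-of-line
-- but skips compression (useful for zip/jpeg data that won't shrink):
-- ALTER TABLE content ALTER COLUMN body SET STORAGE EXTERNAL;
```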

merlin




Re: [GENERAL] partitioning for speed, but query planner ignores

2013-10-02 Thread bricklen
On Wed, Oct 2, 2013 at 9:01 AM, David Rysdam  wrote:

> I had some vague notion of tables "doing work" but really if it can
> load one partition into RAM and get most of my hits from there, it'd
> be a big win.


The same concept applies to the frequently-used indexes on that partition.


Re: [GENERAL] Timestamp with and without timezone conversion confusion.

2013-10-02 Thread Karsten Hilbert
On Wed, Oct 02, 2013 at 10:40:36AM -0700, Steve Crawford wrote:

> >>> The reason for that is that in PostgreSQL there is no time zone
> >>> information stored along with a "timestamp with time zone",
> >>> it is stored in UTC.
> >
> > A better name might perhaps have been "timezone aware timestamp".
> >
> > Karsten
> 
> The trouble is that it isn't timezone aware.

INSERT/UPDATE is, SELECT is not :-)

> When I have to explain this I tend to tell people to mentally change
> "timestamp with time zone" to "point-in-time". That "point-in-time"
> data can be represented in many different formats and "localized" to
> different zones but they are all the identical point-in-time.

That is, indeed, a helpful metaphor.

Maybe an explicit statement could be added to the docs
(I just checked 9.3) to the effect that "no, the input
time zone is NOT stored and can NOT be retrieved later,
think of it as a point-in-time".

Karsten
-- 
GPG key ID E4071346 @ gpg-keyserver.de
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346




Re: [GENERAL] Timestamp with and without timezone conversion confusion.

2013-10-02 Thread Tim Uckun
I do think it would help to add it to the docs. The name "TIMESTAMP WITH
TIME ZONE" clearly implies the time zone is stored in the field. One day
there will really be a timestamp with a time zone embedded in it, and I
wonder what they will call that.


Re: [GENERAL] Timestamp with and without timezone conversion confusion.

2013-10-02 Thread Adrian Klaver

On 10/02/2013 02:53 PM, Tim Uckun wrote:
> I do think it would help to add it to the docs. The name "TIMESTAMP WITH
> TIME ZONE" clearly implies the time zone is stored in the field. One day
> there will really be a timestamp with a time zone embedded in it, and I
> wonder what they will call that.

I think you are fighting the standard here:

sql92
http://www.contrib.andrew.cmu.edu/~shadow/sql/sql1992.txt

pg 31
"
 Therefore, datetime data types that contain time fields (TIME and
 TIMESTAMP) are maintained in Universal Coordinated Time (UTC), with
 an explicit or implied time zone part.

 The time zone part is an interval specifying the difference between
 UTC and the actual date and time in the time zone represented by
 the time or timestamp data item. The time zone displacement is
 defined as

  INTERVAL HOUR TO MINUTE

 A TIME or TIMESTAMP that does not specify WITH TIME ZONE has an
 implicit time zone equal to the local time zone for the SQL-session.
 The value of time represented in the data changes along with the
 local time zone for the SQL-session. However, the meaning of the
 time does not change because it is effectively maintained in UTC.
"

sql99

http://dbis-informatik.uibk.ac.at/files/ext/lehre/ss11/vo-ndbm/lit/ORel-SQL1999-IBM-Nelson-Mattos.pdf

pg 67
"
Coordinated universal time (UTC) used to store TIME and
TIMESTAMP values

WITH TIME ZONE can be specified

Each session has a time zone, which is used if no time zone is
explicitly specified

"
--
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] [HACKERS] Who is pgFoundry administrator?

2013-10-02 Thread KONDO Mitsumasa
(2013/10/02 17:37), KONDO Mitsumasa wrote:
> I want to submit a new project on pgFoundry.
Our new project was approved yesterday!
Thanks very much to the pgFoundry crew.

Regards,
-- 
Mitsumasa KONDO
NTT Open Source Software Center




Re: [GENERAL] Postgres replication question :- One master 2 slaves 9.0.10

2013-10-02 Thread Jaime Casanova
On Tue, Oct 1, 2013 at 10:48 PM, Sergey Konoplev  wrote:
> On Tue, Oct 1, 2013 at 6:15 PM, Jaime Casanova  wrote:
>
>> but it would be a good idea to set hot_standby_feedback to on and
>> max_standby_archive_delay to something larger than 30s
>
> Doesn't replica need a connection to master for hot_standby_feedback?
>

doh! yes, it needs it...

vacuum_defer_cleanup_age is the one you should set if you never do
streaming... but, of course, that is not accurate enough
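
A sketch of the relevant primary-side setting (the value is workload-dependent and only a guess):

```
# postgresql.conf on the primary
# Defer cleanup of dead rows by this many transactions, so that
# queries on a standby fed from the archive are less likely to be
# cancelled by cleanup records in the replayed WAL.
vacuum_defer_cleanup_age = 100000
```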

-- 
Jaime Casanova www.2ndQuadrant.com
Professional PostgreSQL: Soporte 24x7 y capacitación
Phone: +593 4 5107566 Cell: +593 987171157

