Re: [GENERAL] Strange message from pg_receivexlog

2013-08-20 Thread Magnus Hagander
On Tue, Aug 20, 2013 at 8:17 AM, Sergey Konoplev  wrote:
> Hi all,
>
> My WAL archiving script based on pg_receivexlog reported the following
> error several days ago (just ignore everything before
> 'pg_receivexlog', it's a message my script generates).
>
> Thu Aug 15 18:33:09 MSK 2013 ERROR archive_wal.sh: Problem occured
> during WAL archiving: pg_receivexlog: could not send feedback packet:
> server closed the connection unexpectedly
>
> At the same time postgres reported this error in log:
>
> 2013-08-15 18:32:51 MSK 30945 postgres@[unknown] from [local]
> [vxid:53/0 txid:0] [streaming 2A97/6FA48000] LOG:  terminating
> walsender process due to replication timeout
>
> Both pg_receivexlog and postgres run on the same machine;
> pg_receivexlog connects to postgres locally. /var/log/messages has
> absolutely nothing about it. I also have a hot standby on another
> machine connecting to the same master, but there is nothing strange in
> its logs either.
>
> Any thoughts what it was?

It looks like something outside postgres or pg_receivexlog terminated
the connection. pg_receivexlog noticed it was closed, whereas postgres
waited for the timeout (though it probably would have noticed sooner if
it had actually had other data to send). Do you have iptables
connection tracking or something similar that could be interfering?
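
If conntrack is involved, its established-flow timeout is worth comparing
against the walsender timeout; for example (assuming Linux with
nf_conntrack loaded; the GUC is replication_timeout before 9.3,
wal_sender_timeout from 9.3 on):

    sysctl net.netfilter.nf_conntrack_tcp_timeout_established
    psql -c 'SHOW replication_timeout;'

If the conntrack timeout is shorter than the interval between feedback
packets, an idle replication connection could be dropped exactly like this.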


-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Handling of tz-aware literals in non-tz-aware fields

2013-08-20 Thread Daniele Varrazzo
Hello,

the issue can be shown with this example:

piro=> SET TIMEZONE = 'America/New_York';
SET

piro=> select '1970-01-01T00:00:00+03:00'::timestamp;
      timestamp
---------------------
 1970-01-01 00:00:00
(1 row)

piro=> select '1970-01-01T00:00:00+03:00'::timestamptz::timestamp;
      timestamp
---------------------
 1969-12-31 16:00:00
(1 row)

I find it surprising that an unknown literal containing a tz-aware
timestamp has the tz info discarded (e.g. upon insertion into a
timestamp without time zone field), whereas the cast from tz-aware to
non-tz-aware performs a conversion. I find the second behaviour much
more reasonable.

Is there an explanation for the first behaviour?

Is the first behaviour documented?

Thank you very much,


-- Daniele




[GENERAL] effective_io_concurrency with an SSD SAN?

2013-08-20 Thread Evan D. Hoffman
Given a system with 32 cores, an SSD SAN with 48x drives, and 2x 8Gbps
paths from the server to the SAN, what would be a good starting point
to set effective_io_concurrency?  I currently have it set to 32, but I
kind of feel like the right setting would be "2" since we have two
paths.  We don't often saturate both links but it does happen from
time to time.
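
For what it's worth, the documentation ties effective_io_concurrency to the
number of drives in the striped array rather than the number of paths, so a
starting point based on the drive count might look like this (a guess to
benchmark from, not a tested recommendation):

    # postgresql.conf
    effective_io_concurrency = 48   # one per data-bearing drive in the array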




Re: [GENERAL] Handling of tz-aware literals in non-tz-aware fields

2013-08-20 Thread Adrian Klaver

On 08/20/2013 04:00 AM, Daniele Varrazzo wrote:

Hello,

the issue can be shown with this example:

 piro=> SET TIMEZONE = 'America/New_York';
 SET

 piro=> select '1970-01-01T00:00:00+03:00'::timestamp;
       timestamp
 ---------------------
  1970-01-01 00:00:00
 (1 row)

 piro=> select '1970-01-01T00:00:00+03:00'::timestamptz::timestamp;
       timestamp
 ---------------------
  1969-12-31 16:00:00
 (1 row)

I find it surprising that an unknown literal containing a tz-aware
timestamp has the tz info discarded (e.g. upon insertion into a
timestamp without time zone field), whereas the cast from tz-aware to
non-tz-aware performs a conversion. I find the second behaviour much
more reasonable.

Is there an explanation for the first behaviour?

Is the first behaviour documented?


http://www.postgresql.org/docs/9.2/interactive/datatype-datetime.html#DATATYPE-DATETIME-INPUT

The SQL standard differentiates timestamp without time zone and 
timestamp with time zone literals by the presence of a "+" or "-" symbol 
and time zone offset after the time. Hence, according to the standard,


TIMESTAMP '2004-10-19 10:23:54'
is a timestamp without time zone, while

TIMESTAMP '2004-10-19 10:23:54+02'
is a timestamp with time zone. PostgreSQL never examines the content of 
a literal string before determining its type, and therefore will treat 
both of the above as timestamp without time zone. To ensure that a 
literal is treated as timestamp with time zone, give it the correct 
explicit type:


TIMESTAMP WITH TIME ZONE '2004-10-19 10:23:54+02'

In a literal that has been determined to be timestamp without time zone, 
PostgreSQL will silently ignore any time zone indication. That is, the 
resulting value is derived from the date/time fields in the input value, 
and is not adjusted for time zone.
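
Following the quoted rule, giving the literal an explicit type makes it
behave like the timestamptz cast (a sketch continuing the session above,
not re-run here):

 piro=> SELECT TIMESTAMP WITH TIME ZONE '1970-01-01T00:00:00+03:00';
       timestamptz
 ------------------------
  1969-12-31 16:00:00-05
 (1 row)

Once the literal is typed as timestamptz, the zone offset is honoured, and a
further cast to plain timestamp performs the conversion, i.e. the second
behaviour.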

Thank you very much,


-- Daniele





--
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] Memory Issue with array_agg?

2013-08-20 Thread Pavel Stehule
Hello

It is strange. I tried to simulate it without success. On 1M rows,
where every id is duplicated twice:

processing string_agg: ca. 30MB

processing array_agg: ca. 32MB

postgres=# create table foo(a int, b varchar);
CREATE TABLE
postgres=# insert into foo select i, md5(i::text) from
generate_series(1,1000000) g(i);
INSERT 0 1000000
postgres=# insert into foo select i, md5(i::text) from
generate_series(1,1000000) g(i);
INSERT 0 1000000
postgres=# CREATE INDEX on foo(b);
CREATE INDEX
postgres=# ANALYZE foo;
ANALYZE
postgres=# explain analyze select string_agg(a::text,',') from foo group by b;
                               QUERY PLAN
---------------------------------------------------------------------------
 GroupAggregate  (cost=410045.19..447831.37 rows=1022895 width=37) (actual
time=10195.972..14993.493 rows=1000000 loops=1)
   ->  Sort  (cost=410045.19..415045.19 rows=2000000 width=37) (actual
time=10195.944..13659.985 rows=2000000 loops=1)
         Sort Key: b
         Sort Method: external merge  Disk: 97768kB
         ->  Seq Scan on foo  (cost=0.00..36667.00 rows=2000000 width=37)
(actual time=0.018..321.197 rows=2000000 loops=1)
 Total runtime: 15066.397 ms
(6 rows)

postgres=# explain analyze select array_agg(a::text) from foo group by b;
                               QUERY PLAN
---------------------------------------------------------------------------
 GroupAggregate  (cost=410045.19..447831.37 rows=1022895 width=37) (actual
time=10062.095..15697.755 rows=1000000 loops=1)
   ->  Sort  (cost=410045.19..415045.19 rows=2000000 width=37) (actual
time=10062.059..13613.300 rows=2000000 loops=1)
         Sort Key: b
         Sort Method: external merge  Disk: 97768kB
         ->  Seq Scan on foo  (cost=0.00..36667.00 rows=2000000 width=37)
(actual time=0.029..311.423 rows=2000000 loops=1)
 Total runtime: 15799.226 ms
(6 rows)

Regards

Pavel


2013/8/19 Robert Sosinski 

> At the moment, all guids are distinct, however before I zapped the
> duplicates, there were 280 duplicates.
>
> Currently, there are over 2 million distinct guids.
>
> -Robert
>
>
> On Mon, Aug 19, 2013 at 11:12 AM, Pavel Stehule 
> wrote:
>
>>
>>
>>
>> 2013/8/19 Robert Sosinski 
>>
>>> Hi Pavel,
>>>
>>> What kind of example do you need?  I can't give you the actual data I
>>> have in the table, but I can give you an example query and the schema
>>> attached below.  From there, I would just put in 2 million rows worth 1.2
>>> Gigs of data.  Average size of the extended columns (using the
>>> pg_column_size function) in bytes are:
>>>
>>> guid: 33
>>> name: 2.41
>>> currency: 4
>>> fields: 120.32
>>>
>>> example query:
>>>
>>> -- find duplicate records using a guid
>>> select guid, array_agg(id) from orders group by guid;
>>>
>>
>> how many distinct guids are there, and how many duplicates?
>>
>> regards
>>
>> Pavel
>>
>>
>>
>>>
>>> example schema:
>>>  Table "public.things"
>>>
>>>Column   |Type |
>>>  Modifiers  | Storage  | Stats target | Description
>>>
>>> +-+-+--+--+-
>>>  id | integer | not null default
>>> nextval('things_id_seq'::regclass) | plain|  |
>>>  version| integer | not null
>>>| plain|  |
>>>  created_at | timestamp without time zone | not null
>>>| plain|  |
>>>  updated_at | timestamp without time zone | not null
>>>| plain|  |
>>>  foo_id | integer | not null
>>>| plain|  |
>>>  bar_id | integer | not null
>>>| plain|  |
>>>  baz_id | integer | not null
>>>| plain|  |
>>>  guid   | character varying   | not null
>>>| extended |  |
>>>  name   | character varying   | not null
>>>| extended |  |
>>>  price  | numeric(12,2)   | not null
>>>| main |  |
>>>  currency   | character varying   | not null
>>>| extended |  |
>>>  amount | integer | not null
>>>| plain|  |
>>>  the_date   | date| not null
>>>| plain|  |
>>>  fields | hstore  |
>>> | extended |  |
>>> Indexes:
>>> "things_pkey" PRI

Re: [GENERAL] Memory Issue with array_agg?

2013-08-20 Thread Pavel Stehule
Can you send the EXPLAIN results for both use cases?

Pavel


2013/8/19 Robert Sosinski 

> At the moment, all guids are distinct, however before I zapped the
> duplicates, there were 280 duplicates.
>
> Currently, there are over 2 million distinct guids.
>
> -Robert
>
>
> On Mon, Aug 19, 2013 at 11:12 AM, Pavel Stehule 
> wrote:
>
>>
>>
>>
>> 2013/8/19 Robert Sosinski 
>>
>>> Hi Pavel,
>>>
>>> What kind of example do you need?  I can't give you the actual data I
>>> have in the table, but I can give you an example query and the schema
>>> attached below.  From there, I would just put in 2 million rows worth 1.2
>>> Gigs of data.  Average size of the extended columns (using the
>>> pg_column_size function) in bytes are:
>>>
>>> guid: 33
>>> name: 2.41
>>> currency: 4
>>> fields: 120.32
>>>
>>> example query:
>>>
>>> -- find duplicate records using a guid
>>> select guid, array_agg(id) from orders group by guid;
>>>
>>
>> how many distinct guids are there, and how many duplicates?
>>
>> regards
>>
>> Pavel
>>
>>
>>
>>>
>>> example schema:
>>>  Table "public.things"
>>>
>>>Column   |Type |
>>>  Modifiers  | Storage  | Stats target | Description
>>>
>>> +-+-+--+--+-
>>>  id | integer | not null default
>>> nextval('things_id_seq'::regclass) | plain|  |
>>>  version| integer | not null
>>>| plain|  |
>>>  created_at | timestamp without time zone | not null
>>>| plain|  |
>>>  updated_at | timestamp without time zone | not null
>>>| plain|  |
>>>  foo_id | integer | not null
>>>| plain|  |
>>>  bar_id | integer | not null
>>>| plain|  |
>>>  baz_id | integer | not null
>>>| plain|  |
>>>  guid   | character varying   | not null
>>>| extended |  |
>>>  name   | character varying   | not null
>>>| extended |  |
>>>  price  | numeric(12,2)   | not null
>>>| main |  |
>>>  currency   | character varying   | not null
>>>| extended |  |
>>>  amount | integer | not null
>>>| plain|  |
>>>  the_date   | date| not null
>>>| plain|  |
>>>  fields | hstore  |
>>> | extended |  |
>>> Indexes:
>>> "things_pkey" PRIMARY KEY, btree (id)
>>> "things_foo_id_idx" btree (foo_id)
>>> "things_bar_id_idx" btree (bar_id)
>>> "things_baz_id_idx" btree (baz_id)
>>> "things_guid_uidx" UNIQUE, btree (guid)
>>> "things_lpad_lower_name_eidx" btree (lpad(lower(name::text), 10,
>>> '0'::text))
>>> "things_price_idx" btree (price)
>>>
>>> Foreign-key constraints:
>>> "things_foo_id_fkey" FOREIGN KEY (foo_id) REFERENCES foos(id)
>>> "things_bar_id_fkey" FOREIGN KEY (bar_id) REFERENCES bars(id)
>>> "things_baz_id_fkey" FOREIGN KEY (baz_id) REFERENCES bazs(id)
>>> Triggers:
>>> timestamps_trig BEFORE INSERT OR UPDATE ON things FOR EACH ROW
>>> EXECUTE PROCEDURE timestamps_tfun()
>>>
>>> Let me know if you need anything else.
>>>
>>> Thanks,
>>>
>>>
>>> On Mon, Aug 19, 2013 at 3:29 AM, Pavel Stehule 
>>> wrote:
>>>
 Hello

 please, can you send some example or test?

 Regards

 Pavel Stehule


 2013/8/19 Robert Sosinski 

> When using array_agg on a large table, memory usage seems to spike up
> until Postgres crashes with the following error:
>
> 2013-08-17 18:41:02 UTC [2716]: [2] WARNING: terminating connection
> because of crash of another server process
> 2013-08-17 18:41:02 UTC [2716]: [3] DETAIL: The postmaster has
> commanded this server process to roll back the current transaction and
> exit, because another server process exited abnormally and possibly
> corrupted shared memory.
> 2013-08-17 18:41:02 UTC [2716]: [4] HINT: In a moment you should be
> able to reconnect to the database and repeat your command.
>
> I've definitely isolated it down to using array_agg, as when I changed
> the query to use string_agg, it worked fine.  I also tried using array_agg
> on a few different queries, all yielding the same issue.  Swapping in
> string_agg fixed the issue once more.
>
> This particular table has over 2 million rows and is 

[GENERAL] pg_extension_config_dump() with a sequence

2013-08-20 Thread Moshe Jacobson
I wrote an extension and marked one of the tables it creates as a config
table using pg_extension_config_dump(). This caused the data to be dumped
and restored correctly, but the sequence attached to the PK column was not
correctly set to its old value.

In searching for a solution I found an old message where Tom suggested
marking the sequence as a config table in the same way. This seems to work
OK when I use pg_dump, but for some reason, pg_dumpall generates the
following error:

pg_dump: SQL command failed
pg_dump: Error message from server: ERROR:  cannot copy from sequence
"sq_pk_audit_data_type"
pg_dump: The command was: COPY auditlog.sq_pk_audit_data_type  TO stdout;
pg_dumpall: pg_dump failed on database "ises", exiting


Why does it work with pg_dump but not pg_dumpall?

-- 
Moshe Jacobson
Nead Werx, Inc. | Manager of Systems Engineering
2323 Cumberland Parkway, Suite 201 | Atlanta, GA 30339
mo...@neadwerx.com | www.neadwerx.com

"Quality is not an act, it is a habit." -- Aristotle


Re: [GENERAL] earthdistance

2013-08-20 Thread Olivier Chaussavoine
I also look at cube extension, but the built in type box - a couple of
points - does not require any extension and has a GIST index. It can be
used to represent a rectangle on the domain [-PI/2,+PI/2[*[-PI,PI[. If the
extension was providing a function get_rect_from_cap() giving the smallest
rectangle of this domain containing a spherical cap, this rectangle could
be used as you pointed out to reduce the set of rows where the earth
distance need to be computed to know if a point A belongs to the cap. The
operator && (box overlaps box) could be used if the point A is converted to
box(A,A). Do you think this function get_rect_from_cap() could be usefull?


2013/8/11 Bruno Wolff III 

> On Sat, Aug 10, 2013 at 12:18:48 +0200, Olivier Chaussavoine wrote:
>
>> I did not find any geographic indexing with earthdistance, and I need it.
>>
>
> Some of the earthdistance stuff is based on cube which does have indexing.
> I don't know how well that indexing works and it might be pretty bad in
> practice.
>
>
>  The need I have is simple:
>> "is the distance between two (lat,long) positions less than X km?"
>> the model used for the shape of the earth should be related to the
>> precision of lat,lon, and most sources are imprecise. The spherical model
>> should be enough.
>>
>
> You might just be looking at this wrong. You don't have an index on the
> distance. What you want is to find points within a cube that is big enough
> to include all of the points of interest and then double check the returned
> points to make sure they are really within the expected range. You can
> calculate the size of the cube needed based on the distance and the radius
> of the earth. I don't remember if there was a built in function for that,
> since it's been such a long time since I looked at it.
>
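
For what it's worth, the cube-based earthdistance functions already package
this bounding-volume-plus-recheck pattern; a sketch (the places table and
its lat/lon columns are invented):

    SELECT *
    FROM places
    WHERE earth_box(ll_to_earth(48.85, 2.35), 10000) @> ll_to_earth(lat, lon)
      AND earth_distance(ll_to_earth(48.85, 2.35),
                         ll_to_earth(lat, lon)) < 10000;

With an index such as CREATE INDEX ON places USING gist (ll_to_earth(lat,
lon)), the earth_box test is indexable, and the earth_distance test discards
the false positives near the corners of the box.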



-- 
Olivier Chaussavoine


Re: [GENERAL] pg_extension_config_dump() with a sequence

2013-08-20 Thread Tom Lane
Moshe Jacobson  writes:
> In searching for a solution I found an old message where Tom suggested
> marking the sequence as a config table in the same way. This seems to work
> OK when I use pg_dump, but for some reason, pg_dumpall generates the
> following error:

It's pretty hard to believe that that would work in pg_dump but not
pg_dumpall.  You sure the error references a database that you fixed the
sequence definition in?  (You'd probably have to drop and recreate the
extension to fix it in an existing database.)  If so, what PG version are
we talking about exactly, and what's the whole pg_dumpall command line?

regards, tom lane




[GENERAL] Alternate input for user defined data type

2013-08-20 Thread Aram Fingal
I want to create a user-defined data type with flexible input, just like,
for example, the boolean data type, where TRUE can be entered as any of
(TRUE, true, T, t, YES, yes, Y, y, 1...) and all are interpreted as the
same thing.

So suppose I have days of the week:

CREATE TYPE  days_of_week AS ENUM ('Su','M','Tu','W','Th','F','Sa');

Except that I want 'MON', 'Mon', 'mon' and 'monday' all to be interpreted as
'M' in the data type.  What is the best way to do this?  Is that what the
input_function is for in the CREATE TYPE command?  Do I need to create a
trigger?
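
One way to get this without a custom input function is an ordinary
normalising SQL function used at insert time, or called from a BEFORE
trigger (a sketch, untested; the schedule table is hypothetical):

    CREATE FUNCTION to_dow(txt text) RETURNS days_of_week AS $$
      SELECT CASE
        WHEN lower(txt) IN ('su', 'sun', 'sunday')    THEN 'Su'
        WHEN lower(txt) IN ('m', 'mon', 'monday')     THEN 'M'
        WHEN lower(txt) IN ('tu', 'tue', 'tuesday')   THEN 'Tu'
        WHEN lower(txt) IN ('w', 'wed', 'wednesday')  THEN 'W'
        WHEN lower(txt) IN ('th', 'thu', 'thursday')  THEN 'Th'
        WHEN lower(txt) IN ('f', 'fri', 'friday')     THEN 'F'
        WHEN lower(txt) IN ('sa', 'sat', 'saturday')  THEN 'Sa'
      END::days_of_week
    $$ LANGUAGE sql IMMUTABLE STRICT;

    INSERT INTO schedule (day) VALUES (to_dow('monday'));

A true custom input function is only possible for types written in C;
CREATE TYPE ... AS ENUM does not let you override the enum's input function.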






Re: [GENERAL] Fastest Index/Algorithm to find similar sentences

2013-08-20 Thread andres.pascal
As I understand it, you need to search strings by similarity (using
Levenshtein or any other metric distance). In that case, you can use metric
indexes like FHQT or FQA (the latter is better if you are using a relational
database, like Postgres). But they are not implemented yet in most DBMSs, so
you would need to program the index yourself. It's not too hard, but you
need to understand the basic concepts; you can look up "searching in metric
spaces" to read about it. If you are in a hurry, you can mail me and maybe I
can help you.

Andres.



--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/Index-for-Levenshtein-distance-tp5764546p5768127.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.




Re: [GENERAL] Fastest Index/Algorithm to find similar sentences

2013-08-20 Thread Merlin Moncure
On Fri, Aug 2, 2013 at 10:25 AM, Kevin Grittner  wrote:
> Janek Sendrowski  wrote:
>
>> I also tried pg_trgm module, which works with tri-grams, but it's
>> also very slow with 100.000+ rows.
>
> Hmm.  I found the pg_trgm module very fast for name searches with
> millions of rows *as long as I used KNN-GiST techniques*.  Were you
> careful to do so?  Check out the "Index Support" section of this
> page:
>
> http://www.postgresql.org/docs/current/static/pgtrgm.html
>
> While I have not tested this technique with a column containing
> sentences, I would expect it to work well.  As a quick
> confirmation, I imported the text form of War and Peace into a
> table, with one row per *line* (because that was easier than
> parsing sentence boundaries for a quick test).  That was over
> 65,000 rows.

+ 1 this.  pg_trgm is black magic.  search time (when using index) is
mostly dependent on number of trigrams in search string vs average
number of trigrams in database.
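
For sentences, the KNN pattern Kevin mentions looks roughly like this
(assuming a docs table with a body column):

    CREATE EXTENSION pg_trgm;
    CREATE INDEX docs_body_trgm ON docs USING gist (body gist_trgm_ops);

    SELECT body, body <-> 'the query sentence' AS dist
    FROM docs
    ORDER BY body <-> 'the query sentence'
    LIMIT 10;

The <-> distance operator is what lets the GiST index drive the
ORDER BY ... LIMIT, instead of computing a similarity for every row.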

merlin




Re: [GENERAL] pg_extension_config_dump() with a sequence

2013-08-20 Thread Moshe Jacobson
On Tue, Aug 20, 2013 at 4:03 PM, Tom Lane  wrote:

> It's pretty hard to believe that that would work in pg_dump but not
> pg_dumpall.  You sure the error references a database that you fixed the
> sequence definition in?  (You'd probably have to drop and recreate the
> extension to fix it in an existing database.)  If so, what PG version are
> we talking about exactly, and what's the whole pg_dumpall command line?
>

Thanks for the response, Tom.

You're right, pg_dump fails as well:

pg_dump: SQL command failed
pg_dump: Error message from server: ERROR:  cannot copy from sequence
"sq_pk_audit_data_type"
pg_dump: The command was: COPY auditlog.sq_pk_audit_data_type  TO stdout;


I can pg_dump our prod database and pg_restore it onto our dev server with
zero errors, but once I try to pg_dump the dev copy again, that's when I
receive the errors above.

I didn't drop & recreate the extension to fix it on prod -- I just created
a new version of it and updated it.

Any ideas?

-- 
Moshe Jacobson
Nead Werx, Inc. | Manager of Systems Engineering
2323 Cumberland Parkway, Suite 201 | Atlanta, GA 30339
mo...@neadwerx.com | www.neadwerx.com

"Quality is not an act, it is a habit." -- Aristotle


Re: [GENERAL] pg_extension_config_dump() with a sequence

2013-08-20 Thread Tom Lane
Moshe Jacobson  writes:
> I can pg_dump our prod database and pg_restore it onto our dev server with
> zero errors, but once I try to pg_dump the dev copy again, that's when I
> receive the errors above.
> I didn't drop & recreate the extension to fix it on prod -- I just created
> a new version of it and updated it.

Well, I think you did it wrong, or else you're using a PG version that
predates some necessary fix, because it works for me.  I made a simple
extension containing

CREATE TABLE mytable (data text, id serial primary key);
SELECT pg_catalog.pg_extension_config_dump('mytable', '');
SELECT pg_catalog.pg_extension_config_dump('mytable_id_seq', '');

and did

tseq=# create extension myext ;
CREATE EXTENSION
tseq=# \dx+ myext
    Objects in extension "myext"
        Object Description
 -------------------------------
  sequence mytable_id_seq
  table mytable
 (2 rows)

tseq=# insert into mytable values ('foo');
INSERT 0 1
tseq=# insert into mytable values ('bar');
INSERT 0 1

and now pg_dump gives me

---

--
-- Name: myext; Type: EXTENSION; Schema: -; Owner: 
--

CREATE EXTENSION IF NOT EXISTS myext WITH SCHEMA public;


--
-- Name: EXTENSION myext; Type: COMMENT; Schema: -; Owner: 
--

COMMENT ON EXTENSION myext IS 'testing 1,2,3,4';


SET search_path = public, pg_catalog;

--
-- Data for Name: mytable; Type: TABLE DATA; Schema: public; Owner: postgres
--

COPY mytable (data, id) FROM stdin;
foo 1
bar 2
\.


--
-- Name: mytable_id_seq; Type: SEQUENCE SET; Schema: public; Owner: postgres
--

SELECT pg_catalog.setval('mytable_id_seq', 2, true);

---

which is what I'd expect.

To debug, you might try looking in pg_extension to see if the extconfig
entry for your extension includes the OID of the sequence.  If not, you
messed up somehow in updating the extension.  If so, you must need a
newer version of pg_dump (you did not answer the question what version
you're using).

regards, tom lane




Re: [GENERAL] Strange message from pg_receivexlog

2013-08-20 Thread Sergey Konoplev
On Tue, Aug 20, 2013 at 2:10 AM, Magnus Hagander  wrote:
> It looks like something outside postgres or pg_receivexlog did
> terminate the connection. pg_receievexlog noticed it was closed,
> whereas postgres waited for the timeout (but probably would've noticed
> if it had actually had some other data to send maybe?). Do you have
> some iptables connection tracking or something like that which could
> be interfering?

AFAIU, just standard things:

-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited

Nothing looks suspicious to me.

-- 
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA

http://www.linkedin.com/in/grayhemp
+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979
gray...@gmail.com




[GENERAL] Commit hung bug

2013-08-20 Thread S H
There is one bug mentioned  commit hung for days..
http://www.postgresql.org/message-id/1af3044fcab26f4db1ae551f8a33634b3d2...@mail.digital-rapids.com
 > The interesting thing would be to see the server logs, not the> application 
logs. Specifically, an issue that could look just likethis> was fixed in 8.1.7, 
in which case you would see weird error messages> about permission denied or 
such in the *server* logs. None of thatwould> show up in the client logs.
Any idea what exactly is this bug.
I could not make out relation between release notes mentioned in 
http://www.postgresql.org/docs/8.1/static/release-8-1-7.html
and above comment.
Regards,S H   

[GENERAL] SSL or Tunnelling for Streaming Replication

2013-08-20 Thread ascot.m...@gmail.com
Hi,

I am planning streaming replication to a new remote server. Can you please 
suggest how to set up data encryption by SSL or tunnelling in PostgreSQL?

regards 



Re: [GENERAL] SSL or Tunnelling for Streaming Replication

2013-08-20 Thread Edson Richter

Em 21/08/2013 01:07, ascot.m...@gmail.com escreveu:

Hi,

I am planning streaming replication to a new remote server. Can you please 
suggest how to set up data encryption by SSL or tunnelling in PostgreSQL?

regards



I've implemented streaming replication using OpenVPN as the encrypted 
tunneling solution, with great success.
Just follow an OpenVPN tutorial to establish your virtual private network, 
enable the virtual IP address in the PostgreSQL configuration, and establish 
the replication.
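
Alternatively, with PostgreSQL's built-in SSL (no VPN), the pieces are
roughly these (the role name, host and CIDR are placeholders):

    # master: postgresql.conf
    ssl = on

    # master: pg_hba.conf -- accept the replication role only over SSL
    hostssl replication replicator 10.8.0.0/24 md5

    # standby: recovery.conf
    primary_conninfo = 'host=master.example.com user=replicator sslmode=require'

hostssl rejects non-SSL connections on the server side, and sslmode=require
makes the standby refuse to connect unencrypted.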


Regards,

Edson




Re: [GENERAL] SSL or Tunnelling for Streaming Replication

2013-08-20 Thread John R Pierce

On 8/20/2013 9:07 PM, ascot.m...@gmail.com wrote:

I am planning streaming replication to a new remote server. Can you please 
suggest how to set up data encryption by SSL or tunnelling in PostgreSQL?


see
http://www.postgresql.org/docs/current/static/ssl-tcp.html



--
john r pierce  37N 122W
somewhere on the middle of the left coast





Re: [GENERAL] Commit hung bug

2013-08-20 Thread Alvaro Herrera
S H wrote:
> There is one bug mentioned where commit hung for days:
> http://www.postgresql.org/message-id/1af3044fcab26f4db1ae551f8a33634b3d2...@mail.digital-rapids.com
>
> > The interesting thing would be to see the server logs, not the
> > application logs. Specifically, an issue that could look just like this
> > was fixed in 8.1.7, in which case you would see weird error messages
> > about permission denied or such in the *server* logs. None of that would
> > show up in the client logs.
>
> Any idea what exactly this bug is?
> I could not make out the relation between the release notes at
> http://www.postgresql.org/docs/8.1/static/release-8-1-7.html
> and the above comment.

Maybe it's this commit, which was part of 8.1.6:

commit 9f1b531420ee13d04c7701b34bb4b874df7ff2fa
Author: Teodor Sigaev 
Date:   Fri Oct 13 14:00:17 2006 +

Fix infinite sleep and failes of send in Win32.

1) pgwin32_waitforsinglesocket(): WaitForMultipleObjectsEx now called with
finite timeout (100ms) in case of FP_WRITE and UDP socket. If timeout occurs
then pgwin32_waitforsinglesocket() tries to write empty packet goes to
WaitForMultipleObjectsEx again.

2) pgwin32_send(): add loop around WSASend and 
pgwin32_waitforsinglesocket().
The reason is: for overlapped socket, 'ok' result from
pgwin32_waitforsinglesocket() isn't guarantee that socket is still free,
it can become busy again and following WSASend call will fail with
WSAEWOULDBLOCK error.

See http://archives.postgresql.org/pgsql-hackers/2006-10/msg00561.php


It's troubling to be talking about a bug that was patched in 2006 for
the 8.1.6 release, however.  Anything prior to that is not something
anyone should be using anymore.  At the very least, you should have
migrated to 8.1.23; but 8.1 has been unsupported altogether for more
than two years now.  Even 8.2 is out of support.

-- 
Álvaro Herrerahttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [GENERAL] thank you

2013-08-20 Thread Alisa

> Hi,
> Why do you connect one table with the other? There aren't any relations
> between them, so I don't understand your point; I am somewhat confused.
> You can refer to the following sites:
>
> http://www.postgresql.org/docs/9.0/static/queries-table-expressions.html
> http://my.oschina.net/Kenyon/blog/79543

Yes, I got the answer. Here is the code:


SELECT *
FROM (SELECT row_number() over(), * FROM employee) t1
right outer JOIN (SELECT row_number() over(), * FROM managers) t2 on
t1.row_number=t2.row_number
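
One caveat on that query: row_number() over () without an ORDER BY gives no
guaranteed ordering, so the pairing of employee and manager rows can differ
between executions. To make it deterministic, give the window an explicit
sort key (id here is assumed; use whatever column defines the intended
order):

    SELECT *
    FROM (SELECT row_number() OVER (ORDER BY id) AS rn, * FROM employee) t1
    RIGHT OUTER JOIN
         (SELECT row_number() OVER (ORDER BY id) AS rn, * FROM managers) t2
      ON t1.rn = t2.rn;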



Thank you









