> postgres=# select concat('[', now()::date, ',', now()::date, ']')::daterange testrange;
There are range specific functions for this:
select daterange(now()::date, now()::date, '[]')
regards,
Marc Mamin
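For illustration of the constructor suggested above (not part of Marc's reply; the output assumes the query runs on 2015-02-26, and the '[]' argument asks for inclusive bounds, which daterange then canonicalises to the half-open form):

select daterange(now()::date, now()::date, '[]') as testrange;

        testrange
-------------------------
 [2015-02-26,2015-02-27)
(1 row)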
On Thu, Feb 26, 2015 at 9:50 PM, Melvin Call wrote:
> Montreal where the e is an accented e. The output ends up in the text file
> as Montr\xe9al, where the xe9 is a single character. When I try to copy
> that into my PostgreSQL table, I get an error "ERROR: invalid byte
> sequence for encoding
Good evening list,
I am taking the output of a MySQL query and saving it into a text file for COPY
input into a PostgreSQL database. The query gives me a list of addresses. One of
the addresses is in Montreal, and was input using the correct spelling of
Montreal where the e is an accented e. The output ends up in the text file as
Montr\xe9al, where the xe9 is a single character.
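The \xe9 byte is the Latin-1 encoding of é, so the text file is Latin-1 while the target database evidently expects UTF-8. One way to deal with that, sketched here with an invented file and table name rather than anything from the thread, is to tell COPY what encoding the file uses (the ENCODING option exists since 9.1), or to set the client encoding before a psql \copy:

copy addresses from '/tmp/addresses.txt' with (encoding 'LATIN1');

-- or, from psql:
set client_encoding = 'LATIN1';
\copy addresses from 'addresses.txt'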
This is exactly my situation.
Thanks.
Semyon Reyfman
-----Original Message-----
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of David Steele
Sent: Thursday, February 26, 2015 8:19 PM
To: Tom Lane; Semyon Reyfman
Cc: pgsql-general@postgresql.org
S
David Steele writes:
> On 2/26/15 5:23 PM, Alvaro Herrera wrote:
>> It isn't. It also doesn't create any indexes, which you might want to
>> do before analyze.
> Is it necessary to create indexes before analyzing?
> I usually do, just to be safe, but I thought statistics were based solely on
On Fri, Feb 27, 2015 at 2:13 AM, MattF wrote:
> I've talked to the admin here and he said that upgrading to Postgres 9.4 is
> possible; however, the only risk for us on the warehouse side is that it is
> entirely possible that while the warehouse servers will be Postgres 9.4,
> the production se
On 2/26/15 5:23 PM, Alvaro Herrera wrote:
> Semyon Reyfman wrote:
>
>> When I create a table with "CREATE TABLE name AS SELECT." statement and
>> immediately afterward use this new table in a query does it make sense to
>> run ANALYZE on the table in between? It appears that postgres should be
>>
On 2/26/15 2:05 PM, Tom Lane wrote:
> "Semyon Reyfman" writes:
>> When I create a table with "CREATE TABLE name AS SELECT." statement and
>> immediately afterward use this new table in a query does it make sense to
>> run ANALYZE on the new table in between?
>
> Yes.
Yes. And to be more specifi
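By way of illustration, the pattern being discussed looks like this (the table and query are invented, not taken from any of the messages); the explicit ANALYZE between the CREATE TABLE AS and the follow-up query gives the planner real statistics instead of defaults:

create table report_base as
select customer_id, sum(amount) as total
from orders
group by customer_id;

analyze report_base;   -- gather statistics before the new table is queried

select * from report_base where total > 1000;   -- now planned with real statistics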
I just want to make sure I understood correctly.
All the triggers fire in the single thread assigned to the connection
and will run serially, no matter how many tables are firing triggers.
If this is correct then yes, I guess I have to create a queue of some sort
and process them via an exte
Semyon Reyfman wrote:
> When I create a table with "CREATE TABLE name AS SELECT." statement and
> immediately afterward use this new table in a query does it make sense to
> run ANALYZE on the table in between? It appears that postgres should be
> able to collect very detailed statistics while em
On 2/26/2015 2:03 PM, Merlin Moncure wrote:
I would strongly advise you not to put complex processing in triggers
if at all possible. Instead have the insert operation write a record
into another table which forms a queue of work to do. That queue can
then be walked by another process which acc
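A minimal sketch of the queue-table pattern described above (the table, column, and function names are invented for illustration; the real schema depends on the workload):

create table work_queue (
    id          bigserial   primary key,
    payload     json        not null,
    created_at  timestamptz not null default now()
);

create or replace function enqueue_work() returns trigger as $$
begin
    -- keep the trigger itself cheap: just record what happened
    insert into work_queue (payload) values (row_to_json(new));
    return new;
end;
$$ language plpgsql;

create trigger orders_enqueue
    after insert on orders
    for each row execute procedure enqueue_work();

A separate worker connection can then poll work_queue (or be woken by NOTIFY) and do the heavy processing outside the inserting transaction.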
Hi,
When I create a table with "CREATE TABLE name AS SELECT." statement and
immediately afterward use this new table in a query does it make sense to
run ANALYZE on the table in between? It appears that postgres should be
able to collect very detailed statistics while emitting the table but I
Tim Uckun writes:
> I want to write a trigger which runs semi-complicated code after each
> insert. I have done some reading and from what I can gather this could
> cause problems because
> after insert triggers "don't spill to the disk" and can cause queue
> problems.  Many people sugges
On Thu, Feb 26, 2015 at 3:54 PM, Tim Uckun wrote:
> I want to write a trigger which runs semi-complicated code after each
> insert. I have done some reading and from what I can gather this could
> cause problems because after insert triggers "don't spill to the disk" and
> can cause queue problem
I want to write a trigger which runs semi-complicated code after each
insert. I have done some reading and from what I can gather this could
cause problems because after insert triggers "don't spill to the disk" and
can cause queue problems. Many people suggest LISTEN NOTIFY but that's
not going
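Since LISTEN/NOTIFY is mentioned in the question, here is a rough sketch of that variant (channel and table names invented); note that notifications are delivered only at commit and are not stored anywhere, so they are usually paired with a durable queue table rather than used on their own:

create or replace function notify_new_row() returns trigger as $$
begin
    perform pg_notify('new_rows', new.id::text);   -- payload is just the new row's id
    return new;
end;
$$ language plpgsql;

create trigger orders_notify
    after insert on orders
    for each row execute procedure notify_new_row();

-- a worker connection subscribes with:
--   LISTEN new_rows;
-- and fetches the referenced rows when a notification arrives.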
On Thu, Feb 26, 2015 at 2:30 AM, Jim Nasby wrote:
> On 2/26/15 1:34 AM, James Sewell wrote:
>>
>> Hello,
>
>
> The correct place for this is pgsql-general@. -hackers is for development of
> Postgres itself. I'm moving the discussion there.
>
>
>> I have the following table:
>>
>> \d a
>>
Hey guys, I came across a MERGE statement while converting stored
procedures from MySQL to Postgres:
merge into db.ChargePeriod d
using (
select ba.ClientID
, ba.BillingAccountID
, bs.BillingScheduleID
, @CodeWithholdD as WithholdTypeID
from
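Postgres (through 9.4) has no MERGE statement; a common substitute is a writable CTE that updates the matching rows and inserts the rest. A rough sketch with invented table and column names standing in for the truncated statement above (note this pattern is not race-free under concurrent writers; 9.5's INSERT ... ON CONFLICT closes that gap for single-table upserts):

with upd as (
    update charge_period cp
       set withhold_type_id = s.withhold_type_id
      from staging s
     where cp.billing_account_id = s.billing_account_id
 returning cp.billing_account_id
)
insert into charge_period (billing_account_id, withhold_type_id)
select s.billing_account_id, s.withhold_type_id
  from staging s
 where not exists (select 1 from upd
                   where upd.billing_account_id = s.billing_account_id);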
On Thu, 26 Feb 2015 15:11:28 -0500, John Turner wrote:
Seems I'm missing a trick trying to get rangetypes working:
No problem building the string:
select concat('''[', now()::date, ',', now()::date, ']''') testrange;
testrange
---
'[2015-02-26,2015-02-26]
Seems I'm missing a trick trying to get rangetypes working:
No problem building the string:
select concat('''[', now()::date, ',', now()::date, ']''') testrange;
testrange
---
'[2015-02-26,2015-02-26]'
(1 row)
Bombed-out trying to turn this into a daterange:
postgres=# select concat('[', now()::date, ',', now()::date, ']')::daterange testrange;
On 02/26/2015 08:27 AM, Tim Smith wrote:
Seriously? Json not supporting infinity makes it useless. Ok, so it has
been useless for the, I don't know, last 10 years?
Just face it Andres, it should have never been coded that way in the
first place. The fact that it appears that nobody in the last
Hi Tom,
Yep, I know it's out of date, but thank you for replying anyway :)
I retried my test with the autovacuum logs turned on and confirmed that a
postgresql restart fixes the issue.
Thanks for your help!
-HH
On Thu, Feb 26, 2015 at 11:02 AM, Tom Lane wrote:
> Hanns Hartman writes:
> > I am r
"Semyon Reyfman" writes:
> When I create a table with "CREATE TABLE name AS SELECT." statement and
> immediately afterward use this new table in a query does it make sense to
> run ANALYZE on the new table in between?
Yes.
regards, tom lane
Hi,
When I create a table with "CREATE TABLE name AS SELECT." statement and
immediately afterward use this new table in a query does it make sense to
run ANALYZE on the new table in between? It appears that postgres should be
able to collect very detailed statistics while emitting the table bu
Hi Alvaro,
Thank you for your quick response!
We do have fsync turned on, and there was no disk failure. The database
had to be shut down forcefully because it was becoming nonresponsive
(probably due to inadequate earlier vacuuming) and we could not get the
remaining queries to terminate with no
Hi
I want to setup postgres to encrypt all connections to my db using ssl.
My settings:
pg_hba.conf:
# TYPE    DATABASE    USER    ADDRESS      METHOD
hostssl   my_db       all     0.0.0.0/0    md5
postgresql.conf:
ssl = on
ssl_ciphers = 'ECDHE-ECDSA-A
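Not part of the original message, but once those settings are in place the setup can be verified from a session: the contrib sslinfo extension reports whether the current connection is encrypted, and a client can insist on encryption from its side with sslmode=require in the connection string (host and database names below are placeholders):

-- connect with e.g.:  psql "host=db.example.com dbname=my_db sslmode=require"
create extension if not exists sslinfo;
select ssl_is_used(), ssl_version(), ssl_cipher();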
My company has a data warehouse, currently running on Postgres 9.3, as well
as the production servers also running on Postgres 9.3. We have connections
to foreign tables from the warehouse, however I've noticed that implementing
triggers on foreign tables is not supported in Postgres 9.3, and is a
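For reference, 9.4 is the release that added row-level triggers on foreign tables. A minimal sketch (the foreign table and function names are invented), with the caveat that such a trigger only fires for statements executed against the foreign table on the server that defines it, not for changes made directly on the remote side:

create or replace function audit_foreign_write() returns trigger as $$
begin
    raise notice 'write through foreign table: %', new;
    return new;
end;
$$ language plpgsql;

create trigger remote_orders_audit
    after insert or update on remote_orders   -- remote_orders is a foreign table
    for each row execute procedure audit_foreign_write();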
> If you want to do that then store that in your date/timestamp data and we'll
> output it. But we're not going to silently convert infinity to anything
> else:
Just for the record, I never said I wanted to do it. I was saying it
for the benefit of those people who replied to this thread talking
> Seriously? Json not supporting infinity makes it useless. Ok, so it has
> been useless for the, I don't know, last 10 years?
Just face it Andres, it should have never been coded that way in the
first place. The fact that it appears that nobody in the last 10
years has used "infinity" in conjunc
On 02/26/2015 11:03 AM, Tim Smith wrote:
FYI although I remain a +1 on KISS and emitting "infinity", for
those of you still yearning after a standards-based implementation,
there is a StackOverflow post which hints at sections 3.5 and 3.7 of
ISO8601:2004.
Unfortunately I can't find a link t
On 02/26/2015 07:54 AM, Tim Smith wrote:
So +1 for removing the error and emitting "infinity" suitably quoted.
Andrew, will you do that?
+1 here too. Otherwise there's very little point having the
"infinity" feature in Postgres if only some of the database functions
actually support it withou
FYI although I remain a +1 on KISS and emitting "infinity", for
those of you still yearning after a standards-based implementation,
there is a StackOverflow post which hints at sections 3.5 and 3.7 of
ISO8601:2004.
Unfortunately I can't find a link to an ISO8601:2004 text, so you'll
have to ma
Thank you so much for the clarification about which list is the right place
for this discussion.
Now I can set up BDR across multiple databases with this configuration:
bdr.connections = 'bdrnode02db1, bdrnode02db2'
bdr.bdrnode02db1 = 'dbname=db1 host=172.17.42.1 port=49319 user=postgres'
bdr.bdrnode02db2 = 'dbname=db2 host=
On 2015-02-26 15:54:37 +, Tim Smith wrote:
> Otherwise there's very little point having the "infinity" feature in
> Postgres if only some of the database functions actually support it
> without throwing a tantrum.
Seriously? Json not supporting infinity makes it useless. Ok, so it has
been use
The system can’t know what conditions will be present when your query executes
sometime in the future unless you define those conditions. To define them, you
create a simulated environment with the index, the data, and the load you want
to test, and then test against it.
Without more info, your requiremen
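In practice that test loop is simply: build the candidate index against a copy (or restored snapshot) of the data, run the workload query under EXPLAIN, and keep or drop the index based on what the planner actually does. A rough sketch with invented names:

create index concurrently idx_orders_customer on orders (customer_id);

explain (analyze, buffers)
select * from orders where customer_id = 42;

-- if the plan still chooses a sequential scan at realistic data volumes,
-- the index only adds write overhead and can be dropped again:
drop index concurrently idx_orders_customer;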
Hanns Hartman writes:
> I am running postgres 8.3.17 on a RedHat linux derivative using a mips64
> architecture.
You realize of course that 8.3 is long out of support ...
> I've recently noticed some odd autovacuum behavior.
> Recently we've had some systems deployed with the system clock set t
On 02/26/2015 10:38 AM, Tom Lane wrote:
Yeah, I think so. The sequence 'infinity'::timestamp to JSON to
ISO-8601-only consumer is going to fail no matter what; there is no
need for us to force a failure at the first step. Especially when
doing so excludes other, perfectly useful use-cases.
S
> So +1 for removing the error and emitting "infinity" suitably quoted.
> Andrew, will you do that?
>
+1 here too. Otherwise there's very little point having the
"infinity" feature in Postgres if only some of the database functions
actually support it without throwing a tantrum. If it's a databas
Hi,
I am running postgres 8.3.17 on a RedHat linux derivative using a mips64
architecture.
I've recently noticed some odd autovacuum behavior.
Recently we've had some systems deployed with the system clock set to the
year 2016. Postgres was installed with that date and things were fine
until a
On 02/25/2015 06:14 PM, Emanuel Araújo wrote:
Hi,
I have an application that replicates data from an Oracle database to
PostgreSQL. The flow goes as follows:
oracle.table1 -> AppSincronizador -> postgresql.table1 -> Trigger (upd,
ins, del) -> postgresql.table2
I'm having situations where the
Andres Freund writes:
> On 2015-02-26 10:16:38 -0500, Tom Lane wrote:
>> At the same time, there is definitely no such requirement in the JSON
>> spec itself, so at least the error message is quoting the wrong
>> authority.
> To me there seems to be very little point in restricing the output that
On 2015-02-26 10:16:38 -0500, Tom Lane wrote:
> Andres Freund writes:
> > On 2015-02-26 11:55:20 +, Tim Smith wrote:
> >> As far as I'm aware, JSON has no data types as such, and so why is
> >> Postgres (9.4.1) attempting to impose its own nonsense constraints ?
>
> > "impose its own nonsense
On 02/26/2015 10:16 AM, Tom Lane wrote:
Andres Freund writes:
On 2015-02-26 11:55:20 +, Tim Smith wrote:
As far as I'm aware, JSON has no data types as such, and so why is
Postgres (9.4.1) attempting to impose its own nonsense constraints ?
"impose its own nonsense constraints" - breathe
Andres Freund writes:
> On 2015-02-26 11:55:20 +, Tim Smith wrote:
>> As far as I'm aware, JSON has no data types as such, and so why is
>> Postgres (9.4.1) attempting to impose its own nonsense constraints ?
> "impose its own nonsense constraints" - breathe slowly in, and out, in,
> and out.
Torsten Förtsch writes:
> given a query like this:
> select *
> from account a
> cross join lateral (
>select rate
> from exchange
> where target='USD'
> and source=a.currency
> order by date desc
> limit 1) e
> where a.id=19
> for update;
On 02/26/2015 07:02 AM, Andres Freund wrote:
Hi,
On 2015-02-26 11:55:20 +, Tim Smith wrote:
As far as I'm aware, JSON has no data types as such, and so why is
Postgres (9.4.1) attempting to impose its own nonsense constraints ?
"impose its own nonsense constraints" - breathe slowly in, an
On 02/26/2015 03:55 AM, Tim Smith wrote:
Hi,
As far as I'm aware, JSON has no data types as such, and so why is
Postgres (9.4.1) attempting to impose its own nonsense constraints ?
Surely it should just insert the word 'infinity' into the JSON output,
just like it displays in a simple SQL query
> On Feb 26, 2015, at 12:47 AM, Sreerama Manoj wrote:
>
> Hi,
> I use the Postgres 9.4 database. Now, I am optimizing queries by using the
> results of "explain" and "explain analyze", and sometimes I create indexes
> to optimize them. But I was not always successful, as even when I create
It is in Postgres; I need to convert it into Oracle:
CREATE RULE payment_insert_p2007_04 AS ON INSERT TO payment WHERE
(new.payment_date >= '2007-04-01'::timestamp without time zone) DO INSTEAD
INSERT INTO payment_p2007_04 (payment_id)VALUES (1);
For the Oracle format I used Google
and got
BEGIN
DBMS_MACAD
Hi,
On 2015-02-26 11:55:20 +, Tim Smith wrote:
> As far as I'm aware, JSON has no data types as such, and so why is
> Postgres (9.4.1) attempting to impose its own nonsense constraints ?
"impose its own nonsense constraints" - breathe slowly in, and out, in,
and out.
It looks to me like ab14
Hi,
As far as I'm aware, JSON has no data types as such, and so why is
Postgres (9.4.1) attempting to impose its own nonsense constraints ?
Surely it should just insert the word 'infinity' into the JSON output,
just like it displays in a simple SQL query ?
create table app_test.foo(a text,b date,
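For readers following along, the failure is easy to reproduce on 9.4 with something like the following (the original test table is cut off above, so the names here are stand-ins, and the exact error wording may vary between versions):

create table foo (a text, b date);
insert into foo values ('x', 'infinity');

select row_to_json(foo) from foo;
-- ERROR:  date out of range
-- DETAIL: JSON does not support infinite date values.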
Hi,
given a query like this:
select *
from account a
cross join lateral (
select rate
from exchange
where target='USD'
and source=a.currency
order by date desc
limit 1) e
where a.id=19
for update;
If I understand the documentation correctl
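A related knob worth knowing about here (not an answer taken from the thread): the locking clause can name the specific table whose rows should be locked, which makes the intent explicit when the query also pulls rows out of a LATERAL subquery. Reusing the same query:

select *
from account a
cross join lateral (
    select rate
    from exchange
    where target = 'USD'
      and source = a.currency
    order by date desc
    limit 1) e
where a.id = 19
for update of a;   -- lock only the matching account row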
Hi,
Please note that hackers is not the right list for this; it's for
development discussions. Please ask such questions on -general.
On 2015-02-26 17:46:55 +0700, Jirayut Nimsaeng wrote:
> We want to use BDR with multiple database but now all the document didn't
> show any example how to config
On 2/26/15 1:34 AM, James Sewell wrote:
Hello,
The correct place for this is pgsql-general@. -hackers is for
development of Postgres itself. I'm moving the discussion there.
I have the following table:
\d a
Table "phxconfig.a"
Column | Type | Modifiers
---