On Tue, Sep 20, 2016 at 3:21 PM, KGA Official wrote:
> Regarding "are you really going to downgrade again"?
> The upgraded postgres will be bundled along with product code changes and
> other products. So, if someone finds product issues, they will roll back
> everything together.
> So, essentiall
Hi Andreas, Adrian and Venkata,
Regarding "are you really going to downgrade again"?
The upgraded postgres will be bundled along with product code changes and
other products. So, if someone finds product issues, they will roll back
everything together.
So, essentially, yes. One requirement of the
> We are currently running at 9.5.2 and plan to upgrade to 9.5.4, but our
> change requirement needs to plan for a downgrade with data preservation,
> before the upgrade is authorized.
>
> Thus, I am asking this: Is it safe to downgrade from 9.5.4 to 9.5.2 by
> simply replacing the binaries? We will ke
On 09/19/2016 09:43 PM, Andreas Kretschmer wrote:
It is binary compatible, so yes. But trust me, there is problem with the
new version.
Just to be clear, did you mean:
"But trust me, there is problem with the new version."
or
"But trust me, there is no problem with the new version."
On 20 Se
2016-09-20 16:46 GMT+12:00 Michael Paquier :
> On Tue, Sep 20, 2016 at 1:30 PM, Patrick B
> wrote:
> > 2016-09-20 15:14 GMT+12:00 Venkata B Nagothi :
> >> Do you mean to say that the WAL file "0002179A00F8" is
> >> available @ "/var/lib/pgsql/9.2/archive" location ?
> >
> > Yes!
>
> T
On Tue, Sep 20, 2016 at 1:30 PM, Patrick B wrote:
> 2016-09-20 15:14 GMT+12:00 Venkata B Nagothi :
>> Do you mean to say that the WAL file "0002179A00F8" is
>> available @ "/var/lib/pgsql/9.2/archive" location ?
>
> Yes!
Timeline 2 has visibly reached its end at segment
000217
It is binary compatible, so yes. But trust me, there is problem with the new
version.
On 20 September 2016 06:00:59 CEST, KGA Official wrote:
>Hi All,
>
>We are currently running at 9.5.2 and plan to upgrade to 9.5.4, but our
>change requirement needs to plan for a downgrade with data
>preservat
2016-09-20 15:14 GMT+12:00 Venkata B Nagothi :
>
> On Tue, Sep 20, 2016 at 12:38 PM, Patrick B
> wrote:
>
>> Hi guys,
>>
>> I got a slave server running Postgres 9.2 with streaming replication and
>> wal_archive in an EC2 Instance at Amazon.
>>
>> Postgres logs are showing me this error:
>>
>>> r
2016-09-20 16:29 GMT+12:00 Lucas Possamai :
>
>
> 2016-09-20 15:14 GMT+12:00 Venkata B Nagothi :
>
>>
>> On Tue, Sep 20, 2016 at 12:38 PM, Patrick B
>> wrote:
>>
>>> Hi guys,
>>>
>>> I got a slave server running Postgres 9.2 with streaming replication and
>>> wal_archive in an EC2 Instance at Ama
Hi All,
We are currently running at 9.5.2 and plan to upgrade to 9.5.4, but our
change requirement needs to plan for a downgrade with data preservation,
before the upgrade is authorized.
Thus, I am asking this: Is it safe to downgrade from 9.5.4 to 9.5.2 by
simply replacing the binaries? We will keep
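Minor releases within the same major series share the on-disk data format, which is what the "binary compatible" answer elsewhere in this thread refers to. A minimal sanity-check sketch, assuming only that psql can reach the server before and after the binary swap and that the 9.5.4 release notes list no special post-update steps:

  -- on the 9.5.4 binaries, before the rollback
  SELECT version();
  -- stop the server, restore the 9.5.2 binaries over the same installation,
  -- start it against the same data directory, then confirm:
  SELECT version();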
On Tue, Sep 20, 2016 at 12:38 PM, Patrick B
wrote:
> Hi guys,
>
> I got a slave server running Postgres 9.2 with streaming replication and
> wal_archive in an EC2 Instance at Amazon.
>
> Postgres logs are showing me this error:
>
>> restored log file "0002179A00F8" from archive
>> inv
Hi guys,
I got a slave server running Postgres 9.2 with streaming replication and
wal_archive in an EC2 Instance at Amazon.
Postgres logs are showing me this error:
> restored log file "0002179A00F8" from archive
> invalid record length at 179A/F8FFF3D0
> WAL segment `/var/lib/pgsql/
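As general context for this kind of standby error (not advice given in the thread itself), a few read-only checks that work on 9.2, where the functions still carry the old xlog names:

  SELECT pg_is_in_recovery();                  -- should be true on the slave
  SELECT pg_last_xlog_receive_location(),      -- how far streaming has received
         pg_last_xlog_replay_location();       -- how far WAL replay has progressed
  SELECT now() - pg_last_xact_replay_timestamp() AS approx_replication_lag;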
On 09/19/2016 11:46 AM, kbran...@pwhome.com wrote:
I think I'm going to need some help in understanding a couple of restore issues.
This is for Pg 9.5.1.
It seems that if I create a dump using
pg_dump --clean --create --format=p --dbname=nms --schema=public > dump.sql
then the restore (after
I think I'm going to need some help in understanding a couple of restore issues.
This is for Pg 9.5.1.
It seems that if I create a dump using
pg_dump --clean --create --format=p --dbname=nms --schema=public > dump.sql
then the restore (after "drop schema public cascade") with "psql nms < dump.
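Not a conclusion from the thread, just a hedged sketch of one way to rebuild only the public schema from a plain-format dump: re-take the dump without --clean --create so it contains no database-level DROP/CREATE statements, then run the following from psql while connected to nms:

  -- shell step (assumed settings): pg_dump --format=p --dbname=nms --schema=public > dump.sql
  DROP SCHEMA IF EXISTS public CASCADE;
  CREATE SCHEMA public;
  \i dump.sql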
> On 09/19/2016 10:56 AM, Willy-Bas Loos wrote:
> > On Mon, Sep 19, 2016 at 6:26 PM, Paul Jungwirth wrote:
> > I've worked on similar
> > projects that maintain history for regulatory reasons.
> Can you explain "regulatory reasons" please?
I me
On Mon, Sep 19, 2016 at 5:10 AM, Kiran wrote:
> Dear All,
>
> I have a table called question which has a ts_vector column *weighted_tsv*
> .
> I have gin indexed the weighted_tsv column.
>
> When I query using the following
>
> EXPLAIN ANALYZE select * from question where weighted_tsv @@
> to_tsq
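For reference, a sketch of the kind of setup being described above; the source columns and the 'swedish' text search configuration are guesses for illustration only:

  ALTER TABLE question ADD COLUMN weighted_tsv tsvector;
  UPDATE question
  SET    weighted_tsv =
         setweight(to_tsvector('swedish', coalesce(title, '')), 'A') ||
         setweight(to_tsvector('swedish', coalesce(body,  '')), 'B');
  CREATE INDEX question_weighted_tsv_idx ON question USING gin (weighted_tsv);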
On Fri, Sep 16, 2016 at 2:01 AM, Chris Withers
wrote:
> Hi All,
>
> I have quite a few tables that follow a pattern like this:
>
>       Table "public.my_model"
>  Column |  Type   | Modifiers
> --------+---------+-----------
>  period | tsrange | not null
>  k
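For readers skimming the archive, a sketch of the table pattern being described; the second column name and the exclusion constraint are illustrative assumptions, not details from the thread:

  CREATE EXTENSION IF NOT EXISTS btree_gist;   -- lets the scalar column join a GiST constraint
  CREATE TABLE my_model (
      period tsrange NOT NULL,
      key    text    NOT NULL,                 -- hypothetical second column
      -- assumed intent: no two rows for the same key may overlap in time
      EXCLUDE USING gist (key WITH =, period WITH &&)
  );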
On 09/19/2016 08:48 AM, Willy-Bas Loos wrote:
Since records can
be changed afterwards, it has been argued that we should have
"journaling", meaning that every change to the data is saved in a
separate schema that holds a "journaling" copy of each table
I don't think this is especially unusual.
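One common shape for such a "journaling" copy is an AFTER trigger writing into a shadow table; a minimal sketch in which every name (the journal schema, my_table, the trigger function) is illustrative rather than taken from the thread:

  CREATE SCHEMA journal;
  CREATE TABLE journal.my_table (LIKE public.my_table);
  ALTER TABLE journal.my_table
      ADD COLUMN journal_op   text        NOT NULL,
      ADD COLUMN journal_time timestamptz NOT NULL DEFAULT now();

  CREATE FUNCTION journal.log_my_table() RETURNS trigger AS $$
  BEGIN
      IF TG_OP = 'DELETE' THEN
          INSERT INTO journal.my_table SELECT OLD.*, TG_OP, now();
      ELSE
          INSERT INTO journal.my_table SELECT NEW.*, TG_OP, now();
      END IF;
      RETURN NULL;  -- return value of an AFTER ROW trigger is ignored
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER my_table_journal
      AFTER INSERT OR UPDATE OR DELETE ON public.my_table
      FOR EACH ROW EXECUTE PROCEDURE journal.log_my_table();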
On Mon, Sep 19, 2016 at 5:48 PM, Willy-Bas Loos wrote:
>
> The use case of legal disputes being fought with our data as evidence and
> digging up the exact data from a certain point in time never occurred in
> those 10 years, and it is unlikely that it ever will.
> But it might, if anyone could r
Hi,
In a joint effort with some parties, we have created a large database (1.1
TB) of records that change only occasionally. Mainly, more of them are
added to the database at a slow rate (1.1 TB built up over 10 years).
The records can have some significance in legal cases. Since records can be
cha
On Mon, Sep 19, 2016 at 02:56:17PM +0200, Kiran wrote:
> I want to know whether the ANALYZE output I am getting is normal for a
> table having few records, or whether something is wrong.
> Will the DB engine use whatever is the best way to execute a query
> irrespective of the indexing in this case?
It wil
Hi Karsten,
Thanks for the response.
I want to know whether the ANALYZE output I am getting is normal for a
table having few records, or whether something is wrong.
Will the DB engine use whatever is the best way to execute a query
irrespective of the indexing in this case?
At the moment the records in the
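On the question of whether the planner can skip the index: yes, and on a table with only a handful of rows either plan is normal. A standard way to compare the two plans during testing (enable_seqscan is a real planner setting; turning it off is only a diagnostic aid, not something to leave on):

  EXPLAIN ANALYZE
  SELECT * FROM question WHERE weighted_tsv @@ to_tsquery('Hur & ofta');

  SET enable_seqscan = off;   -- diagnostic only
  EXPLAIN ANALYZE
  SELECT * FROM question WHERE weighted_tsv @@ to_tsquery('Hur & ofta');
  RESET enable_seqscan;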
On Mon, Sep 19, 2016 at 02:10:50PM +0200, Kiran wrote:
> EXPLAIN ANALYZE select * from question where weighted_tsv @@
> to_tsquery('Hur&ofta');
>
> I get the following output
>
> "Bitmap Heap Scan on question (cost=12.33..25.38 rows=10 width=731)
> (actual time=0.058..0.062 rows=3 loops=1)"
> "
Dear All,
I have a table called question which has a ts_vector column *weighted_tsv*.
I have gin indexed the weighted_tsv column.
When I query using the following
EXPLAIN ANALYZE select * from question where weighted_tsv @@
to_tsquery('Hur&ofta');
I get the following output
"Bitmap Heap Scan o
Hi,
Thanks for the reply. It worked.
But I also fixed the original ts_query as follows, which gives the exact
result:
SELECT * from question
where
weighted_tsv @@ to_tsquery('Hur&ofta');
Thank you .
regards
Kiran
On Mon, Sep 19, 2016 at 11:21 AM, Oleg Bartunov wrote:
> On Mon, Sep 19, 2016 at
On Mon, Sep 19, 2016 at 4:19 AM, Oleg Bartunov wrote:
> On Mon, Sep 19, 2016 at 3:46 AM, Kiran wrote:
>> Dear All,
>>
>> I have a ts_vector column in question table called weighted_tsv.
>> I am trying to search using ts_query as follows
>>
>> SELECT *
>>
>> FROM question
>>
>> WHERE weighted_tsv
On Mon, Sep 19, 2016 at 3:46 AM, Kiran wrote:
> Dear All,
>
> I have a ts_vector column in question table called weighted_tsv.
> I am trying to search using ts_query as follows
>
> SELECT *
>
> FROM question
>
> WHERE weighted_tsv @@ to_tsquery('Hur ofta');
you should use to_tsquery('Hur & ofta')
Dear All,
I have a ts_vector column in question table called *weighted_tsv*.
I am trying to search using ts_query as follows
SELECT *
FROM question
WHERE weighted_tsv @@ to_tsquery('Hur ofta');
*But the query results in an error as follows:*
ERROR: function ts_query(unknown) does not exist
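The error text is literal: there is no function called ts_query(). The working forms are shown below; the first is the fix given upthread, the second builds the & operator automatically from plain words:

  SELECT * FROM question WHERE weighted_tsv @@ to_tsquery('Hur & ofta');
  SELECT * FROM question WHERE weighted_tsv @@ plainto_tsquery('Hur ofta');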
On 16/09/2016 15:29, Tom Lane wrote:
Chris Withers writes:
On 16/09/2016 14:54, Igor Neyman wrote:
So, what is the value for "end ts", when the record is inserted (the range just
started)?
It's open ended, so the period is [start_ts, )
I've not looked at the GiST range opclass, but I wou
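On the open-ended upper bound: with period stored as [start_ts, ), the standard range operators cover the two obvious questions (the table name reuses the my_model sketch above; nothing here is prescribed by the thread):

  -- rows whose period was in force at a given instant
  SELECT * FROM my_model WHERE period @> TIMESTAMP '2016-09-16 15:29:00';
  -- rows that are still open, i.e. have no upper bound yet
  SELECT * FROM my_model WHERE upper_inf(period);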