While testing a query on an integer array with a GIN index, I stumbled over a
behaviour which surprised me and which I would consider a bug - but maybe I am
wrong.
Consider the following table:
create table idlist (ids int[], ... other columns ...);
create index on idlist using gin (id
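For context, the truncated example above presumably continues along these lines (a hypothetical reconstruction; the column name in the index and the query operator are assumptions):

```
-- Hypothetical reconstruction of the truncated setup above.
create table idlist (ids int[]);            -- "... other columns ..." omitted
create index on idlist using gin (ids);

-- A containment query of this shape is a candidate for the GIN index:
select * from idlist where ids @> array[42];
```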
On 28/2/19 1:08 a.m., Ahmed, Nawaz wrote:
Hi,
I believe the "file copy" method (listed in the table) in pgbackrest is based on pg_basebackup, so I think it should be "pg_basebackup over ssh" as pgbackrest internally calls pg_basebackup. David
Steele can correct me.
No, apparently pgbackrest
On 27/2/19 6:52 p.m., Mark Fletcher wrote:
On Wed, Feb 27, 2019 at 1:39 AM Achilleas Mantzios <ach...@matrix.gatewaynet.com> wrote:
Hello, as promised here is my blog :
https://severalnines.com/blog/current-state-open-source-backup-management-postgresql
Nice blog post. If you'
On Wednesday, February 27, 2019, Andy Fan wrote:
>
> The following way works with 2 commands:
>
> zhifan=# create table t1 (a jsonb);
> CREATE TABLE
> zhifan=# create unique index t1_a_name on t1 ((a->'name'));
> CREATE INDEX
>
> but now I want to merge them into 1 command, is it possible?
>
> z
The following way works with 2 commands:
zhifan=# create table t1 (a jsonb);
CREATE TABLE
zhifan=# create unique index t1_a_name on t1 ((a->'name'));
CREATE INDEX
but now I want to merge them into 1 command, is it possible?
zhifan=# create table t2 (a jsonb, constraint uk_t2_a_name
unique((a->'
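For what it's worth, a UNIQUE table constraint only accepts plain column names, so the expression form above fails to parse. One sketch that keeps everything in a single CREATE TABLE is an exclusion constraint over the expression (an alternative technique, not necessarily what the thread settled on):

```
-- Sketch: an exclusion constraint may wrap an expression, giving
-- uniqueness semantics for a->'name' in one CREATE TABLE statement.
create table t2 (
    a jsonb,
    constraint uk_t2_a_name exclude ((a->'name') with =)
);
```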
On Wed, Feb 27, 2019 at 10:39:10AM -0800, Stephen Eilert wrote:
> Are you running Vacuum on the slave node? It has to run on the master.
VACUUM performs an equivalent write activity so it has to be
restricted. ANALYZE can work though.
--
Michael
On Wed, Feb 27, 2019 at 10:21:00AM +0100, Peter Eisentraut wrote:
> On 2019-02-26 23:35, Michael Paquier wrote:
>> What I do in such cases is to compile OpenSSL by myself and link
>> Postgres to it, here is a command to build shared libraries (all that
>> is documented in INSTALL):
>> ./config --pr
Peter J. Holzer wrote:
> On 2019-02-27 12:33:02 +0100, Julien Rouhaud wrote:
> > On Wed, Feb 27, 2019 at 12:22 PM Luca Ferrari wrote:
> > >
> > > What's wrong with using a mountpoint?
> >
> > You can see most obvious reasons at
> > https://bugzilla.redhat.com/show_bug.cgi?id=1247477
>
> I see o
On 2019-Feb-22, Derek Hans wrote:
> I've set up 2 instances of PostgreSQL 11. On instance A, I created a table
> with 2 local partitions and 2 partitions on instance B using foreign data
> wrappers, following https://pgdash.io/blog/postgres-11-sharding.html.
> Inserting rows into this table works
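The setup in that blog post boils down to attaching foreign tables as partitions; a minimal sketch (server, mapping, and table names here are illustrative, not from the post, and the remote instance must hold a matching table):

```
-- Sketch: a partitioned table with one local and one foreign partition.
create extension if not exists postgres_fdw;

create server shard_b foreign data wrapper postgres_fdw
    options (host 'instance-b', dbname 'appdb');
create user mapping for current_user server shard_b
    options (user 'app', password 'secret');

create table measurements (id int, val text) partition by range (id);
create table measurements_p1 partition of measurements
    for values from (1) to (1000);
create foreign table measurements_p2 partition of measurements
    for values from (1000) to (2000)
    server shard_b;
```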
On Wed, Feb 27, 2019 at 12:59 PM Julie Nishimura
wrote:
> Hello Steve,
> Thanks a lot for your info yesterday, it was very useful. If I run this
> command on some of the servers and the results look like this, what would
> it tell you?
>
> select * from pg_extension ;
> extname | extowner
>
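If it helps, joining in pg_roles turns the owner OID into a readable name (a generic variant of the query above, not from the thread):

```
-- List installed extensions with version and owner name.
select e.extname, e.extversion, r.rolname as extension_owner
from pg_extension e
join pg_roles r on r.oid = e.extowner;
```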
> If those 50-100 connections are all active at once, yes, that is high.
> They can easily spend more time fighting each other over LWLocks,
> spinlocks, or cachelines rather than doing useful work. This can be
> exacerbated when you have multiple sockets rather than all cores in a
> single sock
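On 9.6 and later, the contention described here can be sampled directly from pg_stat_activity's wait-event columns (a generic diagnostic, not part of the original reply):

```
-- Snapshot of what active backends are currently waiting on.
select wait_event_type, wait_event, count(*)
from pg_stat_activity
where state = 'active'
group by wait_event_type, wait_event
order by count(*) desc;
```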
On 2/27/19 3:15 PM, Julie Nishimura wrote:
Hello everybody, I am new to the PostgreSQL environment, but trying to get up
to speed.
Can you please share your experience on how you can automate refreshing
the dev environment on a regular basis (ideally weekly), taking into
consideration that some of prod d
My method is complex and not so good for newbies, but it is incredibly fast and
should scale to almost any size database. Mine are not nearly as large though.
I use two methods... the normal backup/restore for longer lived development
environments, and for shorter lived environments I use postg
Hello everybody, I am new to the PostgreSQL environment, but trying to get up to
speed.
Can you please share your experience on how you can automate refreshing the dev
environment on a regular basis (ideally weekly), taking into consideration that some
of the prod DBs can be very large (like 20+ TB
Any sugge
On Wed, Feb 27, 2019 at 2:07 PM Scottix wrote:
> Hi we are running a Postgresql Database 9.4.18 and we are noticing a
> high CPU usage. Nothing is critical at the moment but if we were to
> scale up more of what we are doing, I feel we are going to run into
> issues.
>
9.4 is old. A lot of impro
Ron writes:
> On 2/27/19 12:43 PM, Joe Conway wrote:
>> FWIW, if you want to read the whole gory details of that incident, here
>> it is:
>> https://www.postgresql.org/message-id/flat/41D04FA4.7010402%40joeconway.com#dfc38927745e238d49569ffd5b33beba
> What in the world was that SuSE maintainer --
Hi we are running a Postgresql Database 9.4.18 and we are noticing a
high CPU usage. Nothing is critical at the moment but if we were to
scale up more of what we are doing, I feel we are going to run into
issues.
It is a 2 x 6 core machine, 128GB ram, Raid 10 HDD
The iostat metrics for the HDD lo
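A common first step for chasing CPU usage like this is ranking queries by cumulative time, assuming the pg_stat_statements extension is installed (the thread doesn't say whether it is; column names shown are the pre-13 ones that apply to 9.4):

```
-- Top 10 statements by total execution time (9.4-era column names).
select query, calls, total_time, rows
from pg_stat_statements
order by total_time desc
limit 10;
```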
On 2/27/19 12:43 PM, Joe Conway wrote:
On 2/27/19 11:49 AM, Peter J. Holzer wrote:
On 2019-02-27 10:42:12 -0500, Tom Lane wrote:
Luca Ferrari writes:
On Wed, Feb 27, 2019 at 12:33 PM Julien Rouhaud wrote:
You can see most obvious reasons at
https://bugzilla.redhat.com/show_bug.cgi?id=1247477
Hi all,
This behavior makes the new data sharding functionality in v11 only
marginally useful as you can't shard across database instances.
Considering data sharding appeared to be one of the key improvements in
v11, I'm confused - am I misunderstanding the expected functionality?
Thanks!
On Fri,
On 2/27/19 11:49 AM, Peter J. Holzer wrote:
> On 2019-02-27 10:42:12 -0500, Tom Lane wrote:
>> Luca Ferrari writes:
>> > On Wed, Feb 27, 2019 at 12:33 PM Julien Rouhaud wrote:
>> >> You can see most obvious reasons at
>> >> https://bugzilla.redhat.com/show_bug.cgi?id=1247477
> [...]
>> The case t
Are you running Vacuum on the slave node? It has to run on the master.
Thanks,
– Stephen
On Feb 27, 2019, 6:43 AM -0800, github kran wrote:
> Hello Team,
>
> We are using a PostgreSQL 9.6 and seeing the below error while trying to run
> a VACUUM on one of our live tables running in Production.
On 2/27/19 4:48 PM, Achilleas Mantzios wrote:
On 27/2/19 4:16 p.m., David Steele wrote:
On 2/27/19 2:31 PM, Achilleas Mantzios wrote:
On 27/2/19 1:58 p.m., rich...@simkorp.com.br wrote:
Just a note: I do use backup from standby and WAL archive from
standby. It is possible. But you have to
On Wed, Feb 27, 2019 at 1:39 AM Achilleas Mantzios <
ach...@matrix.gatewaynet.com> wrote:
>
> Hello, as promised here is my blog :
>
> https://severalnines.com/blog/current-state-open-source-backup-management-postgresql
>
>
Nice blog post. If you're aiming for a comprehensive rundown of tools, I
>
> SET lock_timeout TO '1s';
>
No, my assumption is that lock_timeout does not cover my use case here. My
point is actually that any one statement is not hitting lock_timeout, but
as a whole the transaction takes too long. For example, if I set
lock_timeout to 1 second, but my migration actually
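One hedged pattern for bounding each step of a migration, given that no transaction-wide timeout existed at the time: lock_timeout caps how long each statement waits for locks, and statement_timeout caps each statement's runtime (values and the ALTER TABLE target are illustrative):

```
begin;
set local lock_timeout = '1s';        -- fail fast if a lock can't be acquired
set local statement_timeout = '5s';   -- cap each individual DDL statement
-- migration steps go here, e.g. (hypothetical table/column):
alter table accounts add column note text;
commit;
```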
On 2019-02-27 10:42:12 -0500, Tom Lane wrote:
> Luca Ferrari writes:
> > On Wed, Feb 27, 2019 at 12:33 PM Julien Rouhaud wrote:
> >> You can see most obvious reasons at
> >> https://bugzilla.redhat.com/show_bug.cgi?id=1247477
[...]
> The case that I can recall most clearly was actually in the oth
On Wed, Feb 27, 2019 at 7:56 AM Jeremy Finzel wrote:
> I was hoping to use idle_in_transaction_session_timeout to prevent schema
> change migrations from running too long and thereby locking up the
> application for an extended period even if any one statement in the
> migration is very short.
>
OK, I have it fixed; just for anyone who's interested - the error was in
the base backup procedure.
When I switched to plain "rsync -az", it works like a charm.
Most probably, the fault was that I assumed you can use the rsync --update
option when doing a base backup.
You cannot, especially when tim
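For reference, the low-level backup API that should bracket a file-level copy like rsync looks like this in the 9.6-era non-exclusive form (a sketch; the copy must be a full one, without --update):

```
-- Keep this session open for the duration of the copy.
select pg_start_backup('base', false, false);  -- label, fast=false, exclusive=false
-- ... run the full rsync of the data directory here ...
select pg_stop_backup(false);                  -- finish the non-exclusive backup
```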
Sam Saffron writes:
> So something is clearly different about the way the tokenisation is
> defined in PG. My question is, how do I figure out what is different
> and how do I make my mac install of PG work like the Linux one?
I'm not sure you can :-(. This devolves to what the libc locale
funct
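Two settings worth comparing between the macOS and Linux installs are the default text search configuration and the collation/ctype the database was created with (generic checks, not from Tom's reply):

```
show default_text_search_config;

select datcollate, datctype
from pg_database
where datname = current_database();
```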
Luca Ferrari writes:
> On Wed, Feb 27, 2019 at 12:33 PM Julien Rouhaud wrote:
>> You can see most obvious reasons at
>> https://bugzilla.redhat.com/show_bug.cgi?id=1247477
> Thanks, I didn't see the lost+found problem because I'm on UFS (I'm
> wondering if this also applies to ZFS datasets with
On 27/02/2019 12:12, Achilleas Mantzios wrote:
On 27/2/19 5:04 p.m., Edson Carlos Ericksson Richter wrote:
On 27/02/2019 09:31, Achilleas Mantzios wrote:
On 27/2/19 1:58 p.m., rich...@simkorp.com.br wrote:
Just a note: I do use backup from standby and WAL archive from
standby. It is
On 27/2/19 5:04 p.m., Edson Carlos Ericksson Richter wrote:
On 27/02/2019 09:31, Achilleas Mantzios wrote:
On 27/2/19 1:58 p.m., rich...@simkorp.com.br wrote:
Just a note: I do use backup from standby and WAL archive from standby. It is
possible. But you have to configure the standby with
On 27/02/2019 09:31, Achilleas Mantzios wrote:
On 27/2/19 1:58 p.m., rich...@simkorp.com.br wrote:
Just a note: I do use backup from standby and WAL archive from
standby. It is possible. But you have to configure the standby with
the WAL archive option "always".
I guess there are issues w
On 2019-02-27 12:33:02 +0100, Julien Rouhaud wrote:
> On Wed, Feb 27, 2019 at 12:22 PM Luca Ferrari wrote:
> >
> > What's wrong with using a mountpoint?
>
> You can see most obvious reasons at
> https://bugzilla.redhat.com/show_bug.cgi?id=1247477
I see only one good reason there: The fact that p
I was hoping to use idle_in_transaction_session_timeout to prevent schema
change migrations from running too long and thereby locking up the
application for an extended period even if any one statement in the
migration is very short.
I am not finding predictable behavior
using idle_in_transaction_
On 27/2/19 4:16 p.m., David Steele wrote:
On 2/27/19 2:31 PM, Achilleas Mantzios wrote:
On 27/2/19 1:58 p.m., rich...@simkorp.com.br wrote:
Just a note: I do use backup from standby and WAL archive from standby. It is
possible. But you have to configure the standby with the WAL archive option "
Hello Team,
We are using a PostgreSQL 9.6 and seeing the below error while trying to
run a VACUUM on one of our live tables running in Production. We wanted to
clean up some DEAD tuples on the table.
*Command*: VACUUM (ANALYZE, VERBOSE) table_name.
ERROR: cannot execute VACUUM during recovery
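That error means the session is connected to a standby; a quick generic check before running maintenance commands (not from the original message):

```
-- Returns true on a hot standby (in recovery), false on the primary.
select pg_is_in_recovery();
```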
On 2/27/19 2:31 PM, Achilleas Mantzios wrote:
On 27/2/19 1:58 p.m., rich...@simkorp.com.br wrote:
Just a note: I do use backup from standby and WAL archive from
standby. It is possible. But you have to configure the standby with
the WAL archive option "always".
I guess there are issues with it
On 27/2/19 1:58 p.m., rich...@simkorp.com.br wrote:
Just a note: I do use backup from standby and WAL archive from standby. It is
possible. But you have to configure the standby with the WAL archive option "always".
I guess there are issues with it. If this was so easy then pgbarman and
pgback
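The standby-side archiving mentioned here hinges on a single setting; expressed in SQL form (note that changing archive_mode requires a server restart):

```
-- On the standby: archive WAL even while in recovery.
alter system set archive_mode = 'always';
-- A restart, not just a reload, is needed for this to take effect.
```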
On Wed, Feb 27, 2019 at 12:33 PM Julien Rouhaud wrote:
> You can see most obvious reasons at
> https://bugzilla.redhat.com/show_bug.cgi?id=1247477
Thanks, I didn't see the lost+found problem because I'm on UFS (I'm
wondering if this also applies to ZFS datasets with a mount point).
Same story for
On Wed, Feb 27, 2019 at 12:22 PM Luca Ferrari wrote:
>
> What's wrong with using a mountpoint?
You can see most obvious reasons at
https://bugzilla.redhat.com/show_bug.cgi?id=1247477
I apparently cannot find an answer in the documentation, however initdb states:
initdb: directory "/mnt/data1" exists but is not empty
It contains a dot-prefixed/invisible file, perhaps due to it being a
mount point.
Using a mount point directly as the data directory is not recommended.
Create a s
(This is a cross post from Stack Exchange, not getting much traction there)
On my Mac install of PG:
```
=# select to_tsvector('english', 'abcd สวัสดี');
 to_tsvector
-------------
 'abcd':1
(1 row)
=# select * from ts_debug('hello สวัสดี');
alias | description | token | dictionaries
On 21/2/19 9:28 a.m., Achilleas Mantzios wrote:
On 21/2/19 9:17 a.m., Julie Nishimura wrote:
Does anyone use this solution? Any recommendations?
Thanks!
Barman will fit most requirements. pgBackRest excels when WAL traffic runs to 10 files/day or more. I have written an article, not yet pu
On 2019-02-26 23:35, Michael Paquier wrote:
> What I do in such cases is to compile OpenSSL by myself and link
> Postgres to it, here is a command to build shared libraries (all that
> is documented in INSTALL):
> ./config --prefix=$INSTALLPATH shared
I did test it now using a custom-built OpenSSL