On Mon, Jul 24, 2017 at 9:08 PM, Jeff Janes wrote:
> On Sun, Jul 16, 2017 at 8:47 AM, Michael Paquier
> wrote:
>> What do you think about the patch attached?
>
> Looks OK. Should it mention specifically "On a hot standby" rather than "On
> a standby"? Otherwise people might be left confused on
Hi All,
I am facing the error below while parsing a log file.
[postgres@abc pgaudit]$ pgbadger -f stderr postgres-2017-07-25_121445.csv
Can't locate Text/CSV_XS.pm in @INC (@INC contains: /usr/local/lib64/perl5
/usr/local/share/perl5 /usr/lib64/perl5/vendor_perl
/usr/share/perl5/vendor_perl /usr/lib64/
On Wed, Jul 26, 2017 at 2:01 PM, PAWAN SHARMA wrote:
> For the root user it's working fine
>
> [root@abc :/opt/PostgreSQL/9.5/data/pgaudit]#
> #-> pgbadger -f stderr postgres-2017-07-26_00.csv -o abc.html
> [==> ] Parsed 1873673 bytes of 1991998 (94.06%),
> queries: 0, events:
Hi,
We have recently promoted our Prod DB slave (2TB) to migrate to new
hardware, and upgraded from v9.2.9.21 to 9.5.1.6 using pg_upgrade.
The upgrade went without incident and we have been running for a week, but
the optimizer is ignoring indexes on 2 of our largest partitioned tables
causing v
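One thing worth ruling out first (my suggestion, not stated in the excerpt above): pg_upgrade does not carry optimizer statistics over, so the planner has nothing to work with until the upgraded cluster has been analyzed. A minimal check, using a hypothetical partitioned table named orders standing in for the real ones:

-- pg_upgrade does not migrate planner statistics, so re-analyze first
-- (vacuumdb --all --analyze-in-stages does the same thing cluster-wide).
ANALYZE VERBOSE orders;

-- Check that the table (and children named orders_*) now have analyze times.
SELECT schemaname, relname, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname LIKE 'orders%';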
On 23.07.2017 14:35, dilaz03 . wrote:
> - The IN-VALUES clause adds a new node to the plan. Does the additional
> node have a big overhead? What about filtering by two or more IN-VALUES clauses?
>
Hmmm. This works.
-- Full table can fit in memory
show shared_buffers;
shared_buffers
4GB
show work_mem;
w
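To make the question concrete, here is a rough sketch of the kind of IN (VALUES ...) filter being discussed, against a hypothetical table t with an indexed id column (names are mine, not from the post); EXPLAIN ANALYZE shows the extra node and what it costs:

-- Hypothetical test table, not from the original post.
CREATE TABLE t (id bigint PRIMARY KEY, payload text);
INSERT INTO t SELECT g, md5(g::text) FROM generate_series(1, 1000000) g;
ANALYZE t;

-- Compare a plain IN list with IN (VALUES ...); the latter typically shows
-- up as an extra Values Scan node joined against the table.
EXPLAIN ANALYZE SELECT * FROM t WHERE id IN (1, 2, 3);
EXPLAIN ANALYZE SELECT * FROM t WHERE id IN (VALUES (1), (2), (3));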
On Tue, 25 Jul 2017 18:21:43 +0530, Krithika Venkatesh
wrote:
>I have a table that is partitioned on a numeric column (ID).
>
>Partitioning works when I query the table with no joins.
>
>SELECT * FROM TABLE A a WHERE ID IN (SELECT ID FROM TABLE B b WHERE
>CREATED_TS = CURRENT_TIMESTAMP)
>
>Partit
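Side note: constraint exclusion, the pruning mechanism in these releases, only prunes when the partition key is compared with values known at plan time, which is why the literal case prunes and the subquery case generally does not. A rough illustration, with table_a/table_b as made-up stand-ins for the poster's tables:

-- Prunes: the comparison value is known when the plan is built.
EXPLAIN SELECT * FROM table_a WHERE id = 42;

-- Usually does not prune: the matching IDs are only known at run time.
EXPLAIN SELECT *
FROM table_a a
WHERE a.id IN (SELECT b.id FROM table_b b
               WHERE b.created_ts = CURRENT_TIMESTAMP);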
On Tue, Jul 25, 2017 at 10:34 PM, Nick Brennan wrote:
> We've added duplicate indexes and analyzed; however, the new indexes are
> still ignored unless we force using enable_seqscan=no or reduce
> random_page_cost to 2. The query response times using the new indexes are
> still as slow when we do
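If it helps, those planner experiments can be kept session-local rather than changed server-wide; a sketch, with big_table and some_col as hypothetical stand-ins for the real names:

-- Session-local experiments; nothing here persists after disconnect.
SET enable_seqscan = off;          -- steer the planner away from seq scans
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM big_table WHERE some_col = 123;
RESET enable_seqscan;

SET random_page_cost = 2;          -- cheaper random I/O favours index scans
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM big_table WHERE some_col = 123;
RESET random_page_cost;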
Hi,
I'm new to PostgreSQL. I'm looking for a developer tool that works
similarly to TOAD or SQL Developer for Oracle/MySQL which would allow me to
view and make DDL changes to database objects and create data models. It
would be a plus if I can use the same tool to perform some database
administrat
On Tue, Jul 4, 2017 at 10:18 PM, Chris Travers wrote:
> First, I haven't seen major problems with database bloat in a long time,
> which is why I find this case strange. I wanted to ask here what may be causing
> it.
>
> Problem:
> ==
> Database is in the 100GB to 200GB size range, running o
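A first diagnostic that usually helps in a bloat case like this (my suggestion, not from the excerpt): see where dead tuples are piling up and when autovacuum last touched those tables:

-- Tables with the most dead tuples, plus their last (auto)vacuum times.
SELECT relname, n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;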
You can try SQL Manager for PostgreSQL. The Lite edition is enough and
it's free.
It's fast, secure and very user-friendly.
On 26/07/17 19:41, Tiffany Thang wrote:
Hi,
I'm new to PostgreSQL. I'm looking for a developer tool that works
similarly to TOAD or SQL Developer for Oracle/MySQL which would
I still use pgadmin3. I also have a subscription to the jetbrains tools, so
I give datagrip a try once in a while. Datagrip has a lot going for it,
but it's lagging behind pgadmin3 in a lot of ways, so it hasn't replaced it
for me.
On Thu, Jul 27, 2017 at 10:41 AM, Tiffany Thang
wrote:
> Hi,
>
I am waiting till pgadmin4 gets a bit faster. It seems kind of slow on my
mac.
On Thu, Jul 27, 2017 at 12:05 PM, Brent Wood wrote:
>
> +1
> I think PgAdmin4 has yet to catch up with v3 for functionality, but it is
> more actively developed and supported.
>
> You might check this page out as well
On 07/26/2017 05:58 PM, Tim Uckun wrote:
I am waiting till pgadmin4 gets a bit faster. It seems kind of slow on
my mac.
PostgreSQL Studio
Navicat
EMS PostgreSQL Manager
PgAdmin3 LTS (BigSQL)
JD
--
Command Prompt, Inc. || http://the.postgres.company/ || @cmdpromptinc
PostgreSQL Centered full
Hi,
I'm testing my program and ran into an interesting issue.
I have an OS X 10.8 machine with the iODBC manager and the PostgreSQL ODBC driver.
When I tried to query the database table with the column "char(129)" I get:
"My field text"
(the text with the bunch of spaces at the end).
The dri
See dbeaver. http://dbeaver.jkiss.org
community and Enterprise editions are free and it's very good
On Wed, 2017-07-26 at 18:31 -0700, Joshua D. Drake wrote:
> On 07/26/2017 05:58 PM, Tim Uckun wrote:
> > I am waiting till pgadmin4 gets a bit faster. It seems kind of slow
> > on
> > my mac.
>
> PostgreSQL Studio
> Navicat
> EMS PostgreSQL Manager
> PgAdmin3 LTS (BigSQL)
>
> JD
>
>
If you
I recommend that you try Squirrel SQL.
(http://squirrel-sql.sourceforge.net/index.php?page=screenshots)
It is a SQL tool based on the JVM. You can run it on Linux, Windows, and
even Mac.
It uses JDBC to connect to databases, so you can install a different JDBC
driver to connect to different
Hi all,
I've got two servers (A,B) which are part of a streaming replication pair.
A is the master, B is a hot standby. I'm sending archived WAL to a
directory on A, and B is reading it via SCP.
This all works fine normally. I'm on Redhat 7.3, running EDB 9.6.2 (I'm
currently working to reproduce wit
On 7/26/2017 7:25 PM, Igor Korot wrote:
When I tried to query the database table with the column "char(129)" I get:
"My field text"
(the text with the bunch of spaces at the end).
The driver is not the current one, but I don't remember the version.
Is this a known issue? Maybe it's already fixe
On 7/26/2017 9:06 PM, Igor Korot wrote:
With the char(), is there a "Standard SQL" way to do trimming?
trim(trailing from fieldname)
but really, if you want a variable-length string without padding, don't
use CHAR() as a data type. Use VARCHAR or TEXT.
--
john r pierce, recycling bits i
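A small throwaway illustration of both points from the reply above (table and column names are mine, not from the thread):

-- Throwaway table; names are made up for the example.
CREATE TEMP TABLE demo (c char(129), v varchar(129));
INSERT INTO demo VALUES ('My field text', 'My field text');

-- The standard-SQL way to strip the blank padding that char(n) adds:
SELECT trim(trailing from c) AS trimmed FROM demo;

-- With varchar (or text) there is no padding to strip in the first place:
SELECT v FROM demo;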
Hi
I have a large table about 3B rows, that I would like to partition on a
column called _received which is of type timestamp
I followed this
https://medium.com/@StreamBright/creating-partitions-automatically-in-postgresql-7006d68c0fbb
and came up with
CREATE OR REPLACE FUNCTION create_partiti
On 7/26/2017 10:08 PM, Alex Samad wrote:
I have a large table about 3B rows, that I would like to partition on
a column called _received which is of type timestamp
a good goal is to have no more than about 100 partitions, and
ideally more like 25.
when we partition on time stamp, we t
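On these releases that usually means inheritance children with CHECK constraints on _received, e.g. one per month, which also keeps the partition count in the range suggested above. A rough sketch with made-up table names; the create_partition function in the linked article automates the same idea:

-- Hypothetical parent/child layout for 9.x inheritance partitioning.
CREATE TABLE events (
    id        bigint,
    _received timestamp NOT NULL,
    payload   text
);

-- One child per month, with a CHECK constraint so constraint exclusion
-- can skip it when _received falls outside the range.
CREATE TABLE events_2017_07 (
    CHECK (_received >= '2017-07-01' AND _received < '2017-08-01')
) INHERITS (events);

CREATE INDEX ON events_2017_07 (_received);

-- An INSERT trigger on events (as in the linked article) then routes
-- each row into the matching child table.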
(sorry for the top post, bitchy K9 Mail)
James,
are you sure you're scp'ing from the archive, not from pg_xlog?
Regards,
Gunnar "Nick" Bluth
On 27 July 2017 05:00:17 MESZ, James Sewell wrote:
>Hi all,
>
>I've got two servers (A,B) which are part of a streaming replication
>pair.
>A is t
On Thu, Jul 27, 2017 at 4:41 PM, Gunnar "Nick" Bluth <
gunnar.bl...@pro-open.de> wrote:
>
> are you sure you're scp'ing from the archive, not from pg_xlog?
>
Yes:
restore_command = 'scp -o StrictHostKeyChecking=no
10.154.19.30:/archive/xlog//%f
%p'
Although you are right - that would almost mak
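For what it's worth, while debugging this kind of mixed archive/streaming setup the recovery state can also be checked from SQL; a sketch using the 9.6 function and column names (they were renamed in 10):

-- On the standby (B): confirm it is in recovery, and compare how far
-- WAL has been received vs. replayed.
SELECT pg_is_in_recovery(),
       pg_last_xlog_receive_location(),
       pg_last_xlog_replay_location(),
       pg_last_xact_replay_timestamp();

-- On the master (A): the standby's streaming progress.
SELECT client_addr, state, sent_location, replay_location
FROM pg_stat_replication;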