Hi guys,
select schemaname,relname,n_live_tup, n_dead_tup from pg_stat_all_tables
where relname = 'parts';
schemaname | relname | n_live_tup | n_dead_tup
> -----------+---------+------------+-----------
> public     | parts   | 191623953  | 182477402
See the large number of dead_tup?
My autovacuum p
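(One quick way to see whether autovacuum has actually been reaching the
table -- a sketch against the same statistics view:)

SELECT relname, last_vacuum, last_autovacuum, autovacuum_count
FROM pg_stat_all_tables
WHERE relname = 'parts';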
2016-09-13 0:06 GMT+12:00 Jeff Janes :
> On Sep 12, 2016 1:12 AM, "Scott Marlowe" wrote:
> >
> >
> >
> > Why not subscribe a new cluster on the same box with pg_basebackup?
>
> +1.
>
> Maybe he is afraid of configuring (or doesn't know how to configure) things
> to run on a non-standard port, for testing?
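A minimal sketch of that approach (paths, user and port are examples;
replication access is assumed to be configured already, and 9.2's
pg_basebackup supports -X stream):

pg_basebackup -D /var/lib/pgsql/9.2/testdata -h localhost -U replicator -X stream
echo "port = 5433" >> /var/lib/pgsql/9.2/testdata/postgresql.conf
pg_ctl -D /var/lib/pgsql/9.2/testdata start

Test clients then connect to port 5433 while the original cluster keeps 5432.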
Hi guys,
I got the following column:
modified_date TIMESTAMP(6) WITHOUT TIME ZONE DEFAULT
> "statement_timestamp"(),
I want to select all rows that have been modified from now to 4 months ago.
I've used these queries:
select
> modified_date
> from
> clients
> WHERE
> modified_date >= now() - interval '4 months';
2016-09-14 13:17 GMT+12:00 David Rowley :
> On 14 September 2016 at 12:20, Patrick B wrote:
> > I want to select all rows that have been modified from now to 4 months
> ago.
> >
> > I've used these queries:
> >
> >> select
> >> mod
Hi guys,
I got the test1 table with three columns:
id(BIGINT) - j_id(BIGINT) - comments(CHARACTER VARYING)
*This needs to be done 180 times:*
> INSERT INTO test1 (id,j_id,comments) VALUES (default,123321,'test-1 -
> comments');
> INSERT INTO test1 (id,j_id,comments) VALUES (default,123322,'te
>
> A select can make up columns too, not just what you've got in a table,
> so you can:
>
> select j_id, 'test-1 - comments' as comment from test2 where
> customer_id=88897;
>
> and then you can simply insert that into your other table (you don't
> need to specify the columns that are getting
>
> Please have a look at this example Patrick: http://sqlfiddle.com/#!15/
> 1773d/4
>
> Lucas
>
>
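For the archives, the two suggestions combine into one statement (a sketch;
test2's columns are taken from the snippet above, and omitting id lets the
sequence default fill it in):

INSERT INTO test1 (j_id, comments)
SELECT j_id, 'test-1 - comments'
FROM test2
WHERE customer_id = 88897;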
This helped a lot... it's working now :) Thanks guys!!!
Patrick
Hi guys,
I've got the following query:
WITH
> accounts AS (
> SELECT
> c.id AS company_id,
> c.name_first AS c_first_name,
> c.name_last AS c_last_name,
> c.company AS c_name
> FROM public.clients c
> WHERE id = 33412393
> ORDER BY 1 LIMIT 100
>
2016-09-19 9:18 GMT+12:00 Patrick B :
> Hi guys,
>
> I've got the following query:
>
> WITH
>> accounts AS (
>> SELECT
>> c.id AS company_id,
>> c.name_first AS c_first_name,
>> c.name_last AS c_last_name,
>>
Hi guys,
I got a slave server running Postgres 9.2 with streaming replication and
wal_archive in an EC2 Instance at Amazon.
Postgres logs are showing me this error:
> restored log file "0002179A00F8" from archive
> invalid record length at 179A/F8FFF3D0
> WAL segment `/var/lib/pgsql/
2016-09-20 15:14 GMT+12:00 Venkata B Nagothi :
>
> On Tue, Sep 20, 2016 at 12:38 PM, Patrick B
> wrote:
>
>> Hi guys,
>>
>> I got a slave server running Postgres 9.2 with streaming replication and
>> wal_archive in an EC2 Instance at Amazon.
>>
2016-09-20 16:46 GMT+12:00 Michael Paquier :
> On Tue, Sep 20, 2016 at 1:30 PM, Patrick B
> wrote:
> > 2016-09-20 15:14 GMT+12:00 Venkata B Nagothi :
> >> Do you mean to say that the WAL file "0002179A00F8" is
> >> available @ "/var/
Hi guys,
I've got a table with email column:
> email CHARACTER VARYING(50) DEFAULT ''::CHARACTER VARYING NOT NULL,
There are 30k rows and the email column is not null... there is data in
there.
But for testing purposes I need to overwrite the email. So the customer
won't get an email from me whi
2016-09-21 17:27 GMT+12:00 John R Pierce :
> On 9/20/2016 10:02 PM, Patrick B wrote:
>
> I've got a table with email column:
>
>> email CHARACTER VARYING(50) DEFAULT ''::CHARACTER VARYING NOT NULL,
>
>
> There are 30k rows and the email column is not n
2016-09-21 18:31 GMT+12:00 John R Pierce :
> On 9/20/2016 10:56 PM, Patrick B wrote:
>
> update tablename set email = 'test@example.com'; ?
>>
>>
>>
>
> I can't overwrite the data into that column...
>
> I was hoping that in a S
Hi guys,
I'm exporting some data for testing purposes.
COPY (SELECT
> name_first,
> name_last,
> email,
> company
> FROM
> clients
> )
> TO '/var/lib/pgsql/test1.csv' DELIMITER ',' csv HEADER QUOTE '"';
cat /var/lib/pgsql/test1.csv
"","",hiddenem...@hotm
kbran...@pwhome.com
Yes.. it is a code issue and not a DB issue
2016-09-22 6:50 GMT+12:00 Steve Petrie, P.Eng. :
> Hi Patrick.
>
> - Original Message - From: "Patrick B"
> To: "pgsql-general"
> Sent: Wednesday, September 21, 2016 1:02 AM
> Su
>
> Perhaps you mean you want to quote all strings? For that you use FORCE
> QUOTE.
> eg:
> COPY (SELECT
> name_first,
> name_last,
> email,
> company
> FROM
> clients
> )
> TO '/var/lib/pgsql/test1.csv' DELIMITER ',' CSV HEADER QUOTE '"' FORCE QUOTE *;
2016-09-22 10:02 GMT+12:00 Jim Nasby :
> On 9/21/16 1:50 PM, Steve Petrie, P.Eng. wrote:
>
>>
>> The reason I ask is -- the maximum length of a valid email address is
>> actually 256 characters (or 254, according to comments in the PHP function
>> is_valid_email_address(...) that I found on the Inter
>
> Rather than test.u...@example.com I was hoping for
> values such as:
>
> test.4645364.@example.com
> test.8786756.@example.com
>
> With UNIQUE UUID
>
> is that possible?
I was able to do that using:
SELECT cast('test.' || uuid_generate_v1() AS varchar(30)) || '@example.com';
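Spelled out as the full update (a sketch; the table name is an assumption,
uuid_generate_v1() comes from the uuid-ossp extension, and the cast keeps
the result inside the varchar(50) column):

CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
UPDATE clients
SET email = cast('test.' || uuid_generate_v1() AS varchar(30)) || '@example.com';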
Hi guys,
I'm using postgres 9.2 and got the following column:
start TIMESTAMP(6) WITHOUT TIME ZONE NOT NULL
SELECT start FROM test1;
2015-12-18 02:40:00
I need to split that date into two columns on my select:
2015-12-18 = date column
02:40:00 = time column
How can I do that without modif
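Two casts in the select list do this without modifying the table (the output
column names are just labels):

SELECT
    start::date AS date_column,
    start::time AS time_column
FROM test1;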
Hi guys,
I've got this domain:
CREATE DOMAIN public.a_city
> AS character varying(80)
> COLLATE pg_catalog."default";
And I need to increase the type from character varying(80) to character
varying(255).
How can I do that? I didn't find info about it. I'm using Postgres 9.2.
Thanks!
Patrick
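For the archives: ALTER DOMAIN cannot change the base type, so one
workaround is to swap in a new domain (a sketch; the table and column using
the domain are assumptions):

CREATE DOMAIN public.a_city_new AS character varying(255)
    COLLATE pg_catalog."default";
ALTER TABLE addresses ALTER COLUMN city TYPE public.a_city_new;  -- repeat for each column using a_city
DROP DOMAIN public.a_city;
ALTER DOMAIN public.a_city_new RENAME TO a_city;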
Hi guys,
I've got 2k rows in a table:
> CREATE TABLE
> public.not_monthly
> (
> id BIGINT DEFAULT "nextval"('"id_seq"'::"regclass") NOT NULL,
> clientid BIGINT,
> name_first CHARACTER VARYING(80) DEFAULT ''::CHARACTER VARYING,
> name_last CHARACTER VARYING(80) DEFAULT ''::
2016-09-27 16:22 GMT+13:00 Patrick B :
> Hi guys,
>
> I've got 2k rows in a table:
>
>> CREATE TABLE
>> public.not_monthly
>> (
>> id BIGINT DEFAULT "nextval"('"id_seq"'::"regclass") NOT NULL,
>
2016-09-28 0:29 GMT+13:00 Vitaly Burovoy :
> On 9/26/16, Patrick B wrote:
> > 2016-09-27 16:22 GMT+13:00 Patrick B :
> > I'm doing this now:
> >
> > sel AS (
> >> SELECT i.id AS c_id
> >> FROM (select id, row_number() OVER (ORDER BY id) AS rn F
2016-09-28 8:54 GMT+13:00 David G. Johnston :
> On Mon, Sep 26, 2016 at 9:06 PM, Patrick B
> wrote:
>
>>
>> I'm doing this now:
>>
>>
>> sel AS (
>>> SELECT i.id AS c_id
>>> FROM (select id, row_number() OVER (ORDER BY id) AS rn FROM
2016-09-28 9:23 GMT+13:00 Kevin Grittner :
> On Tue, Sep 27, 2016 at 2:59 PM, Patrick B
> wrote:
>
> [sel is a relation which can have multiple rows; the fact that it
> is being generated in a CTE isn't relevant for purposes of the
> error.]
>
>
2016-09-28 10:11 GMT+13:00 Kevin Grittner :
> On Tue, Sep 27, 2016 at 3:33 PM, Patrick B
> wrote:
>
> > 4 - Get the table_1.c_id from the STEP NUMBER 2 and put it into
> table_2.c_id
> > - This is the problem.. how can I get the inserted id from STEP2 and put
> it
&
2016-09-28 10:25 GMT+13:00 Patrick B :
>
>
> 2016-09-28 10:11 GMT+13:00 Kevin Grittner :
>
>> On Tue, Sep 27, 2016 at 3:33 PM, Patrick B
>> wrote:
>>
>> > 4 - Get the table_1.c_id from the STEP NUMBER 2 and put it into
>> table_2.c_id
>> >
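The pattern being discussed, in one statement using a data-modifying CTE
with RETURNING (a sketch; column lists and values are assumptions based on
the thread):

WITH step2 AS (
    INSERT INTO table_1 (clientid, name_first)
    VALUES (123, 'test')
    RETURNING c_id
)
INSERT INTO table_2 (c_id)
SELECT c_id FROM step2;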
2016-09-09 1:09 GMT+12:00 Scott Marlowe :
> On Tue, Sep 6, 2016 at 5:25 PM, John R Pierce wrote:
> > On 9/6/2016 4:20 PM, Melvin Davidson wrote:
> >>
> >> If you use slony to replicate, you CAN have 9.2 on the master and 9.4 on
> >> the slave.
> >
> >
> > does rackspace support slony? how about
Hi guys,
I need to export an entire database to another server, for testing purposes.
Is there any way to export all indexes and constraints ?
Postgres 9.2
Patrick
Thank you guys... good to know that pg_dump does all the job for me :)
So.. if I dump using only the --schema-only option, it will dump all the
schemas, constraints, indexes and tables?
Because probably I'll have to import the data manually. Not in a single
pg_restore, I mean. (AWS issue)
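A sketch of that workflow (database names are examples):

pg_dump --schema-only mydb > schema.sql          # schemas, tables, indexes, constraints
psql -d mydb_test -f schema.sql                  # restore the structure on the test server
pg_dump --data-only mydb | psql -d mydb_test     # load the data separately later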
Hi guys,
I got a very big database that I need to export (dump) into a new test
server.
However, this new test database server doesn't need to have all the data. I
would like to have only the first 100 rows (for example) of each table in my
database.
I'm using pg_sample to do that, but unfortunately
2016-10-19 13:39 GMT+13:00 Michael Paquier :
> On Wed, Oct 19, 2016 at 9:24 AM, Patrick B
> wrote:
> > However, this new database test server doesn't need to have all the
> data. I
> > would like to have only the first 100 rows(example) of each table in my
>
Hi guys,
I'm using this query to measure the delay between a Master and a Streaming
Replication Slave server, using PostgreSQL 9.2.
SELECT
> pg_last_xlog_receive_location() receive,
> pg_last_xlog_replay_location() replay,
> (
> extract(epoch FROM now()) -
> extract(epoch FROM pg_last_xact_replay_timestamp())
> ) AS delay;
2016-10-31 15:54 GMT+13:00 Venkata B Nagothi :
>
> On Mon, Oct 31, 2016 at 11:57 AM, Patrick B
> wrote:
>
>> Hi guys,
>>
>> I'm using this query to measure the delay between a Master and a
>> Streaming Replication Slave server, using PostgreSQL 9.2.
>&
Hi guys,
I got a test server, let's call it test01.
The test01 has a basebackup from the master.
I want to turn test01 into a master. It doesn't need to catch up with the
wal_files, because I don't need it to be up-to-date.
So what I did is:
- Replaced /var/lib/pgsql/9.2/data/ with the baseback
2016-11-01 10:33 GMT+13:00 David G. Johnston :
> On Mon, Oct 31, 2016 at 1:46 PM, Patrick B
> wrote:
>
>> Hi guys,
>>
>> I got a test server, let's call it test01.
>>
>> The test01 has a basebackup from the master.
>> I want to turn test01 i
I actually want to restore to a point in time.
I don't want recovery_target_timeline = 'latest'.
How can I stipulate a date?
Thanks
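In recovery.conf that looks like this (a sketch; the archive path is an
example):

restore_command = 'cp /var/lib/pgsql/archive/%f "%p"'
recovery_target_time = '2016-10-30 02:24:40'

Recovery then stops at the first commit at or after that time.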
2016-11-01 11:59 GMT+13:00 Patrick B :
>
>
> 2016-11-01 10:33 GMT+13:00 David G. Johnston :
>
>> On Mon, Oct 31, 2016 at 1:46 P
If I change recovery.conf:
recovery_target_time = '2016-10-30 02:24:40'
I get error:
FATAL: requested recovery stop point is before consistent recovery point
2016-11-02 2:55 GMT+13:00 Scott Marlowe :
> On Mon, Oct 31, 2016 at 8:01 PM, Patrick B
> wrote:
> > If I change recovery.conf:
> >
> > recovery_target_time = '2016-10-30 02:24:40'
> >
> >
> > I get error:
> >
> > FATAL: requested
2016-11-02 8:43 GMT+13:00 Patrick B :
>
>
> 2016-11-02 2:55 GMT+13:00 Scott Marlowe :
>
>> On Mon, Oct 31, 2016 at 8:01 PM, Patrick B
>> wrote:
>> > If I change recovery.conf:
>> >
>> > recovery_target_time = '2016-10-30 02:24:40'
>
Hi guys,
I'm writing a simple PL/pgSQL function to delete some data from different
tables.
The function starts with a select, and then 2 deletes after that.
How can I return the number of rows that each delete performed?
CREATE or REPLACE FUNCTION delete_ids_clientid(account_id integer)
RETURN
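GET DIAGNOSTICS ... ROW_COUNT is the usual answer; a sketch of the whole
function (table names are placeholders):

CREATE OR REPLACE FUNCTION delete_ids_clientid(account_id integer)
RETURNS TABLE (deleted_a bigint, deleted_b bigint) AS $$
DECLARE
    n_a bigint;
    n_b bigint;
BEGIN
    DELETE FROM table_a WHERE clientid = account_id;
    GET DIAGNOSTICS n_a = ROW_COUNT;   -- rows removed by the first delete
    DELETE FROM table_b WHERE clientid = account_id;
    GET DIAGNOSTICS n_b = ROW_COUNT;   -- rows removed by the second delete
    RETURN QUERY SELECT n_a, n_b;
END;
$$ LANGUAGE plpgsql;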
Hi guys,
My current scenario is:
master01 - Postgres 9.2 master DB
slave01 - Postgres 9.2 streaming replication + wal_files slave server for
read-only queries
slave02 - Postgres 9.2 streaming replication + wal_files slave server @ AWS
master01 sends wal_files to both slaves via ssh.
*On the ma
2016-11-14 15:33 GMT+13:00 Venkata B Nagothi :
>
> On Mon, Nov 14, 2016 at 1:22 PM, Patrick B
> wrote:
>
>> Hi guys,
>>
>> My current scenario is:
>>
>> master01 - Postgres 9.2 master DB
>> slave01 - Postgres 9.2 streaming replication + wal_files
Would it be possible to check the integrity between two database servers?
Both servers are slaves (streaming replication + wal_files), but I believe
one of them, when it was recovered from wal_files during a brief outage we
had, did not recover 100%.
How could I check the data between both DBs?
I'm using Postg
2016-11-17 12:19 GMT+13:00 Patrick B :
> Would be possible to check the integrity between two database servers?
>
> Both servers are slaves (streaming replication + wal_files) but I believe
> one of them, when recovered from wal_files in a fast outage we got, got
> recovered no
Hi guys,
I currently have a slave02 server that is replicating from another slave01
via Cascading replication. The master01 server is shipping wal_files (via
ssh) to both slaves.
I'm doing some tests on slave02 to test the recovery via wal_files... The
goal here is to stop postgres, wait a few min
2016-11-23 15:55 GMT+13:00 Venkata B Nagothi :
>
>
> On Wed, Nov 23, 2016 at 1:03 PM, Patrick B
> wrote:
>
>> Hi guys,
>>
>> I currently have a slave02 server that is replicating from another
>> slave01 via Cascading replication. The master01 server is
2016-11-23 16:18 GMT+13:00 Venkata B Nagothi :
>
> On Wed, Nov 23, 2016 at 1:59 PM, Patrick B
> wrote:
>
>>
>>
>> 2016-11-23 15:55 GMT+13:00 Venkata B Nagothi :
>>
>>>
>>>
>>> On Wed, Nov 23, 2016 at 1:03 PM, Patrick B
>>&
2016-11-29 15:21 GMT+13:00 David Steele :
> On 11/24/16 8:05 PM, Patrick B wrote:
>
> > hmm.. I really don't get it.
> >
> >
> >
> > If I get messages like:
> >
> > *cp: cannot stat '/walfiles/00021AF800A5': No such fil
2016-11-29 16:36 GMT+13:00 David G. Johnston :
> On Mon, Nov 28, 2016 at 8:22 PM, Patrick B
> wrote:
>
>>
>> How is that even possible?? I don't understand!
>>
>>
> https://www.postgresql.org/docs/9.2/static/warm-standby.html
>
Hi guys,
I use these queries to monitor the streaming replication:
*on master:*
select client_addr, state, sent_location, write_location, flush_location,
replay_location, sync_priority from pg_stat_replication;
*On slave:*
select now() - pg_last_xact_replay_timestamp() AS replication_delay;
Can
2016-11-30 14:02 GMT+13:00 John R Pierce :
> On 11/29/2016 3:31 PM, Patrick B wrote:
>
> I use these queries to monitor the streaming replication:
>
> *on master:*
> select client_addr, state, sent_location, write_location, flush_location,
> replay_location, sync_priority fro
2016-11-30 14:21 GMT+13:00 John R Pierce :
> On 11/29/2016 5:10 PM, Patrick B wrote:
>
>
> Yep.. once a minute or so. And yes, I need to store a history with
> timestamp.
>
> Any idea? :)
>
>
> so create a table with a timestamptz, plus all the fields you want,
>> inside the database as an extension. It uses the same
>> syntax as regular cron, but it allows you to schedule PostgreSQL commands
>> directly from the database"
>>
>> It looks like what you want.
>>
>> Walter.
>>
>> On Tue, Nov 29, 201
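John's table-plus-insert idea, sketched against the 9.2 monitoring query
above (run the INSERT once a minute from cron or similar):

CREATE TABLE replication_history (
    captured_at     timestamptz DEFAULT now(),
    client_addr     inet,
    state           text,
    sent_location   text,
    replay_location text
);

INSERT INTO replication_history (client_addr, state, sent_location, replay_location)
SELECT client_addr, state, sent_location, replay_location
FROM pg_stat_replication;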
2016-11-29 23:59 GMT+13:00 Patrick B :
>
>
> 2016-11-29 16:36 GMT+13:00 David G. Johnston :
>
>> On Mon, Nov 28, 2016 at 8:22 PM, Patrick B
>> wrote:
>>
>>>
>>> How is that even possible?? I don't understand!
&
2016-12-01 14:15 GMT+13:00 David G. Johnston :
> On Wed, Nov 30, 2016 at 6:05 PM, Patrick B
> wrote:
>
>> https://www.postgresql.org/docs/9.2/static/runtime-config-
>> replication.html
>>
>> wal_keep_segments is the parameter responsible for streaming replica
Hi guys,
I've got some database servers in USA (own data center) and also @ AWS
Japan.
*USA:*
master01
slave01 (Streaming Replication from master01 + wal_files)
slave02 (Streaming Replication from master01 + wal_files)
*Japan: (Cascading replication)*
slave03 (Streaming Replication from slave02
Hi guys,
Are the history files copied with the wal_files? Or do I have to copy them
separately?
0003.history': No such file or directory
I'm using PostgreSQL 9.2.
Cheers
Patrick
2016-12-12 12:00 GMT+13:00 Venkata B Nagothi :
>
> On Mon, Dec 12, 2016 at 7:48 AM, Patrick B
> wrote:
>
>> Hi guys,
>>
>> Are the history files copied with the wal_files? Or do I have to copy
>> them separately?
>>
>> 0003.history': No suc
2016-12-12 12:09 GMT+13:00 Patrick B :
> 2016-12-12 12:00 GMT+13:00 Venkata B Nagothi :
>
>>
>> On Mon, Dec 12, 2016 at 7:48 AM, Patrick B
>> wrote:
>>
>>> Hi guys,
>>>
>>> Are the history files copied with the wal_files? Or I have to do
Hi,
I've got this query, which I run manually once a month:
SELECT
    uuid,
    clientid,
    *
FROM logging
WHERE logtime BETWEEN '201611015' AND '201612015'
As you can see, I select a date. So in December, the date will be: *BETWEEN
'201612015' AND '201601015'*, for examp
>
> 1. Why when I run the function manually I get this error?
>>
>> select logextract(201612015, 201612015);
>>
>> ERROR: operator does not exist: timestamp without time zone >=
>> integer
>>
>> LINE 13: BETWEEN
>>
>
> The answer is above. Look at yo
2016-12-15 10:40 GMT+13:00 Adrian Klaver :
> On 12/14/2016 01:30 PM, Patrick B wrote:
>
>> 1. Why when I run the function manually I get this error?
>>
>> select logextract(201612015, 201612015);
>>
>> ERROR: operator do
2016-12-15 14:00 GMT+13:00 David G. Johnston :
> On Wed, Dec 14, 2016 at 5:12 PM, rob stone wrote:
>
>>
>> On Wed, 2016-12-14 at 17:00 -0700, David G. Johnston wrote:
>> > On Wed, Dec 14, 2016 at 4:49 PM, Patrick B
>> > wrote:
>> > > ERROR: func
2016-12-15 14:54 GMT+13:00 Lucas Possamai :
>
>
> 2016-12-15 14:34 GMT+13:00 Adrian Klaver :
>
>> On 12/14/2016 05:19 PM, Patrick B wrote:
>>
>> Reading the suggestions might help:)
>>
>> Another try:
>>
>> CREATE or REP
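The error above means logtime (a timestamp) is being compared to an integer
argument; passing real timestamps avoids it (a sketch):

SELECT *
FROM logging
WHERE logtime BETWEEN '2016-11-01'::timestamp AND '2016-12-01'::timestamp;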
Hi,
I got this scenario:
master01
-> slave01 -> slave02 -> slave03 -> slave04
As you can see, slave03 replicates from slave02 and slave04 from slave03.
I'm promoting slave03 to master, and trying to make slave04 able to connect
to its new master.
AS i'm using PostgreSQL
Hi guys,
I've got the following Query:
WITH
query_p AS (
SELECT CAST(6667176 AS
BIGINT) AS client_id),
clients AS (
SELECT
>
> 3,581 individual pokes into the heap to confirm tuple visibility and
> apply the deleted filter - that could indeed take a while.
> David J.
I see.. The deleted column is:
deleted boolean
Should I create an index for that? How could I improve this query?
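If most queries only want live rows, a partial index is a common fix (a
sketch; the table and key column are assumptions):

CREATE INDEX CONCURRENTLY idx_clients_live
    ON clients (id)
    WHERE NOT deleted;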
Does it execute as slowly when y
2017-01-11 4:05 GMT+13:00 Tomas Vondra :
> On 01/10/2017 04:05 AM, Patrick B wrote:
>
>> 3,581 individual pokes into the heap to confirm tuple visibility
>> and apply the deleted filter - that could indeed take a while.
>> David J.
>>
>&
Hi guys,
I'm using PostgreSQL 9.2 in two different servers.
server1 (Master Postgres DB server, running Postgres 9.2 / 128GB ram) -
RAID 10 Magnetic disks
server2 (Master Postgres DB server, running Postgres 9.2 / 128GB ram) - EBS
(AWS) io2 10k IOPS
When I run a query, I get this error:
ERROR:
2017-01-12 13:23 GMT+13:00 Adrian Klaver :
> On 01/11/2017 04:08 PM, Patrick B wrote:
>
>> Hi guys,
>>
>> I'm using PostgreSQL 9.2 in two different servers.
>>
>> server1 (Master Postgres DB server, running Postgres 9.2 / 128GB ram) -
>> RAID 1
2017-01-12 13:41 GMT+13:00 Adrian Klaver :
> On 01/11/2017 04:31 PM, Patrick B wrote:
>
>> 2017-01-12 13:23 GMT+13:00 Adrian Klaver > <mailto:adrian.kla...@aklaver.com>>:
>>
>> On 01/11/2017 04:08 PM, Patrick B wrote:
>>
>> Hi guy
Hi guys,
I've got a slow query, running at 25 seconds.
-> Bitmap Heap Scan on ja_notes r_1103088
(cost=234300.55..1254978.62 rows=553275 width=101) (actual
time=1423.411..10572.549 rows=475646 loops=1)
Recheck Cond: (n_type = ANY
('{note,
2017-01-12 16:48 GMT+13:00 Andreas Joseph Krogh :
> På torsdag 12. januar 2017 kl. 03:15:59, skrev Patrick B <
> patrickbake...@gmail.com>:
>
> Hi guys,
>
> I've got a slow query, running at 25 seconds.
>
>
> -> Bitmap Heap Scan
Hi guys,
I've got a custom data type column... The query I'm using is looking over
9 million rows.
I've created a BTREE index but it didn't help with the speed. Is there any
special index for custom types?
Thanks
Patrick
Hi guys,
I've got a table which has id and created date columns.
I want to get the average number of rows inserted per month.
How can I get this data?
This query is not working, as it is showing me the same data in both columns.
select created_date,
AVG(id)
OVER(ORDER BY created_date) AS avr from test
http:
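Counting per month first and then averaging gives the intended number (a
sketch against the same test table):

SELECT avg(rows_per_month) AS avg_inserted_per_month
FROM (
    SELECT date_trunc('month', created_date) AS month,
           count(*) AS rows_per_month
    FROM test
    GROUP BY 1
) m;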
Hi guys,
I get these messages at least once a day in my Prod environment:
> FATAL: remaining connection slots are reserved for non-replication
> superuser connections
I do not have a DB pooler and my max_connections is 200. However, max
connections for my PHP Application is 120.
My server has
2017-02-08 16:27 GMT+13:00 Tatsuo Ishii :
> > Something is using too many connections.
> >
> > I may be wrong but I'm unaware of a limit on connections from PHP except
> > when you are using persistent connections. Since each PHP script is it's
> > own process, it can create one or more connection
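One way to see what is actually holding the slots when the error fires
(works on 9.2):

SELECT state, count(*) AS connections
FROM pg_stat_activity
GROUP BY state
ORDER BY connections DESC;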
Hi guys
I just want to understand the locks in a DB server:
[image: inline image 1]
Access share = Does that mean queries were waiting because an
update/delete/insert was happening?
I'm asking because I got a very big spike with > 30 seconds web response
time.
Running PG 9.3
Thanks!
Patrick
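For the archives: ACCESS SHARE is the lock plain SELECTs take; it only
conflicts with ACCESS EXCLUSIVE (ALTER TABLE, DROP, etc.), not with ordinary
inserts/updates/deletes. To see which lock requests are actually waiting (a
sketch):

SELECT a.pid, l.mode, l.granted, a.query
FROM pg_locks l
JOIN pg_stat_activity a ON a.pid = l.pid
WHERE NOT l.granted;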
2017-02-10 18:18 GMT+13:00 John R Pierce :
> On 2/9/2017 9:16 PM, John R Pierce wrote:
>
>> that spike in your graph suggests you had 8000 concurrent SELECT
>> operations...
>>
>
> errr, 7000, still way too many.
>
Thanks a lot John!! Got it
Patrick
Hi all,
I'm simply doing an insert and I want to get the inserted id with a select.
I'm doing this all in the same transaction.
Example:
BEGIN;
INSERT INTO test (id,name,description) VALUES (default,'test 1','testing
insert');
SELECT * FROM test ORDER BY id DESC; -- I don't see the inserted ro
2017-02-15 12:19 GMT+13:00 Tom Lane :
> Patrick B writes:
> > I'm simply doing an insert and I want to get the inserted id with a
> select.
> > I'm doing this all in the same transactions.
>
> > Example:
> > BEGIN;
> > INSERT INTO test (id,na
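The standard answer is RETURNING, which hands back the generated id as part
of the INSERT itself:

BEGIN;
INSERT INTO test (id, name, description)
VALUES (default, 'test 1', 'testing insert')
RETURNING id;
COMMIT;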
Hi all,
I just got a quick question about warm-cache. I'm using PG 9.2.
When I execute this statement soon after I start/restart the database:
explain select id from test where id = 124;
The runtime is 40ms.
Then, if I execute this statement just after the above one;
explain analyze select i
2017-02-16 13:25 GMT+13:00 Steve Atkins :
>
> > On Feb 15, 2017, at 3:58 PM, Patrick B wrote:
> >
> > Hi all,
> >
> > I just got a quick question about warm-cache. I'm using PG 9.2.
> >
> > When I execute this statement soon after I start/restart
2017-02-16 14:08 GMT+13:00 Tom Lane :
> Patrick B writes:
> > For the first time I ran the query, it took >10 seconds. Now it is taking
> > less than a second.
> > How can I clear for good the cache? So i can have a real idea of how long
> > the query takes t
2017-02-16 14:57 GMT+13:00 Patrick B :
>
> I've got two different scenarios:
>
> Production database server > PG 9.2
>
>- I ran one single time, in a slave server that no queries go to that
>server, and it took >10 seconds.
>
> Test database server >
Hi all,
how can I update a row with the newest id from another table if it exists
somewhere else?
Example:
*table test1*
- id (primary key)
- id_user_bill
- clientid
*table test2*
- item_id
- userid (there are duplicated rows here)
- clientid
- id (primary key)
-- finding th
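An UPDATE ... FROM against the newest matching row might look like this (a
sketch; which column receives the id is an assumption):

UPDATE test1 t1
SET id_user_bill = t2.newest_id
FROM (
    SELECT clientid, max(id) AS newest_id
    FROM test2
    GROUP BY clientid
) t2
WHERE t2.clientid = t1.clientid;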
Hi guys,
I've got a lot of bloated indexes on my 4TB database.
Let's take this example:
Table: seg
Index: ix_filter_by_tree
Times_used: 1018082183
Table_size: 18 GB -- wrong. The table is mostly in its pg_toast table. Its
real size is 2 TB.
Index_size: 17 GB
Num_writes 16245023
Index definition: CREATE
2017-02-22 10:59 GMT+13:00 Adrian Klaver :
> On 02/21/2017 01:44 PM, Patrick B wrote:
> > Hi guys,
> >
> > I've got a lot of bloated indexes on my 4TB database.
> >
> > Let's take this example:
> >
> > Table: seg
> > Index: ix_fil
2017-02-22 11:11 GMT+13:00 Patrick B :
> 2017-02-22 10:59 GMT+13:00 Adrian Klaver :
>
>> On 02/21/2017 01:44 PM, Patrick B wrote:
>> > Hi guys,
>> >
>> > I've got a lot of bloated indexes on my 4TB database.
>> >
>> > Le
2017-02-22 13:10 GMT+13:00 Adrian Klaver :
> On 02/21/2017 03:41 PM, Patrick B wrote:
> > 2017-02-22 11:11 GMT+13:00 Patrick B > <mailto:patrickbake...@gmail.com>>:
> >
> > 2017-02-22 10:59 GMT+13:00 Adrian Klaver > <mailto:adrian.kla...@aklaver.
2017-02-23 11:46 GMT+13:00 Jeff Janes :
> On Tue, Feb 21, 2017 at 1:44 PM, Patrick B
> wrote:
>
>> Hi guys,
>>
>> I've got a lot of bloated indexes on my 4TB database.
>>
>> Let's take this example:
>>
>> Table: seg
>> Inde
2017-02-25 17:53 GMT+13:00 Patrick B :
>
>
> 2017-02-23 11:46 GMT+13:00 Jeff Janes :
>
>> On Tue, Feb 21, 2017 at 1:44 PM, Patrick B
>> wrote:
>>
>>> Hi guys,
>>>
>>> I've got a lot of bloated indexes on my 4TB database.
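A common way to rebuild a bloated index without blocking writers (a sketch;
the indexed column is an assumption since the definition above is cut off):

CREATE INDEX CONCURRENTLY ix_filter_by_tree_new ON seg (tree_id);
DROP INDEX CONCURRENTLY ix_filter_by_tree;   -- DROP ... CONCURRENTLY exists in 9.2
ALTER INDEX ix_filter_by_tree_new RENAME TO ix_filter_by_tree;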
Hi all.
I have a database which is 4TB big. We currently store binary data in a
bytea data type column (seg_data BYTEA). The column is behind binary_schema
and the file types stored are: pdf, jpg, png.
*Getting the schema binary_schema size:*
SELECT pg_size_pretty(pg_database_size('live_databa
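Note that pg_database_size covers the whole database; a per-schema size can
be summed from pg_class instead (pg_total_relation_size includes TOAST and
indexes):

SELECT pg_size_pretty(sum(pg_total_relation_size(c.oid))::bigint)
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname = 'binary_schema'
  AND c.relkind = 'r';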
Hi all.
I'm testing GIN indexes on a wildcard search.
Basically I've created this on my test environment:
create table test_gin_index (
> name_first CHARACTER VARYING(80) DEFAULT ''::CHARACTER VARYING,
> name_last CHARACTER VARYING(80) DEFAULT ''::CHARACTER VARYING
> );
insert into test_gin_ind
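For %word% searches the GIN index needs the trigram opclass from pg_trgm; a
plain GIN or btree index on varchar won't serve LIKE (a sketch on the test
table above):

CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE INDEX idx_gin_name_first
    ON test_gin_index
    USING gin (name_first gin_trgm_ops);

SELECT * FROM test_gin_index WHERE name_first ILIKE '%patrick%';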
Hi guys. How can I count using 'CASE WHEN'?
Example:
SELECT
CASE
WHEN AND c.regdate > EXTRACT(epoch FROM (now() - INTERVAL '14 day'))
THEN 'trial'
WHEN last_pay > EXTRACT(epoch FROM (now() - INTERVAL '37 day'))
THEN 'paying'
END as account_status,
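To count per bucket, wrap the CASE and group on it (a sketch built from the
fragment above; the table name is assumed and the stray AND is dropped):

SELECT account_status, count(*)
FROM (
    SELECT CASE
               WHEN c.regdate  > extract(epoch FROM (now() - interval '14 day')) THEN 'trial'
               WHEN c.last_pay > extract(epoch FROM (now() - interval '37 day')) THEN 'paying'
           END AS account_status
    FROM clients c
) s
GROUP BY account_status;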
2017-03-09 23:15 GMT+13:00 vinny :
> On 2017-03-09 05:27, Patrick B wrote:
>
>> Hi guys. How can I count using 'CASE WHEN'?
>>
>> Example:
>>
>> SELECT
>> CASE
>&
2017-03-10 10:17 GMT+13:00 Yasin Sari :
> if you want to see account_status and the count() - try this:
>
> SELECT
>
> CASE
>
> WHEN AND c.regdate > EXTRACT(epoch FROM (now() - INTERVAL '14
> day'))
>
> THEN 'trial'
>
> WHEN last_pay > EXTRACT(epoch FROM (now() - INTERVAL '37 d