there. I'm worried
about replication lag in this case.
Is there another way to do this? I'm using PG 9.1.
[1]
https://stackoverflow.com/questions/10135754/how-to-make-a-foreign-key-with-a-constraint-on-the-referenced-table-in-postgresq
Thanks!
Patrick
2017-07-07 22:32 GMT+12:00 Thomas Markus :
> Hi,
>
> Am 07.07.17 um 12:16 schrieb Patrick B:
>
> Hi guys!
>
> I've got 2 tables, and I need to combine data from both.
>
> test1:
>
> WITH account_status AS (
> select
> CASE
> WHEN regda
single query. How
can I do that?
I'm using Postgres 9.3.
Thanks!
Patrick
2017-06-19 13:19 GMT+12:00 Melvin Davidson :
>
>
> On Sun, Jun 18, 2017 at 9:02 PM, Patrick B
> wrote:
>
>> Hi guys.
>>
>> I just want to understand the effective_io_concurrency setting better.
>>
>> My current master database server has 16 vCPUs and I
Hi guys.
I just want to understand the effective_io_concurrency setting better.
My current master database server has 16 vCPUs and I
use effective_io_concurrency = 0.
What would be the benefits of increasing that number? Also, do you have
any recommendations?
I'm using PG 9.2, and the official documentation
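For what it's worth, effective_io_concurrency only affects bitmap heap scan prefetching, and it can be changed per session, which makes experimenting cheap. A minimal sketch (the value 16 and the query are assumptions, not recommendations):

-- try a higher value for the current session only;
-- 16 is an arbitrary test value, often tuned to the number of drives
SET effective_io_concurrency = 16;
-- re-run a bitmap-scan-heavy query and compare timings (query is hypothetical)
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM some_table WHERE id = ANY ('{1,2,3}');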
2017-06-16 10:35 GMT+12:00 David G. Johnston :
> On Thu, Jun 15, 2017 at 3:19 PM, Patrick B
> wrote:
>
>> 2017-05-29 19:27 GMT+12:00 Albe Laurenz :
>>
>>> Patrick B wrote:
>>> > I am running a background task on my DB, which will copy data from
2017-05-29 19:27 GMT+12:00 Albe Laurenz :
> Patrick B wrote:
> > I am running a background task on my DB, which will copy data from
> tableA to tableB. For
> > that, I'm writing a PL/PGSQL function which basically needs to do the
> following:
> >
> >
>
&fiddle=15c571caa36876f00a0a2eaace703a2b
How can I extract the tableb.id value from the tablea.type_m column?
Thanks
Patrick
2017-05-31 16:34 GMT+12:00 David G. Johnston :
> On Tue, May 30, 2017 at 9:17 PM, Patrick B
> wrote:
>
>>
>> regexp_matches(name, '((main|medium).name/\d+.\d+)') as filename,
>>
>>
>> Example here: http://sqlfiddle.com/#!15/5f4f0/4
>>
on that only gets me the file
name after the main.name/, for example.
In the example I gave there are two problems:
1. I can only get the jpg file name
2. I get not only the file name but the rest as well, which is not
what I need
How can I do that?
Thanks!
Patrick
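A hedged sketch of one way to get just the file name: in Postgres, substring(text FROM pattern) returns the first parenthesized capture group, so the capture can be limited to the part after main.name/ or medium.name/ (the name column is from the fiddle; the table name and the exact pattern are assumptions):

-- capture only what follows main.name/ or medium.name/, up to the end of the path
SELECT substring(name FROM '(?:main|medium)\.name/([^/]+)$') AS filename
FROM images;  -- table name is an assumption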
Hi guys,
I am running a background task on my DB, which will copy data from tableA
to tableB. For that, I'm writing a PL/PGSQL function which basically needs
to do the following:
1. Select the data from tableA
2. The limit will be set when calling the function
3. Insert the selected data into tableB
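A minimal sketch of such a function, assuming tablea and tableb have compatible columns; the function name, parameter name, and the id ordering key are mine, and a real version would also need to mark or remove rows already copied:

CREATE OR REPLACE FUNCTION copy_batch(p_limit integer) RETURNS integer AS $$
DECLARE
    moved integer;
BEGIN
    -- select a batch from tablea and insert it into tableb in one statement
    INSERT INTO tableb
    SELECT * FROM tablea
    ORDER BY id          -- assumed key; keeps batches deterministic
    LIMIT p_limit;
    GET DIAGNOSTICS moved = ROW_COUNT;
    RETURN moved;        -- number of rows copied
END;
$$ LANGUAGE plpgsql;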
view, it's really slow. I can't even run explain
analyze (it's been 1h and the query still hasn't finished).
However, when running both SELECTs individually with explain analyze, each is fast.
What should I do in this case? Why is it taking so long? I assume it's
because the UNION has to look for duplicates?
Thanks
Patrick.
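If that guess is right, the usual fix is UNION ALL: plain UNION must deduplicate the combined result (a sort or hash over everything), while UNION ALL simply concatenates. A hedged sketch, with view and column names assumed:

CREATE OR REPLACE VIEW combined AS
SELECT id, name FROM tablea
UNION ALL               -- no duplicate elimination, unlike plain UNION
SELECT id, name FROM tableb;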
2017-05-15 16:10 GMT+12:00 David G. Johnston :
> On Sunday, May 14, 2017, Patrick B wrote:
>
>>
>> Demo: http://dbfiddle.uk/?rdbms=postgres_9.6&fiddle=3c3a3f870eb4d002c5b4200042b25669
>> <http://dbfiddle.uk/?rdbms=postgres_9.6&fiddle=c2fbb7da5a2397f7cda5126ed239c080>
2017-05-15 15:20 GMT+12:00 Patrick B :
> Hi guys,
>
> Demo: http://dbfiddle.uk/?rdbms=postgres_9.6&fiddle=3c3a3f870eb4d002c5b4200042b25669
> <http://dbfiddle.uk/?rdbms=postgres_9.6&fiddle=c2fbb7da5a2397f7cda5126ed239c080>
>
>
FYI - NEW LINK
http://dbf
ng are:
5 /testfile/client/10/attachment/1000/master/ 10
7 /testfile/client/10/attachment/unassigned/file/1001/master 10
8 /testfile/client/10/attachment/unassigned/file/1002/master 10
What am I doing wrong?
Thanks
Patrick.
modified_date | timestamp without time zone | default statement_timestamp()
> created_date | timestamp without time zone | default statement_timestamp()
> clientid | bigint | not null
> f_id | bigint |
So, would the best way be to use UNION ALL?
Thanks
Patrick.
in a SELECT? Can you please provide some examples?
I'm using PostgreSQL 9.1.
Thanks
Patrick
2017-04-09 20:18 GMT+12:00 Patrick B :
> Hi guys,
>
> I've got this select:
> SELECT split_part(n.node_full_path::text, '/'::text, 8)::integer AS id,
>
> However, I won't always get the 8th field, and because of that I may
> get no data sometimes.
>
/attachment/21314134/file/12312312312/small/photo.jpg
Note that sometimes the path only goes up to the 7th split part; I won't
always get the 8th.
How can I tell the SELECT to return NULL if there is no 8th field?
Thanks!
Patrick
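For what it's worth, split_part() returns an empty string rather than failing when the requested field doesn't exist, so NULLIF can turn that into NULL before the cast. A sketch based on the SELECT quoted above (the table name is an assumption):

SELECT NULLIF(split_part(n.node_full_path::text, '/', 8), '')::integer AS id
FROM nodes n;  -- '' becomes NULL, and NULL::integer stays NULL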
2017-04-07 14:19 GMT+12:00 David G. Johnston :
> On Thu, Apr 6, 2017 at 7:15 PM, Patrick B
> wrote:
>
>>
>> David,
>> That won't work.
>>
>
> Actually, it works fine, you just keep moving the under-specified problem
> space.
>
> I'd s
2017-04-07 14:08 GMT+12:00 David G. Johnston :
> On Thu, Apr 6, 2017 at 6:33 PM, Patrick B
> wrote:
>
>> When actually I just want the 'main'
>>
>
> SELECT * FROM tbl WHERE path_name ~ '/main$' ?
>
> David J.
>
>
David,
2017-04-06 18:10 GMT+12:00 Patrick B :
>
> 2017-04-06 17:35 GMT+12:00 Arjen Nienhuis :
>
>>
>>
>> On Apr 6, 2017 05:57, "Patrick B" wrote:
>>
>> Hi guys,
>>
>> i've got this column:
>>
>> path_name character varying(255)
2017-04-06 17:35 GMT+12:00 Arjen Nienhuis :
>
>
> On Apr 6, 2017 05:57, "Patrick B" wrote:
>
> Hi guys,
>
> i've got this column:
>
> path_name character varying(255)
>>
>
> I store the full S3 bucket path for the attachments of my application
I want to do a select where path_name contains only 'main' and nothing else.
Maybe using regexp_matches, but then how do I put it into a WHERE clause?
Thanks!
Patrick.
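As David's reply above shows, a regex can go straight into the WHERE clause with the ~ operator; regexp_matches isn't needed for filtering. A minimal sketch (tbl is a placeholder):

SELECT * FROM tbl WHERE path_name ~ '/main$';   -- regex match as a boolean filter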
2017-04-03 13:23 GMT+12:00 Patrick B :
> Hi guys.
>
> I'm thinking about increasing the query cache for my PG 9.2 server.
> I've got a project happening, which is doing lots and lots of writes and
> reads during the night, and in the morning I see PG cache warming up
and read tasks.
So my environment is very slow for a few hours, until the queries used on
a daily basis are in the cache.
Question:
Should I increase effective_cache_size or shared_buffers? What's the
difference between them?
Thanks
Patrick
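In short: shared_buffers is memory PostgreSQL itself allocates for its buffer cache (changing it requires a restart), while effective_cache_size allocates nothing; it is only a planner hint about how much cache (OS cache plus shared_buffers) a query can expect. A sketch for experimenting; the 96GB figure is an assumption, not a recommendation:

SHOW shared_buffers;                 -- actual buffer cache; restart required to change
SHOW effective_cache_size;           -- planner hint only, no memory allocated
SET effective_cache_size = '96GB';   -- can be changed per session for testing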
2017-03-20 13:27 GMT+13:00 Melvin Davidson :
>
>
> On Sun, Mar 19, 2017 at 8:16 PM, Patrick B
> wrote:
>
>> Hi guys,
>>
>> I've got a column 'type_note' on a new table that it's being designed:
>>
>> type_note varchar(32) N
Hi guys,
I've got a column 'type_note' on a new table that it's being designed:
type_note varchar(32) NOT NULL;
On that column, there will be three different data:
1. yes
2. no
3. maybe
I want to create a FK, but only when the value in that column is 'maybe'.
How can I do that? Thanks!
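PostgreSQL has no conditional foreign keys, so (as in the Stack Overflow link cited earlier in this digest) one common workaround is a separate nullable FK column tied to type_note by a CHECK constraint. A hedged sketch; every name except type_note is an assumption:

CREATE TABLE other_table (id integer PRIMARY KEY);  -- hypothetical referenced table

CREATE TABLE notes (
    id        serial PRIMARY KEY,
    type_note varchar(32) NOT NULL,
    maybe_id  integer REFERENCES other_table (id),  -- FK enforced only when set
    -- allow maybe_id to be set only on 'maybe' rows
    CHECK (type_note = 'maybe' OR maybe_id IS NULL)
);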
o test, but the query is not using it.
> create index on log using gin (description gin_trgm_ops)
Can you guys help to improve that part please?
Patrick.
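For reference, a trigram GIN index needs the USING gin clause and the pg_trgm extension, and it only helps LIKE/ILIKE '%...%'-style searches. A sketch against the log table from the snippet (the search term is hypothetical):

CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE INDEX log_description_trgm_idx ON log USING gin (description gin_trgm_ops);
-- check whether the planner now picks the index:
EXPLAIN ANALYZE SELECT * FROM log WHERE description ILIKE '%error%';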
2017-03-10 10:17 GMT+13:00 Yasin Sari :
> if you want see account_status and the count()- try this:
>
> SELECT
>   CASE
>     WHEN AND c.regdate > EXTRACT(epoch FROM (now() - INTERVAL '14 day'))
>       THEN 'trial'
>     WHEN last_pay > EXTRACT(epoch FROM (now() - INTERVAL '37 d
2017-03-09 23:15 GMT+13:00 vinny :
> On 2017-03-09 05:27, Patrick B wrote:
>
>> Hi guys. How can I count using 'CASE WHEN'?
>>
>> Example:
>>
>> SELECT
>>   CASE
and 'paying' customers the query returns.
Can you guys please advise how to do it?
Thanks
Patrick
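One way to count both categories in a single pass: count() ignores NULLs, and a CASE with no ELSE yields NULL, so each branch counts only its own rows. A hedged sketch; the table, column, and status values are assumptions from the thread:

SELECT count(CASE WHEN status = 'trial'  THEN 1 END) AS trial_count,
       count(CASE WHEN status = 'paying' THEN 1 END) AS paying_count
FROM customers;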
-> Bitmap Index Scan on test_gin_index_name_first_idx
(cost=0.00..92.00 rows=1 width=0) (actual time=0.010..0.010 rows=3 loops=1)
Index Cond: ((name_first)::text ~~* '%on%'::text)
Planning time: 0.122 ms
Execution time: 0.042 ms
(8 rows)
Why is the seq scan faster than the index scan? This is a test environment,
but I'm running the same test in a production environment and there, too,
the seq scan is cheaper than the index.
Thanks
Patrick
slaves, will I need to run the vacuum full on them too?
3 - [2] vacuum full needs free disk space of the same size as the target
table. It locks the table (the table cannot be used while vacuum full runs),
and a REINDEX might be needed afterwards. Am I right?
Thanks in advance for your help.
Patrick
[1] http
2017-02-25 17:53 GMT+13:00 Patrick B :
>
>
> 2017-02-23 11:46 GMT+13:00 Jeff Janes :
>
>> On Tue, Feb 21, 2017 at 1:44 PM, Patrick B
>> wrote:
>>
>>> Hi guys,
>>>
>>> I've got a lot of bloat indexes on my 4TB database.
2017-02-23 11:46 GMT+13:00 Jeff Janes :
> On Tue, Feb 21, 2017 at 1:44 PM, Patrick B
> wrote:
>
>> Hi guys,
>>
>> I've got a lot of bloat indexes on my 4TB database.
>>
>> Let's take this example:
>>
>> Table: seg
>> Inde
2017-02-22 13:10 GMT+13:00 Adrian Klaver :
> On 02/21/2017 03:41 PM, Patrick B wrote:
> > 2017-02-22 11:11 GMT+13:00 Patrick B <patrickbake...@gmail.com>:
> >
> > 2017-02-22 10:59 GMT+13:00 Adrian Klaver <adrian.kla...@aklaver.
2017-02-22 11:11 GMT+13:00 Patrick B :
> 2017-02-22 10:59 GMT+13:00 Adrian Klaver :
>
>> On 02/21/2017 01:44 PM, Patrick B wrote:
>> > Hi guys,
>> >
>> > I've got a lot of bloat indexes on my 4TB database.
>> >
>> > Le
2017-02-22 10:59 GMT+13:00 Adrian Klaver :
> On 02/21/2017 01:44 PM, Patrick B wrote:
> > Hi guys,
> >
> > I've got a lot of bloat indexes on my 4TB database.
> >
> > Let's take this example:
> >
> > Table: seg
> > Index: ix_fil
on: CREATE INDEX ix_filter_by_tree ON seg USING btree
(full_path varchar_pattern_ops) WHERE (full_path IS NOT NULL)
What is the real impact of a bloated index? If I reindex it, will queries
be faster?
Thanks
Patrick
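On the rebuild question: a plain REINDEX takes a lock that blocks writes, so on 9.2 a common pattern is to build a fresh index concurrently and swap it in. A sketch using the index definition quoted above:

-- build a clean copy without blocking normal reads/writes
CREATE INDEX CONCURRENTLY ix_filter_by_tree_new ON seg USING btree
    (full_path varchar_pattern_ops) WHERE (full_path IS NOT NULL);
DROP INDEX CONCURRENTLY ix_filter_by_tree;          -- DROP ... CONCURRENTLY needs 9.2+
ALTER INDEX ix_filter_by_tree_new RENAME TO ix_filter_by_tree;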
Thanks!
Patrick.
2017-02-16 14:57 GMT+13:00 Patrick B :
>
> I've got two different scenarios:
>
> Production database server > PG 9.2
>
>- I ran it a single time, on a slave server that receives no queries,
>and it took >10 seconds.
>
> Test database server >
2017-02-16 14:08 GMT+13:00 Tom Lane :
> Patrick B writes:
> > For the first time I ran the query, it took >10 seconds. Now it is taking
> > less than a second.
> > How can I clear for good the cache? So i can have a real idea of how long
> > the query takes t
2017-02-16 13:25 GMT+13:00 Steve Atkins :
>
> > On Feb 15, 2017, at 3:58 PM, Patrick B wrote:
> >
> > Hi all,
> >
> > I just got a quick question about warm-cache. I'm using PG 9.2.
> >
> > When I execute this statement soon after I start/restart
How can I clear the cache to get the real runtime for that statement?
Thanks
Patrick
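Short of restarting PostgreSQL and dropping the OS page cache, a truly cold run is hard to get; an alternative is to measure how much of a run was served from cache. A sketch (the query itself is a placeholder):

EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM test;  -- hypothetical query
-- in the output, "shared hit" blocks came from PostgreSQL's buffer cache,
-- while "read" blocks came from the OS cache or disk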
2017-02-15 12:19 GMT+13:00 Tom Lane :
> Patrick B writes:
> > I'm simply doing an insert and I want to get the inserted id with a
> select.
> > I'm doing this all in the same transactions.
>
> > Example:
> > BEGIN;
> > INSERT INTO test (id,na
C; -- I don't see the inserted row here
COMMIT;
I can only see the inserted row if I do the SELECT outside of this
transaction.
How could I get it inside the transaction?
Thanks!
Patrick
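The usual idiom here is RETURNING, which hands back the generated id from the INSERT itself, with no separate SELECT needed. A minimal sketch against the test table from the snippet (the column values are assumptions):

BEGIN;
INSERT INTO test (name) VALUES ('foo') RETURNING id;  -- returns the new id immediately
COMMIT;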
2017-02-10 18:18 GMT+13:00 John R Pierce :
> On 2/9/2017 9:16 PM, John R Pierce wrote:
>
>> that spike in your graph suggests you had 8000 concurrent SELECT
>> operations...
>>
>
> errr, 7000, still way too many.
>
Thanks a lot John!! Got it
PAtrick
Hi guys
I just wanna understand the locks in a DB server:
[image: inline image 1]
Access share = Does that mean queries were waiting because an
update/delete/insert was happening?
I'm asking because I got a very big spike with > 30 seconds web response
time.
Running PG 9.3
Thanks!
Patrick
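For what it's worth, ACCESS SHARE is the lock plain SELECTs take, and it only conflicts with ACCESS EXCLUSIVE (e.g. ALTER TABLE). To see what is actually waiting, something like this sketch against pg_locks:

-- sessions currently waiting for a lock, and the relation involved
SELECT pid, locktype, mode, relation::regclass
FROM pg_locks
WHERE NOT granted;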
ipt are the only things that
> can
> > connect - no scripts, backups, etc. are consuming connections.
>
> You can disable persistent connection feature of pg_pconnect by
> tweaking php.ini.
>
>
> @Steven, yes, my developer said we are using persistent connections.
However, he checked and he is using pg_connect instead of pg_pconnect.
Patrick
er has 128GB and SSD 10K IOPS disks (Amazon EBS).
Can you guys please outline the steps to troubleshoot this?
Interestingly, I didn't see any IO/CPU limitation on my server.
I'm currently running a Postgres 9.2 - one master and one slave streaming
replication.
Thanks
Patrick
http://sqlfiddle.com/#!15/3289b/1
Thanks!
Patrick
Hi guys,
I've got a custom data type column... The query I'm using looks over
9 million rows.
I've created a BTREE index but it didn't help with speed. Is there any
special index for custom types?
Thanks
Patrick
2017-01-12 16:48 GMT+13:00 Andreas Joseph Krogh :
> På torsdag 12. januar 2017 kl. 03:15:59, skrev Patrick B <
> patrickbake...@gmail.com>:
>
> Hi guys,
>
> I've got a slow query, running at 25 seconds.
>
>
> -> Bitmap Heap Scan
Hi guys,
I've got a slow query, running at 25 seconds.
-> Bitmap Heap Scan on ja_notes r_1103088
(cost=234300.55..1254978.62 rows=553275 width=101) (actual
time=1423.411..10572.549 rows=475646 loops=1)
Recheck Cond: (n_type = ANY
('{note,
2017-01-12 13:41 GMT+13:00 Adrian Klaver :
> On 01/11/2017 04:31 PM, Patrick B wrote:
>
>> 2017-01-12 13:23 GMT+13:00 Adrian Klaver > <mailto:adrian.kla...@aklaver.com>>:
>>
>> On 01/11/2017 04:08 PM, Patrick B wrote:
>>
>> Hi guy
2017-01-12 13:23 GMT+13:00 Adrian Klaver :
> On 01/11/2017 04:08 PM, Patrick B wrote:
>
>> Hi guys,
>>
>> I'm using PostgreSQL 9.2 in two different servers.
>>
>> server1 (Master Postgres DB server, running Postgres 9.2 / 128GB ram) -
>> RAID 1
ERROR: canceling statement due to statement timeout
statement_timeout is 0 in both servers.
However, on server1 I am able to run the query. Only on server2 do I get
that error.
Why, if it is the same DB?
Patrick
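One hedged guess worth checking: statement_timeout can be overridden per database or per role, so SHOW on its own may not tell the whole story. A sketch:

SHOW statement_timeout;              -- value for the current session
SELECT * FROM pg_db_role_setting;    -- per-database / per-role overrides, if any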
2017-01-11 4:05 GMT+13:00 Tomas Vondra :
> On 01/10/2017 04:05 AM, Patrick B wrote:
>
>> 3,581 individual pokes into the heap to confirm tuple visibility
>> and apply the deleted filter - that could indeed take a while.
>> David J.
>>
>&
ake.
Would an Index be sufficient to solve the problem?
Patrick
g(80) | default ''::character varying
company | character varying(255) | default ''::character varying
*Index clientid_customers:*
CREATE INDEX clientid_customers ON customers (clientid);
Thanks!
Patrick
4 with the new history file, and the
message that I get is:
FATAL: could not receive data from WAL stream: 00040001234
FATAL: requested WAL segment has already been removed
Why does that happen? I can't find 00040001234 even on
slave03 (the new master)! wal_keep_segments is 500.
Thanks
Patrick
2016-12-15 14:54 GMT+13:00 Lucas Possamai :
>
>
> 2016-12-15 14:34 GMT+13:00 Adrian Klaver :
>
>> On 12/14/2016 05:19 PM, Patrick B wrote:
>>
>> Reading the suggestions might help:)
>>
>> Another try:
>>
>> CREATE or REP
2016-12-15 14:00 GMT+13:00 David G. Johnston :
> On Wed, Dec 14, 2016 at 5:12 PM, rob stone wrote:
>
>>
>> On Wed, 2016-12-14 at 17:00 -0700, David G. Johnston wrote:
>> > On Wed, Dec 14, 2016 at 4:49 PM, Patrick B
>> > wrote:
>> > > ERROR: func
2016-12-15 10:40 GMT+13:00 Adrian Klaver :
> On 12/14/2016 01:30 PM, Patrick B wrote:
>
>> 1. Why when I run the function manually I get this error?
>>
>> select logextract(201612015, 201612015);
>>
>> ERROR: operator do
>
> 1. Why when I run the function manually I get this error?
>>
>> select logextract(201612015, 201612015);
>>
>> ERROR: operator does not exist: timestamp without time zone >=
>> integer
>>
>> LINE 13: BETWEEN
>>
>
> The answer is above. Look at yo
But what should I use instead?
2. To call the function, I have to log in to postgres and then run: select
logextract(201612015, 201612015);
How can I run it from cron, given that the dates will be different every time?
Thanks
Patrick
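One hedged option: compute the date argument inside SQL, so the command cron runs never changes (the YYYYMMDD-style integer format is inferred from the example above and may be wrong):

-- the cron entry can run this fixed statement, e.g. via psql -c or psql -f
SELECT logextract(to_char(current_date, 'YYYYMMDD')::int,
                  to_char(current_date, 'YYYYMMDD')::int);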
2016-12-12 12:09 GMT+13:00 Patrick B :
> 2016-12-12 12:00 GMT+13:00 Venkata B Nagothi :
>
>>
>> On Mon, Dec 12, 2016 at 7:48 AM, Patrick B
>> wrote:
>>
>>> Hi guys,
>>>
>>> Are the history files copied with the wal_files? Or I have to do
2016-12-12 12:00 GMT+13:00 Venkata B Nagothi :
>
> On Mon, Dec 12, 2016 at 7:48 AM, Patrick B
> wrote:
>
>> Hi guys,
>>
>> Are the history files copied with the wal_files? Or do I have to do it
>> separately?
>>
>> 0003.history': No suc
Hi guys,
Are the history files copied with the wal_files? Or do I have to do it
separately?
0003.history': No such file or directory
I'm using PostgreSQL 9.2.
Cheers
Patrick
d anything.
SCP and FTP (big files) between those servers are really fast, +1.0MB/s.
I'm using PostgreSQL 9.2.14
Thanks!
Patrick.
2016-12-01 14:15 GMT+13:00 David G. Johnston :
> On Wed, Nov 30, 2016 at 6:05 PM, Patrick B
> wrote:
>
>> https://www.postgresql.org/docs/9.2/static/runtime-config-
>> replication.html
>>
>> wal_keep_segments is the parameter responsible for streaming replica
2016-11-29 23:59 GMT+13:00 Patrick B :
>
>
> 2016-11-29 16:36 GMT+13:00 David G. Johnston :
>
>> On Mon, Nov 28, 2016 at 8:22 PM, Patrick B
>> wrote:
>>
>>>
>>> How is that even possible?? I don't understand!
inside the database as an extension. It uses the same
>> syntax as regular cron, but it allows you to schedule PostgreSQL commands
>> directly from the database"
>>
>> It looks like what you want.
>>
>> Walter.
>>
>> On Tue, Nov 29, 201
2016-11-30 14:21 GMT+13:00 John R Pierce :
> On 11/29/2016 5:10 PM, Patrick B wrote:
>
>
> Yep.. once a minute or so. And yes, I need to store a history with
> timestamp.
>
> Any idea? :)
>
>
> so create a table with a timestamptz, plus all the fields you want,
2016-11-30 14:02 GMT+13:00 John R Pierce :
> On 11/29/2016 3:31 PM, Patrick B wrote:
>
> I use these queries to monitor the streaming replication:
>
> *on master:*
> select client_addr, state, sent_location, write_location, flush_location,
> replay_location, sync_priority from pg_stat_replication;
Can I create a table to store that data?
I also need the data to be constantly inserted into this table. What would
be the best way to do it?
Patrick
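A minimal sketch of that, assuming the 9.2 pg_stat_replication columns from the query above; the table name is mine, and something external (cron, for instance) has to run the INSERT on a schedule:

-- create the history table from a first snapshot
CREATE TABLE replication_history AS
SELECT now() AS captured_at, client_addr, state, sent_location,
       write_location, flush_location, replay_location, sync_priority
FROM pg_stat_replication;

-- run this periodically to append a new snapshot
INSERT INTO replication_history
SELECT now(), client_addr, state, sent_location, write_location,
       flush_location, replay_location, sync_priority
FROM pg_stat_replication;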
2016-11-29 16:36 GMT+13:00 David G. Johnston :
> On Mon, Nov 28, 2016 at 8:22 PM, Patrick B
> wrote:
>
>>
>> How is that even possible?? I don't understand!
>>
>>
> https://www.postgresql.org/docs/9.2/static/warm-standby.html
>
2016-11-29 15:21 GMT+13:00 David Steele :
> On 11/24/16 8:05 PM, Patrick B wrote:
>
> > hmm.. I really don't get it.
> >
> >
> >
> > If I get messages like:
> >
> > *cp: cannot stat '/walfiles/00021AF800A5': No such fil
2016-11-23 16:18 GMT+13:00 Venkata B Nagothi :
>
> On Wed, Nov 23, 2016 at 1:59 PM, Patrick B
> wrote:
>
>>
>>
>> 2016-11-23 15:55 GMT+13:00 Venkata B Nagothi :
>>
>>>
>>>
>>> On Wed, Nov 23, 2016 at 1:03 PM, Patrick B
2016-11-23 15:55 GMT+13:00 Venkata B Nagothi :
>
>
> On Wed, Nov 23, 2016 at 1:03 PM, Patrick B
> wrote:
>
>> Hi guys,
>>
>> I currently have a slave02 server that is replicating from another
>> slave01 via Cascading replication. The master01 server is
d the file. However, the file exists on the
Master, but it hasn't started shipping yet. What are the consequences of that?
Cheers
Patrick
2016-11-17 12:19 GMT+13:00 Patrick B :
> Would be possible to check the integrity between two database servers?
>
> Both servers are slaves (streaming replication + wal_files) but I believe
> one of them, when recovered from wal_files in a fast outage we got, got
> recovered no
Would it be possible to check the integrity between two database servers?
Both servers are slaves (streaming replication + wal_files), but I believe
one of them, when recovered from wal_files during a brief outage we had,
was not recovered 100%.
How could I check the data between both DB?
I'm using Postg
2016-11-14 15:33 GMT+13:00 Venkata B Nagothi :
>
> On Mon, Nov 14, 2016 at 1:22 PM, Patrick B
> wrote:
>
>> Hi guys,
>>
>> My current scenario is:
>>
>> master01 - Postgres 9.2 master DB
>> slave01 - Postgres 9.2 streaming replication + wal_files
on the slaves aren't the
00021A7C* ones. I think the SSH delivery is being delayed. Not sure
why, though.
How can I see how many files are behind?
Cheers
Patrick.
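A sketch of one way to quantify the lag from the master's side on 9.2, in bytes rather than files:

-- byte distance between the master's current WAL position and each
-- standby's replayed position
SELECT client_addr,
       pg_xlog_location_diff(pg_current_xlog_location(), replay_location) AS byte_lag
FROM pg_stat_replication;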
Hi guys,
I'm writing a simple Plpgsql function to delete some data from different
tables.
The function starts with a select, followed by 2 deletes.
How can I return the number of rows that each delete removed?
CREATE or REPLACE FUNCTION delete_ids_clientid(account_id integer)
RETURN
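A hedged sketch of how the rest of that function might look: GET DIAGNOSTICS ... ROW_COUNT captures the rows affected by the previous statement (the table names and WHERE columns are assumptions):

CREATE OR REPLACE FUNCTION delete_ids_clientid(account_id integer)
RETURNS TABLE (deleted_a bigint, deleted_b bigint) AS $$
DECLARE
    n_a bigint;
    n_b bigint;
BEGIN
    DELETE FROM table_a WHERE clientid = account_id;
    GET DIAGNOSTICS n_a = ROW_COUNT;    -- rows removed by the first delete
    DELETE FROM table_b WHERE clientid = account_id;
    GET DIAGNOSTICS n_b = ROW_COUNT;    -- rows removed by the second delete
    RETURN QUERY SELECT n_a, n_b;
END;
$$ LANGUAGE plpgsql;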
2016-11-02 8:43 GMT+13:00 Patrick B :
>
>
> 2016-11-02 2:55 GMT+13:00 Scott Marlowe :
>
>> On Mon, Oct 31, 2016 at 8:01 PM, Patrick B
>> wrote:
>> > If I change recovery.conf:
>> >
>> > recovery_target_time = '2016-10-30 02:24:40'
>
2016-11-02 2:55 GMT+13:00 Scott Marlowe :
> On Mon, Oct 31, 2016 at 8:01 PM, Patrick B
> wrote:
> > If I change recovery.conf:
> >
> > recovery_target_time = '2016-10-30 02:24:40'
> >
> >
> > I get error:
> >
> > FATAL: requested
If I change recovery.conf:
recovery_target_time = '2016-10-30 02:24:40'
I get the error:
FATAL: requested recovery stop point is before consistent recovery point
I actually want to restore to a point in time; I don't want
recovery_target_timeline = 'latest'.
How can I specify a date?
Thanks
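For reference, a point-in-time recovery.conf looks roughly like this sketch (the restore path is an assumption). That FATAL usually means the base backup was taken after the requested target time, so recovery has to start from an older base backup:

restore_command = 'cp /walfiles/%f %p'     # path is an assumption
recovery_target_time = '2016-10-30 02:24:40'
recovery_target_inclusive = true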
2016-11-01 11:59 GMT+13:00 Patrick B :
>
>
> 2016-11-01 10:33 GMT+13:00 David G. Johnston :
>
>> On Mon, Oct 31, 2016 at 1:46 P
2016-11-01 10:33 GMT+13:00 David G. Johnston :
> On Mon, Oct 31, 2016 at 1:46 PM, Patrick B
> wrote:
>
>> Hi guys,
>>
>> I got a test server, let's call it test01.
>>
>> The test01 has a basebackup from the master.
>> I want to turn test01 i
then postgres starts recovering the wal_files. But I don't want that,
as I don't need it up to date.
Are the wal_files required anyway?
Patrick
2016-10-31 15:54 GMT+13:00 Venkata B Nagothi :
>
> On Mon, Oct 31, 2016 at 11:57 AM, Patrick B
> wrote:
>
>> Hi guys,
>>
>> I'm using this query to measure the delay between a Master and a
>> Streaming Replication Slave server, using PostgreSQL 9.2.
>&
> SELECT (extract(epoch FROM now()) -
>         extract(epoch FROM pg_last_xact_replay_timestamp()))::int AS lag;
In your opinion, is that right?
Cheers
Patrick
2016-10-19 13:39 GMT+13:00 Michael Paquier :
> On Wed, Oct 19, 2016 at 9:24 AM, Patrick B
> wrote:
> > However, this new database test server doesn't need to have all the
> data. I
> > would like to have only the first 100 rows(example) of each table in my
>
that, but unfortunately it doesn't work well.
It doesn't get the first 100 rows; it gets a random 100 rows.
Do you guys have any idea how I could do this?
Thanks
Patrick
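"First" only means something with an ORDER BY; without one, PostgreSQL returns rows in arbitrary order. A hedged sketch for exporting a deterministic 100 rows from one table (table, key, and path are assumptions):

COPY (SELECT * FROM mytable ORDER BY id LIMIT 100)
TO '/tmp/mytable.csv' CSV HEADER;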
Thank you guys... good to know that pg_dump does all the job for me :)
So, if I only dump using the --schema-only option, will it dump all the
schemas, constraints, indexes and tables?
Because I'll probably have to import the data manually, not in a single
pg_restore I mean. (AWS issue)
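Yes; a sketch of the two-step flow (database names are placeholders):

pg_dump --schema-only -Fc -f schema.dump sourcedb   # schemas, tables, constraints, indexes; no data
pg_restore -d targetdb schema.dump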
Hi guys,
I need to export an entire database to another server, for testing purpose.
Is there any way to export all indexes and constraints ?
Postgres 9.2
Patrick
2016-09-09 1:09 GMT+12:00 Scott Marlowe :
> On Tue, Sep 6, 2016 at 5:25 PM, John R Pierce wrote:
> > On 9/6/2016 4:20 PM, Melvin Davidson wrote:
> >>
> >> If you use slony to replicate, you CAN have 9.2 on the master and 9.4 on
> >> the slave.
> >
> >
> > does rackspace support slony? how about