Kevin Wilkinson wrote:
> On 10.2, we're seeing very high CPU usage when doing an UPDATE statement
> on a relatively small table (1GB). One of the updated columns is text,
> about 1k bytes. There are four threads doing similar updates
> concurrently to the same table (but different rows). Each th
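A quick first check for this kind of report (a sketch; it needs a role that
can see other sessions' query text in pg_stat_activity):

  SELECT pid, state, wait_event_type, wait_event, query
  FROM pg_stat_activity
  WHERE state = 'active';

If wait_event is NULL for the updating backends, they are genuinely on CPU
rather than waiting on a tracked lock or I/O event.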
On Mon, Apr 8, 2019 at 7:57 AM Igal Sapir wrote:
> David,
>
> On Sun, Apr 7, 2019 at 8:11 PM David Rowley
> wrote:
>
>> On Mon, 8 Apr 2019 at 14:57, Igal Sapir wrote:
>> > However, I have now deleted about 50,000 rows more and the table has
>> > only 119,688 rows. The pg_relation_size() still shows 31MB and
>> > pg_total_relation_size() still shows 84GB.
David,
On Sun, Apr 7, 2019 at 8:11 PM David Rowley
wrote:
> On Mon, 8 Apr 2019 at 14:57, Igal Sapir wrote:
> > However, I have now deleted about 50,000 rows more and the table has
> > only 119,688 rows. The pg_relation_size() still shows 31MB and
> > pg_total_relation_size() still shows 84GB.
> >
Sorry for the late reply, but my server has been working without problems for a while.
> What problem occurs?
> Where is the Windows server?
The problem is that the Windows server stops accepting new connections,
including non-admin connections.
Only already-established admin connections stay alive.
The Windows server is not close to my workspace. So I u
On Mon, 8 Apr 2019 at 14:57, Igal Sapir wrote:
> However, I have now deleted about 50,000 rows more and the table has only
> 119,688 rows. The pg_relation_size() still shows 31MB and
> pg_total_relation_size() still shows 84GB.
>
> It doesn't make sense that after deleting about 30% of the rows
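For context, DELETE alone doesn't shrink a table: deleted rows remain as dead
tuples until vacuum processes them. A quick way to check, with my_table as a
placeholder for the real table name:

  SELECT n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
  FROM pg_stat_user_tables
  WHERE relname = 'my_table';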
David,
On Sun, Apr 7, 2019 at 7:28 PM David Rowley
wrote:
> On Mon, 8 Apr 2019 at 14:19, Igal Sapir wrote:
> >
> > On Sun, Apr 7, 2019 at 6:20 PM David Rowley
> > <david.row...@2ndquadrant.com> wrote:
> >>
> >> On Mon, 8 Apr 2019 at 10:09, Igal Sapir wrote:
> >> >
> >> > I have a table for which pg_relation_size() shows only 31MB, but
> >> > pg_total_relation_size() shows a whopping 84GB.
On Mon, 8 Apr 2019 at 14:19, Igal Sapir wrote:
>
> On Sun, Apr 7, 2019 at 6:20 PM David Rowley
> wrote:
>>
>> On Mon, 8 Apr 2019 at 10:09, Igal Sapir wrote:
>> >
>> > I have a table for which pg_relation_size() shows only 31MB, but
>> > pg_total_relation_size() shows a whopping 84GB.
>> >
>> >
David,
On Sun, Apr 7, 2019 at 6:20 PM David Rowley
wrote:
> On Mon, 8 Apr 2019 at 10:09, Igal Sapir wrote:
> >
> > I have a table for which pg_relation_size() shows only 31MB, but
> > pg_total_relation_size() shows a whopping 84GB.
> >
> > The database engine is running inside a Docker container,
On Mon, 8 Apr 2019 at 10:09, Igal Sapir wrote:
>
> I have a table for which pg_relation_size() shows only 31MB, but
> pg_total_relation_size() shows a whopping 84GB.
>
> The database engine is running inside a Docker container, with the data
> mounted as a volume from a partition on the host's file system.
On 4/7/19 3:08 PM, Igal Sapir wrote:
I have a table for which pg_relation_size() shows only 31MB, but
pg_total_relation_size() shows a whopping 84GB.
What does pg_indexes_size() show?
Also, how many indexes are on the table?
The database engine is running inside a Docker container, with the data
mounted as a volume from a partition on the host's file system.
I have a table for which pg_relation_size() shows only 31MB, but
pg_total_relation_size() shows a whopping 84GB.
The database engine is running inside a Docker container, with the data
mounted as a volume from a partition on the host's file system.
When I try to run `VACUUM FULL`, the disk usage
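One way to see where the 84GB actually lives is to split the total into heap,
indexes, and everything else (mostly TOAST). A sketch, with my_table standing
in for the real table name:

  SELECT pg_size_pretty(pg_relation_size('my_table'))       AS heap,
         pg_size_pretty(pg_indexes_size('my_table'))        AS indexes,
         pg_size_pretty(pg_total_relation_size('my_table')
                        - pg_relation_size('my_table')
                        - pg_indexes_size('my_table'))      AS toast_and_other,
         pg_size_pretty(pg_total_relation_size('my_table')) AS total;

A 31MB heap next to an 84GB total points at the indexes or the TOAST table.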
> It may take a while for Slony replication to be in sync, but when it is,
> there will be very little downtime to switch over.
I agree in principle, which is why I chose Slony over pg_upgrade for my
company's very similar situation, but my experience was that, out of the
box, Slony was projected t
Got it! Thanks!
Hi,
On the Postgres mailing lists, please don't write your reply at the top of a
fully quoted email. We like the reply to be inline and trimmed to the
necessary parts.
On 2019-04-07 13:28:46 -0700, Konstantin Izmailov wrote:
> Yes, Andres, I meant "pipelining", just couldn't choose the correct word. Thank
Never mind, I found the link to the GitHub repo in the emails from the list.
Thanks again!
Konstantin
On Sun, Apr 7, 2019 at 1:28 PM Konstantin Izmailov wrote:
> Yes, Andres, I meant "pipelining", just couldn't choose the correct word.
> Thank you for the answer(s)!
>
> I also made changes in my own copy of libpq, and they work fine.
Yes, Andres, I meant "pipelining", just couldn't choose the correct word. Thank
you for the answer(s)!
I also made changes in my own copy of libpq, and they work fine. I think
pipelining support is needed in libpq. Btw, how can I get the patch
code? I want to compare your approach with mine. I cou
> The original scheduled downtime for one installation was 24 hours. By 21
> hours it had not completed the schema-only pg_dump, so it was returned to
> operation.
To me, your best option is to create a Slony cluster with the version you
need to upgrade to.
When Slony is in sync, simply make it the master.
On 4/7/19 12:05 PM, senor wrote:
Thank you, Adrian. I'm not sure if I can provide as much as you'd need for a
definite answer, but I'll give you what I have.
The original scheduled downtime for one installation was 24 hours. By 21 hours
it had not completed the schema-only pg_dump, so it was returned to operation.
Hi,
On 2019-04-07 20:57:56 +0200, Pavel Stehule wrote:
> On Sun, Apr 7, 2019 at 8:47 PM Konstantin Izmailov wrote:
>
> > Hi,
> > I'm experimenting with Postgres 10 and protocol v3. I noticed that
> > Postgres allows executing multiple queries simultaneously (I basically
> > commented out a check that prevents sending another query in libpq while
> > previous result(s) reading is not complete).
I just noticed I missed Sherrylyn's post.
I did some reading about Slony and believe it would be useful if I had the
time to dig in. As pointed out, it's not an out-of-the-box solution. It is
included on the TODO list, though. For now I can only dream of the 86-second
downtime.
Thanks
On 4/6/19 11:42 PM, haman...@t-online.de wrote:
> Hi,
>
> I have no real idea about solving the complete problem, and would probably try
> something with a temp table first.
> For extracting the hostname from a URL you could use
>
> select regexp_replace(url, 'https?://([^/]*)/.*', '\1')
>
> instead
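For reference, a quick check of the corrected expression (the sample URL is
only an illustration):

  select regexp_replace('https://www.postgresql.org/download/',
                        'https?://([^/]*)/.*', '\1');
  -- returns: www.postgresql.org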
Thank you, Adrian. I'm not sure if I can provide as much as you'd need for a
definite answer, but I'll give you what I have.
The original scheduled downtime for one installation was 24 hours. By 21 hours
it had not completed the schema-only pg_dump, so it was returned to operation.
The amount of da
Hi
On Sun, Apr 7, 2019 at 8:47 PM Konstantin Izmailov wrote:
> Hi,
> I'm experimenting with Postgres 10 and protocol v3. I noticed that
> Postgres allows executing multiple queries simultaneously (I basically
> commented out a check that prevents sending another query in libpq while
> previous result(s) reading is not complete).
Hi,
I'm experimenting with Postgres 10 and protocol v3. I noticed that
Postgres allows executing multiple queries simultaneously (I basically
commented out a check that prevents sending another query in libpq while
previous result(s) reading is not complete). Things appear to be working,
but I j
On 4/6/19 5:47 PM, senor wrote:
Thanks, Tom, for the explanation. I assumed it was my ignorance of how the schema
was handled that was making this look like a problem that had already been
solved and that I was missing something.
I fully expected the "You're Doing It Wrong" part. That is out of my co
On 4/7/19 6:22 AM, Lou Tseng wrote:
Hi folks,
Is there a good tutorial on recovering from logical replication being out of
sync? We ran into this error stating WAL has been removed, and now the
replica is out of sync. Also, is there a way to increase the number of WAL
segments kept on the master?
I have to beli
> are there any shortcuts to upgrading that would circumvent exporting the
> entire schema?
By "shortcuts," do you mean you want to minimize the time and energy you
put into the upgrade, or that you want to minimize database downtime? If
you mean downtime, I was able to upgrade a customer-facing database
Hi Lou,
Try setting the wal_keep_segments parameter to a higher value. I think it
will suffice to solve the problem.
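A minimal sketch of that change (256 is only an example value;
wal_keep_segments can be changed with a config reload, no restart needed):

  ALTER SYSTEM SET wal_keep_segments = 256;
  SELECT pg_reload_conf();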
Regards,
Pavan Teja,
9841380956
On Sun, 7 Apr, 2019, 6:53 PM Lou Tseng, wrote:
> Hi folks,
>
> Is there a good tutorial on recovering from logical replication being out
> of sync? We ran into this error stating WAL has been removed, and now the
> replica is out of sync.
Hi folks,
Is there a good tutorial on recovering from logical replication being out of
sync? We ran into this error stating WAL has been removed, and now the
replica is out of sync. Also, is there a way to increase the number of WAL
segments kept on the master?
Thanks!
2019-04-07 12:28:37.180 UTC [22292] ERROR:
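If the subscription uses a replication slot on the master, one way to see how
much WAL is being retained for it (a sketch; the functions below exist as of
PostgreSQL 10):

  SELECT slot_name, plugin, active,
         pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn))
           AS retained_wal
  FROM pg_replication_slots;

A slot whose restart_lsn has fallen behind the removed WAL segments is the
usual cause of this error.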
On 4/5/19 6:23 AM, Rob Northcott wrote:
> Ah, fair enough – I’ll get into the habit of doing that then. Thanks again.
>
> Rob
> From: Ron
> Sent: 05 April 2019 11:07
> To: pgsql-general@lists.postgresql.org
> Subject: Re: Query much slower on 9.6.5 than on 9.3.5
>
> pg_dump/restore gets