I have a heavily used PostgreSQL 9.3.5 database on CentOS 6. Sometimes I
need to add/remove columns, preferably without any service interruptions,
but I get temporary errors.
I follow the safe operations list from
https://www.braintreepayments.com/blog/safe-operations-for-high-volume-postgresql
but I still get these temporary errors.
Note that these errors most of the time only happen very briefly, at the
same time as the ALTER is run. When I did some experiments today the server
in total had around 3k req/s, with maybe 0.1% of them touching the table
being updated, and the error then happens maybe 1-10% of the times I try
this
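For reference, a minimal sketch of the kind of change involved (table and
column names here are illustrative, not the real ones). On 9.3 adding a
nullable column without a DEFAULT and dropping a column are both
metadata-only operations, but they still need a brief ACCESS EXCLUSIVE
lock, so a lock_timeout keeps the ALTER from queueing behind long-running
queries:

    SET lock_timeout = '2s';               -- fail fast instead of waiting behind readers
    ALTER TABLE users ADD COLUMN new_flag boolean;  -- no DEFAULT, so no table rewrite
    ALTER TABLE users DROP COLUMN old_flag;         -- also metadata-only
    RESET lock_timeout;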
On Sat, Oct 10, 2015 at 10:00 PM, Adrian Klaver
wrote:
> On 10/09/2015 08:30 PM, Victor Blomqvist wrote:
>
>> Note that these errors most of the time only happen very briefly at the
>> same time as the ALTER is run. When I did some experiments today the
>> server in t
changes when there
are people in the office.
/Victor
On Mon, Oct 12, 2015 at 10:15 PM, Adrian Klaver
wrote:
> On 10/12/2015 06:53 AM, Tom Lane wrote:
>
>> Andres Freund writes:
>>
>>> On 2015-10-09 14:32:44 +0800, Victor Blomqvist wrote:
>>>
>>>> CREATE FUNCTION select_users(id_ integer) RETURNS SET
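The quoted function is cut off above; a minimal sketch of the kind of
set-returning function being discussed (the table definition and the body
are assumptions, not the original code from the thread):

    CREATE TABLE users (id integer PRIMARY KEY, name text);

    CREATE FUNCTION select_users(id_ integer) RETURNS SETOF users AS $$
    BEGIN
        RETURN QUERY SELECT * FROM users WHERE id = id_;
    END;
    $$ LANGUAGE plpgsql;

    -- Because the function's result type is the table's row type, a
    -- concurrent ALTER TABLE users ADD/DROP COLUMN changes that row type,
    -- and sessions holding a cached plan for the function can briefly fail
    -- until they re-plan.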
On Wed, Nov 4, 2015 at 1:31 AM, Tom Lane wrote:
> Victor Blomqvist writes:
> > In case any of you are interested in recreating this problem, I today had
> > the time to create a short example that reproduces the error every time I
> > try.
>
> Hmm. If you just do th
Hi,
Is it possible to break/limit a query so that it returns whatever results it
has found after having checked X rows in an index scan?
For example:
create table a(id int primary key);
insert into a select * from generate_series(1,10);
select * from a
where id%2 = 0
order by id limit 10;
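One way to get that effect (a sketch, not something from the thread itself):
cap the rows the index scan can hand back in a subquery, then apply the real
filter on top, so at most N rows are ever examined:

    select *
    from (
        select * from a order by id limit 1000  -- examine at most 1000 candidate rows
    ) candidates
    where id % 2 = 0
    order by id
    limit 10;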
On Fri, Aug 19, 2016 at 1:31 PM, Sameer Kumar
wrote:
>
>
> On Fri, 19 Aug 2016, 1:07 p.m. Victor Blomqvist, wrote:
>
>> Hi,
>>
>> Is it possible to break/limit a query so that it returns whatever results
>> found after having checked X amount of row
On Fri, Aug 19, 2016 at 6:01 PM, Francisco Olarte
wrote:
> Hi Victor:
>
> On Fri, Aug 19, 2016 at 7:06 AM, Victor Blomqvist wrote:
> > Is it possible to break/limit a query so that it returns whatever results
> > found after having checked X amount of rows in a index scan
On Sat, Aug 20, 2016 at 1:13 AM, Francisco Olarte
wrote:
> Hi Victor:
>
>
> On Fri, Aug 19, 2016 at 7:02 PM, Victor Blomqvist wrote:
> > What I want to avoid is my query visiting the whole 1m rows to get a
> > result, because in my real table that can take 100s
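To see how many rows a query actually visits before the LIMIT is satisfied,
EXPLAIN ANALYZE on the toy table above shows it directly (a sketch):

    explain (analyze, buffers)
    select * from a where id % 2 = 0 order by id limit 10;

    -- In the output, the scan node's "actual rows" and "Rows Removed by
    -- Filter" show how much work was done before the limit was reached.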
From time to time I get this and similar errors in my Postgres log file:
< 2015-12-17 07:45:05.976 CST >ERROR: index
"user_pictures_picture_dhash_idx" contains unexpected zero page at block
123780
< 2015-12-17 07:45:05.976 CST >HINT: Please REINDEX it.
< 2015-12-17 07:45:05.976 CST >CONTEXT: P
relevant for this I think)
/Victor
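Following the HINT in the log excerpt above, the index can be rebuilt
directly (note that plain REINDEX blocks writes to the table and any reads
that would use the index while it runs, so on a busy system it may need a
quiet period):

    REINDEX INDEX user_pictures_picture_dhash_idx;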
On Thu, Dec 17, 2015 at 12:22 PM, Tom Lane wrote:
> Victor Blomqvist writes:
> >> From time to time I get this and similar errors in my Postgres log file:
> > < 2015-12-17 07:45:05.976 CST >ERROR: index
> > "user_pictures_pic
Hello!
We just had a major issue on our databases: after an index was replaced, a
user-defined function didn't change its query plan to use the new index. At
least this is our theory, since the function in question became much slower
and as a result brought our system to a halt.
Basically it went:
1
The end goal is to get rid of index bloat. If there is a better way to
handle this, I'm all ears!
/Victor
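A typical way to replace a bloated index without long locks looks roughly
like the following (names are hypothetical, and these are not necessarily
the exact steps used here). The catch is that plpgsql caches plans per
session, so long-lived connections can keep using a plan built against the
old index until they re-plan or reconnect:

    CREATE INDEX CONCURRENTLY users_name_idx_new ON users (name);
    DROP INDEX CONCURRENTLY users_name_idx;
    ALTER INDEX users_name_idx_new RENAME TO users_name_idx;

    -- From within an affected session, cached plans can be discarded explicitly:
    DISCARD PLANS;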
On Thu, Feb 18, 2016 at 5:21 PM, Oleg Bartunov wrote:
>
>
> On Thu, Feb 18, 2016 at 11:17 AM, Victor Blomqvist wrote:
>
>> Hello!
>>
>> We just had a major i
On Thu, Feb 18, 2016 at 11:05 PM, Tom Lane wrote:
> Victor Blomqvist writes:
> > We just had a major issue on our databases: after an index was replaced, a
> > user-defined function didn't change its query plan to use the new index.
>
> I'm suspicious that this is s