On Tue, Jun 13, 2017 at 03:00:28PM +0100, Wilco Dijkstra wrote:
> 
> ping

I've been avoiding reviewing this patch as Richard was the last to comment
on it, and I wasn't sure that his comments had been resolved to his
satisfaction. The conversation was back in August 2016 on v1 of the patch:

> Richard Earnshaw (lists) <richard.earns...@arm.com> wrote:
> >
> > So isn't the real bug that we've permitted the user to create an object
> > that is too large for the data model?
> 
> No, that's a different issue which I'm not trying to address here. The key
> is that as long as the start of the symbol is in range, we should be able
> to link. Due to optimization the offset may be huge even when the object
> is tiny, so the offset must be limited.
> 
> > Consider, for example:
> 
> char fixed_regs[0x200000000ULL];
> char fixed_regs2[100];
> 
> int
> main()
> {
>   return fixed_regs[0] + fixed_regs2[0];
> }
> 
> > Neither offset is too large, but we still generate relocation errors
> > when trying to reference fixed_regs2.
> 
> But so would creating a million objects of size 1. The linker could warn
> about large objects as well as give better error messages for relocations
> that are out of range. But that's mostly QoI; what we have here is a case
> where legal code fails to link due to optimization. The original example
> is from GCC itself: the fixed_regs array is small, but due to optimization
> we can end up with &fixed_regs + 0xffffffff.

Richard, do you have anything further to say on this patch? Or can we start
progressing the review again?

Thanks,
James
