On 25/09/11 17:15, Dave Korn wrote:
On 25/09/2011 13:56, David Brown wrote:
There is a big difference between defining an object as "const", and
merely declaring it as const or accessing it as const. When you access
it as const, you are saying "/I/ won't change the object with this
access". When you declare an object as const (such as an extern
object), you are saying "/I/ won't change this object". When you
/define/ an object as const, as you do with a "static const", you are
saying "this object is constant. It will never change value - you (the
toolchain) can safely place it in read-only memory that cannot ever
change value".
And then you make it volatile, telling the compiler "this object might
change unexpectedly, or values written to it might be used in ways you
cannot see".
If someone could explain to me how this could have real-world usage, I
think it would be easier for me (and others) to be sure of what it
really means.
Just because it's static doesn't mean the address can't escape:
#include <stdbool.h>
#include <stdint.h>

/* May read or write to dest, according to direction flag. */
extern void start_dma_transfer (void *dest, unsigned int size, bool direction,
                                uint64_t bus_addr);
/* We don't want to change this ourselves, and nor do we want the symbol
to be externally visible. */
static const volatile char dma_buffer[BUFFERSIZE];
[ ... later, in a function ... ]
start_dma_transfer ((void *) &dma_buffer[0], size, DIRECTION_DMA_TO_MEMORY, devaddr);
A bit contrived perhaps, but start_dma_transfer (or some similar function)
might be part of the OS or written in assembly, and so not necessarily C-type-safe.
cheers,
DaveK
I can see that as a possibility, though to me it reads as bad style.
I'm aware that taking the address of a static object can let it
"escape", and mentioned that possibility in other posts - it will
certainly force the object to be created in memory. However, I just
don't see why a buffer like this would be defined "const" - it's not
constant, so what are you trying to achieve by saying "const"?
To answer that, you have to first say /why/ anyone would use the "const"
qualifier in the first place. There are five reasons I can think of:

1. Compatibility with other code and types.
2. Error-checking - if you know you will not change an object, you make it
   "const" to let the compiler check that you haven't changed it by mistake.
3. Optimisation - by telling the compiler that the object won't change
   value, you let it generate better code (such as using "static const"
   instead of old-fashioned "#define" or enum constants).
4. Read-only placement - you define objects as "const" to let the toolchain
   place them in read-only memory, which is very important for small
   embedded systems running from flash memory.
5. Documentation - you use "const" when it makes it clearer what the code
   is doing, why it is doing it, or how it is doing it.
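To make the optimisation and read-only-memory reasons a little more
concrete, here is a rough sketch (names invented for the example):

#define NUM_CHANNELS 4              /* old-fashioned macro constant */
static const int num_channels = 4;  /* typed and scoped, and the compiler
                                       can still fold the value into code */

/* Defined const, so the toolchain is free to place the whole table in
   read-only memory (flash) rather than copying it into RAM at startup. */
static const unsigned char crc_table[256] = { 0x00, 0x07, 0x0e, 0x09, /* ... */ };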
So what advantages would there be in declaring a volatile buffer like
this to be "const"? At best, you are helping the compiler check that
you don't accidentally write to it in your own code. It is dangerous
regarding optimisation - you definitely don't want the compiler to
optimise the buffer, yet this whole discussion started because some
compilers /do/ optimise such code (rightly or wrongly). You don't want
the toolchain to put it in read-only memory (as noted by another poster,
doing so would be against the standards - but toolchain writers are not
infallible, and weird corner-cases like this are where mistakes are less
likely to be spotted and fixed). So does the "const" make the program
clearer to the code writer, and other readers? I would say it has the
opposite effect, and leaves a reader wondering what it is supposed to
mean - the buffer is clearly not meant to be constant, so why is it
declared "const"?
When I write code, I almost always define objects as "const" when
possible. But in this case there would be no doubt in my mind - that
buffer is /not/ constant, and should not be defined as "const" even if
it happened to compile and run correctly.
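To be clear about what I /would/ write, here is roughly how I would declare
Dave's buffer, keeping his invented names but dropping the "const":

/* Not constant - the DMA hardware writes to it - but it is volatile,
   and "static" keeps the symbol from being externally visible. */
static volatile char dma_buffer[BUFFERSIZE];

[ ... later, in a function ... ]

start_dma_transfer ((void *) &dma_buffer[0], sizeof dma_buffer,
                    DIRECTION_DMA_TO_MEMORY, devaddr);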
I am aware that this is a stylistic choice. But I don't see it as good
programming to push boundaries with "risky" code that might or might not
work depending on how the compiler writers interpreted the standards -
especially when it makes the code less clear to human readers.