On 2/22/25 4:38 AM, Phi Debian wrote:

This way, pure unnumbered (no indexed arguments mixed in) works as before, and pure numbered (no unnumbered) works as it does in C. Mixing the two departs from C (as does format reuse), and the rule of thumb is easy to remember: count unnumbered arguments as they appear, and numbered ones by their index. There is one last catch, for format reuse: where does the next iteration start? I decided that the next pass over the format starts at the argument after the maximum of the unnumbered count and the highest numbered index. For example, if there are 4 unnumbered arguments and the highest index is 3, the next pass starts at argument 5; conversely, if only 2 unnumbered arguments were used and the highest index is 4, the next pass again starts at 5.

Yes, this is what I was trying to say the other day. You use some base for
the numbered arguments in the format reuse case, and it's the highest-
numbered argument any of the conversions consumes. That's basically what
POSIX says in point 10. That base starts at 0, so there's no offset the
first time through the format string.
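For reference, the format-reuse behavior that the base rule extends is the one bash already has for unnumbered arguments; a quick sketch:

```shell
# bash reuses the format string while arguments remain;
# each pass consumes the next unnumbered arguments in order.
printf '%s-%s\n' a b c d
# prints:
# a-b
# c-d
```

The proposal only has to decide where each new pass starts once numbered conversions enter the picture.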



This is one approach; another would be to refuse mix'n'match entirely, accepting only pure access (numbered or unnumbered, exclusively).

POSIX does say it's unspecified, and puts an application requirement in
place not to mix them.

Note that I used the same rule for the numbered/unnumbered mix'n'match in the implementation of %*.*s and %n$*w$.*p$s, that is, numbered width and precision:

    %n$*w$.*p$s
     |   |   |
     |   |   prec
     |   width
     pos

Note that pos is not too intuitive (it comes before width and precision), but that's the libc convention.

It looks like coreutils printf does the same thing.
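As a concrete illustration of the unnumbered form, where `*` pulls width and precision from the argument list (which bash's printf already accepts):

```shell
# width 10, precision 3: 'hello' is truncated to 'hel',
# then right-justified in a field of 10 characters.
printf '[%*.*s]\n' 10 3 hello
# prints: [       hel]
```

The numbered form %n$*w$.*p$s would name those same three argument positions explicitly.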


     >
     >     It decreases clarity. If you have an argument '1+1', your proposal
     >     makes it depend on the conversion specifier. Right now, there's no
     >     ambiguity: 1+1 is a string, and $(( 1+1 )) is 2, regardless of
     >     whether or not the conversion specifier accepts an integer argument.


But bash already resolves a conversion-specifier ambiguity:

$ printf '%d\n' yo
bash: printf: yo: invalid number
0
$ printf '%x\n' yo
bash: printf: yo: invalid number
0

I don't get the argument. There's no ambiguity: yo is always a string.
If you're saying that you should treat it as an expression, then you get
the value of $yo. I'm pretty sure that doesn't increase clarity.
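To illustrate the difference: inside an arithmetic context a bare word is resolved as a variable name, which is quite unlike treating a printf argument as a literal string:

```shell
yo=7
echo $(( yo ))    # the name is resolved as a variable: prints 7
unset yo
echo $(( yo ))    # an unset name evaluates to 0: prints 0
```

So making printf evaluate its arguments would silently turn 'yo' from an invalid number into whatever $yo happens to hold.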


The heavy lifting is already done in bash: it scans the format string, recognizes an integer conversion specifier, fetches the argument string, and applies an integer validity test to it. It is in this last operation that the validity check could be replaced by an arith_eval(string).

Sure, I get how it could be done. I just don't buy the argument for doing
it.



It is not an expression context at the moment, but it could be one day. For now it is an 'integer' context, hence the error when the string is not an integer (a number). And nothing really defines what the context of an argument in a command's argument list is, beyond it being a string.

When I do
$ function f { echo $(($1)) ; }
$ f 1+1
2

This is perfectly valid: f() really accepts what you call an expression context, while it is just a string that is internally viewed and used as an arithmetic expression.

That expansion inside $((...)) is standardized. POSIX says what happens
to the expression.
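A small sketch of what that standardization means: the $1 inside $(($1)) undergoes ordinary parameter expansion first, and the resulting text is then evaluated as the arithmetic expression:

```shell
expr='2 * 3'
# $expr expands to the text  2 * 3 , which is then evaluated:
echo $(( $expr ))   # prints 6
```

That two-step behavior is what POSIX specifies for $((...)); an argument handed to printf gets no such treatment.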

If f() can decide that an argument is an arithmetic expression, why couldn't printf?

Oh, it certainly could. I just don't think it adds anything. I know ksh93
does it, but ksh93 threw arithmetic evaluation in a bunch of different
places before $((...)) came along, and I think arithmetic expansion renders
them superfluous.

--
``The lyf so short, the craft so long to lerne.'' - Chaucer
                 ``Ars longa, vita brevis'' - Hippocrates
Chet Ramey, UTech, CWRU    c...@case.edu    http://tiswww.cwru.edu/~chet/
