Re: [fpc-pascal] basic question on begin, end;

2020-09-26 Thread Bernd Oppolzer via fpc-pascal


Am 25.09.2020 um 22:16 schrieb James Richters via fpc-pascal:


I think that's a GREAT quote from Niklaus Wirth, and I agree with it 
wholeheartedly… programs should be readable by humans… otherwise do 
all your programming in assembly language… the whole POINT of a 
high-level language is to make it readable by humans… not computers. 
I can't stand trying to muddle through things like C++, it's just too 
confusing… trying to follow all those curly braces and figure out 
what this line of code is going to do… it's just a mess. Yes, I can 
manage, but I definitely prefer the clarity of PASCAL… so I also name 
my variables very clearly instead of using cryptic shorthand… who 
cares how verbose my variable names are… it doesn't make the program 
any less efficient… but very clear function and variable names sure 
make it easier to remember what you were thinking when you have to go 
back and modify code you originally wrote 30 years ago.



...

I admit my code gets a little sloppy with the indents, so after a 
while it sometimes looks like:


if something then
begin
some code here;

some more code;
end;

at least it compiles correctly because the begin and end; are defining 
things… once I get a function or procedure working the way I want it, 
I will then take the time to go back and fix my indents.




Same for me, except that I have totally automated the process of fixing 
the indents: when inserting new code into a program, I don't care much 
about indentation; instead, after compiling successfully, I run a 
(self-written) Pascal program which fixes the indentation etc.; it also 
draws boxes around certain comments and inserts blank lines where 
necessary. The parts of the program that are already well-formed 
remain unchanged.


The program that fixes the indentation is in many cases scheduled 
automatically after a successful compiler run.

This is how a program looks after automated indentation:
https://github.com/StanfordPascal/Pascal/blob/master/PASCAL1.pas

Kind regards

Bernd


___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Graphing library

2020-11-15 Thread Bernd Oppolzer via fpc-pascal

Hi,

I don't know if this can help you, but in the 1980s I worked with a 
library called GKS (graphic kernel system)

which I used to build such graphics like the following example:
http://bernd-oppolzer.de/fdynsb.pdf

The programs that did this were written in Pascal at that time.

It still works today for me (the customer still uses this software),
although it is C today, and GKS is not available any more.
What I did: the original GKS calls are written to files (a sort of 
GKS metafile, but not the original 1980s format), and this file format 
is then read by a C program, GOUTHPGL, which translates this 
(proprietary) format to HPGL. The HPGL files are either sent to 
HP plotters or translated to PDF using public domain software; see the 
file above.

(GOUTHGPL was a Pascal program in the 1990s, too).

IMO, you could easily write the "GKS metafile format" with Pascal;
in fact, it is simply a sort of logfile of the GKS calls.

Here is an old paper about the GKS system: 
http://nsucgcourse.github.io/lectures/Lecture01/Materials/Graphical%20Kernel%20System.pdf


The translator GOUTHGPL supports only a small subset of GKS; see again 
the example picture above.


If you are interested in more details, you can contact me offline.

Kind regards

Bernd


Am 15.11.2020 um 09:33 schrieb Darius Blaszyk via fpc-pascal:

Hi,

I am looking for a simple to use non-visual graphing library to 
produce x-y plots in a  raster file format (similar to how pyplot 
works). Rather than developing something from scratch or writing a 
wrapper to GNU plot (additional dependency), I was hoping something 
like this already would exist that I could build upon.


Thank you for any tips!

Rgds, Darius

___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal




Re: [fpc-pascal] Converting old pascal written for Pascal/MT+ compiler

2023-04-04 Thread Bernd Oppolzer via fpc-pascal

Am 04.04.2023 um 08:16 schrieb Jacob Kroon via fpc-pascal:


Thanks for the tip above.

I was able to write a couple of perl-scripts that are able to convert 
my old Pascal sources to something that fpc can parse. Amongst other 
things, the scripts inject the "public name"/"external name" 
annotations so that the program can link.


But I suspect I have a new problem: With the old Pascal/MT+ compiler 
it would appear that local variables declared in functions/procedures 
have a life-time that spans the whole program, like a "static" 
declared variable in C. With fpc, it looks like locally declared 
variables are automatic, put on the stack(?), and so they go out of 
existence once out of scope ?



IMO, this is not a feature of the old Pascal compiler; instead, your 
Pascal programs "by error" depend on the local variables appearing at 
the same location on the stack at subsequent calls
(if there are no other calls at the same level in between).

The local variables are not initialized when allocated, so it is 
possible that they still have the old value they had when the same 
function was left the last time. I know (from the 1970s and 1980s) 
that some weird programs used this effect. But as soon as another call 
happened between the two calls of the procedure, the stack at this 
location was overwritten, and the "clever" use of the old value of the 
variable was not possible any more. And: it depends on the stack being 
initialized to a known default value the first time.

That said: the initial value of local variables is UNDEFINED, and this 
is true for every Pascal compiler. I cannot imagine a compiler that 
doesn't follow this basic rule.

So IMO: you should find the places in your program where this weird 
technique is used, and not blame the old compiler for it.

Some compilers have an option which initializes every automatic variable 
on every allocation; some even allow the bit pattern to be specified 
(for example: 0xFF). While this is a performance nightmare, it is very 
useful to run the program this way once during testing, because it will 
show whether your program depends on such effects, i.e. whether it 
produces different values depending on initialized or uninitialized 
local variables.
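For completeness: in FPC, one way to give a local variable a Pascal/MT+-style static lifetime with a defined initial value is a writable typed constant. This is only a sketch; it assumes the {$J+} (writeable typed constants) setting, which is the default in most FPC modes:

```pascal
program StaticLocalDemo;
{$J+} { typed constants are writable and have static storage }

procedure Count;
const
  Calls: Integer = 0; { initialized once, keeps its value across calls }
begin
  Inc(Calls);
  WriteLn('call number ', Calls);
end;

begin
  Count; { call number 1 }
  Count; { call number 2 }
end.
```

Unlike an uninitialized local that happens to survive on the stack, the initial value here is guaranteed.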

Kind regards

Bernd




The program depends on this feature in the old compiler. I did some 
googling and found that putting local variables in a "const" section 
instead of "var" would make them have a "whole-program" lifetime, but 
then I need to provide them with an initial value.


Do I have any other option besides changing from "var" to "const" 
everywhere, and provide initial values in all declarations ?


Regards
Jacob
___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal



Re: [fpc-pascal] Legitimate use of for and break

2023-06-18 Thread Bernd Oppolzer via fpc-pascal


Am 18.06.2023 um 03:04 schrieb Hairy Pixels via fpc-pascal:



On Jun 18, 2023, at 1:07 AM, tsie...@softcon.com wrote:

This is interesting, because it's the first time I've ever seen "break" as a 
valid command in pascal, and I've been using pascal since the mid/late 80s.  All kinds of 
dialects too, and I've never seen break as a keyword.  C, Python, Perl, sure, even shell 
scripts, but pascal? Never seen it used before.  Is this a relatively new addition to fpc 
or something?

I don't remember break NOT being in Pascal. How did you exit a loop otherwise, 
goto? Break is common in basically all languages now. Can't think of a language 
I've used without it.

FWIW, when I started to work on New Stanford Pascal 
(http://bernd-oppolzer.de/job9.htm)
in 2011, the very first thing that I did was to add BREAK, CONTINUE and 
RETURN to this compiler.
New Stanford Pascal is an offspring of the Zürich P4 compiler from the 
1970s, directly from

the working group of Niklaus Wirth (who was at Stanford, too, BTW).

The compiler is a self-hosting compiler (like most Pascal compilers, I 
believe), and up to 2011 there were many exits from loops done by 
putting a label after the loop and using GOTO (because of the absence 
of BREAK); GOTO was used in a similar way to implement CONTINUE and RETURN.
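The transformation described above is roughly the following (an illustrative sketch, not code from the actual compiler; the {$goto on} switch is needed in some FPC modes):

```pascal
program LoopExitDemo;
{$goto on}

label 99; { old style: a label placed right after the loop }

var
  i: Integer;

begin
  { before: exit the loop with GOTO, as in the pre-2011 compiler source }
  for i := 1 to 10 do
    if i = 5 then
      goto 99;
99:

  { after: the same exit expressed with BREAK }
  for i := 1 to 10 do
    if i = 5 then
      break;

  WriteLn('done');
end.
```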


I got rid of most of these labels (if not all) by adding these three 
keywords. This was easy.
I made many extensions to the compiler later (from 2016 on) and ported 
the compiler to Windows, Linux etc.; the compiler, which had 6,000 
lines in 2011, now has over 25,000 lines :-)


Kind regards

Bernd


Regards,
Ryan Joseph

___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Pointer question

2023-08-10 Thread Bernd Oppolzer via fpc-pascal
FWIW, when I added similar functionality to my Stanford Pascal compiler, 
I chose not to allow pointer arithmetic, but instead added some functions:

PTRADD (p, i) - p is of type ANYPTR, i is an integer, the result is of type ANYPTR
PTRDIFF (p1, p2) - two pointers, the result is an integer
ANYPTR is a predefined type, compatible with every typed pointer
ADDR (x) is a function (borrowed from PL/1) which returns an ANYPTR ... 
and it is allowed for all types of variables
PTRCAST is the same as PTRADD (p, 0) - it is used to cast between 
incompatible pointer types (not type safe)
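PTRADD, PTRDIFF and ANYPTR are Stanford Pascal builtins; in FPC, roughly equivalent helpers could be sketched like this (the names below merely mirror the post and are not an FPC API):

```pascal
program PtrHelperDemo;

type
  AnyPtr = Pointer; { FPC's untyped Pointer plays the role of ANYPTR }

function PtrAdd(p: AnyPtr; i: PtrInt): AnyPtr;
begin
  PtrAdd := AnyPtr(PtrInt(p) + i); { byte-wise offset, not typed arithmetic }
end;

function PtrDiff(p1, p2: AnyPtr): PtrInt;
begin
  PtrDiff := PtrInt(p1) - PtrInt(p2); { distance in bytes }
end;

var
  a: array[0..9] of Integer;
begin
  { three elements apart: 3 * SizeOf(Integer) bytes }
  WriteLn(PtrDiff(@a[3], @a[0]) = 3 * SizeOf(Integer)); { TRUE }
end.
```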


Kind regards

Bernd


Am 10.08.2023 um 10:52 schrieb Elmar Haneke via fpc-pascal:

1) what does "i := x - x;" do and what is its purpose, and why doesn't 
"x + x" work the same?


Subtracting pointers may be useful if they point into consecutive 
memory. The result is the number of bytes between both addresses.

Adding pointers is useless: you would get a pointer to some address 
that has no relation to either operand, and presumably accessing it 
would raise an error.

Therefore, it is a good idea to let the compiler prevent such mistakes.


2) I've used pointer equality of course but what does "x > p" do and what is 
its purpose?


It may be useful if pointers point into a contiguous data object, 
e.g. a write pointer inside a buffer.


Elmar

___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Does the compiler make prodigious use of use ENTER instruction?

2023-12-12 Thread Bernd Oppolzer via fpc-pascal

Am 12.12.2023 um 17:51 schrieb Marco van de Voort via fpc-pascal:


Op 12-12-2023 om 17:48 schreef Anthony Walter via fpc-pascal:


Do any of the compiler devs know if Pascal programs for the x86 
instruction set are using ENTER and its second argument to the best 
possible effect? I am curious.


No, and if they do, they don't do it in the way it was meant. These 
are very old instructions, and the intended use has a nesting limit 
(of 32 levels, IIRC). Because of that limit, modern compilers don't 
use them.



32 static levels is a LOT, IMO.

I have an old compiler here (New Stanford Pascal, originating from 
Pascal P4) which has only 9 static levels.

Dynamic nesting is unlimited, of course.
This was never a problem for me; every separately compiled module starts 
again at level 2.
The only program which comes close to the 9-level limit is the 
26,000-line compiler phase 1.


My compiler copies and restores the addresses of all 9 stack frame 
levels only when passing procedure and function parameters;
otherwise the addresses of the stack frames are located at certain 
well-known places where they can always be found,
and only individual stack frame addresses have to be set and restored 
when entering or leaving a function.

I had the idea to extend the limit from 9 to 20, but there has been no 
hard requirement so far, so I left it at 9.


C, for example, and other "modern" languages, have a static limit of 1.

Kind regards

Bernd



Some forms of enter and leave are used as peephole optimizations.


___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal



Re: [fpc-pascal] Does the compiler make prodigious use of use ENTER instruction?

2023-12-12 Thread Bernd Oppolzer via fpc-pascal
I wrote a comment on the original Microsoft dev blog (for a non-Pascal 
community); maybe it's of interest here, too ...


In normal Pascal procedure calls, such a vector of stack frame addresses 
is not needed. A standard Pascal runtime knows at all times the 
current stack frame address of - say - the procedure which is currently 
active at static level n. This information is called the DISPLAY VECTOR, 
and there is no need to copy the display vector on procedure calls, 
because it is stored at a well-known location inside the runtime. You 
only have to replace the stack frame address of the current static 
level when you enter or leave a procedure (and maybe set the new 
current static level).


What makes things more complicated are procedure and function 
PARAMETERS (in Pascal), that is: procedures that are passed as 
parameters to other procedures. In this case it is indeed necessary to 
COPY THE COMPLETE DISPLAY VECTOR, because it is not possible to predict 
what static level the procedure (which is passed as a parameter) has. So 
maybe the ENTER instruction is meant for such use cases.


Some of the old Pascal compilers didn't allow procedure parameters (or 
implemented them badly) due to these difficulties.
To see whether your (Pascal or Algol) compiler implements procedure 
parameters correctly, you can use the "Man or Boy" test: 
https://en.wikipedia.org/wiki/Man_or_boy_test
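A minimal illustration of such a procedural parameter, in classic Pascal notation (a sketch; FPC accepts this form in {$mode iso}). The point is that CallIt cannot know Show's static level, so the display information must travel with the parameter:

```pascal
{$mode iso}
program DisplayDemo(output);

procedure Outer;
var
  k: Integer;

  procedure Show;
  begin
    { reads Outer's k: needs the display entry for Outer's frame }
    WriteLn('k = ', k);
  end;

  procedure CallIt(procedure P);
  begin
    P; { P's static level is unknown here }
  end;

begin
  k := 42;
  CallIt(Show);
end;

begin
  Outer;
end.
```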



Am 12.12.2023 um 17:48 schrieb Anthony Walter via fpc-pascal:
I was reading this article today on the Microsoft website about the 
mysterious x86 ENTER instruction. The article states that its primary 
purpose is to support Pascal and similar compilers, allowing local 
variables on the stack to be preserved when using nested functions.


Here is the article:

https://devblogs.microsoft.com/oldnewthing/20231211-00/?p=109126

Do any of the compiler devs know if Pascal programs for the x86 
instruction set are using ENTER and its second argument to the best 
possible effect? I am curious.


___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] case statement

2023-12-17 Thread Bernd Oppolzer via fpc-pascal


Am 17.12.2023 um 06:12 schrieb Adriaan van Os via fpc-pascal:


Anyway, the innocent-looking case statement does have some interesting 
aspects.



Indeed.

My Stanford compiler tries to be portable across platforms; due to its 
IBM mainframe heritage, even on platforms that have "strange" character 
sets like EBCDIC.
When I ported it to ASCII-based machines in 2016, I had some trouble 
with case statements based on character variables and expressions.

See the story here:

http://bernd-oppolzer.de/job9i025.htm


___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal



Re: [fpc-pascal] case statement

2023-12-17 Thread Bernd Oppolzer via fpc-pascal

Am 17.12.2023 um 16:36 schrieb Adriaan van Os via fpc-pascal:


As the otherwise-clause is not in ISO-7185 Pascal, it seems more 
plausible that Borland invented the else-clause (without semicolon) 
independently. All other Pascals I have looked at, use an 
otherwise-clause (with an obligatory semicolon). The motivation for 
this, given in IBM Pascal is interesting. The manual says that the 
statement-part of the otherwise-clause can be intentionally "left 
blank" and be used "to prevent possible errors during execution". I 
recall that in ISO-7185 Pascal it is an error if no case discriminator 
matches at runtime. So, the otherwise-clause was seen as a way to get 
around that!


This was one of Niklaus Wirth's mistakes: the original Pascal 
definition did not specify the syntax of the "otherwise" part of the 
case statement.
I recall that our profs in the computer science classes in the 1970s 
criticized this serious flaw of the language.


So almost every compiler had to find its own solution.

The compilers for the IBM machines were, to some degree, influenced by 
PL/1. There we have:

IF ... THEN ... ELSE
and
SELECT ... WHEN ... OTHER(WISE) ... END

SELECT is much the same as case, although more general: it is not 
limited to simple expressions; in fact, it is not even limited to 
expressions ... there are two flavors of SELECT:

SELECT (expr); WHEN (A) ...; WHEN (B) ...; OTHER ...; END;
SELECT; WHEN (cond1) ...; WHEN (cond2) ...; OTHER ...; END;

PL/1 was first defined in the mid 1960s, so some of the Pascal 
compiler designers may have been influenced (to some degree) by PL/1.
Even C was influenced by PL/1 (because PL/1 was the implementation 
language of the Multics system, and K & R had that experience 
and agreed about how things should NOT be done).

BTW: my Stanford compiler uses OTHERWISE (no abbreviation). There is no 
error if OTHERWISE is omitted and no case label matches;
in this case, simply no action is taken. And: my compiler took some 
inspiration from IBM's Pascal compilers ... and some builtin functions 
were indeed inspired by PL/1 functions.
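For reference, FPC accepts both spellings in a case statement: OTHERWISE (with ordinary semicolon-terminated statements) as well as the Borland-style ELSE. A minimal sketch:

```pascal
program CaseDemo;

var
  c: Char;

begin
  c := 'x';
  case c of
    'a': WriteLn('letter a');
    'b': WriteLn('letter b');
    otherwise
      WriteLn('something else'); { taken when no label matches }
  end;
end.
```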

Kind regards

Bernd



Regards,

Adriaan van Os
___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal



Re: [fpc-pascal] Floating point question

2024-01-28 Thread Bernd Oppolzer via fpc-pascal

To simplify the problem further: the addition of 12/24.0 and the 
subtraction of 0.5 should be removed, IMO, because both can be done 
with floats without loss of precision (0.5 can be represented exactly 
in float).

So the problem can be reproduced IMO with this small Pascal program:

program TESTDBL1 ;

var TT : REAL ;

begin (* HAUPTPROGRAMM *)
  TT := 8427 + 33 / 1440.0 ;
  WRITELN ( 'tt=' , TT : 20 : 20 ) ;
end (* HAUPTPROGRAMM *) .

With my compiler, REAL is always DOUBLE, and the computation is carried 
out by a P-code interpreter (or call it a just-in-time compiler, much 
like Java) which is written in C.

The result is:

tt=8427.022916667879

and it is the same, no matter whether I use this simplified computation 
or the original


tt := (8427 - 0.5) + (12 / 24.0) + (33 / 1440.0);

My value is between the two other values:

tt=8427.022916668000
tt=8427.022916667879
ee=8427.022916625000

The problem now is: the printout of my value suggests an accuracy which 
in fact is not there, because with double you can trust only the first 
16 decimal digits ... after that, all is speculative, a.k.a. wrong. 
That's why FPC IMO rounds at this place, prints the 8, and then only 
zeroes.

The extended format internally has more hex digits and can therefore 
reliably show more decimal digits.

But the last two are wrong, too (the exact value continues with a 
repeating 6).

HTH,
kind regards

Bernd



Am 27.01.2024 um 22:53 schrieb Bart via fpc-pascal:

On Sat, Jan 27, 2024 at 6:23 PM Thomas Kurz via fpc-pascal
  wrote:


Hmmm... I don't think I can understand that. If the precision of "double" were 
that bad, it wouldn't be possible to store dates up to a precision of milliseconds in a 
TDateTime. I have a discrepancy of 40 seconds here.

Consider the following simplified program:

var
   tt: double;
   ee: extended;

begin
   tt := (8427 - Double(0.5)) + (12/ Double(24.0)) +
(33/Double(1440.0)) + (0/Double(86400.0));
   ee := (8427 - Extended(0.5)) + (12/ Extended(24.0)) +
(33/Extended(1440.0)) + (0/Extended(86400.0));
   writeln('tt=',tt:20:20);
   writeln('ee=',ee:20:20);
end.
===

Now see what it outputs:

C:\Users\Bart\LazarusProjecten\ConsoleProjecten>fpc test.pas
Free Pascal Compiler version 3.2.2 [2021/05/15] for i386
...

C:\Users\Bart\LazarusProjecten\ConsoleProjecten>test
tt=8427.022916668000
ee=8427.022916625000

C:\Users\Bart\LazarusProjecten\ConsoleProjecten>fpc -Px86_64 test.pas
Free Pascal Compiler version 3.2.2 [2021/05/15] for x86_64
..

C:\Users\Bart\LazarusProjecten\ConsoleProjecten>test
tt=8427.022916668000
ee=8427.022916668000

On Win64 both values are the same, because there Extended = Double.
On Win32 the Extended version is a bit closer to the exact solution:
8427 - 1/2 + 1/2 + 33/1440 = 8427 + 11/480

Simple as that.

Bart
___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-06 Thread Bernd Oppolzer via fpc-pascal
I didn't follow all the discussions on this topic and all the details 
of FPC compiler options and Delphi compatibility and so on, but I'd 
like to comment on this result:

program TESTDBL1 ;

Const
   HH = 8427.02291667;
Var
   AA : Integer;
   BB : Byte;
   CC : Single;
   DD : Single;
   EE : Double;
   FF : Extended;
   GG : Extended;
   


begin
   AA := 8427;
   BB := 33;
   CC := 1440.0;
   DD := AA+BB/CC;
   EE := AA+BB/CC;
   FF := AA+BB/CC;
   GG := 8427+33/1440.0;
   
   WRITELN ( 'DD = ',DD: 20 : 20 ) ;

   WRITELN ( 'EE = ',FF: 20 : 20 ) ;
   WRITELN ( 'FF = ',FF: 20 : 20 ) ;
   WRITELN ( 'GG = ',GG: 20 : 20 ) ;
   WRITELN ( 'HH = ',HH: 20 : 20 ) ;
end.


result:

DD = 8427.02246100
EE = 8427.022916625000
FF = 8427.022916625000
GG = 8427.022460937500
HH = 8427.022916625000


IMO, the computation of AA+BB/CC (the right-hand side) should be carried 
out the same way regardless of the type on the left-hand side of the 
assignment. So I would expect the values in DD, EE and FF to be the same.

But as it seems, the left-hand side (the type of the target variable) 
HAS AN INFLUENCE on the computation of the right-hand side, and so we 
get (for example)

DD = 8427.02246100

and

EE = 8427.022916625000

which IMHO is plain wrong.

If all computations of AA+BB/CC were carried out involving only single 
precision, all results DD, EE, FF (maybe not GG) should be 8427.0224..., 
with only minor differences because of the different precisions of the 
target variables (but not as large as the difference between DD and EE 
above).

This would be OK IMHO; it would be easy to explain to everyone the 
reduced precision of these computations as a consequence of the types 
of the operands involved.

Another question, which should be answered separately:

the compiler apparently assigns types to FP constants.
It does so depending on whether a certain decimal representation can be 
represented exactly in the FP format or not.

1440.0 and 1440.5 can be represented in single precision, so the FP 
type single is assigned;
1440.1 cannot, because 0.1 is an unlimited sequence of hex digits, so 
(I guess) the biggest available FP type is assigned;
1440.25 probably can, so type single is assigned;
1440.3: biggest FP type;
1440.375: probably single;

and so on.

Now: who is supposed to know, for any given decimal representation of 
an FP constant, whether it can be represented in a single-precision FP 
variable? This depends on the length of the decimal representation, 
among other things ... and the fraction part has to be a sum of 
negative powers of 2, etc.
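The distinction can be checked directly: widening a single to double changes nothing exactly when the decimal fraction fits in binary. A small test sketch:

```pascal
program ReprDemo;

var
  s: Single;
  d: Double;

begin
  s := 1440.375; { 0.375 = 3/8, exactly representable in binary }
  d := 1440.375;
  WriteLn(Double(s) = d); { TRUE: the single value is already exact }

  s := 1440.1; { 0.1 has no finite binary expansion }
  d := 1440.1;
  WriteLn(Double(s) = d); { FALSE: single and double round differently }
end.
```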


That said: wouldn't it make more sense to give EVERY FP CONSTANT the 
FP type with the best available precision?

If the compiler did this, the problems which arise here could be 
solved, I think.

GG in this case would have the same value as HH, because the 
computation involving the constants (hopefully done by the compiler) 
would be done with the best available precision.


HTH, kind regards

Bernd


Am 06.02.2024 um 16:23 schrieb James Richters via fpc-pascal:

program TESTDBL1 ;

Const
HH = 8427.02291667;
Var
AA : Integer;
BB : Byte;
CC : Single;
DD : Single;
EE : Double;
FF : Extended;
GG : Extended;



begin
AA := 8427;
BB := 33;
CC := 1440.0;
DD := AA+BB/CC;
EE := AA+BB/CC;
FF := AA+BB/CC;
GG := 8427+33/1440.0;

WRITELN ( 'DD = ',DD: 20 : 20 ) ;

WRITELN ( 'EE = ',FF: 20 : 20 ) ;
WRITELN ( 'FF = ',FF: 20 : 20 ) ;
WRITELN ( 'GG = ',GG: 20 : 20 ) ;
WRITELN ( 'HH = ',HH: 20 : 20 ) ;
end.

When I do the division of a byte by a single and store it in an 
extended, I get the division carried out as an extended.
FF, GG, and HH should all be exactly the same if there is no bug.
But:

DD = 8427.02246100
EE = 8427.022916625000
FF = 8427.022916625000
GG = 8427.022460937500
HH = 8427.022916625000
___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-11 Thread Bernd Oppolzer via fpc-pascal

Am 11.02.2024 um 17:31 schrieb Florian Klämpfl via fpc-pascal:

On 09.02.24 15:00, greim--- via fpc-pascal wrote:

Hi,

my test with Borland Pascal 7.0, running 80x87 code in dosemu2.
The compiler throws an error message when calculating HH and II with 
explicit type conversion.

The results of FF and GG are the same!
Even on a 16-bit system!

I think this behavior is right!


The x87 FPU behavior is completely flawed, as its precision depends 
not on the instruction used but on the state of the FPU.

Overall, intermediate float precision is a very difficult topic. 
The famous Goldberg article 
(https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html) does 
not suggest using the highest possible precision after all. An 
additional interesting read: 
https://randomascii.wordpress.com/2012/03/21/intermediate-floating-point-precision/


Many thanks for the links; I read them with interest. For me - working 
almost every day with IBM systems - the remarks on the old IBM 
hexadecimal FP format (base 16) are very interesting. Today's IBM 
systems support IEEE as well.


IMO, the question regarding FP constants (not variables) in compilers 
is not yet fully answered. If we have an expression consisting only of 
FP constants, as in the original coding: should the FP constants indeed 
be given different FP types by the compiler? Or should the FP constants 
all have the same type ... the largest type available?
This would automatically lead to a computation using the maximum 
precision, no matter whether it is done at compile time or at run time 
... and this would IMHO be the solution which is the easiest to 
document and maybe to implement, and which would satisfy the users.

Kind regards

Bernd Oppolzer

___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-13 Thread Bernd Oppolzer via fpc-pascal

In the example below, the performance argument does not count, IMO,
because the complete computation can be done at compile time.

That's why, IMO, in all 3 cases the values on the right side should be 
computed with maximum precision (independent of the left side, of 
course), and in an ideal world this should be done at compile time. 
But if not: with maximum precision anyway.
Tagging the FP constants with FP attributes like single, double and 
extended, and then doing arithmetic on them which leads to unexpected 
MATHEMATICAL results, is IMO wrong and would not be accepted in most 
other programming languages or compilers.


This is NOT about variables ... they have attributes, and there you can 
explain all sorts of strange behaviour. It's about CONSTANT EXPRESSIONS 
(which can and should be evaluated at compile time, and the result 
should be the same, no matter whether the evaluation is done at compile 
time or not).

That said:

if you have arithmetic involving a single variable and a FP constant, say

x + 1440.0

you don't need to handle this as extended arithmetic IMO, if you 
accept my statement above.
You can treat the 1440.0 as a single constant in this case, if you 
wish. It's all about context ...


Kind regards

Bernd


Am 12.02.2024 um 10:44 schrieb Thomas Kurz via fpc-pascal:

I wouldn't say so. Or at least, not generally. Why can't the compiler 
do what the programmer intends:

var
   s: single;
   d: double;
   e: extended;
   
begin

   s := 8427.0 + 33.0 / 1440.0; // treat all constants as "single"
   d := 8427.0 + 33.0 / 1440.0; // treat all constants as "double"
   e := 8427.0 + 33.0 / 1440.0; // treat all constants as "extended"
end.

Shouldn't this satisfy all needs? Those caring about precision will 
work with double precision and need not worry about a loss of 
precision. Those caring about speed can use the single-precision type 
and be sure that no costly conversion to double or extended will take 
place.




- Original Message -
From: Jonas Maebe via fpc-pascal 
To: fpc-pascal@lists.freepascal.org 
Sent: Sunday, February 11, 2024, 23:29:42
Subject: [fpc-pascal] Floating point question

On 11/02/2024 23:21, Bernd Oppolzer via fpc-pascal wrote:

and this would IMHO be the solution which is the easiest to document and
maybe to implement
and which would satisfy the users.

And generate the slowest code possible on most platforms.


Jonas
___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal



Re: [fpc-pascal] Floating point question

2024-02-13 Thread Bernd Oppolzer via fpc-pascal

Am 13.02.2024 um 10:54 schrieb Michael Van Canneyt via fpc-pascal:



On Tue, 13 Feb 2024, James Richters via fpc-pascal wrote:

Sorry for the kind-of duplicate post; I submitted it yesterday morning 
and thought it had failed, so I re-did it and tried again ... then 
after that the original one showed up.

A thought occurred to me. Since the compiler math expects all the 
constants to be in full precision, the compiler math doesn't need to 
change; it's just that the reduction in precision happens too soon. 
Each term of an expression is evaluated and reduced in precision, then 
the math happens, and the answer does not come out right.

If instead everything were left at full precision until after the 
compiler math (because this is what the compiler math expects), and the 
final answer were then reduced in precision where possible, it would 
work flawlessly. So the precision-reduction step only needs to run 
once, on the final answer, not on every term before the calculation.


As Jonas said, this would result in less efficient code, since all the 
math would then be done at full precision, which is slower.

As usual, it is a trade-off between size (= precision) and speed.

Michael.



But, sorry: since we are talking about compile-time math, performance 
(nanoseconds) doesn't count in this case, IMO.



___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-13 Thread Bernd Oppolzer via fpc-pascal

My opinions about the solutions below ...


Am 13.02.2024 um 12:07 schrieb Thomas Kurz via fpc-pascal:

But, sorry, because we are talking about compile time math, performance 
(nanoseconds) in this case doesn't count, IMO.



That's what I thought at first, too. But then I started thinking about how to
deal with it and stumbled upon difficulties very soon:

a) 8427.0 + 33.0 / 1440.0
An easy case: all constants, so do the calculation at highest precision and 
reduce it afterwards, if possible.

I agree; I would say:
all constants, so do the calculation at highest precision and reduce it 
afterwards, if required by the target
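As an illustration of the trade-off (a sketch, not FPC's actual code paths): IEEE single rounding can be simulated in any language by a round-trip through a 32-bit float. The `to_single` helper below is a hypothetical stand-in for the compiler's precision-reduction step.

```python
import struct

def to_single(x: float) -> float:
    """Round an IEEE double to the nearest IEEE single (binary32)."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Case a) 8427.0 + 33.0 / 1440.0, all constants.
# Evaluate at full (double) precision, reduce only the final answer:
full = 8427.0 + 33.0 / 1440.0

# Reduce every term to single BEFORE the arithmetic (the "too soon" case):
term = to_single(to_single(33.0) / to_single(1440.0))
reduced_first = to_single(to_single(8427.0) + term)

# The early reduction already costs several decimal digits:
assert full != reduced_first
assert reduced_first == 8427.0224609375   # vs. 8427.0229166666...
```

The two results differ in the fifth significant digit after the decimal point, which is exactly the kind of surprise the thread is about.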


b) var_single + 33.0 / 1440.0
Should also be feasible by evaluating the constant expression first, then
reducing it to single (if possible) and adding the variable at the end.

Yes ... first evaluate the constant expression with maximum precision
(best at compile time), then reduce the result. The reduction to single
must be done in any case, because the var_single in the expression
dictates it, IMO.


c) 8427.0 + var_double / 1440.0
Because of using the double-type variable here, constants should be treated as 
double even at the cost of performance due to not knowing whether the result 
will be assigned to a single or double.

yes


d) 8427.0 + var_single / 1440.0
And this is the one I got to struggle with. And I can imagine this is the 
reason for the decision about how to handle decimal constants.
My first approach would have been to implicitly use single precision values throughout 
the expression. This would mean to lose precision if the result will be assigned to a 
double-precision variable. One could say: "bad luck - if the programmer intended to 
get better precision, he should have used a double-precision variable as in case c". 
But this wouldn't be any better than the current state we have now.

8427.0 + (var_single / 1440.0)

The 1440.0 can be reduced to single, because the other operand is single,
and so the whole operation is done using single arithmetic.

If here we had a FP constant instead of var_single, the whole operation
IMO should be done with maximum precision, and at compile time in the
best case. I have no problem that this operation may give a different
result with decimal constants than with explicitly typed (reduced) FP
variables. This can be easily explained to the users: operations
involving FP variables with reduced precision may give reduced-precision
results. This seems to be desirable for performance reasons and can be
avoided by appropriate type casting.
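A sketch of case d) in the same spirit (simulating single arithmetic with a round-trip through binary32; the `to_single` helper is a hypothetical stand-in, not FPC's actual evaluator): doing the whole operation in single loses digits, while an explicit promotion of var_single recovers them.

```python
import struct

def to_single(x: float) -> float:
    """Round an IEEE double to the nearest IEEE single (binary32)."""
    return struct.unpack('f', struct.pack('f', x))[0]

var_single = to_single(33.0)      # 33.0 is exactly representable in single

# var_single dictates single arithmetic for the division:
reduced = to_single(var_single / to_single(1440.0))

# An explicit "cast" to double before dividing recovers full precision:
promoted = float(var_single) / 1440.0

exact = 33.0 / 1440.0
assert promoted == exact          # casting avoids the precision loss
assert reduced != exact           # single quotient differs from the double one
```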
___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-16 Thread Bernd Oppolzer via fpc-pascal

Am 16.02.2024 um 08:32 schrieb Florian Klämpfl via fpc-pascal:
Am 16.02.2024 um 08:23 schrieb Ern Aldo via fpc-pascal:

 Compile-time math needs to be as correct as possible. RUN-time math can worry 
about performance.

So you are saying when constant propagation is on, an expression should have a 
different result than with constant propagation off?


I don't know exactly what you mean by constant propagation.

But IMO, given this (sort of fictitious) Pascal code snippet:


const Xconst : single = 1440.0;

var y1, y2 : real;

y1 := 33.0 / 1440.0;
y2 := 33.0 / Xconst;


The division in the first assignment (to y1) should be done at maximum
precision; that is, both constants should be converted by the compiler to
the maximum available precision, and the division should be done (best at
compile time) using this precision.

In the second case, if the compiler supports constants of the reduced
type (which I believe it does, no matter what the syntax is), I find it
acceptable if the computation is done using single precision, because
that's what the developer calls for.

So probably the answer to your question is: yes.

Kind regards

Bernd




___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-16 Thread Bernd Oppolzer via fpc-pascal



Am 16.02.2024 um 15:57 schrieb James Richters via fpc-pascal:

So you are saying when constant propagation is on, an expression should have a 
different result than with constant propagation off?

The result of math when using constants MUST be the same as the result of 
identical math using variables.

There should never be a difference if I did my formula with hard-coded
constants vs variables.

 Const_Ans  = 2.0010627116630224
 Const_Ans1 = 2.0010627116630224
 Var_Ans1   = 2.

This should not be happening.

James


See my other post: if the developer explicitly wants reduced precision,
then this is what happens.

But the reduced precision should not come unexpectedly, simply because
the compiler attaches type attributes to constants (which can't be
easily explained), and then the outcome of simple decimal arithmetic is
incorrect.

So I have to disagree, sorry.

___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-17 Thread Bernd Oppolzer via fpc-pascal

Am 17.02.2024 um 02:12 schrieb Ern Aldo via fpc-pascal:


Is it possible that math is being done differently by the compiler than
by programs? For math-related source code, the compiler compiles the
instructions and writes them to the program file for execution at
runtime. For compile-time constant calculations that produce run-time
constant values, one would expect the compiler to compile the
instructions, execute them during compilation, and write the resulting
value to the program file for use at runtime. Such instructions are then
discarded, because the program does not need them. If math is being
compiled differently for program-executed calculations versus
compiler-executed calculations, then that would be a problem.


I'll try to comment on this using some source code which hopefully
conforms to FPC, but I am not sure, because I am not familiar with FPC
standards. Please look:


Const
   A_const = Integer(8427);
   B_const = Byte(33);
   C_const = Single(1440.5);

y1 := A_const + C_const / B_const;
y2 := 8427 + 1440.5 / 33;

In my understanding, in the first assignment the constants have the
types which are given to them by the const declarations, and that's why
the computation is done using single precision. This would be OK for me,
because the developer decided to do the definitions this way, and so he
or she takes responsibility.

Whether the computation is done at run time or at compile time DOESN'T
MATTER.

In the second case, using literal constants, the compiler should do the
math using the maximum precision available (IMO), because one constant
(1440.5) has a FP representation. It does not and should not matter that
this constant can be stored exactly in a single FP field. Here again:
whether the computation is done at run time or at compile time DOESN'T
MATTER.

Maybe this is not how FPC works today, but IMO this is how it should be
done, because we want (IMO) Pascal to be a clear language which is
simple to explain, easy to use and easy to implement.


The case would be different, of course, if you did the same casting in
the y2 case as in the const declarations.
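The two assignments can be mimicked numerically (a hedged sketch: the hypothetical `to_single` helper below simulates IEEE single rounding and does not claim to reproduce FPC's code generation):

```python
import struct

def to_single(x: float) -> float:
    """Round an IEEE double to the nearest IEEE single (binary32)."""
    return struct.unpack('f', struct.pack('f', x))[0]

# y1 := A_const + C_const / B_const  -- typed consts force single arithmetic:
y1 = to_single(8427.0 + to_single(to_single(1440.5) / 33.0))

# y2 := 8427 + 1440.5 / 33  -- literal constants at maximum precision:
y2 = 8427.0 + 1440.5 / 33.0

# The declared constant types change the observable result:
assert y1 != y2
assert abs(y1 - y2) > 1e-5
```

Note that 1440.5 itself is exactly representable in single; the divergence comes entirely from performing the division and addition at single precision.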

Kind regards

Bernd

___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-17 Thread Bernd Oppolzer via fpc-pascal

Am 17.02.2024 um 14:38 schrieb Michael Van Canneyt via fpc-pascal:


There can be discussion about the rules that the compiler uses when it 
chooses a type, but any given set of rules will always have 
consequences that may or may not be desirable.


Possibly some compiler switches can be invented that modify the
compiler's rules for the constant type to use.


If the rules at the moment make this a single:

const xs = 64.015625;   { 64 + 1 / 64 }

because it can be represented exactly (without rounding error) in a
binary single FP IEEE representation, and this a double or extended type:

const xd = 64.1;  { no finite representation in binary or hex }

with all the observed effects on computations that the other posters
here have pointed out ...


my personal opinion would be:

- never use such (implicitly typed) definitions ... but that's standard
Pascal, after all

- try to convince the compiler builders that we need a better solution here

IMO, a compiler switch that gives all FP constants the best available
precision would solve the problem - BTW: WITHOUT forcing expressions
where they appear to use this precision, if the other parts of the
expression have lower precision.

In fact, when parsing and compiling the expressions, you can always
break the problem down to TWO operands that you have to consider, and if
one of them is a literal constant, it should not force the type of the
operation to a higher precision ... that's what I would do.

That's why I write all these mails (although I am not an active FPC
user): because I want all Pascal versions around to implement a clear
and UNDERSTANDABLE language without strange effects.

Kind regards

Bernd


(incidentally, this is one of the reasons the FPC team does not want to
make inline variables as Delphi does, since there the type will again be
determined by the compiler - just as for constants, leading to
ambiguity...)

Michael.
___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal



Re: [fpc-pascal] Floating point question

2024-02-17 Thread Bernd Oppolzer via fpc-pascal

Am 17.02.2024 um 16:38 schrieb Bernd Oppolzer:


IMO, a compiler switch that gives all FP constants the best available
precision would solve the problem - BTW: WITHOUT forcing expressions
where they appear to use this precision, if the other parts of the
expression have lower precision.

In fact, when parsing and compiling the expressions, you can always
break the problem down to TWO operands that you have to consider, and if
one of them is a literal constant, it should not force the type of the
operation to a higher precision ... that's what I would do.



Commenting on my own post (this time):

const xs : single = 1440.5;
      xd : double = 1440.5;
      xu = 1440.5;       { double or single, depending on new option }

      z : single = 33.0;

y1 := xs / z;        { single precision }
y2 := xd / z;        { double precision }
y3 := xu / z;        { different result, depending on new option }
y4 := 1440.5 / z;    { single, because z dictates it, independent of option }
y5 := 1440.1 / z;    { IMO: single, because z dictates it, independent of option }

y6 := 1440.5 / 33.0; { depending on new option }


This may be in contrast to what's done in FPC today, but that's how I
(personally) would like to have it done. Maybe the behaviour without the
new option set is the same as now.

Not sure about y5.

Kind regards

Bernd





___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-17 Thread Bernd Oppolzer via fpc-pascal

Am 17.02.2024 um 20:18 schrieb Florian Klämpfl via fpc-pascal:



const Xconst : single = 1440.0;

var y1, y2 : real;

y1 := 33.0 / 1440.0;
y2 := 33.0 / Xconst;

The division in the first assignment (to y1) should be done at maximum
precision; that is, both constants should be converted by the compiler
to the maximum available precision, and the division should be done
(best at compile time) using this precision.

Constant folding is an optimization technique, so the first expression
could also be evaluated at run time in the case of a simple compiler
(constant folding is not mandatory), which means that we would always
have to use full precision (what "full" means depends on the host and
target platform, though) for real operations. So either: always full
precision, with the result that all operations get bloated, or some
approach to assign a precision to real constants.


No problem here; the result of y1 must be the same, no matter if the
computation is done at compile time or at run time. The result should
always be computed at the best precision available, IMO (maybe
controlled by a compiler option, which I personally would set).

y2: the computation could be done using single precision, because the
second operand says so. IMO: even if the first operand were a literal
constant which cannot be represented exactly in a single FP field.


It gets even more hairy if more advanced optimization techniques are 
involved:


Consider

var
   y1, y2 : single;

y1 := 1440.0;
y2 := 33.0 / y1;

When constant propagation and constant folding are on (both are 
optimizations), y2 can be calculated at compile time and everything 
reduced to one assignment to y2. So with your proposal the value of y2 
would differ depending on the optimization level.


If y2 is computed at compile time (which is fine), then the result IMO
is determined by the way the source code is written. A possible
optimization must not change the meaning of the program as given by the
source code. So in this case, the compiler would have to do a
single-precision division (if we could agree on the rules that we
discussed so far), and the meaning of the program may not be changed by
optimization techniques (that is: optimization may not change the result
to a double or extended precision division ... otherwise the
optimization is wrong).
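The rule that folding may not change the result can be stated as a small invariant (a sketch; the `to_single` helper is a hypothetical stand-in for IEEE single rounding, used to simulate the declared precision of y1 and y2):

```python
import struct

def to_single(x: float) -> float:
    """Round an IEEE double to the nearest IEEE single (binary32)."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Run-time version:  y1 := 1440.0;  y2 := 33.0 / y1;  (both singles)
y1 = to_single(1440.0)
runtime_y2 = to_single(33.0 / y1)

# A correct compile-time fold performs the SAME single-precision division,
# so the observable value of y2 does not depend on the optimization level:
folded_y2 = to_single(33.0 / 1440.0)
assert folded_y2 == runtime_y2

# Folding at full precision WITHOUT the final reduction would change y2:
assert folded_y2 != 33.0 / 1440.0
```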


BTW: many of the ideas about what a compiler should do come from my 30+
years of experience with PL/1. That may be a sort of "déformation
professionnelle", as the French call it, but that's how it is.


Apart from the proper handling of literal FP constants (which is what we
discuss here, IMO), there is another topic which is IMO also part of the
debate: does

 y2 := 33.1 / y1;

require the division to be done at single precision or not?

We have here a literal constant which is NOT single (33.1) and a single
variable operand. I understood from some postings here that some people
want the divisions with singles carried out using single arithmetic, for
performance reasons, so I asked for a single division here (in my
previous postings). But IMO that's different in the current
implementation ... what do others think about this?


I, for my part, would find it strange if the precision of the division
in this case depended on the (implicit) type of the operand, that is:

 y2 := 33.015625 / y1;  { single precision, because constant is single: 33 + 1/64 }

 y2 := 33.1 / y1;       { extended precision, because constant is extended }

IMO, both of these divisions should be done at single precision,
controlled by the type of y1.

But this could be controlled by ANOTHER new option, if someone asks for it.

Kind regards

Bernd
___
fpc-pascal maillist  -  fpc-pascal@lists.freepascal.org
https://lists.freepascal.org/cgi-bin/mailman/listinfo/fpc-pascal


Re: [fpc-pascal] Floating point question

2024-02-20 Thread Bernd Oppolzer via fpc-pascal

See below ...


Am 19.02.2024 um 02:00 schrieb James Richters via fpc-pascal:


> And if you have set the precision, then the calculation will be
> identical to the calculation when you use a variable of the same type
> (if not, it's indeed a bug).

This is what I have been trying to point out. Math with identical
casting with variables and constants is not the same.


Maybe if I try with a simpler example:

program Const_Vs_Var;

Const
   A_const = Byte(1);
   B_const = Single(3.5);

Var
   A_Var : Byte;
   B_Var : Single;
   Const_Ans1, Var_Ans1 : Extended;

Begin
   A_Var := A_Const;
   B_Var := B_Const;
   Const_Ans1 := Extended(Byte(A_Const)/Single(B_Const));
   Var_Ans1 := Extended(Byte(A_Var)/Single(B_Var));
   WRITELN ( ' Const_Ans1 = ', Const_Ans1);
   WRITELN ( ' Var_Ans1 = ', Var_Ans1);
End.

 Const_Ans1 = 2.85714298486709594727E-0001
 Var_Ans1 = 2.85714285714285714282E-0001

Windows 10 Calculator shows the answer to be
0.28571428571428571428571428571429, which matches up with the way
variables have done this math, not the way constants have done it.




You don't need a calculator for 2 / 7 or 1 / 3.5. There is a simple rule
for the decimal representation when dividing by 7:

1 / 7 = 0.142857 ...   repeat ad infinitum
2 / 7 = 0.285714
3 / 7 = 0.428571
4 / 7 = 0.571428
5 / 7 = 0.714285
6 / 7 = 0.857142

You see the pattern? You simply have to rotate the six digits in a
certain manner ...
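Both of James's printed values, and the rotation rule, can be checked mechanically (a sketch; `to_single` is a hypothetical helper simulating IEEE single rounding, not part of any FPC API):

```python
import struct

def to_single(x: float) -> float:
    """Round an IEEE double to the nearest IEEE single (binary32)."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Const_Ans1 = 2.85714298486709594727E-0001 is exactly 1/3.5 (= 2/7)
# rounded to single; Var_Ans1 is the same quotient at higher precision.
single_q = to_single(1 / 3.5)
assert abs(single_q - 0.285714298486709594727) < 1e-15
assert abs(single_q - 2 / 7) > 1e-9       # visibly off from the true value

def sevenths(n: int, k: int = 6) -> str:
    """First k decimal digits of n/7, by long division."""
    out, r = "", n
    for _ in range(k):
        r *= 10
        out += str(r // 7)
        r %= 7
    return out

# The rotation rule: every n/7 is a rotation of the same six digits.
for n in range(1, 7):
    assert sevenths(n) in "142857142857"
assert sevenths(2) == "285714"
```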



I am explicitly casting everything I possibly can.



I don't think you need the cast to Extended around the divisions; the
divisions are done at different precision, which causes your problem,
but the cast to Extended at the end doesn't help ... it will be done
anyway, because the target field is Extended.

The problem indeed is that the division is done differently for consts
and for vars, and this seems to be the case on Windows only, as another
poster pointed out.

This seems to be a real bug.

When casting this way

Byte(A_Var)/Single(B_Var)

I would expect the division to be done with single precision, but
apparently it is done using extended (or another) precision ... on
Windows, not on Linux. And this is what causes your headaches.


Without the :20:20 you can see that the result of each of these is in
fact Extended, but they are VERY different numbers, even though my
casting is IDENTICAL, and I can't make it any more the same; the results
are very different. Math with variables allows the result of a
low-precision entity (in this case a Byte) divided by a low-precision
entity (in this case a Single) to be calculated and stored in an
Extended; math with constants does not allow this possibility, and this
is where all the confusion is coming from. Two identical pieces of code
are not producing the same results.

Math with constants is NOT the same as math with variables, and if this
one thing were fixed, then all the other problems would go away.


I am doing:

Const_Ans1 := Extended(Byte(A_Const)/Single(B_Const));
Var_Ans1 := Extended(Byte(A_Var)/Single(B_Var));

just to make a point, but the code:

Const_Ans1 := A_Const/B_Const;
Var_Ans1 := A_Var/B_Var;

should also produce identical results without re-casting, because
A_Const and A_Var are both defined to be a Byte, B_Const and B_Var are
both defined to be a Single, and Const_Ans1 and Var_Ans1 are both
defined to be Extended.


Why are the results different?

As I tried to explain before, if I force all constants to be Extended:

Const_Ans1 := Extended(Extended(A_Const)/Extended(B_Const));

then I do get the correct results, but this should not be needed, and
this casting is wrong, because a byte divided by a single should be able
to be extended without first storing them in extended entities, the same
as it is with variables.


With variables I do not need to re-cast every single term in an
expression as Extended to get an Extended answer.

With constants this is the ONLY way I can get an extended answer.

Before the changes in 2.2, all constants WERE at highest precision, so
math involving constants never had to consider that a low-precision
number divided by a low-precision number could end up as an extended,
because there were no low-precision constants at all. But now there are,
and that's really fine, because we often have low-precision variables.
But the math needs to be done the same way whether with constants or
variables, to produce identical results; so math with constants also has
to take into consideration that math with low-precision entities can,
and often does, result in a high-precision answer.


To demonstrate that a low-precision entity divided by a low-precision
entity should always be able to be an Extended, use this example with my
constants as Bytes, so there can be no lower precision:


program Const_Vs_Var;

Const
   A_const = Byte(2);
   B_const = Byte(7);

Var
   A_Var : Byte;
   B_Var : Byte;
   Const_Ans1, Const_Ans2, Var_Ans1 : Extended;

Begin
   A_Var := Byte(A_Const);
   B_Var := Byte(B_Const);
   Con

Re: [fpc-pascal] Fwd: What to do to get new users

2024-10-20 Thread Bernd Oppolzer via fpc-pascal

No shitstorm from my part :-)

I have been working with Pascal, C and other programming languages (PL/1
for example) for more than 40 years now, and I sometimes think about
what makes programming languages secure or insecure - or: what are the
common reasons for runtime errors?

Some observations:

1. A big problem IMO is the null-termination of C strings (or: the
possibility to use it or not use it, say: memcpy). This C paradigm often
creates hard-to-find runtime errors; languages which operate on
fixed-size strings or strings with length fields, like PL/1 varchars,
don't have this problem and are more secure.


2. Automatic checking of array bounds should always be enabled, and I
prefer languages that support this (like Pascal and PL/1, for example).
For example: in the 1980s, I translated a large Fortran program to
Pascal, and I suddenly observed how many bounds-checking errors still
remained in the Fortran program (although it had been in production use
for 4 years).

3. Standard Pascal has only pointers which point to the heap and are
created by the NEW procedure. I am the maintainer of another Pascal
dialect (Stanford Pascal) - and, like many others, I have an ADDR
function which allows assigning the address of a stack variable to a
pointer (plus pointer arithmetic and so on). This may look like
Pandora's box, but IMO it is needed to write useful programs - and it
needs to be done carefully, of course. Runtime errors are created by
using uninitialized pointers etc.; but that's not much different from
using ANY OTHER variable without initialization. We have no protection
against this with current platforms.


4. This said, missing initializations are another major source of
runtime errors. I work a lot on IBM mainframes. IMO, a big part of the
"stability" of the mainframe platform comes from the fact that on this
platform many programs (written in COBOL and PL/1) work with DECIMAL
data, where uninitialized variables (with hex zeroes) are NOT VALID and
create runtime errors when referenced. In contrast, with binary
variables like integers, every bit pattern has a meaning, and you never
get a runtime error when referencing a variable which has not been
initialized. With some historic platforms like the Telefunken TR440
machine, you had the possibility to initialize the stack frames with a
bit pattern that always creates runtime exceptions when you reference
uninitialized data ... this was possible because of the storage tags the
Telefunken machine had. If you referenced uninitialized data (even
integer), the Telefunken machine produced a runtime error, called
"Typenkennungs-Alarm". Unfortunately, such concepts of the 1960s and
1970s didn't survive.

5. My observation regarding C++ versus C: we once added C++ components
to a large C software package (insurance math). The C++ evangelists told
us that we would get more security (among other goodies). But what we
observed after some months or years: in the C++ area, in almost every
release we had hard-to-find memory leaks (the application requiring more
and more memory until a hard stop). It took advanced tools like Valgrind
or special diagnostic memory managers (from IBM) to diagnose and repair
the C++ functions which were the culprits.

My summary:

- Pascal - by language definition - is much better in this respect than
C or C++.
- Garbage collection doesn't really help and makes things slow (counter
example: in my Stanford Pascal, when working with strings, for example
concatenation etc., this is done in the string work area, and the
temporaries there are garbage collected at statement boundaries ... this
is necessary and cannot be avoided).
- To write useful programs, you occasionally need pointers and functions
operating on them (using explicit lengths, maybe), and you have to take
care when using such functions.


HTH, kind regards

Bernd



Am 19.10.2024 um 16:54 schrieb greim--- via fpc-pascal:

Regarding Memory Management

It's possible to write a Pascal program w/o any pointer, but it may not
be elegant, and interfaces to some C-like GUI structures, as used in all
common OSs, are impossible.

But I have been using Borland Pascal (sic!) and also FreePascal (no
Lazarus) for small embedded systems for over 25 years now w/o any
pointer. The code is sometimes ugly, but it is (proven in reality with
many different projects) rock solid for 24/7/365 applications. Maybe I
am wrong, but afaik, procedural programming w/o objects and pointers
requires no additional memory management. The size and memory location
of all variables is fixed. And, yes, of course, you have to take care
with array accesses, but $R+ is your friend.


See: N. Wirth, Algorithms and Data Structures, chapter 4.2:
"A further consequence of the explicitness of pointers is that it is
possible to define and manipulate cyclic data structures. This
additional flexibility yields, of course, not only increased power but
also requires increased care