Re: Unexpected result with inner product
Hi again, Gentlemen,

I double-checked with IBM APL2 and the output of IBM APL2 was indeed what Elias was expecting.

My previous answer below was only half-way correct: like I supposed, the problem was in fact related to enclose and disclose in the inner product. However, the problem was not caused by the regular behaviour of enclose and disclose, but simply by a missing disclose in the APL macro that implements the inner product when called with a defined function.

Fixed in SVN 1515.

Best Regards,
Jürgen

On 1/26/22 10:37 PM, Blake McBride wrote:

As a more general comment relating to these sorts of issues, I offer the following opinion.

I imagine that there are many variations that can legitimately be argued. Where one lands on an issue is somewhat arbitrary. In some instances, X is better and makes more sense, and in other instances, Y makes more sense. Sometimes the answer is logically clear, but more often not.

For better or worse, given its somewhat "standard-setting" regard, I think of IBM's APL-2 as "the standard". For me, it's not a matter of who is right. That can be debated ad nauseam. I just consider IBM APL-2 to be APL-2. All else are variations on a theme.

It is my understanding that one of the main goals of GNU APL is to provide an open-source implementation of IBM APL-2. If one were looking for a platform to do explorations in the APL space, we already have NARS2000, KAP, and other vendors to a lesser degree. I do not think GNU APL was attempting the same thing.

If I am correct, these sorts of debates are far simpler. We don't debate the various merits. Rather, we simply compare the results with IBM APL-2. Case closed.

Also, if my view of GNU APL is correct, I like this fact a lot! For better or worse, it works a specific way and won't change because someone has a good example and argument. I am interested in stability and reliability.

Just an opinion.

Blake McBride

On Wed, Jan 26, 2022 at 2:47 PM Dr. Jürgen Sauermann <mail@jürgen-sauermann.de> wrote:

Hi Elias,

I suppose the reason is roughly this:

Some interpreters, including IBM APL2 and GNU APL, sometimes allow 1-element vectors (let's call them quasi-scalars) in places where, strictly speaking, scalars would be required.

Your partial result 0/x (for some a=0) is always a vector, while 1/x (for some other a=1) is always a 1-element vector, which is subject to being treated as a scalar instead.

When the inner product f.g or the outer product ∘.g gets a non-scalar result from g, then it will enclose that result before the f/ and disclose it again after the f/.

The final disclose will in your case see a mix of 0-element and 1-element vectors and will scalar-extend the 1-element quasi-scalars to the common shape of all items, which is, in your example, empty.

A different A reveals this:

      (1⌈A≠0) +.Q B
 6 6
 6 6
 6 6

Best Regards,
Jürgen

On 1/26/22 5:25 AM, Elias Mårtenson wrote:

Consider the following code:

A←3 4⍴1 3 2 0 2 1 0 1 4 0 0 2
B←4 2⍴4 1 0 3 0 2 2 0
Q←{⍺/⍵}
(A≠0) +.Q B

My reading (and implementation) of the ISO spec suggests the output should be the following:

┏━━━┓
┃4 6┃
┃6 4┃
┃6 1┃
┗━━━┛

However, in GNU APL I get this:

┏→━━━━━━┓
↓┏⊖┓ ┏⊖┓┃
┃┃0┃ ┃0┃┃
┃┗━┛ ┗━┛┃
┃┏⊖┓ ┏⊖┓┃
┃┃0┃ ┃0┃┃
┃┗━┛ ┗━┛┃
┃┏⊖┓ ┏⊖┓┃
┃┃0┃ ┃0┃┃
┃┗━┛ ┗━┛┃
┗∊━━━━━━┛

Which one is correct?

Regards,
Elias
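(A sketch added here for illustration, not taken from the thread: the expected numbers can be reproduced in plain C by following the ISO reading literally, i.e. each result element is the sum of those B[k;j] for which (A≠0)[i;k] is 1. The snippet below only shows where 4 6 / 6 4 / 6 1 come from; it says nothing about how GNU APL or IBM APL2 implement the inner product internally.)

    /* Plain C rendering of the ISO reading of (A≠0) +.{⍺/⍵} B for the
       A and B above: result[i][j] = +/ ((A[i][k]≠0)/B[k][j]) over k.   */
    #include <stdio.h>

    int main(void)
    {
        const int A[3][4] = { {1,3,2,0}, {2,1,0,1}, {4,0,0,2} };
        const int B[4][2] = { {4,1}, {0,3}, {0,2}, {2,0} };

        for (int i = 0; i < 3; ++i) {
            for (int j = 0; j < 2; ++j) {
                int sum = 0;
                for (int k = 0; k < 4; ++k)
                    if (A[i][k] != 0)      /* (A≠0)[i;k]/B[k;j] keeps or drops B[k][j] */
                        sum += B[k][j];
                printf(" %d", sum);
            }
            printf("\n");                  /* rows come out as: 4 6, 6 4, 6 1 */
        }
        return 0;
    }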
Re: Absolute limits of rank 2 bool matrix size in GNU APL?
Hi,

coming back to this discussion, I have added a command line option --mem (see info apl, chapter 1.3, for details) that gives you more direct control over the total memory allocation of the GNU APL interpreter.

Best Regards,
Jürgen

On 12/29/21 12:03 AM, Kacper Gutowski wrote:

This is somewhat tangential but,

On Tue, Dec 28, 2021 at 01:25:19PM -0600, Blake McBride wrote:
> Level 1: you are using the RAM that exists (not over-committed).
> Level 2: you are using more RAM than you have, causing over-commit and paging.
> Level 3: you allocate more memory than exists in RAM + paging.

In the context of memory handling in Linux, your "level 2" is not considered to be an overcommitment yet. When you have swap configured, this is a perfectly normal mode of operation and, depending on the workload, it might not be problematic at all. The commit limit is calculated as the sum of all the swaps plus a configurable percentage of physical RAM. The problem is that the commit limit is mostly ignored by default.

On Tue, Dec 28, 2021 at 1:16 PM Elias Mårtenson wrote:
> Unfortunately, Linux malloc never returns NULL. Even if you try to allocate a petabyte in one allocation. You can write to this memory and new pages will be created as you write, and at some point your write will fail with a SEGV because there are no free pages left.
>
> You can try it right now. Write a C program that allocates a few TB of RAM and see what happens. Actually, nothing will happen until you start writing to this memory.

Minor nit, but it's not never. With vm.overcommit_memory set to 0 (the default heuristic) it will still return NULL for obviously oversized requests (more than the commit limit in one go; allocating "a few TB" on my machine with 8G fails early). But, of course, what is important is that in this default configuration it doesn't actually reserve the requested amount of memory, and later a page fault might end up being fatal when one tries to access it.

Linux has a MAP_POPULATE flag that pre-faults mmapped memory, and my understanding is that if it succeeds, then it should be safe to use later. But it's no different from trying to touch all the pages returned from malloc: it just gets you killed early, before the return from mmap.

> What's even worse is that once you start to fail, the kernel will randomly kill processes until it gets more free pages. If that sounds idiotic, that's because it is, but that's how it works.

It's not random. The scoring system looks silly, but it works really well (at least without swap), usually killing exactly what needs to be killed. (In essence, doing exactly what Blake says he would do manually.) In my experience, if you can't set vm.overcommit_memory to 2 (and you can't, because of browsers etc.), then the OOM killer is close to the best of what can be done to manage it.

> You can configure the kernel to not do this, but unfortunately you're going to have other problems if you do, primarily because all Linux software is written with the default behaviour in mind. Sad truth.

-k
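(As a concrete version of the experiment Elias describes, a minimal C sketch added here for illustration; it is Linux-specific and the 64 GB request is an arbitrary example to adjust to the machine. With the default vm.overcommit_memory=0 heuristic the malloc itself usually succeeds, and trouble only shows up while the pages are being touched.)

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        /* request far more memory than the machine has (adjust as needed) */
        size_t huge = (size_t)64 * 1024 * 1024 * 1024;   /* 64 GB */

        char *p = malloc(huge);
        if (p == NULL) {        /* only happens for requests above the commit limit */
            perror("malloc");
            return 1;
        }
        printf("malloc() of %zu bytes succeeded, now touching the pages...\n", huge);

        memset(p, 1, huge);     /* faults in every page; may invoke the OOM killer */

        printf("survived\n");
        free(p);
        return 0;
    }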
Re: Compiling under cygwin
Hi Bill,

strange. According to the config log that you kindly provided:

    #define HAVE_AFFINITY_NP 1

which means that, according to configure.ac of GNU APL (lines 291 ff.), the following test program compiled successfully:

    #include <pthread.h>
    #include <ext/atomicity.h>

    int main()
    {
       cpu_set_t cpuset;
       pthread_setaffinity_np(0, sizeof(cpuset), &cpuset);
       pthread_getaffinity_np(0, sizeof(cpuset), &cpuset);
       _Atomic_word count = 0;
       __gnu_cxx::__exchange_and_add_dispatch(&count, 1);
       __gnu_cxx::__atomic_add_dispatch(&count, 1);
    }

But then this fails:

    memory_benchmark.cc: In function ‘void multi(int, int, int64_t*)’:
    memory_benchmark.cc:147:9: error: ‘CPU_ZERO’ was not declared in this scope
      147 |         CPU_ZERO(&cpus);
          |         ^~~~~~~~
    memory_benchmark.cc:148:9: error: ‘CPU_SET’ was not declared in this scope
      148 |         CPU_SET(c, &cpus);
          |         ^~~~~~~
    memory_benchmark.cc:149:9: error: ‘pthread_setaffinity_np’ was not declared in this scope
      149 |         pthread_setaffinity_np(ctx[c].thread, sizeof(cpu_set_t), &cpus);
          |         ^~~~~~~~~~~~~~~~~~~~~~
    ...

I believe that you may need to troubleshoot this a bit deeper on your machine: where are pthread_setaffinity_np() and CPU_ZERO/CPU_SET defined (supposedly somewhere under /usr/local)?

In the meantime you could delete the following lines in tools/Makefile.am:

    noinst_PROGRAMS           += memory_benchmark
    memory_benchmark_SOURCES   = memory_benchmark.cc
    memory_benchmark_LDADD     = -lpthread
    memory_benchmark_CXXFLAGS  = -Wall

and then run autoreconf and ./configure again at the top level. In the meantime I will look into removing the memory_benchmark from the standard build.

Best Regards,
Jürgen

On 12/28/21 6:22 PM, Bill Daly wrote:

I've encountered errors installing GNU APL on Windows under cygwin. Configure, Make and Install logs attached.
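(For the troubleshooting step above, a standalone check can be compiled by hand. The sketch below is an illustration with assumptions, not part of GNU APL: it presumes a glibc-style environment where CPU_ZERO/CPU_SET come from <sched.h> and pthread_setaffinity_np() from <pthread.h>, both visible only with _GNU_SOURCE defined; on Cygwin the headers or feature macros may differ, which could be exactly what is biting memory_benchmark.cc. The file name is just an example.)

    /* build e.g. with:  gcc affinity_check.c -pthread */
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>

    int main(void)
    {
        cpu_set_t cpus;

        CPU_ZERO(&cpus);        /* clear the CPU set     */
        CPU_SET(0, &cpus);      /* add CPU 0 to the set  */

        /* pin the calling thread to CPU 0, then read the mask back */
        pthread_setaffinity_np(pthread_self(), sizeof(cpu_set_t), &cpus);
        pthread_getaffinity_np(pthread_self(), sizeof(cpu_set_t), &cpus);
        return 0;
    }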
Re: Unexpected result with inner product
Thank you very much!!

On Thu, Jan 27, 2022 at 5:53 AM Dr. Jürgen Sauermann <mail@jürgen-sauermann.de> wrote:

> Hi again, Gentlemen,
>
> I double-checked with IBM APL2 and the output of IBM APL2 was indeed what Elias was expecting.
>
> My previous answer below was only half-way correct: like I supposed, the problem was in fact related to enclose and disclose in the inner product. However, the problem was not caused by the regular behaviour of enclose and disclose, but simply by a missing disclose in the APL macro that implements the inner product when called with a defined function.
>
> Fixed in SVN 1515.
>
> Best Regards,
> Jürgen
>
> On 1/26/22 10:37 PM, Blake McBride wrote:
>
> As a more general comment relating to these sorts of issues, I offer the following opinion.
>
> I imagine that there are many variations that can legitimately be argued. Where one lands on an issue is somewhat arbitrary. In some instances, X is better and makes more sense, and in other instances, Y makes more sense. Sometimes the answer is logically clear, but more often not.
>
> For better or worse, given its somewhat "standard-setting" regard, I think of IBM's APL-2 as "the standard". For me, it's not a matter of who is right. That can be debated ad nauseam. I just consider IBM APL-2 to be APL-2. All else are variations on a theme.
>
> It is my understanding that one of the main goals of GNU APL is to provide an open-source implementation of IBM APL-2. If one were looking for a platform to do explorations in the APL space, we already have NARS2000, KAP, and other vendors to a lesser degree. I do not think GNU APL was attempting the same thing.
>
> If I am correct, these sorts of debates are far simpler. We don't debate the various merits. Rather, we simply compare the results with IBM APL-2. Case closed.
>
> Also, if my view of GNU APL is correct, I like this fact a lot! For better or worse, it works a specific way and won't change because someone has a good example and argument. I am interested in stability and reliability.
>
> Just an opinion.
>
> Blake McBride
>
> On Wed, Jan 26, 2022 at 2:47 PM Dr. Jürgen Sauermann <mail@jürgen-sauermann.de> wrote:
>
>> Hi Elias,
>>
>> I suppose the reason is roughly this:
>>
>> Some interpreters, including IBM APL2 and GNU APL, sometimes allow 1-element vectors (let's call them quasi-scalars) in places where, strictly speaking, scalars would be required.
>>
>> Your partial result 0/x (for some a=0) is always a vector, while 1/x (for some other a=1) is always a 1-element vector, which is subject to being treated as a scalar instead.
>>
>> When the inner product f.g or the outer product ∘.g gets a non-scalar result from g, then it will enclose that result before the f/ and disclose it again after the f/.
>>
>> The final disclose will in your case see a mix of 0-element and 1-element vectors and will scalar-extend the 1-element quasi-scalars to the common shape of all items, which is, in your example, empty.
>>
>> A different A reveals this:
>>
>>       (1⌈A≠0) +.Q B
>>  6 6
>>  6 6
>>  6 6
>>
>> Best Regards,
>> Jürgen
>>
>> On 1/26/22 5:25 AM, Elias Mårtenson wrote:
>>
>> Consider the following code:
>>
>> A←3 4⍴1 3 2 0 2 1 0 1 4 0 0 2
>> B←4 2⍴4 1 0 3 0 2 2 0
>> Q←{⍺/⍵}
>> (A≠0) +.Q B
>>
>> My reading (and implementation) of the ISO spec suggests the output should be the following:
>>
>> ┏━━━┓
>> ┃4 6┃
>> ┃6 4┃
>> ┃6 1┃
>> ┗━━━┛
>>
>> However, in GNU APL I get this:
>>
>> ┏→━━━━━━┓
>> ↓┏⊖┓ ┏⊖┓┃
>> ┃┃0┃ ┃0┃┃
>> ┃┗━┛ ┗━┛┃
>> ┃┏⊖┓ ┏⊖┓┃
>> ┃┃0┃ ┃0┃┃
>> ┃┗━┛ ┗━┛┃
>> ┃┏⊖┓ ┏⊖┓┃
>> ┃┃0┃ ┃0┃┃
>> ┃┗━┛ ┗━┛┃
>> ┗∊━━━━━━┛
>>
>> Which one is correct?
>>
>> Regards,
>> Elias