Re: [Pharo-users] Porting Transducers to Pharo - Name Clash

2017-06-06 Thread Steffen Märcker

Hi,

I found a name clash with the message #reduce: in Pharo. It is already  
declared in SequenceableCollection. Additionally, Collection>>fold: just  
calls #reduce:, which makes the difference between folding and reducing a  
bit unclear.


How should I handle this situation? I see the following options:
- I could simply solve the situation at runtime depending on the argument  
(e.g., using double-dispatching).
- I could check whether it might be possible to separate #fold: and  
#reduce: with the semantics:

  - reduce: starts with an initial value and the first collection item.
  - fold: uses no initial value and starts with the first two items.

In the Transducers library, there are two variants of reduce:
- #reduce:init: reduces using a block and an initial value
- #reduce: reduces using a block that carries an initial value or an  
initializer block.
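One way to sketch the double-dispatch option in Pharo (the selector #reduceCollection: and the Reduction class shown here are illustrative names, not part of the library):

```smalltalk
"Hypothetical sketch: disambiguate #reduce: by dispatching on the argument."
SequenceableCollection >> reduce: aBlockOrReduction
	"Let the argument decide which reduction semantics apply."
	^ aBlockOrReduction reduceCollection: self

BlockClosure >> reduceCollection: aCollection
	"Plain block: fold over the items, no explicit initial value."
	^ aCollection fold: self

Reduction >> reduceCollection: aCollection
	"Transducers' reduction: carries its own block and initial value."
	^ aCollection inject: self initialValue into: self block
```

This would keep a single #reduce: selector while preserving both semantics at runtime.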


Ciao,
Steffen


On .05.2017 at 14:23, Steffen Märcker wrote:


Hi,

I am the developer of the library 'Transducers' for VisualWorks. It was  
formerly known as 'Reducers', but this name was a poor choice. I'd like  
to port it to Pharo, if there is any interest on your side. I hope to  
learn more about Pharo in this process, since I am mainly a VW guy. And  
most likely, I will come up with a bunch of questions. :-)


Meanwhile, I'll cross-post the introduction from VWnc below. I'd be very  
happy to hear your opinions and questions, and I hope we can start a  
fruitful discussion - even if there is no Pharo port yet.


Best, Steffen



Transducers are building blocks that encapsulate how to process elements
of a data sequence independently of the underlying input and output  
source.




# Overview

## Encapsulate
Implementations of enumeration methods, such as #collect:, share the logic  
of how to process a single element. However, that logic is reimplemented  
each and every time. Transducers make it explicit and facilitate re-use  
and coherent behavior.
For example:
- #collect: requires mapping: (aBlock1 map)
- #select: requires filtering: (aBlock2 filter)


## Compose
In practice, algorithms often require multiple processing steps, e.g.,
mapping only a filtered set of elements.
Transducers are inherently composable and hence allow making the  
combination of steps explicit.
Since transducers do not build intermediate collections, their composition  
is memory-efficient.
For example:
- (aBlock1 filter) * (aBlock2 map)   "(1.) filter and (2.) map elements"


## Re-Use
Transducers are decoupled from the input and output sources, and hence,
they can be reused in different contexts.
For example:
- enumeration of collections
- processing of streams
- communicating via channels
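For instance, under the #transduce:reduce:init: protocol introduced below, the same composed pipeline could be applied unchanged to a collection and to a stream (a sketch; the blocks are illustrative, not from the library's examples):

```smalltalk
small  := [:x | x < 100] filter.
square := [:x | x * x] map.
sum    := [:acc :x | acc + x].

"Reuse the same composed transducer on different sources."
(1 to: 20) transduce: small * square reduce: sum init: 0.
#(1 5 50 200) readStream transduce: small * square reduce: sum init: 0.
```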



# Usage by Example

We build a coin flipping experiment and count the occurrence of heads and
tails.

First, we associate random numbers with the sides of a coin.

 scale := [:x | (x * 2 + 1) floor] map.
 sides := #(heads tails) replace.

Scale is a transducer that maps numbers x between 0 and 1 to 1 and 2.
Sides is a transducer that replaces the numbers with heads and tails by
lookup in an array.
Next, we choose a number of samples.

 count := 1000 take.

Count is a transducer that takes 1000 elements from a source.
We keep track of the occurrences of heads and tails using a bag.

 collect := [:bag :c | bag add: c; yourself].

Collect is a binary block (reducing function) that collects events in a  
bag.
We assemble the experiment by transforming the block using the transducers.


 experiment := (scale * sides * count) transform: collect.

From left to right we see the steps involved: scale, sides, count and
collect.
Transforming assembles these steps into a binary block (reducing function)
we can use to run the experiment.

 samples := Random new
   reduce: experiment
   init: Bag new.

Here, we use #reduce:init:, which is mostly similar to #inject:into:.
To execute a transformation and a reduction together, we can use
#transduce:reduce:init:.

 samples := Random new
   transduce: scale * sides * count
   reduce: collect
   init: Bag new.

We can also express the experiment as data-flow using #<~.
This enables us to build objects that can be re-used in other  
experiments.


 coin := sides <~ scale <~ Random new.
 flip := Bag <~ count.

Coin is an eduction, i.e., it binds transducers to a source and
understands #reduce:init: among others.
Flip is a transformed reduction, i.e., it binds transducers to a reducing
function and an initial value.
By sending #<~, we draw further samples from flipping the coin.

 samples := flip <~ coin.

This yields a new Bag with another 1000 samples.



# Basic Concepts

## Reducing Functions

A reducing function represents a single step in processing a data  
sequence.
It takes an accumulated result and a value, and returns a new accumulated  
result.
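For example, a reducing function that sums elements can be written as a binary block and used with the ordinary #inject:into: (a sketch in plain Pharo, independent of the library):

```smalltalk
"A reducing function: takes an accumulated result and a value."
sum := [:acc :x | acc + x].
#(1 2 3 4) inject: 0 into: sum.  "==> 10"
```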

Re: [Pharo-users] Porting Transducers to Pharo

2017-06-07 Thread Steffen Märcker

Hi Phil,

that's great. I do have a GitHub account (merkste) but none at  
SmalltalkHub. Is there a recommendable doc on how to use Git from Pharo?


Best, Steffen



On .06.2017 at 14:09, p...@highoctane.be wrote:


Hi Steffen,

I am willing to help you create the package in SmalltalkHub or GitHub  
based on your files/changeset.

Do you have a github and/or SmalltalkHub account?

Best,
Phil


On Tue, Jun 6, 2017 at 1:08 PM, Steffen Märcker  wrote:


Hi!

If the need arises, we could of course factor the compact notation out  
into a separate package.


Good idea
[...] I do not want to help promoting a syntax that alienates me (and
others, because other people reported the same to me).



I understand. Btw, I'd really, really appreciate if others post their
thoughts and feedback here as well. Discussion helps moving things  
forward.

=)


  (collection transduce map: #squared) take: 1000.




To me this is much more readable.



Well, I'll provide that extension once it is finished.

I cannot and do not want to use the other forms.





  collection transduce

   map: #squared;
   take: 1000.

But as the message chain has to modify the underlying object
(an eduction), very sneaky side effects may occur. E.g., consider

  eduction := collection transduce.
  squared  := eduction map: #squared.
  take := squared take: 1000.

Now, all three variables hold onto the same object, which first squares  
all elements and then takes the first 1000.



This is because the programmer did not understand what he did. No?



Sure. ;-) Nevertheless, it would be very hard to debug. All of which are
the reasons I wouldn't implement that variant. ;-)


PS: I played with infinite stream and iteration back in 1993 in CLOS.

Now I do not like to mix things because it breaks my flow of thinking.




I am not sure whether I understand what you mean by mixing. Concerning
transducers, the ability to handle infinite sources is only a (natural)
side-effect of the ability to finish reductions before all elements are
processed, e.g., like #detect: and such.

Best, Steffen









Re: [Pharo-users] Porting Transducers to Pharo

2017-06-08 Thread Steffen Märcker

Hi Phil,

thanks a lot for your effort and valuable input. I am having a look at  
STIG for VW in the hope that I can set up a common repository for the VW  
and Pharo versions. Though, the next few days I won't work on the port,  
because I am quite busy at the moment.



Tell me how it goes for you.


I'll let you know soon.


I read your note about DoubleAgents for the tests; well, yeah, this one we
do not have. I read about it on the blog of its maker and it looked  
decent indeed.


I'll check whether a port is doable with reasonable effort and the  
author's blessing. =)



Then look at the warnings. A bunch of them are non issues but there are
Undeclared things in need of a fix (e.g. IndexNotFoundError)


Luckily, most of them are straightforward to resolve. However, I'd really  
like to hear your opinion on the name clash with #reduce: in the base (see  
the other thread). A solution could be to use double-dispatching (or  
similar) to distinguish between the two cases:

- reduce: aBlock, and
- reduce: aReduction

However, I have a slight preference to redefine #fold: and #reduce:, since  
right now (as far as I can see) they are redundant. E.g.,

- fold: aBlock               "reduce as defined right now"
- reduce: aBlock init: value "reduce starting with an initial value"
- reduce: aReduction         "reduce with block and value from a reduction"
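A small sketch of how the proposed semantics would play out (assuming the redefinitions above; #reduce:init: is the proposed selector, not current base Pharo):

```smalltalk
"Proposed split between folding and reducing."
#(1 2 3) fold: [:a :b | a + b].             "starts with the first two items ==> 6"
#(1 2 3) reduce: [:a :b | a + b] init: 10.  "starts with the initial value   ==> 16"
```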

Kind regards,
Steffen





Re: [Pharo-users] Porting Transducers to Pharo

2017-06-08 Thread Steffen Märcker

In fact, I moved from Mocketry to DoubleAgents quite some time ago... =)

On .06.2017 at 23:04, Denis Kudriashov wrote:


2017-06-07 22:04 GMT+02:00 p...@highoctane.be :

I read your note about DoubleAgents for the tests, well, yeah this one  
we
do not have. I read about it on the blog of its maker and it looked  
decent

indeed.



I think Mocketry can easily replace DoubleAgents, but the API is different  
and most tests would need rewrites.






Re: [Pharo-users] Porting Transducers to Pharo

2017-06-14 Thread Steffen Märcker

Hi Phil,


Tell me how it goes for you.


I played a bit with Iceberg in Pharo, managed to check out the repository,  
and loaded Transducers-Core. As a bit of a surprise, multiple packages show  
up in the SystemBrowser; one for each method category, e.g.,  
Transducers-accessing and Transducers-class initialization. Is this  
expected? Maybe I am just missing something obvious here, since I am not  
familiar with the tools.


However, I think Iceberg will provide a nice way to exchange code between  
VW and Pharo. I managed to export the packages using STIG from VW and the  
differences seem to be sufficiently small.



I've put a README.md with your notes, and under packages/ you'll find 3
packages:

Transducers-Core
Transducers-Examples
Transducers-Tests


Did you rename the main package from Transducers to Transducers-Core for a  
special reason? And is it a convention in Pharo to use the dash '-'  
instead of a space to separate parts of a package name?


Cheers!
Steffen



Re: [Pharo-users] Binary Decision Diagram Package in Smalltalk

2017-10-27 Thread Steffen Märcker

Dear Andrew,

I didn't find time to answer earlier. Some time ago, I was looking for an  
(MT)BDD package in ST as well. I didn't find one. So the only two options  
left are


1) implementing a new BDD lib in ST and
2) doing FFI to some existing lib, e.g. CUDD, Sylvan, Jinc

I'd prefer 2), since the existing libraries are feature-rich and highly  
optimized - which took quite some time. As a bonus, a solution could  
potentially switch between those backends. The biggest hurdle, in my  
opinion, is memory management, since most libs use some sort of reference  
counting. And you do not want to end up with nightmarish dozens of  
ref(bddNode) and deref(bddNode) calls in your application code (like the  
probabilistic model checker PRISM does). This easily introduces  
hard-to-track bugs. However, I have an idea in mind how to tackle this,  
but I didn't find the time to put it into code yet.


May I ask, which sort of application do you have in mind?

Best, Steffen



On .10.2017 at 07:54, Prof. Andrew P. Black wrote:

Thanks for the responses so far.  I see that I need to clarify my  
enquiry.


B-Trees and BDDs are not the same.  BDDs are an efficient and compact  
representation for Boolean functions, sometimes used in SAT-solvers and  
electronics design.   The key idea is that since the output must be 0 or  
1, you can represent any Boolean function as a tree whose depth is the  
same as the number of bits in the input.


To make the tree small and efficient, though, you need to eliminate any  
node whose two children are the same, and to share subtrees, so that you  
really get a DAG, not a tree.  The full name for these efficient  
compressed trees is “Reduced Order Binary Decision Diagrams”, or  
ROBDDs.  I was hoping that someone else had implemented the algorithms  
necessary to build this representation.


Because sets can be considered to be Boolean functions (true => the  
argument is in the set), you can use ROBDDs to efficiently represent  
large sets.


To be clear, despite the word “diagram” in the name, one is not normally  
interested in drawing the BDD — except in the documentation for the  
package ;-).  Normally, BDDs are used to represent sets, or functions,  
where the drawing would be hopelessly large.


The BuDDy package (http://buddy.sourceforge.net/manual/main.html) is an  
example of what I’m looking for, but unfortunately it’s in C++.


Andrew


On 25 Oct 2017, at 21:39 , Stephane Ducasse   
wrote:


Hi andrew

I think that Avi did a package about BDD (but I thought it was special
binary trees) so this is probably the same.
Did you check on Squeaksource?
http://www.squeaksource.com/BTree.html
If this is what you are looking for I can help porting it to Pharo.

Stef


On Wed, Oct 25, 2017 at 9:02 PM, Prof. Andrew P. Black  
 wrote:
Does anyone know of a BDD — that’s Binary Decision Diagram — package  
written in Smalltalk?


   Andrew










Re: [Pharo-users] Binary Decision Diagram Package in Smalltalk

2017-10-28 Thread Steffen Märcker
I see. What is the task in detail? Are some of the sets fixed or known in 
advance? What's the argument against a bitset-based solution?

Cheers, Steffen


On 27 October 2017 at 19:10:35 CEST, Stephane Ducasse wrote:
>It was for testing inclusion of UTF-8 characters, so we do not want to
>rely on external libraries.
>


Re: [Pharo-users] Binary Decision Diagram Package in Smalltalk

2017-10-28 Thread Steffen Märcker
Does that mean the sets/BDD would be constructed mainly at compile time?  
Anyway, Andrew, feel free to contact me, I might help you with this.


Best, Steffen


On .10.2017 at 16:05, Stephane Ducasse wrote:


I think that Andrew would like to improve SmaCC when parsing inputs
containing UTF-8 characters.


On Sat, Oct 28, 2017 at 1:46 PM, Steffen Märcker  wrote:

I see. What is the task in detail? Are some of the set fixed or known in
advance? What's the argument against a bitset-based solution?

Cheers, Steffen



On 27 October 2017 at 19:10:35 CEST, Stephane Ducasse wrote:

<---snip--->










Re: [Pharo-users] Binary Decision Diagram Package in Smalltalk

2017-11-07 Thread Steffen Märcker
I am not familiar with the literature on scanners. May I ask you about some 
details on the "for all characters" algorithms you are referring to?

Building (or connecting to) a BDD library would be fun, indeed. But within 
that time frame it seems unrealistic. Anyway, after finishing my thesis, I'd 
like to come back to that idea.

Ciao, Steffen


On 7 November 2017 at 16:33:03 CET, Andrew Glynn wrote:
>A possible way to accomplish it would be to use an object graph with an
>incremental query engine, such as EMF/CDO with Viatra or something
>similar.  You could then put different character sets in connected
>objects and query only as far as you need to.
>
>Andrew Glynn
>
>Sent from Mail for Windows 10
>
>From: Thierry Goubier
>Sent: Tuesday, November 7, 2017 7:17 AM
>To: Any question about pharo is welcome
>Subject: Re: [Pharo-users] Binary Decision Diagram Package in Smalltalk
>
>Hi Andrew, Steffen,
>
>2017-11-07 13:10 GMT+01:00 Prof. Andrew P. Black :
>
>> On 28 Oct 2017, at 17:37 , Steffen Märcker  wrote:
>>
>> Does that mean the sets/bdd would be constructed mainly at comile
>time? Anyway, Andrew, feel free to contact me, I might help you with
>this.
>>
>
>Thanks for the offer, Steffen!  The problem is that I need to use SmaCC
>for my current project, and really do not have a month to take off and
>re-design the way that it builds its scanner.  I’ve talked to Thierry
>Goubier about it, and he doesn’t have time either!  It would be a fun
>project, though, and it ought to be fairly separate from other parts of
>SmaCC.  I’ve spent a fair bit of time thinking about how to do it, but
>don’t think that I will be able to actually focus on it.
>
>Yes, this is the essence of the issue. There are a few alternatives
>about it, but none we have the time to pursue.
> 
>
>An alternative approach, which Thierry has suggested, is to make SmaCC
>work on the UTF-8 representation of the Unicode.  Then we could
>represent character sets as prefix trees.  But the core problem would
>still exist: you can’t run an algorithm that repeatedly executes
>
>                for all characters in the alphabet do:
>
>when there are 2^21 characters in the alphabet!
>
>The main issue is that `for all characters`... All the literature on
>scanner building uses 'for all characters do'.
>
>Thierry
> 
>
>        Andrew


Re: [Pharo-users] Binary Decision Diagram Package in Smalltalk

2017-11-08 Thread Steffen Märcker
I see. How about the following (sketched) solution to avoid looping over  
all characters? It might very well be the case that you already considered  
(and dismissed) that path.


A) Assumption
In order to allow any meaningful matching, the input to the scanner is  
normalized according to the unicode spec.


B) Abstraction
Treat each character and character group of a regex as a set of intervals  
over the unicode code points. Let's call them "character tests" and lift  
the common set operations union, intersection and difference to them.


C) Construct NFA
The NFA has potentially overlapping character tests at the transitions of  
each state.


D) Construct DFA
Given a product state s in the DFA and two transitions t1, t2 from the  
original NFA, add three new transitions to the DFA:
- a transition labeled with the character test of t1 minus the character  
test of t2
- a transition labeled with the intersection of the character tests of t1  
and t2
- a transition labeled with the character test of t2 minus the character  
test of t1


E) Extension
Instead of sets of unicode intervals, we could also use test-functions,  
e.g., blocks. Then, in step D), the set operations translate to boolean  
operations:

- difference t1 - t2 becomes: t1 && not t2
- intersection of t1 and t2 becomes: t1 && t2

This would allow using optimized test functions, e.g., BDDs, instead of  
relying on character tests only.
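Step E) can be sketched in plain Pharo blocks (the two tests shown are illustrative): each character test is a one-argument block, and the set operations of step D) become boolean combinations:

```smalltalk
"Character tests as one-argument blocks over code points."
t1 := [:c | c asInteger between: 65 and: 90].     "A-Z"
t2 := [:c | c asInteger between: 88 and: 122].    "X through z, roughly"

"Set operations lifted to boolean combinations of tests."
difference   := [:c | (t1 value: c) and: [(t2 value: c) not]].  "t1 - t2"
intersection := [:c | (t1 value: c) and: [t2 value: c]].

intersection value: $Y.  "==> true"
difference value: $Y.    "==> false"
```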



Cheers,
Steffen




On .11.2017 at 23:16, Thierry Goubier wrote:



On 07/11/2017 at 23:00, Steffen Märcker wrote:
I am not familiar with the literature on scanners. May I ask you about  
some details on the "for all characters" algorithms you are referring  
to?


The two main ones available, from the typical Aho/Ullman textbook, are:

- NFA to DFA conversion (i.e., build an NFA with your regular expressions,  
then convert it to a DFA)

- Direct regular expression to DFA construction

Both of them have a loop of the type:

for (each input symbol a) {
...

Building a (or connecting to) a BDD library would be fun, indeed. But  
within that time frame it seems not realistic. Anyway, after finishing  
my thesis, I'd like to come back to that idea.


It would certainly be interesting. Please contact us again when you  
have time :)


Regards,

Thierry









Re: [Pharo-users] Stream API

2017-11-14 Thread Steffen Märcker

Hi!


Yes, I agree, Xtreams is much better (but steep learning curve).

I just wanted to point out that my contributions in Zn streams focus on
better/simpler byte/character IO.


Yes, and it is really nice.
Interesting - how many users do we have in the system for general streams  
(created on arbitrary collections)?


I really think streams (in general) should focus on what they are best at:  
(stepwise) reading and writing from and to various sources, plus buffering  
for efficiency. Xtreams does an excellent job here. However, higher-level  
operations - like collecting, selecting and splitting (map, filter,  
partition) - should be addressed by other means. Those operations apply to  
streams, collections, generators and other data structures, and they can  
be implemented efficiently independently of the data structure. By doing  
so, code duplication is avoided and the API of streams etc. can be kept  
simple.


Although I won't have time to contribute code before finishing my thesis,  
I'd like to point out that transducers are here to address exactly this.  
The package already works with collections, streams and Xtreams.


Best,
Steffen



Re: [Pharo-users] Stream API

2017-11-14 Thread Steffen Märcker

Hi,


Are transducers the subject of your thesis ?


No. Transducers is my side project. I've implemented a package for  
VisualWorks. Unfortunately, I did not finish the port to Pharo yet, simply  
due to a lack of time. Originally, transducers evolved in the Clojure  
community. I figured the concept was a good fit for Smalltalk and adapted  
it accordingly. (My thesis is on conditional probabilities in  
model-checking of probabilistic systems.)



Any pointers to more information ?


http://www.cincomsmalltalk.com/publicRepository/Transducers.html
https://clojure.org/reference/transducers
Plus some mails on this list and more on the vwnc list. Feel free to ask;  
I understand that the package comment has lots of potential for  
improvement to help understanding.


Best, Steffen



Re: [Pharo-users] Stream API

2017-11-14 Thread Steffen Märcker
I forgot to mention that the most recent code for Pharo is already on  
GitHub: https://github.com/Pharophile/Transducers


Reducers was the name of the very first implementation.

(In fact, I was originally inspired by Clojure's Reducers lib. After  
implementing it in Smalltalk, I developed the concept further. Later I  
found out that the Clojure guys did the same in parallel and ended up  
with the same abstraction but a different name. Hence I decided to change  
the name in order to make the relation clear.)


On .11.2017 at 16:18, Sven Van Caekenberghe wrote:





On 14 Nov 2017, at 16:00, Steffen Märcker  wrote:

<---snip--->


Some code seems to be here: http://smalltalkhub.com/#!/~cdlm/Experiments

Not sure if it is complete or what the relation is, or the difference  
between transducers and reducers ...









Re: [Pharo-users] Stream API

2017-11-15 Thread Steffen Märcker

Short: No. =)

Actually, the <~ notation was originally proposed by another Smalltalker  
on the vwnc list. The idea was that <~ visualizes the flow into the  
variable, which I find nice (see the first expression). However, I do  
understand that this alienates others. Hence, I am non-religious about the  
matter; I could even live with both messages coexisting.



Denis I agree. I do not like to code in reverse order.


I like abstraction. But I think names and order of computation should be
changed to be more Smalltalk friendly. Because now it looks like Haskell
with right to left order:

squares := Set <~ 1000 take <~ #squared map <~ (1 to: 1000).
fileOut writeStream <~ #isSeparator filter <~ fileIn readStream.

Is there any reason to not change it?


Best, Steffen



Re: [Pharo-users] Stream API

2017-11-15 Thread Steffen Märcker
Please note, the port on GitHub is not the most recent version. I had  
trouble porting it to Pharo and with code exchange in general. For now  
it's unfinished due to a lack of time.



Am .11.2017, 16:30 Uhr, schrieb Steffen Märcker :

I forgot to mention, that the most recent code for Pharo is already on  
Github: https://github.com/Pharophile/Transducers


Reducers was the name of the first very first implementation.

(In fact, I was originally inspired by Clojure's Reducers library. After  
implementing it in Smalltalk, I developed the concept further. Later I  
found out that the Clojure guys did the same in parallel and ended up  
with the same abstraction but a different name. Hence I decided to change  
the name in order to make the relation clear.)


Am .11.2017, 16:18 Uhr, schrieb Sven Van Caekenberghe :





On 14 Nov 2017, at 16:00, Steffen Märcker  wrote:

Hi,


<---Schnitt--->


No. Transducers is my side project. I've implemented a package for  
VisualWorks. Unfortunately, I did not finish the port to Pharo yet,  
simply due to a lack of time. Originally, transducers evolved in the  
Clojure community. I figured the concept was a good fit for Smalltalk and  
adapted it accordingly. (My thesis is on conditional probabilities in  
model-checking of probabilistic systems.)



<---Schnitt--->


http://www.cincomsmalltalk.com/publicRepository/Transducers.html
https://clojure.org/reference/transducers
Plus some mails on this list and more on the vwnc list. Feel free to  
ask; I understand, that the package comment has lots of potential for  
improvement to help understanding.


Best, Steffen


Some code seems to be here: http://smalltalkhub.com/#!/~cdlm/Experiments

Not sure if it is complete or what the relation is, or the difference  
between transducers and reducers ...










[Pharo-users] Set Rounding mode for IEEE floating point operations

2018-01-15 Thread Steffen Märcker

Hi,

is there any way to set the rounding mode for IEEE floating point
operations? Maybe something like

Double roundToMinusInfWhile: [... code goes here ...].
Double roundToZeroWhile: [... more code here ...].


If not, is it possible to add this behavior, e.g., via a custom primitive?

Best, Steffen



[Pharo-users] Porting Transducers to Pharo

2017-05-31 Thread Steffen Märcker

Hi,

I am the developer of the library 'Transducers' for VisualWorks. It was  
formerly known as 'Reducers', but this name was a poor choice. I'd like to  
port it to Pharo, if there is any interest on your side. I hope to learn  
more about Pharo in this process, since I am mainly a VW guy. And most  
likely, I will come up with a bunch of questions. :-)


Meanwhile, I'll cross-post the introduction from VWnc below. I'd be very  
happy to hear your opinions and questions, and I hope we can start a fruitful  
discussion - even if there is not Pharo port yet.


Best, Steffen



Transducers are building blocks that encapsulate how to process elements
of a data sequence independently of the underlying input and output source.



# Overview

## Encapsulate
Implementations of enumeration methods, such as #collect:, have the logic
how to process a single element in common.
However, that logic is reimplemented each and every time. Transducers make
it explicit and facilitate re-use and coherent behavior.
For example:
- #collect: requires mapping: (aBlock1 map)
- #select: requires filtering: (aBlock2 filter)


## Compose
In practice, algorithms often require multiple processing steps, e.g.,
mapping only a filtered set of elements.
Transducers are inherently composable, and thereby allow making the
combination of steps explicit.
Since transducers do not build intermediate collections, their composition
is memory-efficient.
For example:
- (aBlock1 filter) * (aBlock2 map)   "(1.) filter and (2.) map elements"


## Re-Use
Transducers are decoupled from the input and output sources, and hence,
they can be reused in different contexts.
For example:
- enumeration of collections
- processing of streams
- communicating via channels
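
As a rough sketch of this re-use (selector names taken from this post; the
assumption that read streams answer the same #transduce:reduce:init: protocol
as collections is mine, not confirmed here), one composed transducer could
drive two different sources:

```smalltalk
"Hedged sketch: one transducer composition, two sources.
Assumes streams also understand #transduce:reduce:init: (my assumption)."
step := [:x | x even] filter * [:x | x * x] map.

"1. enumerating a collection"
squaresOfEvens := (1 to: 10)
	transduce: step
	reduce: [:col :e | col add: e; yourself]
	init: OrderedCollection new.

"2. processing a read stream over similar data"
sum := #(1 2 3 4) readStream
	transduce: step
	reduce: [:acc :e | acc + e]
	init: 0.
```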



# Usage by Example

We build a coin flipping experiment and count the occurrence of heads and
tails.

First, we associate random numbers with the sides of a coin.

scale := [:x | (x * 2 + 1) floor] map.
sides := #(heads tails) replace.

Scale is a transducer that maps numbers x between 0 and 1 to 1 and 2.
Sides is a transducer that replaces the numbers with heads and tails by
lookup in an array.
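
To check the arithmetic of scale by hand (plain expression evaluation, no
Transducers API involved):

```smalltalk
(0.3 * 2 + 1) floor.  "= 1, which sides replaces with #heads"
(0.7 * 2 + 1) floor.  "= 2, which sides replaces with #tails"
```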
Next, we choose a number of samples.

count := 1000 take.

Count is a transducer that takes 1000 elements from a source.
We keep track of the occurrences of heads and tails using a bag.

collect := [:bag :c | bag add: c; yourself].

Collect is a binary block (reducing function) that collects events in a bag.
We assemble the experiment by transforming the block using the transducers.

experiment := (scale * sides * count) transform: collect.

From left to right we see the steps involved: scale, sides, count and
collect.
Transforming assembles these steps into a binary block (reducing function)
we can use to run the experiment.

samples := Random new
  reduce: experiment
  init: Bag new.

Here, we use #reduce:init:, which is mostly similar to #inject:into:.
To execute a transformation and a reduction together, we can use
#transduce:reduce:init:.

samples := Random new
  transduce: scale * sides * count
  reduce: collect
  init: Bag new.

We can also express the experiment as data-flow using #<~.
This enables us to build objects that can be re-used in other experiments.

coin := sides <~ scale <~ Random new.
flip := Bag <~ count.

Coin is an eduction, i.e., it binds transducers to a source and
understands #reduce:init: among others.
Flip is a transformed reduction, i.e., it binds transducers to a reducing
function and an initial value.
By sending #<~, we draw further samples from flipping the coin.

samples := flip <~ coin.

This yields a new Bag with another 1000 samples.



# Basic Concepts

## Reducing Functions

A reducing function represents a single step in processing a data sequence.
It takes an accumulated result and a value, and returns a new accumulated
result.
For example:

collect := [:col :e | col add: e; yourself].
sum := #+.

A reducing function can also be ternary, i.e., it takes an accumulated
result, a key and a value.
For example:

collect := [:dict :k :v | dict at: k put: v; yourself].

Reducing functions may be equipped with an optional completing action.
After finishing processing, it is invoked exactly once, e.g., to free
resources.

stream := [:str :e | str nextPut: e; yourself] completing: #close.
absSum := #+ completing: #abs.

A reducing function can end processing early by signaling Reduced with a
result.
This mechanism also enables the treatment of infinite sources.

nonNil := [:res :e | e ifNil: [Reduced signalWith: res] ifNotNil: [res]].


The primary approach to process a data sequence is the reducing protocol
with the messages #reduce:init: and #transduce:reduce:init: if transducers
are involved.
The behavior is similar to #inject:into: but in addition it takes care of:
- handling binary and ternary reducing functions,
- invoking the optional completing action exactly once after processing.
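
As a side-by-side sketch of that similarity (the #reduce:init: selector comes
from the text above; its exact behavior on plain collections is assumed here):

```smalltalk
"Both expressions compute 1 + 2 + 3 = 6."
#(1 2 3) inject: 0 into: [:acc :e | acc + e].       "standard protocol"
#(1 2 3) reduce: [:acc :e | acc + e] init: 0.       "Transducers protocol"
```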

Re: [Pharo-users] Porting Transducers to Pharo

2017-05-31 Thread Steffen Märcker

Hello Damien,

I remember very well. How far did you get? Did you kick off a discussion on  
one of the Pharo lists? And did FileTree become a convenient way to  
exchange code between VW and Pharo?


Best,
Steffen


Am .05.2017, 16:16 Uhr, schrieb Damien Pollet  
:



As you know I experimented with that a while ago. My code is at
http://smalltalkhub.com/#!/~cdlm/Experiments/source

On 31 May 2017 at 15:00, Sven Van Caekenberghe  wrote:



> On 31 May 2017, at 14:23, Steffen Märcker  wrote:
>
> Hi,
>
> I am the developer of the library 'Transducers' for VisualWorks. It  
was
formerly known as 'Reducers', but this name was a poor choice. I'd like  
to

port it to Pharo, if there is any interest on your side. I hope to learn
more about Pharo in this process, since I am mainly a VW guy. And most
likely, I will come up with a bunch of questions. :-)
>
> Meanwhile, I'll cross-post the introduction from VWnc below. I'd be  
very
happy to hear your optinions, questions and I hope we can start a  
fruitful

discussion - even if there is not Pharo port yet.
>
> Best, Steffen

Hi Steffen,

Looks like very interesting stuff. Would make an nice library/framework
for Pharo.

Sven

> Transducers are building blocks that encapsulate how to process  
elements

> of a data sequence independently of the underlying input and output
source.
>
>
>
> # Overview
>
> ## Encapsulate
> Implementations of enumeration methods, such as #collect:, have the  
logic

> how to process a single element in common.
> However, that logic is reimplemented each and every time. Transducers
make
> it explicit and facilitate re-use and coherent behavior.
> For example:
> - #collect: requires mapping: (aBlock1 map)
> - #select: requires filtering: (aBlock2 filter)
>
>
> ## Compose
> In practice, algorithms often require multiple processing steps, e.g.,
> mapping only a filtered set of elements.
> Transducers are inherently composable, and hereby, allow to make the
> combination of steps explicit.
> Since transducers do not build intermediate collections, their
composition
> is memory-efficient.
> For example:
> - (aBlock1 filter) * (aBlock2 map)   "(1.) filter and (2.) map  
elements"

>
>
> ## Re-Use
> Transducers are decoupled from the input and output sources, and  
hence,

> they can be reused in different contexts.
> For example:
> - enumeration of collections
> - processing of streams
> - communicating via channels
>
>
>
> # Usage by Example
>
> We build a coin flipping experiment and count the occurrence of heads  
and

> tails.
>
> First, we associate random numbers with the sides of a coin.
>
>scale := [:x | (x * 2 + 1) floor] map.
>sides := #(heads tails) replace.
>
> Scale is a transducer that maps numbers x between 0 and 1 to 1 and 2.
> Sides is a transducer that replaces the numbers with heads an tails by
> lookup in an array.
> Next, we choose a number of samples.
>
>count := 1000 take.
>
> Count is a transducer that takes 1000 elements from a source.
> We keep track of the occurrences of heads an tails using a bag.
>
>collect := [:bag :c | bag add: c; yourself].
>
> Collect is binary block (reducing function) that collects events in a
bag.
> We assemble the experiment by transforming the block using the
transducers.
>
>experiment := (scale * sides * count) transform: collect.
>
>  From left to right we see the steps involved: scale, sides, count and
> collect.
> Transforming assembles these steps into a binary block (reducing
function)
> we can use to run the experiment.
>
>samples := Random new
>  reduce: experiment
>  init: Bag new.
>
> Here, we use #reduce:init:, which is mostly similar to #inject:into:.
> To execute a transformation and a reduction together, we can use
> #transduce:reduce:init:.
>
>samples := Random new
>  transduce: scale * sides * count
>  reduce: collect
>  init: Bag new.
>
> We can also express the experiment as data-flow using #<~.
> This enables us to build objects that can be re-used in other
experiments.
>
>coin := sides <~ scale <~ Random new.
>flip := Bag <~ count.
>
> Coin is an eduction, i.e., it binds transducers to a source and
> understands #reduce:init: among others.
> Flip is a transformed reduction, i.e., it binds transducers to a  
reducing

> function and an initial value.
> By sending #<~, we draw further samples from flipping the coin.
>
>samples := flip <~ coin.
>
> This yields a new Bag with another 1000 samples.
>
>
>
> # Basic Concepts
>
> ## Reducing Functions
>
> A

Re: [Pharo-users] Porting Transducers to Pharo

2017-05-31 Thread Steffen Märcker
Thanks for the encouraging response! First question: Which is the  
recommended (friction free) way to exchange code between VW and Pharo?


Cheers!
Steffen

Am .05.2017, 16:22 Uhr, schrieb Alexandre Bergel :


I second Sven. This is very exciting!

Let us know when you have something ready to be tested.

Alexandre






Re: [Pharo-users] Porting Transducers to Pharo

2017-06-02 Thread Steffen Märcker

Dear all,

thanks for the many suggestions. I didn't have time to test all  
import/export ways yet. But for now, I can report on two:


1) NGFileOuter
Unfortunately, it raised several MNUs in my image. I'll investigate them  
later.


2) FileOut30 (VW Contributed)
I was able to file out the code except for the package definition.  
Replacing {category: ''} in the class definitions with {package:  
'Transducers'} fixed that. However, methods that extend existing classes  
did not end up in the Transducers package. Is there a similar easy change  
to the file-out making that happen? Also I'd like to add the package  
comment if that's possible.


Most things appear to work as far as I can see. Two exceptions:
1) Random is a subclass of Stream in VW and in Pharo it is not. Hence,  
I'll have to copy some methods from Stream to Random.
2) I used #beImmutable in VW but I couldn't yet figure out how to make  
objects immutable in Pharo.


However, until the tests are ported, I cannot guarantee correctness. Porting the test  
suite will be another beast, since I rely on the excellent  
mocking/stubbing library DoubleAgents by Randy Coulman. I am not sure how  
I will handle that. In general, I think it would be really worth the  
effort to be ported to Pharo, too. DoubleAgents is pretty powerful and  
produces easy-to-read and understandable mocking/stubbing code. Personally, I  
clearly prefer it, e.g., over Mocketry (no offence intended!).


Attached you'll find the file-out that I loaded into Pharo. The issues  
above are not addressed yet. However, the following example works:


| scale sides count collect experiment random samples coin flip |
scale := [:x | (x * 2 + 1) floor] map.
sides := #(heads tails) replace.
count := 1000 take.
collect := [:bag :c | bag add: c; yourself].
experiment := (scale * sides * count) transform: collect.
random := #(0.1 0.3 0.4 0.5 0.6 0.7 0.8 0.9).

samples := random
  reduce: experiment
  init: Bag new.

samples := random
  transduce: scale * sides * count
  reduce: collect
  init: Bag new.

coin := sides <~ scale <~ random.
flip := Bag <~ count.

samples := flip <~ coin.


Best, Steffen


Am .06.2017, 08:16 Uhr, schrieb Stephane Ducasse :


There is a package for that NGFileOuter or something like that on cincom
store.
We used it for mobydic code.

On Wed, May 31, 2017 at 6:35 PM, Alexandre Bergel  


wrote:

If I remember correctly, there is a parcel in VisualWorks to export a  
file

out (Squeak format).

@Milton, can you give a hand to Steffen?

Alexandre
--
_,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:
Alexandre Bergel  http://www.bergel.eu
^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;.



On May 31, 2017, at 10:32 AM, Steffen Märcker  wrote:

Thanks for the encouraging response! First question: Which is the
recommended (friction free) way to exchange code between VW and Pharo?

Cheers!
Steffen

Am .05.2017, 16:22 Uhr, schrieb Alexandre Bergel  

>:

I second Sven. This is very exciting!

Let us know when you have something ready to be tested.

Alexandre






transducers.pharo.st
Description: Binary data


Re: [Pharo-users] Porting Transducers to Pharo

2017-06-02 Thread Steffen Märcker

Hi Stephane!


This is a great news. We need cool frameworks.


I am really curious how well it will work for others. =)


- There is a package on cincom store to support the migration from VW to
Pharo. FileOuter something. The name escapes my mind now. We updated it
last year to help porting one application to Pharo.


I think it is FileOuterNG (at least your name appears quite often in the  
commits ;-) ).
Unfortunately, I didn't get it to work straight away and got some MNUs. But  
it is very likely that this is my fault and I missed something  
important. I'll try it again later.



- I can help producing a nice document :)


Do you mean like the booklets published over the last weeks? This would be  
great.


Do you have an idea how to add a package comment to the simple file-out  
I used? I think a simple message send should suffice.


Cheers!
Steffen



Am .06.2017, 21:06 Uhr, schrieb Stephane Ducasse :


Hi steffen





On Wed, May 31, 2017 at 2:23 PM, Steffen Märcker  wrote:


Hi,

I am the developer of the library 'Transducers' for VisualWorks. It was
formerly known as 'Reducers', but this name was a poor choice. I'd like  
to

port it to Pharo, if there is any interest on your side. I hope to learn
more about Pharo in this process, since I am mainly a VW guy. And most
likely, I will come up with a bunch of questions. :-)

Meanwhile, I'll cross-post the introduction from VWnc below. I'd be very
happy to hear your optinions, questions and I hope we can start a  
fruitful

discussion - even if there is not Pharo port yet.

Best, Steffen



Transducers are building blocks that encapsulate how to process elements
of a data sequence independently of the underlying input and output  
source.




# Overview

## Encapsulate
Implementations of enumeration methods, such as #collect:, have the  
logic

how to process a single element in common.
However, that logic is reimplemented each and every time. Transducers  
make

it explicit and facilitate re-use and coherent behavior.
For example:
- #collect: requires mapping: (aBlock1 map)
- #select: requires filtering: (aBlock2 filter)


## Compose
In practice, algorithms often require multiple processing steps, e.g.,
mapping only a filtered set of elements.
Transducers are inherently composable, and hereby, allow to make the
combination of steps explicit.
Since transducers do not build intermediate collections, their  
composition

is memory-efficient.
For example:
- (aBlock1 filter) * (aBlock2 map)   "(1.) filter and (2.) map elements"


## Re-Use
Transducers are decoupled from the input and output sources, and hence,
they can be reused in different contexts.
For example:
- enumeration of collections
- processing of streams
- communicating via channels



# Usage by Example

We build a coin flipping experiment and count the occurrence of heads  
and

tails.

First, we associate random numbers with the sides of a coin.

scale := [:x | (x * 2 + 1) floor] map.
sides := #(heads tails) replace.

Scale is a transducer that maps numbers x between 0 and 1 to 1 and 2.
Sides is a transducer that replaces the numbers with heads an tails by
lookup in an array.
Next, we choose a number of samples.

count := 1000 take.

Count is a transducer that takes 1000 elements from a source.
We keep track of the occurrences of heads an tails using a bag.

collect := [:bag :c | bag add: c; yourself].

Collect is binary block (reducing function) that collects events in a  
bag.
We assemble the experiment by transforming the block using the  
transducers.


experiment := (scale * sides * count) transform: collect.

  From left to right we see the steps involved: scale, sides, count and
collect.
Transforming assembles these steps into a binary block (reducing  
function)

we can use to run the experiment.

samples := Random new
  reduce: experiment
  init: Bag new.

Here, we use #reduce:init:, which is mostly similar to #inject:into:.
To execute a transformation and a reduction together, we can use
#transduce:reduce:init:.

samples := Random new
  transduce: scale * sides * count
  reduce: collect
  init: Bag new.

We can also express the experiment as data-flow using #<~.
This enables us to build objects that can be re-used in other  
experiments.


coin := sides <~ scale <~ Random new.
flip := Bag <~ count.

Coin is an eduction, i.e., it binds transducers to a source and
understands #reduce:init: among others.
Flip is a transformed reduction, i.e., it binds transducers to a  
reducing

function and an initial value.
By sending #<~, we draw further samples from flipping the coin.

samples := flip <~ coin.

This yields a new Bag with another 1000 samples.



# Basic Concepts

## Reducing Functions

A reducing function represents a single step in processing a data  
sequence.
It takes an acc

Re: [Pharo-users] Porting Transducers to Pharo

2017-06-02 Thread Steffen Märcker

Hi Stephane & Damien!

The short answer is that the compact notation turned out to work much  
better for me in my code, especially if multiple transducers are  
involved. But that's my personal taste. You can choose which suits you  
better. In fact,


  1000 take.

just sits on top and simply calls

  Take number: 1000.

If the need arises, we could of course factor the compact notation out  
into a separate package. Btw, would you prefer (Take n: 1000) over (Take  
number: 1000)?


Damien, you're right, I experimented with additional styles. Right now, we  
already have in the basic Transducer package:


  collection transduce: #squared map * 1000 take. "which is equal to"
  (collection transduce: #squared map) transduce: 1000 take.

Basically, one can split #transduce:reduce:init: into single calls of  
#transduce:, #reduce:, and #init:, depending on the needs.
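
A hypothetical sketch of that split, with selector names taken from the
sentence above (the intermediate objects and the exact chaining are my
assumption, not confirmed by the post):

```smalltalk
"Combined form, as shown earlier in the thread:"
samples := collection
	transduce: #squared map * 1000 take
	reduce: [:col :e | col add: e; yourself]
	init: OrderedCollection new.

"Hypothetically split into single sends:"
samples := ((collection transduce: #squared map * 1000 take)
	reduce: [:col :e | col add: e; yourself])
	init: OrderedCollection new.
```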

I also have an (unfinished) extension, that allows to write:

  (collection transduce map: #squared) take: 1000.

This feels familiar, but becomes a bit hard to read if more than two steps  
are needed.


  collection transduce
   map: #squared;
   take: 1000.

I think this alternative would read nicely. But as the message chain has  
to modify the underlying object (an eduction), very sneaky side effects may  
occur. E.g., consider


  eduction := collection transduce.
  squared  := eduction map: #squared.
  take := squared take: 1000.

Now, all three variables hold onto the same object, which first squares  
all elements and then takes the first 1000.


Best,
Steffen




Am .06.2017, 21:28 Uhr, schrieb Damien Pollet  
:


If I recall correctly, there is an alternate protocol that looks more  
like

xtreams or the traditional select/collect iterations.

On 2 June 2017 at 21:12, Stephane Ducasse   
wrote:



I have a design question

why the library is implemented in functional style vs messages?
I do not see why this is needed. To my eyes the compact notation
goes against readibility of code and it feels ad-hoc in Smalltalk.


I really prefer

square := Map function: #squared.
take := Take number: 1000.

Because I know that I can read it and understand it.
From that perspective I prefer Xtreams.

Stef









On Wed, May 31, 2017 at 2:23 PM, Steffen Märcker  wrote:


Hi,

I am the developer of the library 'Transducers' for VisualWorks. It was
formerly known as 'Reducers', but this name was a poor choice. I'd  
like to
port it to Pharo, if there is any interest on your side. I hope to  
learn

more about Pharo in this process, since I am mainly a VW guy. And most
likely, I will come up with a bunch of questions. :-)

Meanwhile, I'll cross-post the introduction from VWnc below. I'd be  
very
happy to hear your optinions, questions and I hope we can start a  
fruitful

discussion - even if there is not Pharo port yet.

Best, Steffen



Transducers are building blocks that encapsulate how to process  
elements

of a data sequence independently of the underlying input and output
source.



# Overview

## Encapsulate
Implementations of enumeration methods, such as #collect:, have the  
logic

how to process a single element in common.
However, that logic is reimplemented each and every time. Transducers  
make

it explicit and facilitate re-use and coherent behavior.
For example:
- #collect: requires mapping: (aBlock1 map)
- #select: requires filtering: (aBlock2 filter)


## Compose
In practice, algorithms often require multiple processing steps, e.g.,
mapping only a filtered set of elements.
Transducers are inherently composable, and hereby, allow to make the
combination of steps explicit.
Since transducers do not build intermediate collections, their  
composition

is memory-efficient.
For example:
- (aBlock1 filter) * (aBlock2 map)   "(1.) filter and (2.) map  
elements"



## Re-Use
Transducers are decoupled from the input and output sources, and hence,
they can be reused in different contexts.
For example:
- enumeration of collections
- processing of streams
- communicating via channels



# Usage by Example

We build a coin flipping experiment and count the occurrence of heads  
and

tails.

First, we associate random numbers with the sides of a coin.

scale := [:x | (x * 2 + 1) floor] map.
sides := #(heads tails) replace.

Scale is a transducer that maps numbers x between 0 and 1 to 1 and 2.
Sides is a transducer that replaces the numbers with heads an tails by
lookup in an array.
Next, we choose a number of samples.

count := 1000 take.

Count is a transducer that takes 1000 elements from a source.
We keep track of the occurrences of heads an tails using a bag.

collect := [:bag :c | bag add: c; yourself].

Collect is binary block (reducing function) that collects events in a  
bag.

We assemble the experiment by transforming the block using the
transducers.

experiment := (scale * sides * count) transform: collect.

  

Re: [Pharo-users] Porting Transducers to Pharo

2017-06-06 Thread Steffen Märcker
I assumed so. I used file-outs only to get something working quickly. =)  
Properly published code will follow as soon as I am more familiar with  
code management in Pharo.



Am .06.2017, 09:05 Uhr, schrieb Stephane Ducasse :


We do not work with fileouts :)
You should produce packages with also a configuration and published them  
on

smalltalkhub or git and
in the MetaRepository.
You can also add package comments

On Sat, Jun 3, 2017 at 10:29 PM, Steffen Märcker  wrote:


Dear all,

attached are updated file-outs. I fixed a couple of annoyances that
slipped through yesterday evening. Most notable:

1) Random generator now works.
2) Early termination via Reduced exception does not MNU anymore.
3) Printing a transducer holding a block does not MNU anymore.

Please, give it a spin and tell me your impressions. (At least) the
coin-flipping example from the package comment works now:

scale := [:x | (x * 2 + 1) floor] map.
sides := #(heads tails) replace.
count := 1000 take.
collect := [:bag :c | bag add: c; yourself].
experiment := (scale * sides * count) transform: collect.
"experiment cannot be re-used"
samples := Random new
  reduce: experiment
  init: Bag new.
"transform and reduce in one step"
samples := Random new
  transduce: scale * sides * count
  reduce: collect
  init: Bag new.
"assemble coin (eduction) and flip (reduction) objects"
coin := sides <~ scale <~ Random new.
flip := Bag <~ count.
"flip coin =)"
samples := flip <~ coin.

Cheers!
Steffen



Am .06.2017, 23:08 Uhr, schrieb Steffen Märcker :

Thanks, this appears to work.  Attached you'll find the file-out from

VisualWorks and the file-out from Pharo (includes package comment).

Cheers!
Steffen


Am .06.2017, 20:06 Uhr, schrieb Yanni Chiu :

To get the extension methods into the Transducers package, the  
following

worked for me - edit the category to have the prefix '*Transducers-'

2710c2710

< !Number methodsFor: 'transforming' stamp: ' 2/6/17 15:38'!

---

!Number methodsFor: '*Transducers-transforming' stamp: ' 2/6/17  
15:38'!





On Fri, Jun 2, 2017 at 11:05 AM, Steffen Märcker   
wrote:


Dear all,


thanks for the many suggestions. I didn't had time to test all
import/export ways yet. But for now, I can report on two:

1) NGFileOuter
Unfortunately It raised several MNUs in my image. I'll investigate  
them

later.

2) FileOut30 (VW Contributed)
I was able to file out the code except for the package definition.
Replacing {category: ''} in the class definitions with {package:
'Transducers'} fixed that. However, methods that extend existing  
classes

did not end up in the Transducers package. Is there a similar easy
change
to the file-out making that happen? Also I'd like to add the package
comment if that's possible.

Most things appear to work as far as I can see. Two exceptions:
1) Random is a subclass of Stream in VW and in Pharo it is not.  
Hence,

I'll have to copy some methods from Stream to Random.
2) I used #beImmutable in VW but I couldn't yet figure out how to  
make

objects immutable in Pharo.

However, until the tests are ported, I cannot guarantee. Porting the
test
suite will be another beast, since I rely on the excellent
mocking/stubbing
library DoubleAgents by Randy Coulman. I am not sure how I will  
handle

that. In general, I think it would be really worth the effort to be
ported
to Pharo, too. DoubleAgents is pretty powerful and produces easy to  
read
and understand mocking/stubbing code. Personally, I prefer it  
clearly,

e.g., over Mocketry (no offence intended!).

Attached you'll find the file-out that I loaded into Pharo. The  
issues

above are not addressed yet. However, the following example works:

| scale sides count collect experiment random samples coin flip |
scale := [:x | (x * 2 + 1) floor] map.
sides := #(heads tails) replace.
count := 1000 take.
collect := [:bag :c | bag add: c; yourself].
experiment := (scale * sides * count) transform: collect.
random := #(0.1 0.3 0.4 0.5 0.6 0.7 0.8 0.9).

samples := random
  reduce: experiment
  init: Bag new.

samples := random
  transduce: scale * sides * count
  reduce: collect
  init: Bag new.

coin := sides <~ scale <~ random.
flip := Bag <~ count.

samples := flip <~ coin.


Best, Steffen



Am .06.2017, 08:16 Uhr, schrieb Stephane Ducasse
:

There is a package for that NGFileOuter or something like that on  
cincom



store.
We used it for mobydic code.

On Wed, May 31, 2017 at 6:35 PM, Alexandre Bergel <
alexandre.ber...@me.com>
wrote:

If I remember correctly, there is a parcel in VisualWorks to export  
a

file


out (Squeak format).

@Milton, can you give a hand to Steffen?

Alexandre
--
_,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:~^~:;._,.;:
Al

Re: [Pharo-users] Porting Transducers to Pharo

2017-06-06 Thread Steffen Märcker

Hi!

If the need arises, we could of course factor the compact notation out  
into

a separate package.

Good idea
[...] I do not want to help promoting a syntax that alienates me (and
others because other people reported the same to me).


I understand. Btw, I'd really, really appreciate if others post their  
thoughts and feedback here as well. Discussion helps moving things  
forward. =)




  (collection transduce map: #squared) take: 1000.


To me this is much more readable.


Well, I'll provide that extension once it is finished.


I cannot and do not want to use the other forms.




  collection transduce
   map: #squared;
   take: 1000.

But as the message chain has to modify the underlying object
(an eduction), very sneaky side effects may occur. E.g., consider

  eduction := collection transduce.
  squared  := eduction map: #squared.
  take := squared take: 1000.

Now, all three variables hold onto the same object, which first squares  
all elements and then takes the first 1000.


This is because the programmer did not understand what he did. No?


Sure. ;-) Nevertheless, it would be very hard to debug. All of which are  
the reasons I wouldn't implement that variant. ;-)




PS: I played with infinite stream and iteration back in 1993 in CLOS.
Now I do not like to mix things because it breaks my flow of thinking.



I am not sure whether I understand what you mean by mixing. Concerning  
transducers, the ability to handle infinite sources is only a (natural)  
side-effect of the ability to finish reductions before all elements are  
processed, e.g., like #detect: and such.


Best, Steffen






Re: [Pharo-users] Porting Transducers to Pharo

2017-06-06 Thread Steffen Märcker

Hi Phil,


Coupling this with Olek's work on the DataFrame could really come handy.


I am new to this list. Could you please elaborate?

Cheers!
Steffen



On Mon, Jun 5, 2017 at 9:14 AM, Stephane Ducasse  


wrote:


Hi Steffen


> The short answer is that the compact notation turned out to work much
better
> for me in my code, especially, if multiple transducers are involved.  
But
> that's my personal taste. You can choose which suits you better. In  
fact,

>
>   1000 take.
>
> just sits on top and simply calls
>
>   Take number: 1000.

To me this is much much better.


> If the need arises, we could of course factor the compact notation out
into
> a separate package.
Good idea

 Btw, would you prefer (Take n: 1000) over (Take number:
> 1000)?

I tend to prefer explicit selector :)


> Damien, you're right, I experimented with additional styles. Right  
now,

we
> already have in the basic Transducer package:
>
>   (collection transduce: #squared map * 1000 take. "which is equal to"
>   (collection transduce: #squared map) transduce: 1000 take.
>
> Basically, one can split #transduce:reduce:init: into single calls of
> #transduce:, #reduce:, and #init:, depending on the needs.
> I also have an (unfinished) extension, that allows to write:
>
>   (collection transduce map: #squared) take: 1000.

To me this is much more readable.
I cannot and do not want to use the other forms.


> This feels familiar, but becomes a bit hard to read if more than two
> steps are needed.
>
>   collection transduce
>map: #squared;
>take: 1000.

Why would this be hard to read? We do that all the time everywhere.


> I think this alternative would read nicely. But as the message chain
> has to modify the underlying object (an eduction), very sneaky side
> effects may occur. E.g., consider
>
>   eduction := collection transduce.
>   squared  := eduction map: #squared.
>   take := squared take: 1000.
>
> Now, all three variables hold onto the same object, which first
> squares all elements and then takes the first 1000.

This is because the programmer did not understand what he did. No?



Stef

PS: I played with infinite stream and iteration back in 1993 in CLOS.
Now I do not like to mix things because it breaks my flow of thinking.


>
> Best,
> Steffen
>
>
>
>
>
> Am .06.2017, 21:28 Uhr, schrieb Damien Pollet
> :
>
>> If I recall correctly, there is an alternate protocol that looks more
>> like xtreams or the traditional select/collect iterations.
>>
>> On 2 June 2017 at 21:12, Stephane Ducasse 
wrote:
>>
>>> I have a design question
>>>
>>> why the library is implemented in functional style vs messages?
>>> I do not see why this is needed. To my eyes the compact notation
>>> goes against readability of code and it feels ad-hoc in Smalltalk.
>>>
>>>
>>> I really prefer
>>>
>>> square := Map function: #squared.
>>> take := Take number: 1000.
>>>
>>> Because I know that I can read it and understand it.
>>> From that perspective I prefer Xtreams.
>>>
>>> Stef
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> On Wed, May 31, 2017 at 2:23 PM, Steffen Märcker 
wrote:
>>>
>>>> Hi,
>>>>
>>>> I am the developer of the library 'Transducers' for VisualWorks. It
>>>> was formerly known as 'Reducers', but this name was a poor choice.
>>>> I'd like to port it to Pharo, if there is any interest on your side.
>>>> I hope to learn more about Pharo in this process, since I am mainly
>>>> a VW guy. And most likely, I will come up with a bunch of
>>>> questions. :-)
>>>>
>>>> Meanwhile, I'll cross-post the introduction from VWnc below. I'd be
>>>> very happy to hear your opinions, questions, and I hope we can start
>>>> a fruitful discussion - even if there is no Pharo port yet.
>>>>
>>>> Best, Steffen
>>>>
>>>>
>>>>
>>>> Transducers are building blocks that encapsulate how to process
>>>> elements of a data sequence independently of the underlying input
>>>> and output source.
>>>>
>>>>
>>>>
>>>> # Overview
>>>>
>>>> ## Encapsulate
>>>> Implementations of enumeration methods, such as #collect:, have the
>>>> logic how to process a single element in common.
>>>>

Re: [Pharo-users] [ANN] Iterators

2019-08-24 Thread Steffen Märcker

Hi Julien,

nice work! Could you please tell how your approach is related to
transducers from the user perspective and technically?
(https://github.com/Pharophile/Transducers)

Your example suggests that the API is quite similar to the data flow API
of transducers. Let me show your example using transducers.


result := (#+ init: 0) <~ [:x | x * 2] map <~ #(1 2 3).
OrderedCollection with: result.


Or the more classical way:


result := #(1 2 3)
transduce: [:x | x * 2] map
reduce: #+
init: 0.
OrderedCollection with: result.



Best regards,
Steffen



Re: [Pharo-users] Set >> collect:thenDo:

2019-09-08 Thread Steffen Märcker
Surprise!

The selector #collect:thenDo: strongly suggests that it behaves just as 
#collect: then #do:. But as #collect: usually means map + aggregate in the 
receiver type, I'd expect the input to the do block to be deduped already. At 
least it is an easy-to-miss source of subtle bugs. Maybe an additional method 
#map:thenDo: would make sense?
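A minimal sketch of such a #map:thenDo: (the selector and its placement on Collection are only a suggestion):

```smalltalk
Collection >> map: mapBlock thenDo: doBlock
	"Apply mapBlock to each element and hand the result straight to
	 doBlock, without building (or deduplicating) an intermediate
	 collection. Unlike Set>>collect:, no results are coalesced."
	self do: [:each | doBlock value: (mapBlock value: each)]
```

With this, `#(1 2 3) asSet map: #odd thenDo: [:each | Transcript show: each; cr]` would print three booleans - the current #collect:thenDo: behaviour, but under a name that does not suggest deduplication.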

Best, Steffen

Am 8. September 2019 08:02:30 MESZ schrieb "Kasper Østerbye" 
:
>Feature!
>
>collect: forms a new collection of the same kind as its receiver. In
>this
>case a set. As the result of your collect:
>#(1 2 3) asSet collect: #odd)
>is booleans, the resulting set will contain only to elements (the
>duplicate
>odd resulting in true is removed).
>
>collect: thenDo: applies the collect-block to each element of the
>receiver,
>and then applies the do to each of those results. You can see the
>implementation of collect:thenDo: in class Collection.
>
>Best,
>
>Kasper
>
>
>On 7 September 2019 at 17.22.03, Herby Vojčík (he...@mailbox.sk) wrote:
>
>Hello!
>
>
>(#(1 2 3) asSet collect: #odd)
>do: [ :each | Transcript show: each; cr ]
>
>> true
>> false
>
>
>
>#(1 2 3) asSet collect: #odd
>thenDo: [ :each | Transcript show: each; cr ]
>
>> true
>> false
>> true
>
>
>
>Bug or feature?
>
>Herby


Re: [Pharo-users] Set >> collect:thenDo:

2019-09-09 Thread Steffen Märcker

I think this thread indicates that the selector #collect:thenDo: may
indeed cause some misunderstanding given the standard semantics of #collect:.
Is there any reason not to use a different one, such as #map:thenDo:, to
make the difference clearer?

A nice side effect could be a new automatic refactoring rule that
consolidates chained #collect: and #do: sends into a faster
#collect:thenDo: send without breaking existing code.

Am .09.2019, 07:46 Uhr, schrieb Richard O'Keefe :


(1) If you want (aSet collect: collectBlock) do: doBlock
you can write exactly that.  Nothing stops you, and it will
be as clear and reliable as any use of Set>>collect:, which
is to say NOT VERY CLEAR OR RELIABLE AT ALL.

(2) #collect:then{Do:Select:Reject:} had no other purpose than
to avoid creating an intermediate and otherwise useless
collection.  If you are not trying to involve an intermediate
set then it is just plain wrong to use #collect:thenDo:.

(3) Oddly enough, the reason that #collect:thenDo: exists in my
library is that I copied it from Squeak, at a time when it had
the same definition as Pharo and ST/X.  Had I known of the change
in Squeak I would have bitterly opposed it.  The comment in the
current Squeak code, that it is for readability, is 100% the
reverse of the truth.  Using the version with parentheses is WAY
clearer than using the portmanteau method.  Sigh.  I see they
broke #collect:thenSelect: in the same way.

(4) Let me offer you another member of the #collect:then* family.

Collection
  collect: collectBlock thenInject: initial into: injectBlock
|r|
r := initial.
self do: [:each |
  r := injectBlock value: r value: (collectBlock value: each)].
^r

#(now is the hour for us to say goodbye) asSet
  collect: [:each | each size]
  thenInject: 0 into: [:acc :each | acc + each]
 => 29
(#(now is the hour for us to say goodbye) asSet
  collect: [:each | each size])
  inject: 0 into: [:acc :each | acc + each]
 => 16

   That is, it would be WRONG to implement #collect:thenInject:into:
   as #collect: followed by #inject:into:.  The point is NOT to
   coalesce things that the receiver might (in general, incorrectly)
   regard as equal.

(5) The bottom line is that if #collect:thenDo: and its relatives did
not have their present semantics in Pharo (and ST/X), they would
need to be reinvented, with names that were not as clear.

(1) Just to repeat for emphasis, if you *want* (_ collect: _) do: _
then that is exactly what you should write.  There is no
excuse for using #collect:thenDo: in that case.  It is NOT
clearer to do so.  And you should imagine me jumping up and
down screaming that sending #collect: to a set is a bad code
smell which demands very explicit documentation as to why you
(for example) want a Set to answer a Set but an IdentitySet
to also answer a Set, not the expected IdentitySet.  (I have
been bitten by that more than once.)



On Mon, 9 Sep 2019 at 01:33, Herby Vojčík  wrote:


On 8. 9. 2019 14:28, Peter Kenny wrote:
> Two comments:
> First, the method comment for Collection>>collect:thenDo: is "Utility
> method to improve readability", which is exactly the same as for
> collect:thenSelect: and collect:thenReject:. This suggests that the
> *intention* of the method is not to introduce new behaviour, but simply
> to provide a shorthand for the version with parentheses. For other kinds
> of

I had that same impression.

> collection this is true; just the deduping makes Set different. If we
> want

I would be more defensive here and say that a generic collection should
have the (collect:) do: implementation, and only sequenceable collections
have the optimized one (if it indeed is the case that it is a shorthand
for the parenthesized one).

> the different behaviour, this should be indicated by method name and
> comment.
> Second, if we remove asSet from the second snippet, the output is
> exactly the same. It will be the same as long as the original collection
> has no duplicates. Somehow the effect is to ignore the asSet. It just
> smells wrong.
>
> Peter Kenny

Herby

> Kasper Osterbye wrote
>> The first version:
>>
>> (#(1 2 3) asSet collect: #odd)
>> do: [ :each | Transcript show: each; cr ]
>>
>> is rather straightforward, I believe, as collect: and do: have been
>> around forever (relatively speaking).
>>
>>
>> #(1 2 3) asSet collect: #odd
>> thenDo: [ :each | Transcript show: each; cr ]
>>
>>
>> On 8 September 2019 at 09.13.36, Richard Sargent (
>
>> richard.sargent@
>
>> ) wrote:
>>
>>   I am skeptical of one that relies on a specific implementation
>> rather than a specific definition.
>>
>> I share your feeling. I am not sure where such a definition would
>> come from. In Squeak it is defined as:
>>
>> collect: collectBlock thenDo: doBlock
>>
>> ^ (self collect: collectBlock) do: doBlock
>>
>> In pharo as:
>>
>> collect: collectBlock thenDo: doBlo

Re: [Pharo-users] Lazy Streams - was: Re: Set >> collect:thenDo:

2019-09-10 Thread Steffen Märcker

Hi,

I really think we should have an efficient way to chain operations on
collections and the like. In my personal opinion, this should be backed by
transducers, as they decouple the operations from the type of the data source
and sink. Hence, they are applicable to different collection types, streams,
and so on. Also, transducers allow for straightforward parallelization of
operations that are concurrency-safe.

However, the transducers port is not complete yet. I think I'll finish it
in October/November, but definitely this year. Maybe this could be a
starting point to discuss how to move forward?

One related question: Is there a somewhat comprehensive benchmark yielding
a baseline of Pharo's collection/streams/iteration performance? I'd really
like to compare the existing implementation to the alternative approaches.

Best regards,
Steffen


Am .09.2019, 11:30 Uhr, schrieb Kasper Østerbye
:


On 10 September 2019 at 00.56.28, Richard O'Keefe (rao...@gmail.com)
wrote:

Does that sound like a way forward?
I could convert my implementation of this interface to
Pharo if people would like me to.

Hi Richard,

It seems like what you propose (insofar as I understand it) is already
present in at least three other libraries which have not yet made it to
the
core of Pharo:

The Iterator framework Julien is working on:
https://github.com/juliendelplanque/Iterators

The XStreams framework: https://code.google.com/archive/p/xtreams/


The transducers: https://github.com/Pharophile/Transducers


I was not aware of the virtual collections from Strongtalk.

I am aware you know them (as you commented on the Iterators Julien
proposed).

I have been working with the LINQ stream/iterator framework in C#, and
found that to be a rather sound library with lazy evaluation. The streams in
Java are also nice, and are actually very good at exploiting multi cores.

It is sad there are no lazy streams in the core of Pharo, and I would find
it interesting to participate in a project to build one.

Best,

Kasper






Re: [Pharo-users] Concurrency Best Practices + Tests

2019-10-16 Thread Steffen Märcker

I can definitely second all the points. If you want to verify your design
formally, Vereofy (http://www.vereofy.de) is another model checker that
focuses on communication and coordination.

Best, Steffen



Am .10.2019, 08:49 Uhr, schrieb Richard O'Keefe :


(1) Be very clear in your design about which objects are shared and
which are not.
(2) Immutable objects can be shared safely.  Mutable ones are much
much harder to work with.
(3) Lazy initialisation does not work well in concurrent programming,
*except* when you are
 constructing an immutable value and it doesn't matter if the
variable is initialised twice.
(4) If you can possibly do so, *SEPARATE* the concurrency structure
from the application-
 specific work.  Use something like Promela/SPIN, or NuSMV, or
NuXmv, to check your
 concurrency structure,  or CPN Tools if you prefer Coloured Petri
Nets as a modelling
 notation.  This is what Erlang does with great success with
"behaviours"; the concurrency
 structures are there in the library and you just plug in the
non-concurrent part.  Look up
"parallelism patterns" and "algorithmic skeletons".  The book
 
https://pdfs.semanticscholar.org/f43e/7c6a40b96743f2217472a49cd616622bdc26.pdf
 may be helpful.  Separation of concerns: this way you can test
the concurrency
 structure of your code independently of the rest.
(5) It isn't clear to me that TDD is a good way to develop concurrent
programs.  It's a great
 way to develop the non-concurrent *parts*.  One reason I say this
is that I've found that
 it is better to avoid adding unneeded sequencing in a design than
to add concurrency
 to a sequential code.  If you try to *add* critical sections in a
language like Lisp or
 Smalltalk, you *will* miss some.  So the TDD Way is likely to
seduce you into
 making your code too sequential too soon.
(6) I personally find processes communicating via SharedQueues *much*
easier to design
 and make work than processes communicating via shared variables
and mutexes or
 semaphores.  (Basically, a Semaphore is a SharedQueue that can
only remember how
 many items it's holding, not what they were.)  Your experience
may be different, but
 Processes with private data communicating via SharedQueues
transfer much easier
 to concurrent activities on separate machines communicating via
sockets.
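
Point (6) can be illustrated with a minimal producer/consumer sketch in
Pharo, using a sentinel value to end the consumer (this shows the
structure only; it is not production-ready code):

```smalltalk
| queue results done |
queue := SharedQueue new.
results := OrderedCollection new.
done := Semaphore new.

"Producer: pushes work items, then a sentinel to signal the end."
[1 to: 5 do: [:i | queue nextPut: i squared].
 queue nextPut: #stop] fork.

"Consumer: owns its private data (results) and communicates only
 through the queue, so no mutex around results is needed."
[| item |
 [(item := queue next) = #stop]
	whileFalse: [results add: item].
 done signal] fork.

done wait.
Transcript show: results printString; cr
```

Because the queue is the only shared object, this structure transfers
directly to separate processes communicating over sockets, as noted above.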

On Thu, 5 Sep 2019 at 01:32, Noury Bouraqadi  wrote:


Hi everyone,

Can I get your input on the following questions :

- What are your best practices and recommendations for developing and
testing concurrent software?

- How to discover need for synchronization/critical sections/ when
doing TDD?

- How to write code to avoid dead-locks?

Noury








Re: [Pharo-users] BlockClosure

2019-10-23 Thread Steffen Märcker

Looks nice!

Just my two cents. To make function composition work with multiple
arguments, I implemented * as:

  BlockClosure>>* first
^ComposedFunction first: first second: self

And use a dedicated class to make multiple arguments work:

  ComposedFunction>>value: arg
^second value: (first value: arg)

  ComposedFunction>>value: arg1 value: arg2
^second value: (first value: arg1 value: arg2)

Then, we can write:

  [:x | x factorial ] * [:x :y | x + y]
value: 1 value: 2


(I usually use the mathematical composition operator, which evaluates from
right to left.)

However, it is often convenient to interchange Symbols and Blocks, e.g.,
the above can be rewritten as

  #factorial * #+
value: 1 value: 2

That requires implementing the * operator on symbols, too. And depending
on your Smalltalk implementation, you may have to implement the evaluation
protocol on symbols.
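
In Pharo, the two-argument case of that evaluation protocol could be
covered with an extension method along these lines (a sketch; add it
only if your image does not already provide it):

```smalltalk
Symbol >> value: anObject value: anotherObject
	"Let a binary or one-argument keyword selector act as a
	 two-argument function, so that #+ value: 1 value: 2 answers 3."
	^anObject perform: self with: anotherObject
```

Together with a matching * on Symbol, this is what makes the compact
`#factorial * #+` form above evaluable.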

Best, Steffen


Am .10.2019, 15:12 Uhr, schrieb main :


Hi again!

Just putting it out there for anyone interested.

What I did was define the following in Object:

|> aBlock
^ [ :x | x => self => aBlock ]

and also:

=> msg
^ msg value: self


This enabled me to compose like this (I know I probably violate every
rule
in the book, whatever ):

f :=  [ :x | x + 100 ].
g := [ :x | x +   20 ].
h := [ :x | x + 3 ].

0 => (f |> g |> h). "123"

Also tried using ~:

0 => (f ~ g ~ h). "123"

It's just a matter of taste I guess, but for me it warms my heart



--
Sent from: http://forum.world.st/Pharo-Smalltalk-Users-f1310670.html




Re: [Pharo-users] [vwnc] Parsing in Smalltalk

2019-12-02 Thread Steffen Märcker

Dear Sean,

thanks for bringing this up and for the interesting link [1]. I'll have a  
look. Meanwhile, I am still on both lists. ;-)



Steffen Märcker wrote

I wrote a PetitParserGenerator that takes the DSL and builds a
PetitParser.


I don't know how I could've missed this gem! I hope Steffen is still
subscribed. I googled "PetitParserGenerator", but only found these ML  
posts

:/


It was easy to miss, since I never actually posted that code. Here's what  
I've done:


From Xtreams.PEG syntax:
- PetitParserGenerator > PetitParser code
- PetitParserParser > PetitParser instance

I can send you that code. It should be straight forward to adapt it for  
PP2. However, I moved away from Xtreams own PEG flavor to Bryan Ford's  
original PEG syntax (close but more common) and programmed for Xtreams, PP  
and PP2:


From Bryan Ford's original PEG syntax:
1. Xtreams Grammer and Actor that build an Xtreams parser
2. PP(2)PEGParserParser that builds a PP(2) parser instance
3. PP(2)PEGParserCompiler that builds PP(2) parser classes

All three are available for VW in the Cincom's public repository:
- Xtreams-Parsing (8.2-3,stm): +PEG parser +some fixes to Xtreams parser
- PetitParser-PEG (1.11,stm)
- PetitParser2-PEG (1.11,stm)

And 2, 3 for Pharo on GitHub, e.g.
- https://github.com/kursjan/petitparser2/tree/master/PetitParser2-PEG

Speaking of, I just noticed that a translator from Xtreams.PEG to PEG is
missing to complete the picture here. ;-)



IIRC Xtreams can take a BNF and generate a parser.


As far as I know, there is no BNF-like parser generator available for
Xtreams.



wouldn't mind a two step BNF -> Xtreams -> PP process.


Is it an option for you to convert your BNF to PEG manually?

Best regards,
Steffen



Re: [Pharo-users] [ANN] uFFI Booklet v1.0

2020-02-12 Thread Steffen Märcker
Great! Thanks for the effort of putting this example-driven booklet together!


Steffen

Guillermo Polito schrieb am Wed Feb 12 2020 10:30:31 GMT+0100 
(Mitteleuropäische Normalzeit):


Hi all,


We have been working a lot in the last months in the documentation of uFFI, and 
we are happy to announce a first stable version of it that you’ll find in a 
link below :)
The booklet talks about uFFI, how to use it, how marshalling works, how to use 
structures, unions, etc.
And it is aimed for the version present on Pharo8.0.


https://github.com/SquareBracketAssociates/Booklet-uFFI/releases/tag/v1.0.1


Of course, a v1 does not mean it’s perfect. Future versions will include new 
chapters/features that are on the pipeline such as callbacks and how to specify 
string encodings in your bindings.
Also, so if you find something to fix, to add, to remove, or something hard to 
follow, let us know 
 - by email or
 - by issue (will be easier to track actually than an email) or
 - by PR (and we will have a boost of happiness in addition)


Cheers,
Guille in behalf of everybody that helped and contributed to this



[Pharo-users] Hi-DPI Support

2020-02-18 Thread Steffen Märcker

Hi,

I'd like to ask about the state of Hi-DPI support for Pharo. I already
found two related GitHub issues which where inconclusive so far. Does the
discussion take place elsewhere? If so, where? If not:

- What is the current state?
- Does a plan exist how to get there?
- What are the missing pieces?
- How can I support this endeavor in practice?

Best, Steffen



[Pharo-users] Rosetta Code

2020-04-17 Thread Steffen Märcker

Hi,

some of you might have heard of Rosetta Code, a Wiki that presents
solutions to standard programming tasks in various languages. I figured
that in many categories no Smalltalk/Pharo solutions are shown. Maybe we
can contribute some pieces from time to time? I started with the following:

https://rosettacode.org/wiki/Tree_traversal#Smalltalk

Your comments are welcome.

Best, Steffen



Re: [Pharo-users] Rosetta Code

2020-04-18 Thread Steffen Märcker

Hi,

I'd be interested to see how you approached these problems.

Best,
Steffen

Am .04.2020, 15:49 Uhr, schrieb Richard O'Keefe :


I've tackled more than 600 of the RosettaCode problems in Smalltalk.
Unfortunately, this was done as exercises in developing my own Smalltalk
system.  If anyone would like to have a zip file to serve as a shortcut
for
developing Pharo versions, you'd be very welcome.

On Fri, 17 Apr 2020 at 21:31, Steffen Märcker  wrote:


Hi,

some of you might have heard of Rosetta Code, a Wiki that presents
solutions to standard programming tasks in various languages. I figured
that in many categories no Smalltalk/Pharo solutions are shown. Maybe we
can contribute some pieces from time to time? I started with the
following:

https://rosettacode.org/wiki/Tree_traversal#Smalltalk

Your comments are welcome.

Best, Steffen









Re: [Pharo-users] Can it do this way ?

2020-09-06 Thread Steffen Märcker

Maybe this is a naive question, but can you just split the task into the
following two?

1. Check whether the string is syntactically an ISBN number.
This can be done, e.g., using a regex.

2. Check the check character.
Calculate the check character from the (now known to be) syntactically
valid string.

ISBNValidator>>isValidISBN: aString
  ^(self isSyntacticallyValid: aString) and: [self isCheckCharacterValid:
aString]
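
The second step, #isCheckCharacterValid:, might look like this (a sketch,
assuming aString already passed the syntax check, i.e., it contains nine
digits plus a final digit or $X, possibly interspersed with dashes):

```smalltalk
ISBNValidator >> isCheckCharacterValid: aString
	"Compute the ISBN-10 weighted sum on the fly, ignoring dashes.
	 Weights run from 10 down to 1; a trailing X counts as 10."
	| sum weight |
	sum := 0.
	weight := 10.
	aString do: [:c |
		c = $- ifFalse: [
			sum := sum
				+ (weight * (c = $X ifTrue: [10] ifFalse: [c digitValue])).
			weight := weight - 1]].
	^sum \\ 11 = 0
```

For example, '3-598-21508-8' yields a weighted sum of 264 = 24 × 11, so it validates.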

Kind regards,
Steffen

Am .09.2020, 07:35 Uhr, schrieb Roelof Wobben via Pharo-users
:


Nope, with your idea I cannot make this part work:

The ISBN-10 format is 9 digits (0 to 9) plus one check character (either
a digit or an X only). In the case the check character is an X, this
represents the value '10'. These may be communicated with or without
hyphens, and can be checked for their validity by the following formula:

(x1 * 10 + x2 * 9 + x3 * 8 + x4 * 7 + x5 * 6 + x6 * 5 + x7 * 4 + x8 * 3
+ x9 * 2 +

so I mean the calculation.


Roelof


Op 4-9-2020 om 06:45 schreef Roelof Wobben:

OK, then I could use your idea, but then I have to make the code for
calculating if it's a valid number. And I wonder if the code will not be
too big. I learned that it is good that a method does only one thing, and
this one seems to be doing more than one thing.

Roelof



Op 4-9-2020 om 05:24 schreef Richard O'Keefe:

What part of "return false if there are not exactly 10 characters
left after discarding dashes" fails to handle the empty string?

A test case for the empty string is only valuable if the
empty string is NOT a special case.



On Wed, 2 Sep 2020 at 22:52, Roelof Wobben  wrote:

Op 2-9-2020 om 12:38 schreef Richard O'Keefe:

There is simply no point in "taking the first nine numbers out".

And there shouldn't BE a test for the string being empty, anywhere.

'' '-' '---' and so on should all be handled the same way.


Oh well, what stops you doing


digits := aString select: [:each | each ~= $-].

digits size = 10 ifFalse: [^false].

lastDigit := digits last.

digits := digits copyFrom: 1 to: 9.

( (lastDigit = $X or: [lastDigit isDigit]) and: [

digits allSatisfy: #isDigit]

) ifFalse: [^false].


Now my code does not do this, but it is just 16 lines of code with

nothing that it would make sense to extract.



Nothing, only that I could not think of this one for myself.
If I do it the TDD way, I come closer to the way I'm currently thinking.

But is this case then covered?

test14_EmptyIsbn
| result |
result := isbnVerifierCalculator isValidIsbn: ''.
self assert: result equals: false

and still I have to do the calculation to see if it's valid.
If I understand the code well, I can use the digits variable?


Roelof







Re: [Pharo-users] Can it do this way ?

2020-09-07 Thread Steffen Märcker
No problem. I am not knowledgeable about isbn numbers. At which places may a 
dash occur?

Kind regards,
Steffen

07.09.2020 16:18:22 Roelof Wobben via Pharo-users :

> Op 6-9-2020 om 10:07 schreef Steffen Märcker:
>> Maybe this is a naive question, but can you just split the task into the
>> following two?
>> 
>> 1. Check whether the string is syntactically an ISBN number.
>> This can be done, e.g., using a regex.
>> 
>> 2. Check the check character.
>> Calculate the check character from the (now known to be) syntactically
>> valid string.
>> 
>> ISBNValidator>>isValidISBN: aString
>> ^(self isSyntacticallyValid: aString) and: [self isCheckCharacterValid:
>> aString]
>> 
>> Kind regards,
>> Steffen
>> 
> Sorry for not responding earlier, but your response seems to have landed in 
> the spam folder of my provider.
> 
> I could do that, but I am very bad at regex, so I do not know a regex which 
> can validate 123456789 or 123-456-78-9
> 
> Roelof
> 



Re: [Pharo-users] Can it do this way ?

2020-09-07 Thread Steffen Märcker

Hi,

after reading the link and some additional sources, it turns out that a
valid ISBN-10 has either no separators or four blocks separated by either
dashes or spaces:
  Group-Publisher-Title-CheckDigit

Assuming Regex11 (and that I made no mistake), the following should do the
trick:

IsbnVarifier>>isSyntacticIsbn: aString
  "no groups"
  noGrouped := '\d{9}[0-9X]' asRegex.
  "groups separated by either dashes or spaces"
  dashes := '\d{1,7}-\d{1,7}-\d{1,7}-[0-9X]'
  spaces := '\d{1,7} \d{1,7} \d{1,7} [0-9X]'
  grouped := (dashed , '|' , spaces) asRegex.

  ^(aString matches: nonGrouped) or:
[(aString matches: grouped) and:
  [aString size = 10 + 3]]

Surely, you could cleverly compress the regex even further but that does
not matter for this example. After checking the syntax, you can just
iterate over the string and compute the check-digit on the fly.

Kind regards,
Steffen

Am .09.2020, 18:19 Uhr, schrieb Roelof Wobben via Pharo-users
:


See here for all the tests :
https://github.com/exercism/pharo-smalltalk/blob/master/exercises/isbn-verifier/IsbnVerifierTest.class.st#L88

Roelof







Re: [Pharo-users] Can it do this way ?

2020-09-07 Thread Steffen Märcker

Now having a Workspace at hand, I fixed some minor typos:

IsbnVarifier>>isSyntacticIsbn: aString
  | nonGrouped dashes spaces grouped |
  nonGrouped := '\d{9}[0-9X]' asRegex.
  "groups separated by either dashes or spaces"
  dashes := '\d{1,7}-\d{1,7}-\d{1,7}-[0-9X]'.
  spaces := '\d{1,7} \d{1,7} \d{1,7} [0-9X]'.
  grouped := (dashes , '|' , spaces) asRegex.
  ^(aString matchesRegex: nonGrouped) or:
[(aString matchesRegex: grouped) and:
  [aString size = (10 + 3)]]

Best, Steffen


Am .09.2020, 19:01 Uhr, schrieb Steffen Märcker :


Hi,

after reading the link and some additional sources, it turns out that a
valid ISBN-10 has either no separators or four blocks separated by either
dashes or spaces:
   Group-Publisher-Title-CheckDigit

Assuming Regex11 (and that I made no mistake), the following should do
the trick:

IsbnVarifier>>isSyntacticIsbn: aString
   "no groups"
   noGrouped := '\d{9}[0-9X]' asRegex.
   "groups separated by either dashes or spaces"
   dashes := '\d{1,7}-\d{1,7}-\d{1,7}-[0-9X]'
   spaces := '\d{1,7} \d{1,7} \d{1,7} [0-9X]'
   grouped := (dashed , '|' , spaces) asRegex.

   ^(aString matches: nonGrouped) or:
 [(aString matches: grouped) and:
   [aString size = 10 + 3]]

Surely, you could cleverly compress the regex even further but that does
not matter for this example. After checking the syntax, you can just
iterate over the string and compute the check-digit on the fly.

Kind regards,
Steffen

Am .09.2020, 18:19 Uhr, schrieb Roelof Wobben via Pharo-users
:


See here for all the tests :
https://github.com/exercism/pharo-smalltalk/blob/master/exercises/isbn-verifier/IsbnVerifierTest.class.st#L88

Roelof







Re: [Pharo-users] Can it do this way ?

2020-09-08 Thread Steffen Märcker

Hi Richard and Roelof,

thanks for your comprehensive answer. I brought up Regex only to point out  
alternative solutions. Another one is the following using transducers,  
where Tee works like the tee command from the command line.


IsbnValidator>>isValidIsbn: aString
  | length countChars separators getSeparators lastChar getLastChar  
filterDigits computeCheckSum checkSum |


  "Count number of characters"
  length := 0.
  countChars := [:count :char | length := length + 1] init: length.

  "Get non-digit characters"
  separators := Set new.
  getSeparators := separators <~ #isDigit remove.

  "Get last character"
  lastChar := nil.
  getLastChar := [:prev :char | lastChar := char ] init: lastChar.

  "Get digits"
  filterDigits := #isDigit filter.

  "Calculate check sum"
  computeCheckSum := ([:sum :index :digit | sum + index * digit value]  
init: 0) completing: #\\ .


  "Compute"
  checkSum := aString
transduce: (Tee to: countChars) * (Tee to: getSeparators) * (Tee to:  
getLastChar) * filterDigits

reduce: computeCheckSum
init: 0.

  "Check validity"
  ^((length = 10 or: [length = 13])
and: [separators = (Set with: $-)])
and: [checkSum = (lastChar = $X ifTrue: [10]  ifFalse: [lastChar  
value])]


Kind regards,
Steffen


Am .09.2020, 08:30 Uhr, schrieb Roelof Wobben via Pharo-users  
:



Op 8-9-2020 om 04:22 schreef Richard O'Keefe:

There are two quite different questions.
(1) Where may dashes occur in a real ISBN-10?
(2) What does Exercism require in the specification and check in the
test cases?

For (1) the rules are

Each ISBN consists of 5 elements, with each section being separated by
spaces or hyphens. Three of the five elements may be of varying length:
- Prefix element: currently this can only be either 978 or 979. It is
  always 3 digits in length.
- Registration group element: this identifies the particular country,
  geographical region, or language area participating in the ISBN system.
  This element may be between 1 and 5 digits in length.
- Registrant element: this identifies the particular publisher or imprint.
  This may be up to 7 digits in length.
- Publication element: this identifies the particular edition and format
  of a specific title. This may be up to 6 digits in length.
- Check digit: this is always the final single digit that mathematically
  validates the rest of the number. It is calculated using a Modulus 10
  system with alternate weights of 1 and 3.

An ISBN-10 does not have the three-digit prefix.  So we have

 [0-9]{1,5}   -- prefix
 [0-9]{1,7}   -- registrant
 [0-9]{1,6}   -- publication
 [0-9X]   -- check digit

As an example, "Standard C++ IOStreams and Locales" by Langer & Kreft  
has

ISBN-10 0-201-18395-1
ISBN-13 9780201183955
so I shall assume the separators are optional.
/^[0-9]{1,5}[- ]?[0-9]{1,7}[- ]?[0-9]{1,6}[- ]?[0-9X]$/
Of course the elements cannot all have their maximum length at the same
time.  In AWK I would write
  x = a_putative_ISBN_10
  y = x
  gsub(/[- ]+/, "", y)
  if (x ~ /^[0-9]{1,5}[- ]?[0-9]{1,7}[- ]?[0-9]{1,6}[- ]?[0-9X]$/ \
   && y ~ /^[0-9]{9,9}[0-9X]$/ \
  ) {
  x *might* be valid, we still need to check the checksum
  }

For (2), there appear to be no restrictions on where dashes may occur
or how many: "These may be communicated with or without hyphens".
Exercism doesn't allow spaces.

Regular expressions are elegant in their own way, BUT for this problem
they are (a) excessive, (b) inefficient, and (c) insufficient.

  digit count := 0.
  check sum := 0.
  for each character c of the string
  if c is not a hyphen then
  if c is a digit then
  digit value := c's value as a digit
  else if c is X and digit count = 9 then
  digit value := 10
  else
  return false.
  digit count := digit count + 1.
  if digit count > 10 then return false.
  check sum := (11 - digit count) * digit value + check sum.
   return check sum mod 11 is zero.

Part of the insight here is "don't DO it, just PRETEND you did."
That is, instead of copying the string without the hyphens,
just ignore the hyphens as they turn up.
Another part is "if you are only going to use it once, don't store it."
That is, we need a digit's value just once, in the update to check sum,
so we should compute it just before we need it, not store it.

Now the pseudo-code above is classic sequential imperative coding.

Classic functional coding does something like
  let no_dashes = filter (/= '-') (explode string) in
  length no_dashes = 10 and
  let check = last no_dashes in
  (is_digit check or check = 'X') and
  all is_digit (take 9 no_dashes) and
  let xval c = if c = 'X' then 10 else digit_value c in
  dot (map xval no_dashes) [10,9..1] mod 11 = 0

This pseudo-code translates nicely to Smalltalk too.
You might want to add

SequenceableCollection>>
 with: other inject: initial into: aBlock
   | r |
   r := initial.
   self with: other do: [:x :y |
 r := aBlock value: r value: x value: y].
   ^r

[Pharo-users] Re: order of execution

2020-09-18 Thread Steffen Märcker

Hi,

don't forget to read the expression from left to right:


currTotal := prevTotal + (prevTotal := currTotal).
    5            1     4            3      2
1. the current value (X) of prevTotal is fetched.
2. the current value (Y) of currTotal is fetched
3. prevTotal is assigned currTotal which is also the value of the
parenthesis
4. X is sent the message + with the argument Y
5. currTotal is assigned the result from 4.
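For what it's worth, the same left-to-right behavior can be cross-checked in
Python, whose `+` also evaluates its left operand before the parenthesised
assignment expression (a sketch for illustration, not a claim about Smalltalk
semantics in general):

```python
# Emulating the Smalltalk expression
#   currTotal := prevTotal + (prevTotal := currTotal).
# The left operand `prev` is fetched first, then the walrus
# assignment reassigns it, exactly mirroring steps 1-5 above.
prev, curr = 0, 1
trace = []
for _ in range(6):
    curr = prev + (prev := curr)  # fetch prev, then reassign it
    trace.append(curr)
print(trace)  # continues the Fibonacci sequence: [1, 2, 3, 5, 8, 13]
```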

Kind regards,
Steffen

Am .09.2020, 15:05 Uhr, schrieb Russ Whaley :


Can someone please explain this? I'm guessing I don't understand order of
execution.

When perusing >>fibonacciSequence, I get a proper result, but I don't
understand why when looking at the code.

Consider this fragment...

prevTotal := 0.
currTotal := 1.
currTotal := prevTotal + (prevTotal := currTotal).

My understanding *was* that parentheses are executed first.

(prevTotal := currTotal) - assigns and returns 1


currTotal := prevTotal + (1)


and since prevTotal = 1

currTotal := 1 + (1)


prevTotal = 1.

currTotal = 2.


Yet what appears to be happening is...

prevTotal = 0

currTotal := 0 + (prevTotal := currTotal)


then the parentheses...

currTotal := 0 + (1)


prevTotal = 1.

currTotal = 1.


Care to school me?

Thanks!
Russ


[Pharo-users] Re: [Pharo-dev] [Ann] Pharo new version available & mooc needs subtitle translaters

2020-11-10 Thread Steffen Märcker

Hi!

Is it still possible to attend / watch the Mooc? The page  
(https://www.fun-mooc.fr/courses/course-v1:inria+41024+session01/about)  
says that registration ended on Nov 07. Or am I missing other ways to access  
the course and its material?


Best,
Steffen

Am .11.2020, 18:02 Uhr, schrieb Hernán Morales Durand  
:



On Wed, Nov 4, 2020 at 5:30, Stéphane Ducasse (<
stephane.duca...@inria.fr>) wrote:


Hello

We are about to launch the new version of the Pharo mooc. Spread the  
word

around you. It will open on the 9th of November
https://www.fun-mooc.fr/courses/course-v1:inria+41024+session01/about


This new version contains
70 new videos about the Pharo IDE (in French and English voice) +  
subtitles

better English dubs

Now we are looking for help for the subtitles
All the subtitles of the previous mooc and the new ones are at

https://github.com/pharo-mooc/PharoMooc/tree/master/2020-All-Subtitles-Reorganized
The new ones in folders named C048SD or 2018-Prese*
The old ones in folders named C019SD.




I will help to translate the new ones to Spanish.

Cheers,

Hernán


[Pharo-users] Re: [Pharo-dev] [Ann] Pharo new version available & mooc needs subtitle translaters

2020-11-10 Thread Steffen Märcker

*rofl* Well, I was not even close ... Thanks!

Am .11.2020, 11:04 Uhr, schrieb jtuc...@objektfabrik.de
:


Yes, you missed the year ;-) It says 2021



Am 10.11.20 um 10:13 schrieb Steffen Märcker:

Hi!

Is it still possible to attend / watch the Mooc? The page
(https://www.fun-mooc.fr/courses/course-v1:inria+41024+session01/about)
tells that registration ended on nov 07. Or do I miss other ways to
access the course and its material?

Best,
Steffen

Am .11.2020, 18:02 Uhr, schrieb Hernán Morales Durand
:


<---snip--->








[Pharo-users] Re: Rounding in Floats

2021-06-14 Thread Steffen Märcker

Have you considered using fixed-point arithmetic? For example:
7.1s2 roundTo: 0.1s2

The rule of thumb I stick to is to use FP only if I know the inaccuracies
won't bite me. Funny enough, both 7.1 and 0.1 are already not accurately
representable as floats. (And by coincidence, I prepared exam questions
about floats for my students yesterday. )
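A quick way to see both points outside Pharo is Python's decimal module,
which plays the role of Pharo's ScaledDecimal here (a sketch using the
literals from the thread):

```python
from decimal import Decimal

# The binary float 7.1 is not exactly 7.1; Decimal(float) exposes the
# value actually stored, while Decimal('7.1') is exact.
stored = Decimal(7.1)
exact = Decimal("7.1")
print(stored == exact)  # False: the float is only near 7.1

# Fixed-point rounding, analogous to `7.1s2 roundTo: 0.1s2`:
rounded = Decimal("7.1005").quantize(Decimal("0.1"))
print(rounded)  # 7.1
```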

Kind regards,
Steffen


Konrad Hinsen schrieb am Dienstag, 15. Juni 2021 07:02:30 (+02:00):

> On 15/06/2021 01:03, Esteban Maringolo wrote:
> > Sure, but what initiated this thread was a reference to roundTo: 0.1
> > which produced a "wrong" output.
> >
> > (9.1 + (-2.0)) roundTo: 0.1 "=> 7.1005"
> > 7.1 roundTo: 0.1 "=> 7.1005"
> >
> > However, at this point I know that Pharo "does the right, raw, thing"
> > (at least compared to other mainstream languages), but it still
> > produces a surprise effect.
>
> That's the "floating point surprise" that everyone has at some point, no
matter the language and runtime system. If that surprise is a problem for
you, are you sure that floating-point arithmetic is what you really want?
Maybe your needs are better served with integers and fractions.
>
>
> Konrad.
>
>
--
Sent with Vivaldi Mail. Download Vivaldi for free from vivaldi.com.


[Pharo-users] Re: Rounding in Floats

2021-06-14 Thread Steffen Märcker
Typo: FP should read floating point and not fixed point.

Steffen Märcker schrieb am Dienstag, 15. Juni 2021 08:47:24 (+02:00):

> Have you considered using fixed-point arithmetic? For example:
> 7.1s2 roundTo: 0.1s2
>
> The rule of thumb I stick to is to use FP only if I know the inaccuracies
> won't bite me. Funny enough, both 7.1 and 0.1 are already not accurately
> representable as floats. (And by coincidence, I prepared exam questions
> about floats for my students yesterday. )
>
> Kind regards,
> Steffen
>
>
> Konrad Hinsen schrieb am Dienstag, 15. Juni 2021 07:02:30 (+02:00):
>
> > On 15/06/2021 01:03, Esteban Maringolo wrote:
> > > Sure, but what initiated this thread was a reference to roundTo: 0.1
> > > which produced a "wrong" output.
> > >
> > > (9.1 + (-2.0)) roundTo: 0.1 "=> 7.1005"
> > > 7.1 roundTo: 0.1 "=> 7.1005"
> > >
> > > However, at this point I know that Pharo "does the right, raw, thing"
> > > (at least compared to other mainstream languages), but it still
> > > produces a surprise effect.
> >
> > That's the "floating point surprise" that everyone has at some point, no
> matter the language and runtime system. If that surprise is a problem for
> you, are you sure that floating-point arithmetic is what you really want?
> Maybe your needs are better served with integers and fractions.
> >
> >
> > Konrad.
> >
> >
>


[Pharo-users] Fwd: [vwnc] Exception in Regex11 1.4.6

2021-06-24 Thread Steffen Märcker
Hi!


Does Pharo use the Regex11 package? If yes, has it already diverged from the 
version shipped with VisualWorks?


The reason I am asking is that I just pushed an update to the public store. It 
addresses a bug that prevented $[ from being used in a character class. For details, 
see the excerpt below. Furthermore, you might have an opinion on allowing more 
escape sequences in character classes, don't you?


Kind regards,
Steffen


----- Forwarded message -----
From: Steffen Märcker 
To: 'VWNC' 
Subject: Re: [vwnc] Exception in Regex11 1.4.6
Date: Thu Jun 24 2021 18:47:01 GMT+0200 (Central European Summer Time)


Hi!


I just published Regex11 version 1.4.7 with the following changes:


1. Fix: Character sets could not contain an opening bracket $[.
2. Fix: Character sets could not contain the characters '[:', e.g. as in 
'[[:something]' asRegex.


I also provided additional tests for the improved functionality. I might tidy 
the code a bit later in a minor version bump.



Just to note that Regex11 uses [[:xxx:]] as a special syntax, which might 
interfere with attempts to allow [[] and []]. 



Indeed. If I made no mistake, the new version does not break this.



I agree with the idea to allow backslash escaping in character classes too, 
with the default being that backslash followed by any character is parsed as 
that character.

I also like the idea of allowing more backslash escaping in character classes. 
However, I still have the bad feeling that this might change the semantics of 
existing code. Hence, I refrained from implementing this right away until I am 
more confident that this does not break other people's stuff.



Currently only a few explicitly defined backslash escapes are recognized, 
forcing the user to remember whether a given character can be used as-is in a 
given context, or must be escaped.

 

A couple of gotchas (probably not applicable in a character set?):

\<   an empty string at the beginning of a word

\>   an empty string at the end of a word

Thanks, I'll keep them in mind and consider them when I decide to implement the 
changes.


OT: I also noticed that repetition, e.g. '.{5}' behaves strange. For instance, 
'.{{5}}' should match 'a{}' but it doesn't. Has anyone an opinion on that 
one?


Best regards, Steffen


[Pharo-users] Re: Fwd: [vwnc] Exception in Regex11 1.4.6

2021-06-29 Thread Steffen Märcker
Dear all!


I just checked and found that Regex in Pharo is indeed based on Regex11. It 
suffers from the same bug as the original. I'd like to bring the fix to Pharo. 
As a first-timer, where can I read about the procedure of contributing code?


Kind regards,
Steffen

Steffen Märcker schrieb am Donnerstag, 24. Juni 2021 18:52:16 (+02:00):


Hi!


Does Pharo use the Regex11 package? If yes, has it already diverged from the 
version shipped with VisualWorks?


The reason I am asking is that I just pushed an update to the public store. It 
addresses a bug that prevented $[ to be used in a character class. For details, 
see the excerpt below. Furthermore, you might have an opinion on allowing more 
escape sequences in character classes, don't you?


Kind regards,
Steffen


- Weitergeleitete Nachricht -
Von: Steffen Märcker 
An: 'VWNC' 
Betreff: Re: [vwnc] Exception in Regex11 1.4.6
Datum: Thu Jun 24 2021 18:47:01 GMT+0200 (Mitteleuropäische Sommerzeit)


Hi!


I just published Regex11 version 1.4.7 with the following changes:


1. Fix: Character sets could not contain an opening bracket $[.
2. Fix: Character sets could not contain the characters '[:', e.g. as in 
'[[:something]' asRegex.


I also provided additional tests for the improved functionality. I might tidy 
the code a bit later in a minor version bump.



Just to note that Regex11 uses [[:xxx:]] as a special syntax, which might 
interfere with attempts to allow [[] and []]. 



Indeed. If I did no mistake, the new version does not break this.



I agree with the idea to allow backslash escaping in character classes too, 
with the default being that backslash followed by any character is parsed as 
that character.

I also like the idea of allowing more backslash escaping in character classes. 
However, I still have the bad feeling that this might change the semantics of 
existing code. Hence, I refrained from implementing this right away until I am 
more confident that this does not break other peoples stuff.



Currently only a few explicitly defined backslash escapes are recognized, 
forcing the user to remember whether a given character can be used as-is in a 
given context, or must be escaped.

 

A couple of gotchas (probably not applicable in a character set?):

\<   an empty string at the beginning of a word

\>   an empty string at the end of a word

Thanks, I'll keep them in mind and consider them when I decide to implement the 
changes.


OT: I also noticed that repetition, e.g. '.{5}' behaves strange. For instance, 
'.{{5}}' should match 'a{}' but it doesn't. Has anyone an opinion on that 
one?


Best regards, Steffen


[Pharo-users] Re: Fwd: [vwnc] Exception in Regex11 1.4.6

2021-06-29 Thread Steffen Märcker
Thanks Sebastian, that's exactly what I was looking for. I vaguely remembered 
that such a site exists. =)


Have a nice day!
Steffen



Sebastian Jordan schrieb am Dienstag, 29. Juni 2021 18:23:23 (+02:00):


Here


https://github.com/pharo-project/pharo/wiki/Contribute-a-fix-to-Pharo



Sebastian


From: Steffen Märcker 
Sent: Tuesday, June 29, 2021 6:13:00 PM
To: Any question about pharo is welcome 
Subject: [Pharo-users] Re: Fwd: [vwnc] Exception in Regex11 1.4.6
 
Dear all!


I just checked and found that Regex in Pharo is indeed based on Regex11. It 
suffers from the same bug as the original. I'd like to bring the fix to Pharo. 
As a first-timer, where can I read about the procedure of contributing code?


Kind regards,
Steffen

Steffen Märcker schrieb am Donnerstag, 24. Juni 2021 18:52:16 (+02:00):


Hi!


Does Pharo use the Regex11 package? If yes, has it already diverged from the 
version shipped with VisualWorks?


The reason I am asking is that I just pushed an update to the public store. It 
addresses a bug that prevented $[ to be used in a character class. For details, 
see the excerpt below. Furthermore, you might have an opinion on allowing more 
escape sequences in character classes, don't you?


Kind regards,
Steffen


- Weitergeleitete Nachricht -----
Von: Steffen Märcker 
An: 'VWNC' 
Betreff: Re: [vwnc] Exception in Regex11 1.4.6
Datum: Thu Jun 24 2021 18:47:01 GMT+0200 (Mitteleuropäische Sommerzeit)


Hi!


I just published Regex11 version 1.4.7 with the following changes:


1. Fix: Character sets could not contain an opening bracket $[.
2. Fix: Character sets could not contain the characters '[:', e.g. as in 
'[[:something]' asRegex.


I also provided additional tests for the improved functionality. I might tidy 
the code a bit later in a minor version bump.



Just to note that Regex11 uses [[:xxx:]] as a special syntax, which might 
interfere with attempts to allow [[] and []]. 



Indeed. If I did no mistake, the new version does not break this.



I agree with the idea to allow backslash escaping in character classes too, 
with the default being that backslash followed by any character is parsed as 
that character.

I also like the idea of allowing more backslash escaping in character classes. 
However, I still have the bad feeling that this might change the semantics of 
existing code. Hence, I refrained from implementing this right away until I am 
more confident that this does not break other peoples stuff.



Currently only a few explicitly defined backslash escapes are recognized, 
forcing the user to remember whether a given character can be used as-is in a 
given context, or must be escaped.

 

A couple of gotchas (probably not applicable in a character set?):

\<   an empty string at the beginning of a word

\>   an empty string at the end of a word

Thanks, I'll keep them in mind and consider them when I decide to implement the 
changes.


OT: I also noticed that repetition, e.g. '.{5}' behaves strange. For instance, 
'.{{5}}' should match 'a{}' but it doesn't. Has anyone an opinion on that 
one?


Best regards, Steffen


[Pharo-users] Re: Pharo 9 arbitrarily changes temporary variable names

2021-12-28 Thread Steffen Märcker

Hi Robert,


have you checked that you're seeing the code you've typed and not just
decompiled methods?


Best,
Steffen



Robert Briggs via Pharo-users schrieb am Montag, 27. Dezember 2021 11:43
(+01:00):



Hi



I define temporary variables in a method, e.g. |e r |, e.g. in a unit test,
but when I run the test Pharo automatically replaces these with | tmp1 tmp2
|.

A similar thing happens with method arguments.  For example methodName:
aString at: anInteger will become methodName: arg1 at: arg2.



This has happened throughout my entire model, not just as described above.
What is going on? Is it a bug, or is there a setting that needs to be
changed?



Many thanks for any feedback on this.



Regards

Robert Briggs




Re: [Pharo-users] looking for another iterator :)

2018-01-22 Thread Steffen Märcker

Hi!

this is straightforward with Transducers, e.g.,


(Take n: 5) <~ #(1 2 3 4 5 6 7 8).

or

#(1 2 3 4 5 6 7 8) transduce take: 5.


The different primitives like take, drop, map, filter, etc. are composable  
and do not generate intermediate representations.
The bad news is that the Pharo port is incomplete and I won't have the  
time until April to finish it.


Best, Steffen

Am .01.2018, 10:57 Uhr, schrieb Sven Van Caekenberghe :


5






Re: [Pharo-users] Set Rounding mode for IEEE floating point operations

2018-01-22 Thread Steffen Märcker
Yes, I was considering this in the context of some approximation  
algorithms. Maybe one of the VM experts here could comment on the  
possibility to implement that?



Am .01.2018, 10:00 Uhr, schrieb Serge Stinckwich  
:



On Mon, Jan 15, 2018 at 12:27 PM, Steffen Märcker  wrote:


Hi,

is there any way to set the rounding mode for IEEE floating point
operations? Maybe something like

Double roundToMinusInfWhile: [... code goes here ...]Double

roundToZeroWhile: [... more code here ...]



If not, is it possible to add this behavior, e.g., via a custom  
primitive?



​This might interesting to have something like that for PolyMath, but  
I'm a

little worried of subtle consequences
that might appear in UI if you use points for examples.​




Re: [Pharo-users] looking for another iterator :)

2018-01-26 Thread Steffen Märcker

Hi!

No stress is good news!


- what is the API in terms of vocabulary (i.e., is drop the same as reject)


These are the operations implemented so far. |= means "satisfies", i.e.,  
evaluates to true.


Drop          drop first n elements
DropWhile     drop first elements |= a block
Filter*       pick only elements |= a block
Keep*         only use elements ~= nil
Map*          map each element
MapKeys*      map each element and use it as key
Partition     split after n elements
PartitionBy   split after elements |= a block
RandomSample* pick elements with probability p
Reductions    intermediate results of reduce
Remove*       = Filter not
Replace*      replace elements by LUT
Cat*          concatenate sequences
Dedupe        remove consecutive duplicates
Flatten*      flatten nested sequence
Take          pick first n elements
TakeNth       pick every n-th element
TakeWhile     pick first elements |= a block
Tee           UNIX tee, concurrent evaluation

Note, * means that the operation is parallelizable. As the operations are  
independent of the sequence class, they apply naturally to all kinds of  
sources, like collections, streams, channels and so on.
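To illustrate a stateful operation such as Take, here is a hedged sketch in
Python (the names `taking`, `transduce` and `Reduced` are mine, not the
library's API); a Reduced sentinel lets the reduction stop early, which is
what makes Take safe even on unbounded sources:

```python
class Reduced:
    """Sentinel wrapper signalling that the reduction may stop early."""
    def __init__(self, value):
        self.value = value

def taking(n):
    """Transducer: pass at most n elements through to the wrapped rf."""
    def xf(rf):
        remaining = [n]  # fresh mutable state per built reducing function
        def step(acc, item):
            if remaining[0] <= 0:
                return Reduced(acc)
            remaining[0] -= 1
            acc = rf(acc, item)
            return Reduced(acc) if remaining[0] == 0 else acc
        return step
    return xf

def transduce(xform, rf, init, source):
    """Drive the reduction, honoring early termination via Reduced."""
    step = xform(rf)
    acc = init
    for item in source:
        acc = step(acc, item)
        if isinstance(acc, Reduced):
            return acc.value
    return acc

append = lambda acc, item: acc + [item]
print(transduce(taking(5), append, [], iter(range(1, 100))))  # [1, 2, 3, 4, 5]
```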



- can we reexpress some of our iterators?


Most of them. For example, generic for all collections:

collect: aBlock
  ^self class <~ aBlock map <~ self
reject: aBlock
  ^self class <~ aBlock remove <~ self
select: aBlock
  ^self class <~ aBlock filter <~ self

As said before, names are not fixed yet and there is a more classical API  
on top available, too:


collect: aBlock
  ^(self transduce map: aBlock) into: self class


- what is the cost?


Very little. In detail:
1) Upfront, there is the constant cost of instantiating the transducers.
2) The linear cost of evaluating a nested block for each element. This is  
likely to be optimized by the JIT, as the structure is very regular.


I only did some micro-benchmarks, which showed little to no impact on  
performance so far.


Best,
Steffen



Re: [Pharo-users] Smalltalk-76 Demo

2018-04-29 Thread Steffen Märcker



Cool stuff!

Am .04.2018, 04:22 Uhr, schrieb Bernardo Ezequiel Contreras :

https://youtu.be/NqKyHEJe9_w

--
Bernardo E.C.
Sent from a cheap desktop computer in South America.





Re: [Pharo-users] Set Rounding mode for IEEE floating point operations

2018-05-23 Thread Steffen Märcker

*bump*

I guess there is no way to do this now. But maybe a VM expert can tell me how  
difficult it might be to extend the VM accordingly.


Best, Steffen

Am .01.2018, 12:27 Uhr, schrieb Steffen Märcker :


Hi,

is there any way to set the rounding mode for IEEE floating point
operations? Maybe something like

Double roundToMinusInfWhile: [... code goes here ...]Double  
roundToZeroWhile: [... more code here ...]


If not, is it possible to add this behavior, e.g., via a custom  
primitive?


Best, Steffen




Re: [Pharo-users] Set Rounding mode for IEEE floating point operations

2018-05-23 Thread Steffen Märcker

I'll cross-post there later. Thanks for the suggestion!

The application I have in mind is approximation algorithms for systems of  
linear equations or linear programs. Approximation algorithms, such as the  
power method, typically run until convergence of the solution vector is  
reached. However, determining convergence is usually done by computing the  
difference (epsilon) between the last two solutions and comparing it to  
some threshold. This may work in some cases but not in general, as the  
intermediate epsilon does not tell the distance to the actual solution.  
Interval iteration tackles this problem by approaching the solution from  
above and below. The difference between the upper and lower solution  
bounds determines the quality of the solution. However, if we use  
imprecise arithmetic (floats, doubles) it is important to round down/up in  
computing the lower/upper bounds.


Best, Steffen



Am .05.2018, 12:50 Uhr, schrieb Serge Stinckwich  
:



On Wed, May 23, 2018 at 11:47 AM Steffen Märcker  wrote:


*bump*

I guess there no way to do this now. But maybe a VM guy can give me how
difficult it might be to extend the VM accordingly.



​Yes you should followup this question to the VM mailing-list.
Can you open an issue here:  
https://github.com/PolyMathOrg/PolyMath/issues

so we don't forget about it ?​

​Do you have specific needs behind this ?​




Re: [Pharo-users] Set Rounding mode for IEEE floating point operations

2018-05-23 Thread Steffen Märcker

Hi Henry,


[...] it might be possible to try to do this with FFI only?


If I understand you correctly, I could use FFI to call an external lib to  
perform the actual computation according to the rounding mode, right? In  
this case I'd be worried about the performance impact of many FFI calls.



For the same reason, I hope you intend roundByMethodXWhile: to be a
mandatory API, forgetting to set the mode back when done (or, never  
getting

there due to errors, etc) is an area best left unexplored ;)


I used roundByMethodXWhile: aBlock just to convey the idea that an API  
could work "local" and ensures resetting the rounding mode afterwards. =)


Best, Steffen



Re: [Pharo-users] Set Rounding mode for IEEE floating point operations

2018-05-24 Thread Steffen Märcker
I actually made progress: It works like a charm! Basically, I implemented  
the same code as you. Testing is straightforward (using the constant  
values from libc):


  current := LibC fegetround.
  LibC fesetround: FE_DOWNWARD.
  v1 := 1.0/10.0.
  LibC fesetround: FE_UPWARD.
  v2 := 1.0/10.0.
  LibC fesetround: current.
  v1 < v2. "true"
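For readers outside Smalltalk, the same experiment can be sketched by calling
the C library from Python with ctypes. This is a hedged sketch: the FE_*
values below are the glibc/x86 ones and are platform-dependent, and whether
the mode change is visible depends on the platform and runtime.

```python
import ctypes, ctypes.util

# Rounding-mode constants as defined by glibc on x86 (assumption:
# other platforms define different values, see <fenv.h>).
FE_TONEAREST, FE_DOWNWARD, FE_UPWARD = 0x000, 0x400, 0x800

libm = ctypes.CDLL(ctypes.util.find_library("m") or "libm.so.6")

one, ten = 1.0, 10.0           # variables prevent constant folding
libm.fesetround(FE_DOWNWARD)
lo = one / ten                 # 1/10 rounded toward -infinity
libm.fesetround(FE_UPWARD)
hi = one / ten                 # 1/10 rounded toward +infinity
libm.fesetround(FE_TONEAREST)  # always restore the default mode

# On x86 Linux, lo < hi: the two results bracket the true value of 1/10.
```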


​but apparently nothing happens when you do :
  LibC uniqueInstance fesetround: 1024.
  LibC uniqueInstance fegetround.
always return 0.​


This is expected, since the fesetround function returns 0 only if the set  
operation was successful.


Thanks again for your support!

Best,
Steffen



Re: [Pharo-users] Set Rounding mode for IEEE floating point operations

2018-05-24 Thread Steffen Märcker

Hi,

now I've observed the same issue. It might be related to context  
switching, since introducing a delay has a similar effect. Consider:


  | FE_TONEAREST FE_DOWNWARD FE_UPWARD FE_TOWARDZERO |
  FE_TONEAREST  := 16r0000.
  FE_DOWNWARD   := 16r0400.
  FE_UPWARD := 16r0800.
  FE_TOWARDZERO := 16r0C00.
  "For some reasons we have to call fegetround once."
  "c := LibC new fegetround."
  LibC new fesetround: FE_DOWNWARD.
  (Delay forSeconds: 1) wait.
  v1 := 1.0/10.0.
  LibC new fesetround: FE_UPWARD.
  v2 := 1.0/10.0.
  LibC new fesetround: FE_TONEAREST.
  v1 < v2.

If the delay is inserted, the script evaluates to false. Using the same  
LibC-instance or creating a new one does not seem to change anything here.  
Interestingly, a similar approach in VisualWorks does not show this issue  
yet.


Actually, I expect the FE_* macros to be platform/implementation  
dependent, as suggested here:

http://www.enseignement.polytechnique.fr/informatique/INF478/docs/Cpp/en/c/numeric/fenv/FE_round.html

Best, Steffen




Am .05.2018, 11:57 Uhr, schrieb Serge Stinckwich  
:


What is really strange is when I print the following lines, I obtain  
2048:


current := LibC uniqueInstance fegetround.
LibC uniqueInstance fesetround: 2048.
LibC uniqueInstance fegetround.

and 0 when I remove the first line :
LibC uniqueInstance fesetround: 2048.
LibC uniqueInstance fegetround.



On Thu, May 24, 2018 at 10:37 AM Serge Stinckwich <
serge.stinckw...@gmail.com> wrote:




On Thu, May 24, 2018 at 10:31 AM Steffen Märcker  wrote:


I actually made progress: It works like a charm! Basically, I
implemented
the same code as you. Testing is straightforward (using the constant
values from libc):

   current := LibC fegetround.
   LibC fesetround: FE_DOWNWARDS.
   v1 := 1.0/10.0.
   LibC feesetround: FE_UPWARDS.
   v2 := 1.0/10.0.
   LibC feesetround: current.
   v1 < v2. "true"

> ​but apparently nothing happens when you do :
>   LibC uniqueInstance fesetround: 1024.
>   LibC uniqueInstance fegetround.
> always return 0.​

This is expected, since the fesetround function returns 0 only if the
set
operation was successful.



​I was talking about fgetround function.
​ Can you check the value returns by your fegetround function ?​

Constants values are not dependent of the platform ?​

​Can you commit yr code to Pharo 7.0 or if there is no interest for this
to PolyMath ?​
​Thank you.​
--
Serge Stinckwich
UMI UMMISCO 209 (SU/IRD/UY1)
"Programs must be written for people to read, and only incidentally for
machines to execute."http://www.doesnotunderstand.org/







Re: [Pharo-users] Set Rounding mode for IEEE floating point operations

2018-05-29 Thread Steffen Märcker

Dear Serge,

on the VM mailing list, Levente pointed out, that this could also be  
related to some lib the VM uses (e.g., Freetype) that might change the  
rounding mode. I'll dig into it and give your idea a try and come back  
with the results.


I think it would be nice to have in Pharo 7. But before I want to make  
sure it a) works as expected and b) does not cause some nasty side effects.


Bye, Steffen


Am .05.2018, 12:26 Uhr, schrieb Serge Stinckwich  
:



On Thu, May 24, 2018 at 12:27 PM Steffen Märcker  wrote:


Hi,

now I've observed the same issue. It might be related to context
switching, since introducing a delay has a similar effect. Consider:

   | FE_TONEAREST FE_DOWNWARD FE_UPWARD FE_TOWARDZERO |
   FE_TONEAREST  := 16r0000.
   FE_DOWNWARD   := 16r0400.
   FE_UPWARD := 16r0800.
   FE_TOWARDZERO := 16r0C00.
   "For some reasons we have to call fegetround once."
   "c := LibC new fegetround."
   LibC new fesetround: FE_DOWNWARD.
   (Delay forSeconds: 1) wait.
   v1 := 1.0/10.0.
   LibC new fesetround: FE_UPWARD.
   v2 := 1.0/10.0.
   LibC new fesetround: FE_TONEAREST.
   v1 < v2.

If the delay is inserted, the script evaluates to false. Using the same
LibC-instance or creating a new one does not seem to change anything
here.
Interestingly, a similar approach in VisualWorks does not show this  
issue

yet.



​Ok, so maybe we need to use put evaluation in a block and use
valueNoContextSwitch ?
​Maybe use an API like the one you propose before :
Double roundToMinusInfWhile: [ ... ]




Actually, I expect the FE_* macros to be platform/implementation
dependent, as suggested here:

http://www.enseignement.polytechnique.fr/informatique/INF478/docs/Cpp/en/c/numeric/fenv/FE_round.html



​Ok.

Can we try to pack everything in a PR for Pharo 7.0 ?
​Thank you.




Re: [Pharo-users] Pharo 7 streams API

2018-07-03 Thread Steffen Märcker
I think, streams and functional composition match up nicely and  
transducers are a way to do this. I've introduced them earlier on this  
list. (I hesitated to weigh into the discussion, as I won't have time to  
work on the Pharo port of Transducers until October.)


Let me give a simplified example. I assume the basic messages are  
#nextPut: and #close: to write to aStream and close it.


  aString
    transduce: LineEndCrLf flatMap
    reduce: (#nextPut: completing: #close)
    init: aStream

* Let aString be the source, i.e., some object that yields a sequence of  
characters:

  a CR b
* Let LineEndCrLf be a function that maps CR to #(CR LF):
  a CR b -> a #(CR LF) b
* #flatMap embeds #(CR LF) into the sequence:
  a CR LF b
* (#nextPut: completing: #close) puts each character on the stream and  
calls #close at the end:

  aStream
    nextPut: $a;
    nextPut: CR;
    nextPut: LF;
    nextPut: $b;
    close;
    yourself.
* #transduce:reduce:init: actually starts the writing process.

First, (LineEndCrLf flatMap) is composable with other  
transformations, e.g., encoding. The example above would change to:


  aString
    transduce: LineEndCrLf flatMap * EncodeUTF8 flatMap
    reduce: (#nextPut: completing: #close)
    init: aByteStream

LineEndCrLf and EncodeUTF8 only have to know how to process single  
characters. Hence, they are highly reusable.


Second, as the source, the transformations, the writing process, and the  
data sink are distinct objects, we can freely interact with them and build  
arbitrary pipelines. It is straightforward to come up with other  
iteration methods than #reduce:init:, e.g., step-wise processing of  
streams.


Best, Steffen



Re: [Pharo-users] Pharo 7 streams API

2018-07-03 Thread Steffen Märcker

No worries! =) Let me clarify the relation between Transducers and
function composition.

The basic component in the framework is the so-called ReducingFunction.
This are functions that take two arguments, an 'intermediate value' and a
'current element', and map them to a new intermediate value, i.e., rf : A
x I -> A.
In the example, #nextPut: is a reducing function, since it takes a stream
and an element to put to the stream (I assume #nextPut: would return the
stream).

Basic operations like mapping, filtering, partitioning etc. are generic
and independent of streams/collections/whatsoever. Hence, they should be
reusable. This can be achieved by Transducers, which are objects that take
reducing functions and transform them to incorporate the additional
functionality, e.g., mapping. Their signature is similar to xf : (A x I ->
A) -> (A x I -> A).

Function composition of transducer objects chains multiple basic
operations and allows to attach them to a reducing function. In fact, the
implementation indeed uses function composition for this purpose. However,
its up to the context how to make use of these functions, e.g., via
#reduce:init:.
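To make the signatures concrete, here is a minimal sketch in Python (the
names `mapping`, `filtering` and `compose` are mine, not the library's API):
a reducing function rf : (acc, item) -> acc, and transducers that each wrap
one rf and return another, so transformations compose independently of the
source and sink.

```python
from functools import reduce

def mapping(f):
    """Transducer: transform each item with f before handing it to rf."""
    def xf(rf):
        return lambda acc, item: rf(acc, f(item))
    return xf

def filtering(pred):
    """Transducer: hand the item to rf only if pred holds."""
    def xf(rf):
        return lambda acc, item: rf(acc, item) if pred(item) else acc
    return xf

def compose(*xfs):
    """Chain transducers; the leftmost one sees each item first."""
    def xf(rf):
        for t in reversed(xfs):
            rf = t(rf)
        return rf
    return xf

append = lambda acc, item: acc + [item]  # a plain reducing function

# square the even numbers of 1..8, without intermediate collections
xform = compose(filtering(lambda n: n % 2 == 0), mapping(lambda n: n * n))
result = reduce(xform(append), range(1, 9), [])
print(result)  # [4, 16, 36, 64]
```

The point of the design is visible here: `filtering` and `mapping` know
nothing about lists, streams, or `reduce`; only `append` and the driver do.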

Feel free to ask if anything remains unclear! =)

Best, Steffen


Am .07.2018, 16:20 Uhr, schrieb :


Solutions to different problems.

I proposed a simple generic thing that only composes functions, allowing  
for transformation of block arg.


Transducers seem like a streaming, data-flow-specific thing.

Maybe yours helps the original problem in the long run.

I just tried to find something to solve more specific part of it while  
being such generic that it helps in other places as well.


Just pointing it out so there isn't a perception that they are competing to
solve the same problem and only one should be selected.


Herby

On July 3, 2018 3:57:21 PM GMT+02:00, "Steffen Märcker"   
wrote:

I think streams and functional composition match up nicely and
transducers are a way to do this. I've introduced them earlier on this
list. (I hesitated to weigh in on the discussion, as I won't have time to
work on the Pharo port of Transducers until October.)

Let me give a simplified example. I assume the basic messages are
#nextPut: and #close to write to aStream and close it.

  aString
    transduce: LineEndCrLf flatMap
    reduce: (#nextPut: completing: #close)
    init: aStream

* Let aString be the source, i.e., some object that yields a sequence of
characters:
  a CR b
* Let LineEndCrLf be a function that maps CR to #(CR LF):
  a CR b -> a #(CR LF) b
* #flatMap embeds #(CR LF) into the sequence:
  a CR LF b
* (#nextPut: completing: #close) puts each character on the stream and
calls #close at the end:
  aStream
    nextPut: $a;
    nextPut: CR;
    nextPut: LF;
    nextPut: $b;
    close;
    yourself.
* #transduce:reduce:init: actually starts the writing process.

First, (LineEndCrLf flatMap) is composable with other
transformations, e.g., encoding. The example above would change to:

  aString
    transduce: LineEndCrLf flatMap * EncodeUTF8 flatMap
    reduce: (#nextPut: completing: #close)
    init: aByteStream

LineEndCrLf and EncodeUTF8 only have to know how to process single
characters. Hence, they are highly reusable.
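As a hedged illustration of this reuse, here is a Python sketch of a single-pass pipeline in the same spirit; line_end_crlf and encode_utf8 are hypothetical stand-ins for the Smalltalk objects LineEndCrLf and EncodeUTF8:

```python
def flat_mapping(f):
    """Transducer: f maps one element to a sequence that is spliced in."""
    def transducer(rf):
        def new_rf(acc, item):
            for sub in f(item):
                acc = rf(acc, sub)
            return acc
        return new_rf
    return transducer

def line_end_crlf(ch):
    # hypothetical stand-in for LineEndCrLf: expand CR to CR LF
    return ["\r", "\n"] if ch == "\r" else [ch]

def encode_utf8(ch):
    # hypothetical stand-in for EncodeUTF8: one char -> its UTF-8 bytes
    return list(ch.encode("utf-8"))

def append(acc, item):
    acc.append(item)
    return acc

# Compose the two per-element stages around the sink; one pass over input.
rf = flat_mapping(line_end_crlf)(flat_mapping(encode_utf8)(append))

out = []
for ch in "a\rb":
    out = rf(out, ch)
print(bytes(out))  # b'a\r\nb'
```

Each stage only knows how to process a single element, yet the composed result still writes straight through to the sink without intermediate buffers.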

Second, as the source, the transformations, the writing process, and the
data sink are distinct objects, we can freely interact with them and
build arbitrary pipelines. It is straightforward to come up with
iteration methods other than #reduce:init:, e.g., step-wise processing of
streams.

Best, Steffen




Re: [Pharo-users] Pharo 7 streams API

2018-07-03 Thread Steffen Märcker

No worries! =)

Let me clarify the relation between Transducers and function composition.

The main components in the framework are so-called ReducingFunctions, which
are the operations you want to perform. They are functions that take two
arguments, an 'intermediate value' and a 'current element', and map them
to a new intermediate value, i.e.,

rf : A x I -> A.

In the example, #nextPut: is a reducing function, since it takes a stream
and an element to put on the stream (I assume #nextPut: returns the stream
itself).


Basic operations like mapping, filtering, partitioning etc. are generic
and independent of streams/collections/whatsoever. Hence, they should be
reusable. This can be achieved with transducers, which are objects that take
a reducing function and transform it to incorporate the additional
functionality, e.g., mapping. The transducer's signature is similar to
xf : (A x I -> A) -> (A x I -> A).
The classic approach adds these basic operations by wrapping the data
(collections/streams). In contrast, transducers add them to the operations.


Function composition of transducer objects chains multiple basic
operations and allows attaching them to a reducing function. In fact, the
implementation indeed uses function composition for this purpose. However,
it's up to the context how to make use of these functions, e.g., via
#reduce:init:.

Feel free to ask if anything remains unclear! =)

Best, Steffen


Am .07.2018, 16:20 Uhr, schrieb :


Solutions to different problems.

I proposed a simple generic thing that only composes functions, allowing
for transformation of the block argument.


Transducers seem like a streaming, data-flow-specific thing.

Maybe yours helps the original problem in the long run.

I just tried to find something that solves a more specific part of it while
being generic enough to help in other places as well.


Just pointing it out so there isn't a perception that they are competing to
solve the same problem and only one should be selected.


Herby

On July 3, 2018 3:57:21 PM GMT+02:00, "Steffen Märcker"   
wrote:

I think streams and functional composition match up nicely and
transducers are a way to do this. I've introduced them earlier on this
list. (I hesitated to weigh in on the discussion, as I won't have time to
work on the Pharo port of Transducers until October.)

Let me give a simplified example. I assume the basic messages are
#nextPut: and #close to write to aStream and close it.

  aString
    transduce: LineEndCrLf flatMap
    reduce: (#nextPut: completing: #close)
    init: aStream

* Let aString be the source, i.e., some object that yields a sequence of
characters:
  a CR b
* Let LineEndCrLf be a function that maps CR to #(CR LF):
  a CR b -> a #(CR LF) b
* #flatMap embeds #(CR LF) into the sequence:
  a CR LF b
* (#nextPut: completing: #close) puts each character on the stream and
calls #close at the end:
  aStream
    nextPut: $a;
    nextPut: CR;
    nextPut: LF;
    nextPut: $b;
    close;
    yourself.
* #transduce:reduce:init: actually starts the writing process.

First, (LineEndCrLf flatMap) is composable with other
transformations, e.g., encoding. The example above would change to:

  aString
    transduce: LineEndCrLf flatMap * EncodeUTF8 flatMap
    reduce: (#nextPut: completing: #close)
    init: aByteStream

LineEndCrLf and EncodeUTF8 only have to know how to process single
characters. Hence, they are highly reusable.

Second, as the source, the transformations, the writing process, and the
data sink are distinct objects, we can freely interact with them and
build arbitrary pipelines. It is straightforward to come up with
iteration methods other than #reduce:init:, e.g., step-wise processing of
streams.

Best, Steffen




[Pharo-users] Parsing in VW

2018-10-01 Thread Steffen Märcker

Dear all,

I have two questions regarding parsing frameworks.

1) Do you have any insights on the performance of SmaCC vs. Xtreams Parsing
vs. PetitParser?
2) Has anybody started to port PetitParser 2 from Pharo to VW? Is it worth  
the effort?


Sorry for cross-posting, I thought this might interest both communities.

Cheers, Steffen



Re: [Pharo-users] [vwnc] Parsing in Smalltalk

2018-10-04 Thread Steffen Märcker
I gave Xtreams-Parsing and PetitParser a shot and would like to share my
findings.[*]


The task was to parse the modelling language of the probabilistic model
checker PRISM. I've written a grammar of about 130 definitions in the
Xtreams DSL, which is close to Bryan Ford's syntax. To avoid doing it all
again with PetitParser, I wrote a PetitParserGenerator that takes the DSL
and builds a PetitParser.


The numbers below are just parsing times, no further actions involved. For  
reference I show the times from PRISM (which uses JavaCC), too -- although  
they involve additional verification and normalization steps on the AST.


input   Prism   XP     PP
230kB   14s     9s     2s
544kB   121s    20s    5s
1.1MB   421s    34s    8s
1.4MB   1091s   47s    12s
2.2MB           63s    16s
2.9MB           81s    20s
3.8MB           107s   25s
4.4MB           123s   30s

Please note that these times are not representative at all. It's just a  
single example and I put zero effort in optimization. However, I am quite  
satisfied with the results.


[*] I was already familiar with the DSL of Xtreams-Parsing, which I like  
very much. I did not consider SmaCC, as I find PEGs easier to use.


Best, Steffen



Am .10.2018, 20:14 Uhr, schrieb Steffen Märcker :


Dear all,

I have two questions regarding parsing frameworks.

1) Do you have any insights on the performance of SmaCC VS Xtreams  
Parsing VS PetitParser?
2) Has anybody started to port PetitParser 2 from Pharo to VW? Is it  
worth the effort?


Sorry for cross-posting, I thought this might interest both communities.

Cheers, Steffen




Re: [Pharo-users] [vwnc] Parsing in Smalltalk

2018-10-05 Thread Steffen Märcker

Hi Ben!


Thanks for your report Steffen. Nice to see such comparisons even when a
bit apples & oranges.
Will you be implementing those "additional verification and normalization
steps" ?
It seems they have an exponential or power impact on times.


I certainly will. Meanwhile, I have some more precise numbers. As it turns
out, it is indeed the additional steps (nobuild) that cause the impact. I
also added the (nobuild) times from another tool, the probabilistic model
checker Storm.


input   Prism   Prism    Storm    XP      PP
size    parse   nobuild  nobuild  parse   parse
230kB   0.1s    10s      6s       9s      2s
544kB   0.2s    90s      20s      20s     5s
1.1MB   0.4s    392s     46s      34s     8s
1.4MB   0.8s    1091s    85s      47s     12s
2.2MB                             63s     16s
2.9MB                             81s     20s
3.8MB                             107s    25s
4.4MB                             123s    30s

Best, Steffen



Re: [Pharo-users] [vwnc] Parsing in Smalltalk

2018-10-05 Thread Steffen Märcker

Hi Doru!

I assume that you tried the original PetitParser. PetitParser2 offers  
the possibility to optimize the parser (kind of a compilation), and this  
provides a significant speedup:

https://github.com/kursjan/petitparser2

Would you be interested in trying this out?


Yes, I'd like to give this a shot, too. However, as far as I know, PP2 is  
only available for Pharo and not VW, is it?


Speaking of optimizations, I also tried memoizing the PetitParser.
However, the times got worse instead of better. Is there a rule of thumb
for where to apply #memoized sensibly? As far as I understand,
applying it to the root parser does not memoize subsequent parsers, does
it?
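For intuition on where memoization can pay off, here is a minimal packrat-style sketch in Python (an illustration of the general technique, not PetitParser's actual implementation): each rule keeps its own cache keyed by input position, so a rule re-tried at the same position after backtracking is not re-parsed, while memoizing only the root caches only whole-input results, which rarely repeat.

```python
def memoized(rule):
    """Wrap a parse rule with a per-rule cache keyed by (text, position)."""
    cache = {}
    def wrapper(text, pos):
        key = (text, pos)
        if key not in cache:
            cache[key] = rule(text, pos)
        return cache[key]
    return wrapper

@memoized
def letter_a(text, pos):
    """Toy rule: parse a single 'a'; answer (value, new_pos) or None."""
    if pos < len(text) and text[pos] == "a":
        return ("a", pos + 1)
    return None

print(letter_a("ab", 0))  # ('a', 1)
print(letter_a("ab", 1))  # None
```

The cache itself costs time and memory on every invocation, which is consistent with memoization making a grammar slower when little backtracking actually occurs.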


Kind regards, Steffen



Re: [Pharo-users] [vwnc] Parsing in Smalltalk

2018-10-11 Thread Steffen Märcker

Dear Jan,

I just tried to use PP2 but ran into two issues:

1. PP2 does not load into Pharo 6.1 stable.
2. I use #- to create character classes but was not able to find the  
equivalent in PP2 yet.



There should be no fundamental issue with porting PP2 to VW. As far as I
know, there is an automated tool to do so, right?


I am not aware of this tool. Can you give me some hints on what exactly to
look for?


Best, Steffen


On the other hand, PP is stable and does not change; PP2 is maintained and
updated from time to time (mostly adding optimizations), so there might be
an overhead of syncing PP2 to VW.

Cheers,
Jan

[1]:
https://kursjan.github.io/petitparser2/pillar-book/build/Chapters/memoization.html

On Fri, Oct 5, 2018, 13:26 Steffen Märcker  wrote:


Hi Doru!

> I assume that you tried the original PetitParser. PetitParser2 offers
> the possibility to optimize the parser (kind of a compilation), and
this
> provides a significant speedup:
> https://github.com/kursjan/petitparser2
>
> Would you be interested in trying this out?

Yes, I'd like to give this a shot, too. However, as far as I know, PP2  
is

only available for Pharo and not VW, is it?

Speaking of optimizations, I also tried to use memoizing the petit
parser.
However, the times got worse instead of better. Is there a rule of thumb
where to apply #memoized in a sensible way? As far as I understand,
applying it to the root parser does not memoize subsequent parsers, does
it?

Kind regards, Steffen





Re: [Pharo-users] [vwnc] Parsing in Smalltalk

2018-10-11 Thread Steffen Märcker

Can you give more details? IIRC I have PP2 loaded in several 6.


I did the following:
1)  Download and start Pharo 6.1 stable via the launcher.
2a) Attempt to install PetitParser2 via the CatalogBrowser:
"Information
There was an error while trying to install PetitParser2.
Installation was cancelled."
2b) Attempt to install PP2 via the scripts from GitHub:
Metacello new
    baseline: 'PetitParser2';
    repository: 'github://kursjan/petitparser2';
    load.
Metacello new
    baseline: 'PetitParser2Gui';
    repository: 'github://kursjan/petitparser2';
    load.
"Could not resolve: [BaselineOfPetitParser2] in [...]"

Interestingly, it works in Pharo 7 dev, but there the GUI tools won't load
because of some issues with their dependencies.


I hope this helps. As I am not familiar with Pharo, I'd appreciate any  
hints.


Best, Steffen



Re: [Pharo-users] [vwnc] Parsing in Smalltalk

2018-10-11 Thread Steffen Märcker

I am using MacOS 10.13.6 and the 32bit VM:

Pharo 6.0
Latest update: #60546

... the string in the About dialog is wrong; it should be 6.1. I installed
it via the launcher as "Official Distribution: Pharo 6.1 - 32Bit (stable)".
I just noticed that the sources file is missing from vms/private/6521/, too.


Am .10.2018, 17:02 Uhr, schrieb Sean P. DeNigris :


Steffen Märcker wrote

I did the following:
1)  Download and start Pharo 6.1 stable via the launcher.
2b) Attempt to install PP2 via the scripts from GitHub:
 Metacello new
 baseline: 'PetitParser2';
 repository: 'github://kursjan/petitparser2';
 load.
 Metacello new
 baseline: 'PetitParser2Gui';
 repository: 'github://kursjan/petitparser2';
 load.


This way worked for me in Pharo #60546 (check in World->System->About).  
What

exact Pharo version/OS are you on? 32 or 64-bit



-
Cheers,
Sean
--
Sent from: http://forum.world.st/Pharo-Smalltalk-Users-f1310670.html







Re: [Pharo-users] [vwnc] Parsing in Smalltalk

2018-10-11 Thread Steffen Märcker
Reading the code of PetitParser, I wonder why PPRepeatingParser  
initializes 'max' with SmallInteger maxVal instead of some notion of  
infinity, like Float infinity (and PP2RepeatingNode as well). If I  
understand the code correctly, PParser>>min: fails if the number of  
repetitions exceeds SmallInteger maxVal, doesn't it?


Best, Steffen


Am .10.2018, 17:10 Uhr, schrieb Steffen Märcker :


I am using MacOS 10.13.6 and the 32bit VM:

Pharo 6.0
Latest update: #60546

... the string in the About dialog is wrong; it should be 6.1. I installed
it via the launcher as "Official Distribution: Pharo 6.1 - 32Bit (stable)".
I just noticed that the sources file is missing from vms/private/6521/,
too.


Am .10.2018, 17:02 Uhr, schrieb Sean P. DeNigris :


Steffen Märcker wrote

I did the following:
1)  Download and start Pharo 6.1 stable via the launcher.
2b) Attempt to install PP2 via the scripts from GitHub:
 Metacello new
 baseline: 'PetitParser2';
 repository: 'github://kursjan/petitparser2';
 load.
 Metacello new
 baseline: 'PetitParser2Gui';
 repository: 'github://kursjan/petitparser2';
 load.


This way worked for me in Pharo #60546 (check in World->System->About).  
What

exact Pharo version/OS are you on? 32 or 64-bit



-
Cheers,
Sean
--
Sent from: http://forum.world.st/Pharo-Smalltalk-Users-f1310670.html








Re: [Pharo-users] [vwnc] Parsing in Smalltalk

2018-10-11 Thread Steffen Märcker

Hi, I tried it some more times and things are different now:
- image appeared to lock up (1st)
- no network traffic at all (2nd)
- image unresponsive, loading successful after 2 minutes (3rd)
Call me a fool, but I didn't do anything differently. Notably, it succeeded
each time in 7.0. =)


There is no specific reason to use SmallInteger maxVal...  and nobody  
ever thought it might be too little.


Maybe it makes sense to change this? It appears to be just wrong, and on
32-bit the limit is well within practical reach. With a little guidance,
I'd try to do a first PR myself (if the change is considered sensible).


I was mentioning #min: since it is implemented in terms of 'min: min max:
SmallInteger maxVal'.


Is there an easy way to create a character class, similar to [1-2x-z]?

Best, Steffen




Use $- asPParser for characters, e.g:
$- asPParser parse: '-'

[1]: https://travis-ci.org/kursjan/petitparser2/builds/438358467

On Thu, Oct 11, 2018 at 8:13 PM Steffen Märcker  wrote:


Reading the code of PetitParser, I wonder why PPRepeatingParser
initializes 'max' with SmallInteger maxVal instead of some notion of
infinity, like Float infinity (and PP2RepeatingNode as well). If I
understand the code correctly, PParser>>min: fails if the number of
repetitions exceeds SmallInteger maxVal, doesn't it?

Best, Steffen


Am .10.2018, 17:10 Uhr, schrieb Steffen Märcker :

> I am using MacOS 10.13.6 and the 32bit VM:
>
> Pharo 6.0
> Latest update: #60546
>
> ... the String in about is wrong, it should be 6.1. I installed it via
> the launcher as "Official Distribution: Pharo 6.1 - 32Bit (stable)" I
> just noticed, that the sources file is missing from vms/private/6521/,
> too.
>
> Am .10.2018, 17:02 Uhr, schrieb Sean P. DeNigris  

>:
>
>> Steffen Märcker wrote
>>> I did the following:
>>> 1)  Download and start Pharo 6.1 stable via the launcher.
>>> 2b) Attempt to install PP2 via the scripts from GitHub:
>>>  Metacello new
>>>  baseline: 'PetitParser2';
>>>  repository: 'github://kursjan/petitparser2';
>>>  load.
>>>  Metacello new
>>>  baseline: 'PetitParser2Gui';
>>>  repository: 'github://kursjan/petitparser2';
>>>  load.
>>
>> This way worked for me in Pharo #60546 (check in
World->System->About).
>> What
>> exact Pharo version/OS are you on? 32 or 64-bit
>>
>>
>>
>> -
>> Cheers,
>> Sean
>> --
>> Sent from: http://forum.world.st/Pharo-Smalltalk-Users-f1310670.html
>>
>
>





[Pharo-users] Persistant Taskbar Previews

2018-10-12 Thread Steffen Märcker

Hi,

Using Pharo 6.1 for some hours, I ended up with quite a few previews from  
(long gone) taskbar entries that refuse to go away. They seem to be  
PanelMorphs. How do I get rid of them?


Best, Steffen



Re: [Pharo-users] Persistant Taskbar Previews

2018-10-13 Thread Steffen Märcker
Thanks, both of you. It turned out not to be a second UI process but simply
some orphan morphs. I got rid of them using alt-shift-click. I was just not
aware of how to bring up the halos. =)


Am .10.2018, 10:48 Uhr, schrieb Sven Van Caekenberghe :

I would guess you have 2 instances of the Morphic UI process, probably  
left over from some debugging. Kill one using the Process Browser. But  
this is just a blind guess.



On 13 Oct 2018, at 04:51, Ben Coman  wrote:



On Fri, 12 Oct 2018 at 18:57, Steffen Märcker  wrote:
Hi,

Using Pharo 6.1 for some hours, I ended up with quite a few previews  
from

(long gone) taskbar entries that refuse to go away. They seem to be
PanelMorphs. How do I get rid of them?

Best, Steffen

I'd guess like any morph you could meta-click on them and choose to  
delete them.

e.g. on Windows SHIFT-ALT-middleClick

cheers -ben









Re: [Pharo-users] [vwnc] Parsing in Smalltalk

2018-10-13 Thread Steffen Märcker

Hi,

I gave PetitParser 2 a try and I am pretty impressed by the results;
please see the updated table below. =) Again, that's pure parsing and
Array-based AST building. Moving to PP2 was indeed as easy as sending
#asPParser and working around character ranges ($a - $z). Is there a
preferred way to do the latter?


Jan mentioned that there might be an automated tool to port stuff to
VisualWorks. Do you have a name? And again the old question: what is the
preferred workflow to exchange code between the two dialects? Until now I
have stuck to FileOut30.


input  Prism         Storm  Xtreams.PEG   PP     PP2
size   parse  check  check  parse  cache  parse  parse  optim
230kB  0.1s   10s    6s     9s     3s     2s     4s     0.2s
544kB  0.2s   90s    20s    20s    7s     5s     9s     0.5s
1.1MB  0.4s   392s   46s    34s    13s    8s     15s    1.0s
1.4MB  0.8s   1091s  85s    47s    20s    12s    20s    1.3s
2.2MB                       63s    30s    16s    27s    1.9s
2.9MB                       81s    44s    20s    34s    2.5s
3.8MB                       107s   61s    25s    45s    3.1s
4.4MB                       123s   76s    30s    56s    3.7s

Best, Steffen


Am .10.2018, 05:22 Uhr, schrieb Tudor Girba :


Hi,

Interesting experiment. Thanks for sharing!

I assume that you tried the original PetitParser. PetitParser2 offers  
the possibility to optimize the parser (kind of a compilation), and this  
provides a significant speedup:

https://github.com/kursjan/petitparser2

Would you be interested in trying this out?

Cheers,
Doru




On Oct 4, 2018, at 10:46 PM, Steffen Märcker  wrote:

I gave Xtreams-Parsing and PetitParser a shot and would like to share my
findings.[*]


The task was to parse the modelling language of the probabilistic model
checker PRISM. I've written a grammar of about 130 definitions in the
Xtreams DSL, which is close to Bryan Ford's syntax. To avoid doing it
all again with PetitParser, I wrote a PetitParserGenerator that takes
the DSL and builds a PetitParser.


The numbers below are just parsing times, no further actions involved.  
For reference I show the times from PRISM (which uses JavaCC), too --  
although they involve additional verification and normalization steps  
on the AST.


input   Prism   XP     PP
230kB   14s     9s     2s
544kB   121s    20s    5s
1.1MB   421s    34s    8s
1.4MB   1091s   47s    12s
2.2MB           63s    16s
2.9MB           81s    20s
3.8MB           107s   25s
4.4MB           123s   30s

Please note that these times are not representative at all. It's just a  
single example and I put zero effort in optimization. However, I am  
quite satisfied with the results.


[*] I was already familiar with the DSL of Xtreams-Parsing, which I  
like very much. I did not consider SmaCC, as I find PEGs easier to use.


Best, Steffen



Am .10.2018, 20:14 Uhr, schrieb Steffen Märcker :


Dear all,

I have two questions regarding parsing frameworks.

1) Do you have any insights on the performance of SmaCC VS Xtreams  
Parsing VS PetitParser?
2) Has anybody started to port PetitParser 2 from Pharo to VW? Is it  
worth the effort?


Sorry for cross-posting, I thought this might interest both  
communities.


Cheers, Steffen


--
www.feenk.com

"No matter how many recipes we know, we still value a chef."











[Pharo-users] Pharo2VW (Was:[vwnc] Parsing in Smalltalk)

2018-10-19 Thread Steffen Märcker

Hi,

I tried the tool yesterday and found some hiccups. After opening issues on
GitHub, I wonder how I can easily contribute a PR there. I imagine there
is a straightforward way with Iceberg now, isn't it? I am sorry if I am
missing the obvious, but I am still not familiar with Pharo's comprehensive
tool set.


Best, Steffen


Am .10.2018, 18:06 Uhr, schrieb milton mamani :


Hi you can use

https://github.com/ObjectProfile/Pharo2VW

Cheers, Milton

El sáb., 13 oct. 2018 a las 12:38, Steffen Märcker ()
escribió:


Hi,

I gave PetitParser 2 a try and I am pretty impressed by the results,
please see the updated table below. =) Again, that's pure parsing and
Array-based AST-building. Moving to PP2 was indeed as easy as sending
#asPParser and working around character ranges ($a - $z). Is there a
preferred way to do the latter?

Jan mentioned that there might be an automated tool to port stuff to
VisualWorks. Do you have a name? And again the old question: what is the
preferred workflow to exchange code between the two dialects? Till now I
stick to FileOut30.

input  Prism         Storm  Xtreams.PEG   PP     PP2
size   parse  check  check  parse  cache  parse  parse  optim
230kB  0.1s   10s    6s     9s     3s     2s     4s     0.2s
544kB  0.2s   90s    20s    20s    7s     5s     9s     0.5s
1.1MB  0.4s   392s   46s    34s    13s    8s     15s    1.0s
1.4MB  0.8s   1091s  85s    47s    20s    12s    20s    1.3s
2.2MB                       63s    30s    16s    27s    1.9s
2.9MB                       81s    44s    20s    34s    2.5s
3.8MB                       107s   61s    25s    45s    3.1s
4.4MB                       123s   76s    30s    56s    3.7s

Best, Steffen


Am .10.2018, 05:22 Uhr, schrieb Tudor Girba :

> Hi,
>
> Interesting experiment. Thanks for sharing!
>
> I assume that you tried the original PetitParser. PetitParser2 offers
> the possibility to optimize the parser (kind of a compilation), and
this
> provides a significant speedup:
> https://github.com/kursjan/petitparser2
>
> Would you be interested in trying this out?
>
> Cheers,
> Doru
>
>
>
>> On Oct 4, 2018, at 10:46 PM, Steffen Märcker  wrote:
>>
>> I gave Xtreams-Parsing and PetitParser a shot and like to share my
>> findings.[*]
>>
>> The task was to parse the modelling language of the probabilistic
model
>> checker PRISM. I've written a grammer of about 130 definitions in the
>> Xtreams DSL, which is close to Bryan Fords syntax. To avoid doing it
>> all again with PetitParser, I wrote a PetitParserGenerator that takes
>> the DSL and builds a PetitParser.
>>
>> The numbers below are just parsing times, no further actions  
involved.

>> For reference I show the times from PRISM (which uses JavaCC), too --
>> although they involve additional verification and normalization steps
>> on the AST.
>>
>> input  PrismXP   PP
>> 230kB14s9s   2s
>> 544kB121s   20s   5s
>> 1.1MB421s   34s   8s
>> 1.4MB  1091s   47s  12s
>> 2.2MB  63s  16s
>> 2.9MB  81s  20s
>> 3.8MB 107s  25s
>> 4.4MB 123s  30s
>>
>> Please note that these times are not representative at all. It's just
a
>> single example and I put zero effort in optimization. However, I am
>> quite satisfied with the results.
>>
>> [*] I was already familiar with the DSL of Xtreams-Parsing, which I
>> like very much. I did not consider SmaCC, as I find PEGs easier to  
use.

>>
>> Best, Steffen
>>
>>
>>
>> Am .10.2018, 20:14 Uhr, schrieb Steffen Märcker :
>>
>>> Dear all,
>>>
>>> I have two questions regarding parsing frameworks.
>>>
>>> 1) Do you have any insights on the performance of SmaCC VS Xtreams
>>> Parsing VS PetitParser?
>>> 2) Has anybody started to port PetitParser 2 from Pharo to VW? Is it
>>> worth the effort?
>>>
>>> Sorry for cross-posting, I thought this might interest both
>>> communities.
>>>
>>> Cheers, Steffen
>
> --
> www.feenk.com
>
> "No matter how many recipes we know, we still value a chef."
>
>
>
>
>
>
>




Re: [Pharo-users] Pharo2VW (Was:[vwnc] Parsing in Smalltalk)

2018-10-19 Thread Steffen Märcker

Never mind, I figured it out easily. Must have been blind before.

Many thanks to the Iceberg team, the tool works like a charm in 6.1!


Am .10.2018, 10:58 Uhr, schrieb Steffen Märcker :


Hi,

I tried the tool yesterday and found some hiccups. After opening issues
on GitHub, I wonder how I can easily contribute a PR there. I imagine
there is a straightforward way with Iceberg now, isn't it? I am sorry if
I am missing the obvious, but I am still not familiar with Pharo's
comprehensive tool set.


Best, Steffen


Am .10.2018, 18:06 Uhr, schrieb milton mamani :


Hi you can use

https://github.com/ObjectProfile/Pharo2VW

Cheers, Milton

El sáb., 13 oct. 2018 a las 12:38, Steffen Märcker ()
escribió:


Hi,

I gave PetitParser 2 a try and I am pretty impressed by the results,
please see the updated table below. =) Again, that's pure parsing and
Array-based AST-building. Moving to PP2 was indeed as easy as sending
#asPParser and working around character ranges ($a - $z). Is there a
preferred way to do the latter?

Jan mentioned that there might be an automated tool to port stuff to
VisualWorks. Do you have a name? And again the old question: what is  
the
preferred workflow to exchange code between the two dialects? Till now  
I

stick to FileOut30.

input  Prism         Storm  Xtreams.PEG   PP     PP2
size   parse  check  check  parse  cache  parse  parse  optim
230kB  0.1s   10s    6s     9s     3s     2s     4s     0.2s
544kB  0.2s   90s    20s    20s    7s     5s     9s     0.5s
1.1MB  0.4s   392s   46s    34s    13s    8s     15s    1.0s
1.4MB  0.8s   1091s  85s    47s    20s    12s    20s    1.3s
2.2MB                       63s    30s    16s    27s    1.9s
2.9MB                       81s    44s    20s    34s    2.5s
3.8MB                       107s   61s    25s    45s    3.1s
4.4MB                       123s   76s    30s    56s    3.7s

Best, Steffen


Am .10.2018, 05:22 Uhr, schrieb Tudor Girba :

> Hi,
>
> Interesting experiment. Thanks for sharing!
>
> I assume that you tried the original PetitParser. PetitParser2 offers
> the possibility to optimize the parser (kind of a compilation), and
this
> provides a significant speedup:
> https://github.com/kursjan/petitparser2
>
> Would you be interested in trying this out?
>
> Cheers,
> Doru
>
>
>
>> On Oct 4, 2018, at 10:46 PM, Steffen Märcker  wrote:
>>
>> I gave Xtreams-Parsing and PetitParser a shot and like to share my
>> findings.[*]
>>
>> The task was to parse the modelling language of the probabilistic
model
>> checker PRISM. I've written a grammer of about 130 definitions in  
the

>> Xtreams DSL, which is close to Bryan Fords syntax. To avoid doing it
>> all again with PetitParser, I wrote a PetitParserGenerator that  
takes

>> the DSL and builds a PetitParser.
>>
>> The numbers below are just parsing times, no further actions  
involved.
>> For reference I show the times from PRISM (which uses JavaCC), too  
--
>> although they involve additional verification and normalization  
steps

>> on the AST.
>>
>> input  PrismXP   PP
>> 230kB14s9s   2s
>> 544kB121s   20s   5s
>> 1.1MB421s   34s   8s
>> 1.4MB  1091s   47s  12s
>> 2.2MB  63s  16s
>> 2.9MB  81s  20s
>> 3.8MB 107s  25s
>> 4.4MB 123s  30s
>>
>> Please note that these times are not representative at all. It's  
just

a
>> single example and I put zero effort in optimization. However, I am
>> quite satisfied with the results.
>>
>> [*] I was already familiar with the DSL of Xtreams-Parsing, which I
>> like very much. I did not consider SmaCC, as I find PEGs easier to  
use.

>>
>> Best, Steffen
>>
>>
>>
>> Am .10.2018, 20:14 Uhr, schrieb Steffen Märcker :
>>
>>> Dear all,
>>>
>>> I have two questions regarding parsing frameworks.
>>>
>>> 1) Do you have any insights on the performance of SmaCC VS Xtreams
>>> Parsing VS PetitParser?
>>> 2) Has anybody started to port PetitParser 2 from Pharo to VW? Is  
it

>>> worth the effort?
>>>
>>> Sorry for cross-posting, I thought this might interest both
>>> communities.
>>>
>>> Cheers, Steffen
>
> --
> www.feenk.com
>
> "No matter how many recipes we know, we still value a chef."
>
>
>
>
>
>
>




Re: [Pharo-users] library to chain select:/collect:/ ... via cascade

2018-10-19 Thread Steffen Märcker

Hi,

indeed, transducers provided a way to achieve this, e.g.

#(12 7 'a' nil #(0)) pipe
    filter: #notNil;
    filter: #isNumber;
    map: #squared;
    filter: #even;
    into: OrderedCollection.

But this feature is deprecated, as it was not that useful. The preferred  
way to do this is either:


#(12 7 'a' nil #(0))
    transduce: #notNil filter * #isNumber filter * #squared map * #even filter
    reduce: Set accumulate.

or:

Set <~ #even filter
    <~ #squared map
    <~ #isNumber filter
    <~ #notNil filter
    <~ #(12 7 'a' nil #(0)).

The advantage of the transducer approach is that it decouples  
filtering/mapping/etc. from iteration and aggregation. This facilitates  
reuse and makes it trivial to provide all operations to new custom data  
types.
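The Smalltalk pipeline above can be sketched in Python to show the decoupling; the names filtering, mapping, and transduce are illustrative stand-ins, not the library's API:

```python
def filtering(pred):
    """Transducer: pass an element on only if pred holds."""
    def transducer(rf):
        return lambda acc, x: rf(acc, x) if pred(x) else acc
    return transducer

def mapping(f):
    """Transducer: apply f to each element before reducing."""
    def transducer(rf):
        return lambda acc, x: rf(acc, f(x))
    return transducer

def transduce(xf, rf, init, items):
    """Attach the transformation xf to rf and fold over items once."""
    step = xf(rf)
    acc = init
    for x in items:
        acc = step(acc, x)
    return acc

# Mirrors: #notNil filter * #isNumber filter * #squared map * #even filter
xf = lambda rf: filtering(lambda x: x is not None)(
    filtering(lambda x: isinstance(x, (int, float)))(
        mapping(lambda x: x * x)(
            filtering(lambda x: x % 2 == 0)(rf))))

accumulate = lambda acc, x: (acc.add(x), acc)[1]  # like Set accumulate
print(transduce(xf, accumulate, set(), [12, 7, "a", None, (0,)]))  # {144}
```

The same xf could be reattached to a different reducing function, e.g., one that writes to a stream, without touching the filtering/mapping logic.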


However, I didn't have time to finish the Pharo port of Transducers yet.
Hence, a current version is available in Cincom's Public Store or
(most current) directly from me only. But if you are interested and have a
nice use case, I'd be happy to help out.


Best, Steffen


Am .10.2018, 08:45 Uhr, schrieb Julien :


I think this was the idea of Transducers as well.

Julien

---
Julien Delplanque
Doctorant à l’Université de Lille
http://juliendelplanque.be/phd.html
Equipe Rmod, Inria
Bâtiment B 40, Avenue Halley 59650 Villeneuve d'Ascq
Numéro de téléphone: +333 59 35 86 40


Le 17 oct. 2018 à 09:13, Peter Uhnak  a écrit :

Hi,

is there some library that will allow me to chain select:/collect:/...  
via cascade?


E.g.

#(12 7 'a' nil #(0)) query reject: #isNil; select: #isNumber; collect:  
#squared; select: #even?


The point is to not have to write billion parentheses when building a  
more complex query.


I imagine this would be pretty easy to write, but figured I ask first.

Thanks,
Peter




Re: [Pharo-users] library to chain select:/collect:/ ... via cascade

2018-10-19 Thread Steffen Märcker
Another important difference is that no intermediate collections are
built. In contrast, chaining x enumeration messages (#select:, #collect:,
etc.) iterates x times over the collection elements and builds x-1
intermediate collections.
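A small Python comparison illustrates the difference (illustrative only; both styles compute the same result, but the chained one allocates an intermediate collection per step):

```python
data = [12, 7, "a", None, (0,)]

# Chained style: three passes over the data, two intermediate lists.
step1 = [x for x in data if isinstance(x, int)]   # intermediate 1
step2 = [x * x for x in step1]                    # intermediate 2
chained = [x for x in step2 if x % 2 == 0]

# Fused style: a single pass, no intermediate collections.
fused = []
for x in data:
    if isinstance(x, int):
        y = x * x
        if y % 2 == 0:
            fused.append(y)

print(chained == fused, fused)  # True [144]
```

Transducer composition gives the fused behavior automatically while keeping each step a separate, reusable object.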


Am .10.2018, 11:59 Uhr, schrieb Steffen Märcker :


Hi,

indeed, transducers provided a way to achieve this, e.g.

#(12 7 'a' nil #(0)) pipe
    filter: #notNil;
    filter: #isNumber;
    map: #squared;
    filter: #even;
    into: OrderedCollection.

But this feature is deprecated, as it was not that useful. The preferred  
way to do this is either:


#(12 7 'a' nil #(0))
    transduce: #notNil filter * #isNumber filter * #squared map * #even filter
    reduce: Set accumulate.

or:

Set <~ #even filter
 <~ #squared map
 <~ #isNumber filter
 <~ #notNil filter
 <~ #(12 7 'a' nil #(0)).

The advantage of the transducer approach is that it decouples  
filtering/mapping/etc. from iteration and aggregation. This facilitates  
reuse and makes it trivial to provide all operations to new custom data  
types.


However, I didn't have time to finish the Pharo port of Transducers
yet. Hence, a current version is available in Cincom's Public Store
or (most current) directly from me only. But if you are interested and
have a nice use case, I'd be happy to help out.


Best, Steffen


On .10.2018, 08:45, Julien wrote:


I think this was the idea of Transducers as well.

Julien

---
Julien Delplanque
PhD student at the University of Lille
http://juliendelplanque.be/phd.html
RMoD team, Inria
Bâtiment B 40, Avenue Halley, 59650 Villeneuve d'Ascq
Phone: +333 59 35 86 40


<---snip--->








Re: [Pharo-users] [vwnc] Parsing in Smalltalk

2018-10-22 Thread Steffen Märcker

Hi Jan,

I am trying to port PP2 to VW and managed to get most of the tests green.  
=) Some of the remaining test failures occur in


PP2DebuggingStrategy>>cacheIfNeeded:debugResult:

where a result of nil is to be stored in an IdentityDictionary. But in VW,
Dictionaries do not accept nil as a key. If this is indeed intended, I
wonder how best to circumvent the limitation. Would it be feasible to use
a placeholder object instead of nil (which code would be affected)? Or
would it be better not to cache nil at all?


I'd be happy to hear your thoughts.

Best, Steffen





On .10.2018, 20:58, Jan Kurš wrote:

I run PP2 on travis [1], seems Pharo 6.1 loads all configurations, both  
on
linux and mac. Pharo 5, Pharo 6.0 got broken, why is build configuration  
so

hard :'( I don't know how I can support you. I myself had to give up on
some tools, because I failed to load them.

There is no specific reason to use SmallInteger maxVal...  and nobody  
ever

thought it might be too little. 'PP2 min: X' fails if there are fewer
repetitions than X. 'PP2 max: X' parses at most X repetitions.

($a asPParser min: 2 max: 3) parse: 'a'. -> Failure
($a asPParser min: 2 max: 3) parse: 'aa'. -> #($a $a)
($a asPParser min: 2 max: 3) parse: 'aaa'. -> #($a $a $a)
($a asPParser min: 2 max: 3) parse: 'aaaa'. -> #($a $a $a)


Use $- asPParser for characters, e.g.:
$- asPParser parse: '-'

[1]: https://travis-ci.org/kursjan/petitparser2/builds/438358467

On Thu, Oct 11, 2018 at 8:13 PM Steffen Märcker  wrote:


Reading the code of PetitParser, I wonder why PPRepeatingParser
initializes 'max' with SmallInteger maxVal instead of some notion of
infinity, like Float infinity (and PP2RepeatingNode as well). If I
understand the code correctly, PParser>>min: fails if the number of
repetitions exceeds SmallInteger maxVal, doesn't it?

Best, Steffen


On .10.2018, 17:10, Steffen Märcker wrote:

> I am using MacOS 10.13.6 and the 32bit VM:
>
> Pharo 6.0
> Latest update: #60546
>
> ... the String in about is wrong, it should be 6.1. I installed it via
> the launcher as "Official Distribution: Pharo 6.1 - 32Bit (stable)" I
> just noticed, that the sources file is missing from vms/private/6521/,
> too.
>
> On .10.2018, 17:02, Sean P. DeNigris wrote:
>
>> Steffen Märcker wrote
>>> I did the following:
>>> 1)  Download and start Pharo 6.1 stable via the launcher.
>>> 2b) Attempt to install PP2 via the scripts from GitHub:
>>>  Metacello new
>>>  baseline: 'PetitParser2';
>>>  repository: 'github://kursjan/petitparser2';
>>>  load.
>>>  Metacello new
>>>  baseline: 'PetitParser2Gui';
>>>  repository: 'github://kursjan/petitparser2';
>>>  load.
>>
>> This way worked for me in Pharo #60546 (check in
World->System->About).
>> What
>> exact Pharo version/OS are you on? 32 or 64-bit
>>
>>
>>
>> -
>> Cheers,
>> Sean
>> --
>> Sent from: http://forum.world.st/Pharo-Smalltalk-Users-f1310670.html
>>
>
>





Re: [Pharo-users] [vwnc] Parsing in Smalltalk

2018-10-22 Thread Steffen Märcker

Hi,

my current shot is to subclass IdentityDictionary with
NildentityDictionary. The class checks on indexed access whether the key
is nil. If so, storing/retrieval uses a singleton Object as key (stored in
the instance variable NIL := Object new). I needed to override at least:


  >>at:ifAbsent:
  >>associationsAt:ifAbsent:
  >>at:put:
  >>findElementLike:ifAbsent:
  >>initialize
  class>>new
  class>>new:

It works for PP2 (without serious performance impact so far), although the
class is still incomplete wrt. the Collection/Dictionary protocol.
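A minimal sketch of the key translation (method bodies hypothetical,
following the description above):

	NildentityDictionary>>at: key put: value
		^super at: (key ifNil: [NIL]) put: value

	NildentityDictionary>>at: key ifAbsent: aBlock
		^super at: (key ifNil: [NIL]) ifAbsent: aBlock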


Currently, all but the Morphic-related tests are green. \o/ After some
cleanup of the bundles, I'll publish an initial version to the Public Store.


@James: If you like, we could discuss our ports later to double-check and
make the future process easier. How about that?

@Jan: I'd be happy to hear your thoughts on the porting matter.

Best, Steffen



On .10.2018, 16:06, James Foster wrote:

We will have the same issue porting PP2 to GemStone and look forward to  
suggestions.



On Oct 22, 2018, at 4:40 AM, Steffen Märcker  wrote:

Hi Jan,

I am trying to port PP2 to VW and managed to get most of the tests  
green. =) Some of the remaining test failures occur in


PP2DebuggingStrategy>>cacheIfNeeded:debugResult:

where a result of nil is to be stored in an IdentityDictionary. But in VW,
Dictionaries do not accept nil as a key. If this is indeed intended, I
wonder how best to circumvent the limitation. Would it be feasible to use
a placeholder object instead of nil (which code would be affected)? Or
would it be better not to cache nil at all?


I'd be happy to hear your thoughts.

Best, Steffen





On .10.2018, 20:58, Jan Kurš wrote:

I run PP2 on travis [1], seems Pharo 6.1 loads all configurations,  
both on
linux and mac. Pharo 5, Pharo 6.0 got broken, why is build  
configuration so
hard :'( I don't know how I can support you. I myself had to give up on
some tools, because I failed to load them.

There is no specific reason to use SmallInteger maxVal...  and nobody  
ever

thought it might be too little. 'PP2 min: X' fails if there are fewer
repetitions than X. 'PP2 max: X' parses at most X repetitions.

($a asPParser min: 2 max: 3) parse: 'a'. -> Failure
($a asPParser min: 2 max: 3) parse: 'aa'. -> #($a $a)
($a asPParser min: 2 max: 3) parse: 'aaa'. -> #($a $a $a)
($a asPParser min: 2 max: 3) parse: 'aaaa'. -> #($a $a $a)


Use $- asPParser for characters, e.g.:
$- asPParser parse: '-'

[1]: https://travis-ci.org/kursjan/petitparser2/builds/438358467

On Thu, Oct 11, 2018 at 8:13 PM Steffen Märcker  wrote:


Reading the code of PetitParser, I wonder why PPRepeatingParser
initializes 'max' with SmallInteger maxVal instead of some notion of
infinity, like Float infinity (and PP2RepeatingNode as well). If I
understand the code correctly, PParser>>min: fails if the number of
repetitions exceeds SmallInteger maxVal, doesn't it?

Best, Steffen


On .10.2018, 17:10, Steffen Märcker wrote:

> I am using MacOS 10.13.6 and the 32bit VM:
>
> Pharo 6.0
> Latest update: #60546
>
> ... the String in about is wrong, it should be 6.1. I installed it  
via
> the launcher as "Official Distribution: Pharo 6.1 - 32Bit (stable)"  
I
> just noticed, that the sources file is missing from  
vms/private/6521/,

> too.
>
> On .10.2018, 17:02, Sean P. DeNigris wrote:
>
>> Steffen Märcker wrote
>>> I did the following:
>>> 1)  Download and start Pharo 6.1 stable via the launcher.
>>> 2b) Attempt to install PP2 via the scripts from GitHub:
>>>  Metacello new
>>>  baseline: 'PetitParser2';
>>>  repository: 'github://kursjan/petitparser2';
>>>  load.
>>>  Metacello new
>>>  baseline: 'PetitParser2Gui';
>>>  repository: 'github://kursjan/petitparser2';
>>>  load.
>>
>> This way worked for me in Pharo #60546 (check in
World->System->About).
>> What
>> exact Pharo version/OS are you on? 32 or 64-bit
>>
>>
>>
>> -
>> Cheers,
>> Sean
>> --
>> Sent from:  
http://forum.world.st/Pharo-Smalltalk-Users-f1310670.html

>>
>
>










[Pharo-users] [ANN] PetitParser2 for VisualWorks (Was: [vwnc] Parsing in Smalltalk)

2018-10-23 Thread Steffen Märcker

Hi,

I am happy to announce that I've just finished an initial port of (the  
impressive) PetitParser2 (https://github.com/kursjan/petitparser2) to  
VisualWorks. It is available in the Public Store. The port consists of 4  
bundles:

* PetitParser2
* PetitParser2-Tests
* PetitParser2-Smalltalk (defunct, test support only)
* PetitParser2-Validation (defunct, test support only)

I'd be happy to hear your comments.

Cheers,
Steffen



Re: [Pharo-users] Pharo2VW (Was:[vwnc] Parsing in Smalltalk)

2018-10-23 Thread Steffen Märcker
You're welcome. =) I really like the tool as it makes cross-platform-life  
a lot easier. I really wish there was a similar tool for the reverse  
direction, too!


Cheers! Steffen

On .10.2018, 22:38, Alexandre Bergel via Pharo-users wrote:



Thanks Steffen for your contribution to Pharo2VW.
We reviewed and accepted your PullRequests.

Alexandre


On Oct 19, 2018, at 6:29 AM, Steffen Märcker  wrote:

Nevermind, I figured it out easily. Must have been blind before.

Many thanks to the Iceberg team, the tool works like a charm in 6.1!


On .10.2018, 10:58, Steffen Märcker wrote:


<---snip--->











[Pharo-users] [ANN] PEG Grammar, Compiler and Parser for PetitParser2

2018-10-24 Thread Steffen Märcker

Hi,

I've just published a small tool that helped me prototype grammars.

PP2PEGGrammar - the original PEG ASCII syntax
PP2PEGParserParser - parse a PEG and build a parser object
PP2PEGParserCompiler - compile a PEG to methods and variables in a given  
parser class


It is available as PetitParser2-PEG in the PublicStore (VW) and on GitHub  
(https://github.com/merkste/petitparser2/tree/PEG).


Best, Steffen


On .10.2018, 22:46, Steffen Märcker wrote:

I gave Xtreams-Parsing and PetitParser a shot and like to share my  
findings.[*]


The task was to parse the modelling language of the probabilistic model
checker PRISM. I've written a grammar of about 130 definitions in the
Xtreams DSL, which is close to Bryan Ford's syntax. To avoid doing it all
again with PetitParser, I wrote a PetitParserGenerator that takes the
DSL and builds a PetitParser.


The numbers below are just parsing times, no further actions involved.  
For reference I show the times from PRISM (which uses JavaCC), too --  
although they involve additional verification and normalization steps on  
the AST.


input   Prism     XP    PP
230kB     14s     9s    2s
544kB    121s    20s    5s
1.1MB    421s    34s    8s
1.4MB   1091s    47s   12s
2.2MB       -    63s   16s
2.9MB       -    81s   20s
3.8MB       -   107s   25s
4.4MB       -   123s   30s

Please note that these times are not representative at all. It's just a
single example and I put zero effort into optimization. However, I am
quite satisfied with the results.


[*] I was already familiar with the DSL of Xtreams-Parsing, which I like  
very much. I did not consider SmaCC, as I find PEGs easier to use.


Best, Steffen



On .10.2018, 20:14, Steffen Märcker wrote:


Dear all,

I have two questions regarding parsing frameworks.

1) Do you have any insights on the performance of SmaCC VS Xtreams  
Parsing VS PetitParser?
2) Has anybody started to port PetitParser 2 from Pharo to VW? Is it  
worth the effort?


Sorry for cross-posting, I thought this might interest both communities.

Cheers, Steffen




Re: [Pharo-users] VW2Pharo ? (Was: Pharo2VW (Was:[vwnc] Parsing in Smalltalk))

2018-11-15 Thread Steffen Märcker
Please let me know if I can help with that porting tool! Btw, is that RDF
as in the Resource Description Framework?


Dreamer mode: It would be awesome to teach VW tonel and Git for export and  
versioning!


Best, Steffen

On .11.2018, 11:37, Christian Haider wrote:



Hi,


I did a tool for porting PDFtalk from VW to Gemstone (bundle {Gemstone  
Fileout Project}) .


When I have time, I plan to extend it for Pharo in order to port RDF  
(and Values).


I am thinking of making Pharo my RDF engine… we’ll see…


Happy hacking,

Christian


From: Cédrick Béler 
Sent: Wednesday, 14 November 2018 21:23
To: Any question about pharo is welcome 
Cc: Christian Haider ;  
Christian Haider 

Subject: VW2Pharo ? (Was: Pharo2VW (Was:[vwnc] Parsing in Smalltalk))



Subject: Re: [Pharo-users] Pharo2VW (Was:[vwnc] Parsing in Smalltalk)


I really like the tool as it makes cross-platform-life a lot easier.


Thanks :-)




I really wish there was a similar tool for the reverse direction, too!


That is interesting. Anyone else interested in such a tool?



A tool to port from VW to Pharo ?


Me interested, at least to port Christian work on RDF :)


Cheers,


Cédrick









Cheers,
Alexandre






Cheers! Steffen

On .10.2018, 22:38, Alexandre Bergel via Pharo-users wrote:





Thanks Steffen for your contribution to Pharo2VW.
We reviewed and accepted your PullRequests.

Alexandre




On Oct 19, 2018, at 6:29 AM, Steffen Märcker <mailto:merk...@web.de> > wrote:


Nevermind, I figured it out easily. Must have been blind before.

Many thanks to the Iceberg team, the tool works like a charm in 6.1!


On .10.2018, 10:58, Steffen Märcker wrote:


<---snip--->





















Re: [Pharo-users] Playing with Pharo 7

2018-11-27 Thread Steffen Märcker

Great!

On .11.2018, 21:47, Cyril Ferlicot D. wrote:



On 26/11/2018 at 16:57, Konrad Hinsen wrote:


Thanks for the pointer! Unfortunately all the documentation there
assumes way more familiarity with Smalltalk than I have. And even a lot
of familiarity with Metacello itself, as the documentation only explains
the latest functionality.

The good news is that it refers to "Deep into Pharo", so I will download
that book and see if it answers my questions.

Konrad.



Hello,

There is also a recent guide here:
https://github.com/pharo-tips-and-tricks/pharo-tips-and-tricks/blob/master/General/Baselines.md

I don't know if it is more beginner friendly. Feedback is welcome.







Re: [Pharo-users] [ANN] PetitParser (1) has been migrated to Github

2018-12-04 Thread Steffen Märcker

Great, thanks a lot!

Steffen

On .12.2018, 10:22, Julien wrote:


Hello,

I just finished the migration of PetitParser (1) on Github:  
https://github.com/moosetechnology/PetitParser  
 .


I created a release v1.9.2 which mirrors latest release of Smalltalkhub.

Cheers,

Julien

---
Julien Delplanque
PhD student at the University of Lille
http://juliendelplanque.be/phd.html
RMoD team, Inria
Bâtiment B 40, Avenue Halley, 59650 Villeneuve d'Ascq
Phone: +333 59 35 86 40







Re: [Pharo-users] Updating singletons

2019-01-03 Thread Steffen Märcker
Have you considered yielding a proxy object (or objects) instead of the actual
singleton in uniqueInstance? This way it suffices to update the proxy with each
update of the code.
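A sketch of that idea (class name hypothetical): clients hold only the
proxy, which forwards every message to the current target, so a code
update only needs to swap the target.

	SingletonProxy>>target: anObject
		target := anObject

	SingletonProxy>>doesNotUnderstand: aMessage
		^aMessage sendTo: target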

On 3 January 2019 15:36:27 CET, Ben Coman wrote:
>On Thu, 3 Jan 2019 at 20:01, Konrad Hinsen via Pharo-users <
>pharo-users@lists.pharo.org> wrote:
>
>> Dear Pharo experts,
>>
>> I am wondering if there is a good way to deal with singleton objects
>> whose value needs to be updated following changes in the code that
>> initializes it.
>>
>> Following the model of many examples in Pharo itself, I have defined
>a
>> singleton class with a uniqueInstance method for accessing (and
>creating
>> if necessary) the single instance, and a method "reset" marked as a
>> script to set the uniqueInstance back to nil when required, i.e.
>after
>> source code changes make the prior value inappropriate.
>>
>> This works very well, as long as I don't forget to do the reset,
>which
>> has already caused me a few hours of debugging time. Worse, suppose
>> someone else is using my project in progress, pulling changes from my
>> GitHub repo once per week. That person cannot know if the latest
>changes
>> require a singleton reset. More importantly, users shouldn't have to
>> know about such internal details at all.
>>
>> So is there a way to do the reset automatically whenever a new
>version
>> of my package is loaded into an image?
>>
>> Thanks in advance,
>>   Konrad.
>>
>
>You might use a Baseline #postLoadDoIt:
>https://github.com/pharo-open-documentation/pharo-wiki/blob/master/General/Baselines.md
>
>
>Consider that a person pulling your changes cannot know if you have
>upgraded the library versions of any dependencies,
>so always updating via a Baseline might be a reasonable expectation.
>
>But that doesn't help you while developing within the one image.
>
>cheers -ben


Re: [Pharo-users] Updating singletons

2019-01-06 Thread Steffen Märcker

Hi Konrad,

sorry, I might have misread your question as being about updating
references to an obsolete object.

However, there is another fun possibility to detect that a change happened
that does not involve configuration - though it is a bit hacky. ;-) You
could write the accessor method such that it recompiles itself on the first
access after loading new code. For example:



Singleton class>>uniqueInstance
	instance := self new.
	self class compile: 'uniqueInstance ^instance'.
	^instance


Best, Steffen


On .01.2019, 18:04, Konrad Hinsen wrote:


Hi Steffen,

Have you considered to yield (an) proxy object(s) instead of the  
actual  > Singleton in uniqueInstance? This way it suffices to update  
the proxy

 > with each update of the code.
I am not sure I understand what you are proposing. My problem is  
figuring out how to change in-memory objects when they become obsolete  
after a code change. So I'd say it doesn't matter much if the object I  
need to change is the singleton or a proxy to it, but maybe I am missing  
something!


Cheers,
   Konrad.




Re: [Pharo-users] Documentation on Traits

2019-01-09 Thread Steffen Märcker
Nice. Do you know where to find details on stateful traits and slots? I am
familiar with stateless traits and wonder how state is incorporated and how
conflicts are handled.

Best, Steffen

On 9 January 2019 15:21:09 CET, Konrad Hinsen wrote:
>Cyril Ferlicot  writes:
>
>> There is documentation on Traits here:
>>
>https://github.com/pharo-open-documentation/pharo-wiki/blob/master/General/Traits.md
>
>Great, thanks!
>
>Konrad.


Re: [Pharo-users] Documentation on Traits

2019-01-09 Thread Steffen Märcker
Maybe I missed that part: Can conflicts with slots occur? The section on 
conflicts only mentions methods as cause.

Do you have a pointer to a slots documentation?


On 9 January 2019 20:32:51 CET, "Cyril Ferlicot D." wrote:
>On 09/01/2019 at 18:49, Steffen Märcker wrote:
>> Nice. Do you know where to find details on stateful traits and slots?
>I
>> am familiar with stateless traits and wonder how state is
>incoeporated
>> and how conflicts are handled.
>> 
>
>The guide I send cover the user documentation of Stateful traits and
>how
>to handle conflict.
>
>What it does not cover are Slots (that are independents from Traits)
>and
>implementation documentation.
>
>> Best, Steffen
>> 
>
>
>-- 
>Cyril Ferlicot
>https://ferlicot.fr


Re: [Pharo-users] Symbol equality method #= - weird condition in the Pharo sourcecode

2019-03-01 Thread Steffen Märcker
Is it consistent with the definition of #= in String such that
'aSymbol = aString' iff 'aString = aSymbol'?
Sorry, I don't have an image at hand.


On 1 March 2019 18:40:11 CET, Sven Van Caekenberghe wrote:
>Why ? Please explain ...
>
>> On 1 Mar 2019, at 18:02, David T. Lewis  wrote:
>> 
>> On Fri, Mar 01, 2019 at 05:18:27PM +0100, Sven Van Caekenberghe
>wrote:
>>> 
>>> 
 On 1 Mar 2019, at 17:08, Petr Fischer via Pharo-users
> wrote:
 
 
 From: Petr Fischer 
 Subject: Symbol equality method #= - weird condition in the Pharo
>sourcecode
 Date: 1 March 2019 at 17:08:03 GMT+1
 To: pharo-users@lists.pharo.org
 
 
 Hello, this is Symbol equality method in Pharo:
 
 1: = aSymbol
 2: "Compare the receiver and aSymbol." 
 3: self == aSymbol ifTrue: [^ true].
 4: self class == aSymbol class ifTrue: [^ false].
 5: "Use String comparison otherwise"
 6: ^ super = aSymbol
 
 Look at line 4 - what does it mean? That's wrong, isn't it?
 
 Typically, every symbol comparisons end up in line 3, but if you do
>some work with forward proxies for example, condition on line 3 is
>"false" and then weird things on line 4 happens.
 
 If line 4 and further are correct, can someone explain a little?
 
 Thanks! pf
>>> 
>>> Yes, that looks weird. Line 4 should probably be removed, unless I
>am missing something.
>> 
>> It is wrong in a Spur image, because we now have subclasses of
>Symbol.
>> But removing line 4 is not the right solution. See Nicolas'
>implementation
>> in Squeak:
>> 
>> Symbol>>= aSymbol
>>  "Compare the receiver and aSymbol." 
>>  self == aSymbol ifTrue: [^ true].
>>  aSymbol isSymbol ifTrue: [^ false].
>>  "Use String comparison otherwise"
>>  ^ super = aSymbol
>> 
>> Dave
>> 
>>> 
>>> Symbols are by definition always #== so in that sense, #= should not
>even be implemented (as #= on Object is defined as #==), but since its
>direct super class String already overwrote #=, it has to follow.
>>> 
>>> The super call in line 6 is what allows Symbols and String to be
>compared.
>>> 
>>> I would say line 4 is a kind of sanity check, but probably not
>needed.


Re: [Pharo-users] Why can't a Bag answer as a dictionary?

2019-03-07 Thread Steffen Märcker

This is exactly how I think and feel about bags. Thanks Richard.

On .03.2019, 01:35, Richard O'Keefe wrote:


To me, a bag is first and foremost a multiset.
Yes, it's a collection, but it's not *just* a collection.
It's a very specific kind of collection with a very
salient "characteristic function".

In my own Smalltalk library, #asDictionary isn't even *defined*
on things that are not dictionaries or some kind of mapping,
because let's face it, a dictionary *isn't* a collection of
Associations, and trying to make it look like one made
Smalltalk-80 rather inconsistent.  For example, if a dictionary
were a collection of associations, you would expect
(Dictionary with: #a -> 1 with: #b -> 2) includes: #a -> 1
to be true. No, it's false.  You would expect
(Dictionary with: #a -> 1 with: #b -> 2) asArray
to be (a->1 b->2).  No, it's (1 2).  In order to avoid major
confusion, I had to learn never to think of Associations in
connection with Dictionaries.  So I expect #asDictionary to
have something to do with #keysAndValuesDo:.
or something like that.

For {1->2. 3->4} asDictionary I would have to write
Dictionary withAllAssociations: {1->2. 3->4}
except that I never actually found a use for it.

#valuesAndCounts is a method that violates good OO practice,
because it exposes (what you'd expect to be) private state.
For example,
  b := 'abracadabra' asBag.
  b valuesAndCounts at: $a put: 'BOOM!'.
  b
select, Print It, BOOM!.  It should *copy* the dictionary.



On Thu, 7 Mar 2019 at 04:50, Sven Van Caekenberghe  wrote:


I was just explaining how it is now, what I think the rationale is
behind
it.

I understand #asDictionary as working on a collection of
pairs/associations (because it basically goes to #withAll:).

A bag is just a collection that is optimised for many duplicates, the
fact
that you have values and counts is more an implementation detail than an
intrinsic property.

The conversion that you want, and that already exists in
#valuesAndCounts
is one interpretation of what a bag is, not the only one. A bag is
foremost
a collection of things.

I am not immediately convinced that #valuesAndCounts should be the
default
#asDictionary interpretation.

What about the inverse for example ?

  { #foo->2. #bar->3 } asDictionary asBag.

But this is certainly an interesting discussion.

> On 6 Mar 2019, at 16:23, Tim Mackinnon  wrote:
>
> As Richard said - as a bag is relationship between keys and
frequencies,
I would expect it to be able to convert to a dictionary.
>
> It displays in an inspector just like a Dictionary - which is why I
figured I could convert to pass it back to the exercise that was written
with Dictionaries in mind.
>
> 
>
>
> The code to fix it is quite simple, but on these kinds of things - I
thought it worth checking before submitting a PR.
>
> Tim
>
>
>> On 6 Mar 2019, at 13:53, Sven Van Caekenberghe  wrote:
>>
>> Why would that work ? What would you expect the output to be ?
>>
>> Try:
>>
>>  #(1 2 3) asDictionary
>>
>> it fails in exactly the same way. You need key/value pairs
(Associations).
>>
>> These do work
>>
>>  Bag new add: #foo->100; asDictionary.
>>
>>  Bag new addAll: 'ABABAB'; valuesAndCounts.
>>
>>> On 6 Mar 2019, at 14:25, Tim Mackinnon  wrote:
>>>
>>> I was surprised to find that a Bag can’t convert to a dictionary -
e.g.
>>>
>>> Bag new
>>> addAll: 'aacddd’;
>>> asDictionary
>>>
>>> Gives an error - Dnu #key
>>>
>>>
>>> It looks to me like Bag is inheriting a bad version of
#associationsDo: and could simply forward it to #doWithOccurrences:
instead?
>>>
>>> I know Bag is not used that much - but it comes up a lot in
programming exercises.
>>>
>>> Tim
>>
>>
>









[Pharo-users] Run old 32-Bit image on 64-Bit macOS

2022-02-02 Thread Steffen Märcker

Hi,
I am looking for a way to run an old 32-Bit image on a modern Mac
(Monterey). Pharo Launcher gives me the following error:

Cannot run Prism Grammar (Pharo 6.1)!
This image requires a 32-Bit VM to run and 32-Bit applications are not
supported on this Operating System.

Is there a way to run it nevertheless? I want to extract some old app/code
from the image for further use.

Best, Steffen


[Pharo-users] Re: [ANN] Pharo 10 released!

2022-04-06 Thread Steffen Märcker
Congratulations on the new release. That's quite an impressive amount of
changes and cleanups. Will the stable version be available in the Pharo
Launcher soon?


Cheers, Steffen


Esteban Lorenzano wrote on Tuesday, 5 April 2022, 12:39:44 (+02:00):


Dear Pharo users and dynamic language lovers: 


We have released Pharo version 10 !


Pharo is a pure object-oriented programming language and a powerful 
environment, focused on simplicity and immediate feedback.




Pharo 10 was a short iteration where we focused mainly on stability and
enhancement of the environment:


Massive system cleanup
gained speed
removed dead code
removed old/deprecated frameworks (Glamour, GTTools, Spec1)
All remaining tools written using the deprecated frameworks have been
rewritten: Dependency Analyser, Critique Browser, and many other small
utilities.
Modularisation has made a leap, creating correct baselines (project 
descriptions) for many internal systems, making possible the work and 
deployment of minimal images.
Removing support for the old Bytecode sets and embedded blocks simplified the 
compiler and language core.
As a result, our image size has been reduced by 10% (from 66MB to 58MB)
The VM has also improved in several areas: better async I/O support, socket 
handling, FFI ABI,  
Even being a short iteration, we have closed a massive amount of issues: around 
600 issues and 700 pull requests. A more extended changelog can be found at 
https://github.com/pharo-project/pharo-changelogs/blob/master/Pharo100ChangeLogs.md.


While the technical improvements are significant, still the most impressive 
fact is that the new code that got in the main Pharo 10 image was contributed 
by more than 80 people.


Pharo is more than code. It is an exciting project involving a great community. 


We thank all the contributors to this release:


Aaron Bieber, Ackerley Tng, Alban Benmouffek, Alejandra Cossio, Aless Hosry, 
Alexandre Bergel, Aliaksei Syrel, Alistair Grant, Arturo Zambrano, Asbathou 
Biyalou-Sama, Axel Marlard, Bastien Degardins, Ben Coman, Bernardo Contreras, 
Bernhard Pieber, Carlo Teixeira, Carlos Lopez, Carolina Hernandez, Christophe 
Demarey, Clotilde Toullec, Connor Skennerton, Cyril Ferlicot, Dave Mason, David 
Wickes, Denis Kudriashov, Eric Gade, Erik Stel, Esteban Lorenzano, Evelyn Cusi 
Lopez, Ezequiel R. Aguerre, Gabriel Omar Cotelli, Geraldine Galindo, Giovanni 
Corriga, Guille Polito, Himanshu, Jan Bliznicenko, Jaromir Matas, Kasper 
Østerbye, Kausthub Thekke Madathil, Konrad Hinsen, Kurt Kilpela, Luz Paz, Marco 
Rimoldi, Marcus Denker, Martín Dias, Massimo Nocentini, Max Leske, 
Maximilian-ignacio Willembrinck Santander, Miguel Campero, Milton Mamani 
Torres, Nahuel Palumbo, Norbert Hartl, Norm Green, Nour Djihan, Noury 
Bouraqadi, Oleksandr Zaitsev, Pablo Sánchez Rodríguez, Pablo Tesone, Pavel 
Krivanek, Pierre Misse-Chanabier, Quentin Ducasse, Raffaello Giulietti, 
Rakshit, Renaud de Villemeur, Rob Sayers, Roland Bernard, Ronie Salgado, 
Santiago Bragagnolo, Sean DeNigris, Sebastian Jordan Montt, Soufyane Labsari, 
Stephan Eggermont, Steven Costiou, Stéphane Ducasse, Sven Van Caekenberghe, 
Theo Rogliano, Thomas Dupriez, Théo Lanord, Torsten Bergmann, Vincent Blondeau.
 


(If you contributed to Pharo 10 development in any way and we missed your name, 
please send us an email and we will add you).


Enjoy!


The Pharo Team


Discover Pharo: https://pharo.org/features


Try Pharo: http://pharo.org/download


Learn Pharo: http://pharo.org/documentation

-- 
Sent with Vivaldi Mail. Download Vivaldi for free from vivaldi.com.

[Pharo-users] Re: [ANN] Pharo Launcher 3.0 released!

2022-04-14 Thread Steffen Märcker

Hi Christophe,


thanks, that's great news! [Downloading ...]


Cheers! Steffen

Christophe Demarey wrote on Thursday, 14 April 2022, 14:26:26 (+02:00):


Hi all,

Pharo Launcher 3.0 has just been released! It is available from
http://pharo.org/download.
It is based on latest Spec2 and Pharo 10 image.
It now comes with a native Apple Silicon version, and the macOS packages are
notarized.
Big thanks to all contributors, including issue reports.

Here is the changelog:
Pharo Launcher v3.0

New features:

Full rewrite of the UI using Spec 2 and Pharo 10
Pharo Launcher can now run natively on Apple Silicon #544
Pharo Launcher is now signed with a Mac OS developer account and notarized
#529
Windows "portable" package (only an archive without installer) #534

Improvements:

While importing image .sources and .version are not moved #543
recreate the image does not re apply the script if one is provided #536
(thanks to @hogoww)
Save keybind for scripts #545 (thanks to @hogoww and @Inao0)

Bug fixes:

GitHub releases artefacts download were not always working #535
"Basic launch" does not do its job in Pharo 10 #540
Proxy settings not applied through settings screen #541

Regards,
The Pharo team.



[Pharo-users] Re: A question about #beginsWith: and #endsWith:

2022-04-26 Thread Steffen Märcker

I can only fully agree with Richard's explanation. Also, not having the empty
string as a prefix/suffix of every string breaks the free monoid that
describes concatenation of strings and forms the basis of regular
expressions. In effect, this will lead to subtle inconsistencies. Hence, I
consider this a bug rather than a feature.


Kind regards,
Steffen





Richard O'Keefe wrote on Saturday, 23 April 2022, 02:37:48 (+02:00):


Dan Ingalls is of course a big NAME in the
world of Smalltalk, but the stated reason
for changing the behaviour of #beginsWith:
and #endsWith: makes no sense.




We have many ways to define a partial order
on strings.
x <= y iff y beginsWith: x
x <= y iff y endsWith: x
x <= y iff y includesSubCollection: x
x <= y iff y includesSubSequence: x
These things are supposed to obey laws:
if a beginsWith: b , c
then a beginsWith: b
if a endsWith: b , c
then a endsWith: c
if a includesSubCollection: b , c
then a includesSubCollection: b
and a includesSubCollection: c
if a includesSubSequence: b , c
then a includesSubSequence: b
and a includesSubSequence: c.


We also expect the usual rules of equality
to hold.  So
(1) a beginsWith: a
(2) a = '' , a
(3) THEREFORE a beginsWith: ''


(1) a endsWith: a
(2) a = a , ''
(3) THEREFORE a endsWith: ''


(1) a includesSubCollection: a
(2) a = '' , a
(3) THEREFORE a includesSubCollect: ''


Reasoning about strings (as values) gets
enormously more complicated if the operations
do not follow simple sensible rules, and
having '' be the least string under these
orderings and having '' be a prefix and a
suffix of any string is essential if the
rules are going to be simple and coherent.


'' is to strings (and more generally
empty sequences are to sequences) pretty
much what 0 is to integers.   Denying that
'abc' beginsWith: '' is *structurally*
just like denying that 0 <= 123.


Now as it happens I *can* see a use for
versions of #beginsWith: and #endsWith:
that diverge from the ones we have, but
*this* is not where they need to diverge.
a beginsWithGraphemesOf: b
iff a asGraphemes = b asGraphemes , c asGraphemes
for some c, where s asGraphemes returns a
sequence of strings each of which is a maximally
long grapheme cluster, such that concatenating
s asGraphemes recovers s.  That is,
#beginsWithGraphemesOf: and
#endsWithGraphemesOf: would respect the
Unicode Text Segmentation boundaries.
But s beginsWithGraphemesOf: ''
would still need to be true.


The thing is, in practice you often DON'T
KNOW whether a potential affix is empty or
not.  Here are some of my test cases.


testTestData
  "Ensure that the sample string has no duplicates."
  [(Set withAll: string) size = stringSize] assert.

testBeginsWith
  "Test that every prefix of the sample IS a prefix of it."
  0 to: stringSize do: [:n |
    [string beginsWith: (string copyFrom: 1 to: n)] assert].

testEndsWith
  "Test that every suffix of the sample IS a suffix of it."
  0 to: stringSize do: [:n |
    [string endsWith: (string copyFrom: stringSize - n + 1 to: stringSize)] assert].

testIndexOfSubCollectionAtBeginning
  "Test that every prefix of 'abcd' is found at the beginning."
  0 to: stringSize do: [:n | |s i t|
    s := string copyFrom: 1 to: n.
    i := string indexOfSubCollection: s startingAt: 1.
    [1 = i] assert.
    t := string copyFrom: i to: i - 1 + n.
    [t = s] assert].

testIndexOfSubCollectionAtEnd
  "Test that every proper suffix of the sample is found at the end."
  1 to: stringSize do: [:n | |s i t|
    s := string copyFrom: stringSize - n + 1 to: stringSize.
    i := string indexOfSubCollection: s startingAt: 1.
    [stringSize + 1 - n = i] assert.
    t := string copyFrom: i to: i - 1 + n.
    [t = s] assert].

testLastIndexOfSubCollectionAtBeginning
  "Test that every proper prefix of the sample is found at the beginning."
  1 to: stringSize do: [:n | |s i t|
    s := string copyFrom: 1 to: n.
    i := string lastIndexOfSubCollection: s startingAt: stringSize.
    [1 = i] assert.
    t := string copyFrom: i to: i - 1 + n.
    [t = s] assert].

testLastIndexOfSubCollectionAtEnd
  "Test that every suffix of the sample is found at the end."
  0 to: stringSize do: [:n | |s i t|
    s := string copyFrom: stringSize - n + 1 to: stringSize.
    i := string lastIndexOfSubCollection: s startingAt: stringSize.
    [stringSize + 1 - n = i] assert.
    t := string copyFrom: i to: i - 1 + n.
    [t = s] assert].

testOccurrencesOfEmptyCollection
  "Test that the empty string occurs at the beginning,
   at the end, and in between every pair of adjacent characters."
  [(string occurrencesOfSubCollection: '') = (stringSize + 1)] assert.

testOccurrencesOfUniqueParts
  "Test that unique parts occur as many times as they should."
  |repeated|
  repeated := string , string , string.
  1 to: stringSiz

[Pharo-users] Re: A question about #beginsWith: and #endsWith:

2022-04-26 Thread Steffen Märcker
Maybe I should add an example. Consider the following natural proposition:


> ((A, B) beginsWith: A) and: [(A, B) endsWith: B]


This breaks for A := '' or B := ''.

Best, Steffen



Kasper Osterbye wrote on Tuesday, 26 April 2022 14:50:51 (+02:00):


I have now raised it as an issue on the issue tracker


Issue #11165: https://github.com/pharo-project/pharo/issues/11165




Best,


Kasper



On 26 Apr 2022, at 12.11, Steffen Märcker wrote:


I can only fully agree with Richard's explanation. Also, not treating the empty
string as a prefix and suffix of every string breaks the free monoid that describes
concatenation of strings and forms the basis of regular expressions. In effect,
this will lead to subtle inconsistencies. Hence, I consider this a bug rather
than a feature.


Kind regards,
Steffen





[Pharo-users] Re: A question about #beginsWith: and #endsWith:

2022-04-29 Thread Steffen Märcker

Hi Kasper,


I've thought about that approach too. But then I asked myself whether it is
more likely that there is code that relies on this bug than code where the bug
went unnoticed and is therefore broken. What do the others think about this
matter and about the fix in Squeak?


Best, Steffen







Kasper Osterbye wrote on Thursday, 28 April 2022 19:03:07 (+02:00):


Kasper Osterbye wrote on Tuesday, 26 April 2022 14:50:51 (+02:00):


I have now raised it as an issue on the issue tracker


Issue #11165: https://github.com/pharo-project/pharo/issues/11165


If I may suggest a solution it will be to:
add two new methods - prefixedBy:  and suffixedBy: to handle the empty
prefix/suffix correctly
add comments to beginsWith:  and endsWith:  referring to the two new
methods


The problem is that there is client code which depends on the (wrong)
implementation of the beginsWith: and endsWith: methods.
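
A hedged sketch of how the proposed selectors might look as
SequenceableCollection extensions. The names #prefixedBy: and #suffixedBy: come
from the suggestion above; the bodies are only one possible reading, in which
the empty sequence is a prefix and suffix of every sequence:

```smalltalk
SequenceableCollection >> prefixedBy: aSequence
  "Answer true if the receiver starts with aSequence.
   The empty sequence is a prefix of every sequence. (Sketch only.)"
  aSequence size > self size ifTrue: [^false].
  1 to: aSequence size do: [:i |
    (self at: i) = (aSequence at: i) ifFalse: [^false]].
  ^true

SequenceableCollection >> suffixedBy: aSequence
  "Answer true if the receiver ends with aSequence.
   The empty sequence is a suffix of every sequence. (Sketch only.)"
  | offset |
  aSequence size > self size ifTrue: [^false].
  offset := self size - aSequence size.
  1 to: aSequence size do: [:i |
    (self at: offset + i) = (aSequence at: i) ifFalse: [^false]].
  ^true
```

With these, 'abc' prefixedBy: '' and 'abc' suffixedBy: '' would both answer
true, restoring the monoid laws while leaving the existing selectors untouched.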



[Pharo-users] Re: Recovering old images in mac

2022-10-30 Thread Steffen Märcker
Is there a way to save / convert such an image to 64-bit?

Best, Steffen

