Hi Michael,

Thanks a lot for this early feedback.

I think in light of it I will rework the proposal(s) to include semantic events right from the beginning.

More inline comments below.

--John

On 07/11/2023 08:09, Michael Strauß wrote:
Hi John,

I like that you clearly define the terms Control, Skin and Behavior,
as well as their roles within the control architecture.

However, I don't see how your proposal scales to non-trivial controls,
and I agree with Andy that the Button example doesn't help (because a
Button lacks substructure and provides only a single interaction).

I didn't go into great detail due to some time constraints, but the API should scale to any kind of control, as there is no requirement to make all interactions available immediately. Any interactions you do want to make available that don't bubble up to Control level automatically can be generated by the Skin with non-public events, or by letting events bubble up that can be distinguished by a tag on their source; the SpinnerSkin could, for example, let ActionEvents bubble up (although it doesn't work that way currently, and I wouldn't recommend using a public event type for this anyway).
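To make the "tag on the source" option concrete, here is a rough sketch; the record and the ids are illustrative stand-ins, not JavaFX's actual ActionEvent API:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of distinguishing bubbled events by their source. ActionEvent
// here is a simplified stand-in record, not javafx.event.ActionEvent.
class SourceTagSketch {
    record ActionEvent(String sourceId) { }

    static final List<String> handled = new ArrayList<>();

    // A control-level handler telling skin parts apart by a tag
    // (here, simply an id) on the event's source.
    static void onAction(ActionEvent e) {
        switch (e.sourceId()) {
            case "increment-button" -> handled.add("increment");
            case "decrement-button" -> handled.add("decrement");
            default -> handled.add("ignored");
        }
    }
}
```
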

I'm missing your previous idea of using the event system for
higher-level semantic events, because I think they're required to make
this work. Here's how I see these parts working together:

I've attempted to keep the proposal small, and I had the impression there was some resistance to the semantic event idea, so I've left that part out for now, although hopefully I've left enough room to apply it later.

1) A control is an opaque node in the scene graph, which defines the
API for a particular interactive element. It also defines the
interactions afforded by its implementation. For example, a Spinner
will usually consist of a text field and two buttons, but a skin might
choose to implement these components differently. The interactions
afforded by a control are exposed as semantic events:

     class SpinnerEvent extends Event {
         static final EventType<SpinnerEvent> ANY = new EventType<>(Event.ANY, "SPINNER");
         static final EventType<SpinnerEvent> COMMIT_TEXT = new EventType<>(ANY, "COMMIT_TEXT");
         static final EventType<SpinnerEvent> START_INCREMENT = new EventType<>(ANY, "START_INCREMENT");
         static final EventType<SpinnerEvent> STOP_INCREMENT = new EventType<>(ANY, "STOP_INCREMENT");
         static final EventType<SpinnerEvent> START_DECREMENT = new EventType<>(ANY, "START_DECREMENT");
         static final EventType<SpinnerEvent> STOP_DECREMENT = new EventType<>(ANY, "STOP_DECREMENT");

         SpinnerEvent(EventType<SpinnerEvent> type) { super(type); }
     }

I didn't examine the Spinner before, but I think these would be excellent events for a Spinner control.

2) Skins are responsible for generating semantic events, and sending
those events to the control. Since we don't need those events to have
a tunneling/bubbling behavior, we could have a flag on the event that
indicates a "direct event", one that is dispatched directly to its
target.
Performance tests could show whether this is truly needed; many events are fired already to get to this point, and firing another should not be prohibitive. There may also be an option to use a different root: instead of the Scene, the Control is used as the root, so dispatching and bubbling are limited to the Control and its substructure. We should consider this carefully, to be absolutely sure these events have no value outside the Control and its substructure.

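A rough sketch of what "Control as dispatch root" could mean, using simplified stand-ins for the JavaFX event types (names and shape are illustrative only, not the real dispatch machinery):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: capturing/bubbling confined to the control and its
// substructure by building the dispatch chain from the control,
// not from the Scene.
class DirectDispatchSketch {
    static final List<String> log = new ArrayList<>();

    // A node with a parent pointer and simple filter/handler hooks.
    static class Node {
        final String name;
        final Node parent;
        Node(String name, Node parent) { this.name = name; this.parent = parent; }
        void filter(String event)  { log.add("filter:"  + name); }  // capturing phase
        void handler(String event) { log.add("handler:" + name); }  // bubbling phase
    }

    // Builds the chain from `root` down to `target`, then fires:
    // filters top-down, handlers bottom-up. Passing the control as
    // `root` keeps the event inside the control's substructure.
    static void fire(Node root, Node target, String event) {
        List<Node> chain = new ArrayList<>();
        for (Node n = target; n != null; n = n.parent) {
            chain.add(0, n);
            if (n == root) break;
        }
        for (Node n : chain) n.filter(event);                                    // capture
        for (int i = chain.size() - 1; i >= 0; i--) chain.get(i).handler(event); // bubble
    }

    public static void main(String[] args) {
        Node scene   = new Node("scene", null);
        Node spinner = new Node("spinner", scene);     // the Control
        Node button  = new Node("incButton", spinner); // part of the Skin's substructure
        fire(spinner, button, "START_INCREMENT");      // spinner is the root, not scene
        System.out.println(log);                       // scene never sees the event
    }
}
```
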
3) Behaviors listen for semantic events on the control, and convert
these events into state changes of the control. This part would
probably be quite similar to some of the things that have already been
proposed.

Yeah, the novelty here is that the Behavior would also be the one responding to the semantic events, and the Behavior would then call regular methods directly on the Control again (in my earlier idea, the Behavior also sent out semantic events, and the control was the one responding to them).
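For illustration, a minimal sketch of that arrangement; SpinnerModel and the event strings are stand-ins for the real Spinner control and SpinnerEvent types:

```java
// Sketch: the Behavior responds to semantic events and calls regular
// public methods on the control. SpinnerModel stands in for Spinner.
class BehaviorSketch {
    static class SpinnerModel {
        int value;
        void increment(int steps) { value += steps; }
        void decrement(int steps) { value -= steps; }
    }

    static class SpinnerBehavior {
        final SpinnerModel control;
        SpinnerBehavior(SpinnerModel control) { this.control = control; }

        // In the real proposal this would be an event handler registered
        // for SpinnerEvent types; strings keep the sketch self-contained.
        void onSemanticEvent(String type) {
            switch (type) {
                case "START_INCREMENT" -> control.increment(1);
                case "START_DECREMENT" -> control.decrement(1);
                default -> { /* COMMIT_TEXT etc. omitted */ }
            }
        }
    }
}
```
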

For comparison, my earlier idea had:

1. Controls define semantic events
2. Controls respond to their own semantic events (by calling their own public methods, like `fire` or `increment(int)`)
3. Behaviors take standard events (KeyEvent, MouseEvent) and translate them to semantic events

With your adjustment this becomes:

1. Controls define semantic events

2. Behaviors install event handlers on the control

They listen for standard events (KeyEvent, MouseEvent) and translate them to semantic events. This indirection can be exposed as a key mapping system, where semantic events are the "Functions".

They also listen to semantic events (generated by Skins, or generated by itself) and modify control state accordingly (by calling public methods like `fire` or `increment(int)`).

3. Skins handle the visuals and are allowed to generate semantic events

Skins are not allowed to have event handlers at the control level, nor are they allowed to manipulate the control beyond installing listeners. For Behaviors I'm enforcing such restrictions by using the `BehaviorContext` during installation, but Skins, being an older design, lack such enforcement for now. Skins are still allowed to do anything they want with their nested controls (including manipulating them and installing event handlers).
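The key mapping indirection from point 2 could be sketched roughly like this (the key codes and Function names are illustrative assumptions, not the proposal's actual API):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Sketch of a remappable table from input keys to semantic "Functions".
// Behaviors would install default mappings; users could remap them later.
class KeyMapSketch {
    enum Function { INCREMENT, DECREMENT, COMMIT_TEXT }

    private final Map<String, Function> mappings = new HashMap<>();

    void map(String keyCode, Function function) {
        mappings.put(keyCode, function);
    }

    // Translates a raw key press into a semantic function, if mapped;
    // unmapped keys yield an empty result and are left alone.
    Optional<Function> translate(String keyCode) {
        return Optional.ofNullable(mappings.get(keyCode));
    }
}
```
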


In this way, controls, skins, and behaviors would end up as loosely
coupled parts. In particular, I don't see the value in making
behaviors public API if they are so tightly coupled to skins that they
end up as being basically implementation details.

Andy:
Imagine a specific skin that has a Node that accepts user input.  A scroll
bar, a button, or a region with some function.  Unless this element is
proclaimed as must-have for any skin and codified via some new public API
(MySkin.getSomeElement()), it is specific to that particular skin and that
particular behavior.

I think that's a very important observation. A skin can't just be
anything it wants to be, it must be suitable for its control. So we
need a place where we define the API and the interactions afforded by
that control. In my opinion, this place is the Control. Its
functionality is exposed via properties and methods, and its
interactions are specified using semantic events.
Yeah, I think we're certainly in agreement here.
Now skins are free to be implemented in any imaginable way, provided
that they interact with the control using semantic events. This gives
us very straightforward restrictions:
* A skin can never add interactions that the control didn't specify.
* If additional interactions are required, the control must be
subclassed and the interactions must be specified by the control.
Additionally, the behavior must be extended to account for the
additional interactions.

Something I noticed is that there are Skins that have their own styleable CSS properties (TextInputControlSkin, for example, provides `-fx-text-fill`, `-fx-prompt-text-fill` and `-fx-display-caret`), but they are presented as properties of TextInputControl (the parent class of TextArea and TextField) in the CSS documentation. The CSS system allows for this, and will find these properties as if they belong to the control. I have the feeling this shouldn't be done in this manner, especially when they're presented as being part of TextInputControl, when as soon as you reskin it those properties are lost.

That isn't counter to what you said above, I think, but it is a bit surprising.

--John




On Mon, Nov 6, 2023 at 4:50 AM John Hendrikx <john.hendr...@gmail.com> wrote:
As promised, a public Behavior API proposal.

Summary:

Introduce a new Behavior interface that can be set on a control to replace its 
current behavior. The new behavior can be fully custom or composed (or later 
subclassed) from a default behavior. Some default behaviors will be provided as 
part of this proposal, but not all.

See here: https://gist.github.com/hjohn/293f3b0ec98562547d49832a2ce56fe7

--John
