Hi Andy, hi list,

I've had the weekend to think about the proposal made by Andy Goryachev to make some of the APIs surrounding InputMap / Behaviors public.

I'm having some nagging doubts about whether that proposal is really the way forward, and I'd like to explore a different approach which leverages more of FX's existing event infrastructure.

First, let me repeat an earlier observation: I think event handlers installed by users should always have priority over handlers installed by FX behaviors. The reasoning here is that the user (the developer in this case) should be in control.  Just like CSS backs off when the user changes values directly, so should default behaviors.  For this proposal to have merit, this needs to be addressed.

One thing that I think Andy's proposal addresses very nicely is the need for an indirection between low level key and mouse events and their associated behavior. Depending on the platform, or even the platform configuration, certain keys and mouse events will result in certain high level actions; which keys and mouse events those are is platform specific.  A user wishing to change this behavior should not need to be aware of how these key and mouse events are mapped to a behavior.

I however think this can be addressed in a different way, and I will use the Button control to illustrate this, as it is already doing something similar out of the box.

The Button control will trigger itself when a specific combination of key/mouse events occurs.  In theory, a user could install event handlers to check if the mouse was released over the button, and then perform some kind of action that the button is supposed to perform.  In practice however, this is tricky, and would require mimicking the whole process: ensuring the mouse was also first **pressed** on that button, that it wasn't moved outside the clickable area, etc.

Obviously, expecting a user to install the necessary event handlers to detect button presses based on key and mouse events is a ridiculous expectation, and so Button offers a much simpler alternative: the ActionEvent.  This is a high level event that encapsulates several other events and translates them into a new concept.  It is triggered when all the criteria to fire the button have been met, without the user needing to be aware of what those criteria are.

I think this strategy of translating low level events into high level events is a really good one, and suitable for reuse in other contexts.

One such purpose is converting platform dependent events into platform independent ones. Instead of needing to know the exact key press that fires a Button, there can be a dedicated event that fires a button.  Such a specific event can be filtered and listened for as usual; it can be redirected, blocked, and triggered by anyone for any reason.

For a Button, the sequence of events is normally this:

- User presses SPACE, resulting in a KeyEvent
- Behavior receives KeyEvent and arms the button
- User releases SPACE, resulting in a KeyEvent
- Behavior receives KeyEvent, disarms and fires the button
- Control fires an ActionEvent

What I'm proposing is to change it to:

- User presses SPACE, resulting in a KeyEvent
- Behavior receives KeyEvent, and sends out ButtonEvent.BUTTON_ARM
- Control receives BUTTON_ARM, and arms the button
- User releases SPACE, resulting in a KeyEvent
- Behavior receives KeyEvent and sends out ButtonEvent.BUTTON_FIRE
- Control receives BUTTON_FIRE, disarms the button and fires an ActionEvent
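The proposed flow can be sketched without any JavaFX dependencies.  Everything below (MiniEvent, MiniControl, the handler wiring) is hypothetical scaffolding standing in for Node, EventType and EventHandler; it is only meant to show the indirection, not any actual API:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

public class ButtonIndirectionSketch {

    // A trivial stand-in for JavaFX's EventType hierarchy (hypothetical).
    enum MiniEvent { KEY_PRESSED, KEY_RELEASED, BUTTON_ARM, BUTTON_FIRE, ACTION }

    // A trivial stand-in for a control that dispatches events to handlers.
    static class MiniControl {
        private final Map<MiniEvent, List<Consumer<MiniEvent>>> handlers = new HashMap<>();
        boolean armed;

        void addHandler(MiniEvent type, Consumer<MiniEvent> handler) {
            handlers.computeIfAbsent(type, k -> new ArrayList<>()).add(handler);
        }

        void fireEvent(MiniEvent type) {
            for (Consumer<MiniEvent> handler : handlers.getOrDefault(type, List.of())) {
                handler.accept(type);
            }
        }
    }

    public static void main(String[] args) {
        MiniControl button = new MiniControl();

        // The behavior only *translates*: platform dependent key events
        // become platform independent button events.
        button.addHandler(MiniEvent.KEY_PRESSED,  e -> button.fireEvent(MiniEvent.BUTTON_ARM));
        button.addHandler(MiniEvent.KEY_RELEASED, e -> button.fireEvent(MiniEvent.BUTTON_FIRE));

        // The control *acts*: arming, disarming, and firing the action.
        button.addHandler(MiniEvent.BUTTON_ARM,  e -> button.armed = true);
        button.addHandler(MiniEvent.BUTTON_FIRE, e -> {
            button.armed = false;
            button.fireEvent(MiniEvent.ACTION);
        });

        button.addHandler(MiniEvent.ACTION, e -> System.out.println("action fired"));

        button.fireEvent(MiniEvent.KEY_PRESSED);   // user presses SPACE
        System.out.println("armed=" + button.armed);
        button.fireEvent(MiniEvent.KEY_RELEASED);  // user releases SPACE
        System.out.println("armed=" + button.armed);
    }
}
```

A user could intercept the translated BUTTON_FIRE step here without knowing anything about SPACE; that is the whole point of the extra hop.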

The above basically adds an event based indirection.  Normally it is KeyEvent -> ActionEvent, but now it would be KeyEvent -> ButtonEvent -> ActionEvent. The user now has the option of hooking into the mechanics of a Button at several different levels:

- The "raw" level, listening for raw key/mouse events; useful for creating custom behavior that can be platform specific
- The "interpreted" level, listening for things like ARM, DISARM, FIRE, SELECT_NEXT_WORD, SELECT_ALL, etc.; these are platform independent
- The "application" level, primarily action type events

There is sufficient precedent for such a system.  Action events are a good example, but another example is the DnD events, which are created by looking at raw mouse events, effectively interpreting magic mouse movements and presses into more useful DnD events.

The event based indirection here is very similar to the FunctionTag indirection in Andy's proposal.  Instead of FunctionTags, there would be new events defined:

    public class ButtonEvent extends Event {
        public static final EventType<ButtonEvent> ANY = ... ;
        public static final EventType<ButtonEvent> BUTTON_ARM = ... ;
        public static final EventType<ButtonEvent> BUTTON_DISARM = ... ;
        public static final EventType<ButtonEvent> BUTTON_FIRE = ... ;
    }

    public class TextFieldEvent extends Event {
        public static final EventType<TextFieldEvent> ANY = ... ;
        public static final EventType<TextFieldEvent> SELECT_ALL = ... ;
        public static final EventType<TextFieldEvent> SELECT_NEXT_WORD = ... ;
    }

These events would be publicly accessible static constants, just as FunctionTags would be.

The internal Behavior classes would shift from translating + executing a behavior to only translating; the Control would actually execute the behavior.

This also simplifies the role of Behaviors, and maybe even clarifies it: a Behavior's purpose is to translate platform dependent events into platform independent ones, not to act on those events.  Acting upon the events would be squarely the domain of the control.  As this pinpoints better what a Behavior's purpose is, and as it simplifies their implementation (event translation only), it may be the path that leads to them becoming public as well.

---

I've used a similar mechanism as described above in one of my FX Applications; key bindings are defined in a configuration file:

    BACKSPACE: navigateBack
    LEFT: player.position:subtract(10000)
    RIGHT: player.position:add(10000)
    P: player.paused:toggle
    SPACE: player.paused:toggle
    I:
        - overlayVisible:toggle
        - showInfo:trigger

When the right key is pressed (and it is not consumed by anything), it is translated to a new higher level event by a generic key binding system.  This event is fired to the same target (the focused node).  If the high level event is consumed, the action was successfully triggered; if not, and the key has more than one mapping, another event is sent out that may or may not get consumed.  If none of the high level events were consumed, the low level event that triggered them is allowed to propagate as usual.
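That dispatch loop can be sketched in a few lines of plain Java.  All names here are hypothetical (this is not the actual implementation); a Predicate<String> plays the role of an event handler that returns true when it consumes the high level event:

```java
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

public class KeyBindingSketch {

    // Key -> ordered list of candidate high level actions, mirroring the
    // configuration file above (the "I" key has two mappings).
    static final Map<String, List<String>> BINDINGS = Map.of(
        "BACKSPACE", List.of("navigateBack"),
        "I", List.of("overlayVisible:toggle", "showInfo:trigger")
    );

    // Fires each mapped high level action at the focused node in order,
    // stopping at the first one that is consumed.  Returns false when no
    // mapping was consumed, in which case the caller lets the raw key
    // event propagate as usual.
    static boolean dispatch(String key, Predicate<String> focusedNode) {
        for (String action : BINDINGS.getOrDefault(key, List.of())) {
            if (focusedNode.test(action)) {
                return true;  // consumed: stop trying further mappings
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // A node that only understands "showInfo:trigger".
        Predicate<String> node = action -> action.equals("showInfo:trigger");

        System.out.println("I handled: " + dispatch("I", node));
        System.out.println("BACKSPACE handled: " + dispatch("BACKSPACE", node));
    }
}
```

Pressing "I" falls through the unconsumed first mapping and is handled by the second; BACKSPACE is handled by nothing and would propagate as a raw key event.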

The advantage of this system is obvious: the controls involved can keep the action that needs to be performed separate from the exact key (or something else) that may trigger it.  For "navigateBack", for example, it is also an option to use the mouse; controls need not be aware of this at all.  These events also bubble up; a nested control that has several states may consume "navigateBack" until it has reached its local "top level", and only then let it bubble up for one of its parents to act on.
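The bubbling part can be illustrated with a minimal parent chain.  Again, all names are hypothetical; a handler consumes the event (stopping propagation) only while it still has local state to pop:

```java
public class BubblingSketch {

    static class Node {
        final String name;
        final Node parent;
        int depth;  // local navigation state of this control

        Node(String name, Node parent) {
            this.name = name;
            this.parent = parent;
        }

        // Bubbles a "navigateBack" event from this node up through its
        // parents; returns the name of the node that consumed it, or null.
        String fireNavigateBack() {
            for (Node n = this; n != null; n = n.parent) {
                if (n.handleNavigateBack()) {
                    return n.name;
                }
            }
            return null;
        }

        boolean handleNavigateBack() {
            if (depth > 0) {
                depth--;       // pop one local level and consume the event
                return true;
            }
            return false;      // at local top level: let it bubble up
        }
    }

    public static void main(String[] args) {
        Node root = new Node("root", null);
        root.depth = 1;                    // the parent can act once
        Node nested = new Node("nested", root);
        nested.depth = 2;                  // nested control: two local levels

        System.out.println(nested.fireNavigateBack());
        System.out.println(nested.fireNavigateBack());
        System.out.println(nested.fireNavigateBack());
    }
}
```

The nested control consumes the first two events while unwinding its own state; only the third reaches the parent.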

--John
