From: "Jan-Christoph Borchardt", Date: 15/05/2010 06:17:

> There has been quite a discussion on Ubuntu’s usability mailing list
> (Ayatana) about making single click standard or not:
> https://lists.launchpad.net/ayatana/msg01863.html (There are some
> mockups as well.)

Urgh.  As someone who uses a laptop with one of those little finger pads for a 
mouse, I find myself occasionally mis-clicking or dragging when I shouldn't.  
Wherever possible, I prefer cut/copy and paste over dragging, simply because 
it's safer (accidental dropping is another problem with those things).  This 
business of single-click to do stuff gives me the jitters.  I'd much rather 
select a file, then click the Open button (or more usually, select a file and 
hit Enter on the keyboard).  I'll quite often select a file as a place-marker 
while I consider whether it is in fact the one I wanted - at that point I have 
two choices: double-click, or click the dialog button.


On the toolkit side, what I would like to see is more cooking of the input 
events - similar to how terminals and X itself give access to everything from 
raw keystrokes, through the processed/mapped input events, down to the final 
activation of a widget.

In this regard, how about a consistent mechanism across all GTK widgets to 
intelligently process keyboard and mouse events, rather like the three-stage 
cooking that goes on in command-line terminals:

1- the existing signals for the original raw mouse and keyboard events.  Each 
stage's default handler then generates the signals for the next stage (unless 
the signal emission has been stopped).  This is essentially what's presently 
implemented in GTK.

2- recognition of the shift/click/drag operation the user may be performing, 
without the spurious extra clicks and Shift key-downs, plus conversion of raw 
key codes and meta-key combinations into symbolic names.  This stage could 
also represent a letter key both as the raw character (e.g. 'A' regardless of 
shifting) and as the letter with shifting in effect (i.e. 'A' with Shift/Caps 
pressed and 'a' without).  These issues have been raised on the lists a few 
times, and having both available allows, for example, the developer to latch 
onto the raw character and test Shift as a flag if they wish, or to latch onto 
the shifted character when a key has two different functions depending on 
shifting.

3- looking up the stage-two emissions in a list of symbolic mappings, and 
re-emitting the resulting "action" as a final "fully cooked input" signal.  For 
a simple button, the stage-two "left click" emission would be mapped to a 
stage-three "activate" signal, and a simple handler on "activate" would then 
emit the button's usual "clicked" signal (button depression would still be 
picked up from stage one, as it is now).  This final stage would remove the 
need for a lot of the existing mouse/keyboard processing code replicated 
and/or re-implemented in every single widget.  Supporting per-widget 
meta-states would allow regions or states of the widget to be incorporated 
into these final-stage mappings, and multi-stage input sequences to handle 
some of the odder input devices.
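To make the three stages concrete, here is a rough sketch in Python of the 
idea.  To be clear, none of these names are real GTK API - RawEvent, 
stage2_cook, stage3_cook and the mapping table are all invented for 
illustration; the only point is that each stage's default handler feeds the 
next, and that stage two reports a letter key both raw and shifted:

```python
class RawEvent:
    """Stage 1: a raw input event, as the toolkit delivers it today."""
    def __init__(self, kind, detail, shift=False):
        self.kind = kind      # e.g. "key-press" or "button-press"
        self.detail = detail  # e.g. a key code or mouse button number
        self.shift = shift    # whether Shift was held

def stage2_cook(event):
    """Stage 2: turn raw events into symbolic keys/gestures.

    A letter key is reported both as the raw character (shift ignored)
    and as the shifted character, so a handler can latch onto whichever
    representation suits it."""
    if event.kind == "key-press":
        raw = chr(event.detail).upper()
        shifted = raw if event.shift else raw.lower()
        return {"type": "key", "raw": raw, "shifted": shifted,
                "shift": event.shift}
    if event.kind == "button-press" and event.detail == 1:
        return {"type": "gesture", "name": "left-click"}
    return None

# Stage 3: a (per-widget) table mapping symbolic input to an action name.
# For a simple button, "left-click" maps to "activate".
BUTTON_MAPPINGS = {"left-click": "activate"}

def stage3_cook(cooked):
    """Look up a stage-2 emission and re-emit the fully-cooked action."""
    if cooked and cooked.get("type") == "gesture":
        return BUTTON_MAPPINGS.get(cooked["name"])
    return None

# A left click on a simple button cooks down to its "activate" action:
action = stage3_cook(stage2_cook(RawEvent("button-press", 1)))
```

In a real toolkit each stage would of course be a signal with a default 
handler, so any stage could still be intercepted or stopped, just as raw 
events can be today.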

A widget, instead of implementing its own keyboard/mouse mapping code, could 
in most cases simply register a set of actions and their corresponding default 
mappings with the third-stage processing, and let the default (or theme) 
mappings match those actions to their input sequences.  That would make it 
possible for something like Ctrl-X followed by 'E' to be the combination that 
activates a button in some weird input environment where it makes sense (for 
example, a magic button on a custom keyboard).
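The registration side might look something like the following sketch - again 
purely hypothetical names, not GTK API.  A widget registers an action under a 
default sequence, a theme or odd input environment rebinds it, and the 
third-stage lookup accumulates symbols so multi-stage sequences like 
Ctrl-X, 'E' work:

```python
class ActionRegistry:
    """Hypothetical third-stage mapping table for one widget."""
    def __init__(self):
        self.mappings = {}   # input sequence (tuple of symbols) -> action
        self.pending = ()    # partially matched multi-symbol sequence

    def register(self, action, default_sequence):
        """Widget side: register an action with its default mapping."""
        self.mappings[tuple(default_sequence)] = action

    def remap(self, action, sequence):
        """Theme/environment side: rebind an action to a new sequence."""
        self.mappings = {s: a for s, a in self.mappings.items()
                         if a != action}
        self.mappings[tuple(sequence)] = action

    def feed(self, symbol):
        """Feed one stage-2 symbol; return an action name when a full
        sequence matches, else None."""
        candidate = self.pending + (symbol,)
        if candidate in self.mappings:
            self.pending = ()
            return self.mappings[candidate]
        # Keep the prefix only while some mapping could still match it.
        if any(seq[:len(candidate)] == candidate for seq in self.mappings):
            self.pending = candidate
        else:
            self.pending = ()
        return None

registry = ActionRegistry()
registry.register("activate", ["left-click"])   # button's default mapping
registry.remap("activate", ["Ctrl-X", "E"])     # weird-environment rebind
```

With that rebinding in place, feeding "Ctrl-X" matches nothing yet, and the 
following "E" completes the sequence and yields the "activate" action - the 
widget itself never had to know how its action was bound.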


Fredderic
_______________________________________________
usability mailing list
usability@gnome.org
http://mail.gnome.org/mailman/listinfo/usability
