> Hi Steve,
>
> This is exactly the sort of design I love to see, and it's what I was strongly
> pushing for. This sort of an approach of designing APIs after use cases on a
> one-by-one basis is what GNOME design is about, and it makes me a lot more
> comfortable than a generic request/response system.
>
> I'm not a UX designer, so I can't speak to the flow of the system you've 
> designed,
> whether it fits in with our plans, or if we even want such a widget to display
> or provide credentials in the first place. It's simply an example, of course.
>
> I feel like we can do this on a case-by-case basis, by looking at places where
> security is poor or limited (Portals, Intents, and the ability to choose user
> content by delegating to a system component is the most often quoted example),
> and designing a new solution that takes security measures into account.
>
> Things like screen sharing are more generic and difficult. Imagine I have a
> program like OBS: the goal is to capture my desktop and stream it to a service
> like Twitch. This is a perfectly acceptable use case for screen sharing, but 
> we
> don't want the user to have their screen shared when they don't want it.

Looking back at the three proposed approaches, and speculating about what UX 
each would provide (a rough code sketch of the three grant paths follows the 
list):
1) OBS is allowed to capture the screen by default in the system config -- a 
strong trust implication is involved. Note that I proposed not to give 
privileges to apps that don't have a GUI; however, separate security 
indicators may be more adequate (see below). The user needs to link the GUI 
element or security indicator to the app, and to be reasonably aware of the 
presence of such an indicator. This is very much the philosophy of Swirl 
(isr.uci.edu/projects/swirl/); however, security isn't retroactive: the user 
can be compromised before they notice the screen is being captured. The user 
can remove OBS from the list if they're unhappy or feel insecure.

2) OBS is rewritten to implement some new interaction techniques with the DE. 
It can contain a DE-managed widget or other method of interaction that 
conveys the user's intent and desire for OBS to capture the screen, at no 
extra interaction cost. The user is then more in control than in the 
default-list case because the permission is present only when needed. Neatly 
enough, security indicators (those that allow revoking the permission) appear 
at the same time the user performs the transparent granting action, so the 
user can perceive a relationship between the two. This approach is likely to 
be better for some privileges (but maybe not for clipboard managers: how 
would you tell your OS that you want a clipboard manager app to have complete 
access to the clipboard!?)

3) a) OBS doesn't want to or can't implement the technique above, so it 
cannot access the screen capture capability directly. It needs to send a 
request to some system component that prompts the user in one way or another 
for permission (I'm suggesting my UI because it's unlikely that a passive 
notification would get the job done, especially for purpose-built apps: the 
user most likely started OBS to record the screen *now*). Annoying for the 
user because they have one more interaction step, but they might tolerate it 
*if it's rare enough*
   b) a legacy OBS doesn't have code to ask for permission. The DE may 
include some per-app or per-window security-controls mechanism to give OBS 
some capability, and then OBS's legacy code would automagically work. I 
wouldn't bother with that myself: I don't think unmaintained apps should be 
dragged along when such massive changes to how userland works are required. 
Really, really annoying for the user: they have to rationalise the way they 
perform the primary task, identify the relationship between the app's initial 
failure and the security mechanisms, switch contexts to the security control, 
and use it to fix the app. This is called an interaction breakdown and is 
really the nightmare scenario for me.
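
To make the contrast between these paths concrete, here's a rough, purely 
hypothetical sketch in Python (CaptureBroker, GrantPath and the app ids are 
invented for illustration, not an API any DE actually ships): approach 1 is a 
static allow list in the system config, approach 2 a one-shot grant produced 
by the user's interaction with a DE-managed widget, 3a an explicit prompt by 
a system component, and 3b the denial a legacy app would run into.

# Hypothetical sketch only: CaptureBroker, GrantPath and the app ids below
# are invented for illustration; they are not an API any DE actually ships.
from dataclasses import dataclass, field
from enum import Enum, auto


class GrantPath(Enum):
    DEFAULT_LIST = auto()  # approach 1: trusted by default in the system config
    USER_INTENT = auto()   # approach 2: grant conveyed by a DE-managed widget
    PROMPT = auto()        # approach 3a: explicit prompt by a system component
    DENIED = auto()        # approach 3b: legacy app, nothing granted


@dataclass
class CaptureBroker:
    # approach 1: apps allowed by default in the system configuration
    default_allowed: set = field(default_factory=lambda: {"obs"})
    # approach 2: apps whose DE-managed capture widget the user just activated
    intent_tokens: set = field(default_factory=set)

    def record_user_intent(self, app_id: str) -> None:
        # called by the DE when the user interacts with the capture widget
        self.intent_tokens.add(app_id)

    def request_capture(self, app_id: str, can_prompt: bool) -> GrantPath:
        if app_id in self.default_allowed:
            return GrantPath.DEFAULT_LIST
        if app_id in self.intent_tokens:
            # one-shot grant tied to the user's transparent granting action
            self.intent_tokens.discard(app_id)
            return GrantPath.USER_INTENT
        if can_prompt and self._prompt_user(app_id):
            return GrantPath.PROMPT
        return GrantPath.DENIED

    def _prompt_user(self, app_id: str) -> bool:
        # stand-in for the system component that asks the user (approach 3a)
        answer = input("Allow %s to capture the screen? [y/N] " % app_id)
        return answer.strip().lower() == "y"


if __name__ == "__main__":
    broker = CaptureBroker()
    broker.record_user_intent("simple-screencaster")
    print(broker.request_capture("obs", can_prompt=False))          # approach 1
    print(broker.request_capture("simple-screencaster", can_prompt=False))  # 2
    print(broker.request_capture("legacy-tool", can_prompt=False))  # 3b: DENIED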



> A design here could be to have an indicator available to the user when an
> application is capturing the desktop, and when the user clicks it, they can 
> see
> all the applications currently recording and have the ability to stop the
> recordings, flat out kill the app, or go to a more detailed settings panel to
> see exactly what the applications are capturing.
>
> While this design may not be perfect, it addresses what I think are the key
> frustrations with SELinux: it gives transparency into what the application is
> doing (the user will notice a red icon to mean their screen is being 
> recorded),
> why it's doing it (well, if they're running OBS, then they want to stream this
> game), and gives them the ability to set their own policy.

Indicators are probably the way to go (more so than taking permissions away 
when the app "hides"). I'm not sure yet what the right design for indicators 
is, but I already think it's better to link an indicator to a GUI area when 
possible: if the app has a window, the indicator should be located around 
that window; if it's in a panel, the indicator should sit next to it; if it's 
in the notification area, likewise; and so on.
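
To illustrate what I mean by linking the indicator to the app's GUI area, 
here's a toy sketch (Surface, SurfaceKind and indicator_position are made-up 
names, not shell APIs); the only point is that placement follows wherever the 
app's UI already lives, with a global fallback for headless processes.

# Made-up names (Surface, SurfaceKind, indicator_position), not shell APIs.
from dataclasses import dataclass
from enum import Enum, auto


class SurfaceKind(Enum):
    WINDOW = auto()
    PANEL_ITEM = auto()
    NOTIFICATION_AREA = auto()
    NONE = auto()          # background process with no visible UI


@dataclass
class Surface:
    kind: SurfaceKind
    x: int = 0
    y: int = 0
    width: int = 0


def indicator_position(surface: Surface) -> tuple:
    # returns (placement, x, y) for a capture indicator, anchored to the GUI
    # element the user already associates with the app
    if surface.kind is SurfaceKind.WINDOW:
        # draw around the window, e.g. at the top-right corner of its frame
        return ("window-corner", surface.x + surface.width, surface.y)
    if surface.kind is SurfaceKind.PANEL_ITEM:
        return ("next-to-panel-item", surface.x + surface.width, surface.y)
    if surface.kind is SurfaceKind.NOTIFICATION_AREA:
        return ("notification-area", surface.x, surface.y)
    # no GUI at all: fall back to a global, system-level security indicator
    return ("system-indicator", 0, 0)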

The problem is that security is not important enough to warrant destroying 
and cluttering the whole GUI with indicators. So I guess there's a lot of 
thinking to put into how such indicators can also play a useful role in 
process management for users, or into how to make them discreet yet easy to 
find.

One tiny thing though: I don't think we should ever assume that the OS or user 
knows
*why* an app is doing something just because they know it's doing it. We should
actually communicate the idea that if the user doesn't know why, they should 
prevent
the app from doing it right away, and that the OS will never be able to tell the
why part itself.


> At this point, I'm reminded of tef's post, asking, "What's your threat 
> model?" [0]:
>
> Who is attacking? How are they attacking? Will we know when somebody's 
> attacking?
> Will the user know when somebody's attacking? Can we know a legitimate action
> compared to an illegitimate one? Can the user know a legitimate action 
> compared
> to an illegitimate one?
>
> How much of a choice should the user be making, and how much of a choice 
> should
> we make for the user? What choices can we, or the user make? What information 
> do
> we expose to the user in the case of a questionable action?
>
> How can the user stop a perceived attack? Should they stop a perceived attack?
> What can go wrong if the user stops a perceived attack that is a legitimate 
> action?
>
> We need to be asking those questions, and more, rather than talk about 
> GtkPasswordForms
> and the X,Y positions of widgets. Those are just technical implementation 
> details
> of how to implement a more secure design. We can crank those out in a day. The
> tough part is the thinking towards making a more usable, transparent, secure
> system in general.
>
> [0] http://programmingisterrible.com/post/67851666020/whats-your-threat-model

http://mupuf.org/blog/2014/03/18/managing-auth-ui-in-linux/#4-threat-model

Summary:
- who: wildly irrelevant
- how: remotely, by getting you to use their app or process their data 
(file/web/etc)
- will you know? nope
- legitimate or not? can't tell at all right now; in the future we might 
sometimes be able to (using permissions/indicators)
- who chooses: very context-dependent. You must never prevent the user from 
achieving their primary task, though
- what can we choose: we can decide which apps are likely to a) get 
compromised and b) be malicious, and we can adjust default policy and user 
decision support accordingly (rough sketch after the list)
- what info to expose: see my article
- how to stop an attack: right now, you can't; later, by revoking permissions
- what can go wrong: interaction breakdown
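
As a purely illustrative sketch of what "adjusting default policy" per app 
class could look like (the categories and rules below are invented, not a 
proposal for an actual policy):

# Invented categories and rules, purely to illustrate adjusting the default
# policy per app class; not a proposal for an actual policy.
from enum import Enum, auto


class Default(Enum):
    ALLOW = auto()   # granted silently, indicator shown while in use
    ASK = auto()     # requires an intent-based grant or an explicit prompt
    DENY = auto()    # never granted by default


def default_policy(handles_untrusted_input: bool,
                   from_trusted_source: bool) -> Default:
    # apps that routinely process remote data (browsers, document viewers)
    # are the ones most likely to get compromised; apps from unknown sources
    # are the ones most likely to be malicious in the first place
    if not from_trusted_source:
        return Default.DENY
    if handles_untrusted_input:
        return Default.ASK
    return Default.ALLOW


print(default_policy(handles_untrusted_input=True, from_trusted_source=True))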

It doesn't go into much detail; I can expand on it if non-security people feel
that would be useful to them. Ultimately, the interactions that can be 
proposed are tightly linked to the techniques available to implement them. In 
my opinion, only DEs and their own upstreams will be willing to make 
significant development efforts. In particular, app developers should not 
bear too much of the cost, because apart from Chrome/Firefox, few will bother 
to implement security in their apps (as indicated by how easy it is to fuzz, 
e.g., evince, which would be my personal attack vector if I wanted to target 
Linux users).

--
Steve Dodier-Lazaro
PhD student in Information Security
University College London
Dept. of Computer Science
Malet Place Engineering, 6.07
Gower Street, London WC1E 6BT
OpenPGP : 1B6B1670
