Thanks to the authors for putting this document together.
The scenario is very close to what I implemented back in 2011 or so, so I
am naturally interested.

Here are some questions I have about the draft.

1) Am I correct to assume that the draft targets a device that is
completely unable to accept user input?
2) I feel that it is appropriate to mention shoulder surfing as well.
In kiosk-type use cases, the screen might be watched by a remote
camera and the "session" might be hijacked by a remote attacker. (This is
why I am asking 1) above: if the device has the capability to accept a
number, the risk can be lowered considerably.)
3) It would probably be better to say explicitly that the "device code MUST
NOT be displayed", especially in the case of a public client.
4) Do sections 3.4 and 3.5 exclude the possibility of using something like
WebSockets?
5) If my reading is correct, the client does the polling etc. by itself
(roughly as sketched below) and does not spawn a system browser. In a
kiosk-type use case, I can imagine the original app spawning a browser,
i.e. doing PKCE. In that case, the authorization server would serve a user
authentication and authorization page that displays the verification URI
and the user code, and the client would do nothing beyond regular PKCE.
Is it correct that this kind of use case is out of scope for this document?
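
For what it's worth, question 5) is based on the following reading of the
client-side polling. This is only a minimal sketch of my understanding; the
endpoint paths, the client_id value, and the use of Python's requests
library are placeholders of my own and are not taken from the draft:

    import time
    import requests

    AS_BASE = "https://as.example.com"   # placeholder authorization server
    CLIENT_ID = "kiosk-client"           # placeholder public client id

    # Device authorization request: obtain device_code, user_code and
    # the verification URI to show on the kiosk screen.
    resp = requests.post(AS_BASE + "/device_authorization",
                         data={"client_id": CLIENT_ID}).json()
    print("Visit", resp["verification_uri"], "and enter", resp["user_code"])
    # The device_code itself is only used for polling and never shown
    # to the user (cf. question 3 above).

    # Poll the token endpoint until the user approves or the code expires.
    interval = resp.get("interval", 5)
    while True:
        time.sleep(interval)
        token = requests.post(AS_BASE + "/token", data={
            "grant_type": "urn:ietf:params:oauth:grant-type:device_code",
            "device_code": resp["device_code"],
            "client_id": CLIENT_ID,
        }).json()
        if "access_token" in token:
            break                            # user approved on the other device
        if token.get("error") == "slow_down":
            interval += 5                    # back off as the server requests
        elif token.get("error") != "authorization_pending":
            raise RuntimeError(token)        # e.g. expired_token, access_denied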

Cheers,

Nat Sakimura




-- 

Nat Sakimura

Chairman of the Board, OpenID Foundation