I’ve been around some people at SVA working on a project that used top-down projections for a restaurant menu prototype, showing the selected food on the plate before you ordered it. That part was great. The interaction wasn’t. While it demoed well, it was always really frustrating to have the projection on my hand and no good tactile, reactive surface to interact with: menu selection was done right on the table by pressing your finger on a menu item for a prolonged time, essentially just holding still.

Glass screens have the same problem for me, and I was always hoping a company would create a surface that could form physical buttons ad hoc to match an interface (I was so sad to see 3D Touch go; it went part of the way there, but nobody cared enough to make it worthwhile for the cost). That’s why I’m skeptical about light projections.

I dig the idea of the eye on the chest. It takes away a bit of the creepy FUD of somebody recording you, assuming they go privacy-first, never give the video out, and only let the machine use it to interpret the moment for good computational advice. That’s punching above the level that Glass did. That I could get excited about.
(Rambling trails off.)
It does feel like we could come up with something like that if the research were put into it. There’s talk of doing that sort of adaptive tactile surface with foldable devices; there’s probably a prototype of it deep in the labs of Apple or Samsung.