Enabling a Future of Perceptive Assistance: System Support for Efficiency and Privacy in Continuous Mobile Vision
- Robert LiKamWa | Rice University
In my vision for the future, wearable computing devices will interpret rich real-world visual environments in real time, providing services that assist in our daily lives through “Continuous Mobile Vision”. For example, by remembering the faces and objects in our personal encounters, a device can maintain a visual search engine relevant to its user. I show that today’s system software and imaging hardware are ill-suited for such continuous mobile vision: highly optimized for photography, current systems fail to attain a sufficient level of energy efficiency and privacy preservation. I present my rethinking of the vision system stack across application frameworks, operating system drivers, and sensor architecture to attain energy efficiency. This cross-layer work contributes: (1) scalability to multiple concurrent vision applications, (2) mechanisms for energy-proportional image capture, and (3) early processing to reduce the image sensor’s readout burden. Altogether, these contributions yield a two-order-of-magnitude improvement in the efficiency of vision processing. Progressing into the future, in addition to seeking further OS and architectural opportunities for efficiency, I seek to enable continuous mobile vision to operate more securely and privately, innovating low-level mechanisms that enable high-level policies for the vision stack. By providing a narrow, monitored view of vision data access, such privacy mechanisms will enable users to trust the vision-processing software stack. In the long term, I will push for a future in which continuous mobile vision enables a new wave of personal computing assistance, wherein perception of the real world helps our devices relieve the burden on our precious human memory and attention.