Friday, July 24, 2020

Zeroing in on Zero-Touch Interaction

Jim Spadaccini and Hugh McDonald from Ideum were kind enough to share the post below about their company's latest work on touchless interactive systems. Lots of interesting things to learn and think about here -- enjoy!

Five years ago, our company worked with Intel to devise a touchless interactive system for desktop users. At the time, Intel had recently released their first RealSense depth and tracking technology, and they were looking for a proof of concept. We were very excited about the challenges this presented and enthusiastically jumped at the opportunity. As we got started, we quickly recognized that implementing a successful touchless interface had little to do with the technology itself. Instead, at its core, creating a strong, intuitive touchless experience was a design challenge, and one that required special attention to visual feedback.

This early experiment came to mind as the coronavirus pandemic worsened. (You can read about it in greater depth in an article we posted back in April 2020, Touchless Gesture-Based Exhibits, Part One. Paul Orselli and I also discussed this topic in May 2020, and you can check out that conversation at Museum FAQ - The Future of Digital Interactivity.) As COVID-19 swept through the country, concerns about touch tables and hands-on exhibits grew, and museums and design firms were (and still are) looking for possible solutions for retrofitting exhibits or developing new types of experiences. Keeping the lessons we learned from the Intel collaboration in mind, we looked to create a system that provides real-time feedback and helps visitors understand what is, at its core, a novel method for interaction. 

More specifically, we aimed to create a system that would allow visitors to use gestures to navigate large screen and touch table interactives. This is quite different from the full-body touchless interactions that have become fairly common in museums over the last decade or so. (We wrote about those types of interactions recently as well; see Touchless Gesture-Based Exhibits, Part Two: Full-Body Interaction.) The system we imagined would require a certain amount of precision to allow visitors to navigate, make selections, and access information and media. For this project, we wanted to build a touchless experience that would help visitors use digital exhibits focusing on wayfinding, collections, media viewers, simple games, and other types of informational interactives. 

After a period of discovery and experimentation, we designed an integrated and color-coded hardware and software system. This mouse-emulation system consists of a motion and depth sensor along with small LCD displays and LEDs to provide immediate visitor feedback. (We are using Leap Motion and RealSense sensors in two alternate versions of the experience.) In addition, the cursor is color-coded and animated, and changes based on interaction to provide additional feedback to the visitor. The graphic inspiration came from signage for the NYC subway system: clear, bright graphics; bold colors; high-contrast text. Finally, we built a simple housing for the integrated touchless device that attaches to our line of Drafting Touch Tables.

The system is designed to provide onboarding information and immediate feedback on how to use it. Here’s how it works:

White - In the exhibit’s attract state, LED lights pulse to draw attention to the integrated touchless system. This is designed to highlight the fact that there is something new here. The small display on the sensor unit shows literal iconography to onboard the visitor—a hand with a pointing finger.

Yellow - Yellow is the hover state. As the visitor moves the cursor over onscreen objects, such as buttons or other interface elements, the cursor changes to acknowledge them.

Green - The cursor turns green as visitors make selections, either by quickly retracting their pointer finger or by holding a clenched hand to grab and drag an onscreen object.

Red - If visitors actually touch the display, they see red LEDs, a red border around the large display, and a red “this is a touchless display” message on the small display.
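The four states above behave like a small priority-ordered state machine: a physical touch always wins, then a selection gesture, then hovering, with the attract state as the default. Here is a minimal sketch of that logic in Python; the state names and sensor flags are illustrative assumptions, not the actual Touchless.Design implementation:

```python
from enum import Enum

class FeedbackState(Enum):
    """The four color-coded states described above (names are illustrative)."""
    ATTRACT = "white"   # pulsing LEDs invite the visitor to try the system
    HOVER = "yellow"    # cursor is over a button or other interface element
    SELECT = "green"    # finger-retraction or grab gesture detected
    WARNING = "red"     # visitor physically touched the display

def next_state(touching: bool, selecting: bool, hovering: bool) -> FeedbackState:
    """Pick the feedback color from simple sensor flags, most urgent first."""
    if touching:
        return FeedbackState.WARNING
    if selecting:
        return FeedbackState.SELECT
    if hovering:
        return FeedbackState.HOVER
    return FeedbackState.ATTRACT

# A hand hovering over a button, with no touch and no selection gesture:
print(next_state(touching=False, selecting=False, hovering=True).value)  # yellow
```

In a real exhibit, the returned state would drive the LEDs, the small LCD, and the cursor animation together, so all three feedback channels always agree.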

The combination of a small auxiliary display, LEDs, and cursor animations might initially seem like overkill, but we have found that novel interactions like this often require more than one type of feedback. These elements work together to let visitors know that this system is present and is responding to their actions. It certainly would have been simpler to forgo the added hardware system and modify the exhibit on the larger display, but such changes to existing applications aren’t trivial in terms of redesign and reprogramming, and it is not clear whether visitors would understand that the experience on the large display had been altered in some way.

This integrated hardware system has the advantage that, once built, it can be deployed on any number of interactive exhibits with little change to underlying source code. In addition, since the changes to existing applications are minor, the system can be removed and the application can revert to its original touch interface when the need for a touchless system is past.
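Mouse emulation is what makes that reuse possible: the existing application still sees an ordinary cursor, and the touchless layer simply converts hand positions into screen coordinates. A minimal sketch of that mapping step, assuming the sensor reports a normalized (0.0 to 1.0) hand position (the function name and display resolution here are hypothetical; a real system would then feed the result through OS-level mouse events):

```python
def hand_to_screen(nx: float, ny: float,
                   width: int = 1920, height: int = 1080) -> tuple[int, int]:
    """Map a normalized hand position (0.0-1.0 on each axis) from the
    depth sensor to a pixel coordinate on the large display, clamping
    so the emulated cursor never leaves the screen."""
    nx = min(max(nx, 0.0), 1.0)
    ny = min(max(ny, 0.0), 1.0)
    return (round(nx * (width - 1)), round(ny * (height - 1)))

print(hand_to_screen(0.5, 0.5))  # → (960, 540)
```

Because the translation happens outside the exhibit software, removing the sensor later restores the original touch interface with no code changes, exactly as described above.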

Our initial rounds of informal testing suggest that this prototype system works reasonably well. While it is, not unexpectedly, less intuitive than a touch interface, it is accurate and reliable enough for most visitors to navigate without frustration. In addition, in a new world where people are understandably reluctant to touch objects in public spaces, the system provides an odd feeling of empowerment: I can make things happen, I can make selections and interact, and I can do it without touching something!

The development of this system is part of a new broader initiative called Touchless Design, which we announced last week. The software and DIY instructions for the integrated hardware system will be open-sourced and available at Touchless.Design later this summer. As part of this new initiative, we are collaborating with the National Gallery of Art in Washington, D.C., with whom we will build and test a proof-of-concept kiosk to be installed in the fall. In addition, we are very fortunate to have received funding from Intel as part of their Pandemic Response Technology Initiative.

In the coming weeks, we will be testing our early prototypes, publishing our findings, and continuing to refine and develop the hardware and software. Along with the National Gallery of Art, we are working with other institutions on new proof-of-concept touchless exhibits. You can follow our development by visiting the Touchless.Design website or one of Ideum’s many social media channels. We look forward to hearing from you!

Don't miss out on any ExhibiTricks posts! It's easy to get updates via email or your favorite news reader. Just click the "Sign up for Free ExhibiTricks Blog Updates" link on the upper right side of the blog.

Paul Orselli writes the posts on ExhibiTricks. Paul likes to combine interesting people, ideas, and materials to make exhibits (and entire museums!) with his company POW! (Paul Orselli Workshop, Inc.) Let's work on a project together!

If you enjoy the blog, you can help keep it free to read and free from ads by supporting ExhibiTricks through our PayPal "Tip Jar"
