Designing interactive interfaces for machine learning systems that best serve our needs and help us build and maintain trust is a central challenge in AI. Read one researcher’s take on this topic.
For lay readers and experts alike, the topic of machine learning more often than not sparks lengthy, heated discussions, with eyes rolling and heads shaking in disagreement. No wonder: mounds of private information are being collected by giant corporations, stored in private data silos, and exposed to us only through creepy yet insightful automated recommendations and suggestions.
Like it or not, machine learning has entered our lives boldly and is here to stay: in the voice of Siri, in our search engines, in systems that protect us from fraud and intrusions, in applications that understand our emotions, and the list goes on and on. These days, my phone auto-completes almost all information about my new contacts and meetings. I can feel a growing discomfort with that thought, and I know I’m not alone.
Sure, we all love that magic, and it makes our lives easier, but for some reason there is a feeling of discomfort we can’t get rid of. As a researcher and practitioner, I struggle to pin down why.
I had a conversation with my sister the other day and listened as she raged against AI-driven bots.
She is right. These are interfaces built by human developers and designers, and yet we have somehow failed to meet the basic human needs in interaction: to understand, to be understood, and to know we can trust the other party. Without the help of emotions, without proper explanations, and without socially acceptable patterns of behavior, it is difficult to trust even the most benevolent bot.
The problem is not simple; it has many aspects, and today I will touch on only one: designing interactive interfaces for machine learning systems that best serve our needs and help us build and maintain trust.
This question is addressed by researchers who work at the intersection of human-computer interaction and machine learning. Last week I attended the Human Computer Interaction conference in San Jose and had the chance to discuss this topic with some of the speakers. Here are my main takeaways and favorite ideas.
With the ever-growing presence of sensors and IoT devices in our lives, there is a growing need for citizens to attain a level of data literacy. In smart city applications, there is the maker or stakeholder, who has good insight into their own problems but lacks an understanding of what machine learning is and how it can help. There is the consumer, who wants their data to be protected and not misused. And there is the expert, who can apply machine learning to other people’s needs but doesn’t understand their goals. As daily interactions with data become ever more commonplace, data literacy becomes a life skill we all need to learn.
Data literacy calls for placing greater importance on building the right interfaces and tools to help us become more comfortable with data: tools that let us easily understand our environment through data and control it in the ways that suit us best. From governmental to environmental data, the list extends through thousands of important examples.
An inspiring approach to this problem is Physikit, a system designed to let users explore and engage with environmental data through physical ambient visualizations. The physical cubes that make up the Physikit kit show values and changes in environmental data in four different ways: through light, vibration, movement, and air. The result is visible and clear, yet ambient and unobtrusive.
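To make the idea concrete, here is a minimal sketch of how such an ambient mapping might work in software. This is not Physikit’s actual API; the channel names, function names, and the CO2 thresholds are illustrative assumptions. It simply clamps a raw sensor reading into a 0.0–1.0 intensity and packages it as a command for one of four hypothetical ambient channels.

```python
# Illustrative sketch (NOT Physikit's real API): map an environmental
# reading onto the intensity of one of four ambient output channels,
# mirroring the kit's light / vibration / movement / air cubes.

CHANNELS = ("light", "vibration", "movement", "air")

def to_intensity(value, low, high):
    """Clamp a raw sensor reading into a 0.0-1.0 intensity."""
    if high <= low:
        raise ValueError("high must exceed low")
    return min(1.0, max(0.0, (value - low) / (high - low)))

def ambient_update(channel, value, low, high):
    """Return the actuation command an ambient cube might receive."""
    if channel not in CHANNELS:
        raise ValueError(f"unknown channel: {channel}")
    return {"channel": channel,
            "intensity": round(to_intensity(value, low, high), 2)}

# Example: show an (assumed) CO2 reading in ppm on the 'light' cube,
# scaled between illustrative low/high thresholds of 400 and 1200 ppm.
print(ambient_update("light", 800, low=400, high=1200))
```

The design choice worth noting is the clamping: an ambient display should degrade gracefully at the extremes rather than error out, so out-of-range readings simply saturate at full or zero intensity.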