The Dialog and the Dance
In our latest episode of the Humans and Machines podcast, titled UX and Visualizations: Dialog AND Dance, I had such a great time chatting with Noah Iliinsky, the Principal UX Architect at Collibra and a well-known speaker and author on the topics of user experience (UX) and advanced visualization. Noah is one of my favorite people to talk with about UX and visualization: he is both super-knowledgeable in this space and amazingly well-spoken… perhaps because of his unique humanities and physics background.
It also helped that we spoke about one of my favorite topics, and the reason we named our podcast series “Humans and Machines”: the partnership between the human user and the AI. With methods such as human-in-the-loop, which explicitly requires human interaction, advanced visualization to engage human visual perception, and AI ethics to weave in societal values, I believe effective AI requires a continuous dialog and dance between the humans and the machines.
Focusing on the Human
It often seems like the machine gets all the attention. We talk about the AI, the models, the math, but we seem to ignore the human. Noah and I discuss why we tend to focus on the math. Put plainly, it’s easier. Math has a finite, concrete answer, and people tend to get really excited about technology.
In contrast, Noah points out that “humans are messy. And continuous and not discrete. And squishy and non-deterministic.” This fuzziness makes human things harder to work with, and a harder problem in general.
But including humans in the equation is important. That partnership has proven to be too useful to ignore.
I bring up the example of centaur chess. Named after the mythical half-human, half-horse creatures, centaur chess is a style of play that pairs human grandmasters with chess-playing programs. Fascinatingly, it turns out that these partner teams can achieve levels of play better than either human grandmasters or chess programs alone. The high-level explanation? Human grandmasters tend to be better at long-term, strategic, and intuitive play, while chess algorithms tend to be better at short-term, tactical play and never making mistakes.
Noah then brings up Douglas Engelbart, the inventor of the computer mouse and a major influence on modern computer science. Engelbart coined the phrase “augmenting human intellect” to describe the role of software. Let computers do what they are good at, in a way that augments, not replaces, humans.
User Experience and Important Principles
Sitting between the human and the machine is user experience, UX. A good UX can make all the difference between a successful interaction and a failed one. The problem with bad UX, especially in the case of cybersecurity, is that the stakes can be so high that a failure state can have dire consequences. Mistakes can lead to the wrong person being incriminated, the crime being missed, the attacker succeeding, and the victim being wronged. In contrast, when we are successful, we find the bad guy, we prevent the crime, we save the victim, and we make the world a better, safer place.
As a result, I wanted to get Noah’s thoughts on UX principles that would reduce the chances of these failure states happening and improve the robustness and success of our systems. Noah brings up two suggestions.
First, get clear on your assumptions and write them down. “A design solution is the sum or superposition of all the choices made along the way,” Noah states. And I see his point: documenting all the choices makes explicit all the decision points along the way, and the pre-meditative nature of that exercise reduces the chances of bad defaults and unintended consequences.
Second, design for human inaction: assume the human will do nothing. By being intentional about your defaults, you avoid a common failure state. “If your system depends on that human always doing everything right,” Noah argues, “then that system will inevitably fail.”
We touch briefly on the relationship between these two principles and AI ethics. Responsible AI worries about model bias, which in a sense bakes bad assumptions and decisions into the model itself. Responsible AI also encourages human-in-the-loop approaches, where the human is always involved and never inactive.
Successful Design and the Information Abstraction Hierarchy
I was fascinated by a framework featured in one of Noah’s talks: a hierarchy for successful design that resonated with me. Picture a ladder with data at the bottom, solutions at the top, and increasing value as you move up the stack:
From the bottom to the top, Noah’s hierarchy is data, information, answers, actions, and solutions.
Why does this work? In essence, Noah points out that there is more value as you go up the stack. But moving up requires effort, so not everyone bothers. If you don’t make that effort, you are either asking the customer to climb the stack themselves or losing potential business to others who are willing to climb it.
Not coincidentally, moving up the stack involves more humans.
This is, effectively, how the best data visualizations work. The best data visualizations, according to Noah, reveal knowledge in a domain where human judgement is required. They go up the stack.
Back to the Dialog and the Dance
But, to bring it back to the original thesis of our conversation, that partnership between humans and machines is still there. Noah brings up the example of a cybersecurity anomaly detection system that points out an unusual series of logins indicative of an “impossible journey” scenario: here, the machine does the bulk of the computation, but the human provides the ultimate judgment.
It’s that partnership of humans and machines again, Engelbart’s “human augmentation”.
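To make the division of labor concrete, here is a minimal sketch of what an impossible-journey check might look like. This is a hypothetical illustration, not the system Noah describes: the machine computes the travel speed implied by consecutive logins and flags implausible pairs, while the final judgment call is left to a human analyst.

```python
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class Login:
    user: str
    timestamp: float  # seconds since epoch
    lat: float        # latitude in degrees
    lon: float        # longitude in degrees

def haversine_km(a: Login, b: Login) -> float:
    """Great-circle distance between two login locations, in kilometers."""
    dlat = radians(b.lat - a.lat)
    dlon = radians(b.lon - a.lon)
    h = sin(dlat / 2) ** 2 + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))  # Earth radius ~6371 km

def flag_impossible_journeys(logins, max_speed_kmh=900.0):
    """The machine's half of the partnership: flag consecutive logins whose
    implied travel speed exceeds a commercial-flight ceiling. The flags go
    to a human analyst, who makes the final call. A real system would also
    group logins by user; this sketch assumes one user's logins."""
    flagged = []
    ordered = sorted(logins, key=lambda l: l.timestamp)
    for prev, curr in zip(ordered, ordered[1:]):
        hours = (curr.timestamp - prev.timestamp) / 3600
        if hours <= 0:
            continue  # simultaneous logins: ambiguous, skip in this sketch
        speed = haversine_km(prev, curr) / hours
        if speed > max_speed_kmh:
            flagged.append((prev, curr, speed))
    return flagged
```

For example, a login from New York followed one hour later by a login from London implies a travel speed of several thousand km/h, so that pair would be flagged for human review, while two logins across town hours apart would pass silently.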
I really enjoyed my conversation with Noah. I got a lot out of it, and I know you will, too.
Links and Show Notes
- Watch the episode video
- Follow and listen to the episode
- Keep up with Noah on LinkedIn
- Follow Noah on Twitter: @noahi
- Read Noah’s Blog