Rich, engaging conversation is a hallmark of a meaningful human-to-human experience. However, factors such as relational closeness, social formalities, and insecurities often keep individuals from going beyond surface-level conversation.

Thinkspace is a speculative concept that aims to reenvision the role of conversational agents in discussion-based contexts.

Skills —

Interaction Design
Adobe After Effects

Timeframe —

7 Weeks (Spring)

Collaborators —


Generative Conversation —

In our world today, almost all the tools we use are passive. They do exactly what we tell them and nothing more. Current conversational agents are no different — we provide a request, and the assistant performs that action. In his TED talk, designer and engineer Maurice Conti challenges society to rebuild the passive (tech) tools we use today into generative ones.

In this project, I wanted to challenge the 'assistant' metaphor of voice agents. Though the main intent of CUIs is still to help accomplish tasks, what if they could provide input in a way that we might not even know to ask for? In the proper context, what if human conversation could be enriched through the facilitation of a proactive voice assistant rather than a reactive one?

Modular Intents —

Intent: a user-selected mode that dictates the conversational assistant’s behavior.

Every conversation begins with a different context and intention. Thinkspace embraces that, giving users the ability to choose the voice assistant's role before the conversation begins. The system is modular, meaning the number of Intent options will expand as the overall Thinkspace system develops.
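One way to picture the modularity is as a registry that maps each Intent to a behavior, with new Intents slotting in over time. This is a minimal sketch of that idea; the Intent names and behaviors below are my own illustrative assumptions, not the actual Thinkspace system.

```javascript
// Hypothetical registry of user-selectable Intents.
// Each Intent maps a name to a behavior the assistant applies to what it hears.
const intents = new Map();

function registerIntent(name, behavior) {
  intents.set(name, behavior);
}

// Example Intents a user might choose before starting a conversation.
registerIntent("brainstorm", (utterance) => `surface related ideas for: ${utterance}`);
registerIntent("devil's advocate", (utterance) => `show a counterpoint to: ${utterance}`);

// Route an overheard utterance through whichever Intent the user selected.
function handleUtterance(activeIntent, utterance) {
  const behavior = intents.get(activeIntent);
  return behavior ? behavior(utterance) : null;
}
```

Because the registry is just a map, adding a new Intent is a single `registerIntent` call, which matches the idea that the option set grows as the system develops.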

Input Methods —

The goal of Thinkspace is to enhance human-to-human conversation, not human-to-computer conversation. Because of that, Thinkspace is designed so that no verbal exchange occurs between person and computer.

System Wake & Sleep

The person invokes the system through voice. Thinkspace is about conversation, so the first point of interaction should reflect that.

Role Selection

The person chooses through a mobile interface. Projecting the full set of options onto limited wall space would be impractical.

How the system provides information

Only visual and auditory cues, no spoken dialogue. The goal is to keep the human-to-human conversation in focus, so the assistant never replies with spoken words.

Thought Metaphor —

A research article from the National Academy of Sciences claims that highly creative people differ from the average person in the unique coactivation of three primary brain regions: the default, salience, and executive systems, neural circuits that usually work in opposition.

I couldn't help but wonder how this conceptual model of the human mind might influence us on the social level — more specifically, our hesitance to expand beyond social cliques. How can the grouping and regrouping of our thoughts act as an analogy for the way we should engage with people of different backgrounds, cultures, and ideas?

Live Prototype —

I used the JavaScript library p5.js and its speech recognition library to map voice input to the visual particle system. Below is a link to a live prototype that translates the words you say. Additional thanks to Daniel Shiffman for his unending list of JavaScript video tutorials.
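The core of that mapping is deciding where on screen each recognized word's characters should land so particles can drift toward them. This is a simplified sketch of that step, with names of my own invention; the layout function is plain JavaScript, while the p5.speech wiring is shown in comments because it only runs in a browser.

```javascript
// Spread each character of a recognized word across evenly spaced x positions,
// giving every particle a target to drift toward.
function layoutWord(word, canvasWidth, baselineY) {
  const chars = word.split("");
  const step = canvasWidth / (chars.length + 1);
  return chars.map((ch, i) => ({
    char: ch,
    x: step * (i + 1),
    y: baselineY,
  }));
}

// In the browser sketch, the p5.speech library's recognizer would feed it
// roughly like this (assumed wiring, not the prototype's exact code):
//
//   let rec = new p5.SpeechRec("en-US", () => {
//     targets = layoutWord(rec.resultString, width, height / 2);
//   });
//   rec.continuous = true;
//   rec.start();
```

For example, `layoutWord("hi", 300, 200)` places the two characters at x = 100 and x = 200 along the baseline.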

Progress Videos —

Though this project was quite conceptually driven, a large part of my process went into exploring the code-based relationship between particle engines, input systems, and eventually text translation. Below is a series of progress videos that document my JavaScript explorations throughout this project.

Iteration 1 — Gravitational particles
Iteration 2 — Particle-to-text
Iteration 3 — Voice-to-text
Iteration 4 — Voice-to-image
Integration into Physical Space
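The gravitational behavior from Iteration 1 boils down to an inverse-square attraction applied each frame. This toy version is written in plain JavaScript so it runs outside of p5.js; the names and the distance clamp are my own choices, not the project's actual code.

```javascript
// Inverse-square attraction of a particle toward a center point.
function attract(particle, center, strength) {
  const dx = center.x - particle.x;
  const dy = center.y - particle.y;
  // Clamp the squared distance so forces don't blow up near the center.
  const distSq = Math.max(dx * dx + dy * dy, 25);
  const dist = Math.sqrt(distSq);
  const force = strength / distSq;
  return { fx: (dx / dist) * force, fy: (dy / dist) * force };
}

// One simulation step: accumulate force into velocity, velocity into position.
function step(particle, center, strength, dt = 1) {
  const { fx, fy } = attract(particle, center, strength);
  particle.vx += fx * dt;
  particle.vy += fy * dt;
  particle.x += particle.vx * dt;
  particle.y += particle.vy * dt;
  return particle;
}
```

In a p5.js sketch, `step` would be called for every particle inside `draw()`, with the center swapped out for per-character targets once the particle-to-text iteration comes in.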

Conclusion —

As time passes, so do people, their ways of thinking, and the types of conversations they engage in. As people change, so does the role of Thinkspace.

Working on Thinkspace was delightful because it stemmed from a genuine interest in computation, multimodal forms of interaction, and the application of metaphor thinking in design. This project pushed me to challenge the potential applications of voice assistants, especially in proactive contexts such as this. Given more time, I'd love to build my program to a fidelity where it can be tested with real users. Whether it be through Wizard of Oz experiments or any other testing method, I want to see how this concept plays out in a real human conversation.