This project was completed in 2018 as part of my Industrial Design honours project under the supervision of Pierre Proske at RMIT University, Melbourne.
Project Overview
The internet is full of language learning apps and translation technologies for improving second language skills. Gone are the days when a person had to enter a language classroom or travel to a foreign country to learn a new language. Many of these apps allow users to answer written vocabulary and grammar questions, but they rarely offer the opportunity to practise speaking. Speaking and listening skills are essential components of second language acquisition and long-term memory retention.
With the growing ubiquity of smart home speakers such as 'Google Home' and 'Amazon Echo', we are now seeing these apps shift towards voice-based user interface design. Voice User Interface Design is a growing field in the design world and has the potential to disrupt how we interact with digital interfaces. This research project aimed to explore ways to create a language learning tool built around voice-based user interactions.
What is Hanashi?
‘Hanashi’ is a tool that language learners can use on their smart home speakers and mobile devices to engage in foreign language conversation within the home. The tool was named after the Japanese word for ‘talk’ or ‘story’.
In the language of their choice, the 'Hanashi' voice assistant prompts users to recall vocabulary and pronounce the names of different objects in their surroundings. The user is then asked follow-up questions that prompt further spontaneous conversation. ‘Hanashi’ is a contextual mnemonic device that aims to help users retain their language skills, and it supports visual and auditory learning styles in the home environment.
Research
What do existing digital language learning tools do?
I wanted to identify the ways people are currently using apps and technology to study new languages. The tools listed below give language learners more opportunities to engage in ‘informal’ language study beyond the classroom, and reviewing them was an insightful way to identify potential opportunities within this field. It also showed that it is possible to learn a language with no prior knowledge using apps such as Duolingo and Rosetta Stone. People can even have a cross-cultural conversation through translation apps like Google Translate and Say Hi without needing to learn a new language!
Understanding the Problem and Identifying Stakeholders
A brainstorming session was held at the beginning of the research phase to understand and map out why people learn languages, what the benefits of learning languages are, and why it is so difficult to study and practise speaking a new language. The brainstorm raised a lot of questions that needed further clarification through a user survey.
A stakeholder map was used to identify the companies and professionals who would hold an interest in this research and whom the research would affect. This exercise placed users as the key stakeholders: as this is a language learning tool, the users are the people who will ultimately determine the success or failure of the outcome.
The process of producing speech in our first language is not something we often need to think about; it occurs very naturally. However, when trying to speak a foreign language, we become more aware of our brains searching for the right thing to say with the correct pronunciation.
Research Survey
I sent out a survey to collect preliminary research data for the Hanashi project. It was distributed to fifty current university students and recent graduates via Facebook. The survey aimed to uncover whether people had studied a language in high school, whether they continued their language study after high school and why, what methods they used to maintain their languages after high school, and how likely they would be to use a product that enabled them to maintain a second language should one become available.
Why did/didn’t you choose to continue your second language study after high school?
Reasons people didn’t continue their second language studies
High school ruined it for me, I was no longer interested in studying it
Couldn’t be bothered. Too much effort
Because I didn’t even learn anything
Already could speak French, but took it because I was lazy; didn’t want to continue Japanese
Wasn’t too interested in Japanese
Wasn’t interested
University didn’t allow for it with the degree I am doing
I didn’t see a need for knowing Spanish in my community, and I wanted to focus on German, my other first language, or Korean instead since I was going to study abroad there
Too difficult and not what I wanted to do
I was no longer invested and it became too challenging/complicated
Decided it wasn’t a language I was interested in learning, other languages I want to learn more
Reasons people did continue their second language studies after high school
Languages are interesting
It would have been a waste otherwise, and I would like to be able to understand and communicate with people from dissimilar cultures.
My uni asked me to keep learning
Travelling, in a relationship with a French person, working in Mexico
It is very useful for my career aspirations (teacher)
I was just interested in learning English
I wanted to study Spanish during my travels and instead picked up Mandarin again living in China.
Wanted to keep up vocabulary and writing skills
My parents made me, as Dutch people generally speak at least 3 languages.
To maintain my ability and also want to major in the language
It’s part of the curriculum to finish my Bachelor’s Degree.
User Personas
I did a persona exercise to refine the target audience for the Hanashi project. The characteristics, motivations and desires of the personas were drawn from the outcomes of the research methods described above.
Initial Prototypes and Testing
Initially, I made a collection of objects representing a language learning speaker that could be placed in various spots around the home. This way, users could practise speaking in their second language wherever they went in their house. Ultimately, after experimenting with this idea, I decided to move in the direction of AI smart home voice assistants. Because voice assistants were becoming more ubiquitous in households, this type of technology would be easier to adopt and would therefore give users a better chance to practise their language skills.

User journey for initial prototypes

Prototypes made to place around the house

User with a Hanashi device in the house

Conversation Design and Voice User Interface Design
After a failed attempt to redesign and digitise random objects in the house, I began experimenting with Google’s ‘Dialogflow’. Learning how to use Dialogflow created a paradigm shift in this design process. It proved my suspicions that all roads really do lead to Google. Dialogflow became my way to create a tool for users to engage in basic two-way second language conversation in the home. Instead of placing strange talking robot objects all over the home, users would now be able to interact with this second-language speaking tool through products they may already own (including smart home speakers, smartphones and computers). However, Dialogflow brought another set of challenges. One of them was designing an interface... with only the voice.
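To give a rough sense of how a Dialogflow agent like this can be wired up, the sketch below shows a minimal webhook fulfilment in TypeScript. The intent name ('name-an-object'), the parameter name ('object') and the follow-up phrasing are hypothetical assumptions for illustration, not the actual Hanashi configuration; the point is simply the pattern of receiving a recognised object name and replying with a prompt that keeps the conversation going.

```typescript
// Minimal Dialogflow ES webhook sketch (hypothetical intent and parameter names).
// Dialogflow POSTs the matched intent and its parameters; we reply with
// fulfillmentText, which the agent then speaks back to the user.
import express from "express";

const app = express();
app.use(express.json());

app.post("/webhook", (req, res) => {
  const intent: string = req.body.queryResult?.intent?.displayName ?? "";
  const params = req.body.queryResult?.parameters ?? {};

  if (intent === "name-an-object") {
    // 'object' is an assumed entity parameter holding the household object
    // the learner just named, e.g. "chair" or "kettle".
    const objectName: string = params.object ?? "that";
    res.json({
      fulfillmentText: `Nice! How many ${objectName}s can you see in the room?`,
    });
    return;
  }

  // For any other intent, fall back to the static responses
  // defined in the Dialogflow console.
  res.json({});
});

app.listen(8080, () => console.log("Webhook listening on port 8080"));
```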
Voice User Interface Design (VUI design) is a form of interface design that uses speech and audio as the primary inputs and outputs. Conversation design is the design of the flow of the conversations users have with these voice-based interfaces. This research aimed to uncover an approach to conversation design that considers users interacting in a language they do not speak natively.
Testing my Prototype
After playing around with Dialogflow and writing a lot of dialogue, I created a testing website to try and get people to interact with my language-learning voice assistant. You can view the testing site at the link below. 
The Outcome
The final outcome of this research is a voice-based language learning tool that can currently be used on Google Home and Google Assistant, as well as online. The final release of the app is currently pending. Once it is released to the public, Ms. Hanashi can be accessed by saying, ‘Hey Google, talk to Ms. Hanashi’. From there, Hanashi greets the user and offers them the option to select a language to converse in.
The version made for this project was only available in English. Ms. Hanashi has been programmed to respond to the names of different household objects and other abstract concepts. The responses prompt further discussion about the object or idea the user has mentioned. This leads to a natural flow of conversation that gives the user the chance to engage in spontaneous, unpredictable exchanges, much as they would when conversing with a native speaker of a foreign language. Each follow-up question triggered by the user’s input has been designed to expose users to a variety of grammar structures, tenses and question words. Often, Ms. Hanashi will ask the user to count how many of a particular object there are, describe the objects next to a particular object, discuss what they do in a particular room, and so on.
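As a sketch of how that follow-up behaviour could be organised, the snippet below keeps a small pool of question templates, each targeting a different grammar point, and picks one for the object the learner named. The template wording and the function names are illustrative assumptions, not the production dialogue.

```typescript
// Hypothetical follow-up selection: each template exercises a different
// grammar point (counting, prepositions of place, habits, adjectives).
const followUpTemplates: ((object: string) => string)[] = [
  (o) => `How many ${o}s are there in this room?`,                  // counting / plurals
  (o) => `What is next to the ${o}?`,                               // prepositions of place
  (o) => `What do you usually do in the room where the ${o} is?`,   // present-tense habits
  (o) => `What colour is the ${o}?`,                                // adjectives
];

// Pick a random follow-up so repeat sessions stay spontaneous and
// unpredictable, much like a real conversation partner would be.
function nextFollowUp(objectName: string): string {
  const template =
    followUpTemplates[Math.floor(Math.random() * followUpTemplates.length)];
  return template(objectName);
}

console.log(nextFollowUp("lamp")); // e.g. "What is next to the lamp?"
```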
User Scenario Video
Mini User Journey Map
