AR2R: AI Assistant for Managing Sessions in Voice Interaction

Mobile

AI Assistant

SaaS

Timeline

May 2024 - July 2024

Role

Product Designer

Client

Outcome

Created a Siri-like AI assistant that interacts with users through voice, helping them create, manage, and interact with sessions directly within the Sessions platform.

Introduction

AR2R is an AI voice assistant designed to improve productivity and simplify task management for Sessions users. Sessions is a platform that reimagines virtual meetings by offering tools to streamline the entire process, from scheduling and planning to real-time collaboration and post-meeting analysis. Sessions integrates video conferencing, agenda management, and analytics to make meetings more efficient and productive.

AR2R was created to allow users to interact with Sessions hands-free, using voice commands to manage meetings, create sessions, modify agendas, and track performance. The assistant’s primary goal is to reduce the friction of manual input by enabling users to control their tasks through simple voice interactions.

Research and Analysis

The founding team’s extensive experience in productivity and communication tools helped define the project's initial goals. To design AR2R effectively, I analysed the recently introduced ChatGPT voice function to understand how voice-based interaction can complement traditional keyboard input. My research focused on how natural language processing (NLP) can interpret diverse user inputs—whether spoken or typed—and generate actionable responses.
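
To make this idea concrete, here is a minimal TypeScript sketch of the pattern that research pointed toward: spoken input (after speech-to-text) and typed input converge on a single intent parser, so everything downstream is modality-agnostic. All names are hypothetical illustrations, not the actual Sessions or AR2R implementation.

    // Minimal sketch: voice (post speech-to-text) and keyboard input
    // share one parser, so downstream handling ignores the modality.
    // A rule-based stand-in for a real NLP model; names are hypothetical.
    type Intent =
      | { kind: "create_session"; when?: string }
      | { kind: "modify_session" }
      | { kind: "unknown"; raw: string };

    function parseUtterance(text: string): Intent {
      const t = text.toLowerCase();
      if (t.includes("create") && t.includes("session")) {
        return { kind: "create_session" };
      }
      if (t.includes("reschedule") || t.includes("move")) {
        return { kind: "modify_session" };
      }
      return { kind: "unknown", raw: text };
    }

    // Works identically whether the text came from a microphone or a keyboard.
    console.log(parseUtterance("Create a session for Monday at 10"));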

A key objective in the research phase was to investigate user preferences for interacting with AR2R. We sought to understand if users would gravitate towards using voice commands or prefer traditional text input when managing their sessions. This insight was crucial in shaping AR2R’s final design, as it would influence how voice and text interactions are balanced within the platform.

Concept Development

With these insights in mind, I developed a minimum viable prototype (MVP) to test with the team and early users. The goal was to determine how users would interact with AR2R—whether they would use voice commands, text input, or a combination of both. The prototype also tested the effectiveness of AR2R’s responses, including the use of actionable widgets that users could engage with after receiving voice responses.

During the prototype phase, I focused on creating input examples and interactive widgets that AR2R would generate in response to commands. For instance, if a user asked AR2R to create a session, the assistant would provide a voice confirmation and present options in a widget format (e.g., selecting time, adding participants). These widgets were designed to be actionable, allowing users to adjust settings or manage their sessions without needing to manually type commands.
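
As an illustration of that response pattern, the sketch below models a reply that pairs a spoken confirmation with an actionable widget. It is a hypothetical TypeScript data model with invented type and field names; the actual prototype was built as Figma screens, not code.

    // Hypothetical data model: each assistant reply carries a spoken line
    // plus an optional widget the user can act on instead of typing.
    type Widget =
      | { type: "session_draft"; date: string; time: string; participants: string[] }
      | { type: "confirmation"; message: string };

    interface AssistantResponse {
      spokenReply: string; // read aloud by AR2R
      widget?: Widget;     // actionable follow-up shown on screen
    }

    // Example reply to "Create a session with the design team tomorrow".
    const reply: AssistantResponse = {
      spokenReply: "I've drafted a session for tomorrow. Want to adjust anything?",
      widget: {
        type: "session_draft",
        date: "2024-07-12",
        time: "10:00",
        participants: ["design-team@example.com"],
      },
    };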

One of the key insights we wanted to gain from the MVP was whether users felt more comfortable interacting with AR2R through voice commands or text input. This feedback would guide the direction of the assistant’s development, ensuring it aligned with user preferences for how they interact with technology.

Link to Prototype in Figma

Design

The design addressed how AR2R would handle user interactions when creating sessions, modifying sessions, providing notifications, and managing ongoing meetings—all through either voice commands or text input, depending on the user’s preference. A short sketch after the list below illustrates how these flows could share a single command model.

  • Creating a Session: When the user initiates the session creation process, AR2R responds with both a voice confirmation and an interactive widget displaying the session details (e.g., date, time, participants). Users can modify these details through voice commands or by interacting directly with the widget.

  • Modifying a Session: If the user wants to adjust an existing session, AR2R presents the available sessions and lets them be modified via voice or text input. The design includes options to adjust session details and timing, add participants, or change the agenda, with real-time updates shown on screen.

  • Notifications: AR2R sends voice-based reminders about upcoming sessions, paired with visual notifications that let users take action, such as rescheduling, or simply alert them that a session is about to start. This gives users the flexibility to choose between verbal and manual interactions.

  • Actions During a Session: During live sessions, AR2R responds to user commands to perform actions such as starting a recording, muting participants, or displaying shared content. The interface is designed to offer quick responses with minimal disruption to the flow of the meeting. Users can speak instructions or type commands, with both options equally supported.
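
The sketch referenced above shows how these four flows could share a single command model: each command resolves to a spoken reply and, where useful, an on-screen widget, so voice-first and text-first users receive equivalent feedback. It is a TypeScript illustration under assumed names, not the Sessions API.

    // Illustrative command model covering the four design flows;
    // all identifiers are assumptions, not the real Sessions API.
    type Command =
      | { kind: "create_session"; at: string; participants: string[] }
      | { kind: "modify_session"; sessionId: string; newAgenda?: string[] }
      | { kind: "remind"; sessionId: string }
      | { kind: "in_session"; action: "start_recording" | "mute_all" | "share_content" };

    // Every command yields a spoken reply plus an optional widget,
    // keeping voice and text interactions functionally equivalent.
    function handle(cmd: Command): { speak: string; widget?: string } {
      switch (cmd.kind) {
        case "create_session":
          return { speak: "Session created.", widget: "session_draft" };
        case "modify_session":
          return { speak: "Session updated.", widget: "session_editor" };
        case "remind":
          return { speak: "Your session starts in ten minutes.", widget: "reminder_actions" };
        case "in_session":
          return { speak: `Done: ${cmd.action.replace("_", " ")}.` };
      }
    }

    console.log(handle({ kind: "in_session", action: "start_recording" }));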

Reflections and Learning

Key learnings from the project include:

  • Prototyping as a Communication Tool: The MVP served as an excellent tool for bridging communication between the design team and stakeholders. It allowed us to visually demonstrate how AR2R would handle voice and text interactions and gather immediate feedback.

  • Voice vs. Text Preferences: One of the most important insights we gained from testing was that user preferences for voice vs. text interactions varied depending on the task. Some users preferred the convenience of voice commands for tasks like creating or modifying sessions, while others felt more comfortable using text, especially for detailed changes. This insight will be pivotal in shaping future iterations of AR2R to support both modes equally.

  • Better Done Than Perfect: Focusing on delivering a functional MVP rather than a perfect product allowed us to meet deadlines while gathering valuable feedback for further refinement.

Conclusion

The AR2R project was a fascinating opportunity to merge AI voice interaction with productivity tools in the Sessions platform. The project demonstrated the importance of balancing innovation with user comfort, ensuring AR2R can grow and evolve as an essential tool for managing meetings within Sessions.


Let's share ideas!

I’m always excited to collaborate on innovative and exciting projects!

E-mail

staicuandreea3@gmail.com
