
Ian Roberts / Portfolio 2025


30-Day Conversational Design Challenge

A daily dive into how humans really talk, and what that means for designing better voice and AI interactions. Each post = one idea, one sample dialogue, one design takeaway.


Day 1: Grice’s Cooperative Principle

Concept: Philosopher H.P. Grice argued that conversations work because people generally follow the Cooperative Principle:

“Make your conversational contribution such as is required, at the stage at which it occurs, by the accepted purpose or direction of the talk exchange in which you are engaged.”

In plain English: we assume others are being helpful, relevant, and understandable.

Sample Dialogue

  • User: What time’s the train?

  • Assistant (good): It leaves at 10 a.m.

  • Assistant (bad): Trains were invented in the 19th century.

In the second case, the assistant gave a true fact but violated the cooperative principle by not serving the purpose of the exchange.

Design Takeaway
A voice assistant should act like a cooperative partner, not a trivia machine. Always ask: Does this reply serve the user’s purpose right now?


Day 2: Grice’s Maxim of Quantity

Concept: Grice’s first maxim, Quantity, tells speakers to make their contribution as informative as required, and no more informative than that. In short: give as much information as needed, and no more.

Too little → user feels lost.
Too much → user feels overwhelmed.

Sample Dialogue

  • User: What’s the weather today?

  • Assistant (too little): Bad.

  • Assistant (too much): At 8 a.m. it will be 12.3°C with 67% humidity, winds of 9.2 mph from the southwest, increasing to 10.7 mph by noon, with…

  • Assistant (just right): Cloudy this morning, with rain after 3 p.m.

Design Takeaway
Voice is a low-bandwidth channel — keep answers short, clear, and just enough for the task.


Day 3: Grice’s Maxim of Quality

Concept: This maxim is simple: “Do not say what you believe to be false. Do not say that for which you lack evidence.” In conversation, we assume the other person is being truthful. When that assumption is broken, trust collapses.

Sample Dialogue

  • User: Is my train on time?

  • Assistant (violates Quality): Yes, it’s on time. (but it isn’t)

  • Assistant (better): I don’t have live updates right now, but I can check the schedule.

The second answer respects the maxim: it doesn’t fabricate. It admits limits.

Design Takeaway
Trust > slickness. A voice assistant should never “wing it.” If it doesn’t know, say so — and guide the user to the next best step.
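
What does this look like in code? One honest pattern is to gate every status answer on whether fresh data actually exists. A minimal TypeScript sketch, assuming a hypothetical TrainStatus record and an illustrative one-minute staleness threshold:

```typescript
// Sketch of a "Quality" guard: never assert a status the system
// cannot currently verify.

interface TrainStatus {
  onTime: boolean;
  fetchedAt: number; // epoch milliseconds
}

const MAX_AGE_MS = 60_000; // illustrative: treat data older than a minute as stale

function answerIsTrainOnTime(status: TrainStatus | null, now: number): string {
  // No data, or data too old: admit the limit and offer a next step
  // instead of guessing.
  if (status === null || now - status.fetchedAt > MAX_AGE_MS) {
    return "I don't have live updates right now, but I can check the schedule.";
  }
  return status.onTime ? "Yes, it's on time." : "It's running late.";
}

// With five-minute-old data, the assistant hedges rather than fabricating:
console.log(answerIsTrainOnTime({ onTime: true, fetchedAt: Date.now() - 300_000 }, Date.now()));
```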


Day 4: Grice’s Maxim of Relation (Relevance)

Concept: This maxim says: “Be relevant.” Every turn in a conversation should connect to what came before. If it drifts off-topic, it feels uncooperative or awkward.

Sample Dialogue

  • User: What time is it?

  • Assistant (violates Relation): Lunch is usually around noon.

  • Assistant (better): It’s 3:15 p.m.

The first reply is factually true but irrelevant. The second directly answers the question and keeps the conversation coherent.

Design Takeaway
Relevance is about meeting intent. Always design system responses to stay on-topic and aligned with the user’s purpose.


Day 5: Grice’s Maxim of Manner

Concept: Grice’s final maxim boils down to: be clear, brief, and orderly, and avoid obscurity and ambiguity. It’s not just what you say, but how you say it. Confusing phrasing disrupts the flow of conversation.

Sample Dialogue

  • User: How do I get to the station?

  • Assistant (violates Manner): Proceed in a manner not inconsistent with leftward motion at the forthcoming intersection.

  • Assistant (better): Turn left at the next traffic light.

Both answers point to the same action, but one buries it in stilted, double-negative phrasing. Clarity wins.

Design Takeaway: Clarity is kindness. Keep voice UI responses simple, concrete, and well-ordered so users never have to untangle the meaning.


Day 6: Implicature

Concept: Grice showed that we often mean more than we literally say. This extra layer of meaning is called implicature. Example: “It’s cold in here.” → could imply “Please close the window.”

Sample Dialogue

  • User: Can you reach the salt?

  • Assistant (literal only): Yes.

  • Assistant (with implicature): Here you go.

The literal answer is technically correct, but unhelpful. The implicature-aware reply responds to the real intent.

Design Takeaway: Users don’t always speak in commands. Voice UIs need to handle indirect speech acts — listen for what’s implied, not just what’s said.


Day 7: Speech Acts (Austin)

Concept: Philosopher J.L. Austin argued that language isn’t just about saying things — it’s about doing things. Words are actions. When you say “I promise” or “I apologise”, you’re not describing, you’re performing.

He called these speech acts.

Sample Dialogue

  • User: Remind me to call Sam at 3.

  • Assistant (good): Reminder set for 3 p.m.

  • Assistant (bad): That’s a sentence about calling Sam.

The second reply treats the utterance as description. The first treats it as an action — the whole point of speech act theory.

Design Takeaway: Voice UIs should interpret many utterances as actions, not statements. The key is to understand what the user is doing with words (requesting, promising, apologising, commanding).


Day 8: Turn-taking

Concept: Conversation is fast and tightly coordinated: across languages, the average gap between speaking turns is only about 200 milliseconds.


Dialogue:

  • User: Set a timer for—

  • Assistant (interrupts badly): Timer set for 5 minutes!

  • Assistant (better): [waits for completion] Timer set for 5 minutes.

Design Takeaway: Voice systems must handle barge-in and smooth turn-taking.
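
The simplest implementation of turn-taking is an end-of-speech timer: don’t commit to a reply until the user has been silent for a beat, and cancel the pending reply if they resume. A TypeScript sketch; TurnTaker, onSpeech, and the 700 ms threshold are my own illustration, not any particular speech stack’s API:

```typescript
// Sketch: wait for a short silence before treating the utterance as
// complete, and cancel the pending response if the user keeps talking.

const END_OF_TURN_MS = 700; // illustrative silence threshold

class TurnTaker {
  private pending: ReturnType<typeof setTimeout> | null = null;
  private transcript = "";

  constructor(private respond: (utterance: string) => void) {}

  // Call this on every partial speech-recognition result.
  onSpeech(fragment: string): void {
    this.transcript += fragment;
    if (this.pending !== null) clearTimeout(this.pending); // user is still talking: don't barge in
    this.pending = setTimeout(() => {
      this.respond(this.transcript); // turn is complete: now it's our turn
      this.transcript = "";
    }, END_OF_TURN_MS);
  }
}

// Usage: an interrupted turn plus its completion yields one reply, not two.
const tt = new TurnTaker((u) => console.log(`Assistant replies to: ${u}`));
tt.onSpeech("Set a timer for ");
tt.onSpeech("5 minutes");
```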


Day 9: Backchannels

Concept: Little responses like “mm-hmm” or “got it” signal listening without taking over the turn.

Dialogue:

  • User: Draft a short intro that highlights sustainability, cost savings, and innovation.

  • Assistant (no backchannel): [silent until finished] Here’s a draft.

  • Assistant (with backchannel): Mm-hmm. Got it. Here’s a draft.

Design Takeaway: Backchannels build trust and flow. Use them sparingly but purposefully.


Day 10: Preference Organisation

Concept: Some responses are “preferred” (fast, direct) — e.g. yes > no. Dispreferred answers come with hedges or delays.

Dialogue:

  • Assistant: Do you want me to read the full list?

  • User (preferred): Yes.

  • User (dispreferred): Uh… no, not right now.

Design Takeaway: When refusing, assistants should soften responses: “I couldn’t find that, but I can check again later.”
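
One way to bake this in: route every refusal or failure through a softener, so dispreferred responses always arrive with a hedge and a way forward. A small sketch (the TaskResult shape is invented for illustration):

```typescript
// Sketch: dispreferred responses (refusals, failures) get a hedge and
// a next step; preferred ones stay fast and direct.

interface TaskResult {
  ok: boolean;
  message: string;      // e.g. "Done, list saved." / "I couldn't find that"
  alternative?: string; // e.g. "I can check again later"
}

function phrase(result: TaskResult): string {
  if (result.ok) return result.message; // preferred: direct, no padding
  // Dispreferred: soften and offer a way forward.
  const hedge = `Hmm, ${result.message}`;
  return result.alternative ? `${hedge}, but ${result.alternative}.` : `${hedge}.`;
}

console.log(phrase({ ok: true, message: "Done, list saved." }));
console.log(phrase({ ok: false, message: "I couldn't find that", alternative: "I can check again later" }));
// → "Hmm, I couldn't find that, but I can check again later."
```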


Day 11: Sequences & Projects

Concept: Conversations have main projects, but people insert side-sequences without losing track.

Dialogue:

  • User: Book me a train to Paris tomorrow.

  • User: Oh, what’s the time now?

  • Assistant: It’s 2:30. Do you still want the Paris train?

Design Takeaway: Good assistants handle digressions and then return to the main thread.
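
A common way to model this is a stack of open “projects”: a digression pushes a new one, and closing it pops back to whatever was underneath. A toy TypeScript sketch with invented names:

```typescript
// Sketch: side-sequences as a stack. The main project stays open
// underneath a digression and is resumed once the digression closes.

interface Project { name: string; prompt: string; }

const stack: Project[] = [];

function open(p: Project): void { stack.push(p); }

function close(): string {
  stack.pop();
  const resumed = stack[stack.length - 1];
  // After a side-sequence, explicitly re-offer the main thread.
  return resumed ? `Do you still want this: ${resumed.prompt}` : "Anything else?";
}

open({ name: "book-train", prompt: "the Paris train?" });
open({ name: "tell-time", prompt: "" }); // side-sequence: "what's the time now?"
console.log("It's 2:30. " + close());    // → "It's 2:30. Do you still want this: the Paris train?"
```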


Day 12: Grounding & Common Ground

Concept: People establish shared knowledge — “common ground” — to keep conversations coherent.

Dialogue:

  • User: Schedule a call with Alex.

  • Assistant: Okay, Alex Chen?

  • User: Yes, that’s the one.

Design Takeaway: Assistants should confirm references to avoid confusion.
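
Concretely, grounding means that when a reference resolves to more than one candidate, or to a single uncertain one, the assistant confirms before acting. A sketch against a made-up contact list:

```typescript
// Sketch: resolve "Alex" against known contacts and confirm the
// reference before committing to the action.

const contacts = ["Alex Chen", "Alexandra Voss", "Sam Ortiz"]; // illustrative data

function resolve(name: string): string {
  const matches = contacts.filter((c) => c.toLowerCase().startsWith(name.toLowerCase()));
  if (matches.length === 1) return `Okay, ${matches[0]}?`;             // confirm the single candidate
  if (matches.length > 1) return `Which one: ${matches.join(" or ")}?`; // disambiguate
  return `I don't see anyone called ${name}. Who do you mean?`;        // repair
}

console.log(resolve("Alex")); // → "Which one: Alex Chen or Alexandra Voss?"
console.log(resolve("Sam"));  // → "Okay, Sam Ortiz?"
```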


Day 13: Context Dependence

Concept: Meaning builds from what’s already been said.

Dialogue:

  • User: Book me a train to Paris.

  • User (later): Make it earlier.

  • Assistant (good): Moved your Paris train to 9 a.m.

  • Assistant (bad): Make what earlier?

Design Takeaway: Conversations need memory of prior turns to feel natural.
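
The mechanism here is a small piece of dialogue state: keep the last actionable intent around so elliptical follow-ups like “make it earlier” have something to attach to. A toy sketch; the Booking shape, the default 10 a.m. slot, and the one-hour step are all invented:

```typescript
// Sketch: keep the last booking in dialogue state so follow-up
// turns like "make it earlier" can be resolved against it.

interface Booking { destination: string; hour: number; }

let lastBooking: Booking | null = null;

function handle(utterance: string): string {
  const book = utterance.match(/train to (\w+)/i);
  if (book) {
    lastBooking = { destination: book[1], hour: 10 }; // default slot, for the sketch
    return `Booked a ${lastBooking.hour} a.m. train to ${book[1]}.`;
  }
  if (/earlier/i.test(utterance)) {
    if (!lastBooking) return "Make what earlier?"; // no antecedent to attach to
    lastBooking.hour -= 1;
    return `Moved your ${lastBooking.destination} train to ${lastBooking.hour} a.m.`;
  }
  return "Sorry, I didn't catch that.";
}

console.log(handle("Book me a train to Paris")); // → "Booked a 10 a.m. train to Paris."
console.log(handle("Make it earlier"));          // → "Moved your Paris train to 9 a.m."
```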


Day 14: Situated Action (Suchman; Heath & Luff)

Concept: People improvise based on the situation — they don’t follow rigid plans.

Dialogue:

  • User: Expand this paragraph… actually, stop. Summarise it instead.

  • Assistant (rigid): Sorry, I can only expand.

  • Assistant (flexible): Okay, summarising now.

Design Takeaway: Voice UIs must be adaptable mid-flow, not locked to scripts.


Day 15: Politeness Theory (Brown & Levinson)

Concept: People soften requests to preserve “face” (dignity).

Dialogue:

  • User: Turn off the lights.

  • Assistant (softer): Sure, lights off. Want me to dim them first?

Design Takeaway: Assistants can mirror politeness strategies to feel less commanding and more collaborative.


Day 16: Indirectness in Requests

Concept: “Could you maybe…” often means do it, not are you able to.

Dialogue:

  • User: Could you maybe open the blinds?

  • Assistant (literal): Yes, I could.

  • Assistant (better): Opening the blinds.

Design Takeaway: Design assistants to interpret indirect speech acts as real requests.
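
A blunt but effective implementation: strip conventional politeness wrappers before intent matching, so “could you maybe open the blinds” and “open the blinds” reach the same handler. A sketch (the wrapper regex covers only a few stock forms):

```typescript
// Sketch: treat conventionalised indirect forms ("could you maybe…")
// as requests, not yes/no questions.

const INDIRECT_WRAPPERS = /^(could|can|would) you (please |maybe )?/i;

function toCommand(utterance: string): string {
  // "Could you maybe open the blinds?" → "open the blinds"
  return utterance.replace(INDIRECT_WRAPPERS, "").replace(/\?+$/, "").trim();
}

function respond(utterance: string): string {
  const command = toCommand(utterance);
  // A real system would dispatch `command` to an intent handler;
  // here we just acknowledge the action.
  return `Okay: ${command}.`;
}

console.log(respond("Could you maybe open the blinds?")); // → "Okay: open the blinds."
```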


Day 17: Conversational Style (Tannen)

Concept: Some cultures overlap in talk, others pause. Misalignments cause friction.

Dialogue:

  • User: Can you—

  • Assistant (overlaps wrongly): I’ll set that up now.

  • Assistant (better): [waits beat] Okay, setting that up now.

Design Takeaway: Voice systems should adapt to different pacing styles.


Day 18: Sociolinguistics

Concept: Language varies by culture, group, and identity.

Dialogue:

  • User (dialect): Set an alarm for half seven.

  • Assistant (bad): I don’t understand.

  • Assistant (good): Alarm set for 7:30.

Design Takeaway: Inclusivity means designing for dialects, registers, and accents.
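
Handling variation can start small, for example with a normalisation pass that maps dialect time expressions onto a canonical form before parsing. A sketch covering just the British “half seven” pattern; real coverage needs many such rules and locale awareness:

```typescript
// Sketch: normalise one dialect pattern, British "half seven" = 7:30,
// before the time is parsed. Beware locale: German "halb sieben"
// means 6:30, so rules like this must be locale-specific.

function normaliseTime(utterance: string): string {
  return utterance.replace(/half (\w+)/i, (_, h: string) => {
    const hours: Record<string, number> = {
      one: 1, two: 2, three: 3, four: 4, five: 5, six: 6,
      seven: 7, eight: 8, nine: 9, ten: 10, eleven: 11, twelve: 12,
    };
    const hour = hours[h.toLowerCase()];
    return hour !== undefined ? `${hour}:30` : `half ${h}`;
  });
}

console.log(normaliseTime("Set an alarm for half seven")); // → "Set an alarm for 7:30"
```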


Day 19: Power & Turn-taking

Concept: Who “owns the floor” is about power. Assistants mustn’t dominate.

Dialogue:

  • User: Let’s draft an intro—

  • Assistant (bad, interrupts): Draft complete.

  • Assistant (better): Okay, want me to suggest a draft now?

Design Takeaway: Build assistants to yield floor control unless invited.


Day 20: Empathy in Dialogue

Concept: People acknowledge feelings, not just facts.

Dialogue:

  • User: Ugh, I missed my train.

  • Assistant (flat): Next train is at 10:15.

  • Assistant (empathetic): Sorry to hear that. The next train is at 10:15.

Design Takeaway: Empathy tokens make assistants feel more human, but must be used sparingly and sincerely.
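
“Sparingly” can even be enforced in code: detect frustration cues, prepend one short empathy token, and rate-limit it so it stays sincere. A sketch with an invented cue list and cooldown:

```typescript
// Sketch: prepend a single empathy token when the user sounds
// frustrated, and rate-limit it so it doesn't become a tic.

const FRUSTRATION_CUES = /\b(ugh|missed|late|annoying|again)\b/i; // illustrative cues
const EMPATHY_COOLDOWN_MS = 5 * 60_000; // at most once per five minutes
let lastEmpathyAt = -Infinity;

function withEmpathy(userUtterance: string, answer: string, now: number): string {
  if (FRUSTRATION_CUES.test(userUtterance) && now - lastEmpathyAt > EMPATHY_COOLDOWN_MS) {
    lastEmpathyAt = now;
    return `Sorry to hear that. ${answer}`;
  }
  return answer; // facts only: empathy tokens lose value when they're constant
}

console.log(withEmpathy("Ugh, I missed my train.", "The next train is at 10:15.", Date.now()));
// → "Sorry to hear that. The next train is at 10:15."
```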


Day 21: Humor & Flouting Maxims

Concept: Breaking rules (too much info, off-topic, exaggeration) can be funny.

Dialogue:

  • User: Set an alarm for 5 a.m.

  • Assistant: Alarm set. That’s really early. Coffee recommended.

Design Takeaway: Humour works when tasks are complete — delight comes after delivery.


Day 22: Relevance Theory (Sperber & Wilson)

Concept: People assume every utterance is the most relevant thing the speaker could say.

Dialogue:

  • User: What’s the weather?

  • Assistant (bad): Weather reports are a feature of modern forecasting.

  • Assistant (good): It’s sunny right now, rain later at 3 p.m.

Design Takeaway: Design for effort vs reward — every system response should maximise value with minimal cognitive load.


Day 23: Metaphors We Live By (Lakoff & Johnson)

Concept: Our thinking is structured by metaphors (time = money, ideas = food).

Dialogue:

  • User: I’m running out of time.

  • Assistant (bad): You have 2 hours left.

  • Assistant (better): You’ve got 2 hours left — want me to help you save time by shortening the draft?

Design Takeaway: Use conceptual metaphors to make prompts intuitive and human.


Day 24: Multimodality in Conversation

Concept: Humans combine speech, gesture, and visual cues.

Dialogue:

  • User: Navigate to King’s Cross.

  • Assistant (voice only): Head northeast on Euston Road.

  • Assistant (multimodal): Voice: “Turn left ahead.” Screen: map with arrow.

Design Takeaway: The best conversational UIs are often voice + visual + touch, not voice alone.
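
Architecturally this usually means one response object fanned out to several channels, so voice, screen, and touch stay in sync. A minimal sketch of that shape; the field names are my own:

```typescript
// Sketch: a single turn rendered to multiple channels at once.

interface MultimodalResponse {
  speech: string;                               // what gets spoken
  display?: { title: string; image?: string };  // what the screen shows, if there is one
  haptic?: "tap" | "buzz";                      // optional touch cue
}

function render(r: MultimodalResponse, hasScreen: boolean): void {
  console.log(`voice:  ${r.speech}`);
  if (hasScreen && r.display) console.log(`screen: ${r.display.title}`);
  if (r.haptic) console.log(`haptic: ${r.haptic}`);
}

render(
  { speech: "Turn left ahead.", display: { title: "Map: arrow at next junction" }, haptic: "tap" },
  true,
);
```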


Day 25: Conversation as Performance (Goffman)

Concept: Talk is a performance — we present ourselves like actors on a stage.

Dialogue:

  • User: Tell me a joke.

  • Assistant (flat): Why did the chicken cross the road…

  • Assistant (performative): [pause, upbeat tone] Okay, ready? Why did the chicken cross the road…

Design Takeaway: Treat your assistant as a character — tone, timing, style matter as much as content.


Day 26: Narrative & Storytelling in Assistants

Concept: Humans think in stories, not data points.

Dialogue:

  • User: What’s new in this release?

  • Assistant (list): Bug fixes, security patch, UI update.

  • Assistant (story): This release makes your app safer and faster — we fixed major bugs, added a patch, and refreshed the UI.

Design Takeaway: Shape responses as mini-stories to make them memorable.


Day 27: Cognitive Load & Memory

Concept: Working memory holds only a handful of items at once; classic estimates say around seven, and more recent research puts it closer to four.

Dialogue:

  • Assistant (overload): You have 7 tasks: A, B, C, D, E, F, G…

  • Assistant (better): You have 7 tasks. Here are the top 2: A and B. Want the rest?

Design Takeaway: Always chunk information for the ear.
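
In code, “chunk for the ear” is just a cap plus an explicit continuation offer. A sketch mirroring the dialogue above:

```typescript
// Sketch: cap spoken lists at a small chunk and offer the rest,
// instead of reading everything in one breath.

const CHUNK_SIZE = 2; // illustrative: lead with the top items

function speakList(label: string, items: string[]): string {
  if (items.length <= CHUNK_SIZE) {
    return `You have ${items.length} ${label}: ${items.join(", ")}.`;
  }
  const top = items.slice(0, CHUNK_SIZE).join(" and ");
  return `You have ${items.length} ${label}. Here are the top ${CHUNK_SIZE}: ${top}. Want the rest?`;
}

console.log(speakList("tasks", ["A", "B", "C", "D", "E", "F", "G"]));
// → "You have 7 tasks. Here are the top 2: A and B. Want the rest?"
```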


Day 28: Ethical Voice Design

Concept: Voice carries issues of trust, privacy, and bias.

Dialogue:

  • User: Are you recording me?

  • Assistant (bad): [ignores the question].

  • Assistant (good): I only listen when activated. You can review or delete recordings anytime.

Design Takeaway: Be transparent and accountable in voice interactions.


Day 29: Global Voices

Concept: Conversation styles vary worldwide. One system won’t fit all.

Dialogue:

  • User: Please do one thing, set alarm 6 o’clock.

  • Assistant (bad): Sorry, I don’t understand.

  • Assistant (good): Alarm set for 6 a.m.

Design Takeaway: Design for cross-cultural pragmatics — different registers, idioms, and structures.


Day 30: Pulling It All Together

Concept: A conversational style guide brings theory into practice.

  • Principle: Be cooperative.

  • Example: Always answer the user’s question first.

  • Principle: Allow repair.

  • Example: Accept “no, wait…” and adjust.

Design Takeaway: Build your own design grammar for conversation — grounded in linguistics, adapted for your product.
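
One way to make such a style guide enforceable rather than aspirational is to encode each principle as a checkable rule and lint candidate responses against it. A toy sketch to close the challenge; the rules and their checks are invented for illustration:

```typescript
// Sketch: a conversational style guide as executable checks, run
// against candidate responses during design reviews or tests.

interface Rule {
  principle: string;
  check: (userUtterance: string, response: string) => boolean;
}

const styleGuide: Rule[] = [
  {
    principle: "Be cooperative: answer the question first",
    check: (_u, r) => !r.startsWith("Did you know"), // crude proxy for trivia-first replies
  },
  {
    principle: "Quantity: keep it short for the ear",
    check: (_u, r) => r.split(" ").length <= 30,
  },
];

function lint(userUtterance: string, response: string): string[] {
  return styleGuide.filter((rule) => !rule.check(userUtterance, response)).map((r) => r.principle);
}

console.log(lint("What time's the train?", "Did you know trains were invented in the 19th century?"));
// → ["Be cooperative: answer the question first"]
```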