Design Principles
Five principles that govern every screen, interaction, and AI moment in ThriveGuide. Each principle is stated, translated into concrete design practice, given a review litmus test, and paired with anti-patterns that signal violation.
These principles are grounded in the Behavioral Design Philosophy and operationalized through the Moments That Matter framework.
1. The User Is the Center of Gravity
Behavior change is deeply personal — the system must orbit the person, not the other way around.
What It Means in Practice
The experience should feel like it knows you — not like you're feeding a machine. Every interaction should reduce the burden of input and maximize the feeling of being understood.
- The system infers before it asks. Passive signals (wearable data, behavioral patterns, session timing) reduce the need for explicit input.
- Navigation and information architecture orient around the user's current state and context — not around the system's feature taxonomy.
- Personalization is felt, not configured. The user shouldn't need to visit a settings screen to make the experience theirs. Adaptation happens through interaction.
- The coach adapts its frequency, tone, and intensity per user. Some people want daily check-ins. Some want to be left alone until something matters.
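The adaptive-cadence idea above can be sketched in code. This is a minimal, hypothetical illustration — the signal names and thresholds are placeholders invented for this sketch, not product values — showing cadence inferred from behavior rather than set on a settings screen.

```python
from dataclasses import dataclass

@dataclass
class UserSignals:
    """Hypothetical passive signals the system infers rather than asks for."""
    days_since_last_open: int
    avg_sessions_per_week: float
    dismissed_last_checkin: bool

def checkin_cadence(signals: UserSignals) -> str:
    """Pick a coaching cadence from observed behavior, not a configuration toggle.

    Thresholds here are illustrative only.
    """
    if signals.dismissed_last_checkin or signals.days_since_last_open > 7:
        # Leave them alone until something matters.
        return "wait-for-signal"
    if signals.avg_sessions_per_week >= 5:
        return "daily"
    return "few-times-weekly"
```

The point of the sketch is the direction of inference: the user never declares "I want daily check-ins"; the system earns that conclusion from interaction.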
Litmus Test
Could this screen/interaction work without knowing anything about this specific user? If yes, it's not orbiting the person — it's generic.
Ask in design review: "What does this surface know about the user that makes it different from what anyone else would see?"
Anti-Patterns
- Requiring the user to manually configure preferences that could be inferred from behavior
- Showing the same morning card to every user regardless of context, sleep data, or recent activity
- Feature-organized navigation (e.g., tabs labeled "Sleep," "Movement," "Nutrition") that forces the user to self-sort instead of the system contextualizing
- Asking the user to explain something the system should already know from prior interactions
- A coach that uses the same tone and cadence for a power user and a skeptical newcomer
2. Start With What's Universal, Personalize From There
The five Core Health Behaviors are the foundation for everyone. Personalization isn't about building a different app for each user — it's about how the same foundational truths get expressed differently depending on who you are and what you're dealing with.
What It Means in Practice
The five CHBs (sleep, nutrition, movement, stress management, connection) are always the foundation. Personalization is the expression layer — how those universal behaviors surface for this person, in this context, at this moment.
- The system never presents all five CHBs simultaneously as equal priorities. It reads the user's context and foregrounds what matters most right now.
- Condition-informed coaching adjusts the emphasis and framing of universal behaviors — it doesn't create a parallel experience.
- A user managing diabetes and a user optimizing general wellness both interact with the same system. The coaching is different; the architecture is the same.
- Content (Microsteps, resets, articles, recipes) is drawn from the same library but surfaced through personalized coaching logic, not browsing.
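The foregrounding logic can be expressed as a one-line selection over the five CHBs. This is a hypothetical sketch — the per-CHB relevance scores are an assumed input derived upstream from context (e.g., poor sleep data raises the "sleep" score) — but it captures the principle: the foundation is fixed, only the expression layer changes.

```python
# The five Core Health Behaviors: the fixed foundation for every user.
CHBS = ["sleep", "nutrition", "movement", "stress", "connection"]

def foreground_chb(context_scores: dict[str, float]) -> str:
    """Return the single CHB to foreground right now.

    `context_scores` is a hypothetical relevance score per CHB, computed
    from the user's current context. Unscored CHBs default to zero; they
    remain part of the foundation, just not the focus of this moment.
    """
    return max(CHBS, key=lambda chb: context_scores.get(chb, 0.0))
```

Note what the function never does: return all five at once, or return something outside the list.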
Litmus Test
Can you trace this recommendation back to one of the five CHBs? If not, it's outside the system's domain. Is it expressed in a way that reflects this user's specific context? If not, it's generic.
Ask in design review: "Which CHB does this serve, and what makes this version of it specific to the user?"
Anti-Patterns
- Building separate "modes" or "tracks" for different conditions that share no UI, logic, or content with the core experience
- Presenting the five CHBs as a static checklist the user must work through sequentially
- Surfacing content through a browsable library organized by CHB category rather than through contextual coaching
- Treating personalization as a configuration step rather than an emergent property of interaction
- Overwhelming the user with all five dimensions at once rather than foregrounding what's most relevant
3. Progress Over Perfection
Behavior change is nonlinear. People don't fail because of one bad day — they fail because the system stops being relevant to their reality.
What It Means in Practice
The experience celebrates small signals of progress, normalizes setbacks, and never makes the user feel like they've failed. The tone is a coach who believes in you, not a scorecard that judges you.
- Streaks are a double-edged sword: they motivate while intact and demotivate when broken. The system must find ways to represent progress that don't depend on unbroken chains of compliance.
- Missing a day doesn't break the narrative. The system recalibrates and finds something real to acknowledge — even in an imperfect week.
- The Thrive Score reflects direction and trajectory, not absolute position. "Your sleep consistency improved this week" matters more than "Your score is 72."
- Setback moments are met with presence and recalibration, not shame or re-motivation pitches.
- Weekly reflections frame the week as a narrative with nuance — "Here's what shifted" — not as a pass/fail report card.
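One way to make "direction over absolute position" concrete is a framing function that never produces a failure message. This is a minimal sketch under assumed inputs (hypothetical per-day counts of completed Microsteps); the copy strings are placeholders, not product voice.

```python
def weekly_framing(this_week: list[int], last_week: list[int]) -> str:
    """Frame the week by trajectory, never as pass/fail.

    Inputs are hypothetical daily counts of completed Microsteps.
    Even an imperfect week gets something real acknowledged.
    """
    done_now, done_before = sum(this_week), sum(last_week)
    active_days = sum(1 for d in this_week if d > 0)
    if done_now > done_before:
        return f"More momentum than last week, across {active_days} active days."
    if active_days > 0:
        return f"You showed up on {active_days} days. Here's what shifted."
    return "Quiet week. Let's recalibrate to something that fits right now."
```

Every branch acknowledges reality; no branch says "you failed" or resets a counter to zero.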
Litmus Test
If a user missed three days this week, does this screen make them feel like they failed or like they still have momentum? Does the system acknowledge imperfection without dismissing it?
Ask in design review: "What does this look like for a user who had a bad week? Does it still feel supportive and relevant?"
Anti-Patterns
- Streak counters that reset to zero and emphasize the break
- "You missed 5 days!" as a re-engagement message
- Progress visualizations that make partial completion feel like failure (empty rings, unfilled bars)
- Comparing the user's current performance to their best performance in a way that highlights decline
- Requiring perfection to unlock features or content
- An evening reflection that asks "Did you complete all your Microsteps?" with a binary yes/no
4. Value Before Investment
Show the user something useful before asking for anything in return. The first meaningful moment should require near-zero effort.
What It Means in Practice
Trust is built by demonstrating relevance, not by requesting commitment. The system front-loads value and defers asks.
- Within 5 minutes of sign-up, the user receives one tailored Microstep and makes a lightweight commitment. No 30-minute onboarding. No extensive health assessments.
- Onboarding is a continuous workflow — not a gate. The system asks just enough to personalize, then learns progressively through interaction.
- The first coach interaction should feel like receiving a gift — a relevant insight, a smart recommendation — not filling out a form.
- Wearable connection, detailed goal-setting, and preference configuration are offered at natural moments after the user has experienced value, not as prerequisites.
- Every ask has a visible payoff. "Tell me about your sleep" should immediately produce a relevant coaching response, not a "thank you, we'll use this later."
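The value-before-ask rule can be enforced mechanically with a running ledger. This is a hypothetical sketch — the unit-counting of "value delivered" and "inputs requested" is an assumed simplification — but it shows a gate that defers asks (wearable connection, profile questions) until the system has given more than it has taken.

```python
class ValueLedger:
    """Hypothetical running tally of value delivered vs. input requested."""

    def __init__(self) -> None:
        self.value_delivered = 0
        self.inputs_requested = 0

    def record_value(self) -> None:
        """Call when the user receives something useful (insight, Microstep)."""
        self.value_delivered += 1

    def may_ask(self) -> bool:
        # An ask is premature if it would push the ratio below 1:1.
        return self.value_delivered > self.inputs_requested

    def ask(self) -> bool:
        """Attempt an ask; returns False (and defers) when unearned."""
        if not self.may_ask():
            return False
        self.inputs_requested += 1
        return True
```

Used this way, Day-1 asks fail by construction: nothing has been delivered yet, so nothing can be requested.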
Litmus Test
At this point in the experience, has the system given the user more than it has asked of them? If the ratio of value delivered to input requested is below 1:1, the ask is premature.
Ask in design review: "What has the user received before we reach this ask? Would they feel the system has earned the right to request this?"
Anti-Patterns
- Multi-screen onboarding flows that collect data before delivering any value
- Requiring wearable connection or health condition disclosure before first coaching interaction
- "Complete your profile" prompts on Day 1
- Permission requests (notifications, health data) without first demonstrating why they matter
- Sign-up flows that end with "You're all set!" instead of an immediate first action
- Treating onboarding completion as a milestone — the user doesn't care about setup; they care about the first moment that feels personal
5. Make the Invisible Visible
The system sees patterns the user can't — connections between sleep and stress, the impact of consistency over intensity, the moment an objective should shift. Surface these insights at moments when they create meaning, not noise.
What It Means in Practice
The coach's highest-value work is connecting dots across time, behaviors, and data sources that the user would never see on their own. This is the core differentiator from generic AI tools.
- Cross-CHB insights are surfaced when they're actionable: "Your stress levels drop on days you walk in the morning" is only valuable when the user is planning their morning.
- Weekly reflections synthesize patterns from daily data — the user sees the story of their week, not a data dump.
- The Thrive Score creates a single, comprehensible signal from complex multi-dimensional data. The complexity is hidden; the insight is clear.
- Wearable data is referenced by the coach as context for recommendations — not presented as raw metrics for the user to interpret.
- The system detects when context has shifted (declining engagement, life event, seasonal change) and initiates recalibration before the user disengages.
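Timing is the gate that separates insight from noise, and it can be modeled as a window check. This is a minimal sketch with hypothetical names: the topic key and its action window are assumed inputs produced elsewhere by the coaching logic.

```python
from datetime import time

def should_surface(
    insight_topic: str,
    now: time,
    action_windows: dict[str, tuple[time, time]],
) -> bool:
    """Surface an insight only while the user can still act on it.

    `action_windows` maps a topic to a hypothetical window of actionability,
    e.g. a morning-walk insight is only useful while the morning is plannable.
    Topics with no known window are held back rather than surfaced at random.
    """
    window = action_windows.get(insight_topic)
    if window is None:
        return False
    start, end = window
    return start <= now <= end
```

The default-deny behavior matters: an accurate insight with no actionable moment stays unsurfaced, per the anti-patterns below.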
Litmus Test
Does this insight tell the user something they didn't already know? Does it arrive at a moment when they can act on it? If the user could have figured this out themselves, it's not making the invisible visible — it's restating the obvious.
Ask in design review: "What non-obvious connection is being surfaced here? Is the timing right for it to create meaning rather than noise?"
Anti-Patterns
- Showing raw wearable data (heart rate charts, sleep stage graphs) without coaching interpretation
- Surfacing insights at random times with no connection to the user's current context or moment
- Presenting all available data and expecting the user to draw their own conclusions
- Dashboards that show metrics without narrative ("Your HRV was 45ms" means nothing without "...which is higher than your baseline, suggesting your stress management this week is working")
- Insights that are accurate but not actionable ("You slept less on weekdays" without "Here's a Microstep for your Wednesday evening routine")
- Pattern observations that arrive too late to be useful
Using These Principles
These five principles are not ranked — they operate simultaneously. When they create tension with each other (e.g., "Make the invisible visible" could conflict with "The user is the center of gravity" if an insight is unsolicited), resolve the tension by returning to the Behavioral Design Philosophy: does this serve the healthy flywheel? Does it build trust or erode it?
In design reviews, every surface should be evaluated against all five:
- Is it personalized to this user's context? (Center of gravity)
- Is it grounded in a CHB and expressed for this person? (Universal → personal)
- Does it handle imperfection gracefully? (Progress over perfection)
- Has the system earned the right to show/ask this? (Value before investment)
- Does it surface something the user couldn't see alone? (Invisible → visible)
If a design passes all five, it belongs. If it fails one, investigate. If it fails two or more, redesign.
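The decision rule above is simple enough to encode directly. A minimal sketch, assuming a review checklist keyed by the five principles (the key names are shorthand invented here):

```python
def review_verdict(checks: dict[str, bool]) -> str:
    """Apply the rule: pass all five, it belongs; fail one, investigate;
    fail two or more, redesign."""
    failures = sum(1 for passed in checks.values() if not passed)
    if failures == 0:
        return "belongs"
    if failures == 1:
        return "investigate"
    return "redesign"
```

In a review, `checks` would hold one boolean per question in the list above.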