AI that understands
how people feel

Psyche is building the foundational model for emotional cognition. Voice-first. Grounded in physiology. Because the most important signals are the ones you can't fake.

Talk to us
Read our research

AI has no body. That's the gap no model can close.

Today's AI can reason, generate, and predict. But it can't feel a racing heartbeat, detect a tremor in someone's voice, or sense the stress that never makes it into words.

Emotional intelligence isn't a software problem. It's a measurement problem. The signals that reveal how someone truly feels — heart rate variability, vocal micro-tremors, electrodermal response — are physiological. They happen inside the body. No model gets access by default.

Psyche is building that access layer.

Where Psyche fits

The signal layer is general. Anywhere a person speaks to an interface, there's emotional data being left on the table.

Telehealth

Restore the cues video takes away

Clinicians on video calls lose the body-language signals they rely on in person. Voice-derived physiology brings them back.

Clinical research

Continuous physiological ground truth

Affective and mental-health studies that need real, in-the-moment signals at scale — not self-report on a five-point scale.

Voice agents

User state, not just user words

AI assistants and support systems that adapt to frustration, confusion, or distress before they escalate.

Coaching & therapy

Visibility between sessions

Signal-based feedback that gives practitioners a view into what happens when they're not in the room.

Interactive demo

Hear what your voice reveals

Record a short clip and see the emotional signals underneath — pitch, energy, speech dynamics, and the physiological layer we're building next.

Try the demo
Demo layers: Voice · Pitch · HRV (physiological layer requires paired sensor data)
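To make the demo's "pitch, energy, speech dynamics" concrete, here is a minimal sketch of how those first two features can be estimated from a short audio clip. This is an illustrative stand-in, not Psyche's actual pipeline: it uses plain NumPy, a synthetic 180 Hz tone in place of a recorded voice, and a simple autocorrelation pitch estimator.

```python
import numpy as np

def frame_features(signal, sr, frame_len=2048):
    """Split a mono signal into frames; estimate RMS energy and pitch per frame."""
    feats = []
    for start in range(0, len(signal) - frame_len + 1, frame_len):
        frame = signal[start:start + frame_len]
        rms = np.sqrt(np.mean(frame ** 2))          # loudness proxy
        # Autocorrelation pitch estimate, searching lags for 60-400 Hz voices
        ac = np.correlate(frame, frame, mode="full")[frame_len - 1:]
        lo, hi = sr // 400, sr // 60
        lag = lo + int(np.argmax(ac[lo:hi]))
        feats.append((rms, sr / lag))
    return feats

sr = 16000
t = np.arange(sr) / sr                      # one second of audio
clip = 0.5 * np.sin(2 * np.pi * 180 * t)    # synthetic 180 Hz "voice"
for rms, f0 in frame_features(clip, sr)[:3]:
    print(f"rms={rms:.3f}  f0={f0:.0f} Hz")
```

Real systems would add windowing, voicing detection, and more robust pitch trackers (e.g. YIN-style methods), but the frame-by-frame structure is the same.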

How Psyche works

Voice is the universal sensor

Everyone has a phone. No one needs a wearable. Voice carries more emotional information than any other signal you can capture at scale — and it's already being recorded in every telehealth call, support session, and clinical encounter.

Real signals, not performed ones

Most emotion AI trains on actors, crowdsourced reactions, and performative data. We work with live, non-performative physiological signals — what's actually happening inside someone, not what they're showing the world.

A measurement layer that endures

AI capabilities will keep advancing. Reasoning will improve. But no model gains a body. The layer that connects AI to human physiology remains necessary — and becomes more valuable — as everything else gets smarter.

Stimulus channels

What's happening to the person

  • Voice & speech
  • Text & language
  • Video & facial expression
  • Audio environment

Response channels

What's happening inside the person

  • Cardiac & heart rate variability
  • Electrodermal activity
  • Vocal micro-prosody
  • Movement & gaze patterns
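As an illustration of what a cardiac response channel yields, here is a minimal sketch of RMSSD, a standard time-domain HRV metric computed from beat-to-beat (RR) intervals. The interval values are hypothetical, and this is not a claim about Psyche's own metrics.

```python
import math

def rmssd(rr_intervals_ms):
    """RMSSD: root mean square of successive RR-interval differences (ms).
    A standard time-domain HRV metric; higher values generally reflect
    stronger parasympathetic (rest-state) activity."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Hypothetical beat-to-beat intervals in milliseconds (~75 bpm)
rr = [812, 790, 830, 805, 798, 821]
print(f"RMSSD = {rmssd(rr):.1f} ms")  # prints "RMSSD = 25.6 ms"
```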

Research roots.
Startup urgency.

Psyche grew out of Carnegie Mellon's Human-Computer Interaction Institute, where our research in wearable sensing and affective computing has been published and peer-reviewed.

A small team in Pittsburgh, building at the intersection of signal processing, machine learning, and human physiology.

Carnegie Mellon HCII · Pittsburgh, PA

Let's talk

We're looking for design partners, clinical collaborators, and people who believe emotional intelligence shouldn't be left out of AI.