She sees your world, remembers your style, and pulls the perfect solution from an infinite pocket of skills.
Coming soon to Kickstarter.
Sign up for early access
and get 33% off at launch!
Pophie knows when to engage, when to offer help, and when to stay out of your way.
Sight, sound, and context combine into real-world awareness. Pophie knows who's talking and what's happening.
Through your personal interactions and a library of community updates, Pophie keeps getting better.
"Is this parsley or cilantro?"
"Tell me a bedtime story — but make it about me as a prince."
"How do I look?"
"Red doesn't go with this jacket."
"Sum up only what Linda says."
"It's been 20 minutes. Time to take an eye break."
"Explain to her how to solve, don't solve for her."
Designed to feel alive, not robotic. Pophie communicates through body language before she ever says a word.
She can be mischievous, excited, quiet, or even a little dramatic, depending on how you interact with her over time.
Vision, spatial audio, touch, and context fuse into continuous scene understanding.
A large-model-driven system decides whether, when, and how to act.
A real-time affect engine modulates voice, gaze, and response intensity so reactions feel natural, not mechanical.
Non-verbal communication through silent 5-DOF motion, gaze, breathing rhythm, and light cues.
New abilities arrive through community updates as independent Skills, with no changes to hardware or core behavior.