Capture feeling. Track it over time.

Open-source web components grounded in affective science. Framework-agnostic. MIT licensed.

npm install affect-kit

<affect-kit-rater> → commit event.detail → <affect-kit-result>

import 'affect-kit/rater';
import 'affect-kit/result';

// In your HTML:
// <affect-kit-rater></affect-kit-rater>
// <affect-kit-result show-face show-labels color-mode></affect-kit-result>

const rater = document.querySelector('affect-kit-rater');
const result = document.querySelector('affect-kit-result');
rater.addEventListener('change', (e) => result.rating = e.detail);

Why this exists

There's a gap in how we measure feeling

Today's options for capturing emotional state pull in opposite directions. Neither works for quick, repeated, universal self-report.

Too blunt

"How are you feeling? (1–10)"

Not grounded in any validated model of emotion. A 7 before and a 7 after can mean entirely different things. No nuance, no structure, no data you can meaningfully act on.

Too clinical

PHQ-9, GAD-7, PANAS

Rigorously validated, but designed for clinical populations and specific pathological states. They take 10–20 minutes to administer and produce scores that are opaque to the people filling them out. A PHQ-9 score of 14 means something to a clinician. It means nothing to the patient.

There is no quick, evidence-based, universal way to capture how someone feels before and after an experience, in plain language they recognize, with structured data you can actually use. affect-kit is built to fill that gap.

Web-standard

Built with Lit. Custom elements that work in React, Vue, Angular, SvelteKit, or plain HTML. No framework required.

Science-grounded

Label selection is standardized against a peer-reviewed lexicon. V/A/D coordinates come from the NRC VAD Lexicon's 20,000 entries; the face is built on FACS action units validated across 17 cultures.

Logbook-faithful

Records, not verdicts. Every Rating is a structured, time-stamped snapshot. No derived scores, no clinical labels. Your data, your interpretation.
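To make "structured, time-stamped snapshot" concrete, here is a minimal sketch of what a Rating might look like. The field names (v, a, d, labels, timestamp) are assumptions for illustration, not the library's published schema.

```javascript
// Hypothetical Rating shape — field names are assumptions, not the
// library's actual schema.
function makeRating(v, a, d, labels) {
  return {
    v,                                   // valence, -1..1
    a,                                   // arousal, -1..1
    d,                                   // dominance, -1..1
    labels,                              // up to 5 intensity-rated words
    timestamp: new Date().toISOString(), // a record, not a derived score
  };
}

const rating = makeRating(0.6, -0.5, 0.3, [{ word: 'content', intensity: 0.7 }]);
console.log(rating.labels[0].word); // 'content'
```

The point of the shape is what it omits: no composite score, no clinical category, just the coordinates and words as entered.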

The face glyph

One continuous face, not categories

Driven by valence (positive ↔ negative) and arousal (energized ↔ calm). Based on FACS action units validated across 17 cultures.

anger v=−0.8, a=0.8
anxious v=−0.4, a=0.5
neutral v=0, a=0
content v=0.6, a=−0.5
joy v=0.8, a=0.8
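The named points above are illustrative anchors, not discrete categories the face snaps to. Purely for illustration, here is how a V/A position relates to them by Euclidean distance — this is not the library's rendering logic:

```javascript
// The five example anchor points from the table above.
const anchors = [
  { name: 'anger',   v: -0.8, a:  0.8 },
  { name: 'anxious', v: -0.4, a:  0.5 },
  { name: 'neutral', v:  0,   a:  0   },
  { name: 'content', v:  0.6, a: -0.5 },
  { name: 'joy',     v:  0.8, a:  0.8 },
];

// Nearest anchor by Euclidean distance in V/A space.
function nearestAnchor(v, a) {
  return anchors.reduce((best, p) =>
    Math.hypot(p.v - v, p.a - a) < Math.hypot(best.v - v, best.a - a) ? p : best
  );
}

console.log(nearestAnchor(0.7, 0.6).name); // 'joy'
```

The glyph itself interpolates continuously, which is why two nearby positions produce two slightly different faces rather than the same canned emoji.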

Four components

<affect-kit-compare>

Two snapshots side-by-side, or two arrays of ratings averaged. Same face + word + color vocabulary as <affect-kit-result>. No comparison metric, no "improvement" claim.

Docs →
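A sketch of the "two arrays of ratings averaged" idea: reduce each array to a single mean V/A point before rendering. The arithmetic-mean choice here is an assumption about how the component aggregates; check the docs for the actual behavior.

```javascript
// Collapse an array of ratings to one mean V/A point (assumed
// aggregation — not necessarily what <affect-kit-compare> does).
function meanRating(ratings) {
  const n = ratings.length;
  return {
    v: ratings.reduce((sum, r) => sum + r.v, 0) / n,
    a: ratings.reduce((sum, r) => sum + r.a, 0) / n,
  };
}

console.log(meanRating([{ v: 0.25, a: 0.5 }, { v: 0.75, a: 0 }]));
// { v: 0.5, a: 0.25 }
```

Note the output is still just a position, consistent with the component's stance: no comparison metric, no "improvement" claim.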

<affect-kit-rater>

The primary input. Drag on the V/A pad to set a gut-feeling position, then intensity-rate up to 5 labels from the NRC VAD lexicon. Fires a change event on commit.

Docs →

<affect-kit-result>

Renders a committed Rating as a face glyph, dominant label word, and optional color chip. Pair with <affect-kit-rater> or drive from stored data.

Docs →

<affect-kit-face>

Standalone face glyph driven by v and a props. Any size via CSS. Animates breath + tremor by default.

Docs →

Scientific foundations

Words are the measurement. The face sorts them.

Emotion science's most validated instrument is the labeled word: frustrated, content, anxious. The NRC VAD Lexicon scores 20,000 of them on valence, arousal, and dominance. The hard part has never been the list. It's finding the right word in the moment without the list itself biasing the answer.

affect-kit makes label selection a standardized act. A pre-verbal gesture orients you in V/A space and re-sorts the lexicon so the closest words rise first. You refine by tapping the ones that fit. A single commit writes a structured Rating. The face is a fancy sorter; the labels are the data. Dominance — the sense of agency — is preserved alongside, distinguishing frustrated from anxious even when their V/A coordinates look identical.
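The re-sorting idea can be sketched in a few lines. The four-entry lexicon and its V/A/D values below are placeholders standing in for the NRC VAD Lexicon's ~20,000 entries; the sort key is plain Euclidean distance in V/A space, an assumption about the ranking method:

```javascript
// Tiny placeholder lexicon — real entries come from NRC VAD.
const lexicon = [
  { word: 'frustrated', v: -0.6, a:  0.6, d:  0.4 },
  { word: 'anxious',    v: -0.6, a:  0.6, d: -0.5 },
  { word: 'content',    v:  0.6, a: -0.5, d:  0.4 },
  { word: 'joyful',     v:  0.8, a:  0.7, d:  0.5 },
];

// Re-sort so the words closest to the V/A gesture rise first.
function sortByGesture(entries, v, a) {
  return [...entries].sort(
    (x, y) => Math.hypot(x.v - v, x.a - a) - Math.hypot(y.v - v, y.a - a)
  );
}

// A gesture toward negative valence / high arousal surfaces the two
// words with near-identical V/A; dominance (d) is what tells them apart.
console.log(sortByGesture(lexicon, -0.5, 0.5).map(e => e.word));
// ['frustrated', 'anxious', 'joyful', 'content']
```

Here 'frustrated' and 'anxious' tie on V/A distance, which is exactly why the Rating preserves dominance alongside the coordinates.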

Explore the research foundations →

Get started

npm install affect-kit

View on GitHub →