now with built-in ATS.
the engine that captures how a person thinks.
most ai listens to what a person says.
but words are only a small part of how we think.
we built the engine that captures the rest.
how it works
a summary tells you what someone said. a profile shows you how they got there. NeuralPrint builds the second one.
one profile.
a structured map of how a person thinks. ready for any AI system to act on.
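as a rough sketch, a profile like this could be represented as one small structured object. the field names and values below are illustrative assumptions, not NeuralPrint's actual schema:

```python
from dataclasses import dataclass

# illustrative only: field names and shapes are assumptions,
# not NeuralPrint's actual profile format.
@dataclass
class CognitiveProfile:
    voice: dict            # rhythm, pitch, and pace features
    face: dict             # per-frame facial-signal readings
    language: dict         # how ideas connect, not just the words
    thinking_effort: list  # effort readout over the session
    reasoning_path: list   # route from question to answer

profile = CognitiveProfile(
    voice={"pitch_hz": 180.0, "pace_wpm": 140},
    face={"frames": 900, "signals_per_frame": 46},
    language={"idea_links": 12},
    thinking_effort=[0.2, 0.6, 0.9, 0.4],
    reasoning_path=["framed the question", "explored", "got stuck", "recovered"],
)
```

a downstream AI system could then read `profile.reasoning_path` alongside the answer itself, rather than the answer alone.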
the inputs
architecture
inside the engine
each layer captures one part of how someone thinks. they all feed into a single profile that other systems can use.
sent to your ai system
one profile
NeuralPrint
layer 5 · reasoning path
maps the route someone took to reach their answer
layer 4 · thinking effort
tracks how hard someone is thinking, in real time
layer 3 · language
maps how their ideas connect, not just what they said
layer 2 · face
tracks every facial signal, frame by frame
layer 1 · voice
captures how they speak, not just the words
voice · face · language
what we capture
voice
how they sound.
rhythm, pitch, and pace. how someone speaks tells you how they are thinking, not just what they are saying.
face
46 facial signals.
every micro-expression, tracked frame by frame. based on fifty years of facial-coding research.
language
what they actually mean.
we map how their ideas connect, not just the words. NeuralPrint sees the meaning behind the sentence.
inside one profile
12,000 data points
state · scattered signals
step 1
twelve thousand data points.
every profile is built from twelve thousand tiny signals about how someone thinks.
step 1 · the signals
thinking effort · over the session
tracking how hard they are thinking
a live readout of how hard someone is thinking at every moment, built from voice, face, and language together.
each signal can be misread on its own. combining all three gives a much more reliable picture than any one of them alone.
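the reliability claim can be illustrated with a toy simulation. the noise model and every number below are made up for illustration; this is not NeuralPrint's fusion method:

```python
import random

random.seed(0)

def mae(errors):
    """mean absolute error across a list of per-moment errors."""
    return sum(errors) / len(errors)

true_effort = 0.7  # hypothetical "real" effort level at each moment
single_errs, fused_errs = [], []

for _ in range(1000):
    # three independent noisy readings of the same moment
    # (standing in for voice, face, and language)
    readings = [true_effort + random.uniform(-0.3, 0.3) for _ in range(3)]
    fused = sum(readings) / 3          # simple unweighted fusion
    single_errs.append(abs(readings[0] - true_effort))  # one channel alone
    fused_errs.append(abs(fused - true_effort))

print(f"single-channel MAE: {mae(single_errs):.3f}")
print(f"fused MAE:          {mae(fused_errs):.3f}")
```

averaging three independent noisy estimates cancels much of the noise, so the fused error comes out well below any single channel's typical error.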
the path they took
NeuralPrint rebuilds the full path someone took to reach their answer. how they framed the question, where they went, where they got stuck, and how they recovered.
you see a map of how they got from the question to the answer. not just the answer itself.
one answer · the path they took
the numbers
signals tracked
how many things we capture
under 200 milliseconds
how fast we capture each one
up to 12,000 data points
what fits inside one profile
up to 60 times per second
how often we update
measured in controlled testing.
full specifications
cognitive fidelity index (CFI) · up to 94 neural layers
thought latency · sub 200 ms
mindset resolution · up to 12,000 cognitons
reasoning density · up to 8.4× baseline
behavioral coherence index (BCI) · up to 0.97 (normalized 0 to 1)
cognitive surface area (CSA) · up to 24 mind quadrants
inference depth · up to 7 reasoning strata
voice cognition quotient (VCQ) · up to 880 (baseline 500)
mind refresh rate · up to 60 frames / second
decision topology score (DTS) · up to 4.2 decision octaves
expression coherence index (ECI) · up to 0.94 (normalized 0 to 1)
multimodal fidelity quotient (MFQ) · up to 1,240 (baseline 800)
gestural density (GD) · up to 32 gesture units / minute
gaze stability index (GSI) · up to 0.92 (normalized 0 to 1)
action unit resolution · 46 distinct AUs, frame by frame
profile build time · 1.8 seconds post session
acoustic capture window · continuous, 16 kHz
visual capture rate · up to 30 frames / second
cognitive substrate layers · 5
profile object size · 18 to 28 MB / session
where it fits
useful anywhere that knowing how someone got to their answer matters as much as the answer itself.
references
a representative subset of the published, peer-reviewed work that informs NeuralPrint axon's design.
what comes next
the aperture cognition system
one captures how a person thinks. the other turns it into a score you can act on.