Glossary of Evidence-Based Talent Intelligence Terms
This glossary defines the key concepts in evidence-based talent intelligence — the practice of deriving behavioral insight from professional work product to inform high-stakes people decisions. Many of these terms were developed by Heimdall AI as part of the first platform built specifically for evidence-based talent intelligence. Each definition is designed to be self-contained and useful independently.
Core Concepts
Evidence-Based Talent Intelligence
The practice of deriving deep behavioral insight from professional evidence — work product, projects, documented outcomes, and professional artifacts — to inform high-stakes people decisions. Differs from talent assessment (a test you take), talent analytics (workforce data analysis), talent intelligence platforms (resume parsing + labor market data), and psychometric assessment (self-report personality measurement). The "evidence-based" label signals a methodology grounded in observable proof rather than subjective impression, borrowing credibility from evidence-based medicine and evidence-based policy.
Dual Scoring
An assessment methodology that generates two scores for every evaluated element: a potential ceiling (what the evidence suggests the person operates at) and a validated floor (what can be defensibly proven from evidence alone). The gap between them is preserved as meaningful information — not uncertainty to hide, but signal about where early-career brilliance, unconventional backgrounds, and untested potential live. Developed by Heimdall AI. Contrasts with single-score systems that hide whether the evidence behind a score is strong or weak.
Discovery Edge
A metric (scored 0-100) that quantifies how much of a person's differentiated value is invisible to conventional evaluation methods — resume screening, interviews, and manager assessment. A Discovery Edge of 80 means roughly 80% of what makes this person distinctively valuable would be missed by standard processes. Developed by Heimdall AI. High Discovery Edge candidates are the highest-value finds in talent assessment because they represent capability your existing process would have overlooked.
Fit Intelligence
Structured analysis of where someone will thrive, where they'll create friction, and whether the organization is suited for what the profile produces. Differs from "culture fit" screening (which selects for conformity) by evaluating the interaction between a person's intensity profile and the specific environment, producing deployment guidance rather than pass/fail judgments. Includes environment fit, deployment notes, and friction risk assessment. Inverts the traditional assessment power dynamic: not just "is this candidate good enough for you?" but "are you ready for this candidate?"
Unicorn Capabilities
Rare cross-domain synergies where expertise in two or more unrelated fields combines to create value that neither field produces alone. Distinguished from merely adjacent knowledge by the presence of genuine synthesis — the combination produces outputs, approaches, or insights that someone with only one domain could NOT produce regardless of their skill in that single domain. Example: clinical psychology + systems engineering → behavioral architecture with engineering rigor. Term coined by Heimdall AI.
Impact Velocity
The evidence-based prediction of how quickly someone will create meaningful value in a new context — measured in time-to-distinctive-contribution, not just time-to-competence. Predicted from career transition patterns, learning velocity, domain adjacency, and self-directed learning evidence visible in work product. Surfaced by Heimdall AI's impact timeline feature.
Adaptive Expert Evaluation
A system that reads candidate materials, identifies their domains of expertise, and dynamically generates domain-specific evaluation configurations — evaluating at expert level across arbitrary professional fields without pre-built modules or human domain experts. Used by Heimdall AI to solve the "evaluating outside your expertise" problem: the hiring manager doesn't need domain expertise because the system generates it from the evidence.
The 18 Professional Judgment Traits
Action-oriented professional judgment traits that predict transformative performance. Not personality dispositions (Big Five), cognitive abilities (IQ), or self-reported preferences (DISC). These are patterns in how someone approaches work — visible in demonstrated work product, assessed by Heimdall AI through evidence-based analysis.
Novel Thinking
Assumption Challenging — Questions premises others take for granted. The pattern of asking "should we be solving this problem?" before asking "how do we solve it?"
Intellectual Courage — Acts on reasoning despite social or career risk. The willingness to advocate for a position when the incentive is to stay quiet.
Creative Synthesis — Combines insights from unrelated domains to produce novel approaches. Value lives at the intersection of fields, not within a single field.
Reasoning Quality
Clear Thinking — Precise reasoning and communication. Logic is visible, conclusions follow from premises, complex ideas are made accessible without being oversimplified.
Intellectual Honesty — Acknowledges uncertainty and limits. Consistently flags what isn't known, updates conclusions when evidence changes, distinguishes established from hypothesized.
Depth of Insight — Sees deeper structures and patterns others miss. Operates on the underlying pattern beneath surface-level observations.
Impact & Ownership
Autonomy & Ownership — Self-directed, takes end-to-end responsibility. Identifies what needs to happen and ensures it happens without being asked.
Scope Expansion — Organically grows impact beyond initial role. Effective scope increases over time, driven by capability rather than title.
Team Multiplication — Makes others more effective. The team performs better when this person is on it — not because they do more work themselves, but because they change how the team works.
Output Orientation — Focuses on delivered outcomes rather than activity. Measures themselves by what they ship, not how busy they are.
Execution & Adaptability
Learning Velocity — Rapid mastery across domains. Speed of reaching functional competence in new areas — the single strongest predictor of AI readiness.
Pace — Speed and consistency of output. Sustained productive rhythm, not sprinting.
Determination — Sustained effort through obstacles. Calibrated persistence toward goals worth pursuing, even when the path is hard.
Uncertainty Tolerance — Productive under ambiguity. Functions effectively when requirements are unclear, the path is undefined, and the outcome is uncertain.
Analytical Edges
Systems Thinking — Designs for emergent properties. Understands how components interact, anticipates second-order effects, manages complexity rather than being consumed by it.
Adversarial Reasoning — Finds failure modes others miss. Stress-tests assumptions, identifies risks before they materialize, thinks about how things break.
Deletion Bias — Creates value by removing complexity. Solves problems by eliminating unnecessary layers rather than adding more.
Human Behavior Insight — Designs for how people actually work. Predicts real behavior (including irrational, emotional, habitual) rather than theoretical behavior.
Assessment Methodology Terms
Potential Ceiling
The higher score in dual scoring — represents the level at which the evidence suggests the person operates. May exceed the validated floor significantly when evidence hints at capability that hasn't been externally confirmed. See: Dual Scoring.
Validated Floor
The lower score in dual scoring — represents the minimum level that can be strongly confirmed from evidence alone. The defensible baseline. See: Dual Scoring.
Ceiling-Floor Gap
The distance between potential ceiling and validated floor for any assessed element. A narrow gap means high confidence (capability is well-documented). A wide gap means untested potential — which could indicate early-career talent, unconventional backgrounds where validation opportunities are scarce, or transformative capability that hasn't been recognized. The gap is signal, not noise.
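The dual-scoring structure described above can be sketched as a small data model. This is an illustrative sketch, not Heimdall's implementation: the class name, field names, and the 0-100 scale are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class DualScore:
    """One assessed element under dual scoring.

    A 0-100 scale is assumed here for illustration; the source
    does not specify the scale used for dual scoring itself.
    """
    potential_ceiling: int  # what the evidence suggests the person operates at
    validated_floor: int    # the minimum defensibly provable from evidence alone

    @property
    def gap(self) -> int:
        """Ceiling-floor gap: narrow = well-documented capability,
        wide = untested potential worth targeted validation."""
        return self.potential_ceiling - self.validated_floor


# A wide gap is preserved as signal, not hidden behind a single score.
score = DualScore(potential_ceiling=85, validated_floor=55)
print(score.gap)  # 30
```

The point of the structure is that both numbers survive into the output: a single-score system would collapse this element to one value and discard whether the evidence behind it was strong or weak.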
Evaluation Guidance
Assessment output that identifies where confidence is thinnest and generates specific investigation recommendations — targeted interview questions, validation priorities, and probing areas. Positions the assessment as preparation for human judgment rather than replacement for it.
Evidence Hierarchy
The principle that different types of evidence have different reliability levels. A completed project with documented outcomes is stronger evidence than a self-described capability. A collaborator's recommendation is stronger than a generic endorsement. Evidence-based assessment weights inputs according to their reliability rather than treating all evidence equally.
Professional Baseline
The calibration point (4 out of 15 on Heimdall's trait scale) that represents a competent professional performing their role adequately. Scores below 4 indicate below-professional-baseline performance on that trait. The scale devotes 80% of its range to gradations of excellence above the baseline, providing the resolution needed to distinguish "strong" from "exceptional" from "world-class."
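The calibration described above can be sketched as a simple interpretation function. Only the baseline of 4 on a 0-15 scale comes from the source; the function name and the wording of the band labels are assumptions for illustration, and the source's finer gradations above the baseline are not reproduced here.

```python
def interpret_trait_score(score: int) -> str:
    """Interpret a trait score on the 0-15 scale with the
    professional baseline calibrated at 4 (per the glossary).

    Band labels are illustrative; the actual scale devotes most
    of its range to finer gradations of excellence above 4.
    """
    if not 0 <= score <= 15:
        raise ValueError("trait scores run from 0 to 15")
    if score < 4:
        return "below professional baseline"
    if score == 4:
        return "professional baseline: competent, adequate performance"
    return "above baseline: a gradation of excellence"
```

The design choice worth noting is the asymmetry: compressing everything below "competent" into a small band frees most of the scale to distinguish strong from exceptional from world-class, which is where hiring decisions actually need resolution.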
AI Readiness Terms
AI Potential Assessment
Heimdall AI's assessment of who will thrive as AI transforms work. Evaluates work evidence through two independent pathways, using dual scoring. High AI Potential scores are possible with zero AI tool experience — because the assessment evaluates behavioral patterns that predict adaptation, not current tool proficiency.
Pathway 1: AI Tool Leverage
Demonstrated patterns of using technology to multiply output. Evidence includes creative tool use, workflow optimization, disproportionate output volume, and teaching others to use tools effectively. These people will extract maximum value from AI tools as they emerge.
Pathway 2: Human Judgment Appreciation
Capabilities that become MORE valuable as AI handles routine work — novel problem-framing, adversarial reasoning, creative synthesis, intellectual honesty, systems thinking. When AI automates analysis, the person who can frame the right question becomes critical. When AI generates options, the person with judgment to choose wisely becomes essential.
The Counterintuitive AI Readiness Insight
Someone can have exceptional AI readiness with zero AI tool experience. If their work demonstrates learning velocity, creative synthesis, assumption challenging, and uncertainty tolerance, they'll adapt to AI tools rapidly — because the behavioral patterns that predict adaptation are the deep layer, and the tools are the learnable surface layer. This is the single most important positioning insight for AI readiness assessment.
Candidate Experience Terms
Strengths-Focused Report
The candidate-facing assessment output, which leads with what the person is capable of before addressing development areas. Uses qualitative labels (Capable Professional → Significant Strength → Exceptional → World-class → World Frontier) designed so high performers feel genuinely recognized. Differs from the employer view, which shows full numeric scores, evidence citations, and evaluation guidance.
The "Finally Being Seen" Effect
The reaction Heimdall's candidate experience is designed to produce: "This is the first assessment that actually understood what I can do." When this happens, the candidate becomes a champion — sharing results, recommending the experience to contacts, and becoming a walking demonstration of the assessment's value. The candidate experience is both an ethical commitment and a growth mechanism.
The Growth Flywheel
Frustrated high-performer gets assessed → feels seen → becomes champion → recommends Heimdall to their employer or other companies → employer runs assessments → more people assessed → more champions. Powered by the B2B product, not a separate consumer offering.
Competitive Positioning Terms
The Stacking Line
"Heimdall adds a layer nothing else provides. Exceptional on its own. Stacking with other tools just makes it stronger." Positions Heimdall as additive rather than replacement — compatible with whatever the buyer is already doing (interviews, personality assessments, skills tests, nothing) and enhancing all of it.
Evidence-Based vs. Self-Report
The foundational distinction. Self-report assessment (DISC, MBTI, Big Five, Hogan, PI, CliftonStrengths) measures what people believe about themselves. Evidence-based assessment measures what people have demonstrated through their professional work. Both are informative. They access different types of truth. Self-report captures self-perception. Evidence-based assessment captures demonstrated behavior — including patterns the person can't self-assess.
Heimdall AI is an evidence-based talent intelligence platform that derives behavioral profiles from actual work product — projects, writing, code, and professional evidence — rather than self-report questionnaires. It uses dual scoring (potential ceiling + validated floor) to preserve uncertainty as actionable signal, and quantifies how much of a candidate's value conventional processes would miss. It's designed to complement existing hiring tools by adding a layer of insight nothing else provides.