Additive Multi-Attribute Value Theory in Talent Acquisition
In talent acquisition, hiring decisions often involve evaluating candidates across multiple dimensions: skills, experience, education, certifications, and cultural fit. Traditional hiring processes rely heavily on subjective judgment, leading to inconsistent decisions and potential bias. Additive Multi-Attribute Value Theory (MAVT) provides a rigorous mathematical framework for combining these diverse attributes into a single, comparable score.
The Multi-Attribute Decision Problem
When evaluating a candidate $c$ against a job offer $o$, we assess multiple attributes:
$$A = \{a_1, a_2, ..., a_n\}$$

Where typical attributes include:
- $a_1$: Technical skills match
- $a_2$: Years of relevant experience
- $a_3$: Education level
- $a_4$: Certifications
- $a_5$: Interview performance
Each attribute $a_i$ has a raw value $x_i(c)$ for candidate $c$ that must be normalized and weighted.
The Additive Value Function
The core MAVT model computes an overall value score as a weighted sum:
$$V(c) = \sum_{i=1}^{n} w_i \cdot v_i(x_i(c))$$

Where:
- $V(c)$ is the total value score for candidate $c$
- $w_i$ is the weight assigned to attribute $i$, with $\sum w_i = 1$
- $v_i(x_i)$ is the value function that normalizes raw scores to $[0, 1]$
This additive form assumes preferential independence: the value contribution of one attribute doesn’t depend on the levels of other attributes.
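As a minimal sketch, the additive model is a weighted sum over normalized scores; the attribute names, weights, and scores below are illustrative, not from any real role profile:

```python
# Hypothetical sketch of the additive value function V(c) = sum_i w_i * v_i(x_i(c)).
# All attribute names, weights, and scores are illustrative.

def additive_value(normalized_scores: dict[str, float],
                   weights: dict[str, float]) -> float:
    """Weighted sum of normalized attribute scores, each in [0, 1]."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[a] * normalized_scores[a] for a in weights)

scores = {"skills": 0.85, "experience": 0.60, "education": 1.00}
weights = {"skills": 0.5, "experience": 0.3, "education": 0.2}
print(additive_value(scores, weights))  # 0.5*0.85 + 0.3*0.60 + 0.2*1.00 = 0.805
```

Because the weights sum to 1 and each $v_i$ lies in $[0, 1]$, the total also lies in $[0, 1]$, which keeps candidate scores directly comparable.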
Skill Matching with String Similarity
For skill-based matching, we compare required skills $S_o = \{s_1, s_2, ..., s_m\}$ from the job offer against candidate skills $S_c$. Using the Damerau-Levenshtein distance $d_{DL}$, we compute similarity:
$$sim(s_o, s_c) = 1 - \frac{d_{DL}(s_o, s_c)}{\max(|s_o|, |s_c|)}$$

For each required skill, we find the best match:
$$match(s_o) = \max_{s_c \in S_c} sim(s_o, s_c)$$

The overall skill matching score becomes:
$$v_{skills}(c) = \frac{1}{|S_o|} \sum_{s \in S_o} match(s)$$

This captures fuzzy matching—a candidate listing “ReactJS” still scores well against a requirement for “React.js”.
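The matching pipeline can be sketched end to end. The edit-distance code below is the restricted Damerau-Levenshtein variant (optimal string alignment), and the lowercasing step and helper names are assumptions, not from the original:

```python
# Sketch of fuzzy skill matching with restricted Damerau-Levenshtein distance.

def damerau_levenshtein(a: str, b: str) -> int:
    """Edit distance allowing insertions, deletions, substitutions, and
    transpositions of adjacent characters (OSA variant)."""
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[len(a)][len(b)]

def similarity(s_o: str, s_c: str) -> float:
    """sim = 1 - d_DL / max(|s_o|, |s_c|), compared case-insensitively."""
    return 1 - damerau_levenshtein(s_o.lower(), s_c.lower()) / max(len(s_o), len(s_c))

def skill_score(required: list[str], offered: list[str]) -> float:
    """Mean best-match similarity over the required skills."""
    return sum(max(similarity(r, c) for c in offered) for r in required) / len(required)

# "ReactJS" vs "React.js" differ by one deletion: sim = 1 - 1/8 = 0.875
print(skill_score(["React.js", "Python"], ["ReactJS", "python", "SQL"]))  # 0.9375
```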
Experience Scoring with Duration Weighting
Experience quality depends on multiple factors. For a skill $s$ with required experience $r_s$ years, we evaluate the candidate’s relevant experience:
$$v_{exp}(s, c) = \min\left(1, \frac{\iota \cdot t}{r_s}\right)$$

Where:
- $t$ = duration in years at positions using skill $s$
- $\iota$ = experience type factor:
  - $\iota = 1.0$ for full-time positions
  - $\iota = 0.5$ for internships
- $r_s$ = required years for skill $s$

This formula naturally handles:
- Partial experience: 2 years against a 5-year requirement yields $0.4$
- Exceeding requirements: the $\min$ cap holds the score at $1.0$ so surplus years don’t crowd out other attributes (drop the cap to let exceptional candidates exceed it)
- Internship discounting: internship years contribute half the weight of full-time years
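A minimal sketch of this duration-weighted score, summing over a candidate's positions; the `IOTA` table mirrors the experience-type factors above, while the function and key names are illustrative:

```python
# Sketch of the duration-weighted, capped experience score.
# Experience-type factors follow the article; names are illustrative.

IOTA = {"full_time": 1.0, "internship": 0.5}

def experience_score(positions: list[tuple[str, float]], required_years: float) -> float:
    """positions: (experience type, years using the skill) pairs."""
    weighted = sum(IOTA[kind] * years for kind, years in positions)
    return min(1.0, weighted / required_years)

# 2 years full-time + 1 year internship against a 5-year requirement:
print(experience_score([("full_time", 2.0), ("internship", 1.0)], 5.0))  # 0.5
```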
Semantic Similarity with Embeddings
For attributes that resist simple string matching—like comparing job responsibilities against candidate experience narratives—we use semantic embeddings.
Given text representations $T_o$ (job requirements) and $T_c$ (candidate experience), we generate embedding vectors:
$$\vec{e}_o = embed(T_o), \quad \vec{e}_c = embed(T_c)$$

The semantic score is the cosine similarity of the two vectors, so that higher values indicate closer alignment:

$$v_{semantic}(c) = \frac{\vec{e}_o \cdot \vec{e}_c}{||\vec{e}_o|| \cdot ||\vec{e}_c||}$$

This captures conceptual alignment that keyword matching would miss—a candidate describing “building scalable microservices” semantically matches requirements for “distributed systems architecture”.
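The cosine computation itself is a few lines. In practice the vectors would come from a sentence-embedding model; the toy vectors here just stand in for `embed(T_o)` and `embed(T_c)`:

```python
# Minimal cosine-similarity sketch; the three-dimensional vectors are toy
# stand-ins for real sentence embeddings.
import math

def cosine_similarity(e_o: list[float], e_c: list[float]) -> float:
    """Dot product of the vectors divided by the product of their norms."""
    dot = sum(a * b for a, b in zip(e_o, e_c))
    norms = math.sqrt(sum(a * a for a in e_o)) * math.sqrt(sum(b * b for b in e_c))
    return dot / norms

print(cosine_similarity([0.2, 0.7, 0.1], [0.25, 0.6, 0.15]))
```

Identical directions score $1.0$ and orthogonal vectors score $0$, so the result drops into the $[0, 1]$ value scale directly whenever the embeddings have non-negative components.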
Combining Scores: The Final Model
The complete candidate score combines all components:
$$V(c) = w_1 \cdot v_{skills}(c) + w_2 \cdot v_{exp}(c) + w_3 \cdot v_{semantic}(c) + w_4 \cdot v_{edu}(c)$$

Weight selection reflects organizational priorities:
- Technical roles: Higher $w_1$, $w_2$
- Senior positions: Higher $w_2$ (experience weight)
- Research roles: Higher $w_4$ (education weight)
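Role-specific priorities can be captured as weight profiles over the four components. The profile values below are purely illustrative, not recommendations:

```python
# Hypothetical role-specific weight profiles for the combined model.
# Every number here is illustrative; real weights need elicitation.

WEIGHT_PROFILES = {
    "technical": {"skills": 0.40, "experience": 0.30, "semantic": 0.20, "education": 0.10},
    "senior":    {"skills": 0.25, "experience": 0.45, "semantic": 0.20, "education": 0.10},
    "research":  {"skills": 0.20, "experience": 0.20, "semantic": 0.20, "education": 0.40},
}

def total_score(components: dict[str, float], role: str) -> float:
    """V(c) under the weight profile selected for the role."""
    weights = WEIGHT_PROFILES[role]
    return sum(weights[k] * components[k] for k in weights)

components = {"skills": 0.9, "experience": 0.5, "semantic": 0.7, "education": 0.8}
for role in WEIGHT_PROFILES:
    print(role, round(total_score(components, role), 3))
```

The same candidate ranks differently under each profile, which is the point: the scoring infrastructure stays fixed while the weights encode the role.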
Practical Implementation Considerations
Threshold-Based Filtering
Before scoring, apply minimum thresholds for critical attributes:
$$eligible(c) = \begin{cases} 1 & \text{if } v_i(c) \geq \theta_i \quad \forall i \in Critical \\ 0 & \text{otherwise} \end{cases}$$

Score Calibration
Raw scores require calibration against historical hiring data. If candidates with scores above $0.7$ historically succeeded, this becomes a benchmark for recommendations.
Handling Missing Data
For missing attributes, options include:
- Assign neutral value: $v_i = 0.5$
- Redistribute weights to known attributes
- Flag for manual review
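The weight-redistribution option can be sketched as renormalizing the weights of the attributes that are actually present; the function and attribute names are illustrative:

```python
# Sketch of weight redistribution for missing attributes: the weights of
# unavailable attributes are spread proportionally over the known ones.

def redistribute_weights(weights: dict[str, float],
                         available: set[str]) -> dict[str, float]:
    """Renormalize so the available attributes' weights sum to 1 again."""
    kept = {a: w for a, w in weights.items() if a in available}
    total = sum(kept.values())
    return {a: w / total for a, w in kept.items()}

weights = {"skills": 0.5, "experience": 0.3, "education": 0.2}
print(redistribute_weights(weights, {"skills", "experience"}))
```

Proportional renormalization preserves the *relative* priorities among known attributes, whereas the neutral-value option ($v_i = 0.5$) implicitly assumes the missing attribute is average.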
Advantages of the MAVT Approach
- Transparency: Every score component is explainable
- Consistency: Same criteria applied to all candidates
- Auditability: Weights and formulas can be reviewed for bias
- Flexibility: Weights adjust per role without changing infrastructure
Limitations and Mitigations
Preferential independence assumption may not hold—10 years of experience might matter more for senior roles than entry-level. This can be addressed with interaction terms:
$$V(c) = \sum_i w_i v_i + \sum_{i < j} w_{ij} v_i v_j$$

Weight elicitation is challenging. Techniques include:
- Pairwise comparisons (AHP method)
- Swing weighting
- Historical data regression
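The interaction-term extension above translates directly into code. All values here are illustrative, and note that with interaction terms the weights are no longer constrained to sum to 1:

```python
# Sketch of the additive model extended with pairwise interaction terms:
# V(c) = sum_i w_i v_i + sum_{i<j} w_ij v_i v_j. All weights are illustrative.
from itertools import combinations

def value_with_interactions(v: dict[str, float],
                            w: dict[str, float],
                            w2: dict[tuple[str, str], float]) -> float:
    """Additive value plus pairwise interaction contributions."""
    main = sum(w[a] * v[a] for a in w)
    inter = sum(w2.get((a, b), 0.0) * v[a] * v[b]
                for a, b in combinations(sorted(v), 2))
    return main + inter

v = {"experience": 0.8, "seniority": 1.0, "skills": 0.9}
w = {"experience": 0.4, "seniority": 0.2, "skills": 0.4}
# Positive interaction: experience counts for more at higher seniority.
w2 = {("experience", "seniority"): 0.1}
print(value_with_interactions(v, w, w2))  # 0.88 main + 0.08 interaction = 0.96
```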
Conclusion
Additive Multi-Attribute Value Theory transforms subjective hiring into a structured, mathematical process. By decomposing candidate evaluation into weighted, normalized components—skill matching, experience scoring, semantic similarity—organizations can make more consistent, defensible hiring decisions while maintaining the flexibility to prioritize different attributes for different roles.
The key insight is that good hiring decisions are inherently multi-dimensional, and MAVT provides the theoretical foundation to handle this complexity rigorously.
Applying MAVT to build objective candidate scoring systems.
Achraf SOLTANI — January 21, 2025
