
Scalable Identification and Prioritization of Requisition-Specific Personal Competencies Using Large Language Models

Wanxin Li, Denver McNeney, Nivedita Prabhu, Charlene Zhang, Renee Barr, Matthew Kitching, Khanh Dao Duc, Anthony S. Boyce

Abstract

AI-powered recruitment tools are increasingly adopted in personnel selection, yet they struggle to capture the requisition (req)-specific personal competencies (PCs) that distinguish successful candidates beyond job categories. We propose a large language model (LLM)-based approach to identify and prioritize req-specific PCs from reqs. Our approach integrates dynamic few-shot prompting, reflection-based self-improvement, similarity-based filtering, and multi-stage validation. Applied to a dataset of Program Manager reqs, our approach correctly identifies the highest-priority req-specific PCs with an average accuracy of 0.76, approaching human expert inter-rater reliability, and maintains a low out-of-scope rate of 0.07.
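The dynamic few-shot prompting component mentioned above can be illustrated with a minimal sketch: retrieve the most similar already-labeled requisition from an example library and prepend it to the prompt. The Jaccard word-overlap similarity and the prompt template below are illustrative stand-ins, not the retrieval or templating actually used in this work.

```python
# Hypothetical sketch of dynamic few-shot prompting: select the most
# similar solved requisition from an example library and include it
# as the in-context example. The similarity measure and template are
# assumptions for illustration only.
def jaccard(a: str, b: str) -> float:
    """Word-overlap similarity between two requisition texts."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def build_prompt(req: str, library: list[tuple[str, str]]) -> str:
    """library holds (example requisition, expert-labeled PCs) pairs."""
    ex_req, ex_pcs = max(library, key=lambda e: jaccard(req, e[0]))
    return (
        "Identify and prioritize req-specific personal competencies.\n"
        f"Example requisition:\n{ex_req}\n"
        f"Example PCs:\n{ex_pcs}\n"
        f"Target requisition:\n{req}\nPCs:"
    )

library = [
    ("program manager for consumer products", "Vendor Management"),
    ("software engineer backend services", "API Design"),
]
prompt = build_prompt("senior program manager consumer facing products", library)
# The Program Manager example is selected as the in-context example.
```

The retrieved example changes with every target requisition, which is what makes the few-shot prompt "dynamic" rather than fixed.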

Paper Structure

This paper contains 33 sections, 2 figures, and 10 tables.

Figures (2)

  • Figure 1: (A) An example output from our approach. In this example, our approach outputs two PCs for PMT-1: a "Domain/Team-Specific" PC "Consumer-Facing Technical Products" with a priority rating of 9 and an "Other Functional" PC "Vendor Management" with a priority rating of 6. (B) An overview of our approach with a toy example using the primary call, evaluation, improvement, filter, and validation components. PC justification is omitted from the output to save space. In this example, we want to identify req-specific PCs for PM-31. In the primary call component, we prepare the primary call prompt from the req and the most similar example from the example library. In the evaluation component, an LLM evaluates the outputs from the primary call and suggests revising the PC_2 definition. In the improvement component, an LLM uses the suggestion to improve the PC_2 definition and corrects the priority for PC_1 using rules. In the filter component, we filter out PC_3 because it is too similar to "Ownership", a PC explicitly defined as excluded (out-of-scope). In the validation component, we validate each PC label against the standardized competency library and find that PC_2's label is too similar to the library PC "Program Management" but has a significantly different definition. Hence, our label refinement LLM refines PC_2 to a different label, PC_2_revised, to avoid confusion.
  • Figure 2: Workflow of the req-specific PC identifier. Distinct shapes and colors represent different components and flows: actions are depicted as blue rectangles, data storage as grey cylinders, model-generated labels and evaluation metrics as orange circles, the model itself as a red circle, and decision points as green diamonds. The pipeline executes sequentially: first the steps on the train set (purple lines), then the dev set (green lines), and finally the test set (blue lines).
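The filter component described in the Figure 1 walkthrough can be sketched as a similarity check against explicitly excluded PCs: a candidate is dropped when it sits too close to an out-of-scope competency such as "Ownership". The cosine function over toy vectors and the 0.85 threshold below are illustrative assumptions, not the embeddings or cutoff used in the paper.

```python
# Hypothetical sketch of the similarity-based filter: drop a candidate
# PC when its vector is too similar to any explicitly excluded
# (out-of-scope) PC. Vectors and threshold are illustrative only.
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def filter_pcs(candidates, excluded, threshold=0.85):
    """Keep candidate PCs whose similarity to every excluded PC
    stays below the threshold."""
    kept = []
    for label, vec in candidates:
        if max(cosine(vec, ex_vec) for _, ex_vec in excluded) < threshold:
            kept.append(label)
    return kept

# Toy vectors standing in for text embeddings of PC definitions.
candidates = [("PC_3", [0.9, 0.1]), ("PC_1", [0.1, 0.9])]
excluded = [("Ownership", [1.0, 0.0])]
print(filter_pcs(candidates, excluded))  # PC_3 is filtered as too close to "Ownership"
```

In practice the vectors would come from a text-embedding model applied to each PC's label and definition, so the same check generalizes to any list of excluded competencies.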