How AI is transforming the way we assess workforce skills
Skill-based assessment strategies are helping federal organizations attract and retain top talent in a competitive labor market. Here are three ways we're using AI to create and improve them efficiently.
To attract and retain top talent in a competitive labor market, federal organizations are adopting skill-based assessment strategies, consistent with guidance from the Office of Personnel Management. Whereas formal education, certifications, and previous job titles marked qualifications in the past, skills are the primary currency of today’s talent marketplace. This extends beyond technical skills: cognitive skills like critical thinking and non-cognitive “soft” skills such as interpersonal tact and resilience are just as important to job success, if not more so.
Skill-based approaches to talent management can improve agility, equity, innovation, and the overall talent experience, yet organizations continue to grapple with the fundamental question: How can we be confident this individual really possesses a skill?
Traditional skills assessment methods
Skills can be assessed through traditional methods like testing, interviews, simulations, direct observations, and 360-degree assessments. Each method has its pros and cons, and organizations often apply them selectively based on the situation. For example, rigorous methods like simulations have historically been reserved for certain critical occupations, such as aviation, while more accessible methods like interviews have been used across a range of roles, even where more rigor was warranted.
Automated and AI-enabled assessments
While each method has its place, there is tremendous opportunity to better leverage cognitive and non-cognitive assessments as part of a larger strategy to future-proof the workforce. Historically, developing and validating skill-based assessments required significant time and investment, which may have put them out of reach for some organizations. But AI is changing that.
Generating skill-based assessments once took hundreds of hours; now it can be done in a fraction of that time. The expertise of qualified professionals is still critical for validating skill-based assessments and ensuring they are used appropriately, but with the efficiencies AI provides, that expertise becomes accessible to more organizations, driving better talent outcomes.
Proven experience
At ICF, we’re pairing the capabilities of generative AI (Gen AI) with the deep expertise of our industrial-organizational and cognitive psychologists to measure various cognitive skills. To support the U.S. Army, for example, we have applied automation and Gen AI in three different contexts:
Non-verbal, puzzle-like items
Our team developed an automatic test item generator to rapidly produce measures of deductive reasoning, spatial visualization, and other similar abilities. Previously, item writers spent upwards of two months and over 300 hours to develop roughly 200 valid items. With this new automated tool, puzzles can be generated in seconds, with rule-based checks built in to ensure each puzzle has exactly one correct answer. Our experts now spend closer to 30 hours reviewing and validating outputs for accuracy, a time reduction of roughly 90%.
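To make the idea concrete, here is a minimal sketch of what rule-based item generation with a built-in uniqueness check can look like. The number-series format, the two-rule pool, and the distractor logic are illustrative assumptions for this post, not the actual Army tool:

```python
import random

def arithmetic_rule(seq):
    """If seq has a constant difference, return the next term; else None."""
    diffs = {b - a for a, b in zip(seq, seq[1:])}
    return seq[-1] + diffs.pop() if len(diffs) == 1 else None

def geometric_rule(seq):
    """If seq has a constant integer ratio, return the next term; else None."""
    if any(a == 0 or b % a for a, b in zip(seq, seq[1:])):
        return None
    ratios = {b // a for a, b in zip(seq, seq[1:])}
    return seq[-1] * ratios.pop() if len(ratios) == 1 else None

RULES = [arithmetic_rule, geometric_rule]

def generate_item(rng):
    """Build one number-series item; return None if it fails validation."""
    start, step = rng.randint(2, 9), rng.randint(2, 5)
    seq = [start + step * i for i in range(4)]  # arithmetic stem
    answer = seq[-1] + step

    # Every continuation that *any* rule in the pool would accept.
    valid = {v for r in RULES if (v := r(seq)) is not None}

    # Distractors are plausible near-misses, but none may itself be a
    # valid continuation; otherwise the item is ambiguous and discarded.
    distractors = [answer + step, answer - 1, answer + 1]
    if len(valid) != 1 or set(distractors) & valid:
        return None

    options = distractors + [answer]
    rng.shuffle(options)
    return {"stem": seq, "options": options, "answer": answer}

rng = random.Random(7)
items = [it for it in (generate_item(rng) for _ in range(10)) if it]
print(items[0])
```

The design point is that validation is mechanical: an item is discarded automatically if any distractor would also be a defensible continuation under some rule in the pool, so human reviewers see only unambiguous candidates.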
Analogies
Analogies challenge respondents to infer the relationship between two words and apply the same relationship to another pair of words. For instance, the pairings of a) chef and kitchen and b) executive and office both reflect the relationship between a type of worker and their workplace.
It can be labor-intensive to create unique analogies at varying levels of difficulty, each with multiple response options and “distractor” items that seem correct at first glance but are not. However, our team has engineered Gen AI prompts to produce dozens of analogy test items almost instantly. Roughly 60-80% of the outputs for the Army were deemed usable, giving our experts a significant head start: they could refocus their energies on hand-picking the highest-quality items and improving distractor responses so the final product met our high standards.
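As an illustration of what this kind of prompt engineering can look like, here is a small sketch using the OpenAI Python SDK. The prompt wording, the model choice, and the JSON output contract are assumptions made for this example, not ICF’s actual prompts:

```python
from openai import OpenAI  # assumes the OpenAI Python SDK (openai>=1.0)

SYSTEM = (
    "You write verbal-analogy test items. Each item presents a word pair "
    "with a specific relationship (e.g., worker : workplace) and asks the "
    "respondent to pick a second pair with the same relationship."
)

USER_TEMPLATE = """\
Generate {n} analogy items at {difficulty} difficulty.
Return a JSON array; each element has keys:
  "stem": the given pair, e.g. "chef : kitchen"
  "answer": a pair with the identical relationship
  "distractors": 3 pairs that look plausible at first glance but use a
                 different relationship (e.g., synonym or part-whole)
Return the JSON array only, with no extra text.
"""

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": USER_TEMPLATE.format(n=12, difficulty="moderate")},
    ],
)
drafts = response.choices[0].message.content
print(drafts)  # drafts go to psychologists for review, never straight into a test
```

Asking the model for structured output, and for distractors that deliberately use a different relationship than the stem, is what gives reviewers a usable head start; drafts still go through expert review before any item reaches a test form.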
Critical thinking scenarios
This method invites respondents to evaluate a given scenario. For instance, they might identify underlying assumptions or weigh alternate courses of action given a set of conditions. Developing these scenarios requires creativity and a unique blend of expertise: industrial-organizational psychologists who know how to construct the items, and industry experts who ensure the scenarios are realistic. This combination rarely resides in a single person, so ensuring the quality of each item requires teamwork and/or significant research.
To streamline this process, we’re leveraging Gen AI and prompt engineering to produce scenarios relevant to the job and context in which Army officers work. Experts then refine the resulting outputs, tweaking the style, vernacular, and other details as needed. By enlisting the support of AI as a team member, our team has reduced the average number of review cycles for these scenarios from two or three to just one, resulting in considerable time savings.
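Here is a sketch of how job context can be injected into such a prompt. The role description, the prompt text, and the model are hypothetical placeholders, not the prompts we use with the Army:

```python
from openai import OpenAI  # same SDK assumption as the analogy example

client = OpenAI()

# Hypothetical role context; in practice this would come from job analysis.
ROLE_CONTEXT = (
    "Army logistics officer coordinating a resupply convoy with limited "
    "fuel, a degraded primary route, and a hard delivery deadline."
)

prompt = f"""\
Write a short critical-thinking scenario for the following role:
{ROLE_CONTEXT}

Then list:
1. Two unstated assumptions a respondent could be asked to identify.
2. Three alternate courses of action, each with one key trade-off.
Use plain, realistic vernacular for this role.
"""

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)  # draft handed to experts to refine
```

Grounding the prompt in a concrete role and set of conditions is what keeps the generated scenarios realistic enough that experts can refine them in a single review cycle rather than rebuilding them from scratch.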
Reconsidering skills assessment for the future
These are just a few examples of how we’re leveraging automation and AI to efficiently create and improve skill-based workforce assessments that can support hiring, internal mobility, promotion, and/or learning and development, provided the appropriate experts are engaged to ensure reliability and validity.
If your organization has considered assessments like these in the past but concluded they weren’t feasible, these innovative technologies may change that. For more information about how to improve talent outcomes, please explore our additional resources.