Is generative artificial intelligence (AI) going to replace engineers? Not likely. However, engineers proficient in using generative AI could very well replace those who aren’t. The critical question, then, is this: what sets engineers apart in an AI-powered world?

Our research shows that the answer is computational thinking, and Codility is proud to offer more than 600 tasks to evaluate this skill within your workforce and talent pool.

Generative AI is ushering in the next era of democratized programming. No longer will the ability to write highly effective and efficient code be a distinguishing factor among top-performing software engineers. Instead, a broader set of uniquely human engineering skills is becoming an even more essential asset for engineering teams. This new era is a paradigm shift in how humans interact with computers and requires a reevaluation of the way engineers work and how leaders assess and develop their engineering talent. 

To address this, Codility’s Industrial/Organizational psychologists researched the skills an engineer needs to effectively harness the power of generative AI. Based on our research, one skill stood out as particularly essential – computational thinking.

What is Computational Thinking?

Computational thinking (CT) is a problem-formulating and problem-solving approach that leverages strategies commonly associated with computer science and programming [1]. It’s a way of thinking that enables individuals to break down complex problems into smaller, more manageable components and then express solutions in a way that a computer, or even another human, can understand and execute.

The concept initially surfaced in the education literature in 2006 as a skill students need to navigate an increasingly digitalized world. Many now regard CT as fundamental as reading, writing, and arithmetic. Driven by the imperative to stay competitive in the new digital-first global economy, countries across the globe have invested in programs to develop and assess CT in K-12 (kindergarten through 12th grade) institutions, and, more recently, post-secondary educational institutions have begun incorporating methods to develop CT in their students as well.

CT’s relevance certainly isn’t limited to academic settings. Accordingly, our research focused on extending the concept of CT to professional roles in tech. In doing so, we identified six core dimensions of CT, described below, each illustrated with an example from the software engineering domain.

  • Problem Decomposition: Breaking down complex problems/tasks into subtasks that can be solved via computation
    • Example: In software development, breaking down the process of building a complex application into manageable phases, such as requirement gathering, design, implementation, testing, and deployment.
  • Data Manipulation: Selecting, inspecting, cleaning, transforming, and preparing information and data sources
    • Example: In natural language processing, tokenizing, stemming, or lemmatizing text to prepare it for analysis or machine learning models.
  • Abstraction: Representing information and data through abstractions (e.g., models and simulations) to predict outcomes or determine actions for a problem/task
    • Example: In object-oriented programming, engineers create classes to abstract real-world entities or concepts into code. For instance, in a game development project, a “Player” class could abstract the characteristics and behaviors of in-game characters.
  • Solution Automation: Identifying and automating programmatic solutions (e.g., algorithms)
    • Example: When arranging data in a specific order, implementing the bubble sort or insertion sort algorithm, which the computer can then execute on large datasets.
  • Solution Evaluation: Systematically testing solutions to determine their optimality
    • Example: In unit testing, creating test cases to validate the correctness of individual functions or modules within a software application, ensuring they produce the expected outputs.
  • Iterative Improvement: Identifying, prioritizing, and implementing solution improvements
    • Example: In code refactoring, periodically reviewing and improving the structure and design of existing code to enhance readability, maintainability, and efficiency without changing its external behavior.
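To make a few of these dimensions concrete, here is a minimal Python sketch (hypothetical names, not drawn from Codility’s task library) that touches three of them: abstraction via a “Player” class, solution automation via insertion sort, and solution evaluation via simple assertion-based tests.

```python
class Player:
    """Abstraction: models an in-game character's state and behavior
    rather than scattering raw name/score variables through the code."""

    def __init__(self, name, score=0):
        self.name = name
        self.score = score

    def add_points(self, points):
        self.score += points


def insertion_sort(values):
    """Solution Automation: sorts a sequence by inserting each element
    into its correct position within the already-sorted prefix."""
    result = list(values)  # copy so the caller's list is untouched
    for i in range(1, len(result)):
        key = result[i]
        j = i - 1
        while j >= 0 and result[j] > key:
            result[j + 1] = result[j]
            j -= 1
        result[j + 1] = key
    return result


# Solution Evaluation: test cases that validate expected outputs.
player = Player("Ada")
player.add_points(10)
assert player.score == 10

assert insertion_sort([3, 1, 2]) == [1, 2, 3]
assert insertion_sort([]) == []
```

The other dimensions show up around code like this rather than inside it: problem decomposition determines that sorting and scoring are separate concerns, and iterative improvement might later replace insertion sort with a faster algorithm without changing the tests.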

“Thinking like a computer scientist means more than being able to program a computer … it requires the ability to abstract and thus to think at multiple levels of abstraction.” – Jeannette Wing, 2010

Successfully applying these six CT dimensions helps bridge the gap between human engineers’ creative and innovative problem-solving skills and AI systems’ powerful automation capabilities. It involves structuring problems in a way that a computer can solve—i.e., breaking down the problems into smaller, manageable steps, identifying patterns, devising and automating processes to find solutions, and evaluating and iterating to improve those solutions. This ‘scaffolding’ enables engineers to (a) iteratively steer generative AI toward producing effective outputs and (b) troubleshoot when encountering unforeseen scenarios.

Why is CT So Important?

At Codility, we believe CT is the bedrock of effective problem-solving and innovation in technical roles. Since Ada Lovelace wrote the first computer program in the 1840s, computer programming has undergone a profound transformation, moving from lower-level languages to higher-level, more abstracted languages. Over time, programming has become far more accessible and more applicable across diverse domains – in other words, it has become democratized.

As the latest stage in this evolution, generative AI and Large Language Models (LLMs) are now reshaping how engineers engage with machines. The traditional need to convert human language into computational commands is diminishing. Instead, the focus has shifted towards understanding the core issue that software must address and structuring the solution appropriately—essentially, CT. Engineers and computer scientists have transitioned from being interpreters between humans and machines to innovators, problem solvers, and designers of intricate systems. Embracing this shift by actively cultivating these capabilities will yield more efficient and creative solutions, which, in turn, translates into competitive advantage.

Notably, under H.R. 6395, the United States Congress recently mandated that the U.S. Department of Defense begin assessing CT in individuals seeking to enlist in the various branches of the U.S. Armed Forces. This legislation marks a landmark in the evolution of CT as a core professional skill, and it is just one of many examples of the growing investment in CT as countries, and now companies, compete to develop world-class technical talent and stay ahead in the new digital economy. Cultivating software engineers’ CT skills has become a strategic imperative, as these individuals are often relied upon to develop creative solutions to organizations’ most complex, pressing problems. To do this, we must first understand how to assess CT skills effectively.

Check out our new white paper, “Harnessing Generative AI for Software Engineering.”

How Can We Assess CT Skills?

Industrial/Organizational (I/O) psychologists specialize in work-related skill assessment. As such, Codility’s I/O psychologists worked with job experts to explore how we might assess CT skills among tech professionals. Applying our expertise in psychological measurement (i.e., psychometrics), we supplemented the experts’ knowledge and experience to ensure our assessment strategy would resonate with engineers while also having a firm grounding in science and best practices for promoting assessment validity and fairness. 

Before engaging the job experts, I/O psychologists extensively reviewed the peer-reviewed literature on CT. During this review, we focused on answering questions such as:

  • How has CT been defined in the literature? How widely do CT definitions vary?
  • How much overlap is there with other skills – is CT truly unique?
  • How many sub-dimensions are there in CT?
  • How are CT skills developed? How do these strategies converge based on the audience, and how are they distinct (e.g., K-12, higher education, working professionals)?
  • How have CT skills been assessed, and how effective have these assessments been?
  • What types of tasks or activities require the application of CT, generally and in the context of software engineering?
  • What does it look like when someone is skilled in CT versus unskilled in CT? How does this manifest in their task performance, generally and in the context of software engineering?

We identified the six key dimensions of CT listed above (Problem Decomposition, Data Manipulation, Abstraction, Solution Automation, Solution Evaluation, and Iterative Improvement) and crafted definitions for each based on themes that were (a) most common across sources and (b) most relevant for professional roles in tech. We also described the specific mental processes that researchers have associated with each dimension, as well as examples of activities in software engineering that likely require these processes.

Next, we presented the dimensions, definitions, and contextualized examples to a panel of senior engineers. We asked the panel to review and provide feedback on the dimensions’ practical relevance across various engineering roles and job families. This review resulted in minor revisions to the dimension naming and definitions.

Codility’s I/O psychologists then facilitated workshops with our team of veteran assessment engineers to discuss and evaluate what types of tasks (i.e., problem-solving exercises) would tap into CT skills. Together, the team has nearly 35 years of experience designing and building coding and problem-solving tasks, which means they also possess a deep, first-hand understanding of the mental processing required to solve them. As a group, we iteratively reviewed a sample of several dozen tasks, discussed the cognitive processes needed to solve them, and agreed on criteria for designating a task as measuring CT.

Based on these criteria, we now proudly offer over 600 tasks assessing engineers’ CT skills (found using the skills filter in Codility’s task library). 

Our ability to lead the market is a result of having highly experienced assessment engineers and a commitment to continually improving and innovating in the evaluation of engineers’ skills. As an added bonus, our assessment engineers now possess an intricate knowledge of the types of problems that require the application of CT skills, and we will be ramping up content production accordingly. Offering our customers this clarity on the specific skill(s) our tasks are assessing is critical for maximizing assessment relevance, accuracy, fairness, and utility. In turn, this enables Codility to continue delivering the industry’s most trustworthy signal.


As generative AI becomes increasingly integrated into the world of software engineering, the importance of CT as a crucial engineering skill will increase in parallel. There is an opportunity for engineering leaders to get ahead of this trend by identifying job candidates and employees skilled in CT, and we are pleased to offer the tools to do this. Codility exists to unlock engineering potential, and now more than ever, CT is a crucial skill for empowering AI-assisted and future-ready engineering teams. 

Book a demo with us to check out Codility’s Computational Thinking tasks as well as our first-of-its-kind assessment of AI-assisted engineering skills. Current customers can contact their Codility representative to learn more about these new assessment offerings today.

Taylor Sullivan, Ph.D. is the Senior Director of Product Insights and Principal I/O Psychologist at Codility. Her expertise spans a variety of areas including talent assessment and selection, learning and development, leadership, and credentialing and licensure.

Connect on LinkedIn