Codility Screen: Standardized Technical Signal at Scale

AI can write code now.
Hire the engineers who can build.

Codility Screen delivers standardized technical signal early in the funnel, backed by assessment science that holds up under scrutiny

  • Real engineering environments: 1,200+ work simulations across 80+ languages and frameworks in a full VS Code workspace
  • Configurable AI posture: enable, restrict, or monitor AI use role by role, with every interaction logged and reviewable
  • Defensible outcomes: every task validated by occupational psychologists, with adverse impact monitoring at every cut score
[Screenshot: Codility Screen VS Code environment with task description, file explorer, code editor, and terminal output]
  • 1,200+ work simulations
  • 1.5M+ candidate sessions surveyed
  • 882 G2 reviews, 4.6/5.0

Trusted by engineering-first teams worldwide

SpaceX
GitHub
Samsung
BMW
Zalando
Cockroach Labs
Grab

Why early technical signal is harder than ever

AI-generated submissions blur the signal

Candidates can produce working code without understanding it. Traditional screening cannot distinguish engineers who build from engineers who prompt. You need a way to verify what someone actually knows.

High-volume pipelines need structured evaluation

Ad hoc processes break at scale. When every interviewer runs their own format, signal quality varies across the team and every hire carries different evidence behind it.

Hiring decisions face increasing scrutiny

From the EU AI Act to adverse impact audits, regulators and candidates are asking harder questions about how hiring decisions are made. An undocumented process is a liability.

Generic tools miss what matters

Toy problems and algorithm puzzles measure recall, not engineering ability. They do not predict how someone debugs production code, reviews AI output, or works across a real codebase.


One workflow: configure, assess, decide

Screen combines real engineering environments, layered integrity signals, and documented methodology in a single assessment workflow

1. Configure assessments that match real work

Build assessments in a full VS Code environment with terminal access, packages, and multi-file projects. Choose from 1,200+ work simulations mapped to the Codility Engineering Skills Model across 80+ languages and frameworks.

Set AI posture role by role: enabled, restricted, or monitored. Every candidate gets the same environment and the same rules.
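To make the idea concrete, a role-level AI posture could be modeled like this. This is a purely illustrative sketch: the posture names come from the page, but the schema, field names, and `RolePolicy` type are assumptions, not Codility's actual configuration format.

```python
# Hypothetical sketch of role-by-role AI posture, NOT Codility's actual schema.
from dataclasses import dataclass

# The three postures named in the text: enabled, restricted, monitored.
AI_POSTURES = {"enabled", "restricted", "monitored"}

@dataclass(frozen=True)
class RolePolicy:
    """One policy per role; every candidate for the role gets the same rules."""
    role: str
    ai_posture: str          # one of AI_POSTURES
    log_interactions: bool   # the text says every AI interaction is logged

    def __post_init__(self) -> None:
        # Reject unknown postures so a policy is always one of the three.
        if self.ai_posture not in AI_POSTURES:
            raise ValueError(f"unknown AI posture: {self.ai_posture!r}")

backend = RolePolicy(role="Backend Engineer", ai_posture="monitored",
                     log_interactions=True)
```

The point of the sketch is the invariant: posture is set once per role, so evaluation conditions are identical for every candidate assessed against that role.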

Candidates work the way your engineers work

[Screenshot: AI posture configuration with AI assistance toggle, follow-up questions, time limits, and weighted scoring options]
2. Capture rich, reviewable evidence

Layered integrity signals track how the work was done: behavioral monitoring, similarity checks, paste detection, and AI follow-up questions that verify whether a candidate understands the code they wrote.

Risk scoring aggregates signals into a clear indicator. Automated code quality analysis measures maintainability, complexity, and structure beyond correctness alone.
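One simple way such an aggregation could work is a weighted mean of per-signal risk values. This is an illustrative sketch only: the signal names mirror the text, but the weights and the function itself are assumptions, not Codility's actual scoring model.

```python
# Illustrative weighted aggregation of integrity signals into one risk
# indicator. Weights and formula are assumptions, not Codility's model.

def risk_score(signals: dict[str, float], weights: dict[str, float]) -> float:
    """Combine per-signal risk values in [0, 1] into a single [0, 1] score."""
    total_weight = sum(weights.values())
    weighted = sum(signals.get(name, 0.0) * w for name, w in weights.items())
    return weighted / total_weight

# Hypothetical weights over the four signal families named in the text.
weights = {"behavioral": 0.3, "similarity": 0.3, "paste": 0.2, "ai_followup": 0.2}
signals = {"behavioral": 0.1, "similarity": 0.8, "paste": 0.5, "ai_followup": 0.0}
score = risk_score(signals, weights)  # one number a reviewer can triage on
```

Whatever the real model looks like, the design goal is the same: collapse several noisy signals into a single reviewable indicator, while keeping the underlying evidence available for drill-down.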

Signal on engineering ability, not just output

[Screenshot: Assessment integrity panel with identity verification, device integrity, behavioral signals, plagiarism detection, and similarity check]
3. Decide with a defensible record

Structured scoring backed by documented assessment methodology. Every task reviewed by occupational psychologists. Adverse impact monitored at every cut score against EEOC guidelines.

The cApStAn linguistic audit confirmed that 65% of tasks are written at or below CEFR level B1, keeping assessments fair for non-native English speakers worldwide.

Decisions that hold up when questioned

[Screenshot: Screen report with task summary, individual scores, total score, and tabs for review, timeline, integrity, and AI assistant chat]

What other tools miss

Typical technical screening tools

  • Large question libraries with no published methodology behind them
  • Black-box plagiarism flags or basic proctoring with no reviewable evidence
  • AI features available but no policy-level control or audit trail
  • No adverse impact data, no fairness documentation, no compliance posture
  • Simplified editors that test recall, not real engineering work

Codility Screen

  • Every task designed by occupational psychologists, backed by a 76-page Technical Manual
  • Layered integrity signals with drill-down evidence: behavioral, similarity, paste, AI follow-up
  • AI posture configured role by role, with full interaction log and reviewable activity
  • Adverse impact monitored at every cut score. cApStAn linguistic audit. EU AI Act ready.
  • Full VS Code with terminal, packages, and multi-file projects that mirror real engineering work

Screen is where it starts

Codility extends the same validated methodology from screening through interviews and into your existing workforce

Interview

Structured technical interviews in a shared VS Code environment with sidecar services, whiteboard, and full transcript. Replace ad hoc technical interviews with a repeatable, evidence-backed process.

Skills Intelligence

Map and verify technical capability across the engineering org using the same validated methodology. Staff projects on proven skills, target development where gaps actually exist, and report AI readiness with evidence.