Enabling your organisation to thrive in a changing world. A data-driven workforce intelligence platform that measures, maps, and develops leadership agility across your entire organisation.
The world of work is undergoing its most significant structural transformation in a generation. The World Economic Forum's Future of Jobs Report 2025 quantifies the scale of disruption. The organisations that will thrive are those that can identify, develop, and deploy leaders based on agility, not just experience.
The WEF projects 170 million new roles created and 92 million displaced by 2030. But the critical issue is not volume: it is the mismatch between the skills organisations need and the skills available in the market. 59 out of every 100 workers globally will need reskilling or upskilling by 2030. The ability to identify who can learn, adapt, and lead through complexity has never been more important.
The WEF identifies the ten skills most critical for 2030. Eight of them map directly to the Agility Quotient framework. This is not a coincidence.
| # | WEF Top Skill (2025-2030) | AQ Facet / Component | Domain |
|---|---|---|---|
| 1 | Analytical Thinking | Mental Agility + Strategic Processing | Cognitive |
| 2 | Resilience, Flexibility & Agility | Resilience & Composure + Change Agility | Self + Cognitive |
| 3 | Leadership & Social Influence | Social Astuteness + People Agility | Social |
| 4 | Creative Thinking | Mental Agility + Results Agility | Cognitive |
| 5 | Motivation & Self-Awareness | Self-Awareness (direct match) | Self |
| 6 | Technological Literacy | Domain-agnostic (outside AQ scope) | N/A |
| 7 | Empathy & Active Listening | Open-Mindedness + People Agility | Social |
| 8 | Curiosity & Lifelong Learning | Intellectual Curiosity (direct match) | Self |
| 9 | Talent Management | Enabled by Workforce Intelligence Dashboard | Platform |
| 10 | Service Orientation | Results Agility + People Agility | Social + Cognitive |
"When 39% of the skills my workforce needs will change in five years, how do I know who can adapt, who can lead through it, and where my risk sits?"
The Agility Quotient was built to answer that question. Not with gut feel or unstructured interviews, but with a validated composite psychometric score that integrates cognitive capacity, behavioural agility, and derailment risk into a single workforce-wide intelligence picture.
The single biggest differentiator between growing and declining jobs is not technical skill. It is resilience, flexibility, and agility. The most in-demand human capabilities are analytical thinking, self-awareness, curiosity, social influence, and the capacity to navigate ambiguity.
Source: WEF Future of Jobs Report 2025, 1,000+ employers, 55 economies
A 9-facet agility profile across your entire workforce. A G-Factor composite that quantifies readiness. A Selection Readiness Score that predicts performance. A derailer profile that identifies risk. And a dashboard that makes it all visible at a glance.
Built on De Meuse (2022), Schmidt & Hunter (1998), Sackett et al. (2022)
One interactive view of your entire organisation's agility landscape. Filter by department, seniority, or domain. Drill into individuals. Spot patterns. Make decisions grounded in data, not instinct. This is what your CHRO sees.
This is a simulated organisation with 54 leaders across 6 departments, assessed at the Professional tier (Adapt-g + PVQ). This is the tier designed for workforce-wide diagnostics: it delivers the full 9-facet agility profile, 12 derailer scales, leadership styles, values, and the G-Factor composite at a price point that makes assessing entire leadership populations viable. For the most senior roles (C-suite, board), the Executive tier (CPP + PVQ) adds cognitive complexity mapping and thinking style analysis. Your Hydrogen consultant manages the assessment process and delivers the populated dashboard as part of the engagement. As new candidates or team members are assessed, the dashboard is updated to reflect the latest picture.
Restructuring decisions grounded in agility data. Succession planning based on G-Factor readiness, not tenure. Development investment directed at the growth edges that matter most. And when teams are re-assessed after a development cycle, the dashboard shows whether the interventions moved the needle.
The Workforce Intelligence Dashboard is delivered as a self-contained file, not hosted on an external cloud platform. It sits on your infrastructure, shared through your internal systems, and accessed only by the people you choose. No employee assessment data leaves your organisation. No third-party servers, no external logins, no data residency concerns. You own the file and you determine who within your organisation has access to it. Data governance and access management remain entirely with your HR and IT teams.
Grounded in De Meuse's (2022) Learning Agility research, the Agility Quotient measures nine facets of leadership agility across three domains. The result is a composite profile that predicts how effectively a person will learn, adapt, and lead through complexity.
Learning agility is distinct from cognitive ability (meta-analytic r = .091, De Meuse 2022). A person can be highly intelligent but struggle to apply new learning in unfamiliar situations. The AQ separates these constructs and measures them independently, giving you a much richer picture than IQ or experience alone.
A balanced composite that answers: "What is this person's overall agility potential?" Integrates cognitive capacity with all three agility domains. Used for readiness classification and development planning.
A cognitive-weighted, derailer-penalised composite that answers: "Should we hire or promote this person into THIS role?" Includes hard gates for cognitive capacity and derailer load. Produces a traffic light recommendation.
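The difference between the two composites can be sketched in code. The weights, gate thresholds, and traffic-light cut-offs below are illustrative placeholders, not the AQ's published parameters (those are documented in the technical scoring guide):

```python
def g_factor(cognitive, cognitive_domain, social_domain, self_domain):
    """Balanced composite: 'What is this person's overall agility potential?'
    Inputs on a 0-10 scale; the equal 25% weights are illustrative."""
    return round(0.25 * (cognitive + cognitive_domain + social_domain + self_domain), 2)


def selection_readiness(cognitive, agility, derailer_load,
                        min_cognitive=4.0, max_derailer_load=1.5):
    """Cognitive-weighted, derailer-penalised composite with hard gates:
    'Should we hire or promote this person into THIS role?'
    Gate thresholds, weights, and cut-offs are illustrative placeholders."""
    # Hard gates: failing either one returns a red light regardless of the score
    if cognitive < min_cognitive or derailer_load > max_derailer_load:
        return 0.0, "red"
    score = round(0.5 * cognitive + 0.5 * agility - derailer_load, 2)
    light = "green" if score >= 7.0 else "amber" if score >= 5.0 else "red"
    return score, light
```

The structural point the sketch makes is that the G-Factor is a plain weighted average, while the SRS applies hard gates first and a derailer penalty second, so a single disqualifying factor can override an otherwise strong profile.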
Eccentric, Appeasing, Suspicious, Volatile, Undisciplined, Detached, Rigid, Confrontational, Manipulative, Avoidant, Arrogant, Moody. These dark-side traits activate under pressure, reduced oversight, or increased autonomy. They directly penalise the SRS and are grouped by severity in every report.
Based on research by Lombardo & Eichinger, and De Meuse on derailment in senior leadership.
For senior leadership roles, the Cognitive Process Profile (CPP) replaces traditional IQ-style testing. It is a simulation-based assessment that tracks how a person processes unfamiliar, ambiguous information in real time. Not what they know. Not how fast they answer structured questions. How they actually think when the information is messy, incomplete, and contradictory.
Click any level to learn more. The CPP maps two critical data points per individual: where they currently operate, and the ceiling of where they could reach with development. This distinction between current and potential is what makes succession planning possible.
Every individual assessed with the CPP receives two scores: where their cognitive processing consistently operates today (Current), and the ceiling the CPP identifies as reachable with development (Potential). This is what makes the CPP uniquely valuable for succession planning and promotion decisions.
Where the person's processing consistently operates today. This is the level of complexity they can reliably handle right now, under normal conditions.
The complexity ceiling the CPP identifies as reachable. This is not a guarantee but a developmental trajectory: with the right support, exposure, and time, the person could operate here.
The minimum complexity the role demands. Set during the scoping phase. The gap (or match) between Current, Potential, and Target determines the Fit Classification.
Traditional assessments tell you what someone can do today. The CPP tells you what they could do tomorrow. A VP currently operating at Tactical Strategy (Level 3) with potential at Parallel Processing (Level 4) is a fundamentally different succession prospect than one who is at Level 3 with a ceiling at Level 3. Both perform equally well today. Only one can grow into the C-suite. Without this distinction, you are making promotion decisions blind.
The relationship between Current, Potential, and Target produces one of five Fit Classifications.
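The mapping from the three levels to a classification can be sketched as simple comparison logic. The five labels below are hypothetical placeholders (the AQ's own classification names appear in the interactive view), with SST levels running 1-5:

```python
def fit_classification(current, potential, target):
    """Classify a person against a role's complexity target (SST levels 1-5).
    Labels are hypothetical placeholders, not the AQ's published taxonomy."""
    if current >= target:
        return "Ready now"              # already operating at or above the role demand
    if potential > target:
        return "High-growth prospect"   # ceiling exceeds the role demand
    if potential == target:
        return "Developable fit"        # can reach the demand with development
    if potential == target - 1:
        return "Stretch risk"           # ceiling sits just below the demand
    return "Misfit"                     # demand exceeds the reachable ceiling
```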
Composite of six processing competencies: Integration, Complexity, Logical Reasoning, Verbal Conceptualisation, Judgement, and Quick Insight Learning. These are the "red scores" in CPP terminology: higher is always better. Each competency is mapped against the SST levels, showing which stratum the person's processing reaches on each dimension.
The SPP is then adjusted by thinking style pattern rules that detect whether a person's dominant styles facilitate or anchor their strategic processing. Three or more strategic facilitators in the top five styles earn a bonus. Over-reliance on operational anchors triggers a penalty. This is the most consequential cognitive metric in the Executive tier assessment.
Metacognition is the ability to think about your own thinking. To notice when your judgement is being compromised. To know when to slow down and when to trust your instincts. To catch yourself falling into a pattern that worked before but doesn't fit the current situation. In an era where AI handles more of the analytical heavy lifting, this self-regulatory capacity is what separates leaders who can navigate genuine ambiguity from those who simply process information faster.
The MQ composite captures this through four indicators: Judgement quality (35%), Pace Control (25%), inverse Quick Closure (20%), and Quick Insight speed (20%). A leader with high SPP but low MQ has the processing power but lacks the self-regulation to deploy it wisely. They will make fast, confident decisions that are sometimes brilliantly right and sometimes catastrophically wrong. A leader with moderate SPP but high MQ will be more consistent, more self-correcting, and more reliable under sustained complexity.
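A minimal sketch of the MQ composite using the stated weights, assuming all four indicators are normalised to a 0-10 scale; the `10 - quick_closure` inversion is an assumption about how "inverse Quick Closure" is operationalised:

```python
def metacognitive_quotient(judgement, pace_control, quick_closure, quick_insight):
    """MQ composite per the stated weights: Judgement 35%, Pace Control 25%,
    inverse Quick Closure 20%, Quick Insight 20%. Inputs assumed on a 0-10
    scale; the (10 - quick_closure) inversion is an assumption."""
    return round(0.35 * judgement + 0.25 * pace_control
                 + 0.20 * (10 - quick_closure) + 0.20 * quick_insight, 2)
```

Note that a strong Quick Closure tendency lowers MQ: jumping to conclusions is the opposite of self-regulated thinking, which is why the indicator enters the composite inverted.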
As AI takes over more analytical and processing tasks, the premium shifts to the human capability that AI fundamentally cannot replicate: the ability to monitor, regulate, and redirect your own cognitive processes in real time.
McKinsey's Superagency research (2025) identifies this as the critical differentiator between leaders who achieve "AI superagency" and those who don't. The leaders who thrive alongside AI are not the fastest processors. They are the best self-regulators: they know what to delegate to the machine, what to keep human, and when their own judgement needs checking. That is metacognition. And the CPP is the only assessment that measures it directly.
The CPP ranks 14 thinking styles by dominance, revealing whether a person naturally gravitates toward strategic facilitation or operational anchoring under cognitive load. The pattern of styles in the top five positions determines whether SPP receives a bonus or a penalty, and flags specific risks like the "Deadly Three" (Analytical + Reflective + Logical dominating) which indicates a gather-analyse-check cycle that can inhibit strategic action.
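The pattern rules can be sketched as set operations over the top-five styles. The "Deadly Three" names come from the text above; the facilitator and anchor style names and the bonus/penalty magnitudes are hypothetical placeholders:

```python
# Style names below (other than the Deadly Three) are hypothetical placeholders
STRATEGIC_FACILITATORS = {"Holistic", "Integrative", "Intuitive", "Explorative"}
OPERATIONAL_ANCHORS = {"Structured", "Memory-based", "Reactive"}
DEADLY_THREE = {"Analytical", "Reflective", "Logical"}

def spp_adjustment(top_five_styles):
    """Return (SPP adjustment, risk flags) from the top-five thinking styles.
    Bonus/penalty magnitudes and trigger counts are illustrative."""
    top = set(top_five_styles)
    adjustment, flags = 0.0, []
    if len(top & STRATEGIC_FACILITATORS) >= 3:
        adjustment += 0.5                      # strategic facilitation bonus
        flags.append("strategic_facilitation")
    if len(top & OPERATIONAL_ANCHORS) >= 2:
        adjustment -= 0.5                      # over-reliance on operational anchors
        flags.append("operational_anchoring")
    if DEADLY_THREE <= top:
        flags.append("deadly_three")           # gather-analyse-check cycle risk
    return adjustment, flags
```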
Every organisation's challenge is different. These are the modular assessment instruments that underpin the AQ framework. Think of them as building blocks: you select the combination that answers your specific question about your specific people.
A simulation-based assessment that tracks how a person processes unfamiliar, ambiguous information in real time. Measures cognitive complexity level, 14 thinking styles, processing competencies, and metacognitive quality. Unlike traditional cognitive tests, it measures the process, not just the outcome.
A validated adaptive cognitive assessment measuring verbal reasoning, numerical reasoning, and abstract reasoning. Produces a composite GMA stanine score. The strongest single predictor of job performance across all complexity levels (Schmidt & Hunter, 1998; Sackett et al., 2022).
A comprehensive personality and values assessment that produces the nine AQ facet scores, 12 derailer scales, leadership and subordinate styles, team roles, influencing approaches, culture fit, Grit, and Emotional Intelligence scores. The behavioural backbone of the AQ framework.
A broad personality assessment based on the well-established 15-factor model. Provides the personality data that maps to the AQ nine-facet framework at the Essential tier. Designed for volume assessment where the full PVQ depth is not required.
CPP + PVQ
17-page Strategic Agility Diagnostic with blended CPP+PVQ scoring, SST complexity mapping, 14 thinking styles, and strategic processing power.
Adapt-g + PVQ
14-page report with GMA normalisation, pure PVQ facet scoring, radar chart, derailer profile, and development coaching guide.
Adapt-g + 15FQ+
Streamlined report using broad personality scales mapped to the 9-facet framework. Designed for volume hiring and early career talent.
Three report types serve three different audiences. The Development report is for the organisation. The Selection report is for the hiring decision. The Candidate report is for the individual. Below are actual pages from the reports, showing the design quality and depth of insight your people and HR team will receive.
The interactive dashboard you explored in the previous tab. Delivered as a self-contained file alongside the individual reports. Aggregates all assessment data into an organisational intelligence view with filtering, drill-down, heatmaps, derailer analysis, and People & Culture insights.
Every report follows a three-act structure: the answer first (executive brief), the evidence second (domain deep dives), and the prescription third (development plan). A board member reads page 3. A CHRO reads pages 3-14. A coach reads the full document. Each audience gets what they need without wading through what they don't.
The AQ platform is built exclusively on published, peer-reviewed research and assessment instruments with decades of validation history. Every tool we use has stood the test of time. This section is for HR teams with in-house psychologists, compliance officers, and anyone who needs to understand the science before committing. Click any section to expand.
The PVQ is the behavioural backbone of the AQ framework. It is an extended version of the 15FQ+, combining personality scales from the 15FQ+ and OPPro with values scales from the Values and Motives Inventory (VMI). This produces a single comprehensive assessment covering 35 trait and value scales, 12 derailer scales, leadership and subordinate styles, team roles, influencing approaches, culture fit, Grit, and Emotional Intelligence.
Internal consistency (Cronbach's alpha) coefficients are published in the PVQ Technical Manual across all primary scales, with separate reliability analyses by gender. The 5-point Likert response format was specifically chosen to maximise reliability and score variability. Reliability coefficients across language translations are documented in Table 20 of the Technical Manual, demonstrating scale stability across cultural and linguistic contexts.
Construct validity established through correlations with the 16PF (Form A and 16PF5) and MAPP. The PVQ's inclusion of values scales alongside personality scales is grounded in research showing that values moderate the personality-performance relationship (Kanfer, 1991; McCrae & Costa, 1996), enabling person-organisation fit assessment alongside person-job fit.
Personality assessment's predictive validity for job performance was established by Barrick & Mount's (1991) landmark meta-analysis of 117 validation studies (N = 23,994), showing Conscientiousness and Emotional Stability as consistent predictors across all job types. Subsequent meta-analyses (Anderson & Viswesvaran, 1998; Salgado, 1997; Tett et al., 1991) confirmed these findings. The PVQ's five-factor structure aligns with this evidence base while extending it through values-based moderation of the personality-performance relationship.
Gender differences documented with effect sizes (Technical Manual Tables 13-15), demonstrating minimal practical differences. Age relationship analysis across four age groups with effect size estimates (Table 17). Available in 30+ languages with published internal consistency data per translation, ensuring psychometric equivalence across linguistic groups. Normed across multiple international populations.
The Adapt-g is a Computerised Adaptive Test (CAT) measuring general mental ability through three subtests: Verbal Reasoning, Numerical Reasoning, and Abstract Reasoning. Built on Item Response Theory (3-Parameter Logistic Model), it adapts in real time to the test-taker's ability level, providing a more precise measurement in fewer items than traditional fixed-length tests. Grounded in the Cattell-Horn-Carroll (CHC) model of intelligence.
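The 3-Parameter Logistic model at the core of the Adapt-g's adaptive engine is a standard IRT formula. The sketch below shows the 3PL item response function plus a deliberately simplified item-selection heuristic (production CATs select on maximum Fisher information rather than nearest difficulty):

```python
import math

def p_correct_3pl(theta, a, b, c):
    """3PL item response function: probability that a test-taker of ability
    theta answers correctly. a = discrimination, b = difficulty, c = guessing."""
    return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

def pick_next_item(theta_estimate, item_bank):
    """Simplified adaptive step: administer the item whose difficulty is
    closest to the current ability estimate. Real CATs maximise Fisher
    information; for roughly equal discriminations this is a fair proxy."""
    return min(item_bank, key=lambda item: abs(item["b"] - theta_estimate))
```

Because each item is chosen near the test-taker's current ability estimate, every response is maximally informative, which is why the adaptive format reaches the precision of a fixed-length test in fewer items.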
The adaptive design inherently maximises measurement precision: the IRT standard error is computed per individual, ensuring that reliability is optimised for every test-taker regardless of ability level. Test-retest reliability documented across three waves of development and item calibration. Test Information Functions (TIF) published for each subtest, demonstrating consistent measurement precision across the ability range.
General mental ability is the single strongest predictor of job performance ever discovered in personnel psychology. Schmidt & Hunter (1998) established an operational validity of .51 for GMA in predicting job performance, rising to .58 when combined with structured interviews. Sackett et al. (2022) updated these findings, confirming GMA's pre-eminence. The Adapt-g operationalises this through the CHC framework, measuring both fluid (Gf) and crystallised (Gc) intelligence components. Cognitive ability research heritage extends to Spearman (1904).
The adaptive algorithm selects items matched to each individual's demonstrated ability, reducing both floor and ceiling effects. This inherently reduces cultural and educational bias compared to fixed-length tests where all candidates receive identical items. Item calibration has undergone three waves of development with ongoing analysis to ensure item parameters remain stable across populations.
The CPP is fundamentally different from traditional cognitive assessments. Rather than measuring what someone knows or how fast they can solve structured problems, it tracks how they process unfamiliar, ambiguous information in real time. It is a simulation-based assessment developed by Dr Maretha Prinsloo (Cognadev), grounded in Stratified Systems Theory (Jaques, 1989) and holonic cognitive processing models.
Six processing competencies (Integration, Complexity, Logical Reasoning, Verbal Conceptualisation, Judgement, Quick Insight Learning), eight blue-score processing preferences, four speed and pace indicators, 14 thinking styles ranked by dominance, current and potential work environment levels, learning potential, and metacognitive quality. Unlike self-report instruments, the CPP observes actual cognitive behaviour rather than relying on self-perception.
Criterion validity established across 20+ years of organisational research and practical application. The SST framework it operationalises (Jaques, 1989) has been validated in organisational settings globally. The CPP's unique contribution is its ability to distinguish between current operating level and potential ceiling, providing a development trajectory that traditional cognitive tests cannot offer. Used by multinational organisations for C-suite selection and leadership development across multiple continents.
The CPP uses non-verbal, pattern-based simulation materials that minimise linguistic and cultural bias. It measures cognitive processing strategies rather than knowledge or vocabulary, making it more equitable across educational and cultural backgrounds than traditional verbal-heavy cognitive assessments. Available in multiple languages for instruction delivery.
The 15FQ+ is a well-established broad personality assessment based on the fifteen-factor personality model. It is the personality foundation from which the PVQ was extended. At the Essential tier, the 15FQ+ provides the personality data that maps to the AQ nine-facet framework, making it suitable for volume assessment scenarios where the full PVQ depth is not required.
The 15FQ+ is grounded in the Cattell tradition of factor-analytic personality assessment, with construct validity established through correlations with the 16PF. It shares the same theoretical heritage as the PVQ's personality scales, ensuring consistency across tiers. Normed internationally with published reliability coefficients across language versions and demographic groups.
De Meuse, K.P. (2022). Learning Agility: Could it Become the G-Factor of Leadership? Consulting Psychology Journal: Practice and Research.
This meta-analytic review is the theoretical backbone of the AQ. De Meuse establishes that learning agility, the willingness and ability to learn from experience and apply those lessons to new situations, is a distinct construct from cognitive ability (meta-analytic r = .091). This near-zero correlation means that being intelligent does not predict being agile, and vice versa. Both need to be measured independently.
De Meuse proposes a nine-facet framework organised in a 3x3 matrix: three components (Ability, Motivation, Application) across three domains (Cognitive, Social, Self). This is the exact architecture the AQ operationalises. He explicitly notes that the question of differential facet importance remains unanswered empirically, which is why the AQ weights facets equally within each domain.
Schmidt, F.L. & Hunter, J.E. (1998). The Validity and Utility of Selection Methods in Personnel Psychology. Psychological Bulletin, 124, 262-274. Updated by Sackett, P.R. et al. (2022).
The most cited paper in personnel psychology. Schmidt & Hunter's meta-analysis established that general mental ability (GMA) has an operational validity of .51 for predicting overall job performance, making it the single strongest predictor ever identified. Combining GMA with a structured interview (validity .51 on its own) raises the combined validity to .63; combining it with an integrity test (.41) raises it to .65. Sackett et al. (2022) updated these findings with modern statistical corrections and confirmed GMA's pre-eminence.
This underpins the AQ's weighting of cognitive capacity: GMA carries 15% in development contexts and 25% in selection contexts at the Professional tier. At the Executive tier, the CPP replaces GMA entirely and carries 30-48% of the composite because it provides a richer measure of cognitive complexity than a single stanine score.
Jaques, E. (1989). Requisite Organization. Operationalised through the CPP by Dr Maretha Prinsloo (Cognadev).
SST proposes that organisational effectiveness depends on matching the cognitive complexity of role-holders to the complexity demands of their roles. Jaques identified distinct strata of cognitive processing, each characterised by qualitatively different information handling capabilities. The CPP measures this directly through simulation, mapping individuals to one of five work environment levels from Pure Operational to Pure Strategy.
The practical implication is profound: a person operating at Tactical Strategy (Level 3) will struggle in a role that demands Parallel Processing (Level 4), regardless of their experience, personality, or motivation. This is the single most consequential data point in executive assessment because it answers the question "can this person handle the information processing demands of this role?" before any personality or behavioural data is considered.
Lombardo, M.M. & Eichinger, R.W. (2000). High potentials as high learners. Human Resource Management, 39, 321-329. De Meuse, K.P. on dark-side trait activation under reduced oversight.
Research from the Center for Creative Leadership (CCL) and subsequent work by De Meuse established that leadership failure (derailment) is rarely caused by a lack of technical competence. Instead, it is driven by dark-side personality traits that emerge under pressure, fatigue, or reduced oversight: traits like rigidity, volatility, detachment, and arrogance. These traits become more consequential at senior levels where there is less structure, more autonomy, and greater organisational impact.
The AQ operationalises this through 12 derailer scales that directly penalise the Selection Readiness Score. The penalty is heavier at Executive tier (-2.0 cap) than Professional tier (-1.5 cap), reflecting the research finding that dark-side traits are more damaging in senior roles.
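A minimal sketch of the penalty mechanics. The -1.5 and -2.0 caps come from the text above; the per-derailer severity weights are illustrative assumptions:

```python
SEVERITY_WEIGHT = {"high": 0.5, "moderate": 0.25, "low": 0.1}  # illustrative
TIER_CAP = {"professional": 1.5, "executive": 2.0}             # caps stated in the text

def derailer_penalty(derailer_severities, tier):
    """Sum severity-weighted derailer contributions, then cap at the tier
    limit. The result is subtracted from the Selection Readiness Score."""
    raw = sum(SEVERITY_WEIGHT[s] for s in derailer_severities)
    return -min(raw, TIER_CAP[tier])
```

The cap matters: beyond a certain derailer load, the penalty stops growing and the hard gate takes over, since no composite score should rehabilitate a profile that exceeds the acceptable risk threshold.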
The World Economic Forum's Future of Jobs Report 2025 provides the macro context for why agility measurement matters now. Based on data from over 1,000 employers representing 14 million workers across 55 economies:
22% of all jobs will be structurally disrupted by 2030. 39% of core workforce skills will change. 63% of employers cite skills gaps as their primary barrier to business transformation. 59 out of every 100 workers globally will need reskilling by 2030. The top skills rising in importance are resilience, flexibility, and agility; analytical thinking; curiosity and lifelong learning; self-awareness; and leadership and social influence. Eight of the WEF's top ten skills map directly to the AQ's nine facets.
McKinsey's "Superagency in the Workplace" report (2025) identifies a state where AI amplifies human creativity and productivity, but only for those organisations that invest in supporting their people alongside their technology. 88% of organisations now use AI in at least one function, yet only 10% deploy AI agents per function, revealing a massive execution gap.
Gartner predicts that by 2026, 20% of organisations will use AI to flatten their structures, eliminating more than half of current middle management positions. PwC's 2025 Global AI Jobs Barometer found that workers with AI skills command wage premiums up to 56% higher. Crucially, though, human capabilities such as judgement, creativity, leadership, and resilience become more valuable, not less, as AI handles technical tasks.
The implication for the AQ: as AI automates routine cognitive work, the human capabilities that differentiate high performers shift toward exactly what the AQ measures: agility, self-awareness, social influence, resilience under complexity, and the capacity to learn and adapt. Organisations that can identify, develop, and retain people with these capabilities will outperform those that cannot. The dashboard makes this visible. The reports make it actionable.
All evaluative narrative text in AQ reports is drawn from a human-authored library of 393+ narrative blocks. No AI-generated text is used for any evaluative or decision-impacting content. This ensures compliance with Article 22 of the GDPR, which gives individuals the right not to be subject to decisions based solely on automated processing. Scoring calculations are transparent and auditable.
HR assessment systems are classified as high-risk under the EU AI Act. The AQ framework addresses this through: transparent scoring methodology with published weightings, human-authored narrative content, auditable composite calculations, clear documentation of all weighting and threshold decisions, and a complete technical scoring guide available for regulatory review. No black-box algorithms are used at any stage.
Whether you are assessing a handful of executive candidates or mapping agility across your entire leadership population, the process follows the same structured path. Your Hydrogen consultant manages every step.
Define the question. Which roles, which levels, what are you trying to learn about your organisation?
Candidates complete the selected battery online. AI-proctored. Typically 60-90 minutes depending on tier.
Reports are generated through the AQ scoring engine. Dashboard populates. The data picture takes shape.
Strategic briefing with your HR leadership. Dashboard walkthrough. Pattern interpretation. The "so what" conversation.
Restructure, develop, select, or promote. Every action grounded in the data. Every decision auditable.
3-5 candidates assessed. Development and Selection reports generated per candidate. Traffic light recommendation for each. Consultant delivers the findings. Typically 2-3 weeks from assessment to decision.
50-500 people assessed across departments and levels. Individual Development reports generated for each person. Workforce Intelligence Dashboard delivered with full organisational analysis. Typically 4-6 weeks from kickoff to dashboard delivery.
The dashboard is not a one-off deliverable. As your organisation evolves, new hires are assessed, teams restructure, and development programmes are completed, the dashboard can be updated to reflect the new reality. Each update adds data and deepens the organisational intelligence picture. The AQ becomes a living capability, not a point-in-time snapshot.
Tell us what you are trying to solve. We will show you which pieces of the toolkit fit your challenge.
We do not prescribe solutions. We equip you with the tools and intelligence to build your own workforce strategy for the future.