Before AI was widely available, Greenhouse’s philosophy of structured hiring already helped hiring teams focus on what matters for a fair and equitable hiring process:
- Structured processes and recommended flows, like job kickoff documents
- Inclusion nudges and reminders at key moments of subjective judgment
- Built-in system architecture, like “focus attribute,” that allows users to approximate their inputs
This philosophy of structured hiring carries through into the AI-powered workflows in Real Talent. The system is designed, from the ground up, to require human decision-making throughout the process, using AI to sort through crushing candidate pipelines and reduce manual work so that hiring teams can focus on what's really important.
Greenhouse’s Talent Matching helps bridge the gap between ambiguous hiring needs and structured system inputs. As part of the Real Talent feature set, Talent Matching measures a candidate’s strength relative to the user-defined and user-weighted calibration and allows the recruiter to choose how to prioritize their candidates.
Read more about Greenhouse’s dedication to secure and transparent AI here.
How does Real Talent match candidates?
Before seeing any match results, the hiring team sets up a calibration: a combination of (1) objective hiring criteria, such as required skills, relevant experience, and job titles, and (2) the desired weight of each criterion. Real Talent then uses our tested algorithms to sort the candidate pool into categories based on each candidate's degree of fit against the calibration criteria.
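Greenhouse does not publish its matching algorithm, but the general idea of scoring candidates against user-defined, user-weighted criteria and bucketing them into fit categories can be sketched as follows. All function names, weights, and thresholds here are hypothetical illustrations, not the actual Real Talent implementation:

```python
# Illustrative sketch only: scoring a candidate against a user-weighted
# calibration, then bucketing the score into a fit category. The
# criteria, weights, and category cutoffs are invented for this example.

def match_score(candidate_signals, calibration):
    """Weighted fraction of calibration criteria the candidate meets."""
    total_weight = sum(calibration.values())
    earned = sum(weight for criterion, weight in calibration.items()
                 if criterion in candidate_signals)
    return earned / total_weight if total_weight else 0.0

def categorize(score):
    """Bucket a score into a fit category for the recruiter's dashboard."""
    if score >= 0.75:
        return "strong fit"
    if score >= 0.5:
        return "possible fit"
    return "needs review"

# Hypothetical calibration: required skills with recruiter-chosen weights.
calibration = {"python": 3, "sql": 2, "team leadership": 1}
candidate = {"python", "team leadership"}  # signals parsed from a resume

score = match_score(candidate, calibration)  # 4/6 ≈ 0.67
print(categorize(score))                     # prints "possible fit"
```

The key point the sketch illustrates: the weights and criteria come entirely from the hiring team, and the output is a category for human review, not a decision.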
Can calibration criteria be customized for my role?
Yes - calibrations are fully customizable, within certain parameters:
- The system blocks attempts to add a biased or protected attribute, like gender.
- The system warns the user if a skill can’t be parsed from the candidate’s resume.
- The system warns if a skill could serve as a proxy for a protected attribute.
Is AI making hiring decisions in Real Talent?
No. Talent Matching does not automatically disposition candidates, nor does it make any hiring decisions. AI is used to analyze the candidate's resume and compare it against the user-defined calibration criteria. The results are presented to the recruiter in their customized dashboard for further evaluation and action.
What happens if a candidate’s application can’t be analyzed using AI?
In situations where AI cannot be used, whether because of the job's location or rare technical issues with the candidate's resume, Talent Matching flags the candidate for manual review and notifies the recruiter in the interface.
Does Talent Matching auto-reject candidates with a low match score?
No. Talent Matching does not auto-reject or auto-advance any candidate. All disposition and hiring decisions are made by the hiring team.
What information can Greenhouse provide through Real Talent about how a candidate’s data was processed by AI?
Organizations can export a candidate packet containing the Talent Matching results from the time the candidate was advanced or rejected.
How does Talent Matching comply with state, federal and global regulations about the use of AI in the hiring process?
Greenhouse stays up to date on the changing legislation around AI tools in hiring. Most regulations require transparency, so we provide a template that describes the AI-powered Talent Matching feature and notifies candidates that AI will be used in the application process. Recruiters can customize this template to suit their individual needs.
Additionally, an independent third party conducts monthly bias audits of our algorithms and the audit results are publicly available here.
What steps does Greenhouse take to audit and test its AI for fairness?
Greenhouse applies a combination of technical safeguards and audit processes to promote fairness and reduce bias in AI-powered workflows. This includes monthly bias audits of the algorithms by Warden AI, an independent third-party auditor:
- Continuous auditing: Greenhouse's AI is audited by a third party on a monthly basis, confirming the system continues to be suitable for use or flagging potential issues early.
- Real-time transparency: Greenhouse publishes the results of each audit on a public dashboard for all customers to see.
- Deep technical auditing techniques: AI is examined through two techniques - equality of outcome and equality of treatment.
- Equality of outcome evaluates disparate impact between protected groups (whether certain groups receive disproportionately better or worse results than others from the AI).
- Equality of treatment examines whether demographic variables (such as names, gendered words, hobbies and interests, etc.) have an impact on the AI's treatment of the individuals being reviewed.
- Wide coverage of protected classes: Greenhouse leverages the Warden AI Dataset to evaluate bias across 10 protected classes, in line with civil rights regulations and emerging AI regulations.
- This includes: sex, race/ethnicity, age, disability, religion, sexual orientation, veteran status, national origin/ancestry, reproductive health/pregnancy, and English proficiency.
- Validation across the product lifecycle: new versions of Greenhouse's AI system are validated using Warden AI's third-party dataset during development and before being released to the live environment.
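To make the equality-of-outcome idea above concrete, here is a minimal sketch of one common disparate-impact check, the "four-fifths rule" used in US employment-selection guidance. This is an illustration of the general technique only, not Warden AI's or Greenhouse's actual audit methodology; the group names, counts, and threshold are hypothetical:

```python
# Illustrative sketch of an equality-of-outcome check via the
# four-fifths rule: compare selection rates across groups, and flag
# the result if the lowest rate falls below 80% of the highest.
# All data here is invented for the example.

def selection_rate(selected, total):
    """Fraction of a group's candidates who received the outcome."""
    return selected / total if total else 0.0

def disparate_impact_ratio(group_rates):
    """Ratio of the lowest group selection rate to the highest.

    A ratio below 0.8 is the conventional red flag for potential
    disparate impact under the four-fifths rule.
    """
    rates = list(group_rates.values())
    return min(rates) / max(rates) if max(rates) else 0.0

# Hypothetical counts of candidates rated favorably by an AI, per group.
rates = {
    "group_a": selection_rate(45, 100),  # 0.45
    "group_b": selection_rate(40, 100),  # 0.40
}
ratio = disparate_impact_ratio(rates)  # 0.40 / 0.45 ≈ 0.89
print("flag" if ratio < 0.8 else "ok")  # prints "ok"
```

Real audits such as Warden AI's also cover equality of treatment (e.g., perturbing demographic signals like names and observing whether outputs change), which this outcome-only sketch does not attempt to show.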