Available for all current subscription tiers (Core, Plus, and Pro) with the Real Talent add‑on.
Even before AI tools were widely available, Greenhouse’s philosophy of structured hiring helped hiring teams focus on what matters for a fair and equitable hiring process:
- Structured processes and recommended flows, like job kickoff documents
- Inclusion nudges and reminders in key moments of subjective judgment
- Built-in system features, like “focus attributes,” that help teams translate ambiguous hiring needs into structured, job‑related criteria
This philosophy of structured hiring carries through into AI-powered workflows in Real Talent. Talent Matching is designed as assistive AI, not automated decision‑making: it helps sort through large candidate pipelines and reduce manual work so hiring teams can focus on what’s really important.
As part of the Real Talent add‑on, Talent Matching helps bridge the gap between ambiguous hiring needs and structured system inputs. It measures a candidate’s strength relative to the user-defined and user-weighted calibration and allows the recruiter to choose how to prioritize their candidates. Talent Matching also tracks calibration history so teams can see how criteria changed over time and which calibration was active when a candidate received their match category.
Read more about Greenhouse’s dedication to secure and transparent AI here.
How does Real Talent match candidates?
Before seeing any match results, the hiring team sets up a calibration — a combination of:
- Objective hiring criteria, such as required skills, relevant experience, job titles, and (optionally) industry
- The desired weight of each criterion
Talent Matching then uses our tested algorithms to sort the candidate pool into match categories (such as Strong, Good, Partial, Limited, or Needs manual review) based on how closely each candidate aligns with the calibration criteria. Talent Matching is designed as assistive AI — it helps prioritize candidates, but recruiters and hiring managers make all advancement and rejection decisions.
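To make the calibration-to-category flow concrete, here is a minimal illustrative sketch of a weighted-match calculation. The weights, thresholds, category cutoffs, and the `match_category` function are all hypothetical, for illustration only; they are not Greenhouse's actual Talent Matching algorithm.

```python
# Hypothetical sketch only -- not Greenhouse's actual matching algorithm.

def match_category(weights: dict, coverage: dict) -> str:
    """Map a candidate's per-criterion coverage (0.0-1.0) to a match category.

    weights  -- criterion name -> relative weight set during calibration
    coverage -- criterion name -> how fully the candidate meets that criterion;
                None means the criterion could not be parsed from the resume
    """
    # Any unparseable criterion routes the candidate to manual review
    # instead of producing a score.
    if any(coverage.get(c) is None for c in weights):
        return "Needs manual review"

    total = sum(weights.values())
    score = sum(weights[c] * coverage[c] for c in weights) / total

    # Illustrative thresholds; the real category boundaries are not public.
    if score >= 0.8:
        return "Strong"
    if score >= 0.6:
        return "Good"
    if score >= 0.4:
        return "Partial"
    return "Limited"

# Example calibration: skills weighted highest, job title lightest.
weights = {"skills": 5, "experience": 3, "job_title": 2}
print(match_category(weights, {"skills": 0.9, "experience": 0.8, "job_title": 0.7}))
```

Note how the human-in-the-loop principle shows up even in this toy version: a resume the system cannot parse is never scored, only flagged for manual review.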
Can calibration criteria be customized for my role?
Yes. Calibrations are highly customizable within guardrails that help support fair, job‑related hiring:
- You can select skills, experience, job titles, and (optionally) industry that are relevant to the role.
- The system blocks attempts to add a biased or protected attribute, like gender.
- The system warns the user if a skill can’t be parsed from the candidate’s resume.
- The system warns if a skill could serve as a proxy for a protected attribute.
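The blocking and warning behaviors above can be pictured as a simple validation step at calibration time. The attribute lists and the `validate_criterion` helper below are hypothetical illustrations, not Greenhouse's actual rule set.

```python
# Hypothetical sketch of calibration guardrails -- not Greenhouse's actual rules.

PROTECTED = {"gender", "race", "age", "religion"}        # blocked outright
PROXY_WARN = {"sorority": "gender", "zip code": "race"}  # warned as possible proxies

def validate_criterion(name: str) -> tuple[bool, str]:
    """Return (allowed, message) for a proposed calibration criterion."""
    key = name.lower()
    if key in PROTECTED:
        return False, f"'{name}' is a protected attribute and cannot be used."
    if key in PROXY_WARN:
        return True, f"Warning: '{name}' may act as a proxy for {PROXY_WARN[key]}."
    return True, ""
```

The key design point mirrored here is that protected attributes are hard-blocked, while potential proxies only trigger a warning so the recruiter can judge whether the criterion is genuinely job-related.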
Is AI making hiring decisions in Real Talent?
No. Talent Matching does not automatically disposition any candidates nor does it make any hiring decisions. AI is used to analyze the candidate's resume and compare it against the user-defined calibration criteria, then place each candidate into a match category. The results are presented to recruiters and hiring managers in their dashboard for further review, so humans decide how to advance, reject, or follow up with each candidate.
For more technical detail about how Talent Matching works — including data sources, match categories, and human‑in‑the‑loop requirements — see Talent Matching - Data Processing FAQ.
Can candidates opt out of AI‑assisted review?
In some jurisdictions, employers may be required to offer an alternative to AI‑assisted review. Talent Matching includes an optional AI opt‑out flow that Site Admins can enable in Configure > Real Talent > Talent Matching.
When candidate AI opt‑out is turned on:
- Candidates see an AI disclaimer on the job post and can follow a link to manage their automated processing preferences.
- Candidates who opt out of AI‑assisted review for that application are labeled “Needs manual review” and are not evaluated by Talent Matching.
- Your team must review these candidates using your standard, non‑AI process.
Customers are responsible for deciding when to enable candidate AI opt‑out and for designing non‑AI review processes that comply with applicable laws and internal policies. For more details, see Operational readiness guide: Talent Matching policy and Talent Matching – Data Processing FAQ.
What happens if a candidate’s application can’t be analyzed using AI?
In some situations, Talent Matching can’t analyze a candidate’s application using AI. For example:
- The job is in a location where AI use is restricted
- The candidate’s resume has rare formatting or technical issues
- The candidate has opted out of AI‑assisted review for that application (when your organization has enabled AI opt‑out)
In these cases, the candidate is labeled “Needs manual review.” They are not evaluated by Talent Matching and must be reviewed by your team. These candidates are clearly surfaced in the Talent Matching view and can be found using filters.
Does Talent Matching auto-reject candidates with a low match score?
No. Talent Matching does not auto-reject or auto-advance any candidate, including candidates in lower match categories or those labeled “Needs manual review.” All disposition and hiring decisions are made by the hiring team.
What information can Greenhouse provide about how a candidate’s data was processed by AI?
Organizations can export a candidate packet containing the Talent Matching results in effect when the candidate was advanced or rejected, including the match category and reasoning.
These exports help support candidate access requests and internal audit requirements related to AI‑assisted review. For more detail about the contents of the candidate packet and how long match results are retained, see Talent Matching - Data Processing FAQ.
How does Talent Matching comply with state, federal, and global regulations about the use of AI in the hiring process?
Greenhouse stays up to date on the changing legislation around AI tools in hiring. Most regulations focus on transparency, human oversight, and data protection, so Talent Matching is designed as assistive AI, not automated decision‑making, and keeps recruiters in control of every decision.
To support compliance and transparency, Greenhouse provides:
- A job post disclaimer template that describes the AI‑powered Talent Matching feature and notifies candidates that AI may be used in the application review process. Recruiters can customize this template.
- Optional candidate AI opt‑out controls that allow candidates in certain jurisdictions to request manual (non‑AI) review instead of AI‑assisted scoring, when enabled by your organization.
- Detailed technical and data‑processing documentation in Talent Matching – Data Processing FAQ, which customers can use to support their own assessments and compliance processes.
Additionally, an independent third party conducts regular bias audits of our algorithms, and the audit results are publicly available on our AI Assurance dashboard.
What steps does Greenhouse take to audit and test its AI for fairness?
Greenhouse applies a combination of technical safeguards and audit processes to promote fairness and reduce bias in AI-powered workflows. This includes regular bias audits of the Talent Matching algorithms by Warden AI, an independent third-party auditor:
- Continuous auditing: Greenhouse’s AI is audited regularly by a third party, confirming the system continues to be suitable for use or flagging potential issues early.
- Real-time transparency: Greenhouse publishes the results of each audit on a public dashboard for all customers to see.
- Deep technical auditing techniques: AI is examined through two techniques: equality of outcome and equality of treatment.
  - Equality of outcome evaluates disparate impact between protected groups (whether certain groups receive disproportionately better or worse results than others from the AI).
  - Equality of treatment examines how demographic variables (such as names, gendered words, hobbies, and interests) impact the AI's treatment of individuals being reviewed.
- Wide coverage of protected classes: Greenhouse leverages the Warden AI Dataset to evaluate bias across 10 protected classes, in line with civil rights regulations and emerging AI regulations.
  - This includes: sex, race/ethnicity, age, disability, religion, sexual orientation, veteran status, national origin/ancestry, reproductive health/pregnancy, and English proficiency
- Validation across the product lifecycle: New versions of Greenhouse’s AI system are validated using Warden AI's third-party dataset during development and before being released to the live environment.
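As a rough illustration of what an equality-of-outcome check involves, bias audits commonly compute an impact ratio: each group's selection rate divided by the highest group's rate. The sketch below uses hypothetical numbers and the widely cited EEOC "four-fifths" rule of thumb; it is not Warden AI's actual methodology.

```python
# Illustrative equality-of-outcome check -- hypothetical data and threshold,
# not Warden AI's actual audit methodology.

def impact_ratio(selected: dict, total: dict) -> dict:
    """Return each group's selection rate divided by the highest group's rate."""
    rates = {g: selected[g] / total[g] for g in total}
    best = max(rates.values())
    return {g: rates[g] / best for g in rates}

# Example: counts of candidates placed in a favorable category, per group.
selected = {"group_a": 40, "group_b": 30}
total = {"group_a": 100, "group_b": 100}

ratios = impact_ratio(selected, total)
# Under the EEOC four-fifths rule of thumb, a ratio below 0.8 is a common
# signal of potential disparate impact worth investigating.
flagged = [g for g, r in ratios.items() if r < 0.8]
print(flagged)  # group_b's ratio is 0.30 / 0.40 = 0.75, below 0.8
```

Equality-of-treatment testing is complementary: rather than comparing outcome rates, it perturbs demographic signals in otherwise-identical inputs and checks whether the AI's output changes.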