Our Commitment to Innovation and Ethical AI

Greenhouse builds AI with security, privacy, and ethical responsibility at the core. Our AI features are developed and governed to support human decision-making, not replace it. We prioritize transparency, integrity, and customer choice as we bring additional AI/ML features to market. 

Greenhouse has published our Ethical Principles for our customers, which presents our guiding philosophy for how we evaluate AI feature development.

Innovative AI in Our Product

Greenhouse envisions AI as a catalyst for transformative advancements in hiring. Check out the many additional ways that Greenhouse offers AI-powered assistance in our products to recruiters.

People make hiring decisions - but AI can help

Greenhouse believes that the critical step of evaluating and deciding who to hire should remain with hiring teams. AI or algorithmic technology may be prone to bias so we believe that people are best suited to make hiring decisions. AI can serve as an indispensable, time-saving assistant, but it shouldn’t be a replacement for the human decision-makers.

For more content on Greenhouse’s thoughts and potential roadmap for AI technology check out the following blog: How we’re embracing AI in our hiring software.

AI models used at Greenhouse

As Greenhouse builds AI into our products, we will be using AI technology that falls into two categories: Off-the-shelf, public generative AI models (e.g., GPT) and our own proprietary models known as “Greenhouse AI.”

Public generative AI models

These kinds of AI models are integrated into Greenhouse to perform tasks such as generating content based on a user-submitted prompt, or series of prompts, and summarization of content. We may have a unique approach for how we apply these models, but this type of generative AI is not based on any AI model proprietary to Greenhouse.

Greenhouse AI models

This category refers to our own proprietary approach to developing AI models through multiple learning techniques, including deep learning. Greenhouse AI leverages our unique data, such as job and hiring process data, to complete specific and more complex tasks, such as making a prediction on when you might make a hire based on your pipeline of candidates alongside goals for when to extend an offer. Each feature supported by Greenhouse AI involves training a new model to perform a specific task.

Our intention going forward is to use both off-the-shelf models and Greenhouse AI depending on the specific customer problem we are solving.

Transparency & Control

Greenhouse commits to full transparency by clearly notifying users when they are interacting with AI features within our product, and distinctly marking any content or text generated by Generative AI to ensure informed user engagement. Recommendations made by AI technology should function as a starting point for our customers, and our expectation is that the output will be tweaked to support a customer’s own purposes.

Greenhouse puts customers in control of AI feature capabilities by providing a configuration page that allows you to turn off any of our AI features.

For Real Talent users:

  • Greenhouse provides the ability to opt-in specific offices and departments to use the AI-powered Talent Matching feature
  • Talent Matching users are provided with logic to help understand the matching results, including visual highlighting of the parts of the candidate’s resume that match the recruiter’s criteria, a short summary that explicitly justifies the matching results, and a clear list of the candidate’s skills that matched the calibration criteria. 
  • Greenhouse provides AI notice templates to notify job applicants how AI will be used during hiring. Each company can configure custom notices based on their compliance obligations
  • Greenhouse provides customer facing AI opt-out functionality that each company can configure at the job level

Privacy and Data Protection

Data Usage - Greenhouse does not use any personal data that we store or process on behalf of our customers to train our internal LLMs/ML or third-party models.

Data Retention - Greenhouse stores, processes, and retains customer data in accordance with our Master Subscription Agreement and Data Privacy Addendum. Greenhouse is subject to OpenAI’s 30-day retention policy, which states that OpenAI may securely retain API inputs and outputs for up to 30 days to provide the services and to identify abuse. After 30 days, API inputs and outputs are removed from OpenAI’s systems, unless they are legally required to retain them.

Data Sharing and Disclosure - Greenhouse is committed to transparency with customers about which AI features may involve sharing personal data with third party AI models, and customers can always turn off those AI features via our configuration page. When possible, Greenhouse uses tokenized data prior to sharing with third parties. 

Data Encryption - Customer data is encrypted in-transit between customers and Greenhouse using Transport Layer Security (TLS) 1.2 or higher. Customer data is encrypted at rest using a minimum Advanced Encryption Standard (AES) 256-bit encryption.

Data Access - As always, Greenhouse restricts access to customer data and content to its employees who require it in connection with their roles and based on the principle of least privilege.

AI Outputs - Outputs generated by Greenhouse AI features are treated as customer data, protected under and governed by our Master Subscription Agreement and Data Processing Addendum. 

Legal and Regulatory Compliance

Greenhouse’s contractual commitments for our responsible use of AI are further enumerated in the Greenhouse AI addendum.

To comply with evolving AI legislation, Greenhouse prioritizes human-led hiring decisions, as our platform prohibits AI from making automated dispositions on candidate applications. This "Human in the Loop" approach aligns with the most stringent requirements for automated employment decisions. 

We use an independent firm, Warden AI, to conduct monthly bias audits on our AI-powered Talent Matching tool to test our algorithms for bias or other unintended impact on candidates. The results of these audits are publicly available here. By prioritizing responsible product design and human oversight, our technology supports, rather than replaces, the people who are ultimately in the best position to decide who will be the new addition to their team.

More information specific to legal obligations and regulations regarding Greenhouse’s AI products can be found in our blog: The evolving legal landscape of AI in the hiring process.

Responsible AI & governance

Greenhouse has achieved ISO 42001:2023 certification, which is an international standard for  AI management systems and validates our commitment to building responsible AI products. The ISO 42001 standard audits our AI governance program against our documented objectives for accountability, fairness, privacy/security, compliance and transparency. ISO 42001 certification provides independent third-party assurance to our customers that Greenhouse continues to proactively align to evolving global AI regulations, including the EU AI Act.

A core part of this is our AI Ethics Committee, a cross-functional body that evaluates new AI capabilities, assesses risks, implements appropriate guardrails, and ensures alignment with our Ethical Principles and appropriate controls and guardrails. The committee typically examines data practices, algorithmic bias, transparency, accountability mechanisms, privacy implications, and effects on affected individuals or groups.

Security and privacy by design

We prioritize the security of our products and services, including all AI-powered functionality.

AI features are integrated into Greenhouse’s broader secure development lifecycle and are subject to the same rigorous security controls, reviews, and testing as the rest of our platform.

Data privacy and protection

AI systems are designed to protect customer and candidate data. Encryption in transit and at rest helps ensure sensitive information remains confidential and secure.

Algorithmic decision-making

Greenhouse does not deploy machine learning or algorithmic decision-making in any way that may violate a candidate or prospect’s privacy – such as inferring sensitive demographic data based on anything other than their voluntary self-report.

AI-specific security best practices

In addition to standard platform controls, Greenhouse applies AI-specific security measures to address risks such as prompt injection, data leakage, output handling, and denial-of-service scenarios. AI features undergo annual AI focused penetration testing.

FAQs

Is personal data from your customers used to train your AI?

Greenhouse’s internal ML and LLM models are trained using customer data solely in an anonymized and de-identified form that cannot reasonably be used to identify the customer or any person or entity. Examples include job location, number of candidates applied, time-to-hire metrics, and scheduling availability. Such data does not contain or constitute personal data. Greenhouse’s AI features abide by the same terms and privacy agreements as our product. No customer data, personal data or otherwise, is used to train external models. 

How do Greenhouse’s AI features – especially Real Talent™ Talent Matching – comply with NYC Local Law 144, Colorado SB 205, the EU AI Act, and California FEHA?

Greenhouse is committed to complying with new legislation that will govern AI tools and we are closely monitoring our compliance obligations and making adjustments where necessary in anticipation of them going into effect.  By aligning our AI governance practices to an international standard such ISO 42001:2023, we will ensure our practices will continue to align with emerging AI laws. 

Greenhouse’s AI-powered Talent Matching feature provides the following controls designed to satisfy the main requirements in these laws

  • Human-in-the-loop - AI never automatically advances or rejects candidates; Talent Matching only scores and groups candidates against recruiter‑defined criteria and requires humans to make all hiring decisions. 
  • Independent bias audits & fairness safeguards - Monthly third‑party bias audits by Warden AI across 10 protected classes, with disparate‑impact and demographic‑variable testing.  The results are publicly available here. Greenhouse includes guardrails on configured calibrations to avoid common proxy terms that could lead to un-intentional bias.
  • Transparency, configurability, and customer control - Configurable AI notices/templates and product guidance so customers can disclose AI use and rights to candidates. Talent Matching can be enabled/disabled by office, department, and job, and AI scores can be turned off or overridden at any time. 
  • AI governance, risk management, and security - ISO 42001:2023 AI management certification plus AI‑focused penetration testing, secure SDLC, and strong privacy/security controls (encryption, subprocessors under DPAs, Data Privacy Framework, etc.).  Our AI Ethics Committee reviews product features to ensure they align with emerging AI legal requirements as well as our AI Ethical Principles.
  • Logging and audit trail - We provide in-app features to fulfill your data processing requirements such as data exports and retention for AI matching results.  Greenhouse provides audit logs for configuration changes on Talent Matching settings.

Do you have an AI Model Card available for your AI-powered Talent Matching feature?

Yes, we provide our customers an AI Model Card on our Trust Portal to explain our Talent Matching algorithm. 

Can Greenhouse help my company complete a Data Processing Impact Assessment (DPIA) for the Talent Matching feature? 

Greenhouse provides customers with a Talent Matching Data Processing FAQ article that covers many of the questions that would need to be answered to perform a DPIA.  

Will penetration tests be conducted on the AI/ML features?

Yes, Greenhouse performs an AI focused annual penetration test against all our AI-powered features. Our latest report can be downloaded by customers on our Trust Portal

How does Greenhouse address bias in the outputs from the AI/ML features?

Greenhouse is deeply committed to ensuring that AI or machine learning features do not introduce bias into hiring decisions. When working with internal models that have the potential to lead to bias, the engineering and data teams do tests to validate those concerns to improve our models.

For Real Talent - Greenhouse undergoes a monthly third party bias audit for its Talent Matching algorithm that aligns with NYC Local Law 144, Colorado SB 205, EU AI Act and California FEHA requirements. We provide several compliance reports specific to these regulations here.  

Do you support bring-your-own-key with OpenAI?

This is not something that Greenhouse currently supports.

Can clients opt out of AI features?

Customers may toggle AI features off/on in the “AI Features” menu within Greenhouse Recruiting (but this does not opt them out of training).

For Enterprise customers, we have additional flexibility to set features as "opt-in," ensuring they remain off until manually enabled by an admin.

Does Greenhouse have specific legal terms regarding the use of AI?

Greenhouse’s contractual commitments for our responsible use of AI are further enumerated in the Greenhouse AI Terms.

Do we store audit logs for AI features?

Greenhouse stores internal logs for AI features for 15 days to support responding to product issues. 

For Real Talent - Greenhouse provides audit logs to customers via the Greenhouse Change log and through our Audit Log add-on for any changes to Talent Matching settings or job calibration configurations.

Where can I find a listing of all Greenhouse AI features and what model is used and their data processing activities? 

Greenhouse provides the following support article to provide documentation for each of our AI features, where third party AI or Greenhouse AI is used and what data is processed by the AI feature.