Our Commitment to Innovation and Ethical AI
Greenhouse is not just releasing innovative AI features - we’re doing so with a steadfast commitment to ethical AI development, with security and privacy at the core of our AI and machine learning (AI/ML) capabilities. We prioritize transparency, integrity, and customer choice as we bring additional AI/ML features to market.
Greenhouse has published our Ethical Principles for our customers, which presents our guiding philosophy for how we evaluate AI feature development. The Greenhouse AI Ethics Committee is tasked with assessing new AI product features to ensure they are aligned with our Ethical Principles.
Innovative AI in Our Product
Greenhouse envisions AI as a catalyst for transformative advancements in hiring. Here are a few current and upcoming examples across our various products:
- Content Generation: Integrating advanced text generation models into hiring workflows enables recruiters to create job postings and prospecting templates more efficiently. For instance, recruiters can swiftly generate written content for sourcing emails through AI-generated templates that users can adapt and refine to fit their employer brand.
- Categorization & Anonymization: By applying AI-driven data analysis to resume parsing, Greenhouse aims to enhance inclusivity in candidate evaluation. As part of this aim, leveraging AI for resume parsing provides Greenhouse with the ability to anonymize resumes to reduce bias during the application review process. In addition, harnessing AI to categorize text will enable our customers to search more effectively across their candidates for specific skills, even if those skills are not always phrased identically by different candidates.
- Summarization: AI-powered summarization capabilities will allow users to collate and analyze vast amounts of hiring data quickly. These features enable recruiters to gain insightful answers from the hiring platform through natural language conversations, saving time and facilitating more informed decision-making. Greenhouse is releasing new features that will integrate summarization into recruiting workflows in 2025, and will continue to research these capabilities.
People make hiring decisions - but AI can help
Greenhouse believes that the critical step of evaluating and deciding who to hire should remain with hiring teams. AI or algorithmic technology may be prone to bias so we believe that people are best suited to make hiring decisions. AI can serve as an indispensable, time-saving assistant, but it shouldn’t be a replacement for the human decision-makers.
For more content on Greenhouse’s thoughts and potential roadmap for AI technology check out the following blog: How we’re embracing AI in our hiring software.
AI models used at Greenhouse
As Greenhouse builds AI into our products, we will be using AI technology that falls into two categories: Off-the-shelf, public generative AI models (e.g., GPT) and our own proprietary models known as “Greenhouse AI.”
Public generative AI models
These kinds of AI models are integrated into Greenhouse to perform tasks such as generating content based on a user-submitted prompt, or series of prompts, and summarization of content. We may have a unique approach for how we apply these models, but this type of generative AI is not based on any AI model proprietary to Greenhouse.
Greenhouse AI models
This category refers to our own proprietary approach to developing AI models through multiple learning techniques, including deep learning. Greenhouse AI leverages our unique data, such as job and hiring process data, to complete specific and more complex tasks, such as making a prediction on when you might make a hire based on your pipeline of candidates alongside goals for when to extend an offer. Each feature supported by Greenhouse AI involves training a new model to perform a specific task.
Our intention going forward is to use both off-the-shelf models and Greenhouse AI depending on the specific customer problem we are solving.
Transparency & Control
Greenhouse commits to full transparency by clearly notifying users when they are interacting with AI features within our product, and distinctly marking any content or text generated by Generative AI to ensure informed user engagement. Recommendations made by AI technology should function as a starting point for our customers, and our expectation is that the output will be tweaked to support a customer’s own purposes.
Greenhouse puts customers in control of AI feature capabilities by providing a configuration page that allows you to turn off any of our AI features.
Privacy and Data Protection
Data Usage - Greenhouse does not use any personal data that we store or process on behalf of our customers to train our internal LLMs/ML or third-party models.
Data Retention - Greenhouse stores, processes, and retains customer data in accordance with our Master Subscription Agreement and Data Privacy Addendum. Greenhouse is subject to OpenAI’s 30-day retention policy, which states that OpenAI may securely retain API inputs and outputs for up to 30 days to provide the services and to identify abuse. After 30 days, API inputs and outputs are removed from OpenAI’s systems, unless they are legally required to retain them.
Data Sharing and Disclosure - Greenhouse is committed to transparency with customers about which AI features may involve sharing personal data with third party AI models, and customers can always turn off those AI features via our configuration page. When possible, Greenhouse uses tokenized data prior to sharing with third parties.
Data Encryption - Customer data is encrypted in-transit between customers and Greenhouse using Transport Layer Security (TLS) 1.2 or higher. Customer data is encrypted at rest using a minimum Advanced Encryption Standard (AES) 256-bit encryption.
Data Access - As always, Greenhouse restricts access to customer data and content to its employees who require it in connection with their roles and based on the principle of least privilege.
Legal and Regulatory Compliance
In light of recent legislation addressing the ethical use of artificial intelligence in hiring practices, our company is committed to upholding the highest standards of fairness and transparency. As described above, Greenhouse recognizes the importance of human-led decision-making in the recruitment process and, as such, our platform does not currently employ AI for candidate scoring/ranking or making hiring decisions. This approach aligns with global regulatory trends, including New York City’s Anti-Bias in Hiring Law and the EU AI Act, that classify automated decision-making in the employment context as high-risk. By prioritizing human oversight, we ensure that our technology supports, rather than replaces, the people who are ultimately in the best position to decide who will be the addition to their team.
Greenhouse does not deploy machine learning or algorithmic decision-making in any way that may violate a candidate or prospect’s privacy – such as inferring sensitive demographic data based on anything other than their voluntary self-report.
More information specific to legal obligations and regulations regarding Greenhouse’s AI products can be found in our blog: The evolving legal landscape of AI in the hiring process.
Security Practices
We prioritize the security of our products and services as a matter of utmost importance. This dedication extends to our AI functionality. Our AI features are integrated into our broader security development lifecycle, ensuring they undergo the same stringent security controls and reviews that have shielded our other products effectively against threats.
To safeguard our AI services, we implement a comprehensive suite of security measures:
Data Privacy and Protection - Our AI systems are designed to respect and protect user data. Encryption, both in transit and at rest, ensures that personal and sensitive information remains confidential and secure.
AI-Specific Best Practices - Recognizing the unique challenges posed by AI systems, we also adhere to additional Generative AI security best practices. These practices include mitigations for vulnerabilities such as prompt injection, output handling, denial of service, and data leakage. We also review features to ensure they adhere to ethical AI principles.
Looking ahead, we are committed to continuously enhancing our security measures. As part of this ongoing commitment, we will be including our AI features in our 2025 penetration testing scope. This will provide an additional layer of scrutiny and reassurance, ensuring our AI capabilities not only meet but exceed industry security standards.
FAQs
Is personal data from your customers used to train your AI?
Greenhouse’s internal ML and LLM models are trained using anonymized and/or aggregated customer data, such as job location, number of candidates applied, time-to-hire metrics, and scheduling availability. Such data does not contain or constitute personal data. Greenhouse’s AI features abide by the same terms and privacy agreements as our product.
Is Greenhouse compliant with the NYC AI Bias Law?
Because Greenhouse does not employ any AI functionality that constitutes an automated employment decision tool, it is not as of yet required to perform bias audits under the NYC Anti-Bias in Hiring Law. Although Greenhouse has been developing AI features to optimize administrative aspects of recruiting workflows, we are intentionally not replacing human decision-making with AI functionality.
How is Greenhouse planning to comply with emerging AI laws?
Greenhouse is committed to complying with new legislation that will govern AI tools, such as the EU AI Act, the Colorado AI Act, and California’s AB 2013, and we are closely monitoring our compliance obligations and making adjustments where necessary in anticipation of them going into effect.
Greenhouse’s AI features do not replace human beings as the decision-makers in the hiring process, and they do not infer demographic information about candidates. In addition, Greenhouse is building our AI functionality in a manner that prioritizes transparency, integrity, and customer choice. Accordingly, we expect that our AI functionality will not be considered high risk under the new laws, and that Greenhouse will be compliant with their transparency requirements. As described above, Greenhouse currently provides clear notifications to our users any time they are engaging with AI-powered features within our tool, and allows customers to turn off any AI feature at any time.
Has a penetration test been conducted on the AI/ML features before public release?
The Greenhouse Security Engineering team is involved in the design phase of all new AI/ML features to be released to ensure that key security controls are integrated into the software development lifecycle. Greenhouse will be explicitly scoping new AI/ML features into our 2025 penetration tests (estimated in Q3 2025) to ensure we have coverage.
How does Greenhouse address bias in the outputs from the AI/ML features?
Greenhouse is deeply committed to ensuring that AI or machine learning features do not introduce bias into hiring decisions. When working with internal models that have the potential to lead to bias, the engineering and data teams do tests to validate those concerns to improve our models.