Our Commitment to Ethical AI
Greenhouse is committed to ethical AI development, with security and privacy at the core of the AI and machine learning (AI/ML) capabilities we provide to our customers. AI and machine learning features at Greenhouse are built with commitments to security and privacy across our platform. We recognize that AI and machine learning present an evolving set of risks for our customers and are prioritizing transparency and customer choice as we bring additional AI/ML features to market. Greenhouse reaffirms that we do not use any personal customer data to train Greenhouse’s proprietary or third-party artificial intelligence models.
Greenhouse has published our Ethical Principles for our customers, which presents our guiding philosophy for how we evaluate AI feature development. The Greenhouse AI Ethics Committee is tasked with assessing new AI product features to ensure they are aligned with our ethical principles.
Introduction to AI in Our Product
Greenhouse envisions AI as a tool for expediting routine tasks and a catalyst for potentially transformative advancements in hiring. However, we believe AI is an assistant in hiring, not a replacement, and currently see no evidence that AI is capable of making end-to-end hiring decisions. The potential we do see across various products is outlined below:
- Content Generation: Integrating advanced text generation models into hiring workflows enables recruiters to create job postings and prospecting templates more efficiently. For instance, recruiters can swiftly generate written content for sourcing emails through AI-generated templates that users can adapt and refine to fit their employer brand.
- Categorization & Anonymization: By applying AI-driven data analysis to resume parsing, Greenhouse aims to enhance equity and inclusivity in candidate evaluation. As part of this aim, leveraging AI for resume parsing provides Greenhouse with the ability to anonymize resumes to reduce bias during the application review process.
- Summarization: AI-powered summarization capabilities will allow users to collate and analyze vast amounts of hiring data quickly. These features enable recruiters to gain insightful answers from the hiring platform through natural language conversations, saving time and facilitating more informed decision-making. Greenhouse will be researching these capabilities and certain feature areas where summarization can be integrated into recruiting workflows.
- Automation: Greenhouse plans to leverage AI to streamline complex tasks through automation. We are currently researching which hiring workflows and features may benefit from automation opportunities in the future.
Our position on using AI for hiring decision-making
While recognizing AI's power to streamline time-consuming administrative and analytical tasks related to hiring, Greenhouse believes that the critical step of evaluating and deciding which candidates to hire should remain with hiring teams. Our position is that people – not strictly AI or algorithmic technology that may be prone to bias – are best suited to make end-to-end hiring decisions. AI can serve as an indispensable, time-saving assistant that frees up hiring teams to focus on the sophisticated work of making hiring decisions, but it shouldn’t be a replacement for the decision-makers.
For more content on Greenhouse’s thoughts and potential roadmap for AI technology check out the following blog: How we’re embracing AI in our hiring software
AI models used at Greenhouse
As Greenhouse builds AI into our products, we will be using AI technology that falls into two categories: Off-the-shelf, public generative AI models (e.g., GPT) and our own proprietary models known as “Greenhouse AI.”
Public generative AI models
These kinds of AI models are integrated into Greenhouse to perform tasks such as generating content based on a user-submitted prompt, or series of prompts, and summarization of content. We may have a unique approach for how we apply these models, but this type of generative AI is not based on any AI model proprietary to Greenhouse.
Greenhouse AI models
This category refers to our own proprietary approach to developing AI models through multiple learning techniques, including deep learning. Greenhouse AI leverages our unique data, such as job and hiring process data, to complete specific and more complex tasks, such as making a prediction on when you might make a hire based on your pipeline of candidates alongside goals for when to extend an offer. Each feature supported by Greenhouse AI involves training a new model to perform a specific task.
Our intention going forward is to use both off-the-shelf models and Greenhouse AI depending on the specific customer problem we are solving.
Transparency & Control
Greenhouse commits to full transparency by clearly notifying users when they are interacting with AI features within our product, and distinctly marking any content or text generated by Generative AI to ensure informed user engagement. Recommendations made by AI technology should function as a starting point for our customers, and our expectation is that the output will be tweaked to support a customer’s own purposes.
Greenhouse puts customers in control of AI feature capabilities by providing a configuration page that allows you to opt out of any of our AI features.
Privacy and Data Protection
Data Usage - Greenhouse does not use any personal data that we store or process on behalf of our customers to train our internal LLMs/ML or third party models
Data Retention - Greenhouse stores, processes, and retains customer data in accordance with our Master Subscription Agreement and Data Privacy Addendum. Greenhouse is subject to OpenAI’s 30 day retention policy, which states that OpenAI may securely retain API inputs and outputs for up to 30 days to provide the services and to identify abuse. After 30 days, API inputs and outputs are removed from OpenAI’s systems, unless they are legally required to retain them.
Data Sharing and Disclosure - Customer personal data is not used in prompts or contextual data when utilizing third-party AI models. When possible, Greenhouse uses tokenized data prior to sharing with third parties.
Data Encryption - Customer data is encrypted in-transit between customers and Greenhouse using Transport Layer Security (TLS) 1.2 or higher. Customer data is encrypted at rest using a minimum Advanced Encryption Standard (AES) 256-bit encryption.
Data Access - As always, Greenhouse restricts access to customer data and content to its employees who require it in connection with their roles and based on the principle of least privilege.
Legal and Regulatory Compliance
In light of recent legislation addressing the ethical use of artificial intelligence in hiring practices, our company is committed to upholding the highest standards of fairness and transparency. As described above, Greenhouse recognizes the importance of human-led decision-making in the recruitment process and, as such, our platform does not employ AI for candidate scoring/ranking or making hiring decisions. This approach aligns with global regulatory trends, including New York City’s Anti-Bias in Hiring Law and the EU AI Act, that classify automated decision-making in the employment context as high-risk. By prioritizing human oversight, we ensure that our technology supports, rather than replaces, the people who are ultimately in the best position to decide who will be the addition to their team.
Greenhouse does not deploy machine learning or algorithmic decision-making in any way that may violate a candidate or prospect’s privacy – such as inferring sensitive demographic data based on anything other than their voluntary self-report.
For more information specific to legal obligations and regulations regarding Greenhouse’s AI products can be found in our blog: The evolving legal landscape of AI in the hiring process
Security Practices
We prioritize the security of our products and services as a matter of utmost importance. This dedication extends to our AI functionality. Our AI features are integrated into our broader security development lifecycle, ensuring they undergo the same stringent security controls and reviews that have shielded our other products effectively against threats.
To safeguard our AI services, we implement a comprehensive suite of security measures:
Data Privacy and Protection - Our AI systems are designed to respect and protect user data. Encryption, both in transit and at rest, ensures that personal and sensitive information remains confidential and secure.
AI-Specific Best Practices - Recognizing the unique challenges posed by AI systems, we also adhere to additional Generative AI security best practices. These practices include mitigations for vulnerabilities such as prompt injection, output handling, denial of service, and data leakage. We also review features to ensure they adhere to ethical AI principles.
Looking ahead, we are committed to continuously enhancing our security measures. As part of this ongoing commitment, we will be including our AI features in our 2024 penetration testing scope. This will provide an additional layer of scrutiny and reassurance, ensuring our AI capabilities not only meet but exceed industry security standards.
FAQs
Is customer PII used to train your Artificial Intelligence?
Greenhouse’s internal ML and LLM models are trained using anonymized and/or aggregate customer account metadata, such as job location, number of candidates applied, time-to-hire metrics, and scheduling availability. Such metadata does not contain or constitute personal data. Greenhouse’s AI features abide by the same terms and privacy agreements as our product.
Is Greenhouse compliant with the NYC AI Bias Law?
Because Greenhouse does not employ any AI functionality that constitutes an automated employment decision tool, it is not required to perform bias audits under the NYC Anti-Bias in Hiring Law. Although Greenhouse has been developing AI features to optimize administrative aspects of recruiting workflows, we are intentionally not replacing human decision-making with AI functionality.
Is Greenhouse compliant with the EU AI Act?
Greenhouse’s AI features do not replace human beings as the ultimate decision-makers in the hiring process, and they do not infer demographic information about candidates. Accordingly, we expect that they will fall under the ‘limited risk’ classification established by the EU AI Act, meaning that they will be subject to minimal transparency obligations to end users. As described above, Greenhouse currently provides clear notifications to our users any time they are engaging with AI-powered features within our tool. We will continue to monitor our compliance obligations under the EU AI Act and make adjustments when necessary.
Has a penetration test been conducted on the AI/ML features before public release?
The Greenhouse Security Engineering team is involved in the design phase of all new AI-ML features to be released to ensure that key security controls are integrated into the software development lifecycle. Greenhouse will be explicitly scoping new AI/ML features into our 2024 penetration tests (estimated in Q4 2024) to ensure we have coverage.
How does Greenhouse address bias in the outputs from the AI/ML features?
Greenhouse is deeply committed to ensuring that AI or machine learning features do not introduce bias into hiring decisions. When working with internal models that have the potential to lead to bias, the engineering and data teams do tests to validate those concerns to improve our models.