Attest is committed to protecting your data and safeguarding your privacy. We believe in being transparent about how we use AI to enhance your experience. The summary below outlines the Attest AI features along with the associated security and privacy measures.
Summary of AI at Attest
Attest offers a suite of AI-powered tools, integrated seamlessly across the Attest platform. These capabilities help our users conduct research effectively: get inspiration from recommended survey designs, ensure robust data, and distil key insights quickly.
We use proprietary models that are built and managed by our in-house data experts, as well as third-party generative AI tools.
Proprietary models: This category refers to our approach to developing in-house AI models via multiple learning techniques, including Natural Language Processing ("NLP"). We leverage these models to help ensure high-quality survey responses, for example through quality checks, text analysis, and transcription. Models are constantly reviewed to ensure a positive impact on the integrity of our customers' research.
Third-party Generative AI: We use a limited and carefully chosen selection of generative AI models to complete tasks such as recommending questions, summarising open-text responses or generating insights. These tools are overseen and carefully monitored by our expert Data Team.
For the avoidance of doubt, Attest does not use customers' personal data to train any of our AI models. This applies to both third-party generative AI tools and our in-house proprietary models. With regard to our use of generative AI within the platform, we have a strict policy of maintaining broad opt-outs to ensure that any data provided by Attest is not used to train third-party models.
Coming soon [Compass]:
In December 2025, we will release Attest's first co-pilot, Compass. Compass offers an embedded, seamless experience to get support at every step of your research journey. Compass appears natively in Attest but leverages technology from third-party generative AI models to provide the service. When you use this feature, Attest securely sends only the necessary text to approved third-party LLM providers, who return an AI-generated response. We have rigorous evaluation testing in place, with automated and human checks, to ensure that outputs meet our standards and our safety and ethical principles (outlined below).
Who are Attest’s Large Language Model providers?
Attest uses a carefully chosen range of large language models - some hosted by Attest, and others by trusted providers such as Anthropic and Google. We regularly review providers and new models to ensure the best option is used for each feature. As with all of our third-party providers, they are subject to review by our in-house legal and security teams, with input from our expert Data Team colleagues.
Attest’s AI principles
We’ve adopted the following principles, which reflect our core values when it comes to building and using AI within Attest.
Human centred design
We prioritise human well-being and autonomy in AI development, focusing on inclusivity, fairness, and responsiveness to user needs.
Human primacy: We prioritise human well-being, dignity, and autonomy. AI enhances, but does not replace, human judgement.
Inclusive development: We commit to engaging diverse stakeholders throughout the design and evaluation processes, ensuring multiple perspectives inform AI systems.
Fairness focus: We work to identify and mitigate bias and discrimination risks in AI tools.
Customer-driven innovation: We emphasise gathering and responding to customer feedback to develop solutions that genuinely address user needs.
Transparency and explainability
We are committed to transparency in AI development. We provide comprehensive information and we ensure users can easily identify when they are interacting with AI systems.
Comprehensive disclosure: We provide clear information about what informs our AI-driven decisions. We share information about models’ capabilities, limitations, and potential risks.
Thorough documentation: We document all aspects of our AI development including data sources, model architecture, testing procedures, and prompt design.
Knowledge sharing: We share best practices both internally and with the broader community, contributing to industry standards and responsible AI advancement.
Clear AI identification: We ensure users can easily distinguish when they are interacting with AI systems versus humans, with consistent interface design.
Privacy and data governance
Our AI developments are centred on robust data protection and ethical data handling, focusing on implementing strong security measures and respecting user privacy.
Robust protection: We implement strong data protection measures throughout our systems, covering both our own assets and our customers’ data.
Informed consent: We respect user control over personal data and ensure proper informed consent.
Ethical data handling: We commit to fairness, transparency, and accountability when processing personal information. We collect only necessary and proportional data, avoiding excessive gathering.
For more information on Attest’s approach to data protection, see our dedicated page.
Safety and security
We maintain a comprehensive security approach focused on secure infrastructure, proactive threat assessment and continuous system monitoring.
Secure infrastructure: We host models and services on secure and robust infrastructure and maintain regular security updates, patches, and mitigations to address emerging threats. We have high expectations of our suppliers and their security infrastructure.
Threat assessment and risk mitigation: We proactively evaluate AI security threats and manage risks to protect users. We create mechanisms for redress when systems cause harm to users or stakeholders.
Behavioural monitoring: We continuously monitor system behaviour to detect and address anomalies or deviations. We establish clear protocols for handling such situations, which are factored into our red-teaming processes.
For more information on Attest’s security standards, see our dedicated page.
Scientific integrity and excellence
We design AI systems with a primary focus on performance and functionality, while maintaining high standards of data quality and methodological precision.
Scientific foundation: We base AI development on sound scientific principles and established best practices.
Verified claims: We subject all claims about system capabilities to appropriate verification and testing.
Data quality standards: We maintain high standards for data quality and methodological rigour throughout development, including via our red-teaming processes. We conduct appropriate testing and evaluation across diverse scenarios and conditions.
Performance-focused design: We design AI systems with functionality and performance as primary considerations.
Accountability and responsible development
We are committed to regulatory compliance and responsible AI lifecycle planning, ensuring that human accountability remains central to technological advancement.
Human accountability: We establish clear frameworks for human responsibility and meaningful oversight of AI systems. All Attest employees receive training on the use of AI and its relevant considerations.
Impact assessment: We consider potential societal and structural impacts beyond immediate use cases, taking a broader view of consequences.
Lifecycle planning: We plan for system evolution, maintenance, and responsible decommissioning throughout the AI lifecycle.
Regulatory compliance: We comply with all relevant laws and regulations, integrating these requirements into our AI development process.
Continuous improvement: We commit to constantly learning from experiences and improving our AI systems based on outcomes and feedback.
Our customers’ use of the Attest platform, and any AI functionality, is subject to our Acceptable Use Policy.
