Security & compliance

How we handle your data

Professional subscriptions. Training opt-out enabled. DPAs available.

Note

This page describes standards and frameworks that our methodology aligns with. Excetra Ltd does not currently hold formal certifications for SOC 2, ISO 27001, or ISO 42001. We follow their principles and can provide security questionnaire responses, DPAs, and NDAs for your review.

Your data during workshops

  • Our accounts. We use professional subscriptions (ChatGPT Pro, Claude Pro) with data training explicitly disabled. Data from our sessions is not used to train AI models.
  • Your exercises. Workshop participants use their own accounts or organisational instances. This ensures your data remains under your organisation's control and subject to your existing policies.
  • Sensitive work. Clients requiring maximum data protection can provide access to their enterprise AI instances, or we can work via API access, where data is excluded from model training by default.
  • What we don't do. We don't input your proprietary data, code, or confidential information into AI platforms without explicit agreement on data handling. Workshop exercises use anonymised scenarios or participant-controlled accounts.

Platform data policies

AI platform data handling varies by subscription tier. Here's what applies to workshops:

  • Professional tiers (what we use). ChatGPT Pro and Claude Pro allow users to opt out of model training. We have this setting enabled on all our accounts.
  • Enterprise tiers (what clients may have). ChatGPT Enterprise, Claude for Work, and similar business tiers exclude data from training by default and offer additional controls like custom retention periods.
  • API access. Neither the OpenAI API nor the Anthropic API uses data for model training by default. We use API access for any programmatic client work (see the sketch after this list).
  • Your responsibility. If you're using your own accounts during workshops, we recommend checking your privacy settings and using your organisation's approved AI tools where available.
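
For programmatic client work, requests go through the provider's API rather than a consumer chat account, so the API data-usage terms apply. A minimal sketch, assuming the official openai Python SDK and an illustrative model name and prompt (not tied to any specific engagement):

```python
# Minimal sketch: calling the OpenAI API directly for programmatic work.
# API traffic is excluded from model training by default under the
# provider's API data-usage policy; no consumer chat account is involved.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # example model name; chosen per engagement
    messages=[
        {"role": "user", "content": "Summarise this anonymised workshop scenario."}
    ],
)
print(response.choices[0].message.content)
```

The same pattern applies to the Anthropic API via its own SDK; in both cases the data-handling terms are set by the API agreement rather than by individual account settings.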

Our security practices

  • SOC 2-aligned practices for access control and data handling (not certified).
  • ISO 42001-aligned AI governance frameworks (not certified).
  • OWASP LLM Top 10 awareness in solution design.
  • GDPR and UK GDPR compliance for all data processing.

What we need from you

  • Share only the data necessary for workshop objectives.
  • Use your organisation's approved AI tools where available.
  • Flag any specific compliance requirements before we begin.

Common questions

The procurement answers.

Do you have SOC 2 certification?

Not yet. We're a small team. We follow SOC 2-aligned practices: data training disabled on our accounts, DPAs available, minimal data collection. Our security questionnaire responses are ready for your review.

Where is workshop data processed?

Data location depends on the platform and tier used. ChatGPT Enterprise offers EU data residency (data at rest stored within the EU). For Claude, Anthropic stores data in the US, though processing can occur in EU regions; discuss specific data residency requirements with your Anthropic account team if needed. For maximum control, clients can provide access to their own enterprise instances, or we can work via cloud-hosted API endpoints with regional deployment options (e.g., AWS Bedrock, Google Vertex AI).
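
Where regional deployment matters, a cloud-hosted endpoint can pin inference to a chosen region. A minimal sketch using AWS Bedrock's Converse API via boto3; the region name and model ID are illustrative and should be confirmed against the models available in your AWS account:

```python
# Minimal sketch: calling Claude through AWS Bedrock pinned to an EU region.
# Region and model ID are illustrative; confirm availability in your account.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="eu-central-1")

response = bedrock.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # example model ID
    messages=[
        {"role": "user", "content": [{"text": "Summarise this anonymised scenario."}]}
    ],
)
print(response["output"]["message"]["content"][0]["text"])
```

Google Vertex AI offers an equivalent regional setup; the choice usually follows whichever cloud provider your organisation already has approved.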

Can you sign an NDA?

Yes. We're happy to sign mutual NDAs before any engagement. Contact us at hello@excetra.ai.

What about our proprietary data?

Workshop exercises use anonymised scenarios. If you need to work with real data, we'll agree data handling terms in advance and can work within your existing enterprise AI instances.

Do you have a DPA?

Yes. We provide a Data Processing Agreement on request for any engagement involving personal data. Contact hello@excetra.ai.

What subscription tiers do you use?

We use professional-tier subscriptions (ChatGPT Pro, Claude Pro) with data training explicitly disabled. We do not currently use enterprise-tier subscriptions (ChatGPT Enterprise, Claude for Work). For client work requiring enterprise-grade data isolation, we can work within your organisation's existing enterprise instances or via API access, which excludes data from model training by default.