What is Strict Privacy Mode?
When using commercial AI models, text fragments and user messages are temporarily sent to the model provider's servers for processing. For organisations that require no content to leave their controlled environment, the Einstein AI platform offers Strict Privacy Mode.
How it works
Only locally hosted open-source models are used (via Einstein NL infrastructure or your own on-premises installation). No conversation fragment or document leaves your managed environment. The vector database, language model and conversation processing all run on servers you control.
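The guarantee above, that every component runs on servers you control, can also be enforced operationally by checking that all configured service endpoints resolve to private (RFC 1918) addresses before the stack starts. A minimal sketch; the endpoint names and URLs below are hypothetical illustrations, not Einstein AI's actual configuration:

```python
from ipaddress import ip_address
from urllib.parse import urlparse

# Hypothetical endpoint map: every component of the stack
# (vector database, language model, conversation processing)
# should point at a host inside your own network.
ENDPOINTS = {
    "vector_db": "http://10.0.1.20:6333",
    "language_model": "http://10.0.1.21:8080",
    "conversation_api": "http://10.0.1.22:9000",
}

def is_private(url: str) -> bool:
    """Return True if the URL's host is a private (RFC 1918) address."""
    host = urlparse(url).hostname
    return ip_address(host).is_private

def check_strict_privacy(endpoints: dict) -> list:
    """Return the names of endpoints that point outside the private network."""
    return [name for name, url in endpoints.items() if not is_private(url)]

# An empty result means no component would send traffic outside
# your managed environment.
print(check_strict_privacy(ENDPOINTS))  # prints []
```

A check like this can run as a startup guard or a CI step, so a misconfigured endpoint is caught before any conversation data flows through it.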
Trade-offs
- Local models are highly capable but may underperform premium cloud models in some niche tasks.
- Higher infrastructure costs. Hosting is included in Einstein NL plans; on-premises servers are at your own cost.
- Potentially slower response times at peak load compared to cloud APIs.
Best suited for
Organisations that are restricted from processing data in foreign clouds, teams handling medical records or legal due diligence, government agencies working with security classifications, and companies with strict ISO 27001 IT security policies. Available as part of the Einstein NL add-on or as a separate on-premises Enterprise option.