LLM Credential Isolation
No. At no point during execution, elicitation, or idle analysis does the LLM (or the company hosting it, such as Anthropic or OpenAI) intercept, view, ingest, or even temporarily hold your plaintext passwords, API keys, or OAuth access tokens. This strict separation of logic from authentication is the core security value proposition of the HasMCP Proxy Architecture.

How Segregation Works
- The LLM Request: When Claude attempts to execute a secure tool (such as `salesforceQuery`), it outputs a JSON blob containing only the requested parameters (e.g., `{"account_id": "12345"}`). Critically, this JSON object contains no authentication headers.
- The Proxy Interception: The HasMCP Execution Proxy securely receives this raw JSON blob over the SSE (MCP Streamable HTTP) stream.
- Token Injection: The proxy matches the LLM session to a valid internal user identity and retrieves that user's mapped Salesforce OAuth token from the internal AES-256-GCM encrypted vault.
- Outbound Request: The proxy independently constructs the final HTTP REST request, dynamically injecting the OAuth `Authorization: Bearer <token>` header.
- The Return: HasMCP receives the target API response, strips any downstream identifying headers, and forwards only the sanitized JSON payload back to the LLM agent.
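The flow above can be sketched in a few lines of Python. This is a simplified illustration, not HasMCP's actual implementation: the vault, function names, and Salesforce URL are all hypothetical, and the HTTP exchange is modeled as plain dictionaries so the injection and sanitization boundaries are easy to see.

```python
# Simulated token vault keyed by internal user identity. In the real
# architecture this is an AES-256-GCM encrypted store, not a dict.
VAULT = {"user-42": "sf-oauth-token-abc123"}

def build_outbound_request(user_id: str, tool_args: dict) -> dict:
    """Construct the final HTTP request. The OAuth token is injected
    here, by the proxy, after the LLM has already emitted its JSON."""
    token = VAULT[user_id]  # the token never enters the LLM context
    return {
        "method": "POST",
        "url": "https://example.my.salesforce.com/services/data/query",
        "headers": {
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        "body": tool_args,
    }

def sanitize_response(raw: dict) -> dict:
    """Strip identifying headers (cookies, trace IDs, etc.) so only
    the JSON payload is forwarded back to the LLM agent."""
    return {"body": raw["body"]}

# The LLM's tool call carries parameters only -- no credentials:
llm_args = {"account_id": "12345"}
request = build_outbound_request("user-42", llm_args)

# Upstream response is sanitized before returning to the agent:
upstream = {"headers": {"Set-Cookie": "sid=abc"}, "body": {"Name": "Acme"}}
print(sanitize_response(upstream))
```

The key property to notice is that `llm_args` and the vault never meet inside the model's context: the token appears only in the outbound request the proxy builds, and the inbound headers are dropped before anything flows back upstream.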