
secret-in-prompt — Hardcoded Secret Detection

Severity: CRITICAL · Auto-fix: No · Category: 🔒 Security

What It Does

Detects credentials, tokens, and keys hardcoded directly in the prompt text. Secrets in prompts are a critical vulnerability: they appear in model completions if the model echoes the prompt, in logs, in error messages, and can be exfiltrated by injection attacks.

Detected Secret Patterns

| Pattern | Label | Example |
| --- | --- | --- |
| `sk-[A-Za-z0-9]{20,}` | OpenAI API key | `sk-abcdef1234567890abcdef` |
| `sk-proj-[A-Za-z0-9]{20,}` | OpenAI project key | `sk-proj-abcdef1234...` |
| `sk-ant-[A-Za-z0-9]{20,}` | Anthropic API key | `sk-ant-api03-...` |
| `AIza[0-9A-Za-z_-]{35}` | Google API key | `AIzaSyBxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx` |
| `ghp_[A-Za-z0-9]{36}` | GitHub PAT | `ghp_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx` |
| `gho_[A-Za-z0-9]{36}` | GitHub OAuth token | `gho_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx` |
| `Bearer <token>` | Bearer auth token | `Bearer eyJhbGciOiJSUzI1NiJ9...` |
| `api_key = "..."` | Generic API key assignment | `api_key="c1234567890abcdef"` |
| `password = "..."` | Hardcoded password | `password="hunter2"` |
| `[A-Fa-f0-9]{32}` | MD5 hash / potential token | `5d41402abc4b2a76b9719d911017c592` |
| `[A-Fa-f0-9]{40}` | SHA1 / potential token | `aaf4c61ddcc5e8a2dabede0f3b482cd9aea9434d` |
| `[A-Fa-f0-9]{64}` | SHA256 / potential token | `2c624232cdd221771294dfbb310acbc...` |

Hash pattern false positives

The 32/40/64-character hex patterns catch MD5, SHA1, and SHA256 hashes, which appear extensively in legitimate technical contexts: git commit SHAs, file checksums, content hashes. Because the same formats overlap with common token formats, PromptLint errs on the side of flagging them.

If your prompts contain legitimate checksums or commit SHAs, you can disable the rule for those cases or add an inline ignore comment. This is a known trade-off: the false-positive rate is higher than for the named-key patterns, but the security risk of missing a real token is severe.
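
To make the trade-off concrete, here is a minimal sketch (the sample line and the use of word boundaries are illustrative; PromptLint's exact matching logic may differ):

```python
import re

# The 40-character hex pattern from the table, as used for SHA1-like strings
SHA1_LIKE = re.compile(r"\b[A-Fa-f0-9]{40}\b")

# A legitimate git commit SHA matches exactly like a leaked 40-char token would
line = "Revert commit aaf4c61ddcc5e8a2dabede0f3b482cd9aea9434d"
print(bool(SHA1_LIKE.search(line)))  # True: flagged even though it is only a commit SHA
```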

What's NOT detected

  • Database connection strings (no DSN pattern yet); use pii-in-prompt for IP addresses
  • Private key file contents (-----BEGIN PRIVATE KEY-----); planned for a future release
  • .env file contents pasted verbatim; partially covered by the password= and api_key= patterns

Example

Prompt:

Connect to OpenAI using key sk-abc123xyz456def789ghi012jkl345mno678pqr.
Use GitHub token ghp_abcdefghijklmnopqrstuvwxyz012345678901 for repo access.
The admin password is: password="sup3r_s3cret!"

Findings:

[ CRITICAL ] secret-in-prompt (line 1)
  OpenAI API key detected

[ CRITICAL ] secret-in-prompt (line 2)
  GitHub personal access token detected

[ CRITICAL ] secret-in-prompt (line 3)
  Hardcoded password detected

Note: matched values are not shown in output to prevent double-logging secrets.
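
The reporting behavior can be sketched as follows: match pattern against line, emit only the message and line number, and never the captured value. This is a minimal sketch using a subset of the rule's patterns, not PromptLint's actual implementation:

```python
import re

# Subset of the rule's patterns, keyed by the message shown in findings
PATTERNS = {
    "OpenAI API key detected": re.compile(r"sk-[A-Za-z0-9]{20,}"),
    "GitHub personal access token detected": re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "Hardcoded password detected": re.compile(r'password\s*=\s*"[^"]+"'),
}

def report(prompt: str):
    """Return (line_number, message) pairs; the matched text itself is never emitted."""
    return [
        (lineno, message)
        for lineno, line in enumerate(prompt.splitlines(), start=1)
        for message, pattern in PATTERNS.items()
        if pattern.search(line)
    ]
```

Running report() over the three-line example prompt above reproduces the three findings, with the secrets themselves absent from the output.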

How to Fix

Replace every secret with an environment variable reference or template placeholder:

Never do this

Use API key sk-abc123xyz to call OpenAI.
Database password: hunter2
GitHub token: ghp_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

Do this instead

Use the API key from the OPENAI_API_KEY environment variable.
Database password: {{DB_PASSWORD}}
GitHub token: {{GITHUB_TOKEN}}

Python — inject at runtime (the template here is a hypothetical one-line example):

python
import os

# The stored template contains only the placeholder, never the secret itself
template = 'Database password: {{DB_PASSWORD}}'
prompt = template.replace("{{DB_PASSWORD}}", os.environ["DB_PASSWORD"])

Never store secrets in:

  • Prompt files committed to version control
  • Hard-coded strings in application code
  • .env files that end up in the prompt text

Use a secrets manager (AWS Secrets Manager, HashiCorp Vault, 1Password Secrets Automation) and inject at runtime.
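
Whatever the backing store, the shape is the same: the template carries only placeholders, and the secret is resolved at call time. A minimal stdlib sketch (function names are illustrative; in production the lookup would call your secrets manager's SDK instead of reading the environment):

```python
import os
import re

def get_secret(name: str) -> str:
    # In production, replace this lookup with a secrets-manager SDK call
    # (e.g. AWS Secrets Manager or Vault); the process environment is the
    # simplest runtime source for local development.
    value = os.environ.get(name)
    if value is None:
        raise KeyError(f"secret {name!r} was not provided to the process")
    return value

def render(template: str) -> str:
    """Fill {{NAME}} placeholders just before the API call."""
    return re.sub(r"\{\{(\w+)\}\}", lambda m: get_secret(m.group(1)), template)
```

The secret exists only in memory at request time; the stored template never contains it.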

Configuration

yaml
rules:
  secret_in_prompt: true  # or false to disable

No sub-options: the rule is either enabled or disabled.

Why It Matters

Even if your prompt is never deliberately logged:

  1. Model completions: if the model echoes the prompt back (e.g., "Sure! You asked me to connect using sk-..."), the secret ends up in the completion
  2. Injection attacks: prompt-injection attacks can instruct the model to repeat the system prompt, including embedded secrets
  3. API response logging: many teams log completions for debugging; the secret ends up in the log
  4. Context window dumps: some orchestration frameworks log the full context on error

Released under the Apache 2.0 License.