Chasing log shadows: why you rotate after a secret leaks into Claude Code
Created: 2026-04-08 | Size: 13,314 bytes
TL;DR
You pasted an API key into a Claude Code session. Now you want to make it disappear. You can scrub the transcripts on your machine, but you cannot scrub Anthropic's server logs, plugin telemetry, your Time Machine snapshots, or GitHub's audit trail once the secret is written to an environment. The only remediation that actually closes the window is credential rotation. Deletion is theater, rotation is the fix, and rotation takes about two minutes.
If you just leaked a secret, do this now:
- Rotate the credential at the provider (not later, now).
- Update every consumer that depends on the old value.
- Try to use the old value. If it still works, rotation did not land.
Everything below this box is context. Those three steps are the incident.
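Step three is literal: fire one request with the old value and read the status code. A sketch, assuming the leaked key was an Anthropic API key; swap in your own provider's cheapest authenticated endpoint, and note that `OLD_KEY` is a placeholder:

```shell
# Probe the OLD key after rotating. 401 means rotation landed; 200 means it did not.
OLD_KEY="sk-ant-the-leaked-value"   # hypothetical placeholder

status=$(curl -s -o /dev/null -w '%{http_code}' \
  https://api.anthropic.com/v1/models \
  -H "x-api-key: $OLD_KEY" \
  -H "anthropic-version: 2023-06-01")

if [ "$status" = "401" ]; then
  echo "rotation landed: old key rejected"
else
  echo "WARNING: old key still accepted (HTTP $status)"
fi
```

For a GitHub PAT the equivalent probe is `curl -H "Authorization: token $OLD_KEY" https://api.github.com/user` and the same 401-or-panic read of the result.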
This is not about "was the session captured"
The interesting question is not whether the secret was logged somewhere. Assume it was. The interesting question is which copies of it you actually control, and the answer is: fewer than you think.
When a credential moves through an agent loop, it passes through a chain of untrusted-for-this-purpose systems: the CLI's own session files, the model provider's API, any MCP servers you have wired in, hook scripts that may be mirroring stdout, and whatever backup daemon is quietly snapshotting your home directory. Each hop is a potential copy. Most of them belong to someone else.
What you can delete locally
This is the small, comforting half of the answer.
- Session transcripts: Claude Code persists conversations as JSONL under `~/.claude/projects/<slugified-path>/`. Delete the relevant project directory and the secret goes with it.
- Shell history: if you ever echo-ed, exported, or pasted the value into a shell, `~/.zsh_history` or `~/.bash_history` may have captured it. Grep, delete, truncate.
- Hook output: if you configured `PreToolUse` or `PostToolUse` hooks to log tool calls, those log files are yours to nuke.
- Scratchpads: any `tmp/`, `scratch/`, or planning markdown file the agent touched.
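The whole local pass fits in a few commands. A sketch, assuming zsh and a pattern list covering OpenAI, GitHub, and AWS key shapes (extend `PAT` for your own providers; the project-directory path is a placeholder you fill in after verifying the match):

```shell
# Hypothetical pattern covering OpenAI, GitHub, and AWS key shapes
PAT='sk-[A-Za-z0-9]{20,}|ghp_[A-Za-z0-9]{36}|AKIA[0-9A-Z]{16}'

# 1. Find the session transcript(s) that captured the paste
grep -rlE "$PAT" ~/.claude/projects/ 2>/dev/null
# rm -rf ~/.claude/projects/<matching-project-dir>   # verify before deleting

# 2. Strip matching lines from shell history, keeping everything else
grep -vE "$PAT" ~/.zsh_history > ~/.zsh_history.clean \
  && mv ~/.zsh_history.clean ~/.zsh_history

# 3. Sweep scratch files the agent may have touched
grep -rlE "$PAT" tmp/ scratch/ ./*.md 2>/dev/null
```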
Five minutes of cleanup, at most. None of it actually solves the problem.
What you cannot delete
This is the list that matters.
- Provider-side request and response logs. When the CLI sends a prompt, the raw text reaches the model provider's infrastructure. Retention is governed by their policy, not your filesystem. You have no delete button.
- Plugin and MCP telemetry. The broader agent ecosystem (ECC plugins, MCP servers, observability tooling) often emits metrics or traces you did not explicitly opt into. Each of those is a separate retention domain.
- Backups you forgot about. If `~/.claude` sits inside a Time Machine path, Dropbox, iCloud Drive, or any cloud-synced folder, the session file was replicated the moment it was written. Deleting the original does not recall the copies.
- GitHub audit log. The moment you actually use the secret, for example by setting it as a repository or Actions secret, GitHub records the write operation. The value itself is encrypted at rest, but the existence and timing of the operation are logged and visible to org admins.
Every entry on this list is a system you do not own. You cannot issue a delete. You cannot verify deletion. You cannot enforce retention.
*Diagram: green nodes are yours, red nodes are not. The red side is the majority of the surface area, and you cannot audit it.*
The asymmetry of cost
Here is the shape of the decision.
| Action | Time cost | Closes the window? |
|---|---|---|
| Delete local session files | ~1 minute | No |
| Hunt through backups and sync services | 30+ minutes | No |
| File deletion requests with the provider | Hours to days | Maybe, eventually |
| Rotate the credential | ~2 minutes | Yes, immediately |
Rotation is the cheapest action on the list and the only one that actually invalidates the exposed value. Every other action is a risk-reduction ritual that costs more and delivers less. Once you rotate, the old secret is a string of characters nobody can do anything useful with. The log shadows can exist forever and it no longer matters.
Logs have a half-life. Secrets don't.
Not all rotations are cheap
The "two minutes" number is real for most developer-scoped credentials, but the hierarchy matters because people stall on the hard ones and that stall is where incidents become breaches.
| Secret type | Rotation effort | Who feels it |
|---|---|---|
| OpenAI / Anthropic API key | Seconds, one-click | You |
| GitHub personal access token | Under a minute | You |
| AWS IAM access key | Minute, plus redeploy | You and any running services |
| Database user password | Minutes, plus connection refresh | Every service with an open pool |
| Database master / root credential | Coordination window | Entire org |
| Signing or encryption key | Fleet redeploy, key ceremony | Entire product surface |
Name the pain before you hit it. If the leaked secret sits on the top rows, stop reading and rotate. If it sits on the bottom rows, the rotation is a project, not a reflex, and the right move is to page the right humans and start the coordination clock in parallel with containment.
The reason you pasted it in the first place
The root cause is not carelessness. It is that the agent has no bridge to your secret store. Your vault speaks its own protocol, the agent speaks raw stdin, and humans become the copy-paste shim between the two. Every leak that lands in an agent session starts as a reasonable person trying to get work done without a better option.
Fix the workflow, not the human:
- 1Password CLI: `op run --env-file=.env.tpl -- claude` injects secrets into the process environment, never into the prompt.
- aws-vault: `aws-vault exec prod -- claude` resolves AWS credentials at subprocess start so the agent inherits them via `process.env`, not via paste.
- Reference, don't reveal: teach the agent to write `${OPENAI_API_KEY}` in code and commands, never the literal value. If the literal never enters the context window, it cannot leak out of it.
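For the 1Password route, the template file holds references, not values, so it is safe to commit. A sketch (the vault and item names `dev-vault`, `openai`, `github-pat` are hypothetical; the `op://vault/item/field` reference syntax is 1Password's):

```shell
# .env.tpl is committed safely; op:// references resolve only at run time
cat > .env.tpl <<'EOF'
OPENAI_API_KEY="op://dev-vault/openai/credential"
GITHUB_TOKEN="op://dev-vault/github-pat/token"
EOF

# The agent's subprocess sees resolved values in its environment;
# the prompt and transcript only ever see the op:// references.
op run --env-file=.env.tpl -- claude
```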
For the "secrets live in the repo" case, the one that catches most infra and GitOps workflows, the right primitive is SOPS. Commit an encrypted file, decrypt only at subprocess start, and the plaintext never touches disk or the agent's context window:
```bash
# secrets.enc.yaml is committed, encrypted with age or KMS
sops exec-env secrets.enc.yaml 'claude'
```
This pattern composes with direnv (.envrc can call sops exec-env on directory entry), with CI (decrypt at job start, never log the environment), and with team sharing (recipients are listed in .sops.yaml, rotation is a git commit). It is the closest thing to "secrets as code" that does not trade safety for ergonomics.
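The direnv composition is a few lines of `.envrc`. A sketch, assuming a sops version with dotenv output support (`--output-type dotenv`) and an age key available to your shell:

```shell
# .envrc (hypothetical): decrypt secrets.enc.yaml into exported vars on `cd`
if [ -f secrets.enc.yaml ]; then
  # sops emits KEY=value lines; prefix each with `export` so the shell keeps them
  eval "$(sops -d --output-type dotenv secrets.enc.yaml | sed 's/^/export /')"
fi
```

The plaintext lives only in the environment of your shell session, never on disk, and `direnv` unloads it when you leave the directory.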
The agent does not need to see the secret. It only needs the program it runs to see the secret. That distinction is the entire fix.
Prevent the next leak at the boundary
The cleanest backstop is a UserPromptSubmit hook that refuses to send prompts containing obvious secret shapes. Drop something like this in ~/.claude/settings.json:
```json
{
  "hooks": {
    "UserPromptSubmit": [
      {
        "command": "node -e \"let d='';process.stdin.on('data',c=>d+=c);process.stdin.on('end',()=>{const i=JSON.parse(d);const p=i.prompt||'';const pats=[/sk-[A-Za-z0-9]{20,}/,/ghp_[A-Za-z0-9]{36}/,/github_pat_[A-Za-z0-9_]{82}/,/AKIA[0-9A-Z]{16}/,/xox[baprs]-[A-Za-z0-9-]{10,}/,/-----BEGIN (RSA |EC |OPENSSH |)PRIVATE KEY-----/];for(const r of pats){if(r.test(p)){console.error('[Hook] BLOCKED: prompt matches secret pattern '+r);process.exit(2)}}console.log(d)})\"",
        "description": "Block prompts that contain recognizable secret patterns"
      }
    ]
  }
}
```
Exit code 2 tells Claude Code to reject the submission before the prompt leaves your machine. It is not exhaustive (no regex list ever is) but it catches the boring majority: OpenAI keys, GitHub PATs, AWS access keys, Slack tokens, private key blocks. Pair it with a secret scanner on commits and you have two layers that catch accidents in the two places they actually happen.
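You can dry-run the blocker outside Claude Code by feeding it the same JSON payload the hook would receive. A sketch, with the `node -e` body saved to a hypothetical `block-secrets.js` (pattern list abbreviated here for readability):

```shell
# Pull the script body out of the hook command into a file for testing
cat > block-secrets.js <<'EOF'
let d='';process.stdin.on('data',c=>d+=c);process.stdin.on('end',()=>{
  const i=JSON.parse(d);const p=i.prompt||'';
  const pats=[/sk-[A-Za-z0-9]{20,}/,/ghp_[A-Za-z0-9]{36}/,/AKIA[0-9A-Z]{16}/];
  for(const r of pats){if(r.test(p)){
    console.error('[Hook] BLOCKED: prompt matches secret pattern '+r);
    process.exit(2);
  }}
  console.log(d);
});
EOF

# A prompt carrying a key shape is rejected with exit code 2
printf '%s' '{"prompt":"deploy with sk-ABCDEFGHIJKLMNOPQRSTUVWX"}' \
  | node block-secrets.js; echo "exit: $?"   # exit: 2

# A clean prompt passes through with exit code 0
printf '%s' '{"prompt":"rename this function"}' \
  | node block-secrets.js >/dev/null; echo "exit: $?"   # exit: 0
```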
Tripwires for the leaks you missed
Regex hooks catch the shapes you know. For the shapes you don't, plant canaries. Generate a fake AWS access key at canarytokens.org, drop it into an .env.old or a README alongside a real one, and wait. If anyone ever tries to use it, you get an alert with the source IP. The alert is the proof that a leak actually propagated beyond the systems you can see, which is exactly the question this whole post is trying to answer. "Assume compromise" stops being an article of faith and becomes a detectable event.
Team blast radius
One more reason people stall on rotation: shared credentials. If the leaked key is a team-scoped GitHub PAT, an org-level AWS IAM user, or a database role that half your services hold open connections to, your rotation just broke your teammates' builds. That is the correct outcome, but it needs a five-second heads-up in the right channel before you hit the button. Coordinate, then rotate. Do not let politeness stretch into hours of delay, and do not cowboy-rotate a shared secret at 2am without warning. Both failure modes are common and both are avoidable.
The operational rule
If a secret touches an AI coding agent, treat it as already public and rotate it. Do not debate whether the session was captured. Do not chase deletion across systems you do not control. Do not try to reason about provider retention policies under pressure. Just rotate.
This is the same posture experienced responders take with any suspected credential exposure: assume the worst, invalidate immediately, then investigate at leisure. The AI agent context makes it more tempting to stall because the loop feels local. The CLI runs on your machine, the files are in your home directory, the whole thing feels self-contained. It is not. The CLI is the thin front end of a much larger retention system, and you don't own the log, you own the consequence.
For context on why AI tooling is an especially porous surface right now, see the Claude Code source leak story from earlier this week: even the tool vendor accidentally shipped internal artifacts to npm. If the vendor cannot perfectly contain its own code, your pasted secrets are not going to enjoy special protection either.
What to actually do, in order
1. Rotate the exposed credentials, however many there are, first. Before anything else. Two minutes. Go.
2. Update every consumer that depended on the old value: local `.env` files, CI secrets, deployment environments, teammate machines.
3. Confirm the old credential is dead by trying to use it. If the provider accepts it, rotation did not land. Redo.
4. Then, if you feel like it, clean up local artifacts. This is now cosmetic, not remediation.
5. Harden the paste path so the next incident doesn't happen: move secrets into a manager, use environment variable references the agent can resolve without ever seeing the raw value, or add a hook that refuses to let known secret patterns into a prompt.
Step 5 is the one most people skip. It is also the only one that reduces the probability of a repeat.
The closing frame
Deletion is a story you tell yourself so you don't have to make the phone call. Rotation is the phone call. The credential that leaked into your agent session is not coming back, and the copies you cannot reach are not going to apologize. The fastest way out of the incident is to make the leaked value worthless, not to convince yourself it was never seen. Treat every secret that touches an AI agent as burned the moment it lands in the context window, and the incident stops being an incident.
References
- Claude Code documentation (session files): official docs for CLI session behavior
- GitHub Actions encrypted secrets: how GitHub stores and audits secret writes
- Canarytokens: free tripwire tokens for detecting credential misuse
- 1Password CLI (`op run`): inject secrets into subprocess environments without exposing them
- SOPS: CNCF-maintained tool for encrypted secrets in Git, with age/KMS/GPG backends
- Claude Code Source Leak (Daita blog): related incident showing why agent tooling is a porous surface