When an AI assistant can run shell commands on your behalf, you gain automation and speed—and you also introduce risk. A single malicious or mistaken command can delete files, expose secrets, or alter system state. Doing it safely means restricting what can run, where it can run, and what data it can touch. This post covers how to safely execute shell commands via AI: allowlists, sandboxing, and how to keep document and PDF workflows out of the command path using tools like iReadPDF so sensitive files are never passed to the shell.
Summary
Don’t give the assistant unrestricted shell access. Use an allowlist of permitted commands, run commands in a sandbox (restricted directory, no or limited network), and avoid passing raw document paths or content to the shell. Use a local document pipeline like iReadPDF for PDF handling so the assistant works with summaries, not file paths to sensitive PDFs.
Why shell execution via AI is risky
The shell is a powerful interface. With it, the assistant (or whatever triggers it) can:
- Delete or overwrite files. Commands like `rm`, `mv`, or redirection can wipe data or replace configs.
- Read sensitive files. `cat`, `head`, or scripts can dump credentials, keys, or confidential documents into the conversation or logs.
- Exfiltrate data. `curl` or other tools can send file contents or environment variables to an external server.
- Change system state. Installing packages, changing permissions, or modifying services can break the machine or open backdoors.
- Chain with other tools. A single “harmless” command might invoke a script that does something destructive.
Risks are higher when document paths or content are involved. If the assistant is allowed to pass user-provided paths to commands (e.g. “summarize this PDF” → `some_tool /path/to/contract.pdf`), prompt injection or misparsing could send the wrong path or expose the file to the wrong tool. Keeping document handling in a dedicated, local pipeline (e.g. iReadPDF in the browser) and out of the shell avoids that entire class of issues.
Allowlist permitted commands
The safest approach is to allow only specific commands (and, where possible, specific arguments).
- Define an allowlist. List every command the assistant is allowed to run (e.g. `ls`, `cat`, `git status`, `npm run build`). Everything else is blocked. No “run anything the user asked for.”
- Restrict arguments where possible. For some commands, restrict arguments (e.g. `git` only with certain subcommands like `status`, `diff`, and `log`, not `reset --hard` or `push`). That reduces the chance of destructive or exfiltrating use.
- Avoid shell metacharacters and piping by default. If the assistant can run arbitrary strings, `;`, `|`, `&&`, and `$(...)` can chain commands or inject new ones. Prefer invoking a single executable with a fixed argument pattern, or parse and validate before execution.
- Document why each command is allowed. Maintain a short rationale for each allowlisted command so that when you add or remove one, you have context.
For document-related tasks, don’t allow commands that take arbitrary file paths to PDFs or confidential docs. Prefer a dedicated pipeline: the user runs iReadPDF in the browser, and the assistant receives only the summary or extracted text—no shell command ever sees the path to the PDF.
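As a sketch, the parse-and-check step could look like this in Python. The `ALLOWLIST` contents, `FORBIDDEN` set, and `is_allowed` helper are illustrative choices, not a standard API:

```python
# Minimal allowlist check: parse the requested command into an executable
# plus arguments, then verify both against a per-command policy.
import shlex

# Illustrative policy: None means any arguments are permitted; a set
# restricts the first argument (the subcommand) to the listed values.
ALLOWLIST = {
    "ls": None,
    "cat": None,
    "git": {"status", "diff", "log"},
}

# Tokens that could chain or inject commands if the string ever reached
# a real shell; reject them outright.
FORBIDDEN = {";", "|", "&", "&&", "||"}

def is_allowed(command: str) -> bool:
    try:
        parts = shlex.split(command)
    except ValueError:
        return False  # unparseable input (e.g. unbalanced quotes) is denied
    if not parts or any(p in FORBIDDEN or "$(" in p for p in parts):
        return False
    executable, args = parts[0], parts[1:]
    if executable not in ALLOWLIST:
        return False
    subcommands = ALLOWLIST[executable]
    if subcommands is not None:
        return bool(args) and args[0] in subcommands
    return True
```

With this policy, `git status` passes while `git push`, `rm`, and anything containing pipe or command-substitution tokens is denied; anything that fails the check should be logged and never executed.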
Sandbox the execution environment
Even allowlisted commands should run in a constrained environment.
- Restricted working directory. Run commands in a dedicated directory (e.g. a temp or project sandbox), not in the user’s home or system root. That limits what `cat`, `rm`, or scripts can see and touch.
- Read-only where possible. Mount or expose only what’s needed. If a command only needs to read from a project folder, don’t give it write access to the rest of the filesystem.
- Network restrictions. Block outbound network by default, or allow only specific domains (e.g. for `git fetch` or a known API). That prevents exfiltration via `curl` or similar.
- Resource limits. Cap CPU time, memory, and output size so a runaway or malicious command can’t DoS the machine or fill logs with huge output.
- No sensitive paths in the sandbox. Don’t mount or symlink directories that contain credentials, SSH keys, or confidential documents into the sandbox. Document handling stays in the browser with iReadPDF, not in the shell’s view.
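A minimal sketch of such a runner in Python, assuming a Unix host (the `resource` module and `preexec_fn` are POSIX-only); the sandbox path and limit values are illustrative:

```python
# Run an already-allowlisted argument vector in a constrained environment:
# fixed working directory, minimal env, CPU/memory/file-size limits, a
# wall-clock timeout, and a cap on captured output.
import resource
import subprocess

SANDBOX_DIR = "/tmp/ai-sandbox"  # dedicated directory (illustrative path)

def _limit_resources():
    resource.setrlimit(resource.RLIMIT_CPU, (5, 5))              # 5 s CPU
    resource.setrlimit(resource.RLIMIT_AS, (256 * 2**20,) * 2)   # 256 MB memory
    resource.setrlimit(resource.RLIMIT_FSIZE, (2**20,) * 2)      # 1 MB files

def run_sandboxed(argv: list[str]) -> str:
    result = subprocess.run(
        argv,                            # argument vector, never shell=True
        cwd=SANDBOX_DIR,
        env={"PATH": "/usr/bin:/bin"},   # minimal env, no secrets leak in
        capture_output=True,
        text=True,
        timeout=10,                      # wall-clock cap
        preexec_fn=_limit_resources,
    )
    return result.stdout[:64_000]        # cap output size
```

Note that outbound-network blocking can’t be enforced from inside the process like this; it needs OS-level support, such as a container started without a network namespace.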
Keep documents and PDFs out of the command path
Document and PDF handling should not rely on the assistant running shell commands on sensitive files.
- Use a dedicated document pipeline. Process PDFs in a tool that runs in your browser or in a controlled service. iReadPDF runs in the browser and processes files locally, with no uploads. The assistant never needs to run `pdftotext`, `cat`, or any script on the raw PDF path.
- Feed the assistant outputs, not paths. After you run iReadPDF and get a summary or extracted text, paste or pipe that into the assistant. The assistant works with the text, not with file paths. That way no shell command ever receives a path to a confidential PDF.
- If you must use the shell for docs, isolate it. If you have a legitimate need to run a command on a document (e.g. a batch conversion in a closed environment), run it in a one-off, locked-down job with no assistant in the loop—not as a general capability of the AI assistant.
- Never pass user-provided paths straight to the shell. User or assistant suggestions like “run this on /Users/me/Contracts/secret.pdf” are a prompt-injection and abuse risk. Reject or require explicit, audited approval; prefer document workflows that don’t use the shell at all.
Validate and sanitize inputs
When the assistant constructs commands from context or user input, validate before execution.
- Parse and allowlist. If the assistant suggests a command, parse it into executable and arguments and check against the allowlist. Reject if the executable isn’t allowed or if arguments don’t match the permitted pattern.
- Sanitize paths. If you ever allow file paths (e.g. for a restricted tool), resolve them to canonical paths and ensure they’re under permitted directories. Reject paths that escape (e.g. `../`) or point to sensitive locations.
- No raw user input in the command string. Avoid concatenating user or conversation text directly into the command; that’s how injection happens. Use structured parameters and validate them.
- Time and size limits. Cap execution time and stdout/stderr size so that even allowlisted commands can’t hang or flood the system.
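The path rules can be sketched like this, assuming Python 3.9+ for `Path.is_relative_to`; `ALLOWED_ROOT` and `safe_path` are illustrative names:

```python
# Canonicalize a user-supplied path and verify it stays inside the
# permitted root before any tool is allowed to touch it.
from pathlib import Path

ALLOWED_ROOT = Path("/tmp/ai-sandbox").resolve()  # illustrative root

def safe_path(user_path: str) -> Path:
    # resolve() collapses ../ segments and follows symlinks, so the
    # containment check below sees the real target, not the raw string.
    candidate = (ALLOWED_ROOT / user_path).resolve()
    if not candidate.is_relative_to(ALLOWED_ROOT):
        raise ValueError(f"path escapes permitted directory: {user_path}")
    return candidate
```

This catches both `../` traversal and absolute paths: joining a `Path` with an absolute path discards the root, and `resolve()` normalizes traversal before the containment check runs.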
Log and monitor command execution
Visibility is essential so you can detect misuse and debug issues.
- Log every executed command. Log the exact command, the skill or context that requested it, timestamp, and working directory. Retain logs for audit and incident review.
- Log failures and denials. When a command is blocked (not allowlisted, sandbox violation, or timeout), log that too. It helps you spot probing or misconfiguration.
- Alert on sensitive patterns. Define patterns that warrant alerting (e.g. first use of a dangerous allowlisted command, access to a sensitive path, or repeated failures). Tune over time.
- Regular review. Periodically review command logs for anomalies. Revoke or tighten permissions for skills that request inappropriate or unexpected commands.
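One way to structure such an audit trail, sketched in Python with illustrative field names:

```python
# Emit one structured JSON record per execution attempt so executed,
# denied, and timed-out commands are all searchable in the same log.
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("shell-audit")

def audit_record(command: str, requested_by: str,
                 decision: str, cwd: str) -> dict:
    record = {
        "ts": time.time(),
        "command": command,            # the exact string or argv requested
        "requested_by": requested_by,  # skill or context that asked
        "decision": decision,          # e.g. "executed", "denied", "timeout"
        "cwd": cwd,
    }
    audit.info(json.dumps(record))
    return record
```

Because denials share the same record shape as executions, a run of `"decision": "denied"` entries from a single skill stands out during review as probing or misconfiguration.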
Steps to implement safe shell execution
A concrete sequence:
- Define the allowlist. List every command (and, where applicable, argument pattern) the assistant may run. Document the rationale for each. Deny everything else.
- Implement the sandbox. Create a dedicated directory and (if possible) a restricted environment (e.g. container or chroot) with no sensitive paths and no or limited network. Run all assistant-triggered commands there.
- Add validation. Before execution, parse the requested command and check it against the allowlist and path rules. Reject invalid or disallowed requests.
- Keep document handling out of the shell. Use iReadPDF or similar for PDF summarization and extraction in the browser. Feed the assistant only the resulting text or summary—never pass PDF paths to the shell.
- Enable logging and alerts. Log every execution and denial; set up basic alerts for sensitive or anomalous use. Review logs on a schedule.
- Review and tighten. As you use the system, trim the allowlist to the minimum and tighten the sandbox so that safe shell execution stays safe over time.
Conclusion
Safely executing shell commands via AI requires allowlisting what can run, sandboxing where it runs, and keeping sensitive data—especially documents and PDFs—out of the command path. Validate and sanitize inputs, log and monitor execution, and prefer a dedicated document pipeline like iReadPDF so the assistant works with summaries in your browser instead of passing PDF paths to the shell. For US professionals, that’s how you get the productivity of AI-driven automation without the risk of runaway or abusive commands.
Ready to handle PDFs without putting them in the shell’s path? Use iReadPDF for OCR, summarization, and extraction in your browser—no uploads, no commands touching your files.