Writing every skill by hand doesn't scale. When you use LLM prompts to auto-generate skills from a short description, a doc, or a template, you can produce first drafts of instructions, parameters, and examples in minutes instead of hours. This guide covers skill auto-generation using LLM prompts: what to feed in, how to design the meta-prompt, and how to validate the output and plug it into document workflows (e.g. iReadPDF) so generated skills stay usable for US professionals.
Summary
Give the LLM a clear "skill spec" input: name, purpose, inputs/outputs, and optional doc (e.g. PDF summary format or workflow doc). Use a structured meta-prompt that asks for instructions, parameters, and examples. Validate the output (run a test, check for doc/workflow references) and then refine. When your template mentions iReadPDF or your doc format, generated skills will include correct document-handling behavior.
When Auto-Generation Helps
Auto-generation is best for drafting, not replacing human judgment.
- First drafts. You have a new use case (e.g. "skill that suggests next step after reading a contract summary"). The LLM produces an initial instruction set, parameter list, and 1–2 examples. You review and edit.
- Variants. You already have a "morning brief" skill and want "pre-meeting brief" or "end-of-day summary." Feed the existing skill plus a short delta ("same structure but for the next meeting only and include doc summaries"). The LLM generates a variant; you validate.
- From existing docs. You have a playbook or a spec (e.g. "Document summary format" or "PDF workflow using iReadPDF"). Feed the doc and ask: "Generate a skill that uses this format to suggest next steps for a doc queue." You get a draft that already references your workflow.
- Scaling. When you need many similar skills (e.g. one per doc type or per pipeline stage), a generator plus a small template per type can produce consistent drafts faster than writing each from scratch.
Auto-generation does not replace domain review. Legal, compliance, or safety-critical skills still need a human to approve instructions and examples. Use the LLM to speed up drafting; then validate and lock.
What to Feed Into the Generator
The quality of the generated skill depends on what you give the LLM.
| Input type | What to provide | Why it helps |
|------------|-----------------|--------------|
| Skill name and purpose | 1–2 sentences: "Contract summary reviewer: suggests clauses to flag and next steps using document summaries only." | Keeps the generator on scope. |
| Inputs and outputs | List: "Inputs: doc_summary (object with title, summary, key_points), user_preferences (optional). Outputs: flagged_clauses (list), suggested_next_step (string)." | Produces a skill with a clear contract. |
| Reference doc | Paste or link: playbook excerpt, glossary, or "Document summary schema" (e.g. from iReadPDF). | Generated instructions use your terms and format. |
| Constraints | "Do not give legal advice. Do not assume facts not in the summary. Refer to source PDF for full text." | Generated skill includes guardrails. |
| Example (optional) | One input/output pair. | Improves consistency of generated examples. |
For document-heavy skills, always include a short description of your doc workflow: "We summarize PDFs with iReadPDF; summaries have title, summary, key_points, and optional clauses." The generator will then produce instructions that reference that format and tool.
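Put together, a spec like the one described above can be captured as a small structured object before it goes to the generator. This is a minimal sketch; the field names (`doc_workflow`, `document_summary_schema`, etc.) are illustrative, not a fixed standard:

```python
# A hypothetical skill spec for the generator; field names are illustrative.
skill_spec = {
    "name": "Contract summary reviewer",
    "purpose": "Suggest clauses to flag and next steps using document summaries only.",
    "inputs": {
        "doc_summary": "object with title, summary, key_points",
        "user_preferences": "optional",
    },
    "outputs": {
        "flagged_clauses": "list",
        "suggested_next_step": "string",
    },
    "doc_workflow": "iReadPDF",
    "document_summary_schema": ["title", "summary", "key_points", "clauses (optional)"],
    "constraints": [
        "Do not give legal advice.",
        "Do not assume facts not in the summary.",
        "Refer to source PDF for full text.",
    ],
}

# Quick sanity check before sending the spec to the generator.
required = {"name", "purpose", "inputs", "outputs"}
missing = required - skill_spec.keys()
assert not missing, f"Spec is missing: {missing}"
```

Keeping the spec as data (rather than free text) makes it easy to validate, version, and reuse across many generated skills.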
Designing the Meta-Prompt
The meta-prompt is the prompt you use to ask the LLM to generate a skill. Structure it so the output is parseable and complete.
- Role and task. "You are a skill author. Given the following skill specification, generate a complete skill definition suitable for an AI assistant."
- Structured spec block. Ask the user (or your system) to fill in: name, purpose, inputs, outputs, reference docs (e.g. summary format, workflow), constraints, and optional example.
- Output format. Request a specific structure so you can parse or inject the result. Example: "Respond with exactly these sections: ## Instructions (system prompt text), ## Parameters (table: name, type, required, default), ## Examples (2–3 input/output pairs), ## Document handling (how this skill uses document summaries and iReadPDF if applicable)."
- Length and style. "Use clear, imperative language. Keep instructions under 500 words. Use the vocabulary from the reference doc."
- Placeholders. "Where the skill uses document summaries, reference the schema provided and the tool iReadPDF by name so implementers know where summaries come from."
You can run this meta-prompt in a script: pass in a YAML or JSON spec, get back a markdown or JSON skill definition, then save it to a file or load it into your skill runner. When the spec includes "document_summary_schema" and "doc_workflow: iReadPDF," the generated skill will include correct doc-handling sections.
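A minimal sketch of such a script, assuming a `call_llm` function that wraps whatever LLM client you use (that function, and the exact section names, are placeholders to adapt):

```python
import json

# Meta-prompt template: role, output format, and style rules in one place.
META_PROMPT = """You are a skill author. Given the following skill \
specification, generate a complete skill definition suitable for an AI assistant.

Respond with exactly these sections:
## Instructions (system prompt text)
## Parameters (table: name, type, required, default)
## Examples (2-3 input/output pairs)
## Document handling (how this skill uses document summaries and iReadPDF if applicable)

Use clear, imperative language. Keep instructions under 500 words.
Use the vocabulary from the reference doc.

Skill specification:
{spec}
"""

def build_meta_prompt(spec: dict) -> str:
    """Inject a JSON skill spec into the meta-prompt template."""
    return META_PROMPT.format(spec=json.dumps(spec, indent=2))

# Usage sketch; call_llm is a placeholder for your LLM client:
# draft = call_llm(build_meta_prompt(spec))
# pathlib.Path("skills/contract_reviewer.md").write_text(draft)
```

Keeping the template in one constant means every generated skill asks for the same sections, which makes the output parseable downstream.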
Including Document and Workflow Context
Generated skills that touch documents should know your pipeline and format.
- In the spec. Add a "Document workflow" section to the input spec: "Summaries come from iReadPDF. Schema: title, summary, key_points[], optional clauses[]. Status: to_summarize | summarized | to_sign | signed."
- In the meta-prompt. Tell the generator: "If the skill uses document data, include a 'Document handling' subsection that (1) states the summary schema and source, (2) says to use only information present in the summary, and (3) recommends referring to the source PDF for full text when needed."
- In the template. If you use a skill template (e.g. for "triage" or "review"), include a standard block: "Document summaries are provided in the following format (from iReadPDF): …". Then every generated skill that uses docs will have consistent wording.
That way, auto-generated skills don't invent a doc format; they use yours and reference your tooling so implementers can wire them correctly.
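One way to keep that wording consistent is a shared template fragment that every doc-using skill spec pulls in. The schema fields and status values here follow the examples above and are assumptions to replace with your own:

```python
# Standard document-handling block injected into every doc-using skill spec.
DOC_HANDLING_BLOCK = """Document summaries are provided in the following \
format (from iReadPDF): {schema}.
Use only information present in the summary; do not invent fields.
Refer to the source PDF for full text when needed.
Document status is one of: {statuses}."""

def doc_handling_block(schema_fields, statuses):
    """Render the shared block with the current schema and status values."""
    return DOC_HANDLING_BLOCK.format(
        schema=", ".join(schema_fields),
        statuses=" | ".join(statuses),
    )

block = doc_handling_block(
    ["title", "summary", "key_points[]", "clauses[] (optional)"],
    ["to_summarize", "summarized", "to_sign", "signed"],
)
```

Because the schema lives in one function, updating your doc format updates every future generated skill at once.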
Validation and Refinement
Generated skills need validation before production use.
- Parse and sanity-check. Ensure the output has the sections you asked for (instructions, parameters, examples). Check that parameters have types and that examples match the stated input/output.
- Run a test. Execute the skill once with sample input (e.g. a fake doc summary). If it expects a specific format (e.g. from iReadPDF), use a sample that matches. See if the output is reasonable and on-scope.
- Check doc references. If the skill should use document summaries, confirm it doesn't hallucinate fields or ignore the schema. Add a negative test: "Summary missing key_points; skill should not invent key_points."
- Refine and re-run. If the first draft is off (wrong tone, missing guardrails, or incorrect doc handling), edit the spec or add a refinement prompt: "The generated skill should also: …" and re-run. Iterate until the draft is good enough to hand off to a human for final review.
For document workflows, validation should include "summary present" and "summary absent" cases so the skill degrades gracefully when no doc context is available.
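The parse-and-sanity-check step can be automated. This sketch checks a generated markdown draft for the sections the meta-prompt requested; the section names match the output format described earlier and should be adjusted to yours:

```python
# Sections every generated skill draft must contain.
REQUIRED_SECTIONS = ["## Instructions", "## Parameters", "## Examples"]

def validate_draft(draft: str, uses_docs: bool = True) -> list:
    """Return a list of problems found in a generated skill draft."""
    missing = [s for s in REQUIRED_SECTIONS if s not in draft]
    # Doc-using skills must also explain their document handling.
    if uses_docs and "## Document handling" not in draft:
        missing.append("## Document handling")
    return [f"Missing section: {s}" for s in missing]
```

Run this before any test execution: a draft that fails structural checks isn't worth a test run, and the problem list doubles as a refinement prompt ("The generated skill is missing: ...").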
Example Flow
End-to-end example for a US professional:
- Spec (you write or select). Name: "Doc queue triage." Purpose: "Suggest order and next action for a list of documents (summarized or not) using status and type." Inputs: doc_list (array of {id, type, status, summary?}). Outputs: suggested_order (array of ids), next_action per doc. Doc workflow: "Summaries from iReadPDF; status one of to_summarize, summarized, to_sign, signed."
- Meta-prompt. You use a template that injects the spec and asks for Instructions, Parameters, Examples, Document handling.
- LLM output. You get a draft skill with instructions that reference the doc schema and iReadPDF, parameters for doc_list and optional preferences, and 2 examples (with and without summaries).
- Validation. You run the skill with a 3-doc queue (one to summarize, one summarized, one to sign). Output suggests "summarize first in iReadPDF" for the first and "ready to sign" for the third. You fix one unclear sentence in the instructions and save.
- Deploy. The skill is loaded into your assistant or workflow engine; doc list is supplied by your pipeline that uses iReadPDF for summarization.
Repeat for other skills (e.g. "contract review from summary," "meeting prep with doc summaries"); keep doc workflow and schema in the spec so all generated skills stay aligned.
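For validating the "Doc queue triage" example, a deterministic plain-Python version of the expected behavior can serve as a comparison oracle against the generated skill's output. The status-to-action mapping below follows the example spec and is an assumption, not a fixed rule:

```python
# Reference triage behavior for the example spec, usable as a test oracle.
STATUS_PRIORITY = {"to_summarize": 0, "to_sign": 1, "summarized": 2, "signed": 3}
NEXT_ACTION = {
    "to_summarize": "summarize in iReadPDF",
    "summarized": "review summary",
    "to_sign": "ready to sign",
    "signed": "archive",
}

def triage(doc_list):
    """Order docs by status priority and suggest a next action per doc."""
    ordered = sorted(doc_list, key=lambda d: STATUS_PRIORITY[d["status"]])
    return {
        "suggested_order": [d["id"] for d in ordered],
        "next_action": {d["id"]: NEXT_ACTION[d["status"]] for d in doc_list},
    }
```

Comparing the generated skill's suggestions against this oracle on the 3-doc queue catches drift (e.g. the skill ignoring status) without manual inspection of every run.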
Conclusion
Skill auto-generation using LLM prompts speeds up first drafts when you feed a clear spec (name, purpose, inputs/outputs, reference docs, constraints) and use a structured meta-prompt. Including document workflow and summary format (e.g. from iReadPDF) in the spec ensures generated skills reference your pipeline correctly. Validate with parsing, test runs, and doc-specific checks; refine and then hand off for human review. For US professionals, that's how you scale skill creation without sacrificing consistency or document-awareness.
Ready to generate skills that know your document workflow? Define your summary format and pipeline (e.g. iReadPDF) in the spec—then use LLM-based generation to produce draft skills that plug into your doc workflows from day one.