⚙️ Prompt Engineering

Code Assistance Prompt Builder

Compose reusable prompts from personas, tasks, requirements, and constraints. Browse patterns for refactoring, debugging, and planning.

Compose Prompt

PT-RC Framework

Select persona → pick task → add requirements & constraints → paste context.

Click chips to toggle them on or off. Add custom items as needed. The builder generates a structured prompt template.
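If you prefer to script the same composition outside the UI, here is a minimal sketch of what the builder assembles; the function and field names are illustrative, not the builder's real API:

```python
# Illustrative only: a tiny helper that mirrors the PT-RC composition,
# not the builder's actual implementation.
def build_prompt(persona, tasks, requirements, constraints, context):
    """Assemble a PT-RC style prompt from the selected chips."""
    lines = [f"Act as {persona}.", "", "Task:"]
    lines += [f"- {t}" for t in tasks]
    lines += ["", "Requirements:"] + [f"- {r}" for r in requirements]
    lines += ["", "Constraints:"] + [f"- {c}" for c in constraints]
    lines += ["", "Context:", context]
    return "\n".join(lines)

print(build_prompt(
    persona="a senior Python & ML engineer",
    tasks=["Debug and fix a training bug in my model code."],
    requirements=["Stack: PyTorch, numpy, pandas."],
    constraints=["No new external dependencies."],
    context="CODE_SNIPPET_HERE",
))
```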

Personas

Roles & Seniority

Tasks

Goal

Requirements

Stack & Style

Constraints

Limits

Paste your code in the CONTEXT placeholder.

Quick Recipe: ML Bugfix

Act as a senior Python & ML engineer.

Task:
- Debug and fix a training bug in my model code.
- Explain the root cause in simple terms.
- Suggest tests to prevent this in the future.

Requirements:
- Stack: PyTorch, numpy, pandas.
- Respect the current public API and function signatures.
- Keep behavior the same for already-correct parts.

Constraints:
- No new external dependencies.
- Assume no internet access and local-only execution.

Context:
- First, restate in 2–3 bullet points what you understand about the problem.
- Then propose a step-by-step debug plan.
- Finally, show the fixed code and a list of sanity checks.

Here is the code and error log:
```python
# CODE_SNIPPET_HERE
```
```text
# ERROR_LOG_HERE
```

Browse Patterns

Base refactor prompt

RCGF

Think: Who you are, what this is, why you touch it, how you answer.

You are a senior LANGUAGE engineer on SYSTEM.
I will give you a snippet from FILE_PATH.

Goal: Refactor for GOALS (e.g. readability, less duplication)
while keeping behavior identical.

Constraints:
- Keep public API and signatures the same.
- No new external dependencies.
- Match the style of STYLE_REFERENCE.

Output:
1) Refactored code.
2) Bullet list of changes and reasons.
3) Risks or edge cases to test.

Here is the code:
```LANGUAGE
CODE_HERE
```

Refactor with intent

PIT

State the pain, the plan, and the proof.

I want to refactor this because PROBLEM
(e.g. long function, duplication, hard to test).

Focus on:
- INTENT (e.g. extract helpers, better naming).
- Same behavior, no new features.
- Minimal, clear changes.

After refactoring:
- Name the code smells removed.
- Map old structure → new structure.
- List key tests that should still pass.

Code:
```LANGUAGE
CODE_HERE
```

Constrained refactor

L-CAB

Lock the sandbox: runtime, APIs, deps.

Refactor this code under these constraints:
- Runtime / environment: RUNTIME_ENV.
- No new external dependencies.
- No breaking changes to public signatures or types.
- Keep behavior identical for all existing callers.
- Avoid over-engineering or extra layers.

Goals: GOALS (e.g. readability + maintainability).

Output:
1) Refactored code.
2) Short note on how each constraint was respected.

Code:
```LANGUAGE
CODE_HERE
```

Match project style

SAME

“When in Rome, code as the Romans code.”

You are working inside EXISTING_PROJECT.
Use STYLE_FILES (e.g. fileA, fileB) as style references
for naming, error handling, and testing.

Task:
- Refactor the code to follow those patterns.
- Keep behavior and public API unchanged.
- Use the same error and logging conventions.

Also propose tests that match our existing test style.

Code to refactor:
```LANGUAGE
CODE_HERE
```

Refactor to PATTERN

PET

Name the pattern, the pain, and the price.

I want to refactor this code using the PATTERN pattern
(e.g. strategy, repository, module).

Current issues: CURRENT_PROBLEMS.
Goal: make it easier to EXTEND_BASIS (e.g. add new variants).

Task:
1) Show refactored code using PATTERN.
2) Briefly explain how PATTERN improves
   extensibility / testability / separation of concerns.
3) Call out any trade-offs or added complexity.

Code:
```LANGUAGE
CODE_HERE
```
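For reference, here is a minimal, hypothetical strategy-pattern result in Python; the pricing domain and all names are made up, purely to show the shape this card asks for:

```python
from typing import Callable, Dict

# Hypothetical domain, only to illustrate the shape: each variant is one
# function in a registry, so adding a variant no longer means editing a
# long if/elif chain.
def regular_price(amount: float) -> float:
    return amount

def member_price(amount: float) -> float:
    return amount * 0.9

PRICING_STRATEGIES: Dict[str, Callable[[float], float]] = {
    "regular": regular_price,
    "member": member_price,
}

def final_price(customer_type: str, amount: float) -> float:
    try:
        return PRICING_STRATEGIES[customer_type](amount)
    except KeyError:
        raise ValueError(f"unknown customer type: {customer_type}") from None

print(final_price("member", 100.0))  # 90.0
```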

Extract responsibilities

3R

From "god function" to a small team of helpers.

This function/class is doing too many things.

Step 1: List the distinct responsibilities in this code.
Step 2: Propose a new structure
        (smaller functions / classes, same public API).
Step 3: Show the refactored code.
Step 4: Suggest focused tests for each responsibility.

Behavior must remain the same.

Code:
```LANGUAGE
CODE_HERE
```
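For reference, a hypothetical "after" shape for this card: one unchanged entry point delegating to single-purpose helpers (all names are illustrative):

```python
# Hypothetical "after" structure: same public entry point, one helper per
# responsibility, so each piece can be tested on its own.
def process_order(order: dict) -> dict:        # public API unchanged
    validated = _validate(order)
    priced = _apply_pricing(validated)
    return _build_receipt(priced)

def _validate(order: dict) -> dict:
    if not order.get("items"):
        raise ValueError("order must contain at least one item")
    return order

def _apply_pricing(order: dict) -> dict:
    order["total"] = sum(item["price"] for item in order["items"])
    return order

def _build_receipt(order: dict) -> dict:
    return {"items": order["items"], "total": order["total"]}

print(process_order({"items": [{"price": 5.0}, {"price": 7.5}]}))
```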

Performance-focused refactor

HOT

Only optimize what actually gets hot.

Refactor this code for better performance
without changing its externally visible behavior.

Context:
- Language / runtime: RUNTIME_ENV.
- Usage: WORKLOAD (e.g. hot loop, large lists).

Task:
- Identify obvious performance bottlenecks.
- Refactor to remove them while keeping code readable.
- Avoid micro-optimizations that hurt clarity unless significant.

Output:
1) Refactored code.
2) Explanation of each optimization and trade-off.
3) Suggested benchmarks or test cases.

Code:
```LANGUAGE
CODE_HERE
```
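To back the "suggested benchmarks" output, here is a minimal timing sketch you could adapt; the two functions are stand-ins for your code before and after the refactor:

```python
import timeit

# Stand-ins for the code before and after the refactor.
def old_version(data):
    result = []
    for x in data:
        result.append(x * 2)
    return result

def new_version(data):
    return [x * 2 for x in data]

data = list(range(10_000))
assert old_version(data) == new_version(data)   # confirm behavior unchanged first
for fn in (old_version, new_version):
    seconds = timeit.timeit(lambda: fn(data), number=200)
    print(f"{fn.__name__}: {seconds:.4f}s for 200 runs")
```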

Security-focused refactor

SIP

Assume input is hostile until proven boring.

Refactor this code to improve security
without changing intended behavior.

Assume all external input is untrusted.
Environment: SECURITY_CONTEXT.

Task:
- Identify security risks (injection, leaks, unsafe parsing, etc.).
- Refactor to add validations, safer APIs, and clearer error handling.
- Keep the same functional behavior for valid inputs.

Output:
1) Refactored code.
2) List of risks addressed and how.
3) Tests that should be added or updated.

Code:
```LANGUAGE
CODE_HERE
```
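A small example of the kind of change this card targets, assuming a SQL context: untrusted input is passed as a bound parameter instead of being interpolated into the query text (table and column names are illustrative):

```python
import sqlite3

# The unsafe pattern this refactor replaces:
#   conn.execute(f"SELECT id, name FROM users WHERE name = '{username}'")
# With a bound parameter, untrusted input never becomes part of the SQL text.
def find_user(conn: sqlite3.Connection, username: str):
    cursor = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
    return cursor.fetchone()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))
print(find_user(conn, "alice"))           # (1, 'alice')
print(find_user(conn, "x' OR '1'='1"))    # None – the injection attempt matches nothing
```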

Explain refactor diff

BAR

Every change tells a story: before, after, why.

Explain the refactor in this format:

For each significant change:
- Before: OLD_SNIPPET
- After: NEW_SNIPPET
- Reason: WHY_IT_IMPROVES_THINGS

Also:
- Call out any behavior that might have changed.
- Suggest regression tests that would catch bugs.

Old code:
```LANGUAGE
OLD_CODE
```

New code:
```LANGUAGE
NEW_CODE
```

Step 1 · Diagnose only

MAP

First look, then cut.

Step 1/3 – Do not change the code yet.

Task:
- List the main refactoring opportunities
  (long methods, duplication, tight coupling, etc.).
- Group them into a short refactoring plan.
- Prioritize them (high → low impact).

Code:
```LANGUAGE
CODE_HERE
```

Step 2 · Apply limited refactor

SLICE

Refactor in slices, not in avalanches.

Step 2/3 – Apply refactors only for the first
ONE_OR_TWO_ITEMS from your plan.

Task:
- Show the updated code.
- For each change, explain briefly what and why.
- Confirm behavior is intended to remain the same.

Original code:
```LANGUAGE
ORIGINAL_SNIPPET
```

Refactored code:
```LANGUAGE
REFACTORED_SNIPPET
```

Step 3 · Self-review the refactor

RRT

Make the model review the model.

Step 3/3 – Review your refactor for safety.

Task:
- Look for potential regressions or subtle behavior changes.
- Highlight risky assumptions or fragile parts.
- Suggest a concise test list to validate the refactor.

Refactored code:
```LANGUAGE
REFACTORED_CODE
```

Prompt pre-flight checklist

4C

Use this as a tiny meta-prompt before any refactor.

Before you answer, check these and fix my prompt
if needed:

- Clarity: Is the refactor goal clear and specific?
- Constraints: Are API, deps, and runtime limits stated?
- Chunking: Is the code small enough to handle safely?
- Checks: Are explanation + test suggestions required?

If anything is missing, improve my prompt first,
then perform the refactor.

Refactor ML pipeline for clarity

PIPE

Separate data prep, model, and evaluation into clear stages.

Act as a senior Python & ML engineer.

Goal:
- Refactor this ML pipeline for clarity and maintainability.
- Separate data loading, preprocessing, model definition, training, and evaluation into clear steps.
- Keep behavior and results as close as possible to the original.

Stack:
- Python, numpy, pandas, scikit-learn (and others if visible in the code).

Tasks:
1) Identify and list the current stages in the pipeline.
2) Propose a clearer structure (functions or classes) while keeping public entry points identical.
3) Refactor the code accordingly.
4) Suggest unit tests or integration tests for each stage.

Constraints:
- No new heavy dependencies.
- Do not change the high-level public API used by callers.
- Keep random seeds and evaluation logic consistent.

Code:
```python
CODE_HERE
```
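A minimal sketch of the target shape the tasks describe, assuming a scikit-learn setup; the file path, column names, and model choice are placeholders for your own pipeline:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Placeholder schema: a CSV with a "target" column. Swap in your own stages.
def load_data(path: str) -> pd.DataFrame:
    return pd.read_csv(path)

def preprocess(df: pd.DataFrame):
    X = df.drop(columns=["target"])
    y = df["target"]
    return train_test_split(X, y, test_size=0.2, random_state=42)

def build_model() -> LogisticRegression:
    return LogisticRegression(max_iter=1000)

def train(model, X_train, y_train):
    return model.fit(X_train, y_train)

def evaluate(model, X_test, y_test) -> float:
    return accuracy_score(y_test, model.predict(X_test))

def run_pipeline(path: str) -> float:       # public entry point stays the same
    X_train, X_test, y_train, y_test = preprocess(load_data(path))
    model = train(build_model(), X_train, y_train)
    return evaluate(model, X_test, y_test)
```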

Debug model training failure

BUG

Symptom → suspects → experiments → fix.

Act as a senior ML engineer.

Context:
- Framework: FRAMEWORK (e.g. PyTorch 2.x, TensorFlow, JAX).
- Task: TASK_TYPE (e.g. classification, regression, seq2seq).

Goal:
- Diagnose why training is failing or producing NaNs/inf/degenerate predictions.
- Propose targeted fixes and sanity checks.

Please:
1) Summarize the symptoms in 3–5 bullets.
2) List possible root causes grouped by category
   (data, model architecture, loss, optimizer, device, mixed precision, etc.).
3) Propose a prioritized debug plan.
4) Suggest specific code-level fixes.
5) Provide a small checklist of sanity checks (shapes, label ranges, loss trends).

Code + logs:
```python
TRAINING_CODE_HERE
```
```text
LOGS_OR_TRACEBACK_HERE
```
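A few of the sanity checks from step 5, sketched here assuming PyTorch; the tensor names and `num_classes` are placeholders to wire into your own training loop:

```python
import torch

# Placeholders: call these from your own training loop.
def sanity_check_batch(inputs: torch.Tensor, labels: torch.Tensor, num_classes: int):
    assert inputs.shape[0] == labels.shape[0], "batch size mismatch between inputs and labels"
    assert torch.isfinite(inputs).all(), "NaN/inf values in inputs"
    assert labels.min().item() >= 0 and labels.max().item() < num_classes, "labels out of range"

def sanity_check_loss(loss: torch.Tensor, step: int):
    if not torch.isfinite(loss).all():
        raise RuntimeError(f"non-finite loss at step {step}: {loss}")
```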

Plan an ML experiment loop

PLAN

Define problem, levers, metrics, and decision rules.

Act as a data scientist & experiment designer.

Goal:
- Design a small but rigorous sequence of experiments to improve METRIC
  (e.g. F1, AUROC, RMSE) on PROBLEM_DESCRIPTION.

Please:
1) Restate the problem, target metric, and constraints.
2) List the main levers (features, model families, regularization, data volume, etc.).
3) Propose 3–6 concrete experiment runs with:
   - What changes.
   - What you expect to learn.
   - Stopping criteria or success metric.
4) Suggest how to log results and compare them honestly.

Constraints:
- Data: DATA_CONSTRAINTS (e.g. imbalanced, time series, small dataset).
- Compute: COMPUTE_LIMITS (e.g. 1 GPU, 8 GB VRAM, 1 hour).
- Libraries: LIBS (e.g. sklearn, xgboost, pytorch, lightgbm).

Current baseline code (optional):
```python
BASELINE_CODE_HERE
```
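One lightweight way to log results for honest comparison, sketched with pandas; the run names and fields are made-up placeholders, and the metric values are left empty on purpose:

```python
import pandas as pd

# Made-up placeholder runs: one row per run with the config that changed
# and the metric observed. Fill in real scores as runs finish.
runs = [
    {"run": "baseline", "model": "logreg",  "features": "raw",       "f1": None},
    {"run": "exp-1",    "model": "logreg",  "features": "raw+tfidf", "f1": None},
    {"run": "exp-2",    "model": "xgboost", "features": "raw+tfidf", "f1": None},
]
results = pd.DataFrame(runs)
results.to_csv("experiments.csv", index=False)
print(results)
```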

Explain model predictions

XPL

What the model does, where it fails, and why to trust it (or not).

Act as a senior ML engineer focused on interpretability.

Goal:
- Explain what this trained model has learned in terms a technical stakeholder understands.
- Highlight where it is reliable and where it is fragile.

Please:
1) Summarize the model type, target, and key metrics.
2) Interpret feature importances / SHAP values / coefficients in 5–10 bullet points.
3) Describe 3–5 typical data patterns where the model performs well.
4) Describe 3–5 risky regions (e.g. sparse data, outliers, covariate shift).
5) Suggest simple diagnostic plots or checks to validate these claims.

Context:
- Model description and metrics:
```text
MODEL_SUMMARY_AND_METRICS_HERE
```

Optional code:
```python
CODE_SNIPPETS_HERE
```

Optimize inference for deployment

DEP

Latency, throughput, memory, robustness.

Act as an ML engineer focused on deployment and MLOps.

Goal:
- Optimize this inference path for latency and robustness.
- Keep the same public input/output contract.

Context:
- Environment: DEPLOY_ENV (e.g. REST API, batch job, streaming).
- Constraints: LATENCY_BUDGET, MEMORY_LIMIT, CONCURRENCY.

Tasks:
1) Identify sources of unnecessary latency and allocations.
2) Suggest and apply code-level optimizations:
   - pre-loading artifacts
   - vectorizing operations
   - avoiding repeated deserialization / parsing
3) Propose a small set of load / stress tests.
4) Call out any numerical or stability risks.

Code:
```python
INFERENCE_CODE_HERE
```
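A minimal sketch of the "pre-loading artifacts" idea from task 2, assuming a joblib-serialized model; the path and function names are illustrative:

```python
import joblib

_MODEL = None  # loaded once per process, shared by every request

def get_model(path: str = "model.joblib"):
    global _MODEL
    if _MODEL is None:
        _MODEL = joblib.load(path)      # avoid re-reading the artifact on every call
    return _MODEL

def predict_batch(rows):
    # One vectorized predict call per batch instead of one call per row.
    model = get_model()
    return model.predict(rows).tolist()
```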

Debug data issues in pandas pipeline

DATA

Check schema → check ranges → check drift.

Act as a data engineer / scientist working with pandas.

Goal:
- Find and fix data issues that may break or silently bias the model.

Please:
1) Infer and list the intended schema (column names, dtypes, value ranges).
2) Identify suspicious patterns (constant columns, high cardinality, weird null patterns).
3) Propose and show code for safe cleaning / validation steps.
4) Add assertions or checks that would fail fast if data is corrupted in the future.

Code (data loading + preprocessing):
```python
PANDAS_PIPELINE_HERE
```
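A minimal sketch of the fail-fast checks from step 4; the expected columns, dtypes, and ranges are placeholders for your own schema:

```python
import pandas as pd

# Placeholder schema: replace with the columns, dtypes, and ranges you expect.
EXPECTED_DTYPES = {"user_id": "int64", "amount": "float64", "country": "object"}

def validate(df: pd.DataFrame) -> pd.DataFrame:
    missing = set(EXPECTED_DTYPES) - set(df.columns)
    assert not missing, f"missing columns: {sorted(missing)}"
    for col, dtype in EXPECTED_DTYPES.items():
        assert str(df[col].dtype) == dtype, f"{col} is {df[col].dtype}, expected {dtype}"
    assert df["user_id"].notna().all(), "null user_id values"
    assert df["amount"].ge(0).all(), "negative amounts found"
    assert df["country"].nunique() < 300, "suspiciously high country cardinality"
    return df
```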

How to combine builder + patterns

Mix · Match · Iterate
  • In the builder, choose an ML/DS persona and task.
  • Add requirements (stack, style, outputs) and constraints (env, deps, compute).
  • Copy the generated prompt template.
  • Paste in your IDE/assistant and merge with a pattern card (e.g. “Constrained refactor”).
  • Attach your code, logs, or dataset description where the CONTEXT placeholder appears.
  • For risky changes, wrap everything with the Iterative loop patterns (Step 1–3).