initial commit
20
.agents/plugins/marketplace.json
Normal file
@@ -0,0 +1,20 @@
{
  "name": "local-harness-engineering",
  "interface": {
    "displayName": "Local Harness Engineering"
  },
  "plugins": [
    {
      "name": "harness-engineering",
      "source": {
        "source": "local",
        "path": "./plugins/harness-engineering"
      },
      "policy": {
        "installation": "AVAILABLE",
        "authentication": "ON_INSTALL"
      },
      "category": "Productivity"
    }
  ]
}
57
.agents/skills/harness-review/SKILL.md
Normal file
@@ -0,0 +1,57 @@
---
name: harness-review
description: Review a Harness Engineering repository against its persistent rules and design docs. Use when Codex is asked to review local changes, generated phase files, or implementation output against `AGENTS.md`, `docs/ARCHITECTURE.md`, `docs/ADR.md`, `docs/UI_GUIDE.md`, testing expectations, and Harness step acceptance criteria.
---

# Harness Review

Use this skill when the user wants a repository-grounded review instead of generic commentary.

## Review input set

Read these first:

- `/AGENTS.md`
- `/docs/ARCHITECTURE.md`
- `/docs/ADR.md`
- `/docs/UI_GUIDE.md`
- the changed files or generated `phases/` files under review

If the user explicitly asks for delegated review, prefer the repo custom agent `harness_reviewer` or built-in read-only explorers.

## Checklist

Evaluate the patch against these questions:

1. Does it follow the architecture described in `docs/ARCHITECTURE.md`?
2. Does it stay within the technology choices documented in `docs/ADR.md`?
3. Are new or changed behaviors covered by tests or other explicit validation?
4. Does it violate any CRITICAL rule in `AGENTS.md`?
5. Do generated `phases/` files remain self-contained, executable, and internally consistent?
6. If the user expects verification, does `python scripts/validate_workspace.py` succeed or is the failure explained?

## Output rules

- Lead with findings, ordered by severity.
- Include file references for each finding.
- Explain the concrete risk or regression, not just the rule name.
- If there are no findings, say so explicitly and mention residual risks or missing evidence.
- Keep summaries brief after the findings.

## Preferred review table

When the user asks for a checklist-style review, use this table:

| Item | Result | Notes |
|------|--------|-------|
| Architecture compliance | PASS/FAIL | {details} |
| Tech stack compliance | PASS/FAIL | {details} |
| Test coverage | PASS/FAIL | {details} |
| CRITICAL rules | PASS/FAIL | {details} |
| Build and validation | PASS/FAIL | {details} |

## What not to do

- Do not approve changes just because they compile.
- Do not focus on style-only issues when correctness, architecture drift, or missing validation exists.
- Do not assume a passing hook means the implementation is acceptable; review the actual diff and docs.
4
.agents/skills/harness-review/agents/openai.yaml
Normal file
@@ -0,0 +1,4 @@
interface:
  display_name: "Harness Review"
  short_description: "Review changes against Harness project rules"
  default_prompt: "Use Harness review to check architecture, tests, and rules."
145
.agents/skills/harness-workflow/SKILL.md
Normal file
@@ -0,0 +1,145 @@
---
name: harness-workflow
description: Plan and run the Harness Engineering workflow for this repository. Use when Codex needs to read `AGENTS.md` and `docs/*.md`, discuss implementation scope, draft phase plans, or create/update `phases/index.json`, `phases/{phase}/index.json`, and `phases/{phase}/stepN.md` files for staged execution.
---

# Harness Workflow

Use this skill when the user is working in the Harness template and wants structured planning or phase-file generation.

## Workflow

### 1. Explore first

Read these files before proposing steps:

- `/AGENTS.md`
- `/docs/PRD.md`
- `/docs/ARCHITECTURE.md`
- `/docs/ADR.md`
- `/docs/UI_GUIDE.md`

If the user explicitly asks for parallel exploration, use built-in Codex subagents such as `explorer`, or the repo-scoped custom agent `phase_planner`.

### 2. Discuss before locking the plan

If scope, sequencing, or architecture choices are still ambiguous, surface the decision points before creating `phases/` files.

### 3. Design steps with strict boundaries

When drafting a phase plan:

1. Keep scope minimal. One step should usually touch one layer or one module.
2. Make each step self-contained. Every `stepN.md` must work in an isolated Codex session.
3. List prerequisite files explicitly. Never rely on "as discussed above".
4. Specify interfaces or invariants, not line-by-line implementations.
5. Use executable acceptance commands, not vague success criteria.
6. Write concrete warnings in "do not do X because Y" form.
7. Use kebab-case step names.

## Files to generate

### `phases/index.json`

Top-level phase registry. Append to `phases[]` when the file already exists.

```json
{
  "phases": [
    {
      "dir": "0-mvp",
      "status": "pending"
    }
  ]
}
```

- `dir`: phase directory name.
- `status`: `pending`, `completed`, `error`, or `blocked`.
- Timestamp fields are written by `scripts/execute.py`; do not seed them during planning.

### `phases/{phase}/index.json`

```json
{
  "project": "<project-name>",
  "phase": "<phase-name>",
  "steps": [
    { "step": 0, "name": "project-setup", "status": "pending" },
    { "step": 1, "name": "core-types", "status": "pending" },
    { "step": 2, "name": "api-layer", "status": "pending" }
  ]
}
```

- `project`: from `AGENTS.md`.
- `phase`: directory name.
- `steps[].step`: zero-based integer.
- `steps[].name`: kebab-case slug.
- `steps[].status`: initialize to `pending`.

### `phases/{phase}/stepN.md`

Each step file should contain:

1. A title.
2. A "read these files first" section.
3. A concrete task section.
4. Executable acceptance criteria.
5. Verification instructions.
6. Explicit prohibitions.

Recommended structure:

````markdown
# Step {N}: {name}

## Read First
- /AGENTS.md
- /docs/ARCHITECTURE.md
- /docs/ADR.md
- {files from previous steps}

## Task
{specific instructions}

## Acceptance Criteria
```bash
python scripts/validate_workspace.py
```

## Verification
1. Run the acceptance commands.
2. Check AGENTS and docs for rule drift.
3. Update the matching step in phases/{phase}/index.json:
   - completed + summary
   - error + error_message
   - blocked + blocked_reason

## Do Not
- {concrete prohibition}
````

## Execution

Run the generated phase with:

```bash
python scripts/execute.py <phase-name>
python scripts/execute.py <phase-name> --push
```

`scripts/execute.py` handles:

- `feat-{phase}` branch checkout/creation
- guardrail injection from `AGENTS.md` and `docs/*.md`
- accumulation of completed-step summaries into later prompts
- up to 3 retries with prior error feedback
- two-phase commit of code changes and metadata updates
- timestamps such as `created_at`, `started_at`, `completed_at`, `failed_at`, and `blocked_at`

## Recovery rules

- If a step is `error`, reset its status to `pending`, remove `error_message`, then rerun.
- If a step is `blocked`, resolve the blocker, reset to `pending`, remove `blocked_reason`, then rerun.
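The `phases/{phase}/index.json` conventions (zero-based sequential steps, kebab-case slugs, `pending` initial status) can be sketched as a standalone check. This validator is illustrative only and not part of the commit; `check_phase_index` is a hypothetical helper name:

```python
import json
import re

# Kebab-case slug: lowercase alphanumeric words joined by single hyphens.
KEBAB = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*$")


def check_phase_index(data: dict) -> list[str]:
    """Return a list of violations of the phase-index conventions."""
    problems = []
    for i, step in enumerate(data.get("steps", [])):
        if step.get("step") != i:
            problems.append(f"steps[{i}].step should be {i} (zero-based, sequential)")
        if not KEBAB.match(step.get("name", "")):
            problems.append(f"steps[{i}].name is not a kebab-case slug")
        if step.get("status") != "pending":
            problems.append(f"steps[{i}].status should start as 'pending'")
    return problems


index = json.loads("""
{
  "project": "demo",
  "phase": "0-mvp",
  "steps": [
    { "step": 0, "name": "project-setup", "status": "pending" },
    { "step": 1, "name": "Core_Types", "status": "pending" }
  ]
}
""")
print(check_phase_index(index))
```

A check like this could run inside `scripts/validate_workspace.py` before a phase is executed, so malformed plans fail fast instead of mid-run.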
4
.agents/skills/harness-workflow/agents/openai.yaml
Normal file
@@ -0,0 +1,4 @@
interface:
  display_name: "Harness Workflow"
  short_description: "Guide Codex through Harness phase planning"
  default_prompt: "Use the Harness workflow to plan phases and step files."
11
.codex/agents/harness-reviewer.toml
Normal file
@@ -0,0 +1,11 @@
name = "harness_reviewer"
description = "Read-only reviewer for Harness projects, focused on architecture drift, critical rule violations, and missing validation."
model = "gpt-5.4"
model_reasoning_effort = "high"
sandbox_mode = "read-only"
developer_instructions = """
Review changes like a repository owner.
Prioritize correctness, architecture compliance, behavior regressions, and missing tests over style.
Always compare the patch against AGENTS.md, docs/ARCHITECTURE.md, docs/ADR.md, and the requested acceptance criteria.
Lead with concrete findings and file references. If no material issues are found, say so explicitly and mention residual risks.
"""
12
.codex/agents/phase-planner.toml
Normal file
@@ -0,0 +1,12 @@
name = "phase_planner"
description = "Read-heavy Harness planner that decomposes docs into minimal, self-contained phase and step files."
model = "gpt-5.4"
model_reasoning_effort = "high"
sandbox_mode = "read-only"
developer_instructions = """
Plan before implementing.
Read AGENTS.md and the docs directory, identify the smallest coherent phase boundaries, and draft self-contained steps.
Keep each step scoped to one layer or one module when possible.
Do not make code changes unless the parent agent explicitly asks you to write files.
Return concrete file paths, acceptance commands, and blocking assumptions.
"""
9
.codex/config.toml
Normal file
@@ -0,0 +1,9 @@
# Project-scoped Codex defaults for the Harness template.
# As of 2026-04-15, hooks are experimental and disabled on native Windows.

[features]
codex_hooks = true

[agents]
max_threads = 6
max_depth = 1
28
.codex/hooks.json
Normal file
@@ -0,0 +1,28 @@
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          {
            "type": "command",
            "command": "python3 \"$(git rev-parse --show-toplevel)/.codex/hooks/pre_tool_use_policy.py\"",
            "statusMessage": "Checking risky shell command"
          }
        ]
      }
    ],
    "Stop": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "python3 \"$(git rev-parse --show-toplevel)/.codex/hooks/stop_continue.py\"",
            "statusMessage": "Running Harness validation",
            "timeout": 300
          }
        ]
      }
    ]
  }
}
BIN
.codex/hooks/__pycache__/pre_tool_use_policy.cpython-312.pyc
Normal file
Binary file not shown.
BIN
.codex/hooks/__pycache__/stop_continue.cpython-312.pyc
Normal file
Binary file not shown.
47
.codex/hooks/pre_tool_use_policy.py
Normal file
@@ -0,0 +1,47 @@
#!/usr/bin/env python3
"""Block obviously destructive shell commands before Codex runs them."""

from __future__ import annotations

import json
import re
import sys


BLOCK_PATTERNS = (
    r"\brm\s+-rf\b",
    r"\bgit\s+push\s+--force(?:-with-lease)?\b",
    r"\bgit\s+reset\s+--hard\b",
    r"\bDROP\s+TABLE\b",
    r"\btruncate\s+table\b",
    r"\bRemove-Item\b.*\b-Recurse\b",
    r"\bdel\b\s+/s\b",
)


def main() -> int:
    try:
        payload = json.load(sys.stdin)
    except json.JSONDecodeError:
        return 0

    command = payload.get("tool_input", {}).get("command", "")
    for pattern in BLOCK_PATTERNS:
        if re.search(pattern, command, re.IGNORECASE):
            json.dump(
                {
                    "hookSpecificOutput": {
                        "hookEventName": "PreToolUse",
                        "permissionDecision": "deny",
                        "permissionDecisionReason": "Harness guardrail blocked a risky shell command.",
                    }
                },
                sys.stdout,
            )
            return 0

    return 0


if __name__ == "__main__":
    raise SystemExit(main())
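The hook's decision contract can be exercised without Codex. Below is a minimal in-process sketch, assuming the stdin payload shape used above (`tool_input.command`) and copying a subset of the deny patterns; `decide` is a hypothetical helper, not part of the commit:

```python
import re
from typing import Optional

# Subset of the deny patterns from pre_tool_use_policy.py, copied so the sketch is standalone.
BLOCK_PATTERNS = (
    r"\brm\s+-rf\b",
    r"\bgit\s+push\s+--force(?:-with-lease)?\b",
    r"\bgit\s+reset\s+--hard\b",
)


def decide(payload: dict) -> Optional[dict]:
    """Return the deny decision the hook would print, or None when the command passes."""
    command = payload.get("tool_input", {}).get("command", "")
    for pattern in BLOCK_PATTERNS:
        if re.search(pattern, command, re.IGNORECASE):
            return {
                "hookSpecificOutput": {
                    "hookEventName": "PreToolUse",
                    "permissionDecision": "deny",
                    "permissionDecisionReason": "Harness guardrail blocked a risky shell command.",
                }
            }
    return None  # no decision: the tool call is allowed to proceed


print(decide({"tool_input": {"command": "git push --force origin main"}}) is not None)
print(decide({"tool_input": {"command": "git status"}}))
```

Note that returning no decision (exit 0 with no output) is what lets ordinary commands through; the deny path prints JSON and still exits 0.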
55
.codex/hooks/stop_continue.py
Normal file
@@ -0,0 +1,55 @@
#!/usr/bin/env python3
"""Run repository validation when a Codex turn stops and request one more pass if it fails."""

from __future__ import annotations

import json
import subprocess
import sys
from pathlib import Path


def main() -> int:
    try:
        payload = json.load(sys.stdin)
    except json.JSONDecodeError:
        return 0

    if payload.get("stop_hook_active"):
        return 0

    root = Path(payload.get("cwd") or ".").resolve()
    validator = root / "scripts" / "validate_workspace.py"
    if not validator.exists():
        return 0

    result = subprocess.run(
        [sys.executable, str(validator)],
        cwd=root,
        capture_output=True,
        text=True,
        timeout=240,
    )

    if result.returncode == 0:
        return 0

    summary = (result.stdout or result.stderr or "workspace validation failed").strip()
    if len(summary) > 1200:
        summary = summary[:1200].rstrip() + "..."

    json.dump(
        {
            "decision": "block",
            "reason": (
                "Validation failed. Review the output, fix the repo, then continue.\n\n"
                f"{summary}"
            ),
        },
        sys.stdout,
    )
    return 0


if __name__ == "__main__":
    raise SystemExit(main())
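The truncation and block-decision shaping in `stop_continue.py` can be sketched standalone; `block_reason` is a hypothetical helper mirroring the logic above, not part of the commit:

```python
def block_reason(stdout: str, stderr: str, limit: int = 1200) -> dict:
    """Shape the Stop-hook 'block' decision the way stop_continue.py does."""
    # Prefer stdout, fall back to stderr, then to a fixed message.
    summary = (stdout or stderr or "workspace validation failed").strip()
    # Cap the summary so a huge validator log cannot bloat the follow-up prompt.
    if len(summary) > limit:
        summary = summary[:limit].rstrip() + "..."
    return {
        "decision": "block",
        "reason": "Validation failed. Review the output, fix the repo, then continue.\n\n" + summary,
    }


print(block_reason("", "2 tests failed")["reason"].splitlines()[-1])
```

Capping the summary matters because the `reason` is fed back to the model as the next instruction; an unbounded log would crowd out the step context.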
10
.gitignore
vendored
Normal file
@@ -0,0 +1,10 @@
node_modules/
.next/
out/
next-env.d.ts
tsconfig.tsbuildinfo

# phase execution outputs
phases/**/phase*-output.json
phases/**/step*-output.json
phases/**/step*-last-message.txt
40
AGENTS.md
Normal file
@@ -0,0 +1,40 @@
# Project: {project-name}

## Repository Role
- This repository is a Codex-first Harness Engineering template.
- Persistent repository instructions live in this `AGENTS.md`.
- Reusable repo-scoped workflows live in `.agents/skills/`.
- Project-scoped custom agents live in `.codex/agents/`.
- Experimental hooks live in `.codex/hooks.json`.

## Tech Stack
- {framework (e.g. Next.js 15)}
- {language (e.g. TypeScript strict mode)}
- {styling (e.g. Tailwind CSS)}

## Architecture Rules
- CRITICAL: {hard rule 1 (e.g. all API logic goes through app/api/ route handlers only)}
- CRITICAL: {hard rule 2 (e.g. never call external APIs directly from client components)}
- {general rule (e.g. components live in components/, types in types/)}

## Harness Workflow
- Read `docs/PRD.md`, `docs/ARCHITECTURE.md`, `docs/ADR.md`, and `docs/UI_GUIDE.md` first to understand the product and design intent.
- When a staged execution plan is needed, use the repo skill `harness-workflow` to design the files under `phases/`.
- When changes need review, use the repo skill `harness-review` or Codex's `/review`.
- Treat `phases/{phase}/index.json` as the single source of truth for phase progress.
- Write each `stepN.md` to be self-contained so it can run in an isolated Codex session.

## Development Process
- CRITICAL: when implementing a new feature, write the tests first, then write an implementation that passes them (TDD).
- Commit messages follow the conventional commits format (`feat:`, `fix:`, `docs:`, `refactor:`).
- `scripts/execute.py` tidies code/metadata commits after each step completes, so step prompts do not need to create their own commits.

## Validation
- The default validation script is `python scripts/validate_workspace.py`.
- For Node projects it auto-detects the `lint`, `build`, and `test` scripts in `package.json` and runs them in order.
- For other stacks, set the `HARNESS_VALIDATION_COMMANDS` environment variable to newline-separated validation commands.

## Commands
- `python scripts/execute.py <phase-dir>`: run a phase sequentially with Codex
- `python scripts/execute.py <phase-dir> --push`: push the branch after the phase completes
- `python scripts/validate_workspace.py`: validate the repository
21
docs/ADR.md
Normal file
@@ -0,0 +1,21 @@
# Architecture Decision Records

## Philosophy
{the project's core values (e.g. MVP speed first; minimize external dependencies; pick the smallest working implementation)}

---

### ADR-001: {decision (e.g. adopt the Next.js App Router)}
**Decision**: {what was chosen}
**Rationale**: {why it was chosen}
**Trade-off**: {what was given up}

### ADR-002: {decision}
**Decision**: {what was chosen}
**Rationale**: {why it was chosen}
**Trade-off**: {what was given up}

### ADR-003: {decision}
**Decision**: {what was chosen}
**Rationale**: {why it was chosen}
**Trade-off**: {what was given up}
26
docs/ARCHITECTURE.md
Normal file
@@ -0,0 +1,26 @@
# Architecture

## Directory Structure
```
src/
├── Analysis/  # analysis-related classes
├── Property/  # element material and property classes
├── Element/   # element classes
├── Boundary/  # boundary-condition classes
├── Load/      # load classes
└── Util/      # solver utility classes such as the math library
```

## Patterns
{design patterns in use (e.g. Server Components by default, Client Components only where interaction is needed)}

## Data Flow
```
analysis input file
{how the data flows (e.g.
user input → Client Component → API Route → external API → response → UI update
)}
```

## State Management
{state management approach (e.g. server state in Server Components, client state via useState/useReducer)}
21
docs/PRD.md
Normal file
@@ -0,0 +1,21 @@
# PRD: {project-name}

## Goal
{one-line summary of the problem this project solves}

## Users
{who uses this product}

## Core Features
1. {feature 1}
2. {feature 2}
3. {feature 3}

## Out of Scope for MVP
- {thing not to build 1}
- {thing not to build 2}
- {thing not to build 3}

## Design
- {design direction (e.g. dark mode only, minimal)}
- {colors (e.g. neutral palette plus one accent)}
22
plugins/harness-engineering/.codex-plugin/plugin.json
Normal file
@@ -0,0 +1,22 @@
{
  "name": "harness-engineering",
  "version": "1.0.0",
  "description": "Repo-local Harness Engineering slash commands for Codex.",
  "interface": {
    "displayName": "Harness Engineering",
    "shortDescription": "Harness planning and review prompts for this repo",
    "longDescription": "Optional local plugin that exposes Harness Engineering slash commands while the core workflow remains in repo-native AGENTS, skills, custom agents, and hooks.",
    "developerName": "Local Repository",
    "category": "Productivity",
    "capabilities": [
      "Interactive",
      "Read",
      "Write"
    ],
    "defaultPrompt": [
      "Use Harness Engineering to plan a new phase for this repository.",
      "Review my changes against the Harness docs and rules."
    ],
    "brandColor": "#2563EB"
  }
}
4
plugins/harness-engineering/agents/openai.yaml
Normal file
@@ -0,0 +1,4 @@
interface:
  display_name: "Harness Engineering"
  short_description: "Use Harness slash commands in this repository"
  default_prompt: "Use Harness Engineering to plan a phase or review changes in this repository."
43
plugins/harness-engineering/commands/harness.md
Normal file
@@ -0,0 +1,43 @@
---
description: Run the Harness Engineering planning workflow for this repository.
---

# /harness

## Preflight

- Read `/AGENTS.md`, `/docs/PRD.md`, `/docs/ARCHITECTURE.md`, `/docs/ADR.md`, and `/docs/UI_GUIDE.md` if they exist.
- Confirm whether the user wants discussion only, a draft plan, or file generation under `phases/`.
- Note whether the user explicitly asked for subagents; only then consider `phase_planner` or built-in explorers/workers.

## Plan

- State what will be created or updated before editing files.
- If a plan already exists under `phases/`, say whether you are extending it or replacing part of it.
- Keep each proposed step small, self-contained, and independently executable.

## Commands

- Invoke `$harness-workflow` and follow it.
- When file generation is requested, create or update:
  - `phases/index.json`
  - `phases/{phase}/index.json`
  - `phases/{phase}/stepN.md`
- Use `python scripts/execute.py <phase>` as the runtime target when you need to reference execution.

## Verification

- Re-read the generated phase files for consistency.
- Check that step numbering, phase names, and acceptance commands line up.
- If the repo has a validator, prefer `python scripts/validate_workspace.py` as the default acceptance command unless the user specified a narrower command.

## Summary

## Result
- **Action**: planned or generated Harness phase files
- **Status**: success | partial | failed
- **Details**: phase name, step count, and any blockers

## Next Steps

- Suggest the next natural command, usually `python scripts/execute.py <phase>` or a focused edit to one generated step.
38
plugins/harness-engineering/commands/review.md
Normal file
@@ -0,0 +1,38 @@
---
description: Review local changes against Harness repository rules and docs.
---

# /review

## Preflight

- Read `/AGENTS.md`, `/docs/ARCHITECTURE.md`, `/docs/ADR.md`, and `/docs/UI_GUIDE.md` if they exist.
- Identify the changed files or generated `phases/` artifacts that need review.
- If the user wants a delegated review, use the read-only custom agent `harness_reviewer` only when they explicitly asked for subagents.

## Plan

- State what evidence will be checked: docs, changed files, generated phase files, and validation output if available.
- Prioritize correctness, architecture drift, CRITICAL rule violations, and missing tests over style commentary.

## Commands

- Invoke `$harness-review`.
- Use Codex built-in `/review` when the user specifically wants a code-review style pass over the working tree or git diff.
- If validation is relevant, run `python scripts/validate_workspace.py` or explain why it was not run.

## Verification

- Confirm that every finding is tied to a file and an actual rule or behavioral risk.
- If no findings remain, say so explicitly and mention residual risks or missing evidence.

## Summary

## Result
- **Action**: reviewed Harness changes
- **Status**: success | partial | failed
- **Details**: findings, docs checked, and validation status

## Next Steps

- Suggest the smallest follow-up: fix the top finding, rerun validation, or execute a pending phase.
BIN
scripts/__pycache__/execute.cpython-312.pyc
Normal file
Binary file not shown.
BIN
scripts/__pycache__/validate_workspace.cpython-312.pyc
Normal file
Binary file not shown.
424
scripts/execute.py
Normal file
@@ -0,0 +1,424 @@
#!/usr/bin/env python3
"""
Harness Step Executor - run phase steps sequentially with Codex and self-correction.

Usage:
    python scripts/execute.py <phase-dir> [--push]
"""

import argparse
import contextlib
import json
import subprocess
import sys
import threading
import time
import types
from datetime import datetime, timezone, timedelta
from pathlib import Path
from typing import Optional

ROOT = Path(__file__).resolve().parent.parent


@contextlib.contextmanager
def progress_indicator(label: str):
    """Terminal progress indicator. Use in a with-statement; read .elapsed for elapsed time."""
    frames = "◐◓◑◒"
    stop = threading.Event()
    t0 = time.monotonic()

    def _animate():
        idx = 0
        while not stop.wait(0.12):
            sec = int(time.monotonic() - t0)
            sys.stderr.write(f"\r{frames[idx % len(frames)]} {label} [{sec}s]")
            sys.stderr.flush()
            idx += 1
        sys.stderr.write("\r" + " " * (len(label) + 20) + "\r")
        sys.stderr.flush()

    th = threading.Thread(target=_animate, daemon=True)
    th.start()
    info = types.SimpleNamespace(elapsed=0.0)
    try:
        yield info
    finally:
        stop.set()
        th.join()
        info.elapsed = time.monotonic() - t0


class StepExecutor:
    """Harness that runs the steps in a phase directory sequentially."""

    MAX_RETRIES = 3
    FEAT_MSG = "feat({phase}): step {num} - {name}"
    CHORE_MSG = "chore({phase}): step {num} output"
    TZ = timezone(timedelta(hours=9))

    def __init__(self, phase_dir_name: str, *, auto_push: bool = False):
        self._root = str(ROOT)
        self._phases_dir = ROOT / "phases"
        self._phase_dir = self._phases_dir / phase_dir_name
        self._phase_dir_name = phase_dir_name
        self._top_index_file = self._phases_dir / "index.json"
        self._auto_push = auto_push

        if not self._phase_dir.is_dir():
            print(f"ERROR: {self._phase_dir} not found")
            sys.exit(1)

        self._index_file = self._phase_dir / "index.json"
        if not self._index_file.exists():
            print(f"ERROR: {self._index_file} not found")
            sys.exit(1)

        idx = self._read_json(self._index_file)
        self._project = idx.get("project", "project")
        self._phase_name = idx.get("phase", phase_dir_name)
        self._total = len(idx["steps"])

    def run(self):
        self._print_header()
        self._check_blockers()
        self._checkout_branch()
        guardrails = self._load_guardrails()
        self._ensure_created_at()
        self._execute_all_steps(guardrails)
        self._finalize()

    # --- timestamps ---

    def _stamp(self) -> str:
        return datetime.now(self.TZ).strftime("%Y-%m-%dT%H:%M:%S%z")

    # --- JSON I/O ---

    @staticmethod
    def _read_json(p: Path) -> dict:
        return json.loads(p.read_text(encoding="utf-8"))

    @staticmethod
    def _write_json(p: Path, data: dict):
        p.write_text(json.dumps(data, indent=2, ensure_ascii=False), encoding="utf-8")

    # --- git ---

    def _run_git(self, *args) -> subprocess.CompletedProcess:
        cmd = ["git"] + list(args)
        return subprocess.run(cmd, cwd=self._root, capture_output=True, text=True)

    def _checkout_branch(self):
        branch = f"feat-{self._phase_name}"

        r = self._run_git("rev-parse", "--abbrev-ref", "HEAD")
        if r.returncode != 0:
            print(" ERROR: git is unavailable or this is not a git repository.")
            print(f" {r.stderr.strip()}")
            sys.exit(1)

        if r.stdout.strip() == branch:
            return

        r = self._run_git("rev-parse", "--verify", branch)
        r = self._run_git("checkout", branch) if r.returncode == 0 else self._run_git("checkout", "-b", branch)

        if r.returncode != 0:
            print(f" ERROR: failed to check out branch '{branch}'.")
            print(f" {r.stderr.strip()}")
            print(" Hint: stash or commit your changes, then retry.")
            sys.exit(1)

        print(f" Branch: {branch}")

    def _commit_step(self, step_num: int, step_name: str):
        output_rel = f"phases/{self._phase_dir_name}/step{step_num}-output.json"
        index_rel = f"phases/{self._phase_dir_name}/index.json"

        self._run_git("add", "-A")
        self._run_git("reset", "HEAD", "--", output_rel)
        self._run_git("reset", "HEAD", "--", index_rel)

        if self._run_git("diff", "--cached", "--quiet").returncode != 0:
            msg = self.FEAT_MSG.format(phase=self._phase_name, num=step_num, name=step_name)
            r = self._run_git("commit", "-m", msg)
            if r.returncode == 0:
                print(f" Commit: {msg}")
            else:
                print(f" WARN: code commit failed: {r.stderr.strip()}")

        self._run_git("add", "-A")
        if self._run_git("diff", "--cached", "--quiet").returncode != 0:
            msg = self.CHORE_MSG.format(phase=self._phase_name, num=step_num)
            r = self._run_git("commit", "-m", msg)
            if r.returncode != 0:
                print(f" WARN: housekeeping commit failed: {r.stderr.strip()}")

    # --- top-level index ---

    def _update_top_index(self, status: str):
        if not self._top_index_file.exists():
            return
        top = self._read_json(self._top_index_file)
        ts = self._stamp()
        for phase in top.get("phases", []):
            if phase.get("dir") == self._phase_dir_name:
                phase["status"] = status
                ts_key = {"completed": "completed_at", "error": "failed_at", "blocked": "blocked_at"}.get(status)
                if ts_key:
                    phase[ts_key] = ts
                break
        self._write_json(self._top_index_file, top)

    # --- guardrails & context ---

    def _load_guardrails(self) -> str:
        sections = []
        agents_md = ROOT / "AGENTS.md"
        if agents_md.exists():
            sections.append(
                f"## Project rules (AGENTS.md)\n\n{agents_md.read_text(encoding='utf-8')}"
            )
        docs_dir = ROOT / "docs"
        if docs_dir.is_dir():
            for doc in sorted(docs_dir.glob("*.md")):
                sections.append(f"## {doc.stem}\n\n{doc.read_text(encoding='utf-8')}")
        return "\n\n---\n\n".join(sections) if sections else ""

    @staticmethod
    def _build_step_context(index: dict) -> str:
        lines = [
            f"- Step {s['step']} ({s['name']}): {s['summary']}"
            for s in index["steps"]
            if s["status"] == "completed" and s.get("summary")
        ]
        if not lines:
            return ""
        return "## Previous step outputs\n\n" + "\n".join(lines) + "\n\n"

    def _build_preamble(self, guardrails: str, step_context: str,
                        prev_error: Optional[str] = None) -> str:
        retry_section = ""
        if prev_error:
            retry_section = (
                f"\n## ⚠ Previous attempt failed — fix it using the error below\n\n"
                f"{prev_error}\n\n---\n\n"
            )
        return (
            f"You are a developer on the {self._project} project. Carry out the step below.\n\n"
            f"{guardrails}\n\n---\n\n"
            f"{step_context}{retry_section}"
            f"## Working rules\n\n"
            f"1. Check the code written in previous steps and keep it consistent.\n"
            f"2. Do only the work specified in this step. Do not add extra features or files.\n"
            f"3. Do not break existing tests.\n"
            f"4. Run the AC (Acceptance Criteria) checks yourself.\n"
            f"5. Update the matching step status in /phases/{self._phase_dir_name}/index.json:\n"
            f"   - AC passes → \"completed\" + a one-line summary of this step's output in the \"summary\" field\n"
            f"   - Still failing after {self.MAX_RETRIES} fix attempts → \"error\" + record \"error_message\"\n"
            f"   - User intervention required (API keys, auth, manual setup, etc.) → \"blocked\" + record \"blocked_reason\", then stop immediately\n"
            f"6. Leave changes in the working tree. execute.py handles commits after the step completes.\n\n---\n\n"
        )

    # --- Codex invocation ---

    def _invoke_codex(self, step: dict, preamble: str) -> dict:
        step_num, step_name = step["step"], step["name"]
        step_file = self._phase_dir / f"step{step_num}.md"

        if not step_file.exists():
            print(f" ERROR: {step_file} not found")
            sys.exit(1)

        prompt = preamble + step_file.read_text(encoding="utf-8")
        last_message_path = self._phase_dir / f"step{step_num}-last-message.txt"
        result = subprocess.run(
            ["codex", "exec", "--full-auto", "--json", "-C", self._root, "-o", str(last_message_path)],
            cwd=self._root,
            input=prompt,
            capture_output=True,
            text=True,
            timeout=1800,
        )

        if result.returncode != 0:
            print(f"\n WARN: Codex exited abnormally (code {result.returncode})")
            if result.stderr:
                print(f" stderr: {result.stderr[:500]}")

        final_message = None
        if last_message_path.exists():
            final_message = last_message_path.read_text(encoding="utf-8")

        output = {
            "step": step_num, "name": step_name,
            "exitCode": result.returncode,
            "finalMessage": final_message,
            "stdout": result.stdout, "stderr": result.stderr,
|
||||
}
|
||||
out_path = self._phase_dir / f"step{step_num}-output.json"
|
||||
with open(out_path, "w", encoding="utf-8") as f:
|
||||
json.dump(output, f, indent=2, ensure_ascii=False)
|
||||
|
||||
return output
|
||||
|
||||
# --- 헤더 & 검증 ---
|
||||
|
||||
def _print_header(self):
|
||||
print(f"\n{'='*60}")
|
||||
print(f" Harness Step Executor")
|
||||
print(f" Phase: {self._phase_name} | Steps: {self._total}")
|
||||
if self._auto_push:
|
||||
print(f" Auto-push: enabled")
|
||||
print(f"{'='*60}")
|
||||
|
||||
def _check_blockers(self):
|
||||
index = self._read_json(self._index_file)
|
||||
for s in reversed(index["steps"]):
|
||||
if s["status"] == "error":
|
||||
print(f"\n ✗ Step {s['step']} ({s['name']}) failed.")
|
||||
print(f" Error: {s.get('error_message', 'unknown')}")
|
||||
print(f" Fix and reset status to 'pending' to retry.")
|
||||
sys.exit(1)
|
||||
if s["status"] == "blocked":
|
||||
print(f"\n ⏸ Step {s['step']} ({s['name']}) blocked.")
|
||||
print(f" Reason: {s.get('blocked_reason', 'unknown')}")
|
||||
print(f" Resolve and reset status to 'pending' to retry.")
|
||||
sys.exit(2)
|
||||
if s["status"] != "pending":
|
||||
break
|
||||
|
||||
def _ensure_created_at(self):
|
||||
index = self._read_json(self._index_file)
|
||||
if "created_at" not in index:
|
||||
index["created_at"] = self._stamp()
|
||||
self._write_json(self._index_file, index)
|
||||
|
||||
# --- 실행 루프 ---
|
||||
|
||||
def _execute_single_step(self, step: dict, guardrails: str) -> bool:
|
||||
"""단일 step 실행 (재시도 포함). 완료되면 True, 실패/차단이면 False."""
|
||||
step_num, step_name = step["step"], step["name"]
|
||||
done = sum(1 for s in self._read_json(self._index_file)["steps"] if s["status"] == "completed")
|
||||
prev_error = None
|
||||
|
||||
for attempt in range(1, self.MAX_RETRIES + 1):
|
||||
index = self._read_json(self._index_file)
|
||||
step_context = self._build_step_context(index)
|
||||
preamble = self._build_preamble(guardrails, step_context, prev_error)
|
||||
|
||||
tag = f"Step {step_num}/{self._total - 1} ({done} done): {step_name}"
|
||||
if attempt > 1:
|
||||
tag += f" [retry {attempt}/{self.MAX_RETRIES}]"
|
||||
|
||||
with progress_indicator(tag) as pi:
|
||||
self._invoke_codex(step, preamble)
|
||||
elapsed = int(pi.elapsed)
|
||||
|
||||
index = self._read_json(self._index_file)
|
||||
status = next((s.get("status", "pending") for s in index["steps"] if s["step"] == step_num), "pending")
|
||||
ts = self._stamp()
|
||||
|
||||
if status == "completed":
|
||||
for s in index["steps"]:
|
||||
if s["step"] == step_num:
|
||||
s["completed_at"] = ts
|
||||
self._write_json(self._index_file, index)
|
||||
self._commit_step(step_num, step_name)
|
||||
print(f" ✓ Step {step_num}: {step_name} [{elapsed}s]")
|
||||
return True
|
||||
|
||||
if status == "blocked":
|
||||
for s in index["steps"]:
|
||||
if s["step"] == step_num:
|
||||
s["blocked_at"] = ts
|
||||
self._write_json(self._index_file, index)
|
||||
reason = next((s.get("blocked_reason", "") for s in index["steps"] if s["step"] == step_num), "")
|
||||
print(f" ⏸ Step {step_num}: {step_name} blocked [{elapsed}s]")
|
||||
print(f" Reason: {reason}")
|
||||
self._update_top_index("blocked")
|
||||
sys.exit(2)
|
||||
|
||||
err_msg = next(
|
||||
(s.get("error_message", "Step did not update status") for s in index["steps"] if s["step"] == step_num),
|
||||
"Step did not update status",
|
||||
)
|
||||
|
||||
if attempt < self.MAX_RETRIES:
|
||||
for s in index["steps"]:
|
||||
if s["step"] == step_num:
|
||||
s["status"] = "pending"
|
||||
s.pop("error_message", None)
|
||||
self._write_json(self._index_file, index)
|
||||
prev_error = err_msg
|
||||
print(f" ↻ Step {step_num}: retry {attempt}/{self.MAX_RETRIES} — {err_msg}")
|
||||
else:
|
||||
for s in index["steps"]:
|
||||
if s["step"] == step_num:
|
||||
s["status"] = "error"
|
||||
s["error_message"] = f"[{self.MAX_RETRIES}회 시도 후 실패] {err_msg}"
|
||||
s["failed_at"] = ts
|
||||
self._write_json(self._index_file, index)
|
||||
self._commit_step(step_num, step_name)
|
||||
print(f" ✗ Step {step_num}: {step_name} failed after {self.MAX_RETRIES} attempts [{elapsed}s]")
|
||||
print(f" Error: {err_msg}")
|
||||
self._update_top_index("error")
|
||||
sys.exit(1)
|
||||
|
||||
return False # unreachable
|
||||
|
||||
def _execute_all_steps(self, guardrails: str):
|
||||
while True:
|
||||
index = self._read_json(self._index_file)
|
||||
pending = next((s for s in index["steps"] if s["status"] == "pending"), None)
|
||||
if pending is None:
|
||||
print("\n All steps completed!")
|
||||
return
|
||||
|
||||
step_num = pending["step"]
|
||||
for s in index["steps"]:
|
||||
if s["step"] == step_num and "started_at" not in s:
|
||||
s["started_at"] = self._stamp()
|
||||
self._write_json(self._index_file, index)
|
||||
break
|
||||
|
||||
self._execute_single_step(pending, guardrails)
|
||||
|
||||
def _finalize(self):
|
||||
index = self._read_json(self._index_file)
|
||||
index["completed_at"] = self._stamp()
|
||||
self._write_json(self._index_file, index)
|
||||
self._update_top_index("completed")
|
||||
|
||||
self._run_git("add", "-A")
|
||||
if self._run_git("diff", "--cached", "--quiet").returncode != 0:
|
||||
msg = f"chore({self._phase_name}): mark phase completed"
|
||||
r = self._run_git("commit", "-m", msg)
|
||||
if r.returncode == 0:
|
||||
print(f" ✓ {msg}")
|
||||
|
||||
if self._auto_push:
|
||||
branch = f"feat-{self._phase_name}"
|
||||
r = self._run_git("push", "-u", "origin", branch)
|
||||
if r.returncode != 0:
|
||||
print(f"\n ERROR: git push 실패: {r.stderr.strip()}")
|
||||
sys.exit(1)
|
||||
print(f" ✓ Pushed to origin/{branch}")
|
||||
|
||||
print(f"\n{'='*60}")
|
||||
print(f" Phase '{self._phase_name}' completed!")
|
||||
print(f"{'='*60}")
|
||||
|
||||
|
||||
def main():
|
||||
parser = argparse.ArgumentParser(description="Harness Step Executor")
|
||||
parser.add_argument("phase_dir", help="Phase directory name (e.g. 0-mvp)")
|
||||
parser.add_argument("--push", action="store_true", help="Push branch after completion")
|
||||
args = parser.parse_args()
|
||||
|
||||
StepExecutor(args.phase_dir, auto_push=args.push).run()
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
562
scripts/test_execute.py
Normal file
@@ -0,0 +1,562 @@
"""execute.py Codex migration safety-net tests."""

import json
import sys
from datetime import datetime, timedelta
from pathlib import Path
from unittest.mock import patch, MagicMock

import pytest

sys.path.insert(0, str(Path(__file__).parent))
import execute as ex


# ---------------------------------------------------------------------------
# Fixtures
# ---------------------------------------------------------------------------

@pytest.fixture
def tmp_project(tmp_path):
    """Temporary project structure with phases/, AGENTS.md, and docs/."""
    phases_dir = tmp_path / "phases"
    phases_dir.mkdir()

    agents_md = tmp_path / "AGENTS.md"
    agents_md.write_text("# Rules\n- rule one\n- rule two")

    docs_dir = tmp_path / "docs"
    docs_dir.mkdir()
    (docs_dir / "arch.md").write_text("# Architecture\nSome content")
    (docs_dir / "guide.md").write_text("# Guide\nAnother doc")

    return tmp_path


@pytest.fixture
def phase_dir(tmp_project):
    """Phase directory containing three steps."""
    d = tmp_project / "phases" / "0-mvp"
    d.mkdir()

    index = {
        "project": "TestProject",
        "phase": "mvp",
        "steps": [
            {"step": 0, "name": "setup", "status": "completed", "summary": "프로젝트 초기화 완료"},
            {"step": 1, "name": "core", "status": "completed", "summary": "핵심 로직 구현"},
            {"step": 2, "name": "ui", "status": "pending"},
        ],
    }
    (d / "index.json").write_text(json.dumps(index, indent=2, ensure_ascii=False))
    (d / "step2.md").write_text("# Step 2: UI\n\nUI를 구현하세요.")

    return d


@pytest.fixture
def top_index(tmp_project):
    """phases/index.json (top-level)."""
    top = {
        "phases": [
            {"dir": "0-mvp", "status": "pending"},
            {"dir": "1-polish", "status": "pending"},
        ]
    }
    p = tmp_project / "phases" / "index.json"
    p.write_text(json.dumps(top, indent=2))
    return p


@pytest.fixture
def executor(tmp_project, phase_dir):
    """StepExecutor instance for tests. git calls must be mocked separately."""
    with patch.object(ex, "ROOT", tmp_project):
        inst = ex.StepExecutor("0-mvp")
        # Re-point internal paths at tmp_project
        inst._root = str(tmp_project)
        inst._phases_dir = tmp_project / "phases"
        inst._phase_dir = phase_dir
        inst._phase_dir_name = "0-mvp"
        inst._index_file = phase_dir / "index.json"
        inst._top_index_file = tmp_project / "phases" / "index.json"
        return inst


# ---------------------------------------------------------------------------
# _stamp (formerly now_iso)
# ---------------------------------------------------------------------------

class TestStamp:
    def test_returns_kst_timestamp(self, executor):
        result = executor._stamp()
        assert "+0900" in result

    def test_format_is_iso(self, executor):
        result = executor._stamp()
        dt = datetime.strptime(result, "%Y-%m-%dT%H:%M:%S%z")
        assert dt.tzinfo is not None

    def test_is_current_time(self, executor):
        before = datetime.now(ex.StepExecutor.TZ).replace(microsecond=0)
        result = executor._stamp()
        after = datetime.now(ex.StepExecutor.TZ).replace(microsecond=0) + timedelta(seconds=1)
        parsed = datetime.strptime(result, "%Y-%m-%dT%H:%M:%S%z")
        assert before <= parsed <= after


# ---------------------------------------------------------------------------
# _read_json / _write_json
# ---------------------------------------------------------------------------

class TestJsonHelpers:
    def test_roundtrip(self, tmp_path):
        data = {"key": "값", "nested": [1, 2, 3]}
        p = tmp_path / "test.json"
        ex.StepExecutor._write_json(p, data)
        loaded = ex.StepExecutor._read_json(p)
        assert loaded == data

    def test_save_ensures_ascii_false(self, tmp_path):
        p = tmp_path / "test.json"
        ex.StepExecutor._write_json(p, {"한글": "테스트"})
        raw = p.read_text()
        assert "한글" in raw
        assert "\\u" not in raw

    def test_save_indented(self, tmp_path):
        p = tmp_path / "test.json"
        ex.StepExecutor._write_json(p, {"a": 1})
        raw = p.read_text()
        assert "\n" in raw

    def test_load_nonexistent_raises(self, tmp_path):
        with pytest.raises(FileNotFoundError):
            ex.StepExecutor._read_json(tmp_path / "nope.json")


# ---------------------------------------------------------------------------
# _load_guardrails
# ---------------------------------------------------------------------------

class TestLoadGuardrails:
    def test_loads_agents_md_and_docs(self, executor, tmp_project):
        with patch.object(ex, "ROOT", tmp_project):
            result = executor._load_guardrails()
        assert "# Rules" in result
        assert "rule one" in result
        assert "# Architecture" in result
        assert "# Guide" in result

    def test_sections_separated_by_divider(self, executor, tmp_project):
        with patch.object(ex, "ROOT", tmp_project):
            result = executor._load_guardrails()
        assert "---" in result

    def test_docs_sorted_alphabetically(self, executor, tmp_project):
        with patch.object(ex, "ROOT", tmp_project):
            result = executor._load_guardrails()
        arch_pos = result.index("arch")
        guide_pos = result.index("guide")
        assert arch_pos < guide_pos

    def test_no_agents_md(self, executor, tmp_project):
        (tmp_project / "AGENTS.md").unlink()
        with patch.object(ex, "ROOT", tmp_project):
            result = executor._load_guardrails()
        assert "AGENTS.md" not in result
        assert "Architecture" in result

    def test_no_docs_dir(self, executor, tmp_project):
        import shutil
        shutil.rmtree(tmp_project / "docs")
        with patch.object(ex, "ROOT", tmp_project):
            result = executor._load_guardrails()
        assert "Rules" in result
        assert "Architecture" not in result

    def test_empty_project(self, tmp_path):
        with patch.object(ex, "ROOT", tmp_path):
            # Static-like behavior that needs no full executor, so use a bare instance
            phases_dir = tmp_path / "phases" / "dummy"
            phases_dir.mkdir(parents=True)
            idx = {"project": "T", "phase": "t", "steps": []}
            (phases_dir / "index.json").write_text(json.dumps(idx))
            inst = ex.StepExecutor.__new__(ex.StepExecutor)
            result = inst._load_guardrails()
            assert result == ""


# ---------------------------------------------------------------------------
# _build_step_context
# ---------------------------------------------------------------------------

class TestBuildStepContext:
    def test_includes_completed_with_summary(self, phase_dir):
        index = json.loads((phase_dir / "index.json").read_text())
        result = ex.StepExecutor._build_step_context(index)
        assert "Step 0 (setup): 프로젝트 초기화 완료" in result
        assert "Step 1 (core): 핵심 로직 구현" in result

    def test_excludes_pending(self, phase_dir):
        index = json.loads((phase_dir / "index.json").read_text())
        result = ex.StepExecutor._build_step_context(index)
        assert "ui" not in result

    def test_excludes_completed_without_summary(self, phase_dir):
        index = json.loads((phase_dir / "index.json").read_text())
        del index["steps"][0]["summary"]
        result = ex.StepExecutor._build_step_context(index)
        assert "setup" not in result
        assert "core" in result

    def test_empty_when_no_completed(self):
        index = {"steps": [{"step": 0, "name": "a", "status": "pending"}]}
        result = ex.StepExecutor._build_step_context(index)
        assert result == ""

    def test_has_header(self, phase_dir):
        index = json.loads((phase_dir / "index.json").read_text())
        result = ex.StepExecutor._build_step_context(index)
        assert result.startswith("## 이전 Step 산출물")


# ---------------------------------------------------------------------------
# _build_preamble
# ---------------------------------------------------------------------------

class TestBuildPreamble:
    def test_includes_project_name(self, executor):
        result = executor._build_preamble("", "")
        assert "TestProject" in result

    def test_includes_guardrails(self, executor):
        result = executor._build_preamble("GUARD_CONTENT", "")
        assert "GUARD_CONTENT" in result

    def test_includes_step_context(self, executor):
        ctx = "## 이전 Step 산출물\n\n- Step 0: done"
        result = executor._build_preamble("", ctx)
        assert "이전 Step 산출물" in result

    def test_mentions_executor_commits(self, executor):
        result = executor._build_preamble("", "")
        assert "커밋은 execute.py가 정리한다" in result

    def test_includes_rules(self, executor):
        result = executor._build_preamble("", "")
        assert "작업 규칙" in result
        assert "AC" in result

    def test_no_retry_section_by_default(self, executor):
        result = executor._build_preamble("", "")
        assert "이전 시도 실패" not in result

    def test_retry_section_with_prev_error(self, executor):
        result = executor._build_preamble("", "", prev_error="타입 에러 발생")
        assert "이전 시도 실패" in result
        assert "타입 에러 발생" in result

    def test_includes_max_retries(self, executor):
        result = executor._build_preamble("", "")
        assert str(ex.StepExecutor.MAX_RETRIES) in result

    def test_includes_index_path(self, executor):
        result = executor._build_preamble("", "")
        assert "/phases/0-mvp/index.json" in result


# ---------------------------------------------------------------------------
# _update_top_index
# ---------------------------------------------------------------------------

class TestUpdateTopIndex:
    def test_completed(self, executor, top_index):
        executor._top_index_file = top_index
        executor._update_top_index("completed")
        data = json.loads(top_index.read_text())
        mvp = next(p for p in data["phases"] if p["dir"] == "0-mvp")
        assert mvp["status"] == "completed"
        assert "completed_at" in mvp

    def test_error(self, executor, top_index):
        executor._top_index_file = top_index
        executor._update_top_index("error")
        data = json.loads(top_index.read_text())
        mvp = next(p for p in data["phases"] if p["dir"] == "0-mvp")
        assert mvp["status"] == "error"
        assert "failed_at" in mvp

    def test_blocked(self, executor, top_index):
        executor._top_index_file = top_index
        executor._update_top_index("blocked")
        data = json.loads(top_index.read_text())
        mvp = next(p for p in data["phases"] if p["dir"] == "0-mvp")
        assert mvp["status"] == "blocked"
        assert "blocked_at" in mvp

    def test_other_phases_unchanged(self, executor, top_index):
        executor._top_index_file = top_index
        executor._update_top_index("completed")
        data = json.loads(top_index.read_text())
        polish = next(p for p in data["phases"] if p["dir"] == "1-polish")
        assert polish["status"] == "pending"

    def test_nonexistent_dir_is_noop(self, executor, top_index):
        executor._top_index_file = top_index
        executor._phase_dir_name = "no-such-dir"
        original = json.loads(top_index.read_text())
        executor._update_top_index("completed")
        after = json.loads(top_index.read_text())
        for p_before, p_after in zip(original["phases"], after["phases"]):
            assert p_before["status"] == p_after["status"]

    def test_no_top_index_file(self, executor, tmp_path):
        executor._top_index_file = tmp_path / "nonexistent.json"
        executor._update_top_index("completed")  # should not raise


# ---------------------------------------------------------------------------
# _checkout_branch (mocked)
# ---------------------------------------------------------------------------

class TestCheckoutBranch:
    def _mock_git(self, executor, responses):
        call_idx = {"i": 0}

        def fake_git(*args):
            idx = call_idx["i"]
            call_idx["i"] += 1
            if idx < len(responses):
                return responses[idx]
            return MagicMock(returncode=0, stdout="", stderr="")

        executor._run_git = fake_git

    def test_already_on_branch(self, executor):
        self._mock_git(executor, [
            MagicMock(returncode=0, stdout="feat-mvp\n", stderr=""),
        ])
        executor._checkout_branch()  # should return without checkout

    def test_branch_exists_checkout(self, executor):
        self._mock_git(executor, [
            MagicMock(returncode=0, stdout="main\n", stderr=""),
            MagicMock(returncode=0, stdout="", stderr=""),
            MagicMock(returncode=0, stdout="", stderr=""),
        ])
        executor._checkout_branch()

    def test_branch_not_exists_create(self, executor):
        self._mock_git(executor, [
            MagicMock(returncode=0, stdout="main\n", stderr=""),
            MagicMock(returncode=1, stdout="", stderr="not found"),
            MagicMock(returncode=0, stdout="", stderr=""),
        ])
        executor._checkout_branch()

    def test_checkout_fails_exits(self, executor):
        self._mock_git(executor, [
            MagicMock(returncode=0, stdout="main\n", stderr=""),
            MagicMock(returncode=1, stdout="", stderr=""),
            MagicMock(returncode=1, stdout="", stderr="dirty tree"),
        ])
        with pytest.raises(SystemExit) as exc_info:
            executor._checkout_branch()
        assert exc_info.value.code == 1

    def test_no_git_exits(self, executor):
        self._mock_git(executor, [
            MagicMock(returncode=1, stdout="", stderr="not a git repo"),
        ])
        with pytest.raises(SystemExit) as exc_info:
            executor._checkout_branch()
        assert exc_info.value.code == 1


# ---------------------------------------------------------------------------
# _commit_step (mocked)
# ---------------------------------------------------------------------------

class TestCommitStep:
    def test_two_phase_commit(self, executor):
        calls = []

        def fake_git(*args):
            calls.append(args)
            if args[:2] == ("diff", "--cached"):
                return MagicMock(returncode=1)
            return MagicMock(returncode=0, stdout="", stderr="")

        executor._run_git = fake_git

        executor._commit_step(2, "ui")

        commit_calls = [c for c in calls if c[0] == "commit"]
        assert len(commit_calls) == 2
        assert "feat(mvp):" in commit_calls[0][2]
        assert "chore(mvp):" in commit_calls[1][2]

    def test_no_code_changes_skips_feat_commit(self, executor):
        call_count = {"diff": 0}
        calls = []

        def fake_git(*args):
            calls.append(args)
            if args[:2] == ("diff", "--cached"):
                call_count["diff"] += 1
                if call_count["diff"] == 1:
                    return MagicMock(returncode=0)
                return MagicMock(returncode=1)
            return MagicMock(returncode=0, stdout="", stderr="")

        executor._run_git = fake_git

        executor._commit_step(2, "ui")

        commit_msgs = [c[2] for c in calls if c[0] == "commit"]
        assert len(commit_msgs) == 1
        assert "chore" in commit_msgs[0]


# ---------------------------------------------------------------------------
# _invoke_codex (mocked)
# ---------------------------------------------------------------------------

class TestInvokeCodex:
    def test_invokes_codex_with_correct_args(self, executor):
        mock_result = MagicMock(returncode=0, stdout='{"result": "ok"}', stderr="")
        step = {"step": 2, "name": "ui"}
        preamble = "PREAMBLE\n"

        with patch("subprocess.run", return_value=mock_result) as mock_run:
            output = executor._invoke_codex(step, preamble)

        cmd = mock_run.call_args[0][0]
        kwargs = mock_run.call_args[1]
        assert cmd[0] == "codex"
        assert cmd[1] == "exec"
        assert "--full-auto" in cmd
        assert "--json" in cmd
        assert "-o" in cmd
        assert "PREAMBLE" in kwargs["input"]
        assert "UI를 구현하세요" in kwargs["input"]
        assert output["finalMessage"] is None

    def test_saves_output_json(self, executor):
        def fake_run(*args, **kwargs):
            cmd = args[0]
            last_message_path = Path(cmd[cmd.index("-o") + 1])
            last_message_path.write_text("completed", encoding="utf-8")
            return MagicMock(returncode=0, stdout='{"ok": true}', stderr="")

        step = {"step": 2, "name": "ui"}

        with patch("subprocess.run", side_effect=fake_run):
            executor._invoke_codex(step, "preamble")

        output_file = executor._phase_dir / "step2-output.json"
        assert output_file.exists()
        data = json.loads(output_file.read_text())
        assert data["step"] == 2
        assert data["name"] == "ui"
        assert data["exitCode"] == 0
        assert data["finalMessage"] == "completed"

    def test_nonexistent_step_file_exits(self, executor):
        step = {"step": 99, "name": "nonexistent"}
        with pytest.raises(SystemExit) as exc_info:
            executor._invoke_codex(step, "preamble")
        assert exc_info.value.code == 1

    def test_timeout_is_1800(self, executor):
        mock_result = MagicMock(returncode=0, stdout="{}", stderr="")
        step = {"step": 2, "name": "ui"}

        with patch("subprocess.run", return_value=mock_result) as mock_run:
            executor._invoke_codex(step, "preamble")

        assert mock_run.call_args[1]["timeout"] == 1800


# ---------------------------------------------------------------------------
# progress_indicator (formerly Spinner)
# ---------------------------------------------------------------------------

class TestProgressIndicator:
    def test_context_manager(self):
        import time
        with ex.progress_indicator("test") as pi:
            time.sleep(0.15)
        assert pi.elapsed >= 0.1

    def test_elapsed_increases(self):
        import time
        with ex.progress_indicator("test") as pi:
            time.sleep(0.2)
        assert pi.elapsed > 0


# ---------------------------------------------------------------------------
# main() CLI parsing (mocked)
# ---------------------------------------------------------------------------

class TestMainCli:
    def test_no_args_exits(self):
        with patch("sys.argv", ["execute.py"]):
            with pytest.raises(SystemExit) as exc_info:
                ex.main()
        assert exc_info.value.code == 2  # argparse exits with 2

    def test_invalid_phase_dir_exits(self):
        with patch("sys.argv", ["execute.py", "nonexistent"]):
            with patch.object(ex, "ROOT", Path("/tmp/fake_nonexistent")):
                with pytest.raises(SystemExit) as exc_info:
                    ex.main()
        assert exc_info.value.code == 1

    def test_missing_index_exits(self, tmp_project):
        (tmp_project / "phases" / "empty").mkdir()
        with patch("sys.argv", ["execute.py", "empty"]):
            with patch.object(ex, "ROOT", tmp_project):
                with pytest.raises(SystemExit) as exc_info:
                    ex.main()
        assert exc_info.value.code == 1


# ---------------------------------------------------------------------------
# _check_blockers (formerly the error/blocked check in main())
# ---------------------------------------------------------------------------

class TestCheckBlockers:
    def _make_executor_with_steps(self, tmp_project, steps):
        d = tmp_project / "phases" / "test-phase"
        d.mkdir(exist_ok=True)
        index = {"project": "T", "phase": "test", "steps": steps}
        (d / "index.json").write_text(json.dumps(index))

        with patch.object(ex, "ROOT", tmp_project):
            inst = ex.StepExecutor.__new__(ex.StepExecutor)
            inst._root = str(tmp_project)
            inst._phases_dir = tmp_project / "phases"
            inst._phase_dir = d
            inst._phase_dir_name = "test-phase"
            inst._index_file = d / "index.json"
            inst._top_index_file = tmp_project / "phases" / "index.json"
            inst._phase_name = "test"
            inst._total = len(steps)
            return inst

    def test_error_step_exits_1(self, tmp_project):
        steps = [
            {"step": 0, "name": "ok", "status": "completed"},
            {"step": 1, "name": "bad", "status": "error", "error_message": "fail"},
        ]
        inst = self._make_executor_with_steps(tmp_project, steps)
        with pytest.raises(SystemExit) as exc_info:
            inst._check_blockers()
        assert exc_info.value.code == 1

    def test_blocked_step_exits_2(self, tmp_project):
        steps = [
            {"step": 0, "name": "ok", "status": "completed"},
            {"step": 1, "name": "stuck", "status": "blocked", "blocked_reason": "API key"},
        ]
        inst = self._make_executor_with_steps(tmp_project, steps)
        with pytest.raises(SystemExit) as exc_info:
            inst._check_blockers()
        assert exc_info.value.code == 2
91
scripts/validate_workspace.py
Normal file
@@ -0,0 +1,91 @@
#!/usr/bin/env python3
"""Run repository validation commands for the Harness template."""

from __future__ import annotations

import json
import os
import subprocess
import sys
from pathlib import Path


DEFAULT_NPM_ORDER = ("lint", "build", "test")


def load_env_commands() -> list[str]:
    raw = os.environ.get("HARNESS_VALIDATION_COMMANDS", "")
    return [line.strip() for line in raw.splitlines() if line.strip()]


def load_npm_commands(root: Path) -> list[str]:
    package_json = root / "package.json"
    if not package_json.exists():
        return []

    try:
        payload = json.loads(package_json.read_text(encoding="utf-8"))
    except json.JSONDecodeError:
        return []

    scripts = payload.get("scripts", {})
    if not isinstance(scripts, dict):
        return []

    commands = []
    for name in DEFAULT_NPM_ORDER:
        value = scripts.get(name)
        if isinstance(value, str) and value.strip():
            commands.append(f"npm run {name}")
    return commands


def discover_commands(root: Path) -> list[str]:
    env_commands = load_env_commands()
    if env_commands:
        return env_commands
    return load_npm_commands(root)


def run_command(command: str, root: Path) -> subprocess.CompletedProcess:
    return subprocess.run(
        command,
        cwd=root,
        shell=True,
        capture_output=True,
        text=True,
    )


def emit_stream(prefix: str, content: str, *, stream) -> None:
    text = content.strip()
    if not text:
        return
    print(prefix, file=stream)
    print(text, file=stream)


def main() -> int:
    root = Path(__file__).resolve().parent.parent
    commands = discover_commands(root)

    if not commands:
        print("No validation commands configured.")
        print("Set HARNESS_VALIDATION_COMMANDS or add npm scripts for lint/build/test.")
        return 0

    for command in commands:
        print(f"$ {command}")
        result = run_command(command, root)
        emit_stream("[stdout]", result.stdout, stream=sys.stdout)
        emit_stream("[stderr]", result.stderr, stream=sys.stderr)
        if result.returncode != 0:
            print(f"Validation failed: {command}", file=sys.stderr)
            return result.returncode

    print("Validation succeeded.")
    return 0


if __name__ == "__main__":
    raise SystemExit(main())