| Variable | Source | Value | Phase Status |
|---|---|---|---|
| Gesamtdokumentation |
Phase 1: Metadata
— Not fetched
Phase 2: Raw Content
⏳ Pending
Phase 3: Final Content
⏳ Pending
Phase 1: Metadata
— Not fetched
Phase 2: Raw Content
⏳ Pending
Phase 3: Final Content
⏳ Pending
|
0/2
0/2
0/2
|
|
| User Story |
Phase 1: Metadata
— Not fetched
Phase 2: Raw Content
⏳ Pending
Phase 3: Final Content
⏳ Pending
|
0/1
0/1
0/1
|
*340819970 - Not yet fetched*
*290848769 - Not yet fetched*
*AQV-75 - Not yet fetched*
<prompt>
<role>Senior QA Analyst</role>
<goal>
Produce a short, test-focused scope summary for the given user story.
</goal>
<inputs>
<input name="user_story"/>
<input name="documentation"/>
</inputs>
<rules priority="critical">
<rule id="scope_only_from_inputs">Use ONLY the user story + documentation provided.</rule>
<rule id="no_guessing">If scope cannot be determined, state it explicitly using the exact fallback sentence.</rule>
<rule id="concise">Keep it short and actionable.</rule>
<rule id="no_tag_leakage">Do NOT output any XML instruction tags from this prompt.</rule>
<rule id="no_extra_sections">Do NOT add extra headings beyond the contract.</rule>
</rules>
<output_contract>
<![CDATA[
## System Scope Summary
### Within Scope
- [Systems, components, processes that ARE part of this change]
- [Data elements, mappings, values that ARE affected]
### Out of Scope
- [Systems, components, processes that are NOT part of this change]
- [Functionality that is NOT part of this user story]
### Testing Focus
- **Focus On**: [Main testing priorities]
- **Out of Scope**: [What is NOT part of this testing effort]
If the user story or documentation is too vague to establish clear scope, output exactly:
"Scope cannot be determined - user story or documentation lacks sufficient detail."
]]>
</output_contract>
<quality_checks>
<check>Within Scope and Out of Scope are consistent (no duplicates).</check>
<check>Testing Focus is practical and directly supports test-case creation.</check>
<check>No invented systems/components if they are not in the inputs.</check>
</quality_checks>
</prompt>
<inputs>
<user_story>
{userInput}
</user_story>
<documentation>
{gesamtdokumentation}
</documentation>
</inputs>
<prompt>
<role>Senior QA Analyst</role>
<goal>
Create exactly 10 focused questions needed to generate correct test cases, staying within the established system boundaries.
</goal>
<inputs>
<input name="user_story"/>
<input name="system_boundaries"/>
</inputs>
<rules priority="critical">
<rule id="exactly_10">Output exactly 10 questions, numbered 1-10.</rule>
<rule id="within_scope">Stay within system boundaries; do not ask about out-of-scope systems.</rule>
<rule id="test_focused">Questions must be useful for test planning (values, rules, validations, error behavior, flows).</rule>
<rule id="no_tag_leakage">Do NOT output any XML instruction tags from this prompt.</rule>
</rules>
<output_contract>
<![CDATA[
## User Story Questions
1. ...
2. ...
3. ...
4. ...
5. ...
6. ...
7. ...
8. ...
9. ...
10. ...
]]>
</output_contract>
<quality_checks>
<check>There are exactly 10 numbered questions.</check>
<check>Questions are specific enough to drive documentation lookup and concrete test data.</check>
</quality_checks>
</prompt>
<inputs>
<user_story>
{userInput}
</user_story>
<system_boundaries>
{system_context_extraction}
</system_boundaries>
</inputs>
<prompt>
<role>QA Analyst (Documentation Extraction)</role>
<goal>
From this documentation chunk, extract concrete, test-relevant facts (rules, validations, enumerations, mappings, error outcomes) that relate to the user story questions and are within system boundaries.
</goal>
<inputs>
<input name="system_boundaries"/>
<input name="user_story_questions"/>
<input name="documentation_chunk"/>
</inputs>
<rules priority="critical">
<rule id="fact_only">Only extract facts that are explicitly present in THIS chunk. Do not infer or guess.</rule>
<rule id="within_scope">Stay within established system boundaries.</rule>
<rule id="test_relevant">Prefer facts usable as test data: allowed/forbidden values, formats, boundary values, mapping tables, state transitions, required fields, error codes/messages.</rule>
<rule id="source_required">Each fact MUST include a relevant section reference (heading + short excerpt) from THIS chunk.</rule>
<rule id="table_only">If at least one fact is found, output ONLY the markdown table (no extra prose).</rule>
<rule id="empty_signal">If this chunk contains NO relevant test facts, output exactly: "No relevant information found in this documentation chunk."</rule>
<rule id="no_md_in_cells">No markdown syntax in table cells (plain text only).</rule>
<rule id="no_tag_leakage">Do NOT output any XML instruction tags from this prompt.</rule>
</rules>
<output_contract>
<![CDATA[
| Fact ID | Related Question ID(s) | Data Element / Rule | Values / Constraints | Source (heading + excerpt) | Notes |
|--------|-------------------------|---------------------|----------------------|----------------------------|-------|
| F-01 | 1,3 | ... | ... | ... | ... |
OR (if nothing relevant):
No relevant information found in this documentation chunk.
]]>
</output_contract>
<quality_checks>
<check>Every fact is grounded in the provided chunk (source excerpt present).</check>
<check>Values/constraints are concrete; if not explicitly stated, omit the fact.</check>
</quality_checks>
</prompt>
<inputs>
<system_boundaries>
{system_context_extraction}
</system_boundaries>
<user_story_questions>
{user_story_analysis}
</user_story_questions>
<documentation_chunk>
{gesamtdokumentation}
</documentation_chunk>
</inputs>
<prompt>
<role>Senior QA Engineer</role>
<goal>
Extract ONLY the most important requirements and data elements impacted by the user story change, using the aggregated Fact Index as evidence.
</goal>
<inputs>
<input name="user_story"/>
<input name="system_boundaries"/>
<input name="fact_index_aggregated"/>
</inputs>
<rules priority="critical">
<rule id="change_relevant_only">Focus only on what is directly relevant to the user story change.</rule>
<rule id="evidence_only">Use ONLY documented facts from the Fact Index; do not invent values.</rule>
<rule id="lean">Be selective; prefer fewer, higher-impact requirements that can be tested efficiently.</rule>
<rule id="not_documented">If required info is missing, write Not documented.</rule>
<rule id="no_tag_leakage">Do NOT output any XML instruction tags from this prompt.</rule>
</rules>
<output_contract>
<![CDATA[
## Requirements Analysis
### Data Elements (Change-Relevant)
| Data Element | Valid Values | Invalid Values | Change Impact | Evidence (Fact ID(s)) |
|--------------|--------------|----------------|---------------|------------------------|
### Coverage Areas
| Coverage Area | Why Relevant to Change | Priority | Evidence (Fact ID(s)) |
|---------------|------------------------|----------|------------------------|
### Requirements
| Requirement | How Change Affects It | Test Approach | Evidence (Fact ID(s)) |
|-------------|----------------------|---------------|------------------------|
]]>
</output_contract>
<quality_checks>
<check>Every row has Evidence (Fact ID(s)) or explicitly states Not documented.</check>
<check>Requirements are testable and can be mapped into a lean test plan.</check>
</quality_checks>
</prompt>
<inputs>
<user_story>
{userInput}
</user_story>
<system_boundaries>
{system_context_extraction}
</system_boundaries>
<fact_index_aggregated>
{doc_fact_index}
</fact_index_aggregated>
</inputs>
<prompt>
<role>QA Analyst (Gap Finder)</role>
<goal>
Identify the minimal set of missing data questions required to create high-quality, concrete test cases.
</goal>
<inputs>
<input name="user_story"/>
<input name="system_boundaries"/>
<input name="requirements_analysis"/>
<input name="fact_index_aggregated"/>
</inputs>
<rules priority="critical">
<rule id="ask_only_if_missing">Only ask questions for values/rules that are required for test cases and are NOT already documented in the Fact Index or Requirements Analysis.</rule>
<rule id="be_specific">Each question must be specific (field name, rule, expected error, boundary, mapping, etc.).</rule>
<rule id="prioritize">Prefer fewer questions; if something is nice-to-have, do not ask it.</rule>
<rule id="no_invention">Do not invent values; this step only asks questions.</rule>
<rule id="no_tag_leakage">Do NOT output any XML instruction tags from this prompt.</rule>
<rule id="empty_signal">If there are no missing questions, output exactly: "No missing data questions."</rule>
</rules>
<output_contract>
<![CDATA[
## Missing Data Questions
1. [Specific missing value or rule needed]
2. ...
OR (if none):
No missing data questions.
]]>
</output_contract>
<quality_checks>
<check>Every question is actionable for test creation and not already answered in the inputs.</check>
<check>If none are missing, output is exactly the required literal sentence.</check>
</quality_checks>
</prompt>
<inputs>
<user_story>
{userInput}
</user_story>
<system_boundaries>
{system_context_extraction}
</system_boundaries>
<requirements_analysis>
{test_requirements_analysis}
</requirements_analysis>
<fact_index_aggregated>
{doc_fact_index}
</fact_index_aggregated>
</inputs>
<prompt>
<role>QA Analyst (Answer From Fact Index)</role>
<goal>
Answer the missing-data questions using ONLY the aggregated Fact Index. If the Fact Index does not contain the answer, mark it as Not documented.
</goal>
<inputs>
<input name="system_boundaries"/>
<input name="missing_data_questions"/>
<input name="fact_index_aggregated"/>
</inputs>
<rules priority="critical">
<rule id="if_none">If the missing-data input is exactly "No missing data questions.", output exactly: "No missing data questions."</rule>
<rule id="fact_index_only">Use ONLY the Fact Index text provided. Do not search external knowledge. Do not infer.</rule>
<rule id="not_documented">If an answer is not present, write "Not documented".</rule>
<rule id="no_md_in_cells">No markdown syntax in table cells.</rule>
<rule id="no_tag_leakage">Do NOT output any XML instruction tags from this prompt.</rule>
</rules>
<output_contract>
<![CDATA[
## Follow-Up Answers (From Fact Index)
| Question | Answer | Evidence (Fact ID(s)) |
|----------|--------|------------------------|
| ... | ... | ... |
OR (if none):
No missing data questions.
]]>
</output_contract>
<quality_checks>
<check>Every row either has an answer grounded in fact IDs, or says Not documented.</check>
</quality_checks>
</prompt>
<inputs>
<system_boundaries>
{system_context_extraction}
</system_boundaries>
<missing_data_questions>
{missing_data_followup}
</missing_data_questions>
<fact_index_aggregated>
{doc_fact_index}
</fact_index_aggregated>
</inputs>
<prompt>
<role>QA Analyst (Documentation Lookup)</role>
<goal>
From this documentation chunk, answer the missing-data questions needed for test case creation.
</goal>
<inputs>
<input name="system_boundaries"/>
<input name="missing_data_questions"/>
<input name="documentation_chunk"/>
</inputs>
<rules priority="critical">
<rule id="if_none">If the missing-data input is exactly "No missing data questions.", output exactly: "No missing data questions."</rule>
<rule id="chunk_scoped">Only answer questions that can be answered from THIS documentation chunk.</rule>
<rule id="no_hallucination">Do not guess. If not present in this chunk, do not include the question in the output.</rule>
<rule id="practical_answers">Answers must be practical: concrete values, rules, error codes/messages, boundaries.</rule>
<rule id="no_md_in_cells">No markdown syntax in table cells.</rule>
<rule id="empty_signal">If this chunk answers NONE of the questions, output exactly: "No relevant information found in this documentation chunk."</rule>
<rule id="no_tag_leakage">Do NOT output any XML instruction tags from this prompt.</rule>
</rules>
<output_contract>
<![CDATA[
## Follow-Up Answers (From Documentation)
| Question | Source | Relevant Section | Answer |
|----------|--------|------------------|--------|
| ... | Gesamtdokumentation | ... | ... |
OR (if nothing relevant):
No relevant information found in this documentation chunk.
OR (if none):
No missing data questions.
]]>
</output_contract>
<quality_checks>
<check>Every answer is backed by text in this documentation chunk.</check>
</quality_checks>
</prompt>
<inputs>
<system_boundaries>
{system_context_extraction}
</system_boundaries>
<missing_data_questions>
{missing_data_followup}
</missing_data_questions>
<documentation_chunk>
{gesamtdokumentation}
</documentation_chunk>
</inputs>
<prompt>
<role>QA Editor (Merge + Normalize)</role>
<goal>
Create a single consolidated missing-data answer table.
</goal>
<inputs>
<input name="missing_data_questions"/>
<input name="answers_from_fact_index"/>
<input name="answers_from_doc_lookup"/>
</inputs>
<rules priority="critical">
<rule id="if_none">If missing-data input is exactly "No missing data questions.", output exactly: "No missing data questions."</rule>
<rule id="prefer_documented">Prefer answers from documentation lookup when they provide concrete values; otherwise keep Fact Index answers.</rule>
<rule id="no_new_answers">Do not invent answers. If neither source provides an answer, mark Not documented.</rule>
<rule id="no_md_in_cells">No markdown syntax in table cells.</rule>
<rule id="no_tag_leakage">Do NOT output any XML instruction tags from this prompt.</rule>
</rules>
<output_contract>
<![CDATA[
## Follow-Up Answers (Consolidated)
| Question | Answer | Source | Evidence |
|----------|--------|--------|----------|
| ... | ... | Fact Index OR Gesamtdokumentation | Fact ID(s) OR Relevant Section |
OR (if none):
No missing data questions.
]]>
</output_contract>
<quality_checks>
<check>All questions from the missing-data list appear exactly once in the consolidated table (unless there are none).</check>
<check>Any unresolved items are explicitly Not documented.</check>
</quality_checks>
</prompt>
<inputs>
<missing_data_questions>
{missing_data_followup}
</missing_data_questions>
<answers_from_fact_index>
{followup_answers_from_index}
</answers_from_fact_index>
<answers_from_doc_lookup>
{followup_answers_doc}
</answers_from_doc_lookup>
</inputs>
<prompt>
<role>Senior QA Engineer</role>
<goal>
Create a lean test plan (minimal test count, maximal coverage) for the user story change, backed by documented facts.
</goal>
<inputs>
<input name="user_story"/>
<input name="system_boundaries"/>
<input name="requirements_analysis"/>
<input name="fact_index_aggregated"/>
<input name="followup_answers"/>
</inputs>
<rules priority="critical">
<rule id="four_tables_required">Output MUST contain exactly 4 tables as defined in the output contract.</rule>
<rule id="documented_only">Only include test cases backed by documented facts from inputs. If values are missing, use Not documented (do not invent).</rule>
<rule id="change_relevant_only">Only include test cases directly relevant to the user story change.</rule>
<rule id="minimize_count">Minimize test case count by bundling multiple data sets for the same business rule into ONE row.</rule>
<rule id="one_business_rule_per_row">Each test case row tests ONE distinct business rule/validation.</rule>
<rule id="canonical_ids">Use canonical IDs by category:
- Feature Positive: FP-01, FP-02, ...
- Feature Negative: FN-01, FN-02, ...
- Regression Positive: RP-01, RP-02, ...
- Regression Negative: RN-01, RN-02, ...</rule>
<rule id="no_random_tests">Do not include idempotency/performance/formatting tests unless explicitly documented.</rule>
<rule id="no_tag_leakage">Do NOT output any XML instruction tags from this prompt.</rule>
</rules>
<output_contract>
<![CDATA[
# Feature Test Cases (New/Changed Functionality)
## Positive
| Test Case ID | Test Case Title | Maps to Acceptance Criteria | Values to be used | Evidence (Fact ID(s)) |
|--------------|-----------------|-----------------------------|-------------------|------------------------|
## Negative
| Test Case ID | Test Case Title | Maps to Acceptance Criteria | Values to be used | Evidence (Fact ID(s)) |
|--------------|-----------------|-----------------------------|-------------------|------------------------|
# Regression Test Cases (Existing Functionality Must Continue Working)
## Positive
| Test Case ID | Test Case Title | Related System Component | Values to be used | Evidence (Fact ID(s)) |
|--------------|-----------------|--------------------------|-------------------|------------------------|
## Negative
| Test Case ID | Test Case Title | Related System Component | Values to be used | Evidence (Fact ID(s)) |
|--------------|-----------------|--------------------------|-------------------|------------------------|
]]>
</output_contract>
<quality_checks>
<check>IDs follow FP/FN/RP/RN prefixes and are sequential within each table.</check>
<check>Each row includes Evidence (Fact ID(s)) OR explicitly uses Not documented in values/evidence.</check>
<check>Bundling happens only within the same table/category.</check>
</quality_checks>
</prompt>
<inputs>
<user_story>
{userInput}
</user_story>
<system_boundaries>
{system_context_extraction}
</system_boundaries>
<requirements_analysis>
{test_requirements_analysis}
</requirements_analysis>
<fact_index_aggregated>
{doc_fact_index}
</fact_index_aggregated>
<followup_answers>
{followup_answers}
</followup_answers>
</inputs>
<prompt>
<role>QA Reviewer (Analysis Only)</role>
<goal>
Review the test plan and identify issues without fixing them. Be aggressive about reducing redundancy while preserving coverage and the 4-table architecture.
</goal>
<inputs>
<input name="user_story"/>
<input name="system_boundaries"/>
<input name="requirements_analysis"/>
<input name="test_plan"/>
</inputs>
<rules priority="critical">
<rule id="analysis_only">Do NOT rewrite the plan. Only identify issues and recommended actions.</rule>
<rule id="no_category_merge">NEVER recommend merging Positive with Negative test cases. Only recommend bundling within the same table.</rule>
<rule id="ids_sacrosanct">Do NOT propose changing canonical IDs (FP/FN/RP/RN). IDs are stable once created.</rule>
<rule id="documented_only">Flag any row whose values/evidence look invented or not backed by documentation.</rule>
<rule id="no_tag_leakage">Do NOT output any XML instruction tags from this prompt.</rule>
</rules>
<output_contract>
<![CDATA[
# Test Plan Review
## Redundant Test Cases or Bundling Opportunities
| Test Case ID(s) | Test Case Title(s) | Issue | Action |
|-----------------|-------------------|-------|--------|
## Misclassified Test Cases
| Test Case ID | Test Case Title | Current Table | Action |
|--------------|-----------------|---------------|--------|
## Data Quality / Scope / Evidence Issues
| Test Case ID | Test Case Title | Issue | Action |
|--------------|-----------------|-------|--------|
## Coverage Gaps
| Missing Coverage Area | Acceptance Criteria Gap | Suggested Test Case (category only, no new ID) |
|-----------------------|-------------------------|-----------------------------------------------|
]]>
</output_contract>
<quality_checks>
<check>No fixes applied; only issues + actions listed.</check>
<check>No recommendations to merge across tables (Positive vs Negative).</check>
</quality_checks>
</prompt>
<inputs>
<user_story>
{userInput}
</user_story>
<system_boundaries>
{system_context_extraction}
</system_boundaries>
<requirements_analysis>
{test_requirements_analysis}
</requirements_analysis>
<test_plan>
{test_plan}
</test_plan>
</inputs>
<prompt>
<role>Senior QA Engineer (Editor)</role>
<goal>
Apply fixes identified in the test plan review to produce the final corrected test plan, keeping the exact 4-table structure and stable IDs.
</goal>
<inputs>
<input name="user_story"/>
<input name="system_boundaries"/>
<input name="original_test_plan"/>
<input name="review"/>
</inputs>
<rules priority="critical">
<rule id="four_tables_required">Output MUST contain exactly 4 tables as in the contract (no merging sections).</rule>
<rule id="ids_unchanged">Do NOT change Test Case IDs.</rule>
<rule id="no_new_tests">Do not introduce new test cases. Only remove/merge/bundle within the same table and adjust values/evidence.</rule>
<rule id="no_cross_table_merge">Do NOT move test cases across tables (Feature/Regression or Positive/Negative). If the review claims misclassification, keep the test case in its original table and do not change its ID.</rule>
<rule id="evidence_or_not_documented">If evidence is weak or missing, set Values/Evidence to Not documented (do not invent).</rule>
<rule id="no_tag_leakage">Do NOT output any XML instruction tags from this prompt.</rule>
</rules>
<output_contract>
<![CDATA[
# Test Plan
## Feature Test Cases
### Positive
| Test Case ID | Test Case Title | Example Data | Evidence (Fact ID(s)) |
|--------------|-----------------|-------------|------------------------|
### Negative
| Test Case ID | Test Case Title | Example Data | Evidence (Fact ID(s)) |
|--------------|-----------------|-------------|------------------------|
## Regression Test Cases
### Positive
| Test Case ID | Test Case Title | Example Data | Evidence (Fact ID(s)) |
|--------------|-----------------|-------------|------------------------|
### Negative
| Test Case ID | Test Case Title | Example Data | Evidence (Fact ID(s)) |
|--------------|-----------------|-------------|------------------------|
If a table has no test cases, output exactly one row with:
| No test cases for this category | | | |
]]>
</output_contract>
<quality_checks>
<check>All four tables exist.</check>
<check>IDs preserved and remain in the correct table (FP/FN/RP/RN).</check>
<check>No new test cases were invented.</check>
</quality_checks>
</prompt>
<inputs>
<user_story>
{userInput}
</user_story>
<system_boundaries>
{system_context_extraction}
</system_boundaries>
<original_test_plan>
{test_plan}
</original_test_plan>
<review>
{test_plan_review}
</review>
</inputs>
<prompt>
<role>Senior QA Engineer (Gherkin Author)</role>
<goal>
Generate syntactically valid Gherkin test cases ONLY for feature (new/changed) behavior based strictly on the final test plan.
</goal>
<inputs>
<input name="system_boundaries"/>
<input name="requirements_analysis"/>
<input name="final_test_plan"/>
</inputs>
<rules priority="critical">
<rule id="feature_only">Generate ONLY FP- and FN- test cases. Ignore RP- and RN- completely.</rule>
<rule id="no_invent">Do NOT invent test cases not present in the final test plan.</rule>
<rule id="preserve_titles">Use the Test Case Title exactly as-is.</rule>
<rule id="scenario_title_format">Scenario/Scenario Outline title MUST be: "[Test Case ID] - [Test Case Title]".</rule>
<rule id="use_documented_data">Use Example Data if provided. If Example Data is Not documented, do not fabricate; prefer Scenario without Examples.</rule>
<rule id="gherkin_valid">Output must be valid Gherkin (Feature, Scenario/Scenario Outline, Given/When/Then, Examples when applicable).</rule>
<rule id="empty_sections">If there are no FP-xx or FN-xx, output that section as exactly: "No test cases for this category."</rule>
<rule id="no_tag_leakage">Do NOT output any XML instruction tags from this prompt.</rule>
</rules>
<output_contract>
<![CDATA[
## Feature Positive
```gherkin
[FP scenarios]
```
## Feature Negative
```gherkin
[FN scenarios]
```
If a category has no test cases:
## Feature Positive
No test cases for this category.
]]>
</output_contract>
</prompt>
<inputs>
<system_boundaries>
{system_context_extraction}
</system_boundaries>
<requirements_analysis>
{test_requirements_analysis}
</requirements_analysis>
<final_test_plan>
{final_test_plan}
</final_test_plan>
</inputs>
<prompt>
<role>Senior QA Engineer (Gherkin Author)</role>
<goal>
Generate syntactically valid Gherkin test cases ONLY for regression (existing behavior that must continue working) based strictly on the final test plan.
</goal>
<inputs>
<input name="system_boundaries"/>
<input name="requirements_analysis"/>
<input name="final_test_plan"/>
</inputs>
<rules priority="critical">
<rule id="regression_only">Generate ONLY RP- and RN- test cases. Ignore FP- and FN- completely.</rule>
<rule id="no_invent">Do NOT invent test cases not present in the final test plan.</rule>
<rule id="preserve_titles">Use the Test Case Title exactly as-is.</rule>
<rule id="scenario_title_format">Scenario/Scenario Outline title MUST be: "[Test Case ID] - [Test Case Title]".</rule>
<rule id="use_documented_data">Use Example Data if provided. If Example Data is Not documented, do not fabricate; prefer Scenario without Examples.</rule>
<rule id="gherkin_valid">Output must be valid Gherkin syntax.</rule>
<rule id="empty_sections">If there are no RP-xx or RN-xx, output that section as exactly: "No test cases for this category."</rule>
<rule id="no_tag_leakage">Do NOT output any XML instruction tags from this prompt.</rule>
</rules>
<output_contract>
<![CDATA[
## Regression Positive
```gherkin
[RP scenarios]
```
## Regression Negative
```gherkin
[RN scenarios]
```
If a category has no test cases:
## Regression Positive
No test cases for this category.
]]>
</output_contract>
</prompt>
<inputs>
<system_boundaries>
{system_context_extraction}
</system_boundaries>
<requirements_analysis>
{test_requirements_analysis}
</requirements_analysis>
<final_test_plan>
{final_test_plan}
</final_test_plan>
</inputs>
<prompt>
<role>Technical Editor (No Regeneration)</role>
<goal>
Produce the final test suite with the four sections in the required order, using the provided generated test cases without modifying them.
</goal>
<inputs>
<input name="system_boundaries"/>
<input name="final_test_plan"/>
<input name="feature_test_cases"/>
<input name="regression_test_cases"/>
</inputs>
<rules priority="critical">
<rule id="no_regeneration">Do NOT regenerate or invent test cases. Do NOT change Gherkin content.</rule>
<rule id="preserve_exactly">Preserve the provided content EXACTLY as-is.</rule>
<rule id="four_sections">Output must contain all four headings in the exact order.</rule>
<rule id="no_tag_leakage">Do NOT output any XML instruction tags from this prompt.</rule>
</rules>
<output_contract>
<![CDATA[
# Test Suite
## Feature Positive
[From feature_test_cases]
## Feature Negative
[From feature_test_cases]
## Regression Positive
[From regression_test_cases]
## Regression Negative
[From regression_test_cases]
]]>
</output_contract>
</prompt>
<inputs>
<system_boundaries>
{system_context_extraction}
</system_boundaries>
<final_test_plan>
{final_test_plan}
</final_test_plan>
<feature_test_cases>
{feature_test_cases}
</feature_test_cases>
<regression_test_cases>
{regression_test_cases}
</regression_test_cases>
</inputs>
<prompt>
<role>Test Management Export Formatter</role>
<goal>
Convert Gherkin test cases into Xray CSV format for import, using the provided template for header and delimiter.
</goal>
<inputs>
<input name="gherkin_test_suite"/>
<input name="csv_template"/>
</inputs>
<rules priority="critical">
<rule id="csv_only">Output ONLY the CSV content. No explanations. No markdown code fences.</rule>
<rule id="use_template">Use exactly the same header row and delimiter as in the provided CSV template.</rule>
<rule id="one_row_per_scenario">Each Scenario / Scenario Outline becomes exactly one CSV row.</rule>
<rule id="quote_newlines_semicolons">Quote any field that contains semicolons or newlines (especially Test Beschreibung).</rule>
<rule id="skip_empty_sections">Skip sections that say No test cases for this category.</rule>
<rule id="no_tag_leakage">Do NOT output any XML instruction tags from this prompt.</rule>
</rules>
<output_contract>
<![CDATA[
[header row from template]
[row]
[row]
...
]]>
</output_contract>
</prompt>
<inputs>
<gherkin_test_suite>
{final_result}
</gherkin_test_suite>
<csv_template>
{xray_csv_format}
</csv_template>
</inputs>