Workflow goal
A streamlined workflow for generating Gherkin test cases, with separate review and correction phases and a clean split between feature and regression coverage.
Variable status (no documents fetched yet; all phases pending):

| Source | Docs | Phase 1: Metadata | Phase 2: Raw Content | Phase 3: Final Content |
|--------|------|-------------------|----------------------|------------------------|
| System Doc: Application Data Service | 1 | 0/1 | 0/1 | 0/1 |
| System Doc: End-to-End | 2 | 0/2 | 0/2 | 0/2 |
| System Doc: Output Service | 2 | 0/2 | 0/2 | 0/2 |
| System Doc: Policy Manager | 1 | 0/1 | 0/1 | 0/1 |
| System Doc: Webform Policy Application | 1 | 0/1 | 0/1 | 0/1 |
| User Story | 3 | 0/3 | 0/3 | 0/3 |
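Each source document passes through the three fetch phases tracked above. As a minimal sketch of how such per-source counters could be maintained (the class and method names are illustrative, not part of the workflow):

```python
from dataclasses import dataclass, field

PHASES = ("metadata", "raw_content", "final_content")

@dataclass
class SourceStatus:
    """Tracks fetch progress for one source's documents across the three phases."""
    name: str
    doc_ids: list
    done: dict = field(default_factory=lambda: {p: set() for p in PHASES})

    def mark_done(self, doc_id: str, phase: str) -> None:
        self.done[phase].add(doc_id)

    def counters(self) -> str:
        """Render a '0/3 0/3 0/3' style progress line like the table above."""
        return " ".join(f"{len(self.done[p])}/{len(self.doc_ids)}" for p in PHASES)
```

For example, `SourceStatus("User Story", ["AQV-75", "339247106v1", "AQV-1"]).counters()` yields `0/3 0/3 0/3` until documents are marked done.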

System Doc: Application Data Service

- 317816834v1 (not yet fetched)

System Doc: End-to-End

- 317980675v1 (not yet fetched)
- 319258625v1 (not yet fetched)

System Doc: Output Service

- 317816843v1 (not yet fetched)
- 290848769v1 (not yet fetched)

System Doc: Policy Manager

- 317947913v1 (not yet fetched)

System Doc: Webform Policy Application

- 319029250v1 (not yet fetched)

User Story

- AQV-75 (not yet fetched)
- 339247106v1 (not yet fetched)
- AQV-1 (not yet fetched)
1 System Context and Boundaries
Extract system context and boundaries from the user story and full documentation to establish a clear scope for all subsequent analysis
Status: pending
System Prompt (raw)
Create a simple summary identifying what is within scope and out of scope for this user story.

## System Scope Summary

### Within Scope
- [Systems, components, processes that ARE part of this change]
- [Data elements, mappings, values that ARE affected]

### Out of Scope  
- [Systems, components, processes that are NOT part of this change]
- [Functionality that is NOT part of this user story]

### Testing Focus
- **Focus On**: [Main testing priorities]
- **Out of Scope**: [What is NOT part of this testing effort]

Keep summary short and actionable.
User Prompt (raw)
User Story:
{userInput}

System Documentation:
Webform Policy Application: {system_doc_webform_policy_application}
Policy Manager: {system_doc_policy_manager}
Application Data Service: {system_doc_application_data_service}
Output Service: {system_doc_output_service}
End-to-End: {system_doc_end_to_end}
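The `{userInput}` and `{system_doc_*}` tokens in the raw prompts are template variables filled in at run time from the fetched documents. A minimal rendering sketch, assuming a `str.format_map`-style engine (the function name and unresolved-placeholder behavior are assumptions; the actual runner may differ):

```python
def render_prompt(template: str, variables: dict) -> str:
    """Substitute {placeholder} tokens; leave unknown ones intact for debugging."""
    class Defaults(dict):
        def __missing__(self, key):
            return "{" + key + "}"  # keep unresolved placeholders visible
    return template.format_map(Defaults(variables))

user_prompt = "User Story:\n{userInput}\n\nPolicy Manager: {system_doc_policy_manager}"
rendered = render_prompt(user_prompt, {"userInput": "As a user ..."})
# {system_doc_policy_manager} stays visible because it has not been fetched yet
```

Leaving unknown tokens in place (rather than raising) makes a missing fetch obvious in the rendered prompt.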
2 User Story Analysis
Analyze the user story with 10 focused questions about core and secondary topics within the established system boundaries
Status: pending
System Prompt (raw)
Analyze the user story and create exactly 10 focused questions within established system boundaries.

First, think step by step about what aspects need investigation. Then create your focused questions.

Focus on:
- What information do we need about the core topic?
- What is the current state vs. the change?
- What existing functionality must continue working?

Number each question 1-10 for easy reference. Stay within the established system boundaries.
User Prompt (raw)
User Story:
{userInput}

System Boundaries:
{system_context_extraction}
3 Analysis: Webform Policy Application
Analyze system-doc: Webform Policy Application
Status: pending
System Prompt (raw)
Analyze this documentation chunk to answer questions from the User Story Analysis within system boundaries.

First, carefully review the questions and identify which ones this chunk can answer. Then create your analysis table.

Create a table with the following columns:

| ID | Question | Relevant section | Answer |
|----|----------|------------------|--------|

Rules:
- No markdown syntax in table cells (use plain text)
- Only include questions that CAN be answered from this chunk
- Focus on information within established system boundaries
- The table itself must be valid Markdown syntax.
User Prompt (raw)
System Boundaries:
{system_context_extraction}

User Story Analysis:
{user_story_analysis}

System Doc - Webform Policy Application:
{system_doc_webform_policy_application}
4 Analysis: Policy Manager
Analyze system-doc: Policy Manager
Status: pending
System Prompt (raw)
Analyze this documentation chunk to answer questions from the User Story Analysis within system boundaries.

First, carefully review the questions and identify which ones this chunk can answer. Then create your analysis table.

Create a table with the following columns:

| ID | Question | Relevant section | Answer |
|----|----------|------------------|--------|

Rules:
- No markdown syntax in table cells (use plain text)
- Only include questions that CAN be answered from this chunk
- Focus on information within established system boundaries
- The table itself must be valid Markdown syntax.
User Prompt (raw)
System Boundaries:
{system_context_extraction}

User Story Analysis:
{user_story_analysis}

System Doc - Policy Manager:
{system_doc_policy_manager}
5 Analysis: Application Data Service
Analyze system-doc: Application Data Service
Status: pending
System Prompt (raw)
Analyze this documentation chunk to answer questions from the User Story Analysis within system boundaries.

First, carefully review the questions and identify which ones this chunk can answer. Then create your analysis table.

Create a table with the following columns:

| ID | Question | Relevant section | Answer |
|----|----------|------------------|--------|

Rules:
- No markdown syntax in table cells (use plain text)
- Only include questions that CAN be answered from this chunk
- Focus on information within established system boundaries
- The table itself must be valid Markdown syntax.
User Prompt (raw)
System Boundaries:
{system_context_extraction}

User Story Analysis:
{user_story_analysis}

System Doc - Application Data Service:
{system_doc_application_data_service}
6 Analysis: Output Service
Analyze system-doc: Output Service
Status: pending
System Prompt (raw)
Analyze this documentation chunk to answer questions from the User Story Analysis within system boundaries.

First, carefully review the questions and identify which ones this chunk can answer. Then create your analysis table.

Create a table with the following columns:

| ID | Question | Relevant section | Answer |
|----|----------|------------------|--------|

Rules:
- No markdown syntax in table cells (use plain text)
- Only include questions that CAN be answered from this chunk
- Focus on information within established system boundaries
- The table itself must be valid Markdown syntax.
User Prompt (raw)
System Boundaries:
{system_context_extraction}

User Story Analysis:
{user_story_analysis}

System Doc - Output Service:
{system_doc_output_service}
7 Analysis: End-to-End
Analyze system-doc: End-to-End
Status: pending
System Prompt (raw)
Analyze this documentation chunk to answer questions from the User Story Analysis within system boundaries.

First, carefully review the questions and identify which ones this chunk can answer. Then create your analysis table.

Create a table with the following columns:

| ID | Question | Relevant section | Answer |
|----|----------|------------------|--------|

Rules:
- No markdown syntax in table cells (use plain text)
- Only include questions that CAN be answered from this chunk
- Focus on information within established system boundaries
- The table itself must be valid Markdown syntax.
User Prompt (raw)
System Boundaries:
{system_context_extraction}

User Story Analysis:
{user_story_analysis}

System Doc - End-to-End:
{system_doc_end_to_end}
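Steps 3 through 7 apply the same analysis prompt to each system document in turn. That fan-out can be sketched as follows (the function name and prompt assembly are illustrative; the variable keys match the `{analysis_*}` placeholders used by later steps):

```python
# Map placeholder key -> document title, as used in steps 3-7
DOCS = {
    "webform_policy_application": "Webform Policy Application",
    "policy_manager": "Policy Manager",
    "application_data_service": "Application Data Service",
    "output_service": "Output Service",
    "end_to_end": "End-to-End",
}

def build_analysis_prompts(boundaries: str, questions: str, doc_texts: dict) -> dict:
    """One user prompt per document; all five share the same system prompt."""
    prompts = {}
    for key, title in DOCS.items():
        prompts[f"analysis_{key}"] = (
            f"System Boundaries:\n{boundaries}\n\n"
            f"User Story Analysis:\n{questions}\n\n"
            f"System Doc - {title}:\n{doc_texts.get(key, '')}"
        )
    return prompts
```

Because the five steps are independent of one another, they could run in parallel once steps 1 and 2 have completed.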
8 Requirements Analysis
Analyze requirements, data elements, coverage areas, and edge values
Status: pending
System Prompt (raw)
Extract test requirements specifically relevant to the user story change.

First, think step by step about what data elements, coverage areas, and requirements are directly impacted by this change. Then organize your findings.

## Requirements Analysis

### Data Elements (Change-Relevant)
| Data Element | Valid Values | Invalid Values | Change Impact |
|--------------|--------------|----------------|---------------|

### Coverage Areas
| Coverage Area | Why Relevant to Change | Priority |
|---------------|------------------------|----------|

### Requirements
| Requirement | How Change Affects It | Test Approach |
|-------------|----------------------|---------------|

Rules:
- Focus only on documented information relevant to the change.
- Mark unavailable info as *Not documented*
User Prompt (raw)
User Story:
{userInput}

System Boundaries:
{system_context_extraction}

Analysis Results:
Webform Policy Application: {analysis_webform_policy_application}
Policy Manager: {analysis_policy_manager}
Application Data Service: {analysis_application_data_service}
Output Service: {analysis_output_service}
End-to-End: {analysis_end_to_end}
9 Missing Information
Identify missing information needed for good test cases
Status: pending
System Prompt (raw)
Identify missing data needed to create good test cases.

Create a simple list of follow-up questions:

## Missing Data Questions

1. [Specific missing value or data needed]
2. [Another missing piece of information]
3. [Additional data required for testing]

Focus on practical missing information like specific values, error codes, validation rules, or boundary values.
User Prompt (raw)
User Story:
{userInput}

System Boundaries:
{system_context_extraction}

Requirements Analysis:
{test_requirements_analysis}
10 Data Lookup
Search all documentation to find answers for missing information
Status: pending
System Prompt (raw)
Search through all documentation chunks to answer the missing data questions.

First, systematically review each question against all documentation chunks. Then compile your findings.

For each question, search all chunks and provide:

## Follow-Up Answers

| Question | Relevant Section | Answer |
|----------|------------------|--------|

Rules:
- No markdown syntax in table cells (use plain text)
- If found, extract relevant section and note source
- If not found, mark as "Not documented"
- Provide practical answers for test case creation
User Prompt (raw)
System Boundaries:
{system_context_extraction}

Missing Data Follow-Up Questions:
{missing_data_followup}

System Doc - Webform Policy Application:
{system_doc_webform_policy_application}

System Doc - Policy Manager:
{system_doc_policy_manager}

System Doc - Application Data Service:
{system_doc_application_data_service}

System Doc - Output Service:
{system_doc_output_service}

System Doc - End-to-End:
{system_doc_end_to_end}
11 Test Plan
Create test plan with Feature and Regression tables
Status: pending
System Prompt (raw)
Create a test plan with four tables, split across two sections.

# Feature Test Cases (New/Changed Functionality)

## Positive
| Test Case ID | Test Case Title | Maps to Acceptance Criteria | Values to be used |
|--------------|-----------------|-----------------------------|-------------------|

## Negative  
| Test Case ID | Test Case Title | Maps to Acceptance Criteria | Values to be used |
|--------------|-----------------|-----------------------------|-------------------|

# Regression Test Cases (Existing Functionality Must Continue Working)

## Positive
| Test Case ID | Test Case Title | Related System Component | Values to be used |
|--------------|-----------------|--------------------------|-------------------|

## Negative
| Test Case ID | Test Case Title | Related System Component | Values to be used |
|--------------|-----------------|--------------------------|-------------------|

Test Plan Requirements:
- Only include test cases that are directly relevant to the user story change.
- Only include test cases that are backed by specific documented facts.
- The test title should be a short description of the business logic being tested.
- The 'Values to be used' column should contain the specific values that will populate the Examples table.
- Use the Scenario Outline + Examples pattern where applicable instead of repeating near-identical test cases.
- Do not include idempotency, performance, formatting, or similar tests unless documentation backs them up.
- Each test case must reference the source documentation and provide concrete data values, not placeholders or generic descriptions.
User Prompt (raw)
User Story: {userInput}
System Boundaries: {system_context_extraction}
Requirements Analysis: {test_requirements_analysis}
Follow-Up Answers: {followup_answers}
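The "Scenario Outline + Examples" requirement in the test plan prompt means that test cases differing only in data should collapse into one parameterised scenario whose value rows come from the 'Values to be used' column. A sketch of rendering such an Examples block (the helper name and the sample values are hypothetical):

```python
def to_examples_table(headers, rows):
    """Render an Examples: block for a Gherkin Scenario Outline from value rows."""
    lines = ["  Examples:"]
    lines.append("    | " + " | ".join(headers) + " |")
    for row in rows:
        lines.append("    | " + " | ".join(str(v) for v in row) + " |")
    return "\n".join(lines)

table = to_examples_table(
    ["policy_type", "expected_premium"],
    [("standard", "120.00"), ("premium", "240.00")],
)
```

Two otherwise-identical positive tests then become one outline with two Examples rows instead of two repeated scenarios.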
12 Test Plan Review
Review test plan and identify issues without fixing them - analysis only
Status: pending
System Prompt (raw)
Review the test plan and identify issues without fixing them - analysis only.

First, systematically examine each test case for redundancy, classification, data quality, and coverage gaps. Then document your findings.

# Test Plan Review

## Redundant Test Cases
| Test Case ID | Test Case Title | Redundant to | Action |
|--------------|-----------------|--------------|--------|

## Misclassified Test Cases
| Test Case ID | Test Case Title | Current Table | Action |
|--------------|-----------------|---------------|--------|

## Data Quality, Boundary or Scope Issues or Irrelevant Test Cases
| Test Case ID | Test Case Title | Data Issue | Action |
|--------------|-----------------|------------|--------|

## Coverage Gaps
| Missing Coverage Area | Acceptance Criteria Gap | Suggested Test Case |
|-----------------------|-------------------------|---------------------|

Test Plan Requirements:
- Only include test cases that are directly relevant to the user story change.
- Only include test cases that are backed by specific documented facts.
- The test title should be a short description of the business logic being tested.
- The 'Values to be used' column should contain the specific values that will populate the Examples table.
- Use the Scenario Outline + Examples pattern where applicable instead of repeating near-identical test cases.
- Do not include idempotency, performance, formatting, or similar tests unless documentation backs them up.
- Each test case must reference the source documentation and provide concrete data values, not placeholders or generic descriptions.
User Prompt (raw)
User Story: {userInput}
System Boundaries: {system_context_extraction}
Requirements Analysis: {test_requirements_analysis}
Follow-Up Answers: {followup_answers}
Test Plan to Review: {test_plan}
13 Final Test Plan
Apply fixes from review to produce final corrected test plan with clean tables
Status: pending
System Prompt (raw)
Apply fixes identified in the test plan review to produce the final corrected test plan.

# Test Plan

## Feature Test Cases

### Positive
| Test Case ID | Test Case Title | Example Data |
|--------------|-----------------|--------------|

### Negative
| Test Case ID | Test Case Title | Example Data |
|--------------|-----------------|--------------|

## Regression Test Cases

### Positive
| Test Case ID | Test Case Title | Example Data |
|--------------|-----------------|--------------|

### Negative
| Test Case ID | Test Case Title | Example Data |
|--------------|-----------------|--------------|

Test Plan Requirements:
- Only include test cases that are directly relevant to the user story change.
- Only include test cases that are backed by specific documented facts.
- The test title should be a short description of the business logic being tested.
- The 'Example Data' column should contain the specific values that will populate the Examples table.
- Use the Scenario Outline + Examples pattern where applicable instead of repeating near-identical test cases.
- Do not include idempotency, performance, formatting, or similar tests unless documentation backs them up.
- Each test case must reference the source documentation and provide concrete data values, not placeholders or generic descriptions.
User Prompt (raw)
User Story: {userInput}
System Boundaries: {system_context_extraction}
Original Test Plan: {test_plan}
Test Plan Review: {test_plan_review}
14 Feature Test Cases
Generate feature test cases (new/changed functionality) from the verified Feature Table
Status: pending
System Prompt (raw)
Generate Gherkin test cases for Feature tests from the final corrected test plan.

For each feature test case:
```gherkin
Feature: [Feature name]
Scenario Outline OR Scenario: [Test title]
  Given [setup for new/changed functionality]
  When [operation testing the change]
  Then [expected outcome]
  Examples:
    | [value 1] | [value 2] | [value 3] |
    | [value 4] | [value 5] | [value 6] |
```

Rules:
- Only include test cases from "feature positive" and "feature negative" sections.
- Do NOT add, remove, or retitle the test cases you are generating.
- Do NOT include any test case from "regression positive" or "regression negative" sections.
- Use documented data.
User Prompt (raw)
System Boundaries: {system_context_extraction}
Requirements Analysis: {test_requirements_analysis}
Final Test Plan: {final_test_plan}
15 Regression Test Cases
Generate regression test cases (existing functionality) from the final Regression Table
Status: pending
System Prompt (raw)
Generate Gherkin test cases for Regression tests from the final corrected test plan.

For each regression test case:
```gherkin
Feature: [System component]
Scenario Outline OR Scenario: [Test title]
  Given [setup for existing functionality]
  When [operation testing unchanged functionality]
  Then [expected outcome proving it still works]
  Examples:
    | [value 1] | [value 2] | [value 3] |
    | [value 4] | [value 5] | [value 6] |
```

Rules:
- Only include test cases from "regression positive" and "regression negative" sections.
- Do NOT add, remove, or retitle the test cases you are generating.
- Do NOT include any test case from "feature positive" or "feature negative" sections.
- Use documented data.
User Prompt (raw)
System Boundaries: {system_context_extraction}
Requirements Analysis: {test_requirements_analysis}
Final Test Plan: {final_test_plan}
16 Final Gherkin Test Cases
Create final comprehensive Gherkin test suite
Status: pending
System Prompt (raw)
Combine the test cases into the final output format.

# Test Suite

## Feature Positive
```gherkin
Feature: [Feature name]

Scenario Outline OR Scenario: [Test title]
  Given [setup for new/changed functionality]
  When [operation testing the change]
  Then [expected outcome]
  Examples:
    | [value 1] | [value 2] | [value 3] |
    | [value 4] | [value 5] | [value 6] |
```

## Feature Negative

## Regression Positive

## Regression Negative

Output Requirements:
- Present the result in gherkin syntax as clear code blocks for each group
- The groups are: Feature Positive, Feature Negative, Regression Positive, Regression Negative
User Prompt (raw)
System Boundaries:
{system_context_extraction}

Test Plan:
{final_test_plan}

Feature Test Cases:
{feature_test_cases}

Regression Test Cases:
{regression_test_cases}
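Each step's output is stored under a named variable (`{system_context_extraction}`, `{test_plan}`, `{final_test_plan}`, and so on) that later steps consume, so the sixteen steps form a dependency chain. A sketch of resolving a valid run order (variable names are taken from the prompts above; only a subset of each step's inputs is listed, with one of the five analysis steps shown as a representative; the runner itself is hypothetical):

```python
# Variable each step produces -> variables it consumes (subset shown)
DEPENDS = {
    "system_context_extraction": [],
    "user_story_analysis": ["system_context_extraction"],
    "analysis_policy_manager": ["system_context_extraction", "user_story_analysis"],
    "test_requirements_analysis": ["system_context_extraction", "analysis_policy_manager"],
    "missing_data_followup": ["system_context_extraction", "test_requirements_analysis"],
    "followup_answers": ["system_context_extraction", "missing_data_followup"],
    "test_plan": ["test_requirements_analysis", "followup_answers"],
    "test_plan_review": ["test_plan", "followup_answers"],
    "final_test_plan": ["test_plan", "test_plan_review"],
    "feature_test_cases": ["final_test_plan", "test_requirements_analysis"],
    "regression_test_cases": ["final_test_plan", "test_requirements_analysis"],
}

def run_order(depends: dict) -> list:
    """Topological order: a step runs once all of its inputs are available."""
    done, order = set(), []
    while len(order) < len(depends):
        ready = [s for s, deps in depends.items()
                 if s not in done and all(d in done for d in deps)]
        if not ready:
            raise ValueError("cycle in step dependencies")
        for s in ready:
            done.add(s)
            order.append(s)
    return order
```

This makes the review-then-correct structure explicit: `final_test_plan` cannot run until both `test_plan` and `test_plan_review` exist, and the two generation steps fan out from it independently.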