
Fix run_tests response #20


Merged
merged 3 commits into CoderGamester:main from fix/run_tests_response
Apr 23, 2025

Conversation

nowsprinting
Contributor

@nowsprinting nowsprinting commented Apr 20, 2025

Problem

The test count in the response is always 0, and no individual test results are returned.

e.g.,

Test run completed: UnityModelContextProtocolSandbox - Failed(Child)

{
  "testCount": 0,
  "passCount": 0,
  "failCount": 0,
  "results": []
}

Fixes

The response now returns:

  • Total, pass, fail, inconclusive, and skip counts.
  • Individual test results.

e.g.,

2/3 tests passed. Failed tests: FailTestDemo

{
  "testCount": 3,
  "passCount": 2,
  "failCount": 1,
  "inconclusiveCount": 0,
  "skipCount": 0,
  "results": [
    {
      "name": "FailTestDemo",
      "fullName": "McpServerDemo.Tests.McpServerTest.FailTestDemo",
      "result": "Failed",
      "message": "This is a demo failure.",
      "duration": 0.0338517
    },
    {
      "name": "SuccessTest1",
      "fullName": "McpServerDemo.Tests.McpServerTest.SuccessTest1",
      "result": "Passed",
      "message": null,
      "duration": 0.63412
    },
    {
      "name": "SuccessTest2",
      "fullName": "McpServerDemo.Tests.McpServerTest.SuccessTest2",
      "result": "Passed",
      "message": null,
      "duration": 0.5749152
    }
  ]
}
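For reference, the response shape shown above can be described with a TypeScript interface. This is a sketch derived from the example JSON, not a type exported by the project, together with a sanity check that the per-status counts sum to the total:

```typescript
// Hypothetical shape of the run_tests response after this fix; field names
// mirror the example JSON above, but the type itself is an illustration.
interface TestRunResponse {
  testCount: number;
  passCount: number;
  failCount: number;
  inconclusiveCount: number;
  skipCount: number;
  results: Array<{
    name: string;
    fullName: string;
    result: "Passed" | "Failed" | "Skipped" | "Inconclusive";
    message: string | null;
    duration: number;
  }>;
}

// Sanity check: the individual counts should sum to the total.
function countsAreConsistent(r: TestRunResponse): boolean {
  return (
    r.testCount ===
    r.passCount + r.failCount + r.inconclusiveCount + r.skipCount
  );
}
```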

Discussions

When there are many tests, the response should not include the results of every test.
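One way to act on this suggestion, sketched here as an assumption (the merged PR does not implement truncation): cap the results array and keep failed tests first, since those are what a client usually needs to see.

```typescript
interface TestResultEntry {
  name: string;
  result: string; // e.g. "Passed" | "Failed" | "Skipped" | "Inconclusive"
}

// Illustrative only: when there are more results than maxResults, keep
// failed tests first, then fill the remaining slots with other results.
function truncateResults(
  results: TestResultEntry[],
  maxResults: number
): TestResultEntry[] {
  if (results.length <= maxResults) return results;
  const failed = results.filter((r) => r.result === "Failed");
  const others = results.filter((r) => r.result !== "Failed");
  return failed.concat(others).slice(0, maxResults);
}
```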

Summary by CodeRabbit

  • New Features
    • Test results now display detailed counts for failed, skipped, and inconclusive tests, in addition to passed tests.
    • The full list of individual test results is included in the output for enhanced visibility.
  • Improvements
    • Test summary reporting is more accurate and detailed, providing clearer feedback after running tests.

Contributor

coderabbitai bot commented Apr 20, 2025

Walkthrough

The changes update the test reporting logic across both the editor and server tools. The editor tool now records only leaf test results and constructs the test summary using direct counts from the test result adaptor, adding more detailed fields such as fail, skip, and inconclusive counts to the output JSON. Corresponding updates in the server-side JavaScript and TypeScript handlers extract and include these new counts from the response, ensuring that the returned JSON output reflects the enhanced detail and accuracy of test result reporting. No changes were made to public interfaces or exported entity signatures.

Changes

File(s) and change summary:

  • Editor/Tools/RunTestsTool.cs — Records only leaf test results; the summary now uses direct counts from the adaptor; the output JSON includes pass, fail, skip, and inconclusive counts, plus the full results array.
  • Server/build/tools/runTestsTool.js, Server/src/tools/runTestsTool.ts — Extract and include failCount, skipCount, and inconclusiveCount from the response; the output JSON uses these fields directly instead of calculating the fail count; the message fallback order is adjusted.

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant ServerTool (JS/TS)
    participant EditorTool (C#)

    User->>ServerTool: Request to run tests
    ServerTool->>EditorTool: Trigger test execution
    EditorTool->>EditorTool: Run tests, collect leaf results
    EditorTool->>EditorTool: Build summary with pass, fail, skip, inconclusive counts
    EditorTool-->>ServerTool: Return JSON with detailed counts and results
    ServerTool-->>User: Respond with enhanced JSON output

Poem

In the warren where tests are run,
Leaf by leaf, the work is done.
Now counts are clear—fail, skip, and more,
With details richer than before!
Rabbits cheer as numbers grow,
For every test, the truth we know.
🐇✨

📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 2d40387 and 613353d.

📒 Files selected for processing (3)
  • Editor/Tools/RunTestsTool.cs (3 hunks)
  • Server/build/tools/runTestsTool.js (2 hunks)
  • Server/src/tools/runTestsTool.ts (2 hunks)
🔇 Additional comments (9)
Editor/Tools/RunTestsTool.cs (3)

117-121: Well-implemented filtering of test suites

This change correctly prevents double-counting by skipping test suites (tests with children) and only recording leaf test results. This is a key fix for the reported issue where test counts were incorrect.
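The leaf-only traversal this describes can be sketched as follows; the TypeScript below is an illustration of the recursion, not the actual C# code in RunTestsTool.cs (node and field names are hypothetical):

```typescript
interface ResultNode {
  name: string;
  children?: ResultNode[];
}

// Record only leaf results: suite nodes (those with children) are traversed
// but not counted themselves, which prevents the double counting that made
// the reported totals wrong.
function collectLeaves(node: ResultNode, out: ResultNode[] = []): ResultNode[] {
  if (node.children && node.children.length > 0) {
    for (const child of node.children) collectLeaves(child, out);
  } else {
    out.push(node);
  }
  return out;
}
```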


145-150: Proper usage of ITestResultAdaptor for accurate test counts

Excellent improvement to use the counts directly from the test adaptor instead of calculating them from the results collection. This provides more accurate and comprehensive test statistics, including passed, failed, skipped, and inconclusive tests.


178-185: Comprehensive test result information in response

Great work including all the detailed test metrics in the completion result. This ensures that clients receive complete information about test outcomes, which directly fixes the reported issue where no test results were being returned.

Server/build/tools/runTestsTool.js (3)

60-62: Added extraction of additional test result metrics

Good addition of extracting failCount, inconclusiveCount, and skipCount from the response with default values of 0 if not present. This complements the changes made in the C# code.


75-75: Improved message fallback logic

The order of precedence has been corrected to prefer the constructed detailed message over the generic response message, providing better information to users.


82-85: Enhanced JSON output with comprehensive test metrics

Good job including all the detailed test metrics in the JSON output. This ensures consistent reporting of test results across the entire toolchain.

Server/src/tools/runTestsTool.ts (3)

78-80: Added extraction of additional test result metrics

Well-implemented extraction of failCount, inconclusiveCount, and skipCount from the response with default values of 0 if not present. This matches the corresponding changes in the compiled JavaScript file.
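The extraction pattern described here can be sketched as below; the helper and its defaults are illustrative, based on the field names in the PR description, not the exact code in runTestsTool.ts:

```typescript
// Illustrative extraction mirroring the described change: read each count
// from a possibly partial response object, defaulting to 0 when absent.
function extractCounts(response: Record<string, unknown>) {
  const num = (v: unknown): number => (typeof v === "number" ? v : 0);
  return {
    testCount: num(response.testCount),
    passCount: num(response.passCount),
    failCount: num(response.failCount),
    inconclusiveCount: num(response.inconclusiveCount),
    skipCount: num(response.skipCount),
  };
}
```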


95-95: Improved message fallback logic

The order of precedence has been correctly updated to prefer the more detailed constructed message over the generic response message.
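The fallback order can be illustrated with a small helper (the names and default string are hypothetical, not taken from runTestsTool.ts):

```typescript
// Illustrative fallback order as described in the review: prefer the
// constructed detailed message, then the generic response message, then
// a last-resort default.
function buildMessage(
  detailedMessage: string | undefined,
  responseMessage: string | undefined
): string {
  return detailedMessage ?? responseMessage ?? "Test run completed.";
}
```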


102-105: Enhanced JSON output with comprehensive test metrics

Excellent addition of detailed test metrics in the JSON output. This ensures that clients receive complete information about test outcomes, fixing the issue where incomplete test results were being returned.


Owner

@CoderGamester CoderGamester left a comment


This is a very good pull request
Thank you @nowsprinting

@CoderGamester CoderGamester merged commit c6e6a6c into CoderGamester:main Apr 23, 2025
1 check passed
@nowsprinting nowsprinting deleted the fix/run_tests_response branch April 23, 2025 10:38