ValueError during agents.tracing shutdown even when tracing is disabled #502

Closed
cadenzadesigns opened this issue Apr 14, 2025 · 2 comments
Labels
bug Something isn't working

Comments

@cadenzadesigns

When running tests involving the openai-agents SDK with pytest, even after explicitly disabling tracing using agents.set_tracing_disabled(True) at the start of the test session, ValueError: I/O operation on closed file errors occur during the final process shutdown. These errors originate from agents.tracing.setup.shutdown attempting to log debug messages after standard output/error streams appear to be closed.

This occurs reliably after the pytest summary (e.g., "XXX passed, Y skipped...") is printed, so it doesn't seem to affect test results but creates noisy output in CI/test logs.

It seems set_tracing_disabled(True) prevents trace export but does not prevent the tracing components from initializing or attempting their shutdown logging.
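The fix described later in this thread (PR #503, checking self._disabled inside shutdown) amounts to an early-return guard. A minimal sketch of that pattern, with illustrative class and attribute names that are assumptions rather than the SDK's actual internals:

```python
import logging

logger = logging.getLogger("demo.tracing")

# Hypothetical trace provider whose shutdown respects the disabled flag.
# Names (TraceProvider, _disabled, _processors) are illustrative only.
class TraceProvider:
    def __init__(self) -> None:
        self._disabled = False
        self._processors = []

    def set_disabled(self, disabled: bool) -> None:
        self._disabled = disabled

    def shutdown(self) -> None:
        # Returning early when tracing is disabled skips both the debug
        # logging and the per-processor shutdown that fail once the
        # interpreter's stdio streams have been closed.
        if self._disabled:
            return
        logger.debug("Shutting down trace provider")
        for processor in self._processors:
            logger.debug(f"Shutting down trace processor {processor}")
            processor.shutdown()

provider = TraceProvider()
provider.set_disabled(True)
provider.shutdown()  # no logging, no processor shutdown
```

With this guard, set_tracing_disabled(True) would make the atexit-time shutdown a no-op instead of merely suppressing trace export.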

Debug information

  • Agents SDK version: 0.0.9
  • Python version: 3.12.7
  • Operating System: Darwin 24.3.0 arm64 (macOS Sonoma)
  • Other relevant packages:
    • openai: 1.72.0
    • pytest: 8.3.5
    • poetry: 1.8.5

Repro steps

  1. Set up a Python project using Poetry and install openai-agents, openai, and pytest.
  2. Create a test file (tests/test_agent_run.py) that uses agents.Runner.run or run_sync (even a simple "hello world" agent).
  3. Create a root tests/conftest.py with the following hook to disable tracing programmatically:
    # tests/conftest.py
    def pytest_configure(config):
        """
        Uses the official SDK function to disable tracing globally.
        """
        try:
            from agents import set_tracing_disabled
            set_tracing_disabled(disabled=True)
            print("\nINFO (root conftest): Called set_tracing_disabled(True).") # Optional print for confirmation
        except ImportError:
            print("\nWARNING (root conftest): Could not import set_tracing_disabled from agents.")
        except Exception as e:
            print(f"\nERROR (root conftest): Error calling set_tracing_disabled: {e}")
  4. Run the tests using poetry run pytest tests/.
  5. Observe the output after the pytest summary line.

Expected behavior

When tracing is disabled using set_tracing_disabled(True), no tracing-related shutdown logic should execute, or at least it should not attempt to log messages that cause ValueError after streams are closed. The test run should finish cleanly after the pytest summary.

Actual behavior

The tests pass, but the following errors are printed to stderr after the pytest summary:

--- Logging error ---
Traceback (most recent call last):
File "/path/to/your/python/lib/python3.12/logging/__init__.py", line 1163, in emit
stream.write(msg + self.terminator)
ValueError: I/O operation on closed file.
Call stack:
File "/path/to/your/venv/lib/python3.12/site-packages/agents/tracing/setup.py", line 205, in shutdown
logger.debug("Shutting down trace provider")
Message: 'Shutting down trace provider'
Arguments: ()
--- Logging error ---
Traceback (most recent call last):
File "/path/to/your/python/lib/python3.12/logging/__init__.py", line 1163, in emit
stream.write(msg + self.terminator)
ValueError: I/O operation on closed file.
Call stack:
File "/path/to/your/venv/lib/python3.12/site-packages/agents/tracing/setup.py", line 206, in shutdown
self.multi_processor.shutdown()
File "/path/to/your/venv/lib/python3.12/site-packages/agents/tracing/setup.py", line 72, in shutdown
logger.debug(f"Shutting down trace processor {processor}")
Message: 'Shutting down trace processor <agents.tracing.processors.BatchTraceProcessor object at ...>'

Attempts to disable tracing via the OPENAI_AGENTS_DISABLE_TRACING=1 environment variable or RunConfig(tracing_disabled=True) also failed to prevent these specific shutdown errors in the test environment.
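For context, the "--- Logging error ---" block itself is generic logging-module behavior rather than anything SDK-specific: a StreamHandler caches the stream it was created with, and if that stream is closed before a record is emitted, emit raises ValueError, which logging catches and reports via handleError. A standalone reproduction of just that mechanism (no SDK involved):

```python
import io
import logging

# A StreamHandler caches the stream object it was created with.
stream = io.StringIO()
handler = logging.StreamHandler(stream)
log = logging.getLogger("closed-stream-demo")
log.addHandler(handler)
log.setLevel(logging.DEBUG)

# Close the stream out from under the handler, analogous to pytest
# tearing down its captured stdio at session end, then log. emit()
# raises ValueError internally, and logging prints a
# "--- Logging error ---" block to stderr instead of propagating it.
stream.close()
log.debug("Shutting down trace provider")
```

This is presumably why the errors show up only after the pytest summary: by that point the streams the handlers captured at import time have already been closed.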

Thanks for looking into this!

@rm-openai
Collaborator

Thanks for the bug report @cadenzadesigns. Not sure why, but I'm unable to reproduce this behavior. Would you be able to contribute a failing test case to the repo?

Also, here's a potential fix: #503

@cadenzadesigns
Author

Hi @rm-openai,

Following up on the request to provide a failing test case for this issue (#502).

I attempted to create a minimal test case within the openai-agents-python repository environment itself. Here's what I did:

  1. Cloned the repository.
  2. Installed development dependencies using uv sync --dev.
  3. Added a tests/conftest.py hook that calls agents.set_tracing_disabled(True) at the start of the session:
    # tests/conftest.py
    import pytest
    
    @pytest.fixture(scope="session", autouse=True)
    def disable_tracing_globally(request):
        print("\nAttempting to disable tracing via set_tracing_disabled(True)...")
        try:
            from agents import set_tracing_disabled
            set_tracing_disabled(disabled=True)
            print("INFO: Called set_tracing_disabled(True).")
        except Exception as e:
            print(f"ERROR: Error calling set_tracing_disabled: {e}")
        yield
        print("Test session finishing...")
  4. Created a test file (tests/test_tracing_shutdown.py) with a minimal agent, mocking Runner.run and Runner.run_sync to avoid actual model calls (to bypass the repo's disable_real_model_clients fixture):
    # tests/test_tracing_shutdown.py
    import pytest
    from unittest.mock import patch, MagicMock, AsyncMock
    from agents import Agent, Runner, RunResult
    
    agent = Agent(name="MinimalAgent", instructions="Just acknowledge the input.")
    mock_result = MagicMock(spec=RunResult)
    mock_result.final_output = "Mocked acknowledgment"
    
    @pytest.mark.asyncio
    @patch("agents.Runner.run", new_callable=AsyncMock, return_value=mock_result)
    async def test_run_should_not_error_on_shutdown_when_tracing_disabled(mock_run: AsyncMock):
        result = await Runner.run(agent, "Hello")
        assert result.final_output == "Mocked acknowledgment"
    
    @patch("agents.Runner.run_sync", new_callable=MagicMock, return_value=mock_result)
    def test_run_sync_should_not_error_on_shutdown_when_tracing_disabled(mock_run_sync: MagicMock):
        result = Runner.run_sync(agent, "Hello sync")
        assert result.final_output == "Mocked acknowledgment"
  5. Ran the test using uv run pytest -vv tests/test_tracing_shutdown.py.

Outcome:
Interestingly, in the repository's test environment with this setup, the tests passed cleanly without the ValueError appearing during shutdown. The set_tracing_disabled(True) call successfully prevented the shutdown error in this context.

However:
This differs from the behavior in my original project where I initially encountered the bug. In that project, using the exact same set_tracing_disabled(True) call via pytest_configure, the ValueError still occurs consistently after tests complete.

Here are the environment details for my project where the error persists:

  • Python Version: 3.12.7
  • Operating System: Darwin 24.3.0 arm64 (macOS Sonoma)
  • openai-agents Version: 0.0.9
  • openai Version: 1.72.0
  • pytest Version: 8.3.5
  • poetry Version: 1.8.5
  • (Other notable dependencies: FastAPI, SQLAlchemy, etc.)

Conclusion:
While I couldn't produce a test case that fails with the ValueError inside the repo's current test setup (set_tracing_disabled works correctly there), the bug reliably occurs in my project's environment. The inconsistency suggests that the fix proposed in PR #503 (checking self._disabled within the shutdown function) is still the right approach to ensure set_tracing_disabled reliably skips the shutdown logic across different setups.

Let me know if there's any other way I can help test or provide more information. Thanks!
