[Bot] Update dependencies #2455


Merged
merged 1 commit into main on Jun 11, 2025

Conversation

cibuildwheel-bot[bot] (Contributor) commented on Jun 9, 2025

Update the versions of our dependencies.

PR generated by "Update dependencies" workflow.

cibuildwheel-bot[bot] added labels on Jun 9, 2025: CI: GraalPy (run the integration test suite with GraalPy included), CI: PyPy (run the integration test suite with PyPy included), and dependencies (pull requests that update a dependency file)
henryiii (Contributor) commented on Jun 9, 2025

@freakboy3742 Looks like this is a bit unstable. Any ideas on ways to make it more resilient? I'm not sure how common this is; it just seems worth checking that you don't see a quick way to make it better. :)

You can look at https://github.com/pypa/cibuildwheel/actions/runs/15527960535/job/43711065385?pr=2455, but here's the Copilot summary:


The failure occurred in the test_ios.py::test_ios_platforms test case, and the logs indicate the following issues:

  1. Simulator Device Error: The error message suggests that the iOS simulator failed to launch the testbed app due to the system shell crashing. The underlying error mentions that the host is down.

  2. Subprocess Error: The subprocess.CalledProcessError was raised because the command to build and test the wheels for iOS (cibuildwheel) returned a non-zero exit status. This suggests an issue during the build or test execution.
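
For context on point 2, here is a minimal sketch (not the project's actual helper; the name and signature are illustrative assumptions) of how a wrapper along the lines of utils.cibuildwheel_run would surface this exception: subprocess.run with check=True raises CalledProcessError whenever cibuildwheel exits non-zero, whether the root cause was the build step or the simulator.

import subprocess
import sys

def run_cibuildwheel(project_dir, env=None):
    # Illustrative stand-in for a test helper that shells out to cibuildwheel.
    # check=True converts any non-zero exit status into CalledProcessError,
    # which pytest then reports as the test failure seen in the log.
    return subprocess.run(
        [sys.executable, "-m", "cibuildwheel", str(project_dir)],
        env=env,
        check=True,
    )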

Analysis

The error seems to stem from either:

  • Simulator Environment Issues: The iOS simulator couldn't launch properly, which is critical for running the tests on an iOS platform (a quick availability check is sketched after this list).
  • Missing/Improper Setup of Dependencies: The command failed while invoking the simulator or executing test commands. An improperly configured PATH or missing tools might be causing this.
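
For the first point above, a quick availability check along these lines could confirm that the runner actually has usable simulator devices; this is a hedged sketch, the helper name is made up, and it assumes simctl's JSON list output:

import json
import subprocess

def available_ios_simulators():
    # Ask simctl for the currently available devices as JSON; an empty result
    # would explain the testbed app failing to launch on this runner.
    result = subprocess.run(
        ["xcrun", "simctl", "list", "--json", "devices", "available"],
        capture_output=True,
        text=True,
        check=True,
    )
    devices = json.loads(result.stdout)["devices"]
    return {runtime: entries for runtime, entries in devices.items() if entries}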

Suggested Fixes

  1. Ensure Simulator Stability:

    • Check if the iOS simulator is functional on the CI environment. The logs indicate that the simulator failed to terminate org.python.iOSTestbed. Ensure that:
      • The simulator is not overloaded. Avoid running multiple iOS tests in parallel. The test already uses @pytest.mark.serial to run serially, but verify no other processes are interfering.
      • The simulator and Xcode are correctly installed and configured for the CI environment. The minimum required version of Xcode is 13.0, as specified in the test.
  2. Test Command Validation:

    • Ensure the CIBW_TEST_COMMAND and CIBW_TEST_SOURCES are correctly configured. The test uses "python -m this && python -m unittest discover tests test_platform.py". Double-check the validity of this command and ensure all required Python modules and dependencies are installed.
  3. Modify the Test File:
    The current test relies on creating a temporary symlink to /usr/bin/true, which might fail on certain CI systems. Update the test to validate that this tool exists before using it:

    if shutil.which("true") is None:
        pytest.skip("`true` command not found on the system")
  4. Increase Debugging Information:

    • Add extra logs to the cibuildwheel_run utility in the test_ios_platforms function to capture more details about the failure. Update the verbosity of the build process:
      "CIBW_BUILD_VERBOSITY": "3",
  5. Simulator Timeout Handling:

    • Add a retry mechanism for the simulator launch to handle intermittent failures (a sketch of one possible wrapper follows after this list). If applicable, ensure the simulator device is reset before running the tests:
      xcrun simctl shutdown all
      xcrun simctl erase all
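
For the retry suggestion in item 5, here is a minimal sketch of one possible wrapper; run_with_simulator_retries and reset_simulators are hypothetical names rather than existing helpers in the test suite, and the attempt count and delay are arbitrary:

import subprocess
import time

def reset_simulators():
    # Shut down and erase all simulators so each attempt starts from a clean slate.
    subprocess.run(["xcrun", "simctl", "shutdown", "all"], check=False)
    subprocess.run(["xcrun", "simctl", "erase", "all"], check=False)

def run_with_simulator_retries(run_once, attempts=3, delay=30):
    # Retry an intermittently failing simulator-backed run, resetting the
    # simulators between attempts; re-raise the last error if every attempt fails.
    last_error = None
    for _ in range(attempts):
        try:
            return run_once()
        except subprocess.CalledProcessError as exc:
            last_error = exc
            reset_simulators()
            time.sleep(delay)
    raise last_error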

Updated Code Example

Update the test function test_ios_platforms as follows:

@pytest.mark.serial
@pytest.mark.parametrize(
    "build_config",
    [
        {"CIBW_PLATFORM": "ios"},
        {"CIBW_PLATFORM": "ios", "CIBW_BUILD_FRONTEND": "build"},
    ],
)
def test_ios_platforms(tmp_path, build_config, monkeypatch, capfd):
    skip_if_ios_testing_not_supported()

    # Ensure the `true` command is available
    if shutil.which("true") is None:
        pytest.skip("`true` command not found on the system")

    tools_dir = tmp_path / "bin"
    tools_dir.mkdir()
    tools_dir.joinpath("does-exist").symlink_to(shutil.which("true"))

    monkeypatch.setenv("PATH", str(tools_dir), prepend=os.pathsep)

    project_dir = tmp_path / "project"
    setup_py_add = "import subprocess\nsubprocess.run('does-exist', check=True)\n"
    basic_project = test_projects.new_c_project(setup_py_add=setup_py_add)
    basic_project.files.update(basic_project_files)
    basic_project.generate(project_dir)

    actual_wheels = utils.cibuildwheel_run(
        project_dir,
        add_env={
            "CIBW_BEFORE_BUILD": "does-exist",
            "CIBW_BUILD": "cp313-*",
            "CIBW_XBUILD_TOOLS": "does-exist",
            "CIBW_TEST_SOURCES": "tests",
            "CIBW_TEST_COMMAND": "python -m this && python -m unittest discover tests test_platform.py",
            "CIBW_BUILD_VERBOSITY": "3",  # Increased verbosity
            **build_config,
        },
    )

    expected_wheels = utils.expected_wheels(
        "spam", "0.1.0", platform="ios", python_abi_tags=["cp313-cp313"]
    )
    assert set(actual_wheels) == set(expected_wheels)

    captured = capfd.readouterr()
    assert "'does-exist' will be included in the cross-build environment" in captured.out
    assert "Zen of Python" in captured.out

freakboy3742 (Contributor) commented:

> @freakboy3742 Looks like this is a bit unstable. Any ideas on ways to make it more resilient? I'm not sure how common this is; it just seems worth checking that you don't see a quick way to make it better. :)

Unfortunately, I don't have any ideas on how to address the resiliency issue beyond the tweaks we've already made (single process, etc.).

I see similar failures from time to time; it appears to be a "weather" thing: you'll get three failures in a row, and then everything starts working again. I can only presume there are certain machines in the CI pool that are either (a) under significant load or (b) old/near-EOL with poor performance; once the job gets allocated to a different machine, the problem resolves.

The good news is that, in my experience, "bad weather" like this is fairly infrequent.

freakboy3742 (Contributor) commented:

One theory: it might be related to the rollout of the updated macOS-13 CI image... I've just noticed that an updated image became available around the same time as these CI builds... there might be a "first-time cache warm" effect going on here. It's very difficult to confirm that this is actually the cause, though.

henryiii force-pushed the update-dependencies-pr branch from 9f898f5 to dd897d9 on June 10, 2025 05:55
henryiii force-pushed the update-dependencies-pr branch from dd897d9 to 3be62fb on June 10, 2025 21:58
henryiii merged commit 1b9a56e into main on Jun 11, 2025
28 checks passed
henryiii deleted the update-dependencies-pr branch on June 11, 2025 01:06