
Fix POSIX port to respect configUSE_TIME_SLICING #1103


Merged · 2 commits merged into FreeRTOS:main from posix_port · Jul 26, 2024

Conversation

@aggarg aggarg commented Jul 21, 2024

Description

Fix POSIX port to respect configUSE_TIME_SLICING.
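
For context, the usual pattern (a minimal sketch under assumed names, not the exact patch) is for a port's tick handler to request a context switch only when xTaskIncrementTick() reports that one is needed. With configUSE_TIME_SLICING set to 0 the kernel stops requesting a switch just because another task of equal priority is ready, so a port that switches unconditionally on every tick defeats the setting:

static void prvTickHandler( void ) /* hypothetical handler name, for illustration only */
{
    /* xTaskIncrementTick() returns pdTRUE only when a context switch is
     * required. With time slicing disabled, an equal-priority ready task
     * is no longer sufficient reason, so the switch must be conditional. */
    if( xTaskIncrementTick() != pdFALSE )
    {
        vPortYieldFromTick(); /* assumed port-specific switch routine */
    }
}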

Test Steps

#include <stdint.h>

#include "FreeRTOS.h"
#include "task.h"

/* Forward declarations of the two test tasks. */
static void prvTask1Function( void * pvParams );
static void prvTask2Function( void * pvParams );

int main( void )
{
    BaseType_t xTaskCreationResult = pdFAIL;

    xTaskCreationResult = xTaskCreate( prvTask1Function,
                                       "Task1",
                                       configMINIMAL_STACK_SIZE,
                                       NULL,
                                       tskIDLE_PRIORITY,
                                       NULL );
    configASSERT( xTaskCreationResult == pdPASS );

    vTaskStartScheduler();

    for( ;; )
    {

    }

    return 0;
}
/*-----------------------------------------------------------*/

static void prvTask1Function( void * pvParams )
{
    volatile uint64_t i; /* volatile so the busy-wait loop is not optimised away. */
    BaseType_t xTaskCreationResult = pdFAIL;

    /* Silence warning about unused parameters. */
    ( void ) pvParams;

    xTaskCreationResult = xTaskCreate( prvTask2Function,
                                       "Task2",
                                       configMINIMAL_STACK_SIZE,
                                       NULL,
                                       tskIDLE_PRIORITY,
                                       NULL );
    configASSERT( xTaskCreationResult == pdPASS );

    for( ;; )
    {
        configPRINTF( "Task 1 running...\r\n" );

        for( i = 0; i < 100000000; i++ )
        {
            /* This loop is just a very crude delay implementation. */
        }
    }
}
/*-----------------------------------------------------------*/

static void prvTask2Function( void * pvParams )
{
    volatile uint64_t i; /* volatile so the busy-wait loop is not optimised away. */

    /* Silence warning about unused parameters. */
    ( void ) pvParams;

    for( ;; )
    {
        configPRINTF( "Task 2 running...\r\n" );

        for( i = 0; i < 100000000; i++ )
        {
            /* This loop is just a very crude delay implementation. */
        }
    }
}
/*-----------------------------------------------------------*/
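
The test above assumes a FreeRTOSConfig.h roughly like the fragment below (these values are illustrative, not taken from the PR): preemption enabled, time slicing disabled, and configPRINTF mapped to standard output.

#include <stdio.h>

#define configUSE_PREEMPTION      1
#define configUSE_TIME_SLICING    0    /* the setting under test */
#define configPRINTF( X )         printf( X )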

When configUSE_TIME_SLICING is set to 0, without this change, the above test outputs the following (the POSIX port still switches between the two equal-priority tasks on every tick, even though time slicing is disabled) -

Task 1 running...
Task 2 running...
Task 1 running...
Task 2 running...

After the change, the scheduler no longer switches away from Task 1 on each tick, and since Task 1 never blocks, the above test outputs the following -

Task 1 running...
Task 1 running...
Task 1 running...
Task 1 running...

Checklist:

  • I have tested my changes. No regression in existing tests.
  • [NA] I have modified and/or added unit-tests to cover the code changes in this Pull Request.

Related Issue

NA.

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

@aggarg aggarg requested a review from a team as a code owner July 21, 2024 19:36
Copy link
Member

@ActoryOu ActoryOu left a comment


Is the log in the PR description in the wrong order? The scheduler would always pick Task 1 if configUSE_TIME_SLICING is set to 0, right?

PR description:

When configUSE_TIME_SLICING is set to 0, without this change, the above test outputs the following -

Task 1 running...
Task 2 running...
Task 1 running...
Task 2 running...

After the change, the above test outputs the following -

Task 1 running...
Task 1 running...
Task 1 running...
Task 1 running...


@aggarg aggarg merged commit d844312 into FreeRTOS:main Jul 26, 2024
16 checks passed
@aggarg aggarg deleted the posix_port branch July 26, 2024 05:12