fixed apt get error and added schedule for the github CI workflow #594


Merged
merged 2 commits into from Dec 8, 2021

Conversation

mr-cheffy

@mr-cheffy mr-cheffy commented Dec 8, 2021

This adds a daily scheduled run, and the CI workflow job (x86_64, clang, debug, 1.57.0, src, rustup, common, none) will no longer fail.

I made this change because I saw 675846f failing.
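For reference, a daily schedule in a GitHub Actions workflow is added with a `schedule` trigger. This is a generic sketch, not the PR's exact diff; the cron expression and the other triggers shown are assumptions:

```yaml
on:
  push:
  pull_request:
  schedule:
    # Hypothetical time: run once a day at 00:00 UTC.
    # The PR's actual cron spec may differ.
    - cron: '0 0 * * *'
```

GitHub evaluates `cron` in UTC, and scheduled runs only fire on the default branch.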

@alex alex merged commit a0f5988 into Rust-for-Linux:rust Dec 8, 2021
@bjorn3
Member

bjorn3 commented Dec 9, 2021

These commits were not signed off. Is that a problem for CI changes too or only for source code?

@ojeda
Member

ojeda commented Dec 9, 2021

The CI changes will not go to upstream, so it should be fine. However, it is best to simply require it for everything.

@mr-cheffy
Author

So... my changes are not going upstream, or what?

@bjorn3
Member

bjorn3 commented Dec 9, 2021

The CI configuration as a whole remains in this fork forever I think.

@mr-cheffy
Author

Ah, OK.

ojeda pushed a commit that referenced this pull request Dec 16, 2024
Deletion of the last rule referencing a given idletimer may happen at
the same time as a read of its file in sysfs:

| ======================================================
| WARNING: possible circular locking dependency detected
| 6.12.0-rc7-01692-g5e9a28f41134-dirty #594 Not tainted
| ------------------------------------------------------
| iptables/3303 is trying to acquire lock:
| ffff8881057e04b8 (kn->active#48){++++}-{0:0}, at: __kernfs_remove+0x20
|
| but task is already holding lock:
| ffffffffa0249068 (list_mutex){+.+.}-{3:3}, at: idletimer_tg_destroy_v]
|
| which lock already depends on the new lock.

A simple reproducer is:

| #!/bin/bash
|
| while true; do
|         iptables -A INPUT -i foo -j IDLETIMER --timeout 10 --label "testme"
|         iptables -D INPUT -i foo -j IDLETIMER --timeout 10 --label "testme"
| done &
| while true; do
|         cat /sys/class/xt_idletimer/timers/testme >/dev/null
| done

Avoid this by releasing list_mutex right after deleting the element from
the list, then continuing with the teardown.

Fixes: 0902b46 ("netfilter: xtables: idletimer target implementation")
Signed-off-by: Phil Sutter <[email protected]>
Signed-off-by: Pablo Neira Ayuso <[email protected]>
Darksonn pushed a commit to Darksonn/linux that referenced this pull request Jan 17, 2025
With CONFIG_PROVE_LOCKING, when creating a set of type bitmap:ip, adding
it to a set of type list:set and populating it from iptables SET target
triggers a kernel warning:

| WARNING: possible recursive locking detected
| 6.12.0-rc7-01692-g5e9a28f41134-dirty Rust-for-Linux#594 Not tainted
| --------------------------------------------
| ping/4018 is trying to acquire lock:
| ffff8881094a6848 (&set->lock){+.-.}-{2:2}, at: ip_set_add+0x28c/0x360 [ip_set]
|
| but task is already holding lock:
| ffff88811034c048 (&set->lock){+.-.}-{2:2}, at: ip_set_add+0x28c/0x360 [ip_set]

This is a false alarm: ipset does not allow nested list:set type, so the
loop in list_set_kadd() can never encounter the outer set itself. No
other set type supports embedded sets, so this is the only case to
consider.

To avoid the false report, create a distinct lock class for list:set
type ipset locks.

Fixes: f830837 ("netfilter: ipset: list:set set type support")
Signed-off-by: Phil Sutter <[email protected]>
Signed-off-by: Pablo Neira Ayuso <[email protected]>