
Adjust "vm_memory_high_watermark" code to be more forgiving/strict about bad data (such as a memory limit of zero) #149


Merged (1 commit) on Apr 11, 2017

Conversation

@tianon (Member) commented Apr 10, 2017

See also #148 and #147.

if [ -r /sys/fs/cgroup/memory/memory.limit_in_bytes ]; then
	memLimitB="$(< /sys/fs/cgroup/memory/memory.limit_in_bytes)"
	if [ "$memLimitB" = '0' ]; then
		# it appears that "WARNING: No kernel memory limit support" leads to "/sys/fs/cgroup/memory/memory.limit_in_bytes" containing the value "0"
@tianon (Member Author) commented on the diff:

OK, this is confirmed to be FALSE; time to keep digging (although this PR's changes are probably still sane, possibly minus the 0 special-casing):

$ docker info > /dev/null
WARNING: No memory limit support
WARNING: No swap limit support
WARNING: No kernel memory limit support
WARNING: No oom kill disable support
WARNING: No cpu cfs quota support
WARNING: No cpu cfs period support
$ docker run --rm bash ls -l /sys/fs/cgroup/memory
ls: /sys/fs/cgroup/memory: No such file or directory
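So when the kernel lacks memory cgroup support, the file simply does not exist, and when support is partial it can contain a useless value. A minimal sketch of reading the limit defensively (this is not the actual entrypoint code; the `read_mem_limit` helper and the 2^63 cutoff are illustrative assumptions):

```shell
# Hedged sketch: read a cgroup memory limit, tolerating a missing file
# (no kernel cgroup support) and absurd "unlimited" sentinel values such
# as 18446744073709551615 (2^64 - 1).
read_mem_limit() {
	limitFile="$1"
	if [ ! -r "$limitFile" ]; then
		# no cgroup memory support at all
		echo 'unlimited'
		return
	fi
	memLimitB="$(cat "$limitFile")"
	# treat anything >= 2^63 as "no limit"; awk does the comparison because
	# these values can overflow shell integer arithmetic
	if awk -v l="$memLimitB" 'BEGIN { exit !(l >= 9223372036854775807) }'; then
		echo 'unlimited'
	else
		echo "$memLimitB"
	fi
}
```

Usage would look like `read_mem_limit /sys/fs/cgroup/memory/memory.limit_in_bytes`, falling back to `/proc/meminfo`-based sizing when it reports `unlimited`.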

@AnatolyRugalev commented Apr 11, 2017

@tianon could you please add information about this var to the README? I just spent an hour trying to figure out what was wrong with my container.

UPD: ah, sorry. Didn't know the docs are in another repo.

@tianon (Member Author) commented Apr 11, 2017

Docs PR is up at docker-library/docs#879. 👍

@tianon (Member Author) commented Apr 11, 2017

Yesss, thanks to @jamesmallen, I've managed to hack something together to reproduce the issue:

$ echo 18446744073709551615 > lol-limit
$ docker run -it --rm -v "$PWD/lol-limit":/sys/fs/cgroup/memory/memory.limit_in_bytes rabbitmq:alpine
...

=INFO REPORT==== 11-Apr-2017::20:23:20 ===
Memory limit set to 0MB of 32148MB total.

...

…ge data (such as a reported memory limit of 18446744073709551615/MAX_UINT)
@tianon (Member Author) commented Apr 11, 2017

PR is updated! I've tested that several values calculate correctly now (it uses awk for all math processing, since we can't know what the scale of these numbers will be, but we do know they need to handle at least 64-bit unsigned integers).
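The kind of awk math described here can be sketched as follows (variable names are mine, not the entrypoint's). It also explains the slightly "off" number in the next comment: awk computes in double-precision floats, so 40% of 18446744073709551615 comes out as 7378697629483821056 rather than the exact 7378697629483820646:

```shell
# Sketch: compute 40% of a value too large for shell integer arithmetic
# (shell integers are signed 64-bit at best; the cgroup file can report
# 2^64 - 1). awk handles it, at double-precision float accuracy.
memLimitB=18446744073709551615
watermark="$(awk -v mem="$memLimitB" 'BEGIN { printf "%.0f\n", mem * 0.4 }')"
echo "$watermark"  # 7378697629483821056
```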

@tianon (Member Author) commented Apr 11, 2017

By bind-mounting an empty file over /proc/meminfo (with my lol-limit from above to get 18446744073709551615), I got { vm_memory_high_watermark, { absolute, 7378697629483821056 } } (which is 40% of 18446744073709551615 👍).

I mean, RabbitMQ then gets (rightfully) grumpy, but it's a valid test of our code at least. 👍

=WARNING REPORT==== 11-Apr-2017::20:36:59 ===
Unknown total memory size for your OS {unix,linux}. Assuming memory size is 1024MB.

=INFO REPORT==== 11-Apr-2017::20:36:59 ===
Memory limit set to 1024MB of 1024MB total.

@tianon (Member Author) commented Apr 11, 2017

(And by bad things, I mean throws some scary output, and then starts successfully, like a true trooper. 🤘)

@yosifkit yosifkit merged commit c7e8eb5 into docker-library:master Apr 11, 2017
@yosifkit yosifkit deleted the limit-zero branch April 11, 2017 20:58
tianon added a commit to infosiftr/stackbrew that referenced this pull request Apr 11, 2017
- `mysql`: 5.7.18, 5.6.36, 5.5.55, 8.0.1
- `rabbitmq`: adjust `vm_memory_high_watermark` handling to fix several edge cases (docker-library/rabbitmq#149)
- `rocket.chat`: revert to 0.54.2, skipping RCs for the official image (RocketChat/Docker.Official.Image#31)
- `tomcat`: update `8` and `latest` to be `8.5` (docker-library/tomcat#67)
@AnatolyRugalev commented

@tianon wow, these are huge numbers. Great work!

3 participants