
ENOENT: no such file or directory, open '/sys/fs/cgroup/memory/memory.limit_in_bytes / aarch64 #693

Closed
jektvik opened this issue May 24, 2020 · 3 comments · Fixed by #694
Labels
bug Something isn't working.

Comments

jektvik commented May 24, 2020

Start with giving us feedback
Done

Now describe the bug
src/utils.js -> readPromised('/sys/fs/cgroup/memory/memory.limit_in_bytes'),
is causing:
Main failed: ENOENT: no such file or directory, open '/sys/fs/cgroup/memory/memory.limit_in_bytes
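
(For reference, a minimal sketch of the failing call and a guarded variant, assuming `readPromised` is just a thin wrapper over `fs.promises.readFile`; the helper below is hypothetical, not the actual src/utils.js code:)

```js
// Hypothetical sketch, not the actual src/utils.js implementation.
const fs = require('fs');

const CGROUP_V1_LIMIT = '/sys/fs/cgroup/memory/memory.limit_in_bytes';

// The unguarded read rejects with ENOENT when the memory cgroup
// controller is not mounted (as on this aarch64 / Ubuntu setup).
const readPromised = (filePath) => fs.promises.readFile(filePath, 'utf8');

// A guarded variant that resolves to null instead of throwing.
const readIfExists = async (filePath) => {
    try {
        return await readPromised(filePath);
    } catch (err) {
        if (err.code === 'ENOENT') return null;
        throw err;
    }
};
```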

To Reproduce
Run apify inside Docker on an aarch64 computer (like a Raspberry Pi).

Expected behavior
It would be nice if it didn't error :)

System information:

  • OS: Ubuntu 18.04
  • Node.js version - 12.6.3
  • Apify SDK version - 0.20.4

Additional context
The directory /sys/fs/cgroup/memory/ doesn't seem to exist on this architecture in general. Also, it wouldn't be my intuition to depend on OS filesystem calls to run the app, but I mostly do web so maybe I know too little about it.
Also, there's something jinxed with respect to error handling: the stack trace disappears there, so I had to step into every await call before finding where the problem was.

@jektvik jektvik added the bug Something isn't working. label May 24, 2020
@jektvik jektvik changed the title ENOENT: no such file or directory, open '/sys/fs/cgroup/memory/memory.limit_in_bytes ENOENT: no such file or directory, open '/sys/fs/cgroup/memory/memory.limit_in_bytes / aarch64 May 24, 2020
mnmkng (Member) commented May 24, 2020

Thanks for pointing this out. We originally built it for our own Docker environment and nobody complained until now. I guess it's time to make it more resilient.

mnmkng (Member) commented May 24, 2020

@jekusz I'd like to hotfix this, but it would probably lead to invalid memory readings for you, unless you run your containers unlimited in Docker. (Basically, we'd fall back to system readings when the memory cgroups are unavailable.)

Would this work for you? https://archlinuxarm.org/forum/viewtopic.php?f=15&t=12086#p57035
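
(A rough sketch of that fallback idea, assuming Node's built-in `fs` and `os` modules; this is not the actual fix that landed in #694:)

```js
const fs = require('fs');
const os = require('os');

const CGROUP_V1_LIMIT = '/sys/fs/cgroup/memory/memory.limit_in_bytes';

// Hypothetical fallback: use the host's total memory when the memory
// cgroup controller is unavailable. As noted above, this can report
// more memory than a limited container is actually allowed to use.
const getMemoryLimitBytes = async () => {
    try {
        const raw = await fs.promises.readFile(CGROUP_V1_LIMIT, 'utf8');
        return parseInt(raw, 10);
    } catch (err) {
        if (err.code !== 'ENOENT') throw err;
        return os.totalmem();
    }
};
```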

jektvik (Author) commented May 24, 2020

@mnmkng This works 100%, appreciate the help a lot.
To be more specific, the cmdline.txt file doesn't exist on that particular distro version, so here's the exact walkthrough on how to do this:
https://askubuntu.com/questions/1189480/raspberry-pi-4-ubuntu-19-10-cannot-enable-cgroup-memory-at-boostrap?newreg=8f87ba34547f468aae36251a576c7849
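
(For anyone hitting the same wall: as far as I can tell from the linked threads, the gist is to append the cgroup memory flags to the kernel command line and reboot. The exact file, `/boot/firmware/cmdline.txt` vs. `nobtcmd.txt`, varies by image, so treat the path as an assumption:)

```
# Append to the single existing line in the kernel cmdline file, then reboot:
cgroup_enable=memory cgroup_memory=1
```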

cac03 added a commit to cac03/testcontainers-java that referenced this issue Aug 16, 2021
The `testMemoryLimitModified` test was failing in a cgroup2 environment because of the missing `/sys/fs/cgroup/memory/memory.limit_in_bytes` file.

In a cgroup2 environment, a different file should be checked instead: `/sys/fs/cgroup/memory.max`.

This commit checks both files:

1. `/sys/fs/cgroup/memory/memory.limit_in_bytes` from cgroup v1
2. `/sys/fs/cgroup/memory.max` from cgroup v2

`cat`-ing one of the two files succeeds in both environments.

Similar issues:

1. oracle/docker-images#1939
2. apify/crawlee#693
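
(The same dual-path check, sketched in Node.js terms for this issue's context; the file paths come from the commit message above, while the helper name is made up:)

```js
const fs = require('fs');

// cgroup v1 and v2 expose the memory limit under different paths;
// typically only one of them exists in a given environment.
const MEMORY_LIMIT_PATHS = [
    '/sys/fs/cgroup/memory/memory.limit_in_bytes', // cgroup v1
    '/sys/fs/cgroup/memory.max',                   // cgroup v2 ('max' means unlimited)
];

const readMemoryLimit = async () => {
    for (const filePath of MEMORY_LIMIT_PATHS) {
        try {
            return (await fs.promises.readFile(filePath, 'utf8')).trim();
        } catch (err) {
            if (err.code !== 'ENOENT') throw err;
        }
    }
    return null; // neither file present, e.g. memory cgroup not enabled at boot
};
```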
bsideup pushed a commit to testcontainers/testcontainers-java that referenced this issue Sep 14, 2021
…#4375)

* Fix `CmdModifierTest#testMemoryLimitModified` in a cgroup2 environment.

The `testMemoryLimitModified` test was failing in a cgroup2 environment because of the missing `/sys/fs/cgroup/memory/memory.limit_in_bytes` file.

In a cgroup2 environment, a different file should be checked instead: `/sys/fs/cgroup/memory.max`.

This commit introduces changes to `cat` both files:

1. `/sys/fs/cgroup/memory/memory.limit_in_bytes` from cgroup v1
2. `/sys/fs/cgroup/memory.max` from cgroup v2

Similar issues:

1. oracle/docker-images#1939
2. apify/crawlee#693

* Use `docker info` to distinguish cgroup version

* Use `DockerClientFactory.instance().client()` instead of `memoryLimitedRedis.getDockerClient()`