
Bug: Memory leak #5

Open
abrichr opened this issue Apr 12, 2023 · 8 comments
Labels
bug Something isn't working

abrichr commented Apr 12, 2023

After running python puterbot/record.py for a while, the system runs out of memory.

@abrichr abrichr added the bug Something isn't working label Apr 12, 2023
@abrichr abrichr changed the title Fix memory leak Bug: Memory leak May 3, 2023
abrichr commented Jun 13, 2023

First step: identify sources of memory leaks (there may be more than one!)
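
A minimal way to start is to enable allocation tracing near the top of puterbot/record.py (exact placement in the real entry point is an assumption):

```python
# Sketch only: enable allocation tracing early in puterbot/record.py.
import tracemalloc

# Keep several frames per allocation so later snapshots show where
# large blocks were created, not just the innermost frame.
tracemalloc.start(25)
```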

angelala3252 commented Jun 15, 2023

After using tracemalloc and pympler to profile a few longer recordings with a lot of ActionEvents, this is the output I get:

From tracemalloc, the top 3 largest remaining memory allocations after Ctrl + C:

C:\Users\Angel\Desktop\OpenAdapt.venv\lib\site-packages\mss\windows.py:287: size=9041 MiB, count=3430, average=2699 KiB
C:\Users\Angel\AppData\Local\Programs\Python\Python310\lib\multiprocessing\reduction.py:51: size=9122 KiB, count=140, average=65.2 KiB
<frozen importlib._bootstrap_external>:672: size=3018 KiB, count=6951, average=445 B

It's clear that the first one is the largest by a landslide, and following the traceback I found that it came from the function that returned the screenshots, _grab_impl in windows.py.
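
A report like this can be produced along these lines (a generic tracemalloc sketch, not necessarily the exact invocation used here):

```python
import tracemalloc

tracemalloc.start(25)  # keep up to 25 frames per allocation

# ... run the recording; on Ctrl + C, take a snapshot and inspect it.
snapshot = tracemalloc.take_snapshot()

# Largest remaining allocations grouped by source line (as reported above).
for stat in snapshot.statistics("lineno")[:3]:
    print(stat)

# Group by full traceback to see where the biggest allocation originates,
# e.g. the call chain leading into mss's _grab_impl.
for stat in snapshot.statistics("traceback")[:1]:
    print("\n".join(stat.traceback.format()))
```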

Using pympler, I printed out the newly created objects each time a key was pressed:

                           types |   # objects |   total size
================================ | =========== | ============
                       bytearray |         256 |      1.98 GB
                            dict |         729 |    169.21 KB
                  __main__.Event |         622 |     38.88 KB
                             int |        1241 |     33.95 KB
       mss.screenshot.ScreenShot |         256 |     18.00 KB
                           float |         626 |     14.67 KB
                  mss.models.Pos |         256 |     14.00 KB
                 mss.models.Size |         256 |     14.00 KB
               collections.deque |           0 |      4.12 KB
                            code |           0 |    386     B
                      memoryview |           2 |    368     B
              _winapi.Overlapped |           1 |    168     B
                            list |           2 |    144     B
         ctypes.c_wchar_Array_22 |           1 |    120     B
  pynput.keyboard._win32.KeyCode |           1 |     48     B

The bytearray was always the largest, which also points towards the screenshots being the issue.
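
A per-keypress diff like the one above can be produced with pympler's SummaryTracker; a rough sketch (the key-press hook below is illustrative, not the actual recorder code):

```python
from pympler import tracker
from pynput import keyboard

tr = tracker.SummaryTracker()

def on_press(key):
    # Print objects created (and their total sizes) since the last call,
    # yielding a table like the one above on every key press.
    tr.print_diff()

# Illustrative wiring only; the real recorder hooks pynput differently.
with keyboard.Listener(on_press=on_press) as listener:
    listener.join()
```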

Since screenshots are put into the event queue in a loop, the most likely explanation is that under heavy user input they are enqueued faster than they are dequeued, so unconsumed screenshots pile up in the queue.
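
One way to test (and mitigate) this hypothesis is to bound the queue so the producer gets backpressure instead of accumulating screenshots without limit; a hypothetical sketch, not the actual OpenAdapt queue code:

```python
import multiprocessing
import queue

# With maxsize set, put() blocks (or times out) once the queue is full,
# so screenshots cannot pile up faster than the reader drains them.
event_q = multiprocessing.Queue(maxsize=32)

def enqueue_screenshot(screenshot):
    try:
        event_q.put(screenshot, timeout=1)
    except queue.Full:
        # Drop or downsample rather than grow memory without bound.
        pass
```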

@KrishPatel13 KrishPatel13 self-assigned this Jun 16, 2023
@KrishPatel13 KrishPatel13 removed their assignment Jun 26, 2023
abrichr commented Jul 27, 2023

@KrishPatel13 @angelala3252 can you please confirm whether this is fixed?

abrichr commented Feb 29, 2024

Latest performance data:

2024-02-29 00:06:39.112 | INFO     | __main__:log_memory_usage:73 - source='File "/Users/abrichr/Library/Caches/pypoetry/virtualenvs/openadapt-VBXg4jpm-py3.10/lib/python3.10/site-packages/mss/darwin.py", line 238'
2024-02-29 00:06:39.113 | INFO     | __main__:log_memory_usage:74 -     new_KiB=4422056.185546875 total_KiB=4462521.392578125 new_blocks=177 total_blocks=179
2024-02-29 00:06:39.114 | INFO     | __main__:log_memory_usage:73 - source='File "/usr/local/Cellar/[email protected]/3.10.13_2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/reduction.py", line 51'
2024-02-29 00:06:39.114 | INFO     | __main__:log_memory_usage:74 -     new_KiB=26108.017578125 total_KiB=26108.4169921875 new_blocks=102 total_blocks=110
2024-02-29 00:06:39.114 | INFO     | __main__:log_memory_usage:73 - source='File "/Users/abrichr/Library/Caches/pypoetry/virtualenvs/openadapt-VBXg4jpm-py3.10/lib/python3.10/site-packages/mss/darwin.py", line 226'
2024-02-29 00:06:39.114 | INFO     | __main__:log_memory_usage:74 -     new_KiB=-23322.5556640625 total_KiB=0.0 new_blocks=-2 total_blocks=0
2024-02-29 00:06:39.187 | INFO     | __main__:wrapper_logging:129 -  <- Leave: process_events(None)
2024-02-29 00:06:48.405 | INFO     | __main__:log_memory_usage:77 - trace_str=
                      types |   # objects |   total size
=========================== | =========== | ============
                  bytearray |         176 |      4.18 GB
                _io.BytesIO |           4 |     25.49 MB
                      tuple |      169954 |     11.61 MB
                       dict |       17237 |      5.41 MB
                       list |       20874 |      3.12 MB
                        str |       34363 |      2.89 MB
                        int |       30835 |    843.79 KB
  tracemalloc.StatisticDiff |        2129 |    149.70 KB
      tracemalloc.Traceback |        2129 |     99.80 KB
          collections.deque |         157 |     96.70 KB
                       type |          53 |     67.10 KB
                       code |         192 |     49.04 KB
                        set |         170 |     39.11 KB
      weakref.ReferenceType |         392 |     27.56 KB
                  frozenset |          87 |     24.35 KB

Related: #570
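
The source=/new_KiB=/total_KiB= lines above come from a log_memory_usage helper; a rough reconstruction of how such a diff can be logged with tracemalloc's compare_to (the field names mirror the log; everything else, including the use of loguru, which the log format suggests, is an assumption):

```python
import tracemalloc
from loguru import logger  # assumed from the log format above

tracemalloc.start()
baseline = tracemalloc.take_snapshot()

def log_memory_usage():
    # Hypothetical reconstruction of the helper behind the log above.
    snapshot = tracemalloc.take_snapshot()
    for stat in snapshot.compare_to(baseline, "lineno")[:3]:
        source = stat.traceback.format()[0].strip()  # File "...", line N
        logger.info(f"source={source!r}")
        logger.info(
            f"    new_KiB={stat.size_diff / 1024} total_KiB={stat.size / 1024} "
            f"new_blocks={stat.count_diff} total_blocks={stat.count}"
        )
```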

abrichr commented Jun 21, 2024

Fixed in #628.

abrichr commented Jun 21, 2024

There is, however, another memory leak involving zombie Python processes that consume significant resources. A new issue should be created to describe it in more detail before this one is closed.
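
For that follow-up issue, a quick way to see what lingering child processes are holding is something like the following (a sketch using psutil; whether psutil is already a dependency is an assumption):

```python
import os
import psutil  # assumed available; not necessarily an existing dependency

def report_lingering_children():
    # After recording stops, list child processes that are still alive
    # or zombied, along with their resident memory where readable.
    parent = psutil.Process(os.getpid())
    for child in parent.children(recursive=True):
        try:
            status = child.status()
            rss_kib = child.memory_info().rss // 1024
        except (psutil.NoSuchProcess, psutil.ZombieProcess):
            status, rss_kib = "gone/zombie", 0
        print(child.pid, status, f"{rss_kib} KiB")
```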
