
linux kernel crash after a lengthy chat #3982

Open
maxvaneck opened this issue Oct 28, 2024 · 3 comments
Labels
bug (Something isn't working), unconfirmed

Comments

@maxvaneck

LocalAI version:
latest-aio-cpu docker container
2.22.1 at this time

Environment, CPU architecture, OS, and Version:
Linux Zana 6.1.106-Unraid #1 SMP PREEMPT_DYNAMIC Wed Aug 21 23:36:07 PDT 2024 x86_64 11th Gen Intel(R) Core(TM) i5-11400 @ 2.60GHz GenuineIntel GNU/Linux
64 GB of RAM

Describe the bug
After a while of roleplay chatting with the system prompt set to a character description, the Linux kernel crashes. Photos of the stack trace are attached.

To Reproduce
Load the WebUI, choose a roleplay model, enter a character description as the system prompt, and keep chatting until the kernel crashes.

Expected behavior
The kernel should not crash.

Logs
I would attach logs, but the crash itself prevents capturing them; photos of the kernel stack traces are attached instead.
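Since the panic takes the machine down before anything is written to disk, one common way to capture the full trace is netconsole, which streams kernel messages over UDP to a second machine. A minimal sketch, assuming a second box on the same LAN; all IP addresses and the interface name below are placeholders, and netcat option syntax varies between variants:

```shell
# On the crashing machine: stream kernel messages over UDP.
# Format: netconsole=<local-port>@<local-ip>/<interface>,<remote-port>@<remote-ip>/<remote-mac>
# (the trailing MAC may be left empty to broadcast on the local segment)
modprobe netconsole netconsole=6665@192.168.1.50/eth0,6666@192.168.1.10/

# Raise the console log level so the panic is emitted in full:
dmesg -n 8

# On the receiving machine: listen on the target port and save what arrives.
nc -u -l 6666 | tee kernel-crash.log
```

This is a configuration sketch rather than a complete recipe; on Unraid the module may need to be loaded from the `go` script so it survives reboots.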

Additional context
So far the problem is reproducible on Stheno and LewdPlay (with the occasional anti-horni bonk, it's a good roleplay model).
I have also reproduced it with big-AGI as the frontend, which seems to increase the likelihood of crashes, but the default WebUI exhibits the same behaviour.
(Attached: five photos of the kernel stack traces, taken 2024-10-27.)

@maxvaneck added the bug and unconfirmed labels on Oct 28, 2024
@levidehaan

Have you watched your temps? Sounds like overheating to me.

@maxvaneck
Author

That was my first thought as well, but overheating would cause a system reset, not a full-out kernel panic. Besides, the machine is fine through 8 hours of video transcoding, so I can reasonably rule out temperature.

@maxvaneck
Author

I also suspected memory errors, since I'm running on the CPU and llama.cpp is very memory-intensive, but the RAM passed a 12-hour memtest, so that's fine as well.
