socket timeout error #17

Open
anoopswsib opened this issue Sep 10, 2019 · 11 comments

Comments

@anoopswsib

Hi there, we are getting the following error when using the ClamAV client. Any suggestions on what we can do that would help us?

2019-09-10 15:29:43.006 WARN 1 --- [tp1071097621-16] o.eclipse.jetty.servlet.ServletHandler : /scan

java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method) ~[na:1.8.0_201]
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116) ~[na:1.8.0_201]
at java.net.SocketInputStream.read(SocketInputStream.java:171) ~[na:1.8.0_201]
at java.net.SocketInputStream.read(SocketInputStream.java:141) ~[na:1.8.0_201]
at java.net.SocketInputStream.read(SocketInputStream.java:127) ~[na:1.8.0_201]
at fi.solita.clamav.ClamAVClient.readAll(ClamAVClient.java:158) ~[clamav-client-1.0.1.jar!/:1.0.2]
at fi.solita.clamav.ClamAVClient.scan(ClamAVClient.java:111) ~[clamav-client-1.0.1.jar!/:1.0.2]
at fi.solita.clamav.ClamAVProxy.handleFileUpload(ClamAVProxy.java:42) ~[classes!/:1.0.2]
at sun.reflect.GeneratedMethodAccessor16.invoke(Unknown Source) ~[na:na]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_201]
at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_201]
at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:221) ~[spring-web-4.3.2.RELEASE.jar!/:4.3.2.RELEASE]
at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:136) ~[spring-web-4.3.2.RELEASE.jar!/:4.3.2.RELEASE]

@drogin
Contributor

drogin commented Sep 11, 2019

It could be that ClamAV is not configured correctly.
If you have netcat installed on your Linux box, run:
nc -zv [machine name] [port number]

where the machine name and port number are the same as what the clamav-java library is configured to use in your code. Make sure to run this from the same machine that is running your Java code. It should report something like:
"Connection to ... ... port .. succeeded!"
If it doesn't, then you need to configure ClamAV, the network, and firewalls correctly.

ClamAV needs to be configured to expose the daemon on a port you choose. Also, note that the IP/hostname you bind to is strictly that: for example, if your machine name is "SuperComputer" and you put "localhost" in your ClamAV config, only connections to localhost will work, not to "SuperComputer".
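
As a rough sketch of what that looks like in clamd.conf (the file path and exact defaults depend on your distribution or image, so treat this as illustrative only):

# clamd.conf -- expose the daemon over TCP instead of only the local unix socket
TCPSocket 3310
# Bind address: 127.0.0.1 accepts local connections only;
# use 0.0.0.0 if other machines/pods must reach the daemon
TCPAddr 0.0.0.0

After changing the config, restart clamd and re-run the netcat check above.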

If the netcat above works on the machine that runs ClamAV, then you must make sure the network and firewalls allow the machine that runs Java to talk to the machine that runs ClamAV, assuming they aren't the same machine, of course.
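
The same connectivity check can also be done from Java before attempting a scan. A minimal sketch using the library's ping() method; the host name, port and 30-second timeout here are placeholder values, not taken from this issue:

import fi.solita.clamav.ClamAVClient;

public class ClamdPingCheck {
  public static void main(String[] args) throws Exception {
    // host, port, socket timeout in ms -- use the same values your service is configured with
    ClamAVClient client = new ClamAVClient("clamav-daemon", 3310, 30000);
    // ping() sends clamd's PING command and returns true when the daemon answers PONG
    System.out.println(client.ping() ? "clamd reachable" : "clamd did not answer PING");
  }
}

If ping() fails or times out here, the problem is connectivity or configuration rather than scan size.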

@anoopswsib
Author

Thanks for responding. I am attaching a picture of how we are using this.
(attached: "arch" architecture diagram)

Basically, we have three containers running in different pods: first is our upload service, then the ClamAV REST service as per your sample, and last is the ClamAV scanner, which runs in its own container inside a Kubernetes pod.

We suspect the error is occurring between the ClamAV service and the daemon... We increased the timeout to 7000 ms and it's a little better.

@anoopswsib
Author

We have a ClamAV REST service that uses the Java library in a separate pod (Kubernetes), the ClamAV daemon in its own separate pod, and the file upload service in another pod. Essentially, a user request hits our file upload service, which sends the document to the ClamAV REST service, which in turn contacts the ClamAV daemon. The daemon does the scan and replies to the ClamAV REST service, which responds back to the upload service. So that is our chain of services. Initially, in the example, the timeout was set to 500 ms; we increased it to 3000 ms, but for a big document, for example 10 MB, it was still failing, so we have now increased the timeout to 7000 ms and it's better. Is that the right approach?
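
For reference, the timeout being discussed here is the third constructor argument of ClamAVClient. A minimal sketch of the REST-service side under that assumption, using the 7000 ms figure from this thread purely as an example (host and port are placeholders):

import fi.solita.clamav.ClamAVClient;
import java.io.InputStream;

public class ScanService {
  // host, port, socket read timeout in milliseconds
  private final ClamAVClient client = new ClamAVClient("clamav-daemon", 3310, 7000);

  public boolean isClean(InputStream document) throws Exception {
    // scan() streams the file to clamd and returns the daemon's raw reply
    byte[] reply = client.scan(document);
    // isCleanReply() is true when clamd reports the stream as clean
    return ClamAVClient.isCleanReply(reply);
  }
}

Whether 7000 ms is "right" mostly depends on your file sizes and on how much CPU the daemon gets; the value just needs to cover your worst-case scan time.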

@anoopswsib
Author

We only get this error for big documents, and only the first time we try to scan them; if we scan again it works. We think it's timeout-related.

@drogin
Contributor

drogin commented Sep 11, 2019

Yes, if it works for most files except large ones, it seems to be timeout-related, and the timeout appears to be somewhere outside of the clamav-java library. I can't help you much further than that, but increasing the timeout is a good idea if you want to handle larger files.
To debug, or to try to identify where the problem is, whether it's network-related or the ClamAV daemon or server lacks resources, you'd need to look at your Kubernetes/OpenShift/what-have-you setup for the network and pods.

@anoopswsib
Author

Thanks for responding, Henrik. It appears the timeout is happening between the clamav-java library and the ClamAV daemon. We also noticed it happens for the initial large-file transfer only; once it fails, if you try again it works. So we increased the timeout to 15 seconds, then tested 1 MB, 2 MB, and 20 MB files, and they all succeed on the first try. We noticed that large files initially seem to take longer than 7 seconds...

@anoopswsib
Author

These are the limits we set for the clamd pod:
Limits:
  cpu: 500m
  memory: 4000Mi
Requests:
  cpu: 50m
  memory: 300Mi
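
For reference, a sketch of how those numbers would be expressed in the clamd container spec of a Deployment (container and image names are placeholders, not taken from this issue):

# fragment of the pod template in a Deployment
containers:
  - name: clamd
    image: clamav/clamav   # placeholder image name
    resources:
      requests:
        cpu: 50m
        memory: 300Mi
      limits:
        cpu: 500m
        memory: 4000Mi

Note that a 500m CPU limit throttles clamd to half a core, which is worth keeping in mind when scan times for large files approach the socket timeout.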

@mraslam

mraslam commented May 19, 2021

@anoopswsib @drogin I am facing the exact same issue; could you please tell me what changes you made to get it to work? I am seeing intermittent read timeout issues, and I am running ClamAV as a pod on EKS. I have another file-scan service which calls ClamAV.

@abhishekgupta-ontic

We are also facing this error. We increased the socket timeout substantially to handle big files (like 80 MB or 150 MB), but the error still occurs intermittently, even for smaller files of around 6 MB.

@geemorin

We had a similar intermittent problem on EKS, and it turned out that our Service's label selector matched all 3 containers in the Helm chart, so TCP requests (which are routed by port) were hitting any of the 3 containers; 1 out of 3 requests was failing... At the beginning we had 2 containers, so one out of 2 was failing. That's how we realized it was related.
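
For anyone hitting the same thing, a hedged sketch of a Service scoped to only the clamd pods; the labels, names, and port here are illustrative, not taken from any particular chart:

apiVersion: v1
kind: Service
metadata:
  name: clamd
spec:
  selector:
    app: clamd          # must match labels that only the clamd pods carry
  ports:
    - protocol: TCP
      port: 3310
      targetPort: 3310

The key point is that the selector must not also match the REST or upload pods, otherwise clamd traffic gets round-robined to containers that don't speak the clamd protocol.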

@geemorin

We also get a read timeout with big files, when reading the ClamAV daemon's response. I'll look into it and keep you up to date if I ever find something interesting.
