Run Object Tracking mode on server / Run Object Tracking mode on prefilmed video instead of camera #340
Comments
For running on a video, it should be easy enough to write a Python script to load one of the detection models and then feed images through. Sending video to a computer and controls back to the robot is already supported; what's missing is running the network on the server to predict the controls. If you just want to test YOLOv5 with videos, you can directly use the provided script (check the section Inference with detect.py). It works with images, videos, etc.
Did you fix it?
Hello @khoatranrb @RoBoFAU, I'm glad to hear that the issue has been resolved! For anyone who encounters the same problem, please note that it has been addressed and fixed in Issue #351 (Python controller cannot connect). The solution involves updating the Python code to use port 8081 instead of 19400, ensuring it matches the port configuration in the OpenBot Robot App. You can find more details and the fixed code in Issue #351.
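The port fix described above could look roughly like this. This is an illustrative sketch, not the actual Issue #351 patch: the robot IP and the JSON command format are assumptions for demonstration.

```python
# Sketch of the port fix from Issue #351: the Python controller must connect
# on port 8081 (the port the OpenBot Robot App listens on), not 19400.
# The robot IP and the command payload below are hypothetical placeholders.
import json
import socket


def send_drive_command(robot_ip: str, left: float, right: float,
                       port: int = 8081) -> None:
    """Open a TCP connection to the robot app and send one drive command."""
    cmd = json.dumps({"driveCmd": {"l": left, "r": right}}) + "\n"
    with socket.create_connection((robot_ip, port), timeout=5) as sock:
        sock.sendall(cmd.encode("utf-8"))


if __name__ == "__main__":
    send_drive_command("192.168.1.42", 0.5, 0.5)  # hypothetical phone IP
```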
Hello everybody,
I have two questions about the Object Tracking mode:
My first question:
Is it possible to evaluate the camera recordings on a computer/server instead of on the smartphone?
That is, the input for the neural network comes from the smartphone, the object detection runs externally on the server, and the output of the evaluation is sent back to the smartphone. I have read that the Autopilot and FreeRoam modes work with a computer; however, in those cases the car is controlled by the computer. For object detection, the car should drive itself, and only the object detection should run externally on the server.
My second question:
Is it possible to use a pre-recorded video instead of the live camera to test the object detection via the app? Of course, it then makes no sense to let the car drive; the point here is just to demonstrate the performance of the pre-trained network.