-
Hi @mprib, I'm an adjunct prof in Calgary using your software for projects collecting walking and reaching data. So far I've been really impressed by your implementation -- it directly addressed a few issues we had trying to extend markerless mocap into functional, easy-to-use applications via Jon's Freemocap project. I am new to your Tracker class and its design as an interface to other, more powerful backends, and I had a few questions (happy to Zoom and take notes if that's just faster for you).

Problem: foot swaps

Because certain individuals wearing particular clothes seem to cause this problem more than others, I am inclined to think that it is a problem with Mediapipe itself. As far as I can tell, though, the problem does not disappear even if I try to make the feet very distinct via foot-specific colours or patterns. A fourth camera also did not rectify the problem (perhaps more cameras would, but at some point the camera count becomes a burden for many practical applications). Perhaps I am making a different mistake -- happy to hear your thoughts!

Potential solutions:
1- A different backend
2- A Kalman filter
3- Different frame sync options

Thanks in advance for your time!
-
@jeremydwong, thank you for your interest and kind words about the project.

Regarding foot swaps: my expectation is that this is fundamentally a Mediapipe limitation. I consider Mediapipe to be more of a proof-of-concept tool for this that is easy to run on CPU. A serious limitation of Mediapipe is the extent to which it downsamples the image for the gross pose estimation (which includes the feet). The face and hand trackers run over regions of interest identified by the gross pose estimation at a relatively higher resolution, which is why the tracking from those tools has so much less 2D wiggle than the gross pose estimates. Motion blur can further confound this issue: given the velocity of the foot during swing compared to the pelvis/shoulders, this may be another source of error. A global shutter, or brighter lights with lower exposure, might help in that case.

Regarding solutions: help is definitely appreciated, and I would absolutely welcome discussions about MMPose/DLC! As I learn from my experiences to date, I'm seeing the value in keeping tools more loosely coupled, particularly during iterations of development (tight integration as a Tracker can wait...). So the camera calibration produces a standard configuration; then 2D point tracking produces a standard output format; then the triangulation tools produce 3D landmark estimates in a standard format, which can then be picked up for additional filtering.

Which brings up the Kalman filter: I think this would be a fantastic addition, though I will confess that my bandwidth/background means that without external support it will likely languish. If this is something that interests you, then I think that would be an awesome thing to collaborate on!

This quickly turns into a lot to unpack, so a real-time conversation may be a useful way to wade through things. I will reply via email with some more details.

Mac
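To make the filtering step concrete: a common choice for smoothing triangulated landmark trajectories is a constant-velocity Kalman filter applied per coordinate. This is an illustrative sketch only, not part of pyxy3d; the function name and the noise parameters `q` and `r` are assumptions chosen for the example.

```python
import numpy as np

def kalman_smooth_1d(z, dt=1 / 30, q=5.0, r=0.01):
    """Constant-velocity Kalman filter over one coordinate of a landmark.

    z: noisy position measurements (one coordinate over time)
    dt: frame interval in seconds
    q: process-noise scale (how much unmodeled acceleration to allow)
    r: measurement-noise variance (tracker jitter)
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition: [position, velocity]
    H = np.array([[1.0, 0.0]])                 # we observe position only
    Q = q * np.array([[dt**4 / 4, dt**3 / 2],
                      [dt**3 / 2, dt**2]])     # white-acceleration process noise
    R = np.array([[r]])
    x = np.array([[z[0]], [0.0]])              # start at first measurement, zero velocity
    P = np.eye(2)
    out = []
    for meas in z:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the new measurement
        y = np.array([[meas]]) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0, 0])
    return np.array(out)
```

In practice each of the x/y/z coordinates of each 3D landmark would get its own filter state, and a swap-detection step would still be needed upstream, since a Kalman filter smooths jitter but only resists (rather than prevents) sustained identity swaps.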
-
Purpose of this Post
I am posting this here as a quick point of entry for understanding how new landmark tracking tools can be integrated into the workflow. An important part of the next stage of development is expanding the tracking options. MMPose and DeepLabCut both strike me as good projects to explore integrating.
Basics of the Tracker
Trackers must inherit from the Tracker abstract base class defined here:
https://github.com/mprib/pyxy3d/blob/e8608bca3fed9de39147af8149d70f900aecf905/pyxy3d/interface.py#L47
There are four properties/methods that must be included in the tracker:

name(self) -> str
get_points(self, frame: np.ndarray, port: int, rotation_count: int) -> PointPacket
get_point_name(self, point_id: int) -> str
draw_instructions(self, point_id: int) -> dict
Some optional methods are also defined there and can be ignored.
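To illustrate the shape of the interface, here is a minimal toy tracker. This is a self-contained sketch: the `Tracker` ABC and `PointPacket` below are simplified stand-ins for the real definitions in pyxy3d/interface.py, and `BrightestPixelTracker` is a hypothetical example, not a tracker that ships with the project.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

import numpy as np


@dataclass
class PointPacket:
    """Simplified stand-in for pyxy3d's PointPacket."""
    point_id: np.ndarray = None  # IDs of the landmarks found in this frame
    img_loc: np.ndarray = None   # (n, 2) pixel coordinates of those landmarks


class Tracker(ABC):
    """Simplified stand-in mirroring the four required members."""

    @property
    @abstractmethod
    def name(self) -> str: ...

    @abstractmethod
    def get_points(self, frame: np.ndarray, port: int, rotation_count: int) -> PointPacket: ...

    @abstractmethod
    def get_point_name(self, point_id: int) -> str: ...

    @abstractmethod
    def draw_instructions(self, point_id: int) -> dict: ...


class BrightestPixelTracker(Tracker):
    """Toy tracker: reports the brightest pixel as a single landmark."""

    @property
    def name(self) -> str:
        return "BRIGHTEST_PIXEL"

    def get_points(self, frame, port, rotation_count):
        gray = frame.mean(axis=2) if frame.ndim == 3 else frame
        row, col = np.unravel_index(np.argmax(gray), gray.shape)
        return PointPacket(
            point_id=np.array([0]),
            img_loc=np.array([[col, row]], dtype=float),  # (x, y) pixel order
        )

    def get_point_name(self, point_id: int) -> str:
        return "bright_spot"

    def draw_instructions(self, point_id: int) -> dict:
        return {"radius": 5, "color": (0, 220, 0), "thickness": 3}
```

A real integration would subclass the actual `Tracker` from pyxy3d.interface and wrap a model such as an MMPose or DeepLabCut network inside `get_points`, returning one `PointPacket` per frame.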
If you would like to see a few examples of how these are implemented with various flavors of Mediapipe, you can look here:
Hands: https://github.com/mprib/pyxy3d/blob/main/pyxy3d/trackers/hand_tracker.py
Pose: https://github.com/mprib/pyxy3d/blob/main/pyxy3d/trackers/pose_tracker.py
Holistic: https://github.com/mprib/pyxy3d/blob/main/pyxy3d/trackers/holistic_tracker.py
Integrating a new tracker
The list of trackers that can be used is stored within a TrackerEnum:

https://github.com/mprib/pyxy3d/blob/main/pyxy3d/trackers/tracker_enum.py
By adding a newly defined tracker to this Enum, it becomes an option in the drop-down GUI and will slot into the rest of the workflow.
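The registration pattern looks roughly like the following. This is a hypothetical sketch of the Enum-as-registry idiom, not the contents of tracker_enum.py; the member names and stand-in classes here are invented for illustration.

```python
from enum import Enum


# Stand-ins for real Tracker subclasses (the actual ones live in
# pyxy3d/trackers/, e.g. pose_tracker.py).
class PoseTracker:
    pass


class MyNewTracker:
    pass


class TrackerEnum(Enum):
    """Each member maps a GUI-facing name to a tracker class.

    Adding one member here is what exposes a new tracker as a
    drop-down option in the rest of the workflow.
    """
    POSE = PoseTracker
    MY_NEW_TRACKER = MyNewTracker  # the newly integrated tracker


# The GUI can enumerate the options and instantiate the user's selection:
options = [member.name for member in TrackerEnum]
tracker = TrackerEnum.MY_NEW_TRACKER.value()
```

Because the Enum value is the class itself (not an instance), the application can defer constructing the tracker until the user actually selects it, which avoids loading every backend's model weights at startup.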