I wonder if we could use interpolation for multi-view pose estimation like so:
Record the same scene with 2 cameras at different viewpoints
This would create 2 videos of the same person/body/skeleton, with the same motion but from different viewpoints
Run interpolate.py on the 2 time-synced videos, setting keep_attr to none to allow averaging of the body attribute, averaging of the motion (reducing errors), and interpolation along the view axis
The interpolation should then result in a series of angle/view transformations from camera 1 to camera 2
To extract 3D data, we somehow step through the transformation from camera 1 to camera 2
However, each step in the interpolation may not be geometrically proportional to a step in angle
Q1. Can someone confirm whether a step in the interpolation is proportional to angular change?
Q2. Is it plausible to adapt the interpolation code to work with more than 2 videos? Theoretically, an unlimited number?
This line seems to be key (would we need to generate 2D alphas for 3 videos, 3D alphas for 4 videos, etc.?)
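The proposed scheme above could be sketched roughly as follows. Note this is only a sketch of the idea, not the repository's actual API: `encode` and `decode` are hypothetical placeholders for the paper's decomposition of a 2D pose sequence into motion, body, and view codes, and its reconstruction back to a skeleton sequence.

```python
import numpy as np

def interpolate_views(seq1, seq2, encode, decode, n_steps=8):
    # encode/decode are hypothetical stand-ins for the model's actual
    # encoder/decoder; encode(seq) -> (motion, body, view) latent codes.
    m1, b1, v1 = encode(seq1)
    m2, b2, v2 = encode(seq2)
    motion = 0.5 * (m1 + m2)  # average motion codes to reduce per-view errors
    body = 0.5 * (b1 + b2)    # average body-attribute codes
    # linearly interpolate only the view code from camera 1 to camera 2
    return [decode(motion, body, (1 - a) * v1 + a * v2)
            for a in np.linspace(0.0, 1.0, n_steps)]
```

Each element of the returned list would be one intermediate "virtual camera" rendering between the two real views.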
To Q1: We cannot say for sure whether there is a proportional relation. In our experience, the interpolation results look proportional to angular change when the two viewpoints are relatively close (say, < 90 degrees). However, when the two viewpoints are far apart (say, about 180 degrees), the intermediate interpolation results look strange, so in that regime it is certainly not proportional.
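A toy calculation illustrates the geometric intuition for why a linear interpolation step need not equal an angular step (this models only camera projection, not the network's latent space): linearly blending the projections of a point seen from 0° and 90° does not land on the true 45° projection.

```python
import numpy as np

def project_x(point, theta):
    # x-coordinate of a 3D point seen by a camera rotated theta radians about y
    x, _, z = point
    return x * np.cos(theta) + z * np.sin(theta)

p = (1.0, 0.0, 0.0)
lerp_mid = 0.5 * (project_x(p, 0.0) + project_x(p, np.pi / 2))  # linear blend: 0.5
true_mid = project_x(p, np.pi / 4)                              # true 45 deg: ~0.707
```

The halfway linear blend (0.5) differs from the true halfway-angle projection (≈0.707), so even in this idealized setting the mapping from blend weight to angle is nonlinear.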
This is actually an interesting problem: how to find an optimal interpolation strategy in the latent space. Linear interpolation is the simplest choice, but may be sub-optimal.
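One alternative worth trying, assuming the view codes behave roughly like directions, is spherical linear interpolation (slerp), which advances by equal angles rather than equal chords. A minimal sketch (not part of the repo):

```python
import numpy as np

def slerp(v0, v1, t):
    # Spherical linear interpolation between two latent vectors v0, v1 at t in [0, 1].
    n0, n1 = np.linalg.norm(v0), np.linalg.norm(v1)
    omega = np.arccos(np.clip(np.dot(v0 / n0, v1 / n1), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return (1 - t) * v0 + t * v1  # (nearly) parallel vectors: fall back to lerp
    return (np.sin((1 - t) * omega) * v0 + np.sin(t * omega) * v1) / np.sin(omega)
```

Whether slerp actually matches the latent geometry better than lerp here is an empirical question.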
To Q2: Very interesting idea. We didn't try that, but I guess the results would still be smooth, as it is still a continuous change in the latent space. With more than 2 videos, there would be more interpolation options, e.g. barycentric / bilinear.
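A barycentric mix of N view codes could be sketched as below; `barycentric_mix` is a hypothetical helper, not part of the repo. The weights are the "alphas" mentioned above: a single alpha for 2 videos, a 2-simplex of weights for 3 videos, a 3-simplex for 4, and so on.

```python
import numpy as np

def barycentric_mix(codes, weights):
    # codes: (N, D) array of N latent view codes; weights: (N,) nonnegative, summing to 1
    w = np.asarray(weights, dtype=float)
    assert np.all(w >= 0) and np.isclose(w.sum(), 1.0), "weights must lie on a simplex"
    return w @ np.asarray(codes)  # convex combination of the N codes
```

Feeding the mixed code to the decoder in place of a single camera's view code would, in principle, render a virtual viewpoint inside the convex hull of the N real cameras.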
The key line referenced above: 2D-Motion-Retargeting/interpolate.py, line 17 at commit 7eaae7e

Probably related issue: #5

Thanks!