Basic multi touch support (issue #279) #306
Conversation
Unfortunately, winit does not seem to create _Touch_ events for the touch pad on my Mac. Only _TouchpadPressure_ events are sent. Found some issues (like [this](rust-windowing/winit#54)), but I am not sure what exactly they mean: sometimes, touch events are mixed up with touch-to-pointer translation in the discussions.
There are a few open topics:
- egui_web currently translates touch events into pointer events. I guess this should change, such that egui itself performs this kind of conversion.
- `pub fn egui_web::pos_from_touch_event` is a public function, but I would like to change the return type to an `Option`. Shouldn't this function be private, anyway?
`InputState.touch` was introduced with type `TouchState`, just as `InputState.pointer` is of type `Pointer`. The `TouchState` internally relies on a collection of `Gesture`s. This commit provides the first rudimentary implementation of a `Gesture`, but it has no functionality yet.
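To make the structure easier to picture, here is a minimal sketch of the layout described above; the names follow the description (`InputState.touch`, `TouchState`, `Gesture`), but the exact types and signatures are assumptions, not the code from this commit.

```rust
// Sketch only: types and signatures are assumptions inferred from the
// description above, not the code of this commit.

/// A touch event as delivered by a backend (fields assumed for illustration).
pub struct TouchEvent {
    pub touch_id: u64,
    pub pos: (f32, f32),
}

/// One gesture detector; concrete gestures (zoom, rotate, …) would implement this.
pub trait Gesture {
    /// Feed one touch event into the detector.
    fn on_event(&mut self, event: &TouchEvent);
}

/// Touch state that internally relies on a collection of `Gesture`s.
#[derive(Default)]
pub struct TouchState {
    gestures: Vec<Box<dyn Gesture>>,
}

/// Analogous to `InputState.pointer: Pointer`, touch input gets its own field.
#[derive(Default)]
pub struct InputState {
    pub touch: TouchState,
    // … pointer, keyboard, time, and the rest of the input state
}
```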
So far, the method always returns `None`, but it should work as soon as the `Zoom` gesture is implemented.
Although quite unlikely, it is still possible to connect more than one touch device. (I have three touch pads connected to my MacBook in total, but unfortunately `winit` sends touch events for none of them.) We do not want to mix up the touches from different devices.
The basic idea is that each gesture can focus on detection logic and does not have to care (too much) about managing touch state in general.
a simple `TODO` should be enough
Thanks for starting work on this! The primary use case I have in mind (and want to use) is simultaneous zoom and pan with two fingers, something many UIs fail to support. Therefore I don't think discretizing the touches into gestures is the right approach. I think a simpler API could actually be more powerful. Say we provide an API that allows the user to query the number of fingers, the total transform, and the delta transform (where a transform is zoom around a point plus translation). The user can then use what they need. For most cases, that would be panning and zooming (either the total or the per-frame delta). If the user wants to detect a two-finger swipe, it would be as easy as …
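To make the proposal concrete, here is a rough sketch of what such a query-style API could look like; all names (`Touches`, `Transform`, the field names) are invented for illustration and are not the API that ended up in the PR.

```rust
/// Hypothetical shape of such a query API; none of these names are egui's.
struct Transform {
    /// Zoom factor around `zoom_center` (1.0 = unchanged).
    zoom: f32,
    zoom_center: (f32, f32),
    /// Translation in screen points.
    translation: (f32, f32),
}

struct Touches {
    num_fingers: usize,
    /// Transform accumulated since the gesture started.
    total: Transform,
    /// Transform since the previous frame.
    delta: Transform,
}

fn handle_touches(touches: &Touches, scene_zoom: &mut f32, scene_offset: &mut (f32, f32)) {
    // Most callers only need pan + zoom, taken from the per-frame delta:
    *scene_zoom *= touches.delta.zoom;
    scene_offset.0 += touches.delta.translation.0;
    scene_offset.1 += touches.delta.translation.1;

    // Detecting e.g. a two-finger horizontal swipe only needs the finger
    // count plus the total translation:
    let _is_two_finger_swipe =
        touches.num_fingers == 2 && touches.total.translation.0.abs() > 50.0;
}
```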
Yes, this all sounds reasonable to me. Thanks a lot for these comments. I have to admit that I was at first fascinated by the idea of having minimal and therefore very simple implementations for each individual gesture. However, I also noticed later that at least implementing zoom and rotation in a combined gesture is a very simple thing. Therefore, I guess I just needed your feedback to help me get back on track. I consider deleting code a good thing, so I will be happy to give it another try.

Still, I have one remark: discretizing the gesture implementations does not mean that multiple "atomic" gestures (like zoom and rotate) cannot be active at the same time. I think it would just be an implementation detail. Still, I fully agree to focus on an MVP. (I admit that I even thought about offering the ability to implement app-specific gestures by implementing the `Gesture` trait.)
For now, it works for two fingers only. The third finger interrupts the gesture.

Bugs:
- Pinching in the demo window also moves the window → pointer events must be ignored when touch is active.
- Pinching also works when doing it outside the demo window → it would be nice to return the touch info in the `Response` of the painter allocation.
Should the touch API be part of the …?
Let's hold off on that and get just the important parts in first.
You are suggesting to offer … – tl;dr: I suggest not to implement this, at least not in the first step.

I understand the idea, and I have mostly implemented it (see …). One solution could be to ignore the positions of fingers other than the first two – but what do you do if the third finger stays while the first gets removed? These issues can easily be avoided for the relative values by simply not delivering the information in the first frame after an individual touch has started or stopped. I did not come across anything similar to …

What do you think?
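As an illustration of the workaround described above (a sketch only, not the PR's implementation): whenever the number of active touches changes, the tracker skips reporting a relative value for that frame and simply re-baselines.

```rust
/// Illustration only: track the centroid of all active touches and report a
/// per-frame delta, except in the first frame after a finger was added or
/// removed, where the tracker silently re-baselines instead.
#[derive(Default)]
struct GestureTracker {
    previous_touch_count: usize,
    previous_centroid: (f32, f32),
}

impl GestureTracker {
    /// Returns the translation since the previous frame, or `None` if the
    /// touch count just changed (no delta is delivered for that frame).
    fn update(&mut self, touches: &[(f32, f32)]) -> Option<(f32, f32)> {
        let centroid = centroid(touches);
        let delta = if touches.len() == self.previous_touch_count {
            Some((
                centroid.0 - self.previous_centroid.0,
                centroid.1 - self.previous_centroid.1,
            ))
        } else {
            None // finger added or removed: skip this frame, just re-baseline
        };
        self.previous_touch_count = touches.len();
        self.previous_centroid = centroid;
        delta
    }
}

fn centroid(touches: &[(f32, f32)]) -> (f32, f32) {
    let n = touches.len().max(1) as f32;
    let sum = touches
        .iter()
        .fold((0.0, 0.0), |acc, p| (acc.0 + p.0, acc.1 + p.1));
    (sum.0 / n, sum.1 / n)
}
```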
I agree, scrap it.
egui_web/src/lib.rs
// TO BE DISCUSSED: todo for all `touch*` events:
// Now that egui knows about Touch events, the backend does not need to simulate
// Pointer events any more. This simulation could be moved to `egui`.
I agree - the integrations should either send `Pointer` events or `Touch` events, but not both. However, it can also wait for a separate PR if it would make this PR more complicated.
I agree that this should be done in a separate PR. Should I turn this comment into an issue?
Code comment is already removed from PR.
This commit includes an improved demo window for egui_demo, and a complete rewrite of the gesture detection. The PR should be ready for review soon.
I have no idea why GitHub complains about "1 workflow awaiting approval". I am not aware that my PR contains any such thing. I merged with upstream/master, though. Can this be the reason?
@quadruple-output this is a new thing on GitHub – PRs from first-time contributors do not get a CI run until a code owner approves it. It is to stop people from making PRs that add bitcoin miners to a project and then have the CI do the compute for them, and shady stuff like that. I didn't find any obvious cryptomining in your code, so I hazarded a press on the approve button :)
Ready for review according to my judgement. Remember that there is a ready-to-try demo version here (check out the "Multi Touch" demo window).
Really great work! Super clean code. I only had a few very minor suggestions, but feel free to ignore or postpone.
Bonus points for publishing a demo of this for me to test ⭐
The only thing missing is for the multitouch demo to also show translation (dragging with two fingers) to demonstrate, and test, full multitouch manipulation.
...and please remove the `dbg!(…)` call :)
    Stroke::new(stroke_width, color),
);
self.time_of_last_update = Some(ui.input().time);
});
It would be great if this demo also showed translation; then it would demonstrate all the higher level parts of the multitouch gesture API
OK, I will add this. Maybe tomorrow…
I was a bit reluctant to add translation because, for some devices (like a touch pad), the units for translation would be neither points nor pixels on the screen, but rather the physical resolution of the touch device itself. Therefore, it feels like 'cheating' to use the translation coordinates just as if they were real screen coordinates – although it would work just fine on touch screens.
Unfortunately, I am only able to test all this with an iPhone and iPad. Both have (obviously) touch screens, not touch pads, so the resolution of the touch device and the screen are always identical. I have a track pad, but `winit` does not support touch events for it. Same for `web_sys` in any browser I have tried.
Resolved with 755ca30.
The online demo is up-to-date.
There is also an initial translation jump when using two fingers to zoom/pan in the Plot demo.
Co-authored-by: Emil Ernerfeldt <[email protected]>
`cargo fmt` whines about a single little whitespace. Tip: configure your editor to run `cargo fmt` on save! Also: you can use `./check.sh` to run the CI steps locally.
Resolved by acbefda. I also uploaded an updated version of the online demo for testing. Honestly, I feel that the new code I added to …
Thanks a lot for your detailed feedback. Very much appreciated. I think I resolved all comments. You are welcome to request more changes.
Actually, there is one thing I would like to propose to change: your method `zoom()` …

If the only existing consumer of …
I really dislike that clippy warning, and should disable it project-wide, but that is beside the point. I think you make a good argument, but if `zoom` returns an `Option`, should scroll and mouse delta do so too? I think it is nicer to return the identity value to indicate no change. For additive values (like scroll) the identity is zero, and for zoom it is one.
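A tiny sketch of the identity-value convention being described (type and field names are illustrative, not egui's): because "no change" is the multiplicative identity 1.0 for zoom and the additive identity 0.0 for scroll, callers can apply the deltas unconditionally instead of matching on an `Option`.

```rust
// Illustration of the identity-value convention; these are not egui's actual types.
struct InputDeltas {
    /// Multiplicative: 1.0 means "no zoom this frame".
    zoom_delta: f32,
    /// Additive: (0.0, 0.0) means "no scroll this frame".
    scroll_delta: (f32, f32),
}

fn apply(deltas: &InputDeltas, zoom: &mut f32, offset: &mut (f32, f32)) {
    // No `Option` or `if` needed: applying the identity values is a no-op.
    *zoom *= deltas.zoom_delta;
    offset.0 += deltas.scroll_delta.0;
    offset.1 += deltas.scroll_delta.1;
}
```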
Amazing work - thank you so much!
I guess I should open a draft PR before this gets too big. It would be great to get some feedback, @emilk.
Done:
- `Event::Touch` variant
- `egui_glium` creates touch events (but I am not able to test this, as it turns out that winit does not recognize my Apple trackpad as a touch device)
- `egui_web` creates touch events
- `TouchState`s added to the `InputState` struct (need a map because, technically, there can be more than one touch device, identified by individual IDs)
- `TouchState::ui()` (used in `InputState::ui()`), so touch events can be manually tested with `egui_demo_lib` on a mobile device. This can be tested with the link mentioned below, but you need a device with a touch surface (otherwise you will see no change). Open the Inspection window and expand the Input collapsible. You should see a collapsible labelled "Touch State [device 0]".
- Gesture detection: this is realized with a `Gesture` trait object and a control loop capable of managing an arbitrary number of individual `Gesture` implementations. (Got this idea from studying ctk::Gesture)
- A demo in `egui_demo_lib` (can be tested at the demo link below)

The proposal of having just a simple `input.zoom()` method returning the zoom factor is charming, but we may need more flexibility (for example the starting position (center point) and current position of the gesture). → I implemented the API as proposed by emilk in his initial comment (except that `touches.count()` is still missing).

Out of scope of this PR (should I create issues?):
- Touch info in the `Response` of `ui.interact()`, but only if the start position of the touch gesture is inside the rect area. (Not sure if this needs a deeper integration with `Sense`.)

Link to online demo for this branch: https://quadruple-output.github.io/egui/ – remember that you need a touch screen (e.g. mobile phone or tablet) for testing.
Closes #279.