
Basic multi touch support (issue #279) #306

Merged
merged 36 commits into from
May 6, 2021

Conversation


@quadruple-output quadruple-output commented Apr 13, 2021

I guess I should open a draft PR before this gets too big. It would be great to get some feedback, @emilk .

Done:

  • Created an Event::Touch variant
  • egui_glium creates touch events
    (but I am not able to test this as it turns out that winit does not recognize my apple trackpad as a touch device)
  • egui_web creates touch events
  • Added a map of TouchStates to the InputState struct
    (need a map because, technically there can be more than one touch device, identified by individual IDs)
  • Implemented TouchState::ui() (used in InputState::ui()), so touch events can be manually tested with egui_demo_lib on a mobile device.
    This can be tested with the link mentioned below, but you need a device with a touch surface (otherwise you will see no change). Open the Inspection window and expand the Input collapsible. You should see a collapsible labelled "Touch State [device 0]".
  • Implemented the first prototype of a pinch/zoom gesture
    This is realized with a Gesture trait object and a control loop capable of managing an arbitrary number of individual Gesture implementations. (Got this idea from studying ctk::Gesture)
  • implement Zoom and Rotate, including a test window for egui_demo_lib (can be tested at the demo link below)
  • having consumed two gestures in a demo app, decide on the final API for applications
    The proposal of having just a simple input.zoom() method returning the zoom factor is charming, but we may need more flexibility (for example the starting position (center point) and current position of the gesture).
    → I implemented the API as proposed by emilk in his initial comment (except that touches.count() is still missing).
  • add support for more than 2 fingers, and for determining the number of fingers on the surface
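
The plumbing described in this list (an `Event::Touch` variant produced by the backends, plus a per-device map of touch states) can be sketched roughly as follows. All type, field, and method names here are illustrative assumptions, not necessarily the identifiers used in the PR:

```rust
// Sketch of the touch plumbing described above. Names are illustrative
// assumptions, not necessarily the identifiers used in this PR.
use std::collections::BTreeMap;

#[derive(Clone, Copy, Debug, PartialEq)]
enum TouchPhase { Start, Move, End, Cancel }

/// A backend-produced touch event (one per finger per frame).
#[derive(Clone, Copy, Debug)]
struct TouchEvent {
    device_id: u64, // there can be more than one touch device
    touch_id: u64,  // identifies an individual finger
    phase: TouchPhase,
    pos: (f32, f32),
}

/// Per-device touch state, kept in a map keyed by `device_id`,
/// so touches from different devices are never mixed up.
#[derive(Default, Debug)]
struct TouchState {
    active_touches: BTreeMap<u64, (f32, f32)>, // touch_id -> last position
}

impl TouchState {
    fn on_event(&mut self, ev: &TouchEvent) {
        match ev.phase {
            TouchPhase::Start | TouchPhase::Move => {
                self.active_touches.insert(ev.touch_id, ev.pos);
            }
            TouchPhase::End | TouchPhase::Cancel => {
                self.active_touches.remove(&ev.touch_id);
            }
        }
    }

    fn finger_count(&self) -> usize {
        self.active_touches.len()
    }
}

fn main() {
    // One device; two fingers go down, then the first is lifted:
    let mut states: BTreeMap<u64, TouchState> = BTreeMap::new();
    let events = [
        TouchEvent { device_id: 0, touch_id: 1, phase: TouchPhase::Start, pos: (10.0, 10.0) },
        TouchEvent { device_id: 0, touch_id: 2, phase: TouchPhase::Start, pos: (50.0, 10.0) },
        TouchEvent { device_id: 0, touch_id: 1, phase: TouchPhase::End, pos: (10.0, 10.0) },
    ];
    for ev in &events {
        states.entry(ev.device_id).or_default().on_event(ev);
    }
    assert_eq!(states[&0].finger_count(), 1);
    println!("fingers on device 0: {}", states[&0].finger_count());
}
```
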

Out of scope of this PR (should I create issues?):

  • return the touch info in the Response of ui.interact(), but only if the start position of the touch gesture is inside the rect area. (Not sure if this needs a deeper integration with Sense.)
  • discuss whether the synthetic generation of pointer events from touch events, as currently implemented in egui_web, could be moved to egui. Advantage: backends would not have to care about this anymore.

Link to online demo for this branch: https://quadruple-output.github.io/egui/ – remember that you need a touch screen (e.g. mobile phone or tablet) for testing.

Closes #279.

Unfortunately, winit does not seem to create _Touch_ events for the touch pad
on my mac.  Only _TouchpadPressure_ events are sent.

Found some issues (like
[this](rust-windowing/winit#54)), but I am not sure
what exactly they mean: sometimes, touch events are mixed up with
touch-to-pointer translation in the discussions.
There are a few open topics:
- egui_web currently translates touch events into pointer events.
  I guess this should change, such that egui itself performs this kind of
  conversion.
- `pub fn egui_web::pos_from_touch_event` is a public function, but I
  would like to change the return type to an `Option`.  Shouldn't this
  function be private, anyway?
InputState.touch was introduced with type `TouchState`, just as
InputState.pointer is of type `Pointer`.

The TouchState internally relies on a collection of `Gesture`s.  This commit
provides the first rudimentary implementation of a Gesture, but has no
functionality, yet.
So far, the method always returns `None`, but it should work as soon as the
`Zoom` gesture is implemented.
Although quite unlikely, it is still possible to connect more than one touch
device. (I have three touch pads connected to my MacBook in total, but
unfortunately `winit` sends touch events for none of them.)

We do not want to mix up the touches from different devices.
The basic idea is that each gesture can focus on detection logic and does not
have to care (too much) about managing touch state in general.
a simple `TODO` should be enough

emilk commented Apr 15, 2021

Thanks for starting work on this!

The primary use case I have in mind (and want to use) is simultaneous zoom and pan with two fingers, something many UIs fail to support. Therefore I don't think discretizing the touches into gestures is the right approach. I think a simpler API could actually be more powerful.

Say we provide an API that allows the user to query the number of fingers, the total transform, and the delta transform (where a transform is zoom around a point plus translation). The user can then use what they need. For most cases, that would be panning and zooming (either the total or the per-frame delta).

If the user wants to detect a two-finger swipe, it would be as easy as touches.count() == 2 && touches.total.translation.len() > 100.0, and we can add a helpful function for that if we want (if ui.input().touches.swipe_left()). But I suggest we don't build that until we need it. Let's keep a minimal-viable-product mindset and build the smallest useful touch support first, and keep the PR as small as possible. Better many small PRs than one big one!
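
The query-style API proposed in this comment can be sketched as below. The struct and method names, and the swipe threshold, are assumptions for illustration only, not the API that was eventually merged:

```rust
// Sketch of the query-style multi-touch API proposed above.
// All names and the 100-point threshold are illustrative assumptions.
#[derive(Clone, Copy, Debug, Default)]
struct Transform {
    zoom: f32,               // multiplicative zoom factor
    translation: (f32, f32), // pan in points
}

impl Transform {
    fn translation_len(&self) -> f32 {
        (self.translation.0.powi(2) + self.translation.1.powi(2)).sqrt()
    }
}

#[derive(Clone, Copy, Debug, Default)]
struct Touches {
    num_touches: usize,
    total: Transform, // accumulated since the gesture started
    delta: Transform, // change since the previous frame
}

impl Touches {
    fn count(&self) -> usize {
        self.num_touches
    }

    /// A two-finger left swipe, expressed in terms of the primitives above.
    fn swipe_left(&self) -> bool {
        self.count() == 2
            && self.total.translation_len() > 100.0
            && self.total.translation.0 < 0.0
    }
}

fn main() {
    let touches = Touches {
        num_touches: 2,
        total: Transform { zoom: 1.0, translation: (-120.0, 5.0) },
        delta: Transform { zoom: 1.0, translation: (-4.0, 0.0) },
    };
    assert!(touches.swipe_left());
    println!("swipe left detected: {}", touches.swipe_left());
}
```
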

@quadruple-output
Contributor Author

Yes, this all sounds reasonable to me. Thanks a lot for these comments. I have to admit that I was at first fascinated by the idea of having minimal and therefore very simple implementations for each individual gesture. However, I also noticed later that at least implementing zoom and rotation in a combined gesture is a very simple thing. So I guess I just needed your feedback to help me get back on track. I consider deleting code a good thing, so I will be happy to give it another try.

Still, I have one remark: discretizing the gesture implementations does not mean that multiple "atomic" gestures (like zoom and rotate) cannot be active at the same time. I think it would just be an implementation detail. Still, I fully agree to focus on an MVP. (I admit that I even thought about offering the ability to implement app-specific gestures by implementing the Gesture trait and registering trait objects. ;) But then - how many gestures could there possibly be?)

For now, it works for two fingers only.  The third finger interrupts the
gesture.

Bugs:
- Pinching in the demo window also moves the window -> Pointer events must be
  ignored when touch is active
- Pinching also works when doing it outside the demo window -> it would be nice
  to return the touch info in the `Response` of the painter allocation
@quadruple-output
Contributor Author

Should the touch API be part of the Response from ui.interact()? Or is this beyond the MVP scope?

@emilk
Owner

emilk commented Apr 20, 2021

Should the touch API be part of the Response from ui.interact()? Or is this beyond the MVP scope?

Let's hold off on that and get just the important parts in first

@quadruple-output
Contributor Author

If the user want to detect a two-finger swipe, it would be as easy as touches.count() == 2 && touches.total.translation.len() > 100.0

You are suggesting to offer touches.total….

tl;dr: I suggest not to implement this, at least not in the first step.

I understand the idea, and I have mostly implemented it (see TouchInfo here). However, the more I think about it, the more questions come up: it is straightforward to implement this for gestures involving a fixed number of fingers, but it gets complicated when you consider that fingers can be added and removed during the course of a gesture. This means that the translation (position = average of all fingers?) of the gesture would probably need to be adjusted, and touches.total.translation would perform a sudden "jump". Similar things can happen to rotation and zoom.

One solution could be to ignore the positions of fingers other than the first two – but what do you do if the third finger stays while the first gets removed?

These issues can easily be avoided for the relative values by simply not delivering the information in the first frame after an individual touch has started or stopped.

I did not come across anything similar to touches.total… in the macOS Cocoa docs I consulted, or in MDN. I conclude that there is no widespread need for the total touch data.

What do you think?
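
The translation "jump" discussed in this comment is easy to see numerically: if the gesture position is the average of all finger positions, a new finger landing shifts that average even though no finger moved. A small sketch (all numbers made up for illustration):

```rust
// Illustrates the centroid "jump" described above: adding a third finger
// moves the gesture's average position although no finger actually moved.
fn centroid(xs: &[f32]) -> f32 {
    xs.iter().sum::<f32>() / xs.len() as f32
}

fn main() {
    let two_fingers = [0.0_f32, 100.0];
    let three_fingers = [0.0_f32, 100.0, 100.0]; // third finger lands at x = 100
    let before = centroid(&two_fingers);  // 50.0
    let after = centroid(&three_fingers); // ~66.7
    assert_eq!(before, 50.0);
    assert!((after - before).abs() > 16.0); // a sudden translation "jump"
    println!("centroid jumps from {before} to {after}");
}
```
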

@emilk
Owner

emilk commented Apr 20, 2021

I agree, scrap total. egui is immediate mode anyway, so users should use the delta transform immediately!


// TO BE DISCUSSED: todo for all `touch*`-events:
// Now that egui knows about Touch events, the backend does not need to simulate
// Pointer events, any more. This simulation could be moved to `egui`.
Owner

@emilk emilk Apr 20, 2021


I agree - the integrations should either send Pointer-events or Touch-events, but not both. However, it can also wait for a separate PR if it would make this PR more complicated.

Contributor Author


I agree that this should be done in a separate PR. Should I turn this comment into an issue?

Contributor Author


Code comment is already removed from PR.

Review thread on egui/src/data/input.rs (resolved)
This commit includes an improved Demo Window for egui_demo, and a complete
re-write of the gesture detection.  The PR should be ready for review, soon.
@quadruple-output
Contributor Author

I have no idea why github complains about "1 workflow awaiting approval". I am not aware that my PR contains any such thing. I merged with upstream/master, though. Can this be the reason?

@emilk
Owner

emilk commented Apr 24, 2021

@quadruple-output this is a new thing on GitHub - PRs from first-time contributors do not have the CI run until an approval from a code owner. It is to stop people making PRs that add bitcoin miners to a project and then have the CI do compute for them, and shady stuff like that. I didn't find any obvious cryptomining in your code, so I hazarded a press on the approve button :)

@quadruple-output
Contributor Author

Ready for review according to my judgement. Remember that there is a ready-to-try demo version here (check out the "Multi Touch" demo window).

@quadruple-output quadruple-output marked this pull request as ready for review April 26, 2021 21:00
@quadruple-output quadruple-output changed the title Issue #279: Touch gesture support Basic multi touch support (issue #279) Apr 28, 2021
Owner

@emilk emilk left a comment


Really great work! Super clean code. I only had a few very minor suggestions, but feel free to ignore or postpone.

Bonus points for publishing a demo of this for me to test ⭐

The only thing I lack is for the multitouch demo to also show translation (dragging with two fingers) to demonstrate, and test, full multitouch manipulation.

...and please remove the dbg!(…) call :)

Review threads (resolved):

  • egui/src/data/input.rs
  • egui/src/input_state.rs
  • egui/src/input_state/touch_state.rs (four threads)
  • egui_demo_lib/src/apps/demo/zoom_rotate.rs (two threads)
Stroke::new(stroke_width, color),
);
self.time_of_last_update = Some(ui.input().time);
});
Owner


It would be great if this demo also showed translation; then it would demonstrate all the higher level parts of the multitouch gesture API

Contributor Author


OK, I will add this. Maybe tomorrow…

I was a bit reluctant to add translation because, for some devices (like a touch pad), the units for translation would neither be points, nor pixels on the screen, but rather the physical resolution of the touch device itself. Therefore, it feels like 'cheating' to use the translation coordinates just as if they were real screen coordinates – although it would work just fine on touch screens.

Unfortunately, I am only able to test all this with an iPhone and iPad. Both have (obviously) touch screens, not touch pads, so the resolution of the touch device and screen are always identical. I have a track pad, but winit does not support touch events for it. Same for web_sys in any browser I have tried.

Contributor Author


Resolved with 755ca30.
The online demo is up-to-date.

Review thread on egui_glium/src/lib.rs (resolved)
@emilk
Owner

emilk commented Apr 29, 2021

There is also an initial translation-jump when using two fingers to zoom/pan in the Plot demo.

Owner

@emilk emilk left a comment


cargo fmt whines about a single little whitespace.

Tip: configure your editor to run cargo fmt on save!

Also: you can use ./check.sh to run the CI steps locally

Review thread on egui_glium/src/lib.rs (resolved)
@quadruple-output
Contributor Author

There is also an initial translation-jump when using two fingers to zoom/pan in the Plot demo.

Resolved by acbefda. I also uploaded an updated version of the online demo for testing.

Honestly, I feel that the new code I added to egui_web should actually move into egui itself, so other integrations don't have to re-implement the same stuff. But this would certainly require another PR.

@quadruple-output
Contributor Author

Thanks a lot for your detailed feedback. Very appreciated. I think I resolved all comments. You are welcome to request more changes.

@quadruple-output
Contributor Author

quadruple-output commented May 5, 2021

Actually, there is one thing I would like to propose changing: your method ui.input().zoom_delta() (which I adapted to multi touch) returns 1.0 when there is no zoom event. I personally would prefer this method to return an Option<f32>. I understand the motivation to return 1.0 by default, but even your own code in the implementation of Plot checks for the special value 1.0 (and has to silence a clippy warning to do so!).

If the only existing consumer of zoom_delta() would be better off with an Option<f32>, why do you think that most future consumers would prefer the current implementation? If someone does not care whether a Zoom event is ongoing, it is as easy as ui.input().zoom_delta().unwrap_or(1.0) to get the current behavior.

@emilk
Owner

emilk commented May 6, 2021

I personally would prefer this method to return an Option<f32>. I understand the motivation to return 1.0 by default, but even your own code in the implementation of Plot checks for the special value 1.0 (and has to silence a clippy warning to do so!).

I really dislike that clippy warning, and should disable it project-wide, but that is beside the point.

I think you make a good argument, but if zoom returns an option, should scroll and mouse delta do so too? I think it is nicer to return the identity value to indicate no change. For additive values (like scroll) the identity is zero, and for zoom it is one.
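
The identity-value convention argued for here has a practical consequence: per-frame deltas compose with no special cases, because a frame with no input contributes the identity (0.0 for additive scroll, 1.0 for multiplicative zoom). A sketch (function and parameter names are assumptions, not egui's API):

```rust
// Sketch of the identity-value convention argued for above: frames with no
// touch input contribute 0.0 (scroll) or 1.0 (zoom), so accumulating deltas
// needs no Option handling. Names are illustrative assumptions.
fn accumulate(scroll_deltas: &[f32], zoom_deltas: &[f32]) -> (f32, f32) {
    let total_scroll: f32 = scroll_deltas.iter().sum();     // identity: 0.0
    let total_zoom: f32 = zoom_deltas.iter().product();     // identity: 1.0
    (total_scroll, total_zoom)
}

fn main() {
    // Three frames; the middle frame had no touch input at all:
    let scroll = [12.0, 0.0, -2.0];
    let zoom = [1.1, 1.0, 1.2];
    let (s, z) = accumulate(&scroll, &zoom);
    assert_eq!(s, 10.0);
    assert!((z - 1.32).abs() < 1e-5);
    println!("total scroll: {s}, total zoom: {z}");
}
```

With Option<f32> instead, every consumer that accumulates would first have to map None back to the identity anyway.
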

Owner

@emilk emilk left a comment


Amazing work - thank you so much!

@emilk emilk merged commit 03721db into emilk:master May 6, 2021
@quadruple-output quadruple-output deleted the issue-279-touch-gesture-support branch May 20, 2021 17:45