Repurposing this bug to cover only gestures that emulate input. Examples:

* Two-finger tap -> right click
* Three-finger tap -> middle click
* Two-finger pan -> scroll wheel
* Two-finger pinch -> Ctrl + scroll
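Purely as an illustration of that mapping (hypothetical names, not actual client code), it could look something like:

    // Sketch: map recognized gestures to the emulated pointer events listed
    // above. Gesture and PointerAction are illustrative names only.
    enum class Gesture { TwoFingerTap, ThreeFingerTap, TwoFingerPan, TwoFingerPinch };

    struct PointerAction {
        int button;      // emulated mouse button (0 = none)
        int wheelDelta;  // emulated scroll wheel steps
        bool ctrl;       // whether Ctrl is held (for zoom via Ctrl + scroll)
    };

    PointerAction emulate(Gesture g, int delta)
    {
        switch (g) {
        case Gesture::TwoFingerTap:   return { 3, 0, false };      // right click
        case Gesture::ThreeFingerTap: return { 2, 0, false };      // middle click
        case Gesture::TwoFingerPan:   return { 0, delta, false };  // scroll wheel
        case Gesture::TwoFingerPinch: return { 0, delta, true };   // Ctrl + scroll
        }
        return { 0, 0, false };
    }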
This bug encompasses all ThinLinc clients (both the native clients and Web Access).
See bug 7042 for sending touch events / gestures to the server. See bug 7251 for gestures that control client-side functions.
To get touch events we need to switch to XInput, and we need version 2.2 to handle multiple touches (necessary for gestures). The plan is currently to require at least XInput 2.0, with gestures requiring 2.2. This means we have a hard minimum requirement of xserver 1.6, released in 2009, which means:

* RHEL 6
* Ubuntu 10.04

For gestures the requirement is xserver 1.12, released in 2012, which means:

* RHEL 6.4 / RHEL 7
* Ubuntu 12.10
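For reference, roughly how such a version check could look (a minimal sketch, not the actual client code):

    #include <X11/Xlib.h>
    #include <X11/extensions/XInput2.h>

    // Sketch: probe for XInput 2.2 (touch/gesture support). A client could
    // fall back to plain pointer handling if the server only offers 2.0/2.1.
    bool hasTouchSupport(Display* dpy)
    {
        int opcode, event, error;
        if (!XQueryExtension(dpy, "XInputExtension", &opcode, &event, &error))
            return false; // no XInput at all

        int major = 2, minor = 2; // ask for 2.2, the server may clamp this down
        if (XIQueryVersion(dpy, &major, &minor) != Success)
            return false; // XInput present, but not version 2.x

        // 2.0 is our hard minimum; 2.2 is needed for multitouch/gestures
        return (major > 2) || (major == 2 && minor >= 2);
    }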
Touch event handling is a bit magical in Xorg, and we've encountered some issues:

https://gitlab.freedesktop.org/xorg/xserver/issues/846
https://gitlab.freedesktop.org/xorg/xserver/issues/847

Hopefully we can work around them decently enough.
Upstream PR for the work so far: https://github.com/TigerVNC/tigervnc/pull/844
Currently there is an issue in GNOME on Wayland (not with X11) where if you press down three fingers, a touchEnd is sent for all the active fingers. This happens as soon as the third finger triggers a touchStart. Filed an upstream bug for GNOME Shell: https://gitlab.gnome.org/GNOME/gnome-shell/-/issues/2667
Upstream made an interesting comment there: Wayland apparently behaves very similarly to how X11 behaves with XI_TouchOwnership active, which is precisely what we want.
We've decided to ignore macOS for now as it doesn't really have native support for touch screens. We can revisit it in the future if we see customer requests. At that point we'll also know more about what a real-world deployment of macOS with touch looks like.
There's an issue where TouchEnd does not trigger if Activities is opened (e.g. via the Windows key) during a gesture. To reproduce:

- Given a running TigerVNC server
- Given a running client session
- Start a panning gesture
- Press the Windows key to open the Activities view
- Release the gesture (lift the finger)
- Press the Windows key again to get back focus
- Note that no new touch events will happen

This issue is noticeable because we ignore all other touch input during a gesture, and due to the absence of TouchEnd we never end the gesture. In our code we use the device id "XIAllMasterDevices", which does not work, but if we change to "XIAllDevices" it works better. With XIAllDevices we normally get both master and slave events, but in the scenario above we only get slave events. There seems to be a bug where the master events go missing.
So we seem to have pinpointed the issue to this section of dix/exevents.c in the X server:

> else if (ti->emulate_pointer && dev->deviceGrab.grab &&
>          !dev->deviceGrab.fromPassiveGrab) {
>     /* There may be an active pointer grab on the device */
>     *grab = dev->deviceGrab.grab;
> }

This is part of the event delivery system and is called for every listener for every touch event. However, this code looks at the device (dev), which is common for every listener, and seems to override them. Since "emulate_pointer" is in there I guess this was only supposed to handle touch-unaware clients that wanted to do a mouse grab. And since mouse events have no "End" it wasn't an issue there.

So what we think happens is:

a) We press a finger, get set up as a listener and get a TouchBegin.
b) Activities opens and GNOME does a grab on the master device.
c) All event deliveries are now rerouted to GNOME instead of us (and other listeners).

We can support this theory experimentally by the fact that if we exit Activities then we start getting events for our touch again. It also only affects the first touch, which is expected as the code only cares about touches that can emulate mouse events.
Issue reported upstream: https://gitlab.freedesktop.org/xorg/xserver/-/issues/1016 Probably won't help us right now though so we need to continue digging for a workaround.
Unfortunately we've failed to find a way of detecting this scenario. AFAICT the only event sent out is a Leave when the device is grabbed. However it is only sent to the window below the cursor, which might not be us. Instead we've worked around the issue by also listening to slave events (which still work fine) and grab XI_TouchEnd there. For other events we still only bother with master devices, but we now have to do the filtering manually. Note that this means we'll see stray events from floating slave devices, but it doesn't seem to cause any practical issues.
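A rough sketch of that workaround (illustrative names, not the actual TigerVNC code; deduplication when both a master and a slave TouchEnd arrive is left out):

    #include <X11/Xlib.h>
    #include <X11/extensions/XInput2.h>

    // Sketch: select touch events from all devices (master and slave) so we
    // still see the slave TouchEnd when GNOME grabs the master device.
    void selectTouchEvents(Display* dpy, Window win)
    {
        unsigned char mask[XIMaskLen(XI_LASTEVENT)] = { 0 };
        XISetMask(mask, XI_TouchBegin);
        XISetMask(mask, XI_TouchUpdate);
        XISetMask(mask, XI_TouchEnd);

        XIEventMask evmask;
        evmask.deviceid = XIAllDevices;   // not XIAllMasterDevices, see above
        evmask.mask_len = sizeof(mask);
        evmask.mask = mask;

        XISelectEvents(dpy, win, &evmask, 1);
    }

    // Sketch: when handling an event, ignore slave events for everything
    // except TouchEnd, so each touch isn't processed twice.
    bool shouldHandle(Display* dpy, const XIDeviceEvent* ev)
    {
        if (ev->evtype == XI_TouchEnd)
            return true; // accept from both master and slave devices

        int ndevices;
        XIDeviceInfo* info = XIQueryDevice(dpy, ev->deviceid, &ndevices);
        bool isMaster = info && (info->use == XIMasterPointer ||
                                 info->use == XIMasterKeyboard);
        if (info)
            XIFreeDeviceInfo(info);
        return isMaster;
    }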
Another issue found related to opening Activities: if a touch is active when opening Activities, we get weird FLTK FL_MOVE events with bogus coordinates whenever we move the finger. Seems to be a FLTK bug, but we haven't pinpointed it fully yet. It's probably mostly harmless though, as there are no lasting effects once you leave the Activities mode.
I've tested full screen with and without the grab option on different Linux environments for TigerVNC. Tested with Xfce and GNOME on Fedora 31.

With the grab enabled, system keys are passed to the session and it reacts accordingly, i.e. if Alt+Tab is pressed an overview of all the open applications pops up. Tested the Meta key, Alt+Tab, and shortcuts to e.g. open a terminal. If grab is disabled, the system keys are passed to the local session instead.

Also tested:

* having a full screen session on a touch screen and then clicking outside it (on the second screen) to see if focus shifted to the other screen, and vice versa.
* marking some text in the session and then dragging the mouse to the second screen to see if the selection is still active.

These worked fine with the grab enabled for both Xfce and GNOME.

One problem I found in GNOME was that when opening the context menu on a touch screen, the GNOME on-screen keyboard pops up, which is mildly annoying. However, since this keyboard is very buggy we don't know if we're doing something wrong or if GNOME is. Until we know more this behavior remains.
Currently there is an issue with panning using TigerVNC on Windows. The two-finger drag gesture is easily hijacked by the mouse emulation for a single-finger drag, and we don't get any pan event while the mouse emulation is in progress. This can be reproduced by:

* Run TigerVNC on Windows 10 with a touch screen.
* Try to pan using two fingers, with a small delay on the second finger.
* Notice that the mouse emulation of the drag starts as soon as the first finger moves a little.
* Also notice that the pan does not happen.

A possible workaround for this might be using single-finger panning and adding a threshold before firing the drag event. To do this we need to be able to distinguish between a pan triggered by two fingers and a pan triggered by one finger. The only thing I have found that might be of use is the 'ullArguments' member of the GESTUREINFO for the pan event. The documentation states that in a pan event 'ullArguments' "indicates the distance between the two points". We hope that this distance can be used to differentiate the two pans.
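For reference, roughly where that value comes from: a minimal sketch of a WM_GESTURE handler reading GESTUREINFO for a pan (illustrative, not the actual client code):

    #include <windows.h>

    // Sketch: read GESTUREINFO for a pan gesture and look at ullArguments,
    // which for GID_PAN is documented as the distance between the two points.
    LRESULT handleGesture(HWND hwnd, WPARAM wParam, LPARAM lParam)
    {
        GESTUREINFO gi = { 0 };
        gi.cbSize = sizeof(GESTUREINFO);

        if (GetGestureInfo((HGESTUREINFO)lParam, &gi) && gi.dwID == GID_PAN) {
            POINT pos = { gi.ptsLocation.x, gi.ptsLocation.y }; // current pan position
            ULONGLONG distance = gi.ullArguments;               // "distance between the two points"
            (void)pos; (void)distance;                          // gesture handling would go here
            CloseGestureInfoHandle((HGESTUREINFO)lParam);
            return 0;
        }

        // Let Windows handle anything we don't process ourselves
        return DefWindowProc(hwnd, WM_GESTURE, wParam, lParam);
    }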
(In reply to Alex Tanskanen from comment #20)
> I've tested full screen with and without the grab option on different Linux
> environments for TigerVNC. Tested with Xfce and GNOME on Fedora 31.
>

Another test I did was to check whether mouse clicks are sent to the session while holding down the Meta key. If grab is disabled, Meta + mouse buttons are sent to the local system instead, which is the correct behavior.
For gestures we want to handle coordinates and positions in virtual distances so that scaling doesn't affect gestures in weird ways. According to MSDN, all distances and positions are provided in physical screen coordinates. However, this appears to be wrong, since our gestures seem to be handled in virtual coordinates. To prove this I:

1. adjusted the scaling in Windows 10 to 125 %
2. started a pinch gesture on a 1280x800 display
3. dragged my fingers to the right edge of the screen and observed what the position was.

If MSDN were correct, the position should still be 1280 even with scaling, but in our case the position had changed (been scaled). This means that virtual positioning is used. I will do some more testing to see how a session will be affected with multiple displays where only one display is using scaling.
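If we ever need to convert between the two coordinate spaces ourselves, the per-window DPI gives the scale factor. A hedged sketch, assuming Windows 10 1607+ where GetDpiForWindow() is available:

    #include <windows.h>

    // Sketch: compute the scale factor for the window so a virtual (logical)
    // coordinate can be converted to a physical one, or vice versa.
    double windowScaleFactor(HWND hwnd)
    {
        UINT dpi = GetDpiForWindow(hwnd);
        if (dpi == 0)
            dpi = USER_DEFAULT_SCREEN_DPI; // 96, i.e. 100 % scaling
        return (double)dpi / USER_DEFAULT_SCREEN_DPI;
    }

    // E.g. with 125 % scaling this returns 1.25, so a logical x of 1024
    // corresponds to a physical x of 1024 * 1.25 = 1280.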
(In reply to Niko Lehto from comment #21)
> The only thing I have found that might be of use is the usage of
> 'ullArguments' in the GESTUREINFO of the pan event. The documentation states
> that in a pan event 'ullArguments' "indicates the distance between the two
> points". We hope that this distance can be used to differentiate the two
> pans.

In theory ullArguments will be zero in the cases where we only have one finger, but this is only true when the touch screen is the primary screen. This seems to be because the 'zero point', a.k.a. the origin, is relative to the monitor positioning. For a primary screen the origin is at 0. When the touch screen is a secondary screen positioned to the left of the primary screen, the origin is a negative value equal to the screen width.

We tested this by printing out ullArguments for a one-finger drag and a two-finger drag. For the two-finger drag we pinpointed the minimum value (fingers close together) and the maximum horizontal value (fingers as far from each other as possible horizontally).

Resolution: 1920x1080

                          | 1 touch point      | 2 touch points min | 2 touch points max
    ----------------------+--------------------+--------------------+-------------------
    Primary               | 0                  | 0                  | 0x780
    Secondary, pos: left  | -1920 (0xfffff880) | 0xfffff880         | 0xffffffff
    Secondary, pos: right | 0x780              | 0x780              | 0xeff
    Secondary, pos: under | 0                  | 0                  | 0x780

We also noticed that the Y value did not affect these values; we saw this when we tried to place the touch screen underneath the primary screen. So to be able to use this information to differentiate a one-finger drag from a two-finger drag, we need to get the origin value of the monitor our touch is on and compare it to the value in ullArguments. If these values are the same, we have a one-point touch.
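A hedged sketch of that comparison, assuming the behavior observed above (that ullArguments is offset by the touch monitor's origin); looksLikeSingleFingerPan is an illustrative name:

    #include <windows.h>

    // Sketch: treat a GID_PAN as a one-finger pan if ullArguments equals the
    // x origin of the monitor the gesture is on.
    bool looksLikeSingleFingerPan(const GESTUREINFO& gi)
    {
        POINT pt = { gi.ptsLocation.x, gi.ptsLocation.y };
        HMONITOR mon = MonitorFromPoint(pt, MONITOR_DEFAULTTONEAREST);

        MONITORINFO mi = { 0 };
        mi.cbSize = sizeof(mi);
        if (!GetMonitorInfo(mon, &mi))
            return false; // be conservative and assume two fingers

        // In our tests the low 32 bits of ullArguments, interpreted as signed,
        // matched the monitor's left edge for one-finger pans.
        LONG arg = (LONG)(gi.ullArguments & 0xFFFFFFFFu);
        return arg == mi.rcMonitor.left;
    }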
(In reply to Alex Tanskanen from comment #23)
> ...
> I will do some more testing to see how a session will
> be affected with multiple displays where only one display is using scaling.

A primary touchscreen is not affected by the second monitor's scaling. If the touchscreen is a secondary display with scaling, it scales independently of the other screen. If both screens have scaling, each screen scales independently, since they don't affect each other.
We detected an issue when the cursor is positioned outside our window while touch gestures are generated inside our window. We see mouse leave events and mouse releases that mess up our one-finger pan. The workaround we have at this point is to use SetCursorPos() to place the cursor inside our window at the beginning of a touch gesture.

SetCursorPos() seems to have some kind of delay before it stops the leave events. This is not a problem for the single-finger pan, since the threshold we apply for single pan detection gives SetCursorPos() enough time to do its thing. For the other gestures, where we don't have such a delay, we still see some of the mouse leave problems. This doesn't seem to cause any practical issues, as those gestures aren't disturbed by the eventual mouse releases caused by these leave events.
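A minimal sketch of that workaround (illustrative, not the actual client code):

    #include <windows.h>

    // Sketch: at the start of a touch gesture, warp the pointer into our
    // window if it is currently outside, so we don't get spurious mouse
    // leave events and button releases during the gesture.
    void ensureCursorInsideWindow(HWND hwnd)
    {
        POINT cursor;
        RECT rect;
        if (!GetCursorPos(&cursor) || !GetWindowRect(hwnd, &rect))
            return;

        if (!PtInRect(&rect, cursor)) {
            // Move the cursor to the center of the window. Note that this
            // takes effect with a slight delay, see the discussion above.
            SetCursorPos((rect.left + rect.right) / 2,
                         (rect.top + rect.bottom) / 2);
        }
    }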
The cause of this issue is that FLTK uses TrackMouseEvent() in order to get notified when the mouse leaves the window. However, if the mouse is already outside the window you get a WM_MOUSELEAVE right away. The reason this causes issues is that we want to do one last update of the RFB cursor position on an FL_LEAVE event, in order to reliably trigger edge events in the session (e.g. the hot corner for Activities in GNOME). And since we cannot get leave events with a button pressed down, we assume they are all released. (We also cannot just filter out the WM_MOUSELEAVE events, as that messes up FLTK's internal state.)
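For context, roughly how that leave tracking is set up (simplified sketch of a TrackMouseEvent() call, not FLTK's actual code):

    #include <windows.h>

    // Sketch: ask Windows for a WM_MOUSELEAVE when the mouse leaves hwnd.
    // If the mouse is already outside the window when this is called, the
    // WM_MOUSELEAVE arrives right away, which is the behavior described above.
    void requestLeaveNotification(HWND hwnd)
    {
        TRACKMOUSEEVENT tme = { 0 };
        tme.cbSize = sizeof(tme);
        tme.dwFlags = TME_LEAVE;
        tme.hwndTrack = hwnd;
        TrackMouseEvent(&tme);
    }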
Tested with a build after the vendordrop against a new server (Fedora 31). Tested the native client on both Windows 10 and Fedora 31.

Tested the following:

- Touch in general
  - Tap
  - Touch and drag
  - Two-finger pan
  - Pinch in/out
  - Double tap
  - Two-finger tap (right click)
- Special/use cases
  - Resize window by dragging in the corner
  - Mixing gestures, pan -> two-finger pan -> pan
  - Have mouse and focus outside of session, start a drag inside session
  - Client in fullscreen combined with all of the tests above

On the Fedora 31 client I also tested three-finger tap, which emulates a middle mouse button press. Everything works fine.
This isn't primarily about gestures, but since it was vendor dropped into ThinLinc from TigerVNC it's worth mentioning to keep track of it. I tested that a session can support up to 16K resolution without scaling on macOS. It seems to work fine, but I didn't test what happens if the resolution is set to more than 16K.
Client gesture documentation looks great!
The native clients should be done now, so moving on to Web Access... We've looked at HammerJS and Zing Touch as possible gesture frameworks. Unfortunately both projects are dead, very complex and look like they need some work. So we'll make an attempt at porting our C++ code to JavaScript.
Upstream PR: https://github.com/novnc/noVNC/pull/1414
We still have some issues in Web Access on Linux (GNOME) in Firefox and Chrome. I filed a new bug to keep track of this as it is an upstream issue: bug 7514
Everything should now be in place.
The image of the Web Access toolbar needs to be updated now that the mouse button selector is gone.
The image is fixed, but I found another issue: on iOS, the page can scroll when the keyboard is open. When the page is not at the top, the cursor is rendered at an incorrect location. Mouse events use the correct coordinates though.
The issue has to do with the difference between the "visual viewport" and the "layout viewport". Fortunately Safari 13 exposes enough information to map between the two. Older versions seem to be out of luck and will have to live with an incorrectly placed pointer. :/
Should be fixed now. Need to retest local cursor on touch devices.
Also fixed focus on click, which also needs to be retested.
Focus on click tested on Safari on iOS, Chrome on Android and Firefox on Linux. Works well. Mouse cursor is correctly displayed on Safari on iOS and Chrome on Android.
Translations do work in tlclient but not in vncviewer. This is a regression that seems to have been introduced with this vendordrop.
Tested translations on client build 6516 for Windows 10, Fedora 31, and macOS. Everything is translated correctly except on macOS (see bug 7523). This bug can now be closed even though macOS doesn't work, since that is caused by something else.