No, you have to implement such gesture recognition from scratch. It should not be too hard. A control needs to have a state machine with the phases of swiping: 1) waiting; 2) mouse button pressed down; 3) mouse moved by a sufficient distance, direction of swipe detected; 4) mouse button released, swipe complete (here you invoke your swipe event).
So, basically, your state machine would be an enumeration of the phases plus the detected swipe direction; you can combine all these states in one enumeration type, with members combining phase and detected direction using the | (OR) operator; then the whole machine's data set would be just one field. Now, you handle "low-level" mouse events which can cause the state machine to change state. This algorithm can be implemented with WPF, ASP.NET, Silverlight, and so on, so please find the links to the API elements by yourself. On top of that, create your own event arguments type for your swipe event and create a public event instance for your custom control. This event should be invoked on the last phase of swipe detection.

The idea is this: you should expect mouse events in the order described above. Any mouse event (and perhaps some other events) invoked out of the expected order breaks the phase sequence by resetting the machine to its initial state; after successful swipe recognition and invocation of your event, the state is also reset to the initial value. An important point is "insufficient distance" in one swipe. The required minimal distance should be an option of your application. I suggest that a swipe of insufficient distance also resets the state to its initial value, without trying to wait for a "continuation" of the swipe in the next event. This simple algorithm will filter out swipes which are not long and fast enough, or those performed at an angle which cannot be interpreted as a swipe in a certain direction. Also, you can consider only horizontal and vertical swipes (having exactly 4 directions); then you should keep only the swipes where the difference in coordinates along one of these axes clearly dominates over the perpendicular one. The accuracy of direction recognition can also be an option. With some experience, you can figure out a realistic set of options.
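The phase-plus-direction encoding in a single field, mentioned above, can be sketched as a [Flags] enumeration (all names here are illustrative, not from any library):

```csharp
using System;

// Illustrative encoding: low bits hold the phase, high bits the detected direction,
// so the whole state machine fits in one field of this type.
[Flags]
public enum SwipeState
{
    Waiting = 0,        // initial state
    Pressed = 1,        // mouse button is down
    Moved   = 2,        // moved by a sufficient distance

    // detected direction, OR-combined with the phase:
    Left  = 1 << 4,
    Right = 1 << 5,
    Up    = 1 << 6,
    Down  = 1 << 7,
}
```

For example, `SwipeState state = SwipeState.Moved | SwipeState.Right;` represents phase 3 with a rightward swipe detected, and `state.HasFlag(SwipeState.Right)` tests the direction.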
This is a preliminary design of the feature, which you can easily implement.
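The whole recognition algorithm can be sketched as a small UI-independent class; you would wire a control's MouseDown/MouseMove/MouseUp handlers to its methods. All class, member, and option names below are my own illustration, and the default option values are arbitrary:

```csharp
using System;

public enum SwipePhase { Waiting, Pressed, Moved }
public enum SwipeDirection { None, Left, Right, Up, Down }

// A sketch of the state machine described above, with minimal-distance
// and dominance-ratio options; not tied to any particular UI library.
public class SwipeRecognizer
{
    readonly double minDistance;    // required minimal swipe length (an option)
    readonly double dominanceRatio; // how much one axis must dominate the other (an option)

    SwipePhase phase = SwipePhase.Waiting;
    SwipeDirection detected = SwipeDirection.None;
    double startX, startY;

    // Invoked on the last phase, when the swipe is complete:
    public event Action<SwipeDirection> Swipe;

    public SwipeRecognizer(double minDistance = 40, double dominanceRatio = 2)
    {
        this.minDistance = minDistance;
        this.dominanceRatio = dominanceRatio;
    }

    public void OnMouseDown(double x, double y)
    {
        phase = SwipePhase.Pressed;
        startX = x; startY = y;
        detected = SwipeDirection.None;
    }

    public void OnMouseMove(double x, double y)
    {
        if (phase == SwipePhase.Waiting) return; // out-of-order event, ignore
        SwipeDirection dir = Classify(x - startX, y - startY);
        if (dir != SwipeDirection.None)
        {
            phase = SwipePhase.Moved;
            detected = dir;
        }
    }

    public void OnMouseUp(double x, double y)
    {
        if (phase == SwipePhase.Moved && detected != SwipeDirection.None)
            Swipe?.Invoke(detected);
        Reset(); // completed or broken sequence: back to the initial state
    }

    public void Reset()
    {
        phase = SwipePhase.Waiting;
        detected = SwipeDirection.None;
    }

    SwipeDirection Classify(double dx, double dy)
    {
        double ax = Math.Abs(dx), ay = Math.Abs(dy);
        // one axis must clearly dominate over the perpendicular one:
        if (ax >= minDistance && ax > ay * dominanceRatio)
            return dx > 0 ? SwipeDirection.Right : SwipeDirection.Left;
        if (ay >= minDistance && ay > ax * dominanceRatio)
            return dy > 0 ? SwipeDirection.Down : SwipeDirection.Up;
        return SwipeDirection.None;
    }
}
```

A too-short or too-diagonal gesture never reaches the `Moved` phase, so mouse-up simply resets the machine without firing the event, which is the filtering behavior described above.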
For zoom, the approach depends on the UI library you are using. The solutions are pretty obvious, but different. If you clarify which library you use and ask me to elaborate, I'll try to help with that, too.
Just in case, look at my past answers, but don't expect a final solution:
Zoom image in C# .net mouse wheel
Read Big Tiff and JPEG files (>(23000 x 23000) pix) in a stream. And display part of it to the screen in realtime.
For WPF, zooming can be implemented with one or both of these:
UIElement.RenderTransform Property (System.Windows)
Viewbox Class (System.Windows.Controls)
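As a sketch of the RenderTransform approach (element and handler names are my own, not prescribed by WPF), a mouse-wheel zoom might look like this in XAML:

```xml
<!-- Sketch: an element zoomed by a ScaleTransform assigned to its RenderTransform -->
<Image Name="picture" MouseWheel="Picture_MouseWheel">
    <Image.RenderTransform>
        <ScaleTransform x:Name="zoom" ScaleX="1" ScaleY="1"/>
    </Image.RenderTransform>
</Image>
```

with a code-behind handler along these lines (the factor 1.1 is an arbitrary choice):

```csharp
void Picture_MouseWheel(object sender, System.Windows.Input.MouseWheelEventArgs e)
{
    double factor = e.Delta > 0 ? 1.1 : 1 / 1.1; // wheel up zooms in, down zooms out
    zoom.ScaleX *= factor;
    zoom.ScaleY *= factor;
}
```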