This article is an introductory look at the object model used for ink collection and recognition. We examine the mechanism for collecting ink, as well as the logical location and identification of ink strokes.
Let's begin with some discussion of the most important classes in our object model, starting with the Ink class. In most cases you'll be using the InkOverlay class to work with Ink in your applications, but the InkOverlay and the InkCollector both handle instantiation of the Ink object for you.
Figure 1. Object Model for ink
InkCollector: This is the fundamental object that is used to collect and render ink as it is being entered by the user. The InkCollector fires events to your application as they happen, and also packages up Cursor movements into ink strokes for you. You can tell the InkCollector what events and what data you are interested in receiving, as well as whether you want it to paint ongoing ink strokes as they are collected. InkCollector was intentionally designed without all the support that the InkOverlay brings to the picture; it's a baseline for building component technology in which you want behavior different from what the InkOverlay provides.
InkOverlay: This object is a superset of the InkCollector and adds selection and erasing of the ink that's been captured. For almost all applications the InkOverlay is the object of choice, because its support for selection-related features is a real time-saver and typically a requirement of any application. To keep things simple, this is the object I use in Figure 1.
InkOverlay objects fire numerous events. Wiring these events enables you to respond to all of the important stages of ink input or pen movement.
The CursorInRange/CursorOutOfRange events specify that the user has moved the cursor into or out of range of an ink-enabled area. Note that on some devices the cursor does not have to be touching the screen for these events to fire.
Following the CursorInRange event, if the cursor is not touching the digitizer, the NewInAirPackets event is raised.
When the cursor touches the digitizer, the CursorDown event fires.
The CursorDown event is followed by the NewPackets event, which indicates that ink is being collected.
When the CursorOutOfRange event fires, you receive a Stroke or Gesture event, depending on the value of the InkCollector object's CollectionMode property. The same is true for the InkOverlay object.
Your application will often receive various SystemGesture and Mouse events. By default the Tablet PC Platform supports numerous system gestures, many of which mimic traditional mouse events. For example, tap events map to mouse clicks, and drag, hold, and hover events map to the same type of Mouse events. For more information about system gestures, see System Gestures in the Tablet PC SDK.
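For example, if you wanted to react to a tap differently from an ordinary stroke, you could wire the SystemGesture event. This is a minimal sketch, assuming an InkOverlay field named m_InkOverlay has already been created and enabled:

m_InkOverlay.SystemGesture +=
    new InkCollectorSystemGestureEventHandler(OnSystemGesture);

private void OnSystemGesture(object sender, InkCollectorSystemGestureEventArgs e)
{
    // e.Id identifies which system gesture occurred (Tap, DoubleTap, Drag, and so on)
    if (e.Id == SystemGesture.Tap)
    {
        // Respond to the tap; e.Point holds the gesture's location in ink space
    }
}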
A Stroke object consists of packet information (x, y, ...) that represents the Ink at a certain point on the coordinate system, which we now know starts at 0,0 in the upper-left corner. Each Stroke object is assigned a unique ID (Stroke.Id) relative to the Ink object in which it resides. A Stroke also allows the developer to define extended properties for custom data, and exposes DrawingAttributes that affect the rendering of the data.
The Strokes collection is a collection of references to Stroke objects. A Strokes collection provides helpful operations common to the Stroke objects it contains, such as Rotate() and Scale(), which are used for transforming, moving, or resizing the Stroke data.
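As a quick illustration, here is how you might shrink every stroke on the page to half size and then rotate it. This is just a sketch, assuming an InkOverlay named m_InkOverlay attached to a form that repaints afterward:

Strokes strokes = m_InkOverlay.Ink.Strokes;

// Scale all strokes to 50% in both dimensions
strokes.Scale(0.5f, 0.5f);

// Rotate all strokes 45 degrees around a point (in ink-space coordinates)
strokes.Rotate(45.0f, new Point(0, 0));

// Force a repaint so the transformed ink is redrawn
Invalidate();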
Let's examine some Stroke information. I am using the StrokeIDViewer application that ships as part of the Building Tablet PC Applications book; I've rewritten part of that application for inclusion in this article and the download that accompanies it. In my sample source it's called StrokeViewer.
Figure 2. Stroke count for my name in cursive and print.
Notice the difference in the number of strokes when I write my first name in script versus printed format. I press down, write, and pull up four times in script and nine times when I print. Wow, that's a lot of extra work. So a stroke contains the packet data that represents the ink from when I touch the digitizer to when I pull away. A stroke is basically rendered as a Bezier curve that represents the ink collected between a CursorDown and a CursorUp event.
There are several things going on here.
- A Stroke object is created with each pen-down/pen-move/pen-up action.
- A Stroke contains all of the packet data (including things like pressure).
- A Stroke's visual representation is its Bezier by default (but this can be changed in the drawing attributes).
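To make the last two points concrete, here is a small sketch showing how you might read a stroke's pressure packets and turn off the default Bezier smoothing. It assumes a Stroke object named stroke obtained from the Ink object's Strokes collection:

// Pull the pressure value recorded with each packet,
// assuming the digitizer supplies pressure data
int[] pressures = stroke.GetPacketValuesByProperty(PacketProperty.NormalPressure);

// FitToCurve controls whether the stroke is rendered as a smoothed Bezier;
// set it to false to render the raw polyline instead
stroke.DrawingAttributes.FitToCurve = false;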
Let's work with the StrokeViewer sample (which is included in the download that accompanies this article) some more to see what's happening when we add, select, and delete strokes. We'll add three strokes to the window and display their stroke IDs.
Figure 3. I create three strokes.
I created three strokes, and you can see they are labeled 1, 2, 3, which in this case are the Stroke IDs. Now I'll change the mode of the application to delete and remove some of the strokes.
Figure 4. I erase one stroke.
When I add another stroke you would expect to see ID 4, correct? Well, you won't.
Figure 5. What happened to Stroke ID 4?
As you can see in the previous image, the new Stroke has an ID of 5. So, what happened to 4? Each pen action I performed in delete mode caused a new Stroke to be created; hence the IDs incremented. This is interesting because no ink was actually drawn. Let's try this out with select mode, and you'll see a similar progression of the stroke IDs.
Figure 6. Experimenting with the selection's effect on a stroke.
Switch to select mode, select a Stroke, and then go back to ink mode and add a new Stroke.
Figure 7. Sequence of stroke ID jumps.
Now I am back in ink mode and I draw another stroke, which is labeled with an ID of 7. ID 6 was used for the selection process.
Finally, when we go into point-erase mode and run the eraser across all the strokes, splitting each one into two, you can see the same phenomenon. The action of deleting the strokes actually wound up creating new strokes to represent what was removed.
Figure 8. The effect of erasing on Stroke ID.
Let's take a look at the code I wrote to build the StrokeViewer; it's pretty straightforward.
First, in my C# Windows Form initialization (remember to add a reference to Microsoft.TabletPC.API):
m_InkOverlay = new Microsoft.Ink.InkOverlay(this.Handle);
m_InkOverlay.Enabled = true;
m_InkOverlay.Stroke += new InkCollectorStrokeEventHandler( InkStrokeAdded );
m_InkOverlay.Painted += new InkOverlayPaintedEventHandler(InkPainted);
The InkStrokeAdded event handler is called when a Stroke is added; what I do is invalidate the form in order to force a repaint.
private void InkStrokeAdded( object sender, InkCollectorStrokeEventArgs e)
{
    // Force a repaint so the new stroke's ID gets drawn
    Invalidate();
}
The InkPainted handler is called when the WM_PAINT message is sent to my form; because I want ownership of this process, I intercept the call and make a subsequent call into a helper function that does the real work.
private void InkPainted( object sender, PaintEventArgs e)
{
    RendererEx.DrawStrokeIds(e.Graphics, Font, m_InkOverlay.Ink);
}
As mentioned, the real work is done in the following method, which is called to render the Ink and the Stroke.ID.
public static void DrawStrokeIds(
    Renderer renderer, Graphics g, Font font, Strokes strokes)
{
    foreach (Stroke s in strokes)
    {
        string str = s.Id.ToString();

        // Find the stroke's starting point and convert it
        // from ink-space to pixel coordinates
        Point pt = s.GetPoint(0);
        renderer.InkSpaceToPixel(g, ref pt);

        // Draw the ID in black with white copies offset behind it,
        // so the label stays legible over the ink
        g.DrawString( str, font, Brushes.White, pt.X-1, pt.Y-1);
        g.DrawString( str, font, Brushes.White, pt.X+1, pt.Y+1);
        g.DrawString( str, font, Brushes.Black, pt.X, pt.Y);
    }
}
There are a lot of great white papers on the MSDN Mobile PC Development Center that dig into stroke handling a bit more for you to read.
The Renderer object controls the actual drawing of Ink data to a hardware device context (HDC), be it a window, printer, or other HDC. When rendering Ink, you must keep in mind the two separate coordinate systems supported by the Tablet PC device: "device coordinates" and "ink coordinates". The ink coordinate system is the default. The Renderer object exposes methods to convert ink coordinates to device coordinates, and vice versa; these include the InkSpaceToPixel and PixelToInkSpace methods. The Renderer object is also responsible for the actual rendering of the Ink to the HDC, using the Draw and DrawStroke methods to accomplish this task. Finally, the Renderer object provides support for the manipulation of Ink data, including the transforming, scaling, repositioning, and resizing of Strokes.
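Going the other direction is just as easy. This sketch converts a point from window pixels into ink space, as you might do when hit-testing a mouse location against the ink; it assumes an InkOverlay named m_InkOverlay attached to the current form:

using (Graphics g = CreateGraphics())
{
    Point pt = new Point(100, 100);  // a location in pixel coordinates

    // Convert the pixel location into ink-space coordinates
    m_InkOverlay.Renderer.PixelToInkSpace(g, ref pt);

    // pt can now be compared against stroke packet data, hit-tested, etc.
    Strokes hit = m_InkOverlay.Ink.HitTest(pt, 5.0f);
}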
I encourage you to try things out and give strokes a whirl; there is a lot you can do with them.