
The myUML Project.

30 Sep 2003
This article explains the myUML project that provides a set of tools for the creation and manipulation of UML diagrams.

Table of contents

  1. Introduction
  2. Usage
  3. Not too gory details: classes and interfaces
  4. Not too gory details: to shift or not to shift?
  5. Gory details: painting
  6. Gory details: custom controls
  7. Gory details: (de)serializing: myUML and XML
  8. The road goes on and on...
  9. History

Introduction

The intent of the myUML project is that of providing a set of tools for the creation and manipulation of UML diagrams.

This is the first release of any code from the project, and includes only the initial version of the "Use Case Diagram" portion of the project. In other words, the only UML diagrams considered in this release are the Use Case Diagrams.

Things you can do with this first release:

  • Create use case diagrams;
  • Save them to file and load them from file;
  • Modify them.

Code included in this release:

"myUML" solution
  |
  \--    "myUML" project
      |
      \--    Actor.cs:            The Actor class
      \--    ActorDisplay.cs:     The ActorDisplay Control
      \--    Anchor.cs:           The Anchor class
      \--    App.ico:             Standard application icon
      \--    AssemblyInfo.cs:     Standard Assembly Information
      \--    Communication.cs:    The Communication class
      \--    CommunicationDisplay.cs: The CommunicationDisplay Control
      \--    IUMLtoXML.cs:        The IUMLtoXML interface
      \--    TestEntry.cs:        Entry form to the project
      \--    TestUseCases.cs:     Form to work with Use Case Diagrams
      \--    UseCase.cs:          The UseCase class
      \--    UseCaseDisplay.cs:   The UseCaseDisplay Control

Before we move on, a little note on nomenclature: if you have never heard of UML, you will probably find this project rather silly. I suggest you check online or in some good software engineering textbook, since this article does not intend to explain what UML is and what it is used for. There are so many good references for UML that I find it hard to point you to any specific place other than Google and CodeProject.

Since this first release deals with use case diagrams, though, let me define the following key terms (in my own words):

  • Use case diagram

    A diagram representing what the depicted system does, from the point of view of an external observer; Use Case Diagrams are composed of Actors, Communications, and Use Cases (I refer to all three of them as "figures");

  • Actor

    A figure, in a Use Case Diagram, representing a role that some user or component of the system plays in relation to the use cases depicted in the diagram; Actors are represented as stick-men;

  • Use Case

    A figure, in a Use Case Diagram, representing some task or goal that the depicted system should complete or achieve; Use Cases are represented as ellipses;

  • Communication

    A figure, in a Use Case Diagram, representing the participation of an Actor in a given Use Case; in other words, if an Actor is linked to a Use Case by a Communication, it means that the Actor is somehow involved (it must either do something or provide some information) in the implementation of the Use Case; Communications are represented as lines linking Actors to Use Cases.

Hope that avoids confusion down the line.

Usage

For those of you in a hurry, or not much interested in the details I present later on, here's a quick summary of what you can do with this code.

Compile it (I'm providing the source code), and start it up. You will be faced with a simple form with a single button ("Test Use Cases"). In future versions, there will be more buttons. Please, forgive the simple appearance of this splash form. Click on the button to access the tool that lets you work on Use Case Diagrams.

At this point you should see a simple form with a white panel on the left-hand side, and a toolbar at the top with a few buttons. Here's a review of the buttons in the toolbar (in order, from left to right):

  • New Diagram: this button clears out the current diagram;
  • Open Diagram: open a previously saved Use Case Diagram;
  • Save Diagram: save the current diagram to file;
  • Pointer Tool: use this tool when you want to select figures in the diagram;
  • Communication Tool: lets you draw new Communications;
  • Use Case Tool: lets you draw new Use Cases;
  • Actor Tool: lets you draw new Actors;
  • Delete button: deletes the figure(s) currently selected (note that this button does not appear if you don't have any figure selected).

To draw a new Use Case Diagram, click on the tools you need and start drawing in the white panel. It's a click-and-drag interface. You will notice that, as you draw your first figure, a control will appear on the right-hand side of the form to display details regarding the figure you have just drawn. This detailed view appears only when you have a single figure selected, and lets you change its properties and appearance. In particular, it lets you move and resize the figures as you need (sorry, I haven't yet implemented click-and-drag moving or resizing). You can select multiple figures by clicking on them while holding the Shift key down.

To save your Use Case Diagram to file, click on the "Save Diagram" button. The diagram is saved in XML form (so, yes, you could tinker with it outside the myUML project, but you can also mess the data up :) ). To clear the Use Case Diagram, click on the "New Diagram" button. To load a Use Case Diagram from a file, click on the "Open Diagram" button.

That's pretty much all.

Not too gory details: classes and interfaces

The first thing I'd like to write about is the classes and interfaces in the project. Here's what I ended up with at the end:

  • The Anchor class simply contains a PointF;
  • The IUMLtoXML interface defines two methods (SaveTo and ReadFrom);
  • The Communication class contains three Anchors, and implements the IUMLtoXML interface;
  • The UseCase class contains 8 Anchors and implements the IUMLtoXML interface;
  • The Actor class inherits from the UseCase class and simply overrides a couple of methods (Paint and SaveTo);
  • The CommunicationDisplay inherits from the UserControl class, and "logically" contains a Communication;
  • Similarly for the UseCaseDisplay and ActorDisplay.

(Yes, once I expand the myUML project to include Class Diagrams, I will be able to include a nice diagram here :) )
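In the meantime, here is a rough skeleton of the hierarchy just described. The member lists are abbreviations of what this section describes, not the actual project code, and the method bodies are placeholders:

```csharp
using System.Drawing;

// Rough skeletons of the figure classes; signatures are guesses from the text.
public class Anchor
{
    public PointF Center;                   // the Anchor "simply contains a PointF"
    public virtual void Paint(Graphics g) { /* draw a small handle */ }
}

public class Communication                  // also implements IUMLtoXML in the project
{
    private Anchor[] anchors = new Anchor[3];
    public virtual void Paint(Graphics g) { /* draw a line */ }
}

public class UseCase                        // also implements IUMLtoXML in the project
{
    protected Anchor[] anchors = new Anchor[8];
    public virtual void Paint(Graphics g) { /* draw an ellipse */ }
}

public class Actor : UseCase                // inherits, overriding Paint and SaveTo
{
    public override void Paint(Graphics g) { /* draw a stick-man */ }
}
```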

I am sure I could have done better (and, possibly, I'll go back and re-implement it all in some better way). However, this situation gave me enough stuff to think a bit about classes and interfaces. Here's what happened (in a sort-of-chronological way).

At first I started out with a Communication class, a UseCase class, and an Actor class. Once I figured out the basics (see "Gory Details: Painting"), I looked at my code and realized that there was *a lot* of replication going on and, as I moved on in the project, I could easily forget to update the source in three thousand different places (ok, not three thousand, but you get the picture).

By chance, at the same time I was reading Alexandrescu's book on generic programming with C++ templates (yeah, that's another language, I know), the Gamma et al. book on Design Patterns, and a few other things on similar topics. So I figured it was a good chance to look at the myUML project and see if I could improve my class tree (which, at the time, was more like a forest of three single-node trees) and minimize the code replication.

So, since we don't have templates... oops, *Generics* in C# (yet), I figured I'd look at interfaces to solve my problem. In short, the code replication I was facing was due to the fact that the three basic figures (i.e. classes) in the project all implement the same set of characteristics and behaviors. Alas, they implement them in different ways, and, in some cases, they don't share all of these characteristics and behaviors. For instance, all three classes need to be painted to screen (i.e. they all implement a Paint method). However, the Communication class is painted in a rather different way than the other two (it's a line, while both the UseCase and Actor classes are bounded by a rectangle!). And even for the UseCase and Actor classes, which are both painted within a certain rectangle, the similarities -in regard to this behavior- end there (the UseCase has an ellipse within that rectangle, the Actor has a stick-man in it, and they display any text associated with them in different positions).

So, thinking that interfaces could solve my problem, I started writing interfaces. I ended up with...hmm... 11 interfaces, each corresponding to a particular behavior I wanted to implement in one or more of the figures. Some interfaces also inherited from others.

Then, I realized a tricky thing that nobody had ever pointed out to me in regard to interfaces: Interfaces define method signatures, but no implementation (chorus: "Duh!"). Sure, it seems obvious, but I didn't realize how annoying this can be until I actually wanted to take advantage of them. What I was looking for, really, was a way to implement multiple inheritance. And, I'm sad to say, even if you might have heard a thousand times that interfaces allow you to do just that, that's not exactly true. Interfaces let you get around the problem, in a sense, but they are not a "Divine Intervention" that you can use to get what the language does not give you (namely, "multiple inheritance"). Let me give you an example.

Consider good old class inheritance. You have a base class (aptly named Base), implementing a certain method:

using System;
public class Base
{
  public Base() { }

  public virtual void Foo()
  {
    Console.WriteLine("Hello World!");
  }
}

I made Foo public and virtual, so that code using Base can call it, and a class inheriting from Base can later override it. And, here comes our class inheriting from Base:

using System;
public class myClass : Base
{
  public myClass() : base() { }
}

At this point, the following code compiles and produces the "Hello World!" output:

myClass C1 = new myClass();
C1.Foo();

We are now free to override the Foo method in myClass, if we wish to provide a different behavior for that class. For instance:

using System;
public class myClass : Base
{
  public myClass() : base() { }

  public override void Foo()
  {
    Console.WriteLine("I am overriding!");
  }
}
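With Foo declared as public virtual in Base, the override takes part in dynamic dispatch, so calling Foo through a Base reference still runs the derived version. A quick self-contained check:

```csharp
using System;

public class Base
{
    public Base() { }
    public virtual void Foo() { Console.WriteLine("Hello World!"); }
}

public class myClass : Base
{
    public myClass() : base() { }
    public override void Foo() { Console.WriteLine("I am overriding!"); }
}

public class Program
{
    public static void Main()
    {
        Base b = new myClass();
        b.Foo();   // prints "I am overriding!" -- the runtime type decides
    }
}
```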

On the other hand, consider the case of an interface, such as

public interface Ifoo
{
  void Foo();
}

This simply means that any class "inheriting" (ok, ok, "implementing") this interface *must* provide an implementation of the Foo method in order to compile. So, we would have to do something like:

using System;
public class Base : Ifoo
{
  public Base() { }

  public void Foo() { Console.WriteLine("Hello World!"); }
}

At this point, if we were to implement myClass, we would have to provide a new implementation of the Foo method. Granted, we could make the Foo method in Base virtual and let myClass inherit it, possibly overriding it, but that simply replicates what we had to do with class inheritance.

So, where does this leave us? My conclusion is (until someone enlightens me otherwise) that interfaces are a good thing in certain specific situations, but they do not replace the good ol' class inheritance system. Indeed, I included one interface in the myUML project. We will get to that shortly. First, let me give you an example of a case where an interface would be the correct tool to use.

Consider the scenario where you are developing a Bag container, and you wish your bag to implement a "doubling" functionality. In other words, an application using a Bag object should be able to call:

myBag.Double();

and all the items in the Bag would be "doubled" (e.g. an integer like 2 would become 4). Internally, your Bag class would probably use one of the standard collection objects to hold the objects it receives. Thus, when the Double method is invoked, you would loop through all of the objects in your internal collection and double each one. However, as soon as you put something in a collection, it is cast to object (again, when we get Generics, this scenario will change). So, in your doubling loop (which might be a foreach statement), you would have, for each object, code to figure out exactly what class the object belongs to, cast it to a reference of the appropriate type, and then double it (not to mention that value types and reference types would be treated differently). So, for example's sake, assume we are considering storing, in the Bag, objects of one of two classes (class Foo and class Bar). We would have:

public class Foo
{
  private int n = 0;

  public Foo(int x) { n = x; }

  public void Double() { n*=2; }
}

public class Bar
{
  private string s = "";

  public Bar(string x) { s = x; }

  public void Double() { s = s+s; }
}

using System;
using System.Collections;

public class Bag
{
  private ArrayList list = new ArrayList();

  public Bag() { }

  public void AddFoo(Foo f) { list.Add(f); }
  public void AddBar(Bar b) { list.Add(b); }

  //Actually, why not this one?
  public void AddWidget(object o) { list.Add(o); }

  public void Double()
  {
    foreach(object o in list)
    {
      if(o is Foo)
      {
        Foo f = o as Foo;
        f.Double();
      }
      else if (o is Bar)
      {
        Bar b = o as Bar;
        b.Double();
      }
      //And..ehr, what if it's not a Foo, nor a Bar ?
    }//FOREACH
  }
}

Sure, it might be ugly but, with only a Foo and a Bar class to deal with, it's manageable. Ok, so now that you're done with your Bag class, and it fits nicely in your application, time goes by and someone (generally a pointy-haired manager) decides that your Bag should be able to accept objects of type Foobar as well.

So, you figure, you have to go back and add another "else if" case to your Bag.Double implementation. Sometimes things are not that simple. In the long term, this approach of "go-back-and-modify" is not a good thing. An interface would help you out by letting you define some requirement that any object stored in the bag *must* meet. Then, your code may assume that the objects in the list will meet that requirement, and you may deal with only that specific characteristic, regardless of the actual type of the objects. So, for instance:

public interface IcanDouble //Idoubleable sounded odd
{
  void Double();
}

public class Bag
{
  private ArrayList list = new ArrayList();

  public Bag() { }

  public void AddWidget(IcanDouble x) { list.Add(x); }

  public void Double()
  {
    foreach(IcanDouble x in list)
      x.Double();
  }
}

Looks better, at least to my eyes! Now, your Foo and Bar classes need to implement the interface:

public class Foo : IcanDouble
{
  private int n = 0;

  public Foo(int x) { n = x; }

  public void Double() { n*=2; }
}

public class Bar : IcanDouble
{
  private string s = "";

  public Bar(string x) { s = x; }

  public void Double() { s = s+s; }
}

And, if they really want to add the Foobar class, they just need to make sure it implements the IcanDouble interface, providing a Double method (that does whatever it needs to do to double a Foobar object).

Your Bag is now much more stable, because it relies upon the various classes it deals with to meet a certain requirement. In other words, the interface acts as a "public contract" that any class wanting to work with the Bag must subscribe to and respect. Incidentally, the Bag class itself provides a Double method, so we could make it "subscribe to" the contract (by adding : IcanDouble to its first line) and we could store a Bag in a Bag.
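That last remark can be verified directly. Here is a self-contained sketch of the Bag-in-a-Bag idea; the Value property on Foo is my addition, just to observe the result:

```csharp
using System;
using System.Collections;

public interface IcanDouble
{
    void Double();
}

public class Foo : IcanDouble
{
    private int n = 0;
    public Foo(int x) { n = x; }
    public int Value { get { return n; } }   // added here for inspection only
    public void Double() { n *= 2; }
}

// The Bag subscribes to its own contract, so a Bag can hold a Bag.
public class Bag : IcanDouble
{
    private ArrayList list = new ArrayList();
    public void AddWidget(IcanDouble x) { list.Add(x); }
    public void Double()
    {
        foreach (IcanDouble x in list)
            x.Double();
    }
}

public class Program
{
    public static void Main()
    {
        Foo f = new Foo(2);
        Bag inner = new Bag();
        inner.AddWidget(f);
        Bag outer = new Bag();
        outer.AddWidget(inner);       // a Bag in a Bag
        outer.Double();               // recurses through the inner Bag
        Console.WriteLine(f.Value);   // prints 4
    }
}
```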

So, after I thought of this example, I figured that interfaces were cool, but they solve a problem that was not the problem I was having with the myUML project.

So, how come I have the IUMLtoXML interface in the project?

Well, simply put, I ended up in a situation similar to the one described above for the Bag class. The form that holds the various figures keeps them in a list (aptly named figures) and, when the user requests to save the Use Case Diagram to file, I had some ugly code looking exactly like that series of if, else-if, else-if... in the first Bag class. By defining the IUMLtoXML interface, I simplified that code a lot:

foreach(IUMLtoXML fig in figures)
  fig.SaveTo(xmlw);

Granted, I still have other places in the code where the ugly if, else-if, else-if... code is found, but I am dealing with "only" three classes, so it's still manageable. Plus, I have time to go back and clean it all up... now that I understand interfaces a bit better.
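For reference, the IUMLtoXML contract boils down to something like the following. The article only names SaveTo and ReadFrom, so the XmlTextWriter/XmlTextReader parameter types are my assumption, suggested by the xmlw variable in the snippet above; the stub class is hypothetical, just to show the shape of the save loop:

```csharp
using System.Xml;

public interface IUMLtoXML
{
    void SaveTo(XmlTextWriter xmlw);     // parameter types are assumptions
    void ReadFrom(XmlTextReader xmlr);
}

// Hypothetical implementor -- not the project's UseCase class.
public class UseCaseStub : IUMLtoXML
{
    public void SaveTo(XmlTextWriter xmlw)
    {
        xmlw.WriteStartElement("UseCase");
        xmlw.WriteEndElement();          // emits <UseCase />
    }
    public void ReadFrom(XmlTextReader xmlr) { }
}
```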

Not too gory details: to shift or not to shift?

Well, I must admit this is one thing I made more difficult than it had to be, but just in case you've never dealt with capturing keyboard events and figuring out things like "Is the user pressing the Shift key?", here's how I did it, followed by what not to do.

First, I set to true the KeyPreview property of the form (in the Misc section of its properties). This lets the form peek at the key events that take place on any of the controls it contains. Be careful when setting this to true, since, if your controls are listening to these same events, you must synchronize the various listeners. Next, I provided two simple handlers for the form KeyDown and KeyUp events:

private void TestUseCases_KeyDown(object sender,
    System.Windows.Forms.KeyEventArgs e)
{
  if(e.Shift) shiftPressed=true;
  e.Handled=false;
}
private void TestUseCases_KeyUp(object sender,
    System.Windows.Forms.KeyEventArgs e)
{
  shiftPressed=e.Shift;
}

Where shiftPressed is a private bool variable of the form, set to true whenever the user is holding the shift key.

Now, what I did wrong before getting to this (who was it that said that "failures are much more interesting than successes"? Hoare, maybe?).

I think my first approach was 99% correct (good, but not good enough!). I was using this handler for the KeyUp event:

private void TestUseCases_KeyUp(object sender,
    System.Windows.Forms.KeyEventArgs e)
{
  shiftPressed=!e.Shift;
}

[chorus: "Uh?"] Well, let me explain my idea at the time: within the context of a KeyUp event, the event arguments should tell me what key(s) was/were released (just like, for a KeyDown event, they tell me which key(s) was/were pressed). So, if the shift key is one of those that were released, I would find the Shift property of the event arguments set to true, right? *Wrong* It turns out that the key event arguments simply give you a "snapshot" of the keyboard's state at the time when the event is fired. So, since the KeyUp event is fired after the shift key is released, by that time the key is up, and the Shift property is set to false.
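Once you know that e.Shift is a snapshot of the modifier state at the moment the event fires, both handlers can simply mirror it. A fragment in the style of the handlers above (it belongs inside the form class, next to the shiftPressed field):

```csharp
// e.Shift reports whether Shift is down *right now*, so mirroring it in
// both handlers keeps shiftPressed accurate without any negation tricks.
private void TestUseCases_KeyDown(object sender,
    System.Windows.Forms.KeyEventArgs e)
{
  shiftPressed = e.Shift;   // true while Shift is held down
}

private void TestUseCases_KeyUp(object sender,
    System.Windows.Forms.KeyEventArgs e)
{
  shiftPressed = e.Shift;   // false once Shift has been released
}
```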

The other mistake I made when dealing with this was to use MessageBoxes to try to debug the whole thing (a "Key has been pressed" message in the KeyDown handler, and a "Key has been released" message in the KeyUp handler). While this is usually a quick and dirty way to trace pieces of code, here it threw me off track even more. The MessageBox.Show statements, in fact, did something odd to the chain of events, and the KeyUp event appeared never to be fired. In truth, this was only because of the popping-up of the MessageBox in the KeyDown handler (and, with a carefully timed "click-on-the-message-box-and-immediately-release-the-shift-key", I could verify that the KeyUp event was indeed fired and handled).

All in all, I got a big scare for nothing (ehr... "So, if I handle the KeyDown event, I can't intercept the KeyUp event?").

Gory details: painting

One of the basic functionalities implemented by the myUML project is, of course, that of letting the user draw new diagrams. This was my first experience with painting objects on screen in C#, and hence my first face-to-face with GDI+. Alas, even with the earlier Visual Studio 6.0, I never did much with GDI and such. So, I'm sure my code will look ugly to many of you who have more experience with drawing on screen than I do. I am also sure I could improve the effectiveness of the code, and I probably will, at some point in the future, when I understand the Paint event better. But, for the time being, I'm rather content with what I have. So, let me share a few thoughts on this topic.

First and foremost, if you are approaching the painting problem for the first time, let me suggest you read as many different tutorials on the topic as possible. I'd guess that every book on C# includes at least half a chapter on the GDI+ basics. However, I found that each book (or online reference) tends to give you some examples without covering everything. Maybe there is a book out there that talks *only* about GDI+, but I haven't seen it yet. In any case, the best thing you can do is to read as many different examples as you can, and then mix and match as you need.

In the myUML project, I had to allow the user to draw three types of figures on screen (actually, only on the pnl_drawable Panel of my form). I decided to start out with the panel as-is, but I also realized I wanted to give the user the freedom to use as much space as he wishes. For instance, maximizing the form will resize the panel to take up as much space as possible (ok, I still save some space for the other controls). So, the draw-able panel may potentially have scrollbars. This is the key point, I think. While painting per-se is a relatively simple operation, once you start thinking of scrollable controls, and painting on them, the whole deal becomes a bit confusing. Let's take it one step at a time; I will consider the UseCase class to show you how to implement this, but the same logic applies to the Communication and Actor classes.

First of all, the painting begins at the form level. In my case, I created an event handler for the Paint event of the pnl_drawable Panel, but you could use a whole form as draw-able area, and hence handle the Paint event of the form. In any case, the signature for that event handler, which Visual Studio kindly provides for you, is:

private void pnl_drawable_Paint(object sender, 
      System.Windows.Forms.PaintEventArgs e)

Admittedly, that doesn't seem to be much to work with, does it? Then again, the PaintEventArgs object actually holds a bunch of stuff that will prove rather useful. So, by using GDI+, if we wanted to paint an ellipse on the screen every time that the Paint event is fired, we could implement the paint handler like this:

private void pnl_drawable_Paint(object sender, 
                            System.Windows.Forms.PaintEventArgs e)
{
  Graphics g = e.Graphics;
  Pen pen = new Pen(Color.Black);
  RectangleF rectangle = new RectangleF(10.0f, 10.0f, 100.0f, 50.0f);
  g.DrawEllipse(pen, rectangle);
}

This draws an ellipse (just the outline of it) in black, within a rectangle with top-left corner at (10, 10) and with width 100 and height 50. Simple enough. And, you can go on and explore all of the GDI routines to draw and fill rectangles, ellipses, lines, and polygons!
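One detail the snippet above glosses over: Pen and Brush objects wrap unmanaged GDI+ handles, so it is good practice to dispose of them when you are done. A variant of the same handler (same drawing, just wrapped in a using block; this fragment belongs inside the form class):

```csharp
private void pnl_drawable_Paint(object sender,
    System.Windows.Forms.PaintEventArgs e)
{
  // Pen wraps an unmanaged GDI+ handle; "using" guarantees Dispose is called.
  using (Pen pen = new Pen(Color.Black))
  {
    RectangleF rectangle = new RectangleF(10.0f, 10.0f, 100.0f, 50.0f);
    e.Graphics.DrawEllipse(pen, rectangle);
  }
}
```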

Ok, so in the myUML case, I didn't have the luxury of painting figures always in the same spot. So, in the form's Paint event, I decided to loop through the various figures defined in the figures ArrayList, and ask each one, in turn, to paint itself (note that the following is not exactly the code you will find in the project, as I will explain briefly):

private void pnl_drawable_Paint(object sender, 
             System.Windows.Forms.PaintEventArgs e)
{
  Graphics g = e.Graphics;
  //For each Figure in the figures ArrayList:
  foreach(object o in figures)
  {
    //Check what the figure is:
    if(o is Communication)
    {
      Communication tmpC = o as Communication;
      tmpC.Paint(g);
    }
    else if(o is UseCase)
    {
      UseCase tmpUC = o as UseCase;
      tmpUC.Paint(g);
    }
    else if(o is Actor)
    {
      Actor tmpA = o as Actor;
      tmpA.Paint(g);
    }
  }//FOREACH
}

Aye, aye, this is one of the places where I could develop an Ipaintable interface and simplify the code (see "Not too gory details: classes and interfaces" above). In any case, this could be paired with something like this in the UseCase class:

public virtual void Paint(Graphics g)
{
  RectangleF tmpRectangle = new RectangleF(
    anchors[UseCase.UseCaseAnchorsToInt(UseCaseAnchors.TopLeft)].Center,
    new SizeF(width, height));

  if(filled)
  {
    Brush b = new SolidBrush(color);
    g.FillEllipse(b, tmpRectangle);
  }
  if(bordered)
  {
    Pen pen = new Pen(brdColor);
    g.DrawEllipse(pen, tmpRectangle);
  }
  if(selected)
  {
    for(int i=0; i<NUM_USE_CASE_ANCHORS; ++i)
      anchors[i].Paint(g);
  }
  if(text.Length>0)
  {
    SizeF txtBox = g.MeasureString(text, arialFont10);
    PointF txtPoint = new 
         PointF(tmpRectangle.Left, tmpRectangle.Top);
    txtPoint.X+=(width/2.0f);
    txtPoint.Y+=(height/2.0f);
    txtPoint.X-=(txtBox.Width/2.0f);
    txtPoint.Y-=(txtBox.Height/2.0f);
    g.DrawString(text, arialFont10, 
         new SolidBrush(txtColor), txtPoint);
  }
}

Simple enough...apparently. Note how I called a Paint method of the Anchor class to let each Anchor paint itself, and how I used the Graphics.MeasureString method to calculate the size of the rectangle surrounding the text to be painted. This is a pretty intensive method, and it would be better to have it readily available instead of calculating it every time the UseCase is painted, but it requires a Graphics object to be used (and I have no Graphics object available when the Text property of the UseCase is being modified.. at least in this version).
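One way around re-measuring on every repaint is to cache the measured size inside UseCase and invalidate the cache whenever the text changes. A sketch of hypothetical additions to the class (text and arialFont10 are the existing fields used in the Paint method above; the caching members are mine):

```csharp
// Hypothetical additions to UseCase: cache the measured text box and
// re-measure lazily, only after the text has actually changed.
private SizeF cachedTxtBox;
private bool txtBoxDirty = true;

public string Text
{
  get { return text; }
  set { text = value; txtBoxDirty = true; }   // invalidate the cache
}

private SizeF GetTextBox(Graphics g)
{
  if (txtBoxDirty)
  {
    cachedTxtBox = g.MeasureString(text, arialFont10);   // the expensive call
    txtBoxDirty = false;
  }
  return cachedTxtBox;
}
```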

Anyway, it all looks simple and slick, but there is always a "but". In particular, this code does not work right if the surface over which you are painting is a scrollable one. This Paint method, in fact, does not account for the difference between "world coordinates" and "screen coordinates". Let me illustrate: if the surface that we will paint on is, say a 42x42 panel, when you click in the middle of it, your coordinates are (21, 21).. ALWAYS! Why am I stressing the "always" part? Because, if that same surface is a scrollable one, the game changes. With a scrollable surface, assuming your surface can show any point within a 84x84 area, you have a couple of scenarios:

    0         42       84      0         42       84
    |         |        |       |         |        |
    --------------------       --------------------
  0-|XXXXXXXXX         |     0-|         XXXXXXXXX|
    |XXXXXXXXX         |       |         XXXXXXXXX|
    |XXXXXXXXX         |       |         XXXXXXXXX|
    |XXXXXXXXX         |       |         XXXXXXXXX|
    |XXXX.XXXX         |       |         XXXX.XXXX|
    |XXXXXXXXX         |       |         XXXXXXXXX|
    |XXXXXXXXX         |       |         XXXXXXXXX|
    |XXXXXXXXX         |       |         XXXXXXXXX|
    |XXXXXXXXX         |       |         XXXXXXXXX|
 42-|XXXXXXXXX         |    42-|         XXXXXXXXX|
    |                  |       |                  |
    |                  |       |                  |
    |                  |       |                  |
    |                  |       |                  |
    |                  |       |                  |
    |                  |       |                  |
    |                  |       |                  |
    |                  |       |                  |
    |                  |       |                  |
 84-|                  |    84-|                  |
    --------------------       --------------------
         Case A                      Case B

The 'X's show which portion of the greater area the surface is displaying; the coordinates at the top and left of the area are "world coordinates"; the '.' indicates where you are clicking. See the difference? In both cases, your surface tells you that you clicked at coordinates (21, 21), since you clicked in the middle of the surface. However, in Case A this means you actually clicked at "world coordinates" (21, 21), while in Case B you are clicking at "world coordinates" (63, 21). So, if we don't take this into account when painting objects, we end up painting them in the wrong places. After all, when we create a new UseCase, we save its coordinates in terms of "world coordinates", since the "screen coordinates" may change based on the size and position of the draw-able panel itself.

One more thing before we look at the solution to this kind of problem: the clip area (or clip rectangle). Painting all the various graphics on screen is a rather resource-intensive operation, and it takes place *very often*. So, in order to minimize the load on the system, Windows provides (in the PaintEventArgs argument of the Paint event) an important clue, named the ClipRectangle. This is a Rectangle object that indicates the area you should repaint due to changes on screen. For instance, consider the case where we have painted a rectangle to screen, and some other object (say, a window or form from another application) has then been moved to overlap with our rectangle, like so:

                -----------
                |XXXXXXXXX|
                |XXXXXXXXX|
  --------------|XXXXXXXXX|
  |             |XXXXXXXXX|
  |             |XXXXXXXXX|
  |             |XXXXXXXXX|
  |             -----------
  |                 |
  |                 |
  -------------------

where the X'ed rectangle is the form from the other application. If we now move that other application's form out of the way, Windows would fire a Paint event for our rectangle, but the ClipRectangle property of the event arguments would tell us that we need to repaint *only* the portion that has just been un-covered, indicated in the following diagram with dots:

  -------------------
  |              ...|
  |              ...|
  |              ...|
  |              ...|
  |                 |
  |                 |
  -------------------

This mechanism lets Windows repaint only the minimum required, across all of the graphical objects it manages. Of course, this is a simple example. What actually happens is that the Paint event is fired a number of times *as we move the other form out of the way*, but I think you get the picture (pun intended) by now.
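Putting the two ideas together, the form-level handler can translate world coordinates to screen coordinates via the panel's scroll state, and skip figures that fall entirely outside the clip rectangle. A sketch, not the project's actual handler (useCases and BoundsRect are hypothetical stand-ins for the figures list and each figure's world-space bounds):

```csharp
private void pnl_drawable_Paint(object sender,
    System.Windows.Forms.PaintEventArgs e)
{
  // AutoScrollPosition is negative once the panel is scrolled, so adding it
  // to world coordinates yields screen coordinates.
  Size scrollOffset = new Size(pnl_drawable.AutoScrollPosition);

  foreach(UseCase uc in useCases)   // hypothetical list of figures
  {
    Rectangle screenBounds = uc.BoundsRect;   // hypothetical world-space bounds
    screenBounds.Offset(scrollOffset.Width, scrollOffset.Height);

    // Repaint only what Windows asked for: skip figures entirely
    // outside the clip rectangle.
    if(!screenBounds.IntersectsWith(e.ClipRectangle))
      continue;

    uc.Paint(e.Graphics /* plus the clip/offset parameters */);
  }
}
```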

So, what does all this mean to our myUML project and its figures? First of all, the Paint event handler on the form needs to consider the ClipRectangle when painting anything. Since we are asking each figure to paint itself, we pass the ClipRectangle on to the Paint methods of the UseCase (and Actor, and Communication) classes.

Additionally, the Paint handler passes on a scrollOffset Size object, which it gets directly from the pnl_drawable object. This Size describes the position of the scrollbars surrounding the panel, telling us how to reconcile the "world coordinates" and "screen coordinates" mentioned above. Furthermore, I defined two constants on the TestUseCases form (MAX_DRAW_WIDTH and MAX_DRAW_HEIGHT, both set to 1024 - I hope that's large enough) to describe the limits of the area we may paint on through our pnl_drawable Panel, and we pass those on to the Paint methods as well.

One more parameter I pass on is the PaintMode value, indicating whether this UseCase (/Actor/Communication) should be painted in Standard mode or Outline mode. Outline mode is used for the figure the user is currently drawing and, generally, makes the figure paint itself as a blue dashed outline, without any filled area. The final result, for the UseCase, is:

public virtual void Paint(Graphics g, 
      Rectangle clipRectangle, Size scrollOffset,
      int MAX_DRAW_WIDTH, int MAX_DRAW_HEIGHT, PaintMode pm)
    {
      //Check for clipRectangle + scrollOffset
      // to be in drawable range
      if (clipRectangle.Left+scrollOffset.Width < MAX_DRAW_WIDTH ||
        clipRectangle.Top+scrollOffset.Height < MAX_DRAW_HEIGHT)
      {
        RectangleF tmpRectangle = new RectangleF(
          anchors[UseCase.UseCaseAnchorsToInt
            (UseCaseAnchors.TopLeft)].Center+scrollOffset,
          new SizeF(width, height));

        Pen pen;
        //Depending on the PaintMode:
        switch(pm)
        {
          case PaintMode.Standard:
            //Filled?
            if(filled)
            {
              Brush b = new SolidBrush(color);
              g.FillEllipse(b, tmpRectangle);
            }//IF filled
            //Bordered?
            if(bordered)
            {
              pen = new Pen(brdColor);
              g.DrawEllipse(pen, tmpRectangle);
            }//IF bordered
            //Anchor Points (if selected) ?
            if(selected)
            {
              for(int i=0; i<NUM_USE_CASE_ANCHORS; i++)
                anchors[i].Paint(g, clipRectangle, 
                  scrollOffset, MAX_DRAW_WIDTH, MAX_DRAW_HEIGHT, pm);
            }//IF selected
            //Text?
            if(text.Length>0)
            {
              SizeF txtBox = g.MeasureString(text, arialFont10);
              PointF txtPoint = new 
                PointF(tmpRectangle.Left, tmpRectangle.Top);
              txtPoint.X+=(width/2.0f);
              txtPoint.Y+=(height/2.0f);
              txtPoint.X-=(txtBox.Width/2.0f);
              txtPoint.Y-=(txtBox.Height/2.0f);
              g.DrawString(text, arialFont10, 
                   new SolidBrush(txtColor), txtPoint);
            }//IF text.Length>0
            break;
          case PaintMode.Outline:
            //Paint only the border, in a dashed blue pen:
            pen = new Pen(Color.Blue, 2);
            pen.DashPattern = new float[] {5, 2};
            g.DrawEllipse(pen,
              tmpRectangle.Left, tmpRectangle.Top, 
                 tmpRectangle.Width, tmpRectangle.Height);
            break;
          default:
            //Should never happen.
            break;
        }//SWITCH on pm
      }//IF clipRectangle+scrollOffset is in drawable range
    }

Note how we define the tmpRectangle Rectangle (which will surround the UseCase) by considering the scrollOffset that the pnl_drawable_Paint event handler passed us:

RectangleF tmpRectangle = new RectangleF(
 anchors[UseCase.UseCaseAnchorsToInt(UseCaseAnchors.TopLeft)].Center+
 scrollOffset, new SizeF(width, height));

Also, we pass all of these parameters to the various Anchors, to let them paint appropriately:

anchors[i].Paint(g, clipRectangle, scrollOffset,
    MAX_DRAW_WIDTH, MAX_DRAW_HEIGHT, pm);

Fine and dandy, right? Well, not exactly. The fact that we consider the scrollOffset when painting implies that the figure's position was stored in "world coordinates" when it was created. So, in other words, when we create a new figure, we have to account for the scrollOffset of the panel (at that time) to set the figure's position appropriately. The TestUseCases form's handler for the MouseUp event is where we do this. Here's a minimized version:

    private void pnl_drawable_MouseUp(object sender, 
                       System.Windows.Forms.MouseEventArgs e)
    {
      if(drawing)
      {
        if(currTool!=UseCaseDiagramTool.None)
        {
          //Set the endPoint:
          PointF endPoint = new PointF(e.X, e.Y);

          //Get the scroll offset
          // (used to adjust the Points' coordinates):
          SizeF scrollOffset = new 
              SizeF(this.pnl_drawable.AutoScrollPosition);

          //Variable used in more than one case:
          float minx=0.0f;
          float maxx=0.0f;
          float miny=0.0f;
          float maxy=0.0f;
          float theW=0.0f;
          float theH=0.0f;

          //Add a new figure to the figures ArrayList: only if the
          //startPoint and endPoint are at least MIN_FIGURE_SIZE away:
          float delta = PointDiff(startPoint, endPoint);

          if(delta>=MIN_FIGURE_SIZE)
          {
            //...

            switch(currTool)
            {
              //...
              case UseCaseDiagramTool.UseCase:
                //Make a new UseCase: adjust the Points' coordinates
                //with the scroll offset; use
                // the lowest (X,Y) pair as root, and
                //calculate the width and height:

                startPoint.X=(startPoint.X-scrollOffset.Width);
                startPoint.Y=(startPoint.Y-scrollOffset.Height);
                endPoint.X=(endPoint.X-scrollOffset.Width);
                endPoint.Y=(endPoint.Y-scrollOffset.Height);

                //...

                minx = Math.Min(startPoint.X, endPoint.X);
                maxx = Math.Max(startPoint.X, endPoint.X);
                miny = Math.Min(startPoint.Y, endPoint.Y);
                maxy = Math.Max(startPoint.Y, endPoint.Y);

                theW = (maxx-minx);
                theH = (maxy-miny);
                UseCase tmpUC = new 
                  UseCase(new PointF(minx, miny), theW, theH);
                tmpUC.Selected=true;

                //...

                figures.Add(tmpUC);
                break;
              //...

            }//SWITCH on currTool
          }//IF delta>=MIN_FIGURE_SIZE

          //Reset the startPoint and drawingPoint
          startPoint = new PointF(0.0f, 0.0f);
          drawingPoint = new PointF(0.0f, 0.0f);
        }//IF currTool!=DrawingTool.NONE
        //ELSE: do nothing: this should never happen!

        //Stop drawing:
        drawing=false;

        //...

        //Ask for a GUI refresh:
        this.pnl_drawable.Invalidate();

      }//IF drawing
      //ELSE: do nothing.
    }

Notice how we consider the scrollOffset by *subtracting* its dimensions from the startPoint and endPoint that are then used to define the UseCase. This is the mirror image of what we did when painting the UseCase (there, we were *adding* the scrollOffset to the coordinates of the UseCase), so it all balances out.
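
To make the symmetry concrete, here is a tiny, self-contained sketch of the two conversions (the helper names ScreenToWorldX and WorldToScreenX are mine, not part of the project; remember that AutoScrollPosition yields negative values once the panel is scrolled):

```csharp
using System;

class ScrollOffsetDemo
{
    //Hypothetical helpers mirroring the two conversions in the article:
    //figure creation subtracts the scroll offset, painting adds it back.
    //(offsetX plays the role of scrollOffset.Width.)
    public static float ScreenToWorldX(float screenX, float offsetX)
    {
        return screenX - offsetX;
    }

    public static float WorldToScreenX(float worldX, float offsetX)
    {
        return worldX + offsetX;
    }

    static void Main()
    {
        //AutoScrollPosition is negative once the panel is scrolled,
        //e.g. 30 pixels to the right:
        float offsetX = -30f;

        float clickedX = 100f;                             //screen coordinate
        float worldX = ScreenToWorldX(clickedX, offsetX);  //130: where the figure "lives"
        float backX = WorldToScreenX(worldX, offsetX);     //100: where it gets painted

        Console.WriteLine(worldX + " " + backX);           //130 100
    }
}
```

Subtract on creation, add on paint: the two cancel out, which is exactly why the figure appears where the user clicked.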

Of course, if you read around, you will find out that GDI+ provides the Graphics.TranslateTransform method to perform this conversion between "world coordinates" and "screen coordinates" for you. However, it isn't that hard to do it yourself, is it? And, by implementing it yourself, you are in control, and you can understand what's going on (or at least try :) ).
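
For the curious, here is a rough sketch (not the project's actual code; the drawing call is purely illustrative) of what the TranslateTransform route could look like in the pnl_drawable Paint handler:

```csharp
//Hypothetical sketch: let GDI+ apply the scroll offset once,
//instead of adding it to every point by hand.
private void pnl_drawable_Paint(object sender, PaintEventArgs e)
{
    //AutoScrollPosition is already negative when the panel is scrolled,
    //so it can be handed to TranslateTransform as-is:
    Point offset = this.pnl_drawable.AutoScrollPosition;
    e.Graphics.TranslateTransform(offset.X, offset.Y);

    //From here on, everything drawn on e.Graphics is shifted for us:
    //a figure whose world-coordinate origin is (130, 130) lands at
    //screen position (100, 80) when the offset is (-30, -50).
    e.Graphics.DrawEllipse(Pens.Black, 130, 130, 60, 40);
}
```

With this approach, the Paint methods of the figures would no longer need the scrollOffset parameter at all; the trade-off is that the clipping check would have to be translated as well.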

Gory details: custom controls

So, once the painting issue was resolved, I realized I already had a bunch of code in the panel's MouseDown, MouseUp and MouseMove event handlers. After all, when you mouse-down on the panel, you might be starting a new drawing, or selecting a figure, or de-selecting all of the selected figures (if you click on no figure), and it all also depends on whether you are pressing the Shift key or not... on a MouseUp, on the other hand, you might have finished drawing a new figure. So, I gave up, at least for the time being, on letting the user click and drag a figure on screen, or click on its Anchors and resize it (I admit that I did this partially because it all became a bit more complicated after I decided to let the user select multiple figures by shift-clicking...). So, once you draw a figure on screen, how can you move/resize it? For that matter, how can you change its color, or add some text to it? I figured the simple solution was to provide some custom controls to display the details of a given figure, and let the user interact with the control instead.

Enter the CommunicationDisplay, UseCaseDisplay and ActorDisplay controls. For the most part, they are simple controls including a series of text boxes, buttons, checkboxes, and even a combo box to let the user modify the figure's properties. However, there are a couple of interesting things to be said about these controls.

First, the way the TestUseCases form manages them. The custom controls are actually always there, but the form renders them invisible and disabled if you have no figure selected, or more than one figure selected. Interestingly enough, in order to add the custom controls to the form in Visual Studio's designer, I had to provide a parameter-less constructor for each (originally, I envisioned the controls as requiring you to pass a Communication, or UseCase, or Actor to the constructor, to ensure they would never be in a state where they have no figure to display). This forced me to change the internal workings of the controls to account for the situation where the control has been constructed but has no figure to display. When you run the application, though, you will never see this case, since, when it happens, the form hides the control directly.

Secondly, these custom controls gave me an opportunity to explore and experiment with custom events and event handling. In essence, I didn't want the controls themselves to have to deal with the consequences of the user changing some figure's attribute. For instance, aside from recording the change in its internal representation of the UseCase, when a user changes the Color of a UseCase in the UseCaseDisplay control, what should the control do?

One option was to provide the control with a reference to the actual UseCase in the surrounding form, and let the control apply changes to it (and force a refresh of the GUI in the form). Possible, but not nice. After all, tomorrow the UseCaseDisplay might be used in some place where there is no GUI to refresh. So, event-driven architecture to the rescue: it sounded like a perfect situation in which to use events and delegates. When the user changes some property in the control, the control signals the event to anyone who is interested. For each control, I defined a series of custom events, so that the application listening for the events can quickly determine what exactly happened to the control. Here's how to do it (as an example, I will show what I did with regard to the user changing the color of a UseCase).

  1. Define an event arguments class to describe the event:

    In this case, I defined the UseCaseColorChangedEventArgs class (you'll find it in the UseCaseDisplay.cs file, before the UseCaseDisplay class itself):

      public class UseCaseColorChangedEventArgs : System.EventArgs
      {
        public Color                NewColor = Color.Black;
    
        public UseCaseColorChangedEventArgs(Color inC) : base()
        { NewColor=inC; }
      }

    This class inherits from the System.EventArgs class, has a single public property, and implements a simple constructor (which, note, calls the base's constructor as well).

  2. Define the signature for the event handler for this event:

    In this case, we need an event handler for the UseCaseColorChanged event:

    public delegate void UseCaseColorChangedHandler(object sender, 
                 UseCaseColorChangedEventArgs e);

    Like all event handlers, it receives a sender object, but the second parameter is one of our newly created UseCaseColorChangedEventArgs.

  3. Fire the event:

    In this case, the user can click on the "Pick..." button next to the UseCase's color to select a new Color for the UseCase. Here's the handler for the click event on that button:

        private void btn_ColorPick_Click
                           (object sender, System.EventArgs e)
        {
          ColorDialog cd = new ColorDialog();
          cd.AllowFullOpen=false;
          cd.SolidColorOnly=true;
          cd.AnyColor=false;
          cd.Color=saveColor;
          DialogResult dr = cd.ShowDialog(this);
          if(dr==DialogResult.OK)
          {
            Color newColor = cd.Color;
            if(newColor!=saveColor)
            {
              saveColor = newColor;
              this.pnl_Color.BackColor=saveColor;
              if(!initializing)
                OnUseCaseColorChanged
                 (new UseCaseColorChangedEventArgs(saveColor));
            }
          }
        }

    Note how I am using the ColorDialog dialog, but I am restricting the number of colors available through the dialog (I was trying to keep it simple :) ) by setting the AllowFullOpen and AnyColor properties of the dialog to false (and the SolidColorOnly property to true). Also note how I check that newColor is different from the old one (saveColor) before firing the event. After all, if the user does not select a *new* color, no change in the Color has taken place, right?

    So, next I change the background color of the pnl_Color Panel (a control in the UseCaseDisplay itself, so the display is responsible for updating it) to match the new color. Then, the following two lines:

              if(!initializing)
                OnUseCaseColorChanged
                 (new UseCaseColorChangedEventArgs(saveColor));

    In other words, if the display is currently being initialized (a situation marked by the initializing flag being true), we don't want to fire the event. Otherwise, we call the OnUseCaseColorChanged method. This sounds similar to the delegate we described in the previous step, but it's not: we have not defined this method yet. Notice, for instance, that this OnUseCaseColorChanged method is called with one parameter, while the signature for the event handler mentioned two parameters!

  4. Define an event dispatcher:

    Remember how we defined the signature for an event handler for our event in step 2? It was

      public delegate void UseCaseColorChangedHandler(object sender, 
                UseCaseColorChangedEventArgs e);

    Well, if you want to fire the event, you may want someone to receive it, right? So, in the UseCaseDisplay class I defined the following method (which is called, as we've seen, when the event needs to be fired):

    protected virtual void 
       OnUseCaseColorChanged(UseCaseColorChangedEventArgs e)
    {
      if(UseCaseColorChanged!=null)
        UseCaseColorChanged(this, e);
    }

    The reason for this method is that it ensures that at least one event handler has been registered before firing the actual event (the if statement). That's why I call this an event dispatcher. The actual event handler, as we'll see in the next step, may be in some other class, but this method lets us avoid the trap where no such event handler was registered. Note how the call that fires the event matches the signature we defined for the event handler in step 2.

  5. Define an event handler:

    In step 2 we defined the signature for an event handler for our custom event. Now it is time to provide the actual implementation. In the TestUseCases form, I added the following method:

    private void ucDisplay_UseCaseColorChanged(object sender,
          myUML.UseCaseColorChangedEventArgs e)
    {
      UpdateFigure(ucDisplay.UseCase);
    }

    This matches the signature we defined in step 2 (aside, of course, from the name of the method!). In the next step, we will ensure that the call from the event dispatcher in the UseCaseDisplay class will actually invoke this method.

  6. Register the handler:

    If you're using Visual Studio, this is simple:

    In your receiver component (in my case the TestUseCases form), select the custom control, and display its events. Find the event you wish to handle, and click to its right: a drop-down menu is available for you to select which method should handle the event.

    If you are not using Visual Studio, fear not: it's not that hard to do:

    When you create the custom control and add it to the parent object's Controls collection, you can add the event handler to the list of "listeners" for the custom control with the following statement:

    <the_control>.<event_name> += new <delegate_name>(<handler_name>);

    So, for instance:

    this.ucDisplay.UseCaseColorChanged +=
      new myUML.UseCaseColorChangedHandler
        (this.ucDisplay_UseCaseColorChanged);

Not too difficult, is it?

This has been a simple overview of the "Events and Delegates" topic. There's much more to be said on the subject, but I fear it would fall outside the scope of this article.
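
To recap, the six steps above can be condensed into a small, self-contained console sketch (all names here - ColorChangedEventArgs, ColorPicker, and so on - are hypothetical, chosen only to keep the example short):

```csharp
using System;

//Step 1: the event arguments class:
public class ColorChangedEventArgs : EventArgs
{
    public string NewColor = "Black";
    public ColorChangedEventArgs(string inC) : base() { NewColor = inC; }
}

//Step 2: the signature for the event handler:
public delegate void ColorChangedHandler(object sender, ColorChangedEventArgs e);

public class ColorPicker
{
    public event ColorChangedHandler ColorChanged;

    //Step 4: the event dispatcher (fires only if someone registered):
    protected virtual void OnColorChanged(ColorChangedEventArgs e)
    {
        if (ColorChanged != null)
            ColorChanged(this, e);
    }

    //Step 3: fire the event:
    public void Pick(string color)
    {
        OnColorChanged(new ColorChangedEventArgs(color));
    }
}

public class Listener
{
    //Step 5: the actual event handler:
    public void picker_ColorChanged(object sender, ColorChangedEventArgs e)
    {
        Console.WriteLine("New color: " + e.NewColor);
    }
}

public class Program
{
    public static void Main()
    {
        ColorPicker picker = new ColorPicker();
        Listener listener = new Listener();

        //Step 6: register the handler:
        picker.ColorChanged += new ColorChangedHandler(listener.picker_ColorChanged);

        picker.Pick("Blue"); //Prints: New color: Blue
    }
}
```

Note how the dispatcher (step 4) is the only place that checks for null, so the firing code (step 3) never has to worry about whether anyone is listening.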

Of course, I still think I should go back and let a user click-and-drag figures across the screen, and resize them by acting on their anchors. Plus, it would be nice to have a context-menu pop-up when you right click on a selected figure to let you change its properties on the fly, without using the custom controls. But then again, it's also nice to have the custom control pop-up and tell you more about the figure you selected...

Gory details: (De)Serializing: myUML and XML.

The last functionality I implemented in this first version of the myUML project is the ability to save Use Case Diagrams to file and load them from file. When I first approached this problem, I thought I'd implement it so that the diagrams could be saved to file in two different formats: PNG and XML. After looking at the PNG option for a while, I lost interest. After all, if you were to pass the saved diagram to some other application, the XML format would probably be a good choice.

So, armed with a rough idea, and with some wisdom collected along the way regarding interfaces (again, see "Not too gory details: classes and interfaces" above), I made each of the relevant classes able to save and restore themselves in XML format, and I implemented the code behind the "Save Diagram" and "Open Diagram" buttons in the TestUseCases form. Mind you, the implementation is a first draft, and does not include some much-needed checks, but there is always time for improvements. Let me move "from the top down" to walk you through this functionality, as if we were the machine, reacting to the user asking us to save the diagram to file. After this, we will briefly see what happens when the user asks to load a diagram from a file.

When you click on the "Save Diagram" button, the handler for clicks on the toolbar buttons dispatches you to the SaveDiagram routine. Here, the program lets the user specify where the diagram should be saved (and adds the .xml extension for you if you don't type it). Once you have picked a filename, the SaveDiagram routine opens a try block, and calls the SaveXML method (passing it the selected filename). The goal of having this call within a try block is that it allows us to catch, at this level, any exception that might be thrown during the save operation. Knowing that a catch block is ready at this level to report any problem to the user lets me move through the saving code without worrying too much about catching exceptions where they might be fired. After all, in all of the code performing the save, which we will see in a second, I could only catch the exception and re-throw it. In any case, here's the SaveXML method:

    private void SaveXML(string fn)
    {
      XmlTextWriter xmlw = null;
      try
      {
        xmlw = new XmlTextWriter(fn, null);

        //Write header:
        xmlw.Formatting=Formatting.Indented;
        xmlw.WriteStartDocument(false);
        xmlw.WriteStartElement("UseCaseDiagram", null);
        xmlw.Flush();

        foreach(IUMLtoXML fig in figures)
          fig.SaveTo(xmlw);
      }
      catch(Exception e)
      {
        StringBuilder sb = new 
          StringBuilder
          ("An Error occurred while saving the Use Case Diagram\n");
        sb.Append("to the specified file ('");
        sb.Append(fn);
        sb.Append("')\nDetails:\n");
        sb.Append(e.ToString());
        throw new ApplicationException(sb.ToString());
      }
      finally
      {
        if(xmlw!=null)
        {
          xmlw.Flush();
          xmlw.Close();
        }
      }
    }

(Yeah, the try-catch block at this level is rather redundant, but it lets me use a finally clause to close the XmlTextWriter if something goes wrong.) As you can see, I simply open a new XmlTextWriter, and set it up to use indented formatting. Then, I write the start of the document (i.e. the XML declaration), and loop through the figures in the figures ArrayList, treating each as an IUMLtoXML object. This is where I enjoyed one advantage of interfaces: since all of the objects I store in the figures ArrayList implement the IUMLtoXML interface, I am allowed to simply deal with the interface instead of each different class separately.

The IUMLtoXML interface simply defines two methods:

  public interface IUMLtoXML
  {
    void SaveTo(XmlTextWriter x);
    void ReadFrom(XmlTextReader x);
  }

And the Anchor, Communication, UseCase, and Actor class implement it. Note that the Actor class does not list the interface in its first line:

public class Actor : UseCase

However, since the UseCase class implements the interface, the Actor class implicitly implements the interface. Sneaky, I know, and possibly not a good practice, but it's good to know that it is possible, because sooner or later you might stumble upon someone else's code that does just that!
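
Here is a minimal, self-contained illustration of the trick (IGreeter, Base, and Derived are hypothetical names, not part of myUML): Derived never mentions the interface, yet it is a perfectly valid IGreeter because its base class implements it.

```csharp
using System;

interface IGreeter
{
    string Greet();
}

class Base : IGreeter
{
    public virtual string Greet() { return "Hello from Base"; }
}

//Note: no ": IGreeter" here, just like Actor : UseCase in myUML.
class Derived : Base
{
    public override string Greet() { return "Hello from Derived"; }
}

class Program
{
    static void Main()
    {
        Derived d = new Derived();
        IGreeter g = d;                   //Perfectly legal.
        Console.WriteLine(d is IGreeter); //Prints: True
        Console.WriteLine(g.Greet());     //Prints: Hello from Derived
    }
}
```

Since Greet is virtual, the override in Derived is the one invoked through the interface reference, which is exactly what lets Actor customize its SaveTo/ReadFrom behavior.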

In any case, let's assume that our diagram included only a UseCase. Then, the loop in the SaveXML method will invoke the SaveTo method of the UseCase object:

    public virtual void SaveTo(XmlTextWriter x)
    {
      x.WriteStartElement("UseCase", null);
      x.WriteAttributeString("Width", width.ToString());
      x.WriteAttributeString("Height", height.ToString());
      x.WriteAttributeString("Filled", filled.ToString());
      x.WriteAttributeString("Color", color.ToArgb().ToString());
      x.WriteAttributeString("Bordered", bordered.ToString());
      x.WriteAttributeString("BrdColor", brdColor.ToArgb().ToString());
      x.WriteAttributeString("Text", text);
      x.WriteAttributeString("TxtColor", txtColor.ToArgb().ToString());

      x.WriteStartElement("TopLeft", null);
      anchors[0].SaveTo(x);
      x.WriteEndElement();

      x.WriteEndElement();

      x.Flush();
    }

As you can see, it's pretty straight-forward stuff. The XmlTextWriter class has a bunch of methods to let you write whatever you need. In this case, we open a UseCase element, and add a few attributes to it (note how I'm using the int representation of the Colors of the UseCase!). As is always the case with XML data, I then had the option of adding more attributes, or of letting the UseCase element contain a child element. In order to apply some good practice, I decided to go for the containment option. Hence, I open a new element (named TopLeft, for it will describe the top-left corner of the UseCase), and ask the top-left corner of the UseCase (an Anchor object) to save itself to the XmlTextWriter.

The Anchor's SaveTo method does pretty much the same:

    public void SaveTo(XmlTextWriter x)
    {
      x.WriteStartElement("Anchor", null);
      x.WriteAttributeString("X", center.X.ToString());
      x.WriteAttributeString("Y", center.Y.ToString());
      x.WriteEndElement();
      x.Flush();
    }

Note how this solution creates an element (the TopLeft element) that simply contains another element. A bit redundant, but good enough for a first version. Plus, it mirrors the object hierarchy at hand: the UseCase object, in fact, contains a top-left corner, which is indeed an Anchor instance.
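
Putting the two SaveTo methods together, a diagram containing a single UseCase would produce a file along these lines (the attribute values are illustrative, not taken from an actual run; -1 and -16777216 are the ARGB integers for white and black):

```xml
<?xml version="1.0" standalone="no"?>
<UseCaseDiagram>
  <UseCase Width="120" Height="60" Filled="True" Color="-1"
           Bordered="True" BrdColor="-16777216" Text="Login"
           TxtColor="-16777216">
    <TopLeft>
      <Anchor X="40" Y="40" />
    </TopLeft>
  </UseCase>
</UseCaseDiagram>
```

Keeping this shape in mind will make the reading code below much easier to follow.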

Anyway, writing to file was simple enough. Now, let's see what happens when you click on the "Open Diagram" button. The TestUseCases' button click handler dispatches you to OpenDiagram. In there, I first check whether you have any figure on screen (if so, I ask you if you wish to save your current diagram), and then ask you to pick a file to be loaded (again, assuming the file will have the .xml extension). Finally, we move to the ReadXML method (again, within a try block that lets us catch any exception and report to the user via a standard MessageBox). Here's the ReadXML method:

    private void ReadXML(string fn)
    {
      XmlTextReader xmlr = null;
      try
      {
        xmlr = new XmlTextReader(fn);
        xmlr.WhitespaceHandling=WhitespaceHandling.None;
        while(xmlr.Read())
        {
          switch(xmlr.NodeType)
          {
            case XmlNodeType.Element:
            switch(xmlr.Name)
            {
              case "Communication":
                Communication tmpC = new Communication();
                tmpC.ReadFrom(xmlr);
                figures.Add(tmpC);
                break;
              case "UseCase":
                UseCase tmpUC = new UseCase();
                tmpUC.ReadFrom(xmlr);
                figures.Add(tmpUC);
                break;
              case "Actor":
                Actor tmpA = new Actor();
                tmpA.ReadFrom(xmlr);
                figures.Add(tmpA);
                break;
              default:
                break;
            }//SWITCH on Element's Name
              break;
            default:
              break;
          }//SWITCH
        }//WEND
      }
      catch(Exception e)
      {
        StringBuilder sb = new 
          StringBuilder
          ("An Error occurred while opening the Use Case Diagram\n");
        sb.Append("from the specified file ('");
        sb.Append(fn);
        sb.Append("')\nDetails:\n");
        sb.Append(e.ToString());
        throw new ApplicationException(sb.ToString());
      }
      finally
      {
        if(xmlr!=null)
          xmlr.Close();
        this.pnl_drawable.Invalidate();
      }
    }

Notice how I set the XmlTextReader to ignore all white space, and how I move from one node in the document to the next by using the XmlTextReader.Read() method. The while loop will terminate once XmlTextReader.Read() returns false (which is usually at the end of the file being read, or when something bad happens). Within the loop, I have two nested switch statements (arguably, one of the most confusing configurations of source code possible in modern programming languages :) ). We are lucky, though: the outer switch could really be a simple if statement, since it includes only one case (plus the default case). I used the switch statement instead of the if statement since I can see more cases being added in future releases. Anyway, the outer switch is based on the type of node that the XmlTextReader is currently pointing to. We do something only for Elements, but we could add cases as needed to deal, for instance, with document declaration nodes, comments, and such.

If the XmlTextReader is pointing to an element (going back to the file we saved not long ago, this would happen on the second iteration of the encompassing while loop, since the XmlTextReader will first be pointing to the document declaration), we enter the second switch statement, which is based on the name of the node in question.

Within this switch, we do something only if we are pointing to a node named "Communication", "UseCase", or "Actor". If you are following the code and looking at the file we saved above, you will notice that, on the second iteration of the outer while loop, we enter the inner switch statement but don't do anything. In fact, we are currently pointing to the root node of our XML file (the "UseCaseDiagram" node). So, we step through and end up at the top of the while loop again. This time, after calling XmlTextReader.Read(), we end up pointing to the "UseCase" node we saved in the file. The Read method, in fact, does not distinguish between siblings and children: it simply moves to the next XML node in the file.

So, now we enter the inner switch while pointing to the "UseCase" element: we instantiate a new UseCase object, and ask it to read itself from the XmlTextReader. After the UseCase is done with this, we simply add it to the figures ArrayList of the TestUseCases form. Before we look at the UseCase.ReadFrom method, notice how the finally clause in this method ensures that we close the XmlTextReader and invalidate the pnl_drawable Panel (so that the GUI gets refreshed and we see all the figures we have just loaded).

The UseCase.ReadFrom method is rather simple:

    public virtual void ReadFrom(XmlTextReader x)
    {
      float w = 0.0f;
      float h = 0.0f;

      while(x.MoveToNextAttribute())
      {
        switch(x.Name)
        {
          case "Width":
                w = float.Parse(x.Value);
                break;
          case "Height":
                h = float.Parse(x.Value);
                break;
          case "Filled":
                filled = (x.Value==true.ToString());
                break;
          case "Color":
                color = Color.FromArgb(int.Parse(x.Value)); 
                break;
          case "Bordered":
                bordered = (x.Value==true.ToString());
                break;
          case "BrdColor":
                brdColor= Color.FromArgb(int.Parse(x.Value));
                break;
          case "Text":
                text = x.Value;
                break;
          case "TxtColor":
                txtColor = Color.FromArgb(int.Parse(x.Value));
                break;
          default:
                break;
        }
      }//WEND

      x.Read();
      if(x.NodeType==XmlNodeType.Element && x.Name=="TopLeft")
      {
        x.Read();
        anchors[0].ReadFrom(x);
      }//IF TopLeft
      while(x.NodeType!=XmlNodeType.EndElement)
        x.Read();

      //Set width and height AFTER the TopLeft corner has been set:
      this.Width=w;
      this.Height=h;
    }

We are assuming that the XmlTextReader is pointing to an Element node named "UseCase". If this is not the case, it would be good (i.e. something to do in the next version) to throw an exception. We use XmlTextReader.MoveToNextAttribute to run through the attributes of the UseCase node, and, for each, we set the corresponding property in the UseCase object. As we used the integer representation of the Colors, here we use the Color.FromArgb(int) method to reconstruct the Color that is represented by the XML attribute. Something else to be noticed is that we do not set the width and height of the UseCase immediately. The reason for this will be explained in a minute.

Once we are done with the attributes (and note how we don't do anything if the attributes are not named as we expect them to be), we call XmlTextReader.Read() to move to the next node, which should be the "TopLeft" child of the UseCase. If, at this point, we are indeed pointing to an Element XML node named "TopLeft", then we ask our top-left Anchor to read itself from the XmlTextReader. Note how we have to interpose another XmlTextReader.Read() before we do this. This is the result of having the "TopLeft" element contain the Anchor element: without this call, when we pass the XmlTextReader to the Anchor, it would still be pointing to the "TopLeft" node instead of the actual "Anchor" element. In any case, the Anchor will simply read the attributes from the XML node. Once the Anchor has read itself, note how the UseCase.ReadFrom method implements a little loop:

      while(x.NodeType!=XmlNodeType.EndElement)
        x.Read();

This ensures that we move forward until we are pointing to an EndElement node. In particular, it ensures that we get to the closing tag of the "TopLeft" element that surrounds the Anchor. Without this loop, when the ReadFrom method returned control, we would be out of synch with the expectations of the calling function (namely, the TestUseCases.ReadXML method) and we would end up in trouble.

Finally, we set the UseCase's width and height. Note that we do this at the end of the method, after we have read and set the top-left Anchor, AND that we do it through the UseCase's public properties, because their setters actually move all of the Anchors of the UseCase based on the location of the top-left anchor and the dimensions of the UseCase.
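
The article doesn't show those property setters, but the idea can be sketched in a stripped-down, self-contained form (Figure, TopRightX, BottomY and the rest are hypothetical names, not the project's actual members):

```csharp
using System;

//Hypothetical sketch of the idea behind UseCase.Width/Height:
//the setters re-derive the dependent anchors from the TopLeft anchor.
class Figure
{
    public float TopLeftX = 0f, TopLeftY = 0f;
    public float TopRightX, BottomY;

    private float width, height;

    public float Width
    {
        get { return width; }
        set { width = value; TopRightX = TopLeftX + width; }
    }

    public float Height
    {
        get { return height; }
        set { height = value; BottomY = TopLeftY + height; }
    }
}

class Program
{
    static void Main()
    {
        Figure f = new Figure();
        f.TopLeftX = 40f; f.TopLeftY = 40f;
        f.Width = 120f;   //moves the right-hand anchor: TopRightX == 160
        f.Height = 60f;   //moves the bottom anchor: BottomY == 100
        Console.WriteLine(f.TopRightX + " " + f.BottomY); //160 100
    }
}
```

This also makes the ordering constraint obvious: set TopLeft first, then Width and Height, or the dependent anchors end up relative to the wrong origin.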

The road goes on and on...

As I said in the beginning, this is a first version of the myUML project. What does this mean?

First and foremost, it means that many functionalities have not been implemented yet. In particular, you can only draw and modify Use Case Diagrams. The myUML project is envisioned to include other types of diagrams (classes diagrams, and sequence diagrams just to mention two).

Secondly, it means that some functionalities have been implemented in a "first draft/prototype" way. For instance, you can't drag the figures around, but you can move them by changing their location in the corresponding display control. Same goes for resizing them.

Thirdly, the code itself might enjoy a healthy refresh in the near future, to make it stronger, faster, better...

I am publishing this code as-is, and you should certainly feel free to use, re-use and modify it as you need (for non-commercial purposes, of course). Hope you find it interesting, and don't forget to have fun with it!

History

  • v.1.0 - 23-Sep-2003 - First draft.

License

This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.


About the Author

Frank Olorin Rizzi
Web Developer
United States United States


Last Updated 1 Oct 2003
Article Copyright 2003 by Frank Olorin Rizzi