Rules and Patterns in Event Models

This booklet is published under the terms of the licence summarized in footnote 1.

 

IF DIAGRAMS DO NOT DISPLAY IN THIS PAPER, PLEASE EMAIL ME FOR A VERSION THAT SHOWS THEM

 

‘One must be careful to define an event clearly - in particular what the initial conditions and the final conditions are.’ Richard Feynman, writing on quantum electrodynamics

 

The balance between structural and behavioral models

The fuzziness of real world entities and events

Discrete event models

Seven basic event impact patterns

Behavioral constraints and derivations

Marriage registration system case study

More about constraints

Preconditions and control flow conditions

Generic events: reuse between event models

The granularity of events

Complications in event modeling

 

It is normal to define classes of persistent information objects, and the persistent relationships between them, in a structural model. The figure below illustrates a logical data model, which is usually drawn as a stepping stone to a database design.

 

In most enterprise applications, the database schema is far outweighed by the code of the programs that access that database. In the days when the database schema and program listings were stored in hanging files, it was highly visible that the programs outweighed the database. Nowadays, much of the program code may be written in the form of operations attached to “classes” in an object-oriented structural model, but it remains true that the operations outweigh the attributes, and that the behavior outweighs the structure.

 

Given that logical models are abstractions from physical software, we must expect a similar imbalance between structural and behavioral models. Any data/structural model of a system will be outweighed by process/behavioral models.

 

 

Most analysis and design methodologies (e.g. Information Engineering, SSADM, RUP) use a structural model as the main graphical specification tool. But while the RAP group modelling framework has one structural dimension, it has two behavioral dimensions.

Level            | Structural model (What)            | Behavioral models (When)        | Behavioral models (How)
---------------- | ---------------------------------- | ------------------------------- | -----------------------------
Conceptual model | Real world entities                | Business processes              | Real world events
Logical model    | Data entities                      | Entity life histories           | Data events
Physical model   | Entity objects and database tables | State constraints on procedures | Control objects and services

 

The behavior of an activity system can be modelled from both entity and event-oriented perspectives. The entity-event matrix below helps to show how entity and event-oriented views are orthogonal views of the same activities.

(Columns are persistent entities, read as entity life history views; rows are transient events, read as event impact views.)

Transient event    | Pupil                 | School
------------------ | --------------------- | -----------------------
Pupil Registration | Create Pupil          |
Pupil Enrolment    | Tie Pupil to School   | Gain Pupil
Pupil Transfer     | Swap Schools          | Lose Pupil / Gain Pupil
School Closure     | Cut Pupil from School | Lose All Pupils

 

Notice that:

  • A column contains the entity-oriented view of behaviour. This may be represented in entity life history diagrams or state charts.
  • A row contains the event-oriented view of behaviour. This may be represented in event impact structures or object interaction diagrams.
  • A cell can be completed with entries like create, update and delete, or with more specific event effects as shown above.

 

You can document one business rule in an entity-oriented view, in an event-oriented view, or in both.

 

Methods such as Shlaer-Mellor, JSD and SSADM encourage us to document every class in the form of one or more entity life history diagrams that specify the dynamic or behavioral view of a persistent data object. A state chart or entity life history diagram shows the events that may affect an object and the states an object may pass through. In practice, people often find the step from structural model to entity life histories is difficult and obscure. There is a missing link – the modelling of discrete events.

 

We generally favour documenting rules in event impact views. We usually use the entity life history view only as an analysis tool. But the reverse approach is possible. So do what works best in your situation.

The equivalence of structural and behavioral rules

The difference between structural and behavioral rules is subtle.

 

  • “You can replace an invariant rule (constraint or derivation) of a persistent entity by a behavioral rule (a precondition or post condition) of every event that might threaten the truth of the rule.”

 

E.g. Consider this invariant constraint on an Account entity:

ENTITY: Account

Invariant: AccountBalance > 0

 

At first sight, an invariant constraint in an entity model seems a good idea. The rule is declared once; and every operation on the AccountBalance attribute will be constrained in the same way, to maintain the value above zero.

 

You might however replace the invariant with a precondition applied on a Withdrawal event.

EVENT: Withdrawal (AccNum, WithdrawalAmount)

Entities affected | Precondition                      | Post condition
----------------- | --------------------------------- | --------------------------------------------------
Account           | WithdrawalAmount < AccountBalance | AccountBalance = AccountBalance - WithdrawalAmount

 

At first sight, a behavioral constraint seems less satisfactory, because if two operations can both reduce the AccountBalance then the rule would have to be repeated in both operations. You don't want to specify or code any rule more than once if you can help it. The goal must be SPOD: both Single Point of Declaration and Single Point of Deployment.

 

But Single Point of Declaration may not mean Single Point of Deployment, since to implement what is specified as an invariant rule (in a logical model) you might have to code it in each operation anyway (in a physical model or implementation).
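The contrast can be sketched in code. This is a minimal Python sketch with a hypothetical Account class and event handler (not taken from the source): the same rule appears once as an event precondition and once as a structural invariant check.

```python
class Account:
    """Hypothetical persistent entity with one attribute."""
    def __init__(self, balance):
        self.balance = balance

def withdraw(account, amount):
    # Behavioral form: a precondition guards the Withdrawal event.
    if not amount < account.balance:
        raise ValueError("precondition failed: WithdrawalAmount < AccountBalance")
    # Post condition: AccountBalance = old AccountBalance - WithdrawalAmount.
    account.balance -= amount
    # Structural form: the invariant 'AccountBalance > 0' re-checked after the event.
    assert account.balance > 0
```

Note that the assert duplicates the effect of the precondition: Single Point of Declaration in the specification does not by itself give Single Point of Deployment in the code.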

 

And there is a more general problem wherever entities are numbered in thousands or millions, and they persist for months or years. Time changes everything. Persistence undermines invariance. What seems an invariant rule today may turn out over a longer time to be:

  • A rule that is contradicted by updates
  • A rule that has exceptions
  • A rule that is true only now and then
  • A rule that is changed.

Rules that are contradicted by updates

Time changes everything: persistence undermines invariance. If you maintain only a partial history of past events, then many rules are sooner or later contradicted by events that overwrite historical data.

 

  • E.g. consider an order processing system where the derivation is: Item.Value = Item.Quantity * Product.Price.

 

Is this an invariant rule? The condition must be true when an order item is placed. But when product price is updated, the condition will no longer be true. So, if you write a program to test the integrity of the invariant rules in the database, it will report perfectly valid orders as being in error.
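A small Python sketch (hypothetical order records, not from the source) makes the problem concrete: the derivation holds when the order item is placed, then a later price update silently contradicts it.

```python
# Product price and an order item whose value is derived at order time.
product = {"price": 10.0}
item = {"quantity": 3, "value": 3 * 10.0}  # Item.Value = Item.Quantity * Product.Price

def invariant_holds(item, product):
    # The integrity test a checking program would apply.
    return item["value"] == item["quantity"] * product["price"]

print(invariant_holds(item, product))  # True: the rule holds when the order is placed
product["price"] = 12.0                # a later price update event
print(invariant_holds(item, product))  # False: a perfectly valid order now looks in error
```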

Rules that have exceptions

Time changes everything: persistence undermines invariance. Structural rules are fragile where the entities are numbered in thousands or millions, and they persist for months or years. The longer a system lasts, the more likely that an exception to the rule will be discovered. What seems at first to be an invariant rule of a class may turn out to be true only for some members of that class.

 

  • E.g. consider the invariant constraint on the Account entity that 'AccountBalance > 0'.

 

What if the sales department introduce a new rule such that favoured accounts are allowed to go overdrawn by $1000? Five options are considered below.

Option 1: Constraints on ‘subtype’ attributes

An OO designer's first thought might be to model account variations by drawing a class hierarchy.

 

  • Subtype accounts into 'Ordinary' and 'Favoured'.
  • Specify different Invariant Constraints for the balance attribute of the two subtypes.

 

But drawing a class hierarchy should, I dare suggest, be the last approach one considers. The danger is that the number of subtypes will grow very large, and the structure of super and subtypes will prove volatile as rules are changed.

 

I offer a rule of thumb: do not try to express the options of a case statement as a class hierarchy in an entity model until or unless you know for a fact that this will prevent the case statement being replicated in several operations for a considerable period of time.

Option 2: Constraints on ‘role class’ attributes

There is a flexible alternative to defining subtypes. You can define 'aspect' or 'role' classes connected by association relationships as children of the basic class. Designers may then use forwarding or delegation rather than inheritance as a means to invoke operations of the classes.

 

Let me mention an illustration given to me by Haim Kilov. The rule is that if the down payment on a property is less than 20% of the purchase price, then you must connect a Mortgage Insurance to the basic Mortgage object.

 

However, for the given example of Ordinary and Favoured Accounts, role or aspect classes do not seem any better than subtypes.

Option 3: Constraints on ‘classification type’ attributes

A more flexible approach is to specify a classification type (that is, a parent of the basic entity) with attributes that hold values used in the business rule. So any transaction that invokes a relevant operation on the basic class must first retrieve the relevant rule element from the classification type.

 

  • Specify the class AccountType with the attribute MinBalance.
  • Specify the rule 'AccountBalance > AccountType.MinBalance' as an invariant constraint.
  • Store AccountType instances 'Ordinary' and 'Favoured' with different MinBalance values.

 

 

 

Note that this Invariant Constraint rule refers to data attributes owned by objects of different classes. So which class does the rule belong to?

 

  • 'Attaching an invariant to a particular class is not a specification decision for analysts; it is, if anything, an implementation decision for designers.' Haim Kilov

 

Haim’s suggestion still leaves the analysts with the task of specifying the rule somewhere in a logical model.
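A Python sketch of Option 3 (hypothetical names, not from the source): the rule element MinBalance lives on the classification type, so one declaration of the rule covers every account type, and adding a new account type means adding data rather than code.

```python
# Classification type: AccountType instances hold the rule element MinBalance.
account_types = {
    "Ordinary": {"MinBalance": 0},
    "Favoured": {"MinBalance": -1000},  # favoured accounts may go $1000 overdrawn
}

def withdraw(account, amount):
    # Retrieve the rule element from the classification type, then apply the
    # single rule 'AccountBalance > AccountType.MinBalance'.
    min_balance = account_types[account["type"]]["MinBalance"]
    if not account["balance"] - amount > min_balance:
        raise ValueError("rule failed: AccountBalance > MinBalance")
    account["balance"] -= amount
```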

Option 4: Constraints on an event

What is it that brings the objects together? It is the discrete event that triggers a transaction. So it is at least a possibility that the constraint is better specified as a behavioral rule.

EVENT: Withdrawal (AccNum, WithdrawalAmount)

Entities affected  | Precondition                             | Post condition
------------------ | ---------------------------------------- | --------------------------------------------------
Account (ordinary) | WithdrawalAmount < AccountBalance        | AccountBalance = AccountBalance - WithdrawalAmount
Account (favoured) | WithdrawalAmount < AccountBalance + 1000 | AccountBalance = AccountBalance - WithdrawalAmount

Option 5: Control flow in a procedure

The last approach is to specify rule variations under options of a case statement within a procedure. Even this old-fashioned procedural approach can give us both Single Point of Declaration and Single Point of Coding, provided that the case statement appears in only one operation.
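In code, Options 4 and 5 come to much the same thing. This Python sketch (hypothetical names, not from the source) keeps the rule variations with the Withdrawal event, under the branches of a case statement that appears in only one operation.

```python
def withdrawal(account, amount):
    # Control flow condition selects between event effects.
    if account["type"] == "Ordinary":
        # Precondition for ordinary accounts.
        if not amount < account["balance"]:
            raise ValueError("WithdrawalAmount must be < AccountBalance")
    elif account["type"] == "Favoured":
        # Precondition for favoured accounts: may go $1000 overdrawn.
        if not amount < account["balance"] + 1000:
            raise ValueError("WithdrawalAmount must be < AccountBalance + 1000")
    # Post condition shared by both paths.
    account["balance"] -= amount
```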

Rules that are transient - true only now and then

Time changes everything: persistence undermines invariance. A condition that is true immediately after a special correction process has been run, but not true at any other time, is not best regarded or documented as an invariant constraint in an entity model.

 

E.g. Consider a double-entry bookkeeping example presented in 'Analysis Patterns' by Martin Fowler. He specifies two apparently invariant constraints on the structural model thus:

 

 

As Martin says below, he is only using this simple example to introduce more complex accounting patterns. My aim here is to explore the meaning of structural and behavioral constraints.

Dialogue between the editor and Martin Fowler

Graham: The rule that a transaction must have 2 entries appears in your entity model. It is the basis of double-entry bookkeeping. However, the rule 'Sum (entries.amount) = 0' surely cannot be an invariant rule, because that negates the whole point of double-entry bookkeeping. The only reason to record both positive and negative entries is to test one against the other. They may be recorded separately, perhaps by different people. The idea is to check for errors by running a reconciliation process at a later date.

 

Martin: It depends. I have seen cases where the rule is part of the structure and you can only create balanced transactions. Other systems use a reconciliation process and I should have discussed that in the pattern. Another pattern that applies to reconciliation, is that of corresponding accounts. Most reconciliations that I have come across use something like this. That’s what happens when an individual reconciles a bank account.

 

Graham: If 'Sum (entries.amount) = 0' is a transient post condition that holds true only at the end of a reconciliation process, then the rule should be documented with this event in a behavioral model, not with a class in the structural model. At a meeting of the RAP group, Mike Burrows discussed a share-trading system in which entries do not balance at all points during the processing cycle, while individual credit and debit transactions are processed. The interesting part of the design was the post condition of the discrete end-of-cycle process that reconciled persistent data on distinct databases.

 

Martin: Yes, that will work. Another way is to rephrase the rule along the lines of "if reconciled then sum of amounts must be zero".

 

Graham: If 'Sum (entries.amount) = 0' truly is an invariant constraint, then there is no need to run a reconciliation process. You might code it for system testing, but you should switch it off afterwards. Or you might tell the designers that a reconciliation process will be run, threaten that they will be fined by the amount of any discrepancy, then not actually bother to code it at all!

Then your model can be improved. There is no need to store two entries for the transaction, since they carry the same data. You should specify the amount as an attribute of the transaction class and revise the structural model thus:

 

Martin: Yes that is true. I considered showing this. But multi-legged transactions are more useful, and not much more complex, so I used the two-legged case as an entry into the multi-legged case. We certainly need to explore this ground further, especially the patterns around letting incomplete and inaccurate data into the system and doing later reconciliation. Too many people want to bar errors at the gates, when often it is more effective to let them in and hunt them down once they are safely inside.

Rules that change

The longer data persists, the more likely a seemingly invariant rule is modified during the life of an entity whose state is recorded in the system. Even the laws of the land change from time to time. There are two ways to meet the challenge of volatile rules. The first is to store the rule itself as an attribute value that can be updated by end users. This approach has a limited application.

 

The second and more general approach is to define the rule as a transient pre or post condition of one or more processes (rather than an invariant of a data structure). It is often better, easier or safer to specify rules in a behavioral model, that is:

  • in a process model rather than a data model
  • with the events rather than the entities.

 

This helps us to minimise evolution problems, since changing a process structure is normally easier than changing a data structure. See also the later chapter <Preconditions and control flow conditions>.

Conclusions

Invariant rules are essential. We do need an analysis and design method that helps us to specify invariant rules, especially those that define the data type of an attribute or the multiplicity of a relationship.  Most analysis and design methods already focus on specifying invariant rules; they offer various means of declaring such rules on a data model or class diagram.

 

But the passage of time undermines structural models. Persistence undermines invariance:

  • the substance of a thing grows, changes or decays
  • apparently fixed aggregations turn into loose associations
  • apparently fixed types turn into temporary states
  • rules are changed.

 

What seems an invariant rule today may turn out over a longer time to have exceptions, or be contradicted by updates, or be true only now and then, or be revised. The longer the view you take, the more that persistent types turn into transient states, and the more that invariant rules turn into behavioral rules. And changing a process structure is easier than changing a data structure.

 

So, it would seem better to specify rules in a behavior model rather than a structural model, as transient rules rather than invariant rules. For specifying enterprise applications (where the stored data persists for years, where the data may be distributed across several databases, where the rules evolve) behavior modeling is essential.

This chapter discusses the insubstantial nature of things and the fuzziness of the real world. It proposes reasons why event-orientation is as important as object-orientation, and why phenomenology is as important as ontology.

 

Let me tell you a story. This is a story about the passage of time and the insubstantial nature of things.

 

The ship of Theseus: episode one

Kero the boat builder sold a ship to Theseus the trader. Theseus bought a brand new ship’s log and set off on his first voyage, trading between Greece and Persia.

 

Theseus had exacting standards and was rich enough to maintain them.

After his first trip, Theseus paid for Kero to renew part of the deck that had been scratched when a load was dragged across it.

After his next trip, Theseus paid for Kero to replace the main sail, which had been torn a little.

After his third trip, Theseus paid for Kero to replace the rudder, which had become worn and loose.

 

Eventually, after many more trips, Kero had renewed every part of the original ship, every single molecule of its substance. Theseus noted each repair event in the ship’s log.

One day when Theseus was away sailing and trading, Kero looked around his boatyard. He noticed that all the parts of Theseus’ original ship were lying there. With new nails, some canvas, twine and a little spit and polish, his slaves were able to rebuild the original ship.

 

Meanwhile, out at sea there was a wild storm. Wave after wave swept over Theseus as he stood at the wheel. Eventually, Theseus gave the order ‘abandon ship’. He swam for the shore carrying his log book in a leather pouch.

 

When he got back home, he wrote ‘sunk in storm’ in the log book and sought out Kero. Could he please have another ship like the first? Kero couldn’t resist a little deception ‘I salvaged your ship after you abandoned it. It is sitting in my boatyard. I’ll be happy to sell it to you, at the full price of course.’

 

When Theseus overcame his surprise and his reluctance to pay twice for the same ship, he wrote ‘salvaged’ in the log and set out on another voyage.

 

The objects we think about are not concrete things

Any way you look at it, the ship of Theseus is not simply a tangible object; it is more abstract; it is behavior; it is a memory. The ship exists in Theseus’ mind; it is his experience of a thing that carries him around the Mediterranean. The ship is given a continuity of existence not only by his memory but also by his written record. An object is something that persists for a while and is remembered. Objects are only memories of things.

 

The ship of Theseus: episode two

Another surprise awaited Theseus when he next returned home. A handful of his crew, poor swimmers, had been forced to cling to the apparently sinking ship. After the storm abated, they were able to bale out the sea water and bring the ship home. The laws of salvage meant that they were now entitled to claim ownership. They bought a new ship’s log and set up in business, competing with Theseus.

 

Kero felt obliged to tell Theseus the truth. To avoid any confusion between the two almost-identical ships in his boatyard, he nailed nameplates to their prows to distinguish them. Actually, this didn’t resolve the confusion at first, because he called them ‘New Ship’ and ‘Old Ship’. Nobody else was clear which was which, so he copied the names into the corresponding log books to make things clear.

 

There are different models of reality

If objects are distinguishable entities, you ought to be able to enumerate them. If a ship is an object, you ought to be able to count ships. So, how many ships are involved in the story?

 

  • There are two named ships; Kero has labelled them in his boatyard.
  • There are two recorded ships, documented in log books. But the named and recorded ships do not correspond. The recorded ship in Theseus’ log book has been at different times both of the named ships.
  • There are three paid-for ships in Kero’s sales ledger. Theseus has paid Kero for two whole ships plus a ship’s worth of parts.

 

Sometimes the model takes over

The ship’s log is an information system. In the end, you often have to go with the log book version of reality. The real world is just too fuzzy and complex to deal with. In abstract businesses like banking, the model is the business.

 

The real world is a lot fuzzier than you might think from looking at the things around you.

 

Objects are not discrete in nature

Classes are not discrete in nature. The boundaries between so-called types are not at all clear. In biology, the apparently firm boundaries between biological species cannot be firm, otherwise evolution would be impossible.

 

Similarly, the class hierarchy above species (genus, phylum, etc.) is a highly subjective notion, with no firm basis in reality. It certainly does not correspond to the cladogram, the hierarchical structure that shows the forking paths of evolutionary history.

Nor are object instances discrete in nature. We believe ourselves to be individual members of the human race, but the discreteness we cling to is a kind of egoism, mainly to do with the continuity of our memory. Consider: where is the individual in these cases?

 

  • A psychological curiosity: It has been shown that after a surgeon cuts the corpus callosum that connects the two halves of the brain, to relieve the symptoms of epilepsy, both sides of the brain carry on thinking independently (though only one side may speak).
  • An entomological curiosity: The queen of an insect colony gives birth to clones of herself.

 

The edge of an object in space is disputable. In cosmology, where is the boundary of the planet earth? At its surface? Or at the top of its atmosphere? Or at the end of the light travelling away from the earth since it was created?

 

Events are not discrete in nature

Events are as fuzzy as objects. The moment when an event happens is disputable. Consider:

 

  • A medical dilemma: Does a person die with the cessation of breathing? Or heart beat? Or brain activity?
  • A cosmological dilemma: Was the earth born with the division of a large gas cloud into smaller ones? Or when this smaller gas cloud started to condense? Or when the mass of the earth stopped increasing?

 

Entity and events only become discrete in our models

To build a business rules model, we crystallise entities and events out of a world that is much more fluid, fuzzy and formless than our models imply. Where an object starts and ends in space and time is something we decide and define in building systems.

 

A doctor might distinguish between the classes disease and drug. A biologist: species and gene. A cosmologist: star and planet. A nuclear physicist: electron and photon.

These are not just different levels or partial views of the same model. They are entirely different perspectives.

 

Even in physics, the hardest of sciences, Einstein’s model of cosmological forces is to date irreconcilable with the model of quantum mechanics, though both are tested and accepted in their field. We always separate out the entities and events that best suit the model we are building.

We model only the appearance of things

The problem of software engineering might be described as: How to discover, describe and connect the components of a system? In recent years, authors have stressed that the objects in a software system should somehow model or represent the things in the real world that the system seeks to monitor or control. 

 

Many data modeling courses begin with the notion that entities are concrete things you can touch. One object-oriented author quotes Aristotle’s ideas on studying the substance of things. Some authors propose we should base our modeling work on ontology, the study of how things are.

 

This does not feel right. The entities or objects in our systems are not concrete things. They are only memories of things. And the things users want to remember are often highly abstract concepts such as dates, promises, and contracts.

 

In software engineering, we never build a model of how things are. Our models are highly subjective. Subjectivity enters at many levels. Our models reflect only the narrow business perspective of the system’s owners and users. We model how things appear to these people. Then we must constrain our models even further to the appearance of things that can be detected by our systems.

 

The figure below shows the substance of the real world is filtered through several gauzes.

 

 

Michael Jackson suggested at one of our RAP group meetings that phenomenology (the study of how things appear) is probably more relevant to software engineering than ontology (the study of how things are). How things appear to a software system is limited to what input data it can read. What does this input data represent? It represents things happening in the real world. It represents events. Events represent changes in that tiny, tiny portion of the world monitored by the system.

 

An event is a phenomenon by which a software system recognises a change in the real world (or a change in the user’s perception of the world, which amounts to the same thing). Events are the phenomena by which a system recognises time passing.

Conclusions

We can take both entity-oriented and event-oriented views of system behavior. Event-orientation matters as much as object-orientation.

An event carries data that represents something happening, or a decision being made. It triggers a discrete process in the system being designed.

An event is a discrete, atomic, all-or-nothing happening. It is a minimum unit of consistent change. You might call it an atomic transaction. It is a logical commit unit's worth of event effects.

An event happens in an instant and leaves its mark on, changes the state of, one or more persistent objects.

An event may also refer to the state of other objects, sometimes in a precondition that can fail the event, sometimes in a control flow condition that selects between event effects.

An event impact view or event impact structure documents how objects are affected by one event.

How to discover events?

Given any entity model, you can build an entity-event matrix by asking the following analysis questions:

  • What events create and destroy objects of an entity type?
  • What events connect and disconnect relationships of this object to other objects?
  • What events update the attributes of the object?
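The answers to these questions can be recorded as a simple data structure. Below is a Python sketch (using the Pupil/School example from earlier in this booklet; the function names are my own) in which each event maps to the entities it affects, and both row and column views fall out.

```python
# Entity-event matrix: each event maps to the entities it affects and its effect.
matrix = {
    "Pupil Registration": {"Pupil": "Create Pupil"},
    "Pupil Enrolment":    {"Pupil": "Tie Pupil to School", "School": "Gain Pupil"},
    "Pupil Transfer":     {"Pupil": "Swap Schools", "School": "Lose Pupil / Gain Pupil"},
    "School Closure":     {"Pupil": "Cut Pupil from School", "School": "Lose All Pupils"},
}

def event_impact_view(event):
    # A row: the event-oriented view of behaviour.
    return matrix[event]

def entity_life_history_view(entity):
    # A column: the entity-oriented view of behaviour.
    return {event: cells[entity] for event, cells in matrix.items() if entity in cells}
```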

How to discover rules applied by an event?

Users must be confident that a system will perform correctly in terms of what rules apply to the processing of the events.

Our definition of behavioural rule encompasses:

  • Behavioral term: the name of an event or operation.  E.g. Wedding.
  • Behavioral fact: relates behavioral terms and structural terms in a behavioral statement. E.g. Wedding events join Brides and Grooms.
  • Behavioral constraint: a precondition that prevents an event from being accepted and processed. E.g. A Wedding event joins a Bride and Groom. Both Bride and Groom must be over 18 years old.
  • Behavioral derivation: usually declares a side effect or post condition that an event leaves in its wake. E.g. MaritalStatus = married.

 

Given an event, you can build a specification of rules by asking questions like:

  • What constraints prevent an event from being processed?
  • How are data items produced by the event’s process derived from input and stored data items?

How to model the rules applied by an event to entities?

Where the event impact structure is simple, you can document an event impact view in a table.

The event impact view below shows the terms and facts, constraints and derivations involved in a Wedding event.

In a simple case like this, the sequence of object access can be shown top to bottom (as in a sequence diagram).

EVENT: Wedding (Person [bride], Person [groom])

Entities affected | Preconditions: Fail unless…                                             | Post conditions
----------------- | ----------------------------------------------------------------------- | -----------------------------------------------------------------------------------------
Person [bride]    | Person exists; Age > 18; SexAtBirth = Female; MaritalStatus = unmarried | MaritalStatus = married
Person [groom]    | Person exists; Age > 18; SexAtBirth = Male; MaritalStatus = unmarried   | MaritalStatus = married
Marriage          |                                                                         | Wife = Person [bride]; Husband = Person [groom]; MarriageDate = Today; MarriageStatus = active

 

This event impact view records constraints under the heading of preconditions, and derivations under the heading of post conditions.
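The preconditions and post conditions in such a view translate directly into a discrete, all-or-nothing process. The Python sketch below (hypothetical data structures; the real specification is the event impact view itself) fails the whole event if any precondition fails, and otherwise applies every post condition.

```python
import datetime

def wedding(bride, groom):
    # Preconditions: fail unless...
    for person, sex in ((bride, "Female"), (groom, "Male")):
        if person is None:
            raise ValueError("Person exists")
        if not person["Age"] > 18:
            raise ValueError("Age > 18")
        if person["SexAtBirth"] != sex:
            raise ValueError("SexAtBirth = " + sex)
        if person["MaritalStatus"] != "unmarried":
            raise ValueError("MaritalStatus = unmarried")
    # Post conditions: derivations the event leaves in its wake.
    bride["MaritalStatus"] = "married"
    groom["MaritalStatus"] = "married"
    return {"Wife": bride, "Husband": groom,
            "MarriageDate": datetime.date.today(),
            "MarriageStatus": "active"}
```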

 

If you model all the events in event impact views, then the entity-event matrix can be deduced or derived from that documentation.

A CASE tool should be able to navigate from column to row by selecting an event name, and from row to column by selecting an entity name.

 

Do I have to model every row of the entity event matrix, and every column?

No. Each view does help to validate the other. It is very satisfying to fully specify both entity and event-oriented views of a system, and bring these views into perfect harmony. But in practice, you do not have to model the more trivial rows and columns. Concentrate on events that affect more than one entity, change a relationship or update a state variable. Concentrate on entities with more than one state (these are surprisingly common in Enterprise Applications, since many business entities ‘die’ some time before they are deleted).

How to model the impact of an event on entities?

You can use a CASE tool to draw an object interaction diagram. However, we use event impact structures to record association facts rather than message passing.

 

The main point of this chapter is to separate

  • the modeling of concurrency (a natural feature of the problem) from
  • the modeling of communication (a designed feature of the solution).

 

Some would say the primary purpose of object interaction analysis is to define communication between objects. I do not. I say the primary purpose is to define the behavioral fact that an event appears concurrently in several entity life histories. Our event impact structure diagrams model concurrency rather than communication.

 

This chapter is about drawing event impact structures to specify how objects act in a coordinated way when an event occurs. It shows how to specify the route by which a data event discovers objects, without considering message passing. It introduces the concept of transient association facts.

 

The chapter goes some way to explain why and how business rules specification differs a little from object-oriented design. Event impact structures are not only platform independent, but also OO and procedural programming independent. Yet they make good specifications for both kinds of programming.

 

The object-oriented paradigm focuses attention on message passing, and it would seem that the primary purpose of drawing event impact structures is to define communication between objects. The business rules paradigm takes a different view. You should draw event impact structures to specify the synchronisation of concurrent information objects, before making any decision about message passing.

 

This chapter presents event impact structures as a variety of object interaction diagram that is useful as a formal problem-oriented modeling tool. Readers coming from a formal background may regard the chapter as philosophical rather than mathematical in spirit. However, the basic concept is mathematical: it is the notion of a one-to-one association between the things and sets of things synchronously affected by an event.

Modeling the concurrency of objects

Almost every design notation or method is now based on what might grandly be called discrete object modeling. But while the business rules modeling cube has two entity-oriented dimensions, it also has an event-oriented dimension.

The need to model events

The objects in a system (be they graphical objects, information objects in a database, tasks in work flow modeling, or entity life histories in a process control system) have to be co-ordinated.

  • ‘Our understanding of message routing tends toward the magical. Message routing problems are resolved often in a haphazard way at coding time.’ Palmer 1993.

  • ‘The one-object-at-a-time view of system specification has its limitations.’ ‘No object stands alone; every object collaborates with other objects to achieve some behavior.’ Booch 1994.

  • ‘A near-universal short-coming’ of work flow modeling products lies in ‘managing rendezvous (or synchronisation) conditions in processes with parallel task threads.’ Ovum 1995.

What is missing?

OO analysis techniques involve ‘use cases’. But a use case is normally at a higher level of granularity; it is a package of events and enquiries designed to support a user in carrying out a task, or even a sequence of tasks. Use cases are rarely precisely or completely specified.

 

At the lower level, OO programming techniques include interaction diagrams for showing the messages passed between objects. Imagine how complex these diagrams become when you have to specify the assembly of a complex output data structure from many third normal form relations, or the two-phase commit in a database transaction.

 

The problem is not just one of complexity. Some of the object-oriented notations look friendly enough, but they are still based on the idea of message passing, which implies implementation choices have already been made to do with the programming environment and message-passing strategy. What is on offer are really coding notations, not problem modeling notations.

 

What is missing is what the late Keith Robinson used to call “discrete event modeling”.

Modeling real-world events

A real-world event may affect several objects, and so initiate collective behavior. The effects of the event are contemporaneous. This is true first in the real world, and then in the systems we build to model the real world.

  • ‘If a pupil enrols in a school that is an event shared by the school and the pupil… participation is not sequential… the two aspects of the event are contemporaneous’. Michael Jackson.

 

In general, you cannot assume one real-world object becomes one information object (or one information object becomes one technology-level database table), but I do so in figures below, for the sake of simplicity.

 

An event requires associations (albeit transient) between information objects. These transient associations are facts. They may look like constraints, but to me they are inevitable facts of life.

One-to-one transient association

The tables below use 1:1 associations to show how the possible effects of the event on different real-world objects are related. The first table shows two objects are in 1:1 transient association with respect to the event.

EVENT: Pupil Enrolment

Entities affected:  Pupil  <-->  School

 

The second table shows three objects are in 1:1 transient association with respect to the event.

EVENT: Pupil Transfer

Entities affected:  School [old]  <-->  Pupil  <-->  School [new]

Of course, not all objects will be in 1:1 association with respect to an event. However, you can always connect the objects using 1:1 associations by introducing selection and iteration components into the structure, as shown below.

One to many transient association

The diagram below (drawn using the old SSADMv4 notations for an effect correspondence diagram) shows two objects in one to many association with respect to the event. Note how the 1:1 transient association (here a two-headed arrow) is drawn to the set.

The diagram structure can be collapsed into a tabular form, using an asterisk to show the manyness of Pupils thus:

EVENT: School Closure

Entities affected:  School  <-->*  Pupil

One-to-one-or-zero transient association

The diagram below shows two objects in one to one-or-zero association with respect to the event. Note how the 1:1 transient association is drawn to one option of the selection.

 

A fact or control flow condition is needed to determine whether a pupil is currently enrolled in a school.

This condition is not a constraint on, or precondition of, the Pupil Death event.

 

Again, the diagram structure can be collapsed into a tabular form, using an o to show the optionality.

EVENT: Pupil Death (PupilSerialNum)

Entities affected:  Pupil  o-- (not at school)
                           o-- (leave school)  <-->  School

 

In practice, we do not build business rules models of real-world entities and events, we model information entities and events. These are a highly attenuated model of the real world. The attenuation from real world object to information object may not be so clear in embedded systems, where real-world objects are directly under the control of the system, but the attenuation is obvious in specifying Enterprise Applications. “I am more than a number!”

Modeling data events

Data events are the means by which a software system recognises objects in a real-world system, and detects changes in those objects. The job at the system level is to specify how persistent information objects are maintained by data events.

 

A data event initiates collective behavior. It advances the information objects in a system from one mutually consistent state to the next. An event is a minimum unit of consistent change to the information in a system. An event is a ‘logical commit unit’, meaning that if it fails in its effect on any one object, then it must fail in all objects.
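The ‘logical commit unit’ idea can be sketched in code. The helper below is a minimal illustration, not a mechanism from any tool described here: each effect returns its own undo action, and if any effect fails, every effect applied so far is reversed, so the event succeeds in all objects or fails in all.

```python
class EventFailed(Exception):
    """Raised when an event fails in its effect on some object."""

def apply_event(effects):
    """Apply a list of (object, effect) pairs as one logical commit unit.

    Each effect returns an undo action. If any effect raises EventFailed,
    all effects applied so far are undone, then the failure is re-raised.
    """
    applied = []
    try:
        for obj, effect in effects:
            applied.append(effect(obj))
    except EventFailed:
        for undo in reversed(applied):  # roll back every effect so far
            undo()
        raise
```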

 

Once you have accepted that you are specifying data events, not real-world events, it is natural to draw directed transient association arrows.

 

Given that the business rules model specifies information objects rather than real-world objects, you can and should specify transient association arrows as pointing in one direction.

 

What does the direction mean? It describes how the affected information objects are identified. See the next section for more details. This direction is still ‘logical’; it does not imply any choice between technologies or decisions about physical design.

 

The table below separates transient one-way association arrows from higher and lower level concepts.

Level                                  Interaction concepts
Conceptual Model - real-world events   Multi-way associations between entities
Logical Model - data events            One-way association arrows between entities
Physical model - interactions          Messages and/or foreign keys and indexes

 

A one-way directed arrow does not say that an event’s effects in the real world are sequential. Nor does it prescribe choices in the machine domain to do with programming language, or message-passing strategy, or sequence of update processing.

 

Our systems should give the appearance that all the effects of an event are contemporaneous and co-ordinated. Whatever happens down at the technology level, the system users should believe that an event has a coherent and indivisible effect on the system. I’ll talk later about different message-passing strategies you may employ at the technology-level, after looking more closely at the arrows.

How to draw the arrows in an event impact structure

An arrow shows 1:1 association between the effects of the event at either end. The direction does not specify message passing; it specifies how the affected objects are identified. Sometimes, the system can identify all the persistent objects affected from the parameters supplied with an event. But generally, the system has to locate objects one after another, regardless of how you design the message passing.

 

You draw an arrow to an object from either the entry point, or from another object. An entry point arrow says the system can identify the affected object from the event parameters alone. An arrow from one object to another says the object at the tail of the arrow has to remember the identity of the object(s) at the head of the arrow.

The need to ask one object for the identifiers of other objects

A system must remember relationships as well as objects. You rely on one object remembering the identifiers of other objects, or somehow being able to find a memory of them.

An event will expect the current parent of a child to be remembered

The words ‘parent’ and ‘child’ are often used by systems analysts to distinguish one end of a relationship from the other. Given a one-to-many relationship, the one end is the parent and the many end is the child.

 

Consider the Pupil Transfer event that swaps a Pupil from one School to another. The event identifies both the Pupil and the School [new]. These two objects may receive the event in parallel. But the event parameters do not identify the School [old]. This existing parent object is remembered by the system. The figure below shows this by a directed arrow.

EVENT: Pupil Transfer (PupilSerialNum, SchoolName [new])

Entities affected    Preconditions: Fail unless…    Post conditions
Pupil                                               Pupil swapped from old school to new school
  -->
School [old]                                        Pupils = Pupils - 1
School [new]         School not full                Pupils = Pupils + 1

 

The main point is: the School [old] object cannot, in any reasonable implementation, receive the event until after the Pupil object. This precedence is neatly documented by drawing the transient association arrow as pointing in that direction.

 

The precise mechanism by which the system remembers the School [old] makes no difference to the event impact structure, whether it is via a pointer chain, or a foreign key inside the persistent Pupil object, or some other mechanism.
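As a sketch, the event logic might look like this in code (dictionaries stand in for the persistent objects; all names are illustrative, not from any particular tool). Note how School [old] is reached only through the identifier remembered inside the Pupil:

```python
def pupil_transfer(pupil_serial_num, new_school_name, pupils, schools):
    """Sketch of the Pupil Transfer event impact structure."""
    pupil = pupils[pupil_serial_num]       # entry point: identified by parameters
    new_school = schools[new_school_name]  # entry point: identified by parameters
    old_school = schools[pupil["school"]]  # the arrow: Pupil --> School [old]
    if new_school["pupils"] >= new_school["capacity"]:
        raise ValueError("Fail unless school not full")  # precondition
    old_school["pupils"] -= 1              # post condition: Pupils = Pupils - 1
    new_school["pupils"] += 1              # post condition: Pupils = Pupils + 1
    pupil["school"] = new_school_name      # pupil swapped from old to new school
```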

 

By the way, does an event necessarily affect all the objects whose keys are present in its parameters?

 

No. Suppose the key of a Pupil is a hierarchical composite of School and Pupil identifiers; this does not mean that an event carrying the key of a Pupil will access the School first (though this might later be forced at a deeper level of programming by an implementation decision of the database designer). Some events hit only School or only Pupil, some will go from School to Pupil, and some (like the one above) will go from Pupil to School.

An event will expect the current children of a parent to be remembered

Suppose the event ‘School Closure’ affects all the Pupils in the School.

 

It is common in Enterprise Applications to broadcast an event to all the objects of a given type, all the child objects belonging to a given parent object. The event does not identify all the child objects. It expects the system to remember and locate all the children, given only the identity of the parent.

 

The child objects cannot, in any reasonable implementation, receive the event until after the parent object. The figure below neatly documents this sequence by drawing the transient association arrow as pointing from the parent to the set of child objects affected by the event, and showing the set as an iterated element.

EVENT: School Closure (SchoolName)

Entities affected:  School  -->*  Pupil

Conditions

There are two kinds of condition that might be annotated on an event impact structure.

  • A fact condition - a logic or guard condition that controls the entry to a selected option or iterated component in the control structure of the event impact structure.
  • A constraint condition - a precondition that stops the event from being processed or prevents the process from completing.

 

In the UML, these are easily confused, because discrete events are not distinguished from operations. The allocation of fact and constraint conditions is discussed in later chapters in this series.

Implementing an event impact structure

The choice of coding style or language is not important during business rules analysis. Business rules modeling must be entirely separable from OO programming. But nobody wants to spend their time building models that are no use. It is important that people can transform an event impact structure into program code. The key decision missing from an event impact structure is the choice of message-passing strategy.

 

Consider, in the Pupil Transfer event example: how does the Pupil Transfer operation in Pupil communicate with the Pupil Transfer operation in the ‘old’ School, and what data is passed back and forth?

 

An event impact structure documents the fact of an interface between objects, but not its data contents. You might assume each object passes on all the event data, and a copy of the state of every object the event has passed through so far, but this is way over the top.

 

Three different ways to implement the interactions between objects are described below. The arrows in event impact structures are not meant to be messages, but they do turn into messages under the second of these three strategies.

Chain or Staircase pattern: hand-to-hand message passing

In what might seem the most ‘object-oriented’ implementation, the objects pass the event from one to another, as though following the arrows in an event impact structure. This is called the chain or staircase pattern.

 

Any event (in a process control system perhaps) that simply triggers objects into action is easily implemented following the staircase paradigm. Difficulties arise where you need to get data back from the objects and assemble this into a report. The staircase solution is not so good in Enterprise Applications where the event may have to build up a complex output data structure from the many concurrent information objects it affects.
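A minimal sketch of the staircase, using the Pupil --> School [old] arrow from the Pupil Transfer example (the classes and method names are hypothetical): each object applies its own effect, then hands the event on to the next object along the arrow.

```python
class School:
    def __init__(self, pupils):
        self.pupils = pupils

    def pupil_transfer_out(self):
        self.pupils -= 1                  # the last effect in the chain

class Pupil:
    def __init__(self, old_school):
        self.school = old_school          # remembered identity of School [old]

    def pupil_transfer(self, new_school):
        self.school.pupil_transfer_out()  # pass the event on, hand to hand
        self.school = new_school          # then apply this object's own effect
        new_school.pupils += 1
```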

Fork pattern: centrally-controlled message passing

In one possible object-oriented implementation, an event manager controls the whole event impact structure. This solution is called the comb or fork because that’s what it looks like in an event impact structure that records the messages going back and forth from the event manager object.

 

The event manager controls something like a two-phase commit. First it calls each object with the event, then it reads all the objects’ replies to check they are in the correct state, then it invokes each object again, telling it to process the event, update itself and reply with any required output.

 

Actually, it’s more complex than this because the event manager will have to request some objects to provide it with the identities of others.

 

The difficulty with this approach is that by the time you’ve put all the control logic into the event manager, there is so little left for the individual objects to do that it seems barely worth invoking them to do it. (cf. Martin Fowler’s transaction script pattern).
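The two-phase shape of the fork can be sketched as follows; the check/apply protocol on the objects is an assumption made for illustration. The event manager first asks every object whether it is in a valid state, and only then tells each object to process the event and reply.

```python
class EventManager:
    """Sketch of centrally-controlled message passing (the fork pattern)."""

    def process(self, event, objects):
        for obj in objects:              # phase 1: check every object's state
            if not obj.check(event):
                raise ValueError("event rejected: an object is in the wrong state")
        return [obj.apply(event) for obj in objects]  # phase 2: update and reply
```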

Procedure: combine the relevant parts of the objects into one

You can get around the need to define the message passing by extracting the relevant operations from each object, bringing them together into one procedure, and making them communicate via the local memory or working storage of that procedure.

 

This may seem strange to an OO programmer, but it is what procedural programmers do naturally. They code the event impact structure as a single procedure, and implement it within one commit unit controlled by the database management system. If an event finds one object is in the wrong state, it tells the database management system, which rolls back any effects of the event on other objects that have been processed so far.

 

You might code the procedural solution in an OO programming language or another kind of language. Procedural languages like COBOL and declarative languages like SQL remain an effective means of implementing event impact structures in Enterprise Applications.
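A sketch of the procedural strategy using Python’s sqlite3 module (the table layout is illustrative): the whole event is one procedure inside one database transaction, and finding an object in the wrong state raises an error that makes the DBMS roll back every effect applied so far.

```python
import sqlite3

def school_closure(conn, school_name):
    """School Closure coded as a single procedure in one commit unit."""
    with conn:  # commit on success, roll back on exception
        conn.execute("UPDATE pupil SET school = NULL WHERE school = ?",
                     (school_name,))
        cur = conn.execute(
            "UPDATE school SET state = 'closed' "
            "WHERE name = ? AND state = 'open'", (school_name,))
        if cur.rowcount != 1:
            # wrong state: the DBMS rolls back the pupils' effects too
            raise ValueError("Fail unless school is open")
```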

Automated forward engineering

You can develop an entity model to the point where a CASE tool can generate most of the detail in the event models. There are CASE tools that can generate the boxes in an event impact structure from the information recorded in entity life history diagrams. The analyst is left to add the transient association arrows and the conditions on selections.

 

There is at least one CASE tool that can list and allocate actions to nodes of the event impact structure. It takes some actions from the information recorded in entity life histories, and it invents and allocates ‘read’ and ‘write’ actions if the objects have to be stored in and retrieved from a database.

 

Given an event impact structure you can code the event processing in either procedural or OO-style. A CASE tool can convert the event impact structure into the form of a structured procedure or ‘action diagram’. To design the object-oriented version involves choosing the staircase or fork message passing strategy and adding the message passing invocations to the operations of each entity type.

Conclusions and remarks

An event impact structure is a graphical representation of behavioral terms and facts

  • It specifies business rules without design detail; it stands firm in the face of different message passing strategies, object identifiers and implementation languages,
  • It primarily shows the synchronisation of objects, but may be annotated with preconditions, post conditions and implementation details.

 

The style of event impact structures in this chapter has several interesting characteristics and some advantages over OO-style message-passing diagrams.

What without how

Event impact structures are simple, friendly, and technology-independent. They say what without how. They provide specification without implementation. They are not affected by the designer’s choice of programming environment or message-passing strategy, because the arrows represent 1:1 associations rather than messages.

 

An event impact structure does not commit you to any statement about communication. Messages are an implementation device. You may select between a number of viable message-passing strategies. You can choose to send messages along the paths specified in the event impact structure, or another route. You might eventually code the arrows in an event impact structure as message passing in C++ or a sequence of read actions in COBOL, but this is irrelevant at the stage of systems analysis.

 

Event impact structures provide a better problem-modeling tool than OO-style sequence diagrams. The more you use event impact structures, the more you realise that event-orientation is just as important in systems analysis and design as object-orientation.

Formal modeling of the problem domain

An event reflects a natural phenomenon. An event impact structure is a good place to document the behavioral facts of an event. It specifies the interactions between concurrent objects in a formal way. It is a directed graph that tells you which objects are affected, the order they can be discovered in, and how one object naturally governs the route of the event to other objects.

Explicit and implicit relationships

An implicit relationship is implied by two or more explicit relationships. E.g. If a mother is explicitly connected by relationships to two children, a brother and sister, then the two children are implicitly related by a sibling relationship.

 

An event impact structure is a directed graph; the event travels along a relationship in a one-way direction. But an arrow in an event impact structure may follow an implicit relationship. On the other hand, physical message passing will have to follow explicit relationships if those are the only ones remembered by stored identifiers in the implemented system.

Concurrency without communication

An OO designer may consider the purpose of object interaction analysis is to define the message passing between objects. I say the primary purpose is to define the behavioral fact that an event appears concurrently in several entity life histories. I believe the concurrency of interacting objects is more fundamental, more objective, than the communication that makes it work.

 

There is a body of theory about concurrency and communication (Hoare’s Communicating Sequential Processes and Milner’s Calculus of Communicating Systems, among others), but this is seen as difficult and obscure. People are frightened by the abstract mathematical calculi that are used to explore the nature of communication and concurrency.

 

An event in the concurrency theories of Hoare and Milner is an abstract construct describing a condition that occurs as the result of concurrency, such as deadlock and non-determinism. An event in this chapter is a cause of concurrency rather than a result; it carries the semantics and business rules of an application.

 

An event impact structure is not at all frightening. It helps us to separate the modeling of concurrency (a feature of the problem domain) from the modeling of communication (a feature designed into the solution).

 

Later chapters show how life history analysis helps you to specify classes as concurrent entity life histories, and how object interaction analysis helps you to specify how these entity life histories interact when a discrete event (that is, a minimum unit of consistent change) occurs. Drawing an event impact structure helps you to see and define the relationships between classes that are used in object interaction analysis.

Better CASE tools

You can develop entity life history models to the point where a CASE tool can generate most of the detail in the event models. Three such CASE tools have been built. The only things you have to add by hand are the transient association arrows and the conditions governing selections and iterations.

 

This is a boon. Its value is quality assurance and configuration management. It also has a productivity benefit. But I do not advise anybody to develop a complete entity model then attempt to generate the event models. It is much better to develop the entity and event models in parallel. So in practice you will probably run the automated generator several times.

Automatic code generation from event impact structures is an exciting area for CASE tool development. Again, the three message-passing strategies provide three different ways for the tool to do this.

Modeling distributed systems

The style of event impact structures shown here could prove valuable to those who wish to combine federated systems or partition a single one into distributed business components. See the book “The Enterprise Modeler”.

Seven basic event impact patterns

You can specify an event using one or both of two tools: event impact view and event impact structure. In simple cases (and many cases are simple) it is possible to sketch the event impact structure within the event impact view.

 

This short chapter introduces seven patterns that form the basic building blocks of event specification. The figure below shows generic patterns drawn in the form of event impact views with directed transient association arrows. Each pattern is a shape that you can reuse over and over in specifying different events in different business rules models.

Event impact structure patterns - in tabular form

The table below shows the basic patterns.

EVENT PATTERN: <<Child Birth>>    Parent    Child
EVENT PATTERN: <<Child Death>>    Child ---> Parent
EVENT PATTERN: <<Link Birth>>     Parent A    Parent B    Child
EVENT PATTERN: <<Link Death>>     Child ---> Parent A ---> Parent B
EVENT PATTERN: <<Swap Parent>>    Child ---> Parent [old]    Parent [new]
EVENT PATTERN: <<Broadcast>>      Parent --->* Child
EVENT PATTERN: <<Gatekeeper>>     Monitor o--> Object

Event impact structure patterns – in diagram form

The same event impact structures can be drawn using the SSADMv4.2 notation for an effect correspondence diagram.

Event impact structure patterns - examples

The three events below exactly fit the <<pattern name>> shown.

EVENT: OrderItemCreate <<link birth>>
Entities affected:  Order    Product    OrderItem

EVENT: Divorce <<link death>>
Entities affected:  Marriage ---> Person (wife) ---> Person (husband)

EVENT: Product Withdrawal <<broadcast>>
Entities affected:  Product --->* OrderItem

The Order Closure event is more complex, but you can see it includes the broadcast pattern.

EVENT: Order Closure
Entities affected:  Order ---> Customer --->* Order Item ---> Product

A more complex pattern – gatekeeper cascade

A gatekeeper prevents an event instance from reaching all the object types or instances in the event impact structure. You might say a gatekeeper filters an event. (I believe a gatekeeper is called a “context filter” in Jackson System Design).

 

e.g. The figure below shows an event impact structure from my reworking of an old Shlaer-Mellor case study. It shows the possible effects of a Button Push event on the Oven-Power and the Oven-Light objects.

Notice, the Oven-Power object does not hear of the Button Push if the door is open. Similarly the Oven-Light object does not hear of the Button Push if the door is open, or if the power is already on.

 

The Oven-Power knows, by inspecting its state variable (cooking or idle), which of two optional effects the event will have (start cooking or extend cooking). The Oven-Light is only affected in one case (start cooking).
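The gatekeeper cascade can be sketched in code. The class and state names below follow the case study, but the logic is my own simplification: the Door filters the Button Push for Oven-Power, and Oven-Power filters it for Oven-Light.

```python
class OvenLight:
    def __init__(self):
        self.on = False

    def start_cooking(self):
        self.on = True

class OvenPower:
    def __init__(self, light):
        self.state = "idle"
        self.light = light

    def button_push(self):
        if self.state == "idle":        # optional effect: start cooking
            self.state = "cooking"
            self.light.start_cooking()  # only this option reaches the light
        # else: optional effect: extend cooking (light unaffected)

class Door:
    def __init__(self, power):
        self.open = False
        self.power = power

    def button_push(self):
        if not self.open:               # gatekeeper: the event is filtered out
            self.power.button_push()    # if the door is open
```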

Patterns as a tool for analysis and design

Patterns make the work of the teacher easier; they shorten the learning curve. The teacher can illustrate the patterns via case studies, and teach how to use them as an analysis and design tool. I am especially interested in how patterns prompt you to ask important business analysis questions and so refine the specification. I call these refinements “generative pattern transformations”.

Occam’s razor in the gatekeeper pattern

Occam’s razor tells us to cut out needless dross, to prefer the simpler of two possible explanations. This is useful as a general principle of system design. The gatekeeper pattern gives us chances to apply Occam’s razor in the form of a more specific principle.

 

  • Guideline: “Do not allow two objects to duplicate the role of gatekeeper”

 

In other words, two objects should not maintain what is in effect the same state variable. (Or in the terms of Jackson’s structured programming method, you should resolve ‘boundary clashes’ wherever possible.)

e.g. It would be crazy to specify the Oven-Light as receiving the event even if the power is on, making it repeat the same test (cooking or idle) to choose between effects, and then ignoring one of the cases. This would mean the Oven-Light object has to maintain what is in effect the same state variable as that of Oven-Power.

 

So, Oven-Power has to act as gatekeeper for Oven-Light. Again, this precedence is neatly documented by drawing the 1:1 transient association arrow as pointing from an optional effect under Oven-Power to Oven-Light.

 

Using this principle, I have reworked old case studies by Jackson and by Shlaer. I find that introducing gatekeeper objects and applying the principle above helps us to produce more elegant solutions in which objects filter events for each other. The entity life histories reduce to what intuition suggests is the correct and minimal specification. And so, the resulting code is shorter, smaller.

 

You should not duplicate event control flow in different entities. Where an object chooses (by testing its state) which of two or more effects an event may have, no other object should have to make the same test. If it needs to know, it should learn from the first object.

Gatekeeper as a generative pattern

The gatekeeper is a generative pattern. It prompts the following analysis question.

 

  • Q) Given a gatekeeper at the entry point of an event: does the gatekeeper object choose between effects by testing the event’s parameters or the object’s state?

 

If the former, then you should divide the event into two different classes of event, drawing a distinct event impact structure for each. If the latter, keep the selection in the event impact structure.

Conclusions and remarks

This short chapter has introduced seven patterns that form the basic building blocks of event impact structures. Extremely complex event impact structures can be constructed by assembling the basic building blocks into the shape that meets the requirements of the business at hand.

 

Even the simplest event impact structure pattern can prompt you to ask questions in life history analysis, and enable you to uncover more exactly what the end-users’ requirements are.

I have not looked yet for more complex patterns in event impact structures. But I have found scores more ‘analysis patterns’ in entity models and in entity life histories.

 

Another of our projects has compared and contrasted analysis patterns with design patterns (after Gamma et al.). It turns out that the differences are as instructive as the similarities.

Behavioral constraints and derivations

Behavioural constraints

Specifying constraints during interaction analysis.

 

Terms and facts are fundamental. But you can’t do much without the constraints; this is where all the useful stuff is. Analysts often neglect the constraints under which the enterprise operates. Frequently, required constraints are not articulated until it is time for programmers to code them.

 

This chapter discusses constraints and illustrates the specification of behavioral constraints in event impact structures.

Constraints as event preconditions

A behavioral constraint is a precondition that prevents an event from being accepted and processed. E.g. A Wedding event joins a Bride and Groom. Both Bride and Groom must be over 18 years old.

A constraint is a precondition that prevents a system from accepting or containing information that breaks the business rules. Most constraints take the form: Fail event E unless object O is in a valid state for event E. I record constraints under the heading of preconditions. Consider for example the constraints on a Divorce event.

EVENT: Divorce (MarriageNum)

Entities affected    Preconditions: Fail unless…
Marriage             MarriageStatus = active
  --->
Person [husband]     MaritalStatus = married
  --->
Person [wife]        MaritalStatus = married

An interesting question arises here. Given that the Wedding event has been specified to set the husband and wife’s state variables to ‘married’, do we need to test this on the Divorce event? I would say no, unless this is a safety-critical system and detecting every possible bug is important.
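The Divorce preconditions can be sketched as ‘fail unless’ tests in code (dictionaries stand in for the persistent objects; every name here is illustrative):

```python
def divorce(marriage_num, marriages, persons):
    """Sketch of the Divorce event with its preconditions."""
    marriage = marriages[marriage_num]        # entry point: from parameters
    if marriage["status"] != "active":
        raise ValueError("Fail unless MarriageStatus = active")
    for role in ("husband", "wife"):          # arrows: Marriage ---> Person
        if persons[marriage[role]]["marital_status"] != "married":
            raise ValueError("Fail unless MaritalStatus = married")
    marriage["status"] = "dissolved"          # all preconditions hold: update
    for role in ("husband", "wife"):
        persons[marriage[role]]["marital_status"] = "divorced"
```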

Does a constraint belong to an object or an event?

Logically, it belongs to both: a constraint applies at the intersection of a transient event with a persistent object, to the effect of an event on an object. Physically, it may be coded with either. Each operation on an object is only meaningful as part of an event, and you need to think about the preconditions and post-conditions of the whole event. You should analyse and specify constraints from both entity- and event-oriented points of view.

 

Using an object-oriented programming language, you might code a constraint in an operation of a class. Using a client-server programming language, you might code a constraint in a transaction procedure. This choice is more to do with implementation technology than with the logic of the business rules being implemented.

There are many ways to specify and code constraints, as illustrated below.

Referential integrity constraints

A referential integrity constraint is a rule about the existence of relationships between objects before or after an event happens. You can always specify a referential integrity constraint as a precondition of one or more events.

E.g. a Wedding event will: Fail unless Wife exists and Fail unless Husband exists.

 

Alternatively, you can ask a database management system to impose referential integrity constraints automatically by checking for the existence of records in the database. The figure below shows how you might define such constraints on an entity model.

 

 

Most people would expect to code such constraints on relationships in the form dictated by their database management system, so that it will automatically impose referential integrity tests.

Inter-relationship constraints

An inter-relationship constraint specifies a mutual exclusion between relationships. You can always specify an inter-relationship constraint as a precondition of one or more events. E.g. a Wedding event will:

·         Fail unless Wife SexAtBirth = Female

·         Fail unless Husband SexAtBirth = Male

 

Alternatively, you can specify such a constraint in terms of mutual exclusion between relationships. The figure below shows a Person can relate to a Marriage as either husband or wife, but not both.

Most database management systems do not understand the exclusion arc, so you will have to code this constraint in another way, probably as attribute value constraints, which are discussed below.

Relationship multiplicity constraints

A relationship multiplicity constraint specifies the number of relationship instances that an object is allowed. You can always specify a relationship multiplicity constraint as a precondition of one or more events. E.g.

EVENT: Project Start

  Entities affected    Preconditions: Fail unless…
  Project              Project has at least one Employee

 

Suppose you adapt the marriage registration system for a bizarre polygamous society:

  • A man can have only one wife.
  • A woman can have five husbands.

You might specify such multiplicity constraints on an entity model.

Since most database management systems understand neither the ‘at least one child’ rule nor numbers written on relationships, you will have to code these multiplicity constraints in some other way, probably as attribute value constraints. Often you will test the value of a total held in the parent entity type.

 

However, it can be difficult to specify the ‘on event’ nature of some constraints in an entity model. Constraints defined as relationship multiplicity constraints apply to every event, whether you want them to or not.
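The suggestion above, testing a total held in the entity, can be sketched for the polygamous example. The attribute names and the wedding procedure below are invented for the illustration; the point is only that a multiplicity constraint becomes a test on a stored total:

```python
from dataclasses import dataclass

class PreconditionFailure(Exception):
    """Raised when an event fails a precondition test."""

@dataclass
class Person:
    name: str
    wives: int = 0      # totals held in the entity, maintained by events
    husbands: int = 0

MAX_WIVES = 1       # a man can have only one wife
MAX_HUSBANDS = 5    # a woman can have five husbands

def wedding(groom: Person, bride: Person) -> None:
    # Multiplicity constraints recast as attribute value constraints
    if groom.wives >= MAX_WIVES:
        raise PreconditionFailure("Fail: groom already has a wife")
    if bride.husbands >= MAX_HUSBANDS:
        raise PreconditionFailure("Fail: bride already has five husbands")
    groom.wives += 1
    bride.husbands += 1
```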

Attribute value constraints

An attribute value constraint specifies a restriction on data values. You can always specify an attribute value constraint as a precondition of one or more events.

EVENT: Order Closure

  Entities affected    Preconditions: Fail unless…
  Order                OrderValue > $100
  Customer             OrderValue + CustomerDebt < CustomerCreditLimit
                       TotalUnpaidOrders < 5

 

Alternatively, you can specify such a constraint in a data dictionary by declaring the valid range of an attribute. However, it can be difficult in a data dictionary to specify the ‘on event’ nature of some constraints.

Inter-attribute constraints

An inter-attribute constraint checks that two or more attributes are compatible. An entity attribute may be compared with an event parameter. E.g.

EVENT: Promotion

  Entities affected    Preconditions: Fail unless…
  Employee             PromotionGrade [input] > EmployeeGrade

 

An attribute of one object may be compared with an attribute of another object.

E.g. the Order Closure event above will fail unless OrderValue + CustomerDebt < CustomerCreditLimit.

 

You can always specify an inter-attribute constraint as a precondition of one or more events. You do sometimes have to make a relatively arbitrary decision about which object will apply the constraint in the course of the event’s interaction. Further research may reveal useful heuristics in this area.

State variable constraints

A state variable constraint checks the value of a state variable.

E.g. a Wedding event will: Fail unless bride’s MaritalStatus = ‘unmarried’ and Fail unless groom’s MaritalStatus = ‘unmarried’

 

You can always specify a state constraint as a precondition of one or more events. Any constraint you think of in the form ‘Fail event A unless event B has already happened’ is naturally coded by testing the value of a state variable.

 

There is little alternative here; it is normal to detect an out-of-sequence event by testing a state variable value. These tests ensure that the previous event in the entity life history of the husband and wife must have been one that set the state variable to ‘unmarried’ (presumably Birth or Divorce).

 

In fact, most constraints can be turned into tests on state variable values if you think hard enough, especially referential integrity and other constraints on relationships between objects.

Date constraints

A date constraint specifies when something should happen, before or after a specific date or time. You can always specify a date constraint as a precondition of one or more events.

E.g. a Wedding event will: Fail unless Wife Age > 18 years and Fail unless Husband Age > 18 years.

 

Alternatively, you might code a date constraint as an inter-attribute constraint by comparing the value of an input date or stored date with today’s date. Or you might code a date constraint as a state variable value constraint. There are two events involved: the first event, a date, puts the object into the state where the second event, the constrained event, is allowed.
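Coded as an inter-attribute comparison, the date constraint might look like the sketch below. It follows the text’s ‘Fail unless Age > 18’ literally; the function names and the age calculation are invented for the illustration:

```python
from datetime import date

class PreconditionFailure(Exception):
    """Raised when an event fails a precondition test."""

def age_in_years(date_of_birth: date, today: date) -> int:
    # Whole years elapsed, allowing for whether this year's birthday has passed
    had_birthday = (today.month, today.day) >= (date_of_birth.month, date_of_birth.day)
    return today.year - date_of_birth.year - (0 if had_birthday else 1)

def check_partner_age(date_of_birth: date, today: date) -> None:
    # The date constraint recast as a comparison between a stored date
    # of birth and today's date
    if not age_in_years(date_of_birth, today) > 18:
        raise PreconditionFailure("Fail: partner is not over 18")
```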

Behavioral derivations

On specifying derivations during interaction analysis

Derivations define how knowledge in one form may be transformed by calculation into other knowledge, possibly in a different form.

This chapter illustrates the specification of behavioral derivations in event impact views under the heading of post conditions.

Structural derivations

A structural derivation describes how one piece of data derives from other data. A structural derivation must be true at all times. E.g. The Marriage Registration case study in the previous chapter contains a simple derivation.

  • PersonAge equals the difference between DateToday and DateOfBirth.

 

More elaborate rules describe how PersonCondition is derived for display on a marriage certificate.

  • PersonCondition = Bachelor provided that MaritalStatus = Unmarried and SexAtBirth = Male.
  • PersonCondition = Spinster provided that MaritalStatus = Unmarried and SexAtBirth = Female.
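These derivation rules can be written as a pure function. A sketch; note that the text defines only the two unmarried cases, so other combinations are deliberately left open here:

```python
def person_condition(marital_status: str, sex_at_birth: str) -> str:
    """Derive PersonCondition for display on a marriage certificate."""
    if marital_status == "Unmarried" and sex_at_birth == "Male":
        return "Bachelor"
    if marital_status == "Unmarried" and sex_at_birth == "Female":
        return "Spinster"
    # Only the unmarried cases are specified; conditions such as
    # widowed or divorced would need further rules
    raise ValueError("no derivation rule for this combination")
```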

 

The Marriage Registration case study has no calculations. So for derivation rules, I turn below to an order processing case study.

Behavioral derivations

A behavioral derivation usually declares a side effect or post condition that an event leaves in its wake.

E.g. MaritalStatus = married.

 

A behavioral derivation is not true at all times, only just after a specific event has happened. In our order processing case study, an Order Closure event fires several derivations. These are only guaranteed to be true just after that event has been completely processed.


The figure below shows how these derivations can be specified as actions in the event impact view for Order Closure.

EVENT: Order Closure (OrderNum)

Event State: WorkingTotal (Derivation: WorkingTotal = WorkingTotal + ItemValue)

  Entities affected    Post conditions
  Order                OrderValue = SumValue - CustomerDiscount
                       AmountDue = OrderValue
                       OrderClosureDate = Today
                       OrderState = ‘Closed’
                       Note 3
  ---> Customer        CustomerDebt = same + OrderValue
                       CustomerUnpaidOrders = same + 1
                       Note 2
  --->* Order Item     o-- ItemQuantity = 0: No action
                       o-- else: ItemValue = ItemQuantity * ProductPrice
                                 OrderItemState = Closed
                       Note 1
  ---> Product         StockOnHand = same - ItemQuantity

 

The derivations in this example are more naturally specified with a behavioral event rather than an entity.

 

I try to allocate each derivation rule to the effect of an event on an object. The general principle is to allocate a derivation to the effect on the object that owns the derived attribute. However, there are complications.

 

Note 1: Derivation of a child’s attribute from its parent’s data

ItemValue, an attribute of Order Item, is derived using data in its parent object, Product. The relevant action is placed in an operation of Order Item, the entity type that owns the derived attribute.

The model does not say how ProductPrice gets from Product to Order Item. This is a matter to be decided during design/coding rather than analysis/specification. In an object-oriented design, it would be via message passing.

 

Note 2: Derivation of a parent’s attribute from its children’s data

CustomerDebt and CustomerUnpaidOrders are both derived using data gathered from a set of child objects. The relevant actions are placed in an operation of Customer, the entity type that owns the derived attributes.

 

Note 3: Derivation of an attribute from both parent and children

The derived attribute OrderValue in Order is derived from data from its children Order Items and its parent Customer. The relevant actions are placed in an operation of Order, the entity type that owns the derived attribute. Notice again, the event impact view does not say how CustomerDiscount gets from Customer to Order. Again, this is a matter to be decided during design/coding rather than analysis/specification.

 

Note 4: Derivation of transient working data

Processing involves the maintenance of a transient attribute called SumValue. This is created and consumed within the process of the event. Who owns SumValue? You could argue it is a phantom (never stored) attribute of Customer and maintain it there, but it is surely best to regard it as an attribute of the event process itself – part of the session state if you like.

 

There is a minor and deliberate mistake in the example event impact view. This will be discussed later under the heading of behavioral facts.
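The derivations in the Order Closure view can be sketched end-to-end. The class and attribute names below are invented for the sketch, and it assumes OrderValue is the sum of the item values less the customer’s discount, since the view leaves the exact formula unclear:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Product:
    price: float
    stock_on_hand: int

@dataclass
class OrderItem:
    quantity: int
    product: Product
    value: float = 0.0
    state: str = "Open"

@dataclass
class Customer:
    debt: float = 0.0
    unpaid_orders: int = 0
    discount: float = 0.0

@dataclass
class Order:
    customer: Customer
    items: List[OrderItem] = field(default_factory=list)
    value: float = 0.0
    amount_due: float = 0.0
    state: str = "Open"

def order_closure(order: Order) -> None:
    sum_value = 0.0  # transient working data owned by the event process
    for item in order.items:
        if item.quantity == 0:
            continue  # 'No action' branch for empty items
        item.value = item.quantity * item.product.price
        item.state = "Closed"
        item.product.stock_on_hand -= item.quantity
        sum_value += item.value
    # Assumed reading: order value is the item total less the customer discount
    order.value = sum_value - order.customer.discount
    order.amount_due = order.value
    order.customer.debt += order.value
    order.customer.unpaid_orders += 1
    order.state = "Closed"
```

Notice how each derivation sits naturally against the entity that owns the derived attribute, while SumValue lives only in the event process.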

Not all post conditions are “derivations”

Some side effects of an event are derivations; some aren’t. Consider for example the post conditions of a Pupil Transfer event.

EVENT: Pupil Transfer (PupilSerialNum, SchoolName [new])

  Entities affected    Preconditions: Fail unless…    Post conditions
  Pupil                                               Pupil swapped from old school to new school
  ---> School [old]                                   Pupils = Pupils - 1
  School [new]         School not full                Pupils = Pupils + 1

 

The rule “Pupil swapped from old school to new school” is an abstraction from implementation-specific detail. In a relational database it would mean replacing the value of a foreign key. In some other kind of implementation, it might mean updating indexes.

Conclusions

An event impact view or impact structure is a good place to document the behavioral facts of an event. The Event impact structure tells you the objects that are affected, the order they can be discovered in, and how one object naturally governs the route of the event to other objects.

 

This chapter has discussed constraints and illustrated the specification of behavioral constraints in event impact views. This is also a good time to document behavioral derivations; most are readily specified in event impact views.

 

Later chapters develop the Order Closure example further, through examination of the constraints and control flow that governs the processing.

Marriage registration system case study

This chapter uses a small case study to illustrate the modeling of business rules, especially behavioral constraints.

 

Suppose the British government wants a system to register all marriages conducted under their administration. The system must also register everybody who is eligible for marriage. Let us start by prototyping a key part of the UI - the window for displaying the details of a marriage.

 

Registry Office: Epsom                  Registrar: S.C Humphrey
Marriage Certificate No. 3555           Date: 5th August 1996

  Name         Linda Stevens              Peter Jones
  Age          25                         30
  Condition    Spinster                   Bachelor
  Occupation   Consultant                 Salesman
  Residence    12 Broad Oaks, Old Town    5s The Flats, New Town

Witness 1: E. Entwistle                 Witness 2: M. Jones

 

You don’t want anybody who looks at the marriage record being able to overtype the details. So you might have two versions of the above screen, one for data entry and one for display only.

 

Notice the marriage certificate number. This has nothing to do with computing. Businesses have always introduced business identifiers for record-keeping purposes. They uniquely label each real-world entity with a key or code. These keys are business identifiers, not the storage addresses of database records. You should not try to make business people use identifiers that are designed to help programmers locate data. Rather, you should make programmers store the identifiers that business people use.

Terms and Facts

Structural terms in the case study include: Person, PersonNumber, PersonName, DateOfBirth, SexAtBirth, Marriage, MarriageDate, etc. Behavioral terms include: Birth, Death, Wedding and Divorce.

 

Structural facts in the case study include: Marriages relate Husbands and Wives. Behavioral facts in the case study include: Wedding events join Husbands to Wives. Divorce events separate Husbands from Wives. Behavioral facts are to do with how objects are co-ordinated by an event or enquiry; they are independent of how any program code works to achieve this.

 

Systems analysts have long been able to describe the structural and behavioral facts of an enterprise. They usually document terms and facts as entities, attributes, relationships, events and operations in diagrams and/or some kind of a CASE tool repository.

Constraints

E.g. English law lays down a number of constraints governing a marriage. A marriage must relate two partners, no more, no less. One partner (the groom) is male. One partner (the bride) is female. Both partners must be over 18 years of age. A person can have zero, one or many marriages. A person can have only one marriage at a time. A person can only have marriages in their sex of birth.

 

Note the last of the points listed above. In the UK, a person can change sex to become a transsexual, but cannot contract a marriage in their new sex. This was established by the April Ashley case in 1970, and is currently under review. The fact that the rules may be changed in the future is a matter worth exploring, and such ‘schema evolution’ is discussed in a later chapter in this series.

 

In reality, there are other preconditions to do with the notice period, the number of witnesses, the residential addresses of the partners, the location of the marriage, and so on, but we’ll have to put them aside.

Constraints poorly specified in the structural model

How can we specify the constraints on a Wedding? The figure below shows how you may readily specify some rules in the form of a structural model.

 

How to specify the constraint that one Person cannot be related to Marriages in both husband and wife roles? The figure below shows two ways.

 

This structural model still does not capture all the rules. What about the constraint that a Person can only have one Marriage at a time? The figure below shows you can extend the structural model to capture the rule of monogamy.

 

What about the constraint that both partners must be over 18 years of age? Stop! This way lies madness. It is a mistake to keep on extending the structural model until it shows all constraints.

 

  • “Although we could invent new graphic notations for further constraints, this could make the graphical language hard to learn, and lead to cluttered diagrams. Constraints which cannot be expressed on the diagram may be specified as textual constraints.” ‘Conceptual Schema’ Halpin, page 231

 

This is true even of constraints that can be expressed (with some ingenuity) by adding subtypes into the diagram, such as the monogamy constraint on a Person’s Marriages.

 

Many people try to treat all constraints as Invariant Constraints in an entity model. But many constraints are transient, dynamic or volatile, so it is not appropriate to build them routinely into structural models. You need ways to make all constraints explicit, not just Invariant Constraints.

Constraints well specified in the behavioral model

In general, constraints are assertions about the actions that are possible. You prevent a data item from being entered, or a relationship from being established, by preventing an event from taking place. So constraints can be expressed as event preconditions.

 

  • “While state-transition diagrams are useful for visualising constraints, formal rules for enforcing these are best specified in an Event Condition Action language.” ‘Conceptual Schema’ Halpin, page 232

 

Our own ‘Event Condition Action language’ is a variation of the event modeling language that was developed through extensive research by the UK government during the 1980s. Every constraint is fired by an event. Most constraints apply to the intersection of a transient event object with a persistent information object.

 

The figure below shows how you can record each constraint against the effect of an event on an object.

EVENT: Wedding (Person [bride], Person [groom])

  Entities affected    Preconditions: Fail unless…    Post conditions
  Person [bride]       Person exists                  MaritalStatus = married
                       Age > 18
                       SexAtBirth = Female
                       MaritalStatus = unmarried
  Person [groom]       Person exists                  MaritalStatus = married
                       Age > 18
                       SexAtBirth = Male
                       MaritalStatus = unmarried
  Marriage                                            Wife = Person [bride]
                                                      Husband = Person [groom]
                                                      MarriageDate = Today
                                                      MarriageStatus = active

 

Clearly, some constraints are readily specified with a behavioral event rather than with structural entities. You can and should support a structural model with behavioral models. Instead of annotating the constraints on the diagram, you can record them in a table along with the diagram.
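Such a set of preconditions and post-conditions might be sketched as a single event procedure. The names below are invented; the ‘Person exists’ test is taken as given once the objects are in hand, and no update is applied until every precondition has passed:

```python
from dataclasses import dataclass
from datetime import date

class PreconditionFailure(Exception):
    """Raised when an event fails a precondition test."""

@dataclass
class Person:
    age: int
    sex_at_birth: str
    marital_status: str = "unmarried"

@dataclass
class Marriage:
    wife: Person
    husband: Person
    marriage_date: date
    status: str = "active"

def wedding(bride: Person, groom: Person, today: date) -> Marriage:
    # Preconditions: every test must pass before any update is applied
    for person, sex in ((bride, "Female"), (groom, "Male")):
        if not person.age > 18:
            raise PreconditionFailure("Fail: partner not over 18")
        if person.sex_at_birth != sex:
            raise PreconditionFailure("Fail: partner has wrong sex at birth")
        if person.marital_status != "unmarried":
            raise PreconditionFailure("Fail: partner is already married")
    # Post conditions
    bride.marital_status = "married"
    groom.marital_status = "married"
    return Marriage(wife=bride, husband=groom, marriage_date=today,
                    status="active")
```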

Derivations

Notice that the sex of a person is not shown directly as an attribute on the marriage record; it is implied by the conditions ‘bachelor’ and ‘spinster’. Assuming the system is to maintain historical information about people’s past marriages, you will need further windows for entering personal details and displaying people’s records. The figure below shows a list of people.

Conclusions

An Event impact structure is a good place to document the behavioral facts of an event. The diagram tells you the objects that are affected, the order they can be discovered in, and how one object naturally governs the route of the event to other objects. At the same time, you should document behavioral constraints. Most are readily specified in event impact views.

More about constraints

This chapter says a little more about the specification of constraints, the preconditions that can cause an event to fail.

Constraints in event impact views

I like to specify an event using one or both of two tools: event impact view and event impact structure. In simple cases (and many cases are simple) it is possible to squash the Event impact structure into the event impact view.

 

During object interaction analysis, you can specify every constraint in an event impact structure. The figure below illustrates how you can write a constraint as a ‘Fail unless’ statement, and allocate it to an operation on the relevant entity in the event rule table.

EVENT: Divorce (MarriageNum)

  Entities affected      Preconditions: Fail unless…    Post conditions
  Marriage               MarriageStatus = active        MarriageEndDate = Today
  ---> Person [bride]    MaritalStatus = married        MaritalStatus = unmarried
  ---> Person [groom]    MaritalStatus = married        MaritalStatus = unmarried

An event impact structure specifies one event’s effects on several objects. This kind of event impact view is a specification rather than an implementation; it says what is to be done rather than how it is done.

 

Chapter two discussed three ways to implement an event impact structure. For object-oriented programming, you would extend the event impact structure with implementation-specific detail to do with message passing. For event-oriented programming, you would base the programming on a read/write access path derived from the diagram.

 

Remember the difference between ‘events’ and ‘operations’. A Divorce is an event that succeeds or fails as a whole. A Divorce event will trigger a number of lower-level elementary operations. If the event finds a necessary precondition is untrue, it will fail, backing out any operations done so far.
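The back-out behaviour can be sketched with a simple undo log: each elementary operation records how to reverse itself, and a failed precondition undoes everything done so far. This is an illustrative sketch with invented names, not a real transaction manager:

```python
class PreconditionFailure(Exception):
    """Raised when an event fails; all completed operations are backed out."""

class EventUnit:
    """An event succeeds or fails as a whole: each elementary operation
    records how to reverse itself."""

    def __init__(self):
        self._undo_log = []

    def do(self, apply, undo):
        apply()
        self._undo_log.append(undo)

    def fail_unless(self, condition, message):
        if not condition:
            for undo in reversed(self._undo_log):  # back out, newest first
                undo()
            raise PreconditionFailure(message)

def divorce(marriage: dict, husband: dict, wife: dict) -> None:
    event = EventUnit()
    event.fail_unless(marriage["status"] == "active", "marriage not active")
    event.do(lambda: marriage.update(status="ended"),
             lambda: marriage.update(status="active"))
    event.fail_unless(husband["status"] == "married", "husband not married")
    event.do(lambda: husband.update(status="unmarried"),
             lambda: husband.update(status="married"))
    event.fail_unless(wife["status"] == "married", "wife not married")
    event.do(lambda: wife.update(status="unmarried"),
             lambda: wife.update(status="married"))
```

If the wife’s precondition fails here, the updates already made to the marriage and the husband are reversed, so the event leaves no partial effect behind.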

Constraints in state models

You may analyse the dynamic behavior of objects and describe the behavior of an entity type in the form of a state-transition diagram.

 

The figure below illustrates a structured form of state-transition diagram that imposes a regular expression (a hierarchical structure composed of sequence, selection and iteration components) over the event effects.

Drawing a hierarchical structure has some advantages; it makes it easier to tidy up the diagram and recognise standard patterns; and it naturally leaves space at the bottom of the diagram for annotation. By annotating the event effects with processing detail, you can specify the implementation details, all the processing operations, all the state transitions and all the constraints on event processing.

 

Some have assumed that every constraint can be defined graphically, in the shape of an entity life history model, in terms of the permitted sequence of events. If this were true, a CASE tool could detect all the constraints from the shape of the diagram alone, and generate all the relevant preconditions in code.

 

I have found through extensive research that it is possible to specify most constraints as sequences of events in large entity life histories. But without going into the detailed research, I have found that it is clumsy to specify certain kinds of constraint in the shape of an entity life history diagram.

 

Given the various kinds of constraints listed earlier, I find the following kinds are best listed as numbered constraints and allocated underneath event effects in entity life history diagrams:

·      Relationship multiplicity constraints

·      Attribute value constraints

·      Date constraints

·      Inter-attribute constraints

 

This leaves us with several kinds of constraint that are readily specified in the shape of an entity life history diagram:

·      Referential integrity constraints

·      Inter-relationship constraints (exclusion arcs)

·      State variable value constraints

 

Broadly speaking, you can specify referential integrity rules in an entity life history by showing the valid points at which the birth and death events of a child object can occur in the life histories of its parents. The main advantage is that you can get away from the restrictions of automated referential integrity. You can bend the rules so that full referential integrity is maintained on some events, but disregarded on others.

 

You may also specify ‘cascade’, ‘restrict’ and ‘no effect’ rules by showing how and where the death event of a parent appears in the life histories of its children. The main advantage is that you can apply these rules to logical death events as well as physical deletion events.

 

You can specify an inter-relationship constraint (an exclusion arc over relationships) as a high-level selection in an entity life history. This often involves drawing a parallel aspect entity life history for the purpose, as discussed in the chapters on ‘OO and business data’.

 

Last but not least, you can naturally specify all state variable value constraints in the shape of the entity life history diagram.

Conclusions

The question arises: So what? Yes, I can specify some constraints in the shape of entity life histories, but I know I can specify all constraints as numbered statements allocated to the event effects in an event impact structure, so why bother with the entity life histories?

 

The answer is threefold.

·         There are some problems that are difficult to grasp from only the event-oriented perspective. Looking at things event-by-event, it is hard to visualise the state-transitions of an object and to validate that those state-transitions are sensible. This difficulty implies you need to formally analyse only the most complex of entity types.

·         The evidence suggests that people who make at least an attempt to analyse object behavior uncover more of the business rules than people who don’t. The earlier a rule is discovered, the cheaper it is to deal with. This benefit may be gained by carrying out a relatively informal analysis, concentrating on core business entities.

·         If you have a good CASE tool (admittedly a big if), you can generate most of the details in the event impact views automatically from the entity life histories (and perhaps vice-versa), and then generate code from the event impact views. Analysing from both perspectives gives you a better chance to ‘get it right first time’ when it comes to implementation.  This benefit can only be gained by a thorough and formal analysis.

 

In our opinion, training in life history analysis currently falls short of that required for people to achieve the last of these three benefits.

Bending the rules

The big advantage of specifying a constraint in the business rules layer or data storage layer is that the constraint is specified and coded only once. If the constraint changes, you only have to change one piece of code. You can design as many different on-line dialogues and off-line functions in the user interface layer as you like; you don’t have to specify any constraints within them, simply invoke events in the business rules layer. The constraint will always be applied, wherever or however the event is input.

 

There is however a disadvantage to implementing a constraint thus. The constraint is invariant. What the analyst at first thinks is a mandatory constraint to be applied to every case, may turn out to be optional in exceptional cases.

 

For example: Is it really true that every Project must have at least one Employee to be set up on the system? What if the users say that every now and then they do set up a Project without any Employees? There are a number of possible design strategies here. You could:

  • say the odd Project must be handled outside of the system being designed
  • insist the odd Project is registered as having a ‘dummy’ Employee it doesn’t really have
  • drop the ‘at-least-one-child’ constraint to handle exceptional cases like this.

 

The problem with the last approach is that the constraint is useful for the majority of cases. It seems a shame to throw away the constraint altogether. What you need is a way of implementing this as a business policy rather than a constraint.

 

There are no easy answers. You might transfer the constraint from the business rules layer into the user interface layer, and apply it there on some routes into the system, but not all. You might be able to develop an expert system for this more flexible kind of constraint.

I am not concerned with user interface layer constraints from now on. The remaining chapters show how you can specify event rules for the business rules layer.

Error handling

When an event fails a constraint test, the system must tell the user. You might begin designing a system under the assumption that all data is input correctly and constraints are automatically maintained. But sooner or later you must design how the system will detect contravention of these constraints, and respond to errors.

Error handling comes in three parts: error detection (see earlier chapters in this series), error reporting (see RAP group papers on Architecture definition), and error correction (see below).

Error correction

Despite all the best efforts of designers and users, some invalid events will be processed (for example, you might mistakenly identify the wrong person for input on a wedding event). The effects of processing mistaken events must be investigated and handled.

 

The effects of an error event are the same as effects of a valid event, except that they are mistaken, so the system will get out of step with the real world. The problems are:

  • output data will be produced that is incorrect
  • the stored data will be updated, but ‘corrupted’
  • future input data will be accepted or rejected, wrongly.

 

Whenever an error report is produced, someone must investigate and do whatever is necessary to put things right. A mistaken event may trigger processing which is:

  • beneficial: later proves to have been useful
  • neutral: later proves to have been unimportant
  • intolerable: has to be undone or handled by remedial action.

 

There are three things to do in handling intolerable side-effects:      

Erasing the effect of mistaken outputs

It is hard to generalise about this. You must find out whether users:

·            do not care about small errors in the output they receive

·            will find the errors for themselves and handle them without further help

·            will require some kind of ‘amendment notice’

·            will require the output to be redone from scratch

·            can be mollified by advance warning of possible error.

As an example of the last, consider the message often printed on reminder letters you receive, ‘if you have already paid this bill, please ignore it’.

Restoring stored data to the correct state

There are four ways to fix up the database:

·            Reversal events (state is restored by undoing previous event effects)

·            Deliberate abuse of proper events (acting as compensating transactions)

·            Data fixing system (data is directly manipulated)

·            Specially-designed compensating transactions

 

Ref. 2 says a little more about these.

Reinput events/input records which have been rejected

Ref. 2 says a little more about these.

Preconditions and control flow conditions

This chapter illustrates how an event impact structure evolves as business rules change. The chapter distinguishes two kinds of condition that might be annotated on an event impact structure.

  • A fact condition - a logic or guard condition that controls the entry to a selected option or iterated component in the control structure of the event impact structure.
  • A constraint condition - a precondition that stops the event from being processed or prevents the process from completing.

 

The chapter illustrates how the balance between facts and constraints may shift as business rules change, and how discrete event modeling helps us to establish the right balance.

Rules that change

There are two ways to address the challenge of volatile rules. The first is to store the rule itself as an attribute value that can be updated by end users.

E.g. Suppose a business rules catalogue contains these rules related to transfer.

  • A Transfer subtracts an amount from the balance of one Account (the giver) and adds it to the balance of another Account (the receiver).
  • Only one Transfer per day is allowed between any two Accounts.
  • The amount received may be reduced by a percentage commission given to a third Account.
  • The rule for calculating commission may change.

Many of the rules explicit and implicit in the specification above can be attached to attributes of entities in a structural model. Here’s an entity-attribute relationship model with three entities and four one-to-many relationships:

 

·         Account [giver] --< Transfer

·         Account [receiver] --< Transfer

·         Account [commission receiver] --< Transfer

·         Transfer Rule --< Transfer

And here are definitions of the entities, with associated rules

Entity: Account

  Attributes        Invariants
  Account Id        = system generated key
  Account Balance   = a number > 0

Entity: Transfer

   Attribute               Invariant
   Account Id [giver]      = foreign key of a known Account
   Account Id [receiver]   = foreign key of a known Account, not = giver
   Transfer date           = a date; no other Transfer between the same two Accounts on that date
   Amount given            = a number > 0
   Amount received         = Transfer rule (see below)
   Rule identifier         = foreign key of a Transfer rule where Start date < Transfer date < End date

The factors used in calculating commission can be stored as attribute values in the transfer rule, so end users can change this invariant rule dynamically. (Invariant doesn’t mean for all time, only as long as the requirements remain the same.)

Entity: Transfer rule

   Attribute               Invariant
   Rule identifier         = system-generated key
   Commission percentage   = a number < 50
   Transfer rule           = “Amount given * (100 – Commission percentage) / 100”
   Commission receiver     = foreign key of a known Account (not the giver or receiver)
   Start date              = a date
   End date                = a date > Start date
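The rule-as-data approach can be sketched as below. This is a minimal illustration rather than a prescribed implementation; the class and attribute names mirror the entity definitions above, and placing the invariant checks in `__post_init__` is an assumption about where such rules might be enforced.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class TransferRule:
    """The Transfer rule entity, with the commission rule stored as data."""
    rule_id: int
    commission_percentage: float
    start_date: date
    end_date: date

    def __post_init__(self):
        # invariants from the entity definition above
        if not 0 <= self.commission_percentage < 50:
            raise ValueError("Commission percentage must be < 50")
        if self.end_date <= self.start_date:
            raise ValueError("End date must be > Start date")

    def amount_received(self, amount_given: float) -> float:
        # the stored rule: Amount given * (100 - Commission percentage) / 100
        return amount_given * (100 - self.commission_percentage) / 100

rule = TransferRule(rule_id=1, commission_percentage=10.0,
                    start_date=date(2024, 1, 1), end_date=date(2024, 12, 31))
```

Because the commission percentage is ordinary attribute data, an end user can change the rule dynamically, with no recoding or recompilation.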

 

This first approach has its limitations. The second and more general approach is to define the rule as a transient pre- or post-condition of one or more events.

For example, another chapter features a marriage registration case study in which the laws regarding marriage are specified as transient constraints of the Wedding event, rather than as invariant rules on the relationships between Person and Marriage entities in the structural model. The assumption is that when the marriage laws change, it will be easier to recode and recompile the Wedding event process than to restructure the entity model, with all that implies for changes to the database schema and/or the data abstraction layer.

The process granularity issue

At the extremes of design:

  • All conditions can be coded as control flow conditions inside procedures.
  • All conditions can be coded as constraints, i.e. preconditions of procedures.

 

It is probably obvious that you can specify every condition in a system as a control flow condition. You can specify the system entirely using procedural flowcharts. Every error/validation test can be specified and coded as a control flow condition within a procedure.

 

It may not be so obvious that you can specify every condition in a system as a precondition of a procedure. You can specify a system entirely in terms of atomic condition-less processes that only work under certain preconditions. You do this by decomposing high-level procedures into smaller and smaller modules until there is no control structure left, until all algorithms have been broken into their elementary component processes, and all conditions are expressed as constraints.
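The two extremes can be contrasted in a short sketch. The zero-quantity rule and the function names here are purely illustrative: the same condition appears once as a control flow condition inside the procedure, and once as a precondition that stops the procedure from being processed at all.

```python
def close_item_with_control_flow(quantity: int, price: float) -> float:
    # Extreme 1: the condition is a branch inside the procedure.
    if quantity == 0:
        return 0.0  # the 'no action' option
    return quantity * price

def close_item_with_precondition(quantity: int, price: float) -> float:
    # Extreme 2: the procedure is condition-less apart from a precondition;
    # if the precondition fails, the process never runs at all.
    if quantity == 0:
        raise ValueError("precondition violated: quantity must be positive")
    return quantity * price
```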

 

So the question arises: How to strike the right balance between control flow conditions and preconditions in a system specification? What is the right level of granularity for specifying business rules?

Example version 1

The figure below is a fragment of the specification for the Order Closure event process in a simple order processing system.

EVENT: Order Closure (Order num)

   Entities affected          Post conditions

   Order                      OrderValue = SumValue - CustomerDiscount
                              AmountDue = OrderValue
                              OrderClosureDate = Today
                              OrderState = 'Closed'

   ---> Customer              CustomerDebt = same + OrderValue
                              CustomerUnpaidOrders = same + 1

   --->* Order Item
        o-- ItemQuantity = 0  No action
        o-- else              ItemValue = ItemQuantity * ProductPrice
                              OrderItemState = 'Closed'

   ---> Product               StockOnHand = same - ItemQuantity

   Transient working data derivation
                              SumValue = same + ItemValue

 

The informal specification above is reasonable, but not very precise, and you could not generate code from it. The event impact structure below is more formal; it documents the algorithmic control structure that governs the control flow of an Order Closure event’s effects on the objects in an order processing system. It also documents the constraints. If any one of the constraints is not satisfied, then the whole Order Closure event (not just an operation on one object) must be rolled back as though it never happened.
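The roll-back requirement can be sketched as follows. The dictionary layout and function here are hypothetical; the point is only that the event works on a private copy of the stored data and commits the whole set of effects or none of them, which is what a database commit unit would otherwise do for you.

```python
import copy

def order_closure(db: dict, order_num: str) -> dict:
    """Apply every effect of the Order Closure event, or none of them."""
    working = copy.deepcopy(db)  # changes stay invisible until commit
    order = working["orders"][order_num]
    sum_value = 0.0
    for item in order["items"]:
        if item["quantity"] == 0:
            # a constraint failure rolls back the whole event,
            # not just the operation on this one object
            raise ValueError("incomplete Order Item: event rolled back")
        product = working["products"][item["product"]]
        item["value"] = item["quantity"] * product["price"]
        product["stock"] -= item["quantity"]
        sum_value += item["value"]
    order["value"] = sum_value
    order["state"] = "Closed"
    return working  # commit: the caller replaces db with this copy
```

This particular sketch treats a zero quantity as a constraint that rejects the event; the business rule variations discussed in this chapter differ only in which branch is taken for that case.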

Other chapters in this series answer these questions: What do the arrows mean? How does the CustomerDiscount get from the Customer object to the Order object, where it is needed for a calculation operation?

A generative pattern in version 1

There is a generative pattern in the first version of the event impact structure, an iterated selection where one of the options has no processing beneath it. A generative pattern is a shape to look out for because it prompts an analysis question and a possible transformation.

 

Q) Ask of an iterated selection where one of the options has no actions beneath it: Does the option belong in the processing?

 

In this case the generative pattern prompts the question: What happens to an Order Item with an item quantity of zero? What should we do with Order Items that are incomplete when the Order is closed?

 

Of course, one might choose to prevent any Order Item from being entered with a zero quantity. But I am going to pursue the evolution of the Order Closure event impact structure through three business rule changes.

Example version 2

Rule change: delete any incomplete Order Items on an Order Closure event.

 

The figure below shows you can easily extend the event impact structure with an extra action (18).

Example version 3

Rule change: reject the Order Closure event if there is any incomplete Order Item

 

The figure below shows how you can redraw the event impact structure to capture this rule.

This variation of the business rule is a constraint condition rather than a business control flow condition; it appears as constraint number 9.

Notice that in this version of the specification, the processing logic of a successful event does not include incomplete Order Items.

Example version 4

Rule change: if there are incomplete Order Items, create a new Order and transfer all the incomplete Order Items to it.

 

The figure below extends the event impact structure with an extra component (the new Order) to specify the rule: create a new Order object for the same Customer and transfer all the incomplete Order Items to it.

 

Remember from the earlier chapters in this series that an arrow specifies first of all one-to-one association, and second the direction from which the object is identified. In this case the arrow suggests that the primary key of the new Order object is calculated from a value stored in the Customer object. If the primary key was input with the event parameters, then the arrow could go directly from the event to the Customer object.

Conclusions

Changes to business rules cause changes to event impact views and/or event impact structures. This is not such a bad thing; changing a behavioral model takes less effort than changing a structural model. On the other hand, if the rules are likely to change, you might well look to create business rules classes in which the rules can be declared as data attributes and changed dynamically.

 

Discrete event modeling gives us a natural level of granularity to define processing. It naturally distinguishes facts (control flow conditions) from constraints (preconditions). It helps us to establish the right balance between them.

Even where there are no generic super types in the entity model, you may find considerable potential for reuse of processing between event models. This chapter describes a rational way to discover reuse between events and specify an event class network.

 

SSADM includes a formal event-oriented technique for defining reuse between business services. In this technique, the business service is called a discrete "event" which has an “effect” on each of one or more “entities”. Two discrete events can share a common process, known as a "super event".  The OO concept of a responsibility is akin to an effect, or more interestingly, to a super event.

 

In short, you:

  • identify events.
  • identify where two or more events have the same pre- and post-conditions with respect to an entity (that is, the several events appear at the same point in the entity's life history and have the same effect).
  • name the shared effect as a super event.
  • analyse to see if the super event goes on from that entity (where the events’ access paths come together) to have a shared effect on one or more other entities, and if so, you adopt the super event name in specifying those other entities’ life histories.
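The outcome of these steps can be sketched in a few lines of code. The superevent name is borrowed from the Project/Employee example used later in this chapter; the three ordinary event names are invented for illustration. The analogy is only that the superevent is a common module which each ordinary event invokes.

```python
def loss_of_employee(project: dict, employee_id: str) -> None:
    # the superevent: one shared effect on the Project entity
    project["employees"].remove(employee_id)

# three ordinary events with the same pre- and post-conditions with
# respect to Project, so each simply invokes the superevent
def resignation(project: dict, employee_id: str) -> None:
    loss_of_employee(project, employee_id)

def transfer_out(project: dict, employee_id: str) -> None:
    loss_of_employee(project, employee_id)

def dismissal(project: dict, employee_id: str) -> None:
    loss_of_employee(project, employee_id)
```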

 

I don't mean to persuade you to use this exact “super event” analysis and design technique. I only want to indicate that reuse via event-oriented analysis and design has a respectable and successful history, one that many object-oriented designers are unaware of.

 

In the book ‘Object-Oriented SSADM’ Keith Robinson documented the entity and event model of a recruitment agency case study. Look at the network of reuse between events documented in the figure below!

 

Keith’s recruitment agency case study is too large to illustrate here how reuse between events works. The figure below introduces the structural model of a very much smaller case study.

 

The figure below shows the network of invocations that can be discovered in this trivial two-class system; there are two superevents, both called by three ordinary events.

 

The figure below shows how the Transfer event calls both of the superevents (in actions 7 and 9).

 

The figures below are the event impact structures for the superevents called by Transfer (and two other ordinary events).

 

A CASE tool can generate most of the detail in these event impact structures from the entity life histories. A CASE tool would generate a flat three-way selection in the last figure; but I have chosen to reshape this into two binary selections by separating the two different conditions that must be tested.

Discovering reuse in life history analysis

The three different events that cause a Project to gain or lose an Employee come together in the entity life history of the Project class as options of a selection.

 

Where events under different options of a selection have the same effect (trigger the same actions), it is possible (though not always advisable) to declare the selection of options as a superevent. I use the symbol § to mark a superevent in an entity life history.

 

 

Think of each superevent as a common module, invoked by each one of the events shown as options beneath it.

 

Once a superevent has been declared like this in an entity life history, you may use the superevent in other entity life histories, instead of duplicating the same selection of three events.

In other words, wherever it appears in other entity life histories, the §Loss of Employee superevent is a common effect of the three different events that remove an Employee from a Project shown in the figure above.

The effects of events

An effect is the appearance of an event inside an entity life history. One event instance may trigger one of several effects within one entity life history. Different effects of one event can be distinguished by adding an effect name in brackets.

An effect name tells us briefly about the difference between effects. It may summarise the effect of actions (‘actual deletion’ or ‘intended deletion’). It may describe the state the object instance must be in for that event effect to occur, either in terms of different positions in the life (‘active’ or ‘dead’), or in terms of different values of an attribute (‘last’ or ‘not last’).

One event with optional effects on a class

The figure below shows that the Project Closure event has two different effects on a Project marked:

·      Project Closure (empty)

·      Project Closure (not empty)

 

The Project Closure event will delete the Project if it is empty (has no Employees remaining), or else act as a state change on the way to deletion, which completes later when the last Employee is removed from the Project.
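A sketch of this one-event, two-effect behavior, with an assumed data structure: the operation inspects the object's state when the event arrives and selects the appropriate effect.

```python
def project_closure(projects: dict, project_id: str) -> str:
    """One event, two effects on Project, selected by state."""
    project = projects[project_id]
    if not project["employees"]:
        del projects[project_id]   # Project Closure (empty): deletion
        return "deleted"
    project["state"] = "Closed"    # Project Closure (not empty): state change
    return "closed"
```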

 

The figure above also shows that no new Employees may be added after a Project has been closed, since the §Gain of Employee superevent does not appear at this point in the entity life history. And it shows that existing Employees may be removed after a Project has been closed, since the §Loss of Employee superevent name does appear at this point in the entity life history.           

A superevent with optional effects on a class

The figures above show that the §Loss of Employee superevent itself has three different effects on a Project, marked:

  • §Loss of Employee (before closure)
  • §Loss of Employee (after closure)
  • §Loss of Employee (last Employee)

 

 

Following a typical object-oriented analysis and design method you specify each class using some kind of template. You may also draw a state-transition diagram for it. An entity life history diagram like the one above combines both the class specification and the state-transition diagram.

Entity and event orientation

The figure below shows a few questions raised by OO methods, annotated on a crude metamodel of system specification concepts.

The figure above is very much oversimplified, but it captures something of the orthogonality between persistent objects and transient events - the many-to-many relationship between them. You can address the questions in the diagram by taking an event-oriented approach.

What is the scope of a class?

There are various ways to partition a system into entity types. For example, relational data analysis and life history analysis can give different answers. You can define the size and scope of a class intuitively to begin with, then use event-oriented analysis techniques to refine the answers your intuition comes up with.

The notion of splitting one class into parallel entity life histories, one for each aspect of the class, turns out to be an important analysis and design technique, used in ‘OO and business data’.

What is the scope of an operation?

It is relatively easy to list the elementary data attributes of each class, especially where a business already maintains some persistent data that you can inspect. You might think it will be just as easy to recognise and list the operations, but this is not so in our experience.

Some object-oriented approaches simply list one enquiry and one update operation for each attribute. This is the wrong level of abstraction. These are elementary actions rather than operations. You need to work at a higher level, the level of an event effect composed of several (perhaps one, perhaps ten) elementary actions.

How to discover the ‘right’ set of operations?

You don’t want to clutter up your system with operations that are irrelevant, which fall into a state of neglect and disrepair, which are never used by anyone. You do want to specify operations that are meaningful and useful.

To be meaningful and useful, an operation must be invoked by at least one event. So meaningful and useful operations naturally emerge from an event-oriented analysis. You can address the question of where the operations come from by taking an event-oriented approach to requirements capture and knowledge acquisition.

You should define each possible effect of a transient event on a persistent object, in an operation of that class. This approach encourages you to define exactly those operations that must be invoked to meet your system requirements, and only those.

How to name operations?

You should name operations after the entity types that own them and the events that invoke them. To begin with, you can assume that each event fires a unique operation in an object. The initial list of events can be taken to be the initial list of operations. This remains true for the majority of events and operations.

 

However, the behavior analysis can reveal operations fired by more than one event. You can define these as reusable ‘superevents’ and give them a name that reflects their effect on the object, rather than their invoking event. For example, §Light On is a superevent invoked by the Button Push event in chapter <>, as well as the Door Opening event not shown in that chapter.

 

Superevent analysis is a significant advance on current object-oriented techniques; it helps us to define useful and reusable operations via a rational analysis and design process.

How to specify the implementation of an operation?

One event effect in an entity life history is close to the object-oriented idea of an operation, but there are two variations on this simple picture.

  • One event may have more than one effect on a class. In an OOP implementation, you can join these effects within one operation under a selection of different cases. The selection is made by testing the state of the object when an event arrives.
  • One effect on a class may be triggered by more than one kind of event. You can show this common effect as a ‘superevent’ and in an OOP implementation it becomes one operation.

How to specify the co-ordination of objects’ operations?

The more you divide a system into self-contained modules, the more you have to work on the communication between modules, the interfaces and the message passing routes. Consider the Pupil Transfer event example: how does the Pupil Transfer operation in Pupil communicate with the Pupil Transfer operation in School [old], and what data is passed back and forth?

 

Event modeling helps you sort out the way that operations in objects of different entity type are co-ordinated when an event happens. It helps you not only to specify the right or best set of operations, but also to design the message routing between objects.

 

·         ‘To ensure proper modularity… wherever two modules A and B communicate, this must be obvious from the text of A or B or both.’ Meyer

 

Event modeling encourages you to name communicating operations with the same name, but following the third of the three implementation strategies in chapter 2, you don’t have to make every interface fully explicit.

 

You can declare the data passing between objects in one place as a shared resource. You could create a Pupil Transfer event module/class that encloses the objects Pupil, School [old] and School [new] and enables them to communicate via the ‘working storage’ of the event module/class, rather than explicitly sending data to each other.
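A minimal sketch of that design choice, with invented object structures: the event class encloses the three objects and gives them a shared working storage, so no object sends data directly to another.

```python
class PupilTransferEvent:
    """Encloses the objects affected by one Pupil Transfer event."""

    def __init__(self, pupil: dict, old_school: dict, new_school: dict):
        self.pupil = pupil
        self.old_school = old_school
        self.new_school = new_school
        self.working = {}  # shared working storage for the event

    def process(self) -> None:
        # each step reads from and writes to the event's working storage,
        # rather than the objects messaging each other directly
        self.working["pupil_id"] = self.pupil["id"]
        self.old_school["roll"].remove(self.working["pupil_id"])
        self.new_school["roll"].append(self.working["pupil_id"])
        self.pupil["school"] = self.new_school["name"]
```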

Conclusions

Object-oriented techniques suffer from not making the events that invoke the operations explicit. The techniques described here extend object-oriented theory in this direction. They provide a rational way to discover reuse between events and to specify an event class network.

This short chapter rounds up a few points that may be helpful to the event modeler.

The importance of achieving a shared understanding of events

Some analysts confuse events with ‘use cases’ or ‘functions’, or confuse an event with the ‘operation’ it fires in just one class. And some use the term ‘event’ in different contexts: business event, GUI event, message, etc. Any methodology will come unstuck if such confusions are allowed to prevail. The concept and the level of granularity must be sharply defined.

 

People can find it helpful to think of an event as: a real-world event, or a business event, or a data group input when an end-user presses a ‘send’ button, or a user interface transaction, or a database transaction. But these ideas are too subjective. Designers on either side of the application-presentation interface must share the same idea of what an event/enquiry is, and what level of granularity it is defined at.

 

An event is a minimum unit of consistent change to the stored data within the scope of the system being engineered. It is a short-term process that affects one or more objects in the system; it moves a system as a whole from one consistent state to the next; it either happens or it doesn’t; it must succeed or fail as a whole.

 

Defined thus, an event fits nicely into a three-tier software architecture. It gives us reusable processing components in the business rules layer, ones that can be invoked from many different places in the user interface layer. It matches the idea of a database transaction or commit unit in the data storage layer.

 

  • An event tends to be small. The size of an event is not arbitrary. It is the smallest process that moves the system’s persistent data from one consistent state to the next. A large database update program, even though implemented as one physical database commit unit, may implement many logical events.
  • An event is sudden, happens in an instant, is transient, but leaves its mark on persistent objects.
  • An event moves a system from one state to the next. The state of an Enterprise Application is recorded in its memory or database. The database must be consistent. That is, the facts in it must not contradict each other. An event is a process that moves a set of related persistent objects from one consistent state to the next consistent state.
  • An event is all-or-nothing. It is indivisible. It cannot half happen. (By the way, this is also true of what physicists call ‘events’ in quantum electrodynamics.) All the effects of an event must fail if any one of them fails. If an event gets half way to completion and fails, then the whole event must be backed out or reversed. In the terms of database technology, this may be called the ‘roll-back’ of a commit unit.
  • An event has a scope. One event might affect only one object in a system, but in general it can affect many entity types and many objects. You have to envisage and specify the effect of the event on the system as a whole. Within the computer system, it is a process (not just one method in one object) with a beginning and an end. Some think of an event as merely the trigger of a process, but it is also the process itself.

Systems with only trivial events

Remember, the size of an event is not arbitrary.  It is the smallest process that moves the system’s persistent data from one consistent state to the next.  It is possible to construct the simplest kind of Enterprise Application out of events that do one of three things:

  • create a single object of a class (assigning a new key value and relating the object to any mandatory master objects)
  • update a single attribute of an object
  • delete a single object.

 

So there will be two events for each class and one trivial event for each attribute. Some tools will generate a GUI that enables you to enter such simple create, update and delete events.
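The arithmetic can be demonstrated with a small sketch, assuming a schema given as a mapping from class names to attribute lists (an invented input format):

```python
def trivial_events(schema: dict) -> list:
    """Enumerate the trivial events: create and delete per class,
    plus one attribute-replacement event per attribute."""
    events = []
    for entity, attributes in schema.items():
        events.append(f"create {entity}")
        events.append(f"delete {entity}")
        for attribute in attributes:
            events.append(f"update {entity}.{attribute}")
    return events
```

A class with n attributes yields n + 2 trivial events, which is the inventory a simple GUI generator works from.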

What about the business rules? In the simplest kind of Enterprise Application, all the constraints on event processing are either:

  • constraints on the domain of an attribute, or
  • constraints on the presence or absence of a relationship between objects.

 

So ask of your technology, does it offer the following mechanisms for defining constraints:

  • a data dictionary, for the domain of an attribute?
  • a database structure, for the presence or absence of a relationship between objects?

Note for SSADM readers

This section explains why SSADM is over-the-top for some simple systems, designed using some technologies.  It also explains where SSADM event modeling techniques start to become more useful, for defining more complex constraints where events must test the state of stored attributes, inter-attribute domain constraints and so on. What is needed is a better understanding of ‘triage’, how to apply effort that is appropriate to the severity of the problem.

Aggregating small events

Whether data entry is on-line or off-line, you may choose to batch trivial events together into an ‘aggregate event’, and implement the whole aggregate as one physical database commit unit. 

Consider an object with twenty text attributes.  The user could update each attribute on its own without reference to the others, so each attribute replacement is, logically speaking, a distinct event. 

However, a common practice in physical design is to allow users to overtype data on the screen as they see fit, batch all of the attribute-replacement events into a single physical database commit unit, and only commit the data to the database when the user signals they are ready, perhaps by seeking to close the window.
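That practice can be sketched as below (the class and method names are invented): each overtyped attribute is logically a distinct event, but the values are held back and committed as one batch when the user saves.

```python
class AggregateEvent:
    """Batches attribute-replacement events into one commit unit."""

    def __init__(self, obj: dict):
        self.obj = obj
        self.pending = {}  # overtyped values, not yet committed

    def replace(self, attribute: str, value) -> None:
        # logically one event per attribute replacement
        self.pending[attribute] = value

    def commit(self) -> None:
        # one physical commit for the whole aggregate
        self.obj.update(self.pending)
        self.pending = {}
```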

Benefits

Motivations for designing aggregate events include:

  • reduce the number of events to be coded
  • reduce the number of accesses to data storage
  • reduce function-business component traffic
  • reduce client-server traffic
  • simplify the audit trail.

These motivations are normally stronger in on-line input.  Strange as it may seem, there is less reason for aggregate events in off-line input, where performance is less of an issue.

Costs

Of course the processing of an aggregate event is more complex, but this hardly matters if you are simply batching several attribute-replacement events for one object.

 

More seriously, the user has to wait longer before getting a response to their input. And the error response to an aggregate event raises some dilemmas.  Each one of the events within the aggregate may succeed or fail on its own.  What if one logical event fails? What if several fail?

 

You might set the standard that if one data input element validation fails on save, the system takes the user to that data entry point.  If several fail, the system takes the user back to each in turn.  But hand coding this sort of thing in an ad-hoc environment (such as Delphi perhaps?) might be difficult.

 

If it is difficult to display multiple error messages, or users find them confusing, the simple option is to roll back the whole aggregate of events as soon as one of them fails, reporting on just that one failure.

Dividing large events

Some don’t like the tedium of defining a large number of trivial events.  But this is only an objection to boring work, not something to worry about in principle.

 

It isn’t the simple events that take up the time.  Most of your time will be spent understanding and specifying a relatively small number of complex events.  Any rule or definition that reduces the scope and minimises the complexity of events is a great advantage here!

Mistakenly-defined large events

Designers may mistakenly define too large an event.  Perhaps the event is really more like what some people call a ‘business event’ or ‘user task’ or ‘use case’ or ‘scenario’.  Perhaps the large event is several birth events compressed into one.

 

Whatever the reason, you can simply divide the large event into smaller application events.  This should increase the reusability of the events.

If what is supposed to be one event can get half way to completion and fail, but the changes to persistent data do not have to be rolled-back (because the data is internally consistent), then what appeared to be one event was really an aggregate of two or more distinct events.

Properly-defined large events

An event may legitimately be large, in one of two ways.

 

Some events have a long list of parameters.  This is normally true only of birth events that create an object (or perhaps more than one object), or of events that are really a batch of trivial attribute-replacement events.

 

Some events have a wide-ranging set of effects on stored information objects.  This is normally true only of death events that ‘cascade’ from one object to another.

Designers are sometimes led to divide such a large event into two or more partial events.  This is dangerous because it can lead to the problem of the distributed commit unit, and the need to define a manual workflow to ensure data integrity.

Events with a long list of parameters

Most events and enquiries only have one or two parameters.  Few events and enquiries have so many parameters that you need to validate them as you go along.

Lots of parameters might be a symptom that you’ve got the wrong idea about an event.  So one answer is - stop trying to be too clever.  If you have batched trivial events together, then you would do better to unbatch them.

 

But there is always the possibility of an event that really does require lots of parameters, and while some users don’t want data entry interrupted with intrusive error messages, others may want event/enquiry parameters to be validated as they enter them, before all the parameters have been completed.  If so, what to do?

Pre-event enquiry solution

The normal solution is to preface the event with a pre-event enquiry that duplicates some or all of the data retrieval and business rule testing carried out by the event. 

So you don’t invoke the event until all the parameters have been entered and validated individually by the pre-event enquiry processes.

  • Difficulty: You duplicate process specification and code, meaning there is a performance overhead now and a maintenance overhead later.
  • Multi-user difficulty: If you do lock the data from the start of the pre-event enquiry module to the end of the event module, this means locking a lot of data for a lot of the time, sometimes unnecessarily.
  • Multi-user difficulty: If you do not lock the data from the start of the pre-event enquiry module to the end of the event module (and some technologies prevent you), then you have to repeat all the validation in the event module, just in case another user has been working on the same data! This is discussed further below.
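The unlocked case can be sketched as follows; the account example and names are invented. The event repeats the enquiry's test at commit time, because by then the enquiry's answer may be stale:

```python
def pre_event_enquiry(db: dict, account_id: str, amount: float) -> bool:
    # validates the parameter for the user interface, without locking
    return db[account_id]["balance"] >= amount

def withdrawal_event(db: dict, account_id: str, amount: float) -> None:
    # repeats the validation: another user may have changed the data
    # between the pre-event enquiry and the event itself
    if db[account_id]["balance"] < amount:
        raise ValueError("insufficient funds: event rejected")
    db[account_id]["balance"] -= amount
```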

 

Technology question

Will your technology automatically apply the validation tests for an event as it reads data, and then again when it commits an event, without you having to lock the data in the meantime or write two similar processes (i.e. a pre-event enquiry and an event)?

Co-routine solution

There is another solution, theoretically preferable but practically difficult.  You might run the event/enquiry module as a co-routine, executed in stages alongside the data entry.  This involves ‘inverting’ the event/enquiry procedure (see Jackson, 1975) or dismembering it into distinct subroutines.

 

Unfortunately, few people understand program inversion and most technologies prevent you from implementing the several stages or parts of a co-routine within the span of a database commit unit; especially if the event processing spans client and server.

Events with a wide-ranging set of effects on stored data

Given an event with a wide-ranging set of effects on stored data, you may find you cannot contain the event within the span of an automated database commit unit.  Two reasons are:

Passage of time

The event takes so long you have to divide its processing into stages, and your technology prevents you from implementing the several stages within the span of a database commit unit.

This reason is very rare.  When it happens, you are probably best advised to process the event off-line, perhaps overnight, rather than divide it into stages.

Distribution of data

The affected data is stored in different locations beyond the control of a coherent database management system.

This is the most common reason.  It causes more pain and cost than almost anything else in system design.

The consequences of dividing an event, not being able to process one event in all related systems at the same time, within one commit unit, are many and various, as the next section indicates.

Conclusions

???

This chapter discusses various difficulties that arise in modeling the control flow of an event. It covers restructuring a selection to define a transient association, resolving a structure clash between selections, multiple hits in a V shape, and multiple hits in a diamond shape.

Restructuring a selection to achieve association

Some objects act as a monitor object for an event type. They decide whether an event instance has one effect or another by inspecting a condition. Some monitor objects go on to act as a gatekeeper object. They decide whether or not to pass an event instance on to another object.

The figure below shows Project acting as a gatekeeper object for Employee, but there is an error in the event impact structure.

The diagram is an invalid specification because you cannot draw two arrows from one substructure to a single node in another structure. This destroys the concept of one-to-one association.

A multi-way selection is a generative pattern that prompts a question.

Q) Ask of a multi-way selection: Is there duplication of processing between two or more options?

 

If yes, try rearranging the selected options under a higher-level selection.

Rearranging the selection into two levels works in this case. In the general case it doesn’t always work, because drawing one combination of options together divides other combinations of options. The general solution to this kind of problem is revealed in the next section.
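The restructuring can be sketched in Python. All the names here (grades, bonus rates, the record_audit helper) are invented for illustration: a flat three-way selection duplicates a processing step across two options, and rearranging it under a higher-level selection removes the duplication.

```python
def record_audit(employee):
    # Invented shared processing step, duplicated in the flat selection below.
    employee["audited"] = True

def pay_bonus_flat(employee):
    # Flat multi-way selection: the audit step is duplicated in two options.
    if employee["grade"] == "manager":
        record_audit(employee)
        return employee["salary"] * 0.10
    elif employee["grade"] == "senior":
        record_audit(employee)
        return employee["salary"] * 0.05
    else:  # junior
        return 0.0

def pay_bonus_nested(employee):
    # The same logic rearranged as a two-level selection: the shared
    # processing appears once, under the higher-level option.
    if employee["grade"] in ("manager", "senior"):
        record_audit(employee)
        rate = 0.10 if employee["grade"] == "manager" else 0.05
        return employee["salary"] * rate
    return 0.0
```

Both functions behave identically; only the shape of the selection differs.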


Resolving a structure clash between selections

The figure below shows a report of all the Employees working on a Project. How should you draw the enquiry access path for this report?

Suppose the words in the report are not stored directly as attributes of an Employee, so your enquiry process has to translate indicators into text as it goes along.

There are four different permutations of data in a print line. You might perhaps construct an enquiry access path as in the figure below.

 

The figure below extends the report with an extra field:

Try extending the enquiry access path above to show the extra permutations. Of course, there are now nine different permutations of data in a print line. This combinatorial explosion reveals there is a structure clash between selected options.

 

The figure below shows you can resolve the structure clash by constructing an enquiry access path with each selection drawn as a parallel aspect. You can now add further selections without any fear of combinatorial explosion.

 

Parallel aspects appear here in an enquiry access path. They also appear in event impact structures. Notice that there is logically no precedence between the parallel aspects, though you will have to introduce an arbitrary sequence of selections into any program you write based on this specification.

Q) Ask of a multi-way selection: Is there a combinatorial explosion?

 

If yes, then restructure as parallel selections (or a sequence of selections).
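The contrast can be sketched in Python with two invented indicator fields. The first function codes one selection option per permutation, which grows multiplicatively with each new field; the second re-codes the same translation as parallel selections, one per field.

```python
def print_line_permutations(emp):
    # Combined selection: one option per permutation of the two fields
    # (2 x 2 = 4 options); a third field would multiply this again.
    if emp["sex"] == "M" and emp["status"] == "F":
        return "Male Full-time"
    elif emp["sex"] == "M" and emp["status"] == "P":
        return "Male Part-time"
    elif emp["sex"] == "F" and emp["status"] == "F":
        return "Female Full-time"
    else:
        return "Female Part-time"

def print_line_parallel(emp):
    # Parallel selections: one small selection per field; adding a third
    # field adds one more selection, not a multiplied set of options.
    sex = "Male" if emp["sex"] == "M" else "Female"
    status = "Full-time" if emp["status"] == "F" else "Part-time"
    return f"{sex} {status}"
```

The field names and text values are assumptions made for the sketch, not taken from the report in the figure.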

Multiple hits in a V shape

A problem that does not seem to be recognised in OO literature to date is that one event instance may hit the same object more than once. If the objects are in a database, the event may thus lock an object against receiving further messages from the same event!

 

There are two reasonably obvious situations where multiple hits may occur, related to patterns in the structural model. The first is in the V shape.

 

The figure below shows a model in which Task has its own serial number (not simply a compound key of Employee and Project), so an Employee may perform several Tasks within the same Project.

 

The figure below shows that when an employee resigns, the resignation event is broadcast around the V shape, cutting all the Employee’s Tasks from their Projects and perhaps having some update effect on the Project into the bargain.

 

The figure above looks OK. The use of arrows seems perfectly valid; but it is not. The transient association between Task and Project is not one-to-one as the arrow implies. There are more Task objects affected by the event than Project objects. The event will hit the same Project several times.
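A minimal Python sketch of the problem, with invented data: broadcasting one Resignation event around the V shape hits one Project twice, once per Task.

```python
from collections import Counter

# (task_serial, employee, project): ann performs two Tasks in one Project.
tasks = [
    (1, "ann", "apollo"),
    (2, "ann", "apollo"),
    (3, "ann", "gemini"),
]

def resign(employee):
    hits = Counter()
    for serial, emp, project in tasks:
        if emp == employee:
            # Cutting each Task from its Project hits the Project once per
            # Task, so "apollo" is hit twice by one event instance.
            hits[project] += 1
    return hits
```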


Data-oriented resolution of multiple impacts

The figure below shows you can avoid multiple impacts in V shapes by introducing a Y shape derivable sorting class. For example, an Assignment object represents one combination of the two master entity types Employee and Project.

 

Assignment might be a V shape domain class, that is, a class introduced by users to constrain who is allowed to work on a Project. Or it might be a derivable sorting class. Either way, it works to resolve the multiple impact problem.

 

The figure below shows that when an employee resigns, and the event is broadcast around the V shape, the event will hit each Project only once.

 

Given you can separate the application class model from the data storage structure, the question arises as to whether neither, either or both should include a Y shape derivable sorting class.

The book ‘Patterns in data modeling’ proposes you might specify the derivable sorting class in the business rules layer only. You can code enquiry processes in the business rules layer as though the derivable sorting class exists, then code the application/data interface to sort the stored data and present the required objects to the business rules layer as it requests them.
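The idea can be sketched as follows, with invented data and names: the application/data interface sorts the stored Tasks and presents each distinct (Employee, Project) combination to the business rules layer as one derived Assignment, so an event following the Y shape hits each Project only once.

```python
from itertools import groupby

# Stored data: (task_serial, employee, project).
tasks = [
    (1, "ann", "apollo"),
    (2, "ann", "apollo"),
    (3, "ann", "gemini"),
]

def assignments(employee):
    # Sort the employee's stored Tasks by their (employee, project)
    # combination, as the application/data interface would...
    mine = sorted((t for t in tasks if t[1] == employee),
                  key=lambda t: (t[1], t[2]))
    # ...then present each distinct combination as one derived Assignment,
    # carrying the serial numbers of the Tasks it groups.
    for (emp, project), group in groupby(mine, key=lambda t: (t[1], t[2])):
        yield project, [serial for serial, _, _ in group]
```

Each yielded pair stands in for one Assignment object; the business rules layer never sees the duplicate hits on "apollo".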

Process-oriented resolution of multiple impacts

The three strategies for implementing events described in the earlier chapters in this series provide other ways to resolve the problem of multiple impacts.

Comb: centrally-controlled message passing

This strategy was described earlier thus: ‘In one possible OO implementation, the whole event impact structure is controlled by an event manager that implements something like a two-phase commit. First it calls each object with the event, then it reads all the objects’ replies to check they are in the correct state, then it invokes each object again, telling it to process the event, update itself and reply with any required output.’

Following this strategy, you can get the event manager to resolve the multiple-impact problem. The event manager program keeps track of which objects have already been invoked with an event in the first phase, and of when each has been invoked in the second phase with a commit message.
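A sketch of such an event manager (the class and method names are invented, and the objects are simply in-memory): in the first phase it validates each distinct object once and remembers repeat hits; in the second phase it commits each distinct object exactly once.

```python
class EventManager:
    def process(self, event, targets):
        # First phase: validate each distinct object once, counting
        # repeat hits so the second phase can deduplicate them.
        invoked = {}  # object id -> hit count
        for obj in targets:
            invoked[id(obj)] = invoked.get(id(obj), 0) + 1
            if invoked[id(obj)] == 1 and not obj.valid_for(event):
                return False  # abort: some object is in the wrong state
        # Second phase: commit each distinct object exactly once,
        # however many times the event hit it.
        for obj in {id(o): o for o in targets}.values():
            obj.commit(event)
        return True
```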

Chain: Hand-to-hand message passing

This strategy was described earlier thus: ‘In a more OO implementation, the objects pass the event from one to another (as though following the arrows in an event impact structure).

 

This is fine for process control systems. It is not quite so easy in Enterprise Applications where a system event must build up a complex output data structure from the many concurrent information objects it affects.’

 

Following this strategy, you can get each object to resolve the multiple-impact problem. You can add code to lock the object on the first invocation of the first phase of an event, and unlock the object on the last invocation of the second phase of the same event.

 

In the first phase, the necessary code must ask the question: has this object already been accessed by this event? To do this it must not only store the lock on the object, but also remember the identity of the event that locked it.

 

In the second phase, the necessary code must ask the question: is this the last time this event will hit the object? To do this it must remember the number of hits in the first phase, and count down during the second phase until all have been committed.

 

This extra code should be shielded from the business rules layer, which should know nothing about locking or other multi-user issues. In terms of the 3-tier architecture, the code belongs in the data storage layer: it is a module of the application/data interface.
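A minimal sketch of that lock-and-count code, with invented names: the first phase locks the object on the first hit of an event and counts repeat hits; the second phase counts down and unlocks on the last hit.

```python
class StoredObject:
    def __init__(self):
        self.locked_by = None  # identity of the locking event, if any
        self.hits = 0          # hits recorded during the first phase

    def first_phase(self, event_id):
        if self.locked_by is None:
            self.locked_by = event_id  # first hit: take the lock
        elif self.locked_by != event_id:
            raise RuntimeError("locked by another event")
        self.hits += 1                 # remember the number of hits

    def second_phase(self, event_id):
        assert self.locked_by == event_id
        self.hits -= 1                 # count down the first-phase hits
        if self.hits == 0:
            self.locked_by = None      # last hit: release the lock
            return True                # safe to commit now
        return False
```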

Procedure: combine the relevant parts of the entity types into one

This strategy was described earlier thus: ‘You can get around the need to define the message passing by extracting the relevant operations from each class, bringing them together into one procedure, and making them communicate via the local memory or working storage of that procedure.’

 

Following this strategy, you can move the database processing into the event manager program. This is really the conventional programming solution. I have arrived by a circuitous route at a procedural implementation of the event impact structure.

 

This may be viewed as a highly optimised form of OO implementation in which objects communicate via the working storage of a single procedure, rather than by sending messages to each other.
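As a sketch of the procedural strategy, the hypothetical procedure below (its db interface is invented, standing in for the data storage layer) inlines the operations of Employee, Task and Project. A local variable replaces the messages between objects, and it also resolves the multiple-impact problem for free, since each affected Project is collected once.

```python
def process_resignation(db, employee):
    # Local working storage replaces message passing between objects.
    affected_projects = set()
    for task in db.tasks_of(employee):
        db.delete_task(task)              # Task's operation, inlined
        affected_projects.add(task.project)
    for project in affected_projects:     # each Project updated exactly once
        db.decrement_staff_count(project)
    db.mark_resigned(employee)            # Employee's operation, inlined
```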

Multiple impacts in a diamond shape

The diamond shape gives another way for an event to hit an object more than once.

 

The figure below shows that a Training Scheme Closure event is broadcast down both sides of a diamond shape. The event may hit the same Course Booking via both routes. Worse, the event may travel further around the V shape.

 

The resolution is a little more complex than in the case of a V shape.

 

First of all you have to follow an arbitrary rule. When modeling an event that travels down both sides of the diamond, you have to break the circle on one side or the other. The figure below breaks the circle on one side.

On which side should you break the circle? You should feel uncomfortable about making an arbitrary decision. You might say that if one relationship is optional at the detail end, then break the circle on that side. That rule fits this example, but other examples still leave you with an arbitrary decision.

 

And what about the gap in the circle? The event apparently never travels along the relationship from Trainee to Course Booking or vice-versa. So a Trainee gets to hear about the event’s effect on the relationship from Training Scheme, but not the same event’s effect on the relationship from Course Booking.

 

The figure below shows how (though it is not strictly necessary in this case) dividing the Trainee into parallel aspect entity types enables you to complete the circle.

So the Trainee entity does get to hear of the event twice, but only once in each parallel aspect class. This device, of splitting a class into parallel aspects, each recording a different effect of the same event, is useful in other situations. See the end of this chapter for another example.

 

Finally, you may want to extend the V shape as before into a Y shape. Can a Trainee be booked on the same Course more than once? Yes: if they fail the course they may attend it again.

The figure below resolves the V shape into a Y shape as before.

 

Course Eligibility might be a V shape domain class, that is, a class introduced by users to constrain who is allowed to book on a Course. Or it might be a derivable sorting class. Either way, it works to resolve any multiple impact problem via Course Booking.

 

Drawing the event’s access path in the form of an event impact structure, I arrive at the figure below.

 

One more example of double trouble. The figure below is the data model of a recruitment agency application.

 

One possible outcome of an Interview is the entry of a Vacancy Acceptance event. Draw the Vacancy Acceptance event impact structure for the requirement written in the text below.

Vacancy Acceptance

Event parameters

Identity of an Interview

Vacancy Acceptance event response

Full details of the Interview that has been successful; a list of all other Interviews cancelled for this Applicant; a list of all other Interviews cancelled for the Job (if this was the last vacancy for the Job).

Vacancy Acceptance event processing

Mark the Applicant for deletion and cancel all other Interviews for that Applicant; and if there are no more Vacancies for the Job, mark the Job for deletion and cancel all other Interviews for the Job.

 

The event impact structure is extremely complex, as shown below, and it involves several small double-impact problems.

 

One double-impact problem occurs when you return from Applicant Skill to cancel the other Interviews in that set: you must skip over the successful Interview already being processed, by placing a constraint on the iteration, as shown above.
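That constrained iteration can be sketched as follows (the data structures are invented for illustration): the loop cancels every Interview in the set except the successful one in hand.

```python
def cancel_other_interviews(interviews, successful_id):
    # Iterate over the Applicant's Interviews, with a constraint on the
    # iteration: skip the successful Interview already being processed.
    cancelled = []
    for interview in interviews:
        if interview["id"] == successful_id:
            continue  # the constraint: skip the Interview in hand
        interview["status"] = "cancelled"
        cancelled.append(interview["id"])
    return cancelled
```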

 

Notice that Interview is owned by the same Skill Type on both sides of the diamond. So the Vacancy Acceptance event will hit the same Skill Type from two directions, when cutting an Applicant Skill, and cutting a Job, from that Skill Type. The resolution of this multiple impact involves dividing Skill Type into parallel aspects, one for its relationship to Job and one for its relationship to Applicant Skill.

 

The full case study reveals reuse between events. It turns out that parts of the above event impact structure are ‘superevents’, common processes also invoked by the events Applicant Withdrawal and Job Withdrawal.  See the chapter <Generic events>.

 

 


 

 

Footnote 1: Creative Commons Attribution-No Derivative Works Licence 2.0

No Derivative Works: You may copy, distribute, display only complete and verbatim copies of this page, not derivative works based upon it.

Attribution: You may copy, distribute and display this copyrighted work only if you clearly credit “Avancier Limited: http://avancier.co.uk” before the start and include this footnote at the end.

For more information about the licence, see http://creativecommons.org