Tucker Taft, with review by S. Baird, G. Dismukes, and R. Duff, 27-April-2020

Introduction

Here is an attempt to answer various frequently asked questions about the Ada object-oriented programming model. As an introduction, the Ada OOP model arose during the Ada 9X process (which ran from 1990 to 1995), and was refined as part of Ada 2005 (by adding interface types). Ada’s OOP model was actually the first OOP model in an ISO standardized language. It was developed at a time when Simula, Smalltalk, CLOS (Common Lisp Object System), Modula-3, Eiffel, Objective-C, and C++ were the main alternatives. Java came out about the same time Ada 95 was finalized. Of the languages mentioned, only C++ had stack-resident objects; all of the others allowed objects only on the heap. Some, like Eiffel, have since been extended to support “value” types as well as “reference” types (though generally with significant limitations on functionality).

For additional reading, the OOPSLA 1993 paper “Ada 9X: From Abstraction-Oriented to Object-Oriented” lays out much of the original rationale behind the Ada OOP model.

Frequently Asked Questions about Ada OOP

Why doesn’t Ada use a “class”-like structure, where the type and the module are one construct, and operations are declared “inside” the class?

Ada’s OOP model was developed as part of the Ada 9X design process. At that time, it was widely agreed that Ada’s existing package-and-private-type model for defining abstract data types (ADTs) was one of its most valuable features. This model already supported the notion of inheritance as part of deriving one type from another, though it had no notion of type extension or run-time dispatching (aka run-time “polymorphism”). Ada 9X OOP was built on this existing ADT capability rather than introducing two very different constructs for creating ADTs: a package-and-private-type model for the one that supported inheritance only, and a completely different “class” (module = type) model for the one that supported inheritance with extension and run-time polymorphism.

This uniformity between non-extensible and extensible types allowed programmers to move between the two approaches smoothly, without significant restructuring, depending on whether a given application needed the flexibility and attendant overhead of type extension and polymorphism.

Furthermore, by separating the module (i.e. the package) from the type, the programmer gained flexibility for binary operators (and other binary operations), and for operations where the “main” operand was the second one (such as in a membership operation). C++ supported operators, but only by using a truly horrendous “friend” mechanism that no one wanted to emulate. [Editor’s note: Remember, this reflected the state in 1993, and would be an oversimplification today.] Smalltalk supported operators, but with an asymmetric mechanism where the “receiver” object was necessarily the first operand, and the type of the second operand had no impact on what operation was invoked. CLOS and Modula-3 had somewhat more symmetrical models, though without the “systematic” substitution that Ada provided on inheritance of a primitive operation in the package-and-private-type model, where all parameters (and any result) of the given type are replaced with the derived type upon inheritance. Systematic substitution is generally what you want for binary operators.
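As an illustration, here is a minimal sketch, using hypothetical names, of the package-and-private-type model with a symmetric binary operator, and of the systematic substitution that happens when the type is later derived from:

   --  Hypothetical sketch; names and representation are illustrative only.
   package Big_Nums is
      type Big_Num is private;
      function "+" (Left, Right : Big_Num) return Big_Num;  --  symmetric binary operator
   private
      type Big_Num is record
         Value : Long_Long_Integer := 0;  --  stand-in representation
      end record;
   end Big_Nums;

   --  Deriving from Big_Num inherits "+", with every Big_Num parameter
   --  and the result systematically replaced by the derived type:
   type Checked_Num is new Big_Nums.Big_Num;
   --  inherited: function "+" (Left, Right : Checked_Num) return Checked_Num;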

Why does Ada use a more “class”-like structure for protected types, while using the package-and-private-type model for unprotected types?

Task and protected types both enforce what is called “object encapsulation”: only when inside one of the synchronizing operations of the type can you see or update the components of the “target” object of the type. Non-synchronizing types generally enforce a different kind of encapsulation, which provides information hiding but not synchronization: inside an operation of the type you can see the representation of all objects of the type. Java confuses these two kinds of encapsulation pretty badly, because a “synchronized” operation in Java gets a lock only on the “receiver” object, but from within such an operation the program has visibility on the innards of all objects of the type. So rather than object encapsulation, Java is providing information-hiding-style encapsulation with a side effect of getting a lock on one of its parameters. Java goes even beyond this confusion by providing synchronized blocks, which get a lock but have no particular connection to what data can safely be referenced.

So back to the original question: why do protected types (and task types) use a more class-like structure? It is because this structure conveys syntactically that there is only one object whose innards you can reference when inside a synchronizing operation, and it is the “target” (or “receiver”) protected or task object. If you were to pass in another object as a parameter to a synchronizing operation, even if it had the same type, the operation would have no special access to the components of that object. This is very different from the way ADTs work in general, where operations may need visibility on all parameters of the associated type to implement the operation, such as a set union, or a “+” on a “big num.”
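For instance, here is a minimal sketch (with hypothetical names) of a protected type. Inside Increment, only the target object’s component Count is visible; a second Counter passed as a parameter would get no such special access:

   --  Hypothetical sketch of object encapsulation in a protected type.
   protected type Counter is
      procedure Increment;
      function Value return Natural;
   private
      Count : Natural := 0;
   end Counter;

   protected body Counter is
      procedure Increment is
      begin
         Count := Count + 1;  --  component of the target object only
      end Increment;

      function Value return Natural is
      begin
         return Count;
      end Value;
   end Counter;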

Why do you distinguish between class-wide and specific types in Ada?

For languages that put every OOP object on the heap, there is always a level of indirection, so arrays of objects, records containing objects, etc., can always use a “polymorphic” (“class-wide type” in Ada) rather than a “monomorphic” (“specific type” in Ada) object for components. But in languages like C++ and Ada, there is no requirement that all OOP objects be on the heap behind a level of indirection, so if we want to be able to have arrays of OOP objects, and records with OOP components, we need to make the distinction between monomorphic and polymorphic. Clearly polymorphic objects have unknown size at compile time, so it would not be practical to have an array of them without a level of indirection for each component. Given Ada’s focus on real-time embedded systems, the Ada OOP model was designed to avoid implicit levels of indirection and implicit use of the heap. A secondary stack was already part of the Ada model when Ada 9X came along, but a secondary stack by its nature is manageable in a stack-like manner, whereas a heap, in general, is not. Managing the storage for an array of polymorphic objects would be challenging, since every assignment to a component might change the size of the component, potentially resulting in multiple heap operations.

Once it was decided to distinguish between monomorphic and polymorphic objects, the question was how to do it. In C++, the general distinction was made according to whether the object was a component, a declared object, or a by-copy parameter, vs. a pointed-to object or a by-reference parameter, where the former were monomorphic and the latter were polymorphic. For Ada, this distinction seemed a bit arbitrary and error prone. So it was decided to treat polymorphic objects as having a type distinct from that of a monomorphic object. Ada already had the notion of a “class of types” representing a hierarchy of all types derived from the same “root” type. So the term “class-wide” was adopted to represent a polymorphic object whose underlying “run-time” type ranged over a class of types. Pretty much any OOP model that does not include garbage collection (or some sort of automatic storage reclamation) cannot avoid making the distinction between monomorphic and polymorphic objects.
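To make the distinction concrete, here is a small sketch (hypothetical names): a specific type can be used directly as an array component, while the class-wide type, whose objects have no compile-time-known size, requires a level of indirection:

   --  Hypothetical sketch of specific vs. class-wide types.
   type Shape is tagged record
      X, Y : Float := 0.0;
   end record;

   type Circle is new Shape with record
      Radius : Float := 1.0;
   end record;

   Row : array (1 .. 10) of Shape;            --  specific type: size known, OK
   --  Bad : array (1 .. 10) of Shape'Class;  --  illegal: components have unknown size

   type Shape_Ref is access all Shape'Class;
   Mixed : array (1 .. 10) of Shape_Ref;      --  indirection allows mixed run-time types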

Why does Ada, by default, use statically-bound calls rather than always using dispatching?

Most languages provide some method for distinguishing statically-bound calls from dynamically-bound (dispatching) calls. In languages that have only polymorphic objects (and hence put all OOP objects on the heap), the only statically-bound calls tend to be on some ancestor’s operation, in a kind of “pass-the-buck” maneuver (e.g. super.Op(Y)). Some also permit the target type to be identified explicitly with a prefix when calling an operation of a specific type C, such as X->C::Op(Y).

In languages like C++ where there are both monomorphic and polymorphic objects, static binding is used when applying operations to such monomorphic objects. In Ada, the rules are similar. If the “controlling” operands are of a polymorphic type (T’Class), then dynamic binding will occur (in general). If the “controlling” operands are all of a monomorphic type, then static binding is used. Mixing polymorphic and monomorphic controlling operands is not permitted in a single call. If there are multiple polymorphic operands, they are checked to be sure they all share the same run-time type (“tag”), so it is unambiguous which type’s operation should be invoked. It is possible to pass a polymorphic operand without incurring dispatching, if the formal parameter of the invoked operation is also polymorphic. In that case, the polymorphism is preserved across the call, and the called operation can do dispatching internally.
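As a small sketch (with hypothetical names and deliberately trivial implementations), the same call is statically bound or dispatching depending on whether the controlling operand is of the specific type or the class-wide type:

   package Animals is
      type Animal is tagged null record;
      procedure Speak (A : Animal);

      type Dog is new Animal with null record;
      overriding procedure Speak (A : Dog);
   end Animals;

   with Ada.Text_IO;
   package body Animals is
      procedure Speak (A : Animal) is
      begin
         Ada.Text_IO.Put_Line ("generic animal sound");
      end Speak;

      overriding procedure Speak (A : Dog) is
      begin
         Ada.Text_IO.Put_Line ("woof");
      end Speak;
   end Animals;

   --  In some client unit:
   with Animals;
   procedure Demo (Specific : Animals.Animal; Polymorphic : Animals.Animal'Class) is
   begin
      Animals.Speak (Specific);     --  statically bound: always Animal's Speak
      Animals.Speak (Polymorphic);  --  dispatching: Polymorphic's tag selects the body
   end Demo;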

One effect of the Ada approach is that within a “primitive” operation, after any dispatching has occurred, the world returns, by default, to monomorphic semantics. This means that a set of operations can work together to implement an operation, without worrying that a “re-dispatch” may occur where it is not expected. Ada does permit an operation to “re-dispatch” internally if it so chooses, but this is important to indicate in the documentation of the operation: such an operation cannot be safely inherited “as is” if any other primitive operation is overridden, because re-dispatching may create unknown coupling between the primitive operations.

For example, given:

   type Port is tagged private;  --  i.e., some OO (tagged) type
   procedure Put_Character (F : in out Port; C : Character);
   procedure Put_String (F : in out Port; S : String);

what are the semantics when I extend type Port and override Put_String but do not override Put_Character? Perhaps I want Put_String to do UTF-8 encoding, say, but want Put_Character to just put out the character as is. It depends heavily on the relationship (the “coupling”) between the original Put_Character and Put_String, and whether re-dispatching is happening between them. If Put_Character puts the character out directly, or if it makes a statically bound call on Put_String (or really, on any other overridable operation) with a single-character string to do its job, then overriding Put_String should work as expected. The overriding of Put_String can safely call the inherited Put_Character, no matter how it is implemented, so long as it doesn’t do a re-dispatch back to Put_String. On the other hand, if the original (and now inherited) Put_Character re-dispatches, say, to Put_String with a one-character string, then if the overriding Put_String calls the inherited Put_Character, we get an infinite loop. The bottom line is that any (re)dispatching that happens inside an operation, and exactly how it is used, affects the functionality on inheritance, and so should be considered part of its “contract” with any inheritor.
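As a hypothetical sketch of the problematic case, the original Put_Character might have been written to re-dispatch by converting its operand to the class-wide type:

   procedure Put_Character (F : in out Port; C : Character) is
   begin
      --  Converting F to Port'Class makes the inner call dispatch on F's tag:
      Put_String (Port'Class (F), (1 => C));
   end Put_Character;

If an overriding Put_String then calls the inherited Put_Character to emit each character, the two operations end up calling each other indefinitely, which is exactly the hidden coupling described above.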

The above might seem like a contrived example, but in the early days of OOP, this problem of hidden coupling between operations was identified as a serious maintenance issue. If the coupling between two operations changed on a subsequent release of a library, programs that overrode some but not all of the operations might work fine on the earlier release, but stop working as desired on the subsequent release. Because of this it became important to document any such coupling, or to avoid it by internally only calling routines that could not be overridden as part of inheritance.

The Ada approach switches the default, so static binding is what happens when inside the implementation of a primitive operation, unless the operation decides explicitly (and hopefully intentionally) to “re-dispatch” by converting an operand back to the polymorphic view (T’Class(X)). This means that “black box” inheritance works more reliably by default, and use of re-dispatching is generally intentional, and hopefully deserving of documentation in the interface spec of the operation.

Why does Ada support dispatching on the result of a function?

Ada uses parameterless functions for enumeration literals, and they also show up in some ADTs to provide things like an Empty_Set, or the value “i” for a Complex number. One advantage of representing these things as functions rather than as named constants is that they are “carried along” when you declare a type derived from the original type. Also, from a visibility point of view, they support overloading, so you can have multiple types that have the same enumeration literal, or the same name for the “Empty” function, and then when declaring an object, the type of the object determines which enumeration literal or Empty function is chosen at compile time.
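For example (a compile-time sketch), the same name can be a literal of two different enumeration types, and the expected type picks the right one:

   type Color  is (Red, Green, Blue);
   type Signal is (Red, Yellow, Green);

   C : Color  := Green;   --  resolves to Color's Green
   S : Signal := Green;   --  resolves to Signal's Green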

When OOP was added to Ada, it was anticipated that one might want to create OOP-based types to implement things like sets, or “big nums,” and it seemed to make sense that the same situation would arise, but now using run-time types rather than compile-time types. So if you have an existing polymorphic object of a given run-time “Set” type, and you want to assign the Empty set to it, it makes sense to use the run-time type of the left-hand side to determine which particular Empty function to call.
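Here is a minimal sketch (hypothetical names) of what that looks like: because the call to Empty has no controlling operands, the run-time tag of the assignment’s target controls which Empty is invoked:

   package Sets is
      type Set is abstract tagged null record;
      function Empty return Set is abstract;  --  dispatches on its result
   end Sets;

   --  In some client unit:
   with Sets;
   procedure Clear (S : in out Sets.Set'Class) is
   begin
      S := Sets.Empty;  --  the tag of S determines which Empty is called
   end Clear;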

More generally, it seemed to make sense to carry over the general capabilities of ADTs in Ada to the polymorphic “world,” where using “tags” at run time is analogous to using overloading at compile time, and since overloading in Ada takes function result types into account, it was natural for dispatching to similarly apply to function results. As time has gone on, interest seems to have grown in reducing the differences between old-style ADTs using Ada’s untagged types and new-style ADTs using OOP types, to further ease moving back and forth. So various features that showed up initially only for tagged types have been added back to untagged types. This seems to indicate that having them use a similar package-and-private-type structure is helpful in smoothing these transitions, and the choice of whether type extension or polymorphism is important for a given abstraction can be deferred.
