Thinking in C++ 2nd edition
VERSION TICA7

Revision history:

TICA7, August 14, 1998. Strings chapter modified. Other odds and ends.

TICA6, August 6, 1998. Strings chapter added; it still needs some work, but it's in fairly good shape. The basic structure for the STL Algorithms chapter is in place and "just" needs to be filled out. Reorganized the chapters; this should be very close to the final organization (unless I discover I've left something out).

TICA5, August 2, 1998: Lots of work done on this version. Everything compiles (except for the design patterns chapter with the Java code) under Borland C++ 5.3. This is the only compiler that even comes close, but I have high hopes for the next version of egcs. The chapters and organization of the book are starting to take on more form. A lot of work and new material added in the "STL Containers" chapter (in preparation for my STL talks at the Borland and SD conferences), although that is far from finished. Also, replaced many of the situations in the first edition where I used my home-grown containers with STL containers (typically vector). Changed all header includes to the new style (except for C programs): <iostream> instead of <iostream.h>, <cstdlib> instead of <stdlib.h>, etc. Adjusted namespace issues ("using namespace std" in .cpp files, full qualification of names in header files). Added appendix A to describe coding style (including namespaces). Added "require.h" error-testing code and used it universally. Rearranged header include order to go from more general to more specific (a consistency and style issue described in appendix A). Replaced the 'main( ) {}' form with the 'int main( ) { }' form (this relies on the default "return 0" behavior, although some compilers, notably VC++, give warnings). Went through and implemented the class naming policy (following the Java/Smalltalk policy of starting with uppercase, etc.) but not the member functions/data members (starting with lowercase, etc.). Tested code with my modified version of Borland C++ 5.3 (cribbed a corrected ostream_iterator from egcs and <sstream> from elsewhere), so not all the programs will compile with your compiler (VC++ in particular has a lot of trouble with namespaces). On the web site, I added the broken-up versions of the files for easier downloads.

TICA4, July 22, 1998: More changes and additions to the "CGI Programming" section at the end of Chapter 23. I think that section is finished now, with the exception of corrections.

TICA3, July 14, 1998: First revision with content editing (instead of just being a posting to test the formatting and code extraction process). Changes in the end of Chapter 23, on the "CGI Programming" section. Minor tweaks elsewhere. RTF format should be fixed now.

TICA2, July 9, 1998: Changed all fonts to Times and Courier (which are universal); changed distribution format to RTF (readable by most PC and Mac Word Processors, and by at least one on Linux: StarOffice from www.caldera.com. Please let me know if you know about other RTF word processors under Linux).

__________________________________________________________________________

The instructions on the web site (http://www.BruceEckel.com/ThinkingInCPP2e.html) show you how to extract code for both Win32 systems and Linux (only Red Hat Linux 5.0/5.1 has been tested). The contents of the book, including the contents of the source-code files generated during automatic code extraction, are not intended to indicate any accurate or finished form of the book or source code.

Please only add comments/corrections using the form found on http://www.BruceEckel.com/ThinkingInCPP2e.html

Please note that the book files are only available in Rich Text Format (RTF) or plain ASCII text without line breaks (that is, each paragraph is on a single line, so if you bring it into a typical text editor that does line wrapping, it will read decently). Please see the Web page for information about word processors that support RTF. The only fonts used are Times and Courier (so there should be no font difficulties); if you find any other fonts please report the location.

Thanks for your participation in this project.

Bruce Eckel

"This book is a tremendous achievement. You owe it to yourself to have a copy on your shelf. The chapter on iostreams is the most comprehensive and understandable treatment of that subject I've seen to date."

Al Stevens
Contributing Editor, Dr. Dobb's Journal

"Eckel's book is the only one to so clearly explain how to rethink program construction for object orientation. That the book is also an excellent tutorial on the ins and outs of C++ is an added bonus."

Andrew Binstock
Editor, Unix Review

"Bruce continues to amaze me with his insight into C++, and Thinking in C++ is his best collection of ideas yet. If you want clear answers to difficult questions about C++, buy this outstanding book."

Gary Entsminger
Author, The Tao of Objects

"Thinking in C++ patiently and methodically explores the issues of when and how to use inlines, references, operator overloading, inheritance and dynamic objects, as well as advanced topics such as the proper use of templates, exceptions and multiple inheritance. The entire effort is woven in a fabric that includes Eckel's own philosophy of object and program design. A must for every C++ developer's bookshelf, Thinking in C++ is the one C++ book you must have if you're doing serious development with C++."

Richard Hale Shaw
Contributing Editor, PC Magazine

 

Thinking

In

C++

Bruce Eckel
President, MindView Inc.

Prentice Hall PTR
Upper Saddle River, New Jersey 07458
http://www.phptr.com

Publisher: Alan Apt
Production Editor: Mona Pompilli
Development Editor: Sondra Chavez
Book Design, Cover Design and Cover Photo: Daniel Will-Harris, daniel@will-harris.com
Copy Editor: Shirley Michaels
Production Coordinator: Lori Bulwin
Editorial Assistant: Shirley McGuire

© 1998 by Bruce Eckel, MindView, Inc.
Published by Prentice Hall Inc.
A Paramount Communications Company
Englewood Cliffs, New Jersey 07632

The information in this book is distributed on an "as is" basis, without warranty. While every precaution has been taken in the preparation of this book, neither the author nor the publisher shall have any liability to any person or entity with respect to any liability, loss or damage caused or alleged to be caused directly or indirectly by instructions contained in this book or by the computer software or hardware products described herein.

All rights reserved. No part of this book may be reproduced in any form or by any electronic or mechanical means including information storage and retrieval systems without permission in writing from the publisher or author, except by a reviewer who may quote brief passages in a review. Any of the names used in the examples and text of this book are fictional; any relationship to persons living or dead or to fictional characters in other works is purely coincidental.

Printed in the United States of America

10 9 8 7 6 5 4 3 2 1

ISBN 0-13-917709-4

Prentice-Hall International (UK) Limited, London

Prentice-Hall of Australia Pty. Limited, Sydney

Prentice-Hall Canada, Inc., Toronto

Prentice-Hall Hispanoamericana, S.A., Mexico

Prentice-Hall of India Private Limited, New Delhi

Prentice-Hall of Japan, Inc., Tokyo

Simon & Schuster Asia Pte. Ltd., Singapore

Editora Prentice-Hall do Brasil, Ltda., Rio de Janeiro

dedication

to the scholar, the healer, and the muse

 

What's inside...

Thinking in C++ 2nd edition VERSION TICA7 *

Preface *

Prerequisites *

Thinking in C *

Learning C++ *

Goals *

Chapters *

Exercises *

Source code *

Coding standards *

Language standards *

Language support *

Seminars & CD Roms *

Errors *

Acknowledgements *

1: Introduction to objects *

The progress of abstraction *

An object has an interface *

The hidden implementation *

Reusing the implementation *

Inheritance: reusing the interface *

Overriding base-class functionality *

Is-a vs. is-like-a relationships *

Interchangeable objects with polymorphism *

Dynamic binding *

Abstract base classes and interfaces *

Objects: characteristics + behaviors *

Inheritance: type relationships *

Polymorphism *

Manipulating concepts: what an OOP program looks like *

Object landscapes and lifetimes *

Containers and iterators *

Exception handling: dealing with errors *

Introduction to methods *

Complexity *

Internal discipline *

External discipline *

Five stages of object design *

What a method promises *

What a method should deliver *

"Required" reading *

Scripting: a minimal method *

Premises *

1. High concept *

2. Treatment *

3. Structuring *

4. Development *

5. Rewriting *

Logistics *

Analysis and design *

Staying on course *

Phase 0: Let's make a plan *

Phase 1: What are we making? *

Phase 2: How will we build it? *

Phase 3: Let's build it! *

Phase 4: Iteration *

Plans pay off *

Other methods *

Booch *

Responsibility-Driven Design (RDD) *

Object Modeling Technique (OMT) *

Why C++ succeeds *

A better C *

You're already on the learning curve *

Efficiency *

Systems are easier to express and understand *

Maximal leverage with libraries *

Error handling *

Programming in the large *

Strategies for transition *

Stepping up to OOP *

Management obstacles *

Summary *

2: Making & using objects *

The process of language translation *

Interpreters *

Compilers *

The compilation process *

Tools for separate compilation *

Declarations vs. definitions *

Linking *

Using libraries *

Your first C++ program *

Using the iostreams class *

Fundamentals of program structure *

"Hello, world!" *

Running the compiler *

More about iostreams *

String concatenation *

Reading input *

Simple file manipulation *

Summary *

Exercises *

3: The C in C++ *

Controlling execution in C/C++ *

True and false in C *

if-else *

while *

do-while *

for *

The break and continue Keywords *

switch *

Introduction to C and C++ operators *

Precedence *

Auto increment and decrement *

Using standard I/O for easy file handling *

Simple "cat" program *

Handling spaces in input *

Utility programs using iostreams and standard I/O *

Pipes *

Text analysis program *

IOstream support for file manipulation *

Introduction to C++ data *

Basic built-in types *

bool, true, & false *

Specifiers *

Scoping *

Defining data on the fly *

Specifying storage allocation *

Global variables *

Local variables *

static *

extern *

Constants *

volatile *

Operators and their use *

Assignment *

Mathematical operators *

Relational operators *

Logical operators *

Bitwise operators *

Shift operators *

Unary operators *

Conditional operator or ternary operator *

The comma operator *

Common pitfalls when using operators *

Casting operators *

sizeof -- an operator by itself *

The asm keyword *

Explicit operators *

Creating functions *

Function prototyping *

Using the C function library *

Creating your own libraries with the librarian *

The header file *

Function collections & separate compilation *

Preventing re-declaration of classes *

struct: a class with all elements public *

Clarifying programs with enum *

Saving memory with union *

Debugging flags *

Turning a variable name into a string *

The Standard C assert( ) macro *

Debugging techniques combined *

Bringing it all together: project-building tools *

File names *

Make: an essential tool for separate compilation *

Make activities *

Makefiles in this book *

An example makefile *

Summary *

Exercises *

4: Data abstraction *

Declarations vs. definitions *

A tiny C library *

Dynamic storage allocation *

What's wrong? *

The basic object *

What's an object? *

Abstract data typing *

Object details *

Header file etiquette *

Using headers in projects *

Nested structures *

Global scope resolution *

Summary *

Exercises *

5: Hiding the implementation *

Setting limits *

C++ access control *

protected *

Friends *

Nested friends *

Is it pure? *

Object layout *

The class *

Modifying Stash to use access control *

Modifying stack to use access control *

Handle classes *

Visible implementation *

Reducing recompilation *

Summary *

Exercises *

6: Initialization & cleanup *

Guaranteed initialization with the constructor *

Guaranteed cleanup with the destructor *

Elimination of the definition block *

for loops *

Storage allocation *

Stash with constructors and destructors *

stack with constructors & destructors *

Aggregate initialization *

Default constructors *

Summary *

Exercises *

7: Function overloading & default arguments *

More mangling *

Overloading on return values *

Type-safe linkage *

Overloading example *

Default arguments *

A bit vector class *

Summary *

Exercises *

8: Constants *

Value substitution *

const in header files *

Safety consts *

Aggregates *

Differences with C *

Pointers *

Pointer to const *

const pointer *

Assignment and type checking *

Function arguments & return values *

Passing by const value *

Returning by const value *

Passing and returning addresses *

Classes *

const and enum in classes *

Compile-time constants in classes *

const objects & member functions *

ROMability *

volatile *

Summary *

Exercises *

9: Inline functions *

Preprocessor pitfalls *

Macros and access *

Inline functions *

Inlines inside classes *

Access functions *

Inlines & the compiler *

Limitations *

Order of evaluation *

Hidden activities in constructors & destructors *

Reducing clutter *

Preprocessor features *

Token pasting *

Improved error checking *

Summary *

Exercises *

10: Name control *

Static elements from C *

static variables inside functions *

Controlling linkage *

Other storage class specifiers *

Namespaces *

Creating a namespace *

Using a namespace *

Static members in C++ *

Defining storage for static data members *

Nested and local classes *

static member functions *

Static initialization dependency *

What to do *

Alternate linkage specifications *

Summary *

Exercises *

11: References & the copy-constructor *

Pointers in C++ *

References in C++ *

References in functions *

Argument-passing guidelines *

The copy-constructor *

Passing & returning by value *

Copy-construction *

Default copy-constructor *

Alternatives to copy-construction *

Pointers to members *

Functions *

Summary *

Exercises *

12: Operator overloading *

Warning & reassurance *

Syntax *

Overloadable operators *

Unary operators *

Binary operators *

Arguments & return values *

Unusual operators *

Operators you can't overload *

Nonmember operators *

Basic guidelines *

Overloading assignment *

Behavior of operator= *

Automatic type conversion *

Constructor conversion *

Operator conversion *

A perfect example: strings *

Pitfalls in automatic type conversion *

Summary *

Exercises *

13: Dynamic object creation *

Object creation *

C's approach to the heap *

operator new *

operator delete *

A simple example *

Memory manager overhead *

Early examples redesigned *

Heap-only string class *

Stash for pointers *

The stack *

new & delete for arrays *

Making a pointer more like an array *

Running out of storage *

Overloading new & delete *

Overloading global new & delete *

Overloading new & delete for a class *

Overloading new & delete for arrays *

Constructor calls *

Object placement *

Summary *

Exercises *

14: Inheritance & composition *

Composition syntax *

Inheritance syntax *

The constructor initializer list *

Member object initialization *

Built-in types in the initializer list *

Combining composition & inheritance *

Order of constructor & destructor calls *

Name hiding *

Functions that don't automatically inherit *

Choosing composition vs. inheritance *

Subtyping *

Specialization *

private inheritance *

protected *

protected inheritance *

Multiple inheritance *

Incremental development *

Upcasting *

Why "upcasting"? *

Upcasting and the copy-constructor (not indexed) *

Composition vs. inheritance (revisited) *

Pointer & reference upcasting *

A crisis *

Summary *

Exercises *

15: Polymorphism & virtual functions *

Evolution of C++ programmers *

Upcasting *

The problem *

Function call binding *

virtual functions *

Extensibility *

How C++ implements late binding *

Storing type information *

Picturing virtual functions *

Under the hood *

Installing the vpointer *

Objects are different *

Why virtual functions? *

Abstract base classes and pure virtual functions *

Pure virtual definitions *

Inheritance and the VTABLE *

virtual functions & constructors *

Order of constructor calls *

Behavior of virtual functions inside constructors *

Destructors and virtual destructors *

Virtuals in destructors *

Summary *

Exercises *

16: Templates *

Containers & iterators *

The need for containers *

Overview of templates *

The C approach *

The Smalltalk approach *

The template approach *

Template syntax *

Non-inline function definitions *

The stack as a template *

Constants in templates *

Stash and stack as templates *

The ownership problem *

Stash as a template *

stack as a template *

String & integer *

A string on the stack *

integer *

Templates & inheritance *

Design & efficiency *

Preventing template bloat *

Polymorphism & containers *

Function templates *

A memory allocation system *

Applying a function to a TStack *

Member function templates *

Controlling instantiation *

The export keyword *

Summary *

Exercises *

Part 2: The Standard C++ Library *

17: Library Overview *

Summary *

18: Strings *

19: Iostreams *

Why iostreams? *

True wrapping *

Iostreams to the rescue *

Sneak preview of operator overloading *

Inserters and extractors *

Common usage *

Line-oriented input *

File iostreams *

Open modes *

Iostream buffering *

Using get( ) with a streambuf *

Seeking in iostreams *

Creating read/write files *

stringstreams *

strstreams *

User-allocated storage *

Automatic storage allocation *

Output stream formatting *

Internal formatting data *

An exhaustive example *

Formatting manipulators *

Manipulators with arguments *

Creating manipulators *

Effectors *

Iostream examples *

Code generation *

A simple datalogger *

Counting editor *

Breaking up big files *

Summary *

Exercises *

20: STL Containers & Iterators *

Container types *

The STL container classes *

The Standard Template Library *

The basic concepts *

Containers of strings *

Inheriting from STL containers *

The amazing set *

Eliminating strtok( ) *

StreamTokenizer: a more flexible solution *

A completely reusable tokenizer *

Generators and fillers for associative containers *

The magic of maps *

Multimaps and duplicate keys *

Multisets *

Combining STL containers *

Cleaning up containers of pointers *

Summary *

Exercises *

21: STL Algorithms *

Algorithms are succinct *

Filling a container *

A test framework for the examples in this chapter *

Applying an operation to each element in a container *

Summary *

Exercises *

Part 3: Advanced Topics *

22: Multiple inheritance *

Perspective *

Duplicate subobjects *

Ambiguous upcasting *

virtual base classes *

The "most derived" class and virtual base initialization *

"Tying off" virtual bases with a default constructor *

Overhead *

Upcasting *

Persistence *

Avoiding MI *

Repairing an interface *

Summary *

Exercises *

23: Exception handling *

Error handling in C *

Throwing an exception *

Catching an exception *

The try block *

Exception handlers *

The exception specification *

Better exception specifications? *

Catching any exception *

Rethrowing an exception *

Uncaught exceptions *

Cleaning up *

Constructors *

Making everything an object *

Exception matching *

Standard exceptions *

Programming with exceptions *

When to avoid exceptions *

Typical uses of exceptions *

Overhead *

Summary *

Exercises *

24: Run-time type identification *

The "Shape" example *

What is RTTI? *

Two syntaxes for RTTI *

Syntax specifics *

typeid( ) with built-in types *

Producing the proper type name *

Nonpolymorphic types *

Casting to intermediate levels *

void pointers *

Using RTTI with templates *

References *

Exceptions *

Multiple inheritance *

Sensible uses for RTTI *

Revisiting the trash recycler *

Mechanism & overhead of RTTI *

Creating your own RTTI *

New cast syntax *

static_cast *

const_cast *

reinterpret_cast *

Summary *

Exercises *

25: Design patterns *

The pattern concept *

The singleton *

Classifying patterns *

The observer pattern *

The composite *

Simulating the trash recycler *

Improving the design *

"Make more objects" *

A pattern for prototyping creation *

Abstracting usage *

Multiple dispatching *

Implementing the double dispatch *

The "visitor" pattern *

RTTI considered harmful? *

Summary *

Exercises *

26: Tools & topics *

The code extractor *

Debugging *

assert( ) *

Trace macros *

Trace file *

Abstract base class for debugging *

Tracking new/delete & malloc/free *

CGI programming in C++ *

Encoding data for CGI *

The CGI parser *

Using POST *

Handling mailing lists *

A general information-extraction CGI program *

Parsing the data files *

Summary *

Exercises *

A: Coding style *

Begin and end comment tags *

Parens, braces and indentation *

Order of header inclusion *

Include guards on header files *

Use of namespaces *

Use of require( ) and assure( ) *

B: Programming guidelines *

C: Simulating virtual constructors *

All-purpose virtual constructors *

A remaining conundrum *

A simpler alternative *

D: Recommended reading *

Index *

Preface

Like any human language, C++ provides a way to express concepts. If successful, this medium of expression will be significantly easier and more flexible than the alternatives as problems grow larger and more complex.

You can't just look at C++ as a collection of features; some of the features make no sense in isolation. You can only use the sum of the parts if you are thinking about design, not simply coding. And to understand C++ in this way, you must understand the problems with C and with programming in general. This book discusses programming problems, why they are problems, and the approach C++ has taken to solve such problems. Thus, the set of features I explain in each chapter will be based on the way I see a particular type of problem being solved with the language. In this way I hope to move you, a little at a time, from understanding C to the point where the C++ mindset becomes your native tongue.

Throughout, I'll be taking the attitude that you want to build a model in your head that allows you to understand the language all the way down to the bare metal; if you encounter a puzzle you'll be able to feed it to your model and deduce the answer. I will try to convey to you the insights which have rearranged my brain to make me start "thinking in C++."

Prerequisites

In the first edition of this book, I decided to assume that someone else had taught you C and that you have at least a reading level of comfort with it. My primary focus was on simplifying what I found difficult: the C++ language. In this edition I have added a chapter that is a very rapid introduction to C, assuming that you have some kind of programming experience already. In addition, just as you learn many new words intuitively by seeing them in context in a novel, it's possible to learn a great deal about C from the context in which it is used in the rest of the book.

Thinking in C

For those of you who need a gentler introduction to C than the chapter in this book, I have created with Chuck Allison a CD ROM called "Thinking in C: foundations for Java and C++" which will introduce you to the aspects of C that are necessary for you to move on to C++ or Java (leaving out the nasty bits that C programmers must deal with on a day-to-day basis but that the C++ and Java languages steer you away from). This CD can be ordered at http://www.BruceEckel.com. [Note: the CD will not be available until late Fall 98, at the earliest - watch the Web site for updates]

Learning C++

I clawed my way into C++ from exactly the same position as I expect the readers of this book will: As a C programmer with a very no-nonsense, nuts-and-bolts attitude about programming. Worse, my background and experience were in hardware-level embedded programming, where C has often been considered a high-level language and an inefficient overkill for pushing bits around. I discovered later that I wasn't even a very good C programmer, hiding my ignorance of structures, malloc( ) & free( ), setjmp( ) & longjmp( ), and other "sophisticated" concepts, scuttling away in shame when the subjects came up in conversation rather than reaching out for new knowledge.

When I began my struggle to understand C++, the only decent book was Stroustrup's self-professed "expert's guide," so I was left to simplify the basic concepts on my own. This resulted in my first C++ book, which was essentially a brain dump of my experience. That was designed as a reader's guide, to bring programmers into C and C++ at the same time. Both editions of the book garnered an enthusiastic response and I still feel it is a valuable resource.

At about the same time that Using C++ came out, I began teaching the language. Teaching C++ has become my profession; I've seen nodding heads, blank faces, and puzzled expressions in audiences all over the world since 1989. As I began giving in-house training with smaller groups of people, I discovered something during the exercises. Even those people who were smiling and nodding were confused about many issues. I found out, by chairing the C++ track at the Software Development Conference for the last three years, that I and other speakers tended to give the typical audience too many topics, too fast. So eventually, through both variety in the audience level and the way that I presented the material, I would end up losing some portion of the audience. Maybe it's asking too much, but because I am one of those people resistant to traditional lecturing (and for most people, I believe, such resistance results from boredom), I wanted to try to keep everyone up to speed.

For a time, I was creating a number of different presentations in fairly short order. Thus, I ended up learning by experiment and iteration (a technique that also works well in C++ program design). Eventually I developed a course using everything I had learned from my teaching experience, one I would be happy giving for a long time. It tackles the learning problem in discrete, easy-to-digest steps and for a hands-on seminar (the ideal learning situation), there are exercises following each of the short lessons.

This book developed over the course of two years, and the material in this book has been road-tested in many forms in many different seminars. The feedback that I've gotten from each seminar has helped me change and refocus the material until I feel it works well as a teaching medium. But it isn't just a seminar handout - I tried to pack as much information as I could within these pages, and structure it to draw you through, onto the next subject. More than anything, the book is designed to serve the solitary reader, struggling with a new programming language.

Goals

My goals in this book are to:

    1. Present the material a simple step at a time, so the reader can easily digest each concept before moving on.
    2. Use examples that are as simple and short as possible. This sometimes prevents me from tackling "real-world" problems, but I've found that beginners are usually happier when they can understand every detail of an example rather than being impressed by the scope of the problem it solves. Also, there's a severe limit to the amount of code that can be absorbed in a classroom situation. For this I will no doubt receive criticism for using "toy examples," but I'm willing to accept that in favor of producing something pedagogically useful. Those who want more complex examples can refer to the later chapters of C++ Inside & Out.
    3. Carefully sequence the presentation of features so that you aren't seeing something you haven't been exposed to. Of course, this isn't always possible; in those situations, a brief introductory description will be given.
    4. Give you what I think is important for you to understand about the language, rather than everything I know. I believe there is an "information importance hierarchy," and there are some facts that 95% of programmers will never need to know - facts that would just confuse people and add to their perception of the complexity of the language (and C++ is now considered to be more complex than Ada!). To take an example from C, if you memorize the operator precedence table (I never did) you can write clever code. But if you have to think about it, it will confuse the reader/maintainer of that code. So forget about precedence, and use parentheses when things aren't clear. This same attitude will be taken with some information in the C++ language, which I think is more important for compiler writers than for programmers.
    5. Keep each section focused enough so the lecture time - and the time between exercise periods - is small. Not only does this keep the audience's minds more active and involved during a hands-on seminar, but it gives the reader a greater sense of accomplishment.
    6. Provide the reader with a solid foundation so they can understand the issues well enough to move on to more difficult coursework and books.
    7. Endeavor not to use any particular vendor's version of C++ because, for learning the language, I don't feel that the details of a particular implementation are as important as the language itself. Most vendors' documentation concerning their own implementation specifics is adequate.

Chapters

C++ is a language where new and different features are built on top of an existing syntax. (Because of this it is referred to as a hybrid object-oriented programming language.) As more people have passed through the learning curve, we've begun to get a feel for the way C programmers move through the stages of the C++ language features. Because it appears to be the natural progression of the C-trained mind, I decided to understand and follow this same path, and accelerate the process by posing and answering the questions that came to me as I learned the language and that came from audiences as I taught it.

This course was designed with one thing in mind: the way people learn the C++ language. Audience feedback helped me understand which parts were difficult and needed extra illumination. In the areas where I got ambitious and included too many features all at once, I came to know - through the process of presenting the material - that if you include a lot of new features, you have to explain them all, and the student's confusion is easily compounded. As a result, I've taken a great deal of trouble to introduce the features as few at a time as possible; ideally, only one at a time per chapter.

The goal, then, is for each chapter to teach a single feature, or a small group of associated features, in such a way that no additional features are relied upon. That way you can digest each piece in the context of your current knowledge before moving on. To accomplish this, I leave many C features in place much longer than I would prefer. For example, I would like to be using the C++ iostreams IO library right away, instead of using the printf( ) family of functions so familiar to C programmers, but that would require introducing the subject prematurely, and so many of the early chapters carry the C library functions with them. This is also true with many other features in the language. The benefit is that you, the C programmer, will not be confused by seeing all the C++ features used before they are explained, so your introduction to the language will be gentle and will mirror the way you will assimilate the features if left to your own devices.

Here is a brief description of the chapters contained in this book.

(0) The evolution of objects. When projects became too big and too complicated to easily maintain, the "software crisis" was born, saying, "We can't get projects done, and if we can they're too expensive!" This precipitated a number of responses, which are discussed in this chapter along with the ideas of object-oriented programming (OOP) and how it attempts to solve the software crisis. You'll also learn about the benefits and concerns of adopting the language and suggestions for moving into the world of C++.

(1) Data abstraction. Most features in C++ revolve around this key concept: the ability to create new data types. Not only does this provide superior code organization, but it lays the groundwork for more powerful OOP abilities. You'll see how this idea is facilitated by the simple act of putting functions inside structures, the details of how to do it, and what kind of code it creates.
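As a small taste of what that looks like, here is a sketch of a structure with functions inside it (the Stack type and its operations are illustrative, not an example taken from the book):

```cpp
// Data abstraction: the data and the functions that operate on
// it are packaged together in a single structure.
struct Stack {
  int data[100];
  int top;                              // index of the next free slot
  void initialize() { top = 0; }
  void push(int i) { data[top++] = i; }
  int pop() { return data[--top]; }
};
```

A Stack now behaves much like a built-in type: you declare one and make requests of it, instead of passing a raw struct to free functions.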

(2) Hiding the implementation. You can decide that some of the data and functions in your structure are unavailable to the user of the new type by making them private. This means you can separate the underlying implementation from the interface that the client programmer sees, and thus allow that implementation to be easily changed without affecting client code. The keyword class is also introduced as a fancier way to describe a new data type, and the meaning of the word "object" is demystified (it's a variable on steroids).

(3) Initialization & cleanup. One of the most common C errors results from uninitialized variables. The constructor in C++ allows you to guarantee that variables of your new data type ("objects of your class") will always be properly initialized. If your objects also require some sort of cleanup, you can guarantee that this cleanup will always happen with the C++ destructor.
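The guarantee can be sketched like this (Tracker and the liveObjects counter are hypothetical names used only for illustration):

```cpp
int liveObjects = 0;  // counts objects currently alive

class Tracker {
public:
  Tracker() { ++liveObjects; }   // constructor: always runs at creation
  ~Tracker() { --liveObjects; }  // destructor: always runs at destruction
};

// The Tracker inside this function is initialized on entry and
// cleaned up automatically when the scope ends.
int duringScope() {
  Tracker t;
  return liveObjects;
}
```

The caller never has to remember an initialization or cleanup call; the compiler inserts both.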

(4) Function overloading & default arguments. C++ is intended to help you build big, complex projects. While doing this, you may bring in multiple libraries that use the same function name, and you may also choose to use the same name with different meanings within a single library. C++ makes this easy with function overloading, which allows you to reuse the same function name as long as the argument lists are different. Default arguments allow you to call the same function in different ways by automatically providing default values for some of your arguments.
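A minimal sketch of both ideas (the function names here are invented for illustration):

```cpp
#include <string>

// Overloading: the same name with different argument lists.
int count(int n) { return n; }
int count(const std::string& s) { return int(s.size()); }

// Default argument: callers may omit the second argument.
int scale(int value, int factor = 10) { return value * factor; }
```

The compiler selects the right `count` from the argument type, and `scale(3)` quietly becomes `scale(3, 10)`.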

(5) Introduction to iostreams. One of the original C++ libraries, the one that provides the essential I/O facility, is called iostreams. Iostreams is intended to replace C's STDIO.H with an I/O library that is easier to use, more flexible, and extensible (you can adapt it to work with your new classes). This chapter teaches you the ins and outs of how to make the best use of the existing iostream library for standard I/O, file I/O, and in-memory formatting.
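For example, in-memory formatting might look like this, a sketch using the standard string-stream facility (the `format` function is an invented name, not from the book):

```cpp
#include <sstream>
#include <string>

// In-memory formatting: the same insertion syntax used for
// standard I/O and file I/O works on a string stream.
std::string format(int i, double d) {
  std::ostringstream os;
  os << "i = " << i << ", d = " << d;
  return os.str();
}
```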

(6) Constants. This chapter covers the const and volatile keywords that have additional meaning in C++, especially inside classes. It also shows how the meaning of const varies inside and outside classes and how to create compile-time constants in classes.

(7) Inline functions. Preprocessor macros eliminate function call overhead, but the preprocessor also eliminates valuable C++ type checking. The inline function gives you all the benefits of a preprocessor macro plus all the benefits of a real function call.
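A quick sketch of the contrast (both names are illustrative):

```cpp
// A preprocessor macro: no type checking, and the argument text
// is pasted in twice, so side effects can be evaluated twice.
#define MACRO_SQUARE(x) ((x) * (x))

// An inline function: real function semantics and full type
// checking, yet the compiler may expand it in place, avoiding
// the call overhead.
inline int square(int x) { return x * x; }
```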

(8) Name control. Creating names is a fundamental activity in programming, and when a project gets large, the number of names can be overwhelming. C++ allows you a great deal of control over names: creation, visibility, placement of storage, and linkage. This chapter shows how names are controlled using two techniques. First, the static keyword is used to control visibility and linkage, and its special meaning with classes is explored. A far more useful technique for controlling names at the global scope is C++'s namespace feature, which allows you to break up the global name space into distinct regions.
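The namespace idea can be sketched in a few lines (LibraryA and LibraryB are invented names standing in for two independent libraries):

```cpp
// Two libraries may use the same name without collision when
// each wraps its names in its own namespace.
namespace LibraryA {
  int version() { return 1; }
}

namespace LibraryB {
  int version() { return 2; }
}
```

A caller selects the one it means with the scope-resolution operator: `LibraryA::version()`.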

(9) References & the copy-constructor. C++ pointers work like C pointers with the additional benefit of stronger C++ type checking. There's also a new way to handle addresses: from Algol and Pascal, C++ lifts the reference, which lets the compiler handle the address manipulation while you use ordinary notation. You'll also meet the copy-constructor, which controls the way objects are passed into and out of functions by value. Finally, the C++ pointer-to-member is illuminated.
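The difference in notation can be sketched like this (both function names are illustrative):

```cpp
// Pass by pointer (C style): the caller must take the address,
// and the function must dereference.
void incrementViaPointer(int* p) { ++(*p); }

// Pass by reference (C++): the compiler handles the address
// manipulation, and the notation stays ordinary.
void incrementViaReference(int& r) { ++r; }
```

The caller writes `incrementViaReference(a)` with no `&`, yet `a` itself is modified.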

(10) Operator overloading. This feature is sometimes called "syntactic sugar." It lets you sweeten the syntax for using your type by allowing operators as well as function calls. In this chapter you'll learn that operator overloading is just a different type of function call and how to write your own, especially the sometimes-confusing uses of arguments, return types, and making an operator a member or friend.
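For a taste, here is a sketch of a member `operator+` (the Fraction class is invented for illustration and does not reduce its results):

```cpp
// An overloaded operator is just a function with an unusual name.
class Fraction {
  int num, den;
public:
  Fraction(int n, int d) : num(n), den(d) {}
  // Member operator: the left operand is the object itself.
  Fraction operator+(const Fraction& rhs) const {
    return Fraction(num * rhs.den + rhs.num * den, den * rhs.den);
  }
  int numerator() const { return num; }
  int denominator() const { return den; }
};
```

Writing `a + b` is then exactly the same as calling `a.operator+(b)`.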

(11) Dynamic object creation. How many planes will an air-traffic system have to handle? How many shapes will a CAD system need? In the general programming problem, you can't know the quantity, lifetime, or type of the objects needed by your running program. In this chapter, you'll learn how C++'s new and delete elegantly solve this problem by safely creating objects on the heap.
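Continuing the air-traffic image, a sketch might look like this (the Plane class is invented for illustration):

```cpp
#include <string>

class Plane {
  std::string id;
public:
  Plane(const std::string& s) : id(s) {}
  const std::string& name() const { return id; }
};

// new creates an object on the heap and calls its constructor;
// delete calls the destructor and releases the storage.
```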

(12) Inheritance & composition. Data abstraction allows you to create new types from scratch; with composition and inheritance, you can create new types from existing types. With composition you assemble a new type using other types as pieces, and with inheritance you create a more specific version of an existing type. In this chapter you'll learn the syntax, how to redefine functions, and the importance of construction and destruction for inheritance & composition.
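Both relationships can appear in a single sketch (Engine, Vehicle, and Car are illustrative names):

```cpp
class Engine {                 // an existing type
public:
  int start() { return 1; }
};

class Vehicle {                // another existing type
public:
  int wheels() { return 4; }
};

// Composition: a Car *has an* Engine (a member object).
// Inheritance: a Car *is a* more specific Vehicle.
class Car : public Vehicle {
  Engine engine;
public:
  int start() { return engine.start(); }
};
```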

(13) Polymorphism & virtual functions. On your own, you might take nine months to discover and understand this cornerstone of OOP. Through small, simple examples you'll see how to create a family of types with inheritance and manipulate objects in that family through their common base class. The virtual keyword allows you to treat all objects in this family generically, which means the bulk of your code doesn't rely on specific type information. This makes your programs extensible, so building programs and code maintenance is easier and cheaper.
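A minimal sketch of the idea (a Shape family invented for illustration):

```cpp
// A family of types manipulated through their common base class.
class Shape {
public:
  virtual int sides() const { return 0; }
  virtual ~Shape() {}
};

class Triangle : public Shape {
public:
  int sides() const { return 3; }
};

class Square : public Shape {
public:
  int sides() const { return 4; }
};

// This code talks only to Shape, yet because sides() is virtual
// the right function is called for each derived type.
int countSides(const Shape& s) { return s.sides(); }
```

New Shape types can be added later without touching `countSides` at all; that is the extensibility the chapter describes.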

(14) Templates & container classes. Inheritance and composition allow you to reuse object code, but that doesn't solve all your reuse needs. Templates allow you to reuse source code by providing the compiler with a way to substitute type names in the body of a class or function. This supports the use of container class libraries, which are important tools for the rapid, robust development of object-oriented programs. This extensive chapter gives you a thorough grounding in this essential subject.
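The substitution can be sketched in a tiny container (Holder is an invented name, not one of the book's containers):

```cpp
// The compiler substitutes a concrete type for T at each use,
// stamping out a new class from the same source code.
template<class T>
class Holder {
  T item;
public:
  void set(const T& t) { item = t; }
  const T& get() const { return item; }
};
```

`Holder<int>` and `Holder<double>` are two distinct classes generated from one definition.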

(15) Multiple inheritance. This sounds simple at first: A new class is inherited from more than one existing class. However, you can end up with ambiguities and multiple copies of base-class objects. That problem is solved with virtual base classes, but the bigger issue remains: When do you use it? Multiple inheritance is only essential when you need to manipulate an object through more than one common base class. This chapter explains the syntax for multiple inheritance and shows alternative approaches; in particular, how templates solve one common problem. The use of multiple inheritance to repair a "damaged" class interface is demonstrated as a genuinely valuable use of this feature.

(16) Exception handling. Error handling has always been a problem in programming. Even if you dutifully return error information or set a flag, the function caller may simply ignore it. Exception handling is a primary feature in C++ that solves this problem by allowing you to "throw" an object out of your function when a critical error happens. You throw different types of objects for different errors, and the function caller "catches" these objects in separate error handling routines. If you throw an exception, it cannot be ignored, so you can guarantee that something will happen in response to your error.
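The throw-and-catch shape of the mechanism can be sketched like this (RangeError and checkedDivide are invented names):

```cpp
// Different errors are thrown as objects of different types; the
// caller catches each type in its own handler.
class RangeError {};

int checkedDivide(int a, int b) {
  if (b == 0)
    throw RangeError();   // cannot be silently ignored by the caller
  return a / b;
}
```

A caller that wants to survive the error must write a `try { ... } catch (RangeError&) { ... }` block; there is no flag to forget to check.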

(17) Run-time type identification. Run-time type identification (RTTI) lets you find the exact type of an object when you only have a pointer or reference to the base type. Normally, you'll want to intentionally ignore the exact type of an object and let the virtual function mechanism implement the correct behavior for that type. But occasionally it is very helpful to know the exact type of an object for which you only have a base pointer; often this information allows you to perform a special-case operation more efficiently. This chapter explains what RTTI is for and how to use it.
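One RTTI tool, dynamic_cast, can be sketched like this (the class names are invented for illustration):

```cpp
class Shape {
public:
  virtual ~Shape() {}      // RTTI needs a polymorphic (virtual) type
};

class Circle : public Shape {
public:
  double radius() const { return 1.0; }
};

// dynamic_cast discovers the exact type behind a base pointer;
// it yields a null pointer when the object is not that type.
bool isCircle(Shape* s) {
  return dynamic_cast<Circle*>(s) != 0;
}
```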

Appendix A: Etcetera. At this writing, the C++ Standard is unfinished. Although virtually all the features that will end up in the language have been added to the standard, some haven't appeared in all compilers. This appendix briefly mentions some of the other features you should look for in your compiler (or in future releases of your compiler).

Appendix B: Programming guidelines. This appendix is a series of suggestions for C++ programming. They've been collected over the course of my teaching and programming experience, and also from the insights of other teachers. Many of these tips are summarized from the pages of this book.

Appendix C: Simulating virtual constructors. The constructor cannot have any virtual qualities, and this sometimes produces awkward code. This appendix demonstrates two approaches to "virtual construction."

Exercises

I've discovered that simple exercises are exceptionally useful during a seminar to complete a student's understanding, so you'll find a set at the end of each chapter.

These are fairly simple, so they can be finished in a reasonable amount of time in a classroom situation while the instructor observes, making sure all the students are absorbing the material. Some exercises are a bit more challenging to keep advanced students entertained. They're all designed to be solved in a short time and are only there to test and polish your knowledge rather than present major challenges (presumably, you'll find those on your own, or more likely they'll find you).

Source code

The source code for this book is copyrighted freeware, distributed via the web site http://www.BruceEckel.com. The copyright prevents you from republishing the code in print media without permission.

To unpack the code, you download the text version of the book and run the program ExtractCode (from chapter 23), the source for which is also provided on the Web site. The program will create a directory for each chapter and unpack the code into those directories. In the starting directory where you unpacked the code you will find the following copyright notice:

//:! :CopyRight.txt
Copyright (c) Bruce Eckel, 1998
Source code file from the book "Thinking in C++"
All rights reserved EXCEPT as allowed by the
following statements: You can freely use this file
for your own work (personal or commercial),
including modifications and distribution in
executable form only. Permission is granted to use
this file in classroom situations, including its
use in presentation materials, as long as the book
"Thinking in C++" is cited as the source. 
Except in classroom situations, you cannot copy
and distribute this code; instead, the sole
distribution point is http://www.BruceEckel.com 
(and official mirror sites) where it is
freely available. You cannot remove this
copyright and notice. You cannot distribute
modified versions of the source code in this
package. You cannot use this file in printed
media without the express permission of the
author. Bruce Eckel makes no representation about
the suitability of this software for any purpose.
It is provided "as is" without express or implied
warranty of any kind, including any implied
warranty of merchantability, fitness for a
particular purpose or non-infringement. The entire
risk as to the quality and performance of the
software is with you. Bruce Eckel and the
publisher shall not be liable for any damages
suffered by you or any third party as a result of
using or distributing software. In no event will
Bruce Eckel or the publisher be liable for any
lost revenue, profit, or data, or for direct,
indirect, special, consequential, incidental, or
punitive damages, however caused and regardless of
the theory of liability, arising out of the use of
or inability to use software, even if Bruce Eckel
and the publisher have been advised of the
possibility of such damages. Should the software
prove defective, you assume the cost of all
necessary servicing, repair, or correction. If you
think you've found an error, please submit the
correction using the form you will find at
www.BruceEckel.com. (Please use the same
form for non-code errors found in the book.)
///:~

You may use the code in your projects and in the classroom as long as the copyright notice is retained.

Coding standards

In the text of this book, identifiers (function, variable, and class names) will be set in bold. Most keywords will also be set in bold, except for those keywords which are used so much that the bolding can become tedious, like class and virtual.

I use a particular coding style for the examples in this book. It was developed over a number of years, and was inspired by Bjarne Stroustrup's style in his original The C++ Programming Language. The subject of formatting style is good for hours of hot debate, so I'll just say I'm not trying to dictate correct style via my examples; I have my own motivation for using the style that I do. Because C++ is a free-form programming language, you can continue to use whatever style you're comfortable with.

The programs in this book are files that are automatically extracted from the text of the book, which allows them to be tested to ensure they work correctly. (I use a special format on the first line of each file to facilitate this extraction; the line begins with the characters '/' '/' ':' and the file name and path information.) Thus, the code files printed in the book should all work without compiler errors when compiled with an implementation that conforms to Standard C++ (note that not all compilers support all language features). The errors that should cause compile-time error messages are commented out with the comment //! so they can be easily discovered and tested using automatic means. Errors discovered and reported to the author will appear first in the electronic version of the book (at www.BruceEckel.com) and later in updates of the book.

One of the standards in this book is that all programs will compile and link without errors (although they will sometimes cause warnings). To this end, some of the programs, which only demonstrate a coding example and don't represent stand-alone programs, will have empty main( ) functions, like this:

int main() {}

This allows the linker to complete without an error.

The standard for main( ) is to return an int, but Standard C++ states that if there is no return statement inside main( ), the compiler will automatically generate code to return 0. This option will be used in this book (although some compilers may still generate warnings for this).

Language standards

Throughout this book, when referring to conformance to the ANSI/ISO C standard, I will use the term Standard C.

At this writing, the ANSI/ISO C++ committee has finished working on the language. Thus, I will use the term Standard C++.

Language support

Your compiler may not support all the features discussed in this book, especially if you don't have the newest version of your compiler. Implementing a language like C++ is a Herculean task, and you can expect that the features will appear in pieces rather than all at once. But if you attempt one of the examples in the book and get a lot of errors from the compiler, it's not necessarily a bug in the code or the compiler; the feature may simply not be implemented in your particular compiler yet.

Seminars & CD Roms

My company provides public hands-on training seminars based on the material in this book. Selected material from each chapter represents a lesson, which is followed by a monitored exercise period so each student receives personal attention. Information and sign-up forms for upcoming seminars can be found at http://www.BruceEckel.com. If you have specific questions, you may direct them to Bruce@EckelObjects.com.

Errors

No matter how many tricks a writer uses to detect errors, some always creep in and these often leap off the page for a fresh reader. If you discover anything you believe to be an error, please use the correction form you will find at http://www.BruceEckel.com. Your help is appreciated.

Acknowledgements

The ideas and understanding in this book have come from many sources: friends like Dan Saks, Scott Meyers, Charles Petzold, and Michael Wilk; pioneers of the language like Bjarne Stroustrup, Andrew Koenig, and Rob Murray; members of the C++ Standards Committee like Tom Plum, Reg Charney, Tom Penello, Chuck Allison, Sam Druker, Nathan Myers, and Uwe Stienmueller; people who have spoken in my C++ track at the Software Development Conference; and very often students in my seminars, who ask the questions I need to hear in order to make the material clearer.

I have been presenting this material on tours produced by Miller Freeman Inc. with my friend Richard Hale Shaw. Richard's insights and support have been very helpful (and Kim's, too). Thanks also to KoAnn Vikoren, Eric Faurot, Jennifer Jessup, Nicole Freeman, Barbara Hanscome, Regina Ridley, Alex Dunne, and the rest of the cast and crew at MFI.

The book design, cover design, and cover photo were created by my friend Daniel Will-Harris, noted author and designer, who used to play with rub-on letters in junior high school while he awaited the invention of computers and desktop publishing. However, I produced the camera-ready pages myself, so the typesetting errors are mine. Microsoft® Word for Windows 97 was used to write the book and to create camera-ready pages. The body typeface is [Times for the electronic distribution] and the headlines are in [Times for the electronic distribution].

The people at Prentice Hall were wonderful. Thanks to Alan Apt, Sondra Chavez, Mona Pompili, Shirley McGuire, and everyone else there who made life easy for me.

A special thanks to all my teachers, and all my students (who are my teachers as well).

Personal thanks to my friends Gen Kiyooka and Kraig Brockschmidt. The supporting cast of friends includes, but is not limited to: Zack Urlocker, Andrew Binstock, Neil Rubenking, Steve Sinofsky, JD Hildebrandt, Brian McElhinney, Brinkley Barr, Larry O'Brien, Bill Gates at Midnight Engineering Magazine, Larry Constantine & Lucy Lockwood, Tom Keffer, Greg Perry, Dan Putterman, Christi Westphal, Gene Wang, Dave Mayer, David Intersimone, Claire Sawyers, Claire Jones, The Italians (Andrea Provaglio, Laura Fallai, Marco Cantu, Corrado, Ilsa and Christina Giustozzi), Chris & Laura Strand, The Almquists, Brad Jerbic, Marilyn Cvitanic, The Mabrys, The Haflingers, The Pollocks, Peter Vinci, The Robbins Families, The Moelter Families (& the McMillans), The Wilks, Dave Stoner, Laurie Adams, The Penneys, The Cranstons, Larry Fogg, Mike & Karen Sequeira, Gary Entsminger & Allison Brody, Chester Andersen, Joe Lordi, Dave & Brenda Bartlett, The Rentschlers, The Sudeks, Lynn & Todd, and their families. And of course, Mom & Dad.

1: Introduction
to objects

The genesis of the computer revolution was in a machine. The genesis of our programming languages thus tends to look like that machine.

But the computer is not so much a machine as it is a mind amplification tool and a different kind of expressive medium. As a result, the tools are beginning to look less like machines and more like parts of our minds, and more like other expressive media such as writing, painting, sculpture, animation, or filmmaking. Object-oriented programming is part of this movement toward the computer as an expressive medium.

This chapter will introduce you to the basic concepts of object-oriented programming (OOP), followed by a discussion of OOP development methods. Finally, strategies for moving yourself, your projects, and your company to object-oriented programming are presented.

This chapter is background and supplementary material. If you're eager to get to the specifics of the language, feel free to jump ahead to later chapters. You can always come back here and fill in your knowledge later.

The progress of abstraction

All programming languages provide abstractions. It can be argued that the complexity of the problems you can solve is directly related to the kind and quality of abstraction. By "kind" I mean: what is it that you are abstracting? Assembly language is a small abstraction of the underlying machine. Many so-called "imperative" languages that followed (such as FORTRAN, BASIC, and C) were abstractions of assembly language. These languages are big improvements over assembly language, but their primary abstraction still requires you to think in terms of the structure of the computer rather than the structure of the problem you are trying to solve. The programmer must establish the association between the machine model (in the "solution space") and the model of the problem that is actually being solved (in the "problem space"). The effort required to perform this mapping, and the fact that it is extrinsic to the programming language, produces programs that are difficult to write and expensive to maintain, and as a side effect has created the entire "programming methods" industry.

The alternative to modeling the machine is to model the problem you're trying to solve. Early languages such as LISP and APL chose particular views of the world ("all problems are ultimately lists" or "all problems are algorithmic"). PROLOG casts all problems into chains of decisions. Languages have been created for constraint-based programming and for programming exclusively by manipulating graphical symbols. (The latter proved to be too restrictive.) Each of these approaches is a good solution to the particular class of problems it was designed to solve, but when you step outside of that domain it becomes awkward.

The object-oriented approach goes a step further by providing tools for the programmer to represent elements in the problem space. This representation is general enough that the programmer is not constrained to any particular type of problem. We refer to the elements in the problem space and their representations in the solution space as "objects." (Of course, you will also need other objects that don't have problem-space analogs.) The idea is that the program is allowed to adapt itself to the lingo of the problem by adding new types of objects, so when you read the code describing the solution, you're reading words that also express the problem. This is a more flexible and powerful language abstraction than what we've had before. Thus OOP allows you to describe the problem in terms of the problem, rather than in terms of the solution. There's still a connection back to the computer, though. Each object looks quite a bit like a little computer; it has a state, and it has operations you can ask it to perform. However, this doesn't seem like such a bad analogy to objects in the real world; they all have characteristics and behaviors.

Alan Kay summarized five basic characteristics of Smalltalk, the first successful object-oriented language and one of the languages upon which C++ is based. These characteristics represent a pure approach to object-oriented programming:

    1. Everything is an object. Think of an object as a fancy variable; it stores data, but you can also ask it to perform operations on itself by making requests. In theory, you can take any conceptual component in the problem you're trying to solve (dogs, buildings, services, etc.) and represent it as an object in your program.
    2. A program is a bunch of objects telling each other what to do by sending messages. To make a request of an object, you "send a message" to that object. More concretely, you can think of a message as a request to call a function that belongs to a particular object.
    3. Each object has its own memory made up of other objects. Or, you make a new kind of object by making a package containing existing objects. Thus, you can build up complexity in a program while hiding it behind the simplicity of objects.
    4. Every object has a type. Using the parlance, each object is an instance of a class, where "class" is synonymous with "type." The most important distinguishing characteristic of a class is "what messages can you send to it?"
    5. All objects of a particular type can receive the same messages. This is actually a very loaded statement, as you will see later. Because an object of type circle is also an object of type shape, a circle is guaranteed to receive shape messages. This means you can write code that talks to shapes and automatically handle anything that fits the description of a shape. This substitutability is one of the most powerful concepts in OOP.

Some language designers have decided that object-oriented programming itself is not adequate to easily solve all programming problems, and advocate the combination of various approaches into multiparadigm programming languages.

An object has an interface

Aristotle was probably the first to begin a careful study of the concept of type. He was known to speak of "the class of fishes and the class of birds." The concept that all objects, while being unique, are also part of a set of objects that have characteristics and behaviors in common was directly used in the first object-oriented language, Simula-67, with its fundamental keyword class that introduces a new type into a program (thus class and type are often used synonymously).

Simula, as its name implies, was created for developing simulations such as the classic "bank teller problem." In this, you have a bunch of tellers, customers, accounts, transactions, etc. The members (elements) of each class share some commonality: every account has a balance, every teller can accept a deposit, etc. At the same time, each member has its own state; each account has a different balance, each teller has a name. Thus the tellers, customers, accounts, transactions, etc. can each be represented with a unique entity in the computer program. This entity is the object, and each object belongs to a particular class that defines its characteristics and behaviors.

So, although what we really do in object-oriented programming is create new data types, virtually all object-oriented programming languages use the "class" keyword. When you see the word "type" think "class" and vice versa.

Once a type is established, you can make as many objects of that type as you like, and then manipulate those objects as the elements that exist in the problem you are trying to solve. Indeed, one of the challenges of object-oriented programming is to create a one-to-one mapping between the elements in the problem space (the place where the problem actually exists) and the solution space (the place where you're modeling that problem, such as a computer).

But how do you get an object to do useful work for you? There must be a way to make a request of that object so it will do something, such as complete a transaction, draw something on the screen or turn on a switch. And each object can satisfy only certain requests. The requests you can make of an object are defined by its interface, and the type is what determines the interface. The idea of type being equivalent to interface is fundamental in object-oriented programming.


A simple example might be a representation of a light bulb:

Light lt;

lt.on();

The name of the type/class is Light, and the requests that you can make of a Light object are to turn it on, turn it off, make it brighter, or make it dimmer. You create a Light object simply by declaring a name (lt) for an object of that type. To send a message to the object, you state the object's name and connect it to the message name with a period (dot). From the standpoint of the user of a pre-defined class, that's pretty much all there is to programming with objects.
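A sketch of what the Light type's declaration might look like is shown below; the internal brightness bookkeeping is purely illustrative, invented so the class is complete enough to run:

```cpp
class Light {
  int level;                 // hidden implementation detail
public:
  Light() : level(0) {}
  void on() { level = 100; }
  void off() { level = 0; }
  void brighten() { ++level; }
  void dim() { --level; }
  int brightness() const { return level; }
};
```

The public member functions are the interface; everything else is the type creator's private business.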

The hidden implementation

It is helpful to break up the playing field into class creators (those who create new data types) and client programmers (the class consumers who use the data types in their applications). The goal of the client programmer is to collect a toolbox full of classes to use for rapid application development. The goal of the class creator is to build a class that exposes only what's necessary to the client programmer and keeps everything else hidden. Why? If it's hidden, the client programmer can't use it, which means that the class creator can change the hidden portion at will without worrying about the impact on anyone else.

The interface establishes what requests you can make for a particular object. However, there must be code somewhere to satisfy that request. This, along with the hidden data, comprises the implementation. From a procedural programming standpoint, it's not that complicated. A type has a function associated with each possible request, and when you make a particular request to an object, that function is called. This process is often summarized by saying that you "send a message" (make a request) to an object, and the object figures out what to do with that message (it executes code).

In any relationship it's important to have boundaries that are respected by all parties involved. When you create a library, you establish a relationship with the client programmer, who is another programmer, but one who is putting together an application or using your library to build a bigger library.

If all the members of a class are available to everyone, then the client programmer can do anything with that class and there's no way to force any particular behaviors. Even though you might really prefer that the client programmer not directly manipulate some of the members of your class, without access control there's no way to prevent it. Everything's naked to the world.

There are two reasons for controlling access to members. The first is to keep client programmers' hands off portions they shouldn't touch: parts that are necessary for the internal machinations of the data type but not part of the interface that users need to solve their particular problems. This is actually a service to users because they can easily see what's important to them and what they can ignore.

The second reason for access control is to allow the library designer to change the internal workings of the structure without worrying about how it will affect the client programmer. For example, you might implement a particular class in a simple fashion to ease development, and then later decide you need to rewrite it to make it run faster. If the interface and implementation are clearly separated and protected, you can accomplish this and require only a relink by the user.

C++ uses three explicit keywords to set the boundaries in a class: public, private, and protected. Their use and meaning are remarkably straightforward. These access specifiers determine who can use the definitions that follow. public means the following definitions are available to everyone. The private keyword, on the other hand, means that no one can access those definitions except you, the creator of the type, inside member functions of that type. private is a brick wall between you and the client programmer; if someone tries to access a private member, they'll get a compile-time error. (If you don't specify an access specifier, the members of a class default to private, while the members of a struct default to public.) When you do need to grant a specific outside function or class access to your private members, C++ provides the friend keyword for exactly that purpose. protected acts just like private, with the exception that an inheriting class has access to protected members, but not private members. Inheritance will be covered shortly.
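The three access levels can be sketched in one small example (Base and Derived are invented names):

```cpp
class Base {
public:
  int interface() { return secret() + shared(); }  // anyone may call this
private:
  int secret() { return 1; }   // only Base's own member functions
protected:
  int shared() { return 2; }   // Base and its inheritors
};

class Derived : public Base {
public:
  int useShared() { return shared(); }     // protected: allowed
  // int useSecret() { return secret(); }  // private: compile-time error
};
```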

Reusing
the implementation

Once a class has been created and tested, it should (ideally) represent a useful unit of code. It turns out that this reusability is not nearly so easy to achieve as many would hope; it takes experience and insight to achieve a good design. But once you have such a design, it begs to be reused. Code reuse is arguably the greatest leverage that object-oriented programming languages provide.

The simplest way to reuse a class is to just use an object of that class directly, but you can also place an object of that class inside a new class. We call this "creating a member object." Your new class can be made up of any number and type of other objects, whatever is necessary to achieve the functionality desired in your new class. This concept is called composition, since you are composing a new class from existing classes. Sometimes composition is referred to as a "has-a" relationship, as in "a car has a trunk."

Composition comes with a great deal of flexibility. The member objects of your new class are usually private, making them inaccessible to client programmers using the class. This allows you to change those members without disturbing existing client code. You can also change the member objects at run time, which provides great flexibility. Inheritance, which is described next, does not have this flexibility since the compiler must place restrictions on classes created with inheritance.
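
Composition can be sketched as follows; the Car, Engine and Trunk classes are illustrative, invented for this example:

```cpp
#include <string>
using namespace std;

class Engine {
  int horsepower;
public:
  Engine(int hp = 100) : horsepower(hp) {}
  int power() const { return horsepower; }
};

class Trunk {
  bool open;
public:
  Trunk() : open(false) {}
  void openLid() { open = true; }
  bool isOpen() const { return open; }
};

class Car {
  Engine engine; // member objects are private, hidden from client programmers
  Trunk trunk;
public:
  Car(int hp) : engine(hp) {}
  int power() const { return engine.power(); } // delegate to a member object
  void openTrunk() { trunk.openLid(); }
  bool trunkOpen() const { return trunk.isOpen(); }
};
```

Because engine and trunk are private, either could later be replaced with a different implementation without disturbing code that uses Car.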

Because inheritance is so important in object-oriented programming it is often highly emphasized, and the new programmer can get the idea that inheritance should be used everywhere. This can result in awkward and overcomplicated designs. Instead, you should first look to composition when creating new classes, since it is simpler and more flexible. If you take this approach, your designs will stay cleaner. It will be reasonably obvious when you need inheritance.

Inheritance:
reusing the interface

By itself, the concept of an object is a convenient tool. It allows you to package data and functionality together by concept, so you can represent an appropriate problem-space idea rather than being forced to use the idioms of the underlying machine. These concepts are expressed in the primary idea of the programming language as a data type (using the class keyword).

It seems a pity, however, to go to all the trouble to create a data type and then be forced to create a brand new one that might have similar functionality. It's nicer if we can take the existing data type, clone it and make additions and modifications to the clone. This is effectively what you get with inheritance, with the exception that if the original class (called the base or super or parent class) is changed, the modified "clone" (called the derived or inherited or sub or child class) also reflects the appropriate changes. Inheritance is implemented in C++ using a special syntax that names another class as what is commonly referred to as the "base" class.

When you inherit you create a new type, and the new type contains not only all the members of the existing type (although the private ones are hidden away and inaccessible), but more importantly it duplicates the interface of the base class. That is, all the messages you can send to objects of the base class you can also send to objects of the derived class. Since we know the type of a class by the messages we can send to it, this means that the derived class is the same type as the base class. This type equivalence via inheritance is one of the fundamental gateways in understanding the meaning of object-oriented programming.
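
The inheritance syntax itself is tiny. In this sketch (with hypothetical Instrument and Wind classes), Wind adds nothing of its own, yet it duplicates the entire Instrument interface:

```cpp
class Instrument {
public:
  int notes() const { return 12; } // part of the base-class interface
};

class Wind : public Instrument { // the special syntax naming the base class
};
```

Any message you can send to an Instrument you can also send to a Wind, and a Wind can stand in wherever an Instrument is expected.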

Since both the base class and derived class have the same interface, there must be some implementation to go along with that interface. That is, there must be some code to execute when an object receives a particular message. If you simply inherit a class and don't do anything else, the methods from the base-class interface come right along into the derived class. That means objects of the derived class have not only the same type, they also have the same behavior, which doesn't seem particularly interesting.

You have two ways to differentiate your new derived class from the original base class it inherits from. The first is quite straightforward: you simply add brand new functions to the derived class. These new functions are not part of the base class interface. This means that the base class simply didn't do as much as you wanted it to, so you add more functions. This simple and primitive use for inheritance is, at times, the perfect solution to your problem. However, you should look closely for the possibility that your base class might need these additional functions.

Overriding base-class functionality

Although inheritance may sometimes imply that you are going to add new functions to the interface, that's not necessarily true. The second way to differentiate your new class is to change the behavior of an existing base-class function. This is referred to as overriding that function.

To override a function, you simply create a new definition for the function in the derived class. You're saying "I'm using the same interface function here, but I want it to do something different for my new type."
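
Here is a minimal sketch of overriding, again with invented Instrument and Wind classes. Note that without the virtual keyword (introduced later in this chapter) the choice of function is made at compile time, based on the type of the expression making the call:

```cpp
#include <string>
using namespace std;

class Instrument {
public:
  string play() const { return "plain note"; }
};

class Wind : public Instrument {
public:
  string play() const { return "breathy note"; } // new definition, same interface
};
```

Calling play( ) on a Wind object uses the new definition; the interface is unchanged, only the behavior differs.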

Is-a vs. is-like-a relationships

There's a certain debate that can occur about inheritance: Should inheritance override only base-class functions? This means that the derived type is exactly the same type as the base class since it has exactly the same interface. As a result, you can exactly substitute an object of the derived class for an object of the base class. This can be thought of as pure substitution. In a sense, this is the ideal way to treat inheritance. We often refer to the relationship between the base class and derived classes in this case as an is-a relationship, because you can say "a circle is a shape." A test for inheritance is whether you can state the is-a relationship about the classes and have it make sense.

There are times when you must add new interface elements to a derived type, thus extending the interface and creating a new type. The new type can still be substituted for the base type, but the substitution isn't perfect in a sense because your new functions are not accessible from the base type. This can be described as an is-like-a relationship; the new type has the interface of the old type but it also contains other functions, so you can't really say it's exactly the same. For example, consider an air conditioner. Suppose your house is wired with all the controls for cooling; that is, it has an interface that allows you to control cooling. Imagine that the air conditioner breaks down and you replace it with a heat pump, which can both heat and cool. The heat pump is-like-an air conditioner, but it can do more. Because your house is wired only to control cooling, it is restricted to communication with the cooling part of the new object. The interface of the new object has been extended, and the existing system doesn't know about anything except the original interface.

When you see the substitution principle it's easy to feel like that's the only way to do things, and in fact it is nice if your design works out that way. But you'll find that there are times when it's equally clear that you must add new functions to the interface of a derived class. With inspection both cases should be reasonably obvious.

Interchangeable objects
with polymorphism

Inheritance usually ends up creating a family of classes, all based on the same uniform interface. We express this with an inverted tree diagram:

One of the most important things you do with such a family of classes is to treat an object of a derived class as an object of the base class. This is important because it means you can write a single piece of code that ignores the specific details of type and talks just to the base class. That code is then decoupled from type-specific information, and thus is simpler to write and easier to understand. And, if a new type (a Triangle, for example) is added through inheritance, the code you write will work just as well for the new type of Shape as it did on the existing types. Thus the program is extensible.

Consider the above example. If you write a function in C++:

void doStuff(Shape& s) {
  s.erase();
  // ...
  s.draw();
}

This function speaks to any Shape, so it is independent of the specific type of object it's drawing and erasing. If in some other program we use the doStuff( ) function:

Circle c;
Triangle t;
Line l;
doStuff(c);
doStuff(t);
doStuff(l);

The calls to doStuff( ) automatically work right, regardless of the exact type of the object.

This is actually a pretty amazing trick. Consider the line:

doStuff(c);

What's happening here is that a Circle is being passed (by reference) into a function that's expecting a Shape. Since a Circle is a Shape it can be treated as one by doStuff( ). That is, any message that doStuff( ) can send to a Shape, a Circle can accept. So it is a completely safe and logical thing to do.

We call this process of treating a derived type as though it were its base type upcasting. The name cast is used in the sense of casting into a mold and the up comes from the way the inheritance diagram is typically arranged, with the base type at the top and the derived classes fanning out downward. Thus, casting to a base type is moving up the inheritance diagram: upcasting.

An object-oriented program contains some upcasting somewhere, because that's how you decouple yourself from knowing about the exact type you're working with. Look at the code in doStuff( ):

s.erase();

// ...

s.draw();

Notice that it doesn't say "If you're a Circle, do this, if you're a Square, do that, etc." If you write that kind of code, which checks for all the possible types a Shape can actually be, it's messy and you need to change it every time you add a new kind of Shape. Here, you just say "You're a shape, I know you can erase( ) yourself, do it and take care of the details correctly."

Dynamic binding

What's amazing about the code in doStuff( ) is that somehow the right thing happens. Calling draw( ) for Circle causes different code to be executed than when calling draw( ) for a Square or a Line, but when the draw( ) message is sent to an anonymous Shape, the correct behavior occurs based on the actual type that the Shape reference happens to be connected to. This is amazing because when the C++ compiler is compiling the code for doStuff( ), it cannot know exactly what types it is dealing with. So ordinarily, you'd expect it to end up calling the version of erase( ) for Shape, and draw( ) for Shape and not for the specific Circle, Square, or Line. And yet the right thing happens. Here's how it works.

When you send a message to an object even though you don't know what specific type it is, and the right thing happens, that's called polymorphism. The process used by object-oriented programming languages to implement polymorphism is called dynamic binding. The compiler and run-time system handle the details; all you need to know is that it happens and, more importantly, how to design with it.

Some languages require you to use a special keyword to enable dynamic binding; in C++ that keyword is virtual. You must remember to add the keyword because by default member functions are not dynamically bound. If a member function is virtual, then when you send a message to an object, the object will do the right thing, even when upcasting is involved.
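
The difference the virtual keyword makes can be shown directly. In this sketch (the class names are illustrative), name( ) is virtual and tag( ) is not, so a call through a Shape reference resolves them differently:

```cpp
#include <string>
using namespace std;

class Shape {
public:
  virtual string name() const { return "shape"; } // dynamically bound
  string tag() const { return "shape"; }          // statically bound
  virtual ~Shape() {}
};

class Circle : public Shape {
public:
  string name() const { return "circle"; } // overrides the virtual function
  string tag() const { return "circle"; }  // merely hides Shape::tag
};

string describe(const Shape& s) {  // s may really refer to a Circle (upcast)
  return s.name() + "/" + s.tag(); // virtual call / non-virtual call
}
```

Passing a Circle to describe( ) yields "circle/shape": the virtual call finds the Circle version at run time, while the non-virtual call was fixed at compile time as Shape::tag( ).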

Abstract base classes and interfaces

Often in a design, you want the base class to present only an interface for its derived classes. That is, you don't want anyone to actually create an object of the base class, only to upcast to it so that its interface can be used. This is accomplished by making that class abstract by giving it at least one pure virtual function. You can recognize a pure virtual function because it uses the virtual keyword and is followed by = 0. If anyone tries to make an object of an abstract class, the compiler prevents them. This is a tool to enforce a particular design.

When an abstract class is inherited, all pure virtual functions must be implemented, or the inherited class becomes abstract as well. Creating a pure virtual function allows you to put a member function in an interface without being forced to provide a possibly meaningless body of code for that member function.
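
A pure virtual function looks like this; the Instrument and Drum classes are again invented for illustration:

```cpp
class Instrument {
public:
  virtual int frequency() const = 0; // pure virtual: Instrument is abstract
  virtual ~Instrument() {}
};

class Drum : public Instrument {
public:
  int frequency() const { return 100; } // must be defined, or Drum is abstract too
};
```

Writing `Instrument i;` is a compile-time error, but you can freely upcast a Drum to an Instrument and use the interface.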

Objects: characteristics + behaviors

The first object-oriented programming language was Simula-67, developed in the sixties to solve, as the name implies, simulation problems. A classic simulation is the bank teller problem, which involves a bunch of tellers, customers, transactions, units of money: a lot of "objects." Objects that are identical except for their state during a program's execution are grouped together into "classes of objects," and that's where the word class came from.

A class describes a set of objects that have identical characteristics (data elements) and behaviors (functionality). So a class is really a data type because a floating point number (for example) also has a set of characteristics and behaviors. The difference is that a programmer defines a class to fit a problem rather than being forced to use an existing data type that was designed to represent a unit of storage in a machine. You extend the programming language by adding new data types specific to your needs. The programming system welcomes the new classes and gives them all the care and type-checking that it gives to built-in types.

This approach was not limited to building simulations. Whether or not you agree that any program is a simulation of a system you design, the use of OOP techniques can easily reduce a large set of problems to a simple solution. This discovery spawned a number of OOP languages, most notably Smalltalk, the most successful OOP language until C++.

Abstract data typing is a fundamental concept in object-oriented programming. Abstract data types work almost exactly like built-in types: You can create variables of a type (called objects or instances in object-oriented parlance) and manipulate those variables (called sending messages or requests; you send a message and the object figures out what to do with it).

Inheritance: type relationships

A type does more than describe the constraints on a set of objects; it also has a relationship with other types. Two types can have characteristics and behaviors in common, but one type may contain more characteristics than another and may also handle more messages (or handle them differently). Inheritance expresses this similarity between types with the concept of base types and derived types. A base type contains all the characteristics and behaviors that are shared among the types derived from it. You create a base type to represent the core of your ideas about some objects in your system. From the base type, you derive other types to express the different ways that core can be realized.

For example, a garbage-recycling machine sorts pieces of garbage. The base type is "garbage," and each piece of garbage has a weight, a value, and so on and can be shredded, melted, or decomposed. From this, more specific types of garbage are derived that may have additional characteristics (a bottle has a color) or behaviors (an aluminum can may be crushed, a steel can is magnetic). In addition, some behaviors may be different (the value of paper depends on its type and condition). Using inheritance, you can build a type hierarchy that expresses the problem you're trying to solve in terms of its types.

A second example is the classic shape problem, perhaps used in a computer-aided design system or game simulation. The base type is "shape," and each shape has a size, a color, a position, and so on. Each shape can be drawn, erased, moved, colored, and so on. From this, specific types of shapes are derived (inherited): circle, square, triangle, and so on, each of which may have additional characteristics and behaviors. Certain shapes can be flipped, for example. Some behaviors may be different (calculating the area of a shape). The type hierarchy embodies both the similarities and differences between the shapes.

Casting the solution in the same terms as the problem is tremendously beneficial because you don't need a lot of intermediate models (used with procedural languages for large problems) to get from a description of the problem to a description of the solution; in pre-object-oriented languages the solution was inevitably described in terms of computers. With objects, the type hierarchy is the primary model, so you go directly from the description of the system in the real world to the description of the system in code. Indeed, one of the difficulties people have with object-oriented design is that it's too simple to get from the beginning to the end. A mind trained to look for complex solutions is often stumped by this simplicity at first.

Polymorphism

When dealing with type hierarchies, you often want to treat an object not as the specific type that it is but as a member of its base type. This allows you to write code that doesn't depend on specific types. In the shape example, functions manipulate generic shapes without respect to whether they're circles, squares, triangles, and so on. All shapes can be drawn, erased, and moved, so these functions simply send a message to a shape object; they don't worry about how the object copes with the message.

Such code is unaffected by the addition of new types, which is the most common way to extend an object-oriented program to handle new situations. For example, you can derive a new subtype of shape called pentagon without modifying the functions that deal only with generic shapes. The ability to extend a program easily by deriving new subtypes is important because it greatly reduces the cost of software maintenance. (The so-called "software crisis" was caused by the observation that software was costing more than people thought it ought to.)

There's a problem, however, with attempting to treat derived-type objects as their generic base types (circles as shapes, bicycles as vehicles, cormorants as birds). If a function is going to tell a generic shape to draw itself, or a generic vehicle to steer, or a generic bird to fly, the compiler cannot know at compile-time precisely what piece of code will be executed. That's the point: when the message is sent, the programmer doesn't want to know what piece of code will be executed; the draw function can be applied equally to a circle, square, or triangle, and the object will execute the proper code depending on its specific type. If you add a new subtype, the code it executes can be different without changes to the function call. The compiler cannot know precisely what piece of code is executed, so what does it do?

The answer is the primary twist in object-oriented programming: The compiler cannot make a function call in the traditional sense. The function call generated by a non-OOP compiler causes what is called early binding, a term you may not have heard before because you've never thought about it any other way. It means the compiler generates a call to a specific function name, and the linker resolves that call to the absolute address of the code to be executed. In OOP, the program cannot determine the address of the code until run-time, so some other scheme is necessary when a message is sent to a generic object.

To solve the problem, object-oriented languages use the concept of late binding. When you send a message to an object, the code being called isn't determined until run-time. The compiler does ensure that the function exists and performs type checking on the arguments and return value (a language where this isn't true is called weakly typed), but it doesn't know the exact code to execute.

To perform late binding, the compiler inserts a special bit of code in lieu of the absolute call. This code calculates the address of the function body to execute at run-time using information stored in the object itself (this subject is covered in great detail in Chapter 13). Thus, each object can behave differently according to the contents of that pointer. When you send a message to an object, the object actually does figure out what to do with that message.

You state that you want a function to have the flexibility of late-binding properties using the keyword virtual. You don't need to understand the mechanics of virtual to use it, but without it you can't do object-oriented programming in C++. Virtual functions allow you to express the differences in behavior of classes in the same family. Those differences are what cause polymorphic behavior.

Manipulating concepts: what an OOP program looks like

You know what a procedural program in C looks like: data definitions and function calls. To find the meaning of such a program you have to work a little, looking through the function calls and low-level concepts to create a model in your mind. This is the reason we need intermediate representations for procedural programs; they tend to be confusing because the terms of expression are oriented more toward the computer than the problem you're solving.

Because C++ adds many new concepts to the C language, your natural assumption may be that, of course, the main( ) in a C++ program will be far more complicated than the equivalent C program. Here, you'll be pleasantly surprised: A well-written C++ program is generally far simpler and much easier to understand than the equivalent C program. What you'll see are the definitions of the objects that represent concepts in your problem space (rather than the issues of the computer representation) and messages sent to those objects to represent the activities in that space. One of the delights of object-oriented programming is that it's generally very easy to understand the code by reading it. Usually there's a lot less code, as well, because many of your problems will be solved by reusing existing library code.

Object landscapes
and lifetimes

Technically, OOP is just about abstract data typing, inheritance and polymorphism, but other issues can be at least as important. The remainder of this section will cover these issues.

One of the most important factors is the way objects are created and destroyed. Where is the data for an object and how is the lifetime of the object controlled? There are different philosophies at work here. C++ takes the approach that control of efficiency is the most important issue, so it gives the programmer a choice. For maximum run-time speed, the storage and lifetime can be determined while the program is being written, by placing the objects on the stack (these are sometimes called automatic or scoped variables) or in the static storage area. This places a priority on the speed of storage allocation and release, and control of these can be very valuable in some situations. However, you sacrifice flexibility because you must know the exact quantity, lifetime and type of objects while you're writing the program. If you are trying to solve a more general problem such as computer-aided design, warehouse management or air-traffic control, this is too restrictive.

The second approach is to create objects dynamically in a pool of memory called the heap. In this approach you don't know until run time how many objects you need, what their lifetime is or what their exact type is. Those are determined at the spur of the moment while the program is running. If you need a new object, you simply make it on the heap at the point that you need it. Because the storage is managed dynamically, at run time, the amount of time required to allocate storage on the heap is significantly longer than the time to create storage on the stack. (Creating storage on the stack is often a single assembly instruction to move the stack pointer down, and another to move it back up.) The dynamic approach makes the generally logical assumption that objects tend to be complicated, so the extra overhead of finding storage and releasing that storage will not have an important impact on the creation of an object. In addition, the greater flexibility is essential to solve the general programming problem.

C++ allows you to determine whether the objects are created while you write the program or at run time, to allow the control of efficiency. You might think that since it's more flexible, you'd always want to create objects on the heap rather than the stack. There's another issue, however, and that's the lifetime of an object. If you create an object on the stack or in static storage, the compiler determines how long the object lasts and can automatically destroy it. However, if you create it on the heap the compiler has no knowledge of its lifetime. There are two options for destroying such an object: the programmer can determine programmatically when to destroy it, or the environment can provide a feature called a garbage collector that automatically discovers when an object is no longer in use and destroys it. Of course, a garbage collector is much more convenient, but it requires that all applications be able to tolerate the existence of the garbage collector and its overhead. This does not meet the design requirements of the C++ language, so garbage collection was not built into C++, although other languages such as Smalltalk and Java do have garbage collectors, and third-party garbage collectors exist for C++.
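
The two lifetimes look like this in code; Thing is a hypothetical class, and the point is the contrast between automatic destruction and explicit delete:

```cpp
#include <string>
using namespace std;

class Thing {
  string id;
public:
  Thing(const string& s) : id(s) {}
  string name() const { return id; }
};

string demo() {
  Thing stackThing("stack");            // destroyed automatically at scope exit
  Thing* heapThing = new Thing("heap"); // lifetime controlled by the programmer
  string result = stackThing.name() + "/" + heapThing->name();
  delete heapThing; // no garbage collector: you must release heap objects yourself
  return result;
}
```

Forgetting the delete would leak the heap object; the stack object needs no such care.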

The rest of this section looks at additional factors concerning object lifetimes and landscapes.

Containers and iterators

If you don't know how many objects you're going to need to solve a particular problem, or how long they will last, you also don't know how to store those objects. How can you know how much space to create for those objects? You can't, since that information isn't known until run time.

The solution to most problems in object-oriented design seems flippant: you create another type of object. The new type of object that solves this particular problem holds objects, or pointers to objects. Of course, you can do the same thing with an array, which is available in most languages. But there's more. This new type of object, which is typically referred to in C++ as a container (also called a collection in some languages), will expand itself whenever necessary to accommodate everything you place inside it. So you don't need to know how many objects you're going to hold in a collection. Just create a collection object and let it take care of the details.

Fortunately, a good OOP language comes with a set of containers as part of the package. In C++, it's the Standard Template Library (STL). Object Pascal has containers in its Visual Component Library (VCL). Smalltalk has a very complete set of containers. Java has a standard set of containers. In some libraries, a generic container is considered good enough for all needs, and in others (C++ in particular) the library has different types of containers for different needs: a vector for consistent access to all elements, and a linked list for consistent insertion throughout the sequence, for example, so you can choose the particular type that fits your needs. These may include sets, queues, hash tables, trees, stacks, etc.

All containers have some way to put things in and get things out. The way that you place something into a container is fairly obvious. There's a function called "push" or "add" or a similar name. Fetching things out of a container is not always as apparent; if it's an array-like entity such as a vector, you might be able to use an indexing operator or function. But in many situations this doesn't make sense. Also, a single-selection function is restrictive. What if you want to manipulate or compare a set of elements in the container instead of just one?

The solution is an iterator, which is an object whose job is to select the elements within a container and present them to the user of the iterator. As a class, it also provides a level of abstraction. This abstraction can be used to separate the details of the container from the code that's accessing that container. The container, via the iterator, is abstracted to be simply a sequence. The iterator allows you to traverse that sequence without worrying about the underlying structure; that is, whether it's a vector, a linked list, a stack or something else. This gives you the flexibility to easily change the underlying data structure without disturbing the code in your program.

From the design standpoint, all you really want is a sequence that can be manipulated to solve your problem. If a single type of sequence satisfied all of your needs, there'd be no reason to have different kinds. There are two reasons that you need a choice of containers. First, containers provide different types of interfaces and external behavior. A stack has a different interface and behavior than that of a queue, which is different from that of a set or a list. One of these might provide a more flexible solution to your problem than the other. Second, different containers have different efficiencies for certain operations. The best example is a vector and a list. Both are simple sequences that can have identical interfaces and external behaviors. But certain operations can have radically different costs. Randomly accessing elements in a vector is a constant-time operation; it takes the same amount of time regardless of the element you select. However, in a linked list it is expensive to move through the list to randomly select an element, and it takes longer to find an element that is further down the list. On the other hand, if you want to insert an element in the middle of a sequence, it's much cheaper in a list than in a vector. These and other operations have different efficiencies depending upon the underlying structure of the sequence. In the design phase, you might start with a list and, when tuning for performance, change to a vector. Because of the abstraction via iterators, you can change from one to the other with minimal impact on your code.
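
A small sketch shows how iterators abstract the container to a sequence. The sum( ) function below is written purely in terms of iterators, so the same code works for a vector, a list, or any other STL sequence:

```cpp
#include <vector>
#include <list>
using namespace std;

// Traverse any sequence via its iterators, never touching the container itself:
template<class Iter>
int sum(Iter begin, Iter end) {
  int total = 0;
  while(begin != end)
    total += *begin++; // dereference the iterator, then advance it
  return total;
}
```

Switching the underlying container from list to vector (or back) when tuning for performance requires no change to sum( ) at all.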

In the end, remember that a container is only a storage cabinet to put objects in. If that cabinet solves all of your needs, it doesn't really matter how it is implemented (a basic concept with most types of objects). If you're working in a programming environment that has built-in overhead due to other factors (running under Windows, for example, or the cost of a garbage collector), then the cost difference between a vector and a linked list might not matter. You might need only one type of sequence. You can even imagine the "perfect" container abstraction, which can automatically change its underlying implementation according to the way it is used.

Exception handling:
dealing with errors

Ever since the beginning of programming languages, error handling has been one of the most difficult issues. Because it's so hard to design a good error-handling scheme, many languages simply ignore the issue, passing the problem on to library designers who come up with halfway measures that can work in many situations but can easily be circumvented, generally by just ignoring them. A major problem with most error-handling schemes is that they rely on programmer vigilance in following an agreed-upon convention that is not enforced by the language. If programmers are not vigilant, which is often the case when they are in a hurry, these schemes can easily be forgotten.

Exception handling wires error handling directly into the programming language and sometimes even the operating system. An exception is an object that is "thrown" from the site of the error and can be "caught" by an appropriate exception handler designed to handle that particular type of error. It's as if exception handling is a different, parallel path of execution that can be taken when things go wrong. And because it uses a separate execution path, it doesn't need to interfere with your normally executing code. This makes that code simpler to write since you aren't constantly forced to check for errors. In addition, a thrown exception is unlike an error value that's returned from a function or a flag that's set by a function in order to indicate an error condition; these can be ignored. An exception cannot be ignored, so it's guaranteed to be dealt with at some point. Finally, exceptions provide a way to reliably recover from a bad situation. Instead of just exiting, you are often able to set things right and restore the execution of a program, which produces much more robust programs.
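
The throw/catch mechanics can be sketched in a few lines. This example uses the standard runtime_error class; the function names are invented for illustration:

```cpp
#include <stdexcept>
#include <string>
using namespace std;

int safeDivide(int numerator, int denominator) {
  if(denominator == 0)
    throw runtime_error("divide by zero"); // thrown from the site of the error
  return numerator / denominator;
}

string tryDivide(int n, int d) {
  try {
    safeDivide(n, d);
    return "ok";
  } catch(runtime_error& e) { // caught by a handler for this type of error
    return e.what();          // recover instead of just exiting
  }
}
```

The normal path through tryDivide( ) contains no error checks at all; only when something goes wrong does execution switch to the parallel path through the handler.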

It's worth noting that exception handling isn't an object-oriented feature, although in object-oriented languages the exception is normally represented with an object. Exception handling existed before object-oriented languages.

Introduction to methods

A method is a set of processes and heuristics used to break down the complexity of a programming problem. Especially in OOP, methodology is a field of many experiments, so it is important to understand the problem the method is trying to solve before you consider adopting one. This is particularly true with C++, where the programming language itself is intended to reduce the complexity involved in expressing a program. This may in fact alleviate the need for ever-more-complex methodologies. Instead, simpler ones may suffice in C++ for a much larger class of problems than you could handle with simple methods for procedural languages.

It's also important to realize that the term "methodology" is often too grand and promises too much. Whatever you do now when you design and write a program is a method. It may be your own method, and you may not be conscious of doing it, but it is a process you go through as you create. If it is an effective process, it may need only a small tune-up to work with C++. If you are not satisfied with your productivity and the way your programs turn out, you may want to consider adopting a formal method.

Complexity

To analyze this situation, I shall start with a premise:

Computer programming is about managing complexity by imposing discipline.

This discipline appears in two ways, each of which can be examined separately:

    1. Internal discipline is seen in the structure of the program itself, through the expressiveness of the programming language and the cleverness and insight of the programmers.
    2. External discipline is seen in the meta-information about the program, loosely described as "design documentation" (not to be confused with product documentation).

      I maintain these two forms of discipline are at odds with each other: one is the essence of a program, driven by the need to make the program work the first time, and the other is the analysis of a program, driven by the need to understand and maintain the program in the future. Both creation and maintenance are fundamental properties of a program's lifetime, and a useful programming method will integrate both in the most expedient fashion, without going overboard in one direction or another.

      Internal discipline

      The evolution of computer programming (in which C++ is just a step on the path) began by imposing internal discipline on the programming model, allowing the programmer to alias names to machine locations and machine instructions. This was such a jump from numerical machine programming that it spawned other developments over the years, generally involving further abstractions away from the low-level machine and toward a model more suited to solving the problem at hand. Not all these developments caught on; often the ideas originated in the academic world and spread into the computing world at large depending on the set of problems they were well suited for.

      The creation of named subroutines as well as linking techniques to support libraries of these subroutines was a huge leap forward in the 1950s and spawned two languages that would be heavy-hitters for decades: FORTRAN ("FORmula TRANslation") for the scientific crowd and COBOL ("COmmon Business-Oriented Language") for the business folks. The successful language in "pure" computer science was Lisp ("LISt Processing"), while the more mathematically oriented could use APL ("A Programming Language").

      All of these languages had in common their use of procedures. Lisp and APL were created with language elegance in mind: the "mission statement" of the language is embodied in an engine that handles all cases of that mission. FORTRAN and COBOL were created to solve specific types of problems, and then evolved when those problems got more complex or new ones appeared. Even in their twilight years they continue to evolve: Versions of both FORTRAN and COBOL are appearing with object-oriented extensions. (A fundamental tenet of post-modern philosophy is that any organization takes on an independent life of its own; its primary goal becomes to perpetuate that life.)

      The named subroutine was recognized as a major leverage point in programming, and languages were designed around the concept, Algol and Pascal in particular. Other languages also appeared, successfully solved a subset of the programming problem, and took their place in the order of things. Two of the most interesting of these were Prolog, built around an inference engine (something you see popping up in other languages, often as a library), and FORTH, which is an extensible language. FORTH allows the programmer to re-form the language itself until it fits the problem, a concept akin to object-oriented programming. However, FORTH also allows you to change the base language itself. Because of this, it becomes a maintenance nightmare and is thus probably the purest expression of the concept of internal discipline, where the emphasis is on the one-time solution of the problem rather than the maintenance of that solution.

      Numerous other languages have been invented to solve a portion of the programming problem. Usually, these languages begin with a particular objective in mind. BASIC ("Beginner's All-purpose Symbolic Instruction Code"), for example, was designed in the 1960s to make programming simpler for the beginner. APL was designed for mathematical manipulations. Both languages can solve other problems, but the question becomes whether they are the most ideal solutions for the entire problem set. The joke is, "To a three-year-old with a hammer, everything looks like a nail," but it displays an underlying economic truth: If your only language is BASIC or APL, then that's probably the best solution for your problem, especially if the deadline is short term and the solution has a limited lifetime.

      However, two factors eventually creep in: the management of complexity, and maintenance (discussed in the next section). Of course, complexity is what the language was created to manage in the first place, and the programmer, loath to give up the years of time invested in fluency with the language, will go to greater and greater lengths to bend the language to the problem at hand. In fact, the boundary of chaos is fuzzy rather than clear: who's to say when your language begins to fail you? It doesn't, not all at once.

      The solution to a problem begins to take longer and becomes more of a challenge to the programmer. More cleverness is required to get around the limitations of the language, and this cleverness becomes standard lore, things you "just have to do to make the language work." This seems to be the way humans operate; rather than grumbling every time we encounter a flaw, we stop calling it a flaw.

      But eventually the programming problems became too difficult to solve and to maintain; that is, the solutions were too expensive. It was finally clear that the complexity was more than we could handle. Although a large class of programming problems involves doing most of the work during development and creating a solution that requires minimal maintenance (or might simply be thrown away or replaced with a different solution), this is only a subset of the general problem. In the general problem, you view the software as providing a service to people. As the needs of the users evolve, that service must evolve with it. Thus a project is not finished when version one ships; it is a living entity that continues to evolve, and the evolution of a program becomes part of the general programming problem.

      External discipline

      The need to evolve a program requires new ways of thinking about the problem. It's not just "How do we make it work?" but "How do we make it work and make it easy to change?" And there's a new problem: When you're just trying to make a program work, you can assume that the team is stable (you can hope, anyway), but if you're thinking in terms of a program's lifetime, you must assume that team members will change. This means that a new team member must somehow learn the essentials about a program that previous team members communicated to each other (probably using spoken words). Thus the program needs some form of design documentation.

      Because documentation is not essential to making a program work, there are no rules for its creation as there are rules imposed by a programming language on a program. Thus, if you require your documentation to satisfy a particular need, you must impose an external discipline. Whether documentation "works" or not is much more difficult to determine (and requires a program's lifetime to verify), so the "best" form of external discipline can be more hotly debated than the "best" programming language.

      The important question to keep in mind when making decisions about external discipline is, "What problem am I trying to solve?" The essence of the problem was stated above: "How do we make it work and make it easy to change?" However, this question has often gone through so many interpretations that it becomes "How can I conform to the FoobleBlah documentation specifications so the government will pay me for this project?" That is, the goal of the external discipline becomes the creation of a document rather than a good, maintainable program design; the document may become more important than the program itself.

      When asking questions about the directions of the future in general, and computing in particular, I start by applying an economic Occam's Razor: Which solution costs less? Assuming the solution satisfies the needs, is the price difference enough to motivate you out of your current, comfortable way of doing things? If your method involves saving every document ever created during the analysis and design of the project and maintaining all those documents as the project evolves, then you will have a system that maximizes the overhead of evolving a project in favor of complete understanding by new team members (assuming there's not so much documentation that it becomes daunting to read). Taken to an extreme, such a method can conceivably cost as much for program creation and maintenance as the approaches it is intended to replace.

      At the other end of the external-structure spectrum are the minimalist methods. Perform enough of an analysis to be able to come up with a design, then throw the analysis away so you don't spend time and money maintaining it. Do enough of a design to begin coding, then throw the design away, again, so you don't spend time and money to maintain the document. (The following may or may not be ironic, depending on your situation.) Then the code is so elegant and clear that it needs minimal comments. The code and comments together are enough for the new team member to get up to speed on the project. Because less time is spent with all that tedious documentation (which no one really understands anyway), new members integrate faster.

      Throwing everything away, however, is probably not the best idea, although if you don't maintain your documents, that's effectively what you do. Some form of document is usually necessary. (See "Scripting: a minimal method," later in this chapter.)

      Communication

      Expecting your code to suffice as documentation for a larger project is not particularly reasonable, even though it happens more often than not in practice. But it contains the essence of what we really want an external discipline to produce: communication. You'd like to communicate just enough to a new team member that she can help evolve the program. But you'd also like to keep the amount of money you spend on external discipline to a minimum because ultimately people are paying for the service the program provides, not the design documentation behind it. And to be truly useful, the external discipline should do more than just generate documentation; it should be a way for team members to communicate about the design as they're creating it. The goal of the ideal external discipline is to facilitate communication about the analysis and design of a program. This helps the people working on the program now and those who will work on the program in the future. The focus is not just to enable communication, but to create good designs.

      Because people (and programmers, in particular) are drawn to computers because the machine does work for you (again, an economic motivation), external disciplines that require the developer to do a lot of work for the machine seem doomed from the beginning. A successful method (that is, one that gets used) has two important features:

    1. It helps you analyze and design. That is, it's much easier to think about and communicate the analysis and design with the method than without it. The difference between your current productivity and the productivity you'll have using the method must be significant; otherwise you might as well stay where you are. Also, it must be simple enough to use that you don't need to carry a handbook. When you're solving your problem, that's what you want to think about, not whether you're using symbols or techniques properly.
    2. It doesn't impose overhead without short-term payback. Without some short-term reward in the form of visible progress toward your goal, you aren't going to feel very productive with a method, and you're going to find ways to avoid it. This progress cannot be in the guise of the transformation of one intermediate form to another. You've got to see your classes appear, along with the messages they send each other. To someone creating a method this may seem like an arbitrary constraint, but it's simple psychology: People want to feel like they're doing real creative work, and if your method keeps them from a goal rather than helping them gallop toward it, they'll find a way to get around your method.

      Magnitude

      One of the arguments against my view on the subject of methodologies is, "Well, yes, you can get away with anything as long as you're working with small projects," with "small" apparently meaning anything the listener is capable of imagining. Although this attitude is often used to intimidate the unconverted, there is a kernel of truth inside: What you need may depend on the scale of the problem you're attempting to solve. Tiny projects need no external discipline at all other than the patterns of problem solving learned in the lifetime of the individual programmer. Big projects with many people have little communication among those people and so must have a formal way for that communication to occur effectively and accurately.

      The gray area is the projects in between. Their needs may vary depending on the complexity of the project and the experience of the developers. Certainly, not all medium-sized projects require adherence to a full-blown method, generating many reports, lots of paper, and lots of work. Some probably do, but many can get away with "methodology lite" (more code, less documentation). The complexity of all the methodologies we are faced with may fall under an 80%-20% (or less) rule: We are being deluged with details of methodologies that may be needed for less than 20% of the programming problems being solved. If your designs are adequate and maintenance is not a nightmare, maybe you don't need it, or not all of it anyway.

      Structured OOP?

      An even more significant question arises. Suppose a methodology is needed to facilitate communication. This meta-communication about the program is necessary because the programming language is inadequate: it is too oriented toward the machine paradigm and is not very helpful for talking about the problem. The procedural-programming model of the world, for example, requires you to talk about a program in terms of data and functions that transform the data. Because this is not the way we discuss the real problem that's being solved, you must translate back and forth between the problem description and the solution description. Once you get a solution description and implement it, proper etiquette requires that you make changes to the problem description anytime you change the solution. This means you must translate from the machine paradigm backward into the problem space. To get a truly maintainable program that can be adapted to changes in the problem space, this is necessary. The overhead and organization required seem to demand an external discipline of some sort. The most important methodology for procedural programming is the structured techniques.

      Now consider this: What if the language in the solution space were uprooted from the machine paradigm? What if you could force the solution space to use the same terminology as the problem space? For example, an air conditioner in your climate-controlled building becomes an air conditioner in your climate-control program, a thermostat becomes a thermostat, and so on. (This is what you do, not coincidentally, with OOP.) Suddenly, translating from the problem space to the solution space becomes a minor issue. Conceivably, each phase in the analysis, design, and implementation of a program could use the same terminology, the same representation. So the question becomes, "Do we still need a document about the document, if the essential document (the program) can adequately describe itself?" If OOP does what it claims, then the shape of the programming problem may have changed to the point that all the difficulties solved by the structured techniques might not exist in this new world.

      This is not just a fanciful argument, as a thought experiment will reveal. Suppose you need to write a little utility, for example, one that performs an operation on a text file like those you'll find in the latter pages of Chapter 5. Some of those took a few minutes to write; the most difficult took a few hours. Now suppose you're back in the 1950s and the project must be done in machine language or assembly, with minimal libraries. It goes from a few minutes for one person to weeks or months and many people. In the 1950s you'd need a lot of external discipline and management; now you need none. Clearly, the development of tools has greatly increased the complexity of the problems we're able to solve without external discipline (and just as clearly, we go find problems that are more complicated).

      This is not to suggest that no external discipline is necessary, simply that a useful external discipline for OOP will solve different problems than those solved by a useful external discipline for procedural programming. In particular, the goal of an OOP method must be first and foremost to generate a good design. Not only do good designs of any kind promote reuse, but the need for a good design is directly in line with the needs of developers at all levels of a project. Thus, they will be more likely to adopt such a system.

      With these points in mind, let's consider some of the issues of an OOP design method.

      Five stages of object design

      The design life of an object is not limited to the period of time when you're writing the program. Instead, the design of an object appears to happen over a sequence of stages. It's helpful to have this perspective because you stop expecting perfection right away; instead, you realize that the understanding of what an object does and what it should look like happens over time. This view also applies to the design of various types of programs; the pattern for a particular type of program emerges through struggling again and again with that problem. Objects, too, have their patterns that emerge through understanding, use, and reuse.

      The following is a description, not a method. It is simply an observation of when you can expect design of an object to occur.

      1. Object discovery

      This phase occurs during the initial analysis of a program. Objects may be discovered by looking for external factors and boundaries, duplication of elements in the system, and the smallest conceptual units. Some objects are obvious if you already have a set of class libraries. Commonality between classes suggesting base classes and inheritance may appear right away, or later in the design process.

      2. Object assembly

      As you're building an object you'll discover the need for new members that didn't appear during discovery. The internal needs of the object may require new classes to support it.

      3. System construction

      Once again, more requirements for an object may appear at this later stage. As you learn, you evolve your objects. The need for communication and interconnection with other objects in the system may change the needs of your classes or require new classes.

      4. System extension

      As you add new features to a system you may discover that your previous design doesn't support easy system extension. With this new information, you can restructure parts of the system, very possibly adding new classes.

      5. Object reuse

      This is the real stress test for a class. If someone tries to reuse it in an entirely new situation, they'll probably discover some shortcomings. As you change a class to adapt to more new programs, the general principles of the class will become clearer, until you have a truly reusable object.

      Guidelines for object development

      These stages suggest some guidelines when thinking about developing your classes:

    1. Let a specific problem generate a class, then let the class grow and mature during the solution of other problems.
    2. Remember, discovering the classes you need is the majority of the system design. If you already had those classes, this would be a trivial project.
    3. Don't force yourself to know everything at the beginning; learn as you go. That's the way it will happen anyway.
    4. Start programming; get something working so you can prove or disprove your design. Don't fear procedural-style spaghetti code; classes partition the problem and help control anarchy and entropy. Bad classes do not break good classes.
    5. Always keep it simple. Little clean objects with obvious utility are better than big complicated interfaces. You can always start small and simple and expand the class interface when you understand it better. It can be impossible to reduce the interface of an existing class.

      What a method promises

      For various reasons methods have often promised a lot more than they can deliver. This is unfortunate because programmers are already a suspicious lot when it comes to strategies and unrealistic expectations; the bad reputation of some methods can cause others to be discarded out of hand. Because of this, valuable techniques can be ignored at significant financial and productivity costs.

      A managerís silver bullet

      The worst promise is to say, "This method will solve all your problems." Such a promise will more likely come couched in the idea that a method will solve problems that don't really have a solution, or at least not in the domain of program design: an impoverished corporate culture; exhausted, alienated, or adversarial team members; insufficient schedule and resources; or attempting to solve a problem that may in fact be insoluble (insufficient research). The best methodology, regardless of what it promises, will solve none of these problems or any problems in the same class. For that matter, OOP and C++ won't help either. Unfortunately, a manager in such a situation is precisely the person who's most vulnerable to the siren song of the silver bullet.

      A tool for productivity

      This is what a method should be. Increased productivity should come not only in the form of easy and inexpensive maintenance but especially in the creation of a good design in the first place. Because the motivating factor for the creation of methodologies was improved maintenance, some methods ignore the beauty and integrity of the program design in favor of maintenance issues. Instead, a good design should be the foremost goal; a good OOP design will have easy maintenance as a side-effect.

      What a method should deliver

      Regardless of what claims are made for a particular method, it should provide a number of essential features, covered in this section: a contract to allow you to communicate about what the project will accomplish and how it will do it; a system to support the structuring of that project; and a set of tools to represent the project in some abstract form so you can easily view and manipulate it. A more subtle issue, covered last, is the "attitude" of the method concerning that most precious of all resources: the enthusiasm of the team members.

      A communication contract

      For very small teams, you can keep in such close contact that communication happens naturally. This is the ideal situation. One of the great benefits of C++ is that it allows projects to be built with fewer team members, so this intimate style of communication can be maintained, which means communication overhead is lower and projects can be built more quickly.

      The situation is not always so ideal. There can come a point where there are too many team members or the project is too complex, and some form of communication discipline is necessary. A method provides a way to form a "contract" between the members of a team. You can view the concept of such a contract in two ways:

    1. Adversarial. The contract is an expression of suspicion between the parties involved, to make sure that no one gets out of line and everyone does what they're supposed to. The contract spells out the bad things that happen if they don't. If you are looking at any contract this way, you've already lost the game because you already think the other party is not trustworthy. If you can't trust someone, a contract won't ensure good behavior.
    2. Informational. The contract is an attempt to make sure everyone knows what we've agreed upon. It is an aid to communication so everyone can look at it and say, "Yes, that's what I think we're going to do." It's an expression of an agreement after the agreement has been made, just to clean up misunderstandings. This sort of contract can be minimalist and easy to read.

      A useful method will not foment an adversarial contract; the emphasis will be on communication.

      A structuring system

      The structure is the heart of your system. If a method accomplishes nothing else it must be able to tell programmers:

    1. What classes you need.
    2. How you hook them together to build a working system.

      A method generates these answers through a process that begins with an analysis of the problem and ends with some sort of representation of the classes, the system, and the messages passed between the classes in the system.

      Tools for representation

      The model should not be more complex than the system it represents. A good model presents an abstraction.

      You are certainly not constrained to using the representation tools that come with a particular method. You can make up your own to suit your needs. (For example, later in this chapter there's a suggested notation for use with a commercial word processor.) Following are guidelines for a useful notation:

    1. Include no more detail than necessary. Remember the "seven plus or minus two" rule of complexity. (You can only hold that many items in your mind at one moment.) Extra detail becomes baggage that must be maintained and costs money.
    2. You should be able to get as much information as you need by probing deeper into the representation levels. That is, levels can be created if necessary, hidden at higher levels of abstraction and made visible on demand.
    3. The notation should be as minimal as possible. "Too much magic causes software rot."
    4. System design and class design are separate issues. Classes are reusable tools, while systems are solutions to specific problems (although a system design, too, may be reusable). The notation should focus first on system design.
    5. Is a class design notation necessary? The expression of classes provided by the C++ language seems to be adequate for most situations. If a notation doesn't give you a significant boost over describing classes in their native language, then it's a hindrance.
    6. The notation should hide the implementation internals of the objects. Those are generally not important during design.
    7. Keep it simple. The analysis is the design. Basically, all you want to do in your method is discover your objects and how they connect with each other to form a system. If a method and notation require more from you, then you should question whether that method is spending your time wisely.

      Don't deplete the most important resource

      My friend Michael Wilk, after allowing that he came from academia and perhaps wasn't qualified to make a judgment (the type of preamble you hear from someone with a fresh perspective), observed that the most important resource that a project, team, or company has is enthusiasm. It seems that no matter how thorny the problem, how badly you've failed in the past, the primitiveness of your tools, or what the odds are, enthusiasm can overcome the obstacle.

      Unfortunately, various management techniques often do not consider enthusiasm at all, or, because it cannot easily be measured, consider it an "unimportant" factor, thinking that if enough management structure is in place, the project can be forced through. This sort of thinking has the effect of damping the enthusiasm of the team, because the members can feel like no more than a means to a company's profit motive, a cog. Once this happens a team member becomes an "employee," watching the clock and seeking interesting distractions.

      A method and management technique built upon motivation and enthusiasm as the most precious resources would be an interesting experiment indeed. At least, you should consider the effect that an OOP design method will have on the morale of your team members.

      "Required" reading

      Before you choose any method, it's helpful to gain perspective from those who are not trying to sell one. It's easy to adopt a method without really understanding what you want out of it or what it will do for you. Others are using it, which seems a compelling reason. However, humans have a strange little psychological quirk: If they want to believe something will solve their problems, they'll try it. (This is experimentation, which is good.) But if it doesn't solve their problems, they may redouble their efforts and begin to announce loudly what a great thing they've discovered. (This is denial, which is not good.) The assumption here may be that if you can get other people in the same boat, you won't be lonely, even if it's going nowhere.

      This is not to suggest that all methodologies go nowhere, but that you should be armed to the teeth with mental tools that help you stay in experimentation mode ("It's not working; let's try something else") and out of denial mode ("No, that's not really a problem. Everything's wonderful; we don't need to change"). I think the following books, read before you choose a method, will provide you with these tools.

      Software Creativity, by Robert Glass (Prentice-Hall, 1995). This is the best book I've seen that discusses perspective on the whole methodology issue. It's a collection of short essays and papers that Glass has written and sometimes acquired (P.J. Plauger is one contributor), reflecting his many years of thinking and study on the subject. They're entertaining and only long enough to say what's necessary; he doesn't ramble and lose your interest. He's not just blowing smoke, either; there are hundreds of references to other papers and studies. All programmers and managers should read this book before wading into the methodology mire.

      Peopleware, by Tom Demarco and Timothy Lister (Dorset House, 1987). Although they have backgrounds in software development, this book is about projects and teams in general. But the focus is on the people and their needs rather than the technology and its needs. They talk about creating an environment where people will be happy and productive, rather than deciding what rules those people should follow to be adequate components of a machine. This latter attitude, I think, is the biggest contributor to programmers smiling and nodding when XYZ method is adopted and then quietly doing whatever theyíve always done.

      Complexity, by M. Mitchell Waldrop (Simon & Schuster, 1992). This chronicles the coming together of a group of scientists from different disciplines in Santa Fe, New Mexico, to discuss real problems that the individual disciplines couldnít solve (the stock market in economics, the initial formation of life in biology, why people do what they do in sociology, etc.). By crossing physics, economics, chemistry, math, computer science, sociology, and others, a multidisciplinary approach to these problems is developing. But more importantly, a different way of thinking about these ultra-complex problems is emerging: Away from mathematical determinism and the illusion that you can write an equation that predicts all behavior and toward first observing and looking for a pattern and trying to emulate that pattern by any means possible. (The book chronicles, for example, the emergence of genetic algorithms.) This kind of thinking, I believe, is useful as we observe ways to manage more and more complex software projects.

      Scripting:
      a minimal method

      I'll start by saying this is not tried or tested anywhere. I make no promises; it's a starting point, a seed for other ideas, and a thought experiment, albeit one arrived at after a great deal of thought and a fair amount of reading and observation of myself and others in the process of development. It was inspired by a writing class I took called "Story Structure," taught by Robert McKee, primarily to aspiring and practicing screenwriters, but also to novelists and playwrights. It later occurred to me that programmers have a lot in common with that group: Our concepts ultimately end up expressed in some sort of textual form, and the structure of that expression is what determines whether the product is successful or not. There are a few amazingly well-told stories, many stories that are uninspired but competent and get the job done, and a lot of badly told stories, some of which don't get published. Of course, stories seem to want to be told while programs demand to be written.

      Writers have an additional constraint that does not always appear in programming: They generally work alone or possibly in groups of two. Thus they must be very economical with their time, and any method that does not bear significant fruit is discarded. Two of McKee's goals were to reduce the typical amount of time spent on a screenplay from one year to six months and to significantly increase the quality of the screenplays in the process. Similar goals are shared by software developers.

      Getting everyone to agree on anything is an especially tough part of the startup process of a project. The minimal nature of this system should win over even the most independent of programmers.

      Premises

      I'm basing the method described here on two significant premises, which you must carefully consider before you adopt the rest of the ideas:

    1. C++, unlike typical procedural languages (and most existing languages, for that matter), has many guards built into the language, along with language features that let you build in your own guards. These guards are intended to prevent the program you create from losing its structure, both during the process of creating it and over time, as the program is maintained.
    2. No matter how much analysis you do, there are some things about a system that won't reveal themselves until design time, and more things that won't reveal themselves until a program is up and running. Because of this, it's critical to move fairly quickly through analysis and design to implement a test of the proposed system. Because of Point 1, this is far safer than when using procedural languages, because the guards in C++ are instrumental in preventing the creation of "spaghetti code."

      This second point is worth emphasizing. Because of the history we've had with procedural languages, it is commendable that a team will want to proceed carefully and understand every minute detail before moving to design and implementation. Certainly, when creating a DBMS, it pays to understand a customer's needs thoroughly. But a DBMS is in a class of problems that is very well-posed and well-understood. The class of programming problem discussed in this chapter is of the "wild-card" variety, where it isn't simply re-forming a well-known solution, but instead involves one or more wild-card factors: elements where there is no well-understood previous solution, and research is necessary. Attempting to thoroughly analyze a wild-card problem before moving into design and implementation results in analysis paralysis because you don't have enough information to solve this kind of problem during the analysis phase. Solving such a problem requires iteration through the whole cycle, and that requires risk-taking behavior (which makes sense, because you're trying to do something new and the potential rewards are higher). It may seem like the risk is compounded by "rushing" into a preliminary implementation, but it can instead reduce the risk in a wild-card project because you're finding out early whether a particular design is viable.

      The goal of this method is to attack wild-card projects by producing the most rapid development of a proposed solution, so the design can be proved or disproved as early as possible. Your efforts will not be lost. It's often proposed that you "build one to throw away." With OOP, you may still throw part of it away, but because code is encapsulated into classes, you will inevitably produce some useful class designs and develop some worthwhile ideas about the system design during the first iteration that do not need to be thrown away. Thus, the first rapid pass at a problem not only produces critical information for the next analysis, design, and implementation iteration, it also creates a code foundation for that iteration.

      Another important feature of this method is support for brainstorming at the early part of a project. By keeping the initial document small and concise, it can be created in a few sessions of group brainstorming with a leader who dynamically creates the description. This not only solicits input from everyone, it also fosters initial buy-in and agreement by everyone on the team. Perhaps most importantly, it can kick off a project with a lot of enthusiasm (as noted previously, the most essential resource).

      Representation

      The writer's most valuable computer tool is the word processor, because it easily supports the structure of a document. With programming projects, the structure of the program is usually supported and described by some form of separate documentation. As the projects become more complex, the documentation is essential. This raises a classic problem, stated by Brooks:

      "A basic principle of data processing teaches the folly of trying to maintain independent files in synchronism .... Yet our practice in programming documentation violates our own teaching. We typically attempt to maintain a machine-readable form of a program and an independent set of human-readable documentation ...."

      A good tool will connect the code and its documentation.

      I consider it very important to use familiar tools and modes of thinking; the change to OOP is challenging enough by itself. Early OOP methodologies have suffered by using elaborate graphical notation schemes. You inevitably change your design a lot, so expressing it with a notation that's difficult to modify is a liability because you'll resist changing it to avoid the effort involved. Only recently have tools been appearing that manipulate these graphical notations. Tools for easy use of a design notation must already be in place before you can expect people to use a method. Combining this with the fact that documents are usually expected during the software design process, the most logical tool is a full-featured word processor. Virtually every company already has these in place (so there's no cost to trying this method), most programmers are familiar with them, and as programmers they are comfortable creating tools using the underlying macro language. This follows the spirit of C++, where you build on your existing knowledge and tool base rather than throwing it away.

      The mode of thinking used by this method also follows that spirit. Although a graphical notation is useful to express a design in a report, it is not fast enough to support brainstorming. However, everyone understands outlining, and most word processors have some sort of outlining mode that allows you to grab pieces of the outline and quickly move them around. This is perfect for rapid design evolution in an interactive brainstorming session. In addition, you can expand and collapse outlines to see various levels of granularity in the system. And (as described later), as you create the design, you create the design document, so a report on the state of the project can be produced with a process not unlike running a compiler.

      1. High concept

      Any system you build, no matter how complicated, has a fundamental purpose, the business that it's in, the basic need that it satisfies. If you can look past the user interface, the hardware- or system-specific details, the coding algorithms and the efficiency problems, you will eventually find the core of its being, simple and straightforward. Like the so-called high concept from a Hollywood movie, you can describe it in one or two sentences. This pure description is the starting point.

      The high concept is quite important because it sets the tone for your project; it's a mission statement. You won't necessarily get it right the first time (you may be developing the treatment or building the design before it becomes completely clear), but keep trying until it feels right. For example, in an air-traffic control system you may start out with a high concept focused on the system that you're building: "The tower program keeps track of the aircraft." But consider what happens when you shrink the system to a very small airfield; perhaps there's only a human controller or none at all. A more useful model won't concern the solution you're creating as much as it describes the problem: "Aircraft arrive, unload, service and reload, and depart."

      2. Treatment

      A treatment of a script is a summary of the story in one or two pages, a fleshing out of the high concept. The best way to develop the high concept and treatment for a computer system may be in a group situation with a facilitator who has writing ability. Ideas can be suggested in a brainstorming environment, while the facilitator tries to express the ideas on a computer that's networked with the group or projected on screen. The facilitator takes the role of a ghostwriter and doesn't judge the ideas but instead simply tries to make them clear and keep them flowing.

      The treatment becomes the jumping-off point for the initial object discovery and first rough cut at design, which can also be performed in a group setting with a facilitator.

      3. Structuring

      Structure is the key to the system. Without structure you have a random collection of meaningless events. With structure you have a story. The structure of a story is expressed through characters, which correspond to objects, and plot, which corresponds to system design.

      Organizing the system

      As mentioned earlier, the primary representation tool for this method is a sophisticated word processor with outlining facility.

      You start with level-1 sections for high concept, treatment, objects, and design. As the objects are discovered, they are placed as level-2 subsections under objects. Object interfaces are added as level-3 subsections under the specific type of object. If essential descriptive text comes up, it is placed as normal text under the appropriate subsection.

      Because this technique involves typing and outlining, with no drawing, the brainstorming process is not hindered by the speed of creating the representation.

      Characters: initial object discovery

      The treatment contains nouns and verbs. As you find these, the nouns will suggest classes, and the verbs will become either methods for those classes or processes in the system design. Although you may not be comfortable that you've found everything after this first pass, remember that it's an iterative process. You can add additional classes and methods at further stages and later design passes, as you understand the problem better. The point of this structuring is that you don't currently understand the problem, so don't expect the design to be revealed to you all at once.

      Start by simply moving through the treatment and creating a level-2 subsection in objects for each unique noun that you find. Take verbs that are clearly acting upon an object and place them as level-3 method subsections beneath the appropriate noun. Add the argument list (even if it's initially empty) and return type for each method. This will give you a rough cut and something to talk about and push around.

      If a class is inherited from another class, its level-2 subsection should be placed as close as possible after the base class, and its subsection name should indicate the inheritance relationship just as you would when writing the code: derived : public base. This allows the code to be properly generated.

      Although you can set your system up to express methods that are hidden from the public interface, the intent here is to create only the classes and their public interfaces; other elements are considered part of the underlying implementation and not the high-level design. If expressed, they should appear as text-level notes beneath the appropriate class.

      When decision points come up, use a modified Occam's Razor approach: Consider the choices and select the one that is simplest, because simple classes are almost always best. It's easy to add more elements to a class, but as time goes on, it's difficult to take them away.

      If you need to seed the process, look at the problem from a lazy programmer's standpoint: What objects would you like to magically appear to solve your problem? It's also helpful to have references on hand for the classes that are available and the various system design patterns, to clarify proposed classes or designs.

      You won't stay in the objects section the entire time; instead, you'll move back and forth between objects and system design as you analyze the treatment. Also, at any time you may want to write some normal text beneath any of the subsections as ideas or notes about a particular class or method.

      Plot: initial system design

      From the high concept and treatment, a number of "subplots" should be apparent. Often they may be as simple as "input, process, output," or "user interface, actions." Each subplot has its own level-2 subsection under design. Most stories follow one of a set of common plots; the OOP analogy of a common plot is called a "pattern." Refer to resources on OOP design patterns to aid in searching for plots.

      At this point, you're just trying to create a rough sketch of the system. During the brainstorming session, people in the group make suggestions about activities they think occur in the system, and each activity is recorded individually, without necessarily working to connect it to the whole. It's especially important to have the whole team, including mechanical design (if necessary), marketing, and managers, included in this session, not only so everyone is comfortable that the issues have been considered, but because everyone's input is valuable at this point.

      A subplot will have a set of stages or states that it moves through, conditions for moving between stages, and the actions involved in each transition. Each stage is given its own level-3 subsection under that particular subplot. The conditions and transitions can be described as text under the stage subhead. Ideally, you'll eventually (as the design iteration proceeds) be able to write the essentials of each subplot as the creation of objects and sending messages to them. This becomes the initial code body for that subplot.

      The design discovery and object discovery processes will stimulate each other, so you'll be adding subentries to both sections during the session.

      4. Development

      This is the initial conversion from the rough design to a compiling body of code that can be tested, and in particular one that will prove or disprove your design. This is not a one-pass process, but rather the beginning of a series of writes and rewrites, so the emphasis is on converting the document into a body of code in such a way that the document can be regenerated, incorporating any changes to the structure or associated prose in the code. This way, generating design documentation after coding begins (and the inevitable changes occur) becomes reasonably effortless, and the design document can become a tool for reporting on the progress of the project.

      Initial translation

      By using the standard section names objects and design at level-1 section headings, you can key your tools to lift out those sections and generate your header files from them. You perform different activities depending on what major section you're in and the level of subsection you're working on. The easiest approach may be to have your tool or macro break the document into pieces and work on each one appropriately.

      Each level-2 section in objects should have enough information in the section name (the name of the class and its base class, if any) to generate the class declaration automatically, and each level-3 subsection beneath the class name should have enough information in the section name (member function name, argument list, and return type) to generate the member function declaration. Your tool will simply move through these and create the class declarations.

      For simplicity, a single class declaration will appear in each header file. The best approach to naming the header files is probably to include the file name as tagged information in the level-2 section name for that class.

      Plotting can be more subtle. Each subplot may produce an independent function, called from inside main( ), or simply a section in main( ). Start with something that gets the job done; a more refined pattern may emerge in future iterations.

      Code generation

      Using automatic tools (most word-processor scripting tools are adequate for this),

    1. Generate a header file for each class described in your objects section, creating a class declaration for each one, with all the public interface functions and their associated description blocks, surrounding each with special tags that can be easily parsed later.
    2. Generate a header file for each subplot and copy its description as a commented block at the beginning of the file, followed by function declarations.
    3. Mark each subplot, class, and method with its outline heading level as a tagged, commented identifier: //#[1], //#[2], and so on. All generated files have document comments in specially identified blocks with tags. Class names and function declarations also retain comment markers. This way, a reversing tool can go through, extract all the information, and regenerate the source document, preferably in a document-description language like Rich Text Format (RTF).
    4. The interfaces and plots should be compilable at this point (but not linkable), so syntax checking can occur. This will ensure the high-level integrity of the design. The document can be regenerated from the correctly compiling files.
    5. At this point, two things can happen. If the design is still very early, it's probably easiest to work on the document (rather than the code) in brainstorming sessions, or on subparts of the document in groups responsible for them. However, if the design is complete enough, you can begin coding. If interface elements are added during coding, they must be tagged by the programmer along with tagged comments, so the regeneration program can use the new information to produce the document.

If you had the front end to a compiler, you could certainly do this for classes and functions automatically, but that's a big job and the language is evolving. Using explicit tags is fairly fail-safe, and commercial browsing tools can be used to verify that all public functions have made it into the document (that is, they were tagged).

5. Rewriting

This is the analogy of rewriting a screenplay to refine it and make it shine. In programming, it's the process of iteration. It's where your program goes from good to great, and where those issues that you didn't really understand in the first pass become clear. It's also where your classes can evolve from single-project usage to reusable resources.

From a tool standpoint, reversing the process is a bit more complicated. You want to be able to decompose the header files so they can be reintegrated into the design document, including all the changes that have been made during coding. Then, if any changes are made to the design in the design document, the header files must be completely rebuilt, without losing any of the work that was done to get the header file to compile in the first iteration. Thus, your tool must not only look for your tagged information to turn into section levels and text, it must also find, tag, and store the other information such as the #includes at the beginning of each file. If you keep in mind that the header file expresses the class design and that you must be able to regenerate the header from your design document, you'll be OK.

Also notice that the text-level notes and discussions, which were turned into tagged comments on the initial generation, have more than likely been modified by the programmer as the design evolved. It's essential that these are captured and put into their respective places, so the design document reflects the new information. This allows you to change that information, and it's carried back to the generated header files.

For the system design (main( ) and any supporting functions) you may want to capture the whole file, add section identifiers like A, B, C, and so on, as tagged comments (do not use line numbers, because these may change), and attach your section descriptions (which will then be carried back and forth into the main( ) file as tagged, commented text).

You have to know when to stop iterating the design. Ideally, you achieve target functionality and are in the process of refinement and addition of new features when the deadline comes along and forces you to stop and ship that version. (Remember, software is a subscription business.)

Logistics

Periodically, you'll want to get an idea of where the project is by reintegrating the document. This process can be painless if it's done over a network using automatic tools. Regularly integrating and maintaining the master design document is the responsibility of the project leader or manager, while teams or individuals are responsible for subparts of the document (that is, their code and comments).

Supplemental features, such as class diagrams, can be generated using third-party tools and automatically included in the document.

A current report can be generated at any time by simply "refreshing" the document. The state of all parts of the program can then be viewed; this also provides immediate updates for support groups, especially end-user documentation. The document is also critically valuable for rapid start-up of new team members.

A single document is more reasonable than all the documents produced by some analysis, design, and implementation methods. Although one smaller document is less impressive, it's "alive," whereas an analysis document, for example, is only valuable for a particular phase of the project and then rapidly becomes obsolete. It's hard to put a lot of effort into a document that you know will be thrown away.

Analysis and design

The object-oriented paradigm is a new and different way of thinking about programming, and many folks have trouble at first knowing how to approach a project. Now that you know that everything is supposed to be an object, you can create a "good" design, one that will take advantage of all the benefits that OOP has to offer.

Books on OOP analysis and design are coming out of the woodwork. Most of these books are filled with lots of long words, awkward prose, and important-sounding pronouncements. I come away thinking the book would be better as a chapter, or at most a very short book, and feeling annoyed that this process couldn't be described simply and directly. (It disturbs me that people who purport to specialize in managing complexity have such trouble writing clear and simple books.) After all, the whole point of OOP is to make the process of software development easier, and although it would seem to threaten the livelihood of those of us who consult because things are complex, why not make it simple? So, hoping I've built a healthy skepticism within you, I shall endeavor to give you my own perspective on analysis and design in as few paragraphs as possible.

Staying on course

While you're going through the development process, the most important issue is this: don't get lost. It's easy to do. Most of these methodologies are designed to solve the largest of problems. (This makes sense; these are the especially difficult projects that justify calling in that author as consultant, and justify the author's large fees.) Remember that most projects don't fit into that category, so you can usually have a successful analysis and design with a relatively small subset of what a methodology recommends. But some sort of process, no matter how limited, will generally get you on your way in a much better fashion than simply beginning to code.

That said, if you're looking at a methodology that contains tremendous detail and suggests many steps and documents, it's still difficult to know when to stop. Keep in mind what you're trying to discover:

  1. What are the objects? (How do you partition your project into its component parts?)
  2. What are their interfaces? (What messages do you need to be able to send to each object?)

If you come up with nothing more than the objects and their interfaces, then you can write a program. For various reasons you might need more descriptions and documents than this, but you can't really get away with any less.

The process can be undertaken in four phases, plus a phase 0 that is simply the initial commitment to using some kind of structure.

Phase 0: Let's make a plan

The first step is to decide what steps you're going to have in your process. It sounds simple (in fact, all of this sounds simple) and yet, often, people don't even get around to phase one before they start coding. If your plan is "let's jump in and start coding," fine. (Sometimes that's appropriate when you have a well-understood problem.) At least agree that this is the plan.

You might also decide at this phase that some additional process structure is necessary but not the whole nine yards. Understandably enough, some programmers like to work in "vacation mode," in which no structure is imposed on the process of developing their work: "It will be done when it's done." This can be appealing for a while, but I've found that having a few milestones along the way helps to focus and galvanize your efforts around those milestones instead of being stuck with the single goal of "finish the project." In addition, it divides the project into more bite-sized pieces and makes it seem less threatening.

When I began to study story structure (so that I will someday write a novel) I was initially resistant to the idea, feeling that when I wrote I simply let it flow onto the page. What I found was that when I wrote about computers the structure was simple enough that I didn't need to think much about it, but I was still structuring my work, albeit only semi-consciously in my head. So even if you think that your plan is to just start coding, you still go through the following phases while asking and answering certain questions.

Phase 1: What are we making?

In the previous generation of program design (procedural design), this would be called "creating the requirements analysis and system specification." These, of course, were places to get lost: intimidatingly-named documents that could become big projects in their own right. Their intention was good, however. The requirements analysis says "Make a list of the guidelines we will use to know when the job is done and the customer is satisfied." The system specification says "Here's a description of what the program will do (not how) to satisfy the requirements." The requirements analysis is really a contract between you and the customer (even if the customer works within your company or is some other object or system). The system specification is a top-level exploration into the problem and in some sense a discovery of whether it can be done and how long it will take. Since both of these will require consensus among people, I think it's best to keep them as bare as possible (ideally, lists and basic diagrams) to save time. You might have other constraints that require you to expand them into bigger documents.

It's necessary to stay focused on the heart of what you're trying to accomplish in this phase: determine what the system is supposed to do. The most valuable tool for this is a collection of what are called "use-cases." These are essentially descriptive answers to questions that start with "What does the system do if ..." For example, "What does the auto-teller do if a customer has just deposited a check within 24 hours and there's not enough in the account without the check to provide the desired withdrawal?" The use-case then describes what the auto-teller does in that case.

You try to discover a full set of use-cases for your system, and once you've done that you've got the core of what the system is supposed to do. The nice thing about focusing on use-cases is that they always bring you back to the essentials and keep you from drifting off into issues that aren't critical for getting the job done. That is, if you have a full set of use-cases you can describe your system and move on to the next phase. You probably won't get it all figured out perfectly at this phase, but that's OK. Everything will reveal itself in the fullness of time, and if you demand a perfect system specification at this point you'll get stuck.

It helps to kick-start this phase by describing the system in a few paragraphs and then looking for nouns and verbs. The nouns become the objects and the verbs become the methods in the object interfaces. You'll be surprised at how useful a tool this can be; sometimes it will accomplish the lion's share of the work for you.

Although it's a black art, at this point some kind of scheduling can be quite useful. You now have an overview of what you're building, so you'll probably be able to get some idea of how long it will take. A lot of factors come into play here: if you estimate a long schedule, the company might decide not to build it, or a manager might have already decided how long the project should take and will try to influence your estimate. But it's best to have an honest schedule from the beginning and deal with the tough decisions early. There have been a lot of attempts to come up with accurate scheduling techniques (like techniques to predict the stock market), but probably the best approach is to rely on your experience and intuition. Get a gut feeling for how long it will really take, then double that and add 10 percent. Your gut feeling is probably correct; you can get something working in that time. The "doubling" will turn that into something decent, and the 10 percent will deal with final polishing and details. However you want to explain it, and regardless of the moans and manipulations that happen when you reveal such a schedule, it just seems to work out that way.

Phase 2: How will we build it?

In this phase you must come up with a design that describes what the classes look like and how they will interact. A useful diagramming tool that has evolved over time is the Unified Modeling Language (UML). You can get the specification for UML at www.rational.com. UML can also be helpful as a descriptive tool during phase 1, and some of the diagrams you create there will probably show up unmodified in phase 2. You don't need to use UML, but it can be helpful, especially if you want to put a diagram up on the wall for everyone to ponder, which is a good idea. An alternative to UML is a textual description of the objects and their interfaces (as I described in Thinking in C++), but this can be limiting.

The most successful consulting experience I've had in coming up with an initial design involved standing in front of a team, who hadn't built an OOP project before, and drawing objects on a whiteboard. We talked about how the objects should communicate with each other, and erased some of them and replaced them with other objects. The team (who knew what the project was supposed to do) actually created the design; they "owned" the design rather than having it given to them. All I was doing was guiding the process by asking the right questions, trying out the assumptions and taking the feedback from the team to modify those assumptions. The true beauty of the process was that the team learned how to do object-oriented design not by reviewing abstract examples, but by working on the one design that was most interesting to them at that moment: theirs.

You'll know you're done with phase 2 when you have described the objects and their interfaces. Well, most of them – there are usually a few that slip through the cracks and don't make themselves known until phase 3. But that's OK. All you are concerned with is that you eventually discover all of your objects. It's nice to discover them early in the process but OOP provides enough structure so that it's not so bad if you discover them later.

Phase 3: Let's build it!

If you're reading this book you're probably a programmer, so now we're at the part you've been trying to get to. By following a plan – no matter how simple and brief – and coming up with design structure before coding, you'll discover that things fall together far more easily than if you dive in and start hacking, and this provides a great deal of satisfaction. Getting code to run and do what you want is fulfilling, even like some kind of drug if you look at the obsessive behavior of some programmers. But it's my experience that coming up with an elegant solution is deeply satisfying at an entirely different level; it feels closer to art than technology. And elegance always pays off; it's not a frivolous pursuit. Not only does it give you a program that's easier to build and debug, but it's also easier to understand and maintain, and that's where the financial value lies.

After you build the system and get it running, it's important to do a reality check, and here's where the requirements analysis and system specification come in. Go through your program and make sure that all the requirements are checked off, and that all the use-cases work the way they're described. Now you're done. Or are you?

Phase 4: Iteration

This is the point in the development cycle that has traditionally been called "maintenance," a catch-all term that can mean everything from "getting it to work the way it was really supposed to in the first place" to "adding features that the customer forgot to mention before" to the more traditional "fixing the bugs that show up" and "adding new features as the need arises." So many misconceptions have been applied to the term "maintenance" that it has taken on a slightly deceiving quality, partly because it suggests that you've actually built a pristine program and that all you need to do is change parts, oil it and keep it from rusting. Perhaps there's a better term to describe what's going on.

The term is iteration. That is, "You won't get it right the first time, so give yourself the latitude to learn and to go back and make changes." You might need to make a lot of changes as you learn and understand the problem more deeply. The elegance you'll produce if you iterate until you've got it right will pay off, both in the short and the long run.

What it means to "get it right" isn't just that the program works according to the requirements and the use-cases. It also means that the internal structure of the code makes sense to you, and feels like it fits together well, with no awkward syntax, oversized objects or ungainly exposed bits of code. In addition, you must have some sense that the program structure will survive the changes that it will inevitably go through during its lifetime, and that those changes can be made easily and cleanly. This is no small feat. You must not only understand what you're building, but also how the program will evolve (what I call the vector of change). Fortunately, object-oriented programming languages are particularly adept at supporting this kind of continuing modification – the boundaries created by the objects are what tend to keep the structure from breaking down. They are also what allow you to make changes that would seem drastic in a procedural program without causing earthquakes throughout your code. In fact, support for iteration might be the most important benefit of OOP.

With iteration, you create something that at least approximates what you think you're building, and then you kick the tires, compare it to your requirements and see where it falls short. Then you can go back and fix it by redesigning and re-implementing the portions of the program that didn't work right. You might actually need to solve the problem, or an aspect of the problem, several times before you hit on the right solution. (A study of Design Patterns, described in Chapter 16, is usually helpful here.)

Iteration also occurs when you build a system, see that it matches your requirements and then discover it wasn't actually what you wanted. When you see the system, you realize you want to solve a different problem. If you think this kind of iteration is going to happen, then you owe it to yourself to build your first version as quickly as possible so you can find out if it's what you want.

Iteration is closely tied to incremental development. Incremental development means that you start with the core of your system and implement it as a framework upon which to build the rest of the system piece by piece. Then you start adding features one at a time. The trick to this is in designing a framework that will accommodate all the features you plan to add to it. (See Chapter 16 for more insight into this issue.) The advantage is that once you get the core framework working, each feature you add is like a small project in itself rather than part of a big project. Also, new features that are incorporated later in the development or maintenance phases can be added more easily. OOP supports incremental development because if your program is designed well, your increments will turn out to be discrete objects or groups of objects.

Plans pay off

Of course you wouldn't build a house without a lot of carefully drawn plans. If you build a deck or a dog house, your plans won't be so elaborate but you'll still probably start with some kind of sketches to guide you on your way. Software development has gone to extremes. For a long time, people didn't have much structure in their development, but then big projects began failing. In reaction, we ended up with methodologies that had an intimidating amount of structure and detail. These were too scary to use – it looked like you'd spend all your time writing documents and no time programming. (This was often the case.) I hope that what I've shown you here suggests a middle path – a sliding scale. Use an approach that fits your needs (and your personality). No matter how minimal you choose to make it, some kind of plan will make a big improvement in your project as opposed to no plan at all. Remember that, by some estimates, over 50 percent of projects fail.

Other methods

There are currently a large number (more than 20) of formal methods available for you to choose from. Some are not entirely independent because they share fundamental ideas, but at some higher level they are all unique. Because at the lowest levels most of the methods are constrained by the default behavior of the language, each method would probably suffice for a simple project. The true benefit is claimed to be at the higher levels; one method may excel at the design of real-time hardware controllers, but that method may not fit the design of an archival database as easily.

Each approach has its cheerleading squad, but before you worry too much about a large-scale method, you should understand the language basics a little better, to get a feel for how a method fits your particular style, or whether you even need a method at all. The following descriptions of three of the most popular methods are mainly for flavor, not comparison shopping. If you want to learn more about methods, there are many books and courses available.

Booch

The Booch method is one of the original, most basic, and most widely referenced. Because it was developed early, it was meant to be applied to a variety of programming problems. It focuses on the unique features of OOP: classes, methods, and inheritance. The steps are as follows:

    1. Identify classes and objects at a certain level of abstraction. This is predictably a small step. You state the problem and solution in natural language and identify key features such as nouns that will form the basis for classes. If you're in the fireworks business, you may want to identify Workers, Firecrackers, and Customers; more specifically you'll need Chemists, Assemblers, and Handlers; AmateurFirecrackers and ProfessionalFirecrackers; Buyers and Spectators. Even more specifically, you could identify YoungSpectators, OldSpectators, TeenageSpectators, and ParentSpectators.
    2. Identify their semantics. Define classes at an appropriate level of abstraction. If you plan to create a class, you should identify that class's audience properly. For example, if you create a class Firecracker, who is going to observe it, a Chemist or a Spectator? The former will want to know what chemicals go into the construction, and the latter will respond to the colors and shapes released when it explodes. If your Chemist requests a firecracker's primary color-producing chemicals, it had better not get the reply, "Some really cool greens and reds." Similarly, a Spectator would be puzzled at a Firecracker that spouted only chemical equations when it was lit. Perhaps your program is for a vertical market, and both Chemists and Spectators will use it; in that case, your Firecracker will have both objective and subjective attributes, and will be able to appear in the appropriate guise for the observer.
    3. Identify relationships between them (CRC cards). Define how the classes interact with other classes. A common method for tabulating the information about each class uses the Class, Responsibility, Collaboration (CRC) card. This is a small card (usually an index card) on which you write the state variables for the class, the responsibilities it has (i.e., the messages it gives and receives), and references to the other classes with which it interacts. Why an index card? The reasoning is that if you can't fit all you need to know about a class on a small card, the class is too complex. The ideal class should be understood at a glance; index cards are not only readily available, they also happen to hold what most people consider a reasonable amount of information. A solution that doesn't involve a major technical innovation is one that's available to everyone (like the document structuring in the scripting method described earlier in this chapter).
    4. Implement the classes. Now that you know what to do, jump in and code it. In most projects the coding will affect the design.
    5. Iterate the design. The design process up to this point has the feeling of the classic waterfall method of program development. Now it diverges. After a preliminary pass to see whether the key abstractions allow the classes to be separated cleanly, iterations of the first three steps may be necessary. Booch writes of a "round-trip gestalt design process." Having a gestalt view of the program should not be impossible if the classes truly reflect the natural language of the solution. Perhaps the most important thing to remember is that by default – by definition, really – if you modify a class its super- and subclasses will still function. You need not fear modification; it cannot break the program, and any change in the outcome will be limited to subclasses and/or specific collaborators of the class you change. A glance at your CRC card for the class will probably be the only clue you need to verify the new version.
Responsibility-Driven Design (RDD)

This method also uses CRC cards. Here, as the name implies, the cards focus on delegation of responsibilities rather than appearance. To illustrate, the Booch method might produce an Employee-BankEmployee-BankManager hierarchy; in RDD this might come out Manager-FinanceManager-BankManager. The bank manager's primary responsibilities are managerial, so the hierarchy reflects that.

More formally, RDD involves the following:

    1. Data or state. A description of the data or state variables for each class.
    2. Sinks and sources. Identification of data sinks and sources, classes that process or generate data.
    3. Observer or view. View or observer classes that separate hardware dependencies.
    4. Facilitator or helper. Facilitator or helper classes, such as a linked list, that contain little or no state information and simply help other classes to function.
Object Modeling Technique (OMT)

Object Modeling Technique (OMT) adds one more level of complexity to the process. Booch's method emphasizes the fundamental appearance of classes and defines them simply as outgrowths of the natural language solution. RDD takes that one step further by emphasizing the class responsibility more than its appearance. OMT describes not only the classes but various states of the system using detailed diagramming, as follows:

    1. Object model, "what," object diagram. The object model is similar to that produced by Booch's method and RDD. Object classes are connected by responsibilities.
    2. Dynamic model, "when," state diagram. The dynamic model describes time-dependent states of the system. Different states are connected by transitions. An example that contains time-dependent states is a real-time sensor that collects data from the outside world.
    3. Functional model, "how," data flow diagram. The functional model traces the flow of data. The theory is that because the real work at the lowest level of the program is accomplished using procedures, the low-level behavior of the program is best understood by diagramming the data flow rather than by diagramming its objects.
Why C++ succeeds

Part of the reason C++ has been so successful is that the goal was not just to turn C into an OOP language (although it started that way), but to solve many other problems facing developers today, especially those who have large investments in C. Traditionally, OOP languages have suffered from the attitude that you should dump everything you know and start from scratch with a new set of concepts and a new syntax, arguing that it's better in the long run to lose all the old baggage that comes with procedural languages. This may be true, in the long run. But in the short run, a lot of that baggage was valuable. The most valuable elements may not be the existing code base (which, given adequate tools, could be translated), but instead the existing mind base. If you're a functioning C programmer and must drop everything you know about C in order to adopt a new language, you immediately become nonproductive for many months, until your mind fits around the new paradigm. Whereas if you can leverage your existing C knowledge and expand upon it, you can continue to be productive with what you already know while moving into the world of object-oriented programming. As everyone has his/her own mental model of programming, this move is messy enough as it is without the added expense of starting with a new language model from square one. So the reason for the success of C++, in a nutshell, is economic: It still costs to move to OOP, but C++ costs a lot less.

The goal of C++ is improved productivity. This productivity comes in many ways, but the language is designed to aid you as much as possible, while hindering you as little as possible with arbitrary rules or any requirement that you use a particular set of features. The reason C++ is successful is that it is designed with practicality in mind: Decisions are based on providing the maximum benefits to the programmer.

A better C

You get an instant win even if you continue to write C code because C++ has closed the holes in the C language and provides better type checking and compile-time analysis. You're forced to declare functions so the compiler can check their use. The preprocessor has virtually been eliminated for value substitution and macros, which removes a set of difficult-to-find bugs. C++ has a feature called references that allows more convenient handling of addresses for function arguments and return values. The handling of names is improved through function overloading, which allows you to use the same name for different functions. Namespaces also improve the control of names. There are numerous other small features that improve the safety of C.

You're already on the learning curve

The problem with learning a new language is productivity: No company can afford to suddenly lose a productive software engineer because she's learning a new language. C++ is an extension to C, not a completely new syntax and programming model. It allows you to continue creating useful code, applying the features gradually as you learn and understand them. This may be one of the most important reasons for the success of C++.

In addition, all your existing C code is still viable in C++, but because the C++ compiler is pickier, you'll often find hidden errors when recompiling the code.

Efficiency

Sometimes it is appropriate to trade execution speed for programmer productivity. A financial model, for example, may be useful for only a short period of time, so it's more important to create the model rapidly than to execute it rapidly. However, most applications require some degree of efficiency, so C++ always errs on the side of greater efficiency. Because C programmers tend to be very efficiency-conscious, this is also a way to ensure they won't be able to argue that the language is too fat and slow. A number of features in C++ are intended to allow you to tune for performance when the generated code isn't efficient enough.

Not only do you have the same low-level control as in C (and the ability to directly write assembly language within a C++ program), but anecdotal evidence suggests that the program speed for an object-oriented C++ program tends to be within ±10% of a program written in C, and often much closer. The design produced for an OOP program may actually be more efficient than the C counterpart.

Systems are easier to express and understand

Classes designed to fit the problem tend to express it better. This means that when you write the code, you're describing your solution in the terms of the problem space ("put the grommet in the bin") rather than the terms of the computer, which is the solution space ("set the bit in the chip that means that the relay will close"). You deal with higher-level concepts and can do much more with a single line of code.

The other benefit of this ease of expression is maintenance, which (if reports can be believed) takes a huge portion of the cost over a program's lifetime. If a program is easier to understand, then it's easier to maintain. This can also reduce the cost of creating and maintaining the documentation.

Maximal leverage with libraries

The fastest way to create a program is to use code that's already written: a library. A major goal in C++ is to make library use easier. This is accomplished by casting libraries into new data types (classes), so bringing in a library is adding a new data type to the language. Because the compiler takes care of how the library is used – guaranteeing proper initialization and cleanup, ensuring functions are called properly – you can focus on what you want the library to do, not how you have to do it.

Because names can be sequestered to portions of your program, you can use as many libraries as you want without the kinds of name clashes you'd run into with C.

Source-code reuse with templates

There is a significant class of types that require source-code modification in order to reuse them effectively. The template performs the source code modification automatically, making it an especially powerful tool for reusing library code. A type you design using templates will work effortlessly with many other types. Templates are especially nice because they hide the complexity of this type of code reuse from the client programmer.

Error handling

Error handling in C is a notorious problem, and one that is often ignored – finger-crossing is usually involved. If you're building a large, complex program, there's nothing worse than having an error buried somewhere with no vector telling you where it came from. C++ exception handling (the subject of Chapter 16) is a way to guarantee that an error is noticed and that something happens as a result.

Programming in the large

Many traditional languages have built-in limitations to program size and complexity. BASIC, for example, can be great for pulling together quick solutions for certain classes of problems, but if the program gets more than a few pages long or ventures out of the normal problem domain of that language, it's like trying to run through an ever-more viscous solution. C, too, has these limitations. For example, when a program gets beyond perhaps 50,000 lines of code, name collisions start to become a problem. In short, you run out of function and variable names. Another particularly bad problem is the little holes in the C language – errors can get buried in a large program that are extremely difficult to find.

There's no clear line that tells when your language is failing you, and even if there were, you'd ignore it. You don't say, "My BASIC program just got too big; I'll have to rewrite it in C!" Instead, you try to shoehorn a few more lines in to add that one extra feature. So the extra costs come creeping up on you.

C++ is designed to aid programming in the large, that is, to erase those creeping-complexity boundaries between a small program and a large one. You certainly don't need to use OOP, templates, namespaces, and exception handling when you're writing a hello-world-class utility program, but those features are there when you need them. And the compiler is aggressive about ferreting out bug-producing errors for small and large programs alike.

Strategies for transition

If you buy into OOP, your next question is probably, "How can I get my manager/colleagues/department/peers to start using objects?" Think about how you – one independent programmer – would go about learning to use a new language and a new programming paradigm. You've done it before. First comes education and examples; then comes a trial project to give you a feel for the basics without doing anything too confusing; then you try to do a "real world" project that actually does something useful. Throughout your first projects you continue your education by reading, asking questions of gurus, and trading hints with friends. In essence, this is the approach many authors suggest for the switch from C to C++. Switching an entire company will of course introduce certain group dynamics, but it will help at each step to remember how one person would do it.

Stepping up to OOP

Here are some guidelines to consider when making the transition to OOP and C++:

1. Training

The first step is some form of education. Remember the company's investment in plain C code, and try not to throw it all into disarray for 6 to 9 months while everyone puzzles over how multiple inheritance works. Pick a small group for indoctrination, preferably one composed of people who are curious, work well together, and can function as their own support network while they're learning C++.

An alternative approach that is sometimes suggested is the education of all company levels at once, including overview courses for strategic managers as well as design and programming courses for project builders. This is especially good for smaller companies making fundamental shifts in the way they do things, or at the division level of larger companies. Because the cost is higher, however, some may choose to start with project-level training, do a pilot project (possibly with an outside mentor), and let the project team become the teachers for the rest of the company.

2. Low-risk project

Try a low-risk project first and allow for mistakes. Once you've gained some experience, you can either seed other projects from members of this first team or use the team members as an OOP technical support staff. This first project may not work right the first time, so it should not be very important in the grand scheme of things. It should be simple, self-contained, and instructive; this means that it should involve creating classes that will be meaningful to the other programmers in the company when they get their turn to learn C++.

3. Model from success

Seek out examples of good object-oriented design before starting from scratch. There's a good probability that someone has solved your problem already, and if they haven't solved it exactly you can probably apply what you've learned about abstraction to modify an existing design to fit your needs. This is the general concept of design patterns.

4. Use existing class libraries

The primary economic motivation for switching to C++ is the easy use of existing code in the form of class libraries; the shortest application development cycle will result when you don't have to write anything but main( ) yourself. However, some new programmers don't understand this, are unaware of existing class libraries, or through fascination with the language desire to write classes that may already exist. Your success with OOP and C++ will be optimized if you make an effort to seek out and reuse other people's code early in the transition process.

5. Don't rewrite existing code in C++

Although compiling your C code in C++ usually produces (sometimes great) benefits by finding problems in the old code, it is not usually the best use of your time to take existing, functional code and rewrite it in C++. There are incremental benefits, especially if the code is slated for reuse. But chances are you aren't going to see the dramatic increases in productivity that you hope for in your first few projects unless that project is a new one. C++ and OOP shine best when taking a project from concept to reality.

Management obstacles

If you're a manager, your job is to acquire resources for your team, to overcome barriers to your team's success and in general to try to provide the most productive and enjoyable environment so your team is most likely to perform those miracles that are always being asked of you. Moving to C++ falls in all three of these categories, and it would be wonderful if it didn't cost you anything as well. Although it is arguably cheaper than the OOP alternatives for a team of C programmers (and probably for programmers in other procedural languages), it isn't free, and there are obstacles you should be aware of before trying to sell the move to C++ within your company and embarking on the move itself.

Startup costs

The cost is more than just the acquisition of a C++ compiler. Your medium- and long-term costs will be minimized if you invest in training (and possibly mentoring for your first project) and also if you identify and purchase class libraries that solve your problem rather than trying to build those libraries yourself. These are hard-money costs that must be factored into a realistic proposal. In addition, there are the hidden costs in loss of productivity while learning a new language and possibly a new programming environment. Training and mentoring can certainly minimize these, but team members must overcome their own struggles to understand the issues. During this process they will make more mistakes (this is a feature, because acknowledged mistakes are the fastest path to learning) and be less productive. Even then, with some types of programming problems, the right classes, and the right development environment, it's possible to be more productive while you're learning C++ (even considering that you're making more mistakes and writing fewer lines of code per day) than if you'd stayed with C.

Performance issues

A common question is, "Doesn't OOP automatically make my programs a lot bigger and slower?" The answer is, "It depends." Most traditional OOP languages were designed with experimentation and rapid prototyping in mind rather than lean-and-mean operation. Thus, they virtually guaranteed a significant increase in size and decrease in speed. C++, however, is designed with production programming in mind. When your focus is on rapid prototyping, you can throw together components as fast as possible while ignoring efficiency issues. If you're using any third-party libraries, these are usually already optimized by their vendors; in any case it's not an issue while you're in rapid-development mode. When you have a system you like, if it's small and fast enough, then you're done. If not, you begin tuning with a profiling tool, looking first for speedups that can be done with simple applications of built-in C++ features. If that doesn't help, you look for modifications that can be made in the underlying implementation so no code that uses a particular class needs to be changed. Only if nothing else solves the problem do you need to change the design. The fact that performance in that portion of the design is so critical is an indicator that it must be part of the primary design criteria. You have the benefit of finding this out early through rapid prototyping.

As mentioned earlier in this chapter, the number that is most often given for the difference in size and speed between C and C++ is ±10%, and often much closer to par. You may actually get a significant improvement in size and speed for C++ over C because the design you make for C++ could be quite different from the one you'd make for C.

The evidence for size and speed comparisons between C and C++ is so far all anecdotal and is likely to remain so. Regardless of the number of people who suggest that a company try the same project using C and C++, no company is likely to waste money that way, unless it's very big and interested in such research projects. Even then it seems like the money could be better spent. Almost universally, programmers who have moved from C (or some other procedural language) to C++ have had the personal experience of a great acceleration in their programming productivity, and that's the most compelling argument you can find.

      Common design errors

      When you start your team into OOP and C++, programmers will typically go through a series of common design errors. This often happens because of too little feedback from experts during the design and implementation of early projects, since no experts have yet been developed within the company. It's easy to feel that you understand OOP too early in the cycle and go off on a bad tangent; something that's obvious to someone experienced with the language may be a subject of great internal debate for a novice. Much of this trauma can be skipped by using an outside expert for training and mentoring.

      Summary

      This chapter attempts to give you a feel for the broad issues of object-oriented programming and C++, including why OOP is different, and why C++ in particular is different; concepts of OOP methods and why you should (or should not) use one; a suggestion for a minimal method that I've developed to allow you to get started on an OOP project with minimal overhead; discussions of other methods; and finally the kinds of issues you will encounter when moving your own company to OOP and C++.

      OOP and C++ may not be for everyone. It's important to evaluate your own needs and decide whether C++ will optimally satisfy those needs, or if you might be better off with another programming system. If you know that your needs will be very specialized for the foreseeable future and if you have specific constraints that may not be satisfied by C++, then you owe it to yourself to investigate the alternatives. Even if you eventually choose C++ as your language, you'll at least understand what the options were and have a clear vision of why you took that direction.

    2: Making & using objects

    This chapter will introduce enough of the concepts of C++ and program construction to allow you to write and run a simple object-oriented program. In the following chapter we will cover the basic syntax of C & C++ in detail.

    Classes that someone else has created are often packaged into a library. This chapter uses the iostream library of classes, which comes with all C++ implementations. Iostreams are a very useful way to read from files and the keyboard, and to write to files and the display. After covering the basics of building a program in C and C++, iostreams will be used to show how easy it is to utilize a pre-defined library of classes.

    To create your first program you must understand the tools used to build applications.

    The process of language translation

    All computer languages are translated from something that tends to be easy for a human to understand (source code) into something that is executed on a computer (machine instructions). Traditionally, translators fall into two classes: interpreters and compilers.

    Interpreters

    An interpreter translates source code (written in the programming language) into activities (which may comprise groups of machine instructions) and immediately executes those activities. BASIC is the most popular interpreted language. BASIC interpreters translate and execute one line at a time, and then forget the line has been translated. This makes them slow, since they must re-translate any repeated code. More modern interpreters translate the entire program into an intermediate language that is then executed by a much faster interpreter.

    Interpreters have many advantages. The transition from writing code to executing code is almost immediate, and the source code is always available so the interpreter can be much more specific when an error occurs. The benefits often cited for interpreters are ease of interaction and rapid development (but not execution) of programs.

    Interpreters usually have severe limitations when building large projects. The interpreter (or a reduced version) must always be in memory to execute the code, and even the fastest interpreter may introduce unacceptable speed restrictions. Most interpreters require that the complete source code be brought into the interpreter all at once. Not only does this introduce a space limitation, it can also cause more difficult bugs if the language doesn't provide facilities to localize the effect of different pieces of code.

    Compilers

    A compiler translates source code directly into assembly language or machine instructions. This is an involved process, and usually takes several steps. The transition from writing code to executing code is significantly longer with a compiler.

    Depending on the acumen of the compiler writer, programs generated by a compiler tend to require much less space to run, and run much more quickly. Although size and speed are probably the most often cited reasons for using a compiler, in many situations they arenít the most important reasons. Some languages (such as C) are designed to allow pieces of a program to be compiled independently. These pieces are eventually combined into a final executable program by a program called the linker. This is called separate compilation.

    Separate compilation has many benefits. A program that, taken all at once, would exceed the limits of the compiler or the compiling environment can be compiled in pieces. Programs can be built and tested a piece at a time. Once a piece is working, it can be saved and forgotten. Collections of tested and working pieces can be combined into libraries for use by other programmers. As each piece is created, the complexity of the other pieces is hidden. All these features support the creation of large programs.

    Compiler debugging features have improved significantly. Early compilers only generated machine code, and the programmer inserted print statements to see what was going on. This is not always effective. Recent compilers can insert information about the source code into the executable program. This information is used by powerful source-level debuggers to show exactly what is happening in a program by tracing its progress through the source code.

    Some compilers tackle the compilation-speed problem by performing in-memory compilation. Most compilers work with files, reading and writing them in each step of the compilation process. In-memory compilers keep the program in RAM. For small programs, this can seem as responsive as an interpreter.

    The compilation process

    If you are going to create large programs, you need to understand the steps and tools in the compilation process. Some languages (C and C++, in particular) start compilation by running a preprocessor on the source code. The preprocessor is a simple program that replaces patterns in the source code with other patterns the programmer has defined (using preprocessor directives). Preprocessor directives are used to save typing and to increase the readability of the code. (Later in the book, you'll learn how the design of C++ is meant to discourage much of the use of the preprocessor, since it can cause subtle bugs.) The preprocessed code is written to an intermediate file.

    Compilers often do their work in two passes. The first pass parses the preprocessed code. The compiler breaks the source code into small units and organizes it into a structure called a tree. In the expression "A + B" the elements 'A', '+' and 'B' are leaves on the parse tree. The parser generates a second intermediate file containing the parse tree.

    A global optimizer is sometimes used between the first and second passes to produce smaller, faster code.

    In the second pass, the code generator walks through the parse tree and generates either assembly language code or machine code for the nodes of the tree. If the code generator creates assembly code, the assembler is run. The end result in both cases is an object module (a file with an extension of .o or .obj). A peephole optimizer is sometimes used in the second pass to look for pieces of code containing redundant assembly-language statements.

    The use of the word "object" to describe chunks of machine code is an unfortunate artifact. The word came into use before anyone thought of object-oriented programming. "Object" is used in the same sense as "goal" when discussing compilation, while in object-oriented programming it means "a thing with boundaries."

    The linker combines a list of object modules into an executable program that can be loaded and run by the operating system. When a function in one object module makes a reference to a function or variable in another object module, the linker resolves these references. The linker brings in a special object module to perform start-up activities.

    The linker can also search through special files called libraries. A library contains a collection of object modules in a single file. A library is created and maintained by a program called a librarian.

    Static type checking

    The compiler performs type checking during the first pass. Type checking tests for the proper use of arguments in functions, and prevents many kinds of programming errors. Since type checking occurs during compilation rather than when the program is running, it is called static type checking.

    Some object-oriented languages (notably Smalltalk) perform all type checking at run-time (dynamic type checking). Dynamic type checking is less restrictive during development, since you can send any message to any object (the object figures out, at run time, whether the message is an error). It also adds overhead to program execution and leaves the program open for run-time errors that can only be detected through exhaustive testing.

    C++ uses static type checking because the language cannot assume any particular run-time support for bad messages. Static type checking notifies the programmer about misuse of types right away, and maximizes execution speed. As you learn C++ you will see that most of the language design decisions favor the same kind of high-speed, robust, production-oriented programming the C language is famous for.

    You can disable static type checking. You can also do your own dynamic type checking; you just need to write the code.

    Tools for separate compilation

    Separate compilation is particularly important when building large projects. In C and C++, a program can be created in small, manageable, independently tested pieces. To create a program with multiple files, functions in one file must access functions and data in other files. When compiling a file, the C or C++ compiler must know about the functions and data in the other files: their names and proper usage. The compiler ensures the functions and data are used correctly. This process of "telling the compiler" the names of external functions and data and what they should look like is called declaration. Once you declare a function or variable, the compiler knows how to check to make sure it is used properly.

    At the end of the compilation process, the executable program is constructed from the object modules and libraries. The compiler produces object modules from the source code. These are files with extensions of .o or .obj, and should not be confused with object-oriented programming "objects."

    The linker must go through all the object modules and resolve all the external references, i.e., make sure that all the external functions and data you claimed existed via declarations during compilation actually exist.

    Declarations vs. definitions

    A declaration tells the compiler "this function or this piece of data exists somewhere else, and here is what it should look like." A definition tells the compiler: "make this piece of data here" or "make this function here." You can declare a piece of data or a function in many different places, but there must only be one definition in C and C++. When the linker is uniting all the object modules, it will complain if it finds more than one definition for the same function or piece of data.

    Almost all C/C++ programs require declarations. Before you can write your first program, you need to understand the proper way to write a declaration.

    Function declaration syntax

    A function declaration in Standard C and C++ gives the function name, the argument types passed to the function, and the return value of the function. For example, here is a declaration for a function called func1 that takes two integer arguments (integers are denoted in C/C++ with the keyword int) and returns an integer:

    int func1(int,int);

    C programmers should note that this is different from function declarations in K&R C. The first keyword you see is the return value, all by itself: int. The arguments are enclosed in parentheses after the function name, in the order they are used. The semicolon indicates the end of a statement; in this case, it tells the compiler "that's all -- there is no function definition here!"

    C/C++ declarations attempt to mimic the form of the item's use. For example, if A is another integer the above function might be used this way:

    A = func1(2,3);

    Since func1( ) returns an integer, the C or C++ compiler will check the use of func1( ) to make sure that A is an integer and both arguments are integers.

    In C and C++, arguments in function declarations may have names. The compiler ignores the names but they can be helpful as mnemonic devices for the user. For example, we can declare func1( ) in a different fashion that has the same meaning:

    int func1(int length, int width);

    A gotcha

    There is a significant difference between C (both Standard C and K&R) and C++ for functions with empty argument lists. In C, the declaration:

    int func2();

    means "a function with any number and type of arguments." Because this defeats type checking, C++ instead takes it to mean "a function with no arguments." If you declare a function with an empty argument list in C++, remember it's different from what you may be used to in C.

    Function definitions

    Function definitions look like function declarations except that they have bodies. A body is a collection of statements enclosed in braces. Braces denote the beginning and ending of a block of code; they have the same purpose as the begin and end keywords in Pascal. To give func1( ) a definition with an empty body (a body containing no code), write this:

    int func1(int length, int width) { }

    Notice that in the function definition, the braces replace the semicolon. Since braces surround a statement or group of statements, you don't need a semicolon. Notice also that the arguments in the function definition must have names if you want to use the arguments in the function body (since they are never used here, they are optional).

    Function definitions are explored later in the book.

    Variable declaration syntax

    The meaning attributed to the phrase "variable declaration" has historically been confusing and contradictory, and it's important that you understand the correct definition so you can read code properly. A variable declaration tells the compiler what a variable looks like. It says "I know you haven't seen this name before, but I promise it exists someplace, and it's a variable of X type."

    In a function declaration, you give a type (the return value), the function name, the argument list, and a semicolon. That's enough for the compiler to figure out that it's a declaration, and what the function should look like. By inference, a variable declaration might be a type followed by a name. For example:

    int A;

    could declare the variable A as an integer, using the above logic. Here's the conflict: there is enough information in the above code for the compiler to create space for an integer called A, and that's what happens. To resolve this dilemma, a keyword was necessary for C and C++ to say "this is only a declaration; it's defined elsewhere." The keyword is extern. It can mean the definition is external to the file, or later in the file.

    Declaring a variable without defining it means using the extern keyword before a description of the variable, like this:

    extern int A;

    extern can also apply to function declarations. For func1( ), it looks like this:

    extern int func1(int length, int width);

    This statement is equivalent to the previous func1( ) declarations. Since there is no function body, the compiler must treat it as a function declaration rather than a function definition. The extern keyword is superfluous and optional for function declarations. It is probably unfortunate that the designers of C did not require the use of extern for function declarations; it would have been more consistent and less confusing (but would have required more typing, which certainly explains what they did).

    Including headers

    Most libraries contain significant numbers of functions and variables. To save work and ensure consistency when making the external declarations for these items, C/C++ uses a device called the header file. A header file is a file containing the external declarations for a library; it conventionally has a file name extension of 'h', such as headerfile.h. (You may also see older code using different extensions, such as .hxx or .hpp, but this is rapidly becoming rare.)

    The programmer who creates the library provides the header file. To declare the functions and external variables in the library, the user simply includes the header file. To include a header file, use the #include preprocessor directive. This tells the preprocessor to open the named header file and insert its contents where the #include statement appears. Files may be named in a #include statement in two ways: in double quotes, or in angle brackets (< >). File names in double quotes, such as:

    #include "local.h"

    tell the preprocessor to search the current directory for the file and report an error if the file does not exist. File names in angle brackets tell the preprocessor to look through a search path specified in the environment. Setting the search path varies between machines, operating systems and C++ implementations. To include the iostream header file, you say:

    #include <iostream>

    The preprocessor will find the iostream header file (often in a subdirectory called INCLUDE) and insert it.

    In C, a header file should not contain any function or data definitions because the header can be included in more than one file. At link time, the linker would then find multiple definitions and complain. In C++, there are two exceptions: inline functions and const constants (described later in the book) can both be safely placed in header files.

    New include format

    As C++ has evolved, different compiler vendors chose different extensions for file names. In addition, various operating systems have different restrictions on file names, in particular on name length. To smooth over these rough edges, the standard adopts a new format that allows file names longer than the notorious eight characters and eliminates the extension. For example, including iostream.h becomes

    #include <iostream>

    The translator can implement the includes in a way to suit the needs of that particular compiler and operating system, if necessary truncating the name and adding an extension. Of course, you can also copy the headers given you by your compiler vendor to ones without extensions if you want to use this style before a vendor has provided support for it.

    The libraries that have been inherited from Standard C are still available with the .h extension. However, you can also use them in the C++ include style by prepending a "c" before the name. Thus:

    #include <stdio.h>

    #include <stdlib.h>

    Become:

    #include <cstdio>

    #include <cstdlib>

    And so on, for all the Standard C headers. This provides a nice distinction to the reader indicating when you're using C versus C++ libraries.

    Linking

    The linker collects object modules (with file name extensions of .o or .obj), generated by the compiler, into an executable program the operating system can load and run. It is the last phase of the compilation process.

    Linker characteristics vary from system to system. Generally, you just tell the linker the names of the object modules and libraries you want linked together, and the name of the executable, and it goes to work. Some systems require you to invoke the linker yourself; with most C++ packages you invoke the linker through the C++ compiler. In many situations, the linker is invoked for you, invisibly.

    Many linkers won't search object files and libraries more than once, and they search through the list you give them from left to right. This means that the order of object files and libraries can be important. If you have a mysterious problem that doesn't show up until link time, one possibility is the order in which the files are given to the linker.

    Using libraries

    Now that you know the basic terminology, you can understand how to use a library. To use a library:

  1. Include the library's header file.
  2. Use the functions and variables in the library.
  3. Link the library into the executable program.

    These steps also apply when the object modules aren't combined into a library. Including a header file and linking the object modules are the basic steps for separate compilation in both C and C++.

    How the linker searches a library

    When you make an external reference to a function or variable in C or C++, the linker, upon encountering this reference, can do one of two things. If it has not already encountered the definition for the function or variable, it adds it to its list of "unresolved references." If the linker has already encountered the definition, the reference is resolved.

    If the linker cannot find the definition in the list of object modules, it searches the libraries. Libraries have some sort of indexing so the linker doesn't need to look through all the object modules in the library -- it just looks in the index. When the linker finds a definition in a library, the entire object module, not just the function definition, is linked into the executable program. Note that the whole library isn't linked, just the object module in the library that contains the definition you want (otherwise programs would be unnecessarily large). If you want to minimize executable program size, you might consider putting a single function in each source code file when you build your own libraries. This requires more editing, but it can be helpful to the user.

    Because the linker searches files in the order you give them, you can pre-empt the use of a library function by inserting a file with your own function, using the same function name, into the list before the library name appears. Since the linker will resolve any references to this function by using your function before it searches the library, your function is used instead of the library function.

    Secret additions

    When a C or C++ executable program is created, certain items are secretly linked in. One of these is the startup module, which contains initialization routines that must be run any time a C or C++ program executes. These routines set up the stack and initialize certain variables in the program.

    The linker always searches the standard library for the compiled versions of any "standard" functions called in the program. The iostream functions, for example, are in the standard C++ library.

    Because the standard library is always searched, you can use any function (or class, in C++) in the library by simply including the appropriate header file in your program. To use the iostream functions, you just include the iostream header file.

    In non-standard implementations of C (and C++ C-code generators that use non-standard implementations of C), commonly used functions are not always contained in the library that is searched by default. Math functions, in particular, are often kept in a separate library. You must explicitly add the library name to the list of files handed to the linker.

    Using plain C libraries

    Just because you are writing code in C++, you are not prevented from using C library functions. There has been a tremendous amount of work done for you in these functions, so they can save you a lot of time. You should hunt through the manuals for your C and/or C++ compiler before writing new functions.

    This book will use C library functions when convenient (Standard C library functions will be used to increase the portability of the programs).

    Using pre-defined C library functions is quite simple: just include the appropriate header file and use the function.

    NOTE: since Standard C header files use function prototyping, their function declarations agree with C++. If, however, your C header files use the older K&R C "empty-argument-list" style for function declarations, you will have trouble because the C++ compiler takes these to mean "functions with no arguments." To fix the problem, you must create new header files and either put the proper argument lists in the declarations or simply put an ellipsis (...) in the argument list, which means "any number and type of arguments."

    Your first C++ program

    You now know enough of the basics to create and compile a program. The program will use the pre-defined C++ iostream classes that come with all C++ packages. The iostream classes handle input and output with files, with the console, and with "standard" input and output (which may be redirected to files or devices). In this very simple program, a stream object will be used to print a message on the screen.

    Using the iostreams class

    To declare the functions and external data in the iostreams class, include the header file with the statement

    #include <iostream>

    The first program uses the concept of standard output, which means "a general-purpose place to send output." You will see other examples using standard output in different ways, but here it will just go to the screen. The iostream package automatically defines a variable (an object) called cout that accepts all data bound for standard output.

    To send data to standard output, use the operator <<. C programmers know this operator as the bitwise left shift. C++ allows operators to be overloaded. When you overload an operator, you give it a new meaning when that operator is used with an object of a particular type. With iostream objects, the operator << means "send to." For example:

    cout << "howdy!";

    sends the string "howdy!" to the object called cout.

    Chapter XX covers operator overloading in detail.

    Fundamentals of program structure

    A C/C++ program is a collection of variables, function definitions and function calls. When the program starts, it executes initialization code and calls a special function, "main( )." You put the primary code for the program here. (All functions in this book use parentheses after the function name.)

    A function definition consists of a return value type (which defaults to integer if none is specified), a function name, an argument list in parentheses, and the function code contained in braces. Here is a sample function definition:

    int function() {

    // Function code here (this is a comment)

    }

    The above function has an empty argument list, and a body that only contains a comment.

    There can be many sets of braces within a function definition, but there must always be at least one set surrounding the function body. Since main( ) is a function, it must follow these rules. In C++, main( ) has a return type of int (some operating systems can utilize a return value from a program). If you don't explicitly return a value, the compiler supplies a default "return 0", although some compilers issue a warning about the missing return statement.

    C and C++ are free form languages. With few exceptions, the compiler ignores carriage returns and white space, so it must have some way to determine the end of a statement. In C/C++, statements are delimited by semicolons.

    C comments start with /* and end with */. They can include carriage returns. C++ uses C-style comments and adds a new type of comment: //. The // starts a comment that terminates with a carriage return. It is more convenient than /* */ for one-line comments, and is used extensively in this book.

    "Hello, world!"

    And now, finally, the first program:

    //: C02:Hello.cpp
    // Saying Hello with C++
    #include <iostream> // Stream declarations
    using namespace std;
    
    int main() {
      cout << "Hello, World! I am " << 8 << " Today!" << endl;
    } ///:~

    The cout object is handed a series of arguments, which it prints out in left-to-right order. With iostreams, you can string together a series of arguments like this, which makes the class easy to use.

    Text inside double quotes is called a string. The compiler creates space for strings and stores the ASCII equivalent for each character in this space. The string is terminated with a value of 0 to indicate the end of the string. The special iostream manipulator endl outputs a newline and flushes the stream.

    Inside a character string, you can insert special characters that do not print using escape sequences. These consist of a backslash (\) followed by a special code. For example \n means new line. Your compiler manual or local Standard C guide gives a complete set of escape sequences; others include \t (tab), \\ (backslash) and \b (backspace).

    Notice that the entire phrase terminates with a semicolon.

    String arguments and constant numbers are mixed in the cout statement. Because the operator << is overloaded with a variety of meanings when used with cout, you can send cout a variety of different arguments, and it will "figure out what to do with the message."

    Running the compiler

    To compile the program, edit it into a plain text file called Hello.cpp and invoke the compiler with Hello.cpp as the argument. For simple, one-file programs like this one, most compilers will take you all the way through the process. For example, to use the GNU C++ compiler (which is freely available), you say:

    g++ Hello.cpp

    Other compilers will have a similar syntax; consult your compiler's documentation for details.

    More about iostreams

    So far you have seen only the most rudimentary aspect of the iostreams class. The output formatting available with iostreams includes number formatting in decimal, octal and hex. Here's another example of the use of iostreams:

    //: C02:Stream2.cpp
    // More streams features
    #include <iostream>
    using namespace std;
    
    int main() {
      // Specifying formats with manipulators:
      cout << "a number in decimal: "
           << dec << 15 << endl;
      cout << "in octal: " << oct << 15 << endl;
      cout << "in hex: " << hex << 15 << endl;
      cout << "a floating-point number: "
           << 3.14159 << endl;
      cout << "non-printing char (escape): "
           << char(27) << endl;
    } ///:~

    This example shows the iostreams class printing numbers in decimal, octal and hexadecimal using iostream manipulators (which don't print anything, but change the state of the output stream). Floating-point numbers are handled automatically by the compiler. In addition, any character can be sent to a stream object using a cast to a character (a char is a data type designed to hold characters), which looks like a function call: char( ), along with the character's ASCII value. In the above program, an escape is sent to cout.

    String concatenation

    An important feature of the Standard C preprocessor is string concatenation. This feature is used in some of the C++ examples in this book. If two quoted strings are adjacent, and no punctuation is between them, the compiler will paste the strings together as a single string. This is particularly useful when printing code listings in books and magazines that have width restrictions:

    //: C02:Concat.cpp
    // String Concatenation
    #include <iostream>
    using namespace std;
    
    int main() {
      cout << "This string is far too long to put on a single "
        "line but it can be broken up with no ill effects\n"
        "as long as there is no punctuation separating "
        "adjacent strings.\n";
    } ///:~

     

    Reading input

    The iostreams class provides the ability to read input. The object used for standard input is cin. cin normally expects input from the console, but input can be redirected from other sources. An example of redirection is shown later in this chapter.

    The iostreams operator used with cin is >>. This operator waits for the same kind of input as its argument. For example, if you give it an integer argument, it waits for an integer from the console. Here's an example program that converts number bases:

    //: C02:Numconv.cpp
    // Converts decimal to octal and hex
    #include <iostream>
    using namespace std;
    
    int main() {
      int number;
      cout << "Enter a decimal number: ";
      cin >> number;
      // Using format manipulators:
      cout << "value in octal = 0" << oct << number << endl;
      cout << "value in hex = 0x" << hex << number << endl;
    } ///:~

    Notice the declaration of the integer number at the beginning of main( ). Since the extern keyword isn't used, the compiler creates space for number at that point.

    Simple file manipulation

    Standard I/O provides a very simple way to read and write files, called I/O redirection. If a program takes input from standard input (cin for iostreams) and sends its output to standard output (cout for iostreams), that input and output can be redirected. Input can be taken from a file, and output can be sent to a file. To redirect I/O on the command line, use the < sign to redirect input and the > sign to redirect output. For example, if we have a fictitious program fiction.exe (or simply fiction, in Unix) which reads from standard input and writes to standard output, you can redirect standard input from the file stuff and redirect the output to the file such with the command:

    fiction < stuff > such

    Since the files are opened for you, the job is much easier (although you'll see later that iostreams has a very simple mechanism for opening files).

    As a useful example, suppose you want to record the number of times you perform an activity, but the program that records the incidents must be loaded and run many times, and the machine may be turned off, etc. To keep a permanent record of the incidents, you must store the data in a file. This file will be called INCIDENT.DAT and will initially contain the character 0. For easy reading, it will always contain ASCII digits representing the number of incidents.

    The program to increment the number is very simple:

    //: C02:Incr.cpp
    // Read a number, add one and write it
    #include <iostream>
    using namespace std;
    
    int main() {
      int num;
      cin >> num;
      cout << num + 1;
    } ///:~

    To test the program, run it and type a number followed by a carriage return. The program should print a number one larger than the one you type.

    The program can be called from inside another program using the Standard C system( ) function, which is declared in the header file <cstdlib> (the traditional stdlib.h):

    //: C02:Incident.cpp
    // Records an incident using INCR
    #include <cstdlib> // Declare the system() function
    using namespace std;
    
    int main() {
      // Other code here...
      system("incr < incident.dat > incident.dat");
    } ///:~

    To use the system( ) function, you give it a string that you would normally type at the operating system command prompt. The command executes and control returns to the program.

    Notice that the file INCIDENT.DAT is read and written using I/O redirection. Since the single > is used, the file is overwritten. Although it works fine here, reading and writing the same file isn't always a safe thing to do -- if you aren't careful you can end up with garbage in the file.

    If a double >> is used instead of a single >, the output is appended to the file (and this program wouldn't work correctly).

    This program shows you how easy it is to use plain C library functions in C++: just include the header file and call the function. The upward compatibility from C to C++ is a big advantage if you are learning the language starting from a background in C.

    Summary

    Exercises

    3: The C in C++

    The user-defined data type, or class, is what distinguishes C++ from traditional procedural languages. A class is a new data type that you or someone else creates to solve a particular type of problem. Once a class is created, anyone can use it without knowing the specifics of how it works, or even how classes are built. This chapter will teach you enough of the basics of C and C++ so you can utilize a class that someone else has written. The quick coverage of C++ features that are similar to C features continues in this chapter and the next.

    This chapter treats classes as if they are just another built-in data type available for use in programs. So that you aren't faced with undefined concepts, the process of writing your own classes is delayed until the following chapter. This may cause a tedious delay for experienced C programmers. However, leaping past the necessary basics would hopelessly confuse programmers attempting to move to C++ from other languages.

    If you program with Pascal or some other procedural language, this chapter gives you a decent background in the style of C used in C++. If you are familiar with the style of C described in the first edition of Kernighan & Ritchie (often called K&R C) you will find some new and different features in C++ as well as Standard C. If you are familiar with Standard C, and in particular with function prototypes, you should skim through this chapter looking for features that are particular to C++.

    Controlling execution in C/C++

    This section covers the execution control statements in C++. You must be familiar with these statements before you can read C or C++ code.

    C++ uses all C's execution control statements. These include if-else, while, do-while, for, and a selection statement called switch. C++ also allows the infamous goto, which will be avoided in this book.

    True and false in C

    An expression is true if it produces a non-zero integral value. An expression is false if it produces an integral zero.

    All conditional statements use the truth or falsehood of a conditional expression to determine the execution path. An example of a conditional expression is A == B. This uses the conditional operator == to see if the variable A is equivalent to the variable B. The expression produces 1 if it is true and 0 if it is false. Other conditional operators are >, <, >=, etc. The conditional statements themselves are described in the rest of this section.

    if-else

    The if-else statement can exist in two forms: with or without the else. The two forms are:

    if(expression)
    statement

    or

    if(expression)
    statement
    else
    statement

    The "expression" evaluates to true or false. The "statement" means either a simple statement terminated by a semicolon or compound statement, which is a group of simple statements enclosed in braces. Any time the word "statement" is used, it is always implied that the statement can be simple or compound. Note this statement can also be another if, so they can be strung together.

    Pascal programmers should notice that the "then" is implied in C and C++, which are terse languages. "Then" isn't essential, so it was left out.

    //: C03:Ifthen.cpp
    // Demonstration of if and if-else conditionals
    #include <iostream>
    using namespace std;
    
    int main() {
      int i;
      cout << "type a number and a carriage return" << endl;
      cin >> i;
      if(i > 5)
        cout << "the number was greater than 5 " << endl;
      else
        if(i < 5)
          cout << "the number was less than 5 " << endl;
        else
          cout << "the number must be equal to 5 " << endl;
    
      cout << "type a number and a carriage return" << endl;
      cin >> i;
      if(i < 10)
        if(i > 5)  // "if" is just another type of statement
          cout << "5 < i < 10 " << endl;
        else
          cout << "i <= 5 " << endl;
      else // Matches "if(i < 10) "
        cout << "i >= 10 " << endl;
    } ///:~

    Indentation makes C/C++ code easier to read. Since C and C++ are "free form" languages, the extra spaces, tabs and carriage returns do not affect the resulting program. It is conventional to indent the body of a control flow statement so the reader may easily determine where it begins and ends.

    while

    while, do-while and for control looping. A statement repeats until the controlling expression evaluates to false.

    The form for a while loop is

    while(expression)
    statement

    The expression is evaluated once at the beginning of the loop, and again before each further iteration of the statement.

    This example stays in the body of the while loop until you type the secret number or press control-C.

    //: C03:Guess.cpp
    // Guess a number (demonstrates "while")
    #include <iostream>
    using namespace std;
    
    int main() {
      int secret = 15;
      int guess = 0;
      // "!=" is the "not-equal" conditional:
      while(guess != secret) { // Compound statement
        cout << "guess the number: ";
        cin >> guess;
      }
      cout << "You got it!" << endl;
    } ///:~

     

    do-while

    The form for do-while is

    do
    statement
    while
    (expression);

    The do-while is different from the while because the statement always executes at least once, even if the expression evaluates to false the first time. In a simple while, if the conditional is false the first time the statement never executes.

    If a do-while is used in the "GUESS" program, the variable guess does not need an initial dummy value, since it is initialized by the cin statement before it is tested:

    //: C03:Guess2.cpp
    // The guess program using do-while
    #include <iostream>
    using namespace std;
    
    int main() {
      int secret = 15;
      int guess; // No initialization needed this time
      do {
        cout << "guess the number: ";
        cin >> guess;
      }   while(guess != secret);
      cout << "You got it!" << endl;
    } ///:~

     

    for

    A for loop performs initialization before the first iteration. Then it performs conditional testing and, at the end of each iteration, some form of "stepping." The form of the for loop is:

    for(initialization; expression; step)
    statement

    Any of the expressions initialization, expression or step may be empty. The initialization code executes once at the very beginning. The expression is tested before each iteration (if it evaluates to false at the beginning, the statement never executes). At the end of each loop, the step executes.

    for loops are usually used for "counting" tasks:

    //: C03:Charlist.cpp
    // Display all the ASCII characters.
    // Demonstrates "for."
    #include <iostream>
    using namespace std;
    
    int main() {
      for(int i = 0; i < 128; i = i + 1)
        if(i != 26)  // ANSI Terminal/ANSI.SYS Clear screen
          cout << " value: " << i
               << " character: " << char(i) << endl; // Type conversion
    } ///:~

    You may notice that the variable i is defined at the point where it is used, instead of at the beginning of the block denoted by the open curly brace {. Traditional procedural languages require that all variables be defined at the beginning of the block so when the compiler creates a block it can allocate space for those variables.

    Declaring all variables at the beginning of the block requires the programmer to write in a particular way because of the implementation details of the language. Most people don't know all the variables they are going to use before they write the code, so they must keep jumping back to the beginning of the block to insert new variables, which is awkward and causes errors. It is confusing to read the code because each block starts with a clump of variable declarations, and the variables might not be used until much later in the block.

    In C++ (not in C) you can spread your variable declarations throughout the block. Whenever you need a new variable, you can define it right where you use it. In addition, you can initialize the variable at the point you declare it, which prevents a certain class of errors. Defining variables at any point in a scope allows a more natural coding style and makes code easier to understand. C++ compilers collect all the variable declarations in the block and secretly place them at the beginning of the block.

    The break and continue keywords

    Inside the body of any of the looping constructs you can control the flow of the loop using break and continue. break quits the loop without executing the rest of the statements in the loop. continue stops the execution of the current iteration and goes back to the beginning of the loop to begin a new iteration.

    As an example of the use of break and continue, this program is a very simple menu system:

    //: C03:Menu.cpp
    // Simple menu program demonstrating
    // the use of "break" and "continue"
    #include <iostream>
    using namespace std;
    
    int main() {
      char c; // To hold response
      while(1) {
        cout << "MAIN MENU:" << endl;
        cout << "l for left, r for right, q to quit: ";
        cin >> c;
        if(c == 'q')
          break; // Out of "while(1)"
        if(c == 'l') {
          cout << "LEFT MENU:" << endl;
          cout << "select a or b: ";
          cin >> c;
          if(c == 'a') {
            cout << "you chose 'a'" << endl;
            continue; // Back to main menu
          }
          if(c == 'b') {
            cout << "you chose 'b'" << endl;
            continue; // Back to main menu
          }
          else {
            cout << "you didn't choose a or b!" 
                 << endl;
            continue; // Back to main menu
          }
        }
        if(c == 'r') {
          cout << "RIGHT MENU:" << endl;
          cout << "select c or d: ";
          cin >> c;
          if(c == 'c') {
            cout << "you chose 'c'" << endl;
            continue; // Back to main menu
          }
          if(c == 'd') {
            cout << "you chose 'd'" << endl;
            continue; // Back to main menu
          }
          else {
            cout << "you didn't choose c or d!" 
                 << endl;
            continue; // Back to main menu
          }
        }
        cout << "you must type l or r or q!" << endl;
      }
      cout << "quitting menu..." << endl;
    } ///:~

    If the user selects 'q' in the main menu, the break keyword is used to quit, otherwise the program just continues to execute indefinitely. After each of the sub-menu selections, the continue keyword is used to pop back up to the beginning of the while loop.

    The while(1) statement is the equivalent of saying "do this loop forever." The break statement allows you to break out of this infinite while loop when the user types a 'q.'

    switch

    A switch statement selects from among pieces of code based on the value of an integral expression. Its form is:

    switch(selector) {

    case integral-value1 : statement; break;

    case integral-value2 : statement; break;

    case integral-value3 : statement; break;

    case integral-value4 : statement; break;

    case integral-value5 : statement; break;

    (...)

    default: statement;

    }

    Selector is an expression that produces an integral value. The switch compares the result of selector to each integral-value. If it finds a match, the corresponding statement (simple or compound) executes. If no match occurs, the default statement executes.

    You will notice in the above definition that each case ends with a break, which causes execution to jump to the end of the switch body. This is the conventional way to build a switch statement, but the break is optional. If it is missing, execution falls through: the code for the following case statements runs until a break is encountered. Although you don't usually want this behavior, it can be useful to an experienced C programmer.

    The switch statement is a very clean way to implement multi-way selection (i.e., selecting from among a number of different execution paths), but the selector must evaluate to an integral value, and the case labels must be integral constants known at compile time. If you want to use, for example, a string as a selector, it won't work in a switch statement. For a string selector, you must instead use a series of if statements and compare the string inside the conditional.

    Menus often lend themselves neatly to a switch:

    //: C03:Menu2.cpp
    // A menu using a switch statement
    #include <iostream>
    using namespace std;
    
    int main() {
      char response; // The user's response
      int quit = 0;  // Flag for quitting
      while(quit == 0) {
        cout << "Select a, b, c or q to quit: ";
        cin >> response;
        switch(response) {
          case 'a' : cout << "you chose 'a'" << endl;
                     break;
          case 'b' : cout << "you chose 'b'" << endl;
                     break;
          case 'c' : cout << "you chose 'c'" << endl;
                     break;
          case 'q' : cout << "quitting menu" << endl;
                     quit = 1;
                     break;
          default  : cout << "Please use a,b,c or q!"
                     << endl;
        }
      }
    } ///:~

    Notice that selecting 'q' sets the quit flag to 1. The next time the while condition is evaluated, quit == 0 produces false, so the body of the while does not execute.

    Introduction to C and C++ operators

    You can think of operators as a special type of function (C++ operator overloading treats operators precisely that way). An operator takes one or more arguments and produces a new value. The arguments are in a different form than ordinary function calls, but the effect is the same.

    You should be reasonably comfortable with the operators used so far from your previous programming experience. The concepts of addition (+), subtraction and unary minus (-), multiplication (*), division (/) and assignment (=) all work much the same in any programming language. The full set of operators is enumerated in the next chapter.

    Precedence

    Operator precedence defines the order in which an expression evaluates when several different operators are present. C and C++ have specific rules to determine the order of evaluation. The easiest to remember is that multiplication and division happen before addition and subtraction. After that, if an expression isn't transparent to you it probably won't be for anyone reading the code, so you should use parentheses to make the order of evaluation explicit. For example:

    A = X + Y - 2/2 + Z;

    has a very different meaning from the same statement with a particular grouping of parentheses:

    A = X + (Y - 2)/(2 + Z);

     

    Auto increment and decrement

    C, and therefore C++, are full of shortcuts. Shortcuts can make code much easier to type, and sometimes much harder to read. Perhaps the designers thought it would be easier to understand a tricky piece of code if your eyes didn't have to scan as large an area of print.

    One of the nicer shortcuts is the auto-increment and auto-decrement operators. You often use these to change loop variables, which control the number of times a loop executes.

    The auto-decrement operator is -- and means "decrease by one unit." The auto-increment operator is ++ and means "increase by one unit." If A is an int, for example, the expression ++A is equivalent to (A = A + 1). Auto-increment and auto-decrement operators produce the value of the variable as a result. If the operator appears before the variable, (i.e., ++A), the operation is performed and the value is produced. If the operator appears after the variable (i.e. A++), the value is produced, then the operation is performed. As an example:

    //: C03:Autoinc.cpp
    // Shows use of auto-increment
    // and auto-decrement operators.
    #include <iostream>
    using namespace std;
    
    int main() {
      int i = 0;
      int j = 0;
      cout << ++i << endl; // Pre-increment
      cout << j++ << endl; // Post-increment
      cout << --i << endl; // Pre-decrement
      cout << j-- << endl; // Post decrement
    } ///:~

    If you've been wondering about the name "C++," now you understand. It implies "one step beyond C."

    Using standard I/O for easy file handling

    The iostream class contains functions to read and write files. Often, however, it is easiest to read from cin and write to cout. The program can be tested by typing at the console, and when it is working, files can be manipulated via redirection on the command line (in Unix and MS-DOS).

    Simple "cat" program

    So far, all the messages you've seen are sent via operator overloading to stream objects. In C++, a message is usually sent to an object by calling a member function for that object. A member function looks like a regular function -- it has a name, argument list and return value. However, it must always be connected to an object. It can never be called by itself. A member function is always selected for a particular object via the dot (.) member selection operator.

    The iostream class has several non-operator member functions. One of these is get( ), which can be used to fetch a single character (or a string, if it is called differently). The following program uses get( ) to read characters from the cin object. The program uses the complementary member function put( ) to send characters to the cout object. Characters are read from standard input and written to standard output.

    //: C03:Cat.cpp
    // Demonstrates member function calls
    // and simple file i/o.
    #include <iostream>
    using namespace std;
    
    int main() {
      char c;
      while(cin.get(c))
        cout.put(c);
    } ///:~

    get( ) returns a value that can be tested to determine whether the end of the input has been reached. As long as the return value is true, there is more input available and the body of the while loop executes; when the expression cin.get(c) produces false, there is no more input and the loop stops.

    To use cat, simply redirect a file into it; the results will appear on the screen:

    cat < infile

    If you redirect the output file you've created a simple "copy" program:

    cat < infile > outfile

    Pass by reference

    C programmers may find the above program puzzling. According to plain C syntax, the character variable c looks like it is passed by value to the member function get( ). Yet c is used in the put( ) member function as if get( ) had modified the value of c, which would be impossible if it were passed by value! What's going on here?

    C++ has added another kind of argument passing: pass-by-reference. If a function argument is defined as pass-by-reference, the compiler takes the address of the variable when the function is called. The argument of the stream function get( ) is defined as pass-by-reference, so the above program works correctly.

    Chapter 4 describes passing by reference in more detail. The first part of that chapter describes addresses, which you must understand before references make any sense.

    Handling spaces in input

    To read and use more than a character at a time from standard input, you will need to use a buffer. A buffer is a data-storage area used to hold and manipulate a group of data items with identical types.

    In C and C++, you can create a buffer to hold text with an array of characters. Arrays in C and C++ are denoted with the bracket operator ([ ]). To define an array, give the data type, a name for the array, and the size in brackets. For an array of characters (a character buffer) called buf the declaration could be:

    char buf[100]; // Space for 100 contiguous characters

    To read an entire word instead of a character, use cin and the >> operator, but send the input to a character buffer instead of just a single character. The operator >> is overloaded so you can use it with a number of different types of arguments. The idea is the same in each case: you want to get some input. You need different kinds of input, but you don't have to worry about it because the language takes care of the differentiation for you.

    Here's a program to read and echo a word:

    //: C03:Readword.cpp
    // Read and echo a word from standard input
    #include <iostream>
    using namespace std;
    
    int main() {
      char buf[100];
      cout << "type a word: ";
      cin >> buf;
      cout << "the word you typed is: " << buf << endl;
    } ///:~

    You will notice the program works fine if you type a word, but if you type more than one word it only takes the first one. The >> operator is word-oriented; it looks for white space, which it doesn't copy into the buffer, to break up the input. You must type a carriage return before any of the input is read.

    To read and manipulate anything more than a simple character or word using iostreams, it is best to use the get( ) function. get( ) doesn't discard white space, and it can be used with a single character, as shown in the CAT.CPP program, or a character buffer (get( ) is an overloaded function). When used with a character buffer, get( ) needs to know the maximum number of characters it should read (usually the size of the buffer) and optionally the terminating character it should look for before it stops reading input.

    This terminating character that get( ) looks for (the delimiter) defaults to a new line (\n). You don't need to change the delimiter if you just want to read the input a line at a time. To change the delimiter, add the character you wish to be the delimiter in single quotes as the third argument in the list. When get( ) matches the delimiter with the terminating character, the terminating character isn't copied into the character buffer; it stays on the input stream. This means you must read the terminating character and throw it away, otherwise the next time you try to fill your character buffer using get( ), the function will immediately read the terminating character and stop.

    Here's a program that reads input a line at a time using get( ):

    //: C03:Getline.cpp
    // Stream input by lines
    #include <iostream>
    using namespace std;
    
    int main() {
      char buf[100];
      char trash;
      while(cin.get(buf,100)) { // Get chars until '\n'
        cin.get(trash); // Throw away the terminator
        cout << buf << "\n";  // Add the '\n' at the end
      }
    } ///:~

    The get( ) function reads input and places it into buf until either 100 characters are read, or a '\n' is found. get( ) puts the zero byte, required for all strings, at the end of the string in buf. The character trash is only used for throwing away the line terminator. Because the new line was never put in buf, you must send a new line out when you print the line.

    The return value of cin.get( ) for lines is the same as that of the overloaded version for single characters: it is true as long as some input was read (so the body of the loop is executed) and false when the end of the input is reached.

    Try redirecting the contents of a text file into GETLINE.

    Aside: examining header files

    As your knowledge of C++ increases, you will find that the best way to discover the capabilities of the iostreams class, or any class, is to look at the header file where the class is defined. The header file will contain the class declaration. You won't completely understand the class declaration until you've read the next chapter. The class declaration contains some private and protected elements, which you don't have access to, and a list of public elements, usually functions, that you as the user of the class may utilize. Although there isn't necessarily a description of the functions in the class definition, the function names are often helpful and the class definition acts as a sort of "table of contents."

    Header files for pre-defined classes like iostreams are usually located in a subdirectory, often called INCLUDE, under the installation directory for your C++ package or the associated C package, if you use a C-code generator. On Unix, you must ask your system administrator where the C++ INCLUDE files are located.

    Utility programs using iostreams and standard I/O

    Now that you've had an introduction to iostreams and you know how to manipulate files with I/O redirection, you can write some simple programs. This section contains examples of useful utilities.

    Pipes

    Notice that in Unix and MS-DOS, you can also use pipes on the command line for I/O redirection. Pipes feed the output of one program into the input of another program if both programs use standard I/O. If prog1 writes to standard I/O and prog2 reads from standard I/O, you can pipe the output of prog1 into the input of prog2 with the following command:

    prog1 | prog2

    where '|' is the pipe symbol. If all the following programs use standard I/O, you can chain them together like this:

    prog1 | prog2 | prog3 | prog4

    Text analysis program

    The following program counts the number of words and lines in a file and checks to make sure no line is wider than maxwidth. It uses two functions from the Standard C library, both of which are declared in the header file <cstring> (the Standard C string.h). strlen( ) finds the length of a string, not including the zero byte that terminates all strings. strtok( ) is used to count the number of words in a line; it breaks the line up into chunks that are separated by any of the characters in the second argument. For this program, a word is separated by white space, which is a space or a tab. The first time you call strtok( ), you hand it the character buffer, and on all subsequent calls you hand it a zero, which tells it to use the same buffer it used for the last call (moving ahead each time strtok( ) is called). When it can't find any more words in the line, strtok( ) returns zero.

    //: C03:Textchek.cpp
    // Counts words and lines in a text file.
    // Ensures no line is wider than maxwidth.
    #include <iostream>
    #include <cstring> // strlen() & strtok()
    using namespace std;
    
    int main() {
      // const means "you can't change it":
      const int maxwidth = 64;
      int linecount = 0;
      int wordcount = 0;
      char buf[100], c, trash;
      while(cin.get(buf,100)) {
        cin.get(trash); // Discard terminator
        linecount++; // We just read a whole line
        if(strtok(buf," \t")) {
          wordcount++;  // Count the first word
          while(strtok(0," \t"))
            wordcount++; // Count the rest
        }
        if(strlen(buf) > maxwidth)
          cout << "line " << linecount
            << " is too long." << endl;
      }
      cout << "Lines: " << linecount << endl;
      cout << "Words: " << wordcount << endl;
    } ///:~

    Notice the use of the auto-increment to count lines and words. Since the value produced by auto-incrementing the variable is ignored, it doesn't matter whether you put the increment first or last.

    To count words, strtok( ) is set up for the first call by handing it the text buffer buf. Each call that returns a non-zero value means another word was found, so the count is incremented; the loop ends when strtok( ) returns zero.

    The keyword const is used to prevent maxwidth from being changed. const was invented for C++ and later added to Standard C. It has two purposes: the compiler will generate an error message if you ever try to change the value, and an optimizer can use the fact that a variable is const to create better code. It is always a good idea to make a variable const if you know it should never change.

    Notice the way buf, c, and trash are all declared with a single char statement. You can declare all types of data this way, just by separating the variable names with commas.

    IOstream support for file manipulation

    All the examples in this chapter have used I/O redirection to handle input and output. Although this approach works fine, iostreams provide a much faster and safer way to read and write files. This is accomplished by including <fstream> instead of (or in addition to) <iostream>, then creating and using fstream objects in almost the identical fashion you use ordinary iostream objects. Here's a program that copies one file onto another (you'll learn later how to use command-line arguments so the file names aren't fixed):

    //: C03:IOcopy.cpp
    // fstreams for opening files.
    // Copies itself to TMP.TXT
    #include <fstream>
    #include "../require.h"
    using namespace std;
    
    int main() {
      ifstream infile("IOcopy.cpp");
      assure(infile, "IOcopy.cpp");
      ofstream outfile("tmp.txt");
      assure(outfile, "tmp.txt");
      char ch;
      while(infile.get(ch))
        outfile.put(ch);
    } ///:~

    The first line creates an ifstream object called infile and hands it the name of the file (which happens to be the same name as the source-code file). ifstream is a special type of iostream object, declared in <fstream>, that opens and reads from a file. The second line checks that the file was successfully opened, using a function in require.h that will be described later in the book. The third line creates an ofstream object, which is just like an ifstream except that it writes to a file. This is also checked for successful opening.

    The while loop simply gets characters from infile with the member function get( ), and puts them into outfile with put( ), until the get( ) returns false (that is, zero). The files are automatically closed when the objects are destroyed, which is another benefit of using fstreams for manipulating files -- you don't have to remember to close the files.

    There's also a set of iostream classes for doing in-memory formatting, declared in the header file <sstream>.

    Introduction to C++ data

    Data types can be built-in or abstract. A built-in data type is one that the compiler intrinsically understands, one that "comes with the compiler." The types of built-in data are identical in C and C++. A user-defined data type is one you or another programmer create as a class. These are commonly referred to as abstract data types. The compiler knows how to handle built-in types when it starts up; it "learns" how to handle abstract data types by reading header files containing class declarations.

    Basic built-in types

    The Standard C specification doesn't say how many bits each of the built-in types must contain. Instead, it stipulates the minimum and maximum values the built-in type must be able to hold. When a machine is based on binary, this maximum value translates directly into a number of bits. If a machine uses, for instance, binary-coded decimal (BCD) to represent numbers, then the amount of space required to hold the maximum values for each data type will change. The minimum and maximum values that can be stored in the various data types are defined in the system header files <climits> and <cfloat>.

    C & C++ have four basic built-in data types, described here for binary-based machines. A char is for character storage and uses a minimum of one byte of storage. An int stores an integral number and uses a minimum of two bytes of storage. The float and double types store floating-point numbers, often in IEEE floating-point format. float is for single-precision floating point and double is for double precision floating point.

    You can define and initialize variables at the same time. Here's how to define variables using the four basic data types:

    //: C03:Basic.cpp
    // Defining the four basic data
    // types in C & C++
    
    int main() {
      // Definition without initialization:
      char protein;
      int carbohydrates;
      float fiber;
      double fat;
      // Definition & initialization at the same time:
      char pizza = 'A', pop = 'Z';
      int DongDings = 100, Twinkles = 150, HeeHos = 200;
      float chocolate = 3.14159;
      double fudge_ripple = 6e-4; // Exponential notation
    } ///:~

    The first part of the program defines variables of the four basic data types without initializing them. If you don't initialize a variable, its contents are undefined (although some compilers will initialize to 0). The second part of the program defines and initializes variables at the same time. Notice the use of exponential notation in the constant 6e-4, meaning: "6 times 10 to the minus fourth power."

    bool, true, & false

    Virtually everyone uses Booleans, and everyone defines them differently. Some use enumerations, others use typedefs. A typedef is a particular problem because you can't overload on it (a typedef to an int is still an int) or instantiate a unique template with it.

    A class could have been created for bool in the standard library, but this doesn't work very well either, because you can only have one automatic type conversion operator from a class without causing overload resolution problems.

    The best approach for such a useful type is to build it into the language. A bool type can have two states expressed by the built-in constants true (which converts to an integral one) and false (which converts to an integral zero). All three names are keywords. In addition, some language elements have been adapted:

    Element                 Usage with bool

    && || !                 Take bool arguments and return bool.

    < > <= >= == !=         Produce bool results.

    if, for, while, do      Conditional expressions convert to bool values.

    ? :                     First operand converts to bool value.

    Because there's a lot of existing code that uses an int to represent a flag, the compiler will implicitly convert from an int to a bool. Ideally, the compiler will give you a warning as a suggestion to correct the situation.

    An idiom that falls under "poor programming style" is the use of ++ to set a flag to true. This is still allowed, but deprecated, which means that at some time in the future it will be made illegal. The problem is the same as incrementing an enum: You're making an implicit type conversion from bool to int, incrementing the value (perhaps beyond the range of the normal bool values of zero and one), and then implicitly converting it back again.

    Pointers will also be automatically converted to bool when necessary.

    Specifiers

    Specifiers modify the meanings of the basic built-in types, and expand the built-in types to a much larger set. There are four specifiers: long, short, signed and unsigned.

    Long and short modify the maximum and minimum values a data type will hold. A plain int must be at least the size of a short. The size hierarchy for integral types is: short int, int, long int. All the sizes could conceivably be the same, as long as they satisfy the minimum/maximum value requirements. On a machine with a 64-bit word, for instance, all the data types might be 64 bits.

    The size hierarchy for floating point numbers is: float, double, and long double. Long float is not allowed in Standard C. There are no short floating-point numbers.

    The signed and unsigned specifiers tell the compiler how to use the sign bit with integral types and characters (floating-point numbers always contain a sign). An unsigned number does not keep track of the sign and can store positive numbers twice as large as the positive numbers that can be stored in a signed number. Signed is the default and is only necessary with char; char may or may not default to signed. By specifying signed char, you force the sign bit to be used.

    The following example shows the size of the data in bytes using the sizeof( ) operator, introduced later in this chapter:

    //: C03:Specify.cpp
    // Demonstrates the use of specifiers
    #include <iostream>
    using namespace std;
    
    int main() {
      char c;
      unsigned char cu;
      int i;
      unsigned int iu;
      short int is;
      short iis; // Same as short int
      unsigned short int isu;
      unsigned short iisu;
      long int il;
      long iil;  // Same as long int
      unsigned long int ilu;
      unsigned long iilu;
      float f;
      double d;
      long double ld;
      cout << "sizeof(char) = " << sizeof(c) << endl;
      cout << "sizeof(unsigned char) = " << sizeof(cu) << endl;
      cout << "sizeof(int) = " << sizeof(i) << endl;
      cout << "sizeof(unsigned int) = " << sizeof(iu) << endl;
      cout << "sizeof(short) = " << sizeof(is) << endl;
      cout << "sizeof(unsigned short) = " << sizeof(isu) << endl;
      cout << "sizeof(long) = " << sizeof(il) << endl;
      cout << "sizeof(unsigned long) = " << sizeof(ilu) << endl;
      cout << "sizeof(float) = " << sizeof(f) << endl;
      cout << "sizeof(double) = " << sizeof(d) << endl;
      cout << "sizeof(long double) = " << sizeof(ld) << endl;
    } ///:~

    When you are modifying an int with short or long, the keyword int is optional, as shown above.

    Scoping

    Scoping rules tell you where a variable is valid, where it is created and where it gets destroyed (i.e., goes out of scope). The scope of a variable extends from the point where it is defined to the first closing brace matching the closest opening brace before the variable is declared. To illustrate:

    //: C03:Scope.cpp
    // How variables are scoped.
    
    int main() {
      int scp1;
      // scp1 visible here
      {
        // scp1 still visible here
        //.....
        int scp2;
        // scp2 visible here
        //.....
        {
          // scp1 & scp2 still visible here
          //..
          int scp3;
          // scp1, scp2 & scp3 visible here
          // ...
        } // <-- scp3 destroyed here
        // scp3 not available here
        // scp1 & scp2 still visible here
        // ...
      } // <-- scp2 destroyed here
      // scp3 & scp2 not available here
      // scp1 still visible here
      //..
    } // <-- scp1 destroyed here
    ///:~

    The above example shows when variables are visible, and when they are unavailable (go out of scope). A variable can only be used when inside its scope. Scopes can be nested, indicated by matched pairs of braces inside other matched pairs of braces. Nesting means you can access a variable in a scope that encloses the scope you are in. In the above example, the variable scp1 is available inside all of the other scopes, while scp3 is only available in the innermost scope.

    Defining data on the fly

    There is a significant difference between C and C++ when defining variables. Both languages require that variables be defined before they are used, but C requires all the variable definitions at the beginning of a scope. While reading C code, a block of variable definitions is often the first thing you see when entering a scope. These variable definitions don't usually mean much to the reader because they appear apart from the context in which they are used.

    C++ allows you to define variables anywhere in the scope, so you can define a variable right before you use it. This makes the code much easier to write and reduces the errors you get from being forced to jump back and forth within a scope. It makes the code easier to understand because you see the variable definition in the context of its use. This is especially important when you are defining and initializing a variable at the same time -- you can see the meaning of the initialization value by the way the variable is used.

    Here's an example showing on-the-fly data definitions:

    //: C03:OnTheFly.cpp
    // On-the-fly data definitions
    
    int main() {
      //..
      { // Begin a new scope
        int q = 0; // Plain C requires definitions here
        //..
        for(int i = 0; i < 100; i++) { // Define at point of use
          q++;
          // Notice q comes from a larger scope
          int p = 12; // Definition at the end of the scope
        }
        int p = 1;  // A different p
      } // End scope containing q & outer p
    } ///:~

    In the innermost scope, p is defined right before the scope ends, so it is really a useless gesture (but it shows you can define a variable anywhere). The p in the outer scope is in the same situation.

    The definition of i in the for loop is rather tricky. Under the original C++ rules (still followed by many compilers), i is valid from the point where it is declared to the end of the scope that encloses the for loop; this matches C, where the variable would have to be declared at the beginning of that enclosing scope. Standard C++, however, restricts the scope of i to the for loop itself, so the variable is unavailable once the loop ends. Be aware that compilers differ on this point.

    Specifying storage allocation

    When creating data, you have a number of options to specify the lifetime of the data, how the data is allocated, and how the data is treated by the compiler.

    Global variables

    Global variables are defined outside all function bodies and are available to all parts of the program (even code in other files). Global variables are unaffected by scopes and are always available (i.e., the lifetime of a global variable lasts until the program ends). If the existence of a global variable in one file is declared using the extern keyword in another file, the data is available for use by the second file. Here's an example of the use of global variables:

    //: C03:Global.cpp
    // Demonstration of global variables
    int global;
    int main() {
      global = 12;
    } ///:~

    Here's a file that accesses global as an extern:

    //: C03:Global2.cpp {O}
    // Accessing external global variables
    extern int global;  
    // (The linker resolves the reference)
    void foo() {
      global = 47;
    } ///:~

    Storage for the variable global is created by the definition in GLOBAL.CPP, and that same variable is accessed by the code in GLOBAL2.CPP. Since the code in GLOBAL2.CPP is compiled separately from the code in GLOBAL.CPP, the compiler must be informed that the variable exists elsewhere by the declaration

    extern int global;

    Local variables

    Local variables occur within a scope; they are "local" to a function. They are often called "automatic" variables because they automatically come into being when the scope is entered, and go away when the scope closes. The keyword auto makes this explicit, but local variables default to auto so it is never necessary to declare something as an auto.

    Register variables

    A register variable is a type of local variable. The register keyword tells the compiler "make accesses to this variable as fast as possible." Increasing the access speed is implementation dependent but, as the name suggests, it is often done by placing the variable in a register. There is no guarantee that the variable will be placed in a register or even that the access speed will increase. It is a hint to the compiler.

    There are restrictions to the use of register variables. You cannot take or compute the address of a register variable. A register variable can only be declared within a block (you cannot have global or static register variables). You can use a register variable as a formal argument in a function (i.e., in the argument list).

    static

    The static keyword has several distinct meanings. Normally, variables defined local to a function disappear at the end of the function scope. When you call the function again, storage for the variables is created anew and the data is re-initialized. If you want the data to be extant throughout the life of the program, you can define that variable to be static and give it an initial value. The initialization is only performed when the program begins to execute, and the data retains its value between function calls. This way, a function can "remember" some piece of information between function calls.

    You may wonder why global data isn't used instead. The beauty of static data is that it is unavailable outside the scope of the function, so it can't be inadvertently changed. This localizes errors.

    An example of the use of static data is:

    //: C03:Static.cpp
    // Using static data in a function
    #include <iostream>
    using namespace std;
    
    void func() {
      static int i = 0;
      cout << "i = " << ++i << endl;
    }
    int main() {
      for(int x = 0; x < 10; x++)
        func();
    } ///:~

    Each time func( ) is called in the for loop, it prints a different value. If the keyword static is not used, the value printed will always be '1'.

    The second meaning of static is related to the first in the "unavailable outside a certain scope" sense. When static is applied to a function name or to a variable that is outside of all functions, it means "this name is unavailable outside of this file." The function name or variable is local to the file or has file scope. As a demonstration, compiling and linking the following two files will cause a linker error:

    //: C03:FileStatic.cpp
    // File scope demonstration. Compiling and 
    // linking this file with FileStatic2.cpp
    // will cause a linker error
    
    static int fs; // File scope: only available in this file
    
    int main() {
      fs = 1;
    } ///:~

    Even though the variable fs is claimed to exist as an extern in the following file, the linker won't find it because it has been declared static in FileStatic.cpp.

    //: C03:FileStatic2.cpp {O}
    // Trying to reference fs
    extern int fs;
    void func() {
      fs = 100;
    } ///:~

    The static specifier may also be used inside a class. This definition will be delayed until after classes have been described later in the chapter.

    extern

    The extern keyword was briefly described in chapter XX. It tells the compiler that a piece of data or a function exists, even if the compiler hasn't yet seen it in the file currently being compiled. This piece of data or function may exist in some other file or further on in the current file. As an example of the latter:

    //: C03:Forward.cpp
    // Forward function & data declarations
    #include <iostream>
    using namespace std;
    
    // This is not actually external, but the 
    // compiler must be told it exists somewhere:
    extern int i; 
    extern void foo();
    int main() {
      i = 0;
      foo();
    }
    int i; // The data definition
    void foo() {
      i++;
      cout << i;
    } ///:~

    When the compiler encounters the declaration extern int i; it knows that the definition for i must exist somewhere as a global variable. This definition can be in the current file, later on, or in a separate file. When the compiler reaches the definition of i, no other declaration is visible so it knows it has found the same i declared earlier in the file. If you were to define i as static, you would be telling the compiler that i is defined globally (via the extern), but it also has file scope (via the static), so the compiler will generate an error.

    Linkage

    To understand the behavior of C & C++ programs, you need to know about linkage. Linkage describes the storage created in memory to represent an identifier as it is seen by the linker. An identifier is represented by storage in memory to hold a variable or a compiled function body. There are two types of linkage: internal linkage and external linkage.

    Internal linkage means that storage is created to represent the identifier for the file being compiled only. Other files may use the same identifier with internal linkage, or for a global variable, and no conflicts will be found by the linker -- separate storage is created for each identifier. Internal linkage is specified by the keyword static in C and C++.

    External linkage means that a single piece of storage is created to represent the identifier for all files being compiled. The storage is created once, and the linker must resolve all other references to that storage. Global variables and function names have external linkage. These are accessed from other files by declaring them with the keyword extern. Variables defined outside all functions (with the exception of const in C++) and function definitions default to external linkage. You can specifically force them to have internal linkage using the static keyword. You can explicitly state that an identifier has external linkage by defining it with the extern keyword. Defining a variable or function with extern is not necessary in C, but it is sometimes necessary for const in C++.

    Automatic (local) variables exist only temporarily, on the stack, while a function is being called. The linker doesn't know about automatic variables, and they have no linkage.

    Constants

    In old (pre-Standard) C, if you wanted to make a constant, you had to use the preprocessor:

    #define PI 3.14159

    Everywhere you used PI, the value was substituted by the preprocessor (you can still use this method in C & C++).

    When you use the preprocessor to create constants, you place control of those constants outside the scope of the compiler. No type checking is performed on the name PI and you can't take the address of PI (so you can't pass a pointer or a reference to PI). PI cannot be a variable of a user-defined type. The meaning of PI lasts from the point it is defined to the end of the file; the preprocessor doesn't recognize scoping.

    C++ introduces the concept of a named constant that is just like a variable, except its value cannot be changed. The modifier const tells the compiler that a name represents a constant. Any data type, built-in or user-defined, may be defined as const. If you define something as const and then attempt to modify it, the compiler will generate an error.

    You cannot use the const modifier alone (at one time, it defaulted to int when used by itself). You must specify the type, like this:

    const int x = 10;

    In Standard C and C++, you can use a named constant in an argument list, even if the argument it fills is a pointer or a reference (i.e., you can take the address of a const). A const has a scope, just like a regular variable, so you can "hide" a const inside a function and be sure that the name will not affect the rest of the program.

    The const was taken from C++ and incorporated into Standard C, albeit quite differently. In Standard C, the compiler treats a const just like a variable that has a special tag attached that says "don't change me." When you define a const in Standard C, the compiler creates storage for it, so if you define more than one const with the same name in two different files (or put the definition in a header file), the linker will generate error messages about conflicts. The intended use of const in Standard C is quite different from its intended use in C++.

    Differences in const between C++ and Standard C

    In C++, const replaces the use of #define in most situations requiring a constant value with an associated name. In C++, const is meant to go into header files, and to be used in places where you would normally use a #define name. For instance, C++ lets you use a const in declarations such as arrays:

    const int sz = 100;

    int buf[sz]; // Not allowed in Standard C!

    In Standard C, a const cannot be used where the compiler is expecting a constant expression.

    A const must have an initializer in C++. Standard C doesn't require an initializer; if none is given, the const gets the usual default for its storage class (a file-scope const starts at zero, while a local one holds garbage).

    In C++, a const doesn't necessarily create storage. In Standard C a const always creates storage. Whether or not storage is reserved for a const in C++ depends on how it is used. In general, if a const is used simply to replace a name with a value (just as you would use a #define), then storage doesn't have to be created for the const. If no storage is created (this depends on the complexity of the data type and the sophistication of the compiler), the values may be folded into the code for greater efficiency after type checking, not before, as with #define. If, however, you take an address of a const (even unknowingly, by passing it to a function that takes a reference argument) or you define it as extern, then storage is created for the const.

    In C++, a const that is outside all functions has file scope (i.e., it is invisible outside the file). That is, it defaults to internal linkage. This is very different from all other identifiers in C++ (and from const in Standard C!) that default to external linkage. Thus, if you declare a const of the same name in two different files and you don't take the address or define that name as extern, the ideal compiler won't allocate storage for the const, but simply fold it into the code (admittedly very difficult for complicated types). Because const has implied file scope, you can put it in header files (in C++ only) with no conflicts at link time.

    Since a const in C++ defaults to internal linkage, you can't just define a const in one file and reference it as an extern in another file. To give a const external linkage so it can be referenced from another file, you must explicitly define it as extern, like this:

    extern const int x = 1;

    Notice that by giving it an initializer and saying it is extern, you force storage to be created for the const (although the compiler still has the option of doing constant folding here). The initialization establishes this as a definition, not a declaration. The declaration:

    extern const int x;

    in C++ means that the definition exists elsewhere (again, this is not necessarily true in Standard C). You can now see why C++ requires a const definition to have an initializer: the initializer distinguishes a declaration from a definition (in Standard C it's always a definition, so no initializer is necessary). With an external const declaration, the compiler cannot do constant folding because it doesn't know the value.

    Constant values

    In C++, a const must always have an initialization value (in Standard C, this is not true). Constant values for built-in types are expressed as decimal, octal, hexadecimal, or floating-point numbers (sadly, binary numbers were not considered important), or as characters.

    In the absence of any other clues, the compiler assumes a constant value is a decimal number. The numbers 47, 0 and 1101 are all treated as decimal numbers.

    A constant value with a leading 0 is treated as an octal number (base 8). Base 8 numbers can only contain digits 0-7; the compiler flags other digits as an error. A legitimate octal number is 017 (15 in base 10).

    A constant value with a leading 0x is treated as a hexadecimal number (base 16). Base 16 numbers contain the digits 0-9 and a-f or A-F. A legitimate hexadecimal number is 0x1fe (510 in base 10).

    Floating point numbers can contain decimal points and exponential powers (represented by e, which means "10 to the power"). Both the decimal point and the e are optional. If you assign a constant to a floating-point variable, the compiler will take the constant value and convert it to a floating-point number (this process is called implicit type conversion). However, it is a good idea to use either a decimal point or an e to remind the reader you are using a floating-point number; some older compilers also need the hint.

    Legitimate floating-point constant values are: 1e4, 1.0001, 47.0, 0.0 and -1.159e-77. You can add suffixes to force the type of floating-point number: f or F forces a float, L or l forces a long double, otherwise the number will be a double.

    Character constants are characters surrounded by single quotes, as: 'A', '0', ' '. Notice there is a big difference between the character '0' (ASCII 48) and the value 0. Special characters are represented with the "backslash escape": '\n' (newline), '\t' (tab), '\\' (backslash), '\r' (carriage return), '\"' (double quote), '\'' (single quote), etc. You can also express char constants in octal: '\17' or hexadecimal: '\xff'.

    volatile

    Whereas the qualifier const tells the compiler "this never changes" (which allows the compiler to perform extra optimizations), the qualifier volatile tells the compiler "you never know when this will change," and prevents the compiler from performing optimizations that assume the value is stable. Use this keyword when you read some value outside the control of your program, such as a register in a piece of communication hardware. A volatile variable is always read whenever its value is required, even if it was just read the line before.

    Operators and their use

    Operators were briefly introduced in chapter 2. This section covers all the operators in C & C++.

    All operators produce a value from their operands. This value is produced without modifying the operands, except with assignment, increment and decrement operators. Modifying an operand is called a side effect. The most common use for operators that modify their operands is to generate the side effect, but you should keep in mind that the value produced is available for your use just as in operators without side effects.

    Assignment

    Assignment is performed with the operator =. It means "take the right-hand side (often called the rvalue) and copy it into the left-hand side (often called the lvalue)." An rvalue is any constant, variable, or expression that can produce a value, but an lvalue must be a distinct, named variable (that is, there must be a physical space to store a value). For instance, you can assign a constant value to a variable (A = 4;), but you cannot assign anything to a constant value -- it cannot be an lvalue (you can't say 4 = A;).

    Mathematical operators

    The basic mathematical operators are the same as the ones available in most programming languages: addition (+), subtraction (-), division (/), multiplication (*) and modulus (%, this produces the remainder from integer division). Integer division truncates the result (it doesn't round). The modulus operator cannot be used with floating-point numbers.

    C and C++ also provide a shorthand notation for performing an operation and an assignment at the same time. This is denoted by an operator followed by an equal sign, and is consistent with all the operators in the language (whenever it makes sense). For example, to add 4 to the variable x and assign the result back to x, you write: x += 4;.

    This example shows the use of the mathematical operators:

    //: C03:Mathops.cpp
    // Mathematical operators
    #include <iostream>
    using namespace std;
    
    // A macro to display a string and a value.
    #define print(str, var) cout << str " = " << var << endl
    
    int main() {
      int i, j, k;
      float u,v,w;  // Applies to doubles, too
      cout << "enter an integer: ";
      cin >> j;
      cout << "enter another integer: ";
      cin >> k;
      print("j",j);  print("k",k);
      i = j + k; print("j + k",i);
      i = j - k; print("j - k",i);
      i = k / j; print("k / j",i);
      i = k * j; print("k * j",i);
      i = k % j; print("k % j",i);
      // The following only works with integers:
      j %= k; print("j %= k", j);
      cout << "enter a floating-point number: ";
      cin >> v;
      cout << "enter another floating-point number: ";
      cin >> w;
      print("v",v); print("w",w);
      u = v + w; print("v + w", u);
      u = v - w; print("v - w", u);
      u = v * w; print("v * w", u);
      u = v / w; print("v / w", u);
      // The following works for ints, chars, and doubles too:
      u += v; print("u += v", u);
      u -= v; print("u -= v", u);
      u *= v; print("u *= v", u);
      u /= v; print("u /= v", u);
    } ///:~

    The rvalues of all the assignments can, of course, be much more complex.

    Introduction to preprocessor macros

    Notice the use of the macro print( ) to save typing (and typing errors!). The arguments in the parenthesized list following the macro name are substituted in all the code following the closing parenthesis. The preprocessor removes the name print and substitutes the code wherever the macro is called, so the compiler cannot generate any error messages using the macro name, and it doesn't do any type checking on the arguments (the latter can be beneficial, as shown in the debugging macros at the end of the chapter).

    Operators are just a different kind of function call

    There are two differences between the use of an operator and an ordinary function call. The first is syntax: an operator is "called" by placing it between its arguments (or, for a unary operator, before or after its single argument). The second difference is that the compiler determines which function to call. For instance, if you are using the operator + with floating-point arguments, the compiler "calls" the function to perform floating-point addition (this "call" is sometimes the action of inserting in-line code, or a floating-point coprocessor instruction). If you use operator + with a floating-point number and an integer, the compiler "calls" a special function to turn the int into a float, and then "calls" the floating-point addition code.

    It is important to be aware that operators are simply a different kind of function call. In C++ you can define your own functions for the compiler to call when it encounters operators used with your abstract data types. This feature is called operator overloading and is described in chapter 5.

    Relational operators

    Relational operators establish a relationship between the values of the operands. They produce a value of 1 if the relationship is true, and a value of 0 if the relationship is false. The relational operators are less than (<), greater than (>), less than or equal to (<=), greater than or equal to (>=), equivalent (==) and not equivalent (!=). They may be used with all built-in data types in C and C++. They may be given special definitions for user-defined data types in C++.

    Logical operators

    The logical operators AND (&&) and OR (||) produce a true (1) or false (0) based on the logical relationship of their arguments. Remember that in C and C++, an expression is true if it has a non-zero value, and false if it has a value of zero.

    This example uses the relational and logical operators:

    //: C03:Boolean.cpp
    // Relational and logical operators.
    #include <iostream>
    using namespace std;
    
    int main() {
      int i,j;
      cout << "enter an integer: ";
      cin >> i;
      cout << "enter another integer: ";
      cin >> j;
      cout << "i > j is " << (i > j) << endl;
      cout << "i < j is " << (i < j) << endl;
      cout << "i >= j is " << (i >= j) << endl;
      cout << "i <= j is " << (i <= j) << endl;
      cout << "i == j is " << (i == j) << endl;
      cout << "i != j is " << (i != j) << endl;
      cout << "i && j is " << (i && j) << endl;
      cout << "i || j is " << (i || j) << endl;
      cout << " (i < 10) && (j < 10) is "
           << ((i < 10) && (j < 10))  << endl;
    } ///:~

    You can replace the int definitions with float or double in the above program. Be aware, however, that floating-point comparison is very strict: a number that is the tiniest fraction different from another number is still "not equal," and a number that is the tiniest bit above zero is still true.

    Bitwise operators

    The bitwise operators allow you to manipulate individual bits in a number (thus they only work with integral numbers). Bitwise operators perform boolean algebra on the corresponding bits in the two arguments to produce the result.

    The bitwise AND operator (&) produces a one in the output bit if both input bits are one; otherwise it produces a zero. The bitwise OR operator (|) produces a one in the output bit if either input bit is a one, and only produces a zero if both input bits are zero. The bitwise EXCLUSIVE OR, or XOR (^), produces a one in the output bit if one or the other input bit is a one, but not both. The bitwise NOT (~, also called the ones complement operator) is a unary operator -- it takes only one argument (all other bitwise operators are binary operators). Bitwise NOT produces the opposite of the input bit -- a one if the input bit is zero, a zero if the input bit is one.

    Bitwise operators can be combined with the = sign to unite the operation and assignment: &=, |= and ^= are all legitimate (since ~ is a unary operator it cannot be combined with the = sign).

    Shift operators

    The shift operators also manipulate bits. The left-shift operator (<<) produces the operand to the left of the operator shifted to the left by the number of bits specified after the operator. The right-shift operator (>>) produces the operand to the left of the operator shifted to the right by the number of bits specified after the operator. These are shifts, and not rotates -- even though a rotate command is usually available in assembly language, you can build your own rotate command, so presumably the designers of C felt justified in leaving "rotate" off (aiming, as they said, for a minimal language).

    If the value after the shift operator is greater than or equal to the number of bits in the left-hand operand, the result is undefined. If the left-hand operand is unsigned, the right shift is a logical shift, so the upper bits are filled with zeros. If the left-hand operand is signed, the right shift may be a logical shift or an arithmetic shift (which copies the sign bit into the upper bits), depending on the implementation.

    Shifts can be combined with the equal sign (<<= and >>=). The lvalue is replaced by the lvalue shifted by the rvalue.

    Here's an example that demonstrates the use of all the operators involving bits:

    //: C03:Bitwise.cpp
    // Demonstration of bit manipulation
    #include <iostream>
    using namespace std;
    
    // A macro to print a new-line (saves typing):
    #define NL cout << endl
    // Notice the trailing ';' is omitted -- this forces the
    // programmer to use it and maintain consistent syntax
    // This function takes a single byte and displays it
    // bit-by-bit.  The (1 << i) produces a one in each
    // successive bit position; in binary: 00000001, 00000010, etc.
    // If this bit bitwise ANDed with val is nonzero, it means
    // there was a one in that position in val.
    void print_binary(const unsigned char val) {
      for(int i = 7; i >= 0; i--)
        if(val & (1 << i))
          cout << "1";
        else
          cout << "0";
    }
    // Generally, you don't want signs when you are working with
    // bytes, so you use an unsigned char.
    int main() {
      // An int must be used instead of a char here because the
      // "cin >>" statement will otherwise treat the first digit
      // as a character. By assigning getval to a and b, the value
      // is converted to a single byte (by truncating it)
      unsigned int getval;
      unsigned char a,b;
      cout << "enter a number between 0 and 255: ";
      cin >> getval; a = getval;
      cout << "a in binary: "; print_binary(a); cout << endl;
      cout << "enter another number between 0 and 255: ";
      cin >> getval; b = getval;
      cout << "b in binary: "; print_binary(b); NL;
      cout << "a | b = "; print_binary(a | b); NL;
      cout << "a & b = "; print_binary(a & b); NL;
      cout << "a ^ b = "; print_binary(a ^ b); NL;
      cout << "~a = "; print_binary(~a); NL;
      cout << "~b = "; print_binary(~b); NL;
      unsigned char c = 0x5A; // Interesting bit pattern
      cout << "c in binary: "; print_binary(c); NL;
      a |= c;
      cout << "a |= c; a = "; print_binary(a); NL;
      b &= c;
      cout << "b &= c; b = "; print_binary(b); NL;
      b ^= a;
      cout << "b ^= a; b = "; print_binary(b); NL;
    } ///:~

    Here are functions to perform left and right rotations:

    //: C03:Rolror.cpp {O}
    // Perform left and right rotations
    
    unsigned char ROL(unsigned char val) {
      int highbit;
      if(val & 0x80) // 0x80 is the high bit only
        highbit = 1;
      else
        highbit = 0;
      val <<= 1;  // Left shift (bottom bit becomes 0)
      val |= highbit;  // Rotate the high bit onto the bottom
      return val;  // This becomes the function value
    }
    
    unsigned char ROR(unsigned char val) {
      int lowbit;
      if(val & 1) // Check the low bit
        lowbit = 1;
      else
        lowbit = 0;
      val >>= 1; // Right shift by one position
      val |= (lowbit << 7); // Rotate the low bit onto the top
      return val;
    } ///:~

    Try using these functions in the BITWISE program. Notice the definitions (or at least declarations) of ROL( ) and ROR( ) must be seen by the compiler in BITWISE.CPP before the functions are used.

    The bitwise functions are generally extremely efficient to use because they translate directly into assembly language statements. Sometimes a single C or C++ statement will generate a single line of assembly code.

    Unary operators

    Bitwise NOT isn't the only operator that takes a single argument. Its companion, the logical NOT (!), inverts a truth value: it turns a true value (nonzero) into false (zero), and a false value into true. The unary minus (-) and unary plus (+) are the same operators as binary minus and plus -- the compiler figures out which usage is intended by the way you write the expression. For instance, the statement

    x = -a;

    has an obvious meaning. The compiler can figure out:

    x = a * -b;

    but the reader might get confused, so it is safer to say:

    x = a * (-b);

    The unary minus produces the negative of the value. Unary plus provides symmetry with unary minus, although it doesn't do much.

    The increment and decrement operators (++ and --) were introduced in chapter 2. These are the only operators other than those involving assignment that have side effects. The increment operator increases the variable by one unit ("unit" can have different meanings according to the data type -- see the chapter on pointers) and the decrement operator decreases the variable by one unit. The value produced depends on whether the operator is used as a prefix or postfix operator (before or after the variable). Used as a prefix, the operator changes the variable and produces the changed value. As a postfix, the operator produces the unchanged value and then the variable is modified.

    The last unary operators are the address-of (&), dereference (*) and cast operators in C and C++, and new and delete in C++. Address-of and dereference are used with pointers, which will be described in Chapter 4. Casting is described later in this chapter, and new and delete are described in Chapter 6.

    Conditional operator or ternary operator

    This operator is unusual because it has 3 operands. It is truly an operator because it produces a value, unlike the ordinary if-else statement. It consists of three expressions: if the first expression (followed by a ?) evaluates to true, the expression following the ? is evaluated and its result becomes the value produced by the operator. If the first expression is false, the third expression (following a :) is executed and its result becomes the value produced by the operator.

    The conditional operator can be used for its side effects or for the value it produces. Here's a code fragment that demonstrates both:

    A = --B ? B : (B = -99);

    Here, the conditional produces the rvalue. A is assigned the value of B if the result of decrementing B is nonzero. If B became zero, A and B both get -99. B is always decremented, but it is assigned -99 only if the decrement causes B to become 0. A similar statement can be used without the "A =" just for its side effects:

    --B ? B : (B = -99);

    Here the second B is superfluous, since the value produced by the operator is unused. An expression is required between the ? and :. In this case the expression could simply be a constant that might make the code run a bit faster.

    The comma operator

    The comma is not restricted to separating variable names in multiple definitions (i.e.: int i, j, k;). When used as an operator to separate expressions, it produces only the value of the last expression. All the rest of the expressions in the comma-separated list are only evaluated for their side effects. This code fragment increments a list of variables and uses the last one as the rvalue:

    A = (B++,C++,D++,E++);

    The parentheses are critical here. Without them, the statement will evaluate to:

    (A = B++), C++, D++, E++;

    Common pitfalls when using operators

    As illustrated above, one of the pitfalls when using operators is trying to get away without parentheses when you are even the least bit uncertain about how an expression will evaluate (consult your local C manual for the order of expression evaluation).

    Another extremely common error looks like this:

    //: C03:Pitfall.cpp
    // Operator mistakes
    
    int main() {
      int a = 1, b = 1;
      while(a = b) {
        // ....
      }
    } ///:~

    The statement a = b will always evaluate to true when b is non-zero. The variable a is assigned the value of b, and that value is also produced by the operator =. Generally you want the equivalence operator == inside a conditional statement, not assignment. This one bites a lot of programmers.

    A similar problem is using bitwise AND and OR instead of logical. Bitwise AND and OR use one of the characters (& or |) while logical AND and OR use two (&& and ||). Just as with = and ==, it's easy to just type one character instead of two.

    Casting operators

    The word Cast in C is used in the sense of "casting into a mold." C will automatically change one type of data into another if it makes sense to the compiler. For instance, if you assign an integral value to a floating-point variable, the compiler will secretly call a function (or more probably, insert code) to convert the int to a float. Casting allows you to make this type conversion explicit, or to force it when it wouldn't normally happen.

    To perform a cast, put the desired data type (including all modifiers) inside parentheses to the left of the value. This value can be a variable, constant, the value produced by an expression or the return value of a function. Here's an example:

    int B = 200;

    A = (unsigned long int)B;

    You can even define casting operators for user-defined data types. Casting is very powerful, but it can cause some headaches because in some situations it forces the compiler to treat data as if it were (for instance) larger than it really is, so it will occupy more space in memory -- this can trample over other data. This usually occurs when casting pointers, not when making simple casts like the one shown above.

    C++ has an additional kind of casting syntax, which follows the "function-call" syntax used with constructors (defined later in this chapter). This syntax puts the parentheses around the argument, like a function call, rather than around the data type:

    float A = float(200);

    This is equivalent to:

    float A = (float)200;

    sizeof -- an operator by itself

    The sizeof( ) operator stands alone because it satisfies an unusual need. sizeof( ) gives you information about the amount of memory allocated for data items. As described earlier in this chapter, sizeof( ) tells you the number of bytes used by any particular variable. It can also give the size of a data type (with no variable name):

    printf("sizeof(double) = %d\n", (int)sizeof(double));

    sizeof( ) can also give you the sizes of user-defined data types. This is used later in the book.

    The asm keyword

    This is an escape mechanism that allows you to write assembly code for your hardware within a C++ program. Often you're able to reference C++ variables within the assembly code, which means you can easily communicate with your C++ code and limit the assembly code to that necessary for efficiency tuning or to utilize special processor instructions. The exact syntax of the assembly language is compiler-dependent and can be discovered in your compiler's documentation.

    Explicit operators

    These are keywords for bitwise and logical operators. Non-U.S. programmers whose keyboards lacked characters like &, |, ^, and so on, were forced to use C's horrible trigraphs, which were not only annoying to type, but obscure to read. This is repaired in C++ by the addition of new keywords:

    Keyword   Meaning
    -------   -------
    and       && (logical AND)
    or        || (logical OR)
    not       !  (logical NOT)
    not_eq    != (logical not-equivalent)
    bitand    &  (bitwise AND)
    and_eq    &= (bitwise AND-assignment)
    bitor     |  (bitwise OR)
    or_eq     |= (bitwise OR-assignment)
    xor       ^  (bitwise exclusive-OR)
    xor_eq    ^= (bitwise exclusive-OR-assignment)
    compl     ~  (ones complement)

    Creating functions

    Most modern languages have an ability to create named subroutines or subprograms. In C++, a subprogram is called a function. All functions have return values (although that value can be "nothing") so functions in C++ are very similar to functions in Pascal. (The Pascal procedure is the specialized case of a function with no return value. It hardly seems worthwhile to give it a separate name.)

    Function prototyping

    You have been seeing function prototyping in this book described as "telling the compiler that a function exists, and how it is called." Now it's time for more details.

    In old (pre-Standard) C, you could call a function with any number or type of arguments, and the compiler wouldn't complain. Everything seemed fine until you ran the program. You got mysterious results (or worse, the program crashed) with no hints as to why. The lack of help with argument passing and the enigmatic bugs that resulted is probably one reason why C was dubbed a "high-level assembly language." Pre-Standard C programmers just adapted to it.

    With function prototyping, you always use a prototype when declaring and defining a function. When the function is called, the compiler uses the prototype to insure the proper arguments are passed in, and that the return value is treated correctly. If the programmer makes a mistake when calling the function, the compiler catches the mistake.

    Telling the compiler how arguments are passed

    In a function prototype, the argument list (which follows the name and is surrounded by parentheses) contains the types of arguments that must be passed to the function and (optionally for the declaration) the names of the arguments. The order and type of the arguments must match in the declaration, definition and function call. Here's an example of a function prototype in a declaration:

    int translate(float x, float y, float z);

    In function argument lists you cannot use the shorthand allowed in ordinary variable definitions (i.e., float x, y, z); you must indicate the type of each argument. In a function declaration, the following form is also acceptable:

    int translate(float, float, float);

    since the compiler doesn't do anything but check for types when the function is called.

    In the function definition, names are required because the arguments are referenced inside the function:

    int translate(float x, float y, float z) {

    x = y = z;

    // ...

    }

    The only exception to this rule occurs in C++: an argument may be unnamed in the argument list of the function definition. Since it is unnamed, you cannot use it in the function body, of course. Unnamed arguments are allowed to give the programmer a way to "reserve space in the argument list": you must still call the function with the proper arguments, but you can put the argument to use in the future without modifying any of the other code. You could ignore an argument while leaving the name in, but you will get an obnoxious warning message about the value being unused every time you compile the function; the warning is eliminated if you remove the name.

    Standard C and C++ have two other ways to declare an argument list. If you have an empty argument list you can declare it as foo( ) in C++, which tells the compiler there are exactly zero arguments. Remember this only means an empty argument list in C++. In Standard C it means "an indeterminate number of arguments," which is a "hole" in Standard C since it disables type checking in that case. In both Standard C and C++, the declaration foo(void); means an empty argument list. The void keyword means "nothing" in this case (it can also mean "no type" when applied to certain variables).

    The other option for argument lists occurs when you don't know how many arguments or what type of arguments you will have; this is called a variable argument list. This "uncertain argument list" is represented by ellipses (...). Defining a variable argument list is significantly more complicated than a plain function. You can use a variable argument list declaration for a function that has a fixed set of arguments if (for some reason) you want to disable the error checks of function prototyping. Handling variable argument lists is described in the library section of your local Standard C guide.

    Function return values

    A function prototype may also specify the return value of a function. The type of this value precedes the function name. If no type is given, the return value type defaults to int in C (most things in C default to int), although Standard C++ requires the return type to be stated explicitly. If you want to specify that no value is returned, as in a Pascal procedure, the void keyword is used. This will generate an error if you try to return a value from the function. Here are some complete function prototypes:

    foo1(void); // Returns an int, takes no arguments

    foo2(); // Like foo1(void) in C++, but not in Standard C!

    float foo3(float, int, char, double); // Returns a float

    void foo4(void); // Takes no arguments, returns nothing

    At this point, you may wonder how to specify a return value in the function definition. This is done with the return statement. return exits the function, back to the point right after the function call. If return has an argument, it becomes the return value of the function. You can have more than one return statement in a function definition:

    //: C03:Return.cpp
    // Use of "return"
    #include <iostream>
    using namespace std;
    
    char cfunc(const int i) {
      if(i == 0)
        return 'a';
      if(i == 1)
        return 'g';
      if(i == 5)
        return 'z';
      return 'c';
    }
    
    int main() {
      cout << "type an integer: ";
      int val;
      cin >> val;
      cout << cfunc(val) << endl;
    } ///:~

    The code in cfunc( ) acts like an if-else statement. The else is unnecessary because the first if that evaluates true causes an exit of the function via the return statement. Notice that a function declaration is not necessary because the function definition appears before it is used in main( ), so the compiler knows about it. Arguments and return values are covered in detail in chapter 9.

    Using the C function library

    All the functions in your local C function library are available while you are programming in C++. You should look hard at the function library before defining your own function -- chances are, someone has solved the problem for you, and probably given it a lot more thought (as well as debugging!).

    A word of caution, though: many compilers include a lot of extra functions that make life even easier and are very tempting to use, but are not part of the Standard C library. If you are certain you will never want to move the application to another platform (and who is certain of that?), go ahead -- use those functions and make your life easier. If you want your application to be portable, you should restrict yourself to Standard C functions (this is safe because the Standard C library is part of C++). Keep a guide to Standard C handy and refer to that when looking for a function rather than your local C or C++ guide. If you must perform platform-specific activities, try to isolate that code in one spot so it can easily be changed when porting to another platform. Platform-specific activities are often encapsulated in a class -- this is the ideal solution.

    The formula for using a library function is as follows: first, find the function in your guidebook (many guidebooks will index the function by category as well as alphabetically). The description of the function should include a section that demonstrates the syntax of the code. The top of this section usually has at least one #include line, showing you the header file containing the function prototype. Duplicate this #include line in your file, so the function is properly declared. Now you can call the function in the same way it appears in the syntax section. If you make a mistake, the compiler will discover it by comparing your function call to the function prototype in the header, and tell you about your error. The linker searches the standard library by default, so that's all you need to do: include the header file, and call the function.

    Creating your own libraries with the librarian

    You can collect your own functions together into a library, or add new functions to the library the linker secretly searches (you should back up the old one before doing this). Most packages come with a librarian that manages groups of object modules. Each librarian has its own commands, but the general idea is this: if you want to create a library, make a header file containing the function prototypes for all the functions in your library. Put this header file somewhere in the preprocessor's search path, either in the local directory (so it can be found by #include "header") or in the include directory (so it can be found by #include <header>). Now take all the object modules and hand them to the librarian along with a name for the finished library (most librarians require a common extension, such as .LIB). Place the finished library in the same spot the other libraries reside, so the linker can find it. When you use your library, you will have to add something to the command line so the linker knows to search the library for the functions you call. You must find all the details in your local manual, since they vary from system to system.

    The header file

    When you create a class, you are creating a new data type. Generally, you want this type to be easily accessible to yourself and others. In addition, you want to separate the interface (the class declaration) from the implementation (the definition of the class member functions) so the implementation can be changed without forcing a re-compile of the entire system. You achieve this end by putting the class declaration in a header file.

    Function collections & separate compilation

    Instead of putting the class declaration, the definition of the member functions and the main( ) function in the same file, it is best to isolate the class declaration in a header file that is included in every file where the class is used. The definitions of the class member functions are also separated into their own file. The member functions are debugged and compiled once, and are then available as an object module (or in a library, if the librarian is used) for anyone who wants to use the class. The user of the class simply includes the header file, creates objects (instances) of that class, and links in the object module or library (i.e.: the compiled code).

    The concept of a collection of associated functions combined into the same object module or library, and a header file containing all the declarations for the functions, is very standard when building large projects in C. It is de rigueur in C++: you could throw any function into a collection in C, but the class in C++ determines which functions are associated by dint of their common access to the private data. Any member function for a class must be declared in the class declaration; you cannot put it in some separate file. The use of function libraries was encouraged in C and institutionalized in C++.

    Importance of using a common header file

    When using a function from a library, C allows you the option of ignoring the header file and simply declaring the function by hand. You may want the compiler to speed up just a bit by avoiding the task of opening and including the file. For example, here's an extremely lazy declaration of the C function printf( ):

    printf(...);

    It says: printf( ) takes some number of arguments, each of some type -- just accept whatever arguments appear. By using this kind of declaration, you suspend all error checking on the arguments.

    This practice can cause subtle problems. If you declare functions by hand in each different file, you may make a mistake the compiler accepts in a particular file. The program will link correctly, but the use of the function in that one file will be faulty. This is a tough error to find, and is easily avoided.

    If you place all your function declarations in a header file, and include that file everywhere you use the function (and especially where you define the function) you insure a consistent declaration across the whole system. You also insure that the declaration and the definition match by including the header in the definition file.

    C does not enforce this practice. It is very easy, for instance, to leave the header file out of the function definition file. Header files often confuse the novice programmer (who may ignore them or use them improperly).

    If a class is declared in a header file in C++, you must include the header file everywhere a class is used and where class member functions are defined. The compiler will give an error message if you try to call a function without declaring it first. By enforcing the proper use of header files, the language ensures consistency in libraries, and reduces bugs by forcing the same interface to be used everywhere.

    There was an additional problem in earlier releases of the language. When you overloaded ordinary (non-member) functions, the order of overloading was important. If you used the same function names in separate header files, you could change the order of overloading without knowing it, simply by including the files in a different order. The compiler didn't complain, but the linker did -- it was mystifying. This problem existed in C++ compilers following AT&T releases up through 1.2. It was solved by a change in the language called type-safe linkage (described later in the book).

    Preventing re-declaration of classes

    When you put a class declaration in a header file, it is possible for the file to be included more than once in a complicated program. The streams class is a good example. Any time a class does I/O (especially in inline functions) it may include the streams class. If the file you are working on uses more than one kind of class, you run the risk of including the streams header more than once and re-declaring streams.

    The compiler considers the re-declaration of a class to be an error, since it would otherwise allow you to use the same name for different classes. To prevent this error when multiple header files are included, you need to build some intelligence into your header files using the preprocessor (the streams class already has this "intelligence").

    The preprocessor directives
    #define, #ifdef and #endif

    As shown earlier in this chapter, #define will create preprocessor macros that look similar to function definitions. #define can also create flags. You have two choices: you can simply tell the preprocessor that the flag is defined, without specifying a value:

    #define FLAG

    or you can give it a value (which is the pre-Standard C way to define a constant):

    #define PI 3.14159

    In either case, the label can now be tested by the preprocessor to see if it has been defined:

    #ifdef FLAG

    will yield a true result, and the code following the #ifdef will be included in the package sent to the compiler. This inclusion stops when the preprocessor encounters the statement

    #endif

    or

    #endif // FLAG

    Any non-comment after the #endif on the same line is illegal, even though some compilers may accept it. The #ifdef/#endif pairs may be nested within each other.

    The complement of #define is #undef (short for "un-define"), which will make an #ifdef statement using the same variable yield a false result. #undef will also cause the preprocessor to stop using a macro. The complement of #ifdef is #ifndef, which will yield a true if the label has not been defined (this is the one we use in header files).

    There are other useful features in the C preprocessor. You should check your local guide for the full set.

    Standard for each class header file

    In each header file that contains a class, you should first check to see if the file has already been included in this particular code file. You do this by checking a preprocessor flag. If the flag isn't set, the file wasn't included and you should set the flag (so the class can't get re-declared) and declare the class. If the flag was set the class has already been declared so you should just ignore the code declaring the class. Here's how the header file should look:

    #ifndef CLASS_FLAG_

    #define CLASS_FLAG_

    // Class declaration here...

    #endif // CLASS_FLAG_

    As you can see, the first time the header file is included, the class declaration will be included by the preprocessor but all the subsequent times the class declaration will be ignored. The name CLASS_FLAG_ can be any unique name, but a reliable standard to follow is to take the name of the header file and replace periods with underscores, and follow it with a trailing underscore (leading underscores are reserved by Standard C for system names). Here's an example:

    //: C03:Simple.h
    // Simple class that prevents re-definition
    #ifndef SIMPLE_H_
    #define SIMPLE_H_
    
    class Simple {
      int i,j,k;
    public:
      Simple() { i = j = k = 0; }
    };
    
    #endif // SIMPLE_H_ ///:~

    Although the SIMPLE_H_ after the #endif is inside a comment and thus ignored by the preprocessor, it is useful for documentation.

    Portable inclusion of header files

    C++ was created in a Unix environment, where the file names have case sensitivity. Thus, Unix programmers could name C header files as header.h and C++ header files with a capital H, as header.H. This didn't translate to some other systems such as MS-DOS, so programmers there distinguished C++ header files with .HXX or .HPP. Thus you will sometimes see old header files with these extensions. However, the common practice now is to name C++ header files the same as C header files: header.h. It turns out that using the same naming convention as C is not a problem since programmers must know what they are doing when including a header file, and the compiler will catch the error if you try to include a C++ header in a C compilation. All header files in this book use the .h convention.

    struct: a class with all elements public

    The data structure keyword struct was developed for C so a programmer could group together several pieces of data and treat them as a single data item. As you can imagine, the struct is an early attempt at abstract data typing (without the associated member functions). In C, you must create non-member functions that take your struct as an argument. There is no concept of private data, so anyone (not just the functions you define) can change the elements of a struct.

    C++ will accept any struct you can declare in C (so it's upward compatible). However, C++ expands the definition of a struct so it is just like a class, except that a class defaults to private while a struct defaults to public. Any struct you define in C++ can have member functions, constructors, a destructor, etc. Although the struct is an artifact from C, in C++ it simply emphasizes that all elements are public. You can make a class work just like a struct by putting public: at the beginning of your class. Notice that a struct in Standard C doesn't have constructors, destructors or member functions.

    As you can see from this example, all the elements in a struct are public:

    //: C03:Struct.cpp
    // Demonstration of structures vs classes
    
    class CL {
      int i, j, k;
    public:
      CL(int init = 0) { i = j = k = init; }
    };
    
    struct ST {
      int i, j, k;
      // Don't need to say "public."  Everything is public!
      ST (int init = 0) { i = j = k = init; }
    };
    
    int main() {
      CL A(10);
      ST B(11);
      B.i = 44; // This is OK
    //! A.i = 44; // This will cause an error!
    } ///:~

     

    Clarifying programs with enum

    An enumerated data type is a way of attaching names to numbers, thereby giving more meaning to anyone reading the code. The enum keyword (from C) automatically enumerates any list of words you give it by assigning them values of 0, 1, 2, etc. You can declare enum variables (which are always ints). The declaration of an enum looks similar to a class declaration, but an enum cannot have any member functions.

    An enumerated data type is very useful when you want to keep track of some sort of feature:

    //: C03:Enum.cpp
    // Keeping track of shapes.
    
    enum shape_type {
      circle,
      square,
      rectangle
    };  // Must end with a semicolon like a class
    
    int main() {
      shape_type shape = circle;
      // Activities here....
      // Now do something based on what the shape is:
      switch(shape) {
        case circle:  /* circle stuff */ break;
        case square:  /* square stuff */ break;
        case rectangle:  /* rectangle stuff */ break;
      }
    } ///:~

    shape is a variable of the shape_type enumerated data type, and its value is compared with the values in the enumeration. Since shape is really just an int, however, it can hold any value an int can hold (including a negative number). You can also compare an int variable with a value in the enumeration.

    If you don't like the way the compiler assigns values, you can do it yourself, like this:

    enum shape_type { circle = 10, square = 20, rectangle = 50};

    If you give values to some names and not to others, the compiler will use the next integral value. For example,

    enum snap { crackle = 25, pop };

    The compiler gives pop the value 26.

    You can see how much more readable the code is when you use enumerated data types.

    Saving memory with union

    Sometimes a program will handle different types of data using the same variable. In this situation, you have two choices: you can create a class or struct containing all the possible different types you might need to store, or you can use a union. A union piles all the data into a single space; it figures out the amount of space necessary for the largest item you've put in the union, and makes that the size of the union. Use a union to save memory.

    Anytime you place a value in a union, the value always starts in the same place at the beginning of the union, but only uses as much space as is necessary. Thus, you create a "super-variable," capable of holding any of the union variables. All the addresses of the union variables are the same (in a class or struct, the addresses are different).

    Here's a simple use of a union. Try removing various elements and see what effect it has on the size of the union. Notice that it makes no sense to declare more than one instance of a single data type in a union (unless you're just doing it to use a different name).

    //: C03:Union.cpp
    // The size and simple use of a union
    #include <iostream>
    using namespace std;
    
    union packed { // Declaration similar to a class
      char i;
      short j;
      int k;
      long l;
      float f;
      double d;  // The union will be the size of a double,
                 // since it's the largest element
    };  // Semicolon ends a union, like a class
    
    int main() {
      cout << "sizeof(packed) = " << sizeof(packed) << endl;
      packed X;
      X.i = 'c';
      X.d = 3.14159;
    } ///:~

    The compiler performs the proper assignment according to the union member you select.

    Once you perform an assignment, the compiler doesn't care what you do with the union. In the above example, you could assign a floating-point value to X:

    X.f = 2.222;

    and then send it to the output as if it were an int:

    cout << X.i;

    This would produce complete garbage.

    C++ allows a union to have a constructor, destructor and member functions just like a class:

    //: C03:Union2.cpp
    //  Unions with constructors and member functions
    
    union U {
      int i;
      float f;
      U(int a) { i = a; }
      U(float b) { f = b;}
      ~U() { f = 0; }
      int read_int() { return i; }
      float read_float() { return f; }
    };
    
    int main() {
      U X(12), Y(1.9F);
      X.i = 44;
      X.read_int();
      Y.read_float();
    } ///:~

    Although the member functions civilize access to the union somewhat, there is still no way to prevent the user from selecting the wrong element once the union is initialized. A "safe" union can be encapsulated in a class like this (notice how the enum clarifies the code):

    //: C03:SuperVar.cpp
    // A super-variable
    #include <iostream>
    using namespace std;
    
    class SuperVar {
      enum {
        character,
        integer,
        floating_point
      } vartype;  // Define one
      union {  // Anonymous union
        char c;
        int i;
        float f;
      };
    public:
      SuperVar(char ch) {
        vartype = character;
        c = ch;
      }
      SuperVar(int ii) {
        vartype = integer;
        i = ii;
      }
      SuperVar(float ff) {
        vartype = floating_point;
        f = ff;
      }
      void print();
    };
    
    void SuperVar::print() {
      switch (vartype) {
        case character:
          cout << "character: " << c << endl;
          break;
        case integer:
          cout << "integer: " << i << endl;
          break;
        case floating_point:
          cout << "float: " << f << endl;
          break;
      }
    }
    
    int main() {
      SuperVar A('c'), B(12), C(1.44F);
      A.print();
      B.print();
      C.print();
    } ///:~

    In the above code, the enum has no type name (it is an untagged enumeration). This is acceptable if you are going to immediately define instances of the enum, as is done here. There is no need to refer to the enum's type in the future, so the type is optional.

    The union has no type name and no variable name. This is called an anonymous union, and creates space for the union but doesn't require accessing the union elements with a variable name and the dot operator. For instance, if your anonymous union is:

    union { int i; float f; };

    you access members by saying:

    i = 12;

    f = 1.22;

    just like other variables. The only difference is that both variables occupy the same space. If the anonymous union is at file scope (outside all functions and classes) then it must be declared static so it has internal linkage.

    Debugging flags

    If you hard-wire your debugging code into a program, you can run into problems. You start to get too much information, which makes the bugs difficult to isolate. When you think you've found the bug you start tearing out debugging code, only to find you need to put it back in again. You can solve these problems with two types of flags: preprocessor debugging flags and run-time debugging flags.

    Preprocessor debugging flags

    By using the preprocessor to #define one or more debugging flags (preferably in a header file), you can test a flag using a #ifdef statement to conditionally include debugging code. When you think your debugging is finished, you can simply #undef the flag(s) and the code will automatically be removed (and you'll reduce the size of your executable file).

    It is best to decide on names for debugging flags before you begin building your project so the names will be consistent. Preprocessor flags are often distinguished from variables by writing them in all upper case. A common flag name is simply DEBUG (but be careful you don't use NDEBUG, which is reserved in Standard C). The sequence of statements might be:

    #define DEBUG // Probably in a header file

    //...

    #ifdef DEBUG // Check to see if flag is defined

    /* debugging code here */

    #endif // DEBUG

    Many C and C++ implementations will even let you #define and #undef flags from the compiler command line, so you can re-compile code and insert debugging information with a single command (preferably via the makefile). Check your local guide for details.

    Run-time debugging flags

    In some situations it is more convenient to turn debugging flags on and off during program execution, and it is more elegant to do so when the program starts up, using the command line (see Chapter 4 for details of using the command line). Large programs are tedious to recompile just to insert debugging code.

    You can create integer flags and use the fact that nonzero values are true to increase the readability of your code. For instance:

    int debug = 0; // Default off

    char reply; // Holds the user's answer

    //..

    cout << "turn debugger on? (y/n): ";

    cin >> reply;

    if(reply == 'y') debug++; // Turn flag on

    //..

    if(debug) {

    // Debugging code here

    }

    Notice that the variable is in lower case letters to remind the reader it isn't a preprocessor flag.

    Turning a variable name into a string

    When writing debugging code, it is tedious to write print expressions consisting of a string containing the variable name followed by the variable. Fortunately, Standard C introduced the "string-ize" operator #. When you put a # before an argument in a preprocessor macro, that argument is turned into a string by putting quotes around it. This, combined with the fact that strings with no intervening punctuation are concatenated into a single string, allows us to make a very convenient macro for printing the values of variables during debugging:

    #define PR(x) cout << #x " = " << x << "\n";

    If you print the variable A by calling the macro PR(A), it will have the same effect as the code:

    cout << "A = " << A << "\n";

     

    The Standard C assert( ) macro

    assert( ) is a very convenient debugging macro. When you use assert( ), you give it an argument that is an expression you are "asserting to be true." The preprocessor generates code that will test the assertion. If the assertion isn't true, the program will stop after issuing an error message telling you what the assertion was and that it failed. Here's a trivial example:

    //: C03:Assert.cpp
    // Use of the assert() debugging macro
    #include <cassert>  // Contains the macro
    using namespace std;
    
    int main() {
      int i = 100;
      assert(i != 100);
    } ///:~

    The macro is defined in the Standard C library header file assert.h, which C++ provides as &lt;cassert&gt;. When you are finished debugging, you can remove the code generated by the macro simply by placing the line:

    #define NDEBUG

    in the program before the inclusion of assert.h, or by defining NDEBUG on the compiler command line. NDEBUG is a flag used in assert.h to change the way code is generated by the macros.

    Debugging techniques combined

    By combining the techniques discussed in this section, a framework arises that you can follow when writing your own debugging code. Keep in mind that if you want to isolate certain types of debugging code you can create variables debug1, debug2, etc., and preprocessor flags DEBUG1, DEBUG2, etc.

    The following example shows the use of command-line flags, formally introduced in the next chapter. It is better to show you the right way to do something and risk confusing you for a bit rather than teaching you some method that will later need to be un-learned.

    The flags on the command line are accessed through the arguments to main( ), called argc and argv.

    //: C03:Debug2.cpp
    // Framework for writing debug code
    #include <iostream>
    #include <fstream>
    #include <cstdlib>
    using namespace std;
    #define DEBUG
    
    int main(int argc, char * argv[]) {
      int debug = 0;
      if(argc > 1) { // If more than one argument
        if (*argv[1] == 'd')
          debug++; // Set the debug flag
        else {
          cout << "usage: debug2   OR   debug2 d" << endl
               << "optional flag turns debugger on." << endl;
          exit(1);  // Quit program
        }
      }
      // ....
    #ifdef DEBUG
      if(debug)
        cout << "debugger on" << endl;
    #endif // DEBUG
      // ...
    } ///:~

    All the debugging code occurs between the

    #ifdef DEBUG

    and

    #endif //DEBUG

    lines. If you type on the command line:

    debug2

    nothing will happen, but if you type

    debug2 d

    The "debugger" will be turned on. When you want to remove the debugging code at some later date to reduce the size of the executable program, simply change the #define DEBUG to a #undef DEBUG (or better yet, do it from the compiler command line).

    Bringing it all together:
    project-building tools

    When using separate compilation (breaking code into a number of translation units), you need some way to compile them all and to tell the linker to put them with the appropriate libraries and startup code into an executable file. Most compilers allow you to do this with a single command-line statement. For a compiler named cpp, for example, you might say

    cpp Libtest.cpp lib.cpp

    The problem with this approach is that the compiler will first compile each individual translation unit, regardless of whether it needs to be rebuilt or not. With many files in a project, it can get very tedious to recompile everything if you've only changed a single file.

    The first solution to this problem, developed on Unix (which is where C was created), was a program called make. Make compares the date on the source-code file to the date on the object file, and if the object-file date is earlier than the source-code file, make invokes the compiler on the source.

    Because make is available in some form for virtually all C++ compilers (and even if it isn't, you can use freely-available makes with any compiler), it will be the tool used throughout this book. However, compiler vendors also came up with their own project building tools. These tools ask you which translation units are in your project, and determine all the relationships themselves. They have something similar to a makefile, generally called a project file, but the programming environment maintains this file so you don't have to worry about it. The configuration and use of project files vary from system to system, so it will be assumed here that you are using the project-building tool of your choice to create these programs, and that you will find the appropriate documentation on how to use them (although project file tools provided by compiler vendors are usually so simple to use that you can learn them quite effortlessly). The makefiles used within this book should work regardless of whether you are also using a specific vendor's project-building tool.

    File names

    One other issue you should be aware of is file naming. In C, it has been traditional to name header files (containing declarations) with an extension of .h and implementation files (that cause storage to be allocated and code to be generated) with an extension of .c. C++ went through an evolution. It was first developed on Unix, where the operating system was aware of upper and lower case in file names. The original file names were simply capitalized versions of the C extensions: .H and .C. This of course didn't work for operating systems that didn't distinguish upper and lower case, like MS-DOS. DOS C++ vendors used extensions of .hxx and .cxx for header files and implementation files, respectively, or .hpp and .cpp. Later, someone figured out that the only reason you needed a different extension for a file was so the compiler could determine whether to compile it as a C or C++ file. Because the compiler never compiled header files directly, only the implementation file extension needed to be changed. The custom, virtually across all systems, has now become to use .cpp for implementation files and .h for header files.

    Make: an essential tool for separate compilation

    There is one more tool you should understand before creating programs in C++. The make utility manages all the individual files in a project. When you edit the files in a project, make ensures that only the source files that were changed, and other files that are affected by the modified files, are re-compiled. By using make, you don't have to re-compile all the files in your project every time you make a change. make also remembers all the commands to put your project together. Learning to use make will save you a lot of time and frustration.

    make was developed on Unix. The C language was developed to write the Unix operating system. As programs encompassed more and more files, the job of deciding which files should be recompiled because of changes became tedious and error-prone, so make was invented. Most C compilers come with a make program. All C++ packages either come with a make, or are used with a C compiler that has a make.

    Make activities

    When you type make, the make program looks in the current directory for a file named makefile, which you've created if it's your project. This file lists dependencies between source code files. make looks at the dates on files. If a dependent file has an older date than a file it depends on, make executes the rule given after the dependency.

    All comments in makefiles start with a # and continue to the end of the line.

    As a simple example, the makefile for the "hello" program might contain:

    # A comment

    hello.exe: hello.cpp

    g++ hello.cpp

    This says that hello.exe (the target) depends on hello.cpp. When hello.cpp has a newer date than hello.exe, make executes the rule g++ hello.cpp. There may be multiple dependencies and multiple rules. All the rules must begin with a tab.

    By creating groups of interdependent dependency-rule sets, you can modify source code files, type make and be certain that all the affected files will be re-compiled correctly.

    Macros

    A makefile may contain macros. Macros allow convenient string replacement. The makefiles in this book use a macro to invoke the C++ compiler. For example,

    #Macro to invoke Gnu C++

    CPP = g++

    hello.exe: hello.cpp

    $(CPP) hello.cpp

    The $ and parentheses expand the macro. To expand means to replace the macro call $(CPP) with the string g++. With the above macro, if you want to change to a different compiler you just change the macro to:

    CPP = cpp

    You can also add compiler flags, etc., to the macro.

    Makefiles in this book

    Using the program ExtractCode.cpp which is shown in Chapter XX, all the code listings in this book are automatically extracted from the ASCII text version of this book and placed in subdirectories according to their chapters. In addition, ExtractCode.cpp creates a makefile in each subdirectory so that you can simply move into that subdirectory and type make. Finally, ExtractCode.cpp creates a "master" makefile in the root directory where the book's files are expanded, and this makefile descends into each subdirectory and calls make. This way you can compile all the code in the book by invoking a single make command, and the process will stop whenever your compiler is unable to handle a particular file (note that a Standard C++ conforming compiler should be able to compile all the files in this book). Because implementations of make vary from system to system, only the most basic, common features are used in the generated makefiles. You should be aware that there are many advanced shortcuts that can save a lot of time when using make. Your local documentation will describe the further features of your particular make.

    An example makefile

    As mentioned before, the makefile for each chapter will be automatically generated by the code-extraction tool ExtractCode.cpp that is shown and described in Chapter XX. Thus, the makefile for each chapter will not be placed in the book. However, it's useful to see an example of one makefile, which is a very abbreviated version of the one that was automatically generated for this chapter by the extraction tool:

    # Automatically-generated MAKEFILE

    # For examples in directory C03

    CPP = g++

    OFLAG = -o

    all: \

    Hello.exe \

    Stream2.exe \

    Concat.exe

    Hello.exe: Hello.obj

    $(CPP) $(OFLAG)Hello.exe Hello.obj

    Hello.obj: Hello.cpp

    $(CPP) -c Hello.cpp

    Stream2.exe: Stream2.obj

    $(CPP) $(OFLAG)Stream2.exe Stream2.obj

    Stream2.obj: Stream2.cpp

    $(CPP) -c Stream2.cpp

    Concat.exe: Concat.obj

    $(CPP) $(OFLAG)Concat.exe Concat.obj

    Concat.obj: Concat.cpp

    $(CPP) -c Concat.cpp

    The macro CPP is set to the name of the compiler. To use a different compiler, you can either edit the makefile or change the value of the macro on the command line, like this:

    make CPP=cpp

    The second macro OFLAG is the flag that's used to indicate the name of the output file. Although many compilers automatically assume the output file has the same base name as the input file, others don't (such as Linux/Unix compilers, which default to creating a file called a.out).

    You can see that this makefile takes the absolute safest route of using as few make features as possible: it only uses the basic make concepts of targets and dependencies, as well as macros. This way it is virtually assured of working with as many make programs as possible. It tends to produce a much larger makefile, but that's not so bad since it's automatically generated by ExtractCode.cpp.

    One of the features not used here is called rules (or implicit rules or inference rules). Here's an example:

    .cpp.exe:

    $(CPP) $<

    A rule is the way to teach make how to convert a file with one type of extension (.cpp) into a file with another type of extension (.obj or .exe). This eliminates a lot of redundancy in a makefile. Once you teach make the rules for producing one kind of file from another, all you have to do is tell make which files depend on which other files. When make finds a file with a date earlier than the file it depends on (which means the source file has been changed and not yet recompiled), it uses the rule to create a new file.

    The implicit rule tells make that it doesn't need explicit rules to build everything, but instead it can figure out how to build things based on their file extension. In this case it says: "to build a file that ends in .exe from one which ends in .cpp, invoke the following command." The command is the compiler name, followed by a special built-in macro. This macro, $&lt;, will produce the name of the source file (sometimes called the dependent). Although the makefile contains no explicit dependencies, the implicit conversion implies the proper dependencies. (Unfortunately, not all make programs use the same rule syntax so they are avoided in the book's generated makefiles.)

    The make program looks at the first target (item to be made) in the makefile unless you specify one on the command line, such as:

    make textchek.exe

    Thus, if you want to make all the files in a subdirectory by typing make, the first target should be a dummy name that depends on all the other targets in the file. In the above makefile the dummy target is called all.

    When a line is too long in a makefile, you can continue it on the next line by using a backslash (\). White space is ignored here, so you can format for readability.

    Summary

    Exercises

    4: Data abstraction

    C++ is a productivity enhancement tool. Why else would you make the effort (and it is an effort, regardless of how easy we attempt to make the transition) to switch from some language that you already know and are productive in (C, in this case) to a new language where you're going to be less productive for a while, until you get the hang of it? It's because you've become convinced that you're going to get big gains by using this new tool.

    Productivity, in computer programming terms, means that fewer people can make much more complex and impressive programs in less time. There are certainly other issues when it comes to choosing a language, like efficiency (does the nature of the language cause code bloat?), safety (does the language help you ensure that your program will always do what you plan, and handle errors gracefully?), and maintenance (does the language help you create code that is easy to understand, modify and extend?). These are certainly important factors that will be examined in this book.

    But raw productivity means a program that might take three of you a week takes one of you a day or two. This touches several levels of economics. You're happy because you get the rush of power that comes from building something, your client (or boss) is happy because products are produced faster and with fewer people, and the customers are happy because they get products more cheaply. The only way to get massive increases in productivity is to leverage off other people's code, that is, to use libraries.

    A library is simply a bunch of code that someone else has written, packaged together somehow. Often, the most minimal package is a file with an extension like .LIB and one or more header files to declare what's in the library to your compiler. The linker knows how to search through the LIB file and extract the appropriate compiled code. But that's only one way to deliver a library. On platforms that span many architectures, like Unix, often the only sensible way to deliver a library is with source code, so it can be recompiled on the new target. And on Microsoft Windows, the dynamic-link library (DLL) is a much more sensible approach; for one thing, you can often update your program by sending out a new DLL, which your library vendor may have sent you.

    So libraries are probably the most important way to improve productivity, and one of the primary design goals of C++ is to make library use easier. This implies that there's something hard about using libraries in C. Understanding this factor will give you a first insight into the design of C++, and thus insight into how to use it.

    Declarations vs. definitions

    First, it's important to understand the difference between declarations and definitions because the terms will be used precisely throughout the book. A declaration introduces a name to the compiler. It says, "Here's what this name means." A definition, on the other hand, allocates storage for the name. This meaning works whether you're talking about a variable or a function; in either case, at the point of definition the compiler allocates storage. For a variable, it determines how big that variable is and generates space in memory to hold information. For a function, the compiler generates code, which ends up allocating storage in memory. The storage for a function has an address that can be produced using the function name with no argument list, or with the address-of operator.

    A definition can also be a declaration. If the compiler hasn't seen the name A before and you define int A, the compiler sees the name for the first time and allocates storage for it all at once.

    Declarations are often made using the extern keyword. extern is required if you're declaring a variable but not defining it. With a function declaration, extern is optional because a function name with an argument list and return type, but no function body, is automatically a declaration.

    A function prototype contains all the information about argument types and return values. int f(float, char); is a function prototype because it not only introduces f as the name of the function, it tells the compiler what the arguments and return value are so they can be handled properly. C++ provides function prototyping because it adds a significant level of safety.

    Here are some examples of declarations:

    /*: C04:Declare.c
    Declaration/definition examples */
    extern int i; /* Declaration without definition */
    extern float f(float); /* Function declaration */
    
    float b;  /* Declaration & definition */
    float f(float a) {  /* Definition */
      return a + 1.0;
    }
    
    int i; /* Definition */
    int h(int x) { /* Declaration & definition */
      return x + 1;
    }
    
    int main() {
      b = 1.0;
      i = 2;
      f(b);
      h(i);
    } /* ///:~
    */

    In the function declarations, the argument names are optional. In the definitions, the names are required in C, but not in C++.

    Throughout this book you'll notice that the first line of a file will be a comment that starts with the open-comment syntax followed by a colon. This is a technique I use to allow easy extraction of information from code files using a text-manipulation tool like "grep" or "awk." The first line also has the name of the file, so it can be referred to in text and in other files, and so you can easily locate it on the source-code disk for the book.

    A tiny C library

    A small library usually starts out as a collection of functions, but those of you who have used third-party C libraries know that there's usually more to it than that because there's more to life than behavior, actions and functions. There are also characteristics (blue, pounds, texture, luminance), which are represented by data. And when you start to deal with a set of characteristics in C, it is very convenient to clump them together into a struct, especially if you want to represent more than one similar thing in your problem space. Then you can make a variable of this struct for each thing.

    Thus, most C libraries have a set of structs and a set of functions that act on those structs. As an example of what such a system looks like, consider a programming tool that acts like an array, but whose size can be established at run-time, when it is created. I'll call it a stash:

    /*: C04:Lib.h
    Header file: example C library */
    /* Array-like entity created at run-time */
    
    typedef struct STASHtag {
      int size;      /* Size of each space */
      int quantity;  /* Number of storage spaces */
      int next;      /* Next empty space */
      /* Dynamically allocated array of bytes: */
      unsigned char* storage;
    } stash;
    
    void initialize(stash* S, int Size);
    void cleanup(stash* S);
    int add(stash* S, void* element);
    void* fetch(stash* S, int index);
    int count(stash* S);
    void inflate(stash* S, int increase);
    /* ///:~
    */

    The tag name for the struct is generally used in case you need to reference the struct inside itself. For example, when creating a linked list, you need a pointer to the next struct. But almost universally in a C library you'll see the typedef as shown above, on every struct in the library. This is done so you can treat the struct as if it were a new type and define variables of that struct like this:

    stash A, B, C;

    Note that the function declarations use the Standard C style of function prototyping, which is much safer and clearer than the "old" C style. You aren't just introducing a function name; you're also telling the compiler what the argument list and return value look like.

    The storage pointer is an unsigned char*. This is the smallest piece of storage a C compiler supports, although on some machines it can be the same size as the largest. It's implementation dependent. You might think that because the stash is designed to hold any type of variable, a void* would be more appropriate here. However, the purpose is not to treat this storage as a block of some unknown type, but rather as a block of contiguous bytes.

    The source code for the implementation file (which you may not get if you buy a library commercially; you might get only a compiled OBJ or LIB or DLL, etc.) looks like this:

    /*: C04:Lib.c {O}
    Implementation of example C library */
    /* Declare structure and functions: */
    #include "Lib.h"
    /* Error testing macros: */
    #include <assert.h>
    /* Dynamic memory allocation functions: */
    #include <stdlib.h>
    #include <string.h> /* memcpy() */
    #include <stdio.h>
    
    void initialize(stash* S, int Size) {
      S->size = Size;
      S->quantity = 0;
      S->storage = 0;
      S->next = 0;
    }
    
    void cleanup(stash* S) {
      if(S->storage) {
       puts("freeing storage");
       free(S->storage);
      }
    }
    
    int add(stash* S, void* element) {
      /* enough space left? */
      if(S->next >= S->quantity)
        inflate(S, 100);
      /* Copy element into storage,
      starting at next empty space: */
      memcpy(&(S->storage[S->next * S->size]),
        element, S->size);
      S->next++;
      return(S->next - 1); /* Index number */
    }
    
    void* fetch(stash* S, int index) {
      if(index >= S->next || index < 0)
        return 0;  /* Not out of bounds? */
      /* Produce pointer to desired element: */
      return &(S->storage[index * S->size]);
    }
    
    int count(stash* S) {
      /* Number of elements in stash */
      return S->next;
    }
    
    void inflate(stash* S, int increase) {
      void* v =
        realloc(S->storage,
          (S->quantity + increase)
          * S->size);
      /* Was it successful? */
      assert(v != 0);
      S->storage = v;
      S->quantity += increase;
    } /* ///:~
    */

    Notice the style for local #includes: Even though the header file exists in a local directory, its path is given relative to the root directory of this book. By doing this, you can easily create another directory off the book's root and copy code to it for experimentation without worrying about changing #include paths.

    initialize( ) performs the necessary setup for struct stash by setting the internal variables to appropriate values. Initially, the storage pointer is set to zero, and the size indicator is also zero; no initial storage is allocated.

    The add( ) function inserts an element into the stash at the next available location. First, it checks to see if there is any available space left. If not, it expands the storage using the inflate( ) function, described later.

    Because the compiler doesn't know the specific type of the variable being stored (all the function gets is a void*), you can't just do an assignment, which would certainly be the convenient thing. Instead, you must use the Standard C library function memcpy( ) to copy the variable byte-by-byte. The first argument is the destination address where memcpy( ) is to start copying bytes. It is produced by the expression:

    &(S->storage[S->next * S->size])

    This indexes from the beginning of the block of storage to the next available piece. This number, which is simply a count of the number of pieces already in use, must be multiplied by the number of bytes occupied by each piece to produce the offset in bytes. Indexing alone doesn't produce the address, but instead the byte at that address. To produce the address, you must use the address-of operator &.

    The second and third arguments to memcpy( ) are the starting address of the variable to be copied and the number of bytes to copy, respectively. The next counter is incremented, and the index of the value stored is returned, so the programmer can use it later in a call to fetch( ) to select that element.

    fetch( ) checks to see that the index isn't out of bounds and then returns the address of the desired variable, calculated the same way as it was in add( ).

    count( ) may look a bit strange at first to a seasoned C programmer. It seems like a lot of trouble to go through to do something that would probably be a lot easier to do by hand. If you have a struct stash called intStash, for example, it would seem much more straightforward to find out how many elements it has by saying intStash.next instead of making a function call (which has overhead) like count(&intStash). However, if you wanted to change the internal representation of stash and thus the way the count was calculated, the function call interface allows the necessary flexibility. But alas, most programmers won't bother to find out about your "better" design for the library. They'll look at the struct and grab the next value directly, and possibly even change next without your permission. If only there were some way for the library designer to have better control over things like this! (Yes, that's foreshadowing.)

    Dynamic storage allocation

    You never know the maximum amount of storage you might need for a stash, so the memory pointed to by storage is allocated from the heap. The heap is a big block of memory used for allocating smaller pieces at run-time. You use the heap when you don't know the size of the memory you'll need while you're writing a program. That is, only at run-time will you find out that you need space to hold 200 airplane variables instead of 20. Dynamic-memory allocation functions are part of the Standard C library and include malloc( ), calloc( ), realloc( ), and free( ).

    The inflate( ) function uses realloc( ) to get a bigger chunk of space for the stash. realloc( ) takes as its first argument the address of the storage that's already been allocated and that you want to resize. (If this argument is zero, which is the case just after initialize( ) has been called, it allocates a new chunk of memory.) The second argument is the new size that you want the chunk to be. If the size is smaller, there's no chance the block will need to be copied, so the heap manager is simply told that the extra space is free. If the size is larger, as in inflate( ), there may not be enough contiguous space, so a new chunk might be allocated and the memory copied. The assert( ) checks to make sure that the operation was successful. (malloc( ), calloc( ) and realloc( ) all return zero if the heap is exhausted.)

    Note that the C heap manager is fairly primitive. It gives you chunks of memory and takes them back when you free( ) them. There's no facility for heap compaction, which compresses the heap to provide bigger free chunks. If a program allocates and frees heap storage for a while, you can end up with a heap that has lots of memory free, just not anything big enough to allocate the size of chunk you're looking for at the moment. However, a heap compactor moves memory chunks around, so your pointers won't retain their proper values. Some operating environments, such as Microsoft Windows, have heap compaction built in, but they require you to use special memory handles (which can be temporarily converted to pointers, after locking the memory so the heap compactor can't move it) instead of pointers.

    assert( ) is a preprocessor macro in ASSERT.H. assert( ) takes a single argument, which can be any expression that evaluates to true or false. The macro says, "I assert this to be true, and if it's not, the program will exit after printing an error message." When you are no longer debugging, you can define a flag so asserts are ignored. In the meantime, it is a very clear and portable way to test for errors. Unfortunately, it's a bit abrupt in its handling of error situations: "Sorry, mission control. Our C program failed an assertion and bailed out. We'll have to land the shuttle on manual." In Chapter 16, you'll see how C++ provides a better solution to critical errors with exception handling.

    When you create a variable on the stack at compile-time, the storage for that variable is automatically created and freed by the compiler. It knows exactly how much storage it needs, and it knows the lifetime of the variables because of scoping. With dynamic memory allocation, however, the compiler doesn't know how much storage you're going to need, and it doesn't know the lifetime of that storage. It doesn't get cleaned up automatically. Therefore, you're responsible for releasing the storage using free( ), which tells the heap manager that storage can be used by the next call to malloc( ), calloc( ) or realloc( ). The logical place for this to happen in the library is in the cleanup( ) function because that is where all the closing-up housekeeping is done.

    To test the library, two stashes are created. The first holds ints and the second holds arrays of 80 chars. (You could almost think of this as a new data type. But that happens later.)

    /*: C04:Libtestc.c 
    //{L} Lib
    Test demonstration library */
    #include <stdio.h>
    #include <assert.h>
    #include "Lib.h"
    #define BUFSIZE 80
    
    int main() {
      stash intStash, stringStash;
      int i;
      FILE* file;
      char buf[BUFSIZE];
      char* cp;
      /* .... */
      initialize(&intStash, sizeof(int));
      for(i = 0; i < 100; i++)
        add(&intStash, &i);
      /* Holds 80-character strings: */
      initialize(&stringStash,
                 sizeof(char) * BUFSIZE);
      file = fopen("Libtestc.c", "r");
      assert(file);
      while(fgets(buf, BUFSIZE, file))
        add(&stringStash, buf);
      fclose(file);
    
      for(i = 0; i < count(&intStash); i++)
        printf("fetch(&intStash, %d) = %d\n", i,
               *(int*)fetch(&intStash, i));
    
      i = 0;
      while((cp = fetch(&stringStash, i++)) != 0)
        printf("fetch(&stringStash, %d) = %s",
               i - 1, cp);
      putchar('\n');
      cleanup(&intStash);
      cleanup(&stringStash);
    } /* ///:~
    */

    At the beginning of main( ), the variables are defined, including the two stash structures. Of course, you must remember to initialize these later in the block. One of the problems with libraries is that you must carefully convey to the user the importance of the initialization and cleanup functions. If these functions aren't called, there will be a lot of trouble. Unfortunately, the user doesn't always wonder if initialization and cleanup are mandatory. They know what they want to accomplish, and they're not as concerned about you jumping up and down saying, "Hey, wait, you have to do this first!" Some users have even been known to initialize the elements of the structure themselves. There's certainly no mechanism to prevent it (more foreshadowing).

    The intStash is filled up with integers, and the stringStash is filled with strings. These strings are produced by opening the source code file, Libtestc.c, and reading the lines from it into the stringStash. Notice something interesting here: The Standard C library functions for opening and reading files use the same techniques as in the stash library! fopen( ) returns a pointer to a FILE struct, which it creates on the heap, and this pointer is passed to any function that refers to that file (fgets( ), in this case). One of the things fclose( ) does is release the FILE struct back to the heap. Once you start noticing this pattern of a C library consisting of structs and associated functions, you see it everywhere!

    After the two stashes are loaded, you can print them out. The intStash is printed using a for loop, which uses count( ) to establish its limit. The stringStash is printed with a while, which breaks out when fetch( ) returns zero to indicate it is out of bounds.

    There are a number of other things you should understand before we look at the problems in creating a C library. (You may already know these because you're a C programmer.) First, although header files are used here because it's good practice, they aren't essential. It's possible in C to call a function that you haven't declared. A good compiler will warn you that you probably ought to declare a function first, but it isn't enforced. This is a dangerous practice, because the compiler can assume that a function that you call with an int argument has an argument list containing int, and it will treat it accordingly; the resulting bug can be very difficult to find.

    Note that the Lib.h header file must be included in any file that refers to stash because the compiler can't even guess at what that structure looks like. It can guess at functions, even though it probably shouldn't, but that's part of the history of C.

    Each separate C file is a translation unit. That is, the compiler is run separately on each translation unit, and when it is running it is aware of only that unit. Thus, any information you provide by including header files is quite important because it provides the compiler's understanding of the rest of your program. Declarations in header files are particularly important, because everywhere the header is included, the compiler will know exactly what to do. If, for example, you have a declaration in a header file that says void foo(float);, the compiler knows that if you call it with an integer argument, it should promote the int to a float. Without the declaration, the compiler would simply assume that a function foo(int) existed, and it wouldn't do the promotion.

    For each translation unit, the compiler creates an object file, with an extension of .o or .obj or something similar. These object files, along with the necessary start-up code, must be collected by the linker into the executable program. During linking, all the external references must be resolved. For example, in Libtestc.c, functions like initialize( ) and fetch( ) are declared (that is, the compiler is told what they look like) and used, but not defined. They are defined elsewhere, in Lib.c. Thus, the calls in Libtestc.c are external references. The linker must, when it puts all the object files together, take the unresolved external references and find the addresses they actually refer to. Those addresses are put in to replace the external references.

    It's important to realize that in C, the references are simply function names, generally with an underscore in front of them. So all the linker has to do is match up the function name where it is called with the function body in the object file, and it's done. If you accidentally made a call that the compiler interpreted as foo(int) and there's a function body for foo(float) in some other object file, the linker will see _foo in one place and _foo in another, and it will think everything's OK. The foo( ) at the calling location will push an int onto the stack, and the foo( ) function body will expect a float to be on the stack. If the function only reads the value and doesn't write to it, it won't blow up the stack. In fact, the float value it reads off the stack might even make some kind of sense. That's worse because it's harder to find the bug.

    What's wrong?

    We are remarkably adaptable, even with things where perhaps we shouldn't adapt. The style of the stash library has been a staple for C programmers, but if you look at it for a while, you might notice that it's rather . . . awkward. When you use it, you have to pass the address of the structure to every single function in the library. When reading the code, the mechanism of the library gets mixed with the meaning of the function calls, which is confusing when you're trying to understand what's going on.

    One of the biggest obstacles, however, to using libraries in C is the problem of name clashes. C has a single name space for functions; that is, when the linker looks for a function name, it looks in a single master list. In addition, when the compiler is working on a translation unit, it can only work with a single function with a given name.

    Now suppose you decide to buy two libraries from two different vendors, and each library has a structure that must be initialized and cleaned up. Both vendors decided that initialize( ) and cleanup( ) are good names. If you include both their header files in a single translation unit, what does the C compiler do? Fortunately, Standard C gives you an error, telling you there's a type mismatch in the two different argument lists of the declared functions. But even if you don't include them in the same translation unit, the linker will still have problems. A good linker will detect that there's a name clash, but some linkers take the first function name they find, by searching through the list of object files in the order you give them in the link list. (Indeed, this can be thought of as a feature because it allows you to replace a library function with your own version.)

    In either event, you can't use two C libraries that contain a function with the identical name. To solve this problem, C library vendors will often prepend a string of unique characters to the beginning of all their function names. So initialize( ) and cleanup( ) might become stash_initialize( ) and stash_cleanup( ). This is a logical thing to do because it "mangles" the name of the struct the function works on with the name of the function.

    Now it's time to take the very first step into C++. Variable names inside a struct do not clash with global variable names. So why not take advantage of this for function names, when those functions operate on a particular struct? That is, why not make functions members of structs?

    The basic object

    Step one in C++ is exactly that. Functions can now be placed inside structs as "member functions." Here's what it looks like after converting the C version of stash to the C++ Stash (note the C++ version starts with a capital letter):

    //: C04:Libcpp.h
    // C library converted to C++
    
    struct Stash {
      int size;      // Size of each space
      int quantity;  // Number of storage spaces
      int next;      // Next empty space
       // Dynamically allocated array of bytes:
      unsigned char* storage;
      // Functions!
      void initialize(int Size);
      void cleanup();
      int add(void* element);
      void* fetch(int index);
      int count();
      void inflate(int increase);
    }; ///:~

    The first thing you'll notice is the new comment syntax, //. This is in addition to C-style comments, which still work fine. The C++ comments only go to the end of the line, which is often very convenient. In addition, in this book we put a colon after the // on the first line of the file, followed by the name of the file and a brief description. This allows an exact inclusion of the file from the source code. In addition, you can easily identify the file in the electronic source code from its name in the book listing.

    Next, notice there is no typedef. Instead of requiring you to create a typedef, the C++ compiler turns the name of the structure into a new type name for the program (just like int, char, float and double are type names). The use of Stash is still the same.

    All the data members are exactly the same as before, but now the functions are inside the body of the struct. In addition, notice that the first argument from the C version of the library has been removed. In C++, instead of forcing you to pass the address of the structure as the first argument to all the functions that operate on that structure, the compiler secretly does this for you. Now the only arguments for the functions are concerned with what the function does, not the mechanism of the functionís operation.

    It's important to realize that the function code is effectively the same as it was with the C library. The number of arguments is the same (even though you don't see the structure address being passed in, it's still there), and there's only one function body for each function. That is, just because you say

    Stash A, B, C;

    doesn't mean you get a different add( ) function for each variable.

    So the code that's generated is almost the same as you would have written for the C library. Interestingly enough, this includes the "name mangling" you probably would have done to produce Stash_initialize( ), Stash_cleanup( ), and so on. When the function name is inside the struct, the compiler effectively does the same thing. Therefore, initialize( ) inside the structure Stash will not collide with initialize( ) inside any other structure. Most of the time you don't have to worry about the function name mangling; you use the unmangled name. But sometimes you do need to be able to specify that this initialize( ) belongs to the struct Stash, and not to any other struct. In particular, when you're defining the function you need to fully specify which one it is. To accomplish this full specification, C++ has a new operator, ::, the scope resolution operator (named so because names can now be in different scopes: at global scope, or within the scope of a struct). For example, if you want to specify initialize( ), which belongs to Stash, you say Stash::initialize(int Size). You can see how the scope resolution operator is used in the function definitions for the C++ version of Stash:

    //: C04:Libcpp.cpp {O}
    // C library converted to C++
    // Declare structure and functions:
    #include <cstdlib> // Dynamic memory
    #include <cstring> // memcpy()
    #include <cstdio>
    #include "../require.h" // Error testing code
    #include "Libcpp.h"
    using namespace std;
    
    void Stash::initialize(int Size) {
      size = Size;
      quantity = 0;
      storage = 0;
      next = 0;
    }
    
    void Stash::cleanup() {
      if(storage) {
        puts("freeing storage");
        free(storage);
      }
    }
    
    int Stash::add(void* element) {
      if(next >= quantity) // Enough space left?
        inflate(100);
      // Copy element into storage,
      // starting at next empty space:
      memcpy(&(storage[next * size]),
             element, size);
      next++;
      return(next - 1); // Index number
    }
    
    void* Stash::fetch(int index) {
      if(index >= next || index < 0)
        return 0;  // Not out of bounds?
      // Produce pointer to desired element:
      return &(storage[index * size]);
    }
    
    int Stash::count() {
      return next; // Number of elements in Stash
    }
    
    void Stash::inflate(int increase) {
      void* v =
        realloc(storage, (quantity+increase)*size);
      require(v != 0);  // Was it successful?
      storage = (unsigned char*)v;
      quantity += increase;
    } ///:~

    There are several other things that are different about this file. First, the declarations in the header files are required by the compiler. In C++ you cannot call a function without declaring it first. The compiler will issue an error message otherwise. This is an important way to ensure that function calls are consistent between the point where they are called and the point where they are defined. By forcing you to declare the function before you call it, the C++ compiler virtually ensures you will perform this declaration by including the header file. If you also include the same header file in the place where the functions are defined, then the compiler checks to make sure the declaration in the header and the definition match up. This means that the header file becomes a validated repository for function declarations and ensures that functions are used consistently throughout all translation units in the project.

    Of course, global functions can still be declared by hand every place where they are defined and used. (This is so tedious that it becomes very unlikely.) However, structures must always be declared before they are defined or used, and the most convenient place to put a structure definition is in a header file (except for those you intentionally hide in a single file).

    You can see that all the member functions are virtually the same, except for the scope resolution and the fact that the first argument from the C version of the library is no longer explicit. It's still there, of course, because the function has to be able to work on a particular struct variable. But notice that inside the member function the member selection is also gone! Thus, instead of saying S->size = Size; you say size = Size; and eliminate the tedious S->, which didn't really add anything to the meaning of what you were doing anyway. Of course, the C++ compiler must still be doing this for you. Indeed, it is taking the "secret" first argument and applying the member selector whenever you refer to one of the data members of a class. This means that whenever you are inside the member function of a class, you can refer to any member (including another member function) by simply giving its name. The compiler will search through the local structure's names before looking for a global version of that name. You'll find that this feature means that not only is your code easier to write, it's a lot easier to read.

    But what if, for some reason, you want to be able to get your hands on the address of the structure? In the C version of the library it was easy because each function's first argument was a stash* called S. In C++, things are even more consistent. There's a special keyword, called this, which produces the address of the struct. It's the equivalent of S in the C version of the library. So we can revert to the C style of things by saying

    this->size = Size;

    The code generated by the compiler is exactly the same. Usually, you don't use this very often, but when you need it, it's there.

    There's one last change in the definitions. In inflate( ) in the C library, you could assign a void* to any other pointer like this:

    S->storage = v;

    and there was no complaint from the compiler. But in C++, this statement is not allowed. Why? Because in C, you can assign a void* (which is what malloc( ), calloc( ), and realloc( ) return) to any other pointer without a cast. C is not so particular about type information, so it allows this kind of thing. Not so with C++. Type is critical in C++, and the compiler stamps its foot when there are any violations of type information. This has always been important, but it is especially important in C++ because you have member functions in structs. If you could pass pointers to structs around with impunity in C++, then you could end up calling a member function for a struct that doesn't even logically exist for that struct! A real recipe for disaster. Therefore, while C++ allows the assignment of any type of pointer to a void* (this was the original intent of void*, which is required to be large enough to hold a pointer to any type), it will not allow you to assign a void pointer to any other type of pointer. A cast is always required, to tell the reader and the compiler that you know the type that it is going to. Thus you will see that the void* returned by realloc( ) is explicitly cast to (unsigned char*).

    This brings up an interesting issue. One of the important goals for C++ is to compile as much existing C code as possible to allow for an easy transition to the new language. Notice in the above example how Standard C library functions are used. In addition, all C operators and expressions are available in C++. However, this doesn't mean any code that C allows will automatically be allowed in C++. There are a number of things the C compiler lets you get away with that are dangerous and error-prone. (We'll look at them as the book progresses.) The C++ compiler generates warnings and errors for these situations. This is often much more of an advantage than a hindrance. In fact, there are many situations where you are trying to run down an error in C and just can't find it, but as soon as you recompile the program in C++, the compiler points out the problem! In C, you'll often find that you can get the program to compile, but then you have to get it to work. In C++, often when the program compiles correctly, it works, too! This is because the language is a lot stricter about type.

    You can see a number of new things in the way the C++ version of Stash is used, in the following test program:

    //: C04:Libtest.cpp
    //{L} Libcpp
    // Test of C++ library
    #include <cstdio>
    #include "../require.h"
    #include "Libcpp.h"
    using namespace std;
    #define BUFSIZE 80
    
    int main() {
      Stash intStash, stringStash;
      int i;
      FILE* file;
      char buf[BUFSIZE];
      char* cp;
      // ....
      intStash.initialize(sizeof(int));
      for(i = 0; i < 100; i++)
        intStash.add(&i);
      // Holds 80-character strings:
      stringStash.initialize(sizeof(char)*BUFSIZE);
      file = fopen("Libtest.cpp", "r");
      require(file != 0);
      while(fgets(buf, BUFSIZE, file))
        stringStash.add(buf);
      fclose(file);
    
      for(i = 0; i < intStash.count(); i++)
        printf("intStash.fetch(%d) = %d\n", i,
               *(int*)intStash.fetch(i));
    
      i = 0;
      while(
        (cp = (char*)stringStash.fetch(i++))!=0)
          printf("stringStash.fetch(%d) = %s",
                 i - 1, cp);
      putchar('\n');
      intStash.cleanup();
      stringStash.cleanup();
    } ///:~

    The code is quite similar, but when a member function is called, the call occurs using the member selection operator '.' preceded by the name of the variable. This is a convenient syntax because it mimics the selection of a data member of the structure. The difference is that this is a member function, so it has an argument list.

    Of course, the call that the compiler actually generates looks much more like the original C library function. Thus, considering name mangling and the passing of this, the C++ function call intStash.initialize(sizeof(int)) becomes something like Stash_initialize(&intStash, sizeof(int)). If you ever wonder what's going on underneath the covers, remember that the original C++ compiler, cfront from AT&T, produced C code as its output, which was then compiled by the underlying C compiler. This approach meant that cfront could be quickly ported to any machine that had a C compiler, and it helped to rapidly disseminate C++ compiler technology.

    You'll also notice an additional cast in

    while(cp = (char*)stringStash.fetch(i++))

    This is due again to the stricter type checking in C++.

    What's an object?

    Now that you've seen an initial example, it's time to step back and take a look at some terminology. The act of bringing functions inside structures is the root of the changes in C++, and it introduces a new way of thinking about structures as concepts. In C, a structure is an agglomeration of data, a way to package data so you can treat it in a clump. But it's hard to think about it as anything but a programming convenience. The functions that operate on those structures are elsewhere. However, with functions in the package, the structure becomes a new creature, capable of describing both characteristics (like a C struct could) and behaviors. The concept of an object, a free-standing, bounded entity that can remember and act, suggests itself.

    The terms "object" and "object-oriented programming" (OOP) are not new. The first OOP language was Simula-67, created in Scandinavia in 1967 to aid in solving modeling problems. These problems always seemed to involve a bunch of identical entities (like people, bacteria, and cars) running around interacting with each other. Simula allowed you to create a general description for an entity that described its characteristics and behaviors and then make a whole bunch of them. In Simula, the "general description" is called a class (a term youíll see in a later chapter), and the mass-produced item that you stamp out from a class is called an object. In C++, an object is just a variable, and the purest definition is "a region of storage." Itís a place where you can store data, and itís implied that there are also operations that can be performed on this data.

    Unfortunately there's not complete consistency across languages when it comes to these terms, although they are fairly well-accepted. You will also sometimes encounter disagreement about what an object-oriented language is, although that seems to be fairly well sorted out by now. There are languages that are object-based, which means they have objects like the C++ structures-with-functions that you've seen so far. This, however, is only part of the picture when it comes to an object-oriented language, and languages that stop at packaging functions inside data structures are object-based, not object-oriented.

    Abstract data typing

    The ability to package data with functions allows you to create a new data type. This is often called encapsulation. An existing data type, like a float, has several pieces of data packaged together: an exponent, a mantissa, and a sign bit. You can tell it to do things: add to another float or to an int, and so on. It has characteristics and behavior.

    The Stash is also a new data type. You can add( ) and fetch( ) and inflate( ). You create one by saying Stash S, as you create a float by saying float f. A Stash also has characteristics and behavior. Even though it acts like a real, built-in data type, we refer to it as an abstract data type, perhaps because it allows us to abstract a concept from the problem space into the solution space. In addition, the C++ compiler treats it like a new data type, and if you say a function expects a Stash, the compiler makes sure you pass a Stash to that function. The same level of type checking happens with abstract data types (sometimes called user-defined types) as with built-in types.

    You can immediately see a difference, however, in the way you perform operations on objects. You say object.member_function(arglist). This is "calling a member function for an object." But in object-oriented parlance, this is also referred to as "sending a message to an object." So for a Stash S, the statement S.add(&i) "sends a message to S" saying "add( ) this to yourself." In fact, object-oriented programming can be summed up in a single sentence as "sending messages to objects." Really, that's all you do: create a bunch of objects and send messages to them. The trick, of course, is figuring out what your objects and messages are, but once you accomplish that the implementation in C++ is surprisingly straightforward.

    Object details

    At this point you're probably wondering the same thing that most C programmers do, because C is a language that is very low-level and efficiency-oriented. A question that comes up a lot in seminars is "How big is an object, and what does it look like?" The answer is "Pretty much the same as you expect from a C struct." In fact, a C struct (with no C++ adornments) will usually look exactly the same in the code that the C and C++ compilers produce. This is reassuring to those C programmers who depend on the details of size and layout in their code, and for some reason directly access structure bytes instead of using identifiers, although depending on a particular size and layout of a structure is a nonportable activity.

    The size of a struct is the combined size of all its members. Sometimes when a struct is laid out by the compiler, extra bytes are added to make the boundaries come out neatly; this may increase execution efficiency. In Chapters 13 and 15, you'll see how in some cases "secret" pointers are added to the structure, but you don't need to worry about that right now.

    You can determine the size of a struct using the sizeof operator. Here's a small example:

    //: C04:Sizeof.cpp
    // Sizes of structs
    #include <cstdio>
    #include "Lib.h"
    #include "Libcpp.h"
    using namespace std;
    
    struct A {
      int I[100];
    };
    
    struct B {
      void f();
    };
    
    void B::f() {}
    
    int main() {
      printf("sizeof struct A = %d bytes\n",
             sizeof(A));
      printf("sizeof struct B = %d bytes\n",
             sizeof(B));
      printf("sizeof stash in C = %d bytes\n",
             sizeof(stash));
      printf("sizeof Stash in C++ = %d bytes\n",
             sizeof(Stash));
    } ///:~

    The first print statement produces 200 on the compiler used here because each int occupies two bytes (on a compiler with four-byte ints you'll see 400). struct B is something of an anomaly because it is a struct with no data members. In C, this is illegal, but in C++ we need the option of creating a struct whose sole task is to scope function names, so it is allowed. Still, the result produced by the second printf( ) statement is a somewhat surprising nonzero value. In early versions of the language, the size was zero, but an awkward situation arises when you create such objects: they have the same address as the object created directly after them, and so are not distinct. Thus, structures with no data members will always have some minimum nonzero size.

    The last two sizeof statements show you that the size of the structure in C++ is the same as the size of the equivalent version in C. C++ endeavors not to add any overhead.

    Header file etiquette

    When I first learned to program in C, the header file was a mystery to me. Many C books don't seem to emphasize it, and the compiler didn't enforce function declarations, so it seemed optional most of the time, except when structures were declared. In C++ the use of header files becomes crystal clear. They are practically mandatory for easy program development, and you put very specific information in them: declarations. The header file tells the compiler what is available in your library. Because you can use the library without the source code for the CPP file (you only need the object file or library file), the header file is where the interface specification is stored.

    The header is a contract between you and the user of your library. It says, "Here's what my library does." It doesn't say how because that's stored in the CPP file, and you won't necessarily deliver the sources for "how" to the user.

    The contract describes your data structures, and states the arguments and return values for the function calls. The user needs all this information to develop the application and the compiler needs it to generate proper code.

    The compiler enforces the contract by requiring you to declare all structures and functions before they are used and, in the case of member functions, before they are defined. Thus, you're forced to put the declarations in the header and to include the header in the file where the member functions are defined and the file(s) where they are used. Because a single header file describing your library is included throughout the system, the compiler can ensure consistency and prevent errors.

    There are certain issues that you must be aware of in order to organize your code properly and write effective header files. The first issue concerns what you can put into header files. The basic rule is "only declarations," that is, only information to the compiler but nothing that allocates storage by generating code or creating variables. This is because the header file will probably be included in several translation units in a project, and if storage is allocated in more than one place, the linker will come up with a multiple definition error.

    This rule isn't completely hard and fast. If you define a piece of data that is "file static" (has visibility only within a file) inside a header file, there will be multiple instances of that data across the project, but the linker won't have a collision. Basically, you don't want to do anything in the header file that will cause an ambiguity at link time.

    The second critical issue concerning header files is redeclaration. Both C and C++ allow you to redeclare a function, as long as the two declarations match, but neither will allow the redeclaration of a structure. In C++ this rule is especially important because if the compiler allowed you to redeclare a structure and the two declarations differed, which one would it use?

    The problem of redeclaration comes up quite a bit in C++ because each data type (structure with functions) generally has its own header file, and you have to include one header in another if you want to create another data type that uses the first one. In the whole project, it's very likely that you'll include several files that include the same header file. During a single compilation, the compiler can see the same header file several times. Unless you do something about it, the compiler will see the redeclaration of your structure.

    The typical preventive measure is to "insulate" the header file by using the preprocessor. If you have a header file named FOO.H, it's common to do your own "name mangling" to produce a preprocessor name that is used to prevent multiple inclusion of the header file. The inside of FOO.H might look like this:

    #ifndef FOO_H_
    #define FOO_H_
    // Rest of header here...
    #endif // FOO_H_

    Notice that a leading underscore was not used, because Standard C and C++ reserve identifiers beginning with an underscore for the implementation.

    Using headers in projects

    When building a project in C++, you'll usually create it by bringing together a lot of different types (data structures with associated functions). You'll usually put the declaration for each type or group of associated types in a separate header file, then define the functions for that type in a translation unit. When you use that type, you must include the header file to perform the declarations properly.

    Sometimes that pattern will be followed in this book, but more often the examples will be very small, so everything (the structure declarations, function definitions, and the main( ) function) may appear in a single file. However, keep in mind that you'll want to use separate files and header files in practice.

    Nested structures

    The convenience of taking data and function names out of the global name space extends to structures. You can nest a structure within another structure, and therefore keep associated elements together. The declaration syntax is what you would expect, as you can see in the following structure, which implements a push-down stack as a very simple linked list so it "never" runs out of memory:

    //: C04:Nested.h
    // Nested struct in linked list
    #ifndef NESTED_H_
    #define NESTED_H_
    
    struct Stack {
      struct link {
        void* data;
        link* next;
        void initialize(void* Data, link* Next);
      } * head;
      void initialize();
      void push(void* Data);
      void* peek();
      void* pop();
      void cleanup();
    };
    #endif // NESTED_H_ ///:~

    The nested struct is called link, and it contains a pointer to the next link in the list and a pointer to the data stored in the link. If the next pointer is zero, it means you're at the end of the list.

    Notice that the head pointer is defined right after the declaration for struct link, instead of a separate definition link* head. This is a syntax that came from C, but it emphasizes the importance of the semicolon after the structure declaration; the semicolon indicates the end of the list of definitions of that structure type. (Usually the list is empty.)

    The nested structure has its own initialize( ) function, like all the structures presented so far, to ensure proper initialization. Stack has both an initialize( ) and cleanup( ) function, as well as push( ), which takes a pointer to the data you wish to store (assumed to have been allocated on the heap), and pop( ), which returns the data pointer from the top of the Stack and removes the top element. (Notice that you become responsible for freeing the storage that the returned data pointer points to.) The peek( ) function also returns the data pointer from the top element, but it leaves the top element on the Stack.

    cleanup( ) goes through the Stack, removing each element and freeing its data pointer (so the data must have been allocated with malloc( )).

    Here are the definitions for the member functions:

    //: C04:Nested.cpp {O}
    // Linked list with nesting
    #include <cstdlib>
    #include "../require.h"
    #include "Nested.h"
    using namespace std;
    
    void Stack::link::initialize(
        void* Data, link* Next) {
      data = Data;
      next = Next;
    }
    
    void Stack::initialize() { head = 0; }
    
    void Stack::push(void* Data) {
      link* newlink = (link*)malloc(sizeof(link));
      require(newlink != 0);
      newlink->initialize(Data, head);
      head = newlink;
    }
    
    void* Stack::peek() { return head->data; }
    
    void* Stack::pop() {
      if(head == 0) return 0;
      void* result = head->data;
      link* oldHead = head;
      head = head->next;
      free(oldHead);
      return result;
    }
    
    void Stack::cleanup() {
      link* cursor = head;
      while(head) {
        cursor = cursor->next;
        free(head->data); // Assumes a malloc!
        free(head);
        head = cursor;
      }
    } ///:~

    The first definition is particularly interesting because it shows you how to define a member of a nested structure. You simply use the scope resolution operator a second time, to specify the name of the enclosing struct. The Stack::link::initialize( ) function takes the arguments and assigns them to its members. Although you can certainly do these things by hand quite easily, you'll see a different form of this function in the future, where it will make much more sense.

    The Stack::initialize( ) function sets head to zero, so the object knows it has an empty list.

    Stack::push( ) takes the argument, a pointer to the piece of data you want to keep track of using the Stack, and pushes it on the Stack. First, it uses malloc( ) to allocate storage for the link it will insert at the top. Then it calls the initialize( ) function to assign the appropriate values to the members of the link. Notice that the next pointer is assigned to the current head; then head is assigned to the new link pointer. This effectively pushes the link in at the top of the list.

    Stack::pop( ) stores the data pointer at the current top of the Stack; then it moves the head pointer down and deletes the old top of the Stack. Stack::cleanup( ) creates a cursor to move through the Stack, freeing both the data in each link and the link itself.

    Here's an example to test the Stack:

    //: C04:NestTest.cpp
    //{L} Nested
    //{T} NestTest.cpp
    // Test of nested linked list
    #include <cstdio>
    #include <cstdlib>
    #include <cstring>
    #include "../require.h"
    #include "Nested.h"
    using namespace std;
    
    int main(int argc, char* argv[]) {
      Stack textlines;
      FILE* file;
      char* s;
      #define BUFSIZE 100
      char buf[BUFSIZE];
      requireArgs(argc, 2); // File name is argument
      textlines.initialize();
      file = fopen(argv[1], "r");
      require(file != 0);
      // Read file and store lines in the Stack:
      while(fgets(buf, BUFSIZE, file)) {
        char* string = (char*)malloc(strlen(buf) + 1);
        require(string != 0);
        strcpy(string, buf);
        textlines.push(string);
      }
      // Pop the lines from the Stack and print them:
      while((s = (char*)textlines.pop()) != 0) {
        printf("%s", s);
        free(s);
      }
      textlines.cleanup();
    } ///:~

    This is very similar to the earlier example, but it pushes the lines on the Stack and then pops them off, which results in the file being printed out in reverse order. In addition, the file name is taken from the command line.

    Global scope resolution

    The scope resolution operator gets you out of situations where the name the compiler chooses by default (the "nearest" name) isn't what you want. For example, suppose you have a structure with a local identifier A, and you want to select a global identifier A inside a member function. The compiler would default to choosing the local one, so you must tell it to do otherwise. When you want to specify a global name using scope resolution, you use the operator with nothing in front of it. Here's an example that shows global scope resolution for both a variable and a function:

    //: C04:Scoperes.cpp {O}
    // Global scope resolution
    int A;
    void f() {}
    
    struct S {
      int A;
      void f();
    };
    
    void S::f() {
      ::f();  // Would be recursive otherwise!
      ::A++;  // Select the global A
      A--;    // The A at struct scope
    }
    ///:~

    Without scope resolution in S::f( ), the compiler would default to selecting the member versions of f( ) and A.

    Summary

    In this chapter, you've learned the fundamental "twist" of C++: that you can place functions inside of structures. This new type of structure is called an abstract data type, and variables you create using this structure are called objects, or instances, of that type. Calling a member function for an object is called sending a message to that object. The primary action in object-oriented programming is sending messages to objects.

    Although packaging data and functions together is a significant benefit for code organization and makes library use easier because it prevents name clashes by hiding the names, there's a lot more you can do to make programming safer in C++. In the next chapter, you'll learn how to protect some members of a struct so that only you can manipulate them. This establishes a clear boundary between what the user of the structure can change and what only the programmer may change.

    Exercises

  1. Create a struct declaration with a single member function; then create a definition for that member function. Create an object of your new data type, and call the member function.
  2. Write and compile a piece of code that performs data member selection and a function call using the this keyword (which refers to the address of the current object).
  3. Show an example of a structure declared within another structure (a nested structure). Also show how members of that structure are defined.
  4. How big is a structure? Write a piece of code that prints the size of various structures. Create structures that have data members only and ones that have data members and function members. Then create a structure that has no members at all. Print out the sizes of all these. Explain the reason for the result of the structure with no data members at all.
  5. C++ automatically creates the equivalent of a typedef for enumerations and unions as well as structs, as you've seen in this chapter. Write a small program that demonstrates this.

    5: Hiding the implementation

    A typical C library contains a struct and some associated functions to act on that struct. So far, you've seen how C++ takes functions that are conceptually associated and makes them literally associated, by

    putting the function declarations inside the scope of the struct,
    changing the way functions are called for the struct,
    eliminating the passing of the structure address as the first argument, and
    adding a new type name to the program (so you don't have to create a typedef for the struct tag).

    These are all convenient; they help you organize your code and make it easier to write and read. However, there are other important issues when making libraries easier to use in C++, especially the issues of safety and control. This chapter looks at the subject of boundaries in structures.

    Setting limits

    In any relationship it's important to have boundaries that are respected by all parties involved. When you create a library, you establish a relationship with the user (also called the client programmer) of that library, who is another programmer, but one putting together an application or using your library to build a bigger library.

    In a C struct, as with most things in C, there are no rules. Users can do anything they want with that struct, and there's no way to force any particular behaviors. For example, even though you saw in the last chapter the importance of the functions named initialize( ) and cleanup( ), the user could choose whether to call those functions or not. (We'll look at a better approach in the next chapter.) And even though you would really prefer that the user not directly manipulate some of the members of your struct, in C there's no way to prevent it. Everything's naked to the world.

    There are two reasons for controlling access to members. The first is to keep users' hands off tools they shouldn't touch, tools that are necessary for the internal machinations of the data type, but not part of the interface that users need to solve their particular problems. This is actually a service to users because they can easily see what's important to them and what they can ignore.

    The second reason for access control is to allow the library designer to change the internal workings of the structure without worrying about how it will affect the client programmer. In the Stack example in the last chapter, you might want to allocate the storage in big chunks, for speed, rather than calling malloc( ) each time an element is added. If the interface and implementation are clearly separated and protected, you can accomplish this and require only a relink by the user.

    C++ access control

    C++ introduces three new keywords to set the boundaries in a structure: public, private, and protected. Their use and meaning are remarkably straightforward. These access specifiers are used only in a structure declaration, and they change the boundary for all the declarations that follow them. Whenever you use an access specifier, it must be followed by a colon.

    public means all member declarations that follow are available to everyone. public members are like struct members. For example, the following struct declarations are identical:

    //: C05:Public.cpp {O}
    // Public is just like C struct
    
    struct A {
      int i;
      char j;
      float f;
      void foo();
    };
    
    void A::foo() {}
    
    struct B {
    public:
      int i;
      char j;
      float f;
      void foo();
    };
    
    void B::foo() {}  ///:~

    The private keyword, on the other hand, means no one can access that member except you, the creator of the type, inside function members of that type. private is a brick wall between you and the user; if someone tries to access a private member, they'll get a compile-time error. In struct B in the above example, you may want to make portions of the representation (that is, the data members) hidden, accessible only to you:

    //: C05:Private.cpp
    // Setting the boundary
    
    struct B {
    private:
      char j;
      float f;
    public:
      int i;
      void foo();
    };
    
    void B::foo() {
      i = 0;
      j = '0';
      f = 0.0;
    }
    
    int main() {
      B b;
      b.i = 1;    // OK, public
    //!  b.j = '1';  // Illegal, private
    //!  b.f = 1.0;  // Illegal, private
    } ///:~

    Although foo( ) can access any member of B, an ordinary global function like main( ) cannot. Of course, neither can member functions of other structures. Only the functions that are clearly stated in the structure declaration (the "contract") can have access to private members.

    There is no required order for access specifiers, and they may appear more than once. They affect all the members declared after them and before the next access specifier.

    protected

    The last access specifier is protected. protected acts just like private, with one exception that we can't really talk about right now: Inherited structures have access to protected members, but not private members. But inheritance won't be introduced until Chapter 12, so this doesn't have any meaning to you. For the current purposes, consider protected to be just like private; it will be clarified when inheritance is introduced.

    Friends

    What if you want to explicitly grant access to a function that isn't a member of the current structure? This is accomplished by declaring that function a friend inside the structure declaration. It's important that the friend declaration occurs inside the structure declaration because you (and the compiler) must be able to read the structure declaration and see every rule about the size and behavior of that data type. And a very important rule in any relationship is "who can access my private implementation?"

    The class controls which code has access to its members. There's no magic way to "break in"; you can't declare a new class and say "hi, I'm a friend of Bob!" and expect to see the private and protected members of Bob.

    You can declare a global function as a friend, and you can also declare a member function of another structure, or even an entire structure, as a friend. Here's an example:

    //: C05:Friend.cpp
    // Friend allows special access
    
    struct X; // Declaration (incomplete type spec)
    
    struct Y {
      void f(X*);
    };
    
    struct X { // Definition
    private:
      int i;
    public:
      void initialize();
      friend void g(X*, int); // Global friend
      friend void Y::f(X*);  // Struct member friend
      friend struct Z; // Entire struct is a friend
      friend void h();
    };
    
    void X::initialize() { i = 0; }
    
    void g(X* x, int i) { x->i = i; }
    
    void Y::f(X* x) { x->i = 47; }
    
    struct Z {
    private:
      int j;
    public:
      void initialize();
      void g(X* x);
    };
    
    void Z::initialize() { j = 99; }
    
    void Z::g(X* x) { x->i += j; }
    
    void h() {
      X x;
      x.i = 100; // Direct data manipulation
    }
    
    int main() {
      X x;
      Z z;
      z.g(&x);
    } ///:~

    struct Y has a member function f( ) that will modify an object of type X. This is a bit of a conundrum because the C++ compiler requires you to declare everything before you can refer to it, so struct Y must be declared before its member Y::f(X*) can be declared as a friend in struct X. But for Y::f(X*) to be declared, struct X must be declared first!

    Here's the solution. Notice that Y::f(X*) takes the address of an X object. This is critical because the compiler always knows how to pass an address, which is of a fixed size regardless of the object being passed, even if it doesn't have full information about the size of the type. If you try to pass the whole object, however, the compiler must see the entire structure definition of X, to know the size and how to pass it, before it allows you to declare a function such as Y::g(X).

    By passing the address of an X, the compiler allows you to make an incomplete type specification of X prior to declaring Y::f(X*). This is accomplished in the declaration struct X;. This simply tells the compiler there's a struct by that name, so if it is referred to, it's OK, as long as you don't require any more knowledge than the name.

    Now, in struct X, the function Y::f(X*) can be declared as a friend with no problem. If you tried to declare it before the compiler had seen the full specification for Y, it would have given you an error. This is a safety feature to ensure consistency and eliminate bugs.

    Notice the two other friend functions. The first declares an ordinary global function g( ) as a friend. But g( ) has not been previously declared at the global scope! It turns out that friend can be used this way to simultaneously declare the function and give it friend status. This extends to entire structures: friend struct Z is an incomplete type specification for Z, and it gives the entire structure friend status.

    Nested friends

    Making a structure nested doesn't automatically give it access to private members. To accomplish this you must follow a particular form: first define the nested structure, then declare it as a friend using full scoping. The structure definition must be separate from the friend declaration, otherwise it would be seen by the compiler as a nonmember. Here's an example:

    //: C05:Nestfrnd.cpp
    // Nested friends
    #include <cstdio>
    #include <cstring> // memset()
    using namespace std;
    #define SZ 20
    
    struct holder {
    private:
      int a[SZ];
    public:
      void initialize();
      struct pointer {
      private:
        holder* h;
        int* p;
      public:
        void initialize(holder* H);
        // Move around in the array:
        void next();
        void previous();
        void top();
        void end();
        // Access values:
        int read();
        void set(int i);
      };
      friend holder::pointer;
    };
    
    void holder::initialize() {
      memset(a, 0, SZ * sizeof(int));
    }
    
    void holder::pointer::initialize(holder* H) {
      h = H;
      p = h->a;
    }
    
    void holder::pointer::next() {
      if(p < &(h->a[SZ - 1])) p++;
    }
    
    void holder::pointer::previous() {
      if(p > &(h->a[0])) p--;
    }
    
    void holder::pointer::top() {
      p = &(h->a[0]);
    }
    
    void holder::pointer::end() {
      p = &(h->a[SZ - 1]);
    }
    
    int holder::pointer::read() {
      return *p;
    }
    
    void holder::pointer::set(int i) {
      *p = i;
    }
    
    int main() {
      holder h;
      holder::pointer hp, hp2;
      int i;
    
      h.initialize();
      hp.initialize(&h);
      hp2.initialize(&h);
      for(i = 0; i < SZ; i++) {
        hp.set(i);
        hp.next();
      }
      hp.top();
      hp2.end();
      for(i = 0; i < SZ; i++) {
        printf("hp = %d, hp2 = %d\n",
               hp.read(), hp2.read());
        hp.next();
        hp2.previous();
      }
    } ///:~

    The struct holder contains an array of ints and the pointer allows you to access them. Because pointer is strongly associated with holder, it's sensible to make it a member of that class. Once pointer is defined, it is granted access to the private members of holder by saying:

    friend holder::pointer;

    Notice that the struct keyword is not necessary because the compiler already knows what pointer is.

    Because pointer is a separate class from holder, you can make more than one of them in main( ) and use them to select different parts of the array. Because pointer is a class instead of a raw C pointer, you can guarantee that it will always safely point inside the holder.

    Is it pure?

    The class definition gives you an audit trail, so you can see from looking at the class which functions have permission to modify the private parts of the class. If a function is a friend, it means that it isn't a member, but you want to give it permission to modify private data anyway, and it must be listed in the class definition so all can see that it's one of the privileged functions.

    C++ is a hybrid object-oriented language, not a pure one, and friend was added to get around practical problems that crop up. It's fine to point out that this makes the language less "pure," because C++ is designed to be pragmatic, not to aspire to an abstract ideal.

    Object layout

    Chapter 1 stated that a struct written for a C compiler and later compiled with C++ would be unchanged. This referred primarily to the object layout of the struct, that is, where the storage for the individual variables is positioned in the memory allocated for the object. If the C++ compiler changed the layout of C structs, then any C code you wrote that inadvisably took advantage of knowledge of the positions of variables in the struct would break.

    When you start using access specifiers, however, you've moved completely into the C++ realm, and things change a bit. Within a particular "access block" (a group of declarations delimited by access specifiers), the variables are guaranteed to be laid out contiguously, as in C. However, the access blocks themselves may not appear in the object in the order that you declare them. Although the compiler will usually lay the blocks out exactly as you see them, there is no rule about it, because a particular machine architecture and/or operating environment may have explicit support for private and protected that might require those blocks to be placed in special memory locations. The language specification doesn't want to restrict this kind of advantage.

    Access specifiers are part of the structure and don't affect the objects created from the structure. All of the access specification information disappears before the program is run; generally this happens during compilation. In a running program, objects become "regions of storage" and nothing more. Thus, if you really want to, you can break all the rules and access memory directly, as you can in C. C++ is not designed to prevent you from doing unwise things. It just provides you with a much easier, highly desirable alternative.
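    As an illustration (not something to imitate), the following sketch shows this: it assumes the compiler lays out this particular class in the obvious way, so a struct with a matching layout can read the private member through a cast. Nothing guarantees the layouts match, which is exactly the point; the protection exists only at compile time.

    ```cpp
    #include <cassert>

    class Hidden {
      int secret;            // private by default in a class
    public:
      Hidden() { secret = 42; }
    };

    // Assumed to have the same layout as Hidden -- an
    // implementation detail, not something the language promises:
    struct Peek {
      int secret;
    };

    int main() {
      Hidden h;
      // At run-time the object is just a region of storage:
      Peek* p = (Peek*)&h;
      assert(p->secret == 42);
    }
    ```

    The cast compiles and (on typical implementations) works because no access information survives into the object itself.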

    In general, it's not a good idea to depend on anything that's implementation-specific when you're writing a program. When you must, those specifics should be encapsulated inside a structure, so any porting changes are focused in one place.

    The class

    Access control is often referred to as implementation hiding. Including functions within structures (encapsulation) produces a data type with characteristics and behaviors, but access control puts boundaries within that data type, for two important reasons. The first is to establish what users can and can't use. You can build your internal mechanisms into the structure without worrying that users will think it's part of the interface they should be using.

    This feeds directly into the second reason, which is to separate the interface from the implementation. If the structure is used in a set of programs, but users can't do anything but send messages to the public interface, then you can change anything that's private without requiring modifications to their code.

    Encapsulation and implementation hiding together create something more than a C struct. We're now in the world of object-oriented programming, where a structure is describing a class of objects, as you would describe a class of fishes or a class of birds: Any object belonging to this class will share these characteristics and behaviors. That's what the structure declaration has become, a description of the way all objects of this type will look and act.

    In the original OOP language, Simula-67, the keyword class was used to describe a new data type. This apparently inspired Stroustrup to choose the same keyword for C++, to emphasize that this was the focal point of the whole language, the creation of new data types that are more than C structs with functions. This certainly seems like adequate justification for a new keyword.

    However, the use of class in C++ comes close to being an unnecessary keyword. It's identical to the struct keyword in absolutely every way except one: class defaults to private, whereas struct defaults to public. Here are two structures that produce the same result:

    //: C05:Class.cpp {O}
    // Similarity of struct and class
    
    struct A {
    private:
      int i, j, k;
    public:
      int f();
      void g();
    };
    
    int A::f() { return i + j + k; }
    
    void A::g() { i = j = k = 0; }
    
    // Identical results are produced with:
    
    class B {
      int i, j, k;
    public:
      int f();
      void g();
    };
    
    int B::f() { return i + j + k; }
    
    void B::g() { i = j = k = 0; }
    ///:~

    The class is the fundamental OOP concept in C++. It is one of the keywords that will not be set in bold in this book; bolding becomes annoying with a word repeated as often as "class." The shift to classes is so important that I suspect Stroustrup's preference would have been to throw struct out altogether, but the need for backwards compatibility of course wouldn't allow it.

    Many people prefer a style of creating classes that is more struct-like than class-like, because you override the "default-to-private" behavior of the class by starting out with public elements:

    class X {
    public:
      void interface_function();
    private:
      void private_function();
      int internal_representation;
    };

    The logic behind this is that it makes more sense for the reader to see the members they are concerned with first, then they can ignore anything that says private. Indeed, the only reasons all the other members must be declared in the class at all are so the compiler knows how big the objects are and can allocate them properly, and so it can guarantee consistency.

    The examples in this book, however, will put the private members first, like this:

    class X {
      void private_function();
      int internal_representation;
    public:
      void interface_function();
    };

    Some people even go to the trouble of mangling their own private names:

    class Y {
    public:
      void f();
    private:
      int mX; // "Self-mangled" name
    };

    Because mX is already hidden in the scope of Y, the m is unnecessary. However, in projects with many global variables (something you should strive to avoid, but is sometimes inevitable in existing projects) it is helpful to be able to distinguish, inside a member function definition, which data is global and which is a member.

    Modifying Stash to use access control

    It makes sense to take the examples from Chapter 1 and modify them to use classes and access control. Notice how the user portion of the interface is now clearly distinguished, so there's no possibility of users accidentally manipulating a part of the class that they shouldn't.

    //: C05:Stash.h
    // Converted to use access control
    #ifndef STASH_H_
    #define STASH_H_
    
    class Stash {
      int size;      // Size of each space
      int quantity;  // Number of storage spaces
      int next;      // Next empty space
      // Dynamically allocated array of bytes:
      unsigned char* storage;
      void inflate(int increase);
    public:
      void initialize(int Size);
      void cleanup();
      int add(void* element);
      void* fetch(int index);
      int count();
    };
    #endif // STASH_H_ ///:~

    The inflate( ) function has been made private because it is used only by the add( ) function and is thus part of the underlying implementation, not the interface. This means that, sometime later, you can change the underlying implementation to use a different system for memory management.

    Other than the name of the include file, the above header is the only thing that's been changed for this example. The implementation file and test file are the same.
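    For reference, here is a condensed, self-contained sketch of what that unchanged implementation and a small test might look like. The details (the use of malloc( )/realloc( ) and the inflation increment of 10) are assumptions in the spirit of the Chapter 1 version, not a copy of it:

    ```cpp
    #include <cassert>
    #include <cstdlib>
    #include <cstring>
    using namespace std;

    class Stash {
      int size;      // Size of each space
      int quantity;  // Number of storage spaces
      int next;      // Next empty space
      unsigned char* storage;
      void inflate(int increase);
    public:
      void initialize(int Size);
      void cleanup();
      int add(void* element);
      void* fetch(int index);
      int count();
    };

    void Stash::initialize(int Size) {
      size = Size;
      quantity = 0;
      next = 0;
      storage = 0;
    }

    void Stash::cleanup() { free(storage); }

    // Private: only add() calls this, so the memory scheme
    // can change without touching the interface.
    void Stash::inflate(int increase) {
      quantity += increase;
      storage = (unsigned char*)realloc(storage, quantity * size);
    }

    int Stash::add(void* element) {
      if(next >= quantity) inflate(10);
      memcpy(&storage[next * size], element, size);
      return next++;
    }

    void* Stash::fetch(int index) {
      if(index >= next) return 0;
      return &storage[index * size];
    }

    int Stash::count() { return next; }

    int main() {
      Stash s;
      s.initialize(sizeof(int));
      for(int i = 0; i < 5; i++)
        s.add(&i);                     // copies the value of i
      assert(s.count() == 5);
      assert(*(int*)s.fetch(3) == 3);
      s.cleanup();
    }
    ```

    Client code compiles against only the public members; making inflate( ) private changes nothing here.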

    Modifying Stack to use
    access control

    As a second example, hereís the Stack turned into a class. Now the nested data structure is private, which is nice because it ensures that the user will neither have to look at it nor be able to depend on the internal representation of the Stack:

    //: C05:Stack.h
    // Nested structs via linked list
    #ifndef STACK_H_
    #define STACK_H_
    
    class Stack {
      struct link {
        void* data;
        link* next;
        void initialize(void* Data, link* Next);
      } * head;
    public:
      void initialize();
      void push(void* Data);
      void* peek();
      void* pop();
      void cleanup();
    };
    #endif // STACK_H_ ///:~

    As before, the implementation doesn't change and so is not repeated here. The test, too, is identical. The only thing that's been changed is the robustness of the class interface. The real value of access control is during development, to prevent you from crossing boundaries. In fact, the compiler is the only one that knows about the protection level of class members. There is no information mangled into the member name that carries through to the linker. All the protection checking is done by the compiler; it has vanished by run time.

    Notice that the interface presented to the user is now truly that of a push-down stack. It happens to be implemented as a linked list, but you can change that without affecting what the user interacts with, or (more importantly) a single line of client code.
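    A minimal sketch of such client code follows, bundled with a compact linked-list implementation (an assumption standing in for the unchanged one from Chapter 1) so the example is self-contained:

    ```cpp
    #include <cassert>
    #include <cstdlib>
    using namespace std;

    class Stack {
      struct link {     // private: invisible to clients
        void* data;
        link* next;
      } * head;
    public:
      void initialize() { head = 0; }
      void push(void* Data) {
        link* n = (link*)malloc(sizeof(link));
        n->data = Data;
        n->next = head;
        head = n;
      }
      void* peek() { return head ? head->data : 0; }
      void* pop() {
        if(head == 0) return 0;
        void* result = head->data;
        link* old = head;
        head = head->next;
        free(old);
        return result;
      }
      // Assumes no null pointers were stored:
      void cleanup() { while(pop()) ; }
    };

    int main() {
      Stack s;
      s.initialize();
      int a = 1, b = 2;
      s.push(&a);
      s.push(&b);
      assert(s.peek() == &b);   // last in, first out
      assert(s.pop() == &b);
      assert(s.pop() == &a);
      assert(s.pop() == 0);     // empty stack yields null
      s.cleanup();
    }
    ```

    main( ) sees only push( ), peek( ), pop( ); the link structure could be replaced by an array without touching this code.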

    Handle classes

    Access control in C++ allows you to separate interface from implementation, but the implementation hiding is only partial. The compiler must still see the declarations for all parts of an object in order to create and manipulate it properly. You could imagine a programming language that requires only the public interface of an object and allows the private implementation to be hidden, but C++ performs type checking statically (at compile time) as much as possible. This means that you'll learn as early as possible if there's an error. It also means your program is more efficient. However, including the private implementation has two effects: The implementation is visible even if you can't easily access it, and it can cause needless recompilation.

    Visible implementation

    Some projects cannot afford to have their implementation visible to the end user. It may show strategic information in a library header file that the company doesn't want available to competitors. You may be working on a system where security is an issue (an encryption algorithm, for example) and you don't want to expose any clues in a header file that might enable people to crack the code. Or you may be putting your library in a "hostile" environment, where the programmers will directly access the private components anyway, using pointers and casting. In all these situations, it's valuable to have the actual structure compiled inside an implementation file rather than exposed in a header file.

    Reducing recompilation

    The project manager in your programming environment will cause a recompilation of a file if that file is touched or if another file it's dependent upon (that is, an included header file) is touched. This means that any time you make a change to a class, whether it's to the public interface or the private implementation, you'll force a recompilation of anything that includes that header file. For a large project in its early stages this can be very unwieldy because the underlying implementation may change often; if the project is very big, the time for compiles can prohibit rapid turnaround.

    The technique to solve this is sometimes called handle classes or the "Cheshire Cat": everything about the implementation disappears except for a single pointer, the "smile." The pointer refers to a structure whose definition is in the implementation file along with all the member function definitions. Thus, as long as the interface is unchanged, the header file is untouched. The implementation can change at will, and only the implementation file needs to be recompiled and relinked with the project.

    Here's a simple example demonstrating the technique. The header file contains only the public interface and a single pointer of an incompletely specified class:

    //: C05:Handle.h
    // Handle classes
    #ifndef HANDLE_H_
    #define HANDLE_H_
    
    class Handle {
      struct cheshire; // Class declaration only
      cheshire* smile;
    public:
      void initialize();
      void cleanup();
      int read();
      void change(int);
    };
    #endif // HANDLE_H_ ///:~

    This is all the client programmer is able to see. The line

    struct cheshire;

    is an incomplete type specification or a class declaration. (A class definition includes the body of the class.) It tells the compiler that cheshire is a structure name, but nothing about the struct. This is only enough information to create a pointer to the struct; you can't create an object until the structure body has been provided. In this technique, that body contains the underlying implementation and is hidden away in the implementation file:

    //: C05:Handle.cpp {O}
    // Handle implementation
    #include <cstdlib>
    #include "../require.h"
    #include "Handle.h"
    using namespace std;
    
    // Define Handle's implementation:
    struct Handle::cheshire {
      int i;
    };
    
    void Handle::initialize() {
      smile = (cheshire*)malloc(sizeof(cheshire));
      require(smile != 0);
      smile->i = 0;
    }
    
    void Handle::cleanup() {
      free(smile);
    }
    
    int Handle::read() {
      return smile->i;
    }
    
    void Handle::change(int x) {
      smile->i = x;
    } ///:~

    cheshire is a nested structure, so it must be defined with scope resolution:

    struct Handle::cheshire {

    In Handle::initialize( ), storage is allocated for a cheshire structure, and in Handle::cleanup( ) this storage is released. This storage is used in lieu of all the data elements you'd normally put into the private section of the class. When you compile Handle.cpp, this structure definition is hidden away in the object file where no one can see it. If you change the elements of cheshire, the only file that must be recompiled is Handle.cpp because the header file is untouched.

    The use of Handle is like the use of any class: Include the header, create objects, and send messages.

    //: C05:Usehandl.cpp
    //{L} Handle
    // Use the Handle class
    #include "Handle.h"
    
    int main() {
      Handle u;
      u.initialize();
      u.read();
      u.change(1);
      u.cleanup();
    } ///:~

    The only thing the client programmer can access is the public interface, so as long as the implementation is the only thing that changes, this file never needs recompilation. Thus, although this isn't perfect implementation hiding, it's a big improvement.

    Summary

    Access control in C++ is not an object-oriented feature, but it gives valuable control to the creator of a class. The users of the class can clearly see exactly what they can use and what to ignore. More important, though, is the ability to ensure that no user becomes dependent on any part of the underlying implementation of a class. If you know this as the creator of the class, you can change the underlying implementation with the knowledge that no client programmer will be affected by the changes because they can't access that part of the class.

    When you have the ability to change the underlying implementation, you can not only improve your design at some later time, but you also have the freedom to make mistakes. No matter how carefully you plan and design, you'll make mistakes. Knowing that it's relatively safe to make these mistakes means you'll be more experimental, you'll learn faster, and you'll finish your project sooner.

    The public interface to a class is what the user does see, so that is the most important part of the class to get "right" during analysis and design. But even that allows you some leeway for change. If you don't get the interface right the first time, you can add more functions, as long as you don't remove any that client programmers have already used in their code.

    Exercises

  1. Create a class with public, private, and protected data members and function members. Create an object of this class and see what kind of compiler messages you get when you try to access all the class members.
  2. Create a class and a global friend function that manipulates the private data in the class.
  3. Modify cheshire in Handle.cpp, and verify that your project manager recompiles and relinks only this file, but doesn't recompile Usehandl.cpp.
    6: Initialization
    & cleanup

    Chapter 1 made a significant improvement in library use by taking all the scattered components of a typical C library and encapsulating them into a structure (an abstract data type, called a class from now on).

    This not only provides a single unified point of entry into a library component, but it also hides the names of the functions within the class name. In Chapter 2, access control (implementation hiding) was introduced. This gives the class designer a way to establish clear boundaries for determining what the user is allowed to manipulate and what is off limits. It means the internal mechanisms of a data type's operation are under the control and discretion of the class designer, and it's clear to users what members they can and should pay attention to.

    Together, encapsulation and implementation hiding make a significant step in improving the ease of library use. The concept of "new data type" they provide is better in some ways than the existing built-in data types inherited from C. The C++ compiler can now provide type-checking guarantees for that data type and thus ensure a level of safety when that data type is being used.

    When it comes to safety, however, there's a lot more the compiler can do for us than C provides. In this and future chapters, you'll see additional features engineered into C++ that make the bugs in your program almost leap out and grab you, sometimes before you even compile the program, but usually in the form of compiler warnings and errors. For this reason, you will soon get used to the unlikely-sounding scenario that a C++ program that compiles usually runs right the first time.

    Two of these safety issues are initialization and cleanup. A large fraction of C bugs occur when the programmer forgets to initialize or clean up a variable. This is especially true with libraries, when users don't know how to initialize a struct, or even that they must. (Libraries often do not include an initialization function, so the user is forced to initialize the struct by hand.) Cleanup is a special problem because C programmers are used to forgetting about variables once they are finished, so any cleaning up that may be necessary for a library's struct is often missed.

    In C++ the concept of initialization and cleanup is essential to making library use easy and to eliminating the many subtle bugs that occur when the user forgets to perform these activities. This chapter examines the features in C++ that help guarantee proper initialization and cleanup.

    Guaranteed initialization with the constructor

    Both the Stash and Stack classes have had functions called initialize( ), which hint that it should be called before using the object in any other way. Unfortunately, this means the user must ensure proper initialization. Users are prone to miss details like initialization in their headlong rush to make your amazing library solve their problem. In C++ initialization is too important to leave to the user. The class designer can guarantee initialization of every object by providing a special function called the constructor. If a class has a constructor, the compiler automatically calls that constructor at the point an object is created, before users can even get their hands on the object. The constructor call isn't even an option for the user; it is performed by the compiler at the point the object is defined.

    The next challenge is what to name this function. There are two issues. The first is that any name you use is something that can potentially clash with a name you might like to use as a member in the class. The second is that because the compiler is responsible for calling the constructor, it must always know which function to call. The solution Stroustrup chose seems the easiest and most logical: The name of the constructor is the same as the name of the class. It makes sense that such a function will be called automatically on initialization.

    Here's a simple class with a constructor:

    class X {
      int i;
    public:
      X(); // Constructor
    };

    Now, when an object is defined,

    void f() {
      X a;
      // ...
    }

    the same thing happens as if a were an int: Storage is allocated for the object. But when the program reaches the point of execution where a is defined, the constructor is called automatically. That is, the compiler quietly inserts the call to X::X( ) for the object a at its point of definition. Like any member function, the first (secret) argument to the constructor is the address of the object for which it is being called.

    Like any function, the constructor can have arguments to allow you to specify how an object is created, give it initialization values, and so on. Constructor arguments provide you with a way to guarantee that all parts of your object are initialized to appropriate values. For example, if the class Tree has a constructor that takes a single integer argument denoting the height of the tree, you must then create a tree object like this:

    Tree t(12); // 12-foot tree

    If Tree(int) is your only constructor, then the compiler won't let you create an object any other way. (We'll look at multiple constructors and different ways to call constructors in the next chapter.)

    That's really all there is to a constructor: It's a specially named function that is called automatically by the compiler for every object. However, it eliminates a large class of problems and makes the code easier to read. In the preceding code fragment, for example, you don't see an explicit function call to some initialize( ) function that is conceptually separate from definition. In C++, definition and initialization are unified concepts; you can't have one without the other.

    Both the constructor and destructor are very unusual types of functions: They have no return value. This is distinctly different from a void return value, where the function returns nothing but you still have the option to make it something else. Constructors and destructors return nothing and you don't have an option. The acts of bringing an object into and out of the program are special, like birth and death, and the compiler always makes the function calls itself, to make sure they happen. If there were a return value, and if you could select your own, the compiler would somehow have to know what to do with the return value, or the user would have to explicitly call constructors and destructors, which would eliminate their safety.

    Guaranteed cleanup with the destructor

    As a C programmer, you often think about the importance of initialization, but it's rarer to think about cleanup. After all, what do you need to do to clean up an int? Just forget about it. However, with libraries, just "letting go" of an object once you're done with it is not so safe. What if it modifies some piece of hardware, or puts something on the screen, or allocates storage on the heap? If you just forget about it, your object never achieves closure upon its exit from this world. In C++, cleanup is as important as initialization and is therefore guaranteed with the destructor.

    The syntax for the destructor is similar to that for the constructor: The class name is used for the name of the function. However, the destructor is distinguished from the constructor by a leading tilde (~). In addition, the destructor never has any arguments because destruction never needs any options. Here's the declaration for a destructor:

    class Y {
    public:
      ~Y();
    };

    The destructor is called automatically by the compiler when the object goes out of scope. You can see where the constructor gets called by the point of definition of the object, but the only evidence for a destructor call is the closing brace of the scope that surrounds the object. Yet the destructor is called, even when you use goto to jump out of a scope. (goto still exists in C++, for backward compatibility with C and for the times when it comes in handy.) You should note that a nonlocal goto, implemented by the Standard C library functions setjmp( ) and longjmp( ), doesn't cause destructors to be called. (This is the specification, even if your compiler doesn't implement it that way. Relying on a feature that isn't in the specification means your code is nonportable.)
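    A small sketch makes the goto behavior concrete; the counter here is just a hypothetical way to observe the destructor call:

    ```cpp
    #include <cassert>

    static int destructorCalls = 0;

    class Noisy {
    public:
      ~Noisy() { ++destructorCalls; }
    };

    int main() {
      {
        Noisy n;
        goto out;  // jump out of the scope that owns n
      }
    out:
      // n's destructor still ran on the way out of its scope:
      assert(destructorCalls == 1);
    }
    ```

    Jumping out of a scope is fine; jumping forward into a scope, past the definition of an object with a destructor, is rejected by the compiler.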

    Here's an example demonstrating the features of constructors and destructors you've seen so far:

    //: C06:Constr1.cpp
    // Constructors & destructors
    #include <cstdio>
    using namespace std;
    
    class Tree {
      int height;
    public:
      Tree(int initialHeight);  // Constructor
      ~Tree();  // Destructor
      void grow(int years);
      void printsize();
    };
    
    Tree::Tree(int initialHeight) {
      height = initialHeight;
    }
    
    Tree::~Tree() {
      puts("inside Tree destructor");
      printsize();
    }
    
    void Tree::grow(int years) {
      height += years;
    }
    
    void Tree::printsize() {
      printf("Tree height is %d\n", height);
    }
    
    int main() {
      puts("before opening brace");
      {
        Tree t(12);
        puts("after Tree creation");
        t.printsize();
        t.grow(4);
        puts("before closing brace");
      }
      puts("after closing brace");
    } ///:~

    Here's the output of the above program:

    before opening brace
    after Tree creation
    Tree height is 12
    before closing brace
    inside Tree destructor
    Tree height is 16
    after closing brace

    You can see that the destructor is automatically called at the closing brace of the scope that encloses it.

    Elimination of the definition block

    In C, you must always define all the variables at the beginning of a block, after the opening brace. This is not an uncommon requirement in programming languages (Pascal is another example), and the reason given has always been that it's "good programming style." On this point, I have my suspicions. It has always seemed inconvenient to me, as a programmer, to pop back to the beginning of a block every time I need a new variable. I also find code more readable when the variable definition is close to its point of use.

    Perhaps these arguments are stylistic. In C++, however, there's a significant problem in being forced to define all objects at the beginning of a scope. If a constructor exists, it must be called when the object is created. However, if the constructor takes one or more initialization arguments, how do you know you will have that initialization information at the beginning of a scope? In the general programming situation, you won't. Because C has no concept of private, this separation of definition and initialization is no problem. However, C++ guarantees that when an object is created, it is simultaneously initialized. This ensures you will have no uninitialized objects running around in your system. C doesn't care; in fact, C encourages this practice by requiring you to define variables at the beginning of a block before you necessarily have the initialization information.

    Generally C++ will not allow you to create an object before you have the initialization information for the constructor, so you don't have to define variables at the beginning of a scope. In fact, the style of the language would seem to encourage the definition of an object as close to its point of use as possible. In C++, any rule that applies to an "object" automatically refers to an object of a built-in type, as well. This means that any class object or variable of a built-in type can also be defined at any point in a scope. It also means that you can wait until you have the information for a variable before defining it, so you can always define and initialize at the same time:

    //: C06:Definit.cpp
    // Defining variables anywhere
    #include <cstdio>
    #include <cstdlib>
    #include "../require.h"
    using namespace std;
    
    class G {
      int i;
    public:
      G(int I);
    };
    
    G::G(int I) { i = I; }
    
    int main() {
      #define SZ 100
      char buf[SZ];
      printf("initialization value? ");
      char* retval = fgets(buf, SZ, stdin);
      require(retval != 0);
      int x = atoi(buf);
      int y = x + 3;
      G g(y);
    } ///:~

    You can see that buf is defined, then some code is executed, then x is defined and initialized using a function call, then y and g are defined. C, of course, would never allow a variable to be defined anywhere except at the beginning of the scope.

    Generally, you should define variables as close to their point of use as possible, and always initialize them when they are defined. (This is a stylistic suggestion for built-in types, where initialization is optional.) This is a safety issue. By reducing the duration of the variable's availability within the scope, you are reducing the chance it will be misused in some other part of the scope. In addition, readability is improved because the reader doesn't have to jump back and forth to the beginning of the scope to know the type of a variable.

    for loops

    In C++, you will often see a for loop counter defined right inside the for expression:

    for(int j = 0; j < 100; j++) {
      printf("j = %d\n", j);
    }

    for(int i = 0; i < 100; i++)
      printf("i = %d\n", i);

    The above statements are important special cases that cause confusion for new C++ programmers.

    The variables i and j are defined directly inside the for expression (which you cannot do in C). They are then available for use in the for loop. It's a very convenient syntax because the context removes all question about the purpose of i and j, so you don't need to use such ungainly names as i_loop_counter for clarity.

    The problem is the lifetime of the variables, which was formerly determined by the enclosing scope. Here a design decision was made from a compiler-writer's view of what is logical, because as a programmer you obviously intend i to be used only inside the statement(s) of the for loop. Unfortunately, however, if you previously took this approach and said

    for(int i = 0; i < 100; i++)
      printf("i = %d\n", i);
    // ....
    for(int i = 0; i < 100; i++) {
      printf("i = %d\n", i);
    }

    (with or without curly braces) within the same scope, compilers written for the old specification gave you a multiple-definition error for i. The Standard C++ specification says that the lifetime of a loop counter defined within the control expression of a for loop lasts until the end of the controlled statement, so the statements above will work. (However, not all compilers may support this yet, and you may encounter code based on the old rule.) If the transition causes errors, the compiler will point them out to you; the fix requires only a small edit. Watch out, though, for local variables that hide variables in the enclosing scope.

    I find small scopes an indicator of good design. If a single function runs to several pages, perhaps you're trying to do too much with that function. Smaller, more granular functions are not only more useful, but also make it easier to find bugs.

    Storage allocation

    A variable can now be defined at any point in a scope, so it might seem initially that the storage for a variable may not be allocated until its point of definition. It's more likely that the compiler will follow the practice in C of allocating all the storage for a block at the opening brace of that block. It doesn't matter because, as a programmer, you can't get the storage (a.k.a. the object) until it has been defined. Although the storage is allocated at the beginning of the block, the constructor call doesn't happen until the sequence point where the object is defined, because the identifier isn't available until then. The compiler even checks to make sure you don't put the object definition (and thus the constructor call) where the sequence point only conditionally passes through it, such as in a switch statement or somewhere a goto can jump past it. Uncommenting the statements in the following code will generate a warning or an error:

    //: C06:Nojump.cpp {O}
    // Can't jump past constructors
    
    class X {
    public:
      X() {}
    };
    
    void f(int i) {
      if(i < 10) {
       //! goto jump1; // Error: goto bypasses init
      }
      X x1;  // Constructor called here
     jump1:
      switch(i) {
        case 1 :
          X x2;  // Constructor called here
          break;
      //! case 2 : // Error: case bypasses init
          X x3;  // Constructor called here
          break;
      }
    } ///:~

    In the above code, both the goto and the switch can potentially jump past the sequence point where a constructor is called. That object will then be in scope even if the constructor hasn't been called, so the compiler gives an error message. This once again guarantees that an object cannot be created unless it is also initialized.

    All the storage allocation discussed here happens, of course, on the stack. The storage is allocated by the compiler by moving the stack pointer "down" (a relative term, which may indicate an increase or decrease of the actual stack pointer value, depending on your machine). Objects can also be allocated on the heap, but that's the subject of Chapter 11.

    Stash with constructors and destructors

    The examples from previous chapters have obvious functions that map to constructors and destructors: initialize( ) and cleanup( ). Here's the Stash header using constructors and destructors:

    //: C06:Stash3.h
    // With constructors & destructors
    #ifndef STASH3_H_
    #define STASH3_H_
    
    class Stash {
      int size;      // Size of each space
      int quantity;  // Number of storage spaces
      int next;      // Next empty space
      // Dynamically allocated array of bytes:
      unsigned char* storage;
      void inflate(int increase);
    public:
      Stash(int Size);
      ~Stash();
      int add(void* element);
      void* fetch(int index);
      int count();
    };
    #endif // STASH3_H_ ///:~

    The only member function definitions that are changed are initialize( ) and cleanup( ), which have been replaced with a constructor and destructor:

    //: C06:Stash3.cpp {O}
    // Constructors & destructors
    #include <cstdlib>
    #include <cstring>
    #include <cstdio>
    #include "../require.h"
    #include "Stash3.h"
    using namespace std;
    
    Stash::Stash(int Size) {
      size = Size;
      quantity = 0;
      storage = 0;
      next = 0;
    }
    
    Stash::~Stash() {
      if(storage) {
        puts("freeing storage");
        free(storage);
      }
    }
    
    int Stash::add(void* element) {
      if(next >= quantity) // Enough space left?
        inflate(100);
      // Copy element into storage,
      // starting at next empty space:
      memcpy(&(storage[next * size]),
             element, size);
      next++;
      return(next - 1); // Index number
    }
    
    void* Stash::fetch(int index) {
      if(index >= next || index < 0)
        return 0;  // Index out of bounds
      // Produce pointer to desired element:
      return &(storage[index * size]);
    }
    
    int Stash::count() {
      return next; // Number of elements in Stash
    }
    
    void Stash::inflate(int increase) {
      void* v =
        realloc(storage, (quantity+increase)*size);
      require(v);  // Was it successful?
      storage = (unsigned char*)v;
      quantity += increase;
    } ///:~

    Notice, in the following test program, how the definitions for Stash objects appear right before they are needed, and how the initialization appears as part of the definition, in the constructor argument list:

    //: C06:Stshtst3.cpp
    //{L} Stash3
    // Constructors & destructors
    #include <cstdio>
    #include "../require.h"
    #include "Stash3.h"
    using namespace std;
    #define BUFSIZE 80
    
    int main() {
      Stash intStash(sizeof(int));
      for(int j = 0; j < 100; j++)
        intStash.add(&j);
    
      FILE* file = fopen("Stshtst3.cpp", "r");
      require(file);
      // Holds 80-character strings:
      Stash stringStash(sizeof(char) * BUFSIZE);
      char buf[BUFSIZE];
      while(fgets(buf, BUFSIZE, file))
        stringStash.add(buf);
      fclose(file);
    
      for(int k = 0; k < intStash.count(); k++)
        printf("intStash.fetch(%d) = %d\n", k,
               *(int*)intStash.fetch(k));
    
      for(int i = 0; i < stringStash.count(); i++)
        printf("stringStash.fetch(%d) = %s",
               i, (char*)stringStash.fetch(i));
      putchar('\n');
    } ///:~

    Also notice how the cleanup( ) calls have been eliminated, but the destructors are still automatically called when intStash and stringStash go out of scope.

    Stack with constructors & destructors

    Reimplementing the linked list (inside Stack) with constructors and destructors turns up a significant problem. Here's the modified header file:

    //: C06:Stack3.h
    // With constructors/destructors
    #ifndef STACK3_H_
    #define STACK3_H_
    
    class Stack {
      struct link {
        void* data;
        link* next;
        void initialize(void* Data, link* Next);
      } * head;
    public:
      Stack();
      ~Stack();
      void push(void* Data);
      void* peek();
      void* pop();
    };
    #endif // STACK3_H_ ///:~

    Notice that although Stack has a constructor and destructor, the nested class link does not. This has nothing to do with the fact that it's nested. The problem arises when it is used:

    //: C06:Stack3.cpp {O}
    // Constructors/destructors
    #include <cstdlib>
    #include "../require.h"
    #include "Stack3.h"
    using namespace std;
    
    void Stack::link::initialize(
      void* Data, link* Next) {
      data = Data;
      next = Next;
    }
    
    Stack::Stack() { head = 0; }
    
    void Stack::push(void* Data) {
      // Can't use a constructor with malloc!
      link* newlink = (link*)malloc(sizeof(link));
      require(newlink);
      newlink->initialize(Data, head);
      head = newlink;
    }
    
    void* Stack::peek() { return head->data; }
    
    void* Stack::pop() {
      if(head == 0) return 0;
      void* result = head->data;
      link* oldHead = head;
      head = head->next;
      free(oldHead);
      return result;
    }
    
    Stack::~Stack() {
      link* cursor = head;
      while(head) {
        cursor = cursor->next;
        free(head->data); // Assumes malloc!
        free(head);
        head = cursor;
      }
    } ///:~

    link is created inside Stack::push( ), but it's created on the heap, and there's the rub. How do you create an object on the heap if it has a constructor? So far we've been saying, "OK, here's a piece of memory on the heap, and I want you to pretend that it's actually a real object." But the constructor doesn't allow us to hand it a memory address upon which to build an object. The creation of an object is critical, and the C++ constructor wants to be in control of the whole process to keep things safe. There is an easy solution to this problem, operator new, which we'll look at in Chapter 11; for now the C approach to dynamic allocation will have to suffice. Because the allocation and cleanup are hidden within Stack, as part of the underlying implementation, you don't see the effect in the test program:

    //: C06:Stktst3.cpp
    //{L} Stack3
    // Constructors/destructors
    #include <cstdio>
    #include <cstdlib>
    #include <cstring>
    #include "../require.h"
    #include "Stack3.h"
    using namespace std;
    
    int main(int argc, char* argv[]) {
      requireArgs(argc,  2); // File name is argument
      FILE* file = fopen(argv[1], "r");
      require(file);
      #define BUFSIZE 100
      char buf[BUFSIZE];
      Stack textlines;  // Constructor called here
      // Read file and store lines in the Stack:
      while(fgets(buf, BUFSIZE, file)) {
        char* string =
          (char*)malloc(strlen(buf) + 1);
        require(string);
        strcpy(string, buf);
        textlines.push(string);
      }
      // Pop lines from the Stack and print them:
      char* s;
      while((s = (char*)textlines.pop()) != 0) {
        printf("%s", s); free(s); 
      }
    }  // Destructor called here ///:~

    The constructor and destructor for textlines are called automatically, so the user of the class can focus on what to do with the object and not worry about whether or not it will be properly initialized and cleaned up.

    Aggregate initialization

    An aggregate is just what it sounds like: a bunch of things clumped together. This definition includes aggregates of mixed types, like structs and classes. An array is an aggregate of a single type.

    Initializing aggregates can be error-prone and tedious. C++ aggregate initialization makes it much safer. When you create an object that's an aggregate, all you must do is make an assignment, and the initialization will be taken care of by the compiler. This assignment comes in several flavors, depending on the type of aggregate you're dealing with, but in all cases the elements in the assignment must be surrounded by curly braces. For an array of built-in types this is quite simple:

    int a[5] = { 1, 2, 3, 4, 5 };

    If you try to give more initializers than there are array elements, the compiler gives an error message. But what happens if you give fewer initializers, such as

    int b[6] = {0};

    Here, the compiler will use the first initializer for the first array element, and then use zero for all the elements without initializers. Notice that this initialization behavior doesn't occur if you define an array without a list of initializers. So the above expression is a very succinct way to initialize an array to zero, without using a for loop and without any possibility of an off-by-one error. (Depending on the compiler, it may also be more efficient than a for loop.)

    A second shorthand for arrays is automatic counting, where you let the compiler determine the size of the array based on the number of initializers:

    int c[] = { 1, 2, 3, 4 };

    Now if you decide to add another element to the array, you simply add another initializer. If you can set your code up so it needs to be changed in only one spot, you reduce the chance of errors during modification. But how do you determine the size of the array? The expression sizeof c / sizeof *c (size of the entire array divided by the size of the first element) does the trick in a way that doesn't need to be changed if the array size changes:

    for(int i = 0; i < sizeof c / sizeof *c; i++)
      c[i]++;

    Because structures are also aggregates, they can be initialized in a similar fashion. Because a C-style struct has all its members public, they can be assigned directly:

    struct X {
      int i;
      float f;
      char c;
    };

    X x1 = { 1, 2.2, 'c' };

    If you have an array of such objects, you can initialize them by using a nested set of curly braces for each object:

    X x2[3] = { {1, 1.1, 'a'}, {2, 2.2, 'b'} };

    Here, the third object is initialized to zero.

    If any of the data members are private, or even if everything's public but there's a constructor, things are different. In the above examples, the initializers are assigned directly to the elements of the aggregate, but constructors are a way of forcing initialization to occur through a formal interface. Here, the constructors must be called to perform the initialization. So if you have a struct that looks like this,

    struct Y {
      float f;
      int i;
      Y(int A); // Presumably assigns A to i
    };

    you must indicate constructor calls. The best approach is the explicit one, as follows:

    Y y2[] = { Y(1), Y(2), Y(3) };

    You get three objects and three constructor calls. Any time you have a constructor, whether it's a struct with all members public or a class with private data members, all the initialization must go through the constructor, even if you're using aggregate initialization.

    Here's a second example showing multiple constructor arguments:

    //: C06:Multiarg.cpp
    // Multiple constructor arguments
    // with aggregate initialization
    
    class X {
      int i, j;
    public:
      X(int I, int J) {
        i = I;
        j = J;
      }
    };
    
    int main() {
      X xx[] = { X(1,2), X(3,4), X(5,6), X(7,8) };
    } ///:~

    Notice that it looks like an explicit but unnamed constructor is called for each object in the array.

    Default constructors

    A default constructor is one that can be called with no arguments. A default constructor is used to create a "vanilla object," but it's also very important when the compiler is told to create an object but isn't given any details. For example, if you take Y and use it in a definition like this,

    Y y4[2] = { Y(1) };

    the compiler will complain that it cannot find a default constructor. The second object in the array wants to be created with no arguments, and that's where the compiler looks for a default constructor. In fact, if you simply define an array of Y objects,

    Y y5[7];

    or an individual object,

    Y y;

    the compiler will complain because it must have a default constructor to initialize every object in the array. (Remember, if you have a constructor the compiler ensures it is always called, regardless of the situation.)

    The default constructor is so important that if (and only if) there are no constructors for a structure (struct or class), the compiler will automatically create one for you. So this works:

    class Z {
      int i; // private
    }; // No constructor

    Z z, z2[10];

    If any constructors are defined, however, and there's no default constructor, the above object definitions will generate compile-time errors.

    You might think that the default constructor should do some intelligent initialization, like setting all the memory for the object to zero. But it doesn't; that would add extra overhead, and it would be out of the programmer's control. This would mean, for example, that if you compiled C code under C++, the effect would be different. If you want the memory to be initialized to zero, you must do it yourself.

    The automatic creation of default constructors was not simply a feature to make life easier for new C++ programmers. It's virtually required to aid backward compatibility with existing C code, which is a critical issue in C++. In C, it's not uncommon to create an array of structs. Without the default constructor, this would cause a compile-time error in C++.

    If you had to modify your C code to recompile it under C++ just because of stylistic issues, you might not bother. When you move C code to C++, you will almost always have new compile-time error messages, but those errors are because of genuine bad C code that the C++ compiler can detect because of its stronger rules. In fact, a good way to find obscure errors in a C program is to run it through a C++ compiler.

    Summary

    The seemingly elaborate mechanisms provided by C++ should give you a strong hint about the critical importance placed on initialization and cleanup in the language. As Stroustrup was designing C++, one of the first observations he made about productivity in C was that a very significant portion of programming problems are caused by improper initialization of variables. These kinds of bugs are very hard to find, and similar issues apply to improper cleanup. Because constructors and destructors allow you to guarantee proper initialization and cleanup (the compiler will not allow an object to be created and destroyed without the proper constructor and destructor calls), you get complete control and safety.

    Aggregate initialization is included in a similar vein: it prevents you from making typical initialization mistakes with aggregates of built-in types and makes your code more succinct.

    Safety during coding is a big issue in C++. Initialization and cleanup are an important part of this, but you'll also see other safety issues as the book progresses.

    Exercises

  15. Modify the HANDLE.H, HANDLE.CPP, and USEHANDL.CPP files at the end of Chapter 2 to use constructors and destructors.
  16. Create a class with a destructor and nondefault constructor, each of which print something to announce their presence. Write code that demonstrates when the constructor and destructor are called.
  17. Demonstrate automatic counting and aggregate initialization with an array of objects of the class you created in the previous exercise. Add a member function to that class that prints a message. Calculate the size of the array and move through it, calling your new member function.
  18. Create a class without any constructors, and show you can create objects with the default constructor. Now create a nondefault constructor (one with an argument) for the class, and try compiling again. Explain what happened.
    7: Function overloading & default arguments

    One of the important features in any programming language is the convenient use of names.

    When you create an object (a variable), you give a name to a region of storage. A function is a name for an action. By using names that you make up to describe the system at hand, you create a program that is easier for people to understand and change. It's a lot like writing prose; the goal is to communicate with your readers.

    A problem arises when mapping the concept of nuance in human language onto a programming language. Often, the same word expresses a number of different meanings, depending on context. That is, a single word has multiple meanings: it's overloaded. This is very useful, especially when it comes to trivial differences. You say "wash the shirt, wash the car." It would be silly to be forced to say, "shirt_wash the shirt, car_wash the car" just so the hearer doesn't have to make any distinction about the action performed. Most human languages are redundant, so even if you miss a few words, you can still determine the meaning. We don't need unique identifiers; we can deduce meaning from context.

    Most programming languages, however, require that you have a unique identifier for each function. If you have three different types of data you want to print, int, char, and float, you generally have to create three different function names, for example, print_int( ), print_char( ), and print_float( ). This loads extra work on you as you write the program, and on readers as they try to understand it.

    In C++, another factor forces the overloading of function names: the constructor. Because the constructor's name is predetermined by the name of the class, there can be only one constructor name. But what if you want to create an object in more than one way? For example, suppose you build a class that can initialize itself in a standard way and also by reading information from a file. You need two constructors, one that takes no arguments (the default constructor) and one that takes a character string as an argument, which gives the name of the file used to initialize the object. Both are constructors, so they must have the same name: the name of the class. Thus function overloading is essential to allow the same function name, the constructor in this case, to be used with different argument types.

    Although function overloading is a must for constructors, it's a general convenience and can be used with any function, not just class member functions. In addition, function overloading means that if you have two libraries that contain functions of the same name, chances are they won't conflict as long as the argument lists are different. We'll look at all these factors in detail throughout this chapter.

    The theme of this chapter is convenient use of function names. Function overloading allows you to use the same name for different functions, but there's a second way to make calling a function more convenient. What if you'd like to call the same function in different ways? When functions have long argument lists, it can become tedious to write (and confusing to read) the function calls when most of the arguments are the same for all the calls. A commonly used C++ feature called default arguments addresses this. A default argument is one the compiler inserts if the person calling the function doesn't specify it. Thus the calls f("hello"), f("hi", 1), and f("howdy", 2, 'c') can all be calls to the same function. They could also be calls to three overloaded functions, but when the argument lists are this similar, you'll usually want similar behavior, which calls for a single function.

    Function overloading and default arguments really aren't very complicated. By the time you reach the end of this chapter, you'll understand when to use them and the underlying mechanisms used during compiling and linking to implement them.

    More mangling

    In Chapter 1 the concept of name mangling was introduced. (Sometimes the more gentle term decoration is used.) In the code

    void f();
    class X { void f(); };

    the function f( ) inside the scope of class X does not clash with the global version of f( ). The compiler performs this scoping by manufacturing different internal names for the global version of f( ) and X::f( ). In Chapter 1 it was suggested that the names are simply the class name "mangled" together with the function name, so the internal names the compiler uses might be _f and _X_f. It turns out that function name mangling involves more than the class name.

    Here's why. Suppose you want to overload two functions:

    void print(char);
    void print(float);

    It doesn't matter whether they are both inside a class or at the global scope. The compiler can't generate unique internal identifiers if it uses only the scope of the function names; you'd end up with _print in both cases. The idea of an overloaded function is that you use the same function name, but different argument lists. Thus, for overloading to work the compiler must mangle the names of the argument types with the function name. The above functions, defined at global scope, produce internal names that might look something like _print_char and _print_float. It's worth noting there is no standard for the way names must be mangled by the compiler, so you will see very different results from one compiler to another. (You can see what it looks like by telling the compiler to generate assembly-language output.) This, of course, causes problems if you want to buy compiled libraries for a particular compiler and linker, but those problems can also exist because of the way different compilers generate code.

    That's really all there is to function overloading: You can use the same function name for different functions, as long as the argument lists are different. The compiler mangles the name, the scope, and the argument lists to produce internal names for it and the linker to use.

    Overloading on return values

    It's common to wonder, "Why just scopes and argument lists? Why not return values?" It seems at first that it would make sense to also mangle the return value into the internal function name. Then you could overload on return values, as well:

    void f();
    int f();

    This works fine when the compiler can unequivocally determine the meaning from the context, as in int x = f( );. However, in C youíve always been able to call a function and ignore the return value. How can the compiler distinguish which call is meant in this case? Possibly worse is the difficulty the reader has in knowing which function call is meant. Overloading solely on return value is a bit too subtle, and thus isnít allowed in C++.

    Type-safe linkage

    There is an added benefit to all this name mangling. A particularly sticky problem in C occurs when the user misdeclares a function, or, worse, a function is called without declaring it first, and the compiler infers the function declaration from the way it is called. Sometimes this function declaration is correct, but when it isnít, it can be a very difficult bug to find.

    Because all functions must be declared before they are used in C++, the opportunity for this problem to pop up is greatly diminished. The compiler refuses to declare a function automatically for you, so it's likely you will include the appropriate header file. However, if for some reason you still manage to misdeclare a function, either by declaring it yourself by hand or by including the wrong header file (perhaps one that is out of date), the name mangling provides a safety net that is often referred to as type-safe linkage.

    Consider the following scenario. In one file is the definition for a function:

    //: C07:Def.cpp {O}
    // Function definition
    void f(int) {}
    ///:~

    In the second file, the function is misdeclared and then called:

    //: C07:Use.cpp
    //{L} Def
    // Function misdeclaration
    void f(char);
    
    int main() {
    //!  f(1); // Causes a linker error
    } ///:~

    Even though you can see that the function is actually f(int), the compiler doesn't know this because it was told, through an explicit declaration, that the function is f(char). Thus, the compilation is successful. In C, the linker would also be successful, but not in C++. Because the compiler mangles the names, the definition becomes something like f_int, whereas the use of the function is f_char. When the linker tries to resolve the reference to f_char, it can find only f_int, and it gives you an error message. This is type-safe linkage. Although the problem doesn't occur all that often, when it does it can be incredibly difficult to find, especially in a large project. This is one of the cases where you can find a difficult error in a C program simply by running it through the C++ compiler.

    Overloading example

    Consider the examples we've been looking at so far in this series, modified to use function overloading. As stated earlier, an immediately useful place for overloading is in constructors. You can see this in the following version of the Stash class:

    //: C07:Stash4.h
    // Function overloading
    #ifndef STASH4_H_
    #define STASH4_H_
    
    class Stash {
      int size;      // Size of each space
      int quantity;  // Number of storage spaces
      int next;      // Next empty space
      // Dynamically allocated array of bytes:
      unsigned char* storage;
      void inflate(int increase);
    public:
      Stash(int Size); // Zero quantity
      Stash(int Size, int InitQuant);
      ~Stash();
      int add(void* element);
      void* fetch(int index);
      int count();
    };
    #endif // STASH4_H_ ///:~

    The first Stash( ) constructor is the same as before, but the second one has an InitQuant argument to indicate the initial number of storage places to be allocated. In the definition, you can see that the internal value of quantity is set to zero, along with the storage pointer:

    //: C07:Stash4.cpp {O}
    // Function overloading
    #include <cstdlib>
    #include <cstring>
    #include <cstdio>
    #include "../require.h"
    #include "Stash4.h"
    using namespace std;
    
    Stash::Stash(int Size) {
      size = Size;
      quantity = 0;
      next = 0;
      storage = 0;
    }
    
    Stash::Stash(int Size, int InitQuant) {
      size = Size;
      quantity = 0;
      next = 0;
      storage = 0;
      inflate(InitQuant);
    }
    
    Stash::~Stash() {
      if(storage) {
        puts("freeing storage");
        free(storage);
      }
    }
    
    int Stash::add(void* element) {
      if(next >= quantity) // Enough space left?
        inflate(100); // Add space for 100 elements
      // Copy element into storage,
      // starting at next empty space:
      memcpy(&(storage[next * size]),
             element, size);
      next++;
      return(next - 1); // Index number
    }
    
    void* Stash::fetch(int index) {
      if(index >= next || index < 0)
        return 0;  // Not out of bounds?
      // Produce pointer to desired element:
      return &(storage[index * size]);
    }
    
    int Stash::count() {
      return next; // Number of elements in Stash
    }
    
    void Stash::inflate(int increase) {
      void* v =
        realloc(storage, (quantity+increase)*size);
      require(v);  // Was it successful?
      storage = (unsigned char*)v;
      quantity += increase;
    } ///:~

    When you use the first constructor, no memory is allocated for storage. The allocation happens the first time you try to add( ) an object, and any time the current block of memory is exceeded inside add( ).

    This is demonstrated in the test program, which exercises the first constructor:

    //: C07:Stshtst4.cpp
    //{L} Stash4
    // Function overloading
    #include <cstdio>
    #include "../require.h"
    #include "Stash4.h"
    using namespace std;
    #define BUFSIZE 80
    
    int main() {
      int i;
      FILE* file;
      char buf[BUFSIZE];
      char* cp;
      // ....
      Stash intStash(sizeof(int));
      for(i = 0; i < 100; i++)
        intStash.add(&i);
      file = fopen("Stshtst4.cpp", "r");
      require(file);
      // Holds 80-character strings:
      Stash stringStash(sizeof(char) * BUFSIZE);
      while(fgets(buf, BUFSIZE, file))
        stringStash.add(buf);
      fclose(file);
    
      for(i = 0; i < intStash.count(); i++)
        printf("intStash.fetch(%d) = %d\n", i,
               *(int*)intStash.fetch(i));
    
      i = 0;
      while(
        (cp = (char*)stringStash.fetch(i++)) != 0)
        printf("stringStash.fetch(%d) = %s",
               i - 1, cp);
      putchar('\n');
    } ///:~

    You can modify this code to use the second constructor just by adding another argument; presumably you'd know something about the problem that allows you to choose an initial size for the Stash.

    Default arguments

    Examine the two constructors for Stash( ). They don't seem all that different, do they? In fact, the first constructor seems to be a special case of the second one with the initial size set to zero. In this situation it seems a bit of a waste of effort to create and maintain two different versions of a similar function.

    C++ provides a remedy with default arguments. A default argument is a value given in the declaration that the compiler automatically inserts if you don't provide a value in the function call. In the Stash example, we can replace the two constructor declarations:

    Stash(int Size); // Zero quantity

    Stash(int Size, int InitQuant);

    with the single declaration

    Stash(int Size, int InitQuant = 0);

    The Stash(int) definition is simply removed; all that is necessary is the single Stash(int, int) definition.

    Now, the two object definitions

    Stash A(100), B(100, 0);

    will produce exactly the same results. The identical constructor is called in both cases, but for A, the second argument is automatically substituted by the compiler when it sees that the first argument is an int and that there is no second argument. The compiler has seen the default argument, so it knows it can still make the function call if it substitutes the second argument, which is what you've told it to do by making it a default.

    Default arguments are a convenience, as function overloading is a convenience. Both features allow you to use a single name in different situations. The difference is that the compiler is substituting arguments when you don't want to put them in yourself. The preceding example is a good place to use default arguments instead of function overloading; otherwise you end up with two or more functions that have similar signatures and similar behaviors. Obviously, if the functions have very different behaviors, it usually doesn't make sense to use default arguments.

    There are two rules you must be aware of when using default arguments. First, only trailing arguments may be defaulted. That is, you can't have a default argument followed by a nondefault argument. Second, once you start using default arguments, all the remaining arguments must be defaulted. (This follows from the first rule.)
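    A small sketch (hypothetical function names, not from the book) of both rules:

    ```cpp
    // Only trailing arguments may have defaults; once one argument
    // is defaulted, every argument after it must be defaulted too.
    #include <cstdio>

    // OK: all defaulted arguments are at the end of the list
    int volume(int length, int width = 1, int height = 1) {
      return length * width * height;
    }

    // Illegal, won't compile: a defaulted argument may not be
    // followed by a non-defaulted one:
    // int bad(int a = 0, int b);

    int main() {
      // Omitted arguments are filled in by the compiler:
      std::printf("%d %d %d\n",
                  volume(2), volume(2, 3), volume(2, 3, 4));
      return 0;
    }
    ```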

    Default arguments are placed only in the declaration of a function, which is typically in a header file. The compiler must see the default value before it can use it. Sometimes people will also place the commented values of the default arguments in the function definition, for documentation purposes:

    void fn(int x /* = 0 */) { // ...

    Default arguments can make arguments declared without identifiers look a bit funny. You can end up with

    void f(int x, int = 0, float = 1.1);

    In C++ you donít need identifiers in the function definition, either:

    void f(int x, int, float f) { /* ... */ }

    In the function body, x and f can be referenced, but not the middle argument, because it has no name. The calls must still use a placeholder, though: f(1) or f(1,2,3.0). This syntax allows you to put the argument in as a placeholder without using it. The idea is that you might want to change the function definition to use it later, without changing all the function calls. Of course, you can accomplish the same thing by using a named argument, but if you define the argument for the function body without using it, most compilers will give you a warning message, assuming you've made a logical error. By intentionally leaving the argument name out, you suppress this warning.
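    The placeholder idea above can be made into a complete runnable sketch, combining an unnamed argument with default values as in the earlier f(int x, int = 0, float = 1.1) declaration:

    ```cpp
    // The middle argument is reserved but unnamed, so the body
    // cannot use it and no "unused argument" warning is produced.
    // Both the placeholder and f have defaults, so f(1) is legal.
    #include <cstdio>

    void f(int x, int /* reserved */ = 0, float f = 1.1f) {
      std::printf("x = %d, f = %.1f\n", x, f);
    }

    int main() {
      f(1);           // Placeholder and f take their defaults
      f(1, 2, 3.0f);  // All three arguments supplied
      return 0;
    }
    ```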

    More important, if you start out using a function argument and later decide that you donít need it, you can effectively remove it without generating warnings, and yet not disturb any client code that was calling the previous version of the function.

    A bit vector class

    As a further example of function overloading and default arguments, consider the problem of efficiently storing a set of true-false flags. If you have a number of pieces of data that can be expressed as "on" or "off," it may be convenient to store them in an object called a bit vector. Sometimes a bit vector is not a tool to be used by the application developer, but a part of other classes.

    Of course, the easiest way to code a group of flags is with a byte of data for each flag, as shown in this example:

    //: C07:Flags.cpp
    // List of true/false flags
    #include <cstdio>
    #include <cstring>
    #include "../require.h"
    using namespace std;
    
    #define FSIZE 100
    #define TRUE 1
    #define FALSE 0
    
    class Flags {
      unsigned char f[FSIZE];
    public:
      Flags();
      void set(int i);
      void clear(int i);
      int read(int i);
      int size();
    };
    
    Flags::Flags() {
      memset(f, FALSE, FSIZE);
    }
    
    void Flags::set(int i) {
      require(i >= 0 && i < FSIZE);
      f[i] = TRUE;
    }
    
    void Flags::clear(int i) {
      require(i >= 0 && i < FSIZE);
      f[i] = FALSE;
    }
    
    int Flags::read(int i) {
      require(i >= 0 && i < FSIZE);
      return f[i];
    }
    
    int Flags::size() { return FSIZE; }
    
    int main() {
      Flags fl;
      for(int i = 0; i < fl.size(); i++)
        if(i % 3 == 0) fl.set(i);
      for(int j = 0; j < fl.size(); j++)
        printf("fl.read(%d)= %d\n", j, fl.read(j));
    } ///:~

    However, this is wasteful, because you're using eight bits for a flag that could be expressed as a single bit. Sometimes this storage is important, especially if you want to build other classes using this class. So consider instead the following BitVector, which uses a bit for each flag. The function overloading occurs in the constructor and the bits( ) function:

    //: C07:Bitvect.h
    // Bit Vector
    #ifndef BITVECT_H_
    #define BITVECT_H_
    
    class BitVector {
      unsigned char* bytes;
      int Bits, numBytes;
    public:
      BitVector(); // Default: 0 size
      // init points to an array of bytes
      // size is measured in bytes
      BitVector(unsigned char* init,
                int size = 8);
      // binary is a string of 1s and 0s
      BitVector(char* binary);
      ~BitVector();
      void set(int bit);
      void clear(int bit);
      int read(int bit);
      int bits(); // Number of bits in the vector
      void bits(int sz); // Set number of bits
      void print(const char* msg = "");
    };
    #endif // BITVECT_H_ ///:~

    The first (default) constructor creates a BitVector of size zero. You can't set any bits in this vector because there are none. First you have to increase the size of the vector with the overloaded bits( ) function. The version with no arguments returns the current size of the vector in bits, and bits(int) changes the size to what is specified in the argument. Thus you both set and read the size using the same function name. Note that there's no restriction on the new size; you can make it smaller as well as larger.

    The second constructor takes a pointer to an array of unsigned chars, that is, an array of raw bytes. The second argument tells the constructor how many bytes are in the array. If the first argument is zero rather than a valid pointer, the array is initialized to zero. If you don't give a second argument, the default size is eight bytes.

    You might think you can create a BitVector of size eight bytes and set it to zero by saying BitVector b(0);. This would work if not for the third constructor, which takes a char* as its only argument. The argument 0 could be used in either the second constructor (with the second argument defaulted) or the third constructor. The compiler has no way of knowing which one it should choose, so you'll get an ambiguity error. To successfully create a BitVector this way, you must cast zero to a pointer of the proper type: BitVector b((unsigned char*)0);. This is awkward, so you may instead want to create an empty vector with BitVector b; and then expand it to the desired size with b.bits(64) to allocate eight bytes.
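    The ambiguity can be reproduced in a stripped-down sketch (the function names here are hypothetical, standing in for the two BitVector constructors):

    ```cpp
    // The literal 0 is a valid argument for both overloads below:
    // it converts to unsigned char* (with the int defaulted) and to
    // char*, so an explicit cast is needed to pick one.
    #include <cstdio>

    void create(unsigned char* /* init */, int size = 8) {
      std::printf("byte-array version, size %d\n", size);
    }
    void create(char* /* binary */) {
      std::puts("string version");
    }

    int main() {
      // create(0);               // Error: ambiguous call
      create((unsigned char*)0);  // Selects the byte-array version
      create((char*)"101");       // Selects the string version
      return 0;
    }
    ```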

    It's important that the compiler distinguish char* and unsigned char* as two distinct data types. If it did not (a problem in the past), then BitVector(unsigned char*, int) (with the second argument defaulted) and BitVector(char*) would look the same when the compiler tried to match the function call.

    Note that the print( ) function has a default argument for its char* argument. This may look a bit puzzling if you know how the compiler handles string constants. Does the compiler create a new default character string every time you call the function? The answer is no; it creates a single string in a special area reserved for static and global data, and passes the address of that string every time it needs to use it as a default.

    A string of bits

    The third constructor for the BitVector takes a pointer to a character string that represents a string of bits. This is a convenient syntax for the user because it allows the vector initialization values to be expressed in the natural form 0110010. The object is created to match the length of the string, and each bit is set or cleared according to the string.

    The other functions are the all-important set( ), clear( ), and read( ), each of which takes the bit number of interest as an argument. The print( ) function prints a message, which has a default argument of an empty string, and then the bit pattern of the BitVector, again using ones and zeros.

    Two issues are immediately apparent when implementing the BitVector class. One is that if the number of bits you need doesn't fall on an 8-bit boundary (or whatever word size your machine uses), you must round up to the nearest boundary. The second is the care necessary in selecting the bits of interest. For example, when creating a BitVector using an array of bytes, each byte in the array must be read in from left to right so it will appear the way you expect it in the print( ) function.

    Here are the member function definitions:

    //: C07:Bitvect.cpp {O}
    // BitVector Implementation
    #include <cstdio>
    #include <cstdlib>
    #include <cstring>
    #include <climits> // CHAR_BIT = # bits in char
    #include "../require.h"
    #include "Bitvect.h"
    using namespace std;
    // A byte with the high bit set:
    const unsigned char highbit =
      1 << (CHAR_BIT - 1);
    
    BitVector::BitVector() {
      numBytes = 0;
      Bits = 0;
      bytes = 0;
    }
    // Notice default args are not duplicated:
    BitVector::BitVector(unsigned char* init,
                         int size) {
      numBytes = size;
      Bits = numBytes * CHAR_BIT;
      bytes = (unsigned char*)calloc(numBytes, 1);
      require(bytes);
      if(init == 0) return; // Default to all 0
      // Translate from bytes into bit sequence:
      for(int index = 0; index<numBytes; index++)
        for(int offset = 0;
            offset < CHAR_BIT; offset++)
          if(init[index] & (highbit >> offset))
             set(index * CHAR_BIT + offset);
    }
    
    BitVector::BitVector(char* binary) {
      Bits = strlen(binary);
      numBytes =  Bits / CHAR_BIT;
      // If there's a remainder, add 1 byte:
      if(Bits % CHAR_BIT) numBytes++;
      bytes = (unsigned char*)calloc(numBytes, 1);
      require(bytes);
      for(int i = 0; i < Bits; i++)
        if(binary[i] == '1') set(i);
    }
    
    BitVector::~BitVector() {
      free(bytes);
    }
    
    void BitVector::set(int bit) {
      require(bit >= 0 && bit < Bits);
      int index = bit / CHAR_BIT;
      int offset = bit % CHAR_BIT;
      unsigned char mask = (1 << offset);
      bytes[index] |= mask;
    }
    
    int BitVector::read(int bit) {
      require(bit >= 0 && bit < Bits);
      int index = bit / CHAR_BIT;
      int offset = bit % CHAR_BIT;
      return (bytes[index] >> offset) & 1;
    }
    
    void BitVector::clear(int bit) {
      require(bit >= 0 && bit < Bits);
      int index = bit / CHAR_BIT;
      int offset = bit % CHAR_BIT;
      unsigned char mask = ~(1 << offset);
      bytes[index] &= mask;
    }
    
    int BitVector::bits() { return Bits; }
    
    void BitVector::bits(int size) {
      int oldsize = Bits;
      Bits = size;
      numBytes =  Bits / CHAR_BIT;
      // If there's a remainder, add 1 byte:
      if(Bits % CHAR_BIT) numBytes++;
      void* v = realloc(bytes, numBytes);
      require(v);
      bytes = (unsigned char*)v;
      for(int i = oldsize; i < Bits; i++)
        clear(i); // Erase additional bits
    }
    
    void BitVector::print(const char* msg) {
      puts(msg);
      for(int i = 0; i < Bits; i++){
        if(read(i)) putchar('1');
          else putchar('0');
        // Format into byte blocks:
        if((i + 1) % CHAR_BIT == 0) putchar(' ');
      }
      putchar('\n');
    } ///:~

    The first constructor is trivial because it just sets everything to zero. The second constructor allocates storage and initializes the number of bits, and then it gets a little tricky. The outer for loop indexes through the array of bytes, and the inner for loop indexes through each byte a bit at a time. However, the bit is selected from the byte from left to right using the expression init[index] & (highbit >> offset). Notice this is a bitwise AND, and highbit (a byte with a one in only the highest bit position) is shifted to the right by offset to create a mask. If the result is nonzero, there is a one in that particular bit position, and the set( ) function is used to set the bit inside the BitVector. It was important to scan the source bytes from left to right so the print( ) function makes sense to the viewer.

    The third constructor converts from a character string representing a binary sequence of ones and zeroes into a BitVector. The number of bits is taken at face value: the length of the character string. But because the character string may produce a number of bits that isn't a multiple of eight, the number of bytes numBytes is calculated by first doing an integer division and then checking to see if there's a remainder by using the modulus operator. In this case, unlike the second constructor, the bits are scanned in from left to right from the source string.

    The set( ), clear( ), and read( ) functions follow a nearly identical format. The first three lines are identical in each case: require( ) that the argument is in range, and create an index into the array of bytes and an offset into the selected byte. Both set( ) and read( ) create their mask the same way: by shifting a bit left into the desired position. But set( ) forces the bit in the array to be set by ORing the appropriate byte with the mask, and read( ) checks the value by ANDing the mask with the byte and seeing if the result is nonzero. clear( ) creates its mask by shifting the one into the desired position, then flipping all the bits with the binary NOT operator (the tilde: ~), then ANDing the mask onto the byte so only the desired bit is forced to zero.

    Note that set( ), read( ), and clear( ) could be written much more succinctly. For example, clear( ) could be reduced to

    bytes[bit/CHAR_BIT] &= ~(1 << (bit % CHAR_BIT));

    While this is more efficient, it certainly isn't as readable.
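    As a sketch, here are all three functions reduced to the succinct form, operating on a bare byte array rather than the full class:

    ```cpp
    // The condensed one-line forms of set(), clear(), and read()
    // suggested in the text, exercised on a small global byte array.
    #include <cassert>
    #include <climits>  // CHAR_BIT
    #include <cstdio>

    unsigned char bytes[4] = { 0 };  // 32 bits of flags

    void set(int bit)   { bytes[bit/CHAR_BIT] |=  (1 << (bit % CHAR_BIT)); }
    void clear(int bit) { bytes[bit/CHAR_BIT] &= ~(1 << (bit % CHAR_BIT)); }
    int  read(int bit)  { return (bytes[bit/CHAR_BIT] >> (bit % CHAR_BIT)) & 1; }

    int main() {
      set(3);
      set(10);  // Crosses into the second byte
      assert(read(3) == 1 && read(10) == 1 && read(4) == 0);
      clear(3);
      assert(read(3) == 0);
      std::puts("bit ops OK");
      return 0;
    }
    ```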

    The two overloaded bits( ) functions are quite different in their behavior. The first is simply an access function (a function that produces a value based on private data without allowing access to that data) that tells how many bits are in the array. The second uses its argument to calculate the new number of bytes required, realloc( )s the memory (which allocates fresh memory if bytes is zero), and zeroes the additional bits. Note that if you ask for the same number of bits you've already got, this may actually reallocate the memory (depending on the implementation of realloc( )), but it won't hurt anything.

    The print( ) function puts out the msg string. The Standard C library function puts( ) always adds a new line, so this will result in a new line for the default argument. Then it uses read( ) on each successive bit to print the appropriate character. For easier visual scanning, after each eight bits it prints out a space. Because of the way the second BitVector constructor reads in its array of bytes, the print( ) function will produce results in a familiar form.

    The following program tests the BitVector class by exercising all the functions:

    //: C07:Bvtest.cpp
    //{L} Bitvect
    // Testing the BitVector class
    #include "Bitvect.h"
    
    int main() {
      unsigned char b[] = {
        0x0f, 0xff, 0xf0,
        0xAA, 0x78, 0x11
      };
      BitVector bv1(b, sizeof b / sizeof *b),
        bv2("10010100111100101010001010010010101");
      bv1.print("bv1 before modification");
      for(int i = 36; i < bv1.bits(); i++)
        bv1.clear(i);
      bv1.print("bv1 after modification");
      bv2.print("bv2 before modification");
      for(int j=bv2.bits()-10; j<bv2.bits(); j++)
        bv2.clear(j);
      bv2.set(30);
      bv2.print("bv2 after modification");
      bv2.bits(bv2.bits() / 2);
      bv2.print("bv2 cut in half");
      bv2.bits(bv2.bits() + 10);
      bv2.print("bv2 grown by 10");
      BitVector bv3((unsigned char*)0);
    } ///:~

    The objects bv1, bv2, and bv3 show three different types of BitVectors and their constructors. The set( ) and clear( ) functions are demonstrated. (read( ) is exercised inside print( ).) Toward the end of this example, bv2 is cut in half and then grown to demonstrate a way to zero the end of the BitVector.

    You should be aware that the Standard C++ library contains a bitset class, which is a much more complete (and standard) implementation of a bit vector.

    Summary

    Both function overloading and default arguments provide a convenience for calling function names. It can seem confusing at times to know which technique to use. For example, in the BitVector class it seems like the two bits( ) functions could be combined into a single version:

    int bits(int sz = -1);

    If you called it without an argument, the function would check for the -1 default and interpret that as meaning that you wanted it to tell you the current number of bits. The use appears to be the same as the previous scheme. However, there are a number of significant differences that jump out, or at least should make you feel uncomfortable.

    Inside bits( ) you'll have to do a conditional based on the value of the argument. If you have to look for the default rather than treating it as an ordinary value, that should be a clue that you have two different functions inside one: one version for the normal case, and one for the default. You might as well split them into two distinct function bodies and let the compiler do the selection. This also produces a slight increase in efficiency, because the extra argument isn't passed and the extra code for the conditional isn't executed; that can make a difference if you call the function many times.

    You do lose something when you use a default argument in this case. First, the default has to be something you wouldn't ordinarily use, -1 in this case. Now you can't tell if a negative number is an accident or a default substitution. Second, there's only one return value with a single function, so the compiler loses the information that was available for the overloaded functions. Now, if you say

    int i = bv1.bits(10);

    the compiler will accept it, and no longer sees something that you, as the class designer, might want to be an error.

    And consider the plight of the user, always. Which design will make more sense to users of your class as they peruse the header file? What does a default argument of -1 suggest? Not much. The two separate functions are much clearer because one takes a value and doesn't return anything and the other doesn't take a value but returns something. Even without documentation, it's far easier to guess what the two different functions do.

    As a guideline, you shouldn't use a default argument as a flag upon which to conditionally execute code. You should instead break the function into two or more overloaded functions if you can. A default argument should be a value you would ordinarily put in that position. It's a value that is more likely to occur than all the rest, so users can generally ignore it or use it only if they want to change it from the default value.

    The default argument is included to make function calls easier, especially when those functions have many arguments with typical values. Not only is it much easier to write the calls, it's easier to read them, especially if the class creator can order the arguments so the least-modified defaults appear latest in the list.

    An especially important use of default arguments is when you start out with a function with a set of arguments, and after it's been used for a while you discover you need to add arguments. By defaulting all the new arguments, you ensure that all client code using the previous interface is not disturbed.
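    A sketch of this evolution (the function and its arguments are hypothetical):

    ```cpp
    // A function grows a new argument after it's already in use.
    // Because the new argument is defaulted, existing calls that
    // pass only the original argument still compile unchanged.
    #include <cstdio>

    // Originally: void report(const char* msg);
    // Later, a verbosity level is added with a default:
    void report(const char* msg, int verbosity = 0) {
      if(verbosity > 0) std::printf("[v%d] ", verbosity);
      std::printf("%s\n", msg);
    }

    int main() {
      report("old-style call");     // Pre-existing client code, untouched
      report("new-style call", 2);  // New code can use the new argument
      return 0;
    }
    ```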

    Exercises

  20. Create a message class with a constructor that takes a single char* with a default value. Create a private member char*, and assume the constructor will be passed a static quoted string; simply assign the argument pointer to your internal pointer. Create two overloaded member functions called print( ): one that takes no arguments and simply prints the message stored in the object, and one that takes a char* argument, which it prints in addition to the internal message. Does it make sense to use this approach rather than the one used for the constructor?
  21. Determine how to generate assembly output with your compiler, and run experiments to deduce the name-mangling scheme.
  22. Modify STASH4.H and STASH4.CPP to use default arguments in the constructor. Test the constructor by making two different versions of a Stash object.
  23. Compare the execution speed of the Flags class versus the BitVector class. To ensure there's no confusion about efficiency, first remove the index, offset, and mask clarification definitions in set( ), clear( ) and read( ) by combining them into a single statement that performs the appropriate action. (Test the new code to make sure you haven't broken anything.)
  24. Change FLAGS.CPP so it dynamically allocates the storage for the flags. Give the constructor an argument that is the size of the storage, and put a default of 100 on that argument. Make sure you properly clean up the storage in the destructor.
    8: Constants

    The concept of constant (expressed by the const keyword) was created to allow the programmer to draw a line between what changes and what doesn't.

    This provides safety and control in a C++ programming project. Since its origin, it has taken on a number of different purposes. In the meantime it trickled back into the C language, where its meaning was changed. All this can seem a bit confusing at first, and in this chapter you'll learn when, why, and how to use the const keyword. At the end there's a discussion of volatile, which is a near cousin to const (because they both concern change) and has identical syntax.

    The first motivation for const seems to have been to eliminate the use of preprocessor #defines for value substitution. It has since been put to use for pointers, function arguments and return types, and class objects and member functions. All of these have slightly different but conceptually compatible meanings, and will be looked at in separate sections.

    Value substitution

    When programming in C, the preprocessor is liberally used to create macros and to substitute values. Because the preprocessor simply does text replacement and has no concept of, nor facility for, type checking, preprocessor value substitution introduces subtle problems that can be avoided in C++ by using const values.

    The typical use of the preprocessor to substitute values for names in C looks like this:

    #define BUFSIZE 100

    BUFSIZE is a name that doesn't occupy storage and can be placed in a header file to provide a single value for all translation units that use it. It's very important to use value substitution instead of so-called "magic numbers" to support code maintenance. If you use magic numbers in your code, not only does the reader have no idea where the numbers come from or what they represent, but if you decide to change a value, you must perform hand editing, and you have no trail to follow to ensure you don't miss one.

    Most of the time, BUFSIZE will behave like an ordinary variable, but not all the time. In addition, there's no type information. This can hide bugs that are very difficult to find. C++ uses const to eliminate these problems by bringing value substitution into the domain of the compiler. Now you can say

    const int bufsize = 100;

    You can use bufsize anyplace where the compiler must know the value at compile time so it can perform constant folding, which means the compiler will reduce a complex constant expression to a simple one by performing the necessary calculations at compile time. This is especially important in array definitions:

    char buf[bufsize];

    You can use const for all the built-in types (char, int, float, and double) and their variants (as well as class objects, as youíll see later in this chapter). You should always use const instead of #define value substitution.

    const in header files

    To use const instead of #define, you must be able to place const definitions inside header files as you can with #define. This way, you can place the definition for a const in a single place and distribute it to a translation unit by including the header file. A const in C++ defaults to internal linkage; that is, it is visible only within the file where it is defined and cannot be seen at link time by other translation units. You must always assign a value to a const when you define it, except when you make an explicit declaration using extern:

    extern const int bufsize;

    The C++ compiler avoids creating storage for a const, but instead holds the definition in its symbol table, although the above extern forces storage to be allocated, as do certain other cases, such as taking the address of a const. When the const is used, it is folded in at compile time.

    Of course, this goal of never allocating storage for a const cannot always be achieved, especially with complicated structures. In these cases, the compiler creates storage, which prevents constant folding. This is why const must default to internal linkage, that is, linkage only within that particular translation unit; otherwise, complicated consts would allocate storage in every CPP file that defined them, the linker would see the same definition in multiple object files, and it would complain. Because a const defaults to internal linkage, the linker doesn't try to link those definitions across translation units, and there are no collisions. With built-in types, which are used in the majority of cases involving constant expressions, the compiler can always perform constant folding.
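    A small sketch of constant folding with built-in types (hypothetical names; a real header-based example would spread these definitions across files):

    ```cpp
    // Because bufsize is a const with a compile-time value, the
    // compiler can fold it into constant expressions and use it to
    // size an array, normally without allocating storage for it.
    #include <cstdio>

    const int bufsize = 100;         // Internal linkage by default
    const int bigbuf = bufsize * 2;  // Folded at compile time

    int main() {
      char buf[bufsize];  // Legal in C++: bufsize is a constant expression
      std::printf("%d %d %u\n",
                  bufsize, bigbuf, (unsigned)sizeof buf);
      return 0;
    }
    ```

    Put the const definitions in a header and each translation unit that includes it gets its own private copy, which is exactly what internal linkage permits.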

    Safety consts

    The use of const is not limited to replacing #defines in constant expressions. If you initialize a variable with a value that is produced at run-time and you know it will not change for the lifetime of that variable, it is good programming practice to make it a const so the compiler will give you an error message if you accidentally try to change it. Here's an example:

    //: C08:Safecons.cpp
    // Using const for safety
    #include <iostream>
    using namespace std;
    
    const int i = 100;  // Typical constant
    const int j = i + 10; // Value from const expr
    long address = (long)&j; // Forces storage
    char buf[j + 10]; // Still a const expression
    
    int main() {
      cout << "type a character & CR:";
      const char c = cin.get(); // Can't change
      const char c2 = c + 'a';
      cout << c2;
      // ...
    } ///:~

    You can see that i is a compile-time const, but j is calculated from i. However, because i is a const, the calculated value for j still comes from a constant expression and is itself a compile-time constant. The very next line requires the address of j and therefore forces the compiler to allocate storage for j. Yet this doesn't prevent the use of j in the determination of the size of buf because the compiler knows j is const and that the value is valid even if storage was allocated to hold that value at some point in the program.

    In main( ), you see a different kind of const in the identifier c, because the value cannot be known at compile time. This means storage is required, and the compiler doesn't attempt to keep anything in its symbol table (the same behavior as in C). The initialization must still happen at the point of definition, and once the initialization occurs, the value cannot be changed. You can see that c2 is calculated from c, and also that scoping works for consts as it does for any other type; yet another improvement over the use of #define.

    As a matter of practice, if you think a value shouldn't change, you should make it a const. This not only provides insurance against inadvertent changes, it also allows the compiler to generate more efficient code by eliminating storage and memory reads.

    Aggregates

    It's possible to use const for aggregates, but you're virtually assured that the compiler will not be sophisticated enough to keep an aggregate in its symbol table, so storage will be allocated. In these situations, const means "a piece of storage that cannot be changed." However, the value cannot be used at compile time because the compiler is not required to know the contents of storage at compile time. Thus, you cannot say

    //: C08:Constag.cpp {O}
    // Constants and aggregates
    
    const int i[] = { 1, 2, 3, 4 };
    
    //! float f[i[3]]; // Illegal
    
    struct s { int i, j; };
    
    const s S[] = { { 1, 2 }, { 3, 4 } };
    
    //! double d[S[1].j]; // Illegal
    ///:~

    In an array definition, the compiler must be able to generate code that moves the stack pointer to accommodate the array. In both of the illegal definitions, the compiler complains because it cannot find a constant expression in the array definition.

    Differences with C

    Constants were introduced in early versions of C++ while the Standard C specification was still being finished. It was then seen as a good idea and included in C. But somehow, const in C came to mean "an ordinary variable that cannot be changed." In C, it always occupies storage and its name is global. The C compiler cannot treat a const as a compile-time constant. In C, if you say

    const int bufsize = 100;

    char buf[bufsize];

    you will get an error, even though it seems like a rational thing to do. Because bufsize occupies storage somewhere, the C compiler cannot know the value at compile time. You can optionally say

    const int bufsize;

    in C, but not in C++, and the C compiler accepts it as a declaration indicating there is storage allocated elsewhere. Because C defaults to external linkage for consts, this makes sense. C++ defaults to internal linkage for consts so if you want to accomplish the same thing in C++, you must explicitly change the linkage to external using extern:

    extern const int bufsize; // Declaration only

    This line also works in C.

    The C approach to const is not very useful, and if you want to use a named value inside a constant expression (one that must be evaluated at compile time), C almost forces you to use #define in the preprocessor.

    Pointers

    Pointers can be made const. The compiler will still endeavor to prevent storage allocation and do constant folding when dealing with const pointers, but these features seem less useful in this case. More importantly, the compiler will tell you if you attempt changes using such a pointer later in your code, which adds a great deal of safety.

    When using const with pointers, you have two options: const can be applied to what the pointer is pointing to, or the const can be applied to the address stored in the pointer itself. The syntax for these is a little confusing at first but becomes comfortable with practice.

    Pointer to const

    The trick with a pointer definition, as with any complicated definition, is to read it starting at the identifier and working your way out. The const specifier binds to the thing it is "closest to." So if you want to prevent any changes to the element you are pointing to, you write a definition like this:

    const int* x;

    Starting from the identifier, we read "x is a pointer, which points to a const int." Here, no initialization is required because you're saying that x can point to anything (that is, it is not const), but the thing it points to cannot be changed.

    Here's the mildly confusing part. You might think that to make the pointer itself unchangeable, that is, to prevent any change to the address contained inside x, you would simply move the const to the other side of the int like this:

    int const* x;

    It's not all that crazy to think that this should read "x is a const pointer to an int." However, the way it actually reads is "x is an ordinary pointer to an int that happens to be const." That is, the const has bound itself to the int again, and the effect is the same as the previous definition. The fact that these two definitions are the same is the confusing point; to prevent this confusion on the part of your reader, you should probably stick to the first form.

    const pointer

    To make the pointer itself a const, you must place the const specifier to the right of the *, like this:

    int d = 1;

    int* const x = &d;

    Now it reads: "x is a const pointer that points to an int." Because the pointer itself is now the const, the compiler requires that it be given an initial value that will be unchanged for the life of that pointer. It's OK, however, to change the value it points to by saying

    *x = 2;

    You can also make a const pointer to a const object using either of two legal forms:

    int d = 1;

    const int* const x = &d; // (1)

    int const* const x2 = &d; // (2)

    Now neither the pointer nor the object can be changed.

    Some people argue that the second form is more consistent because the const is always placed to the right of what it modifies. You'll have to decide which is clearer for your particular coding style.

    Formatting

    This book makes a point of only putting one pointer definition on a line, and initializing each pointer at the point of definition whenever possible. Because of this, the formatting style of "attaching" the '*' to the data type is possible:

    int* u = &w;

    as if int* were a discrete type unto itself. This makes the code easier to understand, but unfortunately that's not actually the way things work. The '*' in fact binds to the identifier, not the type. It can be placed anywhere between the type name and the identifier. So you can do this:

    int* u = &w, v = 0;

    which creates an int* u, as before, and a nonpointer int v. Because readers often find this confusing, it is best to follow the form shown in this book.

    Assignment and type checking

    C++ is very particular about type checking, and this extends to pointer assignments. You can assign the address of a non-const object to a const pointer because you're simply promising not to change something that is OK to change. However, you can't assign the address of a const object to a non-const pointer because then you're saying you might change the object via the pointer. Of course, you can always use a cast to force such an assignment, but this is bad programming practice because you are then breaking the constness of the object, along with any safety promised by the const. For example:

    int d = 1;

    const int e = 2;

    int* u = &d; // OK -- d not const

    int* v = &e; // Illegal -- e const

    int* w = (int*)&e; // Legal but bad practice

    Although C++ helps prevent errors, it does not protect you from yourself if you want to break the safety mechanisms.

    String literals

    The place where strict constness is not enforced is with string literals. You can say

    char* cp = "howdy";

    and the compiler will accept it without complaint. This is technically an error because a string literal ("howdy" in this case) is created by the compiler as a constant string, and the result of the quoted string is its starting address in memory.

    So string literals are actually constant strings. Of course, the compiler lets you get away with treating them as non-const because there's so much existing C code that relies on this. However, if you try to change the values in a string literal, the behavior is undefined, although it will probably work on many machines.

    Function arguments
    & return values

    The use of const to specify function arguments and return values is another place where the concept of constants can be confusing. If you are passing objects by value, specifying const has no meaning to the client (it means that the passed argument cannot be modified inside the function). If you are returning an object of a user-defined type by value as a const, it means the returned value cannot be modified. If you are passing and returning addresses, const is a promise that the destination of the address will not be changed.

    Passing by const value

    You can specify that function arguments are const when passing them by value, such as

    void f1(const int i) {

    i++; // Illegal -- compile-time error

    }

    but what does this mean? You're making a promise that the original value of the variable will not be changed by the function f1( ). However, because the argument is passed by value, you immediately make a copy of the original variable, so the promise to the client is implicitly kept.

    Inside the function, the const takes on meaning: the argument cannot be changed. So it's really a tool for the creator of the function, and not the caller.

    To avoid confusion to the caller, you can make the argument a const inside the function, rather than in the argument list. You could do this with a pointer, but a nicer syntax is achieved with the reference, a subject that will be fully developed in Chapter 9. Briefly, a reference is like a constant pointer that is automatically dereferenced, so it has the effect of being an alias to an object. To create a reference, you use the & in the definition. So the nonconfusing function definition looks like this:

    void f2(int ic) {

    const int& i = ic;

    i++; // Illegal -- compile-time error

    }

    Again, you'll get an error message, but this time the constness of the local object is not part of the function signature; it only has meaning to the implementation of the function so it's hidden from the client.

    Returning by const value

    A similar truth holds for the return value. If you return by value from a function, as a const

    const int g();

    you are promising that the original variable (inside the function frame) will not be modified. And again, because you're returning it by value, it's copied so the original value is automatically not modified.

    At first, this can make the specification of const seem meaningless. You can see the apparent lack of effect of returning consts by value in this example:

    //: C08:Constval.cpp
    // Returning consts by value
    // has no meaning for built-in types
    
    int f3() { return 1; }
    const int f4() { return 1; }
    
    int main() {
      const int j = f3(); // Works fine
      int k = f4(); // But this works fine too!
    } ///:~

    For built-in types, it doesn't matter whether you return by value as a const, so you should avoid confusing the client programmer by leaving off the const when returning a built-in type by value.

    Returning by value as a const becomes important when you're dealing with user-defined types. If a function returns a class object by value as a const, the return value of that function cannot be an lvalue (that is, it cannot be assigned to or otherwise modified). For example:

    //: C08:Constret.cpp
    // Constant return by value
    // Result cannot be used as an lvalue
    
    class X {
      int i;
    public:
      X(int I = 0) { i = I; }
      void modify() { i++; }
    };
    
    X f5() {
      return X();
    }
    
    const X f6() {
      return X();
    }
    
    void f7(X& x) { // Pass by non-const reference
      x.modify();
    }
    
    int main() {
      f5() = X(1); // OK -- non-const return value
      f5().modify(); // OK
      f7(f5()); // OK
      // Causes compile-time errors:
    //!  f6() = X(1);
    //!  f6().modify();
    //!  f7(f6());
    } ///:~

    f5( ) returns a non-const X object, while f6( ) returns a const X object. Only the non-const return value can be used as an lvalue. Thus, it's important to use const when returning an object by value if you want to prevent its use as an lvalue.

    The reason const has no meaning when you're returning a built-in type by value is that the compiler already prevents it from being an lvalue (because it's always a value, and not a variable). Only when you're returning objects of user-defined types by value does it become an issue.

    The function f7( ) takes its argument as a non-const reference (an additional way of handling addresses in C++ which is the subject of Chapter 9). This is effectively the same as taking a non-const pointer; it's just that the syntax is different.

    Temporaries

    Sometimes, during the evaluation of an expression, the compiler must create temporary objects. These are objects like any other: they require storage and they must be constructed and destroyed. The difference is that you never see them; the compiler is responsible for deciding that they're needed and for the details of their existence. But there is one thing about temporaries: they're automatically const. Because you usually won't be able to get your hands on a temporary object, telling it to do something that will change that temporary is almost certainly a mistake because you won't be able to use that information. By making all temporaries automatically const, the compiler informs you when you make that mistake.

    The way the constness of class objects is preserved is shown later in the chapter.

    Passing and returning addresses

    If you pass or return a pointer (or a reference), it's possible for the user to take the pointer and modify the original value. If you make it a pointer to const, you prevent this from happening, which may be an important factor. In fact, whenever you're passing an address into a function, you should make it a pointer to const if at all possible. If you don't, you're excluding the possibility of using that function with a pointer to a const.

    The choice of whether to return a pointer to a const depends on what you want to allow your user to do with it. Here's an example that demonstrates the use of const pointers as function arguments and return values:

    //: C08:Constp.cpp
    // Constant pointer arg/return
    
    void t(int*) {}
    
    void u(const int* cip) {
    //!  *cip = 2; // Illegal -- modifies value
      int i = *cip; // OK -- copies value
    //!  int* ip2 = cip; // Illegal: non-const
    }
    
    const char* v() {
      // Returns address of static string:
      return "result of function v()";
    }
    
    const int* const w() {
      static int i;
      return &i;
    }
    
    int main() {
      int x = 0;
      int* ip = &x;
      const int* cip = &x;
      t(ip);  // OK
    //!  t(cip); // Not OK
      u(ip);  // OK
      u(cip); // Also OK
    //!  char* cp = v(); // Not OK
      const char* ccp = v(); // OK
    //!  int* ip2 = w(); // Not OK
      const int* const ccip = w(); // OK
      const int* cip2 = w(); // OK
    //!  *w() = 1; // Not OK
    } ///:~

    The function t( ) takes an ordinary non-const pointer as an argument, and u( ) takes a pointer to const. Inside u( ) you can see that attempting to modify the destination of the pointer is illegal, but you can of course copy the information out into a non-const variable. The compiler also prevents you from creating a non-const pointer using the address stored inside a pointer to const.

    The functions v( ) and w( ) test return value semantics. v( ) returns a const char* that is created from a string literal. This statement actually produces the address of the string literal, after the compiler creates it and stores it in the static storage area. As mentioned earlier, this string is technically a constant, which is properly expressed by the return value of v( ).

    The return value of w( ) requires that both the pointer and what it points to be a const. As with v( ), the value returned by w( ) is valid after the function returns only because it is static. You never want to return pointers to local stack variables because they will be invalid after the function returns and the stack is cleaned up. (Another common pointer you might return is the address of storage allocated on the heap, which is still valid after the function returns.)

    In main( ), the functions are tested with various arguments. You can see that t( ) will accept a non-const pointer argument, but if you try to pass it a pointer to a const, there's no promise that t( ) will leave the pointer's destination alone, so the compiler gives you an error message. u( ) takes a pointer to const, so it will accept both types of arguments. Thus, a function that takes a pointer to const is more general than one that does not.

    As expected, the return value of v( ) can be assigned only to a pointer to const. You would also expect that the compiler refuses to assign the return value of w( ) to a non-const pointer, and accepts a const int* const, but it might be a bit surprising to see that it also accepts a const int*, which is not an exact match to the return type. Once again, because the value (which is the address contained in the pointer) is being copied, the promise that the original variable is untouched is automatically kept. Thus, the second const in const int* const is only meaningful when you try to use it as an lvalue, in which case the compiler prevents you.

    Standard argument passing

    In C it's very common to pass by value, and when you want to pass an address your only choice is to use a pointer. However, neither of these approaches is preferred in C++. Instead, your first choice when passing an argument is to pass by reference, and by const reference at that. To the client programmer, the syntax is identical to that of passing by value, so there's no confusion about pointers; they don't even have to think about the problem. For the creator of the class, passing an address is virtually always more efficient than passing an entire class object, and if you pass by const reference it means your function will not change the destination of that address, so the effect from the client programmer's point of view is exactly the same as pass-by-value.

    Because of the syntax of references (it looks like pass-by-value) it's possible to pass a temporary object to a function that takes a reference, whereas you can never pass a temporary object to a function that takes a pointer; with a pointer, the address must be explicitly taken. So passing by reference produces a new situation that never occurs in C: a temporary, which is always const, can have its address passed to a function. This is why, to allow temporaries to be passed by reference, the argument must be a const reference. The following example demonstrates this:

    //: C08:Consttmp.cpp
    // Temporaries are const
    
    class X {};
    
    X f() { return X(); } // Return by value
    
    void g1(X&) {} // Pass by non-const reference
    void g2(const X&) {} // Pass by const reference
    
    int main() {
      // Error: const temporary created by f():
    //!  g1(f());
      // OK: g2 takes a const reference:
      g2(f());
    } ///:~

    f( ) returns an object of class X by value. That means when you immediately take the return value of f( ) and pass it to another function as in the calls to g1( ) and g2( ), a temporary is created and that temporary is const. Thus, the call in g1( ) is an error because g1( ) doesn't take a const reference, but the call to g2( ) is OK.

    Classes

    This section shows the two ways to use const with classes. You may want to create a local const in a class to use inside constant expressions that will be evaluated at compile time. However, the meaning of const is different inside classes, so you must use an alternate technique with enumerations to achieve the same effect.

    You can also make a class object const (and as you've just seen, the compiler always makes temporary class objects const). But preserving the constness of a class object is more complex. The compiler can ensure the constness of a built-in type but it cannot monitor the intricacies of a class. To guarantee the constness of a class object, the const member function is introduced: Only a const member function may be called for a const object.

    const and enum in classes

    One of the places you'd like to use a const for constant expressions is inside classes. The typical example is when you're creating an array inside a class and you want to use a const instead of a #define to establish the array size and to use in calculations involving the array. The array size is something you'd like to keep hidden inside the class, so if you used a name like size, for example, you could use that name in another class without a clash. The preprocessor treats all #defines as global from the point they are defined, so this will not achieve the desired effect.

    Initially, you probably assume that the logical choice is to place a const inside the class. This doesn't produce the desired result. Inside a class, const partially reverts to its meaning in C. It allocates storage within each class object and represents a value that is initialized once and then cannot change. The use of const inside a class means "This is constant for the lifetime of the object." However, each different object may contain a different value for that constant.

    Thus, when you create a const inside a class, you cannot give it an initial value. This initialization must occur in the constructor, of course, but in a special place in the constructor. Because a const must be initialized at the point it is created, inside the main body of the constructor the const must already be initialized. Otherwise you're left with the choice of waiting until some point later in the constructor body, which means the const would be uninitialized for a while. Also, there's nothing to keep you from changing the value of the const at various places in the constructor body.

    The constructor initializer list

    The special initialization point is called the constructor initializer list, and it was originally developed for use in inheritance (an object-oriented subject of a later chapter). The constructor initializer list, which, as the name implies, occurs only in the definition of the constructor, is a list of "constructor calls" that occur after the function argument list and a colon, but before the opening brace of the constructor body. This is to remind you that the initialization in the list occurs before any of the main constructor code is executed. This is the place to put all const initializations, so the proper form for const inside a class is

    class fred {

    const int size;

    public:

    fred();

    };

    fred::fred() : size(100) {}

    The form of the constructor initializer list shown above is at first confusing because you're not used to seeing a built-in type treated as if it has a constructor.

    "Constructors" for built-in types

    As the language developed and more effort was put into making user-defined types look like built-in types, it became apparent that there were times when it was helpful to make built-in types look like user-defined types. In the constructor initializer list, you can treat a built-in type as if it has a constructor, like this:

    class B {

    int i;

    public:

    B(int I);

    };

    B::B(int I) : i(I) {}

    This is especially critical when initializing const data members because they must be initialized before the function body is entered.

    It made sense to extend this "constructor" for built-in types (which simply means assignment) to the general case. Now you can say

    float pi(3.14159);

    It's often useful to encapsulate a built-in type inside a class to guarantee initialization with the constructor. For example, here's an integer class:

    class integer {

    int i;

    public:

    integer(int I = 0);

    };

    integer::integer(int I) : i(I) {}

    Now if you make an array of integers, they are all automatically initialized to zero:

    integer I[100];

    This initialization isn't necessarily more costly than a for loop or memset( ). Many compilers easily optimize this to a very fast process.

    Compile-time constants in classes

    Because storage is allocated in the class object, the compiler cannot know what the contents of the const are, so it cannot be used as a compile-time constant. This means that, for constant expressions inside classes, const becomes as useless as it is in C. You cannot say

    class bob {

    const int size = 100; // Illegal

    int array[size]; // Illegal

    //...

    The meaning of const inside a class is "This value is const for the lifetime of this particular object, not for the class as a whole." How then do you create a class constant that can be used in constant expressions? A common solution is to use an untagged enum with no instances. An enumeration must have all its values established at compile time, it's local to the class, and its values are available for constant expressions. Thus, you will commonly see

    class Bunch {

    enum { size = 1000 };

    int i[size];

    };

    The use of enum here is guaranteed to occupy no storage in the object, and the enumerators are all evaluated at compile time. You can also explicitly establish the values of the enumerators:

    enum { one = 1, two = 2, three };

    With integral enum types, the compiler will continue counting from the last value, so the enumerator three will get the value 3.

    Here's an example that shows the use of enum inside a container that represents a Stack of string pointers:

    //: C08:SStack.cpp
    // enum inside classes
    #include <cstring>
    #include <iostream>
    using namespace std;
    
    class StringStack {
      enum { size = 100 };
      const char* Stack[size];
      int index;
    public:
       StringStack();
       void push(const char* s);
       const char* pop();
    };
    
    StringStack::StringStack() : index(0) {
      memset(Stack, 0, size * sizeof(char*));
    }
    
    void StringStack::push(const char* s) {
      if(index < size)
        Stack[index++] = s;
    }
    
    const char* StringStack::pop() {
      if(index > 0) {
        const char* rv = Stack[--index];
        Stack[index] = 0;
        return rv;
      }
      return 0; // Empty stack: signal with a null pointer
    }
    
    const char* iceCream[] = {
      "pralines & cream",
      "fudge ripple",
      "jamocha almond fudge",
      "wild mountain blackberry",
      "raspberry sorbet",
      "lemon swirl",
      "rocky road",
      "deep chocolate fudge"
    };
    
    const int ICsz = sizeof iceCream/sizeof *iceCream;
    
    int main() {
      StringStack SS;
      for(int i = 0; i < ICsz; i++)
        SS.push(iceCream[i]);
      const char* cp;
      while((cp = SS.pop()) != 0)
        cout << cp << endl;
    } ///:~

    Notice that push( ) takes a const char* as an argument, pop( ) returns a const char*, and Stack holds const char*. If this were not true, you couldn't use a StringStack to hold the pointers in iceCream. However, it also prevents you from doing anything that will change the objects contained by StringStack. Of course, not all containers are designed with this restriction.

    Although you'll often see the enum technique in legacy code, C++ also has the static const, which produces a more flexible compile-time constant inside a class. This is described later in this book.

    Type checking for enumerations

    C's enumerations are fairly primitive, simply associating integral values with names, but providing no type checking. In C++, as you may have come to expect by now, the concept of type is fundamental, and this is true with enumerations. When you create a named enumeration, you effectively create a new type just as you do with a class: The name of your enumeration becomes a reserved word for the duration of that translation unit.

    In addition, there's stricter type checking for enumerations in C++ than in C. You'll notice this in particular if you have an instance of an enumeration color called a. In C you can say a++ but in C++ you can't. This is because incrementing an enumeration is performing two type conversions, one of them legal in C++ and one of them illegal. First, the value of the enumeration is implicitly cast from a color to an int, then the value is incremented, then the int is cast back into a color. In C++ this isn't allowed, because color is a distinct type and not equivalent to an int. This makes sense because how do you know the increment of blue will even be in the list of colors? If you want to increment a color, then it should be a class (with an increment operation) and not an enum. Any time you write code that assumes an implicit conversion to an enum type, the compiler will flag this inherently dangerous activity.

    Unions have similar additional type checking.

    const objects & member functions

    Class member functions can be made const. What does this mean? To understand, you must first grasp the concept of const objects.

    A const object is defined the same for a user-defined type as a built-in type. For example:

    const int i = 1;

    const blob B(2);

    Here, B is a const object of type blob. Its constructor is called with an argument of two. For the compiler to enforce constness, it must ensure that no data members of the object are changed during the object's lifetime. It can easily ensure that no public data is modified, but how is it to know which member functions will change the data and which ones are "safe" for a const object?

    If you declare a member function const, you tell the compiler the function can be called for a const object. A member function that is not specifically declared const is treated as one that will modify data members in an object, and the compiler will not allow you to call it for a const object.

    It doesn't stop there, however. Just claiming a function is const inside a class definition doesn't guarantee the member function definition will act that way, so the compiler forces you to reiterate the const specification when defining the function. (The const becomes part of the function signature, so both the compiler and linker check for constness.) Then it enforces constness during the function definition by issuing an error message if you try to change any members of the object or call a non-const member function. Thus, any member function you declare const is guaranteed to behave that way in the definition.

    Preceding the function declaration with const means the return value is const, so that isn't the proper syntax. You must place the const specifier after the argument list. For example,

    class X {

    int i;

    public:

    int f() const;

    };

    The const keyword must be repeated in the definition using the same form, or the compiler sees it as a different function:

    int X::f() const { return i; }

    If f( ) attempts to change i in any way or to call another member function that is not const, the compiler flags it as an error.

    Any function that doesnít modify member data should be declared as const, so it can be used with const objects.

    Here's an example that contrasts a const and non-const member function:

    //: C08:Quoter.cpp
    // Random quote selection
    #include <iostream>
    #include <cstdlib> // Random number generator
    #include <ctime> // To seed random generator
    using namespace std;
    
    class Quoter {
      int lastquote;
    public:
      Quoter();
      int Lastquote() const;
      const char* quote();
    };
    
    Quoter::Quoter(){
      lastquote = -1;
      srand(time(0)); // Seed random number generator
    }
    
    int Quoter::Lastquote() const {
      return lastquote;
    }
    
    const char* Quoter::quote() {
      static const char* quotes[] = {
        "Are we having fun yet?",
        "Doctors always know best",
        "Is it ... Atomic?",
        "Fear is obscene",
        "There is no scientific evidence "
        "to support the idea "
        "that life is serious",
      };
      const int qsize = sizeof quotes/sizeof *quotes;
      int qnum = rand() % qsize;
      while(lastquote >= 0 && qnum == lastquote)
        qnum = rand() % qsize;
      return quotes[lastquote = qnum];
    }
    
    int main() {
      Quoter q;
      const Quoter cq;
      cq.Lastquote(); // OK
    //!  cq.quote(); // Not OK; non-const function
      for(int i = 0; i < 20; i++)
        cout << q.quote() << endl;
    } ///:~

    Neither constructors nor destructors can be const member functions because they virtually always perform some modification on the object during initialization and cleanup. The quote( ) member function also cannot be const because it modifies the data member lastquote in the return statement. However, Lastquote( ) makes no modifications, and so it can be const and can be safely called for the const object cq.

    mutable: bitwise vs. memberwise const

    What if you want to create a const member function, but you'd still like to change some of the data in the object? This is sometimes referred to as the difference between bitwise const and memberwise const. Bitwise const means that every bit in the object is permanent, so a bit image of the object will never change. Memberwise const means that, although the entire object is conceptually constant, there may be changes on a member-by-member basis. However, if the compiler is told that an object is const, it will jealously guard that object. There are two ways to change a data member inside a const member function.

    The first approach is the historical one and is called casting away constness. It is performed in a rather odd fashion. You take this (the keyword that produces the address of the current object) and you cast it to a pointer to an object of the current type. It would seem that this is already such a pointer, but it's a const pointer, so by casting it to an ordinary pointer, you remove the constness for that operation. Here's an example:

    //: C08:Castaway.cpp
    // "Casting away" constness
    
    class Y {
      int i, j;
    public:
      Y() { i = j = 0; }
      void f() const;
    };
    
    void Y::f() const {
    //!    i++; // Error -- const member function
        ((Y*)this)->j++; // OK: cast away const-ness
    }
    
    int main() {
      const Y yy;
      yy.f(); // Actually changes it!
    } ///:~
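    The C-style cast above also compiles, but Standard C++ provides const_cast to state this intent explicitly. Here is a minimal sketch (not from the book's code; note that writing to an object that was itself defined const is technically undefined behavior, so this sketch modifies a non-const object through a const view):

    ```cpp
    #include <cassert>

    class Z {
      int j;
    public:
      Z() : j(0) {}
      void f() const {
        // const_cast names exactly what is being removed:
        const_cast<Z*>(this)->j++;
      }
      int get() const { return j; }
    };

    int main() {
      Z z;              // The object itself is not const,
      const Z& rz = z;  // but we hold only a const view of it,
      rz.f();           // so modifying z through the cast is well-defined
      assert(z.get() == 1);
    }
    ```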

    This approach works and you'll see it used in legacy code, but it is not the preferred technique. The problem is that this lack of constness is hidden away in a member function of an object, so the user has no clue that it's happening unless she has access to the source code (and actually goes looking for it). To put everything out in the open, you should use the mutable keyword in the class declaration to specify that a particular data member may be changed inside a const object:

    //: C08:Mutable.cpp
    // The "mutable" keyword
    
    class Y {
      int i;
      mutable int j;
    public:
      Y() { i = j = 0; }
      void f() const;
    };
    
    void Y::f() const {
    //! i++; // Error -- const member function
        j++; // OK: mutable
    }
    
    int main() {
      const Y yy;
      yy.f(); // Actually changes it!
    } ///:~

    Now the user of the class can see from the declaration which members are likely to be modified in a const member function.

    ROMability

    If an object is defined as const, it is a candidate to be placed in read-only memory (ROM), which is often an important consideration in embedded systems programming. Simply making an object const, however, is not enough; the requirements for ROMability are much more strict. Of course, the object must be bitwise-const, rather than memberwise-const. This is easy to see if memberwise constness is implemented only through the mutable keyword, but it is probably not detectable by the compiler if constness is cast away inside a const member function. In addition,

  26. The class or struct must have no user-defined constructors or destructor.
  27. There can be no base classes (covered in the future chapter on inheritance) or member objects with user-defined constructors or destructors.
  28. The effect of a write operation on any part of a const object of a ROMable type is undefined. Although a suitably formed object may be placed in ROM, no objects are ever required to be placed in ROM.
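    As a brief illustration of these rules (the names Port and Tracker are invented for this sketch, not taken from the book's code), an aggregate with no user-defined constructors or destructors qualifies, while a class with a constructor does not:

    ```cpp
    #include <cassert>

    // ROMable: no user-defined constructors or destructor,
    // no base classes, no members with constructors/destructors.
    struct Port {
      unsigned address;
      unsigned char mask;
    };

    // A const object of a ROMable type can be fully initialized
    // at compile time, so it is a candidate for read-only memory:
    const Port serial = { 0x3F8, 0x0F };

    // NOT ROMable: the user-defined constructor must run at
    // startup, so the object cannot simply be burned into ROM.
    class Tracker {
      int count;
    public:
      Tracker() : count(0) {}
    };
    const Tracker t;

    int main() {
      assert(serial.address == 0x3F8);
      assert(serial.mask == 0x0F);
    }
    ```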

    volatile

    The syntax of volatile is identical to that for const, but volatile means "This data may change outside the knowledge of the compiler." Somehow, the environment is changing the data (possibly through multitasking), and volatile tells the compiler not to make any assumptions about the data; this is particularly important during optimization. If the compiler says, "I read the data into a register earlier, and I haven't touched that register," normally it wouldn't need to read the data again. But if the data is volatile, the compiler cannot make such an assumption because the data may have been changed by another process, and it must reread the data rather than optimizing the code.

    You can create volatile objects just as you create const objects. You can also create const volatile objects, which can't be changed by the programmer but instead change through some outside agency. Here is an example that might represent a class to associate with some piece of communication hardware:

    //: C08:Volatile.cpp
    // The volatile keyword
    
    class Comm {
      const volatile unsigned char byte;
      volatile unsigned char flag;
      enum { bufsize = 100 };
      unsigned char buf[bufsize];
      int index;
    public:
      Comm();
      void isr() volatile;
      char read(int Index) const;
    };
    
    Comm::Comm() : index(0), byte(0), flag(0) {}
    
    // Only a demo; won't actually work
    // as an interrupt service routine:
    void Comm::isr() volatile {
      if(flag) flag = 0;
      buf[index++] = byte;
      // Wrap to beginning of buffer:
      if(index >= bufsize) index = 0;
    }
    
    char Comm::read(int Index) const {
      if(Index < 0 || Index >= bufsize)
        return 0;
      return buf[Index];
    }
    
    int main() {
      volatile Comm Port;
      Port.isr(); // OK
    //!  Port.read(0); // Not OK;
                    // read() not volatile
    } ///:~

    As with const, you can use volatile for data members, member functions, and objects themselves. You can call only volatile member functions for volatile objects.

    The reason that isr( ) can't actually be used as an interrupt service routine is that in a member function, the address of the current object (this) must be secretly passed, and an ISR generally wants no arguments at all. To solve this problem, you can make isr( ) a static member function, a subject covered in a future chapter.
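    A minimal sketch of that fix (with invented names; real ISR registration is platform-specific): a static member function receives no hidden this argument, so its type matches an ordinary function pointer.

    ```cpp
    #include <cassert>

    class Device {
      static volatile int flag; // shared with the "interrupt"
    public:
      static void isr() {       // no 'this' is passed
        flag = 0;
      }
      static int getFlag() { return flag; }
    };
    volatile int Device::flag = 1;

    int main() {
      // An interrupt vector typically wants a plain function
      // taking no arguments; a static member function fits:
      void (*vector)() = &Device::isr;
      vector();
      assert(Device::getFlag() == 0);
    }
    ```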

    The syntax of volatile is identical to const, so discussions of the two are often treated together. To indicate the choice of either one, the two are referred to in combination as the c-v qualifier.

    Summary

    The const keyword gives you the ability to define objects, function arguments and return values, and member functions as constants, and to eliminate the preprocessor for value substitution without losing any preprocessor benefits. All this provides a significant additional form of type checking and safety in your programming. The use of so-called const correctness (the use of const anywhere you possibly can) has been a lifesaver for projects.

    Although you can ignore const and continue to use old C coding practices, it's there to help you. Chapters 9 & 10 begin using references heavily, and there you'll see even more about how critical it is to use const with function arguments.

    Exercises

  29. Create a class called bird that can fly( ) and a class rock that can't. Create a rock object, take its address, and assign that to a void*. Now take the void*, assign it to a bird*, and call fly( ) through that pointer. Is it clear why C's permission to openly assign via a void* is a "hole" in the language?
  30. Create a class containing a const member that you initialize in the constructor initializer list and an untagged enumeration that you use to determine an array size.
  31. Create a class with both const and non-const member functions. Create const and non-const objects of this class, and try calling the different types of member functions for the different types of objects.
  32. Create a function that takes an argument by value as a const; then try to change that argument in the function body.
  33. Prove to yourself that the C and C++ compilers really do treat constants differently. Create a global const and use it in a constant expression; then compile it under both C and C++.
    9: Inline functions

    One of the important features C++ inherits from C is efficiency. If the efficiency of C++ were dramatically less than C, there would be a significant contingent of programmers who couldn't justify its use.

    In C, one of the ways to preserve efficiency is through the use of macros, which allow you to make what looks like a function call without the normal overhead of the function call. The macro is implemented with the preprocessor rather than the compiler proper, and the preprocessor replaces all macro calls directly with the macro code, so there's no cost involved from pushing arguments, making an assembly-language CALL, returning arguments, and performing an assembly-language RETURN. All the work is performed by the preprocessor, so you have the convenience and readability of a function call but it doesn't cost you anything.

    There are two problems with the use of preprocessor macros in C++. The first is also true with C: A macro looks like a function call, but doesn't always act like one. This can bury difficult-to-find bugs. The second problem is specific to C++: The preprocessor has no permission to access private data. This means preprocessor macros are virtually useless as class member functions.

    To retain the efficiency of the preprocessor macro, but to add the safety and class scoping of true functions, C++ has the inline function. In this chapter, we'll look at the problems of preprocessor macros in C++, how these problems are solved with inline functions, and guidelines and insights on the way inlines work.

    Preprocessor pitfalls

    The key to the problems of preprocessor macros is that you can be fooled into thinking that the behavior of the preprocessor is the same as the behavior of the compiler. Of course, it was intended that a macro look and act like a function call, so it's quite easy to fall into this fiction. The difficulties begin when the subtle differences appear.

    As a simple example, consider the following:

    #define f (x) (x + 1)

    Now, if a call is made to f like this

    f(1)

    the preprocessor expands it, somewhat unexpectedly, to the following:

    (x) (x + 1)(1)

    The problem occurs because of the gap between f and its opening parenthesis in the macro definition. When this gap is removed, you can actually call the macro with the gap

    f (1)

    and it will still expand properly, to

    (1 + 1)

    The above example is fairly trivial and the problem will make itself evident right away. The real difficulties occur when using expressions as arguments in macro calls.

    There are two problems. The first is that expressions may expand inside the macro so that their evaluation precedence is different from what you expect. For example,

    #define floor(x,b) x>=b?0:1

    Now, if expressions are used for the arguments

    if(floor(a&0x0f,0x07)) // ...

    the macro will expand to

    if(a&0x0f>=0x07?0:1)

    The precedence of & is lower than that of >=, so the macro evaluation will surprise you. Once you discover the problem (and as a general practice when creating preprocessor macros) you can solve it by putting parentheses around everything in the macro definition. Thus,

    #define floor(x,b) ((x)>=(b)?0:1)

    Discovering the problem may be difficult, however, and you may not find it until after you've taken the proper macro behavior for granted. In the unparenthesized version of the preceding example, most expressions will work correctly, because the precedence of >= is lower than that of most operators like +, /, and --, and even the bitwise shift operators. So you can easily begin to think that it works with all expressions, including those using bitwise logical operators.

    The preceding problem can be solved with careful programming practice: Parenthesize everything in a macro. The second difficulty is more subtle. Unlike a normal function, every time you use an argument in a macro, that argument is evaluated. As long as the macro is called only with ordinary variables, this evaluation is benign, but if the evaluation of an argument has side effects, then the results can be surprising and will definitely not mimic function behavior.

    For example, this macro determines whether its argument falls within a certain range:

    #define band(x) (((x)>5 && (x)<10) ? (x) : 0)

    As long as you use an "ordinary" argument, the macro works very much like a real function. But as soon as you relax and start believing it is a real function, the problems start. Thus,

    //: C09:Macro.cpp
    // Side effects with macros
    #include <fstream>
    #include "../require.h"
    using namespace std;
    
    #define band(x) (((x)>5 && (x)<10) ? (x) : 0)
    
    int main() {
      ofstream out("macro.out");
      assure(out, "macro.out");
      for(int i = 4; i < 11; i++) {
        int a = i;
        out << "a = " << a << endl << '\t';
        out << "band(++a)=" << band(++a) << endl;
        out << "\t a = " << a << endl;
      }
    } ///:~

    Here's the output produced by the program, which is not at all what you would have expected from a true function:

    a = 4

    band(++a)=0

    a = 5

    a = 5

    band(++a)=8

    a = 8

    a = 6

    band(++a)=9

    a = 9

    a = 7

    band(++a)=10

    a = 10

    a = 8

    band(++a)=0

    a = 10

    a = 9

    band(++a)=0

    a = 11

    a = 10

    band(++a)=0

    a = 12

    When a is four, only the first part of the conditional occurs, so the expression is evaluated only once, and the side effect of the macro call is that a becomes five, which is what you would expect from a normal function call in the same situation. However, when the number is within the band, both conditionals are tested, which results in two increments. The result is produced by evaluating the argument again, which results in a third increment. Once the number gets out of the band, both conditionals are still tested so you get two increments. The side effects are different, depending on the argument.

    This is clearly not the kind of behavior you want from a macro that looks like a function call. In this case, the obvious solution is to make it a true function, which of course adds the extra overhead and may reduce efficiency if you call that function a lot. Unfortunately, the problem may not always be so obvious, and you can unknowingly get a library that contains functions and macros mixed together, so a problem like this can hide some very difficult-to-find bugs. For example, the putc( ) macro in STDIO.H may evaluate its second argument twice. This is specified in Standard C. Also, careless implementations of toupper( ) as a macro may evaluate the argument more than once, which will give you unexpected results with toupper(*p++).

    Macros and access

    Of course, careful coding and use of preprocessor macros are required with C, and we could certainly get away with the same thing in C++ if it weren't for one problem: A macro has no concept of the scoping required with member functions. The preprocessor simply performs text substitution, so you cannot say something like

    class X {

    int i;

    public:

    #define val (X::i) // Error

    or anything even close. In addition, there would be no indication of which object you were referring to. There is simply no way to express class scope in a macro. Without some alternative to preprocessor macros, programmers will be tempted to make some data members public for the sake of efficiency, thus exposing the underlying implementation and preventing changes in that implementation.
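    What the macro cannot express, an inline member function handles naturally. This sketch (with invented names) shows the access-controlled equivalent of the attempted val macro:

    ```cpp
    #include <cassert>

    class X {
      int i;
    public:
      X() : i(0) {}
      int val() const { return i; } // inline: knows the class scope
      void set(int v) { i = v; }    // and which object is meant
    };

    int main() {
      X a, b;
      a.set(1);
      b.set(2);
      // Unlike a macro, each call knows its own object:
      assert(a.val() == 1 && b.val() == 2);
    }
    ```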

    Inline functions

    In solving the C++ problem of a macro with access to private class members, all the problems associated with preprocessor macros were eliminated. This was done by bringing macros under the control of the compiler, where they belong. In C++, the concept of a macro is implemented as an inline function, which is a true function in every sense. Any behavior you expect from an ordinary function, you get from an inline function. The only difference is that an inline function is expanded in place, like a preprocessor macro, so the overhead of the function call is eliminated. Thus, you should (almost) never use macros, only inline functions.

    Any function defined within a class body is automatically inline, but you can also make a nonclass function inline by preceding it with the inline keyword. However, for it to have any effect, you must include the function body with the declaration; otherwise the compiler will treat it as an ordinary function declaration. Thus,

    inline int PlusOne(int x);

    has no effect at all other than declaring the function (which may or may not get an inline definition sometime later). The successful approach is

    inline int PlusOne(int x) { return ++x; }

    Notice that the compiler will check (as it always does) for the proper use of the function argument list and return value (performing any necessary conversions), something the preprocessor is incapable of. Also, if you try to write the above as a preprocessor macro, you get an unwanted side effect.
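    For instance (a sketch of the pitfall, not code from the book), a macro written the same way as PlusOne increments the caller's variable, while the inline operates on a private copy:

    ```cpp
    #include <cassert>

    #define PLUSONE(x) (++(x))                // modifies the caller's variable!

    inline int PlusOne(int x) { return ++x; } // x is a private copy

    int main() {
      int a = 5;
      int r = PlusOne(a);
      assert(r == 6 && a == 5);  // caller's a untouched

      int b = 5;
      int m = PLUSONE(b);
      assert(m == 6 && b == 6);  // unwanted side effect: b changed
    }
    ```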

    You'll almost always want to put inline definitions in a header file. When the compiler sees such a definition, it puts the function type (signature + return value) and the function body in its symbol table. When you use the function, the compiler checks to ensure the call is correct and the return value is being used correctly, and then substitutes the function body for the function call, thus eliminating the overhead. The inline code does occupy space, but if the function is small, this can actually take less space than the code generated to do an ordinary function call (pushing arguments on the stack and doing the CALL).

    An inline function in a header file defaults to internal linkage; that is, it is static and can only be seen in translation units where it is included. Thus, as long as they aren't declared in the same translation unit, there will be no clash at link time between an inline function and a global function with the same signature. (Remember that the return value is not included in the resolution of function overloading.)

    Inlines inside classes

    To define an inline function, you must ordinarily precede the function definition with the inline keyword. However, this is not necessary inside a class definition. Any function you define inside a class definition is automatically an inline. Thus,

    //: C09:Inline.cpp
    // Inlines inside classes
    #include <iostream>
    using namespace std;
    
    class Point {
      int i, j, k;
    public:
      Point() { i = j = k = 0; }
      Point(int I, int J, int K) {
        i = I;
        j = J;
        k = K;
      }
      void print(const char* msg = "") const {
        if(*msg) cout << msg << endl;
        cout << "i = " << i << ", "
             << "j = " << j << ", "
             << "k = " << k << endl;
      }
    };
    
    int main() {
      Point p, q(1,2,3);
      p.print("value of p");
      q.print("value of q");
    } ///:~

    Of course, the temptation is to use inlines everywhere inside class declarations because they save you the extra step of making the external member function definition. Keep in mind, however, that the idea of an inline is to reduce the overhead of a function call. If the function body is large, chances are you'll spend a much larger percentage of your time inside the body versus going in and out of the function, so the gains will be small. But inlining a big function will cause that code to be duplicated everywhere the function is called, producing code bloat with little or no speed benefit.

    Access functions

    One of the most important uses of inlines inside classes is the access function. This is a small function that allows you to read or change part of the state of an object, that is, an internal variable or variables. The reason inlines are so important with access functions can be seen in the following example:

    //: C09:Access.cpp
    // Inline access functions
    
    class Access {
      int i;
    public:
      int read() const { return i; }
      void set(int I) { i = I; }
    };
    
    int main() {
      Access A;
      A.set(100);
      int x = A.read();
    } ///:~

    Here, the class user never has direct contact with the state variables inside the class, and they can be kept private, under the control of the class designer. All the access to the private data members can be controlled through the member function interface. In addition, access is remarkably efficient. Consider the read( ), for example. Without inlines, the code generated for the call to read( ) would include pushing this on the stack and making an assembly language CALL. With most machines, the size of this code would be larger than the code created by the inline, and the execution time would certainly be longer.

    Without inline functions, an efficiency-conscious class designer will be tempted to simply make i a public member, eliminating the overhead by allowing the user to directly access i. From a design standpoint, this is disastrous because i then becomes part of the public interface, which means the class designer can never change it. You're stuck with an int called i. This is a problem because you may learn sometime later that it would be much more useful to represent the state information as a float rather than an int, but because int i is part of the public interface, you can't change it. If, on the other hand, you've always used member functions to read and change the state information of an object, you can modify the underlying representation of the object to your heart's content (and permanently remove from your mind the idea that you are going to perfect your design before you code it and try it out).
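    For example (a hypothetical revision, not from the book), the representation inside Access could later become a float while every client call site stays the same:

    ```cpp
    #include <cassert>

    class Access {
      float i;  // was: int i -- clients never notice the change
    public:
      Access() : i(0) {}
      int read() const { return (int)i; }
      void set(int I) { i = (float)I; }
    };

    int main() {
      Access A;
      A.set(100);          // identical client code as before
      int x = A.read();
      assert(x == 100);
    }
    ```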

    Accessors and mutators

    Some people further divide the concept of access functions into accessors (to read state information from an object) and mutators (to change the state of an object). In addition, function overloading may be used to provide the same function name for both the accessor and mutator; how you call the function determines whether you're reading or modifying state information. Thus,

    //: C09:Rectangl.cpp
    // Accessors & mutators
    
    class Rectangle {
      int Width, Height;
    public:
      Rectangle(int W = 0, int H = 0)
        : Width(W), Height(H) {}
      int width() const { return Width; } // Read
      void width(int W) { Width = W; } // Set
      int height() const { return Height; } // Read
      void height(int H) { Height = H; } // Set
    };
    
    int main() {
      Rectangle R(19, 47);
      // Change width & height:
      R.height(2 * R.width());
      R.width(2 * R.height());
    } ///:~

    The constructor uses the constructor initializer list (briefly introduced in Chapter 6 and covered fully in Chapter 12) to initialize the values of Width and Height (using the pseudoconstructor-call form for built-in types).

    Of course, accessors and mutators don't have to be simple pipelines to an internal variable. Sometimes they can perform some sort of calculation. The following example uses the Standard C library time functions to produce a simple Time class:

    //: C09:Cpptime.h
    // A simple time class
    #ifndef CPPTIME_H_
    #define CPPTIME_H_
    #include <ctime>
    #include <cstring>
    
    class Time {
      time_t t;
      tm local;
      char Ascii[26];
      unsigned char lflag, aflag;
      void updateLocal() {
        if(!lflag) {
          local = *localtime(&t);
          lflag++;
        }
      }
      void updateAscii() {
        if(!aflag) {
          updateLocal();
          strcpy(Ascii, asctime(&local));
          aflag++;
        }
      }
    public:
      Time() { mark(); }
      void mark() {
        lflag = aflag = 0;
        time(&t);
      }
      const char* ascii() {
        updateAscii();
        return Ascii;
      }
      // Difference in seconds:
      int delta(Time* dt) const {
        return difftime(t, dt->t);
      }
      int DaylightSavings() {
        updateLocal();
        return local.tm_isdst;
      }
      int DayOfYear() { // Since January 1
        updateLocal();
        return local.tm_yday;
      }
      int DayOfWeek() { // Since Sunday
        updateLocal();
        return local.tm_wday;
      }
      int Since1900() { // Years since 1900
        updateLocal();
        return local.tm_year;
      }
      int Month() { // Since January
        updateLocal();
        return local.tm_mon;
      }
      int DayOfMonth() {
        updateLocal();
        return local.tm_mday;
      }
      int Hour() { // Since midnight, 24-hour clock
        updateLocal();
        return local.tm_hour;
      }
      int Minute() {
        updateLocal();
        return local.tm_min;
      }
      int Second() {
        updateLocal();
        return local.tm_sec;
      }
    };
    #endif // CPPTIME_H_ ///:~

    The Standard C library functions have multiple representations for time, and these are all part of the Time class. However, it isn't necessary to update all of them all the time, so instead the time_t t is used as the base representation, and the tm local and the ASCII character representation Ascii each have flags to indicate whether they've been updated to the current time_t. The two private functions updateLocal( ) and updateAscii( ) check the flags and conditionally perform the update.

    The constructor calls the mark( ) function (which the user can also call to force the object to represent the current time), and this clears the two flags to indicate that the local time and ASCII representation are now invalid. The ascii( ) function calls updateAscii( ), which copies the result of the Standard C library function asctime( ) into a local buffer because asctime( ) uses a static data area that is overwritten if the function is called elsewhere. The return value is the address of this local buffer.

    The functions from DaylightSavings( ) onward all use the updateLocal( ) function, which causes the composite inline to be fairly large. This doesn't seem worthwhile, especially considering that you probably won't call these functions very often. However, this doesn't mean all the functions should be made out-of-line. If you leave updateLocal( ) as an inline, its code will be duplicated in all the out-of-line functions, eliminating the extra calling overhead.

    Here's a small test program:

    //: C09:Cpptime.cpp
    // Testing a simple time class
    #include <iostream>
    #include "Cpptime.h"
    using namespace std;
    
    int main() {
      Time start;
      for(int i = 1; i < 1000; i++) {
        cout << i << ' ';
        if(i%10 == 0) cout << endl;
      }
      Time end;
      cout << endl;
      cout << "start = " << start.ascii();
      cout << "end = " << end.ascii();
      cout << "delta = " << end.delta(&start);
    } ///:~

    A Time object is created, then some time-consuming activity is performed, then a second Time object is created to mark the ending time. These are used to show starting, ending, and elapsed times.

    Inlines & the compiler

    To understand when inlining is effective, it's helpful to understand what the compiler does when it encounters an inline. As with any function, the compiler holds the function type (that is, the function prototype including the name and argument types, in combination with the function return value) in its symbol table. In addition, when the compiler sees the inline function body and the function body parses without error, the code for the function body is also brought into the symbol table. Whether the code is stored in source form or as compiled assembly instructions is up to the compiler.

    When you make a call to an inline function, the compiler first ensures that the call can be correctly made; that is, all the argument types must be the proper types, or the compiler must be able to make a type conversion to the proper types, and the return value must be the correct type (or convertible to the correct type) in the destination expression. This, of course, is exactly what the compiler does for any function and is markedly different from what the preprocessor does because the preprocessor cannot check types or make conversions.

    If all the function type information fits the context of the call, then the inline code is substituted directly for the function call, eliminating the call overhead. Also, if the inline is a member function, the address of the object (this) is put in the appropriate place(s), which of course is another thing the preprocessor is unable to perform.

    Limitations

    There are two situations when the compiler cannot perform inlining. In these cases, it simply reverts to the ordinary form of a function by taking the inline definition and creating storage for the function just as it does for a non-inline. If it must do this in multiple translation units (which would normally cause a multiple definition error), the linker is told to ignore the multiple definitions.

    The compiler cannot perform inlining if the function is too complicated. This depends upon the particular compiler, but at the point most compilers give up, the inline probably wouldn't gain you any efficiency. Generally, any sort of looping is considered too complicated to expand as an inline, and if you think about it, looping probably entails much more time inside the function than embodied in the calling overhead. If the function is just a collection of simple statements, the compiler probably won't have any trouble inlining it, but if there are a lot of statements, the overhead of the function call will be much less than the cost of executing the body. And remember, every time you call a big inline function, the entire function body is inserted in place of each call, so you can easily get code bloat without any noticeable performance improvement. Some of the examples in this book may exceed reasonable inline sizes in favor of conserving screen real estate.

    The compiler also cannot perform inlining if the address of the function is taken, implicitly or explicitly. If the compiler must produce an address, then it will allocate storage for the function code and use the resulting address. However, where an address is not required, the compiler will probably still inline the code.
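    A quick sketch of that situation: taking the inline's address forces an out-of-line copy to exist, though direct calls may still be expanded in place.

    ```cpp
    #include <cassert>

    inline int Triple(int x) { return 3 * x; }

    int main() {
      // The pointer needs a real address, so the compiler must
      // allocate storage (an out-of-line body) for Triple:
      int (*fp)(int) = Triple;
      int a = fp(5);      // called through the pointer: a true call
      int b = Triple(5);  // direct call: may still be inlined
      assert(a == 15 && b == 15);
    }
    ```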

    It is important to understand that an inline is just a suggestion to the compiler; the compiler is not forced to inline anything at all. A good compiler will inline small, simple functions while intelligently ignoring inlines that are too complicated. This will give you the results you want: the true semantics of a function call with the efficiency of a macro.

    Order of evaluation

    If you're imagining what the compiler is doing to implement inlines, you can confuse yourself into thinking there are more limitations than actually exist. In particular, if an inline makes a forward reference to a function that hasn't yet been declared in the class, it can seem like the compiler won't be able to handle it:

    //: C09:Evorder.cpp
    // Inline evaluation order
    
    class Forward {
      int i;
    public:
      Forward() : i(0) {}
      // Call to undeclared function:
      int f() const { return g() + 1; }
      int g() const { return i; }
    };
    
    int main() {
      Forward F;
      F.f();
    } ///:~

    In f( ), a call is made to g( ), although g( ) has not yet been declared. This works because the language definition states that no inline functions in a class shall be evaluated until the closing brace of the class declaration.

    Of course, if g( ) in turn called f( ), you'd end up with a set of recursive calls, which are too complicated for the compiler to inline. (Also, you'd have to perform some test in f( ) or g( ) to force one of them to "bottom out," or the recursion would be infinite.)

    Hidden activities in constructors & destructors

    Constructors and destructors are two places where you can be fooled into thinking that an inline is more efficient than it actually is. Both constructors and destructors may have hidden activities, because the class can contain subobjects whose constructors and destructors must be called. These subobjects may be member objects, or they may exist because of inheritance (which hasn't been introduced yet). As an example of a class with member objects:

    //: C09:Hidden.cpp
    // Hidden activites in inlines
    #include <iostream>
    using namespace std;
    
    class Member {
      int i, j, k;
    public:
      Member(int x = 0) { i = j = k = x; }
      ~Member() { cout << "~Member" << endl; }
    };
    
    class WithMembers {
      Member Q, R, S; // Have constructors
      int i;
    public:
      WithMembers(int I) : i(I) {} // Trivial?
      ~WithMembers() {
        cout << "~WithMembers" << endl;
      }
    };
    
    int main() {
      WithMembers WM(1);
    } ///:~

    In class WithMembers, the inline constructor and destructor look straightforward and simple enough, but there's more going on than meets the eye. The constructors and destructors for the member objects Q, R, and S are being called automatically, and those constructors and destructors are also inline, so the difference from ordinary member functions can be significant. This doesn't necessarily mean that you should always make constructor and destructor definitions out-of-line; when you're making an initial "sketch" of a program by quickly writing code, it's often more convenient to use inlines. However, if you're concerned about efficiency, this is a place to look.

    Reducing clutter

    In a book like this, the simplicity and terseness of putting inline definitions inside classes are very useful because more fits on a page or screen (in a seminar). However, Dan Saks has pointed out that in a real project this has the effect of needlessly cluttering the class interface and thereby making the class harder to use. He refers to member functions defined within classes using the Latin in situ (in place) and maintains that all definitions should be placed outside the class to keep the interface clean. Optimization, he argues, is a separate issue. If you want to optimize, use the inline keyword. Using this approach, the earlier RECTANGL.CPP example (page *) becomes:

    //: C09:Noinsitu.cpp
    // Removing in situ functions
    
    class Rectangle {
      int Width, Height;
    public:
      Rectangle(int W = 0, int H = 0);
      int width() const; // Read
      void width(int W); // Set
      int height() const; // Read
      void height(int H); // Set
    };
    
    inline Rectangle::Rectangle(int W, int H)
      : Width(W), Height(H) {
    }
    
    inline int Rectangle::width() const {
      return Width;
    }
    
    inline void Rectangle::width(int W) {
      Width = W;
    }
    
    inline int Rectangle::height() const {
      return Height;
    }
    
    inline void Rectangle::height(int H) {
      Height = H;
    }
    
    int main() {
      Rectangle R(19, 47);
      // Transpose width & height:
      R.height(R.width());
      R.width(R.height());
    } ///:~

    Now if you want to compare the effect of inlining with out-of-line functions, you can simply remove the inline keyword. (Inline functions should normally be put in header files, however, while non-inline functions must reside in their own translation unit.) If you want to put the functions into documentation, it's a simple cut-and-paste operation. In situ functions require more work and have greater potential for errors. Another argument for this approach is that you can always produce a consistent formatting style for function definitions, something that doesn't always occur with in situ functions.

    Preprocessor features

    Earlier, I said you almost always want to use inline functions instead of preprocessor macros. The exceptions are when you need to use three special features in the Standard C preprocessor (which is, by inheritance, the C++ preprocessor): stringizing, string concatenation, and token pasting. Stringizing, performed with the # directive, allows you to take an identifier and turn it into a string, whereas string concatenation takes place when two adjacent strings have no intervening punctuation, in which case the strings are combined. These two features are exceptionally useful when writing debug code. Thus,

    #define DEBUG(X) cout << #X " = " << X << endl

    This prints the value of any variable. You can also get a trace that prints out the statements as they execute:

    #define TRACE(S) cout << #S << endl; S

    The #S stringizes the statement for output, and the second S reiterates the statement so it is executed. Of course, this kind of thing can cause problems, especially in one-line for loops:

    for(int i = 0; i < 100; i++)
      TRACE(f(i));

    Because there are actually two statements in the TRACE( ) macro, the one-line for loop executes only the first one. The solution is to replace the semicolon with a comma in the macro.

    Token pasting

    Token pasting is very useful when you are manufacturing code. It allows you to take two identifiers and paste them together to automatically create a new identifier. For example,

    #define FIELD(A) char* A##_string; int A##_size
    
    class record {
      FIELD(one);
      FIELD(two);
      FIELD(three);
      // ...
    };

    Each call to the FIELD( ) macro creates an identifier to hold a string and another to hold the length of that string. Not only is the result easier to read, it can eliminate coding errors and make maintenance easier. Notice, however, the use of all upper-case characters in the name of the macro. This is a helpful practice because it tells the reader this is a macro and not a function, so if there are problems, it acts as a little reminder.

    Improved error checking

    It's convenient to improve the error checking for the rest of the book. Up until now, the assert( ) macro has been used for "error checking," but it's really for debugging and should be replaced with something that provides useful information at run-time. In addition, exceptions (presented in Chapter 16) provide a much more effective way of handling many kinds of errors, especially those you'd like to recover from instead of just halting the program. The conditions described in this section, however, are ones that prevent the continuation of the program, such as when the user doesn't provide enough command-line arguments or a file cannot be opened.

    Inline functions are convenient here because they allow everything to be placed in a header file, which simplifies the process of using the package. You just include the header file and you donít need to worry about linking.

    The following header file will be placed in the bookís root directory so itís easily accessed from all chapters.

    //: :require.h
    // Test for error conditions in programs
    #ifndef REQUIRE_H_
    #define REQUIRE_H_
    #include <cstdio>
    #include <cstdlib>
    #include <fstream>
    
    inline void 
    require(bool requirement, 
      const char* msg = "Requirement failed") {
      using namespace std;
      if (!requirement) {
        fprintf(stderr, "%s", msg);
        exit(1);
      }
    }
    
    inline void 
    requireArgs(int argc, int args, 
      const char* msg = "Must use %d arguments") {
      using namespace std;
       if (argc != args) {
         fprintf(stderr, msg, args);
         exit(1);
       }
    }
    
    inline void 
    requireMinArgs(int argc, int minArgs, 
      const char* msg = "Must use at least %d arguments") {
      using namespace std;
      if(argc < minArgs) {
        fprintf(stderr, msg, minArgs);
        exit(1);
      }
    }
      
    inline void 
    assure(std::ifstream& in, 
      const char* filename = "") {
      using namespace std;
      if(!in) {
        fprintf(stderr,
          "Could not open file %s", filename);
        exit(1);
      }
    }
    
    inline void 
    assure(std::ofstream& in, 
      const char* filename = "") {
      using namespace std;
      if(!in) {
        fprintf(stderr,
          "Could not open file %s", filename);
        exit(1);
      }
    }
    #endif // REQUIRE_H_ ///:~

    The default values provide reasonable messages that can be changed if necessary.

    You'll note that, unlike most of the headers in this book, require.h uses using namespace std rather than full namespace qualification (that is, std::). That's because at the time of this writing some compilers still hadn't put the Standard C functions (in <cstdio>, etc.) into the std namespace. This form allows both approaches to work without forcing a using namespace std on every file that includes this header, since a using directive that's within a function affects only that function.

    Here's a simple program to test require.h:

    //: C09:Errtest.cpp
    // Testing require.h
    #include "../require.h"
    #include <fstream>
    using namespace std;
    
    int main(int argc, char* argv[]) {
      int i = 1;
      require(i, "value must be nonzero");
      requireArgs(argc, 2);
      requireMinArgs(argc, 2);
      ifstream in(argv[1]);
      assure(in, argv[1]); // Use the file name
      ifstream nofile("nofile.xxx");
      assure(nofile); // The default argument
      ofstream out("tmp.txt");
      assure(out);
    } ///:~

    You might be tempted to go one step further for opening files and add a macro to require.h:

    #define IFOPEN(VAR, NAME) \
      ifstream VAR(NAME); \
      assure(VAR, NAME);

    It could then be used like this:

    IFOPEN(in, argv[1])

    At first, this might seem appealing since you've got less to type. It's not terribly unsafe, but it's a road best avoided. Note that, once again, a macro looks like a function but behaves differently: it's actually creating an object (in) whose scope persists beyond the macro. You may understand this, but for new programmers and code maintainers it's just one more thing they have to puzzle out. C++ is complicated enough without adding to the confusion, so try to talk yourself out of using macros whenever you can.

    Summary

    It's critical that you be able to hide the underlying implementation of a class because you may want to change that implementation sometime later. You'll do this for efficiency, or because you get a better understanding of the problem, or because some new class becomes available that you want to use in the implementation. Anything that jeopardizes the privacy of the underlying implementation reduces the flexibility of the language. Thus, the inline function is very important because it virtually eliminates the need for preprocessor macros and their attendant problems. With inlines, member functions can be as efficient as preprocessor macros.

    The inline function can be overused in class definitions, of course. The programmer is tempted to do so because it's easier, so it will happen. However, it's not that big an issue because later, when looking for size reductions, you can always move the functions out of line with no effect on their functionality. The development guideline should be "First make it work, then optimize it."

    Exercises

  35. Take Exercise 2 from Chapter 6 and add an inline constructor and an inline member function called print( ) to print out all the values in the array.
  36. Take the NESTFRND.CPP example from Chapter 2 and replace all the member functions with inlines. Make them non-in situ inline functions. Also change the initialize( ) functions to constructors.
  37. Take the NL.CPP example from Chapter 5 and turn nl into an inline function in its own header file.
  38. Create a class A with a default constructor that announces itself. Now make a new class B and put an object of A as a member of B, and give B an inline constructor. Create an array of B objects and see what happens.
  39. Create a large quantity of the objects from Exercise 4, and use the Time class to time the difference between a non-inline constructor and an inline constructor. (If you have a profiler, also try using that.)

10: Name control

Creating names is a fundamental activity in programming, and when a project gets large the number of names can easily become overwhelming. C++ gives you a great deal of control over the creation and visibility of names, where storage for those names is placed, and the linkage of names.

The static keyword was overloaded in C before people knew what the term "overload" meant, and C++ has added yet another meaning. The underlying concept with all uses of static seems to be "something that holds its position" (like static electricity), whether that means a physical location in memory or visibility within a file.

In this chapter, you'll learn how static controls storage and visibility, and you'll see an improved way to control access to names via C++'s namespace feature. You'll also find out how to use functions that were written and compiled in C.

Static elements from C

In both C and C++ the keyword static has two basic meanings, which unfortunately often step on each other's toes:

    1. Allocated once at a fixed address; that is, the object is created in a special static data area rather than on the stack each time a function is called. This is the concept of static storage.
    2. Local to a particular translation unit (and class scope in C++, as you will see later). Here, static controls the visibility of a name, so that name cannot be seen outside the translation unit or class. This also describes the concept of linkage, which determines what names the linker will see.

This section will look at the above meanings of static as they were inherited from C.

static variables inside functions

Normally, when you create a variable inside a function, the compiler allocates storage for that variable each time the function is called by moving the stack pointer down an appropriate amount. If there is an initializer for the variable, the initialization is performed each time that sequence point is passed.

Sometimes, however, you want to retain a value between function calls. You could accomplish this by making a global variable, but that variable would not be under the sole control of the function. C and C++ allow you to create a static object inside a function; the storage for this object is not on the stack but instead in the program's static storage area. This object is initialized once the first time the function is called and then retains its value between function invocations. For example, the following function returns the next character in the string each time the function is called:

//: C10:Statfun.cpp
// Static vars inside functions
#include <iostream>
#include "../require.h"
using namespace std;

char onechar(const char* string = 0) {
  static const char* s;
  if(string) {
    s = string;
    return *s;
  }
  else
    require(s, "un-initialized s");
  if(*s == '\0')
    return 0;
  return *s++;
}

const char* a = "abcdefghijklmnopqrstuvwxyz";

int main() {
  // onechar(); // require() fails
  onechar(a); // Initializes s to a
  char c;
  while((c = onechar()) != 0)
    cout << c << endl;
} ///:~

The static const char* s holds its value between calls of onechar( ) because its storage is not part of the stack frame of the function, but is in the static storage area of the program. When you call onechar( ) with a char* argument, the argument is assigned to s, and the first character of the string is returned. Each subsequent call to onechar( ) without an argument uses the default value of zero for string, which indicates to the function that you are still extracting characters from the previously initialized s. The function continues to produce characters until it reaches the null terminator of the string, at which point it stops incrementing the pointer so it doesn't overrun the end of the string.

But what happens if you call onechar( ) with no arguments and without previously initializing the value of s? In the definition for s, you could have provided an initializer,

static const char* s = 0;

but if you do not provide an initializer for a static variable of a built-in type, the compiler guarantees that the variable will be initialized to zero (converted to the proper type) at program start-up. So in onechar( ), the first time the function is called, s is zero, and the require(s, ...) call will catch the case where no string has yet been supplied.

The above initialization for s is very simple, but initialization for static objects (like all other objects) can be arbitrary expressions involving constants and previously declared variables and functions.

static class objects inside functions

The rules are the same for static objects of user-defined types, including the fact that some initialization is required for the object. However, assignment to zero has meaning only for built-in types; user-defined types must be initialized with constructor calls. Thus, if you don't specify constructor arguments when you define the static object, the class must have a default constructor. For example,

//: C10:Funobj.cpp
// Static objects in functions
#include <iostream>
using namespace std;

class X {
  int i;
public:
  X(int I = 0) : i(I) {} // Default
  ~X() { cout << "X::~X()" << endl; }
};

void f() {
  static X x1(47);
  static X x2; // Default constructor required
}

int main() {
  f();
} ///:~

The static objects of type X inside f( ) can be initialized either with the constructor argument list or with the default constructor. This construction occurs the first time control passes through the definition, and only the first time.

Static object destructors

Destructors for static objects (that is, all objects with static storage, not just local static objects as in the above example) are called when main( ) exits or when the Standard C library function exit( ) is explicitly called; in most implementations, main( ) calls exit( ) when it terminates. This means that it can be dangerous to call exit( ) inside a destructor because you can end up with infinite recursion. Static object destructors are not called if you exit the program using the Standard C library function abort( ).

You can specify actions to take place when leaving main( ) (or calling exit( )) by using the Standard C library function atexit( ). In this case, the functions registered by atexit( ) may be called before the destructors for any objects constructed before leaving main( ) (or calling exit( )).

Destruction of static objects occurs in the reverse order of initialization. However, only objects that have been constructed are destroyed. Fortunately, the programming system keeps track of initialization order and the objects that have been constructed. Global objects are always constructed before main( ) is entered, so this last statement applies only to static objects that are local to functions. If a function containing a local static object is never called, the constructor for that object is never executed, so the destructor is also not executed. For example,

//: C10:Statdest.cpp
// Static object destructors
#include <fstream>
using namespace std;
ofstream out("statdest.out"); // Trace file

class Obj {
  char c; // Identifier
public:
  Obj(char C) : c(C) {
    out << "Obj::Obj() for " << c << endl;
  }
  ~Obj() {
    out << "Obj::~Obj() for " << c << endl;
  }
};

Obj A('A'); // Global (static storage)
// Constructor & destructor always called

void f() {
  static Obj B('B');
}

void g() {
  static Obj C('C');
}

int main() {
  out << "inside main()" << endl;
  f(); // Calls static constructor for B
  // g() not called
  out << "leaving main()" << endl;
} ///:~

In Obj, the char c acts as an identifier so the constructor and destructor can print out information about the object they're working on. The Obj A is a global object, so the constructor is always called for it before main( ) is entered, but the constructors for the static Obj B inside f( ) and the static Obj C inside g( ) are called only if those functions are called.

To demonstrate which constructors and destructors are called, inside main( ) only f( ) is called. The output of the program is

Obj::Obj() for A
inside main()
Obj::Obj() for B
leaving main()
Obj::~Obj() for B
Obj::~Obj() for A

The constructor for A is called before main( ) is entered, and the constructor for B is called only because f( ) is called. When main( ) exits, the destructors for the objects that have been constructed are called in reverse order of their construction. This means that if g( ) is called, the order in which the destructors for B and C are called depends on whether f( ) or g( ) is called first.

Notice that the trace file ofstream object out is also a static object. It is important that its definition (as opposed to an extern declaration) appear at the beginning of the file, before there is any possible use of out. Otherwise you'll be using an object before it is properly initialized.

In C++ the constructor for a global static object is called before main( ) is entered, so you now have a simple and portable way to execute code before entering main( ) and to execute code with the destructor after exiting main( ). In C this was always a trial that required you to root around in the compiler vendor's assembly-language startup code.

Controlling linkage

Ordinarily, any name at file scope (that is, not nested inside a class or function) is visible throughout all translation units in a program. This is often called external linkage because at link time the name is visible to the linker everywhere, external to that translation unit. Global variables and ordinary functions have external linkage.

There are times when you'd like to limit the visibility of a name. You might like to have a variable at file scope so all the functions in that file can use it, but you don't want functions outside that file to see or access that variable, or to inadvertently cause name clashes with identifiers outside the file.

An object or function name at file scope that is explicitly declared static is local to its translation unit (in the terms of this book, the .CPP file where the declaration occurs); that name has internal linkage. This means you can use the same name in other translation units without a name clash.

One advantage to internal linkage is that the name can be placed in a header file without worrying that there will be a clash at link time. Names that are commonly placed in header files, such as const definitions and inline functions, default to internal linkage. (However, const defaults to internal linkage only in C++; in C it defaults to external linkage.) Note that linkage refers only to elements that have addresses at link/load time; thus, class declarations and local variables have no linkage.

Confusion

Hereís an example of how the two meanings of static can cross over each other. All global objects implicitly have static storage class, so if you say (at file scope),

int a = 0;

then storage for a will be in the program's static data area, and the initialization for a will occur once, before main( ) is entered. In addition, the visibility of a is global, across all translation units. In terms of visibility, the opposite of static (visible only in this translation unit) is extern, which explicitly states that the visibility of the name is across all translation units. So the above definition is equivalent to saying

extern int a = 0;

But if you say instead,

static int a = 0;

all you've done is change the visibility, so a has internal linkage. The storage class is unchanged: the object resides in the static data area whether the visibility is static or extern.

Once you get into local variables, static stops altering the visibility (and extern has no meaning) and instead alters the storage class.

With function names, static and extern can only alter visibility, so if you say,

extern void f();

itís the same as the unadorned declaration

void f();

and if you say,

static void f();

it means f( ) is visible only within this translation unit; this is sometimes called file static.

Other storage class specifiers

You will see static and extern used commonly. There are two other storage class specifiers that occur less often. The auto specifier is almost never used because it tells the compiler that this is a local variable. The compiler can always determine this fact from the context in which the variable is defined, so auto is redundant.

A register variable is a local (auto) variable, along with a hint to the compiler that this particular variable will be heavily used, so the compiler ought to keep it in a register if it can. Thus, it is an optimization aid. Various compilers respond differently to this hint; they have the option to ignore it. If you take the address of the variable, the register specifier will almost certainly be ignored. You should avoid using register because the compiler can usually do a better job of optimization than you can.

Namespaces

Although names can be nested inside classes, the names of global functions, global variables, and classes are still in a single global name space. The static keyword gives you some control over this by allowing you to give variables and functions internal linkage (make them file static). But in a large project, lack of control over the global name space can cause problems. To solve these problems for classes, vendors often create long complicated names that are unlikely to clash, but then you're stuck typing those names. (A typedef is often used to simplify this.) It's not an elegant, language-supported solution.

You can subdivide the global name space into more manageable pieces using the namespace feature of C++. The namespace keyword, like class, struct, enum, and union, puts the names of its members in a distinct space. While the other keywords have additional purposes, the creation of a new name space is the only purpose for namespace.

Creating a namespace

The creation of a namespace is notably similar to the creation of a class:

namespace MyLib {

// Declarations

}

This produces a new namespace containing the enclosed declarations. There are significant differences with class, struct, union and enum, however:

    1. A namespace definition can appear only at the global scope, or nested within another namespace.
    2. No terminating semicolon is necessary after the closing brace of a namespace definition.
    3. A namespace definition can be "continued" over multiple header files using a syntax that would appear to be a redefinition for a class:

      //: C10:Header1.h
      namespace MyLib {
        extern int X;
        void f();
        // ...
      } ///:~

      //: C10:Header2.h
      // Add more names to MyLib
      namespace MyLib { // NOT a redefinition!
        extern int Y;
        void g();
        // ...
      } ///:~

    4. A namespace name can be aliased to another name, so you don't have to type an unwieldy name created by a library vendor:

      namespace BobsSuperDuperLibrary {
        class widget { /* ... */ };
        class poppit { /* ... */ };
        // ...
      }
      // Too much to type! I'll alias it:
      namespace Bob = BobsSuperDuperLibrary;

    5. You cannot create an instance of a namespace as you can with a class.
Unnamed namespaces

Each translation unit contains an unnamed namespace that you can add to by saying namespace without an identifier:

namespace {
  class Arm { /* ... */ };
  class Leg { /* ... */ };
  class Head { /* ... */ };
  class Robot {
    Arm arm[4];
    Leg leg[16];
    Head head[3];
    // ...
  } Xanthan;
  int i, j, k;
}

The names in this space are automatically available in that translation unit without qualification. It is guaranteed that an unnamed namespace is unique for each translation unit. If you put local names in an unnamed namespace, you don't need to give them internal linkage by making them static.

Friends

You can inject a friend declaration into a namespace by declaring it within an enclosed class:

namespace me {
  class us {
    //...
    friend void you();
  };
}

Now the function you( ) is a member of the namespace me.

Using a namespace

You can refer to a name within a namespace in two ways: one name at a time, using the scope resolution operator, or more expediently with the using keyword.

Scope resolution

Any name in a namespace can be explicitly specified using the scope resolution operator, just like the names within a class:

namespace X {
  class Y {
    static int i;
  public:
    void f();
  };
  class Z;
  void foo();
}

int X::Y::i = 9;

class X::Z {
  int u, v, w;
public:
  Z(int I);
  int g();
};

X::Z::Z(int I) { u = v = w = I; }
int X::Z::g() { return u = v = w = 0; }

void X::foo() {
  X::Z a(1);
  a.g();
}

So far, namespaces look very much like classes.

The using directive

Because it can rapidly get tedious to type the full qualification for an identifier in a namespace, the using keyword allows you to import an entire namespace at once. When used in conjunction with the namespace keyword, this is called a using directive. The using directive declares all the names of a namespace to be in the current scope, so you can conveniently use the unqualified names:

namespace math {
  enum sign { positive, negative };
  class integer {
    int i;
    sign s;
  public:
    integer(int I = 0)
      : i(I),
        s(i >= 0 ? positive : negative)
    {}
    sign Sign() { return s; }
    void Sign(sign S) { s = S; }
    // ...
  };
  integer A, B, C;
  integer divide(integer, integer);
  // ...
}

Now you can declare all the names in math inside a function, but leave those names nested within the function:

void arithmetic() {
  using namespace math;
  integer X;
  X.Sign(positive);
}

Without the using directive, all the names in the namespace would need to be fully qualified.

One aspect of the using directive may seem slightly counterintuitive at first. The visibility of the names introduced with a using directive is the scope where the directive is made. But you can override the names from the using directive as if they've been declared globally to that scope!

void q() {
  using namespace math;
  integer A; // Hides math::A
  A.Sign(negative);
  math::A.Sign(positive);
}

If you have a second namespace:

namespace calculation {
  class integer {};
  integer divide(integer, integer);
  // ...
}

and this namespace is also introduced with a using directive, you have the possibility of a collision. However, the ambiguity appears at the point of use of the name, not at the using directive:

void s() {
  using namespace math;
  using namespace calculation;
  // Everything's OK until:
  divide(1, 2); // Ambiguity
}

Thus it's possible to write using directives to introduce a number of namespaces with conflicting names without ever producing an ambiguity, as long as you never use the conflicting names.

      The using declaration

      You can introduce names one at a time into the current scope with a using declaration. Unlike the using directive, which treats names as if they were declared globally to the scope, a using declaration is a declaration within the current scope. This means it can override names from a using directive:

      namespace U {

      void f();

      void g();

      }

      namespace V {

      void f();

      void g();

      }

      void func() {

      using namespace U; // Using directive

      using V::f; // Using declaration

      f(); // Calls V::f();

      U::f(); // Must fully qualify to call

      }

      The using declaration just gives the fully specified name of the identifier, but no type information. This means that if the namespace contains a set of overloaded functions with the same name, the using declaration declares all the functions in the overloaded set.

      You can put a using declaration anywhere a normal declaration can occur. A using declaration works like a normal declaration in all ways but one: it's possible for a using declaration to cause the overloading of functions with the same argument types (which isn't allowed with normal overloading). This ambiguity, however, doesn't show up until the point of use, rather than the point of declaration.

      A using declaration can also appear within a namespace, and it has the same effect as anywhere else: that name is declared within that namespace:

      namespace Q {
        using U::f;
        using V::g;
        // ...
      }
      
      void m() {
        using namespace Q;
        f(); // Calls U::f();
        g(); // Calls V::g();
      }

      A using declaration is an alias, and it allows you to declare the same function in separate namespaces. If you end up redeclaring the same function by importing different namespaces, it's OK; there won't be any ambiguities or duplications.

      Static members in C++

      There are times when you need a single storage space to be used by all objects of a class. In C, you would use a global variable, but this is not very safe. Global data can be modified by anyone, and its name can clash with other identical names in a large project. It would be ideal if the data could be stored as if it were global, but be hidden inside a class, and clearly associated with that class.

      This is accomplished with static data members inside a class. There is a single piece of storage for a static data member, regardless of how many objects of that class you create. All objects share the same static storage space for that data member, so it is a way for them to "communicate" with each other. But the static data belongs to the class; its name is scoped inside the class and it can be public, private, or protected.

      Defining storage for static data members

      Because static data has a single piece of storage regardless of how many objects are created, that storage must be defined in a single place. The compiler will not allocate storage for you (although some older compilers once did). The linker will report an error if a static data member is declared but not defined.

      The definition must occur outside the class (no inlining is allowed), and only one definition is allowed. Thus it is usual to put it in the implementation file for the class. The syntax sometimes gives people trouble, but it is actually quite logical. For example,

      class A {
        static int i;
      public:
        //...
      };

      and later, in the definition file,

      int A::i = 1;

      If you were to define an ordinary global variable, you would say

      int i = 1;

      but here, the scope resolution operator and the class name are used to specify A::i.

      Some people have trouble with the idea that A::i is private, and yet here's something that seems to be manipulating it right out in the open. Doesn't this break the protection mechanism? It's a completely safe practice for two reasons. First, the only place this initialization is legal is in the definition. Indeed, if the static data were an object with a constructor, you would call the constructor instead of using the = operator. Second, once the definition has been made, the end-user cannot make a second definition; the linker will report an error. And the class creator is forced to create the definition, or the code won't link during testing. This ensures that the definition happens only once and that it's in the hands of the class creator.

      The entire initialization expression for a static member is in the scope of the class. For example,

      //: C10:Statinit.cpp
      // Scope of static initializer
      #include <iostream>
      using namespace std;
      
      int x = 100;
      
      class WithStatic {
        static int x;
        static int y;
      public:
        void print() const {
          cout << "WithStatic::x = " << x << endl;
          cout << "WithStatic::y = " << y << endl;
        }
      };
      
      int WithStatic::x = 1;
      int WithStatic::y = x + 1;
      // WithStatic::x NOT ::x
      
      int main() {
        WithStatic WS;
        WS.print();
      } ///:~

      Here, the qualification WithStatic:: extends the scope of WithStatic to the entire definition.

      static array initialization

      It's possible to create static const objects as well as arrays of static objects, both const and non-const. Here's the syntax you use to initialize such elements:

      //: C10:Statarry.cpp {O}
      // Initializing static arrays
      
      class Values {
        static const int size;
        static const float table[4];
        static char letters[10];
      };
      
      const int Values::size = 100;
      
      const float Values::table[4] = {
        1.1, 2.2, 3.3, 4.4
      };
      
      char Values::letters[10] = {
        'a', 'b', 'c', 'd', 'e',
        'f', 'g', 'h', 'i', 'j'
      };
      ///:~

      As with all static member data, you must provide a single external definition for each member. These definitions have external linkage, so they belong in one implementation file, not a header. The syntax for initializing a static array is the same as for any other aggregate, but you cannot use automatic counting: the compiler must have enough knowledge about the class to create an object by the end of the class declaration, including the exact sizes of all its components.

      Compile-time constants inside classes

      In Chapter 6 enumerations were introduced as a way to create a compile-time constant (one that can be evaluated by the compiler in a constant expression, such as an array size) that's local to a class. This practice, although commonly used, is often referred to as the "enum hack" because it uses enumerations in a way they were not originally intended.

      To accomplish the same thing with a better approach, you can use a static const inside a class. Because it's both const (it won't change) and static (there's only one for the whole class), a static const of integral type inside a class can be used as a compile-time constant, like this:

      class X {
        static const int size = 100;
        int array[size];
      public:
        // ...
      };
      
      const int X::size; // Definition; the value appears in the class
      
      Because size is used in a constant expression inside the class (the array bound), its value must appear right in the static const member's declaration, as shown. This in-class initialization is allowed only for integral types. The definition of the member still goes in one implementation file, but without repeating the initializer. As with an ordinary global const of a built-in type, the compiler typically allocates no storage for the const as long as you use it only in constant expressions.

      An additional advantage to this approach is that a static const member may be of any built-in type, although only integral types may be given their value inside the class. With enum, you're limited to integral values.

      Nested and local classes

      You can easily put static data members in classes that are nested inside other classes. Defining such members is an intuitive and obvious extension; you simply use another level of scope resolution. However, you cannot have static data members inside local classes (classes defined inside functions). Thus,

      //: C10:Local.cpp {O}
      // Static members & local classes
      #include <iostream>
      using namespace std;
      
      // Nested class CAN have static data members:
      class Outer {
        class Inner {
          static int i; // OK
        };
      };
      
      int Outer::Inner::i = 47;
      
      // Local class cannot have static data members:
      void f() {
        class Foo {
        public:
      //! static int i;  // Error
          // (How would you define i?)
        } x;
      } ///:~

      You can see the immediate problem with a static member in a local class: How do you describe the data member at file scope in order to define it? In practice, local classes are used very rarely.

      static member functions

      You can also create static member functions that, like static data members, work for the class as a whole rather than for a particular object of a class. Instead of making a global function that lives in and "pollutes" the global or local namespace, you bring the function inside the class. When you create a static member function, you are expressing an association with a particular class.

      A static member function cannot access ordinary data members, only static data members. It can call only other static member functions. Normally, the address of the current object (this) is quietly passed in when any member function is called, but a static member function has no this, which is the reason it cannot access ordinary members. Thus, you get the tiny increase in speed afforded by a global function, which doesn't have the extra overhead of passing this, but the benefits of having the function inside the class.

      Using static to indicate that only one piece of storage for a class member exists for all objects of a class parallels its use with functions, to mean that only one copy of a local variable is used for all calls of a function.

      Here's an example showing static data members and static member functions used together:

      //: C10:StaticMemberFunctions.cpp
      
      class X {
        int i;
        static int j;
      public:
        X(int I = 0) : i(I) {
           // Non-static member function can access
           // static member function or data:
          j = i;
        }
        int val() const { return i; }
        static int incr() {
          //! i++; // Error: static member function
          // cannot access non-static member data
          return ++j;
        }
        static int f() {
          //! val(); // Error: static member function
          // cannot access non-static member function
          return incr(); // OK -- calls static
        }
      };
      
      int X::j = 0;
      
      int main() {
        X x;
        X* xp = &x;
        x.f();
        xp->f();
        X::f(); // Only works with static members
      } ///:~

      Because they have no this pointer, static member functions can neither access nonstatic data members nor call nonstatic member functions. (Those functions require a this pointer.)

      Notice in main( ) that a static member function can be selected using the usual dot or arrow syntax, associating the call with an object, but also with no object, using the class name and the scope resolution operator (because a static member is associated with a class, not a particular object).

      Here's an interesting feature: Because of the way initialization happens for static member objects, you can put a static data member of the same class inside that class. Here's an example that allows only a single object of type Egg to exist by making the constructor private. You can access that object, but you can't create any new Egg objects:

      //: C10:Selfmem.cpp
      // Static member of same type
      // ensures only one object of this type exists.
      // Also referred to as a "singleton" pattern.
      #include <iostream>
      using namespace std;
      
      class Egg {
        static Egg e;
        int i;
        Egg(int I) : i(I) {}
      public:
        static Egg* instance() { return &e; }
        int val() { return i; }
      };
      
      Egg Egg::e(47);
      
      int main() {
      //!  Egg x(1); // Error -- can't create an Egg
        // You can access the single instance:
        cout << Egg::instance()->val() << endl;
      } ///:~

      The initialization of Egg::e happens after the class declaration is complete, so the compiler has all the information it needs to allocate storage and make the constructor call.

      Static initialization dependency

      Within a specific translation unit, the order of initialization of static objects is guaranteed to be the order in which the object definitions appear in that translation unit. The order of destruction is guaranteed to be the reverse of the order of initialization.

      However, there is no guarantee concerning the order of initialization of static objects across translation units, and there's no way to specify this order. This can cause significant problems. As an example of an instant disaster (which will halt primitive operating systems, and kill the process on sophisticated ones), if one file contains

      // First file
      #include <fstream>
      using namespace std;
      ofstream out("out.txt");

      and another file uses the out object in one of its initializers

      // Second file
      #include <fstream>
      using namespace std;
      extern ofstream out;
      
      class oof {
      public:
        oof() { out << "barf"; }
      } OOF;

      the program may work, and it may not. If the programming environment builds the program so that the first file is initialized before the second file, then there will be no problem. However, if the second file is initialized before the first, the constructor for oof relies upon the existence of out, which hasn't been constructed yet, and this causes chaos. This is only a problem with static object initializers that depend on each other, because by the time you get into main( ), all constructors for static objects have already been called.

      A more subtle example can be found in the ARM. In one file,

      extern int y;
      int x = y + 1;

      and in a second file,

      extern int x;
      int y = x + 1;

      For all static objects, the linking-loading mechanism guarantees a static initialization to zero before the dynamic initialization specified by the programmer takes place. In the previous example, zeroing of the storage occupied by the ofstream out object has no special meaning, so it is truly undefined until the constructor is called. However, with built-in types, initialization to zero does have meaning, and if the files are initialized in the order they are shown above, y begins as statically initialized to zero, so x becomes one, and y is dynamically initialized to two. However, if the files are initialized in the opposite order, x is statically initialized to zero, y is dynamically initialized to one, and x then becomes two.

      Programmers must be aware of this because they can create a program with static initialization dependencies and get it working on one platform, but move it to another compiling environment where it suddenly, mysteriously, doesn't work.

      What to do

      There are three approaches to dealing with this problem:

    1. Don't do it. Avoiding static initializer dependencies is the best solution.
    2. If you must do it, put the critical static object definitions in a single file, so you can portably control their initialization by putting them in the correct order.
    3. If you're convinced it's unavoidable to scatter static objects across translation units (as in the case of a library, where you can't control the programmer who uses it), there is a technique pioneered by Jerry Schwarz while creating the iostream library (because the definitions for cin, cout, and cerr live in a separate file).

      This technique requires an additional class in your library header file. This class is responsible for the dynamic initialization of your library's static objects. Here is a simple example:

      //: C10:Depend.h
      // Static initialization technique
      #ifndef DEPEND_H_
      #define DEPEND_H_
      #include <iostream>
      extern int x; // Declarations, not definitions
      extern int y;
      
      class Initializer {
        static int init_count;
      public:
        Initializer() {
          std::cout << "Initializer()" << std::endl;
          // Initialize first time only
          if(init_count++ == 0) {
            std::cout << "performing initialization"
                      << std::endl;
            x = 100;
            y = 200;
          }
        }
        ~Initializer() {
          std::cout << "~Initializer()" << std::endl;
          // Clean up last time only
          if(--init_count == 0) {
            std::cout << "performing cleanup" << std::endl;
            // Any necessary cleanup here
          }
        }
      };
      
      // The following creates one object in each
      // file where DEPEND.H is included, but that
      // object is only visible within that file:
      static Initializer init;
      #endif // DEPEND_H_ ///:~

      The declarations for x and y announce only that these objects exist; they don't allocate storage. However, the definition of the Initializer object init allocates storage for that object in every file where the header is included. Because the name is static (controlling visibility this time, not the way storage is allocated, since storage is static by default at file scope), init is visible only within its translation unit, so the linker will not complain about multiple definitions.

      Here is the file containing the definitions for x, y, and init_count:

      //: C10:Depdefs.cpp {O}
      // Definitions for DEPEND.H
      #include "Depend.h"
      // Static initialization will force
      // all these values to zero:
      int x;
      int y;
      int Initializer::init_count;
      ///:~

      (Of course, a file static instance of init is also placed in this file.) Suppose that two other files are created by the library user:

      //: C10:Depend.cpp {O}
      // Static initialization
      #include "Depend.h"
      ///:~

      and

      //: C10:Depend2.cpp
      //{L} Depdefs Depend
      // Static initialization
      #include "Depend.h"
      using namespace std;
      
      int main() {
        cout << "inside main()" << endl;
        cout << "leaving main()" << endl;
      } ///:~

      Now it doesn't matter which translation unit is initialized first. The first time a translation unit containing DEPEND.H is initialized, init_count will be zero, so the initialization will be performed. (This depends heavily on the fact that global objects of built-in types are set to zero before any dynamic initialization takes place.) For all the rest of the translation units, the initialization will be skipped. Cleanup happens in the reverse order, and ~Initializer( ) ensures that it will happen only once.

      This example used built-in types as the global static objects. The technique also works with classes, but those objects must then be dynamically initialized by the Initializer class. One way to do this is to create the classes without constructors and destructors, but instead with initialization and cleanup member functions using different names. A more common approach, however, is to have pointers to objects and to create them dynamically on the heap inside Initializer( ). This requires the use of two C++ keywords, new and delete, which will be explored in Chapter 11.

      Alternate linkage specifications

      What happens if you're writing a program in C++ and you want to use a C library? If you make the C function declaration,

      float f(int a, char b);

      the C++ compiler will mangle (decorate) this name to something like _f_int_char to support function overloading (and type-safe linkage). However, the C compiler that compiled your C library has most definitely not mangled the name, so its internal name will be _f. Thus, the linker will not be able to resolve your C++ calls to f( ).

      The escape mechanism provided in C++ is the alternate linkage specification, which was created by overloading the extern keyword. The extern is followed by a string that specifies the linkage you want for the declaration, followed by the declaration itself:

      extern "C" float f(int a, char b);

      This tells the compiler to give C linkage to f( ); that is, donít mangle the name. The only two types of linkage specifications supported by the standard are "C" and "C++," but compiler vendors have the option of supporting other languages in the same way.

      If you have a group of declarations with alternate linkage, put them inside braces, like this:

      extern "C" {
        float f(int a, char b);
        double d(int a, char b);
      }

      Or, for a header file,

      extern "C" {
      #include "Myheader.h"
      }

      Most C++ compiler vendors handle the alternate linkage specifications inside their header files that work with both C and C++, so you don't have to worry about it.

      Summary

      The static keyword can be confusing because in some situations it controls the location of storage, and in others it controls visibility and linkage of a name.

      With the introduction of C++ namespaces, you have an improved and more flexible alternative to control the proliferation of names in large projects.

      The use of static inside classes is one more way to control names in a program. The names do not clash with global names, and the visibility and access is kept within the program, giving you greater control in the maintenance of your code.

      Exercises

    1. Create a class that holds an array of ints. Set the size of the array using an untagged enumeration inside the class. Add a const int variable, and initialize it in the constructor initializer list. Add a static int member variable and initialize it to a specific value. Add a static member function that prints the static data member. Add an inline constructor and an inline member function called print( ) to print out all the values in the array, and to call the static member function.
    2. In STATDEST.CPP, experiment with the order of constructor and destructor calls by calling f( ) and g( ) inside main( ) in different orders. Does your compiler get it right?
    3. In STATDEST.CPP, test the default error handling of your implementation by turning the original definition of out into an extern declaration and putting the actual definition after the definition of A (whose obj constructor sends information to out). Make sure there's nothing else important running on your machine when you run the program, or that your machine will handle faults robustly.
    4. Create a class with a destructor that prints a message and then calls exit( ). Create a global static object of this class and see what happens.
    5. Modify VOLATILE.CPP from Chapter 6 to make comm::isr( ) something that would actually work as an interrupt service routine.

    11: References & the copy-constructor

    References are a C++ feature that acts like a constant pointer automatically dereferenced by the compiler.

    Although references also exist in Pascal, the C++ version was taken from the Algol language. They are essential in C++ to support the syntax of operator overloading (see Chapter 10), but are also a general convenience to control the way arguments are passed into and out of functions.

    This chapter will first look briefly at the differences between pointers in C and C++, then introduce references. But the bulk of the chapter will delve into a rather confusing issue for the new C++ programmer: the copy-constructor, a special constructor (requiring references) that makes a new object from an existing object of the same type. The copy-constructor is used by the compiler to pass and return objects by value into and out of functions.

    Finally, the somewhat obscure C++ pointer-to-member feature is illuminated.

    Pointers in C++

    The most important difference between pointers in C and in C++ is that C++ is a more strongly typed language. This stands out where void* is concerned. C doesn't let you casually assign a pointer of one type to another, but it does allow you to quietly accomplish this through a void*. Thus,

    bird* b;
    rock* r;
    void* v;
    v = r; // OK in both C and C++
    b = v; // OK in C; error in C++

    C++ doesn't allow this because it leaves a big hole in the type system. The compiler gives you an error message, and if you really want to do it, you must make it explicit, both to the compiler and to the reader, using a cast. (See Chapter 17 for C++'s improved casting syntax.)

    References in C++

    A reference (&) is like a constant pointer that is automatically dereferenced. It is usually used for function argument lists and function return values. But you can also make a free-standing reference. For example,

    int x;
    int& r = x;

    When a reference is created, it must be initialized to a live object. However, you can also say

    const int& q = 12; // const needed to bind to a literal

    Here, the compiler allocates a piece of storage, initializes it with the value 12, and ties the reference to that piece of storage. The point is that any reference must be tied to someone else's piece of storage. When you access a reference, you're accessing that storage. Thus if you say,

    int x = 0;
    int& a = x;
    a++;

    incrementing a is actually incrementing x. Again, the easiest way to think about a reference is as a fancy pointer. One advantage of this pointer is that you never have to wonder whether it's been initialized (the compiler enforces it) or how to dereference it (the compiler does it).

    There are certain rules when using references:

  1. A reference must be initialized when it is created. (Pointers can be initialized at any time.)
  2. Once a reference is initialized to an object, it cannot be changed to refer to another object. (Pointers can be pointed to another object at any time.)
  3. You cannot have NULL references. You must always be able to assume that a reference is connected to a legitimate piece of storage.
    References in functions

    The most common place you'll see references is in function arguments and return values. When a reference is used as a function argument, any modification to the reference inside the function will cause changes to the argument outside the function. Of course, you could do the same thing by passing a pointer, but a reference has much cleaner syntax. (You can think of a reference as nothing more than a syntax convenience, if you want.)

    If you return a reference from a function, you must take the same care as if you return a pointer from a function. Whatever the reference is connected to shouldn't go away when the function returns; otherwise you'll be referring to unknown memory.

    Hereís an example:

    //: C11:Refrnce.cpp
    // Simple C++ references
    
    int* f(int* x) {
      (*x)++;
      return x; // Safe; x is outside this scope
    }
    
    int& g(int& x) {
      x++; // Same effect as in f()
      return x; // Safe; outside this scope
    }
    
    int& h() {
      int q;
    //!  return q;  // Error
      static int x;
      return x; // Safe; x lives outside scope
    }
    
    int main() {
      int A = 0;
      f(&A); // Ugly (but explicit)
      g(A);  // Clean (but hidden)
    } ///:~

    The call to f( ) doesn't have the convenience and cleanliness of using references, but it's clear that an address is being passed. In the call to g( ), an address is being passed (via a reference), but you don't see it.

    const references

    The reference argument in REFRNCE.CPP works only when the argument is a non-const object. If it is a const object, the function g( ) will not accept the argument, which is actually a good thing, because the function does modify the outside argument. If you know the function will respect the constness of an object, making the argument a const reference will allow the function to be used in all situations. This means that, for built-in types, the function will not modify the argument, and for user-defined types the function will call only const member functions and won't modify any public data members.

    The use of const references in function arguments is especially important because your function may receive a temporary object, created as a return value of another function or explicitly by the user of your function. Temporary objects are always const, so if you don't use a const reference, that argument won't be accepted by the compiler. As a very simple example,

    //: C11:Pasconst.cpp
    // Passing references as const
    
    void f(int&) {}
    void g(const int&) {}
    
    int main() {
    //!  f(1); // Error
      g(1);
    } ///:~

    The call to f(1) produces a compiler error because the compiler must first create a reference. It does so by allocating storage for an int, initializing it to one, and producing the address to bind to the reference. The storage must be a const because changing it would make no sense; you can never get your hands on it again. With all temporary objects you must make the same assumption, that they're inaccessible. It's valuable for the compiler to tell you when you're changing such data because the result would be lost information.

    Pointer references

    In C, if you wanted to modify the contents of the pointer rather than what it points to, your function declaration would look like

    void f(int**);

    and youíd have to take the address of the pointer when passing it in:

    int I = 47;
    int* ip = &I;
    f(&ip);

    With references in C++, the syntax is cleaner. The function argument becomes a reference to a pointer, and you no longer have to take the address of that pointer. Thus,

    //: C11:Refptr.cpp
    // Reference to pointer
    #include <iostream>
    using namespace std;
    
    void increment(int*& i) { i++; }
    
    int main() {
      int* i = 0;
      cout << "i = " << i << endl;
      increment(i);
      cout << "i = " << i << endl;
    } ///:~

    By running this program, you'll prove to yourself that the pointer itself is incremented, not what it points to.

    Argument-passing guidelines

    Your normal habit when passing an argument to a function should be to pass by const reference. Although this may at first seem like only an efficiency concern (and you normally don't want to concern yourself with efficiency tuning while you're designing and assembling your program), there's more at stake: as you'll see in the remainder of the chapter, a copy-constructor is required to pass an object by value, and this isn't always available.

    The efficiency savings can be substantial for such a simple habit: passing an argument by value requires a constructor and destructor call, but if you're not going to modify the argument, passing by const reference needs only an address pushed on the stack.

    In fact, virtually the only time passing an address isn't preferable is when you're going to do such damage to an object that passing by value is the only safe approach (rather than modifying the outside object, something the caller doesn't usually expect). This is the subject of the next section.

    The copy-constructor

    Now that you understand the basics of the reference in C++, you're ready to tackle one of the more confusing concepts in the language: the copy-constructor, often called X(X&) ("X of X ref"). This constructor is essential to control passing and returning of user-defined types by value during function calls.

    Passing & returning by value

    To understand the need for the copy-constructor, consider the way C handles passing and returning variables by value during function calls. If you declare a function and make a function call,

    int f(int x, char c);
    int g = f(a, b);

    how does the compiler know how to pass and return those variables? It just knows! The range of the types it must deal with is so small (char, int, float, and double, and their variations) that this information is built into the compiler.

    If you figure out how to generate assembly code with your compiler and determine the statements generated by the function call to f( ), you'll get the equivalent of:

    push b
    push a
    call f()
    add sp,4
    mov g, register a

    This code has been cleaned up significantly to make it generic; the expressions for b and a will be different depending on whether the variables are global (in which case they will be _b and _a) or local (the compiler will index them off the stack pointer). This is also true for the expression for g. The appearance of the call to f( ) will depend on your name-mangling scheme, and "register a" depends on how the CPU registers are named within your assembler. The logic behind the code, however, will remain the same.

    In C and C++, arguments are pushed on the stack from right to left, the function call is made, then the calling code is responsible for cleaning the arguments off the stack (which accounts for the add sp,4). But notice that to pass the arguments by value, the compiler simply pushes copies on the stack; it knows how big they are and that pushing those arguments makes accurate copies of them.

    The return value of f( ) is placed in a register. Again, the compiler knows everything there is to know about the return value type because it's built into the language, so the compiler can return it by placing it in a register. The simple act of copying the bits of the value is equivalent to copying the object.

    Passing & returning large objects

    But now consider user-defined types. If you create a class and you want to pass an object of that class by value, how is the compiler supposed to know what to do? This is no longer a built-in type the compiler writer knows about; it's a type someone created after the compiler was written.

    To investigate this, you can start with a simple structure that is clearly too large to return in registers:

    //: C11:Passtruc.cpp
    // Passing a big structure
    
    struct big {
      char buf[100];
      int i;
      long d;
    } B, B2;
    
    big bigfun(big b) {
      b.i = 100; // Do something to the argument
      return b;
    }
    
    int main() {
      B2 = bigfun(B);
    } ///:~

    Decoding the assembly output is a little more complicated here because most compilers use "helper" functions rather than putting all functionality inline. In main( ), the call to bigfun( ) starts as you might guess: the entire contents of B are pushed on the stack. (Here, you might see some compilers load registers with the address of B and its size, then call a helper function to push it onto the stack.)

    In the previous example, pushing the arguments onto the stack was all that was required before making the function call. In Passtruc.cpp, however, you'll see an additional action: The address of B2 is pushed before making the call, even though it's obviously not an argument. To comprehend what's going on here, you need to understand the constraints on the compiler when it's making a function call.

    Function-call stack frame

    When the compiler generates code for a function call, it first pushes all the arguments on the stack, then makes the call. Inside the function itself, code is generated to move the stack pointer down even further to provide storage for the function's local variables. ("Down" is relative here; your machine may increment or decrement the stack pointer during a push.) But during the assembly-language CALL, the CPU pushes the address in the program code where the function call came from, so the assembly-language RETURN can use that address to return to the calling point. This address is of course sacred, because without it your program will get completely lost. Here's what the stack frame looks like after the CALL and the allocation of local variable storage in the function:

    The code generated for the rest of the function expects the memory to be laid out exactly this way, so it can carefully pick from the function arguments and local variables without touching the return address. I shall call this block of memory, which is everything used by a function in the process of the function call, the function frame.

    You might think it reasonable to try to return values on the stack. The compiler could simply push them, and the function could return an offset to indicate how far down in the stack the return values begin.

    Re-entrancy

    The problem occurs because functions in C and C++ support interrupts; that is, the languages are re-entrant. They also support recursive function calls. This means that at any point in the execution of a program an interrupt can occur without disturbing the program. Of course, the person who writes the interrupt service routine (ISR) is responsible for saving and restoring all the registers he uses, but if the ISR needs to use any memory that's further down on the stack, that must be a safe thing to do. (You can think of an ISR as an ordinary function with no arguments and void return value that saves and restores the CPU state. An ISR function call is triggered by some hardware event rather than an explicit call from within a program.)

    Now imagine what would happen if the called function tried to return values on the stack from an ordinary function. You can't touch any part of the stack that's above the return address, so the function would have to push the values below the return address. But when the assembly-language RETURN is executed, the stack pointer must be pointing to the return address (or right below it, depending on your machine), so right before the RETURN, the function must move the stack pointer up, thus clearing off all its local variables. If you're trying to return values on the stack below the return address, you become vulnerable at that moment because an interrupt could come along. The ISR would move the stack pointer down to hold its return address and its local variables and overwrite your return value.

    To solve this problem, the caller could be responsible for allocating the extra storage on the stack for the return values before calling the function. However, C was not designed this way, and C++ must be compatible. As you'll see shortly, the C++ compiler uses a more efficient scheme.

    Your next idea might be to return the value in some global data area, but this doesn't work either. Re-entrancy means that any function can interrupt any other function, including the same function you're currently inside. Thus, if you put the return value in a global area, you might return into the same function, which would overwrite that return value. The same logic applies to recursion.

    The only safe place to return values is in the registers, so you're back to the problem of what to do when the registers aren't large enough to hold the return value. The answer is to push the address of the return value's destination on the stack as one of the function arguments, and let the function copy the return information directly into the destination. This not only solves all the problems, it's more efficient. It's also the reason that, in Passtruc.cpp, the compiler pushes the address of B2 before the call to bigfun( ) in main( ). If you look at the assembly output for bigfun( ), you can see it expects this hidden argument and performs the copy to the destination inside the function.

    Bitcopy versus initialization

    So far, so good. There's a workable process for passing and returning large simple structures. But notice that all you have is a way to copy the bits from one place to another, which certainly works fine for the primitive way that C looks at variables. But in C++ objects can be much more sophisticated than a patch of bits; they have meaning. This meaning may not respond well to having its bits copied.

    Consider a simple example: a class that knows how many objects of its type exist at any one time. From Chapter 8, you know the way to do this is by including a static data member:

    //: C11:HowMany.cpp
    // Class counts its objects
    #include <fstream>
    using namespace std;
    ofstream out("HowMany.out");
    
    class HowMany {
      static int object_count;
    public:
      HowMany() {
        object_count++;
      }
      static void print(const char* msg = 0) {
        if(msg) out << msg << ": ";
        out << "object_count = "
             << object_count << endl;
      }
      ~HowMany() {
        object_count--;
        print("~HowMany()");
      }
    };
    
    int HowMany::object_count = 0;
    
    // Pass and return BY VALUE:
    HowMany f(HowMany x) {
      x.print("x argument inside f()");
      return x;
    }
    
    int main() {
      HowMany h;
      HowMany::print("after construction of h");
      HowMany h2 = f(h);
      HowMany::print("after call to f()");
    } ///:~

    The class HowMany contains a static int and a static member function print( ) to report the value of that int, along with an optional message argument. The constructor increments the count each time an object is created, and the destructor decrements it.

    The output, however, is not what you would expect:

    after construction of h: object_count = 1

    x argument inside f(): object_count = 1

    ~HowMany(): object_count = 0

    after call to f(): object_count = 0

    ~HowMany(): object_count = -1

    ~HowMany(): object_count = -2

    After h is created, the object count is one, which is fine. But after the call to f( ) you would expect to have an object count of two, because h2 is now in scope as well. Instead, the count is zero, which indicates something has gone horribly wrong. This is confirmed by the fact that the two destructors at the end make the object count go negative, something that should never happen.

    Look at the point inside f( ), which occurs after the argument is passed by value. This means the original object h exists outside the function frame, and there's an additional object inside the function frame, which is the copy that has been passed by value. However, the argument has been passed using C's primitive notion of bitcopying, whereas the C++ HowMany class requires true initialization to maintain its integrity, so the default bitcopy fails to produce the desired effect.

    When the local object goes out of scope at the end of the call to f( ), the destructor is called, which decrements object_count, so outside the function, object_count is zero. The creation of h2 is also performed using a bitcopy, so the constructor isn't called there, either, and when h and h2 go out of scope, their destructors cause the negative values of object_count.

    Copy-construction

    The problem occurs because the compiler makes an assumption about how to create a new object from an existing object. When you pass an object by value, you create a new object, the passed object inside the function frame, from an existing object, the original object outside the function frame. This is also often true when returning an object from a function. In the expression

    HowMany h2 = f(h);

    h2, a previously unconstructed object, is created from the return value of f( ), so again a new object is created from an existing one.

    The compiler's assumption is that you want to perform this creation using a bitcopy, and in many cases this may work fine, but in HowMany it doesn't fly because the meaning of initialization goes beyond simply copying. Another common example occurs if the class contains pointers: what do they point to, and should you copy them, or should they be connected to some new piece of memory?

    Fortunately, you can intervene in this process and prevent the compiler from doing a bitcopy. You do this by defining your own function to be used whenever the compiler needs to make a new object from an existing object. Logically enough, you're making a new object, so this function is a constructor, and also logically enough, the single argument to this constructor has to do with the object you're constructing from. But that object can't be passed into the constructor by value because you're trying to define the function that handles passing by value, and syntactically it doesn't make sense to pass a pointer because, after all, you're creating the new object from an existing object. Here, references come to the rescue, so you take the reference of the source object. This function is called the copy-constructor and is often referred to as X(X&), which is its appearance for a class called X.

    If you create a copy-constructor, the compiler will not perform a bitcopy when creating a new object from an existing one. It will always call your copy-constructor. So, if you don't create a copy-constructor, the compiler will do something sensible, but you have the choice of taking over complete control of the process.

    Now it's possible to fix the problem in HowMany.cpp:

    //: C11:HowMany2.cpp
    // The copy-constructor
    #include <fstream>
    #include <cstring>
    using namespace std;
    ofstream out("HowMany2.out");
    
    class HowMany2 {
      enum { bufsize = 30 };
      char id[bufsize]; // Object identifier
      static int object_count;
    public:
      HowMany2(const char* ID = 0) {
        if(ID) {
          strncpy(id, ID, bufsize - 1);
          id[bufsize - 1] = 0; // Ensure null termination
        } else *id = 0;
        ++object_count;
        print("HowMany2()");
      }
      // The copy-constructor:
      HowMany2(const HowMany2& h) {
        strncpy(id, h.id, bufsize);
        strncat(id, " copy", bufsize - strlen(id) - 1); // Leave room for '\0'
        ++object_count;
        print("HowMany2(HowMany2&)");
      }
      // Can't be static (printing id):
      void print(const char* msg = 0) const {
        if(msg) out << msg << endl;
        out << '\t' << id << ": "
            << "object_count = "
            << object_count << endl;
      }
      ~HowMany2() {
        --object_count;
        print("~HowMany2()");
      }
    };
    
    int HowMany2::object_count = 0;
    
    // Pass and return BY VALUE:
    HowMany2 f(HowMany2 x) {
      x.print("x argument inside f()");
      out << "returning from f()" << endl;
      return x;
    }
    
    int main() {
      HowMany2 h("h");
      out << "entering f()" << endl;
      HowMany2 h2 = f(h);
      h2.print("h2 after call to f()");
      out << "call f(), no return value" << endl;
      f(h);
      out << "after call to f()" << endl;
    } ///:~

    There are a number of new twists thrown in here so you can get a better idea of what's happening. First, the character buffer id acts as an object identifier so you can figure out which object the information is being printed about. In the constructor, you can put an identifier string (usually the name of the object) that is copied to id using the Standard C library function strncpy( ), which only copies a certain number of characters, preventing overrun of the buffer.

    Next is the copy-constructor, HowMany2(HowMany2&). The copy-constructor can create a new object only from an existing one, so the existing object's name is copied to id, followed by the word "copy" so you can see where it came from. Note the use of the Standard C library function strncat( ) to copy a maximum number of characters into id, again to prevent overrunning the end of the buffer.

    Inside the copy-constructor, the object count is incremented just as it is inside the normal constructor. This means you'll now get an accurate object count when passing and returning by value.

    The print( ) function has been modified to print out a message, the object identifier, and the object count. It must now access the id data of a particular object, so it can no longer be a static member function.

    Inside main( ), you can see a second call to f( ) has been added. However, this call uses the common C approach of ignoring the return value. But now that you know how the value is returned (that is, code inside the function handles the return process, putting the result in a destination whose address is passed as a hidden argument), you might wonder what happens when the return value is ignored. The output of the program will throw some illumination on this.

    Before showing the output, here's a little program that uses iostreams to add line numbers to any file:

    //: C11:Linenum.cpp
    // Add line numbers
    #include <fstream>
    #include <strstream>
    #include <cstdlib>
    #include "../require.h"
    using namespace std;
    
    int main(int argc, char* argv[]) {
      requireArgs(argc, 2, "Usage: linenum file\n"
        "Adds line numbers to file");
      strstream text;
      {
        ifstream in(argv[1]);
        assure(in, argv[1]);
        text << in.rdbuf(); // Read in whole file
      } // Close file
      ofstream out(argv[1]); // Overwrite file
      assure(out, argv[1]);
      const int bsz = 100;
      char buf[bsz];
      int line = 0;
      while(text.getline(buf, bsz)) {
        out.setf(ios::right, ios::adjustfield);
        out.width(2);
        out << ++line << ") " << buf << endl;
      }
    } ///:~

    The entire file is read into a strstream (which can be both written to and read from) and the ifstream is closed with scoping. Then an ofstream is created for the same file, overwriting it. getline( ) fetches a line at a time from the strstream and line numbers are added as the line is written back into the file.

    The line numbers are printed right-aligned in a field width of two, so the output still lines up in its original configuration. You can change the program to add an optional second command-line argument that allows the user to select a field width, or you can be more clever and count all the lines in the file to determine the field width automatically.

    When Linenum.cpp is applied to HowMany2.out, the result is

    1) HowMany2()

    2) h: object_count = 1

    3) entering f()

    4) HowMany2(HowMany2&)

    5) h copy: object_count = 2

    6) x argument inside f()

    7) h copy: object_count = 2

    8) returning from f()

    9) HowMany2(HowMany2&)

    10) h copy copy: object_count = 3

    11) ~HowMany2()

    12) h copy: object_count = 2

    13) h2 after call to f()

    14) h copy copy: object_count = 2

    15) call f(), no return value

    16) HowMany2(HowMany2&)

    17) h copy: object_count = 3

    18) x argument inside f()

    19) h copy: object_count = 3

    20) returning from f()

    21) HowMany2(HowMany2&)

    22) h copy copy: object_count = 4

    23) ~HowMany2()

    24) h copy: object_count = 3

    25) ~HowMany2()

    26) h copy copy: object_count = 2

    27) after call to f()

    28) ~HowMany2()

    29) h copy copy: object_count = 1

    30) ~HowMany2()

    31) h: object_count = 0

    As you would expect, the first thing that happens is the normal constructor is called for h, which increments the object count to one. But then, as f( ) is entered, the copy-constructor is quietly called by the compiler to perform the pass-by-value. A new object is created, which is the copy of h (thus the name "h copy") inside the function frame of f( ), so the object count becomes two, courtesy of the copy-constructor.

    Line eight indicates the beginning of the return from f( ). But before the local variable "h copy" can be destroyed (it goes out of scope at the end of the function), it must be copied into the return value, which happens to be h2. A previously unconstructed object (h2) is created from an existing object (the local variable inside f( )), so of course the copy-constructor is used again in line nine. Now the name becomes "h copy copy" for h2's identifier because it's being copied from the copy that is the local object inside f( ). After the object is returned, but before the function ends, the object count becomes temporarily three, but then the local object "h copy" is destroyed. After the call to f( ) completes in line 13, there are only two objects, h and h2, and you can see that h2 did indeed end up as "h copy copy."

    Temporary objects

    Line 15 begins the call to f(h), this time ignoring the return value. You can see in line 16 that the copy-constructor is called just as before to pass the argument in. And also, as before, line 21 shows the copy-constructor is called for the return value. But the copy-constructor must have an address to work on as its destination (a this pointer). Where is the object returned to?

    It turns out the compiler can create a temporary object whenever it needs one to properly evaluate an expression. In this case it creates one you don't even see to act as the destination for the ignored return value of f( ). The lifetime of this temporary object is as short as possible so the landscape doesn't get cluttered up with temporaries waiting to be destroyed, taking up valuable resources. In some cases, the temporary might be immediately passed to another function, but in this case it isn't needed after the function call, so as soon as the function call ends by calling the destructor for the local object (lines 23 and 24), the temporary object is destroyed (lines 25 and 26).

    Now, in lines 28-31, the h2 object is destroyed, followed by h, and the object count goes correctly back to zero.

    Default copy-constructor

    Because the copy-constructor implements pass and return by value, it's important that the compiler create one for you in the case of simple structures; effectively, it does the same thing it does in C. However, all you've seen so far is the default primitive behavior: a bitcopy.

    When more complex types are involved, the C++ compiler will still automatically create a copy-constructor if you don't make one. Again, however, a bitcopy doesn't make sense, because it doesn't necessarily implement the proper meaning.

    Here's an example to show the more intelligent approach the compiler takes. Suppose you create a new class composed of objects of several existing classes. This is called, appropriately enough, composition, and it's one of the ways you can make new classes from existing classes. Now take the role of a naive user who's trying to solve a problem quickly by creating the new class this way. You don't know about copy-constructors, so you don't create one. The example demonstrates what the compiler does while creating the default copy-constructor for your new class:

    //: C11:Autocc.cpp
    // Automatic copy-constructor
    #include <iostream>
    #include <cstring>
    using namespace std;
    
    class WithCC { // With copy-constructor
    public:
      // Explicit default constructor required:
      WithCC() {}
      WithCC(const WithCC&) {
        cout << "WithCC(WithCC&)" << endl;
      }
    };
    
    class WoCC { // Without copy-constructor
      enum { bsz = 30 };
      char buf[bsz];
    public:
      WoCC(const char* msg = 0) {
        memset(buf, 0, bsz);
        if(msg) strncpy(buf, msg, bsz);
      }
      void print(const char* msg = 0) const {
        if(msg) cout << msg << ": ";
        cout << buf << endl;
      }
    };
    
    class Composite {
      WithCC withcc; // Embedded objects
      WoCC wocc;
    public:
      Composite() : wocc("Composite()") {}
      void print(const char* msg = 0) {
        wocc.print(msg);
      }
    };
    
    int main() {
      Composite c;
      c.print("contents of c");
      cout << "calling Composite copy-constructor"
           << endl;
      Composite c2 = c;  // Calls copy-constructor
      c2.print("contents of c2");
    } ///:~

    The class WithCC contains a copy-constructor, which simply announces it has been called, and this brings up an interesting issue. In the class Composite, an object of WithCC is created using a default constructor. If there were no constructors at all in WithCC, the compiler would automatically create a default constructor, which would do nothing in this case. However, if you add a copy-constructor, you've told the compiler you're going to handle constructor creation, so it no longer creates a default constructor for you and will complain unless you explicitly create a default constructor as was done for WithCC.

    The class WoCC has no copy-constructor, but its constructor will store a message in an internal buffer that can be printed out using print( ). This constructor is explicitly called in Composite's constructor initializer list (briefly introduced in Chapter 6 and covered fully in Chapter 12). The reason for this becomes apparent later.

    The class Composite has member objects of both WithCC and WoCC (note the embedded object wocc is initialized in the constructor-initializer list, as it must be), and no explicitly defined copy-constructor. However, in main( ) an object is created using the copy-constructor in the definition:

    Composite c2 = c;

    The copy-constructor for Composite is created automatically by the compiler, and the output of the program reveals how it is created.

    To create a copy-constructor for a class that uses composition (and inheritance, which is introduced in Chapter 12), the compiler recursively calls the copy-constructors for all the member objects and base classes. That is, if the member object also contains another object, its copy-constructor is also called. So in this case, the compiler calls the copy-constructor for WithCC. The output shows this constructor being called. Because WoCC has no copy-constructor, the compiler creates one for it, which is the default behavior of a bitcopy, and calls that inside the Composite copy-constructor. The call to Composite::print( ) in main shows that this happens because the contents of c2.wocc are identical to the contents of c.wocc. The process the compiler goes through to synthesize a copy-constructor is called memberwise initialization.

    It's best to always create your own copy-constructor rather than letting the compiler do it for you. This guarantees it will be under your control.

    Alternatives to copy-construction

    At this point your head may be swimming, and you might be wondering how you could have possibly written a functional class without knowing about the copy-constructor. But remember: You need a copy-constructor only if you're going to pass an object of your class by value. If that never happens, you don't need a copy-constructor.

    Preventing pass-by-value

    "But," you say, "if I don't make a copy-constructor, the compiler will create one for me. So how do I know that an object will never be passed by value?"

    There's a simple technique for preventing pass-by-value: Declare a private copy-constructor. You don't even need to create a definition, unless one of your member functions or a friend function needs to perform a pass-by-value. If the user tries to pass or return the object by value, the compiler will produce an error message because the copy-constructor is private. It can no longer create a default copy-constructor because you've explicitly stated you're taking over that job.

    Here's an example:

    //: C11:Stopcc.cpp
    // Preventing copy-construction
    
    class NoCC {
      int i;
      NoCC(const NoCC&); // No definition
    public:
      NoCC(int I = 0) : i(I) {}
    };
    
    void f(NoCC);
    
    int main() {
      NoCC n;
    //! f(n); // Error: copy-constructor called
    //! NoCC n2 = n; // Error: c-c called
    //! NoCC n3(n); // Error: c-c called
    } ///:~

    Notice the use of the more general form

    NoCC(const NoCC&);

    using the const.

    Functions that modify outside objects

    Reference syntax is nicer to use than pointer syntax, yet it clouds the meaning for the reader. For example, in the iostreams library one overloaded version of the get( ) function takes a char& as an argument, and the whole point of the function is to modify its argument by inserting the result of the get( ). However, when you read code using this function, it's not immediately obvious that the outside object is being modified:

    char c;

    cin.get(c);

    Instead, the function call looks like a pass-by-value, which suggests the outside object is not modified.

    Because of this, it's probably safer from a code maintenance standpoint to use pointers when you're passing the address of an argument to modify. If you always pass addresses as const references, and switch to a non-const pointer only when you intend to modify the outside object through that address, your code is far easier for the reader to follow.

    Pointers to members

    A pointer is a variable that holds the address of some location, which can be either data or a function, so you can change what a pointer selects at run-time. The C++ pointer-to-member follows this same concept, except that what it selects is a location inside a class. The dilemma here is that all a pointer needs is an address, but there is no "address" inside a class; selecting a member of a class means offsetting into that class. You can't produce an actual address until you combine that offset with the starting address of a particular object. The syntax of pointers to members requires that you select an object at the same time you're dereferencing the pointer to member.

    To understand this syntax, consider a simple structure:

    struct simple { int a; };

    If you have a pointer sp and an object so for this structure, you can select members by saying

    sp->a;

    so.a;

    Now suppose you have an ordinary pointer to an integer, ip. To access what ip is pointing to, you dereference the pointer with a *:

    *ip = 4;

    Finally, consider what happens if you have a pointer that happens to point to something inside a class object, even if it does in fact represent an offset into the object. To access what it's pointing at, you must dereference it with *. But it's an offset into an object, so you must also refer to that particular object. Thus, the * is combined with the object dereferencing. As an example using the simple class,

    sp->*pm = 47;

    so.*pm = 47;

    So the new syntax becomes ->* for a pointer to an object, and .* for the object or a reference. Now, what is the syntax for defining pm? Like any pointer, you have to say what type it's pointing at, and you use a * in the definition. The only difference is you must say what class of objects this pointer-to-member is used with. Of course, this is accomplished with the name of the class and the scope resolution operator. Thus,

    int simple::*pm;

    You can also initialize the pointer-to-member when you define it (or any other time):

    int simple::*pm = &simple::a;

    There is actually no "address" of simple::a because you're just referring to the class and not an object of that class. Thus, &simple::a can be used only as pointer-to-member syntax.

    Functions

    A similar exercise produces the pointer-to-member syntax for member functions. A pointer to a function is defined like this:

    int (*fp)(float);

    The parentheses around (*fp) are necessary to force the compiler to evaluate the definition properly. Without them this would appear to be a function that returns an int*.

    To define and use a pointer to a member function, parentheses play a similarly important role. If you have a function inside a structure:

    struct simple2 { int f(float); };

    you define a pointer to that member function by inserting the class name and scope resolution operator into an ordinary function pointer definition:

    int (simple2::*fp)(float);

    You can also initialize it when you create it, or at any other time:

    int (simple2::*fp)(float) = &simple2::f;

    Unlike ordinary functions, the & is not optional here. Standard C++ requires an explicit & and the fully qualified name when taking the address of a member function:

    fp = &simple2::f;

    An example

    The value of a pointer is that you can change what it points to at run-time, which provides an important flexibility in your programming because through a pointer you can select or change behavior at run-time. A pointer-to-member is no different; it allows you to choose a member at run-time. Typically, your classes will have only member functions publicly visible (data members are usually considered part of the underlying implementation), so the following example selects member functions at run-time.

    //: C11:Pmem.cpp
    // Pointers to members
    
    class Widget {
    public:
      void f(int);
      void g(int);
      void h(int);
      void i(int);
    };
    
    void Widget::h(int) {}
    
    int main() {
      Widget w;
      Widget* wp = &w;
      void (Widget::*pmem)(int) = &Widget::h;
      (w.*pmem)(1);
      (wp->*pmem)(2);
    } ///:~

    Of course, it isn't particularly reasonable to expect the casual user to create such complicated expressions. If the user must directly manipulate a pointer-to-member, then a typedef is in order. To really clean things up, you can use the pointer-to-member as part of the internal implementation mechanism. Here's the preceding example using a pointer-to-member inside the class. All the user needs to do is pass a number in to select a function.

    //: C11:Pmem2.cpp
    // Pointers to members
    #include <iostream>
    using namespace std;
    
    class Widget {
      void f(int) const {cout << "Widget::f()\n";}
      void g(int) const {cout << "Widget::g()\n";}
      void h(int) const {cout << "Widget::h()\n";}
      void i(int) const {cout << "Widget::i()\n";}
      enum { count = 4 };
      void (Widget::*fptr[count])(int) const;
    public:
      Widget() {
        fptr[0] = &Widget::f; // Full spec required
        fptr[1] = &Widget::g;
        fptr[2] = &Widget::h;
        fptr[3] = &Widget::i;
      }
      void select(int I, int J) {
        if(I < 0 || I >= count) return;
        (this->*fptr[I])(J);
      }
      int Count() { return count; }
    };
    
    int main() {
      Widget w;
      for(int i = 0; i < w.Count(); i++)
        w.select(i, 47);
    } ///:~

    In the class interface and in main( ), you can see that the entire implementation, including the functions themselves, has been hidden away. The code must even ask for the Count( ) of functions. This way, the class implementor can change the quantity of functions in the underlying implementation without affecting the code where the class is used.

    The initialization of the pointers-to-members in the constructor may seem overspecified. Shouldn't you be able to say

    fptr[1] = &g;

    because the name g occurs in the member function, which is automatically in the scope of the class? The problem is that this doesn't conform to the pointer-to-member syntax, which is required so everyone, especially the compiler, can figure out what's going on. Similarly, when the pointer-to-member is dereferenced, it seems like

    (this->*fptr[I])(J);

    is also over-specified; the this looks redundant. The syntax, however, requires that a pointer-to-member always be bound to an object when it is dereferenced.

    Summary

    Pointers in C++ are remarkably similar to pointers in C, which is good; otherwise a lot of C code wouldn't compile properly under C++. The only compiler errors you'll see occur where dangerous assignments are made. If those assignments are in fact what you intend, the errors can be removed with a simple (and explicit!) cast.

    C++ also adds the reference from Algol and Pascal, which is like a constant pointer that is automatically dereferenced by the compiler. A reference holds an address, but you treat it like an object. References are essential for clean syntax with operator overloading (the subject of the next chapter), but they also add syntactic convenience for passing and returning objects for ordinary functions.

    The copy-constructor takes a reference to an existing object of the same type as its argument, and it is used to create a new object from an existing one. The compiler automatically calls the copy-constructor when you pass or return an object by value. Although the compiler will automatically create a copy-constructor for you, if you think one will be needed for your class you should always define it yourself to ensure that the proper behavior occurs. If you don't want the object passed or returned by value, you should create a private copy-constructor.

    Pointers-to-members have the same functionality as ordinary pointers: You can choose a particular region of storage (data or function) at run-time. Pointers-to-members just happen to work with class members rather than global data or functions. You get the programming flexibility that allows you to change behavior at run-time.

    Exercises

  5. Create a function that takes a char& argument and modifies that argument. In main( ), print out a char variable, call your function for that variable, and print it out again to prove to yourself it has been changed. How does this affect program readability?
  6. Write a class with a copy-constructor that announces itself to cout. Now create a function that passes an object of your new class in by value and another one that creates a local object of your new class and returns it by value. Call these functions to prove to yourself that the copy-constructor is indeed quietly called when passing and returning objects by value.
  7. Discover how to get your compiler to generate assembly language, and produce assembly for PASSTRUC.CPP. Trace through and demystify the way your compiler generates code to pass and return large structures.
  8. (Advanced) This exercise creates an alternative to using the copy-constructor. Create a class X and declare (but don't define) a private copy-constructor. Make a public clone( ) function as a const member function that returns a copy of the object created using new (a forward reference to Chapter 11). Now create a function that takes as an argument a const X& and clones a local copy that can be modified. The drawback to this approach is that you are responsible for explicitly destroying the cloned object (using delete) when you're done with it.
    12: Operator overloading

    Operator overloading is just "syntactic sugar," which means it is simply another way for you to make a function call. The difference is that the arguments for this function don't appear inside parentheses, but instead surround or sit next to characters you've always thought of as immutable operators.

    But in C++, it's possible to define new operators that work with classes. This definition is just like an ordinary function definition except that the name of the function begins with the keyword operator and ends with the operator itself. That's the only difference, and it becomes a function like any other function, which the compiler calls when it sees the appropriate pattern.

    Warning & reassurance

    It's very tempting to become overenthusiastic with operator overloading. It's a fun toy, at first. But remember it's only syntactic sugar, another way of calling a function. Looking at it this way, you have no reason to overload an operator except that it will make the code involving your class easier to write and especially to read. (Remember, code is read much more than it is written.) If this isn't the case, don't bother.

    Another common response to operator overloading is panic: suddenly, C operators have no familiar meaning anymore. "Everything's changed and all my C code will do different things!" This isn't true. All the operators used in expressions that contain only built-in data types cannot be changed. You can never overload operators such that

    1 << 4;

    behaves differently, or

    1.414 << 2;

    has meaning. Only an expression containing a user-defined type can have an overloaded operator.

    Syntax

    Defining an overloaded operator is like defining a function, but the name of that function is operator@, where @ represents the operator. The number of arguments in the function argument list depends on two factors:

  1. Whether it's a unary (one argument) or binary (two argument) operator.
  2. Whether the operator is defined as a global function (one argument for unary, two for binary) or a member function (zero arguments for unary, one for binary; the object becomes the left-hand argument).

    Here's a small class that shows the syntax for operator overloading:

    //: C12:Opover.cpp
    // Operator overloading syntax
    #include <iostream>
    using namespace std;
    
    class Integer {
      int i;
    public:
      Integer(int I) { i = I; }
      const Integer
      operator+(const Integer& rv) const {
        cout << "operator+" << endl;
        return Integer(i + rv.i);
      }
      Integer&
      operator+=(const Integer& rv){
        cout << "operator+=" << endl;
        i += rv.i;
        return *this;
      }
    };
    
    int main() {
      cout << "built-in types:" << endl;
      int i = 1, j = 2, k = 3;
      k += i + j;
      cout << "user-defined types:" << endl;
      Integer I(1), J(2), K(3);
      K += I + J;
    } ///:~

    The two overloaded operators are defined as inline member functions that announce when they are called. The single argument is what appears on the right-hand side of the operator for binary operators. Unary operators have no arguments when defined as member functions. The member function is called for the object on the left-hand side of the operator.

    For nonconditional operators (conditionals usually return a Boolean value) you'll almost always want to return an object or reference of the same type you're operating on if the two arguments are the same type. If they're not, the interpretation of what it should produce is up to you. This way complex expressions can be built up:

    K += I + J;

    The operator+ produces a new Integer (a temporary) that is used as the rv argument for the operator+=. This temporary is destroyed as soon as it is no longer needed.

    Overloadable operators

    Although you can overload almost all the operators available in C, the use is fairly restrictive. In particular, you cannot combine operators that currently have no meaning in C (such as ** to represent exponentiation), you cannot change the evaluation precedence of operators, and you cannot change the number of arguments an operator takes. This makes sense: all these actions would produce operators that confuse meaning rather than clarify it.

    The next two subsections give examples of all the "regular" operators, overloaded in the form that you'll most likely use.

    Unary operators

    The following example shows the syntax to overload all the unary operators, both in the form of global functions and member functions. These expand upon the Integer class shown previously and add a new Byte class. The meaning of your particular operators will depend on the way you want to use them, but consider the client programmer before doing something unexpected.

    //: C12:Unary.cpp
    // Overloading unary operators
    #include <iostream>
    using namespace std;
    
    class Integer {
      long i;
      Integer* This() { return this; }
    public:
      Integer(long I = 0) : i(I) {}
      // No side effects: takes const& argument:
      friend const Integer&
        operator+(const Integer& a);
      friend const Integer
        operator-(const Integer& a);
      friend const Integer
        operator~(const Integer& a);
      friend Integer*
        operator&(Integer& a);
      friend int
        operator!(const Integer& a);
      // Side effects don't take const& argument:
      // Prefix:
      friend const Integer&
        operator++(Integer& a);
      // Postfix:
      friend const Integer
        operator++(Integer& a, int);
      // Prefix:
      friend const Integer&
        operator--(Integer& a);
      // Postfix:
      friend const Integer
        operator--(Integer& a, int);
    };
    
    // Global operators:
    const Integer& operator+(const Integer& a) {
      cout << "+Integer\n";
      return a; // Unary + has no effect
    }
    const Integer operator-(const Integer& a) {
      cout << "-Integer\n";
      return Integer(-a.i);
    }
    const Integer operator~(const Integer& a) {
      cout << "~Integer\n";
      return Integer(~a.i);
    }
    Integer* operator&(Integer& a) {
      cout << "&Integer\n";
      return a.This(); // &a is recursive!
    }
    int operator!(const Integer& a) {
      cout << "!Integer\n";
      return !a.i;
    }
    // Prefix; return incremented value
    const Integer& operator++(Integer& a) {
      cout << "++Integer\n";
      a.i++;
      return a;
    }
    // Postfix; return the value before increment:
    const Integer operator++(Integer& a, int) {
      cout << "Integer++\n";
      Integer r(a.i);
      a.i++;
      return r;
    }
    // Prefix; return decremented value
    const Integer& operator--(Integer& a) {
      cout << "--Integer\n";
      a.i--;
      return a;
    }
    // Postfix; return the value before decrement:
    const Integer operator--(Integer& a, int) {
      cout << "Integer--\n";
      Integer r(a.i);
      a.i--;
      return r;
    }
    
    void f(Integer a) {
      +a;
      -a;
      ~a;
      Integer* ip = &a;
      !a;
      ++a;
      a++;
      --a;
      a--;
    }
    
    // Member operators (implicit "this"):
    class Byte {
      unsigned char b;
    public:
      Byte(unsigned char B = 0) : b(B) {}
      // No side effects: const member function:
      const Byte& operator+() const {
        cout << "+Byte\n";
        return *this;
      }
      const Byte operator-() const {
        cout << "-Byte\n";
        return Byte(-b);
      }
      const Byte operator~() const {
        cout << "~Byte\n";
        return Byte(~b);
      }
      Byte operator!() const {
        cout << "!Byte\n";
        return Byte(!b);
      }
      Byte* operator&() {
        cout << "&Byte\n";
        return this;
      }
      // Side effects: non-const member function:
      const Byte& operator++() { // Prefix
        cout << "++Byte\n";
        b++;
        return *this;
      }
      const Byte operator++(int) { // Postfix
        cout << "Byte++\n";
        Byte before(b);
        b++;
        return before;
      }
      const Byte& operator--() { // Prefix
        cout << "--Byte\n";
        --b;
        return *this;
      }
      const Byte operator--(int) { // Postfix
        cout << "Byte--\n";
        Byte before(b);
        --b;
        return before;
      }
    };
    
    void g(Byte b) {
      +b;
      -b;
      ~b;
      Byte* bp = &b;
      !b;
      ++b;
      b++;
      --b;
      b--;
    }
    
    int main() {
      Integer a;
      f(a);
      Byte b;
      g(b);
    } ///:~

    The functions are grouped according to the way their arguments are passed. Guidelines for how to pass and return arguments are given later. The above forms (and the ones that follow in the next section) are typically what youíll use, so start with them as a pattern when overloading your own operators.

    Increment & decrement

    The overloaded ++ and -- operators present a dilemma because you want to be able to call different functions depending on whether they appear before (prefix) or after (postfix) the object they're acting upon. The solution is simple, but some people find it a bit confusing at first. When the compiler sees, for example, ++a (a pre-increment), it generates a call to operator++(a); but when it sees a++, it generates a call to operator++(a, int). That is, the compiler differentiates between the two forms by making calls to different overloaded functions. In Unary.cpp, for the member function versions, if the compiler sees ++b it generates a call to Byte::operator++( ); and if it sees b++ it calls Byte::operator++(int).

    The user never sees any difference except that a different function gets called for the prefix and postfix versions. Underneath, however, the two function calls have different signatures, so they link to two different function bodies. The compiler passes a dummy constant value for the int argument (which is never given an identifier because the value is never used) to generate the different signature for the postfix version.

    Binary operators

    The following listing repeats the example of Unary.cpp for binary operators. Both global versions and member function versions are shown.

    //: C12:Binary.cpp
    // Overloading binary operators
    #include <fstream>
    #include "../require.h"
    using namespace std;
    
    ofstream out("binary.out");
    
    class Integer { // Combine this with UNARY.CPP
      long i;
    public:
      Integer(long I = 0) : i(I) {}
      // Operators that create new, modified value:
      friend const Integer
        operator+(const Integer& left,
                  const Integer& right);
      friend const Integer
        operator-(const Integer& left,
                  const Integer& right);
      friend const Integer
        operator*(const Integer& left,
                  const Integer& right);
      friend const Integer
        operator/(const Integer& left,
                  const Integer& right);
      friend const Integer
        operator%(const Integer& left,
                  const Integer& right);
      friend const Integer
        operator^(const Integer& left,
                  const Integer& right);
      friend const Integer
        operator&(const Integer& left,
                  const Integer& right);
      friend const Integer
        operator|(const Integer& left,
                  const Integer& right);
      friend const Integer
        operator<<(const Integer& left,
                   const Integer& right);
      friend const Integer
        operator>>(const Integer& left,
                   const Integer& right);
      // Assignments modify & return lvalue:
      friend Integer&
        operator+=(Integer& left,
                   const Integer& right);
      friend Integer&
        operator-=(Integer& left,
                   const Integer& right);
      friend Integer&
        operator*=(Integer& left,
                   const Integer& right);
      friend Integer&
        operator/=(Integer& left,
                   const Integer& right);
      friend Integer&
        operator%=(Integer& left,
                   const Integer& right);
      friend Integer&
        operator^=(Integer& left,
                   const Integer& right);
      friend Integer&
        operator&=(Integer& left,
                   const Integer& right);
      friend Integer&
        operator|=(Integer& left,
                   const Integer& right);
      friend Integer&
        operator>>=(Integer& left,
                    const Integer& right);
      friend Integer&
        operator<<=(Integer& left,
                    const Integer& right);
      // Conditional operators return true/false:
      friend int
        operator==(const Integer& left,
                   const Integer& right);
      friend int
        operator!=(const Integer& left,
                   const Integer& right);
      friend int
        operator<(const Integer& left,
                  const Integer& right);
      friend int
        operator>(const Integer& left,
                  const Integer& right);
      friend int
        operator<=(const Integer& left,
                   const Integer& right);
      friend int
        operator>=(const Integer& left,
                   const Integer& right);
      friend int
        operator&&(const Integer& left,
                   const Integer& right);
      friend int
        operator||(const Integer& left,
                   const Integer& right);
      // Write the contents to an ostream:
      void print(ostream& os) const { os << i; }
    };
    
    const Integer
      operator+(const Integer& left,
                const Integer& right) {
      return Integer(left.i + right.i);
    }
    const Integer
      operator-(const Integer& left,
                const Integer& right) {
      return Integer(left.i - right.i);
    }
    const Integer
      operator*(const Integer& left,
                const Integer& right) {
      return Integer(left.i * right.i);
    }
    const Integer
      operator/(const Integer& left,
                const Integer& right) {
      require(right.i != 0, "divide by zero");
      return Integer(left.i / right.i);
    }
    const Integer
      operator%(const Integer& left,
                const Integer& right) {
      require(right.i != 0, "modulo by zero");
      return Integer(left.i % right.i);
    }
    const Integer
      operator^(const Integer& left,
                const Integer& right) {
      return Integer(left.i ^ right.i);
    }
    const Integer
      operator&(const Integer& left,
                const Integer& right) {
      return Integer(left.i & right.i);
    }
    const Integer
      operator|(const Integer& left,
                const Integer& right) {
      return Integer(left.i | right.i);
    }
    const Integer
      operator<<(const Integer& left,
                 const Integer& right) {
      return Integer(left.i << right.i);
    }
    const Integer
      operator>>(const Integer& left,
                 const Integer& right) {
      return Integer(left.i >> right.i);
    }
    // Assignments modify & return lvalue:
    Integer& operator+=(Integer& left,
                        const Integer& right) {
       if(&left == &right) {/* self-assignment */}
       left.i += right.i;
       return left;
    }
    Integer& operator-=(Integer& left,
                        const Integer& right) {
       if(&left == &right) {/* self-assignment */}
       left.i -= right.i;
       return left;
    }
    Integer& operator*=(Integer& left,
                        const Integer& right) {
       if(&left == &right) {/* self-assignment */}
       left.i *= right.i;
       return left;
    }
    Integer& operator/=(Integer& left,
                        const Integer& right) {
       require(right.i != 0, "divide by zero");
       if(&left == &right) {/* self-assignment */}
       left.i /= right.i;
       return left;
    }
    Integer& operator%=(Integer& left,
                        const Integer& right) {
       require(right.i != 0, "modulo by zero");
       if(&left == &right) {/* self-assignment */}
       left.i %= right.i;
       return left;
    }
    Integer& operator^=(Integer& left,
                        const Integer& right) {
       if(&left == &right) {/* self-assignment */}
       left.i ^= right.i;
       return left;
    }
    Integer& operator&=(Integer& left,
                        const Integer& right) {
       if(&left == &right) {/* self-assignment */}
       left.i &= right.i;
       return left;
    }
    Integer& operator|=(Integer& left,
                        const Integer& right) {
       if(&left == &right) {/* self-assignment */}
       left.i |= right.i;
       return left;
    }
    Integer& operator>>=(Integer& left,
                         const Integer& right) {
       if(&left == &right) {/* self-assignment */}
       left.i >>= right.i;
       return left;
    }
    Integer& operator<<=(Integer& left,
                         const Integer& right) {
       if(&left == &right) {/* self-assignment */}
       left.i <<= right.i;
       return left;
    }
    // Conditional operators return true/false:
    int operator==(const Integer& left,
                   const Integer& right) {
        return left.i == right.i;
    }
    int operator!=(const Integer& left,
                   const Integer& right) {
        return left.i != right.i;
    }
    int operator<(const Integer& left,
                  const Integer& right) {
        return left.i < right.i;
    }
    int operator>(const Integer& left,
                  const Integer& right) {
        return left.i > right.i;
    }
    int operator<=(const Integer& left,
                   const Integer& right) {
        return left.i <= right.i;
    }
    int operator>=(const Integer& left,
                   const Integer& right) {
        return left.i >= right.i;
    }
    int operator&&(const Integer& left,
                   const Integer& right) {
        return left.i && right.i;
    }
    int operator||(const Integer& left,
                   const Integer& right) {
        return left.i || right.i;
    }
    
    void h(Integer& c1, Integer& c2) {
      // A complex expression:
      c1 += c1 * c2 + c2 % c1;
      #define TRY(op) \
      out << "c1 = "; c1.print(out); \
      out << ", c2 = "; c2.print(out); \
      out << ";  c1 " #op " c2 produces "; \
      (c1 op c2).print(out); \
      out << endl;
      TRY(+) TRY(-) TRY(*) TRY(/)
      TRY(%) TRY(^) TRY(&) TRY(|)
      TRY(<<) TRY(>>) TRY(+=) TRY(-=)
      TRY(*=) TRY(/=) TRY(%=) TRY(^=)
      TRY(&=) TRY(|=) TRY(>>=) TRY(<<=)
      // Conditionals:
      #define TRYC(op) \
      out << "c1 = "; c1.print(out); \
      out << ", c2 = "; c2.print(out); \
      out << ";  c1 " #op " c2 produces "; \
      out << (c1 op c2); \
      out << endl;
      TRYC(<) TRYC(>) TRYC(==) TRYC(!=) TRYC(<=)
      TRYC(>=) TRYC(&&) TRYC(||)
    }
    
    // Member operators (implicit "this"):
    class Byte { // Combine this with UNARY.CPP
      unsigned char b;
    public:
      Byte(unsigned char B = 0) : b(B) {}
      // No side effects: const member function:
      const Byte
        operator+(const Byte& right) const {
        return Byte(b + right.b);
      }
      const Byte
        operator-(const Byte& right) const {
        return Byte(b - right.b);
      }
      const Byte
        operator*(const Byte& right) const {
        return Byte(b * right.b);
      }
      const Byte
        operator/(const Byte& right) const {
        require(right.b != 0, "divide by zero");
        return Byte(b / right.b);
      }
      const Byte
        operator%(const Byte& right) const {
        require(right.b != 0, "modulo by zero");
        return Byte(b % right.b);
      }
      const Byte
        operator^(const Byte& right) const {
        return Byte(b ^ right.b);
      }
      const Byte
        operator&(const Byte& right) const {
        return Byte(b & right.b);
      }
      const Byte
        operator|(const Byte& right) const {
        return Byte(b | right.b);
      }
      const Byte
        operator<<(const Byte& right) const {
        return Byte(b << right.b);
      }
      const Byte
        operator>>(const Byte& right) const {
        return Byte(b >> right.b);
      }
      // Assignments modify & return lvalue.
      // operator= can only be a member function:
      Byte& operator=(const Byte& right) {
        // Handle self-assignment:
        if(this == &right) return *this;
        b = right.b;
        return *this;
      }
      Byte& operator+=(const Byte& right) {
        if(this == &right) {/* self-assignment */}
        b += right.b;
        return *this;
      }
      Byte& operator-=(const Byte& right) {
        if(this == &right) {/* self-assignment */}
        b -= right.b;
        return *this;
      }
      Byte& operator*=(const Byte& right) {
        if(this == &right) {/* self-assignment */}
        b *= right.b;
        return *this;
      }
      Byte& operator/=(const Byte& right) {
        require(right.b != 0, "divide by zero");
        if(this == &right) {/* self-assignment */}
        b /= right.b;
        return *this;
      }
      Byte& operator%=(const Byte& right) {
        require(right.b != 0, "modulo by zero");
        if(this == &right) {/* self-assignment */}
        b %= right.b;
        return *this;
      }
      Byte& operator^=(const Byte& right) {
        if(this == &right) {/* self-assignment */}
        b ^= right.b;
        return *this;
      }
      Byte& operator&=(const Byte& right) {
        if(this == &right) {/* self-assignment */}
        b &= right.b;
        return *this;
      }
      Byte& operator|=(const Byte& right) {
        if(this == &right) {/* self-assignment */}
        b |= right.b;
        return *this;
      }
      Byte& operator>>=(const Byte& right) {
        if(this == &right) {/* self-assignment */}
        b >>= right.b;
        return *this;
      }
      Byte& operator<<=(const Byte& right) {
        if(this == &right) {/* self-assignment */}
        b <<= right.b;
        return *this;
      }
      // Conditional operators return true/false:
      int operator==(const Byte& right) const {
          return b == right.b;
      }
      int operator!=(const Byte& right) const {
          return b != right.b;
      }
      int operator<(const Byte& right) const {
          return b < right.b;
      }
      int operator>(const Byte& right) const {
          return b > right.b;
      }
      int operator<=(const Byte& right) const {
          return b <= right.b;
      }
      int operator>=(const Byte& right) const {
          return b >= right.b;
      }
      int operator&&(const Byte& right) const {
          return b && right.b;
      }
      int operator||(const Byte& right) const {
          return b || right.b;
      }
      // Write the contents to an ostream:
      void print(ostream& os) const {
        os << "0x" << hex << int(b) << dec;
      }
    };
    
    void k(Byte& b1, Byte& b2) {
      b1 = b1 * b2 + b2 % b1;
    
      #define TRY2(op) \
      out << "b1 = "; b1.print(out); \
      out << ", b2 = "; b2.print(out); \
      out << ";  b1 " #op " b2 produces "; \
      (b1 op b2).print(out); \
      out << endl;
    
      b1 = 9; b2 = 47;
      TRY2(+) TRY2(-) TRY2(*) TRY2(/)
      TRY2(%) TRY2(^) TRY2(&) TRY2(|)
      TRY2(<<) TRY2(>>) TRY2(+=) TRY2(-=)
      TRY2(*=) TRY2(/=) TRY2(%=) TRY2(^=)
      TRY2(&=) TRY2(|=) TRY2(>>=) TRY2(<<=)
      TRY2(=) // Assignment operator
    
      // Conditionals:
      #define TRYC2(op) \
      out << "b1 = "; b1.print(out); \
      out << ", b2 = "; b2.print(out); \
      out << ";  b1 " #op " b2 produces "; \
      out << (b1 op b2); \
      out << endl;
    
      b1 = 9; b2 = 47;
      TRYC2(<) TRYC2(>) TRYC2(==) TRYC2(!=) TRYC2(<=)
      TRYC2(>=) TRYC2(&&) TRYC2(||)
    
      // Chained assignment:
      Byte b3 = 92;
      b1 = b2 = b3;
    }
    
    int main() {
      Integer c1(47), c2(9);
      h(c1, c2);
      out << "\n member functions:" << endl;
      Byte b1(47), b2(9);
      k(b1, b2);
    } ///:~

    You can see that operator= is only allowed to be a member function. This is explained later.

    Notice that all the assignment operators check for self-assignment; this is a general guideline. In some cases the check is not necessary; for example, with operator+= you may want to say A += A and have it add A to itself. The most important place to check for self-assignment is operator=, because with complicated objects disastrous results may occur. (In some cases it's OK, but you should always keep it in mind when writing operator=.)

    All of the operators shown in the previous two examples are overloaded to handle a single type. It's also possible to overload operators to handle mixed types, so you can add apples to oranges, for example. Before you start on an exhaustive overloading of operators, however, you should look at the section on automatic type conversion later in this chapter. Often, a type conversion in the right place can save you a lot of overloaded operators.

    Arguments & return values

    It may seem a little confusing at first when you look at Unary.cpp and Binary.cpp and see all the different ways that arguments are passed and returned. Although you can pass and return arguments any way you want to, the choices in these examples were not selected at random. They follow a very logical pattern, the same one you'll want to use in most of your choices.

  1. As with any function argument, if you only need to read from the argument and not change it, default to passing it as a const reference. Ordinary arithmetic operations (like + and -, etc.) and Booleans will not change their arguments, so passing by const reference is predominantly what you'll use. When the function is a class member, this translates to making it a const member function. Only with the operator-assignments (like +=) and operator=, which change the left-hand argument, is the left argument not a constant, but it's still passed in as an address because it will be changed.
  2. The type of return value you should select depends on the expected meaning of the operator. (Again, you can do anything you want with the arguments and return values.) If the effect of the operator is to produce a new value, you will need to generate a new object as the return value. For example, Integer::operator+ must produce an Integer object that is the sum of the operands. This object is returned by value as a const, so the result cannot be modified as an lvalue.
  3. All the assignment operators modify the lvalue. To allow the result of the assignment to be used in chained expressions, like A=B=C, it's expected that you will return a reference to that same lvalue that was just modified. But should this reference be a const or nonconst? Although you read A=B=C from left to right, the compiler parses it from right to left, so you're not forced to return a nonconst to support assignment chaining. However, people do sometimes expect to be able to perform an operation on the thing that was just assigned to, such as (A=B).foo( ); to call foo( ) on A after assigning B to it. Thus the return value for all the assignment operators should be a nonconst reference to the lvalue.
  4. For the logical operators, everyone expects to get at worst an int back, and at best a bool. (Libraries developed before most compilers supported C++'s built-in bool will use int or an equivalent typedef.)
  5. The increment and decrement operators present a dilemma because of the pre- and postfix versions. Both versions change the object and so cannot treat the object as a const. The prefix version returns the value of the object after it was changed, so you expect to get back the object that was changed. Thus, with prefix you can just return *this as a reference. The postfix version is supposed to return the value before the value is changed, so you're forced to create a separate object to represent that value and return it. Thus, with postfix you must return by value if you want to preserve the expected meaning. (Note that you'll often find the increment and decrement operators returning an int or bool to indicate, for example, whether an iterator is at the end of a list.) Now the question is: should these be returned as const or nonconst? If you allow the object to be modified and someone writes (++A).foo( );, foo( ) will be operating on A itself, but with (A++).foo( );, foo( ) operates on the temporary object returned by the postfix operator++. Temporary objects are automatically const, so this would be flagged by the compiler, but for consistency's sake it may make more sense to make them both const, as was done here. Because of the variety of meanings you may want to give the increment and decrement operators, they will need to be considered on a case-by-case basis.
    Return by value as const

    Returning by value as a const can seem a bit subtle at first, and so deserves a bit more explanation. Consider the binary operator+. If you use it in an expression such as f(A+B), the result of A+B becomes a temporary object that is used in the call to f( ). Because it's a temporary, it's automatically const, so whether you explicitly make the return value const or not has no effect.

    However, it's also possible to send a message to the return value of A+B, rather than just passing it to a function. For example, you can say (A+B).g( ), where g( ) is some member function of Integer, in this case. By making the return value const, you state that only a const member function can be called for that return value. This is const-correct, because it prevents you from storing potentially valuable information in an object that will most likely be lost.

    Return efficiency

    When new objects are created to return by value, notice the form used. In operator+, for example:

    return Integer(left.i + right.i);

    This may look at first like a "function call to a constructor," but it's not. The syntax is that of a temporary object; the statement says "make a temporary Integer object and return it." Because of this, you might think that the result is the same as creating a named local object and returning that. However, it's quite different. If you were to say instead:

    Integer tmp(left.i + right.i);

    return tmp;

    three things happen. First, the tmp object is created, including its constructor call. Then the copy-constructor copies tmp to the location of the outside return value. Finally, the destructor is called for tmp at the end of the scope.

    In contrast, the "returning a temporary" approach works quite differently. When the compiler sees you do this, it knows that you have no other need for the object it's creating than to return it, so it builds the object directly into the location of the outside return value. This requires only a single ordinary constructor call (no copy-constructor is necessary), and there's no destructor call because you never actually create a local object. Thus, while it doesn't cost anything but programmer awareness, it's significantly more efficient.

    Unusual operators

    Several additional operators have a slightly different syntax for overloading.

    The subscript operator[ ] must be a member function and it requires a single argument. Because it implies that the object acts like an array, you will often return a reference from this operator so that it can be used conveniently on the left-hand side of an equal sign. This operator is commonly overloaded; you'll see examples in the rest of the book.

    The comma operator is called when it appears next to an object of the type the comma is defined for. However, operator, is not called for function argument lists, only for objects that are out in the open, separated by commas. There doesn't seem to be much practical use for this operator; it's in the language for consistency. Here's an example showing how the comma function can be called when the comma appears before an object, as well as after:

    //: C12:Comma.cpp
    // Overloading the ',' operator
    #include <iostream>
    using namespace std;
    
    class After {
    public:
      const After& operator,(const After&) const {
        cout << "After::operator,()" << endl;
        return *this;
      }
    };
    
    class Before {};
    
    Before& operator,(int, Before& b) {
      cout << "Before::operator,()" << endl;
      return b;
    }
    
    int main() {
      After a, b;
      a, b;  // Operator comma called
    
      Before c;
      1, c;  // Operator comma called
    } ///:~

    The global function allows the comma to be placed before the object in question. The usage shown is fairly obscure and questionable. Although you would probably use a comma-separated list as part of a more complex expression, it's too subtle to use in most situations.

    The function call operator( ) must be a member function, and it is unique in that it allows any number of arguments. It makes your object look like it's actually a function name, so it's probably best used for types that have only a single operation, or at least an especially prominent one.

    The operators new and delete control dynamic storage allocation, and can be overloaded. This very important topic is covered in the next chapter.

    The operator->* is a binary operator that behaves like all the other binary operators. It is provided for those situations when you want to mimic the behavior provided by the built-in pointer-to-member syntax, described in the previous chapter.

    The smart pointer operator-> is designed to be used when you want to make an object appear to be a pointer. This is especially useful if you want to "wrap" a class around a pointer to make that pointer safe, or in the common usage of an iterator, which is an object that moves through a collection or container of other objects and selects them one at a time, without providing direct access to the implementation of the container. (You'll often find containers and iterators in class libraries.)

    The smart pointer operator-> must be a member function. It has additional, atypical constraints: it must return either an object (or reference to an object) that also has a smart pointer operator, or a pointer that can be used to select what the smart pointer arrow is pointing at. Here's a simple example:

    //: C12:Smartp.cpp
    // Smart pointer example
    #include <iostream>
    #include <cstring>
    using namespace std;
    
    class Obj {
      static int i, j;
    public:
      void f() { cout << i++ << endl; }
      void g() { cout << j++ << endl; }
    };
    
    // Static member definitions:
    int Obj::i = 47;
    int Obj::j = 11;
    
    // Container:
    class ObjContainer {
      enum { sz = 100 };
      Obj* a[sz];
      int index;
    public:
      ObjContainer() {
        index = 0;
        memset(a, 0, sz * sizeof(Obj*));
      }
      void add(Obj* OBJ) {
        if(index >= sz) return;
        a[index++] = OBJ;
      }
      friend class Sp;
    };
    
    // Iterator:
    class Sp {
      ObjContainer* oc;
      int index;
    public:
      Sp(ObjContainer* objc) {
        index = 0;
        oc = objc;
      }
      // Return value indicates end of list:
      int operator++() { // Prefix
        if(index >= oc->sz) return 0;
        if(oc->a[++index] == 0) return 0;
        return 1;
      }
      int operator++(int) { // Postfix
        return operator++(); // Use prefix version
      }
      Obj* operator->() const {
        if(oc->a[index]) return oc->a[index];
        static Obj dummy;
        return &dummy;
      }
    };
    
    int main() {
      const int sz = 10;
      Obj o[sz];
      ObjContainer oc;
      for(int i = 0; i < sz; i++)
        oc.add(&o[i]); // Fill it up
      Sp sp(&oc); // Create an iterator
      do {
        sp->f(); // Smart pointer calls
        sp->g();
      } while(sp++);
    } ///:~

    The class Obj defines the objects that are manipulated in this program. The functions f( ) and g( ) simply print out interesting values using static data members. Pointers to these objects are stored inside containers of type ObjContainer using its add( ) function. ObjContainer looks like an array of pointers, but you'll notice there's no way to get the pointers back out again. However, Sp is declared as a friend class, so it has permission to look inside the container. The Sp class looks very much like an intelligent pointer: you can move it forward using operator++ (you can also define an operator--), it won't go past the end of the container it's pointing to, and it returns (via the smart pointer operator) the value it's pointing to. Notice that an iterator is a custom fit for the container it's created for; unlike a pointer, there isn't a "general purpose" iterator. Containers and iterators are covered in more depth in Chapter XX.

    In main( ), once the container oc is filled with the addresses of the Obj objects, an iterator sp is created. The smart pointer calls happen in the expressions:

    sp->f(); // Smart pointer calls

    sp->g();

    Here, even though sp doesn't actually have f( ) and g( ) member functions, the smart pointer mechanism calls those functions for the Obj* that is returned by Sp::operator->. The compiler performs all the checking to make sure the function call works properly.

    Although the underlying mechanics of the smart pointer are more complex than those of the other operators, the goal is exactly the same: to provide a more convenient syntax for the users of your classes.

    Operators you canít overload

    There are certain operators in the available set that cannot be overloaded. The general reason for the restriction is safety: if these operators could be overloaded, it would jeopardize or break safety mechanisms, make things harder, or confuse existing practice.

    The member selection operator (.). Currently, the dot has a meaning for any member in a class, but if you allowed it to be overloaded, you couldn't access members in the normal way; instead you'd have to use a pointer and the arrow operator ->.

    The pointer-to-member dereference operator (.*), for the same reason as the dot operator.

    There's no exponentiation operator. The most popular choice for this was operator** from Fortran, but it raised difficult parsing questions. Also, C has no exponentiation operator, so C++ didn't seem to need one either, because you can always perform a function call. An exponentiation operator would add a convenient notation, but no new language functionality, to account for the added complexity of the compiler.

    There are no user-defined operators. That is, you can't make up new operators that aren't currently in the set. Part of the problem is how to determine precedence, and part is an insufficient need to justify the necessary trouble.

    You can't change the precedence rules. They're hard enough to remember as it is, without letting people play with them.

    Nonmember operators

    In some of the previous examples, the operators may be members or nonmembers, and it doesn't seem to make much difference. This usually raises the question, "Which should I choose?" In general, if it doesn't make any difference, they should be members, to emphasize the association between the operator and its class. When the left-hand operand is always an object of the current class, this works fine.

    This isn't always the case; sometimes you want the left-hand operand to be an object of some other class. A very common place to see this is when the operators << and >> are overloaded for iostreams:

    //: C12:Iosop.cpp
    // Iostream operator overloading
    // Example of non-member overloaded operators
    #include <iostream>
    #include <strstream>
    #include <cstring>
    #include "../require.h"
    using namespace std;
    
    class IntArray {
      enum { sz = 5 };
      int i[sz];
    public:
      IntArray() {
        memset(i, 0, sz * sizeof(*i));
      }
      int& operator[](int x) {
        require(x >= 0 && x < sz,
               "operator[] out of range");
        return i[x];
      }
      friend ostream&
        operator<<(ostream& os,
                   const IntArray& ia);
      friend istream&
        operator>>(istream& is, IntArray& ia);
    };
    
    ostream& operator<<(ostream& os,
                        const IntArray& ia){
      for(int j = 0; j < ia.sz; j++) {
        os << ia.i[j];
        if(j != ia.sz - 1)
          os << ", ";
      }
      os << endl;
      return os;
    }
    
    istream& operator>>(istream& is, IntArray& ia){
      for(int j = 0; j < ia.sz; j++)
        is >> ia.i[j];
      return is;
    }
    
    int main() {
      istrstream input("47 34 56 92 103");
      IntArray I;
      input >> I;
      I[4] = -1; // Use overloaded operator[]
      cout << I;
    } ///:~

    This class also contains an overloaded operator[ ], which returns a reference to a legitimate value in the array. Because a reference is returned, the expression

    I[4] = -1;

    not only looks much more civilized than if pointers were used, it also accomplishes the desired effect.

    The overloaded shift operators pass and return by reference, so the actions will affect the external objects. In the function definitions, expressions like

    os << ia.i[j];

    cause existing overloaded operator functions to be called (that is, those declared in <iostream>). In this case, the function called is ostream& operator<<(ostream&, int) because ia.i[j] resolves to an int.

    Once all the actions are performed on the istream or ostream, it is returned so it can be used in a more complicated expression.

    The form shown in this example for the inserter and extractor is standard. If you want to create these operators for your own class, copy the function signatures and return types and follow the form of the body.

    Basic guidelines

    Murray suggests these guidelines for choosing between members and nonmembers:

    Operator                            Recommended use

    All unary operators                 member
    = ( ) [ ] ->                        must be member
    += -= /= *= ^= &= |= %= >>= <<=     member
    All other binary operators          nonmember

     

    Overloading assignment

    A common source of confusion for new C++ programmers is assignment. This is no doubt because the = sign is such a fundamental operation in programming, right down to copying a register at the machine level. In addition, the copy-constructor (from the previous chapter) can also be invoked when using the = sign:

    foo b;

    foo a = b;

    a = b;

    In the second line, the object a is being defined. A new object is being created where one didn't exist before. Because you know by now how defensive the C++ compiler is about object initialization, you know that a constructor must always be called at the point where an object is defined. But which constructor? a is being created from an existing foo object, so there's only one choice: the copy-constructor. So even though an equal sign is involved, the copy-constructor is called.

    In the third line, things are different. On the left side of the equal sign, there's a previously initialized object. Clearly, you don't call a constructor for an object that's already been created. In this case foo::operator= is called for a, taking as an argument whatever appears on the right-hand side. (You can have multiple operator= functions to take different right-hand arguments.)

    This behavior is not restricted to the copy-constructor. Any time you're initializing an object using an = instead of the ordinary function-call form of the constructor, the compiler will look for a constructor that accepts whatever is on the right-hand side:

    //: C12:FeeFi.cpp
    // Copying vs. initialization
    
    class Fi {
    public:
      Fi() {}
    };
    
    class Fee {
    public:
      Fee(int) {}
      Fee(const Fi&) {}
    };
    
    int main() {
      Fee f = 1; // Fee(int)
      Fi fi;
      Fee fum = fi; // Fee(const Fi&)
    } ///:~

    When dealing with the = sign, it's important to keep this distinction in mind: if the object hasn't been created yet, initialization is required; otherwise the assignment operator= is used.

    It's even better to avoid writing code that uses the = for initialization; instead, always use the explicit constructor form. The last line becomes

    Fee fum(fi);

    This way, you'll avoid confusing your readers.

    Behavior of operator=

    In BINARY.CPP, you saw that operator= can be only a member function. It is intimately connected to the object on the left side of the =, and if you could define operator= globally, you could try to redefine the built-in = sign:

    int operator=(int, foo); // Global = not allowed!

    The compiler skirts this whole issue by forcing you to make operator= a member function.

    When you create an operator=, you must copy all the necessary information from the right-hand object into yourself to perform whatever you consider "assignment" for your class. For simple objects, this is obvious:

    //: C12:Simpcopy.cpp
    // Simple operator=()
    #include <iostream>
    using namespace std;
    
    class Value {
      int a, b;
      float c;
    public:
      Value(int A = 0, int B = 0, float C = 0.0) {
        a = A;
        b = B;
        c = C;
      }
      Value& operator=(const Value& rv) {
        a = rv.a;
        b = rv.b;
        c = rv.c;
        return *this;
      }
      friend ostream&
        operator<<(ostream& os, const Value& rv) {
          return os << "a = " << rv.a << ", b = "
            << rv.b << ", c = " << rv.c;
        }
    };
    
    int main() {
      Value A, B(1, 2, 3.3);
      cout << "A: " << A << endl;
      cout << "B: " << B << endl;
      A = B;
      cout << "A after assignment: " << A << endl;
    } ///:~

    Here, the object on the left side of the = copies all the elements of the object on the right, then returns a reference to itself, so a more complex expression can be created.

    A common mistake was made in this example: when you're assigning two objects of the same type, you should always check first for self-assignment. Is the object being assigned to itself? In some cases, such as this one, it's harmless if you perform the assignment operations anyway, but if changes are made to the implementation of the class, it can make a difference, and if you don't do it as a matter of habit, you may forget and cause hard-to-find bugs.

    Pointers in classes

    What happens if the object is not so simple? For example, what if the object contains pointers to other objects? Simply copying a pointer means you'll end up with two objects pointing to the same storage location. In situations like these, you need to do bookkeeping of your own.

    There are two common approaches to this problem. The simplest technique is to copy whatever the pointer refers to when you do an assignment or a copy-constructor. This is very straightforward:

    //: C12:Copymem.cpp {O}
    // Duplicate during assignment
    #include <cstdlib>
    #include <cstring>
    #include "../require.h"
    using namespace std;
    
    class WithPointer {
      char* p;
      enum { blocksz = 100 };
    public:
      WithPointer() {
        p = (char*)malloc(blocksz);
        require(p != 0);
        memset(p, 1, blocksz);
      }
      WithPointer(const WithPointer& wp) {
        p = (char*)malloc(blocksz);
        require(p != 0);
        memcpy(p, wp.p, blocksz);
      }
      WithPointer&
      operator=(const WithPointer& wp) {
        // Check for self-assignment:
        if(&wp != this)
          memcpy(p, wp.p, blocksz);
        return *this;
      }
      ~WithPointer() {
        free(p);
      }
    }; ///:~

    This shows the four functions you will always need to define when your class contains pointers: all necessary ordinary constructors, the copy-constructor, operator= (either define it or disallow it), and a destructor. The operator= checks for self-assignment as a matter of course, even though it's not strictly necessary here. This virtually eliminates the possibility that you'll forget to check for self-assignment if you do change the code so that it matters.

    Here, the constructors allocate the memory and initialize it, the operator= copies it, and the destructor frees the memory. However, if you're dealing with a lot of memory or a high overhead to initialize that memory, you may want to avoid this copying. A very common approach to this problem is called reference counting. You make the block of memory smart, so it knows how many objects are pointing to it. Then copy-construction or assignment means attaching another pointer to an existing block of memory and incrementing the reference count. Destruction means reducing the reference count and destroying the object if the reference count goes to zero.

    But what if you want to write to the block of memory? More than one object may be using this block, so you'd be modifying someone else's block as well as your own, which doesn't seem very neighborly. To solve this problem, an additional technique called copy-on-write is often used. Before writing to a block of memory, you make sure no one else is using it. If the reference count is greater than one, you must make yourself a personal copy of that block before writing to it, so you don't disturb someone else's turf. Here's a simple example of reference counting and copy-on-write:

    //: C12:Refcount.cpp
    // Reference count, copy-on-write
    #include <cstring>
    #include "../require.h"
    using namespace std;
    
    class Counted {
      class MemBlock {
        enum { size = 100 };
        char c[size];
        int refcount;
      public:
        MemBlock() {
          memset(c, 1, size);
          refcount = 1;
        }
        MemBlock(const MemBlock& rv) {
          memcpy(c, rv.c, size);
          refcount = 1;
        }
        void attach() { ++refcount; }
        void detach() {
          require(refcount != 0);
          // Destroy object if no one is using it:
          if(--refcount == 0) delete this;
        }
        int count() const { return refcount; }
        void set(char x) { memset(c, x, size); }
        // Conditionally copy this MemBlock.
        // Call before modifying the block; assign
        // resulting pointer to your block;
        MemBlock* unalias() {
          // Don't duplicate if not aliased:
          if(refcount == 1) return this;
          --refcount;
          // Use copy-constructor to duplicate:
          return new MemBlock(*this);
        }
      } * block;
    public:
      Counted() {
        block = new MemBlock; // Sneak preview
      }
      Counted(const Counted& rv) {
        block = rv.block; // Pointer assignment
        block->attach();
      }
      void unalias() { block = block->unalias(); }
      Counted& operator=(const Counted& rv) {
        // Check for self-assignment:
        if(&rv == this) return *this;
        // Clean up what you're using first:
        block->detach();
        block = rv.block; // Like copy-constructor
        block->attach();
        return *this;
      }
      // Decrement refcount, conditionally destroy
      ~Counted() { block->detach(); }
      // Copy-on-write:
      void write(char value) {
        // Do this before any write operation:
        unalias();
        // It's safe to write now.
        block->set(value);
      }
    };
    
    int main() {
      Counted A, B;
      Counted C(A);
      B = A;
      C = C;
      C.write('x');
    } ///:~

    The nested class MemBlock is the block of memory pointed to. (Notice the pointer block defined at the end of the nested class.) It contains a reference count and functions to control and read the reference count. There's a copy-constructor, so you can make a new MemBlock from an existing one.

    The attach( ) function increments the reference count of a MemBlock to indicate there's another object using it. detach( ) decrements the reference count. If the reference count goes to zero, then no one is using it anymore, so the member function destroys its own object by saying delete this.

    You can modify the memory with the set( ) function, but before you make any modifications, you should ensure that you aren't walking on a MemBlock that some other object is using. You do this by calling Counted::unalias( ), which in turn calls MemBlock::unalias( ). The latter function will return the block pointer if the reference count is one (meaning no one else is pointing to that block), but will duplicate the block if the reference count is more than one.

    This example includes a sneak preview of the next chapter. Instead of C's malloc( ) and free( ) to create and destroy the objects, the special C++ operators new and delete are used. For this example, you can think of new and delete just like malloc( ) and free( ), except new calls the constructor after allocating memory, and delete calls the destructor before freeing the memory.

    The copy-constructor, instead of creating its own memory, assigns block to the block of the source object. Then, because there's now an additional object using that block of memory, it increments the reference count by calling MemBlock::attach( ).

    The operator= deals with an object that has already been created on the left side of the =, so it must first clean that up by calling detach( ) for that MemBlock, which will destroy the old MemBlock if no one else is using it. Then operator= repeats the behavior of the copy-constructor. Notice that it first checks whether you're assigning the same object to itself.

    The destructor calls detach( ) to conditionally destroy the MemBlock.

    To implement copy-on-write, you must control all the actions that write to your block of memory. This means you can't ever hand a raw pointer to the outside world. Instead you say, "Tell me what you want done and I'll do it for you!" For example, the write( ) member function allows you to change the values in the block of memory. But first, it uses unalias( ) to prevent the modification of an aliased block (a block with more than one Counted object using it).

    main( ) tests the various functions that must work correctly to implement reference counting: the constructor, copy-constructor, operator=, and destructor. It also tests the copy-on-write by calling the write( ) function for object C, which is aliased to Aís memory block.

    Tracing the output

    To verify that the behavior of this scheme is correct, the best approach is to add information and functionality to the class to generate a trace output that can be analyzed. Here's REFCOUNT.CPP with added trace information:

    //: C12:Rctrace.cpp
    // REFCOUNT.CPP w/ trace info
    #include <cstring>
    #include <fstream>
    #include "../require.h"
    using namespace std;
    
    ofstream out("rctrace.out");
    
    class Counted {
      class MemBlock {
        enum { size = 100 };
        char c[size];
        int refcount;
        static int blockcount;
        int blocknum;
      public:
        MemBlock() {
          memset(c, 1, size);
          refcount = 1;
          blocknum = blockcount++;
        }
        MemBlock(const MemBlock& rv) {
          memcpy(c, rv.c, size);
          refcount = 1;
          blocknum = blockcount++;
          print("copied block");
          out << endl;
          rv.print("from block");
        }
        ~MemBlock() {
          out << "\tdestroying block "
              << blocknum << endl;
        }
        void print(const char* msg = "") const {
          if(*msg) out << msg << ", ";
          out << "blocknum:" << blocknum;
          out << ", refcount:" << refcount;
        }
        void attach() { ++refcount; }
        void detach() {
          require(refcount != 0);
          // Destroy object if no one is using it:
          if(--refcount == 0) delete this;
        }
        int count() const { return refcount; }
        void set(char x) { memset(c, x, size); }
        // Conditionally copy this MemBlock.
        // Call before modifying the block; assign
        // resulting pointer to your block;
        MemBlock* unalias() {
          // Don't duplicate if not aliased:
          if(refcount == 1) return this;
          --refcount;
          // Use copy-constructor to duplicate:
          return new MemBlock(*this);
        }
      } * block;
      enum { sz = 30 };
      char id[sz];
    public:
      Counted(const char* ID = "tmp") {
        block = new MemBlock; // Sneak preview
        strncpy(id, ID, sz);
      }
      Counted(const Counted& rv) {
        block = rv.block; // Pointer assignment
        block->attach();
        strncpy(id, rv.id, sz);
        strncat(id, " copy", sz - strlen(id));
      }
      void unalias() { block = block->unalias(); }
      void addname(const char* nm) {
        strncat(id, nm, sz - strlen(id));
      }
      Counted& operator=(const Counted& rv) {
        print("inside operator=\n\t");
        if(&rv == this) {
          out << "self-assignment" << endl;
          return *this;
        }
        // Clean up what you're using first:
        block->detach();
        block = rv.block; // Like copy-constructor
        block->attach();
        return *this;
      }
      // Decrement refcount, conditionally destroy
      ~Counted() {
        out << "preparing to destroy: " << id
            << endl << "\tdecrementing refcount ";
        block->print();
        out << endl;
        block->detach();
      }
      // Copy-on-write:
      void write(char value) {
        unalias();
        block->set(value);
      }
      void print(const char* msg = "") {
        if(*msg) out << msg << " ";
        out << "object " << id << ": ";
        block->print();
        out << endl;
      }
    };
    
    int Counted::MemBlock::blockcount = 0;
    
    int main() {
      Counted A("A"), B("B");
      Counted C(A);
      C.addname(" (C) ");
      A.print();
      B.print();
      C.print();
      B = A;
      A.print("after assignment\n\t");
      B.print();
      out << "Assigning C = C" << endl;
      C = C;
      C.print("calling C.write('x')\n\t");
      C.write('x');
      out << endl << "exiting main()" << endl;
    } ///:~

    Now MemBlock contains a static data member blockcount to keep track of the number of blocks created, and to create a unique number (stored in blocknum) for each block so you can tell them apart. The destructor announces which block is being destroyed, and the print( ) function displays the block number and reference count.

    The Counted class contains a buffer id to keep track of information about the object. The Counted constructor creates a new MemBlock object and assigns the result (a pointer to the MemBlock object on the heap) to block. The identifier, copied from the argument in the copy-constructor, has the word "copy" appended to show where it's copied from. Also, the addname( ) function lets you put additional information about the object in id (the actual identifier, so you can see what it is as well as where it's copied from).

    Here's the output:

    object A: blocknum:0, refcount:2
    object B: blocknum:1, refcount:1
    object A copy (C) : blocknum:0, refcount:2
    inside operator=
    object B: blocknum:1, refcount:1
    destroying block 1
    after assignment
    object A: blocknum:0, refcount:3
    object B: blocknum:0, refcount:3
    Assigning C = C
    inside operator=
    object A copy (C) : blocknum:0, refcount:3
    self-assignment
    calling C.write('x')
    object A copy (C) : blocknum:0, refcount:3
    copied block, blocknum:2, refcount:1
    from block, blocknum:0, refcount:2
    exiting main()
    preparing to destroy: A copy (C)
    decrementing refcount blocknum:2, refcount:1
    destroying block 2
    preparing to destroy: B
    decrementing refcount blocknum:0, refcount:2
    preparing to destroy: A
    decrementing refcount blocknum:0, refcount:1
    destroying block 0

    By studying the output, tracing through the source code, and experimenting with the program, you'll deepen your understanding of these techniques.

    Automatic operator= creation

    Because assigning an object to another object of the same type is an activity most people expect to be possible, the compiler will automatically create a type::operator=(type) if you don't make one. The behavior of this operator mimics that of the automatically created copy-constructor: If the class contains objects (or is inherited from another class), the operator= for those objects is called recursively. This is called memberwise assignment. For example,

    //: C12:Autoeq.cpp
    // Automatic operator=()
    #include <iostream>
    using namespace std;
    
    class Bar {
    public:
      Bar& operator=(const Bar&) {
        cout << "inside Bar::operator=()" << endl;
        return *this;
      }
    };
    
    class Foo {
      Bar b;
    };
    
    int main() {
      Foo a, b;
      a = b; // Prints: "inside Bar::operator=()"
    } ///:~

    The automatically generated operator= for Foo calls Bar::operator=.

    Generally you don't want to let the compiler do this for you. With classes of any sophistication (especially if they contain pointers!) you want to explicitly create an operator=. If you really don't want people to perform assignment, declare operator= as a private function. (You don't need to define it unless you're using it inside the class.)

    Automatic type conversion

    In C and C++, if the compiler sees an expression or function call using a type that isn't quite the one it needs, it can often perform an automatic type conversion from the type it has to the type it wants. In C++, you can achieve this same effect for user-defined types by defining automatic type-conversion functions. These functions come in two flavors: a particular type of constructor and an overloaded operator.

    Constructor conversion

    If you define a constructor that takes as its single argument an object (or reference) of another type, that constructor allows the compiler to perform an automatic type conversion. For example,

    //: C12:Autocnst.cpp
    // Type conversion constructor
    
    class One {
    public:
      One() {}
    };
    
    class Two {
    public:
      Two(const One&) {}
    };
    
    void f(Two) {}
    
    int main() {
      One one;
      f(one); // Wants a Two, has a One
    } ///:~

    When the compiler sees f( ) called with a One object, it looks at the declaration for f( ) and notices it wants a Two. Then it looks to see if there's any way to get a Two from a One, and it finds the constructor Two::Two(One), which it quietly calls. The resulting Two object is handed to f( ).

    In this case, automatic type conversion has saved you from the trouble of defining two overloaded versions of f( ). However, the cost is the hidden constructor call to Two, which may matter if you're concerned about the efficiency of calls to f( ).

    Preventing constructor conversion

    There are times when automatic type conversion via the constructor can cause problems. To turn it off, you preface the constructor with the keyword explicit (which works only with constructors). Here it is used with the constructor of class Two from the above example:

    class One {
    public:
      One() {}
    };
    
    class Two {
    public:
      explicit Two(const One&) {}
    };
    
    void f(Two) {}
    
    int main() {
      One one;
    //! f(one); // No auto conversion allowed
      f(Two(one)); // OK -- user performs conversion
    }

    By making Two's constructor explicit, the compiler is told not to perform any automatic conversion using that particular constructor (other non-explicit constructors in that class can still perform automatic conversions). If the user wants to make the conversion happen, the code must be written out. In the above code, f(Two(one)) creates a temporary object of type Two from one, just like the compiler did in the previous version.

    Operator conversion

    The second way to effect automatic type conversion is through operator overloading. You can create a member function that takes the current type and converts it to the desired type using the operator keyword followed by the type you want to convert to. This form of operator overloading is unique because you don't appear to specify a return type; the return type is the name of the operator you're overloading. Here's an example:

    //: C12:Opconv.cpp
    // Op overloading conversion
    
    class Three {
      int i;
    public:
      Three(int I = 0, int = 0) : i(I) {}
    };
    
    class Four {
      int x;
    public:
      Four(int X) : x(X) {}
      operator Three() const { return Three(x); }
    };
    
    void g(Three) {}
    
    int main() {
      Four four(1);
      g(four);
      g(1);  // Calls Three(1,0)
    } ///:~

    With the constructor technique, the destination class is performing the conversion, but with operators, the source class performs the conversion. The value of the constructor technique is that you can add a new conversion path to an existing system as you're creating a new class. However, a single-argument constructor always defines an automatic type conversion (even if it has more than one argument, as long as the remaining arguments are defaulted), which may not be what you want. In addition, there's no way to use a constructor conversion from a user-defined type to a built-in type; this is possible only with operator overloading.

    Reflexivity

    One of the most convenient reasons to use global overloaded operators rather than member operators is that in the global versions, automatic type conversion may be applied to either operand, whereas with member operators, the left-hand operand must already be the proper type. If you want both operands to be converted, the global versions can save a lot of coding. Here's a small example:

    //: C12:Reflex.cpp
    // Reflexivity in overloading
    
    class Number {
      int i;
    public:
      Number(int I = 0) { i = I; }
      const Number
      operator+(const Number& n) const {
        return Number(i + n.i);
      }
      friend const Number
        operator-(const Number&, const Number&);
    };
    
    const Number
      operator-(const Number& n1,
                const Number& n2) {
        return Number(n1.i - n2.i);
    }
    
    int main() {
      Number a(47), b(11);
      a + b; // OK
      a + 1; // 2nd arg converted to Number
    //! 1 + a; // Wrong! 1st arg not of type Number
      a - b; // OK
      a - 1; // 2nd arg converted to Number
      1 - a; // 1st arg converted to Number
    } ///:~

    Class Number has a member operator+ and a friend operator-. Because there's a constructor that takes a single int argument, an int can be automatically converted to a Number, but only under the right conditions. In main( ), you can see that adding a Number to another Number works fine because it's an exact match to the overloaded operator. Also, when the compiler sees a Number followed by a + and an int, it can match to the member function Number::operator+ and convert the int argument to a Number using the constructor. But when it sees an int and a + and a Number, it doesn't know what to do because all it has is Number::operator+, which requires that the left operand already be a Number object. Thus the compiler issues an error.

    With the friend operator-, things are different. The compiler needs to fill in both its arguments however it can; it isn't restricted to having a Number as the left-hand argument. Thus, if it sees 1 - a, it can convert the first argument to a Number using the constructor.

    Sometimes you want to be able to restrict the use of your operators by making them members. For example, when multiplying a matrix by a vector, the vector must go on the right. But if you want your operators to be able to convert either argument, make the operator a friend function.

    Fortunately, the compiler will not take 1 - 1, convert both arguments to Number objects, and then call operator-. That would mean that existing C code might suddenly start to work differently. The compiler matches the "simplest" possibility first, which is the built-in operator for the expression 1 - 1.

    A perfect example: strings

    An example where automatic type conversion is extremely helpful occurs with a string class. Without automatic type conversion, if you wanted to use all the existing string functions from the Standard C library, you'd have to create a member function for each one, like this:

    //: C12:Strings1.cpp
    // No auto type conversion
    #include <cstring>
    #include <cstdlib>
    #include "../require.h"
    using namespace std;
    
    class Stringc {
      char* s;
    public:
      Stringc(const char* S = "") {
        s = (char*)malloc(strlen(S) + 1);
        require(s != 0);
        strcpy(s, S);
      }
      ~Stringc() { free(s); }
      int Strcmp(const Stringc& S) const {
        return ::strcmp(s, S.s);
      }
      // ... etc., for every function in string.h
    };
    
    int main() {
      Stringc s1("hello"), s2("there");
      s1.Strcmp(s2);
    } ///:~

    Here, only the strcmp( ) function is created, but you'd have to create a corresponding function for every one in <cstring> that might be needed. Fortunately, you can provide an automatic type conversion allowing access to all the functions in <cstring>:

    //: C12:Strings2.cpp
    // With auto type conversion
    #include <cstring>
    #include <cstdlib>
    #include "../require.h"
    using namespace std;
    
    class Stringc {
      char* s;
    public:
      Stringc(const char* S = "") {
        s = (char*)malloc(strlen(S) + 1);
        require(s != 0);
        strcpy(s, S);
      }
      ~Stringc() { free(s); }
      operator const char*() const { return s; }
    };
    
    int main() {
      Stringc s1("hello"), s2("there");
      strcmp(s1, s2); // Standard C function
      strspn(s1, s2); // Any string function!
    } ///:~

    Now any function that takes a char* argument can also take a Stringc argument because the compiler knows how to make a char* from a Stringc.

    Pitfalls in automatic type conversion

    Because the compiler must choose how to quietly perform a type conversion, it can get into trouble if you don't design your conversions correctly. A simple and obvious situation occurs with a class X that can convert itself to an object of class Y with an operator Y( ). If class Y has a constructor that takes a single argument of type X, this represents the identical type conversion. The compiler now has two ways to go from X to Y, so it will generate an ambiguity error when that conversion occurs:

    //: C12:Ambig.cpp
    // Ambiguity in type conversion
    
    class Y; // Class declaration
    
    class X {
    public:
      operator Y() const; // Convert X to Y
    };
    
    class Y {
    public:
      Y(X); // Convert X to Y
    };
    
    void f(Y);
    
    int main() {
      X x;
    //! f(x); // Error: ambiguous conversion
    } ///:~

    The obvious solution to this problem is not to do it: Just provide a single path for automatic conversion from one type to another.

    A more difficult problem to spot occurs when you provide automatic conversion to more than one type. This is sometimes called fan-out:

    //: C12:Fanout.cpp
    // Type conversion fanout
    
    class A {};
    class B {};
    
    class C {
    public:
      operator A() const;
      operator B() const;
    };
    
    // Overloaded h():
    void h(A);
    void h(B);
    
    int main() {
      C c;
    //! h(c); // Error: C -> A or C -> B ???
    } ///:~

    Class C has automatic conversions to both A and B. The insidious thing about this is that there's no problem until someone innocently comes along and creates two overloaded versions of h( ). (With only one version, the code in main( ) works fine.)

    Again, the solution (and the general watchword with automatic type conversion) is to provide only a single automatic conversion from one type to another. You can have conversions to other types; they just shouldn't be automatic. You can create explicit function calls with names like make_A( ) and make_B( ).

    Hidden activities

    Automatic type conversion can introduce more underlying activities than you may expect. As a little brain teaser, look at this modification of FeeFi.cpp:

    //: C12:FeeFi2.cpp
    // Copying vs. initialization
    
    class Fi {};
    
    class Fee {
    public:
      Fee(int) {}
      Fee(const Fi&) {}
    };
    
    class Fo {
      int i;
    public:
      Fo(int x = 0) { i = x; }
      operator Fee() const { return Fee(i); }
    };
    
    int main() {
      Fo fo;
      Fee fiddle = fo;
    } ///:~

    There is no constructor to create the Fee fiddle from a Fo object. However, Fo has an automatic type conversion to a Fee. There's no copy-constructor to create a Fee from a Fee, but this is one of the special functions the compiler can create for you. (The default constructor, copy-constructor, operator=, and destructor can be created automatically.) So for the relatively innocuous statement

    Fee fiddle = fo;

    the automatic type conversion operator is called to produce a temporary Fee, and the compiler-generated copy-constructor is called to initialize fiddle from that temporary.

    Automatic type conversion should be used carefully. It's excellent when it significantly reduces a coding task, but it's usually not worth using gratuitously.

    Summary

    The whole reason for the existence of operator overloading is for those situations when it makes life easier. There's nothing particularly magical about it; the overloaded operators are just functions with funny names, and the function calls happen to be made for you by the compiler when it spots the right pattern. But if operator overloading doesn't provide a significant benefit to you (the creator of the class) or the user of the class, don't confuse the issue by adding it.

    Exercises

  19. Create a simple class with an overloaded operator++. Try calling this operator in both pre- and postfix form and see what kind of compiler warning you get.
  20. Create a class that contains a single private char. Overload the iostream operators << and >> (as in IOSOP.CPP) and test them. You can test them with fstreams, strstreams, and stdiostreams (cin and cout).
  21. Write a Number class with overloaded operators for +, -, *, /, and assignment. Choose the return values for these functions so that expressions can be chained together, and for efficiency. Write an automatic type conversion operator int( ).
  22. Combine the classes in UNARY.CPP and BINARY.CPP.
  23. Fix FANOUT.CPP by creating an explicit function to call to perform the type conversion, instead of one of the automatic conversion operators.
    13: Dynamic object creation

    Sometimes you know the exact quantity, type, and lifetime of the objects in your program. But not always.

    How many planes will an air-traffic system have to handle? How many shapes will a CAD system need? How many nodes will there be in a network?

    To solve the general programming problem, it's essential that you be able to create and destroy objects at run-time. Of course, C has always provided the dynamic memory allocation functions malloc( ) and free( ) (along with variants of malloc( )) that allocate storage from the heap (also called the free store) at run-time.

    However, this simply wonít work in C++. The constructor doesnít allow you to hand it the address of the memory to initialize, and for good reason: If you could do that, you might

  1. Forget. Then guaranteed initialization of objects in C++ wouldn't be guaranteed.
  2. Accidentally do something to the object before you initialize it, expecting the right thing to happen.
  3. Hand it the wrong-sized object.

    And of course, even if you did everything correctly, anyone who modifies your program is prone to the same errors. Improper initialization is responsible for a large portion of programming errors, so it's especially important to guarantee constructor calls for objects created on the heap.

    So how does C++ guarantee proper initialization and cleanup, but allow you to create objects dynamically, on the heap?

    The answer is, "by bringing dynamic object creation into the core of the language." malloc( ) and free( ) are library functions, and thus outside the control of the compiler. However, if you have an operator to perform the combined act of dynamic storage allocation and initialization and another to perform the combined act of cleanup and releasing storage, the compiler can still guarantee that constructors and destructors will be called for all objects.

    In this chapter, you'll learn how C++'s new and delete elegantly solve this problem by safely creating objects on the heap.

    Object creation

    When a C++ object is created, two events occur:

  1. Storage is allocated for the object.
  2. The constructor is called to initialize that storage.

    By now you should believe that step two always happens. C++ enforces it because uninitialized objects are a major source of program bugs. It doesn't matter where or how the object is created; the constructor is always called.

    Step one, however, can occur in several ways, or at alternate times:

  1. Storage can be allocated before the program begins, in the static storage area. This storage exists for the life of the program.
  2. Storage can be created on the stack whenever a particular execution point is reached (an opening brace). That storage is released automatically at the complementary execution point (the closing brace). These stack-allocation operations are built into the instruction set of the processor and are very efficient. However, you have to know exactly how much storage you need when you're writing the program so the compiler can generate the right code.
  3. Storage can be allocated from a pool of memory called the heap (also known as the free store). This is called dynamic memory allocation. To allocate this memory, a function is called at run-time; this means you can decide at any time that you want some memory and how much you need. You are also responsible for determining when to release the memory, which means the lifetime of that memory can be as long as you choose; it isn't determined by scope.

    Often these three regions are placed in a single contiguous piece of physical memory: the static area, the stack, and the heap (in an order determined by the compiler writer). However, there are no rules. The stack may be in a special place, and the heap may be implemented by making calls for chunks of memory from the operating system. As a programmer, these things are normally shielded from you, so all you need to think about is that the memory is there when you call for it.

    C's approach to the heap

    To allocate memory dynamically at run-time, C provides functions in its standard library: malloc( ) and its variants calloc( ) and realloc( ) to produce memory from the heap, and free( ) to release the memory back to the heap. These functions are pragmatic but primitive and require understanding and care on the part of the programmer. To create an instance of a class on the heap using C's dynamic memory functions, you'd have to do something like this:

    //: C13:Malclass.cpp
    // Malloc with class objects
    // What you'd have to do if not for "new"
    #include <cstdlib> // malloc() & free()
    #include <cstring> // memset()
    #include <iostream>
    #include "../require.h"
    using namespace std;
    
    class Obj {
      int i, j, k;
      enum { sz = 100 };
      char buf[sz];
    public:
      void initialize() { // Can't use constructor
        cout << "initializing Obj" << endl;
        i = j = k = 0;
        memset(buf, 0, sz);
      }
      void destroy() { // Can't use destructor
        cout << "destroying Obj" << endl;
      }
    };
    
    int main() {
      Obj* obj = (Obj*)malloc(sizeof(Obj));
      require(obj != 0);
      obj->initialize();
      // ... sometime later:
      obj->destroy();
      free(obj);
    } ///:~

    You can see the use of malloc( ) to create storage for the object in the line:

    Obj* obj = (Obj*)malloc(sizeof(Obj));

    Here, the user must determine the size of the object (one place for an error). malloc( ) returns a void* because it's just a patch of memory, not an object. C++ doesn't allow a void* to be assigned to any other pointer, so it must be cast.

    Because malloc( ) may fail to find any memory (in which case it returns zero), you must check the returned pointer to make sure it was successful.

    But the worst problem is this line:

    obj->initialize();

    If they make it this far correctly, users must remember to initialize the object before it is used. Notice that a constructor was not used because the constructor cannot be called explicitly; it's called for you by the compiler when an object is created. The problem here is that the user now has the option to forget to perform the initialization before the object is used, thus reintroducing a major source of bugs.

    It also turns out that many programmers seem to find C's dynamic memory functions too confusing and complicated; it's not uncommon to find C programmers who use virtual memory machines allocating huge arrays of variables in the static storage area to avoid thinking about dynamic memory allocation. Because C++ is attempting to make library use safe and effortless for the casual programmer, C's approach to dynamic memory is unacceptable.

    operator new

    The solution in C++ is to combine all the actions necessary to create an object into a single operator called new. When you create an object with new (using a new-expression), it allocates enough storage on the heap to hold the object, and calls the constructor for that storage. Thus, if you say

    Foo *fp = new Foo(1,2);

    at run-time, the equivalent of malloc(sizeof(Foo)) is called (often, it is literally a call to malloc( )), and the constructor for Foo is called with the resulting address as the this pointer, using (1,2) as the argument list. By the time the pointer is assigned to fp, it's a live, initialized object; you can't even get your hands on it before then. It's also automatically the proper Foo type, so no cast is necessary.

    The default new also checks to make sure the memory allocation was successful before passing the address to the constructor, so you don't have to explicitly determine if the call was successful. Later in the chapter you'll find out what happens if there's no memory left.

    You can create a new-expression using any constructor available for the class. If the constructor has no arguments, you can make the new-expression without the constructor argument list:

    Foo *fp = new Foo;

    Notice how simple the process of creating objects on the heap becomes: a single expression, with all the sizing, conversions, and safety checks built in. It's as easy to create an object on the heap as it is on the stack.

    operator delete

    The complement to the new-expression is the delete-expression, which first calls the destructor and then releases the memory (often with a call to free( )). Just as a new-expression returns a pointer to the object, a delete-expression requires the address of an object.

    delete fp;

    cleans up the dynamically allocated Foo object created earlier.

    delete can be called only for an object created by new. If you malloc( ) (or calloc( ) or realloc( )) an object and then delete it, the behavior is undefined. Because most default implementations of new and delete use malloc( ) and free( ), you'll probably release the memory without calling the destructor.

    If the pointer youíre deleting is zero, nothing will happen. For this reason, people often recommend setting a pointer to zero immediately after you delete it, to prevent deleting it twice. Deleting an object more than once is definitely a bad thing to do, and will cause problems.

    A simple example

    This example shows that the initialization takes place:

    //: C13:Newdel.cpp
    // Simple demo of new & delete
    #include <iostream>
    using namespace std;
    
    class Tree {
      int height;
    public:
      Tree(int Height) {
        height = Height;
      }
      ~Tree() { cout << "*"; }
      friend ostream&
      operator<<(ostream& os, const Tree* t) {
        return os << "Tree height is: "
                  << t->height << endl;
      }
    };
    
    int main() {
      Tree* t = new Tree(40);
      cout << t;
      delete t;
    } ///:~

    We can prove that the constructor is called by printing out the value of the Tree. Here, it's done by overloading operator<< for use with an ostream and a Tree*. Note, however, that even though the function is declared as a friend, it is defined as an inline! This is a mere convenience; defining a friend function inline inside a class doesn't change the friend status or the fact that it's a global function and not a class member function. Also notice that the return value is the result of the entire output expression, which is itself an ostream& (which it must be, to satisfy the return value type of the function).

    Memory manager overhead

    When you create auto objects on the stack, the size of the objects and their lifetime is built right into the generated code, because the compiler knows the exact quantity and scope. Creating objects on the heap involves additional overhead, both in time and in space. Here's a typical scenario. (You can replace malloc( ) with calloc( ) or realloc( ).)

  1. You call malloc( ), which requests a block of memory from the pool. (This code may actually be part of malloc( ).)
  2. The pool is searched for a block of memory large enough to satisfy the request. This is done by checking a map or directory of some sort that shows which blocks are currently in use and which blocks are available. It's a quick process, but it may take several tries, so it might not be deterministic; that is, you can't necessarily count on malloc( ) always taking exactly the same amount of time.
  3. Before a pointer to that block is returned, the size and location of the block must be recorded so further calls to malloc( ) won't use it, and so that when you call free( ), the system knows how much memory to release.

    The way all this is implemented can vary widely. For example, there's nothing to prevent primitives for memory allocation being implemented in the processor. If you're curious, you can write test programs to try to guess the way your malloc( ) is implemented. You can also read the library source code, if you have it.

    Early examples redesigned

    Now that new and delete have been introduced (as well as many other subjects), the Stash and Stack examples from the early part of this book can be rewritten using all the features discussed in the book so far. Examining the new code will also give you a useful review of the topics.

    Heap-only string class

    At this point in the book, neither the Stash nor Stack classes will "own" the objects they point to; that is, when the Stash or Stack object goes out of scope, it will not call delete for all the objects it points to. This is not possible because, in an attempt to be generic, they hold void pointers. If you delete a void pointer, the only thing that happens is the memory gets released, because there's no type information and no way for the compiler to know what destructor to call. When a pointer is returned from the Stash or Stack object, you must cast it to the proper type before using it. These problems will be dealt with in the next chapter, and in Chapter 14.

    Because the container doesn't own the pointer, the user must be responsible for it. This means there's a serious problem if you add pointers to objects created on the stack and objects created on the heap to the same container, because a delete-expression is unsafe for a pointer that hasn't been allocated on the heap. (And when you fetch a pointer back from the container, how will you know where its object has been allocated?) To solve this problem in the following version of a simple String class, steps have been taken to prevent the creation of a String anywhere but on the heap:

    //: C13:Strings.h
    // Simple string class
    // Can only be built on the heap
    #ifndef STRINGS_H_
    #define STRINGS_H_
    #include <cstring>
    #include <iostream>
    
    class String {
      char* s;
      String(const char* S) {
        s = new char[std::strlen(S) + 1];
        std::strcpy(s, S);
      }
      // Prevent copying:
      String(const String&);
      void operator=(String&);
    public:
      // Only make Strings on the heap:
      friend String* makeString(const char* S) {
        return new String(S);
      }
      // Alternate approach:
      static String* make(const char* S) {
        return new String(S);
      }
      ~String() { delete []s; }
      operator char*() const { return s; }
      char* str() const { return s; }
      friend std::ostream&
        operator<<(std::ostream& os, const String& S) {
          return os << S.s;
      }
    };
    #endif // STRINGS_H_ ///:~

    To restrict what the user can do with this class, the main constructor is made private, so no one can use it but you. In addition, the copy-constructor is declared private but never defined, because you want to prevent anyone from using it, and the same goes for the operator=. The only way for the user to create an object is to call a special function that creates a String on the heap (so you know all String objects are created on the heap) and returns its pointer.

    There are two approaches to this function. For ease of use, it can be a global friend function (called makeString( )), but if you don't want to pollute the global name space, you can make it a static member function (called make( )) and call it by saying String::make( ). The latter form has the benefit of more explicitly belonging to the class.

    In the constructor, note the expression:

    s = new char[strlen(S) + 1];

    The square brackets mean that an array of objects is being created (in this case, an array of char), and the number inside the brackets is the number of objects to create. This is how you create an array at run-time.

    The automatic type conversion to char* means that you can use a String object anywhere you need a char*. In addition, an iostream output operator extends the iostream library to handle String objects.

    Stash for pointers

    This version of the Stash class, which you last saw in Chapter 4, is changed to reflect all the new material introduced since Chapter 4. In addition, the new PStash holds pointers to objects that exist by themselves on the heap, whereas the old Stash in Chapter 4 and earlier copied the objects into the Stash container. With the introduction of new and delete, it's easy and safe to hold pointers to objects that have been created on the heap.

    Hereís the header file for the "pointer Stash":

    //: C13:PStash.h
    // Holds pointers instead of objects
    #ifndef PSTASH_H_
    #define PSTASH_H_
    
    class PStash {
      int quantity; // Number of storage spaces
      int next; // Next empty space
       // Pointer storage:
      void** storage;
      void inflate(int increase);
    public:
      PStash() {
        quantity = 0;
        storage = 0;
        next = 0;
      }
      // No ownership:
      ~PStash() { delete []storage; }
      int add(void* element);
      void* operator[](int index) const; // Fetch
      // Number of elements in Stash:
      int count() const { return next; }
    };
    #endif // PSTASH_H_ ///:~

    The underlying data elements are fairly similar, but now storage is an array of void pointers, and the allocation of storage for that array is performed with new instead of malloc( ). In the expression

    void** st = new void*[quantity + increase];

    the type of object allocated is a void*, so the expression allocates an array of void pointers.

    The destructor deletes the storage where the void pointers are held, rather than attempting to delete what they point at (which, as previously noted, will release their storage and not call the destructors because a void pointer has no type information).

    The other change is the replacement of the fetch( ) function with operator[ ], which makes more sense syntactically. Again, however, a void* is returned, so the user must remember what types are stored in the container and cast the pointers when fetching them out (a problem which will be repaired in future chapters).

    Here are the member function definitions:

    //: C13:PStash.cpp {O}
    // Pointer Stash definitions
    #include "PStash.h"
    #include <iostream>
    #include <cstring> // Mem functions
    using namespace std;
    
    int PStash::add(void* element) {
      const int InflateSize = 10;
      if(next >= quantity)
        inflate(InflateSize);
      storage[next++] = element;
      return(next - 1); // Index number
    }
    
    // Operator overloading replacement for fetch
    void* PStash::operator[](int index) const {
      if(index >= next || index < 0)
        return 0;  // Out of bounds
      // Produce pointer to desired element:
      return storage[index];
    }
    
    void PStash::inflate(int increase) {
      const int psz = sizeof(void*);
      // realloc() is cleaner than this:
      void** st = new void*[quantity + increase];
      memset(st, 0, (quantity + increase) * psz);
      memcpy(st, storage, quantity * psz);
      quantity += increase;
      delete []storage; // Old storage
      storage = st; // Point to new memory
    } ///:~

The add( ) function is effectively the same as before, except that the pointer is stored instead of a copy of the whole object, which, as you've seen, actually requires a copy-constructor for normal objects.

The inflate( ) code is actually more complicated and less efficient than in the earlier version. This is because realloc( ), which was used before, can resize an existing chunk of memory or, failing that, automatically copy the contents of your old chunk to a bigger piece. In either event you don't have to worry about it, and it's potentially faster if memory doesn't have to be moved. There's no equivalent of realloc( ) with new, however, so in this example you always have to allocate a bigger chunk, perform a copy, and delete the old chunk. In this situation it might make sense to use malloc( ), realloc( ), and free( ) in the underlying implementation rather than new and delete. Fortunately, the implementation is hidden, so the client programmer will remain blissfully ignorant of these kinds of changes. Also, the malloc( ) family of functions is guaranteed to coexist safely with new and delete, as long as you don't mix calls on the same chunk of memory, so this is a completely plausible thing to do.

    A test

    Hereís the old test program for Stash rewritten for the PStash:

    //: C13:Pstest.cpp
    //{L} PStash
    // Test of pointer stash
    #include <iostream>
    #include <fstream>
    #include "../require.h"
    #include "PStash.h"
    #include "Strings.h"
    using namespace std;
    
    int main() {
      PStash intStash;
      // new works with built-in types, too:
      for(int i = 0; i < 25; i++)
        intStash.add(new int(i)); // Pseudo-constr.
      for(int u = 0; u < intStash.count(); u++)
        cout << "intStash[" << u << "] = "
             << *(int*)intStash[u] << endl;
    
      ifstream infile("pstest.cpp");
      assure(infile, "pstest.cpp");
      const int bufsize = 80;
      char buf[bufsize];
      PStash stringStash;
      // Use global function makeString:
      for(int j = 0; j < 10; j++)
        if(infile.getline(buf, bufsize))
          stringStash.add(makeString(buf));
      // Use static member make:
      while(infile.getline(buf, bufsize))
        stringStash.add(String::make(buf));
      // Print out the strings:
      for(int v = 0; stringStash[v]; v++) {
        char* p = *(String*)stringStash[v];
        cout << "stringStash[" << v << "] = "
             << p << endl;
      }
    } ///:~

    As before, Stashes are created and filled with information, but this time the information is the pointers resulting from new-expressions. In the first case, note the line:

    intStash.add(new int(i));

    The expression new int(i) uses the pseudoconstructor form, so storage for a new int object is created on the heap, and the int is initialized to the value i.

Note that during printing, the value returned by PStash::operator[ ] must be cast to the proper type; this is repeated for the rest of the PStash objects in the program. It's an undesirable effect of using void pointers as the underlying representation and will be fixed in later chapters.

The second test opens the source code file and reads it into another PStash, converting each line into a String object. You can see that both makeString( ) and String::make( ) are used to show the difference between the two. The static member is probably the better approach because it's more explicit.

    When fetching the pointers back out, you see the expression:

    char* p = *(String*)stringStash[v];

    The pointer returned from operator[ ] must be cast to a String* to give it the proper type. Then the String* is dereferenced so the expression evaluates to an object, at which point the compiler sees a String object when it wants a char*, so it calls the automatic type conversion operator in String to produce a char*.

In this example, the objects created on the heap are never destroyed. This is not harmful here because the storage is released when the program ends, but it's not something you want to do in practice. It will be fixed in later chapters.

    The stack

    The Stack benefits greatly from all the features introduced since Chapter 3. Hereís the new header file:

    //: C13:Stack11.h
    // New version of Stack
    #ifndef STACK11_H_
    #define STACK11_H_
    
    class Stack {
      struct link {
        void* data;
        link* next;
        link(void* Data, link* Next) {
          data = Data;
          next = Next;
        }
      } * head;
    public:
      Stack() { head = 0; }
      ~Stack();
      void push(void* Data) {
        head = new link(Data,head);
      }
      void* peek() const { return head->data; }
      void* pop();
    };
    #endif // STACK11_H_ ///:~

    The nested struct link can now have its own constructor because in Stack::push( ) the use of new safely calls that constructor. (And notice how much cleaner the syntax is, which reduces potential bugs.) The link::link( ) constructor simply initializes the data and next pointers, so in Stack::push( ) the line

    head = new link(Data,head);

    not only allocates a new link, but neatly initializes the pointers for that link.

    The rest of the logic is virtually identical to what it was in Chapter 3. Here is the implementation of the two remaining (non-inline) functions:

    //: C13:Stack11.cpp {O}
    // New version of Stack
    #include "Stack11.h"
    
    void* Stack::pop() {
      if(head == 0) return 0;
      void* result = head->data;
      link* oldHead = head;
      head = head->next;
      delete oldHead;
      return result;
    }
    
    Stack::~Stack() {
      link* cursor = head;
      while(head) {
        cursor = cursor->next;
        delete head;
        head = cursor;
      }
    } ///:~

    The only difference is the use of delete instead of free( ) in the destructor.

As with the Stash, the use of void pointers means that the objects created on the heap cannot be destroyed by the Stack, so again there is the possibility of an undesirable memory leak if the user doesn't take responsibility for the pointers in the Stack. You can see this in the test program:

    //: C13:Stktst11.cpp
    //{L} Stack11
    // Test new Stack
    #include <iostream>
    #include <fstream>
    #include "../require.h"
    #include "Stack11.h"
    #include "Strings.h"
    using namespace std;
    
    int main() {
      // Could also use command-line argument:
      ifstream file("stktst11.cpp");
      assure(file, "stktst11.cpp");
      const int bufsize = 100;
      char buf[bufsize];
      Stack textlines;
      // Read file and store lines in the Stack:
      while(file.getline(buf,bufsize))
        textlines.push(String::make(buf));
      // Pop lines from the Stack and print them:
      String* s;
      while((s = (String*)textlines.pop()) != 0)
        cout << *s << endl;
    } ///:~

As with the Stash example, a file is opened and each line is turned into a String object, which is stored in a Stack and then printed. This program doesn't delete the pointers in the Stack, and the Stack itself doesn't do it, so that memory is lost.

    new & delete for arrays

In C++, you can create arrays of objects on the stack or on the heap with equal ease, and (of course) the constructor is called for each object in the array. There's one constraint, however: There must be a default constructor, except for aggregate initialization on the stack (see Chapter 3), because a constructor with no arguments must be called for every object.

    When creating arrays of objects on the heap using new, thereís something else you must do. An example of such an array is

    Foo* fp = new Foo[100];

This allocates enough storage on the heap for 100 Foo objects and calls the constructor for each one. Now, however, you simply have a Foo*, which is exactly the same as you'd get if you said

    Foo* fp2 = new Foo;

    to create a single object. Because you wrote the code, you know that fp is actually the starting address of an array, so it makes sense to select array elements with fp[2]. But what happens when you destroy the array? The statements

    delete fp2; // OK

    delete fp; // Not the desired effect

look exactly the same, and their effect will be the same: The destructor will be called for the Foo object pointed to by the given address, and then the storage will be released. For fp2 this is fine, but for fp this means the other 99 destructor calls won't be made. The proper amount of storage will still be released, however, because it is allocated in one big chunk, and the size of the whole chunk is stashed somewhere by the allocation routine.

    The solution requires you to give the compiler the information that this is actually the starting address of an array. This is accomplished with the following syntax:

    delete []fp;

    The empty brackets tell the compiler to generate code that fetches the number of objects in the array, stored somewhere when the array is created, and calls the destructor for that many array objects. This is actually an improved syntax from the earlier form, which you may still occasionally see in old code:

    delete [100]fp;

    which forced the programmer to include the number of objects in the array and introduced the possibility that the programmer would get it wrong. The additional overhead of letting the compiler handle it was very low, and it was considered better to specify the number of objects in one place rather than two.

    Making a pointer more like an array

As an aside, the fp defined above can be changed to point to anything, which doesn't make sense for the starting address of an array. It makes more sense to define it as a constant, so any attempt to modify the pointer will be flagged as an error. To get this effect, you might try

    int const* q = new int[10];

    or

    const int* q = new int[10];

    but in both cases the const will bind to the int, that is, what is being pointed to, rather than the quality of the pointer itself. Instead, you must say

    int* const q = new int[10];

    Now the array elements in q can be modified, but any change to q itself (like q++) is illegal, as it is with an ordinary array identifier.

    Running out of storage

    What happens when the operator new cannot find a contiguous block of storage large enough to hold the desired object? A special function called the new-handler is called. Or rather, a pointer to a function is checked, and if the pointer is nonzero, then the function it points to is called.

The default behavior for the new-handler is to throw an exception, the subject covered in Chapter 16. However, if you're using heap allocation in your program, it's wise to at least replace the new-handler with a message that says you've run out of memory and then aborts the program. That way, during debugging, you'll have a clue about what happened. For the final program you'll want to use more robust recovery.

You replace the new-handler by including <new> and then calling set_new_handler( ) with the address of the function you want installed:

    //: C13:Newhandl.cpp
    // Changing the new-handler
    #include <iostream>
    #include <cstdlib>
    #include <new>
    using namespace std;
    
    void out_of_memory() {
      cerr << "memory exhausted!" << endl;
      exit(1);
    }
    
    int main() {
      set_new_handler(out_of_memory);
      while(1)
        new int[1000]; // Exhausts memory
    } ///:~

The new-handler function must take no arguments and have a void return value. The while loop will keep allocating arrays of int (and throwing away the addresses they return) until the free store is exhausted. At the very next call to new, no storage can be allocated, so the new-handler will be called.

    Of course, you can write more sophisticated new-handlers, even one to try to reclaim memory (commonly known as a garbage collector). This is not a job for the novice programmer.

    Overloading new & delete

    When you create a new-expression, two things occur: First, storage is allocated using the operator new, then the constructor is called. In a delete-expression, the destructor is called, then storage is deallocated using the operator delete. The constructor and destructor calls are never under your control (otherwise you might accidentally subvert them), but you can change the storage allocation functions operator new and operator delete.

The memory allocation system used by new and delete is designed for general-purpose use. In special situations, however, it doesn't serve your needs. The most common reason to change the allocator is efficiency: You might be creating and destroying so many objects of a particular class that it has become a speed bottleneck. C++ allows you to overload new and delete to implement your own storage allocation scheme, so you can handle problems like this.

Another issue is heap fragmentation: By allocating objects of different sizes it's possible to break up the heap so that you effectively run out of storage. That is, the storage might be available, but because of fragmentation no piece is big enough to satisfy your needs. By creating your own allocator for a particular class, you can ensure this never happens.

In embedded and real-time systems, a program may have to run for a very long time with restricted resources. Such a system may also require that memory allocation always take the same amount of time, and there's no allowance for heap exhaustion or fragmentation. A custom memory allocator is the solution; otherwise programmers will avoid using new and delete altogether in such cases and miss out on a valuable C++ asset.

When you overload operator new and operator delete, it's important to remember that you're changing only the way raw storage is allocated. The compiler will simply call your new instead of the default version to allocate storage, then call the constructor for that storage. So, although the compiler allocates storage and calls the constructor when it sees new, all you can change when you overload new is the storage allocation portion. (delete has a similar limitation.)

    When you overload operator new, you also replace the behavior when it runs out of memory, so you must decide what to do in your operator new: return zero, write a loop to call the new-handler and retry allocation, or (typically) throw a bad_alloc exception (discussed in Chapter 16).

    Overloading new and delete is like overloading any other operator. However, you have a choice of overloading the global allocator or using a different allocator for a particular class.

    Overloading global new & delete

This is the drastic approach, when the global versions of new and delete are unsatisfactory for the whole system. If you overload the global versions, you make the defaults completely inaccessible; you can't even call them from inside your redefinitions.

The overloaded new must take an argument of size_t (the Standard C type for sizes). This argument is generated and passed to you by the compiler and is the size of the object you're responsible for allocating. You must return either a pointer to an object of that size (or bigger, if you have some reason to do so), or zero if you can't find the memory (in which case the constructor is not called!). However, if you can't find the memory, you should probably do something more drastic than just returning zero, like calling the new-handler or throwing an exception, to signal that there's a problem.

The return value of operator new is a void*, not a pointer to any particular type. All you've done is produce memory, not a finished object; that doesn't happen until the constructor is called, an act the compiler guarantees and which is out of your control.

The operator delete takes a void* to memory that was allocated by operator new. It's a void* because you get that pointer after the destructor is called, which removes the object-ness from the piece of storage. The return type is void.

    Hereís a very simple example showing how to overload the global new and delete:

    //: C13:GlobalNew.cpp
    // Overload global new/delete
    #include <cstdio>
    #include <cstdlib>
    using namespace std;
    
    void* operator new(size_t sz) {
      printf("operator new: %lu Bytes\n", (unsigned long)sz);
      void* m = malloc(sz);
      if(!m) puts("out of memory");
      return m;
    }
    
    void operator delete(void* m) {
      puts("operator delete");
      free(m);
    }
    
    class S {
      int i[100];
    public:
      S() { puts("S::S()"); }
      ~S() { puts("S::~S()"); }
    };
    
    int main() {
      puts("creating & destroying an int");
      int* p = new int(47);
      delete p;
      puts("creating & destroying an s");
      S* s = new S;
      delete s;
      puts("creating & destroying S[3]");
      S* sa = new S[3];
      delete []sa;
    } ///:~

Here you can see the general form for overloading new and delete. These use the Standard C library functions malloc( ) and free( ) for the allocators (which is probably what the default new and delete use, as well!). However, they also print out messages about what they are doing. Notice that printf( ) and puts( ) are used rather than iostreams. This is because when an iostream object is initialized (like the global cin, cout, and cerr), it calls new to allocate memory. With printf( ), you don't get into a recursive mess, because it doesn't call new to initialize itself.

In main( ), objects of built-in types are created to prove that the overloaded new and delete are also called in that case. Then a single object of type S is created, followed by an array. For the array, you'll see that extra memory is requested to put information about the number of objects in the array. In all cases, the global overloaded versions of new and delete are used.

    Overloading new & delete for a class

Although you don't have to explicitly say static, when you overload new and delete for a class, you're creating static member functions. Again, the syntax is the same as overloading any other operator. When the compiler sees you use new to create an object of your class, it chooses the member operator new over the global version. However, the global versions of new and delete are used for all other types of objects (unless they have their own new and delete).

    In the following example, a very primitive storage allocation system is created for the class Framis. A chunk of memory is set aside in the static data area at program start-up, and that memory is used to allocate space for objects of type Framis. To determine which blocks have been allocated, a simple array of bytes is used, one byte for each block:

    //: C13:Framis.cpp
    // Local overloaded new & delete
    #include <cstddef> // size_t
    #include <fstream>
    using namespace std;
    ofstream out("Framis.out");
    
    class Framis {
      char c[10];
      static unsigned char pool[];
      static unsigned char alloc_map[];
    public:
      enum { psize = 100 };  // # of frami allowed
      Framis() { out << "Framis()\n"; }
      ~Framis() { out << "~Framis() ... "; }
      void* operator new(size_t);
      void operator delete(void*);
    };
    unsigned char Framis::pool[psize * sizeof(Framis)];
    unsigned char Framis::alloc_map[psize] = {0};
    
    // Size is ignored -- assume a Framis object
    void* Framis::operator new(size_t) {
      for(int i = 0; i < psize; i++)
        if(!alloc_map[i]) {
          out << "using block " << i << " ... ";
          alloc_map[i] = 1; // Mark it used
          return pool + (i * sizeof(Framis));
        }
      out << "out of memory" << endl;
      return 0;
    }
    
    void Framis::operator delete(void* m) {
      if(!m) return; // Check for null pointer
      // Assume it was created in the pool
      // Calculate which block number it is:
      unsigned long block = (unsigned long)m
        - (unsigned long)pool;
      block /= sizeof(Framis);
      out << "freeing block " << block << endl;
      // Mark it free:
      alloc_map[block] = 0;
    }
    
    int main() {
      Framis* f[Framis::psize];
      for(int i = 0; i < Framis::psize; i++)
        f[i] = new Framis;
      new Framis; // Out of memory
      delete f[10];
      f[10] = 0;
      // Use released memory:
      Framis* x = new Framis;
      delete x;
      for(int j = 0; j < Framis::psize; j++)
        delete f[j]; // Delete f[10] OK
    } ///:~

The pool of memory for the Framis heap is created by allocating an array of bytes large enough to hold psize Framis objects. The allocation map is psize bytes long, so there's one byte for every block. All the bytes in the allocation map are initialized to zero using the aggregate initialization trick of setting the first element to zero so the compiler automatically initializes all the rest.

The local operator new has the same form as the global one. All it does is search through the allocation map looking for a zero byte, then sets that byte to one to indicate it's been allocated and returns the address of that particular block. If it can't find any memory, it issues a message and returns zero. (Notice that the new-handler is not called and no exceptions are thrown, because the behavior when you run out of memory is now under your control.) In this example, it's OK to use iostreams because the global operator new and delete are untouched.

The operator delete assumes the Framis address was created in the pool. This is a fair assumption, because the local operator new will be called whenever you create a single Framis object on the heap, but not an array; global new is used in that case. So the user might accidentally have called operator delete without using the empty bracket syntax to indicate array destruction. This would cause a problem. Also, the user might be deleting a pointer to an object created on the stack. If you think these things could occur, you might want to add a line to make sure the address is within the pool and on a correct boundary.

operator delete calculates which block in the pool this pointer represents, and then sets the allocation map's flag for that block to zero to indicate the block has been released.

    In main( ), enough Framis objects are dynamically allocated to run out of memory; this checks the out-of-memory behavior. Then one of the objects is freed, and another one is created to show that the released memory is reused.

Because this allocation scheme is specific to Framis objects, it's probably much faster than the general-purpose memory allocation scheme used for the default new and delete.

    Overloading new & delete for arrays

If you overload operator new and delete for a class, those operators are called whenever you create an object of that class. However, if you create an array of those class objects, the global operator new( ) is called to allocate enough storage for the array all at once, and the global operator delete( ) is called to release that storage. You can control the allocation of arrays of objects by overloading the special array versions of operator new[ ] and operator delete[ ] for the class. Here's an example that shows when the two different versions are called:

    //: C13:ArrayNew.cpp
    // Operator new for arrays
    #include <new> // size_t definition
    #include <fstream>
    using namespace std;
    ofstream trace("ArrayNew.out");
    
    class Widget {
      int i[10];
    public:
      Widget() { trace << "*"; }
      ~Widget() { trace << "~"; }
      void* operator new(size_t sz) {
        trace << "Widget::new: "
             << sz << " bytes" << endl;
        return ::new char[sz];
      }
      void operator delete(void* p) {
        trace << "Widget::delete" << endl;
        ::delete []p;
      }
      void* operator new[](size_t sz) {
        trace << "Widget::new[]: "
             << sz << " bytes" << endl;
        return ::new char[sz];
      }
      void operator delete[](void* p) {
        trace << "Widget::delete[]" << endl;
        ::delete []p;
      }
    };
    
    int main() {
      trace << "new Widget" << endl;
      Widget* w = new Widget;
      trace << "\ndelete Widget" << endl;
      delete w;
      trace << "\nnew Widget[25]" << endl;
      Widget* wa = new Widget[25];
      trace << "\ndelete []Widget" << endl;
      delete []wa;
    } ///:~

    Here, the global versions of new and delete are called so the effect is the same as having no overloaded versions of new and delete except that trace information is added. Of course, you can use any memory allocation scheme you want in the overloaded new and delete.

You can see that the array versions of new and delete are the same as the individual-object versions with the addition of the brackets. In both cases you're handed the size of the memory you must allocate. The size handed to the array version will be the size of the entire array. It's worth keeping in mind that the only thing the overloaded operator new is required to do is hand back a pointer to a large enough memory block. Although you may perform initialization on that memory, normally that's the job of the constructor that will automatically be called for your memory by the compiler.

The constructor and destructor simply print out characters so you can see when they've been called. Here's what the trace file looks like for one compiler:

    new Widget

    Widget::new: 20 bytes

    *

    delete Widget

    ~Widget::delete

    new Widget[25]

    Widget::new[]: 504 bytes

    *************************

    delete []Widget

    ~~~~~~~~~~~~~~~~~~~~~~~~~Widget::delete[]

Creating an individual object requires 20 bytes, as you might expect. (This machine uses two bytes for an int.) The operator new is called, then the constructor (indicated by the *). In a complementary fashion, calling delete causes the destructor to be called, then the operator delete.

    When an array of Widget objects is created, the array version of operator new is used, as promised. But notice that the size requested is four more bytes than expected. This extra four bytes is where the system keeps information about the array, in particular, the number of objects in the array. That way, when you say

    delete []wa;

the brackets tell the compiler it's an array of objects, so the compiler generates code to look for the number of objects in the array and to call the destructor that many times.

    You can see that, even though the array operator new and operator delete are only called once for the entire array chunk, the default constructor and destructor are called for each object in the array.

    Constructor calls

    Considering that

    Foo* f = new Foo;

calls new to allocate a Foo-sized piece of storage, then invokes the Foo constructor on that storage, what happens if all the safeguards fail and the value returned by operator new is zero? The constructor is not called in that case, so although you still have an unsuccessfully created object, at least you haven't invoked the constructor and handed it a zero pointer. Here's an example to prove it:

    //: C13:NoMemory.cpp
    // Constructor isn't called
    // If new returns 0
    #include <iostream>
    #include <new> // size_t definition
    using namespace std;
    
    void my_new_handler() {
      cout << "new handler called" << endl;
    }
    
    class NoMemory {
    public:
      NoMemory() {
        cout << "NoMemory::NoMemory()" << endl;
      }
      void* operator new(size_t sz) {
        cout << "NoMemory::operator new" << endl;
        return 0; // "Out of memory"
      }
    };
    
    int main() {
      set_new_handler(my_new_handler);
      NoMemory* nm = new NoMemory;
      cout << "nm = " << nm << endl;
    } ///:~

    When the program runs, it prints only the message from operator new. Because new returns zero, the constructor is never called so its message is not printed.

    Object placement

    There are two other, less common, uses for overloading operator new.

  1. You may want to place an object in a specific location in memory. This is especially important with hardware-oriented embedded systems where an object may be synonymous with a particular piece of hardware.
  2. You may want to be able to choose from different allocators when calling new.

    Both of these situations are solved with the same mechanism: The overloaded operator new can take more than one argument. As you've seen before, the first argument is always the size of the object, which is secretly calculated and passed by the compiler. But the other arguments can be anything you want: the address you want the object placed at, a reference to a memory allocation function or object, or anything else that is convenient for you.

The way you pass the extra arguments to operator new during a call may seem slightly curious at first: You put the argument list (without the size_t argument, which is handled by the compiler) after the keyword new and before the class name of the object you're creating. For example,

    X* xp = new(a) X;

    will pass a as the second argument to operator new. Of course, this can work only if such an operator new has been declared.

    Here's an example showing how you can place an object at a particular location:

    //: C13:PlacementNew.cpp
    // Placement with operator new
    #include <cstddef> // size_t
    #include <iostream>
    using namespace std;
    
    class X {
      int i;
    public:
      X(int I = 0) { i = I; }
      ~X() {
        cout << "X::~X()" << endl;
      }
      void* operator new(size_t, void* loc) {
        return loc;
      }
    };
    
    int main() {
      int l[10];
      X* xp = new(l) X(47); // X at location l
      xp->X::~X(); // Explicit destructor call
      // ONLY use with placement!
    } ///:~

    Notice that operator new only returns the pointer that's passed to it. Thus, the caller decides where the object is going to sit, and the constructor is called for that memory as part of the new-expression.

    A dilemma occurs when you want to destroy the object. There's only one version of operator delete, so there's no way to say, "Use my special deallocator for this object." You want to call the destructor, but you don't want the memory to be released by the dynamic memory mechanism because it wasn't allocated on the heap.

    The answer is a very special syntax: You can explicitly call the destructor, as in

    xp->X::~X(); // Explicit destructor call

    A stern warning is in order here. Some people see this as a way to destroy objects at some time before the end of the scope, rather than either adjusting the scope or (more correctly) using dynamic object creation if they want the object's lifetime to be determined at run-time. You will have serious problems if you call the destructor this way for an object created on the stack because the destructor will be called again at the end of the scope. If you call the destructor this way for an object that was created on the heap, the destructor will execute, but the memory won't be released, which probably isn't what you want. The only reason that the destructor can be called explicitly this way is to support the placement syntax for operator new.

    Although this example shows only one additional argument, there's nothing to prevent you from adding more if you need them for other purposes.

    Summary

    It's convenient and optimally efficient to create automatic objects on the stack, but to solve the general programming problem you must be able to create and destroy objects at any time during a program's execution, particularly to respond to information from outside the program. Although C's dynamic memory allocation will get storage from the heap, it doesn't provide the ease of use and guaranteed construction necessary in C++. By bringing dynamic object creation into the core of the language with new and delete, you can create objects on the heap as easily as making them on the stack. In addition, you get a great deal of flexibility. You can change the behavior of new and delete if they don't suit your needs, particularly if they aren't efficient enough. Also, you can modify what happens when the heap runs out of storage. (However, exception handling, described in Chapter 16, also comes into play here.)

    Exercises

  1. Prove to yourself that new and delete always call the constructors and destructors by creating a class with a constructor and destructor that announce themselves through cout. Create an object of that class with new, and destroy it with delete. Also create and destroy an array of these objects on the heap.
  2. Create a PStash object, and fill it with new objects from Exercise 1. Observe what happens when this PStash object goes out of scope and its destructor is called.
  3. Create a class with an overloaded operator new and delete, both the single-object versions and the array versions. Demonstrate that both versions work.
  4. Devise a test for FRAMIS.CPP to show yourself approximately how much faster the custom new and delete run than the global new and delete.

    14: Inheritance & composition

    One of the most compelling features about C++ is code reuse. But to be revolutionary, you've got to be able to do a lot more than copy code and change it.

    That's the C approach, and it hasn't worked very well. As with most everything in C++, the solution revolves around the class. You reuse code by creating new classes, but instead of creating them from scratch, you use existing classes that someone else has built and debugged.

    The trick is to use the classes without soiling the existing code. In this chapter you'll see two ways to accomplish this. The first is quite straightforward: You simply create objects of your existing class inside the new class. This is called composition because the new class is composed of objects of existing classes.

    The second approach is more subtle. You create a new class as a type of an existing class. You literally take the form of the existing class and add code to it, without modifying the existing class. This magical act is called inheritance, and most of the work is done by the compiler. Inheritance is one of the cornerstones of object-oriented programming and has additional implications that will be explored in the next chapter.

    It turns out that much of the syntax and behavior are similar for both composition and inheritance (which makes sense; they are both ways of making new types from existing types). In this chapter, you'll learn about these code reuse mechanisms.

    Composition syntax

    Actually, you've been using composition all along to create classes. You've just been composing classes using built-in types. It turns out to be almost as easy to use composition with user-defined types.

    Consider an existing class that is valuable for some reason:

    //: C14:Useful.h
    // A class to reuse
    #ifndef USEFUL_H_
    #define USEFUL_H_
    
    class X {
      int i;
      enum { factor = 11 };
    public:
      X() { i = 0; }
      void set(int I) { i = I; }
      int read() const { return i; }
      int permute() { return i = i * factor; }
    };
    #endif // USEFUL_H_ ///:~

    The data members are private in this class, so it's completely safe to embed an object of type X as a public object in a new class, which makes the interface straightforward:

    //: C14:Compose.cpp
    // Reuse code with composition
    #include "Useful.h"
    
    class Y {
      int i;
    public:
      X x; // Embedded object
      Y() { i = 0; }
      void f(int I) { i = I; }
      int g() const { return i; }
    };
    
    int main() {
      Y y;
      y.f(47);
      y.x.set(37); // Access the embedded object
    } ///:~

    Accessing the member functions of the embedded object (referred to as a subobject) simply requires another member selection.

    It's probably more common to make the embedded objects private, so they become part of the underlying implementation (which means you can change the implementation if you want). The public interface functions for your new class then involve the use of the embedded object, but they don't necessarily mimic the object's interface:

    //: C14:Compose2.cpp
    // Private embedded objects
    #include "Useful.h"
    
    class Y {
      int i;
      X x; // Embedded object
    public:
      Y() { i = 0; }
      void f(int I) { i = I; x.set(I); }
      int g() const { return i * x.read(); }
      void permute() { x.permute(); }
    };
    
    int main() {
      Y y;
      y.f(47);
      y.permute();
    } ///:~

    Here, the permute( ) function is carried through to the new class interface, but the other member functions of X are used within the members of Y.

    Inheritance syntax

    The syntax for composition is obvious, but to perform inheritance there's a new and different form.

    When you inherit, you are saying, "This new class is like that old class." You state this in code by giving the name of the class, as usual, but before the opening brace of the class body, you put a colon and the name of the base class (or classes, for multiple inheritance). When you do this, you automatically get all the data members and member functions in the base class. Here's an example:

    //: C14:Inherit.cpp
    // Simple inheritance
    #include "Useful.h"
    #include <iostream>
    using namespace std;
    
    class Y : public X {
      int i; // Different from X's i
    public:
      Y() { i = 0; }
      int change() {
        i = permute(); // Different name call
        return i;
      }
      void set(int I) {
        i = I;
        X::set(I); // Same-name function call
      }
    };
    
    int main() {
      cout << "sizeof(X) = " << sizeof(X) << endl;
      cout << "sizeof(Y) = "
           << sizeof(Y) << endl;
      Y D;
      D.change();
      // X function interface comes through:
      D.read();
      D.permute();
      // Redefined functions hide base versions:
      D.set(12);
    } ///:~

    In Y you can see inheritance going on, which means that Y will contain all the data elements in X and all the member functions in X. In fact, Y contains a subobject of X just as if you had created a member object of X inside Y instead of inheriting from X. Both member objects and base class storage are referred to as subobjects.

    In main( ) you can see that the data elements are added because the sizeof(Y) is twice as big as sizeof(X).

    You'll notice that the base class is preceded by public. During inheritance, everything defaults to private, which means all the public members of the base class are private in the derived class. This is almost never what you want; the desired result is to keep all the public members of the base class public in the derived class. You do this by using the public keyword during inheritance.

    In change( ), the base-class permute( ) function is called. The derived class has direct access to all the public base-class functions.

    The set( ) function in the derived class redefines the set( ) function in the base class. That is, if you call the functions read( ) and permute( ) for an object of type Y, you'll get the base-class versions of those functions (you can see this happen inside main( )), but if you call set( ) for a Y object, you get the redefined version. This means that if you don't like the version of a function you get during inheritance, you can change what it does. (You can also add completely new functions like change( ).)

    However, when you're redefining a function, you may still want to call the base-class version. If, inside set( ), you simply call set( ) you'll get the local version of the function, a recursive function call. To call the base-class version, you must explicitly name it, using the base-class name and the scope resolution operator.

    The constructor initializer list

    You've seen how important it is in C++ to guarantee proper initialization, and it's no different during composition and inheritance. When an object is created, the compiler guarantees that constructors for all its subobjects are called. In the examples so far, all the subobjects have default constructors, and that's what the compiler automatically calls. But what happens if your subobjects don't have default constructors, or if you want to change a default argument in a constructor? This is a problem because the new class constructor doesn't have permission to access the private data elements of the subobject, so it can't initialize them directly.

    The solution is simple: Call the constructor for the subobject. C++ provides a special syntax for this, the constructor initializer list. The form of the constructor initializer list echoes the act of inheritance. With inheritance, you put the base classes after a colon and before the opening brace of the class body. In the constructor initializer list, you put the calls to subobject constructors after the constructor argument list and a colon, but before the opening brace of the function body. For a class Foo, inherited from Bar, this might look like

    Foo::Foo(int i) : Bar(i) { // ...

    if Bar has a constructor that takes a single int argument.

    Member object initialization

    It turns out that you use this very same syntax for member object initialization when using composition. For composition, you give the names of the objects rather than the class names. If you have more than one constructor call in the initializer list, you separate the calls with commas:

    Foo2::Foo2(int i) : Bar(i), memb(i+1) { // ...

    This is the beginning of a constructor for class Foo2, which is inherited from Bar and contains a member object called memb. Note that while you can see the type of the base class in the constructor initializer list, you only see the member object identifier.

    Built-in types in the initializer list

    The constructor initializer list allows you to explicitly call the constructors for member objects. In fact, there's no other way to call those constructors. The idea is that the constructors are all called before you get into the body of the new class's constructor. That way, any calls you make to member functions of subobjects will always go to initialized objects. There's no way to get to the opening brace of the constructor without some constructor being called for all the member objects and base-class objects, even if the compiler must make a hidden call to a default constructor. This is a further enforcement of the C++ guarantee that no object (or part of an object) can get out of the starting gate without its constructor being called.

    This idea that all the member objects are initialized by the opening brace of the constructor is a convenient programming aid, as well. Once you hit the opening brace, you can assume all subobjects are properly initialized and focus on specific tasks you want to accomplish in the constructor. However, there's a hitch: What about embedded objects of built-in types, which don't have constructors?

    To make the syntax consistent, you're allowed to treat built-in types as if they have a single constructor, which takes a single argument: a variable of the same type as the variable you're initializing. Thus, you can say

    class X {
      int i;
      float f;
      char c;
      char* s;
    public:
      X() : i(7), f(1.4), c('x'), s("howdy") {}
      // ...

    The action of these "pseudoconstructor calls" is to perform a simple assignment. It's a convenient technique and a good coding style, so you'll often see it used.

    It's even possible to use the pseudoconstructor syntax when creating a variable of this type outside of a class:

    int i(100);

    This makes built-in types act a little bit more like objects. Remember, though, that these are not real constructors. In particular, if you don't explicitly make a pseudoconstructor call, no initialization is performed.

    Combining composition & inheritance

    Of course, you can use the two together. The following example shows the creation of a more complex class, using both inheritance and composition.

    //: C14:Combined.cpp
    // Inheritance & composition
    
    class A {
      int i;
    public:
      A(int I) { i = I; }
      ~A() {}
      void f() const {}
    };
    
    class B {
      int i;
    public:
      B(int I) { i = I; }
      ~B() {}
      void f() const {}
    };
    
    class C : public B {
      A a;
    public:
      C(int I) : B(I), a(I) {}
      ~C() {} // Calls ~A() and ~B()
      void f() const {  // Redefinition
        a.f();
        B::f();
      }
    };
    
    int main() {
      C c(47);
    } ///:~

    C inherits from B and has a member object ("is composed of") A. You can see the constructor initializer list contains calls to both the base-class constructor and the member-object constructor.

    The function C::f( ) redefines B::f( ) that it inherits, and also calls the base-class version. In addition, it calls a.f( ). Notice that the only time you can talk about redefinition of functions is during inheritance; with a member object you can only manipulate the public interface of the object, not redefine it. In addition, calling f( ) for an object of class C would not call a.f( ) if C::f( ) had not been defined, whereas it would call B::f( ).

    Automatic destructor calls

    Although you are often required to make explicit constructor calls in the initializer list, you never need to make explicit destructor calls because there's only one destructor for any class, and it doesn't take any arguments. However, the compiler still ensures that all destructors are called, and that means all the destructors in the entire hierarchy, starting with the most-derived destructor and working back to the root.

    It's worth emphasizing that constructors and destructors are quite unusual in that every one in the hierarchy is called, whereas with a normal member function only that function is called, but not any of the base-class versions. If you also want to call the base-class version of a normal member function that you're overriding, you must do it explicitly.

    Order of constructor & destructor calls

    It's interesting to know the order of constructor and destructor calls when an object has many subobjects. The following example shows exactly how it works:

    //: C14:Order.cpp
    // Constructor/destructor order
    #include <fstream>
    using namespace std;
    ofstream out("order.out");
    
    #define CLASS(ID) class ID { \
    public: \
      ID(int) { out << #ID " constructor\n"; } \
      ~ID() { out << #ID " destructor\n"; } \
    };
    
    CLASS(Base1);
    CLASS(Member1);
    CLASS(Member2);
    CLASS(Member3);
    CLASS(Member4);
    
    class Derived1 : public Base1 {
      Member1 m1;
      Member2 m2;
    public:
      Derived1(int) : m2(1), m1(2), Base1(3) {
        out << "Derived1 constructor\n";
      }
      ~Derived1() {
        out << "Derived1 destructor\n";
      }
    };
    
    class Derived2 : public Derived1 {
      Member3 m3;
      Member4 m4;
    public:
      Derived2() : m3(1), Derived1(2), m4(3) {
        out << "Derived2 constructor\n";
      }
      ~Derived2() {
        out << "Derived2 destructor\n";
      }
    };
    
    int main() {
      Derived2 d2;
    } ///:~

    First, an ofstream object is created to send all the output to a file. Then, to save some typing and demonstrate a macro technique that will be replaced by a much improved technique in Chapter 17, a macro is created to build some of the classes, which are then used in inheritance and composition. Each constructor and destructor reports itself to the trace file. Note that the constructors are not default constructors; they each have an int argument. The argument itself has no identifier; its only job is to force you to explicitly call the constructors in the initializer list. (Eliminating the identifier prevents compiler warning messages.)

    The output of this program is

    Base1 constructor
    Member1 constructor
    Member2 constructor
    Derived1 constructor
    Member3 constructor
    Member4 constructor
    Derived2 constructor
    Derived2 destructor
    Member4 destructor
    Member3 destructor
    Derived1 destructor
    Member2 destructor
    Member1 destructor
    Base1 destructor

    You can see that construction starts at the very root of the class hierarchy, and that at each level the base class constructor is called first, followed by the member object constructors. The destructors are called in exactly the reverse order of the constructors; this is important because of potential dependencies.

    It's also interesting that the order of constructor calls for member objects is completely unaffected by the order of the calls in the constructor initializer list. The order is determined by the order that the member objects are declared in the class. If you could change the order of constructor calls via the constructor initializer list, you could have two different call sequences in two different constructors, but the poor destructor wouldn't know how to properly reverse the order of the calls for destruction, and you could end up with a dependency problem.

    Name hiding

    If a base class has a function name that's overloaded several times, redefining that function name in the derived class will hide all the base-class versions. That is, they become unavailable in the derived class:

    //: C14:Hide.cpp
    // Name hiding during inheritance
    
    class Homer {
    public:
      int doh(int) const { return 1; }
      char doh(char) const { return 'd';}
      float doh(float) const { return 1.0; }
    };
    
    class Bart : public Homer {
    public:
      class Milhouse {};
      void doh(Milhouse) const {}
    };
    
    int main() {
      Bart b;
    //! b.doh(1); // Error
    //! b.doh('x'); // Error
    //! b.doh(1.0); // Error
    } ///:~

    Because Bart redefines doh( ), none of the base-class versions can be called for a Bart object. In each case, the compiler attempts to convert the argument into a Milhouse object and complains because it can't find a conversion.

    As you'll see in the next chapter, it's far more common to redefine functions using exactly the same signature and return type as in the base class.

    Functions that don't automatically inherit

    Not all functions are automatically inherited from the base class into the derived class. Constructors and destructors deal with the creation and destruction of an object, and they can know what to do with the aspects of the object only for their particular level, so all the constructors and destructors in the entire hierarchy must be called. Thus, constructors and destructors don't inherit.

    In addition, the operator= doesn't inherit because it performs a constructor-like activity. That is, just because you know how to initialize all the members of an object on the left-hand side of the = from an object on the right-hand side doesn't mean that initialization will still have meaning after inheritance.

    In lieu of inheritance, these functions are synthesized by the compiler if you don't create them yourself. (With constructors, you cannot have defined any constructors yourself if you want the default constructor and the copy-constructor to be created automatically.) This was briefly described in Chapter 10. The synthesized constructors use memberwise initialization and the synthesized operator= uses memberwise assignment. Here's an example of the functions that are created by the compiler rather than inherited:

    //: C14:Ninherit.cpp
    // Non-inherited functions
    #include <fstream>
    using namespace std;
    ofstream out("ninherit.out");
    
    class Root {
    public:
      Root() { out << "Root()\n"; }
      Root(Root&) { out << "Root(Root&)\n"; }
      Root(int) { out << "Root(int)\n"; }
      Root& operator=(const Root&) {
        out << "Root::operator=()\n";
        return *this;
      }
      class Other {};
      operator Other() const {
        out << "Root::operator Other()\n";
        return Other();
      }
      ~Root() { out << "~Root()\n"; }
    };
    
    class Derived : public Root {};
    
    void f(Root::Other) {}
    
    int main() {
      Derived d1;  // Default constructor
      Derived d2 = d1; // Copy-constructor
    //! Derived d3(1); // Error: no int constructor
      d1 = d2; // Operator= not inherited
      f(d1); // Type-conversion IS inherited
    } ///:~

    All the constructors and the operator= announce themselves so you can see when they're used by the compiler. In addition, the operator Other( ) performs automatic type conversion from a Root object to an object of the nested class Other. The class Derived simply inherits from Root and creates no functions (to see how the compiler responds). The function f( ) takes an Other object to test the automatic type conversion function.

    In main( ), the default constructor and copy-constructor are created and the Root versions are called as part of the constructor-call hierarchy. Even though it looks like inheritance, new constructors are actually created. As you might expect, no constructors with arguments are automatically created because that's too much for the compiler to intuit.

    The operator=( ) is also synthesized as a new function in Derived using memberwise assignment because that function was not explicitly written in the new class.

    Because of all these rules about rewriting functions that handle object creation, it may seem a little strange at first that the automatic type conversion operator is inherited. But it's not too unreasonable: if there are enough pieces in Root to make an Other object, those pieces are still there in anything derived from Root and the type conversion operator is still valid (even though you may in fact want to redefine it).

    Choosing composition vs. inheritance

    Both composition and inheritance place subobjects inside your new class. Both use the constructor initializer list to construct these subobjects. You may now be wondering what the difference is between the two, and when to choose one over the other.

    Composition is generally used when you want the features of an existing class inside your new class, but not its interface. That is, you embed an object that you're planning on using to implement features of your new class, but the user of your new class sees the interface you've defined rather than the interface from the original class. For this effect, you embed private objects of existing classes inside your new class.

    Sometimes it makes sense to allow the class user to directly access the composition of your new class, that is, to make the member objects public. The member objects use implementation hiding themselves, so this is a safe thing to do, and when the user knows you're assembling a bunch of parts, it makes the interface easier to understand. A car object is a good example:

    //: C14:Car.cpp
    // Public composition
    
    class Engine {
    public:
      void start() const {}
      void rev() const {}
      void stop() const {}
    };
    
    class Wheel {
    public:
      void inflate(int psi) const {}
    };
    
    class Window {
    public:
      void rollup() const {}
      void rolldown() const {}
    };
    
    class Door {
    public:
      Window window;
      void open() const {}
      void close() const {}
    };
    
    class Car {
    public:
      Engine engine;
      Wheel wheel[4];
      Door left, right; // 2-door
    };
    
    int main() {
      Car car;
      car.left.window.rollup();
      car.wheel[0].inflate(72);
    } ///:~

    Because the composition of a car is part of the analysis of the problem (and not simply part of the underlying design), making the members public assists the client programmer's understanding of how to use the class and requires less code complexity for the creator of the class.

    With a little thought, you'll also see that it would make no sense to compose a car using a vehicle object; a car doesn't contain a vehicle, it is a vehicle. The is-a relationship is expressed with inheritance, and the has-a relationship is expressed with composition.

    Subtyping

    Now suppose you want to create a type of ifstream object that not only opens a file but also keeps track of the name of the file. You can use composition and embed both an ifstream and an ostrstream into the new class:

    //: C14:FName1.cpp
    // An fstream with a file name
    #include <iostream>
    #include <fstream>
    #include <strstream>
    #include "../require.h"
    using namespace std;
    
    class FName1 {
      ifstream File;
      enum { bsize = 100 };
      char buf[bsize];
      ostrstream Name;
      int nameset;
    public:
      FName1() : Name(buf, bsize), nameset(0) {}
      FName1(const char* filename)
        : File(filename), Name(buf, bsize) {
          assure(File, filename);
          Name << filename << ends;
          nameset = 1;
      }
      const char* name() const { return buf; }
      void name(const char* newname) {
        if(nameset) return; // Don't overwrite
        Name << newname << ends;
        nameset = 1;
      }
      operator ifstream&() { return File; }
    };
    
    int main() {
      FName1 file("FName1.cpp");
      cout << file.name() << endl;
      // Error: rdbuf() not a member:
    //!  cout << file.rdbuf() << endl;
    } ///:~

    There's a problem here, however. An attempt is made to allow the use of the FName1 object anywhere an ifstream object is used, by including an automatic type conversion operator from FName1 to an ifstream&. But in main, the line

    cout << file.rdbuf() << endl;

    will not compile because automatic type conversion happens only in function calls, not during member selection. So this approach won't work.

    A second approach is to add the definition of rdbuf( ) to FName1:

    filebuf* rdbuf() { return File.rdbuf(); }

    This will work if there are only a few functions you want to bring through from the ifstream class. In that case you're only using part of the class, and composition is appropriate.

    But what if you want everything in the class to come through? This is called subtyping because you're making a new type from an existing type, and you want your new type to have exactly the same interface as the existing type (plus any other member functions you want to add), so you can use it everywhere you'd use the existing type. This is where inheritance is essential. You can see that subtyping solves the problem in the preceding example perfectly:

    //: C14:FName2.cpp
    // Subtyping solves the problem
    #include <iostream>
    #include <fstream>
    #include <strstream>
    #include "../require.h"
    using namespace std;
    
    class FName2 : public ifstream {
      enum { bsize = 100 };
      char buf[bsize];
      ostrstream fname;
      int nameset;
    public:
      FName2() : fname(buf, bsize), nameset(0) {}
      FName2(const char* filename)
        : ifstream(filename), fname(buf, bsize) {
          assure(*this, filename);
          fname << filename << ends;
          nameset = 1;
      }
      const char* name() const { return buf; }
      void name(const char* newname) {
        if(nameset) return; // Don't overwrite
        fname << newname << ends;
        nameset = 1;
      }
    };
    
    int main() {
      FName2 file("FName2.cpp");
      assure(file, "FName2.cpp");
      cout << "name: " << file.name() << endl;
      const int bsize = 100;
      char buf[bsize];
      file.getline(buf, bsize); // This works too!
      file.seekg(-200, ios::end);
      cout << file.rdbuf() << endl;
    } ///:~

    Now any member function that works with an ifstream object also works with an FName2 object. That's because an FName2 is a type of ifstream; it doesn't simply contain one. This is a very important issue that will be explored at the end of this chapter and in the next chapter.

    Specialization

    When you inherit, you take an existing class and make a special version of it. Generally, this means you're taking a general-purpose class and specializing it for a particular need.

    For example, consider the Stack class from the previous chapter. One of the problems with that class is that you had to perform a cast every time you fetched a pointer from the container. This is not only tedious, it's unsafe; you could cast the pointer to anything you want.

    An approach that seems better at first glance is to specialize the general Stack class using inheritance. Here's an example that uses the class from the previous chapter:

    //: C14:Inhstak.cpp
    //{L} ../C14/Stack11
    // Specializing the Stack class
    #include <iostream>
    #include <fstream>
    #include <string>
    #include "../require.h"
    #include "../C14/Stack11.h"
    using namespace std;
    
    class StringList : public Stack {
    public:
      void push(string* str) {
        Stack::push(str);
      }
      string* peek() const {
        return (string*)Stack::peek();
      }
      string* pop() {
        return (string*)Stack::pop();
      }
    };
    
    int main() {
      ifstream file("Inhstak.cpp");
      assure(file, "Inhstak.cpp");
      string line;
      StringList textlines;
      while(getline(file,line))
        textlines.push(new string(line));
      string* s;
      while((s = textlines.pop()) != 0) // No cast!
        cout << *s << endl;
    } ///:~

    The Stack11.h header file is brought in from Chapter 11. (The Stack11 object file must be linked in as well.)

    StringList specializes Stack so that push( ) will accept only string pointers. Before, Stack would accept void pointers, so the user had no type checking to make sure the proper pointers were inserted. In addition, peek( ) and pop( ) now return string pointers rather than void pointers, so no cast is necessary to use the pointer.

    Amazingly enough, this extra type-checking safety is free! The compiler is given extra type information that it uses at compile time, but the functions are inline and no extra code is generated.

    Unfortunately, inheritance doesn't solve all the problems with this container class. The destructor still causes trouble. You'll remember from Chapter 11 that the Stack::~Stack( ) destructor moves through the list and calls delete for all the pointers. The problem is that delete is called for void pointers, which only releases the memory and doesn't call the destructors (because void* has no type information). If a StringList::~StringList( ) destructor is created to move through the list and call delete for all the string pointers in the list, the problem is solved if

  48. The Stack data members are made protected so the StringList destructor can access them. (protected is described a bit later in the chapter.)
  49. The Stack base-class destructor is removed so the memory isn't released twice.
  50. No more inheritance is performed, because you'd end up with the same dilemma again: multiple destructor calls versus an incorrect destructor call (to a string object rather than what the class derived from StringList might contain).
  51. This issue will be revisited in the next chapter, but will not be fully solved until templates are introduced in a later chapter.

    A more important observation to make about this example is that it changes the interface of the Stack in the process of inheritance. If the interface is different, then a StringList really isn't a Stack, and you will never be able to correctly use a StringList as a Stack. This calls into question the use of inheritance here: if you're not creating a StringList that is-a type of Stack, then why are you inheriting? A more appropriate version of StringList will be shown later in the chapter.

    private inheritance

    You can inherit a base class privately by leaving off the public in the base-class list, or by explicitly saying private (probably a better policy, because it makes clear to the user that you mean it). When you inherit privately, you're "implementing in terms of"; that is, you're creating a new class that has all the data and functionality of the base class, but that functionality is hidden, so it's only part of the underlying implementation. The class user has no access to the underlying functionality, and an object cannot be treated as an instance of the base class (as it was in FName2.cpp on page *).

    You may wonder what the purpose of private inheritance is, because the alternative of creating a private member object in the new class seems more appropriate. private inheritance is included in the language for completeness, but if for no other reason than to reduce confusion, you'll usually want to use a private member rather than private inheritance. However, there may occasionally be situations where you want to produce part of the same interface as the base class and disallow the treatment of the object as if it were a base-class object. private inheritance provides this ability.

    Publicizing privately inherited members

    When you inherit privately, all the public members of the base class become private. If you want any of them to be visible, just say their names (no arguments or return values) in the public section of the derived class:

    //: C14:Privinh.cpp
    // Private inheritance
    
    class Base1 {
    public:
      char f() const { return 'a'; }
      int g() const { return 2; }
      float h() const { return 3.0; }
    };
    
    class Derived : Base1 { // Private inheritance
    public:
      Base1::f; // Name publicizes member
      Base1::h;
    };
    
    int main() {
      Derived d;
      d.f();
      d.h();
    //! d.g(); // Error -- private function
    } ///:~

    Thus, private inheritance is useful if you want to hide part of the functionality of the base class.

    You should think carefully before using private inheritance instead of member objects; private inheritance has particular complications when combined with run-time type identification (the subject of Chapter 17).

    protected

    Now that you've been introduced to inheritance, the keyword protected finally has meaning. In an ideal world, private members would always be hard-and-fast private, but in real projects there are times when you want to make something hidden from the world at large and yet allow access for members of derived classes. The protected keyword is a nod to pragmatism; it says, "This is private as far as the class user is concerned, but available to anyone who inherits from this class."

    The best tack to take is to leave the data members private: you should always preserve your right to change the underlying implementation. You can then allow controlled access to inheritors of your class through protected member functions:

    //: C14:Protect.cpp {O}
    // The protected keyword
    #include <fstream>
    using namespace std;
    
    class Base {
      int i;
    protected:
      int read() const { return i; }
      void set(int I) { i = I; }
    public:
      Base(int I = 0) : i(I) {}
      int value(int m) const { return m*i; }
    };
    
    class Derived : public Base {
      int j;
    public:
      Derived(int J = 0) : j(J) {}
      void change(int x) { set(x); }
    }; ///:~

    You can see an excellent example of the need for protected in the SSHAPE examples in Appendix C.

    protected inheritance

    When you're inheriting, the base class defaults to private, which means that all the public member functions are private to the user of the new class. Normally, you'll make the inheritance public so the interface of the base class is also the interface of the derived class. However, you can also use the protected keyword during inheritance.

    Protected derivation means "implemented-in-terms-of" to other classes but "is-a" for derived classes and friends. It's something you don't use very often, but it's in the language for completeness.

    Multiple inheritance

    You can inherit from one class, so it would seem to make sense to inherit from more than one class at a time. Indeed you can, but whether it makes sense as part of a design is a subject of continuing debate. One thing is generally agreed upon: You shouldn't try this until you've been programming quite a while and understand the language thoroughly. By that time, you'll probably realize that no matter how much you think you absolutely must use multiple inheritance, you can almost always get away with single inheritance.

    Initially, multiple inheritance seems simple enough: You add more classes in the base-class list during inheritance, separated by commas. However, multiple inheritance introduces a number of possibilities for ambiguity, which is why Chapter 15 is devoted to the subject.

    Incremental development

    One of the advantages of inheritance is that it supports incremental development by allowing you to introduce new code without causing bugs in existing code, and by isolating new bugs to the new code. By inheriting from an existing, functional class and adding data members and member functions (and redefining existing member functions), you leave the existing code, which someone else may still be using, untouched and unbugged. If a bug happens, you know it's in your new code, which is much shorter and easier to read than if you had modified the body of existing code.

    It's rather amazing how cleanly the classes are separated. You don't even need the source code for the member functions to reuse the code, just the header file describing the class and the object file or library file with the compiled member functions. (This is true for both inheritance and composition.)

    It's important to realize that program development is an incremental process, just like human learning. You can do as much analysis as you want, but you still won't know all the answers when you set out on a project. You'll have much more success, and more immediate feedback, if you start out to "grow" your project as an organic, evolutionary creature rather than constructing it all at once like a glass-box skyscraper.

    Although inheritance for experimentation is a useful technique, at some point after things stabilize you need to take a new look at your class hierarchy with an eye to collapsing it into a sensible structure. Remember that underneath it all, inheritance is meant to express a relationship that says, "This new class is a type of that old class." Your program should not be concerned with pushing bits around, but instead with creating and manipulating objects of various types to express a model in the terms given to you by the problem space.

    Upcasting

    Earlier in the chapter, you saw how an object of a class derived from ifstream has all the characteristics and behaviors of an ifstream object. In FName2.cpp, any ifstream member function could be called for an FName2 object.

    The most important aspect of inheritance is not that it provides member functions for the new class, however. It's the relationship expressed between the new class and the base class. This relationship can be summarized by saying, "The new class is a type of the existing class."

    This description is not just a fanciful way of explaining inheritance; it's supported directly by the compiler. As an example, consider a base class called Instrument that represents musical instruments and a derived class called Wind. Because inheritance means that all the functions in the base class are also available in the derived class, any message you can send to the base class can also be sent to the derived class. So if the Instrument class has a play( ) member function, so will Wind instruments. This means we can accurately say that a Wind object is also a type of Instrument. The following example shows how the compiler supports this notion:

    //: C14:Wind.cpp
    // Inheritance & upcasting
    enum note { middleC, Csharp, Cflat }; // Etc.
    
    class Instrument {
    public:
      void play(note) const {}
    };
    
    // Wind objects are Instruments
    // because they have the same interface:
    class Wind : public Instrument {};
    
    void tune(Instrument& i) {
      // ...
      i.play(middleC);
    }
    
    int main() {
      Wind flute;
      tune(flute); // Upcasting
    } ///:~

    What's interesting in this example is the tune( ) function, which accepts an Instrument reference. However, in main( ) the tune( ) function is called by giving it a Wind object. Given that C++ is very particular about type checking, it seems strange that a function that accepts one type will readily accept another type, until you realize that a Wind object is also an Instrument object, and there's no function that tune( ) could call for an Instrument that isn't also in Wind. Inside tune( ), the code works for Instrument and anything derived from Instrument, and the act of converting a Wind object, reference, or pointer into an Instrument object, reference, or pointer is called upcasting.

    Why "upcasting"?

    The reason for the term is historical and is based on the way class inheritance diagrams have traditionally been drawn: with the root at the top of the page, growing downward. (Of course, you can draw your diagrams any way you find helpful.) The inheritance diagram for Wind.cpp is then:

    Casting from derived to base moves up on the inheritance diagram, so it's commonly referred to as upcasting. Upcasting is always safe because you're going from a more specific type to a more general type; the only thing that can happen to the class interface is that it can lose member functions, not gain them. This is why the compiler allows upcasting without any explicit casts or other special notation.

    Downcasting

    You can also perform the reverse of upcasting, called downcasting, but this involves a dilemma that is the subject of Chapter 17.

    Upcasting and the copy-constructor (not indexed)

    If you allow the compiler to synthesize a copy-constructor for a derived class, it will automatically call the base-class copy-constructor and then the copy-constructors for all the member objects (or perform a bitcopy on built-in types), so you'll get the right behavior:

    //: C14:Ccright.cpp
    // Correctly synthesizing the CC
    #include <iostream>
    using namespace std;
    
    class Parent {
      int i;
    public:
      Parent(int I) : i(I) {
        cout << "Parent(int I)\n";
      }
      Parent(const Parent& b) : i(b.i) {
        cout << "Parent(Parent&)\n";
      }
      Parent() :i(0) { cout << "Parent()\n"; }
      friend ostream&
        operator<<(ostream& os, const Parent& b) {
        return os << "Parent: " << b.i << endl;
      }
    };
    
    class Member {
      int i;
    public:
      Member(int I) : i(I) {
        cout << "Member(int I)\n";
      }
      Member(const Member& m) : i(m.i) {
        cout << "Member(Member&)\n";
      }
      friend ostream&
        operator<<(ostream& os, const Member& m) {
        return os << "Member: " << m.i << endl;
      }
    };
    
    class Child : public Parent {
      int i;
      Member m;
    public:
      Child(int I) : Parent(I), i(I), m(I) {
        cout << "Child(int I)\n";
      }
      friend ostream&
        operator<<(ostream& os, const Child& d){
        return os << (Parent&)d << d.m
                  << "Child: " << d.i << endl;
      }
    };
    
    int main() {
      Child d(2);
      cout << "calling copy-constructor: " << endl;
      Child d2 = d; // Calls copy-constructor
      cout << "values in d2:\n" << d2;
    } ///:~

    The operator<< for Child is interesting because of the way that it calls the operator<< for the Parent part within it: by casting the Child object to a Parent& (if you cast to a Parent object instead of a reference you'll end up creating a temporary):

    return os << (Parent&)d << d.m

    Since the compiler then sees it as a Parent, it calls the Parent version of operator<<.

    You can see that Child has no explicitly defined copy-constructor. The compiler then synthesizes the copy-constructor (that is one of the four functions it will synthesize, along with the operator=, the destructor, and, if you don't create any constructors, the default constructor) by calling the Parent copy-constructor and the Member copy-constructor. This is shown in the output:

    Parent(int I)
    Member(int I)
    Child(int I)
    calling copy-constructor:
    Parent(Parent&)
    Member(Member&)
    values in d2:
    Parent: 2
    Member: 2
    Child: 2

    However, if you try to write your own copy-constructor for Child and you make an innocent mistake and do it badly:

    Child(const Child& d) : i(d.i), m(d.m) {}

    The base-class default constructor will be called automatically, since that's what the compiler falls back on when it has no other choice of constructor to call (remember that some constructor must always be called for every object, regardless of whether it's a subobject of another class). The output will then be:

    Parent(int I)
    Member(int I)
    Child(int I)
    calling copy-constructor:
    Parent()
    Member(Member&)
    values in d2:
    Parent: 0
    Member: 2
    Child: 2

    This is probably not what you expect, since generally you'll want the base-class portion to be copied from the existing object to the new object as part of copy-construction.

    To repair the problem you must remember to properly call the base-class copy-constructor (as the compiler does) whenever you write your own copy-constructor. This can seem a little strange-looking at first, but it's another example of upcasting:

    Child(const Child& d)
      : Parent(d), i(d.i), m(d.m) {
      cout << "Child(Child&)\n";
    }

    The strange part is where the Parent copy-constructor is called: Parent(d). What does it mean to pass a Child object to a Parent constructor? Here's the trick: Child is inherited from Parent, so a Child reference is a Parent reference. The base-class copy-constructor call upcasts a reference to Child to a reference to Parent and uses it to perform the copy-construction. When you write your own copy-constructors you'll generally want to do the same thing.

    Composition vs. inheritance (revisited)

    One of the clearest ways to determine whether you should be using composition or inheritance is by asking whether you'll ever need to upcast from your new class. Earlier in this chapter, the Stack class was specialized using inheritance. However, chances are the StringList objects will be used only as string containers, and never upcast, so a more appropriate alternative is composition:

    //: C14:Inhstak2.cpp
    //{L} ../C14/Stack11
    // Composition vs inheritance
    #include <iostream>
    #include <fstream>
    #include <string>
    #include "../require.h"
    #include "../C14/Stack11.h"
    using namespace std;
    
    class StringList {
      Stack stack; // Embed instead of inherit
    public:
      void push(string* str) {
        stack.push(str);
      }
      string* peek() const {
        return (string*)stack.peek();
      }
      string* pop() {
        return (string*)stack.pop();
      }
    };
    
    int main() {
      ifstream file("Inhstak2.cpp");
      assure(file, "Inhstak2.cpp");
      string line;
      StringList textlines;
      while(getline(file,line))
        textlines.push(new string(line));
      string* s;
      while((s = textlines.pop()) != 0) // No cast!
        cout << *s << endl;
    } ///:~

    The file is identical to Inhstak.cpp (page *), except that a Stack object is embedded in StringList, and member functions are called for the embedded object. There's still no time or space overhead because the subobject takes up the same amount of space, and all the additional type checking happens at compile time.

    You can also use private inheritance to express "implemented in terms of." The method you use to create the StringList class is not critical in this situation; they all solve the problem adequately. One place it becomes important, however, is when multiple inheritance might be warranted. In that case, if you can detect a class where composition can be used instead of inheritance, you may be able to eliminate the need for multiple inheritance.

    Pointer & reference upcasting

    In Wind.cpp (page *), the upcasting occurs during the function call: a Wind object outside the function has its reference taken and becomes an Instrument reference inside the function. Upcasting can also occur during a simple assignment to a pointer or reference:

    Wind w;
    Instrument* ip = &w; // Upcast
    Instrument& ir = w; // Upcast

    Like the function call, neither of these cases requires an explicit cast.

    A crisis

    Of course, any upcast loses type information about an object. If you say

    Wind w;
    Instrument* ip = &w;

    the compiler can deal with ip only as an Instrument pointer and nothing else. That is, it cannot know that ip actually happens to point to a Wind object. So when you call the play( ) member function by saying

    ip->play(middleC);

    the compiler can know only that it's calling play( ) for an Instrument pointer, and it calls the base-class Instrument::play( ) instead of what it should do, which is call Wind::play( ). Thus you won't get the correct behavior.

    This is a significant problem; it is solved in the next chapter by introducing the third cornerstone of object-oriented programming: polymorphism (implemented in C++ with virtual functions).

    Summary

    Both inheritance and composition allow you to create a new type from existing types, and both embed subobjects of the existing types inside the new type. Typically, however, you use composition to reuse existing types as part of the underlying implementation of the new type and inheritance when you want to reuse the interface as well as the implementation. If the derived class has the base-class interface, it can be upcast to the base, which is critical for polymorphism as you'll see in the next chapter.

    Although code reuse through composition and inheritance is very helpful for rapid project development, you'll generally want to redesign your class hierarchy before allowing other programmers to become dependent on it. Your goal is a hierarchy where each class has a specific use and is neither too big (encompassing so much functionality that it's unwieldy to reuse) nor annoyingly small (you can't use it by itself or without adding functionality). Your finished classes should themselves be easily reused.

    Exercises

  52. Modify Car.cpp so it also inherits from a class called Vehicle, placing appropriate member functions in Vehicle (that is, make up some member functions). Add a nondefault constructor to Vehicle, which you must call inside Car's constructor.
  53. Create two classes, A and B, with default constructors that announce themselves. Inherit a new class called C from A, and create a member object of B in C, but do not create a constructor for C. Create an object of class C and observe the results.
  54. Use inheritance to specialize the PStash class in Chapter 11 (PStash.h & PStash.cpp) so it accepts and returns string pointers. Also modify PStest.cpp and test it. Change the class so PStash is a member object.
  55. Use private and protected inheritance to create two new classes from a base class. Then attempt to upcast objects of the derived class to the base class. Explain what happens.
  56. Take the example Ccright.cpp in this chapter and modify it by adding your own copy-constructor without calling the base-class copy-constructor and see what happens. Fix the problem by making a proper explicit call to the base-class copy-constructor in the constructor-initializer list of the Child copy-constructor.
    15: Polymorphism & virtual functions

    Polymorphism (implemented in C++ with virtual functions) is the third essential feature of an object-oriented programming language, after data abstraction and inheritance.

    It provides another dimension of separation of interface from implementation, to decouple what from how. Polymorphism allows improved code organization and readability as well as the creation of extensible programs that can be "grown" not only during the original creation of the project, but also when new features are desired.

    Encapsulation creates new data types by combining characteristics and behaviors. Implementation hiding separates the interface from the implementation by making the details private. This sort of mechanical organization makes ready sense to someone with a procedural programming background. But virtual functions deal with decoupling in terms of types. In the last chapter, you saw how inheritance allows the treatment of an object as its own type or its base type. This ability is critical because it allows many types (derived from the same base type) to be treated as if they were one type, and a single piece of code to work on all those different types equally. The virtual function allows one type to express its distinction from another, similar type, as long as they're both derived from the same base type. This distinction is expressed through differences in behavior of the functions you can call through the base class.

    In this chapter, you'll learn about virtual functions starting from the very basics, with simple examples that strip away everything but the "virtualness" of the program.

    Evolution of C++ programmers

    C programmers seem to acquire C++ in three steps. First, as simply a "better C," because C++ forces you to declare all functions before using them and is much pickier about how variables are used. You can often find the errors in a C program simply by compiling it with a C++ compiler.

    The second step is "object-based" C++. This means that you easily see the code organization benefits of grouping a data structure together with the functions that act upon it, the value of constructors and destructors, and perhaps some simple inheritance. Most programmers who have been working with C for a while quickly see the usefulness of this because, whenever they create a library, this is exactly what they try to do. With C++, you have the aid of the compiler.

    You can get stuck at the object-based level because it's very easy to get to and you get a lot of benefit without much mental effort. It's also easy to feel like you're creating data types: you make classes and objects, you send messages to those objects, and everything is nice and neat.

    But don't be fooled. If you stop here, you're missing out on the greatest part of the language, which is the jump to true object-oriented programming. You can do this only with virtual functions.

    Virtual functions enhance the concept of type rather than just encapsulating code inside structures and behind walls, so they are without a doubt the most difficult concept for the new C++ programmer to fathom. However, they're also the turning point in the understanding of object-oriented programming. If you don't use virtual functions, you don't understand OOP yet.

    Because the virtual function is intimately bound with the concept of type, and type is at the core of object-oriented programming, there is no analog to the virtual function in a traditional procedural language. As a procedural programmer, you have no referent with which to think about virtual functions, as you do with almost every other feature in the language. Features in a procedural language can be understood on an algorithmic level, but virtual functions can be understood only from a design viewpoint.

    Upcasting

    In the last chapter you saw how an object can be used as its own type or as an object of its base type. In addition, it can be manipulated through an address of the base type. Taking the address of an object (either a pointer or a reference) and treating it as the address of the base type is called upcasting because of the way inheritance trees are drawn with the base class at the top.

    You also saw a problem arise, which is embodied in the following code:

    //: C15:Wind2.cpp
    // Inheritance & upcasting
    #include <iostream>
    using namespace std;
    enum note { middleC, Csharp, Cflat }; // Etc.
    
    class Instrument {
    public:
      void play(note) const {
        cout << "Instrument::play" << endl;
      }
    };
    
    // Wind objects are Instruments
    // because they have the same interface:
    class Wind : public Instrument {
    public:
      // Redefine interface function:
      void play(note) const {
        cout << "Wind::play" << endl;
      }
    };
    
    void tune(Instrument& i) {
      // ...
      i.play(middleC);
    }
    
    int main() {
      Wind flute;
      tune(flute); // Upcasting
    } ///:~

    The function tune( ) accepts (by reference) an Instrument, but also, without complaint, anything derived from Instrument. In main( ), you can see this happening as a Wind object is passed to tune( ), with no cast necessary. This is acceptable; the interface in Instrument must exist in Wind, because Wind is publicly inherited from Instrument. Upcasting from Wind to Instrument may "narrow" that interface, but it cannot make it any less than the full interface to Instrument.

    The same arguments are true when dealing with pointers; the only difference is that the user must explicitly take the addresses of objects as they are passed into the function.

    The problem

    The problem with Wind2.cpp can be seen by running the program. The output is Instrument::play. This is clearly not the desired output, because you happen to know that the object is actually a Wind and not just an Instrument. The call should resolve to Wind::play. For that matter, any object of a class derived from Instrument should have its version of play used, regardless of the situation.

    However, the behavior of Wind2.cpp is not surprising, given C's approach to functions. To understand the issues, you need to be aware of the concept of binding.

    Function call binding

    Connecting a function call to a function body is called binding. When binding is performed before the program is run (by the compiler and linker), it's called early binding. You may not have heard the term before because it's never been an option with procedural languages: C compilers have only one kind of function call, and that's early binding.

    The problem in the above program is caused by early binding because the compiler cannot know the correct function to call when it has only an Instrument address.

    The solution is called late binding, which means the binding occurs at run-time, based on the type of the object. Late binding is also called dynamic binding or run-time binding. When a language implements late binding, there must be some mechanism to determine the type of the object at run-time and call the appropriate member function. That is, the compiler still doesn't know the actual object type, but it inserts code that finds out and calls the correct function body. The late-binding mechanism varies from language to language, but you can imagine that some sort of type information must be installed in the objects themselves. You'll see how this works later.

    virtual functions

    To cause late binding to occur for a particular function, C++ requires that you use the virtual keyword when declaring the function in the base class. Late binding occurs only with virtual functions, and only when you're using an address of the base class where those virtual functions exist, although they may also be defined in an earlier base class.

    To create a member function as virtual, you simply precede the declaration of the function with the keyword virtual. You don't repeat it for the function definition, and you don't need to repeat it in any of the derived-class function redefinitions (though it does no harm to do so). If a function is declared as virtual in the base class, it is virtual in all the derived classes. The redefinition of a virtual function in a derived class is often called overriding.

    To get the desired behavior from Wind2.cpp, simply add the virtual keyword in the base class before play( ):

    //: C15:Wind3.cpp
    // Late binding with virtual
    #include <iostream>
    using namespace std;
    enum note { middleC, Csharp, Cflat }; // Etc.
    
    class Instrument {
    public:
      virtual void play(note) const {
        cout << "Instrument::play" << endl;
      }
    };
    
    // Wind objects are Instruments
    // because they have the same interface:
    class Wind : public Instrument {
    public:
      // Redefine interface function:
      void play(note) const {
        cout << "Wind::play" << endl;
      }
    };
    
    void tune(Instrument& i) {
      // ...
      i.play(middleC);
    }
    
    int main() {
      Wind flute;
      tune(flute); // Upcasting
    } ///:~

    This file is identical to Wind2.cpp except for the addition of the virtual keyword, and yet the behavior is significantly different: Now the output is Wind::play.

    Extensibility

    With play( ) defined as virtual in the base class, you can add as many new types as you want to the system without changing the tune( ) function. In a well-designed OOP program, most or all of your functions will follow the model of tune( ) and communicate only with the base-class interface. Such a program is extensible because you can add new functionality by inheriting new data types from the common base class. The functions that manipulate the base-class interface will not need to be changed at all to accommodate the new classes.

    Here's the instrument example with more virtual functions and a number of new classes, all of which work correctly with the old, unchanged tune( ) function:

    //: C15:Wind4.cpp
    // Extensibility in OOP
    #include <iostream>
    using namespace std;
    enum note { middleC, Csharp, Cflat }; // Etc.
    
    class Instrument {
    public:
      virtual void play(note) const {
        cout << "Instrument::play" << endl;
      }
      virtual const char* what() const {
        return "Instrument";
      }
      // Assume this will modify the object:
      virtual void adjust(int) {}
    };
    
    class Wind : public Instrument {
    public:
      void play(note) const {
        cout << "Wind::play" << endl;
      }
      const char* what() const { return "Wind"; }
      void adjust(int) {}
    };
    
    class Percussion : public Instrument {
    public:
      void play(note) const {
        cout << "Percussion::play" << endl;
      }
      const char* what() const { return "Percussion"; }
      void adjust(int) {}
    };
    
    class Stringed : public Instrument {
    public:
      void play(note) const {
        cout << "Stringed::play" << endl;
      }
      const char* what() const { return "Stringed"; }
      void adjust(int) {}
    };
    
    class Brass : public Wind {
    public:
      void play(note) const {
        cout << "Brass::play" << endl;
      }
      const char* what() const { return "Brass"; }
    };
    
    class Woodwind : public Wind {
    public:
      void play(note) const {
        cout << "Woodwind::play" << endl;
      }
      const char* what() const { return "Woodwind"; }
    };
    
    // Identical function from before:
    void tune(Instrument& i) {
      // ...
      i.play(middleC);
    }
    
    // New function:
    void f(Instrument& i) { i.adjust(1); }
    
    // Upcasting during array initialization:
    Instrument* A[] = {
      new Wind,
      new Percussion,
      new Stringed,
      new Brass
    };
    
    int main() {
      Wind flute;
      Percussion drum;
      Stringed violin;
      Brass flugelhorn;
      Woodwind recorder;
      tune(flute);
      tune(drum);
      tune(violin);
      tune(flugelhorn);
      tune(recorder);
      f(flugelhorn);
    } ///:~

    You can see that another inheritance level has been added beneath Wind, but the virtual mechanism works correctly no matter how many levels there are. The adjust( ) function is not redefined for Brass and Woodwind. When this happens, the previous definition is automatically used; the compiler guarantees there's always some definition for a virtual function, so you'll never end up with a call that doesn't bind to a function body. (This would spell disaster.)

    The array A[ ] contains pointers to the base class Instrument, so upcasting occurs during the process of array initialization. This array and the function f( ) will be used in later discussions.

    In the call to tune( ), upcasting is performed on each different type of object, yet the desired behavior always takes place. This can be described as "sending a message to an object and letting the object worry about what to do with it." The virtual function is the lens to use when you're trying to analyze a project: Where should the base classes occur, and how might you want to extend the program? However, even if you don't discover the proper base class interfaces and virtual functions at the initial creation of the program, you'll often discover them later, even much later, when you set out to extend or otherwise maintain the program. This is not an analysis or design error; it simply means you didn't have all the information the first time. Because of the tight class modularization in C++, it isn't a large problem when this occurs because changes you make in one part of a system tend not to propagate to other parts of the system as they do in C.

    How C++ implements late binding

    How can late binding happen? All the work is done behind the scenes by the compiler, which installs the necessary late-binding mechanism when you ask it to (you ask by creating virtual functions). Because programmers often benefit from understanding the mechanism of virtual functions in C++, this section will elaborate on the way the compiler implements this mechanism.

    The keyword virtual tells the compiler it should not perform early binding. Instead, it should automatically install all the mechanisms necessary to perform late binding. This means that if you call play( ) for a Brass object through an address for the base-class Instrument, you'll get the proper function.

    To accomplish this, the compiler creates a single table (called the VTABLE) for each class that contains virtual functions. The compiler places the addresses of the virtual functions for that particular class in the VTABLE. In each class with virtual functions, it secretly places a pointer, called the vpointer (abbreviated as VPTR), which points to the VTABLE for that object. When you make a virtual function call through a base-class pointer (that is, when you make a polymorphic call), the compiler quietly inserts code to fetch the VPTR and look up the function address in the VTABLE, thus calling the right function and causing late binding to take place.

    All of this (setting up the VTABLE for each class, initializing the VPTR, inserting the code for the virtual function call) happens automatically, so you don't have to worry about it. With virtual functions, the proper function gets called for an object, even if the compiler cannot know the specific type of the object.

    The following sections go into this process in more detail.

    Storing type information

    You can see that there is no explicit type information stored in any of the classes. But the previous examples, and simple logic, tell you that there must be some sort of type information stored in the objects; otherwise the type could not be established at run-time. This is true, but the type information is hidden. To see it, here's an example to examine the sizes of classes that use virtual functions compared with those that don't:

    //: C15:Sizes.cpp
    // Object sizes vs. virtual funcs
    #include <iostream>
    using namespace std;
    
    class NoVirtual {
      int a;
    public:
      void x() const {}
      int i() const { return 1; }
    };
    
    class OneVirtual {
      int a;
    public:
      virtual void x() const {}
      int i() const { return 1; }
    };
    
    class TwoVirtuals {
      int a;
    public:
      virtual void x() const {}
      virtual int i() const { return 1; }
    };
    
    int main() {
      cout << "int: " << sizeof(int) << endl;
      cout << "NoVirtual: "
           << sizeof(NoVirtual) << endl;
      cout << "void* : " << sizeof(void*) << endl;
      cout << "OneVirtual: "
           << sizeof(OneVirtual) << endl;
      cout << "TwoVirtuals: "
           << sizeof(TwoVirtuals) << endl;
    } ///:~

    With no virtual functions, the size of the object is exactly what you'd expect: the size of a single int. With a single virtual function in OneVirtual, the size of the object is the size of NoVirtual plus the size of a void pointer. It turns out that the compiler inserts a single pointer (the VPTR) into the structure if you have one or more virtual functions. There is no size difference between OneVirtual and TwoVirtuals. That's because the VPTR points to a table of function addresses. You need only one because all the virtual function addresses are contained in that single table.

    This example required at least one data member. If there had been no data members, the C++ compiler would have forced the objects to be a nonzero size because each object must have a distinct address. If you imagine indexing into an array of zero-sized objects, you'll understand. A "dummy" member is inserted into objects that would otherwise be zero-sized. When the type information is inserted because of the virtual keyword, this takes the place of the "dummy" member. Try commenting out the int a in all the classes in the above example to see this.

    Picturing virtual functions

    To understand exactly what's going on when you use a virtual function, it's helpful to visualize the activities going on behind the curtain. Here's a drawing of the array of pointers A[ ] in Wind4.cpp (page *):

    The array of Instrument pointers has no specific type information; they each point to an object of type Instrument. Wind, Percussion, Stringed, and Brass all fit into this category because they are derived from Instrument (and thus have the same interface as Instrument, and can respond to the same messages), so their addresses can also be placed into the array. However, the compiler doesn't know they are anything more than Instrument objects, so left to its own devices, it would normally call the base-class versions of all the functions. But in this case, all those functions have been declared with the virtual keyword, so something different happens.

    Each time you create a class that contains virtual functions, or you derive from a class that contains virtual functions, the compiler creates a VTABLE for that class, seen on the right of the diagram. In that table it places the addresses of all the functions that are declared virtual in this class or in the base class. If you don't redefine a function that was declared virtual in the base class, the compiler uses the address of the base-class version in the derived class. (You can see this in the adjust entry in the Brass VTABLE.) Then it places the VPTR (discovered in Sizes.cpp) into the class. There is only one VPTR for each object when using simple inheritance like this. The VPTR must be initialized to point to the starting address of the appropriate VTABLE. (This happens in the constructor, which you'll see later in more detail.)

    Once the VPTR is initialized to the proper VTABLE, the object in effect "knows" what type it is. But this self-knowledge is worthless unless it is used at the point a virtual function is called.

    When you call a virtual function through a base class address (the situation when the compiler doesn't have all the information necessary to perform early binding), something special happens. Instead of performing a typical function call, which is simply an assembly-language CALL to a particular address, the compiler generates different code to perform the function call. Here's what a call to adjust( ) for a Brass object looks like, if made through an Instrument pointer (an Instrument reference produces the same result):

    The compiler starts with the Instrument pointer, which points to the starting address of the object. All Instrument objects or objects derived from Instrument have their VPTR in the same place (often at the beginning of the object), so the compiler can pick the VPTR out of the object. The VPTR points to the starting address of the VTABLE. All the VTABLEs are laid out in the same order, regardless of the specific type of the object. play( ) is first, what( ) is second, and adjust( ) is third. The compiler knows that regardless of the specific object type, the adjust( ) function is at the location VPTR+2. Thus instead of saying, "Call the function at the absolute location Instrument::adjust" (early binding; the wrong action), it generates code that says, in effect, "Call the function at VPTR+2." Because the fetching of the VPTR and the determination of the actual function address occur at run-time, you get the desired late binding. You send a message to the object, and the object figures out what to do with it.

    Under the hood

    It can be helpful to see the assembly-language code generated by a virtual function call, so you can see that late binding is indeed taking place. Here's the output from one compiler for the call

    i.adjust(1);

    inside the function f(Instrument& i):

    push 1

    push si

    mov bx,word ptr [si]

    call word ptr [bx+4]

    add sp,4

    The arguments of a C++ function call, like a C function call, are pushed on the stack from right to left (this order is required to support C's variable argument lists), so the argument 1 is pushed on the stack first. At this point in the function, the register si (part of the Intel X86 processor architecture) contains the address of i. This is also pushed on the stack because it is the starting address of the object of interest. Remember that the starting address corresponds to the value of this, and this is quietly pushed on the stack as an argument before every member function call, so the member function knows which particular object it is working on. Thus you'll always see the number of arguments plus one pushed on the stack before a member function call (except for static member functions, which have no this).

    Now the actual virtual function call must be performed. First, the VPTR must be produced, so the VTABLE can be found. For this compiler the VPTR is inserted at the beginning of the object, so the contents of this correspond to the VPTR. The line

    mov bx,word ptr [si]

    fetches the word that si (that is, this) points to, which is the VPTR. It places the VPTR into the register bx.

    The VPTR contained in bx points to the starting address of the VTABLE, but the function pointer to call isn't at the zeroth location of the VTABLE, but instead the second location (because it's the third function in the list). For this memory model each function pointer is two bytes long, so the compiler adds four to the VPTR to calculate where the address of the proper function is. Note that this is a constant value, established at compile time, so the only thing that matters is that the function pointer at location number two is the one for adjust( ). Fortunately, the compiler takes care of all the bookkeeping for you and ensures that all the function pointers in all the VTABLEs occur in the same order.

    Once the address of the proper function pointer in the VTABLE is calculated, that function is called. So the address is fetched and called all at once in the statement

    call word ptr [bx+4]

    Finally, the stack pointer is moved back up to clean off the arguments that were pushed before the call. In C and C++ assembly code you'll often see the caller clean off the arguments, but this may vary depending on processors and compiler implementations.

    Installing the vpointer

    Because the VPTR determines the virtual function behavior of the object, you can see how it's critical that the VPTR always be pointing to the proper VTABLE. You don't ever want to be able to make a call to a virtual function before the VPTR is properly initialized. Of course, the place where initialization can be guaranteed is in the constructor, but none of the Wind examples has a constructor.

    This is where creation of the default constructor is essential. In the Wind examples, the compiler creates a default constructor that does nothing except initialize the VPTR. This constructor, of course, is automatically called for all Instrument objects before you can do anything with them, so you know that it's always safe to call virtual functions.

    The implications of the automatic initialization of the VPTR inside the constructor are discussed in a later section.

    Objects are different

    It's important to realize that upcasting deals only with addresses. If the compiler has an object, it knows the exact type and therefore (in C++) will not use late binding for any function calls, or at least the compiler doesn't need to use late binding. For efficiency's sake, most compilers will perform early binding when they are making a call to a virtual function for an object because they know the exact type. Here's an example:

    //: C15:Early.cpp
    // Early binding & virtuals
    #include <iostream>
    using namespace std;
    
    class Base {
    public:
      virtual int f() const { return 1; }
    };
    
    class Derived : public Base {
    public:
      int f() const { return 2; }
    };
    
    int main() {
      Derived d;
      Base* b1 = &d;
      Base& b2 = d;
      Base b3;
      // Late binding for both:
      cout << "b1->f() = " << b1->f() << endl;
      cout << "b2.f() = " << b2.f() << endl;
      // Early binding (probably):
      cout << "b3.f() = " << b3.f() << endl;
    } ///:~

    In b1->f( ) and b2.f( ) addresses are used, which means the information is incomplete: b1 and b2 can represent the address of a Base or something derived from Base, so the virtual mechanism must be used. When calling b3.f( ) there's no ambiguity. The compiler knows the exact type and that it's an object, so it can't possibly be an object derived from Base; it's exactly a Base. Thus early binding is probably used. However, if the compiler doesn't want to work so hard, it can still use late binding and the same behavior will occur.

    Why virtual functions?

    At this point you may have a question: "If this technique is so important, and if it makes the 'right' function call all the time, why is it an option? Why do I even need to know about it?"

    This is a good question, and the answer is part of the fundamental philosophy of C++: "Because it's not quite as efficient." You can see from the previous assembly-language output that instead of one simple CALL to an absolute address, there are two more sophisticated assembly instructions required to set up the virtual function call. This requires both code space and execution time.

    Some object-oriented languages have taken the approach that late binding is so intrinsic to object-oriented programming that it should always take place, that it should not be an option, and the user shouldn't have to know about it. This is a design decision when creating a language, and that particular path is appropriate for many languages. However, C++ comes from the C heritage, where efficiency is critical. After all, C was created to replace assembly language for the implementation of an operating system (thereby rendering that operating system, Unix, far more portable than its predecessors). One of the main reasons for the invention of C++ was to make C programmers more efficient. And the first question asked when C programmers encounter C++ is "What kind of size and speed impact will I get?" If the answer were, "Everything's great except for function calls when you'll always have a little extra overhead," many people would stick with C rather than make the change to C++. In addition, inline functions would not be possible, because virtual functions must have an address to put into the VTABLE. So the virtual function is an option, and the language defaults to nonvirtual, which is the fastest configuration. Stroustrup stated that his guideline was "If you don't use it, you don't pay for it."

    Thus the virtual keyword is provided for efficiency tuning. When designing your classes, however, you shouldn't be worrying about efficiency tuning. If you're going to use polymorphism, use virtual functions everywhere. You only need to look for functions to make non-virtual when looking for ways to speed up your code (and there are usually much bigger gains to be had in other areas).

    Anecdotal evidence suggests that the size and speed impacts of going to C++ are within 10% of the size and speed of C, and often much closer to the same. The reason you might get better size and speed efficiency is because you may design a C++ program in a smaller, faster way than you would using C.

    Abstract base classes and pure virtual functions

    In all the instrument examples, the functions in the base class Instrument were always "dummy" functions. If these functions are ever called, they indicate you've done something wrong. That's because the intent of Instrument is to create a common interface for all the classes derived from it, as seen on the diagram on the following page.

    The dashed lines indicate a class (a class is only a description, not a physical item; the dashed lines suggest its "nonphysical" nature), and the arrows from the derived classes to the base class indicate the inheritance relationship.

    The only reason to establish the common interface is so it can be expressed differently for each different subtype. It establishes a basic form, so you can say what's common to all the derived classes. Nothing else. Another way of saying this is to call Instrument an abstract base class (or simply an abstract class). You create an abstract class when you want to manipulate a set of classes through this common interface.

    Notice you are only required to declare a function as virtual in the base class. All derived-class functions that match the signature of the base-class declaration will be called using the virtual mechanism. You can use the virtual keyword in the derived-class declarations (and some people do, for clarity), but it is redundant.

    If you have a genuine abstract class (like Instrument), objects of that class almost always have no meaning. That is, Instrument is meant to express only the interface, and not a particular implementation, so creating an Instrument object makes no sense, and you'll probably want to prevent the user from doing it. This can be accomplished by making all the virtual functions in Instrument print error messages, but this delays the information until run-time and requires reliable exhaustive testing on the part of the user. It is much better to catch the problem at compile time.

    C++ provides a mechanism for doing this called the pure virtual function. Here is the syntax used for a declaration:

    virtual void X() = 0;

    By doing this, you tell the compiler to reserve a slot for a function in the VTABLE, but not to put an address in that particular slot. If only one function in a class is declared as pure virtual, the VTABLE is incomplete. A class containing pure virtual functions is called a pure abstract base class.

    If the VTABLE for a class is incomplete, what is the compiler supposed to do when someone tries to make an object of that class? It cannot safely create an object of a pure abstract class, so you get an error message from the compiler if you try to make an object of a pure abstract class. Thus, the compiler ensures the purity of the abstract class, and you don't have to worry about misusing it.

    Here's Wind4.cpp (page *) modified to use pure virtual functions:

    //: C15:Wind5.cpp
    // Pure abstract base classes
    #include <iostream>
    using namespace std;
    enum note { middleC, Csharp, Cflat }; // Etc.
    
    class Instrument {
    public:
      // Pure virtual functions:
      virtual void play(note) const = 0;
      virtual const char* what() const = 0;
      // Assume this will modify the object:
      virtual void adjust(int) = 0;
    };
    // Rest of the file is the same ...
    
    class Wind : public Instrument {
    public:
      void play(note) const {
        cout << "Wind::play" << endl;
      }
      const char* what() const { return "Wind"; }
      void adjust(int) {}
    };
    
    class Percussion : public Instrument {
    public:
      void play(note) const {
        cout << "Percussion::play" << endl;
      }
      const char* what() const { return "Percussion"; }
      void adjust(int) {}
    };
    
    class Stringed : public Instrument {
    public:
      void play(note) const {
        cout << "Stringed::play" << endl;
      }
      const char* what() const { return "Stringed"; }
      void adjust(int) {}
    };
    
    class Brass : public Wind {
    public:
      void play(note) const {
        cout << "Brass::play" << endl;
      }
      const char* what() const { return "Brass"; }
    };
    
    class Woodwind : public Wind {
    public:
      void play(note) const {
        cout << "Woodwind::play" << endl;
      }
      const char* what() const { return "Woodwind"; }
    };
    
    // Identical function from before:
    void tune(Instrument& i) {
      // ...
      i.play(middleC);
    }
    
    // New function:
    void f(Instrument& i) { i.adjust(1); }
    
    int main() {
      Wind flute;
      Percussion drum;
      Stringed violin;
      Brass flugelhorn;
      Woodwind recorder;
      tune(flute);
      tune(drum);
      tune(violin);
      tune(flugelhorn);
      tune(recorder);
      f(flugelhorn);
    } ///:~

    Pure virtual functions are very helpful because they make explicit the abstractness of a class and tell both the user and the compiler how it was intended to be used.

    Note that a pure abstract class cannot be passed into a function by value, so pure virtual functions also prevent object slicing caused by accidentally upcasting by value. This way you can ensure that a pointer or reference is always used during upcasting.

    The fact that one pure virtual function prevents the VTABLE from being completed doesn't mean that you don't want function bodies for some of the others. Often you will want to call a base-class version of a function, even if it is virtual. It's always a good idea to put common code as close as possible to the root of your hierarchy. Not only does this save code space, it allows easy propagation of changes.

    Pure virtual definitions

    It's possible to provide a definition for a pure virtual function in the base class. You're still telling the compiler not to allow objects of that pure abstract base class, and the pure virtual functions must be defined in derived classes in order to create objects. However, there may be a piece of code you want some or all of the derived class definitions to use in common, and you don't want to duplicate that code in every function. Here's what it looks like:

    //: C15:Pvdef.cpp
    // Pure virtual base definition
    #include <iostream>
    using namespace std;
    
    class Base {
    public:
      virtual void v() const = 0;
      // A pure virtual function can still have a definition,
      // but the definition must appear outside the class body:
      virtual void f() const = 0;
    };
    
    void Base::v() const { cout << "Base::v()\n"; }
    void Base::f() const { cout << "Base::f()\n"; }
    
    class D : public Base {
    public:
      // Use the common Base code:
      void v() const { Base::v(); }
      void f() const { Base::f(); }
    };
    
    int main() {
      D d;
      d.v();
      d.f();
    } ///:~

    The slot in the Base VTABLE is still empty, but there happens to be a function by that name you can call in the derived class.

    The other benefit of this feature is that it allows you to change a function to a pure virtual without disturbing the existing code. (This gives you a way to locate classes that don't redefine that virtual function.)

    Inheritance and the VTABLE

    You can imagine what happens when you perform inheritance and redefine some of the virtual functions. The compiler creates a new VTABLE for your new class, and it inserts your new function addresses, using the base-class function addresses for any virtual functions you don't redefine. One way or another, there's always a full set of function addresses in the VTABLE, so you'll never be able to make a call to an address that isn't there (which would be disastrous).

    But what happens when you inherit and add new virtual functions in the derived class? Here's a simple example:

    //: C15:Addv.cpp
    // Adding virtuals in derivation
    #include <iostream>
    using namespace std;
    
    class Base {
      int i;
    public:
      Base(int I) : i(I) {}
      virtual int value() const { return i; }
    };
    
    class Derived : public Base {
    public:
      Derived(int I) : Base(I) {}
      int value() const {
        return Base::value() * 2;
      }
      // New virtual function in the Derived class:
      virtual int shift(int x) const {
        return Base::value() << x;
      }
    };
    
    int main() {
      Base* B[] = { new Base(7), new Derived(7) };
      cout << "B[0]->value() = "
           << B[0]->value() << endl;
      cout << "B[1]->value() = "
           << B[1]->value() << endl;
    //! cout << "B[1]->shift(3) = "
    //!      << B[1]->shift(3) << endl; // Illegal
    } ///:~

    The class Base contains a single virtual function value( ), and Derived adds a second one called shift( ), as well as redefining the meaning of value( ). A diagram will help visualize what's happening. Here are the VTABLEs created by the compiler for Base and Derived:

    Notice the compiler maps the location of the value address into exactly the same spot in the Derived VTABLE as it is in the Base VTABLE. Similarly, if a class is inherited from Derived, its version of shift would be placed in its VTABLE in exactly the same spot as it is in Derived. This is because (as you saw with the assembly-language example) the compiler generates code that uses a simple numerical offset into the VTABLE to select the virtual function. Regardless of what specific subtype the object belongs to, its VTABLE is laid out the same way, so calls to the virtual functions will always be made the same way.

    In this case, however, the compiler is working only with a pointer to a base-class object. The base class has only the value( ) function, so that is the only function the compiler will allow you to call. How could it possibly know that you are working with a Derived object, if it has only a pointer to a base-class object? That pointer might point to some other type, which doesn't have a shift( ) function. It may or may not have some other function address at that point in the VTABLE, but in either case, making a virtual call to that VTABLE address is not what you want to do. So it's fortunate and logical that the compiler protects you from making virtual calls to functions that exist only in derived classes.

    There are some less-common cases where you may know that the pointer actually points to an object of a specific subclass. If you want to call a function that only exists in that subclass, then you must cast the pointer. You can remove the error message produced by the previous program like this:

    ((Derived*)B[1])->shift(3)

    Here, you happen to know that B[1] points to a Derived object, but generally you don't know that. If your problem is set up so that you must know the exact types of all objects, you should rethink it, because you're probably not using virtual functions properly. However, there are some situations where the design works best (or you have no choice) if you know the exact type of all objects kept in a generic container. This is the problem of run-time type identification (RTTI).

    Run-time type identification is all about casting base-class pointers down to derived-class pointers ("up" and "down" are relative to a typical class diagram, with the base class at the top). Casting up happens automatically, with no coercion, because it's completely safe. Casting down is unsafe because there's no compile-time information about the actual types, so you must know exactly what type the object really is. If you cast it into the wrong type, you'll be in trouble.

    Chapter 17 describes the way C++ provides run-time type information.

    Object slicing

    There is a distinct difference between passing addresses and passing values when treating objects polymorphically. All the examples you've seen here, and virtually all the examples you should see, pass addresses and not values. This is because addresses all have the same size, so passing the address of an object of a derived type (which is usually bigger) is the same as passing the address of an object of the base type (which is usually smaller). As explained before, this is the goal when using polymorphism: code that manipulates objects of a base type can transparently manipulate derived-type objects as well.

    If you use an object instead of a pointer or reference as the recipient of your upcast, something will happen that may surprise you: the object is "sliced" until all that remains is the subobject that corresponds to your destination. In the following example you can see what's left after slicing by examining what the virtual function call produces:

    //: C15:Slice.cpp
    // Object slicing
    #include <iostream>
    using namespace std;
    
    class Base {
      int i;
    public:
      Base(int I = 0) : i(I) {}
      virtual int sum() const { return i; }
    };
    
    class Derived : public Base {
      int j;
    public:
      Derived(int I = 0, int J = 0)
        : Base(I), j(J) {}
      int sum() const { return Base::sum() + j; }
    };
    
    void call(Base b) {
      cout << "sum = " << b.sum() << endl;
    }
    
    int main() {
      Base b(10);
      Derived d(10, 47);
      call(b);
      call(d);
    } ///:~

    The function call( ) is passed an object of type Base by value. It then calls the virtual function sum( ) for the Base object. In main( ), you might expect the first call to produce 10, and the second to produce 57. In fact, both calls produce 10.

    Two things are happening in this program. First, call( ) accepts only a Base object, so all the code inside the function body will manipulate only members associated with Base. Any calls to call( ) will cause an object the size of Base to be pushed on the stack and cleaned up after the call. This means that if an object of a class inherited from Base is passed to call( ), the compiler accepts it, but it copies only the Base portion of the object. It slices the derived portion off of the object.

    Now you may wonder about the virtual function call. Here, the virtual function makes use of portions of both Base (which still exists) and Derived, which no longer exists because it was sliced off! So what happens when the virtual function is called?

    You're saved from disaster precisely because the object is being passed by value. Because of this, the compiler thinks it knows the precise type of the object (and it does, here, because any information that contributed extra features to the object has been lost). In addition, when passing by value, it uses the copy-constructor for a Base object, which initializes the VPTR to the Base VTABLE and copies only the Base parts of the object. There's no explicit copy-constructor here, so the compiler synthesizes one. The object truly becomes a Base during slicing.

    Object slicing actually removes part of the object rather than simply changing the meaning of an address as when using a pointer or reference. Because of this, upcasting into an object is not often done; in fact, it's usually something to watch out for and prevent. You can explicitly prevent object slicing by putting pure virtual functions in the base class; this will cause a compile-time error message for an object slice.

    virtual functions & constructors

    When an object containing virtual functions is created, its VPTR must be initialized to point to the proper VTABLE. This must be done before there's any possibility of calling a virtual function. As you might guess, because the constructor has the job of bringing an object into existence, it is also the constructor's job to set up the VPTR. The compiler secretly inserts code into the beginning of the constructor that initializes the VPTR. In fact, even if you don't explicitly create a constructor for a class, the compiler will create one for you with the proper VPTR initialization code (if you have virtual functions). This has several implications.

    The first concerns efficiency. The reason for inline functions is to reduce the calling overhead for small functions. If C++ didn't provide inline functions, the preprocessor might be used to create these "macros." However, the preprocessor has no concept of access or classes, and therefore couldn't be used to create member function macros. In addition, with constructors that must have hidden code inserted by the compiler, a preprocessor macro wouldn't work at all.

    You must be aware when hunting for efficiency holes that the compiler is inserting hidden code into your constructor function. Not only must it initialize the VPTR, it must also check the value of this (in case the operator new returns zero) and call base-class constructors. Taken together, this code can impact what you thought was a tiny inline function call. In particular, the size of the constructor can overwhelm the savings you get from reduced function-call overhead. If you make a lot of inline constructor calls, your code size can grow without any benefits in speed.

    Of course, you probably won't make all tiny constructors non-inline right away, because they're much easier to write as inlines. But when you're tuning your code, remember to remove inline constructors.

    Order of constructor calls

    The second interesting facet of constructors and virtual functions concerns the order of constructor calls and the way virtual calls are made within constructors.

    All base-class constructors are always called in the constructor for an inherited class. This makes sense because the constructor has a special job: to see that the object is built properly. A derived class has access only to its own members, and not those of the base class; only the base-class constructor can properly initialize its own elements. Therefore it's essential that all constructors get called; otherwise the entire object wouldn't be constructed properly. That's why the compiler enforces a constructor call for every portion of a derived class. It will call the default constructor if you don't explicitly call a base-class constructor in the constructor initializer list. If there is no default constructor, the compiler will complain. (In this example, class X has no constructors so the compiler can automatically make a default constructor.)

    The order of the constructor calls is important. When you inherit, you know all about the base class and can access any public and protected members of the base class. This means you must be able to assume that all the members of the base class are valid when you're in the derived class. In a normal member function, construction has already taken place, so all the members of all parts of the object have been built. Inside the constructor, however, you must be able to assume that all members that you use have been built. The only way to guarantee this is for the base-class constructor to be called first. Then when you're in the derived-class constructor, all the members you can access in the base class have been initialized. "Knowing all members are valid" inside the constructor is also the reason that, whenever possible, you should initialize all member objects (that is, objects placed in the class using composition) in the constructor initializer list. If you follow this practice, you can assume that all base-class members and member objects of the current object have been initialized.

    Behavior of virtual functions inside constructors

    The hierarchy of constructor calls brings up an interesting dilemma. What happens if you're inside a constructor and you call a virtual function? Inside an ordinary member function you can imagine what will happen: the virtual call is resolved at run-time because the object cannot know whether it belongs to the class the member function is in, or some class derived from it. For consistency, you might think this is what should happen inside constructors.

    This is not the case. If you call a virtual function inside a constructor, only the local version of the function is used. That is, the virtual mechanism doesn't work within the constructor.

    This behavior makes sense for two reasons. Conceptually, the constructor's job is to bring the object into existence (which is hardly an ordinary feat). Inside any constructor, the object may only be partially formed; you can only know that the base-class objects have been initialized, but you cannot know which classes are inherited from you. A virtual function call, however, reaches "forward" or "outward" into the inheritance hierarchy. It calls a function in a derived class. If you could do this inside a constructor, you'd be calling a function that might manipulate members that hadn't been initialized yet, a sure recipe for disaster.

    The second reason is a mechanical one. When a constructor is called, one of the first things it does is initialize its VPTR. However, it can only know that it is of the "current" type. The constructor code is completely ignorant of whether the object it is building is the base part of some other class's object. When the compiler generates code for that constructor, it generates code for a constructor of that class, not a base class and not a class derived from it (because a class can't know who inherits it). So the VPTR it uses must be for the VTABLE of that class. The VPTR remains initialized to that VTABLE for the rest of the object's lifetime, unless this isn't the last constructor call: if a more-derived constructor is called afterward, that constructor sets the VPTR to its own VTABLE, and so on, until the last constructor finishes. The state of the VPTR is determined by the constructor that is called last. This is another reason why the constructors are called in order from base to most-derived.

    While this series of constructor calls is taking place, each constructor has set the VPTR to its own VTABLE. If it uses the virtual mechanism for function calls, it will produce only a call through its own VTABLE, not the most-derived VTABLE (as would be the case after all the constructors were called). In addition, many compilers recognize that a virtual function call is being made inside a constructor, and perform early binding because they know that late binding will produce a call only to the local function. In either event, you won't get the results you might expect from a virtual function call inside a constructor.

    Destructors and virtual destructors

    Constructors cannot be made explicitly virtual (and the technique in Appendix B only simulates virtual constructors), but destructors can and often must be virtual.

    The constructor has the special job of putting an object together piece-by-piece, first by calling the base constructor, then the more derived constructors in order of inheritance. Similarly, the destructor also has a special job: it must disassemble an object that may belong to a hierarchy of classes. To do this, the compiler generates code that calls all the destructors, but in the reverse order that they are called by the constructor. That is, the destructor starts at the most-derived class and works its way back to the base class. This is the safe and desirable thing to do: the current destructor always knows that the base-class members are alive and active because it knows what it is derived from. Thus, the destructor can perform its own cleanup, then call the next destructor down, which will perform its own cleanup, knowing what it is derived from, but not what is derived from it.

    You should keep in mind that constructors and destructors are the only places where this hierarchy of calls must happen (and thus the proper hierarchy is automatically generated by the compiler). In all other functions, only that function will be called, whether it's virtual or not. The only way for base-class versions of the same function to be called in ordinary functions (virtual or not) is if you explicitly call that function.

    Normally, the action of the destructor is quite adequate. But what happens if you want to manipulate an object through a pointer to its base class (that is, manipulate the object through its generic interface)? This is certainly a major objective in object-oriented programming. The problem occurs when you want to delete a pointer of this type for an object that has been created on the heap with new. If the pointer is to the base class, the compiler can only know to call the base-class version of the destructor during delete. Sound familiar? This is the same problem that virtual functions were created to solve for the general case. Fortunately, the virtual mechanism works for destructors as it does for all other functions except constructors.

    Even though the destructor, like the constructor, is an "exceptional" function, it is possible for the destructor to be virtual because the object already knows what type it is (whereas it doesn't during construction). Once an object has been constructed, its VPTR is initialized, so virtual function calls can take place.

    A destructor can even be declared pure virtual, but unlike an ordinary pure virtual function it must still be given a function body, because (unlike ordinary functions) all destructors in a class hierarchy are always called. Here's an example:

    //: C15:Pvdest.cpp
    // Pure virtual destructors
    // require a function body.
    #include <iostream>
    using namespace std;
    
    class Base {
    public:
      virtual ~Base() = 0; // Pure virtual destructor
    };
    
    // Even a pure virtual destructor requires a body,
    // because every destructor in the hierarchy is called:
    Base::~Base() {
      cout << "~Base()" << endl;
    }
    
    class Derived : public Base {
    public:
      ~Derived() {
        cout << "~Derived()" << endl;
      }
    };
    
    int main() {
      Base* bp = new Derived; // Upcast
      delete bp; // Virtual destructor call
    } ///:~

    As a guideline, any time you have a virtual function in a class, you should immediately add a virtual destructor (even if it does nothing). This way, you ensure against any surprises later.

    Virtuals in destructors

    There's something that happens during destruction that you might not immediately expect. If you're inside an ordinary member function and you call a virtual function, that function is called using the late-binding mechanism. This is not true with destructors, virtual or not. Inside a destructor, only the "local" version of the member function is called; the virtual mechanism is ignored.

    Why is this? Suppose the virtual mechanism were used inside the destructor. Then it would be possible for the virtual call to resolve to a function that was "further out" (more derived) on the inheritance hierarchy than the current destructor. But destructors are called from the "outside in" (from the most-derived destructor down to the base destructor), so the actual function called would rely on portions of an object that have already been destroyed! Thus, the compiler resolves the calls at compile-time and calls only the "local" version of the function. Notice that the same is true for the constructor (as described earlier), but in the constructor's case the information wasn't available, whereas in the destructor the information (that is, the VPTR) is there, but it isn't reliable.

    Summary

    Polymorphism, implemented in C++ with virtual functions, means "different forms." In object-oriented programming, you have the same face (the common interface in the base class) and different forms using that face: the different versions of the virtual functions.

    You've seen in this chapter that it's impossible to understand, or even create, an example of polymorphism without using data abstraction and inheritance. Polymorphism is a feature that cannot be viewed in isolation