emBuild-atool -- embedded software build tools

by Ted Merrill.  Last Revised 2006.06.10
(NOTE: the revision control system that comes with atool will be very different in the next release of atool).

About embuild-atool
Handy Links
Concepts
Performing Common Tasks
Performing Administrative Tasks
Programs

About embuild-atool

(Note: embuild-atool is not related to the archiving tool by the name "atool".)

Description: embuild-atool (atool for short) is an embedded software build environment and toolkit that most notably includes the following features:

Other uses:


Development: atool has been developed (largely by Ted Merrill, emBuild.org) since 1986 (since 1996 under the sponsorship of ArrayComm, Inc.) and has been used by hundreds of software engineers working on embedded software projects.  

Legal / License: atool has been released to the public domain (except a few files are GPL or LGPL, which is clearly marked at the tops of these files).  Except for the few GPL/LGPL files, you may do whatever you want with this code.  However, the term embuild is a trademark of Ted Merrill.  You are requested to rename any derivative works to a new name that does not include embuild.  For example, you might take embuild-atool and modify and redistribute the modified code as foo-atool, where foo is a name other than embuild.

Disclaimer: atool source code is offered without warranty of any kind, in the hope that it might be useful to someone. Neither the developers nor sponsors of atool offer support or take responsibility for its use; use at your own risk.

Platforms: atool has been used extensively on i86 Linux ("Linux") and Sparc SunOS 5.x  ("sun5") development machines; it should work with only minor modifications on any Unix-compatible operating system. It may require a case-sensitive filesystem (perhaps a problem for Mac users).  It would probably be difficult to get it to work under CygWin.

Some possible disadvantages to using atool:

Distribution: embuild-atool is distributed in two main forms.  The full distribution occupies an entire CD and includes source and output code for all components, including the gnu tools compiled for Linux, and also a set of Linux C library files (no source code).  Due to excessive size and problems with portability of the gnu binaries and C library files, there is no public outlet for this distribution.  The publicly available distribution (see Handy Links) includes source code only, and does not include gcc tools or C library files; it by default uses the native gnu tools.  Hopefully, sufficient instructions are included so that one can bring the compiler and other gnu tools and library files under version control (especially important for cross-compilation).

Handy Links

This document (that you are now reading) is not intended as an authoritative reference manual for embuild-atool.  Such documentation is included with the source code.

The latest revision of this document (that you are now reading)  may be found at http://embuild.sourceforge.net or at http://embuildsw.org .

Downloading: embuild-atool source code and other resources may be found from the SourceForge embuild project page  http://sourceforge.net/projects/embuild .

atool Concepts

Source and Output Files

embuild-atool makes a strong distinction between source and output files.  Source files are edited by the user, using methods outside of the scope of atool.  Output files are in principle created by a single, non-iterative step from the source files.  Output files are in principle recreatable at any time from source files.  (For efficiency of development, atool supports a variety of methods of incremental building).  Output and source files are stored in separate directories.

Version

embuild-atool uses the term version to refer to a single universe of software, e.g. a set of all source files and corresponding output (a.k.a. object) files sufficient to satisfy the requirements for a product (family) or other project... i.e. a version of software for that project.  There may be many related versions sharing some of the same source and output files while having other files be unique. embuild-atool defines a variety of ways of sharing such files, which makes it powerful for software development.  A version is embodied by a version directory.

Revision

embuild-atool uses the term revision to refer to a particular controlled instance of an atomic unit of software source code, which for embuild-atool is not a single file but instead a module source directory.  (Output files are controllable only in a much looser fashion, e.g. frozen versions).

Shell

embuild-atool is a set of command line programs that you run from a shell.  Specific support is provided for sh/bash and for csh/tcsh.  Before using the programs, a few environment variables must be set, plus the shell alias "chver"; this is explained in the installation instructions.  Setting these variables allows the shell to use the correct tools and the tools to find the correct files.

Host Architecture

The term "host" refers to the development host machine(s) (which must be Unix-compatible).  When using atool programs, the environment variable $ARCH must be set to indicate the host architecture name.  The name "Linux" is used for typical Pentium-class or better i86-compatible cpus running the Linux operating system.  The name "sun5" is used for typical Sun Sparc machines running the SunOS 5.x operating system.

This glosses over substantial differences between different revisions of the operating systems, and different distributions.  Binary compatibility of program executables frequently requires the availability of specific versions of shared object libraries; programs linked using new distributions frequently fail to run under older distributions.  System services such as password entry retrieval are not supported by system calls or any other organized method; instead the C library may look in a variety of "standard" places, which are in fact not standard between distributions, perhaps not even for the same distributor.

Such issues become important in a networked system where people using different computers expect to be able to share executable programs.  One solution is to ensure that all computers are always updated at exactly the same time.  A better solution is to bring the compilers, linkers, include files and libraries into the development system, under version control, and to link all programs absolutely, instead of using shared libraries.  This entails somewhat higher program load times than when using shared libraries, but this may not be very noticeable.  The atool distribution provided on SourceForge does not include compilers and libraries and is configured to use native tools; to use version-controlled tools see Configuring New Targets.
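The host naming convention above could be automated when setting up a shell.  The following is only a sketch under assumptions of my own: the uname-output patterns and the arch_name helper are illustrative, not part of atool, and the patterns are not exhaustive.

```shell
# Sketch: derive the $ARCH host architecture name from uname output, using
# the naming convention above ("Linux" for i86-compatible Linux hosts,
# "sun5" for Sparc SunOS 5.x hosts).  Patterns are illustrative assumptions.
arch_name() {
  case "$1" in
    Linux\ i*86 | Linux\ x86_64) echo Linux ;;
    SunOS\ sun4*)                echo sun5 ;;
    *)                           echo unknown ;;
  esac
}
ARCH="$(arch_name "$(uname -sm)")"
export ARCH
```

Putting something like this in a login file saves each user from setting $ARCH by hand on shared, mixed-architecture networks.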

Target Architecture

The term target literally refers to the machine for which code is being compiled (as distinct from the host, which is the machine doing the compiling).  As an atool user, you must define target architecture types for which you are developing code.  Thus target is also used as shorthand for a target architecture type.

The following must be configured per target architecture and per version in the version configuration file:

I recommend (and atool easily supports) a three-tiered definition of target architectures:

Additional intermediate levels may be used as your powers of organization suggest.

Target Aliases

A group of targets (target architecture types) may be referred to by a single name (a target alias).  For example, I typically use the target alias "all" to refer to all generic targets, "big" for targets capable of supporting onboard debugging (e.g. printf) and group similar product components into other aliases. A target alias name may be used in most places where a target name may be used, which (with some forethought) greatly simplifies the addition of new target architectures to a project.

Current Directory

atool doesn't make much use of the Unix current directory concept.  The major exception is the atool make program (amake) where the current directory selects the module to be compiled.

Module Directory

atool is highly oriented around the module directory concept.  A module has some source code that is made (built) using the amake program according to instructions in a module source file named Amake. A module has output files created by amake. The atool version control tools (for multiuser versions) lock and update a module as an atomic entity.

A module directory logically contains two subdirectories:

A module is known to be a module because it lives beneath a version source directory, or alternately because you manually run amake with the module directory as your current directory (in which case amake must be able to identify which version the module belongs to by searching upwards to find an Aconfig file, or a soft link named Aconfig that points to the Aconfig file of the version directory).
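The upward search just described can be sketched in a few lines of shell.  This is only an illustration of the idea, not the code amake actually uses; the function name is my own.

```shell
# Sketch of the upward Aconfig search: starting from a module directory,
# walk parent directories until a directory containing an Aconfig file
# (or an Aconfig soft link) is found; that directory is the version directory.
find_version_dir() {
  d="$1"
  while [ "$d" != / ] && [ "$d" != . ]; do
    if [ -e "$d/Aconfig" ]; then
      echo "$d"
      return 0
    fi
    d="$(dirname "$d")"
  done
  return 1    # no enclosing version directory
}
```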

Module Source Directory

A module source directory $S typically contains a small number of files, although it can contain hundreds if necessary (e.g. a standard C library for embedded targets).  I highly advise that the module be somewhat self-contained, and include such things as .c files, .h files, documentation files, unit test programs and data, etc.  In many commonly used systems, related files are stored in unrelated directories according to their file type: .h files are stored in an include or "h" directory, .c files beneath a "src" directory, etc.  The atool scheme rejects such a division; all related files should be kept in the same module directory.  However, note the install directory concept.

The source code directory must contain a file named "Amake", which is the configuration file for the atool make program (amake). The module source directory $S is found from the module directory as the first of:

 This extraordinary flexibility in locating the module source directory from the module directory allows the module source to be specified in an interesting variety of ways including:



Module Output Directory

In general, the output directory may be deleted at any time and recreated from the source code using the atool make program (amake)  (with an exception for a few "boot" modules needed to create amake itself).  The output directory typically contains one level of subdirectories.  Relocatable object files for target x go into o/lib.x ,  executable files for target x are in o/bin.x , automatically generated include files in o/include , automatically generated documentation files in o/doc .  This intentionally mirrors the layout of the version install directory.

DO NOT PLACE ANY SOURCE CODE IN AN OUTPUT DIRECTORY... it will likely be lost.

Version Directory

A version directory is intended to be a code development universe, possibly even containing all of the software development tools including compilers and linkers. atool uses the term "version directory", or for short just "version", to refer to a directory  with the following components:


For this to work for you, the PATH environment variable set in your shell (and equivalent environment variables for perl etc.) needs to be modified so that you are running programs out of the correct version install directory.  Fortunately, atool provides a very simple way of doing this.  The shell alias chver switches these environment variables to the version directory that you specify, and "chver -remove" returns them to their original values.  In addition, chver sets the environment variable VERDIR in your shell to the path of the version directory that you specify.
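Conceptually, chver does something like the following.  This is a hypothetical sketch, not the shipped chver implementation (which also handles perl paths and "chver -remove"); the bin.$ARCH visible-directory name is an assumption based on the install-directory layout described elsewhere in this document.

```shell
# Hypothetical sketch of what chver conceptually does: record the version
# directory in VERDIR and put its install bin directory first on PATH.
chver_sketch() {
  VERDIR="$1"
  export VERDIR
  # assumed layout: host-architecture tools under $VERDIR/i/bin.$ARCH
  PATH="$VERDIR/i/bin.$ARCH:$PATH"
  export PATH
}
```

After running something like this, commands such as amake resolve to the selected version's install directory rather than to whatever happens to be on the original PATH.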

Version Configuration File

The version configuration file "Aconfig" lives directly beneath the version directory.  A directory is a version directory only if it contains a regular file named "Aconfig".  If a symbolic link named "Aconfig" is found, then this directory and subdirectories thereof have an extended relationship with the version directory actually containing the configuration file.

The version configuration file is a catchall for all miscellaneous configurations that for various reasons should not simply be in a module, but excluding multiuser version configuration, which is handled by the Multiuser file.  In writing this section, I have gotten the urge to greatly reduce what is kept in the Aconfig file. Here are some of the things typically now in Aconfig:

Version Source Directory

Beneath the version directory is the version source directory "src".  Beneath the version source directory are module directories, where all source and output files exist (apart from the version configuration file Aconfig).  Modules may be named arbitrarily in a self-contained version so long as they don't conflict with each other and obey certain basic syntactic rules, but the names become very important when a version borrows from another version(s); in this case, each module in the current version is assumed to be a replacement for any like-named module in a borrowed version.

Install Directory

Modules should use only files from within the containing version directory. If a module uses files from outside the version directory, then it is no longer version-controlled, and one loses control over how that module may be used. To complicate things further, for maximum flexibility, the module source code  must not know which version it belongs to; in fact, it may be linked into different versions. The module source code exists outside of any version, but comes to life only when linked (or perhaps copied) as the s link or subdirectory of a module directory.  For further flexibility, a module should not know what other module provided a file (except where it was the module's own file). Since a module doesn't know which version it belongs to and yet must only use files from a version, and doesn't even know what module a file came from, this might sound impossible.  

The solution is the version install directory $I, which is always the "i" subdirectory of a version.  The version install directory contains one layer of subdirectories, known as "visible directories". These visible directories have names such as "include", "lib.linux",  "bin.linux", "doc", etc.  Beneath the visible directories are symbolic links which point to files within module directories. Thus these "exported" files appear as if they exist beneath the visible directories.  Using symbolic links is not only more efficient  (both in space and installation time) and more reliable than using a copy, but also is self-documenting; listing the link shows immediately what module it came from.
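The symlink arrangement just described can be illustrated concretely.  This is a hedged sketch with a hypothetical module name "foobar" and hand-placed paths; in practice mmInstall maintains these links, not the user.

```shell
# Hedged sketch: after installation, an exported header under the visible
# directory i/include is a symbolic link back into its module, so simply
# listing the link shows which module provided the file.
V=./demo-version                      # stand-in for a version directory
mkdir -p "$V/src/foobar/s" "$V/i/include"
echo '/* foobar.h */' > "$V/src/foobar/s/foobar.h"
ln -sf ../../src/foobar/s/foobar.h "$V/i/include/foobar.h"
ls -l "$V/i/include/foobar.h"         # link target names the providing module
```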

Visible Directory

A visible directory is a subdirectory of the install subdirectory of a version directory. See description under Install Directory.

Module Exports

In addition to creating the regular target files beneath the module output directory $O, amake also creates the module exports file, $O/adm/Exports.  A file is "exported" when it is listed in the Exports file.  Based on instructions derived from the contents of the module Amake file and the files it pulls in, amake creates a list of files local to the module and under what visible directories and by what name the files are to be known.  Most of the time, the file is known by the same name as the local name, but for special purposes a file may be exported using different name(s).  Files beneath the module output directory are usually created in a subdirectory with a name matching a visible directory, but files in the module source directory are not.

The programs mm ("mastermake") and mmInstall fix up the contents of the visible directories to agree with the Exports files of all modules that are either contained in the version source directory or are borrowed by the version (i.e. belong to a borrowed version and don't have the same name as a module closer to the current version or in the current version).

Module Imports

A file external to a module is said to be imported by the module if it is referenced in any way in the process of using amake (and e.g. doing the compilations that amake directs).  To be more exact, file paths are imported, including file paths that do not lead to a currently existing file... since they might do so later. Only filepaths relative to the install directory $I are considered, making the filepaths portable (i.e. useful even if you copy the version to somewhere else). With the help of various plugins, amake is largely successful at making a list of all the imported file paths, and stores it when done in the file $O/adm/Imports.

The principal use of the Imports file is so that the mastermake program mm can determine the correct order to run amake on modules. In addition, mm creates a report file, $VERDIR/mm/MMreport, that shows the relationships between modules in a human readable form.

Module Revision

A module revision directory, like any module source directory, contains a set of source code for a module (usually a few files: e.g. C source files, perhaps a documentation file, and necessarily the module configuration file named Amake).  However, a module revision directory is not necessarily in use in a module.  Typically, module revision directories are named in the form of YYYYMMDD.hhmmss (year, month, day, hour, minute, second) according to when they were first added to their repository.  The usual repository for module revisions is beneath a directory with the name of the module for which the revision was intended, which in turn is beneath the version revision directory, $VERDIR/rev .
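A revision directory name of this form is easy to generate with standard tools.  A small sketch, using the hypothetical module name "foobar" (the mkdir path mirrors the $VERDIR/rev layout described above):

```shell
# Revision directory names use the form YYYYMMDD.hhmmss; date(1) can
# generate such a name directly from the current time.
rev="$(date +%Y%m%d.%H%M%S)"
mkdir -p "rev/foobar/$rev"    # repository slot for this revision of foobar
echo "$rev"
```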

Version Borrowing

One of the most powerful features of atool is version borrowing. As configured in the version configuration file (typically, a softlink named borrow appears below the version directory), the version borrows as follows from some other version:

atool provides the powerful commands (for single-user versions) addmodule and copymodule.  addmodule with a simple module name argument creates a module in your current version with a soft link "s" pointing to the module source code of the borrowed-from version.  With a full(er) pathname, "s" points to the given directory.  copymodule creates a copy of the source code instead of a link to it.  In both cases, a link "s.orig" is created that points to the original source code revision (where possible).  Typically one should add modules to a new version including: the module that one is changing; the module that links the executables to be tested; and any modules needed to avoid hidden imports problems (although this can be taken care of automatically instead).

Fixing the Hidden Imports Problem

A module import is where a file from module A is used in the building of output code of module B (by way of the install directory).  A hidden import is where the output files of module B are based upon an incorrect revision of module A.  This occurs if you've put a modified revision of module A in your version directory, but module B is borrowed using version borrowing (and module B in the borrowed version imports from the module A that was in the borrowed version).  In this situation, your changes to your local module A have no effect whatsoever on module B because module B is never recompiled.  Only local modules get recompiled.  The problem may also affect a local module C which imports from B (and possibly from A); changes to module A do not propagate forward to module C.

[As you might gather,  a module in the local version is considered by mastermake to be the same module as (but possibly different revision than) a module in the borrowed version if they have the same name.  This approach is satisfactory most of the time.]

Example: suppose that module A in the borrowed version defines a data struct as:

struct wiz
{
    int foo;
    int bar;
};

whereas in the local module A this has been rewritten, switching the offsets of foo and bar:

struct wiz
{
    int bar;
    int foo;
};

If modules B and C both use this data struct, clearly both should be recompiled.  However, module B will not be recompiled unless it is local to your version, which for this example it was not.

The solution is to bring module B into your version directory.  It is not necessary to actually copy the source code revision directory for module B, since a soft link to it named s will suffice.  This can be done manually, e.g. with the addmodule command.   However, it would be tedious to discover and correct all of these cases manually.  Mastermake provides your choice of three possible automatic HiddenImportsFix solutions:

If you have a good understanding of module interdependencies, you may set HiddenImportsFix to off and do all necessary addmodule's or copymodule's yourself.  Mastermake will announce warnings for every hidden imports case it sees.  In a large number of cases, the warnings are bogus.  For example, suppose that you did not make an externally visible change to module A (in the above example) but only an internal implementation change that would have an effect only when linking programs, and further suppose that module B creates no programs.  This is a very common case, but mastermake doesn't have the smarts to detect this case.  Automatic hidden import fixing can provide a much more optimized solution than e.g. pulling in and recompiling every module of a borrowed version, but automatic hidden import fixing typically involves much more recompilation than what a knowledgeable user can do manually.

Lightweight Version

A lightweight version directory is a version directory (single- or multi-user) that borrows heavily from another version (which may borrow recursively).  The more the version borrows, and thus the less that is in the version, the lighter it is. At the (non-useful) extreme is a version that has no modules of its own but borrows all from another version.  Such a version is easily created using the newversion command, and becomes useful as modules are added to it, replacing (by name) modules in the borrowed-from version(s) or adding modules with new names.  See the section on borrowing.

Single-user Version

A single-user version is one that has no revision control at all (apart from something you may do informally).  This includes typical  lightweight versions and all frozen versions.

Multiuser Version

For a multiuser version, all files are made effectively readonly for ordinary users (by making them belong to a special user id and name, with write permission for owner only).  The least confusion results if the special user is in fact not a real user, but one invented for this specific purpose.  Multiple administrators of such a multiuser version (or versions sharing the same special user) may all know the password for this special user name.  On the other hand, it is way overkill to use the "root" user id.  Make sure that the shell login files for your special user use the umask command to set the file creation mask of the shell such that files are by default created with write permission for owner only.
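As a concrete illustration of the umask requirement (the value 022 here is my suggestion, not something atool mandates):

```shell
# umask 022 removes group/other write bits, so files created by the special
# user default to write permission for owner only (readable by everyone).
umask 022
touch demo.txt
ls -l demo.txt | cut -c1-10    # -rw-r--r--
```

Placing the umask line in the special user's shell login files ensures every file it creates (including those made by mastermake) gets these permissions.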

Since users cannot directly modify files in a multiuser version, they must do so via setuid programs:  mm, mmInstall and the amanager family.  Actually, these programs are not themselves setuid but make use of the Setuid feature.

The atool programs know that a version directory is multiuser by the existence of a second configuration file (the first was Aconfig) named "Multiuser" with certain fields set.  In addition to setting Multiuser mode,  this also sets certain options available only in multiuser mode.  Among these options are:



Frozen Version

atool implements output file control (a.k.a. object file control) in two ways (only for multiuser versions).

First, the checkin program checks in output files as well as source files, going as far as running amake to make sure they are up to date.  The output files are stored in the module output directory for the module in the multiuser version.  However, it is impractical to immediately propagate the results of checking in one module onto the module output directories of other modules (although checkin does run mmInstall to fix up the visible directories).

To ensure consistent output files for all modules of a version, it is necessary to do a mastermake.  If mastermake is done in place, then such consistency is quickly lost as soon as someone checks the next module in (perhaps even before the mastermake is finished). To get around this problem, the Multiuser file can configure that mastermakes are done in conjunction with a frozen version. In this mode, the following things are done:

Each frozen version provides a consistent set of code for a given point in time, which can be quite useful.  It also can consume a lot of disk space. To deal with the disk space problems, several approaches may be used:

It is possible to restore frozen versions that have been removed using the restoreVersion program and the restore file upon which the frozen version was based.  In general, the output files recreated will not be exactly identical to the originals due to e.g. embedded compilation dates.  In addition, note that there is a bootup issue; it may be necessary to seed the restored frozen version with compatible output files for some of the tools, or use a tool bootup procedure.

It must also be stressed that frozen versions do not contain source code, only links to source code.  The places these links point to must remain unchanged for as long as the frozen version is to be useful.  Frozen versions work best in conjunction with management of checked-in revisions using a revision directory, taking care never to remove any checked-in revisions.  Fortunately, the source code revision directory for a multiuser version typically never grows very large (by modern disk size standards) even after years of use and many contributors.  Provided that you are saving as source code only e.g. typical C language files and similar, you should never have to worry about the disk space occupied by source code revisions.

Setuid

Write access to multiuser versions requires gaining privilege through a setuid program.  So that the tools themselves can be conveniently version controlled, they are not marked setuid.  Instead a small non-version-controlled program is used.  (This is similar in spirit to the "sudo" program).  The source, object and configuration for this small program live in a directory conventionally named Setuid.  The default configuration is to have a Setuid directory beneath (and serving) each multiuser version directory.  However, such a directory may live elsewhere (location configured in the Multiuser file) and may serve several version directories that are owned by the same owner id.  The configuration file within the Setuid directory lists both the version directories and program names to which this privilege applies, so that the setuid program may ensure that the desired executable is run from the proper version visible directory.

NOTE: this is not intended as a high security system.  If a user is permitted to check in code changes, then they could check in code that would gain the full privilege of the special owner of the multiuser version (when mastermake runs, or by replacing the management commands).  Many other security holes probably exist.  This system is designed only to keep honest people honest.

amake Dependency Predictors

The problem to be solved: An object file (or C preprocessor output) depends upon the source file it was built from, plus all of the include files it was built from (and also all of the nonexistent files that, had they existed, would have been chosen as an include file instead of a file that was actually chosen).  Some of these include files might need to be automatically created in the current module in advance of the object file.  On the other hand, some include files in the current module might be generated by special-purpose programs which are built in the current module for that purpose; so we can't simply have a rule that says "make all include files first".

The atool solution: use a predictor that predicts which include files might be included by a given file (recursively).  This is easily done by examining all of the #include statements... although such an easy approach can give the wrong answer due to #ifdefs. Still, something must be done, so such predictors are used. But they are only used as predictors, and only for the purpose of identifying output files of the  current module that need to be made first. Predicted dependencies are not used for anything else, and in particular are not remembered in the amake database (but see about finalizers).  Note that any dependency known with certainty before amake builds a file is both a predictor and a finalizer.
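The #include-scanning idea can be sketched in one sed command.  This is a minimal one-level illustration, not amake's actual predictor (which recurses and must still cope with being fooled by #ifdefs, which is exactly why the results are treated as predictions only):

```shell
# Minimal one-level include predictor: print the filename named by every
# #include line (both <...> and "..." forms) in a C source file.
predict_includes() {
  sed -n 's/^[[:space:]]*#[[:space:]]*include[[:space:]]*[<"]\([^>"]*\)[>"].*/\1/p' "$1"
}
```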

amake Dependency Finalizers

In some cases, the true dependencies are not known until after a target file is created.  For example, the actual list of #include files used by the C preprocessor is not known until after the preprocessor has run.  For many compilers, the preprocessor output can be examined and contains the names of all files visited.  This final information about dependencies is known as a finalizer.  Also, any dependency known earlier that is not merely a predictor is a finalizer.

Performing Common Tasks

Creating a New Module

The program newmodule is a command line wizard type of program that creates a new module based upon a template set of your choice.  You add your application code to the files generated by the template.  By default in a multiuser version (if you give a simple module name), newmodule will put a lock on that name and put the module directory in the standard place for checked-out modules.  By default in a single user version, if you give a simple module name, newmodule will put the module directory into the version source directory.  If you give a path instead, then newmodule will put the module directory it creates there.

The efforts of many are greatly simplified by carefully setting up useful templates in advance that incorporate standard methodology.  For example, templates can include code for module initialization (depending on target), module memory allocation, module debugging methods and help messages for those.

Modifying Existing Modules

For a single user version that will always be a single user version, with no revision control, you simply modify files as you wish.  New templated files may be added with the command line wizard program "newfile".

For a multiuser version, the simplest approach is to use the checkout program.  This puts a lock on the module and gives you a copy of the module source and  output directories in a standard place (configurable in version Multiuser file).  You then edit the files of the module with your favorite editor.

Header Comment Coding Style

No need to ever write a .h file again! atool provides automatic extraction of data definitions and function prototypes.  Well of course, nothing is free. You really would not want to put every function and definition into an auto-generated include file.  In fact it is best to create two categories of include files; one for use within a module, and one for export for use by other modules.

To make it clear what you want to have extracted (if anything) by cmakeFun you need to precede the thing with special comments.  For example,  to add a function to the automatically created file for internal and external use:

/*-F- foobarBellRing -- strike a bell for poetic justice.
*/
int foobarBellRing( int Tonality )
{
    /* ... function body ... */
}

For a function to be used within the module only, you would use a lower case "f".
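To make the extraction idea concrete, here is a hypothetical sketch in awk; it is NOT the real cmakeFun, and it assumes (as in the example above) that the comment's closing */ is on its own line with the function signature on the line after it.

```shell
# Hypothetical sketch of -F- extraction: for each /*-F- marker comment,
# print the signature line that follows it, turned into a prototype.
extract_prototypes() {
  awk '
    /\/\*-F-/      { infn = 1; next }             # marker comment opens
    infn && /\*\// { grab = 1; infn = 0; next }   # comment closes
    grab           { print $0 ";"; grab = 0 }     # signature -> prototype
  ' "$1"
}
```

Run on the foobarBellRing example above, this would print "int foobarBellRing( int Tonality );".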

If your C file #includes a header file that is extracted from the C file, it will be necessary to prevent duplicate definition of typedefs and some other C constructs.  Also, definitions require a special ending comment. Here is an example:

#if 0 /* auto-extract only */
/*-D- foobarToneType -- supported tone types
*/
typedef enum
{
    foobarTone_Precise,
    foobarTone_Fuzzy
} foobarToneType;
/*------------------------------------------------------------*/
#endif /* end auto-extract */

One more example. In order for your include file to be parsable when #included into some other file, it may be necessary to have some other #includes first. A simple solution:

/*-D- Includes required in order to parse auto-generated .h file
*/
#include <widget.h>
/*----------------------------------------------------------*/

Speaking of #includes, you should use the angle-bracket form with simple filenames whenever possible.  amake does not properly predict dependencies in some cases (and for some compilers) if you use the double-quote form, and does not know what to do with the macro form at all.

This section is a very fast overview of this subject; be sure to refer to the documentation that comes with atool for more precise instructions.

Ensuring Proper Linking

With some computer languages, #including the equivalent of a module header file would also instruct the linker to link in the required code.  Not so with C. Some header files do not require linking any code, whereas others may have more exotic requirements.  atool can't totally relieve this burden, but it does make it substantially easier.

Where atool makes life easier is the case where a library that you use in turn needs another library (in general, unknown to you).  atool associates some special include files with every library.  First, there is a one-to-one relationship between each library file you generate (e.g. $O/lib.linux/foo.a) and an auto-generated low-level "clink" include file (e.g. $O/lib.linux/foo.clink).  Then, to add flexibility, there is a manually provided (but usually taken straight from templates) single "cmake" include file (e.g. $O/lib.linux/foo.cmake) which most commonly simply refers to the corresponding .clink file.
You DO have to do something.  When you specify inputs to your library in the module configuration file Amake, you also specify any libraries that your library will need by naming their "cmake" files (for flexibility, you never refer to the .clink files except in the .cmake files).  An example may help:

Here is the example portion of Amake file:

[Make]
cmake -Library foo.a -t linux,product -export
    fooBar.c
    fooWidget.c
    -t linux fooSimulation.c
    othermodule1.cmake
    othermodule2.cmake
    -t linux othermodule3.cmake

This creates the code library foo.a for the target architectures linux and product (product might be an alias referring to several targets); the code library (for each target) plus associated files are exported (marked to be installed).  The manually provided (but usually template-derived) file foo.cmake must exist in the module source so it can be exported. The files fooBar.c and fooWidget.c are compiled for all targets, whereas fooSimulation.c is compiled for linux only. All targets use code or data from othermodule1 and othermodule2, but only linux uses othermodule3.

All of the .cmake files usually simply refer to the .clink files; for example, foo.cmake (typically created directly from templates) looks like:

foo.clink

Given that this is true for othermodule1, othermodule2 and othermodule3, as well as for foo itself, the auto-generated foo.clink file for linux will look like:

foo.a
othermodule1.clink
othermodule2.clink
othermodule3.clink

In other words, to link with foo, you link in the library foo.a, plus recursively all of the libraries for othermodule1 (including othermodule1.a) and so on.
For non-linux targets, the foo.clink file omits othermodule3.

The above can get more complicated in rare special cases.  The most common case is where an obsolete library (replaced with a new one) is covered by e.g. a file bar.cmake that now directs things to foo:

foo.cmake

The particularly nice thing about this recursive approach is that, provided you avoid dependency loops between modules, you never have to worry about the order in which you specify the .cmake files: provided that every module properly declares all its immediate dependencies, amake can easily sort out the correct linking order by simply omitting all but the last reference to each library.  The downside is that if someone forgets to list a needed .cmake file, then sometimes (though not if some other module happens to provide the missing dependency) programs in other modules won't link.  A solution for that is to link test programs in every module (if nothing else, they prove that the dependencies are correct).  Unfortunately, linking is often slow, and so the practical thing to do is to put test programs in only some of the modules.

Testing Changes Before Committing

In an ideal world, every module would have a test program allowing complete validation of your changes before check-in.  But of course, even if your code is entirely self-consistent, changes you made to the external interface may result in something else breaking... or something else may already be broken in a way that your changes will reveal.  Thus testing with a large part of the version directory (from which you checked out the module) is crucial.  You could, of course, simply check in your changes (assuming amake is happy), perform a mastermake in the version, and then test the version... but with multiple people doing the same thing at the same time, this simply won't work.  One solution would be to make a copy of the entire version, insert your changes, and then test; this would be very time consuming.  Fortunately, atool provides less time-consuming methods.

Especially to facilitate testing of new code, atool provides the lightweight version method. Create an empty lightweight version (with a suitable Aconfig file), borrowing from and based on the current version, using the command-line newversion wizard program. (Note: it is safer if the version you ran newversion from is a frozen version, so it won't change underneath you.)  Select the new version as your current version with the chver shell alias. Add to it a link to your changed module source with the addmodule command. From here, you have several choices that trade off speed versus safety:

After making your lightweight version, you can test with the tests of your choice.

Merging Module Revisions

If you've held the module lock since before you started making your changes, then you probably have no merge issues.  But when many people are working on a software version, you may be unable to get the lock and so need to work without that protection.  In this case, you'll particularly want to copy the module into a lightweight version and make your changes there.  The copymodule program is the best way to make such a copy; it copies the module source code to your_version/src/module_name/s and makes a link or copy of the original (of what you are copying) to s.orig .
Later, when you are satisfied with your changes, you will need to get the lock; usually you will use the checkout program, which will give you a copy of the most recently checked-in code.  You now need to do a three-way merge between your s, your s.orig, and your checked-out copy of the module (which represents other people's changes since you made your copy), generally writing your changes into the checked-out copy.  However, first you must review everything that happened to decide whether this is really the appropriate thing to do; this can get surprisingly tricky sometimes.

No merge tools come with atool. In reviewing the web, there seems to be a real deficit of standalone diff and merge tools.  One free software tool that I like, in spite of some shortcomings, is xxdiff.  It does 3-way file diffs and merges, but unfortunately only 2-way directory diffs.  I strongly recommend starting with directory diffs between your modified module and your checked-out module, clicking down into files that are different, and adding a third file (the original that you started your changes from) for those that are complicated. I'll have to confess that I've used xxdiff strictly as a diff tool so far, so I can't report on how well its merge capability works.

Performing Administrative Tasks

Configuring New Targets

This section will discuss how to configure new CPU type targets to be used with cmake.  It is also possible to define targets which do not correspond to running code, such as the provided targets xhdr and ihdr (for header file autogeneration) and ctags and etags (for editor tag file generation).

If the new target will require installation of a new C compiler, see Installing GCC Compilers .

Usually it is best to define a base target corresponding to the computer architecture, compiler and libraries you will be using, and then one or more product targets that are based on the base targets.  The base target can be used for compiling generic libraries, whereas the product targets can be used for compiling code that has differences according to product.

To define a new target, add a [Target:name] record to the Aconfig file of your version, where name is the name of your target.  (For a multiuser version, this will require administrative login privilege.)  You can look at existing target records for inspiration, and refer to verdir.txt for full details. The crucial fields of such a record are:

Also in the Aconfig file, you will want to add the new target to the appropriate aliases in the [TargetAliases] section.  The thoughtful use of aliases in the Amake files of every module allows rapid addition or deletion of targets by simply modifying the Aconfig file (perhaps also providing a cmake helper) and doing a mastermake.
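As a purely schematic sketch (the record layout, the field syntax other than the names [Target:...], Cmaker and [TargetAliases], and the alias name are all guesses for illustration; the authoritative format is in verdir.txt and your existing Aconfig), the additions might look roughly like:

```
[Target:product1]
Cmaker product1_cmaker
...(other fields per verdir.txt)...

[TargetAliases]
products product1,product2
```

Modules whose Amake files name the alias "products" would then pick up product1 automatically on the next mastermake.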

You will also need to provide the cmake helper program (cmaker) that was named in the Cmaker field.  The cmake helper program can be a shell script, native executable program, dynamic link object, or all of the above (make sure multiple forms are consistent with each other!).  Dynamic link objects execute most quickly, but shell scripts are the easiest to write and modify.  Such code lives in an ordinary module, with the helpers exported to the usual visible directories.  Typically such scripts will define, by way of arguments passed to underlying programs, the options for the various stages of compilation.  This is described in the cmake documentation; I'll just give an example (linux_cmaker) here:

#!/bin/sh
#
# linux_cmaker -- cmake helper for linux target
#
# The following typically changes per target:
TARGET=linux
BASETARGET=$TARGET
MACHOP=''
NEXT=gcc_cmaker
# The following is common for most gcc targets:
CC=$BASETARGET'_cc'
# SYSLIBS: libraries that users would take for granted.
# Unfortunately there is a circular dependency loop between
# libgcc.a and libc.a ... so linux.linker provides a final
# -lgcc (cmake would eliminate the redundancy).
SYSLIBS="-ldl -lm -lgcc -lc"
# generate dependency on next level of cmaker
echo 'FILES_amakefile = ${!PathListBin_host:'$NEXT'}'
# Added args: MUST be before passed args!
# Note that we default to compile direct from .i to .o,
# but dynamically (if -S) we compile to .s instead,
# thus both have to be configured.
# -lpostfix .[so,a] links to shared objects preferentially
# to archive files, producing smaller executables but
# increasing risk of runtime failure.
# .[a,so] prefers archive files and produces executables
# likely to run on various o.s. releases.
exec $NEXT \
    -t $TARGET \
    -DATOOL_TARGET_$TARGET \
    -2-cc+syslibs "$SYSLIBS" \
    -2-cc-default-linker $TARGET.linker \
    -2-cc-default-librarian $TARGET.librarian \
    -2-cc-lpostfix '.[a,so]' \
    -2-cc-ext .c/cmaker=cc_cmaker_c2i \
    -2-cc-c2i.c-tool $CC \
    -2-cc-ext .i/cmaker=cc_cmaker_i2s \
    -2-cc-i2o.c-tool $CC \
    -2-cc-i2o.c+required "$MACHOP" \
    -2-cc-i2s.c-tool $CC \
    -2-cc-i2s.c+required "$MACHOP" \
    -2-cc-ext .S/cmaker=cc_cmaker_S2s \
    -2-cc-S2s.S-tool $CC \
    -2-cc-ext .s/cmaker=cc_cmaker_s2o \
    -2-cc-s2o.c-tool $CC \
    -2-cc-s2o.i-tool $CC \
    -2-cc-s2o.S-tool $CC \
    -2-cc-s2o.s-tool $CC \
    -2-cc-s2o.c+required '' \
    -2-cc-s2o.i+required '' \
    -2-cc-s2o.S+required '' \
    -2-cc-s2o.s+required '' \
    -2-cc-ext .o/final/clink=o \
    -2-cc-ext .ro/final/clink=o \
    -2-cc-ext .so/final/clink=y \
    -2-cc-ext .a/final/clink=y \
    -2-file $NEXT "$@"

An argument is defined as a single word (no whitespace, unless quoted).  An option may consist of multiple arguments.  Except for a few privileged options, the first word of every option begins with hyphen, digit, hyphen, where the digit gives the total number of arguments in the option (for example, -2-cc+syslibs "$SYSLIBS" is one option consisting of two arguments).  This allows unknown options to be silently passed through to lower layers or ignored entirely.  The remainder of the first argument of each standard-format option also follows an organized pattern, as you may gather from reading the above.

Generally, options appearing later in the argument stream override earlier ones (or append, for those named with a '+' sign).  This means that the options in the example above override any that appear in the NEXT layer (which will prepend options) and in turn may be overridden by a script that uses this script.  In the example above, the target name is set (with the -t option) to "linux" but may be redefined with another -t option in a script that calls this script.

You will also need to define linker and librarian scripts (or dynamic link objects, for speed), which are written in a similar spirit.  All of these scripts reside in ordinary modules and are exported to the usual visible directories.

The power of this approach becomes apparent when we want to define a derived target.  Suppose we have several products that are similar, yet different, all running on Linux.  The same code will be compiled, but with conditional compilation based on which product we are compiling for.  We can derive our product targets from the linux target.  After creating the target records and adding to the TargetAliases in Aconfig, we can create a very simple cmake helper for each.  For example, for product1:

#!/bin/sh
# product1_cmaker -- cmake helper for our first product
TARGET=product1
NEXT=linux_cmaker
exec $NEXT \
    -t $TARGET \
    -DATOOL_TARGET_$TARGET \
    -2-file $NEXT "$@"

It is possible to define macros that you can use in your Amake file. For an example of this, see the discussion of stritMake.

Installing GCC Compilers

The procedure for installing GCC (cross-)compilers is described in some detail in the gccbase module.  The solution fixes the following problems:

The following should also be noted:

After following the procedure, you will have (in addition to the usual GNU binaries) compiler, librarian and linker cover programs for each target (e.g. for linux you will have linux_cc, linux_ar and linux_ld) that can be configured in your cmaker scripts.  These cover programs will find the GNU binaries (they are exported) and run them with the correct options telling them where to find all their components.

Unfortunately, you aren't done yet.  You will probably also need standard C library files (.a files) and matching header (.h) files, some of which are (and must be) provided with GCC but most of which must come from another source.  For Linux, you might simply copy libraries and header files from a suitable distribution (taking care that the distribution was built with a compatible GCC version).  For an embedded target, you might use a C library such as the free newlib library.  In either case, you won't want to directly export each and every file.  Instead, the pragmatic thing to do is to export directories of such files, and modify the cmaker scripts to add searching in these directories.

Version control of compilers provides a lot of benefit, but can be too much for a small project.  In this case, the gccnative module provides the minimum support needed to use the native gcc compiler already installed.  For other targets, you can adapt your cmaker scripts to specify non-version-controlled tools as desired.

atool is distributed in two principal forms.  There is a full form, which provides a working atool version including the gcc compiler as well as a gccnative-based version; and there is a smaller form, which provides only a gccnative-based version. As of this writing, embuild-atool provides only the latter (smaller) form on the SourceForge site.

Changing Multiuser Access Permissions

In the Multiuser file of the multiuser version, modify the following fields:

Access for checkout, checkin and mastermake is controlled by the last three fields.  A user is allowed to do an operation if their user name appears in the corresponding field, or if that field contains the word "-all" and the Users field contains their user name or "-all".
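A schematic Multiuser fragment may help (the field names for the three per-operation fields are assumptions for illustration; only the Users field and the "-all" convention are described above):

```
Users      alice bob carol
Checkout   -all
Checkin    alice bob
Mastermake alice
```

Under the rule above, all three listed users may check out (Checkout says "-all" and they appear in Users); only alice and bob may check in; only alice may start a mastermake.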

Forking Versions

Forking a version means creating an entirely new version based upon the current state of the current version.  The newly forked version will typically have module "s" links that point to the original revision directories, although it is also possible to create new revision directories.

As with all new versions, the "newversion" program should be used.  This is a command-line, interactive wizard program that guides you through the process of creating a new version.  If run as "newversion -fork", its defaults are appropriate for forking a version.

Here is a sample dialog:

[aadmin@localhost atool]# chver .
VERDIR set to /a/atool
[aadmin@localhost atool]# newversion -fork
Enter full pathname of new version
[default /atool.0]: /a/atool.1
Currently selected options are:
-------------------------------
Base new version upon: /a/atool
(to change base, you must quit and chver and restart).
New version will NOT dynamically borrow from /a/atool
New version Aconfig will be mostly copy from /a/atool
New version i directory will use relative links (rlink)
HiddenImportsFix option will be:
(off) HiddenImportsFix == off
Module selection option:
(all) Populate src dir w/ all modules from current version and parents
No AddModuleOptions options selected.
mm/MMplist file WILL be copied (good for heavyweight versions)
Path of version directory to create: /a/atool.1
-------------------------------
OK to create /a/atool.1 now ? [y/n/a/q]
(use y if ok, n/a to be prompted for popular/all options, or q to quit)
a
For help on a topic, enter topic name, else press enter to continue
(Topic names are:
lw -- Lightweight Versions
fork -- Forking Heavy Versions
Enter Hidden Imports option from following list:
off HiddenImportsFix == off
hide HiddenImportsFix == hide
link HiddenImportsFix == link
auto HiddenImportsFix == auto
[default off]:
Unchanged.
Ok to dynamically borrow from /a/atool ? [y/N]
(YES for lightweight versions!)
Unchanged.
Ok to comment out most of Aconfig? [y/N]
(YES for lightweight versions!)
Unchanged.
Use relative/absolute/truepath links in install (i) directory? [R/a/t]
Unchanged.
Enter module list option from following list:
empty Empty src directory
all Populate src dir w/ all modules from current version and parents
parent Populate src dir w/ all modules from current version only
list Populate src dir w/ modules (from c.v. or parents) listed in file
user Populate src dir w/ modules from user input
[default all]:
Unchanged.
Enter addmodule options for new version
[ ]:
(or just enter to accept current selection)
(or enter `x' to erase current selection)
(or enter `h' for list of options)
Unchanged.
Copy mm/MMplist file (good for heavy versions only)? [Y/n]
Unchanged.
Enter full pathname of new version
[default /a/atool.1]:
Currently selected options are:
-------------------------------
Base new version upon: /a/atool
(to change base, you must quit and chver and restart).
New version will NOT dynamically borrow from /a/atool
New version Aconfig will be mostly copy from /a/atool
New version i directory will use relative links (rlink)
HiddenImportsFix option will be:
(off) HiddenImportsFix == off
Module selection option:
(all) Populate src dir w/ all modules from current version and parents
No AddModuleOptions options selected.
mm/MMplist file WILL be copied (good for heavyweight versions)
Path of version directory to create: /a/atool.1
-------------------------------
OK to create /a/atool.1 now ? [y/n/a/q]
(use y if ok, n/a to be prompted for popular/all options, or q to quit)

When to Re-make a Version

Ideally, the output files of the modules of a version would automatically and instantly be changed to correspond to every change of source files.  Realistically, remaking output files is a somewhat time-consuming procedure, so atool takes a pragmatic approach.  When you check in a module to a multiuser version, the checkin program runs amake on it, but does not remake any modules that depend on your module.  For a single-user version, it is entirely up to you.  You should run mastermake on a version when you know that important changes have occurred.  You should probably also run it e.g. once a day to be sure.  You may want to run it from a shell script which in turn is run by a cron job; such a shell script should first source chver_init.sh (or chver_init.csh) and chver to the version, before running mastermake.
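As a sketch, such a cron-driven wrapper might look like the following (all paths, including the installation location of chver_init.sh, are assumptions; adjust for your site):

```
#!/bin/sh
# nightly_mm -- hypothetical cron wrapper: remake a version once a day.
# The path to chver_init.sh is site-specific (an assumption here).
. /usr/local/atool/chver_init.sh    # defines the chver shell function
chver /a/myversion                  # select the version (sets $VERDIR, $PATH)
mm                                  # run mastermake on the selected version
```

A crontab entry such as `0 3 * * * /usr/local/bin/nightly_mm` (again, an assumption) would run it nightly.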

Cleaning up Frozen Versions

Unwanted frozen versions can be removed manually (by administrative user) and can be removed or abbreviated automatically by mastermake as configured by the following Multiuser file fields:

Refer to verdir.txt for more information.

Restoring Deleted Versions

Obviously, deleted versions can be restored from backup media.  They can also be regenerated based on "restore" files automatically created by multiuser mastermake.  These files list the paths of the revision directories for all the modules, and contain the Aconfig file content (and also the Multiuser file).  In order to restore the output files of the version, it will be necessary to seed the version with output files for the "boot modules" from a compatible version, or to use the "aboot" procedure. Of course it is not possible to exactly restore the output files, since they often contain information such as the compilation date.

atool Programs

chver

chver is a shell alias (actually, a shell function under sh/bash) which modifies your environment variables (e.g. $PATH, $PERL5LIB) appropriately for a version directory that you specify (e.g. "chver /my/version/directory").  It also sets the environment variable $VERDIR to the path of the selected version directory.  You can chver between versions as you would cd between directories. "chver -remove" sets things back as they were. Whereas all other atool programs are specific to a version, chver sits outside them and allows switching between versions.

The pathlists that chver uses are configured in the Aconfig file of the version directory.  It should be kept in mind that these pathlists do double duty in that they are interpreted by amake as well as by your shell. A typical pathlist for host execution (used for $PATH) contains:

. $S $O/bin.<host> $O/bin.share $I/bin.<host> $I/bin.share

where <host> is the host architecture name.  Since chver doesn't know which module you are using (and in fact you may be working on multiple modules), it replaces $S with "s" and $O with "o"; if you cd into a module directory, you will preferentially execute shell scripts in the current or module source directory, then executable binaries created by your module, then shell scripts (in o/bin.share) created by your module, and finally executable binaries or shell scripts installed in the version install directory.  chver replaces $I with the path of the version install directory.


amake

amake is a unique "make" tool that creates output files in the "o" subdirectory of a module directory, based upon source code from the module source revision directory (which may be the module directory itself, or an "s" subdirectory, depending on where the file named "Amake" is found), upon exported files of other modules of the same version (found in the version install directory), and possibly upon other sources. The goal is to cross-compile to various targets with a minimum of syntax entry by the developer, while to the largest extent possible avoiding the creation of new descriptive languages.

The following example Amake file creates a code library called widget for targets product1, product2 and product2a.  The code library incorporates code from file1.c, file2.c and file3.c.  Separately, dependencies by widget upon other code libraries, namely knob, slot and tree, are remembered.

[Synopsis]
Widget management.
[Make]
cmake -Library widget.a -t product1,product2,product2a
    file1.c
    file2.c
    file3.c
    # following refers to libraries needed by widget:
    knob.cmake
    slot.cmake
    tree.cmake
[End]

The input to amake (in the module "Amake" file) is very simple: unindented lines in the [Make] section specify programs to run, with command-line arguments, whereas indented lines are simply input lines to the previously specified program.  The programs may be written in any convenient programming language, e.g. C, perl, etc.  The output of these programs is used by amake to direct its action, and is in a low-level language that the user doesn't usually need to consider.  This low-level language has some similarity to that used by the traditional make program.  For this reason, programs such as cmake are referred to as "amake preprocessor programs".

Although amake's input is quite simple, its action is quite sophisticated.  amake keeps a database of all compilations performed, including the file attributes of all files searched for (both those found and those not found), as well as the compile lines used; amake will rebuild output files when any inputs change.  amake runs multiple compilations at the same time (except where dependency considerations intervene), and can recover in the face of newly discovered dependency information.

amake has two basic stages: a preprocess stage and a run stage.  In the first stage, it runs preprocessors as specified in the Amake file and accumulates their output in $O/adm/amakefile, which has a syntax reminiscent of a traditional make makefile: unindented lines introduce a new section, and indented lines (spaces are just fine, none of that tab nonsense) are additional information for the unindented lines. There are the following types of unindented lines in the amakefile:

Files may be specified in the following ways:

A file specification can be further qualified (with a special prefix usually) according to when it is used:



cmake

The cmake program is by far the most useful preprocessor for amake.  In using cmake, your fundamental choices are:

Handling of cmake targets is taken care of by some minor configuration in the version Aconfig file plus a plugin for each target (referenced from the cmake file).  Plugins are included with the standard atool distributions for the following targets:



cmakeFun

The cmakeFun program automatically extracts header (.h) files from .c files.  This is most conveniently done using the cmake xhdr and ihdr targets.  For example, your Amake file might have the following in its [Make] section:

cmake -Library foo.a -t xhdr,ihdr,products -export
    foo.c
    foo2.c
    foo3.c
    ...

Of course, nothing is free.  You need to annotate your .c files with special comments so that cmakeFun will know what to extract.  On the other hand, you may now find that you never need to create a header file manually again!

cmakeFun can make header files for module-internal use only (ihdr) or for external use (xhdr).  The external extractions are denoted by special comments with upper-case letters.  The internal extractions are denoted by special comments with either upper- or lower-case letters (after all, internally you should see everything that external users see, yes?).  xhdr creates a single header file matching a library; for example, when making the library foo.a in the example above, xhdr creates $O/include/foo.h (which is automatically exported for use by other modules if -export is used).  By contrast, ihdr creates a separate header file for each C source file; for example, it creates $O/include/foo2.c.h from foo2.c .  Each C file in a module should include all of the ihdr (.c.h) header files that it needs (including its own), but be sure not to include both the ihdr and xhdr files, because that would result in conflicts.

C functions are preceded by an F comment (upper case for exported, lower case for internal):

/*-F- name -- description
* more description
*/
int name( int arg )
{
}

Global variables are preceded by a G comment (again, lower case for internal use), one per variable.  For example (using the C++ comment style):

//-g- fooState -- essential private data for foo
struct fooStateStruct fooState;


Data definitions are preceded by a D comment.  Unlike F and G, the scope of a D comment extraction ends only at another special comment of the form (begin of line)(slash)(star)(dash)(something)(dash)..., so care must be taken to provide such a comment.  Also, if the C file includes its own auto-generated header file, there can be conflicts (C does NOT like to see certain declarations repeated), so it can become necessary to wrap the mess in an #if 0 ... #endif.  For example:

#if 0 /*auto-extract only*/
/*-D- fooWidgetEnum -- widget types for foo
*/
typedef enum fooWidget
{
    fooWidget_eCam,
    fooWidget_eGear,
    fooWidget_NWidgets
} fooWidgetEnum;
/*------------------------------------*/
#endif /*end auto-extract*/

Clearly we've drifted away from standard C. The obvious question is: why not just use an overt meta-language?  The answer is largely cultural: by making it as much like standard C as possible, we minimize training time.

stritMake

The name strit is short for structure iterator.  A structure iterator is code that can find its way through a C data struct, doing something useful with it: for example, printing the contents, converting endianness, etc.  My favorite application is an interactive data editor.

The idea is that you write ordinary C data structs, get them automatically parsed, and then pass the parse tables and your data to structure-iterator code to do its thing.  Of course, there are always problems:


While (sadly) no structure iterator code is shipped with atool, you do get the tool (stritMake) that creates the parse tables used by structure iterators. You will also need to add strit support to your cmake helpers. This is done by inserting the following line in your customized _cmaker scripts:

-[-cc-c2i-macro -strit -2-cc-c2i.c-filter stritMake -]-cc-c2i-macro

Having done this, you need to do two things to get a parse table inserted into the code generated from one of your C files.  You must enable strit preprocessing for that file (it is harmless but inefficient to do it for all files), e.g. by adding the -strit option defined above:

cmake -Library foo.a -t xhdr,ihdr,products -export
    -strit foo.c

and you must put the following construct into your C file at the point where you want the parse table to go:

STRIT_EMIT fooParseTable {
    add fooStateStruct;
};

Of course, you can name the parse table anything you want instead of fooParseTable.  Each add statement gives a struct tag, union tag or typedef name that you want added to the parse table.  You may have multiple add statements.  Any tags needed recursively are added automatically by stritMake as needed.

mm

Mastermake (mm for short) is the program that runs amake in the modules of a version and fixes up the links in the visible directories, all in the correct order.

The procedural steps of a simple mastermake are as follows:

It is an error (reported in detail) if two modules depend upon each other, directly or indirectly.  This is a basic design limitation of atool tools.

Modules not present by name in a version but present in a borrowed version are "borrowed" in the sense that the files of the borrowed module are used without ever doing an amake (i.e. as if amake would always produce the same result).  

Modules duplicated by name in the current version and a borrowed version are used from the current version only.  The module by that name in the borrowed version is hidden and basically ignored, except that hidden-Imports detection (and optionally fixing) is performed.  A hidden-Imports condition exists if a borrowed module was in fact made using files (as documented in the module Imports file) from a now-hidden module.  Mastermake offers four methods of handling hidden-Imports conditions (configured using the HiddenImportsFix field in the Aconfig file):


Multiuser mastermake: mm may be executed as a privileged user via the Setuid mechanism.  Provided that the original user is granted this privilege in the version Multiuser file, ordinary users may thus begin mastermakes in a multiuser version.  The mastermake is performed in the background (using a lock file to prevent redundant execution).  The mastermake may be killed using the "-k" option or the mmKill program. A number of options are configurable in the version Multiuser file:

Also for multiuser versions, mastermake creates a restore file in $VERDIR/restore/<year>/<datestamp> that may be used to restore the current content of the version.  The restore files contain copies of Aconfig files, Multiuser file, and list of all rev directories used for modules.

mmInstall

The mmInstall program is a much abbreviated version of mm.  It fixes up the visible directories based upon module Exports files, without ever running amake. Correct operation requires that amake was previously run for all modules; this is best assured by running mastermake instead, but knowledgeable users may be able to save time by simply running mmInstall as necessary. For multiuser versions, mmInstall works with the Setuid scheme to grant users the privilege.  In addition, the checkin (and checkup) programs run mmInstall after revising a module.

amanager programs

The amanager programs perform version control for multiuser versions.  They are:



newfile

The newfile program creates new files (or entire module source directories) based upon templates. The templates are organized into subdirectories called "template sets" and are exported into the "template" visible directory from various modules. Each template set contains an index file specifying which template file is to be used for which pattern (regular expression) of files to be created.  Each template file contains special keywords which are substituted with relevant information such as module name, date, time, etc. (The module name is guessed from the name of the current directory.) For creating entire modules, a separate index file is used which specifies which template files are to be used and how they are to be named (based upon the module name).
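The keyword-substitution step can be sketched with sed.  The keyword names below (`%MODULE%`, `%DATE%`) are illustrative placeholders, not the actual keywords recognized by newfile:

```shell
# Illustrative only: expand hypothetical %MODULE% / %DATE% keywords
# in a template file, the way newfile substitutes module name, date, etc.
mkdir -p /tmp/tmpl_demo && cd /tmp/tmpl_demo

cat > c_file.tmpl <<'EOF'
/* Module: %MODULE%  Created: %DATE% */
EOF

MODULE=foo                      # newfile guesses this from the directory name
DATE=$(date +%Y.%m.%d)
sed -e "s/%MODULE%/$MODULE/" -e "s/%DATE%/$DATE/" c_file.tmpl > foo.c
cat foo.c
```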

newmodule

The newmodule program is a cover for newfile.  For a multiuser version, if the module name is given simply (without any slashes) then a lock is placed on the module name and a user copy of the module is created in the standard user checkout location.

newversion

New version directories are created (based upon the current version directory) using the newversion program. Defaults are appropriate for single-user "lightweight" versions, and by default the program serves as an interactive (command line) "wizard", presenting various options for your selection and approval.  By default, the Aconfig file of the new version is created with contents from current (and borrowed) versions, but mostly commented out. This behavior can be modified while running newversion, or selected lines may later be uncommented and changed as desired using a text editor on the Aconfig file.

addmodule

If addmodule is invoked with the simple name of a module which is present in a borrowed version, then a module of the same name is added to the current version with an "s" link back to the original module (by default).  Various options (whose defaults may be placed in the Aconfig file) control how the source code is referenced and whether output files are copied.  The addmodule program also attempts to create an "s.orig" link or directory in the new module that may be used later for finding changes.

copymodule

The copymodule program is a cover for addmodule that defaults to copying the module source code.

restoreVersion

The restoreVersion program may be used with a multiuser version to generate a restore file (containing copies of Aconfig files, Multiuser file, and source code locations for all modules).  The mastermake program uses restoreVersion to generate a restore file automatically.  In addition, the restoreVersion program may be used to regenerate a version directory based upon a restore file (and rev directories which must continue to exist).  In general, however, it is not possible to generate the exact same output files (because e.g. compilation date and time are embedded in output files).

truepath

The truepath program is a small standalone utility that prints to stdout the true absolute path corresponding to a file (or directory, etc.) path given as an argument.  The resulting path points to the same file, but does not traverse any symbolic links to do so.
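On GNU systems a near-equivalent of this behavior is available as `readlink -f` (or `realpath`); a sketch:

```shell
# Resolve a path through a symbolic link to its true location,
# as truepath does; GNU readlink -f is a near equivalent.
mkdir -p /tmp/truepath_demo/real
: > /tmp/truepath_demo/real/file
rm -f /tmp/truepath_demo/alias
ln -s real /tmp/truepath_demo/alias         # alias -> real
readlink -f /tmp/truepath_demo/alias/file   # prints the .../real/file path
```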

ifdiff

The ifdiff program is a small standalone utility that compares two regular files and takes action depending on whether they are the same or different.
It can be used to provide an amake optimization, avoiding the replacement of a file when the new result would be identical, while at the same time obeying the important rule that existing files must never be overwritten.

ifdiff -- compare and optional replace files
Usage: -h for help
[-replace] newfile oldfile
With -replace, newfile is renamed onto old file if they are different,
else newfile is deleted; 0 exit value unless error.
Else newfile is compared with old file and exit value is:
0=same; 1=different; 2=error.
Files are different if old file does not exist or is not ordinary.
It is an error if newfile does not exist or is not ordinary.
Otherwise, files are different if their contents are different;
the dates and attributes of the files are deliberately not considered.
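The -replace behavior can be approximated with the standard cmp, mv, and rm tools; a sketch of the same keep-the-old-file-if-identical rule:

```shell
# Emulate `ifdiff -replace new old` with standard tools:
# rename new onto old only if contents differ, else discard new.
replace_if_diff() {
    new=$1 old=$2
    if cmp -s "$new" "$old"; then
        rm -f "$new"        # identical: old file left untouched
    else
        mv -f "$new" "$old" # different (or old missing): replace
    fi
}

rm -rf /tmp/ifdiff_demo && mkdir -p /tmp/ifdiff_demo && cd /tmp/ifdiff_demo
printf 'v1\n' > old; printf 'v1\n' > new1; printf 'v2\n' > new2
replace_if_diff new1 old   # identical: new1 deleted, old keeps its date
replace_if_diff new2 old   # different: old now contains v2
```

Note that cmp fails (treated as "different") when the old file does not exist, matching ifdiff's rule above.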

trash

The trash program is a small standalone utility that attempts to optimize and sanitize deletion of files and directories.  It is most useful for large directories, which can take a long time to remove.  The trash program attempts to avoid actual file removal by renaming the file or directory to beneath a garbage directory (which, under Unix, must necessarily be on the same partition for a rename to succeed). The new name is descriptive, allowing recovery of trashed files (at least until e.g. a nightly trash removal is performed).  The environment variable TRASH must be set up (like PATH) to give the paths of garbage directories.  If no garbage directory is found on a compatible partition, then trash by default will remove the file or directory recursively (readonly directories are first made writeable if possible).
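A minimal sketch of the rename-into-garbage idea (the real trash program's naming scheme and its search of the TRASH path are more elaborate):

```shell
# Sketch: instead of deleting, rename a file into a garbage directory
# on the same partition, under a descriptive recoverable name.
TRASH=/tmp/garbage          # illustrative; trash searches a PATH-like list
mkdir -p "$TRASH"

: > /tmp/doomed.txt
mv /tmp/doomed.txt "$TRASH/doomed.txt.$(date +%Y%m%d.%H%M%S).$$"
ls "$TRASH"                 # the rename is instant even for huge trees
```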

recp

A small standalone recursive copy program, something like "cp -rp" but with much more control and options.

aff

Find a file using PathLists defined in the current version Aconfig file.  By default, it prints matches for all defined PathLists, including the installed location and actual location.  It has many options that make aff useful in shell and perl scripts.

detab

A small standalone utility program that converts TAB characters into equivalent spaces in named files, in place. It uses heuristics to avoid typically unwanted conversions, and thus refuses to convert multiply linked files, binary files, non-regular files, readonly files, etc.
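The core conversion is what the standard expand(1) utility does; a sketch (without detab's in-place operation and safety heuristics):

```shell
# Convert TABs to equivalent spaces, as detab does, using expand(1);
# a temporary file stands in for detab's in-place rewrite.
printf 'a\tb\n' > /tmp/detab_demo.txt
expand -t 8 /tmp/detab_demo.txt > /tmp/detab_demo.txt.new
mv /tmp/detab_demo.txt.new /tmp/detab_demo.txt
od -c /tmp/detab_demo.txt        # no TAB characters remain
```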

decr

A small standalone utility program that removes CR characters (placed by e.g. Microsoft editors) from named files, in place.  It uses heuristics to avoid typically unwanted conversions, and thus refuses to convert multiply linked files, binary files, non-regular files, readonly files, etc.
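The CR removal itself can be done with the standard tr(1) utility; a sketch (again without decr's in-place operation and safety heuristics):

```shell
# Strip CR characters (DOS line endings), as decr does, using tr(1);
# a temporary file stands in for decr's in-place rewrite.
printf 'line1\r\nline2\r\n' > /tmp/decr_demo.txt
tr -d '\r' < /tmp/decr_demo.txt > /tmp/decr_demo.txt.new
mv /tmp/decr_demo.txt.new /tmp/decr_demo.txt
od -c /tmp/decr_demo.txt         # CRs gone, newlines preserved
```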

rpn

A moderately small standalone utility program that executes an expression (expressed in reverse Polish notation, thus the name) upon an input stream of data to generate an output stream. Particularly useful are primitives to convert between various data formats (text, various sizes of integer and float, and big and little endian) when reading the input or writing the output.  Actually, this is two programs; irpn is a version of rpn that is integer only.
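The core idea, stack-based evaluation of a postfix expression, can be sketched in a few lines of shell.  This is integer-only (in the spirit of irpn) and omits the real rpn's data-format conversion primitives:

```shell
# Minimal integer RPN evaluator: operands are pushed on a
# space-separated string used as a stack; operators pop two
# values and push the result.
rpn_eval() {
    set -f                      # disable globbing so `*` stays literal
    stack=""
    for tok in $1; do
        case $tok in
            +|-|\*|/)
                b=${stack##* }; stack=${stack% *}
                a=${stack##* }; stack=${stack% *}
                stack="$stack $((a $tok b))" ;;
            *)  stack="$stack $tok" ;;
        esac
    done
    set +f
    echo "${stack# }"
}

rpn_eval "2 3 4 + *"    # 2 * (3 + 4) = 14
```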

trunc

A small standalone utility program to shorten or lengthen a file.  Here is its help message:

Usage: trunc <nbytes> {<file>|-}
Truncates or lengthens file to given size
With `-', passes on nbytes of stdin (or zeroes).
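The file form of this behavior matches GNU truncate(1), which discards bytes when shortening and zero-pads when lengthening; a sketch:

```shell
# Emulate `trunc <nbytes> <file>` with GNU truncate(1).
printf 'abcdef' > /tmp/trunc_demo
truncate -s 3 /tmp/trunc_demo    # shortened to 3 bytes: "abc"
truncate -s 5 /tmp/trunc_demo    # lengthened to 5 bytes: "abc" + 2 NULs
wc -c < /tmp/trunc_demo
```

The stdin form (`trunc <nbytes> -`, passing on nbytes of stdin) is roughly what `head -c <nbytes>` does, minus the zero-padding of short input.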



extract

A small standalone utility program to extract sections of fixed format (i.e. binary) files. Here is its help message:

Usage:
<sourcefile extract nbytes1 outfile1 nbytes2 outfile2 ...
where
the first nbytes1 bytes are placed in outfile1,
the next nbytes2 bytes are placed in outfile2,
and
an nbytes specification may be `all' to
place all remaining bytes into the corresponding outfile,
and
a file spec may be `-' to write the output
to the std output, or `' to skip the next nbytes,
or may be of the form `+filename' to indicate that
the file should be appended to if it exists.
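The splitting itself can be emulated with dd: because `dd bs=1` reads exactly the requested number of bytes, several invocations can share one input stream, consuming it piece by piece just as extract's outfiles do.  A sketch:

```shell
# Emulate `<src extract 3 out1 4 out2 all out3`:
# split one byte stream into fixed-size pieces.
printf 'abcdefghij' > /tmp/extract_src
{
    dd bs=1 count=3 of=/tmp/extract_out1 2>/dev/null  # first 3 bytes
    dd bs=1 count=4 of=/tmp/extract_out2 2>/dev/null  # next 4 bytes
    cat > /tmp/extract_out3                           # `all' the rest
} < /tmp/extract_src
```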



zero

A small standalone utility program to write zeroed bytes.  Here is its help message:

Usage:
zero <file>          Zeroes existing file, keeping size.
zero <size> <file>   Creates/rewrites file, setting size
                     as requested and zeroing.
zero -               Writes endless zeros to STDOUT.
zero <size> -        Writes size zero bytes to STDOUT.
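The size-and-file form can be emulated with dd reading from /dev/zero; a sketch:

```shell
# Emulate `zero <size> <file>`: create a file of the given size
# containing only zero bytes.
dd if=/dev/zero of=/tmp/zero_demo bs=1 count=16 2>/dev/null
wc -c < /tmp/zero_demo              # 16 bytes, all NUL
```

The keep-size form (`zero <file>`) corresponds to adding `conv=notrunc` and a count taken from the file's current size, and `zero -` to `cat /dev/zero`.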