ForTran Automatic Coding System from Backus & Company
richardmurray posted a blog entry in BlackGamesElite's BGE Journal
The FORTRAN Automatic Coding System

J. W. BACKUS, R. J. BEEBER, S. BEST, R. GOLDBERG, L. M. HAIBT, H. L. HERRICK, R. A. NELSON, D. SAYRE, P. B. SHERIDAN, H. STERN, I. ZILLER, R. A. HUGHES, AND R. NUTT

THE FORTRAN project was begun in the summer of 1954. Its purpose was to reduce by a large factor the task of preparing scientific problems for IBM's next large computer, the 704. If it were possible for the 704 to code problems for itself and produce as good programs as human coders (but without the errors), it was clear that large benefits could be achieved. For it was known that about two-thirds of the cost of solving most scientific and engineering problems on large computers was that of problem preparation. Furthermore, more than 90 per cent of the elapsed time for a problem was usually devoted to planning, writing, and debugging the program. In many cases the development of a general plan for solving a problem was a small job in comparison to the task of devising and coding machine procedures to carry out the plan. The goal of the FORTRAN project was to enable the programmer to specify a numerical procedure using a concise language like that of mathematics and obtain automatically from this specification an efficient 704 program to carry out the procedure. It was expected that such a system would reduce the coding and debugging task to less than one-fifth of the job it had been.

Two and one-half years and 18 man-years have elapsed since the beginning of the project. The FORTRAN system is now complete. It has two components: the FORTRAN language, in which programs are written, and the translator or executive routine for the 704 which effects the translation of FORTRAN language programs into 704 programs. Descriptions of the FORTRAN language and the translator form the principal sections of this paper.
The experience of the FORTRAN group in using the system has confirmed the original expectations concerning reduction of the task of problem preparation and the efficiency of output programs. A brief case history of one job done with a system seldom gives a good measure of its usefulness, particularly when the selection is made by the authors of the system. Nevertheless, here are the facts about a rather simple but sizable job. The programmer attended a one-day course on FORTRAN and spent some more time referring to the manual. He then programmed the job in four hours, using 47 FORTRAN statements. These were compiled by the 704 in six minutes, producing about 1000 instructions. He ran the program and found the output incorrect. He studied the output (no tracing or memory dumps were used) and was able to localize his error in a FORTRAN statement he had written. He rewrote the offending statement, recompiled, and found that the resulting program was correct. He estimated that it might have taken three days to code this job by hand, plus an unknown time to debug it, and that no appreciable increase in speed of execution would have been achieved thereby.

THE FORTRAN LANGUAGE

The FORTRAN language is most easily described by reviewing some examples.

Arithmetic Statements

Example 1: Compute:

root = (-(B/2) + sqrt((B/2)^2 - A*C))/A

FORTRAN Program:

ROOT = (-(B/2.0) + SQRTF((B/2.0)**2 - A*C))/A

Notice that the desired program is a single FORTRAN statement, an arithmetic formula. Its meaning is: "Evaluate the expression on the right of the = sign and make this the value of the variable on the left." The symbol * denotes multiplication and ** denotes exponentiation (i.e., A**B means A^B). The program which is generated from this statement effects the computation in floating point arithmetic, avoids computing (B/2.0) twice, and computes (B/2.0)**2 by a multiplication rather than by an exponentiation routine.
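As a modern aside (not part of the original paper), the strategy described here for Example 1, evaluating B/2.0 only once and squaring it by multiplication rather than calling an exponentiation routine, can be sketched in Python:

```python
import math

def root(a, b, c):
    # Evaluate B/2.0 only once (common subexpression), as the
    # compiled 704 program did.
    half_b = b / 2.0
    # An integral power such as **2 is computed by a multiplication,
    # not by a general exponentiation routine.
    square = half_b * half_b
    return (-half_b + math.sqrt(square - a * c)) / a

# root(1.0, -3.0, 2.0) gives 2.0, the larger root of x**2 - 3*x + 2.
```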
[Had (B/2.0)**2.01 appeared instead, an exponentiation routine would necessarily be used, requiring more time than the multiplication.]

The programmer can refer to quantities in both floating point and integer form. Integer quantities are somewhat restricted in their use and serve primarily as subscripts or exponents. Integer constants are written without a decimal point. Example: 2 (integer form) vs 2.0 (floating point form). Integer variables begin with I, J, K, L, M, or N. Any meaningful arithmetic expression may appear on the right-hand side of an arithmetic statement, provided the following restriction is observed: an integer quantity can appear in a floating-point expression only as a subscript or as an exponent or as the argument of certain functions. The functions which the programmer may refer to are limited only by those available on the library tape at the time, such as SQRTF, plus those simple functions which he has defined for the given problem by means of function statements. An example will serve to describe the latter.

Function Statements

Example 2: Define a function of three variables to be used throughout a given problem, as follows:

ROOTF(A, B, C) = (-(B/2.0) + SQRTF((B/2.0)**2 - A*C))/A

Function statements must precede the rest of the program. They are composed of the desired function name (ending in F) followed by any desired arguments which appear in the arithmetic expression on the right of the = sign. The definition of a function may employ any previously defined functions. Having defined ROOTF as above, the programmer may apply it to any set of arguments in any subsequent arithmetic statements. For example, a later arithmetic statement might be

THETA = 1.0 + GAMMA * ROOTF(PI, 3.2 * Y + 14.0, 7.63)

DO Statements, DIMENSION Statements, and Subscripted Variables

Example 3: Set QMAX equal to the largest quantity P(a_i + b_i)/P(a_i - b_i) for some i between 1 and 1000, where P(x) = c0 + c1*x + c2*x^2 + c3*x^3.

FORTRAN Program:

1) POLYF(X) = C0 + X * (C1 + X * (C2 + X * C3))
2) DIMENSION A(1000), B(1000)
3) QMAX = -1.0E20
4) DO 5 I = 1, 1000
5) QMAX = MAXF(QMAX, POLYF(A(I) + B(I))/POLYF(A(I) - B(I)))
6) STOP

The program above is complete except for input and output statements which will be described later. The first statement is not executed; it defines the desired polynomial (in factored form for an efficient output program). Similarly, the second statement merely informs the executive routine that the vectors A and B each have 1000 elements. Statement 3 assigns a large negative initial value to QMAX, -1.0 x 10^20, using a special concise form for writing floating-point constants. Statement 4 says "DO the following sequence of statements down to and including the statement numbered 5 for successive values of I from 1 to 1000." In this case there is only one statement 5 to be repeated. It is executed 1000 times; the first time reference is made to A(1) and B(1), the second time to A(2) and B(2), etc. After the 1000th execution of statement 5, statement 6 (STOP) is finally encountered. In statement 5, the function MAXF appears. MAXF may have two or more arguments and its value, by definition, is the value of its largest argument. Thus on each repetition of statement 5 the old value of QMAX is replaced by itself or by the value of POLYF(A(I) + B(I))/POLYF(A(I) - B(I)), whichever is larger. The value of QMAX after the 1000th repetition is therefore the desired maximum.

Example 4: Multiply the n x n matrix (a_ij) (n <= 20) by its transpose, obtaining the product elements on or below the main diagonal by the relation

c_ij = SUM (k = 1 to n) a_ik * a_jk   (for j <= i)

and the remaining elements by the relation

c_ij = c_ji   (for j > i)

FORTRAN Program:

[The program listing appears as a figure in the original and is not reproduced here.]

As in the preceding example, the DIMENSION statement says that there are two matrices of maximum size 20 x 20 named A and C. For explanatory purposes only, the three boxes around the program show the sequence of statements controlled by each DO statement.
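For readers more familiar with modern languages, here is a rough Python equivalent of the Example 3 program (the coefficient values are invented for illustration; the original, of course, ran on the 704):

```python
# Hypothetical coefficients for P(x) = c0 + c1*x + c2*x**2 + c3*x**3.
C0, C1, C2, C3 = 1.0, 0.5, 0.25, 0.125

def polyf(x):
    # Statement 1: the polynomial in factored (Horner) form.
    return C0 + x * (C1 + x * (C2 + x * C3))

def qmax(a, b):
    # Statements 3-5: start QMAX at a large negative value, then
    # loop over the element pairs as DO 5 I = 1, 1000 does.
    q = -1.0e20
    for ai, bi in zip(a, b):
        q = max(q, polyf(ai + bi) / polyf(ai - bi))
    return q
```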
The first DO statement says that procedure P, i.e., the following statements down to statement 2 (outer box), is to be carried out for I = 1, then for I = 2, and so on up to I = N. The first statement of procedure P (DO 2 J = 1, I) directs that procedure Q be done for J = 1 to J = I. And of course each execution of procedure Q involves N executions of procedure R for K = 1, 2, ..., N.

Consider procedure Q. Each time its last statement is completed the "index" J of its controlling DO statement is increased by 1 and control goes to the first statement of Q, until finally its last statement is reached and J = I. Since this is also the last statement of P and P has not been repeated until I = N, I will be increased and control will then pass to the first statement of P. This statement (DO 2 J = 1, I) causes the repetition of Q to begin again. Finally, the last statement of Q and P (statement 2) will be reached with J = I and I = N, meaning that both Q and P have been repeated the required number of times. Control will then go to the next statement, STOP. Each time R is executed a new term is added to a product element. Each time Q is executed a new product element and its mate are obtained. Each time P is executed a product row (over to the diagonal) and the corresponding column (down to the diagonal) are obtained.

The last example contains a "nest" of DO statements, meaning that the sequence of statements controlled by one DO statement contains other DO statements. Another example of such a nest is shown in the next column, on the left. Nests of the type shown on the right are not permitted, since they would usually be meaningless. Although not illustrated in the examples given, the programmer may also employ subscripted variables having three independent subscripts.

Example 5: For each case, read from cards two vectors, ALPHA and RHO, and the number ARG. ALPHA and RHO each have 25 elements and ALPHA(I) <= ALPHA(I+1), I = 1 to 24.
Find the SUM of all the elements of ALPHA from the beginning to the last one which is less than or equal to ARG [assume ALPHA(1) <= ARG].

The FORMAT statement says that numbers are to be found (or printed) 5 per card (or line), that each number is in fixed-point form, that each number occupies a field 12 columns wide, and that the decimal point is located 4 digits from the right. The FORMAT statement is not executed; it is referred to by the READ and PRINT statements to describe the desired arrangement of data in the external medium.

The READ statement says "READ cards in the card reader which are arranged according to FORMAT statement 1 and assign the successive numbers obtained as values of ALPHA(I), I = 1, 25, and RHO(I), I = 1, 25, and ARG." Thus "ALPHA, RHO, ARG" is a description of a list of 51 quantities (the size of ALPHA and RHO being obtained from the DIMENSION statement). Reading of cards proceeds until these 51 quantities have been obtained, each card having five numbers, as per the FORMAT description, except the last, which has the value of ARG only. Since ARG terminated the list, the remaining four fields of the last card are not read.

The PRINT statement is similar to READ except that it specifies a list of only three quantities. Thus each execution of PRINT causes a single line to be printed with ARG, SUM, VALUE printed in the first three of the five fields described by FORMAT statement 1.

The IF statement says "If ARG - ALPHA(I) is negative go to statement 4, if it is zero go to statement 3, and if it is positive go to 3." Thus the repetition of the two statements controlled by the DO consists normally of computing ARG - ALPHA(I), finding it zero or positive, and going to statement 3 followed by the next repetition. However, when I has been increased to the extent that the first ALPHA exceeding ARG is encountered, control will pass to statement 4. Note that this statement does not belong to the sequence controlled by the DO.
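The original program listing for Example 5 is a figure not reproduced here; the following Python sketch follows the prose description (the exact role of VALUE is an assumption based on the remark about RHO(19) below, and the function name is invented):

```python
def process_case(alpha, rho, arg):
    # ALPHA is ascending; sum its elements up to the last one <= ARG.
    # Mirrors the DO loop with a three-way IF: a negative difference
    # ARG - ALPHA(I) exits the loop with the index I preserved.
    total = 0.0
    exit_index = None
    for i, a in enumerate(alpha):       # i plays the role of I - 1
        if arg - a < 0.0:
            exit_index = i              # first ALPHA exceeding ARG
            break
        total += a                      # zero or positive: statement 3
    # Assumption: as in the text, if ALPHA(20) is the first element to
    # exceed ARG, then RHO(19) is the quantity obtained in statement 4.
    value = rho[exit_index - 1] if exit_index is not None else rho[-1]
    return total, value
```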
In such cases, the repetition specified by the DO is terminated and the value of the index (in this case I) is preserved. Thus if the first ALPHA exceeding ARG were ALPHA(20), then RHO(19) would be obtained in statement 4. The GO TO statement, of course, passes control to statement 2, which initiates reading the 11 cards for the next case. The process will continue until there are no more cards in the reader. The above program is entirely complete. When punched in cards as shown, and compiled, the translator will produce a ready-to-run 704 program which will perform the job specified.

Other Types of FORTRAN Statements

In the above examples the following types of FORTRAN statements have been exhibited:

Arithmetic statements
Function statements
DO statements
IF statements
GO TO statements
READ statements
PRINT statements
STOP statements
DIMENSION statements
FORMAT statements

The explanations accompanying each example have attempted to show some of the possible applications and variations of these statements. It is felt that these examples give a representative picture of the FORTRAN language; however, many of its features have had to be omitted. There are 23 other types of statements in the language, many of them completely analogous to some of those described here. They provide facilities for referring to other input-output and auxiliary storage devices (tapes, drums, and card punch), for specifying preset and computed branching of control, for detecting various conditions which may arise such as an attempt to divide by zero, and for providing various information about a program to the translator. A complete description of the language is to be found in Programmer's Reference Manual, the FORTRAN Automatic Coding System for the IBM 704.

Preparation of a Program for Translation

The translator accepts statements punched one per card (continuation cards may be used for very long statements).
There is a separate key on the keypunching device for each character used in FORTRAN statements and each character is represented in the card by several holes in a single column of the card. Five columns are reserved for a statement number (if present) and 66 are available for the statement. Keypunching a FORTRAN program is therefore a process similar to that of typing the program.

Translation

The deck of cards obtained by keypunching may then be put in the card reader of a 704 equipped with the translator program. When the load button is pressed one gets either 1) a list of input statements which fail to conform to specifications of the FORTRAN language accompanied by remarks which indicate the type of error in each case; 2) a deck of binary cards representing the desired 704 program; 3) a binary tape of the program which can either be preserved or loaded and executed immediately after translation is complete; or 4) a tape containing the output program in symbolic form suitable for alteration and later assembly. (Some of these outputs may be unavailable at the time of publication.)

THE FORTRAN TRANSLATOR

General Organization of the System

The FORTRAN translator consists of six successive sections, as follows.

Section 1: Reads in and classifies statements. For arithmetic formulas, compiles the object (output) instructions. For nonarithmetic statements including input-output, does a partial compilation, and records the remaining information in tables. All instructions compiled in this section are in the COMPAIL file.

Section 2: Compiles the instructions associated with indexing, which result from DO statements and the occurrence of subscripted variables. These instructions are placed in the COMPDO file.

Section 3: Merges the COMPAIL and COMPDO files into a single file, meanwhile completing the compilation of nonarithmetic statements begun in Section 1.
The object program is now complete, but assumes an object machine with a large number of index registers.

Section 4: Carries out an analysis of the flow of the object program, to be used by Section 5.

Section 5: Converts the object program to one which involves only the three index registers of the 704.

Section 6: Assembles the object program, producing a relocatable binary program ready for running. Also on demand produces the object program in SHARE symbolic language.

(Note: Section 3 is of internal importance only; Section 6 is a fairly conventional assembly program. These sections will be treated only briefly in what follows.)

Within the translator, information is passed from section to section in two principal forms: as compiled instructions, and as tables. The compiled instructions (e.g., the COMPAIL and COMPDO files, and later their merged result) exist in a four-word format which contains all the elements of a symbolic 704 instruction; i.e., symbolic location, three-letter operation code, symbolic address with relative absolute part, symbolic tag, and absolute decrement. (Instructions which refer to quantities given symbolic names by the programmer have those same names in their addresses.) This symbolic format is retained until section 6. Throughout, the order of the compiled instructions is maintained by means of the symbolic locations (internal statement numbers), which are assigned in sequential fashion by section 1 as each new statement is encountered.

The tables contain all information which cannot yet be embodied in compiled instructions. For this reason the translator requires only the single scan of the source program performed in section 1.

A final observation should be made about the organization of the system. Basically, it is simple, and most of the complexities which it does possess arise from the effort to cause it to produce object programs which can compete in efficiency with hand-written programs.
Some of these complexities will be found within the individual sections; but also, in the system as a whole, the sometimes complicated interplay between compiled instructions and tables is a consequence of the desire to postpone compiling until the analysis necessary to produce high object-program efficiency has been performed.

For an input-output statement, section 1 compiles the appropriate read or write select (RDS or WRS) instruction, and the necessary copy (CPY) instructions (for binary operations) or transfer instructions to prewritten input-output routines which perform conversion between decimal and binary and govern format (for decimal operations). When the list of the input-output statement is repetitive, table entries are made which will cause section 2 to generate the indexing instructions necessary to make the appropriate loops. The treatment of statements which are neither input-output nor arithmetic is similar; i.e., those instructions which can be compiled are compiled, and the remaining information is extracted and placed in one or more of the appropriate tables.

In contrast, arithmetic formulas are completely treated in section 1, except for open (built-in) subroutines, which are added in section 3; a complete set of compiled instructions is produced in the COMPAIL file. This compilation involves two principal tasks: 1) the generation of an appropriate sequence of arithmetic instructions to carry out the computation specified by the formula, and 2) the generation of (symbolic) tags for those arithmetic instructions which refer to subscripted variables (variables which denote arrays) which in combination with the indexing instructions to be compiled in section 2 will refer correctly to the individual members of those arrays. Both these tasks are accomplished in the course of a single scan of the formula. Task 2) can be quickly disposed of.
When a subscripted variable is encountered in the scan, its subscript(s) are examined to determine the symbols used in the subscripts, their multiplicative coefficients, and the dimensions of the array. These items of information are placed in tables where they will be available to section 2; also from them is generated a subscript combination name which is used as the symbolic tag of those instructions which refer to the subscripted variable.

The difficulty in carrying out task 1) is one of level; there is implicit in every arithmetic formula an order of computation, which arises from the control over ordering assigned by convention to the various symbols (parentheses, +, -, *, /, etc.) which can appear, and this implicit ordering must be made explicit before compilation of the instructions can be done. This explicitness is achieved, during the formula scan, by associating with each operation required by the formula a level number, such that if the operations are carried out in the order of increasing level number the correct sequence of arithmetic instructions will be obtained. The sequence of level numbers is obtained by means of a set of rules, which specify for each possible pair formed of operation type and symbol type the increment to be added to or subtracted from the level number of the preceding pair.

In fact, the compilation is not carried out with the raw set of level numbers produced during the scan. After the scan, but before the compilation, the levels are examined for empty sections which can be deleted, for permutations of operations on the same level which will reduce the number of accesses to memory, and for redundant computation (arising from the existence of common subexpressions) which can be eliminated.

An example will serve to show (somewhat inaccurately) some of the principles employed in the level-analysis process.
Consider the following arithmetic expression:

A + B ** C * (E + F)

In the level analysis of this expression parentheses are in effect inserted which define the proper order in which the operations are to be performed. If only three implied levels are recognized (corresponding to +, *, and **) the expression obtains the following form:

+(*(**A)) + (*(**B**C) * [+(*(**E)) + (*(**F))])

The brackets represent the parentheses appearing in the original expression. (The level-analysis routine actually recognizes an additional level corresponding to functions.) Given the above expression the level-analysis routine proceeds to define a sequence of new dependent variables, the first of which represents the value of the entire expression. Each new variable is generated whenever a left parenthesis is encountered and its definition is entered on another line. In the single scan of the expression it is often necessary to begin the definition of one new variable before the definition of another has been completed. The subscripts of the u's in the following sets of definitions indicate the order in which they were defined. [The table of u-definitions appears as a figure in the original and is not reproduced here.]

This is the point reached at the end of the formula scan. What follows illustrates the further processing applied to the set of levels. Notice that u9, for example, is defined as **F. Since there are not two or more operands to be combined, the ** serves only as a level indication and no further purpose is served by having defined u9. The procedure therefore substitutes F for u9 wherever u9 appears and the line u9 = **F is deleted. Similarly, F is then substituted for u8 and u8 = *F is deleted. This elimination of "redundant" u's is carried to completion and results in the following: [the reduced set of definitions appears as a figure in the original and is not reproduced here]. These definitions, read up, describe a legitimate procedure for obtaining the value of the original expression. The number of u's remaining at this point (in this case four) determines the number of intermediate quantities which may need to be stored.
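As a loose modern analogue (not the translator's actual level-number rules), the same job of making the implicit order of operations explicit can be done by converting the expression to postfix form with an operator-precedence scan:

```python
# Precedence: ** highest (right-associative), then *, then +.
PREC = {'**': 3, '*': 2, '+': 1}

def to_postfix(tokens):
    """Make the implicit evaluation order of an expression explicit."""
    out, stack = [], []
    for t in tokens:
        if t in PREC:
            # Pop operators that must be applied before t.
            while (stack and stack[-1] in PREC and
                   (PREC[stack[-1]] > PREC[t] or
                    (PREC[stack[-1]] == PREC[t] and t != '**'))):
                out.append(stack.pop())
            stack.append(t)
        elif t == '(':
            stack.append(t)
        elif t == ')':
            while stack[-1] != '(':
                out.append(stack.pop())
            stack.pop()
        else:
            out.append(t)            # operand
    while stack:
        out.append(stack.pop())
    return out

# A + B ** C * (E + F)  ==>  A B C ** E F + * +
```

Read left to right, the postfix result plays the same role as the paper's ordering by increasing level number: operators appear in the sequence in which arithmetic instructions would be compiled.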
However, further examination of this case reveals that the result of u3 is in the accumulator, ready for u0; therefore the store and load instructions which would usually be compiled between u3 and u0 are omitted.

Section 2 (Nelson and Ziller)

Throughout the object program will appear instructions which refer to subscripted variables. Each of these instructions will (until section 5) be tagged with a symbolic index register corresponding to the particular subscript combination of the subscripts of the variable [e.g., (I, K, J) and (K, I, J) are two different subscript combinations]. If the object program is to work correctly, every symbolic index register must be so governed that it will have the appropriate contents at every instant that it is being used. It is the source program, of course, which determines what these appropriate contents must be, primarily through its DO statements, but also through arithmetic formulas (e.g., I = N + 1) which may define the values of variables appearing in subscripts, or input formulas which may read such values in at object time. Moreover, in the case of DO statements, which are designed to produce loops in the object program, it is necessary to provide tests for loop exit. It is these two tasks, the governing of symbolic index registers and the testing of their contents, which section 2 must carry out. Much of the complexity of what follows arises from the wish to carry out these tasks optimally; i.e., when a variable upon which many subscript combinations depend undergoes a change, to alter only those index registers which really require changing in the light of the problem flow, and to handle exits correctly with a minimum number of tests.
If the following subscripted variable appears in a FORTRAN program

A(2*I + 1, 4*J + 3, 6*K + 5)

the index quantity which must be in its symbolic index register when this reference to A is made is

(c1*i - 1) + (c2*j - 1)*di + (c3*k - 1)*di*dj + 1,

where c1, c2, and c3 in this case have the values 2, 4, and 6; i, j, and k are the values of I, J, and K at the moment; and di and dj are the I and J dimensions of A. The effect of the addends 1, 3, and 5 is incorporated in the address of the instruction which makes the reference.

In general, the index quantity associated with a subscript combination as given above, once formed, is not recomputed. Rather, every time one of the variables in a subscript combination is incremented under control of a DO, the corresponding index quantity is incremented by the appropriate amount. In the example given, if K is increased by n (under control of a DO), the index quantity is increased by c3*di*dj*n, giving the correct new value.

The following paragraphs discuss in further detail the ways in which index quantities are computed and modified.

Choosing the Indexing Instructions; Case of Subscripts Controlled by DO's

We distinguish between two classes of subscripts: those which are in the range of a DO having that subscript as its index symbol, and those subscripts which are not controlled by DO's. The fundamental idea for subscripts controlled by DO's is that a sequence of indexing instruction groups can be selected to answer the requirements, and that the choice of a particular instruction group depends mainly on the arrangement of the subscripts within the subscript combination and the order of the DO's controlling each subscript.

DO's often exist in nests. A nest of DO's consists of all the DO's contained by some one DO which is itself not contained by any other. Within a nest, DO's are assigned level numbers.
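The index-quantity relation above can be checked with a small Python sketch (the dimensions and subscript values here are invented for illustration):

```python
def index_quantity(i, j, k, c1, c2, c3, di, dj):
    # (c1*i - 1) + (c2*j - 1)*di + (c3*k - 1)*di*dj + 1; the constant
    # addends of the subscripts live in the instruction address instead.
    return (c1 * i - 1) + (c2 * j - 1) * di + (c3 * k - 1) * di * dj + 1

# For A(2*I+1, 4*J+3, 6*K+5) with hypothetical dimensions di = dj = 10:
c1, c2, c3, di, dj = 2, 4, 6, 10, 10
base = index_quantity(3, 2, 1, c1, c2, c3, di, dj)

# Increasing K by n under control of a DO adds exactly c3*di*dj*n,
# so the quantity is updated incrementally rather than recomputed.
n = 4
assert index_quantity(3, 2, 1 + n, c1, c2, c3, di, dj) - base == c3 * di * dj * n
```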
Wherever the index symbol of a DO appears as a subscript within the range of that DO, the level number of the DO is assigned to the subscript. The relative values of the level numbers in a subscript combination produce a group number which, along with other information, determines which indexing instruction group is to be compiled. [The source-language example and the resulting DO structure and group combinations appear as a figure in the original and are not reproduced here.]

Producing the Decrement Parts of Indexing Instructions

The part of the 704 instruction used to change or test the contents of an index register is called the decrement part of the instruction. The decrement parts of the FORTRAN indexing instructions are functions of the dimensions of arrays and of the parameters of DO's; that is, of the initial value n1, the upper bound n2, and the increment n3 appearing in the statement DO 1 i = n1, n2, n3. The general form of the function is

[(n2 - n1 + n3)/n3]*g

where g represents necessary coefficients and dimensions, and [x] denotes the integral part of x.

If all the parameters are constants, the decrement parts are computed during the execution of the FORTRAN executive program. If the parameters are variable symbols, then instructions are compiled in the object program to compute the proper decrement values. For object-program efficiency, it is desirable to associate these computing instructions with the outermost DO of a nest, where possible, and not with the inner loops, even though these inner DO's may have variable parameters. Such a variable parameter (e.g., N in "DO 7 I = 1, N") may be assigned values by the programmer by any of a number of methods: it may be a value brought in by a READ statement, it may be calculated by an arithmetic statement, it may take its value from a transfer exit from some other DO whose index symbol is the pertinent variable symbol, or it may be under the control of a DO in the nest.
A search is made to determine the smallest level number in the nest within which the variable parameter is not assigned a new value. This level number determines the place at which computing instructions can best be compiled.

Case of Subscripts not Controlled by DO's

The second of the two classes of subscript symbols is that of subscript symbols which are not under control of DO's. Such a subscript can be given a value in a number of ways similar to the defining of DO parameters: a value may be read in by a READ statement, it may be calculated by an arithmetic statement, or it may be defined by an exit made from a DO with that index symbol.

For subscript combinations with no subscript under the control of a DO, the basic technique used to introduce the proper values into a symbolic index register is that of determining where such definitions occur and, at the point of definition, using a subroutine to compute the new index quantity. These subroutines are generated at executive time, if it is determined that they are necessary. If the index quantity exists in a DO nest at the time of a transfer exit, then no subroutine calculations are necessary, since the exit values are precisely the desired values.

Mixed Cases

In cases in which some subscripts in a subscript combination are controlled by DO's, and some are not, instructions are compiled to compute the initial value of the subscript combination at the beginning of the outside loop. If the non-DO-controlled subscript symbol is then defined inside the loop (that is, after the computing of the load quantity) the procedure of using a subroutine at the point of subscript definition will bring the new value into the index register. An exception to the use of a subroutine is made when the subscript is defined by a transfer exit from a DO, and that DO is within the range of a DO controlling some other subscript in the subscript combination.
In such instances, if the index quantity is used in the inner DO, no calculation is necessary; the exit values are used. If the index quantity is not used, instructions are compiled to simulate this use, so that in either case the transfer exit leaves the correct function value in the index register.

Modification and Optimization

Initializing and computing instructions corresponding to a given DO are placed in the object program at a point corresponding to the lowest possible (outermost) DO level rather than at the point corresponding to the given DO. This technique results in the desired removal of certain instructions from the most frequent innermost loops of the object program. However, it necessitates the consideration of some complex questions when the flow within a nest of DO's is complicated by the occurrence of transfer escapes from DO-type repetition and by other IF and GO TO flow paths.

Consider a simple example: a nest having a DO on I containing a DO on J, where the subscript combination (I, J) appears only in the inner loop. If the object program corresponded precisely to the FORTRAN language program, there would be instructions at the entrance point of the inner loop to set the value of J in (I, J) to the initial value specified by the inner DO. Usually, however, it is more efficient to reset the value of J in (I, J) at the end of the inner loop upon leaving it, and the object program is so constructed. In this case it becomes necessary to compile instructions, following every transfer exit from the inner loop into the outer loop (if there are any such exits), which will also reset the value of J in (I, J) to the initial value it should have at the entrance of the inner loop. These instructions, plus the initialization of both I and J in (I, J) at the entrance of the outer loop (on I), insure that J always has its proper initial value at the entrance of the inner loop even though no instructions appear at that point which change J.
The situation becomes considerably more complicated if the subscript combination (I, J) also appears in the outer loop. In this case two independent index quantities are created, one corresponding to (I, J) in the inner loop, the other to (I, J) in the outer loop. Optimizing features play an important role in the modification of the procedures and techniques outlined above. It may be the case that the DO structure and subscript combinations of a nest describe the scanning of a two- or three-dimensional array which is the equivalent of a sequential scan of a vector; i.e., a reference to each of a set of memory locations in descending order. Such an equivalent procedure is discovered, and where the flow of a nest permits, is used in place of more complicated indexing. This substitution is not of an empirical nature, but is instead the logical result of a generalized analysis. Other optimizing techniques concern, for example, the computing instructions compiled to evaluate the functions (governing index values and decrements) mentioned previously. When some of the parameters are constant, the functions are reduced at executive time, and a frequent result is the compilation of only one instruction, a reference to a variable, to obtain a proper initializing value. In choosing the symbolic index register in which to test the value of a subscript for exit purposes, those index registers are avoided which would require the compilation of instructions to modify the test instruction decrement.

Section 4 (Haibt) and Section 5 (Best)

The result of section 3 is a complete program, but one in which tagged instructions are tagged only symbolically, and which assumes that there will be a real index register available for every symbolic one. It is the task of sections 4 and 5 to convert this program to one involving only the three real index registers of the 704.
Generally, this requires the setting up, for each symbolic index register, of a storage cell which will act as an index cell, and the addition of instructions to load the real index registers from, and store them into, the index cells. This is done in section 5 (tag analysis) on the basis of information about the pattern and frequency of flow provided by section 4 (flow analysis) in such a way that the time spent in loading and storing index registers will be nearly minimum. The fundamental unit of program is the basic block; a basic block is a stretch of program which has a single entry point and a single exit point. The purpose of section 4 is to prepare for section 5 a table of predecessors (PRED table) which enumerates the basic blocks and lists for every basic block each of the basic blocks which can be its immediate predecessor in flow, together with the absolute frequency of each such basic block link. This table is obtained by an actual "execution" of the program in Monte-Carlo fashion, in which the outcome of conditional transfers arising out of IF-type statements and computed GO TO's is determined by a random number generator suitably weighted according to whatever FREQUENCY statements have been provided. Section 5 is divided into four parts, of which part 1 is the most important. It makes all the major decisions concerning the handling of index registers, but records them simply as bits in the PRED table and a table of all tagged instructions, the STAG table. Part 2 merely reorganizes those tables; part 3 adds a slight further treatment to basic blocks which are terminated by an assigned GO TO; and finally part 4 compiles the finished program under the direction of the bits in the PRED and STAG tables. Since part 1 does the real work involved in handling the index registers, attention will be confined to this part in the sequel.
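Section 4's Monte-Carlo construction of the PRED table can be sketched in miniature. The flow graph, its weights, and the names below are invented for illustration; the weights stand in for the programmer's FREQUENCY-statement estimates, and the tally plays the role of the absolute link frequencies recorded in the PRED table.

```python
import random

# Hedged sketch of section 4's flow analysis: "execute" a tiny flow graph
# many times, deciding each conditional transfer with a weighted random
# number generator, and tally how often each (predecessor, successor)
# link is taken -- a toy PRED table.

# block -> list of (successor, weight); graph and weights are invented.
flow = {
    "entry": [("loop", 1.0)],
    "loop":  [("loop", 0.9), ("exit", 0.1)],   # IF-type transfer
}

def build_pred_table(flow, trials=10_000, seed=1957):
    rng = random.Random(seed)
    pred = {}                                  # (pred, succ) -> frequency
    for _ in range(trials):
        block = "entry"
        while block != "exit":
            succs, weights = zip(*flow[block])
            nxt = rng.choices(succs, weights=weights)[0]
            pred[(nxt, block)] = pred.get((nxt, block), 0) + 1
            block = nxt
    return pred

pred = build_pred_table(flow)
# Every trial enters the loop once and exits it once, so these two links
# are each observed exactly `trials` times; the loop->loop link is
# observed about nine times as often as the loop->exit link.
assert pred[("loop", "entry")] == 10_000
assert pred[("exit", "loop")] == 10_000
```

Keying the table by (successor, predecessor) mirrors the paper's description: for every basic block, the table lists each possible immediate predecessor with its link frequency.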
The basic flow of part 1 of section 5 is as follows. Consider a moment partway through the execution of part 1, when a new region has just been treated. The less frequent basic blocks have not yet been encountered; each basic block that has been treated is a member of some region. The existing regions are of two types: transparent, in which there is at least one real index register which has not been used in any of the member basic blocks, and opaque. Bits have been entered in the STAG table, calling where necessary for an LXD (load index register from index cell) instruction preceding, or an SXD (store index register in index cell) instruction following, the tagged instructions of the basic blocks that have been treated. For each basic block that has been treated is recorded the required contents of each of the three real index registers for entrance into the block, and the contents upon exit. In the PRED table, entries that have been considered may contain bits calling for interblock LXD's and SXD's, when the exit and entrance conditions across the link do not match. Now the PRED table is scanned for the highest-frequency link not yet considered. The new region is formed by working both forward over successors and backward over predecessors from this point, always choosing the most frequent remaining path of control. The marking out of a new region is terminated by encountering 1) a basic block which belongs to an opaque region, 2) a basic block which has no remaining links into it (when working backward) or from it (when working forward), or which belongs to a transparent region with no such links remaining, or 3) a basic block which closes a loop. Thus the new region generally includes both basic blocks not hitherto encountered, and entire regions of basic blocks which have already been treated. The treatment of hitherto untreated basic blocks in the new region is carried out by simulating the action of the program.
Three cells are set aside to represent the object machine index registers. As each new tagged instruction is encountered these cells are examined to see if one of them contains the required tag; if not, the program is searched ahead to determine which of the three index registers is the least undesirable to replace, and a bit is entered in the STAG table calling for an LXD instruction to that index register. When the simulation of a new basic block is finished, the entrance and exit conditions are recorded, and the next item in the new region is considered. If it is a new basic block, the simulation continues; if it is a region, the index register assignment throughout the region is examined to see if a permutation of the index registers would not make it match better, and any remaining mismatch is taken care of by entries in PRED calling for interblock LXD's. A final concept is that of index register activity. When a symbolic index register is initialized, or when its contents are altered by an indexing instruction, the value of the corresponding index cell falls out of date, and a subsequent LXD will be incorrect without an intervening SXD. This problem is handled by activity bits, which indicate when the index cell is out of date; when an LXD is required the activity bit is interrogated, and if it is on, an SXD is called for immediately after the initializing or indexing instruction responsible for the activity, or in the interblock link from the region containing that instruction, depending upon whether the basic block containing that instruction was a new basic block or one in a region already treated. When the new region has been treated, all of the old regions which belonged to it simply lose their identity; their basic blocks and the hitherto untreated basic blocks become the basic blocks of the new region. Thus at the end of part 1 there is but one single region, and it is the entire program.
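The core of the simulation described above can be sketched as follows. Three cells stand for the 704's real index registers; on a tag miss, the program is searched ahead and the register whose tag is needed furthest in the future is replaced. Note that "furthest next use" is our reading of "least undesirable to replace", not the paper's stated rule, and the tag names are invented.

```python
# Hedged sketch of part 1's simulation over one basic block's tagged
# instructions.  On each instruction we check whether a "register"
# already holds the required symbolic tag; if not, we look ahead and
# evict the register whose tag is needed furthest in the future,
# recording the position where an LXD bit would go in the STAG table.

def assign_registers(tag_sequence, num_regs=3):
    regs = [None] * num_regs          # contents of the three "registers"
    lxds = []                         # positions calling for an LXD
    for pos, tag in enumerate(tag_sequence):
        if tag in regs:
            continue                  # required tag already loaded
        if None in regs:
            victim = regs.index(None) # a register is still free
        else:
            # evict the tag whose next use lies furthest ahead
            def next_use(r):
                rest = tag_sequence[pos + 1:]
                return rest.index(regs[r]) if regs[r] in rest else len(rest)
            victim = max(range(num_regs), key=next_use)
        regs[victim] = tag
        lxds.append(pos)              # STAG bit: LXD precedes this instruction
    return lxds

# Four symbolic index registers competing for three real ones: only the
# first four instructions need loads, since "c" (unused later) is evicted.
print(assign_registers(["a", "b", "c", "d", "a", "b"]))  # -> [0, 1, 2, 3]
```

A real allocator must also decide where SXD stores go (the activity bits above); this sketch covers only the load side of the decision.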
The high-frequency parts of the program were treated early; the entrance and exit conditions and indeed the whole handling of the index registers reflect primarily the efficiency needs of these high-frequency paths. The loading and unloading of the index registers is therefore as much as possible placed in the low-frequency paths, and the object program time consumed in these operations is thus brought near to a minimum.

Conclusion

The preceding sections of this paper have described the language and the translator program of the FORTRAN system. Following are some comments on the system and its application.

Scope of Applicability

The language of the system is intended to be capable of expressing virtually any numerical procedure. Some problems programmed in FORTRAN language to date include: reactor shielding, matrix inversion, numerical integration, tray-to-tray distillation, microwave propagation, radome design, numerical weather prediction, plotting and root location of a quartic, a procedure for playing the game "nim," helicopter design, and a number of others. The sizes of these first programs range from about 10 FORTRAN statements to well over 1000, or, in terms of machine instructions, from about 100 to 7500.

Conciseness and Convenience

The statement of a program in FORTRAN language rather than in machine language or assembly program language is intended to result in a considerable reduction in the amount of thinking, bookkeeping, writing, and time required. In the problems mentioned in the preceding paragraph, the ratio of the number of output machine instructions to the number of input FORTRAN statements for each problem varied between about 4 and 20. (The number of machine instructions does not include any library subroutines and thus represents approximately the number which would need to be hand coded, since FORTRAN does not normally produce programs appreciably longer than corresponding hand-coded ones.)
The ratio tends to be high, of course, for problems with many long arithmetic expressions or with complex loop structure and subscript manipulation. The ratio is a rough measure of the conciseness of the language. The convenience of using FORTRAN language is necessarily more difficult to measure than its conciseness. However, the ratio of coding times, assembly program language vs. FORTRAN language, gives some indication of the reduction in thinking and bookkeeping as well as in writing. This time reduction ratio also appears to range from about 4 to 20, although it is difficult to estimate accurately. The largest ratios are usually obtained on those problems with complex loops and subscript manipulation, as a result of the planning of indexing and bookkeeping procedures by the translator rather than by the programmer.

Education

It is considerably easier to teach people untrained in the use of computers how to write programs in FORTRAN language than it is to teach them machine language. A FORTRAN manual specifically designed as a teaching tool will be available soon. Despite the unavailability of this manual, a number of successful courses for nonprogrammers, ranging from one to three days, have been completed using only the present reference manual.

Debugging

The structure of FORTRAN statements is such that the translator can detect and indicate many errors which may occur in a FORTRAN-language program. Furthermore, the nature of the language makes it possible to write programs with far fewer errors than are to be expected in machine-language programs. Of course, it is only necessary to obtain a correct FORTRAN-language program for a problem; therefore all debugging efforts are directed toward this end. Any errors in the translator program or any machine malfunction during the process of translation will be detected and corrected by procedures distinct from the process of debugging a particular FORTRAN program.
In order to produce a program with built-in debugging facilities, it is a simple matter for the programmer to write various PRINT statements, which cause "snapshots" of pertinent information to be taken at appropriate points in his procedure, and insert these in the deck of cards comprising his original FORTRAN program. After compiling this program, running the resulting machine program, and comparing the resulting snapshots with hand-calculated or known values, the programmer can localize the specific area in his FORTRAN program which is causing the difficulty. After making the appropriate corrections in the FORTRAN program he may remove the snapshot cards and recompile the final program, or leave them in and recompile if the program is not yet fully checked. Experience in debugging FORTRAN programs to date has been somewhat clouded by the simultaneous process of debugging the translator program. However, it has become clear that most errors in FORTRAN programs are detected in the process of translation. So far, those programs having errors undetected by the translator have been corrected with ease by examining the FORTRAN program and the data output of the machine program.

Method of Translation

In general, the translation of a FORTRAN program to a machine-language program is characterized by the fact that each piece of the output program has been constructed, instruction by instruction, so as not only to produce an efficient piece locally but also to fit efficiently into its context, as a result of many considerations of the structure of its neighboring pieces and of the entire program. With the exception of subroutines (corresponding to various functions and input-output statements appearing in the FORTRAN program), the output program does not contain long precoded instruction sequences with parameters inserted during translation.
Such instruction sequences must be designed to do a variety of related tasks and are often not efficient in particular cases to which they are applied. FORTRAN-written programs seldom contain sequences of even three instructions whose operation parts alone could be considered a precoded "skeleton." There are a number of interesting observations concerning FORTRAN-written programs which may throw some light on the nature of the translation process. Many object programs, for example, contain a large number of instructions which are not attributable to any particular statement in the original FORTRAN program. Even transfers of control will appear which do not correspond to any control statement (e.g., DO, IF, GO TO) in the original program. The instructions arising from an arithmetic expression are optimally arranged, often in a surprisingly different sequence than the expression would lead one to expect. Depending on its context, the same DO statement may give rise to no instructions or to several complicated groups of instructions located at different points in the program. While it is felt that the ability of the system to translate algebraic expressions provides an important and necessary convenience, its ability to treat subscripted variables, DO statements, and the various input-output and FORMAT statements often provides even more significant conveniences. In any case, the major part of the translator program is devoted to handling these last mentioned facilities rather than to translating arithmetic expressions. (The near-optimal treatment of arithmetic expressions is simply not as complex a task as a similar treatment of "housekeeping" operations.) A list of the approximate number of instructions in each of the six sections of the translator will give a crude picture of the effort expended in each area. (Recall that Section 1 completely treats arithmetic statements in addition to performing a number of other tasks.)
The generality and complexity of some of the techniques employed to achieve efficient output programs may often be superfluous in many common applications. However, the use of such techniques should enable the FORTRAN system to produce efficient programs for important problems which involve complex and unusual procedures. In any case, the intellectual satisfaction of having formulated and solved some difficult problems of translation, and the knowledge and experience acquired in the process, are themselves almost a sufficient reward for the long effort expended on the FORTRAN project.

URL https://www.softwarepreservation.org/projects/FORTRAN/paper/BackusEtAl-FortranAutomaticCodingSystem-1957.pdf

If the url does not work I have a public space with the original content as a pdf or set of images https://1drv.ms/f/c/ea9004809c2729bb/EisCos3pDwdFtiDupCEt7hgBDDfkri_mSFruQi6cKvvZHA?e=NGLC8d
FORTRAN a history from John Backus
richardmurray posted a blog entry in BlackGamesElite's BGE Journal
ForTran for formula translating system, I think this is a wonderful read. I must admit, while my agenda in Black Games Elite is for a set of Black people to develop games, with my involvement as one of them, I do think, as a maker, a side project of making a computer is not worthless. It will be ideal to make a computer with its machine code and build upwards, if for no other reason than the acute experience of such a thing, which this history partially proves.

THE HISTORY OF FORTRAN I, II, AND III
John Backus
IBM Research Laboratory
San Jose, California

1. Early background and environment.

1.1 Attitudes about automatic programming in the 1950's.

Before 1954 almost all programming was done in machine language or assembly language. Programmers rightly regarded their work as a complex, creative art that required human inventiveness to produce an efficient program. Much of their effort was devoted to overcoming the difficulties created by the computers of that era: the lack of index registers, the lack of built-in floating point operations, restricted instruction sets (which might have AND but not OR, for example), and primitive input-output arrangements. Given the nature of computers, the services which "automatic programming" performed for the programmer were concerned with overcoming the machine's shortcomings. Thus the primary concern of some "automatic programming" systems was to allow the use of symbolic addresses and decimal numbers (e.g., the MIDAC Input Translation Program [Brown and Carr 1954]). But most of the larger "automatic programming" systems (with the exception of Laning and Zierler's algebraic system [Laning and Zierler 1954] and the A-2 compiler [Remington Rand 1953; Moser 1954]) simply provided a synthetic "computer" with an order code different from that of the real machine.
This synthetic computer usually had floating point instructions and index registers and had improved input-output commands; it was therefore much easier to program than its real counterpart. The A-2 compiler also came to be a synthetic computer sometime after early 1954. But in early 1954 its input had a much cruder form; instead of "pseudo-instructions" its input was then a complex sequence of "compiling instructions" that could take a variety of forms ranging from machine code itself, to lengthy groups of words constituting rather clumsy calling sequences for the desired floating point subroutine, to "abbreviated form" instructions that were converted by a "Translator" into ordinary "compiling instructions" [Moser 1954]. After May 1954 the A-2 compiler acquired a "pseudocode" which was similar to the order codes for many floating point interpretive systems that were already in operation in 1953: e.g., the Los Alamos systems, DUAL and SHACO [Bouricius 1953; Schlesinger 1953], the MIT "Summer Session Computer" [Adams and Laning 1954], a system for the ILLIAC designed by D. J. Wheeler [Muller 1954], and the SPEEDCODING system for the IBM 701 [Backus 1954]. The Laning and Zierler system was quite a different story: it was the world's first operating algebraic compiler, a rather elegant but simple one. Knuth and Pardo [1977] assign this honor to Alick Glennie's AUTOCODE, but I, for one, am unable to recognize the sample AUTOCODE program they give as "algebraic", especially when it is compared to the corresponding Laning and Zierler program. All of the early "automatic programming" systems were costly to use, since they slowed the machine down by a factor of five or ten. The most common reason for the slowdown was that these systems were spending most of their time in floating point subroutines.
Simulated indexing and other "housekeeping" operations could be done with simple inefficient techniques, since, slow as they were, they took far less time than the floating point work. Experience with slow "automatic programming" systems, plus their own experience with the problems of organizing loops and address modification, had convinced programmers that efficient programming was something that could not be automated. Another reason that "automatic programming" was not taken seriously by the computing community came from the energetic public relations efforts of some visionaries to spread the word that their "automatic programming" systems had almost human abilities to understand the language and needs of the user; whereas closer inspection of these same systems would often reveal a complex, exception-ridden performer of clerical tasks which was both difficult to use and inefficient. Whatever the reasons, it is difficult to convey to a reader in the late seventies the strength of the skepticism about "automatic programming" in general and about its ability to produce efficient programs in particular, as it existed in 1954. (In the above discussion of attitudes about "automatic programming" in 1954 I have mentioned only those actual systems of which my colleagues and I were aware at the time. For a comprehensive treatment of early programming systems and languages I recommend the articles by Knuth and Pardo [1977] and Sammet [1969].)

1.2 The economics of programming.

Another factor which influenced the development of FORTRAN was the economics of programming in 1954. The cost of programmers associated with a computer center was usually at least as great as the cost of the computer itself. (This fact follows from the average salary-plus-overhead and number of programmers at each center and from the computer rental figures.) In addition, from one quarter to one half of the computer's time was spent in debugging.
Thus programming and debugging accounted for as much as three quarters of the cost of operating a computer; and obviously, as computers got cheaper, this situation would get worse. This economic factor was one of the prime motivations which led me to propose the FORTRAN project in a letter to my boss, Cuthbert Hurd, in late 1953 (the exact date is not known but other facts suggest December 1953 as a likely date). I believe that the economic need for a system like FORTRAN was one reason why IBM and my successive bosses, Hurd, Charles DeCarlo, and John McPherson, provided for our constantly expanding needs over the next five years without ever asking us to project or justify those needs in a formal budget.

1.3 Programming systems in 1954.

It is difficult for a programmer of today to comprehend what "automatic programming" meant to programmers in 1954. To many it then meant simply providing mnemonic operation codes and symbolic addresses; to others it meant the simple process of obtaining subroutines from a library and inserting the addresses of operands into each subroutine. Most "automatic programming" systems were either assembly programs, or subroutine-fixing programs, or, most popularly, interpretive systems to provide floating point and indexing operations. My friends and I were aware of a number of assembly programs and interpretive systems, some of which have been mentioned above; besides these there were primarily two other systems of significance: the A-2 compiler [Remington Rand 1953; Moser 1954] and the Laning and Zierler [1954] algebraic compiler at MIT.
As noted above, the A-2 compiler was at that time largely a subroutine-fixer (its other principal task was to provide for "overlays"); but from the standpoint of its input "programs" it provided fewer conveniences than most of the then current interpretive systems mentioned earlier; it later adopted a "pseudocode" as input which was similar to the input codes of these interpretive systems. The Laning and Zierler system accepted as input an elegant but rather simple algebraic language. It permitted single-letter variables (identifiers) which could have a single constant or variable subscript. The repertoire of functions one could use was denoted by "F" with an integer superscript to indicate the "catalog number" of the desired function. Algebraic expressions were compiled into closed subroutines and placed on a magnetic drum for subsequent use. The system was originally designed for the Whirlwind computer when it had 1,024 storage cells, with the result that it caused a slowdown in execution speed by a factor of about ten [Adams and Laning 1954]. The effect of the Laning and Zierler system on the development of FORTRAN is a question which has been muddled by many misstatements on my part. For many years I believed that we had gotten the idea for using algebraic notation in FORTRAN from seeing a demonstration of the Laning and Zierler system at MIT. In preparing a paper [Backus 1976] for the International Research Conference on the History of Computing at Los Alamos (June 10-15, 1976), I reviewed the matter with Irving Ziller and obtained a copy of a 1954 letter [Backus 1954a] (which Dr. Laning kindly sent to me). As a result the facts of the matter have become clear. The letter in question is one I sent to Dr. Laning asking for a demonstration of his system.
It makes clear that we had learned of his work at the Office of Naval Research Symposium on Automatic Programming for Digital Computers, May 13-14, 1954, and that the demonstration took place on June 2, 1954. The letter also makes clear that the FORTRAN project was well under way when the letter was sent (May 21, 1954) and included Harlan Herrick, Robert A. Nelson, and Irving Ziller as well as myself. Furthermore, an article in the proceedings of that same ONR Symposium by Herrick and myself [Backus and Herrick 1954] shows clearly that we were already considering input expressions like "Σ aᵢⱼ·bⱼₖ" and "X+Y". We went on to raise the question "...can a machine translate a sufficiently rich mathematical language into a sufficiently economical program at a sufficiently low cost to make the whole affair feasible?" These and other remarks in our paper presented at the Symposium in May 1954 make it clear that we were already considering algebraic input considerably more sophisticated than that of Laning and Zierler's system when we first heard of their pioneering work. Thus, although Laning and Zierler had already produced the world's first algebraic compiler, our basic ideas for FORTRAN had been developed independently; thus it is difficult to know what, if any, new ideas we got from seeing the demonstration of their system.

Quasi-footnote: In response to suggestions of the Program Committee let me try to deal explicitly with the question of what work might have influenced our early ideas for FORTRAN, although it is mostly a matter of listing work of which we were then unaware. I have already discussed the work of Laning and Zierler and the A-2 compiler. The work of Heinz Rutishauser [1952] is discussed later on. Like most of the world (except perhaps Rutishauser and Corrado Böhm, who was the first to describe a compiler in its own language [Böhm 1951]) we were entirely unaware of the work of Konrad Zuse [1959; 1972].
Zuse's "Plankalkül", which he completed in 1945, was, in some ways, a more elegant and advanced programming language than those that appeared ten and fifteen years later. We were also unaware of the work of Mauchly et al. ("Short Code", 1950), Burks ("Intermediate PL", 1950), Böhm (1951), and Glennie ("AUTOCODE", 1952), as discussed in Knuth and Pardo [1977]. We were aware of but not influenced by the automatic programming efforts which simulated a synthetic computer (e.g., the MIT "Summer Session Computer", SHACO, DUAL, SPEEDCODING, and the ILLIAC system), since their languages and systems were so different from those of FORTRAN. Nor were we influenced by algebraic systems which were designed after our "Preliminary Report" [1954] but which began operation before FORTRAN (e.g., BACAIC [Grems and Porter 1956], IT [Perlis, Smith and Van Zoeren 1957], MATH-MATIC [Ash et al. 1957]). Although PACT I [Baker 1956] was not an algebraic compiler, it deserves mention as a significant development designed after the FORTRAN language but in operation before FORTRAN, which also did not influence our work. (End of quasi-footnote.)

Our ONR Symposium article [Backus and Herrick 1954] also makes clear that the FORTRAN group was already aware that it faced a new kind of problem in automatic programming. The viability of most compilers and interpreters prior to FORTRAN had rested on the fact that most source language operations were not machine operations. Thus even large inefficiencies in performing both looping/testing operations and computing addresses were masked by most operating time being spent in floating point subroutines. But the advent of the 704 with built-in floating point and indexing radically altered the situation.
The 704 presented a double challenge to those who wanted to simplify programming; first, it removed the raison d'être of earlier systems by providing in hardware the operations they existed to provide; second, it increased the problem of generating efficient programs by an order of magnitude by speeding up floating point operations by a factor of ten and thereby leaving inefficiencies nowhere to hide. In view of the widespread skepticism about the possibility of producing efficient programs with an automatic programming system and the fact that inefficiencies could no longer be hidden, we were convinced that the kind of system we had in mind would be widely used only if we could demonstrate that it would produce programs almost as efficient as hand coded ones and do so on virtually every job. It was our belief that if FORTRAN, during its first months, were to translate any reasonable "scientific" source program into an object program only half as fast as its hand coded counterpart, then acceptance of our system would be in serious danger. This belief caused us to regard the design of the translator as the real challenge, not the simple task of designing the language. Our belief in the simplicity of language design was partly confirmed by the relative ease with which similar languages had been independently developed by Rutishauser [1952], Laning and Zierler [1954], and ourselves; whereas we were alone in seeking to produce really efficient object programs. To this day I believe that our emphasis on object program efficiency rather than on language design was basically correct. I believe that had we failed to produce efficient programs, the widespread use of languages like FORTRAN would have been seriously delayed.
In fact, I believe that we are in a similar, but unrecognized, situation today: in spite of all the fuss that has been made over myriad language details, current conventional languages are still very weak programming aids, and far more powerful languages would be in use today if anyone had found a way to make them run with adequate efficiency. In other words, the next revolution in programming will take place only when both of the following requirements have been met: (a) a new kind of programming language, far more powerful than those of today, has been developed, and (b) a technique has been found for executing its programs at not much greater cost than that of today's programs. Because of our 1954 view that success in producing efficient programs was more important than the design of the FORTRAN language, I consider the history of the compiler construction and the work of its inventors an integral part of the history of the FORTRAN language; therefore a later section deals with that subject.

2. The early stages of the FORTRAN project.

After Cuthbert Hurd approved my proposal to develop a practical automatic programming system for the 704 in December 1953 or January 1954, Irving Ziller was assigned to the project. We started work in one of the many small offices the project was to occupy in the vicinity of IBM headquarters at 590 Madison Avenue in New York; the first of these was in the Jay Thorpe Building on Fifth Avenue. By May 1954 we had been joined by Harlan Herrick and then by a new employee who had been hired to do technical typing, Robert A. Nelson (with Ziller, he soon began designing one of the most sophisticated sections of the compiler; he is now an IBM Fellow). By about May we had moved to the 19th floor of the annex of 590 Madison Avenue, next to the elevator machinery; the ground floor of this building housed the 701 installation on which customers tested their programs before the arrival of their own machines.
It was here that most of the FORTRAN language was designed, mostly by Herrick, Ziller and myself, except that most of the input-output language and facilities were designed by Roy Nutt, an employee of United Aircraft Corp. who was soon to become a member of the FORTRAN project.

After we had finished designing most of the language we heard about Rutishauser's proposals for a similar language [Rutishauser 1952]. It was characteristic of the unscholarly attitude of most programmers then, and of ourselves in particular, that we did not bother to review carefully the sketchy translation of his proposals that we finally obtained, since from their symbolic content they did not appear to add anything new to our proposed language. Rutishauser's language had a for statement and one-dimensional arrays, but no IF, GOTO, nor I/O statements. Subscripted variables could not be used as ordinary variables and operator precedence was ignored. His 1952 article described two compilers for this language (for more details see [Knuth and Pardo 1977]).

As far as we were aware, we simply made up the language as we went along. We did not regard language design as a difficult problem, merely a simple prelude to the real problem: designing a compiler which could produce efficient programs. Of course one of our goals was to design a language which would make it possible for engineers and scientists to write programs themselves for the 704. We also wanted to eliminate a lot of the bookkeeping and detailed, repetitive planning which hand coding involved. Very early in our work we had in mind the notions of assignment statements, subscripted variables, and the DO statement (which I believe was proposed by Herrick). We felt that these provided a good basis for achieving our goals for the language, and whatever else was needed emerged as we tried to build a way of programming on these basic ideas.
We certainly had no idea that languages almost identical to the one we were working on would be used for more than one IBM computer, not to mention those of other manufacturers. (After all, there were very few computers around then.) But we did expect our system to have a big impact, in the sense that it would make programming for the 704 very much faster, cheaper, and more reliable. We also expected that, if we were successful in meeting our goals, other groups and manufacturers would follow our example in reducing the cost of programming by providing similar systems with different but similar languages [Preliminary Report 1954].

By the fall of 1954 we had become the "Programming Research Group" and I had become its "manager". By November of that year we had produced a paper: "Preliminary Report, Specifications for the IBM Mathematical FORmula TRANslating System, FORTRAN" [Preliminary Report 1954] dated November 10. In its introduction we noted that "systems which have sought to reduce the job of coding and debugging problems have offered the choice of easy coding and slow execution or laborious coding and fast execution." On the basis more of faith than of knowledge, we suggested that programs "will be executed in about the same time that would be required had the problem been laboriously hand coded." In what turned out to be a true statement, we said that "FORTRAN may apply complex, lengthy techniques in coding a problem which the human coder would have neither the time nor inclination to derive or apply."

The language described in the "Preliminary Report" had variables of one or two characters in length, function names of three or more characters, recursively defined "expressions", subscripted variables with up to three subscripts, "arithmetic formulas" (which turn out to be assignment statements), and "DO-formulas".
These latter formulas could specify both the first and last statements to be controlled, thus permitting a DO to control a distant sequence of statements, as well as specifying a third statement to which control would pass following the end of the iteration. If only one statement was specified, the "range" of the DO was the sequence of statements following the DO down to the specified statement.

Expressions in "arithmetic formulas" could be "mixed": involve both "fixed point" (integer) and "floating point" quantities. The arithmetic used (all integer or all floating point) to evaluate a mixed expression was determined by the type of the variable on the left of the "=" sign. "IF-formulas" employed an equality or inequality sign ("=" or ">" or ">=") between two (restricted) expressions, followed by two statement numbers, one for the "true" case, the other for the "false" case.

A "Relabel formula" was designed to make it easy to rotate, say, the indices of the rows of a matrix so that the same computation would apply, after relabelling, even though a new row had been read in and the next computation was now to take place on a different, rotated set of rows. Thus, for example, if b is a 4 by 4 matrix, after RELABEL b(3,1), a reference to b(1,j) has the same meaning as b(3,j) before relabelling; b(2,j) after = b(4,j) before; b(3,j) after = b(1,j) before; and b(4,j) after = b(2,j) before relabelling.

The input-output statements provided included the basic notion of specifying the sequence in which data was to be read in or out, but did not include any "Format" statements.
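The RELABEL rotation described above amounts to a circular shift of the row indices. A minimal sketch in Python (the function name and representation are mine, not the Report's; the Report specified only the statement form):

```python
# Toy model of the Preliminary Report's RELABEL formula: after
# RELABEL b(k,1) on an n-row matrix, new row i refers to old row
# ((i - 1) + (k - 1)) mod n, counting rows from 1 as FORTRAN did.

def make_relabel(n_rows, new_origin):
    """Return a mapping from post-RELABEL row index to the original row."""
    offset = new_origin - 1
    return lambda i: ((i - 1 + offset) % n_rows) + 1

relabel = make_relabel(4, 3)                 # RELABEL b(3,1), b a 4 by 4 matrix
print([relabel(i) for i in (1, 2, 3, 4)])    # [3, 4, 1, 2], matching the text
```

The point of the facility was that a program could keep referring to "row 1" while fresh data rotated through the physical rows, with no data actually moved.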
The Report also lists four kinds of "specification sentences": (1) "dimension sentences" for giving the dimensions of arrays, (2) "equivalence sentences" for assigning the same storage locations to variables, (3) "frequency sentences" for indicating estimated relative frequency of branch paths or loops to help the compiler optimize the object program, and (4) "relative constant sentences" to indicate subscript variables which are expected to change their values very infrequently.

Toward the end of the Report (pp. 26-27) there is a section "Future additions to the FORTRAN system". Its first item is: "a variety of new input-output formulas which would enable the programmer to specify various formats for cards, printing, input tapes and output tapes". It is believed that this item is a result of our early consultations with Roy Nutt. This section goes on to list other proposed facilities to be added: complex and double precision arithmetic, matrix arithmetic, sorting, solving simultaneous equations, differential equations, and linear programming problems. It also describes function definition capabilities similar to those which later appeared in FORTRAN II; facilities for numerical integration; a summation operator; and table lookup facilities.

The final section of the Report (pp. 28-29) discusses programming techniques to use to help the system produce efficient programs. It discusses how to use parentheses to help the system identify identical subexpressions within an expression and thereby eliminate their duplicate calculation. These parentheses had to be supplied only when a recurring subexpression occurred as part of a term (e.g., if a*b occurred in several places, it would be better to write the term a*b*c as (a*b)*c to avoid duplicate calculation); otherwise the system would identify duplicates without any assistance. It also observes that the system would not produce optimal code for loops constructed without DO statements.
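The parenthesization advice above makes sense if one pictures a compiler that recognizes repeated subexpressions only when they appear as delimited units. A toy illustration (this is not the 1954 system's algorithm, just a sketch of why grouping a*b as (a*b) helps a textual matcher):

```python
# A matcher that counts parenthesized subexpressions appearing more
# than once in a source fragment.  "(a*b)*c" exposes the reusable unit
# "(a*b)"; the flat spelling "a*b*c" hides it inside a larger term.

import re
from collections import Counter

def parenthesized_duplicates(source):
    """Return parenthesized subexpressions occurring more than once."""
    counts = Counter(re.findall(r"\([^()]+\)", source))
    return {expr: n for expr, n in counts.items() if n > 1}

prog = "X = (A*B)*C \n Y = (A*B) + D"
print(parenthesized_duplicates(prog))   # {'(A*B)': 2}
```

With the grouped spelling, the duplicate `(A*B)` can be computed once and reused; written flat, the same compiler would compute it twice.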
This final section of the Report also notes that "no special provisions have been included in the FORTRAN system for locating errors in formulas". It suggests checking a program "by independently recreating the specifications for a problem from its FORTRAN formulation [!]". It says nothing about the system catching syntactic errors, but notes that an error-finding program can be written after some experience with errors has been accumulated.

Unfortunately we were hopelessly optimistic in 1954 about the problems of debugging FORTRAN programs (thus we find on page 2 of the Report: "Since FORTRAN should virtually eliminate coding and debugging... [!]") and hence syntactic error checking facilities in the first distribution of FORTRAN I were weak. Better facilities were added not long after distribution and fairly good syntactic checking was provided in FORTRAN II.

The FORTRAN language described in the Programmer's Reference Manual dated October 15, 1956 [IBM 1956] differed in a few respects from that of the Preliminary Report, but, considering our ignorance in 1954 of the problems we would later encounter in producing the compiler, there were remarkably few deletions (the Relabel and Relative Constant statements), a few retreats, some fortunate, some not (simplification of DO statements, dropping inequalities from IF statements--for lack of a ">" symbol--and prohibiting most "mixed" expressions and subscripted subscripts), and the rectification of a few omissions (addition of FORMAT, CONTINUE, computed and assigned GO TO statements, increasing the length of variables to up to six characters, and general improvement of input-output statements). Since our entire attitude about language design had always been a very casual one, the changes which we felt to be desirable during the course of writing the compiler were made equally casually.
We never felt that any of them involved a real sacrifice in convenience or power (with the possible exception of the Relabel statement, whose purpose was to coordinate input-output with computations on arrays, but this was one facility which we felt would have been really difficult to implement). I believe the simplification of the original DO statement resulted from the realization that (a) it would be hard to describe precisely, (b) it was awkward to compile, and (c) it provided little power beyond that of the final version.

In our naive unawareness of language design problems--of course we knew nothing of many issues which were later thought to be important, e.g., block structure, conditional expressions, type declarations--it seemed to us that once one had the notions of the assignment statement, the subscripted variable, and the DO statement in hand (and these were among our earliest ideas), then the remaining problems of language design were trivial: either their solution was thrust upon one by the need to provide some machine facility such as reading input, or by some programming task which could not be done with existing structures (e.g., skipping to the end of a DO loop without skipping the indexing instructions there: this gave rise to the CONTINUE statement).

One much-criticized design choice in FORTRAN concerns the use of spaces: blanks were ignored, even blanks in the middle of an identifier. Roy Nutt reminds me that that choice was partly in recognition of a problem widely known in SHARE, the 704 users' association. There was a common problem with keypunchers not recognizing or properly counting blanks in handwritten data, and this caused many errors. We also regarded ignoring blanks as a device to enable programmers to arrange their programs in a more readable form without altering their meaning or introducing complex rules for formatting statements.
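The blank-ignoring rule means differently spaced spellings of a statement are lexically identical. A minimal sketch of the idea (this toy version ignores the real-world exception of Hollerith text inside FORMAT statements):

```python
# Sketch of FORTRAN I's blank handling: all blanks are discarded
# before a statement is parsed, so spacing carries no meaning and
# programmers may lay out statements freely.

def strip_blanks(statement):
    """Normalize a statement the way a blank-insensitive scanner would."""
    return statement.replace(" ", "")

# These spellings are all the same statement to the compiler:
forms = ["DO 10 I = 1, 5", "DO10I=1,5", "D O 1 0 I = 1 , 5"]
print({strip_blanks(s) for s in forms})   # {'DO10I=1,5'}
```

The keypunching rationale in the text follows directly: if blanks cannot change a program's meaning, miscounted blanks in handwritten source cannot introduce errors.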
Another debatable design choice was to rule out "mixed" mode expressions involving both integer and floating point quantities. Although our Preliminary Report had included such expressions, and rules for evaluating them, we felt that if code for type conversion were to be generated, the user should be aware of that, and the best way to insure that he was aware was to ask him to specify them. I believe we were also doubtful of the usefulness of the rules in our Report for evaluating mixed expressions. In any case, the most common sort of "mixtures" was allowed: integer exponents and function arguments were allowed in a floating point expression.

In late 1954 and early 1955, after completing the Preliminary Report, Harlan Herrick, Irving Ziller and I gave perhaps five or six talks about our plans for FORTRAN to various groups of IBM customers who had ordered a 704 (the 704 had been announced about May 1954). At these talks we covered the material in the Report and discussed our plans for the compiler (which was to be completed within about six months [!]; this was to remain the interval-to-completion until it actually was completed over two years later, in April 1957). In addition to informing customers about our plans, another purpose of these talks was to assemble a list of their objections and further requirements. In this we were disappointed; our listeners were mostly skeptical; I believe they had heard too many glowing descriptions of what turned out to be clumsy systems to take us seriously. In those days one was accustomed to finding lots of peculiar but significant restrictions in a system when it finally arrived that had not been mentioned in its original description. Most of all, our claims that we would produce efficient object programs were disbelieved.
Whatever the reasons, we received almost no suggestions or criticisms; our listeners had done almost no thinking about the problems we faced. Thus we felt that our trips to Washington (D.C.), Albuquerque, Pittsburgh, Los Angeles, and one or two other places were not very helpful.

One trip to give our talk, probably in January 1955, had an excellent payoff. This talk, at United Aircraft Corp., resulted in an agreement between our group and Walter Ramshaw at United Aircraft that Roy Nutt would become a regular part of our effort (although remaining an employee of United Aircraft) to contribute his expertise on input-output and assembly routines. With a few breaks due to his involvement in writing various SHARE programs, he would thenceforth come to New York two or three times a week until early 1957.

It is difficult to assess the influence the early work of the FORTRAN group had on other projects. Certainly the discussion of Laning and Zierler's algebraic compiler at the ONR Symposium in May 1954 would have been more likely to persuade someone to undertake a similar line of effort than would the brief discussion of the merits of using "a fairly natural mathematical language" that appeared there in the paper by Herrick and myself [Backus and Herrick 1954]. But it was our impression that our discussions with various groups after that time, their access to our Preliminary Report, and their awareness of the extent and seriousness of our efforts either gave the initial stimulus to some other projects or at least caused them to be more active than they might have been otherwise. It was our impression, for example, that the "IT" project [Perlis, Smith and Van Zoeren 1957] at Purdue and later at Carnegie-Mellon began shortly after the distribution of our Preliminary Report, as did the "MATH-MATIC" project [Ash et al. 1957] at Sperry Rand.
It is not clear what influence, if any, our Los Angeles talk and earlier contacts with members of their group had on the PACT I effort [Baker 1956], which I believe was already in its formative stages when we got to Los Angeles. It is clear, whatever influence the specifications for FORTRAN may have had on other projects in 1954-55-56, that our plans were well advanced and quite firm by the end of 1954 and before we had contact with or knowledge of those other projects. Our specifications were not affected by them in any significant way as far as I am aware, even though some were operating before FORTRAN was (since they were primarily interested in providing an input language rather than in optimization, their task was considerably simpler than ours).

3. The construction of the compiler.

The FORTRAN compiler (or "translator" as we called it then) was begun in early 1955, although a lot of work on various schemes which would be used in it had been done in 1954; e.g., Herrick had done a lot of trial programming to test out our language, we had worked out the basic sort of machine programs which we wanted the compiler to generate for various source language phrases, and Ziller and I had worked out a basic scheme for translating arithmetic expressions. But the real work on the compiler got under way in our third location on the fifth floor of 15 East 56th Street. By the middle of February three separate efforts were underway. The first two of these concerned sections 1 and 2 of the compiler, and the third concerned the input, output and assembly programs we were going to need (see below). We believed then that these efforts would produce most of the compiler.
(The entire project was carried on by a loose cooperation between autonomous, separate groups of one, two, or three people; each group was responsible for a "section" of the compiler; each group gradually developed and agreed upon its own input and output specifications with the groups for neighboring sections; each group invented and programmed the necessary techniques for doing its assigned job.)

Section 1 was to read the entire source program, compile what instructions it could, and file all the rest of the information from the source program in appropriate tables. Thus the compiler was "one pass" in the sense that it "saw" the source program only once. Herrick was responsible for creating most of the tables, Peter Sheridan (who had recently joined us) compiled all the arithmetic expressions, and Roy Nutt compiled and/or filed the I/O statements. Herrick, Sheridan and Nutt got some help later on from R. J. Beeber and H. Stern, but they were the architects of section 1 and wrote most of its code. Sheridan devised and implemented a number of optimizing transformations on expressions [Sheridan 1959] which sometimes radically altered them (of course without changing their meaning). Nutt transformed the I/O "lists of quantities" into nests of DO statements which were then treated by the regular mechanisms of the compiler. The rest of the I/O information he filed for later treatment in section 6, the assembler section. (For further details about how the various sections of the compiler worked see [Backus et al. 1957].)

Using the information that was filed in section 1, section 2 faced a completely new kind of problem; it was required to analyze the entire structure of the program in order to generate optimal code from DO statements and references to subscripted variables.
The simplest way to effect a reference to A(I,J) is to evaluate an expression involving the address of A(1,1), I, and K×J, where K is the length of a column (when A is stored column-wise). But this calculation, with its multiplication, is much less efficient than the way most hand coded programs effect a reference to A(I,J), namely, by adding an appropriate constant to the address of the preceding reference to the array A whenever I and J are changing linearly. To employ this far more efficient method section 2 had to determine when the surrounding program was changing I and J linearly.

Thus one problem was that of distinguishing between, on the one hand, references to an array element which the translator might treat by incrementing the address used for a previous reference, and those array references, on the other hand, which would require an address calculation starting from scratch with the current values of the subscripts. It was decided that it was not practical to track down and identify linear changes in subscripts resulting from assignment statements. Thus the sole criterion for linear changes, and hence for efficient handling of array references, was to be that the subscripts involved were being controlled by DO statements. Despite this simplifying assumption, the number of cases that section 2 had to analyze in order to produce optimal or near-optimal code was very large. (The number of such cases increased exponentially with the number of subscripts; this was a prime factor in our decision to limit them to three; the fact that the 704 had only three index registers was not a factor.)

It is beyond the scope of this paper to go into the details of the analysis which section 2 carried out. It will suffice to say that it produced code of such efficiency that its output would startle the programmers who studied it.
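The addressing arithmetic described above can be sketched concretely. Assuming column-major storage and 1-based subscripts as in the text (the function names and zero base address are mine, for illustration only), the optimization replaces a per-reference multiplication with a constant increment:

```python
# Column-major addressing for a K-row array A, as in the text:
# address of A(i,j) = base + (i-1) + K*(j-1).

K = 10        # rows per column of A (illustrative)
BASE = 0      # address of A(1,1) (illustrative)

def addr_naive(i, j):
    """Full address calculation, costing a multiplication every time."""
    return BASE + (i - 1) + K * (j - 1)

def scan_incremental(i, j_from, j_to):
    """Addresses for a row scan A(i, j_from..j_to) the efficient way:
    compute one address outside the loop, then add the constant K
    per step, as a hand coder (or section 2) would index the array."""
    addr = addr_naive(i, j_from)
    addrs = []
    for _ in range(j_from, j_to + 1):
        addrs.append(addr)
        addr += K            # constant increment replaces K*(j-1)
    return addrs

print(scan_incremental(2, 1, 4))   # [1, 11, 21, 31]
```

Both methods yield the same addresses; the incremental form is only valid while the subscripts change linearly, which is exactly the condition section 2 had to detect.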
It moved code out of loops where that was possible; it took advantage of the differences between row-wise and column-wise scans; it took note of special cases to optimize even the exits from loops. The degree of optimization performed by section 2 in its treatment of indexing, array references, and loops was not equalled again until optimizing compilers began to appear in the middle and late sixties.

The architecture and all the techniques employed in section 2 were invented by Robert A. Nelson and Irving Ziller. They planned and programmed the entire section. Originally it was their intention to produce the complete code for their area, including the choice of the index registers to be used (the 704 had three index registers). When they started looking at that problem it rapidly became clear that it was not going to be easy to treat it optimally. At that point I proposed that they should produce a program for a 704 with an unlimited number of index registers, and that later sections would analyze the frequency of execution of various parts of the program (by a Monte Carlo simulation of its execution) and then make index register assignments so as to minimize the transfers of items between the store and the index registers. This proposal gave rise to two new sections of the compiler which we had not anticipated, sections 4 and 5 (section 3 was added still later to convert the output of sections 1 and 2 to the form required for sections 4, 5, and 6).
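The Monte Carlo frequency idea in this proposal can be sketched in miniature. Everything below is invented for illustration (the real analysis ran on compiler intermediate form, with branch estimates drawn from DO and FREQUENCY statements):

```python
# Toy Monte Carlo estimate of basic-block execution frequencies:
# repeatedly simulate control flow through a small block graph,
# choosing successors with the estimated branch probabilities,
# and count how often each block is executed.

import random

def estimate_frequencies(successors, probs, start, end,
                         trials=10_000, seed=1):
    """Return visit counts per basic block over `trials` simulated runs."""
    rng = random.Random(seed)
    counts = {b: 0 for b in successors}
    for _ in range(trials):
        block = start
        while True:
            counts[block] += 1
            if block == end:
                break
            block = rng.choices(successors[block], weights=probs[block])[0]
    return counts

# Entry B0 -> loop body B1 (repeats with probability 0.9) -> exit B2.
successors = {"B0": ["B1"], "B1": ["B1", "B2"], "B2": []}
probs      = {"B0": [1.0],  "B1": [0.9, 0.1],  "B2": []}
counts = estimate_frequencies(successors, probs, "B0", "B2")
print(counts["B1"] / counts["B0"])   # roughly 10: the body dominates
```

A register allocator fed these counts would then favor keeping the quantities used in hot blocks like B1 in index registers.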
In the fall of 1955 Lois Mitchell Haibt joined our group to plan and program section 4, which was to analyze the flow of a program produced by sections 1 and 2, divide it into "basic blocks" (which contained no branching), do a Monte Carlo (statistical) analysis of the expected frequency of execution of basic blocks--by simulating the behavior of the program and keeping counts of the use of each block--using information from DO statements and FREQUENCY statements, and collect information about index register usage (for more details see [Backus et al. 1957; Cocke and Schwartz 1970, p. 511]). Section 5 would then do the actual transformation of the program from one having an unlimited number of index registers to one having only three. Again, the section was entirely planned and programmed by Haibt.

Section 5 was planned and programmed by Sheldon Best, who was loaned to our group by agreement with his employer, Charles W. Adams, at the Digital Computer Laboratory at MIT; during his stay with us Best was a temporary IBM employee. Starting in the early fall of 1955, he designed what turned out to be, along with section 2, one of the most intricate and complex sections of the compiler, one which had perhaps more influence on the methods used in later compilers than any other part of the FORTRAN compiler. (For a discussion of his techniques see [Cocke and Schwartz 1970, pp. 510-515].) It is impossible to describe his register allocation method here; it suffices to say that it was to become the basis for much subsequent work and produced code which was very difficult to improve. Although I believe that no provably optimal register allocation algorithm is known for general programs with loops, etc., empirically Best's 1955-56 procedure appeared to be optimal. For straight-line code Best's replacement policy was the same as that used in Belady's MIN algorithm, which Belady proved to be optimal [Belady 1965].
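For reference, the MIN policy on straight-line code evicts the resident value whose next use lies furthest in the future. A toy miss-counting sketch (this is an illustration of Belady's rule, not Best's actual procedure, which also handled loads, stores, and modified values):

```python
# Belady's MIN replacement over a straight-line sequence of value
# uses, with k registers: on a miss with all registers full, evict
# the value whose next use is furthest away (or never comes).

def min_misses(uses, k):
    """Count loads ("misses") incurred by the MIN policy."""
    misses, resident = 0, set()
    for t, v in enumerate(uses):
        if v in resident:
            continue                     # value already in a register
        misses += 1
        if len(resident) >= k:
            def next_use(x):
                rest = uses[t + 1:]
                return rest.index(x) if x in rest else len(uses) + 1
            resident.discard(max(resident, key=next_use))
        resident.add(v)
    return misses

# a,b,c each loaded once; at c, the never-used-again a is evicted,
# so the later uses of b and (none of) a cost nothing extra.
print(min_misses(list("abacb"), 2))   # 3
```

A least-recently-used policy on the same sequence would evict b at the load of c and pay a fourth load, which is why foresight over the whole straight-line sequence is what makes MIN optimal.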
Although Best did not publish a formal proof, he had convincing arguments for the optimality of his policy in 1955.

Late in 1955 it was recognized that yet another section, section 3, was needed. This section merged the outputs of the preceding sections into a single uniform 704 program which could refer to any number of index registers. It was planned and programmed by Richard Goldberg, a mathematician who joined us in November 1955. Also, late in 1956, after Best had returned to MIT and during the debugging of the system, section 5 was taken over by Goldberg and David Sayre (see below), who diagrammed it carefully and took charge of its final debugging.

The final section of the compiler, section 6, assembled the final program into a relocatable binary program (it could also produce a symbolic program in SAP, the SHARE Assembly Program for the 704). It produced a storage map of the program and data that was a compact summary of the FORTRAN output. Of course it also obtained the necessary library programs for inclusion in the object program, including those required to interpret FORMAT statements and perform input-output operations. Taking advantage of the special features of the programs it assembled, this assembler was about ten times faster than SAP. It was designed and programmed by Roy Nutt, who also wrote all the I/O programs and the relocating binary loader for loading object programs.

By the summer of 1956 large parts of the system were working. Sections 1, 2, and 3 could produce workable code provided no more than three index registers were needed. A number of test programs were compiled and run at this time. Nutt took part of the system to United Aircraft (sections 1, 2, and 3 and the part of section 6 which produced SAP output). This part of the system was productive there from the summer of 1956 until the complete system was available in early 1957.
From late spring of 1956 to early 1957 the pace of debugging was intense; often we would rent rooms in the Langdon Hotel (which disappeared long ago) on 56th Street, sleep there a little during the day and then stay up all night to get as much use of the computer (in the headquarters annex on 57th Street) as possible.

It was an exciting period; when later on we began to get fragments of compiled programs out of the system, we were often astonished at the surprising transformations in the indexing operations and in the arrangement of the computation which the compiler made, changes which made the object program efficient but which we would not have thought to make as programmers ourselves (even though, of course, Nelson or Ziller could figure out how the indexing worked, Sheridan could explain how an expression had been optimized beyond recognition, and Goldberg or Sayre could tell us how section 5 had generated additional indexing operations). Transfers of control appeared which corresponded to no source statement, expressions were radically rearranged, and the same DO statement might produce no instructions in the object program in one context, and in another it would produce many instructions in different places in the program.

By the summer of 1956 what appeared to be the imminent completion of the project started us worrying (for perhaps the first time) about documentation. David Sayre, a crystallographer who had joined us in the spring (he had earlier consulted with Best on the design of section 5 and had later begun serving as second-in-command of what was now the "Programming Research Department") took up the task of writing the Programmer's Reference Manual [IBM 1956]. It appeared in a glossy cover, handsomely printed, with the date October 15, 1956. It stood for some time as a unique example of a manual for a programming language (perhaps it still does): it had wide margins, yet was only 51 pages long.
Its description of the FORTRAN language, exclusive of input-output statements, was 21 pages; the I/O description occupied another 11 pages; the rest of it was examples and details about arithmetic, tables, etc. It gave an elegant recursive definition of expressions (as given by Sheridan), and concise, clear descriptions, with examples, of each statement type, of which there were 32, mostly machine dependent items like SENSE LIGHT, IF DIVIDE CHECK, PUNCH, READ DRUM, and so on. (For examples of its style see figs. 1, 2, and 3.)

One feature of FORTRAN I is missing from the Programmer's Reference Manual, not from an oversight of Sayre's, but because it was added to the system after the manual was written and before the system was distributed. This feature was the ability to define a function by a "function statement". These statements had to precede the rest of the program. They looked like assignment statements, with the defined function and dummy arguments on the left and an expression involving those arguments on the right. They are described in the addenda to the Programmer's Reference Manual [Addenda 1957] which we sent on February 8, 1957 to John Greenstadt, who was in charge of IBM's facility for distributing information to SHARE. They are also described in all subsequent material on FORTRAN I.

The next documentation task we set ourselves was to write a paper describing the FORTRAN language and the translator program. The result was a paper entitled "The FORTRAN automatic coding system" [Backus et al. 1957] which we presented at the Western Joint Computer Conference in Los Angeles in February 1957. I have mentioned all of the thirteen authors of that paper in the preceding narrative except one: Robert A. Hughes. He was employed by the Livermore Radiation Laboratory; by arrangement with Sidney Fernbach, he visited us for two or three months in the summer of 1956 to help us document the system. (The authors of that paper were: J. W.
Backus, R. J. Beeber, S. Best, R. Goldberg, L. M. Haibt, H. L. Herrick, R. A. Hughes, R. A. Nelson, R. Nutt, D. Sayre, P. B. Sheridan, H. Stern, I. Ziller.)

At about the time of the Western Joint Computer Conference we spent some time in Los Angeles still frantically debugging the system. North American Aviation gave us time at night on their 704 to help us in our mad rush to distribute the system. Up to this point there had been relatively little interest from 704 installations (with the exception of Ramshaw's United Aircraft shop, Harry Cantrell's GE installation in Schenectady, and Sidney Fernbach's Livermore operation), but now that the full system was beginning to generate object programs, interest picked up in a number of places.

Sometime in early April 1957 we felt the system was sufficiently bug-free to distribute to all 704 installations. Sayre and Grace Mitchell (see below) started to punch out the binary decks of the system, each of about 2,000 cards, with the intention to make 30 or 40 decks for distribution. This process was so error-prone that they had to give up after spending an entire night in producing only one or two decks. (Apparently one of those decks was sent, without any identification or directions, to the Westinghouse Bettis installation, where a puzzled group headed by Herbert S. Bright, suspecting that it might be the long-awaited FORTRAN deck, proceeded, entirely by guesswork, to get it to compile a test program--after a diagnostic printout noting that a comma was missing in a specific statement! This program then printed 28 pages of correct results--with a few FORMAT errors. The date: April 20, 1957. An amusing account of this incident by Bright is in Computers and Automation [Bright 1971].)
After failing to produce binary decks, Sayre devised and programmed the simple editor and loader that made it possible to distribute and update the system from magnetic tapes; this arrangement served as the mechanism for creating new system tapes from a master tape and the binary correction cards which our group would generate in large numbers during the long field debugging and maintenance period which followed distribution.

With the distribution of the system tapes went a "Preliminary Operator's Manual" [Operator's Manual 1957] dated April 8, 1957. It describes how to use the tape editor and how to maintain the library of functions. Five pages of such general instructions are followed by 32 pages of error stops; many of these say "source program error, get off machine, correct formula in question and restart problem" and then, for example (stop 3624), "non-zero level reduction due to insufficient or redundant parentheses in arithmetic or IF-type formula". Shortly after the distribution of the system we distributed--one copy per installation--what was fondly known as the "Tome", the complete symbolic listing of the entire compiler plus other system and diagnostic information, an 11" by 15" volume about four or five inches thick.

NOTE: the graphics below are explanatory, so I placed pertinent text under each image but continue past the graphics as before absent text; note the link at the end of this post has pertinent information.

Subscripts.

GENERAL FORM: Let v represent any fixed point variable and c (or c') any unsigned fixed point constant. Then a subscript is an expression of one of the forms:

v
c
v+c or v-c
c*v
c*v+c' or c*v-c'

EXAMPLES: I, 3, MU+2, MU-2, 5*J, 5*J+2, 5*J-2

The symbol * denotes multiplication. The variable v must not itself be subscripted.

Subscripted Variables.

GENERAL FORM: A fixed or floating point variable followed by parentheses enclosing 1, 2, or 3 subscripts separated by commas.
EXAMPLES A(I), K(3), BETA(5*J-2, K+2, L). For each variable that appears in subscripted form the size of the array (i.e., the maximum values which its subscripts can attain) must be stated in a DIMENSION statement (see Chapter 6) preceding the first appearance of the variable. The minimum value which a subscript may assume in the object program is +1. Arrangement of Arrays in Storage. A 2-dimensional array A will, in the object program, be stored sequentially in the order A1,1, A2,1, ..., Am,1, A1,2, A2,2, ..., Am,2, ..., Am,n. Thus it is stored "columnwise", with the first of its subscripts varying most rapidly, and the last varying least rapidly. The same is true of 3-dimensional arrays. 1-dimensional arrays are of course simply stored sequentially. All arrays are stored backwards in storage; i.e., the above sequence is in the order of decreasing absolute location. Any such routine will be compiled into the object program as a closed subroutine. In the section on Writing Subroutines for the Master Tape in Chapter 7 are given the specifications which any such routine must meet. Expressions. An expression is any sequence of constants, variables (subscripted or not subscripted), and functions, separated by operation symbols, commas, and parentheses so as to form a meaningful mathematical expression. However, one special restriction does exist. A FORTRAN expression may be either a fixed or a floating point expression, but it must not be a mixed expression. This does not mean that a floating point quantity cannot appear in a fixed point expression, or vice versa, but rather that a quantity of one mode can appear in an expression of the other mode only in certain ways. Briefly, a floating point quantity can appear in a fixed point expression only as an argument of a function; a fixed point quantity can appear in a floating point expression only as an argument of a function, or as a subscript, or as an exponent. Formal Rules for Forming Expressions. 
By repeated use of the following rules, all permissible expressions may be derived. 1. Any fixed point (floating point) constant, variable, or subscripted variable is an expression of the same mode. Thus 3 and I are fixed point expressions, and ALPHA and A(I,J,K) are floating point expressions. 2. If SOMEF is some function of n variables, and if E, F, ..., H are a set of n expressions of the correct modes for SOMEF, then SOMEF(E, F, ..., H) is an expression of the same mode as SOMEF. 3. If E is an expression, and if its first character is not + or -, then +E and -E are expressions of the same mode as E. Thus -A is an expression, but +-A is not. 4. If E is an expression, then (E) is an expression of the same mode as E. Thus (A), ((A)), (((A))), etc. are expressions. 5. If E and F are expressions of the same mode, and if the first character of F is not + or -, then E+F, E-F, E*F, and E/F are expressions of the same mode. Thus A-+B and A/+B are not expressions. The characters +, -, *, and / denote addition, subtraction, multiplication, and division. STOP GENERAL FORM "STOP" or "STOP n" where n is an unsigned octal fixed point constant. EXAMPLES STOP; STOP 77777. This statement causes the machine to HALT in such a way that pressing the START button has no effect. Therefore, in contrast to the PAUSE, it is used where a get-off-the-machine stop, rather than a temporary stop, is desired. The octal number n is displayed on the 704 console in the address field of the storage register. (If n is not stated it is taken to be 0.) DO GENERAL FORM "DO n i = m1, m2" or "DO n i = m1, m2, m3" where n is a statement number, i is a non-subscripted fixed point variable, and m1, m2, m3 are each either an unsigned fixed point constant or a non-subscripted fixed point variable. If m3 is not stated it is taken to be 1. 
EXAMPLES DO 30 I = 1, 10; DO 30 I = 1, M, 3. The DO statement is a command to "DO the statements which follow, to and including the statement with statement number n, repeatedly, the first time with i = m1 and with i increased by m3 for each succeeding time; after they have been done with i equal to the highest of this sequence of values which does not exceed m2, let control reach the statement following the statement with statement number n". The range of a DO is the set of statements which will be executed repeatedly; it is the sequence of consecutive statements immediately following the DO, to and including the statement numbered n. The index of a DO is the fixed point variable i, which is controlled by the DO in such a way that its value begins at m1 and is increased each time by m3 until it is about to exceed m2. Throughout the range it is available for computation, either as an ordinary fixed point variable or as the variable of a subscript. During the last execution of the range, the DO is said to be satisfied. Suppose, for example, that control has reached statement 10 of the program: 10 DO 11 I = 1, 10; 11 A(I) = I*N(I); 12. NOTE: The continuing text of the paper resumes from here. The proprietors of the six sections were kept busy tracking down bugs elicited by our customers' use of FORTRAN until the late summer of 1957. Hal Stern served as the coordinator of the field debugging and maintenance effort; he received a stream of telegrams, mail and phone calls from all over the country and distributed the incoming problems to the appropriate members of our group to track down the errors and generate correction cards, which he then distributed to every installation. In the spring of 1957 Grace E. Mitchell joined our group to write the Programmer's Primer [IBM 1957] for FORTRAN. The Primer was divided into three sections; each described successively larger subsets of the language accompanied by many example programs. 
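Returning to the two technical descriptions above, here is a rough modern illustration, not part of the original paper: the columnwise array-storage rule and the sample DO loop at statements 10-11 can both be sketched in Python. The array contents below are hypothetical, and the helper name is my own.

```python
# Columnwise ("column-major") storage as the manual describes it: the
# first subscript varies most rapidly. This helper computes the linear
# offset of a 1-based subscript tuple within an array of the given
# dimensions (ignoring the 704 detail that arrays are laid out at
# decreasing absolute addresses).
def colmajor_offset(subs, dims):
    offset, stride = 0, 1
    for s, d in zip(subs, dims):
        offset += (s - 1) * stride
        stride *= d
    return offset

# For a 3-by-2 array A the storage order is
# A(1,1), A(2,1), A(3,1), A(1,2), A(2,2), A(3,2):
print(colmajor_offset((1, 2), (3, 2)))  # -> 3

# The DO loop
#   10 DO 11 I = 1, 10
#   11 A(I) = I*N(I)
#   12 ...
# executes statement 11 with I = 1, 2, ..., 10, then control reaches 12.
N = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]  # hypothetical values of N(1..10)
A = [0] * 10
for I in range(1, 10 + 1):          # DO 11 I = 1, 10
    A[I - 1] = I * N[I - 1]         # 11 A(I) = I*N(I)  (0-based lists)
print(A)  # -> [3, 2, 12, 4, 25, 54, 14, 48, 45, 30]
```

The same offset rule extends to 3-dimensional arrays by supplying a third subscript and dimension.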
The first edition of the Primer was issued in the late fall or winter of 1957; a slightly revised edition appeared in March 1958. Mitchell planned and wrote the 64-page Primer with some consultation with the rest of the group; she later programmed most of the extensive changes in the system which resulted in FORTRAN II (see below). The Primer had an important influence on the subsequent growth in the use of the system. I believe it was the only available simplified instruction manual (other than reference manuals) until the later appearance of books such as [McCracken 1961], [Organick 1963] and many others. A report on FORTRAN usage in November 1958 [Backus 1958] says that "a survey in April [1958] of twenty-six 704 installations indicates that over half of them use FORTRAN [I] for more than half of their problems. Many use it for 80% or more of their work... and almost all use it for some of their work." By the fall of 1958 there were some 60 installations with about 66 704s, and "... more than half the machine instructions for these machines are being produced by FORTRAN. SHARE recently designated FORTRAN as the second official medium for transmittal of programs [SAP was the first]..." 4. FORTRAN II During the field debugging period some shortcomings of the system design, which we had been aware of earlier but had no time to deal with, were constantly coming to our attention. In the early fall of 1957 we started to plan ways of correcting these shortcomings; a document dated September 25, 1957 [Proposed Specifications 1957] characterizes them as (a) a need for better diagnostics, clearer comments about the nature of source program errors, and (b) the need for subroutine definition capabilities. (Although one FORTRAN I diagnostic would pinpoint, in a printout, a missing comma in a particular statement, others could be very cryptic.) 
This document is titled "Proposed Specifications for FORTRAN II for the 704"; it sketches a more general diagnostic system and describes the new "subroutine definition" and END statements, plus some others which were not implemented. It describes how symbolic information is retained in the relocatable binary form of a subroutine so that the "binary symbolic subroutine [BSS] loader" can implement references to separately compiled subroutines. It describes new prologues for these subroutines and points out that mixtures of FORTRAN-coded and assembly-coded relocatable binary programs could be loaded and run together. It does not discuss the FUNCTION statement that was also available in FORTRAN II. FORTRAN II was designed mostly by Nelson, Ziller, and myself. Mitchell programmed the majority of new code for FORTRAN II (with the most unusual feature that she delivered it ahead of schedule). She was aided in this by Bernyce Brady and LeRoy May. Sheridan planned and made the necessary changes in his part of section 1; Nutt did the same for section 6. FORTRAN II was distributed in the spring of 1958. 5. FORTRAN III While FORTRAN II was being developed, Ziller was designing an even more advanced system that he called FORTRAN III. It allowed one to write intermixed symbolic instructions and FORTRAN statements. The symbolic (704) instructions could have FORTRAN variables (with or without subscripts) as "addresses". In addition to this machine-dependent feature (which assured the demise of FORTRAN III along with that of the 704), it contained early versions of a number of improvements that were later to appear in FORTRAN IV. It had "Boolean" expressions, function and subroutine names could be passed as arguments, and it had facilities for handling alphanumeric data, including a new FORMAT code "A" similar to codes "I" and "E". This system was planned and programmed by Ziller with some help from Nelson and Nutt. 
Ziller maintained it and made it available to about 20 (mostly IBM) installations. It was never distributed generally. It was accompanied by a brief descriptive document [Additions to FORTRAN II 1958]. It became available on this limited scale in the winter of 1958-59 and was in operation until the early sixties, in part on the 709 using the compatibility feature (which made the 709 order code the same as that of the 704). 6. FORTRAN after 1958; comments. By the end of 1958 or early 1959 the FORTRAN group (the Programming Research Department), while still helping with an occasional debugging problem with FORTRAN II, was primarily occupied with other research. Another IBM department had long since taken responsibility for the FORTRAN system and was revising it in the course of producing a translator for the 709 which used the same procedures as the 704 FORTRAN II translator. Since my friends and I no longer had responsibility for FORTRAN and were busy thinking about other things by the end of 1958, that seems like a good point to break off this account. There remain only a few comments to be made about the subsequent development of FORTRAN. The most obvious defect in FORTRAN II for many of its users was the time spent in compiling. Even though the facilities of FORTRAN II permitted separate compilation of subroutines and hence eliminated the need to recompile an entire program at each step in debugging it, nevertheless compile times were long and, during debugging, the considerable time spent in optimizing was wasted. I repeatedly suggested to those who were in charge of FORTRAN that they should now develop a fast compiler and/or interpreter without any optimizing at all for use during debugging and for short-run jobs. Unfortunately the developers of FORTRAN IV thought they could have the best of both worlds in a single compiler, one which was both fast and produced optimized code. 
I was unsuccessful in convincing them that two compilers would have been far better than the compromise which became the original FORTRAN IV compiler. The latter was not nearly as fast as later compilers like WATFOR [Cress, Dirksen and Graham 1970] nor did it produce as good code as FORTRAN II. (For more discussion of later developments with FORTRAN see [Backus and Heising 1964].) My own opinion as to the effect of FORTRAN on later languages and the collective impact of such languages on programming generally is not a popular opinion. That viewpoint is the subject of a long paper [Backus 1978] which should appear soon in the Communications of the ACM. I now regard all conventional languages (e.g., the FORTRANs, the ALGOLs, their successors and derivatives) as increasingly complex elaborations of the style of programming dictated by the von Neumann computer. These "von Neumann languages" create enormous, unnecessary intellectual roadblocks in thinking about programs and in creating the higher level combining forms required in a really powerful programming methodology. Von Neumann languages constantly keep our noses pressed in the dirt of address computation and the separate computation of single words, whereas we should be focusing on the form and content of the overall result we are trying to produce. We have come to regard the DO, FOR, and WHILE statements and the like as powerful tools, whereas they are in fact weak palliatives that are necessary to make the primitive von Neumann style of programming viable at all. 
By splitting programming into a world of expressions on the one hand and a world of statements on the other, von Neumann languages prevent the effective use of higher level combining forms; the lack of the latter makes the definitional capabilities of von Neumann languages so weak that most of their important features cannot be defined, starting with a small, elegant framework, but must be built into the framework of the language at the outset. The gargantuan size of recent von Neumann languages is eloquent proof of their inability to define new constructs: for no one would build in so many complex features if they could be defined and would fit into the existing framework later on. The world of expressions has some elegant and useful mathematical properties, whereas the world of statements is a disorderly one, without useful mathematical properties. Structured programming can be viewed as a modest effort to introduce a small amount of order into the chaotic world of statements. The work of Dijkstra [1976], Hoare [1969], and others to axiomatize the properties of the statement world can be viewed as a valiant and effective effort to be precise about those properties, ungainly as they may be. This is not the place for me to elaborate any further my views about von Neumann languages. My point is this: while it was perhaps natural and inevitable that languages like FORTRAN and its successors should have developed out of the concept of the von Neumann computer as they did, the fact that such languages have dominated our thinking for twenty years is unfortunate. It is unfortunate because their long-standing familiarity will make it hard for us to understand and adopt new programming styles which one day will offer far greater intellectual and computational power. Acknowledgments My greatest debt in connection with this paper is to my old friends and colleagues whose creativity, hard work and invention made FORTRAN possible. 
It is a pleasure to acknowledge my gratitude to them for their contributions to the project, for making our work together so long ago such a congenial and memorable experience, and, more recently, for providing me with a great amount of information and helpful material in preparing this paper and for their careful reviews of an earlier draft. For all this I thank all those who were associated with the FORTRAN project but who are too numerous to list here. In particular I want to thank those who were the principal movers in making FORTRAN a reality: Sheldon Best, Richard Goldberg, Lois Haibt, Harlan Herrick, Grace Mitchell, Robert Nelson, Roy Nutt, David Sayre, Peter Sheridan, and Irving Ziller. I also wish to thank Bernard Galler, J. A. N. Lee, and Henry Tropp for their amiable, extensive and invaluable suggestions for improving the first draft of this paper. I am grateful too for all the work of the program committee in preparing helpful questions that suggested a number of topics in the paper. REFERENCES Most of the items listed below have dates in the fifties, thus many that appeared in the open literature will be obtainable only in the largest and oldest collections. The items with an asterisk were either not published or were of such a nature as to make their availability even less likely than that of the other items. Adams, Charles W. and Laning, J. H., Jr. 1954 May. The MIT systems of automatic coding: Comprehensive, Summer Session, and Algebraic. In Proc. Symp. on Automatic Programming for Digital Computers. Washington DC: The Office of Naval Research. *Addenda to the FORTRAN Programmer's Reference Manual. 1957 February 8. (Transmitted to Dr. John Greenstadt, Special Programs Group, Applied Science Division, IBM, for distribution to SHARE members, by letter from John W. Backus, Programming Research Dept. IBM. 5 pages.) *Additions to FORTRAN II 1958. Description of Source Language Additions to the FORTRAN II System. 
New York: Programming Research, IBM Corp. (Distributed to users of FORTRAN III. 12 pages.) *Ash, R.; Broadwin, E.; Della Valle, V.; Katz, C.; Greene, M.; Jenny, A.; and Yu, L. 1957. Preliminary Manual for MATH-MATIC and ARITH-MATIC Systems (for Algebraic Translation and Compilation for UNIVAC I and II). Philadelphia Pa: Remington Rand UNIVAC. Backus, J. W. 1954 January. The IBM 701 Speedcoding system. JACM 1 (1): 4-6. *Backus, John 1954 May 21. Letter to J. H. Laning, Jr. Backus, J. W. 1958 November. Automatic programming: properties and performance of FORTRAN systems I and II. In Proc. Symp. on the Mechanisation of Thought Processes. Teddington, Middlesex, England: The National Physical Laboratory. Backus, John 1976 June 10-15. Programming in America in the nineteen fifties: some personal impressions. In Proc. International Conf. on the History of Computing, Los Alamos. (Publisher yet to be selected.) Backus, John 1978. The von Neumann style as an obstacle to high level programming; an alternative functional style and its algebra of programs. (To appear, CACM.) Backus, J. W. and Heising, W. P. 1964 August. FORTRAN. In IEEE Transactions on Electronic Computers. Vol EC-13 (4): 382-385. Backus, John W. and Herrick, Harlan 1954 May. IBM 701 Speedcoding and other automatic programming systems. In Proc. Symp. on Automatic Programming for Digital Computers. Washington DC: The Office of Naval Research. Backus, J. W.; Beeber, R. J.; Best, S.; Goldberg, R.; Haibt, L. M.; Herrick, H. L.; Nelson, R. A.; Sayre, D.; Sheridan, P. B.; Stern, H.; Ziller, I.; Hughes, R. A.; and Nutt, R. 1957 February. The FORTRAN automatic coding system. In Proc. Western Joint Computer Conf. Los Angeles. Baker, Charles L. 1956 October. The PACT I coding system for the IBM Type 701. JACM 3 (4): 272-278. Belady, L. A. 1965 June 15. Measurements on programs: one level store simulation. Yorktown Heights NY: IBM Thomas J. Watson Research Center. Report RC 1420. Böhm, Corrado 1954. 
Calculatrices digitales: Du déchiffrage de formules logico-mathématiques par la machine même dans la conception du programme. In Annali di Matematica Pura ed Applicata 37 (4): 175-217. Bouricius, Willard G. 1953 December. Operating experience with the Los Alamos 701. In Proc. Eastern Joint Computer Conf. Washington DC. Bright, Herbert S. 1971 November. FORTRAN comes to Westinghouse-Bettis, 1957. In Computers and Automation. Brown, J. H. and Carr, John W., III 1954 May. Automatic programming and its development on MIDAC. In Proc. Symp. on Automatic Programming for Digital Computers. Washington DC: The Office of Naval Research. Cocke, John and Schwartz, J. T. 1970 April. Programming Languages and their Compilers. (Preliminary Notes, Second Revised Version, April 1970.) New York: New York University, The Courant Institute of Mathematical Sciences. Cress, Paul; Dirksen, Paul; and Graham, J. Wesley 1970. FORTRAN IV with WATFOR and WATFIV. Englewood Cliffs NJ: Prentice-Hall. Dijkstra, Edsger W. 1976. A Discipline of Programming. Englewood Cliffs NJ: Prentice-Hall. Grems, Mandalay and Porter, R. E. 1956. A truly automatic programming system. In Proc. Western Joint Computer Conf. 10-21. Hoare, C. A. R. 1969 October. An axiomatic basis for computer programming. CACM 12 (10): 576-580, 583. *IBM 1956 October 15. Programmer's Reference Manual, The FORTRAN Automatic Coding System for the IBM 704 EDPM. New York: IBM Corp. (Applied Science Division and Programming Research Dept., Working Committee: J. W. Backus, R. J. Beeber, S. Best, R. Goldberg, H. L. Herrick, R. A. Hughes [Univ. of Calif. Radiation Lab., Livermore, Calif.], L. B. Mitchell, R. A. Nelson, R. Nutt [United Aircraft Corp., East Hartford, Conn.], D. Sayre, P. B. Sheridan, H. Stern, I. Ziller.) *IBM 1957. Programmer's Primer for FORTRAN Automatic Coding System for the IBM 704. New York: IBM Corp. Form No. 32-0306. Knuth, Donald E. and Pardo, Luis Trabb 1977. 
Early development of programming languages. In Encyclopedia of Computer Science and Technology. Vol 7: 419-493. New York: Marcel Dekker. *Laning, J. H. and Zierler, N. 1954 January. A program for translation of mathematical equations for Whirlwind I. Cambridge Mass.: MIT Instrumentation Lab. Engineering Memorandum E-364. McCracken, Daniel D. 1961. A Guide to FORTRAN Programming. New York: Wiley. Moser, Nora B. 1954 May. Compiler method of automatic programming. In Proc. Symp. on Automatic Programming for Digital Computers. Washington DC: The Office of Naval Research. Muller, David E. 1954 May. Interpretive routines in the ILLIAC library. In Proc. Symp. on Automatic Programming for Digital Computers. Washington DC: The Office of Naval Research. *Operator's Manual 1957 April 8. Preliminary Operator's Manual for the FORTRAN Automatic Coding System for the IBM 704 EDPM. New York: IBM Corp. Programming Research Dept. Organick, Elliot I. 1963. A FORTRAN Primer. Reading Mass.: Addison-Wesley. *Perlis, A. J.; Smith, J. W.; and Van Zoeren, H. R. 1957 March. Internal Translator (IT): a compiler for the 650. Pittsburgh: Carnegie Institute of Tech. *Preliminary Report 1954 November 10. Specifications for the IBM mathematical FORmula TRANslating system, FORTRAN. New York: IBM Corp. (Report by Programming Research Group, Applied Science Division, IBM. Distributed to prospective 704 customers and other interested parties. 29 pages.) *Proposed Specifications 1957 September 25. Proposed Specifications for FORTRAN II for the 704. (Unpublished memorandum, Programming Research Dept. IBM.) *Remington Rand, Inc. 1953 November 15. The A-2 compiler system operations manual. Prepared by Richard K. Ridgway and Margaret H. Harper under the direction of Grace M. Hopper. Rutishauser, Heinz 1952. Automatische Rechenplanfertigung bei programmgesteuerten Rechenmaschinen. In Mitteilungen aus dem Inst. für angew. Math. an der E.T.H. Zürich. Nr. 3. Basel: Birkhäuser. 
Sammet, Jean E. 1969. Programming Languages: History and Fundamentals. Englewood Cliffs NJ: Prentice-Hall. Sheridan, Peter B. 1959 February. The arithmetic translator-compiler of the IBM FORTRAN automatic coding system. CACM 2 (2): 9-21. *Schlesinger, S. I. 1953 July. Dual coding system. Los Alamos NM: Los Alamos Scientific Lab. Los Alamos Report LA 1573. Zuse, K. 1959. Über den Plankalkül. In Elektron. Rechenanl. 1: 68-71. Zuse, K. 1972. Der Plankalkül. In Berichte der Gesellschaft für Mathematik und Datenverarbeitung. 63, part 3. Bonn. (Manuscript prepared in 1945.) URL https://www.softwarepreservation.org/projects/FORTRAN/paper/p165-backus.pdf If the above URL doesn't work, I have the PDF, and the PDF as images, in my public storage https://1drv.ms/f/c/ea9004809c2729bb/EooBcDF17hpDo608yokm4bMBRedtOlqRpkbMsm32ztSddw?e=rQZfJh converted from PDFs at the following: https://pdf2png.com/ -
Tactical and Strategy games.
richardmurray replied to Rodney campbell's topic in BlackGamesElite's BGE Forum
thank you @gio74 + @mellypops for joining, and you are free to share gaming news or interests of your own :) please do so -
Tactical and Strategy games.
mellypops replied to Rodney campbell's topic in BlackGamesElite's BGE Forum
Such games have been my favorite for a long time. I like challenges, I like to think, to plan, and so on. I can play such games for hours without being bored. But from time to time I also like to play shooters -
Tactical and Strategy games.
gio74 replied to Rodney campbell's topic in BlackGamesElite's BGE Forum
I've always been more into sports games, but now I'm discovering new games. Tactical games are interesting, and they really make you think. I've already tried Sins of a Solar Empire and it's a really interesting game -
For those that may know, I have always said I will honor those who follow me, but I have been too busy creating my own work. A few months ago, I realized I wanted/needed to program more, and so the HDKiriban series was born; this is the first in the series. DogoKwan is a simple tile game. You can change the settings, the dimensions, or the difficulty. For my first 25 followers on deviantart I have a dropdown list to display their work. I will continue my HDKiriban series with the second game in the list for members 1-50. I am open to discussion :) And please save a screenshot of you finishing a game in the comments. WARNING! :) Let me help you: this game has an easy bug. If you change the dimension or difficulty settings while playing, you will cause problems
-
Tonfa Girl Game Prototype from Dualmask
richardmurray posted a blog entry in BlackGamesElite's BGE Journal
referral https://www.deviantart.com/dualmask/art/Tonfa-Girl-Game-Prototype-1034337267 -
Rock Paper Scissors project with Angelalita77
richardmurray posted a blog entry in BlackGamesElite's BGE Arcade
A friend of mine inspired this project; this is version 1, more are coming. Version 1: Yellow means a draw, blue means the computer won, brown means you the user won. R means Rock, P means Paper, S means Scissors. Click Start to start the game, Stop to end it. You can click the R, P, or S button to set your value; the computer chooses its own away from your eyes :) but the results will be shown. Version 2: suggestions from angelalita about showing what the computer is doing, and my own trim with the tiles, like dominoes. Tileshift can be accessed at any time, but the wisest thing is to start it after you have stopped the rock paper scissors game. Version 3 -
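For anyone curious, the win/draw logic the post describes can be sketched in a few lines of Python. This is my own illustration, not the actual game's code; the function name, BEATS table, and color strings are assumptions based on the color key above.

```python
import random

# Each key beats the value it maps to: Rock beats Scissors, etc.
BEATS = {"R": "S", "P": "R", "S": "P"}

def play_round(user_choice):
    """Play one round against the computer; returns (computer_choice,
    result_color) using the color code from the post:
    yellow = draw, blue = computer won, brown = the user won."""
    computer_choice = random.choice("RPS")
    if computer_choice == user_choice:
        return computer_choice, "yellow"
    if BEATS[user_choice] == computer_choice:
        return computer_choice, "brown"
    return computer_choice, "blue"
```

A real version would also track the running score and drive the tile display, but the decision rule is just this table lookup.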
I answered Yes. I even have training on how to build a PC as recently as 2017.
-
An Early Laptop
richardmurray commented on richardmurray's blog entry in BlackGamesElite's BGE Journal
@Troy computational elements means programs/code, physical elements means battery/power source/resistors/capacitors/cords/circuit boards and more -
No, I don’t have any tech from the 70’s. There wasn’t much personal tech to speak of back then, just a TV, radio, a variety of devices to play music, a calculator, and maybe a digital watch. Yeah, that laptop did not have a hard drive. One of the floppy disks held the operating system and the other held the program that you were running. I’m not sure they called that device a laptop back then; it might’ve been called a portable computer, but I could be wrong. I did not answer the question because I did not know what you meant between computational and physical elements. I used to build and sell personal computers back in the early 90s.
-
An early laptop
-
FractalLens 01
richardmurray commented on richardmurray's blog entry in BlackGamesElite's BGE Arcade
example- have fun getting images factor .425 Restrictions repetition None No Restrictions axis Not bottom right Restrictions coin toss None -
Have any of you played this? I think a nice project for this group would be making an electronic version of it. What do you think @Milton? We can work together and make it on the aalbc website. link Buying page at MVmediaatl Info on the book Ki Khanga: The Sword and Soul Role Playing Game puts you in the role of a character of your liking in a world of mystery and magic; of villainy and victory; of sword... and soul. Will you delve for lost artifacts in the ruins of ancient temples? Strap on beaded armor and an nkisi necklace to battle undead legions as they storm your city upon the backs of skeletal camels, or defend your village from a swarm of ravenous impundulu? Whether you're making your way through the magical forests of Wandatu or fighting to survive in the palm oil-lit back alleys of Sati-Baa, you and your team will need all your wits, combat skill, and magic to make it through. But most of all, you'll need each other. $20.00 (regularly $25.00)
-
Christina Game updates from Dualmask
richardmurray commented on richardmurray's blog entry in BlackGamesElite's BGE Journal
-
Christina Game updates from Dualmask
richardmurray posted a blog entry in BlackGamesElite's BGE Journal
-
christiana in unreal engine from dualmask
richardmurray posted a blog entry in BlackGamesElite's BGE Journal
Enjoy just a test -
Fallen Kingdom - Human Species. Ep 3
richardmurray posted a blog entry in BlackGamesElite's BGE Journal
from mystic-skillz -
Prototype for a weaponcombatleague game from Dualmask
richardmurray posted a blog entry in BlackGamesElite's BGE Journal
Prototype for a weaponcombatleague game from Dualmask/ Jonathan Price https://www.deviantart.com/dualmask/art/WCL-GridBattle-998197144 -
The complete Jet Dancer game playthrough. Get the game: https://store.steampowered.com/app/2084470/Jet_Dancer/
-
Black Lion and Cubs a Valley of the Kings Cartoon
richardmurray posted a blog entry in BlackGamesElite's BGE Journal
I played the game, here are some shots. I enjoyed the trailer. Couldn't embed, so you have to use the link below. Official site https://www.blacklionandcubs.com/ Video game http://valleyofthekings.blacklionandcubs.com/ On the blackeducationstation https://www.blackeducationstation.com/black-lion-and-cubs-cartoon Season 2 trailer https://www.blackeducationstation.com/black-lion-and-cubs-cartoon/videos/blc-season-2-intro more information info@blacklionandcubs.com