Unmanned Spaceflight.com _ Chit Chat _ question about how JPL handled double precision on IBM 7094 in the 60s

Posted by: ncc1701d Apr 20 2019, 06:37 PM

Hello,
I have a question about how they handled double-precision floating-point numbers at JPL in the 60s on the IBM 7094. First I'll give you a quote, then ask the question.

John Strand, author of the book Memoirs of an Astrophysicist Path to the Planets, who worked at JPL in the 60s, said the following on page 63:

"The IBM 7040-7094 Direct Couple was the joing of two main frames. One computer handled
I/O and the other was principally for number crunching. The operating system was batch mode.
Double precision used two single precision floating-point words each with seperate characteristics and mantissas. I will never for forget this single precision augmentation because of the work necessary to unpack both words from the octal dump in order to find the decimal number. True double precsion with a single characteristic and mantissa would have to wait"....etc
Later in the book it says: "The IBM 360 had a single characteristic and mantissa for its double precision word." The IBM 360 came right after the IBM 7094.

My question is whether anyone knows of this method of combining two single-precision words to make one double-precision word. I am trying to read a binary file that may have used this method, since the data was created on an IBM 7094, which came before the IBM 360. The NASA archives people say they no longer have access to the original Fortran programs that might give me clues about my problem or the format, and I have no way to get in touch with the book's author, so I am curious whether anyone here knows about this method.

This link:
https://nssdc.gsfc.nasa.gov/nssdc/formats/IBM7044_7090_7094.htm
refers to the two words that would make up a double-precision floating-point number, but it doesn't look like the "Second 36-bit Word" has its own characteristic AND mantissa - or does it?
I am trying to reconcile the author's "I will never forget" comment about how he handled double precision with what my link says about how NASA handled double precision on the IBM 7094.
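For what it's worth, here is a rough sketch (in Python, just as a reading aid) of how I am currently interpreting the format from that link - assuming each 36-bit word is a sign bit, an 8-bit excess-128 characteristic, and a 27-bit fraction, and that the two-word double-precision value is simply the sum of the two single-precision values. The octal words below are made-up examples, not from my actual file - please correct me if I have the layout wrong.

def decode_7094_word(word36):
    # One 36-bit IBM 7090/7094 floating-point word (as I read the NSSDC page):
    # bit 35 = sign, bits 34-27 = characteristic (excess 128, a power of 2),
    # bits 26-0 = 27-bit fraction with the binary point at the left.
    sign = -1.0 if (word36 >> 35) & 1 else 1.0
    characteristic = (word36 >> 27) & 0xFF
    fraction = (word36 & ((1 << 27) - 1)) / float(1 << 27)
    return sign * fraction * 2.0 ** (characteristic - 128)

def decode_7094_double(word1, word2):
    # Two-word "double precision" as the book describes it: each word carries
    # its own sign, characteristic and mantissa, and the value is their sum.
    return decode_7094_word(word1) + decode_7094_word(word2)

# Made-up octal-dump values, just to show usage:
w1 = 0o201400000000   # +0.5 * 2**(129-128) = 1.0
w2 = 0o000000000000   # low-order word zero
print(decode_7094_double(w1, w2))   # -> 1.0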
Any opinions, insights, or info on this subject would be helpful.
thanks

Posted by: HSchirmer Apr 20 2019, 07:07 PM

QUOTE (ncc1701d @ Apr 20 2019, 06:37 PM) *
My question is whether anyone knows of this method of combining two single-precision words to make one double-precision word. ...
Any opinions, insights, or info on this subject would be helpful.



My father worked on the 7044 and 360s; he might still have some manuals in the attic...


A bit of browsing around found this...


Posted by: ncc1701d Apr 20 2019, 10:22 PM

Hi thanks. I have some manuals and an emulator.
From what I understand, though, the Fortran code was not portable from one mainframe to the next, so I am not sure my emulator would be doing the same thing the JPL computers were doing for a particular mission. I would imagine JPL would have been doing something more specialized? Someone correct me if I am wrong.

Posted by: mcaplinger Apr 20 2019, 11:04 PM

QUOTE (ncc1701d @ Apr 20 2019, 02:22 PM) *
From what I understand, though, the Fortran code was not portable from one mainframe to the next...

Typically these sorts of operations are done in the compiler (7094 double-precision operations were implemented in hardware) so the code itself would be mostly portable except maybe for hacks dealing with overflow and underflow. It's pretty hard to get to the underlying representation of floating-point numbers from Fortran as far as I remember.

Have you read https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19660001134.pdf "Study of the Accuracy of the Double-Precision Arithmetic Operations on the IBM 7094 Computer", JPL Technical Memorandum No. 33-742, 1963?

Posted by: nogal Apr 21 2019, 03:54 PM

I worked a lot with FORTRAN IV on an IBM 360 Model 44. The architecture seems to be quite different from the 7094 (which I never worked with). As mcaplinger says the compiler would handle these subtleties.

If the need arose to peek inside a floating-point word or double word, in FORTRAN IV we would set up overlapping COMMON areas in different program modules (for instance, the main program and a subroutine), one with real numbers defined and the other with an overlapping string. Using the string we could manipulate any byte (character) - this was devil-daring code!
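For anyone wanting to do the same kind of peeking today, a rough modern analogue of that overlay trick (my own sketch in Python rather than FORTRAN IV) is to reinterpret the storage of a float as raw bytes:

import struct

x = 1.0
raw = struct.pack('>d', x)          # the 8 bytes of an IEEE 754 double, big-endian
print(raw.hex())                    # -> 3ff0000000000000
bits = int.from_bytes(raw, 'big')   # the same storage viewed as an integer
print(oct(bits))                    # an octal view, much like reading an octal dump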

Looks like the 7094 mantissa for floating-point numbers was in binary (base 2), but the 360 one was in base 16. So 6 hexadecimal digits (or 14 in the double-precision case) allowed representation of numbers between 16^-65 and 16^63 (approx. 5.39761 × 10^-79 to 7.237005 × 10^75). The 360 exponent had a bias of 64, while the 7094 seems to have had biases of 128 (short precision) and 1024 (double precision).
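To make that layout concrete, here is a small sketch of my own (in Python, assuming the usual S/360 arrangement of a sign bit, a 7-bit excess-64 exponent of 16, and then the fraction) showing how such a word decodes:

def decode_360_float(bits, fraction_bits=24):
    # S/360 hexadecimal float: sign bit, 7-bit excess-64 exponent of 16,
    # then a 24-bit (short) or 56-bit (long) fraction.
    total_bits = 1 + 7 + fraction_bits
    sign = -1.0 if (bits >> (total_bits - 1)) & 1 else 1.0
    characteristic = (bits >> fraction_bits) & 0x7F
    fraction = (bits & ((1 << fraction_bits) - 1)) / float(1 << fraction_bits)
    return sign * fraction * 16.0 ** (characteristic - 64)

print(decode_360_float(0x41100000))              # short form -> 1.0
print(decode_360_float(0x4110000000000000, 56))  # long form  -> 1.0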

In the 360 the hardware would operate directly on the single or double word and those would have to start on a "word boundary" - a byte with an address multiple of four - or "double word boundary". The compiler would allocate the areas according to the variable's type definition.

Post here or PM me if I can be of further assistance. Cheers
Fernando

Posted by: mcaplinger Apr 21 2019, 05:00 PM

It's worth noting that I doubt JPL was doing anything unique with the 7094, which was a standard, well-supported IBM product that came with all the needed software infrastructure (although the Direct Coupled OS was developed by a third party, it subsequently became an IBM product, apparently).

Posted by: mcaplinger Apr 21 2019, 05:24 PM

QUOTE (nogal @ Apr 21 2019, 07:54 AM) *
If the need to peek inside a floating-point number word or double word would arise, in FORTRAN IV, we would set up overlapping COMMON areas in different program modules (for instance main and a subroutine) one with real numbers defined and the other with an overlapping string.

Standard Fortran didn't have anything like a character type until Fortran 77, though IIRC there might have been non-standard extensions in some compilers before then. https://en.wikipedia.org/wiki/Hollerith_constant

I guess you could COMMON a floating-point value with an integer and extract the bytes with divisions; I certainly hope this was uncommon in code.
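Something like this, sketched in Python rather than Fortran, is the arithmetic that would involve - division and remainder only, since as far as I remember bit operators were not part of the Fortran standard back then:

def bytes_by_division(n, nbytes=4):
    # Peel 8-bit bytes off a non-negative integer, least significant first,
    # using only division and remainder.
    out = []
    for _ in range(nbytes):
        out.append(n % 256)   # remainder = low-order byte
        n = n // 256          # integer division = shift right by one byte
    return out

print(bytes_by_division(0x41100000))   # -> [0, 0, 16, 65]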

Posted by: nogal Apr 21 2019, 06:18 PM

In the early 1970s I used the FORTRAN IV level F compiler, first with the IBM 44-PS and then with the DOS 26.2 operating system. Though, as mcaplinger says, there was no string or character data type, we would do string manipulation using arrays of the LOGICAL type. Young students can be quite inventive... We did what we needed to do in order to get the required results, using the tools at hand.

I no longer have the IBM language manual for FORTRAN IV but was able to locate my "A Guide to FORTRAN IV Programming" by Daniel D. McCracken.

Fernando

Posted by: JRehling Apr 22 2019, 12:25 AM

This page, not specific to JPL, might be informative or interesting:

https://en.wikipedia.org/wiki/Single-precision_floating-point_format#IEEE_754_single-precision_binary_floating-point_format:_binary32

23 bits allows for considerable precision in a significand (about 7 decimal digits). While spaceflight is an area where you never want any sources of failure, I'm curious when the need for more precision than that was first felt. 1960s engineering would not likely have been able to operationalize sensors, actuators, etc., that could provide or require more precision than that, and with 1960s processors the cost in speed would not be trivial. The question is: when would an iota of added precision be worth the halving of speed?
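A quick back-of-the-envelope check of that figure (my own sketch, in Python, using IEEE 754 binary32 rather than anything period-specific):

import math, struct

# About 7.2 decimal digits in a 24-bit significand
# (23 stored bits plus the implicit leading 1):
print(24 * math.log10(2))

# Round-tripping a constant through single precision shows where it cuts off:
pi32 = struct.unpack('>f', struct.pack('>f', math.pi))[0]
print(math.pi)   # 3.141592653589793
print(pi32)      # 3.1415927410125732 -- agrees with the above to about 7 digits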

Posted by: mcaplinger Apr 22 2019, 01:04 AM

QUOTE (JRehling @ Apr 21 2019, 04:25 PM) *
23 bits allows for considerable precision in a significand (about 7 decimal digits).

Many numerical algorithms misbehave or even fail miserably in single precision. Double precision was added very early in the development of Fortran.
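A tiny illustration of the kind of thing that goes wrong (my own Python sketch, simulating 32-bit rounding with struct): subtract two nearly equal numbers and most of a 24-bit significand cancels away.

import struct

def to_f32(x):
    # Round a Python float (a 64-bit double) to the nearest 32-bit single.
    return struct.unpack('>f', struct.pack('>f', x))[0]

# (1 + 1e-5) - 1 should give 1e-5; in single precision only a few digits survive:
print((1.0 + 1e-5) - 1.0)          # ~1.0000000000066e-05, good to about 11 digits
print(to_f32(1.0 + 1e-5) - 1.0)    # ~1.0013580322266e-05, off in the 4th digit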
