The first machine Multics ran on was the GE-645, which used an extension to the basic architecture of the GE 600 series CPUs; after GE's computer business was sold to Honeywell, it then ran on a number of models from the follow-on Honeywell 6000 series.
Multics was born in the days when mainframes had lots of lights and switches, and the early machines that it ran on were no exception; early Multics hardware was festooned with lights.
A number of Web sites claim to have pictures of Multics panels, but alas, the 6000 series included models that ran another OS, GCOS, and the CPUs (and their front panels) are not the same; Multics CPUs included an extension called the 'Appending Unit' to support both virtual memory, and Multics' single-level store architecture. A lot of pictures of 'Multics' front panels actually show panels from GCOS machines.
To make things even more complicated, in the late 1970s Honeywell came up with an alternative extension to the 6000 Series architecture, the so-called 'New System Architecture'. It added a new security architecture (using domains, instead of the 'rings' of Multics), and also support for virtual memory (which the basic 6000 series architecture did not have). A new version of GCOS, GCOS 8, which came into use starting in 1980, made use of this.
So, this Web site attempts to show actual Multics panels; and also shows some of the panels from GCOS and NSA machines which are confused with panels from Multics machines.
'Active' units included CPUs and IOMs (I/O controllers); the 'passive' units are SCUs ('System Control Units', each of which contained a multi-port memory controller, and the memory attached to that controller). A 6000 Series system contained at least one CPU, one SCU and one IOM - and potentially many more of any or all of them (7-CPU systems existed).
(Having multiple units of each type was good for system robustness - a system could continue to operate, albeit with degraded performance, if one unit of a particular type had to be taken offline. It was also possible to split a large system into two smaller working systems - a capability which was used at several sites.)
So, in a system with 5 CPUs, 3 IOMs, and 6 SCUs, there would have been 30 separate CPU<->SCU links, and 18 IOM<->SCU links - and each CPU would have needed 6 ports (one for each SCU), and each SCU would have needed 8 ports (5 for the CPUs, and 3 for the IOMs).
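The port arithmetic above follows directly from the topology: every active unit (CPU or IOM) has a dedicated link to every SCU. A small Python sketch (a hypothetical helper, written here just to check the numbers, not anything from Honeywell documentation) makes the relationship explicit:

```python
def port_counts(cpus, ioms, scus):
    """Every active unit (CPU or IOM) has a dedicated link to every SCU,
    so link and port counts grow multiplicatively with configuration size."""
    cpu_scu_links = cpus * scus   # each CPU connects to every SCU
    iom_scu_links = ioms * scus   # each IOM connects to every SCU
    ports_per_cpu = scus          # one CPU port per SCU
    ports_per_scu = cpus + ioms   # one SCU port per active unit
    return cpu_scu_links, iom_scu_links, ports_per_cpu, ports_per_scu

# The 5-CPU / 3-IOM / 6-SCU example from the text:
print(port_counts(5, 3, 6))  # → (30, 18, 6, 8)
```

This multiplicative growth is why port count mattered so much: ports were expensive, so larger SCUs (needing fewer of them) let later machines get away with fewer ports per unit.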
This is important because early GCOS machines did not support the larger configurations: e.g. CPUs and IOMs could hook up to at most 4 SCUs, so GCOS CPUs had only 4 ports, and IOMs only 2. This means that early GCOS active unit configuration panels only show a limited number of ports - making them easy to tell apart from Multics panels.
The early Multics active units needed to be able to connect up to 8 SCUs, because early SCUs contained a limited amount (256KW) of memory. Some of the later NSA active units had more ports - e.g. the NSA IOM (below) had 8 ports.
When the later SCUs, which supported much more memory (4MW) arrived, machines did not need as many SCUs. Since ports were expensive, the later active units of both systems did not have as many ports - 4 ports, enough for 4 SCUs, was plenty.
Most of the early front panels were divided into several 'sub-panels', for:
It appears that there were basically three different generations of Honeywell machines that ran Multics:
(Most images below can be clicked on, to show a larger image.)
In addition to the large maintenance panel (shown below), the early CPUs held another, smaller, half-panel, the configuration panel, but we don't yet have a good image of that entire half-panel (but see the image of the MIT 6180 system, below); we do have a (poor) image of the Configuration sub-panel of that half-panel, though (below).
We don't have an image of the maintenance panel from the later Series 60 Level 68 Multics CPU, but we do have (below) the corresponding Series 60 Level 66 GCOS CPU panel, which is identical to the 6000 series GCOS CPU panel, except that it has changed from white to black, and the light bulbs have been replaced with LEDs; the Multics CPU panel for the Level 68 probably went through the same transformation.
|H6180 CPU Maintenance panel||Note the section at the top labeled 'Maintenance' - this is actually the sub-panel for the 'Appending Unit', the special extension found only in Multics CPUs. This particular panel now belongs to the Living Computers Museum, where it has been hooked up to a Multics CPU simulator; a video of it while Multics boots can be seen here.|
|6000 Series / Level 68 Multics CPU Configuration panel||In GCOS CPUs, the CPU's Configuration sub-panel is part of the main CPU physical front panel. In Multics CPUs, that space was taken up by the sub-panel for the Appending Unit - and with more ports, the Multics CPU's configuration panel was larger anyway, so it was apparently moved to a separate half-panel.|
|DPS-8 Multics CPU Configuration panel||This seems to be functionally basically the same as the CPU Configuration panel above, but with a panel 'look' consistent with the other DPS-8 panels. Note, however, that it only supports 4 ports; by now, the larger SCUs were ubiquitous, so there was no need for more than 4.|
|256KW SCU Configuration and Maintenance panel|
|4MW SCU Configuration and Maintenance panel||This is the later, larger capacity SCU; it apparently did not have the full maintenance panel (as above), just this smaller configuration/maintenance panel.|
|DPS-8 SCU Configuration panel||This is the SCU Configuration sub-panel for the DPS-8 machines; it has a panel 'look' consistent with the other DPS-8 panels.|
|DPS-8 SCU Syndrome panel||This is an ancillary sub-panel to the DPS-8 SCU Configuration sub-panel above, which displays memory error information.|
|GCOS IOM Configuration, Maintenance and Test panel||These panels were also probably mounted on the back of large swing-out doors on the front of the IOM cabinet, and were the same size as those of the early SCUs. Note the limited number of ports.|
|NSA IOM Maintenance and Test panel||The NSA IOM Maintenance and Test panel appears to be identical to the GCOS one above, but is missing that panel's Configuration sub-panel, which has been moved to a separate half-panel (below). It is believed that the Multics IOM Maintenance and Test panel is identical to this, but confirmation is currently unavailable.|
|NSA IOM Configuration and Bootload panel||The NSA IOM Configuration panel is identical to the original GCOS one, but it has a lot more ports (i.e. can be used in a system with more SCUs, i.e. more memory), so would no longer fit in the same panel, and had to be moved to a separate half-panel. It looks much the same as the Multics one below, but is missing the two sub-panels at the top of the Multics panel.|
|DPS-8 GCOS IOM Configuration and Bootload panel||This seems to be functionally basically the same as the NSA and Multics IOM Configuration panels (above and below), but with a panel 'look' consistent with the other DPS-8 panels. Note that it only has 4 ports, standard for DPS-8 machines with both Multics and GCOS.|
|Multics IOM Configuration and Bootload panel||Like the CPU Configuration panel, these panels (which supported 8 ports, i.e. connection to 8 SCUs) were apparently too large to go where the Configuration sub-panel went on a GCOS IOM front panel (above), so they were moved to a separate half-panel, much like the CPU's. One can be seen at the far right of the image of the MIT H6180 Multics system, below.|
|DPS-8 Multics IOM Configuration and Bootload panel||This is basically the same as the DPS-8 GCOS IOM Configuration panel above. It is not clear whether the minor differences between this one and the 'GCOS' IOM panel represent merely different temporal versions of a universal DPS-8 IOM panel, or if the Multics and GCOS IOM panels were slightly different.|
|6000 Series GCOS CPU Configuration and Maintenance panel||This panel is the same size as the Multics CPU Maintenance panel, and like those, was probably mounted on the back of a large swing-out door on the front of the CPU cabinet. Note that the large section at the top for the Multics Appending Unit is not there; that space holds the Configuration sub-panel on these CPUs.|
|Level 66 CPU Configuration and Maintenance panel||Identical to the previous one, except for the colour, and lights.|
|Level 66 VU Configuration and Maintenance panel||The 'Virtual Unit' (VU) was the NSA counterpart to the Multics Appending Unit (AU); apparently Series 60 CPUs could optionally have either an AU or VU added to them. (The VU was standard on the later DPS-8 CPUs.) It too apparently occupied its own half-panel.|
|6000 Series Microprogrammed Peripheral Controller panel||Originally, all disk drives, tape drives, etc were connected to IOMs via mass storage and magnetic tape processors, respectively; those were eventually replaced by this 'Microprogrammed Peripheral Controller'.|
|H6180 Multics System at MIT||This image shows part of the dual-processor H6180 Multics system at MIT. The unit on the far left is a 256KW SCU; to its right is a 6180 CPU. Note the smaller panel just to the right of the CPU's maintenance panel (mostly edge-on in this view); this is the 6180 CPU configuration half-panel (which contains things moved from the top of the 60xx CPU maintenance panel to make room for the Appending Unit sub-panel). The panel visible in the distance at the right-hand edge of this image is a Multics-type IOM's Configuration half-panel.|
|DPS-8/M Multics system at the University of Mainz||This shows the Mainz Multics system; that system included (from the left) an SCU, a CPU, another SCU, another CPU, and an IOM.|
Some of the images here are from a number of Internet sites which host Multics/GCOS panel images:
© Copyright 2017 by J. Noel Chiappa
Last updated: 19/November/2017