
The beginnings of French computing (1/2): The French way of failing (1939-1961)

Part of my series on national beginnings: Spain Argentina Uruguay France (part 1)

The state of computing in the world’s leading powers in 1952 (UNIVAC, Ferranti Mark 1, МЭСМ, Couffignal machine). Spoiler: The French thought they were doing OK.

Prologue: A world of punched cards and calculation

The origin of computing is usually told as a simple story: before the Second World War there were no computers; after the Second World War there were, with perhaps some debate about whether machines built during the war, like the Colossus or the Zuse Z3, were transitional curiosities or already full-fledged computers. That narrative is largely correct, yet it obscures the remarkable progress in automatic calculation that unfolded from the nineteenth century to the interwar years. This omission matters little when speaking of pioneers like Aiken or von Neumann, who would rethink everything once electronic computing became possible; however, one cannot understand the French postwar failure in computing without first grasping the technical and intellectual paradigms the French scientists worked within before the war, and from which most of them were unable to break free.

The Industrial Revolution conjures images of factories, locomotives, steamboats and of course coal, an ungodly amount of coal to feed the steam Moloch. Yet the same revolution also created a world almost as hungry for calculation as it was for energy. Banking expanded exponentially, and with it came the need to compute interest, risk, and repayments. Engineers, insurers, architects, and scientists all faced problems too complex to solve “on the fly”; calculating had to become a specialized task to organize and optimize. And so, the need for computation was addressed much like the need for textiles, houses or frying pans: with technology and division of labor.

The Nautical Almanac. Lots and lots of calculations there.

Division of labor in calculation can be traced back to the celestial calculations of the late 18th century, for instance Royal Astronomer Nevil Maskelyne‘s British Nautical Almanac (1767), a yearly collection of pre-calculated star positions allowing sailors to determine their longitude. The calculation method was chosen by a chief mathematician, and the thousands of hours of calculation required were then divided into tiny manageable parts and distributed to human computers. A comparer would then check the accuracy by comparing the same calculation done by two different persons. This organization became standard in the calculation offices of the 19th and 20th centuries, either as departments of large organizations or as work-for-hire. Over time, calculation would be massively accelerated by mechanical counting machines, which can be traced back to 1645 (the Pascaline) but really became mass-produced with the comptometer (1887). Before long, its use became so widespread (particularly in accounting departments and insurance/banking companies) that by the First World War the occupation “comptometrist” had made it into job boards.
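The comparer's workflow maps neatly onto code: the same value computed by two independent routes, accepted only when they agree. A minimal sketch of the principle (the functions and tolerance are my own toy illustration, not Maskelyne's actual method):

```python
import math

def computer_a(deg):
    # first human computer: a hand-calculable series expansion of sine
    x = math.radians(deg)
    return round(x - x**3/6 + x**5/120, 6)

def computer_b(deg):
    # second human computer: an independent method (here, the library sine)
    return round(math.sin(math.radians(deg)), 6)

def comparer(entries, tol=1e-4):
    """Flag entries where the two independent calculations disagree."""
    return [d for d in entries if abs(computer_a(d) - computer_b(d)) > tol]

# The chief mathematician splits the table into manageable parts;
# for small angles the series converges quickly, so the table passes.
discrepancies = comparer(range(0, 30))
print(discrepancies)  # an empty list means the table is accepted
```

Any disagreement meant one (or both) computers had erred and the entry was redone, which is why the almanac's tables could be trusted at sea.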

Article found in Comptometer News, December 1926, about a comptometer competition organized by FIAT in Italy. Comptometrist was almost exclusively a female job. Source

Comptometers and their equivalents could only do the simplest calculations, but innovation kept marching on, and in the years that followed analog computers – purpose-built computing machines – would tackle increasingly complex operations and equip calculation offices. This being a wargaming blog, the example we all know about is the fire director: large analog computers deployed on warships starting from WWI, which calculated fire solutions by taking into account (eventually) the range, the ship’s speed and bearing, the target’s expected position (itself another complex calculation) and the Coriolis effect.

Fire Control System on the HMS Belfast (1936). Source: Wikipedia

The other decisive technological innovation for computing was the punched card. Once again, the technology is older than one would expect: glossing over precursors from as far back as 1725, the first successful punched-card-operated machine was the Jacquard loom (1804), which could weave complex patterns according to the program written on the cards. By the 1820s there were tens of thousands of Jacquard machines in operation around the world, and beyond the massive economic and social impact of the machines themselves, they triggered a conceptual epiphany: every industry and service wanted its own punched card operation.

The Jacquard machine of the National Museum of Scotland. If there was a game about rebuilding civilization, I would not pass the loom level. Source: Wikipedia

It is the Jacquard machine that inspired computing pioneer Charles Babbage to imagine his (unbuilt) punched-card-operated analytical engine, which would sequentially execute the arithmetic operations it had been programmed to do. To really expand into services, however, punched cards needed one last invention: the tabulator, which converted punched cards into human-readable data. The first tabulator was invented by Herman Hollerith in 1884 and famously used for the 1890 US Census, reducing the time needed to parse through the survey from 8 years (1880 census) to “only” 6 years. Hollerith had created the new field of data processing, which he immediately dominated through his company – the Tabulating Machine Company, founded in 1896 (which became the Computing-Tabulating-Recording Company in 1911, which became International Business Machines, or IBM, in 1924).
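The tabulator's principle is simple enough to sketch: each punched hole closes an electrical circuit and advances a counting dial for the category that hole encodes. A toy version in code (the field layout is my own invention, not Hollerith's actual card format):

```python
from collections import Counter

# Each card is a row of hole positions; a position encodes one category.
FIELDS = {0: "male", 1: "female", 2: "farmer", 3: "clerk"}

def tabulate(cards):
    """Hollerith-style tally: a punched hole bumps the matching dial."""
    dials = Counter()
    for card in cards:
        for pos, punched in enumerate(card):
            if punched:
                dials[FIELDS[pos]] += 1
    return dict(dials)

cards = [
    (1, 0, 1, 0),   # male farmer
    (0, 1, 0, 1),   # female clerk
    (1, 0, 0, 1),   # male clerk
]
print(tabulate(cards))  # {'male': 2, 'farmer': 1, 'female': 1, 'clerk': 2}
```

The point of the real machine was scale: clerks fed cards in by the thousand and read the totals straight off the dials, which is what cut years off the census.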

Hollerith in front of his tabulator, 1894. Source: Columbia University

The tabulator changed everything. Insurance companies, with their massive amounts of data to process, were the first to jump on the bandwagon, with Prudential leasing its first tabulator in 1891. Others followed quickly, and soon entire services and administrations moved from paper to punched cards and adapted their processes to the tabulator. The use of punched cards in calculation, however, advanced timidly: the first calculating machines that used punched cards as input and/or output only arrived on the market in the interwar period, and even then they were usually single-purpose – for instance, the IBM 601 Multiplying Punch (1931) could only multiply, and the IBM Type 405 Alphabetical Accounting Machine (1934) could only add and subtract. It is in this technological universe that the main protagonist of this article grew up.

An IBM 601 calculating punch. Source: Wikipedia

The Price of Defeat

In January 1938, mathematician Louis Couffignal (1902-1966) published his thesis “L’analyse mécanique, application aux machines à calculer et à la mécanique céleste” [Mechanical Analysis, application to calculating machines and celestial mechanics]. His claims were visionary, as he described in some detail, with schematics, a machine controlled by punched cards that would do “Multiplication, division, and sign computation; Sequencing; Comparison; Printing; Recording and documentary control; Mechanical Tables [= short term memory]; Perfect pre-control of the preparation of printing and control documents” – in a word, a machine that we would call Turing-completish in its purpose, though as we will see not in its specifics. Couffignal, who was familiar with the works of Babbage, was obsessed with calculating machines; he had designed a few mechanical ones of his own before being recruited, in 1937, by the Centre National de la Recherche Scientifique (CNRS) to work on mechanical calculation. In his thesis, Couffignal proposed two designs to automate calculation: one was an incredibly complex mechanical machine working on a decimal basis; the second is more interesting to us: an electro-mechanical machine counting in binary. Couffignal stated in his thesis his preference for the latter solution: faster and safer.

Couffignal’s description of how his electrical-binary machine would multiply and divide.
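Binary's appeal for an electro-mechanical machine is easy to see in miniature: multiplication reduces to shifting and adding, operations a relay circuit does naturally. A modern sketch of the shift-and-add principle – not Couffignal's actual mechanism:

```python
def binary_multiply(a, b):
    """Multiply two non-negative integers using only doubling (shift)
    and addition, as a binary machine would."""
    product = 0
    while b:
        if b & 1:        # lowest bit of b is set: add the shifted multiplicand
            product += a
        a <<= 1          # double a (shift left one binary place)
        b >>= 1          # move on to the next bit of b
    return product

print(binary_multiply(13, 11))  # 143, i.e. 1101 x 1011 in binary
```

Division works the same way in reverse, by repeated shifting and subtraction, which is why a binary machine needs far fewer distinct mechanical states than a decimal one.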

This is not to say that Couffignal was ahead of his time, merely at the frontier; Charles Eryl Wynn-Williams, for instance, had designed a scale-of-two counter in 1932 at the latest. Furthermore, the Couffignal machine was closely inspired by mechanical machines, and so addition/subtraction and multiplication/division were handled by different parts of the machine, as if several non-Turing-complete machines had been stapled together. But in 1938, the main issue with Couffignal’s proposal was that it was purely theoretical – to build it he would need someone to foot the bill.

Wynn-Williams’ scale-of-two counter. I show it here because it was invented long before Couffignal wrote about it, a detail that will be marginally relevant later. Source: Wikipedia

During WWI, the French had experienced what they considered German technical superiority, particularly in the field of ballistics. To bridge the gap, the French founded in 1915 the Direction des Inventions intéressant la Défense nationale, headed by mathematician Émile Borel – more famous nowadays for his parable of monkeys statistically typing all of Shakespeare. After the war, the Republic kept a keen interest in applied mathematics, and Borel created the Institut Henri Poincaré in 1928 with four laboratories: Mathematics, Calculation, Theoretical Physics, and Statistics & Probabilities. As the risk of a new war increasingly loomed over France, the needs of the French military for calculation grew exponentially: aerodynamics, optics and above all ballistics, with ever-increasing gun ranges and the specific problems of hitting air targets from the ground and ground targets from the air. In this context, Couffignal had frequently worked with the military, and had among other things been tasked in 1936 with designing an aiming device for heavy bombers. In 1939, when the Institut Henri Poincaré became swamped by requests for calculations (three times more requests than could be served, according to the director), it created two new laboratories – ballistics and mechanical calculation – with Couffignal appointed head of the second. Couffignal was quick to point out that the existing mechanical machines were slow and error-prone for complex calculations, as the user had to keep manually inputting intermediary results for the next calculation. A solution existed, of course, in that thesis he had written one year earlier – maybe not the complete machine, but close enough: an electro-mechanical connection between the keyboard of a calculating machine and an accounting (tabulating?) machine, their interaction being programmed by punched cards. In theory, the output of the calculating machine would be fed back into it by the tabulating machine, allowing operations to be chained without human input. The military understood the impact such automation could have, and gave the green light.

The Institut Henri Poincaré in 1928. Couffignal did not stay there for long.

Alas, the débâcle interrupted those promising developments, and the members of the Institut, after relocating several times ahead of the advancing German armies, eventually dispersed. The débâcle similarly swallowed less-documented projects that could have been a path to computing, particularly a project by Georges Vieillard and Franklin Maurice of Bull, who were commissioned by the Chiffre (the French cryptology service) to build machines that could decipher the Enigma code. In April 1940, they finished their two (or three? sources vary) machines: W1, W2 and possibly W3. Alas, just like the other machines, they had to be destroyed before the Germans could find them, along with whatever documentation there was for the project.

Ideological isolation

As the war ended, France was in ruins, its industrial production in tatters, and a good number of its top mathematicians and engineers had lost their lives during the war, deported for their race or for being part of the Résistance. Countless others could not devote themselves fully to their work, whether because they were prisoners of war, active and surviving members of the Résistance, or serving in Free French uniforms. In the latter case, they had no opportunity to receive the practical computing experience that their British and American counterparts could enjoy. And so, when in 1946 the CNRS created the Institut Blaise Pascal with the mission to build a French computer, it chose as the head of the project the most obvious candidate: Louis Couffignal.

Neither hero nor villain, Couffignal had passed the war years in relative normalcy as the world collapsed around him. He kept his cushy position at the CNRS, and convinced the Vichy government to let him finish his machine – though this authorisation had no practical impact, for lack of funding and available parts. His transition from France de Vichy to France occupée to France libérée was seamless, and in early 1945 he was given the rank of Commandant, a jeep and a driver to join a mission that tagged along with the 1st French Army and grabbed whatever technical tools and machines the Germans had left behind. A chunk of this loot would become the starting equipment of the Institut Blaise Pascal.

The Institut Blaise Pascal was one of many CNRS initiatives to make up for the lost years in various scientific domains. This drive was supported by the Americans, who invited Couffignal in 1946 to tour their most advanced labs: he met Howard Aiken, saw the ENIAC and the Mark I, and was introduced to the Von Neumann architecture. After returning to France, Couffignal proceeded with the next logical thing: ignore whatever Von Neumann was saying and build an improved version of his pre-war machine.

The ENIAC in 1946 – Couffignal saw it and thought something along the lines of “Primitive. I can do better”. – Source: CNRS

While Couffignal transitioned to an all-electronic design after visiting the Americans, he did not understand the paradigm shift allowed by a fully electronic machine, and kept his vision of a machine with specialized parts. As he explained: “the problem of organizing a calculation is essentially the same as the problem of organizing an assembly line”. His plan also matched his odd vision of “artificial evolution”, according to which, just like with natural evolution, machines would become increasingly complex over time. More practically, he argued that the Von Neumann architecture was too slow (his 1936 architecture allowed several operations in parallel, so in theory it could work faster) and, more importantly, too memory-hungry – which, of course, was a real limitation of early computers. Couffignal believed that adding memory to ease calculation would be a never-ending treadmill: “memory can record 5,000 results in certain machines currently being studied, [but] an examination of the mathematical physics problems that are intended to be solved with these machines shows that this number will be insufficient; it will have to be increased to 10,000, 20,000, and perhaps even 50,000 in order to solve certain problems involving the integration of partial differential equations.” No, it was simply better to organize the calculation in a way that required as little memory as possible. Worse still, because he minimized the importance of memory in computers, he did not grasp the value of storing programs directly in memory: some basic functions like binary-decimal conversion had dedicated circuits, and for everything else punched cards would be enough!
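The stored-program idea Couffignal dismissed is easiest to see in miniature. A toy sketch – the instruction set is invented for illustration, not taken from any historical machine – in which program and data share one memory:

```python
# A toy von Neumann machine: the program lives in the same memory as the data,
# so no external punched cards are needed to drive the calculation.
def run(memory):
    acc, pc = 0, 0                       # accumulator and program counter
    while True:
        op, arg = memory[pc], memory[pc + 1]
        pc += 2
        if op == "LOAD":
            acc = memory[arg]            # read a data cell into the accumulator
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":
            memory[arg] = acc            # write back into the shared memory
        elif op == "HALT":
            return memory

mem = ["LOAD", 8, "ADD", 9, "STORE", 9, "HALT", 0,   # the program...
       2, 40]                                        # ...and its data (cells 8-9)
print(run(mem)[9])  # 42
```

Because instructions sit in ordinary memory cells, a program can modify or generate other programs – the conceptual leap that punched-card sequencing, where the program lives outside the machine, cannot make.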

In any normal situation, Couffignal’s peers would have stepped in, but Couffignal was working in a perfect echo chamber. The directors of the CNRS (first the Nobel Prize-winning physicist Frédéric Joliot-Curie until 1946, then Georges Teissier) were high-ranking members of the French Communist Party, and they saw Couffignal’s proposal as a French answer to the American design. In addition to the geopolitical rivalry, Couffignal’s architecture was ideologically stronger: his design replicated the scientific organization of labor found in well-run offices and factories, whereas Von Neumann’s simple architecture was closer to the workshops of old, where each worker was supposed to produce items from start to finish. And so Joseph Pérès, the director of the Institut Blaise Pascal, could confidently state in June 1947 that “thanks to the work of M. Couffignal, we are still ahead [of the Americans] from the theoretical point of view“. The CNRS would never fail to mention that the ENIAC was still using decimal numbers – and hadn’t Couffignal invented binary counting back in 1936? The French media ate it up, and created out of whole cloth a rivalry between the French approach and the American approach, a rivalry of which, I suppose, the Americans were never aware. The Couffignal machine would be, any day now, the pride of France.

The CNRS would not interact with the French private firms interested in computing either. For the directors of the CNRS, working alongside a private company would go against the intérêt général – after all, companies expected a competitive advantage from such a collaboration, and the CNRS would thus end up serving a private interest. As for Logabax, the firm Couffignal had handpicked to build his machine, it was in no position to comment. Logabax had built some of the most impressive fully mechanical calculating machines of the 40s, including in particular a mechanical long-term memory. Alas, it was otherwise the worst possible choice for the project: it was small, always on the brink of bankruptcy, and had never worked with electronics.

Logabax machine with mechanical memory. Source: P-E Mounier-Kuhn’s Histoire de l’Informatique en France

The pride of France would take its sweet time, much to the despair of some of the French scientists who had been sidelined for political or personal reasons, chief among them Léon Brillouin, professor of Physics at Harvard and well acquainted with the latest American research. Back in 1947 he had used a French conference on computing to present the latest technological advancements on the topic, and to remind the CNRS that the Americans were willing to train more young engineers in computing – a proposition that was never followed up. Years passed, the 40s turned into the 50s, and the CNRS still had nothing to show. Finally, in January 1951, the Institut Blaise Pascal held an international conference on computing, where Couffignal showed a “pilot machine”, that is, a pint-sized version of the final computer. While small, it worked as expected – provided you did not ask it to calculate with any number of more than 8 digits. Still, the Americans and British present at the conference must have been singularly unimpressed.

The “pilot” of the Couffignal machine and its inner organisation. Note the different areas of the machine in charge of the different types of calculations. Source: Pierre-Eric Mounier-Kuhn’s Histoire de l’Informatique en France

The final version was never delivered. Scaling up proved technically more complicated than just increasing the number of components, and in any case Logabax went bankrupt in 1952, leaving the CNRS with no one to build its machine. Millions of francs had been lost, and worse still, the failure of the project was seen for many more years as an industrial failure (Logabax’s) rather than a technological and design failure (Couffignal’s and the CNRS’s). The CNRS spent some time trying to find a new partner to finish the machine, but in 1955 it finally threw in the towel and bought a British Elliott 402, which did not work that well either.

Could the French have had a working computer by 1952 with another strategy? There seems to be little doubt about it. France had managed an impressive rebound in nuclear research (the Zoé reactor was operational in 1948, largely thanks to Joliot-Curie), in jet engines (the Atar engine in 1948), and even in missiles, where it was in some respects ahead of the Americans. Léon Brillouin, who knew the topic well, had explained in 1949 that France had everything it needed to produce a computer quickly: highly-trained engineers and an electronics industry; and yet the vanguard of French computing had squandered more than 5 years for everyone. Without solutions coming from the French research apparatus, labs started to turn toward private solutions.

A few other public labs had their own computer projects, usually in support of their broader mission. In particular, the Office national d’études et de recherches aéronautiques (ONERA) built its own computer with a Von Neumann architecture as a side project. It was apparently almost finished, but the lead engineer left in 1953 and his replacement decided to restart from scratch. The second version, shown here incomplete in 1957, was obsolete by the time it was ready – for instance, it still used tubes and magnetic drums when everyone was transitioning to transistors and ferrite cores.

The leader and the challenger

Outside the public laboratories, two French companies would build their own computers in the early 50s. The older one is La Compagnie des Machines Bull (or simply Bull), a company that was only French by chance. The name Bull comes from Fredrik Rosing Bull, who founded a company in Oslo in 1921 to sell a punched-card sorting, recording, and adding machine (for accounting) he had patented two years earlier. The machine was instantly successful, as it broke the stranglehold that IBM held on both terms (no sales, only leasing!) and prices. Fredrik Rosing Bull died in 1925 at 37 years old, and the ownership of Bull was transferred to a consortium, which sold licenses to build and commercialize Bull machines in individual countries.

One of the buyers was a Belgian businessman called Emile Genon, who bought licenses for most of Western Europe and then established his HQ in France (1931), a large market where manpower was relatively cheap, patents well-protected and data processing still lagging behind. After some convoluted financial shenanigans I prefer to skip, the new company became La Compagnie des Machines Bull in 1932 – a company financially unrelated to the Norwegian Bull, but one that managed to poach the original Bull’s lead engineer, Knut Andreas Knutsen. Knutsen was a Tier S engineer, and designed machines that would turn Bull into a European leader in data processing equipment. The cash cows were the T30 (1931) and T50 (1934), tabulating printers that could print respectively 120 and 150 lines per minute – I don’t know intuitively what a good speed is for an interwar tabulating printer, but apparently IBM only reached 80 lines per minute in that period. Bull owned 15% of the French data processing market in 1939, and then the Vichy years arrived… which were absolutely great for Bull, because IBM was de facto out of a market that suddenly expanded when the Vichy regime made the ID card compulsory.

Bull’s T30 tabulator with the visible wheels ; Source: Fédération des Equipes Bull Belgique-Luxembourg

After the liberation of France, Bull went through the épuration without issue (its stance during the war is still hotly debated among historians) and expanded everywhere: Europe, the USA, South America, Japan and even the Soviet Union. While most of its revenue still came from traditional data processing, Bull gradually pivoted to electronics, and through it ended up with a computer without even planning for one. It started with the Gamma 3 (1952), a Von Neumann electronic calculating machine that only worked as a peripheral to a tabulator – so not a full-fledged computer. Then Bull realized it could remove the tabulator and let the Gamma 3 be programmed by cards: this was the Gamma 3 PPC – still not a full-fledged computer, because it lacked memory. Finally, in 1955, Bull added drum memory to the Gamma 3, and the resulting Gamma 3 ER was a full-fledged computer, and a commercially successful one at that. All models included, 1,200 Gamma 3s were sold – an absolutely tremendous success, though most were not self-contained computers with memory.

Bull Gamma 3, not ER. Source: Wikipedia

Yet the Gamma 3 ER was not the first French computer – that feat was achieved neither by a public laboratory nor by a historical giant, but by what we would nowadays call a start-up: François-Henri Raymond’s Société d’Electronique et d’Automatisme (SEA). Raymond was an electronics engineer who joined the Marine Nationale shortly before the war to work on the latest electronic technologies: radar, sonar, fire control. Demobilized after the defeat, his war years were uneventful. After the war he joined the leadership of the machine-tool maker GSP and became an advisor for Sadir-Carpentier, a company that built electronic equipment (including the first French radar in 1939); it is in this latter role that he was sent to the United States to find out the state of the art – and that’s how he found and read the Von Neumann report.

Back in France, Raymond suggested that Sadir-Carpentier build one of those computers. Sadir-Carpentier did not see the rationale for following the Americans in their collective pet project. Fine then, Raymond would do it himself. GSP, however, very much did see the point when Raymond discussed his project – so much so that not only did they buy shares of SEA and join the board, but so did their own shareholder (Gaz et Eaux). With the capital secured, Raymond assembled a team of engineers (many of them friends and colleagues from his years working in Marine Nationale R&D), founded the SEA (1947) and leveraged his contacts in the army, navy and air force for his first contracts. The SEA’s first orders were for analog computers, but of course Raymond was biding his time for an opportunity to build a full-fledged computer.

The SEA finally landed such an opportunity in 1951, when the French Army’s Laboratoire Central de l’Armement ordered a scientific computer – possibly having lost patience with Couffignal. The SEA’s response was the CUBA (Calculateur Universel Binaire de l’Armement). The CUBA was state of the art when designed (Raymond was keeping tabs on technological progress in the UK and US), but building it turned out to be harder than expected, given that whatever new technology the Americans had started to use (magnetic core memory! transistors!) was not yet readily available in France. SEA had to import parts from the UK and America, and even then the CUBA was delivered later than expected (1954 or 1955) and barely used – a mere footnote in history beyond “first French computer”. Meanwhile, the SEA had worked on two more universal computers: the CAB 1011 (July 1955) for the French intelligence agency, for cryptographic work, and the CAB 2000 (October 1955) for the missile manufacturer Matra – 4 CAB 2000s would eventually be built. CAB stood for Calculatrice Automatique Binaire, but despite the name they were computers in the modern sense of the term.

What wasn’t a footnote of history, however, was the CAB 500 (designed in the late 50s; the first orders were delivered in 1961). The CAB 500 was an attempt to build a “simple” computer that did not need highly specialized engineers to operate. Raymond made radical choices: the CAB 500 was shaped like a desk, could be plugged into a wall socket and did not need a dedicated room with air conditioning. It is sometimes described as the first desk computer: you could sit at it and use the typewriter to give it commands – of course, you wouldn’t have a lot to do with it if you were not a scientist yourself. The SEA even embedded a specially designed language, the PAF [Programmation Automatique des Formules], which looks like BASIC but earlier and in French. The CAB 500 was the real breakthrough for SEA, with 100 machines sold, some as far away as Japan, the Soviet Union (2 machines) and China (1 machine).

I will end the first part of this article here. France had missed the first 10 years of computing: at the end of 1955, there were only 5 computers in France – the CUBA, the CAB 1011, one CAB 2000, one Elliott 402 and an IBM 650 at the IBM France HQ; the UK had twice that number, all fully British-made. By 1960, however, French computing was looking up, with two companies able to export abroad. That wouldn’t last, and significant setbacks in the early 60s would be turned into an unmitigated disaster by the French state’s attempts to fix them – but that will be the subject of our next (and last) episode.

Sources:
