
In the beginning

Among the first modern computers outside military applications such as ‘Colossus’ at Bletchley Park was EDSAC, built at Cambridge University. This became the basis for LEO 1 (Lyons Electronic Office 1), the first commercial computer, developed for the J Lyons and Co. bakery and restaurant chain. Meanwhile at Manchester University, the Manchester Baby had evolved into the Manchester Mark 1, which became the basis for the Ferranti Mark 1.

At this time computer software was provided and maintained free with the hardware as it would only run on the hardware for which it was written — a situation common until recently with embedded computer systems such as in-car electronics and mobile ’phones.

The first software capable of running on different hardware was developed at IBM’s research centre at Hursley in Hampshire. However, most people were not aware of this until the 1970s, when the development of Unix for mainframes and CP/M for microcomputers allowed people to write software independently of the hardware, in much the same way as the Android operating system has allowed people to write software for a wide variety of mobile ’phones. But as programmers were no longer being paid through hardware sales, they did not want people to buy their software and hand it on to anyone else. So they hit on the idea of ‘licensing’ the software to individual users, who were barred from modifying it in any way. Among those who profited from this new approach were Bill Gates and Paul Allen, who had written some software for one of the newfangled micro-computers and went on to build a multinational company, Microsoft, on the back of it.

At the same time computer companies were beginning to sell computers to organisations which did not have skilled technicians who could report and, in some cases, solve problems for the mutual benefit of all. Organisations without such technicians expected the computer company to solve all their problems. Not wanting to be called on to sort out problems caused by unskilled users, computer companies began charging for supporting the software and imposing restrictions on what people could do with it, much as mobile ’phone networks do today.

IBM had begun to sell some of its software separately from its hardware in 1969, and in 1982, at the end of the long-running anti-trust litigation brought against it by the US Department of Justice, it began charging for all its software separately from its hardware, though some people thought it ‘underpriced’ it to keep the competition at bay (Glass, 2005).

The expansion of copyright

Meanwhile, as the US economy flagged, US legislators hit on the idea of extending copyright to provide a greater incentive for innovation. In 1980 it became possible to copyright software in the US, while the courts became more favourable to patenting software, a position formalised in law in 1990.

During the 1980s media publishers began to argue that strengthening copyright laws would benefit the US economy, and by 1990 they had persuaded the US government to include in trade deals the requirement that other countries respect US copyright laws. In 1992 breaking copyright law became a criminal, not just a civil, offence in the US, and in 1994 the US was able to make respecting its copyright laws part of GATT (the General Agreement on Tariffs and Trade), to be succeeded the following year by the World Trade Organization.

In 1998 the media publishers succeeded in persuading Bill Clinton to sign into law the Digital Millennium Copyright Act, which imposes penalties for circumventing digital copyright systems; they have continued to pursue copyright ‘pirates’ ever since.

These changes had been justified as part of neo-liberal economic theory about creating a free market, but they actually had the effect of severely curtailing the freedom of the individual which is supposed to be at the heart of such theories (Coleman, 2013).

Free software

Though the original developers of Unix were based at AT&T’s Bell Labs, much of the development of the software had, as in England, taken place in universities. These were required by US legislation relating to bodies in receipt of public funds to make their research publicly available. Initially, they did this by printing the code for the software in research papers, but eventually the code became too long to include in papers and they began publishing it separately as computer files. To comply with US legislation, they created permissive licences, that is, ones allowing users to modify and publish their changes to the code, of which the MIT (Massachusetts Institute of Technology) and BSD (Berkeley Software Distribution) licences are probably the most well-known.

Because AT&T was virtually a monopoly communications company in the US, it had been barred from selling computing equipment but, in 1984, it was broken up and released from this restriction. Once the ban was lifted, AT&T began charging people for Unix. This created problems for the universities, whose contributions to Unix were now being sold in apparent contravention of their legal obligations, and annoyance among many of the academics who had contributed to it. Over the next decade, there were numerous attempts to resolve this impasse, including developing different versions of Unix based on the universities’ work, of which BSD Unix, which is used by Yahoo and Apple, is probably the most well-known. The impasse was finally resolved in 1993 when the US computer company Novell bought the rights to Unix from AT&T and declared that it would not charge anyone for using Unix.1

Before this, programmers had been exchanging free (and not so free) software through locally based ‘Bulletin Boards’. But, concerned at the increasing restrictions being placed on programmers by the commercialisation of software, and fearful that his work might be hijacked by his employer, the Massachusetts Institute of Technology, Richard Stallman, who worked at its Artificial Intelligence Laboratory, resigned his post in 1984. He recruited a number of like-minded programmers to create the Free Software Foundation in 1985 and articulated in the ‘GNU Manifesto’ the four freedoms:2

- the freedom to run the program, for any purpose;
- the freedom to study how the program works and adapt it to your needs;
- the freedom to redistribute copies to help others;
- the freedom to improve the program and release your improvements to the public.

In 1989 he formalised these principles in the GPL (GNU General Public License) which, in its several versions, is now used to protect free and open source software. It differs markedly from traditional forms of copyright in asserting both the right of the author to be identified as the author of the software and the rights of the user to use the software in the ways set out in the GNU Manifesto. Arguably, this stance also sustained the existing collaboration in the development of the Internet, which has meant that most of the software used to run the Internet is free and open source software.

Stallman and his colleagues tried to develop a free version of Unix — a project as yet unfinished — but they managed to pull together and write a lot of very useful software, sometimes collectively known as the GNU utilities, which runs on most Unix-like systems.

Open source software

The term ‘open source’ was not coined until 1998 (Biancuzzi, 2008) and it is a little misleading since, apart from the proprietary software developed since the 1970s and ‘shareware,’ that is, software distributed free but normally with a request to make a donation to the author, most software is ‘open source’ in the sense that anyone can read the code. The term was coined to describe a new way of creating and supporting computer software, initiated in 1988 by Intel. Intel hired Michael Tiemann to write the software for a new engineering chip and then gave it away with the chip, telling customers who lacked the skills to use it themselves to contact Michael for help. In association with John Gilmore and David Henkel-Wallace, he built up Cygnus Support to provide support for a variety of free and open source software as well as to develop a number of the GNU utilities.3

The key feature of this model is that creating software and supporting it became separate activities, with charges only being made for support. It was taken up by other companies, most notably the Swedish database company MySQL,4 which gave away its software but offered support contracts for people needing help to get the most out of it.

Linux

With the increasing adoption of Unix on larger computers in the 1980s, Andrew Tanenbaum had created a smaller alternative, Minix, to use in teaching students the principles of Unix. In 1991 a computing student at Helsinki University, Linus Torvalds, became so impatient waiting for time on the university computer that he used Minix as the starting point for building his own operating system which, with the help of the university’s systems administrator, who called it ‘Linux,’ he put on the Internet. Within 36 hours he had people contacting him, and within 18 months they had developed a kernel which worked well enough with the software which the universities and Richard Stallman and his colleagues had developed to make it a viable system for use on PCs using Intel processors. A contributory factor was the permissive licences issued by the universities and the GPL created by the Free Software Foundation, which facilitated collaboration among developers.

Among those who spotted its potential were Patrick Volkerding, who released Slackware on 17 July 1993, and Debra Lynn and Ian Murdock, whose Debian distribution (1993) underpins most desktop Linux systems. Even though they are no longer associated with it, volunteers continue to maintain and develop Debian with support from companies like Hewlett-Packard. Another was the German company SuSE, which distributed Softlanding Linux System and Slackware before creating a German version of Slackware in 1994 and its own distribution in 1996.5 Along with RedHat, a US company set up in 1993 to distribute Linux, it works closely with IBM to offer Linux to business customers.

Apart from the fact that Linux would run on PCs, as opposed to the mainframes needed for most varieties of Unix, it also had the advantage that there were more similarities than differences among the various Linux distributions, something which could not be said of the versions of Unix available at the time.

In 1996 Digital Equipment Corporation approached Linus Torvalds to adapt Linux for their new Alpha processors;6 he wrote this up for his Master’s degree, in the process making it much easier to use the Linux kernel with any computer chip. This flexibility meant that Linux began to appear on a wide range of computer systems, from supercomputers (Linux runs nearly all of the top 500 supercomputers) to digital televisions. Only in in-car entertainment systems and the personal computer market does Linux hold less than half the market share though, with sales of personal computers in decline and Windows’ share of the combined PC, tablet and smartphone market predicted to be 16% in 2014, most people are choosing systems with FOSS at their core, in the form of iOS (built on the BSD-derived Darwin) or Android (built on Linux), for their everyday computing needs.

The charitable foundations

During the 1990s another group of programmers developed Apache, a free web server program. IBM was so impressed by Apache that it wanted to use it; however, in the light of the anti-trust settlement it had made in 1982, it wanted to know the provenance of every line of code it used. At the time, the Apache developers had collaborated without keeping a check on who did what; so they had to go through the code and document who had done what, and they also had to create a legal entity with which IBM could make an agreement. This led to the creation of the Apache Foundation in 1999 and to IBM’s programmers beginning to contribute code to the Apache server.

It was also the start of a major shift towards the open source model for software development and support which Intel had pioneered. Major computer companies now contribute both programming and financial resources to charitable foundations which manage software development, rather than competing among themselves to develop software. They then make their money offering specialised support to users, as Michael Tiemann had first done.

There are numerous charities involved in managing the Internet and free and open source software, including the Free Software Foundation, the World Wide Web Consortium (W3C), Software in the Public Interest (originally set up to host the Debian distribution but now acting as the umbrella organisation for a number of projects) and ICANN, which manages the allocation of Internet domain names. The most well-known charities involved in developing free and open source software are probably:

They were joined in 2010 by the Document Foundation which was set up to continue the development, under the name LibreOffice,7 of OpenOffice, the open source office suite developed under the auspices of Sun Microsystems, and in 2013 by the MariaDB Foundation, set up to develop MariaDB as an alternative to MySQL after it had been acquired by Oracle.

All also provide software which runs on proprietary systems such as Microsoft Windows while Microsoft has contributed to the Linux Foundation to enable Windows computers to work more efficiently with Linux computers.

In practice Intel and IBM are among the most prolific contributors to free and open source software because they have realised the huge advantages of cooperating to make computer software work in as many situations as possible while Google, which owes its existence to the availability of free and open source software, makes many contributions in cash and kind to the various software foundations.

FOSS principles in practice

It is important to stress that no one who contributes to free and open source software loses copyright; indeed, because everyone who contributes retains copyright, free and open source programs often have very long lists of all those who have contributed, unlike proprietary programs, where copyright is owned by the company. So an immediate advantage for programmers is that, if they are going for a job, there is a publicly available record of what they have done which any potential employer can examine.

A second advantage is that all software is peer reviewed, either directly by other programmers working on the program or indirectly by users using and commenting on it. Though this system is not foolproof, as an unnoticed error in a Debian security program revealed a few years ago, it means potential weaknesses are more likely to be revealed in free and open source software than in proprietary software (Mockus et al., 2005). In the light of 9/11, the US Department of Homeland Security reviewed software to evaluate its vulnerability to attack; its broad conclusion was that, when software is initially released, it has about the same number of weaknesses whether it is proprietary or open source but, because weaknesses in open source software are fixed more quickly, open source software soon becomes more reliable than proprietary software.

A third advantage is the wide range of people involved in software development — from full-time programmers, through employees whose companies allow them to contribute code as part of their work and people who program in their spare time, to people who have limited programming skills but are prepared to contribute as users through bug hunting and developing documentation. The majority of contributors to a project may only contribute one small change, but the sum total of those small changes makes the key contributors’ job much easier. No individual company could ever gain access to such a wide spectrum of skills or feedback during the development of software.

A fourth advantage is that you can adapt the software to meet your needs without having to ask anyone’s permission and, in most cases, without telling anyone. You only need to understand what the software licence says if you decide to share your changes with anyone else. The most restrictive licence, GPLv3, requires you, if you distribute a modified version, to make your changes available to everyone under the same licence, and prohibits you from combining the software with proprietary software; other licences have weaker requirements about publishing changes and place no limits on links with proprietary software.

The unexpected dividend

Linus Torvalds is somewhat bemused at how far his student project has gone and, in particular, the way in which it has opened up the possibility for third world countries like Nepal, Bhutan and Vietnam to develop home-grown computer systems. Three technical features of Linux have been particularly helpful: its limited demands on hardware, its adoption of the UTF-8 encoding of Unicode and the way in which the instructions that appear on the screen are stored. But it has also had a major impact on the ways people work together.

The Linux kernel consists of a small core plus modules which can be built in permanently, loaded on demand or left out altogether. So programmers only use the modules relevant to what they want to do; that is why Linux can run on a supercomputer or a digital television. If your computer has limited hardware, you can create your own version of the kernel to suit it, something highly relevant in third world countries.
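To give a flavour of this modularity, here is a minimal sketch of a loadable kernel module in C; the module name ‘hello’ and its messages are purely illustrative, but the macros and build conventions are the standard kernel ones.

    /* hello.c: a minimal loadable kernel module (illustrative only) */
    #include <linux/init.h>
    #include <linux/kernel.h>
    #include <linux/module.h>

    MODULE_LICENSE("GPL");                  /* licence declaration the kernel expects */
    MODULE_DESCRIPTION("Minimal example module");

    static int __init hello_init(void)      /* called when the module is loaded */
    {
            pr_info("hello: loaded\n");
            return 0;                       /* zero signals successful initialisation */
    }

    static void __exit hello_exit(void)     /* called when the module is removed */
    {
            pr_info("hello: unloaded\n");
    }

    module_init(hello_init);
    module_exit(hello_exit);

Built against the kernel headers, such a module can be inserted with insmod and removed with rmmod; anything the core does not need simply stays out of memory.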

Unicode assigns a number to every character in all the known languages, living and dead, in the world. Because there are so many characters, a font only covers a limited number of them, but it is theoretically possible, using the right combination of fonts, to write documents in any known language using any Linux program. Since UTF-8 has also been adopted as the preferred encoding for web pages under HTML5, and ICANN has approved the use of non-Roman scripts for website names, something only Unicode can handle comprehensively, Linux is the perfect operating system for website development.
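As a small illustration of how UTF-8 works, the Devanagari letter ‘क’ (Unicode number U+0915) is stored as three ordinary bytes, so any program that can handle bytes can, in principle, handle any script. The following C fragment is a sketch of this point, not part of any particular Linux program:

    #include <stdio.h>

    int main(void)
    {
        /* U+0915, the Devanagari letter KA, encoded in UTF-8 */
        const unsigned char ka[] = { 0xE0, 0xA4, 0x95, 0x00 };

        printf("%s\n", (const char *)ka);      /* prints क on a UTF-8 terminal */
        for (int i = 0; ka[i] != 0; i++)
            printf("byte %d: 0x%02X\n", i, ka[i]);
        return 0;
    }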

More importantly, because the instructions that appear on screen are stored in simple lists within the program, it is relatively easy, if tedious in programs that have lots of messages for the user, for someone to create a translation of any Linux program, as has happened in Nepal, Bhutan and Vietnam (see the sketch below). No one needs to ask permission, and it depends not on a company decision but on the willingness of volunteers or, in some cases, governments to support people to do the job.
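The article does not name the mechanism, but on most Linux systems these lists of messages are handled by GNU gettext: the program looks each message up in a translation catalogue for the user’s language and falls back to the original if no translation exists. A minimal sketch in C, using a hypothetical text domain ‘myprog’, might look like this:

    #include <libintl.h>
    #include <locale.h>
    #include <stdio.h>

    #define _(msgid) gettext(msgid)   /* conventional shorthand for translatable text */

    int main(void)
    {
        setlocale(LC_ALL, "");        /* honour the user's locale, e.g. ne_NP for Nepali */
        bindtextdomain("myprog", "/usr/share/locale");  /* where catalogues are installed */
        textdomain("myprog");

        /* If a Nepali catalogue exists, this prints the Nepali translation;
           otherwise it prints the English original. */
        printf("%s\n", _("Hello, world!"));
        return 0;
    }

A translator needs nothing more than the catalogue file listing each original message alongside its translation; no change to the program itself is required.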

Forty years ago Richard Titmuss (1970) found that, in relation to blood donation, people who give blood free are more likely than paid donors to be committed to the quality of what they are giving. The FOSS community has gone one step further in bringing together paid and voluntary contributors in the joint pursuit of quality. Volunteers, whose only public reward is to see their name in the list of contributors, overwhelmingly outnumber those who are supported by their employers, and the leaders of free and open source software projects have had to hone their relationship skills to sustain that commitment.

There have been some problems in cultures where giving and, in particular, giving feedback is not the norm, where people have been reluctant to engage fully in the FOSS ethos. There have also been problems with people who have adopted exploitative or domineering approaches to software development, and there has sometimes been frustration with those who only take and never give back. But, relative to the success of FOSS worldwide, these have been irritations rather than major hurdles to its development. FOSS has developed, and continues to develop, out of a spirit of giving, whether by companies, governments or individuals. It is a model for co-operation between government, commercial and voluntary organisations that has stood the test of time, delivering quality outcomes to more people in the world than possibly any other community or voluntary enterprise.

Notes

1. In 2011 Novell was taken over by Attachmate, who announced that they would continue Novell’s approach to the rights to Unix.


2. The term is used in imitation of the four freedoms articulated by US President Franklin D. Roosevelt on January 6, 1941.


3. In 1999 Cygnus Solutions, as it was called by then, merged with RedHat.


4. In 2008 Sun Microsystems, a long-standing manufacturer of high performance computers, acquired MySQL for $1 billion. Since Oracle acquired Sun Microsystems in 2010, MySQL is now supported by Oracle.


5. In 2003 they were absorbed by Novell; when Novell was taken over by Attachmate in 2011, Novell and SUSE became separate divisions, an arrangement which continued following the merger of Attachmate with Micro Focus International in 2014.


6. Digital Equipment Corporation was acquired in 1998 by Compaq, and they in turn were acquired in 2002 by Hewlett-Packard, who continued to produce the Alpha processor for some years afterwards.


7. The name OpenOffice is owned by Oracle as a result of its acquisition of Sun Microsystems. Oracle declined an invitation to join the Document Foundation and in 2011 offered the OpenOffice suite to the Apache Foundation.

References

Biancuzzi, F. (2008, 7 April). Open source decade: 10 years after the Free Software Summit. Ars Technica (arstechnica.com).

Coleman, E. G. (2013). Coding freedom: the ethics and aesthetics of hacking. Princeton, NJ/Oxford: Princeton University Press.

Glass, R. L. (2005). Standing in front of the open source steamroller. In J. Feller, B. Fitzgerald, S. A. Hissam, and K. R. Lakhani (Eds.), Perspectives on free and open source software, Chapter 4, pp. 81-92. Cambridge MA/London: MIT Press.

Mockus, A., R. T. Fielding, and J. D. Herbsleb (2005). Two case studies of open source software development: Apache and Mozilla. In J. Feller, B. Fitzgerald, S. A. Hissam, and K. R. Lakhani (Eds.), Perspectives on free and open source software, Chapter 10, pp. 163-209. Cambridge MA/London: MIT Press.

Titmuss, R. M. (1970). The gift relationship: from human blood to social policy. London: Allen & Unwin.