
Parallel Programming. Offprint from: The Computer Journal, Vol. 1, No. 1, April 1958


by GILL, Stanley

Condition: Used, first edition (see description)
Seller: Koebenhavn V, Denmark (rated 5 of 5 stars by Biblio customers)
Item Price: £6,452.80
Shipping: FREE to USA (standard delivery: 2 to 2 days)

Payment Methods Accepted

  • Visa
  • Mastercard
  • American Express
  • Discover
  • PayPal

About This Item

London: The British Computer Society Ltd, 1958. First edition.

THE BIRTH OF PARALLEL COMPUTING: EXCEPTIONALLY RARE INSCRIBED OFFPRINT.

First edition, exceptionally rare separately-paginated offprint, inscribed by Gill, of "the first paper on parallel programming ... Subsequent papers on the subject did not appear for another seven years ... A decade later, interest in parallel programming had increased dramatically" (Dauben, p. 361). The origins of parallel programming can be traced to the early 19th-century work of Charles Babbage, Luigi Menabrea, and Ada Lovelace, but it was Gill who first formalised the concept, on 16 December 1957, in a lecture delivered to the British Computer Society and published as the offered paper.

He summarizes the aims of the paper as follows: "By 'parallel programming' is meant the control of two or more operations which are executed virtually simultaneously, and each of which entails following a series of instructions. This can be brought about in a single computer either by equipping it with more than one control unit, or by allowing time-sharing of one control unit between several activities; the latter case seems of greater practical interest. Some of the advantages to be gained and some of the programming problems to be solved in putting these ideas into practice are discussed." Gill goes on to discuss the meaning, problems, and potentialities of parallel programming, as well as time-sharing and automatic interruption. The paper concludes with comments and questions from the audience at Gill's lecture, together with his responses.

This prescient article correctly predicted the future importance of parallel computing: "The use of multiple control units within a single machine will enable still higher overall computing speeds to be achieved when the ultimate speed of a single arithmetical unit has been reached" (p. 6). In the 1960s and 1970s parallel computing was heavily utilized in industries that relied on large investments in R&D, such as aircraft design and defence, as well as in modelling scientific problems such as meteorology. Parallelism became of central importance in high-performance computing, especially with the advent of supercomputers in the late 1960s that employed multiple physical CPUs on nodes with their respective memory, networked together in a hybrid-memory model. Today, "parallel computing has become the dominant paradigm in computer architecture, mainly in the form of multi-core processors" (Wikipedia). Multi-core processors are used not only in modern supercomputers but also in desktop PCs (for example, Intel's Core Duo was the CPU for the first-generation Apple MacBook Pro).

Stanley Gill was one of the most important early computer scientists. From 1947 to 1950 he was employed at the National Physical Laboratory, working on punched-card computing and the design of the Pilot ACE, a cut-down version of Alan Turing's full ACE design and one of the world's first stored-program computers. From 1952 to 1955 he was a Research Fellow at St John's College, Cambridge, working in a team led by Maurice Wilkes on the EDSAC, the first full-size stored-program computer. With Wilkes and David Wheeler, Gill co-authored The Preparation of Programs for an Electronic Digital Computer (1951), "the first book on computer programming" (Tomash & Williams).

We have been unable to locate any other copy of this offprint, either in institutional collections or in commerce.


Provenance: Stanley Gill (1926-75), computer pioneer (inscribed on front wrapper 'With the author's compliments S. G.').


"Parallel processing is information processing that uses more than one computer processor simultaneously to perform work on a problem. This should not be confused with multitasking, in which many tasks are performed on a single processor by continuously switching between them, a common practice on serial machines. Computers that are designed for parallel processing are called parallel processors or parallel machines. Many parallel processors can also be described as supercomputers, a more general term applied to the class of computer systems that is most powerful at any given time.


"The need to coordinate the work of the individual processors makes parallel processing more complex than processing with a single processor. The processing resources available must be assigned efficiently, and the processors may need to share information as the work progresses. Parallel processors are used for problems that are computationally intensive, that is, they require a very large number of computations. Parallel processing may be appropriate when the problem is very difficult to solve or when it is important to get the results very quickly.


"Some examples of problems that may require parallel processing are image processing, molecular modeling, computer animations and simulations, and analysis of models to predict climate and economics. Many problems, such as weather forecasting, can be addressed with increasingly complex models as the computing power is developed to implement them, so there is always an incentive to create newer, more powerful parallel processors. Although early work in parallel processing focused on complex scientific and engineering applications, current uses also include commercial applications such as data mining and risk evaluation in investment portfolios. In some situations, the reliability added by additional processors is also important.


"Parallel processors are one of the tools used in high-performance computing, a more general term that refers to a group of activities aimed at developing and applying advanced computers and computer networks. In 1991 a U.S. federal program, the HPCC (High Performance Computing and Communications) program, was introduced to support the development of supercomputing, gigabit networking , and computation-intensive science and engineering applications. The HPCC program uses the term 'Grand Challenges' to identify computationally intensive tasks with broad economic and scientific impact that will only be solved with high performance computing technologies.


"As of 2002, most of the world's fastest computers are parallel processors. The number of processors may be from fewer than fifty to many thousands. Companies manufacturing these machines include IBM, SGI, Cray, Hitachi, and Fujitsu ...


"The two main categories of parallel processor are SIMD and MIMD. In a SIMD (Single Instruction, Multiple Data) machine, many processors operate simultaneously, carrying out the same operation on many different pieces of data. In a MIMD (Multiple Instruction, Multiple Data) machine, the number of processors may be fewer but they are capable of acting independently on different pieces of data ...


"Different parallel architectures have varying strengths and weaknesses depending on the task to be performed. SIMD machines usually have a very large number of simple processors. They are suited to tasks that are massively parallel, in which there are relatively simple operations to be performed on huge amounts of data. Each data stream is assigned to a different processor and the processors operate in lockstep (synchronously), each performing the same operation on its data at the same time. Processors communicate to exchange data and results, either through a shared memory and shared variables or through messages passed on an interconnection network between processors, each of which has its own local memory ...


"There is greater variety in the design of MIMD machines, which operate asynchronously with each processor under the control of its own program. In general, MIMD machines have fewer, more powerful processors than SIMD machines. They are divided into two classes: multiprocessors (also called tightly coupled machines) which have a shared memory, and multicomputers (or loosely coupled machines) which operate with an interconnection network. Although many of the earlier, high-performance parallel processors used in government research were very large, highly expensive SIMD machines, MIMD machines can be built more easily and cheaply, often with off-the-shelf components. Many different experimental designs have been created and marketed.


"In a serial algorithm, each step in the algorithm is completed before the next is begun. A parallel algorithm is a sequence of instructions for solving a problem that identifies the parts of the process that can be carried out simultaneously. To write a program for a parallel processor, the programmer must decide how each sub-task in the algorithm will be assigned to processors and in what order the necessary steps will be performed, and at what points communication is necessary. There can be many algorithms for a particular problem, so the programmer needs to identify and implement the one best suited for a particular parallel architecture.


"Sometimes 'software inertia,' the cost of converting programming applications to parallel form, is cited as a barrier to parallel processing. Some systems automatically adapt a serial process for parallel processing but this may not result in the best performance that can be obtained for that problem. In general, it is difficult to write parallel programs that achieve the kind of high-speed performance of which parallel processors are theoretically capable. Programming languages have been developed specifically for use on parallel processors to handle parallel data structures and functions, scheduling, and memory management ...


"The speedup that can be obtained on a parallel machine depends on the number of processors available, and also on the size of the problem and the way in which it can be broken into parts. Ideally, speedup would be linear so that five processors would give a speedup of five, or ten processors a speedup of ten. However, a number of factors contribute to sub-linear speedup, including additional software overhead in the parallel implementation, load balancing to prevent idle processors, and time spent communicating data between processors. A critical limitation is the amount of parallel activity that the problem allows. Gene Amdahl's Law says that the speedup of a parallel algorithm is limited by the fraction of the problem that must be performed sequentially" (Edie Rasmussen, ).


"The systems with multiple cores and processors were no longer just the domain of supercomputing but rather ubiquitous. The new laptops and even mobile phones contain more than one processing core. The mainstream adoption of parallel computing is a result of the cost of components dropping due to Moore's law that the number of transistors on a microchip doubles every two years, though the cost of computers halved ... The manufacturers such as Dell and Apple have produced even faster machines for the home market that easily outperform the supercomputers of old that once took a room to house. Devices that contain multiple cores allow us to explore parallel-based programming on a single machine" ().


Since the 1990s, many neuroscientists have argued that the human brain must function by means of parallel computation. Neurons are much slower than silicon-based components of digital computers. For this reason, neurons could not execute serial computation quickly enough to match rapid human performance in perception, linguistic comprehension, decision-making, etc. The only viable solution seems to be to replace serial computation with a "massively parallel" computational architecture in brain simulation models. It follows that artificial intelligence, which aims to replicate human intelligence in certain areas, will also make use of parallel processing. "There are several aspects of artificial intelligence which suggest that the application of some form of parallel computation and distributed systems might be appropriate" (Roosta, p. 501).


"Stanley Gill was born 26 March 1926 in Worthing, West Sussex, England. He was educated at Worthing High School for Boys and was, during his schooldays, a member of an amateur dramatic society. In 1943, he was awarded a State Scholarship and went to St John's College, Cambridge, where he read Mathematics/Natural Sciences. He graduated BA in 1947 and MA in 1950.


"Gill worked at the National Physical Laboratory from 1947 to 1950, where he met his wife, Audrey Lee, whom he married in 1949. From 1952 to 1955 he was a Research Fellow at St John's working in a team led by Maurice Wilkes; the research involved pioneering work with the EDSAC computer in the Cavendish Laboratory. In 1952, he developed a very early computer game. It involved a dot (termed a sheep) approaching a line in which one of two gates could be opened. The game was controlled via the lightbeam of the EDSAC's paper tape reader. Interrupting it (such as by the player placing their hand in it) would open the upper gate. Leaving the beam unbroken would result in the lower gate opening.


"He gained a PhD in 1953 and, following a year as Visiting Assistant Professor at the University of Illinois, Urbana, joined the Computer Department at Ferranti Ltd. In the UK in 1963 he was appointed Professor of Automatic Data Processing, UMIST, Manchester and, following various consultancies including International Computers Ltd he was appointed in 1964 to the newly created Chair of Computing Science and Computing Unit at Imperial College, University of London. This was later merged into the Imperial College Centre for Computing and Automation, of which Gill became director, whilst he worked as a consultant to the Ministry of Technology ...


"Gill travelled widely and advised on the establishment of departments of computing in several universities around the world. He was also President of the British Computer Society from 1967 to 1968" (Wikipedia).


Dauben, Abraham Robinson: The Creation of Non-Standard Analysis, 1995. Roosta, 'Artificial Intelligence and Parallel Processing,' pp. 501-534 in: Parallel Processing and Parallel Algorithms, 2000.



4to (280 x 210 mm), pp. 9, [3, blank] (journal pagination 2-10). Original buff printed wrappers (very minor creasing).

Details

Bookseller: SOPHIA RARE BOOKS DK (DK)
Bookseller's Inventory #: 6173
Title: Parallel Programming. Offprint from: The Computer Journal, Vol. 1, No. 1, April 1958
Author: GILL, Stanley
Book Condition: Used
Quantity Available: 1
Edition: First edition
Publisher: The British Computer Society Ltd
Place of Publication: London
Date Published: 1958

Terms of Sale

SOPHIA RARE BOOKS

30 day return guarantee, with full refund including shipping costs for up to 30 days after delivery if an item arrives misdescribed or damaged.

About the Seller

SOPHIA RARE BOOKS

Seller rating:
This seller has earned a 5 of 5 Stars rating from Biblio customers.
Biblio member since 2009
Koebenhavn V

About SOPHIA RARE BOOKS

Rare and important books in science.

