Category: Presentation Formats

  • Origins of the book

    Origins of the book
    The binding of a Chinese bamboo book (Sun Tzu’s The Art of War)
    The craft of bookbinding probably originated in India, where religious sutras were copied on to palm leaves (cut into two, lengthwise) with a metal stylus. The leaf was then dried and rubbed with ink, which would form a stain in the wound. The finished leaves were given numbers, and two long twines were threaded through each end through wooden boards, making a palm-leaf book. When the book was closed, the excess twine would be wrapped around the boards to protect the manuscript leaves. Buddhist monks took the idea through Afghanistan to China in the first century BC.

    Similar techniques can also be found in ancient Egypt where priestly texts were compiled on scrolls and books of papyrus. Another version of bookmaking can be seen through the ancient Mayan codex; only four are known to have survived the Spanish invasion of Latin America.

    Writers in the Hellenistic-Roman culture wrote longer texts as scrolls; these were stored in boxes or shelving with small cubbyholes, similar to a modern wine rack. Court records and notes were written on wax tablets, while important documents were written on papyrus or parchment. The modern English word book comes from the Proto-Germanic *bokiz, referring to the beechwood on which early written works were recorded.[4]

    The book was not needed in ancient times, as many early Greek texts—scrolls—were 30 pages long, which were customarily folded accordion-fashion to fit into the hand. Roman works were often longer, running to hundreds of pages. The Greeks called their books tomes, from a word meaning “to cut”. The Egyptian Book of the Dead was a massive 200 pages long and was used in funerary services for the deceased. Torah scrolls, editions of the Jewish holy book, were—and still are—also held in special holders when read.

    Scrolls can be rolled in one of two ways. The first method is to wrap the scroll around a single core, similar to a modern roll of paper towels. While simple to construct, a single core scroll has a major disadvantage: in order to read text at the end of the scroll, the entire scroll must be unwound. This is partially overcome in the second method, which is to wrap the scroll around two cores, as in a Torah. With a double scroll, the text can be accessed from both beginning and end, and the portions of the scroll not being read can remain wound. This still leaves the scroll a sequential-access medium: to reach a given page, one generally has to unroll and re-roll many other pages.

    Early book formats

    In addition to the scroll, wax tablets were commonly used in Antiquity as a writing surface. Diptychs and later polyptychs were often hinged together along one edge, analogous to the spine of modern books; a folding concertina format was also used. Such a set of simple wooden boards sewn together was called by the Romans a codex (pl. codices)—from the Latin word caudex, meaning ‘the trunk’ of a tree—and appeared around the first century AD. Two ancient polyptychs, a pentaptych and an octoptych, excavated at Herculaneum employed a unique connecting system that presages later sewing on thongs or cords.[5]

    At the turn of the first century, a kind of folded parchment notebook called pugillares membranei in Latin became commonly used for writing in the Roman Empire.[6] The term was used by both the pagan poet Martial and the Christian apostle Paul. Martial used it with reference to gifts of literature exchanged by Romans during the festival of Saturnalia. According to T. C. Skeat, these notebooks were “…in at least three cases and probably in all, in the form of codices”, and he theorized that this form of notebook was invented in Rome and then “…must have spread rapidly to the Near East…”[7] In his discussion of one of the earliest pagan parchment codices to survive from Oxyrhynchus in Egypt, Eric Turner seems to challenge Skeat’s notion, stating “…its mere existence is evidence that this book form had a prehistory” and that “early experiments with this book form may well have taken place outside of Egypt.”[8]

    Early intact codices were discovered at Nag Hammadi in Egypt. Consisting primarily of Gnostic texts in Coptic, the books were mostly written on papyrus, and while many are single-quire, a few are multi-quire. Codices were a significant improvement over papyrus or vellum scrolls in that they were easier to handle. However, despite allowing writing on both sides of the leaves, they were still foliated—numbered on the leaves, like the Indian books. The idea spread quickly through the early churches, and the word Bible comes from the town where the Byzantine monks established their first scriptorium, Byblos, in modern Lebanon. The idea of numbering each side of the page—Latin pagina, from pangere, “to fasten”—appeared when the text of the individual testaments of the Bible was combined and text had to be searched through more quickly. This book format became the preferred way of preserving manuscript or printed material.

  • Bookbinding

    Bookbinding

    Bookbinding is the process of physically assembling a book from an ordered stack of paper sheets that are folded together into sections or sometimes left as a stack of individual sheets. The stack is then bound together along one edge, either by sewing with thread through the folds or with a layer of flexible adhesive. For protection, the bound stack is either wrapped in a flexible cover or attached to stiff boards. Finally, an attractive cover is adhered to the boards and a label with identifying information is attached to the covers, along with any additional decoration. Book artists or specialists in book decoration can expand this description to include book-like objects of visual art, of high value and exceptional artistic merit, in addition to the book’s content of text and illustrations.

    Bookbinding is a specialized trade that relies on basic operations of measuring, cutting, and gluing. A finished book requires a minimum of about two dozen operations to complete, and sometimes more than double that, depending on the specific style and materials. All operations follow a specific order, and each one relies on accurate completion of the previous step, with little room for backtracking. Using the best hand techniques and finest materials produces an extremely durable binding, in contrast to a common publisher’s binding that falls apart after normal use.

    Bookbinding combines skills from other trades such as paper and fabric crafts, leather work, model making, and graphic arts. It requires knowledge about numerous varieties of book structures along with all the internal and external details of assembly. A working knowledge of the materials involved is required. A book craftsman needs a minimum set of hand tools but with experience will find an extensive collection of secondary hand tools and even items of heavy equipment that are valuable for greater speed, accuracy, and efficiency.

    History of bookbinding 


    Decorative binding with figurehead of a 12th Century manuscript – Liber Landavensis.

    9th Century Qur’an in Reza Abbasi Museum

    Sammelband of three alchemical treatises, bound in Strasbourg by Samuel Emmel c. 1568, showing metal clasps and leather covering of boards
    Western books from the fifth century onwards were bound between hard covers, with pages made from parchment folded and sewn on to strong cords or ligaments that were attached to wooden boards and covered with leather. Since early books were exclusively handwritten on handmade materials, sizes and styles varied considerably, and there was no standard of uniformity. Early and medieval codices were bound with flat spines, and it was not until the fifteenth century that books began to have the rounded spines associated with hardcovers today.[9] Because the vellum of early books would react to humidity by swelling, causing the book to take on a characteristic wedge shape, the wooden covers of medieval books were often secured with straps or clasps. These straps, along with metal bosses on the book’s covers to keep it raised off the surface that it rests on, are collectively known as furniture.[10]

    The earliest surviving European bookbinding is the St Cuthbert Gospel of about 700, in red goatskin, now in the British Library, whose decoration includes raised patterns and coloured tooled designs. Very grand manuscripts for liturgical rather than library use had covers in metalwork called treasure bindings, often studded with gems and incorporating ivory relief panels or enamel elements. Very few of these have survived intact, as they have been broken up for their precious materials, but a fair number of the ivory panels have survived, as they were hard to recycle; the divided panels from the Codex Aureus of Lorsch are among the most notable. The 8th century Vienna Coronation Gospels were given a new gold relief cover in about 1500, and the Lindau Gospels (now Morgan Library, New York) have their original cover from around 800.[11]

    Luxury medieval books for the library had leather covers decorated, often all over, with tooling (incised lines or patterns), blind stamps, and often small metal pieces of furniture. Medieval stamps showed animals and figures as well as the vegetal and geometric designs that would later dominate book cover decoration. Until the end of the period books were not usually stood up on shelves in the modern way. The most functional books were bound in plain white vellum over boards, and had a brief title hand-written on the spine. Techniques for fixing gold leaf under the tooling and stamps were imported from the Islamic world in the 15th century, and thereafter the gold-tooled leather binding has remained the conventional choice for high quality bindings for collectors, though cheaper bindings that only used gold for the title on the spine, or not at all, were always more common. Although the arrival of the printed book vastly increased the number of books produced in Europe, it did not in itself change the various styles of binding used, except that vellum became much less used.[12]

    Up to the early nineteenth century books were hand-bound, sometimes using precious metals such as gold or silver and costly materials, including jewels, to cover valuable books and manuscripts. Book bindings had existed as a functional device for hundreds of years, both to protect the pages and also as an elaborate decorative element, to reflect the value of the contents.

    The process of bookbinding began to change with the introduction of mechanical bookbinding techniques in the 1820s. The availability of cheaper materials such as cloth and paper and the faster industrialised process of mechanically-produced paper and steam-powered presses gave rise to a greater volume of books at a lower cost per book. Hand-binding began to wane as a consequence.

     
    Historical forms of binding

    • Coptic binding: a method of sewing leaves/pages together
    • Ethiopian binding
    • Long-stitch bookbinding
    • Islamic binding, with a distinctive flap on the back cover that wraps around to the front when the book is closed
    • Wooden-board binding
    • Limp vellum binding
    • Calf binding (“leather-bound”)
    • Paper case binding
    • In-board cloth binding
    • Cased cloth binding
    • Bradel binding
    • Traditional Chinese and Korean bookbinding and Japanese stab binding
    • Girdle binding
    • Anthropodermic bibliopegy (rare, and sometimes fictional): bookbinding in human skin
    • Secret Belgian binding (or “criss-cross binding”), invented in 1986, was erroneously identified as a historical method.

    Basic types of binding

    Source: Bookbinding Workshop Singapore overview – http://www.bookbindingworkshopsg.com/bookbinding-techniques/

    Bookbinding is an artistic craft of great antiquity, and at the same time a highly mechanized industry. The division between craft and industry is not so wide as might at first be imagined. It is interesting to observe that the main problems faced by the mass-production bookbinder are the same as those that confronted the medieval craftsman or the modern hand binder. The first problem is still how to hold together the pages of a book; the second is how to cover and protect the gathering of pages once they are held together; and the third is how to label and decorate the protective cover.[1] Few crafts can give as much satisfaction at all stages as bookbinding—from making a cloth cover for a paperback, or binding magazines and newspapers for storage, to the ultimate achievement of a fine binding in full leather with handmade lettering and gold tooling.[2]

    Before the computer age, the bookbinding trade had two divisions. First, there was stationery binding (known as vellum binding in the trade), which deals with making new books intended to be written into by hand: accounting ledgers, business journals, blank books and guest log books, along with other general office stationery such as notebooks, manifold books, day books, diaries and portfolios. Second was letterpress binding, which deals with making new books intended to be read from and includes fine binding, library binding, edition binding, and publisher’s bindings.[3] Alongside these, a third division deals with the repair, restoration, and conservation of old, used bindings. With the digital age, personal computers have replaced the pen-and-paper accounting that used to drive most of the work in the stationery binding industry.

    Today, modern bookbinding is divided between hand binding by individual craftsmen working in a one-room studio shop and commercial bindings mass-produced by high-speed machines in a production-line factory, with a broad grey area between the two. The size and complexity of a bindery varies with the type of work: one-of-a-kind custom jobs, repair and restoration work, library rebinding, preservation binding, small edition binding, extra binding, and finally large-run publisher’s binding. In some cases the printing and binding jobs are combined in one shop. Each step up to the next level of mechanization is justified by economies of scale, up to production runs of ten thousand copies or more in a factory employing a dozen or more workers.

    Single leaf binding

    Double fan binding

    Case and bradel binding

    Coptic stitch vs kettle stitch

    Coptic binding

    Coptic binding or Coptic sewing comprises methods of bookbinding employed by early Christians in Egypt, the Copts, and used from as early as the 2nd century AD to the 11th century. The term is also used to describe modern bindings sewn in the same style.

    Coptic bindings, the first true codices, are characterized by one or more sections of parchment, papyrus, or paper sewn through their folds, and (if more than one section) attached to each other with chain stitch linkings across the spine, rather than to the thongs or cords running across the spine that characterise European bindings from the 8th century onwards. In practice, the phrase “Coptic binding” usually refers to multi-section bindings, while single-section Coptic codices are often referred to as “Nag Hammadi bindings,” after the 13 codices found in 1945 which exemplify the form.

    Nag Hammadi bindings

    Nag Hammadi bindings were constructed with a textblock of papyrus sheets, assembled into a single section and trimmed along the fore edge after folding to prevent the inner sheets from extending outward beyond the outer sheets. Because the inner sheets were narrower than the outer sheets after trimming, the width of text varied through the textblock, and it is likely that the papyrus was not written on until after it was bound; this, in turn, would have made it a necessity to calculate the number of sheets needed for a manuscript before it was written and bound.[3][4] Covers of Nag Hammadi bindings were limp leather, stiffened with waste sheets of papyrus. The textblocks were sewn with tackets, with leather stays along the inside fold as reinforcement. These tackets also secured the textblock to the covers; on some of the Nag Hammadi bindings, the tackets extended to the outside of the covering leather, while on others the tackets were attached to a strip of leather which served as a spine liner, and which was in turn pasted to the covers.[5] A flap, either triangular or rectangular, extended from the front cover of the book, and was wrapped around the fore edge of the book when closed. Attached to the flap was a long leather thong which was wrapped around the book two or three times, and which served as a clasp to keep the book securely shut.

    Multi-section Coptic bindings

    Multi-section Coptic bindings had cover boards that were initially composed of layers of papyrus, though by the 4th century, wooden boards were also frequent. Leather covering was also common by the 4th century, and all subsequent Western decorated leather bindings descend from Coptic bindings.

    Approximately 120 original and complete Coptic bindings survive in the collections of museums and libraries, though the remnants of as many as 500 Coptic bindings survive.

    The few very early European bindings to survive use the Coptic sewing technique, notably the St Cuthbert Gospel in the British Library (c. 698) and the Cadmug Gospels at Fulda (c. 750).

    Modern Coptic bindings

    Modern Coptic bindings can be made with or without covering leather; if left uncovered, a Coptic binding is able to open 360°. If the leather is omitted, a Coptic binding is non-adhesive, and does not require any glue in its construction.

    Artisans and crafters often use Coptic binding when creating handmade art journals or other books.

    Ethiopian binding

    The Ethiopian bookbinding technique is a chain-stitch sewing that looks similar to the multi-section Coptic binding method. According to J. A. Szirmai, the chain-stitch binding dates from about the sixteenth century in Ethiopia. These books typically had paired sewing stations, sewn using two needles for each pair of stations (two holes, two needles; six holes, six needles; and so on). The covers were wooden and attached by sewing through holes made in the edge of the board. Most of these books were left uncovered and without endbands.

    Japanese bookbinding

    Islamic bookbinding

    Case and Bradel binding

    A Bradel binding (also called a bonnet or bristol-board binding, a German case binding, or known in French as Cartonnage à la Bradel or en gist) is a style of bookbinding with a hollow back. It most resembles a case binding in that it has a hollow back and a visible joint, but unlike a case binding it is built up on the book. Characteristic of the binding is that the material covering the outside boards is separate from the material covering the spine. Many bookbinders consider the Bradel binding to be stronger than a case binding.

    This type of binding may be traced to 18th century Germany. The originator is uncertain, but the name comes from a French binder working in Germany, Alexis-Pierre Bradel (also known as Bradel l’ainé or Bradel-Derome). The binding originally appeared as a temporary binding, but the results were durable, and the binding had great success in the nineteenth century. Today, it is most likely to be encountered in photo albums and scrapbooks.

    The binding has the advantage of allowing the book to open fully, where traditional leather bindings are too rigid. It is sometimes modified to provide a rounded spine: this lends the appearance of a rounded book where the paper is not suited to spine rounding, or where the book is too thin for a rounding to hold. The binding may also provide an impressive-looking leather spine without incurring the full expense of binding the book in full or partial leather.

  • Book Design Process

    Working to a ‘Brief’

    A book designer generally works to a ‘brief’ – a specific set of requirements for a particular project. The brief may be set by an external agency, or it may be self-initiated. The scope of the brief may vary in terms of how much creative input the designer can exercise. In some assignments the designer is provided with text and images, along with clear guidelines as to how these are to be set out. In other cases they may be provided with a brief outline of content and title and asked to ‘come up with ideas’ – to devise concepts for cover images, for example.

    The role of the designer

    The designer’s role is collaborative and communicative. The designer is responsible for the visual elements on the page, the structure, arrangement and layout of typography and images. The role can be highly creative, particularly when the role crosses over into art direction; where this is the case, the designer’s ideas play a major part in shaping the visual book form.

    There is a clear distinction between:

    • editorial roles: an editor deals with all the text
    • design roles: a designer deals with the arrangement of the text and images but never edits the text. Although errors in the text may be apparent, a designer never makes corrections without first alerting the client and the editor.

    Depending on the publishing and production model used, the designer may
    be largely responsible for aspects of the proposal, development and realisation of the book form and may oversee the control of various elements as the book makes its way through the production process, ultimately checking printer’s proofs and ‘signing off’ a project when it is ready to go to print.

    Creative Design Process

    Book design is related to graphic design and a similar working process underpins much of the creative thinking and evolution of any particular design job. The creative design process includes the following stages:

    • Ideas generation
    • Research
    • Development
    • Visuals
    • Presentation

    But it is not a prescriptive process – key phases (e.g. research and development) often overlap and link quite organically. Design work generally follows a cyclical rather than a linear process, repeating the phases many times, from micro to macro scale and back, to refine and ultimately conclude the design.

    Generating ideas
    The design process begins with generating visual ideas. In this early formative stage, be as wide-ranging and imaginative as possible. ALL ideas are valid at this point, so don’t censor; this is not the stage to decide what is a ‘good’ or ‘bad’ idea – at this point they are all just ‘ideas’ with equal merit. Record your thought processes and ideas using (in no particular order):

    • Brainstorming/spidergrams/mindmaps: ideas are expressed in short sentences; ideas can be triggered by previous ones or be entirely new; bizarre ideas are encouraged as good triggers; freethinking – the more the merrier; all ideas are stated uninterrupted and accepted – none are rubbished. Concept maps such as spidergrams can be used to explore concepts and the connections between them; these can be coloured, rearranged, reworked and redrawn to progressively clarify and then link, or deconstructed again to generate new thoughts and angles. It may be useful to mix working with computer programmes like Mindjet, Inspiration or iThoughtsHD with work on paper.
    • Thumbnail sketches: quick pen or pencil line drawings that record a fleeting idea and can give an indication of composition and art direction. For example, how does the subject sit in the frame? How is the subject lit? What particular attributes does that subject have? Experimentation in digital form using Illustrator or Photoshop can usefully complement the work on paper, quickly exploring different compositions and colour combinations from scanned sketches.
    • Annotation in writing or ‘sketchnoting’ on the paper thumbnails. Again, it is often useful to scan in combinations and get printouts that can be scribbled over without overshadowing the original ideas.

    It is important to let one idea flow fluidly, intuitively and organically into another to make unexpected links and associations.

     

    Review and selection
    Review your thumbnail sketches and analyse each one through a process of critical evaluation. Which ideas are you drawn to? Which ideas have ‘legs’ – possible interesting outcomes which are worth pursuing? Often the ideas which are strongest are those which have depth, or many layers of association. Perhaps you are intuitively drawn to a particular idea. Select several ideas/thumbnails which you would like to develop further.

    Research and development

    The form your research will take depends on the individual elements of your idea. It may be that you need to make some objective drawings, for example, to understand your subject better, and to consider aspects of composition. Other research activities include arranging a photoshoot to further explore your visual ideas, or going on-line to source material that informs your ideas. You can use both primary and secondary sources of research in this way. Research feeds into the development of your visual work, informing and advancing your ideas. Document this phase of the work accordingly.

    Visuals
    This is the culmination of all your preliminary work. Work up some more developed visual sketches. These can be hand-drawn illustrations, photographs, and/or include typography. The presentation can be a little rough around the edges but should show the main elements of the design.

    Presentation
    Present your ideas as finished visual images. Create digital files of your images, making sure these are a reasonable resolution – 180dpi is a good minimum, 300dpi is optimum.
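    A quick way to check whether a digital file is large enough is to multiply the intended print dimensions by the target resolution. A minimal sketch of that arithmetic (the 6 × 9 inch cover size is an invented example; the dpi figures are the ones quoted above):

    ```python
    def pixels_needed(width_in, height_in, dpi):
        """Pixel dimensions required to print at the given size and resolution."""
        return round(width_in * dpi), round(height_in * dpi)

    # A hypothetical 6 x 9 inch cover visual at the two resolutions quoted above
    print(pixels_needed(6, 9, 180))  # (1080, 1620) -- acceptable minimum
    print(pixels_needed(6, 9, 300))  # (1800, 2700) -- optimum
    ```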

  • Fanzines

     

    The term ‘fanzine’ was first used by the US science-fiction enthusiast Louis Russell Chauvenet in 1940, and by 1949 it was in common use.

    ‘Fanzine’ was abbreviated to ‘zine’ in the 1970s. The rise of fanzines was part of the punk subcultural response to mainstream society – in this case, mainstream print.

    • Distribution: zines were hand-made publications produced in small quantities on an irregular basis. They were usually small enough to fit easily in the hand, although sometimes they were oversized broadsheets. They were distributed by hand and word of mouth, via independent music or book stores, or through zine fairs and symposia.
    • Readership: super-niche interest groups and the cultural underground.
    • Production: created by a single producer acting as both author and designer, unencumbered by censorship or corporate strategy. Producers were often readers and/or fans sharing the same interests.
    • Subject matter: as ‘genuine voices outside of all mass manipulation’, zines explored a wide variety of themes – political, humorous, poetic, underground music – not necessarily represented in more conventional print. They were also a forum for personal experimentation: ‘perzines’ offered unique auto/biographical snapshots, a ‘practice of self-making though zine-making [that] is particularly momentary’. Sometimes they were a testing ground for ideas which then moved to the mainstream, e.g. Giant Robot and Bust Magazine.
    • Style: a lively, do-it-yourself style uninhibited by design conventions, with often chaotically lively layouts.
    • Cheap and ephemeral: they were often printed using photocopiers, stencils and other ‘hands-on’ processes. Sometimes they were more three-dimensional and incorporated recycled objects or materials.
    • Materials: different coloured papers, crayons, felt-tip markers, ribbons, stickers, collages, photos and hand-drawn illustrations, often made with very basic tools: scissors and glue.
    • Typography: handwritten, typewritten or rub-down lettering.

    Feminist zines use provocative language, sexual imagery and a mix of styles aiming to shock. The Guerrilla Girls, a feminist group fighting sexism in arts practice, produced many fanzines. Formed in New York in 1985, the group maintain their anonymity by wearing gorilla masks and using the names of dead female artists as pseudonyms, e.g. Frida Kahlo and Hannah Höch. They put pressure on organisations such as the Metropolitan Museum of Art in New York by uncovering statistics that reveal the extent of patriarchy in the art world past and present. The original group disbanded in 2001 but several Guerrilla Girls spin-offs still exist. Recent campaigns include ‘Unchain female directors’, targeted at the male-dominated world of the Hollywood film studio.

    In the 1990s faux fanzines started to be produced by multinational companies, e.g. Dirt by Warner Brothers and Full Voice by The Body Shop.

    There are now also online and digital forms.

  • History of book design

    Origins

    The earliest forms of books were scrolls produced by Egyptian scribes over 4,000 years ago.

    • Images and vertical text were hand-drawn onto palm leaves, then later onto papyrus scrolls.
    • Papyrus was made from the pith of the papyrus plant and was rather like thick paper. It was used throughout the ancient world until the development of parchment.
    • Parchment was a superior material to papyrus. Made from dried, treated animal skin, parchment could be written on on both sides and was more pliable than papyrus, which meant that it could be folded.
    • Folding a large parchment sheet in half created two folios – a word we still use today to number pages. Folding the sheet in half again created a quarto (4to), and folding that in half again made eight leaves – an octavo (8vo); the doubling is sketched below. The development of parchment created a break with the scroll form. Folded pages were now piled together and bound along one edge to create a codex, a manuscript text bound in book form.
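    A minimal sketch of the doubling behind those format names (each fold halves the sheet and doubles the number of leaves; each leaf carries two pages):

    ```python
    # Each fold of the sheet doubles the number of leaves (folios);
    # every leaf carries two pages (front and back).
    for folds, name in [(1, "folio"), (2, "quarto / 4to"), (3, "octavo / 8vo")]:
        leaves = 2 ** folds
        print(f"{folds} fold(s): {leaves} leaves ({leaves * 2} pages) -> {name}")
    ```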

    Paper, invented in China, spread through the Islamic world to reach medieval Europe in the 13th century, where the first European paper mills were built. See the full ‘Paper’ post.

    Skilled hand-lettering was laborious and time-consuming and a world apart from the printing methods of today.

    Illuminated manuscripts

    The term manuscript comes from the Latin for hand ‘manus’ and writing ‘scriptum’. Illuminated manuscripts, often containing religious, historical or instructive texts, were coloured with rich and delicate pigments, often with the addition of gold leaf. These were objects of rare beauty. Bound manuscripts were produced in Britain from around 600 to 1600.

    The advent of movable type

    Movable type brought about a massive revolution in the way books were designed, produced and perceived.
    Sandcast type was used in Korean book design from around 1230 and woodblocks were used to print paper money and cards in China from the seventh century.
    Johann Gutenberg produced the first western book printed using movable type in 1454. This was the Gutenberg Bible or ‘42-line Bible’. This led the way for a revolution in the way books were designed and printed. Having set the metal type, the printer could then produce multiple copies. The printing process made books much more widely available to a larger audience. By 1500, printing presses in Western Europe had produced more than twenty million books.

    Arts and Crafts movement

    Private English presses such as Doves Press and the Ashendene Press typified the publishing industry in the early part of the twentieth century. The influential designer, craftsman, artist and writer William Morris (1834–96) founded the Kelmscott Press in 1891; this was dedicated to publishing limited edition, illuminated style books. The designer Eric Gill (1882–1940), a
    fellow member of the Arts and Crafts movement, designed books for both English and German publishers. Gill also produced The Canterbury Tales (1931) for Golden Cockerel Press, which was one of the last English presses still going strong after 1925.

    While English and German publishers were known for the quality and craftsmanship of their typography and overall book design, French publishers such as Ambroise Vollard (1866–1939) focused on the illustrative elements of book design. ‘Livres de luxe’ were expensive editions of books illustrated by contemporary artists such as Bonnard, Chagall, Degas, Dufy and Picasso.

    20th Century

    Artistic movements had a real and direct impact on book design in the twentieth century, with the Fauvists, Futurists, Dadaists, Constructivists and Bauhaus feeding into a febrile pot of manifestos, ideas and approaches to typography and book design.
    Allen Lane founded Penguin Books in England in 1935, with the aim of producing affordable books for the masses. The books were characterised by strong typographic and design principles. In the early 1950s designers Jan Tschichold and Ruari McLean created modernist iconographic cover designs for Penguin books.
    Advancing print technologies – letterpress, then offset lithography – and the development of graphic design gave rise to a plethora of colour-printed material, making book design one of the earliest and best examples of mass communication.

    Fanzines

    The digital era

    The 1980s and 1990s saw the burgeoning of desktop publishing (DTP). Book design was no longer bound by the constraints of metal typesetting. Apple Macintosh computer systems enabled book designers to integrate text and images into multiple pages digitally, on-screen. This move away from traditional design and printing processes created massive upheaval in the publishing industry, and many long-established forms of working were usurped by the new digital technology.

    A new wave of graphic and book designers emerged who embraced the new technology and, like the Futurists and Dadaists before them, questioned and experimented with some of the conventional approaches to typography and book design. Designers such as Neville Brody (Fuse magazine) and David Carson (The End of Print) captured the experimental mood of the time.

    The revolution in printing processes continues apace today, but the book in its traditional form remains a pervasive presence alongside its digital counterparts – the e-book is a good example. The internet has revolutionised the way book designers work, making distance book design work a  commonplace reality. In addition, a huge and often overwhelming range of fonts, images and resources is immediately available online. The word ‘font’ has entered everyday vocabulary – even for schoolchildren – and choosing the best font for the job is now something that many of us do almost without thinking. DTP means that everyone can potentially access what they need to design a book. From a purist perspective, the inherent danger with this creative freedom is that poor design choices result from uninformed ‘quick-fix’ solutions. The positive aspect is that the designer has never before had so many options to choose from, in terms of typography, design and production values.

  • Book Covers

    Book Covers

    Book cover design can be:

    • ‘conceptual’ or ideas-based, where there is an underlying message within the design;
    • ‘expressive’, employing representational or abstract imagery in a variety of media to communicate the main message/s of the book;
    • designed mainly around photographic, illustrative and typographic elements.

    The cover, or dustjacket, serves several purposes:

    To spell out the contents:

    • So that you are in no doubt as to what you are buying. This tends to mean that many titles within a particular genre look the same.
    • Sex sells: twentieth-century American pulp fiction covers often used an archetypal hour-glass figure of a woman (with smoking gun) to entice the buying audience. These covers were printed post-war, using inks that produced vibrant colours.

    Branding for the publisher:  to create a positive association with a particular publisher and to build a relationship with the book-buying audience.

    • To protect the book.
    • To express something of its contents and nature – to sell the product in a highly competitive market.

    History

    Early books were handbound with strong, heavy covers. In the Victorian era cheap paper-covered reprints were widely available. In the 19th century, as books became cheaper to produce and with developments in printing processes such as colour lithography, the book cover became more than a functional protective device: it was a space to advertise and communicate information about the book’s content. Poster designers and graphic designers of the era began to use it as such.

    Cover of the Yellow Book by Aubrey Beardsley

    Aubrey Beardsley’s covers for The Yellow Book (1894–97) are a good example of design used to promote a publication.

    The Arts and Crafts movement of the late nineteenth and early twentieth centuries revitalised interest in book cover design, and this began to influence and infiltrate mainstream publishing.

    El Lissitzky, book cover for the Isms of Art: 1914–1924.

    In the 1920s radically modern cover designs were produced in the Soviet Union by Aleksandr Rodchenko and El Lissitzky.

    Victor Gollancz book covers

    In the late 1920s the publisher Victor Gollancz carried out research on busy railway platforms, noticing which colours caught the eye on the book covers that appeared on newsstands, as seen through the crowds. Based on his research he designed his publishing ‘house style’, using what was at the time a very bright yellow, with inventive black and magenta typography. After black and white, yellow and black is the most easily readable colour combination.

    Penguin Books: ‘What is cheap need not be nasty’. Britain’s approach to cover design was somewhat more restrained. When Allen Lane approached the established publisher Bodley Head in the late 1920s with ideas for a new, affordable approach to book design, his ideas were turned down. Lane went on to form Penguin Books and to champion a new publishing model in economically depressed Britain. Penguin’s iconic orange, black and white covers from the same era are a striking example of simple and effective design: clear, uncluttered and an early example of successful branding. In the mid-1930s Penguin formed part of the ‘paperback revolution’, producing affordable books with quality design, and their publishing identity sought to be associated with this approach. Penguin’s designs used classic yet modern typography within a clearly defined structure. The template was set up by Jan Tschichold in 1947 and broadly applied to all Penguin’s books. Penguin’s approach has become a defining mainstay of British book design and an excellent example of successful book branding.

    Producing a cover

    The cover has been likened to a mini-poster, and in many respects it serves the same purpose, in that the design needs to grab the attention of its audience within a few seconds. Sometimes the cover is for a new book; publishers also periodically re-vamp cover designs to tie in with promotional features, anniversaries of the book’s original publication, film versions and other marketing opportunities.

    Elements

    • Concept
    • Typography: the title, the author’s name, subtitle and quotes.
    • Colour
    • Imagery : As a general rule of thumb, to have maximum effect the cover usually bears a single image. There are, of course, exceptions to this rule; many manuals, non-fiction and ‘how to’ books use multiple images. Dorling Kindersley publications, for example, are recognisable for their crisp colour cutout photographs on a white background.

    In publishing workflow, the cover is treated as a separate entity to the main book contents. The evolution of a cover design, from inception to completion, can take as long as the design of the main book itself. For example, a reasonable timespan for the design and publication of a 256-page illustrated book could be nine months. The cover or jacket design for the same book can take just as long, even though the image and textual material is significantly less. This is due in part to the many requirements that a cover has to fulfil, including commercial and marketing aspects.

    The marketing and sales departments within publishing organisations know the importance of the cover with regard to revenue, so often the design of a cover involves considering a great many aspects, to meet multiple needs. This can sometimes confuse the brief. Cover design meetings can turn into ‘design by committee’, with all parties – editor, designer, sales and marketing – having their input, often with different approaches to the project. This inevitably
    slows the process and can lead to conflicting messages for the designer.

    Whilst it’s important to take on board everyone’s input, and adjust designs accordingly if required, ‘design by committee’ can be confusing. Essentially, the brief for a cover design needs to be clear at the outset, so that the designer has clear parameters to work within. As a designer it can occasionally be your role to argue the merits for what you consider to be a strong cover design, one that has quality and integrity within the various elements of the design.

    See design workflow

  • The Yellow Book

    edited from Wikipedia The Yellow Book

    Cover of the Yellow Book by Aubrey Beardsley

    The Yellow Book was a British quarterly literary periodical that was published in London from 1894 to 1897. It was a leading journal of the British 1890s and lent its name to the “Yellow Nineties”. The magazine contained a wide range of literary and artistic genres: poetry, short stories, essays, book illustrations, portraits, and reproductions of paintings.

    It was published by Elkin Mathews and John Lane, and later by John Lane alone, and edited by the American Henry Harland.

    Aubrey Beardsley was its first art editor, and he has been credited with the idea of the yellow cover, with its association with illicit French fiction of the period. He obtained works by such artists as Charles Conder, William Rothenstein, John Singer Sargent, Walter Sickert, and Philip Wilson Steer. The literary content was no less distinguished; authors who contributed were: Max Beerbohm, Arnold Bennett, “Baron Corvo“, Ernest Dowson, George Gissing, Sir Edmund Gosse, Henry James, Richard Le Gallienne, Charlotte Mew, Arthur Symons, H. G. Wells, William Butler Yeats and Frank Swettenham. A notable feature was the inclusion of work by women writers and illustrators, among them Ella D’Arcy and Ethel Colburn Mayne (both also served as Harland’s subeditors), George Egerton, Charlotte Mew, Rosamund Marriott Watson, Ada Leverson, Netta and Nellie Syrett, and Ethel Reed.

    It was to some degree associated with Aestheticism and Decadence, but also with style. The prospectus for the first issue of The Yellow Book introduces it “as a book in form, a book in substance; a book beautiful to see and convenient to handle; a book with style, a book with finish; a book that every book-lover will love at first sight; a book that will make book-lovers of many who are now indifferent to books”. The periodical was priced at 5 shillings.

    Cover: The Yellow Book‘s brilliant colour immediately associated the periodical with illicit French novels – an anticipation, many thought, of the scurrilous content inside.  It was issued clothbound.

    Art separate from text:  Harland and Beardsley rejected the idea that the function of artwork was merely explanatory: “There is to be no connection whatever [between the text and illustrations]. [They] will be quite separate”. The equilibrium which The Yellow Book poses between art and text is emphasized by the separate title pages before each individual work whether literary or pictorial.

    Page layout: The Yellow Book‘s mise-en-page differed dramatically from current Victorian periodicals: “… its asymmetrically placed titles, lavish margins, abundance of white space, and relatively square page declare The Yellow Book’s specific and substantial debt to Whistler”. The use of white space is positive rather than negative, simultaneously drawing the reader’s eye to the blank page as an aesthetic and essentially created object. 

    Typography: The decision to print The Yellow Book in Caslon-old face further signified the ties which The Yellow Book held to the Revivalists. Caslon-old face, “an eighteenth-century revival of a seventeenth-century typographical style” became “the type-face of deliberate and principled reaction or anachronism”. A type-face generally reserved for devotional and ecclesiastical work, its use in the pages of The Yellow Book at once identified it with the “Religion of Beauty”.

    Use of catch-words on every page enhanced The Yellow Book’s link to the obsolescent. Both antiquated and obtrusive, the catch-word interrupts the cognitive process of reading: “making-transparent … the physical sign which constitutes the act of reading; and in doing this, catch-words participate in the ‘pictorialization’ of typography”. By interrupting readers through the very use of irrelevant text, catch-words lend the printed word a solidity of form which is otherwise ignored.

  • Digital Printing

    THE PIXEL: A FUNDAMENTAL UNIT OF DIGITAL IMAGES

    Every digital image consists of a fundamental small-scale descriptor: THE PIXEL, invented by combining the words “PICture ELement.” Each pixel contains a series of numbers which describe its color or intensity. The precision to which a pixel can specify color is called its bit or color depth. The more pixels your image contains, the more detail it has the ability to describe (although more pixels alone don’t necessarily result in more detail; more on this later).

    PRINT SIZE: PIXELS PER INCH vs. DOTS PER INCH

    Since a pixel is just a unit of information, it is useless for describing real-world prints — unless you also specify their size. The terms pixels per inch (PPI) and dots per inch (DPI) were both introduced to relate this theoretical pixel unit to real-world visual resolution. These terms are often inaccurately interchanged — misleading the user about a device’s maximum print resolution (particularly with inkjet printers).

    “Pixels per inch” (PPI) is the more straightforward of the two terms. It describes just that: how many pixels an image contains per inch of distance (horizontally or vertically). PPI is also universal because it describes resolution in a way that doesn’t vary from device to device.

    “Dots per inch” (DPI) may seem deceptively simple at first, but the complication arises because multiple dots are often needed to create a single pixel — and this varies from device to device. In other words, a given DPI does not always lead to the same resolution. Using multiple dots to create each pixel is a process called “dithering.”

    Printers use dithering to create the appearance of more colors than they actually have. However, this trick comes at the expense of resolution, since dithering requires each pixel to be created from an even smaller pattern of dots. As a result, images will require more DPI than PPI in order to depict the same level of detail.
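    As a rough illustration of that trade-off, suppose (purely as an assumption for this sketch) that a printer builds each image pixel from a 4 × 4 cell of dots. The effective pixel resolution is then the dot resolution divided by the cell width:

    ```python
    def effective_ppi(printer_dpi, dither_cell_width):
        """Effective pixels per inch when each pixel is rendered as a
        dither_cell_width x dither_cell_width pattern of printer dots
        (an illustrative simplification, not a real driver model)."""
        return printer_dpi / dither_cell_width

    # Hypothetical 1440 DPI inkjet using a 4 x 4 dithering cell per pixel
    print(effective_ppi(1440, 4))  # 360.0 effective PPI
    ```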

    In the source tutorial’s example image, note how the dithered version is able to create the appearance of 128 pixel colors — even though it has far fewer dot colors (only 24). However, this result is only possible because each dot in the dithered image is much smaller than the pixels.

    The standard for prints done in a photo lab is about 300 PPI, but inkjet printers require several times this number of DPI (depending on the number of ink colors) for photographic quality. The required resolution also depends on the application; magazine and newspaper prints can get away with much less than 300 PPI.

    However, the more you try to enlarge a given image, the lower its PPI will become…

    MEGAPIXELS AND MAXIMUM PRINT SIZE

    A “megapixel” is simply a million pixels. If you require a certain resolution of detail (PPI), then there is a maximum print size you can achieve for a given number of megapixels. The following chart gives the maximum print sizes for several common camera megapixels.

    Megapixels    Max 3:2 print at 300 PPI    Max 3:2 print at 200 PPI
    2             5.8″ x 3.8″                 8.7″ x 5.8″
    3             7.1″ x 4.7″                 10.6″ x 7.1″
    4             8.2″ x 5.4″                 12.2″ x 8.2″
    5             9.1″ x 6.1″                 13.7″ x 9.1″
    6             10.0″ x 6.7″                15.0″ x 10.0″
    8             11.5″ x 7.7″                17.3″ x 11.5″
    12            14.1″ x 9.4″                21.2″ x 14.1″
    16            16.3″ x 10.9″               24.5″ x 16.3″
    22            19.1″ x 12.8″               28.7″ x 19.1″

    Note how a 2 megapixel camera cannot even make a standard 4×6 inch print at 300 PPI, whereas it requires a whopping 16 megapixels to make a 16×10 inch photo. This may be discouraging, but do not despair! Many will be happy with the sharpness provided by 200 PPI, although an even lower PPI may suffice if the viewing distance is large (see “Digital Photo Enlargement“). For example, most wall posters are often printed at less than 200 PPI, since it’s assumed that you won’t be inspecting them from 6 inches away.
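    The table values follow directly from the definition of PPI: split the total pixel count into a 3:2 rectangle, then divide each side by the target resolution. A short sketch reproducing a couple of rows:

    ```python
    import math

    def max_print_size(megapixels, ppi, aspect=(3, 2)):
        """Largest print (in inches) at a given PPI for an image of this many
        megapixels, assuming the stated aspect ratio."""
        pixels = megapixels * 1_000_000
        long_px = math.sqrt(pixels * aspect[0] / aspect[1])
        short_px = pixels / long_px
        return round(long_px / ppi, 1), round(short_px / ppi, 1)

    print(max_print_size(2, 300))   # (5.8, 3.8)   -- matches the 2 MP row
    print(max_print_size(16, 300))  # (16.3, 10.9) -- matches the 16 MP row
    ```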

    CAMERA & IMAGE ASPECT RATIO

    The print size calculations above assumed that the camera’s aspect ratio, or ratio of longest to shortest dimension, is the standard 3:2 used for 35 mm cameras. In fact, most compact cameras, monitors and TV screens have a 4:3 aspect ratio, while most digital SLR cameras are 3:2. Many other types exist though: some high end film equipment even use a 1:1 square image, and DVD movies are an elongated 16:9 ratio.

    This means that if your camera uses a 4:3 aspect ratio, but you need a 4 x 6 inch (3:2) print, then some of your megapixels will be wasted (11%). This should be considered if your camera has a different ratio than the desired print dimensions.
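    The 11% figure can be verified by comparing the two shapes: a 3:2 print cut from a 4:3 frame keeps the full width but uses only 8/9 of the height. A minimal check:

    ```python
    from fractions import Fraction

    def wasted_fraction(sensor=(4, 3), print_ratio=(3, 2)):
        """Fraction of sensor pixels cropped away when a wider print ratio is
        cut from a narrower sensor ratio (full width retained)."""
        sensor_height = Fraction(sensor[1], sensor[0])       # height per unit width
        print_height = Fraction(print_ratio[1], print_ratio[0])
        return float(1 - print_height / sensor_height)

    print(f"{wasted_fraction():.0%} of the pixels are wasted")  # 11%
    ```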

    Pixels themselves can also have their own aspect ratio, although this is less common. Certain video standards and earlier Nikon cameras have pixels with skewed dimensions.

    SENSOR SIZE: NOT ALL PIXELS ARE CREATED EQUAL

    Even if two cameras have the same number of pixels, it does not necessarily mean that the size of their pixels are also equal. The main distinguishing factor between a more expensive digital SLR and a compact camera is that the former has a much greater digital sensor area. This means that if both an SLR and a compact camera have the same number of pixels, the size of each pixel in the SLR camera will be much larger.

    [Illustration: a compact camera sensor and an SLR camera sensor, shown at their relative sizes]

    Why does one care about how big the pixels are? A larger pixel has more light-gathering area, which means the light signal is stronger over a given interval of time.

    This usually results in an improved signal to noise ratio (SNR), which creates a smoother and more detailed image. Furthermore, the dynamic range of the images (the range of light to dark which the camera can capture without becoming either black or clipping highlights) also increases with larger pixels. This is because each pixel well can contain more photons before it fills up and becomes completely white.

    Several standard sensor sizes are found on the market today. Most digital SLRs have either a 1.5X or 1.6X crop factor (compared to 35 mm film), although some high-end models have a digital sensor with the same area as 35 mm film. Sensor size labels given in inches do not reflect the actual diagonal size, but instead the approximate diameter of the “imaging circle” (which is not fully utilized). Nevertheless, this number appears in the specifications of most compact cameras.
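    To make the pixel-size point concrete, the sketch below compares the approximate pixel pitch of two hypothetical 12-megapixel cameras: one with a small compact-style sensor (taken here as roughly 6.2 × 4.6 mm) and one with an APS-C sensor (roughly 23.6 × 15.6 mm). Both sensor sizes are approximations used only for illustration:

    ```python
    import math

    def pixel_pitch_um(width_mm, height_mm, megapixels):
        """Approximate pitch (in micrometres) of square pixels filling a sensor
        of the given dimensions."""
        pixels = megapixels * 1_000_000
        area_um2 = (width_mm * 1000) * (height_mm * 1000)
        return math.sqrt(area_um2 / pixels)

    compact = pixel_pitch_um(6.2, 4.6, 12)     # ~1.5 um pitch
    aps_c = pixel_pitch_um(23.6, 15.6, 12)     # ~5.5 um pitch
    print(f"compact: {compact:.1f} um, APS-C: {aps_c:.1f} um, "
          f"per-pixel light-gathering area ratio ~{(aps_c / compact) ** 2:.0f}x")
    ```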

    Why not just use the largest sensor possible? The main disadvantage of a larger sensor is that it is much more expensive, so a bigger sensor is not always the best choice.

    Other factors are beyond the scope of this tutorial; however, more can be read on the following points: larger sensors require smaller apertures in order to achieve the same depth of field, but they are also less susceptible to diffraction at a given aperture.

    Does all this mean it is bad to squeeze more pixels into the same sensor area? This will usually produce more noise, but only when viewed at 100% on your computer monitor. In an actual print, the higher megapixel model’s noise will be much more finely spaced — even though it appears noisier on screen (see “Image Noise: Frequency and Magnitude“). This advantage usually offsets any increase in noise when going to a larger megapixel model (with a few exceptions).

  • Colour in photography and film: digital colour management

     Sources

    Cambridge in Colour: Colour Management and Printing series

    Underlying concepts and principles: Human Perception; Bit Depth. Basics of digital cameras: Pixels

    Color Management from camera to display Part 1: Concept and Overview; Part 2: Color Spaces; Part 3: Color Space Conversion; Understanding Gamma Correction

    Bit Depth

    Every color pixel in a digital image is created through some combination of the three primary colors: red, green, and blue – each often referred to as a “color channel”. Bit depth quantifies how many unique colors are available in an image’s color palette in terms of the number of 0’s and 1’s, or “bits,” used to specify each color channel (bpc) or each pixel (bpp). Images with higher bit depths can encode more shades or colors – more intensity values – since there are more combinations of 0’s and 1’s available.

    Most color images from digital cameras have 8-bits per channel and so they can use a total of eight 0’s and 1’s. This allows for 2^8 or 256 different combinations—translating into 256 different intensity values for each primary color. When all three primary colors are combined at each pixel, this allows for as many as 2^(8×3) or 16,777,216 different colors, or “true color.” This is referred to as 24 bits per pixel since each pixel is composed of three 8-bit color channels. The number of colors available for any X-bit image is just 2^X if X refers to the bits per pixel and 2^(3X) if X refers to the bits per channel. The following table illustrates different image types in terms of bits (bit depth), total colors available, and common names.

    Bits per pixel    Number of colors available    Common name(s)
    1                 2                             Monochrome
    2                 4                             CGA
    4                 16                            EGA
    8                 256                           VGA
    16                65,536                        XGA, High Color
    24                16,777,216                    SVGA, True Color
    32                16,777,216 + Transparency
    48                281 Trillion
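    The arithmetic behind the table is simply 2 raised to the total number of bits per pixel (bits per channel × 3 for an RGB image). A quick check:

    ```python
    def colors_available(bits_per_channel, channels=3):
        """Total colours available for an image with this many bits per channel."""
        return 2 ** (bits_per_channel * channels)

    print(2 ** 8)                # 256 intensity levels for one 8-bit channel
    print(colors_available(8))   # 16,777,216 -- 24 bpp "true color"
    print(colors_available(16))  # 2**48, roughly 281 trillion
    ```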
    USEFUL TIPS
    • The human eye can only discern about 10 million different colors, so saving an image in any more than 24 bpp is excessive if the only intended purpose is for viewing. On the other hand, images with more than 24 bpp are still quite useful since they hold up better under post-processing (see “Posterization Tutorial“).
    • Color gradations in images with less than 8-bits per color channel can be clearly seen in the image histogram.
    • The available bit depth settings depend on the file type. Standard JPEG and TIFF files can only use 8-bits and 16-bits per channel, respectively.

    BASICS OF DIGITAL CAMERA PIXELS

    The continuous advance of digital camera technology can be quite confusing because new terms are constantly being introduced. This tutorial aims to clear up some of this digital pixel confusion — particularly for those who are either considering or have just purchased their first digital camera. Concepts such as sensor size, megapixels, dithering and print size are discussed.

    OVERVIEW OF COLOR MANAGEMENT

    “Color management” is a process where the color characteristics of every device in the imaging chain are known precisely and utilized in color reproduction. It often occurs behind the scenes and doesn’t require any intervention, but when color problems arise, understanding this process can be critical.

    In digital photography, this imaging chain usually starts with the camera and concludes with the final print, and may include a display device in between:

    [Diagram: the digital imaging chain – from camera, through an optional display device, to the final print]

    Many other imaging chains exist, but in general, any device which attempts to reproduce color can benefit from color management. For example, with photography it is often critical that your prints or online gallery appear how they were intended. Color management cannot guarantee identical color reproduction, as this is rarely possible, but it can at least give you more control over any changes which may occur.

    THE NEED FOR PROFILES & REFERENCE COLORS

    Color reproduction has a fundamental problem: a given “color number” doesn’t necessarily produce the same color in all devices. We use an example of spiciness to convey both why this creates a problem, and how it is managed.

    Let’s say that you’re at a restaurant and are about to order a spicy dish. Although you enjoy spiciness, your taste buds are quite sensitive, so you want to be careful that you specify a pleasurable amount. The dilemma is this: simply saying “medium” might convey one level of spice to a cook in Thailand, and a completely different level to someone from England. Restaurants could standardize this based on the number of peppers included in the dish, but this alone wouldn’t be sufficient. Spice also depends on how sensitive the taster is to each pepper:

    [Figure: a personal spiciness “calibration table” built from a one-time taste test.]

    To solve your spiciness dilemma, you could undergo a one-time taste test where you eat a series of dishes, with each containing slightly more peppers (shown above). You could then create a personalized table to carry with you at restaurants which specifies that 3 equals “mild,” 5 equals “medium,” and so on (assuming that all peppers are the same). Next time, when you visit a restaurant and say “medium,” the waiter could look at your personal table and translate this into a standardized concentration of peppers. This waiter could then go to the cook and say to make the dish “extra mild,” knowing all too well what this concentration of peppers would actually mean to the cook.

    As a whole, this process involved (1) characterizing each person’s sensitivity to spice, (2) standardizing this spice based on a concentration of peppers, and (3) being able to collectively use this information to translate the “medium” value from one person into an “extra mild” value for another. These same three principles are used to manage color.

    COLOR PROFILES

    A device’s color response is characterized similarly to how the personalized spiciness table was created in the above example. Various numbers are sent to this device, and its output is measured in each instance:

    [Table: the same green-channel input numbers – 200, 150, 100 and 50 – are sent to two devices; for each input number, the measured output color differs between Device 1 and Device 2.]

    Real-world color profiles include all three colors, more values, and are usually more sophisticated than the above table — but the same core principles apply. However, just as with the spiciness example, a profile on its own is insufficient. These profiles have to be recorded in relation to standardized reference colors, and you need color-aware software that can use these profiles to translate color between devices.
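    As a rough illustration of how such a measured table might be used, here is a minimal Python sketch of a one-dimensional, green-channel-only translation between two devices. The measured output values are invented for the example; a real ICC profile describes all three channels with far more sample points:

        import numpy as np

        inputs           = np.array([50, 100, 150, 200])
        device1_measured = np.array([0.04, 0.14, 0.33, 0.62])   # hypothetical measured outputs
        device2_measured = np.array([0.08, 0.22, 0.45, 0.75])   # hypothetical measured outputs

        # To reproduce on device 2 what device 1 shows for an input of 150,
        # go through the measured (standardized) output value and back.
        target_output     = np.interp(150, inputs, device1_measured)
        input_for_device2 = np.interp(target_output, device2_measured, inputs)
        print(round(float(input_for_device2)))   # a smaller input number (~124 here)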

    COLOR MANAGEMENT OVERVIEW

    Putting it all together, the following diagram shows how these concepts might apply when converting color between a display device and a printer:

    [Diagram: a characterized input device (a display, using additive RGB colors and described by an RGB color profile / color space) is translated by the CMM into the standardized Profile Connection Space (within which a thin trapezoidal “working space” region is drawn), and then translated by the CMM again to a characterized output device (a printer, using subtractive CMYK colors and described by a CMYK color profile / color space).]
    1. Characterize. Every color-managed device requires a personalized table, or “color profile,” which characterizes the color response of that particular device.
    2. Standardize. Each color profile describes these colors relative to a standardized set of reference colors (the “Profile Connection Space”).
    3. Translate. Color-managed software then uses these standardized profiles to translate color from one device to another. This is usually performed by a color management module (CMM).

    The above color management system was standardized by the International Color Consortium (ICC), and is now used in most computers. It involves several key concepts: color profiles (discussed above), color spaces, and translation between color spaces.

    Color Space. This is just a way of referring to the collection of colors/shades that are described by a particular color profile. Put another way, it describes the set of all realizable color combinations. Color spaces are therefore useful tools for understanding the color compatibility between two different devices. See the tutorial on color spaces for more on this topic.

    Profile Connection Space (PCS). This is a color space that serves as a standardized reference (a “reference space”), since it is independent of any particular device’s characteristics. The PCS is usually the set of all visible colors defined by the Commission Internationale de l’éclairage (CIE) and used by the ICC.

    Note: The thin trapezoidal region drawn within the PCS is what is called a “working space.” The working space is used in image editing programs (such as Adobe Photoshop), and defines the subset of colors available to work with when performing any image editing.

    Color Translation. The color management module (CMM) is the workhorse of color management, and is what performs all the calculations needed to translate from one color space into another. Unlike the previous examples, this is rarely a clean and simple process. For example, what if the printer weren’t capable of producing as intense a color as the display device? This is called a “gamut mismatch,” and it means that perfectly accurate reproduction is impossible. In such cases the CMM has to aim for the best approximation that it can. See the tutorial on color space conversion for more on this topic.
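    For a concrete sense of how characterize, standardize and translate fit together in software, here is a minimal Python sketch using Pillow’s ImageCms module. The file names are hypothetical, and a real printer/paper profile would normally be supplied by the print shop or manufacturer:

        from PIL import Image, ImageCms

        im = Image.open("photo.jpg").convert("RGB")                      # hypothetical input file

        srgb_profile    = ImageCms.createProfile("sRGB")                 # source color space
        printer_profile = ImageCms.getOpenProfile("printer_cmyk.icc")    # hypothetical destination profile

        # The CMM builds the translation from sRGB to the printer's space
        # through the profile connection space.
        transform = ImageCms.buildTransform(
            srgb_profile, printer_profile, "RGB", "CMYK",
            renderingIntent=ImageCms.INTENT_PERCEPTUAL)

        cmyk_im = ImageCms.applyTransform(im, transform)
        cmyk_im.save("photo_for_print.tif")

    The perceptual rendering intent used here is one common way of handling the gamut mismatch described above; other intents trade off accuracy and saturation differently.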

    UNDERSTANDING GAMMA CORRECTION

    Gamma is an important but seldom understood characteristic of virtually all digital imaging systems. It defines the relationship between a pixel’s numerical value and its actual luminance. Without gamma, shades captured by digital cameras wouldn’t appear as they did to our eyes (on a standard monitor). It’s also referred to as gamma correction, gamma encoding or gamma compression, but these all refer to a similar concept. Understanding how gamma works can improve one’s exposure technique, in addition to helping one make the most of image editing.

    WHY GAMMA IS USEFUL

    1. Our eyes do not perceive light the way cameras do. With a digital camera, when twice the number of photons hit the sensor, it receives twice the signal (a “linear” relationship). Pretty logical, right? That’s not how our eyes work. Instead, we perceive twice the light as being only a fraction brighter — and increasingly so for higher light intensities (a “nonlinear” relationship).

    [Chart: linear vs. nonlinear response – cameras vs. human eyes. Starting from a reference tone, the luminance the camera detects as 50% as bright is much higher than the luminance our eyes perceive as 50% as bright.]

    Refer to the tutorial on the Photoshop curves tool if you’re having trouble interpreting the graph.
    Accuracy of comparison depends on having a well-calibrated monitor set to a display gamma of 2.2.
    Actual perception will depend on viewing conditions, and may be affected by other nearby tones.
    For extremely dim scenes, such as under starlight, our eyes begin to see linearly like cameras do.

    Compared to a camera, we are much more sensitive to changes in dark tones than we are to similar changes in bright tones. There’s a biological reason for this peculiarity: it enables our vision to operate over a broader range of luminance. Otherwise the typical range in brightness we encounter outdoors would be too overwhelming.

    But how does all of this relate to gamma? In this case, gamma is what translates between our eye’s light sensitivity and that of the camera. When a digital image is saved, it’s therefore “gamma encoded” — so that twice the value in a file more closely corresponds to what we would perceive as being twice as bright.

    Technical Note: Gamma is defined by Vout = Vin^gamma, where Vout is the output luminance value and Vin is the input/actual luminance value. This formula is what produces the curved, nonlinear response described above. When gamma<1, the curve arches upward, whereas the opposite occurs with gamma>1.
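    For example, with a mid-grey input of Vin = 0.5 (on a 0–1 scale), a gamma of 1/2.2 gives Vout = 0.5^(1/2.2) ≈ 0.73 (the value is raised, so the curve arches upward), whereas a gamma of 2.2 gives Vout = 0.5^2.2 ≈ 0.22 (the value is lowered, so the curve bends downward).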

    2. Gamma encoded images store tones more efficiently. Since gamma encoding redistributes tonal levels closer to how our eyes perceive them, fewer bits are needed to describe a given tonal range. Otherwise, an excess of bits would be devoted to describing the brighter tones (where the camera is relatively more sensitive), and a shortage of bits would be left to describe the darker tones (where the camera is relatively less sensitive):

    [Illustration: the original smooth 8-bit gradient (256 levels), re-encoded using only 32 levels (5 bits) – once with linear encoding and once with gamma encoding.]

    Note: The gamma encoded gradient above is shown using a standard value of 1/2.2.
    See the tutorial on bit depth for a background on the relationship between levels and bits.

    Notice how the linear encoding uses insufficient levels to describe the dark tones – even though this leads to an excess of levels to describe the bright tones. On the other hand, the gamma encoded gradient distributes the tones roughly evenly across the entire range (“perceptually uniform”). This also ensures that subsequent image editing, color, and histograms are all based on natural, perceptually uniform tones.

    However, real-world images typically have at least 256 levels (8 bits), which is enough to make tones appear smooth and continuous in a print. If linear encoding were used instead, 8X as many levels (11 bits) would’ve been required to avoid image posterization.
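    The effect is easy to verify numerically. The following minimal Python sketch quantizes a linear 0–1 gradient to 32 levels (5 bits), once directly and once through a 1/2.2 gamma encoding, and then counts how many distinct levels land in the darkest tenth of the tonal range (the 0.1 threshold is just an arbitrary cut-off for the example):

        import numpy as np

        gradient = np.linspace(0, 1, 256)                                # smooth 8-bit-style gradient

        linear_q = np.round(gradient * 31) / 31                          # linear 5-bit encoding
        gamma_q  = (np.round(gradient ** (1 / 2.2) * 31) / 31) ** 2.2    # encode, quantize, decode

        dark = 0.1                                                       # "darkest 10%" cut-off
        print(np.unique(linear_q[linear_q <= dark]).size)                # only a handful of dark levels
        print(np.unique(gamma_q[gamma_q <= dark]).size)                  # noticeably more dark levels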

    GAMMA WORKFLOW: ENCODING & CORRECTION

    Despite all of these benefits, gamma encoding adds a layer of complexity to the whole process of recording and displaying images. The next step is where most people get confused, so take this part slowly. A gamma encoded image has to have “gamma correction” applied when it is viewed — which effectively converts it back into light from the original scene. In other words, the purpose of gamma encoding is for recording the image — not for displaying the image. Fortunately this second step (the “display gamma”) is automatically performed by your monitor and video card. The following diagram illustrates how all of this fits together:

    [Diagram: a RAW camera image is saved as a JPEG file (1. image file gamma); the JPEG is then viewed on a computer monitor (2. display gamma); the net effect of the two is 3. the system gamma.]

    1. Depicts an image in the sRGB color space (which encodes using a gamma of approx. 1/2.2).
    2. Depicts a display gamma equal to the standard of 2.2

    1. Image Gamma. This is applied either by your camera or RAW development software whenever a captured image is converted into a standard JPEG or TIFF file. It redistributes native camera tonal levels into ones which are more perceptually uniform, thereby making the most efficient use of a given bit depth.

    2. Display Gamma. This refers to the net influence of your video card and display device, so it may in fact be comprised of several gammas. The main purpose of the display gamma is to compensate for a file’s gamma — thereby ensuring that the image isn’t unrealistically brightened when displayed on your screen. A higher display gamma results in a darker image with greater contrast.

    3. System Gamma. This represents the net effect of all gamma values that have been applied to an image, and is also referred to as the “viewing gamma.” For faithful reproduction of a scene, this should ideally be close to a straight line (gamma = 1.0). A straight line ensures that the input (the original scene) is the same as the output (the light displayed on your screen or in a print). However, the system gamma is sometimes set slightly greater than 1.0 in order to improve contrast. This can help compensate for limitations due to the dynamic range of a display device, or due to non-ideal viewing conditions and image flare.
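    As a quick check on the arithmetic: gammas applied in sequence multiply in the exponent, so a file gamma of 1/2.2 followed by a display gamma of 2.2 gives (V^(1/2.2))^2.2 = V^(2.2/2.2) = V^1.0 – a linear system gamma – which is why the standard encoding and display values are chosen as reciprocals of one another.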

    IMAGE FILE GAMMA

    The precise image gamma is usually specified by a color profile that is embedded within the file. Most image files use an encoding gamma of 1/2.2 (such as those using sRGB and Adobe RGB 1998 color), but the big exception is with RAW files, which use a linear gamma. However, RAW image viewers typically show these presuming a standard encoding gamma of 1/2.2, since they would otherwise appear too dark:

    [Comparison: a linear RAW image (image gamma = 1.0) appears much darker than the same image after gamma encoding (image gamma = 1/2.2).]

    If no color profile is embedded, then a standard gamma of 1/2.2 is usually assumed. Files without an embedded color profile typically include many PNG and GIF files, in addition to some JPEG images that were created using a “save for the web” setting.

    Technical Note on Camera Gamma. Most digital cameras record light linearly, so their gamma is assumed to be 1.0, but near the extreme shadows and highlights this may not hold true. In that case, the file gamma may represent a combination of the encoding gamma and the camera’s gamma. However, the camera’s gamma is usually negligible by comparison. Camera manufacturers might also apply subtle tonal curves, which can also impact a file’s gamma.

    DISPLAY GAMMA

    This is the gamma that you are controlling when you perform monitor calibration and adjust your contrast setting. Fortunately, the industry has converged on a standard display gamma of 2.2, so one doesn’t need to worry about the pros and cons of different values. Older Macintosh computers used a display gamma of 1.8, which made non-Mac images appear brighter relative to a typical PC, but this is no longer the case.

    Recall that the display gamma compensates for the image file’s gamma, and that the net result of this compensation is the system/overall gamma. For a standard gamma encoded image file, changing the display gamma will therefore have the following overall impact on an image:

    [Charts and example portraits: the same gamma encoded image file shown with display gammas of 1.0, 1.8, 2.2 and 4.0.]

    Diagrams assume that your display has been calibrated to a standard gamma of 2.2.
    Recall from before that the image file gamma combined with the display gamma gives the overall system gamma. Also note how higher display gamma values cause the system curve to bend downward.

    If you’re having trouble following the above charts, don’t despair! It’s a good idea to first have an understanding of how tonal curves impact image brightness and contrast. Otherwise you can just look at the portrait images for a qualitative understanding.

    How to interpret the charts. The first picture (far left) gets brightened substantially because the image gamma is left uncorrected by the display gamma, resulting in an overall system gamma that curves upward. In the second picture, the display gamma doesn’t fully correct for the image file gamma, resulting in an overall system gamma that still curves upward a little (and therefore still brightens the image slightly). In the third picture, the display gamma exactly corrects the image gamma, resulting in an overall linear system gamma. Finally, in the fourth picture the display gamma over-compensates for the image gamma, resulting in an overall system gamma that curves downward (thereby darkening the image).

    The overall display gamma is actually comprised of (i) the native monitor/LCD gamma and (ii) any gamma corrections applied within the display itself or by the video card. However, the effect of each is highly dependent on the type of display device.


    CRT Monitors. Due to an odd bit of engineering luck, the native gamma of a CRT is 2.5 — almost the inverse of our eyes. Values from a gamma-encoded file could therefore be sent straight to the screen and they would automatically be corrected and appear nearly OK. However, a small gamma correction of ~1/1.1 needs to be applied to achieve an overall display gamma of 2.2. This is usually already set by the manufacturer’s default settings, but can also be set during monitor calibration.

    LCD Monitors. LCD monitors weren’t so fortunate; ensuring an overall display gamma of 2.2 often requires substantial corrections, and they are also much less consistent than CRTs. LCDs therefore require something called a look-up table (LUT) in order to ensure that input values are depicted using the intended display gamma (amongst other things). See the tutorial on monitor calibration: look-up tables for more on this topic.
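    The idea behind such corrections can be sketched as a simple 8-bit look-up table in Python. Since gammas applied in sequence multiply, the correction exponent is the target display gamma divided by the device’s native gamma (the LCD value below is purely hypothetical, since real panel responses vary and are often not a clean power law):

        import numpy as np

        def display_lut(native_gamma: float, target_gamma: float = 2.2) -> np.ndarray:
            """8-bit LUT that corrects a device's native gamma to the target display gamma."""
            levels = np.arange(256) / 255.0
            corrected = levels ** (target_gamma / native_gamma)   # exponents multiply: correction x native = target
            return np.round(corrected * 255).astype(np.uint8)

        crt_lut = display_lut(2.5)   # correction of 2.2/2.5, close to the ~1/1.1 mentioned above
        lcd_lut = display_lut(1.8)   # hypothetical native response; needs a much larger correction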

    Technical Note: The display gamma can be a little confusing because this term is often used interchangeably with gamma correction, since it corrects for the file gamma. However, the values given for each are not always equivalent. Gamma correction is sometimes specified in terms of the encoding gamma that it aims to compensate for — not the actual gamma that is applied. For example, the actual gamma applied with a “gamma correction of 1.5” is often equal to 1/1.5, since a gamma of 1/1.5 cancels a gamma of 1.5 (1.5 * 1/1.5 = 1.0). A higher gamma correction value might therefore brighten the image (the opposite of a higher display gamma).

    OTHER NOTES & FURTHER READING

    Other important points and clarifications are listed below.

    • Dynamic Range. In addition to ensuring the efficient use of image data, gamma encoding also actually increases the recordable dynamic range for a given bit depth. Gamma can sometimes also help a display/printer manage its limited dynamic range (compared to the original scene) by improving image contrast.
    • Gamma Correction. The term “gamma correction” is really just a catch-all phrase for when gamma is applied to offset some other earlier gamma. One should therefore probably avoid using this term if the specific gamma type can be referred to instead.
    • Gamma Compression & Expansion. These terms refer to situations where the gamma being applied is less than or greater than one, respectively. A file gamma could therefore be considered gamma compression, whereas a display gamma could be considered gamma expansion.
    • Applicability. Strictly speaking, gamma refers to a tonal curve which follows a simple power law (where Vout = Vin^gamma), but it’s often used to describe other tonal curves. For example, the sRGB color space is actually linear at very low luminosity, but then follows a curve at higher luminosity values. Neither the curve nor the linear region follows a standard gamma power law, but the overall gamma is approximated as 2.2.
    • Is Gamma Required? No, linear gamma (RAW) images would still appear as our eyes saw them — but only if these images were shown on a linear gamma display. However, this would negate gamma’s ability to efficiently record tonal levels.


    In gathering together all the information for a book in order to design and lay out the pages, you’ll usually be working with images – photographs and illustrations – scanned at 300 dpi and saved in CMYK mode (see below).
    In this project you’ll look at managing colour within the pre-print process. The designer is the ‘bridge’ between the original manuscript and the printed product, so it helps to have a good understanding of the colour management process involved prior to print production, so that you can manage your book project accordingly.
    Colour theory – RGB
    When you lay out your pages using DTP software, you work with digitised images, usually viewing your work via a computer monitor. Screens, TVs and monitors all work on the principle of transmitted white light, which is created from mixing Red, Green and Blue light. Therefore, we refer to this colour mode as ‘RGB’ or ‘additive colour’.
    CMYK
    It is important to be aware that although we are looking at an RGB colour monitor, and we perceive colours via this means, when it comes to printing we have to use physical pigment in the form of inks as opposed to light waves. The colour system used for printing is known as ‘subtractive colour’ or CMYK.
    Cyan, Magenta and Yellow, when mixed together, form a dull sort of brown, which isn’t quite black. So Black is added as a fourth colour and is represented by the letter ‘K’. (This stands for ‘Key’ in printers’ terms; ‘K’ is used rather than ‘B’, which might be confused with ‘Blue’.)
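    A crude sense of the relationship between the two modes can be had from the standard ‘naive’ RGB-to-CMYK formula, sketched below in Python. Note that this is only an approximation for illustration: real prepress conversion is done through colour profiles, because the printed result depends on the actual inks and paper.

        def rgb_to_cmyk(r: int, g: int, b: int):
            """Naive conversion of 8-bit RGB values to CMYK fractions (0-1)."""
            r, g, b = r / 255, g / 255, b / 255
            k = 1 - max(r, g, b)            # black replaces the common grey component
            if k == 1:                      # pure black: no need for C, M or Y
                return 0.0, 0.0, 0.0, 1.0
            c = (1 - r - k) / (1 - k)
            m = (1 - g - k) / (1 - k)
            y = (1 - b - k) / (1 - k)
            return c, m, y, k

        print(rgb_to_cmyk(255, 0, 0))   # (0.0, 1.0, 1.0, 0.0): full magenta + yellow makes red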
    [Image: RGB additive colour]
    [Image: a CMYK strip, often visible on newspaper margins – but without the identifying letters. These strips form part of the quality control process, enabling the print manager to see that all inks are running to correct capacity.]
    Colour matching
    CMYK forms the colour-printing process for much printed material, and you need to be aware that the colours you’ll see on-screen will not be the same as the printouts you receive as ‘proofs’ from the printer. Who hasn’t printed out something from their desktop printer and exclaimed ‘the colour’s nothing like that!’? When it comes to expensive print processes, you can’t afford unpleasant surprises in colour reproduction; you have to know exactly how the colour is going to turn out, so you need to establish a way to calibrate your colours at the outset. One way of doing this is to work with the CMYK sample books that printers provide. These enable you to specify exactly the proportions of Cyan, Magenta, Yellow and Black contained in any colour. You can then input these specifications into your DTP document and rest assured that, although the colour may not look entirely right on-screen, it will match when you come to print because it is set up to the printer’s CMYK requirements, and not the computer’s inherent RGB mode.
    Pantone
    Another way of matching colour is to use a Pantone swatch book. Pantone is the trademark name for a range of ready-mixed inks, sometimes also known as ‘spot colours’. The Pantone range encompasses a wide range of colours, including metallics and pastels. Pantone reference swatches can give both the Pantone ink number and the corresponding CMYK specification.
    Pantone ‘Solid to Process’ swatch book
    Halftone screens
    In order to print a continuous tone image – such as a photograph, illustration or artwork – using the CMYK four-colour printing process, the image first has to be converted from continuous tone to a series of lines. To facilitate this, the image goes through a ‘halftone screening’ process, so that the colours within the photograph can ultimately be reproduced using the printing colours CMYK.
    The majority of printed photographs and artwork we see in books, newspapers and magazines are made up of many CMYK dots of varying sizes. These are printed via four screens, one for each of the print colours, set at different angles: the Black screen is set at 45°, Magenta at 75°, Yellow at 90° and Cyan at 105°.
    You can see the evidence of this process when you look at a four-colour process (CMYK) printed photograph through a magnifying lens or loupe. You’ll see clearly that the image is composed from those four inks, and it is their relative proximity, size and overlap that creates the various colours and in this way re-presents continuous tone images. The fineness of the screen affects the quality of the printed image: the finer the screen, the better the image quality. Pictures printed on newsprint, for example, are printed via a relatively coarse screen, at 55 lpi (lines per inch), whereas the images for books are printed using a higher grade screen, such as 170 lpi. Within photo editing software there are options to adjust the settings for halftone screens, changing the shape and size of the dot elements.
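    To visualize the idea of screens set at different angles, here is a minimal Python sketch that thresholds a single ink channel against a rotated dot pattern. It is only a toy model: real halftone screening (and the lpi values mentioned above) is carried out at the prepress/printing stage.

        import numpy as np

        SCREEN_ANGLES = {"K": 45, "M": 75, "Y": 90, "C": 105}   # degrees, as listed above

        def halftone(channel: np.ndarray, angle_deg: float, cell: int = 8) -> np.ndarray:
            """Threshold a 0-1 ink channel against a rotated cosine 'dot' screen."""
            h, w = channel.shape
            y, x = np.mgrid[0:h, 0:w]
            a = np.deg2rad(angle_deg)
            u = x * np.cos(a) + y * np.sin(a)      # rotate the screen grid
            v = -x * np.sin(a) + y * np.cos(a)
            screen = 0.5 + 0.25 * (np.cos(2 * np.pi * u / cell) + np.cos(2 * np.pi * v / cell))
            return (channel > screen).astype(np.uint8)   # 1 = place a dot of this ink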
    Moiré
    A moiré pattern occurs when screens are overlaid onto each other and the resulting image becomes distorted. The moiré effect is noticeable when the colours start to mix visually in a swirly, jarring way. You can see it, for example, when watching someone on TV wearing a dogtooth jacket: the lines clash and cause a visual interference.
    You need to be aware of moiré patterns when you scan images that have already been printed, as they have already undergone a screening process. To offset this, you can apply the ‘Descreen’ option in photo editing software, which removes the problem.