1 Tajora

Typeface Family Definition Essay

An essay on the history and definition of type families, type design parameters, and the possibilities of creating larger type systems today.

The size and complexity of recently-developed type families has reached unprecedented levels. Look, for instance, at United, a recent release (2007) from House Industries. The family includes 105 fonts composed of three styles (sans, serif and italic), available in seven weights and five widths. It takes a couple of minutes just to scroll through all the variants listed in the font menu. For a further example of this trend, Hoefler & Frere-Jones have just released their Chronicle type family (2002-2007), the range of which extends through widths (from regular to compressed), weights (from extra light to black), and optical size (from text to headline). In terms of sheer size, Chronicle comprises 106 fonts and beats the rival United by a single stylistic variant.

United, type family of 105 fonts designed by Tal Leming, published by House Industries in 2007.

Of course these ‘superfamilies’ benefit from the inventions of past centuries: an ongoing series of typographic innovations that broke new ground for generations of designers to come.

History as a continuous series of discoveries

Ever since the earliest use of movable metal type, certain typefaces have included versions cut for specific point sizes. Claude Garamond's type from the 1530s (also known as the caractères de l'Université) included 15 versions ranging in size from 6 to 36 points. Each size was drawn, cut and cast separately; characters were designed specifically for the optical appearance of the printed text, with optimised letter widths and contrasts between the thick and thin parts of the letterforms. When the sizes are photographically scaled to a common size, it is easy to see significant differences between the designs. Typographers of the period would therefore choose among various sizes, just as we might choose among various weights of a particular typeface today.

Garamond’s caractères de l’Université from the 1530s include 15 optical versions ranging from 6 to 36 points. Above are 7pt and 36pt type at 100%. Below, the 7pt sample is scaled 425% to match the 36pt version. Note the difference in contrast between the thick and thin strokes, and the overall differences in detail between the two versions.

In the age of the Enlightenment, there was a clear need to organise and rationalise these differing sizes of printing types. In 1737, Pierre Simon Fournier published a table of graded sizes of printing types, introducing the first-ever standardised system for producing and using type. Fournier related type size to the ‘pouce’ (a French version of the inch), subdividing the ‘pouce’ into 72 ‘points’. This system later informed typographic measurement standards in continental Europe and, eventually, the English-speaking world. In 1742 Fournier published his Modèles de caractères de l’imprimerie, in which he further systematised the body sizes of printing types and suggested names for the most commonly used sizes. The first mention of types being organised into ‘families’ also originates with Fournier’s work.

Pierre Simon Fournier’s printed scale of his point system, from Modèles de caractères de l’imprimerie, 1742. Fournier was the first to introduce a standard for producing and using type, suggesting a typographic unit called the ‘point’.

Subsequent technological discoveries perhaps allowed typographers to forget the great invention of optically-adjusted type sizes. Type produced by pantographic reproduction (scaling a master drawing to many different sizes), and the later technologies of photocomposition and digital type, allowed working from a single master design regardless of the size of the final application. Typefaces made between the 1960s and 1990s almost entirely ignored optical sizes, because photocomposition allowed unprecedented possibilities of mathematical scaling. Optically-adjusted sizes made a minor comeback in the early 1990s, most notably in ITC Bodoni, featuring size-specific designs similar to those used by the type's originator, Giambattista Bodoni. These included Bodoni Six, designed for small captions, Bodoni Twelve for text setting, and Bodoni Seventy Two for display use.

However, optical size is just one parameter which determines the appearance of a typeface. It seems that typefaces need to be linked by several other shared parameters in order to be seen as part of a coherent group or family. Another such parameter is the weight of the type. For about 400 years printers and publishers did well with a single weight of a typeface, using just the type size as the main means of semantic differentiation. Even complex documents, such as Samuel Johnson's dictionary (A Dictionary of the English Language, 1755), use only a single weight of type set in different sizes to show the hierarchical differences between keywords, definitions and descriptions.

The idea of varying the weight of a single typeface probably emerged in the mid-19th century. Heavy typefaces did exist before that time, but they were generally seen on their own and not in relationship to the regular (text) weight. The commercial pressures of the industrial revolution inspired the creation of different weights of typefaces. The idea was simple: to differentiate one text from another, or to highlight a particular part of the text. There were plenty of opportunities to use different weights of type in the western market-driven economy of the 19th century. For example, the Besley and Company foundry’s Clarendon type (1842) is widely acknowledged as one of the first bold typefaces, but soon after its three-year copyright protection expired it was extensively imitated and pirated.

However, although Clarendon and its clones were clearly designed to be used alongside a roman (the regular, or text, weight), they had not yet established a systematic relationship between the various weights (or widths) of a family-based type design.

Not one but many
From the early 20th century it became standard practice to complement the release of new type designs with several weights of a typeface. The best example of this may be the work of Morris Fuller Benton, who complemented the many typefaces he designed for American Type Founders (ATF) with both condensed and heavy versions. Technology and aesthetics worked hand in hand for Benton, who used his father's recently-invented pantographic engraving machine (1886), which could not only scale a single typeface design to a variety of sizes, but also condense, extend, and slant the design. These fundamental geometric operations are the same basic transformations that most digital typographic systems use today.
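The pantograph's operations of scaling, condensing, extending and slanting are the affine transformations familiar from digital graphics. A minimal sketch, applying 2×2 matrices to a hypothetical set of glyph points:

```python
def transform(points, m):
    """Apply a 2x2 matrix [[a, b], [c, d]] to a list of (x, y) points."""
    (a, b), (c, d) = m
    return [(a * x + b * y, c * x + d * y) for x, y in points]

# An invented stem outline, in font units.
glyph = [(0, 0), (100, 0), (100, 700)]

condensed = transform(glyph, [(0.8, 0), (0, 1)])  # narrow by 20%
slanted = transform(glyph, [(1, 0.2), (0, 1)])    # horizontal shear for a slant
scaled = transform(glyph, [(0.5, 0), (0, 0.5)])   # uniform half-size
```

The same three matrices, applied to every glyph in a design, yield the condensed, oblique and resized versions that Benton's machine produced mechanically.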

In the later part of the 20th century, the work of Adrian Frutiger shifted attention from the design of a single typeface to the design of a complete typeface system, treating a type family as a continuous space defined by two axes: width and weight. The Deberny & Peignot foundry released Frutiger’s masterpiece, Univers, in 1957 in an unprecedented 21 variants. Frutiger’s systematic approach and innovative naming scheme eliminated confusion in type specification, and was perhaps even more interesting than the typeface design itself. He created a novel system of double-digit numerical style references, in which the first digit indicates the weight (5 being the basic roman or text weight) and the second digit the width (5 being standard or normal). Higher first digits signified heavier weights and higher second digits narrower widths, so while Univers Regular was Univers 55, Univers Bold was Univers 65, and Univers Regular Condensed was Univers 57. The Univers system anticipated 9 weights and 9 widths (also incorporating an oblique, or sans serif italic, variant), although some combinations proved unworkable in practice, so there is no Univers 79, and no black condensed variants. Linotype further expanded the Univers family in 1997 to 63 versions; for this, the numbering system was extended to three digits to reflect the large number of variants in the family. Frutiger originally envisioned the system being applied to other typeface families as well, but his numbering convention never gained wider acceptance with foundries or with his contemporary type designers.
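Frutiger's two-digit scheme is mechanical enough to express in a few lines of code. The sketch below splits a Univers-style number into its weight digit, width digit and oblique flag; the even-digit oblique convention (56 as the oblique companion of 55) follows the released Univers fonts, and the function is an illustration, not an official naming tool.

```python
def decode_univers(code):
    """Split a Univers-style two-digit number into its parts.

    First digit = weight, second digit = width; an even second digit
    marks the oblique companion of the odd-numbered upright below it.
    """
    weight, width = divmod(code, 10)
    oblique = (width % 2 == 0)
    if oblique:
        width -= 1   # e.g. 56 is the oblique of 55
    return {"weight": weight, "width": width, "oblique": oblique}

# Univers 55 (regular), 57 (regular condensed), 56 (regular oblique):
styles = [decode_univers(n) for n in (55, 57, 56)]
```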

The Univers type family, designed by Adrian Frutiger in 1957, consisting of 21 typefaces. Rather than focusing on a single typeface, Frutiger developed an interrelated typeface system.

The incorporation of two different styles of typeface into one family was probably first explored in 1932 by Jan van Krimpen in his Romulus project. Van Krimpen's intention was to create a large family of types for book printing; it would comprise a roman, an italic, a script type, bold and condensed types, at least four weights of sans serif, a Greek text type, and possibly more. This was deliberately more ambitious than the type family of Lucian Bernhard, who had released his types (Bernhard Gothic, Kingsley ATF, 1930) two years earlier.

Romulus, designed by Jan van Krimpen in 1932, was one of the first type families to include sans and serif versions in a range of weights. The complete package included a slanted roman, a chancery italic (Cancelleresca Bastarda), and an infamous Greek.

The sans and serif forms of Romulus share the same construction principles, but the resulting letterforms of the two styles are quite different. Van Krimpen quotes the type historian John Dreyfus in his book On Designing and Devising Type (1957): 'The purpose of the Romulus family was to provide the basic necessities for book printing and by means of a series of related designs to make possible consistent, flexible ... style.' Interestingly, Van Krimpen attempted to separate the style of his roman type and apply it to the Greek script as well. Although Romulus Greek is a fallacy, as it misunderstands the translation of letterforms from Latin to Greek, the method Van Krimpen suggested is successfully used in localising most non-Latin type today. When type design is understood as a system, it can be seen to consist of many parameters shared among letterforms of even such different origins as Greek and Latin.

Parametric design
A radically new approach to understanding type families was proposed, paradoxically, not by a designer but by a mathematician. In 1977, Donald Knuth conceived a programming language that he called Metafont, which defines the shapes of letterforms with powerful geometric equations. Rather than describing the outlines of glyphs (like the later PostScript and TrueType font formats), Metafont describes an imaginary 'pen' that traces the stroke paths constructing each letterform. Because of this unique approach, one can change a single input parameter for a typeface, such as optical size, angle of slant, or size of serif, and produce a consistent change throughout the entire font. A single font file can thus be a complex type family with many different versions. Metafont can control over 70 different parameters, which can theoretically define the appearance of any typeface designed with it. Despite the obvious advantages of the system, and Knuth's close collaboration with the celebrated type designer Hermann Zapf, Metafont never became widely used. Later technologies such as Apple's TrueType GX variations and Adobe's Multiple Master font formats were similarly ill-fated.
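The pen-based idea can be approximated in a few lines: store a glyph as a skeleton path and generate the outline by sweeping a pen of parametric width along it, so that changing one number regenerates every stroke consistently. This is an illustrative Python sketch, not actual Metafont code, and it offsets straight segments only:

```python
import math

def sweep_pen(skeleton, pen_width):
    """Offset a polyline skeleton on both sides by half the pen width,
    approximating a round pen swept along the stroke (one outline point
    pair per segment; real Metafont solves this with equations)."""
    left, right = [], []
    half = pen_width / 2.0
    for (x0, y0), (x1, y1) in zip(skeleton, skeleton[1:]):
        dx, dy = x1 - x0, y1 - y0
        length = math.hypot(dx, dy) or 1.0
        nx, ny = -dy / length, dx / length   # unit normal to the segment
        left.append((x0 + nx * half, y0 + ny * half))
        right.append((x0 - nx * half, y0 - ny * half))
    # close the outline: left side forward, then right side backward
    return left + right[::-1]

# One skeleton, two 'fonts': a light and a bold version of the same stem.
stem = [(0, 0), (0, 10)]
light = sweep_pen(stem, 2)   # pen width 2
bold = sweep_pen(stem, 8)    # pen width 8
```

Because every glyph is generated from its skeleton plus the shared `pen_width` parameter, one edit changes the whole family coherently, which is the essence of Knuth's approach.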

The Hague school
Gerrit Noordzij, who taught writing and type design at the Royal Academy of Arts in The Hague for 30 years, outlined and developed his theory of writing in several books. His model presents serif types as high-contrast and sans serifs as low-contrast, and arranges them in a coherent model of typographic possibilities. Instead of ideological discussions of ‘serif’ vs. ‘sans’, Noordzij thus focuses on the influence of the tool making marks on a surface. He describes three ways of producing letterforms: translation, expansion and rotation, each referring to a different process and to the resulting stylistic differences between groups of typefaces.

In his book The Stroke, published in 1985, Dutch typographer and teacher Gerrit Noordzij proposed a theory of writing independent of the tool used. His diagram illustrates the main argument of the theory, presenting the concepts of translation, expansion and rotation.

Noordzij's pragmatic theories were highly influential amongst the designers who studied at the Academy. One of them was Lucas de Groot, who designed Thesis (1994-99), a typeface family with three constructional variants (sans serif, serif, and mix), comprising 8 weights and totalling 144 variants. This type ‘superfamily’ was later expanded further with monospaced and condensed versions. De Groot developed and applied his own interpolation theory to the design of Thesis, establishing non-linear relationships between the weights of the design. At its first release in 1994, Thesis was the largest type family yet created.
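De Groot's interpolation theory is often summarised by the rule that stem thicknesses across a weight range should form a geometric rather than linear progression, so that each step looks equally large; the visually middle weight between stems a and b is then the geometric mean √(a·b). A sketch under that reading (the stem values are invented):

```python
def stem_weights(light, black, steps):
    """Generate `steps` stem thicknesses from light to black as a
    geometric progression, so each weight is the same *ratio* heavier
    than the previous one (a common reading of de Groot's theory)."""
    ratio = (black / light) ** (1 / (steps - 1))
    return [light * ratio ** i for i in range(steps)]

# Five weights between a 20-unit light stem and a 180-unit black stem;
# the middle weight is the geometric mean: sqrt(20 * 180) = 60.
stems = stem_weights(20, 180, 5)
```

A linear interpolation (20, 60, 100, 140, 180) would instead make the light-to-regular step look far larger than the bold-to-black one.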

Part of the contemporary program at the Royal Academy of Arts in The Hague is Type & Media, a postgraduate program focused on type design education. It is only natural that such a place should be at the forefront of typographic experimentation, redefining what we understand by the terms ‘typeface’ and ‘type family’.

Gustavo Ferreira, a recent graduate of Type & Media, produced Elementar (2003-2006), a comprehensive system of pixel fonts generated by a series of Python scripts. Elementar draws its inspiration from Metafont and Univers rather than existing bitmap fonts. It is a parametric system responding to selected input criteria; a basic design for a simplified pixel typeface serves as a model, to which other parameters are applied. Because of the limitations of rendering glyphs on screen at small sizes, glyphs are expressed in terms of exact fractions or multiples of the model design.

Gustavo Ferreira’s system of bitmap fonts Elementar includes over 500 fonts, so a special application is necessary to select the fonts based on the user’s input criteria.

Such large typeface systems can become quite impractical to use, as the list of stylistic variants in the font menu grows longer and longer. In the case of Elementar, there are over 500 individual bitmap fonts, so an alternative way of selecting the correct variant has to be offered to the user. Rather than presenting the full list of typeface possibilities in a menu, Elementar comes with its own online interface, in which the user chooses the parameters and gets the matching stylistic variant(s).
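Such an interface can be thought of as a filter over a table of parameters. The sketch below is hypothetical, not Elementar's actual code or metadata; each font is a record of parameters, and a query keeps only the variants matching every criterion the user supplies:

```python
# Hypothetical parameter records for a parametric bitmap family; the
# field names and values are illustrative, not Elementar's metadata.
fonts = [
    {"name": "Elementar-A", "height": 8,  "width": "normal",    "weight": "regular"},
    {"name": "Elementar-B", "height": 8,  "width": "condensed", "weight": "regular"},
    {"name": "Elementar-C", "height": 10, "width": "normal",    "weight": "bold"},
]

def select(fonts, **criteria):
    """Return the fonts whose parameters match every given criterion."""
    return [f for f in fonts
            if all(f.get(key) == value for key, value in criteria.items())]

matches = select(fonts, height=8, weight="regular")  # two variants match
```

The user never scrolls a 500-item menu; the interface narrows the family to the handful of variants that fit the rendering conditions.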

Kalliculator was the Type & Media graduation project of Frederik Berlaen (2006). Instead of drawing a typeface, Berlaen made a tool that makes typefaces based on a predefined set of parameters. Like Knuth's tools from the 1970s, Kalliculator simulates pens and their relationship to a drawn stroke. Berlaen's project builds on Noordzij's theories, and the Kalliculator electronic pen ranges between pointed and broad nib styles. Users can input a line drawing, and the programme calculates the contrast around the skeleton, mixing the mathematical middle of a stroke and a path made by an imaginary pen. The idea is that the trajectory of the hand is separate from the style of the pen, so users can experiment by applying various parameters to their sketched strokes. A single drawing of an 'a' can result in hundreds of versions, each one directly linked to the others via its source drawing. In this way, Berlaen's application challenges traditional views of type families, as typefaces generated from the same skeleton are related to their family variants in a uniform manner.
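The separation of skeleton and pen can be sketched as a thickness function along the stroke. In this illustrative simplification (not Berlaen's actual algorithm), a broad nib's thickness depends on the angle between the stroke direction and the nib, a pointed pen is reduced to a constant thickness standing in for pressure-driven contrast, and a mix parameter blends the two:

```python
import math

def stroke_thickness(direction_deg, mix, nib_angle_deg=30.0, pen_size=10.0):
    """Thickness of the mark at one point on the skeleton.

    direction_deg: direction of the stroke at that point.
    mix: 0.0 = pure broad nib (thickness follows the angle between
         stroke and nib), 1.0 = pure pointed pen (constant here, as a
         stand-in for pressure-driven contrast).
    """
    angle = math.radians(direction_deg - nib_angle_deg)
    broad = pen_size * abs(math.sin(angle))   # thinnest along the nib angle
    pointed = pen_size                        # simplified: uniform thickness
    return (1 - mix) * broad + mix * pointed

# A vertical downstroke (90 degrees) is thick under a 30-degree broad
# nib, while a stroke along the nib angle is at its thinnest.
```

Applying the same thickness function to every skeleton in a glyph set keeps all generated variants related in the uniform way the essay describes.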

Kalliculator is not a typeface but a tool that makes typefaces based on a predefined set of parameters. Users can input a line drawing, and the programme simulates either a broad nib or a pointed pen (or anything in between), controls the weight and contrast, applies the same parameters to all the glyphs in the database, and finally generates the font file.

So what exactly defines a type family? An analogy with a real, physical family is often unhelpful because, unlike in the biological world, different generations of typefaces are usually not considered part of the same family.

Similarly, at the level of individual glyphs, each style of the type family must be recognisably different in order to remain functional, yet each style must adhere to common principles governing the consistency of the family. Individual members of the family clearly need to share one or more attributes, and typographic history offers many examples of this: optical size, weight, width, stylistic differences (sans, serif and semi-serif) and construction differences (formal and informal) are the most common parameters linking members of type families. We can also find less common relationships, such as varying serif styles, changing proportions of x-height, ascenders and descenders, or contextually-appropriate alternate versions.

Work by designers like Berlaen and Ferreira builds on centuries of typographic innovation and helps to explore new territory for type design. They participate in a cumulative, ongoing and inspiring history of type development, a work in progress that we are invited to continue.


March 25, 2010
Gerry Leonidas

Teaching on a postgraduate course feels very much like a spiral: the annual repetition of projects, each a vehicle for a journey of education and discovery for the student, blurs into cyclical clouds of shapes, paragraphs, and personalities. There seems to be little opportunity for reflection across student cohorts, and yet it is only such reflection that improves the process from one year to the next. Passing the tenth anniversary of the MA Typeface Design programme was as good an opportunity as any to reflect, and ILT’s offer to publish the result an ideal environment to get some ideas out in the open. Although my perspective is unavoidably linked to the course at Reading, I think that the points I make have wider relevance.

Our students, both young and mature, often find themselves for the first time in an environment where research and rigorous discussion inform design practice. The strong focus on identifying user needs and designing within a rigorous methodology is often at odds with past experiences of design as a self-expressive enterprise: in other words, design with both feet on the ground, in response to real-world briefs. In addition, students are expected to immerse themselves in the literature of the field, and, as much as possible, contribute to the emerging discourse. (There are many more books and articles on typeface design than people generally think; some are not worth the paper they’re printed on, but some are real gems.) I shouldn’t need to argue that research, experimentation, and reflection on the design process lead not only to better designs, but better designers.

In recent years, two significant factors have started influencing attitudes to design. Firstly, as generations grow up using computers from primary school onwards, it is more difficult to identify the influence of the computer as a tool for making design decisions, rather than implementing specifications. Secondly, the trend in higher education to restructure courses as collections of discrete modules results in a compartmentalisation of students’ skills and knowledge: it is becoming more difficult for the experience in one class to have an impact on the work done in another. (A third, less ubiquitous, factor would be the diminishing importance of manual skills in rendering and form-making in design Foundation and BA/BFA courses, a subject worthy of discussion in itself.)

So, repeating the caveat that these observations are strictly personal, I offer them in the hope they will prove interesting at least to the people setting up and running new courses in typeface design, and the many designers teaching themselves.

1. Design has memory (even if many designers don’t)

Typography and typeface design are essentially founded on a four-way dialogue between the desire for identity and originality within each brief (“I want mine to be different, better, more beautiful”), the constraints of the type-making and type-setting technology, the characteristics of the rendering process (printing or illuminating), and the responses to similar conditions given by countless designers already, from centuries ago to this day. Typographic design never happens in a vacuum. A recent example is Emigre magazine: can its early period be seen without reference to the sea change in type-making and typesetting tools of the mid-eighties? And is not its middle period a mark of emerging maturity and of focusing, critically and selectively, on those conventions worth preserving in a digital domain? Emigre is important as a mirror of our responses to new conditions and opportunities, and cannot be fully appreciated just by looking at the issues (especially if you look at scaled-down images rather than the poster-like original sizes). At a more subtle level, the basic pattern of black and white, foreground and background, for “readable text” sizes has been pretty stable for centuries, and pretty impervious to stylistic treatments. Does a type designer not gain by studying how this pattern survives changing rendering environments and the differentiation imposed by genre and style?

And yet, many designers have a very patchy knowledge of the history of typography and letterforms. More worryingly, students and designers alike have little opportunity to experience genre-defining objects in reality (imagine discussing a building by looking only at its blueprints, never walking up to it and through its rooms). It is perhaps not surprising that the wide but shallow knowledge gained from online sources is dominant; there also seems to be little discrimination between sources that employ review and editorial mechanisms, and those that are open to wide, unchecked contributions. This shallow approach to reading and investigating results in a lack of coherent narratives, not only about how things happened, but also why. How were similar design problems addressed under different design and production environments? What can artifacts tell us about how people made decisions in similar situations before? How did changing conditions give rise to new solutions? To paraphrase Goudy, the problem is no longer that the old-timers stole all the best ideas, but that the old ideas are in danger of being re-discovered from scratch. (Just look at web designers rediscovering the basic principles of text typography and information design, as if these were newly-found disciplines.)

2. Design is iterative, and improved by dialogue

The process of typeface design is, in essence, a reductive refinement of ever smaller details. First ideas are just that: sketches that may offer starting points, but have to be followed by a clear methodology of structured changes, reviews, testing – and repetition of the whole process. The attention of the typeface designer must progress in ever decreasing scales of focus: from paragraph-level values on the overall density of a design, to the fundamental interplay of space and main strokes, to elements within a typeform that ensure consistency and homogeneity, and those that impart individuality and character. At the heart of this process is dialogue with the brief: what conditions of use are imposed on the new design, and what are the criteria to determine excellence in responding to the brief? (For example, how will the end users make value associations with the typeface?)

The wider the typeface family, the deeper the need to test conclusively, not only with documents that highlight the qualities of the typeface, but also with documents that approximate a wide range of possible uses. Even in cases of very tight briefs (as in the case of bespoke typefaces for corporate clients), the range of uses can be extremely broad. But good designers are also aware of the constraints of their testing environment. The misleading impression of transparency and fidelity that computer applications give, and the limitations of laser-printer output, obstruct trustworthy decisions. Designers must be aware of how looking at medium resolution printouts in dark toner on highly bleached paper can bias their decisions.

We are also seeing a gradual return to typeface design as a team enterprise, drawing on the expertise of a group rather than an individual. This, of course, is not new: typeface design in the hot-metal and phototype eras was very much a team product. But just as digital, platform-independent formats enabled designers to function outside a heavy engineering world, so they enabled the explosion of character sets and families to unprecedented levels. The necessary skills and the sheer volume of work required for text typefaces have driven a growth of mid-size foundries, where people with complementary skills collaborate on a single product. The corollary is a rise in the need for documentation and explanation to a community of fellows. The short-lived “creative hermit” model is giving way to new models of work.

3. Scale effects are not intuitive

The conventional curriculum for design education rarely tackles scales smaller than a postcard. More importantly, the compositional aspects of design tend to take precedence over details at the level of the paragraph, let alone the word. Typeforms for continuous reading are designed at fairly large sizes (on paper or, more usually, occupying most of a computer screen) but are experienced in much smaller sizes where their features have cumulative effects, weighted by the frequency with which specific combinations occur. These conditions arise in every text setting, be it for prose read forty centimetres away, or a sign viewed from a distance of tens of metres.

Of all the skills typeface designers need to develop, understanding how to make shapes at one scale behave a particular way in another scale is the most troublesome one. Imagining the difference that a small change in a single letter will have in a line or paragraph of typeset text is not an innate skill: it is entirely the result of practice. The best designers are the ones who will naturally ask “why does this paragraph look this way?” and try to connect the answer to specific design choices.

A common example of problems connected to scale effects arises whenever a student follows a writing tool too closely as a guide for designing typeforms: whereas the ductus (the movement of the stroke) and the modulation can be preserved across scales without much difficulty, the details of stroke endings and joints cannot; typographic scales demand a sensitivity to optical effects that simply do not apply at writing scales. The best examples come from typefaces designed for the extremes of text scales: for telephone directories (famously by Ladislas Mandel and Matthew Carter), agate sizes for listings, and early typefaces for screen rendering. The smaller the size (or the coarser the rendering resolution), the more the designer primarily separates blobs and bars of white space, and only secondarily deals with style and detail.

4. Tools are concepts

Regardless of the scale effects mentioned above, there is a requirement to appreciate the link between typeface design and writing, and the tools used for writing. To be clear: I am not talking about calligraphy, but writing in the widest possible sense, from graffiti, a hasty ‘back in five minutes’ sign, to the most elaborate piece of public lettering. More than the specific forms of letters, the process of writing illuminates the patterns and combinations we are used to seeing, and gives insights into the balance of shapes and the space between them. The relationship of writing tools to the marks they make has been discussed in some depth (for the Latin script by Noordzij and Smeijers, most importantly), but the transformation of these marks through the computer much less so. (There are some texts, but mostly they focus on specific cases, rather than general principles; the notable exception is Richard Southall.)

And yet, since the early days of punchcutting, type-making involves a process of fracturing the typeforms, modularizing and looking for patterns. Later on, when the roles of designer and maker began to be distinguished (most emblematically with the Romain du Roi, like the Encyclopédie a true product of the Age of Reason) typeface design became programmatic, each typeface an instance of a class of objects, rooted in a theory of letter construction – however sensitive to human practice or aloof that may be. Later, the hot metal “pattern libraries” and the rubylith cutouts of shapes to be photographically scaled and distorted for phototype point to the same process, of abstracting the typographic shapes into elements that have little to do with the movements of a tool. As for the digital domain, deconstruction and repeatability remain key aspects of the design process.

To ensure a typeface built with fragmentary processes has internal consistency, the designer needs to develop a mental model of a tool that may follow the tracks of a writing tool, but may include mark-making and movement behaviours quite distinct from anything that is possible to render with a real writing tool. (Easy example: the parallelogram-like serifs of a slab, on a typeface with a pen-like modulation.) Such mental models for typemaking are increasingly important as type families expand into extremes of weight and width, where any relationship with a writing tool quickly evaporates. So, an invented tool that, for example, makes incised vertical strokes and pen-like bowls, can become the basis for a wide range of styles, ensuring consistency without the limitations of a specific tool; at the same time, because the model is agnostic of weight and width, it does not hinder the generation of large families with overall consistency but local richness. (Compare this approach with a wide family developed through extremes of multiple master outlines, where consistency relies on the details of typeforms having close correspondences.)

5. The Latin script is the odd one out

The demand for typefaces with extended character sets has been growing steadily for many years. OEM and branding typefaces are expected to cover more than one script, and often three or more. Beyond the obvious scripts of the wider European region (Cyrillic, Greek, and Latin), the interest has shifted strongly towards Arabic and the Indian scripts. But there are two key differences between the Latin typographic script, and pretty much everything else: firstly, that the type-making and typesetting equipment were developed for a simple alphabetic left-to-right model that would have to be adapted and extended to work with the complexities of the non-Latins. Although rectangular sorts will work sufficiently for the simple structure of western European languages, the model strains at the seams when the diacritics start multiplying, and pretty much collapses when the shapes people use do not fit in neat boxes, or change shape in ways that are not easy to describe algorithmically. No surprise that most non-Latin typesetting implementations make use of compromises and technical hacks to get the script to work. The second factor is that most non-Latin scripts did not experience the full profusion in styles that arises from a competitive publications market, as well as a culture of constant text production. (It’s no surprise that the language of display typography first developed in nineteenth-century Britain, in parallel with the Industrial Revolution: urbanization, rising literacy, and trade in goods and services go hand in hand with the need for typographic richness and differentiation.)

Many students (indeed, many professionals) will ask ‘Can a non-speaker design a script well for a language they do not read?’ But a typeface arises in response to a brief, which by definition taps into wider design problems. For example, many of the conventions surrounding newspapers apply regardless of the market; the constraints on the typographic specification can be deduced from the general qualities of the script and the language (e.g. can you hyphenate? how long are the words and sentences? with what range of word lengths? what is the editorial practice in the region in terms of article structure, levels of hierarchy, and headline composition?). Having established the typographic environment, we can examine the written forms of the language, and the tools that have determined the key shapes. In this matter most scripts other than the Latin (and to some degree Cyrillic) maintain a very close relationship between writing and typographic forms. Writing exercises and a structural analysis of examples can help the designer develop a feel for the script, before reading the words. More importantly, in their non-Latin work, analysis of the script’s structure and the relationship between mark-making tools and typeforms can help the designers to develop criteria for evaluating quality.

Typographic history is well populated with designers excelling in the design of scripts they could not read – indeed, the examples are so numerous that it would be difficult to choose. Encouraging students to address the complicated design problems inherent in non-Latin scripts is not only a way of enriching the global typographic environment, it is also a superb means of producing designers who can tackle a higher level of difficulty in any aspect of their design.

6. And finally…

The final lesson for students of typeface design is that a formal environment can teach the functional aspects of design, but can only help them at a distance to develop the aesthetic qualities of their typefaces. Especially when they are working in categories already heavily populated with typefaces, the distinctions between the simply good and the superb will be very refined. And when the consideration turns to originality, inventiveness, and how much a particular design causes us to rethink our responses to typeset text, then teachers have little input. The student, balancing between the deep knowledge of the specialist and the broad curiosity of the generalist, must develop, largely on their own, their capacity to be conscious of past and emerging idioms, to see their own work in the context of developing styles, and – most difficult of all – to identify how their own personal style can co-exist with the restrictions of utility and the conventions of genre.

About the author

Gerry Leonidas is a Senior Lecturer in Typography at the University of Reading (UK) and Programme Director of the MA in Typeface Design. He spends most of his time talking and writing about typeface and document design, and is frequently invited to speak, teach, and review work.
