Functional Programming Strategies in Scala with Cats

By Noel Welsh

June 2024 Edition

Published by Inner Product

Preface

Some twenty years ago I started my first job in the UK. This job involved a commute by train, giving me about an hour a day to read without distraction. Around about the same time I first heard about Structure and Interpretation of Computer Programs, referred to as the “wizard book” and spoken of in reverential terms. It sounded like just the thing for a recent graduate looking to become a better developer. I purchased a copy and spent the journey reading it, doing most of the exercises in my head. Structure and Interpretation of Computer Programs was already an old book at this time, and its programming style was archaic. However its core concepts were timeless, and it’s fair to say it absolutely blew my mind, putting me on a path I’m still on today.

Another notable stop on this path occurred some ten years ago when Dave and I started writing Scala with Cats. In Scala with Cats we attempted to explain the core type classes found in the Cats library, and their use in building software. I’m proud of the book we wrote together, but time and experience showed that type classes are only a small piece of the puzzle of building software in a functional programming style. We needed a much wider scope if we were to show people how to effectively build software with all the tools that functional programming provides. Still, writing a book is a lot of work, and we were busy with other projects, so Scala with Cats remained largely untouched for many years.

Around 2020 I got the itch to return to Scala with Cats. My initial plan was simply to update the book for Scala 3. Dave was busy with other projects so I decided to go it alone. As the writing got underway I realized I really wanted to cover the additional topics I thought were missing. If Scala with Cats was a good book, I wanted to aim to write a great book; one that would contain almost everything I had learned about building software. The title Scala with Cats no longer fit the content, and hence I adopted a new name for what is largely a new book. The result, Functional Programming Strategies in Scala with Cats, is what you are reading now. I hope you find it useful, and I hope that just maybe some young developer will find this book inspiring in the same way I found Structure and Interpretation of Computer Programs inspiring all those years ago.

Preface from Scala with Cats

The aims of this book are two-fold: to introduce monads, functors, and other functional programming patterns as a way to structure program design, and to explain how these concepts are implemented in Cats.

Monads, and related concepts, are the functional programming equivalent of object-oriented design patterns—architectural building blocks that turn up over and over again in code. They differ from object-oriented patterns in two main ways: they are formally, and thus precisely, defined; and they are extremely general.

This generality means they can be difficult to understand. Everyone finds abstraction difficult. However, it is generality that allows concepts like monads to be applied in such a wide variety of situations.

In this book we aim to show the concepts in a number of different ways, to help you build a mental model of how they work and where they are appropriate. We have extended case studies, a simple graphical notation, many smaller examples, and of course the mathematical definitions. Between them we hope you’ll find something that works for you.

Ok, let’s get started!

Versions

This book is written for Scala 3.3.3 and Cats 2.10.0. Here is a minimal build.sbt containing the relevant dependencies and settings:

scalaVersion := "3.3.3"

libraryDependencies +=
  "org.typelevel" %% "cats-core" % "2.10.0"

scalacOptions ++= Seq(
  "-Xfatal-warnings"
)

Template Projects

For convenience, we have created a Giter8 template to get you started. To clone the template, type the following:

$ sbt new scalawithcats/cats-seed.g8

This will generate a sandbox project with Cats as a dependency. See the generated README.md for instructions on how to run the sample code and/or start an interactive Scala console.

The cats-seed template is very minimal. If you’d prefer a more batteries-included starting point, check out Typelevel’s sbt-catalysts template:

$ sbt new typelevel/sbt-catalysts.g8

This will generate a project with a suite of library dependencies and compiler plugins, together with templates for unit tests and documentation. See the project pages for catalysts and sbt-catalysts for more information.

Conventions Used in This Book

This book contains a lot of technical information and program code. We use the following typographical conventions to reduce ambiguity and highlight important concepts:

Typographical Conventions

New terms and phrases are introduced in italics. After their initial introduction they are written in normal roman font.

Terms from program code, filenames, and file contents are written in monospace font. Note that we do not distinguish between singular and plural forms. For example, we might write String or Strings to refer to java.lang.String.

References to external resources are written as hyperlinks. References to API documentation are written using a combination of hyperlinks and monospace font, for example: scala.Option.

Source Code

Source code blocks are written as follows. Syntax is highlighted appropriately where applicable:

object MyApp extends App {
  println("Hello world!") // Print a fine message to the user!
}

Most code passes through mdoc to ensure it compiles. mdoc uses the Scala console behind the scenes, so we sometimes show console-style output as comments:

"Hello Cats!".toUpperCase
// res0: String = "HELLO CATS!"

Callout Boxes

We use two types of callout box to highlight particular content:

Tip callouts indicate handy summaries, recipes, or best practices.

Advanced callouts provide additional information on corner cases or underlying mechanisms. Feel free to skip these on your first read-through—come back to them later for extra information.

License

This work is licensed under CC BY-SA 4.0. To view a copy of this license, visit http://creativecommons.org/licenses/by-sa/4.0/

Portions of this work are based on Scala with Cats by Dave Pereira-Gurnell and Noel Welsh, which is licensed under CC BY-SA 3.0.

1 Functional Programming Strategies

This is a book on strategies for creating code in a functional programming (FP) style, seen through a Scala lens. If you understand most of the mechanics of Scala, but feel there is something missing in your understanding of how to use the language effectively, this book is for you. If you don’t know so much Scala, but are prepared to learn it as part of learning about functional programming, this book is also for you. It covers the usual functional programming abstractions like monads and monoids, but more than that it tries to teach you how to think like a functional programmer. It’s a book as much about process as it is about the code that results from process, and in particular it focuses on what I call metacognitive programming strategies.

I would guess most programmers would struggle to describe the process they use to write code. Some might mention “test driven development” and perhaps “pair programming”, but I wouldn’t expect much more from the general programming population. Both the above techniques come from eXtreme Programming, which dates to the late 90s, and you would hope our field had added new knowledge in that time. But it’s not really the fault of the developers—most of them haven’t been taught any explicit process. Our industry certainly likes to talk about process, in the form of agile, kanban boards, and so on, and in recent times a tremendous effort has been spent on expanding the range of people who are taught programming. However the actual programming—the bit that produces the code that is the whole point of the endeavour—is still largely treated as magic. It doesn’t have to be that way.

Functional programmers love fancy words for simple ideas, so it’s no surprise I’m drawn to metacognitive programming strategies. Let’s unpack that phrase to see what it means. Metacognition means thinking about thinking. A lot of research has shown the benefits of metacognition in learning, and that it is an important part of developing expertise. Metacognition is not just one thing—it’s not sufficient to just tell someone to think about their thinking. Rather we should expect metacognition to be a collection of different strategies, some of which are general and some of which are domain specific. From this we get the idea of metacognitive programming strategies—explicitly naming and describing different thinking strategies that proficient programmers use.

I believe metacognitive programming strategies are useful for both beginners and experts. For beginners we can make programming a more systematic and repeatable process. Producing code no longer requires magic in the majority of cases, but rather the application of some well defined steps. For experts, the benefit is exactly the same. At least that is my experience (and I believe I’ve been programming long enough to call myself an expert). By having an explicit process I can run it exactly the same way every day, which makes my code simpler to write and read, and saves my brain cycles for more important problems. In some ways this is an attempt to bring to programming the benefit that process and standardization have brought to manufacturing, particularly the “Toyota Way”. In Toyota’s process individuals are expected to think about how their work is done and how it can be improved. This is, in effect, metacognition for assembly lines. It is only possible if the actual work itself does not require their full attention. The dramatic improvements in productivity and quality in car manufacturing that Toyota pioneered speak to the effectiveness of this approach. Software development is more varied than car manufacturing but we should still expect some benefit, particularly given the primitive state of our current industry.

The question then becomes: what metacognitive strategies can programmers use? I believe that functional programming is particularly well suited to answer this question. A major theme in functional programming research is finding and naming useful code structures. Once we have discovered a useful abstraction we can get the programmer to ask themselves “would this abstraction solve this problem?” This is essentially what the design patterns community did, also back in the nineties, but there is an important difference. The academic FP community strongly values formal models, which means that the building blocks of FP have a precision that design patterns lack. However there is more to process than categorizing the output. There is also the actual process of how the code comes to be. Code doesn’t usually spring fully formed from our keyboard, and in the iterative refinement of code we also find structure. Here the academic FP community has less to say, but there is a strong folklore of techniques such as “type driven development”.

Over the last ten or so years of programming and teaching programming I’ve collected a wide range of strategies. Some come from others (for example, How to Design Programs and its many offshoots remain very influential for me) and some I’ve found myself. Ultimately I don’t think anything here is new; rather my contribution is in collecting and presenting these strategies as one coherent whole.

1.1 Three Levels for Thinking About Code

Let’s start thinking about thinking about programming, with a model that describes three different levels that we can use to think about code. The levels, from highest to lowest, are paradigm, theory, and craft. Each level provides guidance for the ones below.

The paradigm level refers to the programming paradigm, such as object-oriented or functional programming. You’re probably familiar with these terms, but what exactly is a programming paradigm? To me, the core of a programming paradigm is a set of principles that define, usually somewhat loosely, the properties of good code. A paradigm is also, implicitly, a claim that code that follows these principles will be better than code that does not. For functional programming I believe these principles are composition and reasoning. I’ll explain these shortly. Object-oriented programmers might point to, say, the SOLID principles as guiding their coding decisions.

The importance of the paradigm is that it provides criteria for choosing between different implementation strategies. There are many possible solutions for any programming problem, and we can use the principles in the paradigm to decide which approach to take. For example, if we’re a functional programmer we can consider how easily we can reason about a particular implementation, or how composable it is. Without the paradigm we have no basis for making a choice.

The theory level translates the broad principles of the paradigm to specific well defined techniques that apply to many languages within the paradigm. We are still, however, at a level above the code. Design patterns are an example in the object-oriented world. Algebraic data types are an example in functional programming. Most languages that are in the functional programming paradigm, such as Haskell and OCaml, support algebraic data types, as do many languages that straddle multiple paradigms, such as Rust, Scala, and Swift.

The theory level is where we find most of our programming strategies.

At the craft level we get to actual code, and the language specific nuance that goes into it. An example in Scala is the implementation of algebraic data types in terms of sealed trait and final case class in Scala 2, or enum in Scala 3. There are many concerns at this level that are important for writing idiomatic code, such as placing constructors on companion objects in Scala, that are not relevant at the higher levels.
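To make this concrete, here is a small sketch of the companion object idiom just mentioned. It’s my own illustrative example, not taken from the original text.

final case class Point(x: Double, y: Double)
object Point {
  // Idiomatic Scala craft: an alternative constructor lives on the companion object.
  def origin: Point = Point(0.0, 0.0)
}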

In the next section I’ll describe the functional programming paradigm. The remainder of this book is primarily concerned with theory and craft. The theory is language agnostic but the craft is firmly in the world of Scala. Before we move on to the functional programming paradigm, there are two points I want to emphasize:

  1. Paradigms are social constructs. They change over time. Object-oriented programming as practiced today differs from the style originally used in Simula and Smalltalk, and functional programming today is very different from the original LISP code.

  2. The three level organization is just a tool for thought. The real world is more complicated.

1.2 Functional Programming

This is a book about the techniques and practices of functional programming (FP). This naturally leads to the question: what is FP and what does it mean to write code in a functional style? It’s common to view functional programming as a collection of language features, such as first class functions, or to define it as a programming style using immutable data and pure functions. (Pure functions always return the same output given the same input.) This was my view when I started down the FP route, but I now believe the true goals of FP are enabling local reasoning and composition. Language features and programming style are in service of these goals. Let me attempt to explain the meaning and value of local reasoning and composition.

1.2.1 What Functional Programming Is

I believe that functional programming is a hypothesis about software quality: that it is easier to write and maintain software that can be understood before it is run, and is built of small reusable components. The first property is known as local reasoning, and the second as composition. Let’s address each in turn.

Local reasoning means we can understand pieces of code in isolation. When we see the expression 1 + 1 we know what it means regardless of the weather, the database, or the current status of our Kubernetes cluster. None of these external events can change it. This is a trivial and slightly silly example, but it illustrates the point. A goal of functional programming is to extend this ability across our code base.

It can help to understand local reasoning by looking at what it is not. Shared mutable state is out, because relying on shared state means that other code can change what our code does without our knowledge. It means no global mutable configuration, as found in many web frameworks and graphics libraries for example, as any random code can change that configuration. Metaprogramming has to be carefully controlled. No monkey patching, for example, as again it allows other code to change our code in non-obvious ways. As we can see, adapting code to enable local reasoning can mean quite sweeping changes. However if we work in a language that embraces functional programming this style of programming is the default.

Composition means building big things out of smaller things. Numbers are compositional. We can take any number and add one, giving us a new number. Lego is also compositional. We compose Lego by sticking it together. In the particular sense we’re using composition we also require that the original elements we combine don’t change in any way when they are composed. When we create 2 by adding 1 and 1 we get a new result that doesn’t change what 1 means.

We can find compositional ways to model common programming tasks once we start looking for them. React components are one example familiar to many front-end developers: a component can consist of many components. HTTP routes can be modelled in a compositional way. A route is a function from an HTTP request to either a handler function or a value indicating the route did not match. We can combine routes as a logical or: try this route or, if it doesn’t match, try this other route. Processing pipelines are another example that often use sequential composition: perform this pipeline stage and then this other pipeline stage.
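To make the routing example concrete, here is a hedged sketch in Scala. The Request and Response types and the orElse combinator are my own simplified assumptions, not any particular library’s API.

final case class Request(path: String)
final case class Response(body: String)

// A route either produces a response or signals that it did not match.
type Route = Request => Option[Response]

// Composition as logical or: try the first route, or else the second.
def orElse(first: Route, second: Route): Route =
  request => first(request).orElse(second(request))

val hello: Route =
  request =>
    if request.path == "/hello" then Some(Response("Hello!")) else None
val goodbye: Route =
  request =>
    if request.path == "/goodbye" then Some(Response("Goodbye!")) else None

// Still a Route, so we can keep composing.
val routes: Route = orElse(hello, goodbye)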

1.2.1.1 Types

Types are not strictly part of functional programming but statically typed FP is the most popular form of FP and sufficiently important to warrant a mention. Types help compilers generate efficient code but types in FP are as much for the programmer as they are the compiler. Types express properties of programs, and the type checker automatically ensures that these properties hold. They can tell us, for example, what a function accepts and what it returns, or that a value is optional. We can also use types to express our beliefs about a program and the type checker will tell us if those beliefs are correct. For example, we can use types to tell the compiler we do not expect an error at a particular point in our code and the type checker will let us know if this is the case. In this way types are another tool for reasoning about code.
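As a minimal sketch of this (my own example, not from the text), the return type below tells both the programmer and the compiler that the lookup may fail, and the type checker pushes callers to handle that possibility.

def findUser(id: Int, users: Map[Int, String]): Option[String] =
  users.get(id)

// The compiler warns us if we forget the None case.
findUser(1, Map(1 -> "noel")) match {
  case Some(name) => println(s"Found $name")
  case None       => println("No such user")
}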

Type systems push programs towards particular designs, as to work effectively with the type checker requires designing code in a way the type checker can understand. As modern type systems come to more languages they naturally tend to shift programmers in those languages towards a FP style of coding.

1.2.2 What Functional Programming Isn’t

In my view functional programming is not about immutability, or keeping to “the substitution model of evaluation”, and so on. These are tools in service of the goals of enabling local reasoning and composition, but they are not the goals themselves. Code that is immutable always allows local reasoning, for example, but it is not necessary to avoid mutation to still have local reasoning. Here is an example of summing a collection of numbers.

def sum(numbers: List[Int]): Int = {
  var total = 0
  numbers.foreach(x => total = total + x)
  total
}

In the implementation we mutate total. This is ok though! We cannot tell from the outside that this is done, and therefore all users of sum can still use local reasoning. Inside sum we have to be careful when we reason about total but this block of code is small enough that it shouldn’t cause any problems.

In this case we can reason about our code despite the mutation, but the Scala compiler cannot determine that this is ok. Scala allows mutation but it’s up to us to use it appropriately. A more expressive type system, perhaps with features like Rust’s, would be able to tell that sum doesn’t allow mutation to be observed by other parts of the system. Another approach, which is the one taken by Haskell, is to disallow all mutation and thus guarantee it cannot cause problems.

Mutation also interferes with composition. For example, if a value relies on internal state then composing it may produce unexpected results. Consider Scala’s Iterator. It maintains internal state that is used to generate the next value. If we have two Iterators we might want to combine them into one Iterator that yields values from the two inputs. The zip method does this.

This works if we pass two distinct iterators to zip.

val it = Iterator(1, 2, 3, 4)

val it2 = Iterator(1, 2, 3, 4)
it.zip(it2).next()
// res0: Tuple2[Int, Int] = (1, 1)

However if we pass the same iterator twice we get a surprising result.

val it3 = Iterator(1, 2, 3, 4)
it3.zip(it3).next()
// res1: Tuple2[Int, Int] = (1, 2)

The usual functional programming solution is to avoid mutable state, but we can envisage other possibilities. For example, an effect tracking system would allow us to avoid combining two iterators that use the same memory region. These systems are still research projects, however.

So in my opinion immutability (and purity, referential transparency, and no doubt more fancy words that I have forgotten) have become associated with functional programming because they guarantee local reasoning and composition, and until recently we didn’t have the language tools to automatically distinguish safe uses of mutation from those that cause problems. Restricting ourselves to immutability is the easiest way to ensure the desirable properties of functional programming, but as languages evolve this might come to be regarded as a historical artifact.

1.2.3 Why It Matters

I have described local reasoning and composition but have not discussed their benefits. Why are they desirable? The answer is that they make efficient use of knowledge. Let me expand on this.

We care about local reasoning because it allows our ability to understand code to scale with the size of the code base. We can understand module A and module B in isolation, and our understanding does not change when we bring them together in the same program. By definition if both A and B allow local reasoning there is no way that B (or any other code) can change our understanding of A, and vice versa. If we don’t have local reasoning every new line of code can force us to revisit the rest of the code base to understand what has changed. This means it becomes exponentially harder to understand code as it grows in size as the number of interactions (and hence possible behaviours) grows exponentially. We can say that local reasoning is compositional. Our understanding of module A calling module B is just our understanding of A, our understanding of B, and whatever calls A makes to B.

We introduced numbers and Lego as examples of composition. They have an interesting property in common: the operations that we can use to combine them (for example, addition, subtraction, and so on for numbers; for Lego the operation is “sticking bricks together”) give us back the same kind of thing. A number multiplied by a number is a number. Two bits of Lego stuck together is still Lego. This property is called closure: when you combine things you end up with the same kind of thing. Closure means you can apply the combining operations (sometimes called combinators) an arbitrary number of times. No matter how many times you add one to a number you still have a number and can still add or subtract or multiply or…you get the idea. If we understand module A, and the combinators that A provides are closed, we can build very complex structures using A without having to learn new concepts! This is also one reason functional programmers tend to like abstractions such as monads (beyond liking fancy words): they allow us to use one mental model in lots of different contexts.
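Here is closure in miniature: every combination of Ints is still an Int, so we never have to leave our mental model of numbers.

val a = 1 + 2 // an Int
val b = a * 3 // still an Int
val c = b - 4 // still an Int, so we can keep combining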

In a sense local reasoning and composition are two sides of the same coin. Local reasoning is compositional; composition allows local reasoning. Both make code easier to understand.

1.2.4 The Evidence for Functional Programming

I’ve made arguments in favour of functional programming and I admit I am biased—I do believe it is a better way to develop code than imperative programming. However, is there any evidence to back up my claim? There has not been much research on the effectiveness of functional programming, but there has been a reasonable amount done on static typing. I feel static typing, particularly using modern type systems, serves as a good proxy for functional programming so let’s look at the evidence there.

In the corners of the Internet I frequent the common refrain is that static typing has negligible effect on productivity. I decided to look into this and was surprised that the majority of the results I found support the claim that static typing increases productivity. For example, the literature review in this dissertation (section 2.3, p16–19) shows a majority of results in favour of static typing, in particular the most recent studies. However the majority of these studies are very small and use relatively inexperienced developers—which is noted in the review by Dan Luu that I linked. My belief is that functional programming comes into its own on larger systems. Furthermore, programming languages, like all tools, require proficiency to use effectively. I’m not convinced very junior developers have sufficient skill to demonstrate a significant difference between languages.

To me the most useful evidence of the effectiveness of functional programming is that industry is adopting functional programming en masse. Consider, say, the widespread and growing adoption of Typescript and React. If we are to argue that FP as embodied by Typescript or React has no value we are also arguing that the thousands of Javascript developers who have switched to using them are deluded. At some point this argument becomes untenable.

This doesn’t mean we’ll all be using Haskell in five years. More likely we’ll see something like the shift to object-oriented programming of the nineties: Smalltalk was the paradigmatic example of OO, but it was more familiar languages like C++ and Java that brought OO to the mainstream. In the case of FP this probably means languages like Scala, Swift, Kotlin, or Rust, and mainstream languages like Javascript and Java continuing to adopt more FP features.

1.2.5 Final Words

I’ve given my opinion on functional programming—that the real goals are local reasoning and composition, and programming practices like immutability are in service of these. Other people may disagree with this definition, and that’s ok. Words are defined by the community that uses them, and meanings change over time.

Functional programming emphasises formal reasoning, and there are some implications that I want to briefly touch on.

Firstly, I find that FP is most valuable in the large. For a small system it is possible to keep all the details in our head. It’s when a program becomes too large for anyone to understand all of it that local reasoning really shows its value. This is not to say that FP should not be used for small projects, but rather that if you are, say, switching from an imperative style of programming you shouldn’t expect to see the benefit when working on toy projects.

The formal models that underlie functional programming allow systematic construction of code. This is in some ways the reverse of reasoning: instead of taking code and deriving properties, we start from some properties and derive code. This sounds very academic but is in fact very practical, and how I develop most of my code.

Finally, reasoning is not the only way to understand code. It’s valuable to appreciate the limitations of reasoning, other methods for gaining understanding, and using a variety of strategies depending on the situation.

In this first part of the book we’re building the foundational strategies on which the rest of the book will build and elaborate. In Chapter 2 we look at algebraic data types, which are our main way of modelling data. We turn to codata in Chapter 3, which is the opposite, or dual, of algebraic data types. Type classes are the focus of Chapter 4, while fundamentals of interpreters are discussed in Chapter 5. These four strategies all describe code artifacts. For example, we can label part of code as an algebraic data type or a type class. We’ll also see strategies that help us write code but don’t necessarily end up directly reflected in it, such as following the types.

2 Algebraic Data Types

This chapter has our first example of a programming strategy: algebraic data types. Any data we can describe using logical ands and logical ors is an algebraic data type. Once we recognize an algebraic data type we get three things for free:

  1. the Scala representation of the data;
  2. a structural recursion skeleton to transform the algebraic data type into any other type; and
  3. a structural corecursion skeleton to construct the algebraic data type from any other type.

The key point is this: from an implementation independent representation of data we can automatically derive most of the interesting implementation specific parts of working with that data.

We’ll start with some examples of data, from which we’ll extract the common structure that motivates algebraic data types. We will then look at their representation in Scala 2 and Scala 3. Next we’ll turn to structural recursion for transforming algebraic data types, followed by structural corecursion for constructing them. We’ll finish by looking at the algebra of algebraic data types, which is interesting but not essential.

2.1 Building Algebraic Data Types

Let’s start with some examples of data from a few different domains. These are simplified descriptions, but they are all representative of real applications.

A user in a discussion forum will typically have a screen name, an email address, and a password. Users also typically have a specific role: normal user, moderator, or administrator, for example. From this we get the following data:

  1. a user is a screen name and an email address and a password and a role; and
  2. a role is normal or moderator or administrator.

A product in an e-commerce store might have a stock keeping unit (a unique identifier for each variant of a product), a name, a description, a price, and a discount.

In two-dimensional vector graphics it’s typical to represent shapes as a path, which is a sequence of actions of a virtual pen. The possible actions are usually straight lines, Bezier curves, or movement that doesn’t result in visible output. A straight line has an end point (the starting point is implicit), a Bezier curve has two control points and an end point, and a move has an end point.

What is common between all the examples above is that the individual elements—the atoms, if you like—are connected by either a logical and or a logical or. For example, a user is a screen name and an email address and a password and a role. A 2D action is a straight line or a Bezier curve or a move. This is the core of algebraic data types: an algebraic data type is data that is combined using logical ands or logical ors. Conversely, whenever we can describe data in terms of logical ands and logical ors we have an algebraic data type.

2.1.1 Sums and Products

Being functional programmers we can’t let a simple concept go without attaching some fancy jargon:

  1. a product type means a logical and; and
  2. a sum type means a logical or.

So algebraic data types consist of sum and product types.

2.1.2 Closed Worlds

Algebraic data types are closed worlds, which means they cannot be extended after they have been defined. In practical terms this means we have to modify the source code where we define the algebraic data type if we want to add or remove elements.

The closed world property is important because it gives us guarantees we would not otherwise have. In particular, it allows the compiler to check that we handle all possible cases when we use an algebraic data type. This is known as exhaustivity checking. This is an example of how functional programming prioritizes reasoning about code—in this case automated reasoning by the compiler—over other properties such as extensibility. We’ll learn more about exhaustivity checking soon.

2.2 Algebraic Data Types in Scala

Now we know what algebraic data types are, we will turn to their representation in Scala. The important point here is that the translation to Scala is entirely determined by the structure of the data; no thinking is required! This means the work is in finding the structure of the data that best represents the problem at hand. Work out the structure of the data and the code directly follows from it.

As algebraic data types are defined in terms of logical ands and logical ors, to represent algebraic data types in Scala we must know how to represent these two concepts. Scala 3 simplifies the representation of algebraic data types compared to Scala 2, so we’ll look at each language version separately.

I’m assuming that you’re familiar with the language features we use to represent algebraic data types in Scala, so I won’t be going over them.

2.2.1 Algebraic Data Types in Scala 3

In Scala 3 a logical and (a product type) is represented by a final case class. If we define a product type A is B and C, the representation in Scala 3 is

final case class A(b: B, c: C)

Not everyone makes their case classes final, but they should. A non-final case class can still be extended by a class, which breaks the closed world criterion for algebraic data types.

A logical or (a sum type) is represented by an enum. For the sum type A is B or C, the Scala 3 representation is

enum A {
  case B
  case C
}

There are a few wrinkles to be aware of.

If we have a sum of products, such as:

  1. A is B or C;
  2. B is D and E; and
  3. C is F and G

the representation is

enum A {
  case B(d: D, e: E)
  case C(f: F, g: G)
}

In other words we don’t write final case class inside an enum. You also can’t nest enum inside enum. Nested logical ors can be rewritten into a single logical or containing only logical ands (known as disjunctive normal form) so this is not a limitation in practice. However the Scala 2 representation is still available in Scala 3 should you want more expressivity.
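As a small sketch of such a rewrite (my own example), suppose we wanted A is B or C, where C is itself D or E. We cannot nest the enums, but we can flatten the nested or:

enum A {
  case B
  case D // previously a case of the nested C
  case E // previously a case of the nested C
}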

2.2.2 Algebraic Data Types in Scala 2

A logical and (product type) has the same representation in Scala 2 as in Scala 3. If we define a product type A is B and C, the representation in Scala 2 is

final case class A(b: B, c: C)

A logical or (a sum type) is represented by a sealed abstract class. For the sum type A is a B or C the Scala 2 representation is

sealed abstract class A
final case class B() extends A
final case class C() extends A

Scala 2 has several little tricks for defining algebraic data types.

Firstly, instead of using a sealed abstract class you can use a sealed trait. There isn’t much practical difference between the two. When teaching beginners I’ll often use sealed trait to avoid having to introduce abstract class. I believe sealed abstract class has slightly better performance and Java interoperability, but I haven’t tested this. I also think sealed abstract class is closer, semantically, to the meaning of a sum type.
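For reference, the sealed trait version of the earlier example looks like this:

sealed trait A
final case class B() extends A
final case class C() extends A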

For extra style points we can have our sealed abstract class extend Product with Serializable. Compare the reported types below with and without this little addition.

Let’s first see the code without extending Product and Serializable.

sealed abstract class A
final case class B() extends A
final case class C() extends A
val list = List(B(), C())
// list: List[Product with Serializable with A] = List(B(), C())

Notice how the type of list includes Product and Serializable.

Now let’s see the code with A extending Product and Serializable.

sealed abstract class A extends Product with Serializable
final case class B() extends A
final case class C() extends A
val list = List(B(), C())
// list: List[A] = List(B(), C())

Much easier to read!

You’ll only see this in Scala 2. Scala 3 has the concept of transparent traits, which aren’t reported in inferred types, so you’ll see the same output in Scala 3 no matter whether you add Product and Serializable or not.

Finally, if a logical and holds no data we can use a case object instead of a case class. For example, if we’re defining some type A that holds no data we can just write

case object A

There is no need to mark the case object as final, as objects cannot be extended.

2.2.3 Examples

Let’s make the discussion above more concrete with some examples.

2.2.3.1 Role and User

In the discussion forum example, we said a role is normal, moderator, or administrator. This is a logical or, so we can directly translate it to Scala using the appropriate pattern. In Scala 3 we write

enum Role {
  case Normal
  case Moderator
  case Administrator
}

In Scala 2 we write

sealed abstract class Role extends Product with Serializable
case object Normal extends Role
case object Moderator extends Role
case object Administrator extends Role

The cases within a role don’t hold any data, so we used a case object in the Scala 2 code.

We defined a user as a screen name, an email address, a password, and a role. In both Scala 3 and Scala 2 this becomes

final case class User(
  screenName: String,
  emailAddress: String,
  password: String,
  role: Role
)

I’ve used String to represent most of the data within a User, but in real code we might want to define distinct types for each field.
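For example, here is one possible sketch of such distinct types, using simple wrapper classes (the names are my own, and Role is the enum defined above):

final case class ScreenName(value: String)
final case class EmailAddress(value: String)
final case class Password(value: String)

final case class User(
  screenName: ScreenName,
  emailAddress: EmailAddress,
  password: Password,
  role: Role
)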

2.2.3.2 Paths

We defined a path as a sequence of actions of a virtual pen. The possible actions are straight lines, Bezier curves, or movement that doesn’t result in visible output. A straight line has an end point (the starting point is implicit), a Bezier curve has two control points and an end point, and a move has an end point.

This has a straightforward translation to Scala. We can represent paths as the following in both Scala 3 and Scala 2.

final case class Path(actions: Seq[Action])

An action is a logical or, so we have different representations in Scala 3 and Scala 2. In Scala 3 we’d write

enum Action {
  case Line(end: Point)
  case Curve(cp1: Point, cp2: Point, end: Point)
  case Move(end: Point)
}

where Point is a suitable representation of a two-dimensional point.

In Scala 2 we have to go with the more verbose

sealed abstract class Action extends Product with Serializable 
final case class Line(end: Point) extends Action
final case class Curve(cp1: Point, cp2: Point, end: Point)
  extends Action
final case class Move(end: Point) extends Action

2.2.4 Representing ADTs in Scala 3

We’ve seen that the Scala 3 representation of algebraic data types, using enum, is more compact than the Scala 2 representation. However the Scala 2 representation is still available. Should you ever use the Scala 2 representation in Scala 3? There are a few cases where you may want to; most notably, when you need more expressivity than enum provides, such as nesting one sum type directly within another.

Exercise: Tree

To gain a bit of practice defining algebraic data types, code the following description in Scala (your choice of version, or do both.)

A Tree with elements of type A is:

  1. a Leaf with a value of type A; or
  2. a Node with a left and a right child, which are both Trees with elements of type A.

We can directly translate this binary tree into Scala. Here’s the Scala 3 version.

enum Tree[A] {
  case Leaf(value: A)
  case Node(left: Tree[A], right: Tree[A])
}

In the Scala 2 encoding we write

sealed abstract class Tree[A] extends Product with Serializable
final case class Leaf[A](value: A) extends Tree[A]
final case class Node[A](left: Tree[A], right: Tree[A]) extends Tree[A]

2.3 Structural Recursion

Structural recursion is our second programming strategy. Algebraic data types tell us how to create data given a certain structure. Structural recursion tells us how to transform an algebraic data type into any other type. Given an algebraic data type, the transformation can be implemented using structural recursion.

As with algebraic data types, there is a distinction between the concept of structural recursion and its implementation in Scala. The distinction is more obvious here because there are two ways to implement structural recursion in Scala: via pattern matching or via dynamic dispatch. We’ll look at each in turn.

2.3.1 Pattern Matching

I’m assuming you’re familiar with pattern matching in Scala, so I’ll only talk about how to implement structural recursion using pattern matching. Remember there are two kinds of algebraic data types: sum types (logical ors) and product types (logical ands). We have corresponding rules for structural recursion implemented using pattern matching:

  1. For each branch in a sum type we have a distinct case in the pattern match; and
  2. Each case corresponds to a product type with the pattern written in the usual way.

Let’s see this in code, using an example ADT that includes both sum and product types:

  1. A is B or C;
  2. B is D and E; and
  3. C is F and G

which we represent (in Scala 3) as

enum A {
  case B(d: D, e: E)
  case C(f: F, g: G)
}

Following the rules above means a structural recursion would look like

anA match {
  case B(d, e) => ???
  case C(f, g) => ???
}

The ??? bits are problem specific, and we cannot give a general solution for them. However we’ll soon see strategies to help create them.

2.3.2 The Recursion in Structural Recursion

At this point you might be wondering where the recursion in structural recursion comes from. It comes from an additional rule: whenever the data is recursive the method is recursive in the same place.

Let’s see this in action for a real data type.

We can define a list with elements of type A as:

  1. the empty list; or
  2. a pair containing a head of type A and a tail, which is itself a list of A.

This is exactly the definition of List in the standard library. Notice it’s an algebraic data type as it consists of sums and products. It is also recursive: in the pair case the tail is itself a list.

We can directly translate this to code, using the strategy for algebraic data types we saw previously. In Scala 3 we write

enum MyList[A] {
  case Empty()
  case Pair(head: A, tail: MyList[A])
}

Let’s implement map for MyList. We start with the method skeleton specifying just the name and types.

enum MyList[A] {
  case Empty()
  case Pair(head: A, tail: MyList[A])
  
  def map[B](f: A => B): MyList[B] = 
    ???
}

Our first step is to recognize that map can be written using a structural recursion. MyList is an algebraic data type, map is transforming this algebraic data type, and therefore structural recursion is applicable. We now apply the structural recursion strategy, giving us

enum MyList[A] {
  case Empty()
  case Pair(head: A, tail: MyList[A])
  
  def map[B](f: A => B): MyList[B] = 
    this match {
      case Empty() => ???
      case Pair(head, tail) => ???
    }
}

I forgot the recursion rule! The data is recursive in the tail of Pair, so map is recursive there as well.

enum MyList[A] {
  case Empty()
  case Pair(head: A, tail: MyList[A])
  
  def map[B](f: A => B): MyList[B] = 
    this match {
      case Empty() => ???
      case Pair(head, tail) => ??? tail.map(f)
    }
}

I left the ??? to indicate that we haven’t finished with that case.

Now we can move on to the problem specific parts. Here we have three strategies to help us:

  1. reasoning independently by case;
  2. assuming the recursion is correct; and
  3. following the types.

The first two are specific to structural recursion, while the final one is a general strategy we can use in many situations. Let’s briefly discuss each and then see how they apply to our example.

The first strategy is relatively simple: when we consider the problem specific code on the right hand side of a pattern matching case, we can ignore the code in any other pattern match cases. So, for example, when considering the case for Empty above we don’t need to worry about the case for Pair, and vice versa.

The next strategy is a little bit more complicated, and has to do with recursion. Remember that the structural recursion strategy tells us where to place any recursive calls. This means we don’t have to think through the recursion. Instead we assume the recursive call will correctly compute what it claims, and only consider how to further process the result of the recursion. The result is guaranteed to be correct so long as we get the non-recursive parts correct.

In the example above we have the recursion tail.map(f). We can assume this correctly computes map on the tail of the list, and we only need to think about what we should do with the remaining data: the head and the result of the recursive call.

It’s this property that allows us to consider cases independently. Recursive calls are the only thing that connect the different cases, and they are given to us by the structural recursion strategy.

Our final strategy is following the types. It can be used in many situations, not just structural recursion, so I consider it a separate strategy. The core idea is to use the information in the types to restrict the possible implementations. We can look at the types of inputs and outputs to help us.

Now let’s use these strategies to finish the implementation of map. We start with

enum MyList[A] {
  case Empty()
  case Pair(head: A, tail: MyList[A])
  
  def map[B](f: A => B): MyList[B] = 
    this match {
      case Empty() => ???
      case Pair(head, tail) => ??? tail.map(f)
    }
}

Our first strategy is to consider the cases independently. Let’s start with the Empty case. There is no recursive call here, so reasoning about recursion doesn’t come into play. Let’s instead use the types. There is no input here other than the Empty case we have already matched, so we cannot use the input types to further restrict the code. Let’s instead consider the output type. We’re trying to create a MyList[B]. There are only two ways to create a MyList[B]: an Empty or a Pair. To create a Pair we need a head of type B, which we don’t have. So we can only use Empty. This is the only possible code we can write. The types are sufficiently restrictive that we cannot write incorrect code for this case.

enum MyList[A] {
  case Empty()
  case Pair(head: A, tail: MyList[A])
  
  def map[B](f: A => B): MyList[B] = 
    this match {
      case Empty() => Empty()
      case Pair(head, tail) => ??? tail.map(f)
    }
}

Now let’s move to the Pair case. We can apply both the structural recursion reasoning strategy and following the types. Let’s use each in turn.

The case for Pair is

case Pair(head, tail) => ??? tail.map(f)

Remember we can consider this independently of the other case. We assume the recursion is correct. This means we only need to think about what we should do with the head, and how we should combine this result with tail.map(f). Let’s now follow the types to finish the code. Our goal is to produce a MyList[B]. We already have the following available:

  1. the head of type A;
  2. the result of the recursion, tail.map(f), of type MyList[B];
  3. the function f of type A => B; and
  4. the constructors Empty() and Pair().

We could return just Empty, matching the case we’ve already written. This has the correct type but we might expect it is not the correct answer because it does not use the result of the recursion, head, or f in any way.

We could return just tail.map(f). This has the correct type but we might expect it is not correct because we don’t use head or f in any way.

We can call f on head, producing a value of type B, and then combine this value and the result of the recursive call using Pair to produce a MyList[B]. This is the correct solution.

enum MyList[A] {
  case Empty()
  case Pair(head: A, tail: MyList[A])
  
  def map[B](f: A => B): MyList[B] = 
    this match {
      case Empty() => Empty()
      case Pair(head, tail) => Pair(f(head), tail.map(f))
    }
}

If you’ve followed this example you’ve hopefully seen how we can use the three strategies to systematically find the correct implementation. Notice how we interleaved the recursion strategy and following the types to guide us to a solution for the Pair case. Also note how following the types alone gave us three possible implementations for the Pair case. In this code, and as is usually the case, the solution was the implementation that used all of the available inputs.

2.3.3 Exhaustivity Checking

Remember that algebraic data types are a closed world: they cannot be extended once defined. The Scala compiler can use this to check that we handle all possible cases in a pattern match, so long as we write the pattern match in a way the compiler can work with. This is known as exhaustivity checking.

Here’s a simple example. We start by defining a straightforward algebraic data type.

// Some of the possible units for lengths in CSS
enum CssLength {
  case Em(value: Double)
  case Rem(value: Double)
  case Pt(value: Double)
}

If we write a pattern match using the structural recursion strategy, the compiler will complain if we’re missing a case.

import CssLength.*

CssLength.Em(2.0) match {
  case Em(value) => value
  case Rem(value) => value
}
// -- [E029] Pattern Match Exhaustivity Warning: ----------------------------------
// 1 |CssLength.Em(2.0) match {
//   |^^^^^^^^^^^^^^^^^
//   |match may not be exhaustive.
//   |
//   |It would fail on pattern case: CssLength.Pt(_)
//   |
//   | longer explanation available when compiling with `-explain`

Exhaustivity checking is incredibly useful. For example, if we add or remove a case from an algebraic data type, the compiler will tell us all the pattern matches that need to be updated.

2.3.4 Dynamic Dispatch

Using dynamic dispatch to implement structural recursion is an implementation technique that may feel more natural to people with a background in object-oriented programming.

The dynamic dispatch approach consists of:

  1. defining an abstract method at the root of the algebraic data type; and
  2. implementing that abstract method at every leaf of the algebraic data type.

This implementation technique is only available if we use the Scala 2 encoding of algebraic data types.

Let’s see it in the MyList example we just looked at. Our first step is to rewrite the definition of MyList to the Scala 2 style.

sealed abstract class MyList[A] extends Product with Serializable
final case class Empty[A]() extends MyList[A]
final case class Pair[A](head: A, tail: MyList[A]) extends MyList[A]

Next we define an abstract method for map on MyList.

sealed abstract class MyList[A] extends Product with Serializable {
  def map[B](f: A => B): MyList[B]
}
final case class Empty[A]() extends MyList[A]
final case class Pair[A](head: A, tail: MyList[A]) extends MyList[A]

Then we implement map on the concrete subtypes Empty and Pair.

sealed abstract class MyList[A] extends Product with Serializable {
  def map[B](f: A => B): MyList[B]
}
final case class Empty[A]() extends MyList[A] {
  def map[B](f: A => B): MyList[B] = 
    Empty()
}
final case class Pair[A](head: A, tail: MyList[A]) extends MyList[A] {
  def map[B](f: A => B): MyList[B] =
    Pair(f(head), tail.map(f))
}

We can use exactly the same strategies we used in the pattern matching case to create this code. The implementation technique is different but the underlying concept is the same.

Given we have two implementation strategies, which should we use? If we’re using enum in Scala 3 we don’t have a choice; we must use pattern matching. In other situations we can choose between the two. I prefer to use pattern matching when I can, as it puts the entire method definition in one place. However, Scala 2 in particular has problems inferring types in some pattern matches. In these situations we can use dynamic dispatch instead. We’ll learn more about this when we look at generalized algebraic data types.

Exercise: Methods for Tree

In a previous exercise we created a Tree algebraic data type:

enum Tree[A] {
  case Leaf(value: A)
  case Node(left: Tree[A], right: Tree[A])
}

Or, in the Scala 2 encoding:

sealed abstract class Tree[A] extends Product with Serializable
final case class Leaf[A](value: A) extends Tree[A]
final case class Node[A](left: Tree[A], right: Tree[A]) extends Tree[A]

Let’s get some practice with structural recursion and write some methods for Tree. Implement:

  1. size, which returns the number of values (Leafs) stored in the Tree;
  2. contains, which returns true if the Tree contains a given element of type A, and false otherwise; and
  3. map, which creates a Tree[B] given a function A => B.

Use whichever you prefer of pattern matching or dynamic dispatch to implement the methods.

I chose to use pattern matching to implement these methods. I’m using the Scala 3 encoding so I have no choice.

I start by creating the method declarations with empty bodies.

enum Tree[A] {
  case Leaf(value: A)
  case Node(left: Tree[A], right: Tree[A])
  
  def size: Int = 
    ???

  def contains(element: A): Boolean =
    ???
    
  def map[B](f: A => B): Tree[B] =
    ???
}

Now these methods all transform an algebraic data type so I can implement them using structural recursion. I write down the structural recursion skeleton for Tree, remembering to apply the recursion rule.

enum Tree[A] {
  case Leaf(value: A)
  case Node(left: Tree[A], right: Tree[A])
  
  def size: Int = 
    this match { 
      case Leaf(value)       => ???
      case Node(left, right) => left.size ??? right.size
    }

  def contains(element: A): Boolean =
    this match { 
      case Leaf(value)       => ???
      case Node(left, right) => left.contains(element) ??? right.contains(element)
    }
    
  def map[B](f: A => B): Tree[B] =
    this match { 
      case Leaf(value)       => ???
      case Node(left, right) => left.map(f) ??? right.map(f)
    }
}

Now I can use the other reasoning techniques to complete the method declarations. Let’s work through size.

def size: Int = 
  this match { 
    case Leaf(value)       => ???
    case Node(left, right) => left.size ??? right.size
  }

I can reason independently by case. The size of a Leaf is, by definition, 1.

def size: Int = 
  this match { 
    case Leaf(value)       => 1
    case Node(left, right) => left.size ??? right.size
  }

Now I can use the rule for reasoning about recursion: I assume the recursive calls successfully compute the size of the left and right children. What is the size then of the combined tree? It must be the sum of the size of the children. With this, I’m done.

def size: Int = 
  this match { 
    case Leaf(value)       => 1
    case Node(left, right) => left.size + right.size
  }

I can use the same process to work through the other two methods, giving me the complete solution shown below.

enum Tree[A] {
  case Leaf(value: A)
  case Node(left: Tree[A], right: Tree[A])
  
  def size: Int = 
    this match { 
      case Leaf(value)       => 1
      case Node(left, right) => left.size + right.size
    }

  def contains(element: A): Boolean =
    this match { 
      case Leaf(value)       => element == value
      case Node(left, right) => left.contains(element) || right.contains(element)
    }
    
  def map[B](f: A => B): Tree[B] =
    this match { 
      case Leaf(value)       => Leaf(f(value))
      case Node(left, right) => Node(left.map(f), right.map(f))
    }
}

2.3.5 Folds as Structural Recursions

Let’s finish by looking at the fold method as an abstraction over structural recursion. If you did the Tree exercise above, you will have noticed that we wrote the same kind of code again and again. Here are the methods we wrote. Notice the left-hand sides of the pattern matches are all the same, and the right-hand sides are very similar.

def size: Int = 
  this match { 
    case Leaf(value)       => 1
    case Node(left, right) => left.size + right.size
  }

def contains(element: A): Boolean =
  this match { 
    case Leaf(value)       => element == value
    case Node(left, right) => left.contains(element) || right.contains(element)
  }
  
def map[B](f: A => B): Tree[B] =
  this match { 
    case Leaf(value)       => Leaf(f(value))
    case Node(left, right) => Node(left.map(f), right.map(f))
  }

This is the point of structural recursion: to recognize and formalize this similarity. However, as programmers we might want to abstract over this repetition. Can we write a method that captures everything that doesn’t change in a structural recursion, and allows the caller to pass arguments for everything that does change? It turns out we can. For any algebraic data type we can define at least one method, called a fold, that captures all the parts of structural recursion that don’t change and allows the caller to specify all the problem specific parts.

Let’s see how this is done using the example of MyList. Recall the definition of MyList is

enum MyList[A] {
  case Empty()
  case Pair(head: A, tail: MyList[A])
}

We know the structural recursion skeleton for MyList is

def doSomething[A](list: MyList[A]) =
  list match {
    case Empty()          => ???
    case Pair(head, tail) => ??? doSomething(tail)
  } 

Implementing fold for MyList means defining a method

def fold[A, B](list: MyList[A]): B =
  list match {
    case Empty() => ???
    case Pair(head, tail) => ??? fold(tail)
  }

where B is the type the caller wants to create.

To complete fold we add method parameters for the problem specific (???) parts. In the case for Empty, we need a value of type B (notice that I’m following the types here).

def fold[A, B](list: MyList[A], empty: B): B =
  list match {
    case Empty() => empty
    case Pair(head, tail) => ??? fold(tail, empty)
  }

For the Pair case, we have the head of type A and the recursion producing a value of type B. This means we need a function to combine these two values.

def foldRight[A, B](list: MyList[A], empty: B, f: (A, B) => B): B =
  list match {
    case Empty() => empty
    case Pair(head, tail) => f(head, foldRight(tail, empty, f))
  }

This is foldRight (and I’ve renamed the method to indicate this). You might have noticed there is another valid solution. Both empty and the recursion produce values of type B. If we follow the types we can come up with

def foldLeft[A, B](list: MyList[A], empty: B, f: (A, B) => B): B =
  list match {
    case Empty() => empty
    case Pair(head, tail) => foldLeft(tail, f(head, empty), f)
  }

which is foldLeft, the tail-recursive variant of fold for a list. (We’ll talk about tail-recursion in a later chapter.)
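
As a quick check, here is a small example (my own, not from the main text). For an associative and commutative operation like addition, foldRight and foldLeft agree, although they combine the elements in a different order. I’ve annotated the function’s parameter types because, as we’ll discuss shortly, inference cannot flow between parameters in a single parameter list.

val list: MyList[Int] =
  MyList.Pair(1, MyList.Pair(2, MyList.Pair(3, MyList.Empty())))

foldRight(list, 0, (a: Int, b: Int) => a + b)
// 1 + (2 + (3 + 0)) = 6
foldLeft(list, 0, (a: Int, b: Int) => a + b)
// 3 + (2 + (1 + 0)) = 6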

We can follow the same process for any algebraic data type to create its folds. The rules are:

  1. a fold is a function from the algebraic data type, plus one additional parameter for each case in the data type, to a result of some arbitrary type that we’ll call B;
  2. the parameter for a case is a function from that case’s constructor arguments to B, with recursive arguments replaced by B; and
  3. if a case has no constructor arguments, the function can be replaced with a plain value of type B.

Returning to MyList, it has:

  1. two cases, and hence two parameters in addition to the list itself;
  2. Empty has no constructor arguments, so we use a value of type B (the empty parameter); and
  3. Pair has a head of type A and a recursive tail, so its parameter is a function of type (A, B) => B (the f parameter).

Exercise: Tree Fold

Implement a fold for Tree defined earlier. There are several different ways to traverse a tree (pre-order, post-order, and in-order). Just choose whichever seems easiest.

I start by adding the method declaration without a body.

enum Tree[A] {
  case Leaf(value: A)
  case Node(left: Tree[A], right: Tree[A])
  
  def fold[B]: B =
    ???
}

The next step is to add the structural recursion skeleton.

enum Tree[A] {
  case Leaf(value: A)
  case Node(left: Tree[A], right: Tree[A])
  
  def fold[B]: B =
    this match {
      case Leaf(value)       => ???
      case Node(left, right) => left.fold ??? right.fold
    }
}

Now I follow the types to add the method parameters. For the Leaf case we want a function of type A => B.

enum Tree[A] {
  case Leaf(value: A)
  case Node(left: Tree[A], right: Tree[A])
  
  def fold[B](leaf: A => B): B =
    this match {
      case Leaf(value)       => leaf(value)
      case Node(left, right) => left.fold ??? right.fold
    }
}

For the Node case we want a function that combines the two recursive results, and therefore has type (B, B) => B.

enum Tree[A] {
  case Leaf(value: A)
  case Node(left: Tree[A], right: Tree[A])
  
  def fold[B](leaf: A => B)(node: (B, B) => B): B =
    this match {
      case Leaf(value)       => leaf(value)
      case Node(left, right) => node(left.fold(leaf)(node), right.fold(leaf)(node))
    }
}

Exercise: Using Fold

Prove to yourself that you can replace structural recursion with calls to fold, by redefining size, contains, and map for Tree using only fold.

enum Tree[A] {
  case Leaf(value: A)
  case Node(left: Tree[A], right: Tree[A])
  
  def fold[B](leaf: A => B)(node: (B, B) => B): B =
    this match {
      case Leaf(value)       => leaf(value)
      case Node(left, right) => node(left.fold(leaf)(node), right.fold(leaf)(node))
    }
    
  def size: Int = 
    this.fold(_ => 1)(_ + _)

  def contains(element: A): Boolean =
    this.fold(_ == element)(_ || _)
    
  def map[B](f: A => B): Tree[B] =
    this.fold(v => Leaf(f(v)))((l, r) => Node(l, r))
}
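
We should check that the fold-based definitions behave like the structural recursions they replace. Here is a small example tree (my own) and the results we expect:

val tree: Tree[Int] =
  Tree.Node(Tree.Node(Tree.Leaf(1), Tree.Leaf(2)), Tree.Leaf(3))

tree.size
// 3: one for each leaf
tree.contains(2)
// true
tree.map(x => x * 2).contains(6)
// true: mapping doubles 3 to 6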

2.4 Structural Corecursion

Structural corecursion is the opposite—more correctly, the dual—of structural recursion. Whereas structural recursion tells us how to take apart an algebraic data type, structural corecursion tells us how to build up, or construct, an algebraic data type. Whereas we can use structural recursion whenever the input of a method or function is an algebraic data type, we can use structural corecursion whenever the output of a method or function is an algebraic data type.

Duality in Functional Programming

Two concepts or structures are duals if one can be translated in a one-to-one fashion to the other. Duality is one of the main themes of this book. By relating concepts as duals we can transfer knowledge from one domain to another.

Duality is often indicated by attaching the co- prefix to one of the structures or concepts. For example, corecursion is the dual of recursion, and sum types, also known as coproducts, are the dual of product types.

Structural recursion works by considering all the possible inputs (which we usually represent as patterns), and then working out what we do with each input case. Structural corecursion works by considering all the possible outputs, which are the constructors of the algebraic data type, and then working out the conditions under which we’d call each constructor.

Let’s return to the list with elements of type A, defined as:

  1. the empty list; or
  2. a pair containing a head of type A and a tail that is itself a list of A.

In Scala 3 we write

enum MyList[A] {
  case Empty()
  case Pair(head: A, tail: MyList[A])
}

We can use structural corecursion if we’re writing a method that produces a MyList. A good example is map:

enum MyList[A] {
  case Empty()
  case Pair(head: A, tail: MyList[A])
  
  def map[B](f: A => B): MyList[B] = 
    ???
}

The output of this method is a MyList, which is an algebraic data type. Since we need to construct a MyList we can use structural corecursion. The structural corecursion strategy says we write down all the constructors and then consider the conditions that will cause us to call each constructor. So our starting point is to just write down the two constructors, and put in dummy conditions.

enum MyList[A] {
  case Empty()
  case Pair(head: A, tail: MyList[A])
  
  def map[B](f: A => B): MyList[B] = 
    if ??? then Empty()
    else Pair(???, ???)
}

We can also apply the recursion rule: where the data is recursive so is the method.

enum MyList[A] {
  case Empty()
  case Pair(head: A, tail: MyList[A])
  
  def map[B](f: A => B): MyList[B] = 
    if ??? then Empty()
    else Pair(???, ???.map(f))
}

To complete the left-hand side we can use the strategies we’ve already seen:

  1. we can use structural recursion, in the form of a pattern match, to distinguish the Empty and Pair cases; and
  2. we can follow the types to fill in the remaining parameters.

In short order we arrive at the correct solution

enum MyList[A] {
  case Empty()
  case Pair(head: A, tail: MyList[A])
  
  def map[B](f: A => B): MyList[B] = 
    this match {
      case Empty() => Empty()
      case Pair(head, tail) => Pair(f(head), tail.map(f))
    }
}

There are a few interesting points here. Firstly, we should acknowledge that map is both a structural recursion and a structural corecursion. This is not always the case. For example, foldLeft and foldRight are not structural corecursions because they are not constrained to only produce an algebraic data type. Secondly, note that when we walked through the process of creating map as a structural recursion we implicitly used the structural corecursion pattern, as part of following the types. We recognized that we were producing a MyList, that there were two possibilities for producing a MyList, and then worked out the correct conditions for each case. Formalizing structural corecursion as a separate strategy allows us to be more conscious of where we apply it. Finally, notice how I switched from an if expression to a pattern match expression as we progressed through defining map. This is perfectly fine. Both kinds of expression achieve the same effect. Pattern matching is a little bit safer due to exhaustivity checking. If we wanted to continue using an if we’d have to define a method (for example, isEmpty) that allows us to distinguish an Empty element from a Pair. This method would have to use pattern matching in its implementation, so avoiding pattern matching directly is just pushing it elsewhere.

2.4.1 Unfolds as Structural Corecursion

Just as we could abstract structural recursion as a fold, for any given algebraic data type we can abstract structural corecursion as an unfold. Unfolds are much less commonly used than folds, but they are still a nice tool to have.

Let’s work through the process of deriving unfold, using MyList as our example again.

enum MyList[A] {
  case Empty()
  case Pair(head: A, tail: MyList[A])
}

The corecursion skeleton is

if ??? then MyList.Empty()
else MyList.Pair(???, recursion(???))

Our starting point is writing the skeleton for unfold. It’s a little bit unusual in that I’ve added a parameter seed. This is the information we use to create an element. We’ll need this, but we cannot derive it from our strategies, so I’ve added it in here as a starting assumption.

def unfold[A, B](seed: A): MyList[B] =
  ???

Now we start using our strategies to fill in the missing pieces. I’m using the corecursion skeleton and I’ve applied the recursion rule immediately in the code below, to save a bit of time in the derivation.

def unfold[A, B](seed: A): MyList[B] =
  if ??? then MyList.Empty()
  else MyList.Pair(???, unfold(seed))

We can abstract the condition using a function of type A => Boolean.

def unfold[A, B](seed: A, stop: A => Boolean): MyList[B] =
  if stop(seed) then MyList.Empty()
  else MyList.Pair(???, unfold(seed, stop))

Now we need to handle the case for Pair. We have a value of type A (seed), so to create the head element of Pair we can ask for a function of type A => B.

def unfold[A, B](seed: A, stop: A => Boolean, f: A => B): MyList[B] =
  if stop(seed) then MyList.Empty()
  else MyList.Pair(f(seed), unfold(???, stop, f))

Finally we need to update the current value of seed to the next value. That’s a function A => A.

def unfold[A, B](seed: A, stop: A => Boolean, f: A => B, next: A => A): MyList[B] =
  if stop(seed) then MyList.Empty()
  else MyList.Pair(f(seed), unfold(next(seed), stop, f, next))

At this point we’re done. Let’s see that unfold is useful by declaring some other methods in terms of it. We’re going to declare map, which we’ve already seen is a structural corecursion, using unfold. We will also define fill and iterate, which are methods that construct lists and correspond to the methods with the same names on List in the Scala standard library.

To make this easier to work with I’m going to declare unfold as a method on the MyList companion object. I have made a slight tweak to the definition to make type inference work a bit better. In Scala, types inferred for one method parameter cannot be used for other method parameters in the same parameter list. However, types inferred for one method parameter list can be used in subsequent lists. Separating the function parameters from the seed parameter means that the value inferred for A from seed can be used for inference of the function parameters’ input parameters.

I have also declared some destructor methods, which are methods that take apart an algebraic data type. For MyList these are head, tail, and the predicate isEmpty. We’ll talk more about these a bit later.

Here’s our starting point.

enum MyList[A] {
  case Empty()
  case Pair(_head: A, _tail: MyList[A])

  def isEmpty: Boolean =
    this match {
      case Empty() => true
      case _       => false
    }
    
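  // Partial: throws a MatchError if called on Empty()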
  def head: A =
    this match {
      case Pair(head, _) => head
    }
    
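  // Partial: throws a MatchError if called on Empty()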
  def tail: MyList[A] =
    this match {
      case Pair(_, tail) => tail
    }
}
object MyList {
  def unfold[A, B](seed: A)(stop: A => Boolean, f: A => B, next: A => A): MyList[B] =
    if stop(seed) then MyList.Empty()
    else MyList.Pair(f(seed), unfold(next(seed))(stop, f, next))
}

Now let’s define the constructors fill and iterate, and map, in terms of unfold. I think the constructors are a bit simpler, so I’ll do those first.

object MyList {
  def unfold[A, B](seed: A)(stop: A => Boolean, f: A => B, next: A => A): MyList[B] =
    if stop(seed) then MyList.Empty()
    else MyList.Pair(f(seed), unfold(next(seed))(stop, f, next))
    
  def fill[A](n: Int)(elem: => A): MyList[A] =
    ???
    
  def iterate[A](start: A, len: Int)(f: A => A): MyList[A] =
    ???
}

Here I’ve just added the method skeletons, which are taken straight from the List documentation. To implement these methods we can use one of two strategies:

  1. reasoning with loop variants and invariants, as we would for an imperative loop; or
  2. reasoning with structural recursion over the natural numbers.

Let’s talk about each in turn.

You might have noticed that the parameters to unfold are almost exactly those you need to create a for-loop in a language like Java. A classic for-loop, of the for(i = 0; i < n; i++) kind, has four components:

  1. the initial value of the loop counter;
  2. the stopping condition of the loop;
  3. the statement that advances the counter; and
  4. the body of the loop that uses the counter.

These correspond to the seed, stop, next, and f parameters of unfold respectively.
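For example, a loop that counts from 0 up to (but not including) 5 translates directly (a sketch using the unfold we just defined; note that stop is the negation of the loop condition, because unfold stops when stop returns true):

// for (i = 0; i < 5; i++) { use i }
MyList.unfold(0)(i => i >= 5, i => i, i => i + 1)
// produces the list 0, 1, 2, 3, 4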

Loop variants and invariants are the standard way of reasoning about imperative loops. I’m not going to describe them here, as you have probably already learned how to reason about loops (though perhaps not using these terms). Instead I’m going to discuss the second reasoning strategy, which relates writing unfold to something we’ve already discussed: structural recursion.

Our first step is to note that natural numbers (the integers 0 and larger) are conceptually algebraic data types even though the implementation in Scala—using Int—is not. A natural number is either:

  1. zero; or
  2. 1 + n, where n is a natural number.

It’s the simplest possible algebraic data type that is both a sum and a product type.

Once we see this, we can use the reasoning tools for structural recursion to create the parameters to unfold. Let’s show how this works with fill. The n parameter tells us how many elements there are in the list we’re creating. The elem parameter creates those elements, and is called once for each element. So our starting point is to consider this as a structural recursion over the natural numbers. We can take n as seed, and stop as the function x => x == 0. These are the standard conditions for a structural recursion over the natural numbers. What about next? Well, the definition of natural numbers tells us we should subtract one in the recursive case, so next becomes x => x - 1. We now only need f, and that comes from the definition of how fill is supposed to work. We create the value from elem, so f is just _ => elem.

object MyList {
  def unfold[A, B](seed: A)(stop: A => Boolean, f: A => B, next: A => A): MyList[B] =
    if stop(seed) then MyList.Empty()
    else MyList.Pair(f(seed), unfold(next(seed))(stop, f, next))
    
  def fill[A](n: Int)(elem: => A): MyList[A] =
    unfold(n)(_ == 0, _ => elem, _ - 1)
    
  def iterate[A](start: A, len: Int)(f: A => A): MyList[A] =
    ???
}

We should check that our implementation works as intended. We can do this by comparing it to List.fill.

List.fill(5)(1)
// res6: List[Int] = List(1, 1, 1, 1, 1)
MyList.fill(5)(1)
// res7: MyList[Int] = MyList(1, 1, 1, 1, 1)

Here’s a slightly more complex example, using a stateful method to create a list of ascending numbers. First we define the state and method that uses it.

var counter = 0
def getAndInc(): Int = {
  val temp = counter
  counter = counter + 1
  temp 
}

Now we can use it to create lists.

List.fill(5)(getAndInc())
// res8: List[Int] = List(0, 1, 2, 3, 4)
counter = 0
MyList.fill(5)(getAndInc())
// res10: MyList[Int] = MyList(0, 1, 2, 3, 4)

Exercise: Iterate

Implement iterate using the same reasoning as we did for fill. This is slightly more complex than fill as we need to keep two bits of information: the value of the counter and the value of type A.

object MyList {
  def unfold[A, B](seed: A)(stop: A => Boolean, f: A => B, next: A => A): MyList[B] =
    if stop(seed) then MyList.Empty()
    else MyList.Pair(f(seed), unfold(next(seed))(stop, f, next))
    
  def fill[A](n: Int)(elem: => A): MyList[A] =
    unfold(n)(_ == 0, _ => elem, _ - 1)
    
  def iterate[A](start: A, len: Int)(f: A => A): MyList[A] =
    unfold((len, start))(
      (len, _) => len == 0,
      (_, start) => start,
      (len, start) => (len - 1, f(start))
    )
}

We should check that this works.

List.iterate(0, 5)(x => x - 1)
// res11: List[Int] = List(0, -1, -2, -3, -4)
MyList.iterate(0, 5)(x => x - 1)
// res12: MyList[Int] = MyList(0, -1, -2, -3, -4)

Exercise: Map

Once you’ve completed iterate, try to implement map in terms of unfold. You’ll need to use the destructors to implement it.

def map[B](f: A => B): MyList[B] =
  MyList.unfold(this)(
    _.isEmpty,
    pair => f(pair.head),
    pair => pair.tail
  )

List.iterate(0, 5)(x => x + 1).map(x => x * 2)
// res13: List[Int] = List(0, 2, 4, 6, 8)
MyList.iterate(0, 5)(x => x + 1).map(x => x * 2)
// res14: MyList[Int] = MyList(0, 2, 4, 6, 8)

Now a quick discussion on destructors. The destructors do two things:

  1. distinguish the different cases within a sum type; and
  2. extract elements from each product type.

So for MyList the minimal set of destructors is isEmpty, which distinguishes Empty from Pair, and head and tail. The extractors are partial functions in the conceptual, not Scala, sense; they are only defined for a particular product type and throw an exception if used on a different case. You may have also noticed that the functions we passed to fill are exactly the destructors for natural numbers.

The destructors are another part of the duality between structural recursion and corecursion. Structural recursion is:

  1. driven by the constructors of the input, which we pattern match against to take an algebraic data type apart; and
  2. applicable whenever the input is an algebraic data type.

Structural corecursion instead is:

  1. driven by the constructors of the output: we choose conditions, often expressed using destructors, that determine which constructor to call; and
  2. applicable whenever the output is an algebraic data type.

One last thing before we leave unfold. If we look at the usual definition of unfold we’ll probably find something like the following.

def unfold[A, B](in: A)(f: A => Option[(A, B)]): List[B]

This is equivalent to the definition we used, but a bit more compact in terms of the interface it presents. We used a more explicit definition that makes the structure of the method clearer.
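
To see the equivalence, here is a sketch (with MyList standing in for List, and unfoldOption a name of my own choosing) showing that our unfold can be defined in terms of the compact version. The tuple order follows the signature above: the new seed first, then the element.

def unfoldOption[A, B](in: A)(f: A => Option[(A, B)]): MyList[B] =
  f(in) match {
    case None            => MyList.Empty()
    case Some((next, b)) => MyList.Pair(b, unfoldOption(next)(f))
  }

def unfold[A, B](seed: A)(stop: A => Boolean, f: A => B, next: A => A): MyList[B] =
  unfoldOption(seed)(a => if stop(a) then None else Some((next(a), f(a))))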

2.5 The Algebra of Algebraic Data Types

A question that sometimes comes up is where the “algebra” in algebraic data types comes from. I want to talk about this a little bit and show some of the algebraic manipulations that can be done on algebraic data types.

The term algebra is used in the sense of abstract algebra, an area of mathematics. Abstract algebra deals with algebraic structures. An algebraic structure consists of a set of values, operations on that set, and properties that those operations must maintain. An example is the set of integers, the operations addition and multiplication, and the familiar properties of these operations such as associativity, which says that a + (b+c) = (a+b) + c. The abstract in abstract algebra means that it doesn’t deal with concrete values like integers—that would be far too easy to understand—and instead with abstractions with wacky names like semigroup, monoid, and ring. The example of integers above is an instance of a ring. We’ll see a lot more of these soon enough!

Algebraic data types also correspond to the algebraic structure called a ring (strictly speaking a semiring, as there is no equivalent of subtraction for types). A ring has two operations, which are conventionally written + and ×. You’ll perhaps guess that these correspond to sum and product types respectively, and you’d be absolutely correct. What about the properties of these operations? Well, they are similar to what we know from basic algebra:

  1. + and × are associative, so a + (b + c) = (a + b) + c, and likewise for ×;
  2. + is commutative, meaning a + b = b + a;
  3. there is an identity element 0 for +, such that a + 0 = a;
  4. there is an identity element 1 for ×, such that a × 1 = a; and
  5. × distributes over +, so a × (b + c) = (a × b) + (a × c).

So far, so abstract. Let’s make it concrete by looking at actual examples in Scala.

Remember that algebraic data types work with types, so the operations + and × take types as parameters. So Int × String is equivalent to

final case class IntAndString(int: Int, string: String)

We can use tuples to avoid creating lots of names.

type IntAndString = (Int, String)

We can do the same thing for +. Int + String is

enum IntOrString {
  case IsInt(int: Int)
  case IsString(string: String)
}

or just

type IntOrString = Either[Int, String]

Exercise: Identities

Can you work out which Scala type corresponds to the identity 1 for product types?

It’s Unit, because pairing any type with Unit adds no further information. So, Int contains exactly as much information as Int × Unit (written as the tuple (Int, Unit) in Scala).

What about the Scala type corresponding to the identity 0 for sum types?

It’s Nothing, following the same reasoning as products: adding a case of Nothing to a sum adds no further information (and we cannot even create a value of this type).
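
We can make both identities concrete by writing the conversions in each direction (a sketch; the method names are my own). The product conversions are mutually inverse, and for the sum only one case can ever be constructed.

// Int is equivalent to Int × Unit
def toProduct(i: Int): (Int, Unit) = (i, ())
def fromProduct(p: (Int, Unit)): Int = p._1

// Int is equivalent to Int + Nothing: the Right case can never occur
def toSum(i: Int): Either[Int, Nothing] = Left(i)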

What about the distribution law? This allows us to manipulate algebraic data types to form equivalent, but perhaps more useful, representations. Consider this example of a user data type.

final case class Person(name: String, permissions: Permissions)
enum Permissions {
  case User
  case Moderator
}

Written in mathematical notation, this is

Person = String × Permissions
Permissions = User + Moderator

Performing substitution gets us

Person = String × (User + Moderator)

Applying distribution results in

Person = (String × User) + (String × Moderator)

which in Scala we can represent as

enum Person {
  case User(name: String)
  case Moderator(name: String)
}

Is this representation more useful? I can’t say without the context of where the data is being used. However I can say that knowing this manipulation is possible, and correct, is useful.

There is a lot more that could be said about algebraic data types, but at this point I feel we’re really getting into the weeds. I’ll finish up with a few pointers to other interesting facts:

  1. the algebra extends beyond sums and products: function types correspond to exponentials, with A => B playing the role of B^A; and
  2. algebraic data types have derivatives, which describe one-hole contexts into a data type [McBride 2001].

2.6 Conclusions

We have covered a lot of material in this chapter. Let’s recap the key points.

Algebraic data types allow us to express data types by combining existing data types with logical and and logical or. A logical and constructs a product type while a logical or constructs a sum type. Algebraic data types are the main way to represent data in Scala.

Structural recursion gives us a skeleton for transforming any given algebraic data type into any other type. Structural recursion can be abstracted into a fold method.

We use several reasoning principles to help us complete the problem-specific parts of a structural recursion:

  1. reasoning independently by case;
  2. assuming recursion is correct; and
  3. following the types.

Following the types is a very general strategy that can be used in many other situations.

Structural corecursion gives us a skeleton for creating any given algebraic data type from any other type. Structural corecursion can be abstracted into an unfold method. When reasoning about structural corecursion we can reason as we would for an imperative loop, or, if the input is an algebraic data type, use the principles for reasoning about structural recursion.

Notice that the two main themes of functional programming—composition and reasoning—are both already apparent. Algebraic data types are compositional: we compose algebraic data types using sum and product. We’ve seen many reasoning principles in this chapter.

I haven’t covered everything there is to know about algebraic data types; I think doing so would be a book in its own right. Below are some references that you might find useful if you want to dig in further, as well as some historical remarks.

Algebraic data types are standard in introductory material on functional programming. Structural recursion is certainly extremely common in functional programming, but strangely seems to rarely be explicitly defined as I’ve done here. I learned about both from How to Design Programs [Felleisen et al. 2018].

I’m not aware of any approachable yet thorough treatment of either algebraic data types or structural recursion. Both seem to have become assumed background of any researcher in the field of programming languages, and relatively recent work is caked in layers of mathematics and obtuse notation that I find difficult to read. The infamous Functional Programming with Bananas, Lenses, Envelopes and Barbed Wire [Meijer et al. 1991] is an example of such work. I suspect the core ideas of both date back to at least the emergence of computability theory in the 1930s, well before any digital computers existed.

The earliest reference I’ve found to structural recursion is Proving Properties of Programs by Structural Induction [Burstall 1969]. Algebraic data types don’t seem to have been fully developed, along with pattern matching, until NPL in 1977. NPL was quickly followed by the more influential language Hope, which spread the concept to other programming languages.

Corecursion is a bit better documented in the contemporary literature. How to Design Co-Programs [Gibbons 2021] covers the main ideas we have looked at here, while Gibbons and Jones [1998] discusses uses of unfold.

The Derivative of a Regular Type is its Type of One-Hole Contexts [McBride 2001] describes the derivative of algebraic data types.

3 Objects as Codata

In this chapter we will look at codata, the dual of algebraic data types. Algebraic data types focus on how things are constructed. Codata, in contrast, focuses on how things are used. We define codata by specifying the operations that can be performed on the type. This is very similar to the use of interfaces in object-oriented programming, and this is the first reason that we are interested in codata: codata puts object-oriented programming into a coherent conceptual framework with the other strategies we are discussing.

We’re not only interested in codata as a lens to view object-oriented programming. Codata also has properties that algebraic data does not. Codata allows us to create structures with an infinite number of elements, such as a list that never ends or a server loop that runs indefinitely. Codata has a different form of extensibility to algebraic data. Whereas we can easily write new functions that transform algebraic data, we cannot add new cases to the definition of an algebraic data type without changing the existing code. The reverse is true for codata. We can easily create new implementations of codata, but functions that transform codata are limited by the interface the codata defines.

In the previous chapter we saw structural recursion and structural corecursion as strategies to guide us in writing programs using algebraic data types. The same holds for codata. We can use codata forms of structural recursion and corecursion to guide us in writing programs that consume and produce codata respectively.

We’ll begin our exploration of codata by more precisely defining it and seeing some examples. We’ll then talk about representing codata in Scala, and the relationship to object-oriented programming. Once we can create codata, we’ll see how to work with it using structural recursion and corecursion, using an example of an infinite structure. Next we will look at transforming algebraic data to codata, and vice versa. We will finish by examining differences in extensibility.

A quick note about terminology before we proceed. We might expect to use the term algebraic codata for the dual of algebraic data, but conventionally just codata is used. I assume this is because data is usually understood to have a wider meaning than just algebraic data, but codata is not used outside of programming language theory. For simplicity and symmetry, within this chapter I’ll just use the term data to refer to algebraic data types.

3.1 Data and Codata

Data describes what things are, while codata describes what things can do.

We have seen that data is defined in terms of constructors producing elements of the data type. Let’s take a very simple example: a Bool is either True or False. We know we can represent this in Scala as

enum Bool {
  case True
  case False
}

The definition tells us there are two ways to construct an element of type Bool. Furthermore, if we have such an element we can tell exactly which case it is, by using a pattern match for example. Similarly, if the instances themselves hold data, as in List for example, we can always extract all the data within them. Again, we can use pattern matching to achieve this.

Codata, in contrast, is defined in terms of operations we can perform on the elements of the type. These operations are sometimes called destructors (which we’ve already encountered), observations, or eliminators. A common example of codata is a data structure such as a set. We might define the operations on a Set with elements of type A as:

  1. contains, which takes a Set[A] and an element of type A, and returns a Boolean indicating whether the set contains the element;
  2. insert, which takes a Set[A] and an element of type A, and returns a Set[A] containing all the elements of the original set and the new element; and
  3. union, which takes a Set[A] and another Set[A], and returns a Set[A] containing all the elements of both sets.

In Scala we could implement this definition as

trait Set[A] {
  
  /** True if this set contains the given element */
  def contains(elt: A): Boolean
  
  /** Construct a new set containing all elements in this set and the given element */
  def insert(elt: A): Set[A]
  
  /** Construct the union of this and that set */
  def union(that: Set[A]): Set[A]
}

This definition does not tell us anything about the internal representation of the elements in the set. It could use a hash table, a tree, or something more exotic. It does, however, tell us what we can do with the set. We know we can take the union but not the intersection, for example.

If you come from the object-oriented world you might recognize the description of codata above as programming to an interface. In some ways codata is just taking concepts from the object-oriented world and presenting them in a way that is consistent with the rest of the functional programming paradigm. However, this does not mean adopting all the features of object-oriented programming. We won’t use state, which is difficult to reason about. We won’t use implementation inheritance either, for the same reason. In our subset of object-oriented programming we’ll either be defining interfaces (which may have default implementations of some methods) or final classes that implement those interfaces. Interestingly, this subset of object-oriented programming is often recommended by advocates of object-oriented programming.

Let’s now be a little more precise in our definition of codata, which will make the duality between data and codata clearer. Remember the definition of data: it is defined in terms of sums (logical ors) and products (logical ands). We can transform any data into a sum of products. Each product in the sum is a constructor, and the product itself is the parameters that the constructor accepts. Finally, we can think of constructors as functions which take some arbitrary input and produce an element of data. Our end point is a sum of functions from arbitrary input to data.

More abstractly, if we are constructing an element of some data type A we call one of the constructors, which is a function with a type of the general form

(B, C, …) => A

Now we’ll turn to codata. Codata is defined as a product of functions, these functions being the destructors. The input to a destructor is always an element of the codata type and possibly some other parameters. The output is usually something that is not of the codata type. Thus constructing an element of some codata type A means defining a collection of functions with types of the general form

(A, B, …) => C

This hopefully makes the duality between the two clearer.

Now we understand what codata is, we will turn to representing codata in Scala.

3.2 Codata in Scala

We have already seen an example of codata, which I have repeated below.

trait Set[A] {
  
  def contains(elt: A): Boolean
  
  def insert(elt: A): Set[A]
  
  def union(that: Set[A]): Set[A]
}

The abstract definition of this, which is a product of functions, defines a Set with elements of type A as:

  1. contains, with type (Set[A], A) => Boolean;
  2. insert, with type (Set[A], A) => Set[A]; and
  3. union, with type (Set[A], Set[A]) => Set[A].

Notice that the first parameter of each function is the type we are defining, Set[A].

The translation to Scala is:

  1. the type itself becomes a trait; and
  2. each function becomes a method on that trait, where the first parameter (the Set[A]) becomes the hidden this parameter and the remaining parameters are as written.

This gives us the Scala representation we started with.

This is only half the story for codata. We also need to actually implement the interface we’ve just defined. There are three approaches we can use:

  1. a final subclass, in the case where we want to name the implementation;
  2. an anonymous subclass; or
  3. more rarely, an object.

Neither final nor anonymous subclasses can be further extended, meaning we cannot create deep inheritance hierarchies. This in turn avoids the difficulties that come from reasoning about deep hierarchies. Using a class rather than a case class means we don’t expose implementation details like constructor arguments.

Some examples are in order. Here’s a simple example of Set, which uses a List to hold the elements in the set.

final class ListSet[A](elements: List[A]) extends Set[A] {

  def contains(elt: A): Boolean =
    elements.contains(elt)

  def insert(elt: A): Set[A] =
    ListSet(elt :: elements)

  def union(that: Set[A]): Set[A] =
    elements.foldLeft(that) { (set, elt) => set.insert(elt) }
}
object ListSet {
  def empty[A]: Set[A] = ListSet(List.empty)
}

This uses the first implementation approach, a final subclass. Where would we use an anonymous subclass? They are most useful when implementing methods that return our codata type. Let’s take union as an example. It returns our codata type, Set, and we could implement it as shown below.

trait Set[A] {
  
  def contains(elt: A): Boolean
  
  def insert(elt: A): Set[A]
  
  def union(that: Set[A]): Set[A] = {
    val self = this
    new Set[A] {
      def contains(elt: A): Boolean =
        self.contains(elt) || that.contains(elt)
        
      def insert(elt: A): Set[A] =
        // Arbitrary choice to insert into self
        self.insert(elt).union(that)
    }
  }
}

This uses an anonymous subclass to implement union on the Set trait, and hence defines the method for all subclasses. I haven’t made the method final so that subclasses can override it with a more efficient implementation. This does open up the danger of implementation inheritance. This is an example of where theory and craft diverge. In theory we never want implementation inheritance, but in practice it can be useful as an optimization.

It can also be useful to implement utility methods defined purely in terms of the destructors. Let’s say we wanted to implement a method containsAll that checks if a Set contains all the elements in an Iterable collection.

def containsAll(elements: Iterable[A]): Boolean

We can implement this purely in terms of contains on Set and forall on Iterable.

trait Set[A] {
  
  def contains(elt: A): Boolean
  
  def insert(elt: A): Set[A]
  
  def union(that: Set[A]): Set[A]
  
  def containsAll(elements: Iterable[A]): Boolean =
    elements.forall(elt => this.contains(elt))
}

Once again we could make this a final method. In this case it’s probably more justified as it’s difficult to imagine a more efficient implementation.
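
Here’s a quick check (my own example), assuming ListSet from earlier extends this version of Set:

val small = ListSet.empty[Int].insert(1).insert(2)

small.containsAll(List(1, 2))
// true
small.containsAll(List(1, 2, 3))
// false: small does not contain 3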

Data and codata are both realized in Scala as variations of the same language features of classes and objects. This means we can define types that have properties of both data and codata. We have actually already done this. When we define data we must define names for the fields within the data, thus defining destructors. This is the same in most languages, which don’t make a hard distinction between data and codata.

Part of the appeal, I think, of classes and objects is that they can express so many conceptually different abstractions with the same language constructs. This gives them a surface appearance of simplicity; it seems we need to learn only one abstraction to solve a huge number of coding problems. However this apparent simplicity hides real complexity, as this variety of uses forces us to reverse engineer the conceptual intention from the code.

3.3 Structural Recursion and Corecursion for Codata

In this section we’ll build a library for streams, also known as lazy lists. These are the codata equivalent of lists. Whereas a list must have a finite length, streams have an infinite length. We’ll use this example to explore structural recursion and structural corecursion as applied to codata.

Let’s start by reviewing structural recursion and corecursion. The key idea is to use the input or output type, respectively, to drive the process of writing the method. We’ve already seen how this works with data, where we emphasized structural recursion. With codata it’s more often the case that structural corecursion is used. The steps for using structural corecursion are:

  1. recognize the output of the method or function is codata;
  2. write down the skeleton to construct an instance of the codata type, usually using an anonymous subclass; and
  3. fill in the methods, using strategies such as structural recursion or following the types to help.

It’s important that any computation takes place within the methods, and so only runs when the methods are called. Once we start creating streams the importance of this will become clear.

For structural recursion the steps are:

  1. recognize the input of the method or function is codata;
  2. note the codata’s destructors as possible sources of values in writing the method; and
  3. complete the method, using strategies such as following the types or structural corecursion and the methods identified above.

Our first step is to define our stream type. As this is codata, it is defined in terms of its destructors. The destructors that define a Stream of elements of type A are:

  1. a head of type A; and
  2. a tail of type Stream[A].

Note these are almost the destructors of List. We haven’t defined isEmpty as a destructor because our streams never end and thus this method would always return false. (A lot of real implementations, such as the LazyList in the Scala standard library, do define such a method which allows them to represent finite and infinite lists in the same structure. We’re not doing this for simplicity and because we want to work with codata in its purest form.)

We can translate this to Scala, as we’ve previously seen, giving us

trait Stream[A] {
  def head: A
  def tail: Stream[A]
}

Now we can create an instance of Stream. Let’s create a never-ending stream of ones. We will start with the skeleton below and apply strategies to complete the code.

val ones: Stream[Int] = ???

The first strategy is structural corecursion. We’re returning an instance of codata, so we can insert the skeleton to construct a Stream.

val ones: Stream[Int] =
  new Stream[Int] {
    def head: Int = ???
    def tail: Stream[Int] = ???
  }

Here I’ve used the anonymous subclass approach, so I can just write all the code in one place.

The next step is to fill in the method bodies. The first method, head, is trivial. The answer is 1 by definition.

val ones: Stream[Int] =
  new Stream[Int] {
    def head: Int = 1
    def tail: Stream[Int] = ???
  }

It’s not so obvious what to do with tail. We want to return a Stream[Int] so we could apply structural corecursion again.

val ones: Stream[Int] =
  new Stream[Int] {
    def head: Int = 1
    def tail: Stream[Int] =
      new Stream[Int] {
        def head: Int = 1
        def tail: Stream[Int] = ???
      }
  }

This approach doesn’t seem like it’s going to work. We’ll have to write this out an infinite number of times to correctly implement the method, which might be a problem.

Instead we can follow the types. We need to return a Stream[Int]. We have one in scope: ones. This is exactly the Stream we need to return: the infinite stream of ones!

val ones: Stream[Int] =
  new Stream[Int] {
    def head: Int = 1
    def tail: Stream[Int] = ones
  }

You might be alarmed to see the circular reference to ones in tail. This works because it is within a method, and so is only evaluated when that method is called. This delaying of evaluation is what allows us to represent an infinite number of elements, as we only ever evaluate a finite portion of them. This is a core difference from data, which is fully evaluated when it is constructed.

Let’s check that our definition of ones does indeed work. We can’t extract all the elements from an infinite Stream (at least, not in finite time) so in general we’ll have to resort to checking a finite sequence of elements.

ones.head
// res0: Int = 1
ones.tail.head
// res1: Int = 1
ones.tail.tail.head
// res2: Int = 1

This all looks correct. We’ll often want to check our implementation in this way, so let’s implement a method, take, to make this easier.

trait Stream[A] {
  def head: A
  def tail: Stream[A]
  
  def take(count: Int): List[A] =
    count match {
      case 0 => Nil
      case n => head :: tail.take(n - 1)
    }
}

We can use either the structural recursion or structural corecursion strategies for data to implement take. Since we’ve already covered these in detail I won’t go through them here. The important point is that take only uses the destructors when interacting with the Stream.

Now we can more easily check our implementations are correct.

ones.take(5)
// res4: List[Int] = List(1, 1, 1, 1, 1)

For our next task we’ll implement map. Implementing a method on Stream allows us to see both structural recursion and corecursion for codata in action. As usual we begin by writing out the method skeleton.

trait Stream[A] {
  def head: A
  def tail: Stream[A]
  
  def map[B](f: A => B): Stream[B] = 
    ???
}

Now we have a choice of strategy to use. Since we haven’t used structural recursion yet, let’s start with that. The input is codata, a Stream, and the structural recursion strategy tells us we should consider using the destructors. Let’s write them down as a reminder.

trait Stream[A] {
  def head: A
  def tail: Stream[A]
  
  def map[B](f: A => B): Stream[B] = {
    this.head ???
    this.tail ???
  }
}

To make progress we can follow the types or use structural corecursion. Let’s choose corecursion to see another example of it in use.

trait Stream[A] {
  def head: A
  def tail: Stream[A]
  
  def map[B](f: A => B): Stream[B] = {
    this.head ???
    this.tail ???
    
    new Stream[B] {
      def head: B = ???
      def tail: Stream[B] = ???
    }
  }
}

Now we’ve used structural recursion and structural corecursion, a bit of following the types is in order. This quickly arrives at the correct solution.

trait Stream[A] {
  def head: A
  def tail: Stream[A]
  
  def map[B](f: A => B): Stream[B] = {
    val self = this 
    new Stream[B] {
      def head: B = f(self.head)
      def tail: Stream[B] = self.tail.map(f)
    }
  }
}

There are two important points. Firstly, notice how I gave the name self to this. This is so I can access the value inside the new Stream we are creating, where this would be bound to the new Stream. Next, notice that we access self.head and self.tail inside the methods on the new Stream. This maintains the correct semantics of only performing computation when it has been asked for. If we performed the computation outside of the methods we would do it too early, which in some cases can lead to an infinite loop.
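
As a quick check (assuming ones and take are defined against this version of the trait):

ones.map(x => x + 1).take(3)
// List(2, 2, 2)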

As our final example, let’s return to constructing Stream, and implement the universal constructor unfold. We start with the skeleton for unfold, remembering the seed parameter.

trait Stream[A] {
  def head: A
  def tail: Stream[A]
}
object Stream {
  def unfold[A, B](seed: A): Stream[B] =
    ???
}

It’s natural to apply structural corecursion to make progress.

trait Stream[A] {
  def head: A
  def tail: Stream[A]
}
object Stream {
  def unfold[A, B](seed: A): Stream[B] =
    new Stream[B]{
      def head: B = ???
      def tail: Stream[B] = ???
    }
}

Now we can follow the types, adding parameters as we need them. This gives us the complete method shown below.

trait Stream[A] {
  def head: A
  def tail: Stream[A]
}
object Stream {
  def unfold[A, B](seed: A, f: A => B, next: A => A): Stream[B] =
    new Stream[B]{
      def head: B = 
        f(seed)
      def tail: Stream[B] = 
        unfold(next(seed), f, next)
    }
}

We can use this to implement some interesting streams. Here’s a stream that alternates between 1 and -1.

val alternating = Stream.unfold(
  true, 
  x => if x then 1 else -1, 
  x => !x
)

We can check it works.

alternating.take(5)
// res11: List[Int] = List(1, -1, 1, -1, 1)

Exercise: Stream Combinators

It’s time for you to get some practice with structural recursion and structural corecursion using codata. Implement filter, zip, and scanLeft on Stream. They have the same semantics as the same methods on List, and the signatures shown below.

trait Stream[A] {
  def head: A
  def tail: Stream[A]

  def filter(pred: A => Boolean): Stream[A]
  def zip[B](that: Stream[B]): Stream[(A, B)]
  def scanLeft[B](zero: B)(f: (B, A) => B): Stream[B]
}

For all of these methods I found that structural corecursion was the most natural way to tackle them. You could start with structural recursion, though.

You might be worried about the inefficiency of filter. That’s something we’ll discuss a bit later.

trait Stream[A] {
  def head: A
  def tail: Stream[A]

  def filter(pred: A => Boolean): Stream[A] = {
    val self = this
    new Stream[A] {
      def head: A = {
        def loop(stream: Stream[A]): A =
          if pred(stream.head) then stream.head
          else loop(stream.tail)
          
        loop(self)
      }
      
      def tail: Stream[A] = {
        def loop(stream: Stream[A]): Stream[A] =
          if pred(stream.head) then stream.tail.filter(pred)
          else loop(stream.tail)
          
        loop(self)
      }
    }
  }

  def zip[B](that: Stream[B]): Stream[(A, B)] = {
    val self = this 
    new Stream[(A, B)] {
      def head: (A, B) = (self.head, that.head)
      
      def tail: Stream[(A, B)] =
        self.tail.zip(that.tail)
    }
  }

  def scanLeft[B](zero: B)(f: (B, A) => B): Stream[B] = {
    val self = this
    new Stream[B] {
      def head: B = f(zero, self.head)
      
      def tail: Stream[B] =
        self.tail.scanLeft(this.head)(f)
    }
  }
}

We can do some neat things with the methods defined above. For example, here is the stream of natural numbers.

val naturals = ones.scanLeft(0)((b, a) => b + a)

As usual, we should check it works.

naturals.take(5)
// res15: List[Int] = List(1, 2, 3, 4, 5)

We could also define naturals using unfold. More interesting is defining it in terms of itself.

val naturals: Stream[Int] =
  new Stream {
    def head = 1
    def tail = naturals.map(_ + 1)
  }

This might be confusing. If so, spend a bit of time thinking about it. It really does work!

naturals.take(5)
// res17: List[Int] = List(1, 2, 3, 4, 5)

3.3.1 Efficiency and Effects

You may have noticed that our implementation recomputes values, possibly many times. A good example is the implementation of filter. This recalculates the head and tail on each call, which could be a very expensive operation.

def filter(pred: A => Boolean): Stream[A] = {
  val self = this
  new Stream[A] {
    def head: A = {
      def loop(stream: Stream[A]): A =
        if pred(stream.head) then stream.head
        else loop(stream.tail)
        
      loop(self)
    }
    
    def tail: Stream[A] = {
      def loop(stream: Stream[A]): Stream[A] =
        if pred(stream.head) then stream.tail.filter(pred)
        else loop(stream.tail)
        
      loop(self)
    }
  }
}

We know that delaying the computation until the method is called is important, because that is how we can handle infinite and self-referential data. However we don’t need to redo this computation on successive calls. We can instead cache the result from the first call and reuse it. Scala makes this easy with lazy val, which is a val that is not computed until it is first used. Additionally, Scala’s use of the uniform access principle means we can implement a method with no parameters using a lazy val. Here’s a quick example demonstrating it in use.

def always[A](elt: => A): Stream[A] =
  new Stream[A] {
    lazy val head: A = elt
    lazy val tail: Stream[A] = always(head)
  }
  
val twos = always(2)

As usual we should check our work.

twos.take(5)
// res18: List[Int] = List(2, 2, 2, 2, 2)

We get the same result whether we use a method or a lazy val, because we are assuming that we are only dealing with pure computations that have no dependency on state that might change. In this case a lazy val simply consumes additional space to save on time.

Recomputing a result every time it is needed is known as call by name, while caching the result the first time it is computed is known as call by need. These two different evaluation strategies can be applied to individual values, as we’ve done here, or across an entire language. Haskell, for example, uses call by need: all values in Haskell are only computed the first time they are needed. This approach is sometimes known as lazy evaluation. Another alternative, called call by value, computes results when they are defined instead of waiting until they are needed. This is the default in Scala.
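
The three strategies correspond directly to val, def, and parameterless lazy val in Scala. Here’s a minimal illustration (the names are my own):

val byValue: Int = { println("computing byValue"); 1 }
// prints immediately, when the definition is evaluated

def byName: Int = { println("computing byName"); 2 }
// prints every time byName is used

lazy val byNeed: Int = { println("computing byNeed"); 3 }
// prints once, the first time byNeed is used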

We can illustrate the difference between call by name and call by need if we use an impure computation. For example, we can define a stream of random numbers. Random number generators depend on some internal state.

Here’s the call by name implementation, using the methods we have already defined.

import scala.util.Random

val randoms: Stream[Double] = 
  Stream.unfold(Random, r => r.nextDouble(), r => r)

Notice that we get different results each time we take a section of the Stream, even though we might expect the same sequence of elements each time.

randoms.take(5)
// res19: List[Double] = List(
//   0.6334382512313489,
//   0.7396049793308253,
//   0.3922997735119247,
//   0.501891892971581,
//   0.6230535998247712
// )
randoms.take(5)
// res20: List[Double] = List(
//   0.2833341188592764,
//   0.6911714726693775,
//   0.8789224476608815,
//   0.27127277488637436,
//   0.46046234590888224
// )

Now let’s define the same stream in a call by need style, using lazy val.

val randomsByNeed: Stream[Double] =
  new Stream[Double] {
    lazy val head: Double = Random.nextDouble()
    lazy val tail: Stream[Double] = randomsByNeed
  }

This time we get the same result when we take a section, and each number is the same.

randomsByNeed.take(5)
// res21: List[Double] = List(
//   0.5198235372150013,
//   0.5198235372150013,
//   0.5198235372150013,
//   0.5198235372150013,
//   0.5198235372150013
// )
randomsByNeed.take(5)
// res22: List[Double] = List(
//   0.5198235372150013,
//   0.5198235372150013,
//   0.5198235372150013,
//   0.5198235372150013,
//   0.5198235372150013
// )

If we wanted a stream that had a different random number for each element but those numbers were constant, we could redefine unfold to use call by need.

def unfoldByNeed[A, B](seed: A, f: A => B, next: A => A): Stream[B] =
  new Stream[B]{
    lazy val head: B = 
      f(seed)
    lazy val tail: Stream[B] = 
      unfoldByNeed(next(seed), f, next)
  }

Now redefining randomsByNeed using unfoldByNeed gives us the result we are after. First, redefine it.

val randomsByNeed2 =
  unfoldByNeed(Random, r => r.nextDouble(), r => r)

Then check it works.

randomsByNeed2.take(5)
// res23: List[Double] = List(
//   0.2004412758108174,
//   0.6721532116774,
//   0.2960856144992955,
//   0.48769348667396584,
//   0.5995184771230712
// )
randomsByNeed2.take(5)
// res24: List[Double] = List(
//   0.2004412758108174,
//   0.6721532116774,
//   0.2960856144992955,
//   0.48769348667396584,
//   0.5995184771230712
// )

These subtleties are one of the reasons that functional programmers try to avoid using state as far as possible.

3.4 Relating Data and Codata

In this section we’ll explore the relationship between data and codata, and in particular converting one to the other. We’ll look at it in two ways: firstly a very surface-level relationship between the two, and then a deep connection via fold.

Remember that data is a sum of products, where the products are constructors and we can view constructors as functions. So we can view data as a sum of functions. Meanwhile, codata is a product of functions. We can easily make a direct correspondence between the functions-as-constructors and the functions in codata. What about the difference between the sum and the product that remains? Well, when we have a product of functions we only call one at any point in our code. So the logical or is in the choice of function to call.

Let’s see how this works with a familiar example of data, List. As an algebraic data type we can define

enum List[A] {
  case Pair(head: A, tail: List[A])
  case Empty()
}

The codata equivalent is

trait List[A] {
  def pair(head: A, tail: List[A]): List[A]
  def empty: List[A]
}

In the codata implementation we are explicitly representing the constructors as methods, and pushing the choice of constructor to the caller. In a few chapters we’ll see a use for this relationship, but for now we’ll leave it and move on.

The other way to view the relationship is a connection via fold. We’ve already learned how to derive the fold for any algebraic data type. For Bool, defined as

enum Bool {
  case True
  case False
}

the fold method is

enum Bool {
  case True
  case False
  
  def fold[A](t: A)(f: A): A =
    this match {
      case True => t
      case False => f
    }
}
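
As a quick check (my own example), fold selects between its two arguments.

Bool.True.fold("yes")("no")
// "yes"
Bool.False.fold("yes")("no")
// "no"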

We know that fold is universal: we can write any other method in terms of it. It therefore provides a universal destructor and is the key to treating data as codata. In this case the fold is something we use all the time, except we usually call it if.

Here’s the codata version of Bool, with fold renamed to if. (Note that Scala allows us to define methods with the same name as keywords, in this case if, but we have to surround them in backticks to use them.)

trait Bool {
  def `if`[A](t: A)(f: A): A
}

Now we can define the two instances of Bool purely as codata.

val True = new Bool {
  def `if`[A](t: A)(f: A): A = t
}

val False = new Bool {
  def `if`[A](t: A)(f: A): A = f
}

Let’s see this in use by defining and in terms of if, and then creating some examples. First the definition of and.

def and(l: Bool, r: Bool): Bool =
  new Bool {
    def `if`[A](t: A)(f: A): A =
      l.`if`(r)(False).`if`(t)(f)
  }

Now the examples. This is simple enough that we can try the entire truth table.

and(True, True).`if`("yes")("no")
// res1: String = "yes"
and(True, False).`if`("yes")("no")
// res2: String = "no"
and(False, True).`if`("yes")("no")
// res3: String = "no"
and(False, False).`if`("yes")("no")
// res4: String = "no"

Exercise: Or and Not

Test your understanding of Bool by implementing or and not in the same way we implemented and above.

We can follow the same structure as and.

def or(l: Bool, r: Bool): Bool =
  new Bool {
    def `if`[A](t: A)(f: A): A =
      l.`if`(True)(r).`if`(t)(f)
  }

def not(b: Bool): Bool =
  new Bool {
    def `if`[A](t: A)(f: A): A =
      b.`if`(False)(True).`if`(t)(f)
  }

Once again, we can test the entire truth table.

or(True, True).`if`("yes")("no")
// res5: String = "yes"
or(True, False).`if`("yes")("no")
// res6: String = "yes"
or(False, True).`if`("yes")("no")
// res7: String = "yes"
or(False, False).`if`("yes")("no")
// res8: String = "no"

not(True).`if`("yes")("no")
// res9: String = "no"
not(False).`if`("yes")("no")
// res10: String = "yes"

Notice that, once again, computation only happens on demand. In this case, nothing happens until if is actually called. Until that point we’re just building up a representation of what we want to happen. This again points to how codata can handle infinite data, by only computing the finite amount required by the actual computation.

The rules here for converting from data to codata are:

  1. On the interface (trait) defining the codata, define a method with the same signature as fold.
  2. Define an implementation of the interface for each product case in the data. The data’s constructor arguments become constructor arguments on the codata classes. If there are no constructor arguments, as in Bool, we can define values instead of classes.
  3. Each implementation implements the case of fold that it corresponds to.

Let’s apply this to a slightly more complex example: List. We’ll start by defining it as data and implementing fold. I’ve chosen to implement foldRight but foldLeft would be just as good.

enum List[A] {
  case Pair(head: A, tail: List[A])
  case Empty()
  
  def foldRight[B](empty: B)(f: (A, B) => B): B =
    this match { 
      case Pair(head, tail) => f(head, tail.foldRight(empty)(f))
      case Empty() => empty
    }
}

Now let’s implement it as codata. We start by defining the interface with the fold method. In this case I’m calling it foldRight as it’s going to exactly mirror the foldRight we just defined.

trait List[A] {
  def foldRight[B](empty: B)(f: (A, B) => B): B
}

Now we define the implementations. There is one for Pair and one for Empty, which are the two cases in the data definition of List. Notice that in this case the classes have constructor arguments, which correspond to the constructor arguments on the corresponding product types.

final class Pair[A](head: A, tail: List[A]) extends List[A] {
  def foldRight[B](empty: B)(f: (A, B) => B): B =
    ???
}

final class Empty[A]() extends List[A] {
  def foldRight[B](empty: B)(f: (A, B) => B): B =
    ???
}

I didn’t implement the bodies of foldRight so I could show this as a separate step. The implementation here directly mirrors foldRight on the data implementation, and we can use the same strategies to implement the codata equivalents. That is to say, we can use the recursion rule, reasoning by case, and following the types. I’m going to skip these details as we’ve already gone through them in depth. The final code is shown below.

final class Pair[A](head: A, tail: List[A]) extends List[A] {
  def foldRight[B](empty: B)(f: (A, B) => B): B =
    f(head, tail.foldRight(empty)(f))
}

final class Empty[A]() extends List[A] {
  def foldRight[B](empty: B)(f: (A, B) => B): B =
    empty
}

This code is almost the same as the dynamic dispatch implementation, which again shows the relationship between codata and object-oriented code.

The transformation from data to codata goes under several names: refunctionalization, Church encoding, and Böhm-Berarducci encoding. The latter two terms specifically refer to transformations into the untyped and typed lambda calculus respectively. The lambda calculus is a simple model of a programming language that contains only functions. We’re going to take a quick detour to show that we can, indeed, encode lists using just functions. This demonstrates that objects and functions have equivalent power.

The starting point is creating a type alias List, which defines a list as its fold. Inspect the type signature and you’ll see it is the same as foldRight above.

type List[A, B] = (B, (A, B) => B) => B

Now we can define Pair and Empty as functions. These definitions use polymorphic function types, which are new in Scala 3. The first parameter list contains the constructor arguments, and the second parameter list the parameters for foldRight.

val Empty: [A, B] => () => List[A, B] = 
  [A, B] => () => (empty, f) => empty

val Pair: [A, B] => (A, List[A, B]) => List[A, B] =
  [A, B] => (head: A, tail: List[A, B]) => (empty, f) => 
    f(head, tail(empty, f))

Finally, let’s see an example to show it working. We will first define the list containing 1, 2, 3. Due to a restriction on polymorphic function types, I have to add a useless empty parameter list.

val list: [B] => () => List[Int, B] = 
  [B] => () => Pair(1, Pair(2, Pair(3, Empty())))

Now we can compute the sum and product of the elements in this list.

val sum = list()(0, (a, b) => a + b)
// sum: Int = 6
val product = list()(1, (a, b) => a * b)
// product: Int = 6

It works!

The purpose of this little demonstration is to show that functions are just objects (in the codata sense) with a single method. Scala makes this apparent, as functions are objects with an apply method.
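To make this concrete, note that a function literal in Scala is essentially shorthand for an object with an apply method. The two definitions below behave identically.

val inc: Int => Int = x => x + 1

// The function literal above is, in essence, this object
val incObj = new Function1[Int, Int] {
  def apply(x: Int): Int = x + 1
}

inc(1)    // 2
incObj(1) // 2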

We’ve seen that data can be translated to codata. The reverse is also possible: we simply tabulate the results of each possible method call. In other words, the data representation is a memoisation of the codata: a lookup table or a cache.
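As a small sketch of this direction, consider Bool: its `if` method is completely determined by which branch it selects, so a two-case enum tabulates it. The name BoolData is my own, for illustration.

// A hypothetical data representation of the codata Bool
enum BoolData {
  case True
  case False
}

// Tabulate a codata Bool by recording which branch `if` selects
def toData(b: Bool): BoolData =
  b.`if`(BoolData.True)(BoolData.False)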

Although we can convert data to codata and vice versa, there are good reasons to choose one over the other. We’ve already seen one reason: with codata we can represent infinite structures. In this next section we’ll see another difference: the extensibility that data and codata permit.

3.5 Data and Codata Extensibility

We have seen that codata can represent types with an infinite number of elements, such as Stream. This is one expressive difference from data, which must always be finite. We’ll now look at another, which is the type of extensibility we get from data and from codata. Together these give us guidelines for choosing between the two.

Firstly, let’s define extensibility. It means the ability to add new features without modifying existing code. (If we allow modification of existing code then any extension becomes trivial.) In particular there are two dimensions along which we can extend code: adding new functions or adding new elements. We will see that data and codata have orthogonal extensibility: it’s easy to add new functions to data but adding new elements is impossible without modifying existing code, while adding new elements to codata is straightforward but adding new functions is not.

Let’s start with a concrete example of both data and codata. For data we’ll use the familiar List type.

enum List[A] {
  case Empty()
  case Pair(head: A, tail: List[A])
}

For codata, we’ll use Set as our exemplar.

trait Set[A] {
  def contains(elt: A): Boolean
  def insert(elt: A): Set[A]
  def union(that: Set[A]): Set[A]
}

We know there are lots of methods we can define on List. The standard library is full of them! We also know that any method we care to write can be written using structural recursion. Finally, we can write these methods without modifying existing code.

Imagine filter was not defined on List. We can easily implement it as

import List.*

def filter[A](list: List[A], pred: A => Boolean): List[A] = 
  list match {
    case Empty() => Empty()
    case Pair(head, tail) => 
      if pred(head) then Pair(head, filter(tail, pred))
      else filter(tail, pred)
  }

We could even use an extension method to make it appear as a normal method.

extension [A](list: List[A]) {
  def filter(pred: A => Boolean): List[A] = 
    list match {
      case Empty() => Empty()
      case Pair(head, tail) => 
        if pred(head) then Pair(head, tail.filter(pred))
        else tail.filter(pred)
    }
}

This shows we can add new functions to data without issue.

What about adding new elements to data? Perhaps we want to add a special case to optimize single-element lists. This is impossible without changing existing code. By definition, we cannot add a new element to an enum without changing the enum. Adding such a new element would break all existing pattern matches, and so require they all change. So in summary we can add new functions to data, but not new elements.

Now let’s look at codata. This has the opposite extensibility; duality strikes again! In the codata case we can easily add new elements. We simply implement the trait that defines the codata interface. We saw this when we defined, for example, ListSet.

final class ListSet[A](elements: List[A]) extends Set[A] {

  def contains(elt: A): Boolean =
    elements.contains(elt)

  def insert(elt: A): Set[A] =
    ListSet(elt :: elements)

  def union(that: Set[A]): Set[A] =
    elements.foldLeft(that) { (set, elt) => set.insert(elt) }
}
object ListSet {
  def empty[A]: Set[A] = ListSet(List.empty)
}

What about adding new functionality? If the functionality can be defined in terms of existing functionality then we’re ok. We can easily define this functionality, and we can use the extension method trick to make it appear like a built-in, as the sketch after this paragraph shows. However, if we want to define a function that cannot be expressed in terms of existing functions we are out of luck. Let’s say we want to define some kind of iterator over the elements of a Set. We might use a LazyList, the standard library’s equivalent of the Stream we defined earlier, because we know some sets have an infinite number of elements. Well, we can’t do this without changing the definition of Set, which in turn breaks all existing implementations. We cannot define it in a different way because we don’t know all the possible implementations of Set.
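For example, here is a hypothetical containsAll method, defined entirely in terms of the existing contains method and added as an extension method.

// New functionality expressed purely via the existing Set
// interface, so no change to Set itself is required
extension [A](set: Set[A]) {
  def containsAll(elements: A*): Boolean =
    elements.forall(set.contains)
}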

So in summary we can add new elements to codata, but not new functions.

If we tabulate this we clearly see that data and codata have orthogonal extensibility.

Extension      Data  Codata
Add elements   No    Yes
Add functions  Yes   No

This difference in extensibility gives us another rule for choosing between data and codata as an implementation strategy, in addition to the finite vs infinite distinction we saw earlier. If we want extensibility of functions but not elements we should use data. If we have a fixed interface but an unknown number of possible implementations we should use codata.

You might wonder if we can have both forms of extensibility. Achieving this is called the expression problem. There are various ways to solve the expression problem, and we’ll see one that works particularly well in Scala in a later chapter.

3.6 Exercise: Sets

In this extended exercise we’ll explore the Set interface we have already used in several examples, reproduced below.

trait Set[A] {
  
  /** True if this set contains the given element */
  def contains(elt: A): Boolean
  
  /** Construct a new set containing the given element */
  def insert(elt: A): Set[A]
  
  /** Construct the union of this and that set */
  def union(that: Set[A]): Set[A]
}

We also saw a simple implementation, storing the elements in the set in a List.

final class ListSet[A](elements: List[A]) extends Set[A] {

  def contains(elt: A): Boolean =
    elements.contains(elt)

  def insert(elt: A): Set[A] =
    ListSet(elt :: elements)

  def union(that: Set[A]): Set[A] =
    elements.foldLeft(that) { (set, elt) => set.insert(elt) }
}
object ListSet {
  def empty[A]: Set[A] = ListSet(List.empty)
}

The implementation for union is a bit unsatisfactory; it doesn’t use any of our strategies for writing code. We can implement both union and insert in a generic way that works for all sets (in other words, is implemented on the Set trait) and uses the strategies we’ve seen in this chapter. Go ahead and do this.

I used structural corecursion to implement these methods. I decided to name the subclasses, as I think it’s a little bit clearer what’s going on in this case.

trait Set[A] {
  
  def contains(elt: A): Boolean
  
  def insert(elt: A): Set[A] =
    InsertOneSet(elt, this)
  
  def union(that: Set[A]): Set[A] =
    UnionSet(this, that)
}

final class InsertOneSet[A](element: A, source: Set[A]) 
    extends Set[A] {

  def contains(elt: A): Boolean =
    elt == element || source.contains(elt)
}

final class UnionSet[A](first: Set[A], second: Set[A])
    extends Set[A] {

  def contains(elt: A): Boolean =
    first.contains(elt) || second.contains(elt)
}
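To try out these generic methods we need at least one concrete implementation providing contains. Here is a minimal sketch; the name EmptySet is my own.

// A hypothetical base case: the set containing no elements
final class EmptySet[A]() extends Set[A] {
  def contains(elt: A): Boolean = false
}

val set = EmptySet[Int]().insert(1).insert(2)

set.contains(1)
// true
set.contains(3)
// false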

Your next challenge is to implement Evens, the set of all even integers, which we’ll represent as a Set[Int]. This is an infinite set; we cannot directly enumerate all the elements in this set. (We actually could enumerate all the even elements that are 32-bit Ints, but we don’t want to as this would use excessive amounts of space.)

I implemented Evens using an object. This is possible because all possible instances of this set are the same, so we only need one instance.

object Evens extends Set[Int] {

  def contains(elt: Int): Boolean =
    (elt % 2 == 0)
}

It turns out, perhaps surprisingly, that this works. Let’s define a few sets using Evens and ListSet.

val evensAndOne = Evens.insert(1)
val evensAndOthers = 
  Evens.union(ListSet.empty.insert(1).insert(3))

Now show that they work as expected.

evensAndOne.contains(1)
// res1: Boolean = true
evensAndOthers.contains(1)
// res2: Boolean = true
evensAndOne.contains(2)
// res3: Boolean = true
evensAndOthers.contains(2)
// res4: Boolean = true
evensAndOne.contains(3)
// res5: Boolean = false
evensAndOthers.contains(3)
// res6: Boolean = true

We can generalize this idea to defining sets in terms of indicator functions. An indicator function is a function of type A => Boolean that returns true if the input belongs to the set. Implement IndicatorSet, which is constructed with a single indicator function parameter.

final class IndicatorSet[A](indicator: A => Boolean)
    extends Set[A] {

  def contains(elt: A): Boolean =
    indicator(elt)
}

To test this, let’s define the infinite set of odd integers.

val odds = IndicatorSet[Int](_ % 2 == 1)

Now we’ll show it works as expected.

odds.contains(1)
// res7: Boolean = true
odds.contains(2)
// res8: Boolean = false
odds.contains(3)
// res9: Boolean = true

Taking the union of even and odd integers gives us a set that contains all integers.

val integers = Evens.union(odds)

It has the expected behaviour.

integers.contains(1)
// res10: Boolean = true
integers.contains(2)
// res11: Boolean = true
integers.contains(3)
// res12: Boolean = true

3.7 Conclusions

In this chapter we’ve explored codata, the dual of data. Codata is defined by its interface—what we can do with it—as opposed to data, which is defined by what it is. More formally, codata is a product of destructors, where destructors are functions from the codata type (and, optionally, some other inputs) to some type. By avoiding the elements of object-oriented programming that make it hard to reason about—state and implementation inheritance—codata brings elements of object-oriented programming that accord with the other functional programming strategies. In Scala we define codata as a trait, and implement it as a final class, anonymous subclass, or an object.

We have two strategies for implementing methods using codata: structural corecursion, which we can use when the result is codata, and structural recursion, which we can use when an input is codata. Structural corecursion is usually the more useful of the two, as it gives more structure (pun intended) to the method we are implementing. The reverse is true for data.

We saw that data is connected to codata via fold: any data can instead be implemented as codata with a single destructor that is the fold for that data. The reverse is also true: we can enumerate all potential pairs of inputs and outputs of destructors to represent codata as data. However this does not mean that data and codata are equivalent. We have seen many examples of codata representing infinite structures, such as sets of all even numbers and streams of all natural numbers. We have also seen that data and codata offer different forms of extensibility: data makes it easy to add new functions, but adding new elements requires changing existing code, while codata makes it easy to add new elements, but adding new functions requires changing existing code.

The earliest reference I could find to codata in programming languages is Hagino [1989]. This is much more recent than algebraic data, which I think explains why codata is relatively unknown. There are some excellent recent papers that deal with codata. I highly recommend Codata in Action [Downen et al. 2019], which inspired large portions of this chapter. Exploring Codata: The Relation to Object-Orientation [Sullivan 2019] is also worthwhile. How to Add Laziness to a Strict Language Without Even Being Odd [Wadler et al. 1998] is an older paper that discusses the implementation of streams, and in particular the difference between a not-quite-lazy-enough implementation they label odd and the version we saw, which they call even. These correspond to Stream and LazyList in the Scala standard library respectively. Classical (Co)Recursion: Programming [Downen and Ariola 2021] is an interesting survey of corecursion in different languages, and covers many of the same examples that I used here. Finally, if you really want to get into the weeds of the relationship between data and codata, Beyond Church encoding: Boehm-Berarducci isomorphism of algebraic data types and polymorphic lambda-terms [Kiselyov 2005] is for you.

4 Contextual Abstraction

All but the simplest programs depend on the context in which they run. The number of available CPU cores is an example of context provided by the computer, which a program might adapt to by changing how work is distributed. Other forms of context include configuration read from files and environment variables, and (we’ll see a lot of this later) values created at compile-time, such as serialization formats, in response to the types of method parameters.

Scala is one of the few languages that provides features for contextual abstraction, known as implicits in Scala 2 or given instances in Scala 3. In Scala these features are intimately related to types; types are used to select between different available given instances and drive construction of given instances at compile-time.

Most Scala programmers are less confident with the features for contextual abstraction than with other parts of the language, and they are often entirely novel to programmers coming from other languages. Hence this chapter will start by reviewing the abstractions formerly known as implicits: given instances and using clauses. We will then look at one of their major uses, type classes3. Type classes allow us to extend existing types with new functionality, without using traditional inheritance, and without altering the original source code. Type classes are the core of Cats, which we will be exploring in the next part of this book.

4.1 The Mechanics of Contextual Abstraction

In this section we’ll go through the main Scala language features for contextual abstraction. Once we have a firm understanding of the mechanics of contextual abstraction we’ll move on to their use.

The language features for contextual abstraction have changed name from Scala 2 to Scala 3, but they work in largely the same way. In the table below I show the Scala 3 features, and their Scala 2 equivalents. If you use Scala 2 you’ll find that most of the code works simply by replacing given with implicit val and using with implicit.

Scala 3         Scala 2
given instance  implicit value
using clause    implicit parameter

Let’s now explain how these language features work.

4.1.1 Using Clauses

We’ll start with using clauses. A using clause is a method parameter list that starts with the using keyword. We use the term context parameters for the parameters in a using clause.

def double(using x: Int) = x + x

The using keyword applies to all parameters in the list, so in add below both x and y are context parameters.

def add(using x: Int, y: Int) = x + y

We can have normal parameter lists, and multiple using clauses, in the same method.

def addAll(x: Int)(using y: Int)(using z: Int): Int =
  x + y + z

We cannot pass parameters to a using clause in the normal way. We must precede the parameters with the using keyword as shown below.

double(using 1)
// res0: Int = 2
add(using 1, 2)
// res1: Int = 3
addAll(1)(using 2)(using 3)
// res2: Int = 6

However this is not the typical way to pass parameters. In fact we don’t usually explicitly pass parameters to a using clause at all. We usually use given instances instead, so let’s turn to them.

4.1.2 Given Instances

A given instance is a value that is defined with the given keyword. Here’s a simple example.

given theMagicNumber: Int = 3

We can use a given instance like a normal value.

theMagicNumber * 2

However, it’s more common to use them with a using clause. When we call a method that has a using clause, and we do not explicitly supply values for the context parameters, the compiler will look for given instances of the required type. If it finds a given instance it will automatically use it to complete the method call.

For example, we defined double above with a single Int context parameter. The given instance we just defined, theMagicNumber, also has type Int. So if we call double without providing any value for the context parameter the compiler will provide the value theMagicNumber for us.

double
// res4: Int = 6

The same given instance will be used for multiple parameters in a using clause with the same type, as in add defined above.

add
// res5: Int = 6

The above are the most important points for using clauses and given instances. We’ll now turn to some of the details of their semantics.

4.1.3 Given Scope and Imports

Given instances are usually not explicitly passed to using clauses. Their whole reason for existence is to get the compiler to do this for us. This could make code hard to understand, so we need to be very clear about which given instances are candidates to be supplied to a using clause. In this section we’ll look at the given scope, which is all the places that the compiler will look for given instances, and the special syntax for importing given instances.

The first rule we should know about the given scope is that it starts at the call site, where the method with a using clause is called, not at the definition site where the method is defined. This means the following code does not compile, because the given instance is not in scope at the call site, even though it is in scope at the definition site.

object A {
  given a: Int = 1
  def whichInt(using int: Int): Int = int
}

A.whichInt
// error:
// No given instance of type Int was found for parameter int of method whichInt in object A
// A.whichInt
//   ^^^^^^^^

The second rule, which we have been relying on in all our examples so far, is that the given scope includes the lexical scope at the call site. The lexical scope is where we usually look up the values associated with names (like the names of method parameters or val declarations). This means the following code works, as a is defined in a scope that includes the call site.

object A {
  given a: Int = 1
  
  object B {
    C.whichInt 
  }
  
  object C {
    def whichInt(using int: Int): Int = int
  }
}

However, if there are multiple given instances in the same scope the compiler will not arbitrarily choose one. Instead it fails with an error telling us the choice is ambiguous.

object A {
  given a: Int = 1
  given b: Int = 2
    
  def whichInt(using int: Int): Int = int
    
  whichInt
}
// error:
// Ambiguous given instances: both given instance a in object A and
// given instance b in object A match type Int of parameter int of 
// method whichInt in object A

We can import given instances from other scopes, just like we can import normal declarations, but we must explicitly say we want to import given instances. The following code does not work because we have not explicitly imported the given instances.

object A {
  given a: Int = 1

  def whichInt(using int: Int): Int = int
}
object B {
  import A.*
    
  whichInt
}
// error:
// No given instance of type Int was found for parameter int of method whichInt in object A
// 
// Note: given instance a in object A was not considered because it was not imported with `import given`.
//   whichInt
//           ^

It works when we do explicitly import them using import A.given.

object A {
  given a: Int = 1

  def whichInt(using int: Int): Int = int
}
object B {
  import A.{given, *}
    
  whichInt
}

One final wrinkle: the given scope includes the companion objects of any type involved in the type of the using clause. This is best illustrated with an example. We’ll start by defining a type Sound that represents the sound made by its type variable A, and a method soundOf to access that sound.

trait Sound[A] {
  def sound: String
}

def soundOf[A](using s: Sound[A]): String =
  s.sound

Now we’ll define some given instances. Notice that they are defined on the relevant companion objects.

trait Cat
object Cat {
  given catSound: Sound[Cat] =
    new Sound[Cat]{
      def sound: String = "meow"
    }
}

trait Dog
object Dog {
  given dogSound: Sound[Dog] = 
    new Sound[Dog]{
      def sound: String = "woof"
    }
}

When we call soundOf we don’t have to explicitly bring the instances into scope. They are automatically in the given scope by virtue of being defined on the companion objects of the types we use (Cat and Dog). If we had defined these instances on the Sound companion object they would also be in the given scope; when looking for a Sound[A] both the companion objects of Sound and A are in scope.

soundOf[Cat]
// res12: String = "meow"
soundOf[Dog]
// res13: String = "woof"

We should almost always be defining given instances on companion objects. This simple organization scheme means that users do not have to explicitly import them but can easily find the implementations if they wish to inspect them.

4.1.3.1 Given Instance Priority

Notice that given instance selection is based entirely on types. We don’t even pass any values to soundOf! This means given instances are easiest to use when there is only one instance for each type. In this case we can just put the instances on a relevant companion object and everything works out.

However, this is not always possible (though it’s often an indication of a bad design if it is not). For cases where we need multiple instances for a type, we can use the instance priority rules to select between them. We’ll look at the three most important rules below.

The first rule is that explicitly passing an instance takes priority over everything else.

given a: Int = 1
def whichInt(using int: Int): Int = int
whichInt(using 2)
// res15: Int = 2

The second rule is that instances in the lexical scope take priority over instances in a companion object.

trait Sound[A] {
  def sound: String
}
trait Cat
object Cat {
  given catSound: Sound[Cat] =
    new Sound[Cat]{
      def sound: String = "meow"
    }
}

def soundOf[A](using s: Sound[A]): String =
  s.sound
given purr: Sound[Cat]  =
  new Sound[Cat]{
    def sound: String = "purr"
  }

soundOf[Cat]
// res17: String = "purr"

The final rule is that instances in a closer lexical scope take precedence over those further away.

{
  given growl: Sound[Cat] =
   new Sound[Cat]{
     def sound: String = "growl"
   }
   
  {
    given mew: Sound[Cat] =
     new Sound[Cat]{
       def sound: String = "mew"
     }
     
    soundOf[Cat]
  }
}
// res18: String = "mew"

We’ve now seen most of the details of how given instances and using clauses work. This is a craft-level explanation, and it naturally leads to the question: where would we use these tools? This is what we’ll address next, where we look at type classes and their implementation in Scala.

4.2 Anatomy of a Type Class

Let’s now look at how type classes are implemented. There are three important components to a type class: the type class itself, which defines an interface, type class instances, which implement the type class for particular types, and the methods that use type classes. The table below shows the language features that correspond to each component.

Type Class Concept   Language Feature
Type class           trait
Type class instance  given instance
Type class use       using clause

Let’s see how this works in detail.

4.2.1 The Type Class

A type class is an interface or API that represents some functionality we want implemented. In Scala a type class is represented by a trait with at least one type parameter. For example, we can represent generic “serialize to JSON” behaviour as follows:

// Define a very simple JSON AST
sealed trait Json
final case class JsObject(get: Map[String, Json]) extends Json
final case class JsString(get: String) extends Json
final case class JsNumber(get: Double) extends Json
case object JsNull extends Json

// The "serialize to JSON" behaviour is encoded in this trait
trait JsonWriter[A] {
  def write(value: A): Json
}

JsonWriter is our type class in this example, with Json and its subtypes providing supporting code. When we come to implement instances of JsonWriter, the type parameter A will be the concrete type of data we are writing.

4.2.2 Type Class Instances

The instances of a type class provide implementations of the type class for specific types we care about, which can include types from the Scala standard library and types from our domain model.

In Scala we create type class instances by defining given instances implementing the type class.

object JsonWriterInstances {
  given stringWriter: JsonWriter[String] =
    new JsonWriter[String] {
      def write(value: String): Json =
        JsString(value)
    }
  
  final case class Person(name: String, email: String)
  
  given JsonWriter[Person] with
    def write(value: Person): Json =
      JsObject(Map(
        "name" -> JsString(value.name),
        "email" -> JsString(value.email)
      ))
  
  // etc...
}

In this example we define two type class instances of JsonWriter, one for String and one for Person. The definition for String uses the syntax we saw in the previous section. The definition for Person uses two bits of syntax that are new in Scala 3. Firstly, writing given JsonWriter[Person] creates an anonymous given instance. We declare just the type and don’t need to name the instance. This is fine because we don’t usually need to refer to given instances by name. The second bit of syntax is the use of with to implement a trait directly without having to write out new JsonWriter[Person] and so on.

In a real implementation we’d usually want to define the instances on a companion object: the instance for String on the JsonWriter companion object (because we cannot define it on the String companion object) and the instance for Person on the Person companion object. I haven’t done this here because I would need to redeclare JsonWriter, as a type and its companion object must be declared together.

4.2.3 Type Class Use

A type class use is any functionality that requires a type class instance to work. In Scala this means any method that accepts instances of the type class as part of a using clause.

We’re going to look at two patterns of type class usage, which we call interface objects and interface syntax. You’ll find these in Cats and other libraries.

4.2.3.1 Interface Objects

The simplest way of creating an interface that uses a type class is to place methods in a singleton object:

object Json {
  def toJson[A](value: A)(using w: JsonWriter[A]): Json =
    w.write(value)
}

To use this object, we import any type class instances we care about and call the relevant method:

import JsonWriterInstances.{*, given}
Json.toJson(Person("Dave", "dave@example.com"))
// res1: Json = JsObject(
//   get = Map(
//     "name" -> JsString(get = "Dave"),
//     "email" -> JsString(get = "dave@example.com")
//   )
// )

The compiler spots that we’ve called the toJson method without providing the given instances. It tries to fix this by searching for given instances of the relevant types and inserting them at the call site.

4.2.3.2 Interface Syntax

We can alternatively use extension methods to extend existing types with interface methods4. This is sometimes referred to as syntax for the type class, which is the term used by Cats. Scala 2 has an equivalent to extension methods known as implicit classes.

Here’s an example defining an extension method to add a toJson method to any type for which we have a JsonWriter instance.

object JsonSyntax {
  extension [A](value: A) {
    def toJson(using w: JsonWriter[A]): Json =
      w.write(value)
  }
}

We use interface syntax by importing it alongside the instances for the types we need:

import JsonWriterInstances.given
import JsonSyntax.*
Person("Dave", "dave@example.com").toJson
// res2: Json = JsObject(
//   get = Map(
//     "name" -> JsString(get = "Dave"),
//     "email" -> JsString(get = "dave@example.com")
//   )
// )
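For comparison, here is a sketch of the Scala 2 equivalent using an implicit class; the name JsonWriterOps is my own.

object JsonSyntax {
  // In Scala 2 an implicit class plays the role of an extension method
  implicit class JsonWriterOps[A](value: A) {
    def toJson(implicit w: JsonWriter[A]): Json =
      w.write(value)
  }
}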

Extension Methods on Traits

In Scala 3 we can define extension methods directly on a type class trait. Since we’re defining toJson as just calling write on JsonWriter, we could instead define toJson directly on JsonWriter and avoid creating a separate extension method.

trait JsonWriter[A] {
  extension (value: A) def toJson: Json
}

object JsonWriter {
  given stringWriter: JsonWriter[String] =
    new JsonWriter[String] {
      extension (value: String) 
        def toJson: Json = JsString(value)
    }
  
  // etc...
}

We do not advocate this approach, because of a limitation in how Scala searches for extension methods. The following code fails because Scala only looks within the String companion object for extension methods, and consequently does not find the extension method on the instance in the JsonWriter companion object.

"A string".toJson
// error:
// value toJson is not a member of String
// "A string".toJson
// ^^^^^^^^^^^^^^^^^

This means that users will have to explicitly import at least the instances for the built-in types (for which we cannot modify the companion objects).

import JsonWriter.given

"A string".toJson
// res5: Json = JsString(get = "A string")

For consistency we recommend separating the syntax from the type class instances and always explicitly importing it, rather than requiring explicit imports for only some extension methods.

4.2.3.3 The summon Method

The Scala standard library provides a generic type class interface called summon. Its definition is very simple:

def summon[A](using value: A): A =
  value

We can use summon to summon any value in the given scope. We provide the type we want and summon does the rest:

summon[JsonWriter[String]]
// res6: JsonWriter[String] = repl.MdocSession$MdocApp3$JsonWriter$$anon$7@4cb4a968

Most type classes in Cats provide other means to summon instances. However, summon is a good fallback for debugging purposes. We can insert a call to summon within the general flow of our code to check that the compiler can find an instance of a type class and that there are no ambiguity errors.
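For example, assuming the instances above are in scope, the following line compiles only if the compiler can find the requested instance, making it a cheap compile-time check.

// Fails to compile if no JsonWriter[Person] instance is available
summon[JsonWriter[Person]]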

4.3 Type Class Composition

So far we’ve seen type classes as a way to get the compiler to pass values to methods. This is nice but it does seem like we’ve introduced a lot of new concepts for a small gain. The real power of type classes lies in the compiler’s ability to combine given instances to construct new given instances. This is known as type class composition.

Type class composition works by a feature of given instances we have not yet seen: given instances can themselves have context parameters. However, before we go into this let’s see a motivational example.

Consider defining a JsonWriter for Option. We would need a JsonWriter[Option[A]] for every A we care about in our application. We could try to brute force the problem by creating a library of given instances:

given optionIntWriter: JsonWriter[Option[Int]] =
  ???

given optionPersonWriter: JsonWriter[Option[Person]] =
  ???

// and so on...

However, this approach clearly doesn’t scale. We end up requiring two given instances for every type A in our application: one for A and one for Option[A].

Fortunately, we can abstract the code for handling Option[A] into a common constructor based on the instance for A. We write this as a given instance that itself takes a context parameter:

given optionWriter[A](using writer: JsonWriter[A]): JsonWriter[Option[A]] =
  new JsonWriter[Option[A]] {
    def write(option: Option[A]): Json =
      option match {
        case Some(aValue) => writer.write(aValue)
        case None         => JsNull
      }
  }

This given instance constructs a JsonWriter for Option[A] by relying on a context parameter to fill in the A-specific functionality. When the compiler sees an expression like this:

Json.toJson(Option("A string"))

it searches for a given instance of type JsonWriter[Option[String]]. It finds the given instance for JsonWriter[Option[A]]:

Json.toJson(Option("A string"))(using optionWriter[String])

and recursively searches for a JsonWriter[String] to use as the context parameter to optionWriter:

Json.toJson(Option("A string"))(using optionWriter(using stringWriter))

In this way, given instance resolution becomes a search through the space of possible combinations of given instances, to find a combination that creates a type class instance of the correct overall type.
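The search nests to arbitrary depth. For instance, resolving a writer for Option[Option[String]] applies optionWriter twice; below is a sketch of the expansion.

Json.toJson(Option(Option("A string")))

// is expanded by the compiler to
Json.toJson(Option(Option("A string")))(
  using optionWriter(using optionWriter(using stringWriter))
)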

4.3.1 Type Class Composition in Scala 2

In Scala 2 we can achieve the same effect with an implicit method with implicit parameters. Here’s the Scala 2 equivalent of optionWriter above.

implicit def scala2OptionWriter[A]
    (implicit writer: JsonWriter[A]): JsonWriter[Option[A]] =
  new JsonWriter[Option[A]] {
    def write(option: Option[A]): Json =
      option match {
        case Some(aValue) => writer.write(aValue)
        case None         => JsNull
      }
  }

Make sure you make the method’s parameter implicit! If you don’t, you’ll end up defining an implicit conversion. Implicit conversion is an older programming pattern that is frowned upon in modern Scala code. Fortunately, the compiler will warn you should you do this.

4.4 What Type Classes Are

We have now seen the mechanics of type classes: they are a specific arrangement of a trait, given instances, and using clauses. This is a very craft-level explanation. Let’s now raise the level of the explanation with three different views of type classes.

The first view goes back to Chapter 3, where we looked at codata. The type class itself—the trait—is an example of codata with the usual advantages of codata (we can easily add implementations) and disadvantages (we cannot easily change the interface). Given instances and using clauses add the ability to choose the codata implementation based on the type of the context parameter and the instances in the given scope, and to compose instances from smaller components.

Raising the level of abstraction again, we can say that type classes allow us to implement functionality (the type class instance) separately from the type to which it applies, so that the implementation only needs to be defined at the point of use—the call site—not at the point of declaration.

Raising the level again, we can say type classes allow us to implement ad-hoc polymorphism. I find it easiest to understand ad-hoc polymorphism in contrast to parametric polymorphism. Parametric polymorphism is what we get with type parameters, also known as generic types. It allows us to treat all types in a uniform way. For example, the following function calculates the length of any list of an arbitrary type A.

def length[A](list: List[A]): Int =
  list match {
    case Nil => 0
    case x :: xs => 1 + length(xs)
  }

We can implement length because we don’t require any particular functionality from the values of type A that make up the elements of the list. We don’t call any methods on them, and indeed we cannot call any methods on them because we don’t know what concrete type A will be at the point where length is defined5.

Ad-hoc polymorphism allows us to call methods on values with a generic type. The methods we can call are exactly those defined by the type class. For example, we can use the Numeric type class from the standard library to write a method that adds together elements of any type that implements that type class.

import scala.math.Numeric

def add[A](x: A, y: A)(using n: Numeric[A]): A = {
  n.plus(x, y)
}
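For example, the standard library provides Numeric instances for the built-in numeric types, so both of these calls resolve automatically.

add(1, 2)
// res: Int = 3
add(1.5, 2.5)
// res: Double = 4.0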

So parametric polymorphism can be understood as meaning any type, while ad-hoc polymorphism means any type that also implements this functionality. In ad-hoc polymorphism there doesn’t have to be any particular type relationship between the concrete types that implement the functionality of interest. This is in contrast to object-oriented style polymorphism (i.e. codata) where all concrete types must be subtypes of the type that defines the functionality of interest.

4.5 Exercise: Display Library

Scala provides a toString method to let us convert any value to a String. This method comes with a few disadvantages:

  1. It is implemented for every type in the language. There are situations where we don’t want to be able to view data. For example, we may want to ensure we don’t log sensitive information, such as passwords, in plain text.

  2. We can’t customize toString for types we don’t control.

Let’s define a Display type class to work around these problems:

  1. Define a type class Display[A] containing a single method display. display should accept a value of type A and return a String.

  2. Create instances of Display for String and Int on the Display companion object.

  3. On the Display companion object create two generic interface methods:

    • display accepts a value of type A and a Display of the corresponding type. It uses the relevant Display to convert the A to a String.

    • print accepts the same parameters as display and returns Unit. It prints the displayed A value to the console using println.

These steps define the three main components of our type class. First we define Display—the type class itself:

trait Display[A] {
  def display(value: A): String
}

Then we define some default instances of Display and package them in the Display companion object:

object Display {
  given stringDisplay: Display[String] with {
    def display(input: String) = input
  }

  given intDisplay: Display[Int] with {
    def display(input: Int) = input.toString
  }
}

Finally we extend the Display companion object to provide a basic interface:

object Display {
  given stringDisplay: Display[String] with {
    def display(input: String) = input
  }

  given intDisplay: Display[Int] with {
    def display(input: Int) = input.toString
  }

  def display[A](input: A)(using p: Display[A]): String =
    p.display(input)

  def print[A](input: A)(using Display[A]): Unit =
    println(display(input))
}

Notice that the Display context parameter on print is anonymous: we declare only its type. This is allowed in Scala 3, and works because the anonymous instance is still in the given scope, so the compiler passes it to display for us.

4.5.1 Using the Library

The code above forms a general purpose printing library that we can use in multiple applications. Let’s define an “application” now that uses the library.

First we’ll define a data type to represent a well-known type of furry animal:

final case class Cat(name: String, age: Int, color: String)

Next we’ll create an implementation of Display for Cat that returns content in the following format:

NAME is a AGE year-old COLOR cat.

Finally, use the type class on the console or in a short demo app: create a Cat and print it to the console:

// Define a cat:
val cat = Cat(/* ... */)

// Print the cat!

This is a standard use of the type class pattern. First we define a custom data type for our application:

final case class Cat(name: String, age: Int, color: String)

Then we define type class instances for the types we care about. These either go into the companion object of Cat or a separate object to act as a namespace:

given catDisplay: Display[Cat] = new Display[Cat] {
  def display(cat: Cat) = {
    val name  = Display.display(cat.name)
    val age   = Display.display(cat.age)
    val color = Display.display(cat.color)
    s"$name is a $age year-old $color cat."
  }
}

Finally, we use the type class by bringing the relevant instances into scope and using interface object/syntax. If we defined the instances in companion objects Scala brings them into scope for us automatically. Otherwise we use an import to access them:

val cat = Cat("Garfield", 41, "ginger and black")
Display.print(cat)
// Garfield is a 41 year-old ginger and black cat.

4.5.2 Better Syntax

Let’s make our printing library easier to use by adding extension methods for its functionality:

  1. Create an object DisplaySyntax.

  2. Define display and print as extension methods on DisplaySyntax.

  3. Use the extension methods to print the example Cat you created in the previous exercise.

First we define DisplaySyntax with the extension methods we want.

object DisplaySyntax {
  extension [A](value: A)(using p: Display[A]) {
    def display: String = p.display(value)
    def print: Unit = println(p.display(value))
  }
}

Now we can show everything working by calling print on a Cat.

import DisplaySyntax.*

given Display[Cat] with {
  def display(cat: Cat): String = {
    val name  = cat.name.display
    val age   = cat.age.display
    val color = cat.color.display
    s"$name is a $age year-old $color cat."
  }
}

Cat("Garfield", 41, "ginger and black").print
// Garfield is a 41 year-old ginger and black cat.

We get a compile error if we haven’t defined an instance of Display for the relevant type:

import java.util.Date
new Date().print
// error:
// value print is not a member of java.util.Date.
// An extension method was tried, but could not be fully constructed:
// 
//     this.DisplaySyntax.print[java.util.Date](new java.util.Date())(
//       /* missing */summon[MdocApp3.this.Display[java.util.Date]])
// 
//     failed with:
// 
//         No given instance of type MdocApp3.this.Display[java.util.Date] was found for parameter p of method print in object DisplaySyntax
// new Date().print
// ^^^^^^^^^^^^^^^^

4.6 Type Classes and Variance

In this section we’ll discuss how variance interacts with type class instance selection. Variance is one of the darker corners of Scala’s type system, so we start by reviewing it before moving on to its interaction with type classes.

4.6.1 Variance

Variance concerns the relationship between an instance defined on a type and its subtypes. For example, if we define a JsonWriter[Option[Int]], will the expression Json.toJson(Some(1)) select this instance? (Remember that Some is a subtype of Option.)

We need two concepts to explain variance: type constructors, and subtyping.

Variance applies to any type constructor, which is the F in a type F[A]. So, for example, List, Option, and JsonWriter are all type constructors. A type constructor must have at least one type parameter, and may have more. So Either, with two type parameters, is also a type constructor.

Subtyping is a relationship between types. We say that B is a subtype of A if we can use a value of type B anywhere we expect a value of type A. We may sometimes use the shorthand B <: A to indicate that B is a subtype of A.

Variance concerns the subtyping relationship between types F[A] and F[B], given a subtyping relationship between A and B. If B is a subtype of A then

  1. if F[B] <: F[A] we say F is covariant in A; else
  2. if F[B] >: F[A] we say F is contravariant in A; else
  3. if there is no subtyping relationship between F[B] and F[A] we say F is invariant in A.

When we define a type constructor we can also add variance annotations to its type parameters. For example, we denote covariance with a + symbol:

trait F[+A] // the "+" means "covariant"

If we don’t add a variance annotation, the type parameter is invariant. Let’s now look at covariance, contravariance, and invariance in detail.

4.6.2 Covariance

Covariance means that the type F[B] is a subtype of the type F[A] if B is a subtype of A. This is useful for modelling many types, including collections like List and Option:

trait List[+A]
trait Option[+A]

The covariance of Scala collections allows us to substitute collections of one type with a collection of a subtype in our code. For example, we can use a List[Circle] anywhere we expect a List[Shape] because Circle is a subtype of Shape:

sealed trait Shape
final case class Circle(radius: Double) extends Shape
val circles: List[Circle] = ???
val shapes: List[Shape] = circles

Generally speaking, covariance is used for outputs: data that we can later get out of a container type such as List, or otherwise returned by some method.

4.6.3 Contravariance

What about contravariance? We write contravariant type constructors with a - symbol like this:

trait F[-A]

Perhaps confusingly, contravariance means that the type F[B] is a subtype of F[A] if A is a subtype of B. This is useful for modelling types that represent inputs, like our JsonWriter type class above:

trait JsonWriter[-A] {
  def write(value: A): Json
}

Let’s unpack this a bit further. Remember that variance is all about the ability to substitute one value for another. Consider a scenario where we have two values, one of type Shape and one of type Circle, and two JsonWriters, one for Shape and one for Circle:

val shape: Shape = ???
val circle: Circle = ???

val shapeWriter: JsonWriter[Shape] = ???
val circleWriter: JsonWriter[Circle] = ???
def format[A](value: A, writer: JsonWriter[A]): Json =
  writer.write(value)

Now ask yourself the question: “Which combinations of value and writer can I pass to format?” We can write a Circle with either writer because all Circles are Shapes. Conversely, we can’t write a Shape with circleWriter because not all Shapes are Circles.

This relationship is what we formally model using contravariance. JsonWriter[Shape] is a subtype of JsonWriter[Circle] because Circle is a subtype of Shape. This means we can use shapeWriter anywhere we expect to see a JsonWriter[Circle].
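We can check this with format. Every line below compiles except the last, commented-out one.

format(circle, circleWriter)
format(circle, shapeWriter) // ok: JsonWriter[Shape] <: JsonWriter[Circle]
format(shape, shapeWriter)
// format(shape, circleWriter) // does not compile: not all Shapes are Circles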

4.6.4 Invariance

Invariance is the easiest situation to describe. It’s what we get when we don’t write a + or - in a type constructor:

trait F[A]

This means the types F[A] and F[B] are never subtypes of one another, no matter what the relationship between A and B. This is the default semantics for Scala type constructors.

4.6.5 Variance and Instance Selection

When the compiler searches for a given instance it looks for one matching the type or a subtype. Thus we can use variance annotations to control type class instance selection to some extent.

There are two issues that tend to arise. Let’s imagine we have an algebraic data type like:

enum A {
  case B
  case C
}

The issues are:

  1. Will an instance defined on a supertype be selected if one is available? For example, can we define an instance for A and have it work for values of type B and C?

  2. Will an instance for a subtype be selected in preference to that of a supertype? For instance, if we define an instance for A and B, and we have a value of type B, will the instance for B be selected in preference to A?

It turns out we can’t have both at once. The three choices give us behaviour as follows:

Type Class Variance            Invariant  Covariant  Contravariant
Supertype instance used?       No         No         Yes
More specific type preferred?  No         Yes        No

Let’s see some examples, using the following types to show the subtyping relationship.

trait Animal
trait Cat extends Animal
trait DomesticShorthair extends Cat

Now we’ll define three different type classes for the three types of variance, and define an instance of each for the Cat type.

trait Inv[A] {
  def result: String
}
object Inv {
  given Inv[Cat] with
    def result = "Invariant"
    
  def apply[A](using instance: Inv[A]): String =
    instance.result
}

trait Co[+A] {
  def result: String
}
object Co {
  given Co[Cat] with
    def result = "Covariant"

  def apply[A](using instance: Co[A]): String =
    instance.result
}

trait Contra[-A] {
  def result: String
}
object Contra {
  given Contra[Cat] with
    def result = "Contravariant"

  def apply[A](using instance: Contra[A]): String =
    instance.result
}

Now the cases that work, all of which select the Cat instance. For the invariant case we must ask for exactly the Cat type. For the covariant case we can ask for a supertype of Cat. For contravariance we can ask for a subtype of Cat.

Inv[Cat]
// res1: String = "Invariant"
Co[Animal]
// res2: String = "Covariant"
Co[Cat]
// res3: String = "Covariant"
Contra[DomesticShorthair]
// res4: String = "Contravariant"
Contra[Cat]
// res5: String = "Contravariant"

Now the cases that fail. With invariance any type that is not Cat will fail. So the supertype fails

Inv[Animal]
// error: 
// No given instance of type MdocApp0.this.Inv[MdocApp0.this.Animal] was found for parameter instance of method apply in object Inv

as does the subtype.

Inv[DomesticShorthair]
// error: 
// No given instance of type MdocApp0.this.Inv[MdocApp0.this.DomesticShorthair] was found for parameter instance of method apply in object Inv

Covariance fails for any subtype of the type for which the instance is declared.

Co[DomesticShorthair]
// error: 
// No given instance of type MdocApp0.this.Co[MdocApp0.this.DomesticShorthair] was found for parameter instance of method apply in object Co

Contravariance fails for any supertype of the type for which the instance is declared.

Contra[Animal]
// error: 
// No given instance of type MdocApp0.this.Contra[MdocApp0.this.Animal] was found for parameter instance of method apply in object Contra

It’s clear there is no perfect system. The most common choice is to use invariant type classes. This allows us to specify more specific instances for subtypes if we want. It does mean that if we have, for example, a value of type Some[Int], our type class instance for Option will not be used. We can solve this problem with a type annotation like Some(1) : Option[Int] or by using “smart constructors” like the Option.apply, Option.empty, some, and none methods we saw in Section 6.3.3.
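As a sketch, assuming the invariant JsonWriter from earlier in this chapter and its stringWriter and optionWriter instances are in scope:

// Fails: the compiler wants a JsonWriter[Some[String]], which the
// invariant instance for Option[String] does not match
// Json.toJson(Some("A string"))

// Works: the annotation gives the expression type Option[String]
Json.toJson(Some("A string"): Option[String])

// Works: Option.apply already returns Option[String]
Json.toJson(Option("A string"))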

4.7 Conclusions

In this chapter we took a first look at type classes. We saw the components that make up a type class:

  1. the type class itself, a trait with at least one type parameter;
  2. type class instances, which are given instances implementing the type class for particular types; and
  3. type class use, which is any method accepting type class instances via a using clause.

We saw that type classes can be composed from components using type class composition. This is one form of metaprogramming in Scala, where we can get the compiler to do work for us based on our program’s types.

We can view type classes as marrying codata with tools to select and compose implementations based on type. We can also view type classes as shifting implementation from the definition site to the call site. Finally, we can see type classes as a mechanism for ad-hoc polymorphism, allowing us to define common functionality for otherwise unrelated types.

Type classes were first described in Kaes [1988] and Wadler and Blott [1989]. Oliveira et al. [2010] details the encoding of type classes in Scala 2, and compares Scala’s and Haskell’s approach to type classes. Note that type classes are not restricted to Haskell and Scala. For example, Rust’s traits are essentially type classes.

As we have seen, Scala’s support for type classes is based on implicit parameters (known as using clauses in Scala 3). Implicit parameters [Lewis et al. 2000] were motivated by a desire to decompose type classes into smaller orthogonal language features, but they have been shown to be useful for other tasks. Křikava et al. [2019] surveys different uses of implicits in Scala. See Oliveira and Gibbons [2010] for a particularly mind-bending example. We’ll see some of these different uses in later chapters.

Scala 3 has a few language features related to contextual abstraction that we haven’t mentioned in this chapter. Context functions [Odersky et al. 2017] allow functions to have using clauses. They are something the community is still exploring, and well defined use cases have yet to emerge. Generic derivation allows us to write code that generates type class instances. Although this is extremely useful I think it’s conceptually quite simple and doesn’t warrant space in this book.

5 Reified Interpreters

The interpreter strategy is perhaps the most important in all of functional programming. The central idea is to separate description from action. When we use the interpreter strategy our program consists of two parts: the description, instructions, or program that describes what we want to do, and the interpreter that carries out the actions in the description. In this chapter we’ll start exploring the design and implementation of interpreters, focusing on implementations using algebraic data types.

Interpreters arise whenever there is this distinction between description and action. You may think an interpreter is a complex piece of software requiring a lot of development effort, but I hope to show you this is not the case. You probably already use lots of interpreters in your daily coding without realizing it. For example, consider the code below, which is taken from a web framework called Krop.

val route =
  Route(
    Request.get(Path.root / "user" / Param.int),
    Response.ok(Entity.text)
  ).handle(userId => s"You asked for the user ${userId.toString}")

This defines a route, which matches GET requests for the path "/user/<int>", and responds with an Ok containing text. This kind of routing library is ubiquitous in web frameworks, is simple to write, and yet contains everything we need for the interpreter strategy.

Interpreters are so important because they are the key to enabling compositionality and reasoning, particularly while allowing effects. For example, imagine implementing a graphics library using the interpreter strategy. A program simply describes what we want to draw on the screen, but critically it does not draw anything. The interpreter takes this description and creates the drawing described by it. We can freely compose descriptions only because they do not carry out any effects. For example, if we have a description that describes a circle, and one for a square, we can compose them by saying we should draw the circle next to the square thereby creating a new description. If we immediately drew pictures there would be nothing to compose with. Similarly, it’s easier to reason about pictures in this system because a program describes exactly what will appear on the screen, and there is no state from prior drawing that we need to worry about.

Throughout this chapter we will explore the interpreter strategy by building a series of interpreters for regular expressions. We’ve chosen to use regular expressions because they are already familiar to many and they are simple to work with. This means we can focus on the details of the interpreter strategy without getting caught up in problem specific details, but we still end up with a realistic and useful result.

We’ll start with a basic implementation strategy that uses algebraic data types and structural recursion. We’ll then look at transformations to turn our interpreter into a version that avoids using the stack and hence avoids the possibility of stack overflow.

5.1 Regular Expressions

We’ll start this case study by briefly describing the usual task for regular expressions—matching text—and then take a more theoretical view. We’ll then move on to implementation.

We most commonly use regular expressions to determine if a string matches a particular pattern. The simplest regular expression is one that matches only one string. In Scala we can create a regular expression by calling the r method on String. Here’s a regular expression that matches exactly the string "Scala".

val regexp = "Scala".r

We can see that it matches only "Scala" and fails if we give it a shorter or longer input.

regexp.matches("Scala")
// res0: Boolean = true
regexp.matches("Sca")
// res1: Boolean = false
regexp.matches("Scalaland")
// res2: Boolean = false

Notice we already have a separation between description and action. The description is the regular expression itself, created by calling the r method, and the action is calling the matches method on the regular expression.

There are some characters that have a special meaning within the String describing a regular expression. For example, the character * matches the preceding character zero or more times.

val regexp = "Scala*".r
regexp.matches("Scal")
// res4: Boolean = true
regexp.matches("Scala")
// res5: Boolean = true
regexp.matches("Scalaaaa")
// res6: Boolean = true

We can also use parentheses to group sequences of characters. For example, if we wanted to match all the strings like "Scala", "Scalala", "Scalalala" and so on, we could use the following regular expression.

val regexp = "Scala(la)*".r

Let’s check it matches what we’re looking for.

regexp.matches("Scala")
// res8: Boolean = true
regexp.matches("Scalalalala")
// res9: Boolean = true

We should also check it fails to match as expected.

regexp.matches("Sca")
// res10: Boolean = false
regexp.matches("Scalal")
// res11: Boolean = false
regexp.matches("Scalaland")
// res12: Boolean = false

That’s all I’m going to say about Scala’s built-in regular expressions. If you’d like to learn more there are many resources online. One example is the JDK documentation, which describes all the features available in the JVM implementation of regular expressions.

Let’s turn to the theoretical description, such as we might find in a textbook. A regular expression is:

  1. the empty regular expression that matches nothing;
  2. a string, which matches exactly that string (including the empty string);
  3. the concatenation of two regular expressions, which matches the first regular expression and then the second;
  4. the union of two regular expressions, which matches if either expression matches; and
  5. the repetition of a regular expression (often known as the Kleene star), which matches zero or more repetitions of the underlying expression.

This kind of description may seem very abstract if you’re not used to it. It is very useful for our purposes because it defines a minimal API that we can easily implement. Let’s walk through the description and see how each part relates to code.

The empty regular expression defines a constructor with type () => Regexp, which we can simplify to a value of type Regexp. In Scala we put constructors on the companion object, so this tells us we need

object Regexp {
  val empty: Regexp =
    ???
}

The second part tells us we need another constructor, this one with type String => Regexp.

object Regexp {
  val empty: Regexp =
    ???

  def apply(string: String): Regexp =
    ???
}

The other three components all take a regular expression and produce a regular expression. In Scala these will become methods on the Regexp type. Let’s model this as a trait for now, and define these methods.

The first method, the concatenation of two regular expressions, is conventionally called ++ in Scala.

trait Regexp {
  def ++(that: Regexp): Regexp
}

Union is conventionally called orElse.

trait Regexp {
  def ++(that: Regexp): Regexp
  def orElse(that: Regexp): Regexp
}

Repetition we’ll call repeat, and define an alias * that matches how this operation is written in conventional regular expressions.

trait Regexp {
  def ++(that: Regexp): Regexp
  def orElse(that: Regexp): Regexp
  def repeat: Regexp
  def `*`: Regexp = this.repeat
}

We’re missing one thing: a method to actually match our regular expression against some input. Let’s call this method matches.

trait Regexp {
  def ++(that: Regexp): Regexp
  def orElse(that: Regexp): Regexp
  def repeat: Regexp
  def `*`: Regexp = this.repeat
  
  def matches(input: String): Boolean
}

This completes our API. Now we can turn to implementation. We’re going to represent Regexp as an algebraic data type, and each method that returns a Regexp will return an instance of this algebraic data type. What should be the elements that make up the algebraic data type? There will be one element for each method, and the constructor arguments will be exactly the parameters passed to the method, including the hidden this parameter for methods on the trait.

Here’s the resulting code.

enum Regexp {
  def ++(that: Regexp): Regexp =
    Append(this, that)

  def orElse(that: Regexp): Regexp =
    OrElse(this, that)

  def repeat: Regexp =
    Repeat(this)

  def `*`: Regexp = this.repeat
  
  def matches(input: String): Boolean =
    ???
  
  case Append(left: Regexp, right: Regexp)
  case OrElse(first: Regexp, second: Regexp)
  case Repeat(source: Regexp)
  case Apply(string: String)
  case Empty
}
object Regexp {
  val empty: Regexp = Empty
  
  def apply(string: String): Regexp =
    Apply(string)
}

A quick note about this. We can think of every method on an object as accepting a hidden parameter that is the object itself; this parameter is this. (If you have used Python, you’ll have seen it made explicit as the self parameter.) As we consider this to be a parameter to a method call, and our implementation strategy is to capture all the method parameters in a data structure, we must make sure we capture this when it is available. The only case where we don’t capture this is when we are defining a constructor on a companion object.

Notice that we haven’t implemented matches. It doesn’t return a Regexp so we cannot return an element of our algebraic data type. What should we do here? Regexp is an algebraic data type and matches transforms an algebraic data type into a Boolean. Therefore we can use structural recursion! Let’s write out the skeleton, including the recursion rule.

def matches(input: String): Boolean =
  this match {
    case Append(left, right)   => left.matches(???) ??? right.matches(???)
    case OrElse(first, second) => first.matches(???) ??? second.matches(???)
    case Repeat(source)        => source.matches(???) ???
    case Apply(string)         => ???
    case Empty                 => ???
  }

Now we can apply the usual strategies to complete the implementation. Let’s reason independently by case, starting with the case for Empty. This case is trivial as it always fails to match, so we just return false.

def matches(input: String): Boolean =
  this match {
    case Append(left, right)   => left.matches(???) ??? right.matches(???)
    case OrElse(first, second) => first.matches(???) ??? second.matches(???)
    case Repeat(source)        => source.matches(???) ???
    case Apply(string)         => ???
    case Empty                 => false
  }

Let’s move on to the Append case. This should match if the left regular expression matches the start of the input, and the right regular expression matches starting where the left regular expression stopped. This has uncovered a hidden requirement: we need to keep an index into the input that tells us where we should start matching from. Using a nested method is the easiest way to keep around additional information that we need. Here I’ve created a nested method that returns an Option[Int]. The Int is the new index to use, and we return an Option to indicate if the regular expression matched or not.

def matches(input: String): Boolean = {
  def loop(regexp: Regexp, idx: Int): Option[Int] =
    regexp match {
      case Append(left, right) =>
        loop(left, idx).flatMap(idx => loop(right, idx))
      case OrElse(first, second) => 
        loop(first, idx) ??? loop(second, ???)
      case Repeat(source) => 
        loop(source, idx) ???
      case Apply(string) => 
        ???
      case Empty =>
        None
    }

  // Check we matched the entire input
  loop(this, 0).map(idx => idx == input.size).getOrElse(false)
}

Now we can go ahead and complete the implementation.

def matches(input: String): Boolean = {
  def loop(regexp: Regexp, idx: Int): Option[Int] =
    regexp match {
      case Append(left, right) =>
        loop(left, idx).flatMap(i => loop(right, i))
      case OrElse(first, second) => 
        loop(first, idx).orElse(loop(second, idx))
      case Repeat(source) =>
        loop(source, idx)
          .flatMap(i => loop(regexp, i))
          .orElse(Some(idx))
      case Apply(string) =>
        Option.when(input.startsWith(string, idx))(idx + string.size)
      case Empty =>
        None
    }

  // Check we matched the entire input
  loop(this, 0).map(idx => idx == input.size).getOrElse(false)
}

The implementation for Repeat is a little tricky, so I’ll walk through the code.

case Repeat(source) =>
  loop(source, idx)
    .flatMap(i => loop(regexp, i))
    .orElse(Some(idx))

The first line (loop(source, idx)) is seeing if the source regular expression matches. If it does we loop again, but on regexp (which is Repeat(source)), not source. This is because we want to repeat an indefinite number of times; if we looped on source we would match it at most twice. Remember that failing to match is still a success; repeat matches zero or more times. This condition is handled by the orElse clause.

We should test that our implementation works.

Here’s the example regular expression we started the chapter with.

val regexp = Regexp("Sca") ++ Regexp("la") ++ Regexp("la").repeat

Here are cases that should succeed.

regexp.matches("Scala")
// res14: Boolean = true
regexp.matches("Scalalalala")
// res15: Boolean = true

Here are cases that should fail.

regexp.matches("Sca")
// res16: Boolean = false
regexp.matches("Scalal")
// res17: Boolean = false
regexp.matches("Scalaland")
// res18: Boolean = false

Success! At this point we could add many extensions to our library. For example, regular expressions usually have a method (by convention denoted +) that matches one or more times, and one that matches zero or one times (usually denoted ?). These are both conveniences we can build on our existing API, as sketched below. However, our goal at the moment is to fully understand interpreters and the implementation technique we’ve used here, so in the next section we’ll discuss these in detail.
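
Here’s a sketch of how these two conveniences might be defined in terms of the existing API. These extension methods are illustrative; they are not part of the implementation above.

extension (source: Regexp) {
  // One or more repetitions: match once, then zero or more times.
  def `+` : Regexp = source ++ source.repeat

  // Zero or one match: try source, otherwise match the empty
  // string, which always succeeds without consuming any input.
  def `?` : Regexp = source.orElse(Regexp(""))
}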

Regular Expression Semantics

Our regular expression implementation handles union differently to Scala’s built-in regular expressions. Look at the following example comparing the two.

val r1 = "(z|zxy)ab".r
val r2 = Regexp("z").orElse(Regexp("zxy")) ++ Regexp("ab")
r1.matches("zxyab")
// res19: Boolean = true
r2.matches("zxyab")
// res20: Boolean = false

The reason for this difference is that our implementation commits to the first branch in a union that successfully matches some of the input, regardless of how that affects later matching. We should instead try both branches, but doing so makes the implementation more complex. The semantics of regular expressions are not essential to what we’re trying to do here; we’re just using them as an example to motivate the programming strategies we’re learning. I decided the extra complexity of implementing union in the usual way outweighed the benefits, and so kept the simpler implementation. Don’t worry, we’ll see how to do it properly in the next chapter!

5.2 Interpreters and Reification

There are two different programming strategies at play in the regular expression code we’ve just written:

  1. the interpreter strategy; and
  2. the interpreter’s implementation strategy of reification.

Remember the essence of the interpreter strategy is to separate description and action. Therefore, whenever we use the interpreter strategy we need at least two things: a description and an interpreter. Descriptions are programs; things that we want to happen. The interpreter runs the programs, carrying out the actions described within them.

In the regular expression example, a Regexp value is a program. It is a description of a pattern we are looking for within a String. The matches method is an interpreter. It carries out the instructions in the description, checking the pattern matches the entire input. We could have other interpreters, such as one that matches if at least some part of the input matches the pattern.
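
To sketch this flexibility, here’s a second, hypothetical interpreter over the same description: one that renders a Regexp in conventional regular expression syntax. It is another structural recursion, and is not part of the implementation we build in this chapter.

def pretty(regexp: Regexp): String =
  regexp match {
    case Regexp.Append(left, right)   => pretty(left) + pretty(right)
    case Regexp.OrElse(first, second) => s"(${pretty(first)}|${pretty(second)})"
    case Regexp.Repeat(source)        => s"(${pretty(source)})*"
    case Regexp.Apply(string)         => string
    case Regexp.Empty                 => "∅" // no conventional syntax for match-nothing
  }

pretty(Regexp("Sca") ++ Regexp("la") ++ Regexp("la").repeat)
// res: String = "Scala(la)*"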

5.2.1 The Structure of Interpreters

All uses of the interpreter strategy have a particular structure to their methods. There are three different kinds of methods:

  1. constructors, or introduction forms, with type A => Program. Here A is any type that isn’t a program, and Program is the type of programs. Constructors conventionally live on the Program companion object in Scala. We see that apply is a constructor of Regexp. It has type String => Regexp, which matches the pattern A => Program for a constructor. The other constructor, empty, is just a value of type Regexp. This is equivalent to a method with type () => Regexp and so it also matches the pattern for a constructor.

  2. combinators have at least one program input and a program output. The type is similar to Program => Program but there are often additional parameters. All of ++, orElse, and repeat are combinators in our regular expression example. They all have a Regexp input (the this parameter) and produce a Regexp. Some of them have additional parameters, such as ++ or orElse. For both these methods the single additional parameter is a Regexp, but it is not the case that additional parameters to a combinator must be of the program type. Conventionally these methods live on the Program type.

  3. destructors, interpreters, or elimination forms, have type Program => A. In our regular expression example we have a single interpreter, matches, but we could easily add more. For example, we often want to extract elements from the input or find a match at any location in the input.

This structure is often called an algebra or combinator library in the functional programming world. When we talk about constructors and destructors in an algebra we’re talking at a more abstract level than when we talk about constructors and destructors on algebraic data types. A constructor of an algebra is an abstract concept, at the theory level in my taxonomy, that we can choose to concretely implement at the craft level with the constructor of an algebraic data type. There are other possible implementations. We’ll see one later.

5.2.2 Implementing Interpreters with Reification

Now that we understand the components of an interpreter we can talk more clearly about the implementation strategy we used. We used a strategy variously known as reification, defunctionalization, deep embedding, or an initial algebra.

Reification, in an abstract sense, means to make concrete what is abstract. Concretely, reification in the programming sense means to turn methods or functions into data. When using reification in the interpreter strategy we reify all the components that produce the Program type. This means reifying constructors and combinators.
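
As a tiny standalone illustration, here’s a sketch of reifying two functions of type Int => Int as data. The functions become cases of an enum, and an apply method, the interpreter, turns the data back into behaviour. (This example is invented for illustration; it is not part of the regular expression code.)

enum IntFunction {
  case Increment
  case Square

  // The interpreter: convert the reified function back into behaviour.
  def apply(x: Int): Int =
    this match {
      case Increment => x + 1
      case Square    => x * x
    }
}

IntFunction.Increment(41)
// res: Int = 42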

Here are the rules for reification:

  1. We define some type, which we’ll call Program, to represent programs.
  2. We implement Program as an algebraic data type.
  3. All constructors and combinators become product types within the Program algebraic data type.
  4. Each product type holds exactly the parameters to the constructor or combinator, including the this parameter for combinators.

Once we’ve defined the Program algebraic data type, the interpreter becomes a structural recursion on Program.

Exercise: Arithmetic

Now it’s your turn to practice using reification. Your task is to implement an interpreter for arithmetic expressions. An expression is:

  1. a literal number;
  2. the addition of two expressions;
  3. the subtraction of two expressions;
  4. the multiplication of two expressions; or
  5. the division of two expressions.

Reify this description as a type Expression.

The trick here is to recognize how the textual description relates to code, and to apply reification correctly.

enum Expression {
  case Literal(value: Double)
  case Addition(left: Expression, right: Expression)
  case Subtraction(left: Expression, right: Expression)
  case Multiplication(left: Expression, right: Expression)
  case Division(left: Expression, right: Expression)
}
object Expression {
  def apply(value: Double): Expression =
    Literal(value)
}

Now implement an interpreter eval that produces a Double. This interpreter should interpret the expression using the usual rules of arithmetic.

Our interpreter is a structural recursion.

enum Expression {
  case Literal(value: Double)
  case Addition(left: Expression, right: Expression)
  case Subtraction(left: Expression, right: Expression)
  case Multiplication(left: Expression, right: Expression)
  case Division(left: Expression, right: Expression)
  
  def eval: Double =
    this match {
      case Literal(value)              => value
      case Addition(left, right)       => left.eval + right.eval
      case Subtraction(left, right)    => left.eval - right.eval
      case Multiplication(left, right) => left.eval * right.eval
      case Division(left, right)       => left.eval / right.eval
    }
}
object Expression {
  def apply(value: Double): Expression =
    Literal(value)
}

Add methods +, - and so on that make your system a bit nicer to use. Then write some expressions and show that it works as expected.

Here’s the complete code.

enum Expression {
  case Literal(value: Double)
  case Addition(left: Expression, right: Expression)
  case Subtraction(left: Expression, right: Expression)
  case Multiplication(left: Expression, right: Expression)
  case Division(left: Expression, right: Expression)

  def +(that: Expression): Expression =
    Addition(this, that)

  def -(that: Expression): Expression =
    Subtraction(this, that)

  def *(that: Expression): Expression =
    Multiplication(this, that)

  def /(that: Expression): Expression =
    Division(this, that)

  def eval: Double =
    this match {
      case Literal(value)              => value
      case Addition(left, right)       => left.eval + right.eval
      case Subtraction(left, right)    => left.eval - right.eval
      case Multiplication(left, right) => left.eval * right.eval
      case Division(left, right)       => left.eval / right.eval
    }
}
object Expression {
  def apply(value: Double): Expression =
    Literal(value)
}

Here’s an example showing use, and that the code is correct.

val fortyTwo = ((Expression(15.0) + Expression(5.0)) * Expression(2.0) + Expression(2.0)) / Expression(1.0)
fortyTwo.eval
// res2: Double = 42.0

5.3 Tail Recursive Interpreters

Structural recursion, as we have written it, uses the stack. This is not often a problem, but particularly deep recursions can lead to the stack running out of space. The solution is to write the program in a tail recursive form. A tail recursive program does not need to use any stack space, and so is sometimes known as stack safe. Any program can be turned into an equivalent tail recursive version, which therefore cannot run out of stack space.

The Call Stack

Method and function calls are usually implemented using an area of memory known as the call stack, or just the stack for short. Every method or function call uses a small amount of memory on the stack, called a stack frame. When the method or function returns, this memory is freed and becomes available for future calls to use.

A large number of method calls, without corresponding returns, can require more stack frames than the stack can accommodate. When there is no more memory available on the stack we say we have overflowed the stack. In Scala a StackOverflowError is raised when this happens.

In this section we will discuss tail recursion, converting programs to tail recursive form, and the limitations of and workarounds for Scala’s runtimes.

5.3.1 The Problem of Stack Safety

Let’s start by seeing the problem. In Scala we can create a repeated String using the * method.

"a" * 4
// res0: String = "aaaa"

We can match such a String with a regular expression and repeat.

Regexp("a").repeat.matches("a" * 4)
// res1: Boolean = true

However, if we make the input very long the interpreter will fail with a stack overflow exception.

Regexp("a").repeat.matches("a" * 20000)
// java.lang.StackOverflowError

This is because the interpreter calls loop for each instance of a repeat, without returning. However, all is not lost. We can rewrite the interpreter in a way that consumes a fixed amount of stack space, and therefore match input that is as large as we like.

5.3.2 Tail Calls and Tail Position

Our starting point is tail calls. A tail call is a method call that does not take any additional stack space. Only method calls that are in tail position are candidates to be turned into tail calls. Even then, runtime limitations mean that not all calls in tail position will be converted to tail calls.

A method call in tail position is a call that immediately returns the value returned by the call. Let’s see an example. Below are two versions of a method to calculate the sum of the integers from 0 to count.

def isntTailRecursive(count: Int): Int =
  count match {
    case 0 => 0
    case n => n + isntTailRecursive(n - 1)
  }

def isTailRecursive(count: Int): Int = {
  def loop(count: Int, accum: Int): Int =
    count match {
      case 0 => accum
      case n => loop(n - 1, accum + n)
    }
    
  loop(count, 0)
}

The method call to isntTailRecursive in

case n => n + isntTailRecursive(n - 1)

is not in tail position, because the value returned by the call is then used in the addition. However, the call to loop in

case n => loop(n - 1, accum + n)

is in tail position because the value returned by the call to loop is itself immediately returned. Similarly, the call to loop in

loop(count, 0)

is also in tail position.

A method call in tail position is a candidate to be turned into a tail call. Limitations of Scala’s runtimes mean that not all calls in tail position can be made tail calls. Currently, only calls from a method to itself that are also in tail position will be converted to tail calls. This means

case n => loop(n - 1, accum + n)

is converted to a tail call, because loop is calling itself. However, the call

loop(count, 0)

is not converted to a tail call, because the call is from isTailRecursive to loop. This will not cause issues with stack consumption, however, because this call only happens once.

Runtimes and Tail Calls

Scala supports three different platforms: the JVM, Javascript via Scala.js, and native code via Scala Native. Each platform provides what is known as a runtime, which is code that supports our Scala code when it is running. The garbage collector, for example, is part of the runtime.

At the time of writing none of Scala’s runtimes support full tail calls. However, there is reason to think this may change in the future. Project Loom should eventually add support for tail calls to the JVM. Scala Native is likely to support tail calls soon, as part of other work to implement continuations. Tail calls have been part of the Javascript specification for a long time, but remain unimplemented by the majority of Javascript runtimes. However, WebAssembly does support tail calls and will probably replace compiling Scala to Javascript in the medium term.

We can ask the Scala compiler to check that all self calls are in tail position by adding the @tailrec annotation to a method. The code will fail to compile if any calls from the method to itself are not in tail position.

import scala.annotation.tailrec

@tailrec
def isntTailRecursive(count: Int): Int =
  count match {
    case 0 => 0
    case n => n + isntTailRecursive(n - 1)
  }
// error:
// Cannot rewrite recursive call: it is not in tail position
//     case n => n + isntTailRecursive(n - 1)
//                   ^^^^^^^^^^^^^^^^^^^^^^^^

We can check the tail recursive version is truly tail recursive by passing it a very large input. The non-tail recursive version crashes.

isntTailRecursive(100000)
// java.lang.StackOverflowError

The tail recursive version runs just fine.

isTailRecursive(100000)
// res4: Int = 705082704

5.3.3 Continuation-Passing Style

Now that we know about tail calls, how do we convert the regular expression interpreter to use them? Any program can be converted to an equivalent program with all calls in tail position. This conversion is known as continuation-passing style or CPS for short. Our first step to understanding CPS is to understand continuations.

A continuation is an encapsulation of “what happens next”. Let’s return to our Regexp example. Here’s the full code for reference.

enum Regexp {
  def ++(that: Regexp): Regexp =
    Append(this, that)

  def orElse(that: Regexp): Regexp =
    OrElse(this, that)

  def repeat: Regexp =
    Repeat(this)

  def `*` : Regexp = this.repeat

  def matches(input: String): Boolean = {
    def loop(regexp: Regexp, idx: Int): Option[Int] =
      regexp match {
        case Append(left, right) =>
          loop(left, idx).flatMap(i => loop(right, i))
        case OrElse(first, second) =>
          loop(first, idx).orElse(loop(second, idx))
        case Repeat(source) =>
          loop(source, idx)
            .flatMap(i => loop(regexp, i))
            .orElse(Some(idx))
        case Apply(string) =>
          Option.when(input.startsWith(string, idx))(idx + string.size)
        case Empty =>
          None
      }

    // Check we matched the entire input
    loop(this, 0).map(idx => idx == input.size).getOrElse(false)
  }

  case Append(left: Regexp, right: Regexp)
  case OrElse(first: Regexp, second: Regexp)
  case Repeat(source: Regexp)
  case Apply(string: String)
  case Empty
}
object Regexp {
  val empty: Regexp = Empty

  def apply(string: String): Regexp =
    Apply(string)
}

Let’s consider the case for Append in matches.

case Append(left, right) =>
  loop(left, idx).flatMap(i => loop(right, i))

What happens next when we call loop(left, idx)? Let’s give the name result to the value returned by the call to loop. The answer is we run result.flatMap(i => loop(right, i)). We can represent this as a function, to which we pass result:

(result: Option[Int]) => result.flatMap(i => loop(right, i))

This is exactly the continuation, reified as a value.

As is often the case, there is a distinction between the concept and the representation. The concept of continuations always exists in code. A continuation means “what happens next”. In other words, it is the program’s control flow. There is always some concept of control flow, even if it is just “the program halts”. We can represent continuations as functions in code. This transforms the abstract concept of continuations into concrete values in our program, and hence reifies them.

Now that we know about continuations, and their reification as functions, we can move on to continuation-passing style. In CPS we, as the name suggests, pass around continuations. Specifically, each function or method takes an extra parameter that is a continuation. Instead of returning a value it calls that continuation with the value. This is another example of duality, in this case between returning a value and calling a continuation.

Let’s see how this works. We’ll start with a simple example written in the normal style, also known as direct style.

(1 + 2) * 3
// res5: Int = 9

To rewrite this in CPS style we need to create replacements for + and * with the extra continuation parameter.

type Continuation = Int => Int

def add(x: Int, y: Int, k: Continuation) = k(x + y)
def mul(x: Int, y: Int, k: Continuation) = k(x * y)

Now we can rewrite our example in CPS. (1 + 2) becomes add(1, 2, k), but what is k, the continuation? What we do next is multiply the result by 3. Thus the continuation is a => mul(a, 3, k2). What is the next continuation, k2? Here the program finishes, so we just return the value with the identity continuation b => b. Put it all together and we get

add(1, 2, a => mul(a, 3, b => b))
// res6: Int = 9

Notice that every continuation call is in tail position in the CPS code. This means that code written in CPS can potentially consume no stack space.
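
To make this concrete, here’s the earlier summation example rewritten in CPS. (The name sumCps is hypothetical. Although every call here is in tail position, Scala only optimizes self-calls, a limitation we return to shortly.)

def sumCps(count: Int, k: Int => Int): Int =
  count match {
    case 0 => k(0)
    case n => sumCps(n - 1, result => k(n + result))
  }

sumCps(10, identity)
// res: Int = 55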

Now we can return to the interpreter loop for Regexp. We are going to CPS it, so we need to add an extra parameter for the continuation. In this case the continuation accepts and returns the result type of loop: Option[Int].

def matches(input: String): Boolean = {
  // Define a type alias so we can easily write continuations
  type Continuation = Option[Int] => Option[Int]

  def loop(regexp: Regexp, idx: Int, cont: Continuation): Option[Int] =
  // etc...
}

Now we go through each case and convert it to CPS. Each continuation we construct must call cont as its final step. This is tedious and a bit error-prone, so good tests are helpful.

def matches(input: String): Boolean = {
  // Define a type alias so we can easily write continuations
  type Continuation = Option[Int] => Option[Int]

  def loop(
      regexp: Regexp,
      idx: Int,
      cont: Continuation
  ): Option[Int] =
    regexp match {
      case Append(left, right) =>
        val k: Continuation = _ match {
          case None    => cont(None)
          case Some(i) => loop(right, i, cont)
        }
        loop(left, idx, k)

      case OrElse(first, second) =>
        val k: Continuation = _ match {
          case None => loop(second, idx, cont)
          case some => cont(some)
        }
        loop(first, idx, k)

      case Repeat(source) =>
        val k: Continuation =
          _ match {
            case None    => cont(Some(idx))
            case Some(i) => loop(regexp, i, cont)
          }
        loop(source, idx, k)

      case Apply(string) =>
        cont(Option.when(input.startsWith(string, idx))(idx + string.size))
        
      case Empty =>
        cont(None)
    }

  // Check we matched the entire input
  loop(this, 0, identity).map(idx => idx == input.size).getOrElse(false)
}

Every call in this interpreter loop is in tail position. However Scala cannot convert these to tail calls because the calls go from loop to a continuation and vice versa. To make the interpreter fully stack safe we need to add trampolining.

Exercise: CPS Arithmetic

In a previous exercise we wrote an interpreter for arithmetic expressions. Your task now is to CPS this interpreter. For reference, the definition of an arithmetic expression is:

  1. a literal number;
  2. the addition of two expressions;
  3. the subtraction of two expressions;
  4. the multiplication of two expressions; or
  5. the division of two expressions.

The continuations have a slightly different structure to the regular expression example. In the regular expression example, all the information needed by a continuation is either found in the parameter to the continuation (the index) or in values extracted via pattern matching. In the arithmetic code we need values from previous continuations that are not passed as parameters, in order to compute binary operations such as addition. The solution is to capture these values within the environment of the closure that represents the continuation.

type Continuation = Double => Double

enum Expression {
  case Literal(value: Double)
  case Addition(left: Expression, right: Expression)
  case Subtraction(left: Expression, right: Expression)
  case Multiplication(left: Expression, right: Expression)
  case Division(left: Expression, right: Expression)

  def eval: Double = {
    def loop(expr: Expression, cont: Continuation): Double =
      expr match {
        case Literal(value) => cont(value)
        case Addition(left, right) =>
          loop(left, l => loop(right, r => cont(l + r)))
        case Subtraction(left, right) =>
          loop(left, l => loop(right, r => cont(l - r)))
        case Multiplication(left, right) =>
          loop(left, l => loop(right, r => cont(l * r)))
        case Division(left, right) =>
          loop(left, l => loop(right, r => cont(l / r)))
      }

    loop(this, identity)
  }
  
  def +(that: Expression): Expression =
    Addition(this, that)

  def -(that: Expression): Expression =
    Subtraction(this, that)

  def *(that: Expression): Expression =
    Multiplication(this, that)

  def /(that: Expression): Expression =
    Division(this, that)
}
object Expression {
  def apply(value: Double): Expression =
    Literal(value)
}

5.3.4 Trampolining

Earlier we said that CPS utilizes the duality between function calls and returns: instead of returning a value we call a function with a value. This allows us to transform our code so it only has calls in tail positions. However, we still have a problem with stack safety. Scala’s runtimes don’t support full tail calls, so calls from a continuation to loop or from loop to a continuation will use a stack frame. We can use this same duality to avoid using the stack by, instead of making a call, returning a value that reifies the call we want to make. This idea is the core of trampolining. Let’s see it in action, which will help clear up what exactly this all means.

Our first step is to reify all the method calls made by the interpreter loop and the continuations. There are three cases: calls to loop, calls to a continuation, and, to avoid an infinite loop, the case when we’re done.

type Continuation = Option[Int] => Call

enum Call {
  case Loop(regexp: Regexp, index: Int, continuation: Continuation)
  case Continue(index: Option[Int], continuation: Continuation)
  case Done(index: Option[Int])
}

Now we update loop to return instances of Call instead of making the calls directly.

def loop(regexp: Regexp, idx: Int, cont: Continuation): Call =
  regexp match {
    case Append(left, right) =>
      val k: Continuation = _ match {
        case None    => Call.Continue(None, cont)
        case Some(i) => Call.Loop(right, i, cont)
      }
      Call.Loop(left, idx, k)

    case OrElse(first, second) =>
      val k: Continuation = _ match {
        case None => Call.Loop(second, idx, cont)
        case some => Call.Continue(some, cont)
      }
      Call.Loop(first, idx, k)

    case Repeat(source) =>
      val k: Continuation =
        _ match {
          case None    => Call.Continue(Some(idx), cont)
          case Some(i) => Call.Loop(regexp, i, cont)
        }
      Call.Loop(source, idx, k)

    case Apply(string) =>
      Call.Continue(
        Option.when(input.startsWith(string, idx))(idx + string.size),
        cont
      )

    case Empty =>
      Call.Continue(None, cont)
  }

This gives us an interpreter loop that returns values instead of making calls, and so does not consume stack space. However, we need to actually make these calls at some point, and doing this is the job of the trampoline. The trampoline is simply a tail recursive loop that makes calls until it reaches Done.

def trampoline(next: Call): Option[Int] =
  next match {
    case Call.Loop(regexp, index, continuation) =>
      trampoline(loop(regexp, index, continuation))
    case Call.Continue(index, continuation) =>
      trampoline(continuation(index))
    case Call.Done(index) => index
  }

Now every call has a corresponding return, so the stack usage is limited. Our interpreter can handle input of any size, up to the limits of available memory.

Here’s the complete code for reference.

// Define a type alias so we can easily write continuations
type Continuation = Option[Int] => Call

enum Call {
  case Loop(regexp: Regexp, index: Int, continuation: Continuation)
  case Continue(index: Option[Int], continuation: Continuation)
  case Done(index: Option[Int])
}

enum Regexp {
  def ++(that: Regexp): Regexp =
    Append(this, that)

  def orElse(that: Regexp): Regexp =
    OrElse(this, that)

  def repeat: Regexp =
    Repeat(this)

  def `*` : Regexp = this.repeat

  def matches(input: String): Boolean = {
    def loop(regexp: Regexp, idx: Int, cont: Continuation): Call =
      regexp match {
        case Append(left, right) =>
          val k: Continuation = _ match {
            case None    => Call.Continue(None, cont)
            case Some(i) => Call.Loop(right, i, cont)
          }
          Call.Loop(left, idx, k)

        case OrElse(first, second) =>
          val k: Continuation = _ match {
            case None => Call.Loop(second, idx, cont)
            case some => Call.Continue(some, cont)
          }
          Call.Loop(first, idx, k)

        case Repeat(source) =>
          val k: Continuation =
            _ match {
              case None    => Call.Continue(Some(idx), cont)
              case Some(i) => Call.Loop(regexp, i, cont)
            }
          Call.Loop(source, idx, k)

        case Apply(string) =>
          Call.Continue(
            Option.when(input.startsWith(string, idx))(idx + string.size),
            cont
          )

        case Empty =>
          Call.Continue(None, cont)
      }

    def trampoline(next: Call): Option[Int] =
      next match {
        case Call.Loop(regexp, index, continuation) =>
          trampoline(loop(regexp, index, continuation))
        case Call.Continue(index, continuation) =>
          trampoline(continuation(index))
        case Call.Done(index) => index
      }

    // Check we matched the entire input
    trampoline(loop(this, 0, opt => Call.Done(opt)))
      .map(idx => idx == input.size)
      .getOrElse(false)
  }

  case Append(left: Regexp, right: Regexp)
  case OrElse(first: Regexp, second: Regexp)
  case Repeat(source: Regexp)
  case Apply(string: String)
  case Empty
}
object Regexp {
  val empty: Regexp = Empty

  def apply(string: String): Regexp =
    Apply(string)
}
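
We can check the interpreter is now stack safe by rerunning the example from Section 5.3.1 that previously crashed with a StackOverflowError.

Regexp("a").repeat.matches("a" * 20000)
// res: Boolean = true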

Exercise: Trampolined Arithmetic

Convert the CPSed arithmetic interpreter we wrote earlier to a trampolined version.

The process to produce this code is very similar to the regular expression example. We just identify all the different types of calls (which are the same as in the regular expression example) and reify them.

type Continuation = Double => Call

enum Call {
  case Continue(value: Double, k: Continuation)
  case Loop(expr: Expression, k: Continuation)
  case Done(result: Double)
}

enum Expression {
  case Literal(value: Double)
  case Addition(left: Expression, right: Expression)
  case Subtraction(left: Expression, right: Expression)
  case Multiplication(left: Expression, right: Expression)
  case Division(left: Expression, right: Expression)

  def eval: Double = {
    def loop(expr: Expression, cont: Continuation): Call =
      expr match {
        case Literal(value) => Call.Continue(value, cont)
        case Addition(left, right) =>
          Call.Loop(
            left,
            l => Call.Loop(right, r => Call.Continue(l + r, cont))
          )
        case Subtraction(left, right) =>
          Call.Loop(
            left,
            l => Call.Loop(right, r => Call.Continue(l - r, cont))
          )
        case Multiplication(left, right) =>
          Call.Loop(
            left,
            l => Call.Loop(right, r => Call.Continue(l * r, cont))
          )
        case Division(left, right) =>
          Call.Loop(
            left,
            l => Call.Loop(right, r => Call.Continue(l / r, cont))
          )
      }

    def trampoline(call: Call): Double =
      call match {
        case Call.Continue(value, k) => trampoline(k(value))
        case Call.Loop(expr, k)      => trampoline(loop(expr, k))
        case Call.Done(result)       => result
      }

    trampoline(loop(this, x => Call.Done(x)))
  }

  def +(that: Expression): Expression =
    Addition(this, that)

  def -(that: Expression): Expression =
    Subtraction(this, that)

  def *(that: Expression): Expression =
    Multiplication(this, that)

  def /(that: Expression): Expression =
    Division(this, that)
}
object Expression {
  def apply(value: Double): Expression =
    Literal(value)
}

5.3.5 When Tail Recursion is Easy

Doing a full CPS conversion and trampoline can be quite involved. Some methods can be made tail recursive without such a large change. Remember these examples we looked at earlier?

def isntTailRecursive(count: Int): Int =
  count match {
    case 0 => 0
    case n => n + isntTailRecursive(n - 1)
  }

def isTailRecursive(count: Int): Int = {
  def loop(count: Int, accum: Int): Int =
    count match {
      case 0 => accum
      case n => loop(n - 1, accum + n)
    }
    
  loop(count, 0)
}

The tail recursive version doesn’t seem to involve the complexity of CPS. How can we relate this to what we’ve just learned, and when can we avoid the work of CPS and trampolining?

Let’s use substitution to show how the stack is used by each method, for a small value of count.

isntTailRecursive(2)
// expands to
(2 match {
  case 0 => 0
  case n => n + isntTailRecursive(n - 1)
})
// expands to
(2 + isntTailRecursive(1))
// expands to
(2 + (1 match {
        case 0 => 0
        case n => n + isntTailRecursive(n - 1)
      }))
// expands to
(2 + (1 + isntTailRecursive(0)))
// expands to
(2 + (1 + (0 match {
             case 0 => 0
             case n => n + isntTailRecursive(n - 1)
           })))
// expands to
(2 + (1 + (0)))
// expands to
3

Here each set of brackets indicates a new method call and hence a stack frame allocation.

Now let’s do the same for isTailRecursive.

isTailRecursive(2)
// expands to
(loop(2, 0))
// expands to
(2 match {
   case 0 => 0
   case n => loop(n - 1, 0 + n)
 })
// expands to
(loop(1, 2))
// call to loop is a tail call, so no stack frame is allocated 
// expands to
(1 match {
   case 0 => 2
   case n => loop(n - 1, 2 + n)
 })
// expands to
(loop(0, 3))
// call to loop is a tail call, so no stack frame is allocated 
// expands to
(0 match {
   case 0 => 3
   case n => loop(n - 1, 3 + n)
 })
// expands to
(3)
// expands to
3

The non-tail recursive function computes the result (2 + (1 + (0))). If we look closely, we’ll see that the tail recursive version computes (((2) + 1) + 0), which simply accumulates the result in the reverse order. This works because addition is associative, meaning (a + b) + c == a + (b + c). This is our first criterion for using the “easy” method of converting to a tail recursive form: the operation that accumulates results must be associative.

This doesn’t explain, though, how we come to realize that addition is the correct operation to use. The second criterion is that we don’t need any memory beyond the partial result calculated from the data we’ve already seen. Some implications of this are that we can stop at any time and have a usable result, and that we are only applying a single operation to the data. This is not the case in the regular expression example. For example, we have the following code in the Append case:

case Append(left, right) =>
  loop(left, idx).flatMap(i => loop(right, i))

To compute the result for the Append we need to compute and combine results from both left and right. So when we have computed the result for right we need to remember both the result from left and that we’re combining the two results using the rule for Append rather than, say, OrElse. Remembering this is exactly what the continuation does, and it is what stops us from using the easy method we used when summing integers.

So, in summary, if we are applying only a single associative operation to data we can use the simple method for writing a tail recursive method:

  1. define a structurally recursive loop with an additional parameter that is the partial result or accumulator;
  2. in the base cases return the accumulator; and
  3. in the recursive cases update the accumulator and call the loop in tail position.

You might be wondering how we handle tree-shaped data with this technique. One consequence of an associative operation is that we can transform any sequence of operations into a list-shaped sequence. If, for example, we have an expression tree that suggests we should call operations in the order (1 + 2) + (3 + 4) (where I’m using + to indicate the operation) we can rewrite that to (((1 + 2) + 3) + 4) via associativity. So we can transform our tree into a list and then apply the recipe above.
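
Here’s a sketch of this idea using a hypothetical Tree type: we keep a list of the subtrees still to visit, and accumulate the result with the single associative operation, here addition.

import scala.annotation.tailrec

enum Tree {
  case Leaf(value: Int)
  case Node(left: Tree, right: Tree)
}

def sum(tree: Tree): Int = {
  // open is the list of subtrees we have yet to visit; accum is the
  // partial result accumulated so far.
  @tailrec
  def loop(open: List[Tree], accum: Int): Int =
    open match {
      case Nil                     => accum
      case Tree.Leaf(v) :: rest    => loop(rest, accum + v)
      case Tree.Node(l, r) :: rest => loop(l :: r :: rest, accum)
    }

  loop(List(tree), 0)
}

sum(Tree.Node(Tree.Node(Tree.Leaf(1), Tree.Leaf(2)), Tree.Leaf(3)))
// res: Int = 6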

5.4 Conclusions

In this chapter we’ve discussed why we might want to build interpreters, and seen techniques for building them. To recap, the core of the interpreter strategy is a separation between description and action. The description is the program, and the interpreter is the action that carries out the program. This separation allows for composition of programs, and for managing effects by delaying them until the program is run. We sometimes call this structure an algebra, with constructors and combinators defining programs and destructors defining interpreters. Although the name of the strategy focuses on the interpreter, the design of the program is just as important, as it is the user interface through which the programmer interacts with the system.

Our starting implementation strategy is reification of the algebra’s constructors and compositional methods as an algebraic data type. The interpreter is then a structural recursion over this ADT. We saw that the straightforward implementation is not stack-safe, which led us to introduce the ideas of tail recursion and continuations. We reified continuations as functions, and saw that we can convert any program into continuation-passing style, which puts every method call in tail position. Because Scala’s runtimes cannot convert all calls in tail position into tail calls, we reified calls and returns into data structures used by a recursive loop called a trampoline. Underlying all these strategies is the concept of duality. We have seen a duality between functions and data, which we utilize in reification, and a duality between calling functions and returning data, which we use in continuations and trampolines.

Stack-safe interpreters are important in many situations, but the code is harder to read than the basic structural recursion. In some contexts a basic interpreter may be just fine. It’s unlikely to run out of stack space when evaluating a straightforward expression tree, as in the arithmetic example. The depth of such a tree grows logarithmically with the number of elements, so only extremely large trees will have sufficient depth that stack safety becomes relevant. However, in the regular expression example the stack consumption is determined not by the depth of the regular expression tree, but by the length of the input being matched. In this situation stack safety is more important. There may still be other constraints that allow a simpler implementation; for example, we may know the library will only be used in situations where inputs are guaranteed to be small. As always, only use coding techniques where they make sense.

These ideas are classics in programming language theory. Definitional Interpreters for Higher-Order Programming Languages [Reynolds 1972] details defunctionalization, a limited form of reification and continuation passing style. (If you want to read this paper, I suggest the re-typeset version from 1998, which is much more readable than the original typewriter version.) These ideas are expanded on in Defunctionalization at Work [Danvy and Nielsen 2001]. Continuation-Passing Style, Defunctionalization, Accumulations, and Associativity [Gibbons 2022] is a very readable and elegant paper that highlights the importance of associativity in these transformations.


In this part of the book we move on to type classes. We looked at the implementation of type classes in Chapter 4. Our focus here is on a handful of specific type classes that are very useful for day-to-day programming tasks and also serve as conceptual models that can drive program design. In this part we’ll be looking more at their use for day-to-day programming, while the case studies will focus on their role in design.

In Chapter 6 we introduce the Cats library. Cats provides implementations of the type classes we’re interested in, and so using it saves a lot of time and typing.

TODO: complete description

6 Using Cats

In this Chapter we’ll learn how to use the Cats library. Cats provides two main things: type classes and their instances, and some useful data structures. Our focus will mostly be on the type classes, though we will touch on the data structures where appropriate.

6.1 Quick Start

The easiest, and recommended, way to use Cats is to add the following imports:

import cats.*
import cats.syntax.all.*

The first import adds all the type classes (and makes their instances available, as they are found in the companion objects). The second import adds the syntax helpers, which make the type classes easier to work with. Note we don’t need to import cats.{*, given} as, at the time of writing, Cats is written in Scala 2 style (using implicits) and these are imported by the wildcard import.

If we want to use some of Cats’ data structures, we also need to add

import cats.data.*
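
This brings data structures such as NonEmptyList, a list guaranteed to have at least one element, into scope. For example:

val wholeNumbers = NonEmptyList.of(1, 2, 3)
// wholeNumbers: NonEmptyList[Int] = NonEmptyList(1, List(2, 3))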

6.2 Using Cats

Let’s now see how we work with Cats, using cats.Show as an example.

Show is Cats’ equivalent of the Display type class we defined in Section 4.5. It provides a mechanism for producing developer-friendly console output without using toString. Here’s an abbreviated definition:

package cats

trait Show[A] {
  def show(value: A): String
}

The easiest way to use Show is with the wildcard import above. However, we can also import Show directly from the cats package:

import cats.Show

The companion object of every Cats type class has an apply method that locates an instance for any type we specify:

val showInt = Show.apply[Int]

Once we have an instance we can call methods on it.

showInt.show(42)
// res0: String = "42"

More common, however, is to use the syntax or extension methods, which we imported with import cats.syntax.all.*. In the case of Show, an extension method show is defined.

42.show
// res1: String = "42"

If, for some reason, we wanted just the syntax for show, we could import cats.syntax.show.

import cats.syntax.show.* // for show

6.2.1 Defining Custom Instances

We can define an instance of Show simply by implementing the trait for a given type:

import java.util.Date

given dateShow: Show[Date] with 
  def show(date: Date): String =
    s"${date.getTime}ms since the epoch."
new Date().show
// res2: String = "1723635510011ms since the epoch."

However, Cats also provides a couple of convenient methods to simplify the process. There are two construction methods on the companion object of Show that we can use to define instances for our own types:

object Show {
  // Convert a function to a `Show` instance:
  def show[A](f: A => String): Show[A] =
    ???

  // Create a `Show` instance from a `toString` method:
  def fromToString[A]: Show[A] =
    ???
}

These allow us to quickly construct instances with less ceremony than defining them from scratch:

given dateShow: Show[Date] =
  Show.show(date => s"${date.getTime}ms since the epoch.")

As you can see, the code using construction methods is much terser than the code without. Many type classes in Cats provide helper methods like these for constructing instances, either from scratch or by transforming existing instances for other types.

6.2.1.1 Exercise: Cat Show

Re-implement the Cat application from Section 4.5.1 using Show instead of Display.

Using this data type to represent a well-known type of furry animal:

final case class Cat(name: String, age: Int, color: String)

create an implementation of Display for Cat that returns content in the following format:

NAME is a AGE year-old COLOR cat.

Then use the type class on the console or in a short demo app: create a Cat and print it to the console:

// Define a cat:
val cat = Cat(/* ... */)

// Print the cat!

First let’s import everything we need from Cats.

import cats.*
import cats.syntax.all.*

Our definition of Cat remains the same:

final case class Cat(name: String, age: Int, color: String)

In the companion object we replace our Display instance with an instance of Show using one of the definition helpers discussed above:

given catShow: Show[Cat] = Show.show[Cat] { cat =>
  val name  = cat.name.show
  val age   = cat.age.show
  val color = cat.color.show
  s"$name is a $age year-old $color cat."
}

Finally, we use the Show interface syntax to print our instance of Cat:

println(Cat("Garfield", 38, "ginger and black").show)
// Garfield is a 38 year-old ginger and black cat.

6.3 Example: Eq

We will finish off this chapter by looking at another useful type class: cats.Eq. Eq is designed to support type-safe equality and to address some annoyances caused by Scala’s built-in == operator.

Almost every Scala developer has written code like this before:

List(1, 2, 3).map(Option(_)).filter(item => item == 1)
// warning: Option[Int] and Int are unrelated: they will most likely never compare equal
// res: List[Option[Int]] = List()

Ok, many of you won’t have made such a simple mistake as this, but the principle is sound. The predicate in the filter clause always returns false because it is comparing an Int to an Option[Int].

This is programmer error—we should have compared item to Some(1) instead of 1. However, it’s not technically a type error because == works for any pair of objects, no matter what types we compare. Eq is designed to add some type safety to equality checks and work around this problem.

6.3.1 Equality, Liberty, and Fraternity

We can use Eq to define type-safe equality between instances of any given type:

package cats

trait Eq[A] {
  def eqv(a: A, b: A): Boolean
  // other concrete methods based on eqv...
}

The interface syntax, defined in cats.syntax.eq, provides two methods for performing equality checks, provided there is an instance Eq[A] in scope: === compares two values for equality, and =!= compares them for inequality.

6.3.2 Comparing Ints

Let’s look at a few examples. First we import the type class:

import cats.*

Now let’s grab an instance for Int:

val eqInt = Eq[Int]

We can use eqInt directly to test for equality:

eqInt.eqv(123, 123)
// res1: Boolean = true
eqInt.eqv(123, 234)
// res2: Boolean = false

Unlike Scala’s == method, if we try to compare objects of different types using eqv we get a compile error:

eqInt.eqv(123, "234")
// error:
// Found:    ("234" : String)
// Required: Int
// eqInt.eqv(123, "234")
//                ^^^^^

We can also import the interface syntax in cats.syntax.eq to use the === and =!= methods:

import cats.syntax.all.* // for === and =!=
123 === 123
// res4: Boolean = true
123 =!= 234
// res5: Boolean = true

Again, comparing values of different types causes a compiler error:

123 === "123"
// error:
// Found:    ("123" : String)
// Required: Int
// 123 === "123"
//         ^^^^^

6.3.3 Comparing Options

Now for a more interesting example—Option[Int].

Some(1) === None
// error:
// value === is not a member of Some[Int] - did you mean Some[Int].==?
// Some(1) === None
// ^^^^^^^^^^^

We have received an error here because the types don’t quite match up. We have Eq instances in scope for Int and Option[Int] but the values we are comparing are of type Some[Int]. To fix the issue we have to re-type the arguments as Option[Int]:

(Some(1) : Option[Int]) === (None : Option[Int])
// res8: Boolean = false

We can do this in a friendlier fashion using the Option.apply and Option.empty methods from the standard library:

Option(1) === Option.empty[Int]
// res9: Boolean = false

or using special syntax from cats.syntax.option:

1.some === none[Int]
// res10: Boolean = false
1.some =!= none[Int]
// res11: Boolean = true

6.3.4 Comparing Custom Types

We can define our own instances of Eq using the Eq.instance method, which accepts a function of type (A, A) => Boolean and returns an Eq[A]:

import java.util.Date

given dateEq: Eq[Date] =
  Eq.instance[Date] { (date1, date2) =>
    date1.getTime === date2.getTime
  }
val x = new Date() // now
val y = new Date() // a bit later than now
x === x
// res12: Boolean = true
x === y
// res13: Boolean = false

6.3.4.1 Exercise: Equality, Liberty, and Felinity

Implement an instance of Eq for our running Cat example:

final case class Cat(name: String, age: Int, color: String)

Use this to compare the following pairs of objects for equality and inequality:

val cat1 = Cat("Garfield",   38, "orange and black")
val cat2 = Cat("Heathcliff", 33, "orange and black")

val optionCat1 = Option(cat1)
val optionCat2 = Option.empty[Cat]

First we need our Cats imports. In this exercise we’ll be using the Eq type class and the Eq interface syntax, so we start by importing that.

import cats.*
import cats.syntax.all.* 

Our Cat class is the same as ever:

final case class Cat(name: String, age: Int, color: String)

We bring the Eq instances for Int and String into scope for the implementation of Eq[Cat]:

given catEqual: Eq[Cat] =
  Eq.instance[Cat] { (cat1, cat2) =>
    (cat1.name  === cat2.name ) &&
    (cat1.age   === cat2.age  ) &&
    (cat1.color === cat2.color)
  }

Finally, we test things out in a sample application:

val cat1 = Cat("Garfield",   38, "orange and black")
// cat1: Cat = Cat(name = "Garfield", age = 38, color = "orange and black")
val cat2 = Cat("Heathcliff", 33, "orange and black")
// cat2: Cat = Cat(name = "Heathcliff", age = 33, color = "orange and black")

cat1 === cat2
// res15: Boolean = false
cat1 =!= cat2
// res16: Boolean = true

val optionCat1 = Option(cat1)
// optionCat1: Option[Cat] = Some(
//   value = Cat(name = "Garfield", age = 38, color = "orange and black")
// )
val optionCat2 = Option.empty[Cat]
// optionCat2: Option[Cat] = None

optionCat1 === optionCat2
// res17: Boolean = false
optionCat1 =!= optionCat2
// res18: Boolean = true

7 Monoids and Semigroups

In this s