Advanced Programming 2018 – Introduction to Monads
Advanced Programming 2018
Introduction to Monads
Andrzej Filinski
andrzej@di.ku.dk
Department of Computer Science
University of Copenhagen
September 11, 2018
1 / 28
Where are we?
I In first lecture, saw some general FP concepts and constructs:
I (Pure) value-oriented computation paradigm
I Functions as values
I Algebraic datatypes and pattern matching
I In second, looked at more advanced, Haskell-specific features:
I Type classes, including type-constructor classes
I Laziness
I Purely functional IO
I List comprehensions
I Today: functional programming with monads.
I Conceptually simple idea (literally, just a few lines of code)
I But profound impact on Haskell programming style
I Even reflected in the official language logo (a stylized λ combined with >>=)
I Draws upon many topics from previous two lectures
2 / 28
Why monads?
I Misconception: “selling out” on pure functional programming.
I Allowing mutable store, exceptions, I/O, … back in
I … after working so hard to banish them!
I Actual goal: organizing all “impurity” into distinct effects.
I Type system keeps track of which functions have which effects.
I Many still have exactly none.
I Can be as fine-grained or coarse-grained as programmer wishes
I E.g., distinguish read-write vs. read-only vs. append-only state
I Using just the standard Haskell type system.
I Each kind of effect is encapsulated as a monad.
I Just another type class, like Monoid or Functor, with a few laws.
I Programming and reasoning remain completely pure.
I Framework can directly accommodate problem-specific effects
(e.g., backtracking search with oracles).
I Same syntax as “standard” effects.
I Bulk of program code doesn’t need to change when adding or
adjusting effect.
3 / 28
A bit of historical context for monads
I Concept first formulated 1960s–70s, in category theory.
I Particularly abstract branch of mathematics, no apparent
connection to (especially impure) programming.
I Computational lambda calculus and monads (E. Moggi, 1989)
I Work in context of denotational semantics, a formalism for
describing language features using pure mathematical functions
I Recognized that almost all common notions of effects were
instances of the same mathematical pattern.
I “Test of Time” award in 2009 for most influential paper from
1989 Logic in Computer Science conference.
I Comprehending monads (P. Wadler, 1990)
I Recognized that almost all of Moggi’s constructions could be
reformulated in a pure functional language, not mathematics.
I But presentation of monads still rooted in categorical tradition.
I Proposed a generalization of list-comprehension notation, which
eventually evolved into current do-notation.
4 / 28
A brief history of monads, continued
I The essence of functional programming (P. Wadler, 1992)
I Tutorial introduction to “monadic style” of programming,
especially for writing interpreters.
I Relatively close to modern syntax and terminology, but not yet
using type classes.
I Various developments, 1990s–2010s
I Gradual evolution into current form
I Lots of generalizations and refinements (e.g., Applicatives)
I Extensive and powerful (but also somewhat complicated)
Monad Transformer Library (MTL) for GHC
I Staggering number (80+) of “monad tutorials” out there…
I Advanced Programming, 2018
I Back to the essentials, but in modern context.
I Really not that complicated, but don’t despair:
I “Young man, in Mathematics we don’t understand things. We just
get used to them.” – J. von Neumann, answering a student.
5 / 28
Motivating example 1: Exceptions/errors
I Consider function that may need to signal an error condition
I Often, only one possible error, e.g., “key not found”.
I Observation: a partial function of type a -> b can be
represented as total function of type a -> Maybe b .
I Typical use pattern:
case lookup k m of
  Just v  -> … -- continue with computation, using v
  Nothing -> … -- deal with error
I When looking up two keys:
case lookup k1 m of
  Just v1 -> case lookup k2 m of
               Just v2 -> … -- continue, using v1 and v2
               Nothing -> … -- deal with error
  Nothing -> … -- deal with error
I Exception-passing style: explicitly check all subcomputation
results and propagate failures
I Makes control flow apparent, but clutters everything.
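I A concrete (hypothetical) instance of this pattern, using the Prelude's
lookup :: Eq k => k -> [(k, v)] -> Maybe v:
  addTwo :: String -> String -> [(String, Int)] -> Maybe Int
  addTwo k1 k2 m =
    case lookup k1 m of
      Just v1 -> case lookup k2 m of
                   Just v2 -> Just (v1 + v2)
                   Nothing -> Nothing
      Nothing -> Nothing
  -- addTwo "x" "y" [("x",1),("y",2)] == Just 3
  -- addTwo "x" "z" [("x",1),("y",2)] == Nothing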
6 / 28
Motivating example 2: Mutable, global state
I Consider function that may silently modify a global variable.
I For concreteness, assume said variable has type Int.
I E.g., random-number seed, allocation counter, …
I Observation: a side-effecting function of type a -> b can be
represented as pure function of type (a, Int) -> (b, Int),
or equivalently (in curried style), a -> Int -> (b, Int).
I Typical sequencing pattern, when computing g (f a) (as
effectful functions):
let (b, i1) = f a i0 -- i0 is initial value of var.
    (c, i2) = g b i1
in … -- i2 is final value of var.
I State-passing style: explicitly thread current value of global
variable through all function calls.
I Makes data flow and dependencies apparent, but clutters
everything.
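I A small illustrative sketch of this style (hypothetical fresh/twoLabels,
with the Int playing the role of an allocation counter):
  fresh :: String -> Int -> (String, Int)
  fresh prefix i = (prefix ++ show i, i + 1)
  twoLabels :: Int -> ((String, String), Int)
  twoLabels i0 =
    let (l1, i1) = fresh "x" i0
        (l2, i2) = fresh "y" i1
    in ((l1, l2), i2)
  -- twoLabels 0 == (("x0", "y1"), 2)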
7 / 28
A unified view
I In both examples, same pattern: effectful function of type
a -> b corresponds to total, pure function of type a -> M b .
I M t is the type of computations returning a result of type t .
I For errors, type M t = Maybe t
I For state, type M t = Int -> (t, Int)
I Computation that just returns a given value: unit :: a -> M a
I For errors: unit a = Just a
I For state: unit a = \s -> (a, s)
I Computation that applies effectful function to result (if any) of
effectful computation: bind :: M a -> (a -> M b) -> M b
I For errors: bind (Just a) f = f a
bind Nothing f = Nothing
I For state: bind m f = \s -> let (a,s1) = m s in (f a) s1
I Such a triple (M, unit, bind) constitutes a monad.
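I Sketch: the two-key lookup from before, rewritten with the error monad's
unit and bind (hypothetical lookupBoth; local definitions as above):
  lookupBoth :: Eq k => k -> k -> [(k, v)] -> Maybe (v, v)
  lookupBoth k1 k2 m =
    lookup k1 m `bind` \v1 ->
    lookup k2 m `bind` \v2 ->
    unit (v1, v2)
    where unit            = Just
          bind (Just a) f = f a
          bind Nothing  _ = Nothing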
8 / 28
The Monad class
I Class of type constructors, like Functor.
class Applicative m => Monad m where
  return :: a -> m a                   -- unit
  (>>=)  :: m a -> (a -> m b) -> m b   -- bind
  (>>)   :: m a -> m b -> m b          -- bind, ignoring first value
  m1 >> m2 = m1 >>= \_ -> m2           -- default implementation
  fail   :: String -> m a
  fail s = error s                     -- default implementation
I (Ignore the Applicative m constraint for now.)
I fail may have non-default definition, esp. for monads representing
potentially failing computations; will see relevance later.
I Predefined instances in standard prelude, e.g.:
instance Monad Maybe where
  return a = Just a
  Just a  >>= f = f a
  Nothing >>= f = Nothing
  fail s  = Nothing
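I A small usage sketch of the Maybe instance (hypothetical safeDiv/calc):
  safeDiv :: Int -> Int -> Maybe Int
  safeDiv _ 0 = Nothing
  safeDiv x y = Just (x `div` y)
  calc :: Int -> Int -> Int -> Maybe Int
  calc a b c = safeDiv a b >>= \q -> safeDiv q c >>= \r -> return (r + 1)
  -- calc 12 3 2 == Just 3;  calc 12 0 2 == Nothing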
9 / 28
A Monad instance for state
I Start by defining suitable new type constructor (not synonym):
newtype IntState a = St {runSt :: Int -> (a, Int)}
instance Monad IntState where
  return a = St (\s -> (a, s))
  m >>= f  = St (\s0 -> let (a,s1) = runSt m s0
                        in runSt (f a) s1)
I In fact, can generalize to arbitrarily typed state:
newtype State s a = St {runSt :: s -> (a, s)}
instance Monad (State s) where
  -- defs of return, (>>=) exactly as above
I For any fixed s, (State s) itself is still a type constructor.
I Likewise, can define a general error monad parameterized by type of
error values (exception data), where Maybe a ≈ Either () a:
data Either a b = Left a | Right b   -- in standard prelude
instance Monad (Either e) where
  return a = Right a
  (Left e)  >>= f = Left e
  (Right a) >>= f = f a
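I Usage sketch for the Either e instance (hypothetical safeDiv', where the
error value carries an explanation rather than just Nothing):
  safeDiv' :: Int -> Int -> Either String Int
  safeDiv' _ 0 = Left "division by zero"
  safeDiv' x y = Right (x `div` y)
  -- safeDiv' 10 2 >>= \q -> safeDiv' q 0  ==  Left "division by zero"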
10 / 28
Monad laws
I All instances M of Monad should satisfy the three monad laws:
1. return v >>= f == f v
2. m >>= (\a -> return a) == m
3. (m >>= f) >>= g == m >>= (\a -> (f a >>= g))
(for all types a, b, and c; and all values v :: a, m :: M a,
f :: a -> M b, and g :: b -> M c.)
I Roughly say that composition of represented effectful functions
behaves as expected (in particular, is associative).
I Note: these equations are between monadic-type expressions,
which may not have Eq instances.
I Interpret “==” as “behaves indistinguishably from”.
I Can check laws by essentially mathematical reasoning about
purely functional expressions, “replacing equals by equals”
I If Monad instance satisfies laws, clever optimizations possible.
I Including using imperative implementation “under the hood”,
such as destructive state updates.
I Details beyond the scope of this course.
11 / 28
Verifying the monad laws
For example, checking Law 1 for the State monad:
return v >>= f
== — def. of >>= for State
St (\s0 -> let (a,s1) = runSt (return v) s0 in runSt (f a) s1)
== — def. of return for State
St (\s0 -> let (a,s1) = runSt (St (\s -> (v, s))) s0
in runSt (f a) s1)
== — runSt (St h) == h (accessor . constructor == id)
St (\s0 -> let (a,s1) = (\s -> (v, s)) s0 in runSt (f a) s1)
== — (\x -> e1) e2 == e1[x := e2] (subst. actual for formal)
St (\s0 -> let (a,s1) = (v, s0) in runSt (f a) s1)
== — let (x,y) = (e1,e2) in e3 == e3[x := e1, y := e2]
St (\s0 -> runSt (f v) s0)
== — \x -> e x == e, if no occurrences of x in e
St (runSt (f v))
== — St (runSt m) == m (constructor . accessor == id)
f v
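For comparison, Law 1 for the Maybe instance is immediate:
return v >>= f  ==  Just v >>= f  ==  f v
(by the definitions of return and >>= for Maybe).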
12 / 28
Monads are Functors and Applicatives
I Category-theoretically, every monad is also a functor.
I For any Monad instance M, can derive a Functor instance by:
instance Functor M where
  fmap f m = m >>= \a -> return (f a)   -- or fmap = liftM
I Fact: If M satisfies the monad laws, fmap will satisfy functor laws.
I GHC 7.10 actually made Monad a subclass of Functor.
I Must have instance Functor M to even be able to declare
instance Monad M.
I Old code and textbook examples no longer compile as written.
I Fortunately, can still define fmap using return and >>=, as above.
I Can just include above instance declaration verbatim (with M
replaced by the name of your Monad), to satisfy compiler.
I Likewise, Monad is now a subclass of Applicative.
I Generally enough to include following magic phrase (for each M):
instance Applicative M where
  pure = return; (<*>) = ap   -- from Control.Monad
I Will not say more about Applicative operations here, but you’re
welcome to use them if you like.
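I For a concrete monad such as IntState from earlier, the required boilerplate
could look like this (a sketch; liftM and ap come from Control.Monad):
  import Control.Monad (ap, liftM)
  instance Functor IntState where
    fmap = liftM
  instance Applicative IntState where
    pure  = return
    (<*>) = ap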
13 / 28
General programming with effectful functions
I Using return and >>=, can now combine computations in
generic way, even when they have effects.
I Consider simple higher-order function:
pair :: (a -> b) -> (a -> c) -> a -> (b, c)
pair f g a = (f a, g a)
I What if f and g may have effects? Then so will their pairing:
pairM :: Monad m => (a -> m b) -> (a -> m c) -> a -> m (b, c)
pairM f g a = f a >>= \b -> g a >>= \c -> return (b, c)
I Note: Same code works, whether m represents partial or
state-manipulating computations (or any other monad).
I Had we not used monads, would need separate definitions:
pairE f g a =
  case f a of
    Just b  -> case g a of
                 Just c  -> Just (b,c)
                 Nothing -> Nothing
    Nothing -> Nothing
pairS f g a =
  \s0 -> let (b,s1) = f a s0; (c,s2) = g a s1 in ((b,c), s2)
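I Quick usage sketch (hypothetical values), instantiating m to Maybe:
  pairM (\x -> Just (x+1)) (\x -> Just (x*2)) 10  ==  Just (11, 20)
  pairM (\x -> Just (x+1)) (\_ -> Nothing)    10  ==  Nothing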
14 / 28
Effectful operations in a monad
I Have seen how to combine (e.g.) state-manipulating functions
in general way.
I But how do we actually access the state?
I Following instance declaration, we can also define additional
functions, specific to the particular monad, e.g.:
newtype IntState a = St {runSt :: Int -> (a, Int)}
instance Monad IntState where …
getState :: IntState Int                      -- read state
getState = St (\i -> (i, i))
putState :: Int -> IntState ()                -- write state
putState i' = St (\i -> ((), i'))
modifyState :: (Int -> Int) -> IntState ()    -- "bump" state
modifyState f = St (\i -> ((), f i))
I Will see operations for other monads shortly
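I A small usage sketch of these operations (hypothetical freshName):
  freshName :: String -> IntState String
  freshName prefix =
    getState >>= \n ->
    putState (n + 1) >>
    return (prefix ++ show n)
  -- runSt (freshName "x") 0 == ("x0", 1)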
15 / 28
do-notation
I For increased readability, Haskell offers a convenient notation
for writing chains of monadic computations, e.g.:
pairM :: Monad m => (a -> m b) -> (a -> m c) -> a -> m (b, c)
pairM f g a = do b <- f a; c <- g a; return (b, c)
I General syntax: do bind1; …; bindn; mexp0 (n ≥ 0)
I Each bindi may be one of three forms: “pati <- mexpi”, “mexpi”,
or “let pati = expi”
I All mexpi must be of monadic type (with same monad).
I The semicolons are normally replaced by newlines + indentation.
I All uses of do-notation are simplified (“desugared”) into
standard monad operations early in the compilation process.
I First step: bring all do’s into form with exactly one bind:
  do mexp0  ⇒  mexp0
  do bind1; bind2; …; bindn; mexp0
    ⇒  do bind1; (do bind2; …; bindn; mexp0)   (use repeatedly while n ≥ 2)
I Then only have to deal with do’s of form “do bind1; mexp0”.
16 / 28
Desugaring do-notation, continued
I More desugaring rules:
do x <- mexp1; mexp0      ⇒  mexp1 >>= \x -> mexp0
do mexp1; mexp0           ⇒  mexp1 >> mexp0
do let pat = exp; mexp0   ⇒  let pat = exp in mexp0
I But monadic bindings with (refutable) patterns are a bit special:
do pat <- mexp1; mexp0
  ⇒  let f pat = mexp0
         f _   = fail "Pattern matching failure at ..."
     in mexp1 >>= f
I fail is defined by the Monad instance, not necessarily as error
I A somewhat obscure feature, but sometimes convenient (will see later)
I Example:
do c <- getChar
   putStr "Hi"
   return [c]
⇒  getChar >>= \c ->
    putStr "Hi" >>
    return [c]
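I A further sketch of a refutable pattern in a do-block (hypothetical firstKey,
in the Maybe monad, where fail _ = Nothing):
  firstKey :: [(String, Int)] -> Maybe String
  firstKey xs = do (k, _) : _ <- Just xs
                   return k
  -- firstKey [("a",1)] == Just "a";  firstKey [] == Nothing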
17 / 28
A few more, frequently useful monads
I Have already seen general state (State s) and exception
(Either e) monads.
I Let’s see a few others:
I Read-only state (Reader s)
I Accumulating state (Writer s)
I Nondeterminism/backtracking ([])
I I/O (SimpleIO)
18 / 28
Read-only state
I Useful for parameterizing computation with some additional
data that will stay constant throughout computation.
I A simplification of the State monad constructor:
newtype Reader d a = Rd {runRd :: d -> a}
instance Monad (Reader d) where
  return a = Rd (\d -> a)
  m >>= f  = Rd (\d -> let a = runRd m d in runRd (f a) d)
ask :: Reader d d
ask = Rd (\d -> d)
local :: (d -> d) -> Reader d a -> Reader d a
local f m = Rd (\d -> runRd m (f d))
  -- often used as: local (const d') m
19 / 28
Read-only state, continued
I Sample use: expression evaluator
data Expr = Const Int | Var String | Plus Expr Expr
          | Let String Expr Expr
eval :: Expr -> Reader (String -> Int) Int
eval (Const n) = return n
eval (Var x)   = do d <- ask; return (d x)
eval (Plus e1 e2) =
  do n1 <- eval e1; n2 <- eval e2; return (n1+n2)
eval (Let x e1 e2) =
  do n1 <- eval e1
     local (\d -> \y -> if y == x then n1 else d y)
           (eval e2)
evalTop :: Expr -> Int
evalTop e = runRd (eval e)
                  (\x -> error $ "unbound variable: " ++ x)
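I A quick check of the evaluator (hypothetical test expression):
  evalTop (Let "x" (Const 3) (Plus (Var "x") (Const 4))) == 7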
20 / 28
Accumulating state
I Sometimes computations can only “add” to an accumulator:
I Append string to log
I Increment event counter
I Possibly adjust high water mark to new maximum
I Want to make it manifest from types that computations can
neither read from the accumulator, nor erase it:
newtype Writer s a = Wr {runWr :: (a, s)}
instance Monoid s => Monad (Writer s) where
  return a = Wr (a, mempty)
  m >>= f  = let (a, s1) = runWr m
                 (b, s2) = runWr (f a)
             in Wr (b, s1 `mappend` s2)
tell :: s -> Writer s ()
tell s = Wr ((), s)
runWr (do tell "foo"; tell "bar"; return 5)   -- (5, "foobar")
I Fact: if s satisfies monoid laws, Writer s will satisfy monad ones.
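I A small logging sketch in this monad (hypothetical double'):
  double' :: Int -> Writer [String] Int
  double' n = tell ["doubling " ++ show n] >>= \_ -> return (2 * n)
  -- runWr (double' 5 >>= double') == (20, ["doubling 5", "doubling 10"])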
21 / 28
Nondeterminism
I Sometimes want to express that several results are possible
from a computation.
I E.g., a function returning an arbitrary element of a list or set
I Standard prelude declares a Monad instance for lists:
instance Monad [] where       -- remember: type [t] is [] t
  return a = [a]
  m >>= f  = concat (map f m) -- concat :: [[a]] -> [a]
  fail s   = []
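I A small usage sketch of the list monad (hypothetical pairs), to be contrasted
with the comprehension notation below:
  pairs :: [a] -> [b] -> [(a, b)]
  pairs xs ys = xs >>= \x -> ys >>= \y -> return (x, y)
  -- pairs [1,2] "ab" == [(1,'a'),(1,'b'),(2,'a'),(2,'b')]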
I List comprehensions become simply do-notation:
I [exp | qual1, …, qualn]  ⇒  do qual1; …; qualn; [exp]
I Note: using [exp] instead of return exp to force list monad.
I Generators x <- lexp simply kept as monadic bindings.
I Boolean guards in qualifiers become refutable bindings:
  bexp  ⇒  True <- [bexp]
I Or: bexp  ⇒  if bexp then return () else fail ""
22 / 28
Monadic I/O
I Defined last time:
  data SimpleIO a = Done a | PutChar Char (SimpleIO a)
                  | GetChar (Char -> SimpleIO a)
I Can organize as a monad, by specifying how to concatenate request
sequences:
instance Monad SimpleIO where
  return a = Done a
  Done a      >>= f = f a
  PutChar c m >>= f = PutChar c (m >>= f)
  GetChar h   >>= f = GetChar (\c -> h c >>= f)
myPutChar :: Char -> SimpleIO ()
myPutChar c = PutChar c (return ())
myGetChar :: SimpleIO Char
myGetChar = GetChar (\c -> return c)
23 / 28
I/O, continued
I Can define further I/O operations by normal monadic
sequencing
myPutStr :: String -> SimpleIO ()
myPutStr s = mapM_ myPutChar s
  -- using mapM_ :: Monad m => (a -> m b) -> [a] -> m ()
  -- mapM_ f [a1,…,an] = do f a1; …; f an; return ()
I In particular, can check that
  myPutStr "AP" == PutChar 'A' (PutChar 'P' (Done ()))
I Can also turn SimpleIO computations into “real” IO.
perform :: SimpleIO a -> IO a
perform (Done a) = return a
perform (PutChar c m) = putChar c >> perform m
perform (GetChar h) = getChar >>= \c -> perform (h c)
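I A small usage sketch (hypothetical echoTwice):
  echoTwice :: SimpleIO ()
  echoTwice = myGetChar >>= \c -> myPutStr [c, c]
  -- perform echoTwice reads one character and writes it twice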
24 / 28
Combining monads
I Have seen a range of basic monadic effects
I Maybe 2–3 more are commonly used, of similar complexity.
I But how do we deal with programs that use multiple effects?
I In general, have to create custom-tailored monad for any
particular combination.
I Example: exceptions and (persistent) state
newtype ExnState s e a =
  ExSt {runExSt :: s -> (Either e a, s)}
instance Monad (ExnState s e) where
  return a = ExSt (\s -> (Right a, s))
  m >>= f  = ExSt (\s -> case runExSt m s of
                           (Left e, s')  -> (Left e, s')
                           (Right a, s') -> runExSt (f a) s')
putState :: s -> ExnState s e ()
putState s' = ExSt (\s -> (Right (), s'))
…
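I For instance, the elided operations might include (a sketch; the names
getState and throwExn are illustrative, not from the slides):
  getState :: ExnState s e s
  getState = ExSt (\s -> (Right s, s))
  throwExn :: e -> ExnState s e a
  throwExn e = ExSt (\s -> (Left e, s))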
25 / 28
Combining monads, continued
I Also possible to combine exceptions and state with a
transactional semantics.
I State modifications discarded when error signaled.
I Subtle modification of monad type and operations:
newtype StateExn s e a = StEx {runStEx :: s -> Either e (a,s)}
instance Monad (StateExn s e) where
  return a = StEx (\s -> Right (a, s))
  m >>= f  = StEx (\s -> case runStEx m s of
                           Left e        -> Left e
                           Right (a, s') -> runStEx (f a) s')
putState :: s -> StateExn s e ()
putState s' = StEx (\s -> Right ((), s'))
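I Sketch (hypothetical throwExn) making the transactional behaviour visible:
  throwExn :: e -> StateExn s e a
  throwExn e = StEx (\_ -> Left e)
  -- runStEx (putState 1 >> throwExn "oops") 0 == Left "oops"
  -- (the state update is discarded; ExnState would return (Left "oops", 1))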
I GHC comes with Monad Transformer Library, a way of building
complex monads out of building blocks.
I A monad transformer extends a monad with new features.
I For AP, usually manageable to build combined monad by hand.
26 / 28
Summary: monads from a SE perspective
I Monads abstract out the “plumbing” inherent to any notion of
effectful computations.
I Not really essential: could just write programs explicitly in
state-passing, error-passing, etc. style.
I But doing so loses key benefits of abstraction:
I More concise, readable code
I Less room for manual error
I Subsequent fixes, changes, or extensions to an effect only have to be
done in one place.
I Cf. Wadler’s interpreter examples.
I Any non-trivial piece of Haskell code you are likely to encounter
will probably already make heavy use of monads.
27 / 28
What’s next
I If you haven’t yet, do the recommended readings and quiz for
this lecture.
I May want to start with Wadler paper.
I Assignment 0 due tomorrow
I Do run it past OnlineTA before submitting!
I Start looking at this week’s exercises and assignments
I Assignment 1 out later today, due Wednesday 19/9 at 20:00
I Next lecture (Ken): monads recap + introduction to
property-based testing.
I Recommended readings up soon.
I Look at associated exercises before the lecture.
28 / 28