
Levels of Abstraction
Learning Outcomes
· Understand the motivation for different programming paradigms: to abstract machine operation into human understandable and composable programs
· Understand the difference between syntax (the textual symbols and grammatical rules of a program) and semantics (the meaning of what is computed)
· Understand that there are different models of computation upon which different programming languages are based, including machine models such as Turing Machines and von Neumann architecture, and the Lambda Calculus based on mathematical functions.
Syntax versus Semantics
As a branch of Computer Science, the theory and practice of programming has grown from a very practical need: to create tools to help us get computers to perform useful and often complex tasks. Programming languages are tools designed and built by people to make life easier in this endeavour. Furthermore, they are rapidly evolving tools that have grown in subtlety and complexity over many decades, taking advantage of changing computer hardware and of different ways to model computation.
An important distinction to make when considering different programming languages is between syntax and semantics. The syntax of a programming language is the set of symbols and rules for combining them (the grammar) into a correctly structured program. These rules are often arbitrary and chosen for historical reasons, ease of implementation or even aesthetics. An example of a syntactic design choice is the use of indentation to denote block structure or symbols such as “BEGIN” and “END” or “{” and “}”.
Example: functions in Python and C that are syntactically different, but semantically identical:

# python code
def sumTo(n):
    sum = 0
    for i in range(0, n):
        sum = sum + i
    return sum
// C code:
int sumTo(int n) {
    int sum = 0;
    for (int i = 0; i < n; i++) {
        sum += i;
    }
    return sum;
}

By contrast, the “semantics” of a programming language relate to the meaning of a program: how does it structure data? How does it execute or evaluate? In this course we will certainly be studying the syntax of the different languages we encounter. It is necessary to understand syntax in order to correctly compose programs that a compiler or interpreter can make sense of. The syntactic choices made by language designers are also often interesting in their own right and can have significant impact on the ease with which we can create or read programs.

However, arguably the more profound learning outcome from this course should be an appreciation for the semantics of programming and how different languages lend themselves to fundamentally different approaches to programming, with different abstractions for modelling problems and different ways of executing and evaluating. Hence, we will be considering several languages that support quite different programming paradigms.

For example, as we move forward we will see that C programs vary from the underlying machine language mostly syntactically. C abstracts certain details and particulars of the machine architecture and has much more efficient syntax than Assembly language, but the semantics of C are not so far removed from Assembler - especially modern assembler that supports procedures and conveniences for dealing with arrays. By contrast, Haskell and MiniZinc (which we will be exploring in the second half of this course) represent quite different paradigms where the underlying machine architecture, and even the mode of execution, is significantly abstracted away. MiniZinc, in particular, is completely declarative in the sense that the programmer’s job is to define and model the constraints of a problem. The approach to finding the solution, in fact any algorithmic details, are completely hidden.
One other important concept that we try to convey in this course is that while some languages are engineered to support particular paradigms (such as functional programming or logic programming), the ideas can be brought to many different programming languages. For example, we begin learning about functional programming in JavaScript (actually TypeScript and EcmaScript 2017) and we will demonstrate that functional programming style is not only possible in this language, but brings with it a number of benefits. Later, we will pivot from JavaScript (briefly) to PureScript, a Haskell-like language that compiles to JavaScript. We will see that, while PureScript syntax is very different to JavaScript, the JavaScript generated by the PureScript compiler is not so different to the way we implemented functional paradigms manually in JavaScript. Then we will dive a little more deeply into a language that more completely “buys into” the functional paradigm: Haskell. As well as having a syntax that makes functional programming very clean, the Haskell compiler strictly enforces purity and makes the interesting choice of being lazy by default.

In summary, the languages we will study (in varying degrees of depth) will be Assembler, C/C++, JavaScript (ES2017 and TypeScript), PureScript, Haskell and MiniZinc, with JavaScript/TypeScript and Haskell being the main languages explored in problem sets and assignments. Thus, this course will be a tour through programming paradigms that represent different levels of abstraction from the underlying machine architecture. To begin, we spend just a little time at the level of least abstraction: the hardware itself.

The Machine Level

Conceptually, modern computer architecture deviates little from the von Neumann model proposed in 1945 by Hungarian-American computer scientist John von Neumann. The development of the von Neumann model occurred shortly after the introduction of the theoretical Turing Machine concept.
The von Neumann architecture was among the first to unify the concepts of data and programs. That is, in a von Neumann architecture, a program is just data that can be loaded into memory. A program is a list of instructions that read data from memory, manipulate it, and then write it back to memory. This is much more flexible than previous computer designs, which stored programs separately in a fixed (read-only) manner.

At a high level, standard modern computer architecture still fits within the classic von Neumann model. Programs run on an x86 machine according to the Instruction Execution Cycle:

· CPU fetches instruction from instruction queue, increments instruction pointer
· CPU decodes instruction and decides if it has operands
· If necessary, CPU fetches operands from registers or memory
· CPU executes the instruction
· If necessary, CPU stores result, sets status flags, etc.

Registers are locations on the CPU with very low-latency access due to their physical proximity to the execution engine. Modern x86 processors also have 2 or 3 levels of cache memory physically on-board the CPU. Accessing cache memory is slower than accessing registers (with all sorts of very complicated special cases) but still many times faster than accessing main memory. The CPU handles the movement of instructions and data between levels of cache memory and main memory automatically, cleverly and—for the most part—transparently. To cut a long story short, it can be very difficult to predict how long a particular memory access will take. Small amounts of memory that are accessed repeatedly will probably be cached at a high level, and therefore fast to access.

Other Models of Computation

Turing Machines are a conceptual model of computation based on a physical analogy of tapes being written to and read from as they feed through a machine. The operations written on the tape determine the machine’s operation.
The von Neumann model of computation has in common with Turing Machines that it follows an imperative paradigm of sequential instruction execution, but it is a practical model upon which most modern computers base their architecture. There are other models of computation that are also useful. In particular, we will look at the Lambda Calculus, which was roughly a contemporary of these other models but based on mathematical functions, their application and composition. We will see that while imperative programming languages are roughly abstractions of machine architecture, functional programming languages provide an alternative abstraction built upon the rules of the lambda calculus.

Alternative Abstractions

Humans think about programs in a different way to machines. Machines can read thousands or millions of instructions per second and execute them tirelessly and with precision. Humans need to understand programs at a higher level, and relate the elements of the program back to the semantics of the task and the ultimate goal. There is a clear and overwhelmingly agreed-upon need to create human-readable and writable languages which abstract away the details of the underlying computation into chunks that people can reason about. However, there are many ways to create these abstractions.

Comparing the Dominant Programming Paradigms

First, some definitions which help to describe the different families of programming languages that we will be considering in this course. It is assumed that the reader already has experience with the first three paradigms (imperative, procedural and object-oriented) since these are taught in most introductory programming courses. The rest are discussed further in later chapters in these notes.

· Imperative programs are a sequence of statements that change a program’s state. They follow the Turing machine model described above. This is probably the dominant paradigm for programming languages today.
Languages from Assembler to Python are built around this concept and most modern languages still allow you to program in this style.

· Procedural languages are basically imperative in nature, but add the concept of named procedures, i.e. named subroutines that may be invoked from elsewhere in the program, with parameterised variables that may be passed in. The result of the procedure may be returned as an explicit return value, or the procedure may mutate (change) the value of variables referenced elsewhere in the program (such a mutation is called a side effect). Again, most modern languages support definition of such procedures. Sometimes, the term function is used synonymously with procedure. Some languages (e.g. Pascal) make a distinction: a procedure mutates state without an explicit return value, while a function has an explicit return value. Note that the latter is closer to the definition of function in mathematics.

· Object-oriented languages are built around the concept of objects, where an object captures the set of data (state) and behaviours (methods) associated with entities in the system. We assume most readers have already studied the basics of object-oriented programming, such as defining classes of objects, inheritance and encapsulation.

· Declarative languages focus on declaring what a procedure (or function) should do rather than how it should do it. A classic example of a declarative language is SQL, which precisely describes a relational database query (i.e. what should be returned) without having to tell the database how the computation should be performed (e.g. the sequence of steps to carry out to perform the query). HTML is also declarative: describing what a web page should look like without specifying how to render it. SQL and HTML both avoid defining the how by relying on an engine which already knows how to do the task (i.e. the database system for SQL, or the browser for HTML).
By contrast, Prolog is an example of a language which is declarative but also a complete programming language (being able to perform any computation that a Turing machine can compute, i.e. Turing Complete). Declarative languages like Prolog manage to perform arbitrarily complex computations (e.g. with loops and evolving state) through recursive definitions.

· Functional languages are built around the concept of composable functions. Such languages support higher-order functions which can take other functions as arguments or return new functions as their result, following the rules of the Lambda Calculus (which we will explore in some depth later). Functional languages such as Haskell (which we are going to explore later) adhere closely to the mathematical definition of functions that are pure, i.e. that do not have side effects but whose only result is an explicit return value. Pure functional programs tend to be closer to a declarative definition of the solution since their manipulation of state is made explicit in the function parameter and return types rather than hidden in side effects (a quality which we will see through many examples later). Many modern languages support a mixture of functional style (by supporting higher-order functions) and imperative style programming (by allowing mutable variables and side effects from functions).

Until relatively recently there was a fairly clear distinction between languages which followed the Functional Programming (FP) paradigm versus the Object Oriented (OO) paradigm. From the 1990s until the present day, Object Oriented programming has arguably been the dominant paradigm. Programmers and system architects have found organising their programs and data into class hierarchies a natural way to model everything from simulations, to games, to business rules. But this focus on creating rigid structures around data is a static view of the world which becomes messy when dealing with dynamic state.
Functional programming, however, focuses on functions as transformations of data. These two alternatives, Object-Oriented versus Functional, have been described as a Kingdom of Nouns versus a Kingdom of Verbs, respectively, where nouns are descriptions of data and verbs are functions. Historically these were seen as competing paradigms, leading to quite different languages such as C++ and Haskell. However, modern languages like TypeScript, Scala, C++17, Java >8, Swift, Rust, etc, provide facilities for both paradigms for you to use (or abuse) as you please.
Thus, these are no longer competing paradigms. Rather they are complementary, or at-least they can be depending on the skill of the programmer. They provide programmers with a choice of abstractions to apply as necessary to better manage complexity and achieve robust, reusable, scalable and maintainable code.
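To make the complementarity concrete, here is a small sketch (our own example, in plain JavaScript) computing the same sum first in an imperative style and then in a functional style with reduce:

```javascript
const numbers = [1, 2, 3]

// imperative style: a sequence of statements mutating state
let total = 0
for (const x of numbers) {
  total += x
}

// functional style: a single declarative expression, no mutation
const total2 = numbers.reduce((acc, x) => acc + x, 0)

console.log(total, total2) // 6 6
```

The same language supports both styles; choosing between them is a matter of which abstraction best manages the complexity at hand.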
These notes focus on introducing programmers who are familiar with the OO paradigm to FP concepts via the ubiquitous multiparadigm language of JavaScript. The following table functions as a summary but also an overview of the contents of these notes with links to relevant sections.
 
                        Functional                                   Object-Oriented
Unit of Composition     Functions                                    Objects (classes)
Programming Style       Declarative                                  Imperative
Control Flow            Functions, recursion and chaining            Loops and conditionals
Polymorphism            Parametric                                   Sub-Typing
Data and Behaviour      Loosely coupled through pure,                Tightly coupled in classes
                        generic functions                            with methods
State Management        Treats objects as immutable                  Favours mutation of objects
                                                                     through instance methods
Thread Safety           Pure functions easily used concurrently      Can be difficult to manage
Encapsulation           Less essential                               Needed to protect data integrity
Model of Computation    Lambda Calculus                              Imperative (von Neumann/Turing)

JavaScript Introduction
Learning Outcomes
· Understand and use basic JavaScript coding concepts and features
· Understand the difference between mutable and immutable (const) variables
· Explain the relationship between JavaScript functions and objects
· Understand that the scope of variables is limited to delineated code blocks and within functions
· Understand that a closure captures variables referenced within its scope
· Create and apply anonymous functions in fluent-style code
· Compare arrow functions and regular function syntax
· Explain JavaScript’s prototype mechanism for creating classes from functions
· Create ES6 style classes with constructors and getters
· Compare object oriented polymorphism to dependency injection through functions
Introduction
In the late 90s the mood was right for a language that was small and simple and with executable files small enough to be distributed over the web. Originally Java was meant to be that language but, while it quickly gained traction as a language for building general purpose applications and server-side middleware, it never really took off in the browser. Something even simpler, and better integrated with the Document Object Model (DOM) of HTML pages was required to add basic interaction to web pages.
Brendan Eich was hired by Netscape in 1995 to integrate a Scheme interpreter into their browser for this purpose. No messy deployment of Java bytecode bundles – the browser would have been able to run Scheme scripts embedded directly into web pages. This would have been awesome. Unfortunately, for reasons that were largely political and marketing related, it was felt that something more superficially resembling Java was required. Thus, Eich created a prototype scripting language in 2 weeks that eventually became JavaScript. As we will see, it is syntactically familiar for Java developers. Under the hood, however, it follows quite a different paradigm.
The fact it was initially rushed to market, the fact that browser makers seemingly had difficulty early on making standards-compliant implementations, and a couple of regrettable decisions at first regarding things like scoping semantics, meant that JavaScript developed something of a bad name. It’s also possible that there was some inherent snobbiness amongst computer science types that, since JavaScript was not a compiled language, it must inevitably lead to armageddon. Somehow, however, it survived and began the “web 2.0” phenomenon of what we now refer to as rich, client-side “web apps”. It has also matured and, with the EcmaScript 6 (ES6) and up versions, has actually become quite an elegant little multi-paradigm language.
The following introduction to JavaScript assumes a reasonable knowledge of programming in another imperative language such as Python or Java.
Declaring Variables
We declare constant variables in JavaScript with the const keyword:
const z = 1 // constant (immutable variable) at global scope
You can try this in the debug console in a browser such as Chrome. If we try to change the value of such a const variable, we get a run-time error:
const z = 1
z = 2
Uncaught TypeError: Assignment to constant variable.
We define mutable variables in JavaScript with the let keyword:
let w = 1
The console replies with the value returned by the let statement, which is:
undefined
but fear not, w was assigned the correct value which you can confirm by typing just w into the console:
w
1
Now if we assign a new value to w it succeeds:
w = 2 // note that without a let or const keyword before it, this assignment is an expression which returns a value:
2
(Note: there is another legacy keyword for declaring variables in JavaScript var that has different scoping rules. Don’t use it.)
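To see why var is best avoided, here is a small sketch (our own example) contrasting its scoping with let:

```javascript
{
  var a = 1 // var ignores the block: a leaks into the enclosing scope
  let b = 2 // let is confined to the block
}
console.log(a) // 1 -- the var-declared variable is still visible out here
// console.log(b) // would throw: Uncaught ReferenceError: b is not defined
```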
JavaScript Types
JavaScript has several “primitive types” (simple types that are not Objects). These include:
· number: any numeric value, integer or decimal
· string: delineated like "hello" or 'hello' or even `hello`.
· boolean: can be only true or false
· undefined: is a special type with only one value which is undefined.
and a couple of others that we won’t worry about here.
JavaScript is loosely typed in the sense that a mutable variable can be assigned (and re-assigned) different values of different types.
let x
undefined
x = 3
3
x = 'hello'
"hello"
Variable scope
A variable’s scope is the region of the program where it is visible, i.e. it can be referenced. You can limit the scope of a variable by declaring it inside a block of code delineated by curly braces:
{
  let x = 1
  console.log(x)
}
Console prints 1 from the console.log:
1

But if we try to get the value of x:
x
Uncaught ReferenceError: x is not defined
The above console.log statement successfully output the value of x because it was inside the same scope (the same set of curly braces). The subsequent error occurs because we tried to look at x outside the scope of its definition. Variables declared outside of any scope are said to be “global” and will be visible to any code loaded on the same page and could clobber or be clobbered by other global definitions – so take care!
Be especially careful to always declare variables with either the let or const keywords. If you omit these keywords, a variable will be created at the global scope even though it is inside a { … } delimited scope, like so:
{
  x = 1
}

x
1

We are going to start using a few operators. Some may be familiar from C or Java; some are JS-specific.
Here’s a cheatsheet:
JavaScript 101: Basic Operator Cheat Sheet
Binary Operators
x % y // modulo
x == y // loose* equality
x != y // loose* inequality
x === y // strict+ equality
x !== y // strict+ inequality

a && b // logical and
a || b // logical or

a & b // bitwise and
a | b // bitwise or
* Loose (in)equality means type conversion may occur
+ Use strict (in)equality if type is expected to be same
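For example (our own illustrations of the coercion rules):

```javascript
console.log(1 == '1')  // true  -- the string is coerced to a number
console.log(1 === '1') // false -- different types, no coercion
console.log(0 == '')   // true  -- both coerce to 0
console.log(0 === '')  // false
```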
Unary Operators
i++ // post-increment
++i // pre-increment
i-- // post-decrement
--i // pre-decrement
!x // not x
Ternary Conditional Operator
x ? y : z // if x evaluates to true, result is y, else z
In-place math operators
x += y
// add result of expression y to x
// also -=, *=, /=, |=, &=
Functions
Functions are declared with the function keyword. You can give the function a name followed by a tuple of zero or more parameters. Variables declared within the function’s body (marked by a matching pair of curly braces { … }) are limited to the scope of that function body, but of course the parameters to the function are also in scope within the function body. You return the result with the return keyword.
/**
 * define a function called "myFunction" with two parameters, x and y,
 * which does some silly math, prints the value and returns the result
 */
function myFunction(x, y) {
  let t = x + y; // t is mutable
  t += z; // += adds the result of the expression on the right to the value of t
  const result = t // semicolons are not essential (but can help to catch errors)
  console.log("hello world") // prints to the console
  return result; // returns the result to the caller
}
You invoke (or ‘call’, or ‘apply’) a function like so:
myFunction(1,2)
hello world
4
An if-else statement looks like so:
/**
* get the greater of x and y
*/
function maxVal(x, y) {
  if (x >= y) {
    return x;
  } else {
    return y;
  }
}
There is also a useful ternary expression syntax for if-then-else:
function maxVal(x, y) {
  return x >= y ? x : y;
}
We can loop with while:
/**
* sum the numbers up to and including n
*/
function sumTo(n) {
  let sum = 0;
  while (n) { // when n is 0 this evaluates to false ending the loop
    // add n to sum then decrement n
    sum += n--; // see operator cheatsheet above
  }
  return sum;
}
sumTo(10)
55
Or for:
function sumTo(n) {
  let sum = 0;
  for (let i = 1; i <= n; i++) {
    sum += i;
  }
  return sum;
}

Or we could perform the same computation using a recursive loop:

function sumTo(n) {
  if (n === 0) return 0; // base case
  return n + sumTo(n - 1); // inductive step
}

We can make this really succinct with a ternary if-then-else expression:

function sumTo(n) {
  return n ? n + sumTo(n - 1) : 0;
}

We consider this recursive loop a more “declarative” coding style than the imperative loops. It is closer to the inductive definition of sum than a series of steps for how to compute it.

· No mutable variables used
· Each expression in this code is “pure”: it has no effects outside the expression. Thus, you could replace each element of code with something else that produces the same result for a given input (such as a simple look-up of a precomputed cache) and it would work the same.
· Therefore: this code has the property of referential transparency.
· The code succinctly states the loop invariant.

Stack Overflow and Tail Recursion

Each time a function is invoked, the interpreter (or ultimately the CPU) will allocate another chunk of memory to a special area of memory set aside for such use called the stack. The stack is finite. A recursive function that calls itself too many times will consume all the available stack memory. Therefore, too many levels of recursion will cause a stack overflow.

sumTo(1000000)
Uncaught RangeError: Maximum call stack size exceeded

However, functional languages (like Haskell) rely on recursion because they have no other way to create loops without mutable variables - so they must have a way to make this scale to real-world computations. When a recursive function is written in a special way, such that the recursive call is in tail position, compilers are able to transform the recursion into a while loop with constant memory use - this is called tail call optimisation. Let’s see what a tail recursive version of the sumTo function looks like:

function sumTo(n, sum = 0) {
  return n ? sumTo(n - 1, sum + n) : sum;
}

We have added a second parameter sum to store the computation as recursion proceeds. Such parameters are called accumulators. The = 0 in the parameter definition provides a default value in case the caller does not specify an argument. Thus, this new version can be called the same way as before:

sumTo(10)
55

The important change is that the recursive call (on the branch of execution that requires it) is now the very last operation to be executed before the function returns. The computation (sum + n) occurs before the recursive call. Therefore, no local state needs to be stored on the stack.

Note: although it has been proposed for the EcmaScript standard, as of 2020, not all JavaScript engines support tail call optimisation (only WebKit AFAIK).

Functions as parameters to other functions

We can make functions more versatile by parameterising them with other functions:

function sumTo(n, f = x => x) {
  return n ? f(n) + sumTo(n - 1, f) : 0;
}
Note that the new parameter f defaults to a simple function that directly returns its argument. Thus, called without a second parameter, sumTo has the same behaviour as before:
sumTo(10)
55
But, we can now specify a non-trivial function to be applied to the numbers before they are summed. For example, a function to square a number:
function square(x) {
return x * x;
}
can be passed into sumTo to compute a sum of squares:
sumTo(10, square)
385
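We are not limited to named functions: an anonymous arrow function (introduced below) can be passed inline. A sketch, repeating the definition so it runs standalone:

```javascript
function sumTo(n, f = x => x) {
  return n ? f(n) + sumTo(n - 1, f) : 0;
}

// sum of cubes 1^3 + 2^3 + ... + 10^3
console.log(sumTo(10, x => x * x * x)) // 3025
```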
Objects
Aside from the primitive types above, everything in JavaScript is an object. You can construct an object populated with some data, essentially just with JSON syntax:
const myObj = {
  aProperty: 123,
  anotherProperty: "tim was here"
}
However, in JavaScript, objects are simply property bags, indexed by a hashtable:
// the following are equivalent and both involve a hashtable lookup:
console.log(myObj.aProperty)
console.log(myObj['aProperty'])
Note that when we declare an object with the const keyword as above, it is only weakly immutable. This means that we cannot reassign myObj to refer to a different object, however, we can change the properties inside myObj. Thus, the myObj variable is constant/immutable, but the object created by the declaration is mutable. So, after making the above const declaration, if we try the following reassignment of myObj we receive an error:
myObj = {
  aProperty: 0,
  anotherProperty: "tim wasn't here"
}
Uncaught TypeError: Assignment to constant variable.
But the immutability due to const is weak or shallow in the sense that while the myObj variable which references the object is immutable, the properties of the object are mutable, i.e. we can reassign properties on myObj with no error:
myObj.aProperty = 0
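If genuinely immutable properties are wanted, the standard built-in Object.freeze prevents further changes to an object’s own properties. A sketch (note the behaviour depends on mode: the failed assignment is silently ignored in non-strict code, but throws a TypeError in strict mode):

```javascript
const frozen = Object.freeze({ aProperty: 123 })
try {
  frozen.aProperty = 0 // ignored, or TypeError in strict mode
} catch (e) { /* strict mode throws */ }
console.log(frozen.aProperty) // 123 -- unchanged either way
```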
We can also quickly declare variables that take the values of properties of an object, through destructuring syntax:
const {aProperty} = myObj
console.log(aProperty)
123
Which is equivalent to:
const aProperty = myObj.aProperty
It is often convenient to destructure objects passed as arguments to functions. It makes it clear from the function definition precisely which properties are going to be accessed. Consider:
const point = {x:123, y:456}
function showX({x}) {
  console.log(x)
}
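Destructuring can also rename properties and supply defaults for missing ones (our own example):

```javascript
// aProperty is bound to the new name a; missing gets its default value
const { aProperty: a, missing = 'fallback' } = { aProperty: 123 }
console.log(a)       // 123
console.log(missing) // "fallback"
```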
You can also initialise an object’s properties directly with variables. Unless a new property name is specified, the variable names become property names, like so:
const x = 123, tempY = 456
const point = {x /* variable name used as property name */,
y:tempY /* value from variable but new property name */}
point
{x: 123, y: 456}
Arrays
JavaScript has Python-like syntax for array objects:
const a = [1,2,3]
a.length
3
a[0]
1
a[2]
3
You can also destructure arrays into local variables:
const [x,y,z] = a
z
3
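A related piece of ES6 syntax (not used above, but handy): rest syntax collects the remaining elements of an array into a new array:

```javascript
const [head, ...tail] = [1, 2, 3]
console.log(head) // 1
console.log(tail) // [2, 3]
```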
Below, we see how Anonymous Functions can be applied to transform arrays, and a cheatsheet summary for further functions for working with arrays.
Dynamic Typing
The members of myObj are implicitly typed as number and string respectively, and as we see in the console.log, conversion to string happens automatically. JavaScript is interpreted by a JavaScript engine rather than compiled into a static executable format. Originally, this had implications on execution speed, as interpreting the program line by line at run time could be slow. Modern JavaScript engines, however, feature Just in Time (JIT) compilation and optimisation – and speed can sometimes be comparable to execution of C++ code that is compiled in advance to native machine code.

However, another implication remains: JavaScript is not type checked by a compiler. Thus, type errors cause run-time failures rather than being caught at compile time. JavaScript is dynamically typed in that types are associated with values rather than variables. That is, a variable that is initially bound to one type can later be rebound to a different type, e.g.:
let i = 123; // a numeric literal has type number
i = 'a string'; // a string literal has type string, but no error here!
The C compiler would spit the dummy when trying to reassign i with a value of a different type, but the JavaScript interpreter is quite happy to go along with your decision to change your mind about the type of i.
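You can observe dynamic typing at work with the built-in typeof operator, which reports the type of a variable’s current value:

```javascript
let i = 123
console.log(typeof i) // "number"
i = 'a string'
console.log(typeof i) // "string"
```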
Functions are Objects
The nifty thing about JavaScript – one Scheme’ish thing that presumably survived from Eich’s original plan – is that functions are also just objects. That is, given the following function:
function sayHello(person) {
  console.log('hello ' + person)
}
sayHello('tim')
"hello tim"
We can easily bind a function to a variable:
const hi = sayHello
hi('tim')
"hello tim"
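Since functions are just values, they can also be stored in ordinary data structures like any other object (our own example):

```javascript
// a property bag of functions
const ops = {
  double: function (x) { return x * 2 },
  negate: function (x) { return -x }
}
console.log(ops.double(21)) // 42
console.log(ops.negate(7))  // -7
```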
(Note: The original JavaScript syntax for declaring a variable used the var keyword. However, the scoping of variables declared in this way was strange for people familiar with C and Java scoping rules, and caused much angst. It has been fixed since ES6 with the let and const keywords, we prefer these to var.)
Anonymous Functions
The sayHello function is called a named function. We can also create an anonymous function to be bound immediately to a variable:
const hi = function(person) {
  console.log("hello " + person)
}
or to pass as a parameter into another function, for example, Array objects have a forEach member that expects a function as an argument, which is then applied to every member of the array:
['tim', 'sally', 'anne'].forEach(function(person) {
  console.log('hello ' + person)
})
"hello tim"
"hello sally"
"hello anne"
This pattern of passing functions as parameters to other functions is now so common in JavaScript that the EcmaScript 6 standard introduced some new arrow syntax (with slightly different semantics, as explained below) for anonymous functions:
['tim', 'sally', 'anne'].forEach(person=> console.log('hello ' + person))
Note that whatever value the expression on the right-hand side of the arrow evaluates to is implicitly returned:
['tim', 'sally', 'anne'].map(person => "hello " + person)
["hello tim", "hello sally", "hello anne"]
Multiple statements (either split across lines or separated with ;s), including local variable declarations, can be enclosed in curly braces with arrow syntax, but then an explicit return statement is required to return a value:
['tim', 'sally', 'anne'].map(person => {
    const message = "hello " + person
console.log(message)
return message
})
Arrow Functions
As mentioned above, ES6 introduced compact notation for anonymous functions:
const greeting = person => 'hello ' + person
Which is completely equivalent to the long form:
const greeting = function(person) {
    return 'hello ' + person
}
You can also have functions with a list of arguments, just put the list in brackets as for usual function definitions:
const greeting = (greeting, person) => greeting + ' ' + person
The body of the above functions are simple expressions. If you need a more complex, multiline body (e.g. with local variables) you can do this but you need to surround the code block with curly braces {}:
const greeting = (greeting, person)=> {
    const msg = greeting + ' ' + person
console.log(msg)
return msg
}
We can use multi-parameter anonymous functions with another nifty method on Array objects which allows us to reduce them to a single value.
[5,8,3,1,7,6,2].reduce((accumulator,x)=>accumulator+x,0)
32
The reduce method applies a function to each of the elements in the array, in order to compute an aggregated value for the whole array. The nature of the aggregate depends on the function you pass in. Here we just sum the elements in the array. The function we pass in has two parameters: the second is the array element (which we refer to here as x); the first parameter, accumulator, is either:
· the second argument to reduce (which in our case is 0), if this is the first call to the function,
· or, for every other call, the result returned by the previous call to the function.
reduce is incredibly versatile and can be used to do much more than sum numbers. For example, say we want to see whether all the elements of an array pass some test.
const all = (test, array) => array.reduce(
(accumulator, x) => accumulator && test(x),
true)
Here the accumulator is a boolean with initial value true. If an element of the array fails the test the accumulator becomes false and stays false, using the && operator.
all(x => x < 5, [1, 2, 3])
true
all(x => x < 5, [1, 3, 5])
false
Exercise
· Can you write a function any that returns true if any of the tests pass?
What if we wanted to see how many times each word appears in a list?
const wordCount = (array) => array.reduce(
(accumulator, word) => {
if (accumulator[word]) {
accumulator[word] += 1
} else {
accumulator[word] = 1
}
return accumulator
},
{}
)
Here the accumulator is an object which is initially empty. For each word in the list, the word count is either updated or created in the accumulator object. Note, however, that this implementation is not pure: the aggregator function modifies accumulator in place before returning it.
wordCount(['tim', 'sally', 'tim'])
{ tim: 2, sally: 1 }
Array Cheatsheet
In the following, the annotations beginning with : after each parameter describe its type and again at the end of each function to describe its return type. The array a has elements of type U, and U=>V is the type of a function with input parameter type U and return type V (Note: these are not correct TS annotations, but an informal “shorthand”)
a.forEach(f: U=> void): void // apply the function f to each element of the array
Although it does not typically mutate a, forEach is impure if f has any side effect (which it most likely will because otherwise why would you bother!).
Pure Methods on Array
a.slice(): U[] // copy the whole array
a.slice(start: number): U[] // copy from the specified index to the end of the array
a.slice(start: number, // copy from start index up to
end: number): U[] // (but not including) end index
a.map(f: U=> V): V[] // apply f to elements of array
// and return result in new array of type V
a.filter(f: U=> boolean): U[] // returns a new array of the elements of a
// for which f returns true
a.concat(b: U[]): U[] // return a new array with the elements of b
// concatenated after the elements of a
a.concat(b: U[], c: U[]): U[] // return b and c appended to a.
// further arrays can be passed after c
a.reduce(f: (V, U)=> V, V): V // Uses f to combine elements of
// the array into a single result of type V
All of the above are pure in the sense that they do not mutate a, but return the result in a new object.
Note: the function passed to forEach takes an optional second parameter (not shown above) which is the index of the element being visited (see docs). While map, filter & reduce have a similar optional index parameter, I suggest avoiding it because it leads to hacky imperative-style thinking about loops.
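A quick sketch exercising several of the pure methods from the cheatsheet, and confirming that the original array is untouched:

```javascript
const a = [1, 2, 3, 4]

const doubled = a.map(x => x * 2)            // [2, 4, 6, 8]
const evens = a.filter(x => x % 2 === 0)     // [2, 4]
const firstTwo = a.slice(0, 2)               // [1, 2]
const combined = a.concat([5, 6])            // [1, 2, 3, 4, 5, 6]
const sum = a.reduce((acc, x) => acc + x, 0) // 10

console.log(a) // [1, 2, 3, 4] (a itself is unchanged)
```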
Closures
Functions can be nested inside other function definitions and can access variables from the enclosing scope.
Definitions:
· A function and the set of variables it accesses from its enclosing scope is called a closure.
· Variables from the enclosing scope that are accessed by the closure are said to be captured by the closure.
You can also have a function that creates and returns a closure that can be applied later:
function add(x) {
return y => y+x; // we are going to return a function which includes
// the variable x from its enclosing scope
// – “a closure”
}
let addNine = add(9)
addNine(10)
19
In the above example, the parameter x of the add function is captured by the anonymous function that is returned, which forms a closure. Thus, the binding of x to a value persists beyond the scope of the add function itself. Effectively, we have used the add function to create a new function: y=>y+9 – without actually writing the code ourselves.
addNine(1)
10
We can also call the add function with two arguments at once:
add(1)(2)
3
Compare to a more traditional function of two parameters:
function plus(x,y) { return x + y }
plus(1,2)
3
The add function above is a Curried version of the plus function.
As another example, consider a curried wrapper for our sumTo from before:
function sumOf(f) {
return n => sumTo(n, f)
}
Now, we can create custom functions that compute sums over arbitrary sequences:
const sumOfSquares = sumOf(square)
sumOfSquares(10)
385
sumOfSquares(20)
2870
Prototype Class Mechanism
Note: the following way to achieve class encapsulation is deprecated by ES6 syntax – skip to the next section to see the modern way to do it.
In JavaScript you can also create functions as members of objects:
let say = {
    hello: person => console.log('hello ' + person)
}
say.hello("tim")
"hello tim"
But these objects are only single instances.
JavaScript supports creating object instances of a certain type (i.e. having a set of archetypical members, like a Java class) through a function prototype mechanism. You create a constructor function:
function Person(name, surname) {
this.name = name
this.surname = surname
}
let author = new Person('tim', 'dwyer')
sayHello(author.name)
"hello tim"
You can also add method functions to the prototype, that are then available from any objects of that type:
Person.prototype.hello = function() { console.log("hello " + this.name) }
author.hello()
"hello tim"
Note that above we use the old-style verbose JavaScript anonymous function syntax instead of the arrow form. This is because there is a difference in the way the two different forms treat the this symbol. In the arrow syntax, this refers to the enclosing execution context. In the verbose syntax, this resolves to the object the method was called on.
It’s very tempting to use the prototype editing mechanism for evil. For example, I’ve always wished that JS had a function to create arrays initialised over a range:
Array.prototype.range =
(from, to)=>Array(to) // allocate space for an array of size `to`
.fill() // populate the array (with `undefined`s)
.map((_,i)=>i) // set each element of the array to its index
.filter(v=> v >= from) // filter out values below from
[].range(3,9)
[3,4,5,6,7,8]
Of course, if you do something like this in your JS library, and it pollutes the global namespace, and one day EcmaScript 9 introduces an actual range function with slightly different semantics, and someone else goes to use the [].range function expecting the official semantics – well, you may lose a friend or two.
Some notes about this implementation of range:
· Although the Array(n) function allocates space for n elements, the result is still “empty” so fill() is necessary to actually create the entries.
· The function passed to map is using an optional second argument which receives the index of the current element. See note in the Array Cheatsheet suggesting not to use this.
· The _ is not special syntax, it’s a valid variable name. I use _ as a convention for a parameter that I don’t use. This is imitating Haskell syntax.
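A safer alternative, sketched here for illustration, is a standalone range function built with the same fill/map technique but without touching Array.prototype at all:

```javascript
// standalone range: no prototype pollution, same technique as above
const range = (from, to) =>
  Array(to - from)           // allocate space for (to - from) elements
    .fill()                  // populate with undefineds so map visits them
    .map((_, i) => from + i) // set each element to from plus its index

console.log(range(3, 9)) // [3, 4, 5, 6, 7, 8]
```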
EcmaScript 6 Class Syntax
Consider another class created with a function and a method added to the prototype:
function Person(name, occupation) {
this.name = name
this.occupation = occupation
}
Person.prototype.sayHello = function() {
console.log(`Hi, my name’s ${this.name} and I ${this.occupation}!`)
}
const tim = new Person("Tim", "lecture Programming Paradigms")
tim.sayHello()
Hi, my name’s Tim and I lecture Programming Paradigms!
ES6 introduced a new syntax for classes that will be more familiar to Java programmers:
class Person {
constructor(name, occupation) {
this.name = name
this.occupation = occupation
}
sayHello() {
console.log(`Hi, my name’s ${this.name} and I ${this.occupation}!`)
}
}
There is also now syntax for “getter properties”: functions which can be invoked without (), i.e. to look more like properties:
class Person {
constructor(name, occupation) {
this.name = name
this.occupation = occupation
}
get greeting() {
return `Hi, my name’s ${this.name} and I ${this.occupation}!`
}
sayHello() {
console.log(this.greeting)
}
}
And classes of course support single-inheritance to achieve polymorphism:
class LoudPerson extends Person {
sayHello() {
console.log(this.greeting.toUpperCase())
}
}
const tims = [
    new Person("Tim", "lecture Programming Paradigms"),
    new LoudPerson("Tim", "shout about Programming Paradigms")
]
tims.forEach(t => t.sayHello())
Hi, my name’s Tim and I lecture Programming Paradigms!
HI, MY NAME’S TIM AND I SHOUT ABOUT PROGRAMMING PARADIGMS!
Polymorphism
According to Cardelli and Wegner, “Polymorphic types are types whose operations are applicable to values of more than one type.” Thus, although Person and LoudPerson are different types, since LoudPerson is a sub-type of Person, they present a common sayHello method allowing operations like forEach to operate over an array of the base class. In a traditional Object Oriented language like Java, the compiler enforces that objects must be instances of a common base class or interface to be treated as such. This type of polymorphism is called subtyping polymorphism.
In JavaScript, with no compile-time typecheck, a kind of polymorphism is possible such that if two objects both present a similarly named method that is callable in the same way, of course there is nothing preventing you simply using that method on each object as if it is the same:
const a = {f: () => console.log("a")};
const b = {f: () => console.log("b")};
[a,b].forEach(o => o.f())
a
b
Informally, this type of polymorphism is called “Duck Typing” (i.e. “If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck”).
Another type of polymorphism which is key to strongly typed functional programming languages (like Haskell), but also a feature of many modern OO languages is parametric polymorphism. We will see this in action when we introduce TypeScript generics.
Reference: Cardelli, Luca, and Peter Wegner. “On understanding types, data abstraction, and polymorphism.” ACM Computing Surveys (CSUR) 17.4 (1985): 471-523.
Dependency Injection
It’s useful to compare the above style of polymorphism, to a functional approach to dependency injection:
class Person {
constructor(name, occupation, voiceTransform = g => g) {
this.name = name
this.occupation = occupation
this.voiceTransform = voiceTransform
}
get greeting() {
return `Hi, my name’s ${this.name} and I ${this.occupation}!`
}
sayHello() {
console.log(this.voiceTransform(this.greeting))
}
}
const tims = [
    new Person("Tim", "lecture Programming Paradigms"),
    new Person("Tim", "shout about Programming Paradigms", g => g.toUpperCase())
]
tims.forEach(t => t.sayHello())
Hi, my name’s Tim and I lecture Programming Paradigms!
HI, MY NAME’S TIM AND I SHOUT ABOUT PROGRAMMING PARADIGMS!
So the voiceTransform property defaults to the identity function (a function which simply returns its argument), but a user of the Person class can inject a dependency on another function from outside the class when they construct an instance.
This is a “lighter-weight” style of code reuse or specialisation.
Functional Programming in JavaScript
Learning Outcomes
· Create programs in JavaScript in a functional style
· Understand the definitions of Function Purity and Referential Transparency
· Explain the role of pure functional programming style in managing side effects
· See how pure functions can be used to model sophisticated computation
Introduction
The elements of JavaScript covered in our introduction, specifically:
· Binding functions to variables
· Anonymous functions
· Higher-order functions
are sufficient for us to explore a paradigm called functional programming. In the functional programming paradigm the primary model of computation is through the evaluation of functions. Functional Programming is highly inspired by the Lambda Calculus, a theory which develops a model for computation based on application and evaluation of mathematical functions.
While JavaScript (and many—but not all, as we shall see—other languages inspired by the functional paradigm) do not enforce it, true functional programming mandates the functions be pure in the sense of not causing side effects.
Side Effects
Side effects of a function are changes to state outside of the result explicitly returned by the function.
Examples of side-effects from inside a function:
· changing the value of a variable declared outside the function scope
· mutating global state in this way can cause difficult-to-diagnose bugs: for example, an effective debugging strategy is dividing the program up into little pieces which can easily be proven correct or unit tested – sources of side-effects deep inside functions are a hidden form of coupling making such a strategy very difficult.
· printing to the console: changing the state of the world in such a way can also be dangerous
· for example, filling a disk with log messages is a good way to crash the whole computer!
In languages without compilers that specifically guard against them, side effects can occur:
· intentionally through sloppy coding practices, where a misguided programmer may think it’s more convenient to have a function do multiple things at once;
· unintentionally, for example by accidentally setting a global variable instead of a local one.
We’ll see more examples in actual code below.
Function Purity and Referential Transparency
A pure function:
· has no side effects: i.e. it has no effects other than to create a return value;
· always produces the same result for the same input.
In the context of functional programming, referential transparency is:
· the property of being able to substitute an expression that evaluates to some value, with that value, without affecting the behaviour of the program.
Trivial Example
Imagine a simple function:
const square = x=>x*x
And some code that uses it:
const four = square(2)
The expression square(2) evaluates to 4. Can we replace the expression square(2) with the value 4 in the program above without changing the behaviour of the program? YES! So is the expression square(2) referentially transparent? YES!
But what if the square function depends on some other data stored somewhere in memory and may return a different result if that data mutates? (an example might be using a global variable or reading an environment variable from IO or requiring user input). What if square performs some other action instead of simply returning the result of a mathematical computation? What if it sends a message to the console or mutates a global variable? That is, what if it has side effects (is not a pure function)? In that case is replacing square(2) with the value 4 still going to leave our program behaving the same way? Possibly not!
Put another way, if the square function is not pure—i.e. it produces side effects (meaning it has hidden output other than its return value) or it is affected by side effects from other code (meaning it has hidden inputs)—then the expression square(2) would not be referentially transparent.
Realistic Examples
A real life example where this might be useful would be when a cached result for a given input already exists and can be returned without further computation. Imagine a pure function which computes PI to a specified number of decimal points precision. It takes an argument n, the number of decimal points, and returns the computed value. We could modify the function to instantly return a precomputed value for PI from a lookup table if n is less than 10. This substitution is trivial and is guaranteed not to break our program, because its effects are strictly locally contained.
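A minimal sketch of such a cache (memoisation). The piTo function here is a cheap stand-in for the expensive computation described above, and the names are illustrative rather than part of the notes' code; the substitution is only safe because the wrapped function is pure.

```javascript
// memoise wraps a pure single-argument function with a lookup table
const memoise = f => {
  const cache = new Map() // maps input -> previously computed result
  return x => {
    if (!cache.has(x)) cache.set(x, f(x)) // compute only on a cache miss
    return cache.get(x)                   // otherwise substitute the cached value
  }
}

const piTo = n => Number(Math.PI.toFixed(n)) // stand-in for an expensive computation
const fastPi = memoise(piTo)

fastPi(4) // computed: 3.1416
fastPi(4) // returned straight from the cache: 3.1416
```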
Pure functions and referential transparency are perhaps most easily illustrated with some examples and counterexamples. Consider the following impure function:
function impureSquares(a) {
let i = 0
    while (i < a.length) {
*       a[i] = a[i] * a[i++];
    }
}
Since the function modifies a in place we get a different outcome if we call it more than once.
const myArray = [1,2,3]
impureSquares(myArray)
// now myArray = [1,4,9]
impureSquares(myArray)
// now myArray = [1,16,81]
Furthermore, the very imperative-style computation in impureSquares at the line marked with * is not pure. It has two effects: incrementing i and mutating a. You could not simply replace the expression with the value computed by the expression and have the program work in the same way. This piece of code does not have the property of referential transparency.
True pure functional languages (such as Haskell) enforce referential transparency through immutable variables (note: yes, “immutable variable” sounds like an oxymoron - two words with opposite meanings put together). That is, once any variable in such a language is bound to a value, it cannot be reassigned.
In JavaScript we can opt in to immutable variables by declaring them const, but it is only a shallow immutability. Thus, the variable myArray above cannot be reassigned to reference a different array. However, we can change the contents of the array as shown above.
A more functional way to implement the squares function would be more like the examples we have seen previously:
function squares(a) {
    return a.map(x => x*x)
}
The above function is pure. Its result is a new array containing the squares of the input array and the input array itself is unchanged. It has no side effects changing values of variables, memory or the world, outside of its own scope. You could replace the computation of the result with a different calculation returning the same result for a given input and the program would be unchanged. It is referentially transparent.
const myArray = [1,2,3]
const mySquares = squares(myArray)
const myRaisedToTheFours = squares(mySquares)
We could make the following substitution in the last line with no unintended consequences:
const myRaisedToTheFours = squares(squares(myArray))
An impure function typical of something you may see in OO code:
let messagesSent = 0;
function send(message, recipient) {
let success = recipient.notify(message);
if (success) {
++messagesSent;
} else {
        console.log("send failed! " + message);
    }
    console.log("messages sent " + messagesSent);
}
This function is impure in three ways:
· it mutates the state of the count variable messagesSent from the enclosing scope
· it (likely) does something (what exactly is unclear) to recipient
· finally, it sends output to the console. Outputting to a device (although only for display purposes) is definitely a side effect.
Side effects are bad for transparency (knowing everything about what a function is going to do) and maintainability. When state in your program is being changed from all over the place bugs become very difficult to track down.
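The same logic can be sketched in a purer style (an illustration, not a definitive fix): the message count becomes an explicit input and output, and the log message is returned as data rather than printed, so the caller decides when, and whether, the side effects actually happen.

```javascript
// pureSend takes the current count as a parameter and returns the new
// count plus a log message, instead of mutating outer state and printing.
// (recipient.notify may itself be effectful; here we just isolate it.)
function pureSend(message, recipient, messagesSent) {
  const success = recipient.notify(message)
  return {
    messagesSent: success ? messagesSent + 1 : messagesSent,
    log: success
      ? 'messages sent ' + (messagesSent + 1)
      : 'send failed! ' + message
  }
}

// a hypothetical recipient whose notify always succeeds
const result = pureSend('hi', { notify: _ => true }, 0)
console.log(result.messagesSent) // 1
console.log(result.log)          // "messages sent 1"
```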
Functional Patterns
Passing functions around, anonymous or not, is incredibly useful and pops up in many practical programming situations.
Eliminating Loops
Loops are the source of many bugs: fence-post errors, range errors, typos, incrementing the wrong counter, etc.
A typical for loop has four distinct places where it’s easy to make errors that can cause critical problems:
for ([initialization]; [condition]; [final-expression])
statement
· The initialization can initialise to the wrong value (e.g. n instead of n-1, 1 instead of 0) or initialise the wrong variable.
· The condition test can use = instead of ==, <= instead of <, or test the wrong variable, etc.
· The final-expression can (again) increment the wrong variable.
· The statement body might change the state of variables being tested in the termination condition, since they are in scope.
For many standard loops, however, the logic is the same every time and can easily be abstracted into a function. Examples: Array.map, Array.reduce, Array.forEach, etc. The logic of the loop body is specified with a function which can execute in its own scope, without the risk of breaking the loop logic.
Callbacks
In JavaScript and HTML5, events trigger actions associated with mouse clicks and other interactions with the page. You subscribe to an event on a given HTML element as follows:
element.addEventListener('click', e => {
// do something when the event occurs,
// maybe using the result of the event e
})
Note that callback functions passed as event handlers are a situation where the one semantic difference between the arrow syntax and regular anonymous function syntax really matters. In the body of the arrow function above, this will be bound to the context of the caller, which is probably what you want if you are coding a class for a reusable component. For functions defined with the function keyword, what this refers to depends on how the function is invoked, and the precise behaviour may also vary with the particular JavaScript engine and the mode of execution.
Here’s a situation where this makes a difference. Recall that JavaScript functions are just objects. Therefore, we can also assign properties to them. We can do this from within the function itself using the this keyword:
function Counter() {
this.count = 0;
setInterval(function increment() {
console.log(this.count++)
}, 500);
}
const ctr = new Counter();
But, if I run this program at a console, I get the following, each line emitted 500 milliseconds apart:
NaN
NaN
NaN
…
This occurs because the this inside the function passed to setInterval is not bound to our Counter instance; it is determined by how setInterval invokes the callback (typically the global object). Since that object has no count property, we are trying to apply ++ to undefined and the result is NaN (Not a Number).
Arrow functions have different scoping rules for this. That is, they take the this of the enclosing scope (outside the arrow function), so in the following we get the expected behaviour:
function Counter() {
this.count = 0;
setInterval(()=>console.log(this.count++), 500);
}
new Counter();
0
1
2
…
Continuations
Continuations are functions which, instead of returning the result of a computation directly to the caller, pass the result on to another function, specified by the caller.
We can rewrite basically any function to pass its result to a user-specified continuation function instead of returning the result directly. The parameter done in the continuationPlus function below will be a function specified by the caller to do something with the result.
function simplePlus(a, b) {
return a + b;
}
function continuationPlus(a, b, done) {
done(a+b);
}
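The caller chooses what happens to the result by supplying the continuation (the definition is repeated here so the snippet stands alone):

```javascript
function continuationPlus(a, b, done) {
  done(a + b)
}

// two different continuations for the same computation
continuationPlus(1, 2, result => console.log(result))      // logs 3
continuationPlus(1, 2, result => console.log(result * 10)) // logs 30
```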
We can also rewrite tail-recursive functions to end with continuations, which specify some custom action to perform when the recursion is complete:
function tailRecFactorial(a, n) {
    return n <= 1 ? a : tailRecFactorial(n*a, n-1);
}
function continuationFactorial(a, n, finalAction = result => {})
{
    if (n <= 1) finalAction(a);
    else continuationFactorial(n*a, n-1, finalAction);
}
Continuations are essential in asynchronous processing, which abounds in web programming. For example, when an HTTP request is dispatched by a client to a server, there is no knowing precisely when the response will be returned (it depends on the speed of the server, the network between client and server, and the load on that network). However, we can be sure that it will not be instant, and certainly not before the line of code following the dispatch is executed by the interpreter.
Thus, continuation-style callback functions are typically passed through to functions which trigger such asynchronous behaviour, for those callback functions to be invoked when the action is completed.
A simple example of an asynchronous function invocation is the built-in setTimeout function, which schedules an action to occur after a certain delay. The setTimeout function itself returns immediately after dispatching the job, e.g. to the JavaScript event loop:
setTimeout(() => console.log('done.'), 0);
// the above tells the event loop to execute
// the continuation after 0 milliseconds delay.
// even with a zero-length delay, the synchronous code
// after the setTimeout will be run first…
console.log('job queued on the event loop…');
job queued on the event loop…
done.
Function and Method Chaining
Chained functions are a common pattern.
Take a simple linked-list data structure as an example. We’ll hard code a list object to start off with:
const l = {
data: 1,
next: {
data: 2,
next: {
data: 3,
next: null
}
}
}
We can create simple functions similar to those of Array, which we can chain:
const
map = (f,l) => l ? ({data: f(l.data), next: map(f,l.next)}) : null
, filter = (f,l) => !l ? null :
(next =>
f(l.data) ? ({data: l.data, next})
: next
) (filter(f,l.next))
// the above is using an IIFE such that
// the function parameter `next` is used
// like a local variable for filter(f,l.next)
, take = (n,l) => l && n ? ({data: l.data, next: take(n-1,l.next)})
: null
We can chain calls to these functions like so:
take(2,
filter(x=> x%2 === 0,
map(x=> x+1, l)
)
)
{ data: 2, next: { data: 4, next: null }}
Fluent Interfaces (pure vs impure)
In the chained function calls above you have to read them inside-out to understand the flow. Also, keeping track of how many brackets to close gets a bit annoying. Thus, in object oriented languages you will often see class definitions that allow for method chaining, by providing methods that return an instance of the class itself, or another chainable class.
class List {
    constructor(head) { this.head = head }
    map(f) {
        return new List(map(f, this.head))
    }
    // ... filter and take can be added following the same pattern
}
Then the same flow as above is possible without the nesting and can be read left-to-right, top-to-bottom:
new List(l)
.map(x=>x+1)
.filter(x=>x%2===0)
.take(2)
This is called fluent programming style. Interfaces in object-oriented languages that chain a sequence of method calls (as above) are often called fluent interfaces. One thing to be careful about with fluent interfaces in languages that do not enforce purity is that the methods may or may not be pure.
That is, the type system does not warn you whether the method mutates the object upon which it is invoked and simply returns this, or creates a new object, leaving the original object untouched. We can see, however, that List.map as defined above, creates a new list and is pure.
Computation with Pure Functions
Pure functions may seem restrictive, but in fact pure function expressions and higher-order functions can be combined into powerful programs. In fact, anything you can compute with an imperative program can be computed through function composition. Side effects are required eventually, but they can be managed and the places they occur can be isolated. Let’s do a little demonstration, although it might be a bit impractical, we’ll make a little list processing environment with just functions:
const cons = (_head, _rest)=> selector=> selector(_head, _rest);
With just the above definition we can construct a list (the term cons dates back to LISP) with three elements, terminated with null, like so:
const list123 = cons(1, cons(2, cons(3, null)));
The data element, and the reference to the next node in the list are stored in the closure returned by the cons function. Created like this, the only side-effect of growing the list is creation of new cons closures. Mutation of more complex structures such as trees can be managed in a similarly ‘pure’ way, and surprisingly efficiently, as we will see later in this course.
So cons is a function that takes two parameters _head and _rest (the _ prefix is just to differentiate them from the functions I create below), and returns a function that itself takes a function (selector) as argument. The selector function is then applied to _head and _rest.
The selector function that we pass to the list is our ticket to accessing its elements:
list123((_head, _rest)=> _head)
1
list123((_,r)=>r)((h,_)=>h) // we can call the parameters whatever we like
2
We can create accessor functions to operate on a given list (by passing the list the appropriate selector function):
const
head = list=> list((h,_)=>h),
rest = list=> list((_,r)=>r)
Now, head gives us the first data element from the list, and rest gives us another list. Now we can access things in the list like so:
const one = head(list123), // ===1
list23 = rest(list123),
two = head(list23), // ===2
… // and so on
Now, here’s the ubiquitous map function:
const map = (f, list)=> !list ? null
: cons(f(head(list)), map(f, rest(list)))
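Putting the pieces together (the definitions are repeated so this sketch is self-contained):

```javascript
// a list node is a function awaiting a selector over (head, rest)
const cons = (_head, _rest) => selector => selector(_head, _rest)
const head = list => list((h, _) => h)
const rest = list => list((_, r) => r)

// map over a cons-list, building a new list of transformed values
const map = (f, list) => !list ? null
  : cons(f(head(list)), map(f, rest(list)))

const list123 = cons(1, cons(2, cons(3, null)))
const squared = map(x => x * x, list123)

console.log(head(squared))             // 1
console.log(head(rest(squared)))       // 4
console.log(head(rest(rest(squared)))) // 9
```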
In the above, we are using closures to store data. It’s just a trick to show the power of functions and to put us into the right state of mind for the Lambda Calculus – which provides a complete model of computation using only anonymous functions like those above. In a real program I would expect you would use JavaScript’s class and object facilities to create data structures.
Towards Lambda Calculus and Church Encoding
Thus, with only pure function expressions and JavaScript conditional expressions (?:) we can begin to perform complex computations. We can actually go further and eliminate the conditional expressions with more functions! Here’s the gist of it: we wrap list nodes with another function of two arguments, one argument, whenempty, is a function to apply when the list is empty, the other argument, notempty, is applied by all internal nodes in the list. An empty list node (instead of null) applies the whenempty function when visited, a non-empty node applies the notempty function. The implementations of each of these functions then form the two conditions to be handled by a recursive algorithm like map or reduce. See “Making Data out of Functions” by Braithwaite for a more detailed exposition of this idea.
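A minimal sketch of the idea; the names whenempty and notempty follow the text above, but the details here are illustrative rather than canonical. Note that reduce below contains no ?: conditional: each node itself selects which handler to apply.

```javascript
// an empty list applies the whenempty handler; a cons node applies
// the notempty handler with its head and rest
const emptyList = (whenempty, notempty) => whenempty()
const cons = (head, rest) => (whenempty, notempty) => notempty(head, rest)

// reduce: the two handlers are the two cases of the recursive algorithm
const reduce = (f, initial, list) =>
  list(
    () => initial,                                    // empty case
    (head, rest) => f(reduce(f, initial, rest), head) // non-empty case
  )

const l = cons(1, cons(2, cons(3, emptyList)))
console.log(reduce((acc, x) => acc + x, 0, l)) // 6
```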
These ideas, of computation through pure function expressions, are inspired by Alonzo Church’s lambda calculus. We’ll be looking again at the lambda calculus later. Obviously, for the program to be at all useful you will need some sort of side effect, such as outputting the results of a computation to a display device. When we begin to explore PureScript and Haskell later in this course we will discuss how such languages manage this trick while remaining “pure”.
Updating Data Structures With Pure Functions
We saw in the introduction to JavaScript that one can create objects with a straightforward chunk of JSON:
const studentVersion1 = {
name: "Tim",
assignmentMark: 20,
examMark: 15
}
studentVersion1
{name: "Tim", assignmentMark: 20, examMark: 15}
Conveniently, one can copy all of the properties from an existing object into a new object using the “spread” operator ..., followed by more JSON properties that can potentially overwrite those of the original. For example, the following creates a new object with all the properties of the first, but with a different assignmentMark:
const studentVersion2 = {
...studentVersion1,
assignmentMark: 19
}
studentVersion2
{name: "Tim", assignmentMark: 19, examMark: 15}
One can encapsulate such updates in a succinct pure function:
function updateExamMark(student, newMark) {
return {...student, examMark: newMark}
}

const studentVersion3 = updateExamMark(studentVersion2, 19)
studentVersion3
{name: "Tim", assignmentMark: 19, examMark: 19}
Note that when we declared each of the variables studentVersion1-3 as const, these variables are only constant in the sense that the object reference cannot be changed. That is, they cannot be reassigned to refer to different objects:
studentVersion1 = studentVersion2
VM430:1 Uncaught TypeError: Assignment to constant variable.
However, there is nothing in these definitions to prevent the properties of those objects from being changed:
studentVersion1.name = "Tom"
studentVersion1
{name: "Tom", assignmentMark: 20, examMark: 15}
We will see later how the TypeScript compiler allows us to create deeply immutable objects that will trigger compile errors if we try to change their properties.
You may wonder how pure functions can be efficient if the only way to mutate data structures is by returning a modified copy of the original. There are two responses to such a question, one is: “purity helps us avoid errors in state management through wanton mutation effects – in modern programming correctness is often a bigger concern than efficiency”, the other is “properly structured data permits log(n) time copy-updates, which should be good enough for most purposes”. We’ll explore what is meant by the latter in later sections of these notes.
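A first hint at how such efficiency is possible: the spread copies we made above are shallow, so unchanged nested structure is shared between versions rather than duplicated. Persistent data structures exploit exactly this kind of sharing, safely, because nothing is ever mutated:

```typescript
const original = { name: "Tim", marks: [20, 15] };
const copy = { ...original, name: "Tom" };

// only the top-level object was copied; the nested marks array is shared
console.log(copy.marks === original.marks); // true
```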
TypeScript Introduction
Learning Outcomes
· Create programs in TypeScript using types to ensure correctness at compile time.
· Explain how TypeScript features like Interfaces, Union Types and Optional Properties allow us to model types in common JavaScript coding patterns with precision.
· Explain how Generics allow us to create type safe but general and reusable code.
· Compare and contrast strongly, dynamically and gradually typed languages.
· Describe how compilers that support type inference assist us in writing type safe code.
Introduction
As the Web 2.0 revolution hit in the 2000s, web apps built on JavaScript grew increasingly complex and today, applications like Google Docs are as intricate as anything built over the decades in C++. In the 90s I for one (though I don’t think I was alone) thought that this would be impossible in a dynamically typed language. It is just too easy to make simple mistakes (as simple as typos) that won’t be caught until run time. It’s likely only been made possible due to increased rigour in testing. That is, instead of relying on a compiler to catch mistakes, you rely on a comprehensive suite of tests that evaluate running code before it goes live.
Part of the appeal of JavaScript is that the source code being the artefact that runs directly in a production environment gives an immediacy and comprehensibility to software deployment. However, in recent years more and more tools have been developed that introduce a build-chain into the web development stack. Examples include: minifiers, which compact and obfuscate JavaScript code before it is deployed; bundlers, which merge different JavaScript files and libraries into a single file to (again) simplify deployment; and also, new languages that compile to JavaScript, which seek to fix older versions of the JavaScript language’s shortcomings and compatibility issues in different browsers. Examples of the latter include CoffeeScript, ClojureScript and (more recently) PureScript (which we will visit later in this unit). Right now, however, we will take a closer look at another language in this family called TypeScript. See the official TypeScript documentation for some tutorials and deeper reference.
TypeScript is interesting because it forms a relatively minimal augmentation, or superset, of EcmaScript syntax that simply adds type annotations. For the most part, the compilation process simply performs validation on the declared types and strips away the type annotations, rendering just the legal JavaScript ready for deployment. This lightweight compilation into a language with a similar level of abstraction to the source is also known as transpiling (as opposed to C++ or Java, where the object code is much closer to the machine execution model).
The following is intended as a minimally sufficient intro to TypeScript features such that we can type some fairly rich data structures and higher-order functions. An excellent free resource for learning the TypeScript language in depth is the TypeScript Deep-dive book.
Type annotations
Type annotations in TypeScript come after the variable name’s declaration, like so:
let i: number = 123;
Actually, in this case the type annotation is completely redundant. The TypeScript compiler features sophisticated type inference. In this case it can trivially infer the type from the type of the literal.
Previously, we showed how rebinding such a variable to a string in JavaScript is considered perfectly fine by the JavaScript interpreter. However, such a change of type in a variable is a dangerous pattern that is likely an error on the programmer’s part. The TypeScript compiler will generate an error:
let i = 123;
i = 'hello!';
[TS compiler says] Type '"hello!"' is not assignable to type 'number'.
Type Annotations Cheat Sheet
(Each of these features is described in more detail in subsequent sections – this is just a summary and roadmap)
Type annotation begins with : and goes after the variable name, but before any assignment. Primitive types include number, string, boolean.
let x: number, s: string, b: boolean = false;
x = "hello" // type error: x can only be assigned numbers!
Note: the primitive types begin with a lower-case letter and are not to be mistaken for Number, String and Boolean which are not types at all but Object wrappers with some handy properties and methods. Don’t try to use these Object wrappers in your type definitions.
Union types allow more than one option for the type of value that can be assigned to a variable:
let x: number | string;
x = 123;
x = "hello" // no problem now
x = false; // type error! only numbers or strings allowed.
Function parameter types are declared similarly, the return type of the function goes after the ) of the parameter list:
function theFunction(x: number, y: number): number {
return x + y; // returns a number
}
When working with higher-order functions you’ll need to pass functions into and/or return them from other functions. Function types use the fat-arrow syntax:
function curry(f: (x:number, y:number)=>number): (x:number)=>(y:number)=>number {
return x=>y=>f(x,y)
}
The curry function above only works for functions that are operations on two numbers. We can make it generic by parameterising the argument types.
function curry<U,V,W>(f:(x:U,y:V)=>W): (x:U)=>(y:V)=>W {
return x=>y=>f(x,y)
}
We can declare types for objects with multiple properties using interfaces
interface Student {
name: string
mark: number
}
We can make all the properties of a type immutable by wrapping it in Readonly
const student: Readonly<Student> = { name: "Tim", mark: 51 }
When type annotations get long and complex we can declare aliases for them using the type keyword:
type CurriedFunc<U,V,W> = (x:U)=>(y:V)=>W

function curry<U,V,W>(f:(x:U,y:V)=>W): CurriedFunc<U,V,W> {
return x=>y=>f(x,y)
}
Why should we declare types?
Declaring types for variables, functions and their parameters, and so on, provides more information to the person reading and using the code, but also to the compiler, which will check that you are using them consistently and correctly. This prevents a lot of errors.
Consider that JavaScript is commonly used to manipulate HTML pages. For example, I can get the top level headings in the current page, like so:
const headings = document.getElementsByTagName("h1")
In the browser, the global document variable always points to the currently loaded HTML document. Its method getElementsByTagName returns a collection of elements with the specified tag, in this case h1.
Let’s say I want to indent the first heading by 100 pixels, I could do this by manipulating the “style” attribute of the element:
headings[0].setAttribute("style", "padding-left:100px")
Now let’s say I do a lot of indenting and I want to build a library function to simplify manipulating padding of HTML elements.
function setLeftPadding(elem, value) {
elem.setAttribute("style", `padding-left:${value}`)
}
setLeftPadding(headings[0], "100px")
But how will a user of this function (other than myself, or myself in three weeks when I can’t remember how I wrote this code) know what to pass in as parameters? elem is a pretty good clue that it needs to be an instance of an HTML element, but is value a number or a string?
In TypeScript we can make the expectation explicit:
function setLeftPadding(elem: Element, value: string) {
elem.setAttribute("style", `padding-left:${value}`)
}
If I try to pass something else in, the TypeScript compiler will complain:
setLeftPadding(headings[0],100)
Argument of type '100' is not assignable to parameter of type 'string'.ts(2345)
In JavaScript, the interpreter would silently convert the number to a string and set padding-left:100 – which wouldn’t actually cause the element to be indented because CSS expects px (short for pixel) at the end of the value.
Potentially worse, I might forget to add the index after headings:
setLeftPadding(headings,100)
This would cause a run-time error in the browser:
VM360:1 Uncaught TypeError: headings.setAttribute is not a function
This is because headings is not an HTML Element but an HTMLCollection, with no method setAttribute. Note that if I try to debug it, the error will be reported from a line of code inside the definition of setLeftPadding. I don’t know if the problem is in the function itself or in my call.
The same call inside a TypeScript program would trigger a compile-time error. In an editor with compiler services, such as VSCode, I’d know about it immediately: the call would get a red squiggly underline as soon as I typed it, I could hover over the squiggly to see a detailed error message, and the broken JavaScript would never make it into production.
Union Types
Above we saw how TypeScript prevents a variable from being assigned values of different types. However, it is a fairly common practice in JavaScript to implicitly create overloaded functions by accepting arguments of different types and resolving them at run-time.
The following will append the “px” after the value if a number is passed in, or simply use the given string (assuming the user added their own “px”) otherwise.
function setLeftPadding(elem, value) {
if (typeof value === “number”)
elem.setAttribute("style", `padding-left:${value}px`)
else
elem.setAttribute("style", `padding-left:${value}`)
}
So this function accepts either a string or a number for the value parameter – but to find that out we need to dig into the code. The “Union Type” facility in TypeScript allows us to specify the multiple options directly in the function definition, with a list of types separated by “|”:
function setLeftPadding(elem: Element, value: string | number) {…
Going further, the following allows either a number, or a string, or a function that needs to be called to retrieve the string. It uses typeof to query the type of the parameter and do the right thing in each case.
function setLeftPadding(elem: Element, value: number | string | (()=>string)) {
if (typeof value === “number”)
elem.setAttribute("style", `padding-left:${value}px`)
else if (typeof value === "function")
elem.setAttribute("style", `padding-left:${value()}`)
else
elem.setAttribute("style", `padding-left:${value}`)
}
The TypeScript typechecker also knows about typeof expressions (as used above) and will also typecheck the different clauses of if statements that use them for consistency with the expected types.
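For example, in this little function of our own devising, the compiler narrows value to number in the first branch (so the number method toFixed is allowed) and to string in the second (so toUpperCase is allowed):

```typescript
function describe(value: number | string): string {
  if (typeof value === "number")
    return value.toFixed(2);     // here value is narrowed to number
  else
    return value.toUpperCase();  // here it must be a string
}

describe(3);    // "3.00"
describe("hi"); // "HI"
```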
Interfaces
In TypeScript I can declare an interface which defines the set of properties, and their types, that I expect to be available for certain objects.
For example, when tallying scores at the end of semester, I will need to work with collections of students that have a name, assignment and exam marks. There might even be some special cases which require mark adjustments, the details of which I don’t particularly care about but that I will need to be able to access, e.g. through a function particular to that student. The student objects would need to provide an interface that looks like this:
interface Student {
name: string
assignmentMark: number
examMark: number
markAdjustment(): number
}
Note that this interface guarantees that there is a function returning a number called markAdjustment available on any object implementing the Student interface, but it says nothing about how the adjustment is calculated. Software engineers like to talk about Separation of Concerns (SoC). To me, the implementation of markAdjustment is Someone Else’s Problem (SEP).
Now I can define functions which work with students, for example to calculate the average score for the class:
function averageScore(students: Student[]): number {
return students.reduce((total, s) =>
total + s.assignmentMark + s.examMark + s.markAdjustment(), 0)
/ students.length
}
Other parts of my program might work with richer interfaces for the student database—with all sorts of other properties and functions available—but for the purposes of computing the average class score, the above is sufficient.
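For example (with made-up students and trivial markAdjustment implementations, and repeating the declarations from above so the snippet stands alone), any object literal with the right shape satisfies the interface:

```typescript
interface Student {
  name: string
  assignmentMark: number
  examMark: number
  markAdjustment(): number
}

function averageScore(students: Student[]): number {
  return students.reduce((total, s) =>
    total + s.assignmentMark + s.examMark + s.markAdjustment(), 0)
    / students.length
}

// hypothetical students; how markAdjustment is computed is Someone Else's Problem
const students: Student[] = [
  { name: "Alice", assignmentMark: 30, examMark: 40, markAdjustment: () => 0 },
  { name: "Bob",   assignmentMark: 25, examMark: 35, markAdjustment: () => 5 }
];

averageScore(students); // 67.5
```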
Generic Types
Sometimes when we do marking we get lists of students indexed by their Student Number (a number). Sometimes it’s by email address (a string). You can see that the concept of student numbers probably predates the existence of student emails (yes, universities often predate the internet!). What if one day our systems use yet another identifier? We can future-proof a program that works with lists of students by deferring the decision of the particular type of the id:
interface Student<T> {
  id: T;
  name: string;
  // ... other properties such as marks etc.
}
Here T is the type parameter, and the compiler will infer its type when it is used. It could be number, or it could be a string (e.g. if it’s an email), or it could be something else.
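For example, the same interface can describe students keyed either way (instantiated explicitly here, though the type argument is usually inferred; the email below is a made-up placeholder):

```typescript
interface Student<T> {
  id: T;
  name: string;
}

const byNumber: Student<number> = { id: 123, name: "Harry Smith" };
const byEmail: Student<string> = { id: "harry.smith@example.edu", name: "Harry Smith" };
```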
Type parameters can have more descriptive names if you like, but they must start with a capital. The convention though is to use rather terse single letter parameter names in the same vicinity of the alphabet as T. This habit comes from C++, where T used to stand for “Template”, and the terseness stems from the fact that we don’t really care about the details of what it is.
As in function parameter lists, you can also have more than one type parameter:
interface Student<T,U> {
  id: T;
  name: string;
  someOtherThingThatWeDontCareMuchAbout: U
}
Formally, this is a kind of “parametric polymorphism”. The T and U here may be referred to as type parameters or type variables. We say that id has generic type.
You see generic types definitions used a lot in algorithm and data structure libraries, to give a type—to be specified by the calling code—for the data stored in the data structures. For example, the following interface might be the basis of a linked list element:
interface IListNode<T> {
  data: T;
  next?: IListNode<T>;
}
The specific type of T will be resolved when data is assigned a specific type value.
We can add type parameters to interfaces, ES6 classes, type aliases, and also functions. Consider the function which performs a binary search over an array of numbers (it assumes the array is sorted):
function binarySearch1(arr: number[], key: number): number {
  function bs(start: number, end: number): number {
    if (start > end) return -1;
    const mid = Math.floor((start + end) / 2);
    if (key > arr[mid]) return bs(mid + 1, end);
    if (key < arr[mid]) return bs(start, mid - 1);
    return mid;
  }
  return bs(0, arr.length - 1);
}

const studentsById = [
  { id: 123, name: "Harry Smith" },
  { id: 125, name: "Cindy Wu" },
  // ...
]
const numberIds = studentsById.map(s => s.id);
console.log(studentsById[binarySearch1(numberIds,125)].name)
Cindy Wu
If we parameterise the type of elements in the array, we can search on sorted arrays of strings as well as numbers:
function binarySearch2<T>(arr: T[], key: T): number {
  function bs(start: number, end: number): number {
    if (start > end) return -1;
    const mid = Math.floor((start + end) / 2);
    if (key > arr[mid]) return bs(mid + 1, end);
    if (key < arr[mid]) return bs(start, mid - 1);
    return mid;
  }
  return bs(0, arr.length - 1);
}

const studentsByEmail = [
  { id: "cindy.wu@example.edu", name: "Cindy Wu" },
  { id: "harry.smith@example.edu", name: "Harry Smith" },
  // ...
]
const stringIds = studentsByEmail.map(s => s.id);
console.log(studentsByEmail[binarySearch2(stringIds, "harry.smith@example.edu")].name)
Harry Smith
Why is this better than raw JavaScript with no type checking, or simply using TypeScript’s wildcard any type? Well it ensures that we use the types consistently. For example:
binarySearch2(numberIds, "cindy.wu@example.edu")
TYPE ERROR!
The binarySearch2 function above is usable with more types than binarySearch1, but it still requires that T does something sensible with < and >.
We can add a function to use for comparison, so now we can use it with students uniquely identified by some other weird thing that we don’t even know about yet:
function binarySearch3<T>(arr: T[], key: T, compare: (a: T, b: T) => number): number {
  function bs(start: number, end: number): number {
    if (start > end) return -1;
    const mid = Math.floor((start + end) / 2),
          comp = compare(key, arr[mid]);
    if (comp > 0) return bs(mid + 1, end);
    if (comp < 0) return bs(start, mid - 1);
    return mid;
  }
  return bs(0, arr.length - 1);
}
Elsewhere in our program where we know how students are sorted, we can specify the appropriate compare function:
binarySearch3(students, theStudent, (a, b) => /* return 1 if a is greater than b, 0 if they are the same, -1 otherwise */)
We can also have multiple type parameters for a single function. The following version of curry uses type parameters to catch errors without loss of generality:
function curry<U,V,W>(f:(x:U,y:V)=>W): (x:U)=>(y:V)=>W {
return x=>y=>f(y,x) // error!
}
The TypeScript compiler underlines y,x and says:
Error: Argument of type 'V' is not assignable to parameter of type 'U'. 'U' could be instantiated with an arbitrary type which could be unrelated to 'V'.
So it’s complaining about our passing a y of type V into a parameter that should be a U, and vice versa for x. We flip them back and we are good again… but TypeScript helps us make sure we use the function consistently too:
function curry<U,V,W>(f:(x:U,y:V)=>W): (x:U)=>(y:V)=>W {
return x=>y=>f(x,y) // good now!
}

function prefix(s: string, n: number) {
return s.slice(0,n);
}

const first = curry(prefix)

first(3)("hello") // Error!
Error: Argument of type 'number' is not assignable to parameter of type 'string'.
Error: Argument of type 'string' is not assignable to parameter of type 'number'.
So the error messages are similar to above, but now they list concrete types because the types for U and V have already been narrowed by the application of curry to prefix.
first("hello")(3) // good now!
So type checking helps us to create functions correctly, but also to use them correctly. You begin to appreciate type checking more and more as your programs grow larger and the error messages appear further from their definitions. However, the most important thing is that these errors are being caught at compile time rather than at run time, when it might be too late!
Optional Properties
Look again at the next property of IListNode:
next?: IListNode<T>;
The ? after the next property means that this property is optional. Thus, if node is an instance of IListNode, we can test if it has any following nodes:
typeof node.next === 'undefined'
or simply:
!node.next
In either case, if these expressions are true, it would indicate the end of the list.
A concrete implementation of a class for IListNode can provide a constructor:
class ListNode<T> implements IListNode<T> {
  constructor(public data: T, public next?: IListNode<T>) {}
}
Then we can construct a list like so:
const list = new ListNode(1, new ListNode(2, new ListNode(3)));

Exercises
· Implement a class List<T> whose constructor takes an array parameter and creates a linked list of ListNode<T>.
· Add methods to your List class for:
forEach(f: (_:T)=> void): List<T>
filter(f: (_:T)=> boolean): List<T>
map<V>(f: (_:T)=> V): List<V>
reduce<V>(f: (accumulator:V, t:T)=> V, initialValue: V): V
Note that each of these functions (except reduce, which returns the accumulated value) returns a list, so that we can chain the operations, e.g.:
list.filter(x=>x%2===0).reduce((x,y)=>x+y,0)

Using the compiler to ensure immutability
We saw earlier, that while an object reference can be declared const:
const studentVersion1 = {
name: "Tim",
assignmentMark: 20,
examMark: 15
}
Which prevents reassigning studentVersion1 to any other object, the const declaration does not prevent properties of the object from being changed:
studentVersion1.name = "Tom"
The TypeScript compiler will not complain about the above assignment at all. However, TypeScript does give us the possibility to add an as const after creating an object to make it deeply immutable:
const studentVersion2 = {
name: "Tim",
assignmentMark: 20,
examMark: 15
} as const
studentVersion2.name = "Tom"
Cannot assign to 'name' because it is a read-only property.ts(2540)
The above is a singleton immutable Object. However, more generally, if we need multiple instances of a deeply immutable object, we can declare immutable types using the Readonly construct:
type ImmutableStudent = Readonly<{ name: string; assignmentMark: number; examMark: number; }>

const studentVersion3: ImmutableStudent = {
name: "Tim",
assignmentMark: 20,
examMark: 15
}

studentVersion3.name = "Tom"
Again, we get the squiggly underline in the editor, and the same read-only property error from the compiler.
Typing systems with different ‘degrees’ of strictness
C++ is considered a strongly typed language in the sense that all types of values and variables must match up on assignment or comparison. Further, it is “statically” typed in that the compiler requires complete knowledge (at compile-time) of the type of every variable. This can be overridden (type can be cast away and void pointers passed around) but the programmer has to go out of their way to do it (i.e. opt-out).
JavaScript, by contrast, as we have already mentioned, is dynamically typed in that types are only checked at run time. Run-time type errors can occur and be caught by the interpreter on primitive types, for example the user tried to invoke an ordinary object like a function, or refer to an attribute that doesn’t exist, or to treat an array like a number.
TypeScript represents a relatively new trend in being a gradually typed language. Another way to think about this is that, by default, the type system is opt-in. Unless declared otherwise, all variables have type any. The any type is like a wild card that always matches, whether the any type is the target of the assignment:
let value; // has implicit type any
value = "hello";
value = 123;
// no error.
Or the source (r-value) of the assignment:
let value: number;
value = "hello";
//[ts] Type '"hello"' is not assignable to type 'number'.

let anyValue: any = "hello";
value = anyValue;
// no error.
While leaving off type annotations and forcing types with any may be convenient, for example, to quickly port legacy JavaScript into a TypeScript program, generally speaking it is good practice to use types wherever possible – this can actually be enforced with the --noImplicitAny compiler flag. The compiler’s type checker is a sophisticated constraint satisfaction system and the correctness checks it applies are usually worth the extra effort – especially in modern compilers like TypeScript where type inference does most of the work for you.
Lazy Evaluation
Learning Outcomes
· Understand how functions can be used to delay evaluation of code until the result is actually required
· Understand how this lazy evaluation allows for the definition of infinite sequences
· Code and use lazily evaluated infinite number sequences
Introduction
Usually, expressions in imperative languages are fully evaluated, each step immediately after the previous step. This is called strict or eager evaluation. Functions, however, give us a way to execute code (or evaluate expressions) only when they are really required. This is called lazy evaluation. As an eager student at Monash University you will be unfamiliar with the concept of laziness, since you always do all your work as soon as possible because you love it so much. This is obviously the best strategy for most things in life (and definitely for studying for this course), but laziness as a programming paradigm can sometimes enable different ways to model problems. Early in our introduction to TypeScript we created a function setLeftPadding that took as argument, either an immediate value or a function that returns a value (simplifying a bit):
function setLeftPadding(elem: Element, value: string | (()=>string)) {
if (typeof value === "string")
elem.setAttribute("style", `padding-left:${value}`)
else // value must be a function
elem.setAttribute("style", `padding-left:${value()}`)
}
We didn’t discuss this much at the time, but this potentially lets us delay the evaluation of an expression. We don’t have to have the value ready at the time of the function call, rather we can provide a computation to obtain the value at a later time.
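We can observe the saving directly by counting invocations. Here expensive is a stand-in for some costly computation, and firstDefined is a little helper of our own that only falls back to the lazy computation when it has to:

```typescript
let evaluations = 0;
const expensive = () => { evaluations++; return 42; };

// returns a if it is defined, otherwise falls back to computing b()
function firstDefined(a: number | undefined, b: () => number): number {
  return a !== undefined ? a : b();
}

firstDefined(1, expensive);         // 1  - expensive is never called
console.log(evaluations);           // 0
firstDefined(undefined, expensive); // 42 - only now is it evaluated
console.log(evaluations);           // 1
```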
This is the essence of how functional languages can elevate laziness to a whole paradigm, and it is a powerful concept. For one thing, it allows us to define infinite lists. Previously we defined an interface for List nodes:
interface IListNode<T> {
  data: T;
  next?: IListNode<T>;
}
Compare the following definition which uses a function to access the next element in the list instead of a simple property:
interface LazySequence<T> {
  value: T;
  next(): LazySequence<T>;
}
Now we can define infinite sequences:
function naturalNumbers() {
return function _next(v:number): LazySequence<number> {
return {
value: v,
next: ()=>_next(v+1)
}
}(1)
}
Note that _next is immediately invoked in the return. If you are like me you will need to look at this for a little to get your head around it. To summarise: we are defining a function in a returned expression and immediately invoking it with 1 as the starting value. This pattern is used so often in JavaScript it has an acronym: IIFE or Immediately Invoked Function Expression. It was one way of achieving encapsulation before ES6 introduced proper classes.
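Here is a smaller IIFE of our own to illustrate the encapsulation point: counter is completely private to the returned function, much as a private field of a class would be:

```typescript
// the counter variable exists only inside the IIFE's scope;
// nothing outside can read or reset it
const nextId = (function () {
  let counter = 0;
  return () => ++counter;
})();

nextId(); // 1
nextId(); // 2
```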
const n = naturalNumbers();
n.value
1
n.next().value
2
n.next().next().value
3
Exercise
· Create a function take(n,seq) which returns a LazySequence of the first n elements of an infinite LazySequence of the sort generated by naturalNumbers. After returning the nth element, take should return undefined to indicate the end.
· Create map, filter and reduce functions (similar to those defined on Array.prototype) for such a sequence and use them along with take to create a solution for Project Euler Problem 1 (encountered earlier): sum of first n natural numbers not divisible by 3 or 5.
· Make a general purpose infinite sequence initialisation function that creates infinite lazy sequences. It will take as parameter a function to compute the next value from the current value. In other words, it should be a “factory” for functions like naturalNumbers. Thus, if we call our function initSequence, then initSequence(n=>n+1) will return a function equivalent to naturalNumbers.
· Use your general purpose sequence generator to generate fibonacci numbers.
Functional Reactive Programming
Learning Outcomes
· Understand that the Functional Reactive Programming Observable construct is just another container of elements, but whose “push-based” architecture allows them to be used to capture asynchronous behaviour
· Understand that Observables provide the benefits of functional programming: composability, reusability.
· See that Observables structure complex stateful programs in a more linear and understandable way that maps more easily to the underlying state machine.
· Use Observables to create simple UI programs in-place of asynchronous event handling.
Introduction
Functional Reactive Programming describes an approach to modelling complex, asynchronous behaviours that uses many of the functional programming principles we have already explored. In particular:
· Applications of functions to elements of containers to transform to a new container (e.g. map, filter, reduce etc. over arrays).
· Use of function composition and higher-order functions to define complex transformations from simple, reusable function elements.
We will explore FRP through an implementation of the Observable data structure in the Reactive Extensions for JavaScript (rx.js) library. We will then see it applied to a straightforward browser-based user interface problem.
Observable Streams
We have seen a number of different ways of wrapping collections of things in containers: built-in JavaScript arrays, linked-list data structures, and also lazy sequences. Now we’ll see that Observable is just another type of container with some simple examples, before demonstrating that it also easily applies to asynchronous streams.
You can also play with a live version of this code. Note that the code in this live version begins with a pair of import statements, bringing the set of functions that we describe below into scope for this file from the rxjs libraries:
import { of, range, fromEvent, merge, zip } from 'rxjs';
import { last, filter, scan, map, flatMap, take } from 'rxjs/operators';
Conceptually, the Observable data structure just wraps a collection of things in a container in a similar way to each of the above. The function of creates an Observable that will emit the specified elements in its parameter list in order. Similar to the lazy sequences though, nothing actually happens until we initialise the stream. We do this by “subscribing” to the Observable, passing in an “effectful” function that is applied to each of the elements in the stream. For example, we could print the elements out with console.log:
of(1,2,3,4)
.subscribe(console.log)
1
2
3
4
So, there is a similarity to the lazy sequence where nothing happened until we started calling next, but there is also a difference. You could think of our lazy sequences as being “pull-based” data structures, because we had to “pull” the values out one at a time by calling the next function as many times as we wanted elements of the list. Observables are a bit different. They are used to handle “streams” of things, such as asynchronous UI or communication events. These things are asynchronous in the sense that we do not know when they will occur.
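To get a feel for “push”, here is a toy stand-in for of (our own sketch, emphatically not the real rx.js implementation): instead of the consumer pulling values out with next(), subscribing causes the producer to push every value into the supplied function:

```typescript
// a toy push-based stream: subscribe pushes every value at the observer
function toyOf<T>(...values: T[]) {
  return {
    subscribe: (observer: (value: T) => void) => values.forEach(v => observer(v))
  };
}

const seen: number[] = [];
toyOf(1, 2, 3).subscribe(v => seen.push(v));
seen // [1, 2, 3]
```

The real Observable adds much more (asynchrony, completion, unsubscription), but the inversion of control is the essential difference from our pull-based lazy sequences.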
Just as we have done to each of the above data structures (arrays and so on) in previous chapters, we can define a transform over an Observable to create a new Observable. This transformation may have multiple steps the same way that we chained filter and map operations over arrays previously. In rx.js’s Observable implementation, however, they’ve gone a little bit more functional, by insisting that such operations are composed (rather than chained) inside a pipe. For example, here’s the squares of even numbers in the range [0,10):
range(10)
.pipe(
filter(x=>x%2===0),
map(x=>x*x))
.subscribe(console.log)
0
4
16
36
64
Now here’s the solution to the first Project Euler problem, the sum of numbers divisible by 3 or 5 under 1000:
range(1000)
.pipe(
filter(x=> x%3===0 || x%5===0),
scan((a,v)=>a+v),
last())
.subscribe(console.log);
233168
Scan is very much like the reduce function on Array in that it applies an accumulator function to the elements coming through the Observable, except instead of just outputting a single value (as reduce does), it emits a stream of the running accumulation (in this case, the sum so far). Thus, we use the last function to finally produce an Observable with the final value.
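The difference is easiest to see side by side. The following plain-array sketch (our own code, not rx.js) shows the values each would emit:

```typescript
const xs = [1, 2, 3, 4];

// reduce collapses everything to one final value:
const reduced = xs.reduce((acc, v) => acc + v); // 10

// scan emits every intermediate accumulation; we mimic its output with an array:
function scanArray<T>(arr: T[], f: (acc: T, v: T) => T): T[] {
  const out: T[] = [];
  arr.forEach(v => out.push(out.length === 0 ? v : f(out[out.length - 1], v)));
  return out;
}

scanArray(xs, (acc, v) => acc + v); // [1, 3, 6, 10]
```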
There are also functions for combining Observable streams. The zip function provides a pretty straightforward way to pair the values from two streams into an array:
const
columns = of('A','B','C'),
rows = range(3);

zip(columns,rows)
.subscribe(console.log)
[“A”,0]
[“B”,1]
[“C”,2]
If you like mathy vector speak, you can think of the above as an inner product of the two streams. By contrast, the flatMap operator gives the cartesian product of two streams. That is, it gives us a way to take, for every element of a stream, a whole other stream, but flattened (or projected) together with the parent stream. The following enumerates all the row/column indices of cells in a spreadsheet:
columns.pipe(
flatMap(column=>rows.pipe(
map(row=>[column,row])
))
).subscribe(console.log)
[“A”, 0]
[“A”, 1]
[“A”, 2]
[“B”, 0]
[“B”, 1]
[“B”, 2]
[“C”, 0]
[“C”, 1]
[“C”, 2]
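The same inner/cartesian distinction exists on plain arrays, and JavaScript's built-in Array flatMap shows the shape of the operation without any streams involved. This is just an illustrative sketch, not rxjs code:

```typescript
const cols = ["A", "B", "C"];
const rows = [0, 1, 2];

// zip-like pairing ("inner product" shape): element i pairs with element i
const zipped = cols.map((c, i) => [c, rows[i]]);          // [["A",0],["B",1],["C",2]]

// flatMap (cartesian product shape): every column paired with every row
const product = cols.flatMap(c => rows.map(r => [c, r])); // 9 pairs, as listed above
```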
Another way to combine streams is merge. Streams that are generated with of and range have all their elements available immediately, so the result of a merge is not very interesting, just the elements of one followed by the elements of the other:
merge(columns,rows)
.subscribe(console.log)
A
B
C
0
1
2
However, merge when applied to asynchronous streams will merge the elements in the order that they arrive in the stream. For example, a stream of key-down and mouse-down events from a web-page:
const
key$ = fromEvent(document, "keydown"),
mouse$ = fromEvent(document, "mousedown");
It’s a convention to end variable names referring to Observable streams with a $ (I like to think it’s short for “$tream”, or implies a plurality of the things in the stream, or maybe it’s just because cash rules everything around me).
The following lets us see in the console the keys pressed as they come in; it will keep running for as long as the web page is open:
key$.pipe(
map(e=>e.key)
).subscribe(console.log)
The following prints “Mouse Click!” on every mousedown:
mouse$.pipe(
map(_=>"Mouse Click!")
).subscribe(console.log)
Once again this will keep producing the message for every mouse click for as long as the page is open. Note that the subscribes do not “block”, so the above two subscriptions will run in parallel. That is, we will receive messages on the console for either key or mouse downs whenever they occur.
The following achieves the same thing with a single subscription using merge:
merge(key$.pipe(map(e=>e.key)),
  mouse$.pipe(map(_=>"Mouse Click!"))
).subscribe(console.log)
Observable Cheatsheet
The following is a very small (but sufficiently useful) subset of the functionality available for rx.js. I’ve simplified the types somewhat for readability and not always included all the optional arguments.
Creation
The following functions create Observable streams from various sources.
// produces the list of arguments as elements of the stream
of<T>(...args: T[]): Observable<T>

// produces a stream of numbers from 'start' until 'count' have been emitted
range(start?: number, count?: number): Observable<number>

// produces a stream for the specified event, element type depends on
// event type and should be specified by the type parameter, e.g.: MouseEvent, KeyboardEvent
fromEvent<T>(target: FromEventTarget<T>, eventName: string): Observable<T>

// produces a stream of increasing numbers, emitted every 'period' milliseconds
interval(period?: number): Observable<number>
Combination
Creating new Observable streams from existing streams
// create a new Observable stream from the merge of multiple Observable streams.
// The resulting stream will have elements of Union type,
// i.e. the type of the elements will be the Union of the types of each of the merged streams.
// Note: there is also an operator version.
merge<T, U>(t: Observable<T>, u: Observable<U>, ...): Observable<T | U>

// create n-ary tuples (arrays) of the elements at the head of each of the incoming streams
zip<T, U>(t: Observable<T>, u: Observable<U>): Observable<[T, U]>
Observable methods
Methods on the Observable object itself that may be chained.
// composes together a sequence of operators (see below) that are applied to transform the stream
pipe(...ops: OperatorFunction<any, any>[]): Observable<any>;

// {next} is a function applied to each element of the stream
// {error} is a function applied in case of error (only really applicable for communications)
// {complete} is a function applied on completion of the stream (e.g. cleanup)
// @return {Subscription} returns an object whose "unsubscribe" method may be called to cleanup,
// e.g. unsubscribe could be called to cancel an ongoing Observable
subscribe(next?: (value: T) => void, error?: (error: any) => void, complete?: () => void): Subscription;
Operators
Operators are passed to pipe. They all return an OperatorFunction which is used by pipe.
// transform the elements of the input stream using the 'project' function
map<T, R>(project: (value: T) => R)

// only take elements which satisfy the predicate
filter<T>(predicate: (value: T) => boolean)

// take 'n' elements
take(n: number)

// take the last element
last()

// AKA mergeMap: produces an Observable for every input stream element
flatMap<T, R>(project: (value: T) => Observable<R>)

// accumulates values from the stream
scan<T, R>(accumulator: (acc: R, value: T) => R, seed?: R)
A User Interface Example
Modern computer systems often have to deal with asynchronous processing. Examples abound:
· In RESTful web services, where a client sends a non-blocking request (e.g. GET) with no guarantee of when the server will send a response.
· In user interfaces, events are triggered by user interaction with different parts of the interface, which may happen at any time.
· In robotics and other systems with sensors, the system must respond to events in the physical world.
Under the hood, most of these systems work on an event model, a kind of single-threaded multitasking where the program (after initialisation) polls a FIFO (First-In-First-Out) queue for incoming events in the so-called event loop. When an event is popped from the queue, any subscribed actions for the event will be applied.
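As a sketch of the idea (all names here are hypothetical, not any real browser API), such an event loop is just a FIFO queue of events, each dispatched to the handlers subscribed for it:

```typescript
type Handler = (payload: string) => void;

const handlers: { [event: string]: Handler[] } = {};
const queue: { event: string; payload: string }[] = []; // the FIFO

const subscribe = (event: string, h: Handler) =>
  (handlers[event] = (handlers[event] || []).concat(h));

const push = (event: string, payload: string) => queue.push({ event, payload });

// The "event loop": pop events First-In-First-Out and apply subscribed actions.
const run = () => {
  while (queue.length) {
    const e = queue.shift()!;
    (handlers[e.event] || []).forEach(h => h(e.payload));
  }
};

const log: string[] = [];
subscribe("click", p => log.push(`clicked ${p}`));
push("click", "button1");
push("click", "button2");
run(); // log is now ["clicked button1", "clicked button2"]
```

A real event loop runs forever and events arrive asynchronously, but the queue-and-dispatch shape is the same.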
In JavaScript the first event loop you are likely to encounter is the browser’s. Every object in the DOM (Document Object Model – the tree data structure behind every webpage) has events that can be subscribed to, by passing in a callback function which implements the desired action. We saw a basic click handler earlier.
Handling a single event in such a way is pretty straightforward. Difficulties arise when events have to be nested to handle a (potentially-bifurcating) sequence of possible events.
A simple example that begins to show the problem is implementing a UI to allow a user to drag an object on (e.g.) an SVG canvas (play with it here!). The state machine that models this is pretty simple:

There are only three transitions, each triggered by an event.
Turning a State-Machine into Code with Event Listeners
The typical way to add interaction to web-pages and other UIs has historically been to attach event listeners. In software engineering terms this is the Observer Pattern (not to be confused with the “Observable” FRP abstraction we have been discussing).
Here’s an event-driven code fragment that provides such dragging for some SVG element draggableRect, that is a child of an SVG canvas element referred to by the variable svg:
const svg = document.getElementById("svgCanvas")!;
const rect = document.getElementById("draggableRect")!;
rect.addEventListener('mousedown', e => {
  const
    xOffset = Number(rect.getAttribute('x')) - e.clientX,
    yOffset = Number(rect.getAttribute('y')) - e.clientY,
    moveListener = (e: MouseEvent) => {
      rect.setAttribute('x', String(e.clientX + xOffset));
      rect.setAttribute('y', String(e.clientY + yOffset));
    },
    done = () => {
      svg.removeEventListener('mousemove', moveListener);
    };
  svg.addEventListener('mousemove', moveListener);
  svg.addEventListener('mouseup', done);
})
We add ‘event listeners’ to the html elements, which invoke the specified functions when the event fires. There are some awkward dependencies: the moveListener function needs access to the mouse coordinates from the mousedown event, and the done function, which ends the drag on a mouseup event, needs a reference to the moveListener function.
It’s all a bit amorphous:
· the flow of control is not very linear or clear;
· we’re declaring callback functions inside of callback functions and the side effect of the program (that is, the change of state to the underlying web page by moving the rectangle) is hidden in the deepest nested part of the program;
· we have to manually unsubscribe from events when we’re done with them (or potentially deal with weird behaviour when unwanted zombie events fire).
The last issue is not unlike the kind of resource cleanup that RAII is meant to deal with. Generally speaking, nothing about this function resembles the state machine diagram.
The code sequencing has little sensible flow.
The FRP Solution
We now rewrite precisely the same behaviour using Observable FRP:
const svg = document.getElementById("svgCanvas")!;
const rect = document.getElementById("draggableRect")!;

const mousedown = fromEvent<MouseEvent>(rect, 'mousedown'),
      mousemove = fromEvent<MouseEvent>(svg, 'mousemove'),
      mouseup = fromEvent<MouseEvent>(svg, 'mouseup');

mousedown
  .pipe(
    map(({clientX, clientY}) => ({
      mouseDownXOffset: Number(rect.getAttribute('x')) - clientX,
      mouseDownYOffset: Number(rect.getAttribute('y')) - clientY
    })),
    flatMap(({mouseDownXOffset, mouseDownYOffset}) =>
      mousemove
        .pipe(
          takeUntil(mouseup),
          map(({clientX, clientY}) => ({
            x: clientX + mouseDownXOffset,
            y: clientY + mouseDownYOffset
          })))))
  .subscribe(({x, y}) => {
    rect.setAttribute('x', String(x))
    rect.setAttribute('y', String(y))
  });
The Observables mousedown, mousemove and mouseup are streams which we can transform with familiar operators like map and takeUntil. The flatMap operator “flattens” the inner mousemove Observable stream back to the top level, then subscribe will apply a final action before doing whatever cleanup is necessary for the stream.
Compared to our state machine diagram above, we have:
· modelled each of the possible transition triggers as streams;
· the flow of data is from top to bottom, with the cycling branch introduced by the flatMap operation (which we will look into below);
· the only side effects (the movement of the rectangle) occur in the function passed to the subscribe;
· the cleanup of subscriptions to the mousemove and mouseup events is handled automatically by the takeUntil function when it closes the streams.
FRP Asteroids
Introduction
Functional Reactive Programming (specifically the Observable/Observer pattern) allows us to capture asynchronous actions like user interface events in streams. These allow us to “linearise” the flow of control, avoid deeply nested loops, and process the stream with pure, referentially transparent functions.
As an example we will build a little “Asteroids” game using FRP. We’re going to use rxjs as our Observable implementation, and we are going to render it in HTML using SVG. We’re also going to take some pains to make pure functional code (and lots of beautiful curried lambda (arrow) functions). We’ll use typescript type annotations to help us ensure that our data is indeed immutable and to guide us in plugging everything together without type errors into a nicely decoupled Model-View-Controller (MVC) architecture:

If you’re the kind of person who likes to work backwards, you can jump straight to playing the final result and you can also live edit its code.
We’ll build it up in several steps.
· First, we’ll just rotate the ship
1. directly with old-school events
2. abstracting away event handling with an Observable
· Then, we’ll eliminate global mutable state using “Pure” Observable Streams
· Then, we’ll add physics and handling more inputs
· We’ll isolate the view
· Next, we’ll introduce other objects, starting with bullets
· Finally, we’ll deal with collisions
Let’s start by making the svg with a simple polygon for the ship. It will look like this:

And here’s the snippet of html that creates the ship:

Note that the ship is rendered inside a transform group (an SVG g element). We will be changing its transform attribute to move the ship around.
Rotating the ship
To begin with we’ll make it possible for the player to rotate the ship with the arrow keys. First, by directly adding listeners to keyboard events. Then, by using events via Observable streams. Here’s a preview of what it’s going to look like (or you can play with it in a live editor):

There are basically just two states, as sketched in the following state machine:

Using Events Directly
The first event we assign a function to is the window load event. This function will not be invoked until the page is fully loaded, and therefore the SVG objects will be available. Thus, our code begins:
window.onload = function() {
const ship = document.getElementById("ship")!;

So ship will reference the SVG element whose transform attribute we will be manipulating to move it. To apply an incremental movement, such as rotating the ship by a certain angle relative to its current orientation, we will need to store that current orientation. We could read it out of the transform attribute stored in the SVG, but that requires some messy string parsing. We’ll just store the state in a local object, which we will keep up to date as we move the ship. For now, all we have is the ship’s position (x and y coordinates) and rotation angle:

const state = {
x:100, y:100, angle:0
}

Next, we need to specify a function to be invoked on keydown events:

document.onkeydown = function(d:KeyboardEvent) {

Inside this function, we are only interested in left and right arrow keys. If the keys are held down, after a moment they may start repeating automatically (this is OS dependent) and will churn out continuous keydown events. We filter these out too by inspecting the KeyboardEvent.repeat property:

if((d.key === "ArrowLeft" || d.key === "ArrowRight") && !d.repeat) {

Let’s say we want a left- or right-arrow keydown event to start the ship rotating, and we want to keep rotating until the key is released. To achieve this, we use the builtin setInterval(f,i) function, which invokes the function f repeatedly with the specified interval i delay (in milliseconds) between each invocation. setInterval returns a numeric handle which we need to store so that we can clear the interval behaviour later.
const handle = setInterval(function() {
  ship.setAttribute('transform',
    `translate(${state.x},${state.y}) rotate(${state.angle += d.key === "ArrowLeft" ? -1 : 1})`)
}, 10);
So as promised, this function is setting the transform property on the ship, using the position and angle information stored in our local state object. We compute the new angle by subtracting or adding 1 (degree) (for a left or right rotation respectively), simultaneously updating the state object with the new angle. Since we specify a 10 millisecond delay, the ship will rotate 100 times per second.
We’re not done yet. We have to stop the rotation on keyup by calling clearInterval, for the specific interval we just created on keydown (using the handle we stored). To do this, we’ll use document.addEventListener to specify a separate keyup handler for each keydown event, and since we will be creating a new keyup listener for each keydown event, we will also have to cleanup after ourselves or we’ll have a memory (event) leak:

const keyupListener = function(u: KeyboardEvent) {
  if(u.key === d.key) {
    clearInterval(handle);
    document.removeEventListener('keyup', keyupListener);
  }
};
document.addEventListener('keyup', keyupListener);
}
}
}
And finally we’re done. But it was surprisingly messy for what should be a relatively straightforward and commonplace interaction. The imperative-style code above finished up deeply nested: function declarations inside function declarations inside ifs, with variables like d, handle and keyupListener referenced from inside these nested function scopes in ways that are difficult to read and make sense of. The state machine is relatively straightforward, but it’s tangled up by the imperative code blocks.
Using Observable
Observable (we’ll use the implementation from rxjs) wraps common asynchronous actions like user events and intervals in streams that we can process with a chain of ‘operators’ applied to the stream through a pipe.
We start more or less the same as before, inside a function applied on window.onload and we still need local variables for the ship visual and its position/angle:
window.onload = function() {
const
ship = document.getElementById("ship")!,
state = { x:100, y:100, angle:0 };

But now we use the rxjs fromEvent function to create an Observable keydown$ (the ‘$’ is a convention indicating the variable is a stream),
const keydown$ = fromEvent<KeyboardEvent>(document, 'keydown');
The objects coming through the stream are of type KeyboardEvent, meaning they have the key and repeat properties we used before. We can create a new stream which filters these out:
const arrowKeys$ = keydown$.pipe(
  filter(({key}) => key === 'ArrowLeft' || key === 'ArrowRight'),
  filter(({repeat}) => !repeat));
To duplicate the behaviour of our event driven version we need to rotate every 10ms. We can make a stream which fires every 10ms using interval(10), which we can “graft” onto our arrowKeys$ stream using flatMap. We use takeUntil to terminate the interval on a ‘keyup’, filtered to ignore keys other than the one that initiated the ‘keydown’. At the end of the flatMap pipe we use map to return d, the original keydown KeyboardEvent object. Back at the top-level pipe on arrowKeys$ we inspect this KeyboardEvent object to see whether we need a left or right rotation (positive or negative angle). Thus, angle$ is just a stream of -1 and 1.
const angle$ = arrowKeys$.pipe(
  flatMap(d => interval(10).pipe(
    takeUntil(fromEvent<KeyboardEvent>(document, 'keyup').pipe(
      filter(({key}) => key === d.key)
    )),
    map(_ => d))
  ),
  map(d => d.key === 'ArrowLeft' ? -1 : 1));
Finally, we subscribe to the angle$ stream to perform our effectful code, updating state and rotating ship.
angle$.subscribe(a =>
  ship.setAttribute('transform',
    `translate(${state.x},${state.y}) rotate(${state.angle += a})`)
)
}
Arguably, the Observable code has many advantages over the event handling code:
· the streams created by fromEvent and interval automatically clean up the underlying events and interval handles when the streams complete.
· the ‘stream’ abstraction provided by observable gives us an intuitive way to think about asynchronous behaviour and chain transformations of the stream together through pipes.
· We didn’t see it so much here, but the various observable streams we created are composable, in the sense that adding new pipes (or potentially multiple subscribes) to them allow us to reuse and plug them together in powerful ways.
Pure Observable Streams
A weakness of the above implementation using Observable streams is that we still have global mutable state. Deep in the function passed to subscribe we alter the angle attribute on the state object. Another Observable operator, scan, allows us to capture this state transformation inside the stream, using a pure function to transform the state, i.e. a function that takes an input state object and, rather than altering it in-place, creates a new output state object with whatever change is required.
We’ll start by altering the start of our code to define an interface for State with readonly members, and we’ll place our initialState in a const variable that matches this interface. You can also play with the code in a live editor.
window.onload = function() {
type State = Readonly<{ x: number; y: number; angle: number; }>
const initialState: State = { x: 100, y: 100, angle: 0};
Now we’ll create a function that is a pure transformation of State:

function rotate(s: State, angleDelta: number): State {
  return { ...s, // copies the members of the input state for all but:
    angle: s.angle + angleDelta // only the angle is different in the new State
  }
}
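To see the purity in isolation, the following standalone snippet restates the State type and rotate from above: each call returns a fresh object, and the input state is never mutated.

```typescript
type State = Readonly<{ x: number; y: number; angle: number }>;

// A pure transformation: copy all members of the input state,
// replacing only the angle in the new object.
const rotate = (s: State, angleDelta: number): State => ({
  ...s,
  angle: s.angle + angleDelta
});

const s0: State = { x: 100, y: 100, angle: 0 };
const s1 = rotate(s0, 10);
const s2 = rotate(s1, 10);
// s0.angle is still 0; s1.angle is 10; s2.angle is 20
```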

Next, we have another, completely self contained function to update the SVG for a given state:

function updateView(state: State): void {
  const ship = document.getElementById("ship")!;
  ship.setAttribute('transform',
    `translate(${state.x},${state.y}) rotate(${state.angle})`)
}

And now our main pipe (collapsed into one) ends with a scan which “transduces” (transforms and reduces) our state, and the subscribe is a trivial call to updateView:

fromEvent<KeyboardEvent>(document, 'keydown')
  .pipe(
    filter(({code}) => code === 'ArrowLeft' || code === 'ArrowRight'),
    filter(({repeat}) => !repeat),
    flatMap(d => interval(10).pipe(
      takeUntil(fromEvent<KeyboardEvent>(document, 'keyup').pipe(
        filter(({code}) => code === d.code)
      )),
      map(_ => d))
    ),
    map(({code}) => code === 'ArrowLeft' ? -1 : 1),
    scan(rotate, initialState))
  .subscribe(updateView)
}
The code above is a bit longer than what we started with, but it’s starting to lay a more extensible framework for a more complete game. And it has some nice architectural properties, in particular we’ve completely decoupled our view code from our state management. We could swap out SVG for a completely different UI by replacing the updateView function.
Adding Physics and Handling More Inputs
Classic Asteroids is more of a space flight simulator in a weird toroidal topology than the ‘static’ rotation that we’ve provided above. We will make our spaceship a freely floating body in space, with directional and rotational velocity. We are going to need more inputs than just left and right arrow keys to pilot our ship too.
Here’s a sneak preview of what this next stage will look like (click on the image to try it out in a live code editor):

Input Actions
Let’s start with adding “thrust” in response to up arrow. With the code above, adding more and more intervals triggered by key down events would get increasingly messy. Rather than have streams triggered by key events, for the purposes of simulation it makes sense that our main Observable be a stream of discrete timesteps. Thus, we’ll be moving our interval(10) to the top of our pipe.
Before we had just left and right rotations coming out of our stream. Our new stream is going to have multiple types of actions as payload:
· Tick – a discrete timestep in our simulation, triggered by interval.
· Rotate – a ship rotation triggered by left or right arrows keys
· Thrust – fire the boosters! Using the up-arrow key
Each of these actions carries some data; we’ll model them as simple classes (note that we need classes rather than anonymous interfaces here because the instanceof pattern-match used inside the reduceState function below requires the presence of a constructor):
class Tick { constructor(public readonly elapsed:number) {} }
class Rotate { constructor(public readonly direction:number) {} }
class Thrust { constructor(public readonly on:boolean) {} }
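To see why the constructor matters, here is a standalone sketch of the instanceof dispatch used in reduceState below; the describeAction function is purely illustrative:

```typescript
class Tick { constructor(public readonly elapsed: number) {} }
class Rotate { constructor(public readonly direction: number) {} }
class Thrust { constructor(public readonly on: boolean) {} }

type Action = Tick | Rotate | Thrust;

// instanceof narrows the union type, so each branch sees the right fields
const describeAction = (a: Action): string =>
  a instanceof Rotate ? `rotate ${a.direction}` :
  a instanceof Thrust ? `thrust ${a.on}` :
  `tick at ${a.elapsed}`;

const msgs = [new Rotate(-0.1), new Thrust(true), new Tick(42)].map(describeAction);
// ["rotate -0.1", "thrust true", "tick at 42"]
```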
Now, we’ll create separate Observables for each of the key events. There’s a repetitive pattern in creating each of these Observables, for a given:
· Event – keydown or keyup
· Key – one of the arrows (for now)
· produce an Observable stream of a particular action type.
It sounds like something we can model with a nice reusable function:
type Event = 'keydown' | 'keyup'
type Key = 'ArrowLeft' | 'ArrowRight' | 'ArrowUp' | 'Space'
const observeKey = <T>(eventName: Event, k: Key, result: () => T) =>
  fromEvent<KeyboardEvent>(document, eventName)
    .pipe(
      filter(({code}) => code === k),
      filter(({repeat}) => !repeat),
      map(result));
Now we have all the pieces to create a whole slew of input streams:
const
  startLeftRotate = observeKey('keydown', 'ArrowLeft', () => new Rotate(-.1)),
  startRightRotate = observeKey('keydown', 'ArrowRight', () => new Rotate(.1)),
  stopLeftRotate = observeKey('keyup', 'ArrowLeft', () => new Rotate(0)),
  stopRightRotate = observeKey('keyup', 'ArrowRight', () => new Rotate(0)),
  startThrust = observeKey('keydown', 'ArrowUp', () => new Thrust(true)),
  stopThrust = observeKey('keyup', 'ArrowUp', () => new Thrust(false))
Vector Maths
Since we’re going to start worrying about the physics of our simulation, we’re going to need some helper code. First, a handy dandy Vector class. It’s just standard vector maths, so hopefully self explanatory. In the spirit of being pure and declarative I’ve made it immutable, and all the functions (except len which returns a number) return new instances of Vec rather than changing its data in place.
class Vec {
constructor(public readonly x: number = 0, public readonly y: number = 0) {}
add = (b:Vec) => new Vec(this.x + b.x, this.y + b.y)
sub = (b:Vec) => this.add(b.scale(-1))
len = ()=> Math.sqrt(this.x*this.x + this.y*this.y)
scale = (s:number) => new Vec(this.x*s,this.y*s)
ortho = ()=> new Vec(this.y,-this.x)
rotate = (deg:number) =>
(rad =>(
(cos,sin,{x,y})=>new Vec(x*cos - y*sin, x*sin + y*cos)
)(Math.cos(rad), Math.sin(rad), this)
)(Math.PI * deg / 180)

static unitVecInDirection = (deg: number) => new Vec(0,-1).rotate(deg)
static Zero = new Vec();
}
To implement the toroidal topology of space, we’ll need to know the canvas size. For now, we’ll hard code it in a constant CanvasSize. Alternatively, we could query it from the svg element, or we could set the SVG size from our code (maybe later). The torus wrapping function will use the CanvasSize to determine the bounds and simply teleport any Vec which goes out of bounds to the opposite side.
const
CanvasSize = 200,
torusWrap = ({x,y}:Vec) => {
const wrap = (v:number) =>
v < 0 ? v + CanvasSize : v > CanvasSize ? v - CanvasSize : v;
return new Vec(wrap(x),wrap(y))
};
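A quick standalone check of this vector code (with a minimal Vec restated so the snippet runs on its own): unitVecInDirection treats 0 degrees as straight up (negative y, since SVG's y axis points down), and torusWrap teleports out-of-bounds coordinates to the opposite edge.

```typescript
class Vec {
  constructor(public readonly x: number = 0, public readonly y: number = 0) {}
  rotate = (deg: number) => {
    const rad = (Math.PI * deg) / 180, cos = Math.cos(rad), sin = Math.sin(rad);
    // standard 2D rotation matrix applied to (x, y)
    return new Vec(this.x * cos - this.y * sin, this.x * sin + this.y * cos);
  };
  static unitVecInDirection = (deg: number) => new Vec(0, -1).rotate(deg);
}

const CanvasSize = 200;
const torusWrap = ({ x, y }: Vec) => {
  const wrap = (v: number) =>
    v < 0 ? v + CanvasSize : v > CanvasSize ? v - CanvasSize : v;
  return new Vec(wrap(x), wrap(y));
};

const right = Vec.unitVecInDirection(90);     // approximately (1, 0): 90 degrees points right
const wrapped = torusWrap(new Vec(-10, 210)); // (190, 10): both coordinates wrapped
```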
We’ll use Vec in a slightly richer State type.
type State = Readonly<{ pos:Vec, vel:Vec, acc:Vec, angle:number, rotation:number, torque:number }>
We create an initialState using CanvasSize to start the spaceship at the centre:
const initialState:State = {
pos: new Vec(CanvasSize/2,CanvasSize/2),
vel: Vec.Zero,
acc: Vec.Zero,
angle:0,
rotation:0,
torque:0
}
Reducing State
We can encapsulate all the possible transformations of state in a function:
const reduceState = (s: State, e: Rotate | Thrust | Tick) =>
  e instanceof Rotate ? {...s,
    torque: e.direction
  } :
  e instanceof Thrust ? {...s,
    acc: e.on ? Vec.unitVecInDirection(s.angle).scale(0.05)
              : Vec.Zero
  } : {...s,
    rotation: s.rotation + s.torque,
    angle: s.angle + s.rotation,
    pos: torusWrap(s.pos.add(s.vel)),
    vel: s.vel.add(s.acc)
  };
And finally we merge our different inputs, scan over State, and in the final subscribe call updateView: once again, a self-contained function which does whatever is required to render the State. We describe the updated updateView in the next section.
interval(10)
.pipe(
map(elapsed=>new Tick(elapsed)),
merge(
startLeftRotate,startRightRotate,stopLeftRotate,stopRightRotate),
merge(startThrust,stopThrust),
scan(reduceState, initialState))
.subscribe(updateView);
View
Once again, the above completely decouples the view from state management. But now we have a richer state, we have more stuff we can show in the view. We’ll start with a little CSS, not only to style elements, but also to hide or show flame from our boosters.
.ship {
fill: lightblue;
}
.booster {
fill: orange;
}
.hidden {
visibility:hidden;
}
We add a few more polygons for the booster flames:


And here’s our updated updateView function where we not only move the ship but also show flames shooting out of it as it powers around the torus:
function updateView(s: State) {
  const
    ship = document.getElementById("ship")!,
    show = (id: string, condition: boolean) => ((e: HTMLElement) =>
      condition ? e.classList.remove('hidden')
                : e.classList.add('hidden'))(document.getElementById(id)!);
  show("leftThrust", s.torque < 0);
  show("rightThrust", s.torque > 0);
  show("thruster", s.acc.len() > 0);
  ship.setAttribute('transform', `translate(${s.pos.x},${s.pos.y}) rotate(${s.angle})`);
}
Additional Objects
Things get more complicated when we start adding more objects to the canvas that all participate in the physics simulation. Furthermore, objects like asteroids and bullets will need to be added and removed from the canvas dynamically – unlike the ship whose visual is currently defined in the svg and never leaves.
However, we now have all the pieces of our MVC architecture in place, all tied together with an observable stream:

So completing the game is just a matter of:
· adding more input actions (e.g. Shoot)
· extending our state data structure to include bullets, rocks, etc.
· extending our reduce state function to manipulate this State
· extending the updateView function so the user can see it
We’ll start with bullets that can be fired with the Space key, and which expire after a set period of time:

The basic framework above gives us a good basis on which to extend.
Per-object State
The first complication is generalising bodies that participate in the force model with their own type Body, separate from the State:
type Body = Readonly<{
  id: string,
  pos: Vec,
  vel: Vec,
  thrust: boolean,
  angle: number,
  rotation: number,
  torque: number,
  radius: number,
  createTime: number
}>
type State = Readonly<{
  time: number,
  ship: Body,
  bullets: ReadonlyArray<Body>,
  rocks: ReadonlyArray<Body>,
  exit: ReadonlyArray<Body>,
  objCount: number
}>
So the ship is a Body, and we will have collections of Body for both bullets and rocks. What’s this exit thing? Well, when we remove something from the canvas, e.g. a bullet, we’ll create a new state with a copy of the bullets array minus the removed bullet, and we’ll add that removed bullet – together with any other removed Bodys – to the exit array. This notifies the updateView function that they can be removed.
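As a sketch of this exit bookkeeping (MiniState is a hypothetical cut-down State, just for illustration), removing a bullet produces a new state with the bullet moved into exit, leaving the original state untouched:

```typescript
type Bullet = Readonly<{ id: string }>;
type MiniState = Readonly<{ bullets: ReadonlyArray<Bullet>; exit: ReadonlyArray<Bullet> }>;

const removeBullet = (s: MiniState, id: string): MiniState => ({
  ...s,
  bullets: s.bullets.filter(b => b.id !== id),            // keep the survivors
  exit: s.exit.concat(s.bullets.filter(b => b.id === id)) // notify the view of the removal
});

const s0: MiniState = { bullets: [{ id: "bullet0" }, { id: "bullet1" }], exit: [] };
const s1 = removeBullet(s0, "bullet0");
// s1.bullets holds only bullet1, s1.exit holds bullet0, and s0 is unchanged
```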
Note the objCount. This counter is incremented every time we add a Body and gives us a way to create a unique id that can be used to match the Body against its corresponding view object.
Now we define functions to create objects:
function createBullet(s:State):Body {
const d = Vec.unitVecInDirection(s.ship.angle);
return {
id: `bullet${s.objCount}`,
pos:s.ship.pos.add(d.scale(20)),
vel:s.ship.vel.add(d.scale(-2)),
createTime:s.time,
thrust:false,
angle:0,
rotation:0,
torque:0,
radius:3
}
}
function createShip():Body {
return {
id: 'ship',
pos: new Vec(CanvasSize/2,CanvasSize/2),
vel: Vec.Zero,
thrust:false,
angle:0,
rotation:0,
torque:0,
radius:20,
createTime:0
}
}
const initialState:State = {
time:0,
ship: createShip(),
bullets: [],
exit: [],
objCount: 0
}
Shoot Action
We’ll add a new action type and observable for shooting with the space bar:
class Shoot { constructor() {} }
const shoot = observeKey('keydown', 'Space', () => new Shoot())
And now a function to move objects, same logic as before but now applicable to any Body:
const moveObj = (o: Body) => ({
  ...o,
  rotation: o.rotation + o.torque,
  angle: o.angle + o.rotation,
  pos: torusWrap(o.pos.sub(o.vel)),
  vel: o.thrust ? o.vel.sub(Vec.unitVecInDirection(o.angle).scale(0.05)) : o.vel
})
And our tick action is a little more complicated now, complicated enough to warrant its own function:
const tick = (s: State, elapsed: number) => {
  const
    not = <T>(f: (x: T) => boolean) => (x: T) => !f(x),
    expired = (b: Body) => (elapsed - b.createTime) > 100,
    expiredBullets: Body[] = s.bullets.filter(expired),
    activeBullets = s.bullets.filter(not(expired));
  return {...s,
    ship: moveObj(s.ship),
    bullets: activeBullets.map(moveObj),
    exit: expiredBullets,
    time: elapsed
  }
}
Note that bullets have a life time (presumably they are energy balls that fizzle into space after a certain time). When a bullet expires it is sent to exit.
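The expiry logic inside tick is just a pair of complementary filters, which we can check standalone with a stripped-down bullet type (MiniBullet is hypothetical, and the 100-tick lifetime matches the expired predicate above):

```typescript
type MiniBullet = Readonly<{ id: string; createTime: number }>;

// complement a predicate
const not = <T>(f: (x: T) => boolean) => (x: T) => !f(x);

const elapsed = 150;
const expired = (b: MiniBullet) => elapsed - b.createTime > 100; // assumed lifetime

const bullets: MiniBullet[] = [
  { id: "bullet0", createTime: 0 },   // fired long ago: expired
  { id: "bullet1", createTime: 100 }  // recent: still active
];
const expiredBullets = bullets.filter(expired);     // these go to exit
const activeBullets = bullets.filter(not(expired)); // these are kept and moved
```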
Now adding bullets as they are fired to our state reducer:
const reduceState = (s: State, e: Rotate | Thrust | Tick | Shoot) =>
  e instanceof Rotate ? {...s,
    ship: {...s.ship, torque: e.direction}
  } :
  e instanceof Thrust ? {...s,
    ship: {...s.ship, thrust: e.on}
  } :
  e instanceof Shoot ? {...s,
    bullets: s.bullets.concat([createBullet(s)]),
    objCount: s.objCount + 1
  } :
  tick(s, e.elapsed);
We merge the Shoot stream in as before:
interval(10).pipe(

merge(shoot),

And we tack a bit on to updateView to draw and remove bullets:
function updateView(s: State) {

  s.bullets.forEach(b => {
    const createBulletView = () => {
      const v = document.createElementNS(svg.namespaceURI, "ellipse")!;
      v.setAttribute("id", b.id);
      v.classList.add("bullet")
      svg.appendChild(v)
      return v;
    }
    const v = document.getElementById(b.id) || createBulletView();
    v.setAttribute("cx", String(b.pos.x))
    v.setAttribute("cy", String(b.pos.y))
  })
  s.exit.forEach(o => {
    const v = document.getElementById(o.id);
    if(v) svg.removeChild(v)
  })
}
Collisions
So far the game we have built allows you to hoon around in a space-ship blasting the void with fireballs, which is kind of fun, but not very challenging. The Asteroids game doesn’t really become “Asteroids” until you actually have… asteroids. Also, you should be able to break them up with your blaster, and crashing into them should end the game. Here’s a preview:

Before we go forward, let’s put all the magic numbers that are starting to permeate our code in one, immutable place:
const
Constants = {
CanvasSize: 600,
BulletExpirationTime: 1000,
BulletRadius: 3,
BulletVelocity: 2,
StartRockRadius: 30,
StartRocksCount: 5,
RotationAcc: 0.1,
ThrustAcc: 0.1,
StartTime: 0
} as const
Initial State
We will need to store two new pieces of state: the collection of asteroids (rocks) which is another array of Body, just like bullets; and also a boolean that will become true when the game ends due to collision between the ship and a rock.
type State = Readonly<{
  ...
  rocks: ReadonlyArray<Body>,
  gameOver: boolean
}>
Since bullets and rocks are both just circular Bodys with constant velocity, we can generalise what was previously the createBullet function to create either:
const createCircle = (viewType: ViewType)=> (oid:number)=> (time:number)=> (radius:number)=> (pos:Vec)=> (vel:Vec)=>
  <Body>{
    createTime: time,
    pos: pos,
    vel: vel,
    acc: Vec.Zero,
    angle: 0, rotation: 0, torque: 0,
    radius: radius,
    id: viewType + oid,
    viewType: viewType
  };
Our initial state is going to include several rocks drifting in random directions, as follows:
const
  startRocks = [...Array(Constants.StartRocksCount)]
    .map((_,i)=>createCircle("rock")(i)
      (Constants.StartTime)(Constants.StartRockRadius)(Vec.Zero)
      (new Vec(0.5 - Math.random(), 0.5 - Math.random()))),
initialState:State = {
time:0,
ship: createShip(),
bullets: [],
rocks: startRocks,
exit: [],
objCount: Constants.StartRocksCount,
gameOver: false
}
Reducing State
Our tick function is more or less the same as above, but the state it returns will undergo one further transformation, via the following function. It checks for collisions between the ship and rocks, and also between bullets and rocks.
// check a State for collisions:
// bullets destroy rocks spawning smaller ones
// ship colliding with rock ends game
const handleCollisions = (s:State) => {
  const
    bodiesCollided = ([a,b]:[Body,Body]) => a.pos.sub(b.pos).len() < a.radius + b.radius,
    shipCollided = s.rocks.filter(r=>bodiesCollided([s.ship,r])).length > 0,
    allBulletsAndRocks = flatMap(s.bullets, b=> s.rocks.map(r=>([b,r]))),
    collidedBulletsAndRocks = allBulletsAndRocks.filter(bodiesCollided),
    collidedBullets = collidedBulletsAndRocks.map(([bullet,_])=>bullet),
    collidedRocks = collidedBulletsAndRocks.map(([_,rock])=>rock),

    // spawn two children for each collided rock above a certain size
    child = (r:Body, dir:number)=>({
      radius: r.radius/2,
      pos: r.pos,
      vel: r.vel.ortho().scale(dir)
    }),
    spawnChildren = (r:Body)=>
      r.radius >= Constants.StartRockRadius/4
        ? [child(r,1), child(r,-1)] : [],
    newRocks = flatMap(collidedRocks, spawnChildren)
      .map((r,i)=>createCircle('rock')(s.objCount + i)(s.time)(r.radius)(r.pos)(r.vel)),

    // search for a body by id in an array
    elem = (a:ReadonlyArray<Body>) => (e:Body) => a.findIndex(b=>b.id === e.id) >= 0,
    // array a except anything in b
    except = (a:ReadonlyArray<Body>) => (b:Body[]) => a.filter(not(elem(b)));

  return {
    ...s,
    bullets: except(s.bullets)(collidedBullets),
    rocks: except(s.rocks)(collidedRocks).concat(newRocks),
    exit: s.exit.concat(collidedBullets, collidedRocks),
    objCount: s.objCount + newRocks.length,
    gameOver: shipCollided
  }
}
Final View
Finally, we need to update the updateView function. Again, the view update is the one place in our program where we allow imperative-style, effectful code. Since it is called only from the subscribe at the very end of our Observable chain, and does not mutate any state that is read anywhere else in the Observable, we ensure it is not the source of any but the simplest display bugs, which we can hopefully diagnose with local inspection of this one function.
First, we need to update the visuals for each of the rocks, but this is the same as for bullets. The second, slightly bigger, change is simply to display the text “Game Over” when s.gameOver is true.
function updateView(s: State) {
  ...
  s.bullets.forEach(updateBodyView);
  s.rocks.forEach(updateBodyView);
  s.exit.forEach(o=>{
    const v = document.getElementById(o.id);
    if(v) svg.removeChild(v);
  })
  if(s.gameOver) {
    subscription.unsubscribe();
    const v = document.createElementNS(svg.namespaceURI, "text")!;
    attr(v, {
      x: Constants.CanvasSize/6,
      y: Constants.CanvasSize/2,
      class: "gameover"
    });
    v.textContent = "Game Over";
    svg.appendChild(v);
  }
}
where we’ve created a little helper function attr to bulk set properties on an Element:
const attr = (e:Element, o:{ [key:string]: unknown }) =>
  { for(const k in o) e.setAttribute(k, String(o[k])) }
The other thing happening at game over is the call to subscription.unsubscribe. This subscription is the object returned by the subscribe call on our main Observable:
const subscription = interval(10).pipe(
map(elapsed=>new Tick(elapsed)),
merge(
startLeftRotate,startRightRotate,stopLeftRotate,stopRightRotate),
merge(startThrust,stopThrust),
merge(shoot),
scan(reduceState, initialState)
).subscribe(updateView);
At this point we have more-or-less all the elements of a game. The implementation above could be extended quite a lot. For example, we could add score, ability to restart the game, multiple lives, perhaps some more physics. But generally, these are just extensions to the framework above: manipulation and then display of additional state.
The key thing is that the observable has allowed us to keep the state management (model), its input and manipulation (control), and the visuals (view) well separated. Further extensions are just additions within each of these elements – and doing so should not add greatly to the complexity.
I invite you to click through on the animations above, to the live code editor where you can extend or refine the framework I’ve started.
Higher Order Functions
Learning Outcomes
· Understand that Higher-Order Functions are those which take other functions as input parameters or which return functions
· Understand that curried functions support partial application and therefore creation of functions that are partially specified for reuse scenarios.
· Understand that a Combinator is a higher-order function that uses only function application and earlier defined combinators to define a result from its arguments
· Use simple Combinator functions to manipulate and compose other functions
Introduction
The really exciting aspect of higher-order function support in languages like JavaScript is that it allows us to combine simple reusable functions in sophisticated ways. We’ve already seen how functions like map, filter and reduce can be chained to flatten the control flow of data processing. In this section we will look at some tricks that allow us to use functions that work with other functions in convenient ways.
A note about type annotations in this section. If you are following the reading order given by the index for these notes, then you have already read our introduction to TypeScript. Therefore, below we frequently use TypeScript type annotations to be precise about the intended use of the functions. However, as we start to rely more and more heavily on curried higher-order functions in this chapter, TypeScript type annotations become a bit cumbersome, and for the purposes of concisely representing the use of combinators to create new functions, we abandon them entirely. As an exercise, you may like to think about what the TypeScript annotations for some of these functions should be. This is one of the reasons why, later in these notes, we move away from JavaScript and TypeScript entirely to focus on a real functional language, Haskell.
Higher-Order Functions
Functions that take other functions as parameters or which return functions are called higher-order functions. They are called “higher-order” because they are functions which operate on other functions. Higher-order functions are a very powerful feature and central to the functional programming paradigm.
We’ve seen many examples of functions which take functions as parameters, for example, operations on arrays:
[1,2,3].map(x=>x*x)
[1,4,9]
Being able to pass functions into other functions enables code customisability and reuse. For example, a sort function which allows the caller to pass in a comparison function can easily be made to sort in increasing or decreasing order, or to sort data elements on an arbitrary attribute.
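As a small sketch of that idea (the comparator names here are hypothetical, just for illustration), one sort routine serves all three orderings:

```javascript
// One sort function, customised entirely by the comparator passed in:
const ascending = (a, b) => a - b;     // increasing numeric order
const descending = (a, b) => b - a;    // decreasing numeric order
const byAge = (a, b) => a.age - b.age; // sort objects on an arbitrary attribute

console.log([3, 1, 2].sort(ascending));  // [1, 2, 3]
console.log([3, 1, 2].sort(descending)); // [3, 2, 1]
console.log([{age: 30}, {age: 20}].sort(byAge)); // [{age: 20}, {age: 30}]
```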
And we also saw a simple example of a function which returns a new function:
const add = x => y => x + y
const add9 = add(9)

add9(3)
add9(1)
12
10
Functions that create new functions give rise to all sorts of emergent power, such as the ability to customise, compose and combine functions in very useful ways. We will see this later when we look at function composition and combinators.
Curried Functions
Higher-order functions which take a single parameter and return another function operating on a single parameter are called curried functions. The add function above is one example. You can either call it twice immediately to operate on two parameters:
add(3)(2)
5
or call it once, leaving one parameter unspecified, to create a reusable function, like add9 above. Such use of a curried function with only a subset of its parameters is called partial application. Partial application returns a function for which further parameters must be supplied before the body of the function can finally be evaluated, e.g.:
add9(1)
10
Here’s a practical example of a curried function. Let’s say we want a function for computing the volume of cylinders, parameterised by the approximation of π that we plan to use:
function cylinderVolume(pi: number, height: number, radius: number): number {
return pi * radius * radius * height;
}
And we invoke it like so:
cylinderVolume(Math.PI, 4, 2);
Now consider another version of the same function:
function cylinderVolume(pi: number) {
return function(height: number) {
return function(radius: number) {
return pi * radius * radius * height;
}
}
}
This one, we can invoke like so:
cylinderVolume(Math.PI)(4)(2)
But we have some other options too. For example, we are unlikely to change our minds about what precision approximation of PI we are going to use between function calls. So let’s make a local function that fixes PI:
const cylVol = cylinderVolume(Math.PI);
Which we can invoke when we are ready like so:
cylVol(4)(2)
What if we want to compute volumes for a whole batch of cylinders of fixed height of varying radii?
const radii = [1.2, 3.1, 4.5, ... ],
      makeHeight5Cylinder = cylVol(5),
      cylinders = radii.map(makeHeight5Cylinder);
Or we can make it into a handy function to compute areas of circles:
const circleArea = cylVol(1)
Such functions are called curried functions after the logician Haskell Curry. This gives you a hint as to what functions look like in the Haskell programming language and its variants. We can also create a function to make curried versions of conventional multi-parameter JavaScript functions:
function curry2<T,U,V>(f: (x:T, y:U) => V): (x:T) => (y:U) => V {
  return x=>y=>f(x,y)
}
Now, given a function like plus = (x,y) => x + y, we can create the curried add function above, like so:
const add = curry2(plus)
add(3)(4)
7
We can also create curried versions of functions with more than two variables – but the TypeScript syntax for functions with arbitrary numbers of arguments gets a bit scary (one of the many reasons we will shortly switch to Haskell for our exploration of more advanced functional programming topics).
// Don’t try to understand this – this is just to show you why Haskell is better for strictly-typed FP
function curry<T extends unknown[], U extends unknown[], R>(fn: (...ts: [...T, ...U]) => R, ...args: T): (...bs: U) => R {
  return (...bargs: U) => fn(...args, ...bargs);
}
Composition
Consider the following function which takes two functions as input, note the way the types line up:
function compose<U,V,W>(f:(x:V)=>W, g:(x:U)=>V) {
  return (x:U)=> f(g(x))
}
This function lets us combine two functions into a new reusable function. For example, given a messy list of strings representing numbers of various precision:
const grades = ['80.4','100.000','90','99.25']
We can define a function to parse these strings into numbers and then round them to the nearest whole number purely through composition of two existing functions:
const roundFloat = compose(Math.round, Number.parseFloat)
And then apply it to the whole set:
grades.map(roundFloat)
[80, 100, 90, 99]
Note that compose let us define roundFloat without any messing around with anonymous functions and explicit wiring-up of return values to parameters. We call this tacit or point-free style programming.
Exercise
· Create a compose function in JavaScript that takes a variable number of functions as arguments and composes (chains) them. Using the spread operator (...) to take a variable number of arguments as an array, and the Array.prototype.reduce method, the function should be very small.
· Create a pipe function which composes its arguments in the opposite order to the compose function above, that is, left-to-right. Note that in RxJS, such a pipe function is an important way to create chains of operations (over Observable streams).
Combinators
Combinators are higher-order functions which perform pure operations on their arguments to derive a result. They may seem very basic, but as their name suggests, they provide useful building blocks for manipulating and composing functions to create new functions. The compose function is a combinator. Some more examples follow.
Identity I-Combinator
The following may seem trivial:
function identity<T>(value: T): T {
  return value;
}
But it has some important applications:
· Higher-order functions which take a user specified function to apply in some context (such as our sumTo from earlier) can be passed identity to restore the default behaviour.
· For extracting data from encapsulated types (e.g. by passing identity into map).
· The above scenarios are also indicative of a useful way to test such higher-order functions, broadly: “does passing the identity operator really give us back what we started with?”.
· For composition with other combinators, as below.
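As a small sketch of the first two applications (transformAll is a hypothetical helper invented for this illustration, not from the notes above):

```javascript
const identity = x => x;

// A hypothetical higher-order function that applies a caller-supplied transform:
const transformAll = f => xs => xs.map(f);

console.log(transformAll(x => x * x)([1, 2, 3])); // [1, 4, 9]
// passing identity restores the default, do-nothing behaviour:
console.log(transformAll(identity)([1, 2, 3]));   // [1, 2, 3]
```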
K-Combinator
The curried K-Combinator looks like:
const K = x=> y=> x
So it is a function that ignores its second argument and returns its first argument directly. Note the similarity to the head function of our cons list. In fact, we can derive curried versions of both the head and rest functions used earlier from the K and I combinators:
const
K = x=> y=> x,
I = x=> x,
cons = x=> y=> f=> f(x)(y),
head = l=> l(K),
tail = l=> l(K(I)),
forEach = f=> l=> l?(f(head(l)),forEach(f)(tail(l))):null;

const l = cons(1)(cons(2)(cons(3)(null)));

forEach(console.log)(l)
1
2
3
The definition of head is a straightforward, like-for-like substitution of K into a curried version of our previous definition for head. Note, the following is not code, just statements of equivalence (≡):
head ≡ l=>l((h,_)=>h) -- previous uncurried definition of head
     ≡ l=>l(curry2((h,_)=>h))
     ≡ l=>l(h=>_=>h)
Where the expression in brackets above we notice is equivalent to K:
K ≡ x=> y=> x ≡ h=> _=> h
In the context of the Lambda Calculus, we will see that such a renaming is called Alpha Conversion.
We are gradually changing our terminology to be more haskell-like, so we have named the curried version of rest to tail. The new definition of tail, compared to our previous definition of rest, is derived as follows:
K(I) ≡ K(i=> i)            -- expand I := i=> i
     ≡ (x=> y=> x)(i=> i)  -- expand K := x=> y=> x
     ≡ y=> i=> i
Where the last line above is the result of applying x=>y=>x to i=>i. Thus, we substitute x:=i=>i in the body of the first function (the expansion of K). When we explore the Lambda Calculus, we will see that this operation (simple evaluation of function application by substitution of expressions) is called Beta reduction.
Now we could derive tail from rest using our curry2 function:
rest ≡ l=>l((_,r)=>r)
tail ≡ l=>l(curry2((_,r)=>r))
≡ l=>l(_=>r=>r)
Where _=> r=> r ≡ y=> i=> i and therefore tail ≡ l=>l(K(I)). QED!!!
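If you want more reassurance than the equational reasoning above, we can sanity-check the derived tail at the console, reusing the cons-list definitions from earlier:

```javascript
const K = x => y => x,
      I = x => x,
      cons = x => y => f => f(x)(y),
      head = l => l(K),
      tail = l => l(K(I));

const l = cons(1)(cons(2)(cons(3)(null)));
console.log(head(tail(l)));       // 2
console.log(head(tail(tail(l)))); // 3
```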
FYI it has been shown that simple combinators like K and I (at least one other is required) are sufficient to create languages as powerful as lambda calculus without the need for lambdas, e.g. see SKI Combinator Calculus.
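For a small taste of that result, note that with just S and K we can define I ourselves: S(K)(K)(x) reduces to K(x)(K(x)), which reduces to x. A sketch, using JavaScript functions for the combinators:

```javascript
const S = f => g => x => f(x)(g(x)); // the S combinator: "substitution"
const K = x => y => x;               // the K combinator as above

const I = S(K)(K); // behaves exactly like x => x

console.log(I(42));      // 42
console.log(I('hello')); // hello
```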
In previous sections we have seen a number of functions which transform lists (or other containers) into new lists, like map, filter and so on. We have also introduced the reduce function as a way to compute a single value over a list. If we realise that the value we produce from reduce can also be a list, we can actually use reduce to implement all of the other list transformations. Instead of returning a value from reduce, we could apply a function which produces only side effects, thus performing a forEach. We’ll use this as an example momentarily.
First, here’s another implementation of reduce for the above formulation of cons lists – but we rename it fold (again, as our JavaScript becomes more and more Haskell-like we are beginning to adopt Haskell terminology).
const fold = f=> i=> l=> l ? fold(f)(f(i)(head(l)))(tail(l)) : i
Now, for example, we can define forEach in terms of fold:
const forEach = f=>l=>fold(_=>v=>f(v))(null)(l)
Now, the function f takes one parameter and we don’t do anything with its return value (in TypeScript we could enforce the return type to be void). However, fold expects as its first argument a curried function of two parameters (the accumulator and the list element). Since in forEach we are not actually accumulating a value, we can ignore the first parameter; hence we give fold the function _=>v=>f(v), to apply f to each value v from the list.
But note that v=>f(v) is precisely the same as just f. So we can simplify forEach a bit further:
const forEach = f=>l=>fold(_=>f)(null)(l)
But check out these equivalences:
K(f) ≡ (x=> y=> x)(f) -- expand K
     ≡ y=> f          -- apply the outer function to f, hence we substitute x:= f
     ≡ _=> f          -- rename y to _
Where, in the last line above, since y doesn’t appear anywhere in the body of the function we don’t care what it’s called anymore and rename it to _.
Therefore, we can use our K combinator to entirely avoid defining any functions in the body of forEach:
const forEach = f=>l=>fold(K(f))(null)(l)

Fold Exercise
· Write map and filter for the above cons list definition in terms of fold

Alternation (OR-Combinator)
A function that applies a first function; if the first function fails (returns undefined, false or null), it applies the second function. The result is that of whichever function succeeded.
const or = f=> g=> v=> f(v) || g(v)
Basically, it’s a curried if-then-else function with continuations. Imagine something like the following data for student names in a unit, then a dictionary of the ids of students in each class:
const students = ['tim','sally','sam','cindy'],
      class1 = { 'tim': 123, 'cindy': 456 },
      class2 = { 'sally': 234, 'sam': 345 };
We have a function that lets us lookup the id for a student in a particular class:
const lookup = classMap=> name=> classMap[name] // classMap, since class is a reserved word
Now we can try to find an id for each student, first from class1 but fall back to class2 if it isn’t there:
const ids = students.map(or(lookup(class1))(lookup(class2)))
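Gathering the same definitions into one runnable sketch, the fallback behaves as expected:

```javascript
const or = f => g => v => f(v) || g(v);

const students = ['tim', 'sally', 'sam', 'cindy'],
      class1 = { 'tim': 123, 'cindy': 456 },
      class2 = { 'sally': 234, 'sam': 345 };

// classMap rather than class, which is a reserved word in JavaScript:
const lookup = classMap => name => classMap[name];

// sally and sam are not in class1, so the lookup falls back to class2:
const ids = students.map(or(lookup(class1))(lookup(class2)));
console.log(ids); // [123, 234, 345, 456]
```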
Fork-Join Combinator
The following is cute:
function fork(join, f, g) {
return value => join(f(value), g(value));
}
But we’ll leave trying it out as an exercise.

Fork-Join Exercise
· Use the fork-join combinator to compute the average over a sequence of numeric values.
· Add Type annotations to the above definition of the fork function. How many distinct type variables do you need?

End Note
Unary versus Binary Functions in JavaScript
Uncurried functions of two parameters can be called Binary functions. Functions of only one parameter can therefore be called Unary functions. Note that all of our curried functions are unary functions, which return other unary functions. We’ve seen situations now where curried functions are flexibly combined to be used in different situations.
Note that in JavaScript you sometimes see casual calls to binary functions but with only one parameter specified. Inside the called function the unspecified parameter will simply be undefined which is fine if the case of that parameter being undefined is handled in a way that does not cause an error or unexpected results, e.g.:
function binaryFunc(x,y) {console.log(`${x} ${y}`) }
binaryFunc(“Hello”, “World”)
binaryFunc(“Hello”)
Hello World
Hello undefined
Conversely, javascript allows additional parameters to be passed to unary functions which will then simply be unused, e.g.:
function unaryFunc(x) { console.log(x) }
unaryFunc(“Hello”)
unaryFunc(“Hello”, “World”)
Hello
Hello
But, here’s an interesting example where mixing up unary and binary functions in JavaScript’s very forgiving environment can go wrong.
['1','2','3'].map(parseInt);
We are converting an array of strings into an array of numbers. The output will be [1,2,3], right? WRONG!
['1','2','3'].map(parseInt);
[1, NaN, NaN]
What the …!
But:
parseInt('2')
2
What’s going on? HINT: parseInt is not actually a unary function.
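Spelling out the hint: Array.prototype.map actually calls its function with three arguments (element, index, array), and parseInt accepts an optional second argument, the radix. So each string gets parsed in a different base:

```javascript
// map passes (element, index) and parseInt reads (string, radix):
console.log(['1','2','3'].map(parseInt)); // [1, NaN, NaN]
// parseInt('1', 0) -> 1   (radix 0 falls back to the default, base 10)
// parseInt('2', 1) -> NaN (base 1 is invalid)
// parseInt('3', 2) -> NaN ('3' is not a base-2 digit)

// a unary wrapper restores the expected behaviour:
console.log(['1','2','3'].map(s => parseInt(s, 10))); // [1, 2, 3]
```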
The point of this demonstration is that curried functions are a more principled way to support partial function application, and also much safer and easier to use when the types guard against improper use. Thus, this is about as sophisticated as we are going to try to get with Functional Programming in JavaScript, and we will pick up our discussion further exploring the power of FP in the context of the Haskell functional programming language. However, libraries do exist that provide quite flexible functional programming abstractions in JavaScript. For example, you might like to investigate Ramda.