We have a great series for you, written by Mateusz Kubuszok, on kinds of types in Scala. Extend your Scala knowledge and follow this series, which we will be posting over the next couple of days. Happy learning!
'When I try to explain to someone why I prefer Scala to Java, and why functional programming works better with Scala, one of my arguments is that it has a better type system. But what exactly does that mean? What advantage does it have over statically typed languages like Java or Go?
In this mini-series I want to explain different concepts related to types. Originally it was supposed to be a single post, but it was way too long for anyone to read.
Origin of types
First of all, let us think about what types themselves are. Historically, they were introduced to solve ambiguities in mathematics, which appeared under certain conditions, and they were developed as part of the effort to formalize mathematics.
Before formalization, mathematicians had to face paradoxes like the set of all sets. Bertrand Russell described it as an entity whose existence we are unable to decide without building upon strict, formal definitions. Nowadays, set theory avoids this problem using the Zermelo–Fraenkel axioms:
- let us assume that the set of all sets can be constructed,
- the power set of some set is always bigger than said set,
- if this set is the set of all sets, it would have to be a strict superset of itself and of its own power set as well,
- this leads to the conclusion that the set would have to be bigger than itself (not equal!),
- by contradiction we conclude that building the set of all sets is impossible.
At the time, though, Bertrand Russell proposed another way to avoid the paradox:
- each term should be assigned a type; it is usually denoted with a colon: 2 : nat,
- if we want to define a function, we define on which types it operates: e.g. double : nat → nat,
f : X → Y reads as: f is a function from X (domain) to Y (codomain).
- terms can be converted using explicitly defined rules: e.g. 2+2 ↠ 4 means that 2+2 reduces to 4, so it has the same type.
He proposed that we should build objects bottom-up: if we define something using predicates, we can only use already defined objects and never other predicates. However, such a definition has a flaw: since we cannot reuse other predicates, we couldn’t e.g. compare cardinalities of sets (how could you prove that the types match?).
In 1940, Alonzo Church combined type theory with his own lambda calculus, creating the simply typed lambda calculus. Since then, more advanced type theories have appeared.
Types, intuitively
Type theory exists in parallel to set theory: one is not defined using the other. Because of that, strictly speaking, we cannot compare types to sets.
In Scala
If we look at how Scala implements types, we’ll see that a judgment 'a : A' (a is of type A) directly translates into the syntax:
val a: A
def f(a: A): B
When it comes to Scala syntax, we don’t talk about a judgment; instead, we talk about a type ascription. The notation for function types also comes from type theory: 'double : int → int' becomes:
val double: Int => Int
Scala also has the notion of top and bottom types. The top type - a type about which we have no assertions, and which contains every other type - is 'Any'. In Java, the closest thing would be 'Object', except it is not a supertype of primitives. Scala’s 'Any' is a supertype of both objects ('AnyRef') and primitives ('AnyVal'). When we see 'Any' as an inferred type, it usually means we messed up and combined two unrelated types.
The bottom type - a type which is a subtype of all others, and has no inhabitants - in Scala is 'Nothing'. We cannot construct a value of type 'Nothing', so it often appears as a parameter of empty containers or functions that can only throw.
(Quite often I see 'Null' mentioned as another bottom type in Scala, together with 'Nothing'. It is a type extending everything that is an 'Object' on the JVM. So it is kind of like 'Nothing' without primitives, 'Unit'/'void', and functions that only throw.)
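A minimal sketch of both bottom types in action (identifiers here are hypothetical):

```scala
// Nothing: no value of it exists, so it fits wherever "never returns" is meant
def fail(msg: String): Nothing = throw new IllegalArgumentException(msg)

// Nil is a List[Nothing], which is why it can stand in for any List[A]
val empty: List[Int] = Nil

// Null is a bottom type only of the AnyRef part of the hierarchy
val s: String = null // compiles
// val i: Int = null // does not compile: Int is an AnyVal (primitive)
```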
Also, a word about functions. In Scala, we make a distinction between a function and a method. A function is a value of a 'function type', which can be called:
val double: Int => Int = x => x * 2
A method, on the other hand, is not a value. It cannot be assigned or passed directly, though via a process called eta-expansion a method can be turned into a function.
def double(x: Int): Int = x * 2
// val y = def double(x: Int): Int = x * 2 - error!
val y = { def double(x: Int): Int = x * 2 } // Unit
// however
val z: Int => Int = double _ // works!
A less accurate, but simpler version: a function is an instance of one of the 'Function0', 'Function1', …, 'Function22' traits, while a method is something bound to a JVM object. We make the distinction because the authors of Scala haven’t solved the issue in a more elegant way.
Oh, and one more thing. With a method, you can declare an argument as 'implicit'. You cannot do that with a function. For now, that is - it will become possible with implicit function types in Dotty.
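A quick sketch of the difference (names hypothetical): the method can request its argument implicitly, while a function value obtained from it has the implicit already resolved:

```scala
// a method may take an implicit parameter list...
def greet(name: String)(implicit punctuation: String): String =
  "Hello, " + name + punctuation

implicit val excl: String = "!"
greet("world") // "Hello, world!"

// ...but a function type has a fixed, non-implicit signature;
// eta-expansion resolves the implicit at the point of expansion
val greetFn: String => String = greet(_)
greetFn("you") // "Hello, you!"
```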
Type inference and Least Upper Bounds
This hierarchy of types is quite important when it comes to type inference. We expect inference to provide us with the most concrete type possible, while complying with any type restrictions imposed in a given case. Let’s take a look at a few cases:
val s = "test"
'"test"' has a type of 'String'. There are no additional restrictions. So, the inferred type is 'String'.
def a(b: Boolean) = if (b) Some(12) else None
Here, the returned value is of one of two types: 'Some[Int]' or 'None.type' (the type of the 'None' companion object). How do we decide the common type for these two?
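Before reasoning it out, we can simply ask the compiler; a REPL-style sketch (values hypothetical). In recent Scala versions the inferred result is 'Option[Int]':

```scala
def a(b: Boolean) = if (b) Some(12) else None
// inferred result type: Option[Int]
// (older compilers could infer a more baroque compound mixing in
//  Product and Serializable)

val xs = List(Some(1), None) // List[Option[Int]]
val ys = List(1, "one")      // List[Any] - unrelated types collapse to Any
```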
Let’s build a graph of a type hierarchy:
Any
|
AnyRef
|
+----+------+
| |
Equals java.io.Serializable
| |
Product Serializable
| |
+----+------+
|
Option[T]
|
+-----+-----+
| |
Some[T] None.type
As we can see, there are quite a lot of candidates for the type of the value returned by the function - any type which is above both 'Some[T]' and 'None.type' at once is a valid candidate. Each of them is an upper bound for the sought type; inference picks the least of them. (By analogy, if we were looking for the type of a parameter we want to pass to a function, one that would have to conform to several types, each type below all of them would be a lower bound.)
Algebraic Data Types
When we think of types as sets, there are two special constructions that help us build new sets from existing ones, and which complement each other. One is the Cartesian product and the other is the disjoint union (disjoint set sum).
Let’s start with the Cartesian product. We can define it as a generalization of an ordered pair. A set of ordered pairs (A,B) would be a set of values (a,b), such that a∈A, b∈B, which we can distinguish by their position inside the brackets. More formally, we could define an ordered pair as (Kuratowski’s definition):
(a,b)={{a},{a,b}}
Such a definition lets us indicate order using a construction that itself has no notion of order (that is, a set).
The Cartesian product generalizes this idea. It is an operator which takes two sets and creates a set of tuples from them:
A×B={(a,b):a∈A,b∈B}
{x : p(x)} is set-builder notation meaning: a set made of elements x for which a predicate p(x) is true.
Such a definition is not associative ((A×B)×C ≠ A×(B×C)), which we can easily check:
((1,2),3)={{{{1},{1,2}}},{{{1},{1,2}},3}}
(1,(2,3))={{1},{1,{{2},{2,3}}}}
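Scala’s tuples mirror this non-associativity: a pair containing a pair is a different type depending on where the nesting sits (a small sketch, names hypothetical):

```scala
// the same three numbers, but two distinct, incompatible types
val leftNested: ((Int, Int), Int) = ((1, 2), 3)
val rightNested: (Int, (Int, Int)) = (1, (2, 3))

// val oops: ((Int, Int), Int) = rightNested // would not compile
```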
type IntTwice = (Int, Int)
case class TwoInts(first: Int, second: Int)
type IntList = List[Int]
// all of the above are products
('IntList' is a type alias - another name for an existing type. We could use 'List[Int]' instead of 'IntList' and it would make no difference to Scala.)
The other construct - a disjoint union, sum type or coproduct - is a sum of a finite number of sets, where no two sets have a non-empty intersection. The summands are also tagged, that is, for any value of the union we can always say from which subset it originated.
Scala 2.x doesn’t allow us to actually build a union from already existing types. If we want to let the compiler know that some types should form a coproduct, we need to use a sealed hierarchy:
sealed trait Either[+A, +B] {}
case class Left[+A, +B](a: A) extends Either[A, B]
case class Right[+A, +B](b: B) extends Either[A, B]
sealed trait Option[+A] {}
case class Some[A](value: A) extends Option[A]
case object None extends Option[Nothing]
The 'sealed' keyword makes sure that no new subtype of said trait can be created outside the current file, so the compiler will always know all types that directly inherit the trait (and form a disjoint union). This allows pattern matching to be exhaustive: the compiler can tell us when we didn’t match all components of the union.
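For example, with a hypothetical 'Shape' hierarchy, a match over all subtypes compiles cleanly, and removing any case triggers a non-exhaustiveness warning:

```scala
sealed trait Shape
case class Circle(radius: Double) extends Shape
case class Square(side: Double) extends Shape

def area(s: Shape): Double = s match {
  case Circle(r) => math.Pi * r * r
  case Square(a) => a * a
  // drop either case above and the compiler warns:
  // "match may not be exhaustive. It would fail on the following input: ..."
}
```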
Dotty will introduce union types, which will let us build a union out of already existing types:
def f(intOrString: Int | String) = intOrString match {
case i: Int => ...
case s: String => ...
}
Product types and sum types together are known as algebraic data types. They are the foundation that data modeling in functional programming builds upon.
They also allow us to use generic programming in Scala with libraries like shapeless.
Compound and intersection types
Sum types have a counterpart: a type that belongs to several types at once. In Scala 2 we build such a type using the 'with' keyword:
trait Str { def str: String }
trait Count { def count: Int }
def repeat(cd: Str with Count): String =
Iterator.fill(cd.count)(cd.str).mkString
repeat(new Str with Count {
val str = "test"
val count = 3
})
// "testtesttest"
It follows the mathematical definition exactly:
x∈A∩B⟺x∈A∧x∈B⟺x∈B∧x∈A⟺x∈B∩A
which means that the order of composing types using 'with' does not matter:
val sc: Str with Count = ???
val ca: Count with Str = ???
repeat(sc) // works as expected
repeat(ca) // also works!
Well, at least when it comes to types. Since we can compose types this way, we have to face the diamond problem sooner or later.
trait A { def value = 10 }
trait B extends A { override def value = super.value * 2 }
trait C extends A { override def value = super.value + 2 }
(new B with C {}).value // ???
(new C with B {}).value // ???
How does Scala deal with such cases? The answer is trait linearization:
trait X extends A with B with C
is the same as
trait AnonymousB extends A {
// B overrides A
override def value = super.value * 2
}
trait AnonymousC extends AnonymousB {
// C overrides AnonymousB
override def value = super.value + 2
}
trait X extends AnonymousC
(Implementation-wise, types 'B' and 'C' are not lost.) If we switch the order:
trait Y extends A with C with B
then it is as if we wrote:
trait AnonymousC extends A {
// C overrides A
override def value = super.value + 2
}
trait AnonymousB extends AnonymousC {
// B overrides AnonymousC
override def value = super.value * 2
}
trait Y extends AnonymousB
It should now be obvious that:
(new B with C {}).value // (10 * 2) + 2 = 22
(new C with B {}).value // (10 + 2) * 2 = 24
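One way to observe the linearization order directly is to let each trait append its name to a string built through 'super' calls (trait names here are hypothetical):

```scala
trait Base { def chain: String = "Base" }
trait TraitB extends Base { override def chain: String = super.chain + " -> B" }
trait TraitC extends Base { override def chain: String = super.chain + " -> C" }

// the rightmost trait sits last in the linearization,
// so its override runs first and delegates leftward via super
(new TraitB with TraitC {}).chain // "Base -> B -> C"
(new TraitC with TraitB {}).chain // "Base -> C -> B"
```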
In Dotty, 'intersection types' will be denoted using '&'. The documentation claims that it will differ from 'with' in that Dotty will guarantee there is no difference between 'A & B' and 'B & A' (the operator is commutative). Currently, while 'A with B' can be used in place of 'B with A' and it just works, their behavior might differ due to the change of order in trait linearization. Dotty will also handle intersections of parametric types ('List[A] & List[B] = List[A & B]'). Because 'with' gives no such guarantees, we call types created with it compound types instead.
Classes
Because a class does not have to be a set, we can talk e.g. about the class of all sets:
{x:isSet(x)}
In programming languages, the word class means something different, and its meaning varies between:
- statically typed languages (like C/C++, where a class definition dictates the memory layout of an instance),
- dynamically typed languages (like Python or Ruby, where a class is basically a factory object that creates other objects according to the spec),
- or prototype-based languages (like JavaScript, where the question does it have a property/method? is extended to does it or its prototype object have a property/method?).
Unit
This one is special. In lambda calculus, there are no constants, no nullary functions, and no multi-parameter functions. Everything is a single-argument function returning another single-argument function (therefore currying is the only way of achieving functions of arity greater than 1).
Similarly, in category theory, each arrow (function abstracted away from its body) goes from one object to one object (function domains/codomains abstracted away from specific values).
In such a situation, we would still have to somehow implement functions returning no (meaningful) value, as well as functions taking no parameters (nullary ones). Category theory names such an irrelevant garbage input an initial object, or void. It assumes that for every object there is an arrow from void to that object (for every type there is a nullary function returning a value of this type). Since you never need to actually pass any value of the void type (it won’t be used anyway), the set might as well be empty, and an actual implementation can work around it (category theory doesn’t care about the values, remember?). Thus the name void.
Similarly, there is also a final object, or unit: for each object there is an arrow from that object to unit (for every type there is a function that can be fed values of this type and doesn’t return anything useful). If it only serves the purpose of being returned, we don’t attribute anything meaningful to it. No matter how many values of this type exist, we cannot tell them apart, so the unit might as well be a singleton. Thus the name unit.
While the creators of Scala didn’t see a point in making void an actual type (and making people confuse it with Java’s void), they did want to make sure that every function returns something. That something is '()', the sole inhabitant of the 'Unit' type.
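A tiny sketch (names hypothetical) showing that 'Unit'-returning methods really do return a value:

```scala
// every "void-like" method in Scala actually returns the value ()
def log(msg: String): Unit = println(msg)

val u: Unit = log("hello")
val v: Unit = ()
// all values of Unit are indistinguishable - it is effectively a singleton
u == v // true
```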
Summary
We talked a bit about what (concrete) types are, how they are represented in Scala, and we learned a thing or two about type theory and a minimum about categories.
In the next post, we’ll see what parametric types and type constructors are, and what constraints we can put on type parameters to make the type system our friend.'
This article was written by Mateusz Kubuszok and posted originally on kubuszok.com