This is a great read written by Adam Warski following Scala Days and Scalapeño. Adam lays out his reasoning on why to use Scala. What are your thoughts?
'Following Martin Odersky’s keynote at ScalaDays, which laid out plans for Scala 3, and John de Goes’s keynote during Scalapeño on the future of Scala, there has been quite a lot of debate in the Scala community. Apart from the keynotes themselves, there are a significant number of threads on Twitter, in Slack communities, and on Reddit (e.g. 1, 2).
All of these are very interesting and educational to follow for somebody from inside the Scala ecosystem. However, a significant portion of the arguments involve various perceived or factual shortcomings of Scala as a language. This might be off-putting to anybody from the outside, prompting questions such as “Why would I ever want to use Scala?”, “Is it a dead end?”, “Will Scala 3 become Python 3?”, etc. As in most discussions, the weak and emotionally loaded points attract the most attention, and the current debate is no exception.
Hence, taking a step back: why would you consider using Scala? What are the technical and business reasons to use the language?
First of all, Scala is especially well suited for certain problem domains (but not all!). The biggest strength of Scala is its flexibility in defining abstractions. There’s a number of basic building blocks at our disposal; sometimes defining an abstraction is as simple as using classes, methods and lambdas; sometimes an implicit parameter has to be used, or an extension method; in rare cases there’s a need to resort to a macro. However, the options are there.
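As a minimal sketch of two of those building blocks (the `retry` and `words` names are invented for illustration), here is the same idea of packaging behavior first as a plain higher-order function, and then as an extension method added via an implicit class:

```scala
object Abstractions {
  // A plain higher-order function: retry a by-name computation up to n times.
  def retry[A](n: Int)(f: => A): A =
    try f
    catch { case e: Exception => if (n > 1) retry(n - 1)(f) else throw e }

  // An extension method added via an implicit class (Scala 2 syntax;
  // Scala 3 has dedicated `extension` syntax for the same purpose).
  implicit class StringOps(private val s: String) {
    def words: Seq[String] = s.split("\\s+").toSeq
  }
}
```

With the implicit class in scope, `"why use scala".words` reads as if `words` were a method defined on `String` itself.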
Hence, Scala works great when there’s a need to navigate a complex domain. Distributed and concurrent programming is one example. Parallelism is very tricky to get right, and Scala offers a number of libraries which make this task easier by building abstractions. There are two main approaches: an actor-based one, represented by Akka, and an FP-based one, represented by Monix/cats-effect and Scalaz/ZIO (if you’d like to read more about how these compare, I’ve written a series of articles on this subject).
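Even the standard library’s `Future` hints at what these abstractions buy you. A small sketch (the `sumInParallel` helper is invented for illustration): starting both computations before combining them lets them run in parallel, while the for-comprehension only sequences their results.

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

object ParallelSketch {
  // Create both Futures up front so they run concurrently;
  // the for-comprehension then combines the already-running results.
  def sumInParallel(a: => Int, b: => Int): Int = {
    val fa = Future(a)
    val fb = Future(b)
    val sum = for { x <- fa; y <- fb } yield x + y
    Await.result(sum, 5.seconds) // blocking here only for the demo
  }
}
```

The dedicated libraries mentioned above (Akka, Monix/cats-effect, Scalaz/ZIO) build far richer abstractions on the same principle.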
But of course that’s not the only possible domain. Modeling typical business applications can also be taken to another level, using the available Scala features. Here however the complexity is of a different kind. With distributed systems, the complexity was technical. With business applications, the complexity is in the problem domain itself. As an example, Debasish Ghosh’s book “Functional and reactive domain modeling” explains how to combine DDD with functional and reactive programming.
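To give a flavor of what that looks like in code, here’s a hedged sketch of modeling a domain with an algebraic data type (the payment domain and all names are invented, not taken from the book):

```scala
// A sealed trait plus case classes form an algebraic data type:
// the set of payment methods is closed and known to the compiler.
sealed trait PaymentMethod
case class CreditCard(number: String) extends PaymentMethod
case object Cash extends PaymentMethod

object Pricing {
  // Pattern matching over a sealed hierarchy is checked for exhaustiveness:
  // add a new PaymentMethod and the compiler warns about every match to fix.
  def fee(m: PaymentMethod): BigDecimal = m match {
    case CreditCard(_) => BigDecimal("0.02")
    case Cash          => BigDecimal("0.00")
  }
}
```

This is the building block on which the DDD-style modeling described in the book rests: making the domain’s states explicit in types, so invalid states become hard to represent.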
As a side note, Scala is often said to have a large number of features, thus making the language complex. That’s not entirely true.
As mentioned earlier, there’s a handful of basic constructs which can be used to build abstractions. However, they can all be combined with each other, which gives the language its flexibility. Most of the novel work that you see out there is some form of combining higher-kinded types, implicit parameters and subtyping.
Hence while the number of core features is small (comparing grammar size, the language is simpler than Kotlin, Java or Swift!) — which also aids the learning process — the number of combinations is much larger.
Aren’t there too many choices? I don’t think so. As competent, responsible software engineers, we are more than capable of choosing how best to solve a specific problem. See the “Simple Scala Stack” for more on this topic.
Quite often you can hear that Kotlin has taken Scala’s place as a “better Java”. However, I think that Scala is still the “better Java”, despite Kotlin. There are two main reasons:
- Firstly, Scala is an immutable-first language. That’s partly because of the language itself: Scala makes it easy to write code using immutable data, with constructs such as first-class vals, case classes, higher-order functions etc. But it’s also because of how the standard library is designed and written: all of the “default” data structures are immutable. Immutability makes a number of things simpler, especially in a highly concurrent world, and languages which favor immutability have an edge.
- Secondly, Scala’s support for type constructors, higher-kinded types and typeclasses (through implicits) makes it much easier to work with wrapper/container-like types, such as Promises, Futures or Tasks. These wrappers are prevalent when coding in the asynchronous or reactive styles, and having language constructs which make it convenient to work in a codebase that makes heavy use of Futures is another point in favor of Scala.
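The first point is easy to see in a few lines. A minimal sketch of immutable-first style (the `User` domain is invented): `copy` on a case class returns a modified value rather than mutating the original, and the default `List` behaves the same way.

```scala
// vals, case classes and the default collections are all immutable.
case class User(name: String, visits: Int)

object ImmutableSketch {
  // copy returns a new User; the original value is untouched.
  def recordVisit(u: User): User = u.copy(visits = u.visits + 1)

  // :+ returns a new List; `base` is never modified.
  val base: List[Int] = List(1, 2, 3)
  val extended: List[Int] = base :+ 4
}
```

Because nothing is mutated in place, values can be freely shared between threads without locks, which is exactly the edge in a highly concurrent world.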
What are typeclasses? They play a role similar to design patterns in Java, but are more flexible and easier to use once the idea sinks in. There’s a number of great tutorials, e.g. 1, 2.
And what about type constructors? These are types such as Future or Option, which are “containers” or “wrappers” for other types. You might have an Option[String], or a Future[Int]. Higher-kinded types allow you to write code which abstracts over any such wrapper. See e.g. 1, 2.
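Both ideas fit in one short sketch. Below is a hand-rolled `Functor` typeclass (real projects would use the one from Cats or Scalaz); the `double` helper works for any wrapper `F[_]` that has an instance:

```scala
// A typeclass: behavior (map) defined separately from the data types.
trait Functor[F[_]] {
  def map[A, B](fa: F[A])(f: A => B): F[B]
}

object Functor {
  // Instances are ordinary implicit values, found via implicit resolution.
  implicit val optionFunctor: Functor[Option] = new Functor[Option] {
    def map[A, B](fa: Option[A])(f: A => B): Option[B] = fa.map(f)
  }
  implicit val listFunctor: Functor[List] = new Functor[List] {
    def map[A, B](fa: List[A])(f: A => B): List[B] = fa.map(f)
  }

  // Higher-kinded abstraction: F[_] stands for *any* wrapper with a Functor.
  def double[F[_]](fa: F[Int])(implicit F: Functor[F]): F[Int] =
    F.map(fa)(_ * 2)
}
```

The same `double` call works on `Option(21)` and on `List(1, 2)`: the code is written once, against the abstraction, not against each wrapper.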
And if you’re wondering why to use Futures/Tasks in the first place: most high-performance, low-latency or “reactive” computing is done using asynchronous I/O, which naturally maps to these constructs, but that’s not the only reason. Check out this reddit question and my blog “Why wrestle with wrappers?”.
Working in an immutable-first environment, and being able to conveniently work with (and abstract over!) constructs like Futures, Promises or Tasks, is truly transformative to how you code. This might not be apparent at first: coming from Java or Ruby, it’s usually a longer process. But even for the educational aspect alone, it’s worth finding out how the “functional” approach works and, more importantly, why it might be a good alternative.
Of course, both Scala and Kotlin share a number of advantages over Java, for example:
- more compact syntax
- less boilerplate
- richer type system
- no language baggage
while at the same time having access to the JVM ecosystem of libraries and frameworks.
The richer the type system (Scala’s type system is richer than Kotlin’s, which in turn is richer than Java’s), the more of the verification work is done by the compiler, instead of relying on human labor. And that’s what computers are made for: to perform the boring, mundane, repetitive tasks. Verifying that types match is definitely one such task.
But a rich type system is not only useful when writing code; it’s even more so when reading it. Being able to navigate a codebase, understand what it is doing and refactor with no (or less) fear are very important traits of a language, from both technical and business perspectives.
FP vs OO
Another point that is often raised in the discussion is whether Scala should continue on the path of a fusion OO/FP language or go the FP-only route. I’m on the fusion side, especially since FP and OO aren’t alternatives, but complement each other.
FP is programming with functions, though not just any functions: only those which are referentially transparent (see this reddit answer, in a previously linked thread, which is a great explanation of referential transparency). OO is communicating with objects using “messages”, or in our terminology, virtual method calls. There’s no reason why these two approaches can’t live together, as Scala has already shown!
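A tiny illustration of the fusion (the `Taxes` domain is invented): an `object` plays the OO role of a first-class module, while its method is a referentially transparent function.

```scala
// An object: an OO-style first-class module with a method you call via
// dot syntax (a "message send" / virtual method call).
object Taxes {
  // Pure function: same input, same output, no side effects; the call
  // Taxes.vat(BigDecimal(100)) can always be replaced by its result.
  def vat(net: BigDecimal): BigDecimal = net * BigDecimal("0.23")
}
```

The OO surface (objects, dot syntax) and the FP substance (pure functions) coexist without any friction.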
In a tweet about the good sides of Scala, John de Goes mentioned some of the OO features which are useful in the purely-functional approach as well: first-class modules (via object composition), dot syntax (calling methods on objects) and first-class type classes/instances, among others. These are all elements of a successful combination of the two concepts. Maybe there are more on the horizon?
The “fusion” isn’t a finished project, there’s definitely room for discussions. One area for example is the proposed syntax for extension methods, which replaces much of the more confusing implicit conversion usages. Another is a better syntax for type classes; the proposal from some time ago falls short and doesn’t address some of the most common usages of monads in Scala. Some proposals are better, others need work, but it’s good that they are coming, and good that there are discussions around them; this helps to keep the language alive and ultimately arrive at the best solution.
What we know now is that there will be a tool to automatically migrate code from Scala 2 to Scala 3, using scalafix. As Scala is a statically typed language, this is a task that can be done at scale and is much easier than e.g. in the case of Python. But of course there’s no magic: even if the tool converts 99% of the code correctly, the remaining 1% is the most problematic part, and manual effort will be required to migrate these fragments.
That’s the cost of using an actively developed language: you get the latest features, but then some of them turn out to be not so good and need to be adjusted. Even having that in mind, the changes aren’t revolutionary. Scala 3 is a very similar language to Scala 2, without major paradigm shifts.
One reassuring fact is that the Scala team is taking the migration seriously. While migrating between earlier major Scala versions (e.g. 2.8 to 2.9) was quite painful, recent migrations have been much better. There are several parties involved, including EPFL and the Scala Center, and each of them works (often together) to make migrations smoother. For example, there’s MiMa, the binary-compatibility tool, and a large number of community libraries are continuously built to make sure that they work with new Scala versions.
Finally, while not yet complete (and so not yet possible to verify), TASTY is supposed to enable using binaries from Scala 2 in Scala 3.
Hence, while migration is going to be a problem (as with all migrations), I’m quite confident that it will be taken seriously by the people who work on Scala.
So why use Scala?
What are the business reasons to use Scala, then? All of the above-mentioned technical advantages translate directly to business advantages. Having a language which helps write complex code with fewer bugs means less downtime and happier users. Writing highly concurrent, low-latency applications with the help of one of the Scala concurrency toolkits means bigger profits for the company.
And let’s not forget Spark: the leading platform for distributed data analysis. Scala is not only used to implement Spark, but also to define the computations themselves, providing one more example of a data-scientist-friendly abstraction hiding a complex computational model.
We have our problems of course, but then, who doesn’t? The good news is that there’s an active effort of a large number of people using Scala daily to improve the tooling, libraries and the language itself. And I can only assume that they are sticking to Scala because even though it’s far from perfect, there’s nothing better for their problem domain.
Scala allows you to evolve your programming style, whether coming from Java, Ruby, or just starting with programming. There’s no one good way to do Scala; you can go with the more imperative approach of Akka, or the more functional approach of Cats and Scalaz.
What could seem to be a problem, namely the sub-communities centered around “reactive”, “fused” OO/FP and pure-FP programming, is in fact a huge advantage. This diversity means discussions receive a lot of varying opinions, from different points of view. This, in turn, is great for learning how to approach problems differently, and for enriching one’s own toolbox.
Whichever direction you go in Scala, there’s quite a substantial community working on the development of the libraries and frameworks, happy to help new users and discuss ideas.'
This article was written by Adam Warski and posted originally on blog.softwaremill.com