Purely Functional Web APIs in Scala by Anatolii Kmetiuk

We have an exclusive read for you! This article on 'Purely Functional Web APIs in Scala' was written by Anatolii Kmetiuk exclusively for us to share with you, and we are very excited about it!


'Web application development is a widespread task that is commonly solved in the industry with Scala. In this article, I would like to give an overview of my approach to building functional web APIs in Scala.


Libraries and technologies

For building web applications, I am using the Typelevel stack of libraries. Specifically:

  • 'HTTP4S' – for HTTP request handling.
  • 'Circe' – for conversion between model classes and JSON.
  • 'Doobie' – for interactions with SQL databases.

As a database, I use Postgres. The database and the backend are integrated using Docker.
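
For reference, the dependencies can be wired together in 'build.sbt' roughly as follows. This is a minimal sketch of my own: the artifact names are the standard ones, but the version numbers are assumptions from the http4s 0.18 era, not taken from the article.

libraryDependencies ++= Seq(
  "org.http4s"   %% "http4s-blaze-server" % "0.18.23",  // HTTP server backend
  "org.http4s"   %% "http4s-dsl"          % "0.18.23",  // routing DSL
  "org.http4s"   %% "http4s-circe"        % "0.18.23",  // Circe integration
  "io.circe"     %% "circe-generic"       % "0.9.3",    // codec derivation for case classes
  "org.tpolecat" %% "doobie-core"         % "0.5.3",    // functional JDBC layer
  "org.tpolecat" %% "doobie-postgres"     % "0.5.3"     // Postgres driver support
)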

Architecture

├── Main.scala
├── package.scala
├── db
│   ├── customer.scala
│   └── order.scala
├── service
│   ├── ServiceImplicits.scala
│   ├── customer.scala
│   └── order.scala
└── model
    ├── Customer.scala
    └── Order.scala

I organise my backend in several layers, and each layer is defined under its own package. The top-level package contains:

  • The 'Main' object which bootstraps the server.
  • The configuration used throughout the software.
  • The effect type system.

Bootstrapping the Server

def main(args: Array[String]): Unit =
  BlazeBuilder[IO]
    .bindHttp(8888, "0.0.0.0")
    .mountService(all, "/")
    .serve.run.unsafeRunSync()

We are using HTTP4S to bootstrap the server. We are mounting the 'all' variable containing all the HTTP handlers onto the root path.

'all' is defined under the 'Main' object. This variable contains a parallel composition of all the handlers in the 'service' module:

def all = (
    customer.all
<+> order   .all)
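
The '<+>' combinator is cats' 'SemigroupK' composition: a request is tried against each handler in turn until one matches. For completeness, the imports these snippets assume are roughly the following – my reconstruction based on http4s 0.18-era APIs, not shown in the original article:

import cats.effect.IO
import cats.implicits._                      // provides the <+> combinator
import org.http4s.server.blaze.BlazeBuilder
import service.{customer, order}             // assumed package layout, as in the tree above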


Services
The service module contains HTTP4S handlers for every logical aspect of your application. It is a good idea to organise these handlers following RESTful principles. The handlers are organised into singleton objects, each containing handlers for a specific concern (entity) of your application. A typical service singleton looks as follows.

object customer extends ServiceImplicits {
  def all = (
      get
  <+> list)

  val root = Root / "customer"

  object IdParam extends QueryParamDecoderMatcher[Int]("id")

  def get = HttpService[IO] {
    case req @ GET -> `root` :? IdParam(id) => db.customer.get(id)
  }

  def list = HttpService[IO] {
    case req @ GET -> `root` => db.customer.list
  }
}


The above code implements the Read capability for the Customer entity. Notice how we are able to return the output from the database as the response to the user request. The outputs of the database access objects are model case classes wrapped in 'IO'. The ability to return case classes as a response is achieved via the 'ServiceImplicits' trait, which we define as part of the Service layer. This trait provides implicit conversions that bring Circe codecs into scope. Circe is a Typelevel library responsible for converting case classes into JSON, and it integrates easily with HTTP4S.
Each service singleton has an 'all' handler which is a parallel combination of all the handlers defined in it.
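
The article leaves the body of 'ServiceImplicits' out. Below is a minimal sketch of what such a trait could look like with http4s 0.18-era APIs; the exact contents are my assumption, not the author's code.

import scala.language.implicitConversions
import cats.effect.IO
import io.circe.Encoder
import io.circe.generic.AutoDerivation       // mixes in Circe's automatic codec derivation
import org.http4s.{EntityEncoder, Response}
import org.http4s.circe.jsonEncoderOf
import org.http4s.dsl.io._

trait ServiceImplicits extends AutoDerivation {
  // Derive an http4s EntityEncoder from any Circe Encoder, so that
  // model case classes can be serialised into JSON response bodies.
  implicit def circeEntityEncoder[A](implicit e: Encoder[A]): EntityEncoder[IO, A] =
    jsonEncoderOf[IO, A]

  // Let a handler return IO[A] directly: the value is wrapped
  // into a '200 OK' response with a JSON body.
  implicit def ioToResponse[A](io: IO[A])(implicit e: EntityEncoder[IO, A]): IO[Response[IO]] =
    io.flatMap(a => Ok(a))
}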

Database Access

Services rely on the database layer for data storage. Similarly to the Service layer, the Database layer is composed of singletons that contain data access methods specific to a given concern. A typical database access singleton looks as follows.

import infrastructure.tr

object customer {
  def get(id: Int): IO[Customer] =
    sql"select * from customer where id = $id"
      .query[Customer].unique.transact(tr)

  def list: IO[List[Customer]] =
    sql"select * from customer"
      .query[Customer].list.transact(tr)
}


The database access methods are defined in plain SQL using Doobie. Doobie allows for safe access to databases using SQL queries, and also features object-relational mapping capabilities: you can automatically convert the results the database returns into your model classes by passing a type parameter to the 'query' method: '.query[Customer]'.

The database layer has an infrastructure singleton that specifies the connection to the database:

object infrastructure {
  implicit lazy val tr: Transactor[IO] = {
    val host = System.getenv("POSTGRES_HOST")
    val port = System.getenv("POSTGRES_PORT")
    val user = System.getenv("POSTGRES_USER")
    val pass = System.getenv("POSTGRES_PASS")

    Transactor.fromDriverManager[IO](
      "org.postgresql.Driver", s"jdbc:postgresql://${host}:${port}/postgres", user, pass
    )
  }
}

The connection URI and the database driver are set here.
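
As a quick sanity check – my own addition, assuming the 'infrastructure' object lives in the 'db' package as in the tree above – you can verify the connection from the sbt console:

import doobie.implicits._
import db.infrastructure.tr

// Runs 'select 1' against Postgres; throws at once if the connection is misconfigured.
sql"select 1".query[Int].unique.transact(tr).unsafeRunSync()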

Notice the nice one-to-one mapping between service handlers and database access methods. This mapping is often the case with HTTP4S and Doobie, since both allow for a very fine-grained description of handlers and data access methods.


Model

The model layer has no surprises. Every entity has its own case class:

case class Customer (
  id        : Option[Int] = None
, first_name: String
, last_name : String)

In my model classes, I always set 'id' to 'Option[Int]'. This is for the 'create' operations against the database: at create time, we know all the information about an entity except its 'id', which will be generated by the database.
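
To illustrate, here is a sketch of my own (not shown in the article) of a 'create' method for the database layer that inserts an entity and reads back the generated 'id' using Doobie's 'withUniqueGeneratedKeys':

def create(c: Customer): IO[Int] =
  sql"insert into customer (first_name, last_name) values (${c.first_name}, ${c.last_name})"
    .update
    .withUniqueGeneratedKeys[Int]("id")  // returns the id generated by Postgres
    .transact(tr)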


Effect System

The effect system is defined in the top-level package object and comprises the effect type we are using throughout the application together with convenience methods to work with it.

package object server {
  type Ef[A] = EitherT[IO, String, A]

  /** Helper methods to convert to and from Ef go here. */
}
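
The helper methods are elided above; a minimal sketch of what they could look like (the names 'liftEf' and 'runEf' are my own, not the author's) is:

import cats.data.EitherT
import cats.effect.IO

// Lift a plain IO computation into Ef; such a computation cannot fail with a String error.
def liftEf[A](io: IO[A]): Ef[A] = EitherT.liftF(io)

// Run an Ef back down to IO, exposing the error channel as an Either.
def runEf[A](ef: Ef[A]): IO[Either[String, A]] = ef.value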

You can find out more about my approach to effect systems in another article of mine.


Docker Images

The backend and the database are deployed as separate Docker containers. The images of these containers are defined using Dockerfiles. The backend Dockerfile resides in the root of the SBT project, and the Postgres Dockerfile – in the 'postgres' directory, which also resides at the root of the project.

├── 180101-schema.sql
├── 180220-orders-schema-update.sql
├── 180304-customers-have-emails.sql
├── 180405-order-classifications-added.sql
└── Dockerfile


The 'postgres' folder contains the SQL schema and its updates as SQL files. These SQL files are used to initialise the database and bring it to the most recent state. Each SQL file is prefixed with the date of its creation in the 'YYMMDD' format (year, month, day, each as two digits). The date allows sorting the files in the order in which they must be executed against the database.

The advantage of this approach is an evolvable database with version history. As your application evolves, you define each change to the database schema as a separate SQL file and execute it against the database. Should you ever lose your database or need to revert the schema to the previous state, you can execute the history SQL files one by one up to the desired point.

The Dockerfile for Postgres initialises the database by executing all the files in chronological order:


FROM postgres:latest
ADD ./*.sql /docker-entrypoint-initdb.d/

The capability to initialise the Dockerized Postgres via SQL scripts is a feature of the base 'postgres' image our image extends – see the corresponding documentation for details.

The separate Dockerfiles allow for flexibility and reproducibility of the images' environments. We can have each image customised to our application's needs. We also have formal descriptions of these images in the Dockerfiles – hence reproducibility.

The images are deployed and glued together using the 'docker-compose.yml' file.

version: '3'

services:
  postgres:
    container_name: server_postgres
    build: postgres
    ports:
      - 5432:5432
    # volumes:
    #   - pgdata:/var/lib/postgresql/data

  backend:
    container_name: server_backend
    build: .
    ports:
      - 8888:8888
    volumes:
      - home:/root
      - .:/root/backend
    stdin_open: true
    environment:
      - POSTGRES_HOST=???
      - POSTGRES_PORT=???
      - POSTGRES_USER=???
      - POSTGRES_PASS=???
    tty: true

volumes:
  # pgdata:
  home:

Uncomment the three commented lines above to have the Postgres data persisted between container restarts. Under such a Docker architecture, you can conveniently launch both images with a single line of commands:

docker-compose down; docker-compose build; docker-compose up


Once the containers are running, you can connect to the backend container and run the sbt console as follows:

docker exec -ti server_backend sbt

You can run the server using the 'run' command, or 'reStart' if you have the Revolver plugin, from the SBT console.

Summary

The approach described in this article offers the following advantages:

  • Portability – the infrastructure of the web application is described with Docker, hence the only software requirement for deployment is Docker.
  • Purely functional approach – it leverages the Typelevel libraries, which provide all the tools you need for server-side programming.
  • Speed of deployment – you can quickly launch all your containers in three commands.'

This article was written by Anatolii Kmetiuk exclusively for Signify Technology.