
Hi, I’m Wouter Janssens. This is the 4th episode of Digita’s Tech Talks. Last episode, we discussed how data in the Solid ecosystem is made interoperable on the Web. In this talk, we will go into more detail about how this data is exchanged in the Solid ecosystem using HTTP.

The HyperText Transfer Protocol, or HTTP, is -- as the name says -- a protocol, more specifically a network protocol: a set of rules that every system in the network needs to follow. The protocol has been tried and tested for more than 30 years; as you surf the Web right now, you are using HTTP. At its core, HTTP is a very simple request/response model: one system, the client, makes a request, and another system, the server, responds to that request.
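
To make this concrete, here is a minimal sketch of one such exchange, written with the standard fetch API; the URL is just a placeholder, not something from this talk.

```typescript
// A single HTTP request/response exchange, using the standard fetch API.
// The URL is a placeholder for illustration.
async function main() {
  // The client sends a request for a resource...
  const response = await fetch("https://example.org/");

  // ...and the server answers with a status code, headers, and a body.
  console.log(response.status);                       // e.g. 200
  console.log(response.headers.get("content-type"));  // e.g. "text/html"
  console.log(await response.text());                 // the resource itself
}

main().catch(console.error);
```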

After 30 years, we still rely on HTTP (or its more secure variant, HTTPS). How is this possible? Roy T. Fielding was one of the first to investigate this, and he concluded that HTTP is so successful because this simple request/response model can follow the Representational State Transfer architectural style, or REST. A RESTful architecture adheres to a specific set of constraints, and by adhering to them we get a scalable network in which all the different actors -- clients, servers, and intermediate systems -- can evolve and improve independently. Our network based on HTTP, the Web, has been able to scale with the growth of data exchange over the years, and we can still browse websites made 30 years ago. Interactions between actors continue to work even when changes occur on either side.

The most important RESTful constraints are (i) its client-server architecture, (ii) statelessness, (iii) cacheability, and (iv) a uniform interface.

Separation of concerns is the principle behind the client-server constraint. By separating the user interface concerns of the client from the data storage concerns of the server, user interfaces can be reused more easily across different platforms, and servers scale better because their components stay simple.

Statelessness implies that each request contains all the information needed to form a response. We specifically mean application state: the server does not keep track of the current state of the applications that talk to it. Resource state (the resource data itself) is still stored on the server. Although statelessness might require more bandwidth, since more metadata is sent with each request, it lets servers scale because they do not need to manage session data between requests.
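
As a small illustration (the URL and token below are made up), a stateless client repeats everything the server needs, such as its credentials, on every request, instead of relying on a session the server would have to remember:

```typescript
// Statelessness: every request carries all the information the server needs.
// The URL and the token are placeholders for illustration.
async function listAndInspectOrders() {
  const headers = {
    Authorization: "Bearer <token>",  // repeated on every request; no server-side session
    Accept: "application/json",
  };

  // Two independent requests: the server can answer each one in isolation,
  // on any of its machines, without remembering the previous one.
  await fetch("https://example.org/orders", { headers });
  await fetch("https://example.org/orders/42", { headers });
}
```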

Statelessness also allows specific responses to be cached. Some responses can be reused, both on the client and on intermediate systems such as proxies. This can improve the user experience and decrease the server load.
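
In HTTP this is signalled with caching headers; a rough sketch, again with a placeholder URL:

```typescript
// Cacheability: the server marks which responses may be reused, and for how long.
// The URL is a placeholder for illustration.
async function fetchLogo() {
  const response = await fetch("https://example.org/logo.png");

  // A value like "public, max-age=86400" tells the browser and any intermediate
  // proxy that this response may be reused for a day without contacting the server.
  console.log(response.headers.get("cache-control"));
}
```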

Finally, being RESTful requires providing a uniform interface. Servers and clients can evolve and improve, as long as they keep adhering to it. This uniform interface has a set of properties. First, the interface is used to retrieve or modify resources. Second, a single resource can have different representations (for example, for different applications), but all representations carry the same information content. Third, how to process these representations is described as metadata in the response: the response is self-descriptive. Fourth and finally, the response should contain, besides the resource itself, hypermedia: potential actions, so the client knows which relevant requests it could make next.
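
These four properties can be sketched in HTTP terms as follows; the URL and header values are illustrative, not taken from this talk:

```typescript
// The uniform interface, sketched against a placeholder URL.
async function showUniformInterface() {
  // (1) One interface to retrieve or modify resources: here, a simple retrieval.
  // (2) Different representations of the same resource, chosen via content negotiation.
  const response = await fetch("https://example.org/profile", {
    headers: { Accept: "application/json" },  // ask for JSON rather than HTML
  });

  // (3) Self-descriptive: the response says how its representation should be processed.
  console.log(response.headers.get("content-type"));  // e.g. "application/json"

  // (4) Hypermedia: links to the next requests the client could make.
  console.log(response.headers.get("link"));  // e.g. <https://example.org/friends>; rel="related"
  console.log(await response.json());         // the body itself may also contain links
}
```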

Let’s look at this interface in the case of a RESTful API over HTTP. A client forms a request. This request asks for a resource, or for a manipulation of it, using a small set of well-defined methods. The most common ones are GET, PUT, POST, and DELETE, to respectively retrieve, alter, create, or delete resources. The resource is identified by the URI of the request. Additional metadata, typically in the headers of the HTTP request, specifies which representation of the resource the client wants to receive; this is called content negotiation (to receive, for example, JSON for apps or HTML for browsers). Finally, the server sends back an HTTP response containing the resource in the requested representation, together with hyperlinks to relevant next requests.
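
A rough sketch of such requests, with made-up URLs and data:

```typescript
// The four most common methods, exercised against a placeholder resource.
async function crudExample() {
  const resource = "https://example.org/articles/1";

  // Retrieve the resource, asking for a JSON representation (content negotiation).
  const current = await fetch(resource, { headers: { Accept: "application/json" } });
  console.log(await current.json());

  // Alter the resource by sending a new representation.
  await fetch(resource, {
    method: "PUT",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ title: "An updated title" }),
  });

  // Create a new resource in a collection.
  await fetch("https://example.org/articles", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ title: "A brand new article" }),
  });

  // Delete the resource.
  await fetch(resource, { method: "DELETE" });
}
```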

Over the years, applications on the Web have diverged from these RESTful constraints. Some might still remember SOAP APIs. Even today, most applications are tied to very specific server APIs and vice versa. They communicate over HTTP, but because they do not adhere to the RESTful constraints, they are much harder to maintain, unlike the 30-year-old websites that are still browsable today.

The Solid ecosystem returns to the original RESTful constraints. It relies on a specific subset of the possibilities that HTTP provides, so that the network can scale, and clients and servers can evolve and improve independently.

In the Solid ecosystem, applications are decoupled from the data that is stored in Solid pods. This brings back the separation of concerns between client and server, and servers are stateless: agnostic of the application state. In our previous video, we already discussed semantic and syntactic interoperability in Solid pods: resources are described in RDF, an ideal, mathematical way to structure data, and can have different equivalent representations in, for example, JSON, HTML, and XML. In a Solid pod, these resources can be retrieved and manipulated through the Linked Data Platform: precisely the set of rules that makes the well-defined HTTP methods, such as GET, PUT, POST, and DELETE, adhere to the RESTful constraints.
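
As a sketch of what this looks like in practice (the pod URL is made up, and authentication is left out for brevity):

```typescript
// Talking to a (hypothetical) Solid pod with plain HTTP, following the
// Linked Data Platform conventions; authentication is omitted for brevity.
async function podExample() {
  const pod = "https://alice.example.org/";  // placeholder pod URL

  // Retrieve a resource in an RDF representation of our choice (content negotiation).
  const profile = await fetch(`${pod}profile/card`, {
    headers: { Accept: "text/turtle" },
  });
  console.log(await profile.text());  // the resource, described in RDF (Turtle)

  // Create a new resource inside a container...
  await fetch(`${pod}notes/`, {
    method: "POST",
    headers: { "Content-Type": "text/turtle" },
    body: `<> <http://purl.org/dc/terms/title> "My first note" .`,
  });

  // ...or remove a resource at a known URI again.
  await fetch(`${pod}notes/old-note`, { method: "DELETE" });
}
```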

In short, the Solid ecosystem is built on the components of HTTP that made the Web scale to its current size, and thus forms a firm foundation for a scalable network where applications and data are decoupled and can evolve independently.

Script by Ben De Meester and Wouter Termont.