Low Code as the Basis for Your Data and Application Strategy

Andreas Eberhart
Jun 25, 2021 · 12 min read

Low Code platforms are rapidly gaining momentum in the IT industry. They are a great fit for digital transformation and IT projects that cannot be addressed with off-the-shelf solutions. This is not surprising, since the alternative is starting a development project on a classical development stack. Various PaaS offerings certainly make this task easier; however, a team still needs to address a complex mix of technologies, architecture, security, and so on. The following picture shows some of the dimensions and choices. You could spend all day every day just evaluating options without doing any actual project work…

Source: https://tech.foodora.com/tech-stack/

The case for low code platforms is that you can avoid — at least some of — this mess. Research and Markets estimates that “the global low-code development platform market is predicted to generate a revenue of $187.0 billion by 2030, rising from $10.3 billion in 2019, and is expected to advance at a fast pace, 31.1% CAGR, during the forecast period (2020–2030).” The report also features low code platforms offered by Outsystems, Microsoft, Appian, and others.

Looking at this massive list of vendors, are we just going from bad to worse, trading the choice of a technology stack for the choice of a low code platform? We believe that low code platforms are definitely worth adopting. However, customers should develop a broader data and application strategy around low code, rather than treating a low code project as an isolated one-off endeavor. This way, customers can establish a rich ecosystem of data and apps that reinforce each other, and build up the in-house expertise to drive this process. This should be the guiding principle in choosing a platform.

This article provides some guidance for developing such a strategy. We also describe some of the features a low code platform should bring to the table in order to support the strategy.

Note that we (Dashjoin) offer an Open Source & Cloud Native Low Code development and integration platform, so we obviously are not neutral. However, we hope to raise some important points in helping you decide on the best approach for your situation.


The Web: A Source of Inspiration for the Strategy

Why did Tim Berners-Lee create the Web? In his own words: “Creating the web was really an act of desperation.” That desperation was caused by the hoops one had to jump through in order to access a research document located at a different site or on a system using a different technology.

The solution was a system that

  1. Is open — The Web enables everyone to participate.
  2. Uses a common address (URL) to locate documents.
  3. Is standards-based — The Web is based on standards and allows a heterogeneous set of technologies to interoperate on a higher level.
  4. Is decentralized — The Web embraces the philosophy that it is not the end of the world if a server is down or a hyperlink points to nowhere.

For our analysis, it is important to note that the technology does not dictate the organizational scope and context of where the technology is used. HTML, HTTP, web servers, and browsers can be used on a global and public scale as well as on a corporate intranet with a limited user base.

Web of Data Strategy

We can draw an analogy from the web (of documents) to the web of data and the question becomes: Under which circumstances can we exchange and access data in a uniform way? Let’s look at some existing approaches:

OpenAPI, JSON:API, GraphQL, OData, Semantic Web

Today, the web of data is firmly rooted in RESTful principles, with a couple of noteworthy and prominent approaches:

OpenAPI is a widely used description language for RESTful services. It grew out of Swagger, a generic web-based tool for testing and documenting RESTful services. Internally, it uses JSON Schema to describe data types and structures. In our context, it is an interesting technology since it establishes a standard way to describe and invoke services. However, OpenAPI services are far too heterogeneous to form a universal data access layer.

JSON:API establishes some best practices on how to structure APIs in order to best handle CRUD operations, pagination, filtering, and much more. JSON:API also employs the “follow your nose” principle by embedding URLs for subsequent REST calls in the responses, thereby addressing some of OpenAPI’s shortcomings for our use case. However, JSON:API is not concerned with query languages.

GraphQL and OData are both technologies that aim at standardizing the way clients query and update a data source over the web. Consequently, both include a query language. OData is older and clearly has its roots in SQL. GraphQL was largely motivated by giving (mobile) clients fine-grained control over which data is transferred. While OData has a notion of context that identifies data uniquely, GraphQL does not.

The Semantic Web takes a radically different approach by building on a graph data model. Note that despite having “graph” in its name, the GraphQL data model can be described as “a pattern similar to Object-Oriented Programming: types that reference other types”, whereas a graph data model is based on nodes and labelled edges linking the nodes. The Semantic Web stack is built on the RDF graph data model and SPARQL, a graph query language with a RESTful protocol. While the ideas of the Semantic Web are intriguing, its adoption is often hampered by its over-reliance on the graph data model, which is quite uncommon in most systems today.


Web of Data Design Principles

In order to realize the web of data vision, we propose merging ideas from the technologies introduced above into the following design principles:

Uniform *Record* (or Table) Locator

Being able to “point to” data is key to the vision. We define a “record” as a self-contained unit of data. This can be a row in a relational database table, a document in a MongoDB collection, a node with some properties in a property graph database, or an object in an application that exposes its data only via an API. A record can be located using four “coordinates”:

  1. the address of the system we use to access the data
  2. the name of the database containing the record
  3. the name of the table (see next section) containing the record
  4. the IDs used to find the record within the table

Furthermore, we define a “table” as a collection holding a number of records. This can be a table in a relational database, a collection in MongoDB, a node type in a graph database, or a concept in an application (e.g. an issue in Jira). A table can be identified using coordinates 1–3 from the list above.
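
As a small illustration, the following TypeScript sketch models these four coordinates and renders them as a URL. The type and function names are ours, and the path layout simply mirrors the demo URLs shown later in this article; it is not a fixed standard.

    // Hypothetical types for the four "coordinates" of a record locator.
    interface TableLocator {
      system: string;   // 1. address of the data access system
      database: string; // 2. name of the database containing the record
      table: string;    // 3. name of the table / collection / node type
    }
    interface RecordLocator extends TableLocator {
      id: string | number; // 4. ID identifying the record within the table
    }

    // One possible way to render a locator as a URL (illustrative only).
    function recordUrl(r: RecordLocator): string {
      return `${r.system}/resource/${r.database}/${r.table}/${encodeURIComponent(String(r.id))}`;
    }

    // Example: the third shipper in the northwind sample database.
    const shipper: RecordLocator = {
      system: "https://demo.my.dashjoin.com/#",
      database: "northwind",
      table: "SHIPPERS",
      id: 3,
    };
    console.log(recordUrl(shipper)); // https://demo.my.dashjoin.com/#/resource/northwind/SHIPPERS/3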

Schema Metadata is Data Too

For clients to “understand” data, we need to make the schema available as well. Borrowing from OpenAPI, we express the table structure as a JSON Schema. Databases contain a set of tables, and a data access system contains a set of databases that it manages. We end up with a hierarchical structure which we model as four tables (system, database, table, field) that are exposed as if they were regular data. We borrow this idea from the RDF data model, where ontologies are also represented as a graph that is stored alongside the “normal” data.
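
As a sketch of what this could look like (the record structure is hypothetical, not a normative format), each entry in the “table” metadata table could carry the JSON Schema of its records and be read through the same API as regular data:

    // Hypothetical records in the metadata tables; they are accessed like any other data.
    const databaseRecord = {
      system: "https://demo.my.dashjoin.com",
      name: "northwind",
      tables: ["ORDERS", "SHIPPERS", "EMPLOYEES"],
    };

    const tableRecord = {
      database: "northwind",
      name: "ORDERS",
      // JSON Schema describing the structure of a single record in this table
      schema: {
        type: "object",
        properties: {
          ORDER_ID: { type: "integer" },
          SHIPPED_DATE: { type: "string", format: "date" },
          SHIP_VIA: { type: "integer" },
        },
        required: ["ORDER_ID"],
      },
    };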

Relationships as First Class Citizens

When we speak of a Web of data, it is clear that the relationships between records are crucial. Since we are able to identify record fields, we can store the information that a certain field is (part of) the primary key of the record. Likewise, we represent the information that a field references another field. This information is stored in the schema metadata.
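
Continuing the sketch above, key and reference information could be attached to the field metadata. The property names (primaryKey, references) and column names are purely illustrative:

    // Hypothetical field metadata marking keys and relationships.
    const orderIdField = {
      database: "northwind",
      table: "ORDERS",
      name: "ORDER_ID",
      primaryKey: true, // this field is (part of) the record's primary key
    };

    const shipViaField = {
      database: "northwind",
      table: "ORDERS",
      name: "SHIP_VIA",
      // this field references another field, i.e. it is a foreign key
      references: { database: "northwind", table: "SHIPPERS", field: "SHIPPER_ID" },
    };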

Uniform CRUD and Query API

The architecture defines a standardized API for CRUD and query operations. This API is used for both the data and the metadata layer and uses the identifiers introduced above. The API design follows several of the JSON:API principles.
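
To make this tangible, here is a minimal client-side sketch of what such a uniform API could look like. The endpoint paths are hypothetical; the point is that the same calls work for any registered database, and for the metadata layer as well:

    // Hypothetical client for a uniform CRUD API (paths are illustrative).
    const base = "https://demo.my.dashjoin.com";

    // Read one record: GET {system}/crud/{database}/{table}/{id}
    async function readRecord(database: string, table: string, id: string | number) {
      const res = await fetch(`${base}/crud/${database}/${table}/${id}`);
      return res.json();
    }

    // Create a record: POST {system}/crud/{database}/{table}
    async function createRecord(database: string, table: string, record: object) {
      const res = await fetch(`${base}/crud/${database}/${table}`, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(record),
      });
      return res.json();
    }

    // The metadata tables (system, database, table, field) are read and
    // written through the very same calls.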

Universal Data Browser of the Dashjoin Platform

The screenshot above shows an example of these principles. The URL points to https://demo.my.dashjoin.com/#/table/northwind/ORDERS and identifies the order table of the northwind sample database. Note that the table displays links for all key columns. The first row in the ship_via column, for instance, points to https://demo.my.dashjoin.com/#/resource/northwind/SHIPPERS/3. Also note that the page displays a form for creating new entries. The order ID is required and the shipment date can be entered via a date picker. As we will explain in the next section, this interface is constructed solely from the metadata described above.

Leverage the Native Query Language

Unlike OData, GraphQL, or the Semantic Web, we do not standardize on a single query language. The rationale is that the CRUD API already provides database-agnostic access and allows for follow-your-nose navigation. At the same time, more advanced and specialized applications can still take advantage of specialized features of the underlying database.
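
For illustration, a hypothetical pass-through query endpoint keeps each database's native query language intact while still using the uniform addressing scheme:

    // Hypothetical pass-through query call; endpoint path and payload are illustrative.
    async function query(database: string, nativeQuery: string) {
      const res = await fetch(`https://demo.my.dashjoin.com/query/${database}`, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ query: nativeQuery }),
      });
      return res.json();
    }

    // SQL for a relational database ...
    const topCustomers = await query("northwind", "SELECT * FROM CUSTOMERS LIMIT 10");
    // ... and, say, a MongoDB find filter for a document database in the same system.
    const openIssues = await query("tracker", '{ "status": "open" }');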

Allow all Systems and Databases to Participate

This architecture allows any kind of database or backend system to be included in the ecosystem. The minimal requirement is to expose the CRUD API and publish the metadata about tables and record structures.

Access Control

Clearly, a web of data architecture needs to have strong access control built into the system. Rather than delegating access control to the underlying system, we propose embedding a per table access control mechanism based on OpenID users and roles in the data access layer.
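
A sketch of what a per-table access check in the data access layer might look like; the ACL structure and role names are made up for illustration:

    // Hypothetical per-table ACL, evaluated against the roles from the OpenID token.
    const acl: Record<string, { read: string[]; write: string[] }> = {
      "northwind/ORDERS":    { read: ["sales", "admin"], write: ["admin"] },
      "northwind/EMPLOYEES": { read: ["hr", "admin"],    write: ["hr"] },
    };

    function mayWrite(roles: string[], database: string, table: string): boolean {
      const entry = acl[`${database}/${table}`];
      return entry !== undefined && entry.write.some((role) => roles.includes(role));
    }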

We believe that these design principles follow the example of the Web: the architecture is open to very different kinds of databases and backend APIs, it uses a common way to address data, it is based on a common metadata format and a standardized API, and finally it is decentralized by allowing references to point not just into other databases but even into databases registered in a different data access system. This allows for a decentralized approach where different teams can share and reuse each other's data by simply linking to it.


Web of (Low Code) Applications

Now a Web of data is nice, but we still need the equivalent of a web browser to make use of the data and the equivalent of an HTML editor to create content.

Universal Data Browser

A universal data browser allows users to search and navigate the data. This must work out of the box, without any development effort. After all, the metadata about tables, fields, keys, and relationships is available via a standardized API. This works in much the same way as Swagger UI is able to provide a default user interface when fed an OpenAPI description.

Another important inspiration comes from the Semantic MediaWiki project, which enhances MediaWiki by tagging the links between wiki pages, thereby turning the pages and their relations into data. A key principle we adopt from Semantic MediaWiki is that a record is visualized using a constant URL.

This principle is important because it allows the page of any related record to show a hyperlink pointing to this URL, much like the Wikipedia page of the US shows a link to its neighbor Canada and vice versa. Likewise, the screenshot above shows this principle applied to employees, customers, and orders.

JSON Schema for Universal Forms

JSON Schema is a widely accepted standard for describing the structure of JSON records and documents. It is a central piece of the metadata layer described in the section above, and it also plays a crucial role in realizing “universal forms.” This term describes a generic user interface on top of the CRUD API. In other words, it provides the ability to create, read / display, update, and delete records in the web of data.

Besides describing API payloads, the second major use case of JSON Schema is the generation of forms, as demonstrated by the large number of implementations covering this use case. In the screenshot above, the form is driven by JSON Schema, which in turn was generated by inspecting the database metadata.

Note that this JSON Schema can be enhanced with additional information entered through the low code interface. An example would be layout hints in order to control the form’s appearance and style.
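
As an illustration, a schema like the following is enough for a form generator to render the create form shown earlier, with a required order ID and a date picker for the shipment date. The “widget” property stands in for such a layout hint and is hypothetical:

    // Sketch: schema derived from database metadata, plus a hypothetical layout hint.
    const ordersFormSchema = {
      type: "object",
      properties: {
        ORDER_ID: { type: "integer" },
        CUSTOMER_ID: { type: "string" },
        SHIPPED_DATE: { type: "string", format: "date", widget: "datepicker" }, // layout hint
        SHIP_VIA: { type: "integer" },
      },
      required: ["ORDER_ID"],
    };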

Widget and Template Based User Interface

The universal data browser and the universal forms provide a good basis for exploring data. However, an application must do more than that. Depending on the user's role, additional elements should be available. For instance, there must be active elements that trigger actions like sending an email or invoking another RESTful service. Other pages should display charts for the current context. For instance, a user might like to see the total sales numbers for a product when viewing the product page.

A JSONata expression is used to specify query parameters of a tree widget

A low code platform must offer a large variety of widgets in order to address these kinds of use cases. These widgets provide a set of options that control their appearance and also their behavior. A key question is how a low code platform maps JSON input and output data of a RESTful service to the widget. For instance, how can a chart widget translate user input into query parameters and display results the right way? Another example would be a generic invocation of a service. Parameters need to be collected and fed into the call.

We suggest using JSONata for this purpose. It is a “lightweight query and transformation language for JSON data.” The screenshot above shows an example of this approach. The low code developer is customizing a tree widget that shows an org chart. The widget is based on a recursive database query that requires the ID of the root node. The following JSONata expression is used to transform the current record context into the required structure:

{"node": value.EMPLOYEE_ID}
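
For readers unfamiliar with JSONata, the sketch below shows how such an expression can be evaluated against the current record context using the jsonata npm package (version 2.x, where evaluate returns a Promise). The context shape is illustrative:

    import jsonata from "jsonata";

    // The current record context as the widget might receive it (illustrative shape).
    const context = { value: { EMPLOYEE_ID: 5, NAME: "Nancy" } };

    // The expression from the screenshot maps the context to the query's "node" argument.
    const expression = jsonata('{"node": value.EMPLOYEE_ID}');

    // jsonata 2.x evaluates asynchronously.
    const args = await expression.evaluate(context); // -> { node: 5 }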

App Editors

Just like we need browsers, we also need low code app editors. Now low code means “low code” and not “we enter lots of code in the browser rather than an IDE.” The code sample above is probably acceptable, but we certainly need several types of editors.

We already saw one representative of this kind in the screenshot above, namely a layout editor. Another important asset of a low code platform is the ability to assemble queries graphically.

Graphical SQL Editor

Ideally, this editor should support all query languages that are supported in the framework.

The same editor used for constructing a Firebase query

Finally, a key feature of integration platforms is the ability to map between data structures. We believe that it makes sense to also leverage JSONata for this task since it allows expressing complex mapping and transformation rules in a very concise manner. The next screenshot shows an example where a mapping is constructed by placing smaller JSONata snippets into a graphical mapping editor.

Data Import Mapping Editor
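
As a hedged example with made-up field names, a JSONata mapping like the one below transforms imported records into the structure of a target table; the graphical editor essentially assembles such snippets:

    import jsonata from "jsonata";

    // Imported source records (hypothetical field names).
    const imported = [
      { firstName: "Nancy", lastName: "Davolio", hired: "2012-05-01" },
      { firstName: "Janet", lastName: "Leverling", hired: "2013-08-14" },
    ];

    // Map each source record to the (hypothetical) target EMPLOYEES structure.
    const mapping = jsonata('$.{ "NAME": firstName & " " & lastName, "HIRE_DATE": hired }');
    const rows = await mapping.evaluate(imported);
    // -> [ { NAME: "Nancy Davolio", HIRE_DATE: "2012-05-01" }, { NAME: "Janet Leverling", ... } ]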

Cloud Native Deployment

When we speak of a web of data and applications, the system obviously must be scalable and easy to handle from an administrative / DevOps point of view. So you should definitely make sure that your low code platform has horizontal scale-out options, can be deployed in a cloud-native fashion, or can be consumed as a service.

Non Technical Considerations

So far, our discussion has been fairly technical. But there obviously are other considerations that are important when adopting a low code platform.

Probably the most important factor is your team's skillset. Even if the platform is low code, this does not mean that everybody will be productive with it from the get-go. This is the main reason why it makes little sense to base just a single project on a low code platform. Ideally, your staff or your consulting partner develops a deep skillset in order to handle most future projects using that technology.

Second, the platform should be open and should be based on open standards. Not every task can be accomplished using a low code approach. If — for some reason — you decide or are forced to switch to another platform or technology, you should make sure that you’re not stuck on a deeply proprietary platform.

Obviously, licensing costs are a major factor too. As you scale up and develop more know-how using a platform, make sure your benefit and cost savings are not eaten up by the cost of the platform.
