GEFEG.FX now supports the development of APIs on the basis of OpenAPI 3.x. And the best thing is: you can reuse your data and your existing messages in APIs.

GEFEG.FX is known as the ‘hidden champion’ for design, documentation and mapping software when it comes to classic EDI(FACT) messages, XML messages, data models and standards. In short: GEFEG has been supporting the digitalisation of companies around the globe for almost four decades. It is therefore the natural next step to also accompany the transformation into the world of web APIs. With the new version of GEFEG.FX, you can reuse your data in APIs.

Successful pilot projects from automotive to blockchain to marketing

Software used for developing and setting standards must support that work optimally. GEFEG.FX is continuously being developed further and new functionality added, but the support of APIs has been a major step: a separate programme area has been created and is still being expanded. It is therefore particularly important for GEFEG to offer the usual reliability and quality here. We won several pilot projects in which we successfully tested the new modules. The feedback from our customers was incorporated directly into the development and ensured the usability of the new API module.

Support for OpenAPI 3.0 and OpenAPI 3.1 in JSON and YAML format

For many, “Swagger” is a household name. Why? Because for many years Swagger was one of the de facto standards on the web for APIs. There is therefore still a lot of software on the market that supports the Swagger 2.0 format. But this format has been obsolete for a few years now, and OpenAPI is its successor. Since spring 2021, OpenAPI has been available in version 3.1. GEFEG.FX fully supports the new standard, of course, but can also export the Swagger 2.0 format.

What makes GEFEG.FX different?

The strength of GEFEG.FX lies in its design-time support for electronic data exchange. Governance is on everyone’s lips today – and has been standard in GEFEG.FX for decades. Data models or messages can of course be developed from scratch, but existing standards can also be used or imported. Whether transparently tailored to the respective profile or mapped to each other, this results in a significant advantage when using GEFEG.FX: it becomes easily and immediately apparent whether and where gaps exist and what special features the respective implementation requires. GEFEG.FX transfers this strength to the world of APIs.

So how can I reuse my data models in APIs?

Existing standards can also be used for APIs. Existing APIs in OpenAPI or Swagger format can easily be imported into GEFEG.FX – either directly or via the Git and SVN support.

In addition, UML data models, the UN/CEFACT reference data models and any XML format can be used as the basis for defining an API. The special thing: the connection remains intact. If the underlying data model or the linked message structure is updated, this can be reflected directly in the API if desired.

A central point for the maintenance of data models, XML and API

This means, in particular, that GEFEG.FX can effectively be used on several tracks. If the electronic connection to business partners needs to change, it is no longer necessary to adapt each mapping and each profile individually. The adjustment can be made centrally and takes immediate effect on the data models, XML messages and the API specification. This is how simple the implementation of compliance requirements can be.

Use and advantages of APIs and EDI in data exchange processes in the Supply Chain

In the world of EDI, industry-specific standards play an essential role. Uniform implementations of electronic data exchange make systems stable, but also sluggish. APIs, in contrast, enable real-time supply chain management – transparent and controllable. However, comprehensive, cross-organisational or even cross-industry implementations often fail due to a lack of standards. Nevertheless, APIs are seen as the way of the future, and some are already predicting an imminent end to EDI. But is that really the case? After weighing the pros and cons: APIs are the future of data exchange and will replace particularly time-critical EDI processes, while EDI will remain for established processes.

APIs in Supply Chain Management – Do APIs herald the end of EDI?

Supply chain management became established in Germany in the mid-1990s. It stands for overcoming internal and external company boundaries. The holistic approach focuses on controlling and improving all production steps: from planning, through procurement and production, to distribution, including the associated data exchange. This also includes activities that are not within the sphere of influence of an individual company.

The key to success lies in a functioning flow of information that can be automated. For this purpose, business processes such as ordering or invoicing are modelled digitally. Time-critical processes, such as those in just-in-time production, are also (partially) automated. What sounds excellent in theory often comes up against limits in practice. Even in the 2020s, common EDI implementations are still derived from standard versions from the 1990s – despite the fact that more up-to-date versions have long been available. In addition to the “never-touch-a-running-system” principle, there is the fact that EDI standards allow many options. In practice, such an open standard leads to a multitude of implementations that deviate from each other to a greater or lesser extent, and it must be ensured that all partners involved in the data exchange implement the same variant of a standard. Standards are thus degraded to mere application recommendations.

The data exchange itself is document- or message-oriented. This means that even with small changes to the business process, the entire message must be coordinated with all partners involved and implemented by all of them on the same deadline.

APIs are seen by many as a way out of this dilemma. Even though APIs could be used to transport the same messages from A to B, this is not the idea behind them. Rather, by breaking down the document and message structures, they make it easier to carry out these and entirely new business processes. Supply chain management in real time with APIs is the declared ultimate goal. So do APIs herald the end of EDI? A resounding yes and no.

APIs – the drivers of interconnectivity

APIs are the electronic connective tissue of today’s global world. They are now included in almost every piece of software that communicates with other software, for example via the internet. As a means of B2B data transfer, however, they are quite new. For companies, they bring cost advantages, efficiency gains and increased service quality; they improve existing business processes or even enable entirely new ones. It is not for nothing that a study by McKinsey put the expected profit potential of APIs at one trillion US dollars.

With the help of a sensibly designed API, independent systems can communicate with each other. The API “lies” between the two systems, so to speak: it builds the bridge between them and specifies the format and type of data transmission. But EDI can do that too. It gets exciting when APIs become part of the business logic – when they are not used exclusively for data transport but also take on service tasks, for example notifying one system when certain conditions are fulfilled in another. Most of us know this from ordering in an online shop: we automatically receive a message when the delivery of the parcel is imminent. If implemented correctly, APIs offer another decisive advantage over EDI: supply chain management in real time.

Real-time supply chain management with APIs

This aspect opens up the possibility for companies to cooperate even more closely with each other. Information is shared more quickly across multiple stages of the supply chain. Large logistics service providers often use several APIs to improve their warehouse management. Via inventory management APIs, B2B customers’ systems are informed in real time about the current stock level of a certain product or about free delivery capacities. Customers can thus view stock levels in real time, directly in a web shop.
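As a minimal sketch, such an inventory endpoint could be specified in OpenAPI 3.1 roughly as follows. All names, paths and fields here are illustrative assumptions, not a real provider’s interface:

openapi: 3.1.0
info:
  title: Inventory management API (illustrative sketch)
  version: 1.0.0
paths:
  /products/{productId}/stock:
    get:
      summary: Query the current stock level of a product in real time
      parameters:
        - name: productId        # hypothetical identifier of the product
          in: path
          required: true
          schema:
            type: string
      responses:
        '200':
          description: Current stock level and free delivery capacity
          content:
            application/json:
              schema:
                type: object
                properties:
                  availableQuantity:
                    type: number
                  freeDeliveryCapacity:
                    type: number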

Ordering rhythms and process chains can be managed more precisely via such APIs. This offers decisive competitive advantages over other market participants. In order to use an API successfully, however, a number of requirements must be met.

Barriers in API development

When specifying an API, it is first necessary to define what the API should do. What exact service is to be provided? With which existing systems should it interact? Should the API be freely accessible or restricted to a closed group of participants? How many queries are expected? And last but not least: what business model is to finance the API? The first two points in particular must be made comprehensible to outsiders so that they can understand the API and use it properly. In addition, data protection aspects must be considered in order to avoid misuse of the API. Not to mention the intensive testing and iteration phases. But is the effort worth it – and can an SME even afford such a step?

Yes, because the hurdles from a developer’s point of view are much lower than in the EDI world. EDI development usually requires specialists or the corresponding service providers, and the further away the partners are from the established EDI markets, the more difficult it is to find the required knowledge and experience. Today’s developers, by contrast, have grown up with APIs around the globe. This is because today’s APIs originally come from the provision of services on mobile devices – starting with map display and navigation, and going all the way to translation APIs that provide videos with automatically generated subtitles.

The specification of APIs is also comparatively simple – especially if the tools used allow existing data structures to be reused.

EDI – a phase-out model?

EDI is still the established approach for transmitting structured, business-relevant data in the B2B environment. Several aspects speak for a positive future for EDI. The connections established between companies have existed for many years, and certain EDI standards have become established within each industry; not adhering to them usually means having to adapt the established contracts. Successful EDI projects serve as references, and their data structures form the blueprint for further implementations. Harmonised EDI workflows have also developed for data exchange between industries through converter software and mappings. Through years of work, developers and support staff know what to look for when fixing errors or setting up a new EDI connection. In many cases, EDI is considered very reliable by companies because they have been using the same message standard since the 1990s. “Never change a running system” is the motto here.

What speaks against EDI is its lower flexibility and scalability. APIs are designed to be flexible in their functionality and to be used by a varying number of users at the same time, and they only transmit the part of the information that is actually required. EDI messages, on the other hand, can be so extensive that it is technically almost impossible to transmit updated information in a time window shorter than 15 to 30 minutes. The focus on the exchange of business documents entails high maintenance and servicing costs.

Unlike with APIs, when an EDI message is sent, the sender does not receive an acknowledgement of receipt or a response immediately by default. Of course, something like this can also be realised with EDI – but it requires the implementation of additional (return) messages.

So, is the API the solution par excellence?

Today, the benefits of APIs are often offset by the need for greater collaboration to achieve communication standards. Instead of reusing uniform data structures as in EDI processes, APIs are implemented in a company-specific way. This becomes especially apparent when new trading partners are connected. Often, the larger partners define the APIs to be used by their suppliers or customers. At the latest when cross-industry or cross-process communication is required, however, major incompatibility hurdles arise. But it is precisely at this point that initiatives such as those currently being pursued by the large standardisation organisations offer hope.

A clear yes and no to EDI

In conclusion: as an “old” basic technology, EDI works well, but it also has its limits. For companies, introducing a new technology for functioning processes means taking staff away from day-to-day business and thus risking a decline in performance. With this in mind, it seems quite unlikely that companies will completely replace EDI in the short term. Established EDI processes are often mission-critical and will remain in place unless there is a compelling reason to replace them. APIs are the future of data exchange, but for the foreseeable future they should be viewed as complementary to EDI. They are typically introduced early where particularly time-critical processes need to be optimised. Real-time supply chain management with APIs is thus a realistic approach – but not a complete replacement of EDI.

The OpenAPI Initiative has just released OpenAPI 3.1. On 17 February 2021, Ron Ratovsky from SmartBear and Darrel Miller from Microsoft presented the new version. This article looks at what is new in OpenAPI 3.1 and what it means for existing implementations.

If you would like to know more about the OpenAPI specification standard itself, you are welcome to visit the official website or read the article “With OpenAPI, this is what your future looks like: A world without business documents. Think model-First with GEFEG.FX”.

The essential changes of OpenAPI 3.1

Support of webhooks

From my personal point of view, the most significant difference is the support of webhooks in APIs. Since the topic is new to many, I have written another small article, “Still no catch on webhooks? Create your flexible APIs now with OpenAPI 3.1 and GEFEG.FX”, where you will also find an example. Webhooks are an essential factor in the success of WordPress: it is this technology that makes it possible to write applications efficiently and to extend them easily and in a standardised way via plug-ins. The OpenAPI Initiative has regarded webhooks as so essential that they have become a new document type in OpenAPI. An OpenAPI specification now has three basic root elements: paths, components and webhooks.

So there can now be a valid API specification that exclusively defines webhooks. At first glance, this may seem strange, but it can be very useful. Applications can be API-enabled more quickly, especially if they have a fixed process flow. It also means that existing APIs of applications like WordPress can now be specified cleanly.
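As a minimal sketch (the event name is illustrative), a complete and valid OpenAPI 3.1 document could consist of nothing but a webhook:

openapi: 3.1.0
info:
  title: Webhooks-only API (illustrative sketch)
  version: 1.0.0
webhooks:
  newPost:                      # illustrative event, e.g. a CMS publishing a post
    post:
      summary: Notify the subscribed API that a new post was published
      requestBody:
        content:
          application/json:
            schema:
              type: object
              properties:
                title:
                  type: string
      responses:
        '200':
          description: The notification was processed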

When specifying a webhook, the path (endpoint) that is executed in the foreign API is defined. In contrast to a callback, a webhook is an active action in the foreign API. Write me in the comments what you think of the new webhooks.

While testing the webhooks, the need quickly arose to also make paths reusable, which leads to the next change.

Global path definitions

With OpenAPI 3.1, paths can now also be specified under components. Similar to parameters, a globally available name can now be assigned to a path. This is an essential step towards standardisation and better governance of API implementations.
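In OpenAPI 3.1 terms, this works via the new pathItems section under components; a concrete path can then simply reference the globally named definition. A small sketch with illustrative names:

openapi: 3.1.0
info:
  title: Reusable path example (illustrative sketch)
  version: 1.0.0
paths:
  /orders:
    $ref: '#/components/pathItems/OrderCollection'   # reuse the global definition
components:
  pathItems:
    OrderCollection:            # globally available name for the path definition
      get:
        summary: List all orders
        responses:
          '200':
            description: A list of orders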

Referenced objects with overwritten description

Particularly when defining global objects or global data structures, one often encounters the situation that the business term changes in the concrete application of the structure.

Here is a simple example: let’s say we have defined a generic schema object with which we can map an address consisting of street, post office box, postcode, city and country, and now we want to use this address in different places. Even a single organisation can have more than one address. From a purely postal point of view, an organisation in Germany can have a home address, a P.O. box address or a major customer address. If logistical considerations are added, information on gates, branches or the like can follow.

OpenAPI 3.1 now allows the summary and description to be overwritten when referencing a schema object. This means that these different forms of the address can be clearly described.
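Sketched in OpenAPI 3.1 (the organisation object is an illustrative assumption), the same Address schema can be referenced twice, each time with its own overriding description:

components:
  schemas:
    Address:                    # the generic address from the example above
      type: object
      properties:
        street:
          type: string
        poBox:
          type: string
        postcode:
          type: string
        city:
          type: string
        country:
          type: string
    Organisation:
      type: object
      properties:
        homeAddress:
          $ref: '#/components/schemas/Address'
          description: The organisation's registered home address   # override allowed in 3.1
        poBoxAddress:
          $ref: '#/components/schemas/Address'
          description: The P.O. box address used for postal delivery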

What OpenAPI 3.1 cannot do, but the solution with GEFEG.FX

OpenAPI 3.1 only supports overwriting the verbal description of a reference. Changes to the repetitions or (restrictive) changes to the referenced structure itself, in particular, are not possible.

GEFEG.FX can do that. In the above example, OpenAPI alone cannot distinguish between the structure of a home address, a P.O. box address and a major customer address: this would require three different schema objects or the use of a oneOf structure. With GEFEG.FX, however, it can be defined directly at the reference that the major customer address, for example, consists only of postcode, city and country.
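For comparison, the pure OpenAPI 3.1 workaround would look roughly like this: a separately restricted schema per address form plus a oneOf choice (HomeAddress and POBoxAddress are assumed to be defined analogously):

components:
  schemas:
    MajorCustomerAddress:       # restricted variant: postcode, city and country only
      type: object
      properties:
        postcode:
          type: string
        city:
          type: string
        country:
          type: string
      required:
        - postcode
        - city
        - country
    AnyAddress:                 # choice between the three address forms
      oneOf:
        - $ref: '#/components/schemas/HomeAddress'
        - $ref: '#/components/schemas/POBoxAddress'
        - $ref: '#/components/schemas/MajorCustomerAddress'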

[Image: Simply overwrite the repetition frequencies with GEFEG.FX]

Support for roles and claims security

With version 3.1, roles are supported throughout the specification of security requirements. This simplifies rights management considerably: for example, it is now easy to define that write or delete operations require a different role than read access.
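A hedged sketch of how this can look: for security schemes other than OAuth2, the security requirement array may now list required role names (the scheme and role names below are assumptions):

components:
  securitySchemes:
    bearerAuth:
      type: http
      scheme: bearer
paths:
  /invoices/{invoiceId}:
    parameters:
      - name: invoiceId
        in: path
        required: true
        schema:
          type: string
    get:
      security:
        - bearerAuth: [invoice.reader]    # read access requires only a reader role
      responses:
        '200':
          description: The requested invoice
    delete:
      security:
        - bearerAuth: [invoice.admin]     # delete requires a different, stronger role
      responses:
        '204':
          description: Invoice deleted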

Description and summary

In addition to the description, there is now also a short summary for describing the objects of an API. The main difference is that the description supports Markdown formatting and is thus ideally suited for detailed explanations for the developer. The summary, on the other hand, does not support Markdown; it serves as short information on the function of the object. Ideally, it can be used by code-generation tools that generate source code directly from the OpenAPI 3.1 specification – the summary could be included in the source code as a comment, for example.
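A small illustration of the difference (the endpoint is illustrative):

paths:
  /products:
    get:
      summary: List products                   # short, plain-text hint
      description: |
        Returns all products visible to the caller.
        **Markdown** is supported here, so the detailed developer
        documentation can contain formatting and links.
      responses:
        '200':
          description: A list of products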

Full JSON schema support

The schema dialect of OpenAPI 3.0.x was both a subset and a superset of JSON Schema. This has now been fixed: OpenAPI 3.1 is fully compatible with JSON Schema (draft 2020-12).
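One immediately visible consequence is that plain JSON Schema keywords now apply. For example, the nullable flag of OpenAPI 3.0.x is replaced by a JSON Schema type array (a small sketch):

components:
  schemas:
    CustomerReference:
      # OpenAPI 3.0.x would have needed:  type: string  plus  nullable: true
      type:
        - string
        - 'null'                # JSON Schema 2020-12 type array
      maxLength: 35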

Fewer format specifications for data types

OpenAPI 3.0.x defined a whole range of supported format specifications for data types. This has changed with OpenAPI 3.1: some format specifications went beyond the JSON Schema specification or repeated formats already defined there.

With the alignment to JSON Schema, only those formats that extend JSON Schema are now specified. At the same time, the role of a format as a restriction of the data type has been removed: a format is now an annotation rather than a constraint, and any checking is left to the respective application.
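For example, a format such as email remains available in OpenAPI 3.1, but now only annotates the data type (sketch):

components:
  schemas:
    ContactEmail:
      type: string
      format: email             # annotation only – validation is up to the application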


What do you think about this? Does it make sense, or does it rather complicate implementation?
I would welcome a discussion.

 


More changes with OpenAPI 3.1

 

In addition, there are a few more changes in OpenAPI 3.1, which I would like to summarise here:

 

  • Multipart/form-data support for parameters
  • A path parameter must now also be defined. This is actually logical, but was not explicitly required until now.
  • All HTTP methods support request bodies – including DELETE and GET.
  • Responses are now optional. If a specification is developed in a “design-first” approach, it can make sense to specify them at a later stage. Especially in a collaborative environment, valid specifications can then be exchanged without dummy values for responses that are not yet finally defined.
  • When specifying media types, the contentEncoding can now also be specified (see the sketch after this list).
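The following hedged sketch illustrates some of these points in one place – a GET with a request body, an operation whose responses are deliberately still open during design, and a media-type schema with contentEncoding (all names are illustrative):

openapi: 3.1.0
info:
  title: Miscellaneous OpenAPI 3.1 changes (illustrative sketch)
  version: 0.1.0
paths:
  /search:
    get:
      summary: Complex search whose criteria travel in the request body
      requestBody:              # request bodies are now defined for all HTTP methods
        content:
          application/json:
            schema:
              type: object
              properties:
                query:
                  type: string
      # responses omitted on purpose – optional in 3.1 during design-first work
  /documents:
    post:
      summary: Upload a document
      requestBody:
        content:
          multipart/form-data:
            schema:
              type: object
              properties:
                file:
                  type: string
                  contentEncoding: base64   # encoding of the media-type content
      responses:
        '201':
          description: Document stored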

 

I have linked the complete presentation of the OpenAPI 3.1 format below.

 

What does this mean for implementation?

 

When I look at the changes compared to the previous version, I can understand why the OpenAPI Initiative deliberated for a long time about whether to call it OpenAPI 3.1 or rather OpenAPI 4.0.

 

On the other hand, most innovations are compatible: an API with a specification based on version 3.0.x still works. Only the reverse is no longer necessarily guaranteed – a 3.1 document is not always valid under the previous version. Obviously, the previous version did not yet know webhooks and global paths, but there would also be problems if the now-optional responses were omitted: such a specification would not have been a valid document in the previous version. This constellation should, however, occur in very few cases.

There is a whole series of articles and pages on the subject of OpenAPI on the net. So why should I write another one? Quite simple: many of these articles are aimed at (experienced) web developers. But what about all those who have so far dealt with EDI or the exchange of electronic business messages? This short introduction is for you.

APIs for the exchange of business documents

An API as a link between two systems or software components is nothing new. Its application for the exchange of business documents, however, is. For this requirement, solutions for the electronic exchange of business data have existed for many decades – classic EDI based on EDIFACT, for example, or the exchange of XML files. The latter is becoming more widespread, especially in the wake of the mandatory introduction of electronic invoices for public buyers.

But it is precisely here that the problems of previous solutions become apparent. In principle, classic document exchange is nothing other than a digital replica of the paper world. Basically, hardly anything about the underlying principle has changed in the last 100 years; only the transmission medium has changed, from paper to one of many electronic formats. This worked well as long as the documents to be transmitted were only exchanged between two partners – an invoice from the seller to the buyer, for example.

However, with advancing digitalisation combined with globalisation, requirements are increasing. Often not just two but several partners are interested in the information – for example, when goods are transported across borders. On top of the classic partners come the importing and exporting customs authorities, the transport company and possibly other parties, resulting in a very heterogeneous world in terms of technology. At the same time, the demand for transparency in the supply chain is increasing, as is the demand for better detection and prevention of product piracy and counterfeiting.

This is hardly feasible with classic EDI

But why is this so difficult to implement with classic EDI? Certainly, the fact that there is not one single “EDI” but many differing standards for the exchange of business documents plays an important role. In many cases, industry requirements or the requirements of individual organisations add to the burden. This often dilutes a standard to such an extent that many hundreds of variants of a single message can exist. The fact that the requirements of “business documents” must be fulfilled complicates things further.

And come to think of it, these requirements stem from the paper world – a world where people reconcile the received (paper) document with the books (accounting). These documents also contain a lot of information that could actually be superfluous in an electronic process.

For example, a customs authority does not need to know the complete contractual relationship including delivery terms or agreed conditions. Classically, this in turn creates new documents and new (EDI) messages. Or the people or systems involved do not (yet) have uniform electronic access to the information: today, large suppliers and their customers announce deliveries electronically with the despatch advice message – and yet a delivery note or goods accompanying note is still printed out and attached to the consignment.

For a smooth EDI implementation, the processes of the individual organisations along the value chains must also be semantically interlinked. The management systems of the organisations must be able to deal with such electronic messages – and always be at the same level. This is difficult to achieve in practice. Standards help, but implementation is often too difficult and too expensive.

Who might cloud my data?

There is yet another aspect: a study from 2020 showed that one of the biggest obstacles is the fear of losing control over one’s own data through central data exchange platforms.

Ensuring data sovereignty is therefore an essential aspect when new platforms are to be created in the business-to-business sector.

REST APIs offer a clear approach here: moving away from business documents for data exchange. Instead, only the information that is actually needed is exchanged, with clearly defined partners. This ensures, for example, that no agreed prices get into the wrong hands via a supplier portal.

Separation of data structures and services

A classic EDI scenario essentially comprises only two services: the conversion of data from a source system into the data format of the target system, and the forwarding of the data to the target system. Of course, there can be more complex scenarios with feedback, which work similarly to registered mail with return receipt. But anything beyond that is no longer part of the actual EDI scenario – the processes themselves have to be provided by the respective end systems. For example, an order confirmation for an order is created by the seller’s system; the EDI solution then transmits the full content back again, using the same services but a different message.

In the world of APIs, the service concept goes beyond this. An API can actively support individual process steps, or it can provide real added value, such as the provision of information, where the API user determines which filter criteria should be applied. This is hardly conceivable in the classic EDI world.

These possibilities mean that an API defines not only the data structures but also the services. The services then use the defined data structures to process incoming information or to provide outgoing information.
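A minimal sketch of this interplay: a service (endpoint) with caller-defined filter criteria that uses a data structure defined in the same specification (all names are illustrative assumptions):

openapi: 3.1.0
info:
  title: Service and data structures (illustrative sketch)
  version: 1.0.0
paths:
  /shipments:
    get:
      summary: Provide shipment information, filtered by the caller
      parameters:
        - name: status          # hypothetical filter criterion
          in: query
          schema:
            type: string
        - name: since           # only shipments after this point in time
          in: query
          schema:
            type: string
            format: date-time
      responses:
        '200':
          description: The matching shipments
          content:
            application/json:
              schema:
                type: array
                items:
                  $ref: '#/components/schemas/Shipment'
components:
  schemas:
    Shipment:                   # data structure used by the service above
      type: object
      properties:
        shipmentId:
          type: string
        status:
          type: string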

Won’t APIs make everything more complicated?

Phew, that sounds pretty complicated. And if more is added now, won’t it become even more confusing? And someone still has to implement it!

This is exactly where OpenAPI comes in – precisely not to develop proprietary rules, but to standardise the specification of an API. Understanding this difference is immensely important: it is not about how an API is implemented, but about what exactly it should do, what services it offers and what data structures it supports.

As described above, there are many standards in the EDI environment, including international ones. The United Nations Centre for Trade Facilitation and Electronic Business (UN/CEFACT) has been standardising the meanings of data structures in business documents for many years. UN/CEFACT publishes semantic reference data models, which many organisations and industries worldwide use for guidance.

About meanings and twins

REST APIs have been used on the internet for many years, especially for providing services for other websites or on mobile devices. Examples are APIs for currency conversion or the various map APIs with which one can easily navigate from A to B. The semantic web also plays an important role: the systems of online shops, search engines and social networks should be able to recognise semantic connections – for example, which ingredients a recipe requires, how long it takes to prepare, and which shops offer these ingredients at which prices.

The standards on schema.org provide an essential basis for this. All these services are guided by these clear definitions and make it possible to map so-called digital twins. Everything identifiable in the real world can also be mapped digitally: people, events, houses, licences, software, products and services, to name but a few. And all of it is understandable for people and processable by machines.

No wonder that many – including UN/CEFACT – have asked how this can be transferred to the business-to-business level. How can the achievements of the last decades be carried over to web APIs?

OpenAPI – A specification standard for APIs

The OpenAPI specification standard defines a set of rules on how APIs are to be specified. Which services are provided? What data structures are needed? What are the requirements for an implementation? And all of this in a form that can be read by the developer, but also by a machine.

This is precisely where the real strength of this standard lies: the very broad support by a variety of tools and programming environments. This makes it possible to define an API at the business level, with all its services and data structures, and to describe it at the same time so that a developer can implement it as intuitively as possible.

And the developer is massively supported in this. Since the specification is machine-readable, tools exist that generate source code directly from it. This ensures that the specified properties of the interface are implemented correctly: the names of endpoints (services) are correct, the data structures they use are implemented correctly, and all return values are clearly defined.

Of course, the developer still has to implement the server or client side. Done cleverly, this implementation can even be robust against future changes: an update of the specification can be incorporated directly into the source code and only requires minor adjustments.

Who defines OpenAPI?

OpenAPI has its origins in Swagger, which dates back to 2010. SmartBear, the company behind Swagger, recognised that an API specification can only be successful if it is developed openly and collaboratively (community-driven). That is why it transferred the rights to the OpenAPI Initiative. Big names such as Google, Microsoft, SAP, Bloomberg and IBM belong to this community, which is very active and constantly developing the specification. The most recent release at the time of writing is OpenAPI 3.1.

Since 2016 at the latest, “Swagger” has therefore only stood for the tools created by SmartBear. The specifications created with these tools are usually OpenAPI specifications. However, these tools still support the old predecessor formats, especially the Swagger 2.0 format.

Use and spread of OpenAPI

TMForum regularly conducts studies on the spread of OpenAPI. The latest study shows a significant increase in adoption. Increasingly, companies are recasting their existing APIs as OpenAPI specifications as soon as these are used for cross-company data exchange. According to the study, the market is divided into two camps: clear advocates of OpenAPI, and those organisations that intend to adopt OpenAPI increasingly in the future.

In the presentation of OpenAPI 3.1, Darrel Miller from Microsoft explained that there are still many implementations with RAML. However, the trend shows that RAML is found more in in-house solutions, while OpenAPI increasingly forms the basis for cross-company scenarios.

Code-First (Swagger) or Model-First (GEFEG.FX) to OpenAPI?

A major difference between the tools currently available on the market is whether they focus on source-code development or on model-based development. The Swagger editor is a typical example of a code-first application: in an editor, the API developer directly captures and documents his API specification, including both the service structures and the data structures. In addition to this machine-readable format, he immediately sees the prepared documentation for another developer – available in a developer portal, for example.

In contrast, the GEFEG.FX solution follows a model-driven approach. The focus here is not on the technical developer but on the business user, who is responsible for the business view and the processes within the organisation or among organisations. He is often familiar with the existing EDI implementations, or at least knows the (economic and legal) requirements for the processes to be implemented. With this knowledge, he can use the existing semantic reference models and standards in the API specification – the wheel is not reinvented every time. If such a standard changes, GEFEG.FX simply incorporates the change into the API specification. At the same time, governance requirements are implemented smartly without restricting the individual departments too much. It does not matter whether the subject is the electronic invoice, the electronic delivery note, EDI, the consumer goods industry, the automotive industry, the utilities sector, UN/CEFACT, or others.

My recommendation

The code-first approach is perfect for web developers. Business users, however, are overwhelmed by it. I therefore give a clear recommendation to all those with a focus on EDI or the exchange of electronic business messages: plan OpenAPI as a model-first approach. This is future-proof, extensible and customisable.

The webhooks supported since OpenAPI 3.1 define clear points in a process at which other APIs perform operations in a clearly defined way. The special feature of webhooks is that – in contrast to callbacks – they run synchronously with the process. The process can further process the results of the webhook.

What is the purpose of webhooks in OpenAPI 3.1?

Webhooks are actually nothing special in internet applications. The secret of WordPress’s success lies, among other things, in the deep integration of webhooks throughout the entire application. It is only through this technology that WordPress can be extended so flexibly with plug-ins.

Basic use of webhooks in WordPress

A website consists of several areas, both technically and visually. It has at least a header, often a menu area, a content area and a footer.

[Image: Example of a website structure]

The output of such a website is therefore a clearly defined process in WordPress, and this process consists of several steps. Before, during or after each of these steps, WordPress has defined points into which plug-ins can hook – the so-called webhooks.

[Image: Example of the use of webhooks with plug-ins in WordPress]

In the simplified example shown here, the plug-in extends the menu of the website. In addition, it replaces a block on a page with the corresponding output. The special feature is that the plug-in influences the processing of data in the main WordPress process.

OpenAPI 3.1 webhooks are something like callbacks, only different

With OpenAPI 3.1, callbacks are of course still supported. Nowadays, callbacks are used especially for event-driven control. The API consumer can subscribe to a callback (subscription), informing the provider of the address for the notification – usually another API endpoint, although depending on the implementation it could also be an email address. Some callbacks also support the transfer of certain (filter) parameters during subscription.

Callback example in online retail

We have all experienced a typical practical example of this kind of application: we order something from an online retailer, and the retailer sends the parcel. We are informed (at the latest) on the day of delivery that the package will arrive soon. Some providers now even go so far as to send messages like “The driver is still 5 stops away”. What happened here?

A callback was set up in which a geofence around the target address (here: 5 stops) was also specified. When the event condition is met, the API provider executes the callback, and we receive the corresponding message. We can then become active ourselves, but we don’t have to. Some of these applications now allow us to track the position of the delivery truck on a map in real time – but we have to actively call up this application. If we do not, it has no influence on the delivery process. Only if we are not present and have not given permission for the delivery does an exception occur in the delivery process (insert paper message in letterbox).
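Sketched as an OpenAPI callback, such a subscription could look roughly like this (the geofence parameter and all names are assumptions for illustration):

paths:
  /delivery-notifications:
    post:
      summary: Subscribe to notifications about an upcoming delivery
      requestBody:
        required: true
        content:
          application/json:
            schema:
              type: object
              properties:
                callbackUrl:          # where the provider should send the notification
                  type: string
                  format: uri
                stopsAway:            # hypothetical geofence condition, e.g. 5 stops
                  type: integer
      responses:
        '201':
          description: Subscription created
      callbacks:
        driverApproaching:
          '{$request.body#/callbackUrl}':     # runtime expression: the subscribed address
            post:
              requestBody:
                content:
                  application/json:
                    schema:
                      type: object
                      properties:
                        message:
                          type: string
              responses:
                '200':
                  description: Notification received by the consumer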


This design has a decisive disadvantage regarding the implementation of the API consumer: a second, complete API must be created here. On the one hand, the consumer needs the endpoint for the callback, for which a secure connection must be established from the API provider to the API consumer. On the other hand, a secure connection is needed in the opposite direction for the other calls. The two APIs must therefore trust each other and support the respective opposing interfaces. The callback itself, however, behaves passively.

Active event-driven control via webhooks as of OpenAPI 3.1

If webhooks are used instead of callbacks, the scenario can look somewhat different. Especially with just-in-time deliveries, it is important to have the right article at the right place at the right time. Often, however, several items are loaded on a truck that are needed at points in the production process that are close together in time. A haulage contractor has a rather narrow time window in which to deliver the goods to the respective ramp.

Traditionally, this is often difficult – despite long experience and a high level of automation. Which ramp does the driver go to first when standing at the barrier to the company premises? Here, a synchronous decision is required in the “delivery” process: the process must be interrupted and can only be continued when the necessary data is available.

[Image: Webhooks with OpenAPI 3.1 usually interrupt a process]

The service process of the API provider is interrupted by the webhook. The webhook performs an action on the API consumer. The results of this action can then be processed further in the API provider’s service.

This return value is a nice thing, but not absolutely necessary. It is also conceivable to define webhooks that only expect a positive return, e.g. 200 OK.

Nothing but webhooks in the API

A major new feature of webhooks in OpenAPI 3.1 is that APIs can be specified that consist solely of webhooks. When I first heard this, I found it strange. On further reflection, however, it makes perfect sense.

For example, APIs can be specified that serve purely for process monitoring. This is roughly comparable to the control centre of a power plant. The actual operation is fully automatic. Clearly defined events are displayed on the instruments of the control centre. The operators have the possibility to react to the individual events in a controlling manner.

Another approach would be to define APIs that follow a similar concept to WordPress. The API provider executes one or more clearly defined processes – for example, the calculation of an invoice, the booking of a ticket or the production of goods. Extension points are defined in this process, comparable to extension points in XML messages, and other APIs can hook into these extension points to extend the basic function dynamically and flexibly.

Let’s say we have written an API that can calculate and create very simple invoices. “Very simple” here means that it only supports one seller, one buyer and simple line items. If webhooks are defined at the right steps in this process, it can easily be extended: one plug-in could add the consideration of discounts, another the support of invoices in a foreign currency.

The skilful use of webhooks makes the difference

And this is exactly where the difficulty lies – but at the same time a powerful opportunity. If I have also specified the plug-in API in such a way that it can handle additional APIs or even be extended again itself, webhooks become compelling tools.

But this is perhaps the most crucial point when designing an API with webhooks: I have to trust the (foreign) APIs. I have to trust them to change the data for my process in the way that is intended. On the API provider side, I have to consider that the data might be manipulated in a way that I cannot use.

On the other hand, the OpenAPI 3.1 specification also guards against this: the API provider can specify the data structure of the return format. However, a check for correctness of content or plausibility may still have to be carried out in addition. If the provider does not take this situation into account, or if the plug-in with the discount function has an error, the entire invoice could become incorrect – for example, if the line item subsequently (and apparently) states that only EUR 17 is to be paid in total for an item with quantity 5 and a unit price of EUR 4, instead of the expected EUR 20.

A webhook example, created with GEFEG.FX

Since the official webhook examples of the OpenAPI Initiative are rather sparse, I would like to conclude by showing a simple example in YAML format.

openapi: 3.1.0
info:
  title: GEFEG CrossIndustryInvoice Webhook example
  version: 1.0.0
webhooks:
  createLineItem:
    post:
      summary: Inform external API that a new LineItem is created
      requestBody:
        description: Information about a new line item in the invoice
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/LineItem'
        required: true
      responses:
        '200':          # status codes are string keys in OpenAPI
          description: Return a 200 status to indicate that the data was processed successfully. The response body may contain the extended line item.
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/LineItem'
  validateLineItem:
    get:
      summary: Validate the LineItem
      requestBody:
        description: Information about the LineItem to be validated
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/LineItem'
        required: true
      responses:
        '406':
          description: Not Acceptable – the validation failed.
        '200':
          description: OK, the line item is valid

components:
  schemas:
    LineItem:
      type: object
      properties:
        AssociatedDocumentLineDocument:
          $ref: '#/components/schemas/DocumentLineDocumentType'
        SpecifiedTradeProduct:
          $ref: '#/components/schemas/TradeProductType'
        SpecifiedLineTradeAgreement:
          $ref: '#/components/schemas/LineTradeAgreementType'
        SpecifiedLineTradeDelivery:
          $ref: '#/components/schemas/LineTradeDeliveryType'
        SpecifiedLineTradeSettlement:
          $ref: '#/components/schemas/LineTradeSettlementType'
      required:
        - AssociatedDocumentLineDocument
        - SpecifiedTradeProduct
        - SpecifiedLineTradeAgreement
        - SpecifiedLineTradeDelivery
        - SpecifiedLineTradeSettlement
    DocumentLineDocumentType:
      type: object
      properties:
        LineID:
          $ref: '#/components/schemas/IDType'
    LineTradeAgreementType:
      type: object
      properties:
        NetPriceProductTradePrice:
          $ref: '#/components/schemas/TradePriceType'
      required:
        - NetPriceProductTradePrice
    LineTradeDeliveryType:
      type: object
      properties:
        BilledQuantity:
          $ref: '#/components/schemas/QuantityType'
      required:
        - BilledQuantity
    LineTradeSettlementType:
      type: object
      properties:
        SpecifiedTradeSettlementLineMonetarySummation:
          $ref: '#/components/schemas/TradeSettlementLineMonetarySummationType'
      required:
        - SpecifiedTradeSettlementLineMonetarySummation
    TradeProductType:
      type: object
      properties:
        Name:
          $ref: '#/components/schemas/TextType'
    IDType:
      type: string
    QuantityType:
      type: number
    TextType:
      type: string
    TradePriceType:
      type: object
      properties:
        ChargeAmount:
          $ref: '#/components/schemas/AmountType'
      required:
        - ChargeAmount
    TradeSettlementLineMonetarySummationType:
      type: object
      properties:
        LineTotalAmount:
          $ref: '#/components/schemas/AmountType'
      required:
        - LineTotalAmount
    AmountType:
      type: number

In this example, two webhooks are defined. The first, createLineItem, is called when a new line item is inserted. The webhook performs the POST operation in the external API and transfers the information about the current line item. The (possibly extended) line item is expected as the return value.

The second webhook, validateLineItem, is used to extend the validation of the line item. The external API would thus be able to check the discount calculation, for example. If it is correct, it returns code 200; if something went wrong, it returns code 406.

This example may not be fully developed in all respects, but it is intended to show the possibilities of using webhooks with OpenAPI 3.1.