
Mathematical Core Tests: Maximizing Test Scope and Quality Using Suitable Tools and Automation

When migrating an old system to a new one, or when introducing new products on a modern platform, mathematical core tests are usually a complex matter, yet they are both essential for the success of the project and extremely urgent. This makes it all the more important to support the process with sophisticated tools and a high degree of automation.
Written on 07/15/24

Article by Dr Björn Medeke, Principal & Managing Director of Cominia Aktuarielle Services GmbH, and Tobias Wessels, Head of Software Development

The most important prerequisite for successful system testing is a reference calculation kernel into which the policies to be tested can be entered quickly and efficiently without compromising our high quality standards. At Cominia, we have developed such a computing module, characterized by an advanced core architecture, and deployed it with our customers. This enables us and our customers to implement new tariff groups, including the necessary customizations, in the shortest possible time. To meet our high expectations and to continuously compare our reference implementation against the production system, we have developed tools that allow automated testing at regular intervals using an extensive test dataset drawn from production.

During the development of the processing unit, and while adapting it to our customers' needs, several advantages of our computational engine have emerged that reflect our experience in calculation kernel development. They can be summarized as follows:

  1. A consistent separation of the technical and the functional calculation core architecture enables actuaries without extensive programming knowledge to incorporate new tariffs into the reference calculation kernel quickly, thus increasing the speed of development.
  2. A technical description of the data models in a meta-programming language simplifies further development and can be used as a basis for generating the necessary code base (data classes, interfaces to peripheral systems, data storage). 
  3. Modular modeling of the product data, which stays as close as possible to the target system while retaining a degree of freedom in the specific tariff coding, enables prompt and independent implementation of the tariffs.
  4. A high degree of automation and interfaces to the production system increase acceptance within the team and are essential for mass testing.

Even if these observations are likely to meet with broad agreement, the actual implementation is where things often go wrong. As a consequence, the development of the computing module becomes unnecessarily complicated as the range of tariffs and functions grows, and can then only be carried out by experts.

This leads to delays and increased costs and ultimately jeopardizes the success of the project. It is therefore worth taking a closer look at these points and at the approaches we have developed.

Building a sustainable computing core architecture

When setting up our core computing architecture, we ensured a strict separation of business and technology. On the one hand, this concerns the implementation of new mathematical calculation units and of the sequences within a processing step: these can be written without in-depth knowledge of the technical machinery, such as caching and logging, but also the interpolation of values between two cut-off dates or iterations within a process. On the other hand, we have used or implemented frameworks that facilitate the extension of the existing computing core architecture, for example the modeling of the required data structures. This often involves adapting interfaces to peripheral systems; however, we have designed our frameworks so that such technical adaptations are either not necessary for the specialist developers or are generated automatically.
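To illustrate this separation, the following Python sketch shows one possible pattern: a technical wrapper that contributes caching and logging, while the actuarial developer writes only the calculation itself. All names and the mortality assumption are hypothetical and merely indicate the idea; they are not taken from our calculation kernel.

```python
# Illustrative sketch only -- hypothetical names, not the actual kernel code.
# The technical layer (caching, logging) is provided by the framework;
# the functional layer contains nothing but the actuarial calculation.
import functools
import logging

logger = logging.getLogger("reference_kernel")


def calculation_step(func):
    """Technical wrapper: memoizes results and logs each invocation."""
    cache = {}

    @functools.wraps(func)
    def wrapper(*args):
        if args not in cache:
            logger.debug("running %s%r", func.__name__, args)
            cache[args] = func(*args)
        return cache[args]

    return wrapper


@calculation_step
def survival_probability(age: int, duration: int) -> float:
    """t_p_x: probability that a life aged `age` survives `duration` years."""
    p = 1.0
    for t in range(duration):
        p *= 1.0 - mortality_rate(age + t)
    return p


def mortality_rate(age: int) -> float:
    """Placeholder mortality assumption for the sketch."""
    return min(0.0005 * 1.09 ** age, 1.0)
```

The specialist developer never has to touch the wrapper; new calculation steps simply reuse it.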

Figure 1 shows a present value class from our reference calculation kernel. Although this class is not free of technical details, it is easy to understand for any actuary – with or without coding knowledge – and could easily be modified or copied to implement a new present value, for example.

Figure 1: A present value class from the reference calculation kernel
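As a rough indication of what such a class can look like, here is a minimal, self-contained sketch of an annuity-due present value in Python. The class name, the mortality table and the interest rate are illustrative assumptions and do not correspond to the class shown in Figure 1.

```python
# Hypothetical sketch of a present value class; not the class from Figure 1.
class AnnuityDuePresentValue:
    """Present value of an annuity-due of 1, payable while alive,
    for at most n years, at a fixed technical interest rate."""

    def __init__(self, mortality_table, interest_rate: float):
        self.q = mortality_table              # q[x] = one-year death probability
        self.v = 1.0 / (1.0 + interest_rate)  # discount factor

    def value(self, age: int, n: int) -> float:
        pv, survival = 0.0, 1.0
        for t in range(n):
            pv += survival * self.v ** t       # payment at the start of year t
            survival *= 1.0 - self.q[age + t]  # survive to the next payment date
        return pv


# Example use with a made-up mortality table and 1.25% technical interest.
table = {x: min(0.0005 * 1.09 ** x, 1.0) for x in range(121)}
print(AnnuityDuePresentValue(table, 0.0125).value(age=40, n=25))
```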

Figure 2 shows a class for modeling our product data model. For this purpose, we have established our own meta-programming language in our calculation kernel to describe the data fields and relationships between different components of the data model. This also makes it possible for non-technical actuaries to extend this data model. In addition, our framework can use the information to generate the database interface and the necessary data classes so that no further customization is required from the specialist developer.

Figure 2: A class from the product data model, described in the kernel's meta-programming language
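The meta-programming language used in our kernel is proprietary, but the following Python sketch conveys the idea under simplified assumptions: the data fields are described declaratively with metadata, and generators derive further artefacts, here a database statement, from that single description. All field and class names are illustrative.

```python
# Hypothetical sketch of a declarative data-model description;
# not the meta-programming language actually used in the kernel.
from dataclasses import dataclass, field, fields


def column(db_column: str, description: str):
    """Attach generation metadata (DB column name, documentation) to a field."""
    return field(metadata={"db_column": db_column, "description": description})


@dataclass
class TariffModule:
    tariff_key: str = column("TARIFF_KEY", "Key of the tariff in the target system")
    guaranteed_rate: float = column("GUAR_RATE", "Technical interest rate")
    premium_refund: bool = column("PREM_REFUND", "Premium refund on early death")


def create_table_statement(model) -> str:
    """Derive a CREATE TABLE statement from the field metadata (types omitted)."""
    columns = ", ".join(f.metadata["db_column"] for f in fields(model))
    return f"CREATE TABLE {model.__name__} ({columns})"


print(create_table_statement(TariffModule))
# CREATE TABLE TariffModule (TARIFF_KEY, GUAR_RATE, PREM_REFUND)
```

In the same way, data classes, interfaces to peripheral systems and documentation can be generated from one declarative source.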

Independent development of a product data model

When developing our product data model, we took into account various aspects that affect how well the tariff landscape can be mapped in the reference calculation engine; as a result, the model now offers every opportunity to implement necessary extensions for new customers flexibly and without great effort.

Firstly, we are able to integrate new tariffs into the processing unit very easily and efficiently. In addition to adjusting the code, this requires the policy parameters and rules to be recorded in a product data model. As different policies in the same tariff group often differ only in a few characteristics, we have decided to implement the product data model according to the modular principle: in our calculation kernel, for example, a tariff consists of the modules ‘installment surcharge system’, ‘rules for early expiry’, ‘cost parameters’ and ‘tariff module’, whereby the tariff rules for the different statuses and contract components are stored at the tariff module level. This allows us to reuse large parts of the existing settings when mapping new tariffs.
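As a simplified sketch of this modular principle, with assumed module contents since the actual parameters are customer-specific, a tariff can be composed of small, reusable building blocks, and a new tariff generation only overrides what actually differs:

```python
# Illustrative sketch of a modular tariff definition; the module names follow
# the text above, their concrete contents are assumptions.
from dataclasses import dataclass, replace


@dataclass(frozen=True)
class InstallmentSurcharges:
    monthly: float
    quarterly: float
    semi_annual: float


@dataclass(frozen=True)
class CostParameters:
    acquisition: float      # e.g. acquisition costs as a fraction of the premium sum
    administration: float   # ongoing administration costs, simplified to one rate


@dataclass(frozen=True)
class Tariff:
    key: str
    surcharges: InstallmentSurcharges
    costs: CostParameters
    # further modules ('rules for early expiry', 'tariff module', ...) omitted


STANDARD_SURCHARGES = InstallmentSurcharges(monthly=0.05, quarterly=0.03, semi_annual=0.02)

TARIFF_2020 = Tariff("E2020", STANDARD_SURCHARGES, CostParameters(0.025, 0.030))
# The successor tariff reuses everything except the changed cost parameters.
TARIFF_2022 = replace(TARIFF_2020, key="E2022", costs=CostParameters(0.025, 0.025))
```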

Secondly, our product data model often allows us to map different tariff variants – such as discounts – by using suitable contract keys. This means that we can easily combine several tariff variants or old tariffs into one, if this is desired in the production system.

Thirdly, we also took care not to become too dependent on the production system when categorizing the tariffs, i.e. when naming the tariff variants and dividing the product data into modules. A frequent requirement of the IT service providers of the production system is that reference values are provided at an early stage, ideally before implementation in the production system begins. It is therefore often necessary, and easily possible in our reference calculation engine, to start the conversion before the final tariff keys in the target system are known. Missing information can then be added later with little effort.

Finally, as with the calculation core architecture, when developing the product data model, we also ensured that the maintenance and further development of the data model can be carried out by specialized developers. They are supported by a tailored selection of tools and technologies, which have been very well received by the actuaries in our previous projects.

Use and acceptance of the reference calculation engine in a project

For a reference calculation engine to be successful, it is essential that it meets with broad acceptance in the project. If the expectations of the reference calculation kernel have not been agreed between all project participants, or if the requirements of the users have not been sufficiently analyzed and considered, frustration arises, which ultimately leads to the reference engine not being used sufficiently in testing. In that case, the values for individual tests are often recalculated manually. This prevents the reference engine from unleashing its full potential, and the investment is wasted.

When using our reference calculation engine with customers, we found that manually entering test cases via a user interface and then evaluating the results in the Excel output represents a major hurdle for users. Even the option of importing test cases from the production system and having them calculated improved the situation only partially.

Consequently, we took the logical step of fully automating both the calculation of the test cases and the processing of the calculated results. Today, the testers only need to define the test cases and provide traces from the production system. These traces are then used by our reference calculation engine as test data and are automatically recalculated at regular intervals. The comparison of the calculated results with the production values and a clear presentation of the deviations are also carried out automatically, so that the testers can concentrate on analyzing these deviations without having to deal with the operation of the reference engine.
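A minimal sketch of this comparison step is shown below; the interface to the reference kernel (`kernel.calculate`), the structure of the production traces and the tolerance are assumptions made for illustration.

```python
# Hypothetical sketch of the automated comparison of reference values against
# production traces; interfaces and data layout are assumptions.
from dataclasses import dataclass

TOLERANCE = 0.01  # accepted absolute deviation per value, e.g. one cent


@dataclass
class Deviation:
    policy_id: str
    field: str
    reference: float
    productive: float | None  # None if the value is missing in the trace


def compare_policy(policy_id: str, reference_values: dict, trace: dict) -> list:
    """Compare all values of one policy and collect the deviations."""
    deviations = []
    for name, ref in reference_values.items():
        prod = trace.get(name)
        if prod is None or abs(ref - prod) > TOLERANCE:
            deviations.append(Deviation(policy_id, name, ref, prod))
    return deviations


def run_regression(kernel, traces: dict) -> dict:
    """Recalculate every trace with the reference kernel and report deviations."""
    report = {}
    for policy_id, trace in traces.items():
        deviations = compare_policy(policy_id, kernel.calculate(policy_id), trace)
        if deviations:
            report[policy_id] = deviations
    return report
```

Such a run can be scheduled at regular intervals, so that only the deviation report needs human attention.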

Thanks to the high degree of automation, test scenarios can be repeated as often as required. It is therefore possible to keep running the test cases that have already passed against new extracts of the production traces and to build up a large inventory of production test cases. This ensures that neither the values in the production system nor the reference values change unintentionally. In addition, we have developed a tool that provides a large stock of test cases in the event of necessary changes to the production system and that can be used in testing with little effort once both sides have been adapted.

Conclusions for efficient reference kernel development

The development of a reference calculation core and its integration into an existing test process is complex and usually ties up a lot of capacity. Thanks to a well-thought-out architecture of the processing unit, we have managed to greatly reduce the overall complexity and to partition the work to such an extent that development can be divided among several more specialized experts. This enables us and our customers to take on new portfolios quickly and efficiently and to start system testing with a sophisticated and highly automated tool in the shortest possible time.

Factsheet

 

Tools developed

Reference calculation kernel, regression test tool, automated test tool, unit testing framework, generated HTML documentation of the source code, developer scripts (e.g. import of excess records)

Statistics

Programme code:

~1,500 classes

More than 100,000 lines of code

Additional code artefacts:

Approx. 50,000 lines of code for scripts & developer tools (SQL)

Development time

5 years / 15 person years

Contact

Head of Software Development:
Tobias.Wessels@cominia.de

 

With my extensive experience in the field of modern software development and my in-depth actuarial knowledge, I support insurance companies in mapping their tariff landscape in a more efficient, future-proof and mathematically precise way.

Managing Director:
Bjoern.Medeke@cominia.de