Example Solutions

These practical example solutions illustrate the features of the Integrator, and can be used as starting points for developing your own solutions.

About These Examples

The examples described in this post are either included in the Integrator download (in its “example-solution” sub-folder) or can be obtained from this website via the links provided. Foldda solutions are very small archives of folders and files, so they take only seconds to download.

All examples we distribute are tested and ready to run out of the box. For illustration purposes, it should make no difference whether the Integrator is running in DEMO mode (without a license) or in FULL mode.

Because your computer’s local and network environment may differ from the environment in which these solutions were set up, you will need to examine each node’s settings before running a solution, and possibly change some of them to suit your environment. Checks include file names and paths, network IPs and port numbers, specific data values (for data filtering and mapping), and firewall rules, among the other settings used by these solution configs.

If you encounter an error, make sure to check the solution’s node logs (like the one shown below) and the runtime logs. This information is often useful for troubleshooting.

Feel free to contact us if you need assistance troubleshooting your solution.

1. HL7 Network Sender And Receiver

Purpose: this solution is intended to give beginners a first introduction to the Foldda Integrator. It is walked through and explained in detail in the Quick Start Guide.

This solution contains two separate data flows, an “HL7 network sender” and an “HL7 network receiver”, which are pre-configured to test each other: the sender flow reads HL7 data files from a pre-configured source path, parses HL7 records from the input files, and sends the records to a network destination; the receiver flow listens on a network port, captures the incoming HL7 messages, and stores them.

Config notes: for the sender’s file input, make sure the file reader points to an existing source location and that the input files conform to the expected file-naming pattern. The sender’s target port must match the receiver’s listening port number.
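For background, HL7 v2 messages are typically exchanged over TCP using MLLP (Minimal Lower Layer Protocol) framing. The sketch below is a minimal Python illustration of the sender/receiver pattern described above, not Foldda’s actual implementation; the port number and message content are made up.

```python
import socket
import threading

# MLLP frames an HL7 message as: <VT> message <FS><CR>
VT, FS, CR = b"\x0b", b"\x1c", b"\x0d"

def mllp_wrap(message: str) -> bytes:
    """Wrap an HL7 message in an MLLP frame for network transmission."""
    return VT + message.encode("utf-8") + FS + CR

def mllp_unwrap(frame: bytes) -> str:
    """Strip the MLLP envelope and return the HL7 message text."""
    return frame.strip(b"\x0b\x1c\x0d").decode("utf-8")

def receiver(listen_port: int, results: list) -> None:
    """Listen on a port, capture one framed HL7 message, and store it."""
    with socket.socket() as srv:
        srv.bind(("127.0.0.1", listen_port))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            results.append(mllp_unwrap(conn.recv(65536)))

# A made-up ADT message for the demonstration.
hl7 = "MSH|^~\\&|PAS|HOSP|LAB|HOSP|202401010930||ADT^A01|MSG0001|P|2.3"

captured: list = []
t = threading.Thread(target=receiver, args=(2575, captured))
t.start()
# The "sender" side: connect to the receiver's listening port and send.
with socket.create_connection(("127.0.0.1", 2575)) as cli:
    cli.sendall(mllp_wrap(hl7))
t.join()
print(captured[0].split("|")[8])  # the message type field: ADT^A01
```

Foldda’s HL7NetSender and HL7NetReceiver handle this transport internally; the sketch only shows why the sender’s target port and the receiver’s listening port must agree.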

2. HL7 Mapping, Filtering, and Forwarding

Purpose: this solution is a simplified version of a typical integration interfacing setup: the Receiver listens for and receives HL7 data from a common data source, such as the PAS, and then, after applying some mapping and filtering, forwards the data to various destinations, such as different departments and applications.


This solution contains one data flow with one inbound interface (HL7NetReceiver) and two outbound interfaces (HL7NetSender). The Mapper applies some common mapping (in the example, to the Sex and Language fields) to all data. The mapped data is then selectively sent to two different destinations: the filters are set up to ensure that only A03 and A08 messages are sent to one destination, and only R01 messages to the other.

At each outbound interface, a Catcher node saves a copy of the data that has been sent.
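In Foldda this routing behaviour is configured in the Mapper and Filter nodes rather than written as code, but as a conceptual sketch the logic looks like the following Python (the mapping values and destination names here are hypothetical):

```python
# Hypothetical value mapping, like the Mapper's Sex mapping in the example.
SEX_MAP = {"M": "Male", "F": "Female"}

def map_sex(code: str) -> str:
    """Apply a simple value mapping; pass unknown codes through unchanged."""
    return SEX_MAP.get(code, code)

def route(hl7_message: str) -> str:
    """Decide a message's destination from its MSH-9 trigger event."""
    msh_fields = hl7_message.splitlines()[0].split("|")
    event = msh_fields[8].split("^")[-1]   # e.g. "ADT^A03" -> "A03"
    if event in ("A03", "A08"):
        return "destination-1"             # first outbound interface
    if event == "R01":
        return "destination-2"             # second outbound interface
    return "dropped"                       # everything else is filtered out
```

Mapping runs on all messages first; the filters then decide which outbound interface, if any, each mapped message reaches.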

Config notes: to test this solution, you can use the free SmartHL7 tools to set up network sending and receiving, or you can build your own network sender and receiver following the “HL7 Sender and Receiver” example above.

The Mapper and Filter are set up to work with the applicable HL7 message structures and values. You can use the “sample.hl7” file included in the Integrator download, but if you want to test the solution against your own data, make sure to check (and adjust if necessary) the Mapper and Filter node settings, i.e., make sure the Sex and Language values are covered by the mapping logic, and that the filtered message types match the data being tested.

3. Converting HL7 Data to CSV

Purpose: HL7 data by its nature has an implied hierarchical structure; that is, the data elements of a message are organized in a Segment/Field/Repeat/Component/Sub-component hierarchy. Without a good parser and viewer tool (such as the SmartHL7 Viewer), most people find it difficult to read HL7 messages directly.

This customizable solution converts HL7 message data into the much more common CSV format, so you can read the data and perform further analysis. It does so by letting you specify which HL7 data elements are of interest, then parsing and extracting those elements into a CSV line.
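To make the Segment/Field/Component addressing concrete, here is a minimal Python sketch (not Foldda’s actual parser, and not its “Foldda Expression” syntax) that resolves simple “SEG-F.C” addresses and emits one CSV line per message, assuming the default HL7 separators `|` and `^`; the sample message is made up:

```python
import csv
import io

def get_element(message: str, address: str) -> str:
    """Resolve a simple 'SEG-F.C' address (e.g. 'PID-5.1') in an HL7 v2
    message, using the default separators. Illustrative sketch only."""
    seg_id, _, rest = address.partition("-")
    field_s, _, comp_s = rest.partition(".")
    for segment in message.splitlines():
        fields = segment.split("|")
        if fields[0] != seg_id:
            continue
        # HL7 counts MSH's field separator itself as MSH-1, so the split
        # index is offset by one for the MSH segment.
        idx = int(field_s) - 1 if seg_id == "MSH" else int(field_s)
        if idx >= len(fields):
            return ""
        field = fields[idx]
        if comp_s:
            comps = field.split("^")
            c = int(comp_s) - 1
            return comps[c] if c < len(comps) else ""
        return field
    return ""

msg = ("MSH|^~\\&|PAS|HOSP|||202401010930||ADT^A01|MSG0001|P|2.3\r"
      "PID|1||12345^^^HOSP||Smith^John||19800101|M")
row = [get_element(msg, a) for a in ("PID-3.1", "PID-5.1", "PID-8")]
buf = io.StringIO()
csv.writer(buf).writerow(row)   # one CSV line per HL7 message
print(buf.getvalue().strip())   # 12345,Smith,M
```

The solution’s HL7ToTabularConverter handler performs this element selection declaratively through its settings, so no coding is needed in practice.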


The HL7ToCsv node is the key part of this solution. The node’s handler (HL7ToTabularConverter) has the following settings – 

As in the example, the configuration specifies that each produced CSV line contains the MSH-9, PID-3.1, PID-5.1, PID-8, and PID-19 data elements from an HL7 input message. (More specifically, ‘PID-8==M’ specifies that the conversion only applies to HL7 messages whose PID-8 value equals ‘M’.) The syntax used for specifying the rule is called “Foldda Expression”.

In addition to this innovative handler, the solution also demonstrates how Foldda Integrator handlers can automatically manipulate CSV data: how to filter CSV data by the values in a specific column, how to change a heading column’s value, and how to mark columns so that, when the CSV file is opened in Excel, they won’t be automatically (and undesirably) reformatted by Excel.

Config notes: for simple conversions, such as “print a list of patients’ names and SSNs”, it is relatively straightforward to extend this solution to suit your needs. For more complicated situations, such as selecting certain observation results with a certain test status, you will need to learn more about Foldda Expression. (And it’s not hard 🙂)

Again, you can use the provided “sample.hl7” file to test the solution, but when using your own data, be mindful that the solution’s settings may need to be changed to match your data source and your output requirements.

Also, note that in this solution, Catcher nodes are placed to capture the output at each step of the data-processing flow. You can check the intermediate output at each step when troubleshooting your solution.

4. Simple CSV ETL

Purpose: ETL stands for Extract-Transform-Load. It is a pattern commonly used in data warehousing for loading data from an external data source into a target database.

This solution allows you to automate loading data from a CSV file into a target database table, flexibly. It demonstrates how, when the source and target columns don’t match, you can select only some columns from the source CSV to load into the target table, and how you can specify which source columns map to which target columns, even when their names differ. The solution also illustrates how to specify data types during database loading.

The OleDbWriter node is the key part of this solution. The node’s handler (TabularOleDbDatabaseWriter) has the following settings – 


As in the example, the configuration is set up using the provided sample CSV input data, “PatientList.csv”, and a sample MS Access database. Make sure the database path is set correctly for your test environment.

Hint: once you understand how to get an ETL solution working with your database, you can combine this solution with the “Convert HL7 to CSV” solution to automatically import HL7 data into a database, which would be quite a neat setup.

Hint 2: if you require more sophisticated data transformations, check out the “TabularDataTransformer” handler and its associated transformation functions.

5. A Mock Integration Server Setup

Purpose: we use this solution internally to test our development, as it has five data flows and contains a number of nodes with a variety of handlers.

The top data flow, “05.1-ADT-Feeder”, sends data to the input of the second data flow, much like example solution 1.

The second data flow, starting with “05.2-MOCK-PAS-In”, is an extended version of example solution 2. It processes the received data and eventually sends the output through three “ADT-Out_NetSenders” (highlighted in blue).

The last three Receiver data flows simply receive this output data.

Question: the first data flow has only one node (HL7NetSender); how, and from where, does it get its input data?

Answer: you can use this handy feature to load input test data into a node.

More Solution Examples

Here are some solution examples that are not included in the download:

  • Patient data de-identifier – shows how to map a patient’s private data to random values in HL7 messages.
  • Conditional mapper – shows how to apply a mapping to an HL7 data element conditionally, based on the value of another data element.
  • CSV data-driven data generator – automatically generates test data by merging CSV data into a data template (a bit like MS Word’s Mail-Merge feature).
  • HL7 email alert – monitors data element values in HL7 messages and triggers email alerts when a certain value is observed. E.g., you could use this solution to automatically email doctors or admin staff when a patient is discharged from a specific ward.
  • CSV and HL7 complex data transformation – these two solutions illustrate the use of Foldda transformation functions for manipulating the values of CSV and HL7 data elements.
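As an illustration of the email-alert idea, the decision step might look like the following Python sketch. The ward name and field choices are hypothetical, the sample message is made up, and actual email sending (e.g. via smtplib) is omitted; in Foldda this is all configured, not coded:

```python
# Hypothetical ward of interest for the alert.
WATCHED_WARD = "WARD-7"

def should_alert(hl7_message: str) -> bool:
    """Fire when an A03 (discharge) message's patient location (PV1-3,
    first component) matches the watched ward."""
    segments = {s.split("|", 1)[0]: s.split("|")
                for s in hl7_message.splitlines()}
    event = segments["MSH"][8].split("^")[-1]        # e.g. "A03"
    pv1 = segments.get("PV1", [])
    ward = pv1[3].split("^")[0] if len(pv1) > 3 else ""
    return event == "A03" and ward == WATCHED_WARD

# A made-up discharge message from the watched ward.
msg = ("MSH|^~\\&|PAS|H|||202401010930||ADT^A03|ID1|P|2.3\r"
       "PV1|1|I|WARD-7^R1^B2")
print(should_alert(msg))  # True
```

The real solution would additionally format and send the email, which the sketch leaves out.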

These example solutions can be downloaded from here.

Let us know if you have any questions about using these examples.