leadconduit-integrations

Writing LeadConduit Integrations

This guide should tell you everything you need to know to develop LeadConduit integration modules.

1. LeadConduit Overview

What It Does

LeadConduit is a real-time data integration platform designed for processing internet leads. It is basically an HTTP transaction handler that is customizable for a variety of different use-cases.

In a typical scenario the system receives an HTTP POST of data, as from an online form submission. This post typically contains a consumer’s contact information (a “lead”), such as e-mail address, first and last name, phone number, etc. (It may also contain a lot of other data specific to the marketing campaign it’s part of. For example, mortgage-industry leads would contain details about the mortgage loan the consumer is shopping for: the size of the loan needed, their credit score, whether they qualify for a veteran’s loan offer, and much more.)

These lead posts are sent in to a LeadConduit flow, which a LeadConduit customer has set up to define the steps that should be taken with leads of that type. The flow defines what sources to accept leads from. Each source in a flow uses a particular inbound integration, which controls how the data on each post is parsed, and how the fields which make up the lead (“email”, “first_name”, etc.) are populated. How these integrations work will be discussed in much more detail later in this guide.

Sources are used for reporting, answering questions such as: “How many leads did Vendor X send in to this flow last month?” Each source may also have acceptance criteria rules defined for it, which allows the flow to immediately reject leads that don’t meet some bare minimum requirements. For example, if a particular vendor must always provide a valid postal code, this could be defined as a rule on that source.

Once a lead has been accepted in the flow, it proceeds to the remaining predefined steps. There are different types of steps. One type is a recipient step. In the UI, this type of step is presented as either an “Enhancement” or “Recipient” step; from the standpoint of an integration developer, they’re essentially the same. This kind of step is where LeadConduit makes HTTP requests of other services, via a particular outbound integration. These integrations and how they work is the main subject of this guide, but at a high level they define what data is sent where, in what format, and how the response from that service is interpreted. They also control how data will be added to the lead from that point on in the flow, which is referred to as appended data.

The other type of step that can be added to a flow are filter steps, which define criteria to stop processing and reject a lead. These are similar to the acceptance criteria mentioned previously, but they apply to leads from all sources, and they can be placed after recipient steps. That means that their rules can also use appended data. For example, after a recipient step that sends the lead’s email address to an email-verification service, there would probably be a filter step immediately following it, with a rule such as, “if the email-verification service responded that this email is fraudulent, then stop processing now”.

After all the defined steps have been executed, or if a filter evaluation results in early termination of the flow, a response is returned to the original source of the lead. The format of that response is determined by the inbound integration, but generally includes LeadConduit’s unique lead ID and some indication of overall success (i.e., a good, accepted lead) or failure (a bad, rejected lead). The duration of this process for each lead varies, depending on the number of steps and the responsiveness of external services, but is typically only a second or two.

How It Does It

The LeadConduit service provides two interfaces, referred to as the API and the handler. The API is used by the web client UI (as well as directly from other systems, in some cases), while the handler interface receives incoming lead posts and, of course, handles them.

These two interfaces are provided by a single Node.js server application. The many modules that make up the server are published on npmjs.com. Some are public, but most are available only to internal ActiveProspect developers. Source code is published on github.com and is also a mix of public (i.e., open source) repos and others that are accessible only to members of the ActiveProspect organization.

Similarly, all integrations are Node.js modules, published on npmjs.com. Some parts of each integration module are used by the API, while others are used by the handler. For example, when a flow step is being configured, the list of fields required by an outbound integration will be shown in the LeadConduit UI (via data from the API). The majority of the integration code – how to formulate the outbound request, how to parse the response, and more – will be used by the handler at lead-handling time.

2. Key Concepts

Fields - Standard and Custom

Every lead can be thought of as a collection of fields. LeadConduit has a large (and growing) number of “standard” fields. These are predefined fields, with a meaningful name and a particular type. The full list is available in the LeadConduit UI at https://app.leadconduit.com/fields, but examples include email, first_name, and postal_code.

There are also custom fields, which any customer may create, name, and use however they wish within their account. But we will be developing reusable integrations that can be used in any account, so we will only ever use standard fields.

Field Types

All fields have a type. The default type is often string, but other types exist that provide a richer set of components; these are sometimes called “rich types”. Examples include phone and email.

Here’s a full example of JSON representing a valid phone-type field:

{
  "prefix": "1",
  "raw": "5125551212",
  "area": "512",
  "exchange": "555",
  "line": "1212",
  "number": "5551212",
  "extension": null,
  "country_code": "US",
  "is_tollfree": false,
  "type": null,
  "valid": true
}

And here is an example of an invalid phone-type field:

{
  "raw": "do not call me",
  "valid": false
}

Inbound vs. Outbound Integrations

As described so far, LeadConduit has many standardized internal fields to represent attributes on leads. Let’s consider the field for storing a lead’s first name (e.g., “Juan” or “Esther”) as an example. In LeadConduit, that data belongs in the standard field first_name. But a lead vendor posting the lead to LeadConduit may use a different name to collect first-name on their webform, say, fname. Meanwhile, the CRM system receiving the lead from LeadConduit may call the first-name field something else still, perhaps name_1.

The format of lead data may be inconsistent, as well. Our same hypothetical lead vendor may collect phone numbers on their webform with parentheses around the area code and dashes separating the line and exchange: (512) 555-1212. But it could be that the buyer’s CRM system expects that data to be exactly 10 numeric digits: 5125551212.

A lot of the value that LeadConduit provides is in solving these kinds of incompatibilities for our customers. And a lot of the work that makes that happen is in integrations.

As mentioned previously, there are two types of integrations: inbound, which processes data posted into LeadConduit, and outbound, which controls how data is sent out of LeadConduit.

An inbound integration has three main jobs: it receives an incoming request, it parses that request and creates a lead with the parsed data, and it formulates the response that is given back to the submitter.

An outbound integration also has three main jobs: it validates that the minimum required data it needs is available, it formulates the outgoing request, and it parses the response it receives, appending data to the lead as appropriate.

Vars, Appended Data, and “The Snowball”

As a lead progresses through a flow, it accumulates data from each step. We sometimes call this “the snowball”, because it’s like a snowball rolling down a snowy hill, growing larger and larger as more snow sticks to it. In the code, this ‘snowball’ is contained in a variable conventionally named vars.

We’ll see a lot more of vars, but in this JavaScript object, the originally submitted data is always stored under the key lead: vars.lead.email, vars.lead.first_name, etc. There is also some metadata that is available: vars.submission.timestamp, vars.flow.id, etc. And as each recipient step is run, that step’s integration adds more data, namespaced under its own key. For example, the TrustedForm Consent integration adds datapoints such as vars.trustedform.outcome and vars.trustedform.required_scans_found.

Step Outcomes

Each recipient step can result in one of four possible outcomes: “success”, “failure”, “error”, or “skip”. Flow continues to the next step in every case, as only filter steps can halt the lead’s progress through the flow. That said, in most cases a filter step is added after each recipient step to evaluate the outcome and halt processing if, for example, the outcome was anything other than “success”.

The outcome of a recipient step is explicitly set by the integration, as will be detailed later, via an append variable called outcome. Exactly when each value is set may vary from one integration to another, but the conceptual meanings are as follows.

Success

Success indicates, at a minimum, that the transaction with the recipient service completed normally and returned a positive response.

Examples: a new record was successfully created in the recipient CRM; a lookup service found and returned data for the lead.

Failure

Failure indicates that the transaction with the recipient service completed normally but that a negative response of some kind was received.

Examples: the CRM responded that the record is a duplicate; a lookup service found no data for the lead.

One rule of thumb is that transactions that “failed” can’t be “fixed”; that is, they would not have a different outcome if they were retried.

Error

Errors indicate that a processing problem of some kind has occurred, either between LeadConduit and the recipient service, or within that service. In contrast to the rule of thumb for failures, transactions that “errored” can sometimes be corrected, such that a retry with the same lead data would not produce the error.

Examples: the recipient service was unreachable; it returned an HTTP 500; the request timed out.

Skip

In the case where an outbound integration doesn’t have all the data it requires, it will cause that step to be “skipped”. This behavior is implemented by a function called validate(), described in the Integration API section below. In this case, no request is even attempted to the recipient, since it could not possibly result in success.

Note that skip outcomes can also occur if the step has “step criteria” configured via the flow UI. For example, a customer could specify that an expensive data-verification service should not be used for leads provided from their own website (with a rule such as, “send to this recipient only if ‘Source’ is not equal to ‘My Website’”). In a situation like that, the handler never invokes the integration in any way.

3. The Integration API

Module Introduction

A LeadConduit integration is implemented as a Node.js module that conforms to a defined interface for what functionality it exports.

The git repository name should use the prefix leadconduit-integration-, followed by a descriptive name for the integration. For example, leadconduit-integration-trustedform.

If you’re working on a proprietary or open-source ActiveProspect integration, that repo will be in the activeprospect GitHub organization.

The module’s package.json includes typical boilerplate information, which doesn’t vary much from integration to integration: name, description, etc. Brand new integrations should start with a version number of “0.0.0”, and pull requests should not increment this number; it is incremented during the deploy process.

index.js

A given integration module may have more than one actual integration in it, as external data providers may provide more than one service or action endpoint. The SuppressionList module, for example, has three integrations: one to query items, one to add items, and one to delete items. These are all listed in the module’s index.js:

module.exports = {
  outbound: {
    query_item: require('./lib/query_item'),
    add_item: require('./lib/add_item'),
    delete_item: require('./lib/delete_item')
  }
};

“request and response” vs. “handle”

There are two possible ways to build an outbound integration. One is referred to as “request and response style”, the other is “handle style”.

When the service being integrated with is accessed by a single, fairly simple HTTP request, the first, “request and response”, is preferred. With this approach, one function (request()) is used to create an object that defines how the transaction should be made, and another (response()) is given the result of the transaction to parse (see more details below). Meanwhile, the actual HTTP transaction itself is performed by the core LeadConduit application, not the integration module.

Other times, the service being integrated with may be more complex, requiring multiple requests to accomplish a single lead transaction. Or, there may be a 3rd-party library that can be used to perform the transaction. In cases like this the second approach is the way to go: a single handle() function is written to formulate and execute the HTTP request, and then parse the response.

request()

This function takes a single parameter, vars (see below), and can access attributes of it as described in the request.variables() array (see below). It returns a JavaScript object that details how LeadConduit should make the HTTP call, setting attributes such as the url, method, headers, and body as necessary.

This JavaScript object is simply returned from request(), and then LeadConduit uses it to execute the HTTP transaction as described.
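For illustration, here is a sketch of a request() function for a hypothetical “Acme” email-lookup service; the URL, headers, and body shape are invented for this example, not a real API:

```javascript
// Hypothetical request() for an imaginary "Acme" email-lookup service.
// The URL and body fields are illustrative only.
const request = (vars) => {
  return {
    url: 'https://api.acme.example/v1/lookup',
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Accept: 'application/json'
    },
    // .valueOf() stringifies a possibly rich-typed email value
    body: JSON.stringify({ email: vars.lead.email.valueOf() })
  };
};
```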

response()

The complement to request(), this function takes three parameters: vars (see below), req (the request object), and res (the response object). It returns a JavaScript object that LeadConduit will “append” to the lead. This object includes a few standard attributes, but otherwise is defined by the response.variables() array (see below).

The standard attributes of the object returned by response() are outcome, reason, and billable.

outcome

Outcome can be set to one of “success”, “failure”, or “error”.

The difference between “failure” and “error” is usually between a transaction failing (failure) and a system failing (error). For example, if an integration intended to add a record in a CRM database got back a response indicating that a record is a duplicate, that would be a failure. For a lookup service, if the data being looked-up isn’t found, that’s typically considered a “failure” as well. “Error”, on the other hand, is usually something like the remote system being unreachable, returning an HTTP 500, etc.

One rule of thumb is that problems that would probably recur on a retry, such as a duplicate record, are “failures”, while those that might not, such as the service being unreachable, are “errors”. However, the details of when to use each outcome are usually specific to each integration.

There is a fourth possible outcome – “skip” – but that value is never explicitly set by an integration. It is only set by the handler process after evaluating flow rules, or when an integration’s validate() returns a validation error message (see the section on validate(), below).

reason

Reason must always be set if the outcome is “failure” or “error”. This information will be seen in the LeadConduit UI, and may also be returned to the original provider of the lead, so the more human-understandable, the better.

The UI also shows counts of these “reason” messages in reporting views, so making them unique per lead is a bad idea. For example, “user not found” can be shown as happening 200 times yesterday, but if the message is too specific – like “user John Smith not found”, “user Jane Doe not found”, etc. – then that aggregation isn’t possible.

Usually, “error” is when something unexpected happens, so details such as the HTTP status could be included in the reason. Knowing that it was a 404 vs. a 500, for example, is helpful when troubleshooting. We also sometimes capture the entire server response; when it’s truly an error, that response may not be valid according to that service’s API. For example, a 500 might just return an HTML error page.

billable

Billable should always be set for integrations that are resold through LeadConduit. That is, if ActiveProspect is paying the service provider, and in turn billing our customers for that service, this value is required. Other integrations that aren’t resold, such as delivery to a CRM, will omit the billable attribute.

The value assigned is the number of transactions that the LeadConduit customer will ultimately be billed for. This is usually “0” or “1”, and depends on ActiveProspect’s terms with the service provider. For example, if every Acme lookup should be charged for, regardless of whether a record is found and returned, then this would always be “1” (except for an “error” outcome). Alternatively, if only lookups that successfully return data were charged for, then this would only be “1” on success.

Note that the “0” and “1” here are numeric, not boolean. It’s possible that a single transaction may have a billable value greater than “1”. This occurs, for example, in the TowerData integration, which can be used to request up to sixteen data-points in one transaction. Each data-point successfully returned is added to the billable value.
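Putting outcome, reason, and billable together, here is a sketch of a response() for a hypothetical “Acme” lookup service, where every completed lookup is billable but errors are not. The service’s response format is invented for the example:

```javascript
// Hypothetical response() for an imaginary "Acme" lookup service.
// Assumes the service returns JSON such as {"found": true, "score": 42}.
const response = (vars, req, res) => {
  if (res.status !== 200) {
    // something unexpected happened: error outcome, nothing billable
    return { outcome: 'error', reason: `Acme error (${res.status})` };
  }
  const parsed = JSON.parse(res.body);
  if (!parsed.found) {
    // transaction completed normally, but no record: failure, still billable
    return { outcome: 'failure', reason: 'no record found', billable: 1 };
  }
  return { outcome: 'success', score: parsed.score, billable: 1 };
};
```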

request.variables()

This function returns an array of objects that define the variables that can be used by request(). In other words, it describes the input to the integration. This information is also used in the LeadConduit UI, to allow users to set or override these values with mappings.

Each item in the array is a JavaScript object with these attributes:

Example:

{
  name: 'lead.postal_code',
  type: 'postal_code',
  required: true,
  description: 'Postal code to verify existence of'
}

The name, required, and description attributes are used to show the user what data can be configured for this integration (type is not; with some complex mappings, it would be impossible for the UI to know whether the type was being correctly matched).

If the integration needs to use values from the original lead, those are listed with a name prepended with “lead.”. Listing them that way lets the UI ensure those fields are added to the flow when the integration is added.

For example, a ZIP Code integration would list lead.postal_code as its only request variable, and when a user adds that service to a flow, the UI adds postal_code to the flow (if it isn’t already there), and no additional mapping is needed.

Note that the required attribute is used only by the UI. Enforcing that “required” values are present at lead-handling time is solely the job of the validate() function (see below).

Though the type value does not affect the UI, it is used by the lead-handler, which creates an instance of the specified type for each variable before invoking request() (or handle()).

However, a specific “rich” type (see the Key Concepts section on “Field Types”) should only be used when it’s really needed; use the string type wherever possible. This avoids subtle bugs and unexpected behavior caused by the lead-handler’s automatic typecasting. If the integration needs to check .valid, or access other object attributes (e.g., .area or .exchange), then list the variable as type: 'phone'. Otherwise, list it as a string, and LeadConduit will provide it that way. When a rich type is specified, take care to “stringify” the value as necessary with the .valueOf() function.

It’s an unlikely scenario, but also note that if a variable corresponds to a standard system field, such as lead.annual_salary, the “rich” type listed by the integration should match that field’s type or be given as string as described just above. If the standard field is a number, listing it in the request variables as a range will have no effect. Only listing the type as string affects the field’s type within the integration; no other typecasting occurs.

Using rich types also requires extra care in test code; see details about testing in the Author’s Guide section.
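To illustrate the stringification point, here is a sketch using a simplified stand-in for the phone rich type (the real type object carries more attributes, as shown in the Field Types example in Key Concepts):

```javascript
// Simplified stand-in for a 'phone' rich type. With type: 'phone' in
// request.variables(), the handler passes an object like this, so
// attributes such as .area and .valid are available, but the value must
// be explicitly stringified before being sent outbound.
const phone = {
  raw: '(512) 555-1212',
  area: '512',
  exchange: '555',
  line: '1212',
  valid: true,
  valueOf: function () { return this.area + this.exchange + this.line; }
};

const digits = phone.valueOf(); // '5125551212', safe to send outbound
```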

In a “handle style” integration, this function is called requestVariables() (since there is no request definition to add .variables() to), but is otherwise exactly the same.

response.variables()

Similar to request.variables(), this function returns an array of objects that define the variables that can be appended to the lead by the integration. This information is used in the LeadConduit UI, allowing the user to create filters and subsequent mappings with this data.

There are a few standard variables that are always included: outcome, reason, and billable. See the section about response(), above, for more about those.

What fields to include differs from one integration to another. For a CRM delivery, there may be little useful information to append, other than perhaps an ID returned for the new record created there. For lookup services, it may be best to list and append everything returned, or it may make sense to exclude some data that’s deemed not to be of interest to LeadConduit users. Ideally these decisions are provided in a new integration’s requirements, but arriving at the final list is often an iterative process.

To ensure there aren’t appended-data name conflicts, the integration name should prefix each field name. The three SuppressionList integrations provide a good example: the query_item integration lists the variable query_item.outcome, not just outcome. That ensures that the value is unique relative to the outcomes of the other SuppressionList integrations (add_item and delete_item). If each one only listed outcome, then a filter step in a flow after both a query_item and an add_item step would only list a single value, named “SuppressionList Outcome”. There would be no way to differentiate the query outcome from the add outcome at that point.
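As a sketch, the query_item integration’s response.variables() would therefore declare prefixed names along these lines (the found variable and all descriptions here are illustrative, not the module’s actual definitions):

```javascript
// Sketch of response.variables() using integration-prefixed names,
// following the SuppressionList query_item example described above.
const variables = () => [
  { name: 'query_item.outcome', type: 'string', description: 'Outcome of the query (success, failure, or error)' },
  { name: 'query_item.reason', type: 'string', description: 'Reason for failure or error, if any' },
  { name: 'query_item.billable', type: 'number', description: 'Number of billable transactions' },
  { name: 'query_item.found', type: 'boolean', description: 'Whether the value was found in the list' }
];
```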

In a “handle style” integration, this function is called responseVariables() (since there is no response definition to add .variables() to), but is otherwise exactly the same.

handle()

As discussed in the section above (“request and response” vs. “handle”), some integrations can’t be built using the “request and response” style just described. One example is when an integration needs to make multiple requests per lead, such as a login request, a data transmission request, and finally a logout request. The soap integration in the LeadConduit “Custom” module is one open-source example; the send integration in the Email Delivery module is a simpler one, but is visible only to ActiveProspect developers.

Another case for writing a handle() function is when there is an existing library that does some of the work for us.

The handle() function takes two parameters: vars (see below), and the callback function to invoke when it’s complete.

The callback function takes two arguments: an error object, and the JavaScript object to append to the lead (i.e., as defined by responseVariables(), with outcome, reason, etc.).
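Here is a minimal “handle style” sketch for a hypothetical service. The lookupAcme() helper is an invented stand-in for whatever HTTP requests or 3rd-party library calls the real integration would make (it calls back immediately so the shape of handle() is the focus):

```javascript
// Hypothetical helper standing in for the real HTTP or library work.
const lookupAcme = (email, done) => {
  done(null, { found: email.endsWith('.biz') });
};

// handle() formulates and executes the request(s), parses the result,
// and passes the object to append (outcome, reason, etc.) to the callback.
const handle = (vars, callback) => {
  lookupAcme(vars.lead.email.valueOf(), (err, result) => {
    if (err) {
      // unexpected problem: report an error outcome
      return callback(null, { outcome: 'error', reason: err.message });
    }
    callback(null, {
      outcome: result.found ? 'success' : 'failure',
      reason: result.found ? undefined : 'no record found',
      billable: 1
    });
  });
};
```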

vars

The main data structure for lead data is conventionally named vars in these functions (see also: the Key Concepts section “Vars, Appended Data, and ‘The Snowball’”).

Integrations never directly mutate the vars object. Indeed, integrations are given a copy of vars, so any changes made to the object would be lost. The LeadConduit handler manages adding data to the vars object (aka building up “the snowball”), using the data returned by response() and handle() (as described in previous sections). The one exception here is with vars.credential. The handler compares the credential after the integration is finished and, if the credential has changed, saves it.

That leaves the single thing vars is used for in integrations: using the data it contains. This nearly always comes from the lead attribute, which itself contains all the fields that define the lead being handled (email, first_name, etc.). In an integration, those are fully referenced as, for example, vars.lead.email, vars.lead.first_name.

All the lead fields used by the integration should be listed in request.variables(), as described previously. This restriction isn’t enforced by the handler, but it’s essential. In other words, though it would be possible to use vars.lead.comments even if that field weren’t listed as a request variable, that would inevitably cause problems for users.

Similarly, it’s technically possible to use data appended by other integrations, such as vars.query_item.outcome (appended data doesn’t get appended under lead), but that’s also a bad idea. Doing so would require the user to have set up – and use on every lead – the other integration earlier in any flow that this integration is used in. The appropriate approach is to define a request variable that the end-user can map the correct data to, whether it comes from a previous step or not. In this way, the integration is self-contained.

There are some other metadata values available on vars. They’re rarely used by an integration, but here’s an example to illustrate them (with some simple lead data, including email, a “rich” type):

{
  "submission" : {
    "timestamp" : "2016-11-28T21:44:42.699Z"
  },
  "lead" : {
    "id" : "583ca54afd2847153ae89b1b",
    "email" : {
      "normal" : "gina@chavez.biz",
      "raw" : "gina@chavez.biz",
      "user" : "gina",
      "domain" : "chavez.biz",
      "host" : "chavez",
      "tld" : "biz",
      "valid" : true
    },
    "first_name" : "Gina",
    "last_name" : "Chavez"
  },
  "account" : {
    "id" : "53a310fa9d29c9c72100006c",
    "name" : "ActiveProspect, Inc.",
    "sso_id" : "4d9a4c421d011c553e000001"
  },
  "flow" : {
    "id" : "564b6135d3754dcf205eae6f",
    "name" : "Sales Leads"
  },
  "random" : 95,
  "source" : {
    "id" : "53ab1f319d29c9ddf2000045",
    "name" : "AP Site Contact Form"
  },
  "recipient" : {
    "id" : "535e9f8c94149d05b5000002",
    "name" : "TrustedForm"
  }
}

validate()

This function is invoked prior to the request() (or handle()) function; its purpose is to ensure that the minimum necessary data is present to bother calling request() (or handle()) at all. See also: “Step Outcomes, Skip” in Key Concepts.

For example: consider a phone-verification service. If there is no phone data provided, or if what’s provided is not a valid phone number, then there’s no point in spending the cost or processing time required to call that service.

The validate() function takes a single parameter, vars (see above). If the required lead data is present, it returns nothing (technically, undefined). If any required lead data is missing, it returns a string “skip” message. That message will be set as the reason text for this integration’s step, so as discussed previously, it should not include lead-specific data (see the “response()” section, above).

This reason text should be consistent with similar errors in other integrations. For example, the standard validate message for a missing email, used in many integrations, is “email must not be blank”.

Note that the required attribute on request variables, by itself, has no effect on the behavior of the integration (see “request.variables()” above). However, that metadata should match the variables that the validate() function checks for. There is no automatic enforcement that these match; they must be kept in sync by the developer.

Another type of exit is possible from validate(), which should be used when required environment variables (see below) are missing: an Error object should be thrown. This is treated differently because it’s not an issue with the particular lead being handled, but is a misconfiguration of the LeadConduit handler. This thrown error will alert ActiveProspect personnel to the problem immediately, so that it can be fixed. Examples of this can be found in the TowerData and WhitePages integrations.
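As a sketch, here is a validate() combining both kinds of exit, for a hypothetical integration that requires an (illustratively named) ACME_API_KEY environment variable:

```javascript
// Sketch of validate(): return a "skip" reason string for missing lead
// data, but throw for a missing environment variable, since that is a
// handler misconfiguration rather than a problem with this lead.
const validate = (vars) => {
  if (!process.env.ACME_API_KEY) {
    throw new Error('Missing environment variable: ACME_API_KEY');
  }
  if (!vars.lead.email) {
    return 'email must not be blank';
  }
  // all required data present: return nothing (undefined)
};
```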

envVariables

Sometimes there is key integration data that is static, but should not be hardcoded in the integration itself, such as API keys for resold services. These values should be treated like passwords, and therefore aren’t appropriate to be kept in source code, even in a private GitHub repo. Instead, they’re set in system environment variables and accessed via process.env; examples can be seen in integrations such as BriteVerify, Clearbit, and ZipCodes.com.

When an integration requires a value from the process environment, another item should be exported by the integration: envVariables. This is an array of strings, containing the names of any environment variables the integration needs. This is used to ensure that the app isn’t deployed without required environment vars.

When an integration with new environment variables is deployed, these values will have to be configured in the server environment. A member of the LeadConduit development team can set this up for you.

See the section on validate() for the best practices on validating that required environment variables are present.

editable

To enable the integration to be edited directly through the rich UI, the integration module must export an editable flag set to true. If the integration should support rich UI editing, include this flag in the module’s exports.

Example:

module.exports = {
  handle,
  requestVariables,
  responseVariables,
  validate,
  editable: true
};

package metadata

Metadata for integrations includes the name, provider, provider URL, an icon png, etc. It’s used in the LC UI and is also available to other clients (like the integrations catalog).

/docs

Each integration should have a docs directory at the root level, containing at least two Markdown-with-frontmatter files.

The first is index.md, and contains information about the package as a whole. The others have information about each integration, and are named to match them (e.g., outbound.query_item.md; see “index.js”, above).

  1. package information (index.md)
    1. provider - the organization that provides the service (“ActiveProspect”)
    2. name - the name of the package (“SuppressionList”)
    3. link - the URL to learn more about the provider (“https://activeprospect.com/”)
    4. account_access - optional field that describes which accounts are able to use the integration, defaults to all if no value is present. Currently, the only other value in use is paid
    5. following the end of the frontmatter separator, the remainder of the file contains Markdown descriptive text (“Our lightning-fast API allows you to query your lists…”)
  2. integration information (e.g., outbound.query_item.md)
    1. name - the name of the particular service within the package (“Query List”)
    2. link - the URL to learn more about this particular service (“https://activeprospect.com/products/suppressionlist/”)
    3. integration_type - a categorization of what the integration is used for. One of: “delivery”, “enhancement” (for bring-your-own-license services), or “marketplace enhancement” (for resold services)
    4. tag - a tag value to help search and sort across all integrations. Multiple values can be listed, separated by commas (“Email, Phone”).
      • Address
      • CRM
      • Call Center
      • Code
      • Demographic
      • Email
      • Email Marketing
      • Geographic
      • List Management
      • Marketing Acquisition
      • Marketing Automation
      • Phone
      • Pixel
      • TCPA
      • Verification
    5. as with index.md, Markdown text following the frontmatter divider provides longer descriptive text (“Query one or more Lists for a single Value.”)
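Assembled, an index.md using the example values above might look like this sketch:

```markdown
---
provider: ActiveProspect
name: SuppressionList
link: https://activeprospect.com/
---

Our lightning-fast API allows you to query your lists…
```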

icon.png

The icon for an integration should be provided within the /lib/ui/public/images directory, with the filename icon.png.

4. Integration User Interfaces

While LeadConduit integrations eliminate many of the day-to-day headaches associated with lead flows, figuring out 3rd-party field mappings can still be complicated. To reduce this complexity, many integrations also include their own custom UI, sometimes called a Rich UI, or RUI.

This UI is essentially a simple wizard modal that walks users through setting up the details of an integration. They are implemented as iframes, using Vue.js, webpack, and sometimes an expressJS backend.

Rich UIs interact with LeadConduit through the integration-ui package. The RUI takes user input, uses it to build objects representing flow steps, and then passes those objects back to the parent LeadConduit client UI.

To simplify and standardize development, there is a small but growing UI component library for use in RUIs. Use components from there when possible and consider expanding it when you can.

Note that unlike other integration code, most of the RUI code is run in the browser. This means that, when using newer ECMAScript features, you must ensure that they have a reasonable amount of browser support.

File Structure

UI code lives in the lib/ui directory of an integration, the basic structure of which is shown here. As discussed in the Development Guide below, you will often start with a copy of this structure, rather than creating it manually.

├── index.js
├── dist
├── api
│   ├── auth.js
│   └── index.js
└── public
    ├── app
    │   ├── auth
    │   │   ├── Auth.vue
    │   ├── config
    │   │   ├── Config.vue
    │   │   ├── Page1.vue
    │   │   └── Page2.vue
    │   ├── index.js
    │   ├── store.js
    │   └── router.js
    ├── images
    │   └── icon.png
    └── index.html

The first item listed is index.js. This is simple ExpressJS boilerplate that serves static assets and api endpoints. The second, /dist, is generated by webpack and should not be committed (i.e., it should be listed in the integration’s .gitignore).

api

Many integration UIs don’t need an api directory. If this is your first time reading through this guide, you can probably skip this section, as it’s a somewhat advanced topic.

For security reasons, the RUI iframes cannot make HTTP requests to other services or have access to LeadConduit’s environment variables. To get around this limitation, an integration can provide an internal Express server, which is mounted at runtime as part of the LeadConduit API server process. Instead of making outbound HTTP requests directly, an integration UI will make requests to these internal API endpoints, which perform whatever task is needed.

An example is the SuppressionList RUI, which presents the end user with a dropdown selector of all the lists available in their SuppressionList account. Retrieving that list requires an API call from LeadConduit to SuppressionList, and this is handled by that RUI’s API code.

The contents of this directory are the least uniform across integrations, as each RUI will have different outside calls it needs to make.

When developing a RUI that requires an api, there are a few requirements that must be followed for the integration to successfully call its own API:

  1. Requests to the integration API from the RUI must not include authorization headers. The LeadConduit API expects requests from integrations to use session authorization via the cookie header; including other authorization headers will cause the LeadConduit API to use an incorrect authorization scheme for the request, which will most likely result in an authentication failure. A common workaround for this is to pass the credential as a query param which can be used by the integration API as needed to authenticate with the third-party API.
  2. Request paths must not start with a leading /. The leading slash will be interpreted as calling the LeadConduit API, rather than the RUI integration API; omitting it allows LeadConduit to route the request to the integration correctly. This behavior is due to the fact that LeadConduit mounts the integration Express router at a path that is dynamic for every integration; the API calls defined in the RUI are therefore not going to absolute paths defined by the integration’s router, but rather to a nested route defined by LeadConduit.
    • ✅ Good: axios.get('lists')
    • ❌ Bad: axios.get('/lists')
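As a concrete illustration of both requirements, here is a minimal sketch of an integration API endpoint handler. It is written as a plain function so the pattern is easy to see (in a real RUI it would be attached to the Express router under lib/ui/api); the endpoint name, query param, and response shape are all invented for this example:

```javascript
// Hypothetical handler for GET 'lists', called from the RUI as axios.get('lists').
// Per requirement 1, the third-party credential arrives as a query param,
// not as an authorization header.
const listsHandler = (req, res) => {
  const apiKey = req.query.api_key;
  if (!apiKey) {
    return res.status(400).json({ error: 'missing api_key' });
  }
  // a real handler would use apiKey here to call the third-party API
  return res.json({ lists: ['list_a', 'list_b'] });
};

module.exports = { listsHandler };
```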

public/images

Contains a single image file, icon.png. This image is not part of the Rich UI itself, but is displayed in the LeadConduit UI and the public integrations catalog; see “Package Metadata” elsewhere in this guide.

public/index.html

This is the main HTML page loaded into the LeadConduit client’s iframe when an integration UI is launched. It is boilerplate that will probably never need to be changed.

public/app

Subdirectories in the public/app directory represent different views or routes. By convention, RUIs have two routes: config and auth.

public/app/config

The config route is the most common in Rich UIs. The screens in this route accept user input, and the controllers use that input to build filter steps, which are added to the flow after the integration (recipient) step.

On final exit, the config controller will call ui.create(), passing it an object that contains a flow object to merge with the flow being edited. This object can include the following arrays:

  1. sources
  2. fields - can be a simple array of field-id strings (e.g., ['email', 'phone_1', ...]). For outbound integrations only, this can instead be an array of objects, with the attributes below. Providing this additional detail triggers the client UI to render a special field-mapping dialog.
    1. name (string)
    2. type (see Key Concepts section on “Field Types”)
    3. required (boolean)
    4. label (string; not yet in use)
  3. steps
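For illustration, one plausible shape for that object is sketched below. The field entries use the object form described in item 2, and all names and values are invented:

```javascript
// Hypothetical object a config controller might pass to ui.create() on final exit.
const flowUpdate = {
  flow: {
    sources: [],
    fields: [
      // outbound integrations may provide objects (instead of plain id strings)
      // to trigger the client UI's field-mapping dialog:
      { name: 'email', type: 'email', required: true },
      { name: 'phone_1', type: 'phone', required: false }
    ],
    steps: [
      // step objects (e.g., filters) built from the user's input
    ]
  }
};
```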

public/app/auth

The auth route is used to allow users to create and store 3rd-party credentials in LeadConduit. These credentials can then be easily reused in other steps of the same type, without the user having to authenticate each time.

Similar to the config route above, the auth template collects data from the user (e.g., an API key) and the auth controller provides a credential object to ui.create(). This creates the credential in LeadConduit and typically proceeds to the next config route.
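A sketch of what that hand-off might look like is below; the credential attribute names here are invented for illustration, so check an existing RUI for the real shape:

```javascript
// Hypothetical final step of an auth controller: wrap the user's API key in a
// credential object for ui.create(). Attribute names are illustrative only.
const userInput = { apiKey: 'abc123' }; // collected by the auth template

const payload = {
  credential: {
    type: 'token',
    token: userInput.apiKey
  }
};

// ui.create(payload); // would create the credential, then proceed to config
```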

5. Development Guide

Getting Started

There are two easy ways to get started when building a brand-new integration.

1) Use the integration template tool, which will scaffold most of the necessary boilerplate.

2) Alternatively, you can find an existing integration that’s similar to yours and use it as a template. The reference integrations are good candidates for templates.

Development Environment

LeadConduit integrations are Node.js modules. To work on them, your development environment will need to include Node.js (LeadConduit runs on Node 14 as of this writing).

Internal ActiveProspect developers also need access to our private accounts at Github.com and npmjs.com; see the Administration Guide.

Style Guide

As with any style guide, consider these conventions as rules of thumb. The consistency that comes from following these will aid in troubleshooting and maintenance across the growing number of integrations on the LeadConduit platform.

The following are not in priority order, but are numbered for reference.

general guidelines

  1. Keep code clear, readable, and concise
  2. Names of functions, variables, etc. are camelCased (e.g., parseCreditRate())
  3. Names of lead and mapped parameters are snake_cased (e.g., vars.credit_rate)
  4. Use local variables to reduce repetition of long data-structure paths (e.g., custLoan = vars.lead.customer.loan.information.data)
  5. Prefer string interpolation (${last_name}, ${first_name}) over JavaScript-style concatenation (last_name + ", " + first_name)
  6. Handle simple logic in the template (e.g., to ensure empty string: ${vars.lead.postal_code || ''})
  7. In tests, use nested describe() statements to logically organize test cases
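A few of these guidelines in practice, using a made-up vars object:

```javascript
// Illustrative only: a minimal vars object standing in for a real lead.
const vars = {
  lead: {
    first_name: 'Ada',
    last_name: 'Lovelace',
    postal_code: null,
    customer: { loan: { information: { data: { amount: 250000 } } } }
  }
};

const custLoan = vars.lead.customer.loan.information.data;          // rule 4: local variable for a long path
const fullName = `${vars.lead.last_name}, ${vars.lead.first_name}`; // rule 5: string interpolation
const postalCode = `${vars.lead.postal_code || ''}`;                // rule 6: simple logic in the template
```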

module review checklist

  1. a freshly cloned repo should be able to have npm install and npm test run successfully with no errors
  2. package.json should have correct name, description, etc.
  3. package.json should have no unnecessary packages as dependencies or devDependencies
  4. Readme.md should have correct GitHub Actions badge code
  5. index.js should list export integration names under the outbound (or inbound) namespace
  6. CHANGELOG.md should exist and be updated for each change. Reference Github issue numbers if appropriate, and use the planned version number, even though it will not match package.json until the “Publish to npm” action is run
  7. integration code should have:
    • no unnecessary requires
    • no API keys, etc. hardcoded anywhere
    • no stray, unused “helper” functions
    • no unnecessary exports
    • the correct Accept header on outbound integration requests
    • no custom request variables, only standard fields
    • correct descriptions, types, and required flag on all request & response variables
    • descriptions on request and response variables should include clear, end-user appropriate details, including default values
  8. integration tests (see below) should:
    • have a validation test for required request and environment variables
    • have a validation test for when nothing is returned (i.e., no validation errors)
    • use the leadconduit-integration.parse() utility to create typed request variables

Test Coverage

Thorough test coverage is an important aspect of integration development. Testing the edge-case conditions that are common when working with other systems, and being confident that future changes won’t cause regressions, are part of what makes integration modules superior to customers simply using the general-purpose (aka “custom”) integrations.

Refer to test code for existing modules, located in the test subdirectory, to help remind yourself of the kinds of cases to account for.

Tests can be run locally by running npm test at the command line.

Our continuous-integration (CI) process uses GitHub Actions. Within an integration repo, this is controlled by the workflow files under .github/workflows.

devDependencies

To simplify management of development dependencies across many integration modules, we use a module that wraps our most commonly-used ones: integration-dev-dependencies. This should be included as a devDependency of your integration, and provides common tools such as mocha, webpack, and eslint (and a standard ESLint configuration).

If you’re adding integration-dev-dependencies to an older existing module, there is a conversion script that can be run to apply its common configuration automatically. After installing it (npm install --save-dev @activeprospect/integration-dev-dependencies), run npx convert. Then review the changes made, and commit them as appropriate.

Linting

We strive to keep integrations in compliance with our standard ESLint configuration (see devDependencies, above). To run the linter in a module that’s been converted to use integration-dev-dependencies, run npm run lint. Your IDE may also support applying our lint configuration directly.

If there are any errors, you can run npm run fixlint to let ESLint automatically fix all the issues it safely can (spaces, brace style, etc.). Other errors may have to be fixed manually, or excluded with configuration comments.

Development Cycle

When working on a brand new module, internal ActiveProspect developers will create a repo in the ActiveProspect Github organization. If you’re an external developer, you can use whatever git-based source control you like. In either case, start by creating a new repo with just one file on the master branch: a Readme.md with a single empty line. Then add your new code in another branch, so that the merge PR will show it all as new code. That way, it’s easy to comment on and discuss during the PR review process.

  1. Create a feature (non-master) branch for your changes. If you’re not in the ActiveProspect GitHub organization, fork the repo first as necessary.
  2. Make as many commits as you want while you do your work
  3. Don’t increment the module’s version number; that will be done later (new integrations start at “0.0.0”)
  4. Push your branch to GitHub when you’re ready to have it reviewed
  5. GitHub Actions should be set up to run this module’s tests (see “Test Coverage” above), and all tests should pass before you create a PR
  6. Create the PR and request code review from another integration developer. If you’re not sure who is able to or has time to review it, ask
  7. Make changes as needed per PR feedback, iterating until the PR is approved
  8. Squash your commits as needed, down to semantically useful chunks of work. That may be a single commit, or it may be multiple, per your judgment (see this blog post for more information)
  9. Merge your PR (if you’re in the ActiveProspect GitHub organization)
  10. The next step is to cut a new release of the module, as covered in “Cutting a Release” in the Administration Guide chapter

Following are some common things to know, keep in mind, or have handy for future reference.

Useful references

Reference integrations

There are designated reference implementations for some broad types of things that integrations do. These are the best first places to look to see how these things are done (and if they’re out of date or not optimal, that’s a bug that should be fixed ASAP). These can be seen using this GitHub search, or via this index.

Send the right types in tests

If you create a vars object in your tests, the values will be JSON strings, integers, etc., and not the “rich” LeadConduit types that the integration would get from the handler. Use the type-parsing utility found in the leadconduit-integration module to automatically create those rich types as needed, as defined by the integration’s request variables array.

The usage is usually a line like this at the top of your test file:

const parser = require('leadconduit-integration').test.types.parser(outbound.requestVariables());

That creates a parser function based on your integration’s requestVariables() (or request.variables(), as the case may be). That function takes your test JSON object and replaces any attributes with their rich-type versions:

const vars = parser({
  lead: {
    first_name: 'Alexander',
    last_name: 'Hamilton',
    postal_code: '00123'
  }
});

That yields this object, with a rich postal_code value:

{ lead:
   { first_name: 'Alexander',
     last_name: 'Hamilton',
     postal_code:
      { [String: '00123']
        raw: '00123',
        country_code: 'US',
        code: '00123',
        zip: '00123',
        four: null,
        valid: true } } }

Look for leadconduit-integration functions

The leadconduit-integration module is where common functions, like the type-parser just above, should go. Look there for utility code.

By the same token, if you’ve written or found functions that should be common, add them there.

“Mask” sensitive data

The full details of all transactions are visible in the LeadConduit UI, but we mask, or obscure, data that should not be shown. This happens automatically for some field types, such as ssn (for Social Security Numbers) and credential. However, sometimes it’s necessary for an integration to mask data manually.

The key to understanding how this works is knowing that the integration’s request() or handle() function is actually executed twice by the handler process: once for real, and a second time in a kind of emulation mode to capture the details for the event record. So, to mask data from being captured in that second run, the integration simply writes over the necessary parts of vars after the request() function has used them.

For example:

  const request = (vars) => {
    vars.apiKey = vars.apiKey || process.env.THIRDPARTY_COM_API_KEY;

    const req = {
      method: 'GET',
      url: `https://api.thirdparty.com/lookup?key=${vars.apiKey}`
    };

    vars.apiKey = '****************';

    return req;
  };

The first time this is run, vars.apiKey is not set, so it will be assigned the value from the environment variable. After that value is used to formulate the URL, it is replaced with a string of asterisks.

The second time it’s run, vars.apiKey will still be that string of asterisks, and so will not be assigned the real API key value. The URL assignment will include those asterisks, and that value will be captured in the permanent record of the event. (Using that wrong value won’t cause it to fail, because that second transaction isn’t really made.)

To test this, simply invoke request() twice, as in this example:

before(() => {
  process.env.THIRDPARTY_COM_API_KEY = '1234'
});

it('should mask the API key', () => {
  const vars = {};
  let req = integration.request(vars);
  assert.equal(req.url, 'https://api.thirdparty.com/lookup?key=1234');

  req = integration.request(vars);
  assert.equal(req.url, 'https://api.thirdparty.com/lookup?key=****************');
});

6. Administration Guide

Some integration development and administration tasks can only be performed by ActiveProspect, Inc. personnel. These include publishing to NPM, adding new integrations to the platform, and deployments.

Note: although this guide is currently published publicly, if you’re outside the ActiveProspect organization, the information in this section isn’t of use to you, and can be ignored.

Version Numbers

Module version numbers follow semver guidelines, using “major.minor.patch” form. Our guidelines are similar to general usage, but choosing what level to increment on a release can be subjective. Make your best call, or ask a teammate for a second opinion if you’re not sure.

To publish an updated version, use the GitHub Action “Publish to npm registry”. This action should be defined by a workflow file in the .github directory, and also requires that the module’s .npmrc specify an NPM authToken (see the integration template repo).

Publishing a New or Updated Integration

If you’re working on a new integration, or one that doesn’t satisfy the semver expression in the package.json of the leadconduit-integrations module, then that file must be updated. For a new integration, add it to the list, in alphabetical order, with the semver pattern ^1.0.0. For an updated integration, modify the semver to match as appropriate.

Either of these changes will require that the integrations module itself be updated and released as well. Make the above changes on a branch, create a PR, have it reviewed, etc., through to publishing the new version to NPM (new integrations are usually treated as an increment to the “minor” version of leadconduit-integrations).
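For example, a new integration’s entry in the leadconduit-integrations package.json might look like this (the package name here is illustrative):

```json
{
  "dependencies": {
    "@activeprospect/leadconduit-whatever": "^1.0.0"
  }
}
```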

Package-lock

The main LeadConduit API application uses a package lockfile (package-lock.json), so deployment to staging or production requires that it be updated (see below for a streamlined way to deploy updates to the development environment):

  1. ensure your local leadconduit-api repo is up to date, and create a branch off of master
  2. ensure your local node_modules matches the current lockfile. A foolproof way to do this is to delete it (rm -rf node_modules) and reinstall (npm ci)
  3. update the lockfile for just your changes. Because integrations are not direct dependencies of the top-level app, you may have to use the --depth parameter

Example 1: Update package-lock.json for a minor- or patch-level update:

npm update --depth 1 @activeprospect/leadconduit-whatever

Example 2: Update package-lock.json for a major update or new integration:

npm update @activeprospect/leadconduit-integrations

Your npm update should show output that includes the new target version of the package you’re updating. Verify that the changes to package-lock.json look correct, and open a PR for them.

We want changes to the lockfile to be as specific and intentional as possible, which is why we don’t simply run npm update and blindly update every dependency in the app. However, when updating the higher-level leadconduit-integrations package, you’ll likely see a bunch of updates to low-level dependencies, such as @babel and aws libraries. Those changes are hard to avoid, and acceptable.

Verdaccio & the Development Environment

Deploys to the development environment do not use the package-lock (that deploy process deletes package-lock.json before running npm install). This means that if you have published an updated version of a package to NPM that will match the semver in leadconduit-integrations, a simple deploy to development will automatically pick it up.

In fact, npm install commands run for development deployments are proxied through our Verdaccio private Node registry. This allows us to publish test versions of packages internally, where they can never be accidentally picked up by a production deploy. To start, you’ll need a login to our server at verdaccio.leadconduit-development.com. Then you can access it from the command-line using the --registry flag.

For example, to publish the module in your current working directory, just run:

npm publish --registry https://verdaccio.leadconduit-development.com

To clean up the registry when you no longer need that test version, use the --force flag and specify the package name:

npm unpublish --force --registry https://verdaccio.leadconduit-development.com/ @activeprospect/leadconduit-whatever

Deploying to Staging

After a new or updated integration is accepted by Product and/or QA in the development environment, we smoke-test the deploy in staging. This first requires the package-lock update described above. Then you create a LeadConduit API release using the “Create release” GitHub Action. After that succeeds and passes CI, you can use the “Staging” GitHub Action workflow to deploy it. Specify the new version string in the “leadconduit-api branch or tag to deploy” field (note that these begin with a “v”: v13.4.2).

Deploying to Production

If the staging deploy completed without error and the integration you’re working on shows an updated version on the status page, you can repeat the process with the “Production” workflow.

Additional Setup for New Integrations

Entity Added or Updated

“Enhancement” integrations require corresponding records in the entities database collection. See the “Managing Entity Records” section below.

Environment Variables

If a module requires a new or updated environment variable (i.e., has exported the envVariables array), those are provided via LeadConduit’s Ansible repo. These are set separately for each environment; for staging, the two key files are in /inventories/lc_stage/group_vars/leadconduit. The vars file in that directory lists each environment variable name and the name of the Ansible vault value it uses; the encrypted vault file contains the actual values.
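A sketch of how those two files relate is below. The format is illustrative only (the variable name is borrowed from the ZipCodes.com example elsewhere in this guide); check the Ansible repo for the real layout:

```yaml
# In the plaintext "vars" file: map the environment variable to a vault value.
ZIPCODES_COM_API_KEY: "{{ vault_zipcodes_com_api_key }}"

# In the encrypted "vault" file (edited via ansible-vault): the actual secret.
vault_zipcodes_com_api_key: "the-real-api-key"
```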

You can use the command-line tool ansible-vault to edit the vault file; the password is in the 1Password entry “LeadConduit/Batch Ansible Vaults”.

Scoping an Integration to a Specific Account

For integrations that should be exclusive to specific accounts, the MongoDB packages collection serves as an allowlist for packages and modules. Below is an example of a package and a module on the allowlist; see the LeadConduit API Readme for more details.

{
  "_id": "leadconduit-epsilon",
  "account_ids": ["53a310f00000000000001234"]
}

Specific integrations can also be allowlisted, and other integrations in the same package will remain visible to other accounts.

{
  "_id": "leadconduit-briteverify.outbound.email",
  "account_ids": ["53a310f00000000000001234"]
}

SSO Billing Information

For integrations that will be resold by ActiveProspect – that is, integrations that set the billable attribute – the pricing details for that service must be configured in SSO before the integration is used by customers. There are two places in SSO that will need to be updated: the SSO config file and the billing-pricing products file. The billing-pricing module is also used directly by LeadConduit, so updates there will require an update to LeadConduit’s package-lock.json to include them.

Managing Entity Records

Enhancement integrations (not recipients) have corresponding records in the entities database collection. Here’s how to query, update, and create those.

Some examples presume installation of jq. You’ll also need your LeadConduit superuser API key. Use the browser dev-tools to find this (in Chrome, use the “Network” view, while logged in to LeadConduit in the appropriate environment; the user response JSON will include api_key). This is shown below as the environment variable AP_API; if you run export AP_API=your_super_user_api_key to set that, you’ll be able to copy & paste these examples.

Note that changes to production are reflected in snapshots that happen every hour, on the hour. Those snapshots are then used to re-seed staging on full deploy.

Get an Existing Entity Record by Name

The following jq query uses the regular expression "^briteverify" to match any name beginning with that string (the "i" option ignores case). This is useful because the names of entities aren’t always precise. After you run this, you may need to weed out which ones are accounts, other endpoints, etc.

$ curl -X GET -uX:$AP_API -H 'Accept: application/json' https://next.leadconduit.com/entities | jq 'map(select(.name | match("^briteverify"; "i")))'
[
  {
    "id": "535e9f8c9414932925b00001",
    "name": "BriteVerify",
    "source": null,
    "recipient": "enhancement",
    "logo_url": "https://s3.amazonaws.com/integration-logos/briteverify.png",
    "module_ids": [
      "leadconduit-briteverify.outbound.email",
      "leadconduit-briteverify.outbound.name_verify"
    ],
    "website": "http://www.briteverify.com",
    "standard": true
  }
]

Get an Existing Entity Record by ID

If you already have the id, as you might from a previous query like the one above, you can also query that directly.

curl -X GET -uX:$AP_API -H 'Accept: application/json' https://next.leadconduit.com/entities/535e9f8c9414932925b00001 | jq '.'
{
  "id": "535e9f8c9414932925b00001",
  "name": "BriteVerify",
  "source": null,
  "recipient": "enhancement",
  "logo_url": "https://s3.amazonaws.com/integration-logos/briteverify.png",
  "module_ids": [
    "leadconduit-briteverify.outbound.email",
    "leadconduit-briteverify.outbound.name_verify"
  ],
  "website": "http://www.briteverify.com",
  "standard": true
}

Update an Existing Entity Record

You could do this, for example, if you need to add a new endpoint to an existing integration.

First, GET the existing data, as shown above. You could redirect it to a file by adding this to the end of the command: > entity.json. Then that file can be edited as needed (e.g., to add another entry to the module_ids array). Using the "id" value from that GET, you can now PUT to update the record:

curl -X PUT -uX:$AP_API -H 'Accept: application/json' -H 'Content-Type: application/json' -d@entity.json https://next.leadconduit.com/entities/ID_VALUE_FROM_JSON | jq '.'

On success, the API will return the updated JSON for the entity record.

To Create a New Entity

Similar to the “update” above, start with a JSON file. Don’t include an "id"; that will be assigned by the database on insert. Here’s a template:

{
  "name": "Panopticon",
  "source": null,
  "recipient": "enhancement",
  "logo_url": "https://s3.amazonaws.com/integration-logos/panopticon.png",
  "module_ids": [
    "leadconduit-panopticon.outbound.everything"
  ],
  "standard": true
}

Then POST the file to create the record:

curl -X POST -uX:$AP_API -H 'Accept: application/json' -H 'Content-Type: application/json' -d@entity.json https://next.leadconduit.com/entities ; echo

Reviewing 3rd-party Code

In addition to the usual style and logic guidelines (see Development Guide), there are some things to watch for when reviewing code that wasn’t written in-house, or was written by developers who are unfamiliar with integrations development.

Over time, we realize or discover new, better ways to do things in integration modules. But because there are so many modules, it’s not usually worthwhile to revisit all of them at once to make those changes. Instead, we try to make these changes when we’re fixing a bug or adding a feature to a module. The items below are updates that should be made whenever feasible.

Remove type and name from integration modules

The type and name of an integration were sometimes included in the exports of those modules (e.g., type: 'outbound' or name: 'Name Validate'). This information is now provided in the .md metadata files in docs/, and should be removed from the integration module.

Throw errors for missing environment variables

If a required environment variable is missing, that’s a problem that can never be fixed by the user, it can only be fixed by an ActiveProspect developer. That’s why the check for such environment variables in the validate() function should throw an error, not just return an error message that causes a “skip” outcome.

An example, from the White Pages integration:

const validate = (vars) => {
  if (!process.env.WHITEPAGES_LEAD_VERIFY_KEY) {
    throw new Error('Missing credentials, contact ActiveProspect support');
  }
  // ...other validation of vars...
};

List environment variables in envVariables()

When an integration needs environment variables, they should be declared in an array exported by the integration, as in this example from the ZipCodes.com integration:

module.exports = {
  envVariables: ['ZIPCODES_COM_API_KEY'],
  ...
};

Add CHANGELOG.md

Originally changes were not tracked in a changelog, but now they should be. If there is no CHANGELOG.md file in the root directory, add one with content like this:

# Change Log
All notable changes to this project will be documented in this file.
This project adheres to [Semantic Versioning](http://semver.org/).

## [0.0.7] - 2016-08-05
### Fixed
- Add this changelog

## [0.0.1] - 2015-06-16
### Added
- Initial version

Finding the date of the first published version is interesting, but note that it’s not necessary to dig up the history of all past versions.

Include node as a Travis build version

Originally the versions of Node.js specified in each module’s .travis.yml included the current known major versions (4, 5, etc.). Travis also provides a way to automatically include the latest version, with the keyword node.

In other words, a module’s .travis.yml should look like this (the numbers may vary over time, but node should always be included):

language: node_js
node_js:
  - 8
  - node
sudo: false

8. Appendix B - Running Locally

Note: although this guide is currently published publicly, if you’re outside the ActiveProspect organization, the information in this section isn’t of use to you, and can be ignored.

Running LeadConduit locally

Before integrations can be tested locally, your machine must be configured to run LeadConduit (and by extension, SSO) locally.

You can find instructions on installing and running LeadConduit locally here.

Before npm-publishing a new or updated integration, you can hack it locally using npm link:

  1. in the new integration directory (make sure package.json has the right name info), run npm link

  2. in leadconduit-integrations, run npm link @activeprospect/leadconduit-whatever. (Note that if you are testing a new integration, you’ll need to add it to the package.json file of leadconduit-integrations.)

  3. in the local leadconduit-api directory, run npm link @activeprospect/leadconduit-integrations

Your local LeadConduit instance should now be using the linked integration code.

Tips & Tricks

  1. To verify what’s being served by the LC API:

    a. List all modules/integration names: curl -X GET -H 'Accept: application/json' http://leadconduit.localhost/packages | jq '.[] | .id'

    b. Full detail for one integration (including, for example, the boolean package.ui that indicates a “rich UI” is included in the module): curl -X GET -H 'Accept: application/json' http://leadconduit.localhost/packages/leadconduit-suppressionlist | jq '.'

  2. To search modules installed on the filesystem, run this from your LC root directory: find . -name "leadconduit-suppressionlist" -print -follow -exec grep version {}/package.json \;

  3. To verify the exact version loaded by LC:

    a. In the app root directory, run node

    b. x = require('@activeprospect/leadconduit-integrations') (this takes a moment)

    c. x.packages['leadconduit-suppressionlist']

9. Appendix C - Deleting an Integration

It is sometimes necessary to remove, or end-of-life (EOL), an integration. This process is composed of several relatively simple steps:

  1. Remove the integration from all flows that reference it. Depending on the integration, you may find references in the Sources tab (for inbound modules) or the Steps tab (for outbound integrations). Team members with server access can query the database and identify these flows for you.

  2. If the integration has an Enhancement module, drop the corresponding entity record. See the ‘Managing Entity Records’ section for more info.

  3. Remove the corresponding NPM module from leadconduit-integrations. (That module will then need to be updated and deployed as usual.)

  4. Deprecate the integration modules in SSO, by adding deprecated: true to its entries in products.yml (search that file for “datamyx” or “versium” for examples).

  5. Archive the GitHub repository.