How to install the Send Event to Endpoint integration through Azion Marketplace

Azion Send Event to Endpoint is a serverless integration available at Azion Marketplace. This integration enables you to stream request data to an HTTP endpoint, capturing the request data and transmitting it to a user-defined endpoint via the JavaScript Fetch API.

The integration also permits you to specify what kind of data you wish to capture by editing a JSON file.

After sending the collected data, the integration allows the request to proceed through the Rules Engine.
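Conceptually, the flow described above can be pictured as a function that collects the selected request fields, posts them with fetch, and then lets the request continue. The sketch below is illustrative only: `buildEvent`, `handleRequest`, and the request shape are hypothetical names, not the actual integration code, and it simplifies the filtering rules (it only handles explicit field lists, not the null catch-all case).

```javascript
// Illustrative sketch only: collect the configured request fields and
// forward them to a user-defined endpoint (hypothetical names throughout).
function buildEvent(request, config) {
  const event = {};
  // Copy only the metadata fields listed in the Args (e.g. "remote_addr").
  for (const field of config.metadata ?? []) {
    event[field] = request.metadata[field];
  }
  // Copy only the request headers listed in the Args.
  event.headers = {};
  for (const name of config.headers ?? []) {
    event.headers[name] = request.headers[name];
  }
  return event;
}

// Hypothetical usage: send the event, then let the request proceed.
async function handleRequest(request, config) {
  const event = buildEvent(request, config);
  await fetch(config.http_connection_args.endpoint, {
    method: "POST",
    headers: {
      ...config.http_connection_args.headers,
      "Content-Type": "application/json", // always added by the integration
    },
    body: JSON.stringify(event),
  });
  return event;
}
```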


Getting the integration

To use the Send Event to Endpoint integration, you have to:

  1. Access Azion Console > Marketplace.
  2. On the Marketplace homepage, select the Send Event to Endpoint card.
  3. Once the integration’s page opens, click the Install button.

You’ll see a message indicating that your integration was successfully installed.


Configuring the integration

Setting up an edge firewall

To create the edge firewall that will run the integration, follow these steps:

  1. Under the Products menu, select Edge Firewall in the SECURE section.
  2. Click the + Edge Firewall button.
  3. Give an easy-to-remember name to your edge firewall.
  4. Turn the Edge Functions switch on.
  5. Click the Save button.

Done. Now you have instantiated the edge firewall for your function.

Setting up the integration

To instantiate the Send Event to Endpoint integration, while still on the Edge Firewall page:

  1. Select the Functions Instances tab.
  2. Click the + Function Instance button.
  3. Give an easy-to-remember name to your instance.
  4. On the dropdown menu, select the Send Event to Endpoint function.
  • This action will load the Arguments tab.

The Arguments tab will load a JSON file that looks like this:

{
  "metadata": ["remote_addr"],
  "headers": ["x-hello"],
  "body": ["message", "user.id"],
  "http_connection_args": {
    "endpoint": "http://example_api:3000/test",
    "headers": {
      "Authorization": "FakeAuth",
      "X-Provider": "Azion Cells"
    }
  }
}

Where:

| Field | Required | Data Type | Notes |
|---|---|---|---|
| metadata | No | Null or Array | Defines which metadata fields will be streamed. When null (or not set), all metadata fields will be streamed. If you don't want to stream any metadata, set an empty array [ ] as the value of this field. |
| headers | No | Null or Array | Defines which request headers will be streamed. When null (or not set), all request headers will be streamed. If you don't want to stream any header, set an empty array [ ] as the value of this field. |
| body | No | Null or Array | Defines which request body fields will be streamed. When null (or not set), all request body fields will be streamed. If you don't want to stream any body field, set an empty array [ ] as the value of this field. To filter multi-level fields, use dot notation. For example, if you use the string 'user.name' here, the function will look for the field 'name' within the object 'user' in the request body. |
| http_connection_args | Only when not using s3_connection_args | Object | Defines the data that will be used to stream the request data over HTTP. The URL to which the data will be posted is defined by the endpoint. The headers specify which headers will be included in the fetch request. An additional 'Content-Type: application/json' header is always added. |
| s3_connection_args | No | Object | Defines the arguments used to connect to the S3 bucket. |
| s3_connection_args.full_host | Only when using s3_connection_args | String | Defines the full host of the S3 bucket. |
| s3_connection_args.region | Only when using s3_connection_args | String | Defines the region of the S3 bucket. |
| s3_connection_args.access_key | Only when using s3_connection_args | String | Defines the access key to be used in the connection to the S3 bucket. |
| s3_connection_args.secret_key | Only when using s3_connection_args | String | Defines the secret key to be used in the connection to the S3 bucket. |
| s3_connection_args.file_path | No | String | Defines the path where the file created by the function must be stored. Default value: / |
| s3_connection_args.use_date_prefix | No | Boolean | When enabled, includes a sub-folder with the current date (in YYYY-MM-DD format) in the file path. Default value: true |
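For instance, an Args file that filters a few fields and also writes each event to an S3 bucket could look like the fragment below. The bucket host, region, and credentials are placeholders you'd replace with your own values:

```json
{
  "metadata": ["remote_addr", "request_id"],
  "headers": [],
  "body": ["user.id"],
  "http_connection_args": {
    "endpoint": "http://example_api:3000/test",
    "headers": { "Authorization": "FakeAuth" }
  },
  "s3_connection_args": {
    "full_host": "my-bucket.s3.amazonaws.com",
    "region": "us-east-1",
    "access_key": "YOUR_ACCESS_KEY",
    "secret_key": "YOUR_SECRET_KEY",
    "file_path": "/my-data/",
    "use_date_prefix": true
  }
}
```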

The integration streams the collected data as a JSON file that looks like this:

{
  "body": {
    "field_a": <data>,
    ...
  },
  "geoip_asn": <data>,
  "geoip_city": <data>,
  "geoip_city_continent_code": <data>,
  "geoip_city_country_code": <data>,
  "geoip_city_country_name": <data>,
  "geoip_continent_code": <data>,
  "geoip_country_code": <data>,
  "geoip_country_name": <data>,
  "geoip_region": <data>,
  "geoip_region_name": <data>,
  "headers": {
    "x-header-a": <data>,
    ...
  },
  "remote_addr": <data>,
  "remote_port": <data>,
  "remote_user": <data>,
  "request_id": <data>,
  "request_url": <data>,
  "server_protocol": <data>,
  "ssl_cipher": <data>,
  "ssl_protocol": <data>
}

Notice how the request_id, request_url, and metadata fields are delivered in the root of the JSON file, whereas the body fields and request headers are sent in nested objects.
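The dot-notation filtering of multi-level body fields (described in the arguments table) can be pictured as a simple dot-path lookup. The sketch below is hypothetical: `getByPath` and `filterBody` are illustrative names, not the integration's actual internals.

```javascript
// Hypothetical sketch of the dot-notation lookup used for body filtering.
// getByPath({ user: { id: 7 } }, "user.id") walks the object one key at a time.
function getByPath(obj, path) {
  return path.split(".").reduce(
    (current, key) => (current == null ? undefined : current[key]),
    obj
  );
}

// Select only the configured body fields into the event's "body" object.
function filterBody(body, fields) {
  const out = {};
  for (const path of fields) {
    out[path] = getByPath(body, path);
  }
  return out;
}
```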

Important: you can also use a “catch-all” JSON Args file that streams all metadata, headers, and body fields, like this:

{
  "http_connection_args": {
    "endpoint": "http://example_api:3000/test"
  }
}

S3 output files

For each new function run, a new file will be produced in the given S3 bucket. The file will be named after the request ID that initiated the function.

Example: if the s3_connection_args.file_path parameter is set to /my-data/ and the function runs on May 9th, 2023, with the request ID abcd-1234, the resulting file will be saved at /my-data/2023-05-09/abcd-1234.json. If the s3_connection_args.use_date_prefix parameter is set to false, the resulting file will be saved as /my-data/abcd-1234.json.
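The naming rule in the example above can be expressed as a small helper. `buildS3Key` is a hypothetical function written here to make the rule concrete; it is not part of the integration.

```javascript
// Hypothetical helper reproducing the documented file naming rule:
//   <file_path>[<YYYY-MM-DD>/]<request_id>.json
function buildS3Key(filePath, useDatePrefix, requestId, date = new Date()) {
  const datePrefix = useDatePrefix
    ? `${date.toISOString().slice(0, 10)}/` // YYYY-MM-DD sub-folder
    : "";
  return `${filePath}${datePrefix}${requestId}.json`;
}
```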

If neither http_connection_args nor s3_connection_args is supplied in the JSON Args, the function has no valid connection arguments to use. In that case, the request is terminated and a JSON error message is returned indicating the cause of the issue:

{
  "error": "A001",
  "detail": "The function instance is missing or has invalid required arguments."
}

If the function is unable to connect to the HTTP endpoint or the S3 provider, the error will be ignored and the user request won't be affected. Regardless, an error log will be created, which you can access via Data Stream.

For example, if an invalid access key is used, the following notice will be displayed:

[Send event to endpoint] S3 connection error;
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Error>
  <Code>InvalidAccessKeyId</Code>
  <Message>The key 'DAKEY' is not valid</Message>
</Error>

Finally, if you supply correct connection options for both the HTTP endpoint and the S3 bucket, the function delivers event data to both destinations at the same time.

Setting up Rules Engine

To finish, you have to set up the Rules Engine to configure the behavior and the criteria to run the integration.

Still in the Edge Firewall page:

  1. Select the Rules Engine tab.
  2. Click the + Rule Engine button.
  3. Give a name to the rule.
  4. Select the criteria to catch the domains on which you want to run the integration.
  • Example: if Host matches yourdomain.com.
  5. Below, select a behavior for the criteria. In this case, it'll be Run Function.
    • Select the adequate function according to the name you gave it during the instantiation step.
  6. Click the Save button.

On the Console, you must now configure your domain so your edge firewall protects it.

  1. On the Products menu, select Domains.
  2. Click on the domain you want to protect with your Send Event to Endpoint function.
  3. In the Settings section, click on the Edge Firewall selector and choose the edge firewall you created.
  4. Click the Save button.

Done. Now the Send Event to Endpoint integration is running for every request made to the domain you indicated.

