
Prerequisites

  • Install IoT Bridge for Snowflake into your Azure account
    • Before you can access the Virtual Machine, you must have completed the installation process here.
  • Install an MQTT Server
    • Chariot MQTT Server can be installed using this guide. However, any Sparkplug compliant MQTT Server will work. Note that Azure IoT Hub is not Sparkplug compliant.

Summary

IoT Bridge for Snowflake (IBSNOW) is an application that connects to an MQTT Server (such as Chariot MQTT Server) and consumes MQTT Sparkplug messages from Edge devices. These messages must be formatted as Sparkplug Templates, which are defined in the Sparkplug Specification. IBSNOW uses these Templates to create the data in Snowflake automatically, with no additional coding or configuration. Multiple instances of these Templates then generate the Assets and begin to populate with real-time data that is sent on change only, significantly reducing the amount of data sent to the cloud. For further details on Snowflake, refer to the documentation here. For further details on Eclipse Sparkplug, refer to the Eclipse Sparkplug resources.

This Quickstart document covers how IoT Bridge can be used to consume MQTT Sparkplug data and create and update data in Snowflake. It shows how to configure IoT Bridge, as well as how to use Inductive Automation's Ignition platform along with Cirrus Link's MQTT modules to publish device data to an MQTT Server. This data is ultimately consumed by IoT Bridge to create and update the Snowflake components. This tutorial uses the AWS IoT Core MQTT Server implementation; however, IBSNOW works with any MQTT v3.1.1 compliant MQTT Server, including Cirrus Link's MQTT Servers.

It is also important to note that Ignition, in conjunction with Cirrus Link's MQTT Transmission module, converts Ignition User Defined Types (UDTs) to Sparkplug Templates. This is done automatically by the MQTT Transmission module, so much of this document will refer to UDTs rather than Sparkplug Templates, since that is what they are called in Ignition. More information on Inductive Automation's Ignition platform can be found here. Additional information on Cirrus Link's MQTT Transmission module can be found here.

Snowflake Setup

If you don't have a Snowflake account, open a Web Browser and go to https://www.snowflake.com. Follow the instructions there to start a free trial. After creating an account, log in to Snowflake via the Web Console. You should see something like what is shown below.

Create a new 'SQL Worksheet' by clicking the blue + button in the upper right hand corner of the window as shown below.

Copy and paste the following SQL script into the center pane.
SQL Script 01

After pasting the code into the center pane of the SQL Worksheet, click the drop down arrow next to the blue play button in the upper right corner of the window and click 'Run All' as shown below.

After doing so, you should see a message in the 'Results' pane denoting the SPARKPLUG_RAW table was created successfully as shown below.
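
If you prefer to confirm this from SQL rather than the Results pane, a quick check such as the one below can be run in the same worksheet. The fully qualified name assumes the database and schema created by the provisioning scripts (cl_bridge_stage_db.stage_db); adjust it if your scripts use different names.
Verify SPARKPLUG_RAW

-- Confirm the landing table created by Script 01 exists
SHOW TABLES LIKE 'SPARKPLUG_RAW' IN SCHEMA cl_bridge_stage_db.stage_db;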

Now, repeat the process for each of the following scripts in order. Each time, fully replace the contents of the SQL script with the new script and click the 'Run All' button after pasting each script. Make sure no errors are displayed in the Results window after running each script.

  • Script 02
    SQL Script 02

    • Expected Result: Stream NBIRTH_STREAM successfully created.
  • Script 03
    SQL Script 03

    • Expected Result: Function GENERATE_DEVICE_ASOF_VIEW_DDL successfully created.
  • Script 04
    SQL Script 04

    • Expected Result: Function CREATE_EDGE_NODE_SCHEMA successfully created.
  • Script 05
    SQL Script 05

    • Expected Result: Function CREATE_ALL_EDGE_NODE_SCHEMAS successfully created.
  • Script 10
    SQL Script 10

    • Expected Result: Statement executed successfully.
  • Script 11
    SQL Script 11

    • Expected Result: Statement executed successfully.
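
As an optional sanity check before moving on, the statements below list the key objects the scripts should have created. This is a hedged example: the object names are taken from the expected results above, and IN ACCOUNT is used because the exact schema holding each object depends on the scripts.
Verify provisioned objects

-- Confirm the stream and helper functions from Scripts 02-05 exist
SHOW STREAMS LIKE 'NBIRTH_STREAM' IN ACCOUNT;
SHOW USER FUNCTIONS LIKE 'GENERATE_DEVICE_ASOF_VIEW_DDL' IN ACCOUNT;
SHOW USER FUNCTIONS LIKE 'CREATE_EDGE_NODE_SCHEMA' IN ACCOUNT;
SHOW USER FUNCTIONS LIKE 'CREATE_ALL_EDGE_NODE_SCHEMAS' IN ACCOUNT;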


After all of the scripts have executed successfully, create a new user in Snowflake. This user will be used by IoT Bridge for Snowflake to push data into Snowflake. In the Snowflake Web UI, go to Admin → Users & Roles and then click '+ User' in the upper right hand corner. Give it a username of your choice and a secure password. For this example we're calling the user IBSNOW_INGEST so we know this user is for ingest purposes. See below for an example and then click 'Create User'.

In addition, the user must have a specific role to be able to stream data into Snowflake. Click the newly created user to see the following.

At the bottom of the center 'Granted Roles' pane you will see that this user has no roles. Click 'Grant Role' to set up a new role. Then select the 'CL_BRIDGE_PROCESS_RL' role and click 'Grant' as shown below.

After this has been done successfully you will see the role now associated with the new user as shown below.
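
If you prefer to script the user creation and role grant rather than use the Web UI, the equivalent SQL is shown below. Run it in a worksheet as a user with sufficient privileges; the password is a placeholder.
Create ingest user

-- Create the ingest user and grant it the bridge processing role
CREATE USER IBSNOW_INGEST PASSWORD = '<secure password>' MUST_CHANGE_PASSWORD = FALSE;
GRANT ROLE CL_BRIDGE_PROCESS_RL TO USER IBSNOW_INGEST;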

Now a key pair must be generated and uploaded to Snowflake. This will be used by the IoT Bridge for Snowflake application to authenticate when pushing data to Snowflake via the Snowflake Streaming API. See this document for details on how to generate a key pair and assign it to a user in your Snowflake account: https://docs.snowflake.com/en/user-guide/key-pair-auth. Step 6 ('Configuring the Snowflake Client to Use Key-Pair Authentication') in the linked tutorial can be skipped; this tutorial will cover configuring IoT Bridge for Snowflake with the generated key. Attach the public key to the user that we just created for Snowflake ingest purposes.


The generated key MUST NOT be encrypted
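
A minimal sketch of generating a suitable key pair with OpenSSL is shown below, following the linked Snowflake guide. Note the -nocrypt flag, which produces the unencrypted private key that IBSNOW requires; the file names are arbitrary.
Key pair generation

# Generate an unencrypted PKCS#8 private key (note -nocrypt)
openssl genrsa 2048 | openssl pkcs8 -topk8 -inform PEM -out rsa_key.p8 -nocrypt

# Derive the public key to attach to the IBSNOW_INGEST user
openssl rsa -in rsa_key.p8 -pubout -out rsa_key.pub

The contents of rsa_key.pub (minus the BEGIN/END header and footer lines) are then attached to the user in Snowflake with ALTER USER IBSNOW_INGEST SET RSA_PUBLIC_KEY = '<key>'; as described in the linked document.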

IoT Bridge Setup

First you will need access to the Azure Virtual Machine via SSH. See this document for information on how to access the VM.

Modify the file /opt/ibsnow/conf/ibsnow.properties. Set the following:

  • mqtt_server_url
  • mqtt_server_name
    • Give it a meaningful name if desired
  • mqtt_username
    • The username for the MQTT connection if required
  • mqtt_password
    • The password for the MQTT connection if required
  • mqtt_ca_cert_chain_path
    • The path to the Root Certificate if required
  • mqtt_client_cert_path
    • The path to the Client Certificate if required
  • mqtt_client_private_key_path
    • The path to the Client Private Key if required
  • primary_host_id
    • Set it to a text string such as 'IamHost'
  • snowflake_streaming_client_name
    • Some text string such as 'MY_CLIENT'
  • snowflake_streaming_table_name
    • This must be 'SPARKPLUG_RAW' based on the scripts we previously used to provision Snowflake
  • snowflake_notify_db_name
    • This must be 'cl_bridge_node_db' based on the scripts we previously used to provision Snowflake
  • snowflake_notify_schema_name
    • This must be 'stage_db' based on the scripts we previously used to provision Snowflake
  • snowflake_notify_warehouse_name
    • This must be 'cl_bridge_ingest_wh' based on the scripts we previously used to provision Snowflake

When complete, it should look similar to what is shown below.
ibsnow.properties

# The IBSNOW instance friendly name. If omitted, it will become 'IBSNOW-vm-instance-id'
#ibsnow_instance_name =
 
# The region the VM is located in
#ibsnow_cloud_region = East US
 
# The MQTT Server URL
mqtt_server_url = ssl://55.23.12.33:8883
 
# The MQTT Server name
mqtt_server_name = Chariot MQTT Server
 
# The MQTT username (if required by the MQTT Server)
mqtt_username = admin
 
# The MQTT password (if required by the MQTT Server)
mqtt_password = changeme
 
# The MQTT keep-alive timeout in seconds
#mqtt_keepalive_timeout = 30
 
# The path to the TLS Certificate Authority certificate chain
#mqtt_ca_cert_chain_path =
 
# The path to the TLS certificate
#mqtt_client_cert_path =


# The path to the TLS private key
#mqtt_client_private_key_path =
 
# The TLS private key password
#mqtt_client_private_key_password =
 
# Whether or not to verify the hostname against the server certificate
#mqtt_verify_hostname = false
 
# The Sparkplug sequence reordering timeout in milliseconds
sequence_reordering_timeout = 5000
 
# Whether or not to block auto-rebirth requests
#block_auto_rebirth = false
 
# The primary host ID if this is the acting primary host
primary_host_id = IamHost
 
# The MQTT Client ID - It is recommended to not set this unless there is a specific reason to do so. If this is not set, a random client ID will be automatically generated
#client_id =
 
# Snowflake streaming connection properties - A custom client name for the connection (e.g. MyClient)
snowflake_streaming_client_name = MY_CLIENT
 
# Snowflake streaming connection properties - A custom channel name for the connection (e.g. MyChannel)
# If this is left blank/empty, channel names based on the Sparkplug Group ID will be used instead of a single channel
#snowflake_streaming_channel_name =
 
# Snowflake streaming connection properties - The Table name associated with the Database and Schema already provisioned in the Snowflake account (e.g. MyTable)
snowflake_streaming_table_name = SPARKPLUG_RAW
 
# Snowflake notify connection properties - The Database name associated with the connection that is already provisioned in the Snowflake account (e.g. MyDb)
snowflake_notify_db_name = cl_bridge_node_db
 
# Snowflake notify connection properties - The Schema name associated with the Database already provisioned in the Snowflake account (e.g. PUBLIC)
snowflake_notify_schema_name = stage_db
 
# Snowflake notify connection properties - The Warehouse name associated with the notifications already provisioned in the Snowflake account (e.g. PUBLIC)
snowflake_notify_warehouse_name = cl_bridge_ingest_wh
 
# Whether or not to send notification tasks to Snowflake based on incoming Sparkplug events
snowflake_notify_task_enabled = true
 
# The number of milliseconds to delay after receiving an NBIRTH before notifying Snowflake of the event (requires snowflake_notify_task_enabled to be true)
snowflake_notify_nbirth_task_delay = 10000
 
# The number of milliseconds to delay after receiving a DBIRTH or DATA message before notifying Snowflake of the event (requires snowflake_notify_task_enabled to be true)
snowflake_notify_data_task_delay = 5000
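
As a quick way to double-check which properties are active (that is, not commented out) after editing, something like the following can be run on the VM:

# Show only the active (uncommented, non-blank) settings
grep -v '^#' /opt/ibsnow/conf/ibsnow.properties | grep -v '^$'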


Now, modify the file /opt/ibsnow/conf/snowflake_streaming_profile.json. Set the following:

  • user
    • This must be 'IBSNOW_INGEST' based on the user we provisioned in Snowflake earlier in this tutorial
  • url
    • Replace 'ACCOUNT_ID' with your Snowflake account id. Leave the other parts of the URL the same.
  • account
    • Replace 'ACCOUNT_ID' with your Snowflake account id
  • private_key
    • Replace with the text string that is the private key you generated earlier in this tutorial
  • host
    • Replace 'ACCOUNT_ID' with your Snowflake account id. Leave the other parts of the hostname the same.
  • schema
    • Set this to 'stage_db' based on the scripts we previously used to provision Snowflake
  • database
    • Set this to 'cl_bridge_stage_db' based on the scripts we previously used to provision Snowflake
  • connect_string
    • Replace 'ACCOUNT_ID' with your Snowflake account id. Leave the other parts of the connection string the same.
  • warehouse
    • Set this to 'cl_bridge_ingest_wh' based on the scripts we previously used to provision Snowflake
  • role
    • Set this to 'cl_bridge_process_rl' based on the scripts we previously used to provision Snowflake

When complete, it should look similar to what is shown below.
snowflake_streaming_profile.json

{
  "user": "IBSNOW_INGEST",
  "url": "https://RBC48284.snowflakecomputing.com:443",
  "account": "RBC48284",
  "private_key": "MIIEwAIBADANBgkqhkiG9w0BAQEFAASCBKowggSmAgEAAoIBAQDN6NOoaoVVZSz/AIUohNn9oJThwDZg2/qASsIRYFjy0pSNKh+XsG6yp4kteII900lEgt5koroU+8oQrX7vnTI/69mvc5o+xJBfGogd+qcdw9tEEUZHEfBxBtlpZvfMY/HHyrilQBvWVrFqB3hYt9n15lE/wVi1LDII378yh2p+QcwEaKhKD1aWBYUlpOoA0d2214/UQiU6ytI18jJNPN3yQv9Jx3+/DRldlYh5fLIQ0AWbBqRnQoyLvLaYRIgynxDhrQpVtw8QN2M/XQErT3OxZzti7CKeI9M4xLchO3VZozsde5kcQwCIcop05PX6dtdSsqheQBRrhytf1K9GnfGdAgMBAAECggEBAKO8auLXoaMgS0GTlk98JSRL11gU0qj/BBmUWPIcXV7qGPqP7oNe5wfltW2VEGw9YVu7fUElLTeWaT4N2IyNwfGWiIm+MX+MKwmVPXwpX06J+ggMfIfzOfGG8sef+5hqOU8YYu/1JK2yTm3z9r0FpaqmNSGvi+y1ciwgUBfMGuC93pySQCHuNXkWw2njxvaltpOSm9H08aQtXsA+1JL31kP4WZAexk2EqzRzEka8hrGYfNIs9qQimKI9XznNoqmlSN6ZIO+A5e+kSUg0viyY/cZLwVj5FYV/wKN49WDDkB3dthCbx1Z1VuIw7rub9VU609eoTXDxEgMMBEDqbE5BXkECgYEA/ZsySGO2QouCihpPp0VtNrNh7PhA/OLch5zZt2iJBMhbjPn4SAyrzgi6lQc7b4oZOznOK5jOL4NQijt7SGz8rfrwfTd5Rl84lHlN7Cu3V0lBC8IN6JcXWBuTedmF7ShlL5ATbpXTsgaaqPm3H8VCS4fkoQ54bDZnCjI5/GtI5OkCgYEAz9pgvqXCXyJQxj5bM0uihZV3lZzvwpqlEuT0GvT9XqHM+LNKtf2kQ958qRq5Nh381oeRLyVbZTFrr0eNCCEA5YesbmxE7d/5vlWszfW5e70TUJEWbk0rrGNmqVUlAfEZKfK6ms87W3peHqqMyXqnmMwwecMl2c013XZaLKYxRpUCgYEAnWgHdKDXDkSTGG6uQ882sz3xqOiJRaz1XgK/qzPp35sQH9dDAE1FEZOfY0Ji5J8dfAIr8ilcyGbDxZiXs2NaDg5z1/RnhIMzlgwYjl6v5DBmfArNITEuXxR2m6mkk4eADl5pgTjjdVreAcVEoSaJOGI3SLO3kMrPd6enEAHy84kCgYEAsL2BjDtI1zpHsvqs9CY5URuybt7epPx4p2NWCmIN3Fz6/PL/8VZ3SlqyZ9zYZqMDLqxiENPULmzio03VJ3dg2swOHGsmBZtxMp6JbSyoBwbUmKp2h15JZ7GyRwSmjksj2Z6TfDYAxB1+UNc3Fc+dGXlvMup0kgpD5kfQD61Vsy0CgYEAn9QCQG+lcPG5GXXu3EAeVzqgy+gOXpyd4ys0fdssFF93AM+/Dd9F31sSSfdasEQ8+jFKjunEeQAOiecVQA4Vu9GGQAykrK/m8nD0zf02L1QpADTBA8TymkpD1yFEMo+T5DrZ24SRCl/zREb0hLn//ZOA=",
  "port": 443,
  "host": "RBC48284.snowflakecomputing.com",
  "schema": "stage_db",
  "scheme": "https",
  "database": "cl_bridge_stage_db",
  "connect_string": "jdbc:snowflake://RBC48284.snowflakecomputing.com:443",
  "ssl": "on",
  "warehouse": "cl_bridge_ingest_wh",
  "role": "cl_bridge_process_rl"
}
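
Because a single stray character in this file will prevent IBSNOW from authenticating, it can be worth validating the JSON syntax before restarting the service. One way to do this, assuming Python 3 is available on the VM, is:

# Prints a parse error and exits non-zero if the JSON is malformed
python3 -m json.tool /opt/ibsnow/conf/snowflake_streaming_profile.json > /dev/null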


Now the service can be restarted to pick up the new configuration. Do so by running the following command.

sudo /etc/init.d/ibsnow restart

At this point, IBSNOW should connect to the MQTT Server and be ready to receive MQTT Sparkplug messages. Verify by running the following command.

tail -f /opt/ibsnow/log/wrapper.log

After doing so, you should see something similar to what is shown below. Note the last line is 'MQTT Client connected to ...'. That denotes we have successfully configured IBSNOW and that the MQTT Server is properly provisioned.

INFO|7263/0||23-06-29 20:19:32|20:19:32.932 [Thread-2] INFO  org.eclipse.tahu.mqtt.TahuClient - IBSNOW-8bc00095-9265-41: Creating the MQTT Client to tcp://54.236.16.39:1883 on thread Thread-2
INFO|7263/0||23-06-29 20:19:33|20:19:33.275 [MQTT Call: IBSNOW-8bc00095-9265-41] INFO  org.eclipse.tahu.mqtt.TahuClient - IBSNOW-8bc00095-9265-41: connect with retry succeeded
INFO|7263/0||23-06-29 20:19:33|20:19:33.280 [MQTT Call: IBSNOW-8bc00095-9265-41] INFO  org.eclipse.tahu.mqtt.TahuClient - IBSNOW-8bc00095-9265-41: Connected to tcp://54.236.16.39:1883
INFO|7263/0||23-06-29 20:19:33|20:19:33.294 [MQTT Call: IBSNOW-8bc00095-9265-41] INFO  o.eclipse.tahu.host.TahuHostCallback - This is a offline STATE message from IamHost - correcting with new online STATE message
FINEST|7263/0||23-06-29 20:19:33|20:19:33.297 [MQTT Call: IBSNOW-8bc00095-9265-41] INFO  o.eclipse.tahu.host.TahuHostCallback - This is a offline STATE message from IamHost - correcting with new online STATE message
FINEST|7263/0||23-06-29 20:19:33|20:19:33.957 [Thread-2] INFO  org.eclipse.tahu.mqtt.TahuClient - IBSNOW-8bc00095-9265-41: MQTT Client connected to tcp://54.236.16.39:1883 on thread Thread-2


Edge Setup with Ignition and MQTT Transmission

At this point IoT Bridge is configured and ready to receive data. To get data flowing into IBSNOW we'll set up Inductive Automation's Ignition platform along with the MQTT Transmission module from Cirrus Link. Begin by downloading Ignition here.

https://inductiveautomation.com/downloads

Installation of Ignition is very straightforward and fast. There is a guide to do so here.

https://docs.inductiveautomation.com/display/DOC80/Installing+and+Upgrading+Ignition

With Ignition installed, the MQTT Transmission module must be installed as well as a plugin to Ignition. Get MQTT Transmission for your version of Ignition here.

https://inductiveautomation.com/downloads/third-party-modules

Now use the procedure below to install the MQTT Transmission module.

https://docs.inductiveautomation.com/display/DOC80/Installing+or+Upgrading+a+Module

With Ignition and MQTT Transmission installed, we can configure the MQTT Transmission module to connect to AWS IoT Core using the certificates acquired during the AWS IoT thing provisioning. Begin by clicking 'Get Designer' in the upper right hand corner of the Ignition Gateway Web UI as shown below.

Now launch the Ignition Designer using the Designer Launcher as shown below.

Once it is launched, you should see something similar to what is shown below. Note the Tag Browser has been expanded and the automatically created example tags have been highlighted.

Begin by deleting these two tags (Example Tag and MQTT Quickstart). Then click the 'UDT Definitions' tab as shown below. We will use this view to create a very simple UDT definition.

Now, click the '+' icon in the upper left corner of the Tag Browser as shown below and select 'New Data Type'.

This will open the following dialog box.

Change the name of the tag to Motor as shown below. Also, note the highlighted 'new member tag' icon in the middle of the dialog. We'll use this to create some member tags.

Now use the 'new member tag' button to create a new 'memory tag' as shown below.

Then, set the following parameters for the new memory tag.

  • Name
    • Set to 'Temperature'
  • Data Type
    • Set to 'Float'
  • Engineering Units
    • Set to 'Celsius'

Now create two additional member tags with the following configuration.

  • Amps
    • Memory tag
    • Data Type = Integer
  • RPMs
    • Memory tag
    • Data Type = Integer

When complete, the UDT definition should look as follows.
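
For reference, the finished definition corresponds roughly to the Ignition JSON tag export sketched below. This is an illustrative sketch only; the exact fields and their ordering vary by Ignition version, so treat the Designer view above as the source of truth.
Motor UDT (approximate JSON tag export)

{
  "name": "Motor",
  "tagType": "UdtType",
  "tags": [
    { "name": "Temperature", "tagType": "AtomicTag", "valueSource": "memory", "dataType": "Float4", "engUnit": "Celsius" },
    { "name": "Amps", "tagType": "AtomicTag", "valueSource": "memory", "dataType": "Int4" },
    { "name": "RPMs", "tagType": "AtomicTag", "valueSource": "memory", "dataType": "Int4" }
  ]
}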

Now switch back to the 'Tags' tab of the Tag Browser. Right click on the 'PLC 1' folder and select 'New Tag → Data Type Instance → Motor' as shown below.


Now set the name to 'My Motor' as shown below and click OK.

Now, set some values under the instance as shown below.

At this point, our tags are configured. A UDT definition will map to an asset model in Snowflake, and UDT instances in Ignition will map to assets in Snowflake. Before this can happen, though, we need to point MQTT Transmission to AWS IoT Core. To do so, browse back to the Ignition Gateway Web UI and select MQTT Transmission → Settings from the left navigation panel as shown below.

Now select the 'Transmitters' tab as shown below.

Now click the 'edit' button to the right of the 'Example Transmitter'. Scroll down to the 'Convert UDTs' option and uncheck it as shown below. This will also un-grey the 'Publish UDT Definitions' option; leave it selected as shown below.

Then select the 'Servers' tab and then the 'Certificates' tab as shown below.

Now upload the three certificate files that were acquired during the AWS IoT thing provisioning to the MQTT Transmission configuration. When done, you should see something similar to what is shown below.


Now switch to the 'Settings' tab under 'Servers'. Delete the existing pre-seeded 'Chariot SCADA' MQTT Server definition. Then create a new one with the following configuration.

  • Name
    • AWS IoT
  • URL
    • The endpoint URL for your AWS IoT Core instance (e.g. ssl://<your-aws-iot-endpoint>:8883)
  • CA Certificate File
    • The AWS Root CA certificate
  • Client Certificate File
    • The AWS client certificate for your provisioned 'thing'
  • Client Private Key File
    • The AWS client private key for your provisioned 'thing'

When complete, you should see something similar to the following. The 'Connected' state should show '1 of 1' if everything was configured properly.


At this point, data should be flowing into Snowflake. By tailing the IBSNOW log you should see something similar to what is shown below. This shows IBSNOW receiving the messages published from Ignition/MQTT Transmission. When IBSNOW receives the Sparkplug MQTT messages, it creates and updates asset models and assets in Snowflake. The log below is also a useful debugging tool if things don't appear to work as they should.
Successful Insert

FINEST|199857/0||23-04-21 15:46:22|15:46:22.951 [TahuHostCallback--3deac7a5] INFO  o.e.tahu.host.TahuPayloadHandler - Handling NBIRTH from My MQTT Group/Edge Node ee38b1
FINEST|199857/0||23-04-21 15:46:22|15:46:22.953 [TahuHostCallback--3deac7a5] INFO  o.e.t.host.manager.SparkplugEdgeNode - Edge Node My MQTT Group/Edge Node ee38b1 set online at Fri Apr 21 15:46:22 UTC 2023
FINEST|199857/0||23-04-21 15:46:23|15:46:23.072 [TahuHostCallback--3deac7a5] INFO  o.e.tahu.host.TahuPayloadHandler - Handling DBIRTH from My MQTT Group/Edge Node ee38b1/PLC 1
FINEST|199857/0||23-04-21 15:46:23|15:46:23.075 [TahuHostCallback--3deac7a5] INFO  o.e.t.host.manager.SparkplugDevice - Device My MQTT Group/Edge Node ee38b1/PLC 1 set online at Fri Apr 21 15:46:22 UTC 2023
FINEST|199857/0||23-04-21 15:46:23|15:46:23.759 [ingest-flush-thread] INFO  n.s.i.s.internal.FlushService - [SF_INGEST] buildAndUpload task added for client=MY_CLIENT, blob=2023/4/21/15/46/rth2hb_eSKU3AAtxudYKnPFztPjrokzP29ZXzv5JFbbj0YUnqUUCC_1049_48_1.bdec, buildUploadWorkers stats=java.util.concurrent.ThreadPoolExecutor@32321763[Running, pool size = 2, active threads = 1, queued tasks = 0, completed tasks = 1]
FINEST|199857/0||23-04-21 15:46:23|15:46:23.774 [ingest-build-upload-thread-1] INFO  n.s.i.i.a.h.io.compress.CodecPool - Got brand-new compressor [.gz]
FINEST|199857/0||23-04-21 15:46:23|15:46:23.822 [ingest-build-upload-thread-1] INFO  n.s.i.streaming.internal.BlobBuilder - [SF_INGEST] Finish building chunk in blob=2023/4/21/15/46/rth2hb_eSKU3AAtxudYKnPFztPjrokzP29ZXzv5JFbbj0YUnqUUCC_1049_48_1.bdec, table=CL_BRIDGE_STAGE_DB.STAGE_DB.SPARKPLUG_RAW, rowCount=2, startOffset=0, uncompressedSize=5888, compressedChunkLength=5872, encryptedCompressedSize=5888, bdecVersion=THREE
FINEST|199857/0||23-04-21 15:46:23|15:46:23.839 [ingest-build-upload-thread-1] INFO  n.s.i.s.internal.FlushService - [SF_INGEST] Start uploading file=2023/4/21/15/46/rth2hb_eSKU3AAtxudYKnPFztPjrokzP29ZXzv5JFbbj0YUnqUUCC_1049_48_1.bdec, size=5888
FINEST|199857/0||23-04-21 15:46:24|15:46:24.132 [ingest-build-upload-thread-1] INFO  n.s.i.s.internal.FlushService - [SF_INGEST] Finish uploading file=2023/4/21/15/46/rth2hb_eSKU3AAtxudYKnPFztPjrokzP29ZXzv5JFbbj0YUnqUUCC_1049_48_1.bdec, size=5888, timeInMillis=292
FINEST|199857/0||23-04-21 15:46:24|15:46:24.148 [ingest-register-thread] INFO  n.s.i.s.internal.RegisterService - [SF_INGEST] Start registering blobs in client=MY_CLIENT, totalBlobListSize=1, currentBlobListSize=1, idx=1
FINEST|199857/0||23-04-21 15:46:24|15:46:24.148 [ingest-register-thread] INFO  n.s.i.s.i.SnowflakeStreamingIngestClientInternal - [SF_INGEST] Register blob request preparing for blob=[2023/4/21/15/46/rth2hb_eSKU3AAtxudYKnPFztPjrokzP29ZXzv5JFbbj0YUnqUUCC_1049_48_1.bdec], client=MY_CLIENT, executionCount=0
FINEST|199857/0||23-04-21 15:46:24|15:46:24.301 [ingest-register-thread] INFO  n.s.i.s.i.SnowflakeStreamingIngestClientInternal - [SF_INGEST] Register blob request returned for blob=[2023/4/21/15/46/rth2hb_eSKU3AAtxudYKnPFztPjrokzP29ZXzv5JFbbj0YUnqUUCC_1049_48_1.bdec], client=MY_CLIENT, executionCount=0


Data will also be visible in Snowflake at this point. See below for an example. Changing data values in the UDT tags in Ignition will produce DDATA Sparkplug messages, and every time the Edge Node connects it will produce NBIRTH and DBIRTH messages. All of these will now appear in Snowflake with their values, timestamps, and qualities.
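
To inspect the raw records from a SQL Worksheet, a simple query against the landing table works. The fully qualified table name below matches the one reported in the ingest log above.
Query ingested records

-- Inspect the most recently ingested Sparkplug records
SELECT * FROM CL_BRIDGE_STAGE_DB.STAGE_DB.SPARKPLUG_RAW LIMIT 10;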

Additional Resources
