
Prerequisites

Familiarity with creating event scripts in Designer

Familiarity with creating tag trees in Designer

Installation of Cirrus Link Modules at v4.0.10 or greater

  • Distributor
  • Transmission
  • Engine

Familiarity with configuration of the MQTT modules


Abstract

Tag Latching is used for synchronizing events at an Ignition Gateway running MQTT Engine when using tag change scripts.

If events are occurring very quickly (many times per second), this can lead to synchronization problems in tag change scripts, and trigger and latch tags can be used to resolve this.

Another scenario for rapidly changing tags is when the Edge Node is flushing large volumes of historical data and MQTT Engine is configured to write these historical events directly to the tag. 


In this tutorial we will show how to use trigger and latch tags to synchronize events.



Trigger and Latch Tag Usage

For this example, say you have two tags, Data1 and Data2, in MQTT Engine as follows:


Let's assume that Data1 and Data2 are changing (incrementing) once per second and always have the same value as each other.

Now, say you have a tag change script on Data1 as follows:

# Value of Data1 from this tag change event
dataOneValue = newValue.getValue()
# Read the current value of Data2
dataTwoValue = system.tag.read("[MQTT Engine]Edge Nodes/G1/E1/D1/Data2").value
print "Values: " + str(dataOneValue) + " => " + str(dataTwoValue)


Based on this script, we would expect to see the following output:

INFO   | jvm 1    | 2020/04/29 10:13:31 | Values: 0 => 0
INFO   | jvm 1    | 2020/04/29 10:13:32 | Values: 1 => 1
INFO   | jvm 1    | 2020/04/29 10:13:33 | Values: 2 => 2
INFO   | jvm 1    | 2020/04/29 10:13:34 | Values: 3 => 3

The problem arises when store and forward is in use or when the tag change rate is much faster. For store and forward, this only applies when flushing 'in order'; only in-order store and forward will trigger tag change events, including our example script. Make sure you understand how store and forward works before proceeding.

With the default flush quantity of 10,000, MQTT Engine could receive 10,000 tag change events and call 'update tag' on both Data1 and Data2 very quickly (many times per second). This can lead to synchronization problems in our tag change script: by the time the script reads Data2, MQTT Engine has already written a new value to it, and we end up with erroneous output. See the output of this scenario below.
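The race described above can be modeled with a few lines of plain Python (a deterministic sketch, not the MQTT Engine API: the tag dictionary, event queue, and names here are illustrative only). The "engine" flushes several updates to both tags before the change script gets to run, so every event sees the final value of Data2:

```python
# Toy model of the race: the engine writes tags faster than the change
# script consumes events, so the script's live read of Data2 is stale
# relative to the Data1 value that triggered it.
tags = {"Data1": 0, "Data2": 0}
pending = []  # queued tag-change events for Data1

# Engine flushes 5 updates before the script runs even once
for v in range(1, 6):
    tags["Data1"] = v
    tags["Data2"] = v
    pending.append(v)  # the event carries Data1's value at change time

# The script now drains its event queue, reading Data2 live each time
results = []
for data_one in pending:
    data_two = tags["Data2"]  # Data2 has already advanced to 5
    results.append((data_one, data_two))
    print("Values: %d => %d" % (data_one, data_two))
```

Every line except the last prints mismatched values, mirroring the erroneous log output shown below.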

INFO   | jvm 1    | 2020/04/29 10:45:29 | Values: 10 => 10
INFO   | jvm 1    | 2020/04/29 10:45:30 | Values: 11 => 15
INFO   | jvm 1    | 2020/04/29 10:45:30 | Values: 12 => 18
INFO   | jvm 1    | 2020/04/29 10:45:30 | Values: 13 => 21
INFO   | jvm 1    | 2020/04/29 10:45:30 | Values: 14 => 23
INFO   | jvm 1    | 2020/04/29 10:45:30 | Values: 15 => 25
INFO   | jvm 1    | 2020/04/29 10:45:30 | Values: 16 => 27

Note that the above can happen even when store and forward is not in use, if the tag value is changing rapidly enough. When this happens, we can use tag latching to synchronize MQTT Engine with the tag change script. To do so, modify the MQTT Engine general configuration, setting the Latch Tags field to 'G1/E1/D1/Data1,G1/E1/D1/Latch'. Note that we are using Data1 as the trigger tag since it is the trigger for our existing tag change script.

With this set, MQTT Engine will now set a new tag, '[MQTT Engine]Engine Info/Latches/G1/E1/D1/Latch', to true every time the trigger tag changes. It is the responsibility of the script, transaction group, or external application to reset the latch tag to allow MQTT Engine to continue processing data normally. If this is not done, MQTT Engine will stop processing incoming change events until the 'Latch Timeout' elapses. So, for this example, the tag change script is modified as follows to release the latch at the end of the script.

# Value of Data1 from this tag change event
dataOneValue = newValue.getValue()
# Read Data2; the latch guarantees MQTT Engine has not moved past this event
dataTwoValue = system.tag.read("[MQTT Engine]Edge Nodes/G1/E1/D1/Data2").value
print "Values: " + str(dataOneValue) + " => " + str(dataTwoValue)
 
# Free the latch so MQTT Engine resumes processing incoming events
system.tag.writeSynchronous("[MQTT Engine]Engine Info/Latches/G1/E1/D1/Latch", False, 45)

With this set up, the output is now correct, as shown below. Note that all of this output came from within the same second.

INFO   | jvm 1    | 2020/04/29 11:26:09 | Values: 39 => 39
INFO   | jvm 1    | 2020/04/29 11:26:09 | Values: 40 => 40
INFO   | jvm 1    | 2020/04/29 11:26:09 | Values: 41 => 41
INFO   | jvm 1    | 2020/04/29 11:26:09 | Values: 42 => 42
INFO   | jvm 1    | 2020/04/29 11:26:09 | Values: 43 => 43
INFO   | jvm 1    | 2020/04/29 11:26:09 | Values: 44 => 44
INFO   | jvm 1    | 2020/04/29 11:26:09 | Values: 45 => 45
INFO   | jvm 1    | 2020/04/29 11:26:09 | Values: 46 => 46
INFO   | jvm 1    | 2020/04/29 11:26:09 | Values: 47 => 47

For reference, the following Timer script was used to generate the simulated change events at the Edge (in MQTT Transmission):

# Read the current value, increment it, and roll over at 500
value = system.tag.read("[default]Edge Nodes/G1/E1/D1/Data1").value
value += 1
if value > 500:
  value = 0
 
# Write Data2 first, then Data1 (the trigger tag), each with a 30 second timeout
system.tag.writeSynchronous("[default]Edge Nodes/G1/E1/D1/Data2", value, 30)
system.tag.writeSynchronous("[default]Edge Nodes/G1/E1/D1/Data1", value, 30)

It should also be noted that this has a performance impact on MQTT Engine. Because the processing of incoming events must be synchronized, MQTT Engine will not process them as quickly: between the time MQTT Engine sets the latch and the time the script releases it, MQTT Engine pauses all processing of tag change events.
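The handshake that causes this pause can be sketched in Python (a simplified model under stated assumptions, not the actual MQTT Engine internals; `engine_flush` and `tag_change_script` are hypothetical names standing in for the engine's update loop and the Ignition script):

```python
import threading

tags = {"Data1": 0, "Data2": 0}
latch_freed = threading.Event()  # stands in for the latch tag
results = []

def tag_change_script(data_one):
    # Mirrors the tutorial's script: read Data2, then free the latch
    results.append((data_one, tags["Data2"]))
    latch_freed.set()

def engine_flush(n, timeout=5.0):
    # Engine side: write both tags, fire the change script, then pause
    # all further event processing until the latch is freed (or the
    # 'Latch Timeout' elapses)
    for v in range(1, n + 1):
        latch_freed.clear()
        tags["Data1"] = v
        tags["Data2"] = v
        worker = threading.Thread(target=tag_change_script, args=(v,))
        worker.start()
        latch_freed.wait(timeout)  # this wait is the performance cost
        worker.join()

engine_flush(5)
print(results)  # [(1, 1), (2, 2), (3, 3), (4, 4), (5, 5)]
```

Because the engine blocks on the latch before writing the next value, every event sees matching Data1 and Data2 values, at the cost of serializing event processing.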

Reference Information

  • The 'Latch Timeout' is how long MQTT Engine will pause waiting for the latch to be freed before it frees the latch on its own.
  • The 'Latch Tags' must be of the form:
    • Group ID/Edge Node ID/Device ID/Trigger Tag,Group ID/Edge Node ID/Device ID/Latch Tag
  • A 'Trigger Tag' is a 'real tag' that you would normally use in a standard Transaction Group or Tag Change Script as a trigger. The 'Latch Tag' is a tag that will be created in the [MQTT Engine]Engine Info/Latches folder. Each latch tag will be set to true when MQTT Engine calls updateTag for that given trigger tag. Then MQTT Engine waits until the timeout elapses or until the latch tag is set back to false by a script, transaction group, etc., whichever comes first. The latch tag is what should be used by scripts or transaction groups, and it must be set back to false at the end of the operation to allow MQTT Engine to continue processing incoming Sparkplug messages.
  • Here is an example of a single latch tag:
    • G1/E1/D1/Trigger Tag 1,G1/E1/D1/Latch Tag 1
  • You can also have longer folder paths on any trigger tag or latch tag.  For example:
    • G1/E1/D1/my/longer/tag/path/Trigger Tag 1,G1/E1/D1/my/longer/path/Latch Tag 1
  • If you want to specify two or more latches, separate them with a semicolon:
    • G1/E1/D1/Trigger Tag 1,G1/E1/D1/Latch Tag 1;G1/E1/D1/Trigger Tag 2,G1/E1/D1/Latch Tag 2
  • You can also specify multiple trigger tags for any given latch.  Just create two or more entries that each point to the same latch tag:
    • G1/E1/D1/Trigger Tag 1,G1/E1/D1/Latch Tag 1;G1/E1/D1/Trigger Tag 2,G1/E1/D1/Latch Tag 1
  • You can specify as many trigger and latch tag pairs as you want.
  • If using this for store and forward purposes, make sure MQTT Transmission is configured to flush 'in order' and make sure MQTT Engine is configured to write to the tag instead of directly to the Historian.
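To make the field format above concrete, here is a small helper that parses a 'Latch Tags' string into a latch-to-triggers mapping (`parse_latch_tags` is an illustrative function written for this tutorial, not part of any Cirrus Link API):

```python
def parse_latch_tags(field):
    # The field is semicolon-separated entries, each of the form
    # 'trigger path,latch path'; multiple triggers may share one latch
    mapping = {}
    for entry in field.split(";"):
        trigger, latch = (part.strip() for part in entry.split(","))
        mapping.setdefault(latch, []).append(trigger)
    return mapping

# Two trigger tags pointing at the same latch tag, as in the bullet above
parsed = parse_latch_tags(
    "G1/E1/D1/Trigger Tag 1,G1/E1/D1/Latch Tag 1;"
    "G1/E1/D1/Trigger Tag 2,G1/E1/D1/Latch Tag 1")
print(parsed)
# {'G1/E1/D1/Latch Tag 1': ['G1/E1/D1/Trigger Tag 1', 'G1/E1/D1/Trigger Tag 2']}
```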
