Setting Up Messaging Services with ZSA
    The first step when working with ZSA is to define how your data will be gathered. This is done in the Messaging tab of the ZSA module's Configuration page.

    Defining Your First Connector

    Note: 
    If you updated from a version prior to May 2023, there have been some changes.
    See the dedicated section at the bottom of this article.

    Creating a New Connector

    To create a new connector, do the following:

    1. From the Messaging Connectors List, select "Create Connector" in the top bar.
    2.  An Add Server dialog box appears.
      Warning:
      The datasource must be accessible before creating the connector.
    3. In the Add Server dialog box, do the following:
      1. CONFIGURATION section
        1. In the Name field, type the intended name for the connector.
        2. In the Host Name field, type the host address of the data source to which you want to connect.
        3. In the Port Number field, type the port number at which the datasource is accessible. Typically this is 9999; however, you must confirm the port used by the mainframe.
        4. The Auto start option automatically launches the connector at creation and in the event of machine outages. (also read this article)
        5. In the Number of Retries and Retry Delay fields, type the number of retries and the delay between them (in seconds) in case of failure. (also read this article)
      2. TAGS section
        1. In the Tags section, you can link existing tags or create personalized tags. (also read this article)
      3. PROPERTIES section: see the advice in Configuring Your Connectors
        1. In the Quantity of instances field, type the number of separate connections that will be made to the datasource. (for more information, see Increasing the Number of Connector Instances)
          1. The Autoscale instances option automatically creates a new instance when the Usage Value reaches 90%. (read more here)
        2. In the Message Size field, you can modify the default maximum number of messages requested/processed per execution cycle.
        3. In the Connection Sleep Time field, you can modify the default waiting time before restarting a processing cycle if there is no more input data.
      4. Click Apply.
        The connector details you specified appear in a new row within the Connector List, and several actions are available (see Overview of the Connector Actions).
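
    The same connector can also be created programmatically through the ZSA REST API described later in this article (see How to Mass Create Connectors). A minimal sketch, assuming your Zetaly host is reachable at https://ZETALY_HOST, that MAINFRAME_HOST is a placeholder for your datasource host, and that $token holds an authentication token obtained as in the scheduling scripts at the end of this article:

    # Hypothetical example: create a CSV connector named ZDEV (all values are placeholders)
    curl -k -H "token: $token" -H "Content-type: application/json" -X POST https://ZETALY_HOST/zsa/api/v2/connectors -d '{"label":"ZDEV","type":"CSV","host":"MAINFRAME_HOST","port":9999}'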

    Overview of the Connector Actions

    Once saved, the connector details appear in a new row in the Connector List. In the example below, one connector called ZDEV exists.

    Buttons are available for a connector within the table, as summarized below:

    • 1 - Start/Stop button: starts or stops the connector to the datasource and thus the collection of data.
    • 2 - Edit connector button: displays the same Add Server dialog box as when creating a new connector.
    • 3 - Duplicate connector button: creates a copy of this connector. You are prompted to enter a name for the duplicate connector, which then appears in an additional row in the Connector List.
    • 4 - Delete connector button: deletes the connector.
    • 5 - Connector logs section (expanded row): displays the logs related to that connector.

    Other buttons are present in the top bar of the interface, as summarized below:

    • 6 - Start all the selected connectors. All the selected connectors will be started.
    • 7 - Stop all the selected connectors. All the selected connectors will be stopped.
    • 8 - Assign tags to all the selected connectors. Assign one or more tags to all the selected connectors. Read more
    • 9 - Delete all the selected connectors. All the selected connectors will be deleted. 
    • 10 - Add a new connector. Opens an Add Server dialog box, letting you create a new connector as described in Creating a New Connector.

    Every connector displays its status in a dedicated column of the table:

    • STARTING: The messaging is initializing everything it needs before it starts collecting data.
    • RUNNING: The messaging is running and collecting data.
    • STOPPING: The messaging is stopping.
    • STOPPED: The messaging is not running and did not terminate with an error.
    • READY: The messaging is launched but there is nothing to collect yet.
    • DEGRADED: At least one instance of the connector is in "error", but not all the instances.
    • ERROR: The messaging is not running and terminated with an error. You can find the detail in the logs by clicking the logs button.

    To start the connector, click Start. If your datasource is accessible, data collection starts and your data is gathered by ZSA.
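
    Alternatively, the start action can be triggered through the ZSA API, using the same call as in the scheduling examples at the end of this article; a minimal sketch, with ZETALY_HOST and MESSAGING_NAME as placeholders:

    curl -k -X POST https://ZETALY_HOST/zsa/api/v1/servers/MESSAGING_NAME/messaging/start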

     


    This connection has been started and is currently running 



    You can follow the connector's operations in the logs by clicking on the arrow to expand the row.  

     
    Note for mainframe users:
    The connectors request a specific number of CSV records each time. ZSA processes the request and sends back the CSV records, indicating how many were returned. Once the connector confirms reception of the CSV records, they are removed from a "Pending List" in ZSA, and the SLOTs they occupied are considered "AVAILABLE" again.

    Configuring Your Connectors

    The default configuration of your connectors might be enough for your datasources, but you may need to optimize it to meet your needs. This usually happens when the datasource produces a lot of data that is not fetched fast enough by ZSA. You can optimize your connector by doing one or both of the following:

    Increasing the Number of Connector Instances

    To increase the number of connector instances, do the following in the connector properties section:

    1. Click on edit (2).
      An Update Server dialog box appears.
    2. From the Update Server dialog box that appears, do the following:
      1. In the Quantity of instances field, specify a larger number (e.g. 3).
      2. Click Apply.
        The configuration change you made takes effect immediately.

    Each instance is a copy of the same connector configuration. The number of connector instances is displayed in the connector list.

    Configuring the Datasource

    If you want to further optimize your connector configuration, you can determine how the datasource is fetched by each of the connector's instances. Do the following in the connector list: 

    1. Click on edit (2).
    2. From the Properties section of the dialog box that appears, do the following:
      1. In the Message size field, specify a different message size. The message size defines the number of individual data entries to fetch at once. The default message size is 5000.
        If your network is slow, you may want to reduce the message size.
        If your network is fast, you may want to increase the message size (see the example after these steps).
      2. In the Connection sleep time (s) field, specify a different sleep time. The connection sleep time defines the delay (in seconds) between two calls to the datasource, if the last call did not find any data. The default connection sleep time is 5 seconds.
      3. Click Apply.
        The configuration change you made takes effect immediately, and is applied the same way on all of the connector's instances.
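
    For reference, these same properties appear in the connector's REST representation used in How to Mass Create Connectors at the end of this article. A fragment with purely illustrative values (not recommendations):

    "properties":[
       {"name":"NbInstances","value":"3"},
       {"name":"MsgSize","value":"10000"},
       {"name":"ConnectionSleepTime","value":"5"}
    ]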

    Parsing Data From Your Datasources

    ZSA currently assumes that no data can be handled as is, and will try to parse each data entry before saving it in the database. This is the case for mainframe SMF records, which are handled by the ZSA parser.

    Starting the Parser

    Note:
    The parser currently only handles CSV records produced by the mainframe part of ZSA.

    The parser has its own dedicated Parser List, which resembles the Connector List.

    All buttons function as they do for the connectors.
    Once you start the parser with the play button, the Messaging will start parsing data. See this page to make the parsed records available to the other Zetaly apps.

    Configuring the Parser

    As with connectors, the number of instances for the parser can be changed using the edit button, which opens the configuration dialog box.

     

    The dialog box contains the following fields:

    • Number of retries & Retry Delay - number of retries and their delay (in seconds) in case of failure. 
    • Number of records - defines the number of records to be parsed at once.
    • Poll interval (s) - defines the time (in seconds) to wait before attempting to parse records again if none are waiting to be parsed.
    • Insert flush time (s) - defines the maximum delay (in seconds) to wait before inserting parsed records into the Zetaly database.
    • Bulk insert quantity - defines the number of parsed records that will be inserted at once. If this amount is reached, the insert happens without waiting for the flush time.
    • Statistics interval - not related to parsing. It defines the interval (in milliseconds) at which statistics are produced by ZSA (displayed using the button (*) on the parser agent - see ZSA Parsing Statistics). This must be the same as or less than the interval defined in the CSV_Parser window (see ZSA Parsing Statistics).
    • The Autoscale instances option automatically creates a new instance when the Usage Value reaches 90%. (read more here)
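
    As a starting point on busy LPARs, the tuning guidance at the end of this article (How to Find the Right Configuration) suggests values along these lines; the numbers below are purely illustrative and must be validated against your own parser statistics:

    Number of records      : 100000  (default 5000; fewer, larger calls to ZQM)
    Bulk insert quantity   : 200000  (can reach several hundred thousand if necessary; watch RAM usage)
    Quantity of instances  : increase gradually while watching the Queue and Inserted records graphs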

    ZSA Parsing Statistics

    Expand the row to see the statistics of the ZSA parser. This is used to control the parser operation and can help you determine if more instances are needed.

    Parsers Charts - CSV

    Four graphs help you track the performance of the CSV parser:

    • (1) The "Parsed records" graph shows the number of parsed records in the last 15 seconds, as well as ignored records (due to unknown types, which is rare) and error records.
    • (2) The "Inserted records" graph displays the number of records waiting for insertion and those that have been inserted.
    • (3) The "Usage" graph displays the average activity percentage of all instances of the parser. If the usage value exceeds 90% for more than 5 minutes and autoscaling is enabled, a new instance will be automatically created.
    • (4) The "Queue" graph displays the number of processed messages (dequeue) and the number of stored messages (current).

    Parsers Charts - RAW

    Five graphs help you track the performance of the RAW parser:

    • (1) The "Parsed records" graph shows the number of parsed SMF in the last 15 seconds, and the errors too.
    • (2) The "Queue" graph displays the number of processed messages (dequeue) and the number of stored messages (current).
    • (3) The "Inserted records" graph displays the records inserted into the database, and error records and also the records without a parser or records with multiple parsers.
    • (4) The "Usage" graph displays the average activity percentage of all the instances of the parser. If the usage value exceeds 90% for more than 5 minutes and autoscaling is enabled, a new instance will be automatically created.
    • (5) The "RAW records bunches" graph displays the number of processed groups of messages (dequeue) and the number of stored groups of messages (current).


    Current Limitations

    • After a loadviews, all the messagings are restarted.
    • If the ZSA server restarts, a messaging is automatically started only if the Auto start option is selected in its connector configuration.
    • If a messaging encounters an error, it is stopped and not restarted.


    How to Find the Right Configuration

    The aim is to enable ZSA to absorb all DXQUEUE records without overloading ZQM.

    The configuration of each agent is therefore important and must be carried out according to the size of the LPARs.

    Parser

    The parser processes all data from the mainframe and inserts it into the database. There are therefore two possible points of contention:

    • The parser does not process records fast enough.
    • Database insertions are not fast enough.

    Parser Processing

    Open the parser statistics and observe the graph named "Queue". The "Number of stored messages" curve should be stable and close to 0. If this is not the case, you have a potential problem. Performance can be enhanced via two properties:

    • Number of records
    • Quantity of instances

    Increasing the number of records will reduce the number of calls made to ZQM, thus reducing the incompressible network load and improving response time. The default value is 5000, but it can be increased drastically (100,000, for example). The aim is to ensure that processing a defined message packet takes no more than a few seconds.

    If you're in a saturation situation, you can check that the value you've set is not too high. Open the "Queues" graph, look at the value of the "Number of processed messages" curve and divide it by 5, then divide the value obtained by the value of "Number of records". This will give the number of seconds required to process a message packet. This number must remain below 5 (note that this number is only valid in the event of saturation).
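
    A worked example of that check, with purely illustrative numbers, in the same shell style as the scripts below:

    processed=125000   # illustrative value read from the "Number of processed messages" curve
    records=25000      # illustrative value of the "Number of records" property
    echo $(( processed / 5 / records ))   # -> 1, i.e. about 1 second per message packet, below the 5-second limit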

    If you've already increased the "Number of records" property but you're still in a saturation situation, you can increase the number of parser instances. This will allow more CPU to be allocated to message processing. Beware, however, that increasing the number of instances increases the competition for ZQM access, which can slow it down. The aim is therefore to increase the number of instances to benefit from parallel processing, while avoiding overloading ZQM access.

    Open the "Queues" graph, look at the value of the "Number of processed messages" curve. At the same time, increase the number of parser instances. You should see an increase in this number after a few minutes. As long as you remain saturated, repeat the operation. If the number no longer increases, or even decreases, then return to the previous value. If this doesn't resolve the ZQM saturation, please contact our support team with your configuration, analysis and environment specifications.

    Parser Inserts into the Database

    Open the parser statistics and observe the graph named "Inserted records". Look at the value of "Waiting for bulk size or flush time". This value should be stable and close to zero. If this value increases and does not seem to decrease, you need to modify your configuration. This is made possible by two properties:

    • Bulk insert quantity
    • Quantity of instances

    Increasing the "Bulk insert quantity" value will reduce the number of calls made to the database for mass-produced SMFs. This value can be increased to several hundred thousand if necessary. Be careful, however, as this will increase the RAM consumption of both ZSA and your database.

    If you have already increased the "Bulk insert quantity" property but you are still in a saturation situation, you can increase the number of parser instances. This will allow more CPU to be allocated to message inserts. Beware, however, that increasing the number of instances increases the competition for ZQM and database access, which can slow them down. The aim is therefore to increase the number of instances to benefit from parallel processing, while avoiding overloading ZQM and database access.

    Open the "Inserted records" graph, look at the value of the "Inserted in last 30 secs" curve. At the same time, increase the number of parser instances. You should see an increase in this number after a few minutes. As long as you remain saturated, repeat the operation. If the number no longer increases, or even decreases, then return to the previous value. If this doesn't resolve the database saturation, please contact our support team with your configuration, analysis and environment specifications.

    How to Automatically Launch Messagings (only for versions prior to 4.1)

    ZSA does not provide the ability to launch the service automatically. However, it is possible to use a scheduled task to call the ZSA API and launch the messagings.

    Windows Scheduler

    You can use Windows Task Scheduler to automate your messaging task. All you need is the ability to call the ZSA URL.

    For this example, we are going to use the curl command-line tool.

     1. Create a new script called startmessaging.cmd with the following line:

    curl -k -X POST https://ZETALY_HOST/zsa/api/v1/servers/MESSAGING_NAME/messaging/start

    Replace ZETALY_HOST with your Zetaly host, and MESSAGING_NAME with the name of the messaging to launch. Repeat the line for each messaging you want to launch.
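
    For example, a startmessaging.cmd that launches two messagings could look like this (the names CSV_PARSER and RAW_PARSER are only examples; use your own messaging names):

    curl -k -X POST https://ZETALY_HOST/zsa/api/v1/servers/CSV_PARSER/messaging/start
    curl -k -X POST https://ZETALY_HOST/zsa/api/v1/servers/RAW_PARSER/messaging/start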

    2. Create the task. Run the following command in an Administrative console: 

    schtasks /CREATE /TN ZetalyMessaging /RU "SYSTEM" /SC daily /ST 05:00 /TR "PATH_TO_SCRIPT\startmessaging.cmd"

    Replace PATH_TO_SCRIPT with the folder used to create the script in step 1.

    You can replace the /ST and /SC parameters with the start time and periodicity you want. See the Windows documentation for more details.
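
    For example, to run the script every hour instead of daily at 05:00, a variant along these lines should work (check the Windows schtasks documentation for your environment):

    schtasks /CREATE /TN ZetalyMessaging /RU "SYSTEM" /SC hourly /TR "PATH_TO_SCRIPT\startmessaging.cmd"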

    Docker (crontab-ui)

    Get the Image

    To schedule messaging using a Docker container, Zetaly recommends using crontab-ui.

    If needed, Zetaly provides an s390x image for ZCX.


    Load the tar file:

    docker load -i crontab-ui.tar.gz

    Run the Container

    Once the image is available in your Docker environment, you can run a container with crontab-ui:

    docker run -d -p 8000:8000 alseambusher/crontab-ui

    Scheduling

    At the top of the UI, click "New":

    Create a job with the desired scheduling. This job should first run the following command line:

    token=$(curl -k -H "Accept: application/json" -H "Content-type: application/json" -X POST https://ZETALY_HOST/zhb/api/v1/users/public/login -d '{"username":"USERNAME","password":"PASSWORD"}' | python3 -c "import sys, json; print(json.load(sys.stdin)['token'])");

    Replace ZETALY_HOST with your Zetaly host, and USERNAME/PASSWORD with your user credentials.

    Technical User
    It is recommended to create a specific user to execute this script. If a generic user is used, running the script will disconnect it.

    After this first line, add as many lines as needed using the following command line:

    curl -ki -H "token: $token" -X POST https://ZETALY_HOST/zsa/api/v1/servers/MESSAGING_NAME/messaging/start

    Replace ZETALY_HOST with your Zetaly host, and MESSAGING_NAME with the name of the messaging to launch.

    Note: Host IP
    Docker provides the host named "host.docker.internal" if you want to refer to the Docker host inside a container.
    Note: Multiple commands
    You can add several commands separated by ;
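
    For example, the complete job command can chain the login and one start call on a single line (ZETALY_HOST, USERNAME, PASSWORD and MESSAGING_NAME are placeholders):

    token=$(curl -k -H "Accept: application/json" -H "Content-type: application/json" -X POST https://ZETALY_HOST/zhb/api/v1/users/public/login -d '{"username":"USERNAME","password":"PASSWORD"}' | python3 -c "import sys, json; print(json.load(sys.stdin)['token'])"); curl -ki -H "token: $token" -X POST https://ZETALY_HOST/zsa/api/v1/servers/MESSAGING_NAME/messaging/start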

    For example, schedule the job to run every hour. Once the job is created, you can try it. When the job is working, you can save it to cron.


    Linux

    Prerequisite
    For this example, the Linux installation must have:
    • Crontab
    • Bash
    • Python 3
    • Curl

    Script

    Create a script as follows:

    # To update
    action=start # start/stop
    zetaly_host=https://host
    zetaly_user=user
    zetaly_password=pwd
    # List all messagings or parsers to start. Do not use commas!
    messagings=(CSV_PARSER RAW_PARSER)
    # Set to 1 for more information
    debug=0
    
    #####################################
    # Login (get token)
    #####################################
    body="{\"username\":\"$zetaly_user\",\"password\":\"$zetaly_password\"}"
    url="$zetaly_host/zhb/api/v1/users/public/login"
    curlCmd="curl -s -k -H \"Accept: application/json\" -H \"Content-type: application/json\" $url -d '$body' --trace './login.log' -o './login.json'"
    
    if [ "$debug" = 1 ]; then
            echo "***********"
            echo "Url to call: $url"
            echo "Body to send: $body"
            echo "Curl command to execute: $curlCmd"
            echo "***********"
    fi
    
    eval $curlCmd
    result=$(cat ./login.json)
    
    if [ "$debug" = 1 ]; then
            echo "***********"
            echo "Curl result: $result"
            echo "***********"
    fi
    
    token=$(echo $result | python3 -c "import sys, json; print(json.load(sys.stdin)['token'])")
    
    if [ "$debug" = 1 ]; then
            echo "***********"
            echo "Token: $token"
            echo "***********"
    fi
    
    
    #################################
    # Start messaging if logged
    #################################
    if [ -n "$token" ]; then
            for messaging in "${messagings[@]}"
            do
                    url="$zetaly_host/zsa/api/v1/servers/$messaging/messaging/$action"
                    curlCmd="curl -ki -X POST $url -H 'token: $token' -H 'accept: */*' -d ''"
                     if [ "$debug" = 1 ]; then
                            echo "***********"
                            echo "Messaging: $messaging"
                            echo "Url to call: $url"
                            echo "Curl command to execute: $curlCmd"
                            echo "***********"
                    fi
                    eval $curlCmd
            done
    else
            echo "!!!!!!!!!!!!!!!! Login failed !!!!!!!!!!!!!!!!"
    fi
    echo ""

    At the beginning, edit the following lines:

    zetaly_host=https://host
    zetaly_user=user
    zetaly_password=pwd
    messagings=(CSV_PARSER RAW_PARSER)

    Test the script using the command:

    bash './myscript'

    Example output:

    HTTP/1.1 200 OK
    X-Powered-By: Express
    content-length: 0
    connection: close
    date: Tue, 23 Jul 2024 09:33:05 GMT
    server: Kestrel
    
    HTTP/1.1 200 OK
    X-Powered-By: Express
    content-length: 0
    connection: close
    date: Tue, 23 Jul 2024 09:33:05 GMT
    server: Kestrel

    Crontab

    Create a crontab using:

    crontab -e

    Add a line with your script execution:

    * * * * * bash /home/myuser/script/start-connectors.sh
    Cron format
    If you need help creating your cron expression, you can use https://crontab.guru/
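
    For example, to run the script at the start of every hour instead of every minute:

    0 * * * * bash /home/myuser/script/start-connectors.sh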

    Once edited, the script will be launched according to your cron.

    You can check logs using:

    grep 'home/myuser/script/start-connectors' /var/log/syslog


    How to Mass Create Connectors

    It is sometimes necessary to create a large number of connectors. You can automate the creation process by using ZSA's REST APIs.

    All APIs are available and can be viewed at this address: /zsa/api/v2/q/swagger-ui/

    The creation of a connector uses the POST API /zsa/api/v2/connectors

    Example of full body:

    {
       "label":"Lpar",
       "type":"CSV",
       "host":"45.21.456.89",
       "port":9999,
       "autoStart":true,
       "numberOfRetries":2,
       "retryDelay":30,
       "tags":[
          
       ],
       "properties":[
          {
             "name":"NbInstances",
             "value":"1"
          },
          {
             "name":"MsgSize",
             "value":"5000"
          },
          {
             "name":"ConnectionSleepTime",
             "value":"5"
          },
          {
             "name":"AutoScaleInstances",
             "value":"false"
          }
       ]
    }

    Not all attributes are mandatory; the mandatory attributes are the following:

    {
       "label":"Lpar",
       "type":"CSV",
       "host":"45.21.456.89",
       "port":9999
    }
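
    A minimal call using only the mandatory attributes might look like this (host, token and connector values are placeholders; the token is obtained through the login call used in the scripts above):

    curl -s -k -H "token: $token" -H "Content-type: application/json" -X POST https://ZETALY_HOST/zsa/api/v2/connectors -d '{"label":"Lpar","type":"CSV","host":"45.21.456.89","port":9999}'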

    Example of shell script to mass create connectors:

    # To update
    zetaly_host=https://zetaly
    zetaly_user='user'
    zetaly_password='password'
    # List all connector names/hosts/ports
    name=(Lpar1 Lpar2)
    host=(localhost localhost2)
    port=(9999 9999)
    # Connector types to create
    types=(CSV RAW)
    # Set to 1 for more information
    debug=0
    
    #####################################
    # Check values
    #####################################
    if [ "${#name[@]}" != "${#host[@]}" ] || [ "${#name[@]}" != "${#port[@]}" ]; then
    	echo "Error: All arrays does not have the same number of elements."
    	exit 8
    fi
    if [ "${#types[@]}" = "0" ]; then
    	echo "Error: Type arrays does not hae any value."
    	exit 8
    fi
    
    #####################################
    # Login (get token)
    #####################################
    body="{\"username\":\"$zetaly_user\",\"password\":\"$zetaly_password\"}"
    url="$zetaly_host/zhb/api/v1/users/public/login"
    curlCmd="curl -s -k -H \"Accept: application/json\" -H \"Content-type: application/json\" $url -d '$body' --trace './login.log' -o './login.json'"
    
    if [ "$debug" = 1 ]; then
    	echo "***********"
    	echo "Url to call: $url"
    	echo "Body to send: $body"
    	echo "Curl command to execute: $curlCmd"
    	echo "***********"
    fi
    
    eval $curlCmd
    result=$(cat ./login.json)
    
    if [ "$debug" = 1 ]; then
    	echo "***********"
    	echo "Curl result: $result"
    	echo "***********"
    fi
    
    token=$(echo $result | python3 -c "import sys, json; print(json.load(sys.stdin)['token'])")
    
    if [ "$debug" = 1 ]; then
    	echo "***********"
    	echo "Token: $token"
    	echo "***********"
    fi
    
    
    #################################
    # Create messaging
    #################################
    if [ -n "$token" ]; then
    	url="$zetaly_host/zsa/api/v2/connectors"
    	maxIndex="${#name[@]}"
    	for (( i=0; i<maxIndex; i++ ))
    	do
    		connectorName="${name[i]}" 
    		connectorHost="${host[i]}" 
    		connectorPort="${port[i]}"
    		for type in "${types[@]}"
    		do
    			body="{\"label\":\"$connectorName\",\"type\":\"$type\",\"host\":\"$host\",\"port\":$port, \"autoStart\":true}"
                curlCmd="curl -s -k -H 'token: $token' -H \"Accept: application/json\" -H \"Content-type: application/json\" $url -d '$body'"
    			if [ "$debug" = 1 ]; then
    				echo "***********"
    				echo "Messaging: $messaging"
    				echo "Url to call: $url"
    				echo "Curl command to execute: $curlCmd"
    				echo "***********"
    			fi
    			eval $curlCmd
    		done
    	done
    else
    	echo "!!!!!!!!!!!!!!!! Login failed !!!!!!!!!!!!!!!!"
    fi
    echo ""
    



