    Troubleshooting

    If you have a problem with the ZSA module, please contact us and provide all of the following:

    • Logs
    • How to reproduce the issue

    Logs

    Linux/Windows

    Application logs

    ZSA is composed of 4 technical parts. Each part has its own logs.

    • ZSA: Application logs can be found at %Installation Directory%/zsa/logs/log-{data}.log
    • ZSAC: Application logs can be found at %Installation Directory%/zsac/logs/log-{data}.log
    • ZSAJ: Application logs can be found at %Installation Directory%/logs/zsaj.log.*
    • ZQM: Application logs can be found at %Installation Directory%/logs/zqm.log.*
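
    To gather all of these application logs into a single archive on Linux, here is a minimal sketch (the /opt/zetaly installation directory below is an assumption; adjust it to your own setup):

    # Installation directory used here is an assumption; replace it with your own
    INSTALL_DIR=/opt/zetaly

    # ZSAJ and ZQM both write under $INSTALL_DIR/logs, so one entry covers both
    tar -cvzf zsa-app-logs.tar.gz \
        "$INSTALL_DIR/zsa/logs" \
        "$INSTALL_DIR/zsac/logs" \
        "$INSTALL_DIR/logs"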

    Messaging logs

    • ZSA: Messaging logs can be found at %Installation Directory%/zsa/persist/logs/{parser}_{instance}.log
    • ZSAC: Messaging logs can be found at %Installation Directory%/logs/

    Windows Known Issue

    Messaging logs can be found at C:/Windows/System32/logs instead of the regular Installation Directory.

    Docker

    Connect to your environment over SSH and run:

    docker cp zetaly_zsa:/zetaly/persist/logs ./logs1
    docker cp zetaly_zsa:/zetaly/logs ./logs2
    
    tar -cvf logs1.tar logs1
    tar -cvf logs2.tar logs2
    

    Please send us the logs1.tar and logs2.tar files.
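
    If the ZSA container is not named zetaly_zsa in your environment, you can look up its actual name before running the commands above (the "zsa" filter below is only an example):

    # List running container names and keep those containing "zsa"
    docker ps --format '{{.Names}}' | grep -i zsa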

    Logs from database

    Postgres

    From pgAdmin:

    COPY (select * from public.log) TO 'C:\Temp\result.csv'  WITH DELIMITER ',' CSV HEADER;
    

    From a bash shell on the server:

    psql -U postgres -d zetalydb -c "\copy public.log TO '/tmp/logs.csv' delimiter '|' csv"
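
    If the log table is large, you may want to export only a given time window. A minimal sketch, assuming the public.log table has a timestamp column (the column name ts below is hypothetical; use your actual column name):

    # Export only a time window (the "ts" column name is a placeholder)
    psql -U postgres -d zetalydb -c "\copy (SELECT * FROM public.log WHERE ts >= '2024-10-01' AND ts < '2024-10-08') TO '/tmp/logs.csv' delimiter '|' csv"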
    

    If you are using ZCX, you first need to connect to the container:

    1. Execute a bash in the container:
    docker exec -it <container> /bin/bash

    2. Execute the copy:
    psql -U <postgres_user> -d <database_name> -c "\copy public.log TO '/tmp/logs.csv' delimiter '|' csv"

    Reminder: the default user and database name are postgres and zetalydb.

    3. Exit the bash:
    exit

    4. Copy the file from the container to ZCX:
    docker cp <container>:/tmp/logs.csv .
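
    The same steps can also be run non-interactively from the ZCX side; a minimal sketch, assuming the default postgres user and zetalydb database:

    # Run the export inside the container, then copy the file out (defaults assumed)
    docker exec <container> psql -U postgres -d zetalydb -c "\copy public.log TO '/tmp/logs.csv' delimiter '|' csv"
    docker cp <container>:/tmp/logs.csv .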
    

    SQL Server

    sqlcmd -S . -d ZetalyDb -E -s, -W -Q "SELECT * FROM dbo.Log" > result.csv
    

    https://solutioncenter.apexsql.com/how-to-export-sql-server-data-to-a-csv-file/
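
    As an alternative to sqlcmd, the bcp utility can export the same table to CSV; a minimal sketch, assuming a trusted connection to the local default instance as in the example above:

    bcp "SELECT * FROM ZetalyDb.dbo.Log" queryout result.csv -S . -T -c -t,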

    Statistics

    You can get a tool from Zetaly support that automatically collects all of this information over a long period of time.
    If you just need to check the current status of your MVS servers, there is a stats icon on each MVS server.

    Docker

    # Load the image in docker
    docker load --input zsa-statistics.tar.gz
    
    # Create the container
    docker create --name zsa-statistics zsa-statistics
    
    # Get the properties file so it can be edited
    docker cp zsa-statistics:/application.properties application.properties
    
    # Update the file
    nano application.properties
    
    # Upload the updated file to the container
    docker cp application.properties zsa-statistics:/application.properties
    
    # Start the container
    docker start zsa-statistics
    
    # Copy the created file
    docker cp zsa-statistics:/statistics ./statistics
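
    To check that the collection has finished before running the copy step above, you can follow the container output:

    # Follow the container output (Ctrl+C to stop following)
    docker logs -f zsa-statistics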
    

    Configuration

    The application.properties file looks like this:

    zhb_url=http://localhost
    zsa_url=http://localhost
    username=username
    password=password
    

    You have to update it with your installation information: the URLs of ZSA and ZHB (most of the time this is the frontal URL).
    The username/password must belong to a Zetaly user with access to ZSA.
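
    For example, a completed file might look like this (the hostname and credentials below are purely illustrative; use your own frontal URL and a Zetaly user with access to ZSA):

    zhb_url=http://zetaly-frontal.example.com
    zsa_url=http://zetaly-frontal.example.com
    username=monitoring_user
    password=monitoring_password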

    Reload data from SMF

    If one of your partitions has suffered a significant loss of data coming from SMF, it is possible to reload the data.

    Procedure

    This can be a cumbersome procedure, and should not be undertaken lightly. It is preferable to find the cause of the data loss and correct it before carrying out data recovery. See Monitoring.

    ZSA does not automatically detect possible duplicate injections. It is therefore essential to target the recovery perimeter before carrying out the procedure.

    The steps in the procedure are described below:

    1. Extract and check the SMFs to be reintegrated
    2. Delete table data
    3. Re-inject data into LPAR DXQUEUE

    There are two types of table:

    • Data tables: These contain the raw records from the SMFs.
    • Historical or view tables: These contain data aggregations.

    When data is aggregated, the notion of SID may no longer exist. It is therefore impossible to correct these tables by reloading only the data from a single LPAR. For SMF data, the tables concerned are as follows:

    How to choose interval

    When you've lost data and need to recover it, it's important to choose the right recovery interval. Because some tables are aggregations of records, reloading data that has already been aggregated will duplicate it and make the aggregation incorrect.
    You therefore need to know the largest aggregation period in use over the time interval where the data loss occurred. To find out this interval, please refer to your Loadviews and History aggregation configuration.

    Extract and check the SMFs to be reintegrated

    The IBM IFASMFDP utility can be used to filter SMF records from the global file. See IBM documentation for more details: https://www.ibm.com/docs/en/zos/2.4.0?topic=programs-using-ifasmfdp-smf-data-set-dump-program

    Delete table data

    Zetaly provides an SQL script to delete data from tables according to a defined time interval and SID. This file must be retrieved, customized, and run on the environment.

    Validation/Commit/Rollback

    This file must be checked by several users before execution. It is also advisable to disable automatic commit to enable rollback in the event of errors during query execution.

    File to be customized for deletions:

    clean_interval_smf.sql

    Instructions:

    • Modify START_TIME with the deletion start date (included).
    • Modify END_TIME with the deletion end date (excluded).
    • Modify SID with the target SID.
    • Execute the script.
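
    As recommended above, it is safer to run the script with automatic commit disabled so that it can still be rolled back. A minimal sketch with psql, assuming the default postgres user and zetalydb database:

    # Open a session (defaults: user postgres, database zetalydb)
    psql -U postgres -d zetalydb

    -- inside psql: disable autocommit so the deletions stay in an open transaction
    \set AUTOCOMMIT off
    -- run the customized deletion script and review the reported row counts
    \i clean_interval_smf.sql
    -- keep the changes...
    COMMIT;
    -- ...or undo them if anything looks wrong
    ROLLBACK;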

    Re-inject data into LPAR DXQUEUE

    The steps for reinserting the data are as follows:

    • Convert raw SMFs into CSV files
    • Push SMFs into DXQUEUE

    Convert SMF to CSV

    The DXPLSMF utility reads the SMF dump file specified on the SMFIN DD and writes the selected records to the CSVOUT dataset.
    JCL example:

    //DXPLSMF JOB MSGLEVEL=(1,1),NOTIFY=&SYSUID
    //DXPLSMF EXEC PGM=DXPLSMF 
    //STEPLIB DD DISP=SHR,DSN=ZETALY.DXPV7000.DXPLLOAD
    //SMFIN DD DISP=SHR,DSN=smf.dump.file
    //CICSDIC DD DISP=SHR,DSN=cics.dictionary.file
    //CSVOUT DD DSN=dxpl.csvout,UNIT=SYSALLDA,
    // DISP=(NEW,CATLG),VOL=SER=volser,
    // SPACE=(CYL,(100,50),RLSE)
    //SYSPRINT DD SYSOUT=* 
    //DXPLIN DD * 
    PROD ALL
    

    Push data to DXQUEUE

    The DXPUT program reads all the records from a sequential file (CSVFILE) and adds them to the end of the DXQUEUE queue to be sent to the ZETALY server.
    JCL example:

    //DXPUT EXEC PGM=DXPUT 
    //STEPLIB DD DISP=SHR,DSN=ZETALY.DXPV6400.DXPLLOAD 
    //CSVFILE DD DSN=ZETALY.DXPV6400.CSVOUT, 
    // DISP=SHR 
    

    Reload data from utilities

    Reinjecting data from utilities is almost the same as for SMFs. Each utility uses a different data source and produces a CSV file, which is then injected.
    It is advisable to back up the files produced by the utilities before injection, so that the data can be reinjected later. Without this backup, reinjection is impossible.

    You therefore need to identify the time at which the utility ran, in order to remove the data from the corresponding time interval. As with SMF, you must also use a time interval corresponding to the maximum aggregation granularity of the period. To find out this interval, please refer to your Loadviews and History aggregation configuration.

    Unlike SMFs, the notion of SID doesn't always make sense for harvested data. It is therefore necessary to identify the criterion best suited to your data loss. For example, if the data loss occurred on a particular IMS, it is best to delete that IMS's data using the IMS ID, then reload the corresponding data.

    Delete table data

    Zetaly does not provide a SQL script to delete data from these tables. However, we've provided some examples that you can adapt to your own use of the utilities.

    Batch VTS (Hydra) records

    The example provided contains only a filter on the collection date. Add criteria according to your use case.

    bvirVirtualData:

    DELETE FROM bvirVirtualData
    WHERE endTime >= '2022-07-26 18:00:00'
      AND endTime < '2022-07-26 19:00:00';
    

    bvirAdapterData:

    DELETE FROM bvirAdapterData
    WHERE endTime >= '2022-07-26 18:00:00'
      AND endTime < '2022-07-26 19:00:00';
    

    bvirHsmData:

    DELETE FROM bvirHsmData
    WHERE endTime >= '2022-07-26 18:00:00'
      AND endTime < '2022-07-26 19:00:00';
    

    bvirGridData:

    DELETE FROM bvirGridData
    WHERE endTime >= '2022-07-26 18:00:00'
      AND endTime < '2022-07-26 19:00:00';
    

    Catalog records

    The example provided contains a filter on the collection date and catalog name. Add criteria according to your use case.

    catData:

    DELETE FROM catData
    WHERE endTime >= '2022-07-26 18:00:00'
      AND endTime < '2022-07-26 19:00:00'
      AND catalogname = 'CATALOG';
    

    catHist:

    DELETE FROM catHist
    WHERE endTime >= '2022-07-26 18:00:00'
      AND endTime < '2022-07-26 19:00:00'
      AND catalogname = 'CATALOG';
    

    IMS records

    The example provided contains a filter on the collection date and imsid. Add criteria according to your use case.

    imsData:

    DELETE FROM imsdata
    WHERE endTime >= '2022-07-26 18:00:00'
      AND endTime < '2022-07-26 19:00:00'
      AND imsid = 'ID';
    

    imsHist:

    DELETE FROM imsHist
    WHERE startTime >= '2022-07-26 18:00:00'
      AND startTime < '2022-07-26 19:00:00'
      AND imsid = 'ID';
    

    DCollect records

    The example provided contains a filter on the collection date and sid. Add criteria according to your use case.

    volData:

    DELETE FROM volData
    WHERE endTime >= '2022-07-26 18:00:00'
      AND endTime < '2022-07-26 19:00:00'
      AND sid = 'SID';
    

    volHist:

    DELETE FROM volHist
    WHERE startTime >= '2022-07-26 18:00:00'
      AND startTime < '2022-07-26 19:00:00'
      AND sid = 'SID';
    

    invData:

    DELETE FROM invData
    WHERE endTime >= '2022-07-26 18:00:00'
      AND endTime < '2022-07-26 19:00:00'
      AND sid = 'SID';
    

    invHist:

    DELETE FROM invHist
    WHERE startTime >= '2022-07-26 18:00:00'
      AND startTime < '2022-07-26 19:00:00'
      AND sid = 'SID';
    

    Push data to DXQUEUE

    The DXPUT program reads all the records from a sequential file (CSVFILE) and adds them to the end of the DXQUEUE queue to be sent to the ZETALY server. Any CSV file produced by the utilities can be reinjected.

    JCL example:

    //DXPUT EXEC PGM=DXPUT 
    //STEPLIB DD DISP=SHR,DSN=ZETALY.DXPV6400.DXPLLOAD 
    //CSVFILE DD DSN=ZETALY.DXPV6400.CSVOUT, 
    // DISP=SHR 
    

    Annex

    If you need more detailed information about the DX* utilities, please refer to: ZSA on the Mainframe (v7.0)

