Vision Administration: Production Processing

Overview

Many Vision installations expect to receive various data feeds on a regular basis. Once the data is saved in the Vision database, other updates that rely on this data can be performed. In addition, standard production reports can be generated based on the completion of one or more updates.

It is useful to automate the production cycle so that standard updates and report generations can be scheduled to run each night. There are a number of ways that you can automate your daily production cycle. A comprehensive set of scripts has been provided for Unix environments as a template for this automation. These tools enable you to:

  • Define any number of update and report jobs that can be scheduled to run on a daily or as-needed basis.

  • Assign dependencies to jobs so that specific tasks will not begin until other tasks have completed.

  • Get a quick snapshot of the status of the current production cycle.

These scripts can be used directly or can serve as a basis for designing your own production processing. With the exception of the last section, the remainder of this document refers to the production management tools included with Unix installations. The last section describes the building blocks you will need if you design your own production processing.


Overview of the Production Process

A job can be thought of as any task that needs to be executed. For example, an update job could update security prices using a file supplied by an external vendor. A report job could display a list of any security with a price change of more than ten percent. Any job can be dependent on one or more conditions being met. For example, the security price update job requires that the external file of prices is available. A portfolio holdings update job requires that the security and portfolio master updates for the day have completed successfully. The price change report job requires that the security price update job has completed. The rules for defining dependency conditions are stored in files known as okay scripts.

You can create a job and an optional okay script for any updates and reports that you want to run on a regular or as-needed basis. In addition, tools are provided to submit any ad-hoc updates of Vision code and data feeds that you want to commit to the database.

The production processing is managed by various file tokens, files whose existence controls or reflects specific processing events. For example, you could manually create a token to schedule a job or to indicate that a data file is ready for processing. The production cycle creates tokens to indicate that specific jobs have been completed or canceled. Token files are used extensively during the production cycle.
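A file token is just an ordinary (usually empty) file whose presence is tested with a file-existence check. The mechanism can be sketched in a few lines of Bourne shell (the temporary directory below is an illustrative stand-in for /localvision/production/status):

```shell
# Illustrative sketch: tokens are ordinary empty files; creating one
# schedules work, and testing for one gates work.
statusDir=$(mktemp -d)            # stand-in for /localvision/production/status
mkdir -p "$statusDir/pending" "$statusDir/tokens"

# "Schedule" a job by creating its pending token
touch "$statusDir/pending/Prices"

# A later step checks for the token before acting
if [ -f "$statusDir/pending/Prices" ]; then
    echo "Prices job is pending"
fi
```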

Each night, cron starts a process known as the daily.daemon. This daemon performs any general maintenance activities required and starts the scripts that manage the update and reporting queues. The nightly daemon runs until it is terminated or until the next night's daemon begins. The daemon can be scheduled to run on business nights only (i.e., Monday through Friday) or on any other schedule that suits your organization's needs.


Installation

The Vision Server Installation for Unix servers includes a comprehensive set of scripts that can be used to automate your daily production cycle. No special installation or modifications are needed to use these tools.

If you are currently using the VAdmin Module to submit data feeds and Vision script updates, you should modify the .cshrc and the .login files for the Vision Administrator user code (e.g., dbadmin) to include the following:

    #---  in .cshrc
    source /localvision/include/dba.cshrc.inc
    
    #---  in .login
    source /localvision/include/dba.login.inc
This will set a number of environment variables, including some that are used by the VAdmin Module, to control updates in a production environment.

You will probably want to use cron (or an equivalent facility) to start a new production cycle each business night using an entry similar to:

    00 22 * * 1-5   csh -f /localvision/production/admin/daily.daemon


The Production Directory Structure (Unix)

The Vision Server Installation supplies two directories: /vision/ and /localvision/. The /localvision directory contains a sub-directory named production/ which manages all the directories, scripts, jobs, and tokens related to production processing.

The /localvision/production directory contains the following sub-directories:

Directory Description
Reports/ This directory contains the production report jobs and the current and prior day's report output.
Updates/ This directory contains the update jobs used for pre-defined updates and the historical logs for pre-defined and ad-hoc updates.
admin/ This directory contains the tools to manage the daily production cycle and the historical logs generated by prior production cycles.
status/ This directory shows the status of the current production cycle. The active/ directory lists the update and/or report that is currently in progress. The pending/ directory lists the names of the scheduled update and report jobs that are waiting to run during this cycle. The tokens/ directory shows the completed, canceled, and error status of jobs that have run during the current cycle. The logs/ directory contains the log files associated with each update completed during the current cycle.
utils/ This directory contains a number of utility programs.


Daily Production Cycle

The script daily.daemon, located in the directory /localvision/production/admin, controls the nightly production cycle. It is usually started by cron every Monday through Friday evening using a crontab entry similar to:

    00 22 * * 1-5   csh -f /localvision/production/admin/daily.daemon
which would start the daemon at 10:00 pm on weeknights. This entry is included for the Vision Administrator user code, referred to as dbadmin in this document.

When the daily.daemon starts, a new production cycle begins. If the prior night's production cycle is still running, it will terminate when the new cycle begins. The production cycle date is the date on which the cycle begins. This is also known as the process date. The process date is saved in an environment variable (processDate) when the daemon begins. The process date remains unchanged during the production cycle which runs until a new cycle begins or until it is terminated via manual intervention or a system reboot.

The daily.daemon performs the following steps:

  1. Sets the process date and the day of the week.
      The environment variable processDate is set to the date in YYYYMMDD format when the daemon starts. This environment variable remains constant throughout the cycle. The environment variable dayOfWeek is set to the current day of the week and is used to determine which daily setup file to read (see below).

  2. Deletes old files in the /localvision/production/status subdirectories.

  3. Sets up any special processing tokens for the current date and day of the week.

  4. Runs any scheduled Vision database maintenance activities (i.e., garbage collection, compaction, and/or segment deletion).

  5. Starts the database update daemon in background.

  6. Starts the production reporting daemon in background.

  7. Mails the output from the setup and maintenance process to the dbadmin user.

If the setup or database maintenance steps fail for any reason, then daily.daemon terminates. A message is mailed to the dbadmin user indicating that the process was halted and no updates or reports are started. If the maintenance activities complete successfully, the daily.daemon starts two new daemons: one to manage the production updates (runUpdates) and one to manage the production reports (runReports). These processes will run throughout the cycle.

After initializing the update process for the current day's cycle, the update daemon runs the process processUpdates which checks every five minutes to see if any updates are pending. After initializing the report process for the current day's cycle, the report daemon runs the process processReports which checks every six minutes to see if any reports are pending. These processes will run until terminated or until the next night's cycle begins.
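The pending-queue polling performed by processUpdates and processReports can be sketched as follows. This is a minimal Bourne-shell illustration only: the real daemons sleep between passes, honor okay scripts, and manage tokens, and the directory here is a hypothetical stand-in for status/pending.

```shell
# Sketch of one polling pass over the pending queue (illustrative only).
pending=$(mktemp -d)              # stand-in for status/pending
touch "$pending/Prices"

poll_once() {
    for job in "$pending"/*; do
        [ -f "$job" ] || continue
        echo "would run: $(basename "$job")"
    done
}

poll_once
```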

Daily Setup and Maintenance Activities

When it starts, the daily.daemon loads a command file defined for the current day of the week. The files setup.Mon, setup.Tue, etc., located in the production/admin/helpers directory, are used to create tokens that are needed on a particular day of the week. Minimally, these files are used to schedule the three main database maintenance activities to run each day by placing the following tokens in the directory /localvision/production/status/tokens:

Token Description
dsegs Schedule segment deletion to run
collect Schedule garbage collection to run
compact Schedule database compaction to run

The daily setup scripts can be modified to reflect special procedures required by your organization on specific days.

Once the daily setup script has been loaded, the daily maintenance procedures begin. If the dsegs token exists, old database segments are deleted. If the collect token exists, the garbage collection process is executed. If the compact token exists, a full compaction is run. Note that if segment deletion is scheduled, it runs first, prior to the current cycle's compaction. In other words, it removes the segment files that were compacted by the prior day's cycle. Output from the current cycle's garbage collection and compaction processing is stored in the files garbageCollect.log and compact.log in the directory /localvision/production/status/logs. A permanent copy of the output is also saved in the directory /localvision/logs with the process date added as an extension to the file name.
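The token-driven gating of the maintenance steps can be sketched as follows. The token names come from the text above; the directory and the recorded step list are illustrative stand-ins, and note the ordering: segment deletion runs before this cycle's compaction, removing the segments compacted by the prior cycle.

```shell
# Sketch of the maintenance gating: each step runs only if its token exists.
tokens=$(mktemp -d)               # stand-in for status/tokens
touch "$tokens/dsegs" "$tokens/compact"   # schedule deletion and compaction

ran=""
[ -f "$tokens/dsegs" ]   && ran="$ran dsegs"     # runs first
[ -f "$tokens/collect" ] && ran="$ran collect"
[ -f "$tokens/compact" ] && ran="$ran compact"
echo "maintenance steps:$ran"
```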

For more information about the maintenance procedures, see the document Vision Network Administration Tools.

The Status Directory

The /localvision/production/status directory provides a snapshot of the current production cycle's activities. There are six subdirectories under the status directory:

    active/ This directory shows the update and/or report job that is currently in progress.
    pending/ This directory lists the names of the scheduled update and report jobs that are waiting to run during this cycle.
    tokens/ This directory shows the completed, canceled, and error status of jobs that have run during the current cycle. This directory may include additional file tokens to set and help control job dependency conditions.
    logs/ This directory contains log files associated with each update completed during the current cycle. In addition, this directory contains the file update.log which shows the starting and ending times for each update job that has completed, the file submit.log which shows the starting and ending times for each submitted ad-hoc Vision update request that has completed, the file report.log which shows the starting and ending times for each report job that has completed, and the file errors.log which shows any errors encountered during the current cycle.
    reports/ This directory contains output files associated with each report job completed during the current cycle. It is actually a link to the directory /localvision/production/Reports/output.
    todo/ This directory shows jobs from prior cycles that did not complete successfully. These token files serve as a reminder of tasks that you may need to reschedule. They can be removed by the Database Administrator when they are no longer needed.

For example, to see pending jobs use:

    cd /localvision/production/status
    ls pending
and to see the status of completed and in progress jobs use:
    cd /localvision/production/status
    ls active tokens
Many tokens are in the form prefix.jobName where prefix is one of:

Token Prefix Definition
Done Job has completed
Cancel Job has been canceled
Error Job completed and generated an error
Warning Job completed and generated a warning
InProgress Job is currently in progress

Several other tokens will exist in the status/tokens directory. The token InProgress.yyyymmdd (where yyyymmdd is the process date) will exist if the cycle is still active. The tokens updateDaemon.inProgress and reportDaemon.inProgress will exist while the update and report daemons, respectively, are running.
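Because status tokens follow the prefix.jobName convention, a quick summary of the cycle can be produced by counting tokens per prefix. The directory and its contents below are fabricated for illustration:

```shell
# Sketch: summarize a cycle's job status by counting token prefixes.
tok=$(mktemp -d)                  # stand-in for status/tokens
touch "$tok/Done.Prices" "$tok/Error.Holdings" "$tok/Done.MiscMasters"

done_count=$(ls "$tok" | grep -c '^Done\.')
err_count=$(ls "$tok" | grep -c '^Error\.')
echo "$done_count done, $err_count errors"
```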

The logs directory contains the log files associated with each update completed during the current cycle as well as a number of summary files. To see all the logs created during the current cycle use:

    cd /localvision/production/status
    ls logs
To look at the overall process summary for updates run during the current cycle use:
    cd /localvision/production/status
    more logs/update.log
which shows the start/stop time for each job that has run. To look at the errors generated by any job in the current cycle use:
    cd /localvision/production/status
    more logs/errors.log
For each job that has run there should be a corresponding log file in the logs directory named logs/jobName.log containing details for that specific job for the current date's processing.

Scheduling Update and Report Jobs

Update jobs are scheduled by placing the job name in the status/pending directory. For example, to schedule a job named Prices, you would create the file /localvision/production/status/pending/Prices. While the production cycle is active, the pending directory is checked every five minutes to see if any update jobs are waiting. If a job is pending and all required pre-conditions are met, the job will run. The job name will appear in the status/active directory while it is running. Upon completion, the pending token will be removed. If the job completed successfully, the file token Done.jobName will be placed in the status/tokens directory where jobName refers to the job that was run (e.g., Prices). If the job generated an error, the token Error.jobName will be created in the status/tokens directory instead. If the job generated a warning, the token Warning.jobName will be created in the status/tokens directory in addition to Done.jobName.

The specific steps needed to create update job and okay scripts are described later in this document.

Report jobs are scheduled by placing the job name, prefixed by report., in the status/pending directory. For example, to schedule a report job named PriceChanges, you would create the file token named /localvision/production/status/pending/report.PriceChanges. While the production cycle is active, the pending directory is checked every six minutes to see if any report jobs are waiting. If a job is pending and all required pre-conditions are met, the job will run. The job name will appear in the status/active directory while it is running. Upon completion, the pending token will be removed. Report output will be saved in the directory /localvision/production/Reports/output.

The specific steps needed to create report job and okay scripts are described later in this document.

Production Cycle Termination and Restart

If the new production cycle is ready to start while a cycle is in progress, the new process creates the token /localvision/production/status/active/daemon.WaitingToStart to signal that it is waiting. The current cycle will then perform the following wrap-up tasks:

  • The files update.log, report.log, submit.log, listener.log, errors.log, and warnings.log in the directory /localvision/production/status/logs are copied to the directory /localvision/production/admin/logs with the log extension replaced by the process date.

  • For each update job remaining in the status/pending directory, the token Pending.jobName.yyyymmdd is created in the status/todo directory.

  • For each job that generated an Error token, the token Error.jobName.yyyymmdd is created in the status/todo directory.

By default, the update daemon runs until a new cycle is ready to begin. You can manually stop the update and/or report daemons using:

    cd /localvision/production/status
    touch pending/updates.Stop 
    touch pending/reports.Stop
The same cycle wrap-up procedures described above are executed when the cycle is manually terminated.

To restart the processing daemon after it has been stopped manually or as the result of a hardware reboot, use:

    cd /localvision/production/admin
    restart.daemon 
The update and/or report processing will be restarted if they are not currently running.

If the daily.daemon never started (e.g., because the machine was unavailable or cron was not running), you can manually start it. To ensure the correct process date, you should create a file containing the following environment variables:

    setenv processDate 19981105 # date cron should have started 
    setenv dayOfWeek   Thu      # day cron should have started
This file should be supplied as an argument to the daily.daemon. For example, if you saved your environment variables in the file /localvision/production/admin/helpers/sample.env use:
    cd /localvision/production/admin
    daily.daemon helpers/sample.env >& logs/nocron.yyyymmdd & 

Holiday Processing and Job Cancellation

To cancel the normal daily processing for holidays that occur during the business week, you can store holiday dates in the directory /localvision/production/admin/holidays. For example, to prevent the normal daily processing from running on 11/26/1998 use:

    cd /localvision/production/admin
    touch holidays/19981126 
This technique can be used to set up holiday processing in advance of the actual holiday.

To cancel a specific job in advance of the current processing date, create a token in the form:

    yyyymmdd.Cancel.jobName
in the status/tokens directory. For example, if you receive a daily file of Canadian prices and want to cancel this required job on a day that is not a US holiday but is a Canadian holiday, you could use the form:
    cd /localvision/production/status
    touch tokens/19980523.Cancel.PriceCanada 


Update Jobs

You will need to create an update job for each task that updates the Vision database on a production basis. For example, you would create an update job to:

  • Wait for a required data set to arrive and start the appropriate update script.
  • Run supplemental updates based on the completion of one or more other update jobs.

Creating Update Jobs

Update jobs are defined in the directory /localvision/production/Updates/jobs. Each job has a unique name and an associated job script, a Unix script that contains the actual processing steps needed to perform the update. This job script is named jobName.job where jobName refers to the unique name of the job. An optional okay script containing any pre-conditions for running the job may be defined as well. If you have any pre-conditions for running the job, you will also create a file named jobName.okay.

For example, you could create a job named Prices.job to perform the following steps:

  • Start a Vision session using batchvision, logging the output to a file.
  • Load the price file using the PriceFeed data feed format.
  • Update the database.
  • Check for successful completion of update.
  • Copy the log file from the update directory to the status area for convenient access.
  • Exit with a status value of 0 if the update was successful, a status of 1 if the update failed.

For example, the file /localvision/production/Updates/jobs/Prices.job could look something like:

    #!/bin/csh
    #--  File: /localvision/production/Updates/jobs/Prices.job 
    set jobName = Prices
    
    #-- setup log file 
    set workDir = /localvision/production/Updates/workArea/Internal
    set processLog = $workDir/$jobName.log.$processDate
    
    #-- Start Vision and Load the data file using the PriceFeed format
    set dataFile = /localvision/upload/feeds/prices.dat
    
    /vision/bin/batchvision -U3 << EOInput  >& $processLog        
    
    #--->  Vision input starts here   <---
    Interface BatchFeedManager
           upload: "PriceFeed" usingFile: $dataFile withConfig: NA ;
           Utility updateNetworkWithAnnotation: "$jobName Loads" ;
    ?g
    
    #--->  End of Vision Input    <---
    EOInput
    
    #--  Check job completion status
    set okay = $status
    if ($okay != 0) set okay = 1
    
    #--  copy the log to global status area
    cp $processLog $globalLogs/$jobName.log
    
    #--  exit with the completion status
    exit $okay

Update jobs can be scheduled to run on a required or as needed basis. If a job is scheduled, you may want it to wait until specific conditions are met before it actually runs. For example, you may want to schedule the Prices job to run each day but you do not want it to begin until the file prices.dat exists in a known directory. If you have one or more of these conditions that must be met before a scheduled job can begin, you can create an okay script, a Unix script that returns a status indicating if the job is ready to run. If conditions are ready, this script should return a status value of 0. If conditions are not ready, this script should return a value of 1. If conditions exist that indicate that the job should be canceled, this script should return a value of 2.

For example, the file /localvision/production/Updates/jobs/Prices.okay could look something like:

    #!/bin/csh
    #--  File: /localvision/production/Updates/jobs/Prices.okay 
    if (-f /localvision/upload/feeds/prices.dat ) exit 0
    
    exit 1
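The Prices.okay example above only uses the ready (0) and not-ready (1) statuses. A sketch of an okay script that also uses the cancel status (2) follows; the cancel-token path and file names are hypothetical, but the exit codes follow the rules described above:

```shell
# Sketch of an okay script using all three statuses (0=ready, 1=wait,
# 2=cancel). Directories are stand-ins; the cancel condition is hypothetical.
feedsDir=$(mktemp -d)             # stand-in for /localvision/upload/feeds
tokensDir=$(mktemp -d)            # stand-in for status/tokens
today=19981126
touch "$tokensDir/$today.Cancel.Prices"   # pretend a cancel token exists

if [ -f "$tokensDir/$today.Cancel.Prices" ]; then
    okayStatus=2                  # conditions indicate the job should cancel
elif [ -f "$feedsDir/prices.dat" ]; then
    okayStatus=0                  # data file is here; ready to run
else
    okayStatus=1                  # not ready yet; check again later
fi
echo "okay status: $okayStatus"
```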

Scheduling Update Jobs

To schedule an update job, you create a token containing the job name in the status/pending directory. For example, to manually schedule the Prices job:

    cd /localvision/production/status
    touch pending/Prices 
While the production cycle is active, the pending directory is checked every five minutes to see if any jobs are waiting. If a job is pending and its okay script returns the value 0 (or no okay script is defined), the job will execute. If the job succeeds (returns a status of 0), the token Done.jobName will be placed in the status/tokens directory. If the job returns a status of 1, the token Error.jobName will be created in the status/tokens directory. If the job returns a status code of 2, the tokens Done.jobName and Warning.jobName will be created in the status/tokens directory.
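The mapping from a job's exit status to outcome tokens can be sketched as follows. The token names come from the text; the directory is an illustrative stand-in for status/tokens:

```shell
# Sketch of the outcome-token mapping: exit status 0 -> Done,
# 1 -> Error, 2 -> Done plus Warning.
tokens=$(mktemp -d)               # stand-in for status/tokens
jobName=Prices
jobStatus=2                       # pretend the job exited with this status

case "$jobStatus" in
    0) touch "$tokens/Done.$jobName" ;;
    1) touch "$tokens/Error.$jobName" ;;
    2) touch "$tokens/Done.$jobName" "$tokens/Warning.$jobName" ;;
esac
ls "$tokens" | sort
```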

The starting and ending time for each update job is logged in the file update.log in the directory /localvision/production/status/logs.

Required jobs can automatically be added to the pending directory when the production cycle begins. The job tokens defined in the /localvision/production/Updates/required directory are copied to the status/pending directory at the start of the cycle. For example, to schedule the Prices job to run automatically each day:

    cd /localvision/production/Updates
    touch required/Prices 
When jobs are automatically scheduled this way, your okay script would supply the rules to indicate when processing conditions are ready. Alternatively, an independent process could place the job token in the pending directory when conditions for processing have been satisfied. A sample set of production jobs are described in the section Sample Production Environment.

To cancel a job currently in pending, you could manually remove it from the pending directory. If other jobs are dependent on this job's completion, this approach will not produce the appropriate tokens. Alternatively, you could leave the token in pending and create the token Cancel.jobName in the status/tokens directory. For example:

    cd /localvision/production/status
    touch tokens/Cancel.Prices 
When the daemon next cycles through the pending jobs, it will remove the Prices token from pending and create the Done.Prices token. The Cancel.Prices token will remain in the status/tokens directory for the remainder of the cycle.

Pre-defined Update Jobs

Your installation includes several pre-defined update jobs in the /localvision/production/Updates/jobs directory:

    DailyCleanup.job
      This job runs a standard set of procedures designed to follow the major updates for the day. All work is performed in the directory /localvision/production/Updates/workArea/Cleanup. Various Vision methods will run. Output is logged to the file logs/daily.yyyymmdd in this directory. The token DailyCleanup is included in the Updates/required directory so that this job is automatically scheduled to run each day.
    StartListeners.job
      This job is used to restart the master listeners associated with web-based access. The token StartListeners is included in the Updates/required directory so that this job is automatically scheduled to run each day. The StartListeners.okay script instructs the job to wait until the DailyCleanup update has finished.
    ShutListeners.job
      This job is used to shut down any master listeners associated with web-based access that are currently running.
    Suspend.job
      This job is used to temporarily prevent any other updates from running. It will continue to run until you remove it from pending. To temporarily suspend updates, use:
        cd /localvision/production/status
        touch pending/Suspend 
      To continue the updates, use:
        cd /localvision/production/status
        "rm" pending/Suspend 


Report Jobs

Production report jobs are managed in the /localvision/production/Reports directory. This directory includes the jobs directory which contains the actual report jobs, the output directory which contains the output generated during the current production cycle and the oldoutput directory which contains the output generated during the prior production cycle.

Production reports are executed daily unless the day has been flagged as a holiday. When the production reports daemon begins, the following steps are executed:

  • The prior cycle's output is copied from the output to the oldoutput directory in /localvision/production/Reports.
  • Unprocessed jobs remaining in the /localvision/production/status/pending directory are purged.
  • A pending job token is created for each job in the Reports/jobs directory.

Creating Report Jobs

You will need to create a report job for each production report you wish to run. Report jobs are defined in the directory /localvision/production/Reports/jobs. Each job has a unique name and an associated job script, a Vision script that contains the actual Vision code that generates the report. This job script is named jobName.job, where jobName refers to the unique name of the job. As with update jobs, you can define an optional okay script, jobName.okay, containing any pre-conditions for running the job; the rules for defining an okay script for a report job are the same as for an update job. You can also define an optional wrapup script, jobName.wrapup, which runs after the report output has been generated and can contain any additional Unix commands you require.

For example, you could create a report job named PriceChanges to display all prices that have changed by more than a certain amount since the prior date. The file PriceChanges.job would contain the actual Vision code to generate this report. The file PriceChanges.okay would contain any conditions that must be satisfied prior to starting the job, such as successful completion of the Prices update job.

The PriceChanges job could look something like:

    #--  File: /localvision/production/Reports/jobs/PriceChanges.job
    #--  Note - this is a Vision script
    
    !cutoff <- 10 ;
    !baseDate <- ^today - 1 businessDays ;
    Security masterList
       select: [ getPriceRecord date = ^my baseDate ] .
    do: [ !current <- price ;
          !prior <- :price asOf: ^my baseDate - 1 businessDays ;
          !pch <- current pctChangeTo: prior ;
          pch absoluteValue > ^my cutoff
          ifTrue: 
            [ code print: 10 ; name print ; 
              current print ; prior print ; pch printNL ;
            ]
        ] ;

The PriceChanges.okay file could look something like:

    #!/bin/csh
    #--  File: /localvision/production/Reports/jobs/PriceChanges.okay
    #--  Note - this is a Unix script
    
    #-- wait for Prices update to finish
    if (-f /localvision/production/status/tokens/Done.Prices ) exit 0
    
    exit 1

The PriceChanges.wrapup file could look something like:

    #!/bin/csh
    #--  File: /localvision/production/Reports/jobs/PriceChanges.wrapup
    #--  Note - this is a Unix script
    
    mailx dbadmin << @@@EOF
    The Price Changes Report is Done
    @@@EOF
    

Scheduling Report Jobs

To schedule a report job, you create a token containing the job name prefixed by report. in the directory /localvision/production/status/pending. All the production report jobs in the Reports/jobs directory are automatically scheduled to run at the start of each cycle, unless the day has been flagged as a holiday. To rerun a job or to schedule a new job that has been added after the current cycle began, you can manually create the appropriate token:

    cd /localvision/production/status
    touch pending/report.PriceChanges
While the production cycle is active, the pending directory is checked every six minutes to see if any jobs are waiting. If a job is pending and its okay script returns the value 0 (or no okay script is defined), the job will execute. Output from the job is stored in /localvision/production/Reports/output/jobName.out. If the output contains formatter commands, it is automatically run through the formatter utility. If there is a wrapup script defined for the job, it is executed when the job has completed.

The starting and ending time for each report job is logged in the file report.log in the directory /localvision/production/status/logs/.

To cancel a report, remove its token from the status/pending directory.

Pre-defined Report Jobs

Your installation includes several pre-defined report jobs in the /localvision/production/Reports/jobs directory:

    currencySample.job
      This report shows currencies that have had a large change in US dollar exchange rate since the prior date and currencies that have not had their exchange rates updated for a while. Sample okay and wrapup scripts for this job are also supplied.
    spaceSample.job
      This report shows disk space usage and structure counts for each object space in your current database. Its okay script prevents the job from executing until the DailyCleanup update has finished successfully.


Sample Production Environment

Sample jobs that can be used in a typical portfolio management environment are provided with all installations that include the Portfolio Management Application Layer. The following sample files can be found in the /localvision/production/Updates/jobs directory: MiscMasters.job, SecurityPlus.job, HoldingsPlus.job, HoldingsPlus.okay, PricePlus.job, and DailyCleanup.okay. For many installations, these jobs can be used as a starting point for managing daily updates.

These jobs and the production process are described in detail in the document Sample Production Environment.


Submitting Ad-Hoc Updates


The visionSubmit utility is a Unix script that allows a user to submit ad-hoc Vision code and data feeds to the update queue. This utility may be restricted to the dbadmin user, or it can be made available to other users, as appropriate.

Submitted requests are validated and added to the update queue for processing. Prior to update, these submitted jobs will appear in the directory /localvision/production/status/pending with the prefix submit. Once processed, the output from these updates will be stored in the /localvision/production/status/logs directory for the remainder of the current production cycle. Permanent copies of the submitted requests and output are retained historically in the /localvision/production/Updates/submits/posted and /localvision/production/Updates/submits/logs directories.

You can invoke the visionSubmit utility with parameters, or omit them and be prompted for the information needed to submit the request. If the command is not found, you may need to check your Unix configuration: visionSubmit is an alias for the script /localvision/production/admin/scripts/submitUpdate.
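If the alias is missing from a user's environment, a line such as the following in that user's shell startup file (e.g., .profile) restores it. This is a sketch based on the script path named above; the exact startup file depends on your shell:

```shell
# Define the visionSubmit alias used for ad-hoc submissions
# (points at the submit script shipped in the admin area)
alias visionSubmit='/localvision/production/admin/scripts/submitUpdate'
```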

Submitting Ad-Hoc Vision Code Updates

You may need to load and save new Vision scripts into your database for several reasons such as report modifications, new applications, and corrections that cannot be performed via data feed files. You may receive Vision files from your Insyte consultant and/or develop your own code.

Although you can submit a file from any directory, by convention, files containing Vision code to submit are placed in the directory /localvision/upload/vscripts. To submit the Vision code for update, type:

    visionSubmit -C
at the Unix prompt. You will be prompted as follows:
    Vision Input Code File:           #- name of code file
    
    Run in Space [3]:                 #- carriage return or alternate space 
    
    Test File Before Submitting?      #- Yes or No
If the supplied file is okay, you will see a message of the form:
    ***  Vision Code File Update Pending. File: submitCode.dbadmin.32
This indicates that this is the 32nd submit for user code dbadmin. You will see the pending token submitCode.dbadmin.32 in the /localvision/production/status/pending/ directory.

You can avoid the prompts by supplying the inputs on one line. For example:

    visionSubmit -C myfile
You will not be prompted for the file name in this case, but you will be prompted for the object space and testing option. To avoid the prompting, supply all inputs or use the -n option:
    visionSubmit -C myfile 3 no
    
    or
    
    visionSubmit -C -n myfile
    
    

Submitting Ad-Hoc DataFeed Updates

It is sometimes useful to load data using a DataFeed format for a data feed that is not part of a pre-defined update job. You can use visionSubmit to load these feeds. By convention, feed files are placed in the directory /localvision/upload/feeds, although you can submit a feed file from any directory. To submit the feed for update, type:

    visionSubmit -F
at the Unix prompt. You will be prompted as follows:
    Vision Input Feed File:              #- name of feed file
    
    Data Feed Name:                      #- name of DataFeed format
    
    Optional Configuration File Name:    #- name of optional config file 
If the supplied file is okay, you will see a message of the form:
    ***  Vision Feed File Update Pending. File: submitFeed.dbadmin.33
This indicates that this is the 33rd submit for user code dbadmin. You will see the pending token submitFeed.dbadmin.33 in the /localvision/production/status/pending/ directory.

You can avoid the prompts by supplying the inputs on one line. For example:

    visionSubmit -F currency.dat CurrencyMaster currency.cfg
    
    or
    visionSubmit -F -n currency.dat CurrencyMaster
    


Frequently Asked Questions

Question:

    I placed a job in the status/pending directory and it has not run. What is going on?
Answer:
    Check to see if you have an okay script that specifies conditions that would keep the job from running. If this is an update job, the corresponding okay script is in the directory /localvision/production/Updates/jobs. If this is a report job, the corresponding okay script is in the directory /localvision/production/Reports/jobs.

    You should also check to make sure the update and report processing are actually running.

Question:

    There are a number of files in the status/todo directory that start with the prefix Pending. What should I do with these?
Answer:
    When a new cycle begins, any jobs that are still pending in the current cycle are copied to the status/todo directory with a suffix of the process date. This is just a reminder that a particular job did not run as expected on this date. Jobs that encountered errors are also saved in this directory. Once you have taken the appropriate action (which may be doing nothing), you can remove these files.

Question:

    How can I tell if the update and report daemons are really running?
Answer:
    In Unix, you can use variations of the ps command to see the process status for a user code. If the production process is running under the user code dbadmin, you can type:
      ps -fu dbadmin
    at the Unix prompt. If the update daemon is running, you should see entries similar to:
      dbadmin 10251  1        0   Nov 3   ?   0:00 scripts/runUpdates
      dbadmin 10262 10251     0   Nov 3   ?   1:54 scripts/processUpdates 
    If the report daemon is running, you should see entries similar to:
      dbadmin 10252  1        0   Nov 3   ?   0:00 scripts/runReports
      dbadmin 10266 10252     0   Nov 3   ?   1:54 scripts/processReports

Question:

    How can I avoid waiting five or six minutes before the status/pending directory is checked again?
Answer:
    In Unix, the processUpdates and processReports tasks run the sleep 300 and sleep 360 commands each time they are done processing the current jobs in the pending queue. You can "kill the sleeps" to force either of these tasks to re-examine the pending queue. For example:
      ps -fu dbadmin | grep sleep           #- show the sleep tasks
      
      displays:
      dbadmin 22183 10262  0 13:39:37 ?         0:00 sleep 300
      dbadmin 22160 10266  0 13:36:54 ?         0:00 sleep 360
      
    To force the update daemon to re-examine the pending queue, use the Unix kill command to remove the sleep 300 task:
      kill 22183
      
    The next update job in the status/pending directory will begin processing immediately.
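Rather than reading the PID off the ps listing by hand, the lookup can be scripted. The sketch below starts its own sleep 300 as a stand-in for the daemon's, finds it with ps, and kills it; the ps output columns assume a standard procps-style installation:

```shell
#!/bin/sh
# Sketch: scripting the "kill the sleep" step. A background sleep 300
# stands in here for the one started by processUpdates.
sleep 300 &

# Find its PID as "ps -fu dbadmin | grep sleep" would, but match the
# command fields exactly and restrict to children of this shell so we
# do not match unrelated processes.
pid=$(ps -fu "$(id -un)" | awk -v ppid="$$" \
    '$3 == ppid && $8 == "sleep" && $9 == "300" {print $2; exit}')

kill "$pid"              # the daemon's loop would now re-examine the queue
wait "$pid" 2>/dev/null
echo "killed sleep $pid"
```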

Question:

    What happens to the update and report daemons if the hardware is rebooted?
Answer:
    Unless your hardware administrator has arranged to automatically start these processes, they will not be running after a reboot. You can manually restart them using:
      cd /localvision/production/admin
      restart.daemon
      
    Your system administrator can add this step to an appropriate script that is automatically run after a reboot. Note that this step should be started by the dbadmin user (or the user code defined as the Vision Administrator for your environment).
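One common way to arrange this, assuming your cron implementation supports the @reboot keyword, is an entry in the dbadmin user's crontab. This is a sketch, not part of the supplied scripts:

```shell
# dbadmin crontab entry (edit with "crontab -e").
# @reboot support varies by cron implementation; cron's PATH is minimal,
# so the script is invoked with an explicit path.
@reboot cd /localvision/production/admin && ./restart.daemon
```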


Production Environment Variable Summary

Production jobs can utilize a number of environment variables that are defined at the start of daily.daemon, including:

    Environment Variable   Definition
    processDate            current cycle's start date (yyyymmdd)
    LocalVisionRoot        /localvision
    DBbatchvision          /vision/bin/batchvision by default; can be reset to a test release
    feedFilePath           /localvision/upload/feeds - used to store data feed files
    scriptFilePath         /localvision/upload/vscripts - used to store ad-hoc Vision code changes
    statusArea             /localvision/production/status
    reportArea             /localvision/production/Reports
    updateArea             /localvision/production/Updates
    adminArea              /localvision/production/admin
    tokens                 /localvision/production/status/tokens
    statusFile             /localvision/production/status/active/status.out
    globalLogs             /localvision/production/status/logs
    updateLog              /localvision/production/status/logs/update.log
    errorLog               /localvision/production/status/logs/errors.log
    warningLog             /localvision/production/status/logs/warnings.log
    reportLog              /localvision/production/status/logs/report.log
    submitLog              /localvision/production/status/logs/submit.log
    listenerLog            /localvision/production/status/logs/listener.log
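These variables let job and wrapup scripts refer to the production areas without hard-coded paths. The sketch below shows a hypothetical job step ("myJob" is an illustrative name) appending to the update log; daily.daemon normally exports the variables, and the fallbacks here exist only so the snippet runs standalone:

```shell
#!/bin/sh
# Sketch of a hypothetical job step that logs through the production
# environment variables. The := fallbacks are for standalone use only;
# under the daemons these variables are already set.
statusArea="${statusArea:-$(mktemp -d)}"       # normally /localvision/production/status
processDate="${processDate:-$(date +%Y%m%d)}"  # current cycle's start date (yyyymmdd)
updateLog="${updateLog:-$statusArea/update.log}"

echo "myJob: completed for cycle $processDate" >> "$updateLog"
tail -n 1 "$updateLog"
```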