Pythian Blog: Technical Track

EBS: Access the PDF Outputs After the Migration to AWS

During Oracle E-Business Suite migrations from on-premises to AWS Cloud (as with any cloning exercise), you may need to preserve access to concurrent request log and output files, especially the reports. Usually it’s an easy task involving a few tricks around the FND_CONCURRENT_REQUESTS table. However, it’s not always that simple.

When a user opens Oracle Forms to view the output of a particular request, it is not the Apps-tier frmweb process that fetches the requested file; that is where the FND File Server (FNDFS) comes into action. Oracle publishes a troubleshooting guide on how to tackle general issues in this area.

As an Apps DBA, you will recognize FNDSM as the Service Manager among the concurrent managers, along with the corresponding Apps TNS listener entries pointing to the file server:

[applmgr@app_server ~]$ ps -fea | grep FNDSM
applmgr    123     1  0 Sep10 ?        00:00:16 FNDSM
[applmgr@app_server ~]$ lsnrctl status APPS_EBS

Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=app_server.domain)(PORT=1627)))
Services Summary...
Service "FNDFS" has 1 instance(s).
  Instance "FNDFS", status UNKNOWN, has 1 handler(s) for this service...
Service "FNDSM" has 1 instance(s).
  Instance "FNDSM", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully

Concurrent request runtime data

Request log files are normally created as $APPLCSF/$APPLLOG/l<request_id>.req, and request output files as $APPLCSF/$APPLOUT/o<request_id>.out. These full paths are stored in the FND_CONCURRENT_REQUESTS table in the LOGFILE_NAME and OUTFILE_NAME columns. On top of that, two node columns are tracked as well: LOGFILE_NODE_NAME and OUTFILE_NODE_NAME.

During an EBS migration or clone, there is a full set of concurrent runtime files to copy over, and the node name usually changes. If the paths remain the same, the following SQL statements preserve the log and output file references correctly.

SQL> update FND_CONCURRENT_REQUESTS set OUTFILE_NODE_NAME = 'NEW_SERVER' where OUTFILE_NODE_NAME = 'OLD_SERVER'; 
SQL> update FND_CONCURRENT_REQUESTS set LOGFILE_NODE_NAME = 'NEW_SERVER' where LOGFILE_NODE_NAME = 'OLD_SERVER'; 
SQL> commit;
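Before running the updates, it helps to see which node names the existing requests actually reference; rerunning the same query afterwards confirms nothing was missed. A quick count grouped by node name:

```sql
-- Which node names do existing requests point at, and how many rows each?
SELECT logfile_node_name, outfile_node_name, COUNT(*) AS requests
FROM   fnd_concurrent_requests
GROUP  BY logfile_node_name, outfile_node_name;
```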

I like to preserve the original paths for simplicity, and the standard purge programs will then cover the data as required. However, if the paths do have to change, Oracle’s REPLACE SQL function can be used.

SQL> update FND_CONCURRENT_REQUESTS set LOGFILE_NAME = REPLACE(LOGFILE_NAME, '/old_path/', '/new_path/') where LOGFILE_NAME like '/old_path/%';
SQL> update FND_CONCURRENT_REQUESTS set OUTFILE_NAME = REPLACE(OUTFILE_NAME, '/old_path/', '/new_path/') where OUTFILE_NAME like '/old_path/%';
SQL> commit;

XML Publisher

A significant part of the content is not just the concurrent log/output files but also the files generated by XML Publisher (PDF or other readable formats). These outputs are created by the requests but tracked in a separate table, FND_CONC_REQ_OUTPUTS. It also requires updates during a migration to avoid errors like “An error occurred while attempting to establish an Applications File Server connection with the node FNDFS_<oldhostname>.”

SQL> update FND_CONC_REQ_OUTPUTS set FILE_NODE_NAME = 'NEW_SERVER' where FILE_NODE_NAME='OLD_SERVER';
SQL> update FND_CONC_REQ_OUTPUTS set FILE_NAME = REPLACE(FILE_NAME, '/old_path/', '/new_path/') where FILE_NAME like '/old_path/%';
SQL> commit;
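The same sanity check applies to this table after the update:

```sql
-- Confirm no XML Publisher output rows still reference the old node.
SELECT file_node_name, COUNT(*) AS outputs
FROM   fnd_conc_req_outputs
GROUP  BY file_node_name;
```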

After taking care of the file references in those two tables (and copying the actual files over), the concurrent request log and output files are available to end users from the migrated environment.
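For the file copy itself, any method that preserves the directory layout under $APPLCSF works. Below is a minimal local sketch using a tar pipe; the paths and request ID are made up for illustration, and in a real migration the source side would typically stream over ssh or rsync to the new host.

```shell
# Hypothetical staging locations -- substitute the real $APPLCSF directories.
SRC=${SRC:-/tmp/old_applcsf}
DEST=${DEST:-/tmp/new_applcsf}

# Stage a sample log/output pair so the sketch is self-contained.
mkdir -p "$SRC/log" "$SRC/out" "$DEST"
touch "$SRC/log/l123456.req" "$SRC/out/o123456.out"

# Copy everything, preserving the directory layout and permissions.
# Across hosts this would typically be:
#   ( cd $APPLCSF && tar -cf - . ) | ssh applmgr@new_server "cd $APPLCSF && tar -xf -"
( cd "$SRC" && tar -cf - . ) | ( cd "$DEST" && tar -xf - )

ls "$DEST/log" "$DEST/out"
```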

Both of these tables also contain file-sizing data. On busy EBS systems, it’s a hot topic to understand what’s inside $APPLCSF, including file counts and sizes, so this data can help explain and show the utilization, as well as highlight weak spots for improvement.
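As an example of the kind of breakdown this enables, the query below sums the XML Publisher output footprint per day. It assumes the FILE_SIZE column of FND_CONC_REQ_OUTPUTS holds the size in bytes; verify the column against your release before relying on it.

```sql
-- Daily XML Publisher output footprint (file count and total size).
SELECT TRUNC(creation_date)              AS created,
       COUNT(*)                          AS files,
       ROUND(SUM(file_size)/1024/1024,1) AS total_mb
FROM   fnd_conc_req_outputs
GROUP  BY TRUNC(creation_date)
ORDER  BY created;
```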

Thank you for reading my blog. For more like this, please subscribe at the top of the page.
