Monday, December 12, 2022

Minor Update to Maximo Log Analysis Tool

The Maximo Log Analysis Tool from a previous blog entry has been updated.

It fixes some bugs:

  • Properly handles the ordering of log files if they appear in a subdirectory within the zip file.
  • Always allows the user to specify the date format.

It also adds new functionality:

  • Generates a graph of the total number of MboSets reported in Maximo against memory usage.  This is a cumulative report instead of the per-Mbo breakdown of the other graphs.

The Maximo Log Analysis Tool can be found at:

Monday, November 28, 2022

Database Error Authenticating a WebService Call

My Saturday turned out differently than I had planned.  

A customer was getting a database error while trying to authenticate a webservice call.  The first error message and stack trace were:

BMXAA6714E - The data for the next record in the mboset could not be retrieved for the SQL query select * from maxuser  where loginid = :1. See the log file for more details about the error.
java.sql.SQLException: ORA-01008: not all variables bound
	at oracle.jdbc.driver.T4CTTIoer11.processError( ~[oraclethin.jar:?]
	at oracle.jdbc.driver.T4CTTIoer11.processError( ~[oraclethin.jar:?]
	at oracle.jdbc.driver.T4C8Oall.processError( ~[oraclethin.jar:?]
	at oracle.jdbc.driver.T4CTTIfun.receive( ~[oraclethin.jar:?]
	at oracle.jdbc.driver.T4CTTIfun.doRPC( ~[oraclethin.jar:?]
	at oracle.jdbc.driver.T4C8Oall.doOALL( ~[oraclethin.jar:?]
	at oracle.jdbc.driver.T4CStatement.doOall8( ~[oraclethin.jar:?]
	at oracle.jdbc.driver.T4CStatement.doOall8( ~[oraclethin.jar:?]
	at oracle.jdbc.driver.T4CStatement.executeForDescribe( ~[oraclethin.jar:?]
	at oracle.jdbc.driver.OracleStatement.executeMaybeDescribe( ~[oraclethin.jar:?]
	at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout( ~[oraclethin.jar:?]
	at oracle.jdbc.driver.OracleStatement.executeQuery( ~[oraclethin.jar:?]
	at oracle.jdbc.driver.OracleStatementWrapper.executeQuery( ~[oraclethin.jar:?]
	at psdi.mbo.MboSet.getNextRecordData( [businessobjects.jar:?]
	at psdi.mbo.MboSet.fetchMbosActual( [businessobjects.jar:?]
	at psdi.mbo.MboSet.fetchMbos( [businessobjects.jar:?]
	at psdi.mbo.MboSet.getMbo( [businessobjects.jar:?]
	at psdi.mbo.MboSet.isEmpty( [businessobjects.jar:?]
	at [businessobjects.jar:?]
	at [businessobjects.jar:?]
	at [businessobjects.jar:?]
	at [businessobjects.jar:?]
	at psdi.iface.util.SecurityUtil.getNewUserInfo( [businessobjects.jar:?]
	at psdi.iface.util.SecurityUtil.getUserInfo( [businessobjects.jar:?]
	at psdi.iface.action.MAXActionServiceBean.secureAction( [mboejb.jar:?]
	at psdi.iface.action.MAXActionServiceBean.wsSecureAction( [mboejb.jar:?]
	at psdi.iface.action.EJSLocalStatelessactionservice_05493ca6.wsSecureAction(Unknown Source) [mboejb.jar:?]
	at psdi.iface.webservices.ActionWebServiceProxy.invokeService( [classes/:?]
	at psdi.iface.webservices.JAXWSWebServiceProvider.invoke( [classes/:?]
It was followed by this error message:

BMXAA6713E - The MBO fetch operation failed in the mboset with the SQL error code 1008. The record could not be retrieved from the database. See the log file for more details about the error.
java.sql.SQLException: ORA-01008: not all variables bound

When using DB2, the stack trace looks like:

BMXAA6714E - The data for the next record in the mboset could not be retrieved for the SQL query select * from maxuser  where loginid = :1 for read only. See the log file for more details about the error. The number of variables in the EXECUTE statement, the number of variables in the OPEN statement, or the number of arguments in an OPEN statement for a parameterized cursor is not equal to the number of values required.. SQLCODE=-313, SQLSTATE=07004, DRIVER=3.69.71
	at psdi.mbo.MboSet.getNextRecordData(
	at psdi.mbo.MboSet.fetchMbosActual(
	at psdi.mbo.MboSet.fetchMbos(
	at psdi.mbo.MboSet.getMbo(
	at psdi.mbo.MboSet.isEmpty(

That's a lot of stack traces. I've included them to make this blog entry easier to find in the future. 

So what was the problem? mxe.useAppServerSecurity was 0, a second property was also 0, and the webservice call was not passing a MAXAUTH header.  This combination resulted in a null username, which triggered the database error.

In our case, changing that second property was our solution.  Changing mxe.useAppServerSecurity or passing a MAXAUTH header would also have worked.
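For reference, a MAXAUTH header is just the Base64 encoding of "username:password", much like HTTP Basic authentication.  A minimal sketch of building the header value (the class name and credentials are made up for illustration):

```java
import java.util.Base64;

public class MaxAuthHeader {
    // MAXAUTH carries Base64("username:password").
    static String maxAuth(String user, String password) {
        return Base64.getEncoder()
                .encodeToString((user + ":" + password).getBytes());
    }

    public static void main(String[] args) {
        // On a real request you would set this as a header, e.g.:
        // connection.setRequestProperty("MAXAUTH", maxAuth(user, password));
        System.out.println(maxAuth("maxadmin", "secret"));
    }
}
```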

Wednesday, September 28, 2022

Preventing User Logins

Still Around

I have not written in a long time.  Until early this year, I was supporting a client still running a highly customized Maximo 7.5.  Some interesting problems came up, especially as browsers removed support for Java.  Maximo 7.5 uses Java applets for Workflow editing and Direct Print.  I didn't think I had much to share as the world moved on to later versions of Maximo.

My new position has me working with Maximo 7.6 and MAS8.  I should have more to write about.

Preventing User Logins

I have recently been working on a problem where I want to prevent users from logging into a Maximo node after startup while some complex initialization takes place within the Maximo JVM.  I want the same behaviour as placing Maximo into Admin Mode without actually doing that.   This snippet of code will do that.

// grab the SECURITY service.
SecurityService securityService = (SecurityService)MaximoHelper.getService("SECURITY");

// disable user logins

// do what you want to do without user logins

// enable user logins
This will prevent new users from logging in but will not log out users who are currently logged in. 

Friday, November 23, 2018

Maximo Log Analysis Tool

This was originally posted on the Interloc blog.

I recently needed to diagnose out-of-memory problems with Maximo.  There is some information in the Maximo logs that can help.  Maximo can display Mbo counts and free memory information.  Maximo will also log the number of active users and when crons execute.

If the mxe.mbocount system property is set to 1, then Maximo will output a count of each MboSet type and the total number of those Mbos in the system every minute.  It would look something like this:
PERSON: mbosets (275), mbos (550).

Also every minute, Maximo will display the number of users connected to each instance:
BMXAA6369I - Server host: Server name: UI03A. Number of users: 9
and the total amount of memory available and used in the JVM:
BMXAA7019I - The total memory is 2684354560 and the memory available is 633519896.
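For the curious, the tool's job boils down to pattern-matching lines like the ones above.  A small sketch of that kind of parsing (the patterns assume the exact formats shown here; real log lines also carry a timestamp prefix):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class LogLineParser {
    // Matches lines like: PERSON: mbosets (275), mbos (550).
    static final Pattern MBOCOUNT =
            Pattern.compile("(\\w+): mbosets \\((\\d+)\\), mbos \\((\\d+)\\)");
    // Matches the BMXAA7019I memory line.
    static final Pattern MEMORY = Pattern.compile(
            "BMXAA7019I - The total memory is (\\d+) and the memory available is (\\d+)");

    public static void main(String[] args) {
        Matcher m = MBOCOUNT.matcher("PERSON: mbosets (275), mbos (550).");
        if (m.find()) {
            System.out.println( + " " + + " " +;
        }
        Matcher mem = MEMORY.matcher(
                "BMXAA7019I - The total memory is 2684354560 and the memory available is 633519896.");
        if (mem.find()) {
            // Used memory is total minus available.
            long used = Long.parseLong( - Long.parseLong(;
            System.out.println("used=" + used);
        }
    }
}
```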

All this is useful information, but it can be hard to make sense of it by just looking at it as text in the log files.  To help visualize it better, I created a web application to parse the logs and graph the data.  It also gives the option of downloading the raw data so you can analyze the data yourself.  The web application can be found at

To use it, upload a zip file containing SystemOut logs.

Once the file is loaded, specify an identifier to label the graphs, the size you want the graphs to be, and click Process.  The tool assumes that sorting the log filenames will place them in chronological order.  This will be true if you are using the default SystemOut naming and log rotation.  If you upload files that do not start with SystemOut, you will be prompted for how to parse the date and time from the log file.  The format follows Java’s SimpleDateFormat class.
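As a quick refresher on SimpleDateFormat patterns, here is a round-trip using the default WebSphere SystemOut timestamp layout (the sample values are illustrative):

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;

public class TimestampPattern {
    public static void main(String[] args) throws ParseException {
        // A default SystemOut entry begins with [11/23/18 10:15:30:123 EST];
        // the date-and-time portion matches the pattern below.
        SimpleDateFormat fmt = new SimpleDateFormat("MM/dd/yy HH:mm:ss:SSS", Locale.US);
        Date d = fmt.parse("11/23/18 10:15:30:123");
        System.out.println(fmt.format(d)); // round-trips the same string
    }
}
```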

After the log files have been processed, you can view the graphs in your browser or download them along with the extracted data.

In addition to information about memory usage, the Log Analyzer can graph the number of active users and when crons execute.

While analyzing some logs, I ran into an interesting phenomenon as seen in the following graph.

The total available memory is on a downward trend until each restart (black vertical line).  Initially I would have said that this is an example of a memory leak.  Identically configured sibling nodes did not show this trend.  Deeper analysis showed that this node had actually been taken out of the cluster and hadn’t done anything for weeks.  The sibling nodes that did process requests showed deeper drops in total available memory and higher peaks when memory was released.  My interpretation is that there is a “laziness” to garbage collection: the JVM will release the easy stuff but won’t look any harder than it needs to.

Monday, May 28, 2018

Solving “Record has been updated by another user. Refetch and try again.” Problems

Originally posted at Interloc Solutions' Blog.

We’ve all seen the dreaded “Record has been updated by another user. Refetch and try again.” It can happen when the record has been updated by another user, but it can also happen when the record is updated by the same user more than once.

In memory, an MboSet owns zero or more Mbos. Each of those Mbos can have zero or more MboSets which in turn own zero or more Mbos. Naturally, if your MboSets contain zero Mbos you won’t have problems with records updated by another user. Problems will arise if two different Mbos reference the same database record and both Mbos attempt to update data. In memory, these Mbos will be represented as separate Java objects and will be owned by different MboSets. If only one Mbo is updated, it won’t be a problem. If both Mbos are updated, the first will update the database record and change the ROWSTAMP value. The second will attempt to update the record but will fail because the ROWSTAMP doesn’t match. This will trigger an MXRowUpdateException.
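The ROWSTAMP check is textbook optimistic locking.  This toy model (not Maximo's actual code; the names are invented) shows why the second of two updates to the same record fails:

```java
public class RowstampDemo {
    // A tiny in-memory stand-in for a database row with a ROWSTAMP column.
    static long dbRowstamp = 100;   // the row's current version
    static String dbDescription = "original";

    // Apply an update only if the caller's rowstamp is still current.
    static boolean save(String newDescription, long expectedRowstamp) {
        if (dbRowstamp != expectedRowstamp) {
            return false;           // Maximo throws MXRowUpdateException here
        }
        dbDescription = newDescription;
        dbRowstamp++;               // every successful save changes the rowstamp
        return true;
    }

    public static void main(String[] args) {
        long copy1 = dbRowstamp;    // two Mbos read the same record...
        long copy2 = dbRowstamp;
        System.out.println(save("Example", copy1));   // first save succeeds
        System.out.println(save("Example 2", copy2)); // rowstamp has moved on
    }
}
```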

Let’s look at an example. In this diagram, ellipses represent Mbo objects in memory. Rectangular boxes represent database tables and their records. Solid lines represent an in-memory association through the named relationship. Dashed lines point to the database record.

In this example, a WORKORDER Mbo loads work order wo1. Following the woactivity relationship, WOACTIVITY wo1.2 is loaded into an Mbo. From here, the parent relationship is followed to WORKORDER wo1. The two WORKORDER objects reference the same database record, but they are represented in memory by two separate Mbo objects because they were loaded through two different relationships.

If either the first WORKORDER Mbo or the second WORKORDER Mbo is updated, it will save properly. If both Mbos are updated, the save will fail with an MXRowUpdateException. This occurs because the first update is applied to the database and updates the ROWSTAMP column. The second update is then applied, but the ROWSTAMP column has changed so it fails with an MXRowUpdateException.

To find where a single database record is updated by multiple Mbos, I have developed a MboDumper class.  Starting from a single Mbo, it will travel down all instantiated relationships and display the relationship name and all Mbos contained therein.  If it detects the same Mbo in more than one place, it will highlight it.  It’s then a matter of finding where in the code the Mbos are modified.

Code for the MboDumper is available on GitHub at

To use the MboDumper, place a try-catch block around the call to save. Exactly where will depend on whether the save takes place in a DataBean or in a Cron Task or something else. The code for the above example could look like:
MXServer mxs = MXServer.getMXServer();
UserInfo ui = mxs.getSystemUserInfo();
MboSetRemote workorders = mxs.getMboSet("WORKORDER", ui);
workorders.setWhere("wonum = '10190548'");
MboRemote workorder = workorders.getMbo(0);
workorder.setValue("DESCRIPTION", "Example");

MboSetRemote woactivities = workorder.getMboSet("WOACTIVITY");
MboRemote woactivity = woactivities.getMbo(0);

MboSetRemote parents = woactivity.getMboSet("PARENT");
MboRemote parent = parents.getMbo(0);
parent.setValue("DESCRIPTION", "Example 2");
try {;
} catch (MXRowUpdateException e) {
    MboDumper.dump(FixedLoggers.MAXIMOLOGGER, workorder);
    throw e;
}
Data is output to the given logger at the INFO level.

Given the example above, MboDumper would generate output like this:

*** MBO Data Dump Start
Mbo: WORKORDER [ToBeSaved ToBeUpdated] <=== DUPLICATE MBO
Key WONUM: 10190548
Modified DESCRIPTION: Example
Modified CHANGEDATE: 2/27/18 3:03 PM
Relationship: WOACTIVITY parent= '10190548'  and siteid= 'AMTRKENG' 
Mbo: WOACTIVITY [ToBeSaved ToBeUpdated] 
Key WONUM: 10190549
Relationship: $SECGROUPS userid = 'MAXADMIN' and groupname in ('MAXADMIN','SUPERUSER')
Relationship: __LONGDESCRIPTIONMLFALSEWORKORDER ldownertable = 'WORKORDER' and ldkey =  196399 
Relationship: PARENT wonum= '10190548'  and siteid= 'AMTRKENG' 
Mbo: WORKORDER [ToBeSaved ToBeUpdated ] <=== DUPLICATE MBO
Key WONUM: 10190548
Modified DESCRIPTION: Example 2
Modified CHANGEDATE: 2/27/18 3:03 PM
Relationship: CLASSSTRUCTURE classstructureid =  '' 
Relationship: __LONGDESCRIPTIONMLFALSEWORKORDER ldownertable = 'WORKORDER' and ldkey =  196398 
Relationship: CLASSSTRUCTURE classstructureid =  '' 
Relationship: __LONGDESCRIPTIONMLFALSEWORKORDER ldownertable = 'WORKORDER' and ldkey =  196398 
*** MBO Data Dump Done

The WORKORDER updated in two places is marked with “<=== DUPLICATE MBO”.

Knowing the Mbo and the relationship name is usually enough to find where in the code these multiple updates occur.

Thursday, March 29, 2018

Using Ant to Deploy Automation Scripts


I am a firm believer in deploying from source control.  The only thing that should end up in a production environment should come from the source control system.  This is the best way of knowing what is in production and the best way to know that your source is up to date.

It's relatively easy to set up a process that deploys Java changes to Maximo from code taken from the source control system.  Automation scripting makes this harder.  Scripts can be written from within Maximo, a cut-and-paste process can copy the changes to a file under source control, and Migration Manager can move the changes from one environment to another.  The problem is making sure the script in Maximo is the same as the one in source control.  Any manual process to keep the two in sync will be error prone, and Migration Manager will only propagate that error through the environments.

Using an ant script and some Enterprise Services, I have managed to automate deploying Automation scripts to Maximo.

Create External System

In Maximo, create an External System.  I called mine CONFIG.  Make it use the MXXMLFILE endpoint, or create your own XML file endpoint.

Duplicate the DMLAUNCHPOINT and DMSCRIPT Object Structures.  I called mine IS_LAUNCHPOINT and IS_SCRIPT.

Create an Enterprise Service for IS_LAUNCHPOINT called ConfigLAUNCHPOINT and another one for IS_SCRIPT called ConfigSCRIPT.  

Also, create Publish Channels for IS_LAUNCHPOINT and IS_SCRIPT.  This makes it easier to extract data.

Associate the Publish Channel and the Enterprise Service created above with the External System created above.  Make sure everything is enabled.

Ant Script

Here is the ant script I created to publish xml files to the Enterprise Services.  You'll need ant-contrib, commons-httpclient, commons-logging and commons-codec jar files.

<project name="DeployXML" default="deploy" basedir=".">
    <description>Deploy Maximo XML files through Enterprise Services</description>

    <taskdef resource="net/sf/antcontrib/antlib.xml" onerror="fail">
        <classpath>
            <pathelement location="ant-contrib.jar"/>
            <pathelement location="commons-httpclient-3.0.1.jar"/>
            <pathelement location="commons-logging-1.0.4.jar"/>
            <pathelement location="commons-codec-1.10.jar"/>
        </classpath>
    </taskdef>

    <property name="deploy.file" value="" />

    <target name="deploy" depends="init" description="Deploy XML files to Maximo.  The files are determined from the deploy properties file.">
        <for list="${deploy.list}" param="index">
            <sequential>
                <propertycopy name="" from="deploy.@{index}.name" override="true"/>
                <propertycopy name="temp.dir" from="deploy.@{index}.dir" override="true"/>
                <propertycopy name="temp.files" from="deploy.@{index}.files" override="true"/>
                <propertycopy name="temp.url" from="deploy.@{index}.url" override="true"/>
                <echo message="Processing @{index} ${}"/>
                <antcall target="">
                    <param name="" value="${}"/>
                    <param name="service.dir" value="${temp.dir}"/>
                    <param name="service.files" value="${temp.files}"/>
                    <param name="service.url" value="${temp.url}"/>
                </antcall>
            </sequential>
        </for>
    </target>

    <target name="">
        <echo message="Processing ${} ${service.dir}/${service.files}" />
        <for param="file">
            <path>
                <fileset dir="${service.dir}">
                    <filename name="${service.files}"/>
                </fileset>
            </path>
            <sequential>
                <antcall target="deploy.single">
                    <param name="file" value="@{file}"/>
                </antcall>
            </sequential>
        </for>
    </target>

    <target name="deploy.single" depends="init.client">
        <echo message="Deploying ${file}"/>
        <postMethod url="${service.url}" responseDataProperty="response" clientRefId="maximo">
            <file path="${file}" contentType="text/xml;charset=utf-8"/>
        </postMethod>
        <echo message="${response}"/>
    </target>

    <target name="init">
        <loadproperties srcFile="${deploy.file}" />
    </target>

    <target name="init.client">
        <httpClient id="maximo">
            <clientParams authenticationPreemptive="true"/>
            <httpState>
                <credentials username="${maximo.username}" password="${maximo.password}"/>
            </httpState>
        </httpClient>
    </target>
</project>

Property File

Here is the property file.  



Set maximo.base.url to the root url of all Enterprise Services for the External System you created.

Set maximo.username and maximo.password either in the property file or as command line parameters.  This will be a username and password that is allowed to call the Enterprise Services.

Set to the top level of your source project.

Set deploy.list to the list of configurations you want deployed.  They will be deployed in the order given.  Each entry is an index to the deploy.x entries below.

Set to a descriptive name of what will be deployed.  It will be displayed while the ant script runs.

Set deploy.x.dir to a directory in which the files to upload can be found.  This can be a top level directory as shown here.  It really depends on how you want to organize your files.

Set deploy.x.files to a file or files to import into Maximo.  The sample property file will actually look in all subdirectories for file names that contain ConfigScript.  Again, this will depend on how your files are organized.

Set deploy.x.url to the URL of the Enterprise Service that will import the files.
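Putting those settings together, a property file might look like this (all names, URLs, and paths are illustrative):

```properties
# Credentials allowed to call the Enterprise Services

# Root URL of the External System's Enterprise Services

# Top level of the source project

# Deploy scripts first, then launch points

deploy.1.name=Automation Scripts${}

deploy.2.name=Launch Points${}
```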

Deploying Automation Scripts

Deploying the automation scripts is simply a matter of running the ant script. 

Once the ant script completes, it's a good idea to check Message Reprocessing in case there were any errors importing the scripts.

You can also use this approach to deploy workflows.

Friday, January 19, 2018

Risk Assessment Tool

I originally posted this at Interloc Solutions' blog.

At my current client, we use a Change Request approach to Maximo changes.  A Change Request is created describing the new functionality desired.  Developers work on these changes in separate Subversion branches.  Change Requests are chosen for a Release, then merged together, tested, and deployed.  The merged changes are then merged back into our Trunk and the process repeats.  Change Requests are not necessarily deployed during the next Release.  It can be several Releases before a Change Request is deployed.

When it is decided to include a Change in a Release, a risk assessment is performed.  Two common questions are “What has this change touched,” and “Does it require regression testing.”  The intent is not to get into the details of the change, but to provide a rough overview that can help QA get an idea of the scope of the change.

We have recently introduced a Risk Assessment Tool modeled on the FAA’s Flight Risk Assessment Tool.  It is available on GitHub.  We are looking for a better acronym than MRAT or RAT when referring to this tool.  I welcome any input on the matter.

It is simply a list of development activities that are weighted depending on their potential impact.

Once development on a Change Request is complete, the developer creates a copy of the Risk Assessment Tool specific for their change.  The developer goes through and places a “1” in the Applicable column for anything that applies.  An overall score for the page is calculated automatically.

The development activities are then grouped together into more general types of changes, and an overall score is calculated.

The end product is an overall summary of what has been changed (e.g. Screen, Database, Coding, etc.) and a score that gives an idea of the size of the change.  A bigger number means a bigger change means bigger risk and implies more testing.

The Considerations column contains notes to the developer to check common mistakes associated with a change.  For example, a consideration for adding a database column that is part of an Object Structure is that it has the potential of affecting external systems that consume that Object Structure.  The consideration column also contains notes to QA about what should be tested.  When an Mbo save method is modified, the Consideration is that testing should include saving a record.

The Score calculation is simply Weight × Applicable.  The common usage is to place a “1” in the Applicable column.  In the beginning we toyed around with the idea of using larger numbers to represent larger changes.  For example, if two Mbo classes were modified, then place a “2” in the Applicable column.  We felt this had the potential of overweighting changes.  We decided to go with ranges — 1 file, 2 to 5 files, etc — each with a different weight.

Where the FAA Flight Risk Assessment Tool gives meaning to the calculated scores — 0-10 Not Complex Flight, 11-20 Exercise Caution, 20-30 Area of Concern — we haven’t yet determined appropriate ranges and what those ranges might mean, so any outside thoughts or input would be welcome.

There are some obvious areas missing from this tool, such as Work Flow.  We don’t use it, so we don’t have a section for it; any thoughts or input would be welcome here as well.

The Risk Assessment Tool is a simple approach that gives an overview of where changes took place and how much of an impact they might have.