The U2 Health Check You Should Have Done Six Months Ago
MultiValue Central
March 26, 2026
Your Rocket UniVerse-based application is running. Your users are logged in. Your reports are printing. Nothing is on fire—so everything must be fine, right?
That’s what the team at a distribution company told themselves for eighteen months. The nightly batch jobs crept a little longer each quarter. Disk usage climbed steadily. A few files hit 60% overflow, then 70%. Nobody noticed because nobody was looking. Then one Monday morning, the order entry system ground to a halt. A single file had grown so fragmented that every read triggered a cascade of disk seeks. It took the team three days of emergency resizing and reindexing to recover, and the IT director had a very uncomfortable conversation with the operations director.
The frustrating part? Every warning sign was visible months earlier. The data existed. It just wasn’t being collected, analyzed, or acted upon. That’s the value of a health check—not a crisis response, but a structured assessment that catches problems while they’re still cheap to fix.
What a Health Check Actually Measures
A proper U2 health check isn’t a single command or a fifteen-minute spot check. It’s a systematic evaluation across three layers: the operating system, the database engine, and the application workload.
At the OS layer, you’re validating that the underlying infrastructure supports U2 properly. This means confirming kernel parameters are set according to Rocket Software’s recommendations—shared memory limits, file descriptor limits, semaphore configurations. It means checking that disk I/O latency stays within acceptable bounds and that memory isn’t paging to swap. These aren’t U2 settings, but they constrain everything U2 can do.
On AIX, you’d review vmo and ioo tunables. On Linux, you’d review /etc/sysctl.conf entries and ulimit settings. The specific values depend on your U2 version and workload, but they should be documented somewhere in your environment.
Pro Tip: If they’re not documented, that’s finding number one.
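A check like this is easy to script. The sketch below compares live kernel parameters against a documented baseline; the baseline values shown are placeholders for illustration, not Rocket Software's actual recommendations, which depend on your U2 version and platform.

```python
# Sketch: compare kernel parameters against your documented baseline.
# The baseline values are hypothetical -- substitute the settings Rocket
# Software recommends for your U2 release.

def parse_sysctl(text):
    """Parse 'key = value' lines as produced by `sysctl -a` or sysctl.conf."""
    params = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        params[key.strip()] = value.strip()
    return params

def check_params(current, baseline):
    """Return (key, expected, actual) for every parameter off baseline."""
    findings = []
    for key, expected in baseline.items():
        actual = current.get(key, "<not set>")
        if actual != expected:
            findings.append((key, expected, actual))
    return findings

if __name__ == "__main__":
    baseline = {  # hypothetical documented values
        "kernel.shmmax": "68719476736",
        "fs.file-max": "6815744",
    }
    live = parse_sysctl("kernel.shmmax = 68719476736\nfs.file-max = 65536\n")
    for key, expected, actual in check_params(live, baseline):
        print(f"MISMATCH {key}: expected {expected}, got {actual}")
```

Feed it the output of `sysctl -a` (or your sysctl.conf) and your documented baseline; anything it flags is either a drifted setting or a gap in the documentation.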
At the database layer, the health check examines file structures, licensing utilization, and process behavior. File sizing is typically where the most actionable findings live. Run ANALYZE.FILE against your high-transaction files and note the overflow percentage. Anything above 20% for a busy file is a performance drag. Above 40% is urgent. Above 60% is an incident waiting to happen.
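The overflow thresholds above are simple enough to encode directly. This sketch classifies overflow readings into severity buckets; in practice the percentages come from ANALYZE.FILE output, whose exact format varies by release, so the example starts from already-extracted (file, percent) pairs.

```python
# Sketch: classify file overflow percentages using the thresholds above.
# The (file, percent) pairs are invented sample data; real numbers come
# from ANALYZE.FILE output.

def overflow_severity(percent):
    """Map an overflow percentage to a severity bucket."""
    if percent > 60:
        return "CRITICAL"   # an incident waiting to happen
    if percent > 40:
        return "URGENT"
    if percent > 20:
        return "WARNING"    # a performance drag on a busy file
    return "OK"

readings = [("ORDERS", 63.5), ("CUSTOMERS", 27.0), ("INVOICES", 12.1)]
for name, pct in readings:
    print(f"{name:12s} {pct:5.1f}%  {overflow_severity(pct)}")
```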
License utilization matters too. If you’re consistently running at 85% of your license limit during peak hours, you’re one busy day away from users getting locked out. Health checks should capture both current usage and peak historical usage over the collection window.
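Capturing both current and peak utilization is a few lines of code once you have samples. The sketch below is illustrative: the sample data is invented, the 85% risk threshold mirrors the figure above, and on a real system the session counts would come from your release's licensing tools.

```python
# Sketch: summarize license utilization over a collection window.
# Samples are (timestamp, sessions_in_use) pairs -- invented here; gather
# real counts with the licensing tools for your U2 release.

def license_summary(samples, license_limit):
    peak = max(count for _, count in samples)
    current = samples[-1][1]
    return {
        "current_pct": round(100.0 * current / license_limit, 1),
        "peak_pct": round(100.0 * peak / license_limit, 1),
        "at_risk": peak >= 0.85 * license_limit,  # running close to the limit
    }

samples = [("09:00", 61), ("12:00", 74), ("19:00", 87), ("22:00", 40)]
print(license_summary(samples, license_limit=100))
```

The point of keeping both numbers is that the 10 AM spot check sees `current_pct`; only the window sees the 7 PM peak.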
At the workload layer, you’re looking for patterns in how the system is actually used. How many phantoms run concurrently? How long do batch jobs take compared to last quarter? Are lock wait times increasing? These trends reveal whether your system is scaling gracefully or slowly drowning.
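Trend detection is the part a spot check can never do. As a minimal sketch, the function below flags batch jobs whose runtimes have stayed above baseline quarter after quarter; the 10% tolerance and the job names are illustrative choices, not Rocket recommendations.

```python
# Sketch: flag batch jobs trending longer quarter over quarter.
# Runtimes are in minutes, oldest quarter first. The 10% tolerance and
# the job data are invented for illustration.

def trending_longer(history, tolerance=0.10):
    """True if every later quarter exceeds the first by more than tolerance."""
    baseline = history[0]
    return all(later > baseline * (1 + tolerance) for later in history[1:])

jobs = {
    "EOD.INVOICING": [42, 48, 55, 63],   # creeping longer each quarter
    "STOCK.REVAL":   [30, 29, 31, 30],   # stable
}
for name, runtimes in jobs.items():
    if trending_longer(runtimes):
        print(f"WARNING: {name} runtime trending up: {runtimes}")
```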
Automated vs. Manual Assessment
You can perform a health check manually. Connect to the server, run the commands, copy the output into a document, repeat across each category. Many shops do exactly this once a year or before an audit.
The problem with manual assessments is that they capture a single moment. You might run PORT.STATUS at 10 AM on a quiet Wednesday and see nothing unusual. Meanwhile, the real contention happens at 7 PM on month-end close when nobody’s watching. Manual checks also depend on the person running them remembering every relevant command, interpreting the output correctly, and documenting findings consistently.
Automated health checks solve both problems. A script that collects metrics over a representative period—say, 48 hours spanning normal operations and peak loads—captures behavior you’d never see in a spot check. Automation also applies consistent thresholds every time. It doesn’t forget to check index effectiveness because it’s running late for a meeting.
Pro Tip: The output should be a report, not a pile of log files. Categorize findings by severity—critical issues that need immediate attention, warnings that should be addressed this quarter, and informational items for future planning. Each finding should include what was measured, what threshold was exceeded, and what action to take.
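One way to sketch that report structure: group findings by severity, and make each one carry the measurement, the threshold, and the action, as described above. The sample findings are invented for illustration.

```python
# Sketch: render findings as a severity-grouped report rather than raw logs.
# Each finding records what was measured, the threshold exceeded, and the
# action to take. Sample data is invented.

SEVERITY_ORDER = ["CRITICAL", "WARNING", "INFO"]

def render_report(findings):
    lines = []
    for severity in SEVERITY_ORDER:
        matching = [f for f in findings if f["severity"] == severity]
        if not matching:
            continue
        lines.append(f"== {severity} ({len(matching)}) ==")
        for f in matching:
            lines.append(
                f"  {f['measured']} | threshold: {f['threshold']}"
                f" | action: {f['action']}"
            )
    return "\n".join(lines)

findings = [
    {"severity": "WARNING", "measured": "ORDERS overflow 34%",
     "threshold": "25%", "action": "resize this quarter"},
    {"severity": "CRITICAL", "measured": "license peak 96%",
     "threshold": "95%", "action": "add seats or shed sessions now"},
]
print(render_report(findings))
```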
What a Healthy System Looks Like
After reviewing hundreds of U2 environments, certain patterns emerge. Healthy systems share common characteristics that go beyond “nothing is broken today.”
File structures are right-sized. Dynamic files maintain overflow percentages below 15-20%. Static files have appropriate modulos for their record counts. Files aren’t left in default configurations years after creation.
Indexes match query patterns. High-volume selection criteria are indexed. Index effectiveness—the ratio of indexed selects to full file scans—is measurable and reasonable. There aren’t dozens of unused indexes adding overhead to every write.
Phantoms are disciplined. Batch processes have expected runtimes, and they meet them. Phantoms don’t accumulate indefinitely. Long-running jobs are investigated rather than accepted as normal.
Locking is intentional. Record locks are held briefly during transactions, not for the duration of a user session. Lock contention is rare rather than routine. When lock waits occur, they’re measured in seconds, not minutes.
Capacity has headroom. License utilization peaks below 80%. CPU headroom exists during normal operations. Disk space has at least 20% free, preferably more. The system can absorb a busy day without tipping into crisis.
History exists. Someone can answer the question “how does this compare to six months ago?” Performance baselines are documented. Changes are tracked.
The Findings You Don’t Want to Ignore
Some health check findings are informational—useful context but not urgent. Others require action.
Critical findings include files with overflow above 50%, swap space actively in use during business hours, phantoms that have been running for days, and license utilization above 95%. These indicate imminent problems and should be addressed immediately.
Warning findings include files with overflow between 25% and 50%, memory utilization consistently above 85%, batch jobs trending longer quarter over quarter, and kernel parameters that don’t match Rocket’s recommendations. These are degradation indicators—the system still works, but it’s heading somewhere bad.
Informational findings include unused indexes, minor configuration deviations, and capacity that’s adequate today but will need expansion within a year. These belong on a planning list rather than an emergency list.
That distribution company I mentioned earlier had six months of warning-level findings that escalated to a critical outage because nobody acted.
Pro Tip: The discipline is treating warnings like warnings—not ignoring them until they become critical.
Building Health Checks into Operations
A health check is most valuable when it’s routine, not reactive. Schedule it quarterly at minimum. Monthly is better for high-transaction environments. The goal is trend detection—catching the slow creep toward problems before users feel the impact.
Keep historical reports. A single health check tells you the current state. A year of quarterly reports tells you where you’re headed. That trend information is what enables proactive capacity planning instead of emergency purchases.
Assign ownership. Someone should be responsible for reviewing findings and tracking remediation. Health check reports that go into a folder where no one ever reads them are just documentation theater.
Need Help? Talk to MultiValue Central
Running a health check is straightforward once you know what to measure and what the thresholds should be. But interpreting findings in the context of your specific environment—understanding which warnings are urgent and which can wait, prioritizing fixes for maximum impact—requires experience across many U2 installations.
MultiValue Central offers structured health assessments for Rocket U2 environments. We collect the right metrics over a representative period, analyze against proven thresholds, and deliver actionable findings prioritized by severity. More importantly, we help you fix what we find—not just document problems, but solve them.
If your U2 system hasn’t had a proper health check recently—or ever—now is a good time. Visit www.multivaluecentral.com, email us at info@multivaluecentral.com, or call us at +1 720 918 1300 x7113. You’ll know exactly where your system stands, and you’ll have a clear plan for keeping it there.
MultiValue Central is a privately owned professional services company with over 30 years of experience in providing technology, talent, and learning solutions. Our services are successfully delivered through a network of offices located in countries such as the United Kingdom, Australia, and the United States. MultiValue Central serves companies of all sizes and places professionals on assignment annually across a wide range of industries.