FusionReactor Blog - News and expert analysis for Java APM users.

5 things you should check every day to ensure your application health

Short application health checklist to ensure you’re getting the most out of FusionReactor

Configure your notification email address

The notification email address is where FusionReactor sends the daily, weekly and monthly reports; it is also the address used for Crash Protection notifications. If you have not done this already, it's very important to set it up as soon as possible.

Configure your mail settings within the FusionReactor settings page.

1 – Set up Crash Protection – get alerts when things go bad

Crash Protection will alert you, and can attempt to keep your server/application responding, when certain events or situations occur. The alerts are usually the first capability to enable, because they provide critical insight into what's going wrong and why.

Crash protection can alert you when:

  • A request runs for longer than a configured time period (long-running requests).
  • A configured number of requests run concurrently for a configured period (a spike in traffic slowing down the application).
  • Heap memory peaks above a certain threshold for a configured period (in case of a memory leak or a request consuming large amounts of memory).
  • Instance CPU peaks above a certain threshold for a configured period (in case of a resource-heavy background process, request or event consuming large amounts of CPU).

For each of these alerts an email can be sent containing details of the running requests, resource usage and a full stack trace at the time the event triggered.

NOTE: Even when you have set up your notification email, you still need to set Crash Protection email to ENABLED before the email will be sent.  You can do this in the Crash Protection Settings.

As well as email alerts you can also queue or reject new requests coming into the application server to reduce load whilst the server recovers.

2 – Check daily, weekly and monthly reports

Once the notification email is configured, you will automatically start receiving daily reports from your FusionReactor instance. Each report shows any outages, the total load for the day and the number of erroring requests.


NOTE: All editions of FusionReactor provide a Daily Report – however, the Enterprise and Ultimate Editions also provide a weekly and monthly report.

3 – Review historical Archive Metrics – find behavioural issues 

Archive metrics allow you to view your historic log data within a user-friendly interface, so you can go back in time to identify issues and spot behavioural patterns within the application server.

A key part of maintaining application health is identifying issues post-crash. This can be a challenge, as vast amounts of data may be dumped to log files, and sifting through that data is time-consuming.

FusionReactor makes this process simple: you can view all the metrics available in the running server, but for the past 31 days of captured logs.

In the example above, we can examine the Garbage Collection activity in the period before a crash and see a steady increase until the point the server became unstable and crashed.

The Relations Tab provides a visual breakdown of sub transactions, which are often database or external service functions.  This makes it easier to spot potential performance bottlenecks.

4 – Recognize performance hot-spots from the Relations Tab 

If your web request makes an HTTP, JDBC, Mongo or Redis call, or runs a ColdFusion tag (among many others), these operations are tracked as sub-transactions that you can see as an overview in the Relations tab of the request and drill into.

5 – See resource details to quickly gauge JVM health

The Resources view allows you to monitor the health of the JVM and find potential optimizations.

Within resources, you have multiple graphs that allow you to monitor:

  • Heap and non-heap memory
  • The usage of each memory space
  • Garbage Collection time and quantity
  • Class loading and JIT
  • Thread state and activity

From the Threads view, you can see the state of each thread in real time and take a stack trace to see what each thread is doing.

Securing FusionReactor and JSP applications in Tomcat using LDAP


FusionReactor provides different types of user account (Administrator/Manager/Observer). However, if you would like to restrict access to FusionReactor for individual users, you can do this via LDAP authentication.

This technote will guide you through configuring Tomcat to use LDAP authentication to restrict access to both FusionReactor and your JSP applications.

We will split this guide into 5 distinct sections:

  1. Configuring LDAP
  2. Configuring the server.xml file
  3. Configuring the JSP application
  4. Configuring the FusionReactor web root
  5. Disabling the internal port of FusionReactor

1.  Configuring LDAP

When configuring LDAP for use with Tomcat, you are required to create a collection of individuals and a collection of groups (one group per required Tomcat security role). Each user can be assigned to one specific group.

In this example, FusionReactor and the JSP application are assigned to separate Tomcat roles. The directory structure is as follows:

dn: dc=mydomain,dc=com
objectClass: dcObject

dn: ou=people,dc=mydomain,dc=com
objectClass: organizationalUnit
ou: people

dn: ou=groups,dc=mydomain,dc=com
objectClass: organizationalUnit
ou: groups

dn: uid=jsmith,ou=People,dc=mydomain,dc=com
objectClass: inetOrgPerson
uid: jsmith
cn: John Smith
sn: Smith
userPassword: myPassword

dn: uid=ajones,ou=People,dc=mydomain,dc=com
objectClass: inetOrgPerson
uid: ajones
cn: Adam Jones
sn: Jones
userPassword: myPassword

dn: cn=fusionreactor,ou=groups,dc=mydomain,dc=com
objectClass: groupOfUniqueNames
cn: fusionreactor 
uniqueMember: uid=jsmith,ou=People,dc=mydomain,dc=com

dn: cn=myApplication,ou=groups,dc=mydomain,dc=com
objectClass: groupOfUniqueNames
cn: myApplication
uniqueMember: uid=ajones,ou=People,dc=mydomain,dc=com

You could instead create a single group, for example "admin", and use it for both FusionReactor and the JSP application.

2. Configuring the server.xml file

In its default installation, Tomcat uses a local user database to authenticate access. We need to modify the server.xml file, typically located at {Tomcat Root Directory}/conf/server.xml, so that Tomcat instead uses the LDAP server as its authentication service.

To do this, open the server.xml file in a text editor and replace the default Realm element:

<Realm className="org.apache.catalina.realm.LockOutRealm">
   <!-- This Realm uses the UserDatabase configured in the global JNDI
        resources under the key "UserDatabase".  Any edits
        that are performed against this UserDatabase are immediately
        available for use by the Realm.  -->
   <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
          resourceName="UserDatabase"/>
</Realm>

With the following:

<Realm className="org.apache.catalina.realm.JNDIRealm"
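On its own, the JNDIRealm element also needs connection and search attributes that match your directory. As a sketch based on the example structure in section 1 (the connection URL and port are assumptions for your environment):

```xml
<!-- Authenticate against the example LDAP directory from section 1.
     connectionURL is an assumption: adjust host/port to your LDAP server. -->
<Realm className="org.apache.catalina.realm.JNDIRealm"
       connectionURL="ldap://localhost:389"
       userPattern="uid={0},ou=people,dc=mydomain,dc=com"
       roleBase="ou=groups,dc=mydomain,dc=com"
       roleName="cn"
       roleSearch="(uniqueMember={0})"/>
```

Here userPattern tells Tomcat how to build the DN for the authenticating user, and the role attributes let it look up group membership in ou=groups.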

More information on realms can be found here: https://tomcat.apache.org/tomcat-7.0-doc/realm-howto.html

3.  Configuring the JSP application

By default, any application you place in the Tomcat webapps directory is accessible without authentication. However, you may have an application that should only be accessible to specific users. You can achieve this by modifying the application's web.xml file, usually found at {Tomcat Root Directory}/webapps/{App Name}/WEB-INF/web.xml.

Within the “web-app” element tag add the following:
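For example, a typical servlet security constraint for this purpose, using the myApplication role from the group defined in section 1, might look something like:

```xml
<!-- Restrict the whole application to users holding the myApplication role -->
<security-constraint>
    <web-resource-collection>
        <web-resource-name>My Application</web-resource-name>
        <url-pattern>/*</url-pattern>
    </web-resource-collection>
    <auth-constraint>
        <role-name>myApplication</role-name>
    </auth-constraint>
</security-constraint>
<login-config>
    <auth-method>BASIC</auth-method>
</login-config>
<security-role>
    <role-name>myApplication</role-name>
</security-role>
```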


This will block any user without an authorized role from accessing your application. It is possible to define multiple authorized roles by duplicating the "role-name" element, for example:
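Using both example groups from section 1, an auth-constraint permitting two roles might look like:

```xml
<!-- Either role grants access to the application -->
<auth-constraint>
    <role-name>myApplication</role-name>
    <role-name>fusionreactor</role-name>
</auth-constraint>
```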


4. Configuring the FusionReactor web root

With the default configuration of FusionReactor, you can access the Application Performance Monitor through either the application server port (the external port, 8080 for Tomcat) or the instance port defined in the Java arguments (the internal port). Accessing FusionReactor through the external port uses the web root: the path to FusionReactor on that port.

By default, this is "/fusionreactor/", so you can access your FusionReactor instance at http://localhost:8080/fusionreactor/.

You can change this value by navigating to FusionReactor > Settings > Web Root:

To configure LDAP security, you will first need to create a web application directory containing a WEB-INF/web.xml file, i.e. {Tomcat Root Directory}/webapps/fusionreactor/WEB-INF/web.xml, ensuring you replace "fusionreactor" with your web root.

Your web.xml file should contain the following:

<?xml version="1.0" encoding="UTF-8"?>
<web-app version="2.5" xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
                             http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd">
    <security-constraint>
        <web-resource-collection>
            <web-resource-name>FusionReactor</web-resource-name>
            <url-pattern>/*</url-pattern>
        </web-resource-collection>
        <auth-constraint>
            <role-name>fusionreactor</role-name>
        </auth-constraint>
    </security-constraint>
    <login-config>
        <auth-method>BASIC</auth-method>
    </login-config>
    <security-role>
        <role-name>fusionreactor</role-name>
    </security-role>
</web-app>

This will ensure that any user that does not have the tomcat role fusionreactor cannot access the instance.

At this stage, you can test that authentication for both your application and FusionReactor is working as expected.

5. Disabling the internal port of FusionReactor

Although your external port is now secured, FusionReactor is still accessible over the internal port without LDAP authentication. To stop this, we simply need to disable the internal port.

You can do this in 2 ways:

  1. In the settings page
    1. Simply disable the internal port
  2. In the Java arguments
    • Windows – run the tomcatw.exe application within {Tomcat Directory}\bin
    • Linux – open the setenv.sh file in a text editor; it should be located at {Tomcat Directory}/bin/setenv.sh

In the -javaagent argument, remove the address={port number} configuration option. For example:

-javaagent:/opt/fusionreactor/instance/tomcat7/fusionreactor.jar=name=tomcat7,address=8098

will become:

-javaagent:/opt/fusionreactor/instance/tomcat7/fusionreactor.jar=name=tomcat7


After following the above steps we should now be in the following state:

  • An unauthorized user cannot access either the JSP application or FusionReactor
  • Your LDAP server will be contacted to authenticate users
  • Only users with the appropriate Tomcat roles will be able to access the JSP application or FusionReactor
  • FusionReactor will no longer be accessible on the internal port

Issue Details

Type Technote
Issue Number FRS-448
Attachments image-2018-07-19-12-41-25-990.png
Resolution Fixed
Last Updated 2020-03-16T11:04:50.857+0000
Fix Version(s) None
Server(s) Tomcat

Debugging plugin performance in CFWheels 2.x with FusionReactor

Originally posted March 5th, 2020 by Tom King – reproduced by kind permission

Debugging plugin performance in CFWheels – the issue

Shortly after the release of CFWheels 2.0, we started to get reports of slower running requests under certain conditions. For instance, a page which might have had 1000 calls to `linkTo()` could take anything from 1-2ms to 5-6ms a call, which, after 1000 iterations, is one hell of a performance bottleneck. In 1.x, the same call would be between 0-1ms, usually with a total execution time of sub 200ms.

This behaviour was something which could be proven by a developer, but not everyone was seeing the same results: what was the difference? Plugins (or rather, plugins which override or extend a core function, like linkTo()). To make matters worse, the performance degradation was doubled for each plugin, so you might get 1-2ms for 1 plugin, 2-4 ms for adding another plugin and so on.

So what was causing this?

Enter FusionReactor

We approached FusionReactor, who were kind enough to give us a temporary licence to help debug the issue (it’s great when companies support open-source!). So next up were some tests to help diagnose the issue.

Installing FusionReactor was really simple. As we use CommandBox locally, we could just utilise the existing module via install commandbox-fusionreactor to bootstrap FusionReactor onto our local running servers, which gave us access to the FR instance, already plumbed in. As we were looking for a specific line of code, we also installed the FusionReactor Lucee Plugin and configured it to track CFML line execution times using the CF line performance explorer.

This was instantly illuminating, and traced the problem to our new pluginrunner() method. When we released CFWheels 2.0, there was a fairly hefty rewrite of the plugins system. It was designed to allow plugins to be chained and executed in a specific order, so you could hypothetically have the result from one plugin overriding the previous one in the chain.

The way it did this was by creating a "stack" of plugins in an array, working out where it was in that stack, and executing the next plugin in the stack till it reached the end. It did that via a combination of the callStackGet() and getFunctionCalledName() functions to do the comparison.

As you can see from the screenshot below, the line debugger clearly highlights this. This app had four plugins, two of which extended core functions.

Example of FR Lucee 4 Line Debugger

callStackGet() gets invoked 2364 times in this example, but appeared performant, only causing 10ms execution time. getFunctionCalledName() is called the same number of times, but has a total execution time of 2242ms(!). We had our potential culprit. Either way, it was looking like the combination of calling the stack and trying to find the calling function name which was causing so much pain. I suspect it’s to do with how Java deals with this: I think it might be calling a full stack trace and writing it to disk on each call – at least that was the hint from FusionReactor’s thread profiler (I’m sure those who have a better understanding of Java’s underlying functions will chip in).

After some deliberation, we decided to revert this behaviour in CFWheels 2.1 back to how it used to work in 1.x, as the vast majority weren’t using it, but were being affected by it. We’d seen no plugins in the wild which used this behaviour either, which was largely undocumented.

Obviously thanks to FusionReactor for helping us out – hopefully this gives some insight into just one of the ways FusionReactor can be used. Maybe one day I’ll understand Java stack traces – maybe.

Thank you for reading our article Debugging plugin performance in CFWheels; we hope that you found it informative.

Start a free 14 day FusionReactor trial

Book a demo with one of our engineers

Installing FusionReactor in dynamic environments – Live Stream Support

Our last webinar on what’s new in 8.3.0 was a success and we were able to show all the exciting new features and answer any questions you had.

Based on the comments received on the last live session, we have decided to run a follow-up session on installing FusionReactor in dynamic environments.

Register your interest in our livestream

This session will run at 7PM UTC (12PM PDT, 3PM EDT) on the 25th of March.

See your local time.

This session will cover how to automate the installation of FusionReactor via Docker and CommandBox, as well as answer any questions you may have related to installing FusionReactor. Joining this session will be:

Michael Flewitt – a technical support engineer for Intergral.

Charlie Arehart – a server troubleshooting consultant and long-term advocate of FusionReactor.

Brad Wood – a developer for Ortus Solutions, developer for CommandBox and long-term user of FusionReactor.

During the session we will cover the following:

  • A high-level explanation on the process of installing FusionReactor dynamically
  • An example of installing FusionReactor in a fat jar Docker image
  • An example of installing FusionReactor in a tomcat Docker image
  • An example of installing FusionReactor in a ColdFusion Docker image
  • An example of installing FusionReactor in CommandBox
  • Advice on features you may want to configure in dynamic environments.

We expect the demonstration to take around 20 – 30 minutes, at which point we will answer any questions you may have.


Subscribe to the FusionReactor channel and set a reminder for the live session here.

Database Monitoring

Fig 1. FusionReactor: Database Monitoring Tool.

Databases embody the most crucial aspects of many business processes. As technology advances, applications and IT infrastructures become far more diverse, and with that diversity come application performance issues that demand troubleshooting and rectification. Delivering the quality of service that end-users demand from a server therefore begins with an excellent monitoring strategy.

This post examines the advantages of database monitoring, alongside a brief description of the best way to implement database performance monitoring. 

In this guide, we will highlight the following topics:

  • What database monitoring is
  • Advantages of database monitoring
  • Monitoring database performance
  • Conventional approaches to database monitoring
  • Key database performance metrics
  • Best implementation of database monitoring
  • Top 5 database monitoring software
  • How to select the best tool for database monitoring

What Is Database Monitoring?

The concept of database monitoring revolves around tracking database performance and resources in order to operate a highly available, high-performance application infrastructure. It involves measuring and tracking, in real time, the fundamental metrics affecting database performance. Monitoring allows you to spot current and potential performance issues from the word go, and in a productive database monitoring environment it significantly improves the database structure to optimize overall application performance.

The idea is to keep your database (and its resources) running in the most favourable condition possible, ensuring that your application infrastructure remains available and functional. Database monitoring is essential for database administrators and software analysts, since it gives them ample opportunity to apply accurate problem-solving techniques while saving time and valuable resources. And as issues get resolved quickly, end-users get a streamlined experience.

Most modern Application Performance Monitoring (APM) tools keep track of hardware and software performance by taking snapshots of performance metrics at different time intervals, allowing you to identify any sudden changes and bottlenecks and to pinpoint the exact timeframe in which an issue became established. With this information at hand, you can conveniently apply tailored optimization strategies to handle the issue better.

Advantages of Database Monitoring

The use of complex applications across varying infrastructures calls for a database monitoring tool that provides a fast, reliable and cost-effective solution. A robust database monitoring tool is essential in helping software engineers troubleshoot problems before they reach end-users. The necessity of a proper database monitoring tool can hardly be over-emphasized: an unchecked database can lead to high application load times and a slow network, a slow application reduces overall customer interaction, and reduced user engagement results in lost customers (and money) for the business.

Through the analysis of a database's performance parameters, such as user and application activity, you get a clear-cut picture of your database's behaviour. Implementing a robust database monitoring strategy brings many advantages, including:

  • Stress- and hassle-free debugging
  • Cost-effectiveness, as it saves time and resources
  • Improved end-user experience
  • More effective capacity planning
  • The ability to clear bottlenecks quickly, as they are spotted in real time before they affect end-users
  • Improved security, as it provides insight into any security flaws
  • Greater awareness of when, how and which optimization technique to employ

Monitoring Database Performance

As stated earlier, database monitoring is essential to spot and nullify errors before they become full-blown. To keep your application operational, you'll need to understand what database performance monitoring entails, including the right monitoring approach to implement, key metrics and best practices. Let's define the standard approaches to database monitoring and the best implementation of these processes.

Conventional approaches to Database Monitoring.

By convention, there are two types of database monitoring approach: proactive and reactive.

Proactive Approach

A proactive approach uses preventive measures to hinder the occurrence of potential errors, actively identifying issues before they become problems. The proactive approach is safer since it acts beforehand: it carries less risk and improves user experience. It is best employed by experts who monitor the right metrics, in a closely-knit software development environment where the right person can be alerted should there be any issue.

Reactive Approach

The reactive approach, by contrast, aims to mitigate the effects of problems once they occur. It is typically reserved as a final mechanism for performance troubleshooting, security breach investigation or major incident reporting.

However, it is pertinent to note that the need for proactive database monitoring greatly multiplies once there is substantial growth in the database size.

Key Database Performance Metrics. 

Below are the principal performance metrics you should consider during a database monitoring exercise to gain extensive insight into the overall condition of a database environment.

  • Queries: 

It is crucial to monitor the performance of the queries themselves if you want top-tier performance. The database monitoring tool should be able to alert you to query issues such as insufficient or overabundant indexes, overly literal SQL, and inefficient joins between tables.

  • Capacity issues: 

Some database issues can be caused by hardware problems, like lagging CPU/processor speed or insufficient CPUs, slow disks, misconfigured disks, full disks, and lack of memory.

  • Conflicts among users: 

In situations where many users try to access a database at once, conflicting activities and queries can result in a deadlock (a traffic jam, in layman's terms). For example, the performance of your database could suffer from page/row locking due to slow queries, transactional locks and deadlocks, or batch activities causing resource contention.

  • Configuration issues: 

A disk without proper configuration can cause performance issues. Implementing a database monitoring procedure helps uncover issues like an insufficiently sized buffer cache or a lack of query caching.

Best implementation of Database Monitoring.

Few resources carry as much essential information as a database; a slight oversight could lead to the loss of some or all of that crucial information. Without the right database monitoring tools, it can be challenging to keep an eye on all the required metrics. When conducting a database monitoring procedure, the following indicators should be considered.

  • Automatically collect key database, server and virtualization performance metrics.
  • Alert on performance or availability problems for both database and server components, and optionally take corrective actions.
  • Generate comprehensive reports on database capacity and utilization bottlenecks.
  • Compare current database issues with end-user response metrics for an accurate assessment of application performance.

How to Select the Best Tool for Database Monitoring.

To implement a robust database monitoring strategy, you need an efficient means of analyzing data across several categories while minimizing lag. Before selecting a database monitoring tool, note that different types of database require different data points to be analyzed.

FusionReactor APM is ranked highly by its users on top review platforms such as G2.com due to its enhanced feature set and excellent customer service; a 14-day free trial is available.

Continuous Profiling


Fig 1. Continuous Profiler alerting configuration

Software development in production is everything, because when code fails in production, it fails in reality. An excellent tool that debugs and profiles code efficiently is critical, and it helps a great deal if you want to improve your code's performance and scalability. In a development environment, the performance of a codebase can be optimized by means of tracing (instrumentation) profilers. In a production setting, a different approach is employed, such as a sampling profiler, which enables continuous profiling. But before we delve into the importance of continuous profiling, it is necessary to explain what profiling is in general.

Continuous Profiling in Production

Profiling is the process of monitoring various parameters such as Method Execution, Thread Execution, Object Creation and Garbage Collection. All of which helps you understand your application system and solve any issue that may arise. It is simply the analysis of your application’s performance by measuring and comparing the frequency and duration of your function calls. Hence, profiling helps you with a detailed observation of your target application execution and its resource utilization.

” In software engineering, profiling is a form of dynamic program analysis that measures, for example, the space or time complexity of a program, the usage of particular instructions, or the frequency and duration of function calls. Most commonly, profiling information serves to aid program optimization.”


There are a number of other approaches to profiling. However, in a production environment a low-overhead sampling profiler is a better choice, and given that continuous profiling identifies and improves the faulty portions of your code on the fly, it is the best option.

Why Continuous Profiling?

‘Continuous profiling’ is the collection of line-level profiling performance data from a production environment, made available to developers and operations teams for rapid analysis. This is in contrast to ad hoc production profiling, which involves connecting a profiler to a production environment on an intermittent, on-demand basis. Continuous profiling is a mechanism with significantly low overhead that gives its users unlimited real-time insight into code performance issues during development, staging and production.


Fig 1. Continuous Profiling in a Software Development Life Cycle (SDLC).

Notably, debugging and profiling are two essential phases in software development, and these stages can place a severe strain on developers, software analysts and the entire team if not well implemented. With a continuous profiling system in place, developers get a line-level understanding of the performance of their code, including the consumption of resources that are naturally in short supply, such as CPU time, wall-clock time, memory and disk I/O; uneven allocation of these resources can lead to thread deadlocks and/or application bottlenecks.

Therefore, we need continuous profiling for the following reasons;

Evaluate overall application performance.

Memory affects everything; therefore, it is critical to be able to pinpoint memory issues throughout an application's development life cycle. One of the most useful capabilities of continuous profiling is showing how much memory was allocated for a specific transaction or web request. With continuous profiling, you can actively track memory and subsequently optimize memory usage, giving developers instant, low-overhead insight into the heap of their production Java applications. A wider overview of how specific threads are running is also essential to understanding performance. A continuous profiler provides an interface where users can instrument individual threads in order to get to the root cause of stability and performance-related issues.


Fig 2. Continuous Profiler – Thread profiling Interface.

Reduce Server cost and Find Bottlenecks

Another notable feature of continuous profiling is its ability to profile code and spot performance bottlenecks on the go. Continuous profilers come equipped with low-overhead performance analysis, which is perfect for a production environment. Configured to take snapshots at regular intervals, continuous profiling provides excellent real-time insight, so you will never miss an issue. Since profiling involves measuring which part of your application is consuming a particular resource, a continuous profiler can track not just memory but also CPU usage on individual threads, as well as find and tune inefficient processes running on your application server.


Fig 3. Continuous Profiler – CPU Snapshots.

Determine performance issues in a particular method.

The heap is the runtime data area where the Java Virtual Machine (JVM) allocates memory for all class instances, methods and arrays. A continuous profiler such as FusionReactor uses a graph to display metadata such as how much heap has been used and how much is freed as the garbage collector runs. Continuous profiling also lets you examine the heap in close detail by comparing heap snapshots from the memory view heap screen, while the stack-trace mechanism (mainly for debugging purposes) can be triggered to expose the various classes and methods for decompilation.


Fig 4. Continuous Profiler – Heap Histogram. 

What is the best tool for profiling?

Although profiling in a development environment is easy, it is hardly enough, and pinpointing performance issues in a production environment is far from easy. Selecting the best tool for continuous profiling therefore becomes really necessary. It is pertinent to consider the following conditions when choosing one:

  • Choosing the right sampler for your programming language (e.g. Java) with the least overhead and an optimal runtime.
  • Selecting the right database for storing the data from your profilers.
  • Having a seamless means of generating reports from this data.

That said, the aforementioned tasks would normally require the intuition of an expert. Nonetheless, with a sampling profiler like FusionReactor, there is no need for a specialized software analyst during production-level profiling.

Would you like to know more about the FusionReactor profiler? Click here to start a free trial or book a demo.

FusionReactor Database Monitoring software

FusionReactor database monitoring software enables the monitoring and tracking of a database’s performance. This allows users to identify and solve any potential performance issues as well as track changes in the database’s function. FusionReactor analyzes and captures all the data related to your SQL statements so you can focus on improving performance and reducing bottlenecks. See right down to which SQL statements were run, the number of rows returned and the time spent on the query.

Database monitoring tools are used by database administrators to help maintain database performance and pinpoint potential issues.

Number 1 database monitoring software for small business

Best by customer satisfaction

FusionReactor is ranked the number one Database Monitoring Software for small businesses, as rated by customer satisfaction. This means that our customers put us here, and our support and product teams work very hard to maintain our position.

Why did our customers choose FR?


Ben N.

The most powerful feature that we’ve used so far is the automatic profiling of long-running requests. This has proved to be enormously valuable for debugging bottlenecks as we can see exactly where the requests are being blocked by processing. With this feature, we were able to, within a single day, vastly improve the responsiveness and stability of our service.

Read full review on G2


Jan J. Managing Director, Pixl8 GmbH

Finding performance issues is so much easier when using FusionReactor. Typically and especially in complex web applications (think MVC, layered architecture, etc.) sometimes those nasty db-query-in-loop are tough to find. Easy as pie with FR.

Read full review on G2


Forrest H. Solutions Architect, Auto Europe

Fusion Reactor has been key in helping us work through issues in deploying our new platform

Read full review on G2


Dave L. The Big Kahuna and CEO, Angry Sam Productions, Inc.

If you’re having issues with slow CFML requests, I definitely recommend giving FusionReactor a try

Read full review on G2


Brad W. Senior Application Architect, Ortus Solutions

I’ve used it to find production errors, memory leaks, and performance issues as well.

Read full review on G2


noah b. Software Engineer, SYNNEX

FusionReactor APM is without a doubt one of my best tools for real-time monitoring of my applications,

Read full review on G2


Tony B. Software Developer and Operations, Trialsmith, Inc

Bottom line, FusionReactor saves time and development costs.

Read full review on G2


Jennifer H. Senior ColdFusion Developer, Webauthor.com

We value the wide range realtime and logged metrics that are readily available. It is a tool that is used by entire IT team – from developers, to DBAs to NetOps

Read full review on G2

Read our reviews

Read FusionReactor APM reviews on G2

Leave us a review

Review FusionReactor APM on G2

Dynamically Instrumenting ColdFusion Component Methods With FusionReactor Tracked Transactions In Lucee CFML

Originally posted by Ben Nadel on February 13, 2020; reproduced by kind permission.

One of the really fun features of ColdFusion is its highly dynamic nature. Whether you’re using onMissingMethod() or using getFunctionCalledName() or injecting methods, you can basically make your ColdFusion code do anything that you want it to. In celebration of this flexibility, I wanted to have some fun with my FusionReactor helper component, and see if I could dynamically add FusionReactor instrumentation (in the form of “tracked transactions”) to a ColdFusion component at runtime in Lucee CFML.

DISCLAIMER: Just because ColdFusion is a highly dynamic language, it doesn’t necessarily mean that you should be using all of these language features. Oftentimes, the cleverest code becomes the code that is hardest to maintain in the long run. In reality, you should strive for boring code that everyone can understand.

To explore this idea, I created a very silly ColdFusion component that has a variety of public and private methods. Within these various methods, I am including nested method calls that are executing both with and without explicit scoping. I added all of this complexity to make sure that my “proxy” logic handles the various ways in which a developer may have wired things together:

component
	output = false
	hint = "I provide a sample component on which to try annotating methods."
	{

	public any function init( required any javaAgentHelper ) {

		// This component is going to ask the JavaAgentHelper to add instrumentation to
		// all of the Public and Private methods. This will wrap them in "tracked
		// transactions", which I'm calling "Segments" (a hold-over from New Relic).
		javaAgentHelper.annotateMethods( variables );

		return( this );

	}

	// ---
	// ---

	public numeric function test() {

		sleep( randRange( 10, 50 ) );
		// Testing with and without scoping.

		return( getTickCount() );

	}

	public void function publicMethodA() {

		sleep( randRange( 10, 50 ) );

	}

	public void function publicMethodB() {

		sleep( randRange( 10, 50 ) );

	}

	public void function publicMethodC() {

		sleep( randRange( 10, 50 ) );

	}

	public void function publicMethodD() {

		sleep( randRange( 10, 50 ) );

	}

	// ---
	// ---

	private void function privateMethodA() {

		sleep( randRange( 10, 50 ) );
		// Testing with scoping.

	}

	private void function privateMethodB() {

		sleep( randRange( 10, 50 ) );
		// Testing without scoping.

	}

	private void function privateMethodC() {

		sleep( randRange( 10, 50 ) );

	}

	private void function privateMethodD() {

		sleep( randRange( 10, 50 ) );

	}

	private void function privateMethodE() {

		sleep( randRange( 10, 50 ) );

	}

	private void function privateMethodF() {

		sleep( randRange( 10, 50 ) );

	}

}

MyService.cfc (hosted on GitHub)

As you can see, this ColdFusion component is nothing more than a set of stubbed-out method calls that demonstrate simulated latency. The only point of interest to note is that the component is receiving an instance of JavaAgentHelper.cfc when it is instantiated. It is then asking the JavaAgentHelper.cfc component to add instrumentation to its own instance:

javaAgentHelper.annotateMethods( variables );

Now, before we dive into the details of what JavaAgentHelper.cfc is doing, let’s try to instantiate and consume the MyService.cfc ColdFusion component to see what happens in FusionReactor:

<cfscript>

	// MyService is going to use the JavaAgentHelper to "wrap" each method call so that
	// all method calls on MyService, whether PUBLIC or PRIVATE, will be instrumented
	// with a FusionReactor "Tracked Transaction".
	service = new MyService( new JavaAgentHelper() );

	dump( service.test() );

</cfscript>

test.cfm (hosted on GitHub)

If we run this page and then look in FusionReactor’s dashboard, we get the following data:

FusionReactor dashboard showing ColdFusion component method instrumentation.

As you can see, under the Relations tab, we get the full breakdown of all the public and private method calls made within MyService.cfc. The Gantt chart only shows a few levels; but, if you look at the full Transaction History, you can see all the nested method calls.

We can also see the same data in FusionReactor’s Cloud dashboard under the Tracing tab:

Dynamically Instrumenting ColdFusion Component Methods With FusionReactor

The Cloud dashboard shows all the same Transactions; but, is a bit more colorful.

Ok, now that we see what the automatic method instrumentation is doing for us, let’s look at the JavaAgentHelper.cfc to see how it works. Internally, the .annotateMethods() call is iterating over each method in the target component and is swapping out every given method with a proxy method that calls the original method, wrapped in a “Tracked Transaction”:

component
	output = false
	hint = "I help interoperate with the Java Agent that is instrumenting the ColdFusion application (which is provided by FusionReactor)."
	{

	// I initialize the java agent helper.
	public any function init() {

		// The FusionReactor Agent is not available in all contexts. As such, we have to
		// be careful about trying to load the Java Class; and then, be cautious of its
		// existence when we try to consume it. The TYPE OF THIS VARIABLE will be used
		// when determining whether or not the FusionReactor API should be consumed. This
		// approach allows us to use the same code in the calling context without having
		// to worry if the FusionReactor agent is installed.
		try {

			// NOTE: The FRAPI was on Version 8.2.3 at the time of this writing.
			variables.FRAPIClass = createObject( "java", "com.intergral.fusionreactor.api.FRAPI" );

		} catch ( any error ) {

			variables.FRAPIClass = "";

		}

		return( this );

	}

	// ---
	// ---

	/**
	* I wrap all of the methods defined in the given Component Scope (VARIABLES) with
	* PROXY methods that will automatically create a FusionReactor "tracked transaction"
	* that records the timing of each invocation.
	*
	* @privateScope I am the VARIABLES scope of the component being instrumented.
	* @annotatePrivateMethods I determine if private methods should be instrumented.
	*/
	public void function annotateMethods(
		required struct privateScope,
		boolean annotatePrivateMethods = true
		) {

		// In order to make sure the proxy methods can create FusionReactor segments,
		// let's store a reference to the JavaAgentHelper in the private scope. This will
		// then be accessible on the VARIABLES scope.
		privateScope.__javaAgentHelper__ = this;

		// -- START: Proxy method. -- //

		// Every relevant method in the given Component Scope is going to be replaced
		// with this PROXY method, which wraps the underlying call to the original method
		// in a FusionReactor Segment.
		// --
		// CAUTION: We need to use a FUNCTION DECLARATION here, not a CLOSURE, because
		// this Function needs to execute in the CONTEXT of the ORIGINAL component (ie,
		// it has to have all the correct Public and Private scope bindings).
		function instrumentedProxy() {

			var key = getFunctionCalledName();
			var proxiedKey = ( "__" & key & "__" );

			var segment = variables.__javaAgentHelper__.segmentStart( key );

			try {

				// NOTE: In a Lucee CFML component, both PUBLIC and PRIVATE methods can
				// be accessed on the VARIABLES scope. As such, we are able to invoke the
				// given method on the private component scope regardless of whether or
				// not the proxied method is public or private.
				return( invoke( variables, proxiedKey, arguments ) );

			} finally {

				variables.__javaAgentHelper__.segmentEnd( segment );

			}

		}

		// -- END: Proxy method. -- //

		// Replace each Function in the target component with a PROXY function.
		// --
		// NOTE: Both Public and Private methods show up in the private scope of the
		// component. As such, we only need to iterate over the private scope when
		// looking for methods to instrument.
		for ( var key in structKeyArray( privateScope ) ) {

			// Skip if not a defined, custom method.
			if (
				( key == "init" ) ||
				! structKeyExists( privateScope, key ) ||
				! isCustomFunction( privateScope[ key ] )
				) {

				continue;

			}

			// Skip if we're only annotating PUBLIC methods, and this key isn't aliased
			// in the PUBLIC scope.
			if (
				! annotatePrivateMethods &&
				! structKeyExists( privateScope.this, key )
				) {

				continue;

			}

			var proxiedKey = ( "__" & key & "__" );

			// Regardless of whether or not we're dealing with a PUBLIC method, we always
			// want to create a proxy in the PRIVATE scope - remember, all methods, both
			// PUBLIC and PRIVATE, are accessible on the private Component scope.
			privateScope[ proxiedKey ] = privateScope[ key ];
			privateScope[ key ] = instrumentedProxy;

			// However, if the original method is PUBLIC, we ALSO want to alias the given
			// method on the PUBLIC scope so that we can allow for explicitly-scoped calls
			// (ie, this.method).
			if ( structKeyExists( privateScope.this, key ) ) {

				privateScope.this[ key ] = privateScope[ key ];

			}

		}

	}

	/**
	* I end the segment and associate the resultant sub-transaction with the current
	* parent transaction.
	*
	* @segment I am the OPAQUE TOKEN of the segment being ended and timed.
	*/
	public void function segmentEnd( required any segment ) {

		if ( shouldUseFusionReactorApi() ) {

			// In the case where the segment is not available (because the FusionReactor
			// agent has not been installed), it will be represented as an empty string.
			// In such cases, just ignore the request.
			if ( isSimpleValue( segment ) ) {

				return;

			}

			segment.close();

		}

	}

	/**
	* I start and return a new Segment to be associated with the current request
	* transaction. The returned Segment should be considered an OPAQUE TOKEN and should
	* not be consumed directly. Instead, it should be passed to the .segmentEnd() method.
	* Segments will show up in the Transaction Breakdown table, as well as in the
	* "Relations" tab in the Standalone dashboard and the "Traces" tab in the Cloud
	* dashboard.
	*
	* @name I am the name of the segment being started.
	*/
	public any function segmentStart( required string name ) {

		if ( shouldUseFusionReactorApi() ) {

			return( FRAPIClass.getInstance().createTrackedTransaction( javaCast( "string", name ) ) );

		}

		// If the FusionReactor API feature is not enabled, we still need to return
		// something as the OPAQUE SEGMENT TOKEN so that the calling logic can be handled
		// uniformly within the application code.
		return( "" );

	}

	// ---
	// ---

	/**
	* I check to see if this machine should consume the FusionReactor static API as part
	* of the Java Agent Helper class (this is to allow the methods to exist in the
	* calling context without a lot of conditional consumption logic).
	*/
	private boolean function shouldUseFusionReactorApi() {

		// If we were UNABLE TO LOAD THE FRAPI CLASS, there's no API to consume.
		if ( isSimpleValue( FRAPIClass ) ) {

			return( false );

		}

		// Even if the FRAPI class is loaded, the underlying FusionReactor instance may
		// not yet be ready for interaction. We have to wait until .getInstance() returns
		// a non-null value.
		if ( isNull( FRAPIClass.getInstance() ) ) {

			return( false );

		}

		return( true );

	}

}

JavaAgentHelper.cfc (hosted on GitHub)

There’s a lot of fun, dynamic stuff going on in this code: we’re declaring a function inside of another function (that is not a closure), we’re injecting methods into a component, we’re dynamically checking the name of an invoked method, and we’re messing with Public and Private scopes.

It’s just hella exciting!

Dynamically Instrumenting ColdFusion Component Methods using FusionReactor Cloud

For more information about FusionReactor Cloud, start a free trial or request a free demo.

Sending FusionReactor Tracked Transaction Metrics To The Cloud Dashboard With Lucee CFML

Originally posted by Ben Nadel on February 1, 2020, reproduced by kind permission.

One of the nice features of FusionReactor is that when you create a sub-Transaction with the FRAPI, you can graph that Transaction performance against the server’s CPU and Heap profile. This helps identify correlations, bottlenecks, and performance opportunities. This works out-of-the-box with the Standalone dashboard. However, at the time of this writing, FusionReactor does not send sub-Transaction metrics to the Cloud dashboard automatically. In order to graph sub-Transaction metrics in the Cloud dashboard, you have to explicitly enable them in your ColdFusion code. This was not obvious to me; so, I wanted to demonstrate how this works in Lucee CFML.

ASIDE: I want to give a special shout-out to Michael Flewitt, a Support engineer at Intergral (makers of FusionReactor), who spent no less than 3 hours working with me, helping me to figure out how this code works (and why I wasn’t seeing the results that I expected to see). He is a true champion!

When you create a tracked-Transaction in your ColdFusion code, FusionReactor is implicitly logging six metrics about that Transaction’s performance. So, for example, when you create a Transaction called demo-segment:

frapi.createTrackedTransaction( "demo-segment" )

… FusionReactor implicitly logs the following numeric-aggregate metrics:

  • /transit/txntracker/demo-segment/active/activity
  • /transit/txntracker/demo-segment/active/time
  • /transit/txntracker/demo-segment/history/activity
  • /transit/txntracker/demo-segment/history/time
  • /transit/txntracker/demo-segment/error/activity
  • /transit/txntracker/demo-segment/error/time

Because of these metrics, we can graph the sub-Transaction, demo-segment, in the Standalone dashboard:

Sub-transaction metrics being graphed in the Standalone FusionReactor dashboard.
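
Putting the pieces together, the lifecycle that generates these metrics can be sketched as follows. This is a minimal sketch, not code from the article: it assumes the FusionReactor agent is installed, and that the transaction object returned by createTrackedTransaction() is ended with a .close() call (as in the FRAPI):

```
<cfscript>

	// Sketch: create a tracked sub-transaction named "demo-segment", time some
	// work, and close it. Closing the transaction records its timing, which
	// feeds the six implicit metrics listed above.
	frapi = createObject( "java", "com.intergral.fusionreactor.api.FRAPI" ).getInstance();

	segment = frapi.createTrackedTransaction( "demo-segment" );

	try {

		sleep( 100 ); // The work being timed.

	} finally {

		segment.close();

	}

</cfscript>
```

The try/finally ensures the segment is closed (and its timing recorded) even if the timed work throws an error.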

In order to do the same thing with the Cloud dashboard, we have to explicitly enable the aforementioned metrics to be streamed to the Cloud. To see this in action, I’ve created a simple CFML page that creates a tracked-Transaction and then calls .enableCloudMetric() on 4 of the 6 implicitly-created metrics:

<cfscript>

	// Get the running FusionReactor API (FRAPI) instance from the FRAPI factory class.
	// --
	// Java Docs: https://www.fusion-reactor.com/frapi/8_0_0/com/intergral/fusionreactor/api/FRAPI.html
	frapi = createObject( "java", "com.intergral.fusionreactor.api.FRAPI" ).getInstance();

	// ------------------------------------------------------------------------------- //
	// ------------------------------------------------------------------------------- //

	// By default, FusionReactor will use the name of the application as defined in the
	// Application.cfc ColdFusion framework component. However, we can set the name
	// programmatically.
	frapi.setTransactionApplicationName( "FRAPI-Testing" );

	// By default, FusionReactor will calculate the transaction name based on the request
	// context. It actually "understands" the fact that we're using Framework One (FW/1)
	// in production and uses the "action" value as the transaction name. That's the
	// beauty of using an APM product that is embedded within the ColdFusion and CFML
	// community. That said, we can set the transaction name programmatically.
	// --
	// See Framework Support: https://www.fusion-reactor.com/support/kb/frs-431/
	frapi.setTransactionName( "testing-cloud-transaction-metrics" );

	// ------------------------------------------------------------------------------- //
	// ------------------------------------------------------------------------------- //

	try {

		// Let's explicitly wrap a segment of our code in a custom, tracked transaction.
		// This way, we can see how this code executes in the context of parent request.
		subtransaction = frapi.createTrackedTransaction( "demo-segment" );

		// When a custom Transaction is explicitly created in the ColdFusion code,
		// FusionReactor sends the Transaction data to the CLOUD dashboard; however, by
		// default, it doesn't send the METRICS about that Transaction to the CLOUD
		// dashboard. This means that we can see the Transaction in the Tracing and the
		// data-tables; but, we can't graph it in our custom graphs. In order to do this,
		// we have to explicitly set the Transaction-related metrics to be cloud-enabled.
		frapi.enableCloudMetric( "/transit/txntracker/demo-segment/active/activity" );
		frapi.enableCloudMetric( "/transit/txntracker/demo-segment/active/time" );
		frapi.enableCloudMetric( "/transit/txntracker/demo-segment/history/activity" );
		frapi.enableCloudMetric( "/transit/txntracker/demo-segment/history/time" );
		// frapi.enableCloudMetric( "/transit/txntracker/demo-segment/error/activity" );
		// frapi.enableCloudMetric( "/transit/txntracker/demo-segment/error/time" );

		sleep( randRange( 500, 1500 ) );

	} finally {

		// End the tracked transaction / segment.
		subtransaction.close();

	}

</cfscript>

<!--- ------------------------------------------------------------------------------ --->
<!--- ------------------------------------------------------------------------------ --->

<script>
	// Simulate regular throughput / traffic to this endpoint by refreshing the page.
	setTimeout(
		function() {
			window.location.reload();
		},
		1000
	);
</script>
cloud-transaction.cfm (hosted on GitHub)

Every 60 seconds, metrics in the local FusionReactor instance are aggregated and sent to the Cloud. So, once this demo page has been running for a while, and the metrics have been sent to the Cloud and have had time to be processed by the metrics-ingress, we should be able to find the following metrics in the custom-Graph tooling:

NOTE: I’m including the “error” metrics below, even though I didn’t enable them for the Cloud in my demo code. I’m including them for documentation / completeness purposes.

  • /custom/transit/txntracker/demo-segment/active/activity
  • /custom/transit/txntracker/demo-segment/active/time
  • /custom/transit/txntracker/demo-segment/history/activity
  • /custom/transit/txntracker/demo-segment/history/time
  • /custom/transit/txntracker/demo-segment/error/activity
  • /custom/transit/txntracker/demo-segment/error/time

Notice that each of the metrics has been automatically prefixed with /custom/:

Sub-transaction metrics being graphed in the Cloud FusionReactor dashboard.

NOTE: Part of the reason that I spent 3-hours with Michael Flewitt is because not all of the metrics were consistently showing up for me. And, in fact, even as I write this, I don’t see all of the metrics being logged in this demo. This appears to be either a bug in the rendering code; or, a timing issue with the metrics-ingress processing.

At this point, I can graph my sub-Transactions metrics in the Metrics section of the Cloud dashboard; but, I can’t add it to any of my Server performance graphs. In order to do this, I have to add Filtering to the query and then save the graph as a Server Template using the literal_or function:

Sub-transaction metrics need a Filter in order to be added to the Server graphs in the Cloud FusionReactor dashboard.

Once I do this (and add the graph to my metric’s “Profile”), I can then find the Demo Segments graph in the Graphs section of my server performance monitoring:

Sub-transaction metrics being graphed against the Server performance in the Cloud FusionReactor dashboard.

Hopefully this is somewhat helpful for anyone else who might be using FusionReactor with their Lucee CFML code; and, is using the Cloud dashboard, not the Standalone dashboard.

Cloud Metrics is a Bit of an Uphill Battle

I am loving FusionReactor’s out-of-the-box functionality; but, to be honest, working with custom metrics has been an uphill battle. I am sure that a lot of this is my unfamiliarity with the tooling. But, some of the battle revolves around the stability of the platform. Some points of friction:

  • The “Metrics” dropdown menu in the Query configuration fails to load like 90% of the time. Which means that creating a simple graph involves several minutes of page-refreshing in an attempt to get the “Metrics” dropdown to load.
  • The custom metrics which I am enabling in my code often don’t show up in the “Metrics” dropdown. Which means, even when the dropdown finally loads (see point above), my metrics are not there.
  • The “Filter” configuration only seems to load if the “Metrics” also loaded. And, since the Metrics fail to load most of the time (see above point), I can’t add Filtering to my queries.
  • The Filtering functionality is confusing. For example, why do I even have to add filtering for “transaction related” metrics? If a Transaction can only ever be created as part of a running application (on a server), why do I have to explicitly identify the metric as a “Server template”? It would be great if all transaction-related metrics were defaulted as server templates.
  • The difference between a “Graph” and a “Profile” took me a while to understand. I think this is more of a user-interface (UI) problem than anything else. Since Graphs and Profiles are created and updated in the same place, the optionally-hierarchical relationship between the Profile and the graphs that the Profile renders is not immediately obvious. Maybe I’ll make a demo of how this works now that I think I finally get it.

Sub-transaction metrics often fail to load in the Cloud FusionReactor dashboard.

I’ve discussed some of these issues with the FusionReactor support team and they are looking into it. For example, in the future, I won’t have to explicitly enable “cloud metrics” for sub-Transactions – that will just automatically happen.

While I’ve had some friction with the Cloud Metrics, I do want to be clear that I am loving FusionReactor. I’ve been living in it for the past week and have already identified a multitude of performance issues in my ColdFusion code. In fact, my JIRA backlog of performance-related tickets is getting a bit overwhelming – there are only so many hours in the day.

See more about FusionReactor Cloud, which has a free 14-day trial.

Configuring FusionReactor in CommandBox

CommandBox is a tool that allows you to deploy your CFML applications through an easy-to-use command-line interface. 

Configuring FusionReactor in CommandBox

Instead of deploying a Tomcat-based installer version of ColdFusion or Lucee, CommandBox uses the Undertow servlet container and deploys a WAR file for the CFML server. This allows you to switch between a Lucee and a ColdFusion server with the same application and configuration. 

In terms of configuration, rather than having a multitude of small files, you can control everything from a single JSON file containing all settings for the Undertow servlet, the application server, and any installed modules.
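
As a sketch of what that single file can look like, here is a hypothetical server.json fragment. The name, cfengine, and port values are placeholders; the fusionreactor block holds the settings that the commandbox-fusionreactor module reads (the same keys you can also set via the server set command):

```
{
	"name": "myApp",
	"app": {
		"cfengine": "lucee@5"
	},
	"web": {
		"http": {
			"port": 8080
		}
	},
	"fusionreactor": {
		"reactorconfFile": "path/reactor.conf",
		"autoApplicationNaming": false,
		"defaultApplicationName": "myApp"
	}
}
```

Because everything lives in this one file, the same configuration can be committed to source control and reused across Lucee and ColdFusion engines.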

Commandbox-fusionreactor module

To install FusionReactor in CommandBox, we recommend using the commandbox-fusionreactor module. This is a module designed and maintained by Ortus (makers of CommandBox).

The module is stored in ForgeBox, along with the FusionReactor package that ensures your FusionReactor instance is the latest version. This makes installation simple, as you can run a single command to load the module:

box install commandbox-fusionreactor

Licensing FusionReactor

With the commandbox-fusionreactor module installed, you have access to the fr command namespace.

You can run commands such as fr open to open FusionReactor in the browser.

To make licensing FusionReactor simple, you can run fr register "myLicenseKey", which automatically applies your license key to each running instance.

Passing in configuration

Any modified settings in FusionReactor are stored in the reactor.conf file of each FusionReactor instance. With CommandBox you can set this reactor.conf file to be passed into each running instance by running:

server set fusionreactor.reactorconfFile=path/reactor.conf

There are also several values you can set for FusionReactor directly through the server set command, see the full list here: https://commandbox.ortusbooks.com/embedded-server/fusionreactor#additional-jvm-args

Setting a fixed Application name

By default, FusionReactor automatically detects the name of the running application and applies it to transactions.

If you would like to disable this, you can do so by running:

server set fusionreactor.autoApplicationNaming=false
server set fusionreactor.defaultApplicationName=myApp

Setting the Instance name

The instance name of FusionReactor will either be set to the name of the directory you are running box from, or to the name of the CommandBox server.

For example, if I have no server name set and run CommandBox from a folder called test, my instance is called test.

You can override this value via the server name, which is a value defined in the server.json config file. You can set this value by running:

//Within CommandBox
server set name="myName"

//Outside CommandBox using environment variables
box server set name = "$var1+$var2+myName"

Removing FusionReactor

When removing the FusionReactor module, it is important to ensure that the --system flag is set on the uninstall command, i.e.:

box uninstall commandbox-fusionreactor --system

If the system flag is not specified, CommandBox will try to uninstall from the current package, not from the CommandBox system packages.

Running box restart after performing the uninstallation ensures that the module is not kept in memory and reloaded when a CommandBox server is restarted.

Running in Linux

When running on a Linux desktop, we have seen that CommandBox can crash without warning. This is due to an issue with CommandBox interacting with the system tray.

If you are running Ubuntu 18.04 or greater, you will need to install the libappindicator-dev package to allow CommandBox to use the system tray.

Alternatively, you can disable the CommandBox system tray element. To do this, run the following commands:

 server set trayEnable=false
 config set server.defaults.trayEnable=false