FusionReactor Blog - News and expert analysis for Java APM users.

Spring Boot Performance Monitoring


Modern applications are highly distributed in nature, and they are typically bundled with various dependencies such as databases, caches, and much more. The complexity of such software makes monitoring essential. Spring Boot ships with a monitoring toolkit that exposes operational data over HTTP endpoints or JMX beans – you can also configure a proxy profile to display Spring Boot JMX metrics. Through these endpoints we get a firsthand reading of operational information about the running application – health, metrics, info, thread dumps, environment, and so on.

The Spring Boot monitoring toolset gives you immediate, extensive insight into your Java applications running on the Spring MVC framework. In essence, the Spring Boot Actuator is a unifier that introduces production-ready features into our application. With this dependency in place, gathering metrics, monitoring our apps, understanding traffic, and checking the state of our database all become straightforward. The Actuator discovers the application name through the spring.application.name property in the application.properties file, falling back to the Spring Boot application startup class if that property is not defined.

The Spring Boot Actuator

Notably, Spring Boot supports many additional features that help you manage and monitor your application, from the development stage through to production. Auditing exercises such as gathering metrics and health data are implemented seamlessly through the Actuator. Once added to the classpath, this dependency supplies us with several endpoints right out of the box.

As with several other Spring modules, we can extend and configure the Spring Boot Actuator in a variety of ways, which we will look into shortly.

Getting Started

Typically, a monitoring system has three core components:

  1. A dashboard for visualizing the data stored in the database.
  2. A metric store – a time-series database such as Prometheus, InfluxDB, or TimescaleDB.
  3. Applications that periodically push metrics from their local state to the metric store.

We can also add further components, such as alerting – where the alert channel could be email, Slack, or another conventional medium. The alerting component sends alerts to the application owners or to subscribers of events. For this guide, we will use Grafana as the dashboard and alerting system, and Prometheus as the metric store.

The Requirements

  1. An IDE
  2. Java Development Environment
  3. Gradle

First, we create a project from the Spring Boot Initializr and add the dependencies. Next, we use the Micrometer library, an instrumentation façade that provides integrations for many metric stores such as Prometheus, New Relic, and Datadog, to mention a few.

Micrometer provides the following metrics ready for use:

1. JVM metrics.

2. Database metrics.

3. HTTP request metrics.

4. Cache system and related metrics.

While some of these metrics are enabled by default, others require customized settings. We will use the application.properties file to handle enabling, disabling, and customization. We also need the Spring Boot Actuator to expose the Prometheus metrics endpoint.

Steps:

  1. Add the following dependencies to the build.gradle file, just as done in line 7 of Figure 1:
  • io.micrometer:micrometer-registry-prometheus
  • org.springframework.boot:spring-boot-starter-actuator
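In the build.gradle dependencies block, those two lines might look like this (the Spring Boot plugin and version management are assumed to be configured elsewhere in the build):

```groovy
dependencies {
    // Exposes the /actuator/* management endpoints
    implementation 'org.springframework.boot:spring-boot-starter-actuator'
    // Micrometer registry that formats metrics for Prometheus scraping
    implementation 'io.micrometer:micrometer-registry-prometheus'
}
```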

Figure 1:

  2. Enable the Prometheus export functionality by adding the following line to the properties file.
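In Spring Boot 2.x this is a one-line property, assuming the Prometheus registry dependency is on the classpath:

```properties
management.metrics.export.prometheus.enabled=true
```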

Figure 2:

  3. As soon as this property is enabled, Micrometer automatically accumulates data about the application. This data can be viewed by visiting the /actuator/prometheus endpoint – which the Prometheus scrape configuration uses to fetch data from our application servers.

Although we have added the line above to the properties file, we still won't be able to browse the Prometheus endpoint, since it is disabled by default. To fix that, we add prometheus to the list of exposed management endpoints.
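A typical exposure list in application.properties, kept deliberately small, might be:

```properties
management.endpoints.web.exposure.include=health,info,prometheus
```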

Figure 3:


Not all actuator endpoints are enabled, as this could open a security loophole. We choose endpoints selectively, especially in a production environment. Even where we need these endpoints, it is not advisable to expose certain ones to the whole world, as that puts a lot of sensitive application data at risk. Therefore, it is best to use a proxy to hide these endpoints from the outside world.

Various components of the HTTP request metrics are also customizable, such as the SLA (Service Level Agreement) boundaries and whether a percentile histogram should be computed. This can be done through the management.metrics.distribution.* properties.

A sample application.properties can contain the following lines;
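As a sketch – the application name and SLA boundaries here are illustrative – such a file could combine the export, exposure, and distribution settings discussed above:

```properties
spring.application.name=stock-demo
management.endpoints.web.exposure.include=health,info,prometheus
management.metrics.export.prometheus.enabled=true
# Compute a percentile histogram and SLA buckets for HTTP server requests
management.metrics.distribution.percentiles-histogram.http.server.requests=true
management.metrics.distribution.sla.http.server.requests=100ms,500ms
```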

Figure 4:

  4. We can now run the application and navigate to http://localhost:8080/actuator/prometheus in a browser to view our data.

Figure 5:

  5. The above data displays HTTP request details, such as:
  • exception=None – no exception occurred; if one had, we could use this label to filter how many requests failed with that exception.
  • method=GET – the HTTP method name.
  • status=200 – the HTTP status code.
  • le=xyz – the histogram bucket boundary, i.e. requests whose processing time is less than or equal to this value.
  • N.0 – the number of times that endpoint was called.

This data can be represented as a pie chart or histogram plotted in Grafana; for example, to plot the p95 latency over 5 minutes we can use the following query.
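A PromQL query along these lines – metric names follow Micrometer's default http_server_requests naming – would plot the p95 over a 5-minute window:

```promql
histogram_quantile(0.95,
  sum(rate(http_server_requests_seconds_bucket[5m])) by (le))
```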

Figure 6:

Custom Metrics

Often we need custom metrics; use cases include the number of logged-in users, the number of orders in the order queue, currently available stock details, and so on. Certain business use cases can only be solved with custom metrics. Micrometer supports multiple kinds of metrics, such as gauges, counters, timers, long task timers, and distribution summaries, but for the scope of this walkthrough we will focus mainly on gauges and counters. A gauge gives us instantaneous data, like the length of a queue, whereas a counter is a monotonically increasing count, starting from zero.

For this, we’re going to create a demo stock manager that will store details in memory and would provide two functionalities:

1. Add items.

2. Get items.

With that, we create a single counter and a gauge in the init method. Whenever getItems is called we increment the counter, while the gauge tracks the stock size; a call to addItems likewise updates the gauge.
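As a library-free sketch of the same idea – class and method names are illustrative, and in the real application the order count would be a Micrometer Counter while the stock size would be registered as a Gauge:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

// In-memory stock manager mirroring the demo: the counter only ever
// increases (orders placed), while the gauge reads the live stock size.
class StockManager {
    private final List<Integer> stock = new ArrayList<>();
    private final AtomicLong ordersPlaced = new AtomicLong(); // counter

    public void addItems(List<Integer> items) {
        stock.addAll(items); // the gauge value (stock size) changes implicitly
    }

    public List<Integer> getItems(int count) {
        ordersPlaced.incrementAndGet(); // count every order
        int n = Math.min(count, stock.size());
        List<Integer> taken = new ArrayList<>(stock.subList(0, n));
        stock.subList(0, n).clear();
        return taken;
    }

    public int stockSize()    { return stock.size(); }       // gauge reading
    public long ordersCount() { return ordersPlaced.get(); } // counter reading
}
```

Replaying the walkthrough below – adding ten items, then ordering three – leaves the gauge at seven and the counter at one.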

Figure 7:

For demonstration purposes, we’ll add two endpoints to add items and get items.

Figure 8:

Firstly, we add ten items using two API calls.

  1. curl -X POST "http://localhost:8080/stocks?items=1,2,3,4"
  2. curl -X POST "http://localhost:8080/stocks?items=5,6,7,8,9,10"

Now if we browse the Prometheus endpoint, we can see the following data, indicating that we currently have 10 items in stock.

Figure 9:

Now, we’re going to place an order of size three;


Then again, if we browse the Prometheus endpoint, we get the following data, which indicates the stock size has changed to seven.

Figure 10:

We can also see that the counter has been updated to a value of 1, indicating that just a single order has been placed.

Figure 11:

Troubleshooting with Spring boot performance monitoring

Tools like FusionReactor permit you to troubleshoot the most complex application performance issues, giving you instant insight into where problems are occurring or where your application is performing poorly.

Critical Features for Spring Boot Performance Monitoring Tools

Below are some of the critical features that FusionReactor provides in order to monitor and find performance issues within your application.

  • Production Debugger – production safe, user-controlled, enclosed debug environment.
  • Code Profiler – production safe; instantly see performance bottlenecks.
  • Memory Profiler – production safe; real-time heap analysis to locate memory leaks.
  • Instant code decompilation.
  • JMX MBeans support.
  • Crash Protection – constant application availability and performance checking.
  • Alerting for poor performance or when resource thresholds (memory/CPU) are reached.
  • Isolate long-running threads.

FusionReactor is an essential companion to the Actuator for monitoring and interacting with your application, as it goes beyond "just monitoring". FusionReactor instruments web-based transactions within the Spring 2.x and 3.x MVC Framework, providing a seamless transaction identification process. It also proactively mitigates server downtime while expediting time to resolution like no other Spring Boot monitoring tool on the market.

Eclipse Performance Monitor

Introducing the Eclipse Performance Monitor

Eclipse IDE (Integrated Development Environment) is a multi-lingual development environment for almost any programming language you could think of – Java, C, C++, Clojure, Groovy, Haskell, JavaScript, Julia, Perl, PHP, Ruby, Rust, and much more. Nonetheless, Eclipse is best known as a dedicated Java IDE.

Listed among the top three Java IDEs, this next-generation IDE is available in both desktop and cloud editions, hence its extensive user base. But as the saying goes, "To whom much is given, much is expected." With an increasingly large user community, the requirement for certain functionality becomes more pressing over time.

For a renowned IDE like Eclipse, serviceability concerns such as debugging, performance monitoring, and profiling have to be treated with a certain degree of sensitivity. That is why we will discuss how to carry out one of the most crucial activities in the software development life cycle: performance monitoring on an Eclipse IDE.

How to Execute Performance Monitoring on An Eclipse IDE

For developers well versed in the Java programming language who want to add distinctive functionality to the Eclipse IDE, a PDE (Plug-in Development Environment) is conveniently available. The PDE gives the IDE robust tools that help developers speed up the entire development process. Yet for more in-depth capabilities, it is best to use a dedicated Eclipse performance monitor such as FusionReactor.

The UI Freeze Monitor

The Eclipse IDE provides an all-inclusive application programming interface. As mentioned earlier, it comes packed with its own performance and profiling toolset in addition to external plug-ins – all thanks to the PDE. Before we go into the details of Eclipse's dedicated tracing facility, we should look at the interactive UI monitoring tool that ships with the IDE. To activate this UI responsiveness measurement feature, navigate to the preferences by following the steps below.

Steps:

1. Click Window > Preferences > UI Responsiveness Monitoring.

2. Once activated, a stack trace is written to the Error Log view whenever a UI freeze occurs, as in the window below.

In the event of a deadlock, it is best to report such freezes at https://bugs.eclipse.org/ so that the team can fix them.

For the rest of this guide we will highlight the integrated performance-inspection facilities of Eclipse, along with steps for applying them in as much detail as possible. Let's get into the nitty-gritty of using the built-in tracing tools.

Utilizing the In-built Tracing Instruments of Eclipse.

The Eclipse IDE offers a tracing capability that can be activated on demand. Upon activation, additional plug-in information is written to the console at runtime. You can activate the native tracing feature with the -debug start parameter, at which point the IDE looks for a .options file in the Eclipse installation directory. The file contains one key=value pair per line.

A plug-in can be designed so that its tracing options appear in the preference settings; it is also possible to toggle these tracing options at runtime. An example of the Eclipse tracing preference page is shown in the following screenshot.

These tracing options are also available in a launch configuration. The following examples cover specific tracing functions.

Tracing the Start-up Time of Plug-ins

In this example, we trace the start time of each plug-in during start-up. For this, we create a distinct .options file just as in the last instance, but with different content, shown below:
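A minimal .options file for this purpose – these are standard Equinox/OSGi debug keys – could contain:

```properties
# Print the activation time of each bundle during start-up
org.eclipse.osgi/debug=true
org.eclipse.osgi/debug/bundleTime=true
```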

Use the following command to start Eclipse:
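Assuming the .options file sits next to the Eclipse executable, a launch command along these lines would send the trace output to the console (paths are illustrative):

```shell
./eclipse -debug .options -consoleLog > startup-trace.log 2>&1
```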

The 'Starting application' timestamp describes when OSGi (Open Service Gateway Initiative) has finished its activation process, while the 'Application Started' timestamp indicates when the application itself has started, just as the name implies. From the output you can then extract the information you are most interested in; for instance, a small script could extract the activation time of each bundle and then sort the bundles by that time.

Monitoring the Resource Plugin.

The following example uses the same .options file approach to trace resource activity.
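For instance, turning on the resources plug-in's master debug switch – one of several org.eclipse.core.resources trace keys – looks like:

```properties
org.eclipse.core.resources/debug=true
```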

Implementing Tracking for your Custom Plugin

Through the Eclipse Tracing API, users can also implement tracing for their custom plug-ins. To let users add their tracing options to the preference page and turn them on at runtime, contribute them via the org.eclipse.ui.trace.traceComponents extension point. See the TracingPreferencePage for more details on how to implement this.

Tracing for Key Binding

The tracing functionality of Eclipse allows one to trace which commands are associated with a certain key binding. The following listing contains the trace options to enable that.
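The relevant entries, as documented for the org.eclipse.ui plug-in, are:

```properties
org.eclipse.ui/debug=true
org.eclipse.ui/trace/keyBindings=true
```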

Eclipse Sleak

The Sleak monitor tracks the creation and disposal of SWT (Standard Widget Toolkit) graphics resources. You can get this feature directly from SWT Development Tools or install the Eclipse plugin via the SWT Tools Update Sites.

To activate the Sleak functionality, you can use the ‘Tracing’ tab in your Eclipse runtime configuration.

You can also activate this component inside the Eclipse IDE. To do this, start Eclipse with the -debug option from the command line, and create a .options file in the Eclipse installation directory with the following entries:
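The commonly documented entries for Sleak are:

```properties
org.eclipse.ui/debug=true
org.eclipse.ui/trace/graphics=true
```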

Once you start the Eclipse IDE, you could find the Sleak view under;

Window > Show View > Other… > SWT Tools > Sleak.

Sleak allows you to take snapshots and conveniently creates a diff for comparison. You can click 'Stacktrace' to access the stack trace utility that shows where a resource was allocated. Other tools that can be used for Eclipse performance monitoring include the NetBeans profiler, FusionReactor, GCViewer, VisualVM, and JIP, to name a few.

Minimize server downtime

The overall idea of monitoring performance in your Eclipse environment is to actively minimize server downtime and speed up the time to resolve issues. Although there is an endless list of performance monitoring tools for a Java development environment such as Eclipse, FusionReactor stands out from the crowd: it goes beyond conventional monitoring by actively tuning the Eclipse IDE, minimizing server downtime, and accelerating time to resolution unlike any other Eclipse performance monitoring tool available.

FusionReactor's performance monitor comes with a unique package that outclasses many conventional monitoring platforms. It gives you a broad and instant view of what is going on within your Eclipse environment so you can evaluate how much time is needed for each transaction.

How to Find Memory Leaks in Java Web Applications

Finding memory leaks in your Java application can be a needle-in-a-haystack exercise if you don't know your way around the Java Virtual Machine (JVM) production environment. However, depending on your profiling tool, you can easily analyze your Java memory consumption while obtaining instantaneous insight into the heap of your Java production applications. But before we go into the details of how to find memory leaks in Java web applications, let's look at what a Java memory leak is, the possible causes of such leaks, and how to fix them.

Java Memory Leak

A memory leak is simply caused by a reference chain that is held in the PermGen and cannot be garbage-collected. Sounds like gibberish, right? Well, keep calm and follow along while I explain further. Web containers use a class-to-class-loader mapping to isolate web applications, and because a class is uniquely identified by its name and the class loader that loaded it, you can have a class with the same name loaded multiple times in a single JVM – each copy with a distinct class loader.


  • An object retains a reference to the class (java.lang.Class) that instantiated it
  • The class retains a reference to the class loader that loaded it 
  • The class loader retains a reference to every class that it loaded.

This potentially becomes a very big reference graph to handle. These classes are loaded directly into the PermGen. Retaining a reference to a particular object from a web application pins every class loaded by the web application into the PermGen. These references often remain even after a web application is reloaded and with each new reload, more classes get pinned or stuck in the PermGen – which in due course gets full.

What is a PermGen?

PermGen, short for Permanent Generation, is the area of the JVM heap dedicated to storing the JVM's internal representation of Java classes and interned String instances. In simple terms, it is an exclusive heap region separate from the primary Java heap, where the JVM keeps metadata about the classes that have been loaded.

Many Java Servlet containers, such as Apache Tomcat 7.0 and upwards, enable the org.apache.catalina.core.JreMemoryLeakPreventionListener class by default. This memory-leak handler won't help, however, with more sophisticated issues such as PermGen errors on reload or bugs in the application itself. It gets more interesting still when neither the Tomcat container nor the application (at least not directly) is causing the leak, but rather a bug in JRE code triggered by some third-party library.

With the Java Development Kit JDK 6 (Update 7 or later) comes a handy tool that ships with the JDK and makes life a whole lot easier for us: VisualVM. This graphical tool connects to any JVM, lets you trigger garbage collection on the JVM's heap, and lets you navigate that heap. In Tomcat, the class loader for a web application is an instance of org.apache.catalina.loader.WebappClassLoader. So if our Tomcat instance has just a single web application deployed, there should only ever be one instance of this class in the heap. If there are more, we have a leak.

How to find memory leaks in Java web applications

Now that we have that out of the way, let’s quickly dive into the steps on how to detect and avoid Java memory leaks. Let’s immediately look at how we can use VisualVM to figure this out.


1. Open a command prompt and type the following command to start VisualVM:
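On JDK 6u7 and later the launcher ships in the JDK's bin directory, so with that directory on your PATH the command is simply:

```shell
jvisualvm
```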


A window similar to figure 1 would pop up.

Figure 1: The Java VisualVM 

2. Right-click on Tomcat from the sidebar on the left-hand side then select ‘Heap Dump’.

Figure 2: Heap Dump: Click on the ‘OQL Console’ button. 

3. Click on the 'OQL Console' button at the top of the Heap Dump navbar. This opens a console that allows you to query the heap dump. For this exercise, we want to locate all instances of org.apache.catalina.loader.WebappClassLoader, so enter the command below in the resulting console:

select x from org.apache.catalina.loader.WebappClassLoader x

Figure 3: ‘OQL Query Editor Console’: Type in the above command and click execute

4. In this case, VisualVM found two instances of the web application class loader: one for the web application itself and the other for the Tomcat manager application.

Use the Tomcat manager application to restart the web application at http://localhost:8080/manager/html and take yet another heap dump of the Tomcat process.

Figure 4: Restart the web application and navigate back to the ‘OQL Console’ to repeat step 3 again

5. Notice the extra instance compared with the previous step. One of these three instances should have been garbage-collected, yet it wasn't. Thanks to Tomcat we can easily tell which instance was not collected, as all active class loaders have the field 'started' set to 'true'.

In order to find the invalid instance, click through each class loader instance until you find the one whose ‘started’ field is set to ‘false’.

Figure 5: Click through each class loader instance in the heap dump to spot the faulty one – with the ‘started’ field set to false

6. Now that we have spotted the class loader that causes the leak, we need to determine what object is holding a reference to it. Evidently, a good number of objects will be referenced by many other objects, but in essence only a limited number of these objects form the root of the reference graph. Those are the objects we are interested in.

Therefore, on the bottom pane of the instances tab, right-click on the object instances that form the root of the reference graph and select ‘Show nearest GC root’.  The resulting window should look like this:

Figure 6: Right-click on the object instances that forms the root of the reference graph and select ‘Show nearest GC root’.

7. Right-click on the instance and select ‘Show Instance’.

Figure 7: Right-click on the instance and select ‘Show Instance’.

8. From this, we can deduce that the referencing object is an instance of the sun.awt.AppContext type. We can also see that the contextClassLoader field in AppContext holds a reference to the WebappClassLoader; this is the errant reference causing the memory leak. Next, we figure out what instantiated sun.awt.AppContext.

First, we restart Tomcat in debug mode with the following command:

${TOMCAT_HOME}/bin/catalina.sh jpda start

Then we remotely debug the class-loading sequence – I will be using Eclipse for this. We also need to set a class-load breakpoint on sun.awt.AppContext:

  • Use the Open Type Command (Shift+Control+T) to navigate to the sun.awt.AppContext type.
  • Right-click on the class name in the Outline pane and choose ‘Toggle Class Load Breakpoint’.

Next, we need to trigger the class loading sequence by connecting the debugger to the Tomcat instance and having the debugger come to a halt exactly at the point where the sun.awt.AppContext is loaded:

Figure 8: Connecting the debugger to the Tomcat instance and set the debugger to stop right after where the sun.awt.AppContext is loaded.

And there you go! It has been instantiated by the JavaBeans framework, which in this instance is being used by the Oracle Universal Connection Pool (UCP). We can also see that contextClassLoader is a final field and that AppContext appears only once, so we can assume this field is set exactly once, during the instantiation of AppContext.

How to Find Memory Leaks in Java Web Applications – Summary

In summary, the cause of a JEE application's PermGen 'out of memory' errors usually resides in the application itself (or a library used by the application) and is often compounded by classes in the JRE library holding references to the web application class loader, or to objects instantiated by it.

How to find memory leaks faster

The quick way of finding the memory leak is to use a solution like FusionReactor’s GC Roots analysis feature. This video explains how to instantly find memory leaks and optimize memory usage.

Setting up multiple SSH keys with GitLab

My issue

For some years I have had a large set of scripts, bash aliases, and tools which I deploy on every system I have access to.

The scripts contain lots of standard utilities and customisations for how I work when managing Linux systems. They started off as Cygwin utilities for my Windows development and have evolved over 15 years.

For some time I used Mercurial and Bitbucket, mainly because they provided private repositories for free when GitHub didn't.

A year ago I moved to GitLab: they provided free private repositories, and I was getting increasingly frustrated with Bitbucket's usability and performance.

Both Bitbucket and GitLab used my personal SSH keys for access, and I had configured these keys on many different computers (each with its own key).

Fast forward to 2020 and coronavirus… Intergral decided we needed to move our development offsite, which was at risk because it depended on our dedicated internet connection and our VPN connection: if either failed, we would not be able to continue developing FusionReactor. Many other projects were already running on GitLab, but we hadn't moved FusionReactor, as there hadn't been any real necessity until the coronavirus.

Moving our development process from subversion and Jenkins, hosted internally within Intergral to use GitLab and its CI runners was relatively simple.   We had been planning to do this over time, but Coronavirus made us do a big push to just get it moved ASAP.

One of the problems I had was that all my computers already had SSH keys set up for my personal GitLab repositories, while I was constantly using my internal email address and password to develop FusionReactor.

The Solution

Unlike GitHub, which allows you to use the same SSH key for multiple accounts, GitLab doesn't. So to change which key is used, you have to configure ssh to choose the correct one.

This can be done by using a hostname alias for gitlab.com and using this alias instead of the real hostname.

Firstly, you need a new, account-specific SSH key. My default key remains tied to my personal GitLab account, so I need to create a new one for the Intergral GitLab login.

$ ssh-keygen -t ed25519
Generating public/private ed25519 key pair.
Enter file in which to save the key (/home/nwightma/.ssh/id_ed25519): /home/nwightma/.ssh/id_ed25519_intergral
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/nwightma/.ssh/id_ed25519_intergral.
Your public key has been saved in /home/nwightma/.ssh/id_ed25519_intergral.pub.
The key fingerprint is:
SHA256:88pj17IttNz9JJ1S62BrdlOt0pw7+xPRjDiPqcxNZHc nwightma@neil
The key's randomart image is:
+--[ED25519 256]--+
|                 |
|                 |
|             . o.|
|            = o.E|
|        S  o * oo|
|         o. + oo=|
|         =.B B.==|
|       .o.X.*o%* |
|       .oo.++o+B=|

$ cat ~/.ssh/id_ed25519_intergral.pub
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKnvXlRGntKCJYUV4cYiFTOcvfItYodDAeRi3yBvXTyn nwightma@neil

In GitLab go to your settings ( top right ) and select SSH Keys on the left.

Copy and paste the output of the cat command into the Key text box, like this:

Now we just need to configure the ssh host alias.

Edit the ~/.ssh/config file and add an alias like this (note that IdentityFile points at the private key, not the .pub file, and the SSH user for GitLab is always git):

# Intergral account on gitlab.com
Host gitlab.intergral.com
     HostName gitlab.com
     User git
     IdentityFile ~/.ssh/id_ed25519_intergral

Now the project can be checked out as normal, but with the alias in place of the real hostname. Instead of:

git clone git@gitlab.com:intergral/fusionreactor/agent.git

use:

git clone git@gitlab.intergral.com:intergral/fusionreactor/agent.git

The last thing to remember is to configure your git username and email address. If you want to use your personal email for personal git projects and your company email for company projects, you have to configure the email per project.

I currently have my personal email address as my git global email address and select my company email address for each project, but on my work PC I have this reversed.


$ git config --local user.name "John Doe"
$ git config --local user.email johndoe@example.com

$ git config -l --local | grep email

Each project will then show which email address your changes are being tracked under.

Cloud Application Monitoring for Remote Workers

Desperate Times call for Sensible Measures


With the Covid-19 virus spreading across the planet at a phenomenal rate, businesses have been forced to respond with an unprecedented new business model. For many it's not possible to remain operational; for the online business community, however, help is at hand.

Many digital businesses are able to sustain satellite workers, all working in synergy remotely, to ensure the continuity of business operations. That means the IT infrastructure of organisations is being tested to the limit more than ever, and it's our job to keep the cogs turning.

What is an APM?

Application Performance Monitors (APMs) are a pinnacle tool for the tech community. Their key function is to detect and diagnose performance issues in order to maintain the expected level of service or to match demand.

Traditionally an APM would be running “on premise”, which is great. However, there are times when having access to a cloud based solution makes the whole world a much smaller place!

Why do we need an APM and how does Cloud help when working remotely?

In terms of business needs, an APM is key because it reduces the time to resolve problems, meaning that performance and productivity issues are fixed sooner and, more importantly, the potential negative impact on the business is reduced or negated.

Luckily for us, working away from the office is no longer an issue as we can utilise cloud based APMs such as FusionReactor Cloud to monitor our systems and then give us the information before issues get out of hand.

Barring the outbreak of a worldwide pandemic, it has been known for the Tech community to work from home, or on location when attending expo events. So having a Cloud based interface is absolutely vital to smooth running.


What can we do with a Cloud APM?

Cloud APMs offer the ability to view metrics and other data so that we can remotely monitor servers for activity, usage and much more. It also enables us to monitor our applications and transactions, as well as affording us the functionality to set alerts to conditions we want or really need to know about quickly.

Perhaps the most useful feature of a cloud APM is its powerful alerting capabilities: notifications can be sent to us in a number of different ways, enabling us to be responsive. This promotes efficiency, prevents potential problems, and boosts productivity.

Alerts can be delivered through a range of channels depending on how you prefer to work, from Slack and HTTP webhooks right down to plain email; the options for receiving notifications are vast.
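As an illustration of the webhook option, a receiving service only needs to parse the POSTed payload and forward a readable message. The field names below (`alertName`, `server`, `level`, `detail`) are illustrative assumptions, not FusionReactor's actual webhook schema:

```python
import json

def format_alert(payload: str) -> str:
    """Turn an incoming webhook body into a one-line message
    suitable for forwarding to chat, email, etc.
    NOTE: the field names below are illustrative only."""
    alert = json.loads(payload)
    return "[{level}] {name} on {server}: {detail}".format(
        level=alert.get("level", "INFO"),
        name=alert.get("alertName", "unknown alert"),
        server=alert.get("server", "unknown server"),
        detail=alert.get("detail", ""),
    )

# Example payload as an alerting system might POST it:
body = json.dumps({
    "level": "WARN",
    "alertName": "Long-running request",
    "server": "prod-web-01",
    "detail": "request exceeded 30s",
})
print(format_alert(body))  # prints: [WARN] Long-running request on prod-web-01: request exceeded 30s
```

In practice you would wire this into a small HTTP handler and relay the formatted line to chat or a pager.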

Monitoring applications with a cloud-based APM allows you to monitor all of the applications in your scope, and if you're monitoring multiple server instances all running the same application, the cloud APM will aggregate all of the data together.

The interface can deliver us information remotely on operations, processes, threads and transactions which may be critical for you to see. Not only see, but you may need to know when a particular condition is occurring, and be alerted to take action if that happens.

In short, cloud based APMs are vital to the smooth running of online systems and applications, but especially where you need to work remotely, or have remote workers who need the high level of in-depth insight that a cloud APM is able to bring.

Hopefully this short read is a helpful introduction to the cloud-based APM. Of course the current situation really is unprecedented. However, the long-term benefits of this system are clear, not just from a green perspective but also for occupational well-being, allowing more flexibility for staff working from home.

5 things you should check every day to ensure your application health

Short application health checklist to ensure you’re getting the most out of FusionReactor

Configure your notification email address

The notification email address is where FusionReactor will send the daily, weekly and monthly reports to and is also the email used to send crash protection notifications. If you have not done this already, it’s very important to set this up as soon as possible.

Configure your mail settings within the FusionReactor settings page.

1 – Set up Crash Protection – get alerts when things go bad

Crash protection will alert you and attempt to keep your server/application responding when certain events or situations occur.  The alerts are usually the first capability enabled, because these will provide critical insight into what’s going wrong and why.

Crash protection can alert you when:

  • A request runs for longer than a configured time period (Long-running requests).
  • A number of requests run for a configured period (A spike in traffic slowing down the application).
  • Heap memory peaks at a certain threshold for a configured period (In case of a memory leak or request consuming large amounts of memory).
  • Instance CPU peaks at a certain threshold for a configured period (In case of any resource-heavy background process, request or event consuming large amounts of CPU).

For each of these alerts an email can be sent, which contains details of the running requests, resource usage and a full stack trace at the time the event triggered. 
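Each of the conditions above boils down to the same rule: only fire when every sample in a recent window exceeds a limit, rather than on a single spike. A rough sketch of that logic (not FusionReactor's actual implementation):

```python
def should_alert(samples, threshold, period, interval):
    """Return True when the metric stayed above `threshold`
    for at least `period` seconds, given samples taken every
    `interval` seconds (most recent sample last)."""
    needed = max(1, period // interval)   # samples that must all exceed
    window = samples[-needed:]
    return len(window) >= needed and all(s > threshold for s in window)

# Heap usage (%) sampled every 5s; alert if above 90% for 15s:
print(should_alert([70, 85, 92, 95, 93], threshold=90, period=15, interval=5))  # True
print(should_alert([70, 85, 92, 88, 93], threshold=90, period=15, interval=5))  # False
```

Requiring the whole window to exceed the threshold is what stops a one-off spike from paging you at 3 a.m.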

NOTE: Even when you have set up your notification email, you still need to set Crash Protection email to ENABLED before the email will be sent.  You can do this in the Crash Protection Settings.

As well as email alerts you can also queue or reject new requests coming into the application server to reduce load whilst the server recovers.

2 – Check daily, weekly and monthly reports

Once the notification email is configured, you will automatically start receiving daily reports from your FusionReactor instance. In the report you will see information on any outages, the total load for the day and the number of erroring requests.

NOTE: All editions of FusionReactor provide a Daily Report – however, the Enterprise and Ultimate Editions also provide a weekly and monthly report.

3 – Review historical Archive Metrics – find behavioural issues 

Archive metrics allow you to view your historic log data within a user-friendly interface, so you can go back in time to identify issues and spot behavioural patterns within the application server.

A key part of maintaining application health is identifying issues post-crash. This can be a challenge, as vast amounts of data may be dumped to log files, and sifting through it can be time-consuming.

With FusionReactor, we have made this process simple: you can view all the metrics available in the running server, but for the past 31 days of captured logs.

In the example above we can examine the Garbage Collection activity at the time before a crash and see that we had a steady increase until the point the server became unstable and crashed.

4 – Recognize performance hot-spots from the Relations Tab 

If your web request makes any HTTP, JDBC, Mongo, Redis or ColdFusion tag calls (and many others), these are tracked as sub-transactions that you can see as an overview in the Relations tab of the request and drill into.

The Relations Tab provides a visual breakdown of these sub-transactions, which are often database or external service calls. This makes it easier to spot potential performance bottlenecks.
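As a rough sketch of what sub-transaction tracking involves, the timer below wraps each outbound call and records its duration under a name, which is essentially the breakdown a Relations-style view presents (illustrative only, not FusionReactor's instrumentation):

```python
import time

class SubTransactionTracker:
    """Record named sub-transactions (a JDBC query, an HTTP call, ...)
    executed during a parent request, together with their durations."""
    def __init__(self):
        self.spans = []

    def track(self, name, fn):
        """Run fn(), timing it and recording the span under `name`."""
        start = time.perf_counter()
        try:
            return fn()
        finally:
            self.spans.append((name, time.perf_counter() - start))

tracker = SubTransactionTracker()
# Stand-ins for real sub-transactions:
tracker.track("JDBC: SELECT * FROM users", lambda: time.sleep(0.01))
tracker.track("HTTP: GET /api/rates", lambda: time.sleep(0.02))
for name, seconds in tracker.spans:
    print(f"{name}: {seconds * 1000:.1f}ms")
```

Sorting the recorded spans by duration immediately shows which external call dominates the request.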

5 – See resource details to quickly gauge JVM health

The Resources section allows you to monitor the health of the JVM and find potential optimizations.

Within resources, you have multiple graphs that allow you to monitor:

  • Heap and non-heap memory
  • The usage of each memory space
  • Garbage Collection time and quantity
  • Class loading and JIT
  • Thread state and activity

From the Threads view, we can see the state of each thread in real time and take a stack trace to see what each thread is doing.
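The same idea, enumerating live threads and capturing each one's stack, can be sketched in a few lines of Python using the standard library (a generic illustration, not how FusionReactor gathers its data):

```python
import sys
import threading
import traceback

def dump_all_threads():
    """Return a {thread name: formatted stack trace} snapshot
    of every live thread in this process."""
    id_to_name = {t.ident: t.name for t in threading.enumerate()}
    dumps = {}
    for thread_id, frame in sys._current_frames().items():
        name = id_to_name.get(thread_id, str(thread_id))
        dumps[name] = "".join(traceback.format_stack(frame))
    return dumps

for name, stack in dump_all_threads().items():
    print(f"--- {name} ---")
    print(stack)
```

A snapshot like this is the quickest way to see whether threads are blocked, busy or idle at a given instant.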

Securing FusionReactor and JSP applications in Tomcat using LDAP


FusionReactor provides different types of user accounts (Administrators/Manager/Observer), however, if you would like to restrict access to FusionReactor for individual users, you can do this via LDAP authentication.

This technote will guide you through configuring Tomcat to use LDAP authentication to restrict access to both FusionReactor and your JSP applications.

We will split this guide into five distinct sections:

  1. Configuring LDAP
  2. Configuring the server.xml file
  3. Configuring the JSP application
  4. Configuring the FusionReactor web root
  5. Disabling the internal port of FusionReactor

1.  Configuring LDAP

When configuring LDAP for use with Tomcat, you are required to create a collection of users and a collection of groups (one group per required Tomcat security role). Each user can be assigned to one specific group.

In this example, FusionReactor and the JSP application are assigned to separate Tomcat roles. The domain structure is as follows:

dn: dc=mydomain,dc=com
objectClass: dcObject

dn: ou=people,dc=mydomain,dc=com
objectClass: organizationalUnit
ou: people

dn: ou=groups,dc=mydomain,dc=com
objectClass: organizationalUnit
ou: groups

dn: uid=jsmith,ou=People,dc=mydomain,dc=com
objectClass: inetOrgPerson
uid: jsmith
cn: John Smith
sn: Smith
userPassword: myPassword

dn: uid=ajones,ou=People,dc=mydomain,dc=com
objectClass: inetOrgPerson
uid: ajones
cn: Adam Jones
sn: Jones
userPassword: myPassword

dn: cn=fusionreactor,ou=groups,dc=mydomain,dc=com
objectClass: groupOfUniqueNames
cn: fusionreactor 
uniqueMember: uid=jsmith,ou=People,dc=mydomain,dc=com

dn: cn=myApplication,ou=groups,dc=mydomain,dc=com
objectClass: groupOfUniqueNames
cn: myApplication
uniqueMember: uid=ajones,ou=People,dc=mydomain,dc=com

You could instead create a single group, for example “admin”, and use it for both FusionReactor and the JSP application.

2. Configuring the server.xml file

Tomcat in its default installation will use a local database to authenticate user access. We need to modify the server.xml file, typically located at {Tomcat Root Directory}/conf/server.xml, so that Tomcat will instead use the LDAP server as its authentication service.

To do this, first open the server.xml file in a text editor. You should replace the default Realm element:

<Realm className="org.apache.catalina.realm.LockOutRealm">
   <!-- This Realm uses the UserDatabase configured in the global JNDI
        resources under the key "UserDatabase".  Any edits
        that are performed against this UserDatabase are immediately
        available for use by the Realm.  -->
   <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
          resourceName="UserDatabase"/>
</Realm>

With a JNDIRealm that points at your LDAP server. The attribute values below follow the example directory structure from section 1; adjust the connection URL to match your environment:

<Realm className="org.apache.catalina.realm.JNDIRealm"
       connectionURL="ldap://localhost:389"
       userPattern="uid={0},ou=people,dc=mydomain,dc=com"
       roleBase="ou=groups,dc=mydomain,dc=com"
       roleName="cn"
       roleSearch="(uniqueMember={0})"/>

More information on realms can be found here: https://tomcat.apache.org/tomcat-7.0-doc/realm-howto.html

3.  Configuring the JSP application

By default, any application you place in the webapps directory of Tomcat will be accessible without authentication. However, you may have an application that should only be accessible to specific users. You can achieve this by modifying the application's web.xml file, usually found at {Tomcat Root Directory}/webapps/{App Name}/WEB-INF/web.xml.

Within the “web-app” element add the following; the role name matches the myApplication group created in section 1:

<security-constraint>
   <web-resource-collection>
      <web-resource-name>myApplication</web-resource-name>
      <url-pattern>/*</url-pattern>
   </web-resource-collection>
   <auth-constraint>
      <role-name>myApplication</role-name>
   </auth-constraint>
</security-constraint>
<login-config>
   <auth-method>BASIC</auth-method>
</login-config>
<security-role>
   <role-name>myApplication</role-name>
</security-role>
This will block any user with an unauthorized role from accessing your application. It is possible to define multiple authorized roles by duplicating the “role-name” element, for example:

<auth-constraint>
   <role-name>myApplication</role-name>
   <role-name>fusionreactor</role-name>
</auth-constraint>
4. Configuring the FusionReactor web root

With the default configuration of FusionReactor, you will be able to access the Application Performance Monitor through either the application server port (external port), 8080 for Tomcat, or the instance port defined in the java arguments (internal port). Accessing FusionReactor through the external port uses the web root, the path at which FusionReactor is served on that port.

By default, this is “/fusionreactor/”, so if the external port is enabled you will be able to access your FusionReactor instance at http://localhost:8080/fusionreactor/. 

You can change this value by navigating to FusionReactor > Settings > Web Root:

To configure LDAP security you will first need to create the following web app directory structure, ensuring you replace “fusionreactor” with your web root:

{Tomcat Root Directory}/webapps/fusionreactor/WEB-INF/web.xml

Your web.xml file should contain the following:

<?xml version="1.0" encoding="UTF-8"?>
<web-app version="2.5" xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
                             http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd">
   <security-constraint>
      <web-resource-collection>
         <web-resource-name>FusionReactor</web-resource-name>
         <url-pattern>/*</url-pattern>
      </web-resource-collection>
      <auth-constraint>
         <role-name>fusionreactor</role-name>
      </auth-constraint>
   </security-constraint>
   <login-config>
      <auth-method>BASIC</auth-method>
   </login-config>
   <security-role>
      <role-name>fusionreactor</role-name>
   </security-role>
</web-app>

This will ensure that any user who does not have the Tomcat role fusionreactor cannot access the instance.

At this stage, you will be able to test that both your application and FusionReactor authentication access are working as expected.
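One quick way to exercise BASIC authentication from a script is to send the Authorization header yourself. The credentials and URL below are the example values from section 1; the helper only builds the request, so you can point it at your own server:

```python
import base64
import urllib.request

def basic_auth_request(url, username, password):
    """Build a request carrying an HTTP BASIC Authorization header."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    req = urllib.request.Request(url)
    req.add_header("Authorization", f"Basic {token}")
    return req

# Probe the secured FusionReactor web root (example values):
req = basic_auth_request("http://localhost:8080/fusionreactor/",
                         "jsmith", "myPassword")
print(req.get_header("Authorization"))
```

Sending this request with `urllib.request.urlopen(req)` against the secured URL should succeed for a user in the fusionreactor group and fail with HTTP 401 otherwise.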

5. Disabling the internal port of FusionReactor

Although your external port is now secured, FusionReactor is still accessible over the internal port without LDAP authentication. To stop this, we simply need to disable the internal port.

You can do this in two ways:

  1. In the settings page – simply disable the port.
  2. In the java arguments:
    • Windows – run the tomcatw.exe application within {Tomcat Directory}\bin
    • Linux – open the setenv.sh file in a text editor; this file should be located at {Tomcat Directory}/bin/setenv.sh

In the -javaagent argument remove the address={port number} configuration option. For example:

-javaagent:/opt/fusionreactor/instance/tomcat7/fusionreactor.jar=name=tomcat7,address=8098

will become:

-javaagent:/opt/fusionreactor/instance/tomcat7/fusionreactor.jar=name=tomcat7


After following the above steps we should now be in the following state:

  • An unauthorized user cannot access either the JSP application or FusionReactor
  • To authenticate a user, your LDAP server will be contacted
  • Only users with appropriate Tomcat roles will be able to access the JSP application or FusionReactor
  • FusionReactor will not be accessible on the internal port

Issue Details

Type Technote
Issue Number FRS-448
Attachments image-2018-07-19-12-41-25-990.png
Resolution Fixed
Last Updated 2020-03-16T11:04:50.857+0000
Fix Version(s) None
Server(s) Tomcat

Debugging plugin performance in CFWheels 2.x with FusionReactor

Originally posted March 5th, 2020 by Tom King – reproduced by kind permission

Debugging plugin performance in CFWheels – the issue

Shortly after the release of CFWheels 2.0, we started to get reports of slower running requests under certain conditions. For instance, a page with 1000 calls to `linkTo()` could take anything from 1-2ms up to 5-6ms per call, which, after 1000 iterations, is one hell of a performance bottleneck. In 1.x, the same call would take 0-1ms, usually with a total execution time of under 200ms.

This behaviour was something which could be proven by a developer, but not everyone was seeing the same results: what was the difference? Plugins (or rather, plugins which override or extend a core function, like linkTo()). To make matters worse, the performance degradation was doubled for each plugin, so you might get 1-2ms for 1 plugin, 2-4 ms for adding another plugin and so on.

So what was causing this?

Enter FusionReactor

We approached FusionReactor, who were kind enough to give us a temporary licence to help debug the issue (it’s great when companies support open-source!). So next up were some tests to help diagnose the issue.

Installing FusionReactor was really simple. As we use CommandBox locally, we could just utilise the existing module via install commandbox-fusionreactor to bootstrap FusionReactor onto our local running servers, which gave us access to the FR instance, already plumbed in. As we were looking for a specific line of code, we also installed the FusionReactor Lucee Plugin and configured it to track CFML line execution times using the CF line performance explorer.

This was instantly illuminating, and it tracked the problem to our new pluginrunner() method. When we released CFWheels 2.0, there was a fairly hefty rewrite of the plugins system. It was designed to allow plugins to be chained and executed in a specific order, so you could hypothetically have the result from one plugin override the previous one in the chain.

The way it did this was by creating a “stack” of plugins in an array, working out where it was in that stack, and executing the next plugin in the stack until it reached the end. It did that via a combination of the callStackGet() and getFunctionCalledName() functions to do the comparison.
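The pattern described above, where each step inspects the call stack just to work out its own position in the chain, can be contrasted with simply carrying the position along. A hedged Python analogy (not the actual CFWheels code):

```python
import traceback

def run_plugins_introspective(plugins, value):
    """Walk the plugin chain by inspecting the call stack at each
    step to discover the current position -- the costly pattern."""
    def step(v):
        # Count how many `step` frames are already on the call stack:
        depth = sum(1 for f in traceback.extract_stack() if f.name == "step")
        if depth > len(plugins):
            return v
        return step(plugins[depth - 1](v))
    return step(value)

def run_plugins_indexed(plugins, value):
    """Same chain, but the position is carried explicitly: no stack walk."""
    for plugin in plugins:
        value = plugin(value)
    return value

plugins = [lambda v: v + 1, lambda v: v * 10]
print(run_plugins_introspective(plugins, 1))  # 20
print(run_plugins_indexed(plugins, 1))        # 20
```

Both produce the same result, but the introspective version pays for a full stack capture on every plugin call, which is exactly the kind of hidden per-call cost a line profiler exposes.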

As you can see from the screenshot below, the line debugger clearly highlights this. This app had four plugins, two of which extended core functions.

Example of FR Lucee 4 Line Debugger

callStackGet() gets invoked 2364 times in this example, but appeared performant, causing only 10ms of execution time. getFunctionCalledName() is called the same number of times, but has a total execution time of 2242ms(!). We had our potential culprit. Either way, it was looking like the combination of calling the stack and trying to find the calling function name that was causing so much pain. I suspect it’s to do with how Java deals with this: I think it might be calling a full stack trace and writing it to disk on each call – at least that was the hint from FusionReactor’s thread profiler (I’m sure those who have a better understanding of Java’s underlying functions will chip in).

After some deliberation, we decided to revert this behaviour in CFWheels 2.1 back to how it worked in 1.x, as the vast majority weren’t using it but were being affected by it. We’d seen no plugins in the wild that used this largely undocumented behaviour either.

Obviously thanks to FusionReactor for helping us out – hopefully this gives some insight into just one of the ways FusionReactor can be used. Maybe one day I’ll understand Java stack traces – maybe.

Thank you for reading our article Debugging plugin performance in CFWheels; we hope that you found it informative.

Start a free 14 day FusionReactor trial

Book a demo with one of our engineers

Installing FusionReactor in dynamic environments – Live Stream Support

Our last webinar on what’s new in 8.3.0 was a success and we were able to show all the exciting new features and answer any questions you had.

Based on the comments received on the last live session, we have decided to run a follow-up session on installing FusionReactor in dynamic environments.

Register your interest in our livestream

This session will run at 7PM UTC (12PM PST, 3PM EST) on the 25th of March

See your local time.

This session will cover how to automate the installation of FusionReactor via Docker and CommandBox, as well as answer any questions you may have related to installing FusionReactor. Joining the session will be:

Michael Flewitt – a technical support engineer for Intergral.

Charlie Arehart – a server troubleshooting consultant and long-term advocate of FusionReactor.

Brad Wood – a developer for Ortus Solutions, developer of CommandBox and long-term user of FusionReactor.

During the session we will cover the following:

  • A high-level explanation on the process of installing FusionReactor dynamically
  • An example of installing FusionReactor in a fat jar Docker image
  • An example of installing FusionReactor in a tomcat Docker image
  • An example of installing FusionReactor in a ColdFusion Docker image
  • An example of installing FusionReactor in CommandBox
  • Advice on features you may want to configure in dynamic environments.

We expect the demonstration to take around 20 – 30 minutes, at which point we will answer any questions you may have.

Subscribe to the FusionReactor channel and set a reminder for the live session here.

Database Monitoring

Fig 1. FusionReactor: Database Monitoring Tool.

Databases embody the most crucial aspects of many business processes. As technology advances, applications and IT infrastructures are becoming far more diverse, and with this development come application performance challenges such as troubleshooting and problem rectification. Delivering the quality of service that end-users demand therefore begins with an excellent monitoring strategy.

This post examines the advantages of database monitoring, alongside a brief description of the best way to implement database performance monitoring. 

In this guide, we will highlight the following topics:

  • What is Database Monitoring?
  • Advantages of Database Monitoring
  • Monitoring Database Performance
  • Conventional approaches to Database Monitoring
  • Key Database Performance Metrics
  • Best implementation of Database Monitoring
  • Top 5 Database Monitoring software
  • How to select the best tool for Database Monitoring

What Is Database Monitoring?

The concept of database monitoring revolves around tracking database performance and resources to maintain a highly available, high-performance application infrastructure. It involves measuring and tracking, in real time, the fundamental metrics that affect database performance. Monitoring allows you to spot current and potential performance issues from the word go. In a productive database monitoring environment, the process significantly improves the database structure to optimize overall application performance.

The idea is to keep your database (and its resources) running at the most favorable condition possible, ensuring that your application infrastructure is readily available and functional. Database monitoring is essential to database administrators and software analysts, since it permits them ample opportunity to conduct and implement accurate problem-solving techniques while saving time and valuable resources. And as these issues get resolved quickly, end-users get a streamlined experience.

Most modern Application Performance Monitoring (APM) tools keep track of hardware and software performance by taking snapshots of performance metrics at different time intervals, allowing you to identify sudden changes and bottlenecks and pinpoint the exact timeframe in which an issue became established. With this information at hand, you can conveniently apply tailored optimization strategies to handle the issue better.
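The snapshot-and-compare approach can be reduced to a small sketch: sample a metric at intervals, then flag the first interval where it jumps beyond a tolerance. This is illustrative only, not any particular tool's algorithm:

```python
def find_spike(snapshots, tolerance):
    """Given (timestamp, value) snapshots in order, return the
    timestamp of the first sample that jumped by more than
    `tolerance` over the previous one, or None if there is no spike."""
    for (_, prev), (ts, cur) in zip(snapshots, snapshots[1:]):
        if cur - prev > tolerance:
            return ts
    return None

# Query latency (ms) sampled each minute:
samples = [("12:00", 40), ("12:01", 42), ("12:02", 45), ("12:03", 180)]
print(find_spike(samples, tolerance=50))  # prints 12:03
```

Pinpointing the interval of the jump is what lets you correlate a slowdown with the deploy, batch job or traffic change that happened at the same time.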

Advantages of Database Monitoring

The use of complex applications across varying infrastructures calls for a database monitoring tool that provides a fast, reliable and cost-effective solution. A robust database monitoring tool is essential in helping software engineers troubleshoot problems before they reach end-users; its necessity can never be over-emphasized. After all, an unchecked database could lead to high application load times and a slow network, a slow application reduces overall customer interaction, and reduced user engagement results in lost customers (and money) for the business.

But through the analysis of a database’s performance parameters, such as user and application activity, you get a clear-cut picture of how your database functions. Implementing a robust database monitoring strategy brings many advantages, including:

  • Stress- and hassle-free debugging
  • Cost-effectiveness, as it saves time and resources
  • An improved end-user experience
  • More effective capacity planning
  • The ability to clear bottlenecks quickly, as they are spotted in real time before they reach end-users
  • Improved security, as it provides insight into any security flaws
  • Increased awareness of when, how and which optimization technique to employ.

Monitoring Database Performance

As stated earlier, database monitoring is essential to spotting and nullifying errors before they become full-blown problems. To keep your application operational, you’ll need to understand what database performance monitoring entails – including the right monitoring approach to implement, the key metrics and best practices. Let’s define the standard approaches to database monitoring and the best way to implement them.

Conventional approaches to Database Monitoring.

By convention, there are two database monitoring approaches: the proactive and the reactive modus operandi.

Proactive Approach

A proactive approach uses preventive measures to hinder potential errors, actively identifying issues before they become problems. It is the safer option, since it acts beforehand, carries less risk and improves the user experience. It is, however, most advisable for experts, who will ensure the right metrics are monitored, and for closely-knit development environments where the right person can be conveniently alerted should an issue arise.

Reactive Approach

The reactive method, by contrast, aims to mitigate the effects of problems once they occur. It is reserved as a final mechanism for performance troubleshooting, security-breach investigation or major incident reporting.

However, it is pertinent to note that the need for proactive database monitoring greatly multiplies once there is substantial growth in the database size.

Key Database Performance Metrics. 

Below are the principal performance metrics you should consider during a database monitoring exercise to gain extensive insight into the overall condition of a database environment.

  • Queries: 

It is crucial to monitor the performance of the queries themselves if you want to enjoy top-tier performance. The database monitoring tool should be able to alert you to query issues such as insufficient or overabundant indexes, overly literal SQL and inefficient joins between tables.

  • Capacity issues: 

Some database issues can be caused by hardware problems, like lagging CPU/processor speed or insufficient CPUs, slow disks, misconfigured disks, full disks, and lack of memory.

  • Conflicts among users: 

In situations where many users are trying to access a database, this could lead to conflicting activities and queries, subsequently resulting in a deadlock (or a traffic jam in layman terms). For example, the performance of your database could suffer from page/row locking due to slow queries, transactional locks and deadlocks, or batch activities causing resource contention.

  • Configuration issues: 

A disk without proper configuration can cause performance issues. Implementing a database monitoring procedure will help uncover issues like an insufficiently sized buffer cache or a lack of query caching.
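The lock conflicts mentioned under “Conflicts among users” have a classic remedy: make every transaction acquire shared resources in the same global order, so no two transactions can each hold a lock the other is waiting for. A Python sketch of the idea (illustrative, not database code):

```python
import threading

# Two resources (rows/pages) that two "transactions" both need:
lock_a, lock_b = threading.Lock(), threading.Lock()

def transfer(first, second, results, name):
    """Acquire both locks in the given order; a consistent global
    order across all transactions is what prevents deadlock."""
    with first:
        with second:
            results.append(name)

results = []
# Both workers take lock_a then lock_b (the SAME order), so no deadlock.
# If one took lock_b first, the two could block each other forever.
t1 = threading.Thread(target=transfer, args=(lock_a, lock_b, results, "txn1"))
t2 = threading.Thread(target=transfer, args=(lock_a, lock_b, results, "txn2"))
t1.start(); t2.start(); t1.join(); t2.join()
print(sorted(results))  # ['txn1', 'txn2']
```

Databases apply the same principle internally (and add deadlock detection on top), which is why monitoring lock waits and deadlock counts is so revealing.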

Best implementation of Database Monitoring.

Few resources carry information as essential as a database; therefore, a slight oversight could lead to the loss of some or all of that crucial information. Without the right database monitoring tools, it can be challenging to keep an eye on all the required metrics. When conducting a database monitoring procedure, the following capabilities should be considered.

  • Automatically collect key database, server and virtualization performance metrics.
  • Keep tabs on alerts regarding performance or availability problems for both database and server components as well as optionally take corrective actions.
  • Generate a comprehensive report on database capacity and utilization bottlenecks.
  • Compare present database issues with end-user response metrics for an accurate assessment of application performance.

How to Select the Best Tool for Database Monitoring.

For the implementation of a robust database monitoring strategy, there should be an efficient means of analyzing data across several categories while minimizing lag. Even so, before selecting a database monitoring tool, it is necessary to note that different types of databases require different data points to be analyzed.

FusionReactor APM is ranked highly by its users on top review platforms such as G2.com due to its enhanced feature set and excellent customer service; a 14-day free trial is available.