FusionReactor Blog - News and expert analysis for Java APM users.

Using FusionReactor to measure success of application and environment migration and upgrade activities

Someone asked me recently if FusionReactor could be used to do performance testing, specifically to test the performance of an application before and after upgrading their application server. I said FR COULD absolutely be used to help with that.

In the same way, one could imagine testing various other significant changes, such as a large-scale change to your codebase, an upgrade of the server or platform on which your app server is deployed, an upgrade of your database/database server, or even a new version of FR itself. (I use the generic term “app server” because in fact–as may be a surprise to some readers–FR can be used to monitor either CFML engines like ColdFusion or Lucee, or indeed any Java application server or application, like Tomcat, Jetty, Wildfly, Undertow, and so on. It can even be implemented in the execution of a single Java class. For the sake of simplicity, I will use the term “app server” to cover them all.)

They then asked if I could write up some of the ways that FusionReactor (FR) could be used for such a testing effort, which led to this post. There are indeed several such ways, via various features which FR offers. They mirror the many ways I’ve helped people use FR to monitor or troubleshoot their servers over the years, but in this post we will focus specifically on how one might best leverage FR for such migration testing.

Overview of assessment mechanisms in FusionReactor

Before diving into the features, it may help to take a moment to distinguish a couple of different ways folks may intend to do such testing, and the corresponding ways to use FR with each.

Long-duration vs short-duration testing

First, in terms of duration, some folks might be interested in implementing changes and testing them for days. This is especially true if someone feels they’ve done adequate localized (perhaps manual) small-scale testing and now wants to run more large-scale testing simply by putting the change into production. That may make some gasp in horror, but not everyone has the ability or resources (or simply the experience or lessons learned) to do such “large-scale” testing in a testing environment (such as via load testing).

The good news is that there are features in FR that are especially well-suited to assess the aggregate performance of things over periods of full days, such that you could easily compare things from one day to another.

Still other folks will do more short-term testing, such as tests lasting hours (or even minutes). They may have tools or manual processes that facilitate even “large-scale” testing without having to just “run it in production for a while”.

And there are still other aspects of FR that can help with doing such tests over smaller periods of time, such as one hour, or from one restart of the application server to the next, and so on.

And while some aspects of such long- or short-duration testing and assessment might involve features accessed in the FR user interface, still others would be about leveraging FR’s available alerts, reports, and logs. There’s a place for each, and I will cover both in this post.

Assessment after the test vs during the test

There’s another way to distinguish the ways one may use FR for such an assessment of testing. In either of the above cases (long- vs short-duration testing), the FR features to help with those would primarily be focused on assessing things AFTER running the test(s). But there are still other aspects of FR which could be useful DURING such testing, to help understand WHY something’s amiss, especially if one of the tests proves to have especially troublesome performance.

Someone might assert that this is not really about “assessing the performance of the test”, but unless you have a way to readily diagnose the real cause of such a problem, you can’t know the CAUSE of poor performance or of many errors–and it may or may not be down to the actual “new version” being tested, but rather could be something totally unexpected. I would be remiss not to at least mention these other facets of using FR, rather than presume that “everyone” already knows about them. They are all tools in the belt of someone assessing the impact of major changes implemented in your application or architecture.

The bottom line is that whatever your need for and approach to “performance testing”, you should get value from considering the various ways FR can help. 

On whether your app server is restarted between tests

Before moving on to the FR features, there’s one more aspect of how one might go about doing such testing, and that’s whether one would restart the application server (CF, Lucee, Tomcat, Wildfly, Jetty, Undertow, or whatever) between tests. Of course, if what you’re changing IS the version of the app server, then this may seem a moot point. But if one is testing, for instance, the impact of a change to the database server (running on a remote machine or platform), it may not be obvious that one would or should restart the app server.

Certainly, there can be advantages to doing a restart between tests: there may be information cached in the app server, such that if you don’t restart, the second test could be impacted–either positively or negatively–by such cached information remaining from the first test. It could make the second test seem faster (if it positively leveraged what was cached in the first test), or it could make the second test fail (if the information cached was incorrect for the second test).

It’s probably best in most cases to do such an app server restart between tests. (Some might go so far as to also restart the server–physical or virtual or the container–on which the app and app server are running. Again, an argument could be made for doing that, so that each test starts from the same “square one”.)

But I also mention all this in the context of the impact of such an app server (or indeed server) restart on FR. Since FR (as a Java agent) is implemented WITHIN the app server, FR will restart along with the app server–and all the data tracked within the FR UI will be reset. Again, that is generally a good thing, but I wanted to clarify the point. Note as well that several FR features discussed here can still track information OVER restarts (meaning even if the app server is restarted), which may surprise some who’ve grown accustomed to JVM-based monitors that “lose everything” on an app server restart.

Beware changing more than one thing at a time, during such testing

Before moving on to the FR features for assessing testing, I want to make a plea to please be careful when doing such testing, to be wary of making too many changes at once.

I have very often helped people who were migrating to something (such as a new version of their app server) who then find that, oddly, things perform poorly. It often turns out that it’s not BECAUSE of the one thing they are focused on changing, but because they have indeed changed still OTHER things: deploying the new app server version to a new machine, perhaps running a new OS version as well, or perhaps moving from on-prem to the cloud, or changing from one infrastructure (for instance, hosting) provider to another, and so on. Indeed, often they have also changed the DB version they’re running, and perhaps changed where the DB exists, and perhaps are running that on a new OS or different infrastructure.

The point is that sometimes when there are problems in such a “migration”, it’s not at all down to that “one thing they were focused on”, but could be influenced by any or many such other changes.

With those prefatory remarks out of the way, let’s look at some features in more detail, first with the focus of assessing things over time (in aggregate) AFTER the tests. Then we’ll wrap up with a brief discussion of options for assessment/diagnosis DURING the tests, for those who might benefit from knowing of them.

FusionReactor features enabling assessment over time, after tests

There are at least 4 key FR features that can help in assessing the processing of one version of a thing against another, when those tests are run one after another:

  • FR’s reports
  • FR’s logs
  • FR’s Archived Metrics feature
  • FR’s Crash Protection (CP) email alerts

Let’s look at each of these a bit more closely. 

FusionReactor’s reports

Especially when one may be doing long-term testing–where the test of one version runs for a whole day (or period of days) and the other test follows on subsequent days–one could leverage FR’s “reports” feature. These are produced daily, weekly, and monthly (depending on your FR version), and they offer a succinct, high-level assessment of processing within the monitored app server instance, which can easily be used to compare aggregate performance before and after the upgrade being tested.

(You would be forgiven if you say that you never knew FR even offered such reports. They are only sent if you configure your FusionReactor instance with the email addresses and mail server through which to send them. If you have not configured the mail settings, a “notification” alert appears via an icon at the top left of FR, showing you how to make that change; even so, many folks never notice the reports until perhaps they set up the same mail settings for the sake of getting FR Crash Protection email alerts. Nothing more needs to be done to start getting the reports.)

I mentioned that the reports received depend on the version of FR you have. All FR versions have daily reports, while those running the Enterprise or Ultimate edition also get weekly and monthly reports. For the sake of this testing, let’s focus on the daily reports.

Here’s what a daily report looks like if you’re running FR Standard:

And here’s what one looks like if you’re running FR Enterprise or Ultimate:

They both contain essentially the same info at the top of each report: for the sake of the kind of testing we’re discussing, note how it tracks things like how many requests ran and how long they took (“avg web time”), as well as the average query (“Avg JDBC”) time, and a count of requests getting status code 500 (error) or 404 (not found). It also tracks the avg CPU and “mem” (heap used), as well as things like counts of sessions, outages, and up and downtime for the instance.

Again, these are tracked for an entire day. So you can see how many of these would be useful as aggregate values you could easily compare, for a test of perhaps a new app server version. If you had the reports from the days before making the change, and then made the change, you could readily see at a high level whether things have gotten better or worse. (But I realize that for some testing, you either won’t run the test for an entire day or won’t test the change in production so that you’d not be able to compare to previous days of production data. More on other features to assess such tests, in the next section.)

Before leaving the subject of the FR reports, I want to note as well that the reports are also configurable, including the option to include in the report any of over 180 available metrics (such as garbage collections, metaspace usage, sessions creation, network or disk usage, and more), in case any of those may be an especially important metric for your testing. 

To learn more about the FR reports, including how to configure receiving them, how to configure the optional metrics, and to see the weekly and monthly reports, please see any of the following resources:

FusionReactor’s logs

When it comes to wanting a more granular assessment than over an entire day (making the reports unsuitable), there are of course FR’s logs. Since its inception, FR has done a tremendous job of writing most of the data it tracks (in its UI) to logs as well.

And they track nearly everything that FR does, including:

  • all the information in the FR Metrics>Web Metrics page (logged every 5 seconds, in the resource.log)
  • all the tracking of memory spaces and garbage collections (logged every 5 seconds, in logs whose names indicate the memory space or GC type being tracked)
  • various high-level CF metrics (tracked every 5 seconds, in the realtimestats.log)
  • every request (when it starts and ends, including high-level metrics tracked such as URL, query-string, IP address, request duration, number of queries, their avg duration, user agent, and more, tracked in the request.log)

As you can see, this information can be very useful for understanding what happened during a period of time, such as during a test. FR’s logs are simply plain-text, space-separated value logs which can easily be assessed either by hand or using tools like Microsoft Log Parser, or spreadsheet tools like Excel or OpenOffice Calc, and I have created webinars and other resources showing how to use such tools to analyze the FR logs to obtain important information over a given period of time. Of course, they can also be processed by more modern log tracking and analysis tools like Splunk, Loggly, Sumo Logic, etc.
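To illustrate the kind of analysis such space-separated logs allow, here is a minimal sketch in Java. Note that the column layout and sample lines below are hypothetical, not the actual FR log format; check the FR log documentation for the real columns in your version:

```java
import java.util.List;

public class RequestLogStats {
    // Average a numeric column (e.g., a request-duration column) across
    // space-separated log lines. The column index is an assumption here.
    static double averageColumn(List<String> lines, int column) {
        double total = 0;
        int count = 0;
        for (String line : lines) {
            String[] cols = line.trim().split("\\s+");
            if (cols.length > column) {
                total += Double.parseDouble(cols[column]);
                count++;
            }
        }
        return count == 0 ? 0 : total / count;
    }

    public static void main(String[] args) {
        // Hypothetical request-log-style lines: timestamp, method, URL, duration (ms)
        List<String> sample = List.of(
            "2023-06-01T10:00:01 GET /index.cfm 120",
            "2023-06-01T10:00:02 GET /search.cfm 480");
        System.out.println(averageColumn(sample, 3)); // prints 300.0
    }
}
```

Running the same computation over the log slice for each test window gives a simple before/after comparison of average request time.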

You can find the FR logs either by directly accessing them on the file system (where FR is installed) or in the FR UI, via the Logging button at the bottom of the left nav bar. FR’s logs are created (by default) in a “logs” folder holding the current hour; then, at the top of the hour (or upon restart of the instance being monitored), the existing logs are zipped up and stored in an “archives” folder. The logging interface allows you to easily find and download the current logs or the past (“archived”) FR log zip files for any hour and any day (the FR logs are kept for 30 days by default, and they take only a few megabytes of disk space per month, for most users).

There is yet another way to view the information in the FR logs: an FR feature that even graphs them, or lets you view them more easily. That feature deserves its own section, next up.

To learn more about FR’s logs, including a guide to each log and its columns, as well as tools for analyzing them, see the following: 

FusionReactor’s archived metrics capability

Before leaving the topic of the FR logs, and using them to assess performance between tests, it’s very important to take a moment to discuss the “Archived Metrics” feature, which was introduced in FR 7.2. Indeed, one might never look directly at the logs or zip files, as discussed above, once one knows about this feature. And again it can be very helpful for assessing the overall performance of the metrics that FR tracks, over the course of tests (from one hour to another, or simply from one app server restart to another).

The Archived Metrics feature is available either via the Metrics or Logging menus in FusionReactor. 


Most importantly, this feature lets you easily view all the FR logs. But in addition to simply being able to view them as text (including a helpful “spreadsheet” view that shows a column heading for each column in every log), the real power of Archived Metrics is that it automatically graphs nearly any log you’re looking at. When viewing the resource.log, for instance, you are shown a set of graphs that look very much like those on the FR Metrics>Web Metrics page. When viewing the memory or GC logs, you see graphs much like those in the pages under the FR Resources menu of the same name.

Better still, you can use the date and time picker features in the top left of the Archived Metrics page to look back in time, where you can see in effect what is in each FR log zip (so again, each hour or since each restart within an hour, for as far back as the logs are kept–30 days by default.) 


So you can see how easily you could use these aspects of Archived Metrics to look back over your tests and understand how various FR-tracked metrics changed over time. Indeed, notice that the time picker (and date picker) has left and right arrows, allowing you to easily move from one timeframe to another, watching how whatever graphs you are viewing changed over that time. This makes it very easy to discern significant changes, such as when a test was performed during one of those timeframes.

To learn more about the FR Archived Metrics feature, see the following:

Option to force rotation of FusionReactor logs, between tests

As an aside, regarding the FR logs and assessment during testing, note as well that the FR Logging>Log Archive page offers a button at the top, “rotate now”. It forces FR to create a new zip, at that moment, of whatever logs were currently being tracked (since the top of the previous hour, or any previous instance restart, whichever happened most recently). That again could be helpful when you perform a given test and then want to use the archive viewer (or your own log analysis) to assess what happened in that small timeframe.

FusionReactor Crash Protection (CP) email alerts

Finally, just as the FR reports and logs can be used to assess things after a test run, so too can the FR Crash Protection email alerts. They can serve both as indicators THAT there are unexpected problems and as details on the CAUSE of such problems. These alerts can be set up (in the Protection>CP Settings page of FR’s left nav bar) to be triggered when any of 4 possible trouble situations arise:

  • too many requests running
  • requests taking too long
  • heap usage too high
  • CPU usage (within the app server instance) too high

And in addition to reporting THAT these things are happening, the whole point of these alerts is that they give far more detail about what IS happening (within the instance) at the time of the alert, including showing such things as:

  • how many requests are running (at the time of the alert)
  • every request running at the time of the alert (its URL, query-string, duration, IP address, user agent, the query being run, the total time of queries run so far, and more)
  • a stack trace of every request and indeed (by default) every JVM thread, at the time of the alert

These details can be tremendously helpful in understanding what went on after a test (such as whether, and how many, such alerts were created during a given test versus another), and of course they can be used to understand and diagnose what’s going on DURING a test, which brings us to the intended focus of the last section.

To learn more about FR CP email alerts, see the following:

FusionReactor features enabling assessment/diagnosis, during tests

As we move toward wrapping up this post, recall that at the opening I distinguished FR features that enable “assessment over time, after tests” from those that enable “assessment/diagnosis during tests”. This section discusses the latter.

Of course, that distinction is somewhat arbitrary. For instance, there are times when the aforementioned logs could even be leveraged during a test (or troubleshooting session), while some things in this section could also be used to assess things after the test and over time, such as viewing the Requests>Slow Requests or Error History, and so on (as long as the app server had not been restarted). 

Still, their primary value might be in helping diagnose things DURING a test, and there simply isn’t time or space here to detail them all. As above, I will end this section by pointing you to resources for learning more about these things.

But as you perform testing, you should definitely consider watching some of these key FR UI elements:

  • Requests
    • running requests
    • slow/longest requests
    • error history, event snapshot history
    • response codes
    • requests by memory used
    • applications (see choices at the top of its screen)
  • JDBC
    • similar details as above, and also see the “databases” option (and see choices at the top of its screen)
  • Transactions
    • similar to requests and JDBC, but focused on any of several types of transactions that FR tracks, including cfhttp/httpclient calls, cfmail calls, etc.
      • be sure to note the optional choices offered at the top of each screen, to control “type” of transactions shown
    • See also the features for
      • middleware sources
      • external sources
      • transit
  • Request profiling (to understand where time is spent within a given slow request)
  • Metrics>Web Metrics
    • and its additional submenus, such as those related to JMX and CF
  • UEM and Sessions
    • UEM for focusing on time spent sending content back and forth to clients
    • Sessions for focusing on counts of sessions in your application server
  • Resources
    • and its additional submenus, such as Memory, GC, etc
  • Memory>View Heap
    • to observe and track changes in heap use over time
  • Event snapshots
    • allowing additional details to be tracked (and sent by email) regarding application errors happening during testing 

You can learn more about all these (as sections of the FR user interface) in the FR User Guide, starting here:

Considering FusionReactor Cloud’s role in such an assessment

Finally, in addition to the traditional on-premise FR monitoring (enabled by the FR Java agent, implemented in your app server), there is also the available FR Cloud feature. This add-on pushes nearly all FR monitoring data up into the cloud, to a server/service provided by Intergral (makers of FR), made available securely (and only to those in an org given access to it). The FR Cloud UI offers still more powerful features for assessing performance and processing over time, and this data is kept for days or weeks depending on your FR version (Enterprise or Ultimate).

FR Cloud allows for filtering based on more aspects of request processing than the on-prem UI for FR, such as filtering by application name, or by request duration, and so on. And whereas the on-prem UI for FR tends to limit the number of items tracked to the most recent 100 (because that is tracked in memory on the app server), with FR Cloud such information is tracked in a database (in the cloud, managed by Intergral), which again greatly helps in assessing activity from one test to another. This includes not only request and database details, but also the request profiles and heap analysis information mentioned previously. 

For more on FR Cloud, see such resources as:


Phew, that was quite a whirlwind tour of the many possible ways that FR can be used to assist in assessing a migration to a new version of something, like a new version of your app server. Recall that we discussed both things to use after the test as well as during the test. We also threw in various bonus topics along the way, such as being careful not to change TOO many things at once, and FusionReactor Cloud as yet another tool in the FR arsenal for assisting with such assessment during testing.

Hardware and Tools to Offer Effective Support


For the past 3 years, I have been running the Support Desk at Intergral, and throughout this time I have found both hardware and tools that have helped make the job easier and provide the best service possible. In this post, I am going to share the hardware and tools I believe are required to offer effective support.


Firstly, I am going to discuss the hardware that I cannot work without, which includes:

  • Lightweight Laptop
  • Mobile Phone For On-Cover
  • A Compact Microphone For Travel
  • High-Quality Desk Audio
  • HD Webcam
  • Multiple Screens

Lightweight Laptop

This may seem like an obvious entry, but it is by far the most important tool for the job. There are a few key criteria it must meet:

  1. Essential specs to run everything you need
  2. Easily portable
  3. Comfortable keyboard

As a Technical Support Engineer with a software development background, I am quite often looking at low-level issues by inspecting code and running up reproduction environments, which requires a powerful machine.

Although I generally work at my desk, whether that be in the office or (currently) at home, I do have to travel for events or to see clients on odd occasions, so a heavy 17” laptop with a 2-hour battery isn’t suitable. For instances like this, you need something that is easily transportable and doesn’t need charging too frequently.

Finally, a good keyboard is needed: one that is comfortable and easy to use for the times when I’m not using the mechanical keyboard at my desk.

Taking these 3 factors into account led me to purchase a ‘Lenovo X1 Carbon’ laptop, which can be loaded up with hardware, has a 14” screen, and weighs less than a kilo. The keyboard has a heavier keypress compared to other laptops, which is great for me, as it should prove more durable.

Mobile Phone

Part of working in support is being on call, and the good thing about phones these days is that most tools you need are already on there, so you have access to them all the time. Some people choose to have 2 separate phones; others get one phone with multiple SIM cards.

I use a OnePlus as both my personal and work device, but which phone you use is all down to personal preference.

Compact Microphone

Laptop microphones are fine in general, but when taking customer calls, built-in microphones don’t have the quality required to provide a good service. Since I have to travel from time to time, I find that a compact microphone or headset is an easy alternative that you can keep in your bag. I use a ModMic for this, as my regular headphones can then be used as a headset and it takes up little space in my bag.

High-Quality Desk Audio

When working from your desk, it’s important to have either a good quality headset or a speaker and microphone setup. Headphones or speakers with good sound quality can make all the difference, as you can hear the customer clearly and distinguish their voice from background noise.

Microphones allow the customer to hear you clearly, but can also have features to dull the background noise. I use Sennheiser headphones and a Blue Yeti standing microphone, as they are affordable and reliable.

HD Webcam

In my experience, being visible to the customer and being able to see the customer makes calls more productive and makes conversations feel personal. Allowing the customer to see you confirms that they are speaking to a real person and makes it easier to convey empathy and understanding. Being presentable and having a clean background/backdrop helps you look professional to the customer too.

Laptops usually have webcams built in, but an external webcam lets you position the view and often has a better quality image. Logitech webcams have always been reliable and worked well for me.

Multiple Screens

Some people can work happily with one screen, but I need more screen real estate. I find that 3 monitors allow me to separate the different programs I’m using, such as team chat, social media channels, meetings, and web browsing, without needing to constantly switch tabs. Having enough screens to see everything you need at once makes multitasking in support easier.


Having all the hardware but no software to manage the support service would result in a poor customer experience, which is why the following software tools and documentation are required to ensure the support process runs smoothly and the team can communicate:

  • Internal and External Support Policies
  • Customer Service Systems
  • Calendars/Scheduling Apps
  • Reliable Meeting Software

Internal and External Support Policies

The most critical tool needed by a support team is an agreed policy: defined internally in terms of how the team will operate, and externally in terms of what service will be delivered.

The internal policy is made up of information such as team roles and objectives, supported products and channels, service level objectives, available times, and defined procedures for certain events.

The external policy is typically defined as a matrix and clearly defines when, how, and why the support team can be contacted.

The support policy for Intergral was initially defined years ago, and since then has received periodic tweaks as we add channels and change the team structure.

Customer Service Systems

To run an effective support team you need dedicated tools designed to run a support desk, and there are many products to choose from, including Freshworks, Intercom, and HubSpot. These systems allow you to receive email, chat, calls, social media, and many other channels in a single location, where the interactions can be logged, managed, and reported on.

While setting up tooling for support can be time-consuming, it is worth the effort as you have full control and visibility of your support process and most of the work can be automated saving you time in the long run. We use Freshdesk to run our support desk and have it fully configured to meet our policy and alert on any potential issues we may be experiencing.

Calendars and Scheduling Applications

Having a record of all support appointments, as well as any required team meetings, in a central calendar is essential. Using an application that syncs to your calendar and allows customers to schedule calls–without negotiating available slots, time zones, and who will need to attend–can save multiple emails and lets the customer find the right time for them.

Something like ‘Calendly’ is a free tool you can use for this purpose, and some customer support systems have this built in.

Reliable Meeting Software

Recently, more people have started using meeting software, whether it be for working or meeting with friends during lockdown. Being able to meet with multiple people, share videos, screens, and having communal chats is a great way to communicate without having to meet in person.

Having these capabilities for a webinar, internal team meeting, or customer call can reduce the time needed on a call and allows you to view the customer’s screen and in some cases even take control of it. If you have customers with security restrictions it can limit what you can do, but in most cases, they are still able to use screen share and webcams.

Internally we use Google Meet for communication and offer this to customers, but as a support team you have to be flexible and open to using whatever the customer uses.

Inheritance, Abstract Class and Interface in Java

Java is an object-oriented programming language, and all the OOP (object-oriented programming) concepts apply when programming in it. There are four main pillars of OOP:

1. Inheritance
2. Abstraction
3. Encapsulation
4. Polymorphism

There is an additional concept in OOP: multiple inheritance. This is directly supported by C++, but it has pitfalls when writing complex code; hence the languages that originated later than C++ either dropped the idea of multiple inheritance or adopted it in a modified form. In Java, it is supported through the concept of interfaces.
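As a brief sketch of that idea (the class and interface names here are illustrative, not from any library): a Java class may extend only one class, but it may implement any number of interfaces, which is how Java recovers a form of multiple inheritance:

```java
// Two unrelated capabilities, each defined as an interface with a
// default method (available since Java 8).
interface Swimmer {
    default String swim() { return "swimming"; }
}

interface Flyer {
    default String fly() { return "flying"; }
}

// Duck cannot extend two classes, but it can implement both interfaces,
// inheriting behavior from each.
class Duck implements Swimmer, Flyer { }

public class MultipleInheritanceDemo {
    public static void main(String[] args) {
        Duck duck = new Duck();
        System.out.println(duck.swim() + " and " + duck.fly()); // prints "swimming and flying"
    }
}
```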


In the biological world, all living organisms inherit some or all of the properties of their parents. The object-oriented programming paradigm is based on this real-world philosophy: a child will derive some or all of the features of its parent. In Java, the ‘extends’ keyword is used to express inheritance.

Let us try to understand this with the example of a class Animal and a class Cat. In this case, the Animal class can be considered the parent class, whereas the Cat class is the child class. A class in Java is the blueprint for creating objects. An object has properties and states: properties are exposed through methods, while states are held in variables. When we design the classes for our Java program, we can design the parent class's methods and states.

Animal
-sound: String
-sleep: String
-legs: int
-diet: String

+getSound(): String
+getSleep(): String
+getLegs(): int
+getDiet(): String

Cat
-type: String
-size: String
-breed: String

+getType(): String
+getSize(): String
+getBreed(): String

When we design our Cat class, we need not rewrite all of its code from scratch; we can reuse the code from the Animal class. The Cat class adds some further properties of its own and reuses the properties of the Animal class. Here is sample code for inheritance:

class Animal {
  private String sound;
  private String sleep;
  private int legs;
  private String diet; // Animal attributes

  public String getSound() { // Animal method
      return sound;
  }
  public String getSleep() {
      return sleep;
  }
  public int getLegs() {
      return legs;
  }
  public String getDiet() {
      return diet;
  }
}

class Cat extends Animal { // Cat class inherits the methods of Animal
  private String type;  // Cat attributes
  private String size;
  private String breed;

  public String getType() {
      return type;
  }
  public String getSize() {
      return size;
  }
  public String getBreed() {
      return breed;
  }

  public static void main(String[] args) {
      Cat myCat = new Cat();
      myCat.getSound(); // inherited from Animal without rewriting it
  }
}


OOP provides the benefit of hiding non-essential details from the end user through abstraction. A driver of a car does not need to know the under-the-hood implementation of the gearbox, steering mechanism, etc. He or she is shown only the relevant, essential details. For example, a driver is interested in the clutch, gears, brakes, accelerator, dashboard, horn, windshield, side mirrors, and air conditioning, but not in how they are implemented. In Java, abstraction can be implemented using:

a) Abstract class

b) Abstract method

c) Interfaces

The keyword ‘abstract’ is a non-access modifier and is used on both abstract classes and abstract methods to achieve abstraction. An interface achieves abstraction by itself.

Abstract Class

A Java class is declared abstract using the keyword ‘abstract’ and can contain both abstract and non-abstract methods. It cannot be instantiated; that is, objects of it cannot be created directly. A class inheriting an abstract class has to provide implementations for the abstract methods declared in it. An abstract class can also contain constructors, static methods, and final methods.

Abstract Method

The abstract keyword is used to declare a method abstract; such a method cannot have an implementation in the class where it is declared. The inheriting class has to provide the implementation for that abstract method.

The sample code below shows an example of an abstract class and an abstract method. Notice that the subclass has to provide the implementation for the abstract method.

// Abstract class
abstract class Animal {
  // Abstract method (no body; subclasses must implement it)
  public abstract void eyeColor();
  // Regular method
  public void sound() {
      System.out.println("Some generic animal sound");
  }
}

// Subclass (inherits from Animal)
class Pig extends Animal {
  public void favoriteFood() {
      // The body of favoriteFood() is provided here
      System.out.println("Hay Stack");
  }
  public void eyeColor() {
      // implementation of the parent's abstract method is given here
      System.out.println("Eye color is black");
  }
}

class MyMainClass {
  public static void main(String[] args) {
      Pig pig = new Pig(); // Create a Pig object
      pig.eyeColor();
  }
}


Before diving deeper into interfaces, we first have to understand the concept of multiple inheritance as used by C++. Before Java came into the world, several programming languages were grappling with the complexity of code that uses multiple inheritance; it was in C++ that the feature was most widely adopted. C++ undoubtedly helped bring object-oriented programming into the world, but it also introduced some complex problems through the concept of multiple inheritance.

What is Multiple Inheritance?

With multiple inheritance, a child can have multiple parents, meaning a child can inherit the properties of several parents at the same time. This creates ambiguity when similar properties are implemented by different parents, adds complexity to the code, and invites bugs.

To tackle the problems created by multiple inheritance, several ideas were put forward and implemented. Java uses interfaces to provide the features of multiple inheritance.

An interface can be thought of as an abstract class that groups related methods without any implementation. To use an interface in Java code, the ‘implements’ keyword is used. A class can implement several interfaces, thus providing features similar to those of multiple inheritance. The example below shows interfaces and their implementation.

// Interfaces
interface Animal {
  public void sound(); // interface method (no body)
  public void sleep(); // interface method (no body)
}

interface Breed {
  public void domesticBreed();
}

// Dog "implements" both the Animal and Breed interfaces
class Dog implements Animal, Breed {
  public void sound() {
      // The implementation of sound() is provided here
      System.out.println("The dog says: woof woof");
  }
  public void sleep() {
      // The body of sleep() is provided here
      System.out.println("Zzz");
  }
  public void domesticBreed() {
      // implementation of domesticBreed() method
      System.out.println("Labrador");
  }
}

class MyMainClass {
  public static void main(String[] args) {
      Dog dog = new Dog(); // Create a Dog object
      dog.sound();
  }
}

Java is an object-oriented programming language, and the topics above form the key pillars of object-oriented programming; we can say that Java provides the object-oriented paradigm through inheritance, abstraction, and interfaces.

Root Filesystem Full on our Backup Server

This morning I arrived to find our backup server's root filesystem full. This was very strange, as the backups all go onto a second btrfs disk which has plenty of space left.

The Problem

The root filesystem was using the full ~40 GB of disk, even though the backup runs a single Python script using rsync and btrfs snapshots, and those backups go onto a second disk, not the root filesystem.

root@backup:/# df
Filesystem                   1K-blocks       Used Available Use% Mounted on
udev                           4068220          0   4068220   0% /dev
tmpfs                           817520      82784    734736  11% /run
/dev/mapper/ubuntu--vg-root   39875172   39452220         0 100% /
tmpfs                          4087588          0   4087588   0% /dev/shm
tmpfs                             5120          0      5120   0% /run/lock
tmpfs                          4087588          0   4087588   0% /sys/fs/cgroup
/dev/vda1                       240972      57414    171117  26% /boot
/dev/sda                    1572864000 1217936940 352567252  78% /mnt/btrfs
cgmfs                              100          0       100   0% /run/cgmanager/fs
tmpfs                           817520          0    817520   0% /run/user/1000

I ran apt-get autoremove --purge to see if there were any leftover packages, but it removed nothing. /tmp and /home were fine too.

I then started walking the filesystem with du to find the largest directories.

$ cd /
$ du -sk *
0           sys
68          tmp
887304      usr
16751520    var
0           vmlinuz
$ cd /var/
$  du -sk *
4           man-db
4           misc
15809208    mlocate
48          nssdb
8           ntp

This showed that mlocate was using lots of disk.

Our backup system uses btrfs and creates a snapshot every night, so from mlocate's point of view the whole backup size was being added to its index every night.

The Solution

I simply had to add our /mnt/btrfs directory to the PRUNEPATHS in /etc/updatedb.conf. Below shows the PRUNEPATHS before and after my change.

PRUNEPATHS="/tmp /var/spool /media /home/.ecryptfs /var/lib/schroot"
PRUNEPATHS="/mnt/btrfs /tmp /var/spool /media /home/.ecryptfs /var/lib/schroot"

After running updatedb again the disk usage dropped to this :

root@backup:~# df
Filesystem                   1K-blocks       Used Available Use% Mounted on
udev                           4068220          0   4068220   0% /dev
tmpfs                           817520      82784    734736  11% /run
/dev/mapper/ubuntu--vg-root   39875172    7654340  30509764  21% /
tmpfs                          4087588          0   4087588   0% /dev/shm
tmpfs                             5120          0      5120   0% /run/lock
tmpfs                          4087588          0   4087588   0% /sys/fs/cgroup
/dev/vda1                       240972      57414    171117  26% /boot
/dev/sda                    1572864000 1217936940 352567252  78% /mnt/btrfs
cgmfs                              100          0       100   0% /run/cgmanager/fs
tmpfs                           817520          0    817520   0% /run/user/1000

TL;DR: /var/lib/mlocate/mlocate.db was massive due to lots of btrfs snapshots; I added the backup directory /mnt/btrfs to the PRUNEPATHS variable in /etc/updatedb.conf and then ran updatedb.

Sending Custom Metrics to FusionReactor Cloud

Recently we were asked if we had a specific integration for OpenMetrics to send information to FusionReactor Cloud. While we don't have support for OpenMetrics specifically, we do have an API through which any metric can be posted from the FusionReactor instance.

There are two stages to getting this working. First, you need to use FRAPI to post a metric every second or minute for your custom series; second, you configure the cloud graph profile to show this data.

Posting Metrics

To post metrics to the cloud, the metrics first need to be published into FusionReactor's metric system.

// Post a value into FusionReactor's metric system roughly once a second
// for an hour using FRAPI (the series name "custom/demo" is illustrative).
final long end = System.currentTimeMillis() + 60 * 60 * 1000;
while (System.currentTimeMillis() < end) {
    FRAPI.getInstance().postNumericAggregateMetric("custom/demo",
            (float) Math.random() * 100);
    Thread.sleep(1000);
}


FusionReactor's metric system expects data to be posted reliably. This normally means you have to aggregate the data and post the metric every second; the code should continue to push 0s to keep the series up to date.

FusionReactor will then allow this to be seen via the Custom Metrics page.


Now you have data being captured in FusionReactor, you have to configure it to send the data to the cloud.

To do this you simply need to create a file called custom_series.txt. This file needs to contain the name of the series.


$ cat /opt/fusionreactor/instance/tomcat8/custom_series.txt

Where my instance is running from the instance directory /opt/fusionreactor/instance/tomcat8/

You have to restart your server for this file to be read. Once you do and the postNumericAggregateMetric API is being called reliably, the data will be sent to the cloud.

Configuring Cloud Graph Profile

To get the custom metric to appear in a graph you have to use the Metrics explorer. You can get to this by clicking the “Metrics” menu from the top nav bar in the Cloud.

You then need to create a new graph by clicking “New”, give it a name and select the custom metric. All custom metrics are prefixed with /custom/ and then have the series id.

To make this graph usable as a template, select the “Filter” tab, choose the tag option “server” and the function option “literal_or”, set the value option to your server, and finally tick the “Save as a template” check box.

Now press the “+ Add” button to add this filter.

To save this graph click on the vertical ellipsis in the top right and select “Save”

If you go to the Server view and view the graphs, you can now select the graph you have created and it will appear with the other graphs in the current graph profile.

The graphs shown are available under the “Graphs” filter menu.

Custom graphs are appended to the bottom of this menu. Once your graph is selected it will appear.

See also

Object-Oriented Programming; what is Inheritance, Polymorphism, Abstraction & Encapsulation?

Object-oriented programming refers to the concept in high-level languages such as Java and Python of using objects and classes in their implementations. OOP has four major building blocks: polymorphism, encapsulation, abstraction, and inheritance. There are other programming paradigms, such as procedural programming, in which code is written sequentially. Python and Java are multi-paradigm high-level programming languages, meaning they support both OOP and procedural programming. A programmer decides which paradigm to use based on their expertise and the problems they are trying to solve. However, there is no controversy that OOP makes programming easier, faster, more dynamic, and more secure. This is a major reason Java and Python are among the most popular programming languages in the world today.

If you want to learn Java, Python, or any other object-oriented programming language, then you must understand these object-oriented programming concepts, which are relatively easy to grasp. Let's take a look at them.

What is Inheritance?

In Java and Python, code is written in objects or blocks if you are adopting the OOP methodology. Objects can interact with one another by using the properties of each block or by extending the functionality of a block through inheritance. Inheritance ensures that code is reused. There are millions of Java and Python libraries that a programmer can use through inheritance. The properties of a class can be inherited and extended by other classes or functions. There are two kinds of class here: the parent (or base) class, and the child class, which can inherit the properties of the parent class. Inheritance is a major pillar of object-oriented programming: it is the mechanism by which classes in Java, Python, and other OOP languages inherit the attributes of other classes.

A parent class can share its attributes with a child class. An example of a parent class implementation is a DLL (dynamic-link library): a DLL can contain different classes that can be used by other programs and functions.

A real-world example of inheritance is a mother and child. The child may inherit attributes such as height, voice patterns, and color, and the mother can have other children with the same attributes as well.

You could create a function or class called “Move Robot”, which controls a robot's movement, and then create methods and functions in other programs that inherit the “Move Robot” class without rewriting the code over and over again. You can also extend this class by inheriting it and writing a few more lines that instruct a robot to run in some specific circumstances using if-else statements. With inheritance, you can create multiple robots that inherit the attributes of the parent class “Move Robot”, which ensures code reusability.

In summary, inheritance is concerned with the relationship between classes and methods, which is like that of a parent and a child. A child can be born with some of the attributes of its parents. Inheritance ensures reusability of code, just as multiple children can inherit the attributes of their parents.

When we want to create a function, method, or class, we look for a superclass that contains the code, or some of the code, we want to implement. We can then derive our class from the existing one. In Java, we do this with the keyword “extends”; in Python, we inherit the attributes of a class by naming that class in the class definition.

Here is an example :

A Vehicle class would define a field for speed, since all vehicles are capable of traveling at some speed (even if it is 0 when stationary).

All boats would define buoyancy and draft and then the specific types (sailing, paddle, speed) would define its method of propulsion.

Cars define type of fuel, engine size etc.

Airplanes would have logic for flight, i.e. weight limits etc.

public class Vehicle {
    public float speedInKPH;
}

public class Car extends Vehicle {
    public int numberOfWheels;
    public int numberOfSeats;
}

public class Plane extends Vehicle {
    public long maximumTakeoffWeight;
}

public class SportsCar extends Car {
    public boolean hasSoftTop;
}

What is Encapsulation?

This is a programming style in which implementation details are hidden. It greatly reduces software development complexity. With encapsulation, only methods are exposed; the programmer does not have to worry about implementation details and is concerned only with the operations. For example, if a developer wants to use a dynamic-link library to display date and time, he does not have to worry about the code inside the date-and-time class; he simply uses that class through its public methods. In essence, encapsulation is achieved in Python and Java by creating private variables to hide state within a class and then providing public methods to access it. With this approach, a class can be updated or maintained without worrying about the methods using it. If you are calling a class from ten methods and you need to make changes, you don't have to update all ten methods; you update the single class, and the change propagates to its callers automatically. Encapsulation also ensures that your data is hidden from external modification; it is also known as data hiding.

Encapsulation can be viewed as a shield that protects data from getting accessed by outside code.

In essence, in Object-Oriented Programming, Encapsulation binds data and code as a single unit and enforces modularity.
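As a minimal sketch of this idea in Java (the BankAccount class and its members are illustrative, not from any particular library), private state is exposed only through public methods that can enforce the class's own rules:

```java
// Encapsulation: the balance field is private, so outside code can only
// change it through the public methods, which guard the class invariants.
class BankAccount {
    private double balance; // hidden state; not directly accessible

    public double getBalance() {
        return balance;
    }

    public void deposit(double amount) {
        if (amount > 0) {   // the class protects its own data
            balance += amount;
        }
    }
}
```

Callers never touch `balance` directly, so the internal representation could later change (say, to a cents-based long) without breaking any of the code that uses the class.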

What is Polymorphism

Polymorphism means existing in many forms. Variables, functions, and objects can exist in multiple forms in Java and Python. There are two types of polymorphism: run-time polymorphism, where the form taken is resolved while the application is running (for example, method overriding), and compile-time polymorphism, where it is resolved during compilation (for example, method overloading).

An excellent example of polymorphism in object-oriented programming is cursor behavior: a cursor may take different forms, such as an arrow, a line, a cross, or another shape, depending on the behavior of the user or the program mode. With polymorphism, a method or subclass can define its own behaviors and attributes while retaining some of the functionality of its parent class. This means you can have a class that displays date and time, and then create a subclass that inherits the class but displays a welcome message alongside the date and time. The goal of polymorphism in object-oriented programming is simplicity, making code more extendable and applications easier to maintain.

Inheritance allows you to create class hierarchies, where a base class gives its behavior and attributes to a derived class. You are then free to modify or extend its functionality. Polymorphism ensures that the proper method will be executed based on the calling object’s type.

Program code may run differently on different operating systems; the ability of program code to exhibit different behaviors across operating systems is one face of polymorphism in OOP. You could create a class called “Move” and then let four people create animals that inherit the Move class. We don't know in advance what animals they will create, but polymorphism allows all the animals to move, each in a different form based on its physical features:

A creates a Snail that inherits the move class, but the snail would crawl

B creates a Kangaroo that inherits the move class, but the Kangaroo would leap

C creates a Dog that inherits the move class, but the dogs would walk

D creates a Fish that inherits the move class, but the Fish would swim.

Polymorphism has ensured that these animals are all moving, but in different forms; how the program will behave is not known until run time.
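The animal example above can be sketched in Java as follows (the class names mirror the text; the returned strings are illustrative):

```java
// Runtime polymorphism: each subclass overrides move(), and the method
// that actually executes depends on the object's runtime type, not on
// the declared type of the variable holding it.
class Animal {
    public String move() {
        return "moves";
    }
}

class Snail extends Animal {
    @Override
    public String move() { return "crawls"; }
}

class Kangaroo extends Animal {
    @Override
    public String move() { return "leaps"; }
}

class Dog extends Animal {
    @Override
    public String move() { return "walks"; }
}

class Fish extends Animal {
    @Override
    public String move() { return "swims"; }
}
```

Given `Animal a = new Kangaroo();`, the call `a.move()` returns "leaps" even though the variable's declared type is Animal: the decision is made at run time.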

What is Abstraction?

Abstraction in Java and Python is a programming methodology in which details of the programming codes are hidden away from the user, and only the essential things are displayed to the user. Abstraction is concerned with ideas rather than events.

It's like in our vehicle example: some vehicles need to be started before they can move.

public interface Startable {
    void start();
}

public class Car extends Vehicle implements Startable {
    public int numberOfWheels;
    public int numberOfSeats;

    public void start() {
        // some implementation
    }
}

You don't need to care about how to start a car, plane, or boat; if it implements Startable, you know it will have a start method.

Why is Inheritance, Polymorphism, Abstraction & Encapsulation used?

The main ideas behind object-oriented programming are simplicity, code reusability, extensibility, and security, achieved through encapsulation, abstraction, inheritance, and polymorphism. For a language to be classified as OOP, it must have these four building blocks. Abstraction means displaying only the relevant aspects to the user: you turn on the radio, but you don't need to know how the radio works. Abstraction ensures simplicity. Inheritance lets methods and functions inherit the attributes of another class; its main aim is code reuse, which ensures that programs are developed faster. DRY (don't repeat yourself) is a related concept: in a program, you should not have different pieces of similar code. Instead, have one class and call it from other methods, extending the functionality where necessary. Polymorphism allows program code to take different forms or behaviors, while encapsulation is the process of keeping class internals private so they cannot be modified by external code.

What is FusionReactor?

FusionReactor is an APM for Java applications that features low-level capabilities including profilers, automated root cause analysis, and a production debugger.

How to Find Memory Leaks in Your Application

In software development, finding memory leaks is one of the most essential stages of the whole development process. It is worth noting that most modern Integrated Development Environments (IDEs) come fully packed with a suite of debugging and profiling tools. These profiling instruments aren't a universal remedy, since they only pinpoint the part of the application that is using the most memory; they don't directly indicate which portions of your application are faulty. Nonetheless, developers are known to sidestep this crucial phase of application testing, mostly due to negligence, ignorance, or both. In this guide, we will discuss how to check:

  • if your application has a leak
  • how to spot the exact location of the memory leaks
  • measures to redress leaks in your applications running on a Windows OS

What is a Memory Leak?

The term ‘memory leak’ refers to a situation where software is unable to manage the available RAM efficiently, resulting in overall performance degradation of the application and its host machine. In a properly functioning system, RAM is dynamically allocated to requesting applications; when the software completes its tasks and no longer needs these resources, the memory is returned and reallocated to the next application, program, or process in need.
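A classic way such a leak arises in Java (a sketch; the class and method names are illustrative) is a long-lived collection that keeps accumulating references, so the garbage collector can never reclaim the objects:

```java
import java.util.ArrayList;
import java.util.List;

// A static collection lives as long as the class is loaded, so every
// object added to it stays reachable and is never garbage collected.
class LeakyCache {
    private static final List<byte[]> CACHE = new ArrayList<>();

    static void handleRequest() {
        // Each "request" allocates 1 MB that is never released.
        CACHE.add(new byte[1024 * 1024]);
    }

    static int cachedBlocks() {
        return CACHE.size();
    }
}
```

After enough calls to `handleRequest()`, heap usage grows steadily until the JVM throws an OutOfMemoryError; a heap-dump comparison would show the cache list dominating memory.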

How to Detect A Memory Leak in your application?

The best approach to checking for a memory leak in your application is to look at your RAM usage and compare the total amount of memory being used against the total amount available. It is advisable to obtain snapshots of your memory's heap dump in a production environment; in doing so, you get firsthand insight into the traffic, since the traffic type, volume, and pattern all play a vital role in determining the types and number of objects created in memory. You can recreate this step in your test environment if you have a mechanism that mirrors the production traffic pattern exactly. Another source of memory leaks in running applications is large file uploads by users, which, if not properly managed, can overwhelm the application's memory. Hence the approach in that case would be the use of automated tools.

How to Resolve Memory Leakages?

In cases where you experience memory leaks as described above, it means that software has trapped the RAM and denies all other applications access to it. Over time, a large portion of the RAM gets tied up by this defective process.


1 – Take Snapshots of Heap Dump:

It is good practice to capture your heap dump every 10–15 minutes after launching an application. That way, you can analyze heap dump snapshots based on real traffic. Since heap dumps are essentially snapshots of your memory, they reveal profiling data on the allocated objects residing in memory, including inbound and outbound references as well as the values stored in those objects. These snapshots are compared, and the differences are used as a benchmark for analyzing memory usage. You can capture a heap dump with the JDK's jmap tool, using a command of the form jmap -dump:format=b,file=<file-path> <pid>.


where pid is the Java process ID whose heap dump should be captured, and file-path is the path the heap dump will be written to.

2 – Capture Heap Dump at Crash:

Essentially, this step captures the heap dump the application produces when it throws an OutOfMemoryError, just moments before it crashes. Just as step one fetches heap snapshots under actual traffic, this step captures the heap dump right before the crash. It is best to start the application with the JVM property shown below, since we do not necessarily know when the application will crash.

-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=<file-path>

file-path: is the file path where heap dump will be written in to.

3 – Comparing Heap Dumps:

From the steps above, we can determine what is creating the memory leak. This is done by marking out the objects with size discrepancies between the dumps captured in steps 1 and 2; those are the objects causing the leaks.

Figure 1: Displays the sizes (in bytes and %) of classes and objects in descending order.

Figure 2: Compares the difference in heap dumps between two different time frames.

How to Find Memory Leaks in Your Application

In my experience, memory leaks are most prevalent in older and lower-level programming languages such as C and C++, as well as in long-running Java applications. With the advent of cloud computing, memory is now abundantly allocated for development, and most modern programming languages do not require you to allocate your own memory or release it when done. For this reason, a good number of developers do not even know what a memory leak is, let alone how to handle one. Tools like FusionReactor have low-level capabilities that incorporate memory profiling and allow you to quickly find memory leaks as they occur.

IntelliJ Memory Leak Detection

IntelliJ is a pioneering development kit and one of JetBrains' extended line of integrated development environments (IDEs). This trailblazing IDE formerly shared its name with the parent company, IntelliJ Software Company. To illustrate the IDE's breadth, it supports over a dozen programming languages, including Java, Groovy, Kotlin, Ruby, Python, PHP, C, Objective-C, C++, C#, Go, JavaScript, and SQL.

Also referred to as IntelliJ IDEA, this IDE is an all-round, handy tool for programming, debugging, and profiling. Learning how to detect memory leaks in a powerful development environment such as IntelliJ is therefore of utmost importance. Although IntelliJ comes with built-in profiling functionality, using a dedicated, multi-purpose profiler such as FusionReactor alongside it can be pretty slick.

How to Find Memory Leaks in IntelliJ IDEA

A robust memory detection tool can efficiently analyze and detect Java heap leaks while optimizing memory usage on the fly. This guide focuses on monitoring and reviewing the conditions that induce object retention, which is very useful information to keep in mind when detecting leaks. Another notable procedure is to evaluate memory usage, to better understand the inner workings of your application and to minimize the creation of redundant objects.

Steps taken for IntelliJ Memory Leak Detection

  1. First, we need to display the Memory tab by clicking the window icon in the top-right corner of the Debug tool window.

The Memory tab would display the following information as follows;

  • Class: the name of the class in use.
  • Count: the number of instances (objects) of the class present in the heap.
  • Diff: the difference between the number of instances at two execution points.

Figure 1: The Memory Tab.

  2. Next, we need to fetch the information regarding the number of objects present. To do this, we have to:
  • Manually pause the program or halt at a certain breakpoint.
  • Then, we would have to initiate the procedure by selecting Load Classes on the Memory tab settings icon. While the list of loaded Classes is displayed on the Memory tab, the corresponding number of active objects are shown in the Count column.

Note that this process is only initiated on demand for performance reasons.

  3. Our next task is to find and sort these classes. Finding a class is as easy as typing the class name in the search bar provided. To sort the classes, click on any of the headers corresponding to the criterion you want: Class, Count, or Diff. Clicking the already-selected criterion toggles the sort order between ascending and descending.

  4. Next up is to get the difference between two execution points by means of the Diff column described above. To achieve this, we set up two points, which we will refer to as the starting point and the second point throughout this guide.
  • Collect the instance data at the starting point.

Figure 2: Collecting instance data at start point.

  • Resume program execution.

Figure 3: Stepping through the code.

  • Fetch the instance data at the second point. By looking at the Diff column, we can see the changes in the number of instances in real time.

      Figure 4: Checking for changes in the number of instances.

  5. Double-clicking a class in the Memory tab opens a dialog listing all live instances of that class. At this point, we can filter the list using conditions while exploring the contents of each object.

For instance, to produce a list of all empty String objects, double-click String on the Memory tab and enter the condition this.isEmpty() in the Condition field.


Figure 5: Viewing Instances.

  6. Now we have to keep track of these instances. In addition to viewing the number of instances, we can record the creation of every instance along with its location in the call stack. For this:
  • Right-click the class and select Track New Instances. From that moment on, the Memory tab stores all information on every instance of that class. If there are new instances, their number is displayed in parentheses in the Diff column. 

      Figure 6: Tracking Instances.

  • To explore the contents of each object, view the list of new instances, and see the stack trace, click the number in the Diff column. 

Note that for the Memory tab to track instances, it has to be kept open at all times. 

  7. The final step is to customize the view, and thus the behavior, of the Memory tab. A number of options are available for this; they can be toggled from the Memory View Settings icon.
  • Show with Instances Only displays only classes with live instances.
  • Show Non-Zero Diff Only shows only classes whose instance count changed between the starting and second points.
  • Show tracked classes only shows only classes whose new instances you are tracking. 
  • Enable Tracking with Hidden Memory View lets you track new instances of selected classes even when the Memory tab is hidden.
  • Update Loaded Classes on Debugger Stop auto-loads classes and collects data whenever the program is suspended.

Note that the last two options may add extra overhead to the debugged application and thus impact stepping performance.


A dedicated memory profiler such as FusionReactor automates the above process by taking individual snapshots of the heap, comparing the differences through its GC Root analysis features, and saving the snapshots to the local machine for immediate and future reference. Through this process, you can quickly and efficiently analyze Java memory usage and spot heap leaks. 
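To make the retention pattern described above concrete, here is a minimal, hypothetical Java sketch of the kind of leak that instance tracking reveals: a static collection that only ever grows keeps every object strongly reachable from a GC root, so the Count for that class rises between execution points and never falls. The SessionRegistry and Session names are purely illustrative, not from any real codebase.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical example of a leak: a static collection acting as a GC root
// that only grows, so the Count for Session in the Memory tab keeps rising
// between execution points and never falls.
public class SessionRegistry {

    static class Session {
        final byte[] payload = new byte[1024]; // simulated per-session state
    }

    // GC root: every Session added here stays strongly reachable forever
    private static final List<Session> SESSIONS = new ArrayList<>();

    public static void open() {
        SESSIONS.add(new Session());
    }

    public static int liveCount() {
        return SESSIONS.size();
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {
            open(); // each simulated request leaks one Session
        }
        System.out.println("Live Session instances: " + liveCount());
    }
}
```

Pausing the debugger after the loop and loading classes in the Memory tab would show Session's Count at 1000, while the Diff column between two pauses would show how many new instances leaked in between.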

FusionAnalytics End Of Life (EOL)

Since 2011, FusionAnalytics (FA) has been delivering incredible insight into the metrics provided by FusionReactor. FusionAnalytics is built using a variety of technologies from Adobe Inc., the main one being Adobe Flash. Adobe has announced that it will end-of-life Flash at the end of 2020; specifically, it will stop updating and distributing the Flash Player at that point. In light of this, we have decided to halt further development of FusionAnalytics, and official support will end in December 2020.

We have some very good news though: in 2017 we launched FusionReactor Cloud as the SaaS replacement platform for FusionAnalytics. FusionReactor Cloud offers many of the core features of FA, plus many new and exciting capabilities. And because FusionReactor Cloud includes the on-premise FusionReactor license, it delivers great savings in terms of license cost and ROI. Customers who are currently using FusionAnalytics are invited to evaluate FusionReactor Cloud with our free trial account. Getting started is really simple, as FusionReactor Cloud uses the exact same agent as FusionReactor, so all you need is a Cloud account and you're ready to go.

Should you have restrictions on using a SaaS application and prefer to continue with the on-premise version of FusionAnalytics, we will still offer the product for purchase and maintain the license activation mechanism; however, further development and product support will no longer be available.

If you have any questions, please don’t hesitate to get in touch with our sales team – sales@fusion-reactor.com

Finding and fixing Spring Data JPA performance issues with FusionReactor


For several years, Spring Data JPA has established itself as one of the most commonly used persistence frameworks in the Java world. It gets most of its features from the very popular Hibernate object-relational mapping (ORM) implementation. The ORM features provide great developer productivity, and the basic functionality is very easy to learn. 

But as is so often the case, you need to know a lot more than the basics if you want to build enterprise applications. Without a good understanding of its internals and some advanced features, you will struggle with severe performance issues. Spring Data's and Hibernate's ease of use sometimes makes it way too easy to build a slow application.

But that doesn’t have to be the case. With the right tools in place, you can identify performance problems easily and often even before they cause trouble in production. In this article, I will show you 3 of Hibernate’s and Spring Data’s most common performance pitfalls, how you can find them using FusionReactor’s Java Monitoring or Hibernate’s statistics, and how you can fix them.

Pitfall 1: Lazy loading causes lots of unexpected queries

When you learn about Spring Data JPA and Hibernate performance optimizations, you are always told to use FetchType.LAZY for all of your associations. This tells Hibernate to load the associated entities only when you access the association. That's, of course, a much better approach than using FetchType.EAGER, which always fetches all associated entities, even if you don't use them.
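As a reminder of what such a mapping looks like, here is a sketch of how the Concert-to-Band association might be declared. The field and getter names are assumptions based on the snippets in this article, not the article's actual entity classes, and the sketch requires a JPA provider on the classpath:

```java
@Entity
public class Concert {

    @Id
    @GeneratedValue
    private Long id;

    private String name;

    // Loaded only when getBand() is accessed; with a List of Concerts,
    // that access is what triggers one extra SELECT per Concert
    @ManyToOne(fetch = FetchType.LAZY)
    private Band band;

    public String getName() { return name; }
    public Band getBand() { return band; }
}
```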

Unfortunately, FetchType.LAZY introduces its own performance issue if you use a lazily fetched association. Hibernate then needs to execute an SQL query to get the associated entities from the database. This becomes an issue if you work with a list of entities, as I do in the following code snippet. 

@RequestMapping(path = "/concert")
public class ConcertController {

    Logger logger = Logger.getLogger(ConcertController.class.getSimpleName());

    private ConcertRepository concertRepo;

    public List<Concert> getConcerts() {
        List<Concert> concerts = this.concertRepo.findAll();

        for (Concert c : concerts) {
            logger.info("Concert " + c.getName() + " gets played by " + c.getBand().getName());
        }

        return concerts;
    }
}
The findAll method of Spring Data's JpaRepository executes a simple JPQL query that gets all Concert entities from the database and returns them as a List. Each of these concerts is played by a band. If you set the FetchType of that association to FetchType.LAZY, Hibernate executes an SQL query to fetch the Band when you call the getter method on the Concert entity. If you do that for each Concert entity in the List, Hibernate executes one SQL query per Band that plays a concert. Depending on the size of that List, this can cause serious performance problems.

Find unexpected queries

This issue is relatively hard to find in your code. But it gets pretty easy if you monitor the queries executed by your application. 

Using FusionReactor, you can easily see all the SQL statements that were executed by the getConcerts method. Based on the code, you would probably expect Hibernate to perform only 1 SELECT statement. But as you can see in the screenshot, Hibernate executed 10 SELECT statements because it had to get the associated Band entity for each Concert.

Or you can activate Hibernate’s statistics component and the logging of SQL statements. Hibernate then writes a log message at the end of each session, which includes the number of executed JDBC statements and the overall time spent on these operations.
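If the application runs on Spring Boot, the statistics component and SQL logging that produce output like the log excerpt in this section can be switched on via configuration. The property names below assume Spring Boot's standard Hibernate integration:

```properties
# Enable Hibernate's statistics component (the Session Metrics summary
# logged at the end of each session)
spring.jpa.properties.hibernate.generate_statistics=true

# Log every SQL statement Hibernate executes
logging.level.org.hibernate.SQL=DEBUG
```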

2020-04-11 15:28:04.293 DEBUG 23692 --- [nio-7070-exec-1] org.hibernate.SQL                        : select concert0_.id as id1_2_, concert0_.band_id as band_id6_2_, concert0_.event_date_time as event_da2_2_, concert0_.name as name3_2_, concert0_.price as price4_2_, concert0_.version as version5_2_ from concert concert0_
2020-04-11 15:28:04.338 DEBUG 23692 --- [nio-7070-exec-1] org.hibernate.SQL                        : select band0_.id as id1_1_0_, band0_.description as descript2_1_0_, band0_.founding_date as founding3_1_0_, band0_.name as name4_1_0_, band0_.version as version5_1_0_ from band band0_ where band0_.id=?
2020-04-11 15:28:04.354  INFO 23692 --- [nio-7070-exec-1] ConcertController                        : Concert concert 1 gets played by band 1
2020-04-11 15:28:04.355 DEBUG 23692 --- [nio-7070-exec-1] org.hibernate.SQL                        : select band0_.id as id1_1_0_, band0_.description as descript2_1_0_, band0_.founding_date as founding3_1_0_, band0_.name as name4_1_0_, band0_.version as version5_1_0_ from band band0_ where band0_.id=?
2020-04-11 15:28:04.358  INFO 23692 --- [nio-7070-exec-1] ConcertController                        : Concert concert 2 gets played by band 2
2020-04-11 15:28:04.358 DEBUG 23692 --- [nio-7070-exec-1] org.hibernate.SQL                        : select band0_.id as id1_1_0_, band0_.description as descript2_1_0_, band0_.founding_date as founding3_1_0_, band0_.name as name4_1_0_, band0_.version as version5_1_0_ from band band0_ where band0_.id=?
2020-04-11 15:28:04.361  INFO 23692 --- [nio-7070-exec-1] ConcertController                        : Concert concert 3 gets played by band 3
2020-04-11 15:28:04.362 DEBUG 23692 --- [nio-7070-exec-1] org.hibernate.SQL                        : select band0_.id as id1_1_0_, band0_.description as descript2_1_0_, band0_.founding_date as founding3_1_0_, band0_.name as name4_1_0_, band0_.version as version5_1_0_ from band band0_ where band0_.id=?
2020-04-11 15:28:04.364  INFO 23692 --- [nio-7070-exec-1] ConcertController                        : Concert concert 4 gets played by band 4
2020-04-11 15:28:04.364 DEBUG 23692 --- [nio-7070-exec-1] org.hibernate.SQL                        : select band0_.id as id1_1_0_, band0_.description as descript2_1_0_, band0_.founding_date as founding3_1_0_, band0_.name as name4_1_0_, band0_.version as version5_1_0_ from band band0_ where band0_.id=?
2020-04-11 15:28:04.367  INFO 23692 --- [nio-7070-exec-1] ConcertController                        : Concert concert 5 gets played by band 5
2020-04-11 15:28:04.367 DEBUG 23692 --- [nio-7070-exec-1] org.hibernate.SQL                        : select band0_.id as id1_1_0_, band0_.description as descript2_1_0_, band0_.founding_date as founding3_1_0_, band0_.name as name4_1_0_, band0_.version as version5_1_0_ from band band0_ where band0_.id=?
2020-04-11 15:28:04.369  INFO 23692 --- [nio-7070-exec-1] ConcertController                        : Concert concert 6 gets played by band 6
2020-04-11 15:28:04.370 DEBUG 23692 --- [nio-7070-exec-1] org.hibernate.SQL                        : select band0_.id as id1_1_0_, band0_.description as descript2_1_0_, band0_.founding_date as founding3_1_0_, band0_.name as name4_1_0_, band0_.version as version5_1_0_ from band band0_ where band0_.id=?
2020-04-11 15:28:04.372  INFO 23692 --- [nio-7070-exec-1] ConcertController                        : Concert concert 7 gets played by band 7
2020-04-11 15:28:04.372 DEBUG 23692 --- [nio-7070-exec-1] org.hibernate.SQL                        : select band0_.id as id1_1_0_, band0_.description as descript2_1_0_, band0_.founding_date as founding3_1_0_, band0_.name as name4_1_0_, band0_.version as version5_1_0_ from band band0_ where band0_.id=?
2020-04-11 15:28:04.375  INFO 23692 --- [nio-7070-exec-1] ConcertController                        : Concert concert 8 gets played by band 8
2020-04-11 15:28:04.376 DEBUG 23692 --- [nio-7070-exec-1] org.hibernate.SQL                        : select band0_.id as id1_1_0_, band0_.description as descript2_1_0_, band0_.founding_date as founding3_1_0_, band0_.name as name4_1_0_, band0_.version as version5_1_0_ from band band0_ where band0_.id=?
2020-04-11 15:28:04.378  INFO 23692 --- [nio-7070-exec-1] ConcertController                        : Concert concert 9 gets played by band 9
2020-04-11 15:28:04.379  INFO 23692 --- [nio-7070-exec-1] ConcertController                        : Concert concert 10 gets played by band 1
2020-04-11 15:28:04.379  INFO 23692 --- [nio-7070-exec-1] ConcertController                        : Concert concert 11 gets played by band 2
2020-04-11 15:28:04.379  INFO 23692 --- [nio-7070-exec-1] ConcertController                        : Concert concert 12 gets played by band 3
2020-04-11 15:28:04.379  INFO 23692 --- [nio-7070-exec-1] ConcertController                        : Concert concert 13 gets played by band 4
2020-04-11 15:28:04.379  INFO 23692 --- [nio-7070-exec-1] ConcertController                        : Concert concert 14 gets played by band 5
2020-04-11 15:28:04.379  INFO 23692 --- [nio-7070-exec-1] ConcertController                        : Concert concert 15 gets played by band 6
2020-04-11 15:28:04.379  INFO 23692 --- [nio-7070-exec-1] ConcertController                        : Concert concert 16 gets played by band 7
2020-04-11 15:28:04.379  INFO 23692 --- [nio-7070-exec-1] ConcertController                        : Concert concert 17 gets played by band 8
2020-04-11 15:28:04.379  INFO 23692 --- [nio-7070-exec-1] ConcertController                        : Concert concert 18 gets played by band 9
2020-04-11 15:28:04.379  INFO 23692 --- [nio-7070-exec-1] ConcertController                        : Concert concert 19 gets played by band 1
2020-04-11 15:28:04.379  INFO 23692 --- [nio-7070-exec-1] ConcertController                        : Concert concert 20 gets played by band 2
2020-04-11 15:28:04.463  INFO 23692 --- [nio-7070-exec-1] i.StatisticalLoggingSessionEventListener : Session Metrics {
    4769200 nanoseconds spent acquiring 1 JDBC connections;
    0 nanoseconds spent releasing 0 JDBC connections;
    2552200 nanoseconds spent preparing 10 JDBC statements;
    24755400 nanoseconds spent executing 10 JDBC statements;
    0 nanoseconds spent executing 0 JDBC batches;
    0 nanoseconds spent performing 0 L2C puts;
    0 nanoseconds spent performing 0 L2C hits;
    0 nanoseconds spent performing 0 L2C misses;
    0 nanoseconds spent executing 0 flushes (flushing a total of 0 entities and 0 collections);
    495900 nanoseconds spent executing 1 partial-flushes (flushing a total of 0 entities and 0 collections)
}

Avoid additional queries

You can avoid this issue by using a JOIN FETCH clause that tells Hibernate to fetch the Concert and associated Band entities within the same query. You can do that by adding a method to your repository interface and defining a custom query using the @Query annotation.

public interface ConcertRepository extends JpaRepository<Concert, Long> {

	@Query("SELECT c FROM Concert c LEFT JOIN FETCH c.band")
	List<Concert> getConcertsWithBand();
}

@RequestMapping(path = "/concert")
public class ConcertController {

    Logger logger = Logger.getLogger(ConcertController.class.getSimpleName());

    private ConcertRepository concertRepo;

    public List<Concert> getConcerts() {
        List<Concert> concerts = this.concertRepo.getConcertsWithBand();

        for (Concert c : concerts) {
            logger.info("Concert " + c.getName() + " gets played by " + c.getBand().getName());
        }

        return concerts;
    }
}

Instead of 10 queries, Hibernate now gets all information with only 1 query.
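If you would rather keep using the derived findAll method instead of writing JPQL, Spring Data JPA's @EntityGraph annotation achieves the same single-query fetch declaratively. A brief sketch, assuming the Concert entity maps its band association under that name:

```java
public interface ConcertRepository extends JpaRepository<Concert, Long> {

    // Tells Hibernate to fetch the band association in the same query,
    // equivalent to the JOIN FETCH approach shown above
    @Override
    @EntityGraph(attributePaths = "band")
    List<Concert> findAll();
}
```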

Pitfall 2: Slow database queries

Slow queries are a common issue in all applications that store their data in a relational database. That’s why all databases provide an extensive set of tools to analyze and improve these queries.

Even though we can’t blame Spring Data JPA or Hibernate for these issues, we still need to find and fix these queries in our application. And that’s often not as easy as it might seem. Hibernate generates the executed SQL statements based on our JPQL queries. In general, the executed queries are efficient. But sometimes, the additional abstraction of JPQL hides performance problems that would be obvious, if we would write the SQL query ourselves.

The following JPQL query, for example, looks totally fine. We’re loading Concert entities and use multiple JOIN FETCH clauses.

public interface ConcertRepository extends JpaRepository<Concert, Long> {

    @Query("SELECT c FROM Concert c LEFT JOIN FETCH c.band b LEFT JOIN FETCH b.artists WHERE b.name = :band")
    Concert getConcertOfBand(@Param("band") String band);
}

Find inefficient queries

The problem becomes obvious if you activate the logging of SQL statements in Hibernate or take a look at the executed JDBC statements in FusionReactor.

Hibernate has to select all columns mapped by an entity if you reference it in your SELECT clause or if you tell Hibernate to JOIN FETCH an association. In this case, the JPQL query that referenced 3 entities caused an SQL statement that selects 22 columns. That is a lot more columns than you might expect from looking at the JPQL query, and it gets worse if your entities map more columns or you JOIN FETCH more associations. 

The JOIN FETCH clause creates another issue: The result set contains the product of all joined records. Due to that, such result sets often contain thousands of records.

Improve inefficient queries

The only way to fix this performance problem is to avoid these kinds of queries. You could use a smaller, use-case-specific projection. Or you could split your query into multiple ones, e.g., one that fetches the Band entity with a JOIN FETCH clause for the artists attribute and another query for the Concert entities.
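As an illustration of the first option, a use-case-specific projection can be a plain Spring Data interface projection. The ConcertSummary interface and the derived query method below are hypothetical, with getter names assumed to match the Concert entity's attributes:

```java
// Hypothetical read-only view of a Concert: Spring Data selects only
// the columns backing these two getters instead of all mapped columns
public interface ConcertSummary {
    String getName();
    LocalDateTime getEventDateTime();
}

public interface ConcertRepository extends JpaRepository<Concert, Long> {
    // Derived query returning the projection instead of full entities
    List<ConcertSummary> findByBandName(String band);
}
```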

@RequestMapping(path = "/concert")
public class ConcertController {

    Logger logger = Logger.getLogger(ConcertController.class.getSimpleName());

    private ConcertRepository concertRepo;
    private BandRepository bandRepo;

    public ConcertController(ConcertRepository concertRepo, BandRepository bandRepo) {
        this.concertRepo = concertRepo;
        this.bandRepo = bandRepo;
    }

    @GetMapping(path = "/name/{name}")
    public List<Concert> getConcertOfBand(@PathVariable("name") String name) {
        Band b = this.bandRepo.getBandWithArtists(name);
        List<Concert> concerts = this.concertRepo.getConcertsOfBand(name);
        if (concerts.isEmpty()) {
            throw new NoResultException();
        }
        return concerts;
    }
}

Pitfall 3: Too many write operations

Another common performance pitfall is the inefficient handling of write operations for multiple entities. 

Let’s say you need to reschedule all concerts that were planned for the month of April. Using Java and Hibernate as your ORM framework, it feels natural to get a Concert entity object for each of these concerts and to change the eventDateTime attribute.

List<Concert> concerts = this.concertRepo.getConcertsScheduledFor(LocalDateTime.of(2020, 4, 1, 0, 0), LocalDateTime.of(2020, 4, 30, 23, 59));

for (Concert c : concerts) {
    c.setEventDateTime(c.getEventDateTime().plusMonths(1));
}

Find inefficient write operations

But that would force Hibernate to execute an SQL UPDATE statement for each concert. As with the previous performance issues, this inefficiency only becomes visible if you monitor the executed SQL statements.

Reduce the number of write operations

In SQL, you would write one SQL UPDATE statement that changes the value in the event_date_time column of all concerts that are scheduled for the month of April. That’s obviously the more efficient approach.

You can do the same with a native query in Hibernate. But before you do that, you should always call the flush() and clear() methods on your EntityManager. That ensures that your first-level cache doesn't contain a stale local copy of the data that your query will change.

em.flush();
em.clear();
em.createNativeQuery("UPDATE concert SET event_date_time = event_date_time + INTERVAL '1 month'").executeUpdate();
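Alternatively, the bulk update can live in the repository itself. Spring Data's @Modifying annotation with clearAutomatically = true then clears the persistence context for you after execution. Because JPQL has no portable interval arithmetic (which is why the text above uses a native query), this sketch sets an explicit new date instead; the method name and parameters are illustrative:

```java
public interface ConcertRepository extends JpaRepository<Concert, Long> {

    // clearAutomatically = true detaches stale entities after the bulk update
    @Modifying(clearAutomatically = true)
    @Query("UPDATE Concert c SET c.eventDateTime = :newDate " +
           "WHERE c.eventDateTime BETWEEN :from AND :to")
    int rescheduleConcerts(@Param("from") LocalDateTime from,
                           @Param("to") LocalDateTime to,
                           @Param("newDate") LocalDateTime newDate);
}
```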

Conclusion – Finding and fixing Spring Data JPA performance issues with FusionReactor

As you have seen, Hibernate is easy to use, but it can also cause some unexpected performance problems. These are often hard to find in your code but very easy to see as soon as you monitor the executed SQL statements. With the right logging configuration, you can find these statements in your application log file. Or you can use FusionReactor's Database Monitoring features and integrate these checks into your application monitoring strategy.