FusionReactor Blog - News and expert analysis for Java APM users.

Configuring FusionReactor in CommandBox

CommandBox is a tool that allows you to deploy your CFML applications through an easy-to-use command-line interface. 

Instead of deploying a Tomcat-based installer version of ColdFusion or Lucee, CommandBox uses the Undertow servlet container and deploys a WAR file for the CFML server. This allows you to switch between a Lucee and a ColdFusion server with the same application and configuration. 

In terms of configuration, rather than having a multitude of small files, you can control everything from a single JSON file containing all settings for the Undertow server, the application server, and any installed modules.

Commandbox-fusionreactor module

To install FusionReactor in CommandBox, we recommend using the commandbox-fusionreactor module, which is designed and maintained by Ortus (makers of CommandBox).

The module is stored in ForgeBox, along with the FusionReactor package that ensures your FusionReactor instance is the latest version. This makes installation simple, as you can run a single command to load the module:

box install commandbox-fusionreactor

Licensing FusionReactor

With the commandbox-fusionreactor module installed, you have access to the fr command namespace.

You can run commands such as 'fr open' to open FusionReactor in the browser.

To make licensing FusionReactor simple, you can run 'fr register "myLicenseKey"', which automatically applies your license key to each running instance.

Passing in configuration

Any modified settings in FusionReactor are stored in the reactor.conf file of each FusionReactor instance. With CommandBox you can set this reactor.conf file to be passed into each running instance by running:

server set fusionreactor.reactorconfFile=path/reactor.conf

There are also several values you can set for FusionReactor directly through the server set command, see the full list here: https://commandbox.ortusbooks.com/embedded-server/fusionreactor#additional-jvm-args

Setting a fixed Application name

By default, FusionReactor automatically detects the name of the running application and applies it to transactions.

If you would like to disable this and set a fixed name instead, you can do so by running:

server set fusionreactor.autoApplicationNaming=false
server set fusionreactor.defaultApplicationName=myApp

Setting the Instance name

The instance name of FusionReactor will either be set to the name of the directory you are running box from, or to the name of the CommandBox server.

For example, if I have no server name set and run CommandBox from a folder called test, my instance is called test.

You can override this value via the server name, which is a value defined in the server.json config file. You can set this value by running:

//Within CommandBox
server set name="myName"

//Outside CommandBox using environment variables
box server set name="$var1+$var2+myName"

Removing FusionReactor

When removing the FusionReactor module, it is important to ensure that the --system flag is set on the uninstall command, i.e.:

box uninstall commandbox-fusionreactor --system

If the system flag is not specified, CommandBox will try to uninstall from the current package, not from the CommandBox system packages.

Running 'box restart' after performing the uninstallation ensures that the module is not kept in memory and reloaded when a CommandBox server is restarted.

Running in Linux

When running on a Linux desktop, we have seen that CommandBox can crash without warning. This is due to an issue with CommandBox's interaction with the system tray.

If you are running Ubuntu 18.04 or greater, you will need to install the libappindicator-dev package to allow CommandBox to use the system tray.

Alternatively, you can disable the CommandBox system tray element. To do this, run the following commands:

server set trayEnable=false
config set server.defaults.trayEnable=false

What’s new in FusionReactor 8.3.0

Improved alerting, Event Snapshots, improved Cloud UI and more

FusionReactor 8.3 has new CPU alerts in Crash Protection. We have redeveloped Event Snapshots for ColdFusion users, which means they no longer cause server issues. FusionReactor Cloud now lets you choose your theme and gives you better warning notifications, along with a host of other smaller improvements and bug fixes.

FusionReactor 8.3 is available for download now!

CPU alerting in Crash Protection

This feature has been requested a lot. If you have a long-running request, large GC issues or problems with background tasks or threads then FusionReactor 8.3 Crash Protection can send you an alert.

Alert configuration: threshold set to 50% and the duration set to 10 seconds

It is easy to use and works in a similar way to Memory Protection. You simply set the required threshold, the minimum duration and your alerting strategy.

When the alert threshold is hit, you have 3 available alerting strategies:

  • Send an email containing the details of the stack, running requests and system metrics to your inbox
  • Queue requests entering the application server until the CPU is below the threshold
  • Reject requests entering the application server until the CPU is below the threshold
In your inbox you receive an alert email which gives you all of the relevant details and links you back into FusionReactor

Event Snapshot for ColdFusion in FusionReactor 8.3

Every error now has an Event Snapshot. Historically, if you were a ColdFusion user, this feature might have had a negative effect on your server; this was down to ColdFusion error handling.

We have now redeveloped this feature so that it no longer impacts your server, and have enabled Event Snapshots by default.

All errors now have an Event Snapshot

Any recurring errors tracked by FusionReactor will now automatically generate an Event Snapshot. In the snapshot you will see the exception, source code, stack frames and variables.

Note that for snapshots generated for Adobe ColdFusion servers, variables will only be available for the top stack frame.

Click the “Event Snapshot” link and you will see the variables, stack trace, log messages and the exact line of source code where the error occurred

Improvements to the Cloud UI

We have made a number of smaller improvements to FusionReactor Cloud that make a large difference to its usability.


Everyone has a preference for either a light theme or a dark theme, so Cloud will now let you choose. Simply go into account settings, or use the keyboard shortcut Q, and you can decide on a light or a dark theme. Cloud trials are free, even if you are an existing customer.

The future's bright, or dark – you choose

Better warning messages

We have improved our messaging for offline servers and for when data is not available. You will now be prompted to change the timeframe or adjust your filters to find the relevant data.

You now get a clear warning when you try to look at data from before you connected your server

Other Improvements

  • Brand new documentation to be released soon
  • Middle click support in local FR UI
  • Support for upcoming CommandBox 4.9
  • Fix for ColdBox tracking
  • Support for ARM64 / AArch64 architectures
  • Support for WildFly 14 – 19
  • Tracking for RMI calls in applications (Java 1.7 – 1.9)
  • Kubernetes detection

Want to know more about FusionReactor 8.3 or have any questions?

Our support team is holding a Live Stream Q&A tomorrow at 11 am PST.

What’s new in FusionReactor 8.3.0 – Live Stream Support

On February 11th at 7PM UTC (11AM PST; see your local time), we are running our first live demo and Q&A session on the FusionReactor YouTube channel, covering what’s new in FusionReactor 8.3.0, which is releasing very soon.

This session will be hosted by Michael Flewitt, a technical support engineer for FusionReactor and FusionReactor Cloud, and will feature Charlie Arehart, a ColdFusion consultant and all-around expert, as a guest.

Our intention is to host regular demo and Q&A sessions covering everything related to the FusionReactor product, CFML/Java development and other exciting projects we are working on at Intergral.

During these sessions, you will have the opportunity to ask us any questions you may have and get direct advice from our support team.

Our first session will cover what’s new in FusionReactor 8.3.0! 

We have been working hard to give you powerful new features and an improved user experience with FusionReactor, including:

  • Enhanced Crash Protection
  • Superior error detection in ColdFusion
  • Interactive and intuitive self-service support
  • Significant user experience improvements in the Cloud
  • A load more cool features

Subscribe to the FusionReactor YouTube channel and set a reminder here!

The demo should take around 30 minutes, at which point we will have time to answer all your questions.

We look forward to chatting with you soon!

Understanding StackTraces in Java

By guest author Thorben Janssen

The StackTrace is one of the key concepts in Java. It’s a call stack for the thread and lists all method calls since the start of the thread. You have probably seen its textual representation in your log file or console output. It gets printed to System.err whenever an exception is thrown and not handled by your application. The following snippet shows a typical example of such an output.

java.lang.NumberFormatException: For input string: "123a45"
	at java.base/java.lang.NumberFormatException.forInputString(NumberFormatException.java:68)
	at java.base/java.lang.Long.parseLong(Long.java:699)
	at java.base/java.lang.Long.valueOf(Long.java:1151)
	at org.thoughts.on.java.TestStackTrace.testStackTrace(TestStackTrace.java:17)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:567)

Using monitoring tools, like FusionReactor, or by calling the getStackTrace() method on the current thread, you can access the StackTrace for all active threads in your JVM. But there are other ways to examine and work with a StackTrace. 
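If you just want a quick look at every thread's stack from the command line, the JDK's own tooling can do it too. A minimal sketch (jps and jstack ship with the JDK; the PID below is a placeholder):

```shell
# List running JVMs and their process IDs
jps -l

# Dump the stack traces of all threads in the JVM with PID 12345
# (12345 is a placeholder; use a PID reported by jps)
jstack 12345
```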

In most cases, you will not look at a StackTrace until you need to analyze an Exception. The StackTrace is part of an Exception object, and it shows all method calls that happened until the exception was thrown. That shows you where the exception occurred and how you reached that specific part of the code. 

In the next step, you can then analyze your code and find out what caused the exception. But that’s a topic for a different article. In this one, I want to tell you more about exceptions with their StackTraces and all the information they provide, so that you have a better understanding of StackTraces in Java.

Exceptions in Java

An exception gets thrown whenever an error happens within a Java application. It gets represented by an object of the java.lang.Exception class or one of its subclasses. The JDK provides you with a huge set of different Exception classes. If you want, you can also implement your own business exceptions.

It’s a general best practice to use the most specific exception class for each error. A typical example for that is the valueOf method of the java.lang.Long class. You can call it with a java.lang.String and it throws a java.lang.NumberFormatException if the String has a format that can’t be parsed to a Long. The NumberFormatException is a subclass of the IllegalArgumentException, which indicates that an invalid argument value was passed to a method. As you can see, the IllegalArgumentException would describe the error situation, but the NumberFormatException is more specific and should be preferred.

private Long parseToLong(String s) {
	return Long.valueOf(s);
}

Long l;
try {
	l = parseToLong(s);
} catch (NumberFormatException nfe) {
	// handle NumberFormatException
	log.error("Provided value was invalid. Using 0 as default.", nfe);
	l = 0L;
}
Using the most specific exception class makes your code easier to read, and it enables you to implement a different catch clause for each exception class. This allows you to handle each error situation differently.

You could, for example, decide to throw a NullPointerException if the provided String is null and throw a NumberFormatException if it doesn’t have the correct format.

private Long parseToLong(String s) {
	if (s == null) {
		throw new NullPointerException("String can't be null");
	}
	return Long.valueOf(s);
}
In the code that calls this method, you can then implement two separate catch blocks that handle the NullPointerException and the NumberFormatException in different ways. I did that in the following code snippet to provide different error messages for both situations. But you could, of course, use the same approach to implement more complex error handling or to provide a fallback to default values.

Long l;
try {
	l = parseToLong(s);
} catch (NullPointerException npe) {
	// handle NullPointerException
	log.error("No value provided. Using 0 as default.", npe);
	l = 0L;
} catch (NumberFormatException nfe) {
	// handle NumberFormatException
	log.error("Provided value was invalid. Using 0 as default.", nfe);
	l = 0L;
}

The structure of a StackTrace

In the previous code snippet, I wrote log messages that contained the caught exception objects. The following snippet shows an example of such a message in the log file. Your application writes a similar message for all unhandled exceptions to your console.

15:28:34,694  ERROR TestStackTrace:26 - Provided value was invalid. Using 0 as default.
java.lang.NumberFormatException: For input string: "123a45"
	at java.base/java.lang.NumberFormatException.forInputString(NumberFormatException.java:68)
	at java.base/java.lang.Long.parseLong(Long.java:699)
	at java.base/java.lang.Long.valueOf(Long.java:1151)
	at org.thoughts.on.java.TestStackTrace.parseToLong(TestStackTrace.java:39)
	at org.thoughts.on.java.TestStackTrace.testStackTrace(TestStackTrace.java:18)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:567)

As you can see, the log message contains a long list of class and method names. This is the textual representation of the StackTrace of the exception. Whenever a new method gets called, it gets added to the top of the stack, and after it got executed, it gets removed from the stack. Based on this approach, the last method that got called before the exception occurred is at the top of the StackTrace and logged first. The following elements in the StackTrace and lines in the log file show which methods were previously called to reach the part of the code that caused the exception.

Using StackTraces to analyze incidents

If you take another look at the previously shown StackTrace, you can see that the exception was created in the forInputString method of the NumberFormatException class. The actual problem occurred in the parseLong method of the Long class, which was called by the valueOf method of the same class, which was called by the parseToLong method of my TestStackTrace class. As you can see, the StackTrace provided by the NumberFormatException clearly shows where the exception happened and which chain of method calls led to it.

This information is a good start to analyze an exception and to find the actual cause of it. But quite often, you will need more information to understand the issue. The Exception object and its StackTrace only describe which kind of error occurred and where it happened. But they don’t provide you with any additional information, like the values of certain variables. This can make it very hard to reproduce the error in a test case.

How FusionReactor can help

FusionReactor can provide you with more information about the error situation. If you want, you can even debug the issue on your live system the next time it occurs.

FusionReactor Debugger error

The only thing you need to do is to log into the web interface of the FusionReactor instance that monitors your application, select the exception from the “Error History” and go to the “Error Details” tab. There you can activate the debugger for this specific exception.

After you’ve done that, FusionReactor will send you an email when the exception occurs again and pause the thread for a configured amount of time. The email contains the stack trace and information about the variable context. As long as the thread is paused, you can also use FusionReactor’s Production Debugger to debug the error in a similar way as you would in your IDE without affecting any of the other threads of your application.

FusionReactor Debugger

Our move from Confluence to mkdocs

For many years the FusionReactor product documentation has run on a Confluence server. We maintained our own server for many years and currently use the cloud version, but it's never really been ticking all the boxes for our product documentation.

For each major release of FusionReactor we have a separate space on the Confluence cloud server. This allows users to see the documentation for the version they are running (excluding minor/bug-fix releases) and ensures the screenshots are accurate.

The problem with having many copies of the documentation, for different versions of the product, is that users don't realize that they are reading documentation for the wrong version. This is not something which can be blamed on Confluence, but on how we have been managing this system in the past. Google returns the search results which are most popular, and as our 6.2.8 release was the newest release for the longest time, there has been a trend of people finding and using the docs from 6.2.8. Now it's difficult for new releases of the documentation to be returned higher in Google's search results.

This can be shown by googling for
site:docs.fusion-reactor.com crash protection
you get the following:

The image above shows that the actual FusionReactor Crash Protection page doesn't appear, and the first FR 8 crash-protection-related page is the 7th search result; even then it is an 8.0.0 hit, not 8.2.0 (results will vary from region to region). When Google returns so many old versions before the newest FR version, it causes customer confusion and impacts our support team.

Another major issue we had when updating documentation for a new release was that we found it very difficult to update images. We had no idea whether there was a screenshot of some UI component on a specific page, so we had to check every page we could find in order to update screenshots after a UI change.

We have been using Markdown and mkdocs for some time in other areas of documentation (like the FR Cloud docs), so we knew that this worked, but moving from Confluence was not going to be automatic and we had not done this before.

First we needed to get the space content out of Confluence. You can do this by going to the Confluence space and selecting “Space Settings”. Then under “Content Tools” there is an “Export” tab, which shows:

Select “HTML” then “Normal Export” to get an HTML file per space page. Once you press “Export”, the task will run and give you a zip file of the content to download.

We then converted all the HTML files to Markdown:

for i in *.html ; do echo "$i" && pandoc -f html -t markdown_strict -s $i -o $i.md ; done

We then used rename and mmv to rename the files from names like Weekly-Report_245548143.html.md to Weekly-Report.md, using the following commands:

rename 's/.html.md$/.md/' *
mmv '*_[0-9]*\.md' '#1\.md'

The following image shows how the HTML files were converted to Markdown files. We can now delete the HTML files if we want.

The renaming of files on disk also needs to be reflected in the Markdown content. This was relatively simple using replace-all with the same regex as we used with the mmv and rename commands.
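As a sketch, that in-content link fix can also be done with sed; the pattern below is an assumption about how pandoc emitted the links, so adjust it to match your export:

```shell
# Rewrite link targets such as Weekly-Report_245548143.html to Weekly-Report.md
echo '[Weekly Report](Weekly-Report_245548143.html)' \
  | sed -E 's/_[0-9]+\.html/.md/g'
# prints: [Weekly Report](Weekly-Report.md)

# Applied across all converted files (GNU sed in-place edit):
#   sed -E -i 's/_[0-9]+\.html/.md/g' *.md
```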

We then set up mkdocs, followed this mkdocs-material setup guide and made some customisations to use the FusionReactor icons and social links.

We managed to quickly get the docs running inside mkdocs as shown above.

The next problem going forward, before mkdocs actually replaces the current docs, is to update and fix the documentation.

On our Confluence versions of the docs we had lots of copy-and-pasted content which was updated in one place and not another, and we had a lot of broken links and out-of-date images.

We will continue to repair and improve our mkdocs over the next few weeks and months and then change the DNS entry.

Benefits of mkdocs

Below is a list of the benefits of mkdocs (as we see it) compared to our Confluence documentation system.

  • Ability to run automated tools against the docs.
    • Allows checking of dead links.
    • Automate spell checking and readability tools.
    • Easy to find duplicate content and use markdown-include
  • Ability to find and update images in a simple way.

Signing, Notarizing and Stapling on macOS

The Gatekeeper system has protected macOS users against malicious software since its introduction in OS X 10.7.3 (Lion). This system assures users that software comes from a trusted source and doesn’t contain malicious content. But how does it work?


The Mac software ecosystem has historically been fairly untroubled by malicious viruses and software. This was due in part to the comparatively small user base and partly because the system — which is based on Unix — is naturally partitioned into users and groups, making it difficult for malicious code to obtain administrative privileges.

But the platform is increasing in popularity. Apple’s desktop operating system accounted for approximately 10% of desktop market share in 2019. The same kernel — Apple XNU, a modified Mach microkernel — is also used by the company’s mobile devices: the iPhone (iOS) and Apple Watch (watchOS). This increasing install base makes the platform an attractive target for malicious software.

Apple has two main strategies in place to protect its users. We’ll look at each stage of this protection regime in the next sections, but broadly they comprise:

  • Code Signing: this ensures that code comes from a known, trusted source, and hasn’t been altered since it was signed.
  • Notarization: the code is inspected by Apple to ensure it falls within its safety guidelines; the resulting receipt can be attached to the software permanently, in a process known as “stapling.”

Code Signing

Signing ensures the code belongs to us and can’t be changed after it’s signed. This is done by the codesign tool, which:

  • Creates a secure hash of the code itself (this hash will change if the code is tampered with after the fact)
  • Signs the hash and the code with our Developer Certificate. This puts our name and details on the code. Apple has checked our credentials and has also signed our Developer Certificate to say that we are a valid, trusted developer.
  • Stamps the resulting signature with the current time. This ensures that if our certificate expires, you can continue to use this software.

Here’s how we test-sign the FusionReactor macOS Native Library, which is used by the FusionReactor Production Debugger:

xcrun codesign --verbose --strict --keychain /Users/jhawksley/Library/Keychains/login.keychain -s CERT_ID_HERE --timestamp target/libfrjvmti_x64.dylib

Notarization and Stapling

The second stage is to have Apple actually check our code.

The Notarization command looks like this:

xcrun altool --notarize-app --username "our_apple_id@intergral.com" --password "our_password" --primary-bundle-id "com.intergral.bundleid" --file bundle.zip

Before we ship the library to Apple for Notarization, we have to sign it using codesign, and we have to zip it up to minimize the transfer size. The username and password are those of a valid Apple Developer Account.

Notarization is an automated service, which — while not providing usability or design feedback like App Review — does check that code is correctly signed and doesn’t contain malicious content.

The result of this is a Notarization Ticket, which is a piece of secure data that Apple sends back to us and also publishes online in the Gatekeeper Code Directory Catalog.

Some types of software — for instance the native library we showed in Code Signing above — don’t have space in their structure for a ticket, so they can’t be stapled. Other types of software, like the FusionReactor macOS Installer, do have space, and the Notarization Ticket obtained above can be stapled to them.

When you run our software on your machine, Gatekeeper automatically checks whether the software is valid. If there’s a network connection available, Gatekeeper uses the online Code Directory to look up the ticket and checks it against the software. Should no network be available, Gatekeeper uses the stapled ticket.

If a valid ticket is located, Gatekeeper knows that Apple has checked this software and that it meets their standards — and can run.

Why Bother?

Apple has been gradually tightening up the conditions under which unsigned software can run. In macOS Catalina, this is still possible (you have to turn Gatekeeper off using Terminal commands) although in the future even that may no longer be possible.

When you try to use or install unsigned content on macOS Catalina, you’ll see the following dialog (which can’t be bypassed), and the content is not opened.

When content has been correctly signed, Gatekeeper tells you where it came from and lets you decide whether to open it. Here’s what we see when we open our (signed, notarized, stapled) installer from our build server.

Trust, but Verify

If you want to check a signature, this is easy to do. Open the Terminal app, and use the codesign command to retrieve the certificate:

codesign -d --verbose=4 ~/Downloads/FusionReactor_macos_8_3_0-SNAPSHOT.dmg

This spits out the following (excerpted for clarity):

CodeDirectory v=20100 size=179 flags=0x0(none) hashes=1+2 location=embedded
CandidateCDHashFull sha256=f684fe6584f8249c3bfb60c188dd18c614adc29e6539490094947e1e09bbb6c8
Authority=Developer ID Application: Intergral Information Solutions GmbH (R3VQ6KXHEL)
Authority=Developer ID Certification Authority
Authority=Apple Root CA
Timestamp=Dec 5, 2019 at 14:52:15

The Identifier is the file name (minus extension) we signed, and the Identifier and CandidateCDHashFull values tell Gatekeeper how to perform an online lookup of our Notarization Ticket.

The three Authority lines are the chain of trust: they show that Apple has checked our details, trusted us as a developer, and issued us a certificate under our team identifier, R3VQ6KXHEL.

Finally, the Timestamp shows when we actually signed this software. If you try to use it in future, perhaps when our certificate has expired (hopefully there’ll be many new versions of FusionReactor before then!), the Timestamp assures Gatekeeper that the software was signed within the certificate validity period, and should be treated as if the certificate was still valid. Gatekeeper should then continue to open this software indefinitely.


It’s possible to automate the signing, notarization and stapling process, but it’s not exactly straightforward.

Apple’s development tool – the venerable Xcode – handles signing, notarization and stapling seamlessly as part of its user interface. Apple does provide command-line tools to perform these tasks (codesign, altool and stapler), but these all assume that the user running them is logged in.

The exact mechanics of automated signing of Apple binaries are rather beyond the scope of this article. However, we can give you some hints:

  • A Jenkins node running on a Mac Mini is used. Apple doesn’t allow virtualization of its hardware, and Docker on Mac is a Linux environment, so the node must be a bare-metal Apple environment. Mac Minis are excellent little machines for this.
  • The user under which the Jenkins node is running:
    • Must be logged in to the desktop. This creates an Aqua desktop session for the user — which is valid even if the Jenkins node is launched as that user using ssh. The Aqua session is required to use the code signing tools. The login can be done on the Mac itself, or using Apple Screen Sharing (which is a customized VNC) but the session should be ended by closing the window, not logging the user out.
    • Must have the Apple Developer ID code signing certificate, along with the corresponding private key, installed into its Login Keychain (the default keychain).
    • Must have the keychain unlocked prior to code signing using the command security unlock-keychain

Another wrinkle in the automation of this procedure is that the notarization command (xcrun altool --notarize-app) is asynchronous. Once the code is submitted (which itself can take a couple of minutes), the Apple Notarization Service returns some XML containing a RequestUUID. You then have to poll the Service using this UUID until it returns either success or failure. This can take up to 10 minutes, in our experience. If you don't parallelize this part of your build, it will impose a long delay.
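A minimal polling sketch is shown below. The RequestUUID, Apple ID and password are placeholders, and the awk parsing assumes a "Status: ..." line in altool's text output, which has varied between Xcode versions; treat this as an outline rather than a drop-in script.

```shell
# Poll the Apple Notarization Service until it reports a final status.
REQUEST_UUID="REQUEST_UUID_FROM_SUBMISSION"   # placeholder from the --notarize-app response
while true; do
  status=$(xcrun altool --notarization-info "$REQUEST_UUID" \
             --username "our_apple_id@intergral.com" \
             --password "our_password" 2>&1 \
           | awk '/^ *Status:/ {print $2}')
  case "$status" in
    success) echo "Notarization succeeded"; break ;;
    invalid) echo "Notarization failed" >&2; exit 1 ;;
    *)       sleep 30 ;;   # still in progress; wait and ask again
  esac
done
```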

Ephemeral Dockers as Tool Containers

Developing tools to automate what you do often is common sense. But what if you want to share your tools with other developers? As soon as you have something that relies on more than a simple script, you’re going to be faced with dependencies and distribution headaches. Docker can solve that.

“Anything you do more than twice should be automated” is a common mantra in the software industry. And it makes sense: doing something once, well that could be considered a one-off. Doing the same thing again? Coincidence. The third time you start the task, it’s time to bite the bullet and take the time to script it.

The resulting script might be a few lines of simple bash. Most developers have Bash installed, and it’s available on almost all platforms. So you’ve got a universal script that’ll run anywhere.

But your task might be more complicated than Bash can handle. Or you might be refining a script which is becoming too complicated to code in simple Bash functions. You decide to write some nice object-oriented Python, or Ruby, or Java, or… some other language. You build the runnable scripts and you put them into Git so everyone can access them.

Soon, the complaints start rolling in:

“I don’t have Python 3 – I have to stick with 2.7 because Fred’s widget-frobulator script requires it.”

“I can’t install that Ruby gem requirement because I have something else that requires an earlier version.”

“I’m not installing $ENVIRONMENT, I’ve got too much stuff installed already.”

all my colleagues, all the time

There must be an easier way to package tooling.

Enter Docker.


Here’s an example: geocoder-in-a-box, which takes a single argument and provides geographical information about it.

The script is very simple, but it has two very important requirements: it needs Ruby 2.6.5, and it needs a specific version of Alex Reisner’s Geocoder library. We will additionally need to pass command-line arguments to it. The resulting image should go to a repository so the team can find it.

I’ve put all the code into GitLab. For the sake of the example, I’m going to push to Docker Hub – but you (like us) probably have your own internal Docker repository too.


The bit that does the work:

#!/usr/bin/env ruby
require 'geocoder'

abort 'At least one argument must be supplied:  try a location ("Stuttgart") or an IP address ("")' unless ARGV.count > 0
result = Geocoder.search( ARGV[0] ).first
puts "  #{ARGV[0]}: #{result.city}, #{result.state}. Lat/Long: #{result.coordinates}"

You can run this from the command line (it’s a runnable Ruby script), but you’ll probably need to gem install geocoder -v 1.5.2 first. You can try it out:
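A session might look something like this (assuming Ruby 2.6.5 is installed and the machine has internet access; the exact output depends on the geocoding service’s response):

```shell
gem install geocoder -v 1.5.2
./rgeo.rb Stuttgart
```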

Docker Packaging

Next comes the Dockerfile. This tells Docker how to build an image containing the right version of Ruby, and get our dependency installed too. Here’s the code:

# Docker Tools Demo - rgeo - geolocate something!
# Creates a docker image containing a small geolocation tool, to illustrate
# packaging tools as ephemeral Docker containers.
# John Hawksley <john_hawksley@intergral.com>

FROM ruby:2.6.5-alpine
MAINTAINER John Hawksley <john_hawksley@intergral.com>

COPY ./rgeo.rb /rgeo.rb
RUN gem install geocoder -v 1.5.2

ENTRYPOINT ["/rgeo.rb"]

You can build an image by running this, in the same directory as the Dockerfile:

docker build -t rgeo .

The image will be built on the ruby:2.6.5-alpine base (the FROM directive). This is a compact Alpine Linux image into which Ruby 2.6.5 has been pre-installed.

The COPY directive simply copies our script into the image. This doesn’t have to be a script – it could be its own distributable unit, like a Gem, Egg or Jar. The material being copied must be at the same folder level as the Dockerfile or below, and there can be more than one COPY directive.

The RUN directive installs our dependency – Geocoder 1.5.2 – into the image. Again, multiple RUN directives can appear, and it’s not uncommon to see apk or apt package-management commands here. The main purpose of these commands is to build an environment with the right supporting packages and tooling, so your own code can run.
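For instance, if our script had depended on a gem with native extensions, the Dockerfile might have pulled in a compiler toolchain first (a hypothetical variation – the gem name is made up, but apk add --no-cache build-base is the standard Alpine way to get build tools):

```dockerfile
# Hypothetical: install a compiler toolchain before a gem with native extensions
RUN apk add --no-cache build-base
RUN gem install some-native-gem -v 1.0.0
```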

Finally, the magic: ENTRYPOINT. This tells Docker what to actually run, when we run the image. We copied the script into /rgeo.rb (the COPY directive above), and it’s runnable, so we can just run it.

If your tooling requires some special handling (environment variables, for instance, or specific actions pre- and post-run), you might want to COPY in a shell script which does the actual invocation of your tooling for you.

After the build completes, it should dump out the following lines:

Successfully built 77f1cb7cdf08
Successfully tagged rgeo:latest

Now we can try it out:
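Assuming the build above succeeded, running the image passes our argument straight through to the script (the output will vary with the geocoding service’s response):

```shell
docker run --rm rgeo:latest Stuttgart
```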

It’s working perfectly. We can create a nice alias for this, so that our colleagues don’t have to complain about long-winded Docker commands when they need to geolocate something. Aliases are typically added to developers’ shell startup scripts (.bashrc, .zshrc for example):

alias geo='docker run --rm rgeo:latest'

The ‘--rm‘ option tells Docker to remove the container once it has finished, rather than keeping it around. This makes the run ephemeral: nothing remains afterwards.

Now we can just use the alias, as if it were a command installed locally. Docker never appears:
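For example (illustrative – the responses come back from the geocoding service):

```shell
geo Stuttgart
geo "Los Angeles"
```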

Distribution via Docker Hub

To push images to Docker Hub, you’ll need a username and password (go ahead and sort that out, I’ll get a cuppa ☕️).

In your terminal session, log in to the hub using docker login. If everything goes well, you’ll see Login Succeeded.

Docker uses the tag infrastructure to differentiate local images from Hub images. This is done by prepending your Hub username to the image:

docker tag rgeo:latest jhawksleyintergral/rgeo:latest

Finally, push the image by referring to its tag:

docker push jhawksleyintergral/rgeo:latest

That’s it! Your colleagues can then use the full tag name in their aliases, and the image will be pulled from the Hub automatically:
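For example, a colleague might add this to their shell startup file – the same alias as before, but pointing at the full Hub tag:

```shell
# Alias using the full Docker Hub tag; the image is pulled on first use.
alias geo='docker run --rm jhawksleyintergral/rgeo:latest'
```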

Subsequent calls don’t pull the image again – naturally. They use the cached version.


Docker makes your life easier by providing a simple packaging methodology for your code.

Your colleagues will love you (don’t they already?) because they gain access to awesome (well, you wrote it, so that’s a given) tooling without having to set up a complicated environment.

If you want to know more about Docker, there are loads of tutorials online. There’s a lot more you can do with it than what we’ve covered here.

Screencast – Using the FusionReactor Profiler to find slow code

In this awesome video by CF and ColdBox developer advocate Brad Wood, we see how to use FusionReactor features such as the request Profiler to identify several bottlenecks of slow code in a ColdFusion application.

Remember though that the overhead is actually really small in comparison to the benefit you get. In this video, the profile probably took under 45ms in total. So for a 9-second request that’s not bad!

How I improved Angular performance and page responsiveness

For a little while now I’ve had issues with DOM rendering performance within an enterprise-scale product built using Angular. I’ve always tried to follow some common approaches to improving and maintaining high performance within an Angular application.

The main approaches I’ve taken to combat performance degradation over time within this application are as follows:

Working outside the Angular zone

	/**
	 * Loop outside of the Angular zone
	 * so the UI does not refresh after each setTimeout cycle
	 */
	logOutsideOfAngularZone() {
		this.ngZone.runOutsideAngular(() => {
			setTimeout(() => {
				// reenter the Angular zone and display a console log
				this.ngZone.run(() => { console.log('Outside Done!'); });
			});
		});
	}

Adjusting ChangeDetection Strategies

@Component({
	selector       : 'app-my-component',
	template       : `...`,
	changeDetection: ChangeDetectionStrategy.OnPush,
})
export class MyComponent implements OnInit, OnDestroy {
	constructor(private cdRef: ChangeDetectorRef) {}
	public ngOnInit(): void {}
	public ngOnDestroy(): void {}
}

Using trackBy Functions with *ngFor

@Component({
	selector       : 'app-my-component',
	template       : `
		<ul>
			<li *ngFor="let person of people; trackBy: trackByFunction">{{person.name}}</li>
		</ul>
	`,
})
export class MyComponent implements OnInit, OnDestroy {
	public people: any[] = [
		{ id: 123, name: 'John' },
		{ id: 456, name: 'Doe' },
	];

	constructor() {}
	public ngOnInit(): void {}
	public ngOnDestroy(): void {}

	public trackByFunction = (index: number, person: any): number => person.id;
}

While all the techniques listed above did result in isolated, localized increases in page performance, I still suffered from an overall application-wide DOM rendering performance problem. This was perceivable to me as page rendering lag, whereby elements of the page are visible at given dimensions and positioned a given way, then ping to their correct position and dimensions. Another visible indicator of this issue was a noticeable delay in mouse hover cues, such as subtle underlines, CSS animations and tooltip display.

The culprit: my position-to-bottom directive

After further investigation, I discovered that one thing this product made use of that my other products did not was the position-to-bottom directive.

export class PositionToBottomDirective implements OnDestroy, AfterViewInit {
	private readonly ngUnsubscribe: Subject<any> = new Subject();

	constructor(private readonly el: ElementRef, private readonly zone: NgZone) {
		this.el.nativeElement.style.height = 'auto';
	}

	public ngOnDestroy(): void {
		this.ngUnsubscribe.next();
		this.ngUnsubscribe.complete();
	}

	public ngAfterViewInit(): void {
		setTimeout(() => this.calcSize());

		this.zone.runOutsideAngular(() => {
			fromEvent(window, 'resize')
				.pipe(debounceTime(500), takeUntil(this.ngUnsubscribe))
				.subscribe((res: any) => this.calcSize());
		});
	}

	public calcSize(): void {
		let viewport: { top: string };

		this.zone.runOutsideAngular(() => {
			viewport = this.el.nativeElement.getBoundingClientRect();
		});

		const offset: number = parseInt(viewport.top, 10) + 10;
		const height: string = `calc(100vh - ${ offset }px)`;

		this.el.nativeElement.style.overflowY = 'auto';
		this.el.nativeElement.style.height = height;
	}

From the code snippet above, you can see that this directive was relatively simple. Upon initialization and after every browser resize event, each component this directive was attached to would have its height set to the available space within the window.

Through use of RxJS’s debounceTime and Angular’s runOutsideAngular functionality, I had hoped to mitigate the impact this directive would have on the performance of the product, as I knew that Angular’s change detection runs for every asynchronous browser event.

Unfortunately this was not enough, so I removed the use of this directive in favor of CSS Flexbox (I probably should have used this to begin with :D). After removing the directive I saw a 61% increase in page responsiveness. This was calculated using the top 10 most time-consuming activities.

Top 10 time-consuming activities before removal of the directive
Top 10 time-consuming activities after removal of the directive

Black Friday Sale 2019

Voucher Code FR-DEV-SAVER-19

Half Price FusionReactor Developer Edition annual license

To get a year’s license for FusionReactor Developer Edition at half price use coupon code FR-DEV-SAVER-19 at checkout. Hurry offer ends 30 November 2019.

What is FusionReactor Developer Edition?

The FusionReactor Developer Edition can be used in development and test environments (*) to help you to pinpoint issues and performance bottlenecks before applications are deployed to production.

It has all the same features and functionality as the FusionReactor Ultimate Edition, which is used all over the world by Java and CFML developers looking for deep insight into their code.

What does FusionReactor Developer Edition do?

High level features

FusionReactor Developer has all of the features that you would expect to find in an APM:

  • Application monitoring
    • Transactions, web requests, JSON, JMX, Kafka, Java & CF
  • Database monitoring – JDBC requests 
    • Slowest requests, numbers of requests and any errors
  • End User monitoring 
    • Sessions, DB time, request time – live and aggregated 
  • System monitoring for your instance and your system & server
    • CPU, heap and non-heap, garbage collection information, thread state 

Low level deep insight

If you are looking for deep insight, then FusionReactor delivers:

  • Automated root cause analysis – automated delivery of code, scope variables and stack when an issue arises 
  • Production safe debugging with a user friendly IDE style debugger 
  • Advanced profiling
    • Code profiler insight into code performance issues 
    • The memory profiler enables you to isolate memory leaks and excessive object creation 
    • The thread profiler will detect thread contention, deadlocks and show thread state 
    • The CPU profiler analyses the CPU usage per running thread and will enable you to find performance bottlenecks

Who would use FusionReactor Developer Edition?

FusionReactor Developer Edition is used by Java and CFML developers around the globe who need deep insight into their applications during development and testing stages.

How do I buy FusionReactor Developer Edition?

The easiest way to buy FR Developer is directly from us; don’t forget to use coupon code FR-DEV-SAVER-19 at checkout to get the annual license for half price, saving $100.

FusionReactor Developer Edition Usage Policy (EULA)

FusionReactor Developer Edition enables you to develop, test, evaluate and analyze applications which are running in a non-production environment. The Developer Edition may not be used to monitor an application which is running in a live or stand-by production environment or staging environment, in each case, including, without limitation, in any environment accessed by application end-users, including, but not limited to, servers, workstations, kiosks, and mobile computers.