FusionReactor Blog - News and expert analysis for Java APM users.

Understanding StackTraces in Java

By guest author Thorben Janssen

The StackTrace is one of the key concepts in Java. It’s a call stack for the thread and lists all method calls since the start of the thread. You have probably seen its textual representation in your log file or console output. It gets printed to System.err whenever an exception is thrown and not handled by your application. The following snippet shows a typical example of such an output.

java.lang.NumberFormatException: For input string: "123a45"
	at java.base/java.lang.NumberFormatException.forInputString(NumberFormatException.java:68)
	at java.base/java.lang.Long.parseLong(Long.java:699)
	at java.base/java.lang.Long.valueOf(Long.java:1151)
	at org.thoughts.on.java.TestStackTrace.testStackTrace(TestStackTrace.java:17)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:567)
	...

Using monitoring tools like FusionReactor, or by calling the getStackTrace() method on a Thread object (or the static Thread.getAllStackTraces() method), you can access the StackTraces of the active threads in your JVM. But there are other ways to examine and work with a StackTrace.
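
If you want to try this yourself, here is a minimal, hypothetical sketch (the class name is made up) that prints the StackTrace of the current thread and then lists all active threads with the number of frames in their traces:

import java.util.Map;

public class StackTraceDemo {

	public static void main(String[] args) {
		// the StackTrace of the current thread; the first element is the getStackTrace call itself
		for (StackTraceElement element : Thread.currentThread().getStackTrace()) {
			System.out.println(element);
		}

		// the StackTraces of all active threads in the JVM
		Map<Thread, StackTraceElement[]> allTraces = Thread.getAllStackTraces();
		allTraces.forEach((thread, trace) ->
				System.out.println(thread.getName() + ": " + trace.length + " frames"));
	}
}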

In most cases, you will not look at a StackTrace until you need to analyze an Exception. The StackTrace is part of an Exception object, and it shows all method calls that happened until the exception was thrown. That shows you where the exception occurred and how you reached that specific part of the code. 

In the next step, you can then analyze your code and find out what caused the exception. But that’s a topic for a different article. In this one, I want to tell you more about exceptions with their StackTraces and all the information they provide, so that you have a better understanding of StackTraces in Java.

Exceptions in Java

An exception gets thrown whenever an error happens within a Java application. It gets represented by an object of the java.lang.Exception class or one of its subclasses. The JDK provides you with a huge set of different Exception classes. If you want, you can also implement your own business exceptions.
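
Implementing your own business exception can be as simple as extending Exception. Here is a minimal sketch; the class name is made up for illustration:

public class OrderNotFoundException extends Exception {

	public OrderNotFoundException(String orderId) {
		super("No order found with id " + orderId);
	}
}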

It’s a general best practice to use the most specific exception class for each error. A typical example for that is the valueOf method of the java.lang.Long class. You can call it with a java.lang.String and it throws a java.lang.NumberFormatException if the String has a format that can’t be parsed to a Long. The NumberFormatException is a subclass of the IllegalArgumentException, which indicates that an invalid argument value was passed to a method. As you can see, the IllegalArgumentException would describe the error situation, but the NumberFormatException is more specific and should be preferred.

private Long parseToLong(String s) {
	return Long.valueOf(s);
}

Long l;
try {
	l = parseToLong(s);
} catch (NumberFormatException nfe) {
	// handle NumberFormatException
	log.error("Provided value was invalid. Using 0 as default.", nfe);
	l = 0L;
}
log.info(l);

Using the most specific exception class makes your code easier to read, and it enables you to implement a different catch clause for each exception class. This allows you to handle each error situation differently.

You could, for example, decide to throw a NullPointerException if the provided String is null and throw a NumberFormatException if it doesn’t have the correct format.

private Long parseToLong(String s) {
	if (s == null) {
		throw new NullPointerException("String can't be null");
	}
	return Long.valueOf(s);
}

In the code that calls this method, you can then implement two separate catch blocks that handle the NullPointerException and the NumberFormatException in different ways. I did that in the following code snippet to provide different error messages for both situations. But you could, of course, use the same approach to implement more complex error handling or to provide a fallback to default values.

Long l;
try {
	l = parseToLong(s);
} catch (NullPointerException npe) {
	// handle NullPointerException
	log.error("No value provided. Using 0 as default.", npe);
	l = 0L;
} catch (NumberFormatException nfe)	{
	// handle NumberFormatException
	log.error("Provided value was invalid. Using 0 as default.", nfe);
	l = 0L;
}
log.info(l);

The structure of a StackTrace

In the previous code snippet, I wrote log messages that contained the caught exception objects. The following snippet shows an example of such a message in the log file. Your application writes a similar message for all unhandled exceptions to your console.

15:28:34,694  ERROR TestStackTrace:26 - Provided value was invalid. Using 0 as default.
java.lang.NumberFormatException: For input string: "123a45"
	at java.base/java.lang.NumberFormatException.forInputString(NumberFormatException.java:68)
	at java.base/java.lang.Long.parseLong(Long.java:699)
	at java.base/java.lang.Long.valueOf(Long.java:1151)
	at org.thoughts.on.java.TestStackTrace.parseToLong(TestStackTrace.java:39)
	at org.thoughts.on.java.TestStackTrace.testStackTrace(TestStackTrace.java:18)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:567)

As you can see, the log message contains a long list of class and method names. This is the textual representation of the exception’s StackTrace. Whenever a method gets called, it is added to the top of the stack, and after it has been executed, it is removed again. As a result, the last method called before the exception occurred is at the top of the StackTrace and gets logged first. The following elements in the StackTrace, and lines in the log file, show which methods were called previously to reach the part of the code that caused the exception.
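
You can reproduce this ordering with a small, hypothetical example. In the following sketch, main calls methodA, which calls methodB, which throws an exception. The resulting StackTrace lists methodB first, then methodA, then main:

public class StackOrderDemo {

	public static void main(String[] args) {
		methodA();
	}

	private static void methodA() {
		methodB();
	}

	private static void methodB() {
		// the last method called before the exception appears at the top of the StackTrace
		throw new IllegalStateException("something went wrong");
	}
}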

Using StackTraces to analyze incidents

If you take another look at the previously shown StackTrace, you can see that the exception was created in the forInputString method of the NumberFormatException class. The actual problem occurred in the parseLong method of the Long class, which was called by the valueOf method of the same class, which in turn was called by the parseToLong method of my TestStackTrace class. As you can see, the StackTrace provided by the NumberFormatException clearly shows where the exception happened and which chain of method calls led to it.

This information is a good start for analyzing an exception and finding its actual cause. But quite often, you will need more information to understand the issue. The Exception object and its StackTrace only describe which kind of error occurred and where it happened. They don’t provide any additional information, like the values of certain variables. This can make it very hard to reproduce the error in a test case.
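
A common mitigation, sketched here with made-up names, is to catch the exception close to where the relevant variables are in scope and rethrow it with a message that records their values:

private Long parseConfigValue(String key, String rawValue) {
	try {
		return Long.valueOf(rawValue);
	} catch (NumberFormatException nfe) {
		// record the variable values in the message and keep the original exception as the cause
		throw new IllegalStateException(
				"Could not parse config key '" + key + "' with value '" + rawValue + "'", nfe);
	}
}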

How FusionReactor can help

FusionReactor can provide you with more information about the error situation. If you want, you can even debug the issue on your live system the next time it occurs.

FusionReactor Debugger error

The only thing you need to do is log into the web interface of the FusionReactor instance that monitors your application, select the exception from the “Error History”, and go to the “Error Details” tab. There you can activate the debugger for this specific exception.

After you’ve done that, FusionReactor will send you an email when the exception occurs again and pause the thread for a configured amount of time. The email contains the stack trace and information about the variable context. As long as the thread is paused, you can also use FusionReactor’s Production Debugger to debug the error in a similar way as you would in your IDE without affecting any of the other threads of your application.

FusionReactor Debugger

Our move from Confluence to mkdocs

For many years the FusionReactor product documentation has run on a Confluence server. We maintained our own server for a long time and currently use the cloud version, but it has never really ticked all the boxes for our product documentation.

For each major release of FusionReactor we have a separate Confluence space on the Confluence cloud server. This allows users to see the documentation for each version (excluding minor / bug-fix releases) that they are running, ensuring the screenshots are accurate.

The problem with having many copies of the documentation, for different versions of the product, is that users don’t realize that they are reading documentation for the wrong version. This is not something that can be blamed on Confluence, but on how we have been managing the system in the past. Google returns the search results that are most popular, and as our 6.2.8 release was the newest release for the longest time, there has been a trend of people finding and using the 6.2.8 docs. Now it’s difficult for new releases of the documentation to rank higher in Google’s search results.

This can be shown by googling for
site:docs.fusion-reactor.com crash protection
which returns the following:

The image above shows that the current FusionReactor Crash Protection page doesn’t appear at all; the first FR 8 crash-protection-related page is the 7th search result, and it is an 8.0.0 hit, not 8.2.0 (results will vary from region to region). When Google returns so many old versions before the newest FR version, it causes customer confusion and impacts our support team.

Another major issue when updating documentation for a new release was that we found it very difficult to update images. We had no idea whether there was a screenshot of a given UI component on a specific page, so we had to check every page we could find in order to update screenshots after a UI change.

We have been using markdown and mkdocs for some time in other areas of documentation (like the FR Cloud docs), so we knew that this worked. But moving from Confluence was not going to be automatic, and we had not done this before.

First we needed to get the space content out of Confluence. You can do this by going to the Confluence space and selecting “Space Settings”. Then, under “Content Tools”, there is an “Export” tab, which shows:

Select “HTML”, then “Normal Export”, to get an HTML file per space page. Once you press “Export” the task will run and give you a zip file of the content to download.

We then converted all the HTML files to markdown:

for i in *.html ; do echo "$i" && pandoc -f html -t markdown_strict -s "$i" -o "$i.md" ; done

We then used rename and mmv to change the filenames from something like Weekly-Report_245548143.html.md to Weekly-Report.md, using the following commands:

rename 's/.html.md$/.md/' *
mmv '*_[0-9]*\.md' '#1\.md'

The following image shows how the HTML files were converted to markdown files. We can now delete the HTML files if we want.

The renaming of the files on disk also needs to be reflected in the markdown content. This was relatively simple using a replace-all with the same regexes we used in the mmv and rename commands.

We then set up mkdocs, followed this mkdocs-material setup guide, and made some customisations to use the FusionReactor icons and social links.

We managed to quickly get the docs running inside mkdocs as shown above.

The next task going forward, before mkdocs can actually replace the current docs, is to update and fix the documentation itself.

In our Confluence versions of the docs we had lots of copied-and-pasted content that was updated in one place and not another, and we had a lot of broken links and out-of-date images.

We will continue to repair and improve our mkdocs over the next few weeks and months and then change the DNS entry.

Benefits of mkdocs

Below is a list of the benefits of mkdocs (as we see them) compared to our Confluence documentation system.

  • Ability to run automated tools against the docs.
    • Allows checking for dead links.
    • Automates spell checking and readability tools.
    • Makes it easy to find duplicate content and use markdown-include.
  • Ability to find and update images in a simple way.

Signing, Notarizing and Stapling on macOS

The Gatekeeper system has protected macOS users against malicious software since its introduction in OS X 10.7.3 (Lion). This system assures users that software comes from a trusted source and doesn’t contain malicious content. But how does it work?

Introduction

The Mac software ecosystem has historically been fairly untroubled by viruses and malicious software. This was due in part to the comparatively small user base, and partly because the system, which is based on Unix, is naturally partitioned into users and groups, making it difficult for malicious code to obtain administrative privileges.

But the platform is increasing in popularity. Apple’s desktop operating system accounted for approximately 10% of desktop market share in 2019. The same kernel, Apple XNU (a modified Mach microkernel), is also used by the company’s iPhone (iOS) and Apple Watch (watchOS). This increasing install base makes the platform an attractive target for malicious software.

Apple has two main strategies in place to protect its users. We’ll look at each stage of this protection regime in the next sections, but broadly they comprise:

  • Code Signing: this ensures that code comes from a known, trusted source, and hasn’t been altered since it was signed.
  • Notarization: the code is inspected by Apple to ensure it falls within its safety guidelines; the resulting receipt can be attached to the software permanently, in a process known as “stapling.”

Code Signing

Signing ensures the code belongs to us and can’t be changed after it’s signed. This is done by the codesign tool, which:

  • Creates a secure hash of the code itself (this hash will change if the code is tampered with after the fact)
  • Signs the hash and the code with our Developer Certificate. This puts our name and details on the code. Apple has checked our credentials and has also signed our Developer Certificate to say that we are a valid, trusted developer.
  • Stamps the resulting signature with the current time. This ensures that if our certificate expires, you can continue to use this software.

Here’s how we test-sign the FusionReactor macOS Native Library, which is used by the FusionReactor Production Debugger:

xcrun codesign --verbose --strict --keychain /Users/jhawksley/Library/Keychains/login.keychain -s CERT_ID_HERE --timestamp target/libfrjvmti_x64.dylib

Notarization and Stapling

The second stage is to have Apple actually check our code.

The Notarization command looks like this:

xcrun altool --notarize-app --username "our_apple_id@intergral.com" --password "our_password" --primary-bundle-id "com.intergral.bundleid" --file bundle.zip

Before we ship the library to Apple for Notarization, we have to sign it using codesign, and we have to zip it up to minimize the transfer size. The username and password are those of a valid Apple Developer Account.

Notarization is an automated service which, while not providing usability or design feedback like App Review, does check that code is correctly signed and doesn’t contain malicious content.

The result of this is a Notarization Ticket, which is a piece of secure data that Apple sends back to us and also publishes online in the Gatekeeper Code Directory Catalog.

Some types of software — for instance the native library we showed in Code Signing above — don’t have space in their structure for a ticket, so they can’t be stapled. Other types of software, like the FusionReactor macOS Installer, do have space, and the Notarization Ticket obtained above can be stapled to them.

When you run our software on your machine, Gatekeeper automatically checks whether the software is valid. If a network connection is available, Gatekeeper uses the online Code Directory to look up the ticket and checks it against the software. If no network is available, Gatekeeper uses the stapled ticket.

If a valid ticket is located, Gatekeeper knows that Apple has checked this software, that it meets their standards, and that it can run.

Why Bother?

Apple has been gradually tightening up the conditions under which unsigned software can run. In macOS Catalina, this is still possible (you have to turn Gatekeeper off using Terminal commands) although in the future even that may no longer be possible.

When you try to use or install unsigned content on macOS Catalina, you’ll see the following dialog (which can’t be bypassed), and the content is not opened.

When content has been correctly signed, Gatekeeper tells you where it came from and lets you decide whether to open it. Here’s what we see when we open our (signed, notarized, stapled) installer from our build server.

Trust, but Verify

If you want to check a signature, this is easy to do. Open the Terminal app, and use the codesign command to retrieve the certificate:

codesign -d --verbose=4 ~/Downloads/FusionReactor_macos_8_3_0-SNAPSHOT.dmg

This spits out the following (excerpted for clarity):

Identifier=FusionReactor_macos_8_3_0-SNAPSHOT
CodeDirectory v=20100 size=179 flags=0x0(none) hashes=1+2 location=embedded
CandidateCDHashFull sha256=f684fe6584f8249c3bfb60c188dd18c614adc29e6539490094947e1e09bbb6c8
Authority=Developer ID Application: Intergral Information Solutions GmbH (R3VQ6KXHEL)
Authority=Developer ID Certification Authority
Authority=Apple Root CA
Timestamp=Dec 5, 2019 at 14:52:15

The Identifier is the file name (minus extension) we signed, and the Identifier and CandidateCDHashFull values tell Gatekeeper how to perform an online lookup of our Notarization Ticket.

The three Authority lines are the chain of trust: they show that Apple has trusted us by checking our details and issuing a certificate; the identifier in parentheses (R3VQ6KXHEL) is our team ID.

Finally, the Timestamp shows when we actually signed this software. If you try to use it in the future, perhaps when our certificate has expired (hopefully there’ll be many new versions of FusionReactor before then!), the Timestamp assures Gatekeeper that the software was signed within the certificate’s validity period and should be treated as if the certificate were still valid. Gatekeeper should then continue to open this software indefinitely.

Automation

It’s possible to automate the signing, notarization and stapling process, but it’s not exactly straightforward.

Apple’s development tool, the venerable Xcode, handles signing, notarization and stapling seamlessly as part of its user interface. Apple does provide command-line tools to perform these tasks (codesign, altool and stapler), but these all assume that the user running them is logged in.

The exact mechanics of automated signing of Apple binaries are rather beyond the scope of this article. However, we can give you some hints:

  • A Jenkins node running on a Mac Mini is used. Apple doesn’t allow virtualization of its hardware, and Docker on Mac is a Linux environment, so the node must be a bare-metal Apple environment. Mac Minis are excellent little machines for this.
  • The user under which the Jenkins node is running:
    • Must be logged in to the desktop. This creates an Aqua desktop session for the user — which is valid even if the Jenkins node is launched as that user using ssh. The Aqua session is required to use the code signing tools. The login can be done on the Mac itself, or using Apple Screen Sharing (which is a customized VNC) but the session should be ended by closing the window, not logging the user out.
    • Must have the Apple Developer ID code signing certificate, along with the corresponding private key, installed into its Login Keychain (the default keychain).
    • Must have the keychain unlocked prior to code signing, using the command security unlock-keychain.

Another wrinkle in the automation of this procedure is that the notarization command (xcrun altool --notarize-app) is asynchronous. Once the code is submitted (which itself can take a couple of minutes), the Apple Notarization Service returns some XML containing a RequestUUID. You then have to poll the service using this UUID until it returns either success or failure. This can take up to 10 minutes, in our experience. If you don’t parallelize this part of your build, it will impose a long delay.

Ephemeral Dockers as Tool Containers

Developing tools to automate what you do often is common sense. But what if you want to share your tools with other developers? As soon as you have something that relies on more than a simple script, you’re going to be faced with dependencies and distribution headaches. Docker can solve that.

“Anything you do more than twice should be automated” is a common mantra in the software industry. And it makes sense: doing something once, well that could be considered a one-off. Doing the same thing again? Coincidence. The third time you start the task, it’s time to bite the bullet and take the time to script it.

The resulting script might be a few lines of simple bash. Most developers have Bash installed, and it’s available on almost all platforms. So you’ve got a universal script that’ll run anywhere.

But your task might be more complicated than Bash can handle. Or you might be refining a script which is becoming too complicated to code in simple Bash functions. You decide to write some nice object-oriented Python, or Ruby, or Java, or… some other language. You build the runnable scripts and you put them into Git so everyone can access them.

Soon, the complaints start rolling in:

“I don’t have Python 3 – I have to stick with 2.7 because Fred’s widget-frobulator script requires it.”

“I can’t install that Ruby gem requirement because I have something else that requires an earlier version.”

“I’m not installing $ENVIRONMENT, I’ve got too much stuff installed already.”

all my colleagues, all the time

There must be an easier way to package tooling.

Enter Docker.

Geocoder-in-a-box

Here’s an example, geocoder-in-a-box: it takes a single argument and provides geographical information about it.

The script is very simple, but it has two very important requirements: it needs Ruby 2.6.5, and it needs a specific version of Alex Reisner’s Geocoder library. We will additionally need to pass command-line arguments to it. The resulting image should go to a repository so the team can find it.

I’ve put all the code into GitLab. For the sake of the example, I’m going to push to Docker Hub – but you (like us) probably have your own internal Docker repository too.

Code

The bit that does the work:

#!/usr/bin/env ruby
require 'geocoder'

abort 'At least one argument must be supplied:  try a location ("Stuttgart") or an IP address ("139.162.203.138")' unless ARGV.count > 0
result = Geocoder.search( ARGV[0] ).first
puts "  #{ARGV[0]}: #{result.city}, #{result.state}. Lat/Long: #{result.coordinates}"

You can run this from the command line (it’s a runnable Ruby script), but you’ll probably need to gem install geocoder -v 1.5.2 first. You can try it out:

Docker Packaging

Next comes the Dockerfile. This tells Docker how to build an image containing the right version of Ruby, and get our dependency installed too. Here’s the code:

# Docker Tools Demo - rgeo - geolocate something!
#
# Creates a docker image containing a small weather tool, to illustrate
# packing tools as ephemeral dockers.
#
# John Hawksley <john_hawksley@intergral.com>

FROM ruby:2.6.5-alpine
MAINTAINER John Hawksley <john_hawksley@intergral.com>

COPY ./rgeo.rb /rgeo.rb
RUN gem install geocoder -v 1.5.2

ENTRYPOINT ["/rgeo.rb"]

You can build an image by running this, in the same directory as the Dockerfile:

docker build -t rgeo .

The image will be built using the ruby:2.6.5-alpine base (the FROM directive). This is a compact version of Linux from Alpine Linux, into which Ruby 2.6.5 has been pre-installed.

The COPY directive simply copies our script into the image. This doesn’t have to be a script – it could be its own distributable unit, like a Gem, Egg or Jar. The material being copied must be at the same folder level as Dockerfile or below, and there can be more than one COPY.

The RUN directive installs our dependency – Geocoder 1.5.2 – into the image. Again, multiple RUN directives can appear. It’s not uncommon to see apk or apt package management commands here. The main purpose of these commands is to build an environment with the right supporting packages and tooling, so your own code can run.

Finally, the magic: ENTRYPOINT. This tells Docker what to actually run, when we run the image. We copied the script into /rgeo.rb (the COPY directive above), and it’s runnable, so we can just run it.

If your tooling requires some special handling (environment variables, for instance, or specific actions pre- and post-run), you might want to COPY in a shell script which does the actual call to your tooling for you.

After the build completes, it should dump out the following lines:

Successfully built 77f1cb7cdf08
Successfully tagged rgeo:latest

Now we can try it out:

It’s working perfectly. We can create a nice alias for this, so that our colleagues don’t have to complain about long-winded Docker commands when they need to geolocate something. Aliases are typically added to developers’ shell startup scripts (.bashrc, .zshrc for example):

alias geo='docker run --rm rgeo:latest'

The ‘--rm‘ option tells Docker to remove the container once it has finished running. This makes the run ephemeral: nothing remains afterwards.

Now we can just use the alias, as if it were a command installed locally. Docker never appears:

Distribution via Docker Hub

To push images to Docker Hub, you’ll need a username and password (go ahead and sort that out; I’ll get a cuppa ☕️).

In your terminal session, log in to the hub using docker login. If everything goes well, you’ll see Login Succeeded.

Docker uses the tag infrastructure to differentiate local images from Hub images. This is done by prepending your Hub username to the image name:

docker tag rgeo:latest jhawksleyintergral/rgeo:latest

Finally, push the image by referring to its tag:

docker push jhawksleyintergral/rgeo:latest

That’s it! Your colleagues can then use the full tag name in their aliases, and the image will be pulled from the Hub automatically:

Subsequent calls don’t pull the image again – naturally. They use the cached version:

Conclusion

Docker makes your life easier by providing a simple packaging methodology for your code.

Your colleagues will love you (don’t they already?) because they gain access to awesome (well, you wrote it, so that’s a given) tooling without having to set up a complicated environment.

If you want to know more about Docker, there are loads of tutorials online. There’s a lot more you can do with it than what we’ve covered here.

Screencast – Using the FusionReactor Profiler to find slow code

In this awesome video by CF and ColdBox developer advocate Brad Wood, we see how to use FusionReactor features such as the request Profiler to identify several bottlenecks of slow code in a ColdFusion application.

Remember, though, that the overhead is actually really small in comparison to the benefit you get. In this video, the profiling probably took under 45ms in total. So for a 9-second request, that’s not bad!

How I improved Angular performance and page responsiveness

For a little while now I’ve had issues with DOM rendering performance within an enterprise-scale product built using Angular. I’ve always tried to follow some common approaches to improving and maintaining high performance within an Angular application.

The main approaches I’ve taken to combat performance degradation over time within this application are as follows:

Working outside the Angular zone

	/**
	 * Loop outside of the Angular zone
	 * so the UI does not refresh after each setTimeout cycle
	 */
	logOutsideOfAngularZone() {
		this.ngZone.runOutsideAngular(() => {
			setTimeout(() => {
				// reenter the Angular zone and display a console log
				this.ngZone.run(() => { console.log('Outside Done!'); });
			});
		});
	}

Adjusting ChangeDetection Strategies

@Component({
	selector       : 'app-my-component',
	template       : `
		<h1>Title</h1>
	`,
	changeDetection: ChangeDetectionStrategy.OnPush,
})
export class MyComponent implements OnInit
{
	constructor(private cdRef: ChangeDetectorRef) {}
	public ngOnInit(): void {}
}

Using trackBy Functions with *ngFor

@Component({
	selector       : 'app-my-component',
	template       : `
		<h1>Title</h1>

		<li *ngFor="let person of people; trackBy:trackByFunction">{{person.name}}</li>
	`,
})
export class MyComponent implements OnInit
{
	public people: any[] = [
		{ id: 123, name: 'John' },
		{ id: 456, name: 'Doe' },
	];

	constructor() {}
	public ngOnInit(): void {}

	public trackByFunction = (index: number, person: any): number => person.id;
}

While all of the techniques listed above did result in isolated, localized increases in page performance, I still suffered from an overall, application-wide DOM rendering performance problem. This was perceivable to me as page rendering lag, whereby elements of the page would appear at given dimensions and positions and then snap to their correct position and dimensions. Another visible indicator of this issue was a noticeable delay in mouse hover cues, such as subtle underlines, CSS animations, and tooltip display.

The Culprit! My position-to-bottom directive

After further investigation, I discovered that one thing this product made use of that my other products did not was a directive: the position-to-bottom directive.

import { AfterViewInit, Directive, ElementRef, NgZone, OnDestroy } from '@angular/core';
import { fromEvent, Subject } from 'rxjs';
import { debounceTime, takeUntil } from 'rxjs/operators';

// selector name assumed for illustration
@Directive({ selector: '[positionToBottom]' })
export class PositionToBottomDirective implements OnDestroy, AfterViewInit
{
	private readonly ngUnsubscribe: Subject<any> = new Subject();

	constructor(private readonly el: ElementRef, private readonly zone: NgZone) {
		this.el.nativeElement.style.height = 'auto';
	}

	public ngOnDestroy(): void {
		this.ngUnsubscribe.next();
		this.ngUnsubscribe.complete();
	}

	public ngAfterViewInit(): void {
		// calculate once after the view has rendered
		setTimeout(() => this.calcSize());

		// recalculate after each (debounced) browser resize, outside the Angular zone
		this.zone.runOutsideAngular(() => {
			fromEvent(window, 'resize')
				.pipe(debounceTime(500), takeUntil(this.ngUnsubscribe))
				.subscribe(() => this.calcSize());
		});
	}

	public calcSize(): void {
		let viewport!: ClientRect;

		this.zone.runOutsideAngular(() => {
			viewport = this.el.nativeElement.getBoundingClientRect();
		});

		// fill the window from the element's top edge down, minus a 10px margin
		const offset: number = Math.round(viewport.top) + 10;
		const height: string = `calc(100vh - ${ offset }px)`;

		this.el.nativeElement.style.overflowY = 'auto';
		this.el.nativeElement.style.height = height;
	}
}
From the code snippet above, you can see that this directive was relatively simple. Upon initialization, and after every browser resize event, each component the directive was attached to would have its height set to the available space within the window.

By using RxJS’s debounceTime and Angular’s runOutsideAngular functionality, I had hoped to mitigate the impact this directive would have on the performance of the product, as I knew that Angular’s change detection runs for every asynchronous browser event.

Unfortunately this was not enough, so I removed the use of this directive in favor of CSS Flexbox (which I probably should have used to begin with :D). After removing the directive, I saw a 61% increase in page responsiveness. This was calculated using the top 10 most time-consuming activities.

Top 10 time consuming activities before removal of this directive
Top 10 time consuming activities after removal of this directive

Black Friday Sale 2019

Voucher Code FR-DEV-SAVER-19

Half Price FusionReactor Developer Edition annual license

To get a year’s license for FusionReactor Developer Edition at half price, use coupon code FR-DEV-SAVER-19 at checkout. Hurry, the offer ends 30 November 2019.

What is FusionReactor Developer Edition?

The FusionReactor Developer Edition can be used in development and test environments (*) to help you to pinpoint issues and performance bottlenecks before applications are deployed to production.

It has all the same features and functionality as the FusionReactor Ultimate Edition, which is used all over the world by Java and CFML developers who are looking for deep insight into their code.

What does FusionReactor Developer Edition do?

High level features

FusionReactor Developer has all of the features that you would expect to find in an APM:

  • Application monitoring
    • Transactions, web requests, JSON, JMX, Kafka, Java & CF
  • Database monitoring – JDBC requests 
    • Slowest requests, numbers of requests and any errors
  • End User monitoring 
    • Sessions, DB time, request time – live and aggregated 
  • System monitoring for your instance and your system & server
    • CPU, heap and non-heap, garbage collection information, thread state 

Low level deep insight

If you are looking for deep insight, FusionReactor delivers:

  • Automated root cause analysis – automated delivery of code, scope variables and stack when an issue arises 
  • Production safe debugging with a user friendly IDE style debugger 
  • Advanced profiling
    • Code profiler insight into code performance issues 
    • The memory profiler enables you to isolate memory leaks and excessive object creation 
    • The thread profiler will detect thread contention, deadlocks and show thread state 
    • The CPU profiler analyses the CPU usage per running thread and will enable you to find performance bottlenecks

Who would use FusionReactor Developer Edition?

FusionReactor Developer Edition is used by Java and CFML developers around the globe who need deep insight into their applications during development and testing stages.

How do I buy FusionReactor Developer Edition?

The easiest way to buy FR Developer is directly from us; don’t forget to use coupon code FR-DEV-SAVER-19 at checkout to get the annual license for half price, saving $100.

FusionReactor Developer Edition Usage Policy (EULA)

FusionReactor Developer Edition enables you to develop, test, evaluate and analyze applications which are running in a non-production environment. The Developer Edition may not be used to monitor an application which is running in a live or stand-by production environment or staging environment, in each case, including, without limitation, in any environment accessed by application end-users, including, but not limited to, servers, workstations, kiosks, and mobile computers.

FusionReactor ends support for Java 1.6

FusionReactor will be dropping support for Java 1.6 with FusionReactor 8.3.0. This release is currently scheduled for the end of this year / early 2020.

FusionReactor 8.2.x releases will be the last micro releases to support Java 1.6.

FusionReactor customers have been moving off Java 1.6 for some time and only 0.3% of FusionReactor 8 users are still using Java 1.6.

Oracle ended its “Extended Support” for Java 1.6 in December 2018.

IBM ended its support of Java 1.6 with its “End of Service” in September 2018.

Configuring and Disabling log tracking in FusionReactor

Introduction

FusionReactor tracks calls to any logging implementation made within your application. These logs are captured within the request object, and the capture can be configured based on log severity.

We capture log statements from both Java logging frameworks and CFML log tags:

Java Frameworks

  • SLF4J
  • Log4J
  • Logback
  • Apache Commons Logging

CFML log tags

  • ColdFusion log tags
  • Lucee log tags

In this blog we will cover how to configure framework log capture, and how to disable log capture altogether if you believe FusionReactor log tracking is causing issues in your application.

Configuring Java Framework log tracking in FusionReactor

It is possible to configure the logging severity for captured requests by going to FusionReactor (Top Left) > Plugins > Active Bundles, then modifying the configuration of the FusionReactor Log Tracker Plugin.

In the configuration, you can capture log statements for error and above, warning and above, fatal only, or no log statements at all.
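
As a hypothetical illustration, with the severity set to “warning and above”, only the last two statements in the following SLF4J snippet would be captured with the request (the class and messages are made up):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class OrderService {

	private static final Logger log = LoggerFactory.getLogger(OrderService.class);

	public void processOrder(String orderId) {
		log.debug("Processing order {}", orderId);                // below the threshold, not captured
		log.warn("Order {} took longer than expected", orderId);  // captured
		log.error("Order {} failed validation", orderId);         // captured
	}
}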

Disabling log tracking in FusionReactor

If you have sensitive information in log statements, or believe that FusionReactor is causing an issue with log capture, you can disable log tracking.

To do this you will need to deploy a properties file, as well as add a system property to your application server.

Creating the fusionreactoragent.properties file

In order to disable the pointcuts that FusionReactor makes into the logging frameworks (using ASM), you will need to create a properties file in the same directory as your fusionreactor.jar file.

By default this will be {FusionReactor Directory}/instance/{instance name}, so on your server you may see:

  • /opt/fusionreactor/tomcat/fusionreactor.jar
  • C:\FusionReactor\instance\CF2018\fusionreactor.jar

In this directory, create a file named ‘fusionreactoragent.properties’.

This file should contain:

com.intergral.fusionreactor.agent.pointcuts.logtracker.SLF4JPointCut=false
com.intergral.fusionreactor.agent.pointcuts.logtracker.Log4J2PointCut=false
com.intergral.fusionreactor.agent.pointcuts.logtracker.ColdFusionCFLOGPointCut=false
com.intergral.fusionreactor.agent.pointcuts.logtracker.LuceeCFLOGPointCut=false

Adding system properties

FusionReactor uses mixins, as well as cuts into the application code, to track certain frameworks. In order to disable these mixins, you will need to add the following system property to your JVM arguments:

  • -Dfr.mixin.apache.commons.logging=false

In ColdFusion, your JVM arguments are typically set in the jvm.config file, which is located in the {ColdFusion Directory}/cfusion/bin directory.

In Tomcat / Lucee, your JVM arguments are typically located in the setenv.sh file on Unix, or set through running the TomcatW.exe process on Windows. These files are located under the {Tomcat Directory}/bin directory.

For a full list of configuration files for the supported application server types, see Application Server Examples.

Restarting the Application server

In order to apply these changes, you will need to restart the application server.

  • On Windows this would typically involve restarting the Tomcat / ColdFusion service.
  • On Linux this will normally involve running the restart command on the Tomcat / ColdFusion executable file.

You should now no longer see log statements on any transactions, as FusionReactor is no longer interacting with the logging frameworks.

The Top Application Performance Monitoring (APM) Software for Small-Business

We have been on the G2 review site for a little under a year. We have encouraged our customers to leave reviews on G2, and we have made it to the top of the Best APM for Small-Business category by customer satisfaction. This makes us very proud, as it is our customers who have placed us here, and we thank them for their kind words and continued commitment to FusionReactor.

G2.com is a real-time and unbiased user review website that specializes in business software. It uses algorithms to calculate scores based on detailed reviews that real (and verified) customers leave.

The team at FusionReactor has always been proud of our software and our service and historically we have always had extremely good reviews from the FeeFo service.

We are competing with some very big players, and indeed very big budgets, so you can imagine our delight at being ranked the #1 APM for small business, winning an overall satisfaction score of 91 out of 100.


Some of the competitors in the Small Business category of APM

Invaluable Insights with FusionReactor

Our reviews are from customers who use our APM with Java and ColdFusion applications. Looking through our reviews you will quickly see that customers particularly like the depth of insight that FusionReactor gives its users.

I love the ability to immediately gain insight into actual, real world production issues and the system factors behind them. From system resources to network or database congestion and conflicts, it’s all right there at your fingertips

Easily the best tool I’ve used for root causes analysis on the JVM – and one of the cheapest too

FusionReactor provides amazing insight into the server health, specifically in regards to the way ColdFusion is operating. The reporting and visualization into the server are fantastic!

See all of our G2 reviews

See our FeeFo reviews

Start a free trial