Behind the Scenes: Openfire Optimization

Dec. 19, 2005
by Gaston Dombiak and Matt Tucker

A major priority for Openfire (formerly Wildfire) is to provide the fastest and most scalable XMPP server implementation available. The Pampero project will be our major effort over the next several months to help achieve that goal. However, one side effect of Pampero is that the core Openfire server instance will need to handle and route many more packets per second. So, to prepare for Pampero development, we embarked on an optimization project for the core server that improved performance by at least 33%.

All of this work was performed as we renamed the server from Jive Messenger to Openfire. What follows is the story of our quest to make Openfire significantly faster than the previous (already fast) Jive Messenger release.

Profiling Basics

Since Openfire is a pure Java server, one of the best weapons in our optimization arsenal is the profiler. We used JProfiler to analyze server performance and to find hotspots: specific parts of the code where the CPU spends a lot of time. Profiling let us figure out exactly where to focus our optimization efforts.

Before diving into the details of the work we did, we always feel obligated to preface any optimization discussion with the principles we keep in mind when doing profiling and optimization work: measure before optimizing rather than guessing, optimize only the hotspots the profiler actually reveals, and re-measure after every change to confirm the improvement.

On to the Optimizations!

An XMPP server should spend most of its time doing network operations or parsing XML, since those are two tasks that can never be optimized away. With this in mind, we analyzed an initial run of the profiling tool:

[Figure: Profiling Jive Messenger 2.3.0]

As you can see, most of the time was spent doing String manipulation (62%), and only a small percentage was spent performing network I/O or parsing XML. This was a surprising and disturbing finding, but it also meant there were big performance gains to be had.

Most of the String manipulation operations were quickly tracked down to the core packet classes. More precisely, a new user address (JID) object was being created each time Packet.getFrom() or Packet.getTo() was called (Packet class diff). Other expensive operations included JID.toString() and JID.toBareJID() (JID class diff). The fix was to parse each address once and reuse it, as sketched below.
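
Here's a minimal sketch of that lazy-caching pattern, assuming a simplified Packet that wraps a dom4j Element as Openfire's does; the exact field names and class shape are ours, not the real source:

```java
import org.dom4j.Element;
import org.xmpp.packet.JID;

// Simplified sketch of the caching pattern (not the actual Openfire code).
public class Packet {
    private final Element element; // underlying XML element
    private JID toJID;             // lazily cached "to" address
    private JID fromJID;           // lazily cached "from" address

    public Packet(Element element) {
        this.element = element;
    }

    public JID getTo() {
        String to = element.attributeValue("to");
        if (to == null) {
            return null;
        }
        // Parse the address once; reuse the cached object on later calls.
        if (toJID == null) {
            toJID = new JID(to);
        }
        return toJID;
    }

    public void setTo(JID to) {
        toJID = to; // keep the cache in sync with the attribute
        element.addAttribute("to", to == null ? null : to.toString());
    }

    // getFrom()/setFrom() follow the same pattern.
}
```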

After caching the String representation of the JID (sketched below) and caching the "to" and "from" JIDs of packets, the profile picture started to look much better. However, profiling uncovered quite a few other optimization opportunities. Some of these optimizations were quite minor, but they all added up nicely, as we'll see later. A partial list: removing unnecessary synchronization, trimming redundant String conversions, and cutting down on temporary object creation in the packet-handling path.
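
Caching the String representation works the same way. Because JIDs are immutable, toString() and toBareJID() can return precomputed values instead of rebuilding strings on every call; a simplified sketch (again ours, not the actual diff):

```java
// Illustrative sketch of caching the String forms inside JID itself
// (field names ours). JIDs are immutable, so computing the strings once
// in the constructor is safe.
public class JID {
    private final String node;
    private final String domain;
    private final String resource;
    private final String cachedBareJID; // node@domain
    private final String cachedFullJID; // node@domain/resource

    public JID(String node, String domain, String resource) {
        this.node = node;
        this.domain = domain;
        this.resource = resource;
        StringBuilder buf = new StringBuilder(40);
        if (node != null) {
            buf.append(node).append('@');
        }
        buf.append(domain);
        this.cachedBareJID = buf.toString();
        if (resource != null) {
            buf.append('/').append(resource);
        }
        this.cachedFullJID = buf.toString();
    }

    public String toBareJID() {
        return cachedBareJID; // no string building on each call
    }

    public String toString() {
        return cachedFullJID; // ditto
    }
}
```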

After all these optimizations, the profiler picture was as follows:

[Figure: Profiling Openfire Server 2.4.0]

The server now spends 51% of its time doing network I/O and parsing XML, while String operations have declined from 62% to 5%. In other words, the server is now spending most of its time doing what it's supposed to.

Real-world Measurements

How do these optimizations affect real-world performance? To evaluate the impact, we prepared a simple stress test in which users log in, send messages back and forth, and then log out; a sketch of that kind of test driver appears below.
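
For the curious, here's a hypothetical driver for such a test, written against the Smack client library's API of that era; the account names, password, and iteration count are illustrative, and this is not our actual test harness:

```java
import org.jivesoftware.smack.XMPPConnection;
import org.jivesoftware.smack.packet.Message;

// Hypothetical login/send/logout loop using Smack's 2005-era API.
public class LoginStressTest {
    public static void main(String[] args) throws Exception {
        for (int i = 0; i < 100; i++) {
            // Connect and log in (assumes accounts user0..user99 exist).
            XMPPConnection conn = new XMPPConnection("localhost");
            conn.login("user" + i, "secret");

            // Send a message to another test user.
            Message msg = new Message("user" + ((i + 1) % 100) + "@localhost");
            msg.setBody("stress test message " + i);
            conn.sendPacket(msg);

            // Log out.
            conn.close();
        }
    }
}
```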

[Chart: packets per second] Openfire 2.4.0 handles 500 more packets per second than Jive Messenger 2.3.0.

[Chart: login speed] Openfire 2.4.0 improves login time by 33% over Jive Messenger 2.3.0.

Future Work

We're quite happy with the 33% performance improvement that only a few days of optimization work yielded. It's likely that measured performance improvements would be even greater on multi-CPU servers, since we eliminated a lot of synchronization in the code, along the lines of the sketch below.
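
As a purely illustrative example of the kind of change involved, compare a lock-guarded lookup with one backed by java.util.concurrent; the class and field below are hypothetical, not actual Openfire code:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative before/after for removing synchronization (hypothetical class).
public class SessionRegistry {
    // Before: every lookup serialized all threads through one lock.
    private final Map<String, Object> sessionsOld = new HashMap<String, Object>();

    public synchronized Object getSessionOld(String jid) {
        return sessionsOld.get(jid);
    }

    // After: a concurrent map lets routing threads read without blocking,
    // which matters most on multi-CPU machines.
    private final Map<String, Object> sessions = new ConcurrentHashMap<String, Object>();

    public Object getSession(String jid) {
        return sessions.get(jid);
    }
}
```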

We'll likely go through another round of profiling in about six months; we've found that to be a good interval for catching hotspots introduced by refactoring and new features. Until then, our focus will shift to the Pampero project, which is the next big opportunity for performance improvements in Openfire.