
OSGi at the UK's biggest science lab

Matthew Gerring | Jan. 13, 2017
Developers at Diamond Light Source set out to migrate a mission-critical, Java-based acquisition system to dynamic class loading. Here’s what they learned.

For our first foray into OSGi, we chose to migrate and rewrite part of the client. We moved from Swing to SWT/JFace using the Rich Client Platform (RCP), which is available from the good people at the Eclipse Foundation. The move led us to adopt an Equinox classloader for our client. It was not our intention to migrate the client because of dynamic class loading; that was something that came with the new platform. We used it first not to make the server modular, but to make the client start faster. It worked well for that purpose, so there was no real reason to modify the server architecture. For the next five years, we didn’t.

The creeping cost of support

So what changed? Well, like a lot of real-world projects, the proportion of maintenance work developers were doing started going up. This was especially costly when compared with time spent writing the software that new beamlines needed. In many cases, maintenance became almost all a developer was doing. These days we run a variation on a DevOps shop, so developers are usually involved in supporting systems as part of their work. This is the correct approach for us, but if developers aren’t also innovating and creating new software, we know that something has gone wrong.

A lot of what we do at Diamond Light Source requires creative input from developers to make new science available to our users. But over time, we built up technical debt. Signs of our technical debt included:

  • Using different APIs that do the same or similar things
  • Making overly interconnected projects and classes
  • Improper encapsulation of functionality
  • Rarely writing adequate unit tests

Another thing we did was run from source. Yes, you read that correctly: we manually pulled the software from its repository and built it specially for each experiment, leaving the source code and compiled bytecode as a bespoke version for each given beamline. We had the usual stack of integration tools (an automated build for each beamline in Jenkins, JUnit tests, Squish for the user interface, and so on), but ultimately a developer was pulling a custom product out of the repositories, changing certain files by hand, and leaving that version in place for the next run of the machine.

The system wasn’t efficient or reproducible, which meant it had to change. After reading online articles, learning at industry conferences, and taking input from new colleagues, we came up with a plan to move our server to something closer to industry standards. The path, however, was not entirely smooth.

Real world problem #1: Integration

The first thing we decided to do was make a single server product for the data acquisition server: one that could be used on any beamline and was created from a reproducible, binary build. OSGi was a perfect fit for this project: bundles are loaded dynamically, after all, and one of the main reasons for dynamic classloading is that the binary product can grow larger than what is held in memory. Using OSGi meant that beamline-specific bundles (for example, those dealing with certain detectors, or specific libraries for decoding streams) could be built into the single product. Only if they were used in an experiment would they be class-loaded and take up space in the virtual machine (VM).
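This kind of lazy loading is driven by bundle metadata rather than application code. As a rough sketch (the bundle and package names here are hypothetical, not Diamond's actual ones), a beamline-specific detector bundle might declare lazy activation in its MANIFEST.MF, so that it is resolved at startup but its classes stay unloaded until first use:

```
Bundle-ManifestVersion: 2
Bundle-SymbolicName: org.example.detector.beamline
Bundle-Version: 1.0.0
Bundle-ActivationPolicy: lazy
Import-Package: org.osgi.framework;version="[1.6,2.0)"
Export-Package: org.example.detector.beamline.api
```

With `Bundle-ActivationPolicy: lazy`, an OSGi framework such as Equinox resolves the bundle when the product starts but defers activation, and hence classloading, until another bundle first requests one of its classes. That deferral is what keeps unused beamline bundles from taking up space in the VM.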

