So how does Fidelity make this a reality? Azua says the key is developing applications that run in a world of software-defined infrastructure. Applications are written in microservices frameworks, with their dependencies outlined, and the underlying infrastructure requirements clearly defined.
Fidelity has not standardized on any one technology; rather, it uses a variety of tools to accomplish this. The company builds its apps in Docker containers. Apps that need to stay on the company’s premises run on an OpenStack private cloud; Amazon Web Services and Microsoft Azure are used for public cloud. Fidelity uses a combination of cloud-native management tools: CloudFormation for AWS, Heat templates for OpenStack, and Terraform, which works across both public and private environments. It also uses Cloud Foundry as a PaaS layer that spans both public and private clouds.
But which tools the company uses is irrelevant, she argues. “Process trumps tools,” she says. If applications are built in a certain way, it doesn’t matter what underlying technology is used to run or manage them.
There is no set rule for determining where an app will run, but Azua says that, generally speaking, if an application runs 24 hours a day, seven days a week, Fidelity can run it more efficiently internally than in the public cloud. For short-term workloads, or ones with spiky infrastructure demands, the public cloud is a more natural landing spot.
But that’s not for the developers to worry about. “The (application development) pipeline is so important,” she says. “If you build the applications in a declarative way then we should not have to worry about what the target environment is.”
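To illustrate what “declarative” means in this context, here is a minimal Terraform-style sketch. The resource names, image ID, and tags are hypothetical, not Fidelity’s actual configuration; the point is that the file declares the desired end state, and the tooling works out how to reach it in whichever target environment is chosen.

```hcl
# Hypothetical example: the desired infrastructure is declared as state,
# not as a sequence of provisioning commands. Terraform reconciles the
# actual environment (AWS here, but equally Azure or OpenStack with a
# different provider) against this declaration.
resource "aws_instance" "app_server" {
  ami           = "ami-0abcdef1234567890" # placeholder image ID
  instance_type = "t3.micro"

  tags = {
    Name = "example-app" # hypothetical application name
  }
}
```

Because the file states *what* should exist rather than *how* to create it, the same workflow applies whether the target is a public cloud region or a private OpenStack deployment, which is what lets developers stay agnostic about where their code lands.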