

Inside Bank of America’s IT transformation

Brandon Butler | March 8, 2016
Bank of America is on a journey to embrace a software-defined infrastructure

“We’ve held a very hard line to say that the new cannot look like the old,” he says, even delaying the project at times to comply. The goal is to one day have a system built entirely on commodity servers that are uber-flexible: they could act as compute servers one day, as a network switch another, and be part of a pool of storage resources when needed. Containerized, microservices-based applications automatically provision the infrastructure resources they need, without any human hands in the process.

The proprietary stack is running about six months ahead of the open source one because technology innovation by private vendors has been faster than the methodical yet essential work of the open source community.

About 95% of the applications that have migrated to this new greenfield environment have thus far been written to the proprietary stack. But Reilly hopes that in the coming years, as open source innovation picks up, it will become the bank’s dominant platform. Open source software-defined networking is one of the laggards, Reilly noted.

Eight lines of business within Bank of America are already using the new platform or have plans to migrate, including wholesale banking, development and quality assurance. Grid-based workloads are planned to migrate, and pockets of the wealth management division will have use cases for it.

“We are now convinced that this software-defined model, where an application presents a manifest for the assets and facilities it needs, it acquires those assets with a call, uses them, then lets them go when they’re done so they can be used by someone else – that’s going to work for something like 80% of our technology workloads,” Reilly says. The other 20% of workloads will run on more statically provisioned machines and a mainframe that the bank still uses.
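The lifecycle Reilly describes has a natural shape in code: an application declares a manifest of the assets it needs, acquires them with a call, uses them, then releases them back to the shared pool. The sketch below is purely illustrative; the `ResourcePool` class, the manifest fields, and the method names are assumptions for the example, not Bank of America's actual API.

```python
# Hypothetical sketch of the manifest-driven provisioning model:
# acquire assets from a shared pool, use them, release them when done.
# All names (ResourcePool, manifest keys) are illustrative assumptions.
from contextlib import contextmanager

class ResourcePool:
    """A shared pool of commodity capacity (compute, storage, etc.)."""
    def __init__(self, compute=100, storage_tb=500):
        self.available = {"compute": compute, "storage_tb": storage_tb}

    def acquire(self, manifest):
        # Verify every requested asset is available, then reserve it.
        for asset, amount in manifest.items():
            if self.available.get(asset, 0) < amount:
                raise RuntimeError(f"insufficient {asset}")
        for asset, amount in manifest.items():
            self.available[asset] -= amount
        return dict(manifest)

    def release(self, lease):
        # Return the assets so another workload can use them.
        for asset, amount in lease.items():
            self.available[asset] += amount

@contextmanager
def provisioned(pool, manifest):
    # Acquire on entry, guarantee release on exit -- even on failure.
    lease = pool.acquire(manifest)
    try:
        yield lease
    finally:
        pool.release(lease)

pool = ResourcePool()
with provisioned(pool, {"compute": 8, "storage_tb": 2}) as lease:
    print("running with", lease)         # workload executes here
print("pool after release:", pool.available)
```

The context manager mirrors the "acquires those assets with a call, uses them, then lets them go" pattern: releasing in `finally` is what makes the assets reliably reusable by someone else, which is the property that lets this model cover the bulk of workloads.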

Matt Eastwood, senior vice president at IDC, says financial services firms are the next logical candidates to embrace this type of technology after the web-scale businesses. Businesses that do drug discovery, oil and gas exploration, aerospace and defense could all have workloads that favor a dense, scale-out, cluster-model compute environment.

“Any cost savings on the CAPEX and OPEX side of the equation can be re-invested back into the computing environment in an attempt to get a competitive strategic advantage in the market,” Eastwood says.

Bank of America’s greenfield system has already created enormous efficiencies in how application developers do their jobs. Reilly positions developers as pseudo micro-CIOs who serve their individual business units and are charged back based on infrastructure usage. “A developer will move to this environment initially to get the price benefit,” says Reilly. “But the developer stays because of the flexibility and agility.”

 

