JPMorgan Chase spent 16 percent of its budget on technology last year—that’s more than $10B, $3B of which was allotted to “new initiatives” where the public cloud lives.
The company has more than 40,000 technologists, and roughly 18,000 of them are developers creating intellectual property.
Jamie Dimon once said that Silicon Valley is coming to eat Wall Street’s lunch, and that he needs to invest, innovate, and frankly out-hire and out-spend Silicon Valley to compete.
The biggest, best-run banks think of themselves as information technology companies.
“JPM really is like a large tech company in some respects, basically, if you name a process the banks do, JPM is likely trying to automate that process and also grow market share.” —Brian Kleinhanzl, Keefe, Bruyette & Woods
Future-Proofing the Enterprise: Data Warehouse Virtualization

As Wall Street grows more comfortable using the public cloud, many firms are considering how to split work across the three main providers.
Most banks would prefer to be cloud-agnostic, maintaining the ability to move seamlessly between cloud environments, but doing so is no easy task.
The biggest hurdle for most firms is handling applications that require significant amounts of data, a common occurrence in finance.
In such cases, firms are forced to pick one provider, or else face steep costs maintaining data spread across multiple cloud environments.
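To make that cost concrete, here is a back-of-envelope sketch of what keeping a dataset synchronized across clouds can run. The egress rate, dataset size, and refresh cadence below are illustrative assumptions, not quoted prices for any provider:

```python
# Back-of-envelope sketch of why duplicating data across clouds gets pricey.
# All figures are illustrative assumptions, not quoted provider prices.

EGRESS_PER_GB = 0.09          # assumed $/GB to move data out of one cloud
DATASET_TB = 500              # hypothetical analytical dataset size
REFRESHES_PER_MONTH = 4       # assumed weekly cross-cloud sync

monthly_cost = DATASET_TB * 1024 * EGRESS_PER_GB * REFRESHES_PER_MONTH
print(f"${monthly_cost:,.0f}/month just in egress")  # $184,320/month
```

Even under modest assumptions, cross-cloud data movement alone runs into six figures a month, before compute or storage.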
Moreover, most firms run a hybrid on-premises and cloud environment (spoiler alert: they all do).
The bottom line: Wall Street is finally willing to go to Amazon, Google or Microsoft’s cloud, but nobody can agree on the best way to do it.
And as a leader in IT, if you make a decision and pick the wrong provider, you’re fired.
But there’s hope.
At the heart of enterprise database modernization is data warehouse virtualization.
Without it, banks aren’t able to manage large data sets across multiple cloud platforms and leverage the benefits of automated data engineering.
Using data warehouse virtualization, there is no reason to pick a winner.
Pick all three.
Done right, virtualization lets you interface directly with an Azure SQL data warehouse, Google BigQuery, Amazon Redshift, Snowflake, and your on-premises Teradata, Oracle, and DB2, and use machine-learned optimization to manage the complexity of figuring out what is working and where to save money on processing costs.
Data and queries will naturally migrate to the right platform and be served from there.
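The routing idea above can be sketched as a toy virtual query layer. The backend names, cost figures, and scoring blend below are hypothetical illustrations of the concept, not AtScale’s implementation; a real virtualization platform would learn these statistics from observed workloads rather than hard-code them:

```python
# Toy sketch of a virtual query layer that routes SQL to one of several
# warehouses. Backends, costs, and latencies are hypothetical examples.

from dataclasses import dataclass, field

@dataclass
class Backend:
    name: str
    cost_per_tb: float      # estimated $ per TB scanned (assumed figures)
    avg_latency_ms: float   # observed average query latency (assumed)
    tables: set = field(default_factory=set)  # tables held on this backend

BACKENDS = [
    Backend("redshift", 5.00, 900, {"trades", "positions"}),
    Backend("bigquery", 5.00, 700, {"trades", "risk_factors"}),
    Backend("teradata", 8.00, 400, {"positions", "risk_factors"}),
]

def route(query_tables, weight_cost=0.5):
    """Pick the backend that already holds every table the query touches,
    scoring candidates by a blend of scan cost and latency."""
    candidates = [b for b in BACKENDS if query_tables <= b.tables]
    if not candidates:
        raise LookupError("no single backend holds all required tables")
    score = lambda b: (weight_cost * b.cost_per_tb
                       + (1 - weight_cost) * b.avg_latency_ms / 100)
    return min(candidates, key=score)

print(route({"trades"}).name)  # bigquery: same scan cost as Redshift, lower latency
```

In a production system the cost and latency numbers would be continuously re-estimated per query shape, which is where the machine-learned optimization comes in.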
The Data Warehouse Virtualization Journey for Wall Street Leads to Performance, Scale, Concurrency, Security and Cost
Data warehouse virtualization alleviates the need to choose just one cloud provider and risk vendor lock-in.
Seamlessly and autonomously managing three cloud environments alongside on-premises platforms is now a realistic goal.
Firms position themselves for future profitability, viability and competitive advantage when they leverage the flexibility to move work between multiple cloud platforms.
The major advantage of eliminating risk while optimizing for cost (avoiding expensive, often incorrect analysis and pricey software projects to physically move the data) can’t be ignored.
A common, cloud-built virtual data warehouse platform that is not tied to any specific database is the answer.
About the Author

Matthew Baird is Co-Founder and Chief Technology Officer of AtScale.
He holds a double major in Statistics and Computer Science from Queen’s University.
He has built software and managed teams at companies like PeopleSoft, Siebel Systems and Oracle.
He loves the open source movement, and building scalable, innovative enterprise software.
Prior to AtScale, Matt was Vice President of Engineering at Ticketfly, which was acquired by Pandora for $450M, and CTO at Inflection, an enterprise trust and safety software platform, where his team developed Archives, a leading genealogy site acquired by Ancestry.com in 2012.