We are entering an era that will harness technologies such as Artificial Intelligence (AI), Internet of Things, Blockchain, and Virtual/Augmented Reality to unleash the power of data and drive industry transformation. Together, these new technologies will reshape the way various industries collect, analyze, and act on data across their entire value chains—ultimately bringing enterprise productivity to the next level.
One of these industries is financial services. The new generation of Financial Technology (FinTech) companies has the potential to disrupt the financial services status quo with a new set of capabilities for collecting data, building models, and making predictions ahead of the competition. With AI, FinTech companies can sift through huge volumes of data and surface game-changing insights. One of these companies, Alpha Vertex, has made significant breakthroughs with AI running atop Intel® Xeon® Scalable processors on Google Cloud Platform* and managed by Google Kubernetes Engine*.
Alpha Vertex—Pursuit of Faster and More Precise Alpha Prediction
Investment professionals seek the highest asset return at the lowest possible risk. When making investment decisions, they look for “Alpha,” a measure of an investment’s performance relative to a market index or benchmark that represents the movement of the market as a whole. Predicting alpha, though, is both an art and a science.
Alpha Vertex was founded to deliver faster, more precise investment performance predictions to institutional investors, leveraging state-of-the-art AI technologies. Alpha Vertex models take in complex data sets, often unstructured, including:
- 150 million financial documents
- 56,000 public companies
- 20 million private companies
- 100TB of company financial data
- 5 billion connections
In order to improve prediction accuracy, Alpha Vertex integrates tens of thousands of machine learning models and adds hundreds more every month. Examples include model pipelines based on Python* scikit-learn packages for predicting future returns, and RNN and LSTM models based on the TensorFlow* framework to perform natural language understanding.
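To make the scikit-learn pipelines concrete, here is a minimal sketch of what such a return-prediction pipeline could look like. The features, model choice, and data are illustrative assumptions on synthetic data, not Alpha Vertex's actual models.

```python
# Hypothetical sketch of a scikit-learn pipeline for predicting future
# returns. Feature set and estimator are illustrative assumptions.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import GradientBoostingRegressor

# Synthetic stand-in for engineered features (e.g., momentum, valuation).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                         # 500 observations, 8 features
y = 0.3 * X[:, 0] + rng.normal(scale=0.1, size=500)   # toy "future return" target

pipeline = Pipeline([
    ("scale", StandardScaler()),                      # normalize features
    ("model", GradientBoostingRegressor(random_state=0)),
])

pipeline.fit(X[:400], y[:400])                        # train on the first 400 rows
predicted_returns = pipeline.predict(X[400:])         # predict the held-out 100
print(predicted_returns.shape)
```

Packaging the scaler and estimator into one `Pipeline` object is what makes it practical to train and version tens of thousands of models: each pipeline is a single fit/predict unit that can be scheduled independently.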
Benefits of Intel® Architecture-based Cloud
As one of the largest CPU providers, Intel has partnered with Google Cloud to deliver an industry-leading cloud services platform, with CPU designs optimized to serve the increasingly diverse and complex data workloads customers run today. When Google Cloud launched its Google Cloud Platform* (GCP) instances based on Intel® Xeon® Scalable processors, they supported up to 64 virtual CPUs; today the largest instance offers 96 vCPUs. These instances combine a large number of high-performing Intel® cores with abundant memory bandwidth and smart memory management that keeps those cores highly utilized. As such, they are well suited to compute- and data-intensive applications such as high-performance computing, machine learning and AI, and other highly concurrent workloads.
By adopting Intel® Xeon® Scalable processor-based cloud instances running on GCP, Alpha Vertex was able to see two key benefits:
Performance boost and shortened model training time
By upgrading to 64-core VMs on Google Cloud Platform with Intel® Xeon® Scalable processors, Alpha Vertex reduced its model training time by 20%. According to Alpha Vertex CTO and co-founder Michael Bishop, Python software optimization has since reduced training time further, for an overall improvement of 32% compared to GCP instances based on the previous generation of Intel® Xeon® processors.
To put this in perspective, a 32% reduction in model training time frees up almost a third of the compute capacity to redirect toward experimentation, innovation, and adding new inputs to the models. This translates into enhanced prediction accuracy and the ability to train models on more data or create more models, enabling Alpha Vertex to improve the quality of its service and differentiate itself.
Efficient AI at Scale
Alpha Vertex values scalable infrastructure: the company is training more than 20,000 models concurrently and growing quickly. At that scale, every ounce of infrastructure efficiency counts toward cost reduction.
Intel® Xeon® Scalable processors incorporate design features aimed at optimizing a wide range of workloads running in the cloud. As a result, if your workload mix spans a variety of workload types, or your data pipeline includes a series of operations such as data cleansing, pre-processing, database queries, modeling, post-processing, and visualization, it is very efficient to run the entire data analytics workload on a general-purpose processor such as a CPU.
In addition, for a high-growth firm like Alpha Vertex, an elastic pool of computing resources built for highly compute-intensive workloads is essential. The combination of Google Cloud and Intel® architecture meets this need, and Kubernetes support for cloud resource orchestration, a joint effort from Google Cloud and Intel, takes the infrastructure's elasticity to the next level, making scaling simple and cost-effective.
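As an illustration of what Kubernetes-based orchestration of model training can look like, here is a minimal sketch of a Kubernetes Job manifest for one training task. The image name, labels, and resource requests are hypothetical; the source does not describe Alpha Vertex's actual manifests.

```
# Hypothetical Kubernetes Job for a single model-training task.
# Image name and resource requests are illustrative assumptions.
apiVersion: batch/v1
kind: Job
metadata:
  name: train-returns-model
spec:
  template:
    spec:
      containers:
      - name: trainer
        image: gcr.io/example-project/model-trainer:latest  # hypothetical image
        resources:
          requests:
            cpu: "16"        # steer heavy training onto large vCPU nodes
            memory: 64Gi
      restartPolicy: Never
  backoffLimit: 2            # retry a failed training run up to twice
```

Because each training run is an independent Job, Google Kubernetes Engine can pack thousands of them across an autoscaling node pool and release the nodes when the queue drains, which is what makes scaling simple and cost-effective.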
For more information, and to hear Michael Bishop discuss Alpha Vertex's journey with Google Cloud and Intel® architecture, listen to the Alpha Vertex Chip Chat podcast, and visit www.intel.com/CSP and cloud.google.com.