Polycloud is more than a buzzword—it recognizes that no single cloud provider can meet an enterprise’s needs across performance, regulatory, availability, and pricing dimensions.
In interviews, Goldman’s CIO Marco Argenti and managing directors from the engineering team have laid out a vision where cloud providers are treated like interchangeable parts in a composable infrastructure stack.
The idea is simple in theory: use the best tool for the job, regardless of which cloud it's on. The execution? Far from simple.
Polycloud isn’t just about choosing between AWS, Azure, and Google Cloud: it’s about being able to run critical workloads across all of them, placing each one wherever the performance, regulatory, availability, and pricing trade-offs point.
Goldman Sachs built a platform to abstract this complexity for developers. But most enterprises aren’t Goldman. And most analysts and ML researchers certainly aren’t platform engineers.
From an analyst’s perspective, a machine-learning workflow should be simple: point at the data, pick a model, train it, evaluate the results, and ship.
But here’s what it actually looks like in a multi-cloud environment: request GPU quota in the right region, provision compute, move data between providers, reconcile three sets of credentials and security controls, and only then start training.
Multi-cloud promises flexibility. It delivers friction, especially for people who just want to run, fine-tune, or leverage a model.
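To make the gap concrete, here is roughly what the ideal version looks like: a few lines of fine-tuning that never mention regions, quotas, or credentials. This is a minimal sketch using Hugging Face Transformers and Datasets; the model and dataset names are stand-ins, not anything from a real risk-forecasting pipeline.

```python
# The workflow the analyst signed up for. Model and dataset are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

dataset = load_dataset("imdb", split="train[:2000]")  # stand-in for portfolio data
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1),
    train_dataset=dataset.map(tokenize, batched=True),
)
trainer.train()

# What the multi-cloud version demands before trainer.train() ever runs:
# GPU quota in an approved region, a provisioned cluster, data copied across
# providers, and three separate sets of credentials and security controls.
```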
Let’s take a typical machine learning use case. An analyst at a financial institution wants to train a foundation model for forecasting risk across multiple portfolios. She’s got datasets in S3 on AWS, compliance policies that require execution on Azure in certain regions, and her team’s preferred fine-tuning pipeline relies on Hugging Face's tools—except the model she needs is only pre-optimized in Google’s Model Garden.
To train this model at scale, she’ll need credentials and SDKs for three providers, a way to move the data out of S3, GPU quota in an approved Azure region, and access to the checkpoint that only exists in Model Garden.
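Even the first item on that list, getting the portfolio data out of S3 and into storage an Azure-hosted job can read, already means juggling two SDKs and two unrelated credential systems. A minimal sketch, assuming boto3 and azure-storage-blob; the bucket, container, and environment-variable names are placeholders:

```python
# Cross-cloud data shuffle: pull from S3, push to Azure Blob Storage.
# Bucket, container, and connection-string names are illustrative only.
import os

import boto3
from azure.storage.blob import BlobServiceClient

s3 = boto3.client("s3")  # AWS credentials come from the environment or a profile
blob_service = BlobServiceClient.from_connection_string(
    os.environ["AZURE_STORAGE_CONNECTION_STRING"]  # a second, unrelated credential
)
container = blob_service.get_container_client("training-data")

# Copy each dataset object across providers, paying egress along the way.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="risk-portfolio-datasets"):
    for obj in page.get("Contents", []):
        body = s3.get_object(Bucket="risk-portfolio-datasets", Key=obj["Key"])["Body"]
        container.upload_blob(name=obj["Key"], data=body.read(), overwrite=True)
```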
Cloud APIs are diverging, not converging.
Each cloud provider—AWS, Azure, GCP—offers its own managed ML stack. Each stack comes with different primitives, different naming conventions, and different security controls. That’s by design. Cloud vendors don’t want to be interchangeable. They want you locked in.
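For a sense of how little carries over, here is the same single-node PyTorch training job expressed against each provider’s managed ML SDK. Treat it as a sketch: the roles, container images, environment names, and instance types below are placeholders, and exact parameters vary by SDK version.

```python
# Same job, three vocabularies. All names below (roles, images, clusters,
# instance types) are illustrative placeholders.

# --- AWS: SageMaker ---------------------------------------------------------
from sagemaker.pytorch import PyTorch

estimator = PyTorch(
    entry_point="train.py",
    role="arn:aws:iam::123456789012:role/SageMakerRole",
    instance_type="ml.g5.xlarge",
    instance_count=1,
    framework_version="2.1",
    py_version="py310",
)
estimator.fit({"train": "s3://risk-portfolio-datasets/train/"})

# --- GCP: Vertex AI ---------------------------------------------------------
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")
job = aiplatform.CustomTrainingJob(
    display_name="risk-model",
    script_path="train.py",
    container_uri="us-docker.pkg.dev/vertex-ai/training/pytorch-gpu.2-1:latest",
)
job.run(machine_type="n1-standard-8",
        accelerator_type="NVIDIA_TESLA_T4",
        accelerator_count=1)

# --- Azure: Azure ML (SDK v2) -----------------------------------------------
from azure.ai.ml import MLClient, command
from azure.identity import DefaultAzureCredential

ml_client = MLClient(DefaultAzureCredential(),
                     "subscription-id", "resource-group", "workspace-name")
azure_job = command(
    code="./src",
    command="python train.py",
    environment="AzureML-pytorch-gpu@latest",  # curated environment name varies
    compute="gpu-cluster",
)
ml_client.jobs.create_or_update(azure_job)

# Different primitives (Estimator vs. CustomTrainingJob vs. command), different
# identity models (IAM role vs. service account vs. AAD credential), and
# different ways to say "give me one GPU".
```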
GPU capacity? It’s bursty, region-bound, and subject to quota drama. Compliance? It’s not just legal anymore—it’s embedded into infrastructure policy. And higher-order services? They don’t talk to each other.
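The quota step alone is provider-specific work. Here is a sketch of just the AWS side, listing GPU instance quotas per region with the Service Quotas API via boto3; the quota-name filter is an assumption about how AWS labels its GPU instance quotas, so verify it against your own account.

```python
# List on-demand GPU instance quotas per region via the AWS Service Quotas API.
import boto3

def print_gpu_quotas(region: str) -> None:
    quotas = boto3.client("service-quotas", region_name=region)
    token = None
    while True:
        kwargs = {"ServiceCode": "ec2"}
        if token:
            kwargs["NextToken"] = token
        page = quotas.list_service_quotas(**kwargs)
        for q in page["Quotas"]:
            # "Running On-Demand G and VT instances" covers common GPU families;
            # the exact quota name is an assumption, so check your account.
            if "G and VT" in q["QuotaName"]:
                print(region, q["QuotaName"], q["Value"])
        token = page.get("NextToken")
        if not token:
            break

for region in ("us-east-1", "eu-west-1"):
    print_gpu_quotas(region)

# Azure and GCP each have their own quota APIs, units, and approval flows,
# so the same check has to be rewritten per provider.
```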
So, where does that leave you if you’re trying to train or fine-tune models across providers? Either you build a lot of glue code and internal tooling… or give up and pick the least lousy cloud for every project forever. Or worse yet, try to aggregate unoptimized Kubernetes clusters.
Because your data? Scattered. Your GPUs? Wherever they're available. Your ML stack? Split across three cloud APIs and ten acronyms.
This is the part where most analysts give up and call the platform team. Or worse: they don’t start at all.
Let’s take a real scenario: data in S3 on AWS, compliance policies that pin execution to specific Azure regions, and a model that’s only pre-optimized in Google’s Model Garden.
Oh—and your analysts want to launch the job from Jupyter. Not Terraform.
That’s why we built Project Robbie.
Robbie is a high-performance computing service that makes multi-cloud invisible for ML workloads. Built on top of infrastructure software from Positron Networks, Robbie lets analysts and researchers launch training, fine-tuning, or inference jobs across clouds without having to know—or care—where the job runs.
It works with the tools analysts already use (JupyterLab, VS Code, CLI), and under the hood it handles compute provisioning, job placement, data movement, and the credential and policy plumbing across providers.
Robbie is the abstraction layer for people, not just infrastructure.
Your analysts get a fast path to ML results. Your platform team gets a policy-aligned way to scale AI across clouds. Your organization stops reinventing glue code every quarter.
For IT, architecture, and security teams, Robbie acts as a control plane for AI workloads: a single place to govern where jobs run, which data they touch, and how cloud resources get consumed.
In short, Robbie lets your people move faster—without breaking the rules.
Goldman Sachs is right: the future of enterprise cloud is composable. But composability doesn’t come from cloud providers. It comes from the layers you build.
Project Robbie exists to make multi-cloud ML possible without complexity.
Let’s forget about what the clouds can’t do together, and start building what we need them to do together. Let’s go.