Stephen Hunton

QumulusAI Secures $500M Non-Recourse Financing Facility through USD.AI to Accelerate AI Infrastructure Growth

QumulusAI announces a $500 million non-recourse financing facility arranged by Permian Labs and distributed through the USD.AI Protocol to accelerate AI infrastructure growth.

Partnership with Permian Labs, the developer of the USD.AI protocol, unlocks blockchain-based credit markets for scalable GPU deployments

ATLANTA, GA / October 9, 2025 / QumulusAI, a provider of GPU-powered cloud infrastructure for artificial intelligence, today announced a $500 million non-recourse financing facility arranged by Permian Labs and distributed through the USD.AI Protocol.

The facility allows QumulusAI to finance up to 70% of approved GPU deployments with stablecoin liquidity from USD.AI’s blockchain-based credit market. This structure offers faster access to capital than traditional alternatives such as bank loans or private credit, with flexible terms that enable a non-dilutive path to scaling AI infrastructure.
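The advance-rate math described above can be sketched in a few lines. Only the 70% advance rate and the $500 million facility size come from the announcement; the function names, the per-deployment figures, and the capacity-tracking logic are illustrative assumptions, not disclosed terms of the facility.

```python
# Hypothetical sketch of the facility's advance-rate math.
# Only ADVANCE_RATE (70%) and FACILITY_SIZE ($500M) come from the
# announcement; everything else is an illustrative assumption.

FACILITY_SIZE = 500_000_000   # total facility, USD
ADVANCE_RATE = 0.70           # up to 70% of approved GPU deployment cost

def financeable_amount(deployment_cost: float, drawn_so_far: float = 0.0) -> float:
    """Stablecoin credit available for a single approved GPU deployment,
    capped by both the advance rate and the facility's remaining capacity."""
    advance = deployment_cost * ADVANCE_RATE
    remaining_capacity = FACILITY_SIZE - drawn_so_far
    return min(advance, remaining_capacity)

# e.g. a hypothetical $40M GPU cluster could draw up to $28M in credit,
# with the remaining $12M funded from the operator's own balance sheet.
print(financeable_amount(40_000_000))  # 28000000.0
```

The `min()` against remaining capacity simply reflects that draws cannot exceed the facility's total size; actual draw mechanics would be governed by the facility's terms.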

QumulusAI’s facility reflects a broader shift in how compute infrastructure is financed. Global demand for AI infrastructure is projected to surpass $6.7 trillion by 2030, yet capital remains concentrated among hyperscalers like OpenAI, Google, and Meta. USD.AI’s financing model opens new pathways for emerging operators like QumulusAI, linking real-world hardware directly to blockchain-based credit markets for accelerated, more transparent scaling.

Permian Labs developed the financing framework behind USD.AI, which treats GPUs as a financeable commodity. Permian Labs issues GPU Warehouse Receipt Tokens (GWRTs), and USD.AI serves as the on-chain DeFi protocol that enables those tokens to be used as collateral for borrowing stablecoin-based credit, unlocking capital for the next generation of AI builders.
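The collateral flow described above can be modeled as a simple data structure: a warehouse receipt token representing appraised GPU hardware, against which stablecoin credit is drawn at an advance rate. The class and field names below are our own sketch for illustration, not Permian Labs' or USD.AI's actual schema or on-chain interfaces.

```python
from dataclasses import dataclass

# Illustrative model of the GWRT collateral flow. These names are a
# sketch, not the protocol's actual data structures.

@dataclass
class GPUWarehouseReceiptToken:
    """Tokenized claim on warehoused GPU hardware (a GWRT)."""
    token_id: str
    gpu_model: str
    unit_count: int
    appraised_value_usd: float

def max_stablecoin_credit(gwrt: GPUWarehouseReceiptToken,
                          advance_rate: float = 0.70) -> float:
    """Credit a borrower could draw against a GWRT at a given advance rate."""
    return gwrt.appraised_value_usd * advance_rate

# A hypothetical receipt covering 256 H200s appraised at $10M:
receipt = GPUWarehouseReceiptToken("GWRT-001", "H200", 256, 10_000_000.0)
print(max_stablecoin_credit(receipt))  # 7000000.0
```

In the actual protocol, appraisal, custody, and liquidation terms would be defined by the securitization framework rather than a flat percentage.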

This structure creates yield-bearing opportunities for on-chain depositors, while giving operators fast, transparent access to non-dilutive financing.

For QumulusAI, the $500 million facility signals institutional confidence in its infrastructure growth strategy and provides a repeatable model for scaling deployments with blockchain-native financing rails. For Permian Labs and USD.AI, it represents the continued expansion of real-world assets into on-chain credit markets, bridging institutional capital with income-generating compute infrastructure.

"This partnership represents a paradigm shift in AI infrastructure financing," said Mike Maniscalco, CEO of QumulusAI. "By leveraging Permian Labs' tokenization framework, we can scale faster and more flexibly, meeting the surge in AI compute demand without the constraints of legacy financing."

"QumulusAI is exactly the type of innovative AI operator we built USD.AI to serve," said Conor Moore, Permian Labs Co-Founder and COO. "Their integrated approach to AI supercompute—combining HPC cloud, purpose-built data centers, and controlled power generation—fits seamlessly with our tokenized financing model, proving how blockchain can unlock institutional capital for real-world infrastructure."

 

About QumulusAI

QumulusAI is a vertically integrated AI infrastructure company focused on delivering a distributed AI cloud. By innovating across power, data centers, and GPU-based cloud services, the company delivers immediate access to high-performance computing with enhanced cost control, reliability, and flexibility. Machine learning teams, AI startups, research institutions, and growing enterprises can now scale their AI training and inference workloads quickly and cost-effectively.

For more information, visit https://www.qumulusai.com

About Permian Labs

Permian Labs is the developer of USD.AI, building the infrastructure that connects institutional capital with real-world AI compute. Permian Labs designs the legal, financial, and technical systems that transform GPUs into collateral and make them accessible through blockchain-based credit markets. By bridging traditional asset finance with DeFi innovation, Permian Labs enables AI operators to scale efficiently while creating new opportunities for investors to access yield from real-world infrastructure.

 

About USD.AI

USD.AI is the world’s first blockchain-native credit market for GPU-backed infrastructure. The protocol turns AI hardware into tokenized collateral, unlocking financing markets with deep liquidity, attractive cost of capital, and instant settlement for emerging AI operators who need capital to scale. Through its dual-token model, USDai (a liquid stablecoin) and sUSDai (its yield-bearing counterpart), USD.AI creates new liquidity pathways for operators while offering investors scalable, real-world yields. Developed by Permian Labs, USD.AI combines DeFi principles with institutional-grade securitization standards to accelerate the financing of AI infrastructure worldwide.

For more information on QumulusAI:

Press: media@qumulusai.com

Investors: investors@qumulusai.com

Follow QumulusAI on social media: https://www.linkedin.com/company/qumulusai

For more information on Permian Labs:

Email: hello@permianlabs.xyz

This press release contains certain “forward-looking statements” that are based on current expectations, forecasts and assumptions that involve risks and uncertainties, and on information available to QumulusAI as of the date hereof. QumulusAI’s actual results could differ materially from those stated or implied herein, due to risks and uncertainties associated with its business and/or the strategic partnership, which include, without limitation, integration challenges between the companies, market volatility and/or regulatory conditions. Forward-looking statements include statements regarding QumulusAI’s expectations, beliefs, intentions or strategies regarding the future, and can be identified by forward-looking words such as “anticipate,” “believe,” “could,” “continue,” “estimate,” “expect,” “intend,” “may,” “should,” “will” and “would” or words of similar import. Forward-looking statements include, without limitation, statements regarding future operating and financial results, QumulusAI’s plans, objectives, expectations and intentions, and other statements that are not historical facts. QumulusAI expressly disclaims any obligation or undertaking to disseminate any updates or revisions to any forward-looking statement contained in this press release to reflect any change in QumulusAI’s expectations with regard thereto or any change in events, conditions or circumstances on which any such statement is based in respect of its business, the strategic partnership or otherwise.

Adam Brown

HyperFRAME Research: Why QumulusAI Is Built Different in the AI Compute Race

What does it actually take to compete in AI infrastructure when everyone claims to have "high-performance compute"? That's the question Steven Dickens, CEO and Principal Analyst at HyperFRAME Research, tackles in his recent analysis of QumulusAI. After sitting down with our leadership team, Dickens digs into the real differentiators that matter in a market where simply having GPUs isn't enough—it's about how you architect them, who you serve, and how quickly you can move.

The piece cuts through the noise around AI infrastructure providers by highlighting a fundamental truth: the hyperscalers are built for massive scale, but not every workload needs that. Training a large language model has completely different requirements than running real-time inference or handling sensitive regulated data. Dickens points out that QumulusAI's modular architecture lets us serve customers who fall between the cracks—companies that need something more tailored than what AWS or Google offer, but with the reliability and performance they'd expect from tier-one infrastructure. He draws comparisons to neocloud providers like CoreWeave and Lambda Labs, noting that success in this space comes down to being developer-friendly and operationally nimble in ways that larger incumbents simply can't be.

What really stands out in Dickens' analysis is his focus on our deployment strategy. While others are building gigawatt-scale data center campuses that take years to come online, QumulusAI is taking a different approach—deploying compute in smaller, distributed pockets that can reach customers in months instead of years. This matters because the AI infrastructure market is supply-constrained, and speed wins. Dickens recognizes that our ability to move fast and deploy flexibly across multiple geographies isn't just an operational advantage—it's a fundamental differentiator that lets us serve customers the hyperscalers can't reach quickly enough. Check out the video interview with CEO Michael Maniscalco below, and read the full HyperFRAME Research analysis for his complete take on our growth strategy.

Adam Brown

SiliconANGLE: Neocloud Infrastructure and QumulusAI’s Compute Strategy

SiliconANGLE recently published an in-depth profile of QumulusAI and our approach to GPU-powered cloud infrastructure for artificial intelligence workloads. Written by Zeus Kerravala, principal analyst at ZK Research, the piece explores what sets neoclouds apart in today's AI infrastructure landscape and why enterprises are increasingly looking beyond traditional hyperscalers for their GPU compute needs.

The article digs into our full-stack ownership model—from power and data centers to GPU-accelerated cloud services—and how we're addressing the massive gap between AI compute supply and demand. CEO Michael Maniscalco shares our strategy of deploying smaller, modular compute clusters that can reach the market faster and more cost-effectively than gigawatt-scale data center campuses. While hyperscalers serve the OpenAIs of the world, there are millions of businesses that need flexible, affordable access to GPU infrastructure for model training, inference, and AI development. That's the opportunity we're going after.

Kerravala also highlights what makes our approach different: flexibility in architecture, transparent pricing, and a developer-friendly experience that abstracts away data center complexity. Whether you're a consulting firm scaling AI solutions for clients or a development team building the next generation of AI-powered applications, the goal is the same—getting reliable GPU compute into your hands quickly without the usual barriers. Read the complete article on SiliconANGLE for Maniscalco's full thoughts on the neocloud market, our partnership strategy, and where AI infrastructure is headed.

Adam Brown

QumulusAI Appoints Former Applied Digital CTO Michael Maniscalco as CEO to Lead Growth in AI Infrastructure Market

CTO and CMO appointments round out team with the experience to bring enterprise-grade AI supercomputing infrastructure to market

ATLANTA, GA / September 25, 2025 / QumulusAI, a provider of GPU-powered cloud infrastructure for artificial intelligence, today announced the appointment of Michael Maniscalco as Chief Executive Officer to propel the company through a rapid growth phase.

Maniscalco, formerly CTO of Applied Digital, brings deep expertise in scaling high-performance computing platforms; there, his team deployed 6,000 state-of-the-art GPUs in 12 months. At QumulusAI, he will drive expansion of the company’s differentiated approach of owning the full stack – from energy and data centers to GPU-accelerated cloud services – delivering cost-efficient, enterprise-grade AI infrastructure with the speed to move fast and the scale to grow with customers.

The company also announced two additional executive appointments: Ryan DiRocco as Chief Technology Officer and Stephen Hunton as Chief Marketing Officer. DiRocco was previously CTO at Performive LLC, a leading VMware-focused managed multicloud provider. At QumulusAI, he will oversee technical strategy, ensuring products are secure, high-performing, and aligned with customer needs, while guiding clients’ smooth, cost-effective adoption of AI.

Hunton, who most recently served as Head of Global Social and Content Experience at IBM, adds global marketing expertise from Google, YouTube and Chevrolet. In this role, Hunton will focus on establishing the brand as the category leader in AI infrastructure – driving market visibility, accelerating enterprise adoption, and building the momentum that will fuel long-term value for customers, partners, and investors.

The strengthened leadership team will focus on expanding market presence, accelerating product innovation, and building strategic partnerships as QumulusAI advances its mission to make enterprise-grade AI supercomputing more accessible.

“These appointments mark a pivotal inflection point for QumulusAI,” said Steve Gertz, Chairman of the Board. “AI adoption is accelerating across every industry, and the ability to deliver scalable, cost-efficient infrastructure has become a critical enabler. Michael, Ryan, and Stephen bring proven expertise in building technology platforms, scaling infrastructure, and creating global brands. This team has the vision and execution experience needed to establish QumulusAI as a premier AI infrastructure provider.”

“The demand for scalable AI infrastructure is one of the fastest-growing markets in tech,” said Steven Dickens, CEO & Principal Analyst at HyperFRAME Research. “QumulusAI’s model of controlling the full stack positions it to deliver performance and economics that many enterprises simply can’t get from hyperscalers. Adding Michael Maniscalco as CEO is a strong signal the company is ready to scale.”

View the original press release on ACCESS Newswire

Adam Brown

Yotta 2025: Our Key Takeaways

Yotta 2025 brought together leaders from energy, data centers, hardware, and software. The central message was unmistakable: data centers are now critical infrastructure, and AI is accelerating demand at a pace unlike anything before. Analysts estimate $7 trillion will be invested in the next five years. Many compared this moment to the arrival of the steam engine — a turning point for how human work is organized and scaled.

1. Power Stole the Show

The defining bottleneck is no longer GPUs but electricity. Yotta 2025 often felt like a power conference. Attendees agreed that the only true competitive advantage today is speed to market. Not everyone will win, but those who act fast will be rewarded.

On-site cogeneration is emerging as the reality, from natural gas turbines to nuclear, geothermal, and microgrids. Cully Cavness of Crusoe Energy highlighted Oracle’s Abilene campus, where a 350-megawatt natural gas plant is bridging grid delays while replacing traditional diesel backup. The grid is not ready for the AI boom, and reliable behind-the-meter solutions will dominate the near term.

QumulusAI Take: We see power as inseparable from compute. That’s why our roadmap integrates natural gas generation with sub-50 MW facilities, letting us move quickly while staying modular. Rather than betting on single gigawatt campuses, we’re aligning with distributed builds that can be deployed near stranded or excess energy, tightening the loop between power and compute availability.

2. Speed Defines Competitiveness

The urgency of speed was felt everywhere. Modular power capacity, fast turbine sourcing, and flexible microgrids allow players to bypass interconnection queues and deliver capacity years faster. In this market, bold moves can dethrone incumbents overnight. Speed is not just an advantage, it is survival.

QumulusAI Take: Speed doesn’t just mean turbine procurement — it’s about bringing usable compute online. We leverage colocation partners with idle or excess capacity, which enables customers to scale into environments already built and powered. That accelerates delivery while we stand up additional purpose-built campuses.

3. Cooling Innovation Is Inevitable

Power density and heat go hand in hand. Cooling is becoming a limiting factor, sparking debate between liquid-to-chip, hybrid air and liquid, and immersion cooling. Advances in cold plate manufacturing, new conducting materials, and non-PFAS fluids hint at an innovation wave reminiscent of cooling ICBMs. We are still in the first inning, with huge opportunities for next-generation clean tech and climate tech.

4. Talent Is a Hidden Bottleneck

Amid all the focus on hardware and megawatts, one sobering reality came up repeatedly: we need people to build. Skilled labor is dwindling, and without a workforce that can deploy turbines, lay cables, and assemble racks, even the most advanced plans stall.

5. Navigating Uncertainty

To work in this space is to embrace risk. Technology is evolving almost as fast as it can be deployed. Within 12 months, we’ll see rack densities for AI factories surge from 130 kW to 600 kW with Nvidia’s Rubin Ultra. Cooling, power, rack density, and chips all shift underfoot. Regional regulations, viral usage spikes, and divergent workloads make forecasting complex. Yet these hurdles create opportunities for faster iteration and collaboration. The industry is laying track as the train speeds forward.

6. Optimizing Inference

With inference workloads surging, optimization is becoming as critical as raw scale. OpenAI described multi-layered strategies:

  • Inference-side efficiencies like caching and smart routing to cut latency.

  • Model-side efficiencies like downsizing and unifying architectures.

  • Adaptive routing to direct queries to the right type of model.

Strict latency targets are now treated as non-negotiable. Every optimization is ultimately about user experience.
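The layered strategies above can be sketched in a few lines: cache responses so identical queries skip inference entirely, and adaptively route each query to a right-sized model. The model names and the length-based routing heuristic below are illustrative assumptions, not a description of OpenAI's actual systems.

```python
from functools import lru_cache

# Illustrative sketch of two of the optimization layers listed above:
# response caching plus adaptive routing to a right-sized model.
# Model tiers and the word-count heuristic are assumptions for illustration.

def route(query: str) -> str:
    """Adaptive routing: send short, simple queries to a smaller model
    and reserve the large model for longer, more complex ones."""
    return "small" if len(query.split()) < 20 else "large"

@lru_cache(maxsize=1024)
def answer(query: str) -> str:
    """Inference-side caching: repeated identical queries are served
    from the cache instead of re-running the model."""
    model = route(query)
    return f"[{model}] response to: {query}"

print(answer("What is 2+2?"))  # routed to the small model
print(answer("What is 2+2?"))  # second call is a cache hit, no inference
```

In production, routing would key on learned difficulty signals rather than query length, and caching would account for personalization and freshness; the structure, however, is the same.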

7. Push for Hardware Diversity

NVIDIA remains the market leader, leveraging its software ecosystem and performance to justify premium pricing. Competitors like AMD, Intel, and newer accelerator companies face difficulty matching NVIDIA’s scale, though some succeed in niche use cases.

At the same time, there is broad recognition that no single chip architecture can address all workloads. Blended environments that combine CPUs, GPUs, and accelerators are increasingly seen as the practical approach to balancing training, inference, and real-time applications.

8. Edge and Regional Growth

While gigawatt-scale data centers attract attention, smaller regional deployments are also gaining traction. Proximity to renewable energy sources and lower land costs are driving some operators away from traditional hubs. In certain regions, former industrial towns are being redeveloped as data center sites, creating new employment pipelines through retraining programs.

QumulusAI Take: We believe the future isn’t just in hyperscale hubs but in a mesh of regional deployments. Sub-50 MW campuses can align with regional grids, sit closer to enterprises, and adapt to new workloads faster than monolithic builds. This approach also unlocks optionality: combining our own infrastructure with colocation partnerships ensures flexibility while maintaining enterprise-grade reliability.

Yotta 2025 underscored how far the industry has come and how much uncertainty remains. Power shortages, cooling constraints, labor gaps, and shifting workloads are immediate challenges. Yet with trillions of dollars flowing into the sector, the momentum is clear. The companies that act quickly, adapt to uncertainty, and integrate land, power, and compute into unified strategies are most likely to shape the next decade of digital infrastructure.

Adam Brown

Modular Designs Are the Starting Point for the Future of AI Infrastructure


“Data centers are evolving to become AI-optimized, modular, purpose-built ecosystems.” — Pipeline Magazine, June 2025

The recent piece from Pipeline makes a compelling case for modular data center design in the AI era. They highlight the rapid shift toward prefabricated builds, new cabinet geometries, high-density liquid cooling, and pre-integrated power systems—and how all of it is converging to meet the demands of AI.

We agree. That’s why QumulusAI’s latest facilities in Oklahoma and Texas are being built around the very modular design principles Pipeline describes.

But we also believe modularity alone won’t get us where we need to go.

What AI workloads require isn’t just faster construction or tighter thermal envelopes—it’s orchestration. The real barrier to AI isn’t just the time it takes to build. It’s aligning every layer of the stack: energy, power distribution, compute, cooling, and deployment timelines.

That’s where the QumulusAI approach builds on what Pipeline calls out.

We deploy modular designs—but we tie them directly to:

  • Behind-the-meter natural gas with fixed 10-year pricing to eliminate energy volatility

  • Real-time GPU inventory access for priority deployment of H200s and B200s

  • Cluster designs optimized around pulse-load behavior

  • Factory-tested cooling subsystems that drop in without delay

  • Immersion cooling built into the spec from day one, not retrofitted later

Modular construction builds the site. Integrated infrastructure gets it to revenue.

And that’s the part most headlines miss.

As the Pipeline article concludes, “deep collaboration across the supply chain” is the only way forward. At QumulusAI, we’ve taken that a step further: we’ve compressed the supply chain into a single delivery model—from molecules to models, from megawatts to machines.

Adam Brown

Not Hyperscale. Hyperspeed.


There’s something awe-inspiring about a 500 MW data center. Until you remember how long it takes to build. The tech that goes in often changes faster than the permits clear. And by the time power comes online? The workloads it was designed for may be obsolete.

That’s the hyperscale dilemma: chasing AI growth with industrial-age momentum.

QumulusAI is built to move differently.

Forget Massive. Think Modular.

While the industry celebrates ever-larger campuses, we’re focused on sub-50 MW facilities deployed where they’re actually needed. These aren’t proof-of-concepts or pop-up sheds—they’re fully redundant, GPU-optimized data centers, designed from day one for AI performance and next-gen cooling.

By staying under the 50 MW threshold, we avoid years-long approval cycles. We co-locate with gas and fiber. And we activate faster than most teams can even negotiate a hyperscale contract.

The Cost of Overbuilding

What’s often left out of hyperscale headlines is the cost—not just in dollars, but in friction:

  • Communities face rising opposition: noise, water consumption, and grid strain have turned public sentiment.

  • Companies face lock-in: rigid contracts for compute that may no longer serve their evolving models.

  • And regulators are playing catch-up with energy realities that hyperscalers helped create.

Meanwhile, investors wait. Clients stall. Innovation slows.

AI Moves Fast. So Should Infrastructure.

We’re not anti-scale. We’re anti-lag.

QumulusAI is proving that scale doesn’t have to mean sprawl. By deploying purpose-built facilities faster, closer to where the demand lives, we give our clients access to compute without the drag. No twelve-month waitlist. No fifteen-year amortization gamble.

Just energy-efficient, AI-tuned, revenue-generating infrastructure—in months, not years.

From Molecules to Models

Our approach is vertically integrated: power gen, data center, compute. That means fewer intermediaries, more predictability, and complete control across the stack. It also means we can pass savings to clients and reinvest faster in the tech that matters.

This isn’t just about facilities. It’s about philosophy. QumulusAI believes infrastructure should evolve at the pace of innovation—not slow it down.

Adam Brown

Public Backlash Against Data Centers Is Emerging. Here’s Our Plan.


Public pushback against data centers is rising—and not without reason. When massive mega- and giga-scale facilities threaten to overwhelm local grids, or quietly shift infrastructure costs onto ratepayers, communities are right to demand better.

  • In New Jersey, electric rates jumped 20%, and lawmakers say hyperscale data centers are overloading infrastructure without covering the costs. (NJ101.5)

  • In Pennsylvania, grid operators say surging AI and data center demand is tipping the balance, leaving power supplies potentially short under extreme summer conditions. (WESA)

  • In Illinois, new legislation would require data centers to report energy and water use, aiming to uncover whether residents are unknowingly footing the bill for AI growth. (Capitol News Illinois)

QumulusAI: Built for the Long Term

At QumulusAI, we’re building for the long term: a more strategic, more nimble, and more measured approach to AI infrastructure. Our plans align with local capacity, not against it, and sustain real growth.

  • Right-sized for the region, not oversized for the headline: We build nimble, sub-50 MW facilities designed to match local capacity—not overwhelm it.

  • Built with diverse, sustainable power—including behind-the-meter natural gas: Our model reduces grid stress, improves resiliency, and aligns with long-term environmental planning.

  • Live in months, not years: Our modular data centers deploy fast—without dragging down utilities or forcing costly upgrades on ratepayers.

  • In step with the communities we serve: We work directly with policymakers, utilities, and local leaders to align infrastructure growth with public interest—not just private demand.

  • Sustainable by design: Our energy-efficient clusters are optimized for AI workloads from day one—minimizing waste, maximizing performance, and staying accountable to the regions that host us.

The data center industry is at a crossroads. We can keep bulldozing through communities with oversized projects that privatize profits and socialize costs—or we can prove that AI infrastructure can actually strengthen the places that host it. The choice we make now will determine whether communities welcome the next wave of technology or fight it at every turn.
