First sites activate Q3 2026 · capacity reservations open now

Your GPUs.
Inside the building.
Closer to where AI runs.

Centralized clouds put your inference 30 ms and a thousand miles away. ARO opens single-tenant GPU sites inside hotels, apartments, hospitals, and offices across the U.S. — built for AI inference, IoT, autonomous-systems compute, real-time CV, and every GPU-intensive workload that runs better next to its data. Enterprise hardware. Fiber backhaul. Multi-year reservations. Real human support.

NVIDIA Blackwell-class GPUs · Dell-validated reference designs · Your GPUs only, never shared · Reserve in multi-year terms
64–256 GPUs per launch site · Expandable to 320+
96 GB VRAM per Blackwell GPU · Single-card 70B inference
100 GbE site backhaul · Redundant fiber paths
10-yr site exclusivity · Long-term capacity guarantees

The hyperscalers built for training. The world is shipping inference.
Inference belongs near the data.

You don't need a 500-megawatt campus in Loudoun County to run a 70B model on a video stream from a building in Tampa. You need 96 GB of VRAM in the basement, single-tenant, with 100 GbE out the back. That's what ARO ships.

What ARO is in 30 seconds
A national footprint of single-tenant GPU sites inside the buildings your workloads already touch — sized for inference, not training, and reserved by you for years at a time.
Hyperscaler region

Centralized GPU cloud

Your inference call leaves the building, crosses two states, lands in a multi-tenant pool, and waits its turn behind whoever booked first.

  • 20–60 ms round trip before a token is generated
  • Shared GPUs, shared neighbors, surprise queue depth
  • Egress fees on the data you didn't want to move
  • Quarterly capacity scrums for a customer your size
ARO Micro Edge Hub

GPUs in the building, reserved for you

Your inference call hits a hub in the same metro, on hardware you reserved, with neighbors you don't have.

  • Sub-10 ms tenant-to-hub target latency
  • Single-tenant nodes — predictable performance, no waits
  • Local ingest, local compute, less data to backhaul
  • Multi-year reservations, real human on speed-dial
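The latency gap above can be put into rough numbers. A minimal time-to-first-token sketch; the RTT, queue, and prefill figures are illustrative assumptions, not measurements:

```python
# Back-of-envelope time-to-first-token (TTFT) comparison.
# All numbers are illustrative assumptions, not measured figures.

def ttft_ms(network_rtt_ms: float, prefill_ms: float, queue_ms: float = 0.0) -> float:
    """TTFT = request round trip + time waiting in a shared queue + prompt prefill."""
    return network_rtt_ms + queue_ms + prefill_ms

PREFILL_MS = 120.0  # assumed prefill time for a mid-size prompt on a 70B model

regional = ttft_ms(network_rtt_ms=40.0, prefill_ms=PREFILL_MS, queue_ms=25.0)
metro    = ttft_ms(network_rtt_ms=8.0,  prefill_ms=PREFILL_MS)

print(f"regional cloud: ~{regional:.0f} ms to first token")  # ~185 ms
print(f"metro hub:      ~{metro:.0f} ms to first token")     # ~128 ms
```

The point of the sketch: prefill cost is fixed by the model, so network round trip and queue depth are the levers a deployment actually controls.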
Size your deployment

Pick a workload, pick a footprint, see what you'd get.

Indicative throughput on Blackwell-class hardware. Real numbers depend on model, quantization, sequence length, and tenancy. We size every reservation to your actual workload.

Configure

Tell us what you're running.

Numbers are mid-range estimates on RTX PRO 6000 Blackwell with 96 GB GDDR7. Tuned configurations on H100/H200 nodes available for training-class workloads. Final numbers always come from a reservation conversation, not a calculator.
The calculator reports: estimated throughput · GPUs reserved · aggregate VRAM · power draw (typical) · fiber backhaul.
Reservations are sized in 8-, 16-, 32-, 64-, 128-, or 256-GPU increments with multi-year terms. Need a custom size? Talk to capacity.
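The sizing rules above (fixed block increments, 96 GB per GPU) can be sketched as a toy calculator. The per-GPU power draw and site overhead below are placeholder assumptions, not ARO specifications:

```python
# Minimal reservation-sizing sketch mirroring the block increments described above.
# 96 GB VRAM per GPU is from the page; the power figures are placeholder estimates.

VALID_BLOCKS = (8, 16, 32, 64, 128, 256)
VRAM_PER_GPU_GB = 96
TYPICAL_WATTS_PER_GPU = 600     # assumed, not an official figure
SITE_OVERHEAD_FACTOR = 1.35     # assumed cooling / host overhead

def size_reservation(gpus_needed: int) -> dict:
    """Round up to the nearest valid block and report aggregate figures."""
    block = next((b for b in VALID_BLOCKS if b >= gpus_needed), None)
    if block is None:
        raise ValueError("over 256 GPUs: talk to capacity for a custom size")
    return {
        "gpus": block,
        "vram_gb": block * VRAM_PER_GPU_GB,
        "typical_kw": round(block * TYPICAL_WATTS_PER_GPU * SITE_OVERHEAD_FACTOR / 1000, 1),
    }

print(size_reservation(40))  # a 40-GPU need rounds up to the 64-GPU block
```

As the page notes, a calculator like this is for orientation only; final numbers come from a reservation conversation.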
Why ARO

Built for inference at the edge.

Three things separate ARO from the centralized GPU clouds and the hyperscaler training clusters.

Distributed by design

Hubs sit inside hotels, apartment buildings, hospitals, and offices, co-located with the workloads they support — AI inference, IoT pipelines, autonomous-systems compute, real-time CV. No backhaul cost to a remote region.

Single-tenant by default

Dedicated nodes, not a shared multi-tenant pool. Predictable performance, predictable costs, an isolated security boundary, no surprise wait time.

Capacity, not waiting in line

Your capacity is reserved up front. No bursting against neighbors. Reserve in 8-, 16-, 32-, 64-, 128-, or 256-GPU increments with multi-year terms.

How it fits your operations

How an ARO hub plugs into your operations.

Local data ingest, on-premise compute, low-latency outputs, without trucking your data to a remote hyperscaler.

DATA SOURCES
  • Cameras & sensors: lobby, common areas, IoT
  • Guest / resident devices: phones, tablets, in-room
  • Property systems: PMS, POS, CRM, BMS
  • IoT & environmental: HVAC, energy, occupancy
  • Operational data: bookings, reviews, logs
ON-PROPERTY HUB: ARO Micro Edge Data Hub (Dell PowerEdge / NVIDIA)
  • Blackwell-class GPUs, 96 GB GDDR7 memory
  • NVLink high-speed fabric, local NVMe storage
  • Liquid + air cooling
  • Containerized workloads, secure tenant isolation
FIBER BACKHAUL: 100 GbE, redundant
GPU-AS-A-SERVICE
  • On-demand compute: reserve in 8–256 GPU blocks
  • APIs & endpoints: REST / gRPC / SSH
  • Tenant isolation: single-tenant nodes
  • Usage monitoring: quotas, metrics, billing
  • Secure access: AuthN / AuthZ / TLS
OUTCOMES
  • Real-time decisions: sub-10 ms tenant latency
  • Predictive analytics: maintenance, forecasting
  • Custom AI applications: RAG, agents, vision
  • Hour-zero alerting: no round trip to a region
  • Lower bandwidth cost: ingest stays on-property
Private & local: data stays at the property.
Data sovereignty: tenant controls what leaves.
Compliance roadmap: SOC 2 in flight, HIPAA sequenced.
24/7 availability: continuous monitoring & alerting.
Hardware Stack

Enterprise components, validated reference designs.

A condensed view of what ships at every site. Specs depend on configuration; the full data sheet lives on the hardware page.

COMPUTE
NVIDIA Blackwell-class GPUs: RTX PRO 6000 Blackwell (96 GB GDDR7) and H100 / H200 nodes for training-class workloads.
SERVERS
Dell PowerEdge AI reference architectures: Dell ProSupport with a 5-year hardware warranty across the fleet.
COOLING
Liquid rear-door & immersion options: Motivair M16 rear-door heat exchangers; liquid immersion for high-density racks.
NETWORK
100 GbE backhaul, NVLink internal: redundant fiber paths; sub-10 ms tenant-to-hub latency targets.
Blue-lit GPU server rack, representative of the Dell PowerEdge AI reference architecture ARO deploys
Inside a hub

What gets installed at every site.

Three building blocks: enterprise-grade compute, redundant fiber, and liquid-assisted cooling. Hardware shown is representative of the Dell-validated reference architecture we deploy.

Use Cases

Built for AI, IoT, autonomous systems, and the workloads that need to live at the edge.

AI inference is the lead workload today. The same hardware runs IoT pipelines, autonomous-systems compute, real-time computer vision, and any GPU-intensive workload that benefits from sitting close to where data is generated.

Server cabling close-up
LLM Inference

Production inference of 7B–70B-parameter models

A single Blackwell-class card runs 70B at FP4 with KV-cache headroom. Ideal for OEM, RAG, and agentic deployments where latency matters.
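The single-card claim follows from simple VRAM arithmetic. A quick check, assuming roughly 0.5 bytes per parameter at FP4 and a flat runtime-overhead allowance (both assumptions for illustration):

```python
# Rough VRAM budget check for a 70B model on a 96 GB card.
# Assumptions: FP4 weights at ~0.5 bytes/param, ~6 GB runtime overhead.
params = 70e9
bytes_per_param = 0.5                            # FP4
weights_gb = params * bytes_per_param / 1e9      # = 35 GB of weights
vram_gb = 96
kv_headroom_gb = vram_gb - weights_gb - 6        # = 55 GB left for KV cache

print(f"weights ≈ {weights_gb:.0f} GB, KV-cache headroom ≈ {kv_headroom_gb:.0f} GB")
```

Roughly 35 GB of weights on a 96 GB card leaves most of the memory for KV cache, which is what makes long-context, high-concurrency serving viable on a single GPU.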

Modern hospital ward interior
Healthcare AI

Medical imaging and clinical inference

Radiology, pathology, and clinical decision support with data residency, audit logging, and isolated tenant environments by design.

Modern hotel building exterior
IoT & Real-time CV

Sensor pipelines and property-resident vision

Hotel, retail, building-systems, and smart-city sensor data processed on-property. Lower bandwidth costs, lower latency, sensor data stays local.

Modern office hallway
Autonomous & robotics

Edge inference for autonomous systems

Vehicle and robotics fleets need GPU inference within milliseconds of the sensor. Distributed hubs put compute next to the operating environment, with single-tenant guarantees the safety case requires.

Server cabling close-up
Industrial & IoT

Manufacturing edge and connected operations

Predictive maintenance, defect detection, and process optimization on the factory floor. GPU-accelerated inference at the site, with data sovereignty over sensor and process telemetry.

Modern office hallway
Regulated workloads

Data-residency-sensitive enterprise inference

Single-tenant deployments kept inside specific regions, for financial, legal, and healthcare workloads where compliance posture matters more than burst capacity.

Operations & Security

Built and run like enterprise infrastructure.

We own the hardware. We monitor it. We support it. Our hubs are designed for the operating standards a serious AI customer expects.

See our operations posture →
  • 24/7 monitoring & NOC oversight: continuous health checks, telemetry, alerting on every node.
  • Dell-backed maintenance: 5-year warranty, ProSupport, on-site break-fix.
  • Insured equipment: property & cyber liability coverage carried by ARO.
  • Compliance roadmap: SOC 2 Type I in flight, HIPAA + ISO 27001 sequenced.
Hardware Ecosystem

Selected for what holds up in production.

DELL
Servers & ProSupport
NVIDIA
GPU compute
XEROX
Channel & integration
MOTIVAIR
Liquid cooling
PROPERTY PARTNER PROGRAM

Earn revenue from infrastructure your building doesn't have to operate.

Hotels, apartment and condo buildings, healthcare facilities, and commercial buildings host an ARO Micro Edge Data Hub with zero capital from you. ARO finances, owns, and operates the equipment. The property contributes space, electrical access, and water. You earn a share of the revenue for ten years.

The property provides
  • Approx. 200 sq ft of conditioned indoor space
  • Electrical service (sub-meter at ARO's expense)
  • Water access for closed-loop liquid cooling
  • 10-year exclusive hosting agreement
ARO covers everything else
  • Equipment financing, ownership, and operation
  • Insurance, security, and 24/7 NOC monitoring
  • Tenant acquisition and contract management
  • Removal at end of term
Good fit for: hotels & resorts · multifamily & condo · senior living · healthcare campuses · Class A commercial
$0 upfront from the property
10-yr contract term
25–30% revenue share
~200 sq ft footprint required
From first call to first revenue check
01 Site survey: 30 min, no commitment, no NDA
02 Sized economics: within 5 business days, in writing
03 Letter of Intent: if there's a fit, signed within 10 days
04 Install & revenue: hub live in 90 days, monthly checks begin
Will tenants notice? No. The hub is silent inside, vibration-free, and lives in back-of-house space your residents and guests never see.
Who pays for power? ARO does. We sub-meter at our expense and pay you for the electricity the hub uses.
What if I sell the property? The hosting agreement transfers with the building. We've structured this for portfolio operators.
See the property model →
Schedule a site survey →
Modern apartment building exterior at dusk
A 200-room hotel typically projects:
Year 1: $15–20K+ revenue share
By Year 5: $45–90K+ as utilization fills
10-year total: $400K–1M+ depending on tier
Indicative ranges. Sized to your specific site after a 30-min walk-through.
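For intuition, the indicative ranges above are consistent with simple utilization arithmetic. Every input in this sketch (hub size, price, utilization) is a hypothetical chosen to land inside those ranges, not an ARO figure:

```python
# Illustrative property revenue-share arithmetic. All inputs are assumptions
# for the sketch; actual figures come from a site-specific model.

SHARE_RATE = 0.275  # midpoint of the 25-30% revenue share quoted above

def property_share(gpu_count, price_per_gpu_hr, utilization, share=SHARE_RATE):
    """Annual property payout = GPU-hours sold x price x revenue share."""
    gross = gpu_count * 8760 * utilization * price_per_gpu_hr  # 8760 hr/year
    return gross * share

# Hypothetical 64-GPU hub at $1.20/GPU-hr, filling from 10% to 40% utilization:
year1 = property_share(64, 1.20, utilization=0.10)
year5 = property_share(64, 1.20, utilization=0.40)
print(f"year 1: ~${year1:,.0f}  year 5: ~${year5:,.0f}")  # ~$18,501 and ~$74,004
```

With these assumed inputs the math lands near the Year 1 and Year 5 figures quoted, which shows utilization fill is the dominant variable in the projection.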
Leadership

Operators, not theorists.

Doug Brough
Founder & CEO

Founder of Additional Revenue Opportunities (ARO) LLC. Three decades operating businesses at the intersection of communications, real-estate-resident infrastructure, and ancillary revenue. Leads ARO's site origination, capital strategy, and tenant relationships.

David Tyre
Sales & Strategic Partnerships

Senior sales and partnerships executive with 20+ years across hospitality, healthcare, retail, and multi-site enterprise. Marine Corps, Army, and State Department alumnus. Most recently led Samsung Electronics' enterprise TV business development; previously helped scale Ruckus Wireless to 4,000+ hotel deployments and influenced more than $300M in partner-driven pipeline. Drives ARO's GPU-as-a-Service tenant pipeline.

Reserve Capacity

Talk to our capacity team.

Sizing GPU capacity for the next 12–36 months? We'll walk you through what's coming online and where. The full reservation form lives on the contact page.

Open the reservation form →