The hardware that makes a hub credible.
Every Micro Edge Data Hub ships with a Dell-validated reference architecture: NVIDIA Blackwell-class GPUs in PowerEdge servers, NVLink fabric inside the rack, 100 GbE fiber out, and Motivair liquid cooling. The detail below is what we hand a tenant's technical lead.
One reference design, four building blocks.
Compute, network, cooling, and power. Each is engineered to enterprise-grade thresholds, and each can be tuned per site once we know what's hosting the hub.
01 / Compute
Blackwell-class GPUs
RTX PRO 6000 Blackwell at 96 GB GDDR7. Single-card 70B inference at FP4. H100 / H200 nodes for training-class workloads at anchor sites.
02 / Network
100 GbE backhaul
Redundant fiber paths from the hub. NVLink high-speed fabric inside the rack. Sub-10 ms tenant-to-hub latency targets.
03 / Cooling
Liquid + immersion
Motivair M16 rear-door heat exchangers as standard. Liquid immersion option for high-density racks. 55–65°F facility loop.
04 / Power
415V 3-phase
100–500 kW per site. UPS-backed. Behind-the-meter and microgrid options at qualified sites. AI-rack-ready densities to 140 kW.
Engineered for inference at the edge.
The default GPU at every ARO site is the NVIDIA RTX PRO 6000 Blackwell, with 96 GB of GDDR7 memory and native FP4 precision. That combination is the difference between needing two H100s to run a 70B-parameter model and running it on a single card with KV-cache headroom to spare.
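The arithmetic behind that claim, as a rough sketch in Python (weights only; a real deployment also budgets KV cache, activations, and runtime overhead):

# Back-of-the-envelope weight memory for a 70B-parameter model.
params = 70e9
fp16_gb = params * 2.0 / 1e9   # ~140 GB -> more than a single 80 GB H100 holds
fp4_gb  = params * 0.5 / 1e9   # ~35 GB  -> fits a 96 GB card with room for KV cache
print(f"FP16 weights ~{fp16_gb:.0f} GB, FP4 weights ~{fp4_gb:.0f} GB")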
For tenants whose workload profile leans toward training or distributed multi-node clusters, anchor sites can be specified with NVIDIA H100 or H200 nodes. The H200 brings 141 GB of HBM3e per GPU and roughly 40 percent more memory bandwidth than an H100 (4.8 TB/s versus 3.35 TB/s), useful for very-large-context inference or fine-tuning.
Inside each rack, GPUs are wired together over NVLink high-speed fabric for low-latency tensor sharding. Outside the rack, every node has a 100 GbE backhaul path to the property's fiber demarc.
What you can run
- Production LLM inference: 7B to 70B parameter models on a single Blackwell card, with larger models parallelized across the rack (a minimal tenant-side call is sketched after this list).
- Multi-modal & vision: image, video, and audio inference workloads with bandwidth headroom.
- Fine-tuning & LoRA: tenant-specific model adaptation on isolated nodes.
- Distributed training at anchor sites with H100/H200 NVLink-fabric configurations.
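For a concrete sense of how a tenant consumes that capacity, the sketch below assumes the hub exposes an OpenAI-compatible inference endpoint; the URL, API key, and model ID are illustrative placeholders, not ARO's published interface.

# Minimal tenant-side inference call against a hub-hosted endpoint.
# Endpoint URL, API key, and model ID are hypothetical placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://hub.example-property.net/v1",  # placeholder hub endpoint
    api_key="TENANT_API_KEY",
)

resp = client.chat.completions.create(
    model="llama-70b-fp4",  # placeholder ID for a single-card FP4 deployment
    messages=[{"role": "user", "content": "Summarize today's work orders."}],
)
print(resp.choices[0].message.content)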


100 GbE out. NVLink in.
Edge inference only matters if the network gets out of the way. Every ARO hub lands on a 100 GbE site backhaul, with redundant fiber paths back to the property's network demarc and onward to the public internet or a tenant's private connection.
Internal to the rack, GPUs are interconnected over NVLink (and NVSwitch where appropriate) so multi-GPU workloads scale without saturating the host bus. We design for sub-10 ms tenant-to-hub round-trip latency targets at anchor sites.
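One way a tenant's team might spot-check that latency target from their own network, as a minimal sketch (the hostname and port are placeholders):

# Measure TCP connect latency from the tenant network to a hub-side service.
# A connect completes roughly one network round trip; host and port are placeholders.
import socket, statistics, time

HOST, PORT, SAMPLES = "hub.example-property.net", 443, 20
rtts_ms = []
for _ in range(SAMPLES):
    start = time.perf_counter()
    with socket.create_connection((HOST, PORT), timeout=2):
        pass
    rtts_ms.append((time.perf_counter() - start) * 1000)

print(f"median connect latency: {statistics.median(rtts_ms):.1f} ms")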
For tenants with their own networks, we support direct connect and private VLAN handoff at the demarc, useful when egress costs and data-residency posture matter more than internet-routed throughput.
Liquid where it matters. Sensible where it doesn't.
Modern AI racks push 100–140 kW. That density isn't survivable on air alone. Standard ARO hub cooling is the Motivair M16 rear-door heat exchanger, a passive plate at the back of each rack fed by a 55–65°F facility water loop. Heat leaves the IT loop in sensible exchange to the rear-door coil; no condensation, no air handlers in the room.
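As an illustration of what that loop carries (rough numbers, assuming a 10°C supply-to-return rise; not a site design):

# Sensible-heat balance for one worst-case rack: Q = m_dot * c_p * dT.
rack_kw   = 140.0                                  # rack density cited above
delta_t_c = 10.0                                   # assumed water temperature rise
c_p       = 4186.0                                 # J/(kg*K) for water

flow_l_s  = rack_kw * 1000 / (c_p * delta_t_c)     # ~3.3 L/s of loop flow
flow_gpm  = flow_l_s * 15.85                       # ~53 US GPM
btu_hr    = rack_kw * 3412                         # ~478,000 BTU/hr rejected

print(f"~{flow_l_s:.1f} L/s (~{flow_gpm:.0f} GPM), ~{btu_hr:,.0f} BTU/hr")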
For high-density configurations, we offer a liquid immersion cooling option: racks submerged in dielectric coolant, near-silent operation, and density well beyond what rear-door cooling alone can carry.
Power profile
- 415V 3-phase electrical service per rack, six IEC 309 60A 3P+N+E connections (per-whip arithmetic is sketched after this list).
- UPS-backed with site-specific runtime targets.
- Behind-the-meter generation or microgrid integration at qualified sites, valuable where utility queue times are blocking grid-tied AI capacity.
- Sub-meter on the property's electrical service so utility consumption is reimbursed cleanly.
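The arithmetic behind those feeds, as a sketch (assumes unity power factor and ignores breaker derating and redundancy policy):

# Nameplate capacity of one rack's feeds: P = sqrt(3) * V_LL * I per whip.
import math

volts_ll, amps, whips = 415, 60, 6

kw_per_whip = math.sqrt(3) * volts_ll * amps / 1000   # ~43.1 kW per connection
kw_total    = kw_per_whip * whips                     # ~259 kW nameplate per rack

print(f"~{kw_per_whip:.1f} kW per whip, ~{kw_total:.0f} kW total nameplate")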
Water profile (rear-door cooling)
- Closed loop, 55–65°F supply temperature.
- Coolant chemistry within published Motivair limits (chloride, calcium, magnesium, sulfate ≤ 25 ppm; pH 7–10.5).
- 50-mesh inline filtration in the closed loop.
- Total water draw varies by site; we share specifics during the design pass.

What it takes from the building.
A hub fits in a modest, secure space: the kind of square footage every commercial property has spare somewhere.
Rack & access
Modular Dell racks at 750 mm × 1200 mm. With 36″ rear and 36″ front clearance, total footprint per rack is about 10 ft in depth.
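The depth figure is just the rack plus the service clearances; checking the arithmetic:

# Per-bay depth: 1200 mm rack plus 36" front and 36" rear clearance.
rack_depth_in = 1200 / 25.4                      # ~47.2 in
total_ft      = (rack_depth_in + 36 + 36) / 12   # ~9.9 ft
print(f"~{total_ft:.1f} ft of depth per rack bay")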
Tier III-class redundancy targets at anchor sites; smaller hubs run Tier II with a documented uptime profile.
Property contributes
Secure room with controlled access, three-phase electrical service, water access for the cooling loop, fiber path to the demarc, and 24/7 operations access.
Property pays nothing upfront. ARO finances, owns, and insures the hardware; the property owes no reimbursement and earns a share of the GPU revenue.
Selected for what holds up in production.
No single-vendor risk. Each component is sourced from the supplier whose product is best-in-class for what it does, not from whoever bundled the deepest discount.
Dell: PowerEdge AI reference architectures. 5-year hardware warranty. ProSupport on-site break-fix at every hub.
NVIDIA: Blackwell-class GPUs, plus H100 / H200 at anchor sites. Hardware ecosystem partner; no other affiliation implied.
Xerox: channel partner for procurement and integration. Configuration validation through Xerox's data-center practice.
Motivair: M16 rear-door heat exchangers as standard. Liquid immersion partner for high-density configurations.
Every deployment gets its own design.
Generic specs go only so far. The actual hub at your site, or your tenant's site, is sized to the rooms available, the electrical service in the panel, the fiber paths on the property, and the workload profile your team intends to run.
Request a deployment-specific data sheet
- GPU SKU recommendation: Blackwell, H100, or H200 sized to the workload.
- Power & thermal envelope: kW per rack, panel capacity, water specs, BTU rejection.
- Network design: fiber paths, VLAN handoff, latency targets, peering options.
- Tier & SLA: Tier I / II / III commitments per site, uptime targets.
- Compliance posture: SOC 2 status, BAA availability, isolation guarantees for the workload type.