Deploying OpenClaw on NVIDIA Jetson Orin Nano
A complete walkthrough for setting up your self-hosted AI assistant on NVIDIA’s edge hardware — from JetPack flash to a running gateway with local model inference.
An in-depth technical exploration of NVIDIA GPU Operator, covering its architecture, components, lifecycle, and advanced features. Learn how Driver Manager evolved from 264 to 876+ lines, understand the JIT compilation model, and master production deployment strategies.
Deep dive into eBPF VM implementation, Probe engine mechanisms, program counter jumps, and hardware resource access. Covers register architecture, instruction encoding, Kprobe/Kretprobe/Uprobe implementation, PMU access, JIT compilation optimization, and production best practices. Ideal for systems engineers and kernel developers.
NVIDIA GPU Fabric is a high-speed, low-latency interconnect technology that enables direct peer-to-peer communication between GPUs across multiple nodes. This technology is critical for scaling high-performance computing (HPC) and artificial intelligence (AI) workloads that require massive parallel processing capabilities.
This article describes how to configure Scalable Functions (SFs) on Debian. SFs are implemented using the subfunction capability in the Linux kernel; a comparable technology is Intel Scalable IOV.
In this guide, we demonstrate how to enable inter-VM communication using SFs instead of SR-IOV VFs, leveraging the vhost-vDPA module.
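The procedure described above can be sketched with the kernel's devlink and vdpa tooling. This is a minimal, hedged outline: the PCI address, sfnum, port index, MAC address, and auxiliary device name below are placeholders that vary per system, and an SF-capable NIC (e.g. NVIDIA ConnectX-6 Dx or later) plus iproute2 with devlink/vdpa support are assumed.

```shell
# 1. Create a subfunction (SF) port on the physical function
#    (0000:03:00.0 and sfnum 88 are example values for your PF)
devlink port add pci/0000:03:00.0 flavour pcisf pfnum 0 sfnum 88

# 2. Set a MAC address and activate the SF; the port index
#    (32768 here) is reported by the previous command
devlink port function set pci/0000:03:00.0/32768 \
    hw_addr 00:00:00:00:88:88 state active

# 3. Load the vhost-vDPA transport and create a vDPA device backed
#    by the SF's auxiliary device (name differs on each system)
modprobe vhost_vdpa
vdpa dev add name vdpa0 mgmtdev auxiliary/mlx5_core.sf.4

# The resulting /dev/vhost-vdpa-0 character device can then be
# handed to a VM, e.g. with QEMU:
#   -netdev vhost-vdpa,id=net0,vhostdev=/dev/vhost-vdpa-0
```

Because each SF gets its own switchdev representor on the host, traffic between two VMs attached to SFs on the same PF is switched in NIC hardware, which is the motivation for preferring SFs over a limited pool of SR-IOV VFs.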