<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>GPU Computing on Pi Stack</title>
    <link>https://www.pistack.xyz/tags/gpu-computing/</link>
    <description>Recent content in GPU Computing on Pi Stack</description>
    <generator>Hugo</generator>
    <language>en-us</language>
    <lastBuildDate>Mon, 04 May 2026 00:00:00 +0000</lastBuildDate>
    <atom:link href="https://www.pistack.xyz/tags/gpu-computing/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Self-Hosted Distributed Training: Horovod vs DeepSpeed vs PyTorch FSDP Guide 2026</title>
      <link>https://www.pistack.xyz/posts/2026-05-04-self-hosted-distributed-training-horovod-deepspeed-pytorch-fsdp-guide/</link>
      <pubDate>Mon, 04 May 2026 00:00:00 +0000</pubDate>
      <guid>https://www.pistack.xyz/posts/2026-05-04-self-hosted-distributed-training-horovod-deepspeed-pytorch-fsdp-guide/</guid>
      <description>&lt;p&gt;When a single GPU can no longer keep up with model training workloads, distributed training becomes essential. Whether you&amp;rsquo;re running large-scale recommendation systems, computer vision pipelines, or natural language processing models on a self-hosted GPU cluster, choosing the right distributed training framework determines your hardware utilization, training speed, and operational complexity.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Self-Hosted GPU Management in Kubernetes: NVIDIA GPU Operator vs Container Toolkit vs Volcano Guide 2026</title>
      <link>https://www.pistack.xyz/posts/2026-05-04-self-hosted-gpu-management-kubernetes-nvidia-operator-container-toolkit-volcano/</link>
      <pubDate>Mon, 04 May 2026 00:00:00 +0000</pubDate>
      <guid>https://www.pistack.xyz/posts/2026-05-04-self-hosted-gpu-management-kubernetes-nvidia-operator-container-toolkit-volcano/</guid>
      <description>&lt;p&gt;Running GPU workloads on Kubernetes requires more than just installing GPU drivers. You need device discovery, driver installation, container runtime integration, resource scheduling, and optionally Multi-Instance GPU (MIG) partitioning. The tooling ecosystem has evolved from manual driver installation on every node to fully automated operators that manage the entire GPU lifecycle.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Self-Hosted Hyperparameter Optimization: Optuna vs Ray Tune vs Hyperopt Guide 2026</title>
      <link>https://www.pistack.xyz/posts/2026-05-04-self-hosted-hyperparameter-optimization-optuna-ray-tune-hyperopt-guide/</link>
      <pubDate>Mon, 04 May 2026 00:00:00 +0000</pubDate>
      <guid>https://www.pistack.xyz/posts/2026-05-04-self-hosted-hyperparameter-optimization-optuna-ray-tune-hyperopt-guide/</guid>
      <description>&lt;p&gt;Finding the right hyperparameters for a model can make the difference between mediocre and state-of-the-art performance. Manual tuning is slow and error-prone. Automated hyperparameter optimization (HPO) frameworks systematically search the parameter space, finding better configurations with fewer training runs.&lt;/p&gt;</description>
    </item>
  </channel>
</rss>
