Inference Devnet

Overview

The Inference Devnet is a globally distributed network made up of GPU operators who provide Compute resources to run AI inference tasks on large language models like Llama 3, DeepSeek, and others.

By decentralizing inference, the network lowers costs, increases availability, and creates an open marketplace for AI Compute, enabling more scalable, efficient AI application delivery.

Why run an Inference Devnet Node?

By renting a GPU from the NodeOps Cloud Marketplace to run an Inference Node, you can earn devnet $INT token rewards proportional to the Compute power you contribute and the performance of your Node(s).

This template enables you to:

  • Contribute GPU Compute to the Inference Devnet to run AI inference tasks (such as DeepSeek R1, Gemma 3, Llama 3.3 70B) and earn $INT points
  • Stake test $INT-DEV tokens on Solana Devnet
  • Manage operator pools to increase job allocation and rewards
  • Participate in network security via inference verification
  • Maintain GPU uptime and integrity to stay compliant and avoid slashing
Tip

Skip the technical headaches — with NodeOps templates, you can instantly deploy Nodes and start contributing to Inference. Join the network, support decentralization, and unlock the opportunity to earn token incentives directly from Inference.

⚠️ Disclaimer: NodeOps does not control or guarantee rewards. Incentives, if any, are issued solely by Inference based on their rules.

Deploy an Inference Devnet operator Node on NodeOps DePIN Cloud

Use the video or the walkthrough below to learn how to deploy and operate an Inference.net Devnet GPU Node in a few clicks, with no setup overhead.

Prerequisites

  • Sufficient funds for the GPU deployment

Step 1: Get your Worker Code

  1. Create an account at Devnet.Inference.Net/dashboard/workers.
  2. Click Create Worker and accept the default Install Type, CLI.
  3. From the command box (Step 2 in the Inference dashboard), copy the code shown after the --code flag.
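
For orientation, the command box contents look something like the sketch below. This is an illustration only: the binary name and the code value are placeholders, not the actual Inference CLI invocation. The only part you need is the value that follows the --code flag.

```bash
# Hypothetical sketch of the dashboard's command box: the binary name
# and the worker code below are placeholders, not the real invocation.
# Copy only the value that appears after the --code flag.
inference-cli start --code <YOUR-WORKER-CODE>
```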

Step 2: Deploy your Inference Devnet operator Node on NodeOps

  1. Use the Cloud Compute Marketplace Inference link, or log in, navigate to the Template Marketplace, and search for Inference.

You can use the GPU and other filters on the right-hand side of the dashboard to refine the search results.

  2. Paste the Worker Code copied in Step 1 into the template field.
  3. Pick your preferred GPU and plan (7-day or 30-day).
  4. Click Next, complete payment, and click Deploy.

Congratulations, you are now contributing GPU resources to the Inference.net Devnet.
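
To sanity-check the deployment, your worker should show as connected on the Inference dashboard. If your plan gives you shell access to the instance, you can also confirm the GPU is busy with a generic NVIDIA check. This assumes an NVIDIA GPU with drivers installed and is not part of the Inference CLI:

```bash
# Generic NVIDIA check, not specific to Inference.net: prints the GPU
# model, current utilization, and memory in use. Utilization above 0%
# while jobs are assigned suggests the worker is serving inference.
nvidia-smi --query-gpu=name,utilization.gpu,memory.used --format=csv
```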

What next?