Anything LLM
Overview
Anything LLM is a self-hosted app. Unlike general-purpose AIs served through public endpoints, you can configure it so that your data remains your own.
Why Use NodeOps Cloud for Anything LLM?
- The NodeOps Anything LLM template makes setup easy and scalable
- NodeOps provides cost-effective Compute sized for your LLM model's needs (4–32 GB RAM is typical)
- You can use your own Machine as the server, or the Machines of trusted allies
- You earn gNODE when you participate in the NodeOps ecosystem
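The 4–32 GB guideline above can be roughed out from model size: weights need roughly parameters × bytes-per-parameter, plus runtime overhead. A minimal sketch, where the 20% overhead multiplier is an illustrative assumption, not a NodeOps figure:

```python
def estimate_ram_gb(params_billions: float, bytes_per_param: float = 2.0,
                    overhead: float = 1.2) -> float:
    """Rough RAM estimate for loading an LLM's weights.

    bytes_per_param: 2.0 for fp16 weights, ~0.5 for 4-bit quantization.
    overhead: multiplier for KV cache and runtime (illustrative assumption).
    """
    return params_billions * bytes_per_param * overhead

print(round(estimate_ram_gb(7), 1))        # 7B model, fp16
print(round(estimate_ram_gb(7, 0.5), 1))   # 7B model, 4-bit quantized
```

By this rule of thumb a 7B model lands near 17 GB in fp16 but close to 4 GB when 4-bit quantized, which is why the typical range is so wide.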
warning
This setup provides an LLM front end hosted on NodeOps DePIN Cloud that sends requests to OpenAI's servers. Your data security therefore depends on your API key's settings and the provider's data policies. A forthcoming video and documentation will explain how to connect that front end to an LLM instance hosted by you on NodeOps infrastructure.
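Concretely, your prompts and document excerpts travel inside the requests the front end sends to the provider, authorized with your key. A minimal sketch of what such a request looks like, assuming an OpenAI-style chat completions API (the model name and key below are placeholders, not values from this guide):

```python
import json

def build_chat_request(api_key: str, prompt: str, model: str = "gpt-4o-mini"):
    """Build (headers, body) for an OpenAI-style chat completions call.

    The prompt text itself is part of the body sent to the provider's
    servers, which is why key settings and data policies matter.
    """
    headers = {
        "Authorization": f"Bearer {api_key}",  # your provider API key
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return headers, body

headers, body = build_chat_request("sk-placeholder", "Summarize my document")
print(json.loads(body)["messages"][0]["content"])
```

Self-hosting the model instead, as described above, keeps this payload entirely on your own infrastructure.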
Deploy Anything LLM
Prerequisites
- API Key for the AI model provider
- A Cloud Marketplace account
- Sufficient funds for the Compute
- Use the Cloud Compute Marketplace AnythingLLM link, or log in, navigate to Template Marketplace, and search for AnythingLLM.
- Click Deploy Template, and select which Machine to run the workload on, or Auto assign.
- Enter the API key, choose whether to keep the default CPU values or customize them, and click Next.
- Pay for the template deployment.
You are now ready to chat with the LLM, upload documents, or customize the LLM.
Use Anything LLM
- While logged in to the Cloud Marketplace, navigate to My Deployments.
- Click on the AnythingLLM card and click Endpoints for your instance's URL.
- Click Get Started and choose the LLM model for which you provided the key.
- Follow the rest of the prompts.
You can now customize your LLM settings, upload documents, and start interacting with the AI.
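Beyond the web UI, AnythingLLM instances also expose a developer API for scripting chats against a workspace. The route and payload below are assumptions based on AnythingLLM's documented API shape and may differ by version, so verify them against your own instance's API documentation before use; the instance URL and key are placeholders:

```python
import json

# Placeholders: use the URL from Endpoints in My Deployments and an API key
# generated in your instance's settings.
INSTANCE_URL = "https://your-endpoint.example"
API_KEY = "your-anythingllm-api-key"

def build_workspace_chat(slug: str, message: str):
    """Build (url, headers, body) for an assumed workspace chat endpoint."""
    url = f"{INSTANCE_URL}/api/v1/workspace/{slug}/chat"
    headers = {
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"message": message, "mode": "chat"})
    return url, headers, body

url, headers, body = build_workspace_chat("my-workspace", "Hello!")
print(url)
```

Sending this request with any HTTP client returns the workspace's chat response, letting you automate interactions with the documents you uploaded.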
Troubleshooting
- If your app crashes, consider increasing CPU/memory values
- For delayed deployments, contact the NodeOps support team with your workload name
What next?
- Access your app
- Monitor your deployment
- Manage your instance