
Run LLM with Ollama inside Daytona workspace

Blog post from Daytona

Post Details
Company: Daytona
Date Published: -
Author: Kiran Naragund
Word Count: 1,053
Language: English
Hacker News Points: -
Summary

This guide demonstrates how to set up and run Large Language Models (LLMs) with Ollama inside a Daytona workspace, a setup that can significantly improve development productivity. To follow along, you'll need a Python environment and a chat interface. The process involves creating a dev container from a devcontainer.json configuration file, writing project files such as ollama_chat.py and requirements-dev.txt, and then running LLMs through Daytona's containerized setup. The devcontainer.json file specifies the development environment, including the base image, features, customizations, and a post-start command. The guide also covers adding the chat script, creating the requirements-dev.txt file, initializing Git, committing the changes, and pushing to a remote repository. Finally, it explains how to create a workspace in Daytona using GitHub as a provider, run LLMs with Ollama inside it, and open the workspace in VS Code.
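The devcontainer.json described in the summary might look like the following minimal sketch. The original post's file is not reproduced here, so the base image, the community Ollama feature reference, and the post-start command below are all assumptions, not the author's exact configuration:

```json
{
  "name": "ollama-llm-workspace",
  "image": "mcr.microsoft.com/devcontainers/python:3.11",
  "features": {
    "ghcr.io/prulloac/devcontainer-features/ollama:1": {}
  },
  "customizations": {
    "vscode": {
      "extensions": ["ms-python.python"]
    }
  },
  "postStartCommand": "ollama serve"
}
```

The post-start command starts the Ollama server automatically whenever the container comes up, so the chat script can reach it without any manual setup.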
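A minimal ollama_chat.py could be sketched as follows. It assumes the `ollama` Python package (presumably listed in requirements-dev.txt) and a local Ollama server started by the dev container; the model name and loop structure are illustrative, not taken from the original post:

```python
"""Minimal chat loop against a local Ollama server (illustrative sketch)."""


def build_messages(history, user_input):
    """Append the user's turn to the running conversation history."""
    return history + [{"role": "user", "content": user_input}]


def chat_loop(model="llama3.2"):
    # Imported lazily so build_messages stays usable without the package.
    import ollama

    history = []
    while True:
        user_input = input("You: ").strip()
        if user_input.lower() in {"exit", "quit"}:
            break
        history = build_messages(history, user_input)
        reply = ollama.chat(model=model, messages=history)
        content = reply["message"]["content"]
        print(f"Assistant: {content}")
        history.append({"role": "assistant", "content": content})


if __name__ == "__main__":
    chat_loop()
```

Keeping the full history in the `messages` list gives the model conversational context on every turn, rather than treating each prompt as independent.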