Updated Ollama text generation (markdown)

Azgaar 2025-06-14 15:42:23 +02:00
parent 0d26d2f072
commit 7a4cd50503

@@ -1,65 +1,55 @@
# Ollama integration with Fantasy Map Generator
## What is Ollama?
An integration with [Ollama](https://ollama.com/) allows users to leverage local large language models to generate text data for the Fantasy Map Generator.
Ollama is a free program that lets you run AI models directly on your computer. This means:
- **No internet required** - Works offline
- **No monthly fees** - Completely free to use
- **Private** - Your data stays on your computer
- **Fast** - No waiting for online services
Think of it like having your own personal AI generator living on your computer!
## Setup and Configuration
To use Ollama with Fantasy Map Generator, you need to ensure Ollama is running and correctly configured on your machine.
## Step 1: Download and Install Ollama
1. Go to [ollama.com](https://ollama.com/)
2. Click the download button and select your system (Windows, Mac, or Linux)
3. Run the downloaded file and follow the installation steps
4. That's it! Ollama is now installed.
## Step 2: Download an AI Model
After installing Ollama, you need to download an AI model (think of it as the "brain" for your AI):
1. **Open Command Prompt** (Windows) or **Terminal** (Mac/Linux):
   - **Windows**: Press `Windows key + R`, type `cmd`, press Enter
   - **Mac**: Press `Cmd + Space`, type `Terminal`, press Enter
   - **Linux**: Press `Ctrl + Alt + T`
2. **Type this command and press Enter:**
   ```
   ollama run llama3.2
   ```
3. **Wait for the download** - this might take 5-30 minutes depending on your internet speed
4. **You'll see a prompt like `>>>` when it's ready** - just type `exit` to close it
## Configure Ollama for Network Access (Crucial Step)
By default, Ollama might only listen for connections from the same machine (`localhost` or `127.0.0.1`). For Fantasy Map Generator to access Ollama, especially from other devices on your local network, you must configure Ollama to listen on all network interfaces and allow cross-origin requests.
#### Set `OLLAMA_HOST` Environment Variable
This variable tells Ollama which network interfaces to listen on.
**Action:** Set `OLLAMA_HOST` to `0.0.0.0`
**How to set (Windows Permanent):**
1. Search for "Edit the system environment variables" in the Windows search bar
2. Click "Environment Variables..."
3. In the "System variables" section (bottom pane), click "New..." (or "Edit..." if it exists)
4. Variable name: `OLLAMA_HOST`
5. Variable value: `0.0.0.0`
6. Click "OK" on all dialogs
7. **Restart your PC** for the changes to take effect for all processes
**How to set (Linux/macOS - per session or persistent):**
- **Per session:** In your terminal, before running `ollama serve`: `export OLLAMA_HOST="0.0.0.0"`
- **Persistent:** Add `export OLLAMA_HOST="0.0.0.0"` to your shell profile file (e.g., `~/.bashrc`, `~/.zshrc`), then `source` the file or restart your terminal
#### Set `OLLAMA_ORIGINS` Environment Variable (CORS Configuration)
This variable is essential: browsers only allow JavaScript from one origin (Fantasy Map Generator's port 8000) to communicate with Ollama on a different port (11434) if Ollama explicitly allows that origin.
**Action:** Set `OLLAMA_ORIGINS` to allow your Fantasy Map Generator's origin
**How to set (Windows Permanent):** Follow the same steps as for `OLLAMA_HOST`, but use:
- Variable name: `OLLAMA_ORIGINS`
- Variable value: `http://<YOUR_PC_IP_ADDRESS>:8000` (e.g., `http://192.168.178.46:8000`)
**For development (easiest):** You can use `*` as the value (`OLLAMA_ORIGINS=*`) to allow all origins. This is less secure for production, but it simplifies testing.
**Restart your PC** after setting the variable.
**How to set (Linux/macOS - per session or persistent):**
- **Per session:** `export OLLAMA_ORIGINS="http://<YOUR_PC_IP_ADDRESS>:8000"` or `export OLLAMA_ORIGINS="*"`
- **Persistent:** Add the `export` line to your shell profile file
#### Firewall Configuration
Ensure your PC's firewall (e.g., Windows Defender Firewall) is not blocking incoming connections to Ollama's default port, `11434`.
**Action:** Create an inbound rule to allow TCP traffic on port `11434`
## Step 3: Configure Fantasy Map Generator
1. **Open Command Prompt/Terminal again**
2. **Type:** `ollama serve`
3. **Leave this window open** - Ollama is now running!
4. **Open Fantasy Map Generator**
5. **Select "ollama" from the AI model list**
6. **In the key field, type:** `llama3.2` (or whatever model you downloaded) - you don't need an API key, so this field is used to define the model name
**Important:** Fantasy Map Generator connects to Ollama at `http://localhost:11434/api/generate`. This should work automatically if you followed the steps above. If you need a different endpoint, you can change it in the `ai-generator.js` file.
That's it! You can now generate text using your local AI model.
## Troubleshooting
**If it doesn't work:**
- Check that `ollama serve` is still running in your command prompt/terminal
- Run `ollama list` to see if your model downloaded correctly
- Double-check that `OLLAMA_HOST` and `OLLAMA_ORIGINS` are set and that you restarted afterwards
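A detail that often trips people up: the value of `OLLAMA_ORIGINS` must be an *origin* - scheme, host, and port only, with no path. A quick sketch using the standard `URL` API (the IP address is just an example placeholder, use your own PC's address):

```javascript
// An "origin" is scheme + host + port - no trailing path or slash.
// The address below is an example placeholder.
const fmgUrl = new URL("http://192.168.178.46:8000/index.html");

// The .origin property strips the path, leaving exactly the value
// that OLLAMA_ORIGINS expects.
console.log(fmgUrl.origin); // "http://192.168.178.46:8000"
```

So if Fantasy Map Generator is open at `http://192.168.178.46:8000/index.html`, the variable should be set to `http://192.168.178.46:8000`, not the full page URL.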
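For reference, the request that reaches Ollama follows its public `/api/generate` API: a POST with a JSON body naming the model and the prompt. Here is a minimal sketch of such a call - the helper function is illustrative, not Fantasy Map Generator's actual code from `ai-generator.js`:

```javascript
// Builds fetch options for a non-streaming call to Ollama's generate API.
// Endpoint and JSON fields follow Ollama's documented /api/generate API.
const OLLAMA_ENDPOINT = "http://localhost:11434/api/generate";

function buildGenerateRequest(model, prompt) {
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    // stream: false asks Ollama for one JSON reply instead of a token stream
    body: JSON.stringify({ model, prompt, stream: false }),
  };
}

// Usage (requires `ollama serve` to be running):
//   const res = await fetch(OLLAMA_ENDPOINT, buildGenerateRequest("llama3.2", "Name a fantasy kingdom"));
//   const data = await res.json();
//   data.response then holds the generated text
console.log(buildGenerateRequest("llama3.2", "Name a fantasy kingdom").body);
```

The model name passed here is exactly what you type into the key field in Step 3, which is why that field must match a model you downloaded with `ollama run`.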