- Using Ollama requires it to be running locally on your machine at http://localhost:11434 alongside Azgaar's Fantasy Map Generator; Ollama cannot be used with the online version of the generator hosted on the official website.
diff --git a/modules/ui/OLLAMAREADME.MD b/modules/ui/OLLAMAREADME.MD
deleted file mode 100644
index a3fd1d7c..00000000
--- a/modules/ui/OLLAMAREADME.MD
+++ /dev/null
@@ -1,78 +0,0 @@
-
-## Recent Changes (May 18, 2025)
-
-### Ollama Integration for AI Text Generation
-
-An integration with [Ollama](https://ollama.com/) has been added as a new provider for the AI text generator feature, allowing users to leverage local large language models.
-
-**Key Changes:**
-
-* **New Provider:** "Ollama" is now available in the AI generator's model/provider selection.
-* **Model Name as Key:** When Ollama is selected, the "API Key" input field is repurposed to accept the Ollama model name (e.g., `llama3`, `mistral`, etc.) instead of a traditional API key.
-* **Local Endpoint:** The integration communicates with a local Ollama instance. Configuration details below.
-* **Streaming Support:** Responses from Ollama are streamed into the text area.
-
-## Ollama Setup and Configuration
-
-To use Ollama with Fantasy Map Generator, you need to ensure Ollama is correctly running and configured on your machine.
-
-**1. Install Ollama:**
-
-* Download and install Ollama from [ollama.com](https://ollama.com/).
-* Download the desired models (e.g., `ollama pull llama3`; `ollama run llama3` also downloads the model on first use).
-
-**2. Configure Ollama for Network Access (Crucial Step):**
-
-By default, Ollama listens only for connections from the same machine (`localhost` / `127.0.0.1`). For Fantasy Map Generator to reach Ollama, especially from other devices on your local network, you must configure Ollama to listen on all network interfaces and to allow cross-origin requests.
-
-* **Set `OLLAMA_HOST` Environment Variable:**
- * This variable tells Ollama which network interfaces to listen on.
- * **Action:** Set `OLLAMA_HOST` to `0.0.0.0`.
- * **How to set (Windows Permanent):**
- 1. Search for "Edit the system environment variables" in the Windows search bar.
- 2. Click "Environment Variables...".
- 3. In the "System variables" section (bottom pane), click "New..." (or "Edit..." if it exists).
- 4. Variable name: `OLLAMA_HOST`
- 5. Variable value: `0.0.0.0`
- 6. Click "OK" on all dialogs.
- 7. **Restart your PC** for the changes to take effect for all processes.
- * **How to set (Linux/macOS - per session or persistent):**
- 1. **Per session:** In your terminal, before running `ollama serve`: `export OLLAMA_HOST="0.0.0.0"`
- 2. **Persistent:** Add `export OLLAMA_HOST="0.0.0.0"` to your shell profile file (e.g., `~/.bashrc`, `~/.zshrc`), then `source` the file or restart your terminal.
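
Once `OLLAMA_HOST` is set and Ollama has been restarted, you can sanity-check that the server is reachable. A minimal sketch, assuming `curl` is installed and Ollama is running (`/api/tags` lists installed models as JSON):

```shell
# verify Ollama responds locally
curl http://localhost:11434/api/tags

# same check from another machine on the LAN; replace with your PC's actual IP
curl http://192.168.178.46:11434/api/tags
```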
-
-* **Set `OLLAMA_ORIGINS` Environment Variable (CORS Configuration):**
-    * This variable is required so the browser will allow JavaScript served from one origin (e.g., Fantasy Map Generator on port 8000) to call Ollama on a different origin (port 11434).
- * **Action:** Set `OLLAMA_ORIGINS` to allow your Fantasy Map Generator's origin.
- * **How to set (Windows Permanent):** Follow the same steps as for `OLLAMA_HOST`, but use:
- * Variable name: `OLLAMA_ORIGINS`
-      * Variable value: `http://<your-local-ip>:8000` (e.g., `http://192.168.178.46:8000`)
- * **For development (easiest):** You can use `*` as the value (`OLLAMA_ORIGINS=*`) to allow all origins. This is less secure for production but simplifies testing.
- * **Restart your PC** after setting the variable.
- * **How to set (Linux/macOS - per session or persistent):**
-    1. **Per session:** `export OLLAMA_ORIGINS="http://<your-local-ip>:8000"` or `export OLLAMA_ORIGINS="*"`
- 2. **Persistent:** Add the `export` line to your shell profile file.
-
-* **Firewall Configuration:**
- * Ensure your PC's firewall (e.g., Windows Defender Firewall) is not blocking incoming connections to Ollama's default port, `11434`.
- * **Action:** Create an inbound rule to allow TCP traffic on port `11434`.
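
The inbound rule can also be created from the command line; a sketch (the rule name "Ollama" is arbitrary, and the Windows command must run from an elevated prompt):

```shell
# Windows (run as Administrator): allow inbound TCP on Ollama's port
netsh advfirewall firewall add rule name="Ollama" dir=in action=allow protocol=TCP localport=11434

# Linux with ufw
sudo ufw allow 11434/tcp
```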
-
-**3. Configure Fantasy Map Generator's `ai-generator.js`:**
-
-The `ai-generator.js` file needs to point to the correct Ollama endpoint.
-
-* **Scenario A: Using only on the same machine (`localhost`):**
- * Ensure the `fetch` call in the `generateWithOllama` function (inside `modules/ui/ai-generator.js`) points to `http://localhost:11434/api/generate`. This is usually the default.
-
-* **Scenario B: Using from other machines on the local network:**
- * You **must** change the `fetch` call in the `generateWithOllama` function (inside `modules/ui/ai-generator.js`) to use the actual local IP address of your machine where Ollama is running.
- * **Example:**
- ```javascript
- // Inside modules/ui/ai-generator.js, within generateWithOllama function:
-      const response = await fetch("http://192.168.178.46:11434/api/generate", { // replace with your PC's actual IP
-        method: "POST",
-        headers,
-        body: JSON.stringify(body)
-      });
-      ```
- * **How to find your PC's IP:**
- * **Windows:** Open Command Prompt (`cmd`) and type `ipconfig`. Look for "IPv4 Address" under your active network adapter.
- * **Linux/macOS:** Open Terminal and type `ip addr show` or `ifconfig`.
-
----
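The streaming format the README describes is newline-delimited JSON: each line is an object carrying a `response` text fragment, and the final line has `"done": true`. A minimal, self-contained sketch of assembling the streamed text (the `extractOllamaText` helper is illustrative, not part of the codebase):

```javascript
// Assemble the full reply from Ollama's NDJSON stream.
// Each complete line is parsed; "response" fragments are concatenated
// until a line with done: true is seen.
function extractOllamaText(ndjson) {
  let text = "";
  for (const line of ndjson.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed) continue;
    try {
      const json = JSON.parse(trimmed);
      if (json.response) text += json.response;
      if (json.done) break;
    } catch {
      // ignore lines cut off mid-chunk
    }
  }
  return text;
}
```

In the real module this logic runs incrementally as chunks arrive from `fetch`, rather than on the complete response.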
diff --git a/modules/ui/ai-generator.js b/modules/ui/ai-generator.js
index a0c532f9..92d9d380 100644
--- a/modules/ui/ai-generator.js
+++ b/modules/ui/ai-generator.js
@@ -10,7 +10,7 @@ const PROVIDERS = {
generate: generateWithAnthropic
},
ollama: {
- keyLink: "https://ollama.com/library",
+ keyLink: "https://github.com/Azgaar/Fantasy-Map-Generator/wiki/Ollama-text-generation",
generate: generateWithOllama
}
};
@@ -27,15 +27,11 @@ const MODELS = {
"claude-3-5-haiku-latest": "anthropic",
"claude-3-5-sonnet-latest": "anthropic",
"claude-3-opus-latest": "anthropic",
- "Ollama (enter model in key field)": "ollama"
+ "ollama (local models)": "ollama"
};
const SYSTEM_MESSAGE = "I'm working on my fantasy map.";
-if (typeof modules.generateWithAi_setupDone === 'undefined') {
- modules.generateWithAi_setupDone = false;
-}
-
async function generateWithOpenAI({key, model, prompt, temperature, onContent}) {
const headers = {
"Content-Type": "application/json",
@@ -58,7 +54,7 @@ async function generateWithOpenAI({key, model, prompt, temperature, onContent})
if (content) onContent(content);
};
- await handleStream(response, getContent, "openai");
+ await handleStream(response, getContent);
}
async function generateWithAnthropic({key, model, prompt, temperature, onContent}) {
@@ -82,59 +78,38 @@ async function generateWithAnthropic({key, model, prompt, temperature, onContent
if (content) onContent(content);
};
- await handleStream(response, getContent, "anthropic");
+ await handleStream(response, getContent);
}
async function generateWithOllama({key, model, prompt, temperature, onContent}) {
- // For Ollama, 'key' is the actual model name entered by the user.
- // 'model' is the value from the dropdown, e.g., "Ollama (enter model in key field)".
- const ollamaModelName = key;
-
- const headers = {
- "Content-Type": "application/json"
- };
-
- const body = {
- model: ollamaModelName,
- prompt: prompt,
- system: SYSTEM_MESSAGE,
- options: {
- temperature: temperature
- },
- stream: true
- };
+ const ollamaModelName = key; // for Ollama, 'key' is the actual model name entered by the user
const response = await fetch("http://localhost:11434/api/generate", {
method: "POST",
- headers,
- body: JSON.stringify(body)
+ headers: {"Content-Type": "application/json"},
+ body: JSON.stringify({
+ model: ollamaModelName,
+ prompt,
+ system: SYSTEM_MESSAGE,
+ options: {temperature},
+ stream: true
+ })
});
const getContent = json => {
- // Ollama streams JSON objects with a "response" field for content
- // and "done": true in the final message (which might have an empty response).
- if (json.response) {
- onContent(json.response);
- }
+ if (json.response) onContent(json.response);
};
- await handleStream(response, getContent, "ollama");
+ await handleStream(response, getContent);
}
-async function handleStream(response, getContent, providerType) {
+async function handleStream(response, getContent) {
if (!response.ok) {
let errorMessage = `Failed to generate (${response.status} ${response.statusText})`;
try {
const json = await response.json();
- if (providerType === "ollama" && json?.error) {
- errorMessage = json.error;
- } else {
- errorMessage = json?.error?.message || json?.error || `Failed to generate (${response.status} ${response.statusText})`;
- }
- } catch (e) {
-
- ERROR && console.error("Failed to parse error response JSON:", e)
- }
+ errorMessage = json.error?.message || json.error || errorMessage;
+ } catch {}
throw new Error(errorMessage);
}
@@ -151,24 +126,14 @@ async function handleStream(response, getContent, providerType) {
for (let i = 0; i < lines.length - 1; i++) {
const line = lines[i].trim();
- if (providerType === "ollama") {
- if (line) {
- try {
- const json = JSON.parse(line);
- getContent(json);
- } catch (jsonError) {
- ERROR && console.error(`Failed to parse JSON from Ollama:`, jsonError, `Line: ${line}`);
- }
- }
- } else {
- if (line.startsWith("data: ") && line !== "data: [DONE]") {
- try {
- const json = JSON.parse(line.slice(6));
- getContent(json);
- } catch (jsonError) {
- ERROR && console.error(`Failed to parse JSON:`, jsonError, `Line: ${line}`);
- }
- }
+ if (!line) continue;
+ if (line === "data: [DONE]") break;
+
+ try {
+ const parsed = line.startsWith("data: ") ? JSON.parse(line.slice(6)) : JSON.parse(line);
+ getContent(parsed);
+ } catch (error) {
+ ERROR && console.error("Failed to parse line:", line, error);
}
}
@@ -177,59 +142,65 @@ async function handleStream(response, getContent, providerType) {
}
function generateWithAi(defaultPrompt, onApply) {
+ updateValues();
- function updateDialogElements() {
+ $("#aiGenerator").dialog({
+ title: "AI Text Generator",
+ position: {my: "center", at: "center", of: "svg"},
+ resizable: false,
+ buttons: {
+ Generate: function (e) {
+ generate(e.target);
+ },
+ Apply: function () {
+ const result = byId("aiGeneratorResult").value;
+ if (!result) return tip("No result to apply", true, "error", 4000);
+ onApply(result);
+ $(this).dialog("close");
+ },
+ Close: function () {
+ $(this).dialog("close");
+ }
+ }
+ });
+
+ if (modules.generateWithAi) return;
+ modules.generateWithAi = true;
+
+ byId("aiGeneratorKeyHelp").on("click", function (e) {
+ const model = byId("aiGeneratorModel").value;
+ const provider = MODELS[model];
+ openURL(PROVIDERS[provider].keyLink);
+ });
+
+ function updateValues() {
byId("aiGeneratorResult").value = "";
byId("aiGeneratorPrompt").value = defaultPrompt;
byId("aiGeneratorTemperature").value = localStorage.getItem("fmg-ai-temperature") || "1";
const select = byId("aiGeneratorModel");
- const currentModelVal = select.value;
select.options.length = 0;
Object.keys(MODELS).forEach(model => select.options.add(new Option(model, model)));
-
- const storedModel = localStorage.getItem("fmg-ai-model");
- if (storedModel && MODELS[storedModel]) {
- select.value = storedModel;
- } else if (currentModelVal && MODELS[currentModelVal]) {
- select.value = currentModelVal;
- } else {
- select.value = DEFAULT_MODEL;
- }
- if (!select.value || !MODELS[select.value]) select.value = DEFAULT_MODEL;
+ select.value = localStorage.getItem("fmg-ai-model");
+ if (!select.value || !MODELS[select.value]) select.value = DEFAULT_MODEL;
const provider = MODELS[select.value];
- const keyInput = byId("aiGeneratorKey");
- if (keyInput) {
- keyInput.value = localStorage.getItem(`fmg-ai-kl-${provider}`) || "";
- if (provider === "ollama") {
- keyInput.placeholder = "Enter Ollama model name (e.g., llama3)";
- } else {
- keyInput.placeholder = "Enter API Key";
- }
- } else {
- ERROR && console.error("AI Generator: Could not find 'aiGeneratorKey' element in updateDialogElements.");
- }
+ byId("aiGeneratorKey").value = localStorage.getItem(`fmg-ai-kl-${provider}`) || "";
}
- async function doGenerate(button) {
+ async function generate(button) {
const key = byId("aiGeneratorKey").value;
- const modelValue = byId("aiGeneratorModel").value;
- const provider = MODELS[modelValue];
+ if (!key) return tip("Please enter an API key", true, "error", 4000);
- if (provider !== "ollama" && !key) {
- return tip("Please enter an API key", true, "error", 4000);
- }
- if (provider === "ollama" && !key) {
- return tip("Please enter the Ollama model name in the key field", true, "error", 4000);
- }
- if (!modelValue) return tip("Please select a model", true, "error", 4000);
-
- localStorage.setItem("fmg-ai-model", modelValue);
+ const model = byId("aiGeneratorModel").value;
+ if (!model) return tip("Please select a model", true, "error", 4000);
+ localStorage.setItem("fmg-ai-model", model);
+
+ const provider = MODELS[model];
localStorage.setItem(`fmg-ai-kl-${provider}`, key);
- const promptText = byId("aiGeneratorPrompt").value;
- if (!promptText) return tip("Please enter a prompt", true, "error", 4000);
+ const prompt = byId("aiGeneratorPrompt").value;
+ if (!prompt) return tip("Please enter a prompt", true, "error", 4000);
const temperature = byId("aiGeneratorTemperature").valueAsNumber;
if (isNaN(temperature)) return tip("Temperature must be a number", true, "error", 4000);
@@ -240,83 +211,14 @@ function generateWithAi(defaultPrompt, onApply) {
const resultArea = byId("aiGeneratorResult");
resultArea.disabled = true;
resultArea.value = "";
- const onContentCallback = content => (resultArea.value += content);
+ const onContent = content => (resultArea.value += content);
- await PROVIDERS[provider].generate({key: key, model: modelValue, prompt: promptText, temperature, onContent: onContentCallback});
+ await PROVIDERS[provider].generate({key, model, prompt, temperature, onContent});
} catch (error) {
- tip(error.message, true, "error", 4000);
+ return tip(error.message, true, "error", 4000);
} finally {
button.disabled = false;
byId("aiGeneratorResult").disabled = false;
}
}
-
- $("#aiGenerator").dialog({
- title: "AI Text Generator",
- position: {my: "center", at: "center", of: "svg"},
- resizable: false,
- width: Math.min(600, window.innerWidth - 20),
- modal: true,
- open: function() {
-
- if (!modules.generateWithAi_setupDone) {
- const keyHelpButton = byId("aiGeneratorKeyHelp");
- if (keyHelpButton) {
- keyHelpButton.addEventListener("click", function () {
- const modelValue = byId("aiGeneratorModel").value;
- const provider = MODELS[modelValue];
- if (provider === "ollama") {
- openURL(PROVIDERS.ollama.keyLink);
- } else if (provider && PROVIDERS[provider] && PROVIDERS[provider].keyLink) {
- openURL(PROVIDERS[provider].keyLink);
- }
- });
- } else {
- ERROR && console.error("AI Generator: Could not find 'aiGeneratorKeyHelp' element for event listener.");
- }
-
- const modelSelect = byId("aiGeneratorModel");
- if (modelSelect) {
- modelSelect.addEventListener("change", function() {
- const newModelValue = this.value;
- const newProvider = MODELS[newModelValue];
- const keyInput = byId("aiGeneratorKey");
- if (keyInput) {
- if (newProvider === "ollama") {
- keyInput.placeholder = "Enter Ollama model name (e.g., llama3)";
- } else {
- keyInput.placeholder = "Enter API Key";
- }
-
- keyInput.value = localStorage.getItem(`fmg-ai-kl-${newProvider}`) || "";
- } else {
- ERROR && console.error("AI Generator: Could not find 'aiGeneratorKey' element during model change listener.");
- }
- });
- } else {
- ERROR && console.error("AI Generator: Could not find 'aiGeneratorModel' element for event listener.");
- }
- modules.generateWithAi_setupDone = true;
- }
-
- updateDialogElements();
- },
- buttons: {
- "Generate": function (e) {
-
- doGenerate(e.currentTarget || e.target);
- },
- "Apply": function () {
- const result = byId("aiGeneratorResult").value;
- if (!result) return tip("No result to apply", true, "error", 4000);
- onApply(result);
- $(this).dialog("close");
- },
- "Close": function () {
- $(this).dialog("close");
- }
- }
- });
}
-
-window.generateWithAi = generateWithAi;
From d96e339043b0f8e0479fa250c2cfdd9fe7c47705 Mon Sep 17 00:00:00 2001
From: Azgaar
Date: Sat, 14 Jun 2025 15:20:41 +0200
Subject: [PATCH 3/4] chore: update version to 1.108.8
---
index.html | 2 +-
versioning.js | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/index.html b/index.html
index 9cb1750b..c2afdb33 100644
--- a/index.html
+++ b/index.html
@@ -8143,7 +8143,7 @@
-
+
diff --git a/versioning.js b/versioning.js
index 6713f019..b5e0f1a1 100644
--- a/versioning.js
+++ b/versioning.js
@@ -13,7 +13,7 @@
* Example: 1.102.2 -> Major version 1, Minor version 102, Patch version 2
*/
-const VERSION = "1.108.7";
+const VERSION = "1.108.8";
if (parseMapVersion(VERSION) !== VERSION) alert("versioning.js: Invalid format or parsing function");
{
From c891689796128c29ad76ea98748ce77a0e26c1e6 Mon Sep 17 00:00:00 2001
From: Azgaar
Date: Sat, 14 Jun 2025 15:24:23 +0200
Subject: [PATCH 4/4] feat(ai-generator): update supported AI models list
---
modules/ui/ai-generator.js | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/modules/ui/ai-generator.js b/modules/ui/ai-generator.js
index 92d9d380..8ef13cf6 100644
--- a/modules/ui/ai-generator.js
+++ b/modules/ui/ai-generator.js
@@ -22,8 +22,12 @@ const MODELS = {
"chatgpt-4o-latest": "openai",
"gpt-4o": "openai",
"gpt-4-turbo": "openai",
- "o1-preview": "openai",
- "o1-mini": "openai",
+ o3: "openai",
+ "o3-mini": "openai",
+ "o3-pro": "openai",
+ "o4-mini": "openai",
+ "claude-opus-4-20250514": "anthropic",
+ "claude-sonnet-4-20250514": "anthropic",
"claude-3-5-haiku-latest": "anthropic",
"claude-3-5-sonnet-latest": "anthropic",
"claude-3-opus-latest": "anthropic",