mirror of https://github.com/Azgaar/Fantasy-Map-Generator.git
synced 2025-12-16 17:31:24 +01:00

refactor: ollama generation

This commit is contained in:
parent fe2fa6d6b8
commit bba3587e50

3 changed files with 81 additions and 254 deletions
index.html (15 lines changed)

@@ -4978,16 +4978,19 @@
         Temperature:
         <input id="aiGeneratorTemperature" type="number" min="-1" max="2" step=".1" class="icon-key" />
       </label>
-      <label for="aiGeneratorKey" >Key:
-        <input id="aiGeneratorKey" placeholder="Enter API key" class="icon-key" />
+      <label for="aiGeneratorKey"
+        >Key:
+        <input
+          id="aiGeneratorKey"
+          placeholder="Enter API key"
+          class="icon-key"
+          data-tip="Enter API key. Note: the Generator doesn't store the key or any generated data"
+        />
         <button
           id="aiGeneratorKeyHelp"
           class="icon-help-circled"
-          data-tip="Open provider's website to get the API key there. Note: the Map Generator doesn't store the key or any generated data"
+          data-tip="Click to see the usage instructions"
         />
-        <div id="ollamaHint" style="display: none; font-size: 0.85em; color: #999; margin-top: 0.5em;">
-          Using Ollama requires it to be running locally on your machine at http://localhost:11434, so it cannot be used with the hosted version of the Map Generator on the official website.
-        </div>
       </label>
     </div>
   </div>
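The removed inline hint survives in the new tooltip and wiki page: Ollama has to be running locally at http://localhost:11434, so the provider cannot work against the hosted site. A quick way to verify that precondition, assuming Ollama's standard `/api/tags` listing endpoint (illustrative, not part of this commit):

```javascript
// Run in the browser console: checks that a local Ollama instance is reachable.
// GET /api/tags lists the installed models and needs no request body.
fetch("http://localhost:11434/api/tags")
  .then(res => res.json())
  .then(json => console.log("Ollama is up; models:", json.models.map(m => m.name)))
  .catch(() => console.warn("Ollama is not reachable at http://localhost:11434"));
```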
(deleted file, 78 lines)

@@ -1,78 +0,0 @@
-## Recent Changes (May 18, 2025)
-
-### Ollama Integration for AI Text Generation
-
-An integration with [Ollama](https://ollama.com/) has been added as a new provider for the AI text generator feature, allowing users to leverage local large language models.
-
-**Key Changes:**
-
-* **New Provider:** "Ollama" is now available in the AI generator's model/provider selection.
-* **Model Name as Key:** When Ollama is selected, the "API Key" input field is repurposed to accept the Ollama model name (e.g., `llama3`, `mistral`) instead of a traditional API key.
-* **Local Endpoint:** The integration communicates with a local Ollama instance. Configuration details below.
-* **Streaming Support:** Responses from Ollama are streamed into the text area.
-
-## Ollama Setup and Configuration
-
-To use Ollama with Fantasy Map Generator, ensure Ollama is correctly running and configured on your machine.
-
-**1. Install Ollama:**
-
-* Download and install Ollama from [ollama.com](https://ollama.com/).
-* Download the desired models (e.g., `ollama run llama3`).
-
-**2. Configure Ollama for Network Access (Crucial Step):**
-
-By default, Ollama may only listen for connections from the same machine (`localhost` or `127.0.0.1`). For Fantasy Map Generator to access Ollama, especially from other devices on your local network, you must configure Ollama to listen on all network interfaces and to allow cross-origin requests.
-
-* **Set the `OLLAMA_HOST` environment variable:**
-  * This variable tells Ollama which network interfaces to listen on.
-  * **Action:** Set `OLLAMA_HOST` to `0.0.0.0`.
-  * **How to set (Windows, permanent):**
-    1. Search for "Edit the system environment variables" in the Windows search bar.
-    2. Click "Environment Variables...".
-    3. In the "System variables" section (bottom pane), click "New..." (or "Edit..." if the variable already exists).
-    4. Variable name: `OLLAMA_HOST`
-    5. Variable value: `0.0.0.0`
-    6. Click "OK" on all dialogs.
-    7. **Restart your PC** for the change to take effect for all processes.
-  * **How to set (Linux/macOS, per session or persistent):**
-    1. **Per session:** in your terminal, before running `ollama serve`: `export OLLAMA_HOST="0.0.0.0"`
-    2. **Persistent:** add `export OLLAMA_HOST="0.0.0.0"` to your shell profile file (e.g., `~/.bashrc`, `~/.zshrc`), then `source` the file or restart your terminal.
-
-* **Set the `OLLAMA_ORIGINS` environment variable (CORS configuration):**
-  * This variable is essential: it is what lets the browser run JavaScript from one origin (Fantasy Map Generator, served on port 8000) against Ollama on a different port (11434).
-  * **Action:** Set `OLLAMA_ORIGINS` to allow your Fantasy Map Generator's origin.
-  * **How to set (Windows, permanent):** follow the same steps as for `OLLAMA_HOST`, but use:
-    * Variable name: `OLLAMA_ORIGINS`
-    * Variable value: `http://<YOUR_PC_IP_ADDRESS>:8000` (e.g., `http://192.168.178.46:8000`)
-    * **For development (easiest):** you can use `*` as the value (`OLLAMA_ORIGINS=*`) to allow all origins. This is less secure for production but simplifies testing.
-    * **Restart your PC** after setting the variable.
-  * **How to set (Linux/macOS, per session or persistent):**
-    1. **Per session:** `export OLLAMA_ORIGINS="http://<YOUR_PC_IP_ADDRESS>:8000"` or `export OLLAMA_ORIGINS="*"`
-    2. **Persistent:** add the `export` line to your shell profile file.
-
-* **Firewall configuration:**
-  * Ensure your PC's firewall (e.g., Windows Defender Firewall) is not blocking incoming connections to Ollama's default port, `11434`.
-  * **Action:** Create an inbound rule to allow TCP traffic on port `11434`.
-
-**3. Configure Fantasy Map Generator's `ai-generator.js`:**
-
-The `ai-generator.js` file needs to point to the correct Ollama endpoint.
-
-* **Scenario A: Using only on the same machine (`localhost`):**
-  * Ensure the `fetch` call in the `generateWithOllama` function (inside `modules/ui/ai-generator.js`) points to `http://localhost:11434/api/generate`. This is the default.
-
-* **Scenario B: Using from other machines on the local network:**
-  * You **must** change the `fetch` call in the `generateWithOllama` function (inside `modules/ui/ai-generator.js`) to use the actual local IP address of the machine where Ollama is running.
-  * **Example:**
-    ```javascript
-    // Inside modules/ui/ai-generator.js, within the generateWithOllama function:
-    const response = await fetch("http://192.168.178.46:11434/api/generate", { // replace with your PC's IP
-    ```
-  * **How to find your PC's IP:**
-    * **Windows:** open Command Prompt (`cmd`) and type `ipconfig`. Look for "IPv4 Address" under your active network adapter.
-    * **Linux/macOS:** open a terminal and type `ip addr show` or `ifconfig`.
-
----
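The removed notes boil down to one cross-origin requirement: the browser will only call Ollama from another origin if `OLLAMA_HOST` and `OLLAMA_ORIGINS` permit it. A minimal sketch of the Scenario B request under those assumptions (the LAN address and model name are placeholders; run in the browser console or another async context):

```javascript
// Assumes Ollama was started with OLLAMA_HOST=0.0.0.0 and OLLAMA_ORIGINS set to this
// page's origin (or "*"); otherwise the browser rejects the call at CORS preflight.
const OLLAMA_ENDPOINT = "http://192.168.178.46:11434/api/generate"; // placeholder LAN IP

const response = await fetch(OLLAMA_ENDPOINT, {
  method: "POST",
  headers: {"Content-Type": "application/json"},
  body: JSON.stringify({model: "llama3", prompt: "Name a fantasy river", stream: false})
});
// With stream: false, Ollama answers with a single JSON object; "response" holds the text
const {response: text} = await response.json();
console.log(text);
```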
modules/ui/ai-generator.js

@@ -10,7 +10,7 @@ const PROVIDERS = {
     generate: generateWithAnthropic
   },
   ollama: {
-    keyLink: "https://ollama.com/library",
+    keyLink: "https://github.com/Azgaar/Fantasy-Map-Generator/wiki/Ollama-text-generation",
     generate: generateWithOllama
   }
 };
@@ -27,15 +27,11 @@ const MODELS = {
   "claude-3-5-haiku-latest": "anthropic",
   "claude-3-5-sonnet-latest": "anthropic",
   "claude-3-opus-latest": "anthropic",
-  "Ollama (enter model in key field)": "ollama"
+  "ollama (local models)": "ollama"
 };
 
 const SYSTEM_MESSAGE = "I'm working on my fantasy map.";
 
-if (typeof modules.generateWithAi_setupDone === 'undefined') {
-  modules.generateWithAi_setupDone = false;
-}
-
 async function generateWithOpenAI({key, model, prompt, temperature, onContent}) {
   const headers = {
     "Content-Type": "application/json",
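For orientation, the renamed entry plugs into the same two-step lookup as the other providers; illustratively (a restatement of the tables above, not new code):

```javascript
// Walk through the lookup chain defined above:
const model = "ollama (local models)";   // label chosen in the #aiGeneratorModel dropdown
const provider = MODELS[model];          // -> "ollama"
const {keyLink, generate} = PROVIDERS[provider];
// keyLink  -> the wiki page the help button now opens
// generate -> generateWithOllama, called as generate({key, model, prompt, temperature, onContent})
```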
@@ -58,7 +54,7 @@ async function generateWithOpenAI({key, model, prompt, temperature, onContent}) {
     if (content) onContent(content);
   };
 
-  await handleStream(response, getContent, "openai");
+  await handleStream(response, getContent);
 }
 
 async function generateWithAnthropic({key, model, prompt, temperature, onContent}) {
@@ -82,59 +78,38 @@ async function generateWithAnthropic({key, model, prompt, temperature, onContent}) {
     if (content) onContent(content);
   };
 
-  await handleStream(response, getContent, "anthropic");
+  await handleStream(response, getContent);
 }
 
 async function generateWithOllama({key, model, prompt, temperature, onContent}) {
-  // For Ollama, 'key' is the actual model name entered by the user.
-  // 'model' is the value from the dropdown, e.g., "Ollama (enter model in key field)".
-  const ollamaModelName = key;
-
-  const headers = {
-    "Content-Type": "application/json"
-  };
-
-  const body = {
-    model: ollamaModelName,
-    prompt: prompt,
-    system: SYSTEM_MESSAGE,
-    options: {
-      temperature: temperature
-    },
-    stream: true
-  };
+  const ollamaModelName = key; // for Ollama, 'key' is the actual model name entered by the user
 
   const response = await fetch("http://localhost:11434/api/generate", {
     method: "POST",
-    headers,
-    body: JSON.stringify(body)
+    headers: {"Content-Type": "application/json"},
+    body: JSON.stringify({
+      model: ollamaModelName,
+      prompt,
+      system: SYSTEM_MESSAGE,
+      options: {temperature},
+      stream: true
+    })
   });
 
   const getContent = json => {
-    // Ollama streams JSON objects with a "response" field for content
-    // and "done": true in the final message (which might have an empty response).
-    if (json.response) {
-      onContent(json.response);
-    }
+    if (json.response) onContent(json.response);
   };
 
-  await handleStream(response, getContent, "ollama");
+  await handleStream(response, getContent);
 }
 
-async function handleStream(response, getContent, providerType) {
+async function handleStream(response, getContent) {
   if (!response.ok) {
     let errorMessage = `Failed to generate (${response.status} ${response.statusText})`;
     try {
       const json = await response.json();
-      if (providerType === "ollama" && json?.error) {
-        errorMessage = json.error;
-      } else {
-        errorMessage = json?.error?.message || json?.error || `Failed to generate (${response.status} ${response.statusText})`;
-      }
-    } catch (e) {
-      ERROR && console.error("Failed to parse error response JSON:", e)
-    }
+      errorMessage = json.error?.message || json.error || errorMessage;
+    } catch {}
     throw new Error(errorMessage);
   }
 
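For context on what the simplified `getContent` receives: with `stream: true`, Ollama's `/api/generate` responds with newline-delimited JSON, one object per chunk, closing with a `done: true` object whose `response` may be empty. A sketch with illustrative values:

```javascript
// Illustrative NDJSON chunks as Ollama streams them:
const chunks = [
  '{"model":"llama3","response":"Once","done":false}',
  '{"model":"llama3","response":" upon a time","done":false}',
  '{"model":"llama3","response":"","done":true}' // final chunk: empty response, done flag set
];

// Mirrors the getContent above: append only non-empty "response" fields
let text = "";
chunks.map(line => JSON.parse(line)).forEach(json => {
  if (json.response) text += json.response;
});
console.log(text); // "Once upon a time"
```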
@@ -151,24 +126,14 @@ async function handleStream(response, getContent, providerType) {
 
     for (let i = 0; i < lines.length - 1; i++) {
       const line = lines[i].trim();
-      if (providerType === "ollama") {
-        if (line) {
-          try {
-            const json = JSON.parse(line);
-            getContent(json);
-          } catch (jsonError) {
-            ERROR && console.error(`Failed to parse JSON from Ollama:`, jsonError, `Line: ${line}`);
-          }
-        }
-      } else {
-        if (line.startsWith("data: ") && line !== "data: [DONE]") {
-          try {
-            const json = JSON.parse(line.slice(6));
-            getContent(json);
-          } catch (jsonError) {
-            ERROR && console.error(`Failed to parse JSON:`, jsonError, `Line: ${line}`);
-          }
-        }
-      }
+      if (!line) continue;
+      if (line === "data: [DONE]") break;
+
+      try {
+        const parsed = line.startsWith("data: ") ? JSON.parse(line.slice(6)) : JSON.parse(line);
+        getContent(parsed);
+      } catch (error) {
+        ERROR && console.error("Failed to parse line:", line, error);
+      }
     }
   }
 
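The refactor collapses the per-provider branches into a single parse path; the two line shapes it must accept look like this (illustrative chunks, OpenAI-style SSE framing versus Ollama's bare NDJSON):

```javascript
// The two line formats the unified loop above handles:
const lines = [
  'data: {"choices":[{"delta":{"content":"Hi"}}]}', // SSE-framed JSON (OpenAI-style)
  '{"response":"Hi","done":false}',                 // bare NDJSON line (Ollama)
  'data: [DONE]'                                    // SSE terminator -> loop breaks
];

for (const raw of lines) {
  const line = raw.trim();
  if (!line) continue;
  if (line === "data: [DONE]") break;
  // strip the "data: " prefix when present, then parse either way
  const parsed = line.startsWith("data: ") ? JSON.parse(line.slice(6)) : JSON.parse(line);
  console.log(parsed);
}
```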
@@ -177,59 +142,65 @@ async function handleStream(response, getContent, providerType) {
 }
 
 function generateWithAi(defaultPrompt, onApply) {
-  function updateDialogElements() {
+  updateValues();
+
+  $("#aiGenerator").dialog({
+    title: "AI Text Generator",
+    position: {my: "center", at: "center", of: "svg"},
+    resizable: false,
+    buttons: {
+      Generate: function (e) {
+        generate(e.target);
+      },
+      Apply: function () {
+        const result = byId("aiGeneratorResult").value;
+        if (!result) return tip("No result to apply", true, "error", 4000);
+        onApply(result);
+        $(this).dialog("close");
+      },
+      Close: function () {
+        $(this).dialog("close");
+      }
+    }
+  });
+
+  if (modules.generateWithAi) return;
+  modules.generateWithAi = true;
+
+  byId("aiGeneratorKeyHelp").on("click", function (e) {
+    const model = byId("aiGeneratorModel").value;
+    const provider = MODELS[model];
+    openURL(PROVIDERS[provider].keyLink);
+  });
+
+  function updateValues() {
     byId("aiGeneratorResult").value = "";
     byId("aiGeneratorPrompt").value = defaultPrompt;
     byId("aiGeneratorTemperature").value = localStorage.getItem("fmg-ai-temperature") || "1";
 
     const select = byId("aiGeneratorModel");
-    const currentModelVal = select.value;
     select.options.length = 0;
     Object.keys(MODELS).forEach(model => select.options.add(new Option(model, model)));
-
-    const storedModel = localStorage.getItem("fmg-ai-model");
-    if (storedModel && MODELS[storedModel]) {
-      select.value = storedModel;
-    } else if (currentModelVal && MODELS[currentModelVal]) {
-      select.value = currentModelVal;
-    } else {
-      select.value = DEFAULT_MODEL;
-    }
-    if (!select.value || !MODELS[select.value]) select.value = DEFAULT_MODEL;
+    select.value = localStorage.getItem("fmg-ai-model");
+    if (!select.value || !MODELS[select.value]) select.value = DEFAULT_MODEL;
 
     const provider = MODELS[select.value];
-    const keyInput = byId("aiGeneratorKey");
-    if (keyInput) {
-      keyInput.value = localStorage.getItem(`fmg-ai-kl-${provider}`) || "";
-      if (provider === "ollama") {
-        keyInput.placeholder = "Enter Ollama model name (e.g., llama3)";
-      } else {
-        keyInput.placeholder = "Enter API Key";
-      }
-    } else {
-      ERROR && console.error("AI Generator: Could not find 'aiGeneratorKey' element in updateDialogElements.");
-    }
+    byId("aiGeneratorKey").value = localStorage.getItem(`fmg-ai-kl-${provider}`) || "";
   }
 
-  async function doGenerate(button) {
+  async function generate(button) {
     const key = byId("aiGeneratorKey").value;
-    const modelValue = byId("aiGeneratorModel").value;
-    const provider = MODELS[modelValue];
-
-    if (provider !== "ollama" && !key) {
-      return tip("Please enter an API key", true, "error", 4000);
-    }
-    if (provider === "ollama" && !key) {
-      return tip("Please enter the Ollama model name in the key field", true, "error", 4000);
-    }
-    if (!modelValue) return tip("Please select a model", true, "error", 4000);
-
-    localStorage.setItem("fmg-ai-model", modelValue);
+    if (!key) return tip("Please enter an API key", true, "error", 4000);
+
+    const model = byId("aiGeneratorModel").value;
+    if (!model) return tip("Please select a model", true, "error", 4000);
+    localStorage.setItem("fmg-ai-model", model);
+
+    const provider = MODELS[model];
     localStorage.setItem(`fmg-ai-kl-${provider}`, key);
 
-    const promptText = byId("aiGeneratorPrompt").value;
-    if (!promptText) return tip("Please enter a prompt", true, "error", 4000);
+    const prompt = byId("aiGeneratorPrompt").value;
+    if (!prompt) return tip("Please enter a prompt", true, "error", 4000);
 
     const temperature = byId("aiGeneratorTemperature").valueAsNumber;
     if (isNaN(temperature)) return tip("Temperature must be a number", true, "error", 4000);
@@ -240,83 +211,14 @@ function generateWithAi(defaultPrompt, onApply) {
     const resultArea = byId("aiGeneratorResult");
     resultArea.disabled = true;
     resultArea.value = "";
-    const onContentCallback = content => (resultArea.value += content);
+    const onContent = content => (resultArea.value += content);
 
-      await PROVIDERS[provider].generate({key: key, model: modelValue, prompt: promptText, temperature, onContent: onContentCallback});
+      await PROVIDERS[provider].generate({key, model, prompt, temperature, onContent});
     } catch (error) {
-      tip(error.message, true, "error", 4000);
+      return tip(error.message, true, "error", 4000);
     } finally {
       button.disabled = false;
       byId("aiGeneratorResult").disabled = false;
     }
   }
-
-  $("#aiGenerator").dialog({
-    title: "AI Text Generator",
-    position: {my: "center", at: "center", of: "svg"},
-    resizable: false,
-    width: Math.min(600, window.innerWidth - 20),
-    modal: true,
-    open: function() {
-      if (!modules.generateWithAi_setupDone) {
-        const keyHelpButton = byId("aiGeneratorKeyHelp");
-        if (keyHelpButton) {
-          keyHelpButton.addEventListener("click", function () {
-            const modelValue = byId("aiGeneratorModel").value;
-            const provider = MODELS[modelValue];
-            if (provider === "ollama") {
-              openURL(PROVIDERS.ollama.keyLink);
-            } else if (provider && PROVIDERS[provider] && PROVIDERS[provider].keyLink) {
-              openURL(PROVIDERS[provider].keyLink);
-            }
-          });
-        } else {
-          ERROR && console.error("AI Generator: Could not find 'aiGeneratorKeyHelp' element for event listener.");
-        }
-
-        const modelSelect = byId("aiGeneratorModel");
-        if (modelSelect) {
-          modelSelect.addEventListener("change", function() {
-            const newModelValue = this.value;
-            const newProvider = MODELS[newModelValue];
-            const keyInput = byId("aiGeneratorKey");
-            if (keyInput) {
-              if (newProvider === "ollama") {
-                keyInput.placeholder = "Enter Ollama model name (e.g., llama3)";
-              } else {
-                keyInput.placeholder = "Enter API Key";
-              }
-              keyInput.value = localStorage.getItem(`fmg-ai-kl-${newProvider}`) || "";
-            } else {
-              ERROR && console.error("AI Generator: Could not find 'aiGeneratorKey' element during model change listener.");
-            }
-          });
-        } else {
-          ERROR && console.error("AI Generator: Could not find 'aiGeneratorModel' element for event listener.");
-        }
-        modules.generateWithAi_setupDone = true;
-      }
-
-      updateDialogElements();
-    },
-    buttons: {
-      "Generate": function (e) {
-        doGenerate(e.currentTarget || e.target);
-      },
-      "Apply": function () {
-        const result = byId("aiGeneratorResult").value;
-        if (!result) return tip("No result to apply", true, "error", 4000);
-        onApply(result);
-        $(this).dialog("close");
-      },
-      "Close": function () {
-        $(this).dialog("close");
-      }
-    }
-  });
 }
 
-window.generateWithAi = generateWithAi;