{"id":26,"date":"2026-04-27T18:53:02","date_gmt":"2026-04-27T18:53:02","guid":{"rendered":"https:\/\/ko4bep.net\/blog\/?p=26"},"modified":"2026-04-27T18:55:35","modified_gmt":"2026-04-27T18:55:35","slug":"building-a-local-ai-search-page-with-searxng-and-ollama","status":"publish","type":"post","link":"https:\/\/ko4bep.net\/blog\/index.php\/2026\/04\/27\/building-a-local-ai-search-page-with-searxng-and-ollama\/","title":{"rendered":"Building a Local AI Search Page with SearXNG and Ollama"},"content":{"rendered":"\n<h1 class=\"wp-block-heading\"><\/h1>\n\n\n\n<p>Most search engines now bolt an AI answer box onto the top of the results page. That can be useful, but it also means your query and whatever the model does with it are happening on somebody else\u2019s infrastructure.<\/p>\n\n\n\n<p>This project builds the same basic workflow locally:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>SearXNG handles normal web search.<\/li>\n\n\n\n<li>Ollama runs a local model.<\/li>\n\n\n\n<li>A tiny Flask wrapper shows search results immediately.<\/li>\n\n\n\n<li>AI answers are optional. You check a box when you want them.<\/li>\n\n\n\n<li>Apache or other reverse proxy can publish the whole thing under <code>\/search\/<\/code> on an existing site.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">What the final setup looks like<\/h2>\n\n\n\n<p>The public page is:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>https:&#47;&#47;YOUR_DOMAIN\/search\/<\/code><\/pre>\n\n\n\n<p>The local services are:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>SearXNG:   http:\/\/127.0.0.1:8080\nOllama:    http:\/\/127.0.0.1:11434\nAI search: http:\/\/0.0.0.0:5001<\/code><\/pre>\n\n\n\n<p>Of course, you can also use subdomains instead of directory based; I started using directories ages ago and have too much momentum to care about changing now.<\/p>\n\n\n\n<p>Not every search needs a summary. Sometimes you just want results. 
The browser hits the AI search wrapper, which loads normal SearXNG results and only calls Ollama when the user requests it.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Assumptions<\/h2>\n\n\n\n<p>This guide is based on my experience setting this up on my own rig. It should apply broadly to current Ubuntu and derivatives, possibly with some tinkering:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Ubuntu 25.10 or close enough.<\/li>\n\n\n\n<li>Docker Engine and Compose v2.<\/li>\n\n\n\n<li>Apache 2.4 as the reverse proxy (used in this doc; easily adapted to other reverse proxies).<\/li>\n\n\n\n<li>A machine that can run Ollama locally (my machine for reference: Ryzen 9 3900X, 128 GiB RAM, NVIDIA 4060 Ti with 16 GiB VRAM).<\/li>\n\n\n\n<li>A reverse proxy path of <code>\/search\/<\/code>.<\/li>\n<\/ul>\n\n\n\n<p>The commands use <code>\/opt\/ai-search<\/code>. Change that path if you want, but don\u2019t scatter the files around. Future-you will hate present-you.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Install Docker from the Docker repository<\/h2>\n\n\n\n<p>Ubuntu\u2019s Docker packages and Docker\u2019s official packages can conflict. Pick one lane. I use Docker\u2019s repository here.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>sudo apt update\nsudo apt install -y ca-certificates curl gnupg apache2\n\nsudo install -m 0755 -d \/etc\/apt\/keyrings\nsudo curl -fsSL https:\/\/download.docker.com\/linux\/ubuntu\/gpg \\\n  -o \/etc\/apt\/keyrings\/docker.asc\nsudo chmod a+r \/etc\/apt\/keyrings\/docker.asc\n\necho \\\n\"deb &#91;arch=$(dpkg --print-architecture) signed-by=\/etc\/apt\/keyrings\/docker.asc] \\\nhttps:\/\/download.docker.com\/linux\/ubuntu \\\n$(. 
\/etc\/os-release &amp;&amp; echo \"$VERSION_CODENAME\") stable\" | \\\nsudo tee \/etc\/apt\/sources.list.d\/docker.list &gt; \/dev\/null\n\nsudo apt update\nsudo apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin\nsudo systemctl enable --now docker<\/code><\/pre>\n\n\n\n<p>Check it:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>sudo docker version\ndocker compose version<\/code><\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">Create the project directory<\/h2>\n\n\n\n<pre class=\"wp-block-code\"><code>sudo mkdir -p \/opt\/ai-search\/{searxng,ollama}\nsudo chown -R \"$USER:$USER\" \/opt\/ai-search\ncd \/opt\/ai-search<\/code><\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">Docker Compose: SearXNG and Ollama<\/h2>\n\n\n\n<p>This setup uses Docker host networking. That is deliberate.<\/p>\n\n\n\n<p>I use a userspace VPN and system-wide Tailscale on the same rig (getting my money&#8217;s worth out of a gaming machine when I&#8217;m not gaming). Docker bridge networking and Docker\u2019s embedded DNS can get weird and create frustrating, time-wasting conflicts. Host networking removes that whole layer for this project. 
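<\/p>\n\n\n\n<p>Once the stack is running, you can see exactly what ended up bound on the host (<code>-ltnp<\/code>: listening TCP sockets, numeric ports, owning process):<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>sudo ss -ltnp | grep -E ':8080|:11434|:5001'<\/code><\/pre>\n\n\n\n<p>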
The tradeoff is that ports bind directly on the host, so do not run this blindly on a shared machine.<\/p>\n\n\n\n<p>Create <code>\/opt\/ai-search\/docker-compose.yml<\/code>:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>services:\n  searxng:\n    image: docker.io\/searxng\/searxng:latest\n    container_name: searxng\n    restart: unless-stopped\n    network_mode: host\n    volumes:\n      - .\/searxng:\/etc\/searxng\n\n  ollama:\n    image: docker.io\/ollama\/ollama:latest\n    container_name: ollama\n    restart: unless-stopped\n    network_mode: host\n    volumes:\n      - .\/ollama:\/root\/.ollama<\/code><\/pre>\n\n\n\n<p>Start it:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>cd \/opt\/ai-search\nsudo docker compose up -d<\/code><\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">SearXNG settings<\/h2>\n\n\n\n<p>Create <code>\/opt\/ai-search\/searxng\/settings.yml<\/code>:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>use_default_settings: true\n\ngeneral:\n  debug: false\n  instance_name: \"Search\"\n\nsearch:\n  safe_search: 0\n  autocomplete: duckduckgo\n  default_lang: en-US\n  formats:\n    - html\n    - json\n\nserver:\n  secret_key: \"CHANGE_THIS_TO_A_LONG_RANDOM_VALUE\"\n  base_url: http:\/\/127.0.0.1:8080\/\n  limiter: false\n  image_proxy: false\n  method: GET\n\nui:\n  infinite_scroll: false\n  query_in_title: true\n  results_on_new_tab: true\n\nplugins:\n  searx.plugins.hostnames.SXNGPlugin:\n    active: true\n  searx.plugins.tracker_url_remover.SXNGPlugin:\n    active: true\n  searx.plugins.calculator.SXNGPlugin:\n    active: true\n\nengines:\n  - name: brave\n    disabled: true\n  - name: karmasearch\n    disabled: true\n  - name: karmasearch videos\n    disabled: true\n  - name: mojeek\n    disabled: true\n  - name: yahoo\n    disabled: true<\/code><\/pre>\n\n\n\n<p>Generate a real secret key:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>python3 - &lt;&lt;'PY'\nimport secrets\nprint(secrets.token_hex(32))\nPY<\/code><\/pre>\n\n\n\n<p>Replace 
<code>CHANGE_THIS_TO_A_LONG_RANDOM_VALUE<\/code> with the generated value.<\/p>\n\n\n\n<p>Restart SearXNG:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>cd \/opt\/ai-search\nsudo docker compose restart searxng\ncurl -I http:\/\/127.0.0.1:8080<\/code><\/pre>\n\n\n\n<p>SearXNG must have JSON enabled because the AI wrapper reads search results through <code>\/search?q=...&amp;format=json<\/code>.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Pull a local model with Ollama<\/h2>\n\n\n\n<p>This uses Qwen 2.5 7B because it is small enough for normal local hardware and good enough for short search summaries. If you use a different model, swap the name in the commands below and in the <code>MODEL<\/code> constant in the wrapper.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>curl http:\/\/127.0.0.1:11434\/api\/pull \\\n  -d '{\"model\":\"qwen2.5:7b\",\"stream\":false}'\n\ncurl http:\/\/127.0.0.1:11434\/api\/tags<\/code><\/pre>\n\n\n\n<p>Test generation:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>curl http:\/\/127.0.0.1:11434\/api\/generate \\\n  -d '{\"model\":\"qwen2.5:7b\",\"prompt\":\"Reply with exactly: model working\",\"stream\":false}'<\/code><\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">Install Python dependencies<\/h2>\n\n\n\n<p>The wrapper is a small Flask app. 
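<\/p>\n\n\n\n<p>Before writing the wrapper, you can spot-check the JSON endpoint it will consume (assumes SearXNG is already answering on 127.0.0.1:8080 with <code>json<\/code> in <code>formats<\/code>):<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>curl -s \"http:\/\/127.0.0.1:8080\/search?q=linux+firewall&amp;format=json\" | python3 -m json.tool | head<\/code><\/pre>\n\n\n\n<p>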
For the service, use Gunicorn instead of Flask\u2019s development server.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>sudo apt update\nsudo apt install -y python3-flask python3-requests gunicorn<\/code><\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">The AI search wrapper<\/h2>\n\n\n\n<p>Create <code>\/opt\/ai-search\/ai_search_app.py<\/code>:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>from flask import Flask, request\nimport html\nimport requests\n\napp = Flask(__name__)\n\nSEARX_UI = \"http:\/\/127.0.0.1:8080\"\nSEARX_API = \"http:\/\/127.0.0.1:8080\/search\"\nOLLAMA_API = \"http:\/\/127.0.0.1:11434\/api\/generate\"\nMODEL = \"qwen2.5:7b\"\n\nSTYLE = \"\"\"\nbody {\n  margin: 0;\n  font-family: system-ui, sans-serif;\n  background: #111;\n  color: #eee;\n}\n.topbar {\n  padding: 12px;\n  background: #181818;\n  border-bottom: 1px solid #333;\n}\nform {\n  display: flex;\n  gap: 8px;\n}\ninput&#91;type=\"text\"] {\n  flex: 1;\n  padding: 10px;\n  font-size: 16px;\n}\nbutton {\n  padding: 10px 16px;\n  font-size: 16px;\n}\n.ai-box {\n  padding: 14px;\n  margin: 12px;\n  border: 1px solid #444;\n  border-radius: 8px;\n  background: #1b1b1b;\n}\n.ai-loading {\n  opacity: 0.75;\n}\n#raw-results {\n  background: #fff;\n  color: #111;\n  padding: 12px;\n}\n#raw-results a {\n  color: #0645ad;\n}\n\"\"\"\n\nSEARX_CSS = '&lt;link rel=\"stylesheet\" href=\"\/search\/raw\/static\/themes\/simple\/sxng-ltr.min.css\" type=\"text\/css\"&gt;'\n\n@app.route(\"\/\")\ndef index():\n    q = request.args.get(\"q\", \"\").strip()\n    ai_enabled = request.args.get(\"ai\") == \"1\"\n    q_html = html.escape(q)\n    checked = \"checked\" if ai_enabled else \"\"\n\n    if not q:\n        return f\"\"\"\n&lt;html&gt;\n&lt;head&gt;&lt;title&gt;AI Search&lt;\/title&gt;{SEARX_CSS}&lt;style&gt;{STYLE}&lt;\/style&gt;&lt;\/head&gt;\n&lt;body&gt;\n  &lt;div class=\"topbar\"&gt;\n    &lt;form action=\"\/search\/\" method=\"get\"&gt;\n      &lt;input type=\"text\" name=\"q\" autofocus 
placeholder=\"Search...\" \/&gt;\n      &lt;label style=\"display:flex;align-items:center;gap:6px\"&gt;\n        &lt;input type=\"checkbox\" name=\"ai\" value=\"1\"&gt;\n        include AI\n      &lt;\/label&gt;\n      &lt;button type=\"submit\"&gt;Search&lt;\/button&gt;\n    &lt;\/form&gt;\n  &lt;\/div&gt;\n&lt;\/body&gt;\n&lt;\/html&gt;\n\"\"\"\n\n    quoted_q = requests.utils.quote(q)\n    raw_url = \"\/search\/raw\/search?q=\" + quoted_q\n\n    if ai_enabled:\n        ai_block = \"\"\"\n  &lt;div id=\"ai\" class=\"ai-box ai-loading\"&gt;\n    &lt;b&gt;AI Answer&lt;\/b&gt;&lt;br&gt;&lt;br&gt;\n    Working...\n  &lt;\/div&gt;\n\"\"\"\n        ai_script = f\"\"\"\n  &lt;script&gt;\n    fetch(\"\/search\/answer?q=\" + encodeURIComponent({q!r}))\n      .then(r =&gt; r.text())\n      .then(t =&gt; {{\n        document.getElementById(\"ai\").classList.remove(\"ai-loading\");\n        document.getElementById(\"ai\").innerHTML = t;\n      }})\n      .catch(() =&gt; {{\n        document.getElementById(\"ai\").innerHTML = \"&lt;b&gt;AI Answer&lt;\/b&gt;&lt;br&gt;&lt;br&gt;Unavailable.\";\n      }});\n  &lt;\/script&gt;\n\"\"\"\n    else:\n        ai_block = f\"\"\"\n  &lt;div class=\"ai-box\"&gt;\n    &lt;b&gt;AI Answer&lt;\/b&gt;&lt;br&gt;&lt;br&gt;\n    &lt;a style=\"color:#9cf\" href=\"\/search\/?q={quoted_q}&amp;ai=1\"&gt;Generate AI summary&lt;\/a&gt;\n  &lt;\/div&gt;\n\"\"\"\n        ai_script = \"\"\n\n    return f\"\"\"\n&lt;html&gt;\n&lt;head&gt;&lt;title&gt;{q_html} - AI Search&lt;\/title&gt;{SEARX_CSS}&lt;style&gt;{STYLE}&lt;\/style&gt;&lt;\/head&gt;\n&lt;body&gt;\n  &lt;div class=\"topbar\"&gt;\n    &lt;form action=\"\/search\/\" method=\"get\"&gt;\n      &lt;input type=\"text\" name=\"q\" value=\"{q_html}\" \/&gt;\n      &lt;label style=\"display:flex;align-items:center;gap:6px\"&gt;\n        &lt;input type=\"checkbox\" name=\"ai\" value=\"1\" {checked}&gt;\n        include AI\n      &lt;\/label&gt;\n      &lt;button 
type=\"submit\"&gt;Search&lt;\/button&gt;\n      &lt;a style=\"color:#9cf;padding:10px\" href=\"{raw_url}\" target=\"_blank\"&gt;Open raw SearXNG&lt;\/a&gt;\n    &lt;\/form&gt;\n  &lt;\/div&gt;\n\n{ai_block}\n\n  &lt;div id=\"raw-results\"&gt;Loading search results...&lt;\/div&gt;\n\n  &lt;script&gt;\n    fetch(\"\/search\/raw-html?q=\" + encodeURIComponent({q!r}))\n      .then(r =&gt; r.text())\n      .then(t =&gt; {{\n        document.getElementById(\"raw-results\").innerHTML = t;\n      }})\n      .catch(() =&gt; {{\n        document.getElementById(\"raw-results\").innerHTML = \"Search results unavailable.\";\n      }});\n  &lt;\/script&gt;\n\n{ai_script}\n&lt;\/body&gt;\n&lt;\/html&gt;\n\"\"\"\n\n@app.route(\"\/raw-html\")\ndef raw_html():\n    q = request.args.get(\"q\", \"\").strip()\n    if not q:\n        return \"\"\n\n    r = requests.get(SEARX_UI + \"\/search\", params={\"q\": q}, timeout=45)\n    r.raise_for_status()\n    page = r.text\n\n    start = page.find('&lt;main id=\"main_results\"')\n    if start == -1:\n        return page\n\n    end = page.rfind(\"&lt;\/main&gt;\")\n    if end == -1:\n        return page&#91;start:]\n\n    return page&#91;start:end + len(\"&lt;\/main&gt;\")]\n\n@app.route(\"\/answer\")\ndef answer():\n    q = request.args.get(\"q\", \"\").strip()\n    if not q:\n        return \"\"\n\n    try:\n        sx = requests.get(SEARX_API, params={\"q\": q, \"format\": \"json\"}, timeout=30)\n        sx.raise_for_status()\n        data = sx.json()\n\n        lines = &#91;]\n        for r in data.get(\"results\", &#91;])&#91;:6]:\n            title = r.get(\"title\", \"\")\n            content = r.get(\"content\", \"\")\n            url = r.get(\"url\", \"\")\n            if title or content:\n                lines.append(f\"{title}\\n{content}\\n{url}\")\n\n        prompt = (\n            \"User query:\\n\" + q +\n            \"\\n\\nSearch results:\\n\" + \"\\n\\n\".join(lines) +\n            \"\\n\\nWrite a concise answer in 3 
bullet points. \"\n            \"Use only the provided search results. \"\n            \"If the results are weak or unrelated, say so.\"\n        )\n\n        ol = requests.post(\n            OLLAMA_API,\n            json={\"model\": MODEL, \"prompt\": prompt, \"stream\": False},\n            timeout=120,\n        )\n        ol.raise_for_status()\n\n        text = ol.json().get(\"response\", \"\").strip()\n        safe = html.escape(text).replace(\"\\n\", \"&lt;br&gt;\")\n        return \"&lt;b&gt;AI Answer&lt;\/b&gt;&lt;br&gt;&lt;br&gt;\" + safe\n\n    except Exception as e:\n        return \"&lt;b&gt;AI Answer&lt;\/b&gt;&lt;br&gt;&lt;br&gt;Unavailable: \" + html.escape(str(e))\n\nif __name__ == \"__main__\":\n    app.run(host=\"0.0.0.0\", port=5001)<\/code><\/pre>\n\n\n\n<p>Two small design choices are doing a lot of work here:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>The search results load first.<\/li>\n\n\n\n<li>AI only runs when <code>ai=1<\/code> is present.<\/li>\n<\/ul>\n\n\n\n<p>That keeps normal searches quick.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Run it as a service<\/h2>\n\n\n\n<p>Create <code>\/etc\/systemd\/system\/ai-search.service<\/code>:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>&#91;Unit]\nDescription=Local AI Search Wrapper\nAfter=network.target docker.service\nWants=docker.service\n\n&#91;Service]\nType=simple\nUser=YOUR_LOCAL_USER\nWorkingDirectory=\/opt\/ai-search\nExecStart=\/usr\/bin\/gunicorn -w 2 -b 0.0.0.0:5001 ai_search_app:app\nRestart=always\nRestartSec=3\n\n&#91;Install]\nWantedBy=multi-user.target<\/code><\/pre>\n\n\n\n<p>Set your local username:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>sudo sed -i \"s\/User=YOUR_LOCAL_USER\/User=$USER\/\" \/etc\/systemd\/system\/ai-search.service\nsudo systemctl daemon-reload\nsudo systemctl enable --now ai-search.service<\/code><\/pre>\n\n\n\n<p>Test locally:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>curl -I http:\/\/127.0.0.1:5001\ncurl -L 
\"http:\/\/127.0.0.1:5001\/?q=linux%20firewall\" | head\ncurl -L \"http:\/\/127.0.0.1:5001\/?q=linux%20firewall&amp;ai=1\" | head<\/code><\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">Apache reverse proxy<\/h2>\n\n\n\n<p>This example publishes the wrapper at <code>\/search\/<\/code> and raw SearXNG at <code>\/search\/raw\/<\/code>.<\/p>\n\n\n\n<p>Set the backend IP before using the config:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>AI_SEARCH_HOST=\"192.168.1.50\"<\/code><\/pre>\n\n\n\n<p>Use the actual LAN or VPN IP of the machine running the AI search service.<\/p>\n\n\n\n<p>Put this inside your existing Apache TLS vhost:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code># Local AI Search\n# Public paths:\n#   \/search\/      -&gt; AI wrapper\n#   \/search\/raw\/  -&gt; raw SearXNG assets and search page\n\nRedirectMatch 308 ^\/search$ \/search\/\n\nProxyPreserveHost On\nProxyRequests Off\n\n# RAW SEARXNG MUST COME FIRST\nProxyPass        \/search\/raw\/ http:\/\/AI_SEARCH_HOST:8080\/\nProxyPassReverse \/search\/raw\/ http:\/\/AI_SEARCH_HOST:8080\/\n\n# AI WRAPPER SECOND\nProxyPass        \/search\/ http:\/\/AI_SEARCH_HOST:5001\/\nProxyPassReverse \/search\/ http:\/\/AI_SEARCH_HOST:5001\/\n\n&lt;Location \/search\/raw\/&gt;\n    Require all granted\n\n    RequestHeader set X-Forwarded-Proto \"https\"\n    RequestHeader set X-Forwarded-Host \"YOUR_DOMAIN\"\n    RequestHeader set X-Forwarded-Prefix \"\/search\/raw\"\n    RequestHeader set X-Scheme \"https\"\n    RequestHeader set X-Script-Name \"\/search\/raw\"\n\n    RequestHeader set X-Real-IP %{REMOTE_ADDR}s\n    RequestHeader append X-Forwarded-For %{REMOTE_ADDR}s\n&lt;\/Location&gt;\n\n&lt;Location \/search\/&gt;\n    Require all granted\n\n    RequestHeader set X-Forwarded-Proto \"https\"\n    RequestHeader set X-Forwarded-Host \"YOUR_DOMAIN\"\n    RequestHeader set X-Forwarded-Prefix \"\/search\"\n    RequestHeader set X-Scheme \"https\"\n    RequestHeader set X-Script-Name \"\/search\"\n\n    RequestHeader 
set X-Real-IP %{REMOTE_ADDR}s\n    RequestHeader append X-Forwarded-For %{REMOTE_ADDR}s\n\n    SetEnvIf Request_URI \"^\/search\/\" dontlog\n&lt;\/Location&gt;<\/code><\/pre>\n\n\n\n<p>Replace <code>AI_SEARCH_HOST<\/code> and <code>YOUR_DOMAIN<\/code> before reloading Apache.<\/p>\n\n\n\n<p>The order matters. <code>\/search\/raw\/<\/code> must come before <code>\/search\/<\/code>, or Apache will send raw SearXNG requests to the wrapper.<\/p>\n\n\n\n<p>Enable modules and reload:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>sudo a2enmod proxy proxy_http headers rewrite ssl\nsudo apache2ctl configtest\nsudo systemctl reload apache2<\/code><\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">Test the final page<\/h2>\n\n\n\n<p>Open:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>https:&#47;&#47;YOUR_DOMAIN\/search\/<\/code><\/pre>\n\n\n\n<p>Search normally. Results should load without calling the model.<\/p>\n\n\n\n<p>Then check <code>include AI<\/code> and search again. Results should still load first, and the AI answer should appear after a few seconds.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Notes from the build<\/h2>\n\n\n\n<p>Do not start by hacking SearXNG plugins. That sounds cleaner than it is. The current SearXNG plugin system expects proper importable Python modules and fully qualified plugin class names. A wrapper avoids tying your project to SearXNG internals.<\/p>\n\n\n\n<p>Do not put <code>127.0.0.1<\/code> URLs in HTML that will be loaded by another machine. The user\u2019s browser interprets <code>127.0.0.1<\/code> as the user\u2019s own computer, not your server. Use public paths like <code>\/search\/raw\/search?...<\/code> and let Apache proxy them.<\/p>\n\n\n\n<p>Do not run AI on every query unless you really want the latency (or you&#8217;re keeping warm in winter).<\/p>\n\n\n\n<p>If you use a VPN and Docker networking explodes, try host networking for this stack. 
It is blunt, but it avoids a lot of route and DNS drama.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Useful references<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li>SearXNG Search API: https:\/\/docs.searxng.org\/dev\/search_api.html<\/li>\n\n\n\n<li>SearXNG JSON formats need to be enabled in <code>settings.yml<\/code>: https:\/\/docs.searxng.org\/dev\/search_api.html<\/li>\n\n\n\n<li>Ollama generate API: https:\/\/ollama.readthedocs.io\/en\/api\/<\/li>\n\n\n\n<li>Ollama pull API: https:\/\/docs.ollama.com\/api\/pull<\/li>\n\n\n\n<li>Docker host networking: https:\/\/docs.docker.com\/engine\/network\/drivers\/host\/<\/li>\n\n\n\n<li>Apache reverse proxy guide: https:\/\/httpd.apache.org\/docs\/2.4\/howto\/reverse_proxy.html<\/li>\n\n\n\n<li>Apache mod_proxy docs: https:\/\/httpd.apache.org\/docs\/current\/mod\/mod_proxy.html<\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>Most search engines now bolt an AI answer box onto the top of the results page. That can be useful, but it also means your query and whatever the model does with it are happening on somebody else\u2019s infrastructure. 
This project builds the same basic workflow locally: What the final setup looks like The public [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[7,3,10,9,8],"class_list":["post-26","post","type-post","status-publish","format-standard","hentry","category-uncategorized","tag-ai","tag-it","tag-privacy","tag-search","tag-selfhosted"],"_links":{"self":[{"href":"https:\/\/ko4bep.net\/blog\/index.php\/wp-json\/wp\/v2\/posts\/26","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ko4bep.net\/blog\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/ko4bep.net\/blog\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/ko4bep.net\/blog\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/ko4bep.net\/blog\/index.php\/wp-json\/wp\/v2\/comments?post=26"}],"version-history":[{"count":1,"href":"https:\/\/ko4bep.net\/blog\/index.php\/wp-json\/wp\/v2\/posts\/26\/revisions"}],"predecessor-version":[{"id":27,"href":"https:\/\/ko4bep.net\/blog\/index.php\/wp-json\/wp\/v2\/posts\/26\/revisions\/27"}],"wp:attachment":[{"href":"https:\/\/ko4bep.net\/blog\/index.php\/wp-json\/wp\/v2\/media?parent=26"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/ko4bep.net\/blog\/index.php\/wp-json\/wp\/v2\/categories?post=26"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/ko4bep.net\/blog\/index.php\/wp-json\/wp\/v2\/tags?post=26"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}