
Aperture - AI-Powered Recommendations for Emby


Recommended Posts

GoldSpacer
Posted

It's working well now, I appreciate you!

  • Thanks 1
TheGru
Posted

A little sneak peek into what's coming: Similarity Graphs
image.thumb.png.5eeac16a472f3f8f00fe7aaa035e1557.png

image.thumb.png.44cf55fd69ad73807b460dd209c0b550.png

  • Like 4
FlameRed
Posted

I attempted to use this wonderful-looking tool for the first time on a QNAP running QuTS 5 and I am having a bit of an issue. When the container starts, I see the following logged every few seconds:

🔮 Running database migrations...
{"level":30,"time":1768339087398,"pid":1,"hostname":"894401f83755","name":"aperture","module":"uploads","dir":"/aperture-libraries/.aperture-data/uploads","msg":"Uploads system initialized"}
{"level":50,"time":1768339090471,"pid":1,"hostname":"894401f83755","name":"aperture","module":"migrations","err":{"type":"Error","message":"connect EHOSTUNREACH 192.168.68.48:5432","stack":"Error: connect EHOSTUNREACH 192.168.68.48:5432\n    at /app/node_modules/.pnpm/pg-pool@3.10.1_pg@8.16.3/node_modules/pg-pool/index.js:45:11\n    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\n    at async runMigrations (file:///app/packages/core/dist/migrations.js:55:24)\n    at async main (file:///app/apps/api/dist/index.js:34:28)","errno":-113,"code":"EHOSTUNREACH","syscall":"connect","address":"192.168.68.48","port":5432},"msg":"Migration failed"}
Failed to run migrations: Error: connect EHOSTUNREACH 192.168.68.48:5432
    at /app/node_modules/.pnpm/pg-pool@3.10.1_pg@8.16.3/node_modules/pg-pool/index.js:45:11
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async runMigrations (file:///app/packages/core/dist/migrations.js:55:24)
    at async main (file:///app/apps/api/dist/index.js:34:28) {
  errno: -113,
  code: 'EHOSTUNREACH',
  syscall: 'connect',
  address: '192.168.68.48',
  port: 5432
}

I am using the stock QNAP file found here with two unused IP addresses, and my subnet mask and network gateway changed appropriately. I think the issue is that the aperture container cannot talk to aperture.db, or possibly the internet, because the complex QNAP network virtualization is not quite right.
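For reference, a minimal sketch of the shape that usually works: both services on the same user-defined bridge network, so the app reaches Postgres by service name instead of a LAN IP that the virtual switch may not route. Service names, image tags, and credentials below are placeholders, not the stock QNAP file.

```yaml
# Minimal sketch (placeholder names/credentials): both services share one
# user-defined network, so "db" resolves by DNS inside the aperture container.
services:
  aperture:
    image: ghcr.io/dgruhin-hrizn/aperture:latest   # tag assumed
    environment:
      # Reaches the db service by name over the shared network,
      # instead of a LAN IP the container cannot route to (EHOSTUNREACH).
      DATABASE_URL: postgres://aperture:changeme@db:5432/aperture
    ports:
      - "3456:3456"
    depends_on:
      - db
    networks:
      - aperture-net

  db:
    image: pgvector/pgvector:pg16   # image assumed
    environment:
      POSTGRES_USER: aperture
      POSTGRES_PASSWORD: changeme
      POSTGRES_DB: aperture
    networks:
      - aperture-net

networks:
  aperture-net:
    driver: bridge
```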

Anyone have this working on a QNAP under QuTS willing to share their docker-compose.yml?

Thank you in advance, and thanks to the author for such a much-needed tool!

TheGru
Posted
57 minutes ago, FlameRed said:

I attempted to use this wonderful-looking tool for the first time on a QNAP running QuTS 5 and I am having a bit of an issue.

Can you share your docker compose file? you can send it to me in a private message

TheGru
Posted (edited)

Aperture v0.4.1 Release Notes

Hey everyone! 👋

This is a major feature release introducing Similarity Graphs — a new interactive way to explore connections between movies and series in your library. Plus, you can now create playlists directly from your graph explorations!


🕸️ Similarity Graphs

Discover how your movies and series are connected through directors, actors, genres, collections, and more with our new interactive graph visualization.

What It Does

On any movie or series detail page, you'll now see a Graph tab alongside the traditional list view of similar content.

image.thumb.png.db9f4c49533b113528632c9875b296f8.png

This graph displays:

  • Poster nodes — Each movie/series is shown as its actual poster
  • Color-coded connections — Lines between posters show why they're related:

    🔵 Blue — Same director

    🌊 Teal — Shared actors

    🥇 Gold — Same collection/franchise

    💜 Purple — Shared genres

    💗 Pink — Shared keywords/themes

    🟠 Orange — Same studio

    🟢 Green — Same network (TV)

    🩶 Gray — Vector similarity (AI-detected)

    💚 Emerald — AI Discovery (bubble breaker)

Interactive Features

  • Click any poster to refocus the graph on that item (rabbit-hole exploration!)
  • Double-click to navigate to that movie/series detail page
  • Drag posters to rearrange the layout
  • Scroll to zoom in and out
  • Hover on connections to see detailed relationship info
  • Breadcrumb navigation tracks your exploration path

Fullscreen Mode

Click the fullscreen button to expand the graph for immersive exploration:

  • Graph expands to 3 levels deep — see connections of connections of connections
  • More items displayed (up to 35 nodes)
  • Create Playlist button to save your discoveries
  • Same drag/zoom/click interactions

image.thumb.png.d6a763f0f8894f654e71985ec892c591.png


🧠 Smart Bubble Breaking

The graph is smart about preventing you from getting stuck in a franchise bubble:

Collection Limits

  • The graph won't show all 26 James Bond films — it intelligently limits items per franchise
  • Larger franchises get slightly higher limits to stay representative
  • This encourages discovery of diverse content

AI-Powered Escape

When the graph detects you're stuck in a bubble (too many items from the same collection), it calls the AI to suggest thematically similar content from different franchises:

  • These AI discoveries appear with emerald green connections
  • They're semantically related but offer fresh perspectives
  • Example: If exploring Star Wars, AI might suggest Dune, Battlestar Galactica, or Foundation

Connection Validation

The graph validates connections to prevent weird matches:

  • Title pattern detection — Catches when movies match only because they both have "Return of..." in the title
  • Genre gating — Requires at least one shared genre for a valid connection
  • Collection diversity — Prevents unrelated franchises from chaining together
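As a rough sketch of how this kind of gating can work (hypothetical names and thresholds, not Aperture's actual code): a candidate connection is only kept if the two items share a genre, and per-collection counts are capped so one franchise cannot flood the graph.

```typescript
// Hypothetical sketch of connection validation: genre gating plus
// per-collection caps. Not Aperture's actual implementation.
interface Item {
  id: string;
  genres: string[];
  collection?: string;
}

// Genre gating: require at least one shared genre for a valid connection.
function sharesGenre(a: Item, b: Item): boolean {
  return a.genres.some((g) => b.genres.includes(g));
}

// Collection limits: keep at most `cap` items per franchise so the graph
// stays diverse; items without a collection always pass through.
function applyCollectionCap(items: Item[], cap = 3): Item[] {
  const seen = new Map<string, number>();
  return items.filter((item) => {
    const key = item.collection ?? item.id;
    const n = seen.get(key) ?? 0;
    if (n >= cap) return false;
    seen.set(key, n + 1);
    return true;
  });
}
```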

📋 Graph Playlists

Turn your graph explorations into playlists that sync to your media server!

Creating a Playlist

  • Open a similarity graph and go fullscreen
  • Explore until you have a nice collection of related content
  • Click Create Playlist
  • Choose a name (or click ✨ for AI-generated name)
  • Add a description (or click ✨ for AI-generated description)
  • Click Create

image.thumb.png.df25c8bb25bac48d80c57b5c50eeb01f.png

What Happens

  • Playlist is created in Emby/Jellyfin with all the graph items
  • Description/overview syncs to your media server
  • A record is saved in Aperture for tracking
  • Graph playlists appear on the Playlists page with a hub icon badge

image.png.ec867a5b018369fadb82079a5d2caeed.png

Managing Graph Playlists

  • View all playlists (both channel-based and graph-based) on the Playlists page
  • Delete with a confirmation dialog (no more accidental browser confirms!)
  • Graph playlists show their source item and item count

 

image.png.6bfe9afc49668cac7061cc04ddef2a55.png

image.thumb.png.fd73dde6bbf622697d62f3e0909763c0.png


⚙️ Similarity Graph Preferences

New user preferences to customize your graph experience:

Hide Watched Content (Default: ON)

When enabled, movies and series you've already watched won't appear in the similarity graph. This helps you discover new content rather than resurfacing titles you've already seen.

Full Franchise Mode (Default: OFF)

When enabled, the collection limits are disabled and you can see entire franchises in the graph. Useful when you want to explore all entries in a series like James Bond, Marvel, or Star Trek.

Access these settings: User Settings → Preferences tab → Similarity Graph section

image.thumb.png.78c36df563be27dcc9f98b906214a286.png


🚀 Getting Started

  • Update to v0.4.1, which will run database migrations
  • Navigate to any movie or series detail page
  • Click the Graph tab to open the similarity visualization
  • Explore! Click posters to dive deeper, double-click to visit detail pages
  • Go fullscreen for the full experience with playlist creation

Migration Notes

Three new database migrations will run automatically:

  • 0072_similarity_validation_cache.sql
  • 0073_graph_playlists.sql
  • 0074_similarity_preferences.sql

No manual intervention required.


Enjoy exploring your library in a whole new way! 🎬🕸️

Let me know if you have any feedback or run into issues.

Edited by TheGru
  • Like 2
TheGru
Posted

Aperture v0.4.2 - QNAP Networking Improvements

Release Date: January 14, 2026

What's New

Improved QNAP Docker Compose Configuration

Updated docker-compose.qnap.yml with comprehensive troubleshooting documentation for networking issues. Many QNAP users were experiencing connectivity problems where database migrations would succeed, but the web interface remained inaccessible on port 3456.

Key Improvements:

  • Added detailed troubleshooting section explaining common symptoms and solutions
  • Documented host networking mode as an alternative to qnet for complex QNAP setups
  • Clear instructions for switching between networking modes
  • Highlighted the critical DATABASE_URL difference between modes (@db:5432 vs @localhost:5432)
  • Both qnet and host networking options are now clearly labeled in the compose file
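The DATABASE_URL difference between the two modes boils down to one line (a sketch; credentials and the "db" service name are placeholders):

```yaml
services:
  aperture:
    environment:
      # qnet mode: containers talk over the compose network,
      # so the app reaches Postgres by its service name:
      DATABASE_URL: postgres://aperture:changeme@db:5432/aperture
      # host mode: both containers share the NAS network stack,
      # so Postgres is reached on localhost instead:
      # DATABASE_URL: postgres://aperture:changeme@localhost:5432/aperture
```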

Who Should Update:

  • QNAP users experiencing connection issues after initial setup
  • Anyone setting up Aperture on QNAP for the first time (easier troubleshooting)
  • Existing QNAP users running smoothly can stay on their current setup

How to Update:

cd /path/to/aperture
docker-compose -f docker-compose.qnap.yml pull
docker-compose -f docker-compose.qnap.yml up -d

Note: If you're currently running and everything works, no configuration changes are needed. The updates only affect the comments and documentation in the compose file itself.

Full Changelog

  • Enhanced QNAP docker-compose documentation with networking troubleshooting guide
  • Added host networking mode as documented alternative for QNAP systems
  • Improved inline documentation for DATABASE_URL configuration

Questions or issues? Drop a comment below or open an issue on GitHub!

FlameRed
Posted

What is everyone selecting regarding OpenAI engines, plans and payments? 

Looks like on the initial setup, I already ran out of freebies:

[5:47:02 PM] ✗ ❌ Batch failed: 429 You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.

I don't want to give them a credit card and then find a $10,000 bill! 

TheGru
Posted
40 minutes ago, FlameRed said:

What is everyone selecting regarding OpenAI engines, plans and payments? 

Looks like on the initial setup, I already ran out of freebies:

[5:47:02 PM] ✗  Batch failed: 429 You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.

I don't want to give them a credit card and then find a $10,000 bill! 

1. Aperture should not cost you much money at all on OpenAI.
2. You can configure the OpenAI platform with a max spend budget, so it will never go beyond what you are comfortable with.

TheGru
Posted

I am also working on a cost estimator that takes things like your library size and estimated weekly content additions to your library into consideration.

image.thumb.png.ebfcb9a58d067c965e76b4f28b8d704a.png
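The idea can be sketched roughly like this (all numbers below are illustrative placeholders, not real provider pricing; the actual estimator is not shown here):

```typescript
// Rough cost-estimator sketch. All figures are illustrative placeholders;
// a real estimator would pull live pricing from the provider.
interface EstimatorInput {
  libraryItems: number;          // movies + series currently in the library
  weeklyAdditions: number;       // estimated new items per week
  tokensPerItem: number;         // avg tokens embedded per item (assumed)
  pricePerMillionTokens: number; // USD per 1M tokens (assumed)
}

// Returns the one-time cost to embed the existing library, plus the
// ongoing monthly cost from new additions (4 weeks/month approximation).
function estimateCost(i: EstimatorInput): { initial: number; monthly: number } {
  const perToken = i.pricePerMillionTokens / 1_000_000;
  const initial = i.libraryItems * i.tokensPerItem * perToken;
  const monthly = i.weeklyAdditions * 4 * i.tokensPerItem * perToken;
  return { initial, monthly };
}
```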

TeamB
Posted

have you experimented with ollama to see if self-hosted models would work?

  • Like 1
  • Agree 1
TheGru
Posted
6 minutes ago, TeamB said:

have you experimented with ollama to see if self-hosted models would work?

I have had requests to support it. I am building a branch using the Vercel AI SDK now, which supports many providers, Ollama included, so you can BYOAI (bring your own AI).

It will be deployed under a different testing tag, and I will have more information when I need testers, as I have no local LLMs set up here.

  • Agree 1
TeamB
Posted (edited)

the OpenAI client should work directly with the Ollama server with minimal to no changes; just set the base_url to your local machine when creating the OpenAI client and it should work out of the box.

EDIT:
you would also need to change the model names for the embedding and chat calls

it's going to be much slower and results might be lower quality, but for cost-conscious users or users with privacy concerns, if it works it will no longer be a blocker for them.

Edited by TeamB
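For anyone curious, the point-it-at-Ollama idea looks roughly like this with the openai npm client (host, port, and model names below are assumptions; Ollama exposes an OpenAI-compatible API under /v1):

```typescript
// Sketch: build an OpenAI-client config that targets a local Ollama server
// instead of api.openai.com. Host/port and model names are assumptions.
function ollamaClientConfig(host = "http://localhost:11434") {
  return {
    baseURL: `${host}/v1`, // Ollama's OpenAI-compatible endpoint
    apiKey: "ollama",      // required by the client, ignored by Ollama
  };
}

// Usage with the openai package (not executed here):
//   import OpenAI from "openai";
//   const client = new OpenAI(ollamaClientConfig());
//   await client.embeddings.create({ model: "nomic-embed-text", input: "..." });
//   await client.chat.completions.create({ model: "llama3.2", messages: [] });
```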
TheGru
Posted
5 minutes ago, TeamB said:

the openai client should work directly with the ollama server with minimal to no changes, just set the base_url when creating the openai client to your local machine and it should work out of the box.  


I'm getting there

image.thumb.png.f6f634e353b877054916504963d7d30d.png

  • Like 2
TheGru
Posted

🚀 Aperture 0.4.2 Beta: Multi-LLM Support is Here!

Hey everyone! 👋

I'm excited to share a beta release of Aperture 0.4.2 featuring multi-provider AI/LLM support. This has been one of the most requested features, and it's finally ready for testing!

What's New?

🤖 Multi-Provider AI Support

You're no longer locked into OpenAI! Aperture now supports:

  • OpenAI (recommended - GPT-4o, GPT-4o-mini, text-embedding-3-large, etc.)
  • Anthropic (Claude 3.5 Sonnet, Claude 3 Haiku, etc.)
  • Google (Gemini 1.5 Pro, Gemini 1.5 Flash)
  • Groq (Llama 3, Mixtral - blazing fast inference)
  • Ollama (run models locally - llama3, mistral, nomic-embed-text)
  • LM Studio (local models with OpenAI-compatible API)
  • Any OpenAI-compatible endpoint

🎛️ Per-Function Configuration

The real power here is per-function provider selection. You can now mix and match:

  • Embeddings — semantic search, recommendations (e.g. OpenAI text-embedding-3-large)
  • Chat — AI Assistant with tool calling (e.g. Anthropic Claude 3.5 Sonnet)
  • Text Generation — explanations, synopses (e.g. Groq Llama 3 70B, fast & cheap)

Want embeddings from OpenAI but chat from Anthropic? Go for it!

💰 Dynamic Cost Estimation

The new AI / LLM Setup tab includes a cost estimator that:

  • Pulls real-time pricing from providers
  • Estimates costs based on your library size
  • Factors in your content growth rate
  • Shows $0.00 for self-hosted options (Ollama, LM Studio)

🔧 Other Improvements

  • Setup Wizard Overhaul - Now includes multi-provider configuration
  • Admins can re-run setup - Click "Re-run Setup Wizard" anytime to reconfigure
  • Capability Detection - Warns you if a provider doesn't support features like tool calling
  • Automatic Migration - Existing OpenAI configs are automatically migrated

🧪 How to Test

Pull the beta image:

docker pull ghcr.io/dgruhin-hrizn/aperture:llm

Or update your docker-compose:

services:
  aperture:
    image: ghcr.io/dgruhin-hrizn/aperture:llm
    # ... rest of config

⚠️ Important: This will run a database migration that updates your embedding model names. If you've been using OpenAI embeddings, they'll be migrated to the new format automatically. THIS COULD TAKE A FEW MINUTES, BE PATIENT. Aperture logs will show you progress.


🐛 Known Issues / Feedback Wanted

This is a beta, so please report any issues you find! Particularly interested in:

  • Local LLM users - How's Ollama/LM Studio working for you?
  • Non-OpenAI embeddings - Any quality differences you notice?
  • Cost estimates - Are they accurate for your setup?
  • UI/UX - Is the new AI Setup tab intuitive?

📸 Screenshots

image.thumb.png.6b8e463b8a6139764b8f83552ff58210.png

What's Next?

Who knows? Ideas come to me daily, and your feedback is the roadmap.

Thanks for testing! Drop your feedback below. 🙏


Docker Image: ghcr.io/dgruhin-hrizn/aperture:llm
Branch: feat-vercel-ai-sdk-multi-llm

  • Like 3
akacharos
Posted

Although I am fine with using OpenAI, or Ollama for a fully local deployment, since you made the effort to add more providers, consider the idea of adding the openrouter.ai provider with its vast model support, some models on a free tier!
Just an idea!

TheGru
Posted
1 minute ago, akacharos said:

Although I am fine with using OpenAI, or Ollama for a fully local deployment, since you made the effort to add more providers, consider the idea of adding the openrouter.ai provider with its vast model support, some models on a free tier!
Just an idea!

I used the Vercel AI SDK because I use it for work and didn't have to learn anything new: https://ai-sdk.dev/

TheGru
Posted

I set up Ollama on my Unraid and pulled in the models.

Run these on your Ollama server:

# Chat/Text Generation models
ollama pull llama3.2
ollama pull mistral
ollama pull qwen2.5

# Embedding models
ollama pull nomic-embed-text
ollama pull mxbai-embed-large
ollama pull all-minilm

Or all at once:

for model in llama3.2 mistral qwen2.5 nomic-embed-text mxbai-embed-large all-minilm; do
  ollama pull $model
done

image.thumb.png.8b05b3a42232da7cf19f1efc477f649e.png

  • Like 1
TheGru
Posted

Pause on using the LLM build. There is an issue: if you previously embedded with OpenAI, the Ollama embeddings will fail because the dimension size is much smaller. I am fixing it to purge existing embeddings and fix the table dimensions before running; it will ask you to confirm before wiping out embeddings.
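The mismatch comes down to vector width: for example, OpenAI's text-embedding-3-large returns 3072-dimension vectors while Ollama's nomic-embed-text returns 768, and a pgvector column sized for one cannot store the other. A hypothetical guard (not Aperture's actual code) might look like this:

```typescript
// Hypothetical guard: refuse to write an embedding whose dimension does not
// match the table's vector column, instead of failing deep inside pgvector.
const MODEL_DIMENSIONS: Record<string, number> = {
  "text-embedding-3-large": 3072, // OpenAI
  "text-embedding-3-small": 1536, // OpenAI
  "nomic-embed-text": 768,        // Ollama
};

function checkDimension(model: string, columnDim: number): void {
  const dim = MODEL_DIMENSIONS[model];
  if (dim === undefined) {
    throw new Error(`Unknown embedding model: ${model}`);
  }
  if (dim !== columnDim) {
    throw new Error(
      `Model ${model} emits ${dim}-dim vectors but the table column is ` +
        `vector(${columnDim}); re-embed or migrate the column first.`
    );
  }
}
```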

TheGru
Posted
15 minutes ago, akacharos said:

Strange... OpenRouter, although very popular, is not in the "official" list, only on the community list (Community Providers: OpenRouter).

I will add the community provider and you can play with it

TheGru
Posted
14 minutes ago, TheGru said:

pause on using the LLM build, there is an issue if you previously embedded with open ai that the ollama embeddings will fail as the dimension size is way smaller.

New plan: keep multiple sets of embeddings, for easy switching back and forth to test quality.

TheGru
Posted (edited)

There will be a migration to reorganize existing embeddings... let it run; it's not locked up.

image.thumb.png.852b52eb5fc2128f698aab625227bee7.png

On Unraid, the above looks like this:

image.thumb.png.2734d1717b22815b9ac14072fc8e09d1.png

The Aperture web UI will not be available until the migration completes and the API launches; until then you will see a 502 Bad Gateway error if you try to load the web interface.

 

Edited by TheGru
TheGru
Posted

So in my Unraid server I have an NVIDIA Quadro P2000, worthless for anything AI, and the onboard Intel GPU, which the Ollama container I installed can leverage... but my lord is this painfully slow. There is no universe where avoiding the negligible amount of money spent on OpenAI is worth your time!

In a period of ~5 minutes, Ollama has spit out 2 recommendations vs ~48 seconds to generate 24, but you all do what you want!

 image.png.619f452f10aae5d1d48df9d66c8cc223.png
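The numbers above work out to roughly a 75x throughput gap (a quick back-of-the-envelope using only the figures in this post):

```typescript
// Back-of-the-envelope throughput comparison from the numbers above.
const ollamaRate = 2 / 5;          // 2 recommendations in ~5 minutes
const openaiRate = 24 / (48 / 60); // 24 recommendations in ~48 seconds
const speedup = openaiRate / ollamaRate; // roughly 75x
```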

TeamB
Posted

That sounds a little slow. Do you think it is the embeddings or the chat that is slow?

I tried to set it up but got stuck on the initial setup at the select-embedding stage; the dropdown was empty.
