Local AI
[x]olsen, https://xolsen.com/local-ai/, Sat, 07 Mar 2026
The New AI Landscape: Local AI vs Cloud AI vs AI Applications

Artificial intelligence is available in several forms today. Most people first encounter AI through ready-made services such as chatbots or productivity assistants, but beneath the surface there are several distinct ways to work with it.

Broadly speaking, AI can be used in three ways:

  1. AI Applications (ready-made tools)
  2. Cloud AI Platforms (API-based AI services)
  3. Local AI (running models on your own hardware)

Understanding the differences between these approaches helps explain why local AI has recently become so interesting.

AI Applications

The most common way people interact with AI today is through ready-made applications.

Examples include:

ChatGPT
Claude
Microsoft Copilot
Perplexity
Notion AI

European alternatives include:

Mistral Le Chat (France)
Aleph Alpha AI Assistant (Germany)

These tools are extremely easy to use. You simply open a website or application and start interacting with the model.

Advantages:

  • extremely easy to use
  • always running the newest models
  • no installation required

Disadvantages:

  • your data is processed in external systems
  • limited customization
  • usage costs may increase over time
  • dependent on external providers

For many users these tools are perfectly sufficient, especially for everyday productivity tasks.

Cloud AI Platforms

A second category is AI platforms that provide models through APIs.

These are typically used by developers building applications.

Examples include:

OpenAI
Anthropic
Google AI Studio
Azure OpenAI

European alternatives include:

Mistral AI API (France)
Aleph Alpha API (Germany)

Advantages:

  • access to powerful models
  • scalable infrastructure
  • easy integration into applications

Disadvantages:

  • ongoing usage costs
  • dependency on external providers
  • potential compliance considerations depending on how data is processed

Many companies currently build AI products on top of these platforms.

Local AI

The third category is local AI, where models run directly on your own computer or servers.

In this approach you download the model and run it locally using specialized runtime software.

In other words, the AI runs inside your own environment.

This approach has become increasingly viable because:

  • models have become more efficient
  • open models are widely available
  • local runtimes have become easier to use

Advantages:

  • full data privacy
  • no API costs
  • complete control over infrastructure
  • high flexibility for experimentation

Disadvantages:

  • requires local hardware resources
  • setup is slightly more technical
  • models may not always match the largest cloud models

Despite these limitations, local AI is becoming increasingly attractive for developers, researchers, and organizations that want more control over how AI is used.

Choosing the Right Approach

Each of these approaches has its place.

AI applications are ideal for everyday productivity.

Cloud platforms are powerful for building scalable products.

Local AI is particularly useful when:

  • data privacy matters
  • you want full control over infrastructure
  • you want to experiment with different models
  • you want to avoid API costs

Because of this, many developers and organizations today combine several approaches.

For example:

  • using ChatGPT for general tasks
  • cloud APIs for production systems
  • local models for experimentation and private workflows

The rest of this article focuses on how to run AI locally, and how to approach it from an EU-first perspective.

Running AI Locally: How to Install and Use Pre-Trained Language Models on Your Own Computer

For a long time, working with advanced AI models meant sending your data to cloud services. That is rapidly changing. Today it is entirely possible to run powerful language models directly on your own computer using open-source tools and pre-trained models that are publicly available.

This approach has several advantages: privacy, lower operating costs, full control of your data, and the ability to experiment freely without depending on external APIs.

In this article, we will walk through the basic ecosystem that makes local AI possible and explain how you can get started running large language models (LLMs) on your own machine.

The Model Libraries: Where the AI Comes From

The first thing you need is a trained model. Training large language models from scratch is extremely expensive, but thousands of high-quality models are already available.

The largest open repository of models is maintained by Hugging Face. It hosts hundreds of thousands of models for natural language processing, computer vision, speech recognition, and more.

Popular language models available today include:

  • Llama
  • Mistral
  • Mixtral
  • Phi
  • Gemma

These models vary in size and capability. Some are small enough to run on a laptop, while others require more powerful hardware.

The Hugging Face model hub makes it easy to search, download, and experiment with these models.

(EU-first alternatives include the open-source model catalog maintained by LAION and models distributed through Aleph Alpha or Mistral AI.)

Running Models Locally

Once you have chosen a model, you need a runtime environment that can load and execute it on your computer.

One of the most popular tools for this today is Ollama.

Ollama acts as a local runtime for language models. It downloads the model, manages its dependencies, and exposes a simple command interface for running and interacting with the AI.

A typical command might look like this:

ollama run llama3

This command downloads the model and launches a local chat session directly on your machine.

Ollama supports many well-known models such as:

  • Llama
  • Mistral
  • Mixtral
  • Phi

Because everything runs locally, no data leaves your computer.

(EU-first alternatives include LocalAI (Italy), Text Generation WebUI (international open-source community), and Jan.ai (open-source desktop runtime developed outside major US cloud ecosystems).)
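Beyond the chat session, Ollama also exposes a local HTTP API (by default at http://localhost:11434), which makes local models scriptable. A minimal Python sketch, assuming a running Ollama server with the llama3 model already pulled; the endpoint and field names follow Ollama's generate API:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming generate request for a local Ollama server."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )

def ask(model: str, prompt: str) -> str:
    """Send the prompt to the local model and return its response text."""
    with urllib.request.urlopen(build_request(model, prompt)) as resp:
        return json.loads(resp.read())["response"]

# Example (requires a running Ollama server):
# print(ask("llama3", "Explain quantization in one sentence."))
```

Because the request never leaves localhost, this kind of scripting inherits the same privacy properties as the interactive chat.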

A User Interface for Local AI

While command-line tools are powerful, many users prefer a graphical interface.

LM Studio is one of the most user-friendly desktop applications for working with local language models.

It provides:

  • A model browser
  • One-click downloads
  • A chat interface
  • Local API endpoints compatible with OpenAI-style integrations

With LM Studio, you can browse thousands of models and start experimenting with them without writing any code.

(Open-source alternatives include GPT4All by Nomic and Jan.ai, which focuses on privacy-first local AI workflows.)

Creating a ChatGPT-Style Interface

Once you are running models locally, the next step is often to create a richer interface for interacting with them.

Open WebUI is an open-source project that provides a full ChatGPT-style experience for local models.

It supports features such as:

  • Multiple models
  • Document uploads
  • Retrieval-augmented generation (RAG)
  • Prompt templates
  • Agents and workflows

This allows you to turn a local model into a private AI assistant or internal knowledge system.

(EU-first alternatives include LibreChat (open-source project with contributors across Europe) and Flowise (community-driven visual LLM orchestration framework).)
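To make the retrieval-augmented generation (RAG) idea concrete, here is a toy Python sketch: documents are ranked by similarity to the query, and the best match is placed into the prompt sent to the local model. The bag-of-words "embedding" is a deliberate simplification; real systems use neural embeddings and a vector database:

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": bag-of-words token counts (real RAG uses neural embeddings).
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    # Return the k documents most similar to the query.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # Retrieved context is prepended to the question, ready for a local model.
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Ollama runs language models locally on your own machine.",
    "Weaviate is a vector database developed in the Netherlands.",
]
print(build_prompt("Which tool runs models locally?", docs))
```

Tools like Open WebUI automate exactly this loop, adding document chunking, proper embeddings, and a vector store underneath.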

A Typical Local AI Stack

A common setup for running AI locally today looks something like this:

Application Interface
→ Open WebUI or LM Studio

Model Runtime
→ Ollama or LocalAI

Language Model
→ Llama / Mistral / Phi

Hardware
→ Your local CPU or GPU

Optionally, you can add a vector database for document search and retrieval:

  • Chroma
  • Weaviate (EU-first alternative – Netherlands)
  • Qdrant (EU-first alternative – Germany)

This architecture enables advanced capabilities such as:

  • Private knowledge bases
  • AI-assisted documentation
  • Software development assistants
  • Internal copilots for organizations

All running entirely on your own infrastructure.

Hardware Requirements

Running AI locally does not necessarily require a powerful server.

Many modern models can run on:

  • A modern laptop
  • Apple Silicon machines
  • Workstations with consumer GPUs

For example:

  • 7–8B parameter models often run well on laptops
  • Larger models benefit from GPUs and more RAM

Quantized models make it possible to run surprisingly capable AI systems even on modest hardware.
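A rough rule of thumb for sizing: the weights alone need about (parameters × bits per weight ÷ 8) bytes, plus overhead for activations and the KV cache. A small sketch, where the 20% overhead factor is an assumption for illustration:

```python
def model_memory_gb(params_billion: float, bits_per_weight: int, overhead: float = 1.2) -> float:
    """Rough RAM estimate: weight storage plus ~20% overhead (assumed) for activations/KV cache."""
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return round(weight_bytes * overhead / 1e9, 1)

print(model_memory_gb(7, 16))  # 16-bit 7B model: ~16.8 GB
print(model_memory_gb(7, 4))   # 4-bit quantized 7B model: ~4.2 GB
```

This is why a 4-bit quantized 7B model fits comfortably in a laptop's RAM while the full-precision version does not.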

Why Local AI Matters

Running AI locally is not only a technical curiosity. It represents a broader shift in how organizations and individuals can use artificial intelligence.

Local models provide:

  • Data sovereignty
  • Lower long-term cost
  • Full customization
  • Independence from external cloud providers

For experimentation, prototyping, and internal tooling, local AI has become an increasingly attractive option.

It allows developers, researchers, and curious technologists to explore the capabilities of modern AI systems while keeping full control of their infrastructure.

Final Thoughts

The barrier to experimenting with AI has never been lower. With tools such as Ollama, LM Studio, and open model repositories, anyone can run sophisticated language models locally.

Whether your goal is to build a personal AI assistant, experiment with new ideas, or create internal tools for your organization, the local AI ecosystem now provides everything you need to get started.

And perhaps most importantly: it allows you to explore AI on your own terms.

Running AI Locally: An EU-First Guide to Installing and Using Language Models on Your Own Computer

For years, most AI systems required sending data to cloud services operated by large technology providers. That model is now changing. Today it is increasingly possible to run powerful language models directly on your own computer.

Running AI locally provides several advantages:

  • stronger data control
  • improved privacy
  • lower long-term operating costs
  • independence from cloud vendors
  • the ability to experiment freely

In this article we will explore how to run large language models (LLMs) locally using an EU-first approach. That means prioritizing tools and models from Europe or open ecosystems that support European compliance requirements such as local hosting, transparency, and portability. When no strong EU option exists, we consider global open-source tools. US solutions are treated as a last option, unless they are open, portable, and easy to replace.

The Model Libraries: Where the AI Comes From

The first thing you need is a trained language model.

Training large models requires enormous compute resources, but fortunately thousands of high-quality models are already available. These can be downloaded and executed locally.

A widely used model library is Hugging Face, which hosts hundreds of thousands of models across many AI domains.

(US-based platform, but widely used in Europe and fully compatible with local execution and EU-compliant hosting workflows.)

However, if we take an EU-first perspective, several European ecosystems are becoming increasingly important.

EU-first model ecosystems

Mistral AI (France)
One of the strongest European players developing open and semi-open language models.

Examples:

  • Mistral 7B
  • Mixtral
  • Codestral

These models are widely used and designed to run locally or in private infrastructure.

Aleph Alpha (Germany)
Provides European LLMs designed specifically with European governance and compliance frameworks in mind.

These models are often used in regulated sectors such as government and finance.

LAION (Germany)
A non-profit research organization responsible for large open datasets and open AI initiatives.

They help maintain parts of the open AI ecosystem that make independent AI development possible.

US-based but portable model ecosystems (US-last fallback)

Some models come from US organizations but are still compatible with EU-first architecture because they can be:

  • run locally
  • hosted privately
  • replaced easily

Examples include:

  • Llama (Meta)
  • Phi (Microsoft)
  • Gemma (Google)

Because these models can run locally without sending data to external services, they remain viable within an EU-compliant architecture.

Running Models Locally

Once you have a model, you need a runtime engine capable of executing it on your computer.

Several tools exist for this.

EU-first runtime options

LocalAI (Italy)
A fully open-source AI runtime designed as a drop-in replacement for OpenAI APIs. It allows you to run models locally or on private servers without relying on external cloud services.

Text Generation WebUI (global open-source project)
A flexible interface widely used for running open models locally. It supports multiple backends and model formats.

Global open-source runtimes

Jan.ai
A privacy-focused desktop AI assistant designed to run models locally.

The application provides a user interface for downloading models and running them without external dependencies.

US-origin but widely used runtimes (US-last fallback)

Ollama
A very popular runtime for running language models locally.

It simplifies model downloads and execution and works well for development and experimentation.

Because Ollama runs models locally and does not require cloud APIs, it can still be used within EU-compliant environments.

A Graphical Interface for Local AI

Many users prefer a graphical interface rather than command-line tools.

A popular tool for this purpose is LM Studio, which allows users to browse models, download them, and interact with them locally.

Features include:

  • graphical model management
  • local chat interface
  • OpenAI-compatible API
  • simple model switching

(US-developed but locally executed and easily replaceable.)
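That OpenAI-compatible endpoint is what makes these runtimes easy to swap. A minimal Python sketch, assuming LM Studio's default local server address (http://localhost:1234/v1); the same client code should work against LocalAI or other compatible runtimes by changing only the base URL:

```python
import json
import urllib.request

# Assumption: a local server exposing an OpenAI-compatible API, e.g. LM Studio
# (default http://localhost:1234/v1) or LocalAI. Swap BASE_URL to switch backends.
BASE_URL = "http://localhost:1234/v1"

def chat_payload(model: str, user_message: str) -> dict:
    """OpenAI-style chat-completion body; unchanged across compatible runtimes."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }

def chat(model: str, user_message: str) -> str:
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(chat_payload(model, user_message)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]

# Example (requires a running local server with a loaded model):
# print(chat("mistral-7b-instruct", "Summarize GDPR in one sentence."))
```

This portability is what makes a US-origin tool acceptable in an EU-first architecture: the interface is a de facto standard, so the backend behind it can be replaced at any time.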

EU-friendly alternatives

GPT4All
An open project designed to make local AI easy to use. It focuses on running models privately on personal computers.

Jan.ai
Also functions as a desktop AI assistant and provides a privacy-first user interface for interacting with local models.

Creating a ChatGPT-Style Interface

Once a model is running locally, many users want a richer interface that can interact with documents and internal knowledge sources.

One of the most popular tools for this is Open WebUI.

It provides:

  • multi-model chat
  • document ingestion
  • retrieval-augmented generation (RAG)
  • prompt templates
  • workflow automation

This makes it possible to build a private AI assistant or internal knowledge platform.

EU-first alternatives

Flowise (open-source)
A visual builder for AI workflows that allows organizations to build AI pipelines and RAG systems.

LibreChat
An open-source interface that supports multiple LLM providers and local deployments.

Building an EU-Friendly Local AI Stack

A typical architecture for local AI might look like this:

Interface Layer
→ Open WebUI / Flowise / Jan.ai

Model Runtime
→ LocalAI / Ollama

Language Models
→ Mistral / Mixtral / Aleph Alpha models

Hardware
→ Local GPU or CPU

Optional components:

Vector database for document search:

  • Weaviate (Netherlands)
  • Qdrant (Germany)
  • Chroma (US open-source but easily replaceable)

This architecture supports:

  • private knowledge bases
  • internal AI assistants
  • document analysis
  • software development copilots

All without relying on external AI services.

Hardware Requirements

Running AI locally does not necessarily require expensive infrastructure.

Many modern language models can run on:

  • modern laptops
  • Apple Silicon computers
  • workstations with GPUs

Typical guidelines:

  • 7B–8B models run well on laptops
  • larger models benefit from GPUs and more RAM

Quantized versions of models allow surprisingly capable systems to run even on modest hardware.

Why an EU-First Approach Matters

As AI becomes embedded in everyday tools, questions about data sovereignty, compliance, and technological independence become increasingly important.

An EU-first architecture helps ensure that:

  • data stays under your control
  • infrastructure can be hosted locally
  • components can be replaced easily
  • systems remain compliant with European regulations

By prioritizing open models and locally hosted tools, organizations can experiment with AI while maintaining flexibility and independence.

Final Thoughts

Running AI locally is no longer limited to researchers or large companies. With the growing ecosystem of open models and lightweight runtimes, anyone can experiment with advanced language models on their own machine.

By combining:

  • European AI initiatives
  • open-source infrastructure
  • portable runtimes

it is possible to build powerful AI systems that remain transparent, compliant, and under your control.

In many ways, local AI represents a shift toward a more decentralized and open AI ecosystem—one where innovation does not depend on a handful of centralized providers.


Hardware Requirements

Typical local setups:

Laptop / MacBook (16–32 GB RAM)
→ small models (3B–8B)

Workstation with GPU
→ medium models (7B–30B)

Server GPUs
→ large models (70B+)

Running locally usually means no API costs, only electricity and hardware resources.
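The electricity side of that trade-off is simple arithmetic. A small sketch, where the power draw and the price per kWh are illustrative assumptions, not measured values:

```python
def electricity_cost_eur(watts: float, hours: float, eur_per_kwh: float = 0.30) -> float:
    """Energy cost of a local inference session (price per kWh is an assumed figure)."""
    return round(watts / 1000 * hours * eur_per_kwh, 2)

print(electricity_cost_eur(300, 2))  # 2 hours on an assumed 300 W GPU: ~0.18 EUR
```

Actual costs depend on your hardware's real power draw and local electricity prices, but the point stands: for experimentation, local inference is effectively free at the margin.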
