exo: Run your own AI cluster at home with everyday devices. Maintained by [exo labs](https://x.com/exolabs_).

[Discord](https://discord.gg/EUnjGpsmWw) | [Telegram](https://t.me/+Kh-KqHTzFYg3MGNk) | [X](https://x.com/exolabs_)

[![GitHub Repo stars](https://img.shields.io/github/stars/exo-explore/exo)](https://github.com/exo-explore/exo/stargazers) [![Tests](https://github.com/exo-explore/exo/actions/workflows/test.yml/badge.svg)](https://github.com/exo-explore/exo/actions/workflows/test.yml) [![License: GPL v3](https://img.shields.io/badge/License-GPLv3-blue.svg)](https://www.gnu.org/licenses/gpl-3.0)

Forget expensive NVIDIA GPUs. Unify your existing devices into one powerful GPU: iPhone, iPad, Android, Mac, Linux, and pretty much any other device!

## Update: exo supports Llama 3.1

Llama 3.1 is now the default model family: run the 8B, 70B, and 405B parameter models on your own devices. See the code.

## Get Involved

exo is experimental software. Expect bugs early on. Create issues as you find them, and the exo labs team will strive to resolve them quickly.

We also welcome contributions from the community. We have a list of bounties in this sheet.

## Features

### Wide Model Support

exo supports LLaMA (via MLX and tinygrad) and other popular models.

### Dynamic Model Partitioning

exo optimally splits up models based on the current network topology and the resources available on each device. This enables you to run larger models than would fit on any single device.

### Automatic Device Discovery

exo will automatically discover other devices using the best method available. Zero manual configuration.
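
As an illustration of what zero-configuration discovery can look like, below is a minimal sketch of UDP-broadcast peer announcement. This is one common approach, not necessarily the mechanism exo uses internally, and the port number is invented for the example.

```python
# Sketch of UDP-broadcast peer discovery (illustrative only, not exo's code).
import json
import socket

DISCOVERY_PORT = 52415  # hypothetical port, for illustration only

def announce(node_id: str) -> None:
    """Broadcast this node's presence to the local network."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.sendto(json.dumps({"node_id": node_id}).encode(), ("<broadcast>", DISCOVERY_PORT))
    sock.close()

def listen_once(timeout: float = 5.0) -> dict | None:
    """Wait for a single announcement from a peer, or return None on timeout."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", DISCOVERY_PORT))
    sock.settimeout(timeout)
    try:
        data, addr = sock.recvfrom(1024)
        return {"peer_ip": addr[0], **json.loads(data)}
    except socket.timeout:
        return None
    finally:
        sock.close()
```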

### ChatGPT-compatible API

exo provides a ChatGPT-compatible API for running models. It's a one-line change in your application to run models on your own hardware using exo.
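
With the openai Python client, for example, that one-line change is typically just the `base_url`. The sketch below assumes the openai v1 client package; the model name and port follow the curl example later in this README.

```python
# Sketch: point an existing OpenAI-client application at a local exo node.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # the one-line change: talk to exo instead of api.openai.com
    api_key="not-needed",                 # placeholder; a local ChatGPT-compatible server typically ignores it
)

response = client.chat.completions.create(
    model="llama-3-8b",
    messages=[{"role": "user", "content": "What is the meaning of exo?"}],
    temperature=0.7,
)
print(response.choices[0].message.content)
```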

### Device Equality

Unlike other distributed inference frameworks, exo does not use a master-worker architecture. Instead, exo devices connect peer-to-peer. As long as a device is connected somewhere in the network, it can be used to run models.

exo supports different partitioning strategies for splitting a model across devices. The default strategy is ring memory-weighted partitioning: inference runs in a ring, and each device runs a number of model layers proportional to its memory.

*Figure: ring topology*
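
As an illustration of the idea (a sketch, not exo's actual implementation), a memory-weighted split assigns each device a contiguous slice of layers proportional to its memory:

```python
# Illustrative sketch of memory-weighted ring partitioning; not exo's code.
def partition_layers(num_layers: int, device_memory_gb: list[float]) -> list[range]:
    """Assign each device a contiguous slice of layers proportional to its memory."""
    total = sum(device_memory_gb)
    slices, start = [], 0
    for i, mem in enumerate(device_memory_gb):
        # The last device takes the remainder so every layer is assigned exactly once.
        if i == len(device_memory_gb) - 1:
            count = num_layers - start
        else:
            count = round(num_layers * mem / total)
        slices.append(range(start, start + count))
        start += count
    return slices

# Example: a 32-layer model across a 16 GB Mac, an 8 GB iPad, and an 8 GB phone.
print(partition_layers(32, [16.0, 8.0, 8.0]))
# -> [range(0, 16), range(16, 24), range(24, 32)]
```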

## Installation

The current recommended way to install exo is from source.

### Prerequisites

### From source

```sh
git clone https://github.com/exo-explore/exo.git
cd exo
pip install .

# alternatively, with a venv
source install.sh
```

### Troubleshooting

- If running on a Mac, MLX has an install guide with troubleshooting steps.

## Documentation

### Example Usage on Multiple macOS Devices

Device 1:

```sh
python3 main.py
```

Device 2:

```sh
python3 main.py
```

That's it! No configuration required: exo will automatically discover the other device(s).

The native way to access models running on exo is to use the exo library with peer handles. See this example for Llama 3.

exo starts a ChatGPT-like WebUI (powered by tinygrad tinychat) on http://localhost:8000

For developers, exo also starts a ChatGPT-compatible API endpoint on http://localhost:8000/v1/chat/completions. Example with curl:

```sh
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama-3-8b",
    "messages": [{"role": "user", "content": "What is the meaning of exo?"}],
    "temperature": 0.7
  }'
```

### Debugging

Enable debug logs with the DEBUG environment variable (0-9).

```sh
DEBUG=9 python3 main.py
```

## Known Issues

- 🚧 The library is evolving quickly, and the iOS implementation has fallen behind the Python implementation. We have decided not to release the buggy iOS version for now, to avoid a flood of GitHub issues about outdated code. We are working on solving this properly and will make an announcement when it's ready. If you would like access to the iOS implementation now, please email alex@exolabs.net with your GitHub username and an explanation of your use case, and you will be granted access on GitHub.

## Inference Engines

exo supports the following inference engines:

- MLX
- tinygrad

## Networking Modules