
exo: Run your own AI cluster at home with everyday devices. Maintained by [exo labs](https://x.com/exolabs_).

[Discord](https://discord.gg/EUnjGpsmWw) | [Telegram](https://t.me/+Kh-KqHTzFYg3MGNk) | [X](https://x.com/exolabs_)

[![GitHub Repo stars](https://img.shields.io/github/stars/exo-explore/exo)](https://github.com/exo-explore/exo/stargazers) [![Tests](https://dl.circleci.com/status-badge/img/circleci/TrkofJDoGzdQAeL6yVHKsg/4i5hJuafuwZYZQxbRAWS71/tree/main.svg?style=svg)](https://dl.circleci.com/status-badge/redirect/circleci/TrkofJDoGzdQAeL6yVHKsg/4i5hJuafuwZYZQxbRAWS71/tree/main) [![License: GPL v3](https://img.shields.io/badge/License-GPLv3-blue.svg)](https://www.gnu.org/licenses/gpl-3.0)

Forget expensive NVIDIA GPUs. Unify your existing devices into one powerful GPU: iPhone, iPad, Android, Mac, Linux, pretty much any device!

Update: Exo Supports Llama 3.1

Run 8B, 70B and 405B parameter Llama 3.1 models on your own devices

See the code

Get Involved

exo is experimental software. Expect bugs early on. Create issues so they can be fixed. The exo labs team will strive to resolve issues quickly.

We also welcome contributions from the community. We have a list of bounties in this sheet.

Features

Wide Model Support

exo supports LLaMA (MLX and tinygrad) and other popular models.

Dynamic Model Partitioning

exo optimally splits up models based on the current network topology and device resources available. This enables you to run larger models than you would be able to on any single device.

Automatic Device Discovery

exo will automatically discover other devices using the best method available. Zero manual configuration.
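The discovery mechanism itself isn't detailed here. Purely as an illustration (not exo's actual protocol), zero-configuration discovery can be built on periodic UDP broadcast beacons; the port number and message format below are invented for this sketch:

import json
import socket
import time

DISCOVERY_PORT = 52415  # hypothetical port, chosen for this example only

def broadcast_presence(node_id: str) -> None:
  # Announce this node to the local network with a single UDP broadcast.
  sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
  sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
  beacon = json.dumps({"node_id": node_id, "ts": time.time()}).encode()
  sock.sendto(beacon, ("<broadcast>", DISCOVERY_PORT))
  sock.close()

def listen_for_peers(quiet_period: float = 5.0) -> list[str]:
  # Collect beacons until the network goes quiet for `quiet_period` seconds.
  sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
  sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
  sock.bind(("", DISCOVERY_PORT))
  sock.settimeout(quiet_period)
  peers = []
  try:
    while True:
      data, addr = sock.recvfrom(1024)
      peers.append(f"{json.loads(data)['node_id']}@{addr[0]}")
  except socket.timeout:
    pass
  finally:
    sock.close()
  return peers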

ChatGPT-compatible API

exo provides a ChatGPT-compatible API for running models. It's a one-line change in your application to run models on your own hardware using exo.
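As a minimal sketch of that one-line change, you can point the official OpenAI Python client at exo by overriding base_url (the API key is a placeholder, assuming the local endpoint doesn't check it):

from openai import OpenAI

# Point the stock OpenAI client at the local exo endpoint instead of api.openai.com.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="exo")  # placeholder key, assumed unused locally

response = client.chat.completions.create(
  model="llama-3.1-8b",
  messages=[{"role": "user", "content": "What is the meaning of exo?"}],
)
print(response.choices[0].message.content)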

Device Equality

Unlike other distributed inference frameworks, exo does not use a master-worker architecture. Instead, exo devices connect p2p. As long as a device is connected somewhere in the network, it can be used to run models.

exo supports different partitioning strategies to split up a model across devices. The default partitioning strategy is ring memory weighted partitioning. This runs inference in a ring where each device runs a number of model layers proportional to its memory (a sketch follows the figure below).

*Figure: ring topology*
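Here is a rough illustration of memory-weighted partitioning, a sketch under assumptions rather than exo's actual implementation (Partition and partition_by_memory are names invented for this example):

from dataclasses import dataclass

@dataclass
class Partition:
  start_layer: int  # inclusive
  end_layer: int  # exclusive

def partition_by_memory(device_memory_gb: list[float], num_layers: int) -> list[Partition]:
  # Give each device a contiguous span of layers proportional to its share of total memory.
  total = sum(device_memory_gb)
  partitions, start = [], 0
  for i, mem in enumerate(device_memory_gb):
    if i == len(device_memory_gb) - 1:
      end = num_layers  # last device absorbs rounding error so every layer is assigned exactly once
    else:
      end = min(start + round(num_layers * mem / total), num_layers)
    partitions.append(Partition(start, end))
    start = end
  return partitions

# A 32-layer model across devices with 16 GB, 8 GB and 8 GB of memory:
print(partition_by_memory([16, 8, 8], 32))
# [Partition(start_layer=0, end_layer=16), Partition(start_layer=16, end_layer=24), Partition(start_layer=24, end_layer=32)]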

Installation

The current recommended way to install exo is from source.

Prerequisites

From source

git clone https://github.com/exo-explore/exo.git
cd exo
pip install .
# alternatively, with venv
source install.sh

Troubleshooting

  • If running on Mac, MLX has an install guide with troubleshooting steps.

Documentation

Example Usage on Multiple macOS Devices

Device 1:

python3 main.py

Device 2:

python3 main.py

That's it! No configuration required - exo will automatically discover the other device(s).

The native way to access models running on exo is using the exo library with peer handles. See how in this example for Llama 3.

exo starts a ChatGPT-like WebUI (powered by tinygrad tinychat) on http://localhost:8000

For developers, exo also starts a ChatGPT-compatible API endpoint on http://localhost:8000/v1/chat/completions. Examples with curl:

curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
     "model": "llama-3.1-8b",
     "messages": [{"role": "user", "content": "What is the meaning of exo?"}],
     "temperature": 0.7
   }'
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
     "model": "llava-1.5-7b-hf",
     "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "What are these?"
          },
          {
            "type": "image_url",
            "image_url": {
              "url": "http://images.cocodataset.org/val2017/000000039769.jpg"
            }
          }
        ]
      }
    ],
     "temperature": 0.0
   }'

Debugging

Enable debug logs with the DEBUG environment variable (0-9).

DEBUG=9 python3 main.py

Known Issues

  • 🚧 As the library is evolving so quickly, the iOS implementation has fallen behind the Python implementation. We have decided not to release the buggy iOS version for now, rather than field a flood of GitHub issues about outdated code. We are working on solving this properly and will make an announcement when it's ready. If you would like access to the iOS implementation now, please email alex@exolabs.net with your GitHub username and a description of your use-case, and you will be granted access on GitHub.

Inference Engines

exo supports the following inference engines:

  • MLX
  • tinygrad

Networking Modules