exo: Run your own AI cluster at home with everyday devices. Maintained by [exo labs](https://x.com/exolabs_).

[Discord](https://discord.gg/EUnjGpsmWw) | [Telegram](https://t.me/+Kh-KqHTzFYg3MGNk) | [X](https://x.com/exolabs_)


Forget expensive NVIDIA GPUs; unify your existing devices into one powerful GPU: iPhone, iPad, Android, Mac, Linux, pretty much any device!

## Get Involved

exo is experimental software. Expect bugs early on. Create issues so they can be fixed. The exo labs team will strive to resolve issues quickly.

We also welcome contributions from the community. We have a list of bounties in this sheet.

## Features

### Wide Model Support

exo supports LLaMA and other popular models.

### Dynamic Model Partitioning

exo optimally splits up models based on the current network topology and the device resources available. This enables you to run larger models than you could on any single device.
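The exact partitioning algorithm isn't described here. As a minimal illustrative sketch (not exo's actual internals), a memory-weighted split could assign each device a contiguous slice of a model's layers in proportion to its available RAM; all names below (`Device`, `partition_layers`) are hypothetical:

```python
# Illustrative sketch only: split a model's layers across devices in
# proportion to each device's available memory. These names are
# hypothetical and do not reflect exo's actual implementation.
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    memory_gb: float  # available memory on this device

def partition_layers(devices: list[Device], num_layers: int) -> dict[str, range]:
    """Assign each device a contiguous slice of layers, weighted by memory."""
    total = sum(d.memory_gb for d in devices)
    shards, start = {}, 0
    for i, d in enumerate(devices):
        # The last device takes the remainder to avoid rounding gaps.
        count = num_layers - start if i == len(devices) - 1 else round(num_layers * d.memory_gb / total)
        shards[d.name] = range(start, start + count)
        start += count
    return shards

# Example: an 80-layer model split across a Mac, an iPad, and an iPhone.
print(partition_layers([Device("mac", 32), Device("ipad", 8), Device("iphone", 6)], 80))
```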

### Automatic Device Discovery

exo will automatically discover other devices using the best method available. Zero manual configuration.
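The discovery mechanism isn't specified in this README. Purely as an illustration of one common zero-configuration approach, the sketch below uses periodic UDP broadcast on the local network; the port and message format are made up, not exo's:

```python
# Illustrative sketch of zero-configuration discovery via UDP broadcast;
# not exo's actual implementation. Each node announces itself on a known
# port, and every node listens for announcements from peers.
import json
import socket

DISCOVERY_PORT = 5678  # hypothetical port

def announce(node_id: str) -> None:
    """Broadcast this node's presence on the local network."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.sendto(json.dumps({"node_id": node_id}).encode(), ("255.255.255.255", DISCOVERY_PORT))
    sock.close()

def listen_once(timeout: float = 5.0) -> dict | None:
    """Wait for one peer announcement; return it, or None on timeout."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", DISCOVERY_PORT))
    sock.settimeout(timeout)
    try:
        data, addr = sock.recvfrom(1024)
        return {"peer": json.loads(data), "addr": addr[0]}
    except socket.timeout:
        return None
    finally:
        sock.close()
```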

### ChatGPT-compatible API

exo provides a ChatGPT-compatible API for running models. It's a one-line change in your application to run models on your own hardware using exo.
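For example, if your application already uses the openai Python package (v1+), pointing it at exo's endpoint (which, as shown later in this README, runs at http://localhost:8000) should only require changing the base URL; the api_key value is an arbitrary placeholder since no authentication is involved:

```python
# The only change from a stock OpenAI setup is base_url (plus a dummy
# key, since the local endpoint needs no authentication). The model name
# follows the example request later in this README.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")
response = client.chat.completions.create(
    model="llama-3-70b",
    messages=[{"role": "user", "content": "What is the meaning of exo?"}],
    temperature=0.7,
)
print(response.choices[0].message.content)
```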

## Installation

The current recommended way to install exo is from source.

### From source

```sh
git clone https://github.com/exo-explore/exo.git
cd exo
pip install -r requirements.txt
```

## Documentation

### Example Usage on Multiple macOS Devices

Device 1:

```sh
python3 main.py
```

Device 2:

```sh
python3 main.py
```

That's it! No configuration required - exo will automatically discover the other device(s).

The native way to access models running on exo is using the exo library with peer handles. See how in this example for Llama 3.

exo also starts a ChatGPT-compatible API endpoint on http://localhost:8000. Note: this is currently only supported by tail nodes (i.e. nodes selected to be at the end of the ring topology). Example request:

```sh
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
     "model": "llama-3-70b",
     "messages": [{"role": "user", "content": "What is the meaning of exo?"}],
     "temperature": 0.7
   }'
```

## Inference Engines

exo supports the following inference engines:

- [MLX](https://github.com/ml-explore/mlx)
- [tinygrad](https://github.com/tinygrad/tinygrad)

## Networking Modules

exo supports the following networking modules:

- GRPC

## Known Issues

- 🚧 As the library is evolving quickly, the iOS implementation has fallen behind the Python implementation. This is being addressed; longer term, we will unify the two so we don't have to maintain separate implementations.