| Name | Latest commit | Commit message | Last updated |
| --- | --- | --- | --- |
| api | 2ebcf5f407 | fix llama 3.2 issue with apply_chat_template assuming messages is a list if its not a dict fixes #239 | 7 months ago |
| download | 57745e4f02 | Merge pull request #217 from jshield/feat/support_hf_endpoint | 7 months ago |
| inference | cb575f5dc3 | ndim check in llama | 7 months ago |
| networking | 073b3ffce8 | move udp and tailscale into their own modules | 7 months ago |
| orchestration | 8aab930498 | if any peers changed from last time, we should always update the topology | 7 months ago |
| stats | f53056dede | more compact operator formatting | 8 months ago |
| topology | 62e3726263 | add RTX 20 series to device capabilities | 8 months ago |
| viz | 2caccf897b | update gpu rich/poor calc | 7 months ago |
| __init__.py | 57b2f2a4e2 | fix ruff lint errors | 9 months ago |
| helpers.py | 4b009401f9 | move `.exo_used_ports` to `/tmp` | 8 months ago |
| models.py | abca3bfa37 | add support for qwen2.5 coder 1.5b and 7b | 7 months ago |
| test_callbacks.py | ce761038ac | formatting / linting | 9 months ago |