Latest commit: abca3bfa37 by Alex Cheema — "add support for qwen2.5 coder 1.5b and 7b" (7 months ago)
| Path | Commit | Message | Last change |
| --- | --- | --- | --- |
| api | 2ebcf5f407 | fix llama 3.2 issue with apply_chat_template assuming messages is a list if its not a dict, fixes #239 | 7 months ago |
| download | 57745e4f02 | Merge pull request #217 from jshield/feat/support_hf_endpoint | 7 months ago |
| inference | cb575f5dc3 | ndim check in llama | 7 months ago |
| networking | 073b3ffce8 | move udp and tailscale into their own modules | 7 months ago |
| orchestration | 8aab930498 | if any peers changed from last time, we should always update the topology | 7 months ago |
| stats | f53056dede | more compact operator formatting | 8 months ago |
| topology | 62e3726263 | add RTX 20 series to device capabilities | 8 months ago |
| viz | 2caccf897b | update gpu rich/poor calc | 7 months ago |
| __init__.py | 57b2f2a4e2 | fix ruff lint errors | 9 months ago |
| helpers.py | 4b009401f9 | move `.exo_used_ports` to `/tmp` | 8 months ago |
| models.py | abca3bfa37 | add support for qwen2.5 coder 1.5b and 7b | 7 months ago |
| test_callbacks.py | ce761038ac | formatting / linting | 9 months ago |