Tech

SETI, but for LLMs: how a solution that’s barely a few months old could revolutionize the way inference is done

  • Exo supports LLaMA, Mistral, LLaVA, Qwen, and DeepSeek
  • Can run on Linux, macOS, Android, and iOS, but not Windows
  • AI models needing 16GB RAM can run on two 8GB laptops

Running large language models (LLMs) typically requires expensive, high-performance hardware with substantial memory and GPU power. However, Exo software now looks to offer an alternative by enabling distributed artificial intelligence (AI) inference across a network of devices.

The software allows users to combine the computing power of multiple computers, smartphones, and even single-board computers (SBCs) like Raspberry Pis to run models that would otherwise be inaccessible.
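Exo's internal scheduling logic isn't detailed here, but the core idea of splitting a model across devices with different amounts of memory can be sketched. The snippet below is a minimal illustration, not Exo's actual API: a hypothetical `partition_layers` helper assigns contiguous blocks of a model's layers to devices in proportion to each device's RAM, so that, for example, a 16GB model's layers split evenly across two 8GB laptops.

```python
def partition_layers(num_layers, device_mem_gb):
    """Assign contiguous blocks of model layers to devices in
    proportion to each device's available memory (GB).

    Returns a list with the number of layers each device hosts.
    This is an illustrative sketch, not Exo's real algorithm.
    """
    total_mem = sum(device_mem_gb)
    counts, assigned = [], 0
    for i, mem in enumerate(device_mem_gb):
        if i == len(device_mem_gb) - 1:
            # Give the last device whatever layers remain,
            # so rounding never loses or duplicates a layer.
            counts.append(num_layers - assigned)
        else:
            share = round(num_layers * mem / total_mem)
            counts.append(share)
            assigned += share
    return counts

# Two 8GB laptops split a 32-layer model evenly:
print(partition_layers(32, [8, 8]))    # [16, 16]
# A 16GB desktop and an 8GB phone split 30 layers 2:1:
print(partition_layers(30, [16, 8]))   # [20, 10]
```

In a real distributed-inference setup each device would then run only its assigned layers, forwarding intermediate activations to the next device over the network.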

