SETI, but for LLMs: how a solution that's barely a few months old could revolutionize the way inference is done

  • Exo supports LLaMA, Mistral, LLaVA, Qwen, and DeepSeek
  • Can run on Linux, macOS, Android, and iOS, but not Windows
  • AI models needing 16GB RAM can run on two 8GB laptops

Running large language models (LLMs) typically requires expensive, high-performance hardware with substantial memory and GPU power. However, Exo software now looks to offer an alternative by enabling distributed artificial intelligence (AI) inference across a network of devices.

The software lets users combine the computing power of multiple computers, smartphones, and even single-board computers (SBCs) such as Raspberry Pis to run models that would otherwise be inaccessible.
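The core idea behind pooling several small devices, as in the "two 8GB laptops" example above, is to split a model's layers across machines in proportion to each device's available memory. The sketch below illustrates that idea in Python; it is a hypothetical simplification for illustration, not Exo's actual API or partitioning code.

```python
# Hypothetical sketch of memory-weighted layer partitioning: the idea
# behind running one large model across several small devices.
# This is NOT Exo's real implementation, just an illustration.

def partition_layers(num_layers, device_memory_gb):
    """Assign each device a contiguous range of layers, sized in
    proportion to that device's available RAM."""
    total = sum(device_memory_gb)
    ranges, start = [], 0
    for i, mem in enumerate(device_memory_gb):
        if i == len(device_memory_gb) - 1:
            end = num_layers  # last device absorbs any rounding remainder
        else:
            end = start + round(num_layers * mem / total)
        ranges.append((start, end))
        start = end
    return ranges

# A 32-layer model split across two 8GB laptops: each hosts 16 layers.
print(partition_layers(32, [8, 8]))  # [(0, 16), (16, 32)]
```

During inference, each device would then run only its assigned layers and forward activations to the next device in the chain, so no single machine ever needs to hold the whole model in memory.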
