• 2 Posts
  • 42 Comments
Joined 2 months ago
Cake day: March 28th, 2025

  • Nah, we’re already running Qwen3 and DeepSeek R1 locally on accessible hardware at this point, so we have access to what you describe. Ollama is the app (rough sketch of talking to it below).

    The problem continues to be that LLMs are not suitable for many applications, and where they are useful, they are sloppy and inconsistent.

    My laptop is one of the ones they’re talking about in the article. It has an AMD NPU, and its 780M APU also runs games about as well as an older budget graphics card. It handles running local models really well for its size and power draw. Running local models is still lame as hell, though; it’s not how I actually end up using the hardware. 😑
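
    If anyone wants to poke at it, here’s a rough sketch of hitting Ollama’s local HTTP endpoint from Python once the daemon is running. The model tag and prompt are just placeholders; swap in whatever you’ve actually pulled (e.g. `ollama pull qwen3` first).

    ```python
    # Minimal sketch: ask a locally pulled model a question via Ollama's
    # HTTP API on its default port (11434). The "qwen3" tag is only an
    # example -- use whatever model you've pulled.
    import requests

    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "qwen3",       # tag of a model you've pulled locally
            "prompt": "Summarize why NPUs matter for laptops in two sentences.",
            "stream": False,        # one JSON response instead of a token stream
        },
        timeout=120,
    )
    resp.raise_for_status()
    print(resp.json()["response"])  # the generated text
    ```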