Nipate
Forum => Kenya Discussion => Topic started by: Nefertiti on February 17, 2024, 01:53:10 PM
-
https://savant.holeyfox.co/
-
How to set up and run a local LLM with Ollama and Llama 2
https://thenewstack.io/how-to-set-up-and-run-a-local-llm-with-ollama-and-llama-2/
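The linked guide boils down to installing Ollama, pulling a model (e.g. `ollama pull llama2`), and then talking to the local REST API that `ollama serve` exposes on port 11434. A minimal sketch of building such a request in Python, assuming a default Ollama install (the endpoint and payload fields are Ollama's documented `/api/generate` interface; the prompt is just an illustration):

```python
import json
import urllib.request

# Ollama's default local endpoint for one-shot text generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming generate request for a locally running Ollama server."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )

# To actually send it you need `ollama serve` running and the model pulled,
# e.g. `ollama pull llama2`:
# with urllib.request.urlopen(build_request("llama2", "Why is the sky blue?")) as resp:
#     print(json.loads(resp.read())["response"])
```

Nothing leaves your machine - the whole round trip is localhost, which is the point of the exercise.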
-
I guess Punda fool has nothing to say.
If you could showcase this to Safaricom - Tusker dwarf would not blink before cutting you a blank cheque to upgrade ZURI.
It is painfully embarrassing at present.
-
Nice page comparing chatbots and accelerators. The Apple M1 chip appears to beat AMD and Nvidia GeForce. Interestingly, Nvidia is No. 3 despite the hyperbole.
Running Local LLMs, CPU vs. GPU - a Quick Speed Test
https://dev.to/maximsaplin/running-local-llms-cpu-vs-gpu-a-quick-speed-test-2cjn
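The speed comparisons in that article are tokens-per-second figures, which you can compute yourself from the counters Ollama returns with each response. A small sketch, assuming Ollama's `/api/generate` response fields `eval_count` (tokens generated) and `eval_duration` (nanoseconds); the sample numbers below are made up for illustration:

```python
def tokens_per_second(eval_count: int, eval_duration_ns: int) -> float:
    """Convert Ollama's eval counters into a tokens/sec throughput figure."""
    return eval_count / (eval_duration_ns / 1e9)

# Hypothetical run: 180 tokens generated over 12 seconds.
sample = {"eval_count": 180, "eval_duration": 12_000_000_000}
print(tokens_per_second(sample["eval_count"], sample["eval_duration"]))  # 15.0
```

Run the same prompt on the same quantised model on each machine and this one number is what the benchmark page is ranking.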
Personally I am a fan of the Nvidia Jetson Nano and cuDNN for simple robotics. Nvidia chips and tooling are actually good in robotics and industrial applications - e.g. AVs, UAVs, IoT. In LLMs or software they can't compete with Google or Microsoft. Ergo, Nvidia's real competitors are Amazon and Huawei - the chips-as-a-service crew.
NVIDIA has just breached the $2 trillion barrier. They need to consolidate cloud distribution or their lunch will be eaten.
-
Musk's xAI open-sources
Open Release of Grok-1
https://x.ai/blog/grok-os