LLM: ollama install ubuntu 24.04
Source: https://www.jeremymorgan.com/blog/generative-ai/local-llm-ubuntu/
Make sure:
* The OS can be a Linux server, such as Ubuntu Server.
* There is enough RAM to hold the model; the llama3 model needs at least 8 GB of memory.
* For a smoother/faster experience, it is better to use a GPU, such as a fairly recent NVIDIA Tesla (see the quick check after this list).
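A minimal sketch for checking these prerequisites before installing; lspci may require the pciutils package, and nvidia-smi only exists once the NVIDIA driver is installed:
free -h                 # total and available RAM; the llama3 (8B) model wants roughly 8 GB free
lspci | grep -i nvidia  # lists any NVIDIA GPU on the PCI bus
nvidia-smi              # GPU model and VRAM; works only after the NVIDIA driver is installed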
Install the supporting packages
sudo apt update
sudo apt install curl net-tools
Download
curl -fsSL https://ollama.com/install.sh | sh
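To confirm the installer finished cleanly, check that the client is on the PATH:
ollama --version   # prints the installed Ollama version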
As a regular user, run & download the models
ollama pull llama3
ollama pull bge-m3:latest
ollama pull all-MiniLM
Optional:
ollama pull gemma3:4b
ollama pull deepseek-r1:7b
ollama pull llama3.2:1b
ollama pull qwen2.5-coder:7b
ollama pull adijayainc/bhsa-deepseek-r1-1.5b
ollama pull adijayainc/bhsa-llama3.2
ollama pull rizkiagungid/deeprasx
ollama pull fyandono/chatbot-id
ollama pull rexyb10/codeai
ollama pull fahlevi20/DeepSeek-R1-TechSchole-Indonesia
systemctl status ollama
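After the pulls finish, it helps to list the downloaded models and confirm the API is listening on its default port 11434 (netstat comes from the net-tools package installed above):
ollama list                      # models stored locally
sudo netstat -tlnp | grep 11434  # the ollama service should be listening here
ollama run llama3                # optional: interactive chat with llama3 (type /bye to quit)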
OPTIONAL: As superuser,
sudo snap install --beta open-webui
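A quick check that the Open WebUI service came up; the listening port below is an assumption (8080 and 3000 are common defaults for Open WebUI), so verify what your install actually uses:
sudo snap services open-webui              # the snap's service should show as active
sudo netstat -tlnp | grep -E '8080|3000'   # look for the web UI port (assumed defaults)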
==Example==
curl http://localhost:11434/api/generate -d '{
  "model" : "llama3",
  "prompt" : "tell me a joke",
  "stream" : false
}'
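The API returns a JSON object whose "response" field holds the generated text. A small follow-up sketch, assuming jq is installed (sudo apt install jq), that prints only that field:
curl -s http://localhost:11434/api/generate -d '{
  "model" : "llama3",
  "prompt" : "tell me a joke",
  "stream" : false
}' | jq -r '.response'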
==References==
* https://www.jeremymorgan.com/blog/generative-ai/local-llm-ubuntu/
* https://ollama.com/library