<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://lms.onnocenter.or.id/wiki/index.php?action=history&amp;feed=atom&amp;title=GPT4All%3A_Pilihan_Model_Bahasa_Indonesia</id>
	<title>GPT4All: Pilihan Model Bahasa Indonesia - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://lms.onnocenter.or.id/wiki/index.php?action=history&amp;feed=atom&amp;title=GPT4All%3A_Pilihan_Model_Bahasa_Indonesia"/>
	<link rel="alternate" type="text/html" href="https://lms.onnocenter.or.id/wiki/index.php?title=GPT4All:_Pilihan_Model_Bahasa_Indonesia&amp;action=history"/>
	<updated>2026-04-19T22:11:41Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.45.1</generator>
	<entry>
		<id>https://lms.onnocenter.or.id/wiki/index.php?title=GPT4All:_Pilihan_Model_Bahasa_Indonesia&amp;diff=72737&amp;oldid=prev</id>
		<title>Unknown user: Created page with &quot;Below are several **local (gguf) models that support Indonesian** and can be used directly in GPT4All + Open WebUI or via the `llama.cpp` CLI:  ---  ### 1. **MiaL...&quot;</title>
		<link rel="alternate" type="text/html" href="https://lms.onnocenter.or.id/wiki/index.php?title=GPT4All:_Pilihan_Model_Bahasa_Indonesia&amp;diff=72737&amp;oldid=prev"/>
		<updated>2025-07-04T23:33:25Z</updated>

		<summary type="html">&lt;p&gt;Created page with &amp;quot;Berikut ini beberapa **model lokal (gguf) yang mendukung Bahasa Indonesia** dan bisa langsung dipakai di GPT4All + Open WebUI atau melalui CLI `llama.cpp`:  ---  ### 1. **MiaL...&amp;quot;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;Berikut ini beberapa **model lokal (gguf) yang mendukung Bahasa Indonesia** dan bisa langsung dipakai di GPT4All + Open WebUI atau melalui CLI `llama.cpp`:&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
### 1. **MiaLatte‑Indo‑Mistral‑7B (Q4\_K\_M gguf)**&lt;br /&gt;
&lt;br /&gt;
* Instruction-tuned version of Mistral‑7B, specialized for Indonesian 🇮🇩&lt;br /&gt;
* Format: `.gguf`, Q4\_K\_M quantization (\~4.5 GB)&lt;br /&gt;
* Fast, with a good balance between quality and size ([toolify.ai][1], [huggingface.co][2])&lt;br /&gt;
&lt;br /&gt;
**Download via the CLI:**&lt;br /&gt;
&lt;br /&gt;
```bash&lt;br /&gt;
pip install huggingface-hub&lt;br /&gt;
huggingface-cli download mradermacher/MiaLatte-Indo-Mistral-7b-GGUF \&lt;br /&gt;
  MiaLatte-Indo-Mistral-7b.Q4_K_M.gguf --local-dir ~/gpt4all/models&lt;br /&gt;
```&lt;br /&gt;
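&lt;br /&gt;
If `huggingface-cli` is not installed, the same file can be fetched directly: Hugging Face serves repository files under `/resolve/main/`, so the download URL can be assembled like this (a minimal sketch, with the repo and file names taken from the command above):&lt;br /&gt;
&lt;br /&gt;
```python
# Build a direct Hugging Face download URL for a gguf file.
# URL pattern: https://huggingface.co/REPO/resolve/main/FILENAME

def hf_direct_url(repo, filename):
    # repo is 'owner/name', exactly as used with huggingface-cli
    return 'https://huggingface.co/' + repo + '/resolve/main/' + filename

url = hf_direct_url('mradermacher/MiaLatte-Indo-Mistral-7b-GGUF',
                    'MiaLatte-Indo-Mistral-7b.Q4_K_M.gguf')
print(url)
```
&lt;br /&gt;
The printed URL can then be passed to `wget -P ~/gpt4all/models`.&lt;br /&gt;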
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
### 2. **Mistral‑7B v0.1 (TheBloke)**&lt;br /&gt;
&lt;br /&gt;
* Multilingual foundation model, highly efficient and fast&lt;br /&gt;
* Format: `.gguf`, Q4\_K\_M quant (\~4.11 GB) ([dataloop.ai][3])&lt;br /&gt;
&lt;br /&gt;
**Download:**&lt;br /&gt;
&lt;br /&gt;
```bash&lt;br /&gt;
huggingface-cli download TheBloke/Mistral-7B-v0.1-GGUF \&lt;br /&gt;
  mistral-7b-v0.1.Q4_K_M.gguf --local-dir ~/gpt4all/models&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
### 3. **Nous‑Hermes‑2‑Mixtral‑8x7B (advanced multilingual)**&lt;br /&gt;
&lt;br /&gt;
* Large Mixtral mixture-of-experts model with a high-quality fine-tune&lt;br /&gt;
* Quantization options from Q4\_K\_M (\~28.5 GB RAM) up to Q5\_K\_M (\~33 GB) ([dataloop.ai][4], [toolify.ai][1])&lt;br /&gt;
&lt;br /&gt;
**Download (Q4\_K\_M):**&lt;br /&gt;
&lt;br /&gt;
```bash&lt;br /&gt;
huggingface-cli download TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF \&lt;br /&gt;
  nous-hermes-2-mixtral-8x7b-dpo.Q4_K_M.gguf --local-dir ~/gpt4all/models&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
## 🚀 How to use it in GPT4All/Open WebUI&lt;br /&gt;
&lt;br /&gt;
1. Place the `.gguf` file in `~/gpt4all/models/`.&lt;br /&gt;
2. Open the WebUI (`http://&amp;lt;server-ip&amp;gt;:3000`) → Settings → Local Models → **Add model** → point it to the file.&lt;br /&gt;
3. Choose the Q4\_K\_M quantization (a good balance of performance and RAM).&lt;br /&gt;
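&lt;br /&gt;
Step 1 can be prepared and checked from the shell before opening the WebUI (paths as used on this page):&lt;br /&gt;
&lt;br /&gt;
```shell
# Create the models directory scanned by GPT4All / Open WebUI
mkdir -p "$HOME/gpt4all/models"

# List any gguf files already in place (prints nothing if none yet)
ls "$HOME/gpt4all/models" | grep '\.gguf$' || true
```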
&lt;br /&gt;
For the CLI via `llama.cpp` or `gpt4all-backend`:&lt;br /&gt;
&lt;br /&gt;
```bash&lt;br /&gt;
./gpt4all-backend/build/bin/gpt4all-lora-quantized-ggml \&lt;br /&gt;
  -m ~/gpt4all/models/mistral-7b-v0.1.Q4_K_M.gguf&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
or&lt;br /&gt;
&lt;br /&gt;
```bash&lt;br /&gt;
llama.cpp/main -m ~/gpt4all/models/MiaLatte-Indo-Mistral-7b.Q4_K_M.gguf \&lt;br /&gt;
  -p &amp;quot;Halo, apa kabar?&amp;quot; -n 256&lt;br /&gt;
```&lt;br /&gt;
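&lt;br /&gt;
The `llama.cpp` invocation above can also be scripted; a sketch that assembles the same command line (binary path, model path, and flags copied from the example, so adjust them to your install):&lt;br /&gt;
&lt;br /&gt;
```python
import subprocess

def llama_cmd(model_path, prompt, n_tokens=256):
    # Mirrors: llama.cpp/main -m MODEL -p PROMPT -n 256
    return ['llama.cpp/main',
            '-m', model_path,
            '-p', prompt,
            '-n', str(n_tokens)]

cmd = llama_cmd('~/gpt4all/models/MiaLatte-Indo-Mistral-7b.Q4_K_M.gguf',
                'Halo, apa kabar?')
# subprocess.run(cmd) would launch the model; left commented out because
# it needs the compiled binary and the ~4.5 GB model file on disk.
print(' '.join(cmd))
```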
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
## 🧭 Recommendations&lt;br /&gt;
&lt;br /&gt;
* **Server with ≤16 GB RAM**: use **MiaLatte-Indo-Mistral** (4–6 GB).&lt;br /&gt;
* **16–32 GB RAM**: **Mistral‑7B v0.1**.&lt;br /&gt;
* **≥32 GB RAM + optional GPU**: **Nous‑Hermes‑2‑Mixtral‑8x7B** for the best results.&lt;br /&gt;
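&lt;br /&gt;
The rule of thumb above can be written as a small helper (thresholds and file names exactly as listed on this page; purely illustrative):&lt;br /&gt;
&lt;br /&gt;
```python
def recommend_model(ram_gb):
    # Thresholds follow the recommendation list above
    if ram_gb >= 32:
        return 'nous-hermes-2-mixtral-8x7b-dpo.Q4_K_M.gguf'
    if ram_gb > 16:
        return 'mistral-7b-v0.1.Q4_K_M.gguf'
    return 'MiaLatte-Indo-Mistral-7b.Q4_K_M.gguf'

print(recommend_model(8))   # a 8 GB server gets the smallest model
```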
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
If you need help with further configuration (systemd, GPU, running it as a daemon service), just ask!&lt;br /&gt;
&lt;br /&gt;
[1]: https://www.toolify.ai/ai-model/thebloke-nous-hermes-2-mixtral-8x7b-dpo-gguf?utm_source=chatgpt.com &amp;quot;TheBloke/Nous-Hermes-2-Mixtral-8x7B-DPO-GGUF - Toolify.ai&amp;quot;&lt;br /&gt;
[2]: https://huggingface.co/mradermacher/MiaLatte-Indo-Mistral-7b-GGUF?utm_source=chatgpt.com &amp;quot;mradermacher/MiaLatte-Indo-Mistral-7b-GGUF - Hugging Face&amp;quot;&lt;br /&gt;
[3]: https://dataloop.ai/library/model/thebloke_mistral-7b-v01-gguf/?utm_source=chatgpt.com &amp;quot;Mistral 7B V0.1 GGUF · Models - Dataloop&amp;quot;&lt;br /&gt;
[4]: https://dataloop.ai/library/model/thebloke_nous-hermes-llama2-gguf/?utm_source=chatgpt.com &amp;quot;Nous Hermes Llama2 GGUF · Models - Dataloop&amp;quot;&lt;/div&gt;</summary>
		<author><name>Unknown user</name></author>
	</entry>
</feed>