LLM: tips for CPU




Latest revision as of 21:26, 16 July 2024

According to CGPT: when running on a CPU, try:

1. Batch processing, to reduce overhead and speed up embedding.
2. Reduce model precision (float32 → float16/int8) for a speedup with little accuracy loss.
3. Use a smaller version of the same model (e.g. a distilled variant).
4. Multi-threading.
5. Use Intel MKL / OpenBLAS.
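Tip 1 (batch processing) can be sketched in plain Python. Here `embed_batch` is a hypothetical placeholder for whatever embedding call you actually use; the point is that batching amortizes per-call overhead compared to embedding one text at a time:

```python
# Minimal batching sketch. `embed_batch` is a hypothetical stand-in for a real
# embedding call (e.g. an encode() on a loaded model); only the batching
# pattern itself is shown here.

def chunked(items, batch_size):
    """Yield successive batches of at most `batch_size` items."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

def embed_batch(texts):
    # Placeholder: a real implementation would run the model once per batch.
    return [[float(len(t))] for t in texts]

def embed_all(texts, batch_size=32):
    """Embed all texts, a batch at a time, preserving input order."""
    vectors = []
    for batch in chunked(texts, batch_size):
        vectors.extend(embed_batch(batch))
    return vectors

docs = ["halo dunia", "CPU inference", "pgvector"]
print(embed_all(docs, batch_size=2))
```

A larger `batch_size` generally helps on CPU up to the point where memory becomes the bottleneck; it is worth measuring rather than guessing.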

I use the intfloat model, sir; it is reasonably fast on CPU: https://huggingface.co/intfloat/multilingual-e5-large
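One detail worth noting with the e5 family: per the model card, inputs should be prefixed with `query: ` or `passage: ` before embedding. A small helper (the model call itself is left out):

```python
# The multilingual-e5 models expect a role prefix on every input
# (see the model card at huggingface.co/intfloat/multilingual-e5-large).

def e5_input(text, role="passage"):
    """Prefix a text the way e5 models expect ("query: ..." or "passage: ...")."""
    if role not in ("query", "passage"):
        raise ValueError("role must be 'query' or 'passage'")
    return f"{role}: {text}"

print(e5_input("dokumen PDF hasil OCR"))              # passage, for indexing
print(e5_input("cara setup pgvector", role="query"))  # query, at search time
```

Skipping the prefixes still produces embeddings, but retrieval quality tends to suffer, so it is an easy thing to get wrong silently.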


For PDFs, I usually parse the text first or use OCR, then store the embeddings in PostgreSQL using pgvector (https://github.com/pgvector/pgvector). It takes some effort though.
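The pgvector side can be sketched as follows. This only builds the SQL strings; the table and column names are invented for illustration, and the actual database connection (e.g. via psycopg) is left out:

```python
# Sketch of the pgvector storage/query step. The schema here is hypothetical;
# pgvector stores vectors as '[x1,x2,...]' text literals and provides the
# <=> cosine-distance operator (github.com/pgvector/pgvector).

SCHEMA_SQL = """
CREATE EXTENSION IF NOT EXISTS vector;
CREATE TABLE IF NOT EXISTS documents (
    id bigserial PRIMARY KEY,
    content text,
    embedding vector(1024)  -- multilingual-e5-large produces 1024-dim vectors
);
"""

def to_pgvector_literal(vec):
    """Format a Python list as a pgvector text literal, e.g. [1.0,2.0]."""
    return "[" + ",".join(str(float(x)) for x in vec) + "]"

def nearest_neighbors_sql(query_vec, k=5):
    """Build a top-k cosine-distance (<=>) query against the sketch schema."""
    return (
        "SELECT id, content FROM documents "
        f"ORDER BY embedding <=> '{to_pgvector_literal(query_vec)}' LIMIT {k};"
    )

print(SCHEMA_SQL)
print(nearest_neighbors_sql([0.1, 0.2, 0.3], k=3))
```

In real code the vector literal should be passed as a bound query parameter rather than interpolated into the SQL string.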