Proof-of-concept project showing that it's possible to run an entire Large Language Model in nothing but a #PDF file.
It uses #Emscripten to compile #llama.cpp to asm.js, which can then be executed inside the PDF via an old PDF JavaScript injection technique.
Combined with embedding the entire #LLM weights file into the PDF as base64, this makes it possible to run LLM inference entirely within a PDF.
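As a rough illustration of the base64-embedding step, here is a minimal sketch of how embedded model bytes might be recovered at run time. The variable names and the tiny stand-in payload are illustrative assumptions, not the project's actual code; a hand-rolled decoder is shown because some PDF JavaScript engines lack helpers like `atob()`:

```javascript
// Sketch: the model file is assumed to be shipped inside the PDF as a
// base64 string; the PDF's JavaScript decodes it back into raw bytes
// before handing it to the asm.js build of llama.cpp.
const B64 = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

function base64Decode(str) {
  const clean = str.replace(/=+$/, ""); // strip padding
  const bytes = [];
  let buffer = 0, bits = 0;
  for (const ch of clean) {
    // Shift in 6 bits per base64 character.
    buffer = (buffer << 6) | B64.indexOf(ch);
    bits += 6;
    if (bits >= 8) {
      // Emit a full byte whenever 8 or more bits have accumulated.
      bits -= 8;
      bytes.push((buffer >> bits) & 0xff);
    }
  }
  return new Uint8Array(bytes);
}

// Tiny stand-in payload in place of real model weights (hypothetical).
const embedded = "SGVsbG8sIExMTSE=";
const decoded = base64Decode(embedded);
console.log(String.fromCharCode(...decoded)); // "Hello, LLM!"
```

In the real project the decoded bytes would be a multi-megabyte GGUF-style weights file rather than a short string, so streaming or chunked decoding may be needed in practice.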