#localLLM


Errata: the NVIDIA RTX A2000 6GB can reach up to 200 t/s! On a 1.5B model. Not bad, at <70 W for the GPU and less than 140 W total for the build (old, deprecated HW), given that this kind of useless benchmark is promoted everywhere by 'pro'/paid tech enthusiasts.

#nvidia #LLM #GPU
Replied in thread

@system76
I love #LLM, or as they're often called, #AI, especially when used locally. Local models are incredibly effective for enhancing daily tasks like proofreading, checking emails for spelling and grammatical errors, quickly creating image descriptions, transcribing audio to text, or even finding that one quote buried in tons of files that answers a recurring question.
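
Transcription is a good example of a task that never has to leave the machine. A minimal sketch, assuming the faster-whisper package is installed (the model size and audio file name are just placeholders):

```python
from faster_whisper import WhisperModel

# The "small" weights are fetched once and cached; transcription itself
# then runs fully offline on the CPU.
model = WhisperModel("small", device="cpu", compute_type="int8")

# "meeting.mp3" stands in for any local audio file.
segments, info = model.transcribe("meeting.mp3")
for segment in segments:
    print(f"[{segment.start:6.1f}s -> {segment.end:6.1f}s] {segment.text}")
```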

However, if I wanted to be fully transparent to #bigtech, I would use Windows and Android with all the "big brotherly goodness" baked into them. That's why I hope these tools don't connect to third-party servers.

So, my question to you is: do you plan to offer a privacy-oriented, local-first/self-hosted LLM?

I'm not opposed to the general notion of using AI, and if done locally and open-source, I really think it could enhance the desktop experience. Even the terminal could use some AI integration, especially for spell-checking and syntax-checking those convoluted and long commands. I would love a self-hosted integration of some AI features. 🌟💻
#OpenSource #Privacy #AI #LocalModels #SelfHosted #LinuxAI #LocalLLM #LocalAI

Gemma 3 is out, with different flavours depending on whether you are rich or poor. Since I am not running DeepSeek R1 on 2 x Mac Minis like some on X, I guess I am not a ... but who knows...
Anyway, it is already available via Ollama or Hugging Face.

It is multimodal and does well on MMLU-Pro! (Because benchmarks are important when one tries to justify spending millions at the expense of everyone else. I don't think Earth, whether rare or not, belongs to ... but maybe I am wrong.)
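
For anyone wanting to poke at the smallest text-only variant, here is a minimal sketch via Hugging Face transformers; it assumes the Gemma licence has been accepted on the Hub, a recent transformers release, and that google/gemma-3-1b-it is the checkpoint you want:

```python
from transformers import pipeline

# Gated checkpoint: requires licence acceptance and a Hugging Face login,
# then the weights download once and run locally.
pipe = pipeline("text-generation", model="google/gemma-3-1b-it")

out = pipe("Say one nice thing about small local models.", max_new_tokens=40)
print(out[0]["generated_text"])
```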

So far I have worked on a portable, CPU-only inference solution, but the Vulkan graphics API actually supports some older Intel iGPUs.

The performance gain with llama.cpp is modest, but it does actually help by taking some of the load off the CPU.

This could interest owners of old Intel laptops looking to play with 1B to 8B models (up to 14B works with 16 GB of RAM, though a 14B model is slow at ~1.6 t/s).
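
For reference, a minimal sketch of that setup through the llama-cpp-python bindings, assuming they were installed with the Vulkan backend enabled (CMAKE_ARGS="-DGGML_VULKAN=on" at pip install time) and with a placeholder GGUF file name:

```python
from llama_cpp import Llama

# n_gpu_layers=-1 offloads every layer the backend can handle to the iGPU,
# which is where the modest CPU relief comes from on old Intel graphics.
llm = Llama(
    model_path="some-1.5b-model-q4_k_m.gguf",  # placeholder file name
    n_gpu_layers=-1,
    n_ctx=2048,
)

out = llm("Explain Vulkan in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```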

Check here for GPUs tested: vulkan.gpuinfo.org/

People moan about #deepseek having China censorship, not realising you can download it and run it locally without it.
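
For example, through Ollama and its Python client, a distilled DeepSeek R1 runs entirely against the local daemon, so nothing leaves your machine (the 7B tag is just one of several available sizes):

```python
import ollama

# Assumes the weights were fetched beforehand with: ollama pull deepseek-r1:7b
# The chat call below talks only to the local Ollama server.
response = ollama.chat(
    model="deepseek-r1:7b",
    messages=[{"role": "user", "content": "Summarise this thread in one line."}],
)
print(response["message"]["content"])
```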

It's a really good test of people's obtuseness. If you MUST have an opinion about stuff you don't understand, you're just proving your need to be right outweighs your need to be truthful.

Balancing Privacy and Assistive Technology: The Case for Large Language Models

In today’s digital world, the tension between privacy and technology is more pronounced than ever. I’m deeply concerned about the implications of surveillance capitalism—especially the spyware embedded in our devices, cars, and even our bodies. This pervasive technology can lead to a loss of autonomy and a feeling of being constantly monitored. Yet, amidst these concerns, assistive technology plays a critical role, particularly for those of us with neurological impairments.

I recently read a thought-provoking post by @serge that highlighted the importance of sharing perspectives on this issue.

babka.social/@serge/1137542699

With the rise of large language models (LLMs) like ChatGPT, we’re seeing a shift toward more accessible and user-friendly technology. Local LLMs offer a viable alternative to big tech solutions, often running on ordinary laptops or even compact devices like a Raspberry Pi. For many, including myself, LLMs are invaluable tools that enhance communication, summarize information, transcribe voice, facilitate learning, and help manage tasks that might otherwise feel overwhelming. They can help strike the right emotional tone in our writing and assist in understanding complex data; these capabilities are especially crucial for those of us facing neurological challenges.

While the goal of eliminating surveillance capitalism is commendable, banning technology outright isn’t the answer. We must recognize the significance of LLMs for individuals with disabilities. Calls to remove these technologies can overlook their profound impact on our lives. For many, LLMs are not just tools; they are lifelines that enable us to engage with the world more fully. Removing access to these resources would only isolate individuals who already face significant barriers. Instead, we should focus on utilizing local LLMs and other privacy-focused alternatives.

This situation underscores the need for a nuanced approach to the intersection of privacy and assistive technology. Open-source speech tools like Piper show how locally run voice models can be made accessible to everyone, even on low-cost devices. Advocating for privacy must go hand in hand with considering the implications for those who rely on these technologies for daily functioning. Striking a balance between protecting individual privacy and ensuring access to vital assistive tools is not just necessary; it’s imperative.
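
As a sketch of how lightweight that can be, here is a hypothetical Python wrapper around the piper CLI; the voice file name is an assumption, and any downloaded Piper voice (plus its .json config) would do:

```python
import subprocess

# Piper reads text on stdin and writes a WAV file, entirely offline.
text = "Local voice synthesis, with no cloud account required."
subprocess.run(
    ["piper", "--model", "en_US-lessac-medium.onnx", "--output_file", "hello.wav"],
    input=text.encode("utf-8"),
    check=True,
)
```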

In conclusion, LLMs represent a promising avenue for assisting individuals with neurological impairments. By embracing local and open-source solutions, we can protect our privacy while ensuring that everyone has access to the tools they need to thrive. The conversation around privacy and technology must continue, focusing on inclusivity and empowerment for all.

I use SpeechNotes installed locally all the time, and I’d love to hear how you use LLMs as assistive technology! Do you run your LLM locally? Share your experiences!