eupolicy.social is one of the many independent Mastodon servers you can use to participate in the fediverse.
This Mastodon server is a friendly and respectful discussion space for people working in areas related to EU policy. When you request to create an account, please tell us something about yourself.

Server stats: 216 active users

#localai

0 posts · 0 participants · 0 posts today
ℒӱḏɩę :blahaj:
Well, AI can't reliably detect AI. Hit or miss. #AI #localAI
*I run all of this locally, solar-powered.
Debby
@Catvalente
Or just use your AI locally 🦾 💻 🧠

I completely understand the concerns about relying too heavily on AI, especially cloud-based, centralized models like ChatGPT. The issues of privacy, energy consumption, and the potential for misuse are very real and valid. However, I believe there's a middle ground that allows us to benefit from the advantages of AI without compromising our values or autonomy.

Instead of rejecting AI outright, we can opt for open-source models that run on local hardware. I've been using local large language models (LLMs) on my own hardware. This approach offers several benefits:

- Privacy: by running models locally, we can ensure that our data stays within our control and isn't sent to third-party servers.
- Transparency: open-source models allow us to understand how the AI works, making it easier to identify and correct biases or errors.
- Customization: local models can be tailored to our specific needs, whether for accessibility, learning, or creative projects.
- Energy efficiency: local processing can be more energy-efficient than relying on large, centralized data centers.
- Empowerment: using AI as a tool to augment our own abilities, rather than replace them, can help us learn and grow. It's about leveraging technology to enhance our human potential, not diminish it.

For example, I use local LLMs for tasks like proofreading, transcribing audio, and even generating image descriptions. Instead of ChatGPT and Grok, I use Jan.ai with Mistral, Llama, OpenCoder, Qwen3, R1, WhisperAI, and Piper. These tools help me be more productive and creative, but they don't replace my own thinking or decision-making.

It's also crucial to advocate for policies and practices that ensure AI is used ethically and responsibly. This includes pushing back against government overreach and corporate misuse, as well as supporting initiatives that promote open-source and accessible technologies.

In conclusion, while it's important to be critical of AI and its potential downsides, I believe that a balanced, thoughtful approach can allow us to harness its benefits without sacrificing our values. Let's choose to be informed, engaged, and proactive in shaping the future of AI.

CC: @Catvalente @audubonballroon @calsnoboarder @craigduncan

#ArtificialIntelligence #OpenSource #LocalModels #PrivacyLLM #Customization #LocalAI #Empowerment #DigitalLiteracy #CriticalThinking #EthicalAI #ResponsibleAI #Accessibility #Inclusion #Education
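The proofreading workflow described above can be reproduced with any locally running, OpenAI-compatible server. A minimal sketch, assuming Jan's local API server is enabled (by default it listens on http://localhost:1337; check Jan's settings for the exact port) and that a model such as a Mistral instruct variant has already been downloaded in Jan; the model identifier below is a placeholder to adjust:

# a minimal sketch: ask a local, OpenAI-compatible server to proofread a sentence
# assumes Jan's local API server is running; the model name is a placeholder
curl http://localhost:1337/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "mistral-7b-instruct",
        "messages": [
          {"role": "system", "content": "You are a careful proofreader. Return only the corrected text."},
          {"role": "user", "content": "Their going to review the draft tomorow."}
        ]
      }'

Nothing in that request leaves the machine; the same call works against any other local server that speaks the OpenAI chat-completions format.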
Muhammeddd
Google has announced Gemma 3n, an AI model that runs locally on 3-4 GB of RAM and will also be usable locally on phones. A great development for data privacy. And AI developers are finally starting to pay attention to optimizing their language models. :android_logo:
#AI #yz #yapayzeka #gemma3n #Gemma #localai #mahremiyet #veri #Google #ArtificialIntelligence #telefon #Android
Emile Dingemans 🟥🟥🟥🟥🟥🟥
Running AI locally looks like the future to me. It gives ownership back to the user and frees us from companies that want to capture ever more of our personal data. Google has now launched a local AI version. https://techcrunch.com/2025/05/31/google-quietly-released-an-app-that-lets-you-download-and-run-ai-models-locally/ #localAI
Debby
@system76
I love #LLM, or as they're often called, #AI, especially when used locally. Local models are incredibly effective for enhancing daily tasks like proofreading, checking emails for spelling and grammatical errors, quickly creating image descriptions, transcribing audio to text, or even finding that one quote buried in tons of files that answers a recurring question.

However, if I wanted to be fully transparent to #bigtech, I would use Windows and Android with all the "big brotherly goodness" baked into them. That's why I hope these tools don't connect to third-party servers.

So, my question to you is: do you plan to offer a privacy-oriented, local-first/self-hosted LLM?

I'm not opposed to the general notion of using AI, and if done locally and open-source, I really think it could enhance the desktop experience. Even the terminal could use some AI integration, especially for spell-checking and syntax-checking those convoluted, long commands. I would love a self-hosted integration of some AI features. 🌟💻
#OpenSource #Privacy #AI #LocalModels #SelfHosted #LinuxAI #LocalLLM #LocalAI
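That terminal idea can already be approximated with a local model today. A minimal sketch, assuming Ollama is installed and a model such as llama3.1 has been pulled; the helper name aicheck is hypothetical:

# hypothetical shell helper: pipe a command to a local model for a typo/syntax review
# assumes Ollama is installed and "ollama pull llama3.1" has been run beforehand
aicheck() {
  printf 'Check this shell command for typos or syntax errors and suggest a fix:\n%s\n' "$*" \
    | ollama run llama3.1
}

# example use
aicheck "docker run -p 8080:8080 --name local-ai -ti localai/localai:latest-aio-cpu"

The review stays on the local machine, which is the whole point of the request above.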
ResearchBuzz: Firehose
MakeUseOf: Anyone Can Enjoy the Benefits of a Local LLM With These 5 Apps. "Cloud-based AI chatbots like ChatGPT and Gemini are convenient, but they come with trade-offs. Running a local LLM—the tech behind the AI chatbot—puts you in control, offering offline access and stronger data privacy. And while it might sound technical, the right apps make it easy for anyone to get started."
https://rbfirehose.com/2025/05/19/makeuseof-anyone-can-enjoy-the-benefits-of-a-local-llm-with-these-5-apps/
Winbuzzer
Ollama Local LLM Platform Unveils Custom Multimodal AI Engine, Steps Away from Llama.cpp Framework
#Ollama #MultimodalAI #LocalLLM #AI #ArtificialIntelligence #MachineLearning #VisionModels #OpenSourceAI #LLM #AIEngine #TechNews #LocalAI
https://winbuzzer.com/2025/05/16/ollama-local-llm-platform-unveils-custom-multimodal-ai-engine-steps-away-from-llama-cpp-framework-xcxwbn/
ℒӱḏɩę :blahaj:
Messing with some local AI vision models. Asked it to describe this 4 year old selfie (obfuscated against cloud AI). Now carefully read the 3rd paragraph 😆 #ai #localai #vision #ollama
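For anyone who wants to try the same experiment, a minimal sketch using Ollama with llava, one commonly used open vision model; the image path is a placeholder for your own file:

# describe a local image with a local vision model via Ollama
# assumes Ollama is installed; the photo never leaves the machine
ollama pull llava
ollama run llava "Describe the person and the setting in this photo: ./selfie.jpg"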
joostruis
Running your own #AI #Local is super easy. Well, if you have a powerful enough system, of course.

According to the documentation on the #LocalAI website, we just need to run a #docker command to get going.

In a terminal run:
docker run -p 8080:8080 --name local-ai -ti localai/localai:latest-aio-cpu

We wait a bit for it to set itself up and leave the terminal open to watch the output.

Now we open a web browser and go to:
http://localhost:8080/

And there you have it!

https://localai.io/
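Once the container is up, you can also talk to it from the terminal. A minimal sketch, assuming the default port mapping above; LocalAI serves an OpenAI-compatible API, and the model name should be adjusted to whatever the models endpoint reports for the all-in-one (AIO) image:

# list the models the AIO image has preloaded
curl http://localhost:8080/v1/models

# send a chat request to the OpenAI-compatible endpoint
# (replace "gpt-4" with one of the names returned above if it differs)
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4", "messages": [{"role": "user", "content": "Say hello from my local AI."}]}'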
Debby
Balancing Privacy and Assistive Technology: The Case for Large Language Models

In today's digital world, the tension between privacy and technology is more pronounced than ever. I'm deeply concerned about the implications of surveillance capitalism, especially the spyware embedded in our devices, cars, and even our bodies. This pervasive technology can lead to a loss of autonomy and a feeling of being constantly monitored. Yet, amid these concerns, assistive technology plays a critical role, particularly for those of us with neurological impairments.

I recently read a thought-provoking post by @serge that highlighted the importance of sharing perspectives on this issue: https://babka.social/@serge/113754269997543779

With the rise of large language models (LLMs) like ChatGPT, we're seeing a shift toward more accessible and user-friendly technology. Local LLMs offer a viable alternative to big-tech solutions, often running on ordinary laptops or even compact devices like a Raspberry Pi. For many, including myself, LLMs are invaluable tools that enhance communication, summarize information, transcribe voice, facilitate learning, and help manage tasks that might otherwise feel overwhelming. They can help strike the right emotional tone in our writing and assist in understanding complex data, capabilities that are especially crucial for those of us facing neurological challenges.

While the goal of eliminating surveillance capitalism is commendable, banning technology outright isn't the answer. We must recognize the significance of LLMs for individuals with disabilities. Calls to remove these technologies can overlook their profound impact on our lives. For many, LLMs are not just tools; they are lifelines that enable us to engage with the world more fully. Removing access to these resources would only isolate individuals who already face significant barriers. Instead, we should focus on using local LLMs and other privacy-focused alternatives.

This situation underscores the need for a nuanced approach to the intersection of privacy and assistive technology. Open-source projects like Piper show how we can create locally run voice models that are accessible to everyone, even on low-cost devices. Advocating for privacy must go hand in hand with considering the implications for those who rely on these technologies for daily functioning. Striking a balance between protecting individual privacy and ensuring access to vital assistive tools is not just necessary; it's imperative.

In conclusion, LLMs represent a promising avenue for assisting individuals with neurological impairments. By embracing local and open-source solutions, we can protect our privacy while ensuring that everyone has access to the tools they need to thrive. The conversation around privacy and technology must continue, focusing on inclusivity and empowerment for all.

I use SpeechNotes installed locally all the time, and I'd love to hear how you use LLMs as assistive technology! Do you run your LLM locally? Share your experiences!

#PrivacyAdvocate #AssistiveTechnology #LLM #SurveillanceCapitalism #Neurodiversity #Accessibility #TechForGood #a11y #LocalLLM #LocalAI #FOSSai #opensource_ai
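Piper, mentioned above, is a local text-to-speech engine rather than an LLM, and it illustrates the low-cost-device point well. A minimal sketch, assuming the piper binary and the en_US-lessac-medium voice files (the .onnx model plus its .json config) have been downloaded into the current directory:

# fully local text-to-speech with Piper; nothing is sent to a server
echo 'Local text-to-speech keeps my words on my own machine.' \
  | ./piper --model en_US-lessac-medium.onnx --output_file hello.wav
# play the result (aplay is available on most Linux systems with ALSA)
aplay hello.wav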
Tim Wappat :verified:
Did you know you can run AI models on your own laptop or desktop without needing an internet connection?
👉 How local AI is secure, cost-effective, and offline.
👉 Why smaller models can sometimes be the perfect tool for the job.
👉 How to quickly get started with Ollama and models like Llama 3.1.
#ArtificialIntelligence #TechBlog #LocalAI #SmallModelsBigImpact
https://buff.ly/496jmM3
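The Ollama quick start the post points to boils down to two commands; a minimal sketch, assuming Ollama has been installed from ollama.com:

# download the Llama 3.1 weights (a one-time, several-GB download)
ollama pull llama3.1
# chat with the model entirely on your own hardware; no connection needed after the pull
ollama run llama3.1 "Summarize why a small local model can be the right tool for a small job."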
Nextcloud 📱☁️💻
Nextcloud Assistant 2.0 is here! ✨
Build an efficient, sovereign, AI-powered workspace with Nextcloud Hub 8 and get ready to experience AI-driven features like Context Chat, Context Write, and more!
#Nextcloud #ethicalAI #localAI
https://youtu.be/kMl9OdP10EY
Paul O'Malley
Ever tried to set up and use AI models directly on your laptop or computer? It can get overwhelming very quickly! 😩 Msty is here to change that! 🚀

My newest video walks you through how Msty makes it incredibly easy to tap into the power of AI, both locally (no internet needed) and in the cloud. No more complex setups or slow performance! ⚡

Want to boost your productivity and creativity with AI? ✨ Check out the video and let me know what you think! 👇

🎥 https://youtu.be/TVcsnof55Kk

#AI #Msty #LocalAI #Productivity #Innovation #TechTips #ChatGPT #Gemini #Claude #YouTube #Feditips
MagiCat :badge: :daijin: :bl:
I have developed an iOS app that upscales and enlarges blurry images locally on your device using the Apple Neural Engine.

It takes about 5 seconds, and all processing happens on the device.

Submitting to the App Store today!

#LocalAI #AI #iOS #Apple
Nextcloud 📱☁️💻
Nextcloud AI Assistant 2.0 with Context Chat ✨
🎙️ Audio transcriptions
🌄 Image generation
🗨️ Talk summarization bots
🦾 AI workload offloading
... and more!
Build an efficient, sovereign, AI-powered workspace with Nextcloud Hub 8!
#localAI
https://youtu.be/JP4YKGBEL1s
Nextcloud 📱☁️💻
Nextcloud AI Assistant 2.0 with Context Chat ✨
Build a private, sovereign, AI-powered workspace with Nextcloud Hub 8! 🎉
Here's a preview of the AI-powered features available - optionally! - in Nextcloud Hub 8:
https://youtu.be/JP4YKGBEL1s
#NextcloudHub #localAI #soverignAI
Taylor Arndt
I've received many questions about Local AI since my post yesterday. I'd love to start a conversation about it. Please put your questions in the comments, and let's get the discussion going! #LocalAI #ArtificialIntelligence #AI #TechTalk #AICommunity #ResponsibleResearchAndInnovation
AG Connect
AI in the clouds of the tech giants is now the norm, but running your own infrastructure (and with it, sovereignty) is gaining ground (and importance).
https://www.agconnect.nl/tech-en-toekomst/artificial-intelligence/de-cruciale-keuze-tussen-cloudgedreven-ai-en-lokale-ai
#clouddrivenAI #genAI #localAI #digitalesoevereiniteit