Notes to ourselves on posts we want to write when we have time but will certainly forget, and we won't check these notes
Happy to see that the migration to Hachyderm went pretty smoothly overall.
Followers are still trickling in. Just a few minor papercuts & double-checking my settings. Nothing terribly surprising.
A few takeaways from my account migration as a reminder for me. Perhaps it could help others considering a future move.
1. Private notes for followers were not migrated (I had put a lot of effort into annotating accounts; kind of a bummer this info didn't migrate).
2. Need to manually re-create filters (a good excuse for a review & cleanup!).
3. Need to manually re-create followed hashtags. No big deal.
4. I hide all of my lists from the Home feed and only leave my followed hashtags there. I had to go into each list and set this manually.
5. Bookmarks were imported in a jumbled order instead of a chronological newest -> oldest sort. I get to rediscover old bookmarks/posts/articles I had forgotten. This may be due to using slurp, since my instance archive link was borked.
Also cool to see that Hachyderm has a pretty active GitHub issue page. I'll have to open one up to get an official Xfce emoji (although the mouse/rat I'm currently using is kinda cute).
Oh. And an incredibly huge character limit for posts. Yay verbosity!
Good stuff overall and happy for a fresh start.
Note to selves
Update driving licence and passport when possible.
The photo on those (pre-GAHT and hair removal) increasingly doesn't look like us any more.
#NoteToSelf #NoteToSelves #NoteToFutureSelf #NoteToFutureSelves
#PleaseRememberToReadYourNotes
#ffmpeg #noteToSelf to burn timecode with drawtext:
ffmpeg -i video.mp4 -vf drawtext="fontfile=/path/to/DroidSansMono.ttf:\
timecode='00\:00\:00\:00':rate=25:text='TCR\:':fontsize=72:fontcolor=white:\
boxcolor=0x000000AA:box=1:x=860-text_w/2:y=960" output.mp4
Asking for help is a sign of strength and insight.
Admitting mistakes is a sign of maturity and integrity, and a chance to grow.
Taking in impulses from outside to tackle your own problems is a sign of intelligence and alertness.
Having such insights is also a sign of intelligence; actually acting on them, a sign of wisdom and mindfulness.
Leave a bit of whitespace around text, so those little black blocks don't fall over the text.
If you change the name of your wifi network, you'll have to reconnect all your devices again.
Today from the trail: too many hikers for my liking, and moreover they were all unfriendly. There was a man who kept *hysterically* yelling at his unleashed dog. Not okay.
Decided not to go to that trail on weekends.
#hiking #renonevada #notetoself
Once again reminding myself: you can read all you want, but unless you practice, nothing will stick #NoteToSelf
The map is not the terrain.
With all the new updates this week, a reminder that LLMs are an excellent illustration of the attempted shifts to redefine what we usually call art (and knowledge/skill) to be almost entirely separate from its creation process and from its original meaning, context, environment and situation which led to its creation. Being trained on digital reproductions of artworks and some select metadata, these models are fundamentally constrained to identify patterns for regenerating simulacra, their usage purely symbolic: a user-driven form of meme-style cultural sampling, pure semiotic “affiliation by association”, a kitschy clip-art-esque usage of looks, styles and aesthetics, entirely decoupled and devoid of history, meaning, context, incentives and other factors of (art) making/learning. A total lack of embodiment. Make this look like that. Baby portraits in Rembrandt's style, Ghibli used for PFPs or to create Neo-Nazi propaganda. Who cares?!
The great homogenizer.
Even for me, as an artist who has primarily used non-LLM-based generative techniques for 25+ years, training a model on a corpus of my own works and then having it churn out new derivations would, other than as a case study, completely counter any of the creative & systemic investigations I'm after with most of my works. LLMs turn everything into a sampling and prompting workflow. Replicating a (non-existent) house style is the very thing I'm least interested in!
Triteness re-invented.
Removed from any original intentions of the consumed works enslaved in their training corpus, ignorant of the emotional states of their creators, free from the pains and joys and myriad micro-decisions of art making, of the social context and the limitations (physical, material, skill) which led people to search for ways of expressing their inner thoughts & feelings via artistic means... AI enthusiasts celebrate this new contextual freedom as a creative breakthrough, but it’s always the same underlying sentiment behind it: “The final original idea was that everything had already been done before.”
The Exascale mindset.
It ranges from the ravenous assembling of training datasets, ferociously crawling & harvesting absolutely anything that can possibly be found and accessed online (entirely disregarding author & privacy rights and the social/technical contracts of acceptable use), and energy consumption for model training at a scale competing with developed nation states, to the abrasive social and political engineering and the artificial inflation of framing this tech as beneficial and inevitable to our societies. Most of the news & tech media, always hungry for clickbait, YouTubers able to create decades’ worth of new content: everyone is happily lapping up any press releases and amplifying the hype. Are there any responsible adults left where it currently matters most?
This ignorance-by-design isn’t about LLMs or their impact on art: The wider discussion is about how a tiny group of people with access to quasi-unlimited resources, capital and politicians is attempting to redefine what human culture is and to treat it (us) like just another large-scale mining operation, converting millennia of lived human experience, learning & suffering into monopolized resources for total extraction/monetization, filtered, curated, controlled and eventually sold back as de facto truth, with original provenance and meaning annihilated or vastly distorted to fit new purposes and shifting priorities/politics...
Don’t let the map become the terrain!
---
Two quotes by Friedrich A. Kittler as related food for thought:
“What remains of people is what media can store and communicate.”
“Understanding media is an impossibility, because conversely, the prevailing communication techniques remote-control all understanding and create all of its illusions.”
Never change the 'clicks' on your earbuds... You'll keep clicking through to find the right setting.
To all who are criticizing the mounting criticism of LLMs itself, and who'd rather emphasize that these models can also be used for good:
POSIWID (aka The Purpose Of a System Is What It Does) is very much applicable here, i.e. there is “no point in claiming that the purpose of a system is to do what it constantly fails to do”.[1]
For the moment (and I don’t detect _any_ signs of this changing), LLMs, both conceptually and in the way they’re handled technologically/politically, are harmful more than anything else, regardless of other potential/actual use cases. In a non-capitalist, solarpunk timeline this all might look very different, but we’re _absolutely not_ in that world. It’s simply ignorant, and impossible, to consider LLM benefits only anecdotally or abstractly, detached from their implementation, the infrastructure required for training, the greed, the abuse, the waste of resources (and resulting conflicts), the inflation, disinformation, and tangible threats (with already real impacts) to climate, energy, rights, democracy, society, life etc. These aren't hypotheticals; not anymore!
A basic cost-benefit analysis:
In your eyes, are the benefits of LLMs worth these above costs?
Could these benefits & time savings have been achieved in other ways?
Do you truly believe a “democratization of skills” is achievable via the hyper-centralization of resources, whilst actively harvesting and then removing the livelihood and rights of entire demographics?
You’re feeling so very productive with your copilot subscription; how about funding FLOSS projects instead and helping to build sustainable, supportive communities?
How about investing $500 billion into education/science/arts?
Cybernetics was all about feedback loops, recursion, considering the effects of a system and studying their influence on subsequent actions/iterations. Technologists (incl. my younger self) have made the mistake/choice of ignoring tech’s impact on the world for far too long. For this field to truly move forward and become more holistic, empathetic and ethical, it _must_ stop treating the above aspects as distracting inconvenient truths and start addressing them head-on, start considering the secondary and tertiary effects of our actions, and use those to guide us! Neglecting or actively denying their importance and the more-than-fair criticism, without ever being able to produce equally important counter-examples/reasons, just makes us look ignorant of the larger picture... The same goes for education/educators in related disciplines!
Nothing about LLMs is inevitable per se. There’s always a decision and for each decision we have to ask who’s behind it, for what purposes, who stands to benefit and where do we stand with these. Sure, like any other tech, LLMs are “just a tool”, unbiased in theory, usable for both positive and negative purposes. But, we’ve got to ask ourselves at which point a “tool” has attracted & absorbed a primary purpose/form as a weapon (incl. usage in a class war), and any other humanist aspects have become mere nice-to-have side effects, great for greenwashing, and — for some — surfing the hype curve, while it lasts. We’ve got to ask at which point LLMs currently are on this spectrum and in which direction they’re actively accelerating (are being accelerated)...
(Ps. Like many others, for many years I’ve been fascinated by, building and using AI/ML techniques in many projects. I started losing interest shortly after the introduction of GANs, with the non-stop demand for exponentially increasing hardware resources and the obvious ways this tech would be used in ever more damaging ways... So my criticism isn’t against AI as a general field of research, but about what is currently sold as AI and how it’s being pushed onto us, for reasons which actually have not much to do with AI itself, other than it being a powerful excuse/lever for enabling empire-building efforts and possible societal upheavals...)
[1] https://en.wikipedia.org/wiki/The_purpose_of_a_system_is_what_it_does
My guide to not getting mad all the time: ask, is this problem/challenge bigger than the ones you already have? If not, don’t worry about it and save the energy for the other ones.
#NoteToSelf
It dawned on me that many conflicts in UI/UX design philosophy boil down to Black Box vs. White Box worldviews, and an increasing focus/bias towards opaqueness...
On the one hand, there's the long-elusive dream/goal of UI designers for their technological artifacts/products/services (esp. AI™-powered ones) to blend perfectly into their physical and human environment, be as autonomous & intuitive-to-use as possible, have as few controls/interfaces as possible (often minimalist brand aesthetics explicitly demand this), all whilst offering perfectly suited outcomes/actions from only minimal direct inputs, yet always with perfect prediction/predictability: DWIM (Do What I Mean) magic!
This approach mostly comes with a large set of brushed-under-the-rug costs: patronizing/dehumanizing the people intended to interact with the artifact, doubting/denying their intelligence, outright removal/limitation of user controls (usually misrepresented/celebrated as "simplicity"), reliance on intense telemetry/surveillance/tracking, enforced 24/7 connectivity, increased energy usage, and all the resulting skewed incentives for monetization which actually have nothing to do with the original purpose of the artifact...
In contrast, the White/Clear Box approach offers the artifact with full transparency into (and control over) the inner workings of the system. Because of this it only works (great) for smaller, human-scale domains/contexts; given the out-of-bounds complexity of our surrounding contemporary tech stack, these days this very quickly just means: Welcome to Configuration Hell (Dante says "ciao!")...
(So in the end: Choose your own hellscape... :)
#NoteToSelf to check out where #Shapeways has landed after their recent restructuring. Just seeing a note from them today.
I've sold a few things there over the years, and I liked their #3Dprinting service options, though they were a bit expensive.
They pulled back last year with financial/mgmt probs but appear to be re-entering the market. Hope it's improved and viable.
Anyone try them recently?
#NoteToSelf: other people's stupidity is not a personal insult to me.
#NoteToSelf
Don't laugh at Flemish street and place names.
(Especially not in combination)
Note to self/selves: go through posts using the DIYHRT hashtag, then do any of:
#Octogenarian or greater? Feel like you are about to die? Have children?
Don't call your children every day and tell them about your expected impending death. Do that once.
Call every day, though. Tell them stories about your childhood, or how other members of your family died. Just don't talk about your death over & over.