#swiftui

Corner concentricity was mentioned a lot at WWDC25, and now we finally have SwiftUI APIs for it. In my new post I explore how we can use the new ConcentricRectangle and containerShape() in iOS 26 to make our views fit perfectly within their containers: nilcoalescing.com/blog/Concent
#SwiftUI #iOSDev

Nil Coalescing: Corner concentricity in SwiftUI on iOS 26. Make your views and controls fit perfectly within their containers using new SwiftUI APIs in iOS 26 such as the ConcentricRectangle shape and the containerShape() view modifier.
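
A minimal sketch of the idea (using the iOS 26 APIs named in the post; the view name and layout values are mine): the outer view declares its resolved shape with containerShape(), and ConcentricRectangle gives the inner background corners that stay concentric with it.

```swift
import SwiftUI

struct ConcentricCard: View {
    var body: some View {
        Text("Concentric corners")
            .padding()
            // Inner shape: derives its corner radii from the enclosing
            // container shape, so its corners stay concentric at this inset.
            .background(.blue, in: ConcentricRectangle())
            .padding(12)
            // Outer card shape, declared as the container shape below.
            .background(.gray.opacity(0.2), in: RoundedRectangle(cornerRadius: 32))
            .containerShape(RoundedRectangle(cornerRadius: 32))
    }
}
```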

This single API (and the associated ContainerValues) is the #1 most important addition to #SwiftUI in *years*. It makes building custom components that behave like system ones almost trivially easy. It is downright magical and I'm thrilled to finally rip out all of my old _VariadicView hacks:

developer.apple.com/documentat

A huge thanks to the SwiftUI team for making this public.

Apple Developer Documentation: init(subviews:transform:). Constructs a group from the subviews of the given view.
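
To show why it matters, here is a sketch of the kind of custom container this enables (DividedVStack is my name, not from the post): Group(subviews:transform:) decomposes arbitrary caller-supplied content into individual subviews, the kind of thing that previously required _VariadicView hacks.

```swift
import SwiftUI

// A custom container (iOS 18+) that reads the subviews of whatever
// content the caller passes in and interleaves dividers, the way
// system containers do.
struct DividedVStack<Content: View>: View {
    @ViewBuilder var content: Content

    var body: some View {
        Group(subviews: content) { subviews in
            VStack {
                ForEach(subviews.dropLast()) { subview in
                    subview
                    Divider()
                }
                subviews.last
            }
        }
    }
}

// Usage:
// DividedVStack {
//     Text("One")
//     Text("Two")
//     Text("Three")
// }
```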

This issue may prevent me from shipping my update unless it resolves itself.

On iOS 18, the view scrolls behind the nav bar, similar to the iOS 26 behavior. This should not happen.

It doesn't happen all the time, but I am unable to refactor the issue away. One build has the problem and the next won't, with no changes to the code.

Any pointers to resolve this would be greatly appreciated.

The joys and uncertainties of beta tools.

Updated introductory post:

Hiya! I'm Cris, a 40+ expat living in #costarica. My family moved here recently to escape the American nightmare hellscape. Struggling daily to learn Spanish and to function outside the US.

Married with 2 kids, 2 cats, and 2 dogs. I'm a knitter and gamer. I'm slowly teaching myself #SwiftUI in my spare time. Once I'm able to ship my bass from where it's stored in the US I’ll get back to learning that as well.

I play and run TTRPGs. Mostly D&D 5e, but I'm always looking for opportunities to try out other games and systems. I prefer my video games solo and story-driven. I know I'm bad at them. I don't need any witnesses to that fact 😂

When I'm not gaming or working, I’m usually knitting while watching something sci-fi or horror. Trying my best to get out of the house and explore my new country, but the backlog is vast.

Topics of interest:
#tech
#apple
#knitting
#gaming
#ttrpg
#books
#writing
#SciFi
#StarTrek
#coffee

Apple built an AI that taught itself to program. An impressive result, given that it barely knew the language at the start

A group of researchers from Apple and Carnegie Mellon University has published a paper that may herald a revolution in how artificial intelligence writes software.

In it they describe a novel method by which an AI model, starting almost from zero, taught itself to write high-quality, working user interface code in SwiftUI. The results are striking: the resulting model, named UICoder, matches, and in some respects even surpasses, a giant like GPT-4.

How do you teach an AI to program interfaces?

Large language models (LLMs) have a fundamental problem generating good user interface (UI) code. The reason is simple: their gigantic training datasets contain very few high-quality, complete examples of such code. Instead of hunting for more examples or relying on expensive human feedback, the Apple researchers decided the AI would teach itself, by trial and error, with the help of automated reviewers.

The process worked as follows (a sketch of the review stage follows the list):

  • Starting point: an open-source AI model specialized in coding, StarChat-Beta, was chosen.
  • Generation: it was asked to generate an enormous number (nearly a million) of SwiftUI programs from textual descriptions of interfaces.
  • Automated review: each generated program went through a strict, three-stage evaluation:
    • Compiler: does the code work and compile at all? If not, it's discarded.
    • Vision model (CLIP): does the interface produced by compiling the code actually look like the original description? If not, discarded.
    • Duplicate filter: is the program too similar to thousands of others? If so, discarded, to avoid monotony in the data.
  • Training on the best: the programs that survived this selection formed a new, elite dataset on which the original AI model was retrained (fine-tuned).
  • Repeat: the whole process was run five times. With each iteration the AI got better, generating higher-quality code, which in turn produced an even better dataset for the next round of training.
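
As a rough illustration of that review stage, here is a sketch in Swift (my own code, not the paper's; the three check functions are hypothetical stand-ins for the compiler run, the CLIP comparison, and the similarity filter):

```swift
// Sketch of the three-stage automated review described above.
struct Candidate {
    let prompt: String // textual description of the interface
    let code: String   // generated SwiftUI program
}

// Hypothetical stubs; the paper uses the Swift compiler, a CLIP model
// on the rendered UI, and a similarity filter respectively.
func compiles(_ code: String) -> Bool { true }
func matchesPrompt(_ code: String, _ prompt: String) -> Bool { true }
func isNearDuplicate(_ a: String, _ b: String) -> Bool { false }

func selectTrainingSet(from candidates: [Candidate]) -> [Candidate] {
    var kept: [Candidate] = []
    for candidate in candidates {
        // Stage 1: the code must compile at all.
        guard compiles(candidate.code) else { continue }
        // Stage 2: the compiled UI must match its description.
        guard matchesPrompt(candidate.code, candidate.prompt) else { continue }
        // Stage 3: drop near-duplicates to keep the dataset varied.
        guard !kept.contains(where: { isNearDuplicate(candidate.code, $0.code) })
        else { continue }
        kept.append(candidate)
    }
    return kept // the survivors become the next fine-tuning dataset
}
```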

Better than open source, closing in on GPT-4

The result of this process is UICoder, a model that outclassed the other open code-generation models in testing. What's more, in benchmark comparisons it approached the results of OpenAI's powerful closed models.

The rate of generating correct, compiling code for UICoder-Top (one of the variants) was 82%, narrowly beating GPT-4 (81%) on this task.

The biggest surprise: it learned almost from scratch

The most fascinating part of the whole experiment is something the researchers discovered more or less by accident. It turned out that the base model, StarChat-Beta, had been trained on a dataset from which repositories containing Swift and SwiftUI code had been excluded by mistake (!). This means the model that started the experiment had seen practically no good code in the language. It learned entirely from scratch, contrary to the original intent of the designers, who had assumed they were starting from a model pretrained on external, high-quality data.

This means UICoder did not learn to program by reproducing thousands of previously seen examples, but genuinely "understood" the rules and logic of SwiftUI through trial, error, and automated verification. That demonstrates the remarkable effectiveness of the method and suggests it could just as well be applied to learning any other programming language. While this is still a research paper, it gives a fascinating glimpse of how Apple might one day build AI tools that revolutionize app development for its platforms. As always, for those interested, there is the link to the full paper, "UICoder: Finetuning Large Language Models to Generate User Interface Code through Automated Feedback", published on arXiv.

#AI #Apple #research

I’ve been exploring Core Spotlight APIs for showing content in Spotlight search and experimenting with using the same search index to power search inside the app. I wrote a detailed post on how to implement this in a SwiftUI app: nilcoalescing.com/blog/CoreSpo
#iOSDev #SwiftUI #Swift

Nil Coalescing: Core Spotlight integration for Spotlight and internal app search. Use a shared Core Spotlight search index to make content discoverable in system Spotlight and support internal search within the app.
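
A minimal sketch of the indexing side (my own code, not the post's; the function and identifiers are hypothetical):

```swift
import CoreSpotlight
import UniformTypeIdentifiers

// Index one piece of app content so it appears in system Spotlight.
// The same shared index can then back the app's internal search.
func indexNote(id: String, title: String, text: String) {
    let attributes = CSSearchableItemAttributeSet(contentType: .text)
    attributes.title = title
    attributes.contentDescription = text

    let item = CSSearchableItem(
        uniqueIdentifier: id,      // stable per-item id, used for updates and deletes
        domainIdentifier: "notes", // hypothetical group id for batch removal
        attributeSet: attributes
    )

    CSSearchableIndex.default().indexSearchableItems([item]) { error in
        if let error { print("Spotlight indexing failed: \(error)") }
    }
}
```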

I have found when working with Swift Charts that it is extremely important to ensure the view whose body contains the chart is never re-evaluated unless the chart data itself changes.

So if you have things like duration toggles, scroll bindings, etc., move them all into view modifiers that read and write from the Observable class accessed through the environment.

Without this, the Chart will be re-evaluated constantly as you scroll, making for very slow chart interaction.
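
A sketch of that structure (my own names, assuming the Observation framework): the Chart lives in a small leaf view whose body depends only on the data array, while the toggle reads its state from the shared model outside that body.

```swift
import SwiftUI
import Charts
import Observation

@Observable final class ChartModel {
    var points: [Double] = [1, 3, 2, 5]
    var showAverage = false // UI state; deliberately not read inside ChartBody
}

// Leaf view: its body depends only on the data, so flipping toggles
// elsewhere doesn't re-evaluate the (expensive) Chart.
struct ChartBody: View {
    let points: [Double]

    var body: some View {
        Chart {
            ForEach(points.indices, id: \.self) { i in
                LineMark(x: .value("Index", i), y: .value("Value", points[i]))
            }
        }
    }
}

struct ChartScreen: View {
    @Environment(ChartModel.self) private var model

    var body: some View {
        @Bindable var model = model
        ChartBody(points: model.points)
            .safeAreaInset(edge: .bottom) {
                // Lives outside ChartBody, so toggling it doesn't touch the chart.
                Toggle("Show average", isOn: $model.showAverage)
            }
    }
}
```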

Replied in thread

You cannot make two views in different hierarchies the same size. Containment-based layout is good enough.

The #SwiftUI View Body instrument only shows body executions, despite a lot of work happening in other parts of the code. Leaf views like Text and container views like V/HStack also don’t have body getters, so they don’t show up either. Having view bodies from your own named views show up, plus *some* SwiftUI views, is good enough. (Note: the SwiftUI instrument in Instruments 26 was *significantly* improved. I believe it fixes some of these issues but have not formed an opinion on it yet.)

A SwiftUI Form view in the grouped style is embedded into its own scroll view; in other styles it isn’t. There is no way to get rid of the scroll view around a grouped form to e.g. embed it in your own scroll view with additional UI. Having Form only work well if it’s filling the whole content of the current container is good enough.

Being able to completely disable interactive dismiss of sheets is good enough. There is no need to have a delegate callback (like in UIKit) that would allow putting up a confirmation dialog if data would be lost by „swipe to dismiss“. You can either turn it on or off, that’s it.
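
For reference, the all-or-nothing API in question (a sketch; the view and state names are hypothetical):

```swift
import SwiftUI

struct DocumentHost: View {
    @State private var showingEditor = false
    @State private var hasUnsavedChanges = true // hypothetical dirty flag

    var body: some View {
        Button("Edit") { showingEditor = true }
            .sheet(isPresented: $showingEditor) {
                Text("Editor")
                    // The swipe either works or it doesn't; there is no
                    // delegate-style hook to put up a confirmation dialog
                    // at the moment of the swipe, as UIKit offers.
                    .interactiveDismissDisabled(hasUnsavedChanges)
            }
    }
}
```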

Applying a background modifier to a Grid does not actually set the background of the grid but only of all the individual grid elements. That’s good enough.

Our design toolkits for Figma and Sketch still contain the default cell styles from UIKit (single label, title + subtitle label, label + value), but there isn’t actually any way to easily create these in SwiftUI. It’s good enough to have devs work with designers to figure out the precise measurements, fonts, etc. in Figma to try to replicate what‘s sold to designers as a default component in SwiftUI.

Who needs a target-action-first-responder pattern in menus? Action closures should be good enough for everyone. If you really want to, you can just call NSApplication to invoke that responder chain manually from the closure.

Continued thread

The more I think about it, everything about #SwiftUI and around it feels like „good enough“ (admittedly at a fairly high level of „good enough“, but decidedly *not* excellent).

The View Debugger *still* doesn’t work with SwiftUI. Using colored backgrounds in code for debugging is good enough.

SwiftUI doesn’t have overview system documentation (how do I *think* about SwiftUI? what’s good architecture with SwiftUI? etc.). Having (admittedly mostly good and fairly complete) documentation for individual types is good enough.

In the official SwiftUI tutorials basically everything is about iOS. Only in the very last chapter is a Mac app added. It’s created as a separate target; despite Xcode having supported unified targets for iOS and macOS for several years now, there just wasn’t time to update it. The Mac app also only has about half the functionality of the iOS app, because it turns out about half the patterns used in the iOS app don’t actually work well on macOS. It’s good enough that we proved some of that code can run on macOS; we don’t need feature parity or to keep it up to date.

We updated the Earthquakes sample app from UIKit (and Core Data) to SwiftUI (and SwiftData). Originally it displayed a whole month of earthquakes. But that was too slow when using the new frameworks. So we’ll just change the sample app to show only the last 24 hours of earthquakes. That’s good enough to demonstrate the concepts.

SwiftUI contains *a lot* of symbols, which makes its symbols file really large. So we strip the symbols from the framework when shipping the OS, even though this means all SwiftUI frames in the debugger, in crash reports (maybe except for the ones coming from Apple itself), and in Instruments will be unsymbolicated. We do this to save a few megabytes of OS size on disk. Stripping the symbols and giving developers only unsymbolicated frames is good enough. (Note: this was fixed in iOS 18 and aligned releases, but I’m still bitter about it.)

I think I just figured out why I (and I think several other #iOS developers who have used #UIKit before) have developed somewhat of an animosity against #SwiftUI:

SwiftUI makes simple things really simple. It also makes some very specific complex things simple.

But despite the theoretically really high customizability (it’s all custom views with lots of modifiers), which at first glance is much higher than UIKit’s, getting things *just right* and creating a solution that feels just *excellent* is really hard. And by now I’m convinced that creating excellent solutions that really fit in well with the OS and offer a great, frictionless UX to people using your app is *harder* in SwiftUI.

However, creating a solution that works and is good enough is *easier* in SwiftUI. With it you fairly quickly arrive at a solution where it’s hard to argue that the small pieces of friction, the slight irregularities in the UI, the bits where people can accidentally „hold it wrong“, should be removed.

I believe these bits of friction occur more often in SwiftUI and are harder to remove than in UIKit.

Add to that the higher initial cost of getting a working solution in UIKit at all and this *strongly* tips the balance in favor of „good enough“ UX when using SwiftUI, and away from excellent UX.

And I hate that about SwiftUI.

Submitted Cartographer to App Review. Very curious what they will flag.

Thanks to everyone who participated in the TestFlight process. Not everyone provided feedback, but just launching the app and not having crashes reported helps.

Not looking forward to the anti-subscription trolls and the “why doesn't it do X?” reviews, but I think this is a good #MacOS app. I have a long roadmap of things to add, and this is a solid v1.0.

spcartographer.app

Microsoft is going to release the Windows 11 UI framework as open source.

Judging by this Hacker News discussion, it looks like the whole WinUI environment is a shit show.
It also feels like Microsoft is abandoning it and hopes to find people who will maintain it for free so it can lay off some more people (and focus on Azure and AI).

And I’m complaining about SwiftUI for macOS (which is actually pretty good, if you ask me).
#windows #opensource #win11 #swiftui

news.ycombinator.com/item?id=4

news.ycombinator.com: Microsoft is open sourcing Windows 11's UI framework (Hacker News)