#metrics


Good news! After some fiddling and understanding the repository layout I finally have working OpenTelemetry Collector packages for @opensuse!

Packages for the "core" or "classic" collector, the contrib distribution and the otlp distribution are working fine in my tests and have been submitted to the server:monitoring devel project. This includes the packages required to build them.

Here is a vagrant-libvirt setup to play around with the packages (three branches currently).

codeberg.org/johanneskastl/ope
github.com/johanneskastl/opent

Once I find some information on how to use the ebpf-profiler distribution, I will test that package and add a branch for it.
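For a quick smoke test of any of the packaged collectors, a minimal configuration along these lines should work (a sketch, not the packaged default: the otlp receiver and debug exporter are standard components of the core and contrib distributions, but endpoints and file locations will depend on the packaging):

```yaml
# Minimal sketch of an OpenTelemetry Collector config for smoke-testing;
# assumes the standard otlp receiver and debug exporter are compiled in.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

exporters:
  debug:
    verbosity: detailed

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [debug]
```

With this, anything sent to the gRPC OTLP endpoint on port 4317 is dumped to the collector's log, which is enough to confirm the package starts and receives data.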

Codeberg.org · opentelemetry-collector_opensuse_vagrant_libvirt_ansible: Vagrant-libvirt setup that creates a VM with the OpenTelemetry Collector (using the packages I created for openSUSE)

A simple metric from #FDroid #metrics data: app downloads per week. Start with the data from one of the two servers behind f-droid.org (http02) and add up the hits for paths ending in ".apk"; that gave about 2 million. Multiply by 18 (fronters + mirrors) and you get roughly 36 million app downloads a week.

import requests

# Metrics for one of the two servers behind f-droid.org (http02); the URL
# in the original post is truncated, so adjust the path as needed.
r = requests.get('https://fdroid.gitlab.io/metrics/http0')
data = r.json()

hits = 0
for path in data['paths']:
    if path.endswith('.apk'):
        hits += data['paths'][path]['hits']
print('APKs', hits)

# Rough weekly total: scale by the 18 fronters + mirrors, as described above.
print('estimated downloads/week', hits * 18)

forum.f-droid.org/t/experiment

A Comprehensive Framework For Evaluating The Quality Of Street View Imagery
--
doi.org/10.1016/j.jag.2022.103 <-- shared paper
--
“HIGHLIGHTS
• [They] propose the first comprehensive quality framework for street view imagery.
• Framework comprises 48 quality elements and may be applied to other image datasets.
• [They] implement partial evaluation for data in 9 cities, exposing varying quality.
• The implementation is released open-source and can be applied to other locations.
• [They] provide an overdue definition of street view imagery..."
#GIS #spatial #mapping #streetlevelimagery #Crowdsourcing #QualityAssessmentFramework #Heterogeneity #imagery #dataquality #metrics #QA #urban #cities #remotesensing #spatialanalysis #StreetView #Google #Mapillary #KartaView #commercial #crowdsourced #opendata #consistency #standards #specifications #metadata #accuracy #precision #spatiotemporal #terrestrial #assessment

What's going on when some #universities jump more than 950% in one year on #metrics used in university #rankings? Are they gaming the metrics?
biorxiv.org/content/10.1101/20

"Key findings include publication growth of up to 965%, concentrated in STEM fields; surges in hyper-prolific authors and highly cited articles; and dense internal co-authorship and citation clusters. The group [of studied institutions] also exhibited elevated shares of publications in delisted journals and high retraction rates. These patterns illustrate vulnerabilities in global ranking systems, as metrics lose meaning when treated as targets (Goodhart’s Law) and institutions emulate high-performing peers under competitive pressure (institutional isomorphism). Without reform, rankings may continue incentivizing behaviors that distort scholarly contribution and compromise research integrity."
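To put the headline growth figure in concrete terms, here is a small illustration in Python (the paper counts are invented for the arithmetic only and are not figures from the study):

```python
# Illustration only: what "publication growth of up to 965%" means numerically.
# These counts are hypothetical, not taken from the study.
papers_2018_2019 = 1_000
papers_2023_2024 = 10_650

growth_pct = (papers_2023_2024 - papers_2018_2019) / papers_2018_2019 * 100
print(f"growth: {growth_pct:.0f}%")  # growth: 965%
```

In other words, a 965% increase means output roughly 10.65 times the earlier period's, which is the scale of change that flags an institution as anomalous here.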

#Academia
@academicchatter

bioRxiv · Gaming the Metrics? Bibliometric Anomalies and the Integrity Crisis in Global University Rankings

Global university rankings have transformed how certain institutions define success, often elevating metrics over meaning. This study examines universities with rapid research growth that suggests metric-driven behaviors. Among the 1,000 most-publishing institutions, 98 showed extreme output increases between 2018-2019 and 2023-2024. Of these, 18 were selected for exhibiting sharp declines in first and corresponding authorship. Compared to national, regional, and international norms, these universities (in India, Lebanon, Saudi Arabia, and the United Arab Emirates) display patterns consistent with strategic metric optimization. The key findings and conclusions are as quoted in the post above.

Competing Interest Statement: The author declares that he is affiliated with a university that is a peer institution to one of the universities included in the study group.

New study: #ChatGPT is not very good at predicting the #reproducibility of a research article from its methods section.
link.springer.com/article/10.1

PS: Five years ago, I asked this question on Twitter/X: "If a successful replication boosts the credibility of a research article, then does a prediction of a successful replication, from an honest prediction market, do the same, even to a small degree?"
x.com/petersuber/status/125952

What if #LLMs eventually make these predictions better than prediction markets? Will research #assessment committees (notoriously inclined to resort to simplistic #metrics) start to rely on LLM replication or reproducibility predictions?

SpringerLink · ChatGPT struggles to recognize reproducible science - Knowledge and Information Systems

The quality of answers provided by ChatGPT matters, with over 100 million users and approximately 1 billion monthly website visits. Large language models have the potential to drive scientific breakthroughs by processing vast amounts of information in seconds and learning from data at a scale and speed unattainable by humans, but recognizing reproducibility, a core aspect of high-quality science, remains a challenge. Our study investigates the effectiveness of ChatGPT (GPT-3.5) in evaluating scientific reproducibility, a critical and underexplored topic, by analyzing the methods sections of 158 research articles. In our methodology, we asked ChatGPT, through a structured prompt, to predict the reproducibility of a scientific article based on the extracted text from its methods section. The findings of our study reveal significant limitations: Out of the assessed articles, only 18 (11.4%) were accurately classified, while 29 (18.4%) were misclassified, and 111 (70.3%) faced challenges in interpreting key methodological details that influence reproducibility. Future advancements should ensure consistent answers for similar or same prompts, improve reasoning for analyzing technical, jargon-heavy text, and enhance transparency in decision-making. Additionally, we suggest the development of a dedicated benchmark to systematically evaluate how well AI models can assess the reproducibility of scientific articles. This study highlights the continued need for human expertise and the risks of uncritical reliance on AI.
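The proportions in the abstract can be checked directly from the raw counts (a quick sketch; the counts come from the abstract itself):

```python
# Verify the reported shares of the 158 assessed articles.
total = 158
counts = {
    "accurately classified": 18,      # reported as 11.4%
    "misclassified": 29,              # reported as 18.4%
    "interpretation challenges": 111, # reported as 70.3%
}

for label, n in counts.items():
    print(f"{label}: {n}/{total} = {n / total:.1%}")
```

The three categories sum to all 158 articles, and each share matches the abstract's rounded percentage.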

Creatividad Más Allá de las Métricas / Creativity Beyond Metrics

linkedin.com/posts/federicoant

📷 Rob Mieremet / Anefo

LinkedIn · Federico Antin: Creatividad Más Allá de las Métricas / Creativity Beyond Metrics

I wonder how many people will recognize the "gentleman" in the photograph, a fundamental figure in the history of advertising, a creative mind beyond creativity itself, the brilliant David Ogilvy. And I remember him today in particular, as the week comes to a close, though I often reflect on his work and legacy, because I'm seeing some troubling trends. Lots of KPIs, OKRs, and various other measurements, but on social media everything looks the same: one post is a copy of the last, a reel repeats what we saw yesterday. A few days ago, I was reviewing insights related to different brands, and I wasn't seeing positive results, very few, whether for small brands or big ones. Have we forgotten how to create? To generate content that highlights differentiated value? To think strategically, with the discipline to follow through, without being scared off by rumors of what might happen? Metrics have their importance, but when they become the absolute core, that's when things go wrong. I invite everyone, tonight, as your head rests peacefully on your pillow, to consult my dear David.

Photo: Rob Mieremet / Anefo.

#DavidOgilvy #publicidad #advertising #marketing #creatividad #creativity #KPI #OKR #métricas #metrics #marcas #brands #branding #storytelling #estrategia #strategy #contenido #content