eupolicy.social is one of the many independent Mastodon servers you can use to participate in the fediverse.
This Mastodon server is a friendly and respectful discussion space for people working in areas related to EU policy. When you request to create an account, please tell us something about yourself.

Server stats:

199 active users

#ImageAnalysis

0 posts · 0 participants · 0 posts today
Helmholtz Imaging<p>Sept 25: Learn to track objects over time and instances. With Carsten Rother (<span class="h-card" translate="no"><a href="https://xn--baw-joa.social/@uniheidelberg" class="u-url mention" rel="nofollow noopener" target="_blank">@<span>uniheidelberg</span></a></span>).</p><p>Register for the series 👉 <a href="https://bit.ly/6-image-processing-tasks" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="">bit.ly/6-image-processing-tasks</span><span class="invisible"></span></a></p><p><span class="h-card" translate="no"><a href="https://helmholtz.social/@helmholtz" class="u-url mention" rel="nofollow noopener" target="_blank">@<span>helmholtz</span></a></span><br><a href="https://helmholtz.social/tags/imaging" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>imaging</span></a> <a href="https://helmholtz.social/tags/Tracking" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Tracking</span></a> <a href="https://helmholtz.social/tags/ImageAnalysis" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ImageAnalysis</span></a></p>
Bruno C. Vellutini<p>Last week to apply to the Light-Sheet Image Analysis Workshop.</p><p>A five-day practical course on the processing and analysis of light-sheet microscopy imaging data. It will take place in Santiago, Chile, from January 5–9, 2026.</p><p>Deadline: August 8.</p><p>Learn more and apply here: <a href="https://lightsheetchile.cl/light-sheet-image-analysis-workshop-2026-2/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">lightsheetchile.cl/light-sheet</span><span class="invisible">-image-analysis-workshop-2026-2/</span></a></p><p><a href="https://biologists.social/tags/Microscopy" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Microscopy</span></a> <a href="https://biologists.social/tags/Lightsheet" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Lightsheet</span></a> <a href="https://biologists.social/tags/ImageProcessing" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ImageProcessing</span></a> <a href="https://biologists.social/tags/ImageAnalysis" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ImageAnalysis</span></a> <a href="https://biologists.social/tags/LatinAmerica" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>LatinAmerica</span></a> <a href="https://biologists.social/tags/GlobalSouth" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>GlobalSouth</span></a></p>
Aneesh Sathe<p><strong>AI: Explainable Enough</strong></p><p class="">They look really juicy, she said. I was sitting in a small room with a faint chemical smell, doing one of my first customer interviews. There is a sweet spot between going too deep and asserting a position. Good AI has to be just explainable enough to satisfy the user without overwhelming them with information. Luckily, I wasn’t new to the problem.&nbsp;</p><a href="https://aneeshsathe.com/wp-content/uploads/2025/07/image-from-rawpixel-id-3045306-jpeg.jpg" rel="nofollow noopener" target="_blank"></a>Nuthatcher atop Persimmons (ca. 1910) by Ohara Koson. Original from The Clark Art Institute. Digitally enhanced by rawpixel.<p>Coming from a microscopy and bio background with a strong inclination towards image analysis, I had picked up deep learning as a way to be lazy in the lab. Why bother figuring out features of interest when you can have a computer do it for you, was my angle. The issue was that in 2015 no biologist would accept any kind of deep learning analysis, and definitely not if you couldn’t explain the details.&nbsp;</p><p>What the domain expert user doesn’t want:<br>– How a convolutional neural network works. Confidence scores, loss, and AUC are all meaningless to a biologist, and also to a doctor.&nbsp;</p><p>What the domain expert desires:&nbsp;<br>– Help at the lowest level of detail that they care about.&nbsp;<br>– AI that identifies features A, B, and C, and tells them that when you see A, B, &amp; C it is likely to be disease X.&nbsp;</p><p>Most users don’t care how a deep learning model <em>really</em> works. So, if you start giving them details like the IoU score of the object detection bounding box, or whether it was YOLO or R-CNN that you used, their eyes will glaze over and you will never get a customer. Draw a bounding box, heat map, or outline with the predicted label, and stop there. It’s also bad to go to the other extreme. 
If the AI just states the diagnosis for the whole image, then the AI might be right, but the user does not get to participate in the process. Not to mention that regulatory risk goes way up.</p><p>This applies beyond images; consider LLMs. No one with any expertise likes a black box. Today, why do LLMs generate code instead of directly doing the thing that the programmer is asking them to do? It’s because the programmer wants to ensure that the code “works”, and they have the expertise to figure out if and when it goes wrong. It’s the same reason that vibe coding is great for prototyping but not for production, and why frequent readers can spot AI patterns, ahem, easily. So in a Betty Crocker cake mix kind of way, let the user add the egg.&nbsp;</p><p>Building explainable-enough AI takes immense effort. It is actually easier to train AI to diagnose the whole image, or to give exhaustive details. Generating high-quality data at that just-right level is very difficult and expensive. However, do it right and the effort pays off. The outcome is an <em>AI-Human causal prediction machine</em>, where the causes, i.e. the mid-level features, inform the user and build confidence in the final outcome. The deep learning part is still a black box, but the user doesn’t mind because you aid their thinking.&nbsp;</p><p>I’m excited by some new developments like <a href="https://rex-xai.readthedocs.io/en/stable/" rel="nofollow noopener" target="_blank">REX</a>, which retrofit causality onto standard deep learning models. With improvements in performance, user preferences for detail may change, but I suspect the need for AI to be explainable enough will remain. 
Perhaps we will even have custom labels like ‘juicy’.</p><p><a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://aneeshsathe.com/tag/ai/" target="_blank">#AI</a> <a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://aneeshsathe.com/tag/ai-adoption/" target="_blank">#AIAdoption</a> <a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://aneeshsathe.com/tag/ai-communication/" target="_blank">#AICommunication</a> <a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://aneeshsathe.com/tag/ai-explainability/" target="_blank">#AIExplainability</a> <a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://aneeshsathe.com/tag/ai-for-doctors/" target="_blank">#AIForDoctors</a> <a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://aneeshsathe.com/tag/ai-in-healthcare/" target="_blank">#AIInHealthcare</a> <a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://aneeshsathe.com/tag/ai-in-the-wild/" target="_blank">#AIInTheWild</a> <a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://aneeshsathe.com/tag/ai-product-design/" target="_blank">#AIProductDesign</a> <a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://aneeshsathe.com/tag/ai-ux/" target="_blank">#AIUX</a> <a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://aneeshsathe.com/tag/artificial-intelligence/" target="_blank">#artificialIntelligence</a> <a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://aneeshsathe.com/tag/betty-crocker-thinking/" target="_blank">#BettyCrockerThinking</a> <a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://aneeshsathe.com/tag/biomedical-ai/" target="_blank">#BiomedicalAI</a> <a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://aneeshsathe.com/tag/business/" target="_blank">#Business</a> <a rel="nofollow noopener" class="hashtag u-tag 
u-category" href="https://aneeshsathe.com/tag/causal-ai/" target="_blank">#CausalAI</a> <a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://aneeshsathe.com/tag/data-product-design/" target="_blank">#DataProductDesign</a> <a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://aneeshsathe.com/tag/deep-learning/" target="_blank">#DeepLearning</a> <a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://aneeshsathe.com/tag/explainable-ai/" target="_blank">#ExplainableAI</a> <a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://aneeshsathe.com/tag/human-ai-interaction/" target="_blank">#HumanAIInteraction</a> <a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://aneeshsathe.com/tag/image-analysis/" target="_blank">#ImageAnalysis</a> <a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://aneeshsathe.com/tag/llms/" target="_blank">#LLMs</a> <a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://aneeshsathe.com/tag/machine-learning-2/" target="_blank">#MachineLearning</a> <a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://aneeshsathe.com/tag/startup-lessons/" target="_blank">#StartupLessons</a> <a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://aneeshsathe.com/tag/statistics/" target="_blank">#statistics</a> <a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://aneeshsathe.com/tag/tech-metaphors/" target="_blank">#TechMetaphors</a> <a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://aneeshsathe.com/tag/tech-philosophy/" target="_blank">#techPhilosophy</a> <a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://aneeshsathe.com/tag/trust-in-ai/" target="_blank">#TrustInAI</a> <a rel="nofollow noopener" class="hashtag u-tag u-category" href="https://aneeshsathe.com/tag/user-centered-ai/" target="_blank">#UserCenteredAI</a> <a 
rel="nofollow noopener" class="hashtag u-tag u-category" href="https://aneeshsathe.com/tag/xai/" target="_blank">#XAI</a></p>
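The "explainable enough" idea in the post above can be made concrete with a minimal sketch. This is a hypothetical illustration, not code from the author: the `detection` dict, its keys, and the `explain` helper are all invented names. It shows the principle of surfacing mid-level features and the conclusion they support while keeping model internals (IoU scores, architecture choice) out of what the user sees.

```python
# Hypothetical sketch: present a result at the "explainable enough" level.
detection = {
    "features": ["A", "B", "C"],   # mid-level features the expert recognizes
    "diagnosis": "disease X",      # conclusion those features point to
    "iou": 0.87,                   # internal metric -- for engineers, not users
    "model": "YOLO",               # architecture detail -- also internal
}

def explain(d: dict) -> str:
    """Format the user-facing message: features first, then the call.
    Internal keys like 'iou' and 'model' are deliberately never shown."""
    feats = ", ".join(d["features"])
    return f"Identified features {feats}; together these suggest {d['diagnosis']}."

print(explain(detection))
# → Identified features A, B, C; together these suggest disease X.
```

The point of the design is what the function omits: the user participates by checking the named features against their own expertise, rather than being handed either raw scores or a bare verdict.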
Fabrizio Musacchio<p>To wrap this up: Both tools are easy to test. I highly recommend trying them on your own data to see what works best for your use case.</p><p>I’ll include <a href="https://sigmoid.social/tags/CellSeg3D" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>CellSeg3D</span></a> in our next <a href="https://sigmoid.social/tags/Napari" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Napari</span></a> <a href="https://sigmoid.social/tags/bioimage" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>bioimage</span></a> analysis course (<a href="https://www.fabriziomusacchio.com/teaching/teaching_bioimage_analysis/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">fabriziomusacchio.com/teaching</span><span class="invisible">/teaching_bioimage_analysis/</span></a>). Curious what impressions and feedback the students will share. 🧪🔍</p><p>What I really like about <span class="h-card" translate="no"><a href="https://fosstodon.org/@napari" class="u-url mention" rel="nofollow noopener" target="_blank">@<span>napari</span></a></span> is how well it integrates modern <a href="https://sigmoid.social/tags/Python" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Python</span></a> tools. Great to have such a flexible, evolving <a href="https://sigmoid.social/tags/opensource" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>opensource</span></a> platform for (bio) <a href="https://sigmoid.social/tags/imageanalysis" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>imageanalysis</span></a>! 👌</p>
Helmholtz Imaging<p>👏 Big congrats to Annika Reinke for winning the Hector Foundation Prize 2025 for Metrics Reloaded, setting new standards for AI in image analysis. </p><p>Learn more, explore the tool &amp; meet all awardees in a video 👉 <a href="https://helmholtz-imaging.de/news/hector-foundation-prize-for-annika-reinke/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">helmholtz-imaging.de/news/hect</span><span class="invisible">or-foundation-prize-for-annika-reinke/</span></a></p><p><a href="https://helmholtz.social/tags/helmholtz" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>helmholtz</span></a> <a href="https://helmholtz.social/tags/helmholtzimaging" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>helmholtzimaging</span></a> <a href="https://helmholtz.social/tags/imaging" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>imaging</span></a> <a href="https://helmholtz.social/tags/metrics" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>metrics</span></a> <a href="https://helmholtz.social/tags/metricsreloaded" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>metricsreloaded</span></a> <a href="https://helmholtz.social/tags/AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI</span></a> <a href="https://helmholtz.social/tags/imageanalysis" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>imageanalysis</span></a> </p><p><span class="h-card" translate="no"><a href="https://helmholtz.social/@association" class="u-url mention" rel="nofollow noopener" target="_blank">@<span>association</span></a></span> <span class="h-card" translate="no"><a href="https://helmholtz.social/@DKFZ" class="u-url mention" rel="nofollow noopener" target="_blank">@<span>DKFZ</span></a></span></p>
Helmholtz Imaging<p>Day 3 at <a href="https://helmholtz.social/tags/HIconference2025" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>HIconference2025</span></a> wrapped with exciting talks on <a href="https://helmholtz.social/tags/AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI</span></a> for <a href="https://helmholtz.social/tags/imageanalysis" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>imageanalysis</span></a>, data integration &amp; moonshot projects.</p><p>A big thank you to all speakers, chairs &amp; participants!</p><p>See you next year!</p><p><a href="https://helmholtz.social/tags/HelmholtzImaging" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>HelmholtzImaging</span></a> <a href="https://helmholtz.social/tags/imaging" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>imaging</span></a> <a href="https://helmholtz.social/tags/Helmholtz" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Helmholtz</span></a></p>
Karsten Schmidt<p>To avoid a massive OpenCV dependency for a current project I'm involved in, I ended up porting my own homemade, naive optical flow code from 2008 and just released it as a new package. Originally this was written for a gestural UI system for Nokia retail stores (prior to the Microsoft takeover), the package readme contains another short video showing the flow field being utilized to rotate a 3D cube:</p><p><a href="https://thi.ng/pixel-flow" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="">thi.ng/pixel-flow</span><span class="invisible"></span></a></p><p>I've also created a small new example project for testing with either webcam or videos:</p><p><a href="https://demo.thi.ng/umbrella/optical-flow/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">demo.thi.ng/umbrella/optical-f</span><span class="invisible">low/</span></a></p><p><a href="https://mastodon.thi.ng/tags/ThingUmbrella" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ThingUmbrella</span></a> <a href="https://mastodon.thi.ng/tags/OpticalFlow" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>OpticalFlow</span></a> <a href="https://mastodon.thi.ng/tags/ImageAnalysis" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ImageAnalysis</span></a> <a href="https://mastodon.thi.ng/tags/ComputerVision" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ComputerVision</span></a> <a href="https://mastodon.thi.ng/tags/TypeScript" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>TypeScript</span></a> <a href="https://mastodon.thi.ng/tags/JavaScript" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>JavaScript</span></a></p>
Helmholtz Imaging<p>Congratulations, Fabian, for winning the 2025 @leopoldina.org Prize for Young Scientists! 🎉 Fabian's being recognized for his work on AI-driven image analysis. His best known project? nnU-Net, an open-source deep learning framework. </p><p>More 👉 <a href="https://bit.ly/Leopoldina-prize" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="">bit.ly/Leopoldina-prize</span><span class="invisible"></span></a></p><p>Explore nnU-Net on CONNECT 👉 <a href="https://bit.ly/nnU-Net" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="">bit.ly/nnU-Net</span><span class="invisible"></span></a></p><p><a href="https://helmholtz.social/tags/HelmholtzImaging" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>HelmholtzImaging</span></a> <a href="https://helmholtz.social/tags/imaging" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>imaging</span></a> <a href="https://helmholtz.social/tags/imageanalysis" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>imageanalysis</span></a> <a href="https://helmholtz.social/tags/AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI</span></a><br><span class="h-card" translate="no"><a href="https://helmholtz.social/@association" class="u-url mention" rel="nofollow noopener" target="_blank">@<span>association</span></a></span> <span class="h-card" translate="no"><a href="https://helmholtz.social/@DKFZ" class="u-url mention" rel="nofollow noopener" target="_blank">@<span>DKFZ</span></a></span></p>
Moritz Negwer<p><span class="h-card" translate="no"><a href="https://biologists.social/@glyg" class="u-url mention" rel="nofollow noopener" target="_blank">@<span>glyg</span></a></span> awesome! Tagging this with <a href="https://mstdn.science/tags/fedijobs" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>fedijobs</span></a> <a href="https://mstdn.science/tags/getfedihired" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>getfedihired</span></a> <a href="https://mstdn.science/tags/ImageAnalysis" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ImageAnalysis</span></a> for better reach</p>
Moritz Negwer<p>New deep-learning cell detection pipeline for light-sheet mouse brain image stacks, with an interesting cell-coordinate clustering statistics approach:</p><p>A deep learning pipeline for three-dimensional brain-wide mapping of local neuronal ensembles in teravoxel light-sheet microscopy<br>Attarpour et al., Nature Methods 2025<br><a href="https://doi.org/10.1038/s41592-024-02583-1" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">doi.org/10.1038/s41592-024-025</span><span class="invisible">83-1</span></a></p><p>Code: <a href="https://github.com/AICONSlab/MIRACL" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="">github.com/AICONSlab/MIRACL</span><span class="invisible"></span></a></p><p>Documentation: <a href="https://miracl.readthedocs.io/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="">miracl.readthedocs.io/</span><span class="invisible"></span></a></p><p><a href="https://mstdn.science/tags/lightsheet" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>lightsheet</span></a> <a href="https://mstdn.science/tags/microscopy" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>microscopy</span></a> <a href="https://mstdn.science/tags/ImageAnalysis" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ImageAnalysis</span></a> <a href="https://mstdn.science/tags/neuroscience" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>neuroscience</span></a></p>
Helmholtz Imaging<p>🎉 Thrilled to announce that <a href="https://helmholtz.social/tags/MetricsReloaded" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>MetricsReloaded</span></a> is among the most cited papers of 2024 in Nature Methods, ranking 3rd among its peers! </p><p>This innovative framework, based on the “problem fingerprint,” helps researchers choose the right metrics and fosters a deeper understanding of validation methodologies </p><p>🔗 Dive into the details: <a href="https://bit.ly/3CckBNK" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="">bit.ly/3CckBNK</span><span class="invisible"></span></a></p><p><a href="https://helmholtz.social/tags/AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI</span></a> <a href="https://helmholtz.social/tags/imaging" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>imaging</span></a> <a href="https://helmholtz.social/tags/imageanalysis" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>imageanalysis</span></a> <a href="https://helmholtz.social/tags/validation" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>validation</span></a> <span class="h-card" translate="no"><a href="https://helmholtz.social/@DKFZ" class="u-url mention" rel="nofollow noopener" target="_blank">@<span>DKFZ</span></a></span></p><p>Image: Envato Elements</p>
Ferran Cardoso<p>Glad to join the Crick Bioimage Analysis Symposium for the second time (first in person!). Looking forward to two days of cool science, analyses, and technologies! <span class="h-card" translate="no"><a href="https://mstdn.science/@thecrick" class="u-url mention" rel="nofollow noopener" target="_blank">@<span>thecrick</span></a></span> </p><p><a href="https://fosstodon.org/tags/CBIAS2024" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>CBIAS2024</span></a> <a href="https://fosstodon.org/tags/ImageAnalysis" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ImageAnalysis</span></a> <a href="https://fosstodon.org/tags/Bioinformatics" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Bioinformatics</span></a> <a href="https://fosstodon.org/tags/napari" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>napari</span></a> <a href="https://fosstodon.org/tags/pathology" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>pathology</span></a> <a href="https://fosstodon.org/tags/pathomics" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>pathomics</span></a></p>
Moritz Negwer<p>New <a href="https://mstdn.science/tags/microglia" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>microglia</span></a> <a href="https://mstdn.science/tags/morphology" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>morphology</span></a> analysis pipeline. Looks useful! </p><p>MorphoCellSorter: An Andrews plot-based sorting approach to rank microglia according to their morphological features<br>Benkeder et al., reviewed preprint at eLife 2024<br><a href="https://doi.org/10.7554/eLife.101630.1" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="">doi.org/10.7554/eLife.101630.1</span><span class="invisible"></span></a></p><p>Code: <a href="https://github.com/Pascuallab/MorphCellSorter" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">github.com/Pascuallab/MorphCel</span><span class="invisible">lSorter</span></a> </p><p><a href="https://mstdn.science/tags/neuroscience" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>neuroscience</span></a> <a href="https://mstdn.science/tags/neuroinflammation" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>neuroinflammation</span></a> <a href="https://mstdn.science/tags/ImageAnalysis" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ImageAnalysis</span></a></p>
Vis Lab @ Khoury, Northeastern<p>Khoury vis member <span class="h-card" translate="no"><a href="https://vis.social/@racquel" class="u-url mention" rel="nofollow noopener" target="_blank">@<span>racquel</span></a></span> co-authored "Opening the Black Box of 3D Reconstruction Error Analysis with VECTOR", being presented Thursday at 10:15 in the short papers on text and multimedia:</p><p><a href="https://ieeevis.org/year/2024/program/paper_v-short-1144.html" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">ieeevis.org/year/2024/program/</span><span class="invisible">paper_v-short-1144.html</span></a><br><a href="https://arxiv.org/abs/2408.03503" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="">arxiv.org/abs/2408.03503</span><span class="invisible"></span></a></p><p><a href="https://vis.social/tags/IEEEVIS" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>IEEEVIS</span></a> <a href="https://vis.social/tags/3D" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>3D</span></a> <a href="https://vis.social/tags/ImageAnalysis" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ImageAnalysis</span></a> <a href="https://vis.social/tags/datavisualization" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>datavisualization</span></a> <a href="https://vis.social/tags/research" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>research</span></a></p>
Moritz Negwer<p>New tool for brain-slice mapping in mouse brains, looks useful: </p><p>ABBA, a novel tool for whole-brain mapping, reveals brain-wide differences in immediate early genes induction following learning<br>Chiaruttini et al., preprint at biorxiv 2024<br><a href="https://doi.org/10.1101/2024.09.06.611625" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">doi.org/10.1101/2024.09.06.611</span><span class="invisible">625</span></a> </p><p><a href="https://mstdn.science/tags/neuroscience" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>neuroscience</span></a> <a href="https://mstdn.science/tags/preprint" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>preprint</span></a> <a href="https://mstdn.science/tags/atlasmapping" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>atlasmapping</span></a> <a href="https://mstdn.science/tags/ImageAnalysis" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ImageAnalysis</span></a></p>
Debora Weber-Wulff<p>"What’s in a picture? Two decades of image manipulation awareness and action" - a great read by Mike Rossner on Retraction Watch about image manipulation:</p><p><a href="https://retractionwatch.com/2024/08/12/whats-in-a-picture-two-decades-of-image-manipulation-awareness-and-action/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">retractionwatch.com/2024/08/12</span><span class="invisible">/whats-in-a-picture-two-decades-of-image-manipulation-awareness-and-action/</span></a> </p><p><a href="https://fediscience.org/tags/AcademicIntegrity" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AcademicIntegrity</span></a> <a href="https://fediscience.org/tags/ImageAnalysis" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ImageAnalysis</span></a></p>
Moritz Negwer<p>This is a FIJI plugin that can analyze branched structures in a broad range of settings. They started with Microglia, but apparently it's broadly applicable (also works with neurons and even corals). Looks useful for analyzing 2D images. </p><p>AutoMorFi: Automated Whole-image Morphometry in Fiji/ImageJ for Diverse Image Analysis Needs<br>Bouadi ... Tuan Leng Tay, preprint at biorxiv 2024<br> ⁠<a href="https://www.biorxiv.org/content/10.1101/2024.07.26.605357v1.full" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">biorxiv.org/content/10.1101/20</span><span class="invisible">24.07.26.605357v1.full</span></a> </p><p><a href="https://mstdn.science/tags/neuroscience" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>neuroscience</span></a> <a href="https://mstdn.science/tags/microglia" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>microglia</span></a> <a href="https://mstdn.science/tags/imageanalysis" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>imageanalysis</span></a> <a href="https://mstdn.science/tags/microscopy" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>microscopy</span></a> <a href="https://mstdn.science/tags/FijiSc" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>FijiSc</span></a></p>
David Mason<p><span class="h-card" translate="no"><a href="https://mastodon.social/@brembs" class="u-url mention" rel="nofollow noopener" target="_blank">@<span>brembs</span></a></span> Very curious to know more! Presumably there is some live <a href="https://mas.to/tags/ImageAnalysis" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ImageAnalysis</span></a> going on to detect velocity and move the ball accordingly?</p>
Moritz Negwer<p>This 3D image stack deconvolution tool looks super useful for <a href="https://mstdn.science/tags/bioimageanalysis" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>bioimageanalysis</span></a> </p><p>Deconwolf enables high-performance deconvolution of widefield fluorescence microscopy images<br>Wernersson et al., Nature Methods 2024<br><a href="https://doi.org/10.1038/s41592-024-02294-7" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">doi.org/10.1038/s41592-024-022</span><span class="invisible">94-7</span></a></p><p>Github: <a href="https://github.com/elgw/deconwolf/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="">github.com/elgw/deconwolf/</span><span class="invisible"></span></a> <br>Program: <a href="https://deconwolf.fht.org/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="">deconwolf.fht.org/</span><span class="invisible"></span></a></p><p><a href="https://mstdn.science/tags/microscopy" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>microscopy</span></a> <a href="https://mstdn.science/tags/ImageAnalysis" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ImageAnalysis</span></a> <a href="https://mstdn.science/tags/deconvolution" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>deconvolution</span></a></p>
Moritz Negwer<p>Very excited that our whole-mouse-brain analysis pipeline - DELiVR - is published now at Nature Methods. With DELiVR, we built an open-source, easy-to-use pipeline for analyzing image stacks from cleared mouse brains. </p><p>Paper: <a href="https://www.nature.com/articles/s41592-024-02245-2" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">nature.com/articles/s41592-024</span><span class="invisible">-02245-2</span></a> <br>Code: <a href="https://github.com/erturklab/delivr_cfos" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">github.com/erturklab/delivr_cf</span><span class="invisible">os</span></a><br>Docker containers, test dataset, handbook: <a href="https://www.discotechnologies.org/DELiVR/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="">discotechnologies.org/DELiVR/</span><span class="invisible"></span></a></p><p><a href="https://mstdn.science/tags/neuroscience" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>neuroscience</span></a> <a href="https://mstdn.science/tags/tissueclearing" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>tissueclearing</span></a> <a href="https://mstdn.science/tags/lightsheet" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>lightsheet</span></a> <a href="https://mstdn.science/tags/imageanalysis" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>imageanalysis</span></a></p>