eupolicy.social is one of the many independent Mastodon servers you can use to participate in the fediverse.


#neuralnetworks

4 posts · 3 participants · 0 posts today

Dynamic Pricing with Machine Learning

Dynamic pricing refers to the practice of adjusting product or service prices in response to changing conditions. This could include shifts in demand, customer behavior, market trends, or even the time of day. While the concept has existed for decades—most notably in the airline and hospitality sectors—machine learning has brought a new level of precision and scale to...

ml-nn.eu/a1/84.html

ml-nn.eu · Dynamic Pricing with Machine Learning · Machine Learning & Neural Networks Blog
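The core idea behind ML-driven dynamic pricing (predict demand as a function of price, then pick the price that maximizes expected profit) can be sketched in a few lines. This is my own minimal illustration, not the blog's method: the constant-elasticity demand curve and every parameter value below are placeholders for what would, in practice, be a trained model over features like time of day, season, and inventory.

```python
import numpy as np

def predicted_demand(price, base=100.0, elasticity=1.8, ref_price=10.0):
    """Stand-in for a fitted ML demand model: units sold at a given price.
    Here, a simple constant-elasticity curve."""
    return base * (price / ref_price) ** (-elasticity)

def best_price(cost, grid=np.linspace(5.0, 30.0, 251)):
    """Scan a price grid and return the price with maximal expected profit."""
    profit = (grid - cost) * predicted_demand(grid)
    return grid[np.argmax(profit)]

print(round(best_price(cost=6.0), 2))  # -> 13.5, matching cost * e/(e-1) for elasticity e
```

A real system would re-fit the demand model continuously and rerun this optimization as conditions shift; the grid search is just the simplest possible stand-in for that pricing step.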

🧠 Neural networks can ace short-horizon predictions — but quietly fail at long-term stability.

A new paper dives deep into the hidden chaos lurking in multi-step forecasts:
⚠️ Tiny weight changes (as small as 0.001) can derail predictions
📉 Near-zero Lyapunov exponents don’t guarantee system stability
🔁 Short-horizon validation may miss critical vulnerabilities
🧪 Tools from chaos theory — like bifurcation diagrams and Lyapunov analysis — offer clearer diagnostics
🛠️ The authors propose a “pinning” technique to constrain output and control instability

Bottom line: local performance is no proxy for global reliability. If you care about long-horizon trust in AI predictions — especially in time-series, control, or scientific models — structural stability matters.

#AI #MachineLearning #NeuralNetworks #ChaosTheory #DeepLearning #ModelRobustness
sciencedirect.com/science/arti

www.sciencedirect.com · The butterfly effect in neural networks: Unveiling hyperbolic chaos through parameter sensitivity · Neural networks often excel in short-horizon tasks, but their long-term reliability is less assured. We demonstrate that even a minimal architecture, …
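To make the Lyapunov point concrete: the largest Lyapunov exponent of an iterated map can be estimated by tracking how fast a tiny perturbation grows, renormalizing it each step (the classic Benettin procedure). This is my own toy setup on a random recurrent tanh map, not the paper's model or code; a positive estimate indicates exponential divergence of nearby trajectories, exactly the failure mode short-horizon validation can miss.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32
g = 2.0                                   # gain > 1 pushes a random tanh map toward chaos
W = rng.normal(scale=g / np.sqrt(n), size=(n, n))

def step(x):
    return np.tanh(W @ x)                 # one iteration of the recurrent map

def largest_lyapunov(x0, n_steps=2000, eps=1e-8):
    """Benettin-style estimate: follow a tiny perturbation, renormalize it
    every step, and average the per-step log growth rate."""
    x = x0.copy()
    d = rng.normal(size=n)
    d *= eps / np.linalg.norm(d)
    log_growth = 0.0
    for _ in range(n_steps):
        x_pert = step(x + d)
        x = step(x)
        dist = np.linalg.norm(x_pert - x)
        log_growth += np.log(dist / eps)
        d = (x_pert - x) * (eps / dist)   # renormalize back to size eps
    return log_growth / n_steps

lam = largest_lyapunov(rng.normal(size=n))
```

The same diagnostic applies to a trained forecaster rolled out on its own predictions: replace `step` with the model's one-step map and check whether small input or weight perturbations grow or decay over the horizon you actually care about.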

If you’re into #AI then you understand the role #NeuralNetworks and #transformers play in ‘reasoning’ and predictive processing. The ‘hidden layers’ are where the AI magic happens. But are we getting the most out of current architectures? This new study offers insights into what may be the next step in #ArtificialIntelligence… the CONTINUOUS THOUGHT MACHINE.

tl;dr
Neurons in brains use timing and synchronization in the way that they compute. This property seems essential for the flexibility and adaptability of biological intelligence. Modern AI systems discard this fundamental property in favor of efficiency and simplicity. We found a way of bridging the gap between the existing powerful implementations and scalability of modern AI, and the biological plausibility paradigm where neuron timing matters. The results have been surprising and encouraging.

pub.sakana.ai/ctm/

pub.sakana.ai · Continuous Thought Machines · Introducing Continuous Thought Machines: a new kind of neural network model that unfolds and uses neural dynamics as a powerful representation for thought.

"Neurons in brains use timing and synchronization in the way that they compute. This property seems essential for the flexibility and adaptability of biological intelligence. Modern AI systems discard this fundamental property in favor of efficiency and simplicity. We found a way of bridging the gap between the existing powerful implementations and scalability of modern AI, and the biological plausibility paradigm where neuron timing matters. The results have been surprising and encouraging.
(...)
We introduce the Continuous Thought Machine (CTM), a novel neural network architecture designed to explicitly incorporate neural timing as a foundational element. Our contributions are as follows:

- We introduce a decoupled internal dimension, a novel approach to modeling the temporal evolution of neural activity. We view this dimension as that over which thought can unfold in an artificial neural system, hence the choice of nomenclature.

- We provide a mid-level abstraction for neurons, which we call neuron-level models (NLMs), where every neuron has its own internal weights that process a history of incoming signals (i.e., pre-activations) to activate (as opposed to a static ReLU, for example).

- We use neural synchronization directly as the latent representation with which the CTM observes (e.g., through an attention query) and predicts (e.g., via a projection to logits). This biologically-inspired design choice puts forward neural activity as the crucial element for any manifestation of intelligence the CTM might demonstrate."

pub.sakana.ai/ctm/

pub.sakana.ai · Continuous Thought Machines · Introducing Continuous Thought Machines: a new kind of neural network model that unfolds and uses neural dynamics as a powerful representation for thought.
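As a rough illustration of the two quoted ideas, here is a hypothetical numpy sketch (my own drastic simplification, not Sakana's implementation): each neuron applies its own private weights to its own recent pre-activation history (the neuron-level model idea), the internal "thought" dimension is unrolled for a number of steps, and the pairwise synchronization matrix of the resulting activity serves as the latent representation.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 16, 5                              # neurons, length of pre-activation history

# One private weight vector per neuron (instead of a shared pointwise ReLU).
w = rng.normal(scale=1 / np.sqrt(M), size=(N, M))

def nlm_step(hist, pre):
    """Push new pre-activations into a rolling buffer, then let each neuron
    process its own history with its own weights."""
    hist = np.roll(hist, -1, axis=1)
    hist[:, -1] = pre
    post = np.tanh((w * hist).sum(axis=1))
    return hist, post

# Unroll the decoupled internal dimension and record post-activations.
T = 50
hist = np.zeros((N, M))
trace = np.zeros((N, T))
for t in range(T):
    pre = rng.normal(size=N)              # stand-in for recurrent/attention input
    hist, trace[:, t] = nlm_step(hist, pre)

# Synchronization as the latent: pairwise inner products of neuron activity
# over the unrolled steps, flattened to a vector (used for queries/logits).
sync = trace @ trace.T / T                # (N, N), symmetric
latent = sync[np.triu_indices(N)]
print(latent.shape)                       # -> (136,) i.e. N*(N+1)//2 entries
```

The real CTM couples these pieces to attention over inputs and a learned update of the pre-activations; this sketch only shows why timing matters here: two neurons with identical average activity but different phase produce different synchronization entries, so the latent genuinely depends on *when* neurons fire, not just how much.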

Human consciousness is a ‘controlled hallucination,’ scientist says — and AI can never achieve it

popularmechanics.com/science/a

Popular Mechanics · Human Consciousness Is a ‘Controlled Hallucination,’ Scientist Says—And AI Can Never Achieve It · By Darren Orf