Machine Learning, 2025: Instructions for Staying Human (While Machines Learn)
Published on: 02/10/2025
I’ll start with myself: I’m afraid of automatism. Of the idea that reality becomes a conveyor corridor—everything optimized, everything predicted. But then I open my eyes: machine learning is not a corridor—it’s a luminous labyrinth. And we walk inside it, between breathing walls of data. This is where I choose where to stand: not neutral, not impersonal, but willing to get my hands dirty with code, words, ideas—without delegating everything. Because AI today is no longer just a statistical engine: it’s a magnetic field bending work, energy, law, trust.
The core hasn’t changed: data → models → feedback from reality. An athlete trains, falls, realigns the movement. The difference, in 2025, is the space where it all happens: not only in the cloud, but at the edges—on the smartphone, in the car, in the factory, on the wrist. Inference slips out of data centers and into the body of the device: latency shrinks, privacy breathes, decisions happen here, not “there.” Deny it and you’re blind: even the chip giants say it openly—the wave of on-device AI has already started. ([Business Insider][1])
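To make "decisions happen here, not there" concrete: a minimal sketch of local inference with ONNX Runtime, one of the common on-device runtimes. The model file and the input tensor name are my assumptions for illustration, not details from any vendor announcement.

```python
# A minimal sketch of on-device inference with ONNX Runtime.
# Assumptions: "model.onnx" is a small exported model whose input is named
# "input"; neither the file nor the name comes from the article.
import time

import numpy as np
import onnxruntime as ort

# Load the model once; everything below runs locally, no network round-trip.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

x = np.random.rand(1, 3, 224, 224).astype(np.float32)  # dummy image-shaped input

start = time.perf_counter()
outputs = session.run(None, {"input": x})  # inference happens on the device
latency_ms = (time.perf_counter() - start) * 1000

print(f"local inference latency: {latency_ms:.1f} ms")  # no data left the machine
```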
Agents, not just models
This year we gave the new creature a name: agentic AI. No longer systems that “complete” text, but agents that plan, decide, act. Multi-agent ecosystems speaking through shared protocols. Sounds like rhetoric? No, it’s infrastructure: Google is pushing Agent2Agent (A2A), Anthropic has launched the Model Context Protocol (MCP): common languages so agents can collaborate like flocks, not like isolated birds. And meanwhile, the Agent Payments Protocol (AP2) is emerging: signed mandates, transactions between agents and merchants. The web of pages is giving way to the agentic internet, a web of intentions. Whoever sets the standards sets the world’s habits. ([Google Developers Blog][2])
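What a “common language” looks like in practice: MCP frames tool use as plain JSON-RPC 2.0 messages. A sketch, where the envelope shape follows the protocol convention but the tool name and its arguments are hypothetical:

```python
# A sketch of the "common language" idea: MCP carries tool use as JSON-RPC 2.0.
# Only the envelope (jsonrpc / id / method / params) follows the protocol
# convention; the tool and arguments below are invented for illustration.
import json

tool_call = {
    "jsonrpc": "2.0",
    "id": 42,
    "method": "tools/call",          # MCP: ask a server to run a named tool
    "params": {
        "name": "search_invoices",   # hypothetical tool exposed by a server
        "arguments": {"customer": "ACME", "quarter": "2025-Q3"},
    },
}

# Any MCP-speaking agent can emit this and any MCP server can parse it:
# the shared grammar, not the model weights, is what lets agents collaborate.
print(json.dumps(tool_call, indent=2))
```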
Uncomfortable question: are we ready to answer who is responsible when an agent fails? Management, legal teams, and IT are discovering that autonomy demands new governance. Guardrails aren’t enough: we need to rethink processes, accountability chains, delegation flows. The debate is not theoretical; it’s urgent and operational, as recent management reviews show. ([MIT Sloan Management Review][3])
Multimodal is the new literacy
AI no longer just reads words: it sees, listens, summarizes video, manipulates space. Foundation models are becoming infrastructure: like electricity, like water. But my favorite novelty is elsewhere: small models are now biting hard. In 2025, inference costs have dropped sharply, and small models hit benchmark scores that once required giants. Translation: AI that is more accessible, more customizable, more edge-ready. Not a detail, but technical democracy. ([HAI Stanford][4])
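What “edge-ready” means in numbers: weight memory is just parameter count times bytes per weight. A back-of-envelope sketch; the 3-billion-parameter figure is illustrative, not a model named in the report:

```python
# Back-of-envelope reading of "edge-ready": weight memory scales with
# parameter count times bytes per weight. The 3B figure is illustrative.
params = 3e9  # a compact ~3B-parameter model

bytes_per_weight = {"fp16": 2, "int8": 1, "int4": 0.5}

for fmt, b in bytes_per_weight.items():
    gib = params * b / 2**30
    print(f"{fmt}: ~{gib:.1f} GiB of weights")
# fp16: ~5.6 GiB, int8: ~2.8 GiB, int4: ~1.4 GiB -- int4 fits in a phone's RAM
```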
Energy: the heat we don’t see
Behind every generated line there is heat. And that heat has a price. Global projections are brutal: data center electricity demand is set to more than double by 2030, with AI as the hidden driver. While facilities swell, the industry rushes for cover: microfluidics embedded directly in silicon to vent the thermal inferno, tripling cooling efficiency. It’s engineering but also metaphor: if we want sustainable AI, we must bring water inside the chip, not just blow air around it. ([International Energy Agency][5])
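To feel the arithmetic, a deliberately rough sketch: assume a facility’s cooling adds 45% on top of its IT load (my assumption, not a figure from the article) and see what “tripled cooling efficiency” does to the power usage effectiveness (PUE):

```python
# Illustrative arithmetic only: none of these numbers come from the article.
# PUE = total facility power / IT power; cooling is the dominant overhead.
it_power_mw = 100.0          # assumed IT load of a large facility
cooling_overhead = 0.45      # assumed: cooling adds 45% on top of IT power
other_overhead = 0.05        # assumed: power delivery, lighting, etc.

pue_air = 1 + cooling_overhead + other_overhead
pue_microfluidic = 1 + cooling_overhead / 3 + other_overhead  # "tripled" cooling

saved_mw = it_power_mw * (pue_air - pue_microfluidic)
print(f"PUE: {pue_air:.2f} -> {pue_microfluidic:.2f}, saving ~{saved_mw:.0f} MW")
# PUE: 1.50 -> 1.20, ~30 MW saved per 100 MW of IT load (under these assumptions)
```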
Law: the perimeter that shapes
Europe didn’t just say “ethics”: it legislated. The AI Act is in force, and it applies in phases: as of February 2, 2025, the bans are active (social scoring, untargeted scraping of facial images to build recognition databases, criminal risk scoring based solely on profiling, and more); from August 2, 2025, the rules for GPAI models apply (transparency, training data summaries, safety measures); full enforcement lands in 2026, with extra time, until 2027, for high-risk AI embedded in regulated products. This isn’t bureaucracy; it’s civic design of AI. It means documenting, testing, making system actions traceable. It also means providers already have voluntary tools (the Guidelines and the Code of Practice for GPAI) to prove compliance today, not tomorrow. ([Digital Strategy][6])
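What “documenting, testing, traceable” might look like as a habit rather than a PDF: a compliance record sketched as a data structure. This is my illustration, not the EU’s official template; every field name is hypothetical.

```python
# A hypothetical record for "documenting and making traceable" -- a sketch of
# the habit the AI Act asks for, not the official EU template.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class GPAIComplianceRecord:
    model_name: str
    provider: str
    release_date: date
    training_data_summary: str          # public summary required for GPAI models
    evaluations: list[str] = field(default_factory=list)   # safety tests run
    incident_log: list[str] = field(default_factory=list)  # post-release events


record = GPAIComplianceRecord(
    model_name="example-model",         # hypothetical
    provider="example-provider",
    release_date=date(2025, 8, 2),      # the day GPAI obligations kicked in
    training_data_summary="Public web text and licensed corpora (summary v1).",
    evaluations=["bias audit", "jailbreak red-team"],
)
print(record.model_name, "documented:", bool(record.training_data_summary))
```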
So the question: do we prefer powerful AI or reliable AI? It’s not either/or. 2025 shows us that power and reliability must grow together—or both collapse.
Living privacy: from federated to the right to be unlearned
One word returns: dignity. It means training without spying. Federated learning keeps maturing across the cloud-edge-device continuum, balancing heterogeneity and security; and the idea of machine unlearning is taking shape: the right to ask a model to forget specific traces. Hard? Yes. But these are the kinds of challenges that define a technological civilization. ([MDPI][7])
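Both ideas fit in a few lines of NumPy. A toy sketch: clients ship updates instead of data (federated averaging), and the crudest possible unlearning re-aggregates without one client. Real systems add secure aggregation and far subtler forgetting.

```python
# A toy sketch of this paragraph's two ideas, in plain NumPy:
# federated averaging (train without centralizing data) and a naive form of
# unlearning (recompute the aggregate without one client's contribution).
import numpy as np

rng = np.random.default_rng(0)

# Each "client" holds its own data and ships only a model update, never the data.
client_updates = {f"client_{i}": rng.normal(size=4) for i in range(5)}

def fed_avg(updates):
    """Server-side step: average the updates it received."""
    return np.mean(list(updates.values()), axis=0)

global_model = fed_avg(client_updates)

# Naive unlearning: "forget" client_3 by re-aggregating without it.
forgotten = {k: v for k, v in client_updates.items() if k != "client_3"}
global_model_unlearned = fed_avg(forgotten)

print("with client_3:   ", np.round(global_model, 3))
print("without client_3:", np.round(global_model_unlearned, 3))
```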
Work: less apocalypse, more geography of change
No meteorite wipes out professions overnight. The most serious data shows a gradual transformation: work realigns, it doesn’t vanish. Yes, some entry-level tasks compress; yes, hybrid roles emerge, “human + agent.” But the 2025 picture doesn’t justify screaming headlines; it calls for training, reallocation, process design. Don’t reduce humans to sticky notes on the margins of automation. ([Financial Times][8])
Things to hold on to (today, not 2030)
- Design for the edge: low latency, privacy by default, compact models. Intelligence grows closer. ([Business Insider][1])
- Governable agents: open protocols (MCP, A2A), secure payments (AP2), auditable action trails (see the sketch after this list). Autonomy yes, irresponsibility no. ([Anthropic][9])
- Energy and thermals: if AI heats the planet, innovate cooling and shift inference where it makes sense. Otherwise efficiency is just an alibi. ([IIF][10])
- Compliance as creative practice: the AI Act is not a brake, it’s a method to build trust and market. ([Digital Strategy][6])
- Right to distance: federated learning and unlearning to ensure innovation doesn’t colonize our memory. ([MDPI][7])
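On the second point, a sketch of what an auditable action trail can be: an append-only, hash-chained log, where tampering with any entry breaks every hash after it. The agent names and actions are invented for illustration.

```python
# A sketch of "auditable action trails": an append-only, hash-chained log of
# agent actions. Agent and actions are hypothetical; the point is that editing
# any past entry invalidates every hash that follows it.
import hashlib
import json

trail: list[dict] = []

def log_action(agent: str, action: str, payload: dict) -> None:
    prev_hash = trail[-1]["hash"] if trail else "genesis"
    entry = {"agent": agent, "action": action, "payload": payload, "prev": prev_hash}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    trail.append(entry)

log_action("procurement-agent", "quote_requested", {"sku": "X-100", "qty": 3})
log_action("procurement-agent", "payment_mandate_signed", {"amount_eur": 420})

def verify(trail: list[dict]) -> bool:
    """Recompute every hash; any edit to an earlier entry is detected."""
    prev = "genesis"
    for e in trail:
        body = {k: v for k, v in e.items() if k != "hash"}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if body["prev"] != prev or recomputed != e["hash"]:
            return False
        prev = e["hash"]
    return True

print("trail intact:", verify(trail))
```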
Epilogue (declared)
2025 is chiaroscuro: data that illuminates, models that blind, agents that free our time and those that consume it. My position is clear: embrace power, demand care. What I reject is the sterile illusion—AI as magic wand or totalizing scarecrow. I prefer the workshop: dirty hands, clear criteria, iteration, responsibility. Because the real question is not “how intelligent will the machine become?” but “how intelligent will we remain in designing its space?”
If this text sounds like a rewrite “with the heart out of place,” that’s intentional. Machine learning is still what you once called it—an athlete learning, a tireless assistant—but today it runs inside networks of agents, inside laws that both bind and protect, inside silicon crossed by liquid veins. And it forces us to choose not just what to build, but how to inhabit it.