
AI Publications


The hidden risk in Agentic AI

Introduction


Today’s push to deploy autonomous, agentic AI systems may be repeating a well-known mistake from the early days of microservices. What was once heralded as a path to agility and scalability ended up producing brittle, interdependent systems — “distributed monoliths” — that proved fragile and hard to maintain.


Unless architects learn from that history, agentic AI risks falling into the same trap. The approach has to be different: instead of linking agents with synchronous, point-to-point calls, build them on an event-driven, loosely coupled architecture that enables resilience, scalability, observability, and long-term growth.
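To make the contrast concrete, here is a minimal sketch of the loosely coupled style, with an in-memory bus standing in for a real broker such as Kafka. All names here (EventBus, the agents, the topic strings) are illustrative, not any particular framework’s API:

```python
# Minimal sketch of loosely coupled agents on an event bus.
# The in-memory bus is a stand-in for a real broker (e.g. Kafka);
# every name below is illustrative.
from collections import defaultdict
from typing import Callable

class EventBus:
    def __init__(self):
        self._subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        # Agents never call each other directly; they only react to events.
        for handler in self._subscribers[topic]:
            handler(event)

bus = EventBus()

# A summarizer agent reacts to new documents and emits its own event.
def summarizer(event: dict) -> None:
    summary = event["text"][:40] + "..."
    bus.publish("summary.created", {"doc_id": event["doc_id"], "summary": summary})

# A reviewer agent consumes summaries without knowing who produced them.
def reviewer(event: dict) -> None:
    print(f"reviewing summary for doc {event['doc_id']}: {event['summary']}")

bus.subscribe("document.ingested", summarizer)
bus.subscribe("summary.created", reviewer)
bus.publish("document.ingested", {"doc_id": 1, "text": "Agentic AI risks repeating microservice mistakes."})
```

The point of the design is that either agent can be replaced, scaled out, or taken offline without the other one changing a line of code; they share only the event schema.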


The microservices déjà vu


When microservices first became popular, the idea was to break big monolithic applications into smaller, more modular services. Each service could evolve and scale independently. In theory this promised flexibility, faster deployment cycles, and easier maintainability. But in practice many teams implemented microservices with synchronous API calls — service A calling service B, which…
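The failure mode is easy to see in a sketch. In the hypothetical chain below (the services and endpoints do not exist; this is the anti-pattern, not a recommendation), each hop blocks on the next, so one slow or failing downstream service stalls the entire request:

```python
# Hypothetical sketch of the anti-pattern: synchronous, point-to-point calls.
import requests

def handle_order(order_id: str) -> dict:
    # Service A blocks on B, which blocks on C: a "distributed monolith".
    inventory = requests.get(f"http://inventory/check/{order_id}", timeout=2).json()
    payment = requests.post("http://payments/charge", json=inventory, timeout=2).json()
    # Any timeout or outage above surfaces as a failure of the whole request.
    return payment
```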



Google vs Nvidia – The Real Battle for the Future of AI Chips


Google (Alphabet Inc.) is finally taking the AI hardware fight directly to Nvidia with its new Ironwood TPUs and Axion CPUs, and the battle is less about raw speed and more about who controls the economics of AI. Google’s custom chips are designed to make running large AI models cheaper and more efficient at massive scale, challenging Nvidia’s position as the default choice for advanced AI computing.


1. What exactly is Google launching?


Google has introduced Ironwood, its latest generation of Tensor Processing Units (TPUs), along with Axion, its first custom Arm-based CPU for data centers. These chips power everything from training large models like Gemini to serving billions of AI queries across Google’s products and Google Cloud.


Unlike Nvidia’s GPUs, Google doesn’t sell these chips as standalone boards; you access them through Google Cloud as part of its tightly integrated infrastructure. That allows Google to optimize the…
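For a sense of what “access through Google Cloud” looks like in practice, here is a small sketch using JAX, one common way to program Cloud TPUs. It assumes a Cloud TPU VM with JAX installed; the same code falls back to GPU or CPU elsewhere, which is part of the appeal of renting the hardware behind a framework:

```python
# Sketch: TPUs are reached through a framework, not a board you own.
import jax
import jax.numpy as jnp

print(jax.devices())  # e.g. [TpuDevice(id=0), ...] on a TPU VM

@jax.jit  # compiled via XLA for whatever backend is present (TPU, GPU, CPU)
def matmul(a, b):
    return a @ b

a = jnp.ones((1024, 1024))
print(matmul(a, a).sum())
```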



Yes! Google’s new TPUs/CPUs feel like a real move to make big AI workloads cheaper, not just faster. Nvidia still dominates the flexible GPU space, but it’s cool to see big players starting to look around; even Meta seems to be eyeing a big TPU investment. Interesting shift in the AI chip world! https://www.reuters.com/business/meta-talks-spend-billions-googles-chips-information-reports-2025-11-25/?utm_source=chatgpt.com

Meet NEO: The First Real Home Robot Is Finally Here

For years, humanoid robots have lived in labs, research centers, and sci-fi movies. Now, with NEO, 1X is bringing the future directly into the home — and it might be the moment we look back on as the start of a new era.


NEO isn’t a gadget. It’s a household helper. Designed to move, see, understand, and learn, NEO is built to support everyday tasks: organizing spaces, handling chores, fetching items, and navigating the home with human-like motion. It’s not perfect, but it’s the closest we’ve ever been to a truly useful home robot.


Why NEO matters

  • It’s one of the first humanoid robots actually intended for real homes, not just labs.

  • It uses advanced vision-language AI (“Redwood”) to learn tasks and adapt to your environment.

  • It’s designed to be safe, soft, quiet, and able to operate around people.


JA Soler
Nov 23

Carlos, thanks for sharing this interesting post. NEO really feels like one of those before-and-after moments in consumer tech. What strikes me most is how humanoid robots are finally crossing the bridge from research to real homes, something we’ve been hearing about for decades but never actually seeing at scale.


The combination of embodied robotics + vision-language models is what makes this different. If NEO can genuinely learn tasks, adapt to its environment, and operate safely around people, we’re stepping into a new category — not “smart devices,” but smart companions.


Early 2026 might seem far away, but in robotics terms it’s basically tomorrow. And as you said, this feels very much like the early smartphone era: niche today, everywhere sooner than we expect.


From Words to Worlds: Spatial Intelligence is AI’s Next Frontier

🔗https://shre.ink/From-Words-to-Worlds


1. Introduction


Fei-Fei Li argues that we are moving beyond an era in which AI focuses just on words, and heading into one where it must master worlds — the spatial, physical, embodied, three-dimensional reality in which we live. She suggests that while large language models (LLMs) have made great strides in text, they lack a true grasp of space: distance, motion, geometry, the physics of objects and how they relate to one another. Without that, AI remains fundamentally limited. She proposes that the next major frontier of AI is what she calls spatial intelligence — the scaffolding of human cognition, built on perception, action and understanding of the physical world.


She emphasizes that this isn’t a niche add-on to existing systems, but a paradigm shift. Just as human intelligence evolved from sensing and moving in the world, so too must AI evolve from processing words to interacting with and reasoning…

