
Groups Feed




Davos Signals a Disciplined Era for AI in Banking and FinTech

🔗https://www.pymnts.com/news/banking/2026/davos-signals-a-disciplined-era-for-ai-in-banking-and-fintech/


The Davos discussion “Banking Accelerated” framed a clear shift in tone around AI in financial services: moving from experimentation and “speed” narratives toward disciplined deployment—where trust, resilience, collaboration, and enabling regulation determine who wins.


Leaders from RBC, PayPal, Commerzbank, BTG Pactual, and the Qatar Central Bank converged on the idea that AI is reshaping finance faster than any single institution can adapt alone, so the competitive game is now about earning and sustaining trust while scaling safely.


Frenemies in a Digital Value Chain


Banks and FinTechs are increasingly “frenemies”: they compete across payments, wallets, and commerce, yet depend on each other to innovate and scale.


RBC’s CEO emphasized that digitization is pushing banks to expand beyond pure transaction processing into earlier stages of customer intent—like discovery and decision-making—because staying “the last mile of payments” invites disintermediation by platforms that control devices, data, and customer interfaces.




The State of AI in 2026

As we enter 2026, the AI market is growing up. The big question isn’t “what can this model do?” anymore. It’s “Can we trust this system to run inside the messy reality of a business—under constraints, with real consequences, and with ROI we can actually measure?”


That shift matters because AI is moving from advice to action. In early deployments, mistakes were mostly annoying—wrong summaries, weak drafts, bad suggestions. In operational deployments, mistakes can become expensive, non-compliant, or reputation-damaging. The failure model changes, so the product requirements change too.


The new differentiator is operationality: how deeply AI is embedded into workflows that truly execute work. The most valuable AI products aren’t just chat interfaces—they’re systems that reduce coordination overhead, connect to existing tools, and reliably turn intent into multi-step outcomes.


This is why orchestration is booming. Instead of ripping out CRM, email, project tools, or finance systems, orchestration layers sit on…
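The post is cut off above, but the orchestration idea is concrete enough to sketch. Below is a minimal illustration, with entirely hypothetical tool names and adapters, of an orchestration layer that wraps existing systems (a CRM, email, a project tool) behind thin adapters and runs one intent as a multi-step plan. It is a sketch of the pattern, not any vendor's product.

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List

@dataclass
class Orchestrator:
    """Minimal orchestration layer: it does not replace existing systems,
    it registers thin adapters over them and runs multi-step plans."""
    adapters: Dict[str, Callable[..., Any]] = field(default_factory=dict)
    log: List[str] = field(default_factory=list)

    def register(self, name: str, adapter: Callable[..., Any]) -> None:
        self.adapters[name] = adapter

    def run(self, plan: List[dict]) -> List[Any]:
        results = []
        for step in plan:
            tool = self.adapters[step["tool"]]          # existing system, not a rewrite
            result = tool(**step.get("args", {}))
            self.log.append(f"{step['tool']}: {result}")
            results.append(result)
        return results

# Hypothetical adapters standing in for a CRM, email, and a project tool
def crm_lookup(customer: str) -> str:
    return f"record for {customer}"

def send_email(to: str, subject: str) -> str:
    return f"email '{subject}' sent to {to}"

def create_task(title: str) -> str:
    return f"task created: {title}"

orchestrator = Orchestrator()
orchestrator.register("crm.lookup", crm_lookup)
orchestrator.register("email.send", send_email)
orchestrator.register("project.create_task", create_task)

# One intent ("follow up with Acme") expressed as a multi-step plan
plan = [
    {"tool": "crm.lookup", "args": {"customer": "Acme"}},
    {"tool": "email.send", "args": {"to": "acme@example.com", "subject": "Renewal follow-up"}},
    {"tool": "project.create_task", "args": {"title": "Log Acme follow-up"}},
]
print(orchestrator.run(plan))
```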




Global AI adoption in 2025: a widening digital divide

🔗https://shre.ink/Microsoft-Global-AI-Adoption-2025


1. Executive Summary


In H2 2025, Microsoft estimates global “AI diffusion” (share of people using a gen-AI product) rose +1.2pp to 16.3% worldwide—about one in six people. Growth continues, but it’s not evenly distributed: adoption in the Global North reached 24.7% of working-age people, versus 14.1% in the Global South, widening the gap (from 9.8pp to 10.6pp).


The report argues the divide reflects differences in infrastructure, policy execution, skills, and product access. High-income countries keep accelerating, while many lower-income markets progress more slowly unless access barriers are reduced (e.g., via free tools or open-source distribution).


2. Changes in the second half of 2025


H2 2025 shows record usage growth, but the composition of that growth matters: all top 10 countries by adoption increase are high-income economies. This indicates that the “easy acceleration” is happening where citizens already have strong digital habits and where institutions can integrate AI into work and services quickly.




AI cold war

  1. DeepSeek has gained momentum in emerging markets

  2. Despite some limitations, a free, open-source model attracts many companies

  3. China also offers much cheaper energy costs than the US

  4. This results from long-term investments in energy production

  5. Despite the enormous amount of US investment, my opinion is that data centers in some sense lack economies of scale (each question to be answered is different)


FULL STORY OF DEEPSEEK:


JA Soler
Jan 18

Helcio, thank you for sharing. Your post is a sharp reminder that the AI race is no longer just about model quality — it’s about distribution, price, and geopolitics.


What Microsoft is highlighting here is uncomfortable but real: open(-ish) models + state subsidies + emerging-market focus is a powerful combo. DeepSeek didn’t “win” on raw capability alone; it won on accessibility and economics, especially where budgets, infrastructure, and energy costs matter most.


Meanwhile, US players (OpenAI, Google, Anthropic) have optimized for control, margins, and enterprise value — a rational strategy, but one that leaves space elsewhere. If you don’t show up with affordable, deployable options, someone else will.


The deeper issue isn’t “China vs the US”; it’s whether the Global South becomes a first-class participant in the AI economy or a downstream consumer of subsidised tech. If infrastructure, skills, and power costs aren’t addressed, the market will naturally gravitate to whoever can undercut on price — values come later.


This is less a warning about DeepSeek and more a warning about strategy blind spots. In AI, trust matters — but only if people can afford the product.


AI Takes Centre Stage at CES 2026: What You Need to Know

CES 2026 in Las Vegas has become a milestone event for artificial intelligence, signalling major shifts in how AI will be built, deployed and experienced. This year’s announcements show AI evolving from software on screens into powerful infrastructure, physical machines and everyday devices.

1. AI Compute Power Surges with Next-Gen Platforms

One of the biggest stories at CES was the launch of next-generation AI computing platforms. Nvidia revealed its Vera Rubin platform, a fully integrated system combining new CPUs, GPUs and networking to deliver much higher AI performance at lower cost. It promises significant improvements for running and training large models, helping companies reduce energy use and scale AI workloads more efficiently.

AMD introduced its Helios rack-scale architecture, offering enormous compute capacity for training trillion-parameter models. These advances enable cloud providers, research labs and enterprises to tackle AI challenges faster than ever.

Industry impact: With hardware barriers falling, more companies can access high-performance AI, widening…

JA Soler
Jan 08

Sara, thank you for sharing — a great overview of how CES 2026 confirms AI’s shift from “cool demos” to core infrastructure and real-world execution.


What stands out most is the convergence: compute at scale, physical AI/robotics, and embedded intelligence in everyday devices all maturing at the same time. That combination is what turns AI from a feature into a competitive moat.


The real differentiator now won’t be who has AI, but who can deploy it responsibly, integrate it into operations, and extract sustained business value. The next 24–36 months will be decisive for companies that get this right.


2025: the dawn of AI’s industrial age

2025 marks a turning point where AI starts to behave like an industrial technology rather than a purely “software novelty.” The focus shifts from flashy demos to reliable systems at scale, constrained by real-world bottlenecks: compute supply, energy availability, talent concentration, geopolitics, and regulation.


In this framing, competitive advantage comes from building an AI “factory”: inputs (data/compute), throughput (agents/workflows), quality control (evaluation/monitoring), and governance (safety/compliance).

Reasoning-first (“thinking”) models became standard this year. Instead of being prompted into step-by-step logic, many modern LLM systems increasingly embed reasoning strategies: planning, decomposition, tool use, and self-checking. The result is not just higher benchmark scores, but a wider set of tasks becoming practically solvable—with less prompt gymnastics and less human “glue” work to keep the model on track.
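To make that shift concrete, here is a minimal, vendor-neutral sketch of the plan / act / self-check loop the paragraph describes. The plan, act, and check functions are toy stand-ins, not any particular model's API.

```python
from typing import Callable, List

def solve(task: str,
          plan: Callable[[str], List[str]],
          act: Callable[[str], str],
          check: Callable[[str, str], bool],
          max_retries: int = 2) -> List[str]:
    """Generic reasoning loop: decompose the task, execute each step
    (possibly via tools), and self-check before moving on."""
    results = []
    for step in plan(task):                      # planning / decomposition
        attempt = act(step)                      # tool use or model call
        retries = 0
        while not check(step, attempt) and retries < max_retries:
            attempt = act(step)                  # self-correction loop
            retries += 1
        results.append(attempt)
    return results

# Toy stand-ins for the three capabilities described above
plan = lambda task: [f"{task}: gather data", f"{task}: draft answer"]
act = lambda step: f"done({step})"
check = lambda step, out: out.startswith("done")

print(solve("quarterly report", plan, act, check))
```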


The reasoning-first models have a clear industrial implication: when reasoning is standard, organizations can treat models less like autocomplete and more like junior operators that can execute…




Project Iceberg

🔗https://iceberg.mit.edu/


Introduction


Project Iceberg starts from a simple observation: when AI changes tasks inside jobs, the economic shockwaves don’t stay neatly inside “tech.” If AI automates quality control in a factory, the consequences can ripple through suppliers, logistics, and local service economies—yet most planning tools only notice the disruption after it shows up in employment or wage statistics.


The report argues that this is a measurement problem as much as a technology problem. Traditional labor metrics were built for a “human-only” economy; they track workers, wages, and outcomes, but not where AI capability overlaps with human skills before adoption takes off. That overlap is the early-warning signal policymakers and business leaders need when they’re committing billions to training, energy, and infrastructure.


To fill that gap, the team introduces a national-scale simulation framework (“Project Iceberg”) and a new KPI (“the Iceberg Index”) that estimates technical exposure: the share of wage value tied to skills…
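The definition above is truncated and the report's exact formula isn't reproduced here, but the general idea, technical exposure as the wage-weighted share of skills that overlap with AI capability, can be illustrated with a toy calculation (all numbers invented):

```python
# Toy illustration (not the report's actual formula): an exposure index as the
# share of total wage value attached to skills that overlap with AI capability.
occupations = [
    # (annual wage bill in $M, share of the job's skills AI can technically perform)
    {"name": "quality control", "wages": 120.0, "ai_overlap": 0.70},
    {"name": "logistics planning", "wages": 80.0, "ai_overlap": 0.40},
    {"name": "field maintenance", "wages": 200.0, "ai_overlap": 0.10},
]

exposed_wages = sum(o["wages"] * o["ai_overlap"] for o in occupations)
total_wages = sum(o["wages"] for o in occupations)
exposure_index = exposed_wages / total_wages   # wage-weighted technical exposure

print(f"exposure: {exposure_index:.1%}")       # 34.0% of wage value in this toy economy
```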




The hidden risk in Agentic AI

Introduction


Today’s push to deploy autonomous, agentic AI systems may be repeating a well-known mistake from the early days of microservices. What was once heralded as a path to agility and scalability ended up producing brittle, interdependent systems — “distributed monoliths” — that proved fragile and hard to maintain.


Unless architects learn from that history, agentic AI risks falling into the same trap. The approach has to be different: instead of linking agents with synchronous, point-to-point calls, build them on an event-driven, loosely coupled architecture that enables resilience, scalability, observability, and long-term growth.
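As a rough illustration of the difference, here is a minimal in-memory event bus in Python. It is only a sketch of the loosely coupled pattern (a production system would use a durable, asynchronous broker), and the agent and event names are hypothetical.

```python
from collections import defaultdict
from typing import Callable, Dict, List

class EventBus:
    """In-memory stand-in for a message broker: agents publish and subscribe
    to event types instead of calling each other directly."""
    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, event_type: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[event_type].append(handler)

    def publish(self, event_type: str, payload: dict) -> None:
        for handler in self._subscribers[event_type]:
            handler(payload)   # a real bus would deliver asynchronously and durably

bus = EventBus()

# Each agent only knows about events, not about the other agents
def research_agent(event: dict) -> None:
    bus.publish("research.completed", {"topic": event["topic"], "findings": "summary..."})

def drafting_agent(event: dict) -> None:
    print(f"drafting report on {event['topic']} from {event['findings']}")

bus.subscribe("task.created", research_agent)
bus.subscribe("research.completed", drafting_agent)

# Adding or removing an agent means changing subscriptions, not rewiring callers
bus.publish("task.created", {"topic": "Q1 churn analysis"})
```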


The microservices déjà vu


When microservices first became popular, the idea was to break big monolithic applications into smaller, more modular services. Each service could evolve and scale independently. In theory this promised flexibility, faster deployment cycles, and easier maintainability. But in practice many teams implemented microservices with synchronous API calls — service A calling service B, which…




Google vs Nvidia – The Real Battle for the Future of AI Chips


Google (Alphabet Inc.) is finally taking the AI hardware fight directly to Nvidia with its new Ironwood TPUs and Axion CPUs, and the battle is less about raw speed and more about who controls the economics of AI. Google’s custom chips are designed to make running large AI models cheaper and more efficient at massive scale, challenging Nvidia’s position as the default choice for advanced AI computing.


1. What exactly is Google launching?


Google has introduced Ironwood, its latest generation of Tensor Processing Units (TPUs), along with Axion, its first custom Arm-based CPU for data centers. These chips power everything from training large models like Gemini to serving billions of AI queries across Google’s products and Google Cloud.


Unlike Nvidia’s GPUs, Google doesn’t sell these chips as standalone boards; you access them through Google Cloud as part of its tightly integrated infrastructure. That allows Google to optimize the…


SARA RAMOS
Dec 01, 2025

Yes! Google’s new TPUs/CPUs feel like a real move to make big AI workloads cheaper, not just faster. Nvidia still dominates the flexible GPU space, but it’s cool to see big players starting to look around; even Meta seems to be eyeing a big TPU investment. Interesting shift in the AI chip world! https://www.reuters.com/business/meta-talks-spend-billions-googles-chips-information-reports-2025-11-25/?utm_source=chatgpt.com
