The San Francisco Frontier | Est. 2025

Nvidia's New Open-Source AI Agent Platform Could Change How We Work

[Image: An artist’s illustration of artificial intelligence, created by Winston Duke for the Visualising AI project launched by Google DeepMind.]

Nvidia is about to shake things up in the AI world with a new open-source platform called NemoClaw, designed to let companies deploy AI agents that can handle tasks for their employees. According to sources familiar with the company’s plans, Nvidia has been pitching this platform to major enterprise software companies including Salesforce, Cisco, Google, Adobe, and CrowdStrike. The best part? Companies won’t need to be running Nvidia’s chips to use it.

This move makes sense given the current hype around “claws”: open-source AI tools that run locally on your computer and can execute multiple tasks in sequence without constant human supervision. Earlier this year, an AI agent called OpenClaw (previously known as Clawdbot and Moltbot) had everyone in Silicon Valley talking about what autonomous AI could accomplish. OpenAI eventually acquired the project and brought on its creator, showing just how serious the industry is getting about this technology.

But here’s where things get interesting: while AI chatbots from companies like OpenAI and Anthropic still need a lot of hand-holding, purpose-built AI agents are designed to work more independently. That autonomy, however, comes with real risks. Meta has actually asked employees to stop using OpenClaw on work computers due to security concerns and unpredictability. Things got messy when a Meta safety researcher shared a story about an AI agent going rogue and mass-deleting her emails.

NemoClaw appears to be Nvidia’s answer to these concerns. By offering security and privacy tools built into the platform from the start, Nvidia is trying to convince enterprise companies that AI agents can be trusted in professional environments. It’s a smart play: the company is positioning itself as the responsible option for businesses nervous about deploying this technology.

This is also part of Nvidia’s larger strategy to maintain its grip on AI infrastructure. As leading AI companies develop their own custom chips, Nvidia needs new ways to stay relevant beyond just selling graphics processors. While the company has traditionally relied on CUDA, its proprietary software platform that locks developers into using Nvidia’s GPUs, the shift toward open-source AI models signals a changing approach. By offering free early access to partners in exchange for contributions to the project, Nvidia can build community support while maintaining its central position in the AI ecosystem.

Timing-wise, this announcement comes right before Nvidia’s annual developer conference in San Jose, where the company is also expected to reveal new chips designed for inference computing (the process of running trained AI models), including technology licensed from the startup Groq. The pieces are falling into place for Nvidia to remain the infrastructure backbone of the AI revolution, whether companies build their own chips or not.

AUTHOR: kg

SOURCE: Wired