
Two key AI announcements were made today that could benefit the channel and its end users.
First, infrastructure services firm Kyndryl has launched its Agentic AI Framework, designed to help organisations deploy agentic AI that augments human teams.
The enterprise-grade framework orchestrates and dispatches a portfolio of specialised, self-directed, self-learning AI agents that respond dynamically to shifting conditions while keeping humans in the loop for essential oversight.
Agents can be deployed on-premises, in the cloud or in a hybrid IT setting to help transform and improve business operations, said Kyndryl.
To deliver the offering, Kyndryl is drawing on thousands of infrastructure deployments and its experience of generating over 12 million AI-driven insights a month via its Kyndryl Bridge platform. The framework combines advanced algorithms, self-learning, optimisation and secure-by-design AI agents that translate complex data into “clear, understandable insights”, we are told.
“As customers worldwide adopt agentic AI to gain a competitive edge, it will increasingly impact the entire technology stack, including applications and business workflows. Kyndryl is positioned to provide a holistic, infrastructure-first perspective that enables customers to deploy AI with confidence across mission-critical systems with scalability and industry standard security,” said Ismail Amla, senior vice president, Kyndryl Consult.
In a second important AI announcement, JFrog has unveiled a new Model Context Protocol (MCP) Server. The architecture enables large language models (LLMs) and AI agents to securely interact with tools and data sources within the JFrog Platform directly from MCP clients, including popular agentic coding development environments and IDEs, “boosting developer productivity and streamlining workflows”, said the provider.
MCP is an open, industry-standard integration framework designed to connect AI systems with external tools, data and services. With JFrog’s MCP Server, developers can now use natural language commands such as “Create a new local repository” or “Do we have this package in our organisation?” to interact with the JFrog Platform directly from their IDE or AI assistant.
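Under the hood, MCP frames these interactions as JSON-RPC 2.0 messages exchanged between the client (the IDE or AI assistant) and the server. The Python sketch below illustrates roughly what such a request looks like over HTTPS; the endpoint URL, access token, tool name and arguments are hypothetical placeholders rather than JFrog’s documented interface, and the client/server initialisation handshake is omitted for brevity.

```python
# Illustrative only: the endpoint, token, tool name and arguments are
# hypothetical placeholders, not JFrog's documented interface. MCP itself
# frames requests as JSON-RPC 2.0 messages, which is what this sketch shows.
import requests  # third-party HTTP client

MCP_ENDPOINT = "https://mcp.example-jfrog-instance.com/mcp"  # hypothetical URL
HEADERS = {
    "Accept": "application/json, text/event-stream",
    "Authorization": "Bearer <access-token>",  # placeholder credential
}

def rpc(method: str, params: dict, msg_id: int) -> dict:
    """Send one JSON-RPC 2.0 request to the MCP endpoint and return the parsed response."""
    payload = {"jsonrpc": "2.0", "id": msg_id, "method": method, "params": params}
    resp = requests.post(MCP_ENDPOINT, headers=HEADERS, json=payload, timeout=30)
    resp.raise_for_status()
    return resp.json()  # assumes a plain JSON response rather than a streamed one

# Discover which tools the server exposes (the 'initialize' handshake that
# normally precedes this is omitted here).
tools = rpc("tools/list", {}, msg_id=1)

# Invoke a tool; an AI assistant would map a prompt such as
# "Create a new local repository" to a structured call like this.
result = rpc(
    "tools/call",
    {"name": "create_local_repository",         # hypothetical tool name
     "arguments": {"key": "demo-pypi-local"}},  # hypothetical arguments
    msg_id=2,
)
print(result)
```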
Teams gain “immediate awareness” of open-source vulnerabilities and software package usage without context switching, saving developers time. AI automation also simplifies complex queries that previously required advanced developer knowledge, allowing all teams to work smarter and faster.
While remote MCP servers can help facilitate rapid code iteration and improve software reliability, they are not without risk. The JFrog Security Research Team recently discovered vulnerabilities, such as CVE-2025-6514, that could allow attackers to hijack MCP clients and execute code remotely, potentially with severe consequences. This is another reason why JFrog’s MCP Server is designed with security in mind and relies exclusively on trusted connection methods, such as HTTPS.
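As an illustration of that “trusted connections only” posture, any client integrating with a remote MCP server can refuse non-HTTPS endpoints before sending a single request. The guard below is a generic, hypothetical client-side check, not a description of JFrog’s implementation.

```python
# Generic client-side guard (hypothetical, not JFrog's implementation):
# refuse to talk to a remote MCP server unless the endpoint uses HTTPS.
from urllib.parse import urlparse

def assert_trusted_mcp_endpoint(url: str) -> str:
    """Raise if the MCP endpoint is not served over HTTPS with a valid host."""
    parsed = urlparse(url)
    if parsed.scheme != "https" or not parsed.hostname:
        raise ValueError(f"Untrusted MCP endpoint rejected: {url!r}")
    return url

assert_trusted_mcp_endpoint("https://mcp.example-jfrog-instance.com/mcp")  # accepted
# assert_trusted_mcp_endpoint("http://attacker.example/mcp")  # raises ValueError
```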
“The developer tool stack and product architecture has fundamentally changed in the AI era. With the launch of the JFrog MCP Server, we’re expanding the open integration capabilities of the JFrog Platform to seamlessly connect with LLMs and agentic tools,” said Yoav Landman, co-founder and CTO at JFrog. “This allows developers to natively integrate their MCP-enabled AI tools and coding agents with our platform, enabling self-service AI across the entire development lifecycle, which helps increase productivity and build smarter, more secure applications, faster.”
JFrog’s new MCP Server for the JFrog Platform is now available for developers to test and provide feedback during a preview period.