
AWS Unveils Strands Agents and Bedrock AgentCore for Advanced Physical AI Integration

AWS launches Strands Agents 1.0 and Bedrock AgentCore, enhancing physical AI integration with multi-agent orchestration for real-time responsiveness and cloud collaboration.

Agentic AI systems are increasingly bridging the digital and physical realms, allowing AI agents to perceive, reason, and act in real environments. As these systems expand into robotics, autonomous vehicles, and smart infrastructure, a key question arises: how can developers create agents that harness massive cloud computing power for complex reasoning while ensuring millisecond-level responsiveness for physical interactions?

This year has marked a significant evolution for agentic AI at AWS. In May 2025, the company introduced Strands Agents, which simplifies the developer experience and emphasizes a model-driven approach to agent development. In July, AWS launched version 1.0, adding multi-agent orchestration and introducing Amazon Bedrock AgentCore to move AI agents into large-scale production more quickly. At the recent re:Invent 2025 conference, enhancements included a TypeScript SDK and bidirectional streaming for voice agents. Today, AWS is focusing on extending these capabilities to the edge and to physical AI, where agents not only process information but also collaborate with humans in real-world scenarios.

Demonstrations at AWS illustrate how physical AI agents control disparate robotic platforms through a unified Strands Agents interface. One example involves a 3D-printed SO-101 robotic arm utilizing the NVIDIA GR00T vision-language-action (VLA) model to perform tasks like identifying and picking fruit. Meanwhile, a Boston Dynamics Spot quadruped robot executes mobility tasks, such as inspecting its sensors by reasoning through its physical state. Both systems operate on NVIDIA Jetson edge hardware, showcasing the ability to execute sophisticated AI tasks directly on embedded systems.

The interplay between edge and cloud computing reveals a fundamental tension in designing intelligent systems. A robotic arm catching a ball must react almost instantaneously, which requires local processing to sidestep network latency. Yet the same system benefits from the cloud for complex tasks like multi-step assembly, which demands far more computation than edge hardware can supply. This dichotomy reflects Daniel Kahneman's concept of dual-process thinking: quick, instinctual responses at the edge versus slower, deliberate reasoning in the cloud.

Cloud capabilities contribute additional features that edge systems cannot accommodate alone. For instance, AgentCore Memory enables the storage of spatial and temporal context over extended periods. This collaborative memory allows insights gained by one robot to be shared across fleets, fostering a collective learning environment. Furthermore, Amazon SageMaker facilitates the large-scale simulation and training of models, enabling organizations to integrate real-world and simulated learnings to enhance performance across entire fleets.

This hybrid architecture is paving the way for new categories of intelligent systems. For instance, humanoid robots can utilize cloud-based reasoning for planning multi-step tasks while using edge-based models for precise physical movements. Autonomous vehicles can optimize routes and predict traffic using cloud intelligence while maintaining real-time obstacle avoidance locally. This collaborative approach ensures that robots can respond to immediate dangers while leveraging broader analytical insights from the cloud.

Developing edge and physical AI systems need not begin with complex orchestration. Instead, a progressive iteration approach allows developers to start simple and evolve sophistication based on requirements. For example, initial setups can involve installing the Strands Agents Python SDK on edge devices and running local models such as Qwen3-VL.
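As a sketch of that starting point, the loop below stands in for a minimal edge agent. The class names are placeholders, not the actual Strands Agents API: on a real device, the SDK's agent class and a locally served model such as Qwen3-VL would fill these roles.

```python
from dataclasses import dataclass


@dataclass
class LocalVLM:
    """Placeholder for a locally served vision-language model (e.g. Qwen3-VL)."""
    model_id: str

    def complete(self, prompt: str) -> str:
        # A real implementation would call the on-device inference runtime here.
        return f"[{self.model_id}] ack: {prompt}"


class EdgeAgent:
    """Minimal agent wrapper: hand a prompt to the local model, return its reply."""

    def __init__(self, model: LocalVLM):
        self.model = model

    def __call__(self, prompt: str) -> str:
        return self.model.complete(prompt)


agent = EdgeAgent(LocalVLM(model_id="qwen3-vl"))
print(agent("Describe what the camera sees."))
```

Everything runs locally, so the agent keeps working even when the network does not.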

As developers gain familiarity, they can integrate visual understanding through cameras, allowing the AI to comprehend its surroundings. Agents can also tap into other sensors, enhancing their decision-making capabilities. For instance, reading battery levels can inform the agent about task continuity or the need to recharge.
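The battery example can be made concrete as a tool function the agent consults before committing to a task. The thresholds and names below are illustrative assumptions, not part of any AWS API.

```python
def battery_decision(level_pct: float, task_cost_pct: float,
                     reserve_pct: float = 15.0) -> str:
    """Decide whether the agent should proceed with a task or recharge first.

    level_pct:     current battery charge (0-100)
    task_cost_pct: estimated charge the task will consume
    reserve_pct:   safety margin to keep after the task (illustrative default)
    """
    if level_pct - task_cost_pct >= reserve_pct:
        return "proceed"          # enough charge to finish with margin to spare
    if level_pct <= reserve_pct:
        return "recharge_now"     # already at or below the safety reserve
    return "recharge_before_task" # could move, but not complete the task safely
```

Registered as a tool, a function like this lets the model reason explicitly about task continuity versus the need to recharge.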

Moving from sensing to action, physical AI systems follow a continuous cycle of environmental interpretation, reasoning, and action execution. Control of hardware is central to physical interaction, as robots need to coordinate multiple motors and joints to perform precise tasks. Using VLA models like NVIDIA GR00T, robots can seamlessly integrate visual inputs with language instructions, enabling a more intuitive interaction.
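That continuous cycle can be sketched as a loop over a VLA-style policy. The `toy_policy` below is a stand-in for a model like GR00T: it maps an observation plus a language instruction to a joint-space action. All names here are illustrative.

```python
def control_loop(policy, get_observation, apply_action, steps):
    """Run the perceive -> reason -> act cycle for a fixed number of steps."""
    trace = []
    for _ in range(steps):
        obs = get_observation()                 # perceive: camera + joint state
        action = policy(obs, "pick the apple")  # reason: instruction-conditioned inference
        apply_action(action)                    # act: command the motors
        trace.append(action)
    return trace


# Stand-in policy: halve each joint angle toward zero (a real VLA model goes here).
def toy_policy(obs, instruction):
    return [a * 0.5 for a in obs["joint_angles"]]
```

The structure is the point: perception, reasoning, and actuation stay decoupled, so the toy policy can be swapped for a real VLA model without touching the loop.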

The integration of tools such as Hugging Face’s LeRobot and robust motion primitives allows developers to program robotic systems that can operate efficiently within their environments. For example, using SDK-based control, a Boston Dynamics Spot can be commanded to perform complex tasks while ensuring the safety and efficiency of its movements.
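Safe SDK-based control typically composes motion primitives and clamps every command to hardware limits before sending it. The limits and primitive names below are made up for illustration; they are not the Spot or LeRobot API.

```python
def clamp_command(target_deg, limits_deg):
    """Clamp each requested joint angle to the hardware's safe range."""
    return [max(lo, min(hi, t)) for t, (lo, hi) in zip(target_deg, limits_deg)]


def run_primitives(primitives, limits_deg, send):
    """Execute a named sequence of motion primitives, clamping each command.

    primitives: list of (name, target_joint_angles) pairs
    send:       callable that forwards the safe command to the robot SDK
    """
    for name, target in primitives:
        safe = clamp_command(target, limits_deg)
        send(name, safe)
```

Keeping the safety clamp in one place means every primitive, whether scripted or model-generated, passes through the same limit check.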

Edge agents also have the capability to delegate complex reasoning tasks to cloud-based counterparts as necessary. By employing a layered agent architecture, a local agent can consult a more powerful cloud agent for intricate planning tasks, while the cloud agent can orchestrate multiple edge devices working collaboratively.
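The layered pattern can be sketched as a simple router: handle the task locally when it is simple, escalate to a cloud agent otherwise. The complexity heuristic here (counting sub-steps) is a deliberate simplification; a real system might use the local model's own confidence or a planner.

```python
def make_layered_agent(local_answer, cloud_answer, complexity_threshold=3):
    """Route a task to the edge when it is simple, to the cloud otherwise.

    local_answer / cloud_answer: callables standing in for the two agents;
    complexity_threshold: max number of sub-steps the edge handles alone.
    """
    def handle(task_steps):
        if len(task_steps) <= complexity_threshold:
            return ("edge", local_answer(task_steps))   # fast, local path
        return ("cloud", cloud_answer(task_steps))      # escalate for deep planning
    return handle
```

The returned tag makes the routing decision observable, which is useful when tuning the threshold against real latency budgets.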

As physical AI systems evolve, learning from collective experiences becomes paramount. For instance, in a warehouse scenario, multiple robots encountering similar challenges can share observations, leading to operational optimizations. AgentCore Observability provides a mechanism for continuous improvement, where agents learn from their interactions and adjust strategies accordingly. This loop—comprising inference, observation, evaluation, and optimization—contributes to the overall enhancement of AI capabilities.
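The fleet-learning loop can be illustrated with a toy shared memory: each robot reports task outcomes, and the fleet derives an improved parameter from the successes. The grip-force example is hypothetical and not the AgentCore Memory API.

```python
from statistics import mean


class FleetMemory:
    """Toy shared store of task outcomes contributed by every robot in a fleet."""

    def __init__(self):
        self.records = []  # (robot_id, grip_force, success) tuples

    def report(self, robot_id, grip_force, success):
        """Inference + observation: a robot logs how a grasp attempt went."""
        self.records.append((robot_id, grip_force, success))

    def optimized_grip_force(self, default=5.0):
        """Evaluation + optimization: average the forces behind successful grasps."""
        good = [force for _, force, ok in self.records if ok]
        return mean(good) if good else default
```

One robot's failed attempt still contributes, by its absence from the successful set, to the parameter every other robot picks up next.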

The convergence of multimodal reasoning models, edge computing, and open-source robotics is reshaping the capabilities of physical AI systems. AWS aims to make AI agent development accessible, facilitating a transition where agents learn from experience in real environments. As these systems become more adept at reasoning about the physical world, they will increasingly simulate future scenarios, predict outcomes, and coordinate seamlessly within larger operational frameworks.

Written by AiPressa Staff


© 2025 AIPressa · Part of Buzzora Media · All rights reserved.