Figure AI has unveiled HELIX, a pioneering Vision-Language-Action (VLA) model that integrates vision, language comprehension, and action execution into a single neural network. This innovation allows ...
Recent advances in large-scale AI models, including large language and vision-language-action models, have significantly expanded the capabilities of ...
SAN JOSE, Calif., March 17, 2026 /PRNewswire/ -- At NVIDIA GTC 2026, DeepRoute.ai presented a comprehensive introduction to its 40-billion-parameter Vision-Language-Action (VLA) Foundation Model ...
Stuttgart-based Sereact raises a $110M Series B led by Headline to develop AI that makes robots predict consequences before acting.
Safely achieving end-to-end autonomous driving is the cornerstone of Level 4 autonomy, and the difficulty of doing so is the primary reason Level 4 hasn’t been widely adopted. The main difference between Level 3 and Level 4 is the ...
The field of robotics is undergoing a profound transformation driven by rapid advances in artificial intelligence, particularly large language models and ...
What if a robot could not only see and understand the world around it but also respond to your commands with the precision and adaptability of a human? Imagine instructing a humanoid robot to “set the ...
As I highlighted in my last article, two decades after the DARPA Grand Challenge, the autonomous vehicle (AV) industry is still waiting for breakthroughs—particularly in addressing the “long tail ...
Hugging Face Inc. today open-sourced SmolVLM-256M, a new vision language model with the lowest parameter count in its category. The model’s small footprint allows it to run on devices such as ...
Vision language models (VLMs) have made impressive strides over the past year, but can they handle real-world enterprise challenges? All signs point to yes, with one caveat: They still need maturing ...