Late at night on December 17, 2024, Jen-Hsun Huang unveiled NVIDIA's newest product, a "freshly baked" supercomputer, from his own kitchen.
This Jetson Orin Nano has 1024 NVIDIA CUDA cores, but is only the size of a credit card.
Its built-in NVIDIA Ampere architecture GPU, combined with a 6-core Arm Cortex-A78AE CPU, gives developers up to 67 trillion operations per second (67 TOPS) of AI performance.
While computing power is roughly 70% higher than the previous generation, the whole board stays within a 25-watt power envelope. The price has also dropped from US$499 for the previous-generation Jetson Orin Nano Developer Kit to just US$249, about a quarter of the price of an iPhone.
Jen-Hsun Huang positions the Jetson Orin Nano as a processor for robots.
In Jen-Hsun Huang's vision, future humanoid robots will inevitably carry on-board AI models, and the Jetson Orin Nano is the hardware built to run those large models.
In other words, it is the brain of future robots.
This means that users can collect data and run training and inference locally on this small device, handling complex tasks such as deep learning, computer vision, and robot control.
Robot intelligence can thus move entirely from the cloud onto the device itself.
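As a rough illustration of what running "on the device" means in practice, here is a minimal sketch, assuming a JetPack image with a CUDA-enabled PyTorch build, that checks that the on-board GPU is visible and then executes a computation entirely locally, with no cloud round-trip:

```python
# Minimal sketch: confirm that compute stays on the device itself.
# Assumes PyTorch with CUDA support is installed on the Jetson
# (for example via NVIDIA's JetPack-compatible wheels); no network access is needed.
import torch

if torch.cuda.is_available():
    device = torch.device("cuda")
    print("On-board GPU:", torch.cuda.get_device_name(0))
else:
    device = torch.device("cpu")
    print("CUDA not visible, falling back to CPU")

# A small matrix multiplication executed entirely on the local device,
# standing in for the deep-learning workloads described above.
a = torch.randn(1024, 1024, device=device)
b = torch.randn(1024, 1024, device=device)
c = a @ b
print("Result computed locally, shape:", tuple(c.shape))
```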
To cure robots of their traditionally clumsy, halting way of moving, there must first be a breakthrough in machine vision.
Here the Jetson Orin Nano, serving as the robot's on-board terminal, can support up to four external cameras.
This is a major advantage for applications that need to process multiple high-resolution image streams simultaneously.
On top of that, locally collected data lets the vision AI running on the Jetson Orin Nano adapt to its working environment, enabling responsive obstacle avoidance.
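As a hedged sketch of how several CSI cameras are typically opened on a Jetson, the code below uses OpenCV with a GStreamer pipeline built around nvarguscamerasrc; the sensor IDs, resolution, and frame rate are illustrative and depend on the cameras actually attached.

```python
# Sketch: read frames from several CSI cameras at once on a Jetson.
# Assumes OpenCV was built with GStreamer support (as in JetPack images)
# and that cameras are attached at the given sensor IDs.
import cv2

def csi_pipeline(sensor_id: int, width: int = 1280, height: int = 720, fps: int = 30) -> str:
    """GStreamer pipeline string for a Jetson CSI camera via nvarguscamerasrc."""
    return (
        f"nvarguscamerasrc sensor-id={sensor_id} ! "
        f"video/x-raw(memory:NVMM), width={width}, height={height}, framerate={fps}/1 ! "
        "nvvidconv ! video/x-raw, format=BGRx ! "
        "videoconvert ! video/x-raw, format=BGR ! appsink drop=true"
    )

# Open as many cameras as are physically connected (up to four here).
captures = [cv2.VideoCapture(csi_pipeline(i), cv2.CAP_GSTREAMER) for i in range(4)]
captures = [cap for cap in captures if cap.isOpened()]

try:
    for idx, cap in enumerate(captures):
        ok, frame = cap.read()
        if ok:
            # Hand the frame to an on-device vision model here
            # (obstacle detection, segmentation, and so on).
            print(f"camera {idx}: frame shape {frame.shape}")
finally:
    for cap in captures:
        cap.release()
```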
Beyond robotics, in smart-city construction the Jetson Orin Nano can also be used for vehicle and pedestrian detection and analysis, and even for traffic-flow monitoring and management in complex urban environments.
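Here is a hedged sketch of what such vehicle and pedestrian detection could look like on the device, using a generic pretrained torchvision model rather than any specific NVIDIA stack; the model choice and the image path are illustrative assumptions.

```python
# Sketch: vehicle and pedestrian detection on a single frame with a pretrained
# torchvision model. The model (Faster R-CNN trained on COCO) and the image
# path are illustrative assumptions, not a specific NVIDIA pipeline.
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
categories = weights.meta["categories"]          # COCO class names
preprocess = weights.transforms()

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

img = read_image("street_scene.jpg")             # hypothetical traffic-camera frame
batch = [preprocess(img).to(device)]

with torch.no_grad():
    detections = model(batch)[0]

# Keep only the classes relevant to traffic monitoring.
wanted = {"person", "car", "bus", "truck", "motorcycle", "bicycle"}
for label, score, box in zip(detections["labels"], detections["scores"], detections["boxes"]):
    name = categories[int(label)]
    if name in wanted and float(score) > 0.5:
        print(f"{name}: {float(score):.2f} at {box.tolist()}")
```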
In the Internet of Things, the Jetson Orin Nano gives devices on-board intelligence, making smart homes, smart security systems and other products more capable and autonomous.
What is even more surprising is that this palm-sized supercomputer not only supports large language models, but can also comfortably run a wide range of generative AI applications.
Dialogue models built on the Transformer architecture (such as the GPT series, BERT, and others) can power natural-language conversation, automatic question answering, virtual assistants, and similar applications.
With the Jetson Orin Nano, such conversations can be generated in real time on the local device, reducing reliance on the cloud, improving the response speed and stability of the dialogue system, and effectively letting robots "speak."
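A minimal sketch of running a small open chat model locally with Hugging Face Transformers follows; the model name is an assumption chosen for illustration, and in practice you would pick something small enough (ideally quantized) to fit the Jetson's memory.

```python
# Sketch: generate a reply locally with a small open chat model via
# Hugging Face Transformers. Requires the transformers and accelerate packages;
# the model name below is an illustrative assumption, not a recommendation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-0.5B-Instruct"   # assumed small chat model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",                     # place weights on the Jetson's GPU
)

messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    output = model.generate(inputs, max_new_tokens=64)

# Decode only the newly generated tokens, i.e. the model's reply.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```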
In the inference stage in particular, the Jetson Orin Nano offers exceptional price-performance and energy efficiency; a workstation of comparable capability, such as a fully configured Apple M2 machine, can cost on the order of US$10,000.
Most importantly, for anyone who wants to run their own large AI model, the Jetson Orin Nano is not only fully compatible with NVIDIA's software ecosystem, but can also plug into all of the mainstream open-source models on the market, sparing users from spending large amounts of time training from scratch.
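A hedged illustration of that workflow: take an off-the-shelf pretrained open-source model, export it to ONNX, and hand it to NVIDIA's own TensorRT toolchain (the trtexec tool ships with JetPack). The model choice, file names, and input size are assumptions for the sketch.

```python
# Sketch: reuse a pretrained open-source model instead of training from scratch,
# then prepare it for NVIDIA's TensorRT runtime. Model choice, file names,
# and input size are illustrative assumptions.
import torch
from torchvision.models import resnet50, ResNet50_Weights

model = resnet50(weights=ResNet50_Weights.DEFAULT).eval()
dummy = torch.randn(1, 3, 224, 224)   # one 224x224 RGB image

torch.onnx.export(
    model,
    dummy,
    "resnet50.onnx",
    input_names=["input"],
    output_names=["logits"],
    opset_version=17,
)

# On the Jetson, build an FP16 TensorRT engine with the trtexec tool that
# ships with JetPack (run in a shell, not in Python):
#   /usr/src/tensorrt/bin/trtexec --onnx=resnet50.onnx \
#       --saveEngine=resnet50_fp16.engine --fp16
```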