Hugging Face and AWS Team Up to Run Thousands of AI Models on Amazon's Custom Chips
10 Oct 2025, 13:13
Hugging Face, an artificial intelligence firm, and Amazon's cloud division announced a partnership on Wednesday to make it easier to run thousands of AI models on Amazon's specialised AI chips.
Hugging Face, valued at $4.5 billion, counts Nvidia, Amazon, and Alphabet's Google among its backers. It has grown into a hub where AI researchers and developers share chatbots and other AI software, and where developers go to obtain and experiment with open-source AI models, including Llama 3 from Meta Platforms.
However, after modifying an open-source AI model, developers usually want to incorporate it into a software product. Hugging Face and Amazon announced on Wednesday that they have teamed up to enable this on a specially designed AWS chip known as Inferentia2.
Jeff Boudier, head of product and growth at Hugging Face, stated: "One thing that's very important to us is efficiency – making sure that as many people as possible can run models and that they can run them in the most cost-effective way."
AWS, for its part, wants to draw more AI developers to its cloud services for delivering AI. Although Nvidia leads the industry in model training, AWS contends that its chips can then run those trained models, a process known as inference, at a lower cost.
"These models are trained perhaps once a month. However, you may be running inference against them tens of thousands of times every hour. Inferentia2 truly excels in such a situation," said Matt Wood, AWS's product manager for artificial intelligence.
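The back-of-the-envelope arithmetic behind Wood's point can be made concrete. The figures below are illustrative, taken only from the quote itself (training roughly once a month, inference in the tens of thousands of calls per hour), not from any AWS pricing data:

```python
# Illustrative arithmetic for the quote above: training is rare, inference is constant.
HOURS_PER_MONTH = 30 * 24             # roughly 720 hours in a month
INFERENCE_CALLS_PER_HOUR = 10_000     # "tens of thousands" (lower bound, assumed)
TRAINING_RUNS_PER_MONTH = 1           # "trained perhaps once a month"

inference_calls_per_month = INFERENCE_CALLS_PER_HOUR * HOURS_PER_MONTH
ratio = inference_calls_per_month / TRAINING_RUNS_PER_MONTH

print(f"Inference calls per training run: {ratio:,.0f}")  # 7,200,000
```

Even at this conservative lower bound, inference outnumbers training by millions to one, which is why per-call inference cost, rather than training speed, is where AWS is pitching Inferentia2.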
(Sources: reuters.com)