The world's leading AI firms are collaborating on a new Agentic Artificial Intelligence Foundation, managed by the Linux Foundation, to build open standards around AI agents. The effort will initially focus on three key open source tools, with participants sharing findings on common technical problems.
The Chinese government has begun adding government-approved AI suppliers to the Information Technology Innovation List in a bid to accelerate deployment of domestic hardware. But can the Chinese semiconductor industry satisfy the needs of the domestic AI industry?
DeepSeek is allegedly involved in a "phantom data center" smuggling scheme to get Blackwell GPU servers into China for training its newest LLM generation. While Nvidia dismisses the claims as "farfetched", some evidence suggests otherwise.
OpenAI and Anthropic claim, in a pair of reports released today and earlier in the month, that the use of enterprise AI tools increases productivity and corporate ROI. These studies may be damage control to counter reports released by MIT and Harvard in August claiming the opposite.
The U.S. House shelved the GAIN AI Act, blocking a rule that would have forced AMD and Nvidia to put U.S. buyers ahead of China for advanced GPUs, though Beijing's own limits blunt the impact.
OpenAI's Sam Altman announced in an internal memo that the company is in 'Code Red' status, putting every other project on the back burner in favor of ChatGPT.
Nvidia has released a paper describing TiDAR, a decoding method that merges diffusion-based parallel drafting with autoregressive sampling, two historically separate approaches to accelerating language model inference.
Major insurers are moving to ring-fence their exposure to artificial intelligence failures, after a run of costly and highly public incidents raised concerns about systemic, correlated losses.
Elon Musk argues that terawatt-scale AI computing will soon be impossible to power and cool on Earth and must move to orbit. But despite abundant solar energy and radiative cooling in GEO, launch mass, radiation hardening, and networking challenges make such space-based data centers a distant dream.
Anthropic, Microsoft, and Nvidia have struck a three-way partnership to run Claude AI models on Microsoft's Azure servers using Nvidia hardware. Anthropic will spend $30 billion on Azure compute, with a commitment to contract up to an additional gigawatt of compute capacity. The deal could help all three companies meet their existing commitments, but it also further inflates the ballooning AI bubble.
Elon Musk claims Tesla may need 100 to 200 billion AI chips per year, a volume far beyond what TSMC and Samsung can supply. Because he believes existing foundries cannot scale fast enough, he is considering building Tesla's own fab.
Nvidia's Vera Rubin platform may upend the AI-server market: Nvidia will reportedly ship partners fully built L10 compute trays that include all compute, power, and cooling hardware, leaving OEMs and ODMs to handle only rack integration while Nvidia takes over the core server design, along with much of the value and the margins.
A J.P. Morgan report says the AI industry needs to generate at least $650 billion in annual revenue for investors to earn a 10% return on the capital flowing into it through 2030.