
At this point, Nvidia is widely regarded as the 800-pound gorilla when it comes to silicon and software for artificial intelligence.


Beyond AI, as I mentioned previously, all of Nvidia’s business units realized quarterly growth. Though its Automotive group rose a modest 3% to $261 million, the company’s automotive design win pipeline is projected at $14 billion in new business (numbers soon to be updated). Automotive design wins have a longer gestation period, and the company has noted that this revenue opportunity will begin materializing in 2024 and beyond. Nvidia’s Professional Visualization business unit grew 9.8% sequentially to $416 million, while its OEM and Other business grew 10.6% to $73 million. Finally, Nvidia’s Gaming group delivered $2.856 billion for the quarter, compared to $2.49 billion in the previous quarter (up about 15%) and $2.24 billion in the year-ago quarter. Here again, the company’s gaming GPUs and software are widely regarded as the performance and feature leaders in the PC gaming industry, though chief rival AMD is beginning to execute better with its Radeon product line, which pairs with its potent Ryzen CPUs as a one-two punch platform solution.

Moving forward, the company guided for a nice round $20 billion for its Q4 FY24 revenue, representing a projected 11% sequential gain. There will be a bit of headwind, of course, from competitors like AMD, which is expected to deliver its MI300 AI accelerators in December at its Advancing AI event. That said, it’s going to be a tough slog for all competitors, given Nvidia’s long-building inertia as the clear leader and incumbent in AI. Another component of the company’s data center silicon portfolio is just coming online now as well: its Grace Hopper combined CPU-GPU Superchip, which competes for host-processor sockets in AI data centers and which Huang noted is “on a very, very fast ramp with our first data center CPU to a multi-billion dollar product line.”
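As a quick sanity check on the percentages quoted above, the short sketch below recomputes the sequential and year-over-year gaming growth and the prior-quarter revenue implied by the $20 billion guidance. All input figures are simply the numbers quoted in this article; the rounding is mine.

```python
# Minimal sanity check of the growth figures cited above (a sketch; dollar
# amounts are the figures quoted in this article, in billions).

def pct_growth(current, prior):
    """Percentage change from prior to current."""
    return (current - prior) / prior * 100

# Gaming: $2.856B this quarter vs. $2.49B last quarter and $2.24B a year ago.
print(f"Gaming, quarter over quarter: {pct_growth(2.856, 2.49):.1f}%")  # ~14.7%, i.e. "up about 15%"
print(f"Gaming, year over year:       {pct_growth(2.856, 2.24):.1f}%")  # ~27.5%

# Guidance: ~$20B for Q4 FY24, described as an ~11% sequential gain,
# which implies roughly $20B / 1.11, or about $18.0B, for the quarter just reported.
print(f"Implied current-quarter revenue: ${20.0 / 1.11:.1f}B")
```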

Any way you slice it, there’s no stopping Nvidia from sustaining this level of growth for the foreseeable future, as AI adoption tracks a similar curve. The company continues to execute like a finely tuned machine, and the numbers, as they say, don’t lie.

Well before Washington banned Nvidia’s exports of high-performance graphics processing units to China, the country’s tech giants had been hoarding them in anticipation of an escalating tech war between the two nations.

Baidu, one of the tech firms building China’s counterparts to OpenAI, has secured enough AI chips to keep training its ChatGPT equivalent Ernie Bot for the “next year or two,” the firm’s CEO Robin Li said on an earnings call this week.

“Also, inference requires less powerful chips, and we believe our chip reserves, as well as other alternatives, will be sufficient to support lots of AI-native apps for the end users,” he said. “And in the long run, having difficulties in acquiring the most advanced chips inevitably impacts the pace of AI development in China. So, we are proactively seeking alternatives.”

The robot can help the construction industry overcome its challenges and reduce its environmental impact.


Image credit: Michael Lyrenmann via Science Robotics.

A team of researchers has developed a 12-ton (roughly 26,000-pound) autonomous robot that can construct stone walls from natural and recycled materials using advanced technologies. This could help the construction industry overcome its challenges of low productivity, high waste, and labor shortages while reducing its environmental impact and improving its sustainability.

The engineers at Fourier Intelligence have successfully combined functionality with a touch of creativity, making the GR-1 more than just a caregiver. Its 300 Nm hip actuators, equivalent to 221 pound-feet (lb-ft), empower the GR-1 to lift a remarkable 110 pounds (50 kilograms), an impressive feat for a robot of its stature. This capability positions the GR-1 as a valuable aid for patients in activities ranging from getting up from a bed or toilet to navigating a wheelchair.
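For readers who want to check the unit math in the two robot items above, here is a small conversion sketch. The spec figures are taken from this article, the conversion constants are standard, and treating the wall-building robot’s “12-ton” weight as metric tons is my assumption.

```python
# Quick unit-conversion check of the robot specs quoted above (a sketch).
LB_PER_KG = 2.20462      # pounds per kilogram
LBFT_PER_NM = 0.737562   # pound-feet per newton-metre
KG_PER_TONNE = 1000.0    # kilograms per metric ton (assumption: "12-ton" is metric)

print(f"12 t   -> {12 * KG_PER_TONNE * LB_PER_KG:,.0f} lb")  # ~26,455 lb for the wall-building robot
print(f"300 Nm -> {300 * LBFT_PER_NM:.0f} lb-ft")            # ~221 lb-ft hip actuator torque
print(f"50 kg  -> {50 * LB_PER_KG:.0f} lb")                  # ~110 lb GR-1 lifting capacity
```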

What role should text-generating large language models (LLMs) have in the scientific research process? According to a team of Oxford scientists, the answer — at least for now — is: pretty much none.

In a new essay, researchers from the Oxford Internet Institute argue that scientists should abstain from using LLM-powered tools like chatbots to assist in scientific research on the grounds that AI’s penchant for hallucinating and fabricating facts, combined with the human tendency to anthropomorphize the human-mimicking word engines, could lead to larger information breakdowns — a fate that could ultimately threaten the fabric of science itself.

“Our tendency to anthropomorphize machines and trust models as human-like truth-tellers, consuming and spreading the bad information that they produce in the process,” the researchers write in the essay, which was published this week in the journal Nature Human Behaviour, “is uniquely worrying for the future of science.”

In a letter to the company’s board of directors, OpenAI researchers are said to have warned of an AI discovery that could pose a threat to humanity.

That’s according to Reuters, which cited two sources familiar with the matter. Reuters also reports that the letter was a factor in Altman’s firing, though not the only reason.

According to a source from The Verge, the board never received such a letter, which is why it played no role in Altman’s firing. Reuters says it has not seen the letter. The Information reports not on the letter itself, but on the “Q*” breakthrough described in it.

Less than 48 hours after Sam Altman was fired from OpenAI, Brian Armstrong, the CEO of Coinbase, made a post highlighting the impact the departure could have on the ChatGPT creator.

Armstrong praised Sam Altman for taking OpenAI’s valuation to $80 billion, a massive value he believes the company has just “torched” by ousting the tech veteran.

Coinbase CEO Suspects Foul Play

Inflection AI just announced Inflection-2, a HUGE new 175 billion parameter language model.

It outperforms Google’s and Meta’s top models and “is very close” to catching GPT-4.

The CEO also said the company’s next model will be 10x larger in six months.


Inflection AI, the innovative startup behind the conversational chatbot Pi, has recently unveiled a new AI model, Inflection-2, claiming superior performance over popular models developed by Google and Meta. According to a recent Forbes report, this new model is rapidly gaining attention for its potential to rival OpenAI’s GPT-4.

AI can be used to make our lives easier, but it is also a frightening tool that could see many of us out of a job – at least according to some experts.

Australian Academy of Technological Sciences and Engineering (ATSE) CEO Kylie Walker told The Canberra Times AI could replace anywhere between 25 and 46 percent of all Aussie jobs by 2030.

Walker, along with a group of 13 other AI experts, has called in a new report for a $1 billion national artificial intelligence initiative to produce more than 100,000 digitally skilled workers over the next decade.