Big, Bigger, Nvidia at OpenAI

Nvidia announces a $100 billion investment in OpenAI. Hundreds of researchers and politicians call for “red lines” for AI use. The AI newsletter.

Of course, you also know that generative AI is supposed to make our work more efficient, streamline it by removing tedious routine tasks, and so on. But what if AI is a little *too* good at producing “workslop”: content that looks nicely polished but is quite poor in substance?

Researchers from BetterUp Labs and the Stanford Social Media Lab, in a survey of over 1,000 participants, claim to have found that 41 percent of employees have already encountered such bland AI-generated results, and that it costs them, on average, two hours to resolve the problems those results cause. Which, let’s just say, isn’t exactly conducive to increasing productivity through AI.

However robust the exact figures may be, I think this should be taken as a friendly nudge and reminder. Feeding tasks into AI and then forwarding the results with a “good enough” attitude – that was never a good idea. And it’s increasingly proving to be unproductive, too.

What you need to know: Very, very large sums of money, very, very large data centers

When it comes to AI data centers, gigantic investment sums and computing capacities have been bandied about for quite some time. This week, Nvidia and OpenAI once again added a few new superlatives: The US chipmaker plans to invest up to $100 billion in the developer of ChatGPT. Together, the two companies plan to build new AI data centers with a capacity of ten gigawatts. To put that into perspective: That’s roughly as much power as ten nuclear reactors can supply.

The stock market liked it at first – but the higher the sums, the more pressing the question of whether and when AI companies can recoup these gigantic amounts. You know, the debate around the AI bubble: What if users grow tired of chatbots? What if the world of work can’t be transformed with their help as thoroughly as expected (see above)?

The news site Axios warns: The US is betting its economic future on the belief that OpenAI CEO Sam Altman, Nvidia CEO Jensen Huang, and other AI leaders are brilliant innovators driving a golden future – and not just salespeople juggling billions and praying the music never stops.

Perhaps fittingly, The Verge reports that OpenAI is now looking for a “Head of Ads” – a leadership figure who will help integrate advertising into ChatGPT. After all, that huge investment has to pay off somehow.

What you should consider: What international rules does AI need?

It’s not new for prominent scientists to warn about the risks of artificial intelligence in open letters. But the appeal published by over 200 signatories at the start of the UN General Assembly made quite an impact. Signatories include Nobel laureates such as journalist Maria Ressa, former heads of state like ex-Colombian President Juan Manuel Santos, and AI researcher Kate Crawford, whose work has so far focused more on current AI risks than on those in the somewhat more distant future.

The letter calls on the world’s governments to reach an international agreement on red lines for AI by the end of 2026. AI could “soon” far surpass human capabilities, and risks such as “artificially induced pandemics, widespread disinformation, large-scale manipulation of individuals including children, (…) mass unemployment, and systematic human rights violations” could escalate, the signatories fear.

Ukrainian President Volodymyr Zelenskyy did not sign the letter, but he, too, spoke about AI in his address to the United Nations. He warned against the use of artificial intelligence in weapon systems, speaking of the “most destructive arms race in human history.” Ten years ago, he said, no one could have imagined that drone attacks could leave entire areas lifeless – something previously conceivable only through a nuclear strike. He therefore called for international rules on the use of AI in weapon systems.

Initiating a discussion about this is important and valuable. In the US tech industry, companies like the startup Anduril are currently enjoying great success. Among other things, it works on autonomous military systems, and its CEO is calling for less restraint in the development of AI weapons. Investors, too, are showing more interest in defense technology, and tech companies are signing multi-billion-dollar contracts with the US military. Regulation of AI use on the battlefield, however, is rarely discussed. Changing that is long overdue.

What you can try: Avatars with Heart

Two years ago, the company HeyGen made quite a splash with astonishingly convincing lip-synced videos in which our colleague Julian Stopa suddenly seemed to speak seven different languages.

Some time ago, the company introduced its new video agent, Avatar IV. Like its predecessor models, it can create a talking video with voice synchronization from a single photo and text input. However, facial movements are now said to be even more expressive – and the system can also display hand gestures.

In a quick test with an uploaded photo of a person, this works quite well. The AI voice reading the entered text sounds artificial, but the lip movements and facial expressions are quite convincing. And the AI hands did form a heart as requested – albeit three times in an 18-second video.