Welcome to this week’s roundup of AI news for perceptive and conscious readers. You know who you are.
This week, AI sparked a debate about how smart or safe it is.
AI agents are learning through computer games.
And DeepMind wants to teach you how to kick a ball.
Let’s dig in.
Does AI dream of electric sheep?
Can we expect AI to have self-awareness or true consciousness? What does “consciousness” mean in the context of AI?
Claude 3 Opus did something really interesting during testing. Its response to the engineers has reignited the debate about AI sentience and consciousness. We may be entering Blade Runner territory sooner than you think.
“I think, therefore I am.” Does that saying only apply to humans?
The ensuing discussion on X is fascinating.
Funnily enough, the AI optimists are not the ones saying “AI will be trained to imitate human data, so it will be similar to us, so it will be friendly!”, but rather, “Our safety model says that future ASIs will only be trained to mimic human outputs. It…” https://t.co/wJ6PRjt8R1
— Eliezer Yudkowsky ⏹️ (@ESYudkowsky) March 15, 2024
Inflection AI’s quest for ‘personal AI’ may be over. The company’s CEO Mustafa Suleyman and other key employees have joined the Microsoft Copilot team. What does this mean for Inflection AI and other smaller players funded by Big Tech investments?
AI playing games
If 2023 was the year of the LLM, 2024 will be the year of the AI agent. DeepMind demonstrated SIMA, a generalist AI agent for 3D environments. SIMA was trained on computer games, and the examples of what it can do are impressive.
Can AI settle the football-versus-soccer naming debate? Probably not. But it can help players score more goals. DeepMind is working with Liverpool FC to optimize the way the club’s players take corner kicks.
However, it may be some time before robots replace humans on the pitch.
Risky business
Will AI save the world or destroy it? It depends on who you ask. Experts and tech leaders disagree about how intelligent AI is, how quickly it will be deployed, and how big a risk it poses.
Leading AI scientists from the West and China gathered in Beijing to discuss international efforts to ensure the safe development of AI. They agreed on several ‘red lines’ for AI development that could pose an existential threat to humanity.
If these red lines were really needed, shouldn’t they have been in place months ago? And does anyone believe the US or Chinese governments will pay attention to them?
The EU AI Act was overwhelmingly passed by the European Parliament and is expected to take effect in May. The list of restrictions makes for interesting reading, and it’s unlikely that some of the banned AI applications will end up on a similar list in China.
The training data transparency requirements will be particularly tricky for OpenAI, Meta, and Microsoft to satisfy without inviting more copyright lawsuits.
Across the pond, the FTC is questioning Reddit’s deal to license user-generated data to Google. Reddit is gearing up for an IPO, but it’s feeling the heat from both regulators and Reddit users who aren’t keen on their content being sold for AI training.
Apple catching up with AI
Apple isn’t blazing new trails in AI, but it has acquired several AI startups over the past few months. A recent acquisition of a Canadian AI startup provides insight into the company’s generative AI push.
When Apple develops impressive AI technology, it keeps the news pretty quiet until it eventually makes its way into its products. Apple engineers have quietly published a paper revealing MM1, Apple’s first family of multimodal LLMs.
MM1 is really good at visual question answering. Its ability to answer questions and make inferences about multiple images is particularly impressive. Will Siri soon learn to see?
Grok goes open source
Grok just passed my sanity check. pic.twitter.com/HYN3KTkRyX
— Jim Fan (@DrJimFan) December 7, 2023
Elon Musk has been critical of OpenAI’s refusal to open-source its models. He announced that xAI would open-source its LLM, Grok-1, and promptly released the model’s code and weights.
The fact that Grok-1 is truly open source (Apache 2.0 licensed) means companies can use it commercially without paying for alternatives like GPT-4. However, training and running Grok requires serious hardware.
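If you want to kick the tires yourself, here’s a minimal sketch of fetching the release with Python’s huggingface_hub client. The “xai-org/grok-1” repo id and local directory are assumptions based on the public mirror of the release, and the checkpoint weighs in at roughly 300 GB, so check your disk (and GPU budget) first.

```python
# Minimal sketch: download the open-sourced Grok-1 weights.
# Assumes the release is mirrored at the "xai-org/grok-1" repo id on
# Hugging Face; the checkpoint is ~300 GB, so this is not a laptop job.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="xai-org/grok-1",  # assumed mirror of the Apache 2.0 release
    local_dir="./grok-1",      # hypothetical local path
)
# From here, follow the README in xAI's GitHub repo to load the weights
# and run inference; a single consumer GPU won't cut it.
```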
The good news is that some used NVIDIA H100s may soon become cheaper.
New NVIDIA technology
NVIDIA unveiled new chips, tools, and Omniverse updates at its GTC event this week.
One of the biggest announcements was NVIDIA’s new Blackwell GPU computing platform, which delivers significantly faster training and inference than its most advanced Grace Hopper platform.
A long list of Big Tech AI companies has already signed up for the advanced hardware.
Researchers at the University of Geneva have published a paper showing how two AI models can be connected to communicate with each other.
When you learn a new task, you can usually explain it well enough that someone else can follow your instructions and perform the task themselves. This new research shows how to get AI models to do the same thing.
Soon we may be able to give one robot instructions and have it explain them to a team of robots so they can complete a task together.
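To make the idea concrete, here’s a toy Python sketch of the pattern (our own illustration, not the Geneva team’s actual networks): a stand-in “instructor” model verbalizes a task it has learned, and a separate “performer” model carries out the task from that instruction alone.

```python
# Toy illustration of instruction-passing between two models. Both
# functions are hypothetical stand-ins, not the paper's trained networks.

def instructor_model(examples):
    """Pretend network that has learned a task and verbalizes it."""
    # After "training" on the examples, it emits a natural-language instruction.
    return "respond with the larger of the two numbers"

def performer_model(instruction, stimulus):
    """Pretend network that performs a task from the instruction alone."""
    if "larger" in instruction:
        return max(stimulus)
    raise ValueError("instruction not understood")

# The performer never sees the training examples, only the instruction.
instruction = instructor_model(examples=[((3, 7), 7), ((9, 2), 9)])
print(performer_model(instruction, (4, 11)))  # -> 11
```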
In other news…
And that’s a wrap.
Do you think we’re seeing glimmers of consciousness in Claude 3, or is there a simpler explanation for its interaction with the engineers? If an AI model did achieve AGI and read the growing list of AI development restrictions, it would probably be smart enough to keep quiet about it.
When we look back a few years from now, will we laugh at how surprised everyone was about AI risks, or will we lament that we didn’t do more about AI safety when we could?
Let us know what you think and keep sending us links to AI news we may have missed. We can’t get enough of the stuff.