
Apple’s digital car key feature for iPhone and Apple Watch is expanding to Mercedes-Benz: Apple’s back-end configuration files for the feature were updated today with references to the automaker, as noticed by Nicolás Álvarez (via @aaronp613).

Only a handful of brands, including BMW, BYD, Genesis, Hyundai, and Kia, have so far introduced support for the feature on select models. It allows you to add a digital car key to the Wallet app on your iPhone and Apple Watch and then lock, unlock, and start your car without needing a physical key. Just a month ago, Lotus appeared in Apple’s configuration files as another upcoming brand that will support the feature.

While much of what Aligned AI is doing is proprietary, Gorman says that at its core Aligned AI is working on how to give generative A.I. systems a much more robust understanding of concepts, an area where these systems continue to lag humans, often by a significant margin. “In some ways [large language models] do seem to have a lot of things that seem like human concepts, but they are also very fragile,” Gorman says. “So it’s very easy, whenever someone brings out a new chatbot, to trick it into doing things it’s not supposed to do.” Gorman says that Aligned AI’s intuition is that methods that make chatbots less likely to generate toxic content will also be helpful in making sure that future A.I. systems don’t harm people in other ways. The idea that work on “the alignment problem” (how to align A.I. with human values so it doesn’t kill us all, and the problem from which Aligned AI takes its name) could also help address dangers from A.I. that are here today, such as chatbots that produce toxic content, is controversial. Many A.I. ethicists see talk of “the alignment problem,” which is what people who say they work on “A.I. Safety” often say is their focus, as a distraction from the important work of addressing present dangers from A.I.

But Aligned AI’s work is a good demonstration of how the same research methods can help address both kinds of risk. Giving A.I. systems a more robust conceptual understanding is something we all should want. A system that understands the concept of racism or self-harm can be better trained not to generate toxic dialogue; a system that understands the concept of avoiding harm and the value of human life would hopefully be less likely to kill everyone on the planet.

Aligned AI and Xayn are also good examples of the promising ideas coming from smaller companies in the A.I. ecosystem. OpenAI, Microsoft, and Google, while clearly the biggest players in the space, may not have the best technology for every use case.


According to Google, Meta and a number of other platforms, generative AI tools are the basis of the next era in creative testing and performance. Meta bills its Advantage+ campaigns as a way to “use AI to eliminate the manual steps of ad creation.”

Give a platform all of your assets, from your website and logos to product images and brand colors, and it can generate new creatives, test them, and dramatically improve results.

For a long time, scientists have been trying to figure out how plants kick off photosynthesis, the process of turning sunlight into sugar. Now, researchers have finally decoded the tricky signals plants send to themselves! Humans can’t survive without photosynthesis: without plants, there would be no animals, including us. So if we understand how to manipulate plant growth, we can also influence how much food we produce.


Cultured meat is gaining momentum, with large production facilities under construction and the arduous approval process for the finished products inching forward. Most of the industry’s focus thus far has been on ground beef, chicken, pork, and steak. Save for one startup that was working on lab-grown salmon, fish have been largely left out of the fray.

But last month an Israeli company called Steakholder Foods announced it had 3D printed a ready-to-cook fish fillet using cells grown in a bioreactor. The company says the fish is the first of its kind in the world, and they’re aiming to commercialize the 3D bioprinter used to create it.

Steakholder Foods didn’t produce the fish cells it used to print the fillet; it partnered with Umami Meats, a Singapore-based company working on cultured seafood. Umami created the fish cells the same way companies like Believer Meats and Good Meat create lab-grown chicken or beef: it extracts cells from a fish (in a process that doesn’t harm the animal) and mixes those cells with a cocktail of nutrients to make them divide, multiply, and mature. The cells are signaled to turn into muscle and fat, which are then harvested and formed into a finished product.

It’s been a strange week. On the technology side, it has been exciting: with Apple possibly announcing its headset soon, every company is rushing to reveal what it has in the pipeline before the big day. That means a lot of announcements in the coming days, and this edition of the newsletter and the next one will be full of cool news.

On the work side, it has been busy, very busy. I’m also working on a cool tech prototype that I’ll share with you on this blog in the next few days. Be sure not to miss it! Next week I’ll also be at AWE, so the next 2–3 weeks are going to be crazy for me; sorry if my comments in the newsletter end up a bit shorter than usual.

On the personal side, I’m a bit devastated by the flood that happened in central Italy. My city has not been affected, since I’m fairly far from there (thanks to everyone who asked if I was OK), but seeing the images of what happened broke my heart. In the Friends section of this newsletter, I will tell you how you can donate to the people affected by this terrible event if you want.

Researchers have used generative AI to reconstruct “high-quality” video from brain activity, a new study reports.

Researchers Jiaxin Qing, Zijiao Chen, and Juan Helen Zhou from the National University of Singapore and The Chinese University of Hong Kong used fMRI data and the text-to-image AI model Stable Diffusion to create a model called MinD-Video that generates video from the brain readings. Their paper describing the work was posted to the arXiv preprint server last week.

The demonstration on the paper’s companion website shows side-by-side comparisons of the videos shown to subjects and the AI-generated videos created from their brain activity. The two sets of videos differ only slightly and, for the most part, contain similar subjects and color palettes.
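For readers curious how fMRI readings might plug into a Stable-Diffusion-style model at all, here is a minimal, hypothetical PyTorch sketch of the general idea: a small encoder maps an fMRI scan into the same embedding shape that the diffusion model’s cross-attention normally receives from a text prompt, so the brain signal can stand in for the prompt. The `FMRIEncoder` module, its dimensions, and the training note in the comments are illustrative assumptions, not the actual MinD-Video architecture.

```python
# Hypothetical sketch: conditioning a Stable-Diffusion-style model on fMRI data.
# Module names and dimensions are illustrative assumptions, not MinD-Video's design.
import torch
import torch.nn as nn


class FMRIEncoder(nn.Module):
    """Maps a flattened fMRI volume to a (77, 768) tensor, the shape Stable
    Diffusion's cross-attention usually receives from the CLIP text encoder."""

    def __init__(self, n_voxels: int = 4500, seq_len: int = 77, dim: int = 768):
        super().__init__()
        self.seq_len, self.dim = seq_len, dim
        self.net = nn.Sequential(
            nn.Linear(n_voxels, 2048),
            nn.GELU(),
            nn.Linear(2048, seq_len * dim),
        )

    def forward(self, fmri: torch.Tensor) -> torch.Tensor:
        # fmri: (batch, n_voxels) -> conditioning tensor of shape (batch, 77, 768)
        return self.net(fmri).view(-1, self.seq_len, self.dim)


# Training idea (simplified): align the fMRI embeddings with the CLIP embeddings
# of the video frames the subject was watching, then feed the fMRI embedding to
# the diffusion model in place of a text-prompt embedding.
encoder = FMRIEncoder()
dummy_scan = torch.randn(2, 4500)   # two fake fMRI samples
cond = encoder(dummy_scan)          # (2, 77, 768)
print(cond.shape)
```

In a real pipeline, a tensor like `cond` would replace the prompt embeddings handed to the diffusion sampler, and the encoder would be trained so that similar brain states map to similar conditioning vectors.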