
Elon Musk has ruled out a ‘winter’ for artificial intelligence and hinted the AI boom is just beginning

Elon Musk has ruled out the possibility of a winter for artificial intelligence, and hinted that the current boom is just beginning.

“There will not be a winter for AI, quite the opposite,” the business mogul tweeted on Sunday.

Musk’s tweet was in response to comments from Adam D’Angelo, the CEO of Quora, who said he expects to see “continual progress” from where AI currently stands all the way to artificial general intelligence (AGI) – without a period of time that feels like a “winter,” or downturn.

Sci-fi author says he wrote 97 books in 9 months using AI tools, including ChatGPT and Midjourney

Sci-fi author Tim Boucher says he’s created 97 books in nine months with the help of AI.

In an article for Newsweek, Boucher said he’d used AI image generator Midjourney to illustrate the books, and ChatGPT and Anthropic’s Claude for brainstorming and text generation.

Boucher told Insider he plans to get to “at least 1,000 books, if not beyond.”

Extremely Lightweight Wearable Robot WIM!!

Video I found on wearable robots.


Introducing WIM, an innovative solution that enhances your mobility:
- It provides easy and safe walking.
- It enables effective exercise.
- It guides and reinforces your gait performance.

Specifications:
- Weight: 1.4 kg
- Portable size: 24 cm x 10 cm
- World-best performance: 20% energy saving (level-ground walking)

www.wirobotics.com
[email protected]

#wearabledevices #robot #WearableRobot #Exoskeleton #wearablemobility

OpenAI is exploring collective decisions on AI, like Wikipedia entries

May 22 (Reuters) — ChatGPT’s creator OpenAI is testing how to gather broad input on decisions impacting its artificial intelligence, its president Greg Brockman said on Monday.

At AI Forward, an event in San Francisco hosted by Goldman Sachs Group Inc (GS.N) and SV Angel, Brockman discussed the broad contours of how the maker of the wildly popular chatbot is seeking regulation of AI globally.

One announcement he previewed is akin to the model of Wikipedia, which he said requires people with diverse views to coalesce and agree on the encyclopedia’s entries.

These small startups are making headway on A.I.’s biggest challenges

While much of what Aligned AI is doing is proprietary, Gorman says that at its core Aligned AI is working on how to give generative A.I. systems a much more robust understanding of concepts, an area where these systems continue to lag humans, often by a significant margin. “In some ways [large language models] do seem to have a lot of things that seem like human concepts, but they are also very fragile,” Gorman says. “So it’s very easy, whenever someone brings out a new chatbot, to trick it into doing things it’s not supposed to do.” Gorman says that Aligned AI’s intuition is that methods that make chatbots less likely to generate toxic content will also be helpful in making sure that future A.I. systems don’t harm people in other ways. The idea that work on “the alignment problem” (the question of how to align A.I. with human values so it doesn’t kill us all, and the problem from which Aligned AI takes its name) could also help address dangers from A.I. that are here today, such as chatbots that produce toxic content, is controversial. Many A.I. ethicists see talk of “the alignment problem,” which is what people who say they work on “A.I. safety” often name as their focus, as a distraction from the important work of addressing present dangers from A.I.

But Aligned AI’s work is a good demonstration of how the same research methods can help address both kinds of risk. Giving A.I. systems a more robust conceptual understanding is something we all should want. A system that understands the concept of racism or self-harm can be better trained not to generate toxic dialogue; a system that understands the concept of avoiding harm and the value of human life would, hopefully, be less likely to kill everyone on the planet.

Aligned AI and Xayn are also good examples of the promising ideas coming out of smaller companies in the A.I. ecosystem. OpenAI, Microsoft, and Google, while clearly the biggest players in the space, may not have the best technology for every use case.

The thing missing from generative AI is the ‘why’


According to Google, Meta and a number of other platforms, generative AI tools are the basis of the next era in creative testing and performance. Meta bills its Advantage+ campaigns as a way to “use AI to eliminate the manual steps of ad creation.”

Provide a platform with all of your assets, from your website and logos to product images and brand colors, and it can make new creatives, test them, and dramatically improve results.

AI Reconstructs ‘High-Quality’ Video Directly from Brain Readings in Study

Researchers have used generative AI to reconstruct “high-quality” video from brain activity, a new study reports.

Researchers Jiaxin Qing, Zijiao Chen, and Juan Helen Zhou from the National University of Singapore and The Chinese University of Hong Kong used fMRI data and the text-to-image AI model Stable Diffusion to create a model called MinD-Video that generates video from the brain readings. Their paper describing the work was posted to the arXiv preprint server last week.
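As described in the article, the approach pairs an encoder that turns fMRI readings into conditioning embeddings with a Stable Diffusion-based generator adapted to produce video. The sketch below is only a rough illustration of that two-stage shape, not the authors' implementation: the module names, dimensions, and the toy linear "generator" standing in for the diffusion model are all assumptions made for the example.

```python
# Conceptual sketch only: toy module names, dimensions, and a linear stand-in
# for the diffusion generator are illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn

class FMRIEncoder(nn.Module):
    """Maps one window of fMRI voxel readings to a sequence of conditioning embeddings."""
    def __init__(self, n_voxels=1024, n_tokens=16, embed_dim=256):
        super().__init__()
        self.n_tokens, self.embed_dim = n_tokens, embed_dim
        self.net = nn.Sequential(
            nn.Linear(n_voxels, 512),
            nn.GELU(),
            nn.Linear(512, n_tokens * embed_dim),
        )

    def forward(self, fmri):                       # fmri: (batch, n_voxels)
        z = self.net(fmri)
        return z.view(-1, self.n_tokens, self.embed_dim)

class ToyVideoGenerator(nn.Module):
    """Stand-in for the diffusion-based video generator: consumes the fMRI
    embeddings (where a text-to-image model would take text embeddings) and
    emits a short clip of RGB frames."""
    def __init__(self, embed_dim=256, frames=4, height=32, width=32):
        super().__init__()
        self.frames, self.h, self.w = frames, height, width
        self.head = nn.Linear(embed_dim, frames * 3 * height * width)

    def forward(self, cond):                       # cond: (batch, n_tokens, embed_dim)
        pooled = cond.mean(dim=1)                  # collapse the token sequence
        clip = self.head(pooled)
        return clip.view(-1, self.frames, 3, self.h, self.w)

encoder, generator = FMRIEncoder(), ToyVideoGenerator()
fake_scan = torch.randn(2, 1024)                   # random stand-in for two fMRI samples
clip = generator(encoder(fake_scan))
print(clip.shape)                                  # torch.Size([2, 4, 3, 32, 32])
```

In the actual system the second stage is an augmented Stable Diffusion model rather than a single linear head; the sketch only mirrors the interface, in which brain-derived embeddings take the place of the usual text conditioning.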

Their demonstration on the paper’s corresponding website shows the videos that were shown to subjects side by side with the AI-generated videos reconstructed from their brain activity. The differences between the two are slight; for the most part, they contain similar subjects and color palettes.