
Japan firm’s pioneering Moon landing fails

A Japanese startup attempting the first private landing on the Moon said Wednesday it had lost communication with its spacecraft and assumed the lunar mission had failed.

Ispace said that it could not establish communication with the unmanned Hakuto-R after its expected landing time, a frustrating end to a mission that began with a launch from the United States over four months ago.

“We have not confirmed communication with the lander,” a company official told reporters about 25 minutes after the expected landing.

GigaChat: Russia enters AI race with its ChatGPT rival

The AI is great at Russian but less so in foreign languages.

Russia is the latest entrant to the artificial intelligence (AI) race: the mostly state-owned Sberbank has announced the launch of GigaChat, a rival to the conversational chatbot ChatGPT. The move could heat up the competition among countries vying for leadership in a technology that has taken the world by storm.

Last November, OpenAI launched ChatGPT, an AI model that can hold human-like conversations with its users. As more and more people began using ChatGPT, the AI's versatility became widely known.

The case for Singularity Activism

New AI systems released in 2023 demonstrate remarkable properties that have taken most observers by surprise. Progress toward both positive and negative AI outcomes appears to have accelerated. This leads to five responses:

1.) “Yawn” — AI has been overhyped before, and is being overhyped again now. Let’s keep our attention on more tangible issues.

2.) “Full speed ahead with more capabilities” — Let’s get to the wonderful positive outcomes of AI as soon as possible, sidestepping those small-minded would-be regulators who would stifle all the innovation out of the industry.

This Harvard Law Professor is an Expert on Digital Technology


Jonathan L. Zittrain wears many hats. An expert on the Internet, digital technology, law, and public policy, he regularly contributes to public discussions about what digital tech is doing to us and what we should do about it—most recently around the governance of AI and the incentives that shape major social media platforms.

He holds several roles, all at Harvard, reflecting his many converging interests. He is a professor of international law at Harvard Law School, a professor of public policy at its Kennedy School, and a professor of computer science at the university’s John A. Paulson School of Engineering and Applied Sciences. He’s also cofounder and faculty director of Harvard’s Berkman Klein Center for Internet & Society.

In his various capacities, he has been tackling many sticky cyberpolicy issues over the past 25 years.

Internet Training Data Of ChatGPT Can Be Used For Non-Allied Purposes Including Privacy Intrusions, Frets AI Ethics And AI Law

Keep your eye on the prize, but meanwhile don’t lose sight of other nifty opportunities too. What am I talking about? During the famous Gold Rush era, eager prospectors sought the dreamy riches of unearthed gold. Turns out that very few actually struck it rich by discovering those prized gold nuggets. You might be surprised to know that while panning for gold, there was a possibility of finding other precious metals. The feverish desire for gold would sometimes overpower any willingness to mine the silver, mercury, and other ores that lay in plain sight along the way.


It all has to do with data, particularly data mined or scanned from the Internet that is then used principally to train generative AI apps.

OpenAI’s ChatGPT and its successor GPT-4 would not exist if it were not for all the data training undertaken to get the AI apps into shape for doing Natural Language Processing (NLP) and performing interactive conversations with humans. The data training entailed scanning various portions of the Internet; see my explanation at the link here. In the case of text-to-text or text-to-essay generative AI, the mainstay of ChatGPT, all kinds of text were scanned to ferret out patterns in how humans use words.
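As a toy illustration of what "ferreting out patterns in how humans use words" can mean at its very simplest (a hypothetical miniature for this article, nothing like OpenAI's actual, undisclosed pipeline), one can scan a text and count which words tend to follow which:

```python
from collections import Counter, defaultdict

# Tiny invented stand-in for "scanned Internet text".
corpus = "the cat sat on the mat . the dog sat on the rug ."

# Count word-pair (bigram) frequencies: a crude "pattern of word use".
follows = defaultdict(Counter)
words = corpus.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

# The most common continuation of "sat" in this corpus:
print(follows["sat"].most_common(1))  # -> [('on', 2)]
```

Real training replaces these literal counts with learned neural-network weights over vastly larger corpora, but the raw material is the same: text scanned from the Internet.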

I’ll say more about this in a moment.

Grimes Tells Fans To Deepfake Her Music, Will Split 50% Royalties With AI

In the wake of the AI-generated hit Heart on My Sleeve going viral with deepfakes of multi-platinum artists Drake and The Weeknd, pop star Grimes has invited her fans to create music with her voice.

On Sunday night she tweeted, “I’ll split 50% royalties on any successful AI generated song that uses my voice. Same deal as I would with any artist i collab with. Feel free to use my voice without penalty. I have no label and no legal bindings.”

She added that she’s open to anything anyone wants. “Im just curious what even happens and interested in being a Guinea pig,” she wrote, adding that she welcomes the open sourcing of art and an end to copyright.

Auto-GPT May Be The Strong AI Tool That Surpasses ChatGPT

Like many people, you may have had your mind blown recently by the capabilities of ChatGPT and other large language models like the new Bing or Google’s Bard.

For anyone who somehow hasn’t come across them — which is unlikely, as ChatGPT is reportedly the fastest-growing app of all time — here’s a quick recap:

Large language models, or LLMs, are machine learning models trained on huge text datasets, enabling them to understand and respond to human language in a very lifelike way.
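To make that definition concrete, here is a drastically simplified sketch of the idea (real LLMs use neural networks with billions of parameters, not lookup tables, and the tiny corpus below is invented for illustration): "training" records which word tends to follow which, and "responding" greedily emits the most likely continuation.

```python
from collections import Counter, defaultdict

# Invented miniature training corpus.
training_text = "models learn patterns from data . models learn patterns quickly ."

# "Training": record which word most often follows each word.
nxt = defaultdict(Counter)
toks = training_text.split()
for a, b in zip(toks, toks[1:]):
    nxt[a][b] += 1

def generate(word, steps=2):
    """Greedily emit the most likely next word, one step at a time."""
    out = [word]
    for _ in range(steps):
        if not nxt[out[-1]]:
            break  # no continuation seen in training
        out.append(nxt[out[-1]].most_common(1)[0][0])
    return " ".join(out)

print(generate("models"))  # -> models learn patterns
```

An actual LLM does the analogous thing over entire contexts rather than single words, which is what makes its responses feel lifelike rather than mechanical.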


Auto-GPT is a breakthrough technology that creates its own prompts and enables large language models to perform complex multi-step procedures. While it has potential benefits, it also raises ethical concerns about accuracy, bias, and potential sentience in AI agents.
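The self-prompting loop can be sketched roughly as follows. The `ask_llm` function here is a hypothetical stand-in for a real model call, returning canned replies so the sketch is runnable; Auto-GPT's actual implementation adds memory, tool use, and much more. The key idea is that the agent asks the model for a plan, then feeds each planned step back to the model as a new prompt it wrote for itself.

```python
def ask_llm(prompt):
    """Hypothetical stand-in for a call to a large language model."""
    if prompt.startswith("PLAN"):
        # A real model would produce a task list for the stated goal.
        return "1. research topic\n2. draft summary\nDONE"
    return f"result of: {prompt}"

def auto_agent(goal):
    """Auto-GPT-style loop: obtain a plan, then execute each step by
    turning it into a new, self-generated prompt."""
    plan = ask_llm("PLAN: " + goal)  # the agent writes its own task list
    results = []
    for step in plan.splitlines():
        if step.strip() == "DONE":
            break
        results.append(ask_llm(step))  # each step becomes a new prompt
    return results

print(auto_agent("summarize recent AI news"))
# -> ['result of: 1. research topic', 'result of: 2. draft summary']
```

Chaining model calls this way is what lets such agents carry out multi-step procedures without a human writing every prompt, and it is also why their accuracy and bias problems can compound across steps.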

The biggest fear with AI is fear itself | De Kai | TEDxSanMigueldeAllende

In this talk, De Kai examines how AI amplifies fear into an existential threat to society and humanity, and what we need to be doing about it. De Kai’s work across AI, language, music, creativity, and ethics centers on enabling cultures to interrelate. For pioneering contributions to machine learning of AIs like Google/Yahoo/Microsoft Translate, he was honored by the Association for Computational Linguistics as one of only seventeen Founding Fellows worldwide and by Debrett’s HK 100 as one of the 100 most influential figures of Hong Kong. De Kai is a founding Professor of Computer Science and Engineering at HKUST and Distinguished Research Scholar at Berkeley’s ICSI (International Computer Science Institute). His public campaign applying AI to show the impact of universal masking against Covid received highly influential mass media coverage, and he serves on the board of AI ethics think tank The Future Society. De Kai is also creator of one of Hong Kong’s best known world music collectives, ReOrientate, and was one of eight inaugural members named by Google to its AI ethics council. This talk was given at a TEDx event using the TED conference format but independently organized by a local community.

Networks of Silver Nanowires Appear to Learn And Remember Like The Human Brain

Over the past year or so, generative AI models such as ChatGPT and DALL-E have made it possible to produce vast quantities of apparently human-like, high-quality creative content from a simple series of prompts.

Though highly capable – far outperforming humans in big-data pattern recognition tasks in particular – current AI systems are not intelligent in the same way we are. AI systems aren’t structured like our brains and don’t learn the same way.

AI systems also use vast amounts of energy and resources for training (compared to our three-or-so meals a day). Their ability to adapt and function in dynamic, hard-to-predict and noisy environments is poor in comparison to ours, and they lack human-like memory capabilities.