
Twitter CEO Elon Musk has bought around 10,000 graphics cards and is hiring AI experts to build a ChatGPT competitor within Twitter, Insider reports.

That’s despite the billionaire CEO repeatedly voicing concerns over AI chatbots like ChatGPT, and even signing an open letter calling for a six-month moratorium on developing AIs more advanced than OpenAI’s GPT-4.

Training a large language model like OpenAI’s highly popular AI chatbot takes a lot of computational power, which means Musk had to dig deep in his sizeable pockets — tens of millions of dollars, according to Insider — to finance the project.

The Hydromea Exray is a wireless underwater drone that uses optical communication instead of tether cables, making a wide range of applications far less cumbersome.



Automating remote inspection and monitoring of submerged assets.
CUT THE CORD

At Hydromea, they believe that the restrictive underwater robotics space requires a paradigm shift to help the rapidly growing Ocean Economy become a greener, more sustainable force on our planet, at a price point that is affordable and scalable.

Hydromea is convinced that investing in hardware and software technologies that allow us to CUT THE CORD is the necessary and much-awaited step in this direction. With that, it aims to disrupt the underwater inspection and monitoring market, making it significantly more efficient and affordable and enabling unprecedented access to submerged assets. Join us and be a part of the underwater robotics revolution!

Unlike other businesses pushing clean nuclear energy, Helion is working on a “pulsed non-ignition fusion system.”

Microsoft Corporation has placed a big bet on Helion by agreeing to purchase power generated by its nuclear fusion process. Helion is also backed by Sam Altman, the OpenAI CEO with whom Microsoft is spearheading the artificial intelligence (AI) race.

Nuclear fusion is the holy grail of clean energy as it promises the generation of power without the emission of carbon or hassles of radioactive nuclear waste.



Google announced it is rolling out generative AI to its core search engine.

Shortly after Google made several AI-related announcements at its annual developers conference, its parent company Alphabet’s stock shot up by 5% on Wednesday, reportedly adding $56 billion to its market value.

Kicking off the conference, CEO Sundar Pichai said, “As you may have heard, AI is having a very busy year. So we have lots to talk about,” and announced that the company is adding a Search Generative Experience (SGE) to its search engine by way of AI snapshots. Those who opt in to SGE will see an AI-powered snapshot of key information to consider, with links to dig deeper.



According to an announcement made at Google I/O, people who opt in to Search Generative Experience (SGE) will soon see “AI snapshots” appearing alongside results from Google Search.

MIT researchers developed a technique that captures images of an object from various angles and converts its surface into a virtual sensor for reading the reflections it captures.

Imagine seeing around corners or peeking beyond obstacles that once blocked your view. It’s like having X-ray vision. It might now be possible thanks to new research.

A group of brilliant researchers from MIT and Rice University unveiled an incredible computer vision technology that can change how we see the world. We could soon live in a world where shiny objects become extraordinary “cameras” that let us view our surroundings through their unique lenses.



Microsoft will also contribute its algorithm knowledge to make Builder.ai’s Natasha, an AI assistant, sound more human.

Microsoft Corporation has invested an undisclosed amount in Builder.ai, a no-code builder startup, as it looks to diversify its bets in the artificial intelligence (AI) game. Builder.ai lets users with no technical knowledge or experience in coding build their own apps and manage them.

Microsoft is already ahead in the AI game thanks to its partnership with OpenAI, the maker of the popular chatbot ChatGPT.



Thanks to advances in artificial intelligence (AI) chatbots and warnings by prominent AI researchers that we need to pause AI research lest it destroys society, people have been talking a little more about the ethics of artificial intelligence lately.

The topic is not new: Since people first imagined robots, some have tried to come up with ways of stopping them from seeking out the last remains of humanity hiding in a big field of skulls. Perhaps the most famous example of thinking about how to constrain technology so that it doesn’t destroy humanity comes from fiction: Isaac Asimov’s Laws of Robotics.

The laws, explored in Asimov’s works such as the short story “Runaround” and the collection I, Robot, are incorporated into all AI as a safety feature within those fictional worlds. They are not, as some on the Internet appear to believe, real laws, nor is there currently a way to implement such laws.

ChatGPT has put AI in the limelight. The NYT has this funny list. I can think of many more things, but it shows how interest is growing.

The public release of ChatGPT last fall kicked off a wave of interest in artificial intelligence. A.I. models have since snaked their way into many people’s everyday lives.

Despite their flaws, ChatGPT and other A.I. tools are helping people to save time at work, to code without knowing how to code, to make daily life easier or just to have fun.


Artificial intelligence models have found their way into many people’s lives, for work and for fun.

At a conference at New York University in March, philosopher Raphaël Millière of Columbia University offered yet another jaw-dropping example of what LLMs can do. The models had already demonstrated the ability to write computer code, which is impressive but not too surprising because there is so much code out there on the Internet to mimic. Millière went a step further and showed that GPT can execute code, too. The philosopher typed in a program to calculate the 83rd number in the Fibonacci sequence. “It’s multistep reasoning of a very high degree,” he says. And the bot nailed it. When Millière asked directly for the 83rd Fibonacci number, however, GPT got it wrong, which suggests the system wasn’t just parroting the Internet. Rather, it was performing its own calculations to reach the correct answer.
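For context, the kind of program described here is easy to reproduce. The exact code Millière typed is not given in the source, but a minimal iterative version looks like this:

```python
# Illustrative sketch: a simple Fibonacci program of the kind Millière
# asked GPT to execute (the exact code he used is not given in the source).

def fib(n):
    """Return the n-th Fibonacci number, with fib(1) == fib(2) == 1."""
    a, b = 0, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return b

print(fib(83))  # prints 99194853094755497
```

Stepping through 82 additions like this, rather than recalling a memorized answer, is the “multistep reasoning” Millière refers to.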

Although an LLM runs on a computer, it is not itself a computer. It lacks essential computational elements, such as working memory. In a tacit acknowledgement that GPT on its own should not be able to run code, its inventor, the tech company OpenAI, has since introduced a specialized plug-in—a tool ChatGPT can use when answering a query—that allows it to do so. But that plug-in was not used in Millière’s demonstration. Instead he hypothesizes that the machine improvised a memory by harnessing its mechanisms for interpreting words according to their context—a situation similar to how nature repurposes existing capacities for new functions.

This impromptu ability demonstrates that LLMs develop an internal complexity that goes well beyond a shallow statistical analysis. Researchers are finding that these systems seem to achieve genuine understanding of what they have learned. In one study presented last week at the International Conference on Learning Representations (ICLR), doctoral student Kenneth Li of Harvard University and his AI researcher colleagues—Aspen K. Hopkins of the Massachusetts Institute of Technology, David Bau of Northeastern University, and Fernanda Viégas, Hanspeter Pfister and Martin Wattenberg, all at Harvard—spun up their own smaller copy of the GPT neural network so they could study its inner workings. They trained it on millions of matches of the board game Othello by feeding in long sequences of moves in text form. Their model became a nearly perfect player.
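The training setup described above — feeding the model long move sequences as text — can be sketched as a simple tokenizer. This is an illustrative assumption, not the study’s actual code: it maps each of Othello’s 60 playable squares (the four center squares are occupied from the start) to a token ID, which is the kind of vocabulary a small GPT would be trained on.

```python
# Illustrative sketch (not Li et al.'s code): serializing Othello games
# into token sequences before training a small GPT on them.

# Othello uses an 8x8 board; the four center squares (d4, d5, e4, e5)
# are filled at the start, leaving 60 squares where a move can be played.
CENTER = {"d4", "d5", "e4", "e5"}
SQUARES = [f"{col}{row}" for row in range(1, 9) for col in "abcdefgh"
           if f"{col}{row}" not in CENTER]

TOKEN_ID = {sq: i for i, sq in enumerate(SQUARES)}  # 60-token vocabulary

def encode_game(moves):
    """Turn a move list like ['d3', 'c5', ...] into token IDs."""
    return [TOKEN_ID[m] for m in moves]

def decode_game(ids):
    """Invert encode_game."""
    return [SQUARES[i] for i in ids]

game = ["d3", "c5", "f6", "f5"]  # a common Othello opening
ids = encode_game(game)
assert decode_game(ids) == game
print(len(SQUARES), ids)
```

Millions of such sequences, with no notion of a board ever supplied, were enough for the researchers’ model to become a nearly perfect player — which is what made its internal representations worth probing.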

This is awesome and more than a little scary. You can actually try it on a YouTube video and then ask questions about the content…


Introducing LeMUR, AssemblyAI’s new framework for applying powerful LLMs to transcribed speech.
With a single line of code, LeMUR can quickly process audio transcripts for up to 10 hours’ worth of audio content — roughly 150k tokens — for tasks like summarization and question answering.

Sam Altman & Patrick Collison’s Interview — https://www.assemblyai.com/playground/v2/transcript/6gsem9pf…673f20247b.

LeMUR AI Details — https://www.assemblyai.com/blog/lemur-early-access/
