
New Kurzweil video, September 17, 2022


Ray Kurzweil is an author, inventor, and futurist.

Yesterday, California-based AI firm Adept announced Action Transformer (ACT-1), an AI model that can perform actions in software like a human assistant when given high-level written or verbal commands. It can reportedly operate web apps and perform intelligent searches on websites while clicking, scrolling, and typing in the right fields as if it were a person using the computer.

In a demo video tweeted by Adept, the company shows someone typing, “Find me a house in Houston that works for a family of 4. My budget is 600K” into a text entry box. Upon submitting the task, ACT-1 automatically browses Redfin.com in a web browser, clicking the proper regions of the website, typing a search entry, and changing the search parameters until a matching house appears on the screen.

Another demonstration video on Adept’s website shows ACT-1 operating Salesforce with prompts such as “add Max Nye at Adept as a new lead” and “log a call with James Veel saying that he’s thinking about buying 100 widgets.” ACT-1 then clicks the right buttons, scrolls, and fills out the proper forms to finish these tasks. Other demo videos show ACT-1 navigating Google Sheets, Craigslist, and Wikipedia through a browser.
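The demos above all follow the same pattern: a high-level natural-language command is reduced to a sequence of low-level UI actions (clicks, typed text, scrolls). Adept has not published ACT-1's architecture or action space, so the sketch below is purely illustrative: the `UIAction` type, the `plan_actions` function, and the rule-based "planner" are invented stand-ins for what, in the real system, a large model would predict from the screen contents and the instruction.

```python
from dataclasses import dataclass

@dataclass
class UIAction:
    kind: str       # hypothetical action space: "click", "type", or "scroll"
    target: str     # a UI element or field name
    text: str = ""  # text to enter, for "type" actions

def plan_actions(command: str) -> list[UIAction]:
    """Toy planner: a real model would predict these actions, not match strings."""
    if "new lead" in command.lower():
        # Extract the lead's name from a command like "add <name> at <company> ..."
        name = command.split("add ")[1].split(" at ")[0]
        return [
            UIAction("click", "Leads tab"),
            UIAction("click", "New"),
            UIAction("type", "Name field", name),
            UIAction("click", "Save"),
        ]
    return []

actions = plan_actions("add Max Nye at Adept as a new lead")
for a in actions:
    print(a.kind, a.target, a.text)
```

The interesting engineering problem, which this sketch sidesteps entirely, is grounding each abstract action in actual pixels and DOM elements of an app the model has never seen.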

There was once a time, not so long ago, when scientists like Casey Holliday needed scalpels, scissors and even their own hands to conduct anatomical research. But now, with recent advances in technology, Holliday and his colleagues at the University of Missouri are using artificial intelligence (AI) to see inside an animal or a person—down to a single muscle fiber—without ever making a cut.

Holliday, an associate professor of pathology and anatomical sciences, said his lab in the MU School of Medicine is one of only a handful of labs in the world currently using this high-tech approach.

AI can teach computer programs to identify a muscle in an image, such as a CAT scan. Then, researchers can use that data to develop detailed 3D computer models of muscles to better understand how they work together in the body for motor control, Holliday said.
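At its core, this kind of image analysis is per-voxel classification: every point in the scan is labeled as belonging to a structure or to background, and the labeled region can then be measured or turned into a 3D model. The toy sketch below uses simple intensity thresholding on a made-up 2D "scan" as the simplest stand-in for what a trained segmentation network does; the intensity values and voxel volume are invented for illustration.

```python
import numpy as np

# A tiny fake 2D "scan": bright values represent tissue of interest.
scan = np.array([
    [0.1, 0.1, 0.1, 0.1],
    [0.1, 0.8, 0.9, 0.1],
    [0.1, 0.7, 0.8, 0.1],
    [0.1, 0.1, 0.1, 0.1],
])

# Per-pixel classification by threshold (a trained model would do this step).
muscle_mask = scan > 0.5

# Once segmented, the region can be quantified, e.g. its volume.
voxel_volume = 0.5  # hypothetical mm^3 per voxel
volume = muscle_mask.sum() * voxel_volume

print(muscle_mask.sum())  # 4 pixels classified as "muscle"
print(volume)             # 2.0
```

A real pipeline would run a 3D segmentation model over the full scan and feed the resulting mask into surface-reconstruction software to build the muscle models described above.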

Recent advancements in the development of machine learning and optimization techniques have opened new and exciting possibilities for identifying suitable molecular designs, compounds, and chemical candidates for different applications. Optimization techniques, some of which are based on machine learning algorithms, are powerful tools that can be used to select optimal solutions for a given problem among a typically large set of possibilities.

Researchers at Colorado State University and the National Renewable Energy Laboratory have been applying state-of-the-art molecular optimization models to different real-world problems that entail identifying new and promising molecular designs. In their most recent study, featured in Nature Machine Intelligence, they specifically applied a newly developed, open-source optimization framework to the task of identifying viable organic radicals for aqueous flow batteries, energy devices that convert chemical energy into electricity.

“Our project was funded by an ARPA-E program that was looking to shorten how long it takes to develop new energy materials using machine learning techniques,” Peter C. St. John, one of the researchers who carried out the study, told TechXplore. “Finding new candidates for redox flow batteries was an interesting extension of some of our previous work, including a paper published in Nature Communications and another in Scientific Data, both looking at organic radicals.”
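The core loop in this kind of work is simple to state: score a large pool of candidate molecules with a learned property predictor (a "surrogate" model) and promote the best-scoring ones for expensive simulation or synthesis. The miniature sketch below illustrates only that loop; the candidate names are real classes of redox-active molecules, but the scoring function is entirely made up and is not the researchers' model.

```python
def surrogate_score(candidate: str) -> float:
    """Pretend property predictor: a real one is a trained ML model
    estimating, e.g., redox potential or solubility from the structure."""
    return (sum(ord(c) for c in candidate) % 100) / 100

# A (tiny) candidate pool; real screens cover thousands to millions.
candidates = ["TEMPO", "viologen", "quinone", "ferrocyanide", "nitroxide"]

# Rank the pool by predicted property and keep the most promising.
ranked = sorted(candidates, key=surrogate_score, reverse=True)
print(ranked[:3])  # top 3 by (fake) predicted score
```

The value of the ML surrogate is speed: it replaces a quantum-chemistry calculation per candidate with a millisecond prediction, which is what shortens the materials-discovery timeline the ARPA-E program targets.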

These 15 robots may demonstrate that the concept is viable.

Personal robots have been a common trope in sci-fi for many decades. Their apparent plausibility has made many sci-fi enthusiasts wonder when they may become a reality.

Some robots with personal robot-like features have been developed, but are they personal robots?

Would you like a robot to assist you in the house? Perhaps another for personal security? Well, you can’t help but notice that there appears to be a complete lack of them.

Have you ever looked at something and been creeped out by its almost, but not quite, human-like appearance? Be honest: does the Sophia robot creep you out? People find things that are human-like, but not quite human, creepy. The feeling of creepiness can be triggered by robots, CGI animation, theme-park animatronics, dolls, or even digital assistants. This concept is called the "uncanny valley" and, believe it or not, it is a particularly significant reason why many AI projects fail.

The uncanny valley describes the relationship between an object's degree of resemblance to a human being and the human observer's emotional response to that object.
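The relationship is usually drawn as a curve: affinity rises with human-likeness, plunges sharply when something looks almost (but not quite) human, then recovers for a fully convincing human appearance. Mori's original account was qualitative, so the function below is a made-up toy model whose only purpose is to reproduce that dip.

```python
def affinity(likeness: float) -> float:
    """Toy uncanny-valley curve: likeness in [0, 1], affinity is unitless.
    The dip location (0.85), width (0.1), and depth (2.0) are invented."""
    valley = -2.0 * max(0.0, 1 - abs(likeness - 0.85) / 0.1)
    return likeness + valley

# Affinity grows with likeness, collapses near 0.85, recovers at 1.0.
for x in (0.2, 0.5, 0.85, 1.0):
    print(round(affinity(x), 2))
```

Sophia-style robots arguably sit near the bottom of this dip: human enough to trigger comparison with a real face, not human enough to pass it.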


Why is AI so creepy sometimes?

Midjourney is one of the leading drivers of the emerging technology of using artificial intelligence (AI) to create visual imagery from text prompts. The San Francisco-based startup recently made news as the engine behind the artwork that won an award in a Colorado state fair competition, and that’s unlikely to be the last complicated issue that AI art will face in the coming years.

Midjourney differentiates itself from others in the space by emphasizing painterly aesthetics in the images it produces.


Serial entrepreneur David Holz explains the goals and methods of the revolutionary text-to-image platform and his vision for the future of human imagination.

Earlier this summer, a piece generated by an AI text-to-image application won a prize in a state fair art competition, prying open a Pandora’s Box of issues about the encroachment of technology into the domain of human creativity and the nature of art itself.


Art professionals are increasingly concerned that text-to-image platforms will render hundreds of thousands of well-paid creative jobs obsolete.

From assembly lines to warehouses, robots have been used to automate processes at work for decades. Today, the demand for automation is growing rapidly, but there's one big problem: today's robots are expensive to build and complicated to set up. They have complex hardware and need programming by skilled engineers to perform specific tasks. To meet demand in an increasingly automated world, robotic arms have to be more affordable, lighter weight, and easier to use.

Ally Robotics, a startup specializing in AI-powered robotic arms, is working on doing just that. Ally is ushering in a new era of robotics—and giving early investors a unique opportunity to join the golden age.