
Researchers from EPFL have resolved a long-standing debate surrounding laser additive manufacturing processes with a pioneering approach to defect detection.

The progression of laser additive manufacturing, which involves 3D printing metallic objects using powders and lasers, has often been hindered by unexpected defects. Traditional monitoring methods, including machine learning algorithms, have shown significant limitations: they often overlook defects or misinterpret them, making precision manufacturing elusive and barring the technique from essential industries such as aeronautics and automotive manufacturing.

But what if it were possible to detect defects in real time, based on differences between the sound the printer makes during a flawless print and during one with irregularities? Until now, detecting defects this way was deemed unreliable. However, researchers at the Laboratory of Thermomechanical Metallurgy (LMTM) at EPFL’s School of Engineering have successfully challenged this assumption.
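The article does not describe the researchers' actual method, but the core idea of telling apart a flawless print from a defective one by its sound can be sketched in a few lines. The sketch below is purely illustrative: the sample rate, frequency band, threshold, and the notion that defects show up as extra high-frequency energy are all assumptions standing in for a trained classifier.

```python
import numpy as np

SAMPLE_RATE = 44_100  # Hz, assumed microphone sampling rate

def band_energy(signal, low_hz, high_hz):
    """Fraction of the signal's spectral energy in a frequency band."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / SAMPLE_RATE)
    in_band = (freqs >= low_hz) & (freqs <= high_hz)
    return spectrum[in_band].sum() / spectrum.sum()

def looks_defective(signal, threshold=0.2):
    """Flag a window whose high-frequency energy share exceeds a
    made-up threshold -- a stand-in for a real learned model."""
    return band_energy(signal, 8_000, 20_000) > threshold

# Synthetic demo: a clean 1 kHz machine hum vs. the same hum plus a
# 12 kHz component standing in for a hypothetical defect signature.
t = np.arange(0, 0.1, 1.0 / SAMPLE_RATE)
clean = np.sin(2 * np.pi * 1_000 * t)
defect = clean + 0.8 * np.sin(2 * np.pi * 12_000 * t)

print(looks_defective(clean), looks_defective(defect))
```

In practice a real-time system would slide such a window over the live audio stream and feed richer features to a classifier, but the principle is the same: the two acoustic signatures differ measurably in their spectra.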

Recent advances allow imaging of neurons inside freely moving animals. However, to decode circuit activity, these imaged neurons must be computationally identified and tracked. This becomes particularly challenging when the brain itself moves and deforms inside an organism’s flexible body, e.g., in a worm. Until now, the scientific community has lacked the tools to address the problem.

Now, a team of scientists from EPFL and Harvard has developed a pioneering AI method to track neurons inside moving and deforming animals. The study, published in Nature Methods, was led by Sahand Jamal Rahi at EPFL’s School of Basic Sciences.

The new method is based on a convolutional neural network (CNN), a type of AI trained to recognize and understand patterns in images. It relies on a process called “convolution,” which examines small parts of the picture, such as edges, colors, or shapes, one piece at a time, and then combines that information to make sense of the image and identify objects or patterns.
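The "small parts at a time" idea can be shown concretely. The toy sketch below (not the authors' code) implements a single 2D convolution by hand: a small kernel slides over an image, and each output value summarizes one local patch. The edge-detecting kernel and toy image are invented for illustration.

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide a small kernel over the image, producing one output
    value per position (valid padding, stride 1)."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = image[i:i + kh, j:j + kw]  # one small part of the picture
            out[i, j] = np.sum(patch * kernel)
    return out

# A vertical-edge detector: it responds strongly only where brightness
# changes from left to right, and stays at zero in flat regions.
edge_kernel = np.array([[1.0, 0.0, -1.0],
                        [1.0, 0.0, -1.0],
                        [1.0, 0.0, -1.0]])

# Toy image: dark on the left half, bright on the right half.
image = np.zeros((5, 5))
image[:, 3:] = 1.0

response = convolve2d(image, edge_kernel)
print(response)
```

A CNN stacks many such convolutions, learning the kernel values from data instead of hand-crafting them, so early layers pick up edges and later layers combine them into larger patterns.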

Geoffrey Miller is a professor of evolutionary psychology at the University of New Mexico, a researcher and an author.

Artificial Intelligence possesses the capability to process information thousands of times faster than humans. It’s opened up massive possibilities. But it’s also opened up huge debate about the safety of creating a machine which is more intelligent and powerful than we are. Just how legitimate are the concerns about the future of AI?

The potential impact of generative AI on the economy, society, and work is polarizing, swinging from the positive benefits of a technological revolution to doomsday scenarios. The authors have come to think about this issue as points on a spectrum and have created a sports analogy to help think about it: AI tools can range from steroids, to sneakers, to a coach, each representing a different relationship between human users and the technology. Steroids elevate short-term performance, but leave you worse off in the long term. AI-powered tools can instead be used to augment people’s skills and make them more productive — much like a good running sneaker. On the most desirable end of the spectrum, AI-powered tools can be used like a coach that improves people’s own capabilities. This framework can be used to help conceptualize how we might craft AI-based tools that enhance rather than diminish human capabilities.

“A Sports Analogy for Understanding Different Ways to Use AI,” by Jake M. Hofman, Daniel G. Goldstein, and David M. Rothschild (Harvard Business Review Digital Article, December 2023).

Will next-gen tools be used as a steroid, sneaker, or coach?

Meta’s chief scientist sees the current state of the AI industry as an “ongoing war”, claiming that NVIDIA’s CEO Jensen Huang is supplying the weapons for it.

Meta’s Chief Scientist Believes NVIDIA Is Fueling AI Development, but Says It Is Far From Human-Level Intelligence

This coverage offers a somewhat one-sided view of how professionals like Yann LeCun perceive the AI industry. Still, his comments are notable, given that he considers the current state of artificial intelligence far from resembling human-level capabilities.

The “bot development platform” will be launched as a public beta by the end of the month, according to an internal memo seen by the Post.

The move aligns with the company’s new strategic vision to “explore new generative AI products and how they can integrate with the existing ones”, the companywide notice said.

The social media giant has already been working on its own text-to-image generator similar to Midjourney, according to a person familiar with the matter.