The study shows that machine-learning models can help predict COVID-19 test outcomes.
Researchers have developed a new way to identify which features are most useful for determining COVID-19 test results. The research team, from Florida Atlantic University’s (FAU) College of Engineering and Computer Science in the U.S., used machine learning to predict positive or negative COVID-19 test results.
The most common techniques currently used to detect COVID-19 are blood tests, also called serology tests, and molecular tests. Because the two assessments measure different things (serology tests detect antibodies, while molecular tests detect the virus’s genetic material), their results can differ substantially.
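The article does not describe the team’s exact model or features, but the general idea of ranking predictive features can be illustrated with a short, hypothetical sketch: train a standard classifier on patient data and inspect its feature importances. The feature names and data below are invented for illustration and are not taken from the FAU study.

```python
# Illustrative sketch (not the FAU team's actual pipeline): train a classifier
# on hypothetical patient features and rank which ones contribute most to
# predicting a positive or negative test result.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical feature names, chosen for illustration only.
feature_names = ["age", "fever", "cough", "oxygen_saturation", "lymphocyte_count"]

rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(500, len(feature_names))), columns=feature_names)
y = rng.integers(0, 2, size=500)  # 1 = positive test, 0 = negative test

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# Rank features by importance to see which ones drive the prediction.
for name, score in sorted(zip(feature_names, model.feature_importances_),
                          key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```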
What if artificial intelligence could help an aspiring author write a novel? Or coach people to improve the quality of their writing? Could machines learn how to make jokes? Inspired by these questions, computer scientist Jiao Sun has been exploring the potential of AI-generated text as a PhD candidate at the University of Southern California (USC).
After a four-month internship at Alexa AI last spring, she is now starting her journey as an Amazon Machine Learning Fellow for the 2022–23 academic year and hopes to continue developing text-generation models that enhance the interaction between humans and AI.
While Sun is passionate about the potential of natural language generation, she also believes it’s important to develop tools that give humans more control over machine-generated content, and she is cautiously optimistic about the surge in popularity of text-generation models.
The forecast, first reported by Reuters, reflects how some in Silicon Valley are betting that the underlying technology will go far beyond splashy and sometimes flawed public demos.
OpenAI was most recently valued at $20 billion in a secondary share sale, one of the sources said. The startup has already inspired rivals and companies building applications atop its generative AI software, which includes the image maker DALL-E 2. OpenAI charges developers licensing its technology about a penny or a little more to generate 20,000 words of text, and about 2 cents to create an image from a written prompt, according to its website.
A spokesperson for OpenAI declined to comment on its financials and strategy. The company, which started releasing commercial products in 2020, has said its mission remains advancing AI safely for humanity.
In this episode, filmed in early 2021, Sam and Peter discuss the creation of OpenAI and GPT-3, what the future of OpenAI will bring to humanity, and the power of AI/Human collaboration.
Sam Altman is the Co-Founder and CEO of OpenAI and former president of Y Combinator. OpenAI is an artificial intelligence research laboratory that creates programs, such as GPT-3, ChatGPT, and DALL-E 2, for the benefit of humanity.
Human-like articulated neural avatars have many uses in telepresence, animation, and visual content production. For wide adoption, these avatars must be easy to create, easy to animate in new poses and viewpoints, able to render at photorealistic image quality, and easy to relight in novel environments. Existing techniques frequently train these neural avatars from monocular videos. While this approach allows animation at photorealistic image quality, the synthesized images remain tied to the lighting conditions of the training video. Other studies specifically address the relighting of human avatars, but they give the user no control over body pose; in addition, these methods often require multi-view images captured in a light stage for training, which is only feasible in controlled environments.
Some contemporary techniques attempt to relight dynamic humans in RGB videos, but they offer no control over body pose. Learning relightable neural avatars of moving people from monocular RGB videos captured in unconstrained environments is difficult, because the model must first capture the intricate articulations and geometry of the human body. Here, the researchers introduce the Relightable Articulated Neural Avatar (RANA) technique, which enables photorealistic human animation in any new body pose, viewpoint, and lighting condition. RANA needs only a brief monocular video clip of the person in their natural setting, clothing, and pose to produce the avatar; at inference time, only the target body pose and illumination information are required.
To enable relighting in new environments, the texture, geometry, and illumination information must be disentangled, which is a difficult problem to solve from RGB footage alone. To overcome these difficulties, the researchers first use a statistical human shape model, SMPL+D, to extract canonical coarse geometry and texture from the training frames. They then propose a convolutional neural network, trained on synthetic data, to remove the shading information from the coarse texture. Finally, they attach learnable latent features to the coarse geometry and texture and feed them to the proposed neural avatar architecture, which uses two convolutional networks to produce fine normal and albedo maps of the person under the target body pose.
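As a rough illustration of this last stage (not the authors’ released code), the sketch below shows two small convolutional decoders that turn pose-conditioned feature maps into an albedo map and a normal map, then combine them with a single directional light using simple Lambertian shading. The network sizes, input shapes, and shading model are all assumptions made for the example; the actual RANA networks and lighting representation differ.

```python
# A minimal sketch, assuming pose-conditioned feature maps rasterized from the
# coarse SMPL+D geometry; the layer sizes and the Lambertian shading model are
# illustrative assumptions, not the paper's exact architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_decoder(in_ch, out_ch):
    # Toy stand-in for one of the paper's convolutional networks.
    return nn.Sequential(
        nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, out_ch, 3, padding=1),
    )

class RelightableAvatarSketch(nn.Module):
    def __init__(self, feat_ch=32):
        super().__init__()
        self.albedo_net = conv_decoder(feat_ch, 3)  # RGB albedo map
        self.normal_net = conv_decoder(feat_ch, 3)  # surface normal map

    def forward(self, pose_features, light_dir):
        albedo = torch.sigmoid(self.albedo_net(pose_features))        # (B,3,H,W)
        normals = F.normalize(self.normal_net(pose_features), dim=1)  # unit normals
        # Simple Lambertian shading under a single directional light.
        light = F.normalize(light_dir, dim=1).view(-1, 3, 1, 1)
        shading = (normals * light).sum(dim=1, keepdim=True).clamp(min=0.0)
        return albedo * shading, albedo, normals

# Usage with dummy inputs (shapes are assumptions for the sketch).
model = RelightableAvatarSketch(feat_ch=32)
pose_features = torch.randn(1, 32, 128, 128)
light_dir = torch.tensor([[0.0, 0.0, 1.0]])
image, albedo, normals = model(pose_features, light_dir)
print(image.shape)  # torch.Size([1, 3, 128, 128])
```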