With the advent of generative image AI, a new discipline is emerging: prompt engineering. Search engines like Krea provide inspiration.
Prompts are the short text descriptions that an image AI like DALL-E 2, Midjourney or Stable Diffusion uses to generate an image.
Often it is difficult to put the idea you have in mind into fitting words. Moreover, the AI may interpret instructions differently than you intend. And there are plenty of modifiers, for example for framing or style, that can lead to very different results for an otherwise identical prompt.
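As a small illustration of how such modifiers are typically combined, prompts for these models are often written as a comma-separated list of a subject plus style and framing terms. The helper below is purely illustrative and not part of any model's API.

```python
# Illustrative sketch: assembling a prompt from a subject and style modifiers,
# the common comma-separated idiom used with text-to-image models.

def build_prompt(subject, modifiers=()):
    """Join a subject with comma-separated modifiers."""
    return ", ".join([subject, *modifiers])

base = "a lighthouse at dusk"
styled = build_prompt(base, ["oil painting", "wide-angle shot", "soft lighting"])
# styled -> "a lighthouse at dusk, oil painting, wide-angle shot, soft lighting"
```

Swapping "oil painting" for "photorealistic, 35mm film" can yield a radically different image from the same subject, which is what makes prompt search engines useful as inspiration.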
ERNIE-ViLG, the new text-to-image AI developed by Baidu can generate images that show Chinese objects and celebrities more accurately than existing AIs. But a built-in censorship mechanism will filter out politically sensitive words.
Scientific publishers such as the American Association for Cancer Research (AACR) and Taylor & Francis have begun attempting to detect fraud in academic paper submissions with an AI image-checking program called Proofig, reports The Register. Proofig, a product of an Israeli firm of the same name, aims to help use “artificial intelligence, computer vision and image processing to review image integrity in scientific publications,” according to the company’s website.
During a trial that ran from January 2021 to May 2022, AACR used Proofig to screen 1,367 papers accepted for publication, according to The Register. Of those, 208 papers required author contact to clear up issues such as mistaken duplications, and four papers were withdrawn.
In particular, many journals need help detecting image duplication fraud in Western blots, which are a specific style of protein-detection imagery consisting of line segments of various widths. Subtle differences in a blot’s appearance can translate to dramatically different conclusions about test results, and many cases of academic fraud have seen unscrupulous researchers duplicate, crop, stretch, and rotate Western blots to make it appear like they have more (or different) data than they really do. Detecting duplicate images can be tedious work for human eyes, which is why some firms like Proofig and ImageTwin, a German firm, are attempting to automate the process.
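Proofig's and ImageTwin's algorithms are proprietary, but a common building block for automated duplicate detection is a perceptual hash: downscale an image to a tiny grid, threshold against its mean brightness, and compare the resulting bit fingerprints. The sketch below (NumPy only; `average_hash` and `hamming` are illustrative names, not any vendor's API) shows the idea.

```python
import numpy as np

def average_hash(img, hash_size=8):
    """Downscale a grayscale image (2D array) to hash_size x hash_size by
    box-averaging, then threshold against the mean to get a bit fingerprint."""
    h, w = img.shape
    bh, bw = h // hash_size, w // hash_size
    # Crop so dimensions divide evenly, then average each block.
    small = img[:bh * hash_size, :bw * hash_size]
    small = small.reshape(hash_size, bh, hash_size, bw).mean(axis=(1, 3))
    return (small > small.mean()).flatten()

def hamming(a, b):
    """Number of differing bits; low values suggest near-duplicate images."""
    return int(np.count_nonzero(a != b))

# Toy usage: a horizontal gradient versus its mirror image.
img = np.tile(np.arange(64.0), (64, 1))
fingerprint = average_hash(img)
```

An identical or lightly rescaled copy hashes to (nearly) the same bits, while a genuinely different image does not; production tools combine many such cues with computer-vision matching to flag rotated or stretched duplicates for human review.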
Ameca, a highly realistic android, has now been upgraded to include GPT-3, one of the largest neural networks and language prediction models.
Back in December 2021, UK-based Engineered Arts revealed what it described as “the most advanced android ever built” – a machine with strikingly lifelike motions and facial expressions. Since then, the company has been working to upgrade Ameca (as she is called) with speech and other capabilities.
In the video demonstration below, automated voice recognition has been combined with GPT-3, a large neural network and language prediction model that makes use of 175 billion parameters. This allows Ameca to recognise what people are saying and respond to questions. Before speaking, her output is fed to an online text-to-speech service, which generates the voice and visemes for lip sync timing.
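The pipeline described above can be sketched as three stages chained together. All three stage functions below are hypothetical placeholders standing in for the real cloud services (ASR, the GPT-3 API, and a TTS engine that also returns viseme timings for lip sync); only the overall data flow reflects the demonstration.

```python
# Minimal sketch of the ASR -> language model -> TTS pipeline described above.
# Every stage here is a hypothetical placeholder, not a real service call.

def transcribe(audio: bytes) -> str:
    """Placeholder for automatic speech recognition."""
    return "What is your name?"

def generate_reply(text: str) -> str:
    """Placeholder for the GPT-3 completion call (175B-parameter model)."""
    return "My name is Ameca."

def synthesize(text: str):
    """Placeholder for TTS; the real service returns audio plus visemes
    (mouth shapes with timestamps) used to drive the robot's lip sync."""
    visemes = [(word, idx * 0.3) for idx, word in enumerate(text.split())]
    return b"<audio-bytes>", visemes

def respond(audio: bytes):
    """Chain the stages: hear a question, generate a reply, voice it."""
    question = transcribe(audio)
    reply = generate_reply(question)
    audio_out, visemes = synthesize(reply)
    return reply, visemes
```

The design point is that the robot itself does none of the heavy computation: recognition, language modelling, and voice synthesis all run as external services, with only the motion control on board.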
The device can provide high-resolution 3D imaging of the brain's neural networks.
Researchers can now view the mouse brain through the skull thanks to a new holographic microscope. A team led by Associate Director Choi Wonshik of the Center for Molecular Spectroscopy and Dynamics within the Institute for Basic Science, together with Professor Kim Moonseok of The Catholic University of Korea and Professor Choi Myunghwan of Seoul National University, developed the new type of holographic microscope.
The results were published in Science Advances on July 27.
Is artificial intelligence (AI) as smart as humans, or is it smarter? Scientists estimate that the human brain takes 25 years to reach full maturity, but new research claims that the AI used by Elon Musk's Tesla could reach an equivalent level in only 17 years.
Researchers have long predicted that artificial intelligence will eventually surpass human intelligence, although there are different predictions as to when that will happen.
AI at the Edge, NAD-Enhancing Drugs, and Laser Beam Toting Sharks!! — Discovering, Enabling & Transitioning Technology For Special Operations Forces — Lisa R. Sanders, Director of Science and Technology for Special Operations Forces, USSOCOM.
Lisa R. Sanders is the Director of Science and Technology for Special Operations Forces, Acquisition, Technology & Logistics (SOF AT&L), U.S. Special Operations Command (USSOCOM — https://www.socom.mil/), located at MacDill Air Force Base, Florida, where she is responsible for all research and development funded activities — https://www.socom.mil/SOF-ATL/Pages/eSOF_cap_of_interest.aspx.
Ms. Sanders has over 30 years of civilian Federal service. She entered Federal Service as an Electronics Engineer at Naval Avionics Center in Indianapolis, Indiana, where she served in quality engineering, production engineering and program management. In 1996, she transferred to Naval Air Warfare Center and Naval Air Systems Command (NAVAIR), Patuxent River, Maryland, serving as an Electronics Engineer and Program Manager for the E-2C Hawkeye aircraft. In 2003, she assumed responsibility for the production and modification of the CV-22 (a vertical takeoff and landing aircraft). During her time at NAVAIR, she managed one of the first Multi-Year Procurements, and executed the modification and delivery of CV-22 production and developmental test aircraft.
Ms. Sanders transferred to USSOCOM in 2005, where she retained responsibility for CV-22 production and worked as the Systems Acquisition Manager for the C-130 program in Program Executive Office Fixed Wing managing all C-130 projects across the Special Operations Forces inventory.
In 2010, Ms. Sanders was promoted to the position of Deputy Director for the Science and Technology Directorate, and in 2011 she was assigned to the position of Director, Science & Technology.
According to a report by AI experts, the internet is set to be overrun by AI-generated content in just a few years. Will this ruin content for the rest of time?
A study by Europol, the European Union Agency for Law Enforcement Cooperation, claims that AI-generated content will be more prevalent than human-made content very soon. The study explains that the rapid expansion of AI tools means we will have to deal with more AI-generated content than human-made content.
In the report, it’s claimed that humanity will be flooded with “synthetic media”. This is a new term for media that is fully generated by artificial intelligence programs, fuelled by bots designed to pump out as much content as possible.
The paper introduces SpecTrain, a method that predicts the future weights of a neural network to alleviate weight staleness and improve training speed in distributed training.
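The core idea, as described in the SpecTrain paper, is that a worker computing gradients against stale weights can instead speculate the weights several update steps ahead, using a smoothed (momentum-like) gradient as the predictor: roughly w_{t+s} ≈ w_t − s·lr·v_t. The NumPy sketch below illustrates that prediction step only; function names and the toy loss are illustrative, not the paper's code.

```python
import numpy as np

# Illustrative sketch of SpecTrain-style weight prediction: rather than using
# stale weights w_t directly, predict the weights s steps ahead,
#   w_{t+s} ~= w_t - s * lr * v_t,
# where v_t is a smoothed estimate of recent gradients.

def smoothed_gradient(v, grad, gamma=0.9):
    """Exponential moving average of recent gradients."""
    return gamma * v + (1.0 - gamma) * grad

def predict_weights(w, v, lr, s):
    """Speculate the weights s update steps into the future."""
    return w - s * lr * v

# Toy usage on the quadratic loss L(w) = 0.5 * ||w||^2, whose gradient is w.
w = np.array([1.0, -2.0])
v = np.zeros_like(w)
lr, s = 0.1, 3
grad = w                      # gradient of the toy loss at w
v = smoothed_gradient(v, grad)
w_future = predict_weights(w, v, lr, s)
```

Gradients computed at the predicted weights approximate those at the true future weights, which is how the scheme reduces the accuracy loss that staleness would otherwise cause in pipelined or asynchronous training.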