
Archive for the ‘information science’ category: Page 9

Nov 19, 2023

Could Photosynthesis Blossom Into Quantum Computing Technology?

Posted by in categories: biotech/medical, computing, information science, quantum physics

As we learned in middle school science classes, inside common leafy greens—and most other plants—are intricate circuits of biological machinery that perform the task of converting sunlight into usable energy: photosynthesis. These processes keep plants alive. Boston University researchers have a vision for how they could also be harnessed into programmable units that would enable scientists to construct the first practical quantum computer.

A quantum computer would be able to perform calculations much faster than the classical computers we use today. The laptop sitting on your desk is built on units that can represent 0 or 1, but never both, or a combination of those states, at the same time. While a classical computer can run only one analysis at a time, a quantum computer could run a billion or more versions of the same equation simultaneously, making it far better at modeling extremely complex systems—like weather patterns or how cancer will spread through tissue—and much faster at analyzing huge datasets.
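The contrast between classical bits and quantum states can be illustrated with a minimal sketch (an assumption-laden toy model, not the BU group's work): a qubit is a normalized pair of complex amplitudes, and n qubits require 2**n amplitudes, which is where the parallelism described above comes from.

```python
import numpy as np

# A classical bit is exactly 0 or 1; a qubit is a normalized pair of
# amplitudes over |0> and |1>, with measurement probabilities |a|^2, |b|^2.
zero = np.array([1.0, 0.0])                 # the state |0>
hadamard = np.array([[1.0, 1.0],
                     [1.0, -1.0]]) / np.sqrt(2)

superposed = hadamard @ zero                # equal mix of |0> and |1>
probs = np.abs(superposed) ** 2             # [0.5, 0.5]: both outcomes possible

# Two qubits occupy a 4-amplitude state; n qubits need 2**n amplitudes,
# so state space (and potential parallelism) grows exponentially.
two_qubits = np.kron(superposed, superposed)
```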

The idea of using photosynthetic molecules from, say, a spinach leaf to power quantum computing services might sound like science fiction. It’s not. It is “on the fringe of possibilities,” says David Coker, a College of Arts & Sciences professor of chemistry and a College of Engineering professor of materials science and engineering. Coker and collaborators at BU and Princeton University are using computer simulations and experiments to provide proofs of concept that photosynthetic circuits could unlock new technological capabilities. Their work is showing promising early results.

Nov 18, 2023

This company is building AI for African languages

Posted by in categories: information science, internet, robotics/AI

The AI startups working to build products that support African languages often get ignored by investors, says Hadgu, owing to the small size of the potential market, a lack of political support, and poor internet infrastructure. However, Hadgu says small African startups including Lesan, GhanaNLP, and Lelapa AI are playing an important role: “Big tech companies do not give focus to our languages,” he says, “but we cannot wait for them.”

Lelapa AI is trying to create a new paradigm for AI models in Africa, says Vukosi Marivate, a data scientist on the company’s AI team. Instead of tapping into the internet alone to collect data to train its model, like companies in the West, Lelapa AI works both online and offline with linguists and local communities to gather data, annotate it, and identify use cases where the tool might be problematic.

Bonaventure Dossou, a researcher at Lelapa AI specializing in natural-language processing (NLP), says that working with linguists enables them to develop a model that’s context-specific and culturally relevant. “Embedding cultural sensitivity and linguistic perspectives makes the technological system better,” says Dossou. For example, the Lelapa AI team built sentiment and tone analysis algorithms tailored to specific languages.

Nov 18, 2023

These noise-canceling headphones can filter specific sounds on command, thanks to deep learning

Posted by in categories: information science, mobile phones, robotics/AI, transportation

Scientists have built noise-canceling headphones that filter out specific types of sound in real time — such as birds chirping or car horns blaring — thanks to a deep learning artificial intelligence (AI) algorithm.

The system, which researchers at the University of Washington dub “semantic hearing,” streams all sounds captured by the headphones to a smartphone, which cancels everything before letting wearers pick the specific types of audio they’d like to hear. They described the prototype in a paper published Oct. 29 in the ACM Digital Library.
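The selection step described above can be sketched in a few lines (a hypothetical toy, not the UW system, which operates on raw audio with a neural network): a classifier labels each short audio frame, and only frames matching the wearer's chosen sound classes are let through.

```python
import numpy as np

def semantic_filter(frames, frame_labels, keep_classes):
    """Mute every audio frame whose predicted class is not wanted.

    frames: (n_frames, n_samples) array of audio; frame_labels: one
    predicted class name per frame (here assumed to come from an
    upstream classifier, which this sketch does not implement).
    """
    mask = np.array([label in keep_classes for label in frame_labels])
    return frames * mask[:, None]       # zero out unwanted frames

frames = np.ones((3, 4))                # three dummy frames
labels = ["bird", "car_horn", "bird"]   # hypothetical classifier output
out = semantic_filter(frames, labels, keep_classes={"bird"})
# the "car_horn" frame is silenced; the "bird" frames pass through
```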

Nov 18, 2023

LHC physicists can’t save them all

Posted by in categories: information science, particle physics, robotics/AI

In 2010, Mike Williams traveled from London to Amsterdam for a physics workshop. Everyone there was abuzz with the possibilities—and possible drawbacks—of machine learning, which Williams had recently proposed incorporating into the LHCb experiment. Williams, now a professor of physics and leader of an experimental group at the Massachusetts Institute of Technology, left the workshop motivated to make it work.

LHCb is one of the four main experiments at the Large Hadron Collider at CERN. Every second, inside the detectors for each of those experiments, proton beams cross 40 million times, generating hundreds of millions of proton collisions, each of which produces an array of particles flying off in different directions. Williams wanted to use machine learning to improve LHCb’s trigger system, a set of decision-making algorithms programmed to recognize and save only collisions that display interesting signals—and discard the rest.

Of the 40 million crossings, or events, that happen each second in the ATLAS and CMS detectors—the two largest particle detectors at the LHC—data from only a few thousand are saved, says Tae Min Hong, an associate professor of physics and astronomy at the University of Pittsburgh and a member of the ATLAS collaboration. “Our job in the trigger system is to never throw away anything that could be important,” he says.
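The trigger logic described above amounts to a fast predicate applied to every event: keep anything that looks interesting, discard the rest. A minimal illustrative sketch (invented thresholds and feature names, not LHC code):

```python
import random

def trigger(event, energy_threshold=90.0):
    """Keep high-energy events, or anything flagged by an ML score.

    Both the threshold and the ml_score feature are hypothetical;
    real triggers combine many handcrafted and learned criteria.
    """
    return event["energy"] > energy_threshold or event["ml_score"] > 0.95

random.seed(0)
events = [{"energy": random.uniform(0.0, 100.0),
           "ml_score": random.random()} for _ in range(10_000)]

saved = [e for e in events if trigger(e)]
# the overwhelming majority of events are thrown away forever,
# which is why the selection criteria matter so much
```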

Nov 16, 2023

James Clerk Maxwell’s Big Idea: A History of Our Understanding of Light from Maxwell to Einstein

Posted by in categories: information science, law, mathematics

A term Maxwell hypothesized to fix a small mathematical inconsistency predicted electromagnetic waves, and that they had all the properties of light observed before and after his time in the nineteenth century. Unwittingly, he also pointed science inexorably in the direction of the special theory of relativity.

My last two articles, two slightly different takes on “recipes” for understanding Electromagnetism, show how Maxwell’s equations can be understood as arising from the highly special relationships between the electric and magnetic components within the Faraday tensor that is “enforced” by the assumption that the Gauss flux laws, equivalent to Coulomb’s inverse square force law, must be Lorentz covariant (consistent with Special Relativity).

From the standpoint of Special Relativity, there is obviously something very special going on behind these laws, which are clearly not Lorentz covariant from the outset. What I mean is that, as vector laws in three-dimensional space, there is no way you can take a general vector field that fulfills them and deduce that it is Lorentz covariant — it simply won’t be so in general. There has to be something else further specializing that field’s relationship with the world to ensure that such an in-general-decidedly-not-Lorentz-covariant equation is, indeed, covariant.
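For reference, the equations alluded to can be written compactly. The "hypothesized term" of the opening paragraph is the displacement current Maxwell added to Ampère's law; in the covariant packaging discussed here, the whole set collapses onto the Faraday tensor F (a standard SI-unit sketch):

```latex
% Ampère's law with Maxwell's added displacement-current term:
\nabla \times \mathbf{B}
  = \mu_0 \mathbf{J} + \mu_0 \epsilon_0 \frac{\partial \mathbf{E}}{\partial t}

% Covariant form: the inhomogeneous (Gauss + Ampère) and homogeneous
% (Faraday + no-monopole) equations become two tensor equations:
\partial_\mu F^{\mu\nu} = \mu_0 J^\nu ,
\qquad
\partial_{[\alpha} F_{\beta\gamma]} = 0
```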

Nov 15, 2023

Forward Health launches CarePods, a self-contained, AI-powered doctor’s office

Posted by in categories: biotech/medical, health, information science

Get a blood test, check blood pressure, and swab for ailments — all without a doctor or nurse.

Adrian Aoun, CEO and co-founder of Forward Health, aims to scale healthcare. It started in 2017 with the launch of tech-forward doctor’s offices that eschewed traditional medical staffing for technology solutions like body scanners, smart sensors, and algorithms that can diagnose ailments. Now, in 2023, he’s still on the same mission and rolled up all the learnings and technology found in the doctor’s office into a self-contained, standalone medical station called the CarePod.


Nov 15, 2023

Nanowire Network Mimics Brain, Learns Handwriting with 93.4% Accuracy

Posted by in categories: biological, computing, information science, nanotechnology, neuroscience

Summary: Researchers developed an experimental computing system, resembling a biological brain, that successfully identified handwritten numbers with a 93.4% accuracy rate.

This breakthrough was achieved using a novel training algorithm providing continuous real-time feedback, outperforming traditional batch data processing methods which yielded 91.4% accuracy.

The system’s design features a self-organizing network of nanowires on electrodes, with memory and processing capabilities interwoven, unlike conventional computers with separate modules.
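The contrast drawn above, continuous per-sample feedback versus batch processing, can be shown with a generic learning sketch (ordinary gradient updates on a toy linear model, not the nanowire algorithm itself): online learning updates after every sample, batch learning once per pass over the data.

```python
import numpy as np

def online_step(w, x, y, lr=0.1):
    """Update weights immediately from a single sample's error."""
    err = w @ x - y
    return w - lr * err * x

def batch_step(w, X, Y, lr=0.1):
    """Accumulate errors over the whole dataset, then update once."""
    errs = X @ w - Y
    return w - lr * (X.T @ errs) / len(Y)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
true_w = np.array([1.0, -2.0, 0.5])     # hypothetical target weights
Y = X @ true_w

w = np.zeros(3)
for x, y in zip(X, Y):                  # online: 50 small, immediate updates
    w = online_step(w, x, y)
```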

Nov 15, 2023

Running thousands of LLMs on one GPU is now possible with S-LoRA

Posted by in categories: business, finance, information science, robotics/AI


Fine-tuning large language models (LLMs) has become an important tool for businesses seeking to tailor AI capabilities to niche tasks and personalized user experiences. But fine-tuning usually carries steep computational and financial overhead, putting it beyond the reach of enterprises with limited resources.

To solve these challenges, researchers have created algorithms and techniques that cut the cost of fine-tuning LLMs and running fine-tuned models. The latest of these techniques is S-LoRA, a collaborative effort between researchers at Stanford University and University of California-Berkeley (UC Berkeley).
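The LoRA idea that S-LoRA builds on explains why thousands of fine-tuned variants fit on one GPU: each variant is only a low-rank update (B @ A) on top of a single shared base weight matrix, so the base is stored once and tiny per-task adapters are applied per request. A minimal sketch (sizes and task names are invented for illustration):

```python
import numpy as np

d, r = 512, 8                           # hidden size, adapter rank (assumed)
rng = np.random.default_rng(0)
W = rng.normal(size=(d, d))             # shared base weights, stored once

def make_adapter():
    """A LoRA adapter: two small matrices whose product is a rank-r update."""
    A = rng.normal(size=(r, d)) * 0.01
    B = rng.normal(size=(d, r)) * 0.01
    return A, B

# one tiny adapter per fine-tuned variant
adapters = {task: make_adapter() for task in ("legal", "medical", "chat")}

def forward(x, task):
    A, B = adapters[task]
    return W @ x + B @ (A @ x)          # base output + low-rank correction

# each adapter costs 2*d*r floats instead of d*d for a full fine-tune
savings = (2 * d * r) / (d * d)         # = 0.03125 here
```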

Nov 14, 2023

Can you spot the AI impostors? Research finds AI faces can look more real than actual humans

Posted by in categories: information science, robotics/AI

This is why I laughed at all the uncanny-valley talk in the early 2010s; notice the term is almost never used anymore. And as for making robots more attractive than most people: expect that by the mid-2030s.


Does ChatGPT ever give you the eerie sense you’re interacting with another human being?

Artificial intelligence (AI) has reached an astounding level of realism, to the point that some tools can even fool people into thinking they are interacting with another human.


Nov 14, 2023

NVIDIA announces H200 Tensor Core GPU

Posted by in categories: information science, robotics/AI, supercomputing

The world’s most valuable chip maker has announced a next-generation processor for AI and high-performance computing workloads, due for launch in mid-2024. A new exascale supercomputer, designed specifically for large AI models, is also planned.

H200 Tensor Core GPU. Credit: NVIDIA

In recent years, California-based NVIDIA Corporation has played a major role in the progress of artificial intelligence (AI), as well as high-performance computing (HPC) more generally, with its hardware being central to astonishing leaps in algorithmic capability.
