
James Clerk Maxwell’s Big Idea: A History of Our Understanding of Light from Maxwell to Einstein

A term hypothesized to fix a small mathematical inconsistency predicted electromagnetic waves with all the properties of light observed in the nineteenth century. Unwittingly, Maxwell also pointed science inexorably toward the special theory of relativity.

My last two articles, two slightly different takes on “recipes” for understanding electromagnetism, show how Maxwell’s equations can be understood as arising from the highly special relationship between the electric and magnetic components within the Faraday tensor, a relationship “enforced” by the assumption that the Gauss flux laws, equivalent to Coulomb’s inverse-square force law, must be Lorentz covariant (consistent with special relativity).

From the standpoint of special relativity, there is obviously something very special going on behind these laws, which are clearly not Lorentz covariant from the outset. What I mean is that, as vector laws in three-dimensional space, there is no way you can take a general vector field that fulfills them and deduce that it is Lorentz covariant; it simply won’t be so in general. There has to be something else further specializing that field’s relationship with the world to ensure that such an in-general-decidedly-NOT-Lorentz-covariant equation is, indeed, covariant.
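To make the claim concrete, here is the standard textbook packaging that those articles lean on (SI units, metric signature (+,−,−,−); this is conventional material, not a derivation specific to the articles). The electric and magnetic fields sit inside the antisymmetric Faraday tensor, and Gauss’s flux law is just one component of the covariant field equation:

```latex
% Faraday tensor:
F^{\mu\nu} =
\begin{pmatrix}
0      & -E_x/c & -E_y/c & -E_z/c \\
E_x/c  & 0      & -B_z   & B_y    \\
E_y/c  & B_z    & 0      & -B_x   \\
E_z/c  & -B_y   & B_x    & 0
\end{pmatrix}

% Inhomogeneous Maxwell equations, with four-current J^\nu = (c\rho, \mathbf{J}):
\partial_\mu F^{\mu\nu} = \mu_0 J^\nu

% The \nu = 0 component is precisely Gauss's flux law:
\nabla \cdot \mathbf{E} = \rho / \varepsilon_0
```

Written this way, Lorentz covariance is manifest: both sides of the field equation transform as four-vectors. That is exactly the property the three-dimensional vector form of Gauss’s law does not wear on its sleeve, which is the point of the paragraph above.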

Forward Health launches CarePods, a self-contained, AI-powered doctor’s office

Get a blood test, check blood pressure, and swab for ailments, all without a doctor or nurse.

Adrian Aoun, CEO and co-founder of Forward Health, aims to scale healthcare. It started in 2017 with the launch of tech-forward doctor’s offices that eschewed traditional medical staffing for technology solutions like body scanners, smart sensors, and algorithms that can diagnose ailments. Now, in 2023, he’s still on the same mission and has rolled all the learnings and technology from those offices into a self-contained, standalone medical station called the CarePod.

The CarePod pitch is easy to understand. Why spend hours in a doctor’s office to get your throat swabbed for strep throat? Walk into the CarePod, soon to be located in malls and office buildings, and answer some questions to determine the appropriate test. CarePod users can get their blood drawn, throat swabbed, and blood pressure read – most of the frontline clinical work performed in primary care offices, all without a doctor or nurse. Custom AI powers the diagnosis, and behind the scenes, doctors write the appropriate prescription, which is available nearly immediately.

The cost? It’s $99 a month, which gives users access to all of the CarePod’s tests and features. As Aoun told me, this solution enables healthcare to scale like never before.

Nanowire Network Mimics Brain, Learns Handwriting with 93.4% Accuracy

Summary: Researchers developed an experimental computing system, resembling a biological brain, that successfully identified handwritten numbers with a 93.4% accuracy rate.

This breakthrough was achieved using a novel training algorithm that provides continuous real-time feedback, outperforming traditional batch data processing methods, which yielded 91.4% accuracy.

The system’s design features a self-organizing network of nanowires on electrodes, with memory and processing capabilities interwoven, unlike conventional computers with separate modules.

Running thousands of LLMs on one GPU is now possible with S-LoRA


Fine-tuning large language models (LLMs) has become an important tool for businesses seeking to tailor AI capabilities to niche tasks and personalized user experiences. But fine-tuning usually comes with steep computational and financial overhead, keeping it out of reach for enterprises with limited resources.

To address these challenges, researchers have created algorithms and techniques that cut the cost of fine-tuning LLMs and of running fine-tuned models. The latest of these is S-LoRA, a collaborative effort between researchers at Stanford University and the University of California, Berkeley (UC Berkeley).
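Some context on why serving many fine-tuned variants is feasible at all: in LoRA, the low-rank adaptation technique that S-LoRA builds on, a fine-tuned “adapter” is just a pair of small low-rank matrices added to a frozen base layer, so thousands of adapters can share one copy of the base model. A minimal PyTorch sketch of that underlying idea (class and parameter names here are illustrative, and this is the generic LoRA layer, not S-LoRA’s serving machinery):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank update (the LoRA idea).

    Only A and B are trained, so each fine-tuned "adapter" is a tiny pair of
    matrices that can be swapped in per request, which is the property S-LoRA
    exploits to serve many adapters from a single base model on one GPU.
    """

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                          # base stays frozen
        d_out, d_in = base.weight.shape
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)   # down-projection
        self.B = nn.Parameter(torch.zeros(d_out, r))         # up-projection, zero init
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = W0 x + (alpha / r) * B A x
        return self.base(x) + self.scale * (x @ self.A.T) @ self.B.T

# Usage: wrap an existing projection; only ~2*r*d parameters are trainable.
layer = LoRALinear(nn.Linear(768, 768), r=8)
out = layer(torch.randn(4, 768))
```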

Can you spot the AI impostors? Research finds AI faces can look more real than actual humans

This is why I laughed at all that “uncanny valley” talk in the early 2010s. Notice the term is almost never used anymore. And as for making robots more attractive than most people: expect that done by the mid-2030s.


Does ChatGPT ever give you the eerie sense you’re interacting with another human being?

Artificial intelligence (AI) has reached an astounding level of realism, to the point that some tools can even fool people into thinking they are interacting with another human.

The eeriness doesn’t stop there. In a study published today in Psychological Science, we’ve discovered that images of white faces generated by the popular StyleGAN2 algorithm look more “human” than actual people’s faces.

NVIDIA announces H200 Tensor Core GPU

The world’s most valuable chip maker has announced a next-generation processor for AI and high-performance computing workloads, due for launch in mid-2024. A new exascale supercomputer, designed specifically for large AI models, is also planned.

H200 Tensor Core GPU. Credit: NVIDIA

In recent years, California-based NVIDIA Corporation has played a major role in the progress of artificial intelligence (AI), as well as high-performance computing (HPC) more generally, with its hardware being central to astonishing leaps in algorithmic capability.

Experimental brain-like computing system more accurate with custom algorithm

An experimental computing system physically modeled after the biological brain has “learned” to identify handwritten numbers with an overall accuracy of 93.4%. The key innovation in the experiment was a new training algorithm that gave the system continuous information about its success at the task in real time while it learned. The study was published in Nature Communications.

The algorithm outperformed a conventional machine-learning approach in which training was performed after a batch of data had been processed, producing 91.4% accuracy. The researchers also showed that memory of past inputs stored in the system itself enhanced learning. In contrast, other computing approaches store memory within software or hardware separate from a device’s processor.
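The article doesn’t spell out the algorithm’s internals, but the contrast it draws, an update after every example with real-time feedback versus one update after a whole pass over the data, can be sketched with an ordinary software classifier. The toy logistic model below is purely illustrative: it has nothing to do with the nanowire hardware, and the accuracy figures above come from the study, not from this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification task standing in for digit recognition.
X = rng.normal(size=(1000, 20))
w_true = rng.normal(size=20)
y = (X @ w_true > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, online: bool, lr: float = 0.1, epochs: int = 5):
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        if online:
            # "Continuous" feedback: adjust after every single example,
            # analogous to telling the system how it is doing in real time.
            for xi, yi in zip(X, y):
                w += lr * (yi - sigmoid(xi @ w)) * xi
        else:
            # Batch feedback: a single adjustment after processing all the data.
            w += lr * X.T @ (y - sigmoid(X @ w)) / len(y)
    return w

for online in (True, False):
    w = train(X, y, online)
    acc = ((sigmoid(X @ w) > 0.5) == y).mean()
    print(f"{'online' if online else 'batch '} accuracy: {acc:.3f}")
```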

For 15 years, researchers at the California NanoSystems Institute at UCLA, or CNSI, have been developing a new platform technology for computation. The technology is a brain-inspired system composed of a tangled-up network of wires containing silver, laid on a bed of electrodes. The system receives input and produces output via pulses of electricity. The individual wires are so small that their diameter is measured on the nanoscale, in billionths of a meter.

AI-powered headphones let users choose what they hear

The devices are controlled via voice commands or a smartphone app.


Noise-canceling headphones use active noise control technology to minimize or completely block out outside noise. These headphones are popular because they offer a quieter, more immersive listening experience, especially in noisy areas. However, despite the many advancements in the technology, people still don’t have much control over which sounds their headphones block out and which they let pass.

Semantic hearing

Now, researchers at the University of Washington have developed deep learning algorithms that let users choose, in real time, which sounds filter through their headphones. The system’s creators have named it “semantic hearing.”
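As a rough illustration of the concept only (not the UW team’s actual models; the frame length and the stub classifier below are placeholders), the pipeline amounts to labeling chunks of incoming audio and passing through only the sound classes the user opted in to:

```python
import numpy as np

FRAME = 1024  # samples per analysis frame (placeholder value)

def classify_frame(frame: np.ndarray) -> str:
    """Stub classifier standing in for the trained network.

    A real semantic-hearing system runs a neural model that separates and
    labels sounds ("siren", "speech", "birdsong", ...). Here we just
    threshold frame energy so the sketch stays self-contained and runnable.
    """
    return "speech" if np.abs(frame).mean() > 0.1 else "background"

def semantic_filter(audio: np.ndarray, keep: set[str]) -> np.ndarray:
    """Pass through only the frames whose predicted class the user kept."""
    out = np.zeros_like(audio)
    for start in range(0, len(audio) - FRAME + 1, FRAME):
        frame = audio[start:start + FRAME]
        if classify_frame(frame) in keep:
            out[start:start + FRAME] = frame
    return out

mixed = np.random.default_rng(1).normal(scale=0.2, size=48000)  # 1 s at 48 kHz
filtered = semantic_filter(mixed, keep={"siren", "speech"})
```

The real system additionally has to do this on binaural audio within a tight real-time latency budget, which is where the engineering difficulty lies.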

Twice As Powerful: Next-Gen AI Chip Mimics Human Brain for Power Savings

Hussam Amrouch has developed an AI-ready architecture that is twice as powerful as comparable in-memory computing approaches. As reported in the journal Nature, the professor at the Technical University of Munich (TUM) applies a new computational paradigm using special circuits known as ferroelectric field effect transistors (FeFETs). Within a few years, this could prove useful for generative AI, deep learning algorithms, and robotic applications.

The basic idea is simple: unlike previous chips, where only calculations were carried out on transistors, they are now the location of data storage as well. That saves time and energy.

“As a result, the performance of the chips is also boosted,” says Amrouch, who holds the professorship of AI processor design at TUM.
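The arithmetic behind that claim is easiest to see in an analog in-memory array, where Ohm’s law and Kirchhoff’s current law perform the multiply-accumulate right where the weights are stored. A minimal numerical sketch of generic in-memory computing (not the FeFET device physics):

```python
import numpy as np

# In an analog in-memory array, weights are stored as cell conductances G.
# Driving input voltages V along the rows makes each column's output current
# the dot product sum_i G[i, j] * V[i], so the matrix-vector multiply
# happens in the memory itself, with no weight movement.

rng = np.random.default_rng(0)
G = rng.uniform(0.0, 1.0, size=(4, 3))   # conductances = stored weights
V = rng.uniform(-1.0, 1.0, size=4)       # input voltages = activations

I = V @ G  # column currents: one analog step per matrix-vector product

# A conventional chip would first fetch every weight from a separate memory;
# that data movement is the time and energy the article says is saved.
print(I)
```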

New algorithm finds failures and fixes in autonomous systems, from drone teams to power grids

From vehicle collision avoidance to airline scheduling systems to power supply grids, many of the services we rely on are managed by computers. As these autonomous systems grow in complexity and ubiquity, so too could the ways in which they fail.

Now, MIT engineers have developed an approach that can be paired with any autonomous system to quickly identify a range of potential failures in that system before it is deployed in the real world. What’s more, the approach can find fixes for the failures and suggest repairs to avoid system breakdowns.

The team has shown that the approach can root out failures in a variety of simulated autonomous systems, including a small and a large network, an aircraft collision-avoidance system, a team of rescue drones, and a robotic manipulator. In each of the systems, the new approach, in the form of an automated sampling algorithm, quickly identifies a range of likely failures as well as repairs to avoid those failures.
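The article doesn’t describe the sampling algorithm’s internals, so the sketch below is only a generic illustration of the idea, sampling disturbances to uncover failures and then searching for a parameter change that repairs them, on a toy feedback system (everything here is hypothetical, not MIT’s method):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(gain: float, disturbance: np.ndarray) -> bool:
    """Toy controlled system: returns True if it stays inside the safe bound."""
    x = 1.0
    for d in disturbance:
        x = (1.0 - gain) * x + d      # one step of a simple feedback loop
        if abs(x) > 5.0:              # safety violation
            return False
    return True

def find_failures(gain: float, n: int = 1000) -> list:
    """Monte Carlo sampling over disturbance sequences to uncover failures."""
    return [d for _ in range(n)
            if not simulate(gain, d := rng.normal(size=50))]

def suggest_repair(gain: float, failures: list) -> float:
    """Search nearby parameter values that fix every sampled failure."""
    for candidate in np.linspace(gain, 1.0, 20):
        if all(simulate(candidate, d) for d in failures):
            return candidate
    return gain                        # no repair found in the search range

failures = find_failures(gain=0.05)
print(f"{len(failures)} failing disturbance samples found")
print("suggested gain:", suggest_repair(0.05, failures))
```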
