
Last month, a senior Google engineer, Blake Lemoine, made waves in the AI community by alleging that one of the company’s AI systems had become sentient. The claim concerned Google’s Language Model for Dialogue Applications (LaMDA), which can converse in a free-flowing way about a seemingly endless number of topics, an ability that unlocks more natural ways of interacting with technology.

Initially, Lemoine was put on paid administrative leave, but it appears that Google has now fired him.

Robots have always found it a challenge to work with people and vice versa. Two people on the cutting edge of improving that relationship joined us for TC Sessions: Robotics to talk about the present and future of human-robot interaction: Veo Robotics co-founder Clara Vu and Robust.ai founder Rod Brooks (formerly of iRobot and Rethink Robotics).

Part of the HRI challenge is that although we already have robotic systems that are highly capable, the worlds they operate in are still very narrowly defined. Clara said that as we move from “automation to autonomy” (a phrase she stressed she didn’t invent) we’re adding both capabilities and new levels of complexity.

“We’re moving … from robotic systems that do exactly what they were told to do or can perceive a very specific very low-level thing, to systems that have a little bit more autonomy and understanding,” she said. “The system that my company builds would not have been possible five years ago, because the sensors that we’re using and the processors that we’re using to crunch that data just didn’t exist. So as we do have better sensors and more processing capabilities, we’re able to, as you said, understand a little bit more about the world that we’re in and sort of move the level of robotic performance up a notch.”

A tech company in Hong Kong says it has developed the world’s first artificially intelligent laser skin treatment, which scans and detects the heat, sensitivity and shape of a customer’s face. Rods Technology spent over five years developing the technology to help reduce the number of injuries caused by manual treatments conducted by “blind” dermatologists. Over 100 laser facial injuries were reported in Hong Kong between January and July 1, 2022. With a human operator only needing to turn on the machine, the robot dermatologist is able to customise a treatment to help reduce acne scarring, wrinkles and even remove tattoos. The first commercial facial was sold in July 2022.


The People’s Liberation Army (PLA) of China is likely one of the leading forces in AI development as far as investment is concerned. An October 2021 report published by the Centre for Security and Emerging Technology (CSET) at Georgetown University estimated that the PLA was spending between $1.6bn and $2.7bn on AI research and procurement per year, which is approximately equivalent to that of the US military.

The report, titled Harnessed Lightning, identified seven areas of interest for the PLA and its AI development that are detailed here in order of the quantity of contracts awarded as found by CSET:

It is notable that the priority area for the PLA is the development of autonomous vehicles, specifically sub-surface and aerial platforms. This suggests that the primary concern at present is the development of autonomous platforms that would be able to contribute to generating an asymmetric advantage for the PLA in combat with the US or a similarly advanced opponent.

Summary: Researchers suggest that when in a group, ants behave in a similar fashion to networks of neurons in the brain.

Source: Rockefeller University.

Temperatures are rising, and one colony of ants will soon have to make a collective decision. Each ant feels the rising heat beneath its feet but carries on as usual until, suddenly, the ants reverse course. The whole group rushes out as one—a decision to evacuate has been made. It is almost as if the colony of ants has a greater, collective mind.

I’ve been researching the relationship between brain neurons and nodes in neural networks. It is repeatedly claimed that neurons can do complex information processing that vastly exceeds that of a simple activation function in a neural network.

The resources I’ve read so far suggest nothing fancy is happening in a neuron: it sums the incoming signals from its synapses and fires when the sum passes a threshold. This is identical to the simple perceptron, the precursor to today’s fancy neural networks. If there is more to a neuron’s operation than this, I am missing it due to my lack of familiarity with neuroscience terminology. I’ve also perused this stack exchange, and haven’t found anything.
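To be concrete about the model I mean, here is a minimal sketch of the sum-and-threshold perceptron described above; the weights and threshold values are purely illustrative, not drawn from any neuroscience data:

```python
def perceptron(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of inputs reaches the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# Example: a two-input perceptron wired up as an AND gate.
# With unit weights, the sum only reaches 1.5 when both inputs are 1.
def and_gate(a, b):
    return perceptron([a, b], [1.0, 1.0], 1.5)
```

My question is essentially whether a biological neuron does anything that this few-line function cannot, beyond having a smoother activation than a hard threshold.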

If someone could point to a detailed resource that explains the different complex ways a neuron processes the incoming information, in particular what makes a neuron a more sophisticated information processor than a perceptron, I would be grateful.