
While much of what Aligned AI is doing is proprietary, Gorman says that at its core Aligned AI is working on how to give generative A.I. systems a much more robust understanding of concepts, an area where these systems continue to lag humans, often by a significant margin. “In some ways [large language models] do seem to have a lot of things that seem like human concepts, but they are also very fragile,” Gorman says. “So it’s very easy, whenever someone brings out a new chatbot, to trick it into doing things it’s not supposed to do.” Gorman says that Aligned AI’s intuition is that methods that make chatbots less likely to generate toxic content will also help ensure that future A.I. systems don’t harm people in other ways. The idea that work on “the alignment problem” (the question of how to align A.I. with human values so it doesn’t kill us all, and from which Aligned AI takes its name) could also help address dangers from A.I. that are here today, such as chatbots that produce toxic content, is controversial. Many A.I. ethicists see talk of “the alignment problem,” which is what people who say they work on “A.I. safety” often say is their focus, as a distraction from the important work of addressing present dangers from A.I.

But Aligned AI’s work is a good demonstration of how the same research methods can help address both kinds of risk. Giving A.I. systems a more robust conceptual understanding is something we all should want. A system that understands the concept of racism or self-harm can be better trained not to generate toxic dialogue; a system that understands the concept of avoiding harm and the value of human life would, hopefully, be less likely to kill everyone on the planet.

Aligned AI and Xayn are also good examples of how many promising ideas are being produced by smaller companies in the A.I. ecosystem. OpenAI, Microsoft, and Google, while clearly the biggest players in the space, may not have the best technology for every use case.

It may be that the famous Higgs boson, partly responsible for the masses of elementary particles, also interacts with the world of new physics that has been sought for decades. If this were indeed the case, the Higgs should decay in a characteristic way involving exotic particles. Researchers at the Institute of Nuclear Physics of the Polish Academy of Sciences in Cracow have shown that if such decays do occur, they will be observable in the successors to the LHC currently being designed.

When talking about a ‘hidden valley’, our first thoughts are of dragons rather than sound science. However, in high-energy physics, this picturesque name is given to certain models that extend the set of currently known elementary particles. In these so-called Hidden Valley models, the particles of our world as described by the Standard Model belong to the low-energy group, while exotic particles are hidden in the high-energy region. Theoretical considerations then suggest exotic decays of the famous Higgs boson, something that has not been observed at the LHC accelerator despite many years of searching. However, scientists at the Institute of Nuclear Physics of the Polish Academy of Sciences (IFJ PAN) in Cracow argue that Higgs decays into exotic particles should be perfectly observable in accelerators that succeed the Large Hadron Collider – if the Hidden Valley models turn out to be consistent with reality.

“In Hidden Valley models we have two groups of particles separated by an energy barrier. The theory is that there could be exotic massive particles capable of crossing this barrier under specific circumstances. Particles like the Higgs boson or the hypothetical Z’ boson would act as communicators between the particles of both worlds. The Higgs boson, one of the most massive particles of the Standard Model, is a very good candidate for such a communicator,” explains Prof. Marcin Kucharczyk (IFJ PAN), lead author of an article in the Journal of High Energy Physics, which presents the latest analyses and simulations concerning the possibility of detecting Higgs boson decays in future lepton accelerators.

Seattle-based startup Jetoptera is designing vertical take-off and landing (VTOL) vehicles with bladeless propulsion systems — potentially making the future of urban flight quieter, safer, and faster.

The challenge: The proportion of the global population living in cities is expected to increase from 50% today to nearly 70% by 2050, meaning our already crowded urban streets are likely to become even more congested.

On Wednesday, Google demonstrated how Bard, its new AI chatbot, could be used to write job listings from a simple one-line prompt. Microsoft has demonstrated how a ChatGPT-powered tool can write entire articles in Word.

“There are a tonne of sales representatives doing a lot of banal work to compose prospecting emails,” says Rob Seaman, a senior vice president at workplace messaging company Slack, which is working with OpenAI to embed ChatGPT into its app as a kind of digital co-worker.

New AI tools may remove some of the most tedious aspects of such roles. But based on past evidence, technology also threatens to create a whole new class of menial roles.

According to the Financial Times, Meta is in talks with Magic Leap, an AR headset company, to look into licensing the latter’s tech.

Meta is reportedly in talks with a company called Magic Leap with an eye to a partnership that could see Meta developing its own augmented reality (AR) headset in the future.

According to the Financial Times, the two are negotiating a multi-year intellectual property (IP) and manufacturing alliance. The report’s timing is significant for a few reasons.

ChatGPT has changed the world since it emerged a few short months ago. Where will future advancements in generative AI take us?


Welcome back Katie Brenneman, a regular contributor to 21st Century Tech Blog. Several weeks ago, when ChatGPT entered the headlines, I suggested to Katie that she consider writing about large language models (LLMs) and the technological and societal implications of their capabilities. Were we witnessing the birth of consciousness in this new artificial intelligence (AI) discipline, or were we coming to terms with what defines our sentience?

By definition, sentience is about feelings and sensations, not thinking. Consciousness, on the other hand, is about our awareness of self and of our place in the world around us. And thinking is about the ability to reason, consider a problem, come up with an idea or solution, or hold an opinion.

So from what we know about ChatGPT in its various iterations, does it meet the definition of any of these terms? Is it sentient? Is it conscious? Is it thinking?



Dogs have been added to a group of animals that, like humans, “recognize themselves as distinct entities from their environment,” a new study shows.

A report by Live Science noted the study’s findings were published Feb. 18 in the journal Scientific Reports.

The study shows that dogs “know where their paws end and the world begins,” Live Science said.

23–25 May 2023 – Register now

Despite the wins offered by design of experiments (DoE), many of us working in commercial research, development, and manufacturing have yet to experience the method. This may be due in part to a lack of awareness and a lack of know-how. The best way to gain an appreciation for what DoE may offer you is to experience it.

This series of three one-hour workshops will provide inspirational examples of the use of DoE in many aspects of bringing products to the market, including product design, discovery, and development, process development, scale-up, transfer, and analytical method development. It will also provide the necessary know-how and resources you will need to get started with DoE.
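To give a flavour of what getting started with DoE looks like in practice, here is a minimal sketch (not taken from the workshops) of a two-level full-factorial design. The factor names and their low/high settings are purely hypothetical examples; a real study would choose factors and levels relevant to the product or process in question.

```python
# Minimal DoE sketch: a two-level full-factorial design for three
# hypothetical process factors. Each factor is tested at a "low" and
# a "high" setting, and every combination is run once: 2^3 = 8 runs.
from itertools import product

# Hypothetical factors with (low, high) settings - illustrative only.
factors = {
    "temperature_C": (20, 40),
    "pH": (6.5, 7.5),
    "mixing_rpm": (100, 300),
}

# Build the run sheet: one dict per experimental run, covering every
# combination of factor levels.
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]

for i, run in enumerate(runs, start=1):
    print(f"Run {i}: {run}")
```

A full-factorial design like this lets you estimate not only each factor's main effect but also their interactions, which one-factor-at-a-time experimentation misses; for many factors, fractional designs keep the run count manageable.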