
ChatGPT forces us to ask: how much of “being human” belongs to us?

ChatGPT is a hot topic at my university, where faculty members are deeply concerned about academic integrity, while administrators urge us to “embrace the benefits” of this “new frontier.” It’s a classic example of what my colleague Punya Mishra calls the “doom-hype cycle” around new technologies. Likewise, media coverage of human-AI interaction – whether paranoid or starry-eyed – tends to emphasize its newness.

In one sense, it is undeniably new. Interactions with ChatGPT can feel unprecedented, as when a tech journalist couldn’t get a chatbot to stop declaring its love for him. In my view, however, the boundary between humans and machines, in terms of the way we interact with one another, is fuzzier than most people would care to admit, and this fuzziness accounts for a good deal of the discourse swirling around ChatGPT.

When I’m asked to check a box to confirm I’m not a robot, I don’t give it a second thought – of course I’m not a robot. On the other hand, when my email client suggests a word or phrase to complete my sentence, or when my phone guesses the next word I’m about to text, I start to doubt myself. Is that what I meant to say? Would it have occurred to me if the application hadn’t suggested it? Am I part robot? These large language models have been trained on massive amounts of “natural” human language. Does this make the robots part human?
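For readers curious what that next-word guessing looks like mechanically, here is a minimal sketch using the openly available GPT-2 model through Hugging Face's transformers library. It illustrates next-word prediction in general; the prompt is made up, and this is not the proprietary model behind any particular phone keyboard or email client.

```python
# Minimal sketch of next-word suggestion with an open language model (GPT-2).
# Illustrative only: not the model used by any phone keyboard or email client.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "I hope this email finds you"          # made-up example prompt
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits             # shape: (1, seq_len, vocab_size)

next_token_logits = logits[0, -1]               # distribution over the next token
top = torch.topk(next_token_logits, k=5)        # five most likely continuations

for token_id in top.indices:
    print(repr(tokenizer.decode(int(token_id))))
```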

Indian research team develops fully indigenous gallium nitride power switch

Researchers at the Indian Institute of Science (IISc) have developed a fully indigenous gallium nitride (GaN) power switch that has potential applications in systems like power converters for electric vehicles and laptops, as well as in wireless communications. The entire process of building the switch—from material growth to device fabrication to packaging—was developed in-house at the Center for Nano Science and Engineering (CeNSE), IISc.

Due to their speed and efficiency, GaN transistors are poised to replace traditional silicon-based transistors in many systems, such as ultrafast chargers for electric vehicles, phones and laptops, as well as space and military applications such as radar.

“It is a very promising and disruptive technology,” says Digbijoy Nath, Associate Professor at CeNSE and corresponding author of the study published in Microelectronic Engineering. “But the material and devices are heavily import-restricted … We don’t have gallium nitride wafer production capability at commercial scale in India yet.” The know-how for manufacturing these devices is also a heavily guarded secret, with few studies published on the details of the processes involved, he adds.

How to Use ChatGPT’s New Image Features

OpenAI recently announced an upgrade to ChatGPT (Apple, Android) that adds two features: AI voice options to hear the chatbot responding to your prompts, and image analysis capabilities. The image function is similar to what’s already available for free with Google’s Bard chatbot.

Even after hours of testing the limits and capabilities of ChatGPT, OpenAI’s chatbot still manages to surprise and scare me at the same time. Yes, I was quite impressed with the web browsing beta offered through ChatGPT Plus, but I remained anxious about the tool’s ramifications for people who write for money online, among many other concerns. The new image feature arriving for OpenAI’s subscribers left me with similarly mixed feelings.

While I’ve not yet had the opportunity to experiment with the new audio capabilities (other great reporters on staff have), I was able to test the soon-to-arrive image features. Here’s how to use the new image search coming to ChatGPT and some tips to help you start out.
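The new features live in the ChatGPT app itself, but for a sense of how an image-plus-question prompt is structured programmatically, here is a hedged sketch against OpenAI's chat completions API. The model name and file path are assumptions, and vision access depends on what your account offers.

```python
# Hedged sketch: sending an image plus a question to OpenAI's chat API.
# Assumes the `openai` Python package (v1+) and an API key in OPENAI_API_KEY.
import base64
from openai import OpenAI

client = OpenAI()

with open("bike.jpg", "rb") as f:                      # assumed local image file
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: substitute whatever vision-capable model you can access
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "How do I lower the seat on this bike?"},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"},
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)
```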

New Biosensors Allow Earbuds To Record Brain Activity and Exercise Levels

Who knows? If developed further, maybe this could be a way of giving commands to a computer/AI instead of using implants.


The streaming data from these biosensors can be used for health monitoring and diagnosis of neuro-degenerative conditions.

A pair of earbuds can be turned into a tool to record the electrical activity of the brain as well as levels of lactate in the body with the addition of two flexible sensors screen-printed onto a stamp-like flexible surface.

The sensors can communicate with the earbuds, which then wirelessly transmit the data gathered for visualization and further analysis, either on a smartphone or a laptop. The data can be used for long-term health monitoring and to detect neurodegenerative conditions.
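As a rough illustration of the "visualization on a smartphone or a laptop" step, here is a minimal Python sketch that live-plots a streamed signal. The earbuds' actual wireless protocol is not described here, so a random-number placeholder stands in for the incoming samples.

```python
# Hedged sketch: live visualization of a streamed biosignal on a laptop.
# A random-number generator stands in for the earbud feed; real code would
# read from a Bluetooth/serial connection instead.
import collections
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

WINDOW = 500                           # number of samples kept on screen
buffer = collections.deque([0.0] * WINDOW, maxlen=WINDOW)

def read_sample():
    """Placeholder for one sample from the wireless sensor stream."""
    return np.random.randn()           # assumption: stand-in for real sensor data

fig, ax = plt.subplots()
line, = ax.plot(range(WINDOW), list(buffer))
ax.set_ylim(-4, 4)
ax.set_xlabel("sample")
ax.set_ylabel("signal (a.u.)")

def update(_frame):
    buffer.append(read_sample())
    line.set_ydata(list(buffer))
    return (line,)

ani = FuncAnimation(fig, update, interval=20, blit=True, cache_frame_data=False)
plt.show()
```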

Researchers extract audio from still images and silent videos

What if you could hear photos? Impossible, right? Not anymore – with the help of artificial intelligence (AI) and machine learning, researchers can now get audio from photos and silent videos.

Academics from four US universities have teamed up to develop a technique called Side Eye that can extract audio from static photos and silent – or muted – videos.

The technique targets the image stabilization technology that is now virtually standard in modern smartphones.
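As a very rough illustration of one signal-processing building block such work relies on (and emphatically not the Side Eye pipeline itself), the sketch below estimates tiny frame-to-frame shifts in a silent clip and treats the shift sequence as a crude vibration signal. The file name and library choices are assumptions.

```python
# Hedged sketch: estimate tiny frame-to-frame shifts in a silent video and
# treat the shift sequence as a crude vibration signal. NOT the researchers'
# pipeline, which exploits rolling-shutter and stabilization hardware behaviour
# in far more sophisticated ways.
import cv2                                                  # pip install opencv-python
import numpy as np
from skimage.registration import phase_cross_correlation   # pip install scikit-image

cap = cv2.VideoCapture("silent_clip.mp4")   # assumption: any short silent clip
ok, first = cap.read()
reference = cv2.cvtColor(first, cv2.COLOR_BGR2GRAY).astype(float)

shifts = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(float)
    # Sub-pixel estimate of how far this frame moved relative to the first one.
    (dy, dx), _, _ = phase_cross_correlation(reference, gray, upsample_factor=100)
    shifts.append(dy)
cap.release()

signal = np.asarray(shifts)
signal -= signal.mean()          # remove the DC offset; what remains is the "vibration"
print(f"{len(signal)} samples recovered at the video frame rate")
```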

Meta putting AI in smart glasses, assistants and more

People will laugh and dismiss it and make comparisons to Google's clown glasses. But around 2030, augmented reality glasses will come out. Basically, it will be a pair of normal-looking sunglasses w/ smartphone-type features, AI, AND… VR stuff.


Meta chief Mark Zuckerberg on Wednesday said the tech giant is putting artificial intelligence into digital assistants and smart glasses as it seeks to regain lost ground in the AI race.

Zuckerberg made his announcements at the Connect developers conference at Meta’s headquarters in Silicon Valley, the company’s main annual product event.

“Advances in AI allow us to create different (applications) and personas that help us accomplish different things,” Zuckerberg said as he kicked off the gathering.

A Silicon Valley Supergroup Is Coming Together to Create an A.I. Device

Since founding OpenAI in 2015, Sam Altman has spent many days thinking that the company’s generative artificial-intelligence products need a new kind of device to succeed. Since leaving Apple in 2019, Jony Ive, the designer behind the iPhone, iPod and MacBook Air, has been considering what the next great computing device could be.

Now, the two men and their companies are teaming up to develop a device that would succeed the smartphone and deliver the benefits of A.I. in a new form factor, unconstrained by the rectangular screen that has been the dominant computing tool of the past decade, according to two people familiar with the discussions…

“…Many tech executives believe the technology has the power to introduce a new paradigm in computing that they call “ambient computing.” Rather than typing on smartphones and taking photographs, they imagine a future device in the form of something as simple as a pendant or glasses that can process the world in real time, using a sophisticated virtual assistant capable of fielding questions and processing images.”


OpenAI’s Sam Altman, the former Apple designer Jony Ive and SoftBank’s Masayoshi Son are teaming up to develop a device that could replace the smartphone.

Machine learning model able to detect signs of Alzheimer’s across languages

The University of Alberta is 3rd in the world for AI research.

Researchers meet the challenge of developing a model that uses speech traits to detect cognitive decline, paving the way for a potential screening tool.

Researchers are striving to make earlier diagnosis of Alzheimer’s dementia possible with a machine learning (ML) model that could one day be turned into a simple screening tool anyone with a smartphone could use.

The model was able to distinguish Alzheimer’s patients from healthy controls with 70 to 75 per cent accuracy, a promising figure for the more than 747,000 Canadians who have Alzheimer’s or another form of dementia.


A machine learning model able to screen individuals with Alzheimer’s dementia from individuals without it by examining speech traits typically observed among people with the disease could one day become a tool that makes earlier diagnosis possible.
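For a sense of the general shape of such a screening model (not the University of Alberta team's actual pipeline), here is a hedged scikit-learn sketch that cross-validates a simple classifier on placeholder speech features. With random stand-in data it hovers around chance; the published model reports 70 to 75 per cent accuracy on real recordings.

```python
# Hedged sketch: the general shape of a speech-based screening classifier.
# Features, data and model are placeholders, not the study's actual pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Placeholder features standing in for the kinds of traits such studies
# extract from recordings: pause length, speech rate, vocabulary richness, etc.
n_samples, n_features = 200, 4
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, 2, size=n_samples)       # 1 = Alzheimer's, 0 = healthy control

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```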

New scooter battery can charge in 5 minutes. Can it transform electric cars?

Most of today’s EVs use lithium-ion batteries, the same kind you’ll find in your smartphone or laptop. These batteries all have two electrodes (one positive and one negative), and the negative one is usually made of graphite.

While the battery is being charged, the lithium ions flow from the side of the battery with the positive electrode to the side with the negative electrode. If the charging happens too fast, the flow can be disrupted, causing the battery to short circuit.

StoreDot’s EV battery replaces the graphite electrode with one made from nanoparticles based on the chemical element germanium — this allows the ions to flow more smoothly and quickly, enabling a faster charge.
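Some back-of-the-envelope arithmetic shows why a five-minute charge is easy to promise for a scooter and hard to scale to a car. The pack sizes below are illustrative assumptions, not StoreDot specifications.

```python
# Back-of-the-envelope arithmetic (pack sizes are illustrative assumptions):
# a full charge in 5 minutes implies a 12C rate, and the required charger
# power scales with pack size accordingly.
def charge_rate_c(minutes_to_full: float) -> float:
    """C-rate implied by charging a full pack in the given number of minutes."""
    return 60.0 / minutes_to_full

def required_power_kw(pack_kwh: float, minutes_to_full: float) -> float:
    """Average charging power needed, ignoring losses and charge taper."""
    return pack_kwh * charge_rate_c(minutes_to_full)

for name, pack_kwh in [("scooter (assumed ~2 kWh pack)", 2.0),
                       ("compact EV (assumed ~60 kWh pack)", 60.0)]:
    print(f"{name}: {charge_rate_c(5):.0f}C rate, "
          f"~{required_power_kw(pack_kwh, 5):.0f} kW average charging power")
```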

Google created hurdles to protect smartphone foothold, small search firm says

WASHINGTON, Sept 27 (Reuters) — The founder of Branch Metrics, which developed a method of searching within smartphone apps, told a U.S. antitrust trial on Wednesday how his company struggled to integrate with devices because of steps Google took to block such integrations.

The testimony came during the third week of a more than two-month trial in which the U.S. Justice Department is seeking to show that Alphabet’s Google (GOOGL.O) abused its monopoly of search and some search advertising. Google has said that its business practices were legal.

Google is accused of paying $10 billion a year, through “revenue share agreements” with smartphone makers, wireless carriers and others who agree to make its software the default, to maintain its monopoly in search.