Toggle light / dark theme

On stage in front of packed rooms, at lavish private venture capital dinners and over casual games of ping pong, San Francisco’s AI startup scene is blowing up. With tens of thousands of laid off software engineers with time to tinker, glistening empty buildings beckoning them to start something new, and billions of dollars in idle cash in need of investing, it’s no surprise that just weeks after viral AI companion ChatGPT made its jaw-dropping debut on Nov. 30, one of the smartest cities in the world would pick generative AI as the driver of its next economic boom.


San Francisco’s AI startup scene is blowing up thanks to the ChatGPT craze and backing from investors like Y Combinator, Bessemer, Coatue, Andreessen Horowitz, Tiger Global, Mark Benioff’s Time Ventures and Ashton Kutcher’s Sound Ventures. Meet the players who are shaking up the tech mecca.

When Microsoft released Bing Chat, an AI-powered chatbot co-developed with OpenAI, it didn’t take long before users found creative ways to break it. Using carefully tailored inputs, users were able to get it to profess love, threaten harm, defend the Holocaust and invent conspiracy theories. Can AI ever be protected from these malicious prompts?

What set it off is malicious prompt engineering, or when an AI, like Bing Chat, that uses text-based instructions — prompts — to accomplish tasks is tricked by malicious, adversarial prompts (e.g. to perform tasks that weren’t a part of its objective. Bing Chat wasn’t designed with the intention of writing neo-Nazi propaganda. But because it was trained on vast amounts of text from the internet — some of it toxic — it’s susceptible to falling into unfortunate patterns.

Adam Hyland, a Ph.D. student at the University of Washington’s Human Centered Design and Engineering program, compared prompt engineering to an escalation of privilege attack. With escalation of privilege, a hacker is able to access resources — memory, for example — normally restricted to them because an audit didn’t capture all possible exploits.

On December 18, 2019, Wuhan Central Hospital admitted a patient with symptoms common for the winter flu season: a 65-year-old man with fever and pneumonia. AI Fen, director of the emergency department, oversaw a typical treatment plan, including antibiotics and anti-influenza drugs.

Six days later, the patient was still sick, and AI was puzzled, according to news reports and a detailed reconstruction of this period by evolutionary biologist Michael Worobey. The respiratory department decided to try to identify the guilty pathogen by reading its genetic code, a process called sequencing. They rinsed part of the patient’s lungs with saline, collected the liquid, and sent the sample to a biotech company. On December 27, the hospital got the results: The man had contracted a new coronavirus closely related to the one that caused the SARS outbreak that began 17 years before.

As chatbot responses begin to proliferate throughout the Internet, they will, in turn, impact future machine learning algorithms that mine the Internet for information, thus perpetuating and amplifying the impact of the current programming biases evident in ChatGPT.

ChatGPT is admittedly a work in progress, but how the issues of censorship and offense ultimately play out will be important. The last thing anyone should want in the future is a medical diagnostic chatbot that refrains from providing a true diagnosis that may cause pain or anxiety to the receiver. Providing information guaranteed not to disturb is a sure way to squash knowledge and progress. It is also a clear example of the fallacy of attempting to input “universal human values” into AI systems, because one can bet that the choice of which values to input will be subjective.

If the future of AI follows the current trend apparent in ChatGPT, a more dangerous, dystopic machine-based future might not be the one portrayed in the Terminator films but, rather, a future populated by AI versions of Fahrenheit 451 firemen.

Was given test for Singapore students and failed.


The publication compared ChatGPT with students who have taken the PSLE in the past three years using questions from the latest collection of past year papers that are available in bookstores.

ChatGPT received a mere 16 out of 100 points for three math papers, 21 points for the science papers, and 11 out of 20 for the English papers.

The bot was limited in that it was unable to answer questions that involved graphics or charts, receiving zero points for those sections. It, however, performed relatively better in answering questions that it could attempt to solve and managed to answer more than half of the questions in the maths papers and a quarter of the science papers’ questions, which predominantly included graphs as part of the questions.

A team of scientists has developed electronic skin that could pave the way for soft, flexible robotic devices to assist with surgical procedures or aid people’s mobility.

The creation of stretchable e-skin by Edinburgh researchers gives soft robots for the first time a level of physical self-awareness similar to that of people and animals.

The technology could aid breakthroughs in soft robotics by enabling devices to detect precisely their movement in the most sensitive of surroundings, experts say.

A new video released by the Australian Army shows soldiers using a form of enhanced digital ‘telepathy’ to control robot dogs.

Unlike more well-known brain-machine interface technologies like Elon Musk’s Neuralink, the new system doesn’t use an implant. Instead, the system reads brain signals via an external helmet and then translates that information into commands for the robot dogs. Although still in the testing stages, robotic systems that can be controlled by thoughts instead of verbal or physical commands could offer a significant technological advantage to militaries of the future.

Enhanced Electronic telepathy’ a Potential Battle Field Game Changer.