Like many people, you may have had your mind blown recently by the possibilities of ChatGPT and other large language models, such as the new Bing or Google's Bard.

For anyone who somehow hasn't come across them (unlikely, as ChatGPT is reportedly the fastest-growing app of all time), here's a quick recap:

Large language models, or LLMs, are software systems trained on huge text datasets, enabling them to understand and respond to human language in a remarkably lifelike way. Under the hood, they work by repeatedly predicting a likely next token given everything seen so far.
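
As a loose illustration of that single predict-the-next-token idea (a toy bigram counter, nothing like a production LLM's neural network), the following sketch "trains" on a tiny text and then generates a continuation:

    # Toy illustration of the core LLM idea: learn next-token statistics
    # from text, then generate by repeatedly sampling a likely next token.
    # Real LLMs use neural networks and vastly more data.
    import random
    from collections import defaultdict

    corpus = "the cat sat on the mat and the cat slept".split()

    # "Training": count which word follows which.
    next_words = defaultdict(list)
    for current, following in zip(corpus, corpus[1:]):
        next_words[current].append(following)

    # "Generation": repeatedly predict a plausible next word.
    word, output = "the", ["the"]
    for _ in range(6):
        word = random.choice(next_words[word]) if next_words[word] else word
        output.append(word)
    print(" ".join(output))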


Auto-GPT is a breakthrough technique that generates its own prompts, enabling large language models to carry out complex, multi-step procedures. While it has potential benefits, it also raises ethical concerns about accuracy, bias, and the possibility of sentience in AI agents.
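
As a rough sketch of that self-prompting loop, here the model's own output is folded into its next prompt until the goal is met. The call_llm function is a hypothetical stand-in for a real LLM API; it simply replays canned answers so the sketch runs end to end:

    # Minimal sketch of the Auto-GPT idea: the model's output becomes its
    # next prompt, decomposing one goal into steps with no human in the loop.
    _canned = iter(["Search for recent reviews", "Summarise the top result", "DONE"])

    def call_llm(prompt: str) -> str:
        # Hypothetical stand-in for any real LLM completion API.
        return next(_canned)

    def run_agent(goal: str, max_steps: int = 10) -> list[str]:
        steps: list[str] = []
        while len(steps) < max_steps:
            prompt = (f"Goal: {goal}\n"
                      f"Steps completed so far: {steps}\n"
                      f"Reply with the single next step, or DONE when finished.")
            answer = call_llm(prompt).strip()
            if answer == "DONE":
                break
            steps.append(answer)  # the output is folded into the next prompt
        return steps

    print(run_agent("Write a literature summary"))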

In this talk, De Kai examines how AI amplifies fear into an existential threat to society and humanity, and what we need to be doing about it. De Kai’s work across AI, language, music, creativity, and ethics centers on enabling cultures to interrelate. For pioneering contributions to machine learning of AIs like Google/Yahoo/Microsoft Translate, he was honored by the Association for Computational Linguistics as one of only seventeen Founding Fellows worldwide and by Debrett’s HK 100 as one of the 100 most influential figures of Hong Kong. De Kai is a founding Professor of Computer Science and Engineering at HKUST and Distinguished Research Scholar at Berkeley’s ICSI (International Computer Science Institute). His public campaign applying AI to show the impact of universal masking against Covid received highly influential mass media coverage, and he serves on the board of AI ethics think tank The Future Society. De Kai is also creator of one of Hong Kong’s best known world music collectives, ReOrientate, and was one of eight inaugural members named by Google to its AI ethics council. This talk was given at a TEDx event using the TED conference format but independently organized by a local community.

Over the past year or so, generative AI models such as ChatGPT and DALL-E have made it possible to produce vast quantities of apparently human-like, high-quality creative content from a series of simple prompts.

Though highly capable – far outperforming humans in big-data pattern recognition tasks in particular – current AI systems are not intelligent in the same way we are. AI systems aren’t structured like our brains and don’t learn the same way.

AI systems also use vast amounts of energy and resources for training (compared to our three-or-so meals a day). Their ability to adapt and function in dynamic, hard-to-predict and noisy environments is poor in comparison to ours, and they lack human-like memory capabilities.

Pneumonia is a potentially fatal lung infection that progresses rapidly. Patients with pneumonia symptoms – such as a dry, hacking cough, breathing difficulties and high fever – generally receive a stethoscope examination of the lungs, followed by a chest X-ray to confirm diagnosis. Distinguishing between bacterial and viral pneumonia, however, remains a challenge, as both have similar clinical presentation.

Mathematical modelling and artificial intelligence could help improve the accuracy of disease diagnosis from radiographic images. Deep learning has become increasingly popular for medical image classification, and several studies have explored the use of convolutional neural network (CNN) models to automatically identify pneumonia from chest X-ray images. It is critical, however, to build efficient models that can analyse large numbers of medical images while keeping false negatives to a minimum.

Now, K M Abubeker and S Baskar at the Karpagam Academy of Higher Education in India have created a novel machine learning framework for pneumonia classification of chest X-ray images on a graphics processing unit (GPU). They describe their strategy in Machine Learning: Science and Technology.
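
Their paper describes a purpose-built framework, not reproduced here. As a loose sketch of the general approach described above (a CNN classifying chest X-rays, run on a GPU when available), the following PyTorch snippet shows the shape of such a model; the layer sizes, the 224x224 input resolution, and the two-class output are illustrative assumptions, not the authors' design:

    # Sketch of a small CNN for chest X-ray classification (e.g. bacterial
    # vs. viral pneumonia). Architecture details are illustrative only.
    import torch
    import torch.nn as nn

    class PneumoniaCNN(nn.Module):
        def __init__(self, num_classes: int = 2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1),  # grayscale X-ray in
                nn.ReLU(),
                nn.MaxPool2d(2),                              # 224 -> 112
                nn.Conv2d(16, 32, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.MaxPool2d(2),                              # 112 -> 56
                nn.Conv2d(32, 64, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.MaxPool2d(2),                              # 56 -> 28
            )
            self.classifier = nn.Linear(64 * 28 * 28, num_classes)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            x = self.features(x)
            return self.classifier(x.flatten(1))

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = PneumoniaCNN().to(device)
    batch = torch.randn(8, 1, 224, 224, device=device)  # stand-in for X-rays
    logits = model(batch)                               # shape: (8, 2)
    print(logits.argmax(dim=1))                         # predicted class per image

In practice such a model would be trained on labelled X-ray datasets and evaluated with particular attention to the false-negative rate noted above.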

This clip is from the Before Skool Podcast, episode 4, with Rupert Sheldrake. The full podcast can be accessed here: https://www.youtube.com/watch?v=68fjlUuvOGM&t=3784s.

Rupert Sheldrake, PhD, is a biologist and author best known for his hypothesis of morphic resonance. At Cambridge University he worked in developmental biology as a Fellow of Clare College. He was Principal Plant Physiologist at the International Crops Research Institute for the Semi-Arid Tropics in Hyderabad, India. From 2005 to 2010 he was Director of the Perrott-Warrick Project for research on unexplained human and animal abilities, administered by Trinity College, Cambridge. Sheldrake has published a number of books: A New Science of Life (1981), The Presence of the Past (1988), The Rebirth of Nature (1991), Seven Experiments That Could Change the World (1994), Dogs That Know When Their Owners Are Coming Home (1999), The Sense of Being Stared At (2003), The Science Delusion (published in the US as Science Set Free) (2012), Science and Spiritual Practices (2017), and Ways to Go Beyond and Why They Work (2019).

Rupert gave a talk entitled The Science Delusion at TEDx Whitechapel, Jan 12, 2013. The theme for the night was Visions for Transition: Challenging existing paradigms and redefining values (for a more beautiful world). In response to protests from two materialists in the US, the talk was taken out of circulation by TED, relegated to a corner of their website and stamped with a warning label.

To learn more about Rupert Sheldrake and his research, please visit https://www.sheldrake.org/

This video is from a presentation delivered at a private gathering in San Francisco on March 9th to leading technologists and decision-makers with the ability to influence the future of large language model AIs. The presentation took place before the launch of GPT-4.

Center for Humane Technology.

Original video: https://vimeo.com/809258916/92b420d98a

Since its launch in November 2022, ChatGPT, an artificial intelligence (AI) chatbot, has been causing quite a stir because of the software’s surprisingly human and accurate responses.

The generative AI system reached a record-breaking 100 million monthly active users only two months after launching. As its popularity continues to grow, however, the debate within the cybersecurity industry is over whether this type of technology will help make the internet safer or play right into the hands of those trying to cause chaos.

The explosion of new generative AI products and capabilities over the last several months, from ChatGPT to Bard and the many variants from others based on large language models (LLMs), has driven an overheated hype cycle. That hype has in turn sparked a similarly expansive and passionate discussion about the AI regulation that may be needed.

The AI regulation firestorm was ignited by the Future of Life Institute's open letter, now signed by thousands of AI researchers and other concerned figures. Notable signatories include Apple cofounder Steve Wozniak; SpaceX, Tesla and Twitter CEO Elon Musk; Stability AI CEO Emad Mostaque; Sapiens author Yuval Noah Harari; and Yoshua Bengio, founder of the AI research institute Mila.