Read the paper: https://lifearchitect.ai/roadmap/
The Memo: https://lifearchitect.ai/memo/

Sources: See the paper above.

Dr Alan D. Thompson is a world expert in artificial intelligence (AI), specialising in the augmentation of human intelligence, and advancing the evolution of ‘integrated AI’. Alan’s applied AI research and visualisations are featured across major international media, including citations in the University of Oxford’s debate on AI Ethics in December 2021.


Imagine an AI where, all in the same model, you could translate languages, write code, solve crossword puzzles, act as a chatbot, and do a whole bunch of other crazy things.

In this video, we check out the BLOOM large language model: a free and fully open-source 176B-parameter LLM.

BLOOM model: https://huggingface.co/bigscience/bloom

Quick examples of running BLOOM locally and/or via API: https://github.com/Sentdex/BLOOM_Examples
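As a rough illustration of the "via API" route, here is a minimal Python sketch for querying the hosted BLOOM model through the Hugging Face Inference API. The endpoint URL and the `max_new_tokens` parameter follow the Inference API's documented conventions at the time of writing; check the current docs (and the Sentdex repo above) before relying on them.

```python
import json

# Hosted inference endpoint for the BLOOM model (assumed from HF Inference API conventions).
API_URL = "https://api-inference.huggingface.co/models/bigscience/bloom"

def build_request(prompt, api_token, max_new_tokens=50):
    """Build the headers and JSON payload for a text-generation call to BLOOM."""
    headers = {"Authorization": f"Bearer {api_token}"}
    payload = {
        "inputs": prompt,
        "parameters": {"max_new_tokens": max_new_tokens},
    }
    return headers, payload

# To actually send the request (needs a real Hugging Face token):
#   import requests
#   headers, payload = build_request("Translate to French: Hello", "hf_...")
#   resp = requests.post(API_URL, headers=headers, json=payload)
#   print(resp.json())
```

Running the full 176B model locally needs hundreds of GB of weights and substantial GPU memory, so for casual experiments the API route above is the practical one.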

Tesla Inc (TSLA) CEO Elon Musk has taken Twitter by storm with his “sex tape” tweet, prompting wild guesses from his fans.

What Happened: Musk shared an image that he labeled as his “sex tape,” showing two tape dispensers placed in a way that formed the number 69.

Then he joked in the caption, saying, “But have you seen my sex tape?”

An interview with Emad Mostaque, founder of Stability AI.

OUTLINE:
0:00 — Intro.
1:30 — What is Stability AI?
3:45 — Where does the money come from?
5:20 — Is this the CERN of AI?
6:15 — Who gets access to the resources?
8:00 — What is Stable Diffusion?
11:40 — What if your model produces bad outputs?
14:20 — Do you employ people?
16:35 — Can you prevent the corruption of profit?
19:50 — How can people find you?
22:45 — Final thoughts, let’s destroy PowerPoint.

Links:
Homepage: https://ykilcher.com
Merch: https://ykilcher.com/merch
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://ykilcher.com/discord
LinkedIn: https://www.linkedin.com/in/ykilcher

If you want to support me, the best thing to do is to share the content. 🙂

To help developers protect their applications against possible misuse, we are introducing the faster and more accurate Moderation endpoint. This endpoint provides OpenAI API developers with free access to GPT-based classifiers that detect undesired content — an instance of using AI systems to assist with human supervision of these systems. We have also released both a technical paper describing our methodology and the dataset used for evaluation.

When given a text input, the Moderation endpoint assesses whether the content is sexual, hateful, violent, or promotes self-harm — content prohibited by our content policy. The endpoint has been trained to be quick, accurate, and to perform robustly across a range of applications. Importantly, this reduces the chances of products “saying” the wrong thing, even when deployed to users at scale. As a consequence, AI can unlock benefits in sensitive settings, like education, where it could not otherwise be used with confidence.
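The description above can be sketched as a small client-side check. The endpoint path (`/v1/moderations`), the `{"input": ...}` request body, and the `results[0].flagged` response field follow OpenAI's public API documentation at the time of writing; treat them as assumptions and verify against the current reference.

```python
import json
import urllib.request

# Assumed endpoint path, per OpenAI's public API docs.
MODERATION_URL = "https://api.openai.com/v1/moderations"

def build_moderation_request(text, api_key):
    """Build the HTTP POST request that asks the endpoint to classify `text`."""
    return urllib.request.Request(
        MODERATION_URL,
        data=json.dumps({"input": text}).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

def is_allowed(response_body):
    """Return True if the moderation response did not flag the text.

    The response is assumed to contain a top-level `results` list whose
    first entry carries a boolean `flagged` field.
    """
    result = json.loads(response_body)["results"][0]
    return not result["flagged"]

# Typical usage (needs a real API key):
#   req = build_moderation_request(user_text, "sk-...")
#   with urllib.request.urlopen(req) as resp:
#       if not is_allowed(resp.read()):
#           reject_or_redact(user_text)  # hypothetical application handler
```

Gating user-facing output on `is_allowed` is what lets a product avoid “saying” the wrong thing at scale, as described above.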