
The signaling protein mTOR is excessively active in many cancer cells and plays a key role in diseases such as diabetes, inflammation, and aging. Autophagy, meanwhile, is tightly regulated in cells by mTOR activity; inhibiting mTOR can increase autophagy and thereby induce cancer cell death.

Professor Kim Se-yun's research team conducted a study to develop an mTOR-inhibiting anticancer drug through a drug repurposing strategy, built on binding-prediction technology that models the physical interactions between compounds and target proteins using the protein's three-dimensional structure.

Drug repurposing seeks new indications for FDA-approved drugs, or for clinical-stage compounds whose safety has already been demonstrated. According to the researchers, this strategy can dramatically shorten the enormous time and investment of new drug development, which traditionally takes more than 10 years.
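To make the approach concrete, here is a minimal, purely illustrative Python sketch of such a repurposing screen, in which a library of already-approved compounds is ranked by predicted binding affinity against a target protein. The compound names and the predicted_binding_affinity function are hypothetical placeholders; in a real pipeline the scores would come from structure-based docking against the target's 3D structure.

    # Hypothetical sketch of a drug-repurposing screen: rank approved
    # compounds by predicted binding affinity to a target such as mTOR.

    def predicted_binding_affinity(compound: str, target: str) -> float:
        # Mock score for illustration only. A real pipeline would dock the
        # compound into the target's 3D binding pocket and return a predicted
        # binding energy in kcal/mol (more negative = tighter binding).
        return -(hash((compound, target)) % 100) / 10.0

    def repurposing_screen(compounds, target="mTOR"):
        # Score every compound against the target; strongest binders first.
        scored = [(c, predicted_binding_affinity(c, target)) for c in compounds]
        return sorted(scored, key=lambda pair: pair[1])

    approved_drugs = ["drug_A", "drug_B", "drug_C"]  # placeholder names
    for name, score in repurposing_screen(approved_drugs):
        print(f"{name}: predicted affinity {score:.1f} kcal/mol")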

Graphcore's new AI supercomputer is designed to support models with 500 trillion parameters (5x that of the human brain) and to compute at 10 exaflops (10x that of the human brain), for a cost of $120 million USD. Also covered: a new AI-powered exoskeleton uses machine learning to help patients walk; AI detects diabetes and prediabetes by identifying ECG signals indicative of the disease; and AI identifies cancerous lesions in IBD patients.

AI News Timestamps:
0:00 New AI Supercomputer To Beat Human Brain
3:06 AI Powered Exoskeleton
4:35 AI Predicts Diabetes
6:55 AI Detects Cancerous Lesions For IBD

👉 Crypto AI News: https://www.youtube.com/c/CryptoAINews/videos


The story of future gaming begins when artificial intelligence takes over building games for players while they play them, and virtual reality headsets map the human brain.

This sci-fi documentary also covers AI NPC characters, Metaverse scoreboards, brain-computer chips and gaming, Elon Musk and Neuralink, and the simulation hypothesis.

Taking inspiration from the likes of Westworld, Ready Player One, Squid Game, and Inception.

A future-gaming sci-fi documentary and a timelapse look into the future.

Read the paper: https://lifearchitect.ai/roadmap/
The Memo: https://lifearchitect.ai/memo/

Sources: See the paper above.

Dr Alan D. Thompson is a world expert in artificial intelligence (AI), specialising in the augmentation of human intelligence, and advancing the evolution of ‘integrated AI’. Alan’s applied AI research and visualisations are featured across major international media, including citations in the University of Oxford’s debate on AI Ethics in December 2021.


Imagine an AI where, all in the same model, you could translate languages, write code, solve crossword puzzles, be a chatbot, and do a whole bunch of other crazy things.

In this video, we check out BLOOM, a free and fully open-source large language model with 176 billion parameters.

BLOOM model: https://huggingface.co/bigscience/bloom

Quick examples of running BLOOM locally and/or via API: https://github.com/Sentdex/BLOOM_Examples
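As a quick taste of what those examples look like, here is a minimal sketch of generating text locally with the Hugging Face transformers library. The full bigscience/bloom checkpoint needs hundreds of gigabytes of memory, so this sketch assumes the small bigscience/bloom-560m variant for illustration.

    # Minimal sketch: local text generation with a small BLOOM checkpoint.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "bigscience/bloom-560m"  # small sibling of bigscience/bloom
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    prompt = "The benefits of open-source language models include"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=30)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))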

An interview with Emad Mostaque, founder of Stability AI.

OUTLINE:
0:00 — Intro
1:30 — What is Stability AI?
3:45 — Where does the money come from?
5:20 — Is this the CERN of AI?
6:15 — Who gets access to the resources?
8:00 — What is Stable Diffusion?
11:40 — What if your model produces bad outputs?
14:20 — Do you employ people?
16:35 — Can you prevent the corruption of profit?
19:50 — How can people find you?
22:45 — Final thoughts, let’s destroy PowerPoint.

Links:
Homepage: https://ykilcher.com
Merch: https://ykilcher.com/merch
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://ykilcher.com/discord
LinkedIn: https://www.linkedin.com/in/ykilcher

If you want to support me, the best thing to do is to share the content. 🙂

To help developers protect their applications against possible misuse, we are introducing the faster and more accurate Moderation endpoint. This endpoint provides OpenAI API developers with free access to GPT-based classifiers that detect undesired content — an instance of using AI systems to assist with human supervision of these systems. We have also released both a technical paper describing our methodology and the dataset used for evaluation.

When given a text input, the Moderation endpoint assesses whether the content is sexual, hateful, violent, or promotes self-harm, all content prohibited by our content policy. The endpoint has been trained to be quick, accurate, and robust across a range of applications. Importantly, this reduces the chances of products "saying" the wrong thing, even when deployed to users at scale. As a consequence, AI can unlock benefits in sensitive settings, like education, where it could not otherwise be used with confidence.
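For illustration, a minimal sketch of calling the Moderation endpoint over plain HTTPS might look like the following. It assumes an API key is available in the OPENAI_API_KEY environment variable and uses the /v1/moderations request and response format; the example text is a placeholder.

    # Minimal sketch: screen a piece of text with the Moderation endpoint.
    import os
    import requests

    def moderate(text: str) -> dict:
        resp = requests.post(
            "https://api.openai.com/v1/moderations",
            headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
            json={"input": text},
        )
        resp.raise_for_status()
        return resp.json()["results"][0]  # one result per input

    result = moderate("Some user-generated text to screen.")
    print("flagged:", result["flagged"])        # overall policy verdict
    print("categories:", result["categories"])  # per-category booleans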