
What would happen if you combined the power of artificial intelligence with a human’s intellect and creativity? That’s the question Hack My Dogma sought to answer with a groundbreaking new series of videos featuring conversations between GPT-3, an AI created by OpenAI, and Marcus, a human.

Through these videos, viewers can experience the power of AI firsthand and get a glimpse into the potential of artificial intelligence, as well as what the future may hold.

In the series, Marcus and the AI discuss a range of topics such as IT, graphics engines, artificial intelligence, the singularity, and more.

Even more impressive is the computer-animated model of GPT-3, created to represent GPT-3’s self-image. The model was rendered using synthesia.io, giving viewers a unique glimpse into the mind of the AI. Join the conversation today to experience the power of AI for yourself and explore the potential of artificial intelligence!
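
For readers curious how such a dialogue might be scripted, here is a minimal sketch using OpenAI’s GPT-3-era completions API; the model name, prompt framing, and turn loop are illustrative assumptions, not details from the series.

```python
# Minimal sketch of scripting a human/AI conversation with OpenAI's
# GPT-3-era completions API. Model name and prompt framing are
# assumptions for illustration, not details from the video series.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

history = "The following is a conversation between Marcus, a human, and an AI.\n"

def ai_reply(user_line: str) -> str:
    """Append the human's line, then ask the model to continue as the AI."""
    global history
    history += f"Marcus: {user_line}\nAI:"
    response = openai.Completion.create(
        model="text-davinci-003",  # a GPT-3-era model (assumption)
        prompt=history,
        max_tokens=150,
        temperature=0.7,
        stop=["Marcus:"],  # stop before inventing the next human turn
    )
    text = response.choices[0].text.strip()
    history += f" {text}\n"
    return text

print(ai_reply("What do you think the singularity will look like?"))
```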

Color-changing cars. Flying taxis. And a gaming-style tablet that can steer a vehicle.

Car companies descended on CES in Las Vegas this week to show off their latest ideas—some quirky and far out, others more relevant in the near term—as the industry navigates technological shifts in its business.

During the week, car executives unveiled new in-car software, hyped automated-driving tech, and highlighted new partnerships and investment deals. Auto makers in recent years have accelerated the rollout of their new battery-powered models.

With 2023 right around the corner, we make five predictions about what will happen in the artificial intelligence (AI) world.

5 Predictions:
1) GPT-4 will be released.
2) Autonomous vehicles will become a primary means of transportation for the general population.
3) Search engines will evolve.
4) Humanoid robot development will advance.
5) We will run out of data to train AI language models.

At CES 2023, industry experts weighed in on why drones should be your next delivery vehicle.

Drone deliveries could help reduce unnecessary deaths on roads and improve the environment, industry experts explained Friday.

For example, delivering that Hamburger Helper via a drone versus someone getting in their car and driving to a place…



A sharp-eyed developer at Krita noticed recently that, in the settings for their Adobe Creative Cloud account, the company had opted them (and everyone else) into a “content analysis” program whereby it “may analyze your content using techniques such as machine learning (e.g. for pattern recognition) to develop and improve our products and services.” Some have taken this to mean that Adobe is ingesting your images for its AI. And… it does. Kind of? But it’s not that simple.

First off, lots of software out there has some kind of “share information with the developer” option, where it sends telemetry like how often you use the app or certain features, why it crashed, etc. Usually it gives you an option to turn this off during installation, but not always — Microsoft incurred the ire of many when it basically said telemetry was on by default and impossible to turn off in Windows 10.

That’s gross, but what’s worse is slipping in a new sharing method and opting existing users into it. Adobe told PetaPixel that this content analysis thing “is not new and has been in place for a decade.” If the company was using machine learning for this purpose and said so a decade ago, that’s quite impressive, as is the fact that apparently no one noticed the whole time. That seems unlikely. I suspect the policy has existed in some form but has quietly evolved.

The visually impaired are getting a helping hand (or a helping belt, as it were) from Korean startup AI Guided. At CES in Las Vegas, the company was showing off some pretty neat tech that incorporates optical and Lidar technology along with AI-powered on-device computing to identify obstacles and help with navigation.

The company claims its advanced object identification helps keep walkers safe, and it uses gentle haptic feedback to assist with wayfinding. The whole system is carried on a belt, leaving the user’s hands free.
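
AI Guided hasn’t published implementation details, but the general pattern the description suggests (a depth scan feeding an obstacle detector that drives directional haptic cues) is easy to illustrate. A toy sketch follows; every name, angle convention, and threshold here is hypothetical, not the company’s design.

```python
# Toy sketch of the pattern described above: turn one lidar depth
# sweep into a directional haptic cue. All angles, thresholds, and
# interfaces are hypothetical; this is not AI Guided's design.
from dataclasses import dataclass

@dataclass
class Obstacle:
    angle_deg: float   # bearing relative to straight ahead (negative = left)
    distance_m: float

def nearest_obstacle(scan, max_range_m=3.0):
    """scan: iterable of (angle_deg, distance_m) pairs from one sweep."""
    hits = [Obstacle(a, d) for a, d in scan if d <= max_range_m]
    return min(hits, key=lambda o: o.distance_m) if hits else None

def haptic_cue(obstacle):
    """Map an obstacle to (belt motor side, vibration intensity 0..1)."""
    side = "left" if obstacle.angle_deg < 0 else "right"
    intensity = max(0.0, min(1.0, 1.0 - obstacle.distance_m / 3.0))
    return side, intensity  # closer obstacles vibrate harder

# Example sweep: something 1.2 m away, slightly to the walker's left.
scan = [(-20.0, 1.2), (5.0, 4.8), (40.0, 2.9)]
obs = nearest_obstacle(scan)
if obs:
    print(haptic_cue(obs))  # -> ('left', 0.6)
```

In a real device the detector would be a learned model over camera and lidar data rather than a simple range threshold, but the output contract (direction plus urgency, rendered as vibration) would look much the same.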

Google’s new sensor-denoising algorithm is yet another game changer for low-light photography. Within a handful of years, it will combine with other advances coming down the pipe to give further impetus to a revolution in night vision. The video below speaks for itself. In effect, the system takes a series of images from different angles, exposures, and so on, then accurately reconstructs what is missing:



📝 The paper “NeRF in the Dark: High Dynamic Range View Synthesis from Noisy Raw Images” is available here:
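
RawNeRF trains a neural radiance field directly on the noisy raw captures, but the statistical intuition behind merging many noisy views is easy to demonstrate on its own: averaging N frames with independent zero-mean noise shrinks the noise by roughly a factor of √N. A toy sketch of that principle only, not the paper’s method:

```python
# Toy demonstration of why merging many noisy exposures helps:
# averaging N frames with independent zero-mean noise cuts the
# error by about sqrt(N). This illustrates the statistical
# intuition only; RawNeRF itself trains a neural radiance field
# on noisy raw images rather than simply averaging them.
import numpy as np

rng = np.random.default_rng(0)
signal = rng.uniform(0.0, 1.0, size=(64, 64))  # "true" scene radiance

def noisy_frame(sigma=0.3):
    return signal + rng.normal(0.0, sigma, size=signal.shape)

for n in (1, 4, 16, 64):
    merged = np.mean([noisy_frame() for _ in range(n)], axis=0)
    rmse = np.sqrt(np.mean((merged - signal) ** 2))
    print(f"{n:3d} frames -> RMSE {rmse:.3f}")  # roughly 0.3 / sqrt(n)
```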

Google co-founder Sergey Brin has taken a stance on artificial intelligence rather similar to Tesla CEO Elon Musk’s, emphasizing AI’s dangers in a recent investor communication. According to the Russian-born billionaire, the present day is an era of possibilities, but it is also a time when responsibility has to be practiced, particularly when it comes to emerging technologies.

“We’re in an era of great inspiration and possibility, but with this opportunity comes the need for tremendous thoughtfulness and responsibility as technology is deeply and irrevocably interwoven into our societies,” he wrote.

Brin’s statements were outlined in Alphabet’s recent Founders’ Letter, where the 44-year-old billionaire described how Google is utilizing bleeding-edge technology for its ventures. While AI as a discipline is still an emerging field, Brin noted that there are already a lot of everyday applications for the technology. Among these are the algorithms utilized by Waymo’s self-driving cars, the smart cooling units of Google’s data centers, and of course, Google Translate and YouTube’s automatic captions.

The Connectome and Connectomics: Seeking Neural Circuit Motifs

Talk Overview: The human brain is extremely complex, with much greater structural and functional diversity than other organs, and this complexity is determined both by one’s experiences and one’s genes. In Part 1 of his talk, Lichtman explains how mapping the connections in the brain (the connectome) may lead to a better understanding of brain function. Together with his colleagues, Lichtman has developed tools to label individual cells in the nervous system with different colors, producing beautiful and revealing maps of the neuronal connections.
Using transgenic mice with differently colored, fluorescently labeled proteins in each neuron (Brainbow mice), Lichtman and his colleagues were able to follow the formation and destruction of neuromuscular junctions during mouse development. This work is the focus of Part 2.
In Part 3, Lichtman asks whether it might someday be possible to map all of the neural connections in the brain. He describes the technical advances that have allowed him and his colleagues to begin this endeavor, as well as the enormous challenges of deciphering the brain connectome.

Speaker Bio: Jeff Lichtman’s interest in how specific neuronal connections are made and maintained began while he was an MD-PhD student at Washington University in Saint Louis. Lichtman remained at Washington University for nearly 30 years. In 2004, he moved to Harvard University, where he is Professor of Molecular and Cellular Biology and a member of the Center for Brain Science.
A major focus of Lichtman’s current research is to decode the map of all the neural connections in the brain. To this end, Lichtman and his colleagues have developed exciting new tools and techniques such as “Brainbow” mice and automated ultra-thin tissue-slicing machines.