
Artificial intelligence is the buzzword of the year, with major players in almost every industry exploring this cutting-edge technology. From self-checkout cash registers, to applications that analyse large volumes of data in real time, to advanced security check-ins at airports, AI is just about everywhere.

The logistics industry currently faces a number of challenges related to cost, efficiency, security, bureaucracy, and reliability. According to experts, new-age technologies such as AI, machine learning, blockchain, and big data are the best fix for the sector, improving the supply-chain ecosystem end to end, from purchasing through internal processes such as storage, auditing, and delivery.

AI is an underlying technology that can enhance supplier selection, strengthen supplier relationship management, and more. Combined with big data analytics, AI also helps analyse supplier-related data such as on-time delivery performance, credit scores, audits, and evaluations, enabling valuable decisions based on actionable real-time insights.

The first important generative models for images used an approach to artificial intelligence called a neural network — a program composed of many layers of computational units called artificial neurons. But even as the quality of their images got better, the models proved unreliable and hard to train. Meanwhile, a powerful generative model — created by a postdoctoral researcher with a passion for physics — lay dormant, until two graduate students made technical breakthroughs that brought the beast to life.

DALL·E 2 is such a beast. The key insight that makes DALL·E 2’s images possible — as well as those of its competitors Stable Diffusion and Imagen — comes from the world of physics. The system that underpins them, known as a diffusion model, is heavily inspired by nonequilibrium thermodynamics, which governs phenomena like the spread of fluids and gases. “There are a lot of techniques that were initially invented by physicists and now are very important in machine learning,” said Yang Song, a machine learning researcher at OpenAI.
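The thermodynamics connection can be made concrete. A diffusion model's forward process gradually adds Gaussian noise to an image until only noise remains, and a neural network is trained to reverse that process. A minimal sketch of the forward noising step, using an illustrative linear noise schedule (the names and parameters here are assumptions for illustration, not OpenAI's actual code):

```python
import numpy as np

def forward_diffuse(x0, t, betas):
    """Sample a noised image x_t from a clean image x0 in closed form.

    Repeatedly adding small amounts of Gaussian noise is a discretised
    diffusion process; compounding the per-step variances (betas) gives
    this one-shot formula for any timestep t.
    """
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)[t]  # cumulative signal retained up to step t
    noise = np.random.randn(*x0.shape)
    xt = np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise
    return xt, noise  # the network is trained to predict `noise` from `xt`

# A toy "image" drifts toward pure Gaussian noise as t grows.
betas = np.linspace(1e-4, 0.02, 1000)  # illustrative linear schedule
x0 = np.ones((8, 8))
x_early, _ = forward_diffuse(x0, 10, betas)    # still mostly signal
x_late, _ = forward_diffuse(x0, 999, betas)    # essentially pure noise
```

Generating an image then amounts to running this process in reverse: starting from pure noise and repeatedly denoising, like watching a drop of ink un-spread from water.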

The goal of this activity was to have fun and push everyone’s imagination to the limit. Everyone was amazed by DALL·E 2’s creative scope and seemingly infinite possibilities.

Because the session was interactive and thought-provoking, it turned the usually tiresome process of learning into an energising experience.

For those unfamiliar with DALL·E 2, it is OpenAI’s newest tool for generating images from text prompts in seconds. The name “DALL·E” combines the Spanish artist Salvador Dalí with Pixar’s WALL-E. DALL·E 2 builds on GPT-3 (the third-generation Generative Pre-trained Transformer), one of OpenAI’s recent releases.



We’ve all heard, and brushed off, those crazy-seeming futurists’ claims that robots will replace most human activities in the future. But given the pace at which AI and technology are advancing, the thought doesn’t seem so crazy after all.




Using radar commonly deployed to track speeders and fastballs, researchers have developed an automated system that will allow cars to peer around corners and spot oncoming traffic and pedestrians.

The system, easily integrated into today’s vehicles, uses Doppler radar to bounce radio waves off surfaces such as buildings and parked automobiles. The radar signal hits the surface at an angle, so its reflection rebounds off like a cue ball hitting the wall of a pool table. The signal goes on to strike objects hidden around the corner. Some of the radar signal bounces back to detectors mounted on the car, allowing the system to see objects around the corner and tell whether they are moving or stationary.
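The cue-ball analogy is ordinary specular reflection: the angle of incidence equals the angle of reflection, so the ray's direction after a bounce is its mirror image about the surface normal. A small geometric sketch (a deliberate simplification of the researchers' full multipath radar model; the names here are illustrative):

```python
import numpy as np

def reflect(direction, normal):
    """Mirror a ray direction about a surface normal (specular reflection).

    r = d - 2 (d . n) n, with n a unit normal -- the "cue ball" geometry:
    angle of incidence equals angle of reflection.
    """
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    d = np.asarray(direction, dtype=float)
    return d - 2.0 * np.dot(d, n) * n

# A radar ray travelling right-and-down grazes a vertical wall
# (normal pointing left) and rebounds left-and-down, continuing
# toward the region hidden around the corner.
incoming = np.array([1.0, -1.0])
wall_normal = np.array([-1.0, 0.0])
bounced = reflect(incoming, wall_normal)  # → [-1., -1.]
```

The return trip works the same way in reverse, which is how some of the signal finds its way back to the car-mounted detectors; the Doppler shift of that returning signal is what distinguishes moving objects from stationary ones.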

“This will enable cars to see occluded objects that today’s lidar and camera sensors cannot record, for example, allowing a self-driving vehicle to see around a dangerous intersection,” said Felix Heide, an assistant professor of computer science at Princeton University and one of the researchers. “The radar sensors are also relatively low-cost, especially compared to lidar sensors, and scale to mass production.”

The Memo: https://lifearchitect.ai/memo/

Demo site: https://muse-model.github.io/
Read the paper: https://arxiv.org/abs/2301.

Dr Alan D. Thompson is a world expert in artificial intelligence (AI), specialising in the augmentation of human intelligence, and advancing the evolution of ‘integrated AI’. Alan’s applied AI research and visualisations are featured across major international media, including citations in the University of Oxford’s debate on AI Ethics in December 2021.



Nadella highlighted that while generative AI tools such as ChatGPT and DALL·E produced less than 1% of the world’s data in 2021, that share could rise to 10% of all data by 2025.

“In future, the generative models will generate most of the data. We are right now seeing the emergence of a new reasoning engine. We’ll clearly have to talk about this reasoning engine — what are its responsible uses, what displacements will it cause, and so on. But on the other side, we should also think about how it can augment us in what we are doing today since it can have a huge impact on our future,” Nadella said.

“Ultimately, these tools will accelerate creativity, ingenuity and productivity across a range of tasks. It is going to be a golden age — the computer revolution created mass consumer behaviour change and productivity for knowledge workers. But, what if we could spread that productivity more evenly? To me, that is one of the biggest things to look forward to, and the way to achieve this is by building a robust data infrastructure,” he added.

I really don’t care about IQ tests; ChatGPT does not perform at a human level. I’ve spent hours with it. Sometimes it does come off like a human with an IQ of about 83, all concentrated in verbal skills. Sometimes it sounds like a human with a much higher IQ than that (and a bunch of naive prejudices). But if you take it out of its comfort zone and try to get it to think, it sounds more like a human with profound brain damage. You can take it step by step through a chain of simple inferences and still have it give an obviously wrong, pattern-matched answer at the end. I wish I’d saved what it told me about cooking and neutrons. Let’s just say it became clear that it was not using an actual model of the physical world to generate its answers.

Other examples are cherry-picked. Having prompted DALL-E and Stable Diffusion quite a bit, I’m fairly convinced those drawings are heavily cherry-picked: normally you get a few images that match your prompt, plus a bunch that don’t really meet the spec, not to mention the occasional bit of eldritch horror. That doesn’t happen when you ask a human to draw something, not even a small child. And you don’t have to iterate on the prompt nearly as much with a human, either.

Competitive coding is a cherry-picked problem, about as easy as a coding challenge gets: the tasks are tightly bounded, described in terms that almost amount to code themselves, and come with comprehensive test cases. Meanwhile, “coding assistants” are out there annoying people by injecting really dumb bugs into their output, code just close enough to right that you might miss those bugs on a quick glance and really get yourself into trouble.