
These 120 people (91 pictured due to size restrictions) have dedicated their lives, their ideas and often a great deal of capital to bringing these amazing ideas into practice. Their language is passionate, and their ideas range from big and bold at one end to extremely technical and nuanced at the other. Imagine trying to take these vast ideas, covering so many dimensions and hundreds of thousands of words of conversation, and to see patterns or signals in them. These interviews form the underbelly of the next book I am working on, titled Envisage: 100 ideas about the world of ten years from now.

Two years ago, maybe even one year ago, this would have meant either a very manual, forensic examination by a team of people with expertise in the relevant areas, or building a database. Days, weeks and even months would go by, with many revisions.

In the company’s own tests, its proprietary camera system outperformed LiDAR in multiple conditions.

After getting his Ph.D. from the Massachusetts Institute of Technology (MIT), Leaf Jiang spent more than a decade building laser ranging systems for the military for various 3D sensing applications. In his experience, Leaf found that laser-based detection systems were too expensive to be deployed on autonomous vehicles being developed for the future, and that’s how NoDar was born.

Light detection and ranging (LiDAR) systems use laser beams to scan their surroundings and create 3D images from the data obtained when surfaces reflect the light. As companies look to make autonomous driving more mainstream…
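A minimal sketch of the physics the two approaches rely on (the constants and figures below are illustrative, not NoDar’s or any vendor’s actual numbers): LiDAR recovers distance from the round-trip time of a light pulse, while camera-based systems typically recover depth from the disparity between two views.

```python
# Time-of-flight ranging, the principle behind LiDAR: a pulse travels to a
# surface and back, so distance is half the round trip at the speed of light.
C = 299_792_458.0  # speed of light, m/s

def lidar_distance(round_trip_s: float) -> float:
    return C * round_trip_s / 2.0

# Stereo ranging, the general principle behind camera-based 3D sensing
# (not NoDar's proprietary algorithm): depth = focal length * baseline /
# disparity, with focal length and disparity in pixels and baseline in meters.
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    return focal_px * baseline_m / disparity_px

print(lidar_distance(6.67e-7))          # ~100 m from a 667 ns round trip
print(stereo_depth(1400.0, 1.2, 16.8))  # 100 m from a wide-baseline stereo pair
```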



A fine-tuned version of GPT-3.5 Turbo can outperform GPT-4, said OpenAI.

US-based AI company OpenAI just released the fine-tuning API for GPT-3.5 Turbo. This gives developers more flexibility to customize models that perform better for their use cases. The company ran tests, showing that the fine-tuned versions of GPT-3.5 Turbo can surpass GPT-4’s base capabilities on certain tasks.

OpenAI released gpt-3.5-turbo in March this year as a ChatGPT model family for various non-chat uses. It’s priced at $0.002 per 1k tokens, which the AI company…
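For reference, starting a fine-tuning job is a two-step API call: upload a JSONL file of training examples, then create the job. This sketch follows OpenAI’s documentation at launch and the pre-1.0 Python client; the file name and API key are placeholders.

```python
import openai

openai.api_key = "sk-..."  # placeholder; use your own API key

# Step 1: upload a JSONL file of chat-formatted training examples.
training_file = openai.File.create(
    file=open("training_data.jsonl", "rb"),
    purpose="fine-tune",
)

# Step 2: start a fine-tuning job on GPT-3.5 Turbo.
job = openai.FineTuningJob.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)
print(job.id, job.status)  # poll until the job reports "succeeded"
```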




ElevenLabs, a year-old startup that is leveraging the power of machine learning for voice cloning and synthesis, today announced the expansion of its platform with a new text-to-speech model that supports 30 languages.

The expansion marks the platform’s official exit from the beta phase, making it ready to use for enterprises and individuals looking to customize their content for audiences worldwide. It comes more than a month after ElevenLabs’ $19 million Series A round, which valued the company at nearly $100 million.
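As an illustration, a request to the multilingual model looks like an ordinary call to ElevenLabs’ text-to-speech endpoint with the new model ID. The API key and voice ID below are placeholders, and the model ID reflects the naming used around the launch.

```python
import requests

API_KEY = "your-xi-api-key"  # placeholder
VOICE_ID = "your-voice-id"   # placeholder: any voice from your account

resp = requests.post(
    f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
    headers={"xi-api-key": API_KEY, "Content-Type": "application/json"},
    json={
        # Spanish input; the multilingual model handles 30 languages.
        "text": "Hola, bienvenidos a nuestro producto.",
        "model_id": "eleven_multilingual_v2",
    },
)
resp.raise_for_status()
with open("output.mp3", "wb") as f:
    f.write(resp.content)  # the endpoint returns audio bytes
```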


Microsegmentation is table stakes for CISOs looking to gain the speed, scale and time-to-market advantages that multicloud tech stacks provide to digital-first business initiatives.

Gartner predicts that through 2023, at least 99% of cloud security failures will be the user’s fault. Getting microsegmentation right in multicloud configurations can make or break any zero-trust initiative. Ninety percent of enterprises migrating to the cloud are adopting zero trust, but just 22% are confident their organization will capitalize on its many benefits and transform their business. Zscaler’s The State of Zero Trust Transformation 2023 Report says secure cloud transformation is impossible with legacy network security infrastructure such as firewalls and VPNs.

Artificial General Intelligence (AGI) is a term for artificial intelligence systems that meet or exceed human performance on the broad range of tasks that humans are capable of performing. There are benefits and downsides to AGI. On the upside, AGIs could do most of the labor that consumes a vast amount of humanity’s time and energy. AGI could herald a utopia where no one has wants that cannot be fulfilled. AGI could also result in an unbalanced situation where one company (or a few) dominates the economy, exacerbating the existing dichotomy between the top 1% and the rest of humankind. Beyond that, the argument goes, a super-intelligent AGI could find it beneficial to enslave humans for its own purposes, or to exterminate them so as not to compete for resources. One hypothetical scenario is that an AGI smarter than humans could simply design a better AGI, which can, in turn, design an even better AGI, leading to what is called hard take-off and the singularity.

I do not know of any theory that claims that AGI or the singularity is impossible. However, I am generally skeptical of arguments that Large Language Models such as the GPT series (GPT-2, GPT-3, GPT-4, GPT-X) are on the pathway to AGI. This article will attempt to explain why I believe that to be the case, and what I think is missing should humanity (or members of the human race) choose to try to achieve AGI. I will also try to convey a sense of why it is easy to talk about the so-called “recipe for AGI” in the abstract, but why physics itself will prevent any sudden and unexpected leap from where we are now to AGI or super-AGI.

To achieve AGI, it seems likely we will need one or more of the following…

Waterborne illness is one of the leading causes of infectious disease outbreaks in refugee and internally displaced persons (IDP) settlements, but a team led by York University has developed a new technique to keep drinking water safe using machine learning, and it could be a game changer. The research is published in the journal PLOS Water.

As drinking water is not piped into homes in most settlements, residents instead collect it from public tap stands using storage containers.

“When water is stored in a container in a dwelling it is at high risk of being exposed to contaminants, so it’s imperative there is enough free residual chlorine to kill any pathogens,” says Lassonde School of Engineering Ph.D. student Michael De Santi, who is part of York’s Dahdaleh Institute for Global Health Research, and who led the research.
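The study’s actual model is more sophisticated, but the core idea can be sketched in a few lines: learn, from paired tap-stand and household measurements, how much free residual chlorine survives storage, then flag water likely to fall below a safe threshold. Everything below (the features, the decay curve, and the 0.2 mg/L cutoff commonly cited from WHO guidance) is an illustrative stand-in, not the paper’s model or data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Illustrative synthetic data only; the real study fits its model to field
# measurements from settlements. Features: chlorine at the tap stand (mg/L),
# storage time (h), water temperature (C).
rng = np.random.default_rng(0)
n = 500
tap_cl = rng.uniform(0.2, 2.0, n)
hours = rng.uniform(0.0, 24.0, n)
temp = rng.uniform(15.0, 35.0, n)
# Toy first-order decay that speeds up in warm water, plus measurement noise.
household_cl = tap_cl * np.exp(-0.04 * hours * (1 + 0.02 * (temp - 20)))
household_cl += rng.normal(0.0, 0.02, n)

X = np.column_stack([tap_cl, hours, temp])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, household_cl)

# Predict residual chlorine after 18 hours of warm storage and compare it
# against the commonly cited 0.2 mg/L point-of-use threshold.
pred = model.predict([[1.0, 18.0, 30.0]])[0]
print(f"predicted free residual chlorine: {pred:.2f} mg/L",
      "(safe)" if pred >= 0.2 else "(re-chlorinate)")
```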

In today’s fast-paced technological landscape, Artificial Intelligence (AI) has emerged as a game-changer in various industries. With its ability to analyze vast amounts of data and derive meaningful insights, AI has now made its way into the realm of circuit design and hardware engineering. This article explores the transformative potential of AI in these domains, focusing on how it can accelerate component selection, enhance quality control, enable failure analysis, predict maintenance requirements, streamline supply chain management, optimize demand forecasting, and much more.

Circuit Design

Through the adoption of AI, hardware engineers gain unparalleled help in their pursuit of excellence. By industriously investigating component databases and running innovative simulations, AI reveals the secrets of sublime circuit performance. Engineers can then go on to augment their own intelligence, designing circuits that exceed expectations and reinvent what is possible in the realm of technology.
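To make “AI-assisted component selection” concrete, here is the simplest possible stand-in: filter and rank a parts database against design constraints. A real tool would learn its scoring from simulation and field data rather than use the hand-written rule below, and all parts and figures here are made up.

```python
from dataclasses import dataclass

@dataclass
class OpAmp:
    name: str
    gbw_mhz: float   # gain-bandwidth product, MHz
    noise_nv: float  # input noise density, nV/sqrt(Hz)
    cost_usd: float

# A toy parts database.
PARTS = [
    OpAmp("A", 10.0, 8.0, 0.45),
    OpAmp("B", 50.0, 5.5, 1.20),
    OpAmp("C", 180.0, 4.2, 3.10),
]

def score(part: OpAmp, min_gbw: float, max_noise: float) -> float:
    # Hard requirements first, then prefer the cheaper part.
    if part.gbw_mhz < min_gbw or part.noise_nv > max_noise:
        return float("-inf")
    return -part.cost_usd

best = max(PARTS, key=lambda p: score(p, min_gbw=40.0, max_noise=6.0))
print(best.name)  # "B": meets both specs at the lowest cost
```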

In the last ten years, AI systems have developed at rapid speed. Since the breakthrough of besting a legendary player at the complex game of Go in 2016, AI has progressed to recognizing images and speech better than humans, and to passing tests including business school exams and Amazon coding interview questions.

Last week, during a U.S. Senate Judiciary Committee hearing about regulating AI, Senator Richard Blumenthal of Connecticut described the reaction of his constituents to recent advances in AI. “The word that has been used repeatedly is scary.”

The Subcommittee on Privacy, Technology, and the Law overseeing the meeting heard testimonies from three expert witnesses, who stressed the pace of progress in AI. One of those witnesses, Dario Amodei, CEO of prominent AI company Anthropic, said that “the single most important thing to understand about AI is how fast it is moving.”

On Tuesday, OpenAI announced fine-tuning for GPT-3.5 Turbo—the AI model that powers the free version of ChatGPT—through its API. Fine-tuning allows training the model on custom data, such as company documents or project documentation. OpenAI claims that a fine-tuned model can perform as well as GPT-4 at lower cost in certain scenarios.

So basically, fine-tuning teaches GPT-3.5 Turbo about custom content, such as project documentation or any other written reference. That can come in handy if you want to build an AI assistant based on GPT-3.5 that is intimately familiar with your product or service when that knowledge is missing from its training data (which, as a reminder, was scraped off the web before September 2021).
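Concretely, each training example is one JSONL line in the same chat-message format the model consumes at inference time. A minimal sketch, assuming a made-up product and placeholder IDs (the "ft:" model name follows OpenAI’s documented naming scheme but is invented here):

```python
import json
import openai

openai.api_key = "sk-..."  # placeholder; use your own API key

# One training example in the documented chat format; "AcmeWidget" is a
# made-up product used purely for illustration.
example = {
    "messages": [
        {"role": "system", "content": "You are a support bot for AcmeWidget."},
        {"role": "user", "content": "How do I reset my AcmeWidget?"},
        {"role": "assistant",
         "content": "Hold the side button for ten seconds until the LED blinks twice."},
    ]
}
with open("training_data.jsonl", "a") as f:
    f.write(json.dumps(example) + "\n")

# Once the fine-tuning job finishes, query the model by the ID OpenAI
# returns (the name below is a placeholder in the documented "ft:" format).
reply = openai.ChatCompletion.create(
    model="ft:gpt-3.5-turbo:acme::abc123",
    messages=[{"role": "user", "content": "How do I reset my AcmeWidget?"}],
)
print(reply.choices[0].message.content)
```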