Summary: Researchers delved into how ChatGPT influences user decision-making, focusing on the ‘choice overload’ phenomenon. This condition emerges when an individual is overwhelmed by numerous options, often leading to decision paralysis or dissatisfaction.

The study found, however, that users preferred larger numbers of recommendations from ChatGPT over those from humans or online agents, appreciating the perceived accuracy of the chatbot’s suggestions. This points towards a new paradigm where AI-generated options might enhance decision-making processes across various industries.

When you type a question into Google Search, the site sometimes provides a quick answer called a Featured Snippet at the top of the results, pulled from websites it has indexed. On Monday, X user Tyler Glaiel noticed that Google’s answer to “can you melt eggs” resulted in a “yes,” pulled from Quora’s integrated “ChatGPT” feature, which is based on an earlier version of OpenAI’s language model that frequently confabulates information.

“This is actually hilarious,” Glaiel wrote in a follow-up post. “Quora SEO’d themselves to the top of every search result, and is now serving chatGPT answers on their page, so that’s propagating to the answers google gives.” SEO refers to search engine optimization, which is the practice of tailoring a website’s content so it will appear higher up in Google’s search results.

The rising involvement of artificial intelligence in our digital world, and the opportunities and challenges that come with it, has been a main topic of discussion at many recent security conferences and events. There is little doubt that humankind is on the verge of an era of exponential technological advancement, and AI is leading the way in the emerging digital world.

This trend has clear implications for cybersecurity. In simple terms, artificial intelligence acts as a powerful catalyst and enabler for cybersecurity across our connected ecosystem.

LAS VEGAS, Nev. (FOX5) — The Sphere in Las Vegas has introduced “life-like” robots that will interact with guests at the venue.

According to a news release, the Sphere describes the creation, which is named Aura, as the “world’s most advanced humanoid robot.”

Serving as the Sphere’s “spokesbot,” Aura will permanently reside in the grand atrium at the venue.

The automotive industry as a whole is experiencing a significant transformation driven by artificial intelligence. AI tech is increasingly integrated into vehicles to enhance safety, efficiency, and the overall driving experience. From advanced driver-assistance systems (ADAS) that provide features like adaptive cruise control and lane-keeping assist to autonomous driving capabilities, AI is at the forefront of automotive innovation. Now, BMW has found yet another way to implement the new tech.

The German manufacturer wants to take the next step in customer service with its latest offering, Proactive Care, which blends data and artificial intelligence. The new service lets BMW vehicles autonomously identify existing and predictable service requirements, so the company can proactively anticipate customer needs and offer timely solutions. The initial applications are now live, and the automaker plans continuous enhancements for the future.

Microsoft thinks next-generation nuclear reactors can power its data centers and AI ambitions, according to a job listing for a principal program manager who’ll lead the company’s nuclear energy strategy.

Data centers already use a hell of a lot of electricity, which could thwart the company’s climate goals unless it can find clean sources of energy. Energy-hungry AI makes that an even bigger challenge for the company to overcome. AI dominated Microsoft’s Surface event last week.

When OpenAI first unveiled GPT-4, its flagship text-generating AI model, the company touted the model’s multimodality — in other words, its ability to understand the context of images as well as text. GPT-4 could caption — and even interpret — relatively complex images, OpenAI said, for example identifying a Lightning Cable adapter from a picture of a plugged-in iPhone.

But since GPT-4’s announcement in late March, OpenAI has held back the model’s image features, reportedly on fears about abuse and privacy issues. Until recently, the exact nature of those fears remained a mystery. But early this week, OpenAI published a technical paper detailing its work to mitigate the more problematic aspects of GPT-4’s image-analyzing tools.

To date, GPT-4 with vision, abbreviated “GPT-4V” by OpenAI internally, has only been used regularly by a few thousand users of Be My Eyes, an app to help low-vision and blind people navigate the environments around them. Over the past few months, however, OpenAI also began to engage with “red teamers” to probe the model for signs of unintended behavior, according to the paper.

Kolena, a startup building tools to test, benchmark and validate the performance of AI models, today announced that it raised $15 million in a funding round led by Lobby Capital with participation from SignalFire and Bloomberg Beta.

The new cash brings Kolena’s total raised to $21 million, and will be put toward growing the company’s research team, partnering with regulatory bodies and expanding Kolena’s sales and marketing efforts, co-founder and CEO Mohamed Elgendy told TechCrunch in an email interview.

“The use cases for AI are enormous, but AI lacks trust from both builders and the public,” Elgendy said. “This technology must be rolled out in a way that makes digital experiences better, not worse. The genie isn’t going back in the bottle, but as an industry we can make sure we make the right wishes.”

The two major problems with most AI-generated human images are teeth and fingers; most AI tools don't get these two features right. The Multiverse sidesteps the problem by training its model so that fingers need not appear in your headshots at all. The company recently announced its latest AI model, Upscale, to further enhance its AI headshots.

Upscale is a new imaging model that improves lighting, skin texture, and hair, all of which are notoriously challenging for imaging models. The results are really good. I've been following The Multiverse AI on social media and reading great things about them; by word of mouth alone, the company's revenue has reportedly grown more than tenfold over the past four weeks.

The Multiverse AI works by leveraging a latent diffusion model to engineer a custom headshot. According to the company, up to "80% of images look like studio-quality professional headshots of the user, with the percentage depending on the quality of input images." While there are no defined criteria for measuring that percentage, I agree that at least a few of the 100 generated images are usable on your LinkedIn profile or resume.
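To give a rough sense of the latent-diffusion idea behind tools like this, here is a deliberately toy sketch: start from random noise in a latent space and iteratively denoise it toward a target. The "denoiser" below is a stand-in that simply nudges the sample toward a fixed target vector; a real system such as The Multiverse's (whose internals are not public) uses a trained neural network conditioned on the user's input photos.

```python
import numpy as np

def toy_reverse_diffusion(target, steps=50, seed=0):
    """Toy illustration of reverse diffusion: denoise random noise
    toward a target latent over a fixed number of steps."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(target.shape)  # start from pure noise
    for t in range(steps):
        # The "predicted noise" here is just the gap to the target;
        # remove a shrinking fraction of it each step, loosely echoing
        # how a DDPM-style sampler walks noise back to an image latent.
        predicted_noise = z - target
        z = z - (1.0 / (steps - t)) * predicted_noise
    return z

target_latent = np.full(4, 0.5)  # stand-in for an encoded headshot latent
sample = toy_reverse_diffusion(target_latent)
print(np.allclose(sample, target_latent, atol=1e-6))  # → True
```

The point is only the shape of the loop: many small denoising steps, each informed by a noise estimate. Everything a real product adds (a learned U-Net, text/image conditioning, a VAE decoder back to pixels) replaces the stand-in arithmetic here.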

The SN40L chip will allow for improved enterprise solutions.

Palo Alto-based AI chip startup SambaNova has introduced a new semiconductor that can be used in running high-performance computing applications, like the company’s LLM platform SambaNova Suite, with faster training of models at a lower total cost.

SambaNova announced that the chip, the SN40L, can serve a 5-trillion-parameter model with sequence lengths of 256k+ on a single system node. It is being touted as an alternative to NVIDIA, the frontrunner in the chip race, which recently unveiled what may be the most powerful chip on the market: the GH200, which supports 480 GB of CPU RAM and 96 GB of GPU RAM.
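To see why serving a 5-trillion-parameter model on a single node is a striking claim, a back-of-envelope calculation of the memory needed just to hold the weights is useful. The precisions below are common serving formats, not SambaNova's disclosed configuration:

```python
# Rough weight-memory footprint of a 5-trillion-parameter model
# at common serving precisions (illustrative, not vendor figures).
PARAMS = 5_000_000_000_000  # 5 trillion parameters

precisions = {"fp16": 2, "int8": 1, "int4": 0.5}  # bytes per parameter

for name, bytes_per_param in precisions.items():
    terabytes = PARAMS * bytes_per_param / 1e12
    print(f"{name}: ~{terabytes:.1f} TB for weights alone")
# fp16: ~10.0 TB, int8: ~5.0 TB, int4: ~2.5 TB
```

Even at aggressive 4-bit quantization, the weights alone run to terabytes, far beyond the 480 GB + 96 GB of a single GH200, which is why a single-node serving claim at this scale draws attention.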

A ‘game changer’
