
The Multiverse AI Is Redefining How You Get Professional Headshots

The two major problems with most AI-generated human images are teeth and fingers; most AI tools fail to render these two features correctly. The Multiverse has sidestepped the problem by training its model to compose headshots that leave fingers out of frame entirely. The company recently announced its latest AI model, Upscale, to further enhance its AI headshots.

Upscale is a new imaging model that improves lighting, skin texture, and hair, all of which are notoriously challenging for imaging models. The results are genuinely good. I’ve been following The Multiverse AI on social media and reading great things about the company; reportedly, word of mouth alone has grown its revenue more than tenfold over the past four weeks.

The Multiverse AI works by leveraging a latent diffusion model to generate a custom headshot. According to the company, up to “80% of images look like studio quality professional headshots of the user, with the percentage depending on the quality of input images.” While there is no stated criterion for defining or measuring that percentage, I agree that at least a few of the 100 generated images are usable on LinkedIn or a resume.
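Latent diffusion, the technique the company cites, generates an image by starting from random noise in a compressed latent space and iteratively denoising it. The sketch below is a purely illustrative toy: the fixed target value and linear schedule are my simplifications, not The Multiverse’s actual model.

```python
import random

def denoise_step(latent, step, total_steps):
    """Toy stand-in for a trained denoiser: each step nudges the latent
    toward a fixed target. A real model predicts and removes noise with
    a neural network conditioned on the user's input photos."""
    target = 0.5  # stands in for the learned data distribution
    strength = (step + 1) / total_steps
    return [x + strength * (target - x) for x in latent]

def sample(dim=8, steps=50, seed=0):
    """Start from pure Gaussian noise and iteratively denoise it."""
    rng = random.Random(seed)
    latent = [rng.gauss(0, 1) for _ in range(dim)]
    for t in range(steps):
        latent = denoise_step(latent, t, steps)
    # In a real latent diffusion model, a decoder (a VAE) would now map
    # this latent back to pixel space to produce the headshot.
    return latent
```

The loop structure (noise in, many small denoising steps, decode at the end) is the part that carries over to real systems; everything else here is placeholder.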

SambaNova’s new chip runs AI models to 5 trillion parameters

The SN40L chip will allow for improved enterprise solutions.

Palo Alto-based AI chip startup SambaNova has introduced a new semiconductor for running high-performance computing applications, such as the company’s LLM platform SambaNova Suite, promising faster model training at a lower total cost.

SambaNova says the chip, the SN40L, can serve a 5-trillion-parameter model with a sequence length of 256k+ on a single system node. It is being touted as an alternative to NVIDIA, the frontrunner in the chip race, which recently unveiled what may be the most powerful chip on the market, the GH200, with support for 480 GB of CPU RAM and 96 GB of GPU RAM.
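To put 5 trillion parameters in perspective, weight storage alone dominates the serving problem. A back-of-the-envelope estimate (the precisions below are common industry choices, not disclosed SambaNova figures):

```python
PARAMS = 5_000_000_000_000  # 5 trillion parameters

def weight_terabytes(params, bytes_per_param):
    """Raw weight storage only, ignoring activations and KV cache."""
    return params * bytes_per_param / 1e12

# Bytes per parameter at common serving precisions
for name, bpp in [("fp16", 2), ("int8", 1), ("int4", 0.5)]:
    print(f"{name}: {weight_terabytes(PARAMS, bpp):.1f} TB of weights")
```

Even at aggressive 4-bit quantization the weights alone run to 2.5 TB, which is why serving a model of this size on a single node is a headline claim.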


Robots may soon have compound eye vision, thanks to MoCA

A color-changing system inspired by the wings of butterflies can also help scientists provide compound eye vision to robots, but why do scientists want robots to see like insects?

Would you like to try a T-shirt whose color changes with the weather? How about a bandage that alerts you by changing its color when an infection occurs at the site of an injury?

Researchers at the University of Hong Kong have developed a material to turn such ideas into a reality. They have created a rubber-like color-changing system called Morphable Concavity Array (MoCA).

Tesla raises Dojo D1 order from TSMC: report

Tesla is reportedly increasing orders for its Dojo D1 supercomputer chips. The D1 is a custom Tesla application-specific integrated circuit (ASIC) designed for the Dojo supercomputer, and it is reportedly manufactured by Taiwan Semiconductor Manufacturing Company (TSMC).

Citing a source reportedly familiar with the matter, Taiwanese publication Economic Daily noted that Tesla will be doubling its Dojo D1 chip orders to 10,000 units for the coming year. Considering the Dojo supercomputer’s scalability, expectations are high that the volume of D1 chip orders from TSMC will continue to increase through 2025.

Dojo, after all, is expected to be used by Tesla for the training of its driver-assist systems and self-driving AI models. With the rollout of projects like FSD, the dedicated robotaxi, and Optimus, Dojo’s contributions to the company’s operations would likely be more substantial.

Tesla User Shares Full Self-Driving Passes The ‘Wife Test,’ Elon Musk Responds: ‘Great Story’

Tesla TSLA CEO Elon Musk responded positively to a social media post in which a user shared that his spouse approved of Tesla’s Full Self-Driving (FSD) feature.

What Happened: Matt Smith, an equity analyst at Halter Ferguson, posted about his wife’s newfound approval of Tesla Inc.’s FSD feature during a 40-minute drive. Musk responded to the post with “Great story.”


Spotify is going to clone podcasters’ voices — and translate them to other languages

The backbone of the translation feature is OpenAI’s voice transcription tool Whisper, which can both transcribe English speech and translate other languages into English. But Spotify’s tool goes beyond speech-to-text translation — the feature will translate a podcast into a different language and reproduce it in a synthesized version of the podcasters’ own voice.

“By matching the creator’s own voice, Voice Translation gives listeners around the world the power to discover and be inspired by new podcasters in a more authentic way than ever before,” Ziad Sultan, Spotify’s vice president of personalization, said in a statement.

OpenAI is likely behind the voice replication part of this new feature, too. The AI company is making a few announcements this morning, including the launch of a tool that can create “human-like audio from just text and a few seconds of sample speech.” OpenAI says it’s intentionally limiting how widely this tool will be available due to concerns around safety and privacy.


A partnership with OpenAI will let podcasters replicate their voices to automatically create foreign-language versions of their shows.
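As described, the feature is a three-stage pipeline: transcribe the audio, translate the text, then re-synthesize it in the creator’s cloned voice. The stub below shows only that structure; every function name and body here is a hypothetical placeholder, not Spotify’s or OpenAI’s actual API.

```python
def transcribe(audio):
    """Stage 1: speech-to-text. The article says Spotify uses Whisper
    for this step. Hardcoded result for illustration."""
    return {"text": "hello listeners", "lang": "en"}

def translate(text, target_lang):
    """Stage 2: text-to-text translation. A toy lookup table stands in
    for a real translation model; unknown inputs pass through."""
    table = {("hello listeners", "es"): "hola oyentes"}
    return table.get((text, target_lang), text)

def synthesize(text, voice_profile):
    """Stage 3: text-to-speech in a cloned voice, built from a short
    sample of the creator's speech. Returns a tagged string in the toy."""
    return f"<audio:{voice_profile}:{text}>"

def translate_podcast(audio, target_lang, voice_profile):
    """Chain the three stages: audio in, translated cloned-voice audio out."""
    transcript = transcribe(audio)
    translated = translate(transcript["text"], target_lang)
    return synthesize(translated, voice_profile)
```

The interesting design point is that only stage 3 needs the creator’s voice sample; stages 1 and 2 are generic models, which is presumably what makes the pipeline scale across many podcasters.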

A robust, agnostic molecular biosignature based on machine learning

The search for definitive biosignatures—unambiguous markers of past or present life—is a central goal of paleobiology and astrobiology. We used pyrolysis–gas chromatography coupled to mass spectrometry to analyze chemically disparate samples, including living cells, geologically processed fossil organic material, carbon-rich meteorites, and laboratory-synthesized organic compounds and mixtures. Data from each sample were employed as training and test subsets for machine-learning methods, which resulted in a model that can identify the biogenicity of both contemporary and ancient geologically processed samples with ~90% accuracy. These machine-learning methods do not rely on precise compound identification: Rather, the relational aspects of chromatographic and mass peaks provide the needed information, which underscores this method’s utility for detecting alien biology.
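The abstract’s key idea, classifying samples from relational features of peaks rather than from compound identification, can be illustrated with a toy nearest-centroid classifier. The peak vectors and ratio features below are invented for illustration and are far simpler than the authors’ actual pyrolysis-GC-MS pipeline.

```python
def peak_ratios(peaks):
    """Relational features: ratios of adjacent peak intensities, which
    do not require identifying which compound each peak belongs to."""
    return [peaks[i + 1] / peaks[i] for i in range(len(peaks) - 1)]

def centroid(rows):
    """Mean feature vector of a list of feature vectors."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def classify(sample, biotic_centroid, abiotic_centroid):
    """Assign the label of the nearer centroid (squared distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    feats = peak_ratios(sample)
    near_biotic = dist(feats, biotic_centroid) <= dist(feats, abiotic_centroid)
    return "biotic" if near_biotic else "abiotic"

# Toy training data: peak-intensity vectors, entirely made up
biotic_samples = [[1.0, 2.0, 4.0], [1.0, 2.2, 4.1]]
abiotic_samples = [[1.0, 1.0, 1.1], [1.0, 0.9, 1.0]]

bio_c = centroid([peak_ratios(s) for s in biotic_samples])
abi_c = centroid([peak_ratios(s) for s in abiotic_samples])
```

The paper’s actual models are more sophisticated, but the principle is the same: the geometry of the peak pattern, not the chemistry of any single peak, is what the classifier learns, which is why the method could generalize to alien biochemistries.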

Now you can chat with ChatGPT using your voice

The new feature is part of a round of updates for OpenAI’s app, including the ability to answer questions about images.

First, ChatGPT now has a voice. Choose from one of five lifelike synthetic voices and you can have a conversation with the chatbot as if you were making a call, getting responses to your spoken questions in real time.

ChatGPT also now answers questions about images. OpenAI teased this feature in March with its reveal of GPT-4 (the model that powers ChatGPT), but it has not been available to the wider public before. This means that you can now upload images to the app and quiz it about what they show.
