
Every seven minutes a cyber-attack is reported in Australia.

Millions of Australians have had their data stolen in malicious attacks, costing some businesses tens of millions of dollars in ransom. The federal government is warning the country must brace for even more strikes as cyber gangs become more sophisticated and ruthless.

Four Corners investigates the cyber gangs behind these assaults, cracking open their inner operations and speaking to a hacker who says he targets Australians and shows no remorse.

The program travels all the way to Ukraine and discovers we share a common enemy in the battle for cyber security.

Fake images and misinformation in the age of AI are growing. Even in 2019, a Pew Research Center study found that 61% of Americans said it is too much to ask of the average American to be able to recognize altered videos and images. And that was before generative AI tools became widely available to the public.

Adobe shared August 2023 statistics on the number of AI-generated images created with Adobe Firefly reaching one billion, only three months after it launched in March 2023.


In response to the increasing use of AI images, Google DeepMind announced a beta version of SynthID. The tool watermarks and identifies AI-generated images by embedding a digital watermark directly into the pixels of an image, imperceptible to the human eye but detectable by software for identification.
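To make the idea of a pixel-level watermark concrete, here is a toy sketch of the general technique. SynthID's actual scheme is not public, so this is purely illustrative: it hides a bit pattern in the least-significant bits of pixel values, a classic approach that changes each pixel by at most one intensity level.

```python
import numpy as np

def embed_watermark(pixels: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Write `bits` (0/1) into the least-significant bit of the first
    len(bits) pixel values. Returns a stamped copy of the image."""
    flat = pixels.flatten()  # flatten() returns a copy
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(pixels.shape)

def extract_watermark(pixels: np.ndarray, n_bits: int) -> np.ndarray:
    """Read the least-significant bits back out of the first n_bits values."""
    return pixels.flatten()[:n_bits] & 1

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
mark = rng.integers(0, 2, size=128, dtype=np.uint8)

stamped = embed_watermark(image, mark)
assert np.array_equal(extract_watermark(stamped, 128), mark)
# Each pixel changes by at most 1 intensity level -- invisible to the eye.
assert int(np.max(np.abs(stamped.astype(int) - image.astype(int)))) <= 1
```

A naive LSB mark like this is easily destroyed by compression or resizing; production systems such as SynthID are designed to survive such transformations, which is what makes the problem hard.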

Kris Bondi, CEO and founder of Mimoto, a proactive detection and response cybersecurity company, said that while Google’s SynthID is a starting place, the problem of deep fakes will not be fixed by a single solution.

Large language models (LLMs) are ushering in a revolutionary era with their remarkable capabilities. From enhancing everyday applications to transforming complex systems, generative AI is becoming an integral part of our lives.

However, the surge in demand for AI-powered solutions exposes a critical challenge: the scarcity of computational resources required to meet the growing appetite for logic and voice-based interfaces. This scarcity leads to a pressing need for cost-efficient platforms that can support the development and deployment of LLMs.

Industrializing AI software development means transforming the processes for developing, deploying and maintaining AI systems from a research-driven, ad-hoc approach into a structured, systematic and scalable one. By focusing on cloud cost optimization and platform engineering, businesses can foster growth, profitability and innovation in the field of AI.

Less than a year into the AI boom, startups are already grappling with what may become an industry reckoning.

Take Jasper, a buzzy AI startup that raised $125 million for a valuation of $1.5 billion last year — before laying off staff with a gloomy note from its CEO this summer.

Now, in a provocative new story, the Wall Street Journal fleshes out where the cracks are starting to form. Basically, monetizing AI is hard, user interest is leveling off or declining, and running the hardware behind these products is often very expensive — meaning that while the tech does sometimes offer a substantial “wow” factor, its path to a stable business model is looking rockier than ever.

“Personalize or Perish.” One of the leading newspapers aptly summarizes the critical nature of personalization 2.0, or hyper-personalization for businesses.

We live in an era where customers expect businesses to understand their wants and needs. Today, companies must not only meet customers’ needs but anticipate and exceed them. To do this, they must pivot to a digital-first mindset to create stronger, more authentic customer interactions.

How do they do this? Through a hyper-personalized, AI-powered business strategy where products, ads, and interactions are tailor-made for each customer or a group of customers.

What may not be obvious is that the sleeping giant, Google, has woken up, and it is iterating at a pace that will smash GPT-4’s total pre-training FLOPS by 5x before the end of the year. The path to 100x by the end of next year is clear given its current infrastructure buildout. Whether Google has the stomach to put these models out publicly without neutering their creativity or its existing business model is a different discussion.

Today we want to discuss Google’s training systems for Gemini, the iteration velocity for Gemini models, Google’s Viperfish (TPUv5) ramp, Google’s competitiveness going forward versus the other frontier labs, and a crowd we are dubbing the GPU-Poor.

Access to compute is a bimodal distribution. There are a handful of firms with 20k+ A100/H100 GPUs, where individual researchers can access hundreds or thousands of GPUs for pet projects. Chief among these are researchers at OpenAI, Google, Anthropic, Inflection, X and Meta, who will have the highest ratios of compute resources to researchers. A few of the firms above, as well as multiple Chinese firms, will have 100k+ GPUs by the end of next year, although we are unsure of the ratio of GPUs to researchers in China, only the GPU volumes.
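For a rough sense of what a 20k-GPU cluster means in training terms, here is a back-of-the-envelope sketch. Both constants are illustrative assumptions for this calculation, not vendor or measured figures:

```python
# Back-of-the-envelope compute budget for a hypothetical 20k-GPU cluster.
# PEAK_FLOPS_PER_GPU and MFU are assumed values for illustration only.
PEAK_FLOPS_PER_GPU = 1e15   # assume ~1,000 TFLOPS peak per accelerator
MFU = 0.4                   # assume 40% model FLOPS utilization sustained
GPUS = 20_000
SECONDS_PER_DAY = 86_400

sustained_flops = GPUS * PEAK_FLOPS_PER_GPU * MFU   # aggregate FLOP/s
flops_per_day = sustained_flops * SECONDS_PER_DAY   # ≈ 6.9e23 FLOP/day
print(f"{flops_per_day:.3e} FLOP per day")
```

Under these assumptions the cluster delivers on the order of 10^23–10^24 FLOP per day, which is why the gap between the GPU-rich labs and everyone else compounds so quickly.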

Aude Oliva is a prominent Cognitive and Computer Scientist directing the MIT Computational Perception and Cognition group at CSAIL while also leading the MIT-IBM Watson AI Lab and co-leading the MIT AI Hardware Program. With research spanning computational neuroscience, cognition, and computer vision, she pioneers the integration of human perception and machine recognition. Her contributions extend across academia, industry, and research, making her a distinguished figure at MIT.


Construction is set to break ground by the end of this year, and the company expects to move into the new space by the end of 2024. The production facility for semiconductor quartz will include a clean room and a high-purity cleaning system, and will allow the company to expand the automation side of its business that it has been capitalizing on for years.

“We knew that our customers all over the world were expanding at a rate we couldn’t keep up with,” said Scott Lingren, SXT’s managing director and U.S. chairman. “As you see all these expansions from Samsung in Taylor to Texas Instruments Inc. in the Dallas area to all over the world … we just have to keep up.”

SXT – which is headquartered in the Netherlands and owned by the privately-held Schunk Group in Germany – supplies semiconductor manufacturers around the world, like Samsung, which has had a presence in Central Texas for decades and is potentially adding to its existing Austin campus and its new site in Taylor. Other major players in the industry include Taiwan Semiconductor Manufacturing Co., which is expanding in Arizona, and Intel Corp., which is expanding to Ohio.

What happens in femtoseconds in nature can now be observed in milliseconds in the lab.

Scientists at the University of Sydney.

The University of Sydney is a public research university located in Sydney, New South Wales, Australia. Founded in 1850, it is the oldest university in Australia and is consistently ranked among the top universities in the world. The University of Sydney has a strong focus on research and offers a wide range of undergraduate and postgraduate programs across a variety of disciplines, including arts, business, engineering, law, medicine, and science.