
Researchers’ approach may protect quantum computers from attacks

Quantum computers, which can solve certain complex problems exponentially faster than classical computers, are expected to improve artificial intelligence (AI) applications deployed in devices such as autonomous vehicles. However, just like their classical predecessors, quantum computers are vulnerable to adversarial attacks.

A team of University of Texas at Dallas researchers and an industry collaborator have developed an approach to give quantum computers an extra layer of protection against such attacks. Their solution, Quantum Noise Injection for Adversarial Defense (QNAD), counteracts the impact of attacks designed to disrupt inference—AI’s ability to make decisions or solve tasks.
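The paper describes QNAD as exploiting noise intrinsic to quantum hardware; the details of the method are not given here. As a rough classical analogy only, the sketch below majority-votes predictions over randomly noised copies of an input, a generic noise-injection defense. The function names and the toy model are illustrative, not from the researchers' work:

```python
import numpy as np

def noisy_inference(model_fn, x, sigma=0.1, n_samples=10, rng=None):
    """Classify x under random noise injection and majority-vote the results.

    Adversarial perturbations are often brittle: adding random noise before
    inference can push a crafted input back across the decision boundary,
    while leaving clean inputs largely unaffected.
    """
    rng = np.random.default_rng(rng)
    votes = [model_fn(x + rng.normal(0.0, sigma, size=x.shape))
             for _ in range(n_samples)]
    values, counts = np.unique(votes, return_counts=True)
    return values[np.argmax(counts)]

# Toy 1-D "model": class 0 if the input's mean is negative, else class 1.
toy_model = lambda x: int(x.mean() >= 0.0)

x_clean = np.full(8, 0.5)  # clearly class 1
print(noisy_inference(toy_model, x_clean, sigma=0.1, rng=0))  # 1
```

The trade-off in any defense of this shape is accuracy versus robustness: larger `sigma` disrupts attacks more but also degrades clean-input accuracy.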

The team will present research that demonstrates the method at the IEEE International Symposium on Hardware Oriented Security and Trust held May 6–9 in Washington, D.C.

Demand for computer chips fueled by AI could reshape global politics and security

A global race to build powerful computer chips that are essential for the next generation of artificial intelligence (AI) tools could have a major impact on global politics and security.

The US currently leads the race in the design of these chips, also known as semiconductors, but most of the manufacturing is carried out in Taiwan. The debate has been fueled by a call from Sam Altman, CEO of ChatGPT developer OpenAI, for US$5 trillion to US$7 trillion (£3.9 trillion to £5.5 trillion) in global investment to produce more powerful chips for the next generation of AI platforms.

The amount of money Altman called for is more than the entire semiconductor industry has spent in total since it began. Whatever the facts about those numbers, overall projections for the AI market are mind-blowing: the data analytics company GlobalData forecasts that the market will be worth US$909 billion by 2030.

Unlocking the Future of Security With MIT’s Terahertz Cryptographic ID Tags

MIT engineers developed a tag that can reveal with near-perfect accuracy whether an item is real or fake. The key is in the glue on the back of the tag.

A few years ago, MIT researchers invented a cryptographic ID tag that is several times smaller and significantly cheaper than the traditional radio-frequency identification (RFID) tags often affixed to products to verify their authenticity.

This tiny tag, which offers improved security over RFID tags, uses terahertz waves, which are smaller and have much higher frequencies than radio waves. But the terahertz tag shared a major security vulnerability with traditional RFID tags: a counterfeiter could peel the tag off a genuine item and reattach it to a fake, and the authentication system would be none the wiser.
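The new work ties authentication to the glue itself: peeling the tag destroys the unique particle pattern in the adhesive, so a reattached tag no longer matches its enrolled fingerprint. As a loose, hypothetical sketch of that verification logic only (MIT's actual pipeline reportedly uses machine learning over terahertz measurements; the names and threshold below are invented for illustration):

```python
import numpy as np

def authenticate(measured_bits, enrolled_bits, max_fraction_differing=0.1):
    """Accept the tag only if the measured glue fingerprint matches enrollment.

    If the tag is peeled off and reattached, the glue pattern is destroyed,
    so the fractional Hamming distance jumps and authentication fails.
    """
    measured = np.asarray(measured_bits, dtype=bool)
    enrolled = np.asarray(enrolled_bits, dtype=bool)
    distance = np.mean(measured != enrolled)  # fraction of differing bits
    return distance <= max_fraction_differing

rng = np.random.default_rng(42)
enrolled = rng.integers(0, 2, size=256).astype(bool)  # fingerprint at enrollment

intact = enrolled.copy()
intact[:5] ^= True  # a little measurement noise on a genuine, untouched tag
tampered = rng.integers(0, 2, size=256).astype(bool)  # pattern destroyed by peeling

print(authenticate(intact, enrolled))    # True
print(authenticate(tampered, enrolled))  # False
```

The tolerance threshold absorbs ordinary measurement noise while still rejecting a destroyed pattern, whose distance from enrollment sits near 50%.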

Tesla FSD v12 drives like a teenager!

The Rebellionaire Road Rally has made its way down to Austin, TX. This time we’re joined by Farzad (@farzyness) to test out Tesla FSD v12 on the streets of the greater Austin area. This is part 2 of the journey, with Farzad riding along in the car and adding helpful commentary.

#Tesla #rebellionaire.

Rebellionaires check out www.Rebellionaire.com.
Rebellionaire is a brand of Halter Ferguson Financial. www.hffinancial.com/disclaimer.

As of March 7th, 2024, clients and employees of our firm Halter Ferguson Financial own Tesla stock and/or options and thereby stand to materially benefit from a rise in the share price. Past performance is no assurance of future results. Halter Ferguson Financial, Inc. (“Halter Ferguson Financial”) is a registered investment adviser with its principal place of business in the State of Indiana. A complete list of all recommendations will be provided if requested for the preceding period of not less than one year. It should not be assumed that recommendations made in the future will be profitable or will equal the performance of the securities in this list. Opinions expressed are those of Halter Ferguson Financial, Inc. and are subject to change, not guaranteed and should not be considered recommendations to buy or sell any security.

Halter Ferguson Financial is registered as an investment advisor with the SEC and only transacts business in states where it is properly registered, or is excluded or exempted from registration requirements. Registration as an investment advisor does not constitute an endorsement of the firm by the Commission nor does it indicate that the advisor has attained a particular level of skill or ability.

Information presented is believed to be factual and up-to-date, but we do not guarantee its accuracy and it should not be regarded as a complete analysis of the subjects discussed. All expressions of opinion reflect the judgment of the author/presenter as of the date of publication and are subject to change and do not constitute personalized investment advice. A professional advisor should be consulted before implementing any of the strategies presented. No content should be construed as an offer to buy or sell, or a solicitation of any offer to buy or sell any securities mentioned herein.

TheNET: ChatGPT, the popular AI-based large language model (LLM) app from OpenAI, has seen user growth that is unique for many reasons

For one, it reached over a million users within five days of its release, a mark unmatched by even the most historically popular apps, such as Facebook and Spotify. Additionally, ChatGPT has seen near-immediate adoption in business contexts, as organizations seek to gain efficiencies in content creation, code generation, and other functional tasks.

But as businesses rush to take advantage of AI, so too do attackers. One notable way in which they do so is through unethical or malicious LLM apps.

Unfortunately, a recent spate of these malicious apps has introduced risk into organizations’ AI journeys, and that risk is not easily addressed with a single policy or solution. To unlock the value of AI without opening the door to data loss, security leaders need to rethink how they approach broader visibility and control of corporate applications.

Google’s AI-First Strategy Brings Vector Support To Cloud Databases

In line with its AI-first strategy and its push to make Google Cloud databases better at supporting GenAI applications, Google announced new developments in the integration of generative AI with its databases.


AWS offers a broad range of services for vector database requirements, including Amazon OpenSearch Service, Amazon Aurora PostgreSQL-Compatible Edition, Amazon RDS for PostgreSQL, Amazon Neptune ML, and Amazon MemoryDB for Redis. AWS emphasizes the operationalization of embedding models, making application development more productive through features like data management, fault tolerance, and critical security features. AWS’s strategy focuses on simplifying the scaling and operationalization of AI-powered applications, providing developers with the tools to innovate and create unique experiences powered by vector search.

Azure takes a similar approach by offering vector database extensions to existing databases. This strategy aims to avoid the extra cost and complexity of moving data to a separate database, keeping vector embeddings and original data together for better data consistency, scale, and performance. Azure Cosmos DB and Azure PostgreSQL Server are positioned as services that support these vector database extensions. Azure’s approach emphasizes the integration of vector search capabilities directly alongside other application data, providing a seamless experience for developers.

Google’s move towards native support for vector storage in existing databases simplifies building enterprise GenAI applications relying on data stored in the cloud. The integration with LangChain is a smart move, enabling developers to instantly take advantage of the new capabilities.
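Under the hood, all of these services rank stored embeddings by vector similarity. A minimal, service-agnostic sketch of that core operation follows: cosine-similarity top-k over an in-memory array. In practice the database executes this server-side over an approximate index, but the math is the same (the data here is made up for illustration):

```python
import numpy as np

def top_k(query, doc_vectors, k=2):
    """Return indices of the k document embeddings most similar to the
    query, ranked by cosine similarity (highest first)."""
    q = query / np.linalg.norm(query)
    d = doc_vectors / np.linalg.norm(doc_vectors, axis=1, keepdims=True)
    scores = d @ q                      # cosine similarity per document
    return np.argsort(scores)[::-1][:k]

# Three toy 3-D "embeddings"; real embeddings have hundreds of dimensions.
docs = np.array([
    [1.0, 0.0, 0.0],
    [0.9, 0.1, 0.0],
    [0.0, 1.0, 0.0],
])
query = np.array([1.0, 0.05, 0.0])
print(top_k(query, docs))  # [0 1]
```

A vector-enabled database replaces the `argsort` scan with an index (e.g. an approximate-nearest-neighbor structure) so the search stays fast at millions of rows.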

Korean researchers develop insect brain-inspired motion detector

The new semiconductor is expected to have important applications in areas such as transportation and security systems, in both industry and the public sector.

Korean researchers have developed a new “intelligent sensor” semiconductor that works similarly to the optic nerves of insects.


The insect brain-inspired semiconductor can be used as a fast, low-power motion detector.
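The article does not detail the circuit, but insect motion vision is classically modeled by the Hassenstein–Reichardt elementary motion detector, which the sketch below implements in software. This illustrates the general principle only, not the team's device:

```python
import numpy as np

def reichardt_emd(left, right, delay=1):
    """Hassenstein–Reichardt elementary motion detector.

    Correlates each photoreceptor's signal with a delayed copy of its
    neighbour's, then subtracts the mirrored arm; the sign of the result
    indicates motion direction (positive = left-to-right here).
    """
    left = np.asarray(left, float)
    right = np.asarray(right, float)
    d_left = np.roll(left, delay)    # delayed left channel
    d_right = np.roll(right, delay)  # delayed right channel
    d_left[:delay] = 0.0             # discard wrap-around samples
    d_right[:delay] = 0.0
    return np.sum(d_left * right - d_right * left)

# A bright pulse passes the left sensor, then one step later the right one:
left  = [0, 1, 0, 0, 0]
right = [0, 0, 1, 0, 0]
print(reichardt_emd(left, right))   # 1.0  (positive: rightward motion)
print(reichardt_emd(right, left))   # -1.0 (negative: leftward motion)
```

The appeal of this scheme for low-power hardware is that it needs only a delay element, a multiplier, and a subtraction per detector, with no frame buffering or digital image processing.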

Scientists Develop a Technique to Protect a Quantum-era Metaverse

A team of Chinese scientists introduced a quantum communication technique that they say could help secure Web 3.0 against the formidable threat of quantum computing.

Their approach, called Long-Distance Free-Space Quantum Secure Direct Communication (LF QSDC), promises to improve data security by enabling encrypted direct messaging without the need for key exchange, a method traditionally vulnerable to quantum attacks.

They add that the approach not only enhances security but also aligns with the decentralized ethos of Web 3.0, offering a robust defense in a rapidly evolving digital landscape.

Texas’s San Antonio airport will get a 420lb autonomous security robot

The robot weighs 420lbs, stands 5ft 4in tall, and travels at 3 miles per hour; it is expected to arrive at the airport within the next two months, according to local reports.

According to Knightscope, the K5 is intended for outdoor use and features autonomous recharging without requiring human intervention. Features listed on Knightscope’s website include 360-degree and eye-level video streaming, people detection during certain restricted hours, thermal anomaly detection, as well as license plate recognition.

The city’s director of airports, Jesus Saenz, said that the K5 will be used to respond to door alarms at the airport and will be placed near doors with alarms that are frequently set off.
