By Chuck Brooks


Computing paradigms as we know them will change dramatically when artificial intelligence is combined with classical, biological, chemical, and quantum computing. Artificial intelligence might guide and enhance quantum computing, run in 5G and 6G environments, facilitate the Internet of Things, and stimulate advances in materials science, biotech, genomics, and the metaverse.

Computers that can execute more than a quadrillion calculations per second should be available within the next ten years. We will also rely on clever software to automate knowledge work. Our future computing will be supported by artificial intelligence technologies that improve cognitive performance across all envisioned industry verticals.

Advanced computing has a fascinating and mind-blowing future. It will include computers that can communicate via lightwave transmission, function as a human-machine interface, and self-assemble and teach themselves thanks to artificial intelligence. One day, computers might have sentience.

The Glaze/Nightshade team, for its part, denies it is seeking destructive ends, writing: “Nightshade’s goal is not to break models, but to increase the cost of training on unlicensed data, such that licensing images from their creators becomes a viable alternative.”

In other words, the creators want to ensure that AI model developers have to pay artists for uncorrupted training data.

How did we get here? It all comes down to how AI image generators have been trained: by scraping data from across the web, including original artworks posted by artists who had no prior knowledge of, or say in, the practice. Those artists argue that the resulting models, trained on their works, threaten their livelihoods by competing with them.

Radar altimeters are the sole indicators of altitude above terrain. Spectrally adjacent 5G cellular bands pose significant risks of jamming altimeters, with direct consequences for landing and takeoff. As wireless technology expands in frequency coverage and adopts spatial multiplexing, similar detrimental radio-frequency (RF) interference becomes a pressing issue.
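For context, “spectrally adjacent” is quite concrete here: radar altimeters operate in the 4.2–4.4 GHz band, while US C-band 5G occupies roughly 3.7–3.98 GHz. A minimal sketch of the guard-band arithmetic (the band edges are hard-coded assumptions taken from public FCC/FAA figures):

```python
# Guard band between US C-band 5G and radar altimeters.
# Band edges (GHz) are assumptions drawn from public FCC/FAA figures.
C_BAND_5G = (3.70, 3.98)        # US C-band 5G block
RADAR_ALTIMETER = (4.20, 4.40)  # radar altimeter allocation

guard_band_ghz = RADAR_ALTIMETER[0] - C_BAND_5G[1]
print(f"Guard band: {guard_band_ghz * 1e3:.0f} MHz")  # -> 220 MHz

# Out-of-band emissions and imperfect altimeter front-end filtering
# are what make this 220 MHz gap a practical interference concern.
```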

To address this interference, RF front ends with exceptionally low latency are crucial for industries like transportation, health care, and the military, where the timeliness of transmitted messages is critical. Future generations of wireless technologies will impose even more stringent latency requirements on RF front ends due to increased data rates, carrier frequencies, and user counts.

Additionally, challenges arise from the physical movement of transceivers, which produces time-variant mixing ratios between interference and the signal of interest (SOI). This necessitates real-time adaptability in mobile wireless receivers to handle fluctuating interference, particularly when the SOI carries safety-of-life-critical information for the navigation and autonomous operation of aircraft and ground vehicles.
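The real-time adaptability described above is, at its core, adaptive interference cancellation. Below is a minimal Python/NumPy sketch assuming a toy sinusoidal SOI, a white-noise interferer observed through a reference tap, and a slowly drifting mixing ratio standing in for transceiver motion; the filter length and LMS step size are illustrative, not tuned for any real front end:

```python
import numpy as np

rng = np.random.default_rng(0)
n, taps, mu = 20_000, 8, 0.01                  # samples, filter length, LMS step

t = np.arange(n)
soi = np.sin(2 * np.pi * 0.05 * t)             # toy signal of interest (SOI)
interf_ref = rng.standard_normal(n)            # reference tap on the interferer
mix = 0.5 + 0.4 * np.sin(2 * np.pi * t / n)    # time-variant mixing ratio (motion)

# The receiver's primary input sees the SOI plus interference
# leaked through a slowly drifting channel.
primary = soi + mix * interf_ref

w = np.zeros(taps)                             # adaptive canceller weights
out = np.zeros(n)
for i in range(taps - 1, n):
    x = interf_ref[i - taps + 1:i + 1][::-1]   # current + recent reference samples
    out[i] = primary[i] - w @ x                # subtract estimated interference
    w += mu * out[i] * x                       # LMS update tracks the drifting mix

# Residual error shrinks as the filter follows the time-variant mixing ratio.
print("residual MSE vs SOI:", np.mean((out[n // 2:] - soi[n // 2:]) ** 2))
```

Real front ends implement this in hardware at far higher rates, but the structure, a reference tap, adaptive weights, and error feedback, is the same adaptability requirement the passage describes.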

The internet’s steady fall into the AI-garbled dumpster continues. As Vice reports, a recent study conducted by researchers at the Amazon Web Services (AWS) AI Lab found that a “shocking amount of the web” is already made up of poor-quality AI-generated and translated content.

The paper has yet to be peer-reviewed, but “shocking” feels like the right word. According to the study, over half (57.1 percent, to be exact) of all the sentences on the internet have been translated into two or more other languages. The poor quality and staggering scale of these translations suggest that large language model (LLM)-powered tools were used both to create and to translate the material. The phenomenon is especially prominent in “lower-resource languages,” meaning languages with less readily available data for effectively training AI models.

In other words, in what the researchers believe to be a ploy to garner clickbait-driven ad revenue, AI is first used to generate poor-quality English-language content at remarkable scale, and AI-powered machine translation (MT) tools then render that content in several other languages. The translated material gets worse with each pass, and as a result, entire regions of the web are filling to the brim with ever-degrading AI-scrambled copies of copies.
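The study doesn’t publish its pipeline, but the compounding-loss dynamic it describes is easy to model. A toy Python sketch, with `noisy_translate` as an explicitly artificial stand-in for one MT hop (real translators are far better per hop; only the compounding direction matters here):

```python
import random
from difflib import SequenceMatcher

def noisy_translate(text: str) -> str:
    """Artificial stand-in for one machine-translation hop:
    drops ~5% of words to model per-hop information loss."""
    return " ".join(w for w in text.split() if random.random() > 0.05)

def chain_similarity(original: str, hops: int) -> float:
    """Push text through `hops` translation steps and score how much
    of the original survives (1.0 = unchanged)."""
    text = original
    for _ in range(hops):
        text = noisy_translate(text)
    return SequenceMatcher(None, original, text).ratio()

random.seed(0)
article = "some low quality clickbait content repeated at scale " * 30
for hops in (1, 3, 6):
    print(f"{hops} hops -> similarity {chain_similarity(article, hops):.3f}")
# Each added hop drives similarity lower: copies of copies degrade.
```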

Researchers at the University of Sydney Nano Institute have developed a small silicon semiconductor chip that combines electronic and photonic (light-based) elements. This innovation greatly enhances radio-frequency (RF) bandwidth and the ability to accurately control information flowing through the unit.

Expanded bandwidth means more information can flow through the chip, and the inclusion of photonics allows for advanced filter controls, creating a versatile new semiconductor device.

Researchers expect the chip will have applications in advanced radar, satellite systems, wireless networks, and the roll-out of 6G and 7G telecommunications, and that it will open the door to advanced sovereign manufacturing. It could also assist in the creation of high-tech value-add factories at places like Western Sydney’s Aerotropolis precinct.

One brain to rule them all

Two researchers have revealed how they are creating a single super-brain that can pilot any robot, no matter how different they are.

Sergey Levine and Karol Hausman wrote in IEEE Spectrum that generative AI, which can create text and images, is not enough for robotics because the internet does not have enough data on how robots interact with the world.
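The article doesn’t specify an architecture, but the cross-embodiment idea it points to, one shared policy with a per-robot translation between a normalized action space and each body’s own commands, can be sketched roughly as follows. Everything here (names, dimensions, the random “policy”) is illustrative, not the authors’ system:

```python
from dataclasses import dataclass
from typing import Callable
import numpy as np

@dataclass
class Embodiment:
    """One robot body: its action dimensionality and how to decode a
    shared, normalized action vector into its own command space."""
    name: str
    action_dim: int
    decode: Callable[[np.ndarray], np.ndarray]

def shared_policy(image: np.ndarray, instruction: str) -> np.ndarray:
    """Stand-in for the shared 'super-brain': consumes camera input plus
    a language instruction and emits a normalized action vector. A real
    system would be a large network trained on data from many robots."""
    rng = np.random.default_rng(abs(hash(instruction)) % 2**32)
    return np.tanh(rng.standard_normal(8))

# Two very different bodies driven by the same brain.
arm = Embodiment("7dof-arm", 7, decode=lambda a: a[:7])        # joint deltas
rover = Embodiment("rover", 2, decode=lambda a: a[:2] * 1.5)   # wheel speeds

image = np.zeros((224, 224, 3))
for robot in (arm, rover):
    action = shared_policy(image, "pick up the red cup")
    command = robot.decode(action)
    print(robot.name, "command dims:", command.shape[0])
```

The design point is that only the thin `decode` layer is robot-specific; everything learned from pooled multi-robot data lives in the shared policy.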