Mirax Android RAT spreads via Meta ads reaching 220,000 accounts, enabling proxy abuse and fraud operations.
The race to build smarter artificial intelligence has taken an unexpected philosophical turn after Google DeepMind quietly hired an in-house philosopher to investigate the potential for machine consciousness…
…DeepMind is now integrating philosophical reasoning directly into its research pipeline rather than treating ethics as an external concern. This move suggests that Big Tech no longer views sentience as a science-fiction trope but as a technical and moral hurdle, marking a transition from building tools to questioning the nature of those tools themselves.
The Google DeepMind philosopher role focuses on the machine sentience debate, aiming to define what it means for a digital system to ‘feel’ or ‘experience’.
This internal appointment comes at a time when large language models are becoming increasingly indistinguishable from human interlocutors. While most researchers maintain that these systems are mere statistical predictors, the boundary is thinning. The decision to bring a philosopher into the core development team indicates that Google expects its path toward artificial general intelligence to raise profound questions about awareness and machine rights.
Google DeepMind has hired an in-house philosopher to explore the boundaries of machine consciousness and ethics. This move follows years of controversy surrounding AI sentience and the limits of large language models.
Senate Bill 205, passed in 2024, is one of the nation’s first attempts to regulate ‘high-risk’ AI systems and protect consumers from ‘algorithmic discrimination’ — or disparate treatment or impacts on protected classes under Colorado law.
In the complaint, which was filed in federal court in Denver, Musk’s lawyers contend that the law is ‘unconstitutionally vague’ and ‘invites arbitrary enforcement’ because it fails to define some key terms. They also contend that Colorado’s law would cause Musk’s AI chatbot, Grok, to ‘abandon its disinterested pursuit of truth and instead promote the State’s ideological views on various matters, racial justice in particular,’ which they say violates the First Amendment.
‘Unless the implementation and enforcement of SB24-205 is enjoined, it will violate xAI’s constitutional rights and cause irreparable constitutional harm, impose enormous burdens on xAI and the AI industry, and substitute Colorado’s political preferences for the national economic and security imperative of American AI dominance,’ the complaint reads in part…
…State Rep. Briana Titone, D-Arvada, one of Senate Bill 205’s lead sponsors, told The Sun that Musk’s lawsuit seems like a ‘fishing expedition’ that misinterprets the core of the law.
‘This is where the disconnect is. SB 205 is about consequential decisions, not about freedom of speech,’ Titone said. ‘It’s completely detached from it. And they’re trying to use this argument for a law that has nothing to do with what he’s saying. We’re not restricting speech. Our bill does not say that Grok still can’t be a dick.’
The lawsuit was filed at a time when the Trump administration looks to preempt state regulation of AI models through executive fiat.
AI models identified patients with end-stage kidney disease (ESKD) receiving hemodialysis who faced an imminent risk for hospital admission due to infections or fluid status abnormalities. When paired with nurse-led case reviews and targeted interventions, this strategy helped avert short-term admissions, demonstrating AI’s potential to guide timely, focused care.
AI-driven interventions reduced the odds of hospitalization within 7 days by 8% in patients with end-stage kidney disease receiving hemodialysis, according to a recent study.
Toyota unveils CUE7, advancing its AI basketball robot with improved sensing, planning, and precision beyond its record-setting predecessors.
Researchers have demonstrated a new training technique that significantly improves the accuracy of graph neural networks (GNNs), AI systems used in applications from drug discovery to weather forecasting. GNNs are designed to perform tasks where the input data is presented in the form of a graph: a data structure in which data points (called nodes) are connected by lines (called edges). The edges indicate some sort of relationship between the nodes. Edges can connect nodes that are similar (called homophily) but can also connect nodes that are dissimilar (called heterophily).
For example, in a graph of a neural system there would be edges between nodes representing two neurons that enhance each other, but there would also be edges between nodes that suppress each other.
Because graphs can be used to represent everything from social networks to molecular structures, GNNs are able to capture complex relationships better than many other types of AI systems.
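The node-and-edge picture above can be made concrete with a small sketch. The names, values, and update rule here are purely illustrative, not code from the study: signed edges stand in for homophilous (+1, enhancing) and heterophilous (−1, suppressing) relationships, and a single message-passing step updates each node from its neighbors.

```python
# Toy graph: node features plus signed, undirected edges.
# sign = +1.0 marks a homophilous (enhancing) edge,
# sign = -1.0 marks a heterophilous (suppressing) edge.
features = {"A": 1.0, "B": 0.5, "C": -0.5}
edges = [("A", "B", +1.0), ("B", "C", -1.0), ("C", "A", +1.0)]

def message_passing_step(features, edges, step=0.5):
    """One GNN-style update: each node adds a scaled sum of
    sign-weighted neighbor features to its own value."""
    messages = {node: 0.0 for node in features}
    for src, dst, sign in edges:
        # Undirected edge: both endpoints receive a signed message.
        messages[dst] += sign * features[src]
        messages[src] += sign * features[dst]
    return {node: features[node] + step * messages[node] for node in features}

updated = message_passing_step(features, edges)
```

The signed aggregation is the key point: a plain neighborhood average implicitly assumes homophily, whereas carrying an edge sign lets suppressing (heterophilous) neighbors pull a node's feature in the opposite direction.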
A robotic metamaterial shows that the odd mechanics of active solids depend on how the active constituents connect across the system.
Active materials, composed of microscopic constituents that continuously inject motional energy into the system, can exhibit odd mechanical responses, such as stretching vertically when sheared horizontally. Such properties can be used to make materials that can spontaneously crawl or roll over a difficult terrain [1]. One might naively think that these desirable odd responses could be increased by making the components more active. Jack Binysh of the University of Amsterdam and his colleagues now find that this doesn’t always work [2]. The researchers show that in active solids a collective response only emerges when system-spanning connective networks are formed among the individual constituents of the system. Without such networks, the effects of microscopic activity remain confined locally and the macroscopic response disappears.
An active solid is, fundamentally, an elastic lattice made up of self-driving constituents. Examples include robotic lattices composed of motorized units [1, 2], magnetic colloidal crystals [3], and chiral living embryos [4]. The active solids that Binysh and his colleagues examined are examples of nonreciprocal active solids, meaning that the interactions between elements are directional. Interactions may become directional when individual constituents process information about their neighbors. Such nonreciprocal interactions arise in a wide range of settings. In robotic metamaterials, local control loops impose directional responses on adjacent mechanical units [1]. And in living chiral collectives, hydrodynamic flows allow rotating embryos to exchange momentum with the surrounding medium [4].
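A minimal one-dimensional sketch shows what "directional" means here. This is an illustrative toy model, not the specific system studied by Binysh and colleagues: take a chain of units with displacements $u_n$, ordinary spring coupling of stiffness $k$, and an activity parameter $\varepsilon$ that biases each unit toward one neighbor:

```latex
F_n = k\,(u_{n+1} + u_{n-1} - 2u_n) + \varepsilon\,(u_{n+1} - u_{n-1})
```

For $\varepsilon = 0$ the force is reciprocal and derives from an elastic energy; for $\varepsilon \neq 0$ a unit responds to its right neighbor differently than that neighbor responds back, so no potential energy exists and the motorized units must continuously inject energy to sustain the interaction. In this picture, the collective (macroscopic) response requires the $\varepsilon$-coupled units to form a connected path spanning the system; otherwise the activity stays local.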