OpenAI on Thursday said the U.S. National Laboratories will be using its latest artificial intelligence models for scientific research and nuclear weapons security.
Under the agreement, up to 15,000 scientists working at the National Laboratories may be able to access OpenAI’s reasoning-focused o1 series. OpenAI will also work with Microsoft, its lead investor, to deploy one of its models on Venado, the supercomputer at Los Alamos National Laboratory, according to a release. Venado is powered by technology from Nvidia and Hewlett Packard Enterprise.
Identifying driver regulators in cell state transitions is key to decoding cellular function. Here, the authors present regX, an interpretable AI framework to prioritise potential driver TFs and cCREs from single-cell multiomics data, showing potential for understanding and manipulating cell states.
Conflict between humans and AI is front and center in AMC’s sci-fi series “Humans,” which returned for its third season on Tuesday (June 5). In the new episodes, conscious synthetic humans face hostile people who treat them with suspicion, fear and hatred. Violence roils as Synths find themselves fighting not only for basic rights but for their very survival, against those who view them as less than human and as a dangerous threat.
Even in the real world, not everyone is ready to welcome AI with open arms. In recent years, as computer scientists have pushed the boundaries of what AI can accomplish, leading figures in technology and science have warned about the looming dangers that artificial intelligence may pose to humanity, even suggesting that AI capabilities could doom the human race.
#GigaBerlinArt #TechPainters #RoboticMuralist

At Tesla’s Gigafactory Berlin-Brandenburg, creativity meets technology in a remarkable initiative to transform concrete surfaces into stunning artworks. Inspired by Elon Musk’s vision to turn the factory into a canvas, the project began with local graffiti crews. However, the sheer scale of the endeavor required innovative solutions, leading to the collaboration with a robotic muralist startup. This groundbreaking graffiti printer combines cutting-edge technology with artistry, using a triangulation method to maneuver its print head along factory walls. With 12 paint cans onboard, the robot sprays precise dots of color: 10 million per wall and 300 million for the west side alone, creating intricate designs composed of five distinct colors.

The curated artworks draw inspiration from Berlin’s vibrant culture, Tesla’s groundbreaking products, and the factory itself, described as “the machine that builds the machine.” A blend of global and in-house artistic talent has contributed to the ongoing project, making Giga Berlin not just a hub for innovation but also a celebration of art and ingenuity.
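The post does not detail how the triangulation works, but the basic idea can be sketched. The snippet below is a minimal, hypothetical example of two-anchor trilateration for locating a print head on a wall: two fixed anchors along the top edge and two measured distances pin down the head’s position. The anchor layout, function name, and numbers are assumptions for illustration, not the startup’s actual implementation.

```python
import math

def head_position(d, r1, r2):
    """Estimate the print head's (x, y) position on a wall.

    Assumes two anchors at (0, 0) and (d, 0) along the top edge of the wall,
    with measured distances r1 and r2 from each anchor to the head.
    Classic two-circle trilateration; purely illustrative.
    """
    # Intersection of two circles centred on the anchors
    x = (r1**2 - r2**2 + d**2) / (2 * d)
    y_squared = r1**2 - x**2
    if y_squared < 0:
        raise ValueError("Inconsistent measurements: circles do not intersect")
    # Take the branch below the anchor line (down the wall)
    return x, math.sqrt(y_squared)

# Example: anchors 20 m apart, measured distances of 12 m and 15 m
print(head_position(20.0, 12.0, 15.0))  # ~ (7.98, 8.97)
```

Repeating this position estimate for every sprayed dot is what would let such a robot place millions of dots per wall with consistent accuracy.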
Professor Graham Oppy discusses the Turing Test, whether AI can understand, whether it can be more ethical than humans, moral realism, AI alignment, incoherence in human value, indirect normativity and much more.
Chapters:
0:00 The Turing test
6:06 Agentic LLMs
6:42 Concern about non-anthropocentric intelligence
7:57 Machine understanding & the Chinese Room argument
10:21 AI ‘grokking’: seemingly understanding stuff
13:06 AI and fact checking
15:01 Alternative tests for assessing AI capability
17:35 Moral Turing Tests: Can AI be highly moral?
18:37 Philosophy’s role in AI development
21:51 Can AI help progress philosophy?
23:48 Increasing precision in the language of philosophy via technoscience
24:54 Should philosophers be more involved in AI development?
26:59 Moral realism & finding universal principles
31:02 Empiricism & moral truth
32:09 Challenges to moral realism
33:09 Truth and facts
36:26 Are suffering and pleasure real?
37:54 Signatures of pain
39:25 AI learning from morally relevant features of reality
41:22 AI self-improvement
42:36 AI mind reading
43:46 Can AI learn to care via moral realism?
45:42 Bias in AI training data
46:26 Metaontology
48:27 Is AI conscious?
49:45 Can AI help resolve moral disagreements?
51:07 ‘Little’ philosophical progress
54:09 Does the human condition prevent or retard widespread value convergence?
55:04 Difficulties in AI aligning to incoherent human values
56:30 Empirically informed alignment
58:41 Training AI to be humble
59:42 Paperclip maximizers
1:00:41 Indirect specification: avoiding AI totalizing narrow and poorly defined goals
1:02:35 Humility
1:03:55 Epistemic deference to ‘Jupiter-brain’ AI
1:05:27 Indirect normativity: verifying Jupiter-brain oracle AI’s suggested actions
1:08:25 Ideal observer theory
1:10:45 Veil of ignorance
1:13:51 Divine psychology
1:16:21 The problem of evil: an indifferent god?
1:17:21 Ideal observer theory and moral realism
See Wikipedia article on Graham Oppy: https://en.wikipedia.org/wiki/Graham_…
Twitter: https://twitter.com/oppygraham
#AI #philosophy #aisafety #ethics
Many thanks for tuning in! Please support SciFuture by subscribing and sharing!
Have any ideas about people to interview? Want to be notified about future events? Any comments about the STF series? Please fill out this form: https://docs.google.com/forms/d/1mr9P…
Kind regards, Adam Ford