What are good policy options for academic journals regarding the detection of AI-generated content and publication decisions? As a group of associate editors of Dialectica note below, there are several issues involved, including the uncertain performance of AI detection tools and the risk that material checked by such tools is used for the further training of AIs. They're interested in learning about what policies, if any, other journals have instituted in regard to these challenges and how they're working, as well as other AI-related problems journals should have policies about. They write:

As associate editors of a philosophy journal, we face the challenge of dealing with content that we suspect was generated by AI. Just like plagiarized content, AI-generated content is submitted under a false claim of authorship. Among the unique challenges posed by AI, the following two are pertinent for journal editors.

First, there is the worry of feeding material to AI while attempting to minimize its impact. To the best of our knowledge, the only available method to check for AI-generated content involves websites such as GPTZero. However, using such AI detectors differs from using plagiarism software in that it runs the risk of making copyrighted material available for the purposes of AI training, which eventually aids the development of a commercial product. We wonder whether using such software under these conditions is justifiable.

Second, there is the worry of delegating decisions to an algorithm whose workings are opaque. Unlike plagiarized texts, texts generated by AI routinely do not stand in an obvious relation of resemblance to an original. This makes it extremely difficult to verify whether an article, or part of one, was AI-generated; the basis for refusing to consider an article on such grounds is therefore shaky at best.
We wonder whether it is problematic to refuse to publish an article solely because the likelihood of its being generated by AI passes a specific threshold (say, 90%) according to a specific website. We would be interested to learn about best practices adopted by other journals and about issues we may have neglected to consider. We would especially appreciate the thoughts of fellow philosophers as well as members of other fields facing similar problems. — Aleks…
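One reason a fixed detector threshold can mislead: even a detector that is "90% accurate" in both directions can flag mostly human-written papers when AI-generated submissions are rare. The sketch below works through Bayes' rule with purely hypothetical numbers (the base rate, sensitivity, and specificity are assumptions for illustration, not measurements of any real detector such as GPTZero):

```python
def positive_predictive_value(base_rate, sensitivity, specificity):
    """P(paper is AI-generated | detector flags it), via Bayes' rule."""
    true_pos = base_rate * sensitivity
    false_pos = (1 - base_rate) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# Hypothetical assumptions: 5% of submissions are AI-generated, the
# detector catches 90% of those, and wrongly flags 10% of human papers.
ppv = positive_predictive_value(base_rate=0.05, sensitivity=0.90, specificity=0.90)
print(f"{ppv:.2f}")  # ~0.32: under these assumptions, most flagged papers are human-written
```

Under these (assumed) rates, roughly two out of three flagged papers would be false positives, which is one way to make the "shaky basis" worry precise.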
Category: robotics/AI
Summary: Researchers successfully connected lab-grown brain tissues, mimicking the complex networks found in the human brain. This novel method involves linking “neural organoids” with axonal bundles, enabling the study of interregional brain connections and their role in human cognitive functions.
The connected organoids exhibited more sophisticated activity patterns, demonstrating both the generation and synchronization of electrical activity akin to natural brain functions. This breakthrough not only enhances our understanding of brain network development and plasticity but also opens new avenues for researching neurological and psychiatric disorders, offering hope for more effective treatments.
In February 2023, Frontiers in Science published an article titled “Organoid Intelligence (OI): The New Frontier in Biocomputing and Intelligence-in-a-Dish.” Since its publication, this research has sparked significant scientific interest and gained coverage in Forbes, Financial Times, Wall Street Journal, BBC, CNN and many others.
So, what is organoid intelligence and why has this article gathered such attention?
The article showcases a forward-thinking and captivating concept of how brain organoids – artificially grown human brain tissue – could be used to study human brain cognitive function, with potential assistance from artificial intelligence and biocomputing. This multidisciplinary, emerging field holds great promise for advancing our understanding of the brain and accelerating progress in neuroscience research.
This fleshy, pink smiling face is made from living human skin cells, and was created as part of an experiment to let robots show emotion.
How would such a living tissue surface, whatever its advantages and disadvantages, attach to the mechanical foundation of a robot’s limb or “face”?
In humans and…
A team of scientists unveiled a robot face covered with a delicate layer of living skin that heals itself and crinkles into a smile in hopes of developing more human-like cyborgs.
The skin was made in a lab at the University of Tokyo from a mixture of human skin cells grown on a collagen model and placed on top of a 3D-printed resin base, the New Scientist reported.
Scientists on the project — who published their findings in Cell Reports Physical Science on Tuesday — believe the living skin could be a key step in creating robots that heal and feel like humans.
Numerous electrophysiological experiments have reported that the prefrontal cortex (PFC) is involved in the process of working memory. PFC neurons continue firing to maintain stimulus information in the delay period without external stimuli in working memory tasks. Further findings indicate that while the activity of single neurons exhibits strong temporal and spatial dynamics (heterogeneity), the activity of population neurons can encode spatiotemporal information of stimuli stably and reliably. From the perspective of neural networks, the computational mechanism underlying this phenomenon is not well demonstrated. The main purpose of this paper is to adopt a new strategy to explore the neural computation mechanism of working memory. We used reinforcement learning to train a recurrent neural network model to learn a spatial working memory task.
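A minimal sketch of the kind of task and network described above, not the authors' actual model: a delayed spatial working-memory trial (cue, then a stimulus-free delay during which the recurrent state must carry the cue location), run through a small rate-based RNN with random, untrained weights. All dimensions and period lengths are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed trial structure: cue period -> delay (no input) -> response.
N_LOC, T_CUE, T_DELAY, T_RESP = 8, 5, 20, 5

def make_trial(loc):
    """One-hot spatial cue during the cue period, silence afterwards."""
    T = T_CUE + T_DELAY + T_RESP
    x = np.zeros((T, N_LOC))
    x[:T_CUE, loc] = 1.0
    return x

# Small rate-based RNN: h_t = tanh(W_rec h_{t-1} + W_in x_t).
# In the paper's setting these weights would be trained with
# reinforcement learning; here they are random for illustration.
N_HID = 64
W_in = rng.normal(0.0, 0.5, (N_HID, N_LOC))
W_rec = rng.normal(0.0, 1.0 / np.sqrt(N_HID), (N_HID, N_HID))

def run_rnn(x):
    h = np.zeros(N_HID)
    states = []
    for x_t in x:
        h = np.tanh(W_rec @ h + W_in @ x_t)
        states.append(h.copy())
    return np.array(states)

states = run_rnn(make_trial(loc=3))
print(states.shape)  # (30, 64): one hidden-state vector per time step
```

The point of the sketch is the task structure: during the 20-step delay the input is all zeros, so any information about the cue location that survives to the response period must be held in the recurrent dynamics, which is the "persistent firing" phenomenon the abstract describes.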
Disruptive innovations in technology, such as humanoid robots and electric vehicles, will lead to significant changes in labor, economy, and society, posing both opportunities and challenges for the future.
Questions to inspire discussion.
What are the predictions about the future of electric vehicles?
The video discusses accurate predictions made by Tony Seba and his team about the future of EVs, which the media has not reported on.
A new startup emerged out of stealth mode today to power the next generation of generative AI. Etched is a company that makes an application-specific integrated circuit (ASIC) to process "Transformers." The transformer is an architecture for designing deep learning models developed by Google and is now the powerhouse behind models like OpenAI's GPT-4o in ChatGPT, Anthropic's Claude, Google Gemini, and Meta's Llama family. Etched set out to create an ASIC that processes only transformer models, a chip called Sohu. The claim is that Sohu outperforms NVIDIA's latest and greatest by an entire order of magnitude: where a server with eight NVIDIA H100 GPUs pushes Llama-3 70B at 25,000 tokens per second, and the latest eight-GPU B200 "Blackwell" server pushes 43,000 tokens/s, an eight-chip Sohu server reportedly outputs 500,000 tokens per second.
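Taking the quoted throughput figures at face value, the "order of magnitude" claim can be checked with simple arithmetic. The numbers below are the article's (vendor-supplied) claims, not independent benchmarks:

```python
# Claimed Llama-3 70B throughput on 8-chip servers, in tokens/second.
h100_8x = 25_000   # eight NVIDIA H100 GPUs
b200_8x = 43_000   # eight NVIDIA B200 "Blackwell" GPUs
sohu_8x = 500_000  # eight Etched Sohu ASICs (vendor claim)

print(f"Sohu vs H100: {sohu_8x / h100_8x:.1f}x")  # 20.0x
print(f"Sohu vs B200: {sohu_8x / b200_8x:.1f}x")  # ~11.6x
```

So the claimed advantage is about 20x over H100 and about 11.6x over B200, which is consistent with the article's order-of-magnitude framing.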
A resurfaced video shows an OpenAI engineer conceding that it’s “deeply unfair” to “build AI and take everyone’s job away.”
Explore the concept of the singularity— the point where AI could surpass human intelligence—and its potential impact on society.
As new AI models make their way into the mainstream and business leaders scramble to adapt, one sometimes overlooked aspect of computing is resource conservation. In an age where efficiency matters, teams are developing new frameworks for more efficient compute.