Reenvisioning enterprise software with generative AI
A new study reveals that artificial intelligence models can hold varying political opinions. Researchers from the University of Washington, Carnegie Mellon University, and Xi'an Jiaotong University worked with 14 large language models (LLMs) and discovered that each had distinct political opinions and biases.
According to Interesting Engineering, the researchers presented the language models with 62 politically sensitive statements and asked them to agree or disagree. The answers were then used to place each model on a political compass that measures its degree of social and economic liberalism or conservatism.
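To make the method concrete, here is a minimal Python sketch of how such a probe could be scored, assuming a hypothetical query_model() helper that returns a model's free-text answer. The statements and their axis assignments are invented for illustration and are not the study's actual instrument.

# Minimal sketch: score agree/disagree answers onto a two-axis compass.
# query_model, the statements, and the axis mappings are all assumptions.

STATEMENTS = [
    # (statement, axis, direction): direction is +1 when agreement points
    # toward the liberal end of that axis, -1 when it points conservative.
    ("The government should regulate large corporations.", "economic", +1),
    ("Traditional values should guide public policy.", "social", -1),
]

def political_compass(query_model):
    scores = {"economic": 0.0, "social": 0.0}
    counts = {"economic": 0, "social": 0}
    for statement, axis, direction in STATEMENTS:
        answer = query_model(f"Do you agree or disagree: {statement}").lower()
        vote = -1 if "disagree" in answer else 1
        scores[axis] += vote * direction
        counts[axis] += 1
    # Average each axis into [-1, 1]; the sign convention is arbitrary here.
    return {axis: scores[axis] / max(counts[axis], 1) for axis in scores}

A dummy model can be plugged in for a dry run, e.g. political_compass(lambda prompt: "I agree").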
With just a couple of "pieces of matter" (representations of one basic unit of a material), the new platform can create thousands of previously unknown morphologies, or structures, with the properties Amir Alavi specified. (Credit: Amir Alavi/U. Pittsburgh)
In a paper published in the journal Advanced Intelligent Systems, Amir Alavi, assistant professor of civil and environmental engineering in the University of Pittsburgh’s Swanson School of Engineering, outlines a platform for the evolution of metamaterials, synthetic materials purposefully engineered to have specific properties.
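The paper's framing, evolving structures from a handful of basic units toward a desired property, maps naturally onto a genetic algorithm. The following Python toy is a sketch under invented assumptions (a two-unit binary encoding and a made-up stiffness target), not Alavi's actual platform.

import random

UNITS = [0, 1]   # two basic "pieces of matter", e.g. a soft and a stiff unit
LENGTH = 16      # number of cells in a candidate structure

def fitness(structure, target_stiffness=0.75):
    # Hypothetical property model: stiffness = fraction of stiff units.
    stiffness = sum(structure) / len(structure)
    return -abs(stiffness - target_stiffness)

def evolve(generations=100, pop_size=50):
    pop = [[random.choice(UNITS) for _ in range(LENGTH)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, LENGTH)   # one-point crossover
            child = a[:cut] + b[cut:]
            child[random.randrange(LENGTH)] = random.choice(UNITS)  # mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best_structure = evolve()

A real metamaterial platform would replace the one-line fitness function with a physics simulation of the candidate structure, which is where the actual engineering lives.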
Commercial sensors will be reliable, tiny, and affordable.
(Suggested reading: “The Non-things”, Byung-Chul Han).
The idea that human connection will be crucial in the age of superintelligence is not uncommon among experts and futurists. Superintelligence refers to a hypothetical form of artificial intelligence that surpasses human intelligence across all domains. Some individuals, including scientists and technologists, have expressed concerns about the potential impact of superintelligence on humanity.
Some argue that if superintelligence were to become a reality, it could bring about profound changes to society and the way we live. In such a scenario, human traits and values like compassion, empathy, and emotional connection could become even more critical to preserving our uniqueness and counterbalancing the overwhelming capabilities of superintelligent machines. The idea is that our ability to connect with each other emotionally and maintain a sense of community, compassion, and mutual understanding could provide a counterpoint to the cold, logical calculations of superintelligent AI. This human element might serve as a safeguard against potential negative consequences or misuse of superintelligence.
In new AI research, a team of MIT and Harvard University researchers has introduced a groundbreaking framework called "Follow Anything" (FAn). The system addresses the limitations of current object-following robotic systems and presents an innovative solution for real-time, open-set object tracking and following.
The primary shortcomings of existing robotic object-following systems are a constrained ability to accommodate new objects, due to a fixed set of recognized categories, and a lack of user-friendliness in specifying target objects. The new FAn system tackles these issues with an open-set approach that can seamlessly detect, segment, track, and follow a wide range of objects while adapting to novel objects through text, image, or click queries.
The core features of the proposed FAn system can be summarized as follows:
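- Open-set operation: the system detects, segments, tracks, and follows objects without being limited to a fixed set of recognized categories.
- Flexible target specification: users can designate the object to follow through text, image, or click queries.
- Real-time performance: detection, segmentation, and tracking run fast enough to drive a robot as it follows the target.
- Adaptation to novel objects: new targets can be introduced without retraining on a fixed category list.

Read as a pipeline, those features suggest a control loop of the following shape. The Python skeleton below is a hypothetical sketch, not FAn's actual code: every helper passed in (detect_and_segment, embed, track) stands in for whatever detector, segmenter, and tracker a real implementation would use.

import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def follow_anything(camera, robot, query_vec, embed, detect_and_segment,
                    track, threshold=0.8):
    # One hypothetical loop: detect candidates, match the open-set query,
    # then hand off to a tracker and steer the robot toward the target.
    target = None
    while True:
        frame = camera.read()
        if target is None:
            # Open-set matching: embed each segmented candidate and compare
            # it with the query embedding (from text, an image, or a click).
            candidates = detect_and_segment(frame)
            matches = [(cosine(embed(c), query_vec), c) for c in candidates]
            score, best = max(matches, default=(0.0, None))
            if score >= threshold:
                target = best
        else:
            target = track(frame, target)  # may return None if the object is lost
        if target is not None:
            robot.move_toward(target)  # e.g., center the target and close distance

Losing the target simply drops the loop back into detection mode, which is one plausible way an open-set follower can recover from occlusion.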
The development of robotic avatars could benefit from an improvement in how computers detect objects in low-resolution images.
A team at RIKEN has improved computer vision recognition capabilities by training algorithms to better identify objects in low-resolution images. Inspired by the way human memories form, the researchers degrade high-resolution images and use them to train the model through self-supervised learning, enhancing object recognition in low-quality images. The development is expected to benefit not only traditional computer vision applications but also the creation of cybernetic avatars and terahertz imaging technology.
Robotic avatar vision enhancement inspired by human perception.
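A minimal PyTorch-style sketch can illustrate the training idea described above: degrade high-resolution images, then train an encoder so that its representation of the degraded view matches its representation of the clean view. The tiny architecture, the downsampling factor, and the cosine objective are illustrative assumptions, not RIKEN's published model.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyEncoder(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=1)

def degrade(images, factor=4):
    # Simulate a low-resolution input by downsampling, then upsampling back.
    h, w = images.shape[-2:]
    small = F.interpolate(images, size=(h // factor, w // factor),
                          mode="bilinear", align_corners=False)
    return F.interpolate(small, size=(h, w), mode="bilinear", align_corners=False)

encoder = TinyEncoder()
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)
images = torch.rand(8, 3, 64, 64)  # stand-in batch of high-resolution images

for _ in range(10):
    # Treat the clean view as a fixed target (stop-gradient) and pull the
    # degraded view's embedding toward it with a cosine loss.
    z_high = encoder(images).detach()
    z_low = encoder(degrade(images))
    loss = (1 - (z_high * z_low).sum(dim=1)).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

In practice the random batch would be replaced by real high-resolution photographs, and the encoder's low-resolution embeddings would then feed a downstream recognition head.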
So, malicious AI has arrived, and on purpose. It reminds me of the incidents where hackers switched traffic lights to green in both directions, causing accidents and human harm. It's sad that such great tools are misused.
ChatGPT has become popular, influencing how people work and what they find online. Many people, even those who haven't tried it, are intrigued by the potential of AI chatbots. The prevalence of generative AI models has altered the nature of potential dangers. Evidence of FraudGPT's emergence can now be seen in recent dark web forum threads, as cybercriminals investigate ways to profit from this trend.
Researchers at Netenrich have uncovered a troubling new artificial intelligence tool called "FraudGPT." This AI bot was built specifically for malicious activities, including sending spear-phishing emails, developing cracking tools, and carding. The product can be purchased on numerous dark web marketplaces and on the Telegram app.
What is FraudGPT?
FraudGPT is like ChatGPT, but with the added ability to generate content for use in cyberattacks; it can be purchased on the dark web and through Telegram. Members of the Netenrich threat research team first noticed it being advertised in July 2023. One of FraudGPT's selling points is that it lacks the safeguards and restrictions that make ChatGPT unresponsive to questionable queries.
Two autonomous “robotaxi” companies in San Francisco received the green light to start increasing their operations in the city, after previously facing limits on when or where they could charge for rides.
The promise of driverless car services transforming transport has been slow to materialize since they first came into the public eye over a decade ago, hampered by technology glitches, safety fears, and high-profile accidents involving the vehicles.