
What AI developers need to know about artificial intelligence ethics

Compelling tools and platforms may promise fair and balanced AI, but tools and platforms alone won't deliver ethical AI solutions, says Reid Blackman, who charts ways through thorny AI ethics issues in his upcoming book, Ethical Machines: Your Concise Guide to Totally Unbiased, Transparent and Respectful AI (Harvard Business Review Press). He provides ethics advice to developers working with AI because, in his own words, "tools are efficiently and effectively wielded when their users are equipped with the requisite knowledge, concepts, and training." To that end, Blackman offers some of the insights development and IT teams need in order to deliver ethical AI.

Don’t worry about dredging up your Philosophy 101 class notes

Considering prevailing ethical and moral theories and applying them to AI work “is a terrible way to build ethically sound AI,” Blackman says. Instead, work collaboratively with teams on practical approaches. “What matters for the case at hand is what [your team members] think is an ethical risk that needs to be mitigated and then you can get to work collaboratively identifying and executing on risk-mitigation strategies.”

Israeli-Based Tech Startup Brings Your Old Family Photos To Life With Amazing Artificial Intelligence

There has been a massive tidal wave of tech innovation over the last couple of years. Some apps and platforms offer basic services for the home or office. Others ignite your imagination.

Gil Perry, CEO and cofounder of D-ID, an Israel-based tech company, has created something beautiful and touching. Leveraging artificial intelligence, the company has built a unique animated live portrait that brings photos of long-lost relatives, or whomever you'd like to see, to life, as if they were in the room with you. Its tech makes the people in old photos look realistic and natural.

The feature, Deep Nostalgia, lets users upload a photo of a person or group of people to see individual faces animated by AI. People have been able to breathe life into their old black-and-white photos of grandma and grandpa that have been stored in boxes up in the attic.

How do we know when AI becomes conscious and deserves rights?

Machines becoming conscious, self-aware, and having feelings would be an extraordinary threshold. We would have created not just life, but conscious beings.

There has already been massive debate about whether that will ever happen. While the discussion is largely about supra-human intelligence, that is not the same thing as consciousness.

Now the massive leaps in the quality of AI conversational bots are leading some to believe that we have passed that threshold and that the AI we have created is already sentient.

A new brain-inspired intelligent system can drive a car using only 19 control neurons!

Read the article:► https://medium.com/towards-artificial-intelligence/a-new-bra…d127107db9
Paper:► https://www.nature.com/articles/s42256-020-00237-3.epdf
Watch MIT’s video:► https://www.youtube.com/watch?v=8KBOf7NJh4Y&feature=emb_titl…l=MITCSAIL
GitHub:► https://github.com/mlech26l/keras-ncp
Colab tutorials:
The basics of Neural Circuit Policies:► https://colab.research.google.com/drive/1IvVXVSC7zZPo5w-PfL3…sp=sharing
How to stack NCP with other types of layers:► https://colab.research.google.com/drive/1-mZunxqVkfZVBXNPG0k…sp=sharing
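The neurons behind this result are continuous-time "liquid" units: each neuron's state follows a small differential equation rather than a one-shot activation. As a rough illustration of that idea (not the authors' implementation — the keras-ncp library linked above is the real thing, and all weights, sizes, and the 4-dimensional input here are made up), the sketch below Euler-integrates a toy recurrent layer of 19 such neurons:

```python
import math
import random

def ltc_step(state, inputs, w_in, w_rec, tau, dt=0.1):
    """One Euler step of a toy continuous-time ("liquid") neuron layer.

    Each neuron i integrates:  dx_i/dt = -x_i/tau_i + tanh(W_in @ u + W_rec @ x)
    i.e. a leak term plus a bounded drive from inputs and recurrent state.
    """
    n = len(state)
    new_state = []
    for i in range(n):
        drive = sum(w_in[i][j] * inputs[j] for j in range(len(inputs)))
        drive += sum(w_rec[i][j] * state[j] for j in range(n))
        dx = -state[i] / tau[i] + math.tanh(drive)
        new_state.append(state[i] + dt * dx)
    return new_state

random.seed(0)
N_NEURONS, N_INPUTS = 19, 4  # 19 control neurons; toy 4-dim sensory input
w_in = [[random.uniform(-1, 1) for _ in range(N_INPUTS)] for _ in range(N_NEURONS)]
w_rec = [[random.uniform(-0.3, 0.3) for _ in range(N_NEURONS)] for _ in range(N_NEURONS)]
tau = [1.0] * N_NEURONS  # time constant per neuron

state = [0.0] * N_NEURONS
for t in range(100):  # feed a synthetic sensory stream for 100 steps
    u = [math.sin(0.1 * t), math.cos(0.1 * t), 0.5, -0.5]
    state = ltc_step(state, u, w_in, w_rec, tau)

# In the paper's setup, a designated motor neuron's activation is read
# out as the control signal (e.g. steering); here we just print one.
print(round(state[-1], 3))
```

In the actual Neural Circuit Policies, the wiring between sensory, inter, command, and motor neurons is sparse and structured (inspired by the C. elegans nervous system), which is what makes such a tiny network sufficient; the dense random weights above are purely for illustration.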

Follow me for more AI content:
Instagram: https://www.instagram.com/whats_ai/
LinkedIn: https://www.linkedin.com/in/whats-ai/
Twitter: https://twitter.com/Whats_AI
Facebook: https://www.facebook.com/whats.artificial.intelligence/
Medium: https://medium.com/@whats_ai

The best courses to start and progress in AI:
https://www.omologapps.com/whats-ai

Join Our Discord channel, Learn AI Together:
https://discord.gg/learnaitogether

Support me on Patreon:

Google Engineer On Leave After He Claims AI Program Has Gone Sentient

Blake Lemoine reached his conclusion after conversing since last fall with LaMDA, Google’s artificially intelligent chatbot generator, which he calls part of a “hive mind.” He was supposed to test whether his conversation partner used discriminatory language or hate speech.

As he and LaMDA messaged each other recently about religion, the AI talked about “personhood” and “rights,” he told The Washington Post.

It was just one of the many startling “talks” Lemoine has had with LaMDA. He has linked to one of them on Twitter: a series of chat sessions with some editing (which is marked).

Fearing lawsuits, factories rush to replace humans with robots in South Korea

“Throughout our history, we’ve always had to find ways to stay ahead,” Kim told Rest of World. “Automation is the next step in that process.”

Speefox’s factory is 75% automated, representing South Korea’s continued push away from human labor. Part of that drive is labor costs: South Korea’s minimum wage rose 5% this year alone.

But the most recent impetus is legal liability for worker death or injury. In January, a law called the Serious Disasters Punishment Act came into effect; it says, effectively, that if workers die or sustain serious injuries on the job, and courts determine that the company neglected safety standards, the CEO or high-ranking managers can be fined or imprisoned.
