
It’s becoming increasingly clear that courts, not politicians, will be the first to determine the limits on how AI is developed and used in the US.

Last week, the Federal Trade Commission opened an investigation into whether OpenAI violated consumer protection laws by scraping people’s online data to train its popular AI chatbot ChatGPT. Meanwhile, artists, authors, and the image company Getty are suing AI companies such as OpenAI, Stability AI, and Meta, alleging that the companies broke copyright laws by training their models on the plaintiffs’ work without providing any recognition or payment.

OpenAI is releasing the Android version of its ChatGPT app next week, after launching on iOS in May.

Since launching in November, OpenAI’s ChatGPT has attracted users at a rate that’s astounding for anything outside of Threads — now the company says it’s ready to release an Android app.

The ChatGPT for Android app is launching a few months after the free iOS app brought the chatbot to iPhones and iPads.

In a tweet, the company announced that ChatGPT for Android is rolling out next week, without naming a specific day, and linked to a pre-registration page in the Google Play Store where you can sign up to have the app installed once it’s available.


But Bing’s AI app can bring GPT-4 to Android right now.

The Biden administration announced on Friday a voluntary agreement with seven leading AI companies, including Amazon, Google, and Microsoft. The move, ostensibly aimed at managing the risks posed by AI and protecting Americans’ rights and safety, has provoked a range of questions, the foremost being: What does the new voluntary AI agreement mean?

At first glance, the voluntary nature of these commitments looks promising. Regulation in the technology sector is always contentious, with companies wary of stifling growth and governments eager to avoid making mistakes. By sidestepping the direct imposition of command and control regulation, the administration can avoid the pitfalls of imposing…


That said, it’s not an entirely hollow gesture. It does emphasize important principles of safety, security, and trust in AI, and it reinforces the notion that companies should take responsibility for the potential societal impact of their technologies. Moreover, the administration’s focus on a cooperative approach, involving a broad range of stakeholders, hints at a potentially promising direction for future AI governance. However, we should also not forget the risk of government growing too cozy with industry.

Still, let’s not mistake this announcement for a seismic shift in AI regulation. We should consider this a not-very-significant step on the path to responsible AI. At the end of the day, what the government and these companies have done is put out a press release.


Apple is secretly developing its own generative AI tools to challenge OpenAI’s GPT and other language models like Google’s Bard, reports Mark Gurman of Bloomberg News. Internally dubbed “Apple GPT,” the company’s ChatGPT clone is already being used by some employees for work tasks.

Gurman reveals that a cross-functional collaboration is underway at Apple, with teams from AI, software engineering, and cloud infrastructure working together on this covert project. This initiative is driven by concerns within Apple that its devices may lose their essential status if the company lags behind its competitors in AI.

AI has been a component of Apple’s devices for years, primarily operating behind the scenes to enhance photo and video quality and to power features such as crash detection. More recently, Apple has begun introducing AI-powered enhancements to iOS.

Generative AI has been front and centre in the news for the last nine months, and attention often focuses on existential risks, copyright claims, or suspicions around deepfakes. However, there is a growing number of more positive ways it can be integrated into businesses.

One of those areas is customer service. Samsung’s Neon “artificial humans” were a good example of what could be achieved with embodied AI: the company created an impressive suite of customer service agents whose profiles could be matched to those of customers in need of help.


I wanted an avatar that was a bit ‘uncanny’, so that it had some resemblance to my real physical self but looked quite artificial too.

What happens when humans begin combining biology with technology, harnessing the power to recode life itself?

What does the future of biotechnology look like? How will humans program biology to create organ farm technology and bio-robots? And what happens when companies begin investing in advanced bio-printing, artificial wombs, and cybernetic prosthetic limbs?

Other topics include: bioengineered food and farming, bio-printing in space, new age living bioarchitecture (eco concrete inspired by coral reefs), bioengineered bioluminescence, cyberpunks and biopunks who experiment underground — creating new age food and pets, the future of bionics, corporations owning bionic limbs, the multi-trillion dollar industry of bio-robots, and bioengineered humans with superpowers (Neo-Humans).

It also explores the future of biomedical engineering, biochemistry, and biodiversity.
_______

Created by: Jacob.
Narration by: Alexander Masters (www.alexander-masters.com)

Modern Science Fiction.

I hope this hasn’t been posted before, especially by me. I do have a bit of pre-dementia from my TBI, but it’s not too bad. In any case, they’re working on weeding out bias from AI and making it so it’s not bad for us or to us.


Thought-provoking TED Talk on how AI can unintentionally reinforce societal prejudices, perpetuate discrimination, and amplify toxic behaviors online. This talk is a call to action for individuals, tech companies, and policymakers alike. By addressing AI bias and toxicity head-on, we can pave the way for a future where AI systems are truly unbiased, fostering inclusivity and equality for all.

The speaker, Priya, is a product leader with over nine years of experience building large-scale consumer products at Yahoo and Apple. She is passionate about driving innovation while building dynamic and inclusive teams. During her time at MIT, she built inclusively and won prestigious funding through the MIT 100K award (previous finalists include HubSpot and Akamai). She has also been invited to share her work at TEDx Boston and the MIT Media Lab. This talk was given at a TEDx event using the TED conference format but independently organized by a local community.

Just as Ray Kurzweil predicted in the book The Age of Spiritual Machines.


The analysts cited Timothy Papandreou, an advisor to Alphabet’s research and development organization, X (formerly Google X), who explained at a recent event that A.I. will lead to a transition from a generation of programmers to a generation of “perfect prompters” as kids learn to utilize generative A.I. “assistants” throughout their lives.

Gen A won’t need programming skills to use their A.I. models; instead, they will work to properly prompt these systems with simple text to get the desired outcome, whether that’s finding information about Kafka for a school paper or writing an email at work.

“Children will now have an A.I. avatar shadow assistant or agent from birth. As they grow A.I. will grow with them and know everything it needs to know, and always be there as a mentor,” Papandreou said, arguing that Gen Z will be the “last generation to not grow up with AI.”
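For readers curious what “prompting with simple text” looks like at the API level today, here is a minimal sketch. It assumes the OpenAI Python client (openai>=1.0) with an API key in the environment; the model name and prompt are illustrative choices, not something drawn from the article.

```python
# Minimal sketch: send a plain-language prompt to a generative model and
# print the reply. Assumes the OpenAI Python client and OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The "programming" is just plain text describing the desired outcome.
prompt = "Summarize Franz Kafka's 'The Metamorphosis' in three sentences for a school paper."

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

The point of the example is that the only variable part is the text itself: a future “shadow assistant” would hide even this thin layer of code, leaving the user with nothing but the prompt.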

Artificial intelligence today is spreading like wildfire. New and more powerful applications are emerging almost daily, and people are interacting with AI more in their everyday lives.

All of this is very exciting for modern enterprises, which have reinvented themselves as digital powerhouses post-pandemic. Still, digital transformations don’t happen overnight, and tech companies—and SaaS models in particular—were the saving grace of businesses sprinting to align their customer experiences with new consumer expectations and data-driven capabilities.

But AI is a different animal. Not only does it require huge amounts of data, but it also requires a new approach to how that data is obtained and managed within the enterprise. Businesses expecting a simple “build it and leave it” experience with AI are in for a shock, and many will undoubtedly ask, “Can SaaS deliver AI at speed and scale?”