Meet FreedomGPT: An Open-Source AI Technology Built on Alpaca and Programmed to Recognize and Prioritize Ethical Considerations Without Any Censorship Filter

Large Language Models have rapidly gained enormous popularity thanks to their extraordinary capabilities in Natural Language Processing and Natural Language Understanding. The model most recently in the headlines is the well-known ChatGPT. Developed by OpenAI, it is famous for holding realistic, human-like conversations and does everything from question answering and content generation to code completion, machine translation, and text summarization.

ChatGPT complies with censorship requirements and certain safety rules that don’t let it generate harmful or offensive content. A new language model called FreedomGPT has recently been introduced; it is quite similar to ChatGPT but places no restrictions on the content it generates. Developed by Age of AI, an Austin-based AI venture capital firm, FreedomGPT answers questions free from any censorship or safety filters.

FreedomGPT is built on Alpaca, an open-source model that Stanford University researchers fine-tuned from the LLaMA 7B model on 52K instruction-following demonstrations. FreedomGPT leverages Alpaca because Alpaca is more accessible and customizable than many other AI models. ChatGPT follows OpenAI’s usage policies, which restrict categories like hate, self-harm, threats, violence, and sexual content. Unlike ChatGPT, FreedomGPT answers questions without bias or partiality and doesn’t hesitate to take on controversial or contentious topics.

The takeaways from Stanford’s 386-page report on the state of AI

Writing a report on the state of AI must feel a lot like building on shifting sands: By the time you hit publish, the whole industry has changed under your feet. But there are still important trends and takeaways in Stanford’s 386-page bid to summarize this complex and fast-moving domain.

The AI Index, from the Institute for Human-Centered Artificial Intelligence, worked with experts from academia and private industry to collect information and predictions on the matter. As a yearly effort (and by the size of it, you can bet they’re already hard at work laying out the next one), this may not be the freshest take on AI, but these periodic broad surveys are important for keeping one’s finger on the pulse of the industry.

This year’s report includes “new analysis on foundation models, including their geopolitics and training costs, the environmental impact of AI systems, K-12 AI education, and public opinion trends in AI,” plus a look at policy in a hundred new countries.

Generative AI’s future in enterprise could be smaller, more focused language models

Maybe the future of these models is more focused than the boil-the-ocean approach we’ve seen from OpenAI and others, who want to be able to answer every question under the sun.


The amazing abilities of OpenAI’s ChatGPT wouldn’t be possible without large language models. These models are trained on billions, sometimes trillions, of examples of text. The idea behind ChatGPT is to understand language so well that it can anticipate, in a split second, which word plausibly comes next. That takes a ton of training, compute resources, and developer savvy to make happen.
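To make that idea concrete, here is a minimal sketch of next-token prediction using the Hugging Face transformers library, with GPT-2 standing in for ChatGPT’s model (which is not publicly available); the model choice and prompt are illustrative only.

    # Minimal next-token prediction sketch. GPT-2 is a small, publicly available
    # stand-in; ChatGPT's own model and weights are not accessible.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "Large language models are trained to predict the next"
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

    # Turn the scores for the final position into probabilities and show the
    # five tokens the model considers most plausible as a continuation.
    next_token_probs = torch.softmax(logits[0, -1], dim=-1)
    top = torch.topk(next_token_probs, k=5)
    for prob, token_id in zip(top.values, top.indices):
        print(f"{tokenizer.decode(int(token_id))!r}: {prob.item():.3f}")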

In the AI-driven future, each company’s own data could be its most valuable asset. If you’re an insurance company, you have a completely different lexicon than a hospital, an automotive company, or a law firm, and when you combine that with your customer data and the full body of content across the organization, you have a language model. It may not be large in the truly large-language-model sense, but it would be just the model you need: a model created for one organization and not for the masses.
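As an illustration of what such a focused, in-house model might look like in practice, here is a hypothetical sketch that adapts a small, openly available causal language model to an organization’s own documents using the Hugging Face libraries; the file name, model choice, and hyperparameters are placeholders rather than anything prescribed by the article.

    # Hypothetical sketch: fine-tuning a small open model on internal text so it
    # picks up the organization's own lexicon. All names here are placeholders.
    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
    tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained("distilgpt2")

    # One document per line: claims notes, policy wordings, support tickets, etc.
    dataset = load_dataset("text", data_files={"train": "internal_corpus.txt"})

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, max_length=512)

    tokenized = dataset["train"].map(tokenize, batched=True,
                                     remove_columns=["text"])

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="domain-lm", num_train_epochs=1,
                               per_device_train_batch_size=4),
        train_dataset=tokenized,
        # mlm=False gives standard next-token (causal) language modeling.
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()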

How AI Can Look Into Your Eyes And Diagnose A Devastating Brain Disease

“The eyes are the windows to the soul.” It’s an ancient saying, and it illustrates what we know intuitively to be true — you can understand so much about a person by looking them deep in the eye. But how? And can we use this fact to understand disease?

One company is making big strides in this direction. Israel’s NeuraLight, which just won the Health and Medtech Innovation award at SXSW, was founded to bring science and AI to understanding the brain through the eyes.

A focal disease for NeuraLight is ALS, which is currently diagnosed through a subjective survey of about a dozen questions, followed by tests such as an EEG and MRI.


With NeuraLight’s system, the patient’s eyes follow dots on a screen, and the AI measures 106 parameters such as dilation and blink rate in less than 10 minutes. In other words, this will be an AI-enabled digital biomarker.

We Should Consider ChatGPT Signal For Manhattan Project 2.0

In 1942, the Manhattan Project was established by the United States as a top-secret research and development (R&D) program to produce the first nuclear weapons. The project involved thousands of scientists, engineers, and other personnel who worked on different aspects of the effort, including the development of nuclear reactors, the enrichment of uranium, and the design and construction of the bomb. The goal: to develop an atomic bomb before Germany did.

The Manhattan Project set a precedent for large-scale government-funded R&D programs. It also marked the beginning of the nuclear age and ushered in a new era of technological and military competition between the world’s superpowers.

Today we’re entering the age of Artificial Intelligence (AI), an era arguably just as important as, if not more important than, the nuclear age. While the last few months might have been the first you’ve heard about it, many in the field would argue we’ve been headed in this direction for at least the last decade, if not longer. For those new to the topic: welcome to the future; you’re late.

New AI tool can generate faster, more accurate, and sharper cosmic images

The team was able to produce blur-free, high-resolution images of the universe by incorporating a new AI algorithm.

Before reaching ground-based telescopes, cosmic light interacts with the Earth’s atmosphere. That’s why most advanced ground-based telescopes are located at high altitudes, where the atmosphere is thinner. Even so, the Earth’s changing atmosphere often obscures the view of the universe.

The atmosphere blocks certain wavelengths and distorts light arriving from great distances. This interference can compromise the accurate construction of space images, which is critical for unraveling the mysteries of the universe, and the resulting blurry images can obscure the shapes of astronomical objects and cause measurement errors.

MIT’s Codon compiler allows Python to ‘speak’ natively with computers

Researchers at MIT created Codon, which dramatically increases the speed of Python code by allowing users to run it as efficiently as C or C++.

Python is one of the most popular computer languages, but it has a severe Achilles heel: it can be slow compared to lower-level languages like C or C++. To rectify this, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) developed Codon, a Python-based compiler that allows users to write Python code that runs as efficiently as a program in C or C++.
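As a rough illustration, the snippet below is ordinary Python that also falls within the statically typed subset Codon accepts. Per the Codon project’s documentation, it can be compiled ahead of time with a command along the lines of "codon build -release fib.py" (exact flags may vary by version), while the same file still runs unchanged under the standard CPython interpreter.

    # fib.py -- a CPU-bound workload where an ahead-of-time compiler such as
    # Codon shows its largest speedups over interpreted Python. The type hints
    # are what allow a static compiler to generate native code without
    # per-object metadata. (Per Codon's docs, compile with something like
    # "codon build -release fib.py"; exact flags may differ by version.)
    from time import time

    def fib(n: int) -> int:
        # Deliberately naive recursion to make the workload compute-heavy.
        return n if n < 2 else fib(n - 1) + fib(n - 2)

    start = time()
    print(fib(32))
    print(f"elapsed: {time() - start:.2f}s")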



Saman Amarasinghe, an MIT professor and a lead investigator at CSAIL who is also a co-author of the Codon paper, notes that “if you have a dynamic language [like Python], every time you have some data, you need to keep a lot of additional metadata around it” in order to identify its type at runtime.
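A quick way to see that metadata cost in standard CPython (not Codon) is to check how much memory small values occupy; the exact numbers below are typical for 64-bit CPython 3.x and may differ slightly between versions.

    # The per-object bookkeeping Amarasinghe describes: in CPython every value
    # carries a type pointer and reference count, so even a tiny integer is
    # much larger than the 8 bytes a C int64 would need.
    import sys

    print(sys.getsizeof(7))          # typically 28 bytes on 64-bit CPython
    print(sys.getsizeof(7.0))        # typically 24 bytes
    print(sys.getsizeof([1, 2, 3]))  # list header plus one pointer per element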

AI chip race: Google says its Tensor chips compute faster than Nvidia’s A100

Google also says that it has a healthy pipeline of chips for the future.

Search engine giant Google has claimed that the supercomputers it uses to develop its artificial intelligence (AI) models are faster and more energy efficient than Nvidia Corporation’s. While most companies delving into the AI space get their processing power from Nvidia’s chips, Google uses a custom chip called the Tensor Processing Unit (TPU).

Google announced its Tensor chips during the peak of the COVID-19 pandemic, when businesses from electronics to automotive faced the pinch of a chip shortage.


AI-designed chips to further AI development

Interesting Engineering reported in 2021 that Google used AI to design its TPUs. Google claimed that the design process was completed in just six hours using AI compared to the months humans spend designing chips.

For most things associated with AI these days, product iterations occur rapidly, and the TPU is currently in its fourth generation. Just as Microsoft stitched together chips to power OpenAI’s research requirements, Google put together 4,000 TPUs to make its supercomputer.

Mind control: 3D-patterned sensors allow robots to be controlled by thought

This novel technology looks like a sci-fi device. But it’s real.

It seems like something from a science fiction movie: putting on a specialized electronic headband and using your mind to control a robot.



A new study published in the journal ACS Applied Nano Materials took a step toward making this a reality. The team produced “dry” sensors with a specific 3D-patterned structure that can record the brain’s electrical activity through hair and over the bumps and curves of the head without relying on sticky conductive gels.

New Stanford report highlights the potential, costs, and risks of AI

AI-related jobs are on the rise, but funding has taken a dip.

The technology world goes through waves of terminology. Last year was largely about building the metaverse, until attention turned to artificial intelligence (AI), which has since occupied the top news spots almost everywhere. To know whether this wave will last or wither away, one needs to look at trusted sources in the domain, such as the report released by Stanford University.

For years now, the Institute for Human-Centered Artificial Intelligence at Stanford has been releasing its AI Index on an annual basis.



With AI occupying center stage for the past few months, the AI Index is a valuable resource to see what the future holds.