artificial intelligence – Lifeboat News: The Blog
https://lifeboat.com/blog
Safeguarding Humanity

Google opens its most powerful AI models to everyone, the next stage in its virtual agent push
https://lifeboat.com/blog/2025/02/google-opens-its-most-powerful-ai-models-to-everyone-the-next-stage-in-its-virtual-agent-push
Thu, 06 Feb 2025 07:04:50 +0000

In December, the company gave developers and trusted testers access to the models and wrapped some features into Google products, but this is a “general release,” according to Google.

The suite of models includes 2.0 Flash, billed as a “workhorse model, optimal for high-volume, high-frequency tasks at scale”; 2.0 Pro Experimental, aimed at coding performance; and 2.0 Flash-Lite, which the company calls its “most cost-efficient model yet.”

Gemini Flash costs developers 10 cents per million tokens for text, image, and video inputs, while Flash-Lite, its more cost-effective version, costs 0.75 cents per million tokens for the same inputs. A token is an individual unit of data that the model processes.
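As a quick sanity check on those numbers, here is a minimal sketch of the arithmetic. The per-million-token prices are taken from the paragraph above; the function name and the example token count are illustrative, not part of any Google API.

```python
# Estimate Gemini input costs from per-million-token prices (illustrative sketch).
PRICE_PER_MILLION_TOKENS_USD = {
    "flash": 0.10,        # 10 cents per million input tokens
    "flash-lite": 0.0075, # 0.75 cents per million input tokens
}

def estimate_input_cost(model: str, tokens: int) -> float:
    """Return the estimated input cost in USD for a given token count."""
    return tokens / 1_000_000 * PRICE_PER_MILLION_TOKENS_USD[model]

# Example: a 50,000-token prompt costs half a cent on Flash,
# and less than a twentieth of a cent on Flash-Lite.
print(f"Flash:      ${estimate_input_cost('flash', 50_000):.4f}")
print(f"Flash-Lite: ${estimate_input_cost('flash-lite', 50_000):.4f}")
```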

Figure drops OpenAI in favor of in-house models
https://lifeboat.com/blog/2025/02/figure-drops-openai-in-favor-of-in-house-models
Thu, 06 Feb 2025 07:02:56 +0000

Figure AI, a robotics company working to bring a general-purpose humanoid robot into commercial and residential use, announced Tuesday on X that it is exiting its deal with OpenAI. The Bay Area-based outfit has instead opted to focus on in-house AI owing to a “major breakthrough.” In conversation with TechCrunch afterward, founder and CEO Brett Adcock was tight-lipped about specifics, but he promised to deliver “something no one has ever seen on a humanoid” in the next 30 days.

“We found that to solve embodied AI at scale in the real world, you have to vertically integrate robot AI,” Adcock said. “We can’t outsource AI for the same reason we can’t outsource our hardware.”

Google removes pledge to not use AI for weapons, surveillance
https://lifeboat.com/blog/2025/02/google-removes-pledge-to-not-use-ai-for-weapons-surveillance
Thu, 06 Feb 2025 07:02:27 +0000

The company updated its ‘Responsible AI’ principles, which no longer include a pledge not to use AI for weapons or surveillance.

The Epic History of Large Language Models (LLMs)
https://lifeboat.com/blog/2024/12/the-epic-history-of-large-language-models-llms
Wed, 11 Dec 2024 01:42:39 +0000

Initially, a variant of LSTM known as AWD-LSTM was pre-trained (unsupervised pre-training) on a language-modeling task using Wikipedia articles. In the next step, the output layer was replaced with a classifier head and the model was fine-tuned on various datasets such as IMDB and Yelp reviews. When the model was tested on unseen data, state-of-the-art results were obtained. The paper further claimed that fine-tuning this pre-trained model (transfer learning) on only 100 labeled examples could match the results of a model trained from scratch on 10,000 examples. The one thing to keep in mind is that they did not use a transformer in their architecture. This was because the two concepts (transformers and transfer learning) were researched in parallel, so researchers on either side had no idea what the other was doing. The Transformer paper came out in 2017, and the ULMFiT paper (transfer learning) in early 2018.
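A minimal PyTorch sketch of the two-stage recipe described above. This is not the ULMFiT authors’ actual code (they used fastai’s AWD-LSTM with several additional tricks); the layer sizes and class names here are illustrative.

```python
import torch
import torch.nn as nn

VOCAB_SIZE, EMBED_DIM, HIDDEN_DIM, NUM_CLASSES = 10_000, 128, 256, 2  # illustrative sizes

class LSTMLanguageModel(nn.Module):
    """Stage 1: an LSTM pre-trained to predict the next token (unsupervised)."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
        self.lstm = nn.LSTM(EMBED_DIM, HIDDEN_DIM, batch_first=True)
        self.lm_head = nn.Linear(HIDDEN_DIM, VOCAB_SIZE)  # predicts the next token

    def forward(self, tokens):
        hidden, _ = self.lstm(self.embed(tokens))
        return self.lm_head(hidden)  # (batch, seq_len, vocab) next-token logits

class LSTMClassifier(nn.Module):
    """Stage 2: reuse the pre-trained encoder, swap the LM head for a classifier."""
    def __init__(self, pretrained: LSTMLanguageModel):
        super().__init__()
        self.embed, self.lstm = pretrained.embed, pretrained.lstm  # transferred weights
        self.classifier = nn.Linear(HIDDEN_DIM, NUM_CLASSES)       # new, randomly initialized

    def forward(self, tokens):
        hidden, _ = self.lstm(self.embed(tokens))
        return self.classifier(hidden[:, -1, :])  # classify from the last hidden state

lm = LSTMLanguageModel()
# ... pre-train `lm` with cross-entropy on next-token prediction over Wikipedia text ...
clf = LSTMClassifier(lm)
# ... fine-tune `clf` on a small labeled dataset (e.g., IMDB sentiment) ...
print(clf(torch.randint(0, VOCAB_SIZE, (4, 32))).shape)  # torch.Size([4, 2])
```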

Architecture-wise, we now had a state-of-the-art design, the Transformer; training-wise, we had the beautiful and elegant concept of transfer learning. LLMs were the outcome of combining these two ideas.
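To make that combination concrete, here is a minimal sketch using the Hugging Face `transformers` library (not mentioned in the post, and a later development): a transformer pre-trained as a language model on large unlabeled corpora is loaded with a fresh classification head, ready for fine-tuning on a small labeled dataset.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# A transformer encoder pre-trained on unlabeled text (transfer learning),
# with a new, randomly initialized 2-class classification head attached.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# The pre-trained encoder already "understands" language; only the head
# (and optionally the encoder itself) gets fine-tuned on the downstream task.
inputs = tokenizer("A beautiful and elegant idea.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, 2) class scores
print(logits.shape)
```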
