
Google fed coding interview questions to ChatGPT and, based on the AI’s answers, determined it would be hired for a level-three engineering position, according to an internal document.

As reported by CNBC, the experiment was done as part of Google’s recent testing of multiple AI chatbots, which it’s considering adding to the site. ChatGPT’s ability to surface a concise, high-fidelity answer to a question could save users time typically spent surfing links on Google to find the same information.

“Amazingly, ChatGPT gets hired at L3 when interviewed for a coding position,” says the document. And while level three is considered an entry-level position on the engineering team at Google, average total compensation for the job is about $183,000.

What will the future of AI programming look like with tools like ChatGPT and GitHub Copilot? Let’s take a look at how machine learning could change the daily lives of developers in the near future.


🔗 Resources.

ChatGPT Demo: https://openai.com/blog/chatgpt

Human astrocytes are larger and more complex than those of infraprimate mammals, suggesting that their role in neural processing has expanded with evolution. To assess the cell-autonomous and species-selective properties of human glia, we engrafted human glial progenitor cells (GPCs) into neonatal immunodeficient mice. Upon maturation, the recipient brains exhibited large numbers and high proportions of both human glial progenitors and astrocytes. The engrafted human glia were gap-junction-coupled to host astroglia, yet retained the size and pleomorphism of hominid astroglia, and propagated Ca2+ signals 3-fold faster than their hosts. Long-term potentiation (LTP) was sharply enhanced in the human glial chimeric mice, as was their learning, as assessed by Barnes maze navigation, object-location memory, and both contextual and tone fear conditioning. Mice allografted with murine GPCs showed no enhancement of either LTP or learning. These findings indicate that human glia differentially enhance both activity-dependent plasticity and learning in mice.



Have you ever used Alexa to help you decide what movie you should watch? Maybe you asked Siri for restaurant recommendations. Artificial intelligence and virtual assistants are constantly being refined, and may soon be making appointments for you, offering medical advice, or trying to sell you a bottle of wine.

Although AI technology still has a long way to go before its social skills match ours, some AI systems have shown impressive language understanding and can complete relatively complex interactive tasks.

“All things are numbers,” avowed Pythagoras. Today, 25 centuries later, algebra and mathematics are everywhere in our lives, whether we see them or not. The Cambrian-like explosion of artificial intelligence (AI) has brought numbers even closer to us all, since technological evolution allows for the parallel processing of vast numbers of operations.

Progressively, operations between scalars (numbers) were parallelized into operations between vectors and, subsequently, matrices. Matrix multiplication is now the most time- and energy-demanding operation in contemporary AI computational systems. A technique called “tiled matrix multiplication” (TMM) helps speed up computation by decomposing matrix operations into smaller tiles that the same system computes in consecutive time slots. But modern electronic AI engines, built from transistors, are approaching their intrinsic limits and can hardly compute at clock frequencies higher than ~2 GHz.
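
To make the tiling idea concrete, here is a minimal sketch in plain NumPy (the function name tiled_matmul and the 64-element tile size are illustrative choices, not details from the article): each block of the output is accumulated from tile-sized slices of the inputs, one sub-product per step, which mirrors the consecutive time slots described above.

```python
import numpy as np

def tiled_matmul(A, B, tile=64):
    """Multiply A (m x k) by B (k x n) one tile-sized sub-product at a time."""
    m, k = A.shape
    k2, n = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((m, n), dtype=np.result_type(A, B))
    # Each (i, j) block of C is built up from smaller tile-by-tile products,
    # processed sequentially -- the software analogue of feeding tiles to a
    # fixed-size compute engine in consecutive time slots.
    for i in range(0, m, tile):
        for j in range(0, n, tile):
            for p in range(0, k, tile):
                C[i:i+tile, j:j+tile] += A[i:i+tile, p:p+tile] @ B[p:p+tile, j:j+tile]
    return C

# Sanity check against NumPy's built-in matrix product.
A = np.random.rand(256, 128)
B = np.random.rand(128, 192)
assert np.allclose(tiled_matmul(A, B), A @ B)
```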

The compelling credentials of light, namely ultrahigh speeds and significant savings in energy and footprint, offer a solution. Recently, a team of photonics researchers from the WinPhos research group, led by Prof. Nikos Pleros of the Aristotle University of Thessaloniki, harnessed the power of light to develop a compact silicon photonic compute engine capable of computing TMMs at a record-high 50 GHz clock frequency.