
Improved AI model boosts GitHub Copilot’s code generation capabilities

GitHub Copilot is getting an upgrade with an improved AI model and enhanced contextual filtering, resulting in faster and more tailored code suggestions for developers.

The new AI model delivers a 13% improvement in latency, while enhanced contextual filtering delivers a 6% relative improvement in code acceptance. These improvements are coming to GitHub Copilot for Individuals and GitHub Copilot for Business.

According to GitHub, the new model was developed in collaboration with OpenAI and Azure AI, and the 13% latency improvement means that GitHub Copilot generates code suggestions faster than ever before, promising a significant boost in overall productivity.

Unearthing Our Past, Predicting Our Future: Scientists Discover the Genes That Shape Our Bones

This groundbreaking study, which was published as the cover article in the journal Science, not only sheds light on our evolutionary history but also paves the way for a future where physicians could more accurately assess a patient’s likelihood of suffering from ailments like back pain or arthritis later in life.

“Our research is a powerful demonstration of the impact of AI in medicine, particularly when it comes to analyzing and quantifying imaging data, as well as integrating this information with health records and genetics rapidly and at large scale,” said Vagheesh Narasimhan, an assistant professor of integrative biology and of statistics and data science, who led the multidisciplinary team of researchers that produced the genetic map of skeletal proportions.

Has JWST shown the Universe is TWICE as old as we think?!

Go to https://brilliant.org/drbecky to get a 30-day free trial; the first 200 people will get 20% off their annual subscription. A new research study claims that, to explain the massive galaxies found at huge distances in James Webb Space Telescope images, the Universe must be older than we think: 26.7 billion years, rather than 13.8 billion. In this video I dive into that study, looking at the model used to arrive at that claim (a combination of the expansion of the universe and “tired light” ideas of redshift), how this impacts our best model of the Universe and the so-called “Crisis in Cosmology”, and why I’m not convinced yet!

#astronomy #JWST #cosmology

My previous YouTube video on how JWST’s massive galaxies are no longer “impossible” — https://youtu.be/W4KH1Jw6HBI

Gupta et al. (2023; is the universe 26.7 billion years old?) — https://academic.oup.com/mnras/advance-article/doi/10.1093/m…32/7221343
Labbé et al. (2023; over-massive galaxies spotted in JWST data) — https://arxiv.org/pdf/2207.12446.pdf.
Arrabal Haro et al. (2023; z~16 candidate galaxy turns out to be z=4.9) — https://arxiv.org/pdf/2303.15431.pdf.
Zwicky (1929; “tired light” hypothesis raised for first time) — https://www.pnas.org/doi/epdf/10.1073/pnas.15.10.

JWST observing schedules (with public access!): https://www.stsci.edu/jwst/science-execution/observing-schedules.
JWST data archive: https://mast.stsci.edu/portal/Mashup/Clients/Mast/Portal.html.
Twitter bot for JWST current observations: https://twitter.com/JWSTObservation.
The successful proposals in Cycle 2 (click on the proposal number and then “public PDF” to see details): https://www.stsci.edu/jwst/science-execution/approved-progra…cycle-2-go.

00:00 — Introduction: JWST’s massive galaxy problem.

A universal null-distribution for topological data analysis

One of the key challenges in TDA is to distinguish between “signal” (meaningful structures underlying the data) and “noise” (features that arise from local randomness and inaccuracies within the data) [15,16,17]. The most prominent solution developed in TDA to address this issue is persistent homology. Briefly, it identifies structures such as holes and cavities (“air pockets”) formed by the data, and records the scales at which they are created and terminated (birth and death, respectively). The common practice in TDA has been to use this birth-death information to assess the statistical significance of topological features [18,19,20,21]. However, research so far has yet to provide an approach that is generic, robust, and theoretically justified. A parallel line of research has been the theoretical probabilistic analysis of persistent homology generated by random data, as a means to establish a null-distribution. While this direction has been fruitful [22,23,24,25], its use in practice has been limited. The main gap between theory and practice is that these studies indicate that the distribution of noise in persistent homology: (a) does not have a simple closed-form description, and (b) strongly depends on the model generating the point-cloud.
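The birth-death bookkeeping of persistent homology can be illustrated in the simplest case, dimension zero (connected components), where every point is born at scale 0 and the death values are exactly the edge lengths of a minimum spanning tree. The sketch below, in plain Python with the standard library, is an illustrative toy, not the machinery used in the paper:

```python
import math
from itertools import combinations

def h0_persistence(points):
    """Compute 0-dimensional persistence pairs (birth, death) for a point
    cloud: every point is born at scale 0, and a component 'dies' when it
    merges into another as the scale grows. The death values equal the
    edge lengths of the minimum spanning tree (Kruskal + union-find)."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    edges = sorted(
        (math.dist(points[i], points[j]), i, j)
        for i, j in combinations(range(n), 2)
    )
    pairs = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            pairs.append((0.0, d))  # birth at scale 0, death at merge scale
    return pairs

# Two well-separated clusters: four short-lived merges plus one
# long-lived component whose death is the inter-cluster distance.
pts = [(0, 0), (0.1, 0), (0, 0.1), (5, 5), (5.1, 5), (5, 5.1)]
pairs = h0_persistence(pts)
deaths = sorted(d for _, d in pairs)
print(deaths)
```

The long bar (the last death, roughly the gap between the two clusters) is the kind of feature one would call “signal”; the four tiny bars are the “noise” the passage above refers to.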

Our main goal in this paper is to refute the last premise, and to make the case that the distribution of noise in persistent homology of random point-clouds is in fact universal. Specifically, we claim that the limiting distribution of persistence values (measured using the death/birth ratio) is independent of the model generating the point-cloud. This result is loosely analogous to the central limit theorem, where sums of many different types of random variables always converge to the normal distribution. The emergence of such universality for persistence diagrams is highly surprising.

We support our universality statements by an extensive body of experiments, including point-clouds generated by different geometries, topologies, and probability distributions. These include simulated data as well as data from real-world applications (image processing, signal processing, and natural language processing). Our main goal here is to introduce the unexpected behavior of statistical universality in persistence diagrams, in order to initiate a shift of paradigm in stochastic topology that will lead to the development of a new theory. Developing this new theory, and proving the conjectures made here, is anticipated to be an exciting yet challenging long journey, and is outside the scope of this paper. Based on our universality conjectures, we develop a powerful hypothesis testing framework for persistence diagrams, allowing us to compute numerical significance measures for individual features using very few assumptions on the underlying model.
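As a hedged illustration of scoring features by the death/birth ratio: the snippet below computes pi = death/birth for each persistence pair (in dimensions where birth > 0) and flags outliers against a simple median/MAD rule. That empirical stand-in null is my own placeholder for illustration only, not the universal null-distribution the paper proposes:

```python
import math

def significance_scores(pairs):
    """Toy death/birth scoring for persistence pairs with birth > 0.
    Each feature gets pi = death / birth; features whose log(pi) lies
    far above the bulk are flagged using a median/MAD rule -- a crude
    empirical stand-in, NOT the paper's universal null-distribution."""
    logs = [math.log(d / b) for b, d in pairs]
    med = sorted(logs)[len(logs) // 2]
    mad = sorted(abs(x - med) for x in logs)[len(logs) // 2]
    return [(b, d, (x - med) / (mad or 1.0))
            for (b, d), x in zip(pairs, logs)]

# Mostly near-diagonal "noise" pairs plus one long-lived feature.
diagram = [(0.2, 0.25), (0.3, 0.33), (0.5, 0.6), (0.4, 0.44),
           (0.25, 0.3), (0.1, 0.9)]
scored = significance_scores(diagram)
top = max(scored, key=lambda t: t[2])
print(top)  # the (0.1, 0.9) feature stands out
```

Using the ratio death/birth rather than the difference death - birth is what makes the statistic scale-free, which is one reason a model-independent limiting distribution is plausible in the first place.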

Stability AI releases its latest image-generating model, Stable Diffusion XL 1.0

AI startup Stability AI continues to refine its generative AI models in the face of increasing competition — and ethical challenges.

Today, Stability AI announced the launch of Stable Diffusion XL 1.0, a text-to-image model that the company describes as its “most advanced” release to date. Available in open source on GitHub in addition to Stability’s API and consumer apps, ClipDrop and DreamStudio, Stable Diffusion XL 1.0 delivers “more vibrant” and “accurate” colors and better contrast, shadows and lighting compared to its predecessor, Stability claims.

In an interview with TechCrunch, Joe Penna, Stability AI’s head of applied machine learning, noted that Stable Diffusion XL 1.0, which contains 3.5 billion parameters, can yield full 1-megapixel resolution images “in seconds” in multiple aspect ratios. “Parameters” are the parts of a model learned from training data and essentially define the skill of the model on a problem, in this case generating images.

CEO Fires 90 Percent of Support Staff, Saying AI Outperforms Them

Suumit Shah, the 31-year-old CEO of Dukaan, an e-commerce platform based in India, is being torn to shreds online for firing 90 percent of the company’s customer support staff after arguing that an AI chatbot had outperformed them.

It was an unusually callous announcement that clearly didn’t sit well with plenty of netizens, as Insider reports.

“We had to layoff [sic] 90 percent of our support team because of this AI chatbot,” he tweeted. “Tough? Yes. Necessary? Absolutely.”
