
Apple, like a number of companies right now, may be grappling with what role the newest advances in AI are playing, and should play, in its business. But one thing Apple is confident about: it wants to bring more generative AI talent on board.

The Cupertino company has posted at least a dozen job ads on its career page seeking experts in generative AI. Specifically, it’s looking for machine learning specialists “passionate about building extraordinary autonomous systems” in the field. The job ads (some of which seem to cover the same role, or call for multiple applicants) first started appearing on April 27, with the most recent published earlier this week.

The job postings are coming amid some mixed signals from the company around generative AI. During its Q2 earnings call earlier this month, CEO Tim Cook dodged giving specific answers to questions about what the company is doing in the area — but also didn’t dismiss it. While generative AI was “very interesting,” he said, Apple would be “deliberate and thoughtful” in its approach. Then yesterday, the WSJ reported that the company had started restricting use of OpenAI’s ChatGPT and other external generative AI tools for some employees over concerns of proprietary data leaking out through the platforms.

Self-driving vehicle companies Waymo and Cruise are on the cusp of securing final approval to charge fares for fully autonomous robotaxi rides throughout the city of San Francisco at all hours of the day or night.

Amid the city’s mounting resistance to the presence of AVs, the California Public Utilities Commission (CPUC) published two draft resolutions late last week that would grant Cruise and Waymo the ability to extend the hours of operation and service areas of their now-limited robotaxi services.

The drafts are slated for a hearing on June 29, and there’s still room for public comments, which are due May 31. Based on the CPUC’s draft language, many of the protests raised by the city of San Francisco have already been rejected.

Perhaps the most intriguing evidence of consciousness in early infancy comes from a study conducted by Julia Moser at the University of Tübingen. Moser and her colleagues used second-order (“global”) auditory oddballs to probe for consciousness. Consider a sequence of tones clustered into four groups of four tones, where each tone is either high-pitched or low-pitched. In the global oddball paradigm, the final tone in each of the first three groups differs from the three tones that precede it (for example, if they are low then it will be high), but the final member of the last group is identical to the preceding three tones (for example, they might all be low tones). In this scenario, the final tone is not an oddball (that is, an outlier) relative to the preceding three tones, but it is an oddball relative to the entire sequence, because anyone who hears the three earlier groups of tones will expect the final member of this group to be an oddball.
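To make that structure concrete, here is a minimal sketch in Python of how such a sequence might be laid out. The group sizes and tone labels follow the example above; they are illustrative assumptions, not the actual stimulus parameters of Moser’s study.

    # Sketch of the "global oddball" tone structure described above.
    # Group sizes and tone labels follow the article's example; the real
    # experiment's stimulus parameters may differ.
    LOW, HIGH = "low", "high"

    def make_global_oddball_sequence():
        groups = []
        # First three groups: three low tones followed by a high tone,
        # so the last tone is a *local* oddball within each group.
        for _ in range(3):
            groups.append([LOW, LOW, LOW, HIGH])
        # Last group: four identical low tones. Its final tone is locally
        # unremarkable but violates the pattern set by the earlier groups,
        # making it a *global* (second-order) oddball.
        groups.append([LOW, LOW, LOW, LOW])
        return groups

    for i, group in enumerate(make_global_oddball_sequence(), start=1):
        print(f"group {i}: {' '.join(group)}")

The design matters because a response to that final, locally unremarkable tone can only reflect an expectation built up across the whole sequence, which is why it is treated as a second-order, or global, effect.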

Earlier research has suggested that the brain produces a distinctive response to second-order oddballs, which can be roughly thought of as a neural marker of surprise. Further, there is some evidence that this response is produced only when an individual is conscious. Using fetal magnetoencephalography (MEG), Moser and her team discovered that a version of this response could be found not only in newborns but also in 35-week-old fetuses. Again, this result does not provide proof of perceptual awareness in early infancy (let alone in utero), but it is yet another illustration of how neuroscience is beginning to pull back the curtain on infant experience.

Language impairment is comorbid in most children with Autism Spectrum Disorder (ASD), but its neural basis is poorly understood. Using structural magnetic resonance imaging (MRI), the present study provides a whole-brain comparison of both volume- and surface-based characteristics between groups of children with and without ASD and investigates the relationships between these characteristics in language-related areas and the language abilities of children with ASD measured with standardized tools. A total of 36 school-aged children participated in the study: 18 children with ASD and 18 age- and sex-matched typically developing controls. The results revealed that multiple regions differed between the groups in gray matter volume, gray matter thickness, gyrification, and cortical complexity (fractal dimension).

This year marks the 100th anniversary of Edwin Hubble’s observation of a pulsating star called a Cepheid variable in the Andromeda nebula. The star was surprisingly faint, implying that it was very far away and that Andromeda must be a separate galaxy—the first evidence that our Milky Way is not alone. Hubble went on to uncover other galaxies and found that they were all moving away from us—a cosmic expansion characterized by the so-called Hubble constant. Astronomers have now used another star, an exploding supernova whose light was bent as it traveled to Earth, to probe the expansion [1]. By determining a time delay between different images of the supernova, the team has recovered a value of the Hubble constant that is lower than estimates based on Cepheids and on other distance markers. However, the error bars are large for the new measurement, so astronomers will need more observations to make lensed supernovae a precision speed check on cosmic expansion.

A lensed supernova is created by the light-bending power of gravity. When a supernova is behind a galaxy, relative to Earth, the light from the exploding star gets curved around the galaxy by the galaxy’s gravity. This action both distorts the star’s image and magnifies it, just like a magnifying glass. Sometimes this lensing can produce multiple images of the star, with each appearing at a different point in the sky. The light from such a set of images travels to Earth along different paths, and so arrives at Earth at different times. In 1964, the astronomer Sjur Refsdal proposed using the time delays to measure the Hubble constant. But detecting a multi-imaged supernova has proved tricky.
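The quantitative link, which the article does not spell out, comes from the standard time-delay relation of strong-lensing cosmography. A sketch in the usual notation:

    % Time delay between two lensed images of the same source
    \Delta t \;=\; \frac{D_{\Delta t}}{c}\,\Delta\phi ,
    \qquad
    D_{\Delta t} \;\equiv\; (1 + z_{\mathrm{l}})\,
      \frac{D_{\mathrm{l}}\, D_{\mathrm{s}}}{D_{\mathrm{ls}}}
      \;\propto\; \frac{1}{H_0}

Here Δφ is the difference in the lens’s gravitational (Fermat) potential at the two image positions, z_l is the lens redshift, and D_l, D_s and D_ls are angular-diameter distances to the lens, to the source, and between lens and source. Because each of those distances scales as 1/H0, measuring Δt and modeling the lens’s mass distribution yields H0 ∝ Δφ/(c Δt): for a given lens model, a longer delay implies a lower Hubble constant.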

Luck finally came 50 years after Refsdal’s proposal. In a Hubble space telescope image from December 2014, Patrick Kelly, then at the University of California, Berkeley, and now at the University of Minnesota, spotted four lensed images of the same supernova [2]. The team was unable to determine the exact time delays between these images, but from previous observations of this part of the sky, Kelly and his colleagues predicted that a fifth image was on the way. This expectation was based on the spotted supernova sitting behind a galaxy cluster, rather than a single galaxy, so the supernova light had multiple paths to reach Earth. The astronomers kept a steady watch, and sure enough the fifth image appeared in December 2015, roughly 376 days after the other four images. This long time delay, which was caused by the cluster’s large mass density, was a boon to the cosmic expansion measurement.

Optical imaging and metrology techniques are key tools for research rooted in biology, medicine and nanotechnology. While these techniques have recently become increasingly advanced, the resolutions they achieve are still significantly lower than those attained by methods using focused beams of electrons, such as atomic-scale transmission electron microscopy and cryo-electron tomography.

Researchers at the University of Southampton and Nanyang Technological University have recently introduced a non-invasive optical approach with atomic-scale resolution. Their proposed approach, outlined in Nature Materials, could open exciting new possibilities for research in a variety of fields, allowing scientists to characterize systems or phenomena at the scale of a fraction of a billionth of a meter.

“Since the nineteenth century, improving the spatial resolution of microscopy has been a major trend in science, one marked by at least seven Nobel Prizes,” Nicolay I. Zheludev, one of the researchers who carried out the study, told Phys.org. “Our dream was to develop technology that can detect atomic-scale events with light, and we have been working on this for the last three years.”