
Sphere Studios has developed a brand-new type of cinema camera called Big Sky. It features a single 316-megapixel HDR image sensor, which the company says represents a 40x resolution increase over existing 4K cameras, and PetaPixel was given an exclusive look at the technology.


Big Sky cameras are not up for sale (yet), but Sphere Studios is meeting with film companies and filmmakers to find ways to bring the technology to the home-entertainment world. One discussion we had on-site revolved around gimbals mounted on helicopters, airplanes, and automobiles: even "the best" of those systems still experience some jitter and vibration, which is typically corrected with stabilization that forces the footage to be cropped in.

The technology built for Big Sky helps eliminate a large share of this vibration, and even when some remains, the sheer resolution the camera offers leaves plenty of room for post-production stabilization. That alone could be a game changer for Hollywood when capturing aerial and chase-scene footage from vehicles, allowing for more detail than ever before.
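
As a rough sanity check of those numbers (our own back-of-the-envelope arithmetic, not figures from Sphere Studios), the 40x claim and the stabilization headroom work out roughly like this:

```python
# Back-of-the-envelope check of the resolution claim (our arithmetic, not Sphere Studios' figures).
UHD_4K_PIXELS = 3840 * 2160      # ~8.3 MP for a standard UHD/4K frame
BIG_SKY_PIXELS = 316_000_000     # 316 MP sensor as reported

ratio = BIG_SKY_PIXELS / UHD_4K_PIXELS
print(f"Resolution vs 4K: ~{ratio:.0f}x")  # ~38x, in line with the "40x" claim

# Even after cropping 20% on each axis for post-production stabilization,
# the remaining frame still dwarfs a 4K deliverable.
cropped = BIG_SKY_PIXELS * 0.8 * 0.8
print(f"After a 20% crop per axis: ~{cropped / UHD_4K_PIXELS:.0f}x 4K")
```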

The remarkable zero-shot learning capabilities demonstrated by large foundation models (LFMs) like ChatGPT and GPT-4 have sparked a question: can these models autonomously supervise their own behavior, or that of other models, with minimal human intervention? To explore this, a team of Microsoft researchers introduces Orca, a 13-billion-parameter model that learns complex explanation traces and step-by-step thought processes from GPT-4. This approach significantly improves the performance of existing state-of-the-art instruction-tuned models, addressing challenges related to task diversity, query complexity, and data scaling.

The researchers acknowledge that the query and response pairs from GPT-4 can provide valuable guidance for student models. Therefore, they enhance these pairs by adding detailed responses that offer a better understanding of the reasoning process employed by the teachers when generating their responses. By incorporating these explanation traces, Orca equips student models with improved reasoning and comprehension skills, effectively bridging the gap between teachers and students.

The research team utilizes the Flan 2022 Collection to further enhance Orca's learning process. The team samples tasks from this extensive collection to ensure a diverse mix of challenges, then sub-samples them to generate complex prompts, which serve as queries for LFMs. This approach creates a diverse and rich training set that facilitates robust learning for Orca, enabling it to tackle a wide range of tasks effectively.
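
To make the idea concrete, here is a minimal sketch of how one such training record might be assembled: a sampled task query, a system message nudging the teacher toward step-by-step reasoning, and the teacher's explanation-rich response. The function and message wording are illustrative assumptions, not the actual Orca pipeline.

```python
# Illustrative sketch only: pairing a FLAN-style query with a teacher's
# explanation trace for student fine-tuning. Names and prompts are our own
# assumptions, not Microsoft's code.
import random

SYSTEM_MESSAGES = [
    "You are a helpful assistant. Explain your reasoning step by step.",
    "Think through the problem carefully before giving the final answer.",
]

def build_training_example(task_query: str, teacher_response: str) -> dict:
    """Pack a sampled task query and the teacher's step-by-step response
    into one supervised fine-tuning record for the student model."""
    return {
        "system": random.choice(SYSTEM_MESSAGES),  # nudges the teacher toward explanation traces
        "user": task_query,
        "assistant": teacher_response,             # contains the reasoning, not just the final answer
    }

example = build_training_example(
    "If a train travels 60 km in 45 minutes, what is its average speed in km/h?",
    "45 minutes is 0.75 hours, so speed = 60 / 0.75 = 80 km/h. The answer is 80 km/h.",
)
print(example)
```

At scale, millions of such records, drawn from the sub-sampled FLAN tasks, would make up the training set described above.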

Getting a colonoscopy is important to screen for colorectal cancer. But how often you should get a colonoscopy depends on several different factors.

Current guidelines suggest that you get your first colonoscopy at age 45 if you are at average risk for colorectal cancer. If no polyps are found, you won't need another colonoscopy for 10 years. But in certain situations, you may need a colonoscopy more often.

We spoke with gastroenterologist Mazen Alasadi, M.D., to learn more.

In one sense, it is undeniably new. Interactions with ChatGPT can feel unprecedented, as when a tech journalist couldn’t get a chatbot to stop declaring its love for him. In my view, however, the boundary between humans and machines, in terms of the way we interact with one another, is fuzzier than most people would care to admit, and this fuzziness accounts for a good deal of the discourse swirling around ChatGPT.

When I’m asked to check a box to confirm I’m not a robot, I don’t give it a second thought—of course I’m not a robot. On the other hand, when my email client suggests a word or phrase to complete my sentence, or when my phone guesses the next word I’m about to text, I start to doubt myself. Is that what I meant to say? Would it have occurred to me if the application hadn’t suggested it? Am I part robot? These large language models have been trained on massive amounts of “natural” human language. Does this make the robots part human?

Over millennia, humans have observed and been inspired by beautiful displays of light bands dancing across dark night skies. Today, we call these lights the aurora: the aurora borealis in the northern hemisphere, and the aurora australis in the southern hemisphere.

Nowadays, we understand aurorae are caused by charged particles from Earth’s magnetosphere and the solar wind colliding with other particles in Earth’s upper atmosphere. Those collisions excite the atmospheric particles, which then release light as they “relax” back to their unexcited state.

The color of the light corresponds to the release of discrete chunks of energy by the atmospheric particles, and is also an indicator of how much energy was absorbed in the initial collision.
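
The relationship between color and energy follows the standard photon-energy relation E = hc/λ. A quick calculation for two well-known atomic-oxygen auroral lines (textbook wavelengths, not values from any particular study) illustrates it:

```python
# Photon energy of common auroral emission lines via E = h*c / wavelength.
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electronvolt

lines = {
    "oxygen green (557.7 nm)": 557.7e-9,
    "oxygen red (630.0 nm)": 630.0e-9,
}

for name, wavelength in lines.items():
    energy_ev = H * C / wavelength / EV
    print(f"{name}: ~{energy_ev:.2f} eV per photon")

# Shorter wavelengths carry more energy per photon, so the green line traces
# back to a more energetic transition than the deep-red emission.
```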

A hacking group believed to be behind a cybercrime spree in 2021 is once again involved in an active hacking campaign targeting a file transfer tool that could lead to a wave of data breaches, according to US cybersecurity officials.

The hacking group, named “CL0P,” is targeting a file transfer tool called MOVEit that belongs to Progress Software Corp., according to a joint advisory by the Cybersecurity and Infrastructure Security Agency and FBI on June 7.

Officials say CL0P has been attempting to steal data belonging to MOVEit clients since at least May 27. A confirmed victim includes Zellis, a UK-based payroll services provider whose clients include British Airways and the BBC, reports Law360.

If you tuned into Apple’s WWDC conference on June 5, you may have spotted the term ‘spatial computing’ in the company’s keynote.

Keep reading to learn everything you need to know about spatial computing.

Spatial computing is a term for human interaction with a machine in which the machine retains and manipulates referents to real-world objects and spaces, so that digital content is anchored in the physical environment around the user.

Researchers at the University of Chemistry and Technology in Prague have made progress in the field of assistive technology with the development of a novel auditory human–machine interface using black phosphorus–based tactile sensors. The research, led by Prof. Martin Pumera and Dr. Jan Vyskočil, has the potential to revolutionize communication for visually or speech-disabled individuals by providing an intuitive and efficient means of conveying information.

Assistive technology that relies on audio has traditionally been employed by individuals with visual impairments or speech and language difficulties. In this study, the focus was on creating an auditory human–machine interface that uses audio as a platform for communication between disabled users and society. The researchers developed a piezoresistive tactile sensor using a composite of black phosphorus and polyaniline (BP@PANI) through a simple chemical oxidative polymerization process on cotton fabric.

The unique structure and superior electrical properties of black phosphorus, combined with the large surface area of the fabric, enabled the BP@PANI-based tactile sensor to exhibit excellent sensitivity at low pressures, reasonable response time, and excellent cycle stability. To demonstrate a real-world application, a prototype braille-to-audio device was created, incorporating six BP@PANI tactile sensors corresponding to the dots of a braille cell. The device can convert pressed text into audio, aiding visually or speech-disabled individuals in reading and typing, and it offers a promising solution for improving communication and accessibility for this demographic.
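
As a purely illustrative sketch of the braille-to-audio idea (our own simplification; the threshold, dot mapping, and speech step are assumptions, not the published device), the core logic of turning six pressure readings into a character might look like this:

```python
# Hypothetical sketch: map six braille-dot pressure readings to a character.
# Threshold, pin order, and the text-to-speech hand-off are assumptions,
# not details from the researchers' prototype.
BRAILLE_MAP = {
    (1, 0, 0, 0, 0, 0): "a",   # dot 1
    (1, 1, 0, 0, 0, 0): "b",   # dots 1-2
    (1, 0, 0, 1, 0, 0): "c",   # dots 1-4
}

def read_character(sensor_pressures, threshold=0.5):
    """Convert six pressure readings (one per braille dot) into a character."""
    pattern = tuple(1 if p > threshold else 0 for p in sensor_pressures)
    return BRAILLE_MAP.get(pattern, "?")

char = read_character([0.9, 0.8, 0.1, 0.0, 0.0, 0.1])  # dots 1 and 2 pressed
print(char)  # "b" -> would then be handed to a text-to-speech engine
```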

A former executive of Samsung Electronics stole the juggernaut’s confidential semiconductor data to build a copycat chip facility in China, South Korean prosecutors alleged on Monday.

The 65-year-old defendant, who also previously worked for Korean chipmaker SK Hynix, has been arrested. He is accused of violating industrial technology protection laws and stealing trade secrets from 2018 to 2019 to establish a copy of Samsung's semiconductor plant just 1.5 kilometers away from Samsung's own chip factory in Xi'an, China.

The ex-Samsung exec's attempt to build the copycat chip plant allegedly fell through after his backer, purportedly an undisclosed Taiwanese company, canceled an investment of more than $6 billion (approximately 8 trillion won) in the project, prosecutors said. Instead, he received capital from investors in China and Taiwan to produce trial chip products based on Samsung's technology.