A Paris-based startup has created a genetically engineered houseplant that can literally clean the air inside your home. The plant builds on the natural purifying properties that houseplants already offer. So, while it adds some color to whatever room you put it in, it’s also actively keeping the air cleaner, doing the work of dozens of ordinary houseplants, according to the company.

The company, called Neoplants, modified both a pothos plant and its root microbiome to boost the plant’s natural air-cleaning properties considerably. Called Neo P1, the genetically engineered houseplant recently hit the market, and you can purchase it right now.

Plants can offer quite a bit to your home. Not only can they boost your mood and help reduce anxiety, according to researchers, but they can also clean the air thanks to their natural purifying properties. With this genetically engineered houseplant, though, you’re getting more than that basic level of purification. In fact, Neoplants says the Neo P1 is 30 times more effective than the top plants identified in NASA’s air-purification research.

Transplant medicine could take a giant leap forward if donor organs could be kept oxygenated for longer and their decay delayed. A technology called OrganEx, described in Nature by a team at Yale, promises to do just that. The researchers stopped the hearts of pigs and, an hour later, applied OrganEx, then cataloged the return of bodily functions. The new approach far exceeded the ability of existing technology to prolong organ viability.

Pigs have long been a popular animal model of human disease because they are about our size and their hearts and blood vessels are quite similar to ours. They have also had fictional roles in medicine.

In the Twilight Zone episode Eye of the Beholder, Janet Tyler has undergone multiple procedures to replace the “pitiful twisted lump of flesh” that is her face with something more acceptable. At the end, as the bandages are slowly unrolled from yet another failed procedure, we see that she naturally looks like us, considered hideous in her world where most people, including the nurse and doctor, have pig faces. Janet and others like her are sent to live among themselves.

Quantum mechanics, the theory which rules the microworld of atoms and particles, certainly has the X factor.

Unlike many other areas of physics, it is bizarre and counter-intuitive, which makes it dazzling and intriguing.

When the 2022 Nobel prize in physics was awarded to Alain Aspect, John Clauser, and Anton Zeilinger for research shedding light on quantum mechanics, it sparked excitement and discussion.

Artificial intelligence is rapidly transforming all sectors of our society. Whether we realize it or not, every time we do a Google search or ask Siri a question, we’re using AI.

For better or worse, the same is true of the very character of warfare. This is why the Department of Defense, like its counterparts in China and Russia, is investing billions of dollars to develop and integrate AI into defense systems. It’s also why the DoD is now embracing initiatives that envision future technologies, including the next phase of AI: artificial general intelligence.

AGI is the ability of an intelligent agent to understand or learn any intellectual task in the same way humans do. Unlike AI, which relies on ever-expanding datasets to perform more complex tasks, AGI is expected to exhibit the same attributes as the human brain, including common sense, background knowledge, transfer learning, abstraction, and causal reasoning. Of particular interest is the human ability to generalize from scanty or incomplete input.

Large models have improved performance on a wide range of modern computer vision and, in particular, natural language processing problems. However, issuing patches to adjust model behavior after deployment remains a significant challenge in maintaining such models. Because of the distributed nature of a neural network’s representations, when the network produces an undesirable output it is difficult to make a localized update that corrects its behavior for a single input or a small number of inputs. For example, a large language model trained in 2019 might assign a higher probability to “Theresa May” than to “Boris Johnson” when prompted with “Who is the Prime Minister of the United Kingdom?”

An ideal model editing procedure would quickly update the model parameters to increase the relative likelihood of “Boris Johnson” without affecting the model’s output for unrelated inputs. Such a procedure would yield edits with reliability, successfully changing the model’s output on the problematic input (e.g., “Who is the Prime Minister of the United Kingdom?”); locality, leaving the model’s output for unrelated inputs unchanged (e.g., “What sports team does Messi play for?”); and generality, producing the correct output for inputs related to the edited one (e.g., paraphrases of the edited prompt). The simplest way to make such an edit is to fine-tune with a new label on the single example to be corrected. However, fine-tuning on a single sample tends to overfit, even when the distance between the pre- and post-fine-tuning parameters is constrained.
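As a concrete illustration of that naive baseline (not the paper’s method), here is a minimal sketch assuming PyTorch and a Hugging Face GPT-2 checkpoint: it fine-tunes on the single corrected example and then runs rough loss-based checks of reliability, generality (a paraphrase), and locality (an unrelated prompt). The prompts, answers, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch of single-example fine-tuning as a model edit (illustrative only).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")

def loss_of(prompt: str, target: str) -> torch.Tensor:
    # Negative log-likelihood of the full "prompt + target" string; a careful
    # implementation would mask out the prompt tokens from the labels.
    ids = tokenizer(prompt + " " + target, return_tensors="pt").input_ids
    return model(ids, labels=ids).loss

edit_prompt, new_answer = "Who is the Prime Minister of the United Kingdom?", "Boris Johnson"
unrelated_prompt, unrelated_answer = "What sports team does Messi play for?", "Paris Saint-Germain"

# Record the pre-edit loss on an unrelated input so we can measure locality later.
with torch.no_grad():
    unrelated_before = loss_of(unrelated_prompt, unrelated_answer).item()

# Naive edit: a few gradient steps on the single corrected example.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)
for _ in range(10):
    optimizer.zero_grad()
    loss_of(edit_prompt, new_answer).backward()
    optimizer.step()

with torch.no_grad():
    reliability = loss_of(edit_prompt, new_answer).item()                        # should drop
    generality = loss_of("Who is the UK's Prime Minister?", new_answer).item()   # paraphrase
    locality_drift = abs(loss_of(unrelated_prompt, unrelated_answer).item() - unrelated_before)

print(f"edit loss={reliability:.3f}  paraphrase loss={generality:.3f}  unrelated drift={locality_drift:.3f}")
```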

This overfitting causes both locality and generality failures. Fine-tuning on the edit example while continuing to train on the original training set improves locality, but experiments show it still lacks generality. Furthermore, it requires continuous access to the full training set at test time and is more computationally demanding. Recent research has therefore looked into methods for learning to make model edits. Researchers present a bi-level meta-learning objective that finds a model initialization for which standard fine-tuning on a single edit example yields useful modifications.
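To make the bi-level idea more concrete, here is a rough first-order sketch under stated assumptions: a toy classifier stands in for the language model, the inner loop takes a single gradient step on one edit example, and the outer objective rewards edit success while penalizing drift on an unrelated batch. This is not the authors’ implementation; the data, locality penalty, and hyperparameters are placeholders, and a full bi-level objective would differentiate through the inner step (second-order) rather than treating the inner gradient as a constant.

```python
# First-order sketch of a bi-level "learn an editable initialization" objective.
# Requires PyTorch >= 2.0 for torch.func.functional_call.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 4))
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
inner_lr, locality_weight = 1e-2, 1.0

for step in range(1000):
    # Sample a toy "edit" example and an unrelated batch (placeholders for real data).
    x_edit, y_edit = torch.randn(1, 8), torch.randint(0, 4, (1,))
    x_loc = torch.randn(16, 8)
    with torch.no_grad():
        base_logits = model(x_loc)  # behavior we want preserved after the edit

    # Inner loop: one gradient step on the single edit example (first-order, inner
    # gradient treated as a constant).
    grads = torch.autograd.grad(F.cross_entropy(model(x_edit), y_edit),
                                model.parameters(), create_graph=False)
    edited = [p - inner_lr * g for p, g in zip(model.parameters(), grads)]

    # Outer loss: edit success after the inner step plus a locality penalty, both
    # evaluated with the edited parameters via a functional forward pass.
    names = [n for n, _ in model.named_parameters()]
    edited_params = dict(zip(names, edited))
    out_edit = torch.func.functional_call(model, edited_params, (x_edit,))
    out_loc = torch.func.functional_call(model, edited_params, (x_loc,))
    outer = F.cross_entropy(out_edit, y_edit) + \
            locality_weight * F.kl_div(F.log_softmax(out_loc, -1),
                                       F.softmax(base_logits, -1),
                                       reduction="batchmean")
    meta_opt.zero_grad()
    outer.backward()
    meta_opt.step()
```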

The company’s technology essentially lets users take a pre-existing static 3D model and bring it to life. So, if you’re building a 3D forest in a virtual world and you’ve got some 3D models of what you want the animals to look like, Anything World’s machine learning-powered tech will put a virtual skeleton inside each animal, allowing it to move in a lifelike way.

The round comes amid waning interest in metaverse investments this year, according to data from Dealroom. Investment into startups tagged under “metaverse” on its platform dropped from a high of $2.8bn in Q2 to $446m in Q3, as low user interest affects previously hyped platforms and Mark Zuckerberg’s Meta lays off 11k employees.

Anything World cofounder Sebastian Hofer says that, while many investors have been seduced by the metaverse hype in the last year, his company is building a tool that’s also useful to clients who have no interest in jumping on the Zuckerberg bandwagon.

Our Earth is gorgeous, with majestic mountains, breathtaking seascapes, and tranquil forests. Picture yourself taking in this splendor as a bird might, flying past intricately detailed, three-dimensional landscapes. Can computers learn to recreate this kind of visual experience? Current techniques that synthesize new views from photos typically allow only a small amount of camera motion; most earlier research can only extrapolate scene content within a constrained range of views corresponding to a subtle head movement.

In recent research, a team from Google Research, Cornell Tech, and UC Berkeley presented a technique for learning to create unrestricted flythrough videos of natural scenes starting from a single view. This capability is learned from a collection of single photographs, without the need for camera poses or even multiple views of each scene. At test time, the method can take a single image and construct long camera trajectories of hundreds of new views with realistic and varied content, despite never having seen a video during training. It contrasts with the most recent state-of-the-art supervised view-generation techniques, which require posed multi-view videos, and achieves better performance and synthesis quality.

The fundamental concept is that the model learns to generate flythroughs step by step. Using single-image depth prediction, it first computes a depth map from the starting view. It then uses that depth map to render the image into a new camera viewpoint and, from the rendered result, produces a refined image and depth map for that viewpoint, which serve as the input for the next step.
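Schematically, that render-refine-repeat loop might look like the sketch below. This is not the authors’ code: DepthNet, RefineNet, and warp_to_view are hypothetical placeholders standing in for the learned single-image depth predictor, the learned refinement network, and a differentiable renderer.

```python
# Schematic sketch of iterative flythrough generation from a single image.
import torch
import torch.nn as nn

class DepthNet(nn.Module):
    """Placeholder single-image depth predictor (stand-in, not a real model)."""
    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return image.mean(dim=1, keepdim=True)  # fake 1-channel "depth"

class RefineNet(nn.Module):
    """Placeholder that would inpaint holes and refine a warped image and depth map."""
    def forward(self, image, depth):
        return image, depth  # stand-in: identity

def warp_to_view(image, depth, camera_motion):
    # Placeholder for rendering the current view into the next camera pose using
    # the predicted depth; in practice disocclusions would appear as holes.
    return image, depth

def generate_flythrough(first_frame: torch.Tensor, camera_path, steps: int = 100):
    depth_net, refine_net = DepthNet(), RefineNet()
    frames, image = [first_frame], first_frame
    depth = depth_net(image)                                          # depth from the starting view
    for pose in camera_path[:steps]:
        warped_img, warped_depth = warp_to_view(image, depth, pose)   # render to the new viewpoint
        image, depth = refine_net(warped_img, warped_depth)           # refined image + depth for next step
        frames.append(image)
    return frames

frames = generate_flythrough(torch.rand(1, 3, 128, 128), camera_path=[None] * 10)
```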