Blog

Aug 16, 2023

Math Proof Draws New Boundaries Around Black Hole Formation

Posted by in categories: cosmology, mathematics

For a half century, mathematicians have tried to define the exact circumstances under which a black hole is destined to exist. A new proof shows how a cube can help answer the question.

Aug 16, 2023

The humanoid robot that can pilot an airplane better than a human

Posted by in categories: employment, robotics/AI, transportation

The robot’s memory is so large that it can memorise all Jeppesen navigation charts, a task that is impossible for human pilots.

Both artificial intelligence (AI) and robotics have made significant strides in recent years, meaning most human jobs could soon be overtaken by technology — on the ground and even in the skies above us.

A team of engineers and researchers from the Korea Advanced Institute of Science & Technology (KAIST) is currently developing a humanoid robot that can fly aircraft without needing to modify the cockpit.

Aug 16, 2023

Emergent entangled informational universe

Posted by in category: futurism

Aug 16, 2023

Chinese Scientists Develop a High-Performance Ultralong-Life Aqueous Zinc-Ion Battery

Posted by in categories: innovation, materials

A research team has developed an advanced aqueous zinc-ion battery with an enhanced cycle lifespan using a weak magnetic field and a new VS2 material. The breakthrough addresses the challenges of zinc dendrite growth and cathode material limitations. Credit: Mao Yunjie.

A research team at the Hefei Institutes of Physical Science (HFIPS) of the Chinese Academy of Sciences (CAS), led by Prof. Zhao Bangchuan, has developed a high-performance aqueous zinc-ion battery with an ultralong cycle lifespan in a weak magnetic field.

The findings were recently published in the journal Materials Horizons.

Aug 16, 2023

Mimicking the Mind: Quantum Material Exhibits Brain-Like “Non-Local” Behavior

Posted by in categories: information science, mathematics, quantum physics, robotics/AI

UC San Diego’s Q-MEEN-C is developing brain-like computers by mimicking neurons and synapses in quantum materials. Recent discoveries of non-local interactions represent a critical step towards more efficient AI hardware that could revolutionize artificial intelligence technology.

We often believe that computers are more efficient than humans. After all, computers can solve complex math equations in an instant and recall names that we might forget. However, human brains can process intricate layers of information rapidly, accurately, and with almost no energy input. Recognizing a face after seeing it only once or distinguishing a mountain from an ocean are examples of such tasks. These seemingly simple human functions require considerable processing and energy from computers, and even then, the results may vary in accuracy.

Aug 16, 2023

Why downsizing large language models is the future of generative AI

Posted by in categories: business, economics, robotics/AI

Smaller language models can be based on a billion parameters or less—still pretty large, but much smaller than foundational LLMs like ChatGPT and Bard. They are pre-trained to understand vocabulary and human speech, so the incremental cost to customize them using corporate and industry-specific data is vastly lower. There are several options for these pre-trained LLMs that can be customized internally, including AI21 and Reka, as well as open source LLMs like Alpaca and Vicuna.

Smaller language models aren’t just more cost-efficient; they’re often far more accurate, because instead of training them on all publicly available data—the good and the bad—they are trained and optimized on carefully vetted data that addresses the exact use cases a business cares about.

That doesn’t mean they’re limited to internal corporate data. Smaller language models can incorporate third-party data about the economy, commodities pricing, the weather, or whatever data sets are needed, and combine them with their proprietary data sets. These data sources are widely available from data service providers who ensure the information is current, accurate, and clean.
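As a rough illustration of what that customization step can look like, here is a minimal sketch of domain-adapting a small open model with low-rank adapters (LoRA) via the Hugging Face transformers, peft, and datasets libraries. The base model, the corporate_docs.jsonl file, and the hyperparameters are illustrative assumptions, not anything prescribed above.

```python
# Minimal sketch: adapting a small pre-trained language model to a
# domain-specific corpus with LoRA, so only a tiny fraction of weights
# is trained. Model name, data file, and hyperparameters are assumptions.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "facebook/opt-350m"  # stand-in for any sub-billion-parameter model
tokenizer = AutoTokenizer.from_pretrained(base)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(base)
# Wrap the base model with low-rank adapters; the original weights stay frozen.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))

# "corporate_docs.jsonl" stands in for the vetted, business-specific corpus,
# assumed to hold one JSON object per line with a "text" field.
data = load_dataset("json", data_files="corporate_docs.jsonl")["train"]
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
                remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain-llm",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Because only the adapter weights are updated, the incremental compute and data needed to specialize the model stay small, which is the cost argument made above.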

Aug 16, 2023

Transparent Holographic video glass wall by Glimm

Posted by in categories: augmented reality, computing, holograms

Transparent Holographic video glass wall with 4k resolution.
Glimm has built a transparent video wall, also called a holographic video wall, for one of its clients, with holographic content and videos for an indoor location.
The video wall consists of eight 55-inch TOLED display panels combined into a single surface, with the transformers and graphics cards hidden in a slim aluminium frame.
The resolution is 4K, and the display itself is glass within glass.
How the technology works:
TOLED stands for Transparent Organic Light-Emitting Diode. It is a display technology that combines the benefits of both OLED (Organic Light-Emitting Diode) and transparent displays.
In TOLED, each pixel of the display consists of a thin layer of organic materials that emit light when an electric current passes through them. These organic materials are sandwiched between transparent electrodes, typically made of indium tin oxide (ITO), which allow light to pass through.
One of the key advantages of TOLED is its transparency. When the display is not actively emitting light, it appears transparent, allowing users to see through it. This property makes TOLED suitable for applications where transparency is desired, such as heads-up displays, smart windows, augmented reality devices, retail design, advertising, or large TOLED video walls and 2D/3D holograms.
TOLED also offers the benefits of OLED technology, including high contrast ratios, wide viewing angles, and fast response times. The organic materials used in TOLED displays emit light directly, eliminating the need for a separate backlighting system, which contributes to their thin and lightweight design.
Besides Transparent OLED technology, we also produce Transparent LED and Transparent LCD displays.
How to combine TOLED displays together?
1. Ensure compatibility: Make sure the Transparent OLED displays you are using are compatible with each other in terms of resolution, interface, and electrical requirements.
2. Physical alignment: Align the displays physically to create a larger display area. This typically involves arranging the displays side by side or in a grid formation. Use appropriate mounting brackets or frames to secure them in place.
3. Connection: Connect the displays together using the necessary cables or connectors. The specific connection method depends on the interface supported by the TOLED displays. Common interfaces include HDMI, DisplayPort, or other proprietary interfaces.
4. Synchronization: If required, synchronize the displays to ensure coordinated content across all the panels. This may involve configuring the displays through software or hardware synchronization methods. Consult the manufacturer’s instructions or documentation for guidance on synchronization options.
5. Display control: Depending on the setup and software capabilities, you may need to adjust display settings, such as resolution, refresh rate, or color calibration, to optimize the combined TOLED display.
6. Content management: Use appropriate software or programming techniques to distribute and display content across the combined TOLED displays. This could involve treating them as a single large display or as individual screens, depending on your requirements (a minimal content-splitting sketch follows after this list).

By following these steps, you can effectively combine multiple TOLED displays to create a larger and visually cohesive display area.
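To make step 6 a little more concrete, here is a minimal sketch of one way the content could be split: cutting a single 4K frame into eight tiles for a 2 x 4 panel grid using Python and the Pillow imaging library. The grid layout, file names, and the assumption that each panel is driven as its own output are illustrative, not a description of Glimm's actual pipeline.

```python
# Minimal sketch of step 6 (content management): splitting one 4K frame
# across eight transparent OLED panels arranged two high by four wide.
# Layout, resolution, and file names are assumptions for illustration.
from PIL import Image

ROWS, COLS = 2, 4                       # 8 panels in a 2 x 4 grid
frame = Image.open("frame_4k.png")      # assumed 3840 x 2160 source frame
tile_w = frame.width // COLS            # 960 px per panel in this layout
tile_h = frame.height // ROWS           # 1080 px per panel in this layout

for row in range(ROWS):
    for col in range(COLS):
        box = (col * tile_w, row * tile_h,
               (col + 1) * tile_w, (row + 1) * tile_h)
        tile = frame.crop(box)
        # In a real wall each tile would be sent to the output feeding
        # that panel; here it is simply saved to disk.
        tile.save(f"panel_r{row}_c{col}.png")
```

A real installation would push each tile to the graphics output wired to that panel and keep the outputs synchronized as described in step 4, but the splitting logic stays the same.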

Aug 16, 2023

Google’s AI search experience adds AI-powered summaries, definitions and coding improvements

Posted by in categories: economics, internet, robotics/AI

Google today is rolling out a few new updates to its nearly three-month-old Search Generative Experience (SGE), the company’s AI-powered conversational mode in Search, with the goal of helping users better learn and make sense of the information they discover on the web. The features include tools to see definitions of unfamiliar terms, tools that help improve your understanding of coding information across languages, and a feature that lets you tap into the AI power of SGE while you’re browsing.

The company explains that these improvements aim to help people better understand complicated concepts or complex topics, boost their coding skills and more.

One of the new features will let you hover over certain words to preview their definitions and see images or diagrams related to the topic, which you can then tap on to learn more. This feature will become available across Google’s AI-generated responses to topics or questions related to certain subjects, like STEM, economics, history, and others, where you may encounter terms you don’t understand or concepts you want to dive deeper into for a better understanding.

Aug 16, 2023

Drawing Stuff: AI Can Really Cook! How Far Can It Go?

Posted by in categories: robotics/AI, transportation

We’ve seen a lot about large language models in general, and a lot of that has been elucidated at this conference, but many of the speakers have great personal takes on how this type of process works, and what it can do!

For example, here we have Yoon Kim talking about statistical objects, and the use of neural networks (transformer-based neural networks in particular) to use next-word prediction in versatile ways. He uses the example of the location of MIT:

“You might have a sentence like: ‘the Massachusetts Institute of Technology is a private land grant research university’ … and then you train this language model (around it),” he says. “Again, (it takes) a large neural network to predict the next word, which, in this case, is ‘Cambridge.’ And in some sense, to be able to accurately predict the next word, it does require this language model to store knowledge of the world, for example, that must store factoid knowledge, like the fact that MIT is in Cambridge. And it must store … linguistic knowledge. For example, to be able to pick the word ‘Cambridge,’ it must know what the subject, the verb and the object of the preceding or the current sentence is. But these are, in some sense, fancy autocomplete systems.”
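To see what that “fancy autocomplete” looks like mechanically, here is a minimal sketch of next-word prediction with an off-the-shelf transformer language model through the Hugging Face transformers library. GPT-2 is only a stand-in model, and the exact word it completes with is not guaranteed.

```python
# Minimal sketch of next-word prediction with a transformer language model,
# echoing the MIT example above. GPT-2 is a stand-in; any causal LM works.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = ("The Massachusetts Institute of Technology is a private "
          "land-grant research university in")
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits      # a score for every vocabulary token

next_id = int(logits[0, -1].argmax())    # highest-scoring next token
print(tokenizer.decode(next_id))         # a well-trained model should print " Cambridge"
```

The model scores every token in its vocabulary at each step; repeatedly picking a high-scoring token and appending it is the “autocomplete” loop Kim describes, and getting “Cambridge” right is only possible if knowledge of the world is baked into those scores.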

Aug 16, 2023

Amazon is making its own chips to offer generative AI on AWS

Posted by in category: robotics/AI

The company aims for low-cost, high-throughput chips that allow users to work with its web services on the cloud.

Even as the world looks to Microsoft and Google to reveal the next big thing in the generative artificial intelligence (AI) field, Jeff Bezos-founded Amazon has been quietly working to let its customers work directly with the technology. In an unmarked building in Austin, Texas, Amazon engineers are busy developing two types of microchips that will be used to train and run AI models, CNBC reported.

The world took notice of generative AI when OpenAI launched ChatGPT last year. Microsoft, which has partnered with OpenAI previously, was quick to use its association with the company and incorporate the features of the AI model into its existing products.